A problem I notice at a lot of customers is that they have an enormous collection of CloudWatch logs. Some of them are older than my own son. Just slurping up precious dollars, as nobody ever bothers to clean them up. That is, until someone goes through all the lines of the cost report and asks: “why the fuck are we spending $500 a month on CloudWatch data storage?”.
Most logs are only needed for a week, some for two weeks, and for a very critical application maybe a month. But longer than that? Sure, you have some audit logs or very critical authentication logs, but I think I can safely say that 99% of logs can just be deleted after a short timeframe. Besides, CloudWatch Logs isn’t the place to store audit logs anyway; you should export them to S3 and deep-archive them!
But everybody forgets to do it. CloudWatch Logs retention is almost always left at the default, which is never expiring. Incurring charges and making Amazon very happy.
That is why I created a Lambda function. Triggered monthly through a CloudWatch Events rule, it checks if there are any log groups with a never-expiring retention, and if so, changes it to one month.
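In essence the function does something like this. This is just a minimal sketch, assuming boto3 and the `logs:DescribeLogGroups` / `logs:PutRetentionPolicy` permissions; the actual run.py in the repo is the authoritative version:

```python
import boto3

RETENTION_DAYS = 30  # one month

def handler(event, context):
    logs = boto3.client("logs")
    paginator = logs.get_paginator("describe_log_groups")
    for page in paginator.paginate():
        for group in page["logGroups"]:
            # Log groups left at the default "never expire" retention
            # simply have no retentionInDays key in the response.
            if "retentionInDays" not in group:
                name = group["logGroupName"]
                print(f"Setting {RETENTION_DAYS}-day retention on {name}")
                logs.put_retention_policy(
                    logGroupName=name,
                    retentionInDays=RETENTION_DAYS,
                )
```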
So stop wasting money on logs that should have expired years ago and just deploy my CDK construct in your environment! Or, if you aren’t using CDK, go to the functions folder, copy the run.py script to your environment, and make sure to run it once a month in every region where you are using CloudWatch Logs!
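If you want a rough idea of what the wiring looks like (or want to roll your own), here is a minimal sketch of scheduling the script monthly with CDK. Assumptions on my part: CDK v2 in Python, and the script packaged from the functions folder with a `run.handler` entry point; the actual construct in the repo does this properly for you:

```python
from aws_cdk import (
    App, Stack, Duration,
    aws_events as events,
    aws_events_targets as targets,
    aws_lambda as _lambda,
)
from constructs import Construct

class LogRetentionStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Package the run.py script as a Lambda function
        # (handler name "run.handler" is an assumption).
        fn = _lambda.Function(
            self, "SetRetention",
            runtime=_lambda.Runtime.PYTHON_3_12,
            handler="run.handler",
            code=_lambda.Code.from_asset("functions"),
            timeout=Duration.minutes(5),
        )

        # Trigger once a month: 03:00 UTC on the 1st.
        events.Rule(
            self, "MonthlySchedule",
            schedule=events.Schedule.cron(minute="0", hour="3", day="1"),
            targets=[targets.LambdaFunction(fn)],
        )

app = App()
LogRetentionStack(app, "LogRetentionStack")
app.synth()
```

Remember that CloudWatch Logs is regional, so you would deploy a stack like this per region you use.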
That is it! More (and up-to-date) information can be found on GitHub or on Amazon’s newly created Construct Hub.