Your alerting system (Nagios, Zabbix, PagerDuty, or whatever you use) informs you that your file system is 80% full. On a MySQL server I usually check space in this order:
- Binary logs using up a lot of space? Binary logs are usually set to expire after a couple of days. Extremely high activity (a lot of importing, updating, deleting, and re-importing) can produce large binary log files (see the command sketch after this list).
- Slow query log or general log turned on to collect EVERYTHING? The slow query log and general log can be configured to capture all SELECTs, DDL, and DML.
- The slow query log can be set to log every query that takes longer than 0 seconds, including query execution times. This is useful for profiling, but on a busy system it can collect GBs of data within minutes.
- General log turned on? When the general log is enabled it logs everything except query execution times. On a busy system it can write GBs of data in just a few minutes.
- Custom logging or auditing turned on? Your company may have other ways to collect data about what the databases are doing. If this data is stored in a separate data store on the same server, for auditing, debugging, etc., check there as well; it can add several additional GB per day to the server.
- Is customer growth the cause of the disk space usage? This falls under regular capacity planning. If you are not already checking and storing the size of each database on each server daily, you should be; it will help you identify trends and plan capacity properly.
- Still need more space? Use the du -h command to find other large files that could be eating up space.
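As a quick first pass, something like the following covers the binary log, slow/general log, and per-database size checks from the list above (a minimal sketch run from the shell; adjust credentials for your setup, and note that on MySQL 8.0+ the binlog retention variable is binlog_expire_logs_seconds rather than expire_logs_days):
# How much space are the binary logs using, and how long are they kept?
mysql -e "SHOW BINARY LOGS;"
mysql -e "SHOW VARIABLES LIKE 'expire_logs_days';"
# Are the slow query log or general log enabled, and where do they write?
mysql -e "SHOW VARIABLES LIKE 'slow_query_log%';"
mysql -e "SHOW VARIABLES LIKE 'long_query_time';"
mysql -e "SHOW VARIABLES LIKE 'general_log%';"
# Size of each database in GB, worth recording daily for capacity planning
mysql -e "SELECT table_schema, ROUND(SUM(data_length + index_length)/1024/1024/1024, 2) AS size_gb FROM information_schema.tables GROUP BY table_schema ORDER BY size_gb DESC;"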
Using du -h and df -h at the command line will help you figure out where all the big files are.
1. Run df -h to see which filesystems are nearly full. Now you know which one is at 80%, but you have no clue which files are contributing to that. Is there some log file that has unexpectedly grown huge and is not being cleared out, or not being cleared out frequently enough? How are you going to find that file?
Here is a good example:
http://unix.stackexchange.com/questions/125429/tracking-down-where-disk-space-has-gone-on-linux
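Concretely, the drill-down usually looks something like this (a sketch only; the mount points and paths here are examples):
df -h
# Suppose the nearly full filesystem is mounted on /var; check its top-level directories
cd /var && du -xsh * 2>/dev/null
# Descend into the largest directory and repeat until you find the culprit
cd /var/log && du -xsh * 2>/dev/null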
Sometimes you fill up an entire filesystem; when that happens you run into all sorts of problems and cannot create new files or edit existing ones. Even after identifying runaway log files and deleting them, it is possible your space will not clear. What I've seen sometimes is that whatever script, application, or process caused the file system to fill up is still running. There might even be multiple copies of it running, still holding the files you deleted open and not giving the space back.
Check your running processes for the problem. Suppose it was a script called "log_generator.bash".
You could check for all the processes running the script like this:
ps aux | grep log_generator.bash | grep -v grep
If you see several of them, you can kill them all with a command like this:
ps aux | grep log_generator.bash | grep -v grep | awk '{print $2}' | xargs kill -9
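If the space still does not come back after the processes are gone, a still-running process may be holding a deleted file open; the kernel only frees those blocks once every handle on the file is closed. On most Linux systems lsof will show you which process is responsible (a minimal sketch):
# list open files whose link count is zero, i.e. deleted but still held open
lsof +L1
# or, roughly the same thing:
lsof | grep deleted
Kill or restart the process that holds the file and the space should be returned to the filesystem.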
Going back to tracking down large files: to quickly see which directories are using gigabytes of space, you can filter du's output:
du -h <dir> | grep '[0-9\.]\+G'
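The grep only catches entries that are already in the gigabyte range; on systems with GNU sort you can sort the output instead to see the largest directories regardless of unit:
du -h <dir> | sort -rh | head -20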