I recently ran into an issue where my Standby database's audit destination (adump) became full. The disk was not full; rather, there were too many files in the directory. I ran "ls -l | wc -l" and it came back with more than 1 million files, and it took a very long time just to return that figure.
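As an aside, "ls -l" is about the slowest way to get that count, since it has to stat every file to build the long listing and then sort the output. If all you need is the number, an unsorted listing is much faster (note that -f also includes the "." and ".." entries, so the count is off by two):

cd /u01/app/oracle/admin/orcl/adump
ls -f | wc -l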
This is a known problem with Standby databases monitored by Grid Control. Since the Standby database is only MOUNTED, not OPEN, the only connection Grid Control can use to monitor the database is a SYSDBA connection. Each time GC connects to the database as SYSDBA, a *.aud file is created in the adump directory. In the time it takes me to write this blog entry, I expect I will get about 150 *.aud files generated in adump. Over time, the directory reaches some file system limit on the maximum number of files in a directory.
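If you want to gauge how quickly the files are accumulating in your own environment, you can count how many were created in, say, the last hour (a quick sketch, assuming GNU find with its -mmin predicate):

find /u01/app/oracle/admin/orcl/adump -type f -name '*.aud' -mmin -60 | wc -l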
So I tried to do:
cd /u01/app/oracle/admin/orcl/adump
rm *.aud
However, the rm command fails with an "Argument list too long" error. In Unix/Linux, the shell expands the wildcard into the matching file names before the command ever runs, so "rm *.aud" is translated to "rm orcl_ora_1001.aud orcl_ora_1002.aud orcl_ora_1003.aud orcl_ora_1004.aud ….", and with a million files that list exceeds what the kernel will accept on a single command line.
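That limit, the maximum combined size of a command's arguments and environment, can be inspected on Linux with getconf (just a diagnostic, not part of the fix):

getconf ARG_MAX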
My next course of action was to quickly generate a script that did the deletions in pieces (a loop version of the same idea follows the listing). So my script contained:
rm orcl_ora_100*.aud
rm orcl_ora_101*.aud
rm orcl_ora_102*.aud
...
rm orcl_ora_199*.aud
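Rather than writing out each line by hand, the same batching could be generated with a loop (a sketch of the equivalent, assuming bash and the same file-naming pattern; -f keeps rm quiet when a prefix has no matches):

# batch the deletes by three-digit prefix so each rm gets a manageable argument list
for i in $(seq 100 999); do
  rm -f orcl_ora_${i}*.aud
done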
I saved the lines in a shell script and executed it. This took forever to complete! And it would only delete those files with "100" through "199" in their sequential names; I would have to repeat the process for "200" through "299" and so on up to "900" through "999". Ugh. I was not sure I would ever get caught up.
Then I found this little tidbit. To quickly remove the files, use the find command with the -delete option:
find . -type f -delete
To see the progress, use the -print option:
find . -type f -print -delete
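Two refinements worth noting (my additions here, not strictly needed in my case): restricting the match to *.aud keeps find from deleting anything else that might live in the directory, and on platforms whose find lacks the -delete extension, piping null-terminated names to xargs achieves the same result:

find . -type f -name '*.aud' -delete
find . -type f -name '*.aud' -print0 | xargs -0 rm -f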
The file names scrolled by one by one as they were deleted, and I was able to delete a million files in less than 5 minutes with this method.
I did have a crontab entry to delete files from this directory every four hours, since I generate audit files so often, but due to the sheer number of files, the cron job was failing. So I changed my cron job to:
00 01,05,09,13,17,21 * * * find /u01/app/oracle/admin/orcl/adump -type f -delete
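A slightly more conservative variation would restrict the job to audit files older than a day, something like this (-name and -mtime are standard find predicates):

00 01,05,09,13,17,21 * * * find /u01/app/oracle/admin/orcl/adump -type f -name '*.aud' -mtime +1 -delete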
And everything is good now. In the time it took me to write this entry, adump has gained 165 new files.