About a year and a half ago, I moved to a new company and started working as their DBA. The company had not previously applied any patches to its Oracle databases. Since I have been here, I have seen IT system security become more of a focus point and come under a higher level of scrutiny than previously seen. Rather than wait for a directive to start implementing regular security patches for our Oracle databases, I decided to be proactive. The day will come when we are required to patch our Oracle databases on a regular basis, and I would like to be able to say that we already have it implemented.
The Apr2012 CPU was released just last week, and it is the first CPU that we will apply to our Oracle databases. Before I applied it, a little thought went into how to implement this change in our corporate environment. I decided to share a few of those thoughts in case anyone else finds themselves in a similar situation in the future.
1. The 3 D’s: Before any patching began, a patch policy was Documented, Disseminated to IT staff and management, and Discussed. This document covered, at a high level, the need for regular security patches, when the security patches would be released, and how they would be applied to our systems so as to reduce risk to the database, the application, and the end users.
2. Patch Timeline – I have the luxury of having a clone of our production database just for the DBA’s use and no one else. My timeline starts there. Within one week of the CPU’s release, I am to apply the CPU to my DBA database and resolve any issues. Within two weeks of the CPU release, I am to apply the patch to our development databases. Within one month of the CPU release, I am to apply the patch to our Test and Stage databases. And finally, within six weeks of the CPU release, I am to have applied the patch to production. This is just my timeline and what works in our environment. Your timeline may be different. But it is important that everyone understands the timeline and that the timeline does two seemingly contradictory things – 1) it applies the patch slowly enough that any database or application issues are sorted out before proceeding to the next step, so that by the time the patch hits production there are no surprises and we are confident it will not break anything; 2) it applies the patch fast enough that security holes are plugged in a reasonable time. In my environment, six weeks to production is slow enough to catch issues but about as fast as we feel comfortable going. Your environment may have other timelines.
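To keep track of where each environment sits in the timeline, it helps to be able to confirm which patches are already in a given Oracle home. Here is a minimal sketch; the home path is illustrative, not from our environment:

    # Report the interim patches recorded in this Oracle home's inventory
    export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1   # illustrative path
    $ORACLE_HOME/OPatch/opatch lsinventory

The output lists every patch applied to that home, which makes it easy to verify that the DBA, development, test, and production homes are where the timeline says they should be.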
3. Log It – I feel strongly that patches should be documented in some sort of change log. With the log, you should be able to go back and see exactly when each patch was applied to each database. This can go a long way toward determining whether a patch was responsible for an issue. If I get a ticket that a procedure is throwing errors and the problem was first noted on May 1st, I can look at the change log. If I applied the patch on April 30th, then the patch may have introduced the problem. But if I applied the patch on May 2nd and the problem existed a day earlier, then the patch is not the cause of the problem. Some organizations already have a Change Control mechanism in place, and the Oracle patch log should fit within that structure.
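If you want a starting point, the database itself records CPU applications once the post-install catbundle.sql step has been run, and you can pair that with your own change log entry. A minimal sketch, assuming a local SYSDBA connection; the change log file path is illustrative:

    # Show what Oracle has recorded about bundle patches (CPU/PSU) in this database
    sqlplus -s / as sysdba <<'EOF'
    set linesize 150 pagesize 100
    column action_time format a30
    column comments    format a40
    select action_time, action, version, comments
      from sys.registry$history
     order by action_time;
    EOF

    # Append a human-readable entry to a simple change log (path is illustrative)
    echo "$(date '+%Y-%m-%d %H:%M') ${ORACLE_SID} CPUApr2012 applied" >> /dba/logs/patch_changelog.txt

Even if your Change Control system is the system of record, having the same facts queryable from inside the database is handy when you are chasing down a date.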
4. Test/Test/Test – As DBAs, we have a duty to ensure that changes introduced into production carry a high degree of confidence that the application will not break. It is vitally important to test your changes, and patches are no different. If you do not follow your patch timeline, you will not have adequate time to test, and if the patch introduces a problem into your environment, it would be career suicide to have skipped adequate testing before hitting production.
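Part of that testing can be automated as a quick post-patch smoke test before handing a database back to the application team. A minimal sketch, assuming a local SYSDBA connection; these are generic sanity checks, not anything Oracle prescribes for a specific CPU:

    # Recompile anything the patch invalidated, then look for leftovers
    sqlplus -s / as sysdba <<'EOF'
    @?/rdbms/admin/utlrp.sql
    select owner, object_type, count(*) invalid_count
      from dba_objects
     where status = 'INVALID'
     group by owner, object_type;
    select comp_name, version, status from dba_registry;
    EOF

A clean run here is not a substitute for having the application team exercise their own test cases, but it catches the obvious problems early.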
5. Backups – Before applying the patch, one must back up both the database and the Oracle home directory being patched. You never know when you will have to go back to a point before the patch in one or both of those areas. One should also occasionally test the restore methodology for backing out a patch, well before it is needed in production. This testing may not necessarily need to be done every quarter, but it should happen at least once a year.
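For what that looks like in practice, here is a minimal sketch. It assumes the database runs in archivelog mode, that the instance and listener are shut down before the Oracle home is copied, and that the paths and tag names are illustrative rather than from our environment:

    # 1) While the database is still up, take an RMAN backup of the database and
    #    archived logs, tagged so the pre-patch backup is easy to find later
    rman target / <<'EOF'
    backup database tag 'PRE_CPU_APR2012';
    backup archivelog all;
    EOF

    # 2) After shutting down the instance and listener for the patch window,
    #    take a cold copy of the Oracle home being patched
    tar -czf /backups/oracle_home_pre_cpu_$(date +%Y%m%d).tar.gz -C "$ORACLE_HOME" .

The cold copy of the home means that rolling back with OPatch is not your only way back if the patch application itself goes sideways.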
I think that about covers the major thoughts I had on the subject. If you have questions or comments, let me know.