Zero Data Loss Recovery Appliance

Oracle’s standby databases have been around for a long time now. The primary ships redo to the standby to keep the two in sync. It seems a natural fit, then, that Oracle has extended this concept to a backup and recovery appliance. The idea is that you take one backup of your database at the start. That’s it…one backup. No more full or incremental backups. The Oracle database sends redo to the appliance, which applies the redo to the backup stored on the device. The backup on the appliance is always kept up to date.

When I attended Open World last year, I heard about this device. But even then, Oracle was quick to say that the appliance was not yet generally available. This year, the appliance is available and was discussed at the conference this week.

More information can be found here: http://www.oracle.com/us/corporate/features/zero-data-loss-recovery-appliance/index.html


Error 1033 received logging on to the standby

Upgraded production to 11.2.0.4 a few nights ago. The primary is a 3-node RAC and the standby is a 2-node RAC. Noticed that one of the threads was not transmitting redo to the standby. Saw this repeatedly in the alert log:


Error 1033 received logging on to the standby

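On the primary, a quick query shows which redo transport destination is failing and why. A minimal sketch (on RAC, gv$archive_dest covers all instances):

select dest_id, status, error
from   v$archive_dest
where  status <> 'INACTIVE';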

Turns out this was a problem of my own making. In $ORACLE_HOME/dbs, I had the following:


-rw-rw---- 1 oracle oinstall 1544 Sep 18 01:44 hc_ncpp5.dat
-rw-r--r-- 1 oracle oinstall 55 Sep 18 01:38 initncpp5.ora
lrwxrwxrwx 1 oracle oinstall 40 Sep 18 01:38 orapwnp5 -> /u01/app/oracle/admin/ncpp/dbs/orapwncpp
lrwxrwxrwx 1 oracle oinstall 45 Sep 18 01:38 spfilencpp5.ora -> /u01/app/oracle/admin/ncpp/dbs/spfilencpp.ora

Since the primary is RAC, I put the password file and spfile on shared storage and then created softlinks to them in $ORACLE_HOME/dbs. The password file softlink contained a typo. That’s what I get for staying up until 3am, while sick, trying to upgrade a production database. The fix was as simple as:

mv orapwnp5 orapwncpp5

That fixed everything for me!


Good Time for DBAs?

Is this a good time to be a DBA? My biased opinion is that any time is a good time to be a DBA. The US Bureau of Labor Statistics released an outlook indicating that DBA positions are expected to increase 15% between 2012 and 2022.

Now comes this article that says about 50% of DBAs are expected to leave the market in the next 10 years.
Demand is rising!

Oracle 12.2.0.1 coming in 2016

Oracle will be releasing Oracle 12cR2 in the first half of 2016. See Metalink Note 742060.1 for the current release schedule.

The Oracle 12.1.0.3 patchset is not on the list, but there is a chance it will be out before 12cR2. We’ll have to wait and see, I guess.
Now for the burning question…do I upgrade to 12.1.0.2 or maybe 12.1.0.3 or hold off until 12.2.0.1?

Importance of Testing

I am working on upgrading all of our production databases to Oracle 11.2.0.4. My company’s most important database serves a custom, in-house developed application, so we have the luxury (and sometimes the curse) of having complete control over the application source code. If I discover a version-specific issue with a third-party application, I file a trouble ticket with the vendor to get the issue fixed. But for our own application, I often have to diagnose the problem and determine how to fix it myself.

Since I have been at this company, I have upgraded from 11.1.0.7 to 11.2.0.2 and then to 11.2.0.3 and now to 11.2.0.4. The two previous upgrades went just fine. No problems. So I have been very surprised that the upgrade from 11.2.0.3 to 11.2.0.4 has been problematic for our application.

EVEN WHEN YOU THINK THE UPGRADE IS “minor”, PERFORM ADEQUATE TESTING!!!!

I never expected to find issues with this simple patchset upgrade. I’m not skipping versions, and 11.2.0.4 shouldn’t introduce too many problems. I blogged about my first issue here:

http://www.peasland.net/2014/08/19/sticky-upgrade-problem/

The next problem is a query similar to the following in our application code:

SELECT DISTINCT columnA
FROM our_table
ORDER BY columnB;

The above query now returns an ORA-01791 error in Oracle 11.2.0.4, but it ran just fine in previous versions. When DISTINCT is used and the ORDER BY clause contains a column that is not in the SELECT list, the ORA-01791 error is raised. Oracle says that the fact that this used to work is a bug. The bug is fixed in 11.2.0.4, so the above now raises an exception.

When I was first made aware of this issue, my initial thought was: why are we ordering by a column that is not in the SELECT clause? The end user won’t know the data is ordered because they can’t see the ordering column. Then I found out that this routine is only used for internal processing, and machines work just fine without ordered data. So the simple fix on our end was to remove the ORDER BY clause. As soon as the code change gets into production, I can proceed with my database upgrade.
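Either of the following forms avoids the error (same hypothetical columns as above):

-- Option 1 (the fix we used): drop the ORDER BY, since the
-- internal processing does not need the rows in any order.
SELECT DISTINCT columnA
FROM our_table;

-- Option 2: add the ORDER BY column to the SELECT list so the
-- sort key survives the DISTINCT.
SELECT DISTINCT columnA, columnB
FROM our_table
ORDER BY columnB;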

It is so important that I’ll say it again:

EVEN WHEN YOU THINK THE UPGRADE IS “minor”, PERFORM ADEQUATE TESTING!!!!

At this company, we follow a strict process for changes. The change is made in Development first. Then, after a period of time, the change is made in the Test environment. And then, after a period of time, if there are no issues, the change can proceed to Production. We also have a custom test application that exercises key components of our application, so even if our testers are not hitting that portion of the app, our automated test suite will.

Without adequate testing, the two issues we encountered would most likely not have been noticed until the change was in production. Then the DBA would have been blamed, even though both of these issues were application code problems. Test, test, and test again.

GIMR now mandatory for GI 12.1.0.2

I found this nice blog entry today:

https://blogs.oracle.com/UPGRADE/entry/grid_infrastructure_management_repository_gimr


Sticky Upgrade Problem

When performing database upgrades, adequate testing is important to understand the impacts, both positive and negative, that the upgrade has on the application. I have been preparing to upgrade databases from 11.2.0.3 to 11.2.0.4. One weekend, another DBA and I spent some time upgrading about half of our development databases to the new target version. Why only half? For exactly this reason: with both versions side by side, I could tell whether a problem was version-related.

First thing Monday morning, I got a call from a developer who had a query that was now running slowly. I immediately suspected the query performance was version-related and formulated a reproducible test case, which I ran against all of the dev databases. The 11.2.0.3 databases executed the query in 30 seconds, consistently across the board. The 11.2.0.4 databases ran the same query in 3.5 minutes, just as consistently. Because we had only upgraded half of the databases, I was able to verify that the issue was version-related…and it was…at least on the surface.

After a database upgrade leaves SQL statements performing worse, a common “fix” is to update the table and index stats so that the new optimizer version has good information to work with. Updating stats did not fix the problem. I could see that in the 11.2.0.3 database, the CBO was choosing to use an index, and since everything it needed was in the index, it did not access the table. Furthermore, the join was performed with a Nested Loops algorithm. In the 11.2.0.4 database, the same index was used, but the table was also accessed, and a Hash Join algorithm was used. Why was the CBO making two different decisions?
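For the record, the stats refresh itself was nothing exotic; a minimal sketch (owner and table names are hypothetical):

begin
   dbms_stats.gather_table_stats(
      ownname => 'APP_OWNER',   -- hypothetical owner
      tabname => 'OUR_TABLE',   -- hypothetical table
      cascade => true);         -- also refresh the table's index stats
end;
/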

Any time we need to peek into the CBO’s decision-making process, we need a 10053 trace. I captured trace files from each version. The first part of the trace file shows the optimizer-related initialization parameters. All of the parameters were the same except OPTIMIZER_FEATURES_ENABLE and DB_FILE_MULTIBLOCK_READ_COUNT. Neither of these is explicitly set, so they are at their default values. Obviously, O_F_E has a different default value for each database version. I was surprised that DB_F_M_R_C changed its default value from 11.2.0.3 to 11.2.0.4. I tried explicitly setting the parameter values in the 11.2.0.4 database to match the 11.2.0.3 database, but it did not improve the runtime. These parameters, while different, had no bearing on the query performance.
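As an aside, capturing the 10053 trace itself is straightforward (a minimal sketch; the event only writes a trace when the statement is hard parsed, so change a comment or literal to force a fresh parse if needed):

alter session set tracefile_identifier = 'cbo_10053';
alter session set events '10053 trace name context forever, level 1';

-- run the problem query here

alter session set events '10053 trace name context off';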

The next part of the 10053 trace shows statistics on the tables involved in the query. These were identical in both versions, so stats weren’t the issue.

The next part of the 10053 trace shows the table access paths and which one is deemed to have the lowest cost. Here is where the mystery got interesting. In 11.2.0.3, the CBO calculated that the cost to access the table via the index was 1258, while the cost of using the index alone was 351. In 11.2.0.4, the CBO calculated the cost to access the table via the index as 127, with the cost of the index alone again 351. All of the other access path calculations were identical in both versions. It was that one cost calculation, low in 11.2.0.4 and high in 11.2.0.3, that led each version down a different access path. In the part of the 10053 trace where the CBO considers which join method to use, the answers then differed because the chosen access paths differed.
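To summarize the numbers from the two trace files:

Version     Index-only cost   Index + table cost   Chosen path     Join method
11.2.0.3    351               1258                 index only      Nested Loops
11.2.0.4    351               127                  index + table   Hash Join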

I still have no answers as to why 11.2.0.4 made that one calculation differently than 11.2.0.3 did, especially when all the other access path calculations were identical in both versions. That one puzzles me and I might need the help of higher powers to get to the answer.

That being said, I was able to determine the root cause of the problem, and it wasn’t really version-related after all. The problem was that our WHERE clause contained the following:

WHERE column = :b1

It seems innocent enough. The problem is that the column is defined as VARCHAR2(4) while the bind variable is declared as NUMBER, so Oracle performs an implicit conversion. Because the CBO doesn’t have an accurate picture of the predicate, it arrives at a suboptimal execution plan. Changing the datatype of the bind variable fixed the issue. The query now ran in 10 seconds! Wait…it went from 3.5 minutes down to 10 seconds, which is great, but wasn’t it running in 30 seconds in 11.2.0.3? It was, because the bind variable had the wrong datatype there as well. With the proper datatype, the query in 11.2.0.3 ran in…you guessed it…10 seconds. This is why I say the problem turned out not to be version-related after all. We had the same problem in 11.2.0.3: a query that could be improved with the proper datatypes. The new version just magnified an existing problem we didn’t know we had.
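For illustration, here is the shape of the problem (table, column, and bind names are hypothetical). When a VARCHAR2 column is compared to a NUMBER bind, Oracle effectively wraps the column in TO_NUMBER(), which can change both index usage and the optimizer’s cost estimates:

-- Bind declared as NUMBER: after implicit conversion the predicate
-- is effectively TO_NUMBER(code) = :b1.
variable b1 number
exec :b1 := 1234
select * from our_table where code = :b1;

-- Bind declared to match the VARCHAR2(4) column: no conversion.
variable b2 varchar2(4)
exec :b2 := '1234'
select * from our_table where code = :b2;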

All of this highlights the importance of proper testing even for simple patchset upgrades.


Result Cache

I was playing around with the Result Cache the other day…I know…this isn’t a new feature and it has been available for a while. Unfortunately, it can take a while to get around to things, I guess.

In my simple test, I had a query that exhibited this behaviour:

select
   max(det.invoice_date)
from
   invoices i
join
   invoice_detail det
on i.dept_id=det.dept_id

call    count       cpu   elapsed       disk      query   current       rows
------- ------  -------  -------- ---------- ---------- ---------  ---------
Parse        1     0.00      0.00          0          0          0         0
Execute      1     0.00      0.00          0          0          0         0
Fetch        2     2.77      6.66      75521      75583          0         1
------- ------  -------  -------- ---------- ---------- ---------- ---------
total        4     2.77      6.67      75521      75583          0         1

75,000 disk reads to return 1 row. Ouch! Now run this through the Result Cache and get some really nice numbers. 🙂


select
   /*+ result_cache */
   max(det.invoice_date)
from
   invoices i
join
   invoice_detail det
   on i.dept_id=det.dept_id

call     count     cpu   elapsed       disk      query    current       rows
------- ------  ------ --------- ---------- ---------- ----------  ---------
Parse        1    0.00      0.00          0          0          0          0
Execute      1    0.00      0.00          0          0          0          0
Fetch        2    0.00      0.00          0          0          0          1
------- ------  ------ --------- ---------- ---------- ----------  ---------
total        4    0.00      0.00          0          0          0          1


Still 1 row returned but zero disk reads, zero current blocks, and basically zero elapsed time. Nice!


The Result Cache works best for queries returning a small number of rows from tables that do not change often. DML on the underlying tables invalidates the Result Cache entry, and the work will need to be performed anew before the result is stored in the cache again.
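One quick way to watch this happen is to query v$result_cache_objects, where each cached result set appears as a 'Result' row whose status flips from Published to Invalid after DML on a dependent table. A minimal sketch:

select type, status, name
from   v$result_cache_objects
where  type = 'Result';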

Sometime soon, when I get a chance, I’m going to figure out the impact of bind variables on queries that use the Result Cache.

Big Data, Cloud Computing, and In Memory databases

Found this on Twitter today.

http://diginomica.com/2014/08/12/gartnerhypecycle/#.U-t6w2N98mR
Big Data has now entered the Trough of Disillusionment. Just ahead of it are In Memory databases. Cloud Computing is starting to come out of the trough on its way to the Slope of Enlightenment.

GI 12.1.0.2 Upgrade

The 12.1.0.2 patchset has been out for a bit now, and I am just now finding time to take my first look at it. I’m interested, like many others, in the In Memory database option. But I need to upgrade my Grid Infrastructure before I can upgrade my database.

The upgrade went smoothly. The only thing I thought was odd was that a prerequisite check failed on the panic_on_oops kernel parameter. I was upgrading from 12.1.0.1 to 12.1.0.2, so this is a brand-new check. The OUI provided a fixup script, which I ran, and then I proceeded without any other upgrade issues.