Oracle Doesn’t Look Good – 23ai TBA

I have to say that Oracle Corporation is not looking like a company that knows what it's doing, at least when it comes to getting Oracle 23ai shipped out the door for on-premise customers. The saga continues, with no word from Oracle on the reason for the continued delays. If I were an Oracle Cloud customer, I could enjoy the 23ai new features today. But for us on-prem customers, we continue to wait.

Silence from Oracle as to the reason for the delay is only going to create FUD: Fear, Uncertainty, Doubt. Oracle 23ai was released to cloud customers in September 2023. Here we are over 14 months later, and Oracle has only shipped 23ai to on-premise engineered systems customers, i.e., Exadata and Oracle Database Appliance. Here is where the uncertainty comes in. If Oracle 23ai is in good enough shape for cloud customers, and 23ai is in good enough shape for engineered systems, then why isn't 23ai in good enough shape for my lowly on-premise deployments? Now some readers might be saying that if I want to use 23ai so badly, I should just jump into the cloud. Well, moving to the cloud may not be in the best interests of my company's important enterprise database systems. If Oracle can't give me 23ai on-prem after this length of time, FUD tells me something might be wrong and that moving to the cloud could be moving my important databases into a bad situation. Failure to provide 23ai at this juncture gives me doubts that I would have smooth operations in the cloud.

Oracle has been busy as of late updating MOS Note 742060.1, Release Schedule of Current Database Releases. I have this note bookmarked, and I have my MOS account send me an email whenever a bookmarked note gets updated. I've been watching this document get updated quite frequently here in November 2024. On Nov 1, they updated the roadmap for 19c, 21c, and 23ai. Then on Nov 19, they updated the 19c LSP dates. And just yesterday, they updated Premier Support (PS) for 19c yet again. The good news is that Oracle 19c will now be provided with Premier Support until December 2029, over five years from now! The bad news is that Oracle 19c started Premier Support in 2019, so the stable 19c version will be on PS for over a decade. At this rate, it will be even longer than a decade, as there is no certainty that Oracle won't delay the release of 23ai even further. All Oracle is good at lately in this regard is pushing back the dates. They certainly are not good at delivering what is supposed to be a finished product.

Yesterday, Oracle updated Note 742060.1 to indicate the newest timeline for shipping 23ai to on-prem customers. That doc now says that the release date for 23ai on Linux x86-64, and other on-prem platforms, is now TBA. To Be Announced. Does anyone want to tell Oracle that they already announced the release date? Three times! First, 23ai on-prem was announced for 1H2024, the first half of 2024. Then they announced 2H2024. Then they announced 2025. Not the first half of the year or the second half of the year; Oracle gave themselves a window of an entire year. FUD is creeping up again, because without Oracle providing more details, I'm going to fear the worst and think that Oracle has no plans to ship 23ai to on-prem customers in 2025 at all. It's looking like Oracle won't even be able to ship in a window that was a year long!

I was planning on a 23ai upgrade for all my database systems in 2024 or even the early part of 2025. As of now, I'm delaying that upgrade project for the foreseeable future. I do have a different project to move my Oracle Enterprise Manager systems to new servers. I was going to install EM24c on the new machines and move everything over. Note 793512.1, Release Schedule of Enterprise Manager Releases and Patchsets, tells me that EM24c will be released 2H2024, but at this point I'm not buying it. It's the end of November and not a word on EM24c hitting the street. Oracle hasn't even bothered to update the 'c' to 'ai' in its EM nomenclature. So I'm thinking EM24ai won't see the light of day this year at all.

Roll Forward Standby with 1 Command

I have a testbed that I use for research purposes. This testbed consists of a 2-node RAC primary database (RESP) and a 2-node RAC physical standby database (RESS), both Oracle 19c on Oracle Linux. I also use the DG Broker in this configuration.

I am planning on using this testbed to work on a new Oracle feature. Normally, when I let this testbed sit unattended for a long period of time, the physical standby stops receiving its redo stream from the primary. Today I checked on it and, sure enough, the standby has not been keeping up.

DGMGRL> show database ress

Database – ress

Role: PHYSICAL STANDBY
Intended State: APPLY-ON
Transport Lag: 233 days 21 hours 42 minutes 19 seconds (computed 1 second ago)
Apply Lag: 234 days 8 hours 23 minutes 43 seconds (computed 2 seconds ago)
Average Apply Rate: 3.74 MByte/s
Real Time Query: OFF
Instance(s):
ress1 (apply instance)
ress2

Database Warning(s):
ORA-16854: apply lag could not be determined
ORA-16855: transport lag has exceeded specified threshold

Database Status:
WARNING

Transport lag of 233 days. Yikes! Obviously I have neglected this database for too long.

In the past, I would follow MOS Note 836986.1 and use RMAN incremental backups to roll forward the physical standby database. The basic procedure is to determine the current SCN of the physical standby, create an RMAN backup set containing all blocks with a higher SCN, and then apply that backup set to the standby to get it back in business.
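
In rough terms, that procedure looks something like this; this is only a sketch (the SCN is a placeholder), and Note 836986.1 has the full steps, including refreshing the standby controlfile:

-- on the standby: find the SCN to roll forward from
SQL> select min(checkpoint_change#) from v$datafile_header;

-- on the primary: back up every block changed since that SCN
RMAN> backup incremental from scn 1234567 database format '/tmp/roll_fwd_%U';

-- on the standby: catalog the backup pieces and recover
RMAN> catalog start with '/tmp/roll_fwd';
RMAN> recover database noredo;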

I've always thought that it would be nice if RMAN could look up this SCN and then figure this all out on its own. Apparently Oracle agrees with me. When I was looking up Note 836986.1 this morning, I came across Note 2431311.1, which shows how to roll forward a physical standby database using just a single RMAN command. This feature is available in Oracle 18c and higher.

Before I can get started, I need to shut down both instances of the standby and then start just one instance in mount mode with SQL*Plus. I then stop managed recovery with the DG Broker.
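
Stopping both RAC instances is one srvctl command; a sketch, assuming the standby is registered with Clusterware under the name ress:

$ srvctl stop database -d ress -o immediate

Then, from one node, mount a single instance and turn off redo apply: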

SQL> startup mount

DGMGRL> edit database ress set state=apply-off;

Now that the standby is prepared, I'm ready to use the one RMAN command: recover standby database from service resp, where resp is the TNS alias pointing to the primary. As you can see from the RMAN output below (snipped for brevity), RMAN figures it all out for me.

RMAN> connect target /

RMAN> recover standby database from service resp;

Starting recover at 22-APR-23
using target database control file instead of recovery catalog
Executing: alter database flashback off
Oracle instance started

Total System Global Area 3053449792 bytes

Fixed Size 8929856 bytes
Variable Size 637534208 bytes
Database Buffers 2399141888 bytes
Redo Buffers 7843840 bytes

contents of Memory Script:
{
restore standby controlfile from service 'resp';
alter database mount standby database;
}
executing Memory Script

Starting restore at 22-APR-23
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=265 instance=ress1 device type=DISK

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: using network backup set from service resp
channel ORA_DISK_1: restoring control file
channel ORA_DISK_1: restore complete, elapsed time: 00:00:10
output file name=/u01/app/oracle/oradata/RESP/control01.ctl
output file name=/u01/app/oracle/oradata/RESP/control02.ctl
Finished restore at 22-APR-23

released channel: ORA_DISK_1
Statement processed

<snip>

starting media recovery

media recovery complete, elapsed time: 00:00:00
Finished recover at 22-APR-23
flashback needs to be reenabled on standby open
Finished recover at 22-APR-23
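
With the recovery finished, wrapping up is just a matter of restarting everything. A rough sketch, assuming the same ress configuration as above (the RMAN output also reminds me to re-enable flashback if it is in use):

SQL> shutdown immediate

$ srvctl start database -d ress

DGMGRL> edit database ress set state=apply-on;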

Once the instances are back up and redo apply is running, I give it a few minutes and check the status of my standby:

DGMGRL> show database ress

Database – ress

Role: PHYSICAL STANDBY
Intended State: APPLY-ON
Transport Lag: 0 seconds (computed 0 seconds ago)
Apply Lag: 0 seconds (computed 0 seconds ago)
Average Apply Rate: 10.20 MByte/s
Real Time Query: OFF
Instance(s):
ress1 (apply instance)
ress2

Database Status:
SUCCESS

That looks much better and so easy to do!

Token-based server access validation – SCCM

I have a new SQL Server 2019 system that I set up. After migrating applications from the old server to the new server, I started getting reports that AD users could not connect to the SQL instance. They received a generic "Login failed" message. Looking at the Event Viewer log, I found a more detailed message indicating the login lacked connect endpoint permission.

There were many hits for this error on the Internet, but none of them really helped me. The missing connect endpoint permission is the root cause of the problem. How do I get past this error?

After reading the information I found and educating myself on how SQL Server endpoints work, I felt I was at a dead end. SQL Server ships with 4 endpoints out of the box, and I had not denied any endpoint to anyone, which is the most common reason one might encounter this error. Just to make sure, I granted CONNECT on the default TCP endpoint to PUBLIC:

grant CONNECT ON ENDPOINT::[TSQL Default TCP] to public;

That did not fix my problem. I even tried granting CONNECT on that endpoint to a specific user and still received the same error. Nothing is denying access to this endpoint, and all users should be able to use it, so why am I getting the error?
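
For what it's worth, here is roughly how I confirmed that nothing was denied; a sketch against the standard catalog views:

select e.name       as endpoint_name,
       p.state_desc as grant_or_deny,
       p.permission_name,
       pr.name      as grantee
from sys.server_permissions p
join sys.endpoints e on p.major_id = e.endpoint_id
join sys.server_principals pr on p.grantee_principal_id = pr.principal_id
where p.class_desc = 'ENDPOINT';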

When poking around in SSMS, I could see the default TCP endpoint. I also found another endpoint named ConfigMgrEndpoint.
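
A quick way to list the endpoints and their TCP ports outside of SSMS (again, just a sketch):

select e.name, e.state_desc, t.port
from sys.endpoints e
left join sys.tcp_endpoints t on t.endpoint_id = e.endpoint_id;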

Where did that endpoint come from? A quick Google search revealed that ConfigMgrEndpoint is created to support Microsoft's SCCM product. Sure enough, this SQL Server will hold our SCCM repository. Now, I didn't create this endpoint, but I bet that when we fired up the SCCM wizard after we moved the SCCM repository database, the wizard created the endpoint for me. To see how this endpoint was created, I right-clicked on the endpoint and selected Script Endpoint As -> CREATE To -> New Window. I then saw the code and immediately spotted my problem:

CREATE ENDPOINT [ConfigMgrEndpoint]
STATE=STARTED
AS TCP (LISTENER_PORT = 1433, LISTENER_IP = ALL)
FOR SERVICE_BROKER (MESSAGE_FORWARDING = ENABLED
, MESSAGE_FORWARD_SIZE = 5
, AUTHENTICATION = CERTIFICATE [ConfigMgrEndpointCert]
, ENCRYPTION = REQUIRED ALGORITHM AES)
GO

This endpoint was created on port 1433, which is also the port for the default TCP endpoint, so there were two endpoints on the same port. The root cause of my problem was that the wizard that created this endpoint did not grant CONNECT on it to anyone. I was able to solve the problem by simply issuing this command:

grant CONNECT ON ENDPOINT::[ConfigMgrEndpoint] to public;

My AD accounts could now connect to the instance!

19c DB RU results in ORA-65033 with PDBs

I am patching an Oracle Multitenant 19.13 database with the April 2022 RU, which will bring my database version up to 19.15 (yes, I skipped 19.14). When running datapatch against the PDBs, I received the following error stack for every PDB:

Patch 33806152 apply (pdb TST): WITH ERRORS
logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/33806152/24713297/33806152_apply_ORCL_TST_2022May14_00_47_11.log (errors)
-> Error at line 468: script rdbms/admin/backport_files/bug_32583355_apply.sql
- ORA-65033: a common privilege may not be granted or revoked on a local object
- ORA-06512: at line 185
- ORA-06512: at line 61
- ORA-06512: at line 54
- ORA-06512: at line 130

Well, the sneaky datapatch is trying to revoke permissions on what is perceived to be a local object, hence the ORA-65033 error.

These errors are a known issue with Bug 33806152 and can be ignored. If you want to suppress these error messages, you can instruct datapatch to ignore them by invoking it as follows:

datapatch -verbose -ignoreable_errors="ORA-65033,ORA-06512" -pdbs PDB_LIST
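
Either way, you can confirm how datapatch recorded the patch afterwards by querying the patch registry in each PDB (or CDB_REGISTRY_SQLPATCH from the root):

select patch_id, action, status, description
from dba_registry_sqlpatch
order by action_time;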

ORA-6502 with Grant Logging Trigger

I have a new project I'm working on where I want an Oracle job to revoke privileges granted to IT staff once those grants are more than 30 days old. Our IT staff needs occasional access to a few production tables to troubleshoot problems. We grant SELECT privs on the tables that person needs, but no one ever tells me when they are done with their task, and those privileges sit out there forever. I wanted a system that automatically revokes privileges older than 30 days so that I wouldn't have to remember to do it. Before I could revoke privileges, I needed a way of tracking them. So I created a trigger that fires whenever a GRANT is issued and logs the details to a table. Later, an Oracle job will scan that table and revoke privileges it finds that are too old. My trigger code is as follows:

create or replace trigger sys.grant_logging_trig after grant on database
  declare
    priv  dbms_standard.ora_name_list_t;
    who   dbms_standard.ora_name_list_t;
    npriv pls_integer;
    nwho  pls_integer;
  begin
    npriv := ora_privilege_list(priv);
    if (ora_sysevent = 'GRANT') then
      nwho := ora_grantee(who);
    else
      nwho := ora_revokee(who);
    end if;
     for i in 1..npriv
     loop
       for j in 1..nwho
       loop  
        insert into system.grant_logging values
          ( systimestamp,
            ora_login_user,
            ora_sysevent,
            who(j),
            priv(i),
            ora_dict_obj_owner,
            ora_dict_obj_name
          );
      end loop;
    end loop; 
end;
 / 
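
For completeness, the system.grant_logging table the trigger writes to is nothing fancy. Here is a minimal sketch, with column names and types inferred from the INSERT statement above; your own names and lengths may differ:

create table system.grant_logging
( grant_time  timestamp,       -- systimestamp when the statement fired
  grantor     varchar2(128),   -- ora_login_user
  operation   varchar2(30),    -- ora_sysevent: GRANT or REVOKE
  grantee     varchar2(128),   -- who(j)
  privilege   varchar2(128),   -- priv(i)
  obj_owner   varchar2(128),   -- ora_dict_obj_owner
  obj_name    varchar2(128)    -- ora_dict_obj_name
);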

The trigger code is not original. I found a good example on the Internet and modified a few things. After testing the code for 3 weeks, I rolled the trigger into production. It only took a few days for me to receive an error.

SQL> CREATE USER bob IDENTIFIED BY password;

ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-04088: error during execution of trigger 'SYS.GRANT_LOGGING_TRIG'
ORA-00604: error occurred at recursive SQL level 2
ORA-06502: PL/SQL: numeric or value error
ORA-06512: at line 28

Hmmm… I'm creating a user, not granting anything. But we can sure see that my trigger is having a problem executing. So why is this trigger firing if all I'm doing is creating a user? A simple SQL trace showed me what was going on with that recursive SQL. Behind the scenes, Oracle is issuing the following on my behalf:

GRANT INHERIT PRIVILEGES ON USER "BOB" to PUBLIC;

Ok…so at this point, I know that there is a GRANT being issued when I create a user but why is this failing? I tested this trigger with system privileges and it worked just fine. Granted, I didn’t test INHERIT PRIVILEGES, so this is kind of an edge case.

After a good amount of debugging effort, I determined that the ora_privilege_list function call is returning an empty collection into the variable named "priv". As such, npriv is getting set to NULL. Because npriv is NULL, the line that says "for i in 1..npriv" doesn't make much sense, hence the error.
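
To see why a NULL upper bound blows up, here is a tiny standalone reproduction; this is just an illustration, not code from the trigger:

declare
  n pls_integer := null;   -- mimics npriv when ora_privilege_list returns nothing
begin
  for i in 1 .. n loop     -- raises ORA-06502: PL/SQL: numeric or value error
    null;
  end loop;
end;
/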

In my opinion, ora_privilege_list should return one item, "INHERIT PRIVILEGES", and I believe its failure to do so is a bug. However, if ora_privilege_list is going to return an empty collection, then the function should at least return zero so that npriv gets a more sensible value. For education purposes, ora_privilege_list is a synonym for DBMS_STANDARD.PRIVILEGE_LIST.

All that being said, I cannot control the Oracle function, and I do not want to wait for Oracle to change their code in DBMS_STANDARD to what I think it should be. So I will just code my trigger to handle the issue. Adding two simple lines solved my problem (the IF test and its matching END IF in the version below).

create or replace trigger sys.grant_logging_trig after grant on database
  declare
    priv  dbms_standard.ora_name_list_t;
    who   dbms_standard.ora_name_list_t;
    npriv pls_integer;
    nwho  pls_integer;
  begin
    npriv := ora_privilege_list(priv);
    if (ora_sysevent = 'GRANT') then
      nwho := ora_grantee(who);
    else
      nwho := ora_revokee(who);
    end if;
   if to_char(npriv) is not null then 
     for i in 1..npriv
     loop
       for j in 1..nwho
       loop  
        insert into system.grant_logging values
          ( systimestamp,
            ora_login_user,
            ora_sysevent,
            who(j),
            priv(i),
            ora_dict_obj_owner,
            ora_dict_obj_name
          );
      end loop;
    end loop; 
  end if;
end;
 / 

So the fix is pretty simple. Only perform the two FOR loops if NPRIV is not null.

GTT Table Stats and SYS.WRI$_OPTSTAT_TAB_HISTORY

Back in 2015, I upgraded our Oracle 11.2.0.4 databases to 12.1.0.2 and experienced some performance issues related to our use of global temporary tables (GTTs). I blogged about those issues here.

The crux of the issue I was trying to solve was that a behavior change in 12c led to Oracle saving stats showing the GTT had zero rows when it didn't. Stats showing zero rows led to full table scans and Cartesian products in queries involving the GTT. As I stated in that blog post, we used DBMS_STATS.SET_TABLE_STATS after we populated the table with data so that each session would have proper stats and arrive at a better execution plan.

After we upgraded to Oracle 19c, we started seeing other performance issues related to the GTT. Queries that used the GTT started waiting on the "cursor: pin S wait on X" wait event. This could have been a behavior change with the new Oracle version, but it also could have been our developers using the GTT more often in our code, with nothing to do with the new version.

For the queries involved in the cursor pin wait event, I noticed a high number of versions of the SQL statement in the Shared Pool. When I queried V$SQL_SHARED_CURSOR, I discovered that PURGED_CURSOR='Y' for these SQL statements. The cursors were being invalidated.
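
If you want to check this yourself, a query along these lines shows the affected child cursors; the SQL_ID is a placeholder to substitute with one of your own statements:

select sql_id, child_number, purged_cursor
from v$sql_shared_cursor
where sql_id = '&sql_id'
  and purged_cursor = 'Y';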

When researching this issue, I discovered that every time we call DBMS_STATS.SET_TABLE_STATS to set session-based stats on the GTT, it invalidates all SQL statements that use that GTT. Hence the wait. The waits were not lengthy, so many end users didn't even notice the issue.

But then we had a new problem. When you make a call to SET_TABLE_STATS, Oracle writes an entry into SYS.WRI$_OPTSTAT_TAB_HISTORY, where you can see the values the session set for the table's stats. By default, this table stores 30 days of history. The table was growing very large and consuming most of SYSAUX. Every so often (hourly?), Oracle removes entries more than 30 days old. This regular pruning of the table was now negatively impacting end-user performance. The following is a performance graph from Lighty showing the impact of the pruning of this table:

All of that scary red color is when the old rows were being removed from SYS.WRI$_OPTSTAT_TAB_HISTORY.
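
As an aside, the retention window that drives this purge can be checked and changed with DBMS_STATS. This was not my fix, but it is worth knowing the knob exists:

-- how many days of stats history are kept
select dbms_stats.get_stats_history_retention from dual;
-- the oldest point in time the history covers
select dbms_stats.get_stats_history_availability from dual;
-- shrink the window if the history tables are eating SYSAUX
exec dbms_stats.alter_stats_history_retention(7);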

So my performance "fix" from five years ago introduced another performance issue. To improve performance, I created shared stats on the GTT and stopped using session stats. Here are the steps:

--set prefs to SHARED globally      
exec DBMS_STATS.set_global_prefs ( pname => 'GLOBAL_TEMP_TABLE_STATS', pvalue => 'SHARED');
--set the table and index stats
exec dbms_stats.set_table_stats(ownname=>'MY_SCHEMA',tabname=>'MY_GTT_TABLE',numrows=>1000,numblks=>2,avgrlen=>15);
exec dbms_stats.set_index_stats(ownname=>'MY_SCHEMA',indname=>'GTT_INDEX',indlevel=>1,numlblks=>2,numdist=>15,clstfct=>28,numrows=>1000);
-- set prefs back to SESSION
exec DBMS_STATS.set_global_prefs ( pname => 'GLOBAL_TEMP_TABLE_STATS', pvalue => 'SESSION');
-- verify stats set
select num_rows,blocks,last_analyzed,scope
from dba_tab_statistics
where table_name ='MY_GTT_TABLE';
select blevel,leaf_blocks,distinct_keys,num_rows,clustering_factor,last_analyzed,scope
from dba_ind_statistics
where index_name='GTT_INDEX' and owner='MY_SCHEMA';

Once the shared stats were in place, we removed the calls to DBMS_STATS.SET_TABLE_STATS from our code.

ORA-1114 Running Datapatch

I have an Oracle 19.3 Multitenant database to which I am attempting to apply the 19.7 Release Update. The RU was installed by opatch just fine, but datapatch fails with an ORA-01114 error. I get errors like the following in one of the logs:

SQL> alter pluggable database GOLD2020_06_18_123653 open read write instances=all;
alter pluggable database GOLD2020_06_18_123653 open read write instances=all
*
ERROR at line 1:
ORA-65107: Error encountered when processing the current task on instance:1
ORA-17500: ODM err:Invalid argument
ORA-01114: IO error writing block to file 12346 (block # 1)
ORA-17500: ODM err:Invalid argument
ORA-01114: IO error writing block to file 12345 (block # 1)
ORA-17500: ODM err:Invalid argument

Before I can explain what the problem is, let me discuss how we use Multitenant in my environment. Once a week, a cron job creates a copy of our production database (using disk-based hardware snapshots). We call this copy of production our "golden image". This golden image is plugged into our CDB as one of our PDBs. The PDB name is of the format GOLDyyyy_mm_dd_hhmiss so that we know when that PDB was created.

All of our dev and test databases are then created from this golden image PDB. When I need to refresh DEV1 or TEST, we simply shut down that PDB and drop it. We then create a snapshot clone of the latest golden image. The golden image PDB is kept in READ ONLY mode to facilitate the snapshot clone.
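
For illustration, a refresh of one of those environments looks roughly like this; the PDB names are examples, and snapshot copy assumes the underlying storage (or CLONEDB) supports it:

-- drop the stale copy and clone the latest golden image
alter pluggable database dev1 close immediate instances=all;
drop pluggable database dev1 including datafiles;

create pluggable database dev1 from GOLD2020_06_18_123653 snapshot copy;
alter pluggable database dev1 open instances=all;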

As I wrote about here, when a PDB is used as the source for a snapshot clone, Oracle changes the datafile permissions to 220. This is Oracle protecting us from ourselves. We shouldn't be allowed to modify the source of a snapshot clone, so Oracle makes the datafiles read only. Whoever at Oracle decided that changing the file permissions was a good idea didn't talk it over with the datapatch developers.

Datapatch sees the READ ONLY PDB and wants to open it READ WRITE, patch the inside of the PDB, and then set it back to READ ONLY. However, datapatch cannot open the PDB in read write mode because the file permissions won't allow it. Hence the errors I am receiving.

Oracle did this to me: they forced the file permissions one way, and now datapatch can't do what they want it to do.

I haven’t received official word with my Service Request yet, but I expect this to become a bug. Datapatch should be skipping any PDBs that are sources for snapshot clones, in my opinion.

In the meantime, you can brute force your way through this. I tried to use the -exclude_pdbs parameter for datapatch, but that didn't work. Apparently there is a known bug where that parameter doesn't work if your list of PDBs has a comma. So I had to run datapatch as follows:

./datapatch -verbose -pdbs cdb\$root
./datapatch -verbose -pdbs pdb\$seed
./datapatch -verbose -pdbs dev1,dev2

First, I ran datapatch to patch CDB$ROOT. Then I patched PDB$SEED. Then I patched my dev PDBs. I just didn’t patch my golden image PDBs.

The Value of Data Over Time

By now, everyone knows that data is very valuable. Major corporations use data to make decisions that hopefully drive the business forward and achieve a higher level of profitability. As database administrators, we safeguard the data, especially Personally Identifiable Information (PII). Systems are hacked to obtain data. There is a lot of value in data and you would have to be living under the proverbial rock to be learning this today.

What I rarely read about, and the subject of this blog post, is how the value of data changes over time. The value of the data should be used to drive your retention policies.

Most data loses its value as it gets older. I was recently working on a project concerning application performance and the metrics we capture to measure that performance. Some people on the project wanted to keep those metrics around for more than five years. I spoke up and let the group know that five-year-old performance metrics have zero value. Our application changes too much over the years. We cannot compare the performance of the application today with the performance of the application five years ago. It would not be an apples-to-apples comparison.

Not all data value declines at the same rate. In the example I gave in the previous paragraph, the metric data for application performance is worth zero in five years. However, a retailer with data indicating that a customer purchased diapers five years ago now knows that the customer is likely to purchase clothing for a five- or six-year-old child today. That child is most likely in elementary school and may need school supplies. In this case, the data on that customer's purchases from five years ago still has some value. The data is not worthless. That being said, we do not need all of the data points from five years ago. We only need a summary of that customer's activity to make meaningful conclusions about their current and future purchases.

All too often, I see people treat database systems as a dumping ground. Data is just dumped in there and very few people give much thought to what to do with that data over the long term. Very few people give much thought to how much that data is worth over the long term. There is a cost associated with storing that data. If the data has little or zero value due to its age, is it worth the cost of keeping that data in the database?

There are mitigating strategies to employ for older data. The database administrator may move older, lesser-value data to a cheaper storage tier. If the data has zero value, it should be destroyed. Many times we no longer need the full details of old data; summaries will suffice. In that case, we aggregate the data, store the results, and get rid of the details.

As the database administrator, it is your responsibility to be the steward of your data and the resources needed to host it. You should always be asking what steps are appropriate to care for that data as it ages.

Invalid ODCIPARTINFOLIST on Upgrade

I was working on upgrading Oracle 12.1.0.2 databases to Oracle 19.3 when I ran into a problem that took me quite a while to figure out. I did a search on My Oracle Support and a quick Google search and didn't see anyone else with this problem. So hopefully this will save someone time in the future. Side note: I had the same problem upgrading from 12.1.0.2 to 18.3 but did not check 12.1.0.2 to 12.2.0.1.

During the upgrade, the DBUA gives me an error that the type SYS.ODCIPARTINFOLIST is invalid. After a few more minutes, I get a lot of error messages and XDB will not upgrade correctly. As I learned, XDB relies heavily on indextype CTXSYS.CONTEXT, which depends on type CTXSYS.TEXTINDEXMETHODS, which depends on type SYS.ODCIPARTINFOLIST. I boiled the root cause of the problem down to these three objects in the database.

I figured out how to resolve the issue and move forward. I cannot stop this problem from occurring. But if I fix the issue and press the Retry button on the DBUA, it still will not upgrade the database for me so I am left with a manual upgrade.

Fixing the first two invalid objects is easy. Just compile them. Fixing the CONTEXT indextype is a bit more complicated. Here are the steps I used to get everything valid again and get my database upgraded.

$NEW_HOME/bin/dbupgrade -l /tmp/19c_upgrade
alter type sys.odcipartinfolist compile;
grant execute on odcipartinfolist to public;
alter type ctxsys.textindexmethods compile;
drop indextype ctxsys.context;
@?/ctx/admin/catctx.sql password SYSAUX TEMP LOCK
@?/rdbms/admin/utlrp.sql
exec dbms_registry.valid('CONTEXT');

Irrespective of the nature of the appliances, its performance level and period of warranty, there is no side effect attached to male enhancement pills as they are composed of the same active ingredient and helps man getting back sildenafil online no prescription to the normal sexual mode. Some of the most common are: Chronic fatigue and irritability Morning sickness and lethargic feeling Emotional instability due to unstable chemicals in brain Digestive viagra samples problems like diarrhea and stomach ache Inability to lose weight. The profound joy of feeding, clothing, teaching and caring for a child cannot be order viagra viagra see now described or replicated in any way. Taking overdose of the medicine would not minimize the discomfort generic india levitra of migraine headaches attack.

At this point my database was upgraded and ready to roll. So let's go over the steps above. After starting the database in UPGRADE mode, I used the dbupgrade utility to perform the manual upgrade. I still received the same errors that DBUA reports. Once the upgrade was complete, I compiled the ODCIPARTINFOLIST type. That was easy. For some reason I cannot figure out, the grant to PUBLIC on the type is dropped during the upgrade. I had to issue the grant, otherwise the next compile would not succeed. Then I compiled the TEXTINDEXMETHODS type for Oracle Text.

I wish I could have just compiled the CONTEXT indextype but it would never compile for me. So I had to drop the indextype and then run the catctx.sql script to get it created again. Then run utlrp.sql to recompile everything else.

A note on dropping the indextype: if you have indexes that reference the indextype, you will receive an error. Query DBA_INDEXES where ITYP_OWNER='CTXSYS' to find any indexes using this indextype. Use DBMS_METADATA.GET_DDL to obtain the statements to recreate those indexes. Drop the indexes and then drop the indextype. After you have the database up and running, recreate the indexes.
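
A sketch of those checks using the standard dictionary views:

-- find indexes built on the CONTEXT indextype
select owner, index_name, table_name
from dba_indexes
where ityp_owner = 'CTXSYS';

-- capture the DDL so the indexes can be recreated afterwards
select dbms_metadata.get_ddl('INDEX', index_name, owner)
from dba_indexes
where ityp_owner = 'CTXSYS';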

I spent the better part of a week working through all of this so hopefully this blog post finds you if you have a similar problem and you can be on your way much quicker than I was.

It’s In The Details

I have an Oracle 12.1.0.2 Multitenant database that I'm trying to remove a PDB from. However, I mistakenly removed the storage from the database server, and the PDB can no longer access its files. When unplugging the PDB, I get the following error:

SQL> alter pluggable database GOLD2019_08_22_125953 unplug into '/tmp/GOLD2019_08_22_125953.xml';
alter pluggable database GOLD2019_08_22_125953 unplug into '/tmp/GOLD2019_08_22_125953.xml'
*
ERROR at line 1:
ORA-01157: cannot identify/lock data file 50277 - see DBWR trace file
ORA-01110: data file 50277:
'/u01/app/oracle/oradata/mt_golden_2019_08_22_125953/data03/datafile_20.dbf'

Well, that is unfortunate but to be expected in my case. I mistakenly yanked out the storage for this PDB before I unplugged it. Most of the literature on Oracle's support site says to restore the PDB from your backup. But this PDB is a clone of production and I do not care to back it up. It is 25+ TB, and if something goes wrong with it, I remove the PDB and create a new clone of production. No backup is needed, except maybe to save me from my silly mistake of removing the storage before unplugging.

Since I could not unplug the PDB, I attempted to just drop the PDB but I get a different error:

SQL> drop pluggable database GOLD2019_08_22_125953 keep datafiles;
drop pluggable database GOLD2019_08_22_125953 keep datafiles
*
ERROR at line 1:
ORA-65179: cannot keep datafiles for a pluggable database that is not unplugged

Now I feel as though I’m in a Catch-22 situation. I cannot unplug the PDB nor can I drop it.

I've seen this error on many occasions and always wished I could just drop the PDB without having to unplug it first. I have no desire to plug this into another CDB. I just want the thing gone for good. And that's when I realized that it's in the details. When I read that ORA-65179 error message, I focused on the latter part of it: the PDB is not unplugged first. I've read this error message at least 20 times in my work with Multitenant, and I kept missing the critical detail that was stopping me from dropping the PDB, which I will now highlight below.

ORA-65179: cannot keep datafiles for a pluggable database that is not unplugged

The detail I kept skipping was that I could not KEEP the datafiles. Does that mean I can drop the PDB if I also remove its datafiles?

SQL> drop pluggable database GOLD2019_08_22_125953 including datafiles;

 Pluggable database dropped. 

Sure enough, I could easily drop the PDB. Sometimes, the most interesting things are found in the details if we just slow down a bit and make sure we see everything in front of us instead of jumping to the end.