Feed aggregator

Retrieving the records with only one document type

Tom Kyte - Tue, 2024-05-21 22:46
From the following table, I want to build a query using analytic functions to retrieve the rows where a Customer ID has a single common document type, not a mix. The output should be only ROW_NO 3, 4, 5 and 9, 10.
<code>
CREATE TABLE TAB_DOC_TYPES (ROW_NO NUMBER, CID NUMBER, DOC_TYPE VARCHAR2(5));
INSERT INTO TAB_DOC_TYPES VALUES(1,101,'D1');
INSERT INTO TAB_DOC_TYPES VALUES(2,101,'DZ');
INSERT INTO TAB_DOC_TYPES VALUES(3,102,'DZ');
INSERT INTO TAB_DOC_TYPES VALUES(4,102,'DZ');
INSERT INTO TAB_DOC_TYPES VALUES(5,102,'DZ');
INSERT INTO TAB_DOC_TYPES VALUES(6,103,'D1');
INSERT INTO TAB_DOC_TYPES VALUES(7,103,'DZ');
INSERT INTO TAB_DOC_TYPES VALUES(8,103,'DZ');
INSERT INTO TAB_DOC_TYPES VALUES(9,104,'DZ');
INSERT INTO TAB_DOC_TYPES VALUES(10,104,'DZ');
</code>
ROW_NO CID DOC_TYPE
1 101 D1
2 101 DZ
3 102 DZ
4 102 DZ
5 102 DZ
6 103 D1
7 103 DZ
8 103 DZ
9 104 DZ
10 104 DZ
Here CID 101 & 103 have both D1 and DZ, so the query output shouldn't return these records. CID 102 & 104 have only one document type, DZ, so the query should return only these records.
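
A minimal sketch of one possible analytic approach (untested against the data above; alias names are illustrative): count the distinct document types per CID and keep only the customers with exactly one.

<code>
SELECT row_no, cid, doc_type
FROM  (SELECT t.*,
              COUNT(DISTINCT doc_type) OVER (PARTITION BY cid) AS doc_type_cnt
       FROM   tab_doc_types t)
WHERE  doc_type_cnt = 1
ORDER  BY row_no;
</code>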
Categories: DBA Blogs

How to know if my oracle DB is on premise or cloud

Tom Kyte - Tue, 2024-05-21 22:46
Hi TOM, Not long ago it was easy to determine whether your DB is on premises or in the cloud: their banners (from V$VERSION) were different. Now the original Enterprise and Standard editions can be on premises or in the cloud, so we can no longer determine which environment we are in just by looking at the banner. Starting with 21c, V$PDBS contains a CLOUD_IDENTITY column which is not null if you are in the cloud. So my question: in 12.2 to 19c, how can I know, using SQL, whether my Oracle DB is on premises or in the cloud? Bonus: how can I know whether it is OCI (Oracle Cloud Infrastructure) or ACE (Authorized Cloud Environment) or even neither (and unsupported)? Regards Michel
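
For reference, a hedged sketch: on 21c and later, the CLOUD_IDENTITY column mentioned in the question answers this directly, while for 12.2 to 19c any SQL-only check is heuristic. The second query is an assumption that applies to Autonomous Database only, where SYS_CONTEXT exposes a cloud service name; verify its availability in your release.

<code>
-- 21c and later: a non-NULL CLOUD_IDENTITY indicates a cloud deployment
SELECT name, cloud_identity FROM v$pdbs;

-- 12.2-19c heuristic (assumption: Autonomous Database only; NULL elsewhere)
SELECT sys_context('USERENV', 'CLOUD_SERVICE') FROM dual;
</code>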
Categories: DBA Blogs

Bulk Collect only brings limit rows

Tom Kyte - Tue, 2024-05-21 22:46
Hi TOM: I have to copy an AS400 table; for that I have a DB link that connects the AS400 database to my Oracle 11g database. Since it has several million records, I tried a bulk collect: <code> CREATE TABLE AS400_VPRA_ABONO ("ABON_NUM_CCTE" NUMBER(9,0) NOT NULL ENABLE, "ABON_FEC_COMPR_PAG" NUMBER(9,0) NOT NULL ENABLE, "ABON_CORR_COMPR" NUMBER(3,0) NOT NULL ENABLE, "ABON_CORRELATIVO" NUMBER(3,0) NOT NULL ENABLE, "ABON_FEC_CPBTE_ING_EGR" NUMBER(9,0) NOT NULL ENABLE, "ABON_TIPO_REND" NUMBER(2,0) NOT NULL ENABLE, "ABON_NUM_CPBTE_ING_EGR" NUMBER(8,0) NOT NULL ENABLE, "ABON_TIPO_COMPR" NUMBER(1,0) NOT NULL ENABLE, "ABON_TIPO_AVISO" NUMBER(2,0) NOT NULL ENABLE, "ABON_NUM_AVISO" NUMBER(8,0) NOT NULL ENABLE, "ABON_LINEA" NUMBER(5,0) NOT NULL ENABLE, "ABON_TIPO_ABONO" NUMBER(2,0) NOT NULL ENABLE, "ABON_TIPO_VIA" NUMBER(1,0) NOT NULL ENABLE, "ABON_RECAUDADOR" NUMBER(5,0) NOT NULL ENABLE, "ABON_MTO_PAG_MON" NUMBER(13,2) NOT NULL ENABLE, "ABON_MTO_PAG_PESOS" NUMBER(13,2) NOT NULL ENABLE, "ABON_FEC_PAGO" NUMBER(9,0) NOT NULL ENABLE, "ABON_MEDIO_PAGO" NUMBER(1,0) NOT NULL ENABLE, "ABON_AREA" CHAR(16 BYTE) NOT NULL ENABLE, "ABON_BCO_ADM" CHAR(10 BYTE) NOT NULL ENABLE, "ABON_MTO_DEV_MON" NUMBER(13,2) NOT NULL ENABLE, "ABON_MTO_PAG_MON_AJ" NUMBER(13,2) NOT NULL ENABLE, "ABON_MTO_PAG_PESOS_AJ" NUMBER(13,2) NOT NULL ENABLE, "ABON_MOTIVO" NUMBER(3,0) NOT NULL ENABLE, "ABON_SALDO" NUMBER(13,2) NOT NULL ENABLE, "ABON_STATUS" NUMBER(2,0) NOT NULL ENABLE, "ABON_STA_FACTUR" CHAR(1 BYTE) NOT NULL ENABLE, "ABON_EXENTO_PAG_MON" NUMBER(13,2) NOT NULL ENABLE, "ABON_AFECTO_PAG_MON" NUMBER(13,2) NOT NULL ENABLE, "ABON_DEREMI_PAG_MON" NUMBER(9,2) NOT NULL ENABLE, "ABON_IMPTO_PAG_MON" NUMBER(13,2) NOT NULL ENABLE, "ABON_TIPDOC" CHAR(2 BYTE) NOT NULL ENABLE, "ABON_NUMDOC" CHAR(8 BYTE) NOT NULL ENABLE, "ABON_FILLER" CHAR(14 BYTE) NOT NULL ENABLE ); CREATE TABLE AS400_VPRA_ABONO_ORIGIN ("ABON_NUM_CCTE" NUMBER(9,0) NOT NULL ENABLE, "ABON_FEC_COMPR_PAG" NUMBER(9,0) NOT NULL ENABLE, "ABON_CORR_COMPR" NUMBER(3,0) NOT NULL ENABLE, "ABON_CORRELATIVO" NUMBER(3,0) NOT NULL ENABLE, "ABON_FEC_CPBTE_ING_EGR" NUMBER(9,0) NOT NULL ENABLE, "ABON_TIPO_REND" NUMBER(2,0) NOT NULL ENABLE, "ABON_NUM_CPBTE_ING_EGR" NUMBER(8,0) NOT NULL ENABLE, "ABON_TIPO_COMPR" NUMBER(1,0) NOT NULL ENABLE, "ABON_TIPO_AVISO" NUMBER(2,0) NOT NULL ENABLE, "ABON_NUM_AVISO" NUMBER(8,0) NOT NULL ENABLE, "ABON_LINEA" NUMBER(5,0) NOT NULL ENABLE, "ABON_TIPO_ABONO" NUMBER(2,0) NOT NULL ENABLE, "ABON_TIPO_VIA" NUMBER(1,0) NOT NULL ENABLE, "ABON_RECAUDADOR" NUMBER(5,0) NOT NULL ENABLE, "ABON_MTO_PAG_MON" NUMBER(13,2) NOT NULL ENABLE, "ABON_MTO_PAG_PESOS" NUMBER(13,2) NOT NULL ENABLE, "ABON_FEC_PAGO" NUMBER(9,0) NOT NULL ENABLE, "ABON_MEDIO_PAGO" NUMBER(1,0) NOT NULL ENABLE, "ABON_AREA" CHAR(16 BYTE) NOT NULL ENABLE, "ABON_BCO_ADM" CHAR(10...
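
The usual pitfall behind "bulk collect only brings LIMIT rows" is testing cursor%NOTFOUND before processing the final, partially filled batch. A minimal sketch of the standard LIMIT-loop pattern (the remote table and DB link names are placeholders; the real code would use the full table definitions above):

<code>
DECLARE
  -- assumption: the AS400 source table is reached through a DB link named as400_link
  CURSOR c IS SELECT * FROM vpra_abono@as400_link;
  TYPE t_tab IS TABLE OF c%ROWTYPE;
  l_rows t_tab;
BEGIN
  OPEN c;
  LOOP
    FETCH c BULK COLLECT INTO l_rows LIMIT 1000;
    EXIT WHEN l_rows.COUNT = 0;  -- not "EXIT WHEN c%NOTFOUND", which skips the last batch
    FORALL i IN 1 .. l_rows.COUNT
      INSERT INTO as400_vpra_abono VALUES l_rows(i);
    COMMIT;
  END LOOP;
  CLOSE c;
END;
/
</code>

For a one-off copy, a plain INSERT /*+ APPEND */ ... SELECT over the link is often simpler and faster than any row-by-row PL/SQL.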
Categories: DBA Blogs

Delete Foreign Keys and Primary Table Rows in Large Table

Tom Kyte - Tue, 2024-05-21 22:46
We have two tables: the ITEM table has more than 150 million records and the ITEM_EVENT table has more than 400 million. Because of the growing nature of the data, we want to perform periodic cleanup of the tables. We could not find a performant way to achieve this; the select query was taking very long and eventually failed with ORA-01114. The CREATED columns in both tables are not indexed; we can add indexes if they would help. So please give us some suggestions to achieve our goal. Thanks. Deletion of records is planned as follows: - older than some compliance date - with a batch size of, say, 50000 per iteration - split the deletion into two steps: first delete the foreign key records, then the primary key records. Our tables' DDL: <code> CREATE TABLE "ITEM" ( "ID" VARCHAR2(255 CHAR) NOT NULL ENABLE, "CREATED" TIMESTAMP (6) NOT NULL ENABLE, "ITEM_TYPE" VARCHAR2(255 CHAR) NOT NULL ENABLE, "ITEM_ID" VARCHAR2(255 CHAR) NOT NULL ENABLE, PRIMARY KEY ("ID") ) CREATE INDEX "ITEM_ID_NDX" ON "ITEM" ("ITEM_ID") CREATE TABLE "ITEM_EVENT" ( "ID" NUMBER(19,0) NOT NULL ENABLE, "CREATED" TIMESTAMP (6) NOT NULL ENABLE, "ITEM_EVENT_TYPE" VARCHAR2(255 CHAR) NOT NULL ENABLE, "ITEM_BID" VARCHAR2(255 CHAR) NOT NULL ENABLE, "ITEM_STATE" VARCHAR2(255 CHAR), "CHANGE_REASON" VARCHAR2(255 CHAR), "ITEM_ID" VARCHAR2(255 CHAR) NOT NULL ENABLE, PRIMARY KEY ("ID") ) alter table ITEM_EVENT add constraint ITEM_EVENT_FK_ITEM_BID foreign key (ITEM_BID) references ITEM; CREATE INDEX "ITEM_EVENT_BID_NDX" ON "ITEM_EVENT" ("ITEM_BID") CREATE INDEX "ITEM_EVENT_ID_NDX" ON "ITEM_EVENT" ("ITEM_ID") </code> The following queries were tried; they were very slow and caused the error: <code> DELETE FROM ITEM_EVENT WHERE ITEM_ID IN ( SELECT ITEM_ID FROM ITEM_EVENT WHERE CREATED < current_timestamp - NUMTODSINTERVAL(180, 'DAY') GROUP BY ITEM_ID HAVING MAX(ITEM_STATE) KEEP (DENSE_RANK LAST ORDER BY CREATED ASC)= 'DEACTIVATED' FETCH FIRST 50000 ROWS ONLY); DELETE FROM ITEM i WHERE NOT EXISTS (SELECT 1 FROM ITEM_EVENT ie WHERE ie.ITEM_BID = i.ID) AND CREATED < current_timestamp - NUMTODSINTERVAL(180, 'DAY'); </code>
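
A hedged sketch of one common batching pattern, child rows first and parents second (it deliberately omits the ITEM_STATE/DEACTIVATED condition from the original query for brevity; batch size and commit frequency are illustrative):

<code>
BEGIN
  LOOP  -- step 1: delete child rows in batches
    DELETE FROM item_event
    WHERE  created < current_timestamp - NUMTODSINTERVAL(180, 'DAY')
    AND    ROWNUM <= 50000;
    EXIT WHEN SQL%ROWCOUNT = 0;
    COMMIT;
  END LOOP;
  LOOP  -- step 2: delete parents that no longer have children
    DELETE FROM item i
    WHERE  i.created < current_timestamp - NUMTODSINTERVAL(180, 'DAY')
    AND    NOT EXISTS (SELECT 1 FROM item_event ie WHERE ie.item_bid = i.id)
    AND    ROWNUM <= 50000;
    EXIT WHEN SQL%ROWCOUNT = 0;
    COMMIT;
  END LOOP;
  COMMIT;
END;
/
</code>

At this scale, an index on CREATED (or, better, range partitioning by CREATED so old data can be dropped by partition) is usually what makes the approach viable.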
Categories: DBA Blogs

Oracle row compared to Mongo document

Tom Kyte - Tue, 2024-05-21 22:46
Good Morning, In the last year, I've started to support Mongo databases. Mongo stores data in BSON, which is the binary form of JSON. JSON is just the field name followed by a value. This doesn't seem so different from Oracle, since Oracle also stores its data in a series of columns with values. I'm curious to know what an Oracle row looks like. If a table has the following columns: -Fname string -Lname string -notes string and a row has, say, Fname='John' and Lname='Doe', does Oracle add the field names Fname and Lname to each row? Does the row look like this on disk: Fname='John', Lname='Doe', notes null or does it look like this: 'John','Doe', null My guess is that it looks like option 1. It would be nice if you could also show what an Oracle row looks like on disk. Thank you John
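
It is actually closer to option 2: Oracle stores length-prefixed column values in positional order, with no column names in the row piece, and trailing NULL columns are not stored at all. A hedged sketch of how to see this for yourself (table and column names follow the example above; the file/block numbers in the dump command are placeholders to be taken from the first query):

<code>
-- locate the block containing the row
SELECT dbms_rowid.rowid_relative_fno(rowid) AS file_no,
       dbms_rowid.rowid_block_number(rowid) AS block_no
FROM   my_table
WHERE  fname = 'John';

-- dump that block to a trace file and inspect the row piece there
ALTER SYSTEM DUMP DATAFILE 4 BLOCK 131;  -- substitute the values returned above
</code>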
Categories: DBA Blogs

Testing RENAME LOB (Segment) in 23ai

Hemant K Chitale - Tue, 2024-05-21 09:46
Another new feature of 23ai is the ability to rename a LOB (Segment) in-place without having to use the MOVE clause.

A quick demo :


SQL> -- Version 23ai Free Edition
SQL> select banner from v$version;

BANNER
---------------------------------------------------------------------------------------------------------------------------------
Oracle Database 23ai Free Release 23.0.0.0.0 - Develop, Learn, and Run for Free

SQL>
SQL>
SQL> DROP TABLE my_lob_objects purge;

Table dropped.

SQL>
SQL> -- create the table with a LOB, column name "c",  lob segment name also "c"
SQL> CREATE TABLE my_lob_objects (object_id NUMBER primary key, c CLOB)
  2        lob (c) STORE AS SECUREFILE c
  3        ( TABLESPACE users
  4          DISABLE STORAGE IN ROW
  5          NOCACHE LOGGING
  6          RETENTION AUTO
  7          COMPRESS
  8        );

Table created.

SQL>
SQL> -- query the data dictionary
SQL> select table_name, column_name, segment_name, tablespace_name from user_lobs;

TABLE_NAME       COLUMN_NAME      SEGMENT_NAME         TABLESPACE_NAME
---------------- ---------------- -------------------- ----------------
MY_LOB_OBJECTS   C                C                    USERS

SQL>
SQL> -- insert three rows
SQL> insert into my_lob_objects values (1, dbms_random.string('X',100));

1 row created.

SQL> insert into my_lob_objects values (2, dbms_random.string('X',100));

1 row created.

SQL> insert into my_lob_objects values (3, dbms_random.string('X',100));

1 row created.

SQL>
SQL> -- verify the column name when querying the table
SQL> select * from my_lob_objects;

 OBJECT_ID C
---------- --------------------------------------------------------------------------------
         1 IBGOGKA9QKK56O746IJL3C56ZK9LEO0G1W4LWBN11T8EWCFTTLUW9TPIVQAU8BPSGPQ2ZV57BS0ZPK0S
         2 7K04DVVYDQB1URIQ1OQ2458M8ZOURHWW50XIZDMVGAZH6XVN2KKN4PIGKPY5CSVIQ9KU45LHZPJB33AA
         3 2G5194Z7TSR3XG0K698G587AOZOJ8VN6KFCTCH3074TNCOWCSMOPRJLRGTLIZMDD73XAY4KDD14IW4MG

SQL>
SQL> -- now rename the column
SQL> alter table my_lob_objects rename column c to clob_col;

Table altered.

SQL>
SQL> -- query the data dictionary
SQL> select table_name, column_name, segment_name, tablespace_name from user_lobs;

TABLE_NAME       COLUMN_NAME      SEGMENT_NAME         TABLESPACE_NAME
---------------- ---------------- -------------------- ----------------
MY_LOB_OBJECTS   CLOB_COL         C                    USERS

SQL>
SQL> -- now rename the lob segment
SQL> alter table my_lob_objects rename lob(clob_col) c to my_lob_objects_clob;

Table altered.

SQL>
SQL> -- query the data dictionary
SQL> select table_name, column_name, segment_name, tablespace_name from user_lobs;

TABLE_NAME       COLUMN_NAME      SEGMENT_NAME         TABLESPACE_NAME
---------------- ---------------- -------------------- ----------------
MY_LOB_OBJECTS   CLOB_COL         MY_LOB_OBJECTS_CLOB  USERS

SQL>
SQL> -- verify the column name when querying the table
SQL> select * from my_lob_objects;

 OBJECT_ID CLOB_COL
---------- --------------------------------------------------------------------------------
         1 IBGOGKA9QKK56O746IJL3C56ZK9LEO0G1W4LWBN11T8EWCFTTLUW9TPIVQAU8BPSGPQ2ZV57BS0ZPK0S
         2 7K04DVVYDQB1URIQ1OQ2458M8ZOURHWW50XIZDMVGAZH6XVN2KKN4PIGKPY5CSVIQ9KU45LHZPJB33AA
         3 2G5194Z7TSR3XG0K698G587AOZOJ8VN6KFCTCH3074TNCOWCSMOPRJLRGTLIZMDD73XAY4KDD14IW4MG

SQL>
SQL> -- identify the segment
SQL> select tablespace_name, segment_name, segment_type, bytes/1024 Size_KB
  2  from user_segments
  3  where segment_name = 'MY_LOB_OBJECTS_CLOB'
  4  /

TABLESPACE_NAME  SEGMENT_NAME         SEGMENT_TYPE         SIZE_KB
---------------- -------------------- ------------------ ---------
USERS            MY_LOB_OBJECTS_CLOB  LOBSEGMENT              2304

SQL>



First I create a Table where the Column and LOB (Segment) are both called "C".  In recent versions, SECUREFILE is the default and is recommended for LOBs (e.g. for the COMPRESS, DEDUPLICATE and ENCRYPT advantages).

Then I insert 3 rows.

I then rename the column "C" to "CLOB_COL".

Next, I rename the LOB (Segment) to "MY_LOB_OBJECTS_CLOB".  I include the Table Name because the LOB segment is an independent segment that I might query in USER_SEGMENTS (where the Table Name is not available).  This RENAME LOB clause is new in 23ai and does not require the use of MOVE LOB.


I then verify the new Segment Name for the LOB as well.

Yes, the 2,304KB "size" seems excessive, but this will make sense (with the COMPRESS attribute) when the LOB grows much, much larger as new rows with long character strings are inserted.




Categories: DBA Blogs

Playing Pickleball on a Badminton Court

The Oracle Instructor - Sat, 2024-05-18 04:39

A badminton court can be converted for pickleball very quickly and cheaply. This makes it easy for PE teachers and sports clubs to offer this trending sport.

The outer dimensions of a badminton doubles court are identical to the outer dimensions of a pickleball court:

Badminton court

Incidentally, in pickleball there is no difference in the outer dimensions of the court between singles and doubles. So the outer dimensions already match; nothing needs to change there. Only the badminton service line (1.98 m from the net) is not identical to the pickleball NVZ line:

Pickleball court with NVZ

The NVZ (non-volley zone) is 2.13 m from the net. So you only need to tape down one line on each side, 15 cm from the badminton service line, and you have turned a badminton court into a pickleball court!

The Gauder Malerkrepp painter's tape, for example, works well for this (around 12 euros for three rolls); it is easy to apply, holds well and comes off without residue. Taped down in 5 minutes:

What is still missing is a pickleball net: the badminton net, at 1.55 m, is too high. A mobile pickleball net costs under 200 euros and is available here, for example.

Mobile pickleball net

Incidentally, inexpensive starter sets and paddles with a good price/performance ratio and volume discounts are also available there. In other words: schools and sports clubs with access to badminton courts can offer pickleball with little effort and cost! And indeed, more and more of them are doing just that in Germany too. In my opinion, we are on the verge of a boom in this sport here.

Categories: DBA Blogs

SRDC – Collect Data Guard Diagnostic Information (Doc ID 2219763.1)

Michael Dinh - Fri, 2024-05-17 07:47

Auto Collection Using TFA (Recommended)
Manual Collection Using Script for Unix

Note: Oracle Support typically requests TFA; however, some environments disable TFA due to resource constraints.

This means manual collection is required.

Upload files directly to static application files

Tom Kyte - Wed, 2024-05-15 23:26
Hi, I have a use case in which I want the end user to upload files from the UI and have them stored directly in the application's static files section. Are there any APIs available? Thanks, Tushar
Categories: DBA Blogs

Call shell script using stored procedure/function

Tom Kyte - Wed, 2024-05-15 23:26
Hi, I want to create a stored procedure/function which will call a shell script; the shell script will have a command to copy a file from one location on the DB server to another location on the DB server. I tried using a scheduler job and it works fine, but I don't want to use a scheduler job. I want to use a procedure/function to call the shell script. Request your help on how to call a shell script from a stored procedure/function. Regards GirishR
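
A hedged sketch of one workaround: even without a permanent job, a procedure can launch an external script through DBMS_SCHEDULER by creating a one-shot EXECUTABLE job on the fly (the procedure name and script path are placeholders; the schema needs the CREATE JOB and CREATE EXTERNAL JOB privileges, and the script must be executable by the OS user configured for external jobs). Note that this still uses the scheduler machinery under the hood; a Java stored procedure is the classic scheduler-free alternative.

<code>
CREATE OR REPLACE PROCEDURE run_copy_script AS
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name   => 'COPY_FILE_JOB',
    job_type   => 'EXECUTABLE',
    job_action => '/home/oracle/scripts/copy_file.sh',  -- placeholder path
    enabled    => TRUE,    -- runs immediately ...
    auto_drop  => TRUE);   -- ... and drops itself when done
END;
/
</code>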
Categories: DBA Blogs

Finding purchases on 10+ consecutive days

Tom Kyte - Wed, 2024-05-15 23:26
I'm trying to use match_recognize() to find purchases made by each customer on 10+ consecutive days, a day being the next calendar date. For example, if customer 1 made 2 purchases on 10-MAY-2024 at 1300 hours and 1400 hours, this would not be 2 consecutive days; it would be considered 1 day. Whereas if customer 1 made a purchase on 10-MAY-2024 at 23:59:59 and on 11-MAY-2024 at 00:00:00, this would be considered 2 consecutive days, since the calendar date has changed, although it's not 24 hours after the first purchase on 10-MAY-2024 at 23:59:59. Based on my test case below and sample data, I appear to be finding the following streak of days, and I am unsure why: CUSTOMER_ID FIRST_NAME LAST_NAME START_DATE END_DATE CONSECUTIVE_DAYS 2 Jane Smith 15-JAN-2023 20-JAN-2023 6 As you can see, this is only 6 consecutive days, not 10 or more; therefore I thought match_recognize() would have filtered this out. Is this something match_recognize can detect? If so, how? If not, can you suggest a workaround? <code>ALTER SESSION SET NLS_TIMESTAMP_FORMAT = 'DD-MON-YYYY'; CREATE TABLE customers (CUSTOMER_ID, FIRST_NAME, LAST_NAME) AS SELECT 1, 'Ann', 'Aaron' FROM DUAL UNION ALL SELECT 2, 'Jane', 'Smith' FROM DUAL UNION ALL SELECT 3, 'Bonnie', 'Winterbottom' FROM DUAL UNION ALL SELECT 4, 'Sandy', 'Herring' FROM DUAL UNION ALL SELECT 5, 'Roz', 'Doyle' FROM DUAL; create table purchases( ORDER_ID NUMBER GENERATED BY DEFAULT AS IDENTITY (START WITH 1) NOT NULL, customer_id number, PRODUCT_ID NUMBER, QUANTITY NUMBER, purchase_date timestamp ); insert into purchases (customer_id, product_id, quantity, purchase_date) select 2 customer_id, 102 product_id, 2 quantity, TIMESTAMP '2024-04-03 00:00:00' + INTERVAL '18' HOUR + ((LEVEL-1) * INTERVAL '1 00:00:01' DAY TO SECOND) * -1 + ((LEVEL-1) * interval '0.007125' second) as purchase_date from dual connect by level <= 15 UNION all select 1, 101, 1, DATE '2024-03-08' + INTERVAL '14' HOUR + ((LEVEL-1) * INTERVAL '1 00:00:00' DAY TO SECOND) * -1 from dual connect by level <= 5 UNION ALL select 3, 103, 3, DATE '2024-02-08' + INTERVAL '15' HOUR + ((LEVEL-1) * INTERVAL '0 23:59:59' DAY TO SECOND) * -1 from dual connect by level <= 5 UNION all select 2, 102,1, date '2023-07-29' + level * interval '1' day from dual connect by level <= 12 union all select 2, 103,1, date '2023-08-29' + level * interval '1' day from dual connect by level <= 15 union all select 2, 104,1, date '2023-11-11' + level * interval '1' day from dual connect by level <= 9 union all select 4, 103,(3*LEVEL), TIMESTAMP '2023-06-01 05:18:03' + numtodsinterval ( (LEVEL -1) * 1, 'day' ) + numtodsinterval ( LEVEL * 37, 'minute' ) + numtodsinterval ( LEVEL * 3, 'second' ) FROM dual CONNECT BY LEVEL <= 4 UNION ALL SELECT 3, 102, 4,TIMESTAMP '2022-12-22 21:44:35' + NUMTODSINTERVAL ( ...
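
A hedged sketch of the shape that usually works for calendar-day streaks: collapse the purchases to distinct days first, then pattern-match on day gaps; the quantifier {9,} after the first row enforces 10+ days. A shorter streak such as the 6-day one typically slips through when the DEFINE clause compares raw timestamps instead of truncated days, or when the quantifier permits shorter matches.

<code>
SELECT *
FROM  (SELECT DISTINCT customer_id, TRUNC(purchase_date) AS purchase_day
       FROM   purchases)
MATCH_RECOGNIZE (
  PARTITION BY customer_id
  ORDER BY purchase_day
  MEASURES FIRST(purchase_day) AS start_date,
           LAST(purchase_day)  AS end_date,
           COUNT(*)            AS consecutive_days
  PATTERN (strt nxt{9,})
  DEFINE nxt AS purchase_day = PREV(purchase_day) + 1
);
</code>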
Categories: DBA Blogs

Dependent package not invalidated and recompiled, thus providing bad results

Tom Kyte - Wed, 2024-05-15 23:26
Hi Connor/Chris, I am struggling to reason about the case attached in the LiveSQL session. P1 spec ('get_field_value' just returns the value of 'field' from the record given) <code> CREATE OR REPLACE PACKAGE p1 AS field_default CONSTANT varchar2(4) := '0000'; TYPE typ_rec IS RECORD ( field varchar2(4) default field_default ); function get_field_value(p_rec typ_rec) return varchar2; end; </code> P2 body <code> CREATE OR REPLACE PACKAGE BODY p2 AS function get_field_value return varchar2 as l_rec p1.typ_rec; begin return p1.get_field_value(l_rec); end; end; / </code> Now we should get '0000' no matter how we get to the record.field value <code> DECLARE l_rec p1.typ_rec; BEGIN dbms_output.put_line(p1.get_field_value(l_rec)); dbms_output.put_line(p2.get_field_value()); END; / </code> However, if we now prepend a new constant to P1 <code> CREATE OR REPLACE PACKAGE p1 AS dummy CONSTANT varchar2(4) := 'XXXX'; field_default CONSTANT varchar2(4) := '0000'; TYPE typ_rec IS RECORD ( field varchar2(4) default field_default ); function get_field_value(p_rec typ_rec) return varchar2; end; </code> P2 is not invalidated and starts to return 'XXXX' as the value of the field. It looks like it stored the index of the constant from P1, and now it happily returns the incorrect value. If one recompiles P2 manually, it starts to return the correct result of '0000' again. You can imagine the fun one has when the values of 200 constants are suddenly offset. I tried to find an explanation of this behaviour in the docs, but to no avail.
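
A hedged sketch of the immediate workaround while the behaviour is investigated: check object validity (P2 typically still shows VALID here, which is the surprising part) and force recompilation of P2 whenever P1's spec changes.

<code>
SELECT object_name, object_type, status
FROM   user_objects
WHERE  object_name IN ('P1', 'P2');

ALTER PACKAGE p2 COMPILE BODY;
</code>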
Categories: DBA Blogs

csv output using sqlplus spool is much slower than expdp

Tom Kyte - Wed, 2024-05-15 23:26
Dear friends, I am trying to export CSV from a partitioned table (300GB); it takes 3 hours (only one table), using the following code <code> set term off set feed off set colsep '|' set echo off set feedback off set linesize 1000 set pagesize 0 set trimspool on SET TERMOUT OFF spool '/backup/csv/Mytable.csv' SELECT /*+ parallel */ /*csv*/ col1 || '|' || col2 || '|' || col3 FROM MySchema.MyTable ; spool off exit;</code> but when I export (expdp) all schema tables & their data (3TB), it takes only 20 minutes! Why is expdp so fast compared to SQL spool? What is a fast method for CSV output from an Oracle table? regards Siya AK Hardware: 4 TB RAM + 196-core CPU + NVMe disks, Oracle 11g R2
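
expdp writes database blocks server-side in parallel, while a spool pulls every row through a single client connection over SQL*Net, so the fetch settings dominate (the parallel hint does not help the client-side fetch). A hedged sketch of client-side tuning; SET MARKUP CSV exists only in SQL*Plus 12.2 and later clients, so it is commented out here, and splitting the job by partition across several concurrent sessions approximates expdp's parallelism:

<code>
set term off
set feed off
set echo off
set pagesize 0
set trimspool on
set linesize 1000
set arraysize 5000
-- set markup csv on delimiter | quote off  -- assumption: requires a 12.2+ SQL*Plus client
spool /backup/csv/Mytable.csv
SELECT col1 || '|' || col2 || '|' || col3 FROM MySchema.MyTable;
spool off
</code>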
Categories: DBA Blogs

Sequence issue

Tom Kyte - Wed, 2024-05-15 23:26
Hi Tom, not sure about the category of this question, but I will explain it. We came across an issue. In the pre-staging server environment, the sequence column is generating an incremented value matching the last record inserted into a table via a procedure (which is what we want and is fine). The issue: in production (a distributed environment) we saw that, in one case, the most recently inserted record has a lower sequence value than the previous one. As the sequence is incremental, it should generate the highest value for the last inserted record. We are using GoldenGate in production, where sequence values are odd numbers on one server and even numbers on the other. What could be the scenario? Is it because multiple instances of that server use the same table (distributed server)? But that shouldn't be the issue, I guess, because server replication should not create wrong data. Note: all commits are in place after the DML in the procedure. Are there pros/cons of using a sequence, or of the CACHE/NOORDER keywords in Oracle, that may be causing this issue? Is there an issue with using sequences? How do we rectify this issue, given that hundreds of procedures use this sequence? Are there any gaps which can be covered while using sequence-generated values? Kindly confirm.
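
For reference, a hedged sketch (the sequence name is a placeholder): a sequence with CACHE and NOORDER guarantees uniqueness only, not arrival order, and GoldenGate's odd/even ranges widen the visible reordering. If strict ordering matters more than throughput, the ordering semantics can be changed:

<code>
ALTER SEQUENCE my_seq ORDER;          -- my_seq is a placeholder name
-- stricter still, at a real performance cost in RAC/distributed setups:
ALTER SEQUENCE my_seq NOCACHE ORDER;
</code>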
Categories: DBA Blogs

Is it possible to create a private user under a DB schema?

Tom Kyte - Wed, 2024-05-15 23:26
We are writing to discuss an operational challenge that we are currently facing with the integration of Oracle Integration Cloud (OIC) and Oracle Autonomous Database within our organization. Our setup utilizes OIC as the primary integration layer, in conjunction with the Autonomous Database for staging data, performing validations, and executing other data derivation tasks. Our infrastructure includes multiple OIC environments, each configured to connect to the same Autonomous Database but utilizing distinct database schemas. These schemas primarily contain custom tables and packages essential for enforcing our business rules. During the integration process, particularly when invoking subroutines, it is necessary to specify the database name along with the package or procedure name, as detailed in the Oracle documentation (https://docs.oracle.com/en/cloud/paas/integration-cloud/atp-adapter/invoke-stored-procedure-page.html). We encounter significant challenges when migrating integrations between different OIC environments due to the requirement of manually updating the schema name in each database activity to match the target environment's schema. This process is not only time-consuming but also prone to errors, impacting our efficiency and operational continuity. In previous discussions with the Oracle Support team, the suggestion was made to utilize separate databases with identical schema names to circumvent this issue. However, due to resource constraints, expanding beyond our current setup of one database for production and another for non-production environments is not feasible. Given these circumstances, we are reaching out to inquire whether there might be an alternative solution or workaround that could facilitate a more streamlined migration process between OIC environments without the need for manual updates. Any suggestions or guidance you could provide would be greatly appreciated.
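
A hedged sketch of one common workaround (schema, user and object names are placeholders): keep the OIC connection user fixed in every environment and repoint it at that environment's schema, either with a logon trigger that remaps the default schema or with synonyms, so the integrations never hardcode a schema name.

<code>
-- option 1: remap the default schema for the integration user at logon
CREATE OR REPLACE TRIGGER integ_user_logon
AFTER LOGON ON integ_user.SCHEMA
BEGIN
  EXECUTE IMMEDIATE 'ALTER SESSION SET CURRENT_SCHEMA = stage_dev';  -- per-environment value
END;
/

-- option 2: synonyms in the connecting user's schema
CREATE OR REPLACE SYNONYM my_pkg FOR stage_dev.my_pkg;
</code>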
Categories: DBA Blogs

How to retrieve a single hierarchy from multiple hierarchies

Tom Kyte - Wed, 2024-05-15 23:26
Hi, <b>There are several examples of hierarchical queries using the employees-manager example. Recently I came across a hierarchical scenario where the table was storing multiple hierarchies. Example data: </b> <code>CREATE TABLE example ( Title VARCHAR2(50), ID NUMBER, Link_id NUMBER );</code> <code>INSERT INTO example (Title, ID, Link_id) VALUES ('A', 1, NULL); INSERT INTO example (Title, ID, Link_id) VALUES ('B', 2, 1); INSERT INTO example (Title, ID, Link_id) VALUES ('C', 3, 2); INSERT INTO example (Title, ID, Link_id) VALUES ('D', 4, 3); INSERT INTO example (Title, ID, Link_id) VALUES ('E', 5, NULL); INSERT INTO example (Title, ID, Link_id) VALUES ('F', 6, 5); INSERT INTO example (Title, ID, Link_id) VALUES ('G', 7, 6); INSERT INTO example (Title, ID, Link_id) VALUES ('H', 8, NULL); INSERT INTO example (Title, ID, Link_id) VALUES ('I', 9, NULL); INSERT INTO example (Title, ID, Link_id) VALUES ('J', 10, 9);</code>
Title | ID | Link_id
A | 1 | null
B | 2 | 1
C | 3 | 2
D | 4 | 3
E | 5 | null
F | 6 | 5
G | 7 | 6
H | 8 | null
I | 9 | null
J | 10 | 9
and I wanted to retrieve the whole hierarchy given any node, i.e. passing ID 3 should return: A B C D. I wrote the following function to get the root id: <code>create or replace FUNCTION find_root( p_id IN example.id%TYPE ) RETURN example.id%TYPE AS v_given_id example.id%TYPE := p_id; v_root_id example.id%TYPE; BEGIN LOOP SELECT Link_id INTO v_root_id FROM example WHERE id = v_given_id; IF v_root_id IS NULL THEN EXIT; -- Exit the loop if Link_id is null ELSE v_given_id := v_root_id; -- Update v_given_id with Link_id END IF; END LOOP; RETURN v_given_id; EXCEPTION WHEN NO_DATA_FOUND THEN RETURN NULL; -- Return NULL if no record found for the given id WHEN OTHERS THEN RETURN NULL; -- Handle other errors by returning NULL END;</code> <b>The following stored procedure will then call the above function and return the hierarchy based on the id returned by the function:</b> <code>create or replace PROCEDURE get_hierarchy( p_given_id IN example.id%TYPE, hmm OUT SYS_REFCURSOR ) IS v_root_id example.id%TYPE; BEGIN SELECT find_root(p_given_id) INTO v_root_id FROM dual; -- Check if the root ID is not null IF v_root_id IS NOT NULL THEN -- Use explicit cursor declaration OPEN hmm FOR SELECT Title, ID, Link_id from example START WITH example.id = v_root_id CONNECT BY PRIOR example.id = example.link_id; ELSE -- Use RAISE_APPLICATION_ERROR for customized error messages RAISE_APPLICATION_ERROR(-20001, 'No record foun...
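
A hedged sketch that collapses the function-plus-procedure into a single statement: the subquery walks upward from the given node to its root (the row whose Link_id is null), and the outer CONNECT BY walks back down from that root (the bind variable name is illustrative).

<code>
SELECT title, id, link_id
FROM   example
START WITH id = (SELECT id
                 FROM   example
                 WHERE  link_id IS NULL
                 START WITH id = :given_id
                 CONNECT BY PRIOR link_id = id)
CONNECT BY PRIOR id = link_id
ORDER BY id;
</code>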
Categories: DBA Blogs

FAST out-of-place materialized view refresh problem

Tom Kyte - Wed, 2024-05-15 23:26
I encountered a problem related to forcing the refresh procedure on the materialized view in a combined manner: - refresh_method = 'F' - out_of_place = true <code>DBMS_MVIEW.REFRESH('FOO_MV', out_of_place => true, atomic_refresh => false, method => 'F');</code> For the past few days, I have made many different attempts and tests to force a situation in which MV is refreshed using a combination of: refresh-method = FAST and out-of-place = TRUE but only succeeded in achieving the combinations: refresh-method = COMPLETE and out-of-place = TRUE refresh-method = FAST and out-of-place = FALSE Therefore, my main question is: <b>Are there any internal restrictions or conditions that must be met in order to perform FAST out-of-place refresh?</b> Because after reviewing the official documentation: https://docs.oracle.com/en/database/oracle/oracle-database/12.2/dwhsg/refreshing-materialized-views.html#GUID-51191C38-D52F-4A4D-B6FF-E631965AD69A I have not found anything that would prevent such a combination from succeeding in my case. It is even clearly stated that out-of-place should work with any refresh method, with FAST preferred first. Below I attach a script setting up and demonstrating the problem I am facing. Due to limited privileges in the LiveSQL tool, I recommend using the script on a local database <code> -- Clean Workspace DROP TABLE FOO; DROP MATERIALIZED VIEW FOO_MV; -- Create the Base Table FOO CREATE TABLE FOO ( product_id NUMBER PRIMARY KEY, product_name VARCHAR2(100), product_price NUMBER(10, 2) ); -- Insert Sample Data into FOO table INSERT INTO FOO (product_id, product_name, product_price) VALUES (1, 'Widget A', 19.99); INSERT INTO FOO (product_id, product_name, product_price) VALUES (2, 'Gizmo B', 29.99); COMMIT; -- Create Materialized View Log CREATE MATERIALIZED VIEW LOG ON FOO WITH ROWID, PRIMARY KEY, SEQUENCE; -- Create simple Materialized View CREATE MATERIALIZED VIEW FOO_MV BUILD DEFERRED REFRESH FAST ON DEMAND AS SELECT product_id, product_name, product_price FROM FOO; -- Drop PK on MV prebuilt table to meet out-of-place refresh requirements ALTER TABLE FOO_MV DROP PRIMARY KEY; -- Enable Advanced statistics collection EXEC DBMS_MVIEW_STATS.SET_MVREF_STATS_PARAMS ('FOO_MV','ADVANCED',30); -- Intial COMPLETE refresh of the Materialized View (out-of-place) EXEC DBMS_MVIEW.REFRESH('FOO_MV', out_of_place => true, atomic_refresh => false, method => 'C'); -- Insert incremental sample data into FOO table INSERT INTO FOO (product_id, product_name, product_price) VALUES (3, 'Gadget X', 49.99); INSERT INTO FOO (product_id, product_name, product_price) VALUES (4, 'Widget B', 24.99); COMMIT; -- Incremental FAST refresh of the Materialized View (HERE IS THE PROBLEM => despite the fact that out-of-place flag is true, the MV is refreshed in-place) EXEC DBMS_MVIEW.REFRESH('FOO_MV', out_of_place => true, atomic_refresh => false, method => 'F...
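
Since the script above already enables ADVANCED refresh statistics, a hedged first diagnostic step is to check which method and optimizations Oracle actually applied for each refresh (view and column names as I recall them from the 12.2 refresh-statistics feature; verify them in your release):

<code>
SELECT mv_name, refresh_id, refresh_method, refresh_optimizations
FROM   user_mvref_stats
ORDER  BY refresh_id;
</code>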
Categories: DBA Blogs

Can We Add New Language Features to PL/SQL?

Pete Finnigan - Wed, 2024-05-15 14:06
This is a thought experiment really, but it is possible to do with some effort and in a more targeted way. I have coded in PL/SQL for around 29 years and it is one of my favourite languages along with C....[Read More]

Posted by Pete On 08/05/24 At 11:20 AM

Categories: Security Blogs

Want To vs Have To

Michael Dinh - Tue, 2024-05-14 22:38

Of course, it’s better to have people act because they want to vs have to.

Like everything, it depends.

Going back to the team I led decades ago: when asked to perform a task, I would typically get the same set of volunteers. They were the ones I could always count on.

Unfortunately that’s not the case for my new team. It’s so quiet that one can hear a dog fart.

In a crisis situation, asking for volunteers is not ideal.

A lifeguard resuscitating a person would not shout, “Someone call 911!”

Instead, the lifeguard will need to select a specific person.

You! What’s your name. Do you have a phone? If yes, call 911. This create accountability vs having everyone calling 911.

Know the situations!

How to Fix the etcd Error: “etcdserver: mvcc: database space exceeded” in a Patroni cluster

Yann Neuhaus - Tue, 2024-05-14 07:50

If you’re encountering the etcd error “etcdserver: mvcc: database space exceeded,” it means your etcd database has exceeded its storage limit. This can occur due to a variety of reasons, such as a large number of revisions or excessive data accumulation. However, there’s no need to panic; this issue can be resolved effectively.

I know there are already plenty of blogs and posts about etcd, but 99% of them relate to Kubernetes, where etcd is managed in containers. In my case, the etcd cluster is installed on three SLES VMs alongside a Patroni cluster. Using etcd with Patroni enhances the reliability, scalability, and manageability of PostgreSQL clusters by providing a robust distributed coordination mechanism for high availability and configuration management. So, dear DBA, I hope this blog will help you! Below, I'll outline the steps to fix this error and prevent it from happening again.

Where did this issue happen

The first time I saw this issue was at a customer. They had a Patroni cluster with 3 nodes, including 2 PostgreSQL instances. They noticed a Patroni issue in their monitoring, so I was asked to have a look. In the end, the Patroni issue was caused by the etcd database being full; I found the error in the logs from the etcd service status.

Understanding the Error

Before diving into the solution, it's essential to understand what causes this error. etcd, a distributed key-value store, utilizes a Multi-Version Concurrency Control (MVCC) model to manage data. When the database space is exceeded, it indicates that there's too much data stored, potentially leading to performance issues or even service disruptions. By default, the database size is limited to 2 GB, which should be more than enough, but without knowing this limitation, you might encounter the same issue as me one day.

Pause Patroni Cluster Management

Utilize Patroni’s patronictl command to temporarily suspend cluster management, effectively halting automated failover processes and configuration adjustments while conducting the fix procedure. (https://patroni.readthedocs.io/en/latest/pause.html)

# patronictl pause --wait
'pause' request sent, waiting until it is recognized by all nodes
Success: cluster management is paused
Steps to Fix the Error

Update etcd Configuration

The first step is to adjust the etcd configuration file to optimize database space usage. Add the following parameters to your etcd configuration file on all nodes of the cluster.

max-wals: 2
auto-compaction-mode: periodic
auto-compaction-retention: "36h"

Below, I’ll provide you with some explanation concerning the three parameters we are adding to the configuration file:

  1. max-wals: 2:
    • This parameter specifies the maximum number of write-ahead logs (WALs) that etcd should retain before compacting them. WALs are temporary files used to store recent transactions before they are written to the main etcd database.
    • By limiting the number of WALs retained, you control the amount of temporary data stored, which helps in managing disk space usage. Keeping a low number of WALs ensures that disk space is not consumed excessively by temporary transaction logs.
  2. auto-compaction-mode: periodic:
    • This parameter determines the mode of automatic database compaction. When set to “periodic,” etcd automatically compacts its database periodically based on the configured retention period.
    • Database compaction removes redundant or obsolete data, reclaiming disk space and preventing the database from growing indefinitely. Periodic compaction ensures that old data is regularly cleaned up, maintaining optimal performance and disk space usage.
  3. auto-compaction-retention: “36h”:
    • This parameter defines the retention period for data before it becomes eligible for automatic compaction. It specifies the duration after which etcd should consider data for compaction.
    • In this example, “36h” represents a retention period of 36 hours. Any data older than 36 hours is eligible for compaction during the next periodic compaction cycle.
    • Adjusting the retention period allows you to control how long historical data is retained in the etcd database. Shorter retention periods result in more frequent compaction and potentially smaller database sizes, while longer retention periods preserve historical data for a longer duration.

Make sure to restart the etcd service on each node after updating the configuration. You can restart the nodes one by one and monitor the cluster's status between each restart.

Remove Excessive Data and Defragment the Database

Execute the following etcd commands to remove excessive data from the etcd database and defragment it. These commands need to be run on each etcd node; complete the whole procedure node by node. In our case, I suggest starting the process on our third node, where no PostgreSQL instance is running.

# Obtain the current revision
$ rev=$(ETCDCTL_API=3 etcdctl --endpoints=<your-endpoints> endpoint status --write-out="json" | grep -o '"revision":[0-9]*' | grep -o '[0-9].*')

# Compact all old revisions
$ ETCDCTL_API=3 etcdctl compact $rev

# Defragment the excessive space (execute for each etcd node)
$ ETCDCTL_API=3 etcdctl defrag --endpoints=<your-endpoints>

# Disarm alarm
$ ETCDCTL_API=3 etcdctl alarm disarm

# Check the cluster's status again
$ etcdctl endpoint status --cluster -w table
Additional information concerning the previous commands
  • If the $rev variable contains the same number three times, use only one instance of the number.
  • The first time you run the compact/defrag commands, you may receive an etcd error. To be on the safe side, run the command on the third node first. In case of an error, you may need to restart the etcd service on that node before continuing. According to a blog post, this potential error might only concern etcd versions 3.5.x: "There is a known issue that etcd might run into data inconsistency issue if it crashes in the middle of an online defragmentation operation using etcdctl or clientv3 API. All the existing v3.5 releases are affected, including 3.5.0 ~ 3.5.5. So please use etcdutl to offline perform defragmentation operation, but this requires taking each member offline one at a time. It means that you need to stop each etcd instance firstly, then perform defragmentation using etcdutl, start the instance at last. Please refer to the issue 1 in public statement." (https://etcd.io/blog/2023/how_to_debug_large_db_size_issue/#:~:text=Users%20can%20configure%20the%20quota,sufficient%20for%20most%20use%20cases)
  • Run the defrag command for each node and verify that the DB size has properly reduced each time.
Verification

After completing the steps above, ensure there are no more alarms and that the database size has decreased. Monitor the cluster's performance to confirm that the issue has been resolved successfully.

Resume Patroni Cluster Management

After confirming that the alarms have been cleared successfully, re-enable cluster management so that Patroni resumes its standard operations and exits maintenance mode.

# patronictl resume --wait
'resume' request sent, waiting until it is recognized by all nodes
Success: cluster management is resumed
Conclusion

To conclude, facing the “etcdserver: mvcc: database space exceeded” error can be concerning, but with the right approach, it’s entirely manageable. By updating the etcd configuration and executing appropriate commands to remove excess data and defragment the database, you can optimize your etcd cluster’s performance and ensure smooth operation. Remember to monitor the cluster regularly to catch any potential issues early on. With these steps, you can effectively resolve the etcd database space exceeded error and maintain a healthy etcd environment.

Useful Links

Find more information about etcd database size: How to debug large db size issue? https://etcd.io/blog/2023/how_to_debug_large_db_size_issue/#:~:text=Users%20can%20configure%20the%20quota,sufficient%20for%20most%20use%20cases.

Official etcd operations guide: https://etcd.io/docs/v3.5/op-guide/

The article How to Fix the etcd Error: "etcdserver: mvcc: database space exceeded" in a Patroni cluster first appeared on the dbi Blog.
