Partitioning is an extra-cost option of Oracle Enterprise Edition, and an expensive one.
Sometimes you just have one big table storing historical data in your database, and the option is simply too much to pay for that. In that case you can use the procedure below to “emulate” partitioning. I say “emulate” because you will not get all the performance benefits and functionality of real partitioning. This method is called manual partitioning.
Manual partitioning concept
The concept is really simple: instead of creating partitions, you create one table per range of values.
For example, suppose you have to keep 30 days of data, with several million log rows arriving every day.
If you keep everything in a single table, it will hold close to 100 million rows by the end of the month, which will lead to poor query performance.
What we will do instead is create one table per day of the month; each table will hold roughly 10 million rows, which gives much better performance.
Example
Let’s create the tables:
CREATE TABLE LOG_TABLE_1
(
  DATE_LOG    DATE,
  SERVER_NAME VARCHAR2(100),
  LOG_DETAILS VARCHAR2(4000)
);
create table LOG_TABLE_2 as (select * from LOG_TABLE_1 where 1=0);
create table LOG_TABLE_3 as (select * from LOG_TABLE_1 where 1=0);
create table LOG_TABLE_4 as (select * from LOG_TABLE_1 where 1=0);
create table LOG_TABLE_5 as (select * from LOG_TABLE_1 where 1=0);
create table LOG_TABLE_6 as (select * from LOG_TABLE_1 where 1=0);
create table LOG_TABLE_7 as (select * from LOG_TABLE_1 where 1=0);
create table LOG_TABLE_8 as (select * from LOG_TABLE_1 where 1=0);
create table LOG_TABLE_9 as (select * from LOG_TABLE_1 where 1=0);
create table LOG_TABLE_10 as (select * from LOG_TABLE_1 where 1=0);
create table LOG_TABLE_11 as (select * from LOG_TABLE_1 where 1=0);
create table LOG_TABLE_12 as (select * from LOG_TABLE_1 where 1=0);
create table LOG_TABLE_13 as (select * from LOG_TABLE_1 where 1=0);
create table LOG_TABLE_14 as (select * from LOG_TABLE_1 where 1=0);
create table LOG_TABLE_15 as (select * from LOG_TABLE_1 where 1=0);
create table LOG_TABLE_16 as (select * from LOG_TABLE_1 where 1=0);
create table LOG_TABLE_17 as (select * from LOG_TABLE_1 where 1=0);
create table LOG_TABLE_18 as (select * from LOG_TABLE_1 where 1=0);
create table LOG_TABLE_19 as (select * from LOG_TABLE_1 where 1=0);
create table LOG_TABLE_20 as (select * from LOG_TABLE_1 where 1=0);
create table LOG_TABLE_21 as (select * from LOG_TABLE_1 where 1=0);
create table LOG_TABLE_22 as (select * from LOG_TABLE_1 where 1=0);
create table LOG_TABLE_23 as (select * from LOG_TABLE_1 where 1=0);
create table LOG_TABLE_24 as (select * from LOG_TABLE_1 where 1=0);
create table LOG_TABLE_25 as (select * from LOG_TABLE_1 where 1=0);
create table LOG_TABLE_26 as (select * from LOG_TABLE_1 where 1=0);
create table LOG_TABLE_27 as (select * from LOG_TABLE_1 where 1=0);
create table LOG_TABLE_28 as (select * from LOG_TABLE_1 where 1=0);
create table LOG_TABLE_29 as (select * from LOG_TABLE_1 where 1=0);
create table LOG_TABLE_30 as (select * from LOG_TABLE_1 where 1=0);
Now we create an index on the date column of each table, because our queries will filter on that column:
create index IDX_LOG_TABLE_1 on LOG_TABLE_1 (DATE_LOG);
create index IDX_LOG_TABLE_2 on LOG_TABLE_2 (DATE_LOG);
create index IDX_LOG_TABLE_3 on LOG_TABLE_3 (DATE_LOG);
create index IDX_LOG_TABLE_4 on LOG_TABLE_4 (DATE_LOG);
create index IDX_LOG_TABLE_5 on LOG_TABLE_5 (DATE_LOG);
create index IDX_LOG_TABLE_6 on LOG_TABLE_6 (DATE_LOG);
create index IDX_LOG_TABLE_7 on LOG_TABLE_7 (DATE_LOG);
create index IDX_LOG_TABLE_8 on LOG_TABLE_8 (DATE_LOG);
create index IDX_LOG_TABLE_9 on LOG_TABLE_9 (DATE_LOG);
create index IDX_LOG_TABLE_10 on LOG_TABLE_10 (DATE_LOG);
create index IDX_LOG_TABLE_11 on LOG_TABLE_11 (DATE_LOG);
create index IDX_LOG_TABLE_12 on LOG_TABLE_12 (DATE_LOG);
create index IDX_LOG_TABLE_13 on LOG_TABLE_13 (DATE_LOG);
create index IDX_LOG_TABLE_14 on LOG_TABLE_14 (DATE_LOG);
create index IDX_LOG_TABLE_15 on LOG_TABLE_15 (DATE_LOG);
create index IDX_LOG_TABLE_16 on LOG_TABLE_16 (DATE_LOG);
create index IDX_LOG_TABLE_17 on LOG_TABLE_17 (DATE_LOG);
create index IDX_LOG_TABLE_18 on LOG_TABLE_18 (DATE_LOG);
create index IDX_LOG_TABLE_19 on LOG_TABLE_19 (DATE_LOG);
create index IDX_LOG_TABLE_20 on LOG_TABLE_20 (DATE_LOG);
create index IDX_LOG_TABLE_21 on LOG_TABLE_21 (DATE_LOG);
create index IDX_LOG_TABLE_22 on LOG_TABLE_22 (DATE_LOG);
create index IDX_LOG_TABLE_23 on LOG_TABLE_23 (DATE_LOG);
create index IDX_LOG_TABLE_24 on LOG_TABLE_24 (DATE_LOG);
create index IDX_LOG_TABLE_25 on LOG_TABLE_25 (DATE_LOG);
create index IDX_LOG_TABLE_26 on LOG_TABLE_26 (DATE_LOG);
create index IDX_LOG_TABLE_27 on LOG_TABLE_27 (DATE_LOG);
create index IDX_LOG_TABLE_28 on LOG_TABLE_28 (DATE_LOG);
create index IDX_LOG_TABLE_29 on LOG_TABLE_29 (DATE_LOG);
create index IDX_LOG_TABLE_30 on LOG_TABLE_30 (DATE_LOG);
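Writing these sixty statements by hand quickly gets tedious. As a side note, you could generate them with a small anonymous PL/SQL block; the following is only a sketch, assuming LOG_TABLE_1 has already been created as above and that you run it in the owning schema:

-- Sketch: generate LOG_TABLE_2..LOG_TABLE_30 and the 30 indexes in a loop
-- (assumes LOG_TABLE_1 already exists and none of the other objects do).
BEGIN
  FOR i IN 2 .. 30 LOOP
    EXECUTE IMMEDIATE 'create table LOG_TABLE_' || i ||
                      ' as (select * from LOG_TABLE_1 where 1=0)';
  END LOOP;
  FOR i IN 1 .. 30 LOOP
    EXECUTE IMMEDIATE 'create index IDX_LOG_TABLE_' || i ||
                      ' on LOG_TABLE_' || i || ' (DATE_LOG)';
  END LOOP;
END;
/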
Now, to be able to query all our tables at once, we create a view which performs a UNION ALL of all of them:
create or replace view LOG as
select * from LOG_TABLE_1 union all
select * from LOG_TABLE_2 union all
select * from LOG_TABLE_3 union all
select * from LOG_TABLE_4 union all
select * from LOG_TABLE_5 union all
select * from LOG_TABLE_6 union all
select * from LOG_TABLE_7 union all
select * from LOG_TABLE_8 union all
select * from LOG_TABLE_9 union all
select * from LOG_TABLE_10 union all
select * from LOG_TABLE_11 union all
select * from LOG_TABLE_12 union all
select * from LOG_TABLE_13 union all
select * from LOG_TABLE_14 union all
select * from LOG_TABLE_15 union all
select * from LOG_TABLE_16 union all
select * from LOG_TABLE_17 union all
select * from LOG_TABLE_18 union all
select * from LOG_TABLE_19 union all
select * from LOG_TABLE_20 union all
select * from LOG_TABLE_21 union all
select * from LOG_TABLE_22 union all
select * from LOG_TABLE_23 union all
select * from LOG_TABLE_24 union all
select * from LOG_TABLE_25 union all
select * from LOG_TABLE_26 union all
select * from LOG_TABLE_27 union all
select * from LOG_TABLE_28 union all
select * from LOG_TABLE_29 union all
select * from LOG_TABLE_30;
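The view text can also be generated rather than maintained by hand. Again, just a sketch, assuming the 30 tables already exist:

-- Sketch: build the UNION ALL view dynamically so the number of daily tables
-- only has to be changed in one place.
DECLARE
  v_sql VARCHAR2(4000) := 'create or replace view LOG as ';
BEGIN
  FOR i IN 1 .. 30 LOOP
    v_sql := v_sql || 'select * from LOG_TABLE_' || i;
    IF i < 30 THEN
      v_sql := v_sql || ' union all ';
    END IF;
  END LOOP;
  EXECUTE IMMEDIATE v_sql;
END;
/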
Now you may ask: OK, this is nice, but how do I insert values into my tables? Good question: since this is not a simple view (it is built on a UNION ALL), you cannot insert through it directly. What you need is a procedure that inserts the data into the right table for you. Here is an example:
create or replace PROCEDURE insert_log_data (DATE_LOG IN DATE, LOG_DETAILS IN VARCHAR2)
IS
  v_sql_stmt varchar2 (4000) := '';
  v_table    varchar2 (20)   := '';
BEGIN
  CASE
    WHEN DATE_LOG >= trunc(SYSDATE)                                    THEN v_table := 'LOG_TABLE_1';
    WHEN DATE_LOG BETWEEN trunc(SYSDATE) - 1  AND trunc(SYSDATE)       THEN v_table := 'LOG_TABLE_2';
    WHEN DATE_LOG BETWEEN trunc(SYSDATE) - 2  AND trunc(SYSDATE) - 1   THEN v_table := 'LOG_TABLE_3';
    WHEN DATE_LOG BETWEEN trunc(SYSDATE) - 3  AND trunc(SYSDATE) - 2   THEN v_table := 'LOG_TABLE_4';
    WHEN DATE_LOG BETWEEN trunc(SYSDATE) - 4  AND trunc(SYSDATE) - 3   THEN v_table := 'LOG_TABLE_5';
    WHEN DATE_LOG BETWEEN trunc(SYSDATE) - 5  AND trunc(SYSDATE) - 4   THEN v_table := 'LOG_TABLE_6';
    WHEN DATE_LOG BETWEEN trunc(SYSDATE) - 6  AND trunc(SYSDATE) - 5   THEN v_table := 'LOG_TABLE_7';
    WHEN DATE_LOG BETWEEN trunc(SYSDATE) - 7  AND trunc(SYSDATE) - 6   THEN v_table := 'LOG_TABLE_8';
    WHEN DATE_LOG BETWEEN trunc(SYSDATE) - 8  AND trunc(SYSDATE) - 7   THEN v_table := 'LOG_TABLE_9';
    WHEN DATE_LOG BETWEEN trunc(SYSDATE) - 9  AND trunc(SYSDATE) - 8   THEN v_table := 'LOG_TABLE_10';
    WHEN DATE_LOG BETWEEN trunc(SYSDATE) - 10 AND trunc(SYSDATE) - 9   THEN v_table := 'LOG_TABLE_11';
    WHEN DATE_LOG BETWEEN trunc(SYSDATE) - 11 AND trunc(SYSDATE) - 10  THEN v_table := 'LOG_TABLE_12';
    WHEN DATE_LOG BETWEEN trunc(SYSDATE) - 12 AND trunc(SYSDATE) - 11  THEN v_table := 'LOG_TABLE_13';
    WHEN DATE_LOG BETWEEN trunc(SYSDATE) - 13 AND trunc(SYSDATE) - 12  THEN v_table := 'LOG_TABLE_14';
    WHEN DATE_LOG BETWEEN trunc(SYSDATE) - 14 AND trunc(SYSDATE) - 13  THEN v_table := 'LOG_TABLE_15';
    WHEN DATE_LOG BETWEEN trunc(SYSDATE) - 15 AND trunc(SYSDATE) - 14  THEN v_table := 'LOG_TABLE_16';
    WHEN DATE_LOG BETWEEN trunc(SYSDATE) - 16 AND trunc(SYSDATE) - 15  THEN v_table := 'LOG_TABLE_17';
    WHEN DATE_LOG BETWEEN trunc(SYSDATE) - 17 AND trunc(SYSDATE) - 16  THEN v_table := 'LOG_TABLE_18';
    WHEN DATE_LOG BETWEEN trunc(SYSDATE) - 18 AND trunc(SYSDATE) - 17  THEN v_table := 'LOG_TABLE_19';
    WHEN DATE_LOG BETWEEN trunc(SYSDATE) - 19 AND trunc(SYSDATE) - 18  THEN v_table := 'LOG_TABLE_20';
    WHEN DATE_LOG BETWEEN trunc(SYSDATE) - 20 AND trunc(SYSDATE) - 19  THEN v_table := 'LOG_TABLE_21';
    WHEN DATE_LOG BETWEEN trunc(SYSDATE) - 21 AND trunc(SYSDATE) - 20  THEN v_table := 'LOG_TABLE_22';
    WHEN DATE_LOG BETWEEN trunc(SYSDATE) - 22 AND trunc(SYSDATE) - 21  THEN v_table := 'LOG_TABLE_23';
    WHEN DATE_LOG BETWEEN trunc(SYSDATE) - 23 AND trunc(SYSDATE) - 22  THEN v_table := 'LOG_TABLE_24';
    WHEN DATE_LOG BETWEEN trunc(SYSDATE) - 24 AND trunc(SYSDATE) - 23  THEN v_table := 'LOG_TABLE_25';
    WHEN DATE_LOG BETWEEN trunc(SYSDATE) - 25 AND trunc(SYSDATE) - 24  THEN v_table := 'LOG_TABLE_26';
    WHEN DATE_LOG BETWEEN trunc(SYSDATE) - 26 AND trunc(SYSDATE) - 25  THEN v_table := 'LOG_TABLE_27';
    WHEN DATE_LOG BETWEEN trunc(SYSDATE) - 27 AND trunc(SYSDATE) - 26  THEN v_table := 'LOG_TABLE_28';
    WHEN DATE_LOG BETWEEN trunc(SYSDATE) - 28 AND trunc(SYSDATE) - 27  THEN v_table := 'LOG_TABLE_29';
    WHEN DATE_LOG BETWEEN trunc(SYSDATE) - 29 AND trunc(SYSDATE) - 28  THEN v_table := 'LOG_TABLE_30';
    ELSE RAISE_APPLICATION_ERROR(-20000,'invalid date, date should be between now and 30 days before');
  END CASE;
  v_sql_stmt := 'insert into ' || v_table || ' values(:1,:2)';
  dbms_output.put_line(v_sql_stmt);
  EXECUTE IMMEDIATE v_sql_stmt USING DATE_LOG, LOG_DETAILS;
EXCEPTION
  WHEN OTHERS THEN
    DBMS_OUTPUT.PUT_LINE('Error code ' || SQLCODE || ': ' || SQLERRM);
END;
/
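The big CASE is only there to map a date to a table suffix, so the suffix can also be computed arithmetically if you prefer. The variant below is only a sketch: it keeps the same mapping (today into LOG_TABLE_1, yesterday into LOG_TABLE_2, and so on) but leaves out the DBMS_OUTPUT logging and the WHEN OTHERS handler of the original.

-- Sketch: same mapping as the CASE above, computed from the day offset.
create or replace PROCEDURE insert_log_data (DATE_LOG IN DATE, LOG_DETAILS IN VARCHAR2)
IS
  v_offset PLS_INTEGER;
BEGIN
  -- 0 = today (or a future date), 1 = yesterday, ... 29 = 29 days ago
  v_offset := GREATEST(trunc(SYSDATE) - trunc(DATE_LOG), 0);
  IF v_offset > 29 THEN
    RAISE_APPLICATION_ERROR(-20000,'invalid date, date should be between now and 30 days before');
  END IF;
  EXECUTE IMMEDIATE 'insert into LOG_TABLE_' || (v_offset + 1) || ' values(:1,:2)'
    USING DATE_LOG, LOG_DETAILS;
END;
/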
Once the procedure is created, we perform an insert by calling it instead of issuing an INSERT statement:
EXEC INSERT_LOG_DATA(sysdate-2,'just an example');
This call should have inserted the row into LOG_TABLE_3, let's see:
SQL> set lines 150
SQL> col LOG_DETAILS for a50
SQL> select * from log_table_3;

DATE_LOG LOG_DETAILS
-------- --------------------------------------------------
14-01-29 just an example
Now, what happens if we query the view? How will Oracle deal with all our tables?
SQL> set autotrace traceonly;
SQL> select log_details from ashrep.log where date_log between sysdate-3 and sysdate-1;

Execution Plan
----------------------------------------------------------
Plan hash value: 1906144431

----------------------------------------------------------------------------------------------------
| Id  | Operation                        | Name             | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                 |                  |     1 |  2011 |     0   (0)| 00:00:01 |
|*  1 |  FILTER                          |                  |       |       |            |          |
|   2 |   VIEW                           | LOG              |     1 |  2011 |     0   (0)| 00:00:01 |
|   3 |    UNION-ALL                     |                  |       |       |            |          |
|*  4 |     FILTER                       |                  |       |       |            |          |
|   5 |      TABLE ACCESS BY INDEX ROWID | LOG_TABLE_1      |     1 |  2011 |     0   (0)| 00:00:01 |
|*  6 |       INDEX RANGE SCAN           | IDX_LOG_TABLE_1  |     1 |       |     0   (0)| 00:00:01 |
|*  7 |     FILTER                       |                  |       |       |            |          |
|   8 |      TABLE ACCESS BY INDEX ROWID | LOG_TABLE_2      |     1 |  2011 |     0   (0)| 00:00:01 |
|*  9 |       INDEX RANGE SCAN           | IDX_LOG_TABLE_2  |     1 |       |     0   (0)| 00:00:01 |
|* 10 |     FILTER                       |                  |       |       |            |          |
|  11 |      TABLE ACCESS BY INDEX ROWID | LOG_TABLE_3      |     1 |  2011 |     0   (0)| 00:00:01 |
|* 12 |       INDEX RANGE SCAN           | IDX_LOG_TABLE_3  |     1 |       |     0   (0)| 00:00:01 |
|* 13 |     FILTER                       |                  |       |       |            |          |
|  14 |      TABLE ACCESS BY INDEX ROWID | LOG_TABLE_4      |     1 |  2011 |     0   (0)| 00:00:01 |
|* 15 |       INDEX RANGE SCAN           | IDX_LOG_TABLE_4  |     1 |       |     0   (0)| 00:00:01 |
|* 16 |     FILTER                       |                  |       |       |            |          |
|  17 |      TABLE ACCESS BY INDEX ROWID | LOG_TABLE_5      |     1 |  2011 |     0   (0)| 00:00:01 |
|* 18 |       INDEX RANGE SCAN           | IDX_LOG_TABLE_5  |     1 |       |     0   (0)| 00:00:01 |
|* 19 |     FILTER                       |                  |       |       |            |          |
|  20 |      TABLE ACCESS BY INDEX ROWID | LOG_TABLE_6      |     1 |  2011 |     0   (0)| 00:00:01 |
|* 21 |       INDEX RANGE SCAN           | IDX_LOG_TABLE_6  |     1 |       |     0   (0)| 00:00:01 |
|* 22 |     FILTER                       |                  |       |       |            |          |
|  23 |      TABLE ACCESS BY INDEX ROWID | LOG_TABLE_7      |     1 |  2011 |     0   (0)| 00:00:01 |
|* 24 |       INDEX RANGE SCAN           | IDX_LOG_TABLE_7  |     1 |       |     0   (0)| 00:00:01 |
|* 25 |     FILTER                       |                  |       |       |            |          |
|  26 |      TABLE ACCESS BY INDEX ROWID | LOG_TABLE_8      |     1 |  2011 |     0   (0)| 00:00:01 |
|* 27 |       INDEX RANGE SCAN           | IDX_LOG_TABLE_8  |     1 |       |     0   (0)| 00:00:01 |
|* 28 |     FILTER                       |                  |       |       |            |          |
|  29 |      TABLE ACCESS BY INDEX ROWID | LOG_TABLE_9      |     1 |  2011 |     0   (0)| 00:00:01 |
|* 30 |       INDEX RANGE SCAN           | IDX_LOG_TABLE_9  |     1 |       |     0   (0)| 00:00:01 |
|* 31 |     FILTER                       |                  |       |       |            |          |
|  32 |      TABLE ACCESS BY INDEX ROWID | LOG_TABLE_10     |     1 |  2011 |     0   (0)| 00:00:01 |
|* 33 |       INDEX RANGE SCAN           | IDX_LOG_TABLE_10 |     1 |       |     0   (0)| 00:00:01 |
|* 34 |     FILTER                       |                  |       |       |            |          |
|  35 |      TABLE ACCESS BY INDEX ROWID | LOG_TABLE_11     |     1 |  2011 |     0   (0)| 00:00:01 |
|* 36 |       INDEX RANGE SCAN           | IDX_LOG_TABLE_11 |     1 |       |     0   (0)| 00:00:01 |
|* 37 |     FILTER                       |                  |       |       |            |          |
|  38 |      TABLE ACCESS BY INDEX ROWID | LOG_TABLE_12     |     1 |  2011 |     0   (0)| 00:00:01 |
|* 39 |       INDEX RANGE SCAN           | IDX_LOG_TABLE_12 |     1 |       |     0   (0)| 00:00:01 |
|* 40 |     FILTER                       |                  |       |       |            |          |
|  41 |      TABLE ACCESS BY INDEX ROWID | LOG_TABLE_13     |     1 |  2011 |     0   (0)| 00:00:01 |
|* 42 |       INDEX RANGE SCAN           | IDX_LOG_TABLE_13 |     1 |       |     0   (0)| 00:00:01 |
|* 43 |     FILTER                       |                  |       |       |            |          |
|  44 |      TABLE ACCESS BY INDEX ROWID | LOG_TABLE_14     |     1 |  2011 |     0   (0)| 00:00:01 |
|* 45 |       INDEX RANGE SCAN           | IDX_LOG_TABLE_14 |     1 |       |     0   (0)| 00:00:01 |
|* 46 |     FILTER                       |                  |       |       |            |          |
|  47 |      TABLE ACCESS BY INDEX ROWID | LOG_TABLE_15     |     1 |  2011 |     0   (0)| 00:00:01 |
|* 48 |       INDEX RANGE SCAN           | IDX_LOG_TABLE_15 |     1 |       |     0   (0)| 00:00:01 |
|* 49 |     FILTER                       |                  |       |       |            |          |
|  50 |      TABLE ACCESS BY INDEX ROWID | LOG_TABLE_16     |     1 |  2011 |     0   (0)| 00:00:01 |
|* 51 |       INDEX RANGE SCAN           | IDX_LOG_TABLE_16 |     1 |       |     0   (0)| 00:00:01 |
|* 52 |     FILTER                       |                  |       |       |            |          |
|  53 |      TABLE ACCESS BY INDEX ROWID | LOG_TABLE_17     |     1 |  2011 |     0   (0)| 00:00:01 |
|* 54 |       INDEX RANGE SCAN           | IDX_LOG_TABLE_17 |     1 |       |     0   (0)| 00:00:01 |
|* 55 |     FILTER                       |                  |       |       |            |          |
|  56 |      TABLE ACCESS BY INDEX ROWID | LOG_TABLE_18     |     1 |  2011 |     0   (0)| 00:00:01 |
|* 57 |       INDEX RANGE SCAN           | IDX_LOG_TABLE_18 |     1 |       |     0   (0)| 00:00:01 |
|* 58 |     FILTER                       |                  |       |       |            |          |
|  59 |      TABLE ACCESS BY INDEX ROWID | LOG_TABLE_19     |     1 |  2011 |     0   (0)| 00:00:01 |
|* 60 |       INDEX RANGE SCAN           | IDX_LOG_TABLE_19 |     1 |       |     0   (0)| 00:00:01 |
|* 61 |     FILTER                       |                  |       |       |            |          |
|  62 |      TABLE ACCESS BY INDEX ROWID | LOG_TABLE_20     |     1 |  2011 |     0   (0)| 00:00:01 |
|* 63 |       INDEX RANGE SCAN           | IDX_LOG_TABLE_20 |     1 |       |     0   (0)| 00:00:01 |
|* 64 |     FILTER                       |                  |       |       |            |          |
|  65 |      TABLE ACCESS BY INDEX ROWID | LOG_TABLE_21     |     1 |  2011 |     0   (0)| 00:00:01 |
|* 66 |       INDEX RANGE SCAN           | IDX_LOG_TABLE_21 |     1 |       |     0   (0)| 00:00:01 |
|* 67 |     FILTER                       |                  |       |       |            |          |
|  68 |      TABLE ACCESS BY INDEX ROWID | LOG_TABLE_22     |     1 |  2011 |     0   (0)| 00:00:01 |
|* 69 |       INDEX RANGE SCAN           | IDX_LOG_TABLE_22 |     1 |       |     0   (0)| 00:00:01 |
|* 70 |     FILTER                       |                  |       |       |            |          |
|  71 |      TABLE ACCESS BY INDEX ROWID | LOG_TABLE_23     |     1 |  2011 |     0   (0)| 00:00:01 |
|* 72 |       INDEX RANGE SCAN           | IDX_LOG_TABLE_23 |     1 |       |     0   (0)| 00:00:01 |
|* 73 |     FILTER                       |                  |       |       |            |          |
|  74 |      TABLE ACCESS BY INDEX ROWID | LOG_TABLE_24     |     1 |  2011 |     0   (0)| 00:00:01 |
|* 75 |       INDEX RANGE SCAN           | IDX_LOG_TABLE_24 |     1 |       |     0   (0)| 00:00:01 |
|* 76 |     FILTER                       |                  |       |       |            |          |
|  77 |      TABLE ACCESS BY INDEX ROWID | LOG_TABLE_25     |     1 |  2011 |     0   (0)| 00:00:01 |
|* 78 |       INDEX RANGE SCAN           | IDX_LOG_TABLE_25 |     1 |       |     0   (0)| 00:00:01 |
|* 79 |     FILTER                       |                  |       |       |            |          |
|  80 |      TABLE ACCESS BY INDEX ROWID | LOG_TABLE_26     |     1 |  2011 |     0   (0)| 00:00:01 |
|* 81 |       INDEX RANGE SCAN           | IDX_LOG_TABLE_26 |     1 |       |     0   (0)| 00:00:01 |
|* 82 |     FILTER                       |                  |       |       |            |          |
|  83 |      TABLE ACCESS BY INDEX ROWID | LOG_TABLE_27     |     1 |  2011 |     0   (0)| 00:00:01 |
|* 84 |       INDEX RANGE SCAN           | IDX_LOG_TABLE_27 |     1 |       |     0   (0)| 00:00:01 |
|* 85 |     FILTER                       |                  |       |       |            |          |
|  86 |      TABLE ACCESS BY INDEX ROWID | LOG_TABLE_28     |     1 |  2011 |     0   (0)| 00:00:01 |
|* 87 |       INDEX RANGE SCAN           | IDX_LOG_TABLE_28 |     1 |       |     0   (0)| 00:00:01 |
|* 88 |     FILTER                       |                  |       |       |            |          |
|  89 |      TABLE ACCESS BY INDEX ROWID | LOG_TABLE_29     |     1 |  2011 |     0   (0)| 00:00:01 |
|* 90 |       INDEX RANGE SCAN           | IDX_LOG_TABLE_29 |     1 |       |     0   (0)| 00:00:01 |
|* 91 |     FILTER                       |                  |       |       |            |          |
|  92 |      TABLE ACCESS BY INDEX ROWID | LOG_TABLE_30     |     1 |  2011 |     0   (0)| 00:00:01 |
|* 93 |       INDEX RANGE SCAN           | IDX_LOG_TABLE_30 |     1 |       |     0   (0)| 00:00:01 |
----------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(SYSDATE@!-3<=SYSDATE@!-1)
   4 - filter(SYSDATE@!-1>=SYSDATE@!-3)
   6 - access("DATE_LOG">=SYSDATE@!-3 AND "DATE_LOG"<=SYSDATE@!-1)
   7 - filter(SYSDATE@!-1>=SYSDATE@!-3)
   9 - access("DATE_LOG">=SYSDATE@!-3 AND "DATE_LOG"<=SYSDATE@!-1)
  10 - filter(SYSDATE@!-1>=SYSDATE@!-3)
  12 - access("DATE_LOG">=SYSDATE@!-3 AND "DATE_LOG"<=SYSDATE@!-1)
  13 - filter(SYSDATE@!-1>=SYSDATE@!-3)
  15 - access("DATE_LOG">=SYSDATE@!-3 AND "DATE_LOG"<=SYSDATE@!-1)
  16 - filter(SYSDATE@!-1>=SYSDATE@!-3)
  18 - access("DATE_LOG">=SYSDATE@!-3 AND "DATE_LOG"<=SYSDATE@!-1)
  19 - filter(SYSDATE@!-1>=SYSDATE@!-3)
  21 - access("DATE_LOG">=SYSDATE@!-3 AND "DATE_LOG"<=SYSDATE@!-1)
  22 - filter(SYSDATE@!-1>=SYSDATE@!-3)
  24 - access("DATE_LOG">=SYSDATE@!-3 AND "DATE_LOG"<=SYSDATE@!-1)
  25 - filter(SYSDATE@!-1>=SYSDATE@!-3)
  27 - access("DATE_LOG">=SYSDATE@!-3 AND "DATE_LOG"<=SYSDATE@!-1)
  28 - filter(SYSDATE@!-1>=SYSDATE@!-3)
  30 - access("DATE_LOG">=SYSDATE@!-3 AND "DATE_LOG"<=SYSDATE@!-1)
  31 - filter(SYSDATE@!-1>=SYSDATE@!-3)
  33 - access("DATE_LOG">=SYSDATE@!-3 AND "DATE_LOG"<=SYSDATE@!-1)
  34 - filter(SYSDATE@!-1>=SYSDATE@!-3)
  36 - access("DATE_LOG">=SYSDATE@!-3 AND "DATE_LOG"<=SYSDATE@!-1)
  37 - filter(SYSDATE@!-1>=SYSDATE@!-3)
  39 - access("DATE_LOG">=SYSDATE@!-3 AND "DATE_LOG"<=SYSDATE@!-1)
  40 - filter(SYSDATE@!-1>=SYSDATE@!-3)
  42 - access("DATE_LOG">=SYSDATE@!-3 AND "DATE_LOG"<=SYSDATE@!-1)
  43 - filter(SYSDATE@!-1>=SYSDATE@!-3)
  45 - access("DATE_LOG">=SYSDATE@!-3 AND "DATE_LOG"<=SYSDATE@!-1)
  46 - filter(SYSDATE@!-1>=SYSDATE@!-3)
  48 - access("DATE_LOG">=SYSDATE@!-3 AND "DATE_LOG"<=SYSDATE@!-1)
  49 - filter(SYSDATE@!-1>=SYSDATE@!-3)
  51 - access("DATE_LOG">=SYSDATE@!-3 AND "DATE_LOG"<=SYSDATE@!-1)
  52 - filter(SYSDATE@!-1>=SYSDATE@!-3)
  54 - access("DATE_LOG">=SYSDATE@!-3 AND "DATE_LOG"<=SYSDATE@!-1)
  55 - filter(SYSDATE@!-1>=SYSDATE@!-3)
  57 - access("DATE_LOG">=SYSDATE@!-3 AND "DATE_LOG"<=SYSDATE@!-1)
  58 - filter(SYSDATE@!-1>=SYSDATE@!-3)
  60 - access("DATE_LOG">=SYSDATE@!-3 AND "DATE_LOG"<=SYSDATE@!-1)
  61 - filter(SYSDATE@!-1>=SYSDATE@!-3)
  63 - access("DATE_LOG">=SYSDATE@!-3 AND "DATE_LOG"<=SYSDATE@!-1)
  64 - filter(SYSDATE@!-1>=SYSDATE@!-3)
  66 - access("DATE_LOG">=SYSDATE@!-3 AND "DATE_LOG"<=SYSDATE@!-1)
  67 - filter(SYSDATE@!-1>=SYSDATE@!-3)
  69 - access("DATE_LOG">=SYSDATE@!-3 AND "DATE_LOG"<=SYSDATE@!-1)
  70 - filter(SYSDATE@!-1>=SYSDATE@!-3)
  72 - access("DATE_LOG">=SYSDATE@!-3 AND "DATE_LOG"<=SYSDATE@!-1)
  73 - filter(SYSDATE@!-1>=SYSDATE@!-3)
  75 - access("DATE_LOG">=SYSDATE@!-3 AND "DATE_LOG"<=SYSDATE@!-1)
  76 - filter(SYSDATE@!-1>=SYSDATE@!-3)
  78 - access("DATE_LOG">=SYSDATE@!-3 AND "DATE_LOG"<=SYSDATE@!-1)
  79 - filter(SYSDATE@!-1>=SYSDATE@!-3)
  81 - access("DATE_LOG">=SYSDATE@!-3 AND "DATE_LOG"<=SYSDATE@!-1)
  82 - filter(SYSDATE@!-1>=SYSDATE@!-3)
  84 - access("DATE_LOG">=SYSDATE@!-3 AND "DATE_LOG"<=SYSDATE@!-1)
  85 - filter(SYSDATE@!-1>=SYSDATE@!-3)
  87 - access("DATE_LOG">=SYSDATE@!-3 AND "DATE_LOG"<=SYSDATE@!-1)
  88 - filter(SYSDATE@!-1>=SYSDATE@!-3)
  90 - access("DATE_LOG">=SYSDATE@!-3 AND "DATE_LOG"<=SYSDATE@!-1)
  91 - filter(SYSDATE@!-1>=SYSDATE@!-3)
  93 - access("DATE_LOG">=SYSDATE@!-3 AND "DATE_LOG"<=SYSDATE@!-1)

Note
-----
   - dynamic sampling used for this statement (level=2)


Statistics
----------------------------------------------------------
         12  recursive calls
          0  db block gets
         70  consistent gets
          0  physical reads
          0  redo size
        445  bytes sent via SQL*Net to client
        419  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed
OK, it works. But since we are not using the Partitioning option, Oracle has no partition metadata telling it in which "partition" (table) our data lives, and there is no equivalent of a global index spanning the 30 tables: every table shows up in the execution plan.
Still, this can be a good solution if you cannot afford the cost of the Partitioning option, and it can be used on every Oracle version, from Oracle 7 to Oracle 12c.
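One operational detail the post leaves to the reader: because insert_log_data maps dates relative to SYSDATE, the tables have to be rotated once a day so that LOG_TABLE_1 always represents "today" and the oldest day is purged. A minimal sketch of such a nightly job is shown below (not part of the original post, and assuming it runs during a quiet period; your housekeeping may differ):

-- Sketch of a nightly rotation job: drop the oldest table, shift every
-- remaining table and index down by one day, then recreate an empty
-- LOG_TABLE_1 for the new day.
BEGIN
  EXECUTE IMMEDIATE 'drop table LOG_TABLE_30';
  FOR i IN REVERSE 1 .. 29 LOOP
    EXECUTE IMMEDIATE 'alter table LOG_TABLE_' || i || ' rename to LOG_TABLE_' || (i + 1);
    EXECUTE IMMEDIATE 'alter index IDX_LOG_TABLE_' || i || ' rename to IDX_LOG_TABLE_' || (i + 1);
  END LOOP;
  EXECUTE IMMEDIATE 'create table LOG_TABLE_1 as (select * from LOG_TABLE_2 where 1=0)';
  EXECUTE IMMEDIATE 'create index IDX_LOG_TABLE_1 on LOG_TABLE_1 (DATE_LOG)';
END;
/

The LOG view is invalidated by the drop, but it recompiles automatically on its next use once all 30 table names exist again.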
See you soon !
great idea….i will try this..!
Thank you Kishore. This idea is not mine, it was the way to do partitioning before the option existed 🙂
It is still a good approach if you can't afford the Partitioning option.