Thursday, September 27, 2018

Oracle 18c - From a Developer's Perspective


Oracle 18c:
1. Why 18c from 12c?
2. 18c Key Features from a Developer's Standpoint
   -- PL/SQL & SQL Related Enhancements
      * Private Temporary Tables
      * Approximate Top-N Query Processing
      * Inline External Tables
      * Polymorphic Table Functions
      * PL/SQL Qualified Expressions
      * Not Persistable Types
      * ROUND_TIES_TO_EVEN Function
   -- Query Execution & Tuning Related Enhancements
      * Manual Termination of Runaway Queries
      * PL/SQL DBMS_HPROF Procedure Improvements
      * OCI Call Time Out Attribute
      * Spatial Improvements
      * Graph Improvements
   -- JSON Related Enhancements
      * SODA for PL/SQL
      * Support Indexing of JSON Key Names Longer than 64 Characters
      * New SQL Function TO_UTC_TIMESTAMP_TZ
      * JSON_EQUAL
      * TREAT(... AS JSON)
      * JSON_TABLE Materialized Views: ON STATEMENT Support
      * JSON_DATAGUIDE Function Enhancements - Formatting & Hierarchy
      * {USER|ALL|DBA|CDB}_JSON_DATAGUIDE_FIELDS Views
      * JSON Data Guide Records for Arrays of Scalar Values
      * Data-Dictionary Views to Record the Presence of JSON Columns
      * JSON Path Expression Item Methods
      * LOB Return Types for JSON_VALUE, JSON_QUERY, and JSON_TABLE Functions
      * LOB Support for SQL/JSON Generation Functions
   
1. Why 18c from 12c?
      
While it may seem that Oracle has skipped several releases, it is simpler to regard "Oracle Database 18c" as "Oracle Database 12c Release 2 (12.2.0.2)"; Oracle has simply changed the naming to reflect the year in which the product is released (2018).
Oracle has decided to move to a yearly release model with quarterly updates, in line with an agile approach. (Next year's release will be Oracle 19c, and so on.)

What's the benefit?

Oracle’s change to annual releases addresses both the delay in getting new features out, as well as the amount of changes introduced. This should enable developers and DBAs to rapidly adopt the new Releases, enabling new and innovative solutions.

2. Oracle 18c Key Features - From a Developer's Standpoint

    The following features are the key changes a developer needs to know about when upgrading to Oracle 18c.

3.3.1 Private Temporary Tables:

A private temporary table is a new concept in Oracle 18c: the table object itself is temporary and session-specific, not just the data.

Private temporary tables are temporary database objects that are automatically dropped at the end of a transaction or a session.

A private temporary table is stored in memory and is visible only to the session that created it.

A private temporary table confines the scope of a temporary table to a session or a transaction, thus providing more flexibility in application coding, leading to easier code maintenance and better ready-to-use functionality.

- Temporary, private (session-based) table definition: private table name and shape
- Temporary, private (session-based) data content: session or transaction duration

CREATE PRIVATE TEMPORARY TABLE ora$ptt_my_temp_table (
  id           NUMBER,
  description  VARCHAR2(20)
)
ON COMMIT DROP DEFINITION;

-- Insert, but don't commit, then check contents of PTT.
INSERT INTO ora$ptt_my_temp_table VALUES (1, 'ONE');

SELECT COUNT(*) FROM ora$ptt_my_temp_table;

COUNT(*)
--------------
         1

-- Reconnect and check contents of the PTT.
CONN test/test@pdb1

SELECT COUNT(*) FROM ora$ptt_my_temp_table;
SELECT COUNT(*) FROM ora$ptt_my_temp_table
                     *
ERROR at line 1:
ORA-00942: table or view does not exist

SQL>
----------------------------------------------------------------------------------------------------------------

   It doesn't make sense for a permanent PL/SQL object to reference a temporary object directly, as it would not exist at compile time.
   If you want to use a private temporary table from a permanent object, it has to be done using dynamic SQL.
   
   Example:
CREATE OR REPLACE FUNCTION ptt_test (p_id IN NUMBER)
  RETURN VARCHAR2
AS
  l_sql     VARCHAR2(32767);
  l_return  VARCHAR2(30);
BEGIN
  l_sql := 'CREATE PRIVATE TEMPORARY TABLE ora$ptt_my_temp_table (
              id           NUMBER,
              description  VARCHAR2(20)
            )
            ON COMMIT DROP DEFINITION';

  EXECUTE IMMEDIATE l_sql;

  EXECUTE IMMEDIATE q'[INSERT INTO ora$ptt_my_temp_table VALUES (1, 'ONE')]';

  -- With EXECUTE IMMEDIATE the INTO clause stays outside the dynamic SQL string.
  EXECUTE IMMEDIATE 'SELECT description FROM ora$ptt_my_temp_table WHERE id = :id' INTO l_return USING p_id;

  RETURN l_return;
END;
/
***********************************************************************

3.3.2 Approximate Top-N Query Processing:


" What are the top five products sold by week for the past year?
• Who are the top five earners by region?
• How many page views did the top five blog posts get last week?
• How much did my top fifty customers each spend last year?"

New approximate functions: APPROX_COUNT(), APPROX_SUM(), APPROX_RANK()

Data analysis applications make heavy use of aggregate functions. Approximate query processing (available since Oracle Database 12c Release 1) aims to
deliver faster results for these queries. The approximate results are not identical to the exact results, but they are very close.
New approximate SQL functions for rank, sum, and count are now available for Top-N style queries.
By making use of approximate query processing, you can instantly improve the performance of existing analytic workloads and enable faster ad-hoc data exploration.
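
A minimal sketch of the new functions (assuming a typical employees table with department_id and salary columns), answering "which are the top five departments by total salary?":

SELECT department_id,
       APPROX_SUM(salary) AS approx_total_salary
FROM   employees
GROUP BY department_id
HAVING APPROX_RANK(ORDER BY APPROX_SUM(salary) DESC) <= 5;

Note that APPROX_COUNT and APPROX_SUM must be used together with a corresponding APPROX_RANK in the HAVING clause.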

***********************************************************************

3.3.3 Inline External Tables:

Inline external tables enable the run-time definition of an external table to be part of a SQL statement, eliminating the need to create an external table as a persistent database object in the data dictionary.

Inline external tables remove the need to explicitly create external tables to access data outside an Oracle database.
This simplifies accessing external data, and enables the development of simpler and more efficient database applications.

- External table definition provided at run time, similar to an inline view
- No need to pre-create external tables that are used only once
- Increased developer productivity


Before 18c:
CREATE TABLE sales_xt
 (prod_id NUMBER, … )
 ORGANIZATION EXTERNAL
 ( TYPE ORACLE_LOADER
   …
   LOCATION ('new_sales_kw13')
 )
 REJECT LIMIT UNLIMITED;
INSERT INTO sales SELECT * FROM sales_xt;
DROP TABLE sales_xt;

After 18c:
INSERT INTO sales
SELECT sales_xt.*
FROM EXTERNAL(
 (prod_id NUMBER, … )
 TYPE ORACLE_LOADER
 …
 LOCATION ('new_sales_kw13')
 REJECT LIMIT UNLIMITED ) sales_xt;

***********************************************************************
3.3.4 Polymorphic Table Functions:

Polymorphic table functions (PTFs) are table functions whose operands can have more than one type. The return type is determined by the PTF invocation argument list.
These user-defined polymorphic table functions can be invoked in the FROM clause of a SQL query block.

PTFs are used to embed sophisticated algorithms in SQL, hide the implementation of the algorithm, and leverage powerful dynamic features of PL/SQL.

A PTF has two parts:
1. The PL/SQL package which contains the client interface for the PTF implementation.
2. The standalone or package function naming the PTF and its associated implementation package.

Steps to Implement a Polymorphic Table Function
1. Create the implementation package containing the DESCRIBE function (required) and the OPEN, FETCH_ROWS, and CLOSE procedures (optional). This is called the PTF client interface.

Example :
  Here is an example of how to use polymorphic table functions in 18c to dynamically convert CSV data to columns.
To create a polymorphic table function (PTF), you need to create a package with a DESCRIBE function, which defines the new columns in the output. FETCH_ROWS is optional; you use it to set the values for the new columns.



create or replace package csv_pkg as  
  /* The describe function defines the new columns */  
  function describe (  
    tab in out dbms_tf.table_t,  
    col_names varchar2  
  ) return dbms_tf.describe_t;  
  
  /* Fetch_rows sets the values for the new columns */  
  procedure fetch_rows (col_names varchar2);  
end csv_pkg;  

DESCRIBE uses col_names to define how many columns to add to the output and their names. FETCH_ROWS then extracts the values from the source string into the new columns.

create or replace package body csv_pkg as  
  function describe(  
    tab in out dbms_tf.table_t,  
    col_names varchar2  
  )   
    return dbms_tf.describe_t as  
    new_cols dbms_tf.columns_new_t;  
    col_id   pls_integer := 2;  
  begin   
    
    /* Enable the source column for reading */  
    tab.column(1).pass_through := FALSE;  
    tab.column(1).for_read     := TRUE;  
    new_cols(1) := tab.column(1).description;  
      
    /* Extract the column names from the header string,  
       creating a new column for each   
     */  
     for j in 1 .. ( length(col_names) - length(replace(col_names,',')) ) + 1 loop   
      new_cols(col_id) := dbms_tf.column_metadata_t(  
        name=>regexp_substr(col_names, '[^,]+', 1, j),--'c'||j,   
        type=>dbms_tf.type_varchar2  
      );  
      col_id := col_id + 1;  
     end loop;  
    
    return dbms_tf.describe_t( new_columns => new_cols );  
  end;  
  
  procedure fetch_rows (col_names varchar2) as   
    rowset    dbms_tf.row_set_t;  
    row_count pls_integer;  
  begin  
    /* read the input data set */  
    dbms_tf.get_row_set(rowset, row_count => row_count);  
      
    /* Loop through the input rows... */  
    for i in 1 .. row_count loop  
      /* ...and the defined columns, extracting the relevant value   
         start from 2 to skip the input string  
      */  
      for j in 2 .. ( length(col_names) - length(replace(col_names,',')) ) + 2 loop  
        rowset(j).tab_varchar2(i) :=   
          regexp_substr(rowset(1).tab_varchar2(i), '[^,]+', 1, j - 1);  
      end loop;  
    end loop;  
      
    /* Output the new columns and their values */  
    dbms_tf.put_row_set(rowset);  
      
  end;  
    
end csv_pkg;

2. The function below is the interface to the implementation package. The first argument is the table that provides the source data set. The second is a comma-separated list of the names of the new columns.

create or replace function csv_to_columns( tab table, col_names varchar2 )
return table pipelined row polymorphic using csv_pkg;

3. You can now pass a table with CSV data and the names of the new columns you want. The PTF will split these out.

with csvs as
( select 'two,comma-separated values' str from dual )
select * from csv_to_columns( csvs, 'c1,c2' );

STR                          C1         C2
---------------------------- ---------- ----------------------
two,comma-separated values   two        comma-separated values

***********************************************************************
3.3.5 PL/SQL Qualified Expressions:

  Oracle 18c makes it easier to provide complex values when initializing data.
  Qualified expressions provide values for a RECORD type or an associative array (PL/SQL table) in a single expression.
Qualified expressions improve program clarity and developer productivity by providing the ability to declare and define a complex value in a compact form where the value is needed.



Example :
DECLARE
  TYPE t_tab IS TABLE OF VARCHAR2(10) INDEX BY VARCHAR2(10);

  l_tab t_tab;
BEGIN
  -- Pre-18c - Direct assignment to individual elements.
  l_tab('IND1') := 'ONE';
  l_tab('IND2') := 'TWO';
  l_tab('IND3') := 'THREE';

  -- 18c - Qualified expression using named association.
  l_tab := t_tab('IND1' => 'ONE',
                 'IND2' => 'TWO',
                 'IND3' => 'THREE');
END;
/
****************************************************************
-- A qualified expression can also supply a parameter default value.
DECLARE
  TYPE t_tab IS TABLE OF VARCHAR2(10) INDEX BY PLS_INTEGER;

  PROCEDURE dummy (p_tab IN t_tab DEFAULT t_tab(1 => 'ONE', 2 => 'TWO', 3 => 'THREE')) AS
  BEGIN
    NULL;
  END;
BEGIN
  NULL;
END;
/
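
Qualified expressions work the same way for record types. A minimal sketch (the type and field names here are illustrative):

DECLARE
  TYPE t_rec IS RECORD (
    id           NUMBER,
    description  VARCHAR2(10)
  );

  l_rec t_rec;
BEGIN
  -- 18c - Qualified expression using named association for a record.
  l_rec := t_rec(id => 1, description => 'ONE');
END;
/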


***********************************************************************

3.3.6 Not Persistable Types:

Use the [NOT] PERSISTABLE clause to indicate whether or not instances of an object type are persistable.
Only PERSISTABLE types can be stored in a table.
Oracle 18c provides the ability to mark types as NOT PERSISTABLE, to make sure that unsuitable data types are not persisted to disk.
Marking a type as NOT PERSISTABLE makes upgrades smoother and reduces errors.

Example:
-- Default -> PERSISTABLE
CREATE TYPE ct_phone_list_type1 AS VARRAY(8) OF VARCHAR2(15);
CREATE TYPE ct_address_type1 AS OBJECT (
  street_address_1 VARCHAR2(40), street_address_2 VARCHAR2(40),
  postal_code VARCHAR2(10), city VARCHAR2(10), state_province VARCHAR2(10),
  country_code CHAR(2), phone ct_phone_list_type1);
-- Type created.

-- NOT PERSISTABLE
CREATE TYPE ct_phone_list_type2 AS VARRAY(8) OF VARCHAR2(15) NOT PERSISTABLE;
CREATE TYPE ct_address_type2 AS OBJECT (
  street_address_1 VARCHAR2(40), street_address_2 VARCHAR2(40),
  postal_code VARCHAR2(10), city VARCHAR2(10), state_province VARCHAR2(10),
  country_code CHAR(2), phone ct_phone_list_type2);
-- Attempting to create ct_address_type2 fails with:
-- ORA-22383: a persistable type cannot have non-persistable attributes


***********************************************************************

3.3.7 Manual Termination of Runaway Queries:

A SQL query consuming excessive resources, including parallel servers, can be terminated using the ALTER SYSTEM CANCEL SQL statement.
This feature enables the database administrator to terminate a runaway SQL query using a SQL statement.
This frees the system resources consumed by the runaway SQL query so that other SQL queries can use them.
ALTER SYSTEM CANCEL SQL 'sid, serial#';
-- Optional parameters (within the same quoted string):
--   @instance_id
--   sql_id
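
For example, to cancel the statement currently running in session 972 with serial number 37 (the identifiers here are illustrative; take real values from V$SESSION):

ALTER SYSTEM CANCEL SQL '972, 37';

-- Optionally restrict the cancellation to a specific instance and SQL ID.
ALTER SYSTEM CANCEL SQL '972, 37, @1, 7s907prbgrvfb';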

***********************************************************************
3.3.8 PL/SQL DBMS_HPROF Procedure:

What is DBMS_HPROF?

The DBMS_HPROF built-in PL/SQL package and the related DBMSHP_PARENT_CHILD_INFO table are used to help plot the execution and debugging of PL/SQL, revealing the hierarchy of calls to other PL/SQL functions.

What's new in 18c for DBMS_HPROF?

Starting with Oracle Database 18c, a new DBMS_HPROF procedure creates the tables and structures needed to collect and analyze raw profiler output, as an alternative to the raw profiler data file.

A PL/SQL performance engineer with INSERT, CREATE, and SELECT privileges can record profiler-run information in the database tables and generate reports with tools such as SQL*Plus or SQL Developer.

The engineer does not need additional file permissions to access script files located in operating system directories.

Overview & Implementation:

You can use the PL/SQL hierarchical profiler to identify bottlenecks and performance-tuning opportunities in PL/SQL applications.

The profiler reports the dynamic execution profile of a PL/SQL program organized by function calls, and accounts for SQL and PL/SQL execution times separately. 

To collect profile data from your PL/SQL program for the PL/SQL hierarchical profiler, follow these steps:

* Ensure that you have these privileges:
  1. EXECUTE privilege on the DBMS_HPROF package
  2. WRITE privilege on the directory that you specify when you call DBMS_HPROF.START_PROFILING
* Use the DBMS_HPROF.START_PROFILING PL/SQL API to start hierarchical profiler data collection in a session.
* Run your PL/SQL program long enough to get adequate code coverage.
* To get the most accurate measurements of elapsed time, avoid unrelated activity on the system on which your PL/SQL program is running.
* Use the DBMS_HPROF.STOP_PROFILING PL/SQL API to stop hierarchical profiler data collection.

Sample Code for DBMS_HPROF:
Consider this PL/SQL procedure, test:
CREATE OR REPLACE PROCEDURE test AUTHID DEFINER IS
  n NUMBER;

  PROCEDURE foo IS
  BEGIN
    SELECT COUNT(*) INTO n FROM EMPLOYEES;
  END foo;

BEGIN  -- test
  FOR i IN 1..3 LOOP
    foo;
  END LOOP;
END test;
/
Now consider the following PL/SQL anonymous block, which analyzes the raw profiler data table and generates an HTML CLOB report:
DECLARE
  reportclob  CLOB;
  trace_id    NUMBER;
BEGIN
  -- Create raw profiler data and analysis tables.
  -- force_it => TRUE drops the tables if they already exist.
  DBMS_HPROF.create_tables(force_it => TRUE);

  -- Start profiling; raw profiler data is written to the raw profiler data table.
  trace_id := DBMS_HPROF.start_profiling;

  -- Run the procedure to be profiled.
  test;

  -- Stop profiling.
  DBMS_HPROF.stop_profiling;

  -- Analyze the trace_id entry in the raw profiler data table and
  -- produce an analyzed HTML report in reportclob.
  DBMS_HPROF.analyze(trace_id, reportclob);
END;
/


Overview of PL/SQL Hierarchical Profiler

Non-hierarchical (flat) profilers record the time that a program spends within each subprogram (the function time or self time of each subprogram). Function time is helpful, but often inadequate. For example, it is helpful to know that a program spends 40% of its time in the subprogram INSERT_ORDER, but it is more helpful to know which subprograms call INSERT_ORDER often and the total time the program spends under INSERT_ORDER (including its descendant subprograms). Hierarchical profilers provide such information.
The PL/SQL hierarchical profiler:

Reports the dynamic execution profile of your PL/SQL program, organized by subprogram calls
Accounts for SQL and PL/SQL execution times separately
Requires no special source or compile-time preparation
Stores results in database tables (hierarchical profiler tables) for custom report generation by integrated development environment (IDE) tools (such as SQL Developer and third-party tools)
Provides subprogram-level execution summary information, such as:
Number of calls to the subprogram
Time spent in the subprogram itself (function time or self time)
Time spent in the subprogram itself and in its descendant subprograms (subtree time)

The PL/SQL hierarchical profiler is implemented by the DBMS_HPROF package and has two components:
Data collection
The data collection component is an intrinsic part of the PL/SQL Virtual Machine. The DBMS_HPROF package provides APIs to turn hierarchical profiling on and off and to write the raw profiler output to a file or to the raw profiler data table.
Analyzer
The analyzer component processes the raw profiler output and produces analyzed results. It analyzes:
Raw profiler data from the raw profiler data file or the raw profiler data table, into an HTML CLOB report, an analyzed report file, and the hierarchical profiler analysis tables.

Example Implementation in Detail:

https://docs.oracle.com/en/database/oracle/oracle-database/18/adfns/hierarchical-profiler.html#GUID-B927E682-0B0C-46E3-B011-B43FF7F156C0


***********************************************************************

3.3.9 OCI Call Time Out Attribute:

This feature introduces a new attribute (OCI_ATTR_CALL_TIMEOUT) that allows OCI applications to specify a millisecond timeout value for round-trip calls to the database.

This gives applications fine-grained control, allowing them to avoid waiting unexpectedly long on OCI calls such as statement execution.

The Oracle Call Interface (OCI) is a set of low-level APIs (application programming interfaces) used to interact with the Oracle Database.
It supports operations such as logon, parse, execute and fetch. OCI programs are normally written in C or C++,
although they can be written in almost any programming language.
OCI offers a procedural API not only for performing certain database administration tasks (such as system startup and shutdown), but also for using PL/SQL or SQL to query, access, and manipulate data.
Client-side tools and drivers, such as ODBC, are built on top of OCI.

***********************************************************************

3.3.10 SODA for PL/SQL - Simple Oracle Document Access:

SODA for PL/SQL is a PL/SQL API that implements Simple Oracle Document Access (SODA).
You can use it with PL/SQL to perform create, read (retrieve), update, and delete (CRUD) operations on documents of any kind,
and you can use it to query JSON documents.

SODA is a set of NoSQL-style APIs that let you create and store collections of documents in Oracle Database, retrieve them,
and query them, without needing to know Structured Query Language (SQL) or how the data in the documents is stored in the database.

Oracle relational database management system (RDBMS) supports storing and querying JSON data.
To access this functionality, you need structured query language (SQL) with special JSON SQL operators.
SODA for PL/SQL hides the complexities of SQL/JSON programming.



Sample Implementation

This example creates collection myCollectionName with the default metadata.

DECLARE
    collection  SODA_Collection_T;
BEGIN
    collection := DBMS_SODA.create_collection('myCollectionName');   
END;
/

This example uses PL/SQL function DBMS_SODA.open_collection to open the collection named myCollectionName, returning a SODA_COLLECTION_T instance that represents it. If the value returned is NULL, then no collection named myCollectionName exists.

DECLARE
    collection  SODA_COLLECTION_T;
BEGIN
    collection := DBMS_SODA.open_collection('myCollectionName');
END;
/

This example uses DBMS_SODA.open_collection to try to open an existing collection named myCollectionName. It prints a message if no such collection exists.

DECLARE
    collection SODA_COLLECTION_T;
BEGIN
    collection := DBMS_SODA.open_collection('myCollectionName');
    IF collection IS NULL THEN
        DBMS_OUTPUT.put_line('Collection does not exist');
    END IF;
END;
/
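
Once a collection is open, documents can be inserted into it. A minimal sketch along the same lines (the JSON content here is illustrative):

DECLARE
    collection  SODA_COLLECTION_T;
    document    SODA_DOCUMENT_T;
    status      NUMBER;
BEGIN
    collection := DBMS_SODA.open_collection('myCollectionName');
    -- Build a document from JSON text; b_content expects binary content.
    document := SODA_DOCUMENT_T(
                  b_content => UTL_RAW.cast_to_raw('{"name" : "John", "job" : "Clerk"}'));
    -- insert_one returns 1 if the document is successfully inserted.
    status := collection.insert_one(document);
END;
/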


               

***********************************************************************




3.3.11 Support Indexing of JSON Key Names Longer than 64 Characters:

*  The upper limit is increased for JSON key names that can be indexed by a JSON search index.
*  The JSON key name upper limit in Oracle Database 18c is 255 bytes. In previous releases, JSON search indexes did not index key names longer than 64 bytes.
*  Key names longer than 64 bytes are quite common when object serializers, such as GSON and Jackson, are used to serialize Java hash maps to JSON. Operations on JSON path expressions containing JSON key names up to 255 characters long can now be optimized by the JSON search index.
*  Raising the limit on the size of the key name for indexing increases the efficiency of searching JSON documents generated from hash-map-like structures. (A search-index creation sketch follows this list.)
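
For reference, a JSON search index is created as shown in this minimal sketch (the table and index names are illustrative):

CREATE SEARCH INDEX json_documents_search_idx
  ON json_documents (data) FOR JSON;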

***********************************************************************
3.3.12 New SQL Function TO_UTC_TIMESTAMP_TZ:

SQL function TO_UTC_TIMESTAMP_TZ takes as input an ISO 8601 date format string and returns an instance of SQL data type TIMESTAMP WITH TIME ZONE.
This new function was introduced to comply with the ISO 8601 standard.
It normalizes the input to UTC time (Coordinated Universal Time, formerly Greenwich Mean Time).
Unlike SQL function TO_TIMESTAMP_TZ, the new function assumes that the input string uses the ISO 8601 date format, defaulting the time zone to UTC 0.
Although JSON doesn't care what format you use for dates and timestamps,
JavaScript typically converts dates and timestamps to strings conforming to the ISO 8601 standard, so this format has become common in JSON data.

Example:
select TO_UTC_TIMESTAMP_TZ('2015-05-18') from dual;

TO_UTC_TIMESTAMP_TZ('2015-05-18')
18-MAY-15 12.00.00.000000 AM +00:00





***********************************************************************
JSON Improvements in 18c:

3.3.13 JSON_EQUAL:

The JSON_EQUAL condition allows you to compare two JSON documents without worrying about member order or formatting.

Example:
The example below requires the following test table. Each row contains two JSON documents that will be compared.

CREATE TABLE json_equal_tab (
  id     NUMBER NOT NULL,
  data1  VARCHAR2(50),
  data2  VARCHAR2(50),
  CONSTRAINT json_equal_tab_pk PRIMARY KEY (id),
  CONSTRAINT json_equal_tab_json1_chk CHECK (data1 IS JSON),
  CONSTRAINT json_equal_tab_json2_chk CHECK (data2 IS JSON)
);

-- Matching empty JSON.
INSERT INTO json_equal_tab VALUES (1, '{}', '{}');

-- Matching members, order and format.
INSERT INTO json_equal_tab VALUES (2, '{"name1":"value1","name2":"value2"}', '{"name1":"value1","name2":"value2"}');

-- Matching members and order, but differing format.
INSERT INTO json_equal_tab VALUES (3, '{"name1":"value1","name2":"value2"}', '{ "name1":"value1", "name2":"value2" }');

-- Matching members, but differing order.
INSERT INTO json_equal_tab VALUES (4, '{"name1":"value1","name2":"value2"}', '{"name2":"value2","name1":"value1"}');

-- Differing members.
INSERT INTO json_equal_tab VALUES (5, '{"name1":"value1","name2":"value2"}', '{"name2":"value2","name3":"value3"}');

-- Differing members.
INSERT INTO json_equal_tab VALUES (6, '{"name1":"value1","name2":"value2"}', '{"name2":"value2"}');

-- Duplicate members.
INSERT INTO json_equal_tab VALUES (7, '{"name1":"value1"}', '{"name1":"value1","name1":"value1"}');
COMMIT;

Regular String Comparison Results - Pre 18c:

If you do a regular string comparison between two JSON documents, they will only be seen as matching if all the members are in the same order and there is no difference in formatting. We can see this using the following two queries.

SELECT id,
       data1,
       data2
FROM   json_equal_tab
WHERE  data1 = data2
ORDER BY 1;

        ID DATA1                                    DATA2
---------- ---------------------------------------- ----------------------------------------
         1 {}                                       {}
         2 {"name1":"value1","name2":"value2"}      {"name1":"value1","name2":"value2"}
SQL>
SELECT id,
       data1,
       data2
FROM   json_equal_tab
WHERE  data1 != data2
ORDER BY 1;

        ID DATA1                                    DATA2
---------- ---------------------------------------- ----------------------------------------
         3 {"name1":"value1","name2":"value2"}      { "name1":"value1", "name2":"value2" }
         4 {"name1":"value1","name2":"value2"}      {"name2":"value2","name1":"value1"}
         5 {"name1":"value1","name2":"value2"}      {"name2":"value2","name3":"value3"}
         6 {"name1":"value1","name2":"value2"}      {"name2":"value2"}
         7 {"name1":"value1"}                       {"name1":"value1","name1":"value1"}

18c Result with JSON_EQUAL:

SELECT id,
       data1,
       data2
FROM   json_equal_tab
WHERE  JSON_EQUAL(data1, data2)
ORDER BY 1;

        ID DATA1                                    DATA2
---------- ---------------------------------------- ----------------------------------------
         1 {}                                       {}
         2 {"name1":"value1","name2":"value2"}      {"name1":"value1","name2":"value2"}
         3 {"name1":"value1","name2":"value2"}      { "name1":"value1", "name2":"value2" }
         4 {"name1":"value1","name2":"value2"}      {"name2":"value2","name1":"value1"}
         7 {"name1":"value1"}                       {"name1":"value1","name1":"value1"}

SQL>


SELECT id,
       data1,
       data2
FROM   json_equal_tab
WHERE  NOT JSON_EQUAL(data1, data2)
ORDER BY 1;

        ID DATA1                                    DATA2
---------- ---------------------------------------- ----------------------------------------
         5 {"name1":"value1","name2":"value2"}      {"name2":"value2","name3":"value3"}
         6 {"name1":"value1","name2":"value2"}      {"name2":"value2"}

***********************************************************************
3.3.14 TREAT(...AS JSON):

In Oracle 18c the TREAT function can be used in SQL to change the declared type of an expression to JSON, where the expression is an object containing JSON data.

Example:

Create a test table containing some JSON data. Notice there is no IS JSON constraint on the column holding the JSON data.

CREATE TABLE json_documents (
  id    RAW(16) NOT NULL,
  data  CLOB,
  CONSTRAINT json_documents_pk PRIMARY KEY (id)
);

INSERT INTO json_documents VALUES (SYS_GUID(), '{"name":"banana", "quantity":10}');
INSERT INTO json_documents VALUES (SYS_GUID(), '{"name":"apple", "quantity":5}');
INSERT INTO json_documents VALUES (SYS_GUID(), '{"name":"orange", "quantity":7}');
COMMIT;

Without the IS JSON constraint on the column, if we try to query the column and traverse the JSON data inside it using dot notation the query fails, as the data is not seen as JSON data by SQL.

SELECT j.data.name,
       j.data.quantity
FROM   json_documents j
ORDER BY 1;
       j.data.quantity
       *
ERROR at line 2:
ORA-00904: "J"."DATA"."QUANTITY": invalid identifier

In the following example we use the TREAT(... AS JSON) function inside an inline view to tell SQL the contents of the column is JSON.
This allows the outer query to use dot notation, even though the original column doesn't have the IS JSON check constraint.
The TREAT(... AS JSON) function works equally well in a view.

SELECT jd.id,
       jd.json_data.name,
       jd.json_data.quantity
FROM   (SELECT id,
               TREAT(data AS JSON) AS json_data
        FROM   json_documents) jd
ORDER BY 2;

ID                               NAME            QUANTITY
-------------------------------- --------------- ---------------
6943671DB33E4A21E05336BB1C0A4E06 apple           5
6943671DB33D4A21E05336BB1C0A4E06 banana          10
6943671DB33F4A21E05336BB1C0A4E06 orange          7

***********************************************************************
3.3.15 JSON_TABLE Materialized Views: ON STATEMENT Support:

What is ON STATEMENT refresh?
Specify ON STATEMENT in the refresh definition of a materialized view to indicate that a fast refresh is to occur whenever DML happens on a base table of the materialized view. That is to say, an ON STATEMENT materialized view is always in sync with base table changes, even before the transaction commits. If a transaction that made changes to the base tables rolls back, the corresponding changes in the ON STATEMENT materialized view are rolled back as well.

In previous database versions we were able to create materialized views containing the JSON_TABLE function using the ON DEMAND and ON COMMIT clauses, but attempting to use the ON STATEMENT clause resulted in the following error.

ORA-32428: on-statement materialized join view error: Shape of MV is not supported

This restriction has been lifted in Oracle 18c.

Example :

CREATE MATERIALIZED VIEW json_documents_v
REFRESH FAST
ON STATEMENT
AS
SELECT id, jt.*
FROM   json_documents,
       JSON_TABLE(data, '$'
         COLUMNS (FirstName, LastName, Job, Active,
           NESTED ContactDetails COLUMNS (Email, Phone)
         )
       ) jt;

Query the data from the materialized view to check it is working as expected.
SET LINESIZE 120

COLUMN firstname FORMAT A10
COLUMN lastname FORMAT A10
COLUMN email FORMAT A23
COLUMN phone FORMAT A13

SELECT * FROM json_documents_v;

ID                               FIRSTNAME  LASTNAME   JOB        ACTIVE EMAIL                   PHONE
-------------------------------- ---------- ---------- ---------- ------ ----------------------- -------------
696878605B4F4508E05336BB1C0A6517 John       Doe        Clerk      true   john.doe@example.com    44 123 123456
Add a new row but don't commit it. You will see the new row is present in the materialized view.

INSERT INTO json_documents (id, data)
VALUES (SYS_GUID(),
        '{
          "FirstName"      : "Mary",
          "LastName"       : "Doe",
          "Job"            : "President",
          "Address"        : {
                              "Street"   : "101 My Street",
                              "City"     : "My City",
                              "Country"  : "UK",
                              "Postcode" : "A12 34B"
                             },
          "ContactDetails" : {
                              "Email"    : "mary.doe@example.com",
                              "Phone"    : "44 123 234567",
                              "Twitter"  : "@marydoe"
                             },
          "DateOfBirth"    : "01-JAN-1980",
          "Active"         : true
         }');
SELECT * FROM json_documents_v;

ID                               FIRSTNAME  LASTNAME   JOB        ACTIVE EMAIL                   PHONE
-------------------------------- ---------- ---------- ---------- ------ ----------------------- -------------
696878605B4F4508E05336BB1C0A6517 John       Doe        Clerk      true   john.doe@example.com    44 123 123456
696878605B514508E05336BB1C0A6517 Mary       Doe        President  true   mary.doe@example.com    44 123 234567


Roll back the insert and check the materialized view again. The row is no longer present.
ROLLBACK;

SELECT * FROM json_documents_v;

ID                               FIRSTNAME  LASTNAME   JOB        ACTIVE EMAIL                   PHONE
-------------------------------- ---------- ---------- ---------- ------ ----------------------- -------------
696878605B4F4508E05336BB1C0A6517 John       Doe        Clerk      true   john.doe@example.com    44 123 123456



***********************************************************************
3.3.16 JSON_DATAGUIDE Function Enhancements - Formatting & Hierarchy:

A data guide is a summary of the structural and type information contained in a set of JSON documents. It records metadata about the fields used in those documents.
In Oracle 12.2 the JSON_DATAGUIDE function returned a flat, minimized data guide.
In Oracle 18c the JSON_DATAGUIDE function can return a flat or hierarchical data guide, displayed in pretty print, using new optional parameters.

Example:

Create and populate the following table to provide some JSON data to work with.
For the data guide functionality to work, the table must include the IS JSON check constraint on the column holding the JSON data.

CREATE TABLE json_documents (
  id    RAW(16) NOT NULL,
  data  CLOB,
  CONSTRAINT json_documents_pk PRIMARY KEY (id),
  CONSTRAINT json_documents_json_chk CHECK (data IS JSON)
);

INSERT INTO json_documents (id, data)
VALUES (SYS_GUID(),
        '{
          "FirstName"      : "John",
          "LastName"       : "Doe",
          "Job"            : "Clerk",
          "Address"        : {
                              "Street"   : "99 My Street",
                              "City"     : "My City",
                              "Country"  : "UK",
                              "Postcode" : "A12 34B"
                             },
          "ContactDetails" : {
                              "Email"    : "john.doe@example.com",
                              "Phone"    : "44 123 123456",
                              "Twitter"  : "@johndoe"
                             },
          "DateOfBirth"    : "01-JAN-1980",
          "Active"         : true
         }');

INSERT INTO json_documents (id, data)
VALUES (SYS_GUID(),
        '{
          "FirstName"      : "Jayne",
          "LastName"       : "Doe",
          "Job"            : "Manager",
          "Address"        : {
                              "Street"   : "100 My Street",
                              "City"     : "My City",
                              "Country"  : "UK",
                              "Postcode" : "A12 34B"
                             },
          "ContactDetails" : {
                              "Email"    : "jayne.doe@example.com",
                              "Phone"    : ""
                             },
          "DateOfBirth"    : "01-JAN-1982",
          "Active"         : false
         }');

COMMIT;

12.2 Result:

SELECT JSON_DATAGUIDE(data) dg_doc
FROM   json_documents;

DG_DOC
--------------------------------------------------------------------------------
[{"o:path":"$.Job","type":"string","o:length":8},{"o:path":"$.Active","type":"
boolean","o:length":8},{"o:path":"$.Address","type":
"object","o:length":128},{"o:path":"$.Address.City",
"type":"string","o:length":8},{"o:path":"$.Address.S
treet","type":"string","o:length":16},{"o:path":"$.A
ddress.Country","type":"string","o:length":2},{"o:pa
th":"$.Address.Postcode","type":"string","o:length":
8},{"o:path":"$.LastName","type":"string","o:length"
:4},{"o:path":"$.FirstName","type":"string","o:lengt
h":8},{"o:path":"$.DateOfBirth","type":"string","o:l
ength":16},{"o:path":"$.ContactDetails","type":"obje
ct","o:length":128},{"o:path":"$.ContactDetails.Emai
l","type":"string","o:length":32},{"o:path":"$.Conta
ctDetails.Phone","type":"string","o:length":16},{"o:
path":"$.ContactDetails.Twitter","type":"string","o:
length":8}]

18.1 Result:

SELECT JSON_DATAGUIDE(data, DBMS_JSON.format_hierarchical, DBMS_JSON.pretty) dg_doc
FROM   json_documents;

DG_DOC
--------------------------------------------------------------------------------
{
  "type" : "object",
  "properties" :
  {
    "Job" :
    {
      "type" : "string",
      "o:length" : 8,
      "o:preferred_column_name" : "Job"
    },
    "Active" :
    {
      "type" : "boolean",
      "o:length" : 8,
      "o:preferred_column_name" : "Active"
    },
    "Address" :
    {
      "type" : "object",
      "o:length" : 128,
      "o:preferred_column_name" : "Address",
      "properties" :
      {
        "City" :
        {
          "type" : "string",
          "o:length" : 8,
          "o:preferred_column_name" : "City"
        },
        "Street" :
        {
          "type" : "string",
          "o:length" : 16,
          "o:preferred_column_name" : "Street"
        },
        "Country" :
        {
          "type" : "string",
          "o:length" : 2,
          "o:preferred_column_name" : "Country"
        },
        "Postcode" :
        {
          "type" : "string",
          "o:length" : 8,
          "o:preferred_column_name" : "Postcode"
        }
      }
    },
    "LastName" :
    {
      "type" : "string",
      "o:length" : 4,
      "o:preferred_column_name" : "LastName"
    },
    "FirstName" :
    {
      "type" : "string",
      "o:length" : 8,
      "o:preferred_column_name" : "FirstName"
    },
    "DateOfBirth" :
    {
      "type" : "string",
      "o:length" : 16,
      "o:preferred_column_name" : "DateOfBirth"
    },
    "ContactDetails" :
    {
      "type" : "object",
      "o:length" : 128,
      "o:preferred_column_name" : "ContactDetails",
      "properties" :
      {
        "Email" :
        {
          "type" : "string",
          "o:length" : 32,
          "o:preferred_column_name" : "Email"
        },
        "Phone" :
        {
          "type" : "string",
          "o:length" : 16,
          "o:preferred_column_name" : "Phone"
        },
        "Twitter" :
        {
          "type" : "string",
          "o:length" : 8,
          "o:preferred_column_name" : "Twitter"
        }
      }
    }
  }
}

***********************************************************************

3.3.17 {USER|ALL|DBA|CDB}_JSON_DATAGUIDE_FIELDS Views:

In Oracle 12.2, if you wanted to display the fields present in the data guide in a relational format, you had to combine the data guide with JSON_TABLE to explore the output.
In Oracle 18c the {USER|ALL|DBA|CDB}_JSON_DATAGUIDE_FIELDS views have been added to simplify displaying this field information.



SELECT path,
       type,
       length
FROM   user_json_dataguide_fields
WHERE  table_name  = 'JSON_DOCUMENTS'
AND    column_name = 'DATA'
ORDER BY 1;

PATH                                     TYPE           LENGTH
---------------------------------------- ---------- ----------
$.Active                                 boolean             8
$.Address                                object            128
$.Address.City                           string              8
$.Address.Country                        string              2
$.Address.Postcode                       string              8
$.Address.Street                         string             16
$.ContactDetails                         object            128
$.ContactDetails.Email                   string             32
$.ContactDetails.Phone                   string             16
$.ContactDetails.Twitter                 string              8
$.DateOfBirth                            string             16
$.FirstName                              string              8
$.Job                                    string              8
$.LastName                               string              4

***********************************************************************
3.3.18 JSON Dataguide Records for Arrays of Scalar Values:

When a JSON document contains an array of scalar values the data guide now records both the array itself and the scalar type of the array.

INSERT INTO json_documents (id, data)
VALUES (SYS_GUID(),
        '{
          "element1" : [1,2,3,4,5],
          "element2" : ["one","two","three"]
         }');
COMMIT;

The data guide is automatically updated, so we can query the field information to see the impact of this. Notice that the array path is listed
with the type array, and the scalar type of its elements is listed using the array path with "[*]" appended to it.

SELECT path,
       type,
       length
FROM   user_json_dataguide_fields
WHERE  table_name  = 'JSON_DOCUMENTS'
AND    column_name = 'DATA'
ORDER BY 1;

PATH                                     TYPE           LENGTH
---------------------------------------- ---------- ----------
$.element1                               array              16
$.element1[*]                            number              1
$.element2                               array              32
$.element2[*]                            string              8

***********************************************************************

3.3.19 Data-Dictionary Views That Record the Presence of JSON Columns:

Data-dictionary views USER_JSON_COLUMNS, ALL_JSON_COLUMNS, and DBA_JSON_COLUMNS now list the views, as well as the tables, that have columns with JSON data.
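
For example, a quick query of the current schema's JSON columns (a minimal sketch; the output depends on the objects you have created):

SELECT table_name,
       object_type,
       column_name
FROM   user_json_columns;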

***********************************************************************
3.3.20 JSON Path Expression Item Methods:

JSON path expressions, first introduced in Oracle 12c, may use 'item methods' to transform selected data (a usage sketch follows these lists). The JSON standard describes six item methods:
12c: abs(), ceiling(), double(), floor()
18c: size(), type()
Oracle extends the JSON standard with:
12c: date(), length(), lower(), number(), string(), timestamp(), upper()
18c: boolean(), booleanOnly(), numberOnly(), stringOnly()
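
A minimal sketch of item methods in path expressions, reusing the json_documents table from the earlier examples (upper() is one of the 12c Oracle extensions, type() is new in 18c):

SELECT JSON_VALUE(a.data, '$.LastName.upper()') AS last_name_upper,
       JSON_VALUE(a.data, '$.Active.type()')    AS active_type
FROM   json_documents a;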

***********************************************************************

3.3.21 LOB Return Types for JSON_VALUE, JSON_QUERY, and JSON_TABLE Functions:
* SQL/JSON function json_value can now return a CLOB instance.
* SQL/JSON function json_query can now return a CLOB or BLOB instance.


Previously:
* json_value supported only VARCHAR2, NUMBER, DATE, TIMESTAMP, TIMESTAMP WITH TIME ZONE, and SDO_GEOMETRY as return types.
* json_query supported only VARCHAR2 as the return type.

 JSON_VALUE - Basic Definition and Example:
      The JSON_VALUE function returns an element from the JSON document, based on the specified JSON path.

SELECT JSON_VALUE(a.data, '$.FirstName') AS first_name,
       JSON_VALUE(a.data, '$.LastName') AS last_name
FROM   json_documents a
ORDER BY 1, 2;

FIRST_NAME      LAST_NAME
--------------- ---------------
Jayne           Doe
John            Doe

2 rows selected.

JSON_QUERY - Basic Definition and Example:
   The JSON_QUERY function returns a JSON fragment representing one or more values. In the following example, JSON_QUERY is used to return a JSON fragment representing all the contact details for each person. The WITH WRAPPER option surrounds the fragment with square brackets.

SELECT a.data.FirstName,
       a.data.LastName,
       JSON_QUERY(a.data, '$.ContactDetails' RETURNING VARCHAR2(1000) WITH WRAPPER) AS contact_details
FROM   json_documents a
ORDER BY a.data.FirstName,
         a.data.LastName;

FIRSTNAME       LASTNAME        CONTACT_DETAILS
--------------- --------------- --------------------------------------------------
Jayne           Doe             [{"Email":"jayne.doe@example.com","Phone
                                ":""}]

John            Doe             [{"Email":"john.doe@example.com","Phone"
                                :"44 123 123456","Twitter":"@johndoe"}]

2 rows selected.
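
A minimal sketch of the new LOB return types, again reusing the json_documents table from above:

SELECT JSON_VALUE(a.data, '$.FirstName' RETURNING CLOB) AS first_name,
       JSON_QUERY(a.data, '$.ContactDetails' RETURNING CLOB) AS contact_details
FROM   json_documents a;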


*********************************************************************
3.3.22 LOB Support for SQL/JSON Generation Functions:

The SQL/JSON generation functions
JSON_OBJECT
JSON_OBJECTAGG
JSON_ARRAY
JSON_ARRAYAGG
can now return LOB results, with the output determined by the RETURNING clause. The valid options are as follows.
{ VARCHAR2 [ ( size [BYTE | CHAR] ) ]
| CLOB
| BLOB
}
The following data types are now supported as input expressions.

-- 12cR2
NUMBER
VARCHAR2
DATE
TIMESTAMP

-- 18c
BINARY_DOUBLE
BINARY_FLOAT
CLOB
BLOB
NVARCHAR2
RAW
TIMESTAMP WITH TIME ZONE
TIMESTAMP WITH LOCAL TIME ZONE
INTERVAL YEAR TO MONTH
INTERVAL DAY TO SECOND

JSON_OBJECT - Basic Definition and Example:

The JSON_OBJECT function converts a comma-separated list of key-value pairs into object members within a JSON object.

SELECT JSON_OBJECT (
         KEY 'department-number' VALUE d.deptno,
         KEY 'department-name' VALUE d.dname,
         KEY 'location' VALUE d.loc
       ) AS departments
FROM   dept d
ORDER BY d.deptno;

DEPARTMENTS
--------------------------------------------------------------------------------
{"department-number":10,"department-name":"ACCOUNTING","location":"NEW YORK"}
{"department-number":20,"department-name":"RESEARCH","location":"DALLAS"}

JSON_OBJECTAGG - Basic Definition and Example:
The JSON_OBJECTAGG aggregate function creates a single JSON object containing a list of object members formed by aggregating a key-value pair from each row.

SELECT JSON_OBJECTAGG (
         KEY d.dname VALUE d.deptno
       ) AS departments
FROM   dept d;

DEPARTMENTS
--------------------------------------------------------------------------------
{"ACCOUNTING":10,"RESEARCH":20,"SALES":30,"OPERATIONS":40}

JSON_ARRAY - Basic Definition and Example:
The JSON_ARRAY function converts a comma-separated list of expressions into a JSON array of JSON values.

SELECT JSON_ARRAY(
         ROWNUM,
         JSON_OBJECT(KEY 'department_no' VALUE d.deptno),
         JSON_OBJECT(KEY 'department_name' VALUE d.dname)
       ) AS department_json_array
FROM   dept d;

DEPARTMENT_JSON_ARRAY
--------------------------------------------------------------------------------
[1,{"department_no":10},{"department_name":"ACCOUNTING"}]
