Friday, July 05, 2013

Interesting! Plugging a non-CDB into a CDB

I have not spent much time testing Oracle Database 12c. Maybe I still felt lazy ^__________^ Actually, I was enjoying O'Reilly books online.
Anyway, today I read the Oracle Database 12c: Oracle Multitenant Option blog (Read more - Adopting a Non-CDB as a Pluggable Database in a Multitenant Container Database) and found out how to plug a non-CDB into a CDB. Interesting! So I thought I should learn it and test it a bit, and decided to play with it - plugging a non-CDB into a CDB (both the same 12c version). (Example File)
Non-CDB name: opun
CDB name: orcl (after the "opun" database is plugged in, the PDB name will be "pdb_opun")

First, I connected to the non-CDB and created the XML description file.
[oracle@test12c trace]$ . oraenv
ORACLE_SID = [orcl] ? opun
The Oracle base remains unchanged with value /u01/app/oracle
[oracle@test12c trace]$ sqlplus / as sysdba
SQL*Plus: Release 12.1.0.1.0 Production on Thu Jul 4 21:30:00 2013

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
SQL> select * from v$version;

BANNER                                                                               CON_ID
-------------------------------------------------------------------------------- ----------
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production              0
PL/SQL Release 12.1.0.1.0 - Production                                                    0
CORE    12.1.0.1.0      Production                                                                0
TNS for Linux: Version 12.1.0.1.0 - Production                                            0
NLSRTL Version 12.1.0.1.0 - Production                                                    0

SQL> select cdb from v$database;

CDB
---
NO

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount;
ORACLE instance started.

Total System Global Area 1252663296 bytes
Fixed Size                  2287864 bytes
Variable Size             452986632 bytes
Database Buffers          788529152 bytes
Redo Buffers                8859648 bytes
Database mounted.

SQL> alter database open read only;

Database altered.

SQL>
SQL> set serverout on
SQL> !mkdir /u01/app/oracle/opun_db

SQL> exec dbms_pdb.describe('/u01/app/oracle/opun_db/opun.xml')

PL/SQL procedure successfully completed.

SQL> !ls /u01/app/oracle/opun_db/
opun.xml

SQL> !less /u01/app/oracle/opun_db/opun.xml

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> exit;
Note: I wanted to move the database files to another path, because I wanted the datafiles of all PDBs under the CDB path (/u01/app/oracle/oradata/ORCL/).
[oracle@test12c trace]$ cp /u01/app/oracle/opun_db/opun.xml /u01/app/oracle/opun_db/opun.xml.bak
[oracle@test12c trace]$ grep -i opun /u01/app/oracle/opun_db/opun.xml
  <pdbname>opun</pdbname>
      <path>/u01/app/oracle/oradata/OPUN/datafile/o1_mf_system_8xb6bgc1_.dbf</path>
      <path>/u01/app/oracle/oradata/OPUN/datafile/o1_mf_sysaux_8xb640k7_.dbf</path>
      <path>/u01/app/oracle/oradata/OPUN/datafile/o1_mf_temp_8xb6nlc7_.tmp</path>
      <path>/u01/app/oracle/oradata/OPUN/datafile/o1_mf_users_8xb6lr4t_.dbf</path>
[oracle@test12c trace]$

[oracle@test12c trace]$ mkdir /u01/app/oracle/oradata/ORCL/OPUN
[oracle@test12c trace]$ mv /u01/app/oracle/oradata/OPUN/datafile /u01/app/oracle/oradata/ORCL/OPUN/

[oracle@test12c trace]$ vi /u01/app/oracle/opun_db/opun.xml
[oracle@test12c trace]$ diff  /u01/app/oracle/opun_db/opun.xml.bak /u01/app/oracle/opun_db/opun.xml
20c20
<       <path>/u01/app/oracle/oradata/OPUN/datafile/o1_mf_system_8xb6bgc1_.dbf</path>
---
>       <path>/u01/app/oracle/oradata/ORCL/OPUN/datafile/o1_mf_system_8xb6bgc1_.dbf</path>
44c44
<       <path>/u01/app/oracle/oradata/OPUN/datafile/o1_mf_sysaux_8xb640k7_.dbf</path>
---
>       <path>/u01/app/oracle/oradata/ORCL/OPUN/datafile/o1_mf_sysaux_8xb640k7_.dbf</path>
69c69
<       <path>/u01/app/oracle/oradata/OPUN/datafile/o1_mf_temp_8xb6nlc7_.tmp</path>
---
>       <path>/u01/app/oracle/oradata/ORCL/OPUN/datafile/o1_mf_temp_8xb6nlc7_.tmp</path>
90c90
<       <path>/u01/app/oracle/oradata/OPUN/datafile/o1_mf_users_8xb6lr4t_.dbf</path>
---
>       <path>/u01/app/oracle/oradata/ORCL/OPUN/datafile/o1_mf_users_8xb6lr4t_.dbf</path>
[oracle@test12c trace]$ grep -i opun /u01/app/oracle/opun_db/opun.xml
  <pdbname>opun</pdbname>
      <path>/u01/app/oracle/oradata/ORCL/OPUN/datafile/o1_mf_system_8xb6bgc1_.dbf</path>
      <path>/u01/app/oracle/oradata/ORCL/OPUN/datafile/o1_mf_sysaux_8xb640k7_.dbf</path>
      <path>/u01/app/oracle/oradata/ORCL/OPUN/datafile/o1_mf_temp_8xb6nlc7_.tmp</path>
      <path>/u01/app/oracle/oradata/ORCL/OPUN/datafile/o1_mf_users_8xb6lr4t_.dbf</path>
[oracle@test12c trace]$ ll /u01/app/oracle/oradata/ORCL/OPUN/datafile/o1_mf_system_8xb6bgc1_.dbf /u01/app/oracle/oradata/ORCL/OPUN/datafile/o1_mf_sysaux_8xb640k7_.dbf /u01/app/oracle/oradata/ORCL/OPUN/datafile/o1_mf_temp_8xb6nlc7_.tmp /u01/app/oracle/oradata/ORCL/OPUN/datafile/o1_mf_users_8xb6lr4t_.dbf
-rw-r-----. 1 oracle oinstall 807411712 Jul  4 21:19 /u01/app/oracle/oradata/ORCL/OPUN/datafile/o1_mf_sysaux_8xb640k7_.dbf
-rw-r-----. 1 oracle oinstall 817897472 Jul  4 21:19 /u01/app/oracle/oradata/ORCL/OPUN/datafile/o1_mf_system_8xb6bgc1_.dbf
-rw-r-----. 1 oracle oinstall  91234304 Jul  4 21:14 /u01/app/oracle/oradata/ORCL/OPUN/datafile/o1_mf_temp_8xb6nlc7_.tmp
-rw-r-----. 1 oracle oinstall   5251072 Jul  4 21:19 /u01/app/oracle/oradata/ORCL/OPUN/datafile/o1_mf_users_8xb6lr4t_.dbf
[oracle@test12c trace]$
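Editing the XML by hand in vi works, but the same rewrite can be scripted; a minimal sed sketch, demonstrated here on a two-line sample copy of the describe XML (point XML at the real /u01/app/oracle/opun_db/opun.xml to apply it for real):

```shell
# Demonstrated on a sample copy of the describe XML; point XML at the
# real /u01/app/oracle/opun_db/opun.xml to apply it for real.
XML=$(mktemp)
cat > "$XML" <<'EOF'
      <path>/u01/app/oracle/oradata/OPUN/datafile/o1_mf_system_8xb6bgc1_.dbf</path>
      <path>/u01/app/oracle/oradata/OPUN/datafile/o1_mf_users_8xb6lr4t_.dbf</path>
EOF

OLD='/u01/app/oracle/oradata/OPUN/'
NEW='/u01/app/oracle/oradata/ORCL/OPUN/'

# -i.bak edits in place and keeps a backup, like the manual cp ... .bak step
sed -i.bak "s#${OLD}#${NEW}#g" "$XML"

grep ORCL "$XML"   # both <path> entries now point under ORCL/OPUN
```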
Alternatively, skip the manual file move and let Oracle copy the files when creating the pluggable database:
create pluggable database pdb_opun using '/u01/app/oracle/opun_db/opun.xml'
COPY
  FILE_NAME_CONVERT = ('/u01/app/oracle/oradata/OPUN/', '/u01/app/oracle/oradata/ORCL/OPUN/');
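To double-check which files a FILE_NAME_CONVERT like this would touch, the <path> entries can be pulled straight out of the describe XML; a minimal grep/sed sketch, demonstrated on a two-line sample (use /u01/app/oracle/opun_db/opun.xml for the real file):

```shell
# List the datafile paths recorded in a dbms_pdb.describe XML.
# Demonstrated on a sample; use /u01/app/oracle/opun_db/opun.xml for real.
XML=$(mktemp)
cat > "$XML" <<'EOF'
      <path>/u01/app/oracle/oradata/OPUN/datafile/o1_mf_system_8xb6bgc1_.dbf</path>
      <path>/u01/app/oracle/oradata/OPUN/datafile/o1_mf_users_8xb6lr4t_.dbf</path>
EOF

# grep -o keeps only the <path>...</path> matches; sed strips the tags
grep -o '<path>[^<]*</path>' "$XML" | sed -e 's#<path>##' -e 's#</path>##'
```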

Then I connected to the CDB and checked plug-in compatibility with "dbms_pdb.check_plug_compatibility".
[oracle@test12c ~]$ . oraenv
ORACLE_SID = [orcl] ?
The Oracle base remains unchanged with value /u01/app/oracle

[oracle@test12c ~]$ sqlplus  / as sysdba
SQL*Plus: Release 12.1.0.1.0 Production on Thu Jul 4 21:41:10 2013

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> set lines 150
SQL> set pages 1000
SQL> select * from v$version;

BANNER                                                                               CON_ID
-------------------------------------------------------------------------------- ----------
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production              0
PL/SQL Release 12.1.0.1.0 - Production                                                    0
CORE    12.1.0.1.0      Production                                                                0
TNS for Linux: Version 12.1.0.1.0 - Production                                            0
NLSRTL Version 12.1.0.1.0 - Production                                                    0

SQL> select cdb from v$database;

CDB
---
YES

SQL> select name, open_mode from v$pdbs;

NAME                           OPEN_MODE
------------------------------ ----------
PDB$SEED                       READ ONLY
ORCLPDB                        MOUNTED

SQL>
SQL> set serverout on
SQL> declare
       compat boolean := FALSE;
     begin
       compat := dbms_pdb.check_plug_compatibility(pdb_descr_file => '/u01/app/oracle/opun_db/opun.xml');
       if compat then
         dbms_output.put_line('Yes');
       else
         dbms_output.put_line('No');
       end if;
     end;
     /

No

PL/SQL procedure successfully completed.

SQL> select * from PDB_PLUG_IN_VIOLATIONS;

TIME                                                                        NAME                           CAUSE                  TYPE
--------------------------------------------------------------------------- ------------------------------ ---------------------- ---------
ERROR_NUMBER       LINE
------------ ----------
MESSAGE
------------------------------------------------------------------------------------------------------------------------------------------------------
STATUS
---------
ACTION
------------------------------------------------------------------------------------------------------------------------------------------------------
04-JUL-13 09.51.58.096065 PM                                                OPUN                           Non-CDB to PDB         WARNING
           0          1
PDB plugged in is a non-CDB, requires noncdb_to_pdb.sql be run.
PENDING
Run noncdb_to_pdb.sql.

04-JUL-13 09.51.58.096681 PM                                                OPUN                           Parameter              WARNING
           0          1
CDB parameter sga_target mismatch: Previous 1258291200 Current 0
PENDING
Please check the parameter in the current CDB

04-JUL-13 09.51.58.097387 PM                                                OPUN                           Parameter              WARNING
           0          2
CDB parameter pga_aggregate_target mismatch: Previous 415236096 Current 0
PENDING
Please check the parameter in the current CDB

SQL>

After that, I created the pluggable database. The first attempt failed with ORA-27038: CREATE PLUGGABLE DATABASE re-creates temp files itself, and the .tmp file I had moved earlier was already sitting at the target path, so I removed the leftover file and retried.
SQL> create pluggable database pdb_opun using '/u01/app/oracle/opun_db/opun.xml' nocopy;
create pluggable database pdb_opun using '/u01/app/oracle/opun_db/opun.xml' nocopy
*
ERROR at line 1:
ORA-27038: created file already exists
ORA-01119: error in creating database file '/u01/app/oracle/oradata/ORCL/OPUN/datafile/o1_mf_temp_8xb6nlc7_.tmp'

SQL> !rm /u01/app/oracle/oradata/ORCL/OPUN/datafile/o1_mf_temp_8xb6nlc7_.tmp

SQL> create pluggable database pdb_opun using '/u01/app/oracle/opun_db/opun.xml' nocopy;

Pluggable database created.

SQL> select name, open_mode from v$pdbs;
NAME                           OPEN_MODE
------------------------------ ----------
PDB$SEED                       READ ONLY
ORCLPDB                        MOUNTED
PDB_OPUN                       MOUNTED

SQL>
SQL> !lsnrctl status

LSNRCTL for Linux: Version 12.1.0.1.0 - Production on 04-JUL-2013 21:59:40

Copyright (c) 1991, 2013, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=test12c)(PORT=1521)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 12.1.0.1.0 - Production
Start Date                04-JUL-2013 21:59:05
Uptime                    0 days 0 hr. 0 min. 34 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/oracle/product/12.1.0/dbhome_1/network/admin/listener.ora
Listener Log File         /u01/app/oracle/diag/tnslsnr/test12c/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=test12c)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=test12c)(PORT=5500))(Security=(my_wallet_directory=/u01/app/oracle/admin/orcl/xdb_wallet))(Presentation=HTTP)(Session=RAW))
Services Summary...
Service "orcl" has 1 instance(s).
  Instance "orcl", status READY, has 1 handler(s) for this service...
Service "orclXDB" has 1 instance(s).
  Instance "orcl", status READY, has 1 handler(s) for this service...
Service "orclpdb" has 1 instance(s).
  Instance "orcl", status READY, has 1 handler(s) for this service...
Service "pdb_opun" has 1 instance(s).
  Instance "orcl", status READY, has 1 handler(s) for this service...
The command completed successfully

SQL> show con_name

CON_NAME
------------------------------
CDB$ROOT
SQL> alter session set container=PDB_OPUN
  2  ;

Session altered.

SQL>  show con_name

CON_NAME
------------------------------
PDB_OPUN

SQL> select status from v$instance;

STATUS
------------
MOUNTED
The PDB_PLUG_IN_VIOLATIONS result said - "PDB plugged in is a non-CDB, requires noncdb_to_pdb.sql be run." So I ran the noncdb_to_pdb.sql script.
SQL>  show con_name

CON_NAME
------------------------------
PDB_OPUN

SQL> select status from v$instance;

STATUS
------------
MOUNTED
SQL> @?/rdbms/admin/noncdb_to_pdb.sql
SQL> SET SERVEROUTPUT ON
SQL> SET FEEDBACK 1
SQL> SET NUMWIDTH 10
SQL> SET LINESIZE 80
SQL> SET TRIMSPOOL ON
SQL> SET TAB OFF
SQL> SET PAGESIZE 100
SQL>
SQL> WHENEVER SQLERROR EXIT;
SQL>
SQL> DOC
DOC>#######################################################################
DOC>#######################################################################
DOC>   The following statement will cause an "ORA-01403: no data found"
DOC>   error if we're not in a PDB.
DOC>   This script is intended to be run right after plugin of a PDB,
DOC>   while inside the PDB.
DOC>#######################################################################
DOC>#######################################################################
DOC>#
SQL>
SQL> VARIABLE cdbname VARCHAR2(128)
SQL> VARIABLE pdbname VARCHAR2(128)
SQL> BEGIN
  2    SELECT sys_context('USERENV', 'CDB_NAME')
  3      INTO :cdbname
  4      FROM dual
  5      WHERE sys_context('USERENV', 'CDB_NAME') is not null;
  6    SELECT sys_context('USERENV', 'CON_NAME')
  7      INTO :pdbname
  8      FROM dual
  9      WHERE sys_context('USERENV', 'CON_NAME') <> 'CDB$ROOT';
 10  END;
 11  /

PL/SQL procedure successfully completed.

SQL>
SQL> COLUMN pdbname NEW_VALUE pdbname
SQL> COLUMN pdbid NEW_VALUE pdbid
SQL>
SQL> select :pdbname pdbname from dual;

PDBNAME
--------------------------------------------------------------------------------
PDB_OPUN

1 row selected.
.
.
.
SQL> WHENEVER SQLERROR CONTINUE;
SQL>
SQL>
Finally, I opened the PDB and checked the tablespaces, etc.
SQL> show con_name

CON_NAME
------------------------------
PDB_OPUN
SQL> select status from v$instance;

STATUS
------------
MOUNTED

1 row selected.

SQL> alter pluggable database PDB_OPUN open;

Pluggable database altered.

SQL> select tablespace_name from dba_tablespaces;

TABLESPACE_NAME
------------------------------
SYSTEM
SYSAUX
TEMP
USERS

4 rows selected.

SQL> alter session set container=CDB$ROOT;

Session altered.

SQL> select name, open_mode from v$pdbs;

NAME                           OPEN_MODE
------------------------------ ----------
PDB$SEED                       READ ONLY
ORCLPDB                        MOUNTED
PDB_OPUN                       READ WRITE

3 rows selected.

SQL> show con_name

CON_NAME
------------------------------
CDB$ROOT

SQL> connect system/***@127.0.0.1:1521/PDB_OPUN
Connected.
SQL> show con_name

CON_NAME
------------------------------
PDB_OPUN

SQL>

Tuesday, July 02, 2013

Oracle 12.1.0.1.0 JDBC - java.sql.SQLException: Could not commit with auto-commit set on

Actually, I have not pinpointed what is wrong with JDBC on the 12c database yet; I will dig into it later. I thought it was a good day to learn something new... the 12c database. No! Not yet.
I had just been reading an Apache Sqoop book and found it interesting. So a question popped into my head... use Sqoop to connect to Oracle Database 12c (export data from HDFS, then load it into a 12c pluggable database).

The point: learn Sqoop and test Oracle 12.1.0.1.0 JDBC 4.0 a bit.

I began with "sqoop-1.4.2-cdh4.2.1" for the test. Why? Because I had already installed Kiji for Apache Hadoop + HBase, so it was the easy choice for me ^___________^
I have used Sqoop-1 before; I will learn more about Sqoop-2 later :)
After downloading it, I copied the JDBC driver (Oracle 12.1.0.1.0 JDBC 4.0 compiled with JDK6) from the Oracle 12c home, ran Sqoop... and got an error message.
ERROR manager.OracleManager: Failed to rollback transaction
java.sql.SQLException: Could not rollback with auto-commit set on
So I switched to the 11.2.0.3.0 JDBC 4.0 driver (compiled with JDK6), and that error message went away. I suspect something might be wrong in the "Oracle 12.1.0.1.0 JDBC" driver. (I will read more about it later.)
My test - Sqoop-1 with a 12c database:
[oracle@test12c ~]$ wget http://archive.cloudera.com/cdh4/cdh/4/sqoop-1.4.2-cdh4.2.1.tar.gz
--2013-07-02 13:52:15--  http://archive.cloudera.com/cdh4/cdh/4/sqoop-1.4.2-cdh4.2.1.tar.gz
Resolving archive.cloudera.com... 184.73.217.71
Connecting to archive.cloudera.com|184.73.217.71|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 6294861 (6.0M) [application/x-gzip]
Saving to: "sqoop-1.4.2-cdh4.2.1.tar.gz"

100%[======================================================================================================>] 6,294,861    247K/s   in 2m 49s

2013-07-02 13:55:31 (36.3 KB/s) - "sqoop-1.4.2-cdh4.2.1.tar.gz"

[oracle@test12c ~]$

[oracle@test12c ~]$ tar zxf sqoop-1.4.2-cdh4.2.1.tar.gz
[oracle@test12c ~]$ cd sqoop-1.4.2-cdh4.2.1
[oracle@test12c sqoop-1.4.2-cdh4.2.1]$ ./bin/sqoop list-databases --connect  jdbc:oracle:thin:@localhost:1521/orcl --username system  -P
+======================================================================+
|      Error: JAVA_HOME is not set and Java could not be found         |
+----------------------------------------------------------------------+
| Please download the latest Sun JDK from the Sun Java web site        |
|       > http://java.sun.com/javase/downloads/ <                      |
|                                                                      |
| HBase requires Java 1.6 or later.                                    |
| NOTE: This script will find Sun Java whether you install using the   |
|       binary or the RPM based installer.                             |
+======================================================================+
+======================================================================+
|      Error: JAVA_HOME is not set and Java could not be found         |
+----------------------------------------------------------------------+
| Please download the latest Sun JDK from the Sun Java web site        |
|       > http://java.sun.com/javase/downloads/ <                      |
|                                                                      |
| Hadoop requires Java 1.6 or later.                                   |
| NOTE: This script will find Sun Java whether you install using the   |
|       binary or the RPM based installer.                             |
+======================================================================+

[oracle@test12c sqoop-1.4.2-cdh4.2.1]$ export JAVA_HOME=/u01/app/oracle/product/12.1.0/dbhome_1/jdk
[oracle@test12c sqoop-1.4.2-cdh4.2.1]$ export PATH=$JAVA_HOME/bin:$PATH
[oracle@test12c sqoop-1.4.2-cdh4.2.1]$ ./bin/sqoop list-databases --connect  jdbc:oracle:thin:@localhost:1521/orcl --username system  -P
Enter password:
13/07/02 14:05:50 INFO manager.SqlManager: Using default fetchSize of 1000
13/07/02 14:05:50 ERROR sqoop.Sqoop: Got exception running Sqoop: java.lang.RuntimeException: Could not load db driver class: oracle.jdbc.OracleDriver
java.lang.RuntimeException: Could not load db driver class: oracle.jdbc.OracleDriver
        at org.apache.sqoop.manager.OracleManager.makeConnection(OracleManager.java:275)
        at org.apache.sqoop.manager.GenericJdbcManager.getConnection(GenericJdbcManager.java:52)
        at org.apache.sqoop.manager.OracleManager.listDatabases(OracleManager.java:604)
        at org.apache.sqoop.tool.ListDatabasesTool.run(ListDatabasesTool.java:49)
        at org.apache.sqoop.Sqoop.run(Sqoop.java:145)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
        at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:181)
        at org.apache.sqoop.Sqoop.runTool(Sqoop.java:220)
        at org.apache.sqoop.Sqoop.runTool(Sqoop.java:229)
        at org.apache.sqoop.Sqoop.main(Sqoop.java:238)
[oracle@test12c sqoop-1.4.2-cdh4.2.1]$ cp /u01/app/oracle/product/12.1.0/dbhome_1/jdbc/lib/ojdbc6.jar lib/
[oracle@test12c sqoop-1.4.2-cdh4.2.1]$ ./bin/sqoop list-databases --connect  jdbc:oracle:thin:@localhost:1521/orcl --username system  -P
Enter password:
13/07/02 14:06:43 INFO manager.SqlManager: Using default fetchSize of 1000
13/07/02 14:06:45 INFO manager.OracleManager: Time zone has been set to GMT
13/07/02 14:06:45 ERROR manager.OracleManager: Failed to rollback transaction
java.sql.SQLException: Could not rollback with auto-commit set on

        at oracle.jdbc.driver.PhysicalConnection.rollback(PhysicalConnection.java:4506)
        at org.apache.sqoop.manager.OracleManager.listDatabases(OracleManager.java:615)
        at org.apache.sqoop.tool.ListDatabasesTool.run(ListDatabasesTool.java:49)
        at org.apache.sqoop.Sqoop.run(Sqoop.java:145)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
        at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:181)
        at org.apache.sqoop.Sqoop.runTool(Sqoop.java:220)
        at org.apache.sqoop.Sqoop.runTool(Sqoop.java:229)
        at org.apache.sqoop.Sqoop.main(Sqoop.java:238)
13/07/02 14:06:45 ERROR manager.OracleManager: Failed to list databases
java.sql.SQLException: Could not commit with auto-commit set on

        at oracle.jdbc.driver.PhysicalConnection.commit(PhysicalConnection.java:4439)
        at oracle.jdbc.driver.PhysicalConnection.commit(PhysicalConnection.java:4486)
        at org.apache.sqoop.manager.OracleManager.listDatabases(OracleManager.java:612)
        at org.apache.sqoop.tool.ListDatabasesTool.run(ListDatabasesTool.java:49)
        at org.apache.sqoop.Sqoop.run(Sqoop.java:145)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
        at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:181)
        at org.apache.sqoop.Sqoop.runTool(Sqoop.java:220)
        at org.apache.sqoop.Sqoop.runTool(Sqoop.java:229)
        at org.apache.sqoop.Sqoop.main(Sqoop.java:238)
AUDSYS
GSMUSER
SPATIAL_WFS_ADMIN_USR
SPATIAL_CSW_ADMIN_USR
APEX_PUBLIC_USER
SYSDG
DIP
SYSBACKUP
MDDATA
GSMCATUSER
SYSKM
XS$NULL
OJVMSYS
C##DB_DBA1
ORACLE_OCM
OLAPSYS
SI_INFORMTN_SCHEMA
DVSYS
ORDPLUGINS
XDB
ANONYMOUS
CTXSYS
ORDDATA
GSMADMIN_INTERNAL
APPQOSSYS
APEX_040200
WMSYS
DBSNMP
ORDSYS
MDSYS
DVF
FLOWS_FILES
SYS
SYSTEM
OUTLN
LBACSYS
I got the data, but with errors - java.sql.SQLException: Could not rollback with auto-commit set on.
[oracle@test12c sqoop-1.4.2-cdh4.2.1]$ md5sum /u01/app/oracle/product/12.1.0/dbhome_1/jdbc/lib/ojdbc6.jar
b99a9e9b93aa31b787d174a1dbec7cc4  /u01/app/oracle/product/12.1.0/dbhome_1/jdbc/lib/ojdbc6.jar
[oracle@test12c sqoop-1.4.2-cdh4.2.1]$ ls -la /u01/app/oracle/product/12.1.0/dbhome_1/jdbc/lib/ojdbc6.jar
-rw-r--r--. 1 oracle oinstall 3389454 Apr  4 15:15 /u01/app/oracle/product/12.1.0/dbhome_1/jdbc/lib/ojdbc6.jar
[oracle@test12c sqoop-1.4.2-cdh4.2.1]$
[oracle@test12c sqoop-1.4.2-cdh4.2.1]$ ls -la ~/ojdbc6-11.2.jar
-rw-r--r--. 1 oracle oinstall 2714189 Jul  2 13:29 /home/oracle/ojdbc6-11.2.jar
[oracle@test12c sqoop-1.4.2-cdh4.2.1]$ md5sum /home/oracle/ojdbc6-11.2.jar
54c41acb9df6465f45a931fbe9734e1a  /home/oracle/ojdbc6-11.2.jar
[oracle@test12c sqoop-1.4.2-cdh4.2.1]$  java -jar /u01/app/oracle/product/12.1.0/dbhome_1/jdbc/lib/ojdbc6.jar
Oracle 12.1.0.1.0 JDBC 4.0 compiled with JDK6 on Thu_Apr_04_15:06:58_PDT_2013
#Default Connection Properties Resource
#Tue Jul 02 14:10:14 ICT 2013

[oracle@test12c sqoop-1.4.2-cdh4.2.1]$ java -jar /home/oracle/ojdbc6-11.2.jar
Oracle 11.2.0.3.0 JDBC 4.0 compiled with JDK6 on Fri_Aug_26_08:19:15_PDT_2011
#Default Connection Properties Resource
#Tue Jul 02 14:10:26 ICT 2013

[oracle@test12c sqoop-1.4.2-cdh4.2.1]$ java -jar  lib/ojdbc6.jar
Oracle 12.1.0.1.0 JDBC 4.0 compiled with JDK6 on Thu_Apr_04_15:06:58_PDT_2013
#Default Connection Properties Resource
#Tue Jul 02 14:10:44 ICT 2013

[oracle@test12c sqoop-1.4.2-cdh4.2.1]$ rm lib/ojdbc6.jar

[oracle@test12c sqoop-1.4.2-cdh4.2.1]$ cp /home/oracle/ojdbc6-11.2.jar lib/
[oracle@test12c sqoop-1.4.2-cdh4.2.1]$ java -jar  lib/ojdbc6-11.2.jar
Oracle 11.2.0.3.0 JDBC 4.0 compiled with JDK6 on Fri_Aug_26_08:19:15_PDT_2011
#Default Connection Properties Resource
#Tue Jul 02 14:11:15 ICT 2013

[oracle@test12c sqoop-1.4.2-cdh4.2.1]$
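The jar swap above (rm the 12c driver from Sqoop's lib/, cp the 11.2 one in) can be wrapped in a small helper; a minimal sketch, with the usage paths taken from this post:

```shell
# Hypothetical helper: swap the Oracle JDBC driver jar in Sqoop's lib/.
# Usage from this post:
#   swap_driver sqoop-1.4.2-cdh4.2.1/lib /home/oracle/ojdbc6-11.2.jar
swap_driver() {
  lib=$1 jar=$2
  rm -f "$lib"/ojdbc6*.jar   # drop whichever ojdbc6 jar is installed now
  cp "$jar" "$lib"/          # install the replacement
}
```

Running `java -jar` on the installed jar afterwards (as in the transcript above) confirms which driver version is actually in place.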
[oracle@test12c sqoop-1.4.2-cdh4.2.1]$ ./bin/sqoop list-databases --connect  jdbc:oracle:thin:@localhost:1521/orcl --username system  -P
Enter password:
13/07/02 14:11:30 INFO manager.SqlManager: Using default fetchSize of 1000
13/07/02 14:11:31 INFO manager.OracleManager: Time zone has been set to GMT
AUDSYS
GSMUSER
SPATIAL_WFS_ADMIN_USR
SPATIAL_CSW_ADMIN_USR
APEX_PUBLIC_USER
SYSDG
DIP
SYSBACKUP
MDDATA
GSMCATUSER
SYSKM
XS$NULL
OJVMSYS
C##DB_DBA1
ORACLE_OCM
OLAPSYS
SI_INFORMTN_SCHEMA
DVSYS
ORDPLUGINS
XDB
ANONYMOUS
CTXSYS
ORDDATA
GSMADMIN_INTERNAL
APPQOSSYS
APEX_040200
WMSYS
DBSNMP
ORDSYS
MDSYS
DVF
FLOWS_FILES
SYS
SYSTEM
OUTLN
LBACSYS
[oracle@test12c sqoop-1.4.2-cdh4.2.1]$
I got the data with no error message. So I tried connecting to a pluggable database (service "orclpdb").
[oracle@test12c sqoop-1.4.2-cdh4.2.1]$ lsnrctl status

LSNRCTL for Linux: Version 12.1.0.1.0 - Production on 02-JUL-2013 14:46:36

Copyright (c) 1991, 2013, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=test12c)(PORT=1521)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 12.1.0.1.0 - Production
Start Date                02-JUL-2013 13:49:28
Uptime                    0 days 0 hr. 57 min. 8 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/oracle/product/12.1.0/dbhome_1/network/admin/listener.ora
Listener Log File         /u01/app/oracle/diag/tnslsnr/test12c/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=test12c)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=test12c)(PORT=5500))(Security=(my_wallet_directory=/u01/app/oracle/admin/orcl/xdb_wallet))(Presentation=HTTP)(Session=RAW))
Services Summary...
Service "orcl" has 1 instance(s).
  Instance "orcl", status READY, has 1 handler(s) for this service...
Service "orclXDB" has 1 instance(s).
  Instance "orcl", status READY, has 1 handler(s) for this service...
Service "orclpdb" has 1 instance(s).
  Instance "orcl", status READY, has 1 handler(s) for this service...

Service "y" has 1 instance(s).
  Instance "orcl", status READY, has 1 handler(s) for this service...
The command completed successfully
[oracle@test12c sqoop-1.4.2-cdh4.2.1]$ ./bin/sqoop list-databases --connect  jdbc:oracle:thin:@localhost:1521/orclpdb --username system  -P
Enter password:
13/07/02 14:54:29 INFO manager.SqlManager: Using default fetchSize of 1000
13/07/02 14:54:33 INFO manager.OracleManager: Time zone has been set to GMT
SYS
SYSTEM
OLAPSYS
SI_INFORMTN_SCHEMA
PDBADMIN
DVSYS
AUDSYS
GSMUSER
ORDPLUGINS
SPATIAL_WFS_ADMIN_USR
SPATIAL_CSW_ADMIN_USR
XDB
APEX_PUBLIC_USER
SYSDG
DIP
OUTLN
ANONYMOUS
CTXSYS
ORDDATA
SYSBACKUP
MDDATA
GSMCATUSER
GSMADMIN_INTERNAL
LBACSYS
SYSKM
XS$NULL
OJVMSYS
APPQOSSYS
C##DB_DBA1
ORACLE_OCM
APEX_040200
WMSYS
DBSNMP
ORDSYS
MDSYS
DVF
FLOWS_FILES
[oracle@test12c sqoop-1.4.2-cdh4.2.1]

Wow! I used the 11.2.0.3.0 JDBC 4.0 driver (compiled with JDK6) to connect to an Oracle 12c pluggable database.
Another idea popped up! Use Sqoop v.1 to move data from HDFS into a 12c pluggable database (a Sqoop export).

SQL> show con_name

CON_NAME
------------------------------
ORCLPDB

SQL> select * from demo.tb_demo;

no rows selected

[oracle@test12c sqoop-1.4.2-cdh4.2.1]$ hadoop fs -ls hdfs://localhost:8020/user/oracle/input
13/07/02 15:47:06 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Found 1 items
-rw-r--r--   3 oracle supergroup         25 2013-07-02 15:09 hdfs://localhost:8020/user/oracle/input/1.txt
[oracle@test12c sqoop-1.4.2-cdh4.2.1]$ hadoop fs -cat hdfs://localhost:8020/user/oracle/input/1.txt
13/07/02 15:47:28 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
1       A1
2       A2
3       A3
4       A4
5       A5
[oracle@test12c sqoop-1.4.2-cdh4.2.1]$
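For reference, the 25-byte input file above is just five tab-separated (id, value) rows; it can be recreated locally like this (the hadoop command assumes the single-node cluster used in this post):

```shell
# Recreate the 25-byte sample file: five tab-separated (id, value) rows.
# printf reuses its format string for each pair of arguments.
printf '%s\t%s\n' 1 A1 2 A2 3 A3 4 A4 5 A5 > 1.txt

# Then push it to HDFS (assumes the single-node cluster from this post):
# hadoop fs -put 1.txt hdfs://localhost:8020/user/oracle/input/
```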
[oracle@test12c sqoop-1.4.2-cdh4.2.1]$ ./bin/sqoop export  --direct  --connect   jdbc:oracle:thin:@localhost:1521/orclpdb --table demo.tb_demo --username  demo --password demo -export-dir /user/oracle/input  --input-fields-terminated-by "\t"  --columns a,b
13/07/02 15:45:35 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
13/07/02 15:45:35 INFO manager.SqlManager: Using default fetchSize of 1000
13/07/02 15:45:35 INFO tool.CodeGenTool: Beginning code generation
13/07/02 15:45:36 INFO manager.OracleManager: Time zone has been set to GMT
13/07/02 15:45:37 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM demo.tb_demo t WHERE 1=0
13/07/02 15:45:37 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /home/oracle/kiji-bento-buri/cluster/bin/../lib/hadoop-2.0.0-mr1-cdh4.2.1
Note: /tmp/sqoop-oracle/compile/e2f345b29efbc16876b18ee822dcc33c/demo_tb_demo.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
13/07/02 15:45:40 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-oracle/compile/e2f345b29efbc16876b18ee822dcc33c/demo.tb_demo.jar
13/07/02 15:45:40 INFO mapreduce.ExportJobBase: Beginning export of demo.tb_demo
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/oracle/kiji-bento-buri/cluster/lib/hadoop-2.0.0-mr1-cdh4.2.1/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/oracle/kiji-bento-buri/cluster/lib/hbase-0.94.2-cdh4.2.1/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
13/07/02 15:45:41 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
13/07/02 15:45:44 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
13/07/02 15:45:45 INFO input.FileInputFormat: Total input paths to process : 1
13/07/02 15:45:45 INFO input.FileInputFormat: Total input paths to process : 1
13/07/02 15:45:46 INFO mapred.JobClient: Running job: job_20130702150710259_0002
13/07/02 15:45:47 INFO mapred.JobClient:  map 0% reduce 0%
13/07/02 15:46:08 INFO mapred.JobClient:  map 25% reduce 0%
13/07/02 15:46:14 INFO mapred.JobClient:  map 50% reduce 0%
13/07/02 15:46:22 INFO mapred.JobClient:  map 75% reduce 0%
13/07/02 15:46:27 INFO mapred.JobClient:  map 100% reduce 0%
13/07/02 15:46:31 INFO mapred.JobClient: Job complete: job_20130702150710259_0002
13/07/02 15:46:31 INFO mapred.JobClient: Counters: 24
13/07/02 15:46:31 INFO mapred.JobClient:   File System Counters
13/07/02 15:46:31 INFO mapred.JobClient:     FILE: Number of bytes read=0
13/07/02 15:46:31 INFO mapred.JobClient:     FILE: Number of bytes written=930532
13/07/02 15:46:31 INFO mapred.JobClient:     FILE: Number of read operations=0
13/07/02 15:46:31 INFO mapred.JobClient:     FILE: Number of large read operations=0
13/07/02 15:46:31 INFO mapred.JobClient:     FILE: Number of write operations=0
13/07/02 15:46:31 INFO mapred.JobClient:     HDFS: Number of bytes read=544
13/07/02 15:46:31 INFO mapred.JobClient:     HDFS: Number of bytes written=0
13/07/02 15:46:31 INFO mapred.JobClient:     HDFS: Number of read operations=19
13/07/02 15:46:31 INFO mapred.JobClient:     HDFS: Number of large read operations=0
13/07/02 15:46:31 INFO mapred.JobClient:     HDFS: Number of write operations=0
13/07/02 15:46:31 INFO mapred.JobClient:   Job Counters
13/07/02 15:46:31 INFO mapred.JobClient:     Launched map tasks=4
13/07/02 15:46:31 INFO mapred.JobClient:     Rack-local map tasks=4
13/07/02 15:46:31 INFO mapred.JobClient:     Total time spent by all maps in occupied slots (ms)=70714
13/07/02 15:46:31 INFO mapred.JobClient:     Total time spent by all reduces in occupied slots (ms)=0
13/07/02 15:46:31 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
13/07/02 15:46:31 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
13/07/02 15:46:31 INFO mapred.JobClient:   Map-Reduce Framework
13/07/02 15:46:31 INFO mapred.JobClient:     Map input records=5
13/07/02 15:46:31 INFO mapred.JobClient:     Map output records=5
13/07/02 15:46:31 INFO mapred.JobClient:     Input split bytes=461
13/07/02 15:46:31 INFO mapred.JobClient:     Spilled Records=0
13/07/02 15:46:31 INFO mapred.JobClient:     CPU time spent (ms)=5040
13/07/02 15:46:31 INFO mapred.JobClient:     Physical memory (bytes) snapshot=414687232
13/07/02 15:46:31 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=4217012224
13/07/02 15:46:31 INFO mapred.JobClient:     Total committed heap usage (bytes)=251396096
13/07/02 15:46:31 INFO mapreduce.ExportJobBase: Transferred 544 bytes in 47.8912 seconds (11.3591 bytes/sec)
13/07/02 15:46:31 INFO mapreduce.ExportJobBase: Exported 5 records.

SQL> show con_name

CON_NAME
------------------------------
ORCLPDB

SQL>  select * from demo.tb_demo;

         A B
---------- ------------------------------
         1 A1
         2 A2
         3 A3
         4 A4
         5 A5

SQL>
However, I was supposed to use the Oracle 12.1.0.1.0 JDBC 4.0 driver (compiled with JDK6), so I swapped in that driver and tried the export again.

[oracle@test12c sqoop-1.4.2-cdh4.2.1]$ rm lib/ojdbc6-11.2.jar

[oracle@test12c sqoop-1.4.2-cdh4.2.1]$ cp /u01/app/oracle/product/12.1.0/dbhome_1/jdbc/lib/ojdbc6.jar  lib/

[oracle@test12c sqoop-1.4.2-cdh4.2.1]$ java -jar lib/ojdbc6.jar
Oracle 12.1.0.1.0 JDBC 4.0 compiled with JDK6 on Thu_Apr_04_15:06:58_PDT_2013
#Default Connection Properties Resource
#Tue Jul 02 15:54:35 ICT 2013

[oracle@test12c sqoop-1.4.2-cdh4.2.1]$ ./bin/sqoop export  --direct  --connect   jdbc:oracle:thin:@localhost:1521/orclpdb --table demo.tb_demo --username  demo --password demo -export-dir /user/oracle/input  --input-fields-terminated-by "\t"  --columns a,b
13/07/02 15:50:35 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
13/07/02 15:50:35 INFO manager.SqlManager: Using default fetchSize of 1000
13/07/02 15:50:35 INFO tool.CodeGenTool: Beginning code generation
13/07/02 15:50:37 INFO manager.OracleManager: Time zone has been set to GMT
13/07/02 15:50:37 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM demo.tb_demo t WHERE 1=0
13/07/02 15:50:37 WARN manager.SqlManager: SQLException closing ResultSet: java.sql.SQLException: Could not commit with auto-commit set on
13/07/02 15:50:37 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /home/oracle/kiji-bento-buri/cluster/bin/../lib/hadoop-2.0.0-mr1-cdh4.2.1
Note: /tmp/sqoop-oracle/compile/00bc0bcd5dfe349e2c7383db51763a91/demo_tb_demo.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
13/07/02 15:50:40 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-oracle/compile/00bc0bcd5dfe349e2c7383db51763a91/demo.tb_demo.jar
13/07/02 15:50:40 INFO mapreduce.ExportJobBase: Beginning export of demo.tb_demo
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/oracle/kiji-bento-buri/cluster/lib/hadoop-2.0.0-mr1-cdh4.2.1/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/oracle/kiji-bento-buri/cluster/lib/hbase-0.94.2-cdh4.2.1/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
13/07/02 15:50:41 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
13/07/02 15:50:44 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
13/07/02 15:50:46 INFO input.FileInputFormat: Total input paths to process : 1
13/07/02 15:50:46 INFO input.FileInputFormat: Total input paths to process : 1
13/07/02 15:50:46 INFO mapred.JobClient: Running job: job_20130702150710259_0003
13/07/02 15:50:47 INFO mapred.JobClient:  map 0% reduce 0%
13/07/02 15:51:16 INFO mapred.JobClient:  map 25% reduce 0%
13/07/02 15:51:20 INFO mapred.JobClient:  map 50% reduce 0%
13/07/02 15:51:32 INFO mapred.JobClient:  map 75% reduce 0%
13/07/02 15:51:40 INFO mapred.JobClient:  map 100% reduce 0%
13/07/02 15:51:45 INFO mapred.JobClient: Job complete: job_20130702150710259_0003
13/07/02 15:51:45 INFO mapred.JobClient: Counters: 24
13/07/02 15:51:45 INFO mapred.JobClient:   File System Counters
13/07/02 15:51:45 INFO mapred.JobClient:     FILE: Number of bytes read=0
13/07/02 15:51:45 INFO mapred.JobClient:     FILE: Number of bytes written=930436
13/07/02 15:51:45 INFO mapred.JobClient:     FILE: Number of read operations=0
13/07/02 15:51:45 INFO mapred.JobClient:     FILE: Number of large read operations=0
13/07/02 15:51:45 INFO mapred.JobClient:     FILE: Number of write operations=0
13/07/02 15:51:45 INFO mapred.JobClient:     HDFS: Number of bytes read=544
13/07/02 15:51:45 INFO mapred.JobClient:     HDFS: Number of bytes written=0
13/07/02 15:51:45 INFO mapred.JobClient:     HDFS: Number of read operations=19
13/07/02 15:51:45 INFO mapred.JobClient:     HDFS: Number of large read operations=0
13/07/02 15:51:45 INFO mapred.JobClient:     HDFS: Number of write operations=0
13/07/02 15:51:45 INFO mapred.JobClient:   Job Counters
13/07/02 15:51:45 INFO mapred.JobClient:     Launched map tasks=4
13/07/02 15:51:45 INFO mapred.JobClient:     Rack-local map tasks=4
13/07/02 15:51:45 INFO mapred.JobClient:     Total time spent by all maps in occupied slots (ms)=94162
13/07/02 15:51:45 INFO mapred.JobClient:     Total time spent by all reduces in occupied slots (ms)=0
13/07/02 15:51:45 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
13/07/02 15:51:45 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
13/07/02 15:51:45 INFO mapred.JobClient:   Map-Reduce Framework
13/07/02 15:51:45 INFO mapred.JobClient:     Map input records=5
13/07/02 15:51:45 INFO mapred.JobClient:     Map output records=5
13/07/02 15:51:45 INFO mapred.JobClient:     Input split bytes=461
13/07/02 15:51:45 INFO mapred.JobClient:     Spilled Records=0
13/07/02 15:51:45 INFO mapred.JobClient:     CPU time spent (ms)=6030
13/07/02 15:51:45 INFO mapred.JobClient:     Physical memory (bytes) snapshot=418398208
13/07/02 15:51:45 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=4222406656
13/07/02 15:51:45 INFO mapred.JobClient:     Total committed heap usage (bytes)=251396096
13/07/02 15:51:45 INFO mapreduce.ExportJobBase: Transferred 544 bytes in 61.7838 seconds (8.8049 bytes/sec)
13/07/02 15:51:45 INFO mapreduce.ExportJobBase: Exported 5 records.

[oracle@test12c sqoop-1.4.2-cdh4.2.1]$ exit
exit

SQL> l
  1*  select * from demo.tb_demo
SQL> /

         A B
---------- ------------------------------
         1 A1
         2 A2
         3 A3
         4 A4
         1 A1
         2 A2
         3 A3
         4 A4
         5 A5
         5 A5

10 rows selected.
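The doubled rows above are expected: the same five rows were exported twice, and tb_demo has no key constraint to reject them. If duplicates were unwanted, a primary key would make a repeated export fail instead of silently inserting copies. A minimal sketch, assuming the demo schema from this test (the constraint name is hypothetical):

```sql
-- Hypothetical: add a primary key so a repeated export cannot insert duplicates.
ALTER TABLE demo.tb_demo ADD CONSTRAINT tb_demo_pk PRIMARY KEY (a);
-- A second sqoop export of the same rows would then fail with
-- ORA-00001: unique constraint (DEMO.TB_DEMO_PK) violated
-- instead of doubling the data.
```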
When I tested the Sqoop export with the Oracle 12.1.0.1.0 JDBC 4.0 driver (compiled with JDK6), I found no error messages. It showed only "WARN manager.SqlManager: SQLException closing ResultSet: java.sql.SQLException: Could not commit with auto-commit set on", so I considered the export successful anyway.
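The other warning in the runs above (setting the password on the command line) can be avoided with -P, which Sqoop itself suggests: it prompts for the password interactively instead of exposing it in the shell history and process list. A sketch of the same export with that change, assuming the same paths and connection details as above:

```shell
# Hypothetical re-run of the export above; -P prompts for the password
# interactively instead of taking it from the command line.
./bin/sqoop export --direct \
  --connect jdbc:oracle:thin:@localhost:1521/orclpdb \
  --table demo.tb_demo --username demo -P \
  --export-dir /user/oracle/input \
  --input-fields-terminated-by "\t" --columns a,b
```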


Related posts:
Learned a little bit about importing data from MySQL into HDFS using Sqoop
Learned Kiji in a few minutes

Monday, July 01, 2013

Relational Theory for Computer Professionals A Hands-On Video Workshop

Last time, I reviewed Relational Theory for Computer Professionals: What Relational Databases Are Really All About by C.J. Date. It's a great book, but it might be difficult for beginners or for readers who have not worked with databases. This time, I had a chance to review Relational Theory for Computer Professionals: A Hands-On Video Workshop.

This video is very useful. It supports learning well, since you can watch it again and again, and you can use it alongside the "Relational Theory for Computer Professionals" book. I believe audiences will enjoy the watching and the practice, and they will find questions and answers at the end of each topic.
You will benefit from this video if you want to understand what relational systems are all about. In the video, you will learn how to see database systems as programming systems, get a more detailed definition of the relational model, and see a detailed analysis of SQL from a relational point of view.

Author:
C.J. Date has a stature that is unique within the database industry. C.J. is a prolific writer, and is well-known for his best-selling textbook: An Introduction to Database Systems (Addison Wesley). C.J. is an exceptionally clear-thinking writer who can lay out principles and theory in a way easily understood by his audience.

Watching the video helped me understand each topic of "what relational systems are all about" easily. The video has two parts:
Part 1: Foundations - details about relations, relational operations, keys, foreign keys, constraints, and more. You will learn about transactions as well.
Part 2: SQL - SQL tables, operators, and more.

Structure of presentation in video:
1. Basic database concepts
2. Relations
3. Keys and foreign keys
4. Relational operations I
5. Relational operations II
6. Constraints and predicates
7. The relational model
** Entr'acte
8. SQL tables
9. SQL operators I
10. SQL operators II
11. SQL constraints
12. SQL vs. the relational model
A. References

If you have the "Relational Theory for Computer Professionals" book, you should have this video as well. If you don't have the book, you can start with this video.