Mar 22, 2011

Oracle Database with Oracle RAC Architecture


< Hypothesis >
Assume that we are running a single-instance database, and that our database must be up and running 24/7. If we fail to keep the database available for our users 24/7, we just have to go on vacation forever. There will be no coming back.

What if the server machine's disk gets damaged? What if a big earthquake and tsunami hit our town and our office, including the database server, goes under water? If the DB server fails, our database is basically down, and we have to start looking for new jobs. My wife is going to be really, really upset and sad, which is scarier than anything else.

I would suggest to my DBA team that we consider implementing Oracle RAC for our database, because RAC could increase both its availability and its scalability.

(1) Single-instance database 
There is a one-to-one relationship between the database and the instance. So if either the instance or the database storage has a problem, the database can become unavailable.

(2) RAC (Real Application Clusters)
There is a one-to-many relationship between the database and its associated instances. An Oracle RAC database can have up to 100 instances, all of which access ONE database. A cluster comprises multiple interconnected computers or servers that APPEAR AS IF they are one server to the database users or applications. Oracle provides software called "Oracle Clusterware" that manages this whole integrated system.

    As shown in the picture below, each instance usually runs on a separate server (node). We could place each server in a different location (an extended distance cluster). This way, our database stays available for the users even if one server goes down. For example (using the picture below), even if node 1 is down, we still have the other two nodes that can handle the data requests from the users. So the RAC architecture could increase the availability of our database.
    In addition, we can usually combine smaller commodity servers into one cluster. If we get more database users and need to increase our system capacity, we can simply add one or more commodity servers to the cluster (higher scalability).


(Figure: Oracle RAC architecture - excerpted from http://download.oracle.com/docs/cd/B28359_01/rac.111/b28254/admcon.htm#i1058057)


** In addition, Oracle Clusterware also enables us to create a clustered pool of storage that can be used by any combination of single-instance and Oracle RAC databases. 
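
** On an existing RAC system, a quick way to see this one-database/many-instances relationship is to query the GV$ views from any node. A minimal sketch (the output naturally depends on the cluster):

      SQL> select inst_id, instance_name, host_name, status
             from gv$instance;

   Each running instance returns one row, and all of them serve the same single database.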

----------------------------------------------------------------------------------------------------------
< References >
http://en.wikipedia.org/wiki/Oracle_RAC
http://download.oracle.com/docs/cd/B28359_01/rac.111/b28254/admcon.htm

Mar 20, 2011

Recovering a control file using a current backup copy or a multiplexed copy

(1) One of the control files specified in the CONTROL_FILES parameter is corrupted.
(2) One of the control files is inaccessible due to a media failure.

Even if only one of the multiplexed control files is corrupted or inaccessible, the database can't start up.

(1) When a control file is corrupted
      - Shut down the instance.
      - Use an O/S copy command to overwrite the bad control file with a good copy.
          $> cp  /u01/oracle/prod/control01.ctl   /u03/oracle/prod/control03.ctl
      - Start up the database.
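
      Putting case (1) together, the whole sequence is roughly the following sketch (SHUTDOWN ABORT may be needed if the instance has already crashed on the bad control file):

          SQL> shutdown abort;       -- or shutdown immediate if the instance is still up cleanly
          $>   cp  /u01/oracle/prod/control01.ctl   /u03/oracle/prod/control03.ctl
          SQL> startup;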

(2) When a control file is inaccessible due to a media failure
      - Shut down the instance.
      - Use an O/S copy command to copy a current copy of the control file to a new, accessible location.
          $> cp /u01/oracle/prod/control01.ctl   /u04/oracle/prod/control03.ctl
      - Edit the CONTROL_FILES parameter to replace the bad location with the new location.
            CONTROL_FILES = (/u01/oracle/prod/control01.ctl,
                             /u02/oracle/prod/control02.ctl,
                             /u04/oracle/prod/control03.ctl)
      - Start up the database.
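
      If the instance uses an SPFILE rather than a text PFILE, the CONTROL_FILES change can be made from SQL*Plus before the final startup. A sketch using the same paths as above:

          SQL> startup nomount;           -- reads only the parameter file; no control file is needed yet
          SQL> alter system set control_files =
                 '/u01/oracle/prod/control01.ctl',
                 '/u02/oracle/prod/control02.ctl',
                 '/u04/oracle/prod/control03.ctl'
                 scope=spfile;
          SQL> shutdown immediate;        -- restart so the new setting takes effect
          SQL> startup;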

------------------------------------------------------------------------

** If we use multiplexed control files, we can simply remove the bad control file from the CONTROL_FILES parameter and start the database back up quickly for the db users. We can then reconstruct the bad control file and restart the DB later, when the database is not busy.

** If any of the multiplexed control files is no longer available or appropriate, we can just remove that control file from the CONTROL_FILES parameter. But we need to make sure that there are always at least 2 control files.
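
** After the database is back open, we can double-check which control file copies the instance is actually using (and that at least 2 remain). A quick sketch:

      SQL> select status, name from v$controlfile;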

Mar 19, 2011

Control Files & Database


(1) When a database is created, at least one control file is also created. The control file MUST be available for writing by the Oracle server whenever the database is open. Without the control file, the database cannot be mounted. Thus, it is recommended that we multiplex the control files on different disks and back up the control file for safety reasons.
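
For the backup part of that recommendation, a quick sketch (the backup destination below is just an example path):

      SQL> alter database backup controlfile to '/u02/oracle/prod/backup/control.bkp';   -- binary copy
      SQL> alter database backup controlfile to trace;

The TRACE option writes a CREATE CONTROLFILE script, like the one in (2) below, to a trace file instead of making a binary copy.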

(2) We can create a new control file as follows...
CREATE CONTROLFILE SET DATABASE orcl
LOGFILE GROUP 1 ('/u01/oracle/prod/redo01_01.log', '/u01/oracle/prod/redo01_02.log'),
        GROUP 2 ('/u01/oracle/prod/redo02_01.log', '/u01/oracle/prod/redo02_02.log'),
        GROUP 3 ('/u01/oracle/prod/redo03_01.log', '/u01/oracle/prod/redo03_02.log') RESETLOGS
DATAFILE '/u01/oracle/prod/system01.dbf' SIZE 3M,
         '/u01/oracle/prod/rbs01.dbs' SIZE 5M,
         '/u01/oracle/prod/users01.dbs' SIZE 5M,
         '/u01/oracle/prod/temp01.dbs' SIZE 5M
MAXLOGFILES 50  MAXLOGMEMBERS 3  MAXLOGHISTORY 400
MAXDATAFILES 200  MAXINSTANCES 6  ARCHIVELOG;

(3) We can display control file information using views such as v$database, v$controlfile, v$controlfile_record_section and v$parameter.
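
For example (a quick sketch):

      SQL> select name, created, log_mode from v$database;
      SQL> select name from v$controlfile;
      SQL> select type, records_used, records_total from v$controlfile_record_section;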

(4) The control file locations and names are specified by the CONTROL_FILES parameter in the initialization parameter file.
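
For instance, the entry in a text PFILE (init.ora) might look like the sketch below (paths are just examples), and SHOW PARAMETER displays the value the running instance is using:

      control_files = ('/u01/oracle/prod/control01.ctl',
                       '/u02/oracle/prod/control02.ctl',
                       '/u03/oracle/prod/control03.ctl')

      SQL> show parameter control_files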

Mar 8, 2011

Invalid Dimension Objects - Query Performance in a Data Warehouse

We sometimes come across situations where query performance gets poor in a data warehouse environment. There are several reasons for that, but I would like to focus particularly on the impact of a dimension becoming invalid.

When a dimension becomes invalid, any query that could take advantage of the dimension during the query rewrite process can no longer use it. This can lead to slower query execution. It is well known that the hierarchical/join/attribute relationship structure defined by dimensions can provide significant performance advantages.

As a result, it might be worth checking the validity of dimensions if your queries run slower than they used to.


  • Check if any dimension is invalid. A dimension becomes invalid when we alter/drop any of the objects that the dimension references.
        SQL> select invalid, dimension_name from dba_dimensions;
  • Recompile the invalid dimension.
        SQL> alter dimension cust_dim compile;
---------------------------------------------------------------------------------------------------------

We can view how a dimension has been defined using the DBMS_DIMENSION.describe_dimension('cust_dim') procedure.

Also, we can verify the hierarchical/attribute/join relationships on all the rows defined by the dimension using the DBMS_DIMENSION.validate_dimension('sh.cust_dim', false, true) procedure. Note that the DIMENSION_EXCEPTIONS table should be created before running this validate procedure, because any error (bad row) identified during the validation process gets recorded in that table.
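
A rough end-to-end sketch, assuming the SH sample schema and that the utldim.sql script (which creates the DIMENSION_EXCEPTIONS table) is available under $ORACLE_HOME/rdbms/admin:

      SQL> @?/rdbms/admin/utldim.sql
      SQL> set serveroutput on
      SQL> exec DBMS_DIMENSION.describe_dimension('sh.cust_dim');
      SQL> exec DBMS_DIMENSION.validate_dimension('sh.cust_dim', false, true);
      SQL> select * from dimension_exceptions;

DESCRIBE_DIMENSION prints the dimension definition through DBMS_OUTPUT (hence SET SERVEROUTPUT ON), and any bad rows found by VALIDATE_DIMENSION show up in dimension_exceptions.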