In most cases, it is important for a database to ensure that its users read only committed data. Each database engine has its own way of providing this read consistency.
(1) SQL Server
There is something called "Transaction isolation level" in SQL Server. The transaction isolation level is set to "Read Committed" by default. This default setting enforces that data readers read committed data only.
This default behavior has a drawback. If a user is updating records and the update transaction takes a long time to complete, other users cannot read the same records right away; their read requests are blocked until the update transaction completes.
To reduce this kind of blocking and waiting, SQL Server offers another option: Read Committed Snapshot Isolation (RCSI). It uses tempdb to store the original versions of changed rows. With this isolation level, readers can still read the committed version of records while another user is changing them.
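As a rough sketch (the database name here is just a placeholder), RCSI is enabled at the database level like this:
ALTER DATABASE SalesDB              -- hypothetical database name
SET READ_COMMITTED_SNAPSHOT ON
WITH ROLLBACK IMMEDIATE;            -- optional: roll back open transactions so the change can take effect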
(2) Oracle
Oracle has something called undo. Undo information is a record of how to undo a change, and it is stored in a special tablespace, the undo tablespace. Undo information provides read consistency to readers even while the data they are reading is being changed by another user: they simply see the data in its original state.
* A similarity can be observed between SQL Server's RCSI and Oracle's UNDO.
Mar 25, 2013
Logical query processing order
The logical query processing order is:
(1) FROM
(2) WHERE
(3) GROUP BY
(4) HAVING
(5) SELECT
(6) ORDER BY
> "Each phase operates on one or more tables as inputs and returns a virtual table as output. The output table of one phase is considered the input of the next phase. This is in accord with operation on relations that yield a relation."
> "If you assign an alias to a table, you basically rename the table for the duration of the query. The original table name isn't visible anymore; only the alias is."
------------------
Reference: Querying Microsoft SQL Server 2012 Training Kit by Itzik Ben-Gan, Dejan Sarka, Ron Talmage
Feb 7, 2012
CURSOR - Current set of Records
"CURSOR" stands for CURrent Set Of Records.
An interesting way of creating a name. I think I like the name, Cursor.
Aug 7, 2011
Set-based operation Versus Cursor-based operation
We develop queries to retrieve and process data. The processing can be done at the row level (one row after another) or at the level of the whole data set (the whole set at once). A cursor lets us process one row at a time (a cursor-based operation). The following examples* illustrate the difference between a set-based operation and a cursor-based operation.
> SET-based operation
UPDATE s -- update the entire set of data at once.
SET StatusCode = 'ACTIVE', ModDate = dbo.DateTrunc('day', GETDATE())
FROM dbo.Site s
INNER JOIN dbo.Contact c WITH (NOLOCK) ON s.SiteNo = c.SiteNo
> Cursor-based operation (row-based/serial)
DECLARE @ls_SiteNo CHAR(10)
DECLARE crsModule1 CURSOR LOCAL STATIC FORWARD_ONLY FOR
SELECT s.SiteNo
FROM dbo.Site s
INNER JOIN dbo.Contact c WITH (NOLOCK) ON s.SiteNo = c.SiteNo
OPEN crsModule1
FETCH NEXT FROM crsModule1 INTO @ls_SiteNo
WHILE (@@fetch_status = 0 ) BEGIN
UPDATE dbo.Site
SET StatusCode = 'ACTIVE', ModDate = dbo.DateTrunc('day', GETDATE())
WHERE SiteNo = @ls_SiteNo -- update each row at a time.
FETCH NEXT FROM crsModule1 INTO @ls_SiteNo
END
CLOSE crsModule1
DEALLOCATE crsModule1
1) Database engines are optimized for set-based operations. There are some cases where a serial operation is the only option, or the better one, but generally speaking, if we can avoid using a cursor, we are better off.
2) With a small data set, the difference between the two approaches may be minimal, or not noticeable at all. As the volume of data grows, though, the performance difference will probably become more obvious.
3) Personally, I suspect that a set-based operation is still processed serially inside the database engine in the end. Even so, experts recommend using set-based operations whenever possible, and it seems better to let the database engine take care of the serial processing. The engine normally knows more than we do about how to carry out a data operation efficiently, so manually working out how to process the data row by row is probably not a good idea. Just a thought...
-----------------------------------------------------------------
* Thanks to Mr. Gord Gray, who kindly provided these examples to help me understand this subject.
Aug 6, 2011
NCHAR, NVARCHAR, NTEXT / NCLOB - supporting Unicode characters
As more companies deploy their databases globally, the databases need to be able to handle Unicode. Unicode lets us represent the characters used in most human writing systems.
The following data types allow us to store Unicode characters:
- NCHAR, NVARCHAR, NTEXT, NVARCHAR(max), NCLOB
The following example shows how we can use these data types.
CREATE TABLE unicodeDataTable
( text_id number PRIMARY KEY,
  uni_text nvarchar(20)
);
INSERT INTO unicodeDataTable(text_id, uni_text) VALUES (1, N'이동훈');
INSERT INTO unicodeDataTable(text_id, uni_text) VALUES (2, N'こんにちは');
SELECT * FROM unicodeDataTable;
text_id    uni_text
1          이동훈
2          こんにちは
---------------------------------------------------------
NTEXT : (SQL Server) NTEXT is deprecated; Microsoft recommends using NVARCHAR(max) instead.
NCLOB & CLOB : (Oracle) can store roughly 8 to 128 terabytes of character data (11g).
May 17, 2011
Space management in Oracle - Shrink Segments
In an Oracle database, table rows are stored in data blocks. Usually a block contains multiple rows, but in some cases a single row can reside in more than one data block. Rows can also move from one block to another.
* Row migration -- When the length of a row increases beyond the available free space in its block, the entire row moves to a new data block. The system leaves a pointer at the row's original location that points to its new, migrated location.
* Row chaining -- When we insert a very long (large) row that doesn't fit into a single data block, the system stores the row in a chain of data blocks.
* As we add, delete and modify data in a table, the rows can easily become spread out. Recall the logical storage structure of the database: segments, extents and data blocks. A segment is a set of extents, and an extent is a specific number of contiguous data blocks. As we insert, delete and update data, a segment can easily become sparsely populated. What is the problem with a sparsely populated segment? We may have to read more data blocks when we run a query because the rows are spread across more blocks. This is especially true for a full table scan, and reading more data blocks can degrade database performance.
As shown below, we can improve database performance and increase space utilization by shrinking a segment. To use the SHRINK feature, the tablespace must use Automatic Segment Space Management (ASSM), and the table's ROW MOVEMENT attribute must be enabled.
The SHRINK process first moves the scattered rows so that they are packed together. It then adjusts the HWM (high water mark) so that the unused space can be released. The COMPACT option lets us move the rows without adjusting the HWM.
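A minimal sketch of the commands involved (the schema and table names are placeholders):
SQL> ALTER TABLE hr.big_table ENABLE ROW MOVEMENT;      -- required before shrinking
SQL> ALTER TABLE hr.big_table SHRINK SPACE COMPACT;     -- pack the rows only; the HWM is not moved yet
SQL> ALTER TABLE hr.big_table SHRINK SPACE;             -- pack the rows and move the HWM, releasing unused space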
May 9, 2011
Flashback Database Vs. RMAN Recovery
1) Flashback Database Vs. RMAN incomplete recovery
Flashing back a database to an earlier point in time yields the same result as an incomplete (point-in-time) RMAN recovery. The difference is that Flashback Database uses the flashback logs stored in the FRA, while RMAN incomplete recovery uses a backup set and archived logs.
I was personally curious whether flashing back a database is really faster than doing an incomplete RMAN recovery. Oracle says it is, but I wanted to test it myself. So I tested it and found that it is true; I was even impressed with how quickly the flashback job completed. I dropped a user (schema) with a lot of data in the test database, then logged into the database through an RMAN session and ran the FLASHBACK DATABASE command. It took only a few seconds.
2) When to use Flashback database? When to use RMAN recovery?
Flashback technology seems to be very effective when we want to recover from errors that happened recently (not several days or a couple of weeks ago). How far back can we flash back our database? It depends on how long we keep our undo data (undo_retention) and our flashback logs (db_flashback_retention_target). These two parameters are normally not set to very large values; they are usually kept relatively small to manage disk space. undo_retention is set to 15 minutes by default, and db_flashback_retention_target seems to default to something like 48 hours according to Oracle's documentation on OTN.
So I would probably use RMAN incomplete recovery if I wanted to return my database to a much earlier point in time (for example, 5 days, 1 week or 3 weeks ago). Flashback Database essentially rolls the database back incrementally to an earlier point in time, and rolling it back all the way to a point two weeks or a month ago would probably take much longer than restoring and recovering the database from a backup set and a relatively small set of archived logs.
In sum, I think RMAN backup/recovery and Flashback technology are complementary parts of a proper and effective database recovery strategy. It is good to have both of them in place.
3) How to configure/enable Flashback Database
- We first need to configure the Fast Recovery Area (FRA) because the flashback logs are stored only in the FRA.
- We also need to ensure that the database is running in ARCHIVELOG mode.
- Set the db_flashback_retention_target parameter.
SQL> alter system set db_flashback_retention_target = 2880 scope=both;
- Start the database in MOUNT EXCLUSIVE mode.
- Turn Flashback Database on:
SQL> alter database flashback ON;
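Put together, the sequence might look roughly like this (a sketch; the exact steps depend on the current state of the database):
SQL> shutdown immediate;
SQL> startup mount;                 -- mount the database exclusively (single instance)
SQL> alter database archivelog;     -- only if the database is not already in ARCHIVELOG mode
SQL> alter system set db_flashback_retention_target = 2880 scope=both;
SQL> alter database flashback on;
SQL> alter database open;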
May 8, 2011
Flashback Drop - restoring a dropped table from the Recycle Bin
Let's say we have dropped a table by mistake and we need to restore it.
Can we restore the table from an RMAN backup? Yes, we can, but it is a lot of work. We would restore the data file (tablespace) that contains the dropped table, or the whole database, to a new host from the backup, and then export the table from the restored database and import it into the target database. Both the restore process and the export/import process involve many detailed steps.
Do we have a quicker way? We can use Flashback technology to restore the dropped table quickly. "Flashback Drop" allows us to restore a dropped table from the recycle bin as long as the table has not been purged yet.
When we drop a table, the table and all associated objects (indexes, constraints) are renamed to "BIN$...". They still exist in the database and still belong to the owner; they are simply remapped to the recycle bin. For this reason, we can easily restore the dropped table.
SQL> FLASHBACK TABLE table_name TO BEFORE DROP;
When the table is restored, the associated objects and constraints are also restored, except for referential integrity constraints and bitmap indexes. Note, however, that the names of the restored objects and constraints remain "BIN$..."; they are not reverted to their original names. It is the DBA's job to rename them if needed.
To use the Flashback Drop feature, we first have to make sure the RECYCLEBIN initialization parameter is set to ON (the default). Lastly, be aware that Oracle automatically reclaims space in the recycle bin when additional space is needed by purging the oldest dropped objects (on a FIFO basis). Once an object is purged, it no longer exists in the database, so we cannot restore it from the recycle bin with FLASHBACK DROP.
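A small sketch (the table and index names are made up) of checking the recycle bin and restoring a table under a new name:
SQL> SELECT object_name, original_name, droptime FROM recyclebin;    -- what is still recoverable
SQL> FLASHBACK TABLE emp_history TO BEFORE DROP RENAME TO emp_history_restored;
SQL> ALTER INDEX "BIN$...example...$0" RENAME TO emp_history_idx;    -- associated indexes keep their BIN$ names until renamed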
May 1, 2011
Oracle Flashback Technology and Flashback Data Archive
Oracle first introduced Flashback Query in Oracle 9i, and more flashback technologies arrived with Oracle 10g. Oracle states that Flashback features are usually more efficient and less disruptive than RMAN point-in-time recovery.
RMAN point-in-time recovery:
We first need to restore and then recover the damaged datafile(s) or the database. The restore and recovery processes may take a long time, and if we have to restore from a tape backup set, it takes even longer.
- Restore : Copy the data file/database from backup(s)
- Recover: Apply all the changes to the data file/database from incremental backups, archived redo logs and online redo logs to the point in time to which we want to return the datafile(s) or the database.
Flashback technologies mostly rely on undo data with the exception of Flashback Drop and Flashback Database.
- Flashback Query : we can specify a target time (timestamp) or SCN (System Change Number) and run queries against the database to view data as it appeared at that target time or SCN.
- Flashback Version Query : shows all versions of the data between two target times or SCNs.
- Flashback Transaction Query : shows the changes made by a single transaction.
- Flashback Transaction : reverses a transaction.
- Flashback Table : recovers a table or set of tables to a specified point in time in the past without taking any part of the database offline. We can use Flashback Query and/or Flashback Version Query to determine a desirable point in time to which to return the table.
- Flashback Drop : reverses the effects of a DROP TABLE statement. When we drop a table without the PURGE option, the table is renamed to "BIN$..."; it still exists in the database and is just remapped into the recycle bin. Flashback Drop brings the dropped table back from the recycle bin.
- Flashback Database : requires flashback logs stored in the Fast Recovery Area (FRA). Flashing back a database produces the same result as a database point-in-time recovery.
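As a quick illustration of the first two features (the table, column and time windows are placeholders), Flashback Query and Flashback Version Query look like this:
-- Flashback Query: view the data as it appeared 30 minutes ago
SELECT * FROM hr.employees
AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '30' MINUTE)
WHERE employee_id = 100;
-- Flashback Version Query: show all committed versions of a row over the last hour
SELECT versions_starttime, versions_endtime, versions_operation, salary
FROM hr.employees
VERSIONS BETWEEN TIMESTAMP (SYSTIMESTAMP - INTERVAL '1' HOUR) AND SYSTIMESTAMP
WHERE employee_id = 100;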
Then, how far back can we return our data to?
Most of the flashback technologies rely on undo data only, so they have some advantages over RMAN point-in-time recovery. However, there is a limitation: how far back can we return our data? The answer is only as far back as the undo data allows. The UNDO_RETENTION initialization parameter specifies how long we would like to keep unexpired undo data to support flashback operations as well as read consistency; undo data that falls within the UNDO_RETENTION period is called unexpired undo. To guarantee the undo retention for the specified period, we can alter the undo tablespace to RETENTION GUARANTEE.
Flashback Data Archive (Oracle Total Recall)
The default setting of UNDO_RETENTION is 900 seconds (15 minutes). What if we want to keep the historical data much longer, say a year or two? Flashback Data Archive (Oracle Total Recall) lets us keep the change history for a much longer period by archiving the undo data in a separate tablespace. If we set the RETENTION of the flashback archive to 2 years, we can view our data as it appeared 2 years ago without having to restore and recover the database to that point in time. A Flashback Data Archive allows us to track and store transactional changes to a table over its lifetime. According to Oracle's documentation, it is especially useful for compliance with record stage policies and audit reports.
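A minimal sketch of setting one up (the archive, tablespace and table names are placeholders):
SQL> CREATE FLASHBACK ARCHIVE fda_two_years
     TABLESPACE fda_tbs
     QUOTA 10G
     RETENTION 2 YEAR;
SQL> ALTER TABLE hr.employees FLASHBACK ARCHIVE fda_two_years;   -- start tracking changes to this table
-- Later, query the table as of a point much older than undo alone would allow
SQL> SELECT * FROM hr.employees AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '400' DAY);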
-------------------------------------------------------------------------
Reference: http://download.oracle.com/docs/cd/E11882_01/backup.112/e10642.pdf
Apr 19, 2011
Materialized views increase query performance - Why and how?
In data warehouse environments, we usually run long-running queries that might take hours. So it is very important for DBAs and developers to make an effort to reduce query response time.
How can we improve query response time? Use indexes, partition the data, and implement parallel execution.
Are these all the options that we have?
The queries we run in a data warehouse involve intensive computation, data aggregation, summarization and joins. Using indexes, partitioning tables and running in parallel will definitely help reduce query response time, but how much? The queries might still run slowly if they involve expensive joins and heavy aggregation and summarization. How can we overcome this problem?
What if we store the result of such a long-running query in a separate table (a summary table)? We can then query the summary table directly without running the original, long-running query again. This is great!!! We can probably use "CREATE TABLE ... AS SELECT ..." (CTAS) to create the summary table.
This solution seems like a great idea at first, but there are a couple of drawbacks. (1) What if the data in the original (base) tables changes? The summary table becomes out of date, and a query against it no longer reflects the current state of our data. We would need to refresh the summary table manually, probably by dropping it and re-running the CTAS statement. Do we have to do this whenever the original (base) tables are modified? Um... I am not sure about that.
(2) Another inconvenience with this approach is that end users have to know that the summary tables exist, and their names, in order to use them. They also have to modify their applications to use the summary tables. As a result, summary tables are not a perfect solution for improving query response time.
What can we do then? To solve this problem, Oracle introduced materialized views (MVs). DBAs identify expensive SQL queries and create MVs for them. When users run the same expensive queries, the Oracle server internally rewrites the original query into a new query that uses the MV the DBA created earlier. The rewritten query simply reads the MV instead of accessing the original (base) tables and going through the whole expensive summarization and joins. As a result, users get their query results back much faster. This query rewrite happens behind the scenes; the user does not even have to know the MV exists.
They would simply notice that their original query now runs much faster than it used to. They would be HAPPY now.
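As a minimal sketch (using the SH sample schema; the refresh options are just one reasonable choice), a summary MV that the optimizer can use for query rewrite might look like this:
SQL> CREATE MATERIALIZED VIEW sales_by_month_mv
     BUILD IMMEDIATE
     REFRESH COMPLETE ON DEMAND      -- fast/on-commit refresh would require MV logs on the base tables
     ENABLE QUERY REWRITE
     AS
     SELECT t.calendar_month_desc, SUM(s.amount_sold) AS total_sold
     FROM   sh.sales s JOIN sh.times t ON s.time_id = t.time_id
     GROUP BY t.calendar_month_desc;
Users keep querying the base tables; when query rewrite kicks in, the optimizer answers from sales_by_month_mv instead.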
Apr 17, 2011
Transportable tablespace - Fastest way to move a large volume of data
In data warehouse environments, we often move data from one system to another. There are a few ways to transport data: flat files (the most common and simple), distributed queries, or transportable tablespaces. Today I would like to talk about how to use a transportable tablespace.
Using a transportable tablespace.
- This is the fastest way to move large volumes of data between two Oracle databases.
- Limitations: the source and target systems must have the same O/S and must use the same character set.
< Example >
Let's say we have a table called 'sales' in database1 (SH schema), and we want to move all the sales records since the year 2000 to database2 (SH schema). The volume of records to move is very large, so we have decided to use a transportable tablespace.
> In the source system
1. Create a tablespace that will hold the table we will create in the next step.
SQL> CREATE TABLESPACE temp_sales_tbls
     DATAFILE '/tmp/temp_sales.dbf'
     SIZE 30M REUSE AUTOEXTEND ON;
2. Create a table in the new tablespace; this table will hold the sales records we need to transport.
SQL> CREATE TABLE temp_sales_table NOLOGGING
     TABLESPACE temp_sales_tbls
     AS
     SELECT * FROM sh.sales WHERE time_id > '31-DEC-1999';
3. Alter the tablespace to READ ONLY to prevent any further changes to it.
SQL> ALTER TABLESPACE temp_sales_tbls READ ONLY;
4. Create a directory object that points to the location where the dump file for the Data Pump export will be written.
SQL> CREATE DIRECTORY dump_dir AS '/temp/dump/';
5. Run the Data Pump export utility, expdp, to create a dump file that contains the metadata for the tablespace temp_sales_tbls.
$> EXPDP sh/sh DIRECTORY=dump_dir DUMPFILE=sales.dmp
TRANSPORT_TABLESPACES=temp_sales_tbls
> In the target system
6. Now we copy the dump file (sales.dmp) and the datafile (temp_sales.dbf) to the target system.
- Copy the dump file to the location that a new directory object will point to
  (we will create this directory object in step 7), e.g. /temp/dump_import_loc/
- Copy the datafile to where the datafiles of database2 are located, e.g. /home/oracle/oradata/
7. Create a directory object pointing to that location.
SQL> CREATE DIRECTORY dump_import_dir AS '/temp/dump_import_loc/' ;
8. Run the Data Pump import utility, impdp, to make the datafile (temp_sales.dbf) accessible to database2.
$> IMPDP sh/sh DIRECTORY=dump_import_dir
DUMPFILE=sales.dmp LOGFILE=imp.log
TRANSPORT_DATAFILES='/home/oracle/oradata/temp_sales.dbf '
9. We can verify our work so far.
SQL> connect sh/sh
SQL> Select count(*) from temp_sales_table;
SQL> select * from temp_sales_table;
10. Finally, don't forget that the target database already has a 'sales' table. Our original goal is to move the sales records since the year 2000 from the source database to the target database, so we need to insert the imported records into the 'sales' table of the target database. In the target database:
SQL> connect sh/sh
SQL> INSERT /*+ APPEND */ INTO sales
SELECT * FROM temp_sales_table;
--------------------------------------
*** This is a simple example. There might be more processes involved in a real-life situation.
Apr 3, 2011
Enhancing database security - Application Contexts & Virtual Private Database
< Hypothesis >
(1) You are a DBA for your company's database.
(2) Your company's database has a schema called HR where you basically store all the HR-related data.
(3) There is a table called employees in the HR schema. This table contains the name, emp_id, position, salary and commission for every employee in your company.
(4) All employees can access the employees table (they are granted SELECT on the table).
(5) However, for security and data-confidentiality reasons, you (the DBA) need to find a way to limit each employee's data access so that they can see only their own record. In other words, you need to control each user's data access at the row level.
Issue
An Oracle database provides several security measures: password profiles, authentication methods, system/object privileges and roles. However, none of these measures provides data access control at the row level; object privileges control access at the object level at most.
Impact
Because all employees are granted the SELECT object privilege on the employees table, they can view anybody's records, including salary and commission, which is what we don't want.
Solution
We can make use of an application context and Virtual Private Database to overcome this issue.
Virtual Private Database (VPD) combines an application context with fine-grained access control. The following steps show how an application context and VPD are implemented; the steps are simplified and do not show every detail.
1. Application Context: When we create an application context, a memory area is allocated for it. An application context is like an associative array: we add attributes to the context and set their values. For VPD, we can store a user's current session information in an application context and then base the fine-grained access control on that session information.
(1) Make sure HR user has all privileges required to create an application context.
- We can let the HR user control the data access in the HR schema.
- We can grant roles and privileges to HR as follows.
> GRANT create session TO HR WITH ADMIN OPTION;
> GRANT create any context, drop any context, create role, alter any role, drop any role,
create table, create procedure, create any trigger, administer database trigger,
create any directory, alter profile, create profile, drop profile, audit system, alter system,
grant any object privilege TO HR;
> GRANT execute on DBMS_SESSION to HR;
(2) Creating an application context (as HR user)
SQL> CREATE CONTEXT context_name USING package_name ;
(3) Create a PL/SQL package that set the context attributes.
- This package should contain a procedure that sets the context attributes.
...
PROCEDURE set_emp_id
IS
  v_emp_id NUMBER;
BEGIN
  SELECT employee_id INTO v_emp_id
  FROM hr.employees
  WHERE email = SYS_CONTEXT('userenv', 'session_user');
  DBMS_SESSION.SET_CONTEXT('context_name', 'attribute_name', v_emp_id);
END;
...
(4) Create a trigger that calls the PL/SQL package to set the context attributes when a user logs in.
CREATE OR REPLACE TRIGGER hr_context_logon
AFTER LOGON ON DATABASE
BEGIN
  Package_Name.procedure_name;
END;
2. Virtual Private Database - fine-grained access control implementation. Create a security policy function that returns a predicate (a WHERE condition).
(1) GRANT execute on dbms_rls to HR. Grant exempt access policy to HR.
(2) Create a policy function as following (as HR user)
...
FUNCTION limit_emp( object_schema IN VARCHAR2, object_name IN VARCHAR2 )
RETURN VARCHAR2
IS
BEGIN
  RETURN 'emp_id = SYS_CONTEXT(''context_name'', ''attribute_name'')';
END;
...
(3) Creating the policy (as HR user)
SQL> BEGIN
       dbms_rls.add_policy( object_schema   => 'HR',
                            object_name     => 'employees',
                            policy_name     => 'policy_name',
                            function_schema => 'HR',
                            policy_function => 'package_name.function_name',
                            statement_types => 'select');
     END;
     /
All set... Now when an employee runs a SELECT statement against the EMPLOYEES table, he or she should see only his or her own record. The security function returns a predicate such as "emp_id = xxx", where xxx is the employee id stored in the application context, which was set when the employee logged in.
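In effect (a sketch using the placeholder names above), the employee's query is transparently rewritten along these lines:
-- What the employee types:
SELECT * FROM hr.employees;
-- What effectively runs once the VPD policy appends its predicate:
SELECT * FROM hr.employees
WHERE emp_id = SYS_CONTEXT('context_name', 'attribute_name');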
Mar 22, 2011
Oracle Database with Oracle RAC Architecture
< Hypothesis >
Assume we are running a single-instance database, and our database must be up and running 24/7. If we fail to keep it available for our users 24/7, we just have to go on vacation forever; there will be no coming back.
I would suggest to my DBA team that we consider implementing Oracle RAC for our database, because RAC can increase its availability and scalability.
(1) Single-instance database
There is a one-to-one relationship between the database and the instance. So if either the instance or the database storage has a problem, the database can become unavailable.
(2) RAC (Real Application Clusters)
There is a one-to-many relationship between the database and its instances. An Oracle RAC database can have up to 100 instances, all of which access ONE database. A cluster comprises multiple interconnected computers or servers that APPEAR AS IF they were one server to the database users and applications. Oracle provides "Oracle Clusterware" to manage this whole integrated system.
As shown in the picture below, each instance usually runs on a separate server (node). We could even place the servers in different locations (an extended distance cluster). This way, the database can remain available to users even if one server goes down. For example (using the picture below), even if node 1 is down, the other two nodes can still handle users' data requests. The RAC architecture therefore increases the availability of our database.
In addition, we can usually combine smaller commodity servers into one cluster. If we get more database users and need to increase system capacity, we can simply add one or more commodity servers to the cluster (higher scalability).
Excerpted from http://download.oracle.com/docs/cd/B28359_01/rac.111/b28254/admcon.htm#i1058057
** In addition, Oracle Clusterware also enables us to create a clustered pool of storage to be used by any combination of single-instance and Oracle RAC databases.
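As a quick check, from any node we can list all running instances of the RAC database with a query like this:
SQL> SELECT inst_id, instance_name, host_name, status
     FROM gv$instance;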
----------------------------------------------------------------------------------------------------------
< References >
http://en.wikipedia.org/wiki/Oracle_RAC
http://download.oracle.com/docs/cd/B28359_01/rac.111/b28254/admcon.htm
Mar 20, 2011
Recovering a control file using a current backup copy or a multiplexed copy
(1) One of the control files specified in CONTROL_FILES parameter is corrupted.
(2) One of the control files is inaccessible due to a media failure.
Even if only one of the multiplexed control files is corrupted or inaccessible, the database can't start up.
(1) When a control file is corrupted
- Shutdown the instance.
- Use O/S copy command to overwrite the bad control file with a good copy
$> cp /u01/oracle/prod/control01.ctl /u03/oracle/prod/control03.ctl
- Start up DB.
(2) When a control file is inaccessible due to a media failure
- Shutdown the instance.
- Use O/S copy command to copy the current copy of the control file to a new accessible location
$> cp /u01/oracle/prod/control01.ctl /u04/oracle/prod/control03.ctl
- Edit the Control_files parameter to replace the bad location with the new location
CONTROL_FILES = (/u01/oracle/prod/control01.ctl,
                 /u02/oracle/prod/control02.ctl,
                 /u04/oracle/prod/control03.ctl)
- Start up the database
------------------------------------------------------------------------
** If we use multiplexed control files, we can simply remove the bad control file from the CONTROL_FILES parameter and start the database back up quickly for the users. We can then reconstruct the bad control file and restart the database at a later time, when the database is not busy.
** If any of the multiplexed control files is no longer available or appropriate, we can just remove it from the CONTROL_FILES parameter, but we need to make sure there are always at least two control files.
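If the database uses an spfile, the parameter change in case (2) can be made with ALTER SYSTEM instead of editing a pfile by hand (a sketch; the paths are the same placeholders as above):
SQL> ALTER SYSTEM SET control_files =
       '/u01/oracle/prod/control01.ctl',
       '/u02/oracle/prod/control02.ctl',
       '/u04/oracle/prod/control03.ctl'
     SCOPE=SPFILE;
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP;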
Mar 19, 2011
Control Files & Database
(1) When a database is created, at least one control file is also created. The control file MUST be available for writing by the Oracle server whenever the database is open; without it, the database cannot be mounted. It is therefore recommended that we multiplex the control files on different disks and back them up, for safety reasons.
(2) We can create a new control file as follows.
CREATE CONTROLFILE SET DATABASE orcl
LOGFILE GROUP 1 ('/u01/oracle/prod/redo01_01.log', '/u01/oracle/prod/redo01_02.log'),
GROUP 2 ('/u01/oracle/prod/redo02_01.log', '/u01/oracle/prod/redo02_02.log'),
GROUP 3 ('/u01/oracle/prod/redo03_01.log', '/u01/oracle/prod/redo03_02.log') RESETLOGS
DATAFILE '/u01/oracle/prod/system01.dbf' SIZE 3M,
'/u01/oracle/prod/rbs01.dbs' SIZE 5M,
'/u01/oracle/prod/users01.dbs' SIZE 5M,
'/u01/oracle/prod/temp01.dbs' SIZE 5M
MAXLOGFILES 50 MAXLOGMEMBERS 3 MAXLOGHISTORY 400
MAXDATAFILES 200 MAXINSTANCES 6 ARCHIVELOG;
(3) We can display control file information using some views such as v$database, v$controlfile, v$controlfile_record_section and v$parameter
(4) The control file locations and names are specified by CONTROL_FILES parameter in the initialization parameter file.
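For example, two quick ways to check the current control file locations from SQL*Plus:
SQL> SELECT name FROM v$controlfile;
SQL> SHOW PARAMETER control_files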
Mar 8, 2011
Invalid Dimension Objects - Query Performance in Data Warehouse
We sometimes come across a situation where query performance degrades in a data warehouse environment. There are several possible reasons, but I would like to focus on the impact of a dimension becoming invalid.
When a dimension becomes invalid, any query that could have taken advantage of it during query rewrite can no longer use it, which can lead to slower query execution. It is well known that the hierarchical/join/attribute relationships defined by dimensions can provide significant performance advantages.
As a result, it might be worth checking the validity of your dimensions if your queries run slower than they used to.
- Check whether any dimension is invalid. A dimension becomes invalid when we alter or drop any of the objects it references.
SQL> select invalid, dimension_name from dba_dimensions;
- Recompile the invalid dimension.
SQL> alter dimension cust_dim compile;
We can view how a dimension has been defined using the DBMS_DIMENSION.describe_dimension('cust_dim') procedure.
Also, we can verify the hierarchical/attribute/join relationships on all the rows defined by the dimension using the DBMS_DIMENSION.validate_dimension('sh.cust_dim', false, true) procedure. Note that the 'dimension_exception' table should be created before running this validate procedure, because any error (bad row) identified during validation is recorded in that table.
Feb 28, 2011
RMAN backups: Incremental Level 0 backup Vs. Full backup.
We all know the difference between an RMAN full backup and an incremental level 0 backup: only an incremental level 0 backup can serve as the base for incremental level 1 (cumulative and differential) backups.
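For reference, a rough sketch of how the different backup types are taken in RMAN:
RMAN> BACKUP DATABASE;                                  -- full backup; cannot serve as the parent of level 1 backups
RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE;              -- baseline for subsequent incremental backups
RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE;              -- differential: changes since the last level 0 or level 1
RMAN> BACKUP INCREMENTAL LEVEL 1 CUMULATIVE DATABASE;   -- cumulative: changes since the last level 0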
I have been wondering about this for a while. Why not a full backup as the base? I finally came up with my own imaginary behind-the-scenes story. This is purely my own guess, so ignore it if you don't like my notion.
We know that the Oracle DBMS has evolved a lot, and I think there have been many changes and additions to its backup functionality.
1) There might not have been incremental backups in earlier versions of Oracle (Oracle 7, for example).
2) Incremental backups were introduced in a later version to allow more flexible backup strategies (probably level 1 differential only at that time).
3) People needed even more flexible backup strategies, so Oracle further introduced cumulative incremental and level 0 incremental backups.
But things got a bit complicated on the technical side. Perhaps the developers couldn't find a way to integrate the new backup types with the original full backup code, or they found it too expensive to change that code (it might have ended up as spaghetti code). So they probably decided to keep the original full backup alongside the level 0 incremental backup.
Again, this is just an idea. Peace!!!
Feb 24, 2011
Oracle data file - the maximum size
An Oracle data file can be up to 128 TB in size (that limit applies to a bigfile tablespace data file with a 32 KB block size). 128 TB is a huge number.
Feb 18, 2011
"Enterprise Manager is not able to connect to the database instance..." and SYSMAN account
Just want to document this issue here so that I won't forget it again...
Issue - EM is not able to connect to the database instance even though the instance is up and running. (In the screenshot the listener is down only because I didn't start it; that is not our concern here.)
Impact - we can't manage the database and its instance using Enterprise Manager.
Resolutions - We need to check on the SYSMAN account.
(1) SYSMAN: this account should be open, and its password should not be expired.
> SYSMAN: used to perform Oracle Enterprise Manager admin tasks. It is the EM super admin account, with the mgmt_user, resource and select_catalog_role roles granted. When we log in to OEM using SYS or SYSTEM, we are logging in as this super user.
** DBSNMP: if we find this account locked and expired, we need to alter it as well, because it is used to monitor and manage the database.
==> SQL> alter user sysman identified by newpassword account unlock;
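Before altering anything, we can confirm the current status of both accounts with a quick query:
SQL> SELECT username, account_status, expiry_date
     FROM dba_users
     WHERE username IN ('SYSMAN', 'DBSNMP');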
(2) Having the SYSMAN and DBSNMP accounts available seems to be good enough. However, I would also check on two more things.
> remote_login_passwordfile
I would make sure that this parameter is set to "exclusive"
==> SQL> show parameter password
SQL> alter system set remote_login_passwordfile=exclusive scope=spfile;
Then, restart the database instance.
> Does a password file exist?
I would also make sure that a password file exists, or just recreate it.
==> $> orapwd file=file_name password=new_password entries=max_num_of_users
i.e) orapwd file=$ORACLE_HOME/dbs/orapworcl password=oracle entries=10
Lastly, I have found that we need to be really, really PATIENT until EM starts working correctly. We might expect the problem to be fixed right away after resetting the accounts, but nope!!! It seems to take quite a long time for EM to recognize the reconfiguration. It is a good idea to restart EM Database Control.