
Friday 25 August 2017

ORA-12505: TNS:listener does not currently know of SID given in connect descriptor



Many a time, when a user tries connecting to the database through an application, the user gets the error below.





Error: ORA-12505: TNS:listener does not currently know of SID given in connect descriptor at OCI call OCIServerAttach. [nQSError: 17014] Could not connect to Oracle database. (HY000)

This is a very common issue, and there are many possible reasons for this error.


Cause: The user is unable to connect to the database through the connect descriptor that is used to connect to the database from a remote server or from an application.

Solution: 
Since the listener is running, try to connect to the database manually through the connect descriptor, which is usually stored in the tnsnames.ora file. Always make sure that the connect descriptor has the correct database name, hostname and port.
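
For reference, a typical tnsnames.ora entry looks like the sketch below; the host, port and service name here are placeholders, so substitute your own values. Note that ORA-12505 is raised when the client sends a SID that the listener does not recognise, so using SERVICE_NAME instead of SID in the descriptor is usually safer.

PRODDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = PRODDB)
    )
  )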


Example:

sqlplus usrname/pwd@PRODDB

SQL*Plus: Release 11.2.0.1.0 Production on Fri Jan 20 04:39:03 2012

Copyright (c) 1982, 2009, Oracle.  All rights reserved.

ERROR:
ORA-12505: TNS:listener does not currently know of SID given in connect
descriptor



·        Then check the database name in the connect descriptor in the tnsnames.ora file.

·        If that is correct, check the services in the listener to find out whether the requested database is being serviced by the listener. You can use the command below to check this.


$lsnrctl services listener_name

·        If the database is found, wait a few seconds and then check the connection again.
·        If the database is not found in the services, manually register the database with the listener using the command below.

SQL> alter system register;

Check again after a few seconds; you should be able to connect to the database using the connect descriptor.
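
To confirm the registration worked, you can re-run lsnrctl services. Assuming the default listener name LISTENER, the relevant part of the output should now look roughly like this (handler details vary by version):

$ lsnrctl services LISTENER

Service "PRODDB" has 1 instance(s).
  Instance "PRODDB", status READY, has 1 handler(s) for this service...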

If it is still not working, check the connection string of the database and provide that connection string to the user, to make sure the same string is used by everyone.

Also, check the mount points on the host. Sometimes the inodes on a filesystem get exhausted, which can leave the listener in a hung state and prevent users from connecting. If so, perform the necessary housekeeping on the respective mount points and check the listener status again.
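
A quick way to spot inode exhaustion is df -i; a full IUse% column points to the problem filesystem (the device and mount point below are only examples):

$ df -i
Filesystem      Inodes   IUsed  IFree IUse% Mounted on
/dev/sda1      1310720 1310720      0  100% /u01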

Happy Learning  :) 
Keep Sharing . . .



Saturday 17 September 2016

What Will a Junior DBA Do When Facing “Slowness in the Database”?



Many a time, a DBA gets a call from the client side regarding slowness of the database.

In these cases, the fact is that the database itself is rarely slow or fast; in most cases, the sessions connected to the database slow down when they receive an unexpected hit. To solve the issue, you need to find that unexpected hit.

There are a few checks a junior DBA can perform to detect and troubleshoot slowness.

1. Taking user inputs:
·        Is the whole application slow, or only a particular batch process?
·        Is the slowness observed throughout the system, or only for one or a few users?
·        Does it happen at some particular time?
By collecting this information we get an outline of what needs to be checked.


2. Check for any lock contention
You can use the below query for this.
SQL> select count(*) from v$lock where block=1;
·        If the count is greater than zero, there is a blocking lock in the database.
·        Check with the application team and release the blocking sessions (a query to identify the blockers follows).
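
A sketch of a query to identify the blocked sessions and their blockers (the BLOCKING_SESSION column is available in v$session from 10g onwards):

SQL> select sid, serial#, blocking_session, event
     from v$session
     where blocking_session is not null;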

3. Locking is not the only thing that affects performance; disk I/O contention is another common cause.
·        When a session retrieves data from the database datafiles on disk into the buffer cache, it has to wait until the disk sends the data. The wait event shows up for the session as "db file sequential read" (for an index scan) or "db file scattered read" (for a full table scan). When you see the event, you know that the session is waiting for I/O from the disk to complete.

·        To improve session performance, you have to reduce that waiting period. The exact step depends on the specific situation, but the first technique, reducing the number of blocks retrieved by a SQL statement, almost always works.

·        Reduce the number of blocks retrieved by the SQL statement. Examine the SQL statement to see whether it is doing a full-table scan when it should be using an index, whether it is using the wrong index, or whether it can be rewritten to reduce the amount of data it retrieves (a system-level check of these wait events follows this list).
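
As a quick system-level check of the two wait events mentioned above, something like this works (in v$system_event, TIME_WAITED is in centiseconds):

SQL> select event, total_waits, time_waited
     from v$system_event
     where event in ('db file sequential read', 'db file scattered read');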


4. Check the resource utilization:
You can check the following areas:
·        CPU utilisation
·        Load
·        Memory utilization

·        Run the top command in Linux to check CPU usage.
·        Check whether any single process is holding the CPU for a long time, and note that process ID. Pressing 'c' in top shows the full command line of each process, which helps identify what is consuming the CPU.
·        Run vmstat, sar or prstat to get more information on CPU and memory usage and possible blocking (sample invocations below).
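
Sample invocations; the intervals and counts below are just reasonable defaults (sar requires the sysstat package, and prstat is the Solaris equivalent of top):

$ top                # interactive; press 'c' to show full command lines
$ vmstat 5 5         # CPU, memory and run-queue statistics every 5 seconds
$ sar -u 5 5         # CPU utilisation, 5 samples at 5-second intervals
$ prstat 5 5         # Solaris: per-process resource usage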



5. Check the alert log:
Check the alert log. You can check how many log switches are happening in one hour.
If you have more than 5 archives per hour, you may need to increase the redo log size. Troubleshoot the errors if they are critical or related to performance.
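
One common way to count log switches per hour is a query against v$log_history along these lines:

SQL> select to_char(first_time, 'YYYY-MM-DD HH24') hour, count(*) switches
     from v$log_history
     group by to_char(first_time, 'YYYY-MM-DD HH24')
     order by 1;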

6. Server side checks
Check the memory, paging and I/O utilisation from the server side.
Paging and memory can be checked with the 'top' command, and 'iostat' will give the I/O statistics (see the sample below).
Contact the concerned team about any abnormality you see here.
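
For the I/O side, an extended iostat run is usually enough to spot a saturated device (the flags below are the common Linux sysstat ones):

$ iostat -x 5 3      # extended device statistics, 5-second interval, 3 samples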

Advanced performance tuning is not in the scope of this blog; this post is for junior DBAs. Hope it helps.

Expecting your comments...


Source : ArunSankar Blog






Saturday 10 September 2016

Datapump Architecture: What is the Master Table in Datapump?





Master Table :

The Master Table is created in the schema of the current user running the Data Pump export or import, and it keeps track of a lot of detailed information.

The Master Table is used to track the detailed progress information of a Data Pump job.

This will store the following information :
·         The status of every worker process involved in the operation.
·         The current set of dump files involved.
·         The job’s user-supplied parameters.
·         The current job status and restart information.
·         The current state of every object exported or imported and their locations in the dump file set.

Note :  The Master Table is the key to Data Pump’s restart capability in the event of a planned or unplanned job stoppage.

Behaviour of Master Table :
This table is created at the beginning of a Data Pump operation and is dropped at the end of the successful completion of a Data Pump operation. The Master Table can also be dropped if the job is killed using the kill_job interactive command. If a job is stopped using the stop_job interactive command or if the job is terminated unexpectedly, the Master Table will be retained. 
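
For example, a stopped job can be resumed by attaching to it and issuing start_job; the job name below is only an illustrative default-style name:

$ expdp system/password attach=SYS_EXPORT_SCHEMA_01

Export> start_job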

The keep_master parameter can be set to YES to retain the Master Table at the end of a successful job for debugging purposes.
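
A hypothetical export that retains the Master Table afterwards (this assumes the HR schema and the DATA_PUMP_DIR directory object exist; keep_master is available from 11g onwards):

$ expdp hr/hr schemas=hr directory=DATA_PUMP_DIR dumpfile=hr.dmp keep_master=yes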


The name of the Master Table is the same as the Data Pump job name, and its columns can be examined by describing it:

SQL> desc <job_name>;
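
Running jobs, and hence the Master Table names, can also be listed from the data dictionary:

SQL> select owner_name, job_name, operation, state from dba_datapump_jobs;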



Processes in the Datapump Architecture


Master Control Process :
·         Maintains job state, job description, restart, and dump file set information in the Master Table.
·         This process controls the execution and sequencing of a Data Pump job.
·         The master control process has two main functions:
1.       To divide the loading and unloading of data and metadata tasks and to handle the worker processes;
2.       To manage the information in the Master Table and to record job activities in the log file.


Worker Process:
·         This handles the requests assigned by the master control process. This process maintains the current status of the job, like 'pending', 'completed' or 'failed'.
·         The worker process is responsible for loading and unloading data and metadata.
·         The number of worker processes needed can be defined by assigning a number to the parallel parameter (see the example below).
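
For instance, a hypothetical full export with four worker processes writing to a set of dump files (the %U substitution generates one file per worker; DATA_PUMP_DIR is assumed to exist):

$ expdp system/password full=y directory=DATA_PUMP_DIR dumpfile=full%U.dmp parallel=4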



Parallel Query Process:
·         This process is used when Data Pump chooses the External Table API as the data access method for loading and unloading data.
·         The worker process that uses the External Table API creates multiple parallel query processes for data movement, with the worker process acting as the query coordinator.


Shadow Process :
·         This process is created when a client logs into the Oracle server. 
·         The shadow process creates a job, which primarily consists of creating the Master Table, creating the queues in Advanced Queues (AQ) used for communication among the various processes, and creating the master control process.
·         Once a job is running, the shadow process’ main job is to check the job status for the client process.  If the client process detaches, the shadow process goes away; however, the remaining Data Pump job processes are still active.
·         Another client process can create a new shadow process and attach to the existing job.