Wednesday 25 January 2012

Appserver load balancing and failover



Application server load balancing and failover are specified in the configuration.properties file located in the web server directory. The psserver parameter is edited according to our needs. Some examples are given below.

1. psserver=host1:port1#wt1,host2:port2#wt2
eg: psserver=host1:9000#3,host2:9050#1
In this case, the appserver running on host1 will receive three times as many requests as the appserver running on host2.


2. psserver=<host>:<port>#wt{failover servers} - the failover string is added in curly brackets.
eg: psserver=host1:9000{host2:9020}
If host1 fails, host2 will take over.


3. psserver=host1:9000#3{host3:9080#3,host4:9070#1},host2:9020#1
In this case host3 and host4 are the failover servers for host1: if host1 is down, host3 and host4 will replace it, and they will also do weighted load balancing between themselves as specified.




4. Sequential backup, using ';' in the failover string
psserver=host1:9000#4{host3:9010;host4:9020},host2:9030#1
In this case, the system assigns requests to host4 only when both host1 and host3 are down.

MEMORY_TARGET parameter in Oracle 11g

The MEMORY_TARGET and MEMORY_MAX_TARGET parameters were introduced in Oracle 11g. Using these parameters, you can manage the SGA and PGA together rather than managing them separately (using SGA_TARGET, SGA_MAX_SIZE, PGA_AGGREGATE_TARGET and WORKAREA_SIZE_POLICY in 10g).

Before 11g we used the SGA_TARGET and SGA_MAX_SIZE parameters to dynamically size SGA components such as SHARED_POOL_SIZE, LARGE_POOL_SIZE and JAVA_POOL_SIZE. The PGA was controlled separately using PGA_AGGREGATE_TARGET and WORKAREA_SIZE_POLICY in 10g.

From 11g, if we are using the MEMORY_TARGET parameter, the SGA and PGA sizes are controlled dynamically through it. If you set SGA_TARGET, SGA_MAX_SIZE and PGA_AGGREGATE_TARGET to 0 and set MEMORY_TARGET (and optionally MEMORY_MAX_TARGET) to a non-zero value, Oracle will manage both the SGA components and the PGA together within the limit you specify.
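For example, switching an instance over to automatic memory management could look like the following sketch (the sizes are made-up values and an spfile is assumed; MEMORY_MAX_TARGET is a static parameter, so the instance must be restarted for it to take effect):

ALTER SYSTEM SET memory_max_target = 1536M SCOPE=SPFILE;
ALTER SYSTEM SET memory_target = 1024M SCOPE=SPFILE;
ALTER SYSTEM SET sga_target = 0 SCOPE=SPFILE;
ALTER SYSTEM SET pga_aggregate_target = 0 SCOPE=SPFILE;
-- restart the instance so the static parameter is picked up
SHUTDOWN IMMEDIATE
STARTUP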


The MEMORY_MAX_TARGET Parameter
This is similar to SGA_MAX_SIZE. MEMORY_MAX_TARGET provides a maximum size up to which you can dynamically increase MEMORY_TARGET. Therefore, within the constraint of MEMORY_MAX_TARGET, you can easily adjust the overall memory allocated to the database at any given time, dynamically.
If MEMORY_MAX_TARGET is not set manually, Oracle will configure it with a derived value equal to the value of the MEMORY_TARGET parameter. Oracle will set MEMORY_MAX_TARGET to 0 if MEMORY_TARGET is not set. If MEMORY_TARGET is not set and MEMORY_MAX_TARGET is set to a value greater than zero, then you can modify memory settings dynamically by changing MEMORY_TARGET, or by dynamically altering the other specific memory parameter settings. The figure below illustrates the auto memory parameter dependency.



For instance:
If MEMORY_TARGET is set to 1024MB, Oracle will manage the SGA and PGA components within that limit.

If MEMORY_TARGET is set to a non-zero value:

  • If SGA_TARGET, SGA_MAX_SIZE and PGA_AGGREGATE_TARGET are set to 0, 60% of the memory specified in MEMORY_TARGET is allocated to the SGA and the remaining 40% is kept for the PGA.
  • If SGA_TARGET and PGA_AGGREGATE_TARGET are set to non-zero values, these values are treated as minimum values.
  • If SGA_TARGET is set to a non-zero value and PGA_AGGREGATE_TARGET is not set, both are still auto-tuned and PGA_AGGREGATE_TARGET is initialized to (MEMORY_TARGET - SGA_TARGET).
  • If PGA_AGGREGATE_TARGET is set and SGA_TARGET is not set, both are still auto-tuned and SGA_TARGET is initialized to (MEMORY_TARGET - PGA_AGGREGATE_TARGET).
With this version, Oracle has become much smarter about exchanging memory between the SGA and PGA, which is a big step forward.
When starting up, Oracle takes memory equal to MEMORY_TARGET (or MEMORY_MAX_TARGET, if set) from the operating system RAM and manages its resources within that allocation.
This feature lets the DBA allocate a chunk of memory to a particular instance without worrying about the sub-allocations of the individual components.
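To see how the total is being carved up at runtime, and whether MEMORY_MAX_TARGET was set explicitly or derived, the 11g dynamic views can be queried; a small sketch:

select component, round(current_size/1024/1024) size_mb
from v$memory_dynamic_components
where current_size > 0;

select name, value, isdefault
from v$parameter
where name in ('memory_target', 'memory_max_target', 'sga_target', 'pga_aggregate_target');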





Figure: Relationship of Memory Parameters

Wednesday 11 January 2012

Symlink requirement for UNIX servers (For oracle10g and 11g from PT848 and above)



A specific PeopleTools release is compatible with the Oracle database version it was released on, as well as any subsequent Oracle database version supported by PeopleTools. Out of the box, PeopleTools will work with Oracle 9i. However, for Oracle 10g and 11g a symlink is required (Note: This is not required for IBM AIX).
In Oracle 10g the shared library name is libclntsh.xx.10.1, and in Oracle 11g it is libclntsh.xx.11.1, while PeopleTools looks for libclntsh.xx.9.0.
In order for the Application Server to boot up and PSRUN.MAK to run successfully, a Unix symlink must be defined from libclntsh.xx.10.1 or libclntsh.xx.11.1 to libclntsh.xx.9.0 (Note: The '.xx' suffix is platform specific. Solaris, Linux, and HP-Itanium use '.so'; HP-PARISC uses '.sl'). To accomplish this:

1) Log in as the Oracle owner.
2) cd $ORACLE_HOME/lib
3) Depending on your platform, create the symlink:
a) For Oracle10g on Solaris, RH 4.0, SuSE 9.0, or HP-Itanium, execute the following command to establish the symlink:
ln -s $ORACLE_HOME/lib/libclntsh.so.10.1 libclntsh.so.9.0
b) For Oracle10g on HP-PARISC, execute the following command to establish the symlink:
ln -s $ORACLE_HOME/lib/libclntsh.sl.10.1 libclntsh.sl.9.0

a) For Oracle 11g on Solaris, RH 4.0 or HP-Itanium, execute the following command to establish the symlink:
ln -s $ORACLE_HOME/lib/libclntsh.so.11.1 libclntsh.so.9.0
b) For Oracle 11g on HP-PARISC, execute the following command to establish the symlink:
ln -s $ORACLE_HOME/lib/libclntsh.sl.11.1 libclntsh.sl.9.0
After the symlink is established, you may rerun PSRUN.MAK or boot up the servers manually.
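To confirm the link points to the expected library, you can simply list the client libraries (output and versions will vary by platform):
ls -l $ORACLE_HOME/lib/libclntsh.*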


To remove the symlink, simply use the rm command.
1. Run ls -l to list the files created as symlinks. They will have an 'l' at the start of the permissions:
lrwxrwxrwx   1 oracle   dba           48 Jul  7 13:48 libclntsh.so -> /ds2/oracle/11.1.0.7-64bit/lib/libclntsh.so.11.1
lrwxrwxrwx   1 oracle   dba           43 Nov  5 11:28 libclntsh.so.10.1 -> /ds2/oracle/11.1.0.7-64bit/lib/libclntsh.so
-rwxr-xr-x   1 oracle   dba            0 Nov  5 11:30 libclntsh.so.11.1
2. Issue rm command
 rm libclntsh.so
When you remove the symlink, the original file stays.

IB Issues

Same PUBSUB Domain Seen Multiple Times on the Domain Status Page
We've configured two Integration Broker domains for our database: one running on host01, the other on host02. When I navigate to the PeopleTools -> Integration Broker -> Service Operations Monitor -> Administration -> Domain Status page,
I should see only two Integration Broker domains in the Domains section. Instead I am seeing six. I've tried the following:
· Verified only the two IB application server domains have Pub/Sub services configured/running.
· Recycled all application server domains with cache cleared.
· Verified that no other application server domains are running up against the database.

Solution
The fact that duplicate rows were being inserted into the tables updated when the Purge Domain Status push button is clicked indicated that there was a problem with the keys/indexes on those tables. The key fields on the three tables involved are:
PSAPMSGDOMSTAT - MACHINENAME, APPSERVER_PATH
PSAPMSGDSPSTAT - DISPATCHERNAME, MACHINENAME, APPSERVER_PATH
PSAPMSGQUEUESET - DISPATCHERNAME, MACHINENAME, APPSERVER_PATH, DATASEQNO
Since these are key fields, there should be a key index for each of the tables. Using a SQL tool it was noted that none of the tables had the key index built. Without the indexes, duplicate rows were being inserted into the tables. Use Application Designer to create all indexes for these records.
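A quick way to confirm the duplicates before rebuilding the indexes is to group on the key columns listed above; a sketch for the domain status table:
SELECT MACHINENAME, APPSERVER_PATH, COUNT(*)
FROM PSAPMSGDOMSTAT
GROUP BY MACHINENAME, APPSERVER_PATH
HAVING COUNT(*) > 1;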


"Purge Domain Status" Fails to Remove all Domains
"Purge Domain Status" does not purge all domains from the domain status list. After hitting the refresh button, the secondary (new) domain comes back, but the domain is inactive. The expected behavior is that when "Purge Domain Status" is clicked, the primary (inactive) domain would disappear, and the secondary (new) domain would remain.

The following SQL can help to identify the invalid / incorrect data:
select * from PSAPMSGQUEUESET
select * from PSAPMSGDSPSTAT
select * from PSAPMSGDOMSTAT

Deleting the invalid rows will resolve the issue.
1) Clear the tables manually using the following type of SQL:
DELETE FROM PSAPMSGDSPSTAT WHERE MACHINENAME <>:1 OR (APPSERVER_PATH <>:2 AND MACHINENAME=:3)
DELETE FROM PSAPMSGDOMSTAT WHERE MACHINENAME <>:1 OR (APPSERVER_PATH <>:2 AND MACHINENAME=:3)
DELETE FROM PSAPMSGQUEUESET WHERE MACHINENAME <>:1 OR (APPSERVER_PATH <>:2 AND MACHINENAME=:3)
where MACHINENAME is the name of the machine of the appserver domain where the PIA request is handled, and APPSERVER_PATH is the PS_SERVDIR of that appserver domain.
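As a purely hypothetical illustration, if the PIA request is handled by a domain on machine appsrv01 with PS_SERVDIR /ps/appserv/HRDMO (made-up values), the first statement would become:
DELETE FROM PSAPMSGDSPSTAT WHERE MACHINENAME <> 'appsrv01' OR (APPSERVER_PATH <> '/ps/appserv/HRDMO' AND MACHINENAME = 'appsrv01');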

Tuesday 3 January 2012

What is JVM Heap Size

This is a parameter that commonly causes issues in our web server configurations.

The Java Virtual Machine heap is the memory region where Java objects (both live and dead) reside. When the Java heap runs out of space, the Java garbage collector is invoked to deallocate unreferenced objects and free up space so the program can continue. The JVM cannot service user requests during garbage collection. Many customers have their JVM heap set to a minimum of 64MB and a maximum of 256MB. Setting the minimum heap size to a greater value (preferably equal to the maximum) avoids the performance hit incurred by dynamically growing the JVM, improves predictability, and also reduces the frequency of JVM garbage collection.

The Java options used by WebLogic Server, including the JVM heap size, are stored in your WebLogic domain's setEnv.cmd (or setEnv.sh). If you need to adjust any of the Java options, including the JVM heap size, you must manually edit WebLogic's setEnv script. The two parameters -Xms and -Xmx control the JVM heap size: -Xms defines the minimum heap and -Xmx defines the maximum heap. For performance reasons, it is recommended that both min and max are set to the same value. Adjust the heap size as follows:
1) Edit the setEnv.cmd/sh script:
- For WebLogic 8.1, it is located in the webserv/<domain> directory
- For WebLogic 9.2 and 10.3, it is located in the webserv/<domain>/bin directory
2) You will see a JAVA_OPTIONS parameter in the format JAVA_OPTIONS_<PLATFORM>, where <PLATFORM> is the OS platform that the web server is on.
3) This JAVA_OPTIONS_<PLATFORM> parameter is where the min/max heap settings are defined. Edit this parameter and adjust the heap settings. For example, to change the min/max heap settings from 256MB to 1GB on a Linux platform, you would change the JAVA_OPTIONS_LINUX parameter from:
JAVA_OPTIONS_LINUX="-jrockit -XnoOpt -XXnoJITInline -Xms256m -Xmx256m -Dtoplink.xml.platform....
to
JAVA_OPTIONS_LINUX="-jrockit -XnoOpt -XXnoJITInline -Xms1024m -Xmx1024m -Dtoplink.xml.platform....

The heap setting you need depends on environmental factors such as:
- Number of users accessing the PeopleSoft application
- Session timeout setting (additional heap memory is needed for higher timeout settings, since the session information is stored in the Java heap)
- Types of transactions that users are running
In a production environment, Oracle recommends a minimum heap setting of 512MB, but you may need to set the heap higher depending on the above factors. If the WebLogic Java process runs out of Java heap, it will log an 'OutOfMemory' error to the file PIA_weblogic.log (and users may receive 'Error 500--Internal Server Error'). If you wish to analyze heap usage in your PeopleSoft environment, you may want to enable verbose garbage collection as per the following Metalink document:
Doc #759137.1: What is Verbose Garbage Collection (verbosegc) and How do I Enable it on WebLogic?
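As a rough sketch of the flags involved (the exact options depend on your JVM vendor and version, so treat these as assumptions and confirm against the document above), the settings go into the same JAVA_OPTIONS_<PLATFORM> line in setEnv:
For a Sun/HotSpot JVM: -verbose:gc -XX:+PrintGCDetails -Xloggc:/tmp/pia_gc.log
For JRockit: -Xverbose:gc -Xverboselog:/tmp/pia_gc.log
(The log file path is just an example.)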

From metalink
In PeopleTools 8.44-8.50, WebLogic uses a 32-bit Java version, which means the Java process is allowed only 4GB in total for the Java heap, native heap and executables. Therefore, if you set the Java heap too high, the Java process may not have enough memory remaining for the native heap, so we recommend setting the heap no higher than 1.5GB to allow extra memory for the native heap. If you are using JRockit, which uses more native memory than other Java brands, the Java heap setting may need to be even lower than 1.5GB. Also note that when running WebLogic as a Windows service, more memory overhead is needed for the service, so you may need to adjust the heap size accordingly.

Starting with PeopleTools 8.51, WebLogic uses a 64-bit Java version, so you can set your heap considerably higher as long as you have enough memory available on the web server. Note that if you use a larger heap setting (e.g. 4GB), garbage collection may take longer. When garbage collection occurs, the "world stops", so there will be a momentary pause in processing transactions. So if you find that your environment requires a large Java heap, you may want to consider adding additional PIAs to your environment. For example, instead of having one WebLogic PIA with 8GB of heap, you could have two WebLogic PIAs, each with 4GB of heap.

Import/Export data between PeopleSoft instances

To move data between PeopleSoft instances we have many tools. Data Mover is a PeopleSoft-delivered tool, but it is limited to PeopleSoft objects only and is very slow for huge objects; DM sometimes hangs and behaves strangely in large production systems. IMP/EXP are the traditional Oracle tools used for import and export, but they have some limitations, especially with encrypted columns. From 10g onwards the most useful data movement tools with Oracle are expdp and impdp, the Data Pump versions of EXP/IMP. I will explain more on the Data Pump options here.

Exporting/importing a whole schema (SYSADM)
Creating the DB directory:
CREATE OR REPLACE DIRECTORY test_dir AS '/u01/app/oracle/oradata/';
GRANT READ, WRITE ON DIRECTORY test_dir TO dba;

expdp dba/pwd@source schemas=SYSADM directory=TEST_DIR dumpfile=SCHEMA.dmp logfile=SCHEMA.log parallel=4
Set the parallel option based on your system capacity.
The logfile and dumpfile will be created in TEST_DIR. To find the TEST_DIR path on the server, run: select * from dba_directories;
Move the DMP files to the target's TEST_DIR directory before the import.
impdp dba/pwd@target schemas=SYSADM directory=TEST_DIR dumpfile=SCHEMA.dmp logfile=SCHEMA.log TABLE_EXISTS_ACTION=REPLACE

Exporting/importing a few tables

expdp dba/pwd@source tables=SYSADM.PS_A,SYSADM.PS_B directory=TEST_DIR dumpfile=DMP_TABLES.dmp logfile=DMP_TABLES.log
impdp dba/pwd@target tables=SYSADM.PS_A,SYSADM.PS_B directory=TEST_DIR dumpfile=DMP_TABLES.dmp logfile=DMP_TABLES.log TABLE_EXISTS_ACTION=REPLACE

INCLUDE and EXCLUDE

The INCLUDE and EXCLUDE parameters can be used to limit the export/import to specific objects. When the INCLUDE parameter is used, only those objects specified by it will be included in the export/import. When the EXCLUDE parameter is used, all objects except those specified by it will be included in the export/import. The two parameters are mutually exclusive, so use the parameter that requires the least entries to give you the result you require. The basic syntax for both parameters is the same.
expdp dba/pwd@src schemas=SYSADM include=TABLE:"IN ('PSSTATUS', 'PSOPTIONS')" directory=TEST_DIR dumpfile=TEST.dmp logfile=TEST.log
expdp dba/pwd@tgt schemas=SYSADM exclude=TABLE:"= 'PSOPTIONS'" directory=TEST_DIR dumpfile=TEST.dmp logfile=TEST.log
When the parameter is used from the command line, depending on your OS, the special characters in the clause may need to be escaped as follows. Because of this, it is easier to use a parameter file (see the example below).
include=TABLE:\"IN (\'EMP\', \'DEPT\')\"
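A hypothetical parameter file (name and contents made up for illustration, matching the first include example above) could look like this, saved as exp_tables.par:
schemas=SYSADM
directory=TEST_DIR
dumpfile=TEST.dmp
logfile=TEST.log
include=TABLE:"IN ('PSSTATUS', 'PSOPTIONS')"
It is then referenced on the command line without any escaping:
expdp dba/pwd@src parfile=exp_tables.par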

The valid object type paths that can be included or excluded can be displayed using the DATABASE_EXPORT_OBJECTS, SCHEMA_EXPORT_OBJECTS, and TABLE_EXPORT_OBJECTS views.
INCLUDE=TABLE,VIEW,PACKAGE:"LIKE '%API'"
or
INCLUDE=TABLE
INCLUDE=VIEW
INCLUDE=PACKAGE:"LIKE '%API'"


IMPDP across network links.
You cannot import a dump file into an 11gR1 database if the export was taken from an 11gR2 database (the newer-version dump is not readable by the older release). In such cases we can use the impdp command across a DB link (network link). Here are the steps.
Say we need to do a schema refresh from SRC (11gR2) to TGT DB (11gR1):
1. Create a DB link in TGT pointing to SRC and test the connection. Say the DB link name in TGT is SRCDBLINK.WORLD.
2. In this case TGT is 11gR1 and SRC is 11gR2, so the command has to be executed from a machine with an 11gR1 client; if you execute it from an 11gR2 client, it throws the error "UDI-00018: Data Pump client is incompatible with database version 11.01.00.07.00".
3. Execute from the 11gR1 client: impdp dba/pwd@TGT network_link=SRCDBLINK.WORLD schemas=SYSADM table_exists_action=replace logfile=test.log (no dump file is involved in a network-link import).
This will copy the data across the network link from SRC to TGT.

Issue the following commands to see all the options available with expdp and impdp:
expdp help=y
impdp help=y
Status of jobs:
select * from dba_datapump_jobs;
Unlike the original exp and imp utilities all data pump ".dmp" and ".log" files are created on the Oracle server, not the client machine.
All Data Pump actions are performed by multiple jobs (server processes, not DBMS_JOB jobs). These jobs are controlled by a master control process which uses Advanced Queuing. At runtime an advanced queue table, named after the job name, is created and used by the master control process; the table is dropped on completion of the Data Pump job. The job and the advanced queue can be named using the JOB_NAME parameter. Cancelling the client process does not stop the associated Data Pump job. Issuing "ctrl+c" on the client during a job stops the client output and presents a command prompt; typing "status" at this prompt allows you to monitor the current job.
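For example, naming the job makes it easier to identify in dba_datapump_jobs and to attach to later (the job name below is a made-up example):
expdp dba/pwd@source schemas=SYSADM directory=TEST_DIR dumpfile=SCHEMA.dmp logfile=SCHEMA.log job_name=SYSADM_EXP_01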


The following commands are valid while in interactive mode (note: abbreviations are allowed); an attach example follows the list.
CONTINUE_CLIENT : Return to logging mode. Job will be re-started if idle.
EXIT_CLIENT : Quit client session and leave job running.
KILL_JOB : Detach and delete job.
STOP_JOB=IMMEDIATE performs an immediate shutdown of the Data Pump job.
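To re-attach to a job from another session and use these commands (continuing the hypothetical job name from the earlier example):
expdp dba/pwd attach=SYSADM_EXP_01
Export> STATUS
Export> STOP_JOB=IMMEDIATE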

Issues
IMPDP Stuck with wait event - "wait for unread message on broadcast channel"
I queried dba_datapump_jobs and found out that I have an old job in "NOT RUNNING" state:
SQL>
select * from dba_datapump_jobs;
One job was not running; it needed to be deleted in order to move ahead.
Checked to find the underlying object:
SQL>
SELECT a.object_id, a.object_type, a.owner, a.object_name, a.status
FROM dba_objects a, dba_datapump_jobs j
WHERE a.owner = j.owner_name
AND a.object_name = j.job_name
AND j.job_name = 'SYS_IMPORT_FULL_01';
(SYS_IMPORT_FULL_01 is the job name found by the first query.)

SQL>
select owner, object_name, subobject_name, object_type, last_ddl_time from dba_objects where object_id = 984119;
(984119 is the object id found by the query above.)
Dropped the datapump job table:
SQL> drop table SYSTEM.SYS_IMPORT_FULL_01;
Table dropped.
And then my impdp job started progressing:
NOTE: Please make sure that the old job is NOT RUNNING. Secondly, if it is a known job you can also try:
impdp username/password attach=JOB_NAME
and then, at the IMPDP prompt, issue KILL_JOB. This should also clear the job's status from dba_datapump_jobs.


Useful links
http://docs.oracle.com/cd/B19306_01/server.102/b14215/dp_api.htm
http://peerdba.wordpress.com/2011/01/13/killing-expdpimpdp-job/


Useful Oracle queries for Psoft DBA

note: Replace v$ with gv$ if you are running on RAC

Check this site; it has a lot of useful queries:
http://www.oracle-base.com/dba/Scripts.php 

Find blocking sessions
select 'alter system disconnect session ''' || sid || ',' || serial# || ''' immediate;' from V$SESSION where sid in (select sid from V$LOCK where block = 1);

Find current Running SQLs
select sesion.sid,
       sesion.username,
       sesion.machine,
       sesion.program,
       sesion.serial#,
       optimizer_mode,
       hash_value,
       address,
       cpu_time,
       elapsed_time,
       sql_text
from v$sqlarea sqlarea, v$session sesion
where sesion.sql_hash_value = sqlarea.hash_value
and sesion.sql_address = sqlarea.address
and sesion.username is not null;

Flashback table to a previous point in time
- useful if we accidentally deleted some rows
insert into SYSADM.PS_TABLE (SELECT * FROM SYSADM.PS_TABLE AS OF TIMESTAMP TO_TIMESTAMP('18-JUL-11 06.10.38.129109'));



Finding DB Character set
select value$ from sys.props$ where name='NLS_CHARACTERSET';

STATs update
On the entire schema: EXEC DBMS_STATS.gather_schema_stats(ownname => 'SYSADM', cascade => true, estimate_percent => dbms_stats.auto_sample_size);
On a single table: EXEC DBMS_STATS.gather_table_stats('SYSADM','PS_TABLE',cascade=>true);

Finding the remaining time for a long running query
select * from (
select opname, target, sofar, totalwork,
units, elapsed_seconds, message,elapsed_seconds*(totalwork-sofar)/sofar as time_rem
from v$session_longops
where sid = <sid> and serial# = <serial#>
order by start_time desc)
where rownum <=1;

Finding RMAN backup status

select SESSION_KEY, INPUT_TYPE, STATUS,
       to_char(START_TIME,'mm/dd/yy hh24:mi') start_time,
       to_char(END_TIME,'mm/dd/yy hh24:mi') end_time,
       elapsed_seconds/3600 hrs
from V$RMAN_BACKUP_JOB_DETAILS
order by session_key;

Find out the impdp progress for a table - how much data has been loaded so far (while the load is in progress the segment shows up as TEMPORARY):
select sum(bytes)/1048576
from dba_segments
where owner = 'SCOTT'
and segment_type = 'TEMPORARY'
and tablespace_name = '<TABLESPACE_NAME>';

SGA usage
select round(sum(bytes)/1024/1024,2) total_sga,
round(sum(decode(name,'free memory',bytes,0))/1024/1024,2) free,
round((sum(decode(name,'free memory',bytes,0))/1024/1024)/(sum(bytes)/1024/1024)*100,2) free_perc
from v$sgastat;

select * from ( select POOL, NAME, BYTES, BYTES/1048576 as MBytes from v$sgastat where pool='shared pool' order by BYTES desc ) where rownum <= 25;

Finding details on a PeopleSoft process instance from the DB

select * from PSPRCSQUE where PRCSINSTANCE=297037;
select * from PS_PRCSRQSTDIST where PRCSINSTANCE=297037;
select RUNSTATUS from PSPRCSRQST where PRCSINSTANCE=297037;

Finding the total size of an Oracle DB
select a.data_size + b.temp_size + c.redo_size + d.controlfile_size "total_size in MB"
from (select sum(bytes)/1024/1024 data_size from dba_data_files) a,
     (select nvl(sum(bytes),0)/1024/1024 temp_size from dba_temp_files) b,
     (select sum(bytes)/1024/1024 redo_size from sys.v_$log) c,
     (select sum(BLOCK_SIZE*FILE_SIZE_BLKS)/1024/1024 controlfile_size from v$controlfile) d;



Long Running Operations

SELECT s.sid,
       s.serial#,
       s.machine,
       TRUNC(sl.elapsed_seconds/60) || ':' || MOD(sl.elapsed_seconds,60) elapsed,
       TRUNC(sl.time_remaining/60) || ':' || MOD(sl.time_remaining,60) remaining,
       ROUND(sl.sofar/sl.totalwork*100, 2) progress_pct
FROM   v$session s,
       v$session_longops sl
WHERE  s.sid     = sl.sid
AND    s.serial# = sl.serial#;


Finding the free space in a tablespace

select a.TABLESPACE_NAME,
       a.BYTES bytes_used,
       b.BYTES bytes_free,
       b.largest,
       round(((a.BYTES - b.BYTES)/a.BYTES)*100,2) percent_used
from (select TABLESPACE_NAME, sum(BYTES) BYTES from dba_data_files group by TABLESPACE_NAME) a,
     (select TABLESPACE_NAME, sum(BYTES) BYTES, max(BYTES) largest from dba_free_space group by TABLESPACE_NAME) b
where a.TABLESPACE_NAME = b.TABLESPACE_NAME
order by ((a.BYTES - b.BYTES)/a.BYTES) desc;

The output will show the following details:
  1. Tablespace Name - Name of the tablespace
  2. Bytes Used - Total allocated size of the tablespace's data files in bytes
  3. Bytes Free - Size of free space in bytes
  4. Largest - Largest free extent in bytes
  5. Percent Used - Percentage of the tablespace that is being used - be careful if it is more than 85%