Monday 10 September 2012

PeopleSoft VERSION Application Engine




We run the VERSION Application Engine (AE) in PeopleSoft under various circumstances, but it is essential to understand what the VERSION AE actually does.
PSVERSION – this table stores an acronym for each object type within the PeopleSoft application (AEM - Application Engine Management, PJM - Projects Management, PPC - PeopleCode Management, UPM - User Profile Management, RDM - Record Data Management, etc.) along with its corresponding version value. PSVERSION is updated by the Panel Processor, and every time the PeopleSoft component processor needs to retrieve an object definition, it first checks the PSVERSION table to obtain the most up-to-date version of that object.

Therefore, whenever an object definition is changed, the PSVERSION table is updated correspondingly. For example, if a record is modified in Application Designer, the application first updates the version in PSRECDEFN (effectively UPDATE PSRECDEFN SET VERSION = <current RDM version from PSVERSION> + 1 WHERE RECNAME = 'YourRecord') and afterwards updates the PSVERSION table, as shown below:
UPDATE PSVERSION SET VERSION = VERSION + 1 WHERE OBJECTTYPENAME = 'RDM';
UPDATE PSVERSION SET VERSION = VERSION + 1 WHERE OBJECTTYPENAME = 'SYS';
This way, the next time the record object is accessed, the Application Server will issue the following SQL statements:
SELECT MAX(VERSION) FROM PSVERSION WHERE OBJECTTYPENAME = 'RDM';
SELECT MAX(VERSION) FROM PSRECDEFN;
and will verify that the values returned by the two SELECTs are the same. If these values are different, you might encounter this kind of error when running SYSAUDIT:
Version Check Audits-Exception(s) Found, Manager- OBJECTTYPENAME (eg:RDM,UPM,etc.) Version check of table xxxxDEFN against PSVERSION failed.
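As a quick way to reproduce this check by hand, here is a minimal SQL sketch (assuming record objects, i.e. the RDM object type, and an Oracle database; the column aliases are just illustrative) comparing the two values SYSAUDIT audits:

SELECT (SELECT MAX(VERSION) FROM PSVERSION WHERE OBJECTTYPENAME = 'RDM') AS psversion_rdm,
       (SELECT MAX(VERSION) FROM PSRECDEFN)                              AS psrecdefn_max
FROM DUAL;
-- The two values should be identical; a mismatch is exactly what SYSAUDIT reports as a failed version check.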
The same approach is taken when one modifies a user profile: PSVERSION is updated and the VERSION field of the corresponding user in the PSOPRDEFN table is changed accordingly.
UPDATE PSVERSION SET VERSION = VERSION + 1 WHERE OBJECTTYPENAME = 'UPM';
UPDATE PSVERSION SET VERSION = VERSION + 1 WHERE OBJECTTYPENAME = 'SYS';
UPDATE PSOPRDEFN SET VERSION = (SELECT MAX(VERSION) FROM PSVERSION WHERE OBJECTTYPENAME = 'UPM') WHERE OPRID = 'YourOPRID';

When running the SQR report SYSAUDIT from Sample Processes (PT < 8.4x) or System Process Requests (PT >= 8.4x), one might encounter the following error:
Version Check Audits-Exception(s) Found, Manager- OBJECTTYPENAME (eg:RDM,UPM,etc.) Version check of table xxxDEFN against PSVERSION failed

This can occur when the Panel Processor fails to update PSVERSION. Normally, the Application Server compares the version number in the RDM.KEY (IDX) file to the one in PSVERSION. If the object type version in the PSVERSION table is greater than the one in the .IDX (KEY) file, then the object type is retrieved from the database and the RDM.DAT file is updated with the latest version of the object.

The PeopleSoft objects are stored, at any given time, in .DAT files on the Application Server and in that server's cache memory. Issues generally arise when projects containing various objects are migrated from one environment to another without first stopping and clearing the Application Server cache, so the version fields do not get updated correctly in both the XXXDEFN tables and PSVERSION. Of course, deleting the Application Server cache after every project/object migration is not the desired solution, considering that this impacts the end users and generates useless disk load while the newly migrated objects are re-cached.
In order to fix this error, one should try the following solutions:
- execute the VERSION Application Engine program in command line: psae -CD -CT -CO -CP -R -AI VERSION
afterwards, ensure that all the version values are 1 by issuing the following command on the database:
SELECT * FROM PSVERSION;
and, in the end, empty the Application Servers' cache (it is recommended to have more than one application server, arranged in a load-balanced architecture; this way the users are not affected when the servers are stopped, their cache cleared, and restarted one by one). The VERSION AE resets the version numbers of all the XXXDEFN tables and sets VERSION = 1 in the PSVERSION and PSLOCK tables (unlike deleting the cache, this does not force every object to be re-cached). Finally, the VERSION Application Engine updates the LASTREFRESHDTTM field in the PSSTATUS table, which tells the caching mechanism to compare objects in the cache files against the database and synchronize them.
- another solution, which is usually not recommended due to its various implications, is to modify the LASTREFRESHDTTM date directly in the PSSTATUS table to today's date (UPDATE PSSTATUS SET LASTREFRESHDTTM = SYSDATE), which does not require shutting down the application servers. When the LASTREFRESHDTTM column gets updated, the result is the purging of the cache files: the first time a process accesses a cache file, it reads LASTREFRESHDTTM from PSLOCK and compares that datetime with the datetime on the cache file (a cache file contains a datetime reflecting when the file was created). If the two datetime values are different, the cache file is purged. But this comparison occurs only the first time a process accesses a cache file; after that, the process uses the cache file without comparing the datetime values again. In three-tier mode, because the application server has stayed up, the comparison will not be repeated, so the cache files are not purged and refreshed.
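For reference, a minimal sketch of this second approach on an Oracle database (only to be run with a full understanding of the implications described above):

UPDATE PSSTATUS SET LASTREFRESHDTTM = SYSDATE;  -- move the refresh timestamp forward
COMMIT;
SELECT LASTREFRESHDTTM FROM PSSTATUS;           -- verify the new value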



The effect of the VERSION Application Engine program is fairly violent
upon the PeopleTools system tables, and as such should be used only in
certain, specific instances. Technically speaking, it should only be
run when there is reason to believe that the versions in the PSVERSION
table are no longer coherent, or when the versions in one of the
managed object tables are out of sync with PSVERSION.

Generally speaking it should only be run when indicated by one of the following:

1. The SYSAUDIT report indicates that there are objects out of sync.

2. A GSC analyst recommends its use.

3. The PeopleTools development team recommends its use for a specific issue.

4. Following a major PeopleTools upgrade or major application upgrade.

5. An Installation Guide indicates its need.

NOTE: VERSION should NOT be run as a matter of standard
operating procedure.

Due to some side effects of the Application Designer program
(PSIDE.EXE), VERSION must be run only when all Application Designer
clients are completely logged out and shut down. Many customers
choose to restart the database to ensure that this is the case. All
Application Servers, Process Schedulers, and other PeopleTools clients
should be completely shut down while it is being run.

PROCESS SCHEDULER OPERATION

Logically following the previous point, use of the Process
Definition that allows the VERSION AE to be run from the Process
Scheduler is no longer recommended. VERSION AE should only be run
from a command line, and then only when that connection is the only
one active against the database. (Note: this does not mean that the
database is in "single user mode".)

If the VERSION program is run incorrectly, performance can be
dramatically impacted. It is not uncommon for the stability of the
Application Server processes to be compromised. Additionally,
application integrity can possibly be affected; simple saving efforts
from Application Designer can fail.


PROPER PROCEDURE

The proper steps to follow when running VERSION AE are:

1. Shut down all Application Servers, Process Schedulers, and
other PeopleTools clients.

2. *critical step* Ensure that all Application Designer sessions
are logged out and shut down. If necessary, shut down and restart
the database and its communication software.

3. Establish the proper shell environment. Normally this
includes logging in as your PeopleSoft id, changing to the
PSFT bin directory (i.e. cd $PS_HOME/bin), and setting the
PS_SERVER_CFG environment variable (export
PS_SERVER_CFG=$PS_HOME/appserv/prcs/<dbname>/psprcs.cfg)


4. Execute the command from a command line:
psae -CD <dbname> -CT <dbtype> -CO <oprid> -CP <pswd> -R
INSTALL [where INSTALL is a valid run control] -AI VERSION
(Note: INSTALL must be a valid run control for <oprid>)

5. Issue the following SQL and archive the output to ensure that
the program ran (all the versions should be 1).
SELECT * FROM PSVERSION ORDER BY VERSION DESC;

6. Clear the cache files from the Application Server, Web
Servers, Process Schedulers, and Client machines.

7. Restart the system per normal operational procedures. There
will be an expected performance impact while the system rebuilds its
cache files.

8. Over the course of the following days, every 4 hours or so,
rerun the SQL from Step #5. You should observe a gradual growth of
the versions, typically on the order of dozens per day. The version
associated with SYS should always be equal to or greater than all
other values in the table.

Should you observe one of the following conditions, contact the GSC
immediately for further advice.

1. The version value associated with SYS is no longer greater than
or equal to all other values in the PSVERSION table (a quick SQL
check for this is sketched after this list).

2. Some of the values increase dramatically, on the order of
several thousand, and then remain fairly constant. Normal behavior is
for the values to increase by increments of 1. One exception would be
during the migration of a project with many records. Some values will
increase by the number of records migrated.
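A minimal SQL sketch for spotting condition 1 (assuming an Oracle database; any row returned means SYS has fallen behind and the GSC should be contacted):

SELECT OBJECTTYPENAME, VERSION
FROM   PSVERSION
WHERE  VERSION > (SELECT VERSION FROM PSVERSION WHERE OBJECTTYPENAME = 'SYS');
-- Lists every object type whose version exceeds the SYS version; normally this returns no rows.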



Histograms in Oracle




Where there is a high degree of skew in the column distribution, called a non-uniform distribution of data, histograms should lead to a better estimation of selectivity. This should produce plans that are more likely to be optimal. In other words, histograms contain information about the nature of the data in a table.

What is a histogram? Histograms are a feature of the CBO that help the optimizer determine how the data is skewed (distributed) within a column. A histogram is good to create on a column that is used in WHERE clauses and whose data is highly skewed. Histograms help the optimizer decide whether to use an index or a full-table scan, and help it determine the fastest table join order.

What is a bucket? When histograms are created, the number of buckets can be specified, and it is this number that controls the type of histogram created. Each bucket is stored as one row of histogram information.
When building histograms, the stored information is interpreted differently depending on whether the number of buckets requested is less than the number of distinct values or equal to it. Specifically, ENDPOINT_NUMBER and
ENDPOINT_VALUE in dba/user/all_histograms have different meanings in the two cases.

Oracle uses two types of histograms for column statistics: height-balanced histograms and frequency histograms.
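You can see which type (if any) was created for a column via the HISTOGRAM column of USER_TAB_COLS (available from 10g onward); a minimal sketch using the TAB1 example from later in this post:

SELECT column_name, num_distinct, num_buckets, histogram
FROM   user_tab_cols
WHERE  table_name = 'TAB1';
-- HISTOGRAM reports NONE, FREQUENCY or HEIGHT BALANCED for each column.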

Frequency histograms
Each value of the column corresponds to a single bucket of the histogram; this is also called a value-based histogram. Each bucket contains the number of occurrences of that single value. Frequency histograms are automatically created instead of height-balanced histograms when the number of distinct values is less than or equal to the number of histogram buckets specified.
data    sorted
Y       Y
Y       Y
Y       Y
N       Y
N       N
NA      N
N       N
Y       NA
NA      NA

Results:
Bucket 1: Y = 4
Bucket 2: N = 3
Bucket 3: NA = 2
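To obtain a frequency histogram like the one above, request at least as many buckets as there are distinct values; a minimal sketch (the SCOTT.FLAGS table and its STATUS column are hypothetical):

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname    => 'SCOTT',                       -- hypothetical owner
    tabname    => 'FLAGS',                       -- hypothetical table holding the Y/N/NA column
    method_opt => 'FOR COLUMNS SIZE 3 status');  -- 3 buckets for 3 distinct values => frequency histogram
END;
/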

Height-balanced histogram
In a height-balanced histogram, the column values are divided into bands so that each band contains approximately the same number of rows. The useful information that the histogram provides is where in the range of values the endpoints fall.

A histogram is created using DBMS_STATS.GATHER_TABLE_STATS with METHOD_OPT => 'FOR COLUMNS SIZE <# of buckets> <column name>'.
SIZE determines the number of buckets to be created:
execute dbms_stats.gather_table_stats(ownname => 'scott', tabname => 'employee', method_opt => 'FOR COLUMNS SIZE 10 gender');  -- for a particular column
execute dbms_stats.gather_table_stats(ownname => 'scott', tabname => 'employee', method_opt => 'FOR ALL COLUMNS SIZE 10');  -- for all columns
execute dbms_stats.gather_table_stats(ownname => 'scott', tabname => 'employee', method_opt => 'FOR ALL INDEXED COLUMNS SIZE 10');  -- for all indexed columns




EXAMPLE
-------

Table TAB1

SQL> desc tab1
 Name                            Null?    Type
 ------------------------------- -------- ----
 A                                        NUMBER(6)
 B                                        NUMBER(6)

Column A contains unique values from 1 to 10000.

Column B contains 10 distinct values. The value '5' occurs 9991 times. Values
'1, 2, 3, 4, 9996, 9997, 9998, 9999, 10000' occur only once.

Test queries:

(1) select * from tab1 where b=5;
(2) select * from tab1 where b=3;

Both the above queries would use a FULL TABLE SCAN as there is no other
access method available.

Then we create an index on column B.

select lpad(INDEX_NAME,10), lpad(TABLE_NAME,10),
       lpad(COLUMN_NAME,10), COLUMN_POSITION, COLUMN_LENGTH
from user_ind_columns
where table_name='TAB1'

SQL> /

LPAD(INDEX LPAD(TABLE LPAD(COLUM COLUMN_POSITION COLUMN_LENGTH
---------- ---------- ---------- --------------- -------------
    TAB1_B       TAB1          B               1            22

Now,

(1) select * from tab1 where b=5;
(2) select * from tab1 where b=3;

Both do an INDEX RANGE SCAN to get the ROWID to do a lookup in the table.

With an INDEX present, it would be preferable to do an INDEX RANGE SCAN for
query (2), but a FULL TABLE SCAN for query (1).


ANALYZING THE TABLE
-------------------

Next, gather statistics on the table:

SQL> execute dbms_stats.gather_table_stats(ownname => 'scott', tabname => 'tab1')

From dba_tables:

  NUM_ROWS     BLOCKS EMPTY_BLOCKS  AVG_SPACE  CHAIN_CNT AVG_ROW_LEN
---------- ---------- ------------ ---------- ---------- -----------                             
     10000         64            0         86          0          10

From dba_tab_columns:

NUM_DISTINCT LOW  HIGH   DENSITY  NUM_NULLS NUM_BUCKETS LAST_ANALYZ SAMPLE_SIZE
------------ ---- ---- --------- ---------- ----------- ----------- -----------
       10000 Full Full     .0001          0           1 30-JUN-1999       10000
          10 Full Full        .1          0           1 30-JUN-1999       10000


SQL> select lpad(TABLE_NAME,10), lpad(COLUMN_NAME, 10),
  2  endpoint_number, endpoint_value
  3  from user_histograms
  4  where table_name='TAB1';

TABLE_NAME COLUMN_NAME ENDPOINT_NUMBER ENDPOINT_VALUE
---------- ----------- --------------- --------------
      TAB1           A               0              1
      TAB1           A               1          10000
      TAB1           B               0              1
      TAB1           B               1          10000



Analyze has created 1 bucket for each column, so all values for the column
are in the same bucket. ENDPOINT_NUMBER here acts as the bucket number and
ENDPOINT_VALUE represents the last column value in that bucket.

Now query (1) and (2) ; both do a FULL TABLE SCAN.

So, the fact that you have statistics about the table and columns does not
help the optimizer distinguish how many of each value there are.
The reason it does a FULL TABLE SCAN is that there is a 1-bucket histogram
and any value selected for should be in that bucket.


CREATING HISTOGRAMS
-------------------

What you need now is to create histograms so the Optimizer knows how many
values occur for each column.

Query (1): select * from tab1 where b=5;
           should do a FULL TABLE SCAN   and

Query (2): select * from tab1 where b=3;
           should do an INDEX RANGE SCAN

SQL> execute dbms_stats.gather_table_stats(ownname => 'scott', tabname => 'tab1', method_opt => 'FOR COLUMNS SIZE 10 b');

select lpad(TABLE_NAME,10), lpad(COLUMN_NAME, 5),
       endpoint_number, endpoint_value
from user_histograms;

TABLE_NAME COLUMN_NAME ENDPOINT_NUMBER ENDPOINT_VALUE
      TAB1           B               1              1
      TAB1           B               2              2
      TAB1           B               3              3
      TAB1           B               4              4
      TAB1           B            9995              5
      TAB1           B            9996           9996
      TAB1           B            9997           9997
      TAB1           B            9998           9998
      TAB1           B            9999           9999
      TAB1           B           10000          10000

So, now there are statistics on the table and on the columns.

You requested 10 buckets (size 10) and there are 10 distinct values.

The ENDPOINT_VALUE shows the column value and the ENDPOINT_NUMBER
shows the cumulative number of rows.

For example, ENDPOINT_VALUE 2 has ENDPOINT_NUMBER 2; the previous
ENDPOINT_NUMBER is 1, hence the number of rows with value 2 is 2 - 1 = 1.

Another example is ENDPOINT_VALUE 5. Its ENDPOINT_NUMBER is 9995. The previous
bucket ENDPOINT_NUMBER is 4, so 9995 - 4 = 9991 rows containing the value 5.
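This 'current endpoint minus previous endpoint' arithmetic can be done directly in SQL with the LAG analytic function; a minimal sketch against the TAB1 example:

SELECT endpoint_value,
       endpoint_number
         - LAG(endpoint_number, 1, 0) OVER (ORDER BY endpoint_number) AS rows_with_value
FROM   user_tab_histograms
WHERE  table_name = 'TAB1'
AND    column_name = 'B'
ORDER  BY endpoint_value;
-- For value 5 this returns 9995 - 4 = 9991, matching the arithmetic above.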

So, now QUERY (1) does in fact do a Full Table Scan.

SQL> select * from tab1 where b=5
SQL> /

Execution Plan
----------------------------------------------------------
0      SELECT STATEMENT Optimizer=ALL_ROWS (Cost=10 Card=9991 Bytes=99910)

1    0   TABLE ACCESS (FULL) OF 'TAB1' (Cost=10 Card=9991 Bytes=99910)


And, QUERY (2) does do an Index Range Scan.

SQL> select * from tab1 where b=3
SQL> /

Execution Plan
----------------------------------------------------------
0      SELECT STATEMENT Optimizer=ALL_ROWS (Cost=6 Card=500 Bytes=5000)
1    0   TABLE ACCESS (BY ROWID) OF 'TAB1' (Cost=6 Card=500 Bytes=5000)
2    1     INDEX (RANGE SCAN) OF 'TAB1_B' (NON-UNIQUE)

This is fine if you have a low number of distinct values, but there can
be tables with a huge number of distinct values.  You don't want to
create a bucket for each value; there would be too much overhead.
In this case you would request fewer buckets than distinct values.


CREATING HISTOGRAMS WITH FEWER BUCKETS THAN DISTINCT VALUES
-----------------------------------------------------------

SQL> execute dbms_stats.gather_table_stats(ownname => 'scott', tabname => 'tab1', method_opt => 'FOR COLUMNS SIZE 8 b');


SQL> select lpad(TABLE_NAME,10), lpad(COLUMN_NAME, 5),
  2>       endpoint_number, endpoint_value
  3> from user_histograms;

LPAD(TABLE LPAD( ENDPOINT_NUMBER ENDPOINT_VALUE
---------- ----- --------------- --------------
TAB1     B               0              1
TAB1     B               7              5
TAB1     B               8          10000

Here, Oracle creates the requested number of buckets and puts approximately
the same number of values into each bucket; for the frequently occurring
value, several consecutive buckets end up sharing the same endpoint value.

The ENDPOINT_NUMBER is the actual bucket number and ENDPOINT_VALUE is
the endpoint value of the bucket determined by the column value.

From the above, bucket 0 holds the low value for the column. Buckets 1 to 6
are not displayed: Oracle compresses consecutive buckets that share the same
endpoint to save space.

But we have bucket 1 with an endpoint of 5,
                    bucket 2 with an endpoint of 5,
                    bucket 3 with an endpoint of 5,
                    bucket 4 with an endpoint of 5,
                    bucket 5 with an endpoint of 5,
                    bucket 6 with an endpoint of 5,
                    bucket 7 with an endpoint of 5 AND
                    bucket 8 with an endpoint of 10000

So bucket 8 contains values between 5 and 10000.
All buckets contain the same number of values (which is why they are called
height-balanced histograms), except that the last bucket may have fewer
values than the other buckets.

If the data is uniform, you would not use histograms. However, if you request
the same number of buckets as distinct values, Oracle creates 1 bucket.  If
you request fewer buckets, Oracle uses an algorithm to balance values into each
bucket, and any values that remain (which have to be fewer than the number
stored in each height-balanced bucket) go into the last bucket.


STORING CHARACTER VALUES IN HISTOGRAMS
--------------------------------------

Character columns show some exceptional behaviour, inasmuch as histogram
data is stored only for the first 32 bytes of any string.  Any predicates that
contain strings longer than 32 characters will not use histogram information,
and the selectivity will be 1 / DISTINCT.

Data in histogram endpoints is normalized to double precision floating point
arithmetic.

EXAMPLE
-------

SQL> select * from morgan;

A
----------
a
b
c
d
e
e
e
e


The table contains 5 distinct values. There is one occurrence each of 'a',
'b', 'c' and 'd', and 4 occurrences of 'e'.

Create a histogram with 5 buckets.

SQL> analyze table morgan compute statistics for columns a size 5;

Looking in user_histograms:

LPAD(TABLE LPAD( ENDPOINT_NUMBER ENDPOINT_VALUE
---------- ----- --------------- --------------
    MORGAN     A               1     5.0365E+35
    MORGAN     A               2     5.0885E+35
    MORGAN     A               3     5.1404E+35
    MORGAN     A               4     5.1923E+35
    MORGAN     A               8     5.2442E+35

So, ENDPOINT_VALUE 5.0365E+35 represents a
                   5.0885E+35 represents b
                   5.1404E+35 represents c
                   5.1923E+35 represents d
                   5.2442E+35 represents e

Then if you look at the cumulative values for ENDPOINT_NUMBER,
the corresponding ENDPOINT_VALUEs are correct.
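For character columns you can also query ENDPOINT_ACTUAL_VALUE in USER_TAB_HISTOGRAMS, which (where populated) shows the original string rather than the normalized number; a minimal sketch against the MORGAN example:

SELECT endpoint_number, endpoint_value, endpoint_actual_value
FROM   user_tab_histograms
WHERE  table_name = 'MORGAN'
AND    column_name = 'A'
ORDER  BY endpoint_number;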

Friday 13 July 2012

When To Apply Bundles and MPs.



Normally, bundles are collections of fixes, and one or more bundles constitute an MP.
MPs are normally of two types: cumulative and delta.

When to apply cumulative MP:

Apply a cumulative MP if you have not applied any patches, bundles or MPs (delta or cumulative). A cumulative MP is the quickest way to make a vanilla database current.
After applying a cumulative MP, a subsequent delta MP or bundle can be applied.

When to apply delta MP:

Apply a delta MP after you have applied a cumulative or delta MP and have not applied any patch or bundle since then. After that you can apply the next delta MP or the next bundle. Once you apply a bundle you can no longer apply MPs; you need to keep applying the subsequent bundles.

When to Apply a bundle:

You have applied a bundle or an MP previously.
You are on an MP and cannot wait for the next delta MP.
After a bundle is applied you need to keep applying the subsequent bundles; you cannot apply an MP on top of a bundle. This means that if at any time you apply a bundle to your DB, you cannot apply MPs any further and need to proceed with bundles as they get released.

Tuesday 29 May 2012

Things to note before and after a PeopleTools patch.




Because of the many customizations, we normally end up having lots of issues after applying a tools patch. From my experience, these are a few things which we should take care of before and after a tools patch. This is a generic explanation, and things may change across tools releases.

File Server
Back up the file server. If you have edited the delivered SQRs, COBOL, etc., chances are the patch will overwrite them. In such a case you may need to reapply the changes.

From the DB side
Make sure to back up the DB completely before applying the patch.
It's always better to apply the patch on a demo DB first and then compare the PTPATCH project to your PeopleSoft DB (Dev, SIT or Prod, whichever you are going to upgrade). This gives you an idea of which objects are going to get overwritten or changed. Usually, if you have customizations in, say, single sign-on PeopleCode, chances are high that these will get overwritten by the tools patch and the functionality won't work properly after the patch. In such a case, make sure that once the patch is applied you restore these customizations from the backup or from another DB.
On the DB side, after the patching, always run the VERSION AE, clear the rowset cache, and run the PRCSCLEAR DMS script.

App and Prcs server
Normally the app and prcs servers won't cause too many issues after a patch. A tools patch won't overwrite the cfg files which store the configuration values we provided, so just reconfiguring and clearing the cache will do for the app and batch servers. Don't recreate an app or batch server domain after a tools patch, as that will overwrite all your custom configurations.

Webserver
The webserver is always tricky, especially when you have customizations in the login/logout pages. Normally after a tools upgrade we have to redeploy the PIA. Before doing so, make a backup of the PS_HOME/webserv folder. Then redeploy the PIA; when you do so, make sure you select the correct web profile, domain name, etc. Make sure that the webserver cache is cleared.
Once the redeploy is done you will be able to log in, but there are a few files which you may need to restore from the backup:
configuration.properties, especially when you had an appserver load balancing setup before the patch (PORTAL.war/WEB-INF/psftdocs/<SITE>/configuration.properties)
setenv.sh, when you had changed values in this file before the tools patch (inside the webserv bin directory)
integrationGateway.properties, when you have nodes configured (./applications/peoplesoft/PSIGW.war/WEB-INF/integrationGateway.properties); after restoring, make sure to change the tools version in this file to reflect the patch
web.xml (./applications/peoplesoft/PORTAL.war/WEB-INF/web.xml), especially if you have custom MIME types
weblogic.xml, which contains custom values (./applications/peoplesoft/PORTAL.war/WEB-INF/weblogic.xml)
Copy the customized login and logout files (if present) into /<PS_HOME>/webserver/…../PORTAL.war/WEB-INF/psftdocs/<websitename>/. This directory will also contain any customized HTML files, so it is better to compare this directory against the backup.
For applications like Portal, sometimes there will be separate folders which store custom scripts. If these are overwritten, copy them from the backup. Always compare the backup webserv directory with the new one.


Webserver issues

Issue 1: While redeploying the PIA after a tools patch, the following error appears, even though PIA redeployment should not prompt for an HTTP port:

Invalid HTTP port. Please enter a number.

Solution:
1) Make a backup of the file weblogic.xml, which is located in <PS_HOME>/webserv/<DOMAIN_NAME>/applications/peoplesoft/PORTAL.war/WEB-INF

2) Edit weblogic.xml and remove any hyphens from the machine/host name in the 'CookieName' parameter
Example: Change this:
<session-param>
<param-name>CookieName</param-name>
<param-value>MY-WEB-SERVER-80-PORTAL-PSJSESSIONID</param-value>
</session-param>
To this:
<session-param>
<param-name>CookieName</param-name>
<param-value>MYWEBSERVER-80-PORTAL-PSJSESSIONID</param-value>
</session-param>

3) Now you should be able to proceed with the redeploy and/or the installation of another site

4) After successful installation, restore the backup of weblogic.xml so that the host name is back to its original value

Monday 9 April 2012

Oracle STATS collection. 



Optimizer statistics are a collection of data that describe the database and its objects in more detail. These statistics are used by the query optimizer to choose the best execution plan for each SQL statement. Optimizer statistics include table statistics (number of rows, number of blocks, average row length, etc.), column statistics (number of NULLs, cardinality, histograms, etc.), index statistics and system statistics. Since the data in the DB changes constantly, it is very important that the stats are kept updated properly; otherwise this will cause serious performance issues.

Automatic Optimizer STATS Collection.

By default, 11g has auto stats collection enabled during the DB maintenance windows. This is the suggested method for stats collection.
Automatic optimizer statistics collection gathers optimizer statistics by calling the DBMS_STATS.GATHER_DATABASE_STATS_JOB_PROC procedure. The GATHER_DATABASE_STATS_JOB_PROC procedure collects statistics on database objects when the object has no previously gathered statistics or the existing statistics are stale because the underlying object has been modified significantly (more than 10% of the rows). DBMS_STATS.GATHER_DATABASE_STATS_JOB_PROC is an internal procedure, but it operates in a very similar fashion to the DBMS_STATS.GATHER_DATABASE_STATS procedure using the GATHER AUTO option. The primary difference is that GATHER_DATABASE_STATS_JOB_PROC prioritizes the database objects that require statistics, so that the objects which most need updated statistics are processed first. This ensures that the most-needed statistics are gathered before the maintenance window closes.

Disabling auto stats collection (use ENABLE instead if you ever need to re-enable it):
BEGIN
  DBMS_AUTO_TASK_ADMIN.DISABLE(
    client_name => 'auto optimizer stats collection', 
    operation => NULL, 
    window_name => NULL);
END;
/

Use the following query to verify that the scheduled jobs are running; here you can confirm that the auto stats job ran fine:


SELECT * FROM dba_autotask_client;
You can check the history of this job using the following queries; this way we can figure out whether the auto stats run on a given date went fine:
SELECT * FROM dba_autotask_client_history;
SELECT * FROM dba_autotask_job_history;

Make sure the background process CJQ0 is running for the auto stats job to be successful. You can set the parameter JOB_QUEUE_PROCESSES (maybe to 1000) and this will start the CJQ0 process; see the sketch below.
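A minimal sketch of setting the parameter and confirming that CJQ0 is up (the value 1000 is just the illustrative maximum from above):

ALTER SYSTEM SET JOB_QUEUE_PROCESSES = 1000 SCOPE = BOTH;
SELECT pname FROM v$process WHERE pname = 'CJQ0';  -- the job queue coordinator should be listed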

Manual STATS Collection

Even though we have an automated job collecting stats regularly, a lot of scenarios will come up where we need to collect stats manually for a table, schema, index, etc. For example, say you have a table which got truncated and then loaded with a huge amount of data; the subsequent selects on this table will become slow. Or you have a table whose data continually changes by more than 10%. These are good candidates for manually collecting stats; a quick way to spot them is sketched below.
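A minimal sketch for finding such candidates, using the STALE_STATS flag that Oracle maintains (the SYSADM owner is just an example):

SELECT table_name, num_rows, last_analyzed, stale_stats
FROM   dba_tab_statistics
WHERE  owner = 'SYSADM'       -- example schema
AND    stale_stats = 'YES';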

The DBMS_STATS package can gather statistics on tables and indexes, and on individual columns and partitions of tables:
GATHER_INDEX_STATS - index statistics
GATHER_TABLE_STATS - table, column, and index statistics
GATHER_SCHEMA_STATS - statistics for all objects in a schema
GATHER_DICTIONARY_STATS - statistics for all dictionary objects
GATHER_DATABASE_STATS - statistics for all objects in a database

Collecting Schema Stats

EXEC DBMS_STATS.gather_schema_stats(ownname => 'SYSADM', cascade => true, estimate_percent => dbms_stats.auto_sample_size);
Collecting table stats

EXEC DBMS_STATS.gather_table_stats('SCHEMA_NAME', 'TABLE_NAME', cascade => true, estimate_percent => 5);
A few parameters of interest:

cascade - includes all related indexes as well.
Using this option is equivalent to running gather_table_stats plus running gather_index_stats for each index on the table.
estimate_percent - decides the sample size. AUTO is the default in 11g, and Oracle decides what percentage (sample size) to take. In Oracle 11g, the AUTO_SAMPLE_SIZE constant is the preferred sample size, as the mechanism for determining the actual sample size has been improved. In addition, statistics estimated from auto sampling are close to 100% accurate and much faster to gather than in previous versions. The sample size is nothing but how many rows you take for calculating the stats. If you do not give this parameter, all rows are read and SAMPLE_SIZE will be equal to the number of rows. SAMPLE_SIZE and NUM_ROWS can be found in dba_tables, as shown in the sketch after this list.
degree - parallelizes the collection of statistics: degree => x (where x is the degree of parallelism, which depends on the processors available and other parallel settings on the DB).
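A minimal sketch for checking the sample size actually used, as mentioned above (owner and table name are illustrative):

SELECT table_name, num_rows, sample_size, last_analyzed
FROM   dba_tables
WHERE  owner = 'SYSADM'        -- example schema
AND    table_name = 'PS_JOB';  -- example table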




Tuesday 21 February 2012

SSL Implementation-PeopleSoft



SSL Handshake- Simple explanation




1. The client sends a client "hello" message that lists the cryptographic capabilities of the client (sorted in client preference order), such as the version of SSL, the cipher suites supported by the client, and the data compression methods supported by the client. The message also contains a 28-byte random number.
2. The server responds with a server "hello" message that contains the cryptographic method (cipher suite) and the data compression method selected by the server, the session ID, and another random number.
Note: The client and the server must support at least one common cipher suite, or else the handshake fails. The server generally chooses the strongest common cipher suite.
3. The server sends its digital certificate, which contains the server's public key. If the server uses SSL V3, and if the server application (for example, the Web server) requires a digital certificate for client authentication, the server sends a "digital certificate request" message. In the "digital certificate request" message, the server sends a list of the types of digital certificates supported and the distinguished names of acceptable certificate authorities.
4. The server sends a server "hello done" message and waits for a client response.
5. Upon receipt of the server "hello done" message, the client (the Web browser) verifies the validity of the server's digital certificate and checks that the server's "hello" parameters are acceptable.
If the server requested a client digital certificate, the client sends a digital certificate, or if no suitable digital certificate is available, the client sends a "no digital certificate" alert. This alert is only a warning, but the server application can fail the session if client authentication is mandatory.
6. The client sends a "client key exchange" message. This message contains the pre-master secret, a 46-byte random number used in the generation of the symmetric encryption keys and the message authentication code (MAC) keys, encrypted with the public key of the server.
If the client sent a digital certificate to the server, the client sends a "digital certificate verify" message signed with the client's private key. By verifying the signature of this message, the server can explicitly verify the ownership of the client digital certificate.
Note: An additional process to verify the server digital certificate is not necessary. If the server does not have the private key that belongs to the digital certificate, it cannot decrypt the pre-master secret and create the correct keys for the symmetric encryption algorithm, and the handshake fails.
7. The client uses a series of cryptographic operations to convert the pre-master secret into a master secret, from which all key material required for encryption and message authentication is derived. Then the client sends a "change cipher spec" message to make the server switch to the newly negotiated cipher suite. The next message sent by the client (the "finished" message) is the first message encrypted with this cipher method and keys.
8. The server responds with a "change cipher spec" and a "finished" message of its own.
9. The SSL handshake ends, and encrypted application data can be sent.


How SSL can be configured in PeopleSoft


What do we need for SSL in PeopleSoft?

1.      The web server's private key
2.      The web server's public key, digitally signed by a trusted certificate authority (CA), like VeriSign
3.      The digitally signed public key of the same CA that signed the web server's key
A public key is transferred and stored as a data element in a digital certificate or a certificate signing request (CSR). You can obtain public keys from a variety of sources in several different formats.
When setting up SSL, you need to:
  • ensure that the encryption keys are correctly formatted;
  • install them in the keystore;
  • configure them using the Administration Console.
Generate new key and CSR:
In PT8.49/PT8.50 as in prior versions of PeopleTools, we use pskeymanager to facilitate the SSL setup. This script is located under the following directory:
For PT8.49/8.50: <PS_HOME>/webserv/<DOMAIN>/bin.
For PT8.51: <PS_HOME>/webserv/<DOMAIN>/piabin
  1. Generate the key using pskeymanager -create (for usage information, enter pskeymanager -help).
  2. Follow the prompts and enter the requested information to create a new private key and a CSR for your web server.
    1. Pskeymanager uses the keystore in PIA_HOME\webserv\domain_name\keystore\pskey, with a default password of password.
    2. Pskeymanager prompts you for an alias for the new keys, for example, ServerABC. This is the name you'll use to refer to the keys in the future.
    3. Pskeymanager prompts you for distinguished name fields. Enter the appropriate values for your organization.
    4. Pskeymanager prompts you for information about the CSR expiration date, key size, key algorithms, and the private key password. All of these fields have default values.
Pskeymanager creates the private key inside the keystore, and creates the CSR as a file called ServerABC_certreq.txt in the current directory. You use the CSR (certificate signing request) to obtain your signed public key certificate and a root certificate from a CA.


  1. Submit the new CSR to your Certificate Authority, a.k.a. CA (they will issue a new certificate for your server). The process may change slightly depending on which CA you selected. I continue the process taking VeriSign as an example; in your organization you may have a different CA.
  2. Open your CSR file in a text editor and copy its entire contents, including the first and last lines:
-----BEGIN NEW CERTIFICATE REQUEST-----
...
...
-----END NEW CERTIFICATE REQUEST-----
5.      Access Verisign's test certificate enrollment site at https://www.verisign.com/ssl/buy-ssl-certificates/free-ssl-certificate-trial/index.html.
Verisign guides you through the CSR submission process, including:
    1. Accepting the Verisign license agreement.
    2. Entering your technical contact information, which includes the email address where Verisign can send your signed public key.
    3. Pasting your CSR contents in the provided text field.
    4. Verifying your CSR.
    5. Confirming and submitting your order.
Verisign also provides its own digitally signed public key in a certificate, which is known as a trusted CA certificate, a root certificate, or a chain certificate.
6.      Download the VeriSign test CA root certificate from http://digitalid.verisign.com/cgi-bin/getcacert.
When prompted, save getcacert.cer to PIA_HOME\webserv\domain_name.
7.      Check your email.
Verisign digitally signs your web server's public key, then returns it to you in a certificate, called the server certificate. Following is an example of the contents of a server certificate:
8.      Copy the entire certificate contents, and save it as a text file called ServerABC-cert.pem in PIA_HOME\webserv\domain_name.
Be sure to include the first and last lines. If you need to FTP the file to a Unix box, make sure to use ASCII mode.
9.      Once you have the 2 files (root CA and certificate), edit the root CA file and do a <Select All>, then <Copy>. Then edit your certificate file and paste the root CA contents at the bottom of the server certificate. The certificate file will look like this:
-------BEGIN CERTIFICATE---------
dfsfsdfdf
sfsdfwehdfhdf <---------certificate
dgdfgfgfdg
--------END CERTIFICATE-----------
-------BEGIN CERTIFICATE---------
hghjgfjgj
sfsdfwejjhdfhdf <---------intermediate
dgdfgiuiyuiuiyufgfdg
--------END CERTIFICATE-----------
(Your server certificate must contain, in addition to the web server's public key, any keys necessary to establish a chain of trust that culminates in the self-signed root certificate of a trusted root CA. That CA's root certificate must be in the keystore of any browser that's used to access your web server. Most browsers have an extensive set of trusted root certificates in their keystores.)
10.  Import the Root CA first:
pskeymanager -import
When prompted for an Alias, enter anything, such as RootCA
When prompted for the name of the certificate file, enter RootCA.cer
If asked if you want to trust this file, say yes.
11.   Import the server certificate (concatenated file):
pskeymanager -import
When prompted for an Alias, enter the same alias you specified when you created the CSR (step 3 of the above section).
When prompted for the name of the certificate file, enter the name of the concatenated certificate file
When prompted for the key password, enter the password you specified when you created the key/CSR (this password is unrecoverable)
If asked if you want to trust this file, say yes.
12.   Set up SSL in the Admin Console (WebLogic console):

Log into the Admin Console: http://hostname:admin_port/console
On the Change Center:
Lock and Edit
Go to:
Domain Structure > Environments > Servers
Click on PIA or the server you want to configure for SSL.
Go to the "Keystores" tab.
Select 'Custom Identity And Custom Trust' from the Keystores dropdown field
Verify that the Custom Identity and Trust keystore and passwords are correct. The default password for the pskey keystore is "password"
Click on the "Save" button.
Go to the "SSL" tab
Select "Keystores" from the "Identity and Trust Locations" dropdown field
Enter the Private Key Alias: the same alias as in step 3 of the "Generate new key" section.
Enter the Private Key Password: the same password as in step 3 of the "Generate new key" section.
Click on the "Save" button.
Activate Changes.
Restart the Web Server.

Wednesday 25 January 2012

Appserver loadbalancing and failover



Appserver load balancing and failover are specified in the configuration.properties file located in the webserver directory. The psserver parameter is edited according to our needs; some examples are given below.

1. psserver=host1:port1#wt1,host2:port2#wt2
e.g.: psserver=host1:9000#3,host2:9050#1
In this case, the appserver running on host1 would receive 3x more requests than the appserver running on host2.


2. psserver=<host>:<port>#wt{failover servers} - the failover string is added in curly brackets.
e.g.: psserver=host1:9000{host2:9020}
If host1 fails, host2 will take over.


3. psserver=Host1:9000#3{Host3:9080#3,Host4:9070#1},Host2:9020#1
In this case Host3 and Host4 are the failover servers for Host1: if Host1 is down, Host3 and Host4 will replace it, and they will also do weighted load balancing between themselves, as specified.




4. Sequential backup with the failover string:
psserver=Host1:9000#4{Host3:9010;Host4:9020},Host2:9030#1
When the failover servers are separated by semicolons they are used sequentially: the system assigns Host4 the requests only when both Host1 and Host3 are down.