Monday 10 September 2012

PeopleSoft VERSION Application Engine




We run the VERSION Application Engine (AE) in PeopleSoft under various circumstances, but it is essential to understand what the VERSION AE actually does.
PSVERSION – this table stores an acronym for each object type within the PeopleSoft application (AEM - Application Engine Management, PJM - Projects Management, PPC - PeopleCode Management, UPM - User Profile Management, RDM - Record Data Management, etc.) together with its corresponding version value. PSVERSION is updated by the Panel Processor, and every time the PeopleSoft component processor tries to retrieve an object definition, it first checks the PSVERSION table to obtain the most up-to-date version of that object.

Therefore, whenever an object definition changes, the PSVERSION table is updated correspondingly. For example, if a record is modified in Application Designer, the application first updates the version in PSRECDEFN (UPDATE PSRECDEFN SET VERSION = (SELECT VERSION + 1 FROM PSVERSION WHERE OBJECTTYPENAME = 'RDM') WHERE RECNAME = 'YourRecord') and afterwards updates the PSVERSION table, as shown below:
UPDATE PSVERSION SET VERSION = VERSION + 1 WHERE OBJECTTYPENAME = 'RDM';
UPDATE PSVERSION SET VERSION = VERSION + 1 WHERE OBJECTTYPENAME = 'SYS';
This way, the next time the record object is accessed, the Application Server will issue the following SQL statements:
SELECT MAX(VERSION) FROM PSVERSION WHERE OBJECTTYPENAME = 'RDM';
SELECT MAX(VERSION) FROM PSRECDEFN;
and will verify that the values returned by the two SELECTs are the same. If the values differ, you might encounter this kind of error when running SYSAUDIT:
Version Check Audits - Exception(s) Found, Manager - OBJECTTYPENAME (e.g. RDM, UPM, etc.) Version check of table xxxxDEFN against PSVERSION failed.
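The update-then-verify sequence above can be sketched as a small in-memory simulation (Python; the dictionary names mirror the PeopleSoft tables, but no real database is involved and the starting counter values are invented):

```python
# Minimal in-memory simulation of the PSVERSION protocol described above.
# PSVERSION holds one counter per object-type manager; each managed object
# carries the counter value that was current when it was last saved.

psversion = {"RDM": 10, "SYS": 25}          # hypothetical starting counters
psrecdefn = {"MY_RECORD": {"VERSION": 9}}   # hypothetical record definition

def save_record(recname):
    """Mimic Application Designer saving a record definition."""
    # 1. stamp the object with the incremented RDM counter
    psrecdefn[recname]["VERSION"] = psversion["RDM"] + 1
    # 2. bump the RDM and SYS counters in PSVERSION
    psversion["RDM"] += 1
    psversion["SYS"] += 1

def version_check():
    """Mimic the SYSAUDIT check: MAX(object version) vs PSVERSION."""
    max_obj = max(r["VERSION"] for r in psrecdefn.values())
    return max_obj == psversion["RDM"]

save_record("MY_RECORD")
assert version_check()        # counters agree after a clean save
psversion["RDM"] += 1         # simulate a lost object-table update...
assert not version_check()    # ...which SYSAUDIT would flag for RDM
```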
The same approach is taken when one modifies a user profile: PSVERSION is updated and the VERSION field of the corresponding user in the PSOPRDEFN table is changed accordingly.
UPDATE PSVERSION SET VERSION = VERSION + 1 WHERE OBJECTTYPENAME = 'UPM';
UPDATE PSVERSION SET VERSION = VERSION + 1 WHERE OBJECTTYPENAME = 'SYS';
UPDATE PSOPRDEFN SET VERSION = (SELECT MAX(VERSION) FROM PSVERSION WHERE OBJECTTYPENAME = 'UPM') WHERE OPRID = 'YourOPRID';

When running the SYSAUDIT SQR report from the Sample Processes (PT < 8.4x) or System Process Requests (PT >= 8.4x) menu, one might encounter the following error:
Version Check Audits - Exception(s) Found, Manager - OBJECTTYPENAME (e.g. RDM, UPM, etc.) Version check of table xxxDEFN against PSVERSION failed

This can occur when the Panel Processor fails to update PSVERSION. Normally, the Application Server compares the version number in the RDM.KEY (.IDX) file to the one in PSVERSION: if the version for that object type in the PSVERSION table is greater than the one in the .KEY (.IDX) file, the object is retrieved from the database and the RDM.DAT file is updated with the latest version of the object.
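The cache-validation logic described above can be sketched as a small simulation (Python, in-memory; the dicts stand in for PSVERSION and the server's .KEY/.DAT cache files, and all names and values are illustrative):

```python
# Simplified model of application-server cache validation: if the version
# in PSVERSION is newer than the cached copy, refetch from the database
# and rewrite the cached entry (the .DAT-file equivalent here).

db_versions = {"RDM": 42}    # stands in for PSVERSION rows (hypothetical)
cache = {"RDM": {"version": 40, "data": "stale record defs"}}

def get_objects(object_type, fetch_from_db):
    cached = cache.get(object_type)
    if cached is None or db_versions[object_type] > cached["version"]:
        # cache miss or stale entry: reload from the database
        cache[object_type] = {"version": db_versions[object_type],
                              "data": fetch_from_db(object_type)}
    return cache[object_type]["data"]

data = get_objects("RDM", lambda t: "fresh record defs")
```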

At any given time, PeopleSoft objects are stored in .DAT files on the Application Server and in that server's memory cache. Issues generally arise when projects containing various objects are migrated from one environment to another without first stopping the Application Server and clearing its cache, so the version fields do not get updated correctly in both the XXXDEFN tables and PSVERSION. Of course, deleting the Application Server cache after every project/object migration is not the desired solution, since it impacts end users and generates unnecessary disk load while the newly migrated objects are recached.
In order to fix this error, one should try the following solutions:
- Execute the VERSION Application Engine program from the command line: psae -CD -CT -CO -CP -R -AI VERSION
Afterwards, ensure that all the version values are 1 by issuing the following command on the database:
SELECT * FROM PSVERSION;
Finally, empty the Application Servers' cache (it is recommended to have more than one application server, deployed in a load-balancing architecture; this way users are not affected when the servers are stopped, their caches cleared, and they are restarted one by one). The VERSION AE resets the version numbers for all XXXDEFN tables and sets VERSION = 1 in the PSVERSION and PSLOCK tables (this does not force every object to be recached, as deleting the cache would). Lastly, the VERSION Application Engine updates the LASTREFRESHDTTM field in the PSSTATUS table, which tells the caching mechanism to compare the objects in the cache files against the database and synchronize them.
- Another solution, usually not recommended due to its various implications, is to update LASTREFRESHDTTM directly in the PSSTATUS table to the current date (UPDATE PSSTATUS SET LASTREFRESHDTTM = SYSDATE), which does not require shutting down the application servers. When the LASTREFRESHDTTM column is updated, the result is the purging of the cache files: the first time a process accesses a cache file, it reads LASTREFRESHDTTM and compares that datetime with the datetime stamped on the cache file (each cache file records when it was created). If the two datetime values differ, the cache file is purged. But this comparison occurs only the first time a process accesses a cache file; after that, the process uses the cache file without comparing the datetimes again. In three-tier mode, because the application server has remained up, the comparison is not performed, so its cache files are not purged and refreshed.



The effect of the VERSION Application Engine program on the
PeopleTools system tables is fairly drastic, and as such it should be
used only in certain, specific instances. Technically speaking, it
should only be run when there is reason to believe that the versions
in the PSVERSION table are no longer coherent, or when the versions in
one of the managed-object tables are out of sync with PSVERSION.

Generally speaking it should only be run when indicated by one of the following:

1. The SYSAUDIT report indicates that there are objects out of sync.

2. A GSC analyst recommends its use.

3. The PeopleTools development team recommends its use for a specific issue.

4. Following a major PeopleTools upgrade or major application upgrade.

5. An Installation Guide indicates its need.

NOTE: VERSION should NOT be run as a matter of standard operating
procedure.

Due to some side effects from the Application Designer program
(PSIDE.EXE) VERSION must be run only when all Application Designer
clients are completely logged out and shut down. Many customers
choose to restart the database to ensure that this is the case. All
Application Servers, Process Schedulers, and other PeopleTools clients
should be completely shut down while it is being run.

PROCESS SCHEDULER OPERATION

Logically following the previous point, use of the Process
Definition that allows the VERSION AE to be run from the Process
Scheduler is no longer recommended. VERSION AE should only be run
from a command line, and then only when that connection is the only
one active to the database. (Note: this does not mean that the
database is in "single-user mode".)

If the VERSION program is run incorrectly, performance can be
dramatically impacted. It is not uncommon for the stability of the
Application Server processes to be compromised. Additionally,
application integrity can possibly be affected; simple saving efforts
from Application Designer can fail.


PROPER PROCEDURE

The proper steps to follow when running VERSION AE are:

1. Shutdown all Application Servers, Process Schedulers, and
other PeopleTools clients.

2. *critical step* Ensure that all Application Designer session
are logged out and shut down. If necessary, shutdown and restart
the database and its communication software.

3. Establish the proper shell environment. Normally this
includes logging in as your PeopleSoft id,
changing to the PSFT bin directory (i.e. cd $PS_HOME/bin), and
setting the PS_SERVER_CFG environment variable (export
PS_SERVER_CFG=$PS_HOME/appserv/prcs/<dbname>/psprcs.cfg)


4. Execute the command from a command line:
psae -CD <dbname> -CT <dbtype> -CO <oprid> -CP <pswd> -R
INSTALL [where INSTALL is a valid run control] -AI VERSION
(Note: INSTALL must be a valid run control for <oprid>)

5. Issue the following SQL and archive the output to ensure that
the program ran (all the versions should be 1).
SELECT * FROM PSVERSION ORDER BY VERSION DESC;

6. Clear the cache files from the Application Server, Web
Servers, Process Schedulers, and Client machines.

7. Restart the system per normal operational procedures. There
will be expected performance impact as the system rebuilds its cache
files.

8. Over the course of the following days, every 4 hours or so,
rerun the SQL from Step #5. You should observe a gradual growth of
the versions, typically in the order of dozens per day. The version
associated with SYS should always be equal to or greater than all
other values in the table.
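The monitoring rule from step 8 is easy to script. A minimal sketch, assuming the result of the SQL from step 5 has been loaded into a dictionary keyed by OBJECTTYPENAME (the values below are invented):

```python
# Check the invariant from step 8: the version associated with SYS
# must be >= every other version in PSVERSION.
# Input: {OBJECTTYPENAME: VERSION}, e.g. from the step-5 query.

def sys_invariant_holds(versions):
    others = [v for k, v in versions.items() if k != "SYS"]
    return not others or versions["SYS"] >= max(others)

assert sys_invariant_holds({"SYS": 50, "RDM": 12, "UPM": 7})
assert not sys_invariant_holds({"SYS": 5, "RDM": 12})   # would warrant a GSC call
```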

Should you observe one of the following conditions, contact the GSC
immediately for further advice.

1. The version value associated with SYS is no longer greater than
or equal to all other values in the PSVERSION table.

2. Some of the values increase dramatically, on the order of
several thousand, and then remain fairly constant. Normal behavior is
for the values to increase by increments of 1. One exception would be
during the migration of a project with many records. Some values will
increase by the number of records migrated.



Histograms in Oracle




Where there is a high degree of skew in the column distribution (a non-uniform distribution of data), histograms should lead to a better estimation of selectivity. This should produce plans that are more likely to be optimal. In other words, histograms contain information on the nature of the table data.

What is a histogram? Histograms are a feature of the cost-based optimizer (CBO) that help the optimizer determine how data is skewed (distributed) within a column. A histogram is worth creating for a column that appears in WHERE clauses and is highly skewed. Histograms help the optimizer decide whether to use an index or a full-table scan, and help it determine the fastest table join order.

What is a bucket? When histograms are created, the number of buckets can be specified; it is this number that controls the type of histogram created. Roughly, each bucket corresponds to one row of histogram information in the dictionary views.
When building a histogram, the information it stores is interpreted differently depending on whether the number of buckets requested is less than the number of distinct values or equal to it. Specifically, ENDPOINT_NUMBER and ENDPOINT_VALUE in dba/user/all_histograms have different meanings in the two cases.

Oracle uses two types of histograms for column statistics: height-balanced histograms and frequency histograms.

Frequency histograms
In a frequency histogram, each distinct value of the column corresponds to a single bucket, and each bucket contains the number of occurrences of that value. This is also called a value-based histogram. Frequency histograms are created automatically instead of height-balanced histograms when the number of distinct values is less than or equal to the number of histogram buckets specified.
data    sorted
Y       Y
Y       Y
Y       Y
N       Y
N       N
NA      N
N       N
Y       NA
NA      NA

Results:
Bucket 1: Y = 4
Bucket 2: N = 3
Bucket 3: NA = 2
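The worked example above can be reproduced with a short sketch (Python here only to illustrate the bucketing; the column values are the nine rows from the table above):

```python
from collections import Counter

# Build a frequency histogram: one bucket per distinct value,
# each holding the number of occurrences of that value.
data = ["Y", "Y", "Y", "N", "N", "NA", "N", "Y", "NA"]
freq_histogram = Counter(data)
# Counter({'Y': 4, 'N': 3, 'NA': 2}) -- matches the buckets above
```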

Height-balanced histogram
In a height-balanced histogram, the column values are divided into bands so that each band contains approximately the same number of rows. The useful information that the histogram provides is where in the range of values the endpoints fall.
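A minimal sketch of the idea, assuming a simple equal-height split over the sorted data (a simplification of what Oracle actually stores; the function name and sample data are illustrative):

```python
def height_balanced_endpoints(values, num_buckets):
    """Endpoints of equal-height bands over the sorted data.
    Bucket 0 records the low value; bucket i records the value at
    row n*i/num_buckets, so each band covers about the same row count."""
    s = sorted(values)
    n = len(s)
    endpoints = [s[0]]                       # bucket 0: column minimum
    for i in range(1, num_buckets + 1):
        endpoints.append(s[(n * i) // num_buckets - 1])
    return endpoints

# Skewed sample: 1 occurs six times, 2 and 3 once each.
eps = height_balanced_endpoints([1, 1, 1, 1, 1, 1, 2, 3], 4)
# [1, 1, 1, 1, 3] -- the popular value 1 ends several bands,
# which is exactly the signal the optimizer reads from the endpoints.
```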

A histogram is created with DBMS_STATS.GATHER_TABLE_STATS, using METHOD_OPT => 'FOR COLUMNS SIZE <# of buckets> <column name>'.
SIZE determines the number of buckets to be created.
execute dbms_stats.gather_table_stats(ownname => 'scott', tabname => 'employee', method_opt => 'FOR COLUMNS SIZE 10 gender'); -- for a particular column
execute dbms_stats.gather_table_stats(ownname => 'scott', tabname => 'employee', method_opt => 'FOR ALL COLUMNS SIZE 10'); -- for all columns
execute dbms_stats.gather_table_stats(ownname => 'scott', tabname => 'employee', method_opt => 'FOR ALL INDEXED COLUMNS SIZE 10'); -- for all indexed columns




EXAMPLE
-------

Table TAB1

SQL> desc tab1
 Name                            Null?    Type
 ------------------------------- -------- ----
 A                                        NUMBER(6)
 B                                        NUMBER(6)

Column A contains unique values from 1 to 10000.

Column B contains 10 distinct values. The value '5' occurs 9991 times. Values
'1, 2, 3, 4, 9996, 9997, 9998, 9999, 10000' occur only once.

Test queries:

(1) select * from tab1 where b=5;
(2) select * from tab1 where b=3;

Both the above queries would use a FULL TABLE SCAN as there is no other
access method available.

Then we create an index on column B.

select lpad(INDEX_NAME,10), lpad(TABLE_NAME,10),
       lpad(COLUMN_NAME,10), COLUMN_POSITION, COLUMN_LENGTH
from user_ind_columns
where table_name='TAB1'

SQL> /

LPAD(INDEX LPAD(TABLE LPAD(COLUM COLUMN_POSITION COLUMN_LENGTH
---------- ---------- ---------- --------------- -------------
    TAB1_B       TAB1          B               1            22

Now,

(1) select * from tab1 where b=5;
(2) select * from tab1 where b=3;

Both do an INDEX RANGE SCAN to get the ROWID to do a lookup in the table.

With an INDEX present, it would be preferable to do an INDEX RANGE SCAN for
query (2), but a FULL TABLE SCAN for query (1).


ANALYZING THE TABLE
-------------------

Next, analyze the table using compute statistics:

SQL> execute dbms_stats.gather_table_stats(ownname => 'scott', tabname => 'tab1')

From dba_tables:

  NUM_ROWS     BLOCKS EMPTY_BLOCKS  AVG_SPACE  CHAIN_CNT AVG_ROW_LEN
---------- ---------- ------------ ---------- ---------- -----------                             
     10000         64            0         86          0          10

From dba_tab_columns:

NUM_DISTINCT LOW  HIGH   DENSITY  NUM_NULLS NUM_BUCKETS LAST_ANALYZ SAMPLE_SIZE
------------ ---- ---- --------- ---------- ----------- ----------- -----------
       10000 Full Full     .0001          0           1 30-JUN-1999       10000
          10 Full Full        .1          0           1 30-JUN-1999       10000


SQL> select lpad(TABLE_NAME,10), lpad(COLUMN_NAME, 10),
  2  bucket_number, endpoint_value
  3  from user_histograms
  4  where table_name='TAB1';

TABLE_NAME COLUMN_NAME BUCKET_NUMBER ENDPOINT_VALUE
---------- ----------- ------------- --------------
      TAB1           A             0              1
      TAB1           A             1          10000
      TAB1           B             0              1
      TAB1           B             1          10000



Gathering statistics has created 1 BUCKET for each column, so all values for
a column are in the same bucket. BUCKET_NUMBER identifies the bucket, and
ENDPOINT_VALUE is the last column value in that bucket.

Now query (1) and (2) ; both do a FULL TABLE SCAN.

So, the fact that you have statistics about the table and columns does not
help the optimizer distinguish how many rows hold each value.
The reason it does a FULL TABLE SCAN is that there is a 1 BUCKET histogram,
so any value selected for is assumed to be in that bucket.


CREATING HISTOGRAMS
-------------------

What you need now is to create histograms so the Optimizer knows how many
values occur for each column.

Query (1): select * from tab1 where b=5;
           should do a FULL TABLE SCAN   and

Query (2): select * from tab1 where b=3;
           should do an INDEX RANGE SCAN

SQL> execute dbms_stats.gather_table_stats(ownname => 'scott', tabname => 'tab1', method_opt => 'FOR COLUMNS SIZE 10 b');

select lpad(TABLE_NAME,10), lpad(COLUMN_NAME, 5),
       endpoint_number, endpoint_value
from user_histograms;

TABLE_NAME COLUMN_NAME ENDPOINT_NUMBER ENDPOINT_VALUE
      TAB1           B               1              1
      TAB1           B               2              2
      TAB1           B               3              3
      TAB1           B               4              4
      TAB1           B            9995              5
      TAB1           B            9996           9996
      TAB1           B            9997           9997
      TAB1           B            9998           9998
      TAB1           B            9999           9999
      TAB1           B           10000          10000

So, now there are statistics on the table and on the columns.

You requested 10 buckets (size 10) and there are 10 distinct values.

The ENDPOINT_VALUE shows the column value and the ENDPOINT_NUMBER
shows the cumulative number of rows.

For example, for ENDPOINT_VALUE 2, it has an ENDPOINT_NUMBER 2, the previous
ENDPOINT_NUMBER is 1, hence the number of rows with value 2 is 1. 

Another example is ENDPOINT_VALUE 5. Its ENDPOINT_NUMBER is 9995. The previous
bucket ENDPOINT_NUMBER is 4, so 9995 - 4 = 9991 rows containing the value 5.
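The differencing rule used in the two examples above can be sketched as follows (the endpoint list mirrors the USER_HISTOGRAMS output for column B):

```python
# Decode a frequency histogram: ENDPOINT_NUMBER is cumulative, so the
# row count for each value is the difference from the previous endpoint.
# Pairs are (ENDPOINT_VALUE, ENDPOINT_NUMBER), as listed for column B.
endpoints = [(1, 1), (2, 2), (3, 3), (4, 4), (5, 9995),
             (9996, 9996), (9997, 9997), (9998, 9998),
             (9999, 9999), (10000, 10000)]

def decode_counts(endpoints):
    counts, prev = {}, 0
    for value, endpoint_number in endpoints:
        counts[value] = endpoint_number - prev
        prev = endpoint_number
    return counts

counts = decode_counts(endpoints)
# counts[5] == 9995 - 4 == 9991, matching the worked example above
```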

So, now QUERY (1) does in fact do a Full Table Scan.

SQL> select * from tab1 where b=5
SQL> /

Execution Plan
----------------------------------------------------------
0      SELECT STATEMENT Optimizer=ALL_ROWS (Cost=10 Card=9991 Bytes=99910)

1    0   TABLE ACCESS (FULL) OF 'TAB1' (Cost=10 Card=9991 Bytes=99910)


And, QUERY (2) does do an Index Range Scan.

SQL> select * from tab1 where b=3
SQL> /

Execution Plan
----------------------------------------------------------
0      SELECT STATEMENT Optimizer=ALL_ROWS (Cost=6 Card=500 Bytes=5000)
1    0   TABLE ACCESS (BY ROWID) OF 'TAB1' (Cost=6 Card=500 Bytes=5000)
2    1     INDEX (RANGE SCAN) OF 'TAB1_B' (NON-UNIQUE)

This is fine if you have a low number of distinct values, but some tables
have a huge number of distinct values. You don't want to create a bucket
for each value; there would be too much overhead. In this case you would
request fewer buckets than distinct values.


CREATING HISTOGRAMS WITH LESS BUCKETS THAN DISTINCT VALUES
----------------------------------------------------------

SQL> execute dbms_stats.gather_table_stats(ownname => 'scott', tabname => 'tab1', method_opt => 'FOR COLUMNS SIZE 8 b');


SQL> select lpad(TABLE_NAME,10), lpad(COLUMN_NAME, 5),
  2>       endpoint_number, endpoint_value
  3> from user_histograms;

LPAD(TABLE LPAD( ENDPOINT_NUMBER ENDPOINT_VALUE
---------- ----- --------------- --------------
TAB1     B               0              1
TAB1     B               7              5
TAB1     B               8          10000

Here, Oracle creates the requested number of buckets, each covering the
same number of rows; for a frequently occurring value, several consecutive
buckets share the same endpoint.

The ENDPOINT_NUMBER is the actual bucket number, and ENDPOINT_VALUE is
the endpoint value of the bucket, determined by the column value.

From the output above, bucket 0 holds the low value for the column.
Buckets 1 to 6 are not shown: Oracle compresses consecutive buckets
with identical endpoints to save space.

But we have bucket 1 with an endpoint of 5,
                    bucket 2 with an endpoint of 5,
                    bucket 3 with an endpoint of 5,
                    bucket 4 with an endpoint of 5,
                    bucket 5 with an endpoint of 5,
                    bucket 6 with an endpoint of 5,
                    bucket 7 with an endpoint of 5 AND
                    bucket 8 with an endpoint of 10000

So bucket 8 contains values between 5 and 10000.
All buckets contain the same number of values (which is why they are called
height-balanced histograms), except that the last bucket may have fewer
values than the other buckets.

If the data is uniform, you would not use histograms. If you request the
same number of buckets as distinct values, Oracle creates one bucket per
distinct value (a frequency histogram, as shown earlier). If you request
fewer buckets, Oracle uses an algorithm to balance values across the
buckets, and any values that remain (which have to be fewer than the
number stored in each height-balanced bucket) go into the last bucket.


STORING CHARACTER VALUES IN HISTOGRAMS
--------------------------------------

Character columns behave somewhat exceptionally, in that histogram data is
stored only for the first 32 bytes of any string. Predicates that contain
strings longer than 32 characters will not use histogram information,
and the selectivity will be 1 / NUM_DISTINCT.

Data in histogram endpoints is normalized to double precision floating point
arithmetic.
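As an illustration of why only a fixed-length prefix can be distinguished, here is a toy encoding that maps string prefixes to numbers while preserving order. This is not Oracle's actual normalization formula, only an analogy (the function name and encoding are invented):

```python
# Illustrative only: encode a fixed-length string prefix as a big
# integer so that lexicographic order of strings maps to numeric order
# of endpoint values. Oracle's real normalization differs in detail.
PREFIX_BYTES = 32  # matches the 32-byte limit mentioned above

def endpoint_value_for(s):
    b = s.encode("utf-8")[:PREFIX_BYTES].ljust(PREFIX_BYTES, b"\x00")
    return int.from_bytes(b, "big")

assert endpoint_value_for("a") < endpoint_value_for("b")
# Strings that differ only beyond the prefix collapse to one endpoint:
assert endpoint_value_for("x" * 40) == endpoint_value_for("x" * 50)
```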

EXAMPLE
-------

SQL> select * from morgan;

A
----------
a
b
c
d
e
e
e
e


The table contains 5 distinct values. There is one occurrence each of 'a',
'b', 'c' and 'd', and 4 occurrences of 'e'.

Create a histogram with 5 buckets.

SQL> analyze table morgan compute statistics for columns a size 5;

Looking in user_histograms:

LPAD(TABLE LPAD( ENDPOINT_NUMBER ENDPOINT_VALUE
---------- ----- --------------- --------------
    MORGAN     A               1     5.0365E+35
    MORGAN     A               2     5.0885E+35
    MORGAN     A               3     5.1404E+35
    MORGAN     A               4     5.1923E+35
    MORGAN     A               8     5.2442E+35

So, ENDPOINT_VALUE 5.0365E+35 represents 'a',
                   5.0885E+35 represents 'b',
                   5.1404E+35 represents 'c',
                   5.1923E+35 represents 'd', and
                   5.2442E+35 represents 'e'.

Then if you look at the cumulative values of ENDPOINT_NUMBER,
the corresponding ENDPOINT_VALUEs are correct.