Netcool OMNIbus Notes

ObjectServer and Related Notes

Upgrading OMNIbus to 7.3.1
Running launchpad.sh to upgrade OMNIbus from 7.x.x to 7.3.1 moves the existing $NCHOME to $NCHOME.1 and installs 7.3.1 in $NCHOME. After it completes you need to copy everything that is missing from the $NCHOME.1 directories and files back into $NCHOME. If you have Impact and other products installed you will also need to move them back over to $NCHOME; this is fairly involved and can take a while. It includes the $NCHOME.1/platform content, particularly the libraries and etc/rules. I went through each directory and ran mv -i -u -b * ../../netcool/targetdir?? so that none of the new 7.3.1 files got clobbered. Note that the upgrade does migrate omni.dat and process control correctly.
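For example, a rough sketch of copying one directory back from the backup tree (the rules directory is just one example of a path you may need to carry over):
cd $NCHOME.1/omnibus/etc/rules
# -i prompts before overwriting, -u skips files that are newer in the target,
# -b keeps a backup copy of anything that does get replaced
mv -i -u -b * $NCHOME/omnibus/etc/rules/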
nco_osreport utility
Lives in the OMNIbus 7.3.1 bin directory; it does not exist in OMNIbus releases before 7.3.1. It creates SQL files that nco_dbinit can use to create a copy of an ObjectServer.
nco_osreport -server NCOMS -user root -dbinit   # creates the SQL files
nco_osreport -html   # creates an nco_osreport.html file containing the ObjectServer's complete configuration - cool tool
ObjectServer can't be reached
Make sure the ObjectServer is not listening only on the loopback address (127.0.0.1):
netstat -na | grep 4100
lsof -i :4100
If it is, insert an entry in /etc/hosts with the correct IP address, restart the ObjectServer, and check again:
netstat -na | grep 4100
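For example (a hypothetical entry; substitute your host's real IP address and hostname):
echo "192.0.2.10   ncohost1" >> /etc/hosts
# restart the ObjectServer, then confirm it is listening on the real address rather than 127.0.0.1
netstat -na | grep 4100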
FreeTDS on Linux
A free Sybase-compatible interface for Perl, used in Perl scripts to access ObjectServers directly. Install the latest release from http://freetds.org (0.63 at the time of writing) into /opt/netcool/tds:
cd /opt/netcool
tar xzvf /downloads/freetds-stable.tgz
mv freetds-0.63 tds
cd tds
./configure
make
make install
ln -s src/ctlib/.libs lib   # expose the built CT-Lib libraries as $SYBASE/lib so DBD::Sybase can find them
cd /install
tar xzvf /downloads/DBD-Sybase-1.07.tar.gz
tar xzvf /downloads/DBI-1.50.tar.gz
cd DBI-1.50
perl Makefile.PL
make
make install
cd /install/DBD-Sybase-1.07
export SYBASE=/opt/netcool/tds
perl Makefile.PL
make
make install
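A quick sanity check that both modules load (a sketch; paths assume the FreeTDS install above):
export SYBASE=/opt/netcool/tds
export LD_LIBRARY_PATH=$SYBASE/lib
perl -MDBI -MDBD::Sybase -e 'print "DBI $DBI::VERSION, DBD::Sybase $DBD::Sybase::VERSION\n"'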

DBD::Sybase and DBI can also be installed from CPAN (http://www.cpan.org).

Runtime environment for the scripts:
export SYBASE=/opt/netcool/tds
export LD_LIBRARY_PATH=$SYBASE/lib
cp $OMNIHOME/etc/interfaces.solaris /opt/netcool/tds
Make sure FREETDS and FREETDSCONF are not defined:
unset FREETDS
unset FREETDSCONF
mv /usr/local/etc/freetds.conf /usr/local/etc/freetds.conf.orig
Otherwise FreeTDS will use the freetds.conf file instead of the interfaces file.

To connect:
tsql -S NCOMS -U root   # log in
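A session might look something like this (the query is only an example; each batch is sent with go):
tsql -S NCOMS -U root
Password:
1> select Serial, Node, Severity from alerts.status where Severity = 5;
2> go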

In Perl:

#!/usr/bin/perl
# LD_LIBRARY_PATH must already be set in the shell before perl starts (see the
# exports above); setting it inside the script is too late for the runtime loader.
BEGIN { $ENV{SYBASE} = '/opt/netcool/tds'; }   # FreeTDS install dir (holds the interfaces file)
use DBD::Sybase;
use DBI;

my $db = DBI->connect("dbi:Sybase:server=NCOMS", "root", "") or die $DBI::errstr;
my $c = $db->prepare("describe alerts.status;");
$c->execute;
while (my $s = $c->fetchrow_hashref) {
    foreach my $k (sort keys %{$s}) {
        print "$k=", $s->{$k}, "\n";
    }
}
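A follow-on sketch reusing the same $db handle, for example to list unacknowledged critical events (standard alerts.status columns; adjust the filter for your site):

my $q = $db->prepare("select Node, Summary, Severity from alerts.status where Severity = 5 and Acknowledged = 0");
$q->execute;
while (my $row = $q->fetchrow_hashref) {
    print "$row->{Node}: $row->{Summary} (severity $row->{Severity})\n";
}
$db->disconnect;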


nco_sql scripts fail
Make sure 'go' has no whitespace before it.
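For example, a minimal script piped through nco_sql (the query itself is arbitrary; the point is that go starts in column 1):
$OMNIHOME/bin/nco_sql -server NCOMS -user root -password '' <<'EOF'
select count(*) from alerts.status;
go
EOF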
nco_os_migrate fails
Make sure the OMNIDB environment variable is not set.
Lost Impact policies
Replace the policy.lst file in $IMPACT_HOME/policy. Stop all Impact servers in the cluster, start the primary Impact server, then start the others.
Vantage Point probe
To enable the probe to see the ObjectServer when proxy servers are being used, make sure an entry for the proxy server exists in the probe's interfaces file, and that entries for the ObjectServer and the probe exist in the proxy server's interfaces file. Make sure the interfaces file was properly created from omni.dat using nco_igen or nco_xigen.
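For reference, a rough sketch of omni.dat entries (server names, hosts, and the proxy port are placeholders; regenerate the interfaces files with nco_igen or nco_xigen after editing):
[NCOMS]
{
    Primary: ncohost1 4100
}
[NCO_PROXY]
{
    Primary: proxyhost1 4400
}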
Automation Triggers Best Practices

Best practices for creating triggers:


The overriding goal when creating or modifying triggers should be to make the triggers as efficient as possible, with the shortest possible execution time. A trigger has exclusive access to the ObjectServer database for the duration of its execution. By minimizing the execution time of a trigger, you can free up time for other triggers or clients that require access to the database.

It is particularly important to reduce the execution time of database triggers because they interrupt the execution of a database operation, thereby slowing down the operation. For example, a pre-insert trigger on the alerts.status table will fire for every new event, so if an event flood occurs, the trigger will be executed multiple times. The degree of efficiency of the trigger will affect the ability of the system to cope.

The ObjectServer records the amount of time that each trigger uses during each granularity period and saves the details in the $NCHOME/omnibus/log/servername_trigger_stats.logn file. You can use this file to identify which triggers are using the most time, to prioritize which triggers to review, and to monitor the system to ensure that it is running as expected. In general, if a single trigger is using more than 3 seconds of time every 60 seconds (that is, the default granularity period), the trigger should be reviewed. Whenever you update your triggers, review the log file to verify that your changes do not cause a degradation in performance.

Use the following guidelines to improve the performance of your triggers.

Avoid table scans in database triggers
Table scans are expensive operations and can occur when SQL statements such as FOR EACH ROW are applied to a database table. When such statements are included in a database trigger, the cost can be particularly significant if the trigger is invoked frequently, and if the table being scanned has a large number of rows. For example, if the de-duplication trigger on the alerts.status table is modified so that every time the trigger fires it scans alerts.status for rows matching a set of criteria, this will limit the scalability of the system because the database trigger will take increasing amounts of time as the number of rows in the table being scanned increases. Also avoid nested scans. You can use the following techniques to avoid the table scan in database triggers:

- Perform the scan in a temporal trigger that is written so that one scan can match many rows. See the generic_clear trigger in $NCHOME/omnibus/etc/automation.sql for an example.
- If using a lookup table to enrich events, access the lookup table by using its primary key, as described further on. The use of the primary key results in a direct lookup of the row rather than a scan (V7.2, or later). You can also limit the size of the lookup table. The number of rows that are acceptable for a lookup table is site specific, and will depend on factors such as how often the lookup table is accessed, and hardware performance.
- Access a lookup table by using an index.

Avoid using the EVALUATE clause
When a trigger contains an EVALUATE clause, a temporary table is created to hold the results of the SELECT statement in the EVALUATE clause. The amount of time and resources that this temporary table consumes depends on the number of columns being selected and the number of rows matched by the condition in the WHERE clause. In most cases, you can replace the EVALUATE clause with a FOR EACH ROW clause, which cursors over the data and does not incur the overhead of creating a temporary table. A suitable use for an EVALUATE clause is when a GROUP BY clause is being applied to an SQL query.

Avoid excessive use of the WRITE INTO statement for logging out to file
The WRITE INTO statement is very useful for several purposes; in particular, for debugging triggers during development. However, when a trigger is being deployed in a production environment, it is advisable to comment out or remove WRITE INTO statements, because the quantity of data that is logged during debugging can create a bottleneck. Determine what is suitable for your system. For example, if the logging is infrequently called, there is probably no issue. However, if logging is called multiple times per INSERT statement (for example, within a nested loop), there could be a bottleneck.

Where possible, use the primary key when modifying rows
If the primary key of a database table is used in the WHERE clause of an UPDATE statement, the row is accessed by using direct lookup, rather than a table scan. For example:

update alerts.status where Identifier = tt.Identifier set Severity = Severity + 1;

Note: The VIA keyword is no longer required in V7.2, or later. The following command (which uses VIA) is equivalent to the preceding command:

update alerts.status VIA Identifier = tt.Identifier set Severity = Severity + 1;

Use indexes when using lookup tables
In V7.2, or later, the ObjectServer uses an index to access rows in a table if the primary key is used in a FOR EACH ROW statement. This is most useful where an ObjectServer table is being used as a lookup table, perhaps to enrich events. In such a case, the table and triggers that access the lookup table should be designed to access the lookup table by its primary keys to prevent costly full table scans. For example:

create table alerts.iplookup persistent
(
    IpAddr varchar(32) primary key,
    HostName varchar(8),
    Owner varchar(40)
);

create or replace trigger set_hostname
group madeup_triggers
priority 10
before insert on alerts.status
for each row
begin
    -- Access the lookup table using the primary key
    for each row tt in alerts.iplookup where tt.IpAddr = new.Node
    begin
        set new.Hostname = tt.HostName;
    end;
end;

After new triggers are developed and validated, test the performance of the triggers as follows:

1. Ensure that the data on which you run the tests is representative of the production system.
2. Ensure that the number of rows in any table that the trigger accesses is representative of the production system.
3. Measure the effect on system performance by using profiling and by collecting trigger statistics.
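A rough way to eyeball the most recent trigger statistics (a sketch; the file name follows the servername_trigger_stats.logn pattern mentioned above, so adjust the path and server name for your install):
ls -t $NCHOME/omnibus/log/NCOMS_trigger_stats.log* | head -1 | xargs tail -50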

 

Email me here simon@simonsaysbiz.com
