Note: HP has renamed the product now called HP Operations Manager way too many times, and is also quite inconsistent in its use of abbreviations. Here we will assume that the term "HP Operations Manager" and the abbreviations HPOM, OMU, and OVO all mean the same thing :-)
HPOM 9 agents use HTTPS for communication, which provides a secure channel between the agent and the "mothership". The message format is based on XML. Managed nodes can be identified by their unique OvCoreID and not necessarily by their IP addresses.
The architecture of the agent as described in the HTTPS Agent Concepts and Configuration Guide, Software Version 9.01 (Figure 1-3, p. 31), looks pretty similar to the classic Tivoli agent.
The first agent installation is the creation of the agent on the management server itself. This way the server becomes the first managed node. After the server is installed and configured, agents on other servers can be installed from the management server semi-automatically (an ssh connection is needed).
The main difference is that the Tivoli agent is a single process while the HPOM agent consists of several processes. Also, some tasks that in Tivoli were performed by the server are delegated to the agent level. In other words, HPOM has a more powerful, but much more heavyweight, agent than Tivoli. As a result the agent has the potential to cause many troubles during installation (trouble with certificates is pretty common), but after it is installed and configured it runs quite reliably; almost the only remaining problem is that one of its multiple processes may die (a restart in this case almost always cures the problem and can be done automatically). Port 383 is used for communication between the HTTPS agent and the server.
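Since port 383 must be reachable in both directions, a quick connectivity check with standard tools can rule out firewall problems before blaming the agent itself. A minimal sketch (the hostname is a placeholder; bash's /dev/tcp pseudo-device is used so nothing beyond bash and coreutils is required):

```shell
# Check TCP reachability of the BBC port (383) on a remote host.
# Usage: check_port HOST PORT
check_port() {
    # bash opens a TCP connection when redirecting to /dev/tcp/HOST/PORT;
    # timeout keeps a silently-dropped (filtered) port from hanging the check.
    timeout 5 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

# Example (placeholder hostname -- substitute your management server):
# check_port omuserver.example.com 383 && echo "port 383 reachable"
```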
Like the Tivoli agent, the HPOM agent by default runs as root on UNIX, but if necessary it can be configured to run under a regular account with fewer privileges.
The daemons are controlled by two RC scripts. On Linux the following startup scripts are used:
HTTPS agents on Linux kernel 2.6 require the standard C++ library (libstdc++.so). There are two main versions of HTTPS agents used with HPOM 9:
Version 11.0x. This later version is substantially different from 8.6. It can be used with SLES 11 SP1, and with some tricks with SLES 11 SP2 (which has Linux kernel version 3, which the agent does not recognize; this can be fixed by manually modifying one of the scripts on the installation disk or ISO file). Assuming the agent ISO distribution is mounted at /tmp/ISO, the script is /tmp/ISO/packages/LIN/Linux2.6_X64/oareqcheck.cfg. See SLES 11 SP2 installation trick for details.
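Since the ISO itself is read-only, the usual approach is to copy it to disk and patch the copy. The sketch below only illustrates the idea; the contents of oareqcheck.cfg shown here are a made-up stand-in, so take the actual pattern to change from your own installation media:

```shell
# Stand-in for the real check script; on a copied ISO tree the real file
# is packages/LIN/Linux2.6_X64/oareqcheck.cfg.
mkdir -p /tmp/agent-install/packages/LIN/Linux2.6_X64
cfg=/tmp/agent-install/packages/LIN/Linux2.6_X64/oareqcheck.cfg
echo 'OSKERNELLEVEL = "2.6"   # assumed format, for illustration only' > "$cfg"

# Relax the kernel-version check so a 3.0 kernel (SLES 11 SP2) passes:
sed -i 's/2\.6/3.0/' "$cfg"
cat "$cfg"
```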
Managed nodes must have a valid, industry-standard X.509 certificate issued by the HP Certificate Server to be able to communicate with HP Operations management servers. Certificates, signed with 1024-bit keys, are required to identify managed nodes in a managed environment using the Secure Socket Layer (SSL) protocol. The "SSL handshake" between two managed nodes only succeeds if the issuing authority of the certificate presented by the incoming managed node is a trusted authority of the receiving managed node. The main communication security components responsible for creating and managing certificates are:
HP Certificate Server
HP Key Store
HP Certificate Client
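The trust model is plain X.509: the handshake succeeds only when the presented certificate chains to an authority the receiver trusts (the role played by the HP Certificate Server). The same relationship can be sketched with generic openssl commands; all names and files below are illustrative, not HPOM's actual ones, and 2048-bit keys are used here only to satisfy modern openssl defaults (HPOM used 1024-bit keys):

```shell
# A toy CA standing in for the HP Certificate Server:
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt \
        -subj "/CN=toy-certificate-server" -days 1
# A node key and certificate request (the HP Certificate Client's job):
openssl req -newkey rsa:2048 -nodes -keyout node.key -out node.csr \
        -subj "/CN=managed-node"
# The CA signs the request, producing the node certificate:
openssl x509 -req -in node.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
        -out node.crt -days 1
# Verification succeeds only against the issuing CA -- the trust the
# SSL handshake between managed nodes relies on:
openssl verify -CAfile ca.crt node.crt
```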
The files associated with the HTTPS agent are found in the following directory structures by default:
-rwxr-xr-x 1 root bin 12966 Sep 3 13:06 cpu_mon.sh*
-rwxr-xr-x 1 root bin 13216 Sep 3 13:06 disk_mon.sh*
-rwxr-xr-x 1 root bin 10556 Sep 3 13:06 dist_mon.sh*
-rwxr-xr-x 1 root bin  9538 Aug 5 13:57 libsis2om.pm*
-rwxr-xr-x 1 root bin 12611 Sep 3 13:06 mailq_l.sh*
-rwxr-xr-x 1 root bin 41912 Sep 3 13:06 mondbfile.sh*
-rwxr-xr-x 1 root bin   352 Aug 5 13:57 OM-SiSAlert*
-rwxr-xr-x 1 root bin   366 Aug 5 13:57 OM-SiSAlert_full*
-rwxr-xr-x 1 root bin 12515 Sep 3 13:06 proc_mon.sh*
-rwxr-xr-x 1 root bin   886 Aug 5 13:57 sis2om_perl.bat*
-rwxr-xr-x 1 root bin 13995 Aug 5 13:57 sis2om.pl*
-rwxr-xr-x 1 root bin 96162 Aug 5 13:57 sis2om_samples_95*
-rwxr-xr-x 1 root bin 14487 Aug 5 13:57 sis2om-setup.pl*
-rwxr-xr-x 1 root bin   937 Aug 5 13:57 sisconfigdir*
-rwxr-xr-x 1 root bin  2065 Aug 5 13:57 sis_control.pl*
-rwxr-xr-x 1 root bin 26062 Aug 5 13:57 sis_disc.pl*
-rwxr-xr-x 1 root bin 12615 Sep 3 13:06 swap_mon.sh*
-rwxr-xr-x 1 root bin 27672 Aug 5 13:57 TreePP.pm*
-rwxr-xr-x 1 root bin 12330 Sep 3 13:06 vp_chk.sh*
HTTPS Communication can be controlled using the following commands.
The opcragt utility is used to control agents from the HP Operations management server. The operations include:
There is a wrapper called opcagt on HTTPS nodes. This utility can be used to perform remote control tasks via application launch from the operator's desktop; it allows a common action definition to be set up for any kind of managed node. Along with starting and stopping the agent, it can be used to verify the status of the agent:
/opt/OV/bin/OpC/opcagt -status
coda     OV Performance Core     COREXT   (1603) Running
opcacta  OVO Action Agent        AGENT,EA (1486) Running
opcmsga  OVO Message Agent       AGENT,EA (1487) Running
opcmsgi  OVO Message Interceptor AGENT,EA (1605) Running
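As noted earlier, a dead subprocess is usually cured by a restart, which can be automated. A hedged sketch that decides from opcagt-style status output whether a restart is needed (the state words other than "Running" are assumptions; check what your agent version actually prints):

```shell
# Read opcagt -status output on stdin; succeed (exit 0) if any component
# line ends in a recognized state other than "Running".
agent_needs_restart() {
    awk '$NF ~ /^(Running|Stopped|Aborted)$/ && $NF != "Running" { bad = 1 }
         END { exit !bad }'
}

# Cron-able usage (default install path; stop/start flags per your version):
#   /opt/OV/bin/OpC/opcagt -status | agent_needs_restart &&
#       /opt/OV/bin/OpC/opcagt -stop && /opt/OV/bin/OpC/opcagt -start
```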
The document KM793358 shows the list of all nodes in a layout group, but it requires the node hierarchy to be specified as a parameter.
If the node hierarchy is not specified it defaults to "Node Bank", and nodes that are not in the Node Bank are not listed.
Attached is an SQL query which lists all nodes under all layout groups irrespective of the node hierarchy.
IT Resource Center forums
Aug 31, 2009
HPOM 8.33 HPUX 11.31 IA Server with latest patches (Server and Agents) effective 2009-08.
Updated Linux agent from
Operations Agent 08.11.000 to
Operations Agent 08.60.005
Just prior to update
ovbbccb -verbose -status omuserver.org
succeeds with no errors.
Post agent install (no install errors reported)
root@agent> /root #
ovbbccb -verbose -status omuserver.org
NOTE: Sending status request to: 'https://omuserver.org:383/
ERROR: (bbc-303) An exception occurred while querying the server
'pluto.dpsk12.org'. Exception message: (xpl-117) Timeout occurred
while waiting for data..
On the OMU server, the following executes with no errors:
ovbbccb -verbose -status agent
NOTE: Sending status request to: 'https://agent:383/Hewlett-Packard/OpenView/BBC/status/'.
HP OpenView HTTP Communication Incoming Connections
BBC 06.10.205; ovbbccb 06.10.205
(Namespace, Port, Bind Address, Open Sockets)
<default>   383   ANY   1
From HPOM server to agent:
ovbbccb -ping agent (from omu server):
agent: status=eServiceOK coreID=b0407b96-5198-7517-0e49-8d6c1910f2e7 bbcV=06.20.050 appN=ovbbccb appV=06.20.050
conn=1 time=73 ms
From Agent to Server:
ovbbccb -ping pluto
omuserver: status=eServiceOK coreID=401c626c-6433-7537-14e4-94396352425a
bbcV=06.10.205 appN=ovbbccb appV=06.10.205 conn=39 time=77 ms
Try the following on the Linux node.
1) Check whether any errors are logged in the System.txt file
2) try ovc -kill
3) Remove the ovbbccb.dat and queue files.
4) Restart the agents using ovc -start
5) Check ovbbccb -status
8.60 is a new & different animal. It is a huge "jump" HPOM-agent-wise. So, I suggest that you read the 8.6 release notes. I've just had an (unpleasant) experience where installation of 8.6 failed, but going back to 8.53 succeeded without a single problem.
Ramkumar, thanks for the suggestions...
>>> 1) Check whether any errors logged in System.txt file
>>> 2) try ovc -kill
>>> 3) Remove the ovbbccb.dat and queue files.
>>> 4) Restart the agents using ovc -start
>>> 5) Check ovbbccb -status
I have repeatedly tried ovc -kill and removed the queue files to no avail.
I also tried removing the ovbbccb.dat file, with similarly unsuccessful results.
The only thing of interest in the System.txt file is a repeated instance of: "... Unknown monitor 'DBSPI-0088'... " "... Unknown monitor 'DBSPI-0086'... "
I've now determined that this applies not only to Linux but also to HP-UX agents updated (at HP Support's request, mind you) to the latest version 8.60.005. This behavior of breaking things that work, just because Support demands you run the latest patches, needs to stop. HP needs to provide support for EXISTING releases, not simply the most current.
I now will probably have to remove my 8.60.005 patch on the management server and push the older agent 8.53.xx to all nodes.
What a P I T A!!!
As mentioned in my previous reply, the update to the latest patch level was requested by HP support while troubleshooting other issues.
The resulting update causing additional, completely unrelated problems is something rather typical of HP. I will likely have to remove the agent patches installed on the HPOM server and then redeploy the agent software to all those agents recently updated (Linux and HP-UX).
It's experiences like this that make me so darn frustrated at HP.
Jason: Thanks for the response; however, I'm not finding the 8.60 Agent Release Notes. I have read the Patch Description and nothing seems to stand out.
Can you post the release notes or provide a pointer?
OML9.01_Linux_HTTPSAgent.pdf - KM772798
Last modified: October 01, 2013