
TWS integration with TEC


The Integration Module (described in the Plus Module User's Guide) is installed under the existing Tivoli Framework. It includes a setup for forwarding certain messages to the TEC logfile adapter and a set of rules for TEC.

The predefined rules for TEC include event reporting for some of the job scheduling events described below.

Integration with TWS 8.5

Integrating with Tivoli Enterprise Console

Adapted from Integrating V8.5

Configuring the Tivoli Enterprise Console adapter

The TEC logfile adapter needs to be installed on the TWS server, and a set of configuration steps must be performed to enable the adapter to manage job scheduling events. For information on how to install the Tivoli Enterprise Console logfile adapter, refer to the IBM Tivoli Enterprise Console Installation Guide.

The config_teclogadapter script is used to configure the TEC logfile adapter on the TWS server. Perform the following steps:

  1. Set the environment variables for the Tivoli endpoint by running the lcf_env script.
  2. Run the config_teclogadapter script to configure the adapter (a sample invocation is shown after this list). The syntax is:
    config_teclogadapter [-tme] PATH [Adapter ID] [TWS Installation Path]
    where:
    -tme
    Specify this option when the Tivoli Enterprise Console adapter is a TME adapter.
    PATH
    Specify the Tivoli Enterprise Console adapter directory if you did not specify the -tme option; otherwise, specify the endpoint directory.
    Adapter ID
    Specify the Tivoli Enterprise Console adapter identifier (only for Tivoli Enterprise Console 3.9 and later). If you do not specify an ID, it is ignored.
    TWS Installation Path
    Specify the path where the Tivoli Workload Scheduler instance that you want to monitor is installed.
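
For instance, a minimal invocation sketch for a UNIX TME endpoint; the endpoint environment script path, the adapter identifier tws1, and the installation directory /opt/tws are example values only and must be adapted to your environment:

    # Step 1: source the endpoint environment (path as used elsewhere on this page)
    . /etc/Tivoli/lcf/1/lcf_env.sh
    # Step 2: configure the TME adapter for the TWS instance installed in /opt/tws
    config_teclogadapter -tme /opt/Tivoli/lcf/dat/1 tws1 /opt/tws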

The script performs the following configuration steps:

  1. If no Tivoli Workload Scheduler installation path is specified, the home directory of the Tivoli Workload Scheduler installation is used as the default.
  2. Copies config/BmEvents.conf into the home directory if it does not already exist.
  3. Configures config/BmEvents.conf, adding the list of events if not already specified, and defines the event.log file as an event output.
  4. Configures the configuration file of the Tivoli Enterprise Console adapter to read from the event.log file.
  5. Appends the maestro.fmt file to the format file of the Tivoli Enterprise Console adapter and regenerates the cds file.
  6. Restarts the Tivoli Enterprise Console adapter.

After you run the script, perform a conman stop and conman start to apply the changes.
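
For example, a minimal sketch run as the TWS user (the ;wait option, also used in the recycle procedure later on this page, waits until the processes have stopped):

    conman "stop;wait"
    conman start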

Configuring the Tivoli Enterprise Console server

As well as configuring the Tivoli Enterprise Console adapter, you need to configure the Tivoli Enterprise Console server.

The config_tecserver script enables the TEC server to receive events from the Tivoli Enterprise Console adapter. It must be run on the system where the Tivoli Enterprise Console Server is installed or on a ManagedNode of the same TME network. On the Windows platform, a TME bash is required to run the script. For example:

config_tecserver.sh { -newrb <RuleBase name> <RuleBase path> -clonerb <RuleBase name> | -userb <RuleBase name> }
    <EventConsole> [TECUIServer host] USER PASSWORD

where:

-newrb
Specify a new RuleBase with the specified name and path.
-clonerb
Specify the rule base to be cloned into the new Rule base.
-userb
Customize an already existing RuleBase.
EventConsole
Specify the EventConsole to be created and configured.
TECUIServer host
Specify the host name where the Tivoli Enterprise Console UI server is installed.
USER PASSWORD
Specify the user name and password needed to access the EventConsole.
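
For instance, a hypothetical invocation that creates a new RuleBase named TWS_rb under /usr/local/rb by cloning the Default RuleBase and configures an event console named TWSConsole; all of these names, the host tecui.example.com, and the credentials are placeholder values:

    config_tecserver.sh -newrb TWS_rb /usr/local/rb -clonerb Default TWSConsole tecui.example.com root rootpassword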

The script performs the following configuration steps:

  1. If specified, creates the new RuleBase from the cloned one.
  2. Adds the Tivoli Workload Scheduler BAROC event definitions to the specified RuleBase.
  3. Adds the Tivoli Workload Scheduler rules to the RuleBase.
  4. Compiles the RuleBase.
  5. Sets the RuleBase as the active RuleBase.
  6. Configures the specified EventConsole with Tivoli Workload Scheduler filters.
  7. Restarts the Tivoli Enterprise Console server.

Event formats

Table 16 lists the engine event formats.

Table 16. Tivoli Workload Scheduler engine events format
Event Number
mstReset 1
mstProcessGone 52
mstProcessAbend 53
mstXagentConnLost 54
mstJobAbend 101
mstJobFailed 102
mstJobLaunch 103
mstJobDone 104
mstJobUntil 105
mstJobSubmit 106
mstJobCancel 107
mstJobReady 108
mstJobHold 109
mstJobRestart 110
mstJobCant 111
mstJobSuccp 112
mstJobExtrn 113
mstJobIntro 114
mstJobStuck 115
mstJobWait 116
mstJobWaitd 117
mstJobSched 118
mstJobModify 119
mstJobLate 120
mstJobUntilCont 121
mstJobUntilCanc 122
mstSchedAbend 151
mstSchedStuck 152
mstSchedStart 153
mstSchedDone 154
mstSchedUntil 155
mstSchedSubmit 156
mstSchedCancel 157
mstSchedReady 158
mstSchedHold 159
mstSchedExtrn 160
mstSchedCnpend 161
mstSchedModify 162
mstSchedLate 163
mstSchedUntilCont 164
mstSchedUntilCanc 165
mstGlobalPrompt 201
mstSchedPrompt 202
mstJobPrompt 203
mstJobRecovPrompt 204
mstLinkDropped 251
mstLinkBroken 252
mstDomainMgrSwitch 301

Positional event variables

This subsection defines the positional event variables.

Table 17. Positional variables for events 101-118,120-122,204 (job events)
Variable Description
1 event number
2 schedule cpu
3 schedule id
4 job name
5 job cpu
6 job number
7 job status
8 real name (different from job name only for MPE jobs)
9 job user
10 jcl name (script name or command name)
11 every time
12 recovery status
13 time stamp (yyyymmddhhmm0000)
14 message number (not equal to zero only for job recovery prompts)
15 eventual text message (delimited by '\t')
16 record number
17 key flag
18 effective start time
19 estimated start time
20 estimated duration
21 deadline time (epoch)
22 return code
23 original schedule name (schedule name for schedules not (yet) carried forward)
24 head job record number (different from record number for rerun/every jobs)
25 Schedule name
26 Schedule input arrival time (yyyymmddhhmm00)

Table 18. Positional variables for event 119 (job property modified)
Variable Description
1 event number
2 schedule cpu
3 schedule id
4 job name
5 job cpu
6 job number
7 property type: StartTime = 2, StopTime = 3, Duration = 4, TerminatingPriority = 5, KeyStatus = 6
8 property value
9 record number
10 key flag
11 head job record number (different from record number for rerun/every jobs)
12 real name (different from job name only for MPE jobs)
13 original schedule name (schedule name for schedules not(yet) carried forward)
14 message number (not equal to zero only for job recovery prompts)
15 Schedule name
16 Schedule input arrival time (yyyymmddhhmm00)

Table 19. Positional variables for events 151-161, 163-165 (schedule events)
Variable Description
1 event number
2 schedule cpu
3 schedule ID
4 schedule status
5 record number
6 key flag
7 original schedule name (schedule name for schedules not (yet) carried forward)
8 time stamp
9 Schedule name
10 Schedule input arrival time (yyyymmddhhmm00)

Table 20. Positional variables for event 162 (schedule property modified)
Variable Description
1 event number
2 schedule cpu
3 schedule id
4 property type: StartTime = 2, StopTime = 3
5 property value
6 record number
7 original schedule name (schedule name for schedules not (yet) carried forward)
8 time stamp
9 Schedule name
10 Schedule input arrival time (yyyymmddhhmm00)

Table 21. Positional variables for event 202 (schedule prompt)
Variable Description
1 event number
2 schedule cpu
3 schedule id
4 Schedule name
5 Schedule input arrival time (yyyymmddhhmm00)

Table 22. Positional variables for event 203 (job prompt)
Variable Description
1 event number
2 schedule cpu
3 schedule id
4 job name
5 job cpu
6 prompt number
7 prompt message
8 Schedule name
9 Schedule input arrival time (yyyymmddhhmm00)

Re-loading monitoring data

The Configure Non-TME adapter and Configure TME® adapter commands set up the BmEvents.conf file in the TWShome directory. This configuration file determines which information the production processes (batchman and mailman) write to the TWShome/log_source_file file (by default, the event.log file) and how this information is written.

You can change the name of the log file by editing the FILE option in the BmEvents.conf file (see the option descriptions below).

In the BmEvents.conf file the # sign marks a comment. Remove the # sign to uncomment a line.

The contents of this file are also used by other Tivoli® applications that manage events and with which IBM Tivoli Workload Scheduler can interact, such as IBM® Tivoli NetView® and IBM Tivoli Business Systems Management.

The options you can set in the BmEvents.conf file are described below:

OPTIONS=MASTER|OFF
If the value is set to MASTER then all job scheduling events gathered by that workstation are reported. If that workstation is the master domain manager or the backup master domain manager, with Full Status option switched on, then all scheduling events for all workstations are reported.

If the value is set to OFF, the job scheduling events are reported only if they relate to the workstation where the file is configured.

If commented, it defaults to MASTER on the master domain manager workstation, and to OFF on a workstation other than the master domain manager.

LOGGING=ALL|KEY
Disables or enables the key flag filter mechanism.

If set to ALL then all events from all jobs and job streams are logged.

If set to KEY the event logging is enabled only for those jobs and job streams that are marked as key. The key flag is used to identify the most critical jobs or job streams. To set it in the job or job stream properties use:

  • The keywords KEYSCHED (for job streams) and KEYJOB (for jobs) from the Tivoli Workload Scheduler command line interface.
  • The Is Monitored Job check box (for jobs) and the Is Monitored Job Stream check box (for job streams) in the IBM Tivoli Workload Scheduler Job Scheduling Console.
SYMEVNTS=YES|NO
If set to YES, it tells the production process, batchman, to report job and job stream status events immediately after the new production day plan has been generated. This key is valid only if LOGGING=KEY.

If set to NO, no report is given.

The default value is NO.
CHSCHED=HIGH|LOW
Indicates which events are to be sent during the job stream lifetime.

During the lifetime of a job stream its status can change several times depending on the status of the jobs it contains.

By using the CHSCHED option you choose how the job stream status change is reported.

If you set it to HIGH, during the job stream lifetime an event is sent any time the status of the job stream changes. Because the intermediate status of the job stream can change several times, several events can be sent, each reporting a specific status change. For example, a job stream may go into the READY state several times during its running because its status is related to the status of the jobs it contains. Each time the job stream goes into the READY state, event 153 is sent.

If you set it to LOW, only the initial job stream state transition is tracked during the job stream lifetime, until the final status is reached. In this way the network traffic of events reporting job stream status changes is heavily reduced. When the CHSCHED value is set to LOW, these are the events that are sent only the first time they occur during the job stream lifetime:

Table 23. CHSCHED event filtered
Event number Event Class Description
153 TWS_Schedule_Started Job stream started
156 TWS_Schedule_Submit Job stream submitted
158 TWS_Schedule_Ready Job stream ready
159 TWS_Schedule_Hold Job stream hold
160 TWS_Schedule_Extern Job stream external
162 TWS_Schedule Job stream properties changed

For final status of a job stream, regardless of the value set for CHSCHED, all events reporting the final status of the job stream are reported, even if the job stream has more than one final status. For example, if a job contained in the job stream completes with an ABEND state, event 151 is sent (Job stream abended). If that job is then rerun and completes successfully, the job stream completes with a SUCC state and event 154 is sent (Job stream completed).

The default value for CHSCHED is HIGH.

EVENT=n[ n ...]
Identifies which events to report in the log_source_file. Event numbers must be separated by at least one space. If omitted, the events reported by default are:
51 101 102 105 111 151 152 155 201 202 203 204 251 252 301

If the EVENT parameter is included, it completely overrides the defaults. To remove only event 102 from the list, for example, you must enter the following:

EVENT=51 101 105 111 151 152 155 201 202 203 204 251 252 301
Note:

Event 51 is always reported each time mailman and batchman are restarted, regardless of the filters specified in the EVENT parameter. If you do not want this event to be sent to the TEC event console, you must manually edit the maestro.fmt file (or, in Windows® environments, the maestro_nt.fmt file) and comment out the following section:

// TWS Event Log
      FORMAT TWS_Reset
      1 %s %s %s*
      event_type 1
      hostname DEFAULT
      origin DEFAULT
      agent_id $1
      software_version $2
      msg PRINTF("TWS has been reset on host %s",hostname)
      severity HARMLESS
      END  
When this section is commented out, the TEC adapter will not send event 51 to the TEC event console.
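
For example, assuming that the // prefix marks comment lines in the FMT file (as in the header line of the section above), the commented-out section would look like this:

    // TWS Event Log
    //      FORMAT TWS_Reset
    //      1 %s %s %s*
    //      event_type 1
    //      hostname DEFAULT
    //      origin DEFAULT
    //      agent_id $1
    //      software_version $2
    //      msg PRINTF("TWS has been reset on host %s",hostname)
    //      severity HARMLESS
    //      END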
FILE=filename
This option is used specifically when interacting with the Tivoli Enterprise Console. Set it to the path and file name of an ASCII log file. Job scheduling events are written to this ASCII log file which is truncated whenever the batchman and mailman processes are restarted, for example at the end of each production day.

or

FILE_NO_UTF8=filename

Use this option instead of the FILE option when you want job scheduling events written in the local language file specified by this parameter.
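
Putting these options together, a minimal BmEvents.conf sketch might look as follows; the path /opt/tws/event.log is only an example and must match your Tivoli Workload Scheduler installation:

    # Report events for the whole network (master domain manager)
    OPTIONS=MASTER
    # Log only jobs and job streams marked as key
    LOGGING=KEY
    SYMEVNTS=YES
    CHSCHED=LOW
    # Default event list with event 102 removed
    EVENT=51 101 105 111 151 152 155 201 202 203 204 251 252 301
    FILE=/opt/tws/event.log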

Job scheduling events

After performing the configuration steps described in Configuring the Tivoli Enterprise Console adapter, you can use the events gathered from the Tivoli Workload Scheduler log file by the Tivoli Enterprise Console logfile adapter to perform event management and correlation with the Tivoli Enterprise Console in your scheduling environment.

This section describes the events that are generated from the information stored in the log file specified in the BmEvents.conf configuration file on the system where you installed the Tivoli Enterprise Console logfile adapter.

An important aspect to be considered when configuring the integration with the Tivoli Enterprise Console using event adapters is whether to monitor only the master domain manager or every IBM Tivoli Workload Scheduler agent.

If you integrate only the master domain manager, all the events coming from the entire scheduling environment are reported because the log file on a master domain manager logs the information from the entire scheduling network. On the Tivoli Enterprise Console event server and TEC event console all events will therefore look as if they come from the master domain manager, regardless of which IBM Tivoli Workload Scheduler agent they originate from. The workstation name, job name, and job stream name are still reported to Tivoli Enterprise Console, but as a part of the message inside the event.

If, instead, you install a Tivoli Enterprise Console logfile adapter on every IBM Tivoli Workload Scheduler agent, events are duplicated: they arrive both from the master domain manager and from each agent. Creating and using a Tivoli Enterprise Console rule that detects these duplicated events, based on job_name, job_cpu, schedule_name, and schedule_cpu, and keeps only the event coming from the log file on the Tivoli Workload Scheduler agent, helps you to handle this problem. The same consideration also applies if you decide to integrate the backup master domain manager, if defined, because the log file on a backup master domain manager logs the information from the entire scheduling network. For information on creating new rules for the Tivoli Enterprise Console, refer to the IBM Tivoli Enterprise Console Rule Builder's Guide. For information on how to define a backup master domain manager, refer to the IBM Tivoli Workload Scheduler: Planning and Installation Guide.

Figure 4 describes how an event is generated. It shows the Tivoli Enterprise Console logfile adapter installed on the master domain manager. This is to ensure that all the information about the job scheduling execution across the entire scheduling environment is available inside the log file on that workstation. You can decide, however, to install the Tivoli Enterprise Console logfile adapter on another workstation in your scheduling environment, depending on your environment and business needs.

Figure 4. Event Generation Flow

The logic that is used to generate job scheduling events is described below.

For some error conditions, an event informing that the alarm condition has ended is also stored in the log file and passed to the TEC event server via the Tivoli Enterprise Console logfile adapter. This kind of event is called a clearing event. It closes on the TEC event console any related problem events.

The following table describes the events and rules provided by Tivoli Workload Scheduler.

The text of the message that is assigned by the FMT file to the event is shown in bold. This text message is the one that is sent by the Tivoli Enterprise Console logfile adapter to the TEC event server and then to the TEC event console. The %s format specifier in the messages indicates a variable. The name of each variable follows the message, in parentheses.

Table 24. Job scheduling events
"TWS process %s has been reset on host %s" (program_name, host_name)
  • Event Class:
  • Event Severity:
  • Event Description:
  • TWS_Process_Reset.
  • HARMLESS.
  • Tivoli Workload Scheduler daemon process reset.
"TWS process %s is gone on host %s" (program_name, host_name)
  • Event Class:
  • Event Severity:
  • Event Description:
  • TWS_Process_Gone.
  • CRITICAL.
  • Tivoli Workload Scheduler process gone.
"TWS process %s has abended on host %s" (program_name, host_name)
  • Event Class:
  • Event Severity:
  • Event Description:
  • TWS_Process_Abend.
  • CRITICAL.
  • Tivoli Workload Scheduler process abends.
"Job %s.%s failed, no recovery specified" (schedule_name, job_name)
  • Event Class:
  • Event Severity:
  • Automated Action (UNIX® only):
  • Event Description:
  • Correlation Activity:
  • TWS_Job_Abend.
  • CRITICAL.
  • Send job stdlist to the TWS_user.
  • Job failed, no recovery specified.
  • If this job had abended more than once within a 24 hour time window, send a TWS_Job_Repeated_Failure event.
"Job %s.%s failed, recovery job will be run then schedule %s will be stopped" (schedule_name, job_name, schedule_name)
  • Event Class:
  • Event Severity:
  • Automated Action (UNIX only):
  • Event Description:
  • Correlation Activity:
  • TWS_Job_Abend.
  • CRITICAL.
  • Send job stdlist to the TWS_ user.
  • Job failed, recovery job runs, and schedule stops
  • If this job had abended more than once within a 24 hour time window, send a TWS_Job_Repeated_Failure event.
"Job %s.%s failed, this job will be rerun" (schedule_name, job_name)
  • Event Class:
  • Event Severity:
  • Automated Action (UNIX only):
  • Event Description:
  • Correlation Activity:
  • TWS_Job_Abend.
  • CRITICAL.
  • Send job stdlist to the TWS_user.
  • Job failed, the job is rerun.
  • If this job had abended more than once within a 24 hour time window, send a TWS_Job_Repeated_Failure event.
"Job %s.%s failed, this job will be rerun after the recovery job" (schedule_name, job_name)
  • Event Class:
  • Event Severity:
  • Automated Action (UNIX only):
  • Event Description:
  • Correlation Activity:
  • TWS_Job_Abend.
  • CRITICAL.
  • Send job stdlist to the TWS_user.
  • Job failed, recovery job is run, and the job is run again.
  • If this job had abended more than once within a 24 hour time window, send a TWS_Job_Repeated_Failure event.
"Job %s.%s failed, continuing with schedule %s" (schedule_name, job_name, schedule_name)
  • Event Class:
  • Event Severity:
  • Automated Action (UNIX only):
  • Event Description:
  • Correlation Activity:
  • TWS_Job_Abend.
  • CRITICAL.
  • Send job stdlist to user TWS_user.
  • Job failed, the schedule proceeds.
  • If this job had abended more than once within a 24 hour time window, send a TWS_Job_Repeated_Failure event.
"Job %s.%s failed, running recovery job then continuing with schedule %s" (schedule_name, job_name, schedule_name)
  • Event Class:
  • Event Severity:
  • Automated Action (UNIX only):
  • Event Description:
  • Correlation Activity:
  • TWS_Job_Abend.
  • CRITICAL.
  • Send job stdlist to the TWS_user.
  • Job failed, recovery job runs, schedule proceeds
  • If this job had abended more than once within a 24 hour time window, send a TWS_Job_Repeated_Failure event.
"Failure while rerunning failed job %s.%s" (schedule_name, job_name)
  • Event Class:
  • Event Severity:
  • Automated Action (UNIX only):
  • Event Description:
  • Correlation Activity:
  • TWS_Job_Abend.
  • CRITICAL.
  • Send job stdlist to the TWS_user.
  • Rerun of abended job abends.
  • If this job had abended more than once within a 24 hour time window, send a TWS_Job_Repeated_Failure event.
"Failure while recovering job %s.%s" (schedule_name, job_name)
  • Event Class:
  • Event Severity:
  • Automated Action (UNIX only):
  • Event Description:
  • Correlation Activity:
  • TWS_Job_Abend.
  • CRITICAL.
  • Send job stdlist to the TWS_user.
  • Recovery job abends.
  • If this job had abended more than once within a 24 hour time window, send a TWS_Job_Repeated_Failure event.
"Multiple failures of Job %s#%s in 24 hour period" (schedule_name, job_name)
  • Event Class:
  • Event Severity:
  • Event Description:
  • TWS_Job_Repeated_Failure.
  • CRITICAL.
  • Same job fails more than once in 24 hours.
"Job %s.%s did not start" (schedule_name, job_name)
  • Event Class:
  • Event Severity:
  • Event Description:
  • TWS_Job_Failed.
  • CRITICAL.
  • Job failed to start.
"Job %s.%s has started on CPU %s" (schedule_name, job_name, cpu_name)
  • Event Class:
  • Event Severity:
  • Event Description:
  • Correlation Activity:
  • TWS_Job_Launched.
  • HARMLESS.
  • Job started.
  • Clearing Event - Close open job prompt events related to this job.
"Job %s.%s has successfully completed on CPU %s" (schedule_name, job_name, cpu_name)
  • Event Class:
  • Event Severity:
  • Event Description:
  • Correlation Activity:
  • TWS_Job_Done.
  • HARMLESS.
  • Job completed successfully.
  • Clearing Event - Close open job started events for this job and auto-acknowledge this event.
"Job %s.%s suspended on CPU %s" (schedule_name, job_name, cpu_name)
  • Event Class:
  • Event Severity:
  • Event Description:
  • TWS_Job_Suspended.
  • WARNING.
  • Job suspended, the until time expired (default option suppress).
"Job %s.%s is late on CPU %s" (scheduler_name, job_cpu)
  • Event Class:
  • Event Severity:
  • Event Description:
  • TWS_Job_Late.
  • WARNING.
  • Job late, the deadline time expired before the job completed.
"Job %s.%s:until (continue) expired on CPU %s", schedule_name, job_name, job_cpu
  • Event Class:
  • Event Severity:
  • Event Description:
  • TWS_Job_Until_Cont.
  • WARNING.
  • Job until time expired (option continue).
"Job %s.%s:until (cancel) expired on CPU %s", schedule_name, job_name, job_cpu
  • Event Class:
  • Event Severity:
  • Event Description:
  • TWS_Job_Until_Canc.
  • WARNING.
  • Job until time expired (option cancel).
(TWS Prompt Message)
  • Event Class:
  • Event Severity:
  • Event Description:
  • TWS_Job_Recovery_Prompt.
  • WARNING.
  • Job recovery prompt issued.
"Schedule %s suspended", (schedule_name)
  • Event Class:
  • Event Severity:
  • Event Description:
  • TWS_Schedule_Susp.
  • WARNING
  • Schedule suspended, the until time expired (default option suppress).
"Schedule %s is late", (schedule_name)
  • Event Class:
  • Event Severity:
  • Event Description:
  • TWS_Schedule_Late.
  • WARNING
  • Schedule late, the deadline time expired before the schedule completion.
"Schedule %s until(continue) expired", (schedule_name)
  • Event Class:
  • Event Severity:
  • Event Description:
  • TWS_Schedule_Until_Cont.
  • WARNING.
  • Schedule until time expired (option continue).
"Schedule %s until (cancel) expired", (schedule_name)
  • Event Description:
  • Event Class:
  • Event Severity:
  • Event Description:
  • TWS_Schedule_Until_Canc.
  • WARNING.
  • Schedule until time expired (option cancel).
"Schedule %s has failed" (schedule_name)
  • Event Class:
  • Event Severity:
  • Event Description:
  • Correlation Activity:
  • TWS_Schedule_Abend.
  • CRITICAL.
  • Schedule abends.
  • If event is not acknowledged within 15 minutes, send mail to TWS_user (UNIX only).
"Schedule %s is stuck" (schedule_name)
  • Event Class:
  • Event Severity:
  • Event Description:
  • Correlation Activity:
  • TWS_Schedule_Stuck.
  • CRITICAL.
  • Schedule stuck.
  • If event is not acknowledged within 15 minutes, send mail to TWS_user (UNIX only).
"Schedule %s has started" (schedule_name)
  • Event Class:
  • Event Severity:
  • Event Description:
  • Correlation Activity:
  • TWS_Schedule_Started.
  • HARMLESS.
  • Schedule started.
  • Clearing Event - Close all related pending schedule, or schedule abend events related to this schedule.
"Schedule %s has completed" (schedule_name)
  • Event Class:
  • Event Severity:
  • Event Description:
  • Correlation Activity:
  • TWS_Schedule_Done.
  • HARMLESS.
  • Schedule completed successfully.
  • Clearing Event - Close all related schedule started events and auto-acknowledge this event.
(Global Prompt Message)
  • Event Class:
  • Event Severity:
  • Event Description:
  • TWS_Global_Prompt.
  • WARNING
  • Global prompt issued.
(Schedule Prompt's Message)
  • Event Class:
  • Event Severity:
  • Event Description:
  • TWS_Schedule_Prompt.
  • WARNING.
  • Schedule prompt issued.
(Job Recovery Prompt's Message)
  • Event Class:
  • Event Severity:
  • Event Description:
  • TWS_Job_Prompt.
  • WARNING.
  • Job recovery prompt issued.
"Comm link from %s to %s unlinked for unknown reason" (hostname, to_cpu)
  • Event Class:
  • Event Severity:
  • Event Description:
  • TWS_Link_Dropped.
  • WARNING.
  • Tivoli Workload Scheduler link to CPU dropped for unknown reason.
"Comm link from %s to %s unlinked via unlink command" (hostname, to_cpu)
  • Event Class:
  • Event Severity:
  • Event Description:
  • TWS_Link_Dropped.
  • HARMLESS.
  • Tivoli Workload Scheduler link to CPU dropped by unlink command.
"Comm link from %s to %s dropped due to error" (hostname, to_cpu)
  • Event Class:
  • Event Severity:
  • Event Description:
  • TWS_Link_Dropped.
  • CRITICAL.
  • Tivoli Workload Scheduler link to CPU dropped due to error.
"Comm link from %s to %s established" (hostname, to_cpu)
  • Event Class:
  • Event Severity:
  • Event Description:
  • Correlation Activity:
  • TWS_Link_Established.
  • HARMLESS.
  • Tivoli Workload Scheduler CPU link to CPU established.
  • Close related TWS_Link_Dropped or TWS_Link_Failed events and auto-acknowledge this event.
"Comm link from %s to %s down for unknown reason" (hostname, to_cpu)
  • Event Class:
  • Event Severity:
  • Event Description:
  • TWS_Link_Failed.
  • CRITICAL.
  • Tivoli Workload Scheduler link to CPU failed for unknown reason.
"Comm link from %s to %s down due to unlink" (hostname, to_cpu)
  • Event Class:
  • Event Severity:
  • Event Description:
  • TWS_Link_Failed.
  • HARMLESS.
  • Tivoli Workload Scheduler link to CPU failed due to unlink.
"Comm link from %s to %s down due to error" (hostname, to_cpu)
  • Event Class:
  • Event Severity:
  • Event Description:
  • TWS_Link_Failed.
  • CRITICAL.
  • Tivoli Workload Scheduler CPU link to CPU failed due to error.
"Active manager % for domain %" (cpu_name, domain_name)
  • Event Class:
  • Event Severity:
  • Event Description:
  • TWS_Domain_Manager_Switch.
  • HARMLESS.
  • Tivoli Workload Scheduler domain manager switch has occurred.
Long duration for Job %s.%s on CPU %s. (schedule_name, job_name, job_cpu)
  • Event Class:
  • Event Severity:
  • Event Description:
  • TWS_Job_Launched.
  • WARNING.
  • If after a time equal to estimated duration, the job is still in exec status, a new message is generated.
Job %s.%s on CPU %s, could miss its deadline. (schedule_name, job_name, job_cpu)
  • Event Class:
  • Event Severity:
  • Event Description:
  • TWS_Job_Ready, TWS_Job_Hold
  • WARNING.
  • If the job has a deadline and the sum of job estimated start time and estimated duration is greater than the deadline time, a new message is generated.
Job %s.%s on CPU %s, could miss its deadline. (schedule_name, job_name, job_cpu)
  • Event Class:
  • Event Severity:
  • Event Description:
  • TWS_Job_Ready, TWS_Job_Hold.
  • WARNING
  • If the job has a deadline and the sum of job estimated/effective start time and estimated duration is greater than the deadline time, a new message is generated.
Start delay of Job %s.%s on CPU %s. (schedule_name, job_name, job_cpu)
  • Event Class:
  • Event Severity:
  • Event Description:
  • TWS_Job_Ready.
  • WARNING
  • If the job is still in ready status, after n minutes a new message is generated. The default value for n is 10.

The default criteria that control the correlation of events and the automatic responses can be changed by editing the maestro_plus.rls file (in UNIX environments) or the maestront_plus.rls file (in Windows environments). These RLS files are created during the installation of Tivoli Workload Scheduler and are compiled, together with the BAROC file containing the event classes for the Tivoli Workload Scheduler events, on the TEC event server when the Setup Event Server for TWS task is run. Before modifying either of these two files, make a backup copy of the original file and test the modified copy in a test environment.

For example, in the last event described in the table you can change the value of n, the number of seconds the job has to be in the ready state to trigger a new message, by modifying the job_ready_open rule defined for the TWS_Job_Ready event class.

rule: job_ready_open : (
    description: 'Start a timer rule for ready',
    event: _event of_class 'TWS_Job_Ready'
        where [
            status: outside ['CLOSED'],
            schedule_name: _schedule_name,
            job_cpu: _job_cpu,
            job_name: _job_name
        ],
    reception_action: (
        set_timer(_event, 600, 'ready event')
    )
).

For example, by changing the value from 600 to 1200 in the set_timer predicate of the reception_action, and then recompiling and reloading the Rule Base, you change from 600 to 1200 the number of seconds the job has to be in the ready state before a new message is triggered.
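
After that change, the reception_action in the rule above would read:

    reception_action: (
        set_timer(_event, 1200, 'ready event')
    )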

Refer to Tivoli Enterprise Console User's Guide and Tivoli Enterprise Console Rule Builder's Guide for details about rules commands.

Job scheduling events format

The integration between Tivoli® Workload Scheduler and the Tivoli Enterprise Console (TEC) provides the means to identify and manage a set of predefined job scheduling events. These are the events that are managed using the Tivoli Enterprise Console logfile adapter installed on the scheduling workstations. They are listed in the following table together with the values of their positional fields. These positional fields are the ones used by the FMT files to define the event structure which, once filled with the information stored for that specific event number in the log file, is sent by the Tivoli Enterprise Console logfile adapter to the TEC event server. For additional information, refer to Job scheduling events.

Table 25. Events formats table
Event Number Event Class Positional Fields Values
51 TWS_Process_Reset Positional Fields for Process Reset Events (only for batchman):
  1. Event number.
  2. Process name.
  3. Local workstation name.
  4. Master workstation name.
101 TWS_Job_Abend Positional Fields for Job Events:
  1. Event number.
  2. Job stream workstation name.
  3. Job stream identifier.
  4. Job name. For jobs submitted with at or batch, if the name supplied by the user is not unique, this is the Tivoli Workload Scheduler-generated name, and the name supplied by the user appears as variable 8 below.
  5. Workstation name on which the job runs.
  6. Job number.
  7. Job state, indicated by an integer: 1 (ready), 2 (hold), 3 (exec), 5 (abend), 6 (succ), 7 (cancl), 8 (done), 13 (fail), 16 (intro), 23 (abenp), 24 (succp), 25 (pend).
  8. Job's submitted (real) name. For jobs submitted with at or batch, this is the name supplied by the user if not unique. The unique name generated by Tivoli Workload Scheduler appears as variable 4 above.
  9. Job user.
  10. Name of the job's script file, or the command it runs. White space is replaced by the octal equivalent; for example, a space appears as \040.
  11. The rate at which an "every" job runs, expressed as hhmm. If every was not specified for the job, this is -32768.
  12. Job recovery status, indicated by an integer: 1 (stop), 2 (stop after recovery job), 3 (rerun), 4 (rerun after recovery job), 5 (continue), 6 (continue after recovery job), 10 (this is the rerun of the job), 20 (this is the run of the recovery job).
  13. An event timestamp. This is the local time on the workstation where the job event occurred. It is expressed as: yyyymmddhhmmss00 (that is, year, month, day, hour, minute, second, hundredths always zeros).
  14. Message number (not zero only for job recovery prompts).
  15. The prompt number delimited by '\t', or zero if there is no prompt.
  16. Job record number. Identifies in the plan the record associated to the job (not for Event number 204).
  17. Job keyflag: 0 (no key flag), 1 (key flag) (not for Event number 204).
  18. Effective start time of the job (not for Event number 204). It has a valid time if it occurred in the event.
  19. Estimated start time of the job (not for Event number 204). It has a valid time if an Estimated Start time has been provided by the user.
  20. Estimated duration of the job (not for Event number 204). Time estimated by the Tivoli Workload Scheduler engine based on statistics.
  21. Deadline in Epoch (not for Event number 204). It has a valid time if a deadline time has been provided by the user.
  22. The prompt text, or Tivoli Workload Scheduler error message.
  23. Original schedule name (for schedules not (yet) carried forward).
  24. Head job record number (different from record number for rerun/every jobs).
  25. Job stream name.
  26. Job stream input arrival time expressed as: yyyymmddhhmm00.
102 TWS_Job_Failed
103 TWS_Job_Launched
104 TWS_Job_Done
105 TWS_Job_Suspended
106 TWS_Job_Submitted
107 TWS_Job_Cancel
108 TWS_Job_Ready
109 TWS_Job_Hold
110 TWS_Job_Restart
111 TWS_Job_Failed
112 TWS_Job_SuccP
113 TWS_Job_Extern
114 TWS_Job_INTRO
115 TWS_Job_Stuck
116 TWS_Job_Wait
117 TWS_Job_Waitd
118 TWS_Job_Sched
120 TWS_Job_Late
121 TWS_Job_Until_Cont
122 TWS_Job_Until_Canc
204 TWS_Job_Recovery_Prompt
119 TWS_Job Positional Fields for Job Property Modified Events:
  1. Event number.
  2. Job stream workstation name.
  3. Job stream identifier.
  4. Job name.
  5. Workstation name on which the job runs.
  6. Job number.
  7. Property type indicated by an integer: 1 (CurrEstComplete), 2 (StartTime), 3 (StopTime), 4 (Duration), 5 (TerminatingPriority), 6 (KeyStatus).
  8. Property value.
  9. Record number.
  10. Key flag.
  11. Head job record number (different from record number for rerun/every jobs).
  12. Job's submitted (real) name. For jobs submitted with at or batch, this is the name supplied by the user if not unique. The unique name generated by Tivoli Workload Scheduler appears as variable 4 above.
  13. Original schedule name (for schedules not (yet) carried forward).
  14. Time stamp.
  15. Job stream name.
  16. Job stream input arrival time expressed as: yyyymmddhhmm00.
151 TWS_Schedule_Abend Positional Fields for Schedule Events:
  1. Event number.
  2. Job stream workstation name.
  3. Job stream identifier.
  4. Job stream state, indicated by an integer: 1 (ready), 2 (hold), 3 (exec), 4 (stuck), 5 (abend), 6 (succ), 7 (cancl).
  5. Record number.
  6. Key flag.
  7. Original schedule name (for schedules not (yet) carried forward).
  8. Time stamp.
  9. Job stream name.
  10. Job stream input arrival time expressed as: yyyymmddhhmm00.
152 TWS_Schedule_Stuck
153 TWS_Schedule_Started
154 TWS_Schedule_Done
155 TWS_Schedule_Susp
156 TWS_Schedule_Submit
157 TWS_Schedule_Cancel
158 TWS_Schedule_Ready
159 TWS_Schedule_Hold
160 TWS_Schedule_Extern
161 TWS_Schedule_CnPend
163 TWS_Schedule_Late
164 TWS_Schedule_Until_Cont
165 TWS_Schedule_Until_Canc
162 TWS_Schedule Positional Fields for Schedule Property Modified Events:
  1. Event number.
  2. Job stream workstation name.
  3. Job stream identifier.
  4. Property type indicated by an integer: 2 (StartTime), 3 (StopTime), 4 (Duration).
  5. Property value.
  6. Record number.
  7. Original schedule name (for schedules not (yet) carried forward).
  8. Time stamp.
  9. Job stream name.
  10. Job stream input arrival time expressed as: yyyymmddhhmm00.
201 TWS_Global_Prompt Positional Fields for Global Prompt Events:
  1. Event number.
  2. Prompt name.
  3. Prompt number.
  4. Prompt text.
202 TWS_Schedule_Prompt Positional Fields for Schedule Prompt Events:
  1. Event number
  2. Job stream workstation name.
  3. Job stream identifier.
  4. Job stream name.
  5. Job stream input arrival time expressed as: yyyymmddhhmm00.
203 TWS_Job_Prompt Positional Fields for Job Prompt Events:
  1. Event number.
  2. Job stream workstation name.
  3. Job stream identifier.
  4. Job name.
  5. Workstation name of the job.
  6. Prompt number.
  7. Prompt text.
  8. Job stream name.
  9. Job stream input arrival time expressed as: yyyymmddhhmm00.
251 TWS_Link_Dropped Positional Fields for Link Dropped/Broken Events:
  1. Event number.
  2. The "to" workstation name.
  3. Link state, indicated by an integer: 1 (unknown), 2 (down due to an unlink), 3 (down due to an error), 4 (up).
252 TWS_Link_Failed
301 TWS_Domain_Manager_Switch Positional Fields for Switch Manager Events:
  1. Event number.
  2. New manager.
  3. The domain name.
  4. Event time stamp.

Integration with TWS 8.2

IBM Tivoli Workload Scheduler uses a configuration file (BmEvents.conf) that needs to be configured to send specific IBM Tivoli Workload Scheduler events to the IBM Tivoli Enterprise Console. This file can also be configured to send SNMP traps (for integration with products that use SNMP events, such as NetView). Events in the configuration file come in the form of numbers, where each number is mapped to a specific class of IBM Tivoli Workload Scheduler event. BmEvents.conf also specifies the name of the application log file that IBM Tivoli Workload Scheduler writes into (event.log). This file is monitored by the IBM Tivoli Enterprise Console adapter, which forwards the events to the event server. When the IBM Tivoli Enterprise Console receives events from IBM Tivoli Workload Scheduler, it evaluates them against a set of rules, processes them, and takes the appropriate action, if needed.

There are some new IBM Tivoli Workload Scheduler events produced in IBM Tivoli Workload Scheduler Version 8.2 that did not exist in earlier versions; they reflect new features available only in Version 8.2. Some of these events are:


NEWS CONTENTS

Old News

IBM - Installing the TWS PLUS Module for TWS v8.2.x

Question:

What are the steps to install the Tivoli Workload Scheduler (TWS) PLUS Module for TWS version 8.2.x?

Answer:

Perform the following steps to install the Tivoli Workload Scheduler (TWS) PLUS Module for TWS version 8.2.x:

1. Mount Cdrom - # mount /dev/cdrom0 /mnt

2. Source the Tivoli environment - # . /etc/Tivoli/setup_env.sh

3. Open the Tivoli Desktop - # tivoli

4. Select Desktop > Install > Install Product

5. Go to the folder called GA

6. Set Media and Close

Installing Link Binaries (on TMR)

1. Select Plus Module Support (Link Binaries) - 3.2.r

2. Select client on which to install

3. Install

4. Continue Install

5. Close


Install TWS Plus for TWS version v8.2.x:

1. Enter value for TWS user name: ____________ (twsuser)

2. Enter value for TWS Installation Directory: ___________ (/opt/tws)

3. Enter value for TWS JSC Installation Directory: ________ (/opt/JSCconsole)

4. Set and Close

5. Choose client on which to install

6. Install

7. Continue Install

NOTE: The installation will fail when installing on AIX from the GA CD. In order to install the TWS Plus Module on AIX, install Plus Module Fixpack 2 or later. It is recommended to always obtain and install the latest Fixpack. The Fixpacks for TWS Plus can be found on the IBM FTP site, in a separate directory listed under the same directory name as the most recent Fixpack.

The file that must be used is PLUSCONFIG-TMA-util. Reinstall once the file is in place.

8. Close


Configure the TEC Server:

Perform the following from the Tivoli Desktop:

1. Select the Tivoli Plus Icon

2. Select Tivoli Plus for Tivoli

3. Select Icon "Setup EventServer for TWS"

4. Select "Add to Existing Rule Base"

a. Existing Rule base name:________ (Tivoli Plus)
b. Name of Event Console to configure: ___________ (root)
c. TEC UI Server Host: ___________ (TMR Hostname)
d. TME Admin login: ___________ (root)


NOTE: If the name of the Event Console is not known, log in to the TEC Console and perform the following:

#tec_console

- Windows > Configuration > Consoles and look for console name
- Windows > Configuration > Consoles > Operators (Should say Root_Hostname-region).
- password : #########
- set and close

NOTE: Look at the EventServer icon on the Desktop, choose Rule Base, and look for Maestro_plus, Maestront_plus, and Maestro_mon. These contain the event definitions.


Integrating TWS network with TMR and Tivoli Plus:

An endpoint or managed node must be created:

Creating an Endpoint:

1. On UNIX, the endpoint is installed from the command line:

- Source the Tivoli environment - # . /etc/Tivoli/setup_env.sh

2. Look at the gateway information to confirm a gateway is installed:

- # wgateway

- # wgateway gatewayname

3. If no gateway is present, install the following:

- #wcrtgate -h hostname -p port# -n gatewayname

4. Install EndPoint from CLI:

- #winstlcf -g hostname_of_gateway+port# -L -d3 -n endpoint_name -P port# -hostname_of_endpoint userid

5. To verify the endpoint is installed:

- wep ls
- wadminep endpoint view_config_info


Configure TWS PLUS for TWS Network

1. Go into the Tivoli Desktop:

#. /etc/Tivoli/setup_env.sh
# tivoli

2. From the Desktop:

- Select Tivoli Plus Icon
- Select Tivoli Plus for Tivoli icon
- Select "Set TWS install options", right click "Run on Selected Subscribers"
- Leave everything as Default except:

a. Check "Display on Desktop" in the Output Destination box
b. Add Endpoint to Selected Task Endpoints

3. Execute:

TWS user name _____________________(maestro)
TWS Installation Directory ____________(/opt/maestro)
JSC Installation Directory _____________(/JSCconsole)

4. Set and Close

5. Right click on any task and choose "Run on Selected Subscribers".


Install TEC Adapter:

1. For TME ADAPTER on ENDPOINT
From Desktop
Go to Hostname-region

- Select Create
- Profile Manager

a. Name/Icon Label: <name>
b. Check the Dataless Endpoint Mode

- Create and Close

2. Double Click Profile Manager Icon that was just created.

- Select Create
- In the "Create Profile" window:

a. Name/Icon Label: <name>
b. Type should be ACP
c. If TYPE is empty :

- Go to Policy Regions
- Properties
- Set Managed Resources
- Choose ACP
- Create and Close

3. Double Click Policy Region

- Select Profile - <name>
- Adapter Configuration Profile: <name>
- Add Entry
- tecad_logfile_aix4-rl (Platform)
- Edit adapter 0, Profile
- Save and Close
- Close

4. Profile Manager Subscribers

- Add endpoint
- Set Subscriptions and Close

Distribute the profile through Policy regions:

1. Open Desktop

2. Go to hostname-region

3. Go to Policy Region:

a. Select endpoint name
b. Click and Drag and Drop from profile to subscriber

This installs the TEC adapter on the endpoint.

4. Close window

5. Leave the Desktop Open

6. Click Tivoli Plus

7. Go into Tivoli Plus for Tivoli

8. Right Click on "Set TWS Install Options"

9. Add Endpoint to Selected Task Endpoint

10. Check the Display on Desktop in Output Destination box

11. Everything else is default

12. Execute

13. Close

14. Close

Recycle the TWS environment:

Login as TWS user

1. #conman

2. %unlink @;noask

3. %stop;wait

4. %shut;wait

5. %start


Configuring the TME Logfile Adapter:

This sets up the TEC adapter so that it can log TWS events (via BmEvents.conf and event.log).

1. Copy BmEvents.conf to BmEvents.old and edit it

a. #cp /Maestrohome/ov/BmEvents.conf /opt/maestro/BmEvents.old
b. Edit BmEvents.conf
c. Set the EVENT and FILE variables (event numbers are space-separated, as documented above)

EVENT=51 101 111 151 152 155
FILE=/opt/maestro/events.log

d. Save changes

2. Stop and restart TWS


Configure ENV for the Endpoint:

1. #. /etc/Tivoli/lcf/1/lcf_env.sh

2. cd /opt/Tivoli/lcf/dat/1

3. vi tecad_logfile.conf

4. add LogSources=/opt/maestro/events.log

5. On TMR

6. cd $BINDIR

7. /usr/local/Tivoli/bin/Generic_unix/TME/PLUS/TWS

Recommended Links

Plus Module User's Guide

[PDF] Plus Module User's Guide

IBM - Installing the TWS PLUS Module for TWS v8.2.x


