Performing a parallel upgrade

This section describes how to upgrade your environment using a parallel scenario. The parallel scenario consists of the following procedures:

Parallel 1: Installing a new version 8.5 master domain manager

Install a master domain manager either on the same workstation where the existing master domain manager was installed, or on a different one. See Deciding where to install the new master domain manager and the relational database (RDBMS) for information about the advantages and disadvantages of these options.

See Installing for instructions on how to do this.

Parallel 2: Creating a workstation definition for new master domain manager in old domain

After the installation has completed, use composer in version 8.2.x to create a definition for the new master domain manager as a full status agent in the domain of the old master domain manager. The new workstation definition must be activated by a new plan creation before proceeding with the upgrade.
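As an illustration, the following is a minimal sketch of such a definition (the workstation name NEWMDM, node name, and port are hypothetical; adapt them to your environment):

  CPUNAME NEWMDM
    DESCRIPTION "New version 8.5 master domain manager"
    OS UNIX
    NODE newmdm.mycompany.com
    TCPADDR 31111
    DOMAIN MASTERDM
    FOR MAESTRO
      TYPE FTA
      AUTOLINK ON
      FULLSTATUS ON
  END

You can add the definition with composer add (for example, composer "add newmdm.txt"); it becomes active when the next plan is created, for example at the next JnextDay run.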

Parallel 3D: Manually importing the Mozart data directly - scenario P3D

If the new and the old instances of the master domain manager are on the same system, or you can mount the file system of the old instance on the new instance, perform the following procedures:

Use this procedure to import object data from a previous version into the Tivoli Workload Scheduler version 8.5 database during a parallel upgrade. When the old Tivoli Workload Scheduler instance is on another workstation, you must mount the directory of the version 8.2.x environment on the version 8.5 system. To do this, the two workstations must have the same system byte order. If they do not, you must use the indirect import option (see Parallel 3U: Manually importing the 8.2.x data indirectly from an unlinked system - scenario P3U).

To import the data directly, follow these steps:

  1. On the version 8.5 system, log in as a user that has full access to the database of both the old and the new Tivoli Workload Scheduler environments.
  2. Set the Tivoli Workload Scheduler environment using the tws_env command. See the Tivoli Workload Scheduler: User's Guide and Reference for detailed information about the command.
  3. Use the datamigrate command to import the data directly from the existing Mozart file.

    This step can be performed object type by object type or in a single command for all object types:

    Importing object data directly from Mozart in steps
    The syntax of the command is as follows:
    datamigrate object_type -path TWS_8.2.x_main_dir [-tmppath temp_path]
    

    where:

    object_type
    Is the type of object you are importing. Possible values are:
    • calendars
    • topology workstations
    • parms
    • prompts
    • resources
    • users
    • jobs
    • job streams
    You must run the command for all the object types indicated.
    TWS_8.2.x_main_dir
    Indicates the root directory of the previous Tivoli Workload Scheduler version.
    temp_path
    Is the temporary path where datamigrate stores the files during the migration process. The default is <TWS_home>/tmp in UNIX® systems and <TWS_home>\tmp in Windows® systems.
    Importing object data directly from Mozart in a single command
    The syntax of the command is as follows:
    datamigrate -path TWS_8.2.x_main_dir [-tmppath temp_path]

    where:

    TWS_8.2.x_main_dir
    Indicates the root directory of the previous Tivoli Workload Scheduler version.
    temp_path
    Is the temporary path where datamigrate stores the files during the migration process. The default is <TWS_home>/tmp in UNIX systems and <TWS_home>\tmp in Windows systems.
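
    For example, assuming the old instance's file system is mounted at /mnt/tws82 and using a hypothetical temporary directory /tmp/twsmigr, the single-command form is:

    datamigrate -path /mnt/tws82 -tmppath /tmp/twsmigr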

Parallel 3U: Manually importing the 8.2.x data indirectly from an unlinked system - scenario P3U

If the new and the old instances of the master domain manager are not on the same system, and you cannot mount the file system of the old instance on the new instance, follow these procedures:

P3U-1: Exporting the 8.2.x object data to flat text files

When you are performing a parallel upgrade you must first manually export the old data to flat text files and then import the flat text files into the new database. This step describes how to export the data manually.

The data export is performed by a special version of the composer command, called composer821. The normal 8.2.x version of composer must not be used. The composer821 command is located on the appropriate installation DVD according to the operating system where the instance of Tivoli Workload Scheduler you are upgrading is installed. To export the data using the composer821 command, perform the following steps:

  1. Log in as the <TWS_user>.
  2. Set the Tivoli Workload Scheduler environment using the tws_env command. See the Tivoli Workload Scheduler: User's Guide and Reference for detailed information about the command.
  3. Locate the file on the appropriate installation DVD (see Installation media for details):
    CDn\operating_system\bin\composer821
  4. Copy the file into the directory where the old version 8.2.x composer is installed.
  5. Assign to composer821 the same rights that the old composer has.
  6. Use the composer821 create command to export the data. The syntax of the commands is as follows:
    composer821 create topology_filename from cpu=@
    composer821 create prompts_filename from prompt
    composer821 create calendar_filename from calendar
    composer821 create parms_filename from parms
    composer821 create resources_filename from resources
    composer821 create jobs_filename from jobs=@#@
    composer821 create scheds_filename from sched=@#@
    
    where:
    topology_filename
    Is the name of the file that is to contain the topology data of the Tivoli Workload Scheduler instance you are upgrading (from cpu=@ indicates all workstations, workstation classes, and domains).
    prompts_filename
    Is the name of the file that is to contain the prompts of the Tivoli Workload Scheduler instance you are upgrading (from prompt indicates all prompts).
    calendar_filename
    Is the name of the file that is to contain the calendars of the Tivoli Workload Scheduler instance you are upgrading (from calendar indicates all calendars).
    parms_filename
    Is the name of the file that is to contain the parameters of the Tivoli Workload Scheduler instance you are upgrading (from parms indicates all parameters).
    resources_filename
    Is the name of the file that is to contain the resources of the Tivoli Workload Scheduler instance you are upgrading (from resources indicates all resources).
    jobs_filename
    Is the name of the file that is to contain the jobs of the Tivoli Workload Scheduler instance you are upgrading (from jobs=@#@ indicates all jobs).
    scheds_filename
    Is the name of the file that is to contain the job streams of the Tivoli Workload Scheduler instance you are upgrading (from sched=@#@ indicates all job streams).
    The output files are used in the import step.
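
    For example, assuming a hypothetical export directory /tmp/tws_export, the full export sequence is:

    composer821 create /tmp/tws_export/topology.txt from cpu=@
    composer821 create /tmp/tws_export/prompts.txt from prompt
    composer821 create /tmp/tws_export/calendars.txt from calendar
    composer821 create /tmp/tws_export/parms.txt from parms
    composer821 create /tmp/tws_export/resources.txt from resources
    composer821 create /tmp/tws_export/jobs.txt from jobs=@#@
    composer821 create /tmp/tws_export/scheds.txt from sched=@#@

    The same file names are reused in the import example in P3U-4: Importing object data from exported data files.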
P3U-2: Exporting the 8.2.x Windows user data to text files

The composer821 create option for Windows users exports user details without their passwords. To include the Windows user passwords, follow these steps:

  1. Log in as the <TWS_user>.
  2. Set the Tivoli Workload Scheduler environment using the tws_env command. See the Tivoli Workload Scheduler: User's Guide and Reference for detailed information about the command.
  3. Clean up the Windows user definitions, eliminating users that are no longer valid. The Tivoli Workload Scheduler: User's Guide and Reference describes how to remove user definitions from the database.
  4. Locate the migrutility utility in the following tar file on the appropriate installation DVD (see Installation media for details):
    CDn\operating_system\utilities\migrtool.tar
  5. Extract the tar file into a directory where you want to save the Windows user data.
  6. Use the command as follows:
    migrutility get_users TWS_8.2.x_user_mozart_file users_filename
    where:
    TWS_8.2.x_user_mozart_file
    The complete path to the userdata file, located in <TWS_home>/network/userdata.
    users_filename
    A name of your choice for the output file to be created by migrutility. It includes the encrypted passwords.

The migrutility command extracts the Windows users (and their passwords) from the Tivoli Workload Scheduler network and stores them in users_filename. You will need users_filename to import the Windows users into the RDBMS of Tivoli Workload Scheduler version 8.5.
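
For example, assuming the old instance is installed in /opt/tws82 and reusing the hypothetical export directory /tmp/tws_export:

  migrutility get_users /opt/tws82/network/userdata /tmp/tws_export/users.txt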

P3U-3: Moving the text files to the system where the new master domain manager is installed

Move all the text files created in P3U-1: Exporting the 8.2.x object data to flat text files and P3U-2: Exporting the 8.2.x Windows user data to text files to any directory in the system where the new master domain manager is installed.

P3U-4: Importing object data from exported data files

To perform this step you must have completed P3U-1: Exporting the 8.2.x object data to flat text files, P3U-2: Exporting the 8.2.x Windows user data to text files, and P3U-3: Moving the text files to the system where the new master domain manager is installed.

Perform these steps:

  1. On the new system, log in as a user that has access to the exported object data files and the new Tivoli Workload Scheduler environment.
  2. Set the Tivoli Workload Scheduler environment using the tws_env command. See the Tivoli Workload Scheduler: User's Guide and Reference for detailed information about the command.
  3. Use the datamigrate command to import the data from the dumped files.

    Note:

    The files you import must have been exported with the special composer821 version of the composer command (and, for the Windows users, with the migrutility utility), as described in P3U-1: Exporting the 8.2.x object data to flat text files and P3U-2: Exporting the 8.2.x Windows user data to text files. Files created with the standard 8.2.x composer cannot be used.

    The syntax and order of the commands to use are:

    datamigrate -topology topology_filename [-tmppath temp_path]
    datamigrate -prompts prompts_filename [-tmppath temp_path]
    datamigrate -calendars calendars_filename [-tmppath temp_path]
    datamigrate -parms parms_filename [-tmppath temp_path]
    datamigrate -resources resources_filename [-tmppath temp_path]
    datamigrate -users users_filename [-tmppath temp_path]
    datamigrate -jobs jobs_filename [-tmppath temp_path]
    datamigrate -scheds scheds_filename [-tmppath temp_path]

    where:

    topology_filename
    Is the name of the topology file created by composer821 in the export process.
    prompts_filename
    Is the name of the prompts file created by composer821 in the export process.
    calendars_filename
    Is the name of the calendars file created by composer821 in the export process.
    parms_filename
    Is the name of the parameters file created by composer821 in the export process.
    resources_filename
    Is the name of the resources file created by composer821 in the export process.
    users_filename
    Is the name of the Windows users file created with the migrutility utility in the export process.
    jobs_filename
    Is the name of the jobs file created by composer821 in the export process.
    scheds_filename
    Is the name of the job streams file created by composer821 in the export process.
    temp_path
    Is the temporary path where datamigrate stores the files during the migration process. The default is <TWS_home>/tmp in UNIX systems and <TWS_home>\tmp in Windows systems.
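
    For example, reusing the hypothetical file names from the export steps (and omitting the optional -tmppath), the full ordered sequence is:

    datamigrate -topology /tmp/tws_export/topology.txt
    datamigrate -prompts /tmp/tws_export/prompts.txt
    datamigrate -calendars /tmp/tws_export/calendars.txt
    datamigrate -parms /tmp/tws_export/parms.txt
    datamigrate -resources /tmp/tws_export/resources.txt
    datamigrate -users /tmp/tws_export/users.txt
    datamigrate -jobs /tmp/tws_export/jobs.txt
    datamigrate -scheds /tmp/tws_export/scheds.txt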

Parallel 4: Optionally exporting Tivoli Management Framework user data from the security file

If you have customized user security settings based on Tivoli® Management Framework Administrator IDs rather than user IDs in your security file, you can perform the following steps to transfer the current settings to a file which you can then import into your new Tivoli Workload Scheduler environment.

To extract the Tivoli Management Framework users, perform the following steps:

  1. Log in as the <TWS_user>.
  2. Set the Tivoli Management Framework environment.
  3. Set the Tivoli Workload Scheduler environment using the tws_env command. See the Tivoli Workload Scheduler: User's Guide and Reference for detailed information about the command.
  4. Run the Tivoli Workload Scheduler utility dumpsec to export the user information to a flat text file (input_security_file) as follows:
    dumpsec > input_security_file
    where:
    input_security_file
    Is the text file created by the dumpsec command.
    See the Tivoli Workload Scheduler: User's Guide and Reference for detailed information about the command.
  5. Locate the migrfwkuser utility command in the following tar file on the appropriate installation DVD (see Installation media for details):
    CDn\operating_system\utilities\migrtool.tar
  6. Uncompress the tar file in a directory on the version 8.2.x environment.
  7. On Windows systems only, run the bash command.
  8. Run setup_env.cmd on Windows, or . ./setup_env on UNIX.
  9. Run the migrfwkuser script as follows:
    migrfwkuser -in input_security_file -out output_security_file [-cpu workstation] [-hostname local_hostname]
    where:
    input_security_file
    Is the file created using the dumpsec command in step 4.
    output_security_file
    Is the security file that is created by the migrfwkuser script.
    workstation
    Is the name of the local workstation where the login data added by the tool is defined. If you do not specify a workstation, the data is taken from a localopts file if present in the same directory where the migrfwkuser script is located. If there is no localopts file, the workstation is set to the first 8 characters of the local host name.
    local_hostname
    Is the fully qualified host name of the Tivoli Management Framework users to be extracted. Login data is extracted for users defined with this host name, with this host name and domain name, or with the host name that is valid for all computers. If you do not specify the local host name, migrfwkuser retrieves the host name from the local computer and matches login data for computers with that host name and any domain name.
    Note:

    After you run the command, the output_security_file contains only the framework users and the user definitions of your 8.2.x environment. You must manually merge this information with the new Tivoli Workload Scheduler security settings before you import your final security file into the new environment.
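
    For example, assuming the dumpsec output was saved as /tmp/sec_dump.txt, the local workstation is MASTER1, and the host is twshost.mycompany.com (all hypothetical):

    migrfwkuser -in /tmp/sec_dump.txt -out /tmp/fwk_users.txt -cpu MASTER1 -hostname twshost.mycompany.com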

Parallel 5: Preparing the old security file for switching the manager

For Tivoli Workload Scheduler to switch correctly to the new master domain manager, you must add the new <TWS_user> to the old security file. The new <TWS_user> is the one that you specified when you installed the new master domain manager.

Perform the following steps:

  1. On the old master domain manager, log in as the old <TWS_user> and set the Tivoli Workload Scheduler environment. Add the <TWS_user> of the new master domain manager to the old security file (a sketch follows these steps).
  2. If you have centralized security, distribute the security file. If you do not have centralized security, copy the compiled security file to your newly installed master domain manager, overwriting the version that is there.
  3. Wait for the scheduled JnextDay to distribute the Symphony file.
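
As a sketch of step 1, assuming the old <TWS_user> is tws82 and the new one is tws85 (both hypothetical), the essential change is to include the new user in the LOGON list of a stanza that grants the required access, for example:

  USER MAESTRO
    CPU=@+LOGON=tws82,tws85,root
  BEGIN
    JOB       CPU=@ ACCESS=@
    SCHEDULE  CPU=@ ACCESS=@
    RESOURCE  CPU=@ ACCESS=@
    PROMPT    ACCESS=@
    CALENDAR  ACCESS=@
    CPU       CPU=@ ACCESS=@
    PARAMETER CPU=@ ACCESS=@
    USEROBJ   CPU=@ ACCESS=@
  END

After editing, compile the file with makesec before distributing or copying it.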

Parallel 6: Migrating the run number and global options

Use this procedure to migrate the Tivoli Workload Scheduler run number and globalopts files to the current version.

  1. On the new master domain manager, log in as a user that has full access to the database of both the old and the new Tivoli Workload Scheduler environments.
  2. Set the Tivoli Workload Scheduler environment using the tws_env command. See the Tivoli Workload Scheduler: User's Guide and Reference for detailed information about the command.
  3. Either mount TWS_8.2.x_main_dir on the local system, or copy the globalopts and runmsgno files from the old version 8.2.x environment to a temporary directory on the new version 8.5 system.
  4. Use the optman command to import the installation run number and global options.

    The syntax of the command is as follows:

    optman miggrunnb <input_directory>
    optman miggopts <input_directory>

    where:

    <input_directory>
    is either the mounted V8.2 directory or the temp directory where you copied the files.
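
    For example, if the old instance's file system is mounted at /mnt/tws82 (hypothetical):

    optman miggrunnb /mnt/tws82
    optman miggopts /mnt/tws82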

Parallel 7: Data import - resolving problems

At this point in the parallel upgrade check the quality of the migrated data, as described in Data import - problem resolving. When this process is complete, return to this parallel upgrade procedure, the next step of which is Parallel 8: Switching the master domain manager.

Parallel 8: Switching the master domain manager

To switch from the old master domain manager to the new one, perform the following steps:

  1. On the old master domain manager, run the following command:
    conman switchmgr MASTERDM;new_master_cpu_name 
    where new_master_cpu_name is the name of the workstation where the new master domain manager is installed.
  2. On the new master domain manager, ensure that the carry forward option is set to ALL, by running the following command:
    optman chg cf=ALL
    See Tivoli Workload Scheduler: Administration Guide.
  3. On the new master domain manager, create a plan with 0 extension period that begins at the end of the current plan, by running the following command:
    JnextPlan -from start_time -for 0000
    where start_time is the date and time when the current plan ends. For example, if JnextDay ran and the plan was created from today at 06:00 until tomorrow at 05:59, the start_time of the new plan must be tomorrow at the default time of 06:00 (a worked example follows this list).
  4. On the new master domain manager, reset the carry forward option to the value you assigned before running Step 2.
  5. If you have a final schedule in your old environment and want to continue using it in the new environment, submit the following commands on the new master domain manager:
    1. composer "add FINAL"
    2. conman "cs old_master_cpu_name#final"
      where old_master_cpu_name is the name of the workstation where the old master domain manager is installed.
    3. conman "sbs final"
  6. Because you are adding a new final schedule, it is important that the old final schedule does not run. To avoid this, either delete the old final schedule from the database or set the priority to 0. To delete the old final schedule, run the following command:
    composer "del sched=old_master_cpu_name#final"
    where old_master_cpu_name is the name of the workstation where the old master domain manager resides.
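
As a worked example for step 3, if the current plan ends tomorrow at 05:59 and tomorrow is July 6 (hypothetical date; verify the date format that JnextPlan accepts on your system), you would run:

  JnextPlan -from 07/06/09 0600 -for 0000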

Parallel 9: Building the final security file for the new environment

Version 8.5 introduces new security statements for the event management, reporting features, and variable tables. If you have specific security settings in your 8.2.x environment, these settings must be manually merged with the new settings before you build the final security file to be used in your new environment. The statements you might have to add manually vary depending on your specific security settings.

If you ran the migrfwkuser utility on your old security file, you must merge the information contained in the output_security_file with the new security settings into a single text file.

Perform the following steps:

  1. On the new master domain manager, log in as the new <TWS_user> and set the Tivoli Workload Scheduler environment. Extract the new security file on the new master using the following V8.5 command:
    dumpsec > sec_file
    where sec_file is the text file created by the dumpsec command.
  2. Add the following statements to the sec_file:
    REPORT     NAME=@     ACCESS=DISPLAY
    EVENTRULE  NAME=@     ACCESS=ADD,DELETE,DISPLAY,MODIFY,LIST,UNLOCK
    ACTION     PROVIDER=@ ACCESS=DISPLAY,SUBMIT,USE,LIST
    EVENT      PROVIDER=@ ACCESS=USE
    VARTABLE   NAME=@     ACCESS=ADD,DELETE,DISPLAY,MODIFY,USE,LIST,UNLOCK
  3. Check that the user permissions of the new statements are correct.
  4. If you ran the procedure described in Parallel 4: Optionally exporting Tivoli Management Framework user data from the security file, perform the following:
    1. Transfer the output_security_file you obtained from your old system to the new master domain manager system.
    2. Open the output_security_file and copy into the sec_file the Tivoli Management Framework user statements.
  5. Save your changes to the sec_file.
  6. Build your final security file for your new master domain manager using the V8.5 makesec command:
    makesec sec_file
  7. If you have centralized security, distribute the security file and run JnextPlan -from start_time -for 0000.
  8. If you want to use event-driven workload automation (EDWA), enable it using optman, as sketched below.
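
    A minimal sketch, assuming ed is the optman short name for the event-driven workload automation option (verify the option name with optman ls):

    optman chg ed=YES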

Parallel 10: Customizing the optional final job stream

If you had an old final job stream, whatever it is called, it is now in the new database and refers to the old master domain manager workstation. In addition, if you selected the option to create it when you installed the master domain manager, you also have a new final job stream called FINAL that refers to the new master domain manager.

If your old final job stream was customized or is not called FINAL, you must perform some customization steps. Depending on your situation, perform the following:

If you had a customized final job stream in your database:
  1. Edit the new FINAL job stream with composer or Tivoli Dynamic Workload Console.
  2. View the old final job stream with composer or Tivoli Dynamic Workload Console.
  3. Make the corresponding customizations to the new FINAL job stream.
  4. Save your new FINAL job stream with a name of your choice.
  5. Delete your old final job stream.
If you had a final job stream that was not customized:
  1. Delete your old final job stream with composer or Tivoli Dynamic Workload Console.
  2. If necessary, rename the new FINAL job stream with the name of your old final job stream with composer or Tivoli Dynamic Workload Console.

Parallel 11: Rebuilding the plan

To rebuild the plan, perform the following steps (a combined sketch follows the list):

  1. Using optman, set the carry forward option (cf) to ALL.
  2. Run JnextPlan -from <old_plan_start_date> -for 0000.
  3. Run the submit job stream command for the final job stream.
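
Combining these steps into a sketch, with a hypothetical plan start date and the FINAL job stream name used earlier in this procedure:

  optman chg cf=ALL
  JnextPlan -from 07/06/09 0600 -for 0000
  conman "sbs FINAL"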