My prior three blogs discussed basic aspects of licensing IBM MLC and zOTC software products. This time I am focusing on possible ways to control, and hopefully reduce, MLC software costs. But first, a review of a couple of topics.

First, I am concentrating on the IBM MLC costs related to sub-capacity (sub-cap) pricing, which requires clients to create SCRT (Sub-Capacity Reporting Tool) reports and submit them to IBM every month. Sub-capacity pricing means the monthly run rate is based on how much of the CPU is used during the month, not on the installed capacity of the CPU. Normally, between 4 and 8 sub-capacity products are used on a CPU.

Second, the price for each sub-cap product is based on the peak Rolling 4-Hour Average (R4HA) for that product. For a month with 30 days, there are 720 hours, and SCRT determines the hour with the highest (peak) usage. The values for the other 719 hours do not matter. The net of this is that to reduce MLC costs, clients must reduce usage during the PEAK hour. Reducing usage during the other hours may be beneficial, but it will not reduce the MLC cost.
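
To make the "only the peak hour matters" point concrete, here is a minimal sketch of the idea behind the R4HA peak. This is illustrative arithmetic only, not actual SCRT logic, and the hourly MSU figures are hypothetical:

```python
# Illustrative sketch (not actual SCRT logic): find the peak
# Rolling 4-Hour Average from a series of hourly MSU readings.

def peak_r4ha(hourly_msus):
    """Return the highest average over any 4 consecutive hourly samples."""
    if len(hourly_msus) < 4:
        raise ValueError("need at least 4 hourly samples")
    peak = 0.0
    for i in range(len(hourly_msus) - 3):
        window_avg = sum(hourly_msus[i:i + 4]) / 4
        peak = max(peak, window_avg)
    return peak

# Hypothetical day of usage: one busy 4-hour stretch drives the bill.
usage = [100] * 10 + [800, 850, 900, 850] + [100] * 10
print(peak_r4ha(usage))  # 850.0 -- only this window sets the charge
```

Note how the quiet hours around the busy window contribute nothing to the result, which is exactly why trimming off-peak usage does not lower the bill.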

Next, let’s review some numbers. My experience indicates that for the MLC monthly run rate, sub-cap products contribute 96+% of the total cost, while non-sub-cap products (e.g., FWLC and/or tier-based) contribute 1%-4%. Therefore, the focus has to be on the sub-cap products, which I divide into three functional categories:

  1. z/OS and billable z/OS features: this category typically represents 25%-35% of the MLC cost.
  2. Major Sub-Systems, which include CICS, IMS TM, Db2, IMS DB, and MQSeries: this category typically represents 60%-70% of the MLC cost.
  3. Minor products such as COBOL, PL/I, and NetView: this category typically represents 2%-4% of the MLC cost.

Based on these estimates, clients should concentrate on the first two categories.

z/OS

The peak R4HA for z/OS represents the total CPU utilization for that hour, and the z/OS cost is based on it. For example, assume the CPU utilization was 85.4% for the peak hour on a CPU rated at 1,000 MSUs. This would translate into 854 MSUs of usage for z/OS and each of its billable features. To reduce z/OS costs, a client should consider the following techniques to reduce peak CPU usage:

  1. Offload work from the CPU to some other CPU or to a different platform.
  2. Stop running work/jobs that consume a high amount of CPU during the peak hour. An alternative approach would be to move this work to some non-peak period of the day or month.
  3. The peak hour for many clients is at night, perhaps during the 11PM to 4AM time frame. This is usually when batch work is run. One approach to reduce costs is to flatten the workload, for example, by only running 15 jobs at a time instead of 20 jobs at a time.
  4. In addition to the above, clients can use soft-capping on their machines to lower/flatten the usage. Soft-capping allows limits to be set on any LPAR (defined capacity) and/or a group of LPARs (group capacity). This option controls the MSU consumption of sub-capacity products. The usage, and therefore the MLC rate, for each product will be based on the lower of the cap setting and the actual usage of the product.
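
The z/OS billing arithmetic and the soft-capping rule above can be sketched as follows. The rated capacity, utilization, and cap values are the hypothetical numbers from the example; this is a simplification for illustration, not IBM's billing code:

```python
# Illustrative arithmetic (hypothetical numbers from the example above):
# z/OS is billed on total CPU utilization during the peak R4HA hour.

rated_msus = 1000         # installed capacity of the CPU
peak_utilization = 0.854  # 85.4% utilization during the peak hour

zos_billed = rated_msus * peak_utilization
print(zos_billed)         # 854.0 MSUs for z/OS and each billable feature

# With soft-capping, billing uses the lower of the cap setting
# (defined capacity) and the actual usage.
defined_capacity = 700    # hypothetical LPAR cap in MSUs
billed_with_cap = min(defined_capacity, zos_billed)
print(billed_with_cap)    # 700 -- the cap applies, since usage exceeds it
```

The `min()` at the end is the whole point of soft-capping: the cap puts a ceiling on billed MSUs even when actual demand runs higher.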

Major Sub-Systems

The products that fall into this category are normally run 24 hours a day throughout the entire month. They are active as much of the time as z/OS and the CPU itself. Most clients run these major sub-systems in all the LPARs on a CPU. The key point regarding these products is that cost is based on the peak usage of the LPARs in which they run rather than on how much CPU the individual product uses. For example, assume Db2 runs in all LPARs, using the CPU example above, where the z/OS usage was 854 MSUs. In this case Db2, and each of the other major sub-systems running in all the LPARs, would also be billed at 854 MSUs. Because of this, clients want to reduce the peak hour usage of any or all LPARs where the major sub-systems run. Some considerations are:

  1. Is it necessary to run these major sub-systems in all LPARs? On a machine that has 4 LPARs, is it possible to run them in only 2 or 3? Is it possible to run CICS and/or IMS in only 2 LPARs even though Db2 and/or MQSeries is needed in all 4?
  2. Can these online related sub-systems be removed from LPARs where a lot of batch work is run? Looking at this in a different light, can batch work be removed from LPARs which primarily do online work? The goal is to run batch (non-metered) in different LPARs than where the major online systems (metered) are run.
  3. New LPAR(s) could be set up to run z/OS and batch work but no online systems. As much of the batch work as possible from all the other LPARs would be run in the new LPAR(s). The result is that overall z/OS usage would remain the same, but the usage of the “online” LPARs would be reduced. If this occurred, the cost of all the major sub-systems would decrease.
  4. Again, capping LPARs and/or groups of LPARs can result in lower major sub-system costs.
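
The LPAR-footprint effect behind considerations 1 through 3 can be sketched with simplified arithmetic. The LPAR names and per-LPAR MSU figures are hypothetical, and real SCRT billing uses the combined R4HA of the LPARs rather than a simple sum of fixed numbers, but the principle is the same:

```python
# Illustrative sketch of why the LPAR footprint matters: a sub-cap
# product is billed on the combined peak usage of the LPARs it runs in,
# not on its own CPU consumption. Per-LPAR peak MSUs are hypothetical.

peak_lpar_msus = {"PROD1": 400, "PROD2": 254, "BATCH1": 200}

def billed_msus(lpars_running_product):
    """Combined peak usage of the LPARs where the product is active."""
    return sum(peak_lpar_msus[lpar] for lpar in lpars_running_product)

# Db2 in every LPAR: billed on the full 854 MSUs, matching z/OS.
print(billed_msus(["PROD1", "PROD2", "BATCH1"]))  # 854

# Keep Db2 out of the batch-only LPAR: the bill drops accordingly.
print(billed_msus(["PROD1", "PROD2"]))            # 654
```

This is why moving batch work into its own LPAR and shrinking the sub-systems' footprint reduces their cost even though total CPU usage on the machine is unchanged.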

Minor Products

Although the compilers and NetView contribute only a small amount to the run rate, their usage should be reviewed. COBOL and PL/I are run only when needed, but to reduce their overall cost, avoid running them in LPARs where online usage is high. NetView is probably run 24 hours a day, so the consideration is whether it is needed in all LPARs.

Summary

I described two initiatives that should be considered regarding the IBM MLC run rate. First, reduce the peak hour (R4HA) CPU usage to lower the cost of z/OS. This would also reduce peak hour LPAR usage, which would lower major sub-system costs. Second, reduce the peak hour (R4HA) of the LPARs where major sub-systems are located by separating metered usage (sub-cap) from non-metered usage (e.g., batch work).

Finally, once the peak hour usage has been reduced, the next project is to reduce the new “lower” peak, etc., etc., etc.

It would be interesting to know if clients have been able to change the contents of LPARs by separating batch and online, with resulting savings in the MLC run rate. Does anyone have a success story using this approach?