An Operational Analysis Primer Part 3: Modeling

by Tom Wilson

1     Introduction

This is part 3 of a 4-part primer on operational analysis. In part 1 ([Wil14a]), I introduced operational analysis and focused on applying it to evaluation at the system level. In part 2 ([Wil14b]), I extended evaluation to the resource level. In this part, I will describe how operational analysis is used for modeling, which is where much of the literature focuses its attention. Part 4 will investigate some other operational analysis topics.

The point of modeling with operational analysis is to predict the performance of a system from assumed values, or ranges of values, for some parameters. In this case, no observations are available, but the laws are known to hold based on observations of actual systems.

2     Basic Modeling

Table 1 shows the system-level parameters and laws from part 1 as well as two new parameters. The resource-level parameters and laws from part 2 are not shown, but are also applicable to modeling situations. Only a few references to resource-level parameters occur in this paper. The new parameters will be discussed in a later section.

[Table 1: System-level parameters and laws from part 1, plus the new modeling parameters discussed below]

First, I will briefly discuss the assumptions involved in operational analysis modeling. Then I will look at types of modeled systems.

2.1     Assumptions

Much of the literature is focused on proving laws or on equating the operational analysis derivation of the laws to the stochastic (i.e., queueing theory) derivation of the laws. In order to simplify derivations, assumptions are made. Sometimes the assumptions are made for convenience and are not realistic. I will not devote a lot of discussion to these assumptions since others have already done so. Usually, you can leverage the results without re-proving their validity.

The flow balance assumption is C = A. This assumption is easy to verify in an evaluation situation by comparing C and A. When C = A, it follows that X = λ. This allows X to be substituted wherever λ appears in the laws. So for Little’s Law N = λ ∗ R, we can also write N = X ∗ R. I noted in part 1 that when A = C then A^O = C^O, but also that when A^O + C^O > 0 there is likely to be error in the service time (S) and the response time (R).¹ So, flow balance itself does not guarantee accuracy.
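
As a quick numeric illustration of the substitution (all values hypothetical):

```python
# Hypothetical values; flow balance (C = A) is assumed, so X = lambda.
arrival_rate = 50.0    # lambda, requests per second (assumed)
response_time = 0.2    # R, seconds (assumed)

throughput = arrival_rate                  # X = lambda under flow balance
n_in_system = throughput * response_time   # Little's Law: N = X * R
print(f"N = {n_in_system:.1f}")            # 10.0 requests in the system
```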

Another common assumption is “(A − C)/C is small”. This allows A ≠ C, but it states that the error will be negligible. I also noted in part 1 that the magnitude of error is likely to be influenced by the magnitude of R/T. But another common assumption is “T is large” (i.e., much larger than R); this minimizes the error. [LZGS84] discusses most of the following assumptions, which I will only briefly mention:

1. Service center flow balance: This is the application of the flow balance assumption to each resource (Ck = Ak ).
2. One-step behavior: The only observable state changes result from single requests either entering the system, moving between pairs of resources in the system, or exiting from the system.
3. Device homogeneity: The output rate of a resource is determined completely by its queue length and is otherwise independent of the system’s state.
4. Routing homogeneity: The routing frequencies for a given total load are independent of the system’s state.
5. External arrival homogeneity: Arrival times are independent of the system state.
6. Service time homogeneity: The rate of completions from a resource is independent of the number of requests at that resource and independent of the system state. This is a stronger assumption than device homogeneity.
7. Operational connectedness: Each resource is visited at least once by some request during the observation period.
8. Class independence: For models with multiple classes (see section 3.2), the scheduling discipline used at a resource is independent of the class.

The first five assumptions are sufficient for the system to be separable, which means that each resource can be evaluated independently of the others. When the device homogeneity assumption is true, a resource has load dependent behavior. An example of this is a delay center (which will be introduced later): the more requests that are at the delay center, the higher the completion rate. When the service time homogeneity assumption is true, a resource has load independent behavior.
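
The following small sketch illustrates load dependent behavior; the linear rate function is my own simplifying assumption for the delay center, not something taken from the laws above:

```python
# Illustrative load dependent behavior of a delay center (the linear rate
# function is an assumption made for illustration): if each of the n resident
# requests finishes after an average delay Z, independently of the others,
# the aggregate completion rate grows with n.
def delay_center_completion_rate(n: int, z: float) -> float:
    """Aggregate completion rate with n resident requests and mean delay z."""
    return n / z

for n in range(1, 5):
    print(n, delay_center_completion_rate(n, z=2.0))
```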

[LZGS84] discusses the implications of these assumptions in detail, especially with respect to how the accuracy of solution algorithms might be affected. [LZGS84] also discusses extensions of most of the assumptions when multiple classes exist.

In the upcoming subsections, I will introduce different system types. Another assumption is that certain parameters are known based on the system type. For an open system, λ is known. For a closed system, M and Z are known.

2.2     Open Systems

An open system has an unlimited source of work to execute. Figure 1 illustrates the open system using the computing system from part 2 of this series. All of the parameters and laws in part 1 and part 2 of this series are applicable to the open system view.

[Figure 1: An open system, illustrated with the computing system from part 2]

[LZGS84] categorizes systems from a workload perspective: the open system has a transaction workload, with the model input parameter being the arrival rate λ. Given λ, other derived parameters are computed. When evaluating a real system, X would be derived from the observed parameters. When modeling, there are no observations. The flow balance assumption gives us X = λ. Other desired parameters are defined in terms of other unknown parameters, which can be assigned values in “what-if” scenarios.

¹ A^O is the number of orphan arrivals and C^O is the number of orphan completions.
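
As a small “what-if” sketch for an open model, assuming per-resource service demands D_k (a resource-level parameter from part 2) and the relation U_k = X ∗ D_k from the resource-level laws, together with X = λ; the arrival rate and demand values are hypothetical:

```python
# A "what-if" for an open model: assume an arrival rate and per-resource
# service demands D_k, then compute utilizations with U_k = X * D_k, X = lambda.
arrival_rate = 8.0                      # lambda, requests per second (assumed)
demands = {"cpu": 0.05, "disk": 0.10}   # D_k, seconds per request (assumed)

for resource, demand in demands.items():
    utilization = arrival_rate * demand
    print(f"{resource}: U = {utilization:.2f}")
# Output: cpu U = 0.40, disk U = 0.80; the disk saturates first as lambda grows.
```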

2.3     Closed Systems

A closed system has a limited source of work to execute. However, the source of work can generate new work when the previous work is complete. Figure 2 illustrates the closed system using the example system from part 2 of this series. The terminals are where the users interact with the system. The terminals could be anything that generates requests (like a script). From a notation standpoint, the collection of terminals is called a delay center. Some sources describe the population of users using the parameter N; others use M. I prefer M, and I will explain why.

[Figure 2: A closed system: terminals (a delay center) generating requests for the computing subsystem]

I want to highlight the change in the level of detail between an open and a closed system. In Figure 1, the “system” is the computing system. In Figure 2, that same computing system is a subsystem. The “system” in Figure 2 has no inputs or outputs. Therefore, it has no arrivals and completions and, consequently, no arrival rate and no throughput. The operational analysis parameters will apply to the computing subsystem. One of those parameters is N. It should reflect the load in the computing subsystem. That is why it is better to use M to represent the population of the closed system. Parameters M and N may not be equal; I will highlight the relationship between M and N in the upcoming subsections. The new think time parameter Z will also apply to the “user” system.

A key aspect to modeling closed systems is that the desired derived parameters are a function of the model input parameter M . So rather than denoting the response time simply by R, the response time is a function of M : R(M ). This is just a reminder that the parameter is dependent upon M . If you change M , R is also likely to change. From an evaluation perspective, R would just be computed for the observation period as described in part 1. In fact, M could be derived as well. But, here, we are modeling.

Closed systems are categorized as batch, interactive, or mixed. Some operational laws (i.e., equations) differ based on the category. Solving closed models is more difficult than solving open models because closed models have feedback. A technique called Mean Value Analysis (MVA) is used to solve the equations; typically, software is needed to perform MVA.
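
The sketch below is a minimal implementation of exact single-class MVA, assuming load-independent queueing resources described by service demands D_k, a population M, and an optional think time Z; the recursion follows the standard presentation in [LZGS84], and the demand values are hypothetical:

```python
# Exact single-class MVA for a closed model (a minimal sketch).
# Assumptions: load-independent queueing resources with service demands D_k,
# population M, and optional think time Z; demand values are hypothetical.
def mva(demands, population, think_time=0.0):
    queue = [0.0] * len(demands)                  # Q_k(0) = 0
    throughput = response = 0.0
    for m in range(1, population + 1):
        # Residence time per resource: R_k(m) = D_k * (1 + Q_k(m - 1))
        residence = [d * (1.0 + q) for d, q in zip(demands, queue)]
        response = sum(residence)                 # R(m)
        throughput = m / (think_time + response)  # X(m) = m / (Z + R(m))
        queue = [throughput * r for r in residence]   # Little's Law per resource
    return throughput, response

X, R = mva(demands=[0.05, 0.10], population=10, think_time=2.0)
print(f"X(10) = {X:.2f} requests/s, R(10) = {R:.2f} s")
```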

2.3.1     Batch Systems

In a batch system, the source of work creates new work immediately after results from previous work are obtained. Unfortunately, slightly different definitions of batch systems exist in the literature. [DB78] defines a batch system as a closed system where a new job is submitted as one completes. The model’s input parameter is said to be X (assumed to be equal to λ), even though the related discussion says that a closed system model’s input parameter should be M. This becomes more confusing in the upcoming discussion of mixed systems. [LZGS84] defines a batch system similarly (although the workload is called “batch”), but states that the model’s input parameter is M. Because no users spend time outside of the computing system, the load N = M.

2.3.2     Interactive Systems

In an interactive system, the source of requests is a group of users at terminals. When results are received, some time is spent “thinking” before the new work is submitted. The average amount of time spent thinking by all users is the think time Z .

[LZGS84] refers to this system type as a terminal workload with the model input parameters being the population M and think time Z . This is a source that uses N for the population and, therefore, must introduce another symbol (Q) for the computing system’s load (I will continue to use M and N rather than N and Q).

Little’s Law (N = X ∗ R) can be used to derive the response time law for an interactive system. In this case, N ← M and R ← R + Z. The substitution for R is made because R + Z is the time users spend in the “think-and-wait” cycle as they interact with the computing system. The resulting equation is M = X ∗ (R + Z), and the interactive response time law follows: R = M/X − Z.

[LZGS84, p. 61] derives an expression for N involving M : N = M − Z ∗ X . When Z = 0 (i.e., a batch workload), N = M and R = M/X .
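
A quick numeric check of these two equations (all values hypothetical):

```python
# Interactive response time law with hypothetical values.
M = 20       # users at terminals (assumed)
Z = 5.0      # think time, seconds (assumed)
X = 2.5      # throughput, requests per second (assumed)

R = M / X - Z          # interactive response time law: 3.0 seconds
N = M - Z * X          # load inside the computing subsystem: 7.5 requests
print(f"R = {R:.1f} s, N = {N:.1f} (out of M = {M} users)")
```

Note that the result is consistent with Little’s Law applied to the subsystem: N = X ∗ R = 2.5 ∗ 3.0 = 7.5.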

2.3.3     Mixed Systems

In [DB78], a mixed system has two workloads that compete for the computing system’s resources. The “open system” workload (called “batch”) has workload parameter X (or λ). The “closed system” workload (called “interactive”) has workload parameters M and Z. The computing system’s derived parameters are a function of all three of these input parameters. Note the terminology clash: batch here is “open”, but, when discussed by itself, a batch system is “closed”.

[LZGS84] introduces mixed systems when discussing multiple class models (see section 3.2). All three possibilities (transaction, batch, and interactive) are allowed.

3       Advanced Modeling Topics

Several advanced modeling topics are covered in the literature; they will only be briefly mentioned here. These topics are load dependent behavior, multiple class models, and asymptotic parameter bounds. They are followed by a brief comparison between operational analysis and queueing theory.

3.1     Load Dependent Behavior

Load dependent behavior was defined when discussing assumptions in section 2.1. Basically, the output rate of a resource is dependent upon the number of queued requests. Both [DB78] and [LZGS84] discuss this topic at length and provide several examples.

3.2     Multiple Class Models

A multiple class model defines C customer classes and takes an array of parameters as input: for open models, λc ; for closed models, Mc and Zc (1 ≤ c ≤ C ). Unfortunately, the symbol “C ” is overused, but this is not really an issue since completions are not observed when modeling and customer classes are not necessary when evaluating. At the resource level, a second subscript is added to denote the resource number (e.g., λc,k ). Arrays of parameters are output from the models (e.g., Rc ). Aggregated system or resource parameters can be computed from the arrays of class parameters.
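
As a small sketch of the kind of aggregation meant here, using the common conventions of summing class throughputs and weighting class response times by throughput (the values are hypothetical):

```python
# Aggregating per-class model outputs into system-level parameters.
# Hypothetical per-class throughputs X_c and response times R_c.
X_c = [4.0, 1.0]    # per-class throughputs, requests per second
R_c = [0.5, 2.0]    # per-class response times, seconds

X_total = sum(X_c)                                       # system throughput
R_avg = sum(x * r for x, r in zip(X_c, R_c)) / X_total   # throughput-weighted R
print(f"X = {X_total:.1f} requests/s, R = {R_avg:.2f} s")
```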

3.3     Asymptotic Parameter Bounds

[LZGS84] discusses the details of asymptotic parameter bounds. The basic idea is that when computing desirable parameters (e.g., R or X ) is difficult, the parameters can often be asymptotically bounded.

For a transaction workload (i.e., an open system), the focus is on when a bottleneck appears. This condition will bound the throughput above and response time below. Table 2 defines the bounding equations without further discussion. Dmax refers to the demand of the resource with the highest utilization (in an open system, this is the bottleneck). For closed systems (i.e., batch and terminal workloads), the population M may bound performance before a bottleneck does. So, the equations in Table 2 involve M and Dmax . However, both parameters are bounded above and below.

[LZGS84] also has important graphs for the parameters and their bounds which help communicate what the equations are saying more effectively. Those graphs are not repeated here.

[Table 2: Asymptotic bounding equations for throughput and response time]
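
The sketch below computes the standard asymptotic bounds for a closed model (following [LZGS84]); D denotes the total demand (the sum of the D_k), and the demand and think time values are hypothetical:

```python
# Standard asymptotic bounds for a closed model (batch when Z = 0), following
# [LZGS84]. The per-resource demands D_k and the think time are hypothetical.
demands = [0.05, 0.10]        # D_k, seconds per request (assumed)
D = sum(demands)              # total demand
D_max = max(demands)          # demand at the eventual bottleneck
Z = 2.0                       # think time, seconds (set to 0.0 for batch)

for M in (1, 5, 20, 50):
    x_hi = min(1.0 / D_max, M / (D + Z))   # optimistic throughput bound
    x_lo = M / (M * D + Z)                 # pessimistic throughput bound
    r_lo = max(D, M * D_max - Z)           # optimistic response time bound
    r_hi = M * D                           # pessimistic response time bound
    print(f"M={M:2d}: {x_lo:.2f} <= X <= {x_hi:.2f}, "
          f"{r_lo:.2f} <= R <= {r_hi:.2f}")
```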

3.4     Comparison to Queueing Theory

There is a lot of commonality between operational analysis and queueing theory (also called “stochastic modeling”). Yet, the differences appear to be significant, even though much effort has been spent trying to close the gaps. I cannot give this topic the attention it deserves since it tends to get bogged down in theory, which I do not have sufficient knowledge of.

[Den04] provides a good, short history of the conflict between operational analysis and queueing theory. This technical report may be hard to find, but a web search should locate it. [HL84] gives a less revealing comparison between the two fields.

The main takeaways on this topic are simple. Both operational analysis and queueing theory attempt to model computing systems. Both make assumptions that might not always be true. Both approaches can have errors in some of their models. It is generally agreed that operational analysis is simpler to understand, and it is often used as a starting point for queueing theory. Queueing theory probably has a larger community studying it because of its ties to other probability theory topics.

I certainly have used operational analysis from an evaluation perspective and I do not think that queueing theory has an equivalent concept.

4     Conclusions

This paper has only briefly discussed modeling using operational analysis. Operational analysis is a well-established and powerful discipline. Some modeling concepts are simple to understand; others, particularly the implications of the assumptions, require time to comprehend. Knowing when a model is accurate, an estimate, or simply a bound may not be straightforward for the beginner. As has been noted in other parts of this series, terminology can vary in the literature. This is even more apparent when the focus is modeling.

In the last part of this series, some atypical operational analysis concepts will be presented.

Bibliography

[DB78] Peter J. Denning and Jeffrey P. Buzen. “The Operational Analysis of Queueing Network Models”. Computing Surveys, 10(3):225–261, September 1978.

[Den04] Peter J. Denning. “Operational Analysis”. Technical report, Naval Postgraduate School, Computer Science Department, July 2004.

[HL84] Philip Heidelberger and Stephen S. Lavenberg. “Computer Performance Evaluation Methodology”. IEEE Transactions on Computers, C-33(12):1195–1220, December 1984. Reprinted in Performance Modeling for Computer Architects, C. M. Krishna, Wiley-IEEE Computer Society, 1995.

[LZGS84] Edward D. Lazowska, John Zahorjan, G. Scott Graham, and Kenneth C. Sevcik. Quantitative System Performance: Computer System Analysis Using Queueing Network Models. Prentice Hall, 1984.

[Wil14a] Tom Wilson. “Operational Analysis Primer–Part 1: System-Level Evaluation”. CMG MeasureIT, Issue #2, April 2014.

[Wil14b] Tom Wilson. “Operational Analysis Primer–Part 2: Resource-Level Evaluation”. CMG MeasureIT, Issue 14.3, 2014.