An Operational Analysis Primer Part 4: Additional Topics


by Tom Wilson

 

Introduction

This is part 4 of a 4-part primer on operational analysis. In part 1 ([Wil14b]), I introduced operational analysis and focused on applying it to evaluation at the system level. In part 2 ([Wil14c]), I extended evaluation to the resource level. In part 3 ([Wil14d]), I described how operational analysis is used for modeling. In this part, I will investigate some topics that I have not seen in the literature.

While working on this series, I created some small examples to help make sure I understood the material (there is nothing like trying to teach something to someone else). Along the way, I defined a few things that I have not seen elsewhere. Of course, they are minor compared to the material previously presented.

Evaluating Partly-Open Systems

[SWHB06] discusses a partly-open system, but not from an operational analysis perspective. A partly-open system allows users to arrive and depart like an open system, but to stay for a while like a closed system. Figure 1 illustrates the partly-open system, an extension of the closed-system model from part 3.

[Figure 1: the partly-open system]

 

Let's consider the task of evaluating a partly-open transaction system. We want to know (1) the user population at the terminals, (2) the user load on the system, and (3) the average user think time. We will only need the system-level parameters and appropriate instrumentation.
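
These three quantities are tied to the measured throughput X and response time R by the interactive response time law, N = X(R + Z). A minimal sketch of the arithmetic, using hypothetical measured values rather than the numbers in Figure 2:

    # Interactive response time law: N = X * (R + Z), so Z = N / X - R.
    # All values below are hypothetical measurements, for illustration only.

    X = 0.5   # system throughput (transactions per second)
    R = 1.0   # average response time (seconds)
    N = 3.0   # average user population at the terminals

    Z = N / X - R   # average think time implied by the law
    print(f"average think time Z = {Z:.1f} seconds")   # prints 5.0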

I will not investigate this from a modeling perspective, but from an evaluation perspective. I will use an example to illustrate the concepts---most of which I have already presented in earlier parts.


Figure 2 provides example measurements for clarity. Five users visit the system during the 40-second observation period. The observation period is kept short to keep the example simple; the durations are obviously unrealistic, but the concepts are still valid. Each request to the computing system (i.e., a transaction) is shown as a box with its response time inside. Login and logout transactions are one second in duration, and together they delimit a user's session. The time between a user's transactions is think time. Below the user timeline are several parameters and their values over time; for many parameters, only some of the values during the interval are shown.

[Figure 2: example measurements for a partly-open system]
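
To show concretely how measurements like those in Figure 2 become the parameters we want, here is a hedged sketch. The sessions below are invented for illustration (Figure 2's actual values differ); each session is recorded as its login start, its logout end, and its transaction count, with every transaction (including login and logout) taking one second:

    # Evaluating a partly-open system from a hypothetical session trace.
    # Each entry is (login_start, logout_end, transaction_count).

    T = 40.0  # observation period (seconds)

    sessions = [
        (0.0, 12.0, 6),
        (5.0, 20.0, 7),
        (8.0, 30.0, 9),
        (15.0, 35.0, 8),
        (22.0, 40.0, 7),
    ]

    # User load on the system: completed transactions per second, X = C / T.
    C = sum(n for _, _, n in sessions)
    X = C / T

    # User population at the terminals: area under the users-present curve,
    # averaged over the observation period.
    N = sum(end - start for start, end, _ in sessions) / T

    # Average think time: each session's duration minus its transaction
    # time, spread over the gaps between consecutive transactions.
    think = sum((end - start) - n for start, end, n in sessions)
    gaps = sum(n - 1 for _, _, n in sessions)
    Z = think / gaps

    print(f"X = {X:.3f} tx/s, N = {N:.2f} users, Z = {Z:.2f} s")

Note that over a finite window these measured values need not satisfy the interactive response time law exactly: sessions straddling the window boundaries, and the absence of a think time after each user's final transaction, introduce small discrepancies.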

 

Unofficial Extensions

As I worked on my recent project ([Wil14a]), I had operational analysis questions that I did not see covered in the literature. Does a workload stress a system? How can I quantify how much a workload stresses a system? This line of thought follows a discussion in part 1 that the input to a system can itself be the bottleneck. So, how can I tell if a workload is a bottleneck? If a workload is the bottleneck, perhaps the system is over-engineered... this is not necessarily true, but it is possible. (I am certainly ignoring the idea that spare capacity might be necessary for future growth.) Does a system handle a workload efficiently? How can I express that efficiency?

I am not sure I have fully answered these questions in this paper, but I have at least laid some groundwork toward possible answers. Figure 3 defines a simple multiple-server system that I will use for the upcoming discussion. The system has one resource consisting of p servers (i.e., processors). The servers share one queue, where arrivals wait if all servers are busy.

[Figure 3: a simple multiple-server system]
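
To make the upcoming examples easy to check, here is a minimal simulation sketch of this system. It assumes exactly one second of service per request and FIFO dispatch from the single queue; the simulate function and its interface are my own invention for illustration:

    import heapq

    def simulate(arrival_times, p, service_time=1.0):
        """Simulate a single-queue, p-server system with FIFO dispatch.

        Returns one (arrival, start, completion) tuple per request,
        assuming every request needs the same fixed service time.
        """
        free_at = [0.0] * p          # when each server next becomes free
        heapq.heapify(free_at)
        results = []
        for a in sorted(arrival_times):
            earliest = heapq.heappop(free_at)   # soonest-available server
            start = max(a, earliest)            # wait in queue if all busy
            heapq.heappush(free_at, start + service_time)
            results.append((a, start, start + service_time))
        return results

    # Example (b): 8 simultaneous arrivals on 4 servers. Expect 4 requests
    # to complete at t=1 and the 4 queued requests to complete at t=2.
    for a, s, d in simulate([0.0] * 8, p=4):
        print(f"arrive {a:.0f}s, start {s:.0f}s, done {d:.0f}s")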

I created several examples for the system above; they are shown in Figure 4. In these examples, I allowed requests to arrive at the same time. For the purist, you can assume the requests were spaced apart by some minuscule interval that disappears when rounded. Four systems are considered in the figure: one has 8 servers, one has 4 servers, one has 2 servers, and one has only 1 server. Colors reflect the number of servers: green for 8, blue for 4, and two further colors (shown in the figure) for the 2- and 1-server systems.

[Figure 4: example workloads against the 8-, 4-, 2-, and 1-server systems]

 

So, let's look at some of these in more detail. Example (a) has 8 simultaneous arrivals, and the 8 servers handle them in parallel in one second. Example (b) has 8 simultaneous arrivals, and the 4 servers handle 4 requests in one second and 4 requests in the next second. The latter 4 requests are queued for the first second. Example (c) illustrates how the two-server system handles the 8 simultaneous arrivals; example (d) shows a single-server system processing 8 simultaneous arrivals.

Example (e) has 4 simultaneous arrivals, and the 4 servers handle them in parallel in one second. Four more requests arrive in the next second and are handled. The 8-server situation is not shown because it is the same as the 4-server situation. Examples (f) and (g) show the other two cases where 4 requests arrive in consecutive seconds. Examples (h) and (i) illustrate two cases where 2 requests arrive for each of 4 consecutive seconds. Finally, example (j) shows the case where 1 request arrives for each of 8 consecutive seconds.
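Building on the simulate sketch above (same assumptions), the arrival patterns just described can be tabulated in a few lines. The mapping of letters to patterns is my reading of the walkthrough of Figure 4, not the figure itself:

    # Requires simulate() from the earlier sketch.
    # Each entry: example letter -> (arrival times, number of servers).
    scenarios = {
        "a": ([0] * 8, 8),
        "b": ([0] * 8, 4),
        "c": ([0] * 8, 2),
        "d": ([0] * 8, 1),
        "e": ([0] * 4 + [1] * 4, 4),
        "f": ([0] * 4 + [1] * 4, 2),
        "g": ([0] * 4 + [1] * 4, 1),
        "h": ([0, 0, 1, 1, 2, 2, 3, 3], 2),
        "i": ([0, 0, 1, 1, 2, 2, 3, 3], 1),
        "j": (list(range(8)), 1),
    }

    for name, (arrivals, p) in scenarios.items():
        res = simulate(arrivals, p)
        mean_r = sum(done - arr for arr, _, done in res) / len(res)
        finish = max(done for _, _, done in res)
        print(f"({name}) p={p}: mean response {mean_r:.2f}s, "
              f"all done by {finish:.0f}s")

A run like this also makes the duplicates summarized in Table 1 easy to spot: two scenarios are duplicates when their per-request completion times coincide.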

Table 1 summarizes the scenarios and highlights which ones are duplicates of others. In examples where a system with more servers duplicates one with fewer (e.g., example (e)), some of the servers are simply idle.

[Table 1: summary of the example scenarios and their duplicates]

[Table 2]

[Table 3]

[Table 4]

Conclusions

The final part of this 4-part operational analysis series investigated two diverse topics. The first was evaluating a partly-open system, which combined many of the concepts from part 1 with the system-type discussion from part 3. The point of this investigation was to highlight the differences between evaluation and modeling. Evaluation gathers measurements and, based on observable laws, computes many parameters that describe the system. Modeling assumes a few parameters and derives the others from those same laws, even though nothing is actually observed for the modeled system.

The second topic dealt with some extensions that I defined while trying to better understand our system and its test workloads. To my knowledge, these extensions are not found in the literature. They certainly require significant investigation to determine whether they have any real value.

Oh well, it was fun.

Bibliography

[SWHB06] Bianca Schroeder, Adam Wierman, and Mor Harchol-Balter. “Open Versus Closed: A Cautionary Tale”. In Networked Systems Design and Implementation '06, pages 239-252, 2006. http://www.cs.caltech.edu/~adamw/papers/openvsclosed.pdf.
[Wil14a] Tom Wilson. “My Great Performance Testing Project”. CMG MeasureIT, Issue 14.1, February 2014.
[Wil14b] Tom Wilson. “Operational Analysis Primer—Part 1: System-Level Evaluation”. CMG MeasureIT, Issue 14.2, April 2014.
[Wil14c] Tom Wilson. “Operational Analysis Primer—Part 2: Resource-Level Evaluation”. CMG MeasureIT, Issue 14.3, 2014.
[Wil14d] Tom Wilson. “Operational Analysis Primer—Part 3: Modeling”. CMG MeasureIT, Issue 14.4, 2014.