Conference Proceedings: 2011

Late Breaking: Measuring Processor Utilization in Windows and Windows Applications
Mark Friedman

This session discusses the legacy, sampling-based technique for measuring processor utilization in Windows. This technique is efficient and generally adequate for capacity planning, but it lacks the precision performance engineers require for application optimization and tuning, particularly over small measurement intervals. The session then introduces newer, event-driven techniques for measuring processor utilization in Windows. The event-driven approaches are distinguished by far greater accuracy, enabling reconstruction of the precise path that threads, processes and processors take when they execute. Gathering event-driven measurements entails significantly higher overhead, but measurements indicate this overhead is well within acceptable bounds on today's high-powered server machines.
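
A minimal sketch of the distinction in Python (the busy intervals and tick length below are illustrative, not taken from the session): tick-based sampling charges a whole tick to whatever is running when the tick fires, while an event-driven measure integrates exact busy time from context-switch timestamps.

    # Sampled vs. event-driven CPU utilization over one short interval.
    TICK = 0.0156            # typical Windows clock tick, ~15.6 ms
    WINDOW = 0.080           # 80 ms measurement interval

    # Hypothetical (start, end) times in seconds when the CPU was busy.
    busy = [(0.000, 0.004), (0.020, 0.023), (0.040, 0.051), (0.060, 0.062)]

    # Event-driven: integrate exact busy time from context-switch events.
    event_util = sum(end - start for start, end in busy) / WINDOW

    # Sampling: at each tick, charge a whole tick if the CPU is busy then.
    ticks = int(WINDOW / TICK)
    sampled_busy = sum(TICK for i in range(ticks)
                       if any(start <= i * TICK < end for start, end in busy))
    sampled_util = sampled_busy / WINDOW

    print(f"event-driven: {event_util:.1%}, sampled: {sampled_util:.1%}")

Over long intervals the sampling errors average out, which is why the legacy counters remain adequate for capacity planning.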

Download handouts

Big Data, New Physics, and Geospatial Super-Food
Jeff Jonas, IBM Entity Analytics Group

When large collections of data come together, very exciting and somewhat unexpected things happen. As data grows, the quality of predictions improves (fewer false positives, fewer false negatives), poor-quality data starts to become helpful, and computation can actually get faster as the number of records grows. Now add to this the space-time-travel data about how people move that is being created by billions of mobile devices, and what becomes computable is outright amazing. As it turns out, geospatial data is analytic super-food.

Download paper

Cyber Threats - 2011: Are you prepared?
John Sanders

Download paper

Java on z/OS – New Opportunities
Mr. Scott Chapman, American Electric Power

When you see “Java” and “Opportunities” together do you think of problems? Does “Java on z/OS” immediately equate to “performance problems” in your mind? If so, you might need to re-examine the performance of Java on z/OS on modern zSeries hardware. In doing so you might discover that Java runs very well on the mainframe today and truly opens up new opportunities for the type of work that can run on z/OS. Performance analysts should be aware of both the opportunities and the issues associated with Java on z/OS.

Download paper
Download handouts

IT Service Management Reporting
Adrian Heald, ITSM Reporting Services

IT service management performance monitoring and reporting is essential if we are to continually improve the effectiveness of the work we do; after all, if we don't know where we are today, we have no hope of getting where we want to be tomorrow. This session presents a pragmatic approach to implementing IT service management reporting by getting the basics right the first time. We look at what is required for the data collection processes, how to determine what data we should collect, and how to present that data for the differing consumers within your organisation.

Download paper

The Penguins Have Landed - Getting started with Linux on System z
Michael Giglio, Shelter Insurance

Linux for System z has been around for a decade. Lots of companies are running tens or hundreds of Linux virtual servers on a single IBM mainframe. Many enterprises have exploited this technology for server consolidation, cost savings, “green technology” and reducing the size of their data center. This presentation describes the path Shelter Insurance is taking to implement and exploit Linux on System z. This implementation is more of a journey than a destination, with plans to continue exploring virtualized environments.

Download paper

When Capacity Planning Becomes a Capacity Problem
Charles Hopf

With the explosive growth of things like MQ, CICS, and (especially) DB2/DDF, the volume of SMF data has grown from the 40 GB a day we talked about 15 years ago to 400, 500, or more GB, or even terabytes, of SMF data daily. The post-processing of that data is often one of the largest and most resource-intensive applications in many installations. This session will present some ideas on reducing that burden by processing only what needs to be processed, while keeping what needs to be kept on varying schedules depending on the type and volume of data.

Download paper
Download handouts

To Q or Not To Q: Is Simulation the Question?
Dr Tim R Norton, Simalytic Solutions, LLC

What is the role of modeling today? Is simulation the best way to understand the performance of a system, or can usable results be achieved with other solutions? This introductory presentation is a tutorial on a variety of modeling concepts, explaining the basics of queuing theory, analytic modeling, simulation, arrival distributions and other techniques. The second half is a brief discussion focused on the practical uses of modeling rather than modeling theory. It looks at how fundamental principles can be applied to any modeling technique as a business, rather than technical, problem.
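
For readers new to the material, the single-server result that queuing tutorials typically start from (a standard textbook formula, not specific to this session) relates utilization and response time:

    \rho = \lambda S, \qquad R = \frac{S}{1 - \rho}

For example, with service time S = 0.1 s and arrival rate \lambda = 8 requests/s, utilization is \rho = 0.8 and the average response time is R = 0.1 / (1 - 0.8) = 0.5 s, five times the service time.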

Download paper
Download handouts

Capacity Management in a Can -- Open, Heat, Serve, and Succeed
Rich Fronheiser, Metron
Dale Feiste, Metron-Athene

Few events are as scary for IT and business managers as a merger or an acquisition. Such events are even scarier when big decisions regarding overlapping services and infrastructure changes need to be made and there’s not really an ITIL-aligned Capacity Management process in place and little data or information being gathered and used. All hope is not lost, though. Executive commitment to fund such an effort and then bring in the right expertise can smooth over many bumps in the road. This session will examine such a scenario -- come see if there’s a happy ending.

Download paper
Download handouts

CMG-T: z/OS Tuning Basics: Monitoring z/OS Using SMF Logstreams and RMF
Glenn Anderson, IBM

A basic z/OS system includes SMF and RMF to measure and monitor resource consumption and system performance. This session positions the use of System Logger log streams as a repository for SMF data. There have been significant enhancements to SMF since the log stream support was initially delivered in z/OS 1.9, and these will be covered. This session will help you wrap your mind around the new SMF paradigm so you can fully understand the changes brought about by the System Logger-related enhancements and how they can impact you. Configuring RMF for data gathering and producing basic reports will also be discussed, including a quick look at some essential RMF historical and real-time reports.

Download handouts

To plan or not to plan, that should not be the question: A practical guide to planning
Robert Jahn, Collaborative Consulting

The session is aimed at helping you incorporate project planning into your performance efforts by presenting a six-step planning approach and by providing you the questions to ask along the way. It is primarily targeted at those who are relatively new to project planning, but it can also be used by the more experienced to validate planning approaches and gain additional insights. The approach and ideas presented draw upon the author's experiences gained over many years as a consultant working on and leading performance engineering projects at several Fortune 500 companies.

Download paper

Capacity Management GPS: Guided Practitioner Satnav
Adam Grummitt, Metron

Capacity Management – A Practitioner Guide was published in 2009. Many readers came back with a similar question: “This is all very well, but what are the first steps I must take to implement (or improve) capacity management in my organization?” This topic has been discussed on web sites, and subsequent strategic consultancy assignments have followed with mentoring, masterclasses, gap analysis and process outlines. We have found the analogy of satnav (satellite navigation) extremely useful in presenting this sort of analysis to show the aspects involved in making capacity management effective.

Download paper
Download handouts

Monitoring Performance QoS using Outliers
Eugene Margulis, Telus

Commonly used performance metrics often measure technical parameters that the end user neither knows nor cares about. The statistical nature of these metrics assumes a known underlying distribution, when in reality the distribution is unknown. We propose a QoS metric based on counting outliers: events where the user is clearly dissatisfied given his or her expectation at the moment. We use outliers to track long-term trends and changes in the performance of individual transactions, as well as to track system-wide freeze events that indicate system-wide resource exhaustion.
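
A minimal sketch of the counting idea in Python (the threshold and data are illustrative assumptions, not the paper's values): no distribution is assumed; we simply count events past the user's tolerance.

    # Outlier-count QoS: fraction of transactions slower than the user's
    # tolerance threshold; no underlying distribution is assumed.
    def outlier_rate(response_times, threshold):
        outliers = sum(1 for t in response_times if t > threshold)
        return outliers / len(response_times)

    hourly_samples = {
        "09:00": [0.3, 0.4, 2.1, 0.5],
        "10:00": [0.4, 3.2, 2.8, 0.6],   # rising rate -> emerging problem
    }
    for hour, times in hourly_samples.items():
        print(hour, f"{outlier_rate(times, threshold=2.0):.0%}")

Trending this rate over time gives the long-term view the abstract describes; a sudden jump across all transaction types would flag a system-wide freeze.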

Download paper
Download handouts

CMG-T: z/OS Tuning Basics: zIIPs, zAAPs, HiperDispatch and WLM Dispatching (Part 2)
Glenn Anderson, IBM

Your z/OS system and WLM manage different types of transaction and server workloads with multiple dispatchable units - TCBs and SRBs. HiperDispatch now plays a role in how ready work gets dispatched in z/OS, as well as how PR/SM dispatches logical CPs. In addition, some of these workloads are also eligible to be redirected to zIIP and zAAP specialty engines. Let's connect all these pieces together to understand WLM dispatching and measurement, what makes work eligible for zIIPs and zAAPs, and the role of HiperDispatch on the new z196 processor.

Download handouts

The US Government is moving to the Cloud and Open Source Software. Really? Tools and a Case Study
Melvin Greer, Lockheed Martin

Cloud Computing is a promising paradigm designed to harness the power of networks of computers and communications in a more cost effective way. The Cloud Computing paradigm is maturing rapidly and is being considered for adoption in government and business platforms. Open source systems refer to software systems whose source code is available, allowing for immediate incorporation of improvements and adaptations of the system by its users. This paper reports on an evaluation of open source development tools for Cloud Computing. The main tools examined are Eucalyptus, Apache Hadoop, and the Django-Python stack. These tools were used at different layers in the construction of a notional application for managing weather data. The results of our experience are reported in terms of a capability matrix that grades nine different aspects associated with the use of these tools in the development and deployment of applications in Open Source Cloud Computing environments.

Download paper

Optimizing Performance and Capacity in Private and Hybrid Clouds
Russell Rothstein, OpTier

In public, private and hybrid cloud environments, the IT organization has less visibility into service delivery. This makes it harder than ever to identify problems early, and recover before SLAs are affected. In order to cope, many IT departments are over-investing in hardware resources. Yet this undermines the ROI of the cloud. This session will include best practices to: (1) Determine which applications are best suited for the cloud; (2) Optimize service performance in and out of the cloud; (3) Plan for capacity in a dynamic cloud environment; (4) Realize the ROI and full benefits of the cloud.

Download paper

Application Signature - A Way to Identify, Quantify and Report Change
Richard Gimarc, CA Technologies
Kiran Chennuri, Aetna

Identifying change in application performance is a time consuming task. Businesses today have hundreds of applications and each application has hundreds of metrics. How do you wade through that mass of data to find an indication of change? This session describes the use of an Application Signature to identify, quantify and report change. A Signature is a concise description of application performance that is used much like a template to judge if a change has occurred. The Signature has a concise set of visual indicators that supports the identification of change in a timely manner.
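
A minimal sketch of the template idea in Python (the mean ± 2σ band is an illustrative assumption, not necessarily the authors' signature definition):

    # Build a per-hour signature from baseline data, then judge whether a
    # new observation falls outside the signature band.
    import statistics

    def build_signature(baseline):          # {hour: [observations]}
        return {hour: (statistics.mean(v), statistics.stdev(v))
                for hour, v in baseline.items()}

    def changed(signature, hour, value, k=2.0):
        mean, std = signature[hour]
        return abs(value - mean) > k * std

    sig = build_signature({9: [120, 130, 125, 118], 10: [200, 210, 190, 205]})
    print(changed(sig, 9, 127))   # False: within the signature band
    print(changed(sig, 9, 180))   # True: a change worth investigating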

Download paper

Instrumentation Strategies for the Cloud
Mr David Halbig, First Data Corporation

Cloud computing holds the promise of cheap, self-service, on-demand capacity for application owners. However, the mechanisms for accomplishing this flexibility also greatly complicate the lives of those responsible for maintaining service levels. This session explores various problem use cases and how the associated instrumentation strategies allow rapid and precise identification of underlying performance root causes. The concepts of horizontal and vertical monitoring are introduced.

Download paper

Case Study: Optimization and Analysis of a Private Virtual Cloud Computing Environment
Mr Frank Lieble, SAS
Robert Woodruff, SAS
Mr Stephen Sanger, SAS

SAS maintains a private virtual cloud computing environment consisting of a distributed collection of blade servers, virtual machines, storage, and network components. A challenge in managing this technology is to ensure that cloud resources are provisioned and utilized effectively. SAS implemented an analytical layer that monitors the holistic system by collecting key metrics and analyzing resource usage and user behavior for capacity management. This case study shows how SAS manages its private virtual cloud computing resources and costs through use of analysis and reporting.

Download paper

Are Private Clouds More than Vapor?
Michael Salsburg, Unisys

When architects and CIOs state that they have instantiated a private cloud in their datacenter, what the heck are they talking about? There is no simple, formal definition for a private cloud to use as a reference. There does seem to be a consensus regarding the term Public Cloud or, simply, Cloud. Amazon is a poster child: infrastructure is made available through the Internet through a simple, self-service portal, and the user pays only for the time that is being used. There is no up-front capital investment. Public cloud elasticity allows computing consumption to grow from 1 server to 10,000 and back to 1, while the cost is incurred on an hourly basis. So, does a private cloud work that way also? Other than elasticity and the pay-by-the-drink cost model, what else does a cloud provide that can be realized within the enterprise? What does it mean to mix private and public clouds to create a Hybrid Cloud model? This session will address these questions by starting with the basics of cloud computing and then focusing on how public and private clouds can become an extension of the current enterprise's architecture.

Download paper

A Framework for Enterprise Capacity Management
Mr. Ramapantula Udaya Shankar, Tata Consultancy Services Limited
Mr. Bhargav Lavu, Tata Consultancy Services Limited

Capacity management is an IT discipline that is rarely understood and often sporadically implemented. At its core, capacity management ensures that an organization has sufficient capacity to provide satisfactory service levels to users in a cost-effective manner. This paper outlines the foundations of a framework for enterprise capacity management. The framework has been successfully implemented in a large telco with a footprint of over 40k servers. This session will include real-life experiences from the implementation.

Download paper

I Have Looked at Clouds from Both Sides Now: Measuring Vapors
Dr. H. Pat Artis, Performance Associates, Inc.

While the value proposition of cloud computing is seductive to the management of organizations ranging from small and medium businesses to global enterprises, the measurement of the performance, reliability, and availability of cloud-based applications needs to be carefully defined to serve as the basis for a contractual agreement between the service provider and the client. This session will review cloud architecture and then explore four key questions:
- Are there cloud specific metrics?
- What is the minimum desirable set of metrics?
- Who should collect the metrics for the cloud?
- How should the metrics be reported?

We will also spend a moment or two paying tribute to the wonderful music of Joni Mitchell.

Download paper

Case Study: Federal Agency Capacity Planning Success Factors
Ms Ellen M Birch, The MITRE Corporation
Ms Gina M Molla, The MITRE Corporation

In this session, a federal government agency’s capacity planning successes and failures are examined and six critical success factors are proposed for ensuring good capacity planning, using an ITIL® based approach. This agency needs to provide the American public with quality IT service, but the architecture is exceedingly complex and distributed across myriad business functions and locations. Changes to one application often have a huge ripple effect. Application failures resulting from insufficient capacity could halt the federal government’s operations, so good capacity planning is essential.

Download paper
Download handouts

Help Developers Find Their Own Performance Defects
Mr. Erik T Ostermueller, FIS

How early in the software development cycle are most performance defects found? Before or after QA? Industry pundits have long sought to reduce costs by fixing software defects earlier in the cycle. The path to these cost reductions, however, is fraught with roadblocks. This session focuses on a concrete testing regimen that works around these longstanding obstacles. It empowers developers to finally help locate their own performance defects, instead of relying solely on the assistance of specialized performance tuning experts.

Download paper
Download handouts

Performance Requirements: an Attempt of a Systematic View
Alexander Podelko, Oracle

Performance requirements are supposed to be tracked from system inception through the whole system lifecycle, including design, development, testing, operations, and maintenance. However, different groups of people are involved at each stage, each using their own vision, terminology, metrics, and tools, which makes the subject confusing when you go into the details. This presentation is an attempt at a systematic view of the subject.

Download paper
Download handouts

Storage (RAM) in a Balanced System
Ray Wicks

The forced flow law in a balanced system would lead one to think that the size of a resource, such as storage (RAM), could be a function of processor usage. This presentation looks at the balanced system set of resource ratios, storage in particular. Topics will include balanced systems metrics, building a function Resource Usage = F(CPU Usage), new System z storage data in SMF 113 as a picture of storage usage, and projections of storage usage for capacity planning.
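
A minimal sketch of building such a function from paired measurements (Python; the data points are invented): fit a line to observed (CPU busy, RAM used) pairs and project forward.

    # Least-squares fit of Resource Usage = F(CPU Usage); data is invented.
    import numpy as np

    cpu = np.array([20, 35, 50, 65, 80])    # CPU busy %
    ram = np.array([24, 31, 40, 47, 55])    # RAM in use, GB

    slope, intercept = np.polyfit(cpu, ram, 1)
    print(f"RAM ~= {slope:.2f} * CPU + {intercept:.1f}")
    print(f"projected RAM at 90% CPU busy: {slope * 90 + intercept:.1f} GB")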

Download paper
Download handouts

CPU Measurement Inside Virtual Machine
Dr. Jie Lu, BMC Software

Although virtualization has been widely implemented for years, a fundamental question in performance analysis and capacity planning, “Can I accurately measure the CPU time inside a guest operating system?”, is still being debated. This session aims to put that debate to rest. It explains how time devices are virtualized in different virtualization solutions, and how they impact timekeeping and resource accounting in the guest OS. It then discusses and compares various solutions to the accuracy problem, to help people understand which metrics to use and how to use them appropriately.

Download paper
Download handouts

IT Contingency planning for financial crisis
Fernando Martinez, CGI

A financial crisis can have a significant impact on the workload of financial institutions. Unpredictable and huge loads can impact the IT infrastructure from one day to the next. In this context, providing a more or less acceptable level of service to customers becomes a matter of survival; a well-conceived contingency plan is more necessary than ever. Hard work today can save tears tomorrow.

Download paper

A Model of Easy Tier Based upon Deployable Applications
Bruce McNutt, IBM

The central concept of the IBM Easy Tier product and similar offerings is the automated, dynamic relocation of data, at a fine level of granularity, based upon its current observed level of I/O demand. We develop a simple (although very approximate) model of the storage use within each tier, when exploiting a storage management offering of this type. To accomplish this, an extended version of the Deployable Applications Model is presented, suitable for analysis of a tiered storage environment.

Download paper

Application Robustness Classification using Perturbation Testing
Amol B Khanapurkar, Tata Consultancy Services
Mr. Mohit Nanda, Tata Consultancy Services Ltd.
Suresh B Malan, Tata Consultancy Services

Load testing helps capture performance at different load levels. It is naïve to assume that application performance at a given load level will stay in the band that load testing has pointed out. By introducing a load spike of short duration, we observed that applications either show resiliency, degrade gracefully and recover, or crash outright, even after the spike has subsided. This behavior provides insights into application robustness and subsequent tune-ability that cannot be captured in load tests. Perturbation testing is a useful method to classify applications and mitigate risks.
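
A minimal sketch of classifying the outcomes named above (the labels and tolerance are illustrative assumptions, not the authors' scheme):

    # Classify post-spike behavior from steady-state throughput measured
    # before the perturbation and again after the spike has subsided.
    def classify(tps_before, tps_after, tolerance=0.10):
        if tps_after == 0:
            return "crash"
        drop = (tps_before - tps_after) / tps_before
        return "resilient" if drop <= tolerance else "graceful degradation"

    print(classify(500, 495))   # resilient: full recovery after the spike
    print(classify(500, 300))   # graceful degradation: running, but slower
    print(classify(500, 0))     # crash: never recovered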

Download paper

Software Performance Engineering: What Can It Do For You?
Dr. Connie U. Smith, L & S Computer Technology, Inc.

Software Performance Engineering (SPE) has the potential to reduce the cost and improve the responsiveness of systems. With SPE, developers build performance into systems rather than (try to) fix it later. SPE has evolved over more than thirty years and has been demonstrated to be effective during the development of many large systems. This session describes key SPE historical milestones, gives an overview of the new SPE paradigm for software and system development, and surveys industry trends that will affect the adoption and use of SPE. It then describes key ideas for leading the field and what they can do for you.

Download paper

Event Tracing: Runtime Cost Analysis
Mr Nathan Scott, Aconex

Event tracing technologies are increasingly being used to diagnose performance problems in modern computing systems. Most of the prevalent operating systems of today include at least one native tracing technology with which the operating environment can be traced, and most also provide an option for static trace probe points to be embedded within application level software. We present results from benchmarking several userspace tracing options, with the aim of informing both application and trace toolkit developers as to the expected runtime costs incurred through the use of event tracing.
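
A minimal sketch of the microbenchmark shape in Python (the in-memory probe is a stand-in, not one of the tracing toolkits measured in the paper):

    # Time a workload with and without a probe; derive per-event overhead.
    import timeit

    events = []

    def untraced(x):
        return x * x

    def traced(x):
        events.append(("probe", x))   # stand-in for a static trace probe
        return x * x

    CALLS, REPS = 1000, 100
    base = timeit.timeit(lambda: [untraced(i) for i in range(CALLS)], number=REPS)
    probe = timeit.timeit(lambda: [traced(i) for i in range(CALLS)], number=REPS)
    print(f"~{(probe - base) / (CALLS * REPS) * 1e9:.0f} ns per traced event")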

Download paper

Migrating Applications to the Cloud
Peter Johnson, Unisys

So you have decided that you want to move one or more of your enterprise applications to the cloud. What migration issues should you consider? What applications are a good fit for the cloud? Could you possibly offer your application as Software as a Service (SaaS)? This paper looks at these questions and many more to help you understand the various possibilities when you start moving your application to the cloud, and helps you better prepare for migration.

Download paper
Download handouts

Capacity and Performance Planning considerations for VDI
Ellen Friedman

IT organizations today are looking for new ways to address their desktop challenges: security concerns, cost, remote access, high availability, and disaster recovery. All of these considerations play a role in the motivation to virtualize the desktop environment.

Download paper

Zen and the Art of Leadership: Succeeding by Knowing When, And How, to not Care
Steve Balzac, 7 Steps Ahead, LLC

In sports and martial arts, victory most often goes to the athlete who knows when to care and when not to care about winning. In business, knowing when and how to not care is critical to creating a successful team. We'll cover the nine steps you can use to build your team. You will learn how to make them worthy of your trust and you worthy of theirs so that you can turn them loose with the knowledge that they will succeed.

- Learn techniques you can apply immediately to inspire and motivate others
- Understand how to adapt your leadership style to maximize results in any situation
- Know when to give up power to increase your effectiveness as a leader
- Be challenged to accomplish more difficult goals

Download paper

The New Performance Management Paradigm for the IBM zEnterprise System
Glenn R Anderson, IBM

The Platform Performance Management component of IBM zEnterprise Unified Resource Manager extends a goal oriented performance management capability to both traditional System z and BladeCenter environments, including Power7 and x-blades. z/OS WLM allows you to assign a Service Class based on the goal set by PPM, allowing end-to-end goal management. This session will explain the intersection of these new functions, helping you to understand the performance management capabilities of zEnterprise for cross-platform applications. The paradigm for System z performance management is changing!

Download handouts

A Methodology for Combining GSPNs and QNs
Daniel A Menasce, George Mason University

Generalized Stochastic Petri Nets (GSPNs) are powerful mechanisms to model systems that exhibit parallelism, synchronization, blocking, and simultaneous resource possession. Large systems, however, suffer from state space explosion. Queuing networks (QNs) provide very efficient solutions for the cases where parallelism, synchronization, blocking, and simultaneous resource possession are not present. This paper presents a methodology by which large GSPNs can be efficiently solved by automatically detecting subnetworks that are equivalent to product-form queuing networks (PFQNs).
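
For context, the "product form" that makes QNs so efficient is a standard result (not this paper's contribution): in an open Jackson network with station utilizations \rho_i, the joint queue-length distribution factors into independent per-station terms,

    P(n_1, \ldots, n_K) = \prod_{i=1}^{K} (1 - \rho_i)\, \rho_i^{n_i}

so each station can be solved in isolation instead of enumerating a global state space, which GSPNs cannot avoid in general.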

Download paper
Download handouts

CMG-T: Windows System Performance Measurement and Analysis
Jeff Schwartz

The basic tutorial in the CMG-T foundation curriculum introduces the metrics that are available from the Windows operating system and it's most prevalant applications. The sheer number of available metrics makes it difficult for anyone, even those analysts who are well versed in performance analysis measurements on other platforms, to discern the most important performance counters. This course will provide the necessary information to enable the Windows performance analyst to ascertain what the most important metrics are, how to interpret them, and the most appropriate collection mechanisms. It will also explain measurements either that are not easily obtainable or must be calculated. Discussion will include performance data collection and analysis issues using commonly available tools.

Note: All topics have been updated to include Server 2008, Windows 7, and Windows Vista.

Download handouts

Non-linear scaling of long running batch jobs: “The twelve days of degradation”
Chris B Papineau, Oracle

Batch jobs and other long-running programs often exhibit non-linear scaling. This means that they run disproportionately longer when processing larger sets of input data. This is a case study of three batch programs whose run times exhibited severe degradation at large volumes. In one case, 3000 records were processed in 55 minutes, while 6000 records took almost 400 minutes. This caused disruptions of the customer’s business cycle. The tools and techniques used to find and correct the cause of the issue are described in detail. An analogy from the non-technical world is used to illustrate the concepts.
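
The abstract's own numbers give a quick way to gauge the degradation: if run time grows as t \propto n^k, then

    k = \frac{\log(t_2 / t_1)}{\log(n_2 / n_1)} = \frac{\log(400 / 55)}{\log(6000 / 3000)} \approx 2.86

so doubling the input multiplied run time by roughly seven, behavior closer to cubic than linear.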

Download paper
Download handouts

Automatically Avoiding Application Outages/Poor Performance Due to “Disk Full” and Badly Timed Maintenance
Ron Kaminski

Examining the number of application performance investigations that are ultimately discovered to be due to full disks or badly timed backups led us to think about better ways to find these problems long before they impact users. Each of these investigations typically requires expensive analyst time, and we believed that computationally efficient algorithms, applied automatically, could predict disk-full conditions and poor maintenance timings so that these issues are solved before the end user either notices an issue or feels a slowdown. You can do it too!
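
A minimal sketch of one computationally cheap predictor (Python; the capacity and samples are invented): fit a linear trend to recent daily usage and extrapolate the fill date.

    # Predict days until a filesystem fills from a linear usage trend.
    import numpy as np

    capacity_gb = 500.0
    days = np.arange(7)                        # last 7 daily samples
    used_gb = np.array([310, 318, 330, 337, 349, 358, 371])

    rate, _ = np.polyfit(days, used_gb, 1)     # growth in GB/day
    if rate > 0:
        days_left = (capacity_gb - used_gb[-1]) / rate
        print(f"~{rate:.1f} GB/day; full in about {days_left:.0f} days")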

Download paper
Download handouts

Quantifying Imbalance in Computer Systems
Dr Charles Z Loboz, Microsoft

The notion of imbalance in computer resource consumption is frequently related to sub-optimality in performance. For example, in database systems, imbalance between I/O transfers to multiple disks signals that the layout of database files needs tuning to improve response time. Such considerations require a quantitative measure of imbalance, especially when a large number of components are involved. This session describes an entropy-based, composable measure of imbalance which is applied to computer systems in many areas and contexts.
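
A minimal sketch of an entropy-based imbalance measure in Python (whether this normalization matches the paper's exact definition is an assumption): 0 means a perfectly even spread, values near 1 mean one component dominates.

    import math

    def imbalance(loads):
        total = sum(loads)
        probs = [x / total for x in loads if x > 0]
        entropy = -sum(p * math.log(p) for p in probs)
        return 1.0 - entropy / math.log(len(loads))

    print(imbalance([25, 25, 25, 25]))   # 0.0   -> evenly balanced disks
    print(imbalance([97, 1, 1, 1]))      # ~0.88 -> one disk takes nearly all IO

Normalizing by log(n) keeps values comparable across different component counts, which helps when rolling the measure up across subsystems.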

Download paper

Demystifying Extended Distance FICON
Dr. Stephen R Guendert, BROCADE
Gordy Flam

Two years ago IBM announced IU pacing enhancements that allow customers to deploy z/OS Global Mirror (zGM) over long distances without a significant impact to performance. This is more commonly known by the marketing term Extended Distance FICON. The more technically accurate term as defined in the FC-SB3/4 standards is persistent IU pacing. How this functionality works, and what it actually does for the end user has remained a mystery, thanks in large part to the marketing term used. This session will demystify how the technology works, and how it can benefit the end user.
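
A back-of-envelope calculation shows why pacing matters at distance (the 16-IU window below is illustrative; light in fiber travels roughly 5 µs/km each way):

    \text{RTT} \approx 2 \times 5\,\mu\text{s/km} \times d, \qquad \text{IU rate} \leq \frac{\text{pacing window}}{\text{RTT}}

At d = 100 km, RTT is about 1 ms, and a 16-IU window caps the channel near 16,000 IUs per second regardless of link bandwidth; persistent IU pacing raises the window, and with it the ceiling, roughly proportionally.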

Download paper

DB2 10 for z/OS Performance and Scalability
Robert A Catterall, IBM

DB2 10 for z/OS features much better CPU and I/O performance, as well as significant relief from virtual storage and latch constraints, enabling each DB2 member to manage a much higher number of threads and making it possible to reduce the number of members in a data sharing system. This session will explain these benefits, plus a few of the ''new function'' features that improve performance.

Download paper
Download handouts

CMG-T: Storage Performance Management
Gilbert Houtekamer, IntelliMagic
Brett Allison

This session will help you understand the processes, architecture and measurements available for managing enterprise storage performance. After attending the session you will have an understanding of the key management and technical aspects required for implementing effective storage performance management, such that you can improve storage performance and reduce the risk of unexpected problems. 1st HOUR: This is an introduction to storage performance management. It will explain why this new discipline is now emerging. The cost and performance benefits of storage performance management are explained, as well as the required building blocks, including processes, tools and skills. This session is appropriate for management as well as technologists.

Download handouts

The New Function and New Faces of Mainframe and Cross-Platform Performance Monitoring
Glenn R Anderson, IBM

The IBM zEnterprise Unified Resource Manager provides performance monitoring and reporting functions that give performance analysts the data to understand whether performance goals are being met for cross-platform applications running on System z and the BladeCenter Power7 and x-blades. Data at the transactional level can be collected using ARM agents. RMF provides CIM-based performance data gatherers for Linux on System z, Linux on System x, and AIX to provide consistent monitoring for zEnterprise ensembles. This paper will present the new face of monitoring in this expanded mainframe world.

Download handouts

Minimizing System Lockup During Performance Spikes: Old and New Approaches to Resource Rationing
Mr. Erik T Ostermueller, FIS

Do you lose sleep worrying that your system might die under the harsh punishment of an unexpected performance spike? This session presents old and new approaches of selectively restricting system resources so that performance spikes will cause fewer outages. Traditionally, this problem has been addressed by configuring the system with a limited number of concurrent threads of execution. This session reviews the traditional approach, and also discusses some new options (battle tested in production) that prioritize currently executing traffic and traffic that avoids data-dependent hot-spots.
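
A minimal sketch of the traditional approach in Python (the limit of 8 is illustrative): cap concurrent threads of execution so a spike queues instead of exhausting memory, connections, or CPU.

    import threading

    MAX_CONCURRENT = 8
    gate = threading.BoundedSemaphore(MAX_CONCURRENT)

    def process(request):
        pass                        # real work would go here

    def handle(request):
        with gate:                  # arrivals beyond 8 block here and queue
            process(request)

    threads = [threading.Thread(target=handle, args=(i,)) for i in range(50)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()                    # 50 requests, never more than 8 at once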

Download paper
Download handouts

Working Kalmly - A Grab Bag of Performance Tips
Ms. Denise P Kalm

This session includes miscellanea collected from past performance nightmares and triumphs; it is designed to help you handle the “not so typical” scenarios that may challenge you in your performance career. The Performance CryptMaster has been there on the front lines and now wants to offer these cautionary tales to you.

Download paper
Download handouts

Deep Dive Into LOBs Using DB2 10 for z/OS
Robert A Catterall, IBM

If you look into today’s applications, many use large objects (LOBs) even in heavy transactional environments. This session reviews the new LOB features in DB2 10 for z/OS. Most important are inline LOBs and utility enhancements. Inline LOBs are useful for small LOBs. We'll examine the performance benefits and tuning considerations for inline LOBs, as well as the improved performance for importing and exporting LOBs using Load and Unload. Other DB2 10 LOB features will be discussed as well.

Download paper
Download handouts

Poster: Performance Engineering for MASSIVE Systems
Mark Lustig, Collaborative Consulting

Massive platforms consist of 50+ distributed systems and components, integrated to process millions of transactions per day and hundreds of terabytes of data. One component failing to scale to support thousands of transactions per second can result in significant lost revenue from a single disruption. Performance engineering must be implemented across the lifecycle, affecting all aspects of IT. In the massive-system platform world, the diversity of technologies requires a disciplined approach to building, measuring, and ensuring system scalability, performance, and throughput.

Download paper
Download handouts

Poster: Targeted Custom Profiling
Chris B Papineau, Oracle

Performance analysis of any software system reduces to one simple principle: “Where is the code or system spending its time?” Answering this question rigorously usually involves profiling the code. Rather than off-the-shelf profiling tools, it is often possible – and more effective – to perform this task with customized profiling. A case study of a tool which has been productive against batch and interactive programs is presented, illustrated with the sketch below. The key concepts:
- Custom time-stamped instrumentation with a predictable, consistent format
- A user interface
- A parsing engine
- Output reporting formats
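
A minimal sketch of the first concept in Python (the record format and names are illustrative, not the tool in the case study):

    # Custom time-stamped instrumentation with a fixed, parse-friendly format.
    import functools, time

    def profiled(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                elapsed_ms = (time.perf_counter() - start) * 1000
                print(f"PROF|{time.time():.6f}|{fn.__name__}|{elapsed_ms:.3f}ms")
        return wrapper

    @profiled
    def lookup(order_id):
        time.sleep(0.01)            # stand-in for the code being profiled
        return order_id

    lookup(42)                      # emits: PROF|<epoch>|lookup|10.xxxms

Because every record shares the PROF|timestamp|name|elapsed shape, the parsing engine and reporting steps reduce to splitting on "|" and aggregating.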

Download paper
Download handouts

Poster: Best Practices for Private Cloud Implementation
Laura Knapp, AES

While the cloud simplifies the concept of data center virtualization, there is significant implementation complexity. This session focuses on items overlooked when migrating to a private cloud computing environment. Three private cloud examples will be used to explain a private cloud, outline key items in an implementation plan, emphasize the critical role of the network, discuss management and operations tasks, and summarize best practices.

Download handouts

Poster: Consolidating Database Servers
Mr. Megh Thakkar, CPT Global

It is very common for projects to use an approach of application-based database deployments. This results in a large number of database servers being deployed which are under-utilized and take up data centre floor space, power and resources. A number of operational issues are also common, such as licensing, capacity management, availability and recovery. This session discusses how to optimize database server deployments by using consolidation techniques to improve availability and performance, and to derive operational benefits in a cost-effective manner.

Download paper
Download handouts

Late Breaking: Processor Selection for Optimum Middleware Price/Performance
David Kra

Many middleware products can be deployed onto many combinations of processor architecture and operating system. Finding the most cost-effective combination is complicated by software pricing based on vendor core weighting factors. This paper explains how to combine core weights, core counts, and performance data to calculate and compare a “Performance Rate per Weighted Core.” Results are provided for the Oracle database server as used in published TPC-C and TPC-H benchmarks.
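
With invented numbers (not the paper's benchmark results), the metric works out as follows:

    \text{PRWC} = \frac{\text{benchmark performance}}{\text{core count} \times \text{core weight}}

A 16-core platform rated 800 ktpmC with a vendor core weight of 0.5 scores 800 / (16 × 0.5) = 100 ktpmC per weighted core, while a 32-core platform rated 900 ktpmC at weight 1.0 scores only about 28, so the first delivers far more performance per licensed (weighted) core despite the lower raw result.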

Download handouts

The true cost of website downtime: how to develop a convincing case?
Peter van Eijk

Websites can represent a tremendous value to a business. But how can a convincing monetary valuation of downtime be developed? Is it based on lost revenue? On brand value? It might not be the same for every type of website. The true cost of downtime is an important component in the business case for any investment in hardware, software and capacity planning. This session describes how this can be developed in collaboration with non technical business users on the basis of process descriptions combined with actual measurements. A true customer story.

Download paper
Download handouts

Help! My Application Will Not Scale... Oracle Solaris™ Multi-thread Aware Memory Allocation
Rickey Weisner, Oracle

When your application does not scale on new multi-processor, multi-core, multi-thread hardware, the problem may be lock contention in the memory allocator. This session gives you the tools to identify the issue and select a better allocator.

Download paper
Download handouts

Automatic Daily Monitoring of Continuous Processes in Theory and Practice
Frank Bereznay
Mp Welch, SAS

Monitoring large numbers of processes for potential issues before they become problematic can be time consuming and resource intensive. A number of statistical methods have been used to identify change due to a discernible cause and separate it from the fluctuations that are part of normal activity. This session provides a case study of creating a system to track and report these types of changes. Determining the best level of data summarization, control limits, and charting options will be examined, as well as all of the SAS code needed to implement the process and extend its functionality.
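
A minimal sketch of the control-limit idea (the paper's implementation is in SAS; this Python rendering and its 3σ limit are illustrative):

    # Flag a day whose value falls outside mean +/- 3 sigma of the baseline.
    import statistics

    def out_of_control(baseline, today, k=3.0):
        mean = statistics.mean(baseline)
        sigma = statistics.stdev(baseline)
        return abs(today - mean) > k * sigma

    cpu_busy = [61, 63, 59, 62, 60, 64, 61]   # recent daily values
    print(out_of_control(cpu_busy, 62))       # False: normal fluctuation
    print(out_of_control(cpu_busy, 78))       # True: discernible cause likely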

Download paper
Download handouts

Processing Big Data on the Cloud
Dr. Odysseas I Pentakalos, SYSNET International, Inc

Organizations of all sizes and industries, including Wal-Mart, Facebook, the Human Genome Project, and the Large Synoptic Survey Telescope, report the generation of ever vaster amounts of data. Processing enormous amounts of information to extract knowledge requires new tools, techniques, and infrastructure. This article describes the MapReduce framework introduced by Google, which enables distributed processing of large data sets, and the Apache Hadoop software, which simplifies the task of processing large data on computing clusters with up to thousands of nodes.
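
A minimal single-process illustration of the MapReduce shape in Python (Hadoop distributes exactly this pattern across a cluster; the word-count example is the customary one, not taken from the article):

    # map emits key/value pairs; a shuffle groups by key; reduce aggregates.
    from collections import defaultdict

    def map_phase(chunk):
        for word in chunk.split():
            yield word.lower(), 1

    def reduce_phase(key, values):
        return key, sum(values)

    chunks = ["big data big clusters", "data moves to the code"]
    groups = defaultdict(list)                # the "shuffle" step
    for chunk in chunks:
        for key, value in map_phase(chunk):
            groups[key].append(value)

    print(dict(reduce_phase(k, v) for k, v in groups.items()))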

Download paper

Poster: Performance scaling analysis of a messaging application
Agrawal Nishant
Himanshu K Kumar, Tata Consultancy Services Limited
Mr Manoj K Nambiar, Tata Consultancy Services

When an online application is accessed by concurrent users, we expect throughput to be proportional to the number of users; if it does not scale linearly, there is a bottleneck. If an application does not have shared software resources like locks, the bottleneck must be a system resource. When one such application was tested for performance, there was hardly any change in any of the device utilizations as the workload was increased. In this session we discuss the performance analysis of such a system and how it was finally tuned to achieve high throughput.
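
A standard sanity check for such cases (a textbook result, not from the paper) is Little's law for a closed system with N users, response time R, and think time Z:

    X = \frac{N}{R + Z}

If throughput X stops growing with N while device utilizations stay flat, the added time is being spent waiting on a software resource rather than on any measured device, which is exactly the situation the session analyzes.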

Download paper

Poster: Transitioning to IPv6
Laura Knapp, AES

IPv6 is gaining implementation momentum after years of media hype. Unfortunately, the transition to IPv6 is major and needs to be planned and designed with the thoroughness used when the move was made from SNA to IP. This session will use the experiences of worldwide clients to relate some of the do's and don'ts of a transition to IPv6, including background on IPv6, addressing, design, management, and best practices.

Download handouts

Poster: Test Environment: A challenge to Performance Engineering
Mohit Verma

Test environments bring in new complexity and challenges to both agile and waterfall performance engineering approaches. Test environments are often not sized equal to the production environment, and nowadays test environments tend to be heavily virtualized. We will discuss the Performance Testing Lifecycle of a mission critical application which we performance engineered to support several hundred concurrent users in a Health Care setting under a restricted Test Environment. The attendees will learn what challenges we faced and the lessons we learned during performance testing of this application in a shared, virtualized test environment.

Download paper
Download handouts

Poster: How to use cloud computing and social media to run a CMG chapter and other communities
Peter van Eijk

This session discusses the use of cloud computing and social media tools for the development of an online professional community, such as a CMG chapter. It covers websites, website hosting, form management, LinkedIn components, and blogs.

Download paper
Download handouts

Poster: Welcome to the SecondLife Computer Measurement Group (SL-CMG)
Mrs. Clea A Zolotow, IBM

Welcome to our new virtual home – SecondLife Computer Measurement Group (SL-CMG)! Come tour the new office building, view the cube farm, and take a tour of our new datacenter (featuring the z/BX). See how presentations and posters can be exhibited and how meetings can take place in this virtual world. Socialize, connect, and create with your fellow CMG members here in Secondlife! (Bring your SecondLife-enabled laptop to share in the fun with your PG avatar.)

Download paper

Poster: Cognitive Overload!
Elizabeth Stahl

How can we apply established theories and recent developments in the cognitive science exploration of traditional study habits to the overload of information we encounter every day? What cognitive best practices can be recommended for both the consumer and producer of information streams? How can we better manage the information overload as performance professionals in our digital life? This poster will forever change HOW you work.

Download paper

Performance Models from Theory to Practice: A Case Study
Mr Vaibhav Agrawal, Infosys Technologies Limited
Mr Chintan V Raval, Infosys Technologies Limited
Miss Vaishali V Gulve, Infosys Technologies

For a complete implementation of the performance modeling concept, we always go through two stages. First we ask: how will we apply the concept? Second: what level of data do we need in order to meet our goals? During the second stage we face many challenges. We have to find out whether we have the proper data and, if we don't, how to collect the data we need. Prior to collecting data, we need to know what accuracy would be sufficient to meet our needs, i.e., to determine whether our process is suitable for implementation in a real-time scenario. In this session we present answers to these challenges in several possible scenarios.

Download paper
Download handouts

Performance Assurance for Packaged Applications
Alexander Podelko, Oracle

Performance is a critical factor for the success of any packaged application implementation. The presentation discusses performance assurance for packaged applications, with the example of Oracle Enterprise Performance Management. While some details in the presentation are related to this particular set of applications, many approaches discussed would be applicable to most packaged applications. The presentation will discuss a holistic performance assurance approach, i.e. a top-down approach to performance troubleshooting. Potential performance issues and ways to address them will be presented.

Download paper
Download handouts

Capacity Management in a Cloud Computing World
David Linthicum, Blue Mountain Labs

With the advent of cloud computing, enterprises are looking for new ways to measure both capacity and performance. While the elastic nature of cloud computing means that resource-intensive applications can go from dozens to hundreds of virtualized servers at the press of a button, this is not always the best way to manage overall computing power and performance. Moreover, the use of cloud computing, unless governed by a performance management program, is often not as cost effective as leveraging local system resources.

In this session we'll look at the new computing models of cloud computing, and how performance management issues both change, and remain the same. We'll take a deep dive into the way that computing power is offered today from both private and public clouds, how to model these environments, and how to reach both cost and computing resource efficiency.

Download paper

IT Around the World - Utilizing Technology, Cloud Infrastructures and Virtualization
Susan Schreitmueller, IBM

A technical overview of IT around the world, with particular focus on how growth markets have the opportunity to leapfrog mature countries by utilizing technology, cloud infrastructures and virtualization judiciously. This session will also focus on innovation and Smarter Computing used to significantly improve infrastructure, economies, and ways of life for countries around the globe.

Download paper