CMG Southwest - Computer Measurement Group

CMG Southwest

Southwest CMG (SWCMG) is a regional group of the international Computer Measurement Group – CMG. SWCMG is focused on performance engineering and capacity management of enterprise IT systems.

Members include IT professionals committed to sharing information and best practices on ensuring the efficiency and scalability of IT service delivery using measurement, quantitative analysis, validation techniques, modeling, and forecasting.

SWCMG typically meets 3-4 times a year in Dallas, Austin, and Houston.

All skill levels are welcome to join, share and network with other professionals on a variety of topics that include:

  • Capacity Planning & Management
  • Performance Testing
  • Application Performance Management
  • Performance Modeling
  • System & Application Performance Monitoring
  • Application Diagnostics & Optimization


Richard Gimarc – [email protected]
Joey Capps – [email protected]
Pat Hughes – [email protected]

Join SWCMG via Meetup


April 20, 2020 – ONLINE – Southwest CMG – Performance & Cost Optimization

Monday, April 20 @ 10:00 am – 2:00 pm CST

Join us for a virtual meeting of the Southwest CMG. We have four presentations that look at system performance and cost optimization from different application perspectives.

Registration is required at


Be sure to join our SWCMG Meetup to receive additional meeting information and follow-up:

Here are the details of each presentation:

#1 – VPN is the New Toilet Paper – Leveraging Helix Optimize to Understand the Impact of Employees Working from Home on Businesses due to COVID-19

Download presentation

April 20 @ 10:00 am – 11:00 am CST

Speaker: Karen Hughes, Sr. Principal Software Consultant, BMC Software

Abstract: COVID-19 has completely disrupted where employees work. Organizations that used to require employees to come to the office are now forced to have them work remotely, and they have little experience in understanding the impact this will have on their networks. Other organizations have experience with some employees working from home, but not with 100% of their workforce. In this session we will discuss how to manage, plan, and validate your network with Helix Optimize. It will show four actual use cases that BMC IT leveraged to properly plan for the increase in remote workers, understand the impact on their network, and model business continuity scenarios. Learn what data and metrics need to be collected, where to get them from, and how to analyze them.

About the Speaker: Karen Hughes has worked with the BMC TrueSight Capacity Optimization product line since 1996. During that time Karen has had many roles, including QA, development, marketing and solution engineering. Prior to BMC, Karen worked for AGFA, a division of the Bayer Corporation, as a software developer (utilizing her bachelor’s degree in Computer Science), focusing her efforts on coding automation tools for their engineering department. Karen has tremendous domain knowledge, is a subject matter expert on Capacity Planning and is certified by ITIL as an ITIL Capacity Management Practitioner.


#2 – Save MSUs and Reduce Run-Times for Analytics and MXG Reporting

Download presentation

April 20 @ 11:00 am – 12:00 pm CST

Speaker: Paul Massengill, Systems Engineer, Mainframe Analytics Specialist, Luminex Software

Abstract: The mainframe team of a Fortune 500 Transportation Provider was tasked with conflicting goals: (1) increase the frequency and variety of reporting and analytics, and (2) avoid an upgrade of their already overtaxed mainframe. Learn how you can apply their success with off-host processing to your operations for faster analytics and MXG reporting, all while retaining mainframe control of scheduling, execution and security.

About the Speaker: Paul Massengill has spent most of his 30-year IT career in Solutions Architecture and Data Analytics for Mainframe and Open Systems. He spent his early years in IT working for top-tier banks in Storage Administration, Capacity Planning and Performance Tuning. For the last two decades, Paul has specialized in using Data Analytics coupled with customer business forecasts to bring Enterprise solutions to many TOP 100 customers. He provided these solutions while working for companies such as Wachovia, Bank of America, StorageTek, SUN, Oracle, Hitachi and currently Luminex Software.


#3 – How to Apply Modeling and Optimization to Select the Appropriate Cloud Platform

Download presentation

April 20 @ 12:00 pm – 1:00 pm CST

Speaker: Dr. Boris Zibitsker, CEO of BEZNext

Abstract: Organizations want to take advantage of the flexibility and scalability of Cloud platforms. By migrating to the Cloud, they hope to develop and implement new applications faster and at lower cost. Amazon AWS, Microsoft Azure, Google, IBM, Oracle and other Cloud providers support different DBMSs such as Snowflake, Redshift, Teradata Vantage, and others. These platforms differ in architecture, in their mechanisms for allocating and managing resources, and in the sophistication of their DBMS optimizers, all of which affect performance, scalability and cost. As a result, the response time, CPU service time and number of I/Os for the same query, accessing a similar table, could be significantly different in the Cloud than On Prem.

In order to select the appropriate Cloud platform, we use modeling and optimization.

  • First, we perform a workload characterization of the On Prem Data Warehouse. Each Data Warehouse workload represents a specific line of business and includes the activity of many users concurrently generating simple and complex queries against different tables. Each workload has a different demand for resources and different Response Time and Throughput Service Level Goals (SLGs).
  • Second, we collect measurement data from standard TPC-DS benchmark tests performed on the AWS Vantage, Redshift and Snowflake Cloud platforms for different data set sizes and different numbers of concurrent users.
  • Third, we use the results of the workload characterization and the measurement data collected during the benchmarks to modify the BEZNext On Prem Closed Queueing model so it can model the individual Clouds.
  • Fourth, we use the model to take into account differences in concurrency, priorities and resource allocation across workloads. BEZNext Capacity Planning optimization algorithms incorporate a gradient search mechanism to find the AWS instance type and the minimum number of instances required to meet the SLGs for each workload. Publicly available pricing for the different AWS instance types is then used to predict the cost of supporting the workloads in the Cloud, month by month, over the next 12 months.
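To make the closed queueing model concrete, here is a minimal sketch using textbook exact Mean Value Analysis (MVA), the standard technique for single-class closed queueing networks. BEZNext's actual model is proprietary, so the station demands and think time below are purely hypothetical.

```python
# Illustrative exact MVA for a single-class closed queueing network.
# Service demands and think time are hypothetical, not measured values.

def mva(demands, n_users, think_time=0.0):
    """Exact Mean Value Analysis for a closed network of queueing stations.

    demands: service demand (seconds) per request at each station
    n_users: number of concurrent users circulating in the network
    think_time: average user think time between requests (seconds)
    Returns (response_time, throughput).
    """
    queue = [0.0] * len(demands)  # mean queue length at each station
    for n in range(1, n_users + 1):
        # Residence time at each station: demand inflated by the queue
        # an arriving request sees (arrival theorem).
        residence = [d * (1 + q) for d, q in zip(demands, queue)]
        resp = sum(residence)
        tput = n / (resp + think_time)          # Little's law on the network
        queue = [tput * r for r in residence]   # Little's law per station
    return resp, tput

# Hypothetical demands: 0.05 s of CPU and 0.03 s of disk per query,
# 50 concurrent users, 1 s average think time.
r, x = mva([0.05, 0.03], n_users=50, think_time=1.0)
print(f"response time {r:.3f} s, throughput {x:.1f} queries/s")
```

Running the same model with demands measured on each candidate Cloud platform is what lets a planner compare predicted response times and throughputs before committing to an instance type.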

About the Speaker: Dr. Boris Zibitsker is the CEO of BEZNext. His focus is on the development of performance assurance, performance engineering, dynamic performance management and long-term capacity planning software tools for big data, data warehouse and cloud applications. He is a member of the SPEC Big Data Research Group. Boris consults with many Fortune 500 companies, and he manages Capstone projects for graduate students in the MS in Analytics program at the University of Chicago. Boris is an Honorary Doctor of BGUIR, and for the last 5 years he was co-chairman of the Big Data Advanced Analytics Conference.


CANCELLED – October 2, 2019 SWCMG Austin Meeting – Cloud & Security

The Oct 2 meeting of SWCMG has been cancelled.

We are looking into rescheduling this meeting for November or early December.

Updates will be posted to our SWCMG Meetup.

Stay tuned…


August 27, 2019 Webinar – A New Model for Cloud Adoption

Presentation Materials:

Webinar Details:

  • Title: A New Model for Cloud Adoption
  • Presenter: Anthony “TJ” Johnson – Managing Partner and Founder of Cloud Sherpa Consulting
  • Date: Tuesday, August 27, 2019,  1:00pm – 2:00pm EDT
  • Register on CMG Event Calendar

You’re invited to a 1-hour Southwest CMG webinar. We’re excited to have a presentation by Anthony “TJ” Johnson, the Managing Partner and Founder of Cloud Sherpa Consulting.

Innovation is at the core of all technologists, and we want to understand and apply innovation to create value for our organizations. Drawing on almost 10 years of cloud experience across a wide variety of cloud estates consuming thousands of services, Tony Johnson has created a best-practices cloud adoption model called the “6 principles of a successful cloud strategy”.

Please register for this webinar to learn how cloud leaders establish a best practice model to overcome cloud adoption challenges with value governance.

By attending the webinar, you will learn about:

  • How to implement the proper strategy, process, technology and economics to build and manage your cloud operating model.
  • How to apply resource optimization and cost management capabilities that support the management of cloud.
  • How to manage a multi-cloud journey that enables automation, lifecycle management, and governance across cloud environments.

About our speaker: Tony Johnson (TJ) has over 25 years of technology experience, with roles ranging from CIO to data center manager, global speaker, and cloud expert, to name a few. He started his professional career in finance and migrated to technology. Currently TJ is the Managing Partner of Cloud Sherpa Consulting and supports organizations around the world with cloud adoption and the financial management of Cloud, helping them better understand how to use and buy cloud.


SWCMG April 17, 2019 – Data Lakes, Analytics, Serverless & z/OS Best Practices

“The Road to Hell is Paved with Average” – Ben Davies (Moviri)
A conversation that explores a real-world example of how reliance on averages of 15-second data slices failed to reveal actionable intelligence. We will encourage the use of other, often neglected metrics such as minimum, maximum, and 95th percentile. The example metric is minimum idle workers, an overlooked JVM metric that caused problems in our production environment for months but, once found and monitored effectively, eliminated dozens of support incidents a week.
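The point this talk makes, that minimum, maximum, and 95th percentile can reveal what an average hides, is easy to illustrate with synthetic data. The idle-worker values below are made up for the sketch, not taken from the talk.

```python
# Synthetic illustration: an average can hide the slices where a
# resource is starved. Values per 15-second slice are invented.

idle_workers = [40, 42, 41, 39, 40, 0, 0, 1, 38, 40, 41, 39]

avg = sum(idle_workers) / len(idle_workers)
lo, hi = min(idle_workers), max(idle_workers)

def percentile(data, p):
    """Nearest-rank percentile of a list of samples."""
    s = sorted(data)
    k = max(0, int(round(p / 100 * len(s))) - 1)
    return s[k]

p95 = percentile(idle_workers, 95)
print(f"avg={avg:.1f} min={lo} max={hi} p95={p95}")
# The healthy-looking average (~30) masks the slices where idle
# workers hit 0 -- exactly the condition that triggers incidents.
```

A dashboard that plots only the average of this series looks fine; a dashboard that plots the minimum flags the starvation immediately.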

“Data Lakes to AI/Analytics” – Anthony Maiello (VNC Technologies)
You have successfully built and executed a great production system that satisfies the current needs of your clients. Now you want to go to the next level and use data from production to deliver more value, insights, and potentially more revenue while avoiding costly mistakes. To realize the true benefits of data, a transformation is necessary that applies data to analytics, machine learning, and artificial intelligence. To accomplish this transformation, you are probably considering a data and analytics solution such as a data warehouse, data lake, and/or data marts, but you may not be sure which is the right one for your company to handle potentially multiple production data sources with a few unique requirements. This type of implementation can appear daunting as you consider the current state of your existing platform, the various data platform options available, the experts or resources needed to implement and support it, and the competing priorities of your company and clients. We will review the art of making the right decisions to successfully implement data, analytics and AI within your company.

“Best Practices in 2019 for z/OS Application Infrastructure Availability” – Brent Phillips & Jack Opgenorth (IntelliMagic)
2018 was a watershed year for how RMF (or CMF) and SMF data is used by mainframe performance and capacity teams. 2019 will solidify this momentum. Mainframe transaction volumes continue to grow for most sites, and continuous availability of the infrastructure to deliver required service levels is more important than ever in today’s 24 x 7 economy. Yet, the number of deep z/OS infrastructure performance experts continues to shrink due to the performance and capacity skills gap at the same time that the size and complexity of the z/OS environment is increasing. Fortunately, the maturity of white-box analytics (as well as the more common black-box analytics) not only compensates for the discrepancy between the requirements of the job and the time of the experts, but it also creates capabilities not previously possible.
This presentation will discuss these topics and some of the best practices now possible for ensuring efficient application availability from the z/OS infrastructure.

“MXG Genesis” – A conversation with Barry Merrill
This session was initially planned to be a Q&A session with Barry Merrill about the origins of MXG. However, it morphed into an entertaining, engaging and informative session where Barry walked us through his life and times.
Although there wasn’t a formal presentation, an article by Margaret Greenberg titled “Barry Merrill: A Class Act” from the July 2009 issue of MeasureIT will give you a glimpse of what we learned.

“Text Mining the CMG Archives” – Richard Gimarc
What can we learn about CMG by looking at the organization’s complete set of conference proceedings? In this presentation we look at the results of text mining on the papers and presentations from CMG conference proceedings from 1976 through 2019. The results show how our choice of words has changed over the years, author contributions and countries represented. The primary purpose of this analysis was to see what we can learn about CMG by simply looking at the contributions to our annual conference over the past 40+ years.

“Adventures with Charge Back and the Value of a Useful Consistent Lie” – Ben Davies (Moviri)
A conversation that explores our adventures with chargeback, and the value of “useful consistent lies”. Let’s start by stating that chargeback is easy. Trivial, even. It is simple math: what is your recovery target divided by the number of items expected to be ‘sold’? There you have it. That is it. You are done.
So, what is a useful consistent lie, and what is its value? It is everything around you and how you understand it. When your understanding is sufficient for the conversation, you use that understanding, that useful consistent lie, within the conversation; its value is that the conversation can now be conducted with a reasonable degree of mutual understanding. For example: what time is it, right now? Whatever your answer, it is not the most correct answer to the exclusion of all others. It IS a useful consistent lie. The time may be 12:18 in the afternoon by my digital watch, but you may have said it is quarter after noon, lunchtime, mid-day, or any of a host of other answers. If you want the most correct answer, ask a physicist and set aside a weekend for the nuances of the ‘right answer’. So how do these ideas fit together? In this session we attempt to explain that. By the end, we expect you will realize that chargeback is easy, except for the people. But you should try it anyway.
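The "simple math" the abstract alludes to, recovery target divided by expected units sold, fits in a few lines. The figures below are hypothetical, chosen only to show the arithmetic.

```python
# The chargeback rate formula from the abstract above: unit rate equals
# the recovery target divided by expected units sold. Figures are hypothetical.

recovery_target = 1_200_000.0   # annual cost to recover, in dollars
expected_units = 48_000         # e.g. VM-months expected to be "sold"

unit_rate = recovery_target / expected_units
print(f"charge ${unit_rate:.2f} per unit")  # -> charge $25.00 per unit
```

The math really is that easy; as the session argues, the hard part is getting the people involved to agree on the numerator and the denominator.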

“Is Capacity Planning Required for Serverless?” – Richard Gimarc & Amy Spellmann
This presentation takes a close look at serverless (FaaS) and capacity planning. On the web, you can find multiple articles that claim one of the reasons for moving to serverless is that you no longer have to perform capacity planning. Instead of you performing capacity planning, it is now the FaaS service provider’s responsibility. After a short introduction to Function-as-a-Service (FaaS), we take a closer look into the question of capacity planning. We view FaaS as a new and evolutionary application deployment platform with its own set of metrics to monitor, track and analyze and a new pricing model. In order to maintain a complete and comprehensive end-to-end view of your application’s footprint (resource usage and cost), we make the case that FaaS should definitely be included in your capacity planning process.
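As a small sketch of the kind of FaaS cost tracking the abstract argues belongs in capacity planning, the function below turns the key FaaS metrics (invocations, duration, memory) into a monthly cost estimate. The pricing constants are illustrative, in the style of public FaaS price lists, and should not be read as any provider's actual rates.

```python
# Sketch: estimate a function's monthly FaaS cost from its metrics.
# Pricing constants are illustrative, not quoted from any provider.

PRICE_PER_MILLION_INVOCATIONS = 0.20   # dollars, illustrative
PRICE_PER_GB_SECOND = 0.0000166667     # dollars, illustrative

def monthly_faas_cost(invocations, avg_duration_ms, memory_mb):
    """Estimate monthly cost for one function from its key FaaS metrics."""
    request_cost = invocations / 1_000_000 * PRICE_PER_MILLION_INVOCATIONS
    # Compute billed in GB-seconds: invocations x duration x allocated memory.
    gb_seconds = invocations * (avg_duration_ms / 1000) * (memory_mb / 1024)
    return request_cost + gb_seconds * PRICE_PER_GB_SECOND

# Hypothetical function: 10M invocations/month, 120 ms average, 512 MB.
cost = monthly_faas_cost(10_000_000, 120, 512)
print(f"estimated monthly cost: ${cost:.2f}")
```

Tracking these same three metrics per function over time is exactly the end-to-end footprint view (resource usage and cost) the presentation says should stay inside the capacity planning process.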


April 17, 2019 Meeting – Presentations


February 6, 2019 Webinar – Presentations


September 19, 2018 Meeting – Presentations


February 21, 2018 Meeting – Presentations

