The Re-architecting of Shadow Version 7

Posted on September 30, 2009

In this podcast Gregg Willhoit explains the best practices associated with benchmarking zIIP offload. The podcast runs for 4:40. You can listen to it by clicking on the following link: http://blogs.datadirect.com/media/GreggWillhoit_BenchmarksBestPrac_1.MP3

Gregg Willhoit:

Basically, once we completed the re-architecting of Shadow Version 7, it was incumbent upon us to demonstrate the relative performance and TCO gains versus the previous version of Shadow, which was Version 6. The main difference between the two versions in terms of TCO was the zIIP enablement of Shadow. When we compared the two products in our environment, we did completely isolated runs using a common benchmark driver – a web services driver tool – which simulated quite a bit of load. We ensured, for example, that the LPAR the load tests were run on was not shared: it had dedicated resources, including a zIIP dedicated to it. So we tried to eliminate all the variability that we possibly could to make sure that the benchmarks were repeatable. We ran several benchmarks, and in our environment, with our techniques, they came up very repeatable.
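As a rough illustration of what "repeatable" means here – this is a hedged sketch, not DataDirect's actual harness, whose details the podcast doesn't give – the following C fragment runs an isolated benchmark several times and reports the spread of the results. A small coefficient of variation across runs is the property the dedicated LPAR and dedicated zIIP are meant to buy.

```c
#include <math.h>
#include <stdio.h>

static double run_benchmark(void)
{
    /* Placeholder: in the setup described above this would drive the
     * web services load tool against the dedicated LPAR and return a
     * measured time for the run. */
    return 0.0;
}

int main(void)
{
    enum { RUNS = 5 };
    double sample[RUNS], mean = 0.0, var = 0.0;

    for (int i = 0; i < RUNS; i++) {
        sample[i] = run_benchmark();
        mean += sample[i];
    }
    mean /= RUNS;

    for (int i = 0; i < RUNS; i++)
        var += (sample[i] - mean) * (sample[i] - mean);
    var /= RUNS;

    /* A small coefficient of variation (a few percent or less) is what
     * "very repeatable" means in practice. */
    printf("mean = %.3f, cv = %.2f%%\n",
           mean, mean > 0.0 ? 100.0 * sqrt(var) / mean : 0.0);
    return 0;
}
```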

As with any benchmark testing, there has to be an agreed-upon method for load testing and measuring – probably the most important aspect of a benchmark. Once you've achieved the ability to isolate the workload from anything which may impact the repeatability of the benchmark, you then have the capability to measure consistently. We experimented with a plethora of options: RMF Monitor I, Monitor II, and Monitor III, SMF type 30 records, and also our own numbers, which we gather in our own monitor that is part of the Shadow product. Our monitor basically allows us to measure zIIP efficiency for all the areas – not just web services, but SQL and Event Publishing as well. This monitor aggregates metrics gathered by the threads executing on behalf of a Web Service, SQL, or Event based thread. These threads execute IWMEQTME calls as well as TIMEUSED calls to gather zIIP qualified and zIIP eligible time (zIIP eligible being the sum of zIIP time and zIIP-on-CP time). We chose to use zIIP eligible as opposed to zIIP qualified time due to a somewhat arcane issue we discovered, where zIIP eligible can be greater than zIIP qualified under some circumstances. The gist of the issue is that what actually runs on the zIIP can be greater than what is reported as zIIP qualified. The difference between the two metrics is not large, but we chose to go with zIIP eligible when possible nonetheless.
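Below is a minimal sketch of the per-thread aggregation just described. The type and field names are illustrative assumptions, not the Shadow monitor's internals; on z/OS the time values would come from the IWMEQTME and TIMEUSED services named above rather than from these placeholder fields.

```c
#include <stdint.h>

/* Illustrative per-thread sample. These names are assumptions for the
 * sketch, not the Shadow monitor's actual data structures. */
typedef struct {
    uint64_t ziip_time;        /* time actually consumed on a zIIP      */
    uint64_t ziip_on_cp_time;  /* zIIP-eligible time that ran on a GP   */
    uint64_t total_cpu_time;   /* total CPU time for the thread         */
} thread_metrics;

typedef struct {
    uint64_t ziip_eligible;    /* zIIP time plus zIIP-on-CP time        */
    uint64_t total_cpu;        /* total CPU across all sampled threads  */
} monitor_totals;

/* Fold one Web Service, SQL, or Event thread's sample into the running
 * totals, accumulating zIIP-eligible time (zIIP plus zIIP-on-CP) rather
 * than zIIP-qualified time, mirroring the choice explained above. */
static void aggregate(monitor_totals *t, const thread_metrics *m)
{
    t->ziip_eligible += m->ziip_time + m->ziip_on_cp_time;
    t->total_cpu     += m->total_cpu_time;
}
```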

Interestingly enough, in the early days of this project we discovered that the monitoring and measurement of the zIIP wasn't an exact science, especially with RMF Monitor III. I think there were some measurement issues with all of the products we were using. There were various fixes that we had to install to get some of the measurements done correctly, but we ultimately decided that the gold standard for our project was going to be the SMF type 30 record. We then validated our own measurement numbers against the SMF type 30 records. Once we were satisfied through validation that our numbers agreed with the SMF type 30 records, we were comfortable publishing numbers based on either the SMF type 30 records or our own. But again, we treated the SMF type 30 records as the gold standard with regard to measuring zIIP efficiency and zIIP offload.
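A hedged sketch of that validation step follows: the monitor's figure is accepted only if it agrees with the SMF type 30 total within some tolerance. The function name and the idea of a fixed tolerance are assumptions for illustration; the podcast doesn't state how close the numbers had to be.

```c
#include <math.h>
#include <stdbool.h>

/* Accept the monitor's CPU figure only if it agrees with the SMF type
 * 30 total within a relative tolerance (e.g. 0.02 for 2%). Both the
 * name and the tolerance value are assumptions, not from the source. */
static bool agrees_with_smf30(double monitor_cpu, double smf30_cpu,
                              double tolerance)
{
    if (smf30_cpu <= 0.0)
        return false;
    return fabs(monitor_cpu - smf30_cpu) / smf30_cpu <= tolerance;
}
```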

When we calculate the zIIP offloads – our percentages – we actually use both the time that actually runs on the zIIP and the time that the product is zIIP eligible but the execution was diverted to a General Purpose Processor. The reason we did that is that in the environment we had at the time we ran the tests, we had one zIIP and two General Purpose Processors. So it is quite possible that dispatchable units of work would not be able to be dispatched on the zIIP due to the ratio of the number of zIIPs to General Purpose Processors. The other reason we decided on this particular methodology, or this particular formula, is that if the product is zIIP eligible – and some of the work is being dispatched to a GP – that's really a configuration issue. So our thought was basically: if we're going to report the zIIP eligibility of Shadow, we'll include both the actual time on the zIIP and the time that it could have executed on the zIIP but didn't because the zIIP was busy. By using that technique, or that formula, we came up with a repeatable methodology – one that was not subject to the vagaries of hardware configuration permutations.
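Expressed as code, the formula just described looks like the sketch below. The function name and the sample numbers are hypothetical, but the arithmetic is the offload calculation explained above: zIIP-eligible time (actual zIIP time plus eligible time diverted to a GP) divided by total CPU time.

```c
#include <stdio.h>

/* Offload percentage as described above: zIIP-eligible time (actual
 * zIIP time plus zIIP-eligible time that was dispatched on a General
 * Purpose Processor) over total CPU time. */
static double ziip_offload_pct(double ziip_time, double ziip_on_cp_time,
                               double total_cpu_time)
{
    if (total_cpu_time <= 0.0)
        return 0.0;
    return 100.0 * (ziip_time + ziip_on_cp_time) / total_cpu_time;
}

int main(void)
{
    /* Hypothetical numbers: 3.2s ran on the zIIP, 0.6s of eligible work
     * was diverted to a GP while the single zIIP was busy, 4.0s total. */
    printf("offload = %.1f%%\n", ziip_offload_pct(3.2, 0.6, 4.0));
    return 0;
}
```

With these sample numbers the sketch prints `offload = 95.0%`; note that the diverted 0.6s counts toward offload, which is what makes the figure insensitive to the zIIP-to-GP ratio of a particular configuration.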

So one of the interesting things that came out of this particular benchmark performance analysis of DataDirect Shadow Version 7 vs. Version 6 was that during this process we ran into so many measurement and measurement-gathering anomalies that we actually contemplated doing a skit – kind of like the "Who's on First?" skit with Abbott and Costello, but from a geeky perspective, about measuring zIIPs. Because honestly, in configurations where the zIIPs run at a faster speed than the General Purpose Processors – that is, if the General Purpose Processors are kneecapped – some of the measurement methodologies that were in place just weren't quite up to accurate CPU measurement and gathering. In fact, we found that different monitors were computing vastly different zIIP offload numbers, which is why we decided to use the SMF Type 30.

Gregg Willhoit
