IBM Compilers: Faster Mainframe Services, No New Business Risks
By Wes Simonds
In my last blog post, I said that one very common idea underlying best practices today is this: ‘faster is better.’ There are different ways to get faster, though. And some are certainly more appealing, in a given context, than others.
For instance, consider the context of IT development. This is a world of business logic, of algorithms rendered in specific code, and of the software development environments in which the first is alchemically transmuted into the second to create software-driven services.
Faster software-driven services mean faster (and more) business transactions. This is certainly better than slower (and fewer) business transactions.
Now: What's the most efficient way to make your software faster?
If you're an IT ops guy, you probably see the world through the lens of technology infrastructures. So your response would be something like this:
‘We need to buy a faster host. Or, even better, redeploy the app on a grid or cloud architecture. That means we need to get the IT dev guys to rewrite the code so the app's work can be distributed in discrete chunks across that architecture for parallel processing. At that point, to get more speed, we can just add more physical hosts and virtual servers, as well as other resources like virtual storage or network bandwidth as required. Easy as pie.’
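To make that proposal concrete, here's a minimal, purely illustrative C sketch (the workload, names and chunking scheme are invented for this post, not drawn from any real system) of what ‘distributed in discrete chunks ... for parallel processing’ looks like at the code level:

```c
#include <pthread.h>
#include <stdio.h>

/* Split a serial computation into discrete chunks so each chunk can
 * run on its own thread -- or, scaled up, its own host. */
#define N      1000000
#define CHUNKS 4

static double data[N];
static double partial[CHUNKS];

static void *sum_chunk(void *arg) {
    long c  = (long)arg;
    long lo = c * (N / CHUNKS), hi = lo + (N / CHUNKS);
    double s = 0.0;
    for (long i = lo; i < hi; i++)
        s += data[i];
    partial[c] = s;                  /* each chunk reports its result */
    return NULL;
}

int main(void) {
    pthread_t t[CHUNKS];
    for (long i = 0; i < N; i++) data[i] = 1.0;
    for (long c = 0; c < CHUNKS; c++)
        pthread_create(&t[c], NULL, sum_chunk, (void *)c);
    double total = 0.0;
    for (long c = 0; c < CHUNKS; c++) {
        pthread_join(t[c], NULL);    /* wait for each chunk to finish */
        total += partial[c];
    }
    printf("total = %f\n", total);
    return 0;
}
```

Every line of that is new code somebody has to write, test and maintain.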
But if you're an IT dev guy, you probably got a headache reading all of that, and you see IT ops guys as the enemy. (I'm kidding. Everyone knows IT management is the enemy.)
The idea of completely reworking and redeploying mission-critical applications along these lines sounds slow, risky and impractical. It's difficult enough doing the thing the organization already asked you to do: add new software capabilities to an existing codebase that was written by completely different guys, years ago, for completely different hardware.
As far as performance optimization of the whole codebase goes? Well, every neat little trick you might add to the code to speed it up introduces the possibility that the app will now break unexpectedly. And that is a totally unacceptable outcome, because your organization depends on the software to create value for customers and thus miraculously make headway even in the current gloomy business climate.
So to you, the IT dev guy, what is the best way to speed up mission-critical software? Ideally, it would involve:
(a) no new coding or code-tweaking required
(b) no new risk that the code will break (because of the clever tweaks you added to speed it up)
(c) no catastrophic service downtime (that creates lots of media attention and generates an estimated $1 bazillion in lost revenue)
(d) no pink slips allocated to IT dev guys, due to the above
(e) no new hardware required
That sounds pretty dreamy. Is it actually possible?
Recompile your code, get faster software-driven services
Turns out that it is. I was fortunate to be able to talk to Roland Koo, Product Manager for Compilers at the IBM Software Solutions Toronto Lab, and he gave me the inside story.
‘Upgrade your compilers,’ said Koo. ‘Move to better compilers, and all of that can happen. The compiler's job is to make life easy for programmers, so they can focus on getting the business logic right.’
How do compilers deliver on this value proposition? Just consider what they do -- and how they work. After a programmer writes up business logic in code (using a specific language, like C++ or COBOL), the compiler then cruises through the code, translating it into machine code (processor instructions) for a specific processor. This machine code, in turn, is what actually runs on the IT production servers (or mainframe).
And because compilers are not all created equal, some do a much better job than others at generating fast machine code. The smarter the compiler, the more efficient the machine code it generates -- which translates directly into faster software-driven services.
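To see what that means in practice, here's a tiny, hypothetical C fragment (the function is invented for this post, and the optimizations named are typical of modern optimizing compilers generally, not a description of any one IBM product):

```c
/* A programmer writes straightforward business logic like this: */
double total_interest(const double *balances, int n, double rate) {
    double total = 0.0;
    for (int i = 0; i < n; i++)
        total += balances[i] * rate;   /* interest per account */
    return total;
}
/* A basic compiler translates the loop literally: one multiply and one
 * add per iteration, one iteration at a time. A smarter optimizing
 * compiler can -- with no change to the source -- unroll the loop, keep
 * hot values in registers and exploit newer processor instructions such
 * as fused multiply-add. Same business logic, faster machine code. */
```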
In this sense, then, compilers are much more than just one more technical element of software development. They are the most direct liaison between your software development team, which speaks one language, and the hardware your applications run on, which speaks another. So by investing in superior compilers, organizations can get both superior software and a superior business outcome from it.
Koo put matters even more directly than that: ‘You cannot maximize your return on investment unless you stay current with compiler technology.’
I have to agree with him. Note how quickly organizations can get that improved ROI: simply install the new compilers, recompile the code as-is and deploy the new applications the compiler generates. No risky code-tweaking is required. No new hardware is required. No new business risk of service downtime is introduced, because the source code itself wasn't changed -- only the efficiency of the machine code the compiler generates from it.
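As a rough sketch of how light-touch that workflow is, consider the fragment below: the source is byte-for-byte identical in both builds, and only the compiler level and its options change. (The file, figures and option spellings are hypothetical placeholders -- IBM's z/OS compilers document their real ARCH and OPTIMIZE options, so check the manuals for the exact values on your platform.)

```c
/* payroll_calc.c -- unchanged business logic, recompiled as-is.
 *
 * Hypothetical build steps (option values are placeholders):
 *   Before: old compiler level, e.g. OPTIMIZE(2), ARCH(7)
 *             -> machine code tuned for older processors
 *   After:  new compiler level, e.g. OPTIMIZE(3), ARCH(8)
 *             -> machine code tuned for System z10
 * Nothing below this line changes between the two builds. */
#include <stdio.h>

double net_pay(double gross, double tax_rate) {
    return gross * (1.0 - tax_rate);
}

int main(void) {
    printf("net: %.2f\n", net_pay(5000.00, 0.30));
    return 0;
}
```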
New IBM compilers offer accelerated performance with no hardware upgrade required
Look at how that applies in the case of IBM System z compilers, for instance. System z mainframes run some of the most mission-critical services in the business world -- customer-facing online banking services, for instance. Better performance is always needed for such services, yet customer tolerance for downtime is practically zero.
So banks need a way to accelerate services without introducing new risk. That's exactly what IBM's new System z compilers, for COBOL, PL/I and C/C++, can deliver -- and not just for banking, but for any industry in which mainframe-based services face the same context.
Koo emphasized that no new hardware needs to be purchased. ‘You do not need to upgrade hardware to upgrade compilers,’ he said. ‘In fact, upgrading compilers is a cost-effective way to get more out of existing hardware technology. You can take advantage of new improvements in both optimization and programmer productivity.’
In that second category, programmer productivity, another point to consider is that IBM's compiler technology leverages IBM's strengths in related areas: development tools, middleware, databases (like DB2), transaction systems (like CICS and IMS) and modern application development tools such as IBM Rational Developer for System z and Rational Team Concert for Enterprise Platforms, which together provide a high-productivity environment for developing business-critical applications. And because IBM offers them all, it can optimize its compilers in ways no competitor can, delivering even better performance for code that involves IBM middleware via integrated, pre-processor support.
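For a feel of what that integrated pre-processor support covers, here's a hypothetical fragment of C with embedded DB2 SQL (the table and host-variable names are invented, and a real DB2 environment is needed to build and run it). With integrated support, the compiler understands the EXEC SQL statements directly, rather than requiring a separate stand-alone precompile pass over the source first:

```c
#include <stdio.h>

EXEC SQL INCLUDE SQLCA;              /* DB2 communication area */

int main(void) {
    EXEC SQL BEGIN DECLARE SECTION;  /* host variables shared with SQL */
    double balance;
    long   acct_id;
    EXEC SQL END DECLARE SECTION;

    acct_id = 1001;
    EXEC SQL SELECT BALANCE INTO :balance
             FROM ACCOUNTS
             WHERE ACCT_ID = :acct_id;

    printf("balance: %.2f\n", balance);
    return 0;
}
```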
Finally, while hardware upgrades aren't essential to get impressive, measurable business benefits from a new compiler, a new hardware/new compiler combination is unquestionably a great way to go, given the option.
In fact, IBM's own internal tests have shown up to a 60 percent performance improvement on zEnterprise (the eleventh generation of System z mainframes) for C/C++ applications, compared to running the same applications on System z10. And that's probably not too far from what organizations with IBM mainframe-driven services can expect to get as well.
How are you accelerating mainframe applications these days?
Register for this webcast to see what IBM’s latest compilers can do for you
Connect to the IBM Rational Café Communities
Learn more about IBM Software for System z
Read about System z components (including compilers)
Read a paper on The Economic Impact of Mainframe Computing
About the author
Guest blogger Wes Simonds worked in IT for seven years before becoming a technology writer on topics including virtualization, cloud computing and service management. He lives in sunny Austin, Texas and believes Mexican food should always be served with queso.