Bringing Batch and Agile Together

WHITE PAPER:
Bringing Batch
and Agile Together
Accelerating the Time to Quality
for Mainframe Applications
SOFTWARE ENGINEERING OF AMERICA
Abstract
Bringing batch and agile together on a mainframe sounds crazy, sort of like mixing oil and water. Sure, oil and water won’t explode, but they never really mix either. In the following paper you will see that the combination is not as far-fetched as it may first seem. You’ll also find that significant companies are already mixing them, with successful results to show for the effort. This is no fluke; going forward, expect more mainframe shops to follow suit, probably including yours.
Yet increasingly, the distinctions between the mainframe and distributed environments are being
blurred through Linux and Java as well as cloud and mobile workloads. Did you ever think you would run
Java and Linux on your mainframe and use them to support mobile workloads through your mainframe
data center?
These two worlds—batch and agile—not only must interact but must do so productively, and
seamlessly integrate and interoperate to produce a streamlined development, test, and deployment
process. Compounding the challenge, organizations must do it fast. The business can no longer wait
for six-month or nine-month release cycles to introduce new capabilities. Annual mainframe software
releases are not acceptable anymore. The data center must be prepared to respond to change in
weeks, not months.
The way mainframe data centers can respond faster is through DevOps, an approach that combines
development, testing, and deployment in one fluid process that runs in an almost continuous loop.
With DevOps, even as the latest enhancement is being deployed work already is starting on the next
enhancement.
Does it work? Yes. Nationwide Insurance improved code quality by 50%, reduced end-user downtime by 70%, and increased on-time delivery by 90%. A global financial services firm running a large mainframe operation boosted its batch success rate from 99.90% in 2014 to a record 99.92% in the first six months of 2015 and, thanks to DevOps, expects to improve on that further. DevOps is now gaining acceptance among both mainframe and distributed shops.
The Value of Bringing Batch and Agile Together
Remember the popular book from a few years back, Men Are from Mars, Women Are from Venus? It used to feel as if agile development and mainframe batch processing came from similarly alien worlds. Yet increasingly, as IBM blurs the distinctions between the mainframe and distributed environments through Linux and Java as well as cloud and mobile workloads, these two worlds—agile and batch—must interact.
More than just interact, they must integrate and interoperate productively and seamlessly to produce a streamlined development, test, and deployment process. Compounding the challenge, they must do it fast. Organizations can no longer wait for six-month or nine-month release cycles to introduce new capabilities. If capabilities cannot be introduced in just a few weeks, opportunities and revenue can be lost. Agile and batch teams have no choice; they must work together.
Fortunately, the agile-batch challenges are not as formidable as they once appeared. There are tools, business and operational processes, and techniques around change management that are proving effective. For instance, automated testing tools like SEA JCLplus+ and SEA XREFplus+/DBr can streamline the testing process while ensuring complete and accurate results. Furthermore, numerous organizations are reporting success at speeding up the development, test, and release cycle.
Agile development, or DevOps, refers to an approach to software development that combines development, testing, and deployment in a continuous loop for the purpose of delivering tested code and subsequent enhancements quickly and repeatedly. It relies on regular communication and collaboration between development, testing, and operations. Because the loop is continuous, no sooner is the latest enhancement deployed than work begins on the next.
For example, Nationwide Insurance improved code quality by 50%, reduced end-user downtime by 70%, and increased on-time delivery by 90%. Meanwhile, Laminar Medica reduced new product development time and costs by 25%, contributing to a 10% increase in competitive wins. A global financial services firm boosted its batch success rate from 99.90% in 2014 to a record 99.92% in the first six months of 2015 and expects to improve on that further. Successes like these are part of a growing trend called DevOps that is gaining acceptance among both mainframe and distributed shops.
Mainframe and Distributed
Bringing agile and batch together almost sounds like an oxymoron or the punchline of a bad computer-systems joke. The idea, however, is quite serious and increasingly real in a growing number of mainframe shops. Welcome to the world of hybrid computing, where systems once considered disparate and incompatible are being brought together, often on the same platform.
And frequently that platform is the mainframe. The latest generations of mainframes, starting with the z10, have been fully hybrid-capable platforms, able to run mixed workloads concurrently, some of which previously belonged only in the distributed world. Today, a mainframe shop with the latest z13 can run traditional z/OS COBOL workloads right alongside Java and Linux workloads. Those with a zBX extension cabinet can even run Windows workloads under the same unified mainframe management console.

Batch processing is a familiar process in mainframe data centers. In case you need a reminder, batch processes are set up as jobs intended to run through completion without human interaction. All input parameters are predefined through scripts, command-line arguments, control files, or job control language. As input data are collected into batches, or sets of records, each batch can be processed as a unit. The results of one batch can be reused for computation in another, a situation that gives rise to dependencies.
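Those batch dependencies, where one job’s output becomes another’s input, form a small graph, and an execution order must honor every edge. The following sketch shows one way to derive such an order; the job names and dependency sets are invented for illustration only:

```python
from graphlib import TopologicalSorter

# Hypothetical nightly batch: each job names the jobs whose output it consumes.
# Job names and dependencies are invented for illustration.
dependencies = {
    "EXTRACT":  set(),                     # pull the day's transactions
    "VALIDATE": {"EXTRACT"},               # reuses EXTRACT's output batch
    "POSTINGS": {"VALIDATE"},
    "REPORTS":  {"VALIDATE", "POSTINGS"},  # needs the results of two earlier jobs
}

def run_order(deps):
    """Return an execution order that honors every batch dependency."""
    return list(TopologicalSorter(deps).static_order())

print(run_order(dependencies))  # ['EXTRACT', 'VALIDATE', 'POSTINGS', 'REPORTS']
```

A real workload scheduler does far more (calendars, resources, restart logic), but the ordering problem at its core is exactly this one.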
Suddenly, the idea of bringing batch and agile computing together on the mainframe platform doesn’t seem so far-fetched. It certainly made sense to Marriott, the global hospitality company, which had specific requirements in mind when it developed its current IT infrastructure. A longtime mainframe shop running a z196 at the time, Marriott insisted on leveraging its existing IT investments, including software solutions, while also adopting new processes and more advanced technologies.
Similarly, a global mainframe-based financial services firm needed to update its processes. It began by automating its JCL syntax validation process using a combination of Serena ChangeMan (change management software), SEA JCLplus+, and SEA XREFplus+/DBr for its North American batch mainframe environments. The point: mainframe data centers everywhere are adopting agile processes in one form or another.
Hybrid computing is forcing mainframe data centers to bring agile and batch computing together
whether they like it or not. This often happens initially as part of a DevOps initiative. And at first, nobody
seems to like it. But the benefits of DevOps on the mainframe are so compelling that everybody—
mainframe and distributed teams—quickly figure out how to make it work and get along.
Blurring Batch and Agile
This quickly results in a blurring of the distinctions between mainframe and distributed computing, at least as far as data center operations go. Now COBOL, Java, z/OS, Linux, and more are mingling on one platform; the distinctions between systems of record, systems of engagement, and systems of decision/insight are blurring as well. The results of batch transactions running in COBOL, for instance, can feed a Java-based business intelligence application running on Linux. The platform under it all, of course, is the mainframe, maybe a z196 or zEC12 or even a new z13.
At this point, whether the data originated on the mainframe or a distributed system and how it is being
deployed to end users becomes irrelevant. It might, in fact, be deployed in the cloud.
It doesn’t matter, because line-of-business (LOB) managers and the business demand efficient computing operations. If that means converging and rationalizing multiple platforms, so be it. Everyone involved, mainframe or distributed, IT or operations, needs to get the message: the cost of system failures and downtime is simply too great, and the value of early problem detection and defect elimination is equally great. The benefits of detecting and preventing failures early go straight to the bottom line. This calls for a streamlined development, test, and deployment process. LOB users will no longer tolerate 12-month system refresh cycles; they risk missing short-lived windows of opportunity if the data center can’t update systems fast enough. In short, this calls for batch and agile to work together, otherwise known as DevOps.
In many ways mainframe application and system development has become the business due to the
critical role software now plays in defining and differentiating the business. Software, in effect, has
become a company’s product development. Almost no new enterprise products ship today without
a significant software component, if only to support increasingly varied go-to-market strategies that
now invariably include mobile and social. It is this software component that allows the organization to
differentiate itself and meet customer demands. As such, batch and agile together become even more
important as business cycles speed up.
Not too long ago, when the IT organization released new software every year or even every 18 months, customers were satisfied. Today cycles are much shorter. Just see how often mobile phone companies
release new models and upgrades. If they wait six months they risk falling behind, losing sales because
they don’t have the very latest features. In the same way data centers can no longer take a year to
deliver the latest fix; LOB managers and customers won’t wait. There are too many competitors waiting
for any chance to seize the competitive advantage. Slow system refreshes and software updates just
play into these competitors’ hands.
Also, whatever happens in the mobile market affects mainframe data centers. Companies in every
industry segment are deploying new mobile apps and then almost immediately updating them. For
many of these mobile apps the mainframe is the back end, if not the middleware too. Each mobile
request for information or to make a purchase or to schedule something triggers numerous back end
processes that quickly make their way to the mainframe. It has gotten to the point where IBM must discount mobile processing so as not to distort monthly license charges. Similarly, distributed online systems are continually interacting with the mainframe back end. Even batch processes must meet increasingly short windows as real-time distributed and mobile processing encroach on them. That’s why even batch needs agile processes.
Enter DevOps
DevOps, a conflation of development and operations, refers to a software development approach characterized by communication, collaboration, integration, automation, and cooperation among development, testing, deployment, and operations professionals, other IT and operational technology staff, and LOB participants. The approach recognizes the interdependence of software development, quality
assurance (QA), IT operations, and deployment. The goal of DevOps is to streamline the process of
developing and deploying quality software fast. It aims to speed time to market by eliminating the
latency incurred as a project moves from requirements to development to testing and finally to
deployment. To achieve its goal DevOps entails frequent communication between all parties, short
iterative steps, frequent and early automated testing, and fast deployment of generally defect-free
code. Frequent testing is intended to identify potential defects early and correct them fast. Ultimately
DevOps results in measurable time-to-market and quality improvements that positively impact the
organization’s bottom line.
DevOps as a term has been evolving over the last few years; if you want some interesting perspective, see what your favorite search engine returns for it. At its most basic, DevOps is the process of bringing Development and Operations together to share processes and procedures. The goal is to reduce the risk of change and improve the speed of deployment. This requires true collaboration across all the responsible groups, from the business analyst through Development, Test, Quality Assurance, and Operations. DevOps is about making sure an application is deployed in production the same way it was deployed in test and development. It includes the radical notion of applying software management to the scripts and processes Operations uses for deployment and monitoring in production, and of bringing Operations’ monitoring capabilities into development and test to gain an early understanding of operational behavior, including performance and the demands on human and automated procedures. DevOps principles allow more frequent deployments (see Continuous Integration) and a more effective feedback cycle.
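The principle that production is deployed exactly the way test and development were can be made concrete by driving every deployment through one parameterized routine. The environment names, LPAR names, and steps below are invented for illustration, a sketch rather than any product’s actual process:

```python
# One deployment routine, parameterized by environment, so production is
# deployed with the same scripted steps as test and development.
# Environment names, LPAR names, and steps are hypothetical.
ENVIRONMENTS = {
    "dev":  {"lpar": "DEVLPAR",  "approval_required": False},
    "test": {"lpar": "TESTLPAR", "approval_required": False},
    "prod": {"lpar": "PRDLPAR",  "approval_required": True},
}

def deploy(release, env):
    """Run the same step sequence everywhere; only parameters vary."""
    cfg = ENVIRONMENTS[env]
    steps = []
    if cfg["approval_required"]:
        steps.append(f"record change approval for {release}")
    steps += [
        f"stage {release} to {cfg['lpar']}",
        f"run smoke tests for {release} on {cfg['lpar']}",
        f"activate {release} on {cfg['lpar']}",
    ]
    return steps

# The sequences differ only by parameters, never by hand-edited scripts.
print(deploy("REL-2015.07", "test"))
print(deploy("REL-2015.07", "prod"))
```

Because the steps live in one place under software management, a change to the deployment process is itself versioned, tested, and reviewed like any other code change.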
DevOps on the Mainframe
• Develop and test against production-like systems
• Deploy with repeatable automation processes (Agility)
• Monitor operational quality (Reporting and Metrics)
• Manage Communication (Feedback)
DevOps on the mainframe focuses on develop and test, deploy and automate, monitor and measure, and manage communications.
DevOps takes advantage of the latest agile development and collaboration techniques involving
high degrees of cross department and cross function communication and cooperation. These
techniques around communication, collaboration, and testing enable DevOps teams to deploy
near defect-free systems fast. Along the way, DevOps promotes ways of thinking about communication and collaboration between LOB units, development, QA, and IT operations that are generally absent in traditionally organized enterprises.
This happens when batch and distributed groups work together as a unified team performing the
day-to-day work each previously did separately. They focus on a shared goal of continuous quality
delivery with zero defects. In the process, DevOps drives a number of business benefits:
• Responsive systems delivered fast
• Short time to quality
• Early elimination of defects
• Quick time to market, speeding the realization of revenue
• Productivity gains for both dev and ops
Implementing Mainframe DevOps
It doesn’t take much to get DevOps going. Begin with a manager, usually a senior LOB manager, who is ready to round up development, operations, and QA people to act as a joint cooperative team. With luck, the group will quickly get over what has really been nothing more than a trivial clash of cultures and bickering to focus on what really is important: delivering quality, defect-free code fast.
DevOps is not a one-and-done kind of project. It is an iterative process that begins again as soon as the latest deliverable is deployed, as with any continuous improvement effort. By the way, an often under-appreciated benefit of the DevOps process is fully documented code, a byproduct of using automated tools.
At this point, essentially every major mainframe data center uses some form of automated testing tool.
Although automated testing has been evolving for years, the focus has shifted over time. Initially JCL
testing, for example, focused on the cost of failure in production. While the cost of failure continues to
be a useful metric, more attention is being given to defect elimination and delivering defect-free code.
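To make the simplest layer of automated JCL checking concrete, here is a toy syntax lint in Python. The rules and messages are invented for illustration only; real tools such as SEA JCLplus+ perform far deeper standards and runtime validation:

```python
import re

def lint_jcl(deck):
    """Return a list of (line_no, message) findings for a few basic JCL rules.

    Illustrative sketch only: checks for a JOB statement, the // prefix,
    and the 80-character record limit.
    """
    findings = []
    lines = deck.splitlines()
    if not any(re.match(r"^//\S+\s+JOB\b", ln) for ln in lines):
        findings.append((0, "no JOB statement found"))
    for no, ln in enumerate(lines, start=1):
        if ln and not ln.startswith("//"):
            findings.append((no, "statement does not begin in column 1 with //"))
        elif len(ln) > 80:
            findings.append((no, "record longer than 80 characters"))
    return findings

deck = """\
//PAYROLL  JOB (ACCT),'NIGHTLY',CLASS=A
//STEP1    EXEC PGM=IEFBR14
 //BAD     DD DUMMY
"""
for no, msg in lint_jcl(deck):
    print(no, msg)  # flags line 3, which begins with a blank
```

Catching a misplaced statement like this at edit time, rather than when the job fails in the nightly batch window, is precisely the early defect elimination the text describes.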
The commitment to continuous improvement through automation is paramount for success in DevOps.
Well-deployed process automation and the reporting that is generated from it are key elements in
verification steps that enable compressing release cycles. While complete perfection is likely not
achievable, one of the largest brokerage houses in the world is looking to leverage process automation
and verification reporting to drive the ultimate goal of a self-service release-to-production model.
Confidence in automated testing continues to be a key data center concern. Data centers across key mainframe segments, especially banking, insurance, brokerage, government, and retail, have shifted the conversation from missed defects, where testing tools failed to detect a problem, to false positives, where reported defect conditions did not actually exist. In terms of productivity cost and delay, false positives can be as damaging to confidence, and hence to adoption, as missed defects. Especially in a DevOps environment, with its pressure to deliver defect-free code fast, data center managers report there is nothing worse for killing DevOps adoption than a lack of confidence in the reliability of the testing tools.
This goes hand-in-hand with another shift in the automated testing conversation—a shift from the cost
of failure to the value in early detection and elimination of defects. This new conversation is giving
rise to a new DevOps metric, time-to-quality, as a key value. The new metric is driving all parties into
proactive steps to eliminate defects early and quickly. For example, one of the largest global insurance
companies not only promotes adoption of its IDE for new developers but also advises that developers leverage testing activities in the early stages of the SDLC. This is possible in part because developers can now request JCL Testing Services from within the IBM IDE for System z (Rational Developer for System z).
DevOps Automated Tools
Automation begins with documentation. If you think the agile process favors fast iterations over solid documentation, think again: things change when batch gets involved. The majority of the batch applications driving our Fortune 500 businesses (applications that process over 50% of all business data) are no longer supported by their original developers, who may have retired. For application maintenance in this environment, documentation becomes a critical part of the product. Fortunately, this isn’t a problem in mainframe shops, where existing SMF data and workload-scheduling databases can be used to automate the creation and maintenance of system-level documentation down to the control card level. The result: applications remain well documented, and sharing this common information across build and deploy becomes a highly valued DevOps enabler. The ability to leverage this information is critical for understanding dependencies, for accelerating developer maintenance, and especially for creating and using accurate test cases. For instance, one of the largest card processors in the world uses automated documentation to accelerate maintenance and enable predictive analysis of production abends and delays, which greatly speeds problem resolution.
Documentation is the Product
• Accurate system documentation: start with the critical path; capture batch process flow and dependencies
• Quickly identify dependencies for testing: where-used procs; where-used control cards
• Clean up obsolete components
Complete, automated, accurate, and current documentation is a direct benefit of DevOps.
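A where-used index of the kind described above is, at its core, an inversion of the job inventory: for each proc or control card, which jobs depend on it. The job, proc, and control-card names below are invented for illustration; a real implementation would draw on SMF data and the workload-scheduling database:

```python
from collections import defaultdict

# Hypothetical job inventory, as might be extracted from a scheduling database.
# All names are invented for illustration.
jobs = {
    "PAYROLL": {"procs": ["PAYPROC"],  "control_cards": ["TAXRATES"]},
    "BILLING": {"procs": ["BILLPROC"], "control_cards": ["TAXRATES", "CYCLE01"]},
    "ARCHIVE": {"procs": ["PAYPROC"],  "control_cards": []},
}

def where_used(inventory):
    """Invert the inventory into a where-used index: component -> jobs using it."""
    index = defaultdict(set)
    for job, parts in inventory.items():
        for component in parts["procs"] + parts["control_cards"]:
            index[component].add(job)
    return index

index = where_used(jobs)
print(sorted(index["TAXRATES"]))  # ['BILLING', 'PAYROLL']
print(sorted(index["PAYPROC"]))   # ['ARCHIVE', 'PAYROLL']
```

Before changing the hypothetical TAXRATES control card, a developer can see at a glance that both PAYROLL and BILLING need retesting, which is exactly how accurate documentation accelerates test-case selection.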
Early and frequent automated testing enables DevOps teams to expedite the process and to detect and fix faults early, as the large brokerage firm referenced above experienced. Ultimately, you can combine DevOps with a catalog of components that LOB managers can use, in conjunction with self-service automation, to assemble proven micro-services and code components into new applications built from tested pieces as needed. This not only produces a highly responsive system; LOB units get exactly the applications they want, fast, while IT drastically reduces its backlogs and frees resources for new initiatives.
The payback for data centers that automate in this way comes in a variety of ways. For example, the
global financial firm that automated its JCL syntax validation process referenced at the top of this
piece generated an attractive ROI through:
• Better control in ensuring code that meets target mainframe standards for production environments
• Freeing costly staff from performing manual syntax validation, leaving them available for more
productive tasks
• Ensuring consistency of syntax validation and standards rules across all mainframe environments
• Automation of the final audit process
• Successful automation of thorough batch runtime validations
• Real-time communication of validation results to end-users in the development community
• Automated documentation
The company’s batch success rates improved from 99.90% in 2014 to record highs of 99.92% in the first six months of 2015. Robust mainframe syntax validation and engagement with business clients on batch standards and best practices were the major drivers of this success. As other regions of the global firm adopt this model, the target of a 99.95% batch success rate is viewed as quite achievable.
Conclusion: Get mainframe batch and agile working together now
This isn’t rocket science or even bleeding edge technology. The tools and processes to make this
happen are available and well understood. Similarly, automation streamlines processes and drives
efficiency and self-service, both of which save money.
Just as important, bringing batch and agile together through a DevOps process adds a side benefit: in a rapidly changing IT environment, DevOps success ensures the mainframe’s continued central role. Capturing the full benefits does, however, require a culture change and management support.
Is your organization ready to change for the better?
About SEA:
Established in 1982, Software Engineering of America has built a worldwide reputation as a leading provider of infrastructure optimization and automation solutions. With products licensed at over 10,000 data centers worldwide, SEA’s customers include 9 of the Fortune 10 and over 90% of the Fortune 500. SEA is a leading provider of software solutions and expertise in the z/OS marketplace, with over 30 years of experience helping the world’s largest companies improve efficiency, lower costs, and incorporate best practices into managing their z/OS environments.
Learn more at www.seasoft.com