DevOps Maturity Model
IBM Rational
03/10/2014
What Are Our Goals?
• Our ultimate goal is to continuously improve tools and processes to deliver a
quality product to our customers faster.
• Measuring that continuous improvement is important to show we are
focusing on the areas that provide the best ROI.
• How do we plan to measure?
– DevOps Maturity Model Self Assessment
– DevOps Outcome Based Self Measurement
• How often do we plan on doing the self assessments?
– Current plan is quarterly
• What will be done with the measurements?
– Identify Tools/Processes that can help improve DevOps Maturity
– Continuous improvement measured through the metrics collected.
– Will be shared with the executive teams
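As an illustration of how the collected measurements could be rolled up quarter over quarter for that executive report (a sketch only: the area names follow the maturity model, but the numeric scores and the roll-up format are invented, not prescribed here):

```python
# Illustrative roll-up of quarterly self-assessment scores per DevOps area.
# Area names follow the maturity model; the scores are invented examples.
from statistics import mean

q1 = {"Plan/Measure": 2, "Develop/Test": 1, "Release/Deploy": 1, "Monitor/Optimize": 1}
q2 = {"Plan/Measure": 2, "Develop/Test": 2, "Release/Deploy": 1, "Monitor/Optimize": 2}

def improvement(before, after):
    """Per-area delta and average change between two quarterly assessments."""
    deltas = {area: after[area] - before[area] for area in before}
    return deltas, mean(deltas.values())

deltas, avg = improvement(q1, q2)
print(deltas)  # which areas improved since last quarter
print(avg)     # average maturity gain across areas; here 0.5
```

A positive average shows the continuous-improvement trend the metrics are meant to demonstrate; per-area deltas point at where the next quarter's tool/process investment should go.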
© 2013 IBM Corporation
DevOps Principles and Values
• Iterative and frequent deployments using repeatable and reliable processes
• Continuously monitor and validate operational quality characteristics

DevOps Maturity Model Self Assessment: "Plan & Measure" questions 1–63.
DevOps Maturity Model
[Grid: practices at four maturity levels (Practiced, Repeatable, Reliable, Scaled) across the Plan & Measure, Develop & Test, Release & Deploy, and Monitor & Optimize areas; the full grid is reproduced on the "Introduction to Practice Based Maturity Model" slide.]
• Develop and test business strategy requirements against a production-like system
• Amplify feedback loops
STG DevOps Proof of Concept Investigation
[Diagram: customer interaction through Service Management Connect (feedback, content, download) and RFE into a hosted environment (SCE) of Driver VMs, with goals of continuous feedback and continuous test marked fully or partially achieved.]
Where do you start? DevOps improvements adoption
Assess and define outcomes & supporting practices to drive strategy and roll-out.

Step 1: What am I trying to achieve? (Business Goal Determination)
• Think through business-level drivers for improvement
• Define measurable goals for your organizational investment
• Look across silos and include key Dev and Ops stakeholders

Step 2: Where am I currently? (Current Practice Assessment)
• What do you measure and currently achieve
• What don't you measure, but should to improve
• Which practices are difficult, incubating, or well scaled
• Whether your team members agree with these findings

Step 3: What are my priorities? (Objective & Prioritized Capabilities)
• Start from where you are today and work toward your improvement goals
• Consider changes to People, Practices, Technology
• Prioritize change using goals, complexities and dependencies
[DevOps Maturity Model grid annotated with Fully Achieved / Partially Achieved markers: the Objective & Prioritized Capabilities output of Step 3.]

Step 4
How should my practices improve?
• Understand your appetite for cross functional change
• Target improvements which get the best bang for the buck
• Roadmap and agree on an actionable plan
• Use measurable milestones that include early wins
Roadmap
Outcome Based Metrics
1) Which of the following best describes the state of your code at the end of each iteration?
* Ready for testing
* Partially tested, ready for additional integration, performance, security, and/or other testing
* Fully tested, documented, and ready for production delivery or GA release (modulo translation work or legal approvals)
2) How quickly can your team pivot (complete a feature in progress and start working on a newly-arrived, high-priority feature)?
* 3 months or longer
* less than 3 months
* less than one month
* less than one week
3) How quickly are you able to change a line of code and deliver to customers as part of a fully-tested, non-fixpack release?
* 12 months or longer
* less than 6 months
* less than 3 months
* less than one month
* less than one week
* less than one day
4) What is the cost (in person-hours) of executing a full functional regression test?
<enter value>
5) How long does it take for developers to find out that they have committed a source code change that breaks a critical function?
* One week or longer
* 1 to 7 days
* 12 to 24 hours
* 3 to 12 hours
* 1 to 3 hours
* less than one hour
6) Which of the following best describes the state of your deployment automation for the environments used for testing?
* We have no deployment automation. Setting up our test environments is entirely manual.
* We have some deployment automation, but manual intervention is typically required (for example, to provision machines,
setup dependencies, or to complete the process).
* We create fully-configured test environments from scratch and we reliably deploy into those environments without manual
intervention.
* We create fully-configured production-congruent environments from scratch and we reliably deploy into those environments
without manual intervention.
Outcome Based Metrics
7) (If your product is SaaS) Which of the following best describes the state of your deployment automation for
your staging and production environments?
* We have no deployment automation. Setting up our staging and production environments is entirely
manual.
* We have some deployment automation, but manual intervention is typically required (for example, to
provision machines, setup dependencies, or to complete the process).
* We create fully configured staging and production environments from scratch and we reliably deploy into
those environments without manual intervention.
8) (If your product is SaaS) Are you able to make business decisions based on data provided by
infrastructure, application, and customer experience monitoring?
* yes
* no
9) (If your product is SaaS) How much downtime is generally required to deploy a new version into
production?
* 4 hours or longer
* 1-4 hours
* less than 1 hour
* No downtime is needed
10) (If your product is SaaS) How often do problems occur when deploying a new version into production?
* Problems always occur
* Problems occur about 50% of the time
* Problems occur about 25% of the time
* Problems occur about 10% of the time
* Problems are rare
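One way to trend these answers between assessments (a sketch only; the ordinal scale is illustrative, not an official weighting) is to score each multiple-choice answer by its position in its option list, so a higher number means a more mature outcome:

```python
# Illustrative: score a multiple-choice answer by its ordinal position
# (0 = least mature option) so quarterly results can be compared.
# The option list below is transcribed from question 2 above.
PIVOT_OPTIONS = [
    "3 months or longer",
    "less than 3 months",
    "less than one month",
    "less than one week",
]

def score(options, answer):
    """Ordinal score of an answer within its option list."""
    return options.index(answer)

previous = score(PIVOT_OPTIONS, "less than 3 months")
current = score(PIVOT_OPTIONS, "less than one month")
print(current - previous)  # a positive delta means the team got faster
```

The same scoring works for any of the multiple-choice questions above; only question 4 (a raw person-hours value) needs to be tracked as a number directly.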
Initial Rollout

Project                      | Contacts                                                  | DevOps Overview & Maturity Model | Self Assessment | Analyze Results
Ice Castle                   | Marla Berg, John Beveridge, Lafonda Richburg              | 01/30/2014 | 03/07/2014 | 03/21/2014
Platform Resource Scheduler  | Anumpa, Jason Adelman, Shen Wu                            | 01/30/2014 | 03/06/2014 | 03/21/2014
IBM Cloud OpenStack Platform | Christine Grev, Gary Palmersheim                          | 01/30/2014 | 03/06/2014 | 03/21/2014
GPFS                         | Bonnie Pulver, Lyle Gayne, Steve Duersch, Yuri L Volobuev | 02/19/2014 | 02/21/2014 | 03/07/2014
Pinehurst                    | Sue Townsend                                              | 01/30/2014 |            |
Power FW                     | Atit                                                      | 02/19/2014 |            |
AIX                          | Atit                                                      | 01/30/2014 |            |
PowerVC                      | (Thomas)                                                  |            |            |
Platform LSF                 | (Akthar)                                                  |            |            |
Platform Symphony            | (Akthar)                                                  |            |            |
MCP                          | (Frye)                                                    |            |            |
PowerKVM                     | (Frye)                                                    |            |            |
Power Linux?                 | (Frye)                                                    |            |            |

Roadmap for Goals
Sample Results
[Charts: DevOps Maturity Model Self Assessment Results · Outcome Based Metrics]
Sample Details: Plan & Measure (Reliable)
Sample Details: Develop & Test (Practiced)
Sample Details: Release & Deploy (Practiced)
Sample Details: Monitor & Optimize (Practiced)
DevOps Outcome Metrics

Metric                                           | Sample 1           | Sample 2
1. State of code at end of iteration             | Partially tested   | Partially tested
2. How quickly pivot to new high priority        | Less than 1 month  | Less than 3 months
3. How long from LOC to tested release           | Less than 1 month  | Less than 3 months
4. Person-hours of full functional regression    | 280 hours (7 PW)   | (if 1–2 weeks, what PW?)
5. Dev time to knowing critical function broken  | 1–3 hours          | 12–24 hours
6. State of deployment automation                | Fully configured test environments from scratch; reliable deployment without manual intervention | Some deployment automation, but manual intervention typically required
7. If SaaS, deployment automation for staging    | N/A                | N/A
8. If SaaS, decisions based on monitoring?       | N/A                | N/A
9. If SaaS, downtime required to deploy prod     | N/A                | N/A
10. If SaaS, problems on prod deployments        | N/A                | N/A
Sample Maturity Model Assessment
Columns: Plan / Measure · Development / Test · Release / Deploy · Monitor / Optimize
Legend: Fully Achieved · Partially Achieved · Goal

Scaled
• Define release with business objectives
• Measure to customer value
• Improve continuously with development intelligence
• Test Continuously
• Manage environments through automation
• Provide self-service build, provision and deploy
• Automate problem isolation and issue resolution
• Optimize to customer KPIs continuously

Reliable
• Plan and source strategically
• Dashboard portfolio measures
• Manage data and virtualize services for test
• Deliver and integrate continuously
• Standardize and automate cross-enterprise
• Automate patterns-based provision and deploy
• Optimize applications
• Use enterprise issue resolution procedures

Repeatable
• Link objectives to releases
• Centralize Requirements Management
• Measure to project metrics
• Automated test environment deployment
• Run unattended test automation / regression
• Plan departmental releases and automate status
• Automated deployment with standard topologies
• Monitor using business and end user context
• Centralize event notification and incident resolution

Practiced
• Document objectives locally
• Manage department resources
• Schedule SCM integrations and automated builds
• Test following construction
• Plan and manage releases
• Standardize deployments
• Monitor resources consistently
• Collaborate Dev/Ops informally
Sample Maturity Model Assessment: Goals
[The practice grid above, annotated with Fully Achieved / Partially Achieved / Goal markers. GOALS: Where is the best result? Focus up and focus across.]
Goal Discussion: Planning for Initiative #1

Pain Point | Improvement Value | Effort Required | Priority | Next Step
DevOps Proof of Concept Investigation
[Architecture diagram. Customer interaction: Service Management Connect (feedback, content, download) and RFE feed continuous feedback. A hosted environment (SCE) provides Driver VMs for continuous test and continuous deployment. Agile development uses Rational Focal Point, Rational Team Concert (task and change-record work items, Jazz SCM, Jazz Build Engine, AppScan security) and Rational UrbanCode for continuous integration and continuous build. A development VM runs a web browser with RTC Eclipse and Web clients; a builder VM runs the RTC build engine client (UrbanCode Deploy?) against build resources: compilers, AppScan, a compile pool with RTC build engine agents, and an image catalog. Test environments supply driver images, debug environments, and test resources with RTC Web/Eclipse clients, the RTC build client (JBE), and the Focal Point client.]
Introduction to Practice Based Maturity Model
Columns: Plan / Measure · Development / Test · Release / Deploy · Monitor / Optimize

Scaled
• Define release with business objectives
• Measure to customer value
• Improve continuously with development intelligence
• Test Continuously
• Manage environments through automation
• Provide self-service build, provision and deploy
• Automate problem isolation and issue resolution
• Optimize to customer KPIs continuously

Reliable
• Plan and source strategically
• Dashboard portfolio measures
• Manage data and virtualize services for test
• Deliver and integrate continuously
• Standardize and automate cross-enterprise
• Automate patterns-based provision and deploy
• Optimize applications
• Use enterprise issue resolution procedures

Repeatable
• Link objectives to releases
• Centralize Requirements Management
• Measure to project metrics
• Link lifecycle information
• Deliver and build with test
• Centralize management and automate test
• Plan departmental releases and automate status
• Automated deployment with standard topologies
• Monitor using business and end user context
• Centralize event notification and incident resolution

Practiced
• Document objectives locally
• Manage department resources
• Manage Lifecycle artifacts
• Schedule SCM integrations and automated builds
• Test following construction
• Plan and manage releases
• Standardize deployments
• Monitor resources consistently
• Collaborate Dev/Ops informally
Maturity Levels Defined

Maturity levels are defined by how well an organization performs practices. The levels consider consistency, standardization, usage models, defined practices, a mentor team or center of excellence, automation, continuous improvement, and organizational or technical change management.

Practiced: Some teams exercise activities associated with the practice, inconsistently. No enterprise standards are defined. Automation may be in place, but without consistent usage models.

Repeatable (Consistent): Enterprise standards for the practice are defined. Some teams exercise activities associated with the practice and follow the standards. No core team or center of excellence (COE) assists with practice adoption. Automation, if used, follows enterprise standards.

Reliable: Mechanisms exist to assist adoption and ensure that standards are being followed. A core team of mentors is available to assist in adoption.

Scaled: Adoption is institutionalized across the enterprise. The COE is a mature and integral part of continuous improvement and enablement. Practices are mainstreamed across the enterprise. A feedback process is in place to improve the standards.
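The four definitions above can be read as a simple decision rule over which mechanisms an organization has in place. As an illustration only (the boolean attribute names are hypothetical, not part of the model):

```python
# Illustrative rule-of-thumb classifier for the level definitions above.
# The boolean attribute names are hypothetical, not part of the model.
def maturity_level(standards_defined, adoption_assisted, institutionalized):
    """Return the level implied by which mechanisms an organization has."""
    if institutionalized:      # COE integral, practices mainstreamed, feedback loop
        return "Scaled"
    if adoption_assisted:      # core team of mentors helps enforce standards
        return "Reliable"
    if standards_defined:      # enterprise standards exist and are followed
        return "Repeatable"
    return "Practiced"         # inconsistent, team-local activity

print(maturity_level(standards_defined=True, adoption_assisted=False,
                     institutionalized=False))  # prints "Repeatable"
```

The ordering matters: each level presumes the mechanisms of the levels below it, which is why the checks run from Scaled down to Practiced.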
Plan/Measure

At the practiced level, organizations capture business cases or goals in documents for each project to define scope within the strategy, but resourcing for projects is managed at the department level. Once projects are executed, change decisions and scope are managed within the context of the project or program to achieve goals within budget and time. As organizations mature, business needs are documented within the context of the enterprise and measured against customer value metrics. Those needs are then prioritized, aligned to releases, and linked to program or project requirements. Project change decisions and scope are managed at the portfolio level.
Development/Test

At the practiced level, project and program teams produce multiple software development lifecycle products in the form of documents and spreadsheets to explain their requirements, design, and test plans. Code changes and application-level builds are performed on a formal, periodic schedule to ensure sufficient resources are available to overcome challenges. Testing, except at the unit level, is performed following a formal delivery of the application build to the QA team, after most if not all construction is completed. As organizations mature, software development lifecycle information is linked at the object level to improve collaboration within the context of specific tasks and information. This provides the basis for development intelligence used to continuously assess the impact of process or technology improvements. A centralized testing organization and service provides support across applications and projects, continuously running regression and higher-level automated tests, provided the infrastructure and application deployment can also support them. Software delivery, integration, and builds with code scans and unit testing are performed routinely and continuously for individual developers, teams, applications, and products.
Release/Deploy

At the practiced level, releases are planned annually for new features and maintenance teams. Critical repairs and off-cycle releases emerge as needed. All are managed in a spreadsheet updated through face-to-face meetings. Impact analysis of change is performed manually as events occur. Application deployments and middleware configurations are performed consistently across departments using manual, or manually staged and initiated, scripts. Infrastructure and middleware are provisioned similarly. As organizations mature, releases are managed centrally in a collaborative environment that leverages automation to maintain the status of individual applications. Deployments and middleware configurations are automated, then move to self-service, providing individual developers, teams, testers, and deployment managers with the capability to build, provision, deploy, test, and promote continuously. Infrastructure and middleware provisioning evolves into an automated, then self-service, capability similar to application deployment. Operations engineers move to changing automation code and redeploying, rather than making manual or scripted changes to existing environments.
Monitor/Optimize

At the practiced level, deployed resources are monitored, and events or issues are addressed as they occur without the context of the affected business application. Dev and Ops coordination is usually informal and event-driven. Feedback on user experience with business applications is gathered through formalized defect programs. As organizations mature, monitoring is performed within the context of business applications, and optimization begins in QA environments to improve stability, availability, and overall performance. Customer experience is monitored to optimize experiences within business applications. Optimization to customer KPIs is part of the continuous improvement program.
Sample: Practice Based Maturity Model: Maturity Goals for an Initiative
Columns: Plan / Measure · Development / Test · Release / Deploy · Monitor / Optimize
Legend: Fully Achieved · Partially Achieved · Goals

Scaled
• Define release with business objectives
• Measure to customer value
• Improve continuously with development intelligence
• Test Continuously
• Manage environments through automation
• Provide self-service build, provision and deploy
• Automate problem isolation and issue resolution
• Optimize to customer KPIs continuously

Reliable
• Plan and source strategically
• Dashboard portfolio measures
• Manage data and virtualize services for test
• Deliver and integrate continuously
• Standardize and automate cross-enterprise
• Automate patterns-based provision and deploy
• Optimize applications
• Use enterprise issue resolution procedures

Repeatable
• Link objectives to releases
• Centralize Requirements Management
• Measure to project metrics
• Link lifecycle information
• Deliver and build with test
• Centralize management and automate test
• Plan departmental releases and automate status
• Automated deployment with standard topologies
• Monitor using business and end user context
• Centralize event notification and incident resolution

Practiced
• Document objectives locally
• Manage department resources
• Manage Lifecycle artifacts
• Schedule SCM integrations and automated builds
• Test following construction
• Plan and manage releases
• Standardize deployments
• Monitor resources consistently
• Collaborate Dev/Ops informally