Monday, January 11, 2016

Summary of "Predictions for DevOps in 2016"

Summary of Predictions for DevOps in 2016 (IT Pro Portal, 1/11/2016), written by Greg Dean

"Savvy businesses are using the malleability of software and rapid pace of its development to drive greater differentiation. They’re using instant feedback from users to improve software continuously, and increasingly doing this through DevOps."

"2016 will be a big year for DevOps. Here’s how to compete."

Large enterprises will get onboard fully
"... after a few years of experimenting they’re starting to rack up successes."
  • Small pockets so far.
  • Not adopted widely yet, so releases still slow, inconsistent, expensive.
  • However, C-level executives ... asking how they can leverage DevOps principles at scale.
  • Success will require experimentation and a tolerance for failure.
  • DevOps will be critical to legacy application modernization.
  • Gartner says >25% of Global 2000 will leverage DevOps.
  • In <5 years, DevOps will be the norm.

Standards will emerge
There are no absolute standards, so it's still considered risky by Enterprise orgs.

Security will increasingly become integrated with DevOps
There have been too many breaches to ignore the importance of integrating security into the DevOps framework.  This means adding it from the beginning, which means including a spot for Security on the DevOps team.
"At present, there are far more developers than application security experts, so security must coach DevOps on how to effectively and efficiently embed in current practices."

Key technology adoptions that enable DevOps will take off
Increasing automation
Automation speeds up cycles, reduces errors, and helps ensure repeatability. Automation must "accelerate tasks, eliminate manual handoffs, and cut down error prone processes."
Decreasing latency
Orgs must identify and remove the biggest bottlenecks in the delivery pipeline. "... major bottlenecks can still cause technical “debt” to build up earlier in the pipeline or reduce key resources further down in the pipeline."
Increasing visibility
Continuously assess and monitor applications at every stage of their lifecycle. "Key metrics include application user experience, health and availability of the infrastructure, threat and risk monitoring, which must be shared across the team through continuous feedback loops."
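The "decreasing latency" point above can be made concrete. A minimal sketch, with entirely hypothetical stage names and cycle times: given per-stage durations for one delivery pipeline, find the stage consuming the most time, which is where a DevOps effort should focus first.

```python
# Minimal sketch: find the slowest stage (the bottleneck) in a delivery
# pipeline from hypothetical per-stage cycle times in hours.

def find_bottleneck(stage_hours):
    """Return (stage, hours) for the stage consuming the most time."""
    return max(stage_hours.items(), key=lambda kv: kv[1])

# Invented example data -- real numbers would come from pipeline telemetry.
pipeline = {
    "code review": 4,
    "build": 1,
    "manual QA handoff": 36,   # manual handoffs are a classic bottleneck
    "deploy to staging": 2,
    "production approval": 18,
}

stage, hours = find_bottleneck(pipeline)
```

In this invented data set the manual QA handoff dominates, which is exactly the kind of manual-handoff latency the article says automation should eliminate.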

Job roles will evolve
Everyone must adopt new skills—technical and cultural. Developers must become more familiar with infrastructure, operations staff with code.  Jobs will morph and evolve. This will impact business analysts, planning teams and even the C-suite. Teams will "become more horizontally embedded around products and services, and multiple roles will become part of the extended DevOps delivery chain."

Monday, December 21, 2015

The 13th Principle: DevOps Takes an Extra Injection to the Agile Manifesto

Take a look at the 12 Principles Behind the Agile Manifesto and you may or may not notice a glaring Principle that is missing.  That Principle is this:

You MUST bring your awkward and distant cousin (a.k.a. IT Ops & Architecture) into the mix.

Let's be honest ... you see your distant cousin only as needed - maybe only a few times a year - so each time you have to spend the first 3 hours in uncomfortable "getting to know you again" chit-chat before you can get to the real meat of the family reunion.  Unless you keep them involved from the beginning, and throughout all stages of the SDLC, starting with business requirements gathering, you will always go through this uncomfortably awkward dance.

The question is, "How do you propose we do that?"


"I mean, they don't understand what we're trying to accomplish (and many times I question whether they even care), and they speak with a strong brogue of Klingon that makes it very challenging to communicate."


Some talk about creation of a Tiger Team made up of ... everyone.  Yes, everyone:  Dev, the LOB, IT Ops, Security and so on.  This "special forces" squad will challenge the way each group does what they do.  They will - through friction and osmosis - create empathy and understanding of the bigger picture, starting and ending with what the business is trying to accomplish, and filled in with agility, supportability, risk reduction, and consistency.  

SIDE NOTE:  If you haven't read The Phoenix Project, get on it.  Everyone in the org from top to bottom needs to have this perspective.  Hold a book review to ensure everyone gets it before endeavoring on any transformation.  My 2 cents.

Here's the main ingredient:  Executive backing and involvement.  Assuming your edict to "create DevOps" is going to result in a magical transformation is a fairy tale.  It will fail without your constant involvement and servant leadership.

And here's the main thing:  If you don't have a business reason to do DevOps, don't do it.  Doing DevOps because everyone else is, or because you just "think you should" is no way to begin.  So, think long and hard about what business (not just IT) outcome you are looking to accomplish before starting any effort.  For more on this, read the short book, Leading the Transformation.  It's chock-full of great guidance around establishing a strong and purposeful DevOps practice within your ranks.



Greg Robert Dean is a Transformation Advisor in VMware's Software Defined Enterprise Business Unit.

Friday, December 18, 2015

The Tech Behind DevOps

There was a great article this week on IT World Canada called, "Why over 40% of IT departments are a DevOps nightmare" that discusses what I've been evangelizing to our customers:


"Automation is a key part of any DevOps project, and that must extend down to the infrastructure level, he warned."  

"He" being Ashish Kuthiala, senior director, marketing and strategy for DevOps at Hewlett Packard Enterprise.  Ashish continues:


“Set up automatic testing triggers upon code check-in, automate handoffs between teams, and carefully explore how to leverage automation to consistently deploy and configure your infrastructure,” he said. “Once something works well, codify it. Make it automated and repeatable so you can reduce errors, accelerate routine tasks, and ensure repeatability.”
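The quoted advice (trigger on check-in, automate handoffs) can be sketched in a few lines. This is purely illustrative, not HPE's or any CI tool's actual API: a tiny event registry where a single "check-in" event fans out to every automated step registered for it, so no step depends on a manual handoff.

```python
# Illustrative sketch only: a minimal event bus that chains automated
# steps on code check-in, replacing manual handoffs between teams.

triggers = {}

def on(event):
    """Decorator that registers a function to run when `event` fires."""
    def register(fn):
        triggers.setdefault(event, []).append(fn)
        return fn
    return register

def fire(event, payload):
    """Run every handler registered for `event`, in registration order."""
    return [fn(payload) for fn in triggers.get(event, [])]

@on("check-in")
def run_tests(commit):
    # A real pipeline would invoke the test suite here.
    return f"tests triggered for {commit}"

@on("check-in")
def notify_ops(commit):
    # A real pipeline would post to the ops team's channel or ticket queue.
    return f"ops notified about {commit}"

log = fire("check-in", "abc123")
```

The point of the sketch is the "codify it" step from the quote: once the handoff works, it lives in code and runs identically on every check-in.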

As I've mentioned in previous posts - the tougher challenges come with people and processes, but technology to seamlessly automate your every repeatable task from Dev to Stage to UAT to Perf to Prod, etc, is equally critical.  Having common tooling for Automation that spans across the "wall" from Dev/Test to Day 2 Ops and back will make things oooohhhh so much easier.  Dealing with the same data, processes, and tools will facilitate process continuity as well as the new communication paradigm needed for DevOps to be a success.

To that point, the term "DevOps" itself comes with an obvious connotation - Dev is in IT's business and vice versa.  This has substantial benefits AS WELL AS many bumps and bruises to egos and communication constructs along the way.


Companies will have to decide, for example, how much development teams are to be involved in the provisioning process, now that all of the infrastructure suddenly speaks their language. And whoever pays the bills will have to negotiate policies on provisioning and usage of quickly-accessible resources. That’s a whole other layer of politics to contend with, before you even get to the fun stuff.

Here's the cool thing:  VMware has it figured out, having gone through the internal pains ourselves.  Once we topped off the transformation that was the "Journey to IaaS and PaaS", what was next?  Well, once you paint all the walls in the house, the floor boards now look dingy and scuffed.  The logical next phase was continuing upstream into the SDLC with pipeline release management via offerings like vRealize Code Stream.  Stay tuned for more on that.



Greg Robert Dean is a Transformation Advisor in VMware's Software Defined Enterprise Business Unit.

Wednesday, December 16, 2015

Only 1% of Automation Projects Succeed (a.k.a. 99% Fail)

To clarify: 99% of Automation projects fail when the automation software was purchased without thought-leadership services expertise from someone who has REALLY done it.

Sounds like a pitch for services (and to an extent, it is), but the point is not services revenue for your selected Automation software vendor.  The point is ...

it's about PEOPLE and PROCESS primarily

Working with VMware, for instance, you've been able to buy virtualization software and - for the most part - it's just worked without services and without much training.  But Automation is a whole other animal.  It touches every part of IT Ops, plus dev teams, plus IT finance, plus the business.  It touches all ITSM processes and Dev processes, and every person involved in those processes.  So, a plan that takes into consideration process recalibration and rationalization, plus regrouping and training of people, is critical. Then (and only then) should you apply technology to facilitate.

Don't get me wrong ... there is absolute benefit in creating an "Art of the Possible" stack in a lab that shows to all who will be involved what you are aiming for, but expecting that software will magically do it all in the medium and long term - getting you past broken or outdated processes and organizational behavior issues - is just crazy.

Here is a very high level depiction of what must be fleshed out, ideally with the help of experts in Application Delivery Automation:

ABOVE:  All of what must be considered when venturing toward an SDDC Transformation.

While this is a significant transformation to undertake, it is very well worth the effort.  According to a recent Forrester study on the Total Economic Impact of VMware Automated Application Deployment, for instance, 
  • Application delivery accelerated from 3-4 weeks on average to <1 day.
  • Consistency and quality improved by a notable margin (not quantified).
  • HW cost avoidance was approximately 15%.
  • Capacity reduction was approximately 10%.
  • IT Ops time savings of approximately 22 hours per application environment delivered.

What's more important, though, is the business impact:
  • Improvements in developer productivity (20%+ for VMware internal)
  • Significant market share and revenue increases (depends upon industry)
  • Margin (Net Income) thickening
  • Innovation through more IT time spent shoulder-to-shoulder with the business, trying out (and failing fast) edgy ideas to uncover potential advantages in the market.  This benefit cannot be measured specifically, but it's the most substantial byproduct of this transformation by a wide margin.

In summary, before endeavoring to "change the IT world" - which is a noble and highly impactful venture - you will want to (a) get commitment to the change from Exec Management and heads of Dev and IT Ops, (b) begin considering the process and people aspects of the transformation that must transform (a la The Phoenix Project) and (c) partner with a company that has done this successfully many times over.  Of course, VMware is considered the best and most experienced, but there are many with specific practices in this arena who will do great things for you, too.



Greg Robert Dean is a Transformation Advisor in VMware's Software Defined Enterprise Business Unit.

Monday, April 20, 2015

IT Transformation in the Insurance and Financial Services Industries

by Gowrish Mallya

Insurance and Financial Services companies are undergoing rapid transformation due to the advent of technological innovations. By 2018, nearly one-third of the insurance industry's business is expected to be generated digitally. In order to be digitally competent, insurance companies need to:
  • Reduce barriers to customer interaction
  • Use new business models
VMware’s Accelerate Benchmarking Database provides interesting insights into the current state of IT readiness of insurance and financial services companies – and their target state goals. Let’s take a closer look at the two requirements for digital competence.
1.      Reduce Barriers to Customer Interaction
In a perfect environment, all of the Tier-1 applications would be written in lightweight, highly-portable application frameworks, and be capable of harnessing cloud-connectivity and scalability. Virtualizing Tier-1 applications decouples the software stack from the hardware, thereby easing operations like planned maintenance; as a result there is tighter alignment between IT and business needs. IT would then be able to develop applications to keep up with market needs and serve their end customers better.
VMware’s Accelerate Benchmark Database shows that in Insurance and Financial Services industries, currently only 14 percent of the companies have 75 percent or more of their Tier-1 applications virtualized; the industry-wide company average is around 25 percent.
The data also shows that only 34 percent of the companies have executive or line-of-business support for cloud as a strategy. IT can contribute significantly to reduce computing cost, but without management support, cloud efforts will be difficult and challenged, as the true benefit potential cannot be effectively communicated to business units and end users.
By 2018, insurers anticipate nearly one-fifth (19.7 percent) of their business will be generated through Internet-connected PCs, up from 12.7 percent in 2013. Another 10.9 percent is expected to come via mobile channels, up from a mere 1.5 percent in 2013. Application virtualization is key to help businesses cater to such exponential growth from the Internet and mobile devices, as it will help reduce time to market for new features or products across all customer segments.
2.      Use New Business Models
For organizations in this industry, making quick, informed decisions and acting swiftly separates mediocrity from success. Being able to deploy infrastructure at the earliest point in time helps organizations achieve their goals in the shortest time possible. To achieve higher levels of cost performance, agility, scalability and compute, virtualization must be nearly ubiquitous.
Monitoring the deployed infrastructure is vital, enabling an organization to run in an optimal state by:
  • Keeping a check on capacity and provisioning issues by giving out an early warning
  • Providing transparency and control over cost, services and quality
  • Benchmarking the IT systems performance
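The "early warning" bullet above can be sketched very simply. This is an illustration with invented resource names and an assumed 80 percent threshold, not any VMware product's logic: flag every resource whose utilization is inside the headroom margin before it becomes a provisioning emergency.

```python
# Sketch of a capacity early-warning check. The threshold and the
# utilization figures are invented for illustration.

def early_warnings(utilization, threshold=0.8):
    """Return, sorted, the resources whose utilization fraction exceeds
    the warning threshold."""
    return sorted(name for name, used in utilization.items() if used > threshold)

current = {"compute": 0.85, "storage": 0.60, "network": 0.92}
alerts = early_warnings(current)
```

A real monitoring stack would feed live metrics into such a check continuously and surface the alerts through the feedback loops discussed earlier.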
VMware’s Accelerate Benchmark Database shows that 92 percent of the companies are at least 40 percent compute virtualized. Also, 50 percent of the companies do not have storage virtualized, and 56 percent are not network virtualized. Virtualizing storage and network infrastructure can reduce day-to-day operational tasks and costs associated with important—but non-strategic—processes.
The data also shows that 78 percent of the companies have either no ability to meter IT usage, or they do it manually. Also shown is that 86 percent of the companies intend to partially or fully automate IT service metering. With metering of IT service usage completely automated, there is predictive capability to understand when usage will trigger an elastic event within the environment, thereby aiding in achieving a flexible and scalable IT infrastructure.
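The predictive capability described above (knowing when metered usage will trigger an elastic event) can be illustrated with the simplest possible model: a straight-line extrapolation of metered usage. The sample numbers and the capacity limit below are made up.

```python
# Sketch: linear extrapolation of automatically metered usage to estimate
# how many periods remain before a capacity limit triggers an elastic event.

def periods_until_limit(samples, limit):
    """Extrapolate from the average growth per period across the samples.
    Returns None if usage is flat or shrinking."""
    growth = (samples[-1] - samples[0]) / (len(samples) - 1)  # units/period
    if growth <= 0:
        return None  # no elastic event predicted
    return (limit - samples[-1]) / growth

usage = [100, 110, 120, 130]              # metered usage per period (invented)
eta = periods_until_limit(usage, limit=200)
```

With manual or absent metering, this kind of forecast is impossible, which is the gap the 86 percent of companies intending to automate metering are trying to close.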
The insurance sector is witnessing new business models from new entrants. German company Friendsurance has implemented the concept of online peer-to-peer insurance. Friendsurance uses social media to link friends together to buy collective non-life policies from established insurers. A small amount of cash is set aside to cover small claims, and if the pool is untouched at year-end, it is shared among the group. In order to be agile, companies need to focus mainly on infrastructure virtualization and analytics.
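The pool mechanics described above reduce to simple arithmetic. The figures below are invented for illustration; they are not Friendsurance's actual terms: each member sets cash aside, small claims draw the pool down, and whatever remains at year-end is split equally.

```python
# Arithmetic sketch of a peer-to-peer insurance pool (all figures invented).

def year_end_refund(members, contribution, claims):
    """Equal per-member share of whatever the claims left in the pool."""
    pool = members * contribution
    remainder = max(0, pool - sum(claims))   # pool never goes negative here
    return remainder / members

# 10 friends each set aside 50; two small claims hit the pool during the year.
refund = year_end_refund(members=10, contribution=50, claims=[120, 80])
```

With no claims at all, each member gets the full contribution back, which is the behavioral incentive the model relies on.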


Gowrish Mallya brings around 8 years of experience in value engineering and benchmarking. He works closely with account teams and strategists across AMER & EMEA to address VMware customers' IT challenges and demonstrate our solution value. Gowrish is currently a Value Engineering consultant within the Field Sales Services team in India.

Thursday, June 5, 2014

Managing Cloud Services with your Existing Service Catalog

Most VMware customers use the vCloud Automation Center service catalog to request and manage their IT services.  However, some customers already have an existing service catalog.  The last thing they want is yet another service catalog.  vCAC has been designed and purpose-built to deliver and manage IT services.  vCAC has automation and governance capabilities that general-purpose service desk solutions just can't match.  Many of our customers' existing service catalogs have been designed and optimized to deliver a wider variety of services than vCAC.  These products have strengths and capabilities that we can't match.  Rather than try to replace their existing service catalog with vCAC, sometimes the more prudent approach is to call vCAC services from their existing service catalog.

Today this is not done with out-of-the-box functionality.  It does require some customization.  VMware is investing in making it easier to call vCAC services from existing service catalogs and management tools.  One of those capabilities is the vCenter Orchestrator plug-in for vCAC.  This allows vCAC services and actions to be invoked via vCO from other applications.  Other interfaces, including APIs, CLIs and Java SDKs, will be coming in the vCAC 6.1 release in Q3.  This will provide our services team, partners and customers with several options for calling vCAC services from other applications.
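To make the integration idea concrete, here is a purely hypothetical sketch of the kind of payload an external catalog might assemble before calling a workflow-execution endpoint. The endpoint shape, field names, and workflow identifier are all assumptions for illustration; they are not the real vCO or vCAC API.

```python
# Hypothetical illustration only: the shape of a request an external
# service catalog (e.g. ServiceNow) might POST to an orchestration
# integration point. Field names and the workflow id are invented.

import json

def build_catalog_request(workflow_id, requester, inputs):
    """Assemble a JSON payload for a hypothetical workflow-execution call."""
    return json.dumps({
        "workflowId": workflow_id,
        "requestedBy": requester,
        "inputs": inputs,
    }, sort_keys=True)

payload = build_catalog_request(
    "provision-linux-vm",            # hypothetical workflow identifier
    "jgalvin",
    {"cpu": 2, "memoryGB": 8},
)
```

The real work in such an integration is mapping the external catalog's form fields onto the target workflow's expected inputs; the HTTP call itself is the easy part.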

Jennifer Galvin, a senior architect from VMware's Center for Excellence, recently completed a project invoking requests for vCAC services from an existing ServiceNow catalog.  This post looks at what options customers have for consolidating their service catalogs, things to look out for, and the real-world example of Jennifer's project.  If you are looking to consolidate your service catalogs, or to call vCAC services from other applications, read on to better understand how VMware's cloud management and automation capabilities can be invoked from other catalogs and applications.



Every Cloud Management Platform comes with a service catalog.  In my last blog post I discussed what differentiates cloud service catalogs and how to distinguish the best from the rest.  However, what if you already have a service catalog?  Some VMware Cloud Automation Center customers already have an existing service catalog.  This new post looks at what your options are and provides a real-world example of invoking vCloud Automation Center cloud services from an existing service catalog.  More >>


Thursday, May 29, 2014

Not all Cloud Service Catalogs are created equal

by Rich Bourdeau

The NIST Definition of Cloud Computing defines five essential characteristics of a cloud computing solution.  Number one on that list is on-demand self-service.

“On-demand self-service  – A consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically without requiring human interaction with each service provider.”

The service catalog is the user interface for requesting IT services.  Service catalogs provide a list of IT services available to resource consumers.  In that respect, the vCloud Automation Center service catalog is similar to other vendors'.  However, vCloud Automation Center's service catalog is unique in the following ways:

Delivers Personalized Business Relevant IT Services


The goal of cloud automation is to simplify, standardize and automate the process of delivering IT services.  While standardization is fundamental to delivering repeatable, high-quality services, standardization does not have to mean one size fits all.  For example: development, test, and production may all need the same application, but how it is provisioned, how many resources each gets, what service level they receive and what management operations owners are allowed to perform vary widely between groups and even individual consumers within groups.

What differentiates vCloud Automation Center is the granularity of its governance policies.  This level of specificity allows vCloud Automation Center to leverage standardized delivery processes while at the same time customizing the services available to different users or groups of users, as well as controlling what actions they are allowed to perform against their resources.
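The granular-governance idea can be sketched in miniature. The policies below are invented for illustration and are not vCAC's actual policy model: the same catalog item is offered to two groups, but per-group policy caps the sizing and restricts which day-2 actions each group may perform.

```python
# Sketch of per-group governance over one standardized catalog item.
# Group names, limits, and action sets are invented for illustration.

POLICIES = {
    "dev":  {"max_cpu": 2,  "actions": {"power", "snapshot", "destroy"}},
    "prod": {"max_cpu": 16, "actions": {"power"}},
}

def authorize(group, cpu, action):
    """True if the request fits the group's sizing cap and allowed actions."""
    policy = POLICIES[group]
    return cpu <= policy["max_cpu"] and action in policy["actions"]

dev_ok = authorize("dev", cpu=2, action="snapshot")        # within dev policy
prod_denied = authorize("prod", cpu=8, action="snapshot")  # action not allowed
```

The delivery process behind the catalog item stays standardized; only the policy layer varies by consumer, which is the separation the paragraph above describes.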


Delivers a “Full Service” IT Catalog


Through a self-service portal, consumers should be able to request and manage a variety of IT services.  Many competitive solutions are limited in the types of services they can provide, focused primarily on infrastructure services for their own infrastructure, or offer very prescriptive service delivery processes that cannot easily be modified.

vCloud Automation Center provides a full-service catalog of multi-vendor, multi-cloud infrastructure, application and custom IT services.  These services are available through a single catalog with common governance and control.  vCloud Automation Center's Advanced Service Designer and a library of vCenter Orchestrator workflows and plug-ins make it much easier to deploy custom IT services or adapt out-of-the-box service automation to meet the unique needs of your business.

Enables full lifecycle management


Many vendors tout that they have full lifecycle management when what they really have is provisioning and decommissioning and not much else in between.

vCloud Automation Center delivers not only initial provisioning of infrastructure, application and custom IT services, it also provides a portal that allows resource owners to perform full lifecycle management of their services.  In addition to simple tasks like scaling resources up or down to meet changing business needs, vCloud Automation Center also automates the delivery of other day-2 operations like snapshot management, archival and automated reclamation of inactive resources.  In addition, custom lifecycle commands can be added using Advanced Service Designer and vCenter Orchestrator plug-ins and workflows.  This allows administrators to rapidly add any day-2 action to the list of tasks authorized users can perform.  Finally, vCloud Automation Center automates the release automation process, keeping application development, test and production environments in sync.

Displays pricing info to help influence consumption behavior


Chargeback or showback is another attribute that defines cloud computing according to the NIST definition.  Many cloud automation vendors provide a chargeback report that IT can use to deliver a bill to the business at the end of the month.  However, in order to effectively influence consumption behavior, users and resource approvers need to see the prices of services at the time they request them.  That way cost can be factored into their purchase decision.

In addition to chargeback reporting, vCloud Automation Center provides full cost transparency throughout the product.  Users see the costs of the services they are requesting.  As resources are adjusted up or down, they also see the cost impact of their decisions.  Approvers also see the costs of any exception requests that require approval.  Leveraging VMware's ITBM Standard Edition, vCAC administrators can compare the cost of their in-house services with comparable public cloud offerings from Amazon, Microsoft and VMware.  These cost displays within the service catalog and the ability to compare private to public cloud costs go far beyond the simple chargeback reporting provided by most vendors.
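The "see the cost impact as resources are adjusted" behavior boils down to a unit-cost model recomputed on every change. A minimal sketch with entirely invented rates, not any real price list:

```python
# Sketch of price-at-request-time: recompute the monthly price as a
# user scales a service up or down. Unit rates are invented.

RATES = {"cpu": 25.0, "memory_gb": 5.0, "storage_gb": 0.25}  # $/unit/month

def monthly_cost(cpu, memory_gb, storage_gb):
    return (cpu * RATES["cpu"]
            + memory_gb * RATES["memory_gb"]
            + storage_gb * RATES["storage_gb"])

before = monthly_cost(cpu=2, memory_gb=8, storage_gb=100)   # initial request
after = monthly_cost(cpu=4, memory_gb=16, storage_gb=100)   # scaled up
delta = after - before                                       # shown to the user
```

Surfacing `delta` at the moment of the scaling decision, rather than on a month-end chargeback report, is what actually influences consumption behavior.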



About Rich Bourdeau
Rich Bourdeau has over 20 years of experience in developing, managing and marketing IT infrastructure management solutions for enterprises. Rich has spent the last five years helping companies deploy and manage their private and hybrid cloud infrastructures. He has authored papers on Must Have Cloud Management Capabilities as well as Building the Business Case for Private Cloud.