
It's Just Ops

A whitepaper review of the DevOps challenge facing Windows and Linux developers
By Tracy Ragan, COO, OpenMake Software

Contents
Summary
DevOps History
Achieving DevOps
Conclusion

Summary
We have once again entered what industry analysts call the "spin cycle" for yet another term: DevOps. But if we have designed, coded, tested and delivered software to end users, and developers have handed off code to operations teams for production management, it's simply Ops. And it's not new.

Depending on the size of the organization and the delivery platform, the release of software to production control can take different shapes. Mainframe and UNIX environments tend to have tighter controls than Windows and Linux environments. Large organizations strive for a tightly controlled calendar process, while organizations with small teams release as needed. Whatever level your organization may be at, there is an ongoing challenge to create a transparent and simplified build and deploy process that is fast, consistent and secure.

DevOps History
Development teams coding for Windows or Linux are rarely willing to analyze historic approaches to the DevOps challenge, but it's worth some evaluation. UNIX and z/OS platforms often have a tightly controlled and transparent process for managing software builds and releases. A review of how operational procedures have changed on those platforms gives us insight into how they solved this particular challenge.

z/OS
It is often said that Windows or UNIX can't be compared to the mainframe because mainframe coding is so much more homogenized than Windows or UNIX coding. The origin of this belief is unknown, but it is certainly an urban myth. Set aside the syntax differences between COBOL and Java or .Net, and much remains very similar. Like any platform, the mainframe presents a moving target of underlying configurations that developers must address as they move new applications through the life cycle. The OS level, database level, communication layer and critical third-party dependencies can all shift between the Development LPAR and the Production LPAR. When COBOL was king, application teams were expected to move code through the process as rapidly as Java and Windows developers are being asked to today. In essence, the situations are extremely similar.

Mainframe developers and production control teams worked together to simplify the development-to-production hand-off in the late 1980s. They accomplished this by adopting a more mature application life cycle process and moving away from simple version control tools such as Panvalet and one-off JCL scripts for managing builds and deploys. In place of versioning and JCL scripts, tools such as Serena ChangeMan and Endevor (now CA Endevor) were implemented. Endevor was designed specifically by Legent Corp. as a DevOps tool. Its name was derived from "ENvironment for DEVelopment and OpeRations," which is why it does not have an "a" in the spelling.

This new DevOps process was a substantial paradigm shift that organizations embraced, and they have never gone back. The difference between a simple version control tool cobbled together with JCL scripts and a full life cycle tool is that the full life cycle tool manages source, binaries and configurations moving across the different promotional environments, all from one location. The introduction of Processors was unique to both ChangeMan and Endevor and separated them from version tools such as Panvalet. Processors eliminated the need for static, one-off compile/link/ship JCL scripts and replaced JCL scripting with standardized methods for compiling, linking and deploying objects from Development through Test and Production. These tools managed every detail of the process, including which compile options were to be used, the library of objects to be linked and the environment to which the objects would be shipped for release.

These new life cycle tools (which are still in place) allowed a check-in at development to automatically trigger a compile of the entire application (sound like continuous integration?) and immediately report any impact the change created, using incremental build knowledge. In addition, a centralized and shared approval process using quorums was built into the promotion of code from Development through Production. Developers could check in code, predict the impact to the overall system and request a promotion to the next stage with confidence that their update was being compiled with the correct libraries for each environment, without requiring custom build or deploy JCL scripts. The production control team used the same tools as development, so all information, processes and code were managed centrally and visible to everyone. Hand-offs were simplified and one-off scripts eliminated. Configuration information was centralized and a true development-to-production operations process was in place. This has worked for the mainframe for close to 30 years.


UNIX
While the mainframe developers were sorting out their DevOps process, UNIX administrators were addressing the same issue for C applications on UNIX. ClearCase became the preferred choice on the UNIX platform because it came closest to centralizing the management of source code, the creation of binaries and deployments in a single tool that could be shared by both developers and UNIX administrators. UNIX administrators did not have the luxury of Processors as with ChangeMan or Endevor on the mainframe, so they took ownership of writing both the build and the deployment scripts for each development team. Developers were given their own sandboxes to work in and executed their builds using the ClearMake scripts given to them by the UNIX administrators. The administrators controlled the machine configurations, the libraries used in the compile process and, finally, the deployment. ClearCase offered many features that tools such as CVS could not, which created standards and controls around building UNIX binaries and installing them for production use. For example, ClearMake allows for increased control over which libraries are used to build the binaries, handles incremental processing of objects and centralizes control over basic configuration information, much like Serena ChangeMan and Endevor. Even though ClearCase does not have standardized Processors, the UNIX administrators took ownership of the build and deploy scripts for greater control over what was being built and released.

The solutions on the mainframe and the UNIX platform are similar in two very basic ways. The first commonality is that both standardized how objects are built and deployed, either by using Processors to automate the build and deploy steps or by having a centralized administrative team control the scripts for building and deploying. In each case, the building and deploying of code is managed by a centralized team that supports development. The second similarity is that a single tool is used by both developers and production control teams. Source, binaries and critical configuration information are managed in one place, allowing individuals from all teams, from development to test and production, to remain informed and in control.


Windows, Linux and DevOps


You seldom hear of organizations trying to solve the development-to-operations turnover problem in z/OS environments or large UNIX environments, and understanding the history explains why. The discussion does come up around Windows and Linux environments, however, and it tends to be the focus of the current DevOps conversation. Both Java and .Net are still young languages (compared to COBOL or C on UNIX). It has only been in the past few years that organizations have addressed automating the steps around compiling, testing and releasing software to Linux or Windows servers. These environments have matured to levels that require more transparent and standardized practices around the movement of code, binaries and configurations across the life cycle.

Both the Java and .Net development languages are unique in that they offer IDEs that provide more efficient methods for coding. Their challenge comes at the integration phase of software development. This is where the need for a Continuous Integration process comes into play: developers commit their changes back to the main trunk of code, and the entire application needs to be re-created, tested and eventually deployed. The roadblock for the Java and .Net communities is that most of the back-end processing done to support the compiling and deploying of binaries is done through one-off scripts written in Make, Maven, NAnt, Ant or a dozen other shell-style scripting languages. Attempts are made to automate the creation of the binaries, the calling of tests and the deployment using open source tools such as Jenkins, but the underlying build and deploy processes still depend on static one-off scripts that must be managed by their authors and are therefore difficult to hand off to an individual in production operations. Java and .Net developers are then required both to design and develop software and to serve as trusted partners on the operations teams for production management.
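To make the roadblock concrete, here is a minimal Python sketch of the kind of one-off deploy script described above. Every command, artifact name, host and path in it is invented for illustration; the point is that the knowledge of how the application is packaged and pushed lives only in this file and its author's head, which is exactly what makes the hand-off to production control so difficult.

#!/usr/bin/env python3
"""A hypothetical one-off deploy script of the kind described above.

All paths, server names and commands are invented for illustration.
"""
import subprocess

# Hard-coded, environment-specific details known only to the script's author.
BUILD_CMD = ["ant", "-f", "build.xml", "dist"]      # assumes Ant and a local build.xml
ARTIFACT = "dist/orderapp.war"                      # invented artifact name
TARGET_HOST = "prodapp01.example.com"               # invented production host
TARGET_PATH = "/opt/tomcat/webapps/"                # invented deploy location

def main() -> None:
    # Step 1: compile/package the application with whatever options the author prefers.
    subprocess.run(BUILD_CMD, check=True)

    # Step 2: copy the binary straight to the server -- no shared approval step,
    # no record of which libraries or configurations were used.
    subprocess.run(["scp", ARTIFACT, f"deploy@{TARGET_HOST}:{TARGET_PATH}"], check=True)

    # Step 3: restart the server with a hand-rolled remote command.
    subprocess.run(["ssh", f"deploy@{TARGET_HOST}", "sudo systemctl restart tomcat"], check=True)

if __name__ == "__main__":
    main()

A script like this works, but only as long as its author is the one running and maintaining it.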


Achieving DevOps
In order to improve the method for moving Windows and Linux code across the life cycle, developers will need to embrace a paradigm shift similar to what z/OS and UNIX developers experienced during the late 1980s and early 1990s. "Hands-free" is the ultimate goal, which means that developers will need to move away from one-off scripted processes. z/OS developers and administrators recognized this as a core problem and addressed it through Processors, which centralized build and deploy scripts. The use of standardized build and deploy methods is key to solving the DevOps challenge. Using reusable, shared methods for building and deploying allows the central production control team to clearly understand and coordinate the components required for managing the landscape of the production environment, starting at the earliest development level, thereby achieving a critical DevOps goal.

Meeting the DevOps challenge is well worth the time and effort: it will eliminate time-consuming and error-prone static scripting and manual systems, providing you with a transparent, automated and accelerated production turnover process. To get started, analyze your current process so you can clearly understand the complexity of your development-to-production turnover steps. Start your analysis by reviewing these five core factors:

How many unique compile and deploy scripts are used by your developers? Your DevOps process should include a step whereby production control takes approved source code, compiles/links the application with the approved technology stack, and deploys the binaries for testing and eventually production. If a turnover process is to become a reality, these scripts will ultimately need to be taken over by the production control team or replaced by standard, reusable methods. You will need to evaluate whether this hand-off is feasible given the number of scripts the team would need to handle, and whether an external tool should be used to replace these one-off scripts or the production control team can take ownership of them. Cost is always an issue, but remember that someone is already paying for these scripts to be managed; you are shifting the cost between departments, not creating new cost. Some companies find that they have literally thousands of scripts, which may point to the need for tools that eliminate some or all of them. If you have only one or two scripts per team, their management may be easily transferred to production control.
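One way to get a rough answer to this first question is simply to inventory the build and deploy scripts already sitting in your source tree. The following Python sketch is illustrative only: the file patterns are common conventions rather than an exhaustive list, and the repository path is a placeholder for your own checkout.

#!/usr/bin/env python3
"""Rough inventory of build/deploy scripts in a source tree (illustrative sketch)."""
from pathlib import Path
from collections import Counter

# Conventional names for one-off build and deploy scripts (adjust for your shop).
SCRIPT_PATTERNS = ["Makefile", "build.xml", "pom.xml", "*.csproj", "*.sh", "*.bat", "deploy*.py"]

def count_scripts(repo_root: str) -> Counter:
    root = Path(repo_root)
    counts: Counter = Counter()
    for pattern in SCRIPT_PATTERNS:
        # Recursively count files matching each pattern.
        counts[pattern] = sum(1 for _ in root.rglob(pattern))
    return counts

if __name__ == "__main__":
    totals = count_scripts(".")  # run from the top of a checkout
    for pattern, n in totals.most_common():
        print(f"{pattern:12} {n}")
    print("total scripts:", sum(totals.values()))

A count in the thousands points toward tooling that eliminates the scripts altogether; a count of one or two per team may simply be transferred to production control.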


What are your production server security policies? If your development team has been managing releases to your production servers, you may need to evaluate new policies for securing your production environment. Old habits can be difficult to break: developers often just want to fix it themselves rather than wait for someone from production to handle the problem, which means that knowledge is never transferred to the production control team. In addition, if you want to automate the process, restricting access will eventually be required, because manual intervention can break an automated process. You will need to develop a plan to restrict who has access to the production server environments.

How many different environments are you building and deploying to? A broad variety of platforms and servers to build and deploy to indicates a more complex environment to manage. A solid DevOps solution must serve the needs of all types of development teams, including those following Agile practices. Agile teams will push for smaller builds and deploys to reduce the risk of implementing changes, which results in more frequent builds and deploys. If you have a variety of platforms to manage along with more frequent builds and deploys, establishing clear standards for each of those platforms becomes even more critical for achieving repeatability, control and transparency, all of which are essential for simplifying the hand-off between development teams and production control teams. Understanding the variations in your delivery platforms will help you evaluate how to go about implementing DevOps, often leading to a more complete commercial solution over a homegrown process built around open source tooling.

How are your current tools performing? Version control, problem tracking, and build and deploy automation are all core components of your DevOps process. Review how well each is performing and who uses each tool set. Your different life cycle tools should be configured so that information is easily shared between development teams and production control. If each group uses different solutions, you need to evaluate how the tools can be better integrated so information is centralized and reports are easy to access. If teams continue to use different tooling for the same process, repeatability is much more difficult to achieve; metadata is often hidden and measurable reports are impossible to generate. There needs to be consistency in how developers and production control teams perform their jobs in order to achieve consistency in the process itself.
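To illustrate what standardized, reusable deploy methods across many environments can look like, here is a minimal Python sketch of a single deploy step driven by a central environment catalog instead of a separate script per platform. The environment names, hosts, paths and commands are all invented; a real implementation would pull the catalog from a shared tool or repository visible to both development and production control.

#!/usr/bin/env python3
"""One reusable deploy step driven by a central environment catalog (illustrative sketch)."""
import subprocess
from dataclasses import dataclass

@dataclass(frozen=True)
class Environment:
    name: str
    host: str
    deploy_path: str
    restart_cmd: str

# Centralized catalog -- the single place where per-environment detail lives.
ENVIRONMENTS = {
    "dev":  Environment("dev",  "devapp01.example.com",  "/opt/app/", "systemctl restart app"),
    "test": Environment("test", "testapp01.example.com", "/opt/app/", "systemctl restart app"),
    "prod": Environment("prod", "prodapp01.example.com", "/opt/app/", "systemctl restart app"),
}

def deploy(artifact: str, env_name: str) -> None:
    """Same method for every environment; only the catalog entry changes."""
    env = ENVIRONMENTS[env_name]
    subprocess.run(["scp", artifact, f"deploy@{env.host}:{env.deploy_path}"], check=True)
    subprocess.run(["ssh", f"deploy@{env.host}", env.restart_cmd], check=True)

if __name__ == "__main__":
    deploy("dist/orderapp.war", "test")  # promotion to prod is the same call with "prod"

The point of the sketch is that promotion from test to production becomes the same call with a different catalog entry, rather than a different script.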

How easy is an audit? Matching source to executables, with traceable reports showing who changed what, is critical for auditing. It is also important for creating a more reliable continuous deploy process: if you know how an application was constructed, you also know what the application needs to execute successfully in production. Take a look at how much the production control team understands about the dependencies of each application moving to production. Your DevOps process should clearly report the required technology stack (dependencies) of each application moving across the life cycle, and production control needs to be keenly aware of what the production technology stack should look like for each application before it is deployed. The technology stack includes dependencies at the OS level, the database level and project-to-project library dependencies. Verbal communication between teams is often the method for managing a change in the technology stack, but it is prone to error when things begin to move quickly. Your DevOps process should include ways to manage the technology stack and enforce the correct one at the build level, without extra work on the part of either development or production control.
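As one illustration of matching binaries to what was actually built, the following Python sketch compares the files in a deployment directory against a manifest of SHA-256 digests assumed to have been written at build time. The manifest format, file names and paths are invented for illustration; a commercial build and deploy tool would generate and verify this information for you.

#!/usr/bin/env python3
"""Check deployed binaries against a recorded build manifest (illustrative sketch)."""
import hashlib
import json
from pathlib import Path

def sha256(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def audit(manifest_file: str, deploy_dir: str) -> bool:
    # Assumed manifest format: {"orderapp.war": "<sha256 digest>", ...}, written at build time.
    manifest = json.loads(Path(manifest_file).read_text())
    clean = True
    for name, expected in manifest.items():
        deployed = Path(deploy_dir) / name
        if not deployed.exists():
            print(f"MISSING  {name}")
            clean = False
        elif sha256(deployed) != expected:
            print(f"CHANGED  {name}")  # binary does not match what the build produced
            clean = False
        else:
            print(f"OK       {name}")
    return clean

if __name__ == "__main__":
    audit("build-manifest.json", "/opt/app")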

Conclusion
Achieving DevOps will require a paradigm shift from both developers and production control teams. One-off, script-driven processes for build and deploy will eventually be replaced with standardized methods that can be shared across teams and managed by production control. Windows and Linux developers should consider taking a walk over to their UNIX and z/OS counterparts to learn how these legacy systems have implemented their DevOps solution; borrowing from successful processes is often preferable to trial and error.

Succeeding with a DevOps solution will require development teams to work closely with production control teams to define it. This may require standardized tools to be implemented across multiple teams for sharing information, exposing requirements, and managing detailed configuration information, from how an application is compiled down to the environment configurations for execution. There are many details in the process of building and deploying software that can be easily missed or hidden. The devil is in the details, and the more easily the details are exposed, the better the process works. Simplifying the hand-off between development and production control is all about sharing knowledge, centralizing access and creating a process that is reusable and repeatable across the life cycle.

About OpenMake Software


OpenMake Software, the DevOps Authority, delivers a dynamic solution for streamlining, accelerating and standardizing build-to-deploy activities that can flex to meet your ever-increasing operational demands. Our solutions automate tasks by eliminating script-driven processes and static configurations while supporting continuous build and deploy. We enable you to manage incremental releases, leverage the cloud, increase productivity, eliminate bottlenecks, and provide management with actionable traceability reports. Over 400 companies worldwide use our solutions to dynamically align the development-to-release process from source to production.

Tracy Ragan, COO and Co-Founder, OpenMake Software


Ms. Ragan has extensive experience in the development and implementation of business applications. It was during her consulting work that she recognized the lack of build and release management procedures for the distributed platform that had long been considered standard on the mainframe and UNIX. In the four years leading to the creation of OpenMake Software she worked with development teams on implementing a team-centric, standardized build-to-release process. She can be reached at Tracy.Ragan@OpenMakeSoftware.com.

