This is a syndicated blog from Resources – Blog authored by Orasi’s partner Delphix. Read the original post at: https://www.delphix.com/blog/partner-spotlight-orasi
Delphix spoke with Orasi Managing Director, Terry Brennan, to learn how Orasi is helping customers blaze new trails in DevSecOps delivery and drive business transformation.
Tell us about Orasi and your core competencies.
We help our customers achieve faster, more predictable, and higher quality software delivery and operations by bringing deep DevSecOps expertise, strong technology partnerships like Delphix, and proven approaches across the DevSecOps spectrum.
We offer a breadth of DevSecOps services including assessments, strategy and roadmaps, implementing automation tooling, process change/maturity, facilitating cultural change, and much more.
Our core competency is driving client success across the lifecycle by establishing a DevSecOps pipeline with continuous flow into production. This encompasses not only the familiar CI/CD processes but also includes specialized areas such as continuous data, continuous testing, continuous security, and continuous monitoring.
Our goal is to bring front-end development, the pipeline, security, and ongoing operations together to form a cohesive, efficient system for the many verticals we support—from financial services and healthcare to manufacturing and retail.
What do you mean by ‘continuous data’?
We work with agile teams throughout the entire software lifecycle—from ideation through development, build, and testing—to ensure the pipeline is production-relevant. That applies to the environment configuration itself as well as the test data used.
So we take the CI/CD pipeline and add the concept of continuous data. Essentially continuous data means automating the process of collecting production data, securing it, then providing automated access and restoration for use in the application delivery release train.
To establish a high-quality, predictable pipeline, shift-left testing needs to be as realistic as possible, as early as possible. Without production-quality test data, teams won’t expose the edge cases they need to test thoroughly. They risk missing a complex data scenario that causes something to blow up further down the pipeline, where it’s harder to triage and more expensive to fix.
With continuous data, data is now as agile as code. Access to ephemeral test environments that are automatically fed with secure, virtualized data allows dev and test teams to truly move at a pace required by the business.
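The continuous-data flow described above can be sketched in plain Python: a production snapshot is secured (masked) once, then each ephemeral test environment receives its own lightweight, writable copy. Everything here is hypothetical and illustrative—real data virtualization tools such as Delphix operate at the storage layer with copy-on-write blocks, not in-memory dicts.

```python
import copy

# Hypothetical sketch: capture a production snapshot, mask sensitive
# fields once, then hand each test environment its own virtual copy.
PRODUCTION_SNAPSHOT = [
    {"customer_id": 101, "ssn": "123-45-6789", "balance": 2500},
    {"customer_id": 102, "ssn": "987-65-4321", "balance": 4100},
]

def mask_row(row):
    """Replace sensitive values before any test environment sees them."""
    masked = dict(row)
    masked["ssn"] = "XXX-XX-" + row["ssn"][-4:]
    return masked

def secure_snapshot(rows):
    """Mask the whole snapshot once, producing a 'golden' dataset."""
    return [mask_row(r) for r in rows]

def provision_virtual_copy(golden):
    """Each test environment gets its own writable copy of the golden
    dataset, so destructive tests never collide with one another."""
    return copy.deepcopy(golden)

golden = secure_snapshot(PRODUCTION_SNAPSHOT)
env_a = provision_virtual_copy(golden)
env_b = provision_virtual_copy(golden)

env_a[0]["balance"] = 0                 # a destructive test in env A...
assert env_b[0]["balance"] == 2500      # ...does not affect env B
assert all(r["ssn"].startswith("XXX-XX-") for r in golden)  # no raw SSNs
```

The key property is that masking happens once, upstream, while provisioning a fresh environment is cheap and repeatable—which is what lets test data keep pace with code.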
Can you describe how you assess your client’s software pipeline?
Most of our clients have started their DevOps journey, and wherever they are in terms of maturity, tooling, and processes we work with them to convert their plan into reality.
We have several dozen categories that we look at as part of the assessment—their build and unit test process, how long it takes to get feedback to developers after functional testing, types of tools used, manual processes that can be eliminated, documentation, and more.
We start with a standard approach for the assessment, but every customer is different. Some of our larger clients have 20 different tech stacks to address, for example. In the end, we create a custom solution and a detailed roadmap for moving forward.
We are not just automating processes; we re-architect processes and help customers think about their pipelines in a completely different way. When we build out the future state we treat all pipeline elements as code. Everything should be integrated into an automated process, so data virtualization is essential.
Driving faster testing and delivering test data quickly increases the speed and flow of the entire pipeline. Providing production-relevant data to application teams earlier in the release cycle means not only rapid cycle time but better feedback to address issues, which in turn compounds gains in cycle time and application quality.
When a pull request is made, for example, all elements of that build are tagged and versioned. If a problem occurs along the pipeline or even after changes are released into production, customers can automatically recreate the exact code changes, database versions used, and test configurations using Delphix.
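The tagging-and-versioning idea above can be sketched with a simple build manifest: on each pull request, the code revision, data snapshot, and test configuration are recorded under one tag so the exact state can be looked up later for triage. The `tag_build`/`recreate` names and all values below are illustrative assumptions, not Delphix's actual API.

```python
# Hypothetical sketch: record every element of a build under one tag
# so the exact code, data, and test configuration can be recreated.
manifests = {}

def tag_build(tag, code_rev, data_snapshot, test_config):
    """Record everything needed to reproduce this pipeline run."""
    manifests[tag] = {
        "code_rev": code_rev,
        "data_snapshot": data_snapshot,
        "test_config": test_config,
    }

def recreate(tag):
    """Look up the manifest so the same code, data snapshot, and
    config can be checked out and re-provisioned for triage."""
    return manifests[tag]

# Illustrative pull-request build:
tag_build(
    "pr-4321",
    code_rev="9f2c1ab",
    data_snapshot="masked-prod-2024-05-01",
    test_config="regression-suite-v3",
)

assert recreate("pr-4321")["data_snapshot"] == "masked-prod-2024-05-01"
```

The point is that the data snapshot is versioned alongside the code and config, so a defect found in production can be reproduced against the same data state it originally ran with.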
What is the impact of continuous data as clients start to streamline processes?
Often clients are completely manual or have some scripting in place, so the improvements that continuous data and continuous testing bring can be pretty significant. In our initial assessment, we may find that running a single change through the pipeline originally takes 8-10 days plus multiple people.
But if the pipeline elements are treated as code, all elements are automated and every testbed is working from the exact same virtual database, we can run multiple tests simultaneously. Starting with a clean, predictable state of the test data available to all test environments, we can have unlimited test runs going at the same time.
We are able to turn a traditionally serial approach into an extremely efficient parallel process. Running multiple changes at the same time against identical datasets is a very powerful transformation: it moves clients from manual serial testing to automated parallel testing.
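The serial-to-parallel shift can be sketched under one assumption: each test receives its own disposable copy of the same golden dataset, so even destructive tests run simultaneously without interfering. The dataset and test cases below are hypothetical.

```python
import copy
from concurrent.futures import ThreadPoolExecutor

# Hypothetical golden dataset shared by all test runs.
GOLDEN = {"orders": [100, 200, 300]}

def run_test(name, mutate):
    """Give the test its own virtual copy, let it mutate freely,
    and report the resulting state."""
    data = copy.deepcopy(GOLDEN)   # per-test "virtual database"
    mutate(data)                   # the test may trash its copy
    return name, data["orders"]

# Three tests, two of them destructive, all launched in parallel.
tests = [
    ("drop_all", lambda d: d["orders"].clear()),
    ("double", lambda d: d.update(orders=[o * 2 for o in d["orders"]])),
    ("read_only", lambda d: None),
]

with ThreadPoolExecutor(max_workers=3) as pool:
    results = dict(pool.map(lambda t: run_test(*t), tests))

assert results["read_only"] == [100, 200, 300]  # unaffected by others
assert GOLDEN["orders"] == [100, 200, 300]      # golden state untouched
```

Because every run starts from the same clean state, the number of simultaneous runs is limited only by provisioning capacity—the property that collapses an 8-10 day serial cycle into parallel execution.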
Companies have to move to this kind of radical process change to advance what they are doing and meet the incredibly demanding cycle times required by the business. There is an acceleration of business needs that, in turn, requires an acceleration in the delivery of innovation to meet those needs.
To move from cycle times that were at best once a quarter to once a day, or even to every pull request, it’s critical to run tests through the pipeline in parallel. Delphix automates data delivery for immediate access and provides test teams running in parallel with identical copies of the same dataset.
You mentioned that you help clients improve not only processes and technologies, but also people and culture. How do you go about doing that?
We also administer a value stream review to understand where the blockages are across the lifecycle and pipeline. We look at the technology and tools needed to improve the process, but we also look at the organization and the people as key elements in building the connective tissue needed for long-term success.
For example, people will often focus on a tool that they think will solve all their problems. The tool may add value, but the people may not be trained properly, or culturally they may not be ready to evolve roles and responsibilities. Traditionally, silos were built to keep people separate and focused, and it’s hard to break those down to foster a new era of collaboration. If people aren’t ready for change, aren’t communicating well, and don’t yet understand how the changes will help them do their work better, then they often become roadblocks and sabotage success.
When we introduce new technology, we work with the client to understand who the stakeholders are to build a comprehensive training plan and build out an implementation roadmap that involves everyone. Transformation means roles and responsibilities might change in the new equation.
We will conduct a POC to see how the technology works, but in the end, the technology itself has a finite set of challenges that can be methodically identified and addressed. People and culture present an infinite number of challenges. So a key element of the POC is to see how people should be aligned, what processes need to change, and what roadblocks need to be cleared. Customers need to do this before they scale.
We live agile ourselves; delivering on a transformation strategy is agile in nature. A methodical approach keeps us aligned with the client. We do daily scrums and work in two-week sprints, so clients can monitor progress and feel confident in the execution.
Terry Brennan Bio:
With more than 25 years of experience, Orasi Managing Director Terry Brennan has become an established thought leader in solving complex software delivery challenges. He has the unique ability to understand and define strategies across many facets of IT delivery, including IT strategy, governance, product ownership, application delivery, infrastructure, testing, and operations. Notably, Terry developed his expertise through hands-on experience across most IT domains. In recent years, Terry has architected and helped deliver leading-edge DevOps solutions that significantly reduce cycle times and enable continuous flow.