These are common phrases you hear around a business:
“Why upgrade? It works fine just the way it is.”
“If it ain’t broke don’t fix it.”
“It is what it is.”
They are rooted in the belief that change is expensive. But the cost of not changing can be even more expensive.
Case in point: Customer X had a manual data entry process that consumed 10 days of time from 5 interns. The process itself was sound – ultimately it had 5 people entering data line by line from Excel into an Access database (~30K rows).
Conservatively, this was costing the business $20K each year in time and resources.
“It is what it is.” How does someone know that the process is broken? How would someone know it’s an issue?
Another example: Customer Y has a data ETL strategy and a deep understanding of database design and architecture. They adopted an agile process with multiple skilled technicians. Yet once they deploy changes to UAT or production, multiple data discrepancies appear, and the team struggles to understand why. They know it’s broken but struggle to understand how to fix it.
Issues with deployment and quality cost $24K per day as technicians “pull everyone” into calls to help troubleshoot.
I’ve heard this customer say, “If we had more time to develop and test, the quality would be better.”
The reality is that even with all the time they needed, they would still be plagued by issues.
So what do we do? Data modernization. With the Microsoft Azure data frameworks and integration with hundreds of platforms, it’s a golden era of fast, simple, scalable solutions with very little overhead. Below are two examples of how we used the Azure data tools to build cost-effective solutions.
Customer X – We need to make things simple and fast
At Customer X the staff doesn’t have detailed technical knowledge, so they would like to keep things simple.
We created a process where the customer simply drops Excel files into a folder; the files are processed automatically and loaded into a fault-tolerant, cloud-based SQL database. The business leads then get a report on what data was successfully loaded. We do this using Azure Functions, Blob Storage, and data functions to load data into Azure SQL.
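The core of that pipeline is simple validation and summarization. Here is a minimal sketch of the processing step (the function name, column names, and sample data are hypothetical; in production this logic would run inside a blob-triggered Azure Function, with the valid rows bulk-inserted into Azure SQL rather than returned):

```python
import csv
import io

def process_dropped_file(raw_text: str) -> dict:
    """Parse a dropped file export, reject rows with blank fields,
    and return a load summary for the business report."""
    reader = csv.DictReader(io.StringIO(raw_text))
    loaded, rejected = [], []
    for row in reader:
        # A row qualifies only if every column has a non-blank value
        if all(value.strip() for value in row.values()):
            loaded.append(row)
        else:
            rejected.append(row)
    return {"loaded": len(loaded), "rejected": len(rejected), "rows": loaded}

# Hypothetical sample: three rows, one missing a name
sample = "id,name,amount\n1,Widget,9.99\n2,,4.50\n3,Gadget,2.25\n"
summary = process_dropped_file(sample)
# summary["loaded"] == 2, summary["rejected"] == 1
```

The summary dictionary is what feeds the report the business leads receive, so rejected rows are visible instead of silently dropped.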
Customer Y – We need to fail fast. We need to know it’s broken early so we can fix it or set expectations that we will not meet sprint commitments.
At Customer Y each sprint delivers large, complex projects. Four to six days are allocated for development, three days for testing, and releases happen after the two-week sprint has completed. Data and deployment are truly tested only during deployment to UAT, which starts in the second week of the sprint cycle. This is when developer code starts interacting with production data.
Why wait that long for developer code to interact with production data? If we have issues, let’s find them fast. This is a bit more complex than Customer X – we need to get this right, otherwise it will just be another “process” that doesn’t get us closer to quality.
We need to recreate production data and test our changes against it:
- We leverage Azure DevOps, IaaS, and IaC to recreate the production environment in the Azure cloud.
- In this customer’s case, most test cases are essentially SQL scripts, so we can capture them in an automated test suite.
- Deploy SSIS packages to the recreated SSIS server, deploy the SQL bits to the recreated production instance, run the SSIS packages, then run the automated test suite.
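The automated suite in the last step can be as simple as a harness that runs each SQL check and flags any that return rows. A minimal sketch, using an in-memory SQLite database as a stand-in for the recreated production instance (the table, check names, and queries are hypothetical; each check is written to select discrepancies, so passing means zero rows):

```python
import sqlite3

def run_sql_test_suite(conn, test_cases):
    """Run each named SQL check; a check passes when its query
    returns no rows (queries are written to select discrepancies)."""
    results = {}
    for name, sql in test_cases.items():
        rows = conn.execute(sql).fetchall()
        results[name] = (len(rows) == 0)
    return results

# Stand-in for the recreated production data
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 10.0), (2, -5.0)])

checks = {
    "no_negative_totals": "SELECT id FROM orders WHERE total < 0",
    "no_null_totals": "SELECT id FROM orders WHERE total IS NULL",
}
results = run_sql_test_suite(conn, checks)
# results: {"no_negative_totals": False, "no_null_totals": True}
```

Writing every check as a “select the bad rows” query keeps the suite easy to extend: a new data-quality concern becomes one more SQL script dropped into the dictionary.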
Once testing is complete, we tear the environment down. This entire process happens twice a day, every day.
We email a build-health report to the business owners. As we continually test against production data, the team observes fewer and fewer production issues.
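The emailed report itself only needs to turn the pass/fail results into something a business owner can scan. A minimal sketch (the build ID and check names are hypothetical; in production the returned text would be sent through the team’s email tooling):

```python
def build_health_report(build_id: str, results: dict) -> str:
    """Render a pass/fail dictionary as a short plain-text health report."""
    passed = sum(results.values())
    total = len(results)
    status = "HEALTHY" if passed == total else "NEEDS ATTENTION"
    lines = [f"Build {build_id}: {status} ({passed}/{total} checks passed)"]
    for name, ok in sorted(results.items()):
        lines.append(f"  [{'PASS' if ok else 'FAIL'}] {name}")
    return "\n".join(lines)

report = build_health_report(
    "2024.10.1", {"row_counts_match": True, "no_orphan_keys": False}
)
# report begins: "Build 2024.10.1: NEEDS ATTENTION (1/2 checks passed)"
```

Because the report is generated twice a day from the same checks, the business owners can watch the trend line rather than reacting to individual incidents.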
Quality improves and developers are free for more frivolous pursuits, like this.