4 key requirements to succeed with AI in your Azure, BizTalk (or any other) monitoring

March 8, 2018

AIMS helps businesses reduce costs and increase innovation through our intelligent performance monitoring and analytics solution, deeply focused on integrations – the beating heart of any digital business.

When we say intelligent, we mean that we apply AI and machine learning to deliver real-time insight and control, freeing IT resources to focus on innovation and providing a common language and reports for both IT and the business.

 

AI wasn’t an add-on, it was our starting point

We’ve been using machine learning since the first version of our product was released in 2014. Machine learning was never optional for us – we were founded on scientific research from the Institute for Informatics at the University of Oslo.

So, our roots are firmly grounded in science.

We didn’t jump on the AI and machine learning hype. Quite the contrary: with the intellectual property from the University of Oslo, we took a deep look at how we could apply our AI & ML science to real-world problems.

As discussed in this blog post by Redpoint Venture Capitalist Tom Tunguz, the most frequent use of machine learning today is in efficiency applications.

Tom concludes “…if you are looking to build a machine learning based SaaS company, find a really expensive internal process and automate it.”

That’s what we did.

 

Automating an expensive internal process

So, we had the proprietary machine learning algorithms, and now we needed a market. We ended up looking at monitoring of complex IT integrations that support digital business processes (using BizTalk, Azure and other technologies). Not only are enterprise application integration, system integration and B2B integration costly to develop, they also bring major governance challenges and severe downtime costs. That looked like an expensive internal process to automate!

In our analysis, we worked with an ROI model to ensure we could deliver a strong value proposition. In the ROI model we identified these key areas for cost improvements:

  1. Internal efficiency improvements – how much internal staff time can be freed up, primarily in IT admin, monitoring & development, and the resulting reduced costs or freed-up capacity.
  2. External efficiency improvements – the same as above, but for external providers, whose staff costs are typically significantly higher.
  3. Downtime revenue loss – directly related to how much revenue the IT systems support and the increased uptime gained from predictive, 360-degree & deeper insight.
  4. Downtime productivity loss – the same as downtime revenue loss, but for the productivity delivered by the IT systems, for example work orders processed, supply chain or manufacturing. For work orders this could simply be the number of internal work hours lost to system unavailability at x $ per hour (see the sketch after this list).
  5. Staff attrition costs – succeeding with digitalization and innovation requires attracting the best talent. A set-up built on traditional tools and manual resources gives skilled & scarce people unfulfilling work that is likely to increase your staff attrition. This translates indirectly into higher recruiting and onboarding costs for new employees and, maybe more important, lost velocity & innovation.
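
To make the arithmetic behind items 1–4 concrete, here is a minimal sketch in Python. Every figure in it is a hypothetical assumption used purely for illustration – it is not our actual ROI model, which we’ll return to in a later post.

    # Simplified, hypothetical ROI sketch for cost areas 1-4 above.
    # All figures are illustrative assumptions, not AIMS benchmarks.

    # 1. Internal efficiency: admin/monitoring/development hours freed up
    internal_hours_freed = 800
    internal_rate = 60            # assumed internal cost per hour, $

    # 2. External efficiency: consultant hours freed up at a higher rate
    external_hours_freed = 300
    external_rate = 150           # assumed external cost per hour, $

    # 3. Downtime revenue loss avoided: revenue carried by the integrations
    #    multiplied by the expected uptime improvement
    annual_revenue_supported = 20_000_000
    uptime_improvement = 0.001    # e.g. from 99.8% to 99.9% availability

    # 4. Downtime productivity loss avoided: work hours lost while systems are down
    downtime_hours_avoided = 500
    productivity_rate = 45        # assumed $ per lost work hour

    savings = {
        "internal efficiency": internal_hours_freed * internal_rate,
        "external efficiency": external_hours_freed * external_rate,
        "downtime revenue": annual_revenue_supported * uptime_improvement,
        "downtime productivity": downtime_hours_avoided * productivity_rate,
    }

    annual_solution_cost = 40_000  # hypothetical license + operations cost
    total_savings = sum(savings.values())
    roi = (total_savings - annual_solution_cost) / annual_solution_cost

    for area, value in savings.items():
        print(f"{area:>22}: ${value:,.0f}")
    print(f"{'total savings':>22}: ${total_savings:,.0f}")
    print(f"{'ROI':>22}: {roi:.1%}")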

In the original model we excluded cost components that we would have included in a model today. These include:

  1. Reputational damage / customer loyalty
  2. System restore costs
  3. Regulatory breach

Obviously, this model relies on a lot of inputs, but as long as an organization satisfied at least one of the following assumptions, the ROI & value proposition turned out solid:

  1. IT systems were core to delivering on digital business processes and digitalization
  2. IT systems were to a material extent real-time, not only batch
  3. The company recognized the need to continuously invest in IT and integration to deliver more for the business

We’ll follow up on this ROI model in a later blog post.

 

What about deep, wide and 360° data?

So, with the ROI model we had validated the value machine learning could bring to improve efficiencies in monitoring, troubleshooting and governance of complex hybrid integrations.

But proprietary algorithms and an identified, validated market are still not all that’s needed. Probably the most crucial piece is missing: access to deep data, “wide” data and 360-degree data in real time, without any performance impact on the underlying system.

This is the area where we have spent most of our development effort, and it is a core part of our competitive strength.

For a typical system – such as Azure, BizTalk or SQL – our solution will collect at least 10,000 performance parameters in real time without any performance impact.

You may ask: “Why do you fetch so much data?”

And we will answer: “Because it’s relevant.”

 

What scares you most?

What would happen if we didn’t gather so much data? Could you live with knowing that only 2% of your critical business is covered (as is typically the case)? You should have 100% (or 360°) of your business covered – at least the critical business processes supported by IT; how to identify those is a separate topic. Or rephrased: what scares you the most – what you know, or what you don’t know?

With most traditional monitoring solutions, performance data is either limited in availability or requires performance-intrusive collection. The effect is that you either slow down the system and potentially introduce problems, or you cover only a fraction of the risk your business is exposed to.

 

Four key requirements for success with AI in integrations

So, summing up: AI and machine learning do not by themselves solve your monitoring challenge. They are just one piece of the solution; availability of the necessary data is fundamental and needs to satisfy the following 4 key requirements (a small collection sketch follows the list):

  1. BIG data not small data (relational, performance, events, transactional)
  2. The data needs to be close to real-time
  3. Extracting the data cannot introduce material performance overhead on the applicable system
  4. Collecting the data needs to be an automated process without the need for material configuration or ongoing maintenance
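
As a rough illustration of requirements 3 and 4 – low-overhead, automated collection – here is a minimal sketch that samples a handful of local system counters on a timer using the open-source psutil library. It is a toy stand-in for a real monitoring agent (which would cover thousands of parameters across Azure, BizTalk and SQL), not a description of how AIMS itself collects data.

    import time
    import psutil  # open-source, cross-platform metrics library (assumed available)

    def sample_metrics():
        """Collect a small set of system counters; a real agent gathers far more."""
        vm = psutil.virtual_memory()
        disk = psutil.disk_io_counters()
        net = psutil.net_io_counters()
        return {
            "timestamp": time.time(),
            "cpu_percent": psutil.cpu_percent(interval=None),  # non-blocking sample
            "memory_percent": vm.percent,
            "disk_read_bytes": disk.read_bytes,
            "disk_write_bytes": disk.write_bytes,
            "net_bytes_sent": net.bytes_sent,
            "net_bytes_recv": net.bytes_recv,
        }

    def collect(interval_seconds=15, samples=4):
        """Unattended sampling loop – no per-metric configuration or ongoing maintenance."""
        history = []
        for _ in range(samples):
            history.append(sample_metrics())
            time.sleep(interval_seconds)
        return history

    if __name__ == "__main__":
        for row in collect(interval_seconds=1, samples=3):
            print(row)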

Lastly, for the machine learning to work, you need algorithms that are generic enough to work out of the box – without the need for guided learning, but with the opportunity for it.
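
To show what “out of the box, without guided learning” can look like in practice, here is a minimal unsupervised sketch using scikit-learn’s IsolationForest, which needs no labelled training examples. It is a generic illustration of the idea, not the algorithms AIMS uses.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Synthetic "normal" samples, e.g. rows of [cpu_percent, memory_percent, msgs_per_sec]
    rng = np.random.default_rng(42)
    normal = rng.normal(loc=[40, 55, 200], scale=[5, 4, 20], size=(500, 3))

    # Unsupervised fit: no labels, no guided learning required
    model = IsolationForest(contamination=0.01, random_state=42)
    model.fit(normal)

    # New observations: one typical, one clearly abnormal (CPU spike, throughput collapsed)
    new_samples = np.array([
        [42, 54, 210],   # looks like normal behaviour
        [95, 60, 5],     # anomalous
    ])
    print(model.predict(new_samples))            # 1 = normal, -1 = anomaly
    print(model.decision_function(new_samples))  # higher score = more normal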

 

Schedule a live AIMS demo from an integration expert
