Algorithmia, a leader in ML operations and management software, announces Insights, a new solution for ML model performance monitoring that provides reliable access to algorithm inference and operations metrics.
Many organizations today lack the ability to monitor the performance of ML models as they make their way into production applications, and those that do rely on a patchwork of disparate tools and manual processes, often without the critical data required to satisfy stakeholder requirements. Without comprehensive monitoring and centralized data collection, organizations struggle with model drift, risk of failure, and an inability to meet performance targets as the environment and customer behavior shift.
Algorithmia Insights addresses these problems by combining operational metrics (execution time, request identification, etc.) with user-defined inference metrics (confidence, accuracy, etc.), both of which are essential for identifying and correcting model drift, data skew, and negative feedback loops. The data is accessible within Algorithmia’s Enterprise product, with the goal of delivering metrics where they are most actionable for the teams responsible for these production systems.
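As a rough illustration of the kind of combined payload described here, the sketch below pairs a user-defined inference metric (a confidence score) with operational metrics (execution time, request ID). This is plain Python, not Algorithmia’s actual API; the metric names, the model interface, and the payload shape are assumptions for illustration only.

```python
import time


def score_with_metrics(model, features, request_id):
    """Run inference and return the prediction plus a combined metrics payload.

    Illustrative only: the payload layout and metric names are assumptions,
    not Algorithmia Insights' actual schema.
    """
    start = time.time()
    prediction, confidence = model.predict(features)  # hypothetical model interface
    elapsed_ms = (time.time() - start) * 1000.0

    metrics = {
        # Operational metrics
        "execution_time_ms": round(elapsed_ms, 2),
        "request_id": request_id,
        # User-defined inference metrics
        "confidence": confidence,
    }
    return prediction, metrics
```

Reporting both kinds of metrics in a single payload is what lets a monitoring backend correlate a drop in confidence with, say, a spike in execution time for the same requests.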
“Organizations have specific needs when it comes to ML model monitoring and reporting,” said Diego Oppenheimer, CEO of Algorithmia. “For example, they are concerned with compliance as it pertains to external and internal regulations, model performance for improvement of business outcomes, and reducing the risk of model failure. Algorithmia Insights helps users overcome these issues while making it easier to monitor model performance in the context of other operational metrics and variables.”
Algorithmia has partnered with Datadog to launch Insights with an integration that allows customers to stream operational as well as user-defined inference metrics from Algorithmia to Kafka and on into Datadog using Datadog’s Metrics API. This metrics pipeline can be used to instrument, measure, and monitor your ML models to immediately detect data drift, model drift, and model bias. With Datadog, the performance of your ML models can now be correlated with the performance of your entire infrastructure within a single pane of glass.
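A minimal sketch of the last hop in that pipeline, assuming the metrics land on a Kafka topic as JSON (the topic name, broker address, message shape, and tag names below are assumptions, not the integration’s documented format), could use the kafka-python consumer and Datadog’s Python client to forward each value through the Metrics API:

```python
import json
import time

from kafka import KafkaConsumer      # pip install kafka-python
from datadog import initialize, api  # pip install datadog

# Authenticate against the Datadog Metrics API.
initialize(api_key="<DATADOG_API_KEY>", app_key="<DATADOG_APP_KEY>")

# Consume JSON metric payloads; topic name and broker address are assumptions.
consumer = KafkaConsumer(
    "algorithmia.insights",
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda m: json.loads(m.decode("utf-8")),
)

for message in consumer:
    # Assumed payload shape: {"algorithm": "...", "metrics": {name: value, ...}}
    payload = message.value
    tags = [f"algorithm:{payload.get('algorithm', 'unknown')}"]
    now = time.time()
    for name, value in payload.get("metrics", {}).items():
        # Forward each operational or inference metric as a Datadog gauge.
        api.Metric.send(
            metric=f"algorithmia.insights.{name}",
            points=[(now, float(value))],
            tags=tags,
            type="gauge",
        )
```

Once the metrics arrive in Datadog, they can be graphed and alerted on alongside infrastructure metrics, which is the single-pane-of-glass view the integration is aiming for.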
“ML models are at the heart of today’s business. Understanding how they perform both statistically and operationally is key to success,” said Ilan Rabinovitch, Vice President, Product and Community, Datadog. “By combining the findings of Algorithmia Insights and Datadog’s deep visibility into code and integration, our mutual customers can drive more accurate and performant outcomes from their ML models.”