Kolena, a startup developing tools for testing AI models, raises $15 million

Kolena, a startup that develops tools for testing, evaluating and validating the performance of AI models, announced today that it has raised $15 million in a funding round led by Lobby Capital with participation from SignalFire and Bloomberg Beta.

The new money brings Kolena’s total raised to $21 million and will be used to grow the company’s research team, support collaboration with regulators and expand Kolena’s sales and marketing efforts, co-founder and CEO Mohamed Elgendy told Timenow in an email interview.

“The use cases for AI are huge, but AI lacks the trust of both builders and the public,” Elgendy said. “This technology has to be deployed in a way that makes digital experiences better, not worse. The genie isn’t going back in the bottle, but as an industry we can make sure we’re granting the right wishes.”

Elgendy started Kolena in 2021 with Andrew Shi and Gordon Hart, with whom he worked for about six years in AI departments at companies such as Amazon, Palantir, Rakuten and Synapse. With Kolena, the trio wanted to build a “model quality framework” that provides unit testing and end-to-end testing for models in a customizable, business-friendly package.

“First and foremost, we wanted to create a new framework for model quality – not just a tool that simplifies the current approaches,” said Elgendy. “Kolena makes it possible to conduct continuous tests at the unit or scenario level. It also provides end-to-end testing of the entire AI and machine learning product, not just subcomponents.”

To this end, Kolena can surface insights that identify gaps in the coverage of AI model test data, Elgendy said. The platform also includes risk management features that track the risks associated with deploying a particular AI system (or systems). Through Kolena’s user interface, customers can create test cases to evaluate a model’s performance, pinpoint the likely causes of its underperformance and compare it against various other models.

“With Kolena, teams can manage and run tests for specific scenarios their AI product needs to handle, rather than applying a general ‘aggregate’ metric like an accuracy score, which can obscure the details of a model’s performance,” Elgendy said. “For example, a model with 95% accuracy isn’t necessarily better at detecting cars than one with 89% accuracy. Each has its own strengths and weaknesses – detecting cars in different weather conditions or at different occlusion levels, detecting a car’s orientation, and so on.”
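To illustrate the idea Elgendy describes (this is a generic sketch, not Kolena’s product or API; the scenario tags and data below are hypothetical), slicing a test set by scenario and reporting per-scenario accuracy can expose weaknesses that a single aggregate score hides:

```python
from collections import defaultdict

def scenario_accuracies(examples):
    """Group test examples by scenario tag and compute per-scenario accuracy.

    Each example is a dict with keys: 'scenario', 'prediction', 'label'.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for ex in examples:
        total[ex["scenario"]] += 1
        if ex["prediction"] == ex["label"]:
            correct[ex["scenario"]] += 1
    return {s: correct[s] / total[s] for s in total}

# Hypothetical car-detection results: the aggregate accuracy is 50%,
# but the breakdown shows the model fails specifically in rain and occlusion.
results = [
    {"scenario": "clear", "prediction": 1, "label": 1},
    {"scenario": "clear", "prediction": 1, "label": 1},
    {"scenario": "rain", "prediction": 0, "label": 1},
    {"scenario": "occluded", "prediction": 0, "label": 1},
]
print(scenario_accuracies(results))
# {'clear': 1.0, 'rain': 0.0, 'occluded': 0.0}
```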

If Kolena works as advertised, it could indeed be useful for data scientists who spend a lot of time creating models for AI applications.

According to one survey, AI engineers report spending only 20% of their time analyzing and developing models; the rest goes to obtaining and cleaning the data used to train them. Another report concludes that, owing to the difficulty of building models that perform accurately, only about 54% of models make it from pilot project to production.

But Kolena is not the only company building tools for testing, monitoring and validating models. Alongside established players such as Amazon, Google and Microsoft, several startups are trying new approaches to measuring the accuracy of models before – and after – they go into production.

Prolific recently raised $32 million for its AI model training and stress-testing platform, which uses a crowdsourced network of testers. Meanwhile, Robust Intelligence and Deepchecks are building their own toolsets that let companies catch errors in AI models – and validate them continuously. And Bobidi rewards developers for testing companies’ AI models.

However, Elgendy argues that Kolena’s platform is one of the few that gives customers “full control” over the data types, evaluation logic and other components that make up an AI model test. He also highlights Kolena’s data protection approach, which spares customers from uploading their data or models to the platform; Kolena stores only model test results for future benchmarking, and these can be deleted on request.

“Minimizing the risk of an AI and machine learning system requires rigorous testing before deployment, but companies lack robust tools or processes for model validation,” Elgendy said. “Ad hoc model testing is the norm today, and unfortunately, so are failed machine learning proofs of concept. Kolena focuses on comprehensive, thorough model evaluation. We give machine learning managers, product managers and executives unprecedented insight into a model’s test coverage and product-specific functional requirements, so they can effectively influence product quality from the very beginning.”

San Francisco-based Kolena, which employs 28 full-time staff, declined to share the number of clients it is currently working with. However, Elgendy said that for the time being, the company is taking a “selective approach” to partnering with “mission-critical” companies and plans to launch early-stage team packages for mid-sized organizations and AI startups in the second quarter of 2024.
