Hazelcast Speeds Time-to-Market for Operationalization of ML in Enterprise Applications

A recent press release reports, “Hazelcast, the leading in-memory computing platform, today announced the easiest way to deploy machine learning (ML) models into ultra-low latency production with its support for running native Python- or Java-based models at real-time speeds. The latest release of the event stream processing engine, Hazelcast Jet, now helps enterprises unlock profit potential faster by accelerating and simplifying ML and artificial intelligence (AI) deployments for mission-critical applications. Recent research shows that 33% of IT decision-makers see ML and AI as the greatest opportunity to unlock profits, however, 86% of organizations are having difficulty in managing the advances in technology. From its recent integration as an Apache Beam Runner to the new features announced today, Hazelcast Jet continues to simplify how enterprises can deploy ultra-fast stream processing to support time-sensitive applications and operations pertaining to ML, edge computing and more.”

The release goes on, “Fast-moving enterprises, such as financial services organizations, often rely on resource-heavy frameworks to create ML and AI applications that analyze transactions, process information and serve customers. These organizations are burdened with infrastructural complexity that not only inhibits their ability to get value from ML and AI, but introduces additional latency throughout the application. With its new capabilities, Hazelcast Jet significantly reduces time-to-deployment through new inference runners for any native Python- and Java-based models. The new Jet release also includes expanded database support and other updates focused on data integrity.”
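For context on what an "inference runner for native Python-based models" looks like in practice: Hazelcast Jet's Python integration calls into a user-supplied handler module from inside the streaming pipeline. Below is a minimal sketch of such a handler, assuming Jet's convention of a `transform_list` function that maps a batch of input strings to one output string each; the module name and the toy fraud-scoring logic are purely illustrative, not part of the announcement.

```python
# Illustrative Jet Python handler module (hypothetical file name: score_txn.py).
# Jet's Python mapping stage batches stream items and passes them to a
# function named transform_list, which must return one output string per
# input item, in order.

def transform_list(input_list):
    # Placeholder "model": flag large transaction amounts. A real handler
    # would load a trained model (e.g. a pickled scikit-learn estimator)
    # once at module import time and call its predict method here.
    return ["fraud" if float(amount) > 10_000 else "ok"
            for amount in input_list]
```

On the Java side of the pipeline, a module like this would typically be attached via Jet's `PythonTransforms.mapUsingPython` stage configured with a `PythonServiceConfig` pointing at the handler file; the exact wiring depends on the Jet version in use.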

Read more at PR Newswire.

Image used under license from Shutterstock.com
