New Confluent Cloud for Apache Flink capabilities simplify real-time AI development

Confluent announced new capabilities in Confluent Cloud for Apache Flink® that streamline and simplify the process of developing real-time artificial intelligence (AI) applications. Flink Native Inference cuts through complex workflows by enabling teams to run any open-source AI model directly in Confluent Cloud. Flink search unifies data access across multiple vector databases, streamlining discovery and retrieval within a single interface. New built-in machine learning (ML) functions bring AI-driven use cases, such as forecasting and anomaly detection, directly into Flink SQL, making advanced data science effortless. These innovations redefine how businesses can harness AI for real-time customer engagement and decision-making. 

“Building real-time AI applications has been too complex for too long, requiring a maze of tools and deep expertise just to get started,” said Shaun Clowes, Chief Product Officer at Confluent. “With the latest advancements in Confluent Cloud for Apache Flink, we’re breaking down those barriers—bringing AI-powered streaming intelligence within reach of any team. What once required a patchwork of technologies can now be done seamlessly within our platform, with enterprise-level security and cost efficiencies baked in.” 

The AI boom is here. According to McKinsey, 92% of companies plan to increase their AI investments over the next three years. Organisations want to seize this opportunity and capitalise on the promises of AI. However, the road to building real-time AI apps is complicated. Developers are juggling multiple tools, languages, and interfaces to incorporate ML models and pull valuable context from the many places that data lives. This fragmented workflow leads to costly inefficiencies, slowdowns in operations, and AI hallucinations that can damage reputations.

Simplify the Path to AI Success

“Confluent helps us accelerate copilot adoption for our customers, giving teams access to valuable real-time, organisational knowledge,” said Steffen Hoellinger, Co-founder and CEO at Airy. “Confluent’s data streaming platform with Flink AI Model Inference simplified our tech stack by enabling us to work directly with large language models (LLMs) and vector databases for retrieval-augmented generation (RAG) and schema intelligence, providing real-time context for smarter AI agents. As a result, our customers have achieved greater productivity and improved workflows across their enterprise operations.”

As the only serverless stream processing solution on the market that unifies real-time and batch processing, Confluent Cloud for Apache Flink empowers teams to handle both continuous streams of data and batch workloads within a single platform. This eliminates the complexity and operational overhead of managing separate processing solutions. The newly released AI, ML, and analytics features let businesses streamline more workflows and unlock greater efficiency. These features are available in an early access program, which is open for signup to Confluent Cloud customers.

  • Flink Native Inference: Run open-source AI models in Confluent Cloud without added infrastructure management.

Developers often use separate tools and languages when working with ML models and data pipelines, leading to complex and fragmented workflows and outdated data. Flink Native Inference simplifies this by enabling teams to run open-source or fine-tuned AI models directly in Confluent Cloud. This approach offers greater flexibility and cost savings. Plus, the data never leaves the platform for inference, adding a greater level of security.
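Confluent has previously described model inference in Flink working through a `CREATE MODEL` registration and the `ML_PREDICT` table function in Flink SQL. A minimal sketch along those lines, with all model, connection, table, and column names being illustrative assumptions:

```sql
-- Sketch: register a model, then invoke it row-by-row from Flink SQL.
-- Provider, task, and connection options are illustrative assumptions.
CREATE MODEL ticket_classifier
INPUT (ticket_text STRING)
OUTPUT (label STRING)
WITH (
  'task' = 'classification',
  'provider' = 'confluent'   -- assumed id for native (in-platform) inference
);

-- Join each streaming row against the model's prediction.
SELECT ticket_id, label
FROM support_tickets,
     LATERAL TABLE(ML_PREDICT('ticket_classifier', ticket_text));
```

Because the model runs inside Confluent Cloud, the `ticket_text` payload never has to leave the platform to reach an external inference endpoint, which is the security property the feature emphasises.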

  • Flink search: Use just one interface to access data from multiple vector databases.

Vector searches provide LLMs with the necessary context to prevent hallucinations and ensure trustworthy results. Flink search simplifies accessing real-time data from vector databases, such as MongoDB, Elasticsearch, and Pinecone. This eliminates the need for complex ETL processes or manual data consolidation, saving valuable time and resources while ensuring that data is contextual and always up to date.
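The announcement does not spell out the query syntax for Flink search. Purely as an illustration of the idea, one could imagine a vector store exposed behind a table function and joined against a stream of queries; the `VECTOR_SEARCH` function, index name, and result columns below are assumptions for the sketch, not confirmed Confluent syntax:

```sql
-- Illustrative only: VECTOR_SEARCH, 'product_docs', and all column
-- names are assumed, not documented Confluent syntax.
SELECT q.query_id, r.doc_id, r.score
FROM user_queries AS q,
     LATERAL TABLE(
       VECTOR_SEARCH('product_docs', q.query_embedding, 5)  -- top-5 matches
     ) AS r(doc_id, score);
```

The point of the feature is that the same query shape would work whether `product_docs` lives in MongoDB, Elasticsearch, or Pinecone, with no ETL pipeline copying vectors into an intermediate store.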

  • Built-in ML functions: Make data science skills accessible to more teams.

Many data science solutions require highly specialised expertise, creating bottlenecks in development cycles. Built-in ML functions simplify complex tasks, such as forecasting, anomaly detection, and real-time visualisation, directly in Flink SQL. These features make real-time AI accessible to more developers, enabling teams to gain actionable insights faster and empowering businesses to make smarter decisions with greater speed and agility.
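A sketch of how such functions might appear in a query. `ML_FORECAST` and `ML_DETECT_ANOMALIES` follow the announced feature names, but their exact signatures, and the table and column names, are assumptions:

```sql
-- Sketch: forecasting and anomaly detection over a sliding window of
-- the last 100 readings per sensor. Signatures are assumed, not documented.
SELECT
  sensor_id,
  reading_time,
  ML_FORECAST(reading)          OVER w AS predicted_reading,
  ML_DETECT_ANOMALIES(reading)  OVER w AS is_anomaly
FROM sensor_readings
WINDOW w AS (
  PARTITION BY sensor_id
  ORDER BY reading_time
  ROWS BETWEEN 99 PRECEDING AND CURRENT ROW
);
```

The appeal is that this is a plain `SELECT`: no separate model-training pipeline or Python service is needed before a team can flag anomalous readings in a stream.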

“The ability to integrate real-time, contextualised, and trustworthy data into AI and ML models will give companies a competitive edge with AI,” said Stewart Bond, Vice President, Data Intelligence and Integration Software at IDC. “Organisations need to unify data processing and AI workflows for accurate predictions and LLM responses. Flink provides a single interface to orchestrate inference and vector search for RAG, and having it available in a cloud-native and fully managed implementation will make real-time analytics and AI more accessible and applicable to the future of generative AI and agentic AI.”
