Capella AI Services alongside NVIDIA AI Enterprise

Couchbase is working with NVIDIA to help enterprises accelerate the development of agentic AI applications by adding support for NVIDIA AI Enterprise, including its development tools, the NVIDIA NeMo framework, and NVIDIA Inference Microservices (NIM). Capella adds support for NIM within its AI Model Services and provides access to the NVIDIA NeMo framework for building, training, and tuning custom language models. The framework supports data curation, training, model customization, and RAG workflows for enterprises.

Help Accelerate Agentic Application Development

Capella AI Services with NVIDIA AI Enterprise support the full agentic delivery lifecycle

Unlike databases that support vector search but offer no help creating or using those vectors, Couchbase Capella manages operational, analytic, AI, and mobile data, while also simplifying the use of that data for AI workflows, within the same platform.

These AI data workflows are where complexity has shifted for developers, as they now include:

    • data preparation
    • vectorization
    • prompt engineering and caching
    • model interactions including on-device
    • response caching
    • transcript storage
    • response validation
    • guardrail management
    • agent development and code reuse
    • agent governance

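To make the prompt- and response-caching steps above concrete, here is a minimal sketch in plain Python of a response cache keyed by a normalized prompt hash. The class and method names are hypothetical illustrations, not Capella's actual API:

```python
import hashlib

class ResponseCache:
    """Toy response cache keyed by a normalized prompt hash.

    A minimal sketch of the prompt/response-caching idea only;
    names here are hypothetical, not Capella's API.
    """

    def __init__(self):
        self._store = {}  # prompt hash -> cached model response

    def _key(self, prompt: str) -> str:
        # Normalize whitespace and case so trivially different
        # phrasings of the same prompt still hit the cache.
        normalized = " ".join(prompt.lower().split())
        return hashlib.sha256(normalized.encode()).hexdigest()

    def get(self, prompt: str):
        return self._store.get(self._key(prompt))

    def put(self, prompt: str, response: str) -> None:
        self._store[self._key(prompt)] = response

cache = ResponseCache()
cache.put("What is RAG?", "Retrieval-augmented generation ...")
print(cache.get("  what is RAG? ") is not None)  # normalized lookup hits
```

In production, a cache like this would typically add semantic (vector-similarity) matching and TTL-based expiry rather than exact hash lookups, but the hash-keyed version shows the basic shape.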
The combined solution enhances and simplifies Capella’s retrieval-augmented generation (RAG) capabilities, allowing customers to efficiently power high-throughput, AI-powered applications while maintaining model flexibility. With access to the NeMo framework, developers gain a wealth of productivity tools and models within one environment.

Developers also gain access to over 30 models from BigCode, Microsoft, NVIDIA, Mistral, Meta, and Google via NIM. With NVIDIA AI Enterprise, they can build, train, and fine-tune models for specific applications, then deploy them with the benefits of GPU-enabled acceleration.

Enterprises are struggling to trust AI for a number of reasons

Organizations building and deploying high-throughput AI applications struggle to ensure agent reliability and compliance and to keep agents from drifting off their intended operational path. This drift occurs for a number of reasons beyond the obvious ones, such as PII data leaks or model hallucinations. Drift can also happen over time: models can simply change their opinions, and therefore their responses and conclusions, as their training evolves and as the data contained in their prompts changes.

Not only that, but models often mistakenly retain conversational context that is no longer valid within an active conversation. Capella and NVIDIA AI Enterprise help developers build tighter guardrails and more intentional agents that drift less often, maintain proper context over time, and therefore perform as their authors intended. For example, agent conversation transcripts can be captured and compared in real time to evaluate model response accuracy.
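One way to picture the transcript-and-context problem is a store that retains the full conversation for later accuracy review while feeding the model only a rolling window of recent turns, so stale context ages out. The sketch below is a hypothetical illustration in plain Python, not a Capella or NIM API:

```python
import time
from collections import deque
from dataclasses import dataclass, field

@dataclass
class Turn:
    role: str   # e.g. "user" or "agent"
    text: str
    ts: float = field(default_factory=time.time)

class Transcript:
    """Hypothetical sketch: durable transcript plus a bounded
    context window, so stale turns drop out of the model's view
    while the full log remains available for auditing."""

    def __init__(self, max_turns: int = 6):
        self.full_log: list[Turn] = []                       # everything, for review
        self.window: deque[Turn] = deque(maxlen=max_turns)   # active model context

    def add(self, role: str, text: str) -> None:
        turn = Turn(role, text)
        self.full_log.append(turn)
        self.window.append(turn)   # oldest turn is evicted automatically

    def context(self) -> list[str]:
        return [f"{t.role}: {t.text}" for t in self.window]

t = Transcript(max_turns=3)
for i in range(5):
    t.add("user", f"message {i}")
# full_log keeps all 5 turns; the model context holds only the last 3
```

A real deployment would prune by token budget and relevance rather than a fixed turn count, but the separation of durable transcript from active context is the key idea.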

Safe AI without compromise

Our joint solution offers a safe and fast way for organizations to build, deploy, and evolve AI-powered applications. NVIDIA’s solution leverages pre-tested LLMs and tools, including NVIDIA NeMo Guardrails, to help organizations accelerate AI development while enforcing policies and safeguarding against AI hallucinations. Capella, in turn, maintains AI’s short- and long-term memory through caching and transcript storage, while its performance and proximity to both the models and the execution infrastructure reduce conversation latency, which is critical for agent deployment. Running in NVIDIA’s rigorously tested, production-ready NIM microservices lets enterprises meet the privacy, performance, scalability, and latency requirements of their agentic applications.
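As a rough illustration of the guardrail idea, the sketch below shows a pre-generation policy check in plain Python. NeMo Guardrails itself uses declarative policy definitions rather than code like this; the pattern list and function names here are hypothetical:

```python
import re

# Hypothetical policy patterns (illustration only, not NeMo Guardrails' API):
# block prompts that appear to contain PII or a prompt-injection attempt.
BLOCKED_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),             # looks like a US SSN
    re.compile(r"(?i)ignore previous instructions"),   # common injection tell
]

def passes_guardrails(prompt: str) -> bool:
    """Return True if no blocked pattern appears in the prompt."""
    return not any(p.search(prompt) for p in BLOCKED_PATTERNS)

def guarded_call(prompt: str, model) -> str:
    """Only forward the prompt to the model if it clears the policy check."""
    if not passes_guardrails(prompt):
        return "Request blocked by policy."
    return model(prompt)
```

Production guardrails also validate model *outputs* (for hallucinations and leaked data), not just inputs, but the gate-before-generate pattern is the same on both sides.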

One final benefit

Matt McDonough, our SVP of Product and Partners, sums up the value of our collaboration:

Enterprises require a unified and highly performant data platform to underpin their AI efforts and support the full application lifecycle – from development through deployment and optimization. By integrating NVIDIA NIM microservices into Capella AI Model Services, we’re giving customers the flexibility to run their preferred AI models in a secure and governed way, while providing better performance for AI workloads and seamless integration of AI with transactional, analytic, AI, and mobile data. Capella AI Services allow customers to accelerate their RAG and agentic applications with confidence, knowing they can scale and optimize their applications as business needs evolve.

This combined NVIDIA/Couchbase solution not only helps developers deploy, scale, and optimize agentic applications more quickly and safely, it also helps DevOps teams accelerate and manage AgentOps: deploying agents with optimized models, low-latency performance, enterprise security, governance, and observability. And for project leads and budget holders, the combined solution maximizes the ROI of AI investments by pairing Capella’s performance and data consolidation advantages with close proximity to the NVIDIA AI Enterprise environment, all running on NVIDIA-accelerated infrastructure. There may not be a better combination for getting value from artificial intelligence.

See us at NVIDIA GPU Technology Conference

Couchbase is a silver sponsor at NVIDIA GTC, taking place March 17-24 in San Jose, CA. To learn more about how Couchbase’s work with NVIDIA accelerates agentic AI application development, stop by booth 2004.

Learn more about Capella AI Services and sign up for the private preview.

Author

Posted by Jeff Morris, VP Product Marketing

Jeff Morris is VP of Product and Solutions Marketing at Couchbase. He's spent over three decades marketing software development tools, databases, analytic tools, cloud services, and other open source products. He'd be the first to tell you that anyone looking for a fast, flexible, familiar, and affordable cloud-to-edge database-as-a-service can stop looking after they check out Couchbase.
