What’s New in Couchbase

Capella AI Services help enterprises safely bring agent-based applications into production.

Couchbase Capella AI Services 2024

 

Streamlining the development of agentic AI applications

Capella AI Services enable enterprises to tackle the expanding data challenges associated with AI development and deployment, simplifying the process of creating secure, scalable agent-based AI applications. By using Capella AI Services, organizations can efficiently prototype, build, test, and deploy application agents, keeping models and data in close proximity. This setup minimizes latency and reduces operational costs, which are common hurdles when integrating new technology components and workflows.

Capella AI Services include:

  • Model Services: Offers managed endpoints for leading LLMs and embedding models, and provides value-added capabilities, such as prompt and conversation caching, guardrails, and keyword filtering to support RAG and agentic workflows. The colocation of data and AI models makes it easy to meet the desired privacy and latency requirements for building enterprise-grade RAG applications, without needing to use expensive private links or custom solutions.
  • Unstructured Data Services: Extracts, cleans, chunks, and transforms unstructured documents into JSON, preparing them for vectorization. It also extracts structured information from complex documents and makes it queryable in Capella, saving developers the time of building DIY preprocessing pipelines.
  • Vectorization Services: Automates vectorization and indexing of data stored in Capella. Along with Model and Unstructured Data Services, this helps developers quickly build a RAG pipeline with fewer tools.
  • AI Agent Catalog Services: Accelerates agentic application development by offering a centralized repository for tools, metadata, prompts, and audit information for LLM flow, traceability, and governance. It also automates discovery of relevant agent tools to answer user questions, and manages guardrails to ensure that agent exchanges are consistent over time.
  • Capella AI Functions: Enhances developer productivity by embedding AI-driven data analysis directly into application workflows using familiar SQL++ syntax, eliminating the need for external tools or custom code. With capabilities like summarization, classification, sentiment analysis, and data masking, Capella empowers developers to seamlessly transform complex transactional and columnar data into actionable insights within a single unified platform.
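
As an illustration of the in-database analysis described above, here is a sketch of what AI-driven enrichment might look like in SQL++. Note that the function names (SUMMARIZE, CLASSIFY, SENTIMENT, MASK) and the reviews collection are hypothetical placeholders for illustration, not the documented Capella AI Functions API:

```sql
-- Hypothetical sketch: function names and the `reviews` collection are
-- illustrative placeholders, not the actual Capella AI Functions API.
SELECT META(r).id,
       SUMMARIZE(r.body)                                     AS summary,
       CLASSIFY(r.body, ["complaint", "praise", "question"]) AS label,
       SENTIMENT(r.body)                                     AS sentiment,
       MASK(r.customer_email)                                AS contact
FROM   reviews AS r
WHERE  r.product = "widget-42";
```

The point of the design is that enrichment happens in the same SQL++ statement as ordinary filtering and projection, with no external pipeline or custom code.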

Learn more

Capella Free Tier

To improve convenience and accessibility for developers, we’re excited to introduce a new Couchbase Capella™ Free Tier that replaces Capella’s 30-day trial. It allows developers to learn, use, and persist their activity in Capella for as long as they need to, and it supports promoting their work into test and production environments. The free tier is available with forum and community support, and it will open on September 9.

Sign up today

Announcing Couchbase Enterprise Server 7.6 and 7.6.2

 

Support for vector search and AI-powered adaptive applications

The spring 2024 release of Capella brought support for Couchbase Server 7.6, including high-performance storage, indexing, and retrieval of vector embeddings. Organizations are racing to build hyper-personalized adaptive applications powered by generative AI that deliver exceptional experiences to their end users. Vector search enables organizations to use the retrieval-augmented generation (RAG) framework to interact with LLMs and other foundation models, making AI-powered chatbots and applications safer, more up to date, and correctly aligned with specific corporate information. Additionally, teams can use hybrid search – a combination of vector search, text search, range search, explicit value search, and geospatial search – to build out robust search solutions for end users, all within a single platform. There’s no need for a separate vector database: advanced searches can be served by a single SQL++ query and a single index, delivering powerful, low-latency results while lowering TCO. Learn more
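
As a sketch of the single-query hybrid search described above, the SQL++ below combines a full-text match, a vector (kNN) clause, and an ordinary range predicate. The collection, field names, and the $query_vec parameter are assumptions for illustration:

```sql
-- Illustrative sketch: the `hotels` collection, its fields, and
-- $query_vec (a pre-computed query embedding) are assumptions.
SELECT h.name, h.city, h.price
FROM   hotels AS h
WHERE  SEARCH(h, {
         "query": { "match": "quiet beachfront", "field": "description" },
         "knn":   [ { "field": "embedding", "vector": $query_vec, "k": 10 } ]
       })
  AND  h.price BETWEEN 100 AND 250;
```

One query, one search index – the semantic, textual, and value predicates are evaluated together rather than stitched across separate systems.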

Further performance enhancements for vector search

Couchbase Enterprise Server 7.6.2 adds impressive performance and scalability improvements to vector search for self-managed and Capella-based systems, including a 7 times throughput increase and lower memory requirements. It expands support to 4096-dimensional vectors and adds base64 compressed vectors for both data and queries, reducing storage overhead.

Support for vectors in Couchbase Mobile

Couchbase is the first database vendor to announce vector support for its embeddable mobile database (Couchbase Lite). This will allow customers to build powerful AI-powered applications at the edge where data is produced and consumed. See the Mobile tab for more details.

AI ecosystem integrations

Couchbase is boosting its AI partner ecosystem with LangChain and LlamaIndex to further increase developer productivity. Integration with LangChain enables a common API interface to converse with a broad library of LLMs. Similarly, Couchbase integration with LlamaIndex will provide developers with even more choices for LLMs when building adaptive applications. These integrations will accelerate query prompt assembly, improve response validation, and facilitate RAG applications.

Graph capabilities enabled by query traversals

Couchbase now supports graph-style relationship traversals – that is, recursive queries for hierarchy and network mapping. Using ANSI SQL recursive common table expressions (CTEs), queries over hierarchical data structures enable complex analyses in areas like organization charts, bills of materials, supply chain management, network topology, and social networks. Learn more
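
For example, an org-chart traversal of the kind mentioned above could be written with a recursive CTE roughly like this (the employees collection and its fields are assumptions for illustration):

```sql
-- Illustrative sketch: the `employees` collection and fields are assumptions.
WITH RECURSIVE reports AS (
  SELECT e.id, e.name, e.manager_id, 0 AS depth
  FROM   employees AS e
  WHERE  e.manager_id IS MISSING               -- anchor member: the CEO
  UNION ALL
  SELECT e.id, e.name, e.manager_id, r.depth + 1
  FROM   employees AS e
  JOIN   reports   AS r ON e.manager_id = r.id -- walk down the hierarchy
)
SELECT r.name, r.depth FROM reports AS r ORDER BY r.depth;
```

The same shape applies to bill-of-materials explosions or network topology – only the join condition and anchor predicate change.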

New index rebalancing reduces times by up to 80%

To make scale-out of index nodes faster without impacting CPU or memory usage, Server 7.6 now utilizes file-based rebalancing. Testing has shown a dramatic improvement in the time to complete rebalances – up to 80% faster – making the process simpler and more reliable. Learn more

Couchstore to Magma, one-step upgrade without downtime

Customers will be able to migrate from Couchstore to the Magma storage engine without stopping front-end workloads. The migration requires version 7.6 and can be reversed at any time if needed. Learn more

Faster failover times improve HA

In the case of a data node outage, queries are automatically rerouted to the next available data node without any action from the application. The minimum auto-failover timeout has been lowered from 5 seconds to 1 second, and the heartbeat interval has been reduced from 1 second to 200 milliseconds. Learn more

Query simplifications

To make initial development and testing easier, users can perform all database CRUD and join operations without indexes, so new users can run CREATE, INSERT, and SELECT statements without hitting an index error. Additionally, users can do simple key-value range scans based on document key prefixes without needing query and indexing nodes. Where speed is a priority, indexes are still recommended for best performance. Learn more
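
A minimal sketch of the index-free flow described above (the items collection and document keys are assumptions for illustration):

```sql
-- Illustrative sketch: no index is created at any point.
INSERT INTO items (KEY, VALUE)
VALUES ("item::100", { "name": "bolt", "qty": 40 }),
       ("item::101", { "name": "nut",  "qty": 75 });

-- Served by a sequential scan instead of failing with an index error:
SELECT i.name, i.qty FROM items AS i WHERE i.qty < 50;

-- Key-value range scan on a document-key prefix:
SELECT META(i).id FROM items AS i WHERE META(i).id LIKE "item::%";
```

This is convenient for prototyping; for production query paths, a secondary index remains the right choice.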

Couchbase Mobile: What’s New

Enable cloud-to-edge AI

Search is an important part of any app, but searching only for specific words and phrases is not enough to make a personal connection. You need semantic search to achieve the matches that are most meaningful to the user in context. Vector search goes beyond simply finding matching words – it also finds related information based on the core meaning of the input, making it the best option for providing relevant information that connects with users.

Couchbase Mobile 3.2

The release of Couchbase Mobile 3.2 completes our “cloud-to-edge AI” vision by offering vector search on-device with Couchbase Lite, the embedded database for mobile and IoT apps. Now, mobile developers can take advantage of vector search capabilities at the edge without dependencies on the internet, enabling the fastest, most secure, and most reliable GenAI apps possible. These benefits are amplified when combined with vector search in Couchbase Capella and Couchbase Server to enable cloud-to-edge AI support.

Key capabilities

Build fast, reliable, secure AI-powered mobile apps that work even without the internet
By enabling vector search on-device, you eliminate dependencies on distant cloud databases, speeding up apps and eliminating downtime due to internet outages.

Vector search from cloud to edge
With vector search in Capella and Couchbase Lite combined with built-in data sync, you gain the cloud scale to handle the massive amounts of data AI requires, plus the at-the-edge immediacy that makes AI effective.

Hybrid search for hyper-personalized engagement
Vector search is even more powerful when combined with the other search and conditional query techniques Couchbase Mobile supports (like full-text search), all invoked from a single standard SQL++ query.

RAG at the edge
Retrieval-augmented generation (RAG) is an architectural technique that improves the accuracy of AI models by giving them access to external facts. Results from vector search can be included as context in queries sent to an LLM to customize its responses. With Couchbase Mobile, developers can build RAG apps that work at the edge without the internet.
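
A sketch of the on-device retrieval step with Couchbase Lite's SQL++ is shown below. The docs collection, the embedding field, and the $question_vec parameter are assumptions, and a vector index on embedding is presumed to exist:

```sql
-- Illustrative sketch: retrieve the 5 chunks nearest the user's question
-- vector; the results are then included as context in the LLM prompt.
SELECT d.text
FROM   docs AS d
ORDER BY APPROX_VECTOR_DISTANCE(d.embedding, $question_vec)
LIMIT  5;
```

Because both the documents and the vector index live on-device, this retrieval step needs no network round trip.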

Superior privacy and security for AI
By processing data at the edge, you ensure privacy because sensitive data never has to leave the edge. You can develop GenAI apps that run on-device or at the edge without the worry of feeding sensitive or private data to public models.

Download Couchbase Lite 3.2 here.

Start building

Check out our developer portal to explore NoSQL, browse resources, and get started with tutorials.

Develop now
Use our free DBaaS

Get hands-on with Couchbase in just a few clicks. Capella DBaaS is the easiest and fastest way to get started.

Use free
Join a free Capella Test-Drive

Kick off your Couchbase Certification journey in 90 minutes with a dedicated instructor.

Get started