Vector Database
Open source vector database, all with SQL
Hyper-fast. Queries in milliseconds.
SELECT text, _score
FROM word_embeddings
WHERE knn_match(embedding, [0.3, 0.6, 0.0, 0.9], 2)
ORDER BY _score DESC;
|------------------------|----------|
| text                   | _score   |
|------------------------|----------|
| Discovering galaxies   | 0.917431 |
| Discovering moon       | 0.909090 |
| Exploring the cosmos   | 0.909090 |
| Sending the mission    | 0.270270 |
|------------------------|----------|
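The queries above assume a table holding short texts and their embeddings. A minimal sketch of how such a table could be set up, using CrateDB's FLOAT_VECTOR column type (table and column names follow the example; the vector values are purely illustrative, not the ones behind the scores shown):

```sql
-- Hypothetical setup for the word_embeddings example.
-- FLOAT_VECTOR(4) declares a 4-dimensional dense vector column;
-- the dimension must match the query vector in knn_match.
CREATE TABLE word_embeddings (
    text TEXT,
    embedding FLOAT_VECTOR(4)
);

-- Illustrative embeddings; in practice these come from an ML model.
INSERT INTO word_embeddings (text, embedding) VALUES
    ('Exploring the cosmos', [0.1, 0.5, 0.0, 0.9]),
    ('Discovering galaxies', [0.2, 0.4, 0.0, 0.9]),
    ('Sending the mission',  [0.9, 0.1, 0.1, 0.2]),
    ('Discovering moon',     [0.3, 0.4, 0.1, 0.9]);
```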
SELECT text, _score
FROM word_embeddings
WHERE knn_match(embedding, (SELECT embedding FROM word_embeddings WHERE text ='Discovering galaxies'), 2)
ORDER BY _score DESC;
|------------------------|----------|
| text                   | _score   |
|------------------------|----------|
| Discovering galaxies   | 1        |
| Discovering moon       | 0.952381 |
| Exploring the cosmos   | 0.840336 |
| Sending the mission    | 0.250626 |
|------------------------|----------|
Streamlined data management
Eliminate the need to manage multiple systems. CrateDB integrates your data in one place, keeping your (meta-)data and vector representations aligned without complex data synchronization processes. Beyond powerful vector search, it also handles time series, geospatial, JSON, full-text search, and other data types in the same engine.
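Because vectors live next to ordinary columns, a similarity search can be combined with regular predicates in a single statement. A hedged sketch, assuming a hypothetical products table with a timestamp and an embedding column (names and dimension are illustrative):

```sql
-- Hypothetical schema: product descriptions with embeddings and timestamps.
CREATE TABLE products (
    name TEXT,
    created_at TIMESTAMP WITH TIME ZONE,
    description_embedding FLOAT_VECTOR(4)
);

-- Nearest neighbours among recent products only:
-- knn_match finds vector candidates, the time filter narrows them down.
SELECT name, _score
FROM products
WHERE knn_match(description_embedding, [0.3, 0.6, 0.0, 0.9], 10)
  AND created_at > now() - INTERVAL '30 days'
ORDER BY _score DESC;
```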
Data enriched with semantics
Seamlessly add vector data types to any row in the database, providing context aligned with your (meta-)data and enhancing explainability.
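Adding a vector column to an existing table is an ordinary schema change. A minimal sketch, assuming a pre-existing sensor_readings table and a 384-dimensional embedding model (both hypothetical):

```sql
-- Attach an embedding column to existing rows;
-- the dimension must match the embedding model's output.
ALTER TABLE sensor_readings ADD COLUMN note_embedding FLOAT_VECTOR(384);

-- Backfill one row; bind the vector produced by your embedding model.
UPDATE sensor_readings
SET note_embedding = ?
WHERE id = 1;
```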
- Advanced search capabilities
- Enhanced AI model integration
- Improved scalability
- Faster development & lower maintenance
Keynote - The transformative effects of real-time AI
In this keynote at the AI & Big Data Expo Europe 2023, CrateDB's VP Product shares his vision for the future of multi-model SQL databases combined with Large Language Models.
Dev Talk - How to Use Private Data in Generative AI
This talk at FOSDEM 2024 focuses on the combination of CrateDB and LangChain: how to get started using private data as context for large language models, applying the concept of Retrieval-Augmented Generation (RAG).
5 Essential Things You Need to Know about Vector Databases
This infographic gives you a basic understanding of vector databases, from what to look for when choosing one to combining vector data with other data types.
Interested?
CrateDB doubles as a vector store, built on two key capabilities: vector storage and similarity search.
- Vector storage empowers users to efficiently store embeddings produced by their preferred machine learning models, creating a streamlined method for managing and accessing vectorized data.
- Similarity search enables users to effortlessly discover similarities within datasets represented as vectors, fostering advanced data exploration and in-depth analysis.
By offering these vector database capabilities within a single, scalable product, CrateDB streamlines data management, cutting down both development time and total cost of ownership.
Typical use cases for vector databases
Unlock the potential of CrateDB's vector storage and similarity search across a range of industries and applications:
E-commerce recommendations
Chatbots & customer support
Enhance customer interactions by understanding questions with precision. Contextualize conversations, providing better service with improved understanding of user inquiries, regardless of the terms they use.
Anomaly & fraud detection
Multimodal search
Generative AI
Store embeddings, provide additional context in prompts and act as conversational memory for LLM-based applications. Use vector search functionality for retrieval augmented generation (RAG), which enables LLMs to understand specific data.
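For RAG, the retrieval step maps directly onto knn_match: store document chunks with their embeddings, fetch the top-k chunks for a question embedding, and concatenate them into the prompt. A hedged sketch with hypothetical table and column names:

```sql
-- Hypothetical store for RAG document chunks.
CREATE TABLE doc_chunks (
    doc_id TEXT,
    chunk TEXT,
    embedding FLOAT_VECTOR(4)   -- use your model's dimension in practice
);

-- Retrieve the most relevant chunks for a question embedding;
-- the application passes the result to the LLM as context.
SELECT chunk, _score
FROM doc_chunks
WHERE knn_match(embedding, [0.3, 0.6, 0.0, 0.9], 4)
ORDER BY _score DESC
LIMIT 4;
```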