
Hybrid RAG - Combining Vector Search and Knowledge Graphs

Author: Venkata Sudhakar

ShopMax India handles two types of queries that need different retrieval strategies. Semantic questions like "best budget TV for a student" need vector search, while relational questions like "which Delhi distributors supply Samsung products under warranty" need graph traversal. Hybrid RAG combines both in a single pipeline, routing each query component to the right retrieval method.
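Routing can start very simply. A minimal sketch of a cue-based router (the `RELATIONAL_CUES` list and the `route` function are hypothetical, not part of ShopMax's system) might look like:

```python
# Hypothetical router: decide which retrieval method(s) a query needs.
# Relational cue words suggest the query asks about entities and relationships,
# which the knowledge graph handles better than pure vector similarity.
RELATIONAL_CUES = {"supply", "supplier", "distributor", "warranty claim", "which", "who"}

def route(query: str) -> set[str]:
    q = query.lower()
    methods = {"vector"}  # semantic retrieval is always run as a baseline
    if any(cue in q for cue in RELATIONAL_CUES):
        methods.add("graph")  # add graph traversal for relational questions
    return methods

print(route("best budget TV for a student"))                               # vector only
print(route("which Delhi distributors supply Samsung products under warranty"))  # both
```

A production router would likely use an entity extractor or a small classifier instead of keyword cues, but the contract is the same: each query component is mapped to one or both retrieval methods.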

Hybrid RAG runs vector search and graph traversal in parallel, then merges the context before passing it to the LLM. The vector component retrieves semantically relevant chunks from ChromaDB. The graph component extracts entities from the query, traverses Neo4j relationships, and returns structured facts. A merge step combines the context before the final LLM call.

The example below shows ShopMax India's hybrid RAG pipeline, combining ChromaDB vector search with a Neo4j knowledge graph for richer answers.
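A minimal sketch of the pipeline shape is given here, with in-memory stand-ins for the ChromaDB collection and the Neo4j graph (the `VECTOR_STORE` and `GRAPH` dictionaries, the naive keyword scoring, and the data itself are illustrative assumptions, not the production code). A real run would pass the merged context to an LLM to produce the finished answer shown in the output below; this sketch stops at the merged context.

```python
# Hybrid RAG sketch: vector retrieval + graph retrieval, merged into one context.
# VECTOR_STORE and GRAPH stand in for a ChromaDB collection and a Neo4j database.

VECTOR_STORE = {  # chunk_id -> text (stand-in for ChromaDB)
    "doc1": "Samsung TVs carry a 1-year manufacturer warranty covering defects.",
    "doc2": "Budget TVs are popular with students shopping under tight limits.",
}

GRAPH = {  # entity -> list of (relation, target) edges (stand-in for Neo4j)
    "Samsung": [("SUPPLIED_BY", "Mehta Electronics"),
                ("SUPPLIED_BY", "Sunrise Distributors")],
    "Mehta Electronics": [("LOCATED_IN", "Mumbai")],
}

def vector_search(query: str, k: int = 2) -> list[str]:
    """Naive keyword-overlap scoring; a real system embeds the query and queries ChromaDB."""
    words = set(query.lower().split())
    scored = sorted(VECTOR_STORE.values(),
                    key=lambda t: len(words & set(t.lower().split())),
                    reverse=True)
    return scored[:k]

def graph_search(query: str) -> list[str]:
    """Match entity names in the query, then return their 1-hop facts from the graph."""
    facts = []
    for entity, edges in GRAPH.items():
        if entity.lower() in query.lower():
            facts += [f"{entity} -{rel}-> {target}" for rel, target in edges]
    return facts

def hybrid_rag(query: str) -> str:
    chunks = vector_search(query)
    facts = graph_search(query)
    # Merge step: simple concatenation of both contexts before the final LLM call.
    context = ("Vector context:\n" + "\n".join(chunks) +
               "\nGraph facts:\n" + "\n".join(facts))
    return context  # a real pipeline would now call the LLM with query + context

print(hybrid_rag("Which distributors supply Samsung TVs under warranty?"))
```

Swapping the two stubs for a real ChromaDB client and a Neo4j driver changes only the retrieval internals; the merge-then-generate structure stays the same.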


It gives the following output:

Samsung TVs at ShopMax India carry a 1-year manufacturer warranty covering
manufacturing defects. In Mumbai, Samsung TVs are supplied by two partners:
Mehta Electronics (12 warranty claims this quarter) and Sunrise Distributors
(5 claims). For warranty service, customers can visit any of our 8 Mumbai
service centres.

Keep vector and graph retrieval timeouts independent so a slow graph query does not block the vector result. In ShopMax India's production setup, run both in parallel using asyncio.gather() with a 2-second timeout on graph traversal. Cache frequent graph results in Redis, since supplier-product relationships change slowly. Start with simple context concatenation, and only add ranking logic if the LLM produces inconsistent answers.
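The parallel pattern above can be sketched as follows, with async stubs standing in for the real ChromaDB query and Neo4j traversal (the function names, the simulated delays, and the graceful-degradation choice of returning an empty fact list on timeout are all assumptions):

```python
import asyncio

async def vector_search(query: str) -> list[str]:
    """Stand-in for an async ChromaDB query."""
    await asyncio.sleep(0.01)
    return [f"vector chunk for: {query}"]

async def graph_search(query: str) -> list[str]:
    """Stand-in for an async Neo4j traversal; this is the side that may be slow."""
    await asyncio.sleep(0.01)
    return [f"graph fact for: {query}"]

async def retrieve(query: str, graph_timeout: float = 2.0) -> tuple[list[str], list[str]]:
    # Run both retrievals concurrently; the timeout caps only the graph side,
    # so a slow traversal cannot hold up the vector result.
    results = await asyncio.gather(
        vector_search(query),
        asyncio.wait_for(graph_search(query), timeout=graph_timeout),
        return_exceptions=True,  # a graph TimeoutError lands here instead of raising
    )
    chunks = results[0] if not isinstance(results[0], Exception) else []
    facts = results[1] if not isinstance(results[1], Exception) else []
    return chunks, facts  # degrade gracefully: empty facts if the graph timed out

chunks, facts = asyncio.run(retrieve("Samsung warranty distributors in Mumbai"))
print(chunks, facts)
```

With this shape, adding the Redis cache is a lookup before `graph_search` and a write after it, leaving the gather/timeout logic untouched.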


 
  


  