Introduction

In today’s data-driven world, just having a search engine is not enough; the key is making it smart. Enter Elasticsearch Relevance Engine (ESRE) augmented with Retrieval Augmented Generation (RAG), a powerful solution that marries Elasticsearch’s superior search capabilities with Large Language Models (LLMs) like ChatGPT for precise, contextual querying over proprietary datasets. This session is a hands-on guide that will show you how to amplify the power of Elasticsearch with advanced LLMs.

Key Takeaways:

  • Learn how to supercharge Elasticsearch’s BM25 algorithm with semantic search for results that are not just relevant but contextually accurate (see the hybrid-query sketch after this list).
  • Discover how to plug in Large Language Models like OpenAI’s ChatGPT to enable context-aware question-answering over your proprietary data (see the RAG sketch after this list).
  • Gain insights into the latest advancements in vector search within Lucene and Elasticsearch.
  • A quick live demo: experience first-hand how ESRE, empowered by RAG, transforms a basic search query into a context-rich, highly relevant result.
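
To make the hybrid-search takeaway concrete, here is a minimal sketch of a query that combines a BM25 match with approximate kNN over dense vectors. It assumes the Elasticsearch Python client against an 8.x cluster and a sentence-transformers embedding model; the index name, field names, and model are illustrative, not from the talk:

```python
# Hybrid query sketch: a lexical (BM25) match plus approximate kNN over a
# dense_vector field, combined in a single search request (Elasticsearch 8.x).
# Index "docs", fields "body"/"body_vector"/"title", and the embedding model
# are assumptions for illustration.
from elasticsearch import Elasticsearch
from sentence_transformers import SentenceTransformer

es = Elasticsearch("http://localhost:9200")
model = SentenceTransformer("all-MiniLM-L6-v2")  # any sentence-embedding model

question = "How do I rotate API keys?"
query_vector = model.encode(question).tolist()

response = es.search(
    index="docs",
    query={"match": {"body": question}},   # lexical (BM25) leg
    knn={
        "field": "body_vector",            # dense_vector field with same dims as the model
        "query_vector": query_vector,
        "k": 10,
        "num_candidates": 100,
    },
    size=5,
)
for hit in response["hits"]["hits"]:
    print(hit["_score"], hit["_source"]["title"])
```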

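And a minimal RAG sketch building on the hybrid query above: the top hits become grounding context for a chat completion. The OpenAI model name and prompt wording are illustrative assumptions, not the talk’s actual setup:

```python
# RAG sketch: feed the retrieved documents to the LLM as grounding context.
# Reuses `response` and `question` from the hybrid-query sketch above.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

context = "\n\n".join(hit["_source"]["body"] for hit in response["hits"]["hits"])

completion = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model choice
    messages=[
        {"role": "system",
         "content": "Answer using only the provided context. "
                    "If the answer is not in the context, say so."},
        {"role": "user",
         "content": f"Context:\n{context}\n\nQuestion: {question}"},
    ],
)
print(completion.choices[0].message.content)
```

The system instruction to answer only from the provided context is what keeps the model grounded in your proprietary data rather than its training data.
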
This talk is for you if you’re grappling with search relevance issues and are looking for innovative ways to make your search smarter and more efficient. Whether you’re a software developer, data engineer, or ML enthusiast, this session will equip you with the skills you need to build next-generation search capabilities.

Talk video