Level-Up your GenAI Apps with RAG: Use Vector Storage and Search to Augment LLMs in Real-Time
LLMs (Large Language Models) are generally knowledgeable, but they don't know your domain. We'll start by explaining GenAI concepts: what vector embeddings are, how they work, and how they apply to application development. Then we'll visually build a GenAI workflow powered by a vector database for seamless vector storage and search, create a real-time RAG (Retrieval Augmented Generation) pipeline that dynamically enhances LLMs with domain-specific knowledge, and finally bring it all together in a working app. This session is designed for any skill level: it covers the basics of vectors and RAG and provides the tools needed to create a RAG pipeline.
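At a high level, the pipeline we'll build can be sketched in a few lines. The `embed` function and in-memory store below are toy stand-ins for a real embedding model and vector database, included only so the flow is runnable end to end:

```python
import math

# Toy embedding: a real pipeline would call an embedding model;
# this bag-of-words vector is a placeholder so the sketch runs
# without any external services.
def embed(text: str) -> dict[str, float]:
    words = text.lower().split()
    return {w: words.count(w) / len(words) for w in set(words)}

def cosine(a: dict[str, float], b: dict[str, float]) -> float:
    dot = sum(v * b.get(w, 0.0) for w, v in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# "Vector database": domain documents stored with their embeddings.
docs = [
    "Our premium plan includes 24/7 phone support.",
    "Refunds are processed within 5 business days.",
]
store = [(d, embed(d)) for d in docs]

def retrieve(query: str, k: int = 1) -> list[str]:
    q = embed(query)
    ranked = sorted(store, key=lambda item: cosine(q, item[1]), reverse=True)
    return [d for d, _ in ranked[:k]]

# RAG: retrieved context is injected into the prompt before the LLM call.
question = "How long do refunds take?"
context = "\n".join(retrieve(question))
prompt = f"Answer using this context:\n{context}\n\nQuestion: {question}"
```

The key idea is in the last three lines: instead of asking the LLM directly, we first search the vector store for the most similar documents and prepend them to the prompt, so the model answers from your data rather than its training set alone.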
Prerequisites
Some development experience helps, but it isn't required. The tools we use are low-code, and while we'll show some code, reading it isn't essential to understanding the concepts.
Takeaways
- Learn how to augment Large Language Models (LLMs) with domain-specific knowledge in real-time
- Gain practical experience in building GenAI workflows
- Discover how to set up and use a vector database for vector storage and search