Introduction
- A framework for building applications that use LLMs.
- A toolkit for making LLMs “smarter” in real-world apps.
- It goes beyond sending and receiving text: it structures how the LLM interacts with other data, APIs, and tools, for example:
- Searching a database
- Calling an API
- Writing to or reading from files
- Reasoning step by step (chains of thought)
- Keeping context in multi-step conversations
Components
- LLMs
- The “brain” that generates text.
- Can be OpenAI GPT, Hugging Face models, or others.
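Because the model is swappable, apps typically hide the provider behind a small common interface. A minimal sketch (the provider classes here are stubs, not real SDK calls):

```python
from typing import Protocol

class LLM(Protocol):
    """Anything with a generate() method can serve as the 'brain'."""
    def generate(self, prompt: str) -> str: ...

class FakeOpenAI:
    # Stub standing in for a hosted model client.
    def generate(self, prompt: str) -> str:
        return "[openai] " + prompt

class FakeLocalModel:
    # Stub standing in for a local Hugging Face model.
    def generate(self, prompt: str) -> str:
        return "[local] " + prompt

def complete(llm: LLM, prompt: str) -> str:
    """The rest of the app only depends on this interface."""
    return llm.generate(prompt)
```

Swapping providers then means passing a different object, with no other code changes.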
- Prompts
- Templates that structure what you send to the model.
- Can include dynamic variables, instructions, or context.
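The idea of a template with dynamic variables can be sketched with plain string formatting (template text and variable names here are illustrative):

```python
# A prompt template: fixed instructions plus slots for runtime values.
TEMPLATE = (
    "You are a helpful assistant.\n"
    "Context: {context}\n"
    "Question: {question}\n"
    "Answer concisely."
)

def build_prompt(context: str, question: str) -> str:
    """Fill the template's variables to produce the final prompt string."""
    return TEMPLATE.format(context=context, question=question)

prompt = build_prompt("LangChain structures LLM apps.", "What is LangChain?")
```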
- Chains
- Sequences of actions or prompts.
- Example: Extract info → summarize → answer.
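The extract → summarize → answer sequence can be sketched as a pipeline of text-to-text functions. The steps below are stubs; in a real chain each would prompt a model:

```python
def extract_info(text: str) -> str:
    # Stub: pretend to extract the key sentence.
    return text.split(".")[0] + "."

def summarize(text: str) -> str:
    # Stub: pretend to summarize.
    return "Summary: " + text

def answer(text: str) -> str:
    # Stub: pretend to answer from the summary.
    return "Answer based on -> " + text

def run_chain(text: str, steps) -> str:
    """Pipe the output of each step into the next."""
    for step in steps:
        text = step(text)
    return text

result = run_chain("LLMs generate text. They can be chained.",
                   [extract_info, summarize, answer])
```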
- Agents
- Special chains that decide actions dynamically.
- Can choose which tool to call based on user input.
- Example: If the user asks for weather, the agent decides to call a weather API.
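The weather example can be sketched as tool routing. Here a naive keyword match stands in for the LLM's dynamic decision, and both tools are hypothetical stubs:

```python
def weather_tool(query: str) -> str:
    # In a real agent this would call a weather API.
    return "It is sunny."

def search_tool(query: str) -> str:
    # Fallback tool for everything else.
    return "Top result for: " + query

TOOLS = {"weather": weather_tool, "search": search_tool}

def agent(user_input: str) -> str:
    """Pick a tool based on the input, then call it."""
    if "weather" in user_input.lower():
        return TOOLS["weather"](user_input)
    return TOOLS["search"](user_input)
```

In a real agent, the tool choice itself is made by the model, not a hard-coded keyword check.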
- Memory
- Keeps track of previous interactions.
- Useful for chatbots that remember context.
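Conceptually, memory is a growing record of turns that gets replayed into each new prompt. A minimal sketch:

```python
class ConversationMemory:
    """Stores (role, message) turns and renders them back into a prompt."""

    def __init__(self):
        self.turns = []

    def add(self, role: str, message: str) -> None:
        self.turns.append((role, message))

    def as_prompt(self) -> str:
        # Replay prior turns so the model sees the full context.
        return "\n".join(f"{role}: {msg}" for role, msg in self.turns)

memory = ConversationMemory()
memory.add("user", "My name is Ada.")
memory.add("assistant", "Nice to meet you, Ada!")
memory.add("user", "What is my name?")
```

Because every turn is replayed, the model can answer "What is my name?" even though the name appeared earlier in the conversation.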
- Data Connectors
- Connect LLMs to external data: databases, Google Sheets, APIs, PDFs, websites, etc.
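The simplest connector just loads a document and injects it into the prompt as context. A sketch using a plain text file (real connectors handle PDFs, databases, APIs, and websites the same way in spirit):

```python
import os
import tempfile

def load_text_file(path: str) -> str:
    """Toy document loader: read a text file from disk."""
    with open(path, "r", encoding="utf-8") as f:
        return f.read()

def ask_with_context(document: str, question: str) -> str:
    # A real app would send this combined prompt to an LLM.
    return f"Context:\n{document}\n\nQuestion: {question}"

# Demo: a temporary file stands in for a real external data source.
fd, path = tempfile.mkstemp(suffix=".txt")
with os.fdopen(fd, "w") as f:
    f.write("Quarterly revenue was $1M.")
prompt = ask_with_context(load_text_file(path), "What was revenue?")
os.remove(path)
```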
Typical Use Cases
- Chatbots with context or long-term memory
- Document QA: Ask questions about PDFs or websites
- Personal assistants that can perform tasks or fetch info
- Reasoning applications: step-by-step decision-making