Generative AI and LLM in PHP | Ashish Tiwari | Laracon India 2025


🎤 Talk Summary: No-Code RAG Chatbot with PHP, LLMs & Elasticsearch

Speaker: Ashish Tiwari (Senior Developer Advocate, Elastic)

🔑 Introduction

- Topic: integrating generative AI (LLMs) with PHP.
- Goal: show how to build chat assistants, semantic search, and vector search without heavy ML expertise.
- Demo focus: Elasticsearch + PHP + an LLM (LLaMA 3.1).

🧩 Core Concepts

1. Prompt Engineering

- LLMs generate responses from prompts by predicting the next words.
- Techniques:
  - Zero-shot inference → direct classification or tagging, with no examples.
  - One-shot inference → provide one example in the prompt.
  - Few-shot inference → provide multiple examples; useful for structured outputs (SQL, JSON, XML).
- Iterating on prompts with added context is in-context learning (ICL).

2. LLM Limitations

- ❌ Hallucinations (confident but wrong answers).
- ❌ Complex and costly to build or train from scratch.
- ❌ No access to real-time or private data.
- ❌ Privacy and security concerns (especially in banking and the public sector).

3. RAG (Retrieval-Augmented Generation)

A solution to the limitations above. Workflow:

1. The user query hits a database/vector DB (e.g., Elasticsearch).
2. Retrieve the top 5–10 relevant documents.
3. Pass them into the context window → the LLM generates an accurate, grounded answer.

Benefits: grounded responses; works with private data; avoids retraining large models.

🔍 Semantic & Vector Search

- Semantic search understands meaning, not just keywords. Example: "best city" ↔ "beautiful city."
- Vector search converts text, images, and audio into embeddings (arrays of floats), enabling image search, recommendation systems, and music search (e.g., via humming).
- Similarity algorithms: cosine similarity, dot product, nearest neighbors.

🛠️ Tools & Demo

LLPhant Library (PHP)

- Open-source PHP library for GenAI apps.
- Supports:
  - LLMs: OpenAI, Mistral, Anthropic, LLaMA.
  - Vector DBs: Elasticsearch, Pinecone, Chroma, etc.
- Features: document chunking, embedding generation, semantic retrieval, Q&A (RAG).

Demo Flow

Ingestion: ...
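The zero-/one-/few-shot distinction is just about how many labeled examples you pack into the prompt. A minimal sketch (the talk's demos are in PHP; this Python helper and its example data are illustrative, not from the talk):

```python
# Few-shot inference: the prompt itself carries labeled examples, so the
# model can infer the task and output format without any fine-tuning.

def build_few_shot_prompt(examples, query):
    """Assemble a sentiment-classification prompt from (text, label) pairs."""
    lines = ["Classify the sentiment of each review as positive or negative.", ""]
    for text, label in examples:
        lines.append(f"Review: {text}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    # The unanswered final slot is what the LLM completes.
    lines.append(f"Review: {query}")
    lines.append("Sentiment:")
    return "\n".join(lines)

examples = [
    ("Loved the talk, great demos!", "positive"),
    ("The session felt too rushed.", "negative"),
]
prompt = build_few_shot_prompt(examples, "Clear slides and a useful library.")
print(prompt)
```

With zero examples this degenerates to zero-shot; with one, to one-shot. The same string would be sent as the user message via any LLM client.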
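Cosine similarity, one of the similarity measures mentioned above, compares the angle between two embedding vectors. A self-contained Python sketch with made-up 3-dimensional "embeddings" (real models emit hundreds or thousands of dimensions):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: dot product over norms."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy embeddings: nearby vectors stand for semantically related phrases.
best_city = [0.9, 0.1, 0.3]
beautiful_city = [0.8, 0.2, 0.35]
weather = [0.1, 0.9, 0.2]

sim_related = cosine_similarity(best_city, beautiful_city)    # high
sim_unrelated = cosine_similarity(best_city, weather)         # lower
print(sim_related, sim_unrelated)
```

A vector database answers "nearest neighbor" queries by computing exactly this kind of score (or a dot product) between the query embedding and stored document embeddings.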

August 25, 2025 · 2 min · Ashish Tiwari