Description
Dive into the rapidly evolving world of Generative AI with our comprehensive course, designed for learners eager to build, train, and deploy Large Language Models (LLMs) from scratch.
This course equips you with a wide range of tools, frameworks, and techniques for building GenAI applications with Large Language Models, including Python, PyTorch, LangChain, LlamaIndex, Hugging Face, FAISS, Chroma, Tavily, Streamlit, Gradio, FastAPI, Docker, and more.
This hands-on course covers essential topics such as implementing Transformers, fine-tuning models, prompt engineering, vector embeddings, and vector stores. You will build cutting-edge AI applications, including AI assistants, chatbots, Retrieval-Augmented Generation (RAG) systems, and autonomous agents, and deploy them from scratch using REST APIs and Docker containerization.
By the end of this course, you will have the practical skills and theoretical knowledge needed to engineer and deploy your own LLM-based applications.
Let's look at our table of contents:
Introduction to the Course
Course Objectives
Course Structure
Learning Paths
Part 1: Software Prerequisites for Python Projects
IDE
VS Code
PyCharm
Terminal
Windows: PowerShell, etc.
macOS: iTerm2, etc.
Linux: Bash, etc.
Python Installation
Python installer
Anaconda distribution
Python Environment
venv
conda
Python Package Installation
PyPI, pip
Anaconda, conda
Software Used in This Course
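As a quick illustration of the environment setup covered in Part 1, here is a minimal sketch using `venv` and `pip` (assuming a Unix-like shell with Python 3 installed; the environment name `llm-course-env` is hypothetical):

```shell
# Create an isolated virtual environment for the course projects
python3 -m venv llm-course-env

# Activate it (on Windows PowerShell: .\llm-course-env\Scripts\Activate.ps1)
. llm-course-env/bin/activate

# Confirm that pip now points inside the environment
python -m pip --version
```

The `conda` workflow covered alongside it achieves the same isolation with `conda create` and `conda activate` instead.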
Part 2: Introduction to Transformers
Introduction to NLP Before and After the Transformer’s Arrival
Mastering Transformers Block by Block
Transformer Training Process
Transformer Inference Process
Part 3: Implementing Transformers from Scratch with PyTorch
Introduction to the Training Process Implementation
Implementing a Transformer as a Python Package
Calling the Training and Inference Processes
Experimenting with Notebooks
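To give a flavor of what "implementing a Transformer from scratch" involves, here is a minimal sketch of scaled dot-product attention, the core operation of the Transformer, written in plain Python for readability (the course implementation uses PyTorch tensors; the tiny vectors below are illustrative only):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    """Scaled dot-product attention for a single query vector."""
    d_k = len(query)
    # Similarity of the query to each key, scaled by sqrt(d_k)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d_k) for key in keys]
    weights = softmax(scores)
    # Weighted sum of the value vectors
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(len(values[0]))]

q = [1.0, 0.0]                      # query attends mostly to the first key
K = [[1.0, 0.0], [0.0, 1.0]]        # two keys
V = [[10.0, 0.0], [0.0, 10.0]]      # two value vectors
out = attention(q, K, V)            # output leans toward the first value vector
```

In the full model this operation runs over batches of sequences, with learned query, key, and value projections and multiple heads.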
Part 4: Generative AI with the Hugging Face Ecosystem
Introduction to Hugging Face
The Hugging Face Hub
Models
Datasets
Spaces
Hugging Face Libraries
Transformers
Datasets
Evaluate, etc.
Practical Guides with Hugging Face
Fine-Tuning a Pre-trained Language Model with Hugging Face
End-to-End Fine-Tuning Example
Sharing Your Model
Part 5: Components to Build LLM-Based Web Applications
Backend Components
LLM Orchestration Frameworks: LangChain, LlamaIndex
Open-Source vs. Proprietary LLMs
Vector Embedding
Vector Database
Prompt Engineering
Frontend Components
Python-Based Frontend Frameworks: Streamlit, Gradio
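The vector-embedding and vector-store components above can be sketched with a toy example: represent each document as a vector, then retrieve the nearest one by cosine similarity. Real applications use learned embedding models and stores such as FAISS or Chroma; this stdlib-only version, with a deliberately simplified word-count "embedding", just illustrates the mechanics:

```python
import math
from collections import Counter

def embed(text):
    """Toy embedding: a bag-of-words count vector keyed by word."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a if w in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# A tiny in-memory "vector store": (document, vector) pairs
docs = [
    "how to fine-tune a language model",
    "recipes for pasta and pizza",
    "deploying applications with docker",
]
store = [(d, embed(d)) for d in docs]

def search(query):
    """Return the stored document most similar to the query."""
    qv = embed(query)
    return max(store, key=lambda item: cosine(qv, item[1]))[0]

best = search("fine-tune my model")
```

Swapping in a real embedding model and an approximate-nearest-neighbor index changes the quality and scale, not the shape of this flow.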
Part 6: Building LLM-Based Web Applications
Task-Specific AI Assistants
Culinary AI Assistant
Marketing AI Assistant
Customer AI Assistant
SQL-Querying AI Assistant
Travel AI Assistant
Summarization AI Assistant
Interview AI Assistant
Simple AI Chatbot
RAG (Retrieval-Augmented Generation) Based AI Chatbot
Chat with PDF, DOCX, CSV, TXT, Webpage
Agent-Based AI Chatbot
AI Chatbot with Math Problems
AI Chatbot with Search Problems
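The RAG chatbot pattern listed above boils down to: retrieve relevant context, place it into a prompt, and send that prompt to an LLM. Here is a minimal, stdlib-only sketch of the retrieve-and-prompt half; the word-overlap retriever is a toy stand-in, and the LLM call itself is left out because it depends on the provider and framework you choose:

```python
def retrieve(question, documents):
    """Pick the document sharing the most words with the question (toy retriever)."""
    q_words = set(question.lower().split())
    return max(documents, key=lambda d: len(q_words & set(d.lower().split())))

def build_prompt(question, context):
    """Compose a grounded prompt from the retrieved context."""
    return (
        "Answer the question using only the context below.\n"
        f"Context: {context}\n"
        f"Question: {question}\n"
        "Answer:"
    )

documents = [
    "The Eiffel Tower is located in Paris.",
    "Python is a popular programming language.",
]
question = "Where is the Eiffel Tower located?"
context = retrieve(question, documents)
prompt = build_prompt(question, context)
# In a real app, `prompt` would now be sent to an LLM, e.g. via LangChain or an API call.
```

Frameworks such as LangChain and LlamaIndex package exactly these steps (plus embedding-based retrieval and chat history) behind higher-level abstractions.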
Part 7: Serving LLM-Based Web Applications
Creating the Frontend and Backend as Two Separate Services
Communicating Between Frontend and Backend Using a REST API
Serving the Application with Docker
Install, Run, and Enable Communication Between Frontend and Backend in a Single Docker Container
Use Case
An LLM-Based Song Recommendation App
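As a taste of the containerization step, here is a hedged Dockerfile sketch for a single-container deployment like the one described above. The file names, port, and launcher script are hypothetical placeholders; the course walks through the actual setup:

```dockerfile
FROM python:3.11-slim

WORKDIR /app

# Install backend (FastAPI) and frontend (Streamlit/Gradio) dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

# Expose the frontend port; the backend runs inside the same container
EXPOSE 8501

# start.sh is a hypothetical script that launches both services
CMD ["./start.sh"]
```

Running the frontend and backend as two separate containers instead trades this simplicity for independent scaling and cleaner separation of concerns.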
Conclusions and Next Steps
What We Have Learned
Next Steps
Thank You
What You Will Learn
Understanding how to build, train, and run inference on a Large Language Model, such as the Transformer (Attention Is All You Need), from scratch
Gaining knowledge of the different components, tools, and frameworks required to build an LLM-based application, such as LangChain, LlamaIndex, and Hugging Face
Learning how to fine-tune a Large Language Model on your custom dataset for specific downstream Natural Language Processing (NLP) tasks
Implementing best practices in prompt engineering to optimize the performance of Large Language Models
Building diverse LLM-based applications, including AI assistants, chatbots, Retrieval-Augmented Generation (RAG) systems, and intelligent agents
Learning how to serve your LLM-based application from scratch with a REST API and Docker
Engaging in hands-on technical implementations: notebooks, Python scripts, packaging a model as a Python package, training, fine-tuning, inference, and more
Receiving guidance on advanced engineering topics in Generative AI with Large Language Models.
Requirements
- No prior experience in Generative AI, Large Language Models, Natural Language Processing, or Python is needed. This course provides everything you need to enter the field with enthusiasm and curiosity. Concepts and components are first explained theoretically and through documentation, followed by hands-on technical implementations. All code snippets are explained step by step, with accompanying notebook playgrounds and complete Python source code, structured to ensure a clear and comprehensive understanding.