Description

Dive into the rapidly evolving world of Generative AI with our comprehensive course, designed for learners eager to build, train, and deploy Large Language Models (LLMs) from scratch.


This course equips you with a wide range of tools, frameworks, and techniques for building your own GenAI applications with Large Language Models: Python, PyTorch, LangChain, LlamaIndex, Hugging Face, FAISS, Chroma, Tavily, Streamlit, Gradio, FastAPI, Docker, and more.


This hands-on course covers essential topics such as implementing Transformers, fine-tuning models, prompt engineering, vector embeddings, and vector stores. You will create cutting-edge AI applications, including AI assistants, chatbots, Retrieval-Augmented Generation (RAG) systems, and autonomous agents, and deploy your GenAI applications from scratch using REST APIs and Docker containerization.


By the end of this course, you will have the practical skills and theoretical knowledge needed to engineer and deploy your own LLM-based applications.


Let's look at our table of contents:

Introduction to the Course

  • Course Objectives

  • Course Structure

  • Learning Paths

Part 1: Software Prerequisites for Python Projects

  • IDE

    • VS Code

    • PyCharm

  • Terminal

    • Windows: PowerShell, etc.

    • macOS: iTerm2, etc.

    • Linux: Bash, etc.

  • Python Installation

    • Python installer

    • Anaconda distribution

  • Python Environment

    • venv

    • conda

  • Python Package Installation

    • PyPI, pip

    • Anaconda, conda

  • Software Used in This Course
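As a taste of the environment-setup workflow covered in this part, a minimal sketch using Python's built-in `venv` module might look like the following (the package names in the final comment are illustrative):

```shell
# Create an isolated virtual environment in the project directory
python3 -m venv .venv

# Activate it (macOS/Linux; on Windows PowerShell run .venv\Scripts\Activate.ps1)
. .venv/bin/activate

# The interpreter and pip now resolve inside .venv
python -c "import sys; print(sys.prefix)"

# Install project packages from PyPI into the environment, for example:
# pip install torch transformers langchain streamlit
```

The `conda` route covered alongside this one achieves the same isolation with `conda create -n myenv python` and `conda activate myenv`.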

Part 2: Introduction to Transformers

  • Introduction to NLP Before and After the Transformer’s Arrival

  • Mastering Transformers Block by Block

  • Transformer Training Process

  • Transformer Inference Process
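Part 2 builds the Transformer block by block, and one of those blocks, sinusoidal positional encoding, has a simple closed form: PE(pos, 2i) = sin(pos / 10000^(2i/d_model)) and PE(pos, 2i+1) = cos(pos / 10000^(2i/d_model)). As a dependency-free preview (the course implements the blocks in PyTorch), it can be sketched as:

```python
import math

def positional_encoding(seq_len, d_model):
    """Sinusoidal positional encoding from 'Attention Is All You Need':
    PE[pos, 2i] = sin(pos / 10000^(2i/d_model)), PE[pos, 2i+1] = cos(...)."""
    pe = [[0.0] * d_model for _ in range(seq_len)]
    for pos in range(seq_len):
        for i in range(0, d_model, 2):
            angle = pos / (10000 ** (i / d_model))
            pe[pos][i] = math.sin(angle)      # even dimensions use sine
            if i + 1 < d_model:
                pe[pos][i + 1] = math.cos(angle)  # odd dimensions use cosine
    return pe

# Position 0 alternates sin(0) = 0 and cos(0) = 1 across the dimensions
pe = positional_encoding(4, 8)
print(pe[0])
```

These position vectors are added to the input embeddings so the attention layers, which are otherwise order-blind, can tell token positions apart.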

Part 3: Implementing Transformers from Scratch with PyTorch

  • Introduction to the Training Process Implementation

  • Implementing a Transformer as a Python Package

  • Calling the Training and Inference Processes

  • Experimenting with Notebooks
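The centerpiece of this part, multi-head attention, is built on scaled dot-product attention: softmax(QK^T / sqrt(d_k)) V. A toy single-head, dependency-free sketch of that formula (the course version uses PyTorch tensors, batching, and masking) might look like:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V.
    Q, K, V are lists of vectors (lists of floats)."""
    d_k = len(K[0])
    out = []
    for q in Q:
        # Similarity of this query to every key, scaled by sqrt(d_k)
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k) for k in K]
        weights = softmax(scores)
        # Output is the attention-weighted average of the value vectors
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out

# Tiny example: 2 query positions attending over 3 key/value positions
Q = [[1.0, 0.0], [0.0, 1.0]]
K = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
V = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
print(attention(Q, K, V))
```

Because the attention weights sum to 1, each output row is a convex combination of the value vectors, which is exactly what the masking tricks in this part manipulate.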

Part 4: Generative AI with the Hugging Face Ecosystem

  • Introduction to Hugging Face

  • Hugging Face Hubs

    • Models

    • Datasets

    • Spaces

  • Hugging Face Libraries

    • Transformers

    • Datasets

    • Evaluate, etc.

  • Practical Guides with Hugging Face

    • Fine-Tuning a Pre-trained Language Model with Hugging Face

    • End-to-End Fine-Tuning Example

    • Sharing Your Model

Part 5: Components to Build LLM-Based Web Applications

  • Backend Components

    • LLM Orchestration Frameworks: LangChain, LlamaIndex

    • Open-Source vs. Proprietary LLMs

    • Vector Embedding

    • Vector Database

    • Prompt Engineering

  • Frontend Components

    • Python-Based Frontend Frameworks: Streamlit, Gradio
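Among the backend components above, prompt engineering often begins with reusable templates. The course uses LangChain's prompt templates for this; as a framework-free illustration of the same idea, with the `role` and `question` slots purely as example parameters:

```python
from string import Template

# A reusable prompt template; the $role and $question slots are illustrative
PROMPT = Template(
    "You are a helpful $role.\n"
    "Answer the user's question concisely.\n\n"
    "Question: $question\nAnswer:"
)

def build_prompt(role: str, question: str) -> str:
    """Fill the template before sending the resulting string to an LLM."""
    return PROMPT.substitute(role=role, question=question)

print(build_prompt("travel assistant", "What should I pack for Hanoi in June?"))
```

Keeping the template separate from the fill-in values is what lets one backend power many task-specific assistants, as Part 6 demonstrates.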

Part 6: Building LLM-Based Web Applications

  • Task-Specific AI Assistants

    • Culinary AI Assistant

    • Marketing AI Assistant

    • Customer AI Assistant

    • SQL-Querying AI Assistant

    • Travel AI Assistant

    • Summarization AI Assistant

    • Interview AI Assistant

  • Simple AI Chatbot

  • RAG (Retrieval-Augmented Generation) Based AI Chatbot

    • Chat with PDF, DOCX, CSV, TXT, Webpage

  • Agent-Based AI Chatbot

    • AI Chatbot with Math Problems

    • AI Chatbot with Search Problems
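The RAG-based chatbots outlined above all depend on one step: retrieving the chunks most relevant to the user's question before the LLM generates an answer. As a toy, dependency-free illustration of that retrieval step (real systems use dense vector embeddings and a vector store such as FAISS or Chroma, covered in Part 5), bag-of-words cosine similarity can stand in:

```python
import math
from collections import Counter

def embed(text):
    """Toy 'embedding': word counts (real RAG uses dense vectors)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=1):
    """Return the k chunks most similar to the query."""
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]

chunks = [
    "The Transformer uses self-attention instead of recurrence.",
    "Docker packages an application with its dependencies.",
    "RAG retrieves relevant documents before generating an answer.",
]
print(retrieve("which component retrieves relevant documents", chunks))
```

The retrieved chunks are then pasted into the prompt as context, which is the whole trick behind "Chat with PDF"-style applications.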

Part 7: Serving LLM-Based Web Applications

  • Creating the Frontend and Backend as Two Separate Services

  • Communicating Between Frontend and Backend Using a REST API

  • Serving the Application with Docker

    • Install, Run, and Enable Communication Between Frontend and Backend in a Single Docker Container

  • Use Case

    • An LLM-Based Song Recommendation App
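The serving architecture above boils down to a frontend and a backend exchanging JSON over HTTP. The course builds this with FastAPI, Streamlit, and Docker; purely to illustrate the request/response pattern with no dependencies, here is a stdlib-only sketch in which the `/recommend` endpoint and its canned reply are invented for the example:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class RecommendHandler(BaseHTTPRequestHandler):
    """Toy backend: answers POST requests with a canned song list."""
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        body = json.dumps({"mood": payload.get("mood"),
                           "songs": ["song A", "song B"]}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

# Bind to port 0 so the OS picks a free port, then serve in the background
server = HTTPServer(("127.0.0.1", 0), RecommendHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The "frontend" side: POST JSON to the backend and read the JSON reply
req = urllib.request.Request(
    f"http://127.0.0.1:{server.server_port}/recommend",
    data=json.dumps({"mood": "relaxed"}).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    reply = json.loads(resp.read())
print(reply)
server.shutdown()
```

In the course, the two sides run as separate Docker services, but the contract between them is the same JSON-over-HTTP exchange shown here.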

Conclusions and Next Steps

  • What We Have Learned

  • Next Steps

Thank You

What You Will Learn

Understanding how to build, implement, train, and run inference on a Large Language Model, such as the Transformer ("Attention Is All You Need"), from scratch.

Gaining knowledge of the components, tools, and frameworks required to build an LLM-based application, such as LangChain, LlamaIndex, and Hugging Face.

Learning how to fine-tune a Large Language Model on your custom dataset for specific downstream Natural Language Processing (NLP) tasks.

Implementing best practices in prompt engineering to optimize the performance of Large Language Models.

Building diverse LLM-based applications, including AI assistants, chatbots, Retrieval-Augmented Generation (RAG) systems, and intelligent agents.

Learning how to serve your LLM-based application from scratch with a REST API and Docker.

Engaging in hands-on technical implementations: notebooks, Python scripts, building a model as a Python package, training, fine-tuning, inference, and more.

Receiving guidance on advanced engineering topics in Generative AI with Large Language Models.

Requirements

  • No prior experience in Generative AI, Large Language Models, Natural Language Processing, or Python is needed. This course will provide you with everything you need to enter this field with enthusiasm and curiosity. Concepts and components are first explained theoretically and through documentation, followed by hands-on technical implementations. All code snippets are explained step-by-step, with accompanying Notebook playgrounds and complete Python source code, structured to ensure a clear and comprehensive understanding.

Course Content

10 sections

Introduction to the Course

6 lectures
Welcome to the course
00:33
Big Picture
01:23
What We Will Learn
00:16
Course Objectives
01:10
Course Structure
03:30
Learning Paths
01:24

Software Prerequisites for Python Projects

9 lectures
Introduction
00:27
What We Will Learn
00:30
Overview of Software Prerequisites for Python Projects
06:19
Integrated Development Environment (IDE)
00:45
Terminal
00:42
Python Installation
01:15
Python Environment
03:04
Python Package Installation
01:18
What I Used in This Course
00:44

Introduction to Transformers

31 lectures
Introduction
00:57
What We Will Learn
01:09
NLP Before Transformer's Arrival - Pros & Cons of RNNs
05:19
The Arrival of Transformer
00:46
RNNs vs Transformers
00:47
NLP After Transformer's Arrival
00:32
Objective: Mastering Transformer's Block-by-Block
00:20
Inputs / Outputs
02:35
Tokenizer
01:02
Preparing Inputs for Encoder Part
00:57
Preparing Inputs for Decoder Part
00:42
Preparing Target for Loss Calculation
00:30
Introduction to Encoder / Decoder Inputs
00:50
Input Embedding
01:51
Positional Encoding
01:33
Encoder / Decoder Inputs - Put It All Together
00:51
Introduction to Encoder
00:35
Multi-Head Attention | Self-Attention Mechanism
05:13
Layer Normalization
02:48
Feed Forward
01:06
Residual Connection
00:31
Encoder - Put It All Together
00:27
Introduction to Decoder
01:37
Masked Multi-Head Attention
01:04
Multi-Head Attention for Decoder
00:59
Decoder - Put It All Together
00:27
Prediction Output
01:43
Transformer Building Blocks - Congratulations!
00:23
Transformer's Training Process
01:37
Transformer's Inference Process
02:13
What We Have Learned
01:11

Implementing Transformers from Scratch with PyTorch

41 lectures
Introduction
00:40
What We Will Learn
00:59
Implementation's Formula
00:56
Implementation Plan
00:56
Training Process Implementation
02:29
Source Code Structure
02:28
Load Config
02:30
Get Dataset
02:30
Get Tokenizer
03:40
Introduction to Mask Functions
00:16
Creating Encoder Mask
02:41
Creating Padding Mask
01:10
Creating Causal Mask
02:41
Creating Decoder Mask
01:29
Data Preprocessor
03:57
Preprocessing Data
02:39
Introduction to Transformer's Building Layers
00:22
Input Embedding
04:11
Positional Encoding
05:40
Multi-Head Attention
11:44
Feed Forward
04:41
Layer Normalization
05:43
Residual Connection
05:47
Projection
04:54
Introduction to Encoder & Decoder Implementation
00:24
Encoder Layer
02:38
Encoder
05:36
Decoder Layer
03:43
Decoder
06:04
Transformer
03:01
Creating Transformer Model
07:34
Softmax
02:54
Cross Entropy Loss
06:10
Introduction to Training Functions
00:14
Train Engine
03:03
Evaluation During Training
02:21
Inference During Training
03:09
Training in Action
03:17
Introduction to Inference
00:47
Inference
08:17
What We Have Learned
00:48

Generative AI with the Hugging Face Ecosystem

13 lectures
Introduction
00:26
What We Will Learn
00:49
Introduction to Hugging Face
02:53
Hugging Face Ecosystem
01:06
Hugging Face Hubs
02:07
Hugging Face Libraries
01:24
Transformers - Hugging Face's Python Package
02:57
Datasets - Hugging Face's Python Package
02:30
Evaluate - Hugging Face's Python Package
02:14
Fine-tuning a Pre-Trained Language Model with Hugging Face
02:23
Sharing Your Model
00:35
End-to-End Fine-Tuning Example
04:47
What We Have Learned
00:40

Components to Build LLM-based Web Applications

22 lectures
Introduction
00:20
What We Will Learn
00:52
Introduction to Backend Components for LLM-based Applications
00:38
LLM Orchestration Frameworks
00:38
LangChain
06:07
LangChain - Implementation Example
01:55
LlamaIndex
00:53
LlamaIndex - Implementation Example
02:09
Open-source vs Proprietary LLMs
01:13
Open-source vs Proprietary LLMs - Implementation Examples
01:54
Vector Embedding
08:35
Vector Database
02:54
Vector Database - Implementation Examples
02:12
Prompt Engineering
05:04
Basic Prompt Engineering
08:00
Advanced Prompt Engineering
03:26
Frontend Frameworks for LLM-based Web Applications
01:30
Streamlit
05:33
Streamlit - Implementation Example
01:25
Gradio
04:07
Gradio - Implementation Examples
09:05
What We Have Learned
01:11

Building LLM-based Web Applications

21 lectures
Introduction
00:08
What We Will Build
01:21
Backend & Frontend Frameworks
01:16
Task-Specific AI Assistants
02:37
Task-Specific AI Assistants - Sample Code Snippet
00:55
Culinary AI Assistant
04:16
Marketing AI Assistant
04:37
Customer AI Assistant
02:47
SQL-Querying AI Assistant
04:48
More AI Assistant App Ideas to Explore on Your Own
03:36
Simple AI Chatbot
04:52
RAG-based AI Chatbot
05:28
Simple RAG-based AI Chatbot for PDFs
03:18
RAG for Different Document/Data Types
01:03
What We Have Learned from RAG-based AI Chatbot
01:04
Limitations of Simple AI Chatbot
03:05
Agent-based AI Chatbot
01:26
Agent-based AI Chatbot with Math Tools
04:24
Agent-based AI Chatbot with Search Tools
02:47
What We Have Learned from Agent-based AI Chatbot
00:58
What We Have Learned
01:17

Serving LLM-based Web Applications

14 lectures
Introduction
00:20
What We Will Learn
01:05
Architecture for Serving LLM-based Web Applications
00:47
Software Prerequisites for Serving App
01:06
Use-case: Serving LLM-based Song Recommendation App
01:01
Source-Code Structure
02:18
Backend Source-Code
04:13
Frontend Source-Code
03:23
Testing Backend-Frontend Communication
01:43
Launching App with docker-compose
02:14
Test Backend
01:10
Test Frontend
01:20
Stopping App with docker-compose
00:37
What We Have Learned
01:06

Conclusion and Next Steps

2 lectures
What We Have Learned
01:51
Next Steps
01:51

Thank You

1 lecture
Thank You!
00:30
