Description

Image generation with Artificial Intelligence is an area attracting a lot of attention, both from technology professionals and from people in other fields who want to create their own custom images. The tools used for this purpose are based on advanced, modern techniques from machine learning and computer vision, which make it possible to create new compositions with high graphic quality. You can create a new image just by sending a textual description: you ask the AI (artificial intelligence) for exactly the image you want! For example, if you send the text "a cat reading a book in space", the AI will create an image matching that description! This technique has gained a lot of attention in recent years and is expected to keep growing in the coming years.

Several tools are available for this purpose, and one of the most widely used is Stable Diffusion, developed by StabilityAI. It is open source, easy to use, fast, and capable of generating high-quality images. Because it is open source, developers have created many extensions capable of generating an enormous variety of images in the most diverse styles.
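To give a taste of how simple this is in Python, here is a minimal text-to-image sketch using the Hugging Face diffusers library. The model ID, prompt, and file name are illustrative examples only, not necessarily the exact ones used in the course:

    import torch
    from diffusers import StableDiffusionPipeline

    # Load a Stable Diffusion checkpoint from the Hugging Face Hub
    # (the model ID below is illustrative; any compatible checkpoint works).
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    )
    pipe = pipe.to("cuda")  # a Colab GPU is enough

    # Text-to-image: the prompt is the only required input.
    image = pipe("a cat reading a book in space").images[0]
    image.save("cat_in_space.png")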

In this course you will learn everything you need to create new images using Stable Diffusion and the Python programming language. The course is divided into six parts:


  • Part 1: Stable Diffusion basics: intuition on how the technology works and how to create your first images. You will also learn about the main parameters used to get different results, as well as how to create images in different styles (see the short code sketch after this description)

  • Part 2: Prompt Engineering: You will learn how to write prompts so the AI understands exactly what you want to generate

  • Part 3: Training a custom model: How about putting your own photos in all kinds of environments? In this section you will learn how to use your own images to generate avatars of yourself

  • Part 4: Image to image: In addition to creating images from text, it is also possible to send an image as a starting point for the AI to generate new images

  • Part 5: Inpainting - exchanging classes: You will learn how to edit images to remove objects or swap them. For example: remove the dog and replace it with a cat

  • Part 6: ControlNet: In this section you will implement digital image processing techniques (edge and pose detection) to improve the results

All implementations are done step by step in Google Colab, online and with a GPU, so you don't need a powerful computer to get amazing results in a matter of seconds! More than 50 lessons and more than 6 hours of video!
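As an idea of what the Colab cells from Parts 1 and 2 look like, here is a small sketch of the main generation parameters covered in the course (seed, inference steps, guidance scale, and negative prompt), again using the diffusers library; the model ID, prompt texts, and values are only illustrative:

    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    # A fixed seed makes the result reproducible.
    generator = torch.Generator("cuda").manual_seed(42)

    image = pipe(
        prompt="a watercolor painting of a lighthouse at sunset",
        negative_prompt="blurry, low quality, deformed",  # what should NOT appear
        num_inference_steps=50,  # more steps usually means more detail, but slower
        guidance_scale=7.5,      # how strongly the prompt is followed
        generator=generator,
    ).images[0]
    image.save("lighthouse.png")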

What you will learn

Understand the basics of Stable Diffusion to create new images

Learn how to use Stable Diffusion parameters to get different results

Create images using other models provided by the Open Source community

Learn about Prompt Engineering to choose the best keywords to generate the best images

How to use negative prompts to indicate what should not appear in the images

Use fine-tuning to create your custom model to generate your own images

Send initial images to condition image generation (see the code sketch after this list)

Use inpainting to edit images, remove unwanted elements or swap objects
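As a sketch of the image-to-image workflow listed above, the diffusers img2img pipeline takes a starting image plus a prompt; the model ID, file names, prompt, and strength value below are illustrative assumptions:

    import torch
    from PIL import Image
    from diffusers import StableDiffusionImg2ImgPipeline

    pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    # The starting image conditions the generation ("sketch.png" is a placeholder).
    init_image = Image.open("sketch.png").convert("RGB").resize((512, 512))

    image = pipe(
        prompt="a detailed oil painting of a mountain village",
        image=init_image,
        strength=0.75,  # how much the original image may change (0 keeps it, 1 ignores it)
    ).images[0]
    image.save("village.png")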

Requirements

  • Programming logic and Python basics are desirable but not required
  • You can follow the course even without a technical background

Course content

8 sections

Introduction

2 lectures
Course content
12:20
Course materials
00:07

Stable Diffusion basics

19 lectures
Stable Diffusion - intuition 1
08:40
Stable Diffusion - intuition 2
14:51
Stable Diffusion - intuition 3
19:57
Stable Diffusion - intuition 4
15:50
Stable Diffusion - limitations of use
10:11
Note about the implementation
01:08
Installing the libraries
08:04
Prompts - intuition
05:17
Generating the first image
06:31
Generating multiple images
08:21
Parameters - seed
07:59
Parameters - inference step
08:53
Parameters - guidance scale
09:00
Negative prompts - intuition
06:27
Negative prompts - implementation
03:18
Other models - intuition
06:02
Other models - implementation
06:40
Specific styles
05:29
Changing the scheduler
10:12

Prompt engineering

10 lectures
Preparing the environment
04:00
Subject/object, action/location, and type
11:07
Style, colors, and artist
06:32
Resolution, site, and other attributes
09:34
Negative prompts
11:45
Stable Diffusion v2
05:51
Generating arts and photographs
05:08
Generating landscapes and 3D images
03:29
Generating drawings and architectures
03:40
Custom models
08:17

Custom training

6 lectures
Fine-tuning with Dreambooth – intuition
15:22
Preparing the environment
05:36
Training 1
12:56
Training 2
08:38
Generating the images
08:00
Improving the results
07:04

Image to image

6 lectures
Preparing the environment
04:03
Generating the image
04:25
Strength parameter
07:56
Other image styles
04:51
Other models
08:34
Adding elements
08:08

Inpainting – exchanging classes

3 lectures
Preparing the environment
06:19
Exchanging classes 1
06:33
Exchanging classes 2
11:06

ControlNet

5 lectures
Preparing the environment
05:38
Generating images using edges 1
12:58
Generating images using edges 2
09:16
Generating images using poses 1
11:56
Generating images using poses 2
07:11

Final remarks

2 lectures
Final remarks
01:31
BONUS
01:32
