Description

Deep learning is increasingly dominating technology and has major implications for society.

From self-driving cars to medical diagnoses, from face recognition to deep fakes, and from language translation to music generation, deep learning is spreading like wildfire throughout all areas of modern technology.

But deep learning is not only about super-fancy, cutting-edge, highly sophisticated applications. Deep learning is increasingly becoming a standard tool in machine-learning, data science, and statistics. Deep learning is used by small startups for data mining and dimension reduction, by governments for detecting tax evasion, and by scientists for detecting patterns in their research data.

Deep learning is now used in most areas of technology, business, and entertainment. And it's becoming more important every year.


How does deep learning work?

Deep learning is built on a really simple principle: Take a super-simple algorithm (weighted sum and nonlinearity), and repeat it many, many times until the result is an incredibly complex and sophisticated learned representation of the data.

Is it really that simple? Mmm, OK, it's actually a tiny bit more complicated than that ;) but that's the core idea, and everything else -- literally everything else in deep learning -- is just clever ways of putting together these fundamental building blocks. That doesn't mean that deep neural networks are trivial to understand: there are important architectural differences between feedforward networks, convolutional networks, and recurrent networks.
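
To make that concrete, here is a minimal sketch of the principle in PyTorch (the library used for implementations in this course). The layer sizes and data below are arbitrary illustrations, not taken from the course:

    import torch
    import torch.nn as nn

    # The core building block, repeated: a weighted sum (Linear) followed by
    # a nonlinearity (ReLU). Stacking these blocks is what makes a network "deep".
    # Layer sizes here are arbitrary, for illustration only.
    model = nn.Sequential(
        nn.Linear(10, 32),  # weighted sum of 10 inputs -> 32 units
        nn.ReLU(),          # nonlinearity
        nn.Linear(32, 32),  # ...and repeat...
        nn.ReLU(),
        nn.Linear(32, 1),   # output
    )

    x = torch.randn(5, 10)   # a batch of 5 samples, 10 features each
    print(model(x).shape)    # torch.Size([5, 1])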

Given the diversity of deep learning model designs, parameters, and applications, you can only learn deep learning -- I mean, really learn deep learning, not just have superficial knowledge from a YouTube video -- by having an experienced teacher guide you through the math, implementations, and reasoning. And of course, you need to have lots of hands-on examples and practice problems to work through. Deep learning is basically just applied math, and, as everyone knows, math is not a spectator sport!


What is this course all about?

Simply put: The purpose of this course is to provide a deep dive into deep learning. You will gain flexible, fundamental, and lasting expertise in deep learning. You will have a deep understanding of the fundamental concepts, so that you will be able to learn new topics and trends as they emerge in the future.

Please note: This is not a course for someone who wants a quick overview of deep learning with a few solved examples. Instead, this course is designed for people who really want to understand how and why deep learning works; when and how to select metaparameters like optimizers, normalizations, and learning rates; how to evaluate the performance of deep neural network models; and how to modify and adapt existing models to solve new problems.


You can learn everything about deep learning in this course.

In this course, you will learn

  • Theory: Why are deep learning models built the way they are?

  • Math: What are the formulas and mechanisms of deep learning?

  • Implementation: How are deep learning models actually constructed in Python (using the PyTorch library)?

  • Intuition: Why is this or that metaparameter the right choice? How do you interpret the effects of regularization? And so on.

  • Python: If you're completely new to Python, go through the 8+ hour coding tutorial appendix. If you're already a knowledgeable coder, then you'll still learn some new tricks and code optimizations.

  • Google Colab: Colab is an amazing online tool for running Python code, simulations, and heavy computations using Google's cloud services. There's no need to install anything on your computer.
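
As a tiny illustration of that workflow (a generic example, not code from the course): PyTorch typically comes preinstalled in Colab, and you can check whether a GPU runtime is available like this:

    import torch

    # Use the GPU if the Colab runtime provides one; otherwise fall back to the CPU.
    device = 'cuda' if torch.cuda.is_available() else 'cpu'
    print(f'Running on: {device}')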


Unique aspects of this course

  • Clear and comprehensible explanations of concepts in deep learning, including transfer learning, generative modeling, convolutional neural networks, feedforward networks, generative adversarial networks (GAN), and more.

  • Several distinct explanations of the same ideas, which is a proven technique for learning.

  • Visualizations using graphs, numbers, and spaces that provide intuition about artificial neural networks.

  • LOTS of exercises, projects, code challenges, and suggestions for exploring the code. You learn best by doing it yourself!

  • Active Q&A forum where you can ask questions, get feedback, and contribute to the community.

  • 8+ hour Python tutorial. That means you don't need to master Python before enrolling in this course.


So what are you waiting for??

Watch the course introductory video and free sample videos to learn more about the contents of this course and about my teaching style. If you are unsure whether this course is right for you and want to learn more, feel free to contact me with questions before you sign up.

I hope to see you soon in the course!

Mike

What you'll learn

The theory and math underlying deep learning

How to build artificial neural networks

Architectures of feedforward and convolutional networks

Building models in PyTorch

The calculus and code of gradient descent

Fine-tuning deep network models

Learn Python from scratch (no prior coding experience necessary)

How and why autoencoders work

How to use transfer learning

Improving model performance using regularization

Optimizing weight initializations

Understand image convolution using predefined and learned kernels

Whether deep learning models are understandable or mysterious black-boxes!

Using GPUs for deep learning (much faster than CPUs!)

Requirements

  • Interest in learning about deep learning!
  • Python/PyTorch skills are taught in the course
  • A Google account (Google Colab is used as the Python IDE)

Course content

32 sections

Introduction

2 lectures
How to learn from this course
09:25
Using Udemy like a pro
07:57

Download all course materials

2 lectures
Downloading and using the code
06:29
My policy on code-sharing
01:38

Concepts in deep learning

5 lectures
What is an artificial neural network?
16:02
How models "learn"
12:26
The role of DL in science and knowledge
16:43
Running experiments to understand DL
13:03
Are artificial "neurons" like biological neurons?
17:49

About the Python tutorial

1 lecture
Should you watch the Python tutorial?
04:25

Math, numpy, PyTorch

19 lectures
PyTorch or TensorFlow?
00:44
Introduction to this section
02:06
Spectral theories in mathematics
09:16
Terms and datatypes in math and computers
07:05
Converting reality to numbers
06:33
Vector and matrix transpose
06:58
OMG it's the dot product!
09:45
Matrix multiplication
15:27
Softmax
19:26
Logarithms
08:26
Entropy and cross-entropy
18:18
Min/max and argmin/argmax
12:47
Mean and variance
15:34
Random sampling and sampling variability
11:18
Reproducible randomness via seeding
08:37
The t-test
13:57
Derivatives: intuition and polynomials
16:39
Derivatives find minima
08:32
Derivatives: product and chain rules
10:00

Gradient descent

10 lectures
Overview of gradient descent
14:15
What about local minima?
11:56
Gradient descent in 1D
17:11
CodeChallenge: unfortunate starting value
11:30
Gradient descent in 2D
14:48
CodeChallenge: 2D gradient ascent
05:16
Parametric experiments on g.d.
18:56
CodeChallenge: fixed vs. dynamic learning rate
15:33
Vanishing and exploding gradients
06:04
Tangent: Notebook revision history
01:52

ANNs (Artificial Neural Networks)

21 lectures
The perceptron and ANN architecture
19:50
A geometric view of ANNs
13:38
ANN math part 1 (forward prop)
16:22
ANN math part 2 (errors, loss, cost)
10:54
ANN math part 3 (backprop)
12:10
ANN for regression
24:09
CodeChallenge: manipulate regression slopes
18:58
ANN for classifying qwerties
22:22
Learning rates comparison
23:46
Multilayer ANN
19:51
Linear solutions to linear problems
08:14
Why multilayer linear models don't exist
06:20
Multi-output ANN (iris dataset)
26:59
CodeChallenge: more qwerties!
11:56
Comparing the number of hidden units
09:59
Depth vs. breadth: number of parameters
17:25
Defining models using sequential vs. class
13:17
Model depth vs. breadth
20:31
CodeChallenge: convert sequential to class
06:37
Diversity of ANN visual representations
00:18
Reflection: Are DL models understandable yet?
08:26

Overfitting and cross-validation

8 lectures
What is overfitting and is it as bad as they say?
12:28
Cross-validation
17:13
Generalization
06:09
Cross-validation -- manual separation
12:39
Cross-validation -- scikitlearn
21:01
Cross-validation -- DataLoader
20:27
Splitting data into train, devset, test
09:45
Cross-validation on regression
08:09

Regularization

12 lectures
Regularization: Concept and methods
13:38
train() and eval() modes
07:14
Dropout regularization
21:56
Dropout regularization in practice
23:13
Dropout example 2
06:33
Weight regularization (L1/L2): math
18:25
L2 regularization in practice
13:24
L1 regularization in practice
12:22
Training in mini-batches
11:32
Batch training in action
10:47
The importance of equal batch sizes
06:59
CodeChallenge: Effects of mini-batch size
11:57

Metaparameters (activations, optimizers)

24 lectures
What are "metaparameters"?
05:02
The "wine quality" dataset
17:29
CodeChallenge: Minibatch size in the wine dataset
15:38
Data normalization
13:12
The importance of data normalization
09:33
Batch normalization
13:16
Batch normalization in practice
07:38
CodeChallenge: Batch-normalize the qwerties
05:06
Activation functions
17:59
Activation functions in PyTorch
12:12
Activation functions comparison
09:27
CodeChallenge: Compare relu variants
07:48
CodeChallenge: Predict sugar
17:06
Loss functions
16:47
Loss functions in PyTorch
18:41
More practice with multioutput ANNs
14:05
Optimizers (minibatch, momentum)
18:41
SGD with momentum
07:46
Optimizers (RMSprop, Adam)
15:40
Optimizers comparison
10:17
CodeChallenge: Optimizers and... something
06:57
CodeChallenge: Adam with L2 regularization
07:42
Learning rate decay
12:15
How to pick the right metaparameters
11:47

FFNs (Feed-Forward Networks)

12 lectures
What are fully-connected and feedforward networks?
04:57
The MNIST dataset
12:33
FFN to classify digits
22:20
CodeChallenge: Binarized MNIST images
05:24
CodeChallenge: Data normalization
16:16
Distributions of weights pre- and post-learning
14:48
CodeChallenge: MNIST and breadth vs. depth
12:35
CodeChallenge: Optimizers and MNIST
07:06
Scrambled MNIST
08:00
Shifted MNIST
11:25
CodeChallenge: The mystery of the missing 7
10:47
Universal approximation theorem
08:31

More on data

11 lectures
Anatomy of a torch dataset and dataloader
17:57
Data size and network size
16:35
CodeChallenge: unbalanced data
20:05
What to do about unbalanced designs?
07:45
Data oversampling in MNIST
16:30
Data noise augmentation (with devset+test)
13:16
Data feature augmentation
19:40
Getting data into colab
06:05
Save and load trained models
06:14
Save the best-performing model
15:18
Where to find online datasets
05:32

Measuring model performance

8 lectures
Two perspectives of the world
07:01
Accuracy, precision, recall, F1
12:39
APRF in code
06:42
APRF example 1: wine quality
13:34
APRF example 2: MNIST
12:01
CodeChallenge: MNIST with unequal groups
09:14
Computation time
09:55
Better performance in test than train?
08:35

FFN milestone projects

6 lectures
Project 1: A gratuitously complex adding machine
07:05
Project 1: My solution
11:18
Project 2: Predicting heart disease
07:14
Project 2: My solution
18:21
Project 3: FFN for missing data interpolation
09:35
Project 3: My solution
08:31

Weight inits and investigations

10 lectures
Explanation of weight matrix sizes
11:54
A surprising demo of weight initializations
15:52
Theory: Why and how to initialize weights
12:46
CodeChallenge: Weight variance inits
13:14
Xavier and Kaiming initializations
15:42
CodeChallenge: Xavier vs. Kaiming
16:54
CodeChallenge: Identically random weights
12:40
Freezing weights during learning
12:58
Learning-related changes in weights
21:55
Use default inits or apply your own?
04:36

Autoencoders

6 lectures
What are autoencoders and what do they do?
11:42
Denoising MNIST
15:48
CodeChallenge: How many units?
19:52
AEs for occlusion
17:55
The latent code of MNIST
21:57
Autoencoder with tied weights
24:14

Running models on a GPU

3 lectures
What is a GPU and why use it?
15:07
Implementation
10:13
CodeChallenge: Run an experiment on the GPU
06:46

Convolution and transformations

12 lectures
Convolution: concepts
21:33
Feature maps and convolution kernels
09:32
Convolution in code
21:05
Convolution parameters (stride, padding)
12:14
The Conv2 class in PyTorch
13:23
CodeChallenge: Choose the parameters
07:10
Transpose convolution
13:41
Max/mean pooling
18:35
Pooling in PyTorch
13:43
To pool or to stride?
09:47
Image transforms
16:57
Creating and using custom DataLoaders
19:06

Understand and design CNNs

16 lectures
The canonical CNN architecture
10:47
CNN to classify MNIST digits
26:06
CNN on shifted MNIST
08:36
Classify Gaussian blurs
24:10
Examine feature map activations
27:50
CodeChallenge: Softcode internal parameters
16:48
CodeChallenge: How wide the FC?
11:25
Do autoencoders clean Gaussians?
17:10
CodeChallenge: AEs and occluded Gaussians
09:36
CodeChallenge: Custom loss functions
20:15
Discover the Gaussian parameters
16:59
The EMNIST dataset (letter recognition)
24:59
Dropout in CNNs
10:14
CodeChallenge: How low can you go?
06:45
CodeChallenge: Varying number of channels
13:39
So many possibilities! How to create a CNN?
04:42

CNN milestone projects

5 lectures
Project 1: Import and classify CIFAR10
07:15
Project 1: My solution
12:01
Project 2: CIFAR-autoencoder
04:51
Project 3: FMNIST
03:52
Project 4: Psychometric functions in CNNs
11:54

Transfer learning

8 lectures
Transfer learning: What, why, and when?
16:52
Transfer learning: MNIST -> FMNIST
10:06
CodeChallenge: letters to numbers
14:25
Famous CNN architectures
06:46
Transfer learning with ResNet-18
16:43
CodeChallenge: VGG-16
03:41
Pretraining with autoencoders
20:01
CIFAR10 with autoencoder-pretrained model
18:11

Style transfer

5 lectures
What is style transfer and how does it work?
04:36
The Gram matrix (feature activation covariance)
12:37
The style transfer algorithm
10:58
Transferring the screaming bathtub
22:16
CodeChallenge: Style transfer with AlexNet
07:14

Generative adversarial networks

7 lectures
GAN: What, why, and how
17:22
Linear GAN with MNIST
21:55
CodeChallenge: Linear GAN with FMNIST
09:50
CNN GAN with Gaussians
15:06
CodeChallenge: Gaussians with fewer layers
06:05
CNN GAN with FMNIST
06:24
CodeChallenge: CNN GAN with CIFAR
07:51

RNNs (Recurrent Neural Networks) (and GRU/LSTM)

9 lectures
Leveraging sequences in deep learning
12:53
How RNNs work
15:14
The RNN class in PyTorch
17:44
Predicting alternating sequences
19:30
CodeChallenge: sine wave extrapolation
24:49
More on RNNs: Hidden states, embeddings
15:51
GRU and LSTM
23:08
The LSTM and GRU classes
13:26
Lorem ipsum
25:10

Ethics of deep learning

5 lectures
Will AI save us or destroy us?
09:40
Example case studies
06:39
Some other possible ethical scenarios
10:35
Will deep learning take our jobs?
10:27
Accountability and making ethical AI
11:22

Where to go from here?

2 lectures
How to learn topic _X_ in deep learning?
08:08
How to read academic DL papers
16:00

Python intro: Data types

8 lectures
How to learn from the Python tutorial
03:25
Variables
18:14
Math and printing
18:31
Lists (1 of 2)
13:31
Lists (2 of 2)
09:29
Tuples
07:40
Booleans
18:19
Dictionaries
11:51

Python intro: Indexing, slicing

2 lectures
Indexing
12:30
Slicing
11:45

Python intro: Functions

8 lectures
Inputs and outputs
07:01
Python libraries (numpy)
14:20
Python libraries (pandas)
13:57
Getting help on functions
07:36
Creating functions
20:27
Global and local variable scopes
13:20
Copies and referents of variables
05:45
Classes and object-oriented programming
18:46

Python intro: Flow control

10 lectures
If-else statements
15:03
If-else statements, part 2
16:58
For loops
17:37
Enumerate and zip
12:11
Continue
07:24
Initializing variables
18:01
Single-line loops (list comprehension)
15:25
while loops
19:30
Broadcasting in numpy
15:41
Function error checking and handling
17:42

Python intro: Text and plots

7 lectures
Printing and string interpolation
17:18
Plotting dots and lines
12:55
Subplot geometry
16:10
Making the graphs look nicer
18:48
Seaborn
11:08
Images
17:59
Export plots in low and high resolution
07:58

Bonus section

1 lecture
Bonus content
01:03

Student reviews

No reviews yet
