From the course: Deep Learning: Getting Started
Gradient descent
- Gradient descent is the process of repeating forward and backward propagation in order to reduce the error and move closer to the desired model. To recollect, one run of forward propagation predicts the outcomes based on the current weights and biases. We compute the error using a cost function. We then use back propagation to propagate the error back through the network and adjust the weights and biases. This is one pass of learning. We now repeat this pass again and again; as the weights and biases get refined, the error gets reduced. This is called gradient descent. In gradient descent, we repeat the learning cycle of forward propagation, estimating the error, backward propagation, and adjusting the weights and biases. As we do this, the overall error estimated by the cost function will oscillate and start moving closer to zero. We keep measuring the error and computing the deltas that would minimize the error contribution of individual nodes. There are situations where the error will stop…
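To make the cycle concrete, here is a minimal sketch of gradient descent for a single neuron (a linear model with one weight and one bias) trained against a mean squared error cost. The toy data, the learning rate of 0.1, the step count, and the variable names are illustrative assumptions, not code from the course.

```python
import numpy as np

# Toy data: inputs x and targets y that roughly follow y = 3x + 1.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=50)
y = 3.0 * x + 1.0 + rng.normal(scale=0.1, size=50)

# Initialize a single weight and bias (assumed starting values).
w, b = 0.0, 0.0
learning_rate = 0.1  # assumed step size

for step in range(200):
    # Forward propagation: predict outcomes from current weight and bias.
    y_pred = w * x + b

    # Cost function: mean squared error between predictions and targets.
    error = y_pred - y
    cost = np.mean(error ** 2)

    # Backward propagation: gradients of the cost w.r.t. w and b.
    grad_w = 2.0 * np.mean(error * x)
    grad_b = 2.0 * np.mean(error)

    # Adjust the weight and bias a small step against the gradient.
    w -= learning_rate * grad_w
    b -= learning_rate * grad_b

print(f"learned w={w:.2f}, b={b:.2f}, final cost={cost:.4f}")
```

Each loop iteration is one pass of learning as described above: forward propagation, measuring the error with the cost function, back propagation of the gradients, and a small adjustment of the weight and bias. Over repeated passes the cost shrinks toward zero.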
Contents
- Setup and initialization (2m 43s)
- Forward propagation (1m 14s)
- Measuring accuracy and error (2m 12s)
- Back propagation (2m 8s)
- Gradient descent (1m 21s)
- Batches and epochs (2m 22s)
- Validation and testing (1m 28s)
- An ANN model (1m 39s)
- Reusing existing network architectures (2m 33s)
- Using available open-source models (2m 27s)