Where'd My Gradient Go? It Vanished!
This video discusses the vanishing gradient problem, a significant challenge in training deep neural networks. The speaker explains how, as a network gets deeper, the gradients (the measures of how changes in each parameter affect the loss function) can shrink exponentially during backpropagation, leaving the early layers effectively frozen and unable to learn. The problem arises because common activation functions such as the sigmoid have small derivatives (at most 0.25), and these small factors multiply together across layers. The video then explores mitigations, including different activation functions like ReLU and architectural changes such as residual networks and LSTMs.
Watch the video: https://www.youtube.com/watch?v=ncTHBi8a9uA&pp=ygUSdmFuaXNoaW5nIGdyYWRpZW50
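To make the compounding effect concrete, here is a minimal NumPy sketch (not the speaker's code) that pushes a gradient backward through a stack of randomly initialized dense layers and compares how its norm decays with sigmoid versus ReLU derivatives. The depth, width, and weight initialization are arbitrary assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Derivatives of the two activations discussed in the episode.
sigmoid_grad = lambda z: sigmoid(z) * (1.0 - sigmoid(z))  # never exceeds 0.25
relu_grad = lambda z: (z > 0).astype(float)               # exactly 0 or 1

def backprop_norms(activation_grad, depth=30, width=64):
    """Propagate a gradient backward through `depth` dense layers and
    return its norm after each chain-rule step (toy illustration)."""
    grad = np.ones(width)  # gradient arriving from the loss
    norms = []
    for _ in range(depth):
        W = rng.normal(0.0, 1.0 / np.sqrt(width), size=(width, width))
        pre_act = rng.normal(size=width)  # stand-in pre-activation values
        grad = (W.T @ grad) * activation_grad(pre_act)  # one chain-rule step
        norms.append(np.linalg.norm(grad))
    return norms

for name, g in [("sigmoid", sigmoid_grad), ("relu", relu_grad)]:
    norms = backprop_norms(g)
    print(f"{name:8s} | near output: {norms[0]:.3e} | "
          f"30 layers back: {norms[-1]:.3e}")
```

Running the sketch typically shows the sigmoid gradient collapsing toward zero after a few dozen layers, while the ReLU gradient stays on a usable scale, which is the behavior the episode attributes to the bounded sigmoid derivative compounding during backpropagation.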