Reviewing Stanford on Linear Regression and Gradient Descent
This lecture from Stanford University's CS229 course, "Machine Learning," covers the theory and practice of linear regression and gradient descent, two foundational machine learning algorithms. The lecture begins by motivating linear regression as a simple supervised learning algorithm for regression problems, where the goal is to predict a continuous output from a set of input features. It then introduces the least-squares cost function, which measures the sum of squared errors between the predicted outputs and the true outputs. Gradient descent, an iterative algorithm, is explained as a method for finding the parameters that minimize this cost function. Two variants, batch gradient descent (which uses the entire training set for each update) and stochastic gradient descent (which updates after each individual example), are compared along with their respective strengths and weaknesses. The lecture concludes with a derivation of the normal equations, an alternative approach that finds the optimal parameters by solving a system of linear equations rather than by iterative updates.
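To make the ideas concrete, here is a minimal sketch, assuming NumPy and toy synthetic data; the function names, learning rates, and dataset below are illustrative choices, not taken from the lecture. It fits a linear model three ways: batch gradient descent, which applies the update θ := θ − α ∇J(θ) to the full training set each step, where J(θ) = ½ Σᵢ (θᵀx⁽ⁱ⁾ − y⁽ⁱ⁾)²; stochastic gradient descent, which updates after each individual example; and the closed-form normal equations θ = (XᵀX)⁻¹Xᵀy.

```python
# Illustrative sketch (not code from the lecture): linear regression fit three ways.
# All names, learning rates, and the toy dataset below are hypothetical.
import numpy as np

def predict(X, theta):
    """Hypothesis h_theta(x) = theta^T x for a design matrix X of shape (n_samples, n_features)."""
    return X @ theta

def cost(X, y, theta):
    """Least-squares cost J(theta) = 1/2 * sum((h_theta(x_i) - y_i)^2)."""
    residual = predict(X, theta) - y
    return 0.5 * residual @ residual

def batch_gradient_descent(X, y, lr=0.01, n_iters=5000):
    """Each step uses the gradient of J computed over the entire training set."""
    theta = np.zeros(X.shape[1])
    for _ in range(n_iters):
        grad = X.T @ (predict(X, theta) - y) / len(y)  # mean full-batch gradient
        theta -= lr * grad
    return theta

def stochastic_gradient_descent(X, y, lr=0.001, n_epochs=50, seed=0):
    """Each step uses the gradient from a single randomly chosen training example."""
    rng = np.random.default_rng(seed)
    theta = np.zeros(X.shape[1])
    for _ in range(n_epochs):
        for i in rng.permutation(len(y)):
            grad_i = (X[i] @ theta - y[i]) * X[i]  # gradient contribution of example i
            theta -= lr * grad_i
    return theta

def normal_equations(X, y):
    """Closed-form solution of X^T X theta = X^T y; no iteration required."""
    return np.linalg.solve(X.T @ X, X.T @ y)

if __name__ == "__main__":
    # Toy data: y = 2*x + 1 plus noise, with a bias column prepended to X.
    rng = np.random.default_rng(0)
    x = rng.uniform(0, 10, size=200)
    y = 2.0 * x + 1.0 + rng.normal(scale=0.5, size=200)
    X = np.column_stack([np.ones_like(x), x])

    print("batch GD:  ", batch_gradient_descent(X, y))
    print("SGD:       ", stochastic_gradient_descent(X, y))
    print("normal eq: ", normal_equations(X, y))
```

A common way to contrast the three: batch gradient descent is stable but touches every example per update; stochastic gradient descent is noisier but cheap per step and scales to large datasets; the normal equations skip iteration entirely but require solving a system in the number of features, which becomes costly when that number is large.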
Watch Andrew Ng teach it at Stanford: https://www.youtube.com/watch?v=4b4MUYve_U8&t=1086s&pp=ygUSdmFuaXNoaW5nIGdyYWRpZW50