
Optimization-SteepestDescent vs GradientDescent

Understand the working of the Steepest Descent and Gradient Descent methods.
We will work with a quadratic function in two variables, because higher-dimensional problems are harder to plot and visualize.

Problem Statement :-

cost_func_f(x) = (1/2)*x.T @ Q @ x + b.T @ x + c, i.e. (5/2)*X**2 + Y**2 + X*Y - 2*X + 3*Y + 10, where x = [[X], [Y]]

```python
Q = np.array([[5, 1], [1, 2]])
b = np.array([[-2], [3]])
c = 10
x = np.array([[10], [10]])
```

Find the point x such that cost_func_f(x) is minimized.
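
As a reference, here is a minimal NumPy sketch of the cost function and its gradient (the names `f`, `grad_f` and `x0` are illustrative, not taken from the repository's code):

```python
import numpy as np

# Problem data from the statement above
Q = np.array([[5., 1.], [1., 2.]])
b = np.array([[-2.], [3.]])
c = 10.0
x0 = np.array([[10.], [10.]])  # starting point

def f(x):
    """Quadratic cost: (1/2) x^T Q x + b^T x + c."""
    return (0.5 * x.T @ Q @ x + b.T @ x + c).item()

def grad_f(x):
    """Gradient of the quadratic cost: Q x + b."""
    return Q @ x + b

print(f(x0))               # 470.0 at the starting point
print(grad_f(x0).ravel())  # gradient at the starting point
```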

Different Approaches:

  • Steepest Descent
  • Gradient Descent

1. Steepest Descent:

Here the learning rate (step size) is determined by the algorithm itself rather than chosen as a heuristic value.

a) Base problem

The Steepest Descent method is also known as the saddle method. We move along a chosen direction until the cost function stops decreasing along it [i.e. the directional derivative ∇f(x)·dx1 = 0, where dx1 is that direction; the cost function may still decrease along other directions]. We then pick a new direction, say dx2. Each new direction is perpendicular to the previous one (dx1 ⊥ dx2). The point where we stop along each direction is the point of contact with a contour line, i.e. where that direction is tangent to the contour. A sketch of this procedure follows below.
#iterations: 13
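
A minimal sketch of this procedure, continuing from the snippet in the problem statement (it reuses `np`, `Q`, `grad_f` and `x0`). For our quadratic, minimizing f(x - a*g) over a gives the exact step a = (gᵀg)/(gᵀQg); `tol` and `max_iter` are illustrative choices, not values from the repository:

```python
def steepest_descent(x, tol=1e-8, max_iter=1000):
    """Steepest descent with an exact line search along -grad_f(x)."""
    for k in range(max_iter):
        g = grad_f(x)
        if np.linalg.norm(g) < tol:
            return x, k
        alpha = (g.T @ g).item() / (g.T @ Q @ g).item()  # exact step along -g
        x = x - alpha * g
    return x, max_iter

x_min, iters = steepest_descent(x0)
print(x_min.ravel(), iters)
```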

b) Steepest Descent with Newton's Method

In Newton's method we approximate the cost function by a quadratic that behaves like the original function in a small neighbourhood, then jump to the optimal point of that quadratic approximation. This process is repeated until a stopping criterion is met. [Note: since our original function is itself quadratic, the approximation is the function itself, so we reach its optimal point in one iteration.]
#iterations: 1
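
A minimal sketch of a single Newton step, again reusing `np`, `Q`, `grad_f` and `x0` from above. The Hessian of our cost is exactly Q, so one step x0 - Q⁻¹∇f(x0) lands on the minimizer:

```python
# One Newton step from the starting point
x_star = x0 - np.linalg.solve(Q, grad_f(x0))
print(x_star.ravel())                     # minimizer of the quadratic
print(np.allclose(grad_f(x_star), 0.0))   # True: the gradient vanishes here
```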

2. Gradient Descent:

Here the learning rate (step size) is a heuristic value, so we must be very careful about what we choose. Too large a step size can cause problems and we may never converge to the optimal point. [In general the learning rate should satisfy lr < 2 / λmax(Hf(x)); for a quadratic cost Hf(x) = Q, so in our problem 2 / λmax(Q) = 0.3771609692315777.]
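
This bound can be reproduced with a couple of lines of NumPy (a standalone check, not code from the repository):

```python
import numpy as np

Q = np.array([[5., 1.], [1., 2.]])
lam_max = np.linalg.eigvalsh(Q).max()  # eigvalsh: Q is symmetric
print(2.0 / lam_max)                   # ≈ 0.3771609692315777
```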

a) Gradient Descent with learning rate `0.1`

Since the learning rate is well below 0.3771609692315777, it converges nicely.
#iterations: 86
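
A minimal sketch of fixed-step gradient descent, reusing `np`, `grad_f` and `x0` from the snippet in the problem statement; `tol` and `max_iter` are illustrative choices, so the exact iteration count depends on the stopping criterion used:

```python
def gradient_descent(x, lr, tol=1e-8, max_iter=500_000):
    """Gradient descent with a fixed learning rate lr."""
    for k in range(max_iter):
        g = grad_f(x)
        if np.linalg.norm(g) < tol:
            return x, k
        x = x - lr * g
    return x, max_iter

x_min, iters = gradient_descent(x0, lr=0.1)
print(x_min.ravel(), iters)
```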

b) Gradient Descent with learning rate `0.3771609692315777 - 0.00002`

Since the learning rate is smaller than but very close to 0.3771609692315777, it converges only after a lot of oscillation and therefore takes many more iterations.
#iterations: 163093

c) Gradient Descent with learning rate `0.3771609692315777 + 0.00002`

Since the learning rate is larger than 0.3771609692315777, the iterates keep oscillating and never converge.
#iterations: NA

Let's put all our findings in one table:

| no. | algorithm | #iterations |
| --- | --- | --- |
| 1.a | Steepest Descent (base method) | 13 |
| 1.b | Steepest Descent (Newton's method) | 1 |
| 2.a | Gradient Descent (lr = 0.1) | 86 |
| 2.b | Gradient Descent (lr = 0.3771609692315777 - 0.00002) | 163093 |
| 2.c | Gradient Descent (lr = 0.3771609692315777 + 0.00002) | NA |
