• Instructor: Parimala Kancharla (office: A17.03.06, email: parimala@iitmandi.ac.in)
  • Office Hours: Friday 10:00am-1:00pm (or by appointment)
  • Discussion Forum: Slack (To Join)
  • Class Venue: A17-1B
  • Class Timings: F-slot
  • TAs: Anika Srivasthava, Reshu Bansal, Siddharth Shakhya, Shubham Patwar
  • Resources

    Topic || Slides and Notes || External References
    Lecture1- Introduction Slides || Lecture1- PCA
    Lecture2- Mathematical Concepts Slides || Notes
    1. The Hessian matrix: Eigenvalues, concavity, and curvature
    2. Calculus
    Lecture3- Jacobian and Hessian Matrices Notes || Chapters 2 & 4: Deep Learning
    Lecture4- Convexity and Convex Functions Slides || Notes
    1. Chapter 2&3 - Convex Optimization
    2. Chapter 1 - MIT Textbook-Convex Optimization Theory
    3. Practice Problems-I
    4. Practice Problems-II
    5. Practice Problems-III
    Lectures 5 & 6- Taylor Approximation and Hessian Matrix Notes
    1. First and Second order test
    2. External Slides
    Lecture7- Subgradients-I Notes
    1. Sub-Gradients-I
    2. Sub-Gradients-II
    Lecture8- Subgradients-II Notes
    1. Subgradient Notes External Material-I
    2. Subgradient Notes External Material-II
    3. Practice Problems-Sub Gradients
    4. Chapter 5.4-MIT Textbook- Convex Optimization Theory
    Tutorial-I link
    Lecture9- Gradient Descent - Line Search Notes
    1. Exact line search and backtracking
    2. Slides from Prof. Boyd - Backtracking Line Search and Exact Line Search
    Lecture10- Gradient Descent - Backtracking Notes (code sketches for the line-search and momentum methods follow this resource list)
    1. Backtracking Line Search
    Lecture11- Gradient Descent - Continued Notes
    1. Reference Slides
    2. Textbook-Optimization for Machine Learning
    3. External Material - Gradient Descent
    Tutorial-II link
    Lecture12- Gradient Descent with Momentum-I Notes
    1. Research Paper- GD with Momentum and NAG
    Lecture13- GD with Momentum and Nesterov Accelerated Gradient Descent Method Notes || Convergence Analysis
    1. https://distill.pub/2017/momentum/
    2. External Slides - Summary of Optimizers
    Lecture14- Stochastic Gradient Descent Method Notes
    1. Summary of Optimizers
    2. External Slides - Summary of Optimizers
    Lecture15- Adagrad, RMSProp and ADAM Notes
    1. https://www.ruder.io/optimizing-gradient-descent/
    2. External Slides - Summary of Optimizers
    3. ADAM Original Paper
    4. Geoffrey Hinton's Slides on RMSProp
    Tutorial-III link
    Lecture16- Bias Correction in ADAM Notes
    1. Unbiased estimate for ML estimation for Gaussian
    2. Derivation of variance estimate of Maximum likelihood for Gaussian
    3. Chapter 8 - Deep Learning Textbook by Ian Goodfellow
    Lecture17- Projected Gradient Descent Notes
    1. Reference
    Lecture18- Subgradient Method Notes
    1. Subgradient Method
    Lectures 19, 20 and 21- Application of Optimization Techniques - Adversarial Samples Notes
    1. Slides
    2. A Discussion of 'Adversarial Examples Are Not Bugs, They Are Features'
    3. External Slides
    Lecture22- Second Order Gradient Descent Method-I Notes || Notes
    Lecture23- Second Order Gradient Descent Method-II Notes
    Lecture24- Linear Programming-I Notes || Notes || External Material
    Lecture23- Duality in Linear Programming-II Notes
    1. External Material
    2. External Material
    Lecture24- Examples for Linear Programming Notes
    1. External Material
    2. External Material
    Lecture25- Lagrange Multipliers Notes
    1. External Material
    2. External Material
    Lecture26- SVM Dual Formulation Notes || Notes
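
    A minimal sketch of gradient descent with backtracking (Armijo) line search, for quick reference alongside Lectures 9-11. The function names, the parameters alpha and beta, and the quadratic test problem are illustrative assumptions, not code distributed with the course.

    ```python
    import numpy as np

    def backtracking_line_search(f, grad_f, x, direction, alpha=0.3, beta=0.8, t0=1.0):
        """Shrink the step size t until the Armijo sufficient-decrease condition holds."""
        t = t0
        g = grad_f(x)
        while f(x + t * direction) > f(x) + alpha * t * g @ direction:
            t *= beta
        return t

    def gradient_descent(f, grad_f, x0, tol=1e-8, max_iter=1000):
        """Gradient descent that picks each step size by backtracking line search."""
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            g = grad_f(x)
            if np.linalg.norm(g) < tol:
                break
            d = -g  # steepest-descent direction
            t = backtracking_line_search(f, grad_f, x, d)
            x = x + t * d
        return x

    # Illustrative convex quadratic f(x) = x^T A x + b^T x, minimized where 2Ax + b = 0
    A = np.array([[3.0, 0.5], [0.5, 1.0]])
    b = np.array([-1.0, 2.0])
    f = lambda x: x @ A @ x + b @ x
    grad_f = lambda x: 2 * A @ x + b
    print(gradient_descent(f, grad_f, x0=[5.0, -5.0]))
    ```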
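
    And a compact sketch of the momentum and Nesterov accelerated gradient (NAG) updates covered in Lectures 12-13; again, the learning rate, momentum coefficient, and test problem are assumptions chosen only for illustration.

    ```python
    import numpy as np

    def gd_momentum(grad_f, x0, lr=0.1, beta=0.9, nesterov=False, n_iter=500):
        """Heavy-ball momentum; with nesterov=True the gradient is evaluated
        at the look-ahead point x + beta * v (Nesterov accelerated gradient)."""
        x = np.asarray(x0, dtype=float)
        v = np.zeros_like(x)
        for _ in range(n_iter):
            lookahead = x + beta * v if nesterov else x
            v = beta * v - lr * grad_f(lookahead)  # velocity update
            x = x + v                              # parameter update
        return x

    # Same illustrative quadratic as in the previous sketch
    A = np.array([[3.0, 0.5], [0.5, 1.0]])
    b = np.array([-1.0, 2.0])
    grad_f = lambda x: 2 * A @ x + b
    print(gd_momentum(grad_f, x0=[5.0, -5.0]))                 # classical momentum
    print(gd_momentum(grad_f, x0=[5.0, -5.0], nesterov=True))  # NAG
    ```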

  • Evaluation

  • Assignments - 25%
  • Project - 15% Guidelines
  • Midsem - 30%
  • Endsem - 30%