2. Linear Algebra

This entry is part 1 of 1 in the series Math for ML

This series of posts is intended to summarize the contents of, and note memorable insights from, the Mathematics for ML book. The book is『MATHEMATICS for MACHINE LEARNING, Marc Peter Deisenroth, A. Aldo Faisal and Cheng Soon Ong, Cambridge University Press. https://mml-book.com』. This post is about the 2nd chapter of …

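As a quick companion to this entry (my own sketch, not drawn from the post): chapter 2 of the book centers on systems of linear equations and on matrices as linear mappings. The snippet below solves a small system Ax = b with NumPy and checks the residual; the matrix and vector values are illustrative assumptions.

```python
import numpy as np

# Hypothetical 3x3 system Ax = b; the values are illustrative only.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])

x = np.linalg.solve(A, b)                 # solve the linear system directly
residual = np.linalg.norm(A @ x - b)      # ~0 up to floating-point error

print("solution x:", x)
print("residual ||Ax - b||:", residual)
```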

Note10. Nesterov Accelerated Gradient Descent

This entry is part 10 of 10 in the series ConvexOptimization

Note10. [PDF Link] The notes below were taken on my iPad Pro 3.0 and exported as PDF files. All contents are based on the “Optimization for AI (AI505)” lecture notes at KAIST. As supplements, the lecture notes of Martin Jaggi [link] and the “Convex Optimization” book by Sebastien Bubeck [link] were used. If you find any license issue, …

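For reference, one standard form of Nesterov's accelerated gradient method for an L-smooth convex objective is sketched below; this is a minimal illustration, not necessarily the exact parameterization used in the AI505 notes, and the quadratic objective is a made-up example.

```python
import numpy as np

# Illustrative L-smooth quadratic f(x) = 0.5 * x^T A x - b^T x (values are made up).
A = np.diag([1.0, 10.0])
b = np.array([1.0, 1.0])
L = np.linalg.eigvalsh(A).max()          # smoothness constant of f

def grad(x):
    return A @ x - b

x = y = np.zeros(2)
for t in range(1, 101):
    x_next = y - grad(y) / L                          # gradient step at the extrapolated point
    y = x_next + (t - 1) / (t + 2) * (x_next - x)     # momentum / extrapolation step
    x = x_next

x_star = np.linalg.solve(A, b)
print("distance to optimum:", np.linalg.norm(x - x_star))
```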

Note9. Mirror Descent

This entry is part 9 of 10 in the series ConvexOptimization

Note9. [PDF Link] The notes below were taken on my iPad Pro 3.0 and exported as PDF files. All contents are based on the “Optimization for AI (AI505)” lecture notes at KAIST. As supplements, the lecture notes of Martin Jaggi [link] and the “Convex Optimization” book by Sebastien Bubeck [link] were used. If you find any license issue, …

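As a reference sketch (my own illustration, not taken from the PDF): with the negative-entropy mirror map on the probability simplex, mirror descent reduces to a multiplicative, exponentiated-gradient update. The toy cost vector and step size below are assumptions.

```python
import numpy as np

# Entropic mirror descent on the probability simplex for a toy linear objective
# f(x) = c^T x; the minimizer puts all mass on argmin(c).
c = np.array([0.3, 0.1, 0.5, 0.2])

def grad(x):
    return c                              # gradient of the linear objective

x = np.full(4, 0.25)                      # start at the uniform distribution
eta = 0.5                                 # step size (illustrative choice)
for _ in range(200):
    x = x * np.exp(-eta * grad(x))        # multiplicative (exponentiated-gradient) step
    x /= x.sum()                          # renormalize back onto the simplex

print("final iterate:", np.round(x, 3))   # mass concentrates on the smallest cost entry
```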

Note8. Proximal Gradient Descent and Subgradient

This entry is part 8 of 10 in the series ConvexOptimization

Note8. [PDF Link] The notes below were taken on my iPad Pro 3.0 and exported as PDF files. All contents are based on the “Optimization for AI (AI505)” lecture notes at KAIST. As supplements, the lecture notes of Martin Jaggi [link] and the “Convex Optimization” book by Sebastien Bubeck [link] were used. If you find any license issue, …

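As a hedged illustration of the proximal gradient step (my own sketch, not the notes'): for a composite objective g(x) + λ‖x‖₁, the proximal operator of the ℓ1 term is soft-thresholding, which gives the classic ISTA iteration. The lasso data below are made up.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

# Toy lasso problem: min_x 0.5*||Ax - b||^2 + lam*||x||_1 (data are made up).
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = A @ np.array([1.0, 0.0, -2.0, 0.0, 0.0]) + 0.01 * rng.standard_normal(20)
lam = 0.5
L = np.linalg.norm(A, 2) ** 2             # Lipschitz constant of the smooth part's gradient

x = np.zeros(5)
for _ in range(500):
    grad_smooth = A.T @ (A @ x - b)                    # gradient of the smooth term
    x = soft_threshold(x - grad_smooth / L, lam / L)   # proximal (ISTA) step

print("sparse estimate:", np.round(x, 3))
```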

Note7. Lagrange Dual

This entry is part 7 of 10 in the series ConvexOptimization

Note7. [PDF Link] The notes below were taken on my iPad Pro 3.0 and exported as PDF files. All contents are based on the “Optimization for AI (AI505)” lecture notes at KAIST. As supplements, the lecture notes of Martin Jaggi [link] and the “Convex Optimization” book by Sebastien Bubeck [link] were used. If you find any license issue, …

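For a concrete (made-up) example of duality: take min x² subject to x ≥ 1. The Lagrangian is L(x, λ) = x² + λ(1 − x), minimizing over x gives the dual function g(λ) = λ − λ²/4, and maximizing over λ ≥ 0 recovers the primal value 1 at λ* = 2. The sketch below just checks this numerically.

```python
import numpy as np

# Primal: minimize x^2 subject to x >= 1  (optimal value 1 at x = 1).
# Lagrangian: L(x, lam) = x^2 + lam * (1 - x); minimizing over x gives x = lam/2,
# so the dual function is g(lam) = lam - lam^2 / 4.

def dual(lam):
    return lam - lam ** 2 / 4.0

lams = np.linspace(0.0, 4.0, 401)
best = lams[np.argmax(dual(lams))]

print("dual maximizer lambda* ~", best)        # ~2.0
print("dual optimal value    ~", dual(best))   # ~1.0, matching the primal optimum
```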

Note6. Projected Gradient Descent

This entry is part 6 of 10 in the series ConvexOptimization

Note6. [PDF Link] The notes below were taken on my iPad Pro 3.0 and exported as PDF files. All contents are based on the “Optimization for AI (AI505)” lecture notes at KAIST. As supplements, the lecture notes of Martin Jaggi [link] and the “Convex Optimization” book by Sebastien Bubeck [link] were used. If you find any license issue, …

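A minimal sketch (my illustration, under assumed data) of the projected gradient step x ← Π_C(x − η∇f(x)), here with C taken as the Euclidean unit ball so the projection has a closed form.

```python
import numpy as np

def project_unit_ball(x):
    """Euclidean projection onto the unit ball {x : ||x|| <= 1}."""
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

# Toy objective f(x) = ||x - z||^2 with z outside the ball (values are made up),
# so the constrained minimizer is z / ||z|| on the boundary.
z = np.array([2.0, 2.0])
eta = 0.1

x = np.zeros(2)
for _ in range(200):
    grad = 2.0 * (x - z)                      # gradient of the smooth objective
    x = project_unit_ball(x - eta * grad)     # gradient step followed by projection

print("constrained minimizer ~", x, "expected ~", z / np.linalg.norm(z))
```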

Note5. Convergence Analysis

This entry is part 5 of 10 in the series ConvexOptimization

Note5. [PDF Link] The notes below were taken on my iPad Pro 3.0 and exported as PDF files. All contents are based on the “Optimization for AI (AI505)” lecture notes at KAIST. As supplements, the lecture notes of Martin Jaggi [link] and the “Convex Optimization” book by Sebastien Bubeck [link] were used. If you find any license issue, …

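As a rough numerical companion (my own check, not from the PDF): for an L-smooth convex f and step size 1/L, plain gradient descent satisfies the classical bound f(x_t) − f* ≤ L‖x₀ − x*‖² / (2t). The snippet verifies the bound on a made-up quadratic.

```python
import numpy as np

# Made-up L-smooth quadratic f(x) = 0.5 * x^T A x, minimized at the origin.
A = np.diag([1.0, 4.0, 9.0])
L = 9.0                                        # largest eigenvalue = smoothness constant

def f(x):
    return 0.5 * x @ A @ x

x0 = np.array([1.0, 1.0, 1.0])
f_star = 0.0

x = x0.copy()
for t in range(1, 51):
    x = x - (A @ x) / L                        # gradient descent with step 1/L
    bound = L * np.linalg.norm(x0) ** 2 / (2 * t)
    assert f(x) - f_star <= bound + 1e-12      # classical O(1/t) guarantee holds

print("after 50 steps: gap =", f(x) - f_star, " bound =", bound)
```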

Note4. Recap Note1~2

This entry is part 4 of 10 in the series ConvexOptimization

Note4. [PDF Link] The notes below were taken on my iPad Pro 3.0 and exported as PDF files. All contents are based on the “Optimization for AI (AI505)” lecture notes at KAIST. As supplements, the lecture notes of Martin Jaggi [link] and the “Convex Optimization” book by Sebastien Bubeck [link] were used. If you find any license issue, …


Note3. Convex Optimization Problem 02-2.

This entry is part 3 of 10 in the series ConvexOptimization

Note3. [PDF Link] The notes below were taken on my iPad Pro 3.0 and exported as PDF files. All contents are based on the “Optimization for AI (AI505)” lecture notes at KAIST. As supplements, the lecture notes of Martin Jaggi [link] and the “Convex Optimization” book by Sebastien Bubeck [link] were used. If you find any license issue, …


AI505 Paper List for Sharing

1. ADAM: A Method for Stochastic Optimization. https://arxiv.org/pdf/1412.6980.pdf
2. SVRG. https://papers.nips.cc/paper/4937-accelerating-stochastic-gradient-descent-using-predictive-variance-reduction.pdf
3. SGD: General Analysis and Improved Rates. https://arxiv.org/pdf/1901.09401.pdf
4. A Closer Look at Deep Learning Heuristics: Learning Rate Restarts, Warmup and Distillation. https://openreview.net/pdf?id=r14EOsCqKX
5. QSGD: Communication-Efficient SGD via Gradient Quantization and Encoding. https://arxiv.org/abs/1610.02132
6. SAGA: A Fast Incremental Gradient Method With Support for Non-Strongly …

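Since the first paper on the list is Adam, a hedged sketch of its update rule (following Kingma and Ba, with the usual default hyperparameters; the toy objective below is my own) may be a useful companion.

```python
import numpy as np

def adam_step(x, g, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: exponential moment estimates plus bias correction."""
    m = b1 * m + (1 - b1) * g                  # first-moment (mean) estimate
    v = b2 * v + (1 - b2) * g ** 2             # second-moment (uncentered variance) estimate
    m_hat = m / (1 - b1 ** t)                  # bias-corrected moments
    v_hat = v / (1 - b2 ** t)
    x = x - lr * m_hat / (np.sqrt(v_hat) + eps)
    return x, m, v

# Toy usage on f(x) = ||x||^2 (illustrative only).
x = np.array([1.0, -2.0])
m = v = np.zeros_like(x)
for t in range(1, 2001):
    g = 2.0 * x                                # gradient of the toy objective
    x, m, v = adam_step(x, g, m, v, t, lr=0.05)

# On this deterministic toy problem Adam hovers within roughly lr of the minimizer.
print("final iterate (close to 0):", x)
```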