Note10. Nesterov Accelerated Gradient Descent

This entry is part 10 of 10 in the series ConvexOptimization

Note10. [PDF Link] The notes below were taken on my iPad Pro (3rd generation) and exported to PDF files. All content is based on the “Optimization for AI (AI505)” lecture notes at KAIST. As supplements, the lecture notes of Martin Jaggi [link] and the book “Convex Optimization” by Sébastien Bubeck [link] were used. If you will …

Note9. Mirror Descent

This entry is part 9 of 10 in the series ConvexOptimization

Note9. [PDF Link]

Note8. Proximal Gradient Descent and Subgradient

This entry is part 8 of 10 in the series ConvexOptimization

Note8. [PDF Link]

Note7. Lagrange Dual

This entry is part 7 of 10 in the series ConvexOptimization

Note7. [PDF Link]

Note6. Projected Gradient Descent

This entry is part 6 of 10 in the series ConvexOptimization

Note6. [PDF Link]

Note5. Convergence Analysis

This entry is part 5 of 10 in the series ConvexOptimization

Note5. [PDF Link]

Note4. Recap Note1~2

This entry is part 4 of 10 in the series ConvexOptimization

Note4. [PDF Link]

Note3. Convex Optimization Problem 02-2.

This entry is part 3 of 10 in the series ConvexOptimization

Note3. [PDF Link]

AI505 Paper list for share

1. ADAM: A Method for Stochastic Optimization (https://arxiv.org/pdf/1412.6980.pdf)
2. SVRG: Accelerating Stochastic Gradient Descent Using Predictive Variance Reduction (https://papers.nips.cc/paper/4937-accelerating-stochastic-gradient-descent-using-predictive-variance-reduction.pdf)
3. SGD: General Analysis and Improved Rates (https://arxiv.org/pdf/1901.09401.pdf)
4. A Closer Look at Deep Learning Heuristics: Learning Rate Restarts, Warmup and Distillation (https://openreview.net/pdf?id=r14EOsCqKX)
5. QSGD: Communication-Efficient SGD via Gradient Quantization and Encoding (https://arxiv.org/abs/1610.02132)
6. SAGA: A Fast Incremental Gradient Method With Support for Non-Strongly …

Note2. Convex Optimization Problem 02.

This entry is part 2 of 10 in the series ConvexOptimization

Note2. [PDF Link]