## Note1. Convex Optimization Problem 01.

This entry is part 1 of 10 in the series ConvexOptimization

Note1. [PDF Link] The notes below were taken on my iPad Pro 3.0 and exported to PDF files. All content is based on the “Optimization for AI (AI505)” lecture notes at KAIST. As supplements, lecture notes from Martin Jaggi [link] and the “Convex Optimization” book by Sebastien Bubeck [link] were used. If you find any license issue, …

## ICCV2019 in Seoul Review

From 27 Oct to 2 Nov, ICCV 2019 was held at COEX, Seoul. [Official webpage link]: http://iccv2019.thecvf.com/ I summarize the programs I attended below. SUN 27 OCT Tutorial: “Interpretable Machine Learning for Computer Vision“ [link]: https://interpretablevision.github.io/ 1. Andrea Vedaldi: Understanding Models via Visualization and Attribution – Generating conic examples > Inversion vs activation maximization > The importance …

## MIT 18.06 Linear Algebra – Basics (Lec01~Lec03)

I’ve written this post based on the ‘Linear Algebra’ lectures (MIT 18.06, lectures 01–03) by Gilbert Strang. This post is intended for anyone who needs to learn the basics of linear algebra. (YouTube link for Strang’s Linear Algebra: here.) For this post, I used Microsoft OneNote in place of the original handwritten notes. So, …

## ISLR chapter 03. Linear Regression – 3.1 Simple Linear Regression

Below I’ve quoted some paragraphs directly from pages 59–70 of the ISLR book. Recall the Advertising data from Chapter 2. The figure below displays sales for a particular product as a function of advertising budgets for TV, radio, and newspaper media. Suppose that in our role as statistical consultants we are asked …
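The setting above is exactly simple linear regression: fit sales ≈ β0 + β1 · TV by least squares. As a minimal sketch of the closed-form estimates (the budget and sales numbers below are made-up illustration values, not the actual Advertising data):

```python
# Made-up illustration data: TV budget vs. sales.
xs = [10.0, 50.0, 100.0, 200.0, 300.0]   # TV advertising budget
ys = [5.0, 9.0, 12.0, 18.0, 24.0]        # product sales

n = len(xs)
x_bar = sum(xs) / n
y_bar = sum(ys) / n

# Closed-form least-squares estimates for y = b0 + b1 * x:
# b1 = Sxy / Sxx, b0 = y_bar - b1 * x_bar
b1 = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) / \
     sum((x - x_bar) ** 2 for x in xs)
b0 = y_bar - b1 * x_bar
print(b0, b1)
```

A standard sanity check is that the least-squares residuals sum to zero, which follows from the normal equations.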

## ISLR chapter 02. Statistical Learning

2.1 What Is Statistical Learning? X: input variables; predictors; independent variables. Y: output variable; response; dependent variable. Suppose that we observe a quantitative response Y and p different predictors, X1, X2, …, Xp. We assume that there is some relationship between Y and X = (X1, X2, …, Xp), which can be written in the very general form Y = f(X) + ε.   2.1.1 Why Estimate f? ① Prediction We can …

## ISLR chapter 01. Introduction

A Brief History of Statistical Learning. Though the term statistical learning is fairly new, many of the concepts that underlie the field were developed long ago. At the beginning of the nineteenth century, Legendre and Gauss published papers on the method of least squares, which implemented the earliest form of what is now known as linear regression. The approach was first successfully applied to problems in astronomy. Linear …

## Probability and likelihood learned from On-Base Percentage

The previous post dealt with likelihood. Let’s take a look at some interesting examples introduced in “Major League Baseball statistics” (a book in Korean). The R Markdown can be found on GitHub, and this post partially quotes chapter 4 of the book mentioned above.   1. The probability of getting on base twice …
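The excerpt cuts off before the computation, but the kind of question it poses — the chance that a batter reaches base a given number of times — is a binomial calculation. A minimal sketch (the OBP of 0.35 and the four plate appearances are hypothetical numbers for illustration, not taken from the book):

```python
from math import comb

def binom_pmf(k, n, p):
    """P(exactly k successes in n independent trials, each with success prob p)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Hypothetical batter with on-base percentage 0.35 and 4 plate appearances:
# probability of reaching base exactly twice.
p_two = binom_pmf(2, 4, 0.35)
print(round(p_two, 4))  # → 0.3105
```

This treats each plate appearance as an independent trial with a fixed on-base probability, which is the usual simplifying assumption behind such examples.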

## Likelihood

Likelihood? It’s a very confusing concept to me, very difficult to understand at a glance. I looked up the definition on Wikipedia and sought out some blogs that explain it. First of all, I referred to sw4r’s blog, which contains a huge amount of numerical statistics. And I found the definition of ‘likelihood’ on Wikipedia, which was really …

## Independence of Events and Conditional Probability

We go over the definitions of independence of events and conditional probability. In addition, we will deal with sampling with and without replacement.   1. Independence of events The necessary and sufficient condition for two events A and B to be independent of each other is P(A ∩ B) = P(A) · P(B) ⇔ P(B ∩ A) = …
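The condition P(A ∩ B) = P(A) · P(B) can be verified by brute-force enumeration. A minimal sketch (the two-dice events below are my own illustration, not from the original post):

```python
from fractions import Fraction

# Enumerate all 36 equally likely outcomes of rolling two fair dice.
outcomes = [(i, j) for i in range(1, 7) for j in range(1, 7)]

def prob(event):
    """Probability of an event (a predicate on outcomes) under the uniform measure."""
    return Fraction(sum(1 for o in outcomes if event(o)), len(outcomes))

A = lambda o: o[0] % 2 == 0       # first die is even
B = lambda o: o[1] > 4            # second die shows 5 or 6
AB = lambda o: A(o) and B(o)      # intersection A ∩ B

# Independence check: P(A ∩ B) = P(A) * P(B)
print(prob(AB) == prob(A) * prob(B))  # → True
```

Here P(A) = 1/2 and P(B) = 1/3, and indeed P(A ∩ B) = 6/36 = 1/6, so the two events are independent; an event tied to both dice, such as “the sum is 7,” would generally fail this check against A.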

## The reason why red ball example is not suitable for explaining independence in events

There were some errors in this blog. I am very sorry about that. I have corrected my mistakes; please let me know if you find anything wrong in later posts. In the previous post I explained independent trials with two red balls in sampling. If we assume that there are seven black balls …
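The point the post builds toward can be checked directly: when drawing without replacement, the draws are not independent. A minimal sketch, assuming an urn of 2 red and 7 black balls as the excerpt suggests (the exact counts are my reading of the truncated text):

```python
from fractions import Fraction
from itertools import permutations

# Hypothetical urn: 2 red ('R') and 7 black ('B') balls, drawn without replacement.
balls = ['R'] * 2 + ['B'] * 7

# All ordered pairs of distinct draws are equally likely.
pairs = list(permutations(range(len(balls)), 2))

def prob(event):
    """Probability of an event (a predicate on draw pairs) under the uniform measure."""
    return Fraction(sum(1 for p in pairs if event(p)), len(pairs))

first_red = lambda p: balls[p[0]] == 'R'
second_red = lambda p: balls[p[1]] == 'R'
both_red = lambda p: first_red(p) and second_red(p)

# Without replacement, P(both red) != P(first red) * P(second red):
print(prob(both_red))                      # → 1/36
print(prob(first_red) * prob(second_red))  # → 4/81
```

Since (2/9)·(1/8) = 1/36 but (2/9)·(2/9) = 4/81, the product rule fails and the two draws are dependent; with replacement the product rule would hold.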