Topic outline

  • Lecture 1

    Overview of the Statistical Learning lectures

    Motivation and first results on the concentration phenomenon

    Sub-Gaussian real-valued random variables

    Psi_2-norm and variance proxy

    Hoeffding's inequality (stated below)
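
    For reference, standard statements of the lecture's two central objects (a reminder in generic notation, not necessarily the lectures' exact formulation): a real-valued random variable X is sub-Gaussian with variance proxy sigma^2 when

      \[
        \mathbb{E}\, e^{\lambda (X - \mathbb{E} X)} \le e^{\lambda^2 \sigma^2 / 2}
        \quad \text{for all } \lambda \in \mathbb{R},
      \]

    and Hoeffding's inequality states that for independent X_1, ..., X_n with X_i \in [a_i, b_i] almost surely,

      \[
        \mathbb{P}\Big( \Big| \sum_{i=1}^{n} (X_i - \mathbb{E} X_i) \Big| \ge t \Big)
        \le 2 \exp\Big( -\frac{2 t^2}{\sum_{i=1}^{n} (b_i - a_i)^2} \Big)
        \quad \text{for all } t > 0.
      \]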


  • Lecture 2

    Follow-up on sub-Gaussian random variables

    Polynomial upper bound

    The Cramér-Chernoff principle and exponential moments (sketched below)

    Added value of concentration inequalities compared to deterministic upper bounds (bounded RVs)

    Sub-Exponential RVs with heavier tails
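
    The Cramér-Chernoff step referenced above is short enough to recall here (generic notation, as a reminder): for any \lambda > 0, Markov's inequality applied to the exponential moment gives

      \[
        \mathbb{P}(X \ge t) = \mathbb{P}\big( e^{\lambda X} \ge e^{\lambda t} \big)
        \le e^{-\lambda t}\, \mathbb{E}\, e^{\lambda X},
      \]

    and optimizing over \lambda > 0 yields the Chernoff bound

      \[
        \mathbb{P}(X \ge t) \le \exp\Big( -\sup_{\lambda > 0} \big( \lambda t - \log \mathbb{E}\, e^{\lambda X} \big) \Big).
      \]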

  • Lecture 3

    Examples of sub-Exponential random variables

    Connection between sub-Exponential and sub-Gaussian random variables

    Comparison between the best exponential and the best polynomial bound

    Bernstein's inequality (stated below) and a discussion of the shape of its tails, compared to sub-G and sub-E RVs
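
    For reference, a standard form of Bernstein's inequality (generic notation; the lecture's constants may differ): for independent zero-mean X_1, ..., X_n with |X_i| \le b almost surely,

      \[
        \mathbb{P}\Big( \sum_{i=1}^{n} X_i \ge t \Big)
        \le \exp\Big( -\frac{t^2 / 2}{\sum_{i=1}^{n} \mathbb{E} X_i^2 + b t / 3} \Big),
      \]

    which exhibits a sub-Gaussian tail for small t and a sub-Exponential tail for large t, matching the discussion above.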

  • Lecture 4

    Linear regression model:

    * Model interpretation

    * Quantifying the statistical performance

    * Empirical Risk Minimization (ERM) and the least-squares estimator (see the sketch after this list)

    * Upper bounding the estimation error (variance term)
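
    As a companion to the ERM item above, a minimal numerical sketch of the least-squares estimator in the linear regression model (the simulated data and all names are illustrative, not taken from the lecture):

      import numpy as np

      rng = np.random.default_rng(0)
      n, d = 200, 5

      # Illustrative data from the linear model y = X theta* + noise
      X = rng.standard_normal((n, d))
      theta_star = rng.standard_normal(d)
      y = X @ theta_star + 0.5 * rng.standard_normal(n)

      # ERM with the squared loss: minimize ||y - X theta||_2^2 over theta;
      # the minimizer is the least-squares estimator theta_hat
      theta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

      # Empirical estimation error ||theta_hat - theta*||_2 (variance term)
      print(np.linalg.norm(theta_hat - theta_star))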

  • Lecture 5

    Deriving the upper bound in expectation on the estimation error in the linear regression model, under sub-Gaussian assumptions (its shape is recalled below)

    Deriving the high-probability upper bound on the estimation error

    Comparing the above upper bound with what is known for optimization of a constrained function over a convex domain (L1, L2)
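
    For orientation, the classical shape of the in-expectation bound discussed here, stated up to constants and assuming a fixed design X \in \mathbb{R}^{n \times d} and centered sub-Gaussian noise with variance proxy \sigma^2 (the lecture's exact statement and constants may differ):

      \[
        \mathbb{E}\Big[ \frac{1}{n} \big\| X (\hat\theta - \theta^*) \big\|_2^2 \Big]
        \lesssim \frac{\sigma^2 d}{n}.
      \]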


  • Lecture 6

    Gaussian mixture models (GMMs) for density estimation

    GMM for clustering by means of the MAP rule

    How to estimate the unknown parameter vectors? (see the EM sketch below)
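
    The closing question is classically answered by the Expectation-Maximization (EM) algorithm; below is a minimal 1-D sketch, assuming EM is the estimation method the lecture has in mind (all data and names are illustrative):

      import numpy as np

      rng = np.random.default_rng(0)

      # Illustrative 1-D sample from a two-component Gaussian mixture
      x = np.concatenate([rng.normal(-2.0, 1.0, 150), rng.normal(3.0, 1.0, 100)])

      K = 2
      weights = np.full(K, 1.0 / K)   # mixture weights pi_k
      mu = rng.choice(x, K)           # component means
      var = np.full(K, x.var())       # component variances

      for _ in range(100):
          # E-step: responsibilities p(z_i = k | x_i) under current parameters
          dens = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
          resp = weights * dens
          resp /= resp.sum(axis=1, keepdims=True)
          # M-step: re-estimate weights, means and variances from responsibilities
          nk = resp.sum(axis=0)
          weights = nk / len(x)
          mu = (resp * x[:, None]).sum(axis=0) / nk
          var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk

      # MAP rule: assign each point to its most responsible component
      labels = resp.argmax(axis=1)
      print(weights, mu, labels[:10])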