Pattern recognition -- TIP8311 -- 2015.2

Instructor:
Francesco (F) (francesco dot corona at aalto dot fi)

Teaching assistants:
Rafael (R) (rafael dot de dot oliveira dot e dot lima at gmail dot com)
Edmilson (E) (eqfilho at sfiec dot org dot br)

Lecture times: Tuesdays and Thursdays, 2-4 pm
Lecture place: Bloco 726, Sala 14

Material: Course slides will suffice.

The material in the slides can be complemented with material from the following textbooks (the list is not exhaustive).
  1. Pattern Recognition and Machine Learning
    Christopher M. Bishop
  2. Machine Learning: A Probabilistic Perspective
    Kevin P. Murphy
  3. The Elements of Statistical Learning (Online)
    Trevor Hastie, Robert Tibshirani and Jerome Friedman
  4. Neural Networks (2nd edition)
    Simon Haykin
  5. Learning with Kernels
    Bernhard Schölkopf and Alexander J. Smola
  6. Gaussian Processes for Machine Learning (Online)
    Carl Edward Rasmussen and Christopher K. I. Williams
  7. Pattern Classification (2nd edition)
    Richard O. Duda, Peter E. Hart and David G. Stork
The material in the slides is mostly based on book [1].

Results: The preliminary results are available.


Go to:   Lectures | Assignments | Schedule | Results

Lecture notes

  1. Introduction (F)
  2. Probability theory (F)
  3. Decision theory (F)
  4. Information theory (F)
  5. Probability distributions - Binary and multinomial variables (F)
  6. Probability distributions - The Gaussian distribution (F)
  7. Probability distributions - Non-parametric density estimation (F)
  8. Linear models for regression - Linear basis function models (F)
  9. Linear models for regression - Bias-variance trade-off (F)
  10. Linear models for regression - Bayesian linear regression (F)
  11. Linear models for classification - Discriminant functions (F)
  12. Linear models for classification - Probabilistic generative models (F)
  13. Linear models for classification - Probabilistic discriminative models (F)
  14. Neural networks - Feed-forward network functions (F)
  15. Kernel methods - Representation and RBF networks (F)
  16. Kernel methods - Gaussian processes (F)
  17. Sparse kernel methods - Maximum margin classifiers (F)
  18. Sparse kernel methods - Support vector regression (F)
  19. Graphical models - Directed, undirected, factor graphs (F)
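Many of these topics lend themselves to short numerical sketches. As an illustration of lecture 8 (linear basis function models fitted by maximum likelihood / least squares), here is a minimal example in Python with NumPy. The function names and the polynomial-basis choice are ours, not taken from the slides:

```python
# Minimal sketch of a linear basis function model, y(x) = w^T phi(x),
# with maximum-likelihood (least-squares) weights. Illustrative only.
import numpy as np

def design_matrix(x, degree):
    """Polynomial basis functions phi_j(x) = x**j, j = 0..degree."""
    return np.vander(x, degree + 1, increasing=True)

def fit_least_squares(x, t, degree):
    """Maximum-likelihood weights: the least-squares solution."""
    Phi = design_matrix(x, degree)
    w, *_ = np.linalg.lstsq(Phi, t, rcond=None)
    return w

# Noisy samples from sin(2*pi*x), as in Bishop's running example
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 10)
t = np.sin(2 * np.pi * x) + rng.normal(scale=0.1, size=x.shape)

w = fit_least_squares(x, t, degree=3)
print(w)  # fitted polynomial coefficients
```

Regularised least squares (also covered in lecture 8) would replace the `lstsq` call with the ridge solution of the penalised normal equations.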
Top

Assignments

As the problem sets draw on questions from books, papers and webpages, we expect you not to copy, refer to, or look at existing solutions when preparing your answers. This is a graduate class; we expect you to want to learn, not to google for answers. The purpose of the problem sets is to help you think about the material, not just to give us the right answers.

If you do happen to use other material, it must be acknowledged clearly with a citation on the submitted solution.

Homework must be done individually: each of you must hand in your own answers and, when requested, write your own code. It is acceptable, however, to collaborate in figuring out answers. We assume that, as participants in a graduate course, you take responsibility for making sure you personally understand the solution to any work arising from collaboration (you must, though, indicate on each homework with whom you collaborated).
  1. Assignment 1 (R and E) - Return your solutions via email to R and/or E by SEP 28 2015, 23:59:59 (Fortaleza time)
    Errata:
  2. Assignment 2 (R and E) - Return your solutions via email to R and/or E by OCT 13 2015, 23:59:59 (Fortaleza time)
  3. Assignment 3 (R and E) - Return your solutions via email to R and/or E by NOV 03 2015, 23:59:59 (Fortaleza time)
  4. Assignment 4 (R and E) - Return your solutions via email to R and/or E by NOV 23 (WAS: NOV 16) 2015, 23:59:59 (Fortaleza time)
  5. Assignment 5 (R and E) - Return your solutions via email to R and/or E by DEC 07 (WAS: NOV 30) 2015, 23:59:59 (Fortaleza time)
  6. Assignment 6 (R and E) - Return your solutions via email to R and/or E by DEC 18 2015, 23:59:59 (Fortaleza time)
Top

Schedule

Estimated class schedule: it is defined as the course progresses and is subject to change according to timing and class interests.

THU Aug 20 00. Introduction (F) Course introduction.
TUE Aug 25 01. Probability theory (F) Generalities, densities, expectations and covariances, Bayesian probabilities, the univariate Gaussian.
THU Aug 27 02. Decision theory (F) Generalities, misclassification rates, expected losses, loss for regression.
TUE Sep 01 03. Information theory (F) Generalities, entropy and differential entropy, conditional entropy, relative entropy and mutual information.
THU Sep 03 04. Exercises (R and E) Probability, decision and information theory.
TUE Sep 08 05. Probability distributions (F) The binomial distribution, Bernoulli and beta distributions and the beta prior.
Multinomial distributions, the generalised Bernoulli distribution and the Dirichlet prior.
THU Sep 10 06. Probability distributions (F) The Gaussian distribution, conditional and marginal Gaussians.
TUE Sep 15 07. Probability distributions (F) Bayes' theorem and maximum likelihood for the Gaussian.
Bayesian inference for the Gaussian.
Mixture of Gaussians.
THU Sep 17 08. Probability distributions (F) Non-parametric density estimation.
Histograms, kernel density estimation and nearest-neighbour methods.
TUE Sep 22 09. Exercises (R and E) Probability distributions.
THU Sep 24 10. Linear models for regression (F) Linear basis function models, maximum likelihood and least squares, regularised least squares, and multiple outputs.
Bias-variance decomposition.
TUE Sep 29 11. Linear models for regression (F) Bayesian linear regression, parameter distribution and predictive distribution.
The equivalent kernel.
THU Oct 01 12. Exercises (R and E) Linear models for regression.
TUE Oct 06 13. Linear models for classification (F) Discriminant functions, Fisher's linear discriminant and the perceptron.
THU Oct 08 14. Linear models for classification (F) Probabilistic generative models.
TUE Oct 13 15. Linear models for classification (F) Probabilistic discriminative models, logistic regression and probit regression.
THU Oct 15 16. Exercises (R and E) Linear models for classification.
TUE Oct 20 17. Neural networks (F) Feed-forward network functions.
Network training, parameter optimisation, local quadratic approximation, gradient information and gradient descent optimisation.
Error back-propagation.
THU Oct 22 18. Exercises (R and E) Recap.
TUE Oct 27 19. Kernel methods (F) Dual representations and constructing kernels.
Radial basis function networks and the Nadaraya-Watson model.
THU Oct 29 20. Kernel methods (F) Gaussian processes.
TUE Nov 03 21. Exercises (R and E) Kernel methods.
THU Nov 05 22. Sparse kernel methods (F) Maximum margin classifiers.
TUE Nov 10 23. Sparse kernel methods (F) Support vector regression.
THU Nov 12 24. Exercises (R and E) Sparse kernel methods.
TUE Nov 17 25. Exercises (R and E) Recap.
THU Nov 19 26. Exercises (R and E) Recap.
TUE Nov 24 27. Exercises (R and E) Recap.
THU Nov 26 28. Exercises (R and E) Recap.
TUE Dec 01 29. Exercises (R and E) Recap.
THU Dec 03 30. Exercises (R and E) Recap.
TUE Dec 08 31. Graphical models (F) Introduction to directed graphs (Bayesian networks), undirected graphs (Markov networks) and factor graphs.

Last lecture, y'all!
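The non-parametric density estimation lecture (THU Sep 17) can likewise be illustrated with a short sketch: a Gaussian kernel density estimate over one-dimensional data. The function name and the bandwidth value are our own choices, not from the slides:

```python
# Minimal sketch of kernel density estimation with a Gaussian kernel:
# p(x) ~= (1/N) * sum_n N(x | x_n, h^2). Illustrative only.
import numpy as np

def gaussian_kde(data, query, h):
    """Evaluate the Gaussian KDE of `data` (bandwidth h) at each query point."""
    diffs = (query[:, None] - data[None, :]) / h              # shape (Q, N)
    kernels = np.exp(-0.5 * diffs**2) / (h * np.sqrt(2 * np.pi))
    return kernels.mean(axis=1)                               # average over data

data = np.random.default_rng(1).normal(size=500)   # samples from N(0, 1)
grid = np.linspace(-3, 3, 7)
density = gaussian_kde(data, grid, h=0.3)
print(density)  # roughly tracks the standard normal pdf
```

Shrinking the bandwidth h makes the estimate spikier; growing it smooths the data out, the usual bias-variance trade-off discussed in the lectures.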

Top

Results

Crediting students: Preliminary results
Non-crediting students: Preliminary results


Top