The Aleph Manual

This document provides reference information on Aleph (A Learning Engine for Proposing Hypotheses), an Inductive Logic Programming (ILP) system. This manual is not intended to be a tutorial on ILP. A good introduction to the theory, implementation and applications of ILP can be found in S.H. Muggleton and L. De Raedt (1994), Inductive Logic Programming: Theory and Methods, Journal of Logic Programming, 19-20:629-679, available at ftp://ftp.cs.york.ac.uk/pub/ML_GROUP/Papers/lpj.ps.gz [PDF]

Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition

Although initially introduced and studied in the late 1960s and early 1970s, statistical methods of Markov source or hidden Markov modeling have become increasingly popular in the last several years. There are two strong reasons why this has occurred. First, the models are very rich in mathematical structure and hence can form the theoretical basis for use in a wide range of applications. Second, the models, when applied properly, work very well in practice for several applications. In this paper we attempt to carefully and methodically review the theoretical aspects of this type of statistical modeling and show how they have been applied to selected problems in machine recognition of speech. [PDF]
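
To make the kind of modeling the paper reviews concrete, here is a minimal Python sketch of the forward algorithm for a toy discrete HMM. The two-state weather model, its probabilities, and the observation sequence are invented for illustration; they are not taken from the paper.

    # Toy discrete HMM: two hidden states (0 = Rainy, 1 = Sunny) and two
    # observable symbols (0 = umbrella seen, 1 = no umbrella).
    # All numbers are illustrative only.
    initial = [0.6, 0.4]                    # P(state at time 0)
    transition = [[0.7, 0.3],               # P(next state | current state)
                  [0.4, 0.6]]
    emission = [[0.9, 0.1],                 # P(observation | state)
                [0.2, 0.8]]

    def forward(observations):
        """Return P(observations) under the model, via dynamic programming."""
        # alpha[i] = P(obs_0..obs_t, state_t = i)
        alpha = [initial[i] * emission[i][observations[0]] for i in range(2)]
        for obs in observations[1:]:
            alpha = [
                sum(alpha[j] * transition[j][i] for j in range(2)) * emission[i][obs]
                for i in range(2)
            ]
        return sum(alpha)

    print(forward([0, 0, 1]))  # likelihood of "umbrella, umbrella, no umbrella"

On long observation sequences these probabilities underflow; the paper discusses the scaling procedures used in practice to avoid this.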

Markov Models and Hidden Markov Models: A Brief Tutorial

This tutorial gives a gentle introduction to Markov models and Hidden Markov models as mathematical abstractions, and relates them to their use in automatic speech recognition. The material was developed for the Fall 1995 semester of CS188: Introduction to Artificial Intelligence at the University of California, Berkeley. It is targeted at introductory AI courses; basic knowledge of probability theory (e.g. Bayes' Rule) is assumed. This version is slightly updated from the original, including a few minor error corrections, a short "Further Reading" section, and exercises that were given as homework in the Fall 1995 class. [PDF]
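
For readers who want to experiment alongside the tutorial, the following Python sketch simulates an ordinary (fully observable) Markov chain, the simpler abstraction the tutorial starts from. The two-state transition matrix is invented for illustration and is not from the tutorial.

    import random

    # Toy two-state Markov chain (numbers are illustrative only).
    # transition[i][j] = P(next state = j | current state = i)
    transition = [[0.9, 0.1],
                  [0.5, 0.5]]

    def simulate(steps, state=0, seed=0):
        """Sample a trajectory; return the fraction of time spent in each state."""
        rng = random.Random(seed)
        counts = [0, 0]
        for _ in range(steps):
            counts[state] += 1
            state = 0 if rng.random() < transition[state][0] else 1
        return [c / steps for c in counts]

    print(simulate(100_000))  # approaches the stationary distribution [5/6, 1/6]

Unlike the HMM sketch above, the state here is directly observed; the HMM adds a layer of uncertainty by emitting symbols from hidden states.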

Unsupervised Learning

We give a tutorial and overview of the field of unsupervised learning from the perspective of statistical modelling. Unsupervised learning can be motivated from information theoretic and Bayesian principles. We briefly review basic models in unsupervised learning, including factor analysis, PCA, mixtures of Gaussians, ICA, hidden Markov models, state-space models, and many variants and extensions. We derive the EM algorithm and give an overview of fundamental concepts in graphical models and of inference algorithms on graphs. This is followed by a quick tour of approximate Bayesian inference, including Markov chain Monte Carlo (MCMC), Laplace approximation, BIC, variational approximations, and expectation propagation (EP). The aim of this chapter is to provide a high-level view of the field. Along the way, many state-of-the-art ideas and future directions are also reviewed. [PDF]
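
Since the chapter derives the EM algorithm, here is a minimal Python sketch of EM for a one-dimensional mixture of two Gaussians. The synthetic data, initialization, and iteration count are invented for illustration; this is a sketch of the general technique, not the chapter's presentation.

    import math
    import random

    def gaussian_pdf(x, mean, var):
        return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

    def em(data, iterations=50):
        """Fit a 1-D mixture of two Gaussians by expectation-maximization."""
        # Crude initialization (illustrative only).
        weight = [0.5, 0.5]
        mean = [min(data), max(data)]
        var = [1.0, 1.0]
        for _ in range(iterations):
            # E-step: responsibilities resp[n][k] = P(component k | x_n).
            resp = []
            for x in data:
                p = [weight[k] * gaussian_pdf(x, mean[k], var[k]) for k in range(2)]
                total = sum(p)
                resp.append([pk / total for pk in p])
            # M-step: re-estimate weights, means, variances from responsibilities.
            for k in range(2):
                nk = sum(r[k] for r in resp)
                weight[k] = nk / len(data)
                mean[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
                var[k] = sum(r[k] * (x - mean[k]) ** 2 for r, x in zip(resp, data)) / nk
                var[k] = max(var[k], 1e-6)  # guard against collapsing variance
        return weight, mean, var

    rng = random.Random(0)
    data = [rng.gauss(-2.0, 1.0) for _ in range(200)] + \
           [rng.gauss(3.0, 0.5) for _ in range(200)]
    print(em(data))  # fitted means should approach roughly -2 and 3

Each iteration is guaranteed not to decrease the data log-likelihood, which is the key property the chapter's derivation of EM establishes.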

(JPC) - Last update: 2007/05/23