Introduction to Graphical Models:
- What are graphical models and why are they needed?
- Types of graphical models (e.g., Bayesian Networks, Markov Random Fields, Factor Graphs).
- Representing conditional independence using graph structures.
Bayesian Networks (BNs):
- Directed Acyclic Graphs (DAGs) representing causal and probabilistic relationships.
- Conditional probability distributions (CPDs) associated with each node.
- Local Markov Property and d-separation.
- Representing joint probability distributions using BNs.
- Applications in areas like medical diagnosis and spam classification.
- Classic worked examples such as the Sprinkler problem.
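The BN factorization above can be made concrete with the classic Sprinkler network (Cloudy → Sprinkler, Cloudy → Rain, {Sprinkler, Rain} → WetGrass). This is a minimal sketch; the CPD numbers are the illustrative textbook-style values, not data-derived:

```python
from itertools import product

# CPDs for the Sprinkler network (illustrative values).
P_C = {True: 0.5, False: 0.5}                       # P(Cloudy)
P_S_given_C = {True: {True: 0.1, False: 0.9},       # P(Sprinkler | Cloudy)
               False: {True: 0.5, False: 0.5}}
P_R_given_C = {True: {True: 0.8, False: 0.2},       # P(Rain | Cloudy)
               False: {True: 0.2, False: 0.8}}
P_W_given_SR = {(True, True): {True: 0.99, False: 0.01},   # P(WetGrass | S, R)
                (True, False): {True: 0.9, False: 0.1},
                (False, True): {True: 0.9, False: 0.1},
                (False, False): {True: 0.0, False: 1.0}}

def joint(c, s, r, w):
    """The BN factorization: P(C,S,R,W) = P(C) P(S|C) P(R|C) P(W|S,R)."""
    return P_C[c] * P_S_given_C[c][s] * P_R_given_C[c][r] * P_W_given_SR[(s, r)][w]

# Sanity check: the joint sums to 1 over all 16 assignments.
total = sum(joint(c, s, r, w) for c, s, r, w in product([True, False], repeat=4))
print(round(total, 10))  # → 1.0
```

Because each CPD is locally normalized, the product is automatically a valid joint distribution; that is the practical payoff of the DAG factorization.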
Markov Random Fields (MRFs):
- Undirected graphs representing dependencies between variables.
- Potential functions and cliques.
- Hammersley–Clifford theorem and Gibbs distributions.
- Markov properties (pairwise, local, global).
- Applications in image processing, computer vision, and more.
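The Gibbs-distribution form can be illustrated on a tiny pairwise MRF: a 3-node chain A – B – C over binary variables, with potentials (arbitrary illustrative values) that favor neighbors agreeing. The distribution is the normalized product of clique potentials:

```python
from itertools import product

def phi(x, y):
    """Pairwise potential on an edge; rewards agreement (illustrative values)."""
    return 2.0 if x == y else 1.0

def unnormalized(a, b, c):
    # Gibbs form: product of potentials over the cliques {A,B} and {B,C}.
    return phi(a, b) * phi(b, c)

states = [0, 1]
Z = sum(unnormalized(a, b, c) for a, b, c in product(states, repeat=3))  # partition function

def p(a, b, c):
    return unnormalized(a, b, c) / Z

print(p(0, 0, 0))  # fully agreeing configurations get the highest probability
```

Note that, unlike a BN's locally normalized CPDs, the potentials here are unnormalized, so computing the partition function Z is required — and in large models that is exactly where inference gets hard.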
Factor Graphs:
- A bipartite graph representing the factorization of a function into factors defined on subsets of variables.
- Relationship between Factor Graphs, BNs, and MRFs.
- Benefits of using factor graphs for inference algorithms.
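A factor graph can be sketched directly as a list of (scope, factor) pairs — the scopes make the bipartite factor–variable structure explicit. The factor tables below are illustrative, for a function f(x1,x2,x3) = f_a(x1,x2) · f_b(x2,x3) · f_c(x3):

```python
# Each factor is stored with its variable scope (the edges of the
# bipartite graph) and a local function over just those variables.
factors = [
    (("x1", "x2"), lambda a: 1.0 + a["x1"] * a["x2"]),
    (("x2", "x3"), lambda a: 2.0 if a["x2"] == a["x3"] else 0.5),
    (("x3",),      lambda a: 0.7 if a["x3"] == 1 else 0.3),
]

def evaluate(assignment):
    """The global function is the product of the local factors,
    each applied only to the variables in its scope."""
    out = 1.0
    for scope, f in factors:
        out *= f({v: assignment[v] for v in scope})
    return out

print(evaluate({"x1": 1, "x2": 1, "x3": 0}))  # → 0.3
```

This explicit scope structure is why message-passing algorithms are naturally stated on factor graphs: messages flow along exactly these variable–factor edges.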
Inference in Graphical Models:
- Exact Inference:
  - Variable Elimination.
  - Sum-Product Algorithm (Belief Propagation).
  - Junction Tree Algorithm.
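Variable elimination can be shown on a chain A → B → C of binary variables: to get P(C), sum out A (producing an intermediate factor), then B. CPD values are illustrative:

```python
# CPDs for the chain A -> B -> C (illustrative numbers).
P_A = {0: 0.6, 1: 0.4}
P_B_given_A = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.2, 1: 0.8}}
P_C_given_B = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.4, 1: 0.6}}

# Eliminate A: tau1(b) = sum_a P(a) P(b|a)
tau1 = {b: sum(P_A[a] * P_B_given_A[a][b] for a in (0, 1)) for b in (0, 1)}

# Eliminate B: P(c) = sum_b tau1(b) P(c|b)
P_C = {c: sum(tau1[b] * P_C_given_B[b][c] for b in (0, 1)) for c in (0, 1)}

print(P_C)  # → {0: 0.65, 1: 0.35}
```

The point of the elimination ordering is that each intermediate factor (here `tau1`) stays small; summing variables out one at a time avoids ever materializing the full joint table.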
- Approximate Inference:
  - Sampling methods (e.g., Markov Chain Monte Carlo, Gibbs Sampling, Metropolis-Hastings).
  - Variational methods (e.g., Mean Field approximation).
  - Loopy Belief Propagation.
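Gibbs sampling can be sketched on a two-variable Ising-style model, p(x, y) ∝ exp(J·x·y) with x, y ∈ {−1, +1}; the coupling J is an illustrative choice. Each step resamples one variable from its exact conditional given the other:

```python
import math
import random

random.seed(0)
J = 1.0  # coupling strength (illustrative)

def sample_conditional(other):
    # p(x = +1 | other) = 1 / (1 + exp(-2*J*other)), from p(x,y) ∝ exp(J*x*y)
    p_plus = 1.0 / (1.0 + math.exp(-2.0 * J * other))
    return 1 if random.random() < p_plus else -1

x, y = 1, 1
agree = 0
n = 20000
for _ in range(n):
    x = sample_conditional(y)   # resample x given current y
    y = sample_conditional(x)   # resample y given current x
    agree += (x == y)

# The empirical agreement rate should approach the exact value
# e^J / (e^J + e^-J) ≈ 0.88.
print(agree / n)
```

This is the whole idea of Gibbs sampling: each conditional is trivial to sample even when the joint's normalization constant is not available, and the long-run sample frequencies approximate the target distribution.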
Learning in Graphical Models:
- Parameter Learning:
  - Estimating parameters (CPDs for BNs, potential functions for MRFs) from data.
  - Techniques like Maximum Likelihood Estimation (MLE) and Bayesian Estimation.
  - Expectation-Maximization (EM) algorithm, especially with incomplete data.
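For discrete BNs with complete data, MLE of a CPD reduces to normalized counts. A minimal sketch, using a hypothetical dataset of (cloudy, rain) observations:

```python
from collections import Counter

# Hypothetical complete-data observations of (Cloudy, Rain).
data = [(1, 1), (1, 1), (1, 0), (0, 0), (0, 0), (0, 1), (1, 1), (0, 0)]

pair_counts = Counter(data)                     # N(c, r)
parent_counts = Counter(c for c, _ in data)     # N(c)

def mle_cpd(c, r):
    """MLE for a discrete CPD: P(r | c) = N(c, r) / N(c)."""
    return pair_counts[(c, r)] / parent_counts[c]

print(mle_cpd(1, 1))  # → 0.75 (3 of the 4 cloudy observations had rain)
```

With incomplete data these counts are unavailable, which is exactly where EM comes in: the E-step fills in expected counts under the current parameters, and the M-step re-runs this same normalized-count update on them.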
- Structure Learning:
  - Determining the graph structure (dependencies between variables) from data.
  - Constraint-based methods (using conditional independence tests).
  - Score-based methods (evaluating structures with a scoring function).
  - Using integer programming for structure learning.
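The score-based idea can be sketched in the smallest possible case: choose between two candidate structures over binary X and Y — the empty graph (X ⊥ Y) versus X → Y — by comparing BIC scores on a hypothetical sample:

```python
import math
from collections import Counter

# Hypothetical data with strong X-Y dependence (80% agreement).
data = [(0, 0)] * 40 + [(0, 1)] * 10 + [(1, 0)] * 10 + [(1, 1)] * 40
n = len(data)
cx = Counter(x for x, _ in data)
cy = Counter(y for _, y in data)
cxy = Counter(data)

def loglik_empty():
    # Log-likelihood under the empty graph: P(x) P(y) with MLE marginals.
    return sum(math.log((cx[x] / n) * (cy[y] / n)) for x, y in data)

def loglik_edge():
    # Log-likelihood under X -> Y: P(x) P(y|x) with MLE CPDs.
    return sum(math.log((cx[x] / n) * (cxy[(x, y)] / cx[x])) for x, y in data)

def bic(loglik, n_params):
    # BIC trades fit against complexity via a log(n) penalty per parameter.
    return loglik - 0.5 * n_params * math.log(n)

score_empty = bic(loglik_empty(), 2)  # one free parameter per marginal
score_edge = bic(loglik_edge(), 3)    # P(x) plus P(y|x) for each x
print(score_edge > score_empty)       # → True: the edge is worth its cost
```

Real structure learners apply this same score to every candidate DAG and search over them (greedily, or — as noted above — via an exact integer-programming formulation); the penalty term is what keeps them from always preferring the densest graph.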
Specialized Graphical Models:
- Hidden Markov Models (HMMs): modeling sequential data and performing dynamic inference, typically via the Forward-Backward Algorithm and the Viterbi Algorithm.
- Conditional Random Fields (CRFs): undirected graphical models for structured prediction, particularly useful for tasks like named entity recognition and image segmentation.
- Dynamic Bayesian Networks (DBNs): extending BNs to model time-series data.
- Kalman Filters: exact inference in linear-Gaussian state-space models.
- Restricted Boltzmann Machines (RBMs): undirected bipartite models used in deep learning.
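The HMM Forward Algorithm can be sketched for a 2-state model with two observation symbols; all probabilities below are illustrative. It computes the observation likelihood P(x₁..x_T) by dynamic programming over α[t][s] = P(x₁..x_t, z_t = s):

```python
states = (0, 1)
init = [0.6, 0.4]                    # P(z_1)
trans = [[0.7, 0.3], [0.4, 0.6]]     # P(z_t | z_{t-1})
emit = [[0.9, 0.1], [0.2, 0.8]]      # P(x_t | z_t), two observation symbols

def forward(obs):
    # Initialization: alpha_1(s) = P(z_1 = s) P(x_1 | s)
    alpha = [init[s] * emit[s][obs[0]] for s in states]
    # Recursion: alpha_t(s) = [sum_s' alpha_{t-1}(s') P(s | s')] P(x_t | s)
    for x in obs[1:]:
        alpha = [sum(alpha[sp] * trans[sp][s] for sp in states) * emit[s][x]
                 for s in states]
    # Termination: P(x_1..x_T) = sum_s alpha_T(s)
    return sum(alpha)

print(forward([0, 1, 0]))  # → 0.10893
```

Viterbi has the same structure with the sum replaced by a max (plus backpointers), which is why the two algorithms are usually taught together.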
Applications of Graphical Models:
- Machine learning and deep learning (e.g., Naive Bayes as a simple BN; connections to neural networks such as RBMs).
- Computer Vision (image segmentation, object detection).
- Natural Language Processing (NLP) (part-of-speech tagging, named entity recognition).
- Robotics.
- Bioinformatics and computational biology.
- Causal inference.
- Web search and information retrieval (IR).