{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "
\n", "# Hidden Markov Model (HMM)\n", "
\n", "Motivated by the chord recognition problem, we give in this notebook an overview of hidden Markov models (HMMs) and introduce three famous algorithmic problems related to HMMs, following Section 5.3 of [Müller, FMP, Springer 2015]. For a detailed introduction to HMMs, we refer to the famous tutorial paper by Rabiner.\n", "
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Markov Chains\n", "\n", "Certain transitions from one chord to another are more likely than others. To capture such likelihoods, one can employ a concept called **Markov chains**. Abstracting from our chord recognition scenario, we assume that the chord types to be considered are represented by a set \n", "\n", "\\begin{equation}\n", " \\mathcal{A}:=\\{\\alpha_{1},\\alpha_{2},\\ldots,\\alpha_{I}\\}\n", "\\end{equation}\n", "\n", "of size $I\\in\\mathbb{N}$. The elements $\\alpha_{i}$ for $i\\in[1:I]$ are referred to as **states**. A progression of chords is realized by a system that can be described at any time instance $n=1,2,3,\\ldots$ as being in some state $s_{n}\\in\\mathcal{A}$. The change from one state to another is specified according to a set of probabilities associated with each state. In general, a probabilistic description of such a system can be quite complex. To simplify the model, one often makes the assumption that the probability of a change from the current state $s_{n}$ to the next state $s_{n+1}$ only depends on the current state, and not on the events that preceded it. In terms of conditional probabilities, this property is expressed by\n", "\n", "\\begin{equation}\n", " P[s_{n+1}=\\alpha_{j}|s_{n}=\\alpha_{i},s_{n-1}=\\alpha_{k},\\ldots]\n", " = P[s_{n+1}=\\alpha_{j}|s_{n}=\\alpha_{i}].\n", "\\end{equation}\n", "\n", "The specific kind of \"amnesia\" is called the **Markov property**. Besides this property, one also often assumes that the system is **invariant under time shifts**, which means by definition that the following coefficients become independent of the index $n$:\n", "\n", "\\begin{equation}\n", " a_{ij} := P[s_{n+1}=\\alpha_{j} | s_{n}=\\alpha_{i}] \\in [0,1]\n", "\\end{equation}\n", "\n", "for $i,j\\in[1:I]$. These coefficients are also called **state transition probabilities**. They obey the standard stochastic constraint $\\sum_{j=1}^{I} a_{ij} = 1$ and can be expressed by an $(I\\times I)$ matrix, which we denote by $A$. A system that satisfies these properties is also called a (discrete-time) **Markov chain**. The following figure illustrates these definitions. It defines a Markov chain that consists of $I=3$ states $\\alpha_{1}$, $\\alpha_{2}$, and $\\alpha_{3}$, which correspond to the major chords $\\mathbf{C}$, $\\mathbf{G}$, and $\\mathbf{F}$, respectively. In the graph representation, the states correspond to the nodes, the transitions to the edges, and the transition probabilities to the labels attached to the edges. For example, the transition probability to remain in the state $\\alpha_{1}=\\mathbf{C}$ is $a_{11}=0.8$, whereas the transition probability of changing from $\\alpha_{1}=\\mathbf{C}$ to $\\alpha_{2}=\\mathbf{G}$ is $a_{12}=0.1$.\n", "\n", "\"FMP_C5_F24\"\n", "\n", "The model expresses the probability of all possible chord changes. To compute the probability of a given chord progression, one also needs the information on how the model gets started. This information is specified\n", "by additional model parameters referred to as **initial state probabilities**. For a general Markov chain, these probabilities are specified by the numbers\n", "\n", "\\begin{equation}\n", " c_{i} := P[s_{1}=\\alpha_{i}] \\in [0,1]\n", "\\end{equation}\n", "\n", "for $i\\in[1:I]$. These coefficients, which sum up to one, can be expressed by a vector of length $I$ denoted by $C$." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Hidden Markov Models\n", "\n", "Based on a Markov chain, one can compute a probability for a given observation consisting of a sequence of states or chord types. In our [chord recognition scenario](../C5/C5S2_ChordRec_Templates.html), however, this is not what we need. Rather than observing a sequence of chord types, we observe a **sequence of chroma vectors** that are somehow related to the chord types. In other words, the state sequence is not directly visible, but only a fuzzier observation sequence that is generated based on the state sequence. This leads to an extension of Markov chains to a statistical model referred to as a **hidden Markov model** (HMM). The idea is to represent the relation between the observed feature vectors and the chord types (the states) using a probabilistic framework. Each state is equipped with a probability function that expresses the likelihood for a given chord type to output or emit a certain feature vector. As a result, we obtain a two-layered process consisting of a **hidden layer** and an **observable layer**. The hidden layer produces a state sequence that is not observable (\"hidden\"), but generates the observation sequence on the basis of the state-dependent probability functions.\n", "\n", "The **first layer** of an HMM is a **Markov chain** as introduced above. To define the second layer of an HMM, we need to specify a space of possible output values and a probability function for each state. In general, the output space can be any set including the real numbers, a vector space, or any kind of feature space. For example, in the case of chord recognition, this space may be modeled as the feature space $\\mathcal{F}=\\mathbb{R}^{12}$ consisting of all possible $12$-dimensional chroma vectors. For the sake of simplicity, we only consider the case of a **discrete HMM**, where the output space is assumed to be discrete and even finite. In this case, the space can be modeled as a finite set \n", "\n", "\\begin{equation}\n", " \\mathcal{B} = \\{\\beta_{1},\\beta_{2},\\ldots,\\beta_{K}\\} \n", "\\end{equation}\n", "\n", "of size $K\\in\\mathbb{N}$ consisting of distinct output elements $\\beta_{k}$, $k\\in[1:K]$, which are also referred to as **observation symbols**. An HMM associates with each state a probability function, which is also referred to as the **emission probability** or **output probability**. In the discrete case, the emission probabilities are specified by coefficients\n", "\n", "\\begin{equation}\n", " b_{ik}\\in[0,1]\n", "\\end{equation}\n", "\n", "for $i\\in[1:I]$ and $k\\in[1:K]$. Each coefficient $b_{ik}$ expresses the probability that the system outputs the observation symbol $\\beta_{k}$ when in state $\\alpha_{i}$. Similarly to the state transition probabilities, the emission probabilities are required to satisfy the stochastic constraint $\\sum_{k=1}^{K} b_{ik} = 1$ for $i\\in[1:I]$ (thus forming a probability distribution for each state). The coefficients can be expressed by an $(I\\times K)$ matrix, which we denote by $B$. In summary, an HMM is specified by a tuple\n", "\n", "\\begin{equation}\n", " \\Theta:=(\\mathcal{A},A,C,\\mathcal{B},B).\n", "\\end{equation}\n", "\n", "The sets $\\mathcal{A}$ and $\\mathcal{B}$ are usually considered to be fixed components of the model, while the probability values specified by $A$, $B$, and $C$ are the free parameters to be determined. 
This can be done explicitly by an expert based on his or her musical knowledge or by employing a learning procedure based on suitably labeled training data. Continuing the above example, the following figure illustrates a hidden Markov model, where the state-dependent emission probabilities are indicated by the labels of the dashed arrows.\n", "\n", "\"FMP_C5_F25\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In the following code cell, we define the state transition probability matrix $A$ and the output probability $B$ as specified by the figure. \n", "\n", "* Here, we assume that $\\alpha_{1}=\\mathbf{C}$, $\\alpha_{2}=\\mathbf{G}$, and $\\alpha_{3}=\\mathbf{F}$. \n", "* Furthermore, the elements of the output space $\\mathcal{B} = \\{\\beta_{1},\\beta_{2},\\beta_{3}\\}$ represent the three chroma vectors ordered from left to right. \n", "* Finally, we assume that the initial state probability vector $C$ is given by the values $c_{1}=0.6$, $c_{2}=0.2$, $c_{3}=0.2$." ] }, { "cell_type": "code", "execution_count": 1, "metadata": { "execution": { "iopub.execute_input": "2024-02-15T08:56:18.969932Z", "iopub.status.busy": "2024-02-15T08:56:18.969630Z", "iopub.status.idle": "2024-02-15T08:56:20.146998Z", "shell.execute_reply": "2024-02-15T08:56:20.146455Z" } }, "outputs": [], "source": [ "import numpy as np\n", "from sklearn.preprocessing import normalize \n", "\n", "A = np.array([[0.8, 0.1, 0.1], \n", " [0.2, 0.7, 0.1], \n", " [0.1, 0.3, 0.6]])\n", "\n", "C = np.array([0.6, 0.2, 0.2])\n", "\n", "B = np.array([[0.7, 0.0, 0.3], \n", " [0.1, 0.9, 0.0], \n", " [0.0, 0.2, 0.8]])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## HMM-Based Sequence Generation \n", "\n", "Once an HMM is specified by $\\Theta:=(\\mathcal{A},A,C,\\mathcal{B},B)$, it can be used for various analysis and synthesis applications. Since it is very instructive, we now discuss how to (artificially) generate, on the basis of a given HMM, an observation sequence $O=(o_{1},o_{2},\\ldots,o_{N})$ of length $N\\in\\mathbb{N}$ with $o_n\\in \\mathcal{B}$, $n\\in[1:N]$. The generation procedure is as follows:\n", "\n", "1. Set $n=1$ and choose an initial state $s_n=\\alpha_i$ for some $i\\in[1:I]$ according to the initial state distribution $C$.\n", "2. Generate an observation $o_n=\\beta_k$ for some $k\\in[1:K]$ according to the emission probability in state $s_n=\\alpha_i$ (specified by the $i^{\\mathrm{th}}$ row of $B$).\n", "3. If $n=N$ then terminate the process. 
Otherwise, if $n<N$, make a transition to a new state $s_{n+1}=\\alpha_j$ for some $j\\in[1:I]$ according to the state transition probabilities in state $s_n=\\alpha_i$ (specified by the $i^{\\mathrm{th}}$ row of $A$). Then increase $n$ by one and continue with step 2.\n", "\n", "The following code cell implements this generation procedure and applies it to the example HMM specified above." ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "def generate_sequence_hmm(N, A, C, B, details=False):\n", "    \"\"\"Generate an observation and state sequence from a given HMM\n", "\n", "    Notebook: C5/C5S3_HiddenMarkovModel.ipynb\n", "\n", "    Args:\n", "        N (int): Number of observations to be generated\n", "        A (np.ndarray): State transition probability matrix of dimension I x I\n", "        C (np.ndarray): Initial state distribution of dimension I\n", "        B (np.ndarray): Output probability matrix of dimension I x K\n", "        details (bool): If \"True\", print the generated state and observation for each step (Default value = False)\n", "\n", "    Returns:\n", "        O (np.ndarray): Observation sequence of length N\n", "        S (np.ndarray): State sequence of length N\n", "    \"\"\"\n", "    assert N > 0, \"N should be at least one\"\n", "    I = A.shape[1]\n", "    K = B.shape[1]\n", "    assert I == A.shape[0], \"A should be an I-square matrix\"\n", "    assert I == C.shape[0], \"Dimension of C should be I\"\n", "    assert I == B.shape[0], \"Column-dimension of B should be I\"\n", "\n", "    O = np.zeros(N, int)\n", "    S = np.zeros(N, int)\n", "    for n in range(N):\n", "        if n == 0:\n", "            i = np.random.choice(np.arange(I), p=C)\n", "        else:\n", "            i = np.random.choice(np.arange(I), p=A[i, :])\n", "        k = np.random.choice(np.arange(K), p=B[i, :])\n", "        S[n] = i\n", "        O[n] = k\n", "        if details:\n", "            print('n = %d, S[%d] = %d, O[%d] = %d' % (n, n, S[n], n, O[n]))\n", "    return O, S\n", "\n", "N = 10\n", "O, S = generate_sequence_hmm(N, A, C, B, details=True)\n", "print('State sequence S:      ', S)\n", "print('Observation sequence O:', O)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As a sanity check for the plausibility of our sequence generation approach, we now check whether the generated sequences reflect the probabilities of our HMM well. To this end, we estimate the original transition probability matrix $A$ and the output probability matrix $B$ from a generated observation sequence $O$ and state sequence $S$.\n", "\n", "* To obtain an estimate of the entry $a_{ij}$ of $A$, we count all transitions from $n$ to $n+1$ with $S(n)=\\alpha_i$ and $S(n+1)=\\alpha_j$ and then divide this number by the total number of transitions starting with $\\alpha_i$.\n", "\n", "* Similarly, to obtain an estimate of the entry $b_{ik}$ of $B$, we count the number of occurrences $n$ with $S(n)=\\alpha_i$ and $O(n)=\\beta_k$ and divide this number by the total number of occurrences of $\\alpha_i$ in $S$.\n", "\n", "When generating longer sequences by increasing the number $N$, the resulting estimates should approach the original values in $A$ and $B$. This is demonstrated by the subsequent experiment. \n", "\n", "
\n", "Note: In practice, when estimating HMM model parameters from training data, only observation sequences are typically available, and the state sequences (that reflect the hidden generation process) are generally not known. Learning parameters only from observation sequences leads to much harder estimation problems as discussed below. \n", "
" ] }, { "cell_type": "code", "execution_count": 3, "metadata": { "execution": { "iopub.execute_input": "2024-02-15T08:56:20.187599Z", "iopub.status.busy": "2024-02-15T08:56:20.187385Z", "iopub.status.idle": "2024-02-15T08:56:20.654048Z", "shell.execute_reply": "2024-02-15T08:56:20.653344Z" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "======== Estimation results when using N = 100 ========\n", "A =\n", "[[ 0.800 0.100 0.100]\n", " [ 0.200 0.700 0.100]\n", " [ 0.100 0.300 0.600]]\n", "A_est =\n", "[[ 0.825 0.125 0.050]\n", " [ 0.140 0.767 0.093]\n", " [ 0.062 0.375 0.562]]\n", "B =\n", "[[ 0.700 0.000 0.300]\n", " [ 0.100 0.900 0.000]\n", " [ 0.000 0.200 0.800]]\n", "B_est =\n", "[[ 0.675 0.000 0.325]\n", " [ 0.045 0.955 0.000]\n", " [ 0.000 0.312 0.688]]\n", "======== Estimation results when using N = 10000 ========\n", "A =\n", "[[ 0.800 0.100 0.100]\n", " [ 0.200 0.700 0.100]\n", " [ 0.100 0.300 0.600]]\n", "A_est =\n", "[[ 0.801 0.102 0.097]\n", " [ 0.208 0.697 0.096]\n", " [ 0.107 0.300 0.593]]\n", "B =\n", "[[ 0.700 0.000 0.300]\n", " [ 0.100 0.900 0.000]\n", " [ 0.000 0.200 0.800]]\n", "B_est =\n", "[[ 0.692 0.000 0.308]\n", " [ 0.108 0.892 0.000]\n", " [ 0.000 0.199 0.801]]\n" ] } ], "source": [ "def estimate_hmm_from_o_s(O, S, I, K):\n", " \"\"\"Estimate the state transition and output probability matrices from\n", " a given observation and state sequence\n", "\n", " Notebook: C5/C5S3_HiddenMarkovModel.ipynb\n", "\n", " Args:\n", " O (np.ndarray): Observation sequence of length N\n", " S (np.ndarray): State sequence of length N\n", " I (int): Number of states\n", " K (int): Number of observation symbols\n", "\n", " Returns:\n", " A_est (np.ndarray): State transition probability matrix of dimension I x I\n", " B_est (np.ndarray): Output probability matrix of dimension I x K\n", " \"\"\"\n", " # Estimate A\n", " A_est = np.zeros([I, I])\n", " N = len(S)\n", " for n in range(N-1):\n", " i = S[n]\n", " j = S[n+1]\n", " A_est[i, j] += 1\n", " A_est = normalize(A_est, axis=1, norm='l1')\n", "\n", " # Estimate B\n", " B_est = np.zeros([I, K])\n", " for i in range(I):\n", " for k in range(K):\n", " B_est[i, k] = np.sum(np.logical_and(S == i, O == k))\n", " B_est = normalize(B_est, axis=1, norm='l1')\n", " return A_est, B_est\n", "\n", "N = 100\n", "print('======== Estimation results when using N = %d ========' % N)\n", "O, S = generate_sequence_hmm(N, A, C, B, details=False)\n", "A_est, B_est = estimate_hmm_from_o_s(O, S, A.shape[1], B.shape[1])\n", "np.set_printoptions(formatter={'float': \"{: 7.3f}\".format})\n", "print('A =', A, sep='\\n')\n", "print('A_est =', A_est, sep='\\n')\n", "print('B =', B, sep='\\n')\n", "print('B_est =', B_est, sep='\\n')\n", "\n", "N = 10000\n", "print('======== Estimation results when using N = %d ========' % N)\n", "O, S = generate_sequence_hmm(N, A, C, B, details=False)\n", "A_est, B_est = estimate_hmm_from_o_s(O, S, A.shape[1], B.shape[1])\n", "np.set_printoptions(formatter={'float': \"{: 7.3f}\".format})\n", "print('A =', A, sep='\\n')\n", "print('A_est =', A_est, sep='\\n')\n", "print('B =', B, sep='\\n')\n", "print('B_est =', B_est, sep='\\n')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "## Three Problems for HMMs\n", "\n", "We have seen how a given HMM can be used to generate an observation sequence. We will now look at three famous algorithmic problems for HMMs that concern the specification of the free model parameters and the evaluation of observation sequences. \n", "\n", "### 1. 
Evaluation Problem\n", "\n", "The first problem is known as the **evaluation problem**. Given an HMM specified by $\\Theta=(\\mathcal{A},A,C,\\mathcal{B},B)$ and an observation sequence $O=(o_{1},o_{2},\\ldots,o_{N})$, the task is to compute the probability \n", "\n", "\\begin{equation}\n", " P[O|\\Theta]\n", "\\end{equation}\n", "\n", "of the observation sequence given the model. From a slightly different viewpoint, this probability can be regarded as a score value that expresses how well a given model matches a given observation sequence. This interpretation becomes useful in the case where one is trying to choose among several competing models. The solution would then be to choose the model which best matches the observation sequence. To compute $P[O|\\Theta]$, we first consider a fixed state sequence $S=(s_1,s_2,\\ldots,s_N)$ of length $N$ with $s_n=\\alpha_{i_n}\\in\\mathcal{A}$ for some suitable $i_n\\in[1:I]$, $n\\in[1:N]$. The probability $P[O,S|\\Theta]$ for generating the state sequence $S$ as well as the observation sequence $O$ (where we write $o_n=\\beta_{k_n}$ for suitable $k_n\\in[1:K]$) is given by \n", "\n", "$$\n", "P[O,S|\\Theta] = c_{i_1}\\cdot b_{i_1k_1} \\cdot a_{i_1i_2}\\cdot b_{i_2k_2} \\cdots a_{i_{N-1}i_N}\\cdot b_{i_Nk_N}\n", "$$\n", "\n", "Next, to obtain the overall probability $P[O|\\Theta]$, one needs to sum up all these probabilities considering all possible state sequences $S$ of length $|S|=N$:\n", "\n", "$$\n", "P[O|\\Theta] = \\sum_{S: |S|=N}P[O,S|\\Theta]\n", "= \\sum_{i_1=1}^I \\sum_{i_2=1}^I \\ldots \\sum_{i_N=1}^I\n", "c_{i_1}\\cdot b_{i_1k_1} \\cdot a_{i_1i_2}\\cdot b_{i_2k_2} \\cdots a_{i_{N-1}i_N}\\cdot b_{i_Nk_N}\n", "$$\n", "\n", "This leads to $I^N$ summands, a number that is exponential in the length $N$ of the observation sequence. Therefore, in practice, this brute-force calculation is computationally infeasible even for a small $N$. The good news is that there is a more efficient way to compute $P[O|\\Theta]$ using an algorithm that is based on the dynamic programming paradigm. This procedure, which is known as the [**Forward–Backward Algorithm**](https://en.wikipedia.org/wiki/Forward%E2%80%93backward_algorithm), requires a number of operations on the order of $I^2N$ (instead of $I^N$). For a detailed description of this algorithm, we refer to the article by [Rabiner](https://ieeexplore.ieee.org/document/18626). A small code sketch comparing the brute-force computation with the forward procedure is included at the end of this notebook.\n", "\n", "\n", "### 2. Uncovering Problem\n", "\n", "The second problem is the so-called **uncovering problem**. Again, we are given an HMM specified by $\\Theta=(\\mathcal{A},A,C,\\mathcal{B},B)$ and an observation sequence $O=(o_{1},o_{2},\\ldots,o_{N})$. Instead of finding the overall probability $P[O|\\Theta]$ for $O$, where one needs to consider **all** possible state sequences, the goal of the uncovering problem is to find the **single** state sequence $S=(s_{1},s_{2},\\ldots,s_{N})$ that \"best explains\" the observation sequence. The uncovering problem stated so far is not well defined since, in general, there is not a single \"correct\" state sequence generating the observation sequence. Instead, one needs an optimization criterion that specifies what is meant when talking about a best possible explanation. There are several reasonable choices for such a criterion, and the actual choice will depend on the intended application. In the [FMP notebook on the Viterbi algorithm](../C5/C5S3_Viterbi.html), we will discuss one possible choice as well as an efficient algorithm (called the **Viterbi algorithm**). 
This algorithm, which can be thought of as a kind of context-sensitive smoothing procedure, will be applied in the [FMP notebook on HMM-based chord recognition](../C5/C5S3_ChordRec_HMM.html). \n", "\n", "\n", "### 3. Estimation Problem\n", "\n", "Besides the evaluation and uncovering problems, the third basic problem for HMMs is referred to as the **estimation problem**. Given an observation sequence $O$, the objective is to determine the free model parameters of $\\Theta$ (specified by $A$, $C$, and $B$) that maximize the probability $P[O|\\Theta]$. In other words, the free model parameters are to be estimated so as to best describe the observation sequence. This is a typical instance of an **optimization problem** where a set of observation sequences serves as **training material** for adjusting or learning the HMM parameters. The estimation problem is by far the most difficult problem of HMMs. In fact, there is no known way to explicitly solve the given optimization problem. However, iterative procedures that find locally optimal solutions have been suggested. One of these procedures is known as the [**Baum–Welch Algorithm**](https://en.wikipedia.org/wiki/Baum%E2%80%93Welch_algorithm). Again, we refer to the article by [Rabiner](https://ieeexplore.ieee.org/document/18626) for more details. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "
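\n", "As a small illustration of the evaluation problem, the following code sketch (not part of the original notebook; the function names are chosen for illustration) computes $P[O|\\Theta]$ for the example HMM defined above in two ways: once by the brute-force sum over all $I^N$ state sequences, and once with the forward procedure, which requires only on the order of $I^2N$ operations. For a short observation sequence, where the brute-force sum is still feasible, both computations yield the same value.\n", "\n", "```python\n", "import itertools\n", "import numpy as np\n", "\n", "# Example HMM from above: transition matrix A, initial distribution C, output matrix B\n", "A = np.array([[0.8, 0.1, 0.1],\n", "              [0.2, 0.7, 0.1],\n", "              [0.1, 0.3, 0.6]])\n", "C = np.array([0.6, 0.2, 0.2])\n", "B = np.array([[0.7, 0.0, 0.3],\n", "              [0.1, 0.9, 0.0],\n", "              [0.0, 0.2, 0.8]])\n", "\n", "def prob_observation_forward(O, A, C, B):\n", "    \"\"\"Compute P[O | Theta] with the forward procedure (order I^2 N operations).\"\"\"\n", "    alpha = C * B[:, O[0]]             # alpha_1(i) = c_i * b_{i, k_1}\n", "    for k in O[1:]:\n", "        alpha = (alpha @ A) * B[:, k]  # alpha_n(j) = sum_i alpha_{n-1}(i) * a_{ij} * b_{j, k_n}\n", "    return alpha.sum()\n", "\n", "def prob_observation_brute_force(O, A, C, B):\n", "    \"\"\"Compute P[O | Theta] by summing P[O, S | Theta] over all I^N state sequences.\"\"\"\n", "    I, N = A.shape[0], len(O)\n", "    p_total = 0.0\n", "    for states in itertools.product(range(I), repeat=N):\n", "        p = C[states[0]] * B[states[0], O[0]]\n", "        for n in range(1, N):\n", "            p *= A[states[n - 1], states[n]] * B[states[n], O[n]]\n", "        p_total += p\n", "    return p_total\n", "\n", "O = [0, 2, 1, 2]  # observation symbols beta_1, beta_3, beta_2, beta_3\n", "print(prob_observation_forward(O, A, C, B))\n", "print(prob_observation_brute_force(O, A, C, B))  # same value, but exponential cost in N\n", "```" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "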
\n", "Acknowledgment: This notebook was created by Meinard Müller.\n", "
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "\n", "
\"C0\"\"C1\"\"C2\"\"C3\"\"C4\"\"C5\"\"C6\"\"C7\"\"C8\"
" ] } ], "metadata": { "anaconda-cloud": {}, "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.16" } }, "nbformat": 4, "nbformat_minor": 1 }