International Audio Laboratories Erlangen
Authors: Meinard Müller, Stefan Balke, Frank Zalkow
References:
[Mueller2015] Meinard Müller. Fundamentals of Music Processing. Springer Verlag, 2015.
The Fourier transform, which is used to convert a time-dependent signal to a frequency-dependent signal, is one of the most important mathematical tools in audio signal processing. Applying the Fourier transform to local sections of an audio signal, one obtains the short-time Fourier transform (STFT). In this lab course, we study a discrete version of the STFT. To work with the discrete STFT in practice, one needs to correctly interpret the discrete time and frequency parameters. Using Python, we compute a discrete STFT and visualize its magnitude in the form of a spectrogram representation. Then, we derive from the STFT various audio features that are useful for analyzing music signals. In particular, we develop a log-frequency spectrogram, where the frequency axis is converted into an axis corresponding to musical pitches. From this, we derive a chroma representation, which is a useful tool for capturing the harmonic content of music.
Audio signals can be complex mixtures consisting of a multitude of different sound components. A first step in better understanding a given signal is to decompose it into building blocks that are better accessible for the subsequent processing steps. In the case that these building blocks consist of complex-valued sinusoidal functions, such a process is also called Fourier analysis. The Fourier transform maps a time-dependent signal to a frequency-dependent function which reveals the spectrum of frequency components that compose the original signal. Loosely speaking, a signal and its Fourier transform are two sides of the same coin. On the one side, the signal displays the time information and hides the information about frequencies. On the other side, the Fourier transform reveals information about frequencies and hides the time information.
To recover the hidden time information, Dennis Gabor introduced in 1946 a modified Fourier transform, now known as the short-time Fourier transform or simply STFT. This transform represents a compromise between a time-based and a frequency-based representation by determining the sinusoidal magnitude and phase content of local sections of a signal as it changes over time. In this way, the STFT not only tells which frequencies are "contained" in the signal but also at which points in time or, to be more precise, in which time intervals these frequencies appear.
The figure shows various representations for a piano recording of the chromatic scale ranging from A0 ($p=21$) to C8 ($p=108$). (a) Piano keys representing the chromatic scale. (b) Spectrogram representation. (c) Pitch-based log-frequency spectrogram. (d) Chromagram representation. For visualization purposes, the values are color-coded using a logarithmic scale. The C3 ($p=48$) played at time $t=30~{\mathrm{sec}}$ is highlighted by the rectangular frames.
The main objective of this lab course is to acquire a good understanding of the STFT. To this end, we study a discrete version of the STFT using the discrete Fourier transform (DFT), which can be efficiently computed using the fast Fourier transform (FFT). The discrete STFT yields a discrete set of Fourier coefficients that are indexed by time and frequency parameters. The correct physical interpretation of these parameters in terms of units such as seconds and Hertz depends on the sampling rate, the window size, and the hop size used in the STFT computation. In this lab course, we will compute a discrete STFT using Python and then visualize its magnitude by a spectrogram representation, see the STFT-section. By applying the STFT to different audio examples and by modifying the various parameters, one should get a better understanding of how the STFT works in practice.
To make music data comparable and algorithmically accessible, the first step in basically all music processing tasks is to extract suitable features that capture relevant aspects while suppressing irrelevant details. In the second part of this lab course, we study audio features and mid-level representations that are particularly useful for capturing pitch information of music signals. Assuming that we are dealing with music that is based on the equal-tempered scale (the scale that corresponds to the keys of a piano keyboard), we will convert an audio recording into a feature representation that reveals the distribution of the signal's energy across the different pitches, see the Log-Frequency-Spectrogram-section. Technically, these features are obtained from a spectrogram by converting the linear frequency axis (measured in Hertz) into a logarithmic axis (measured in pitches). From this log-frequency spectrogram, we then derive a time-chroma representation by suitably combining pitch bands that correspond to the same chroma, see the Chroma-Features-section. The resulting chroma features show a high degree of robustness to variations in timbre and instrumentation.
The Fourier transform, and in particular the discrete STFT, serves as the front-end transform (i.e., the first computing step) for deriving a large number of musically relevant audio features. We now recall the definition of the discrete STFT while fixing some notation. Let $x:[0:L-1]:=\{0,1,\ldots,L-1\}\to{\mathbb R}$ be a real-valued discrete-time signal of length $L$ obtained by equidistant sampling with respect to a fixed sampling rate $F_\mathrm{s}$ given in Hertz ($\mathrm{Hz}$). Furthermore, let $w:[0:N-1]:=\{0,1,\ldots,N-1\}\to{\mathbb R}$ be a discrete-time window of length $N\in{\mathbb N}$ (usually a power of two) and let $H\in{\mathbb N}$ be a hop size parameter. With respect to these parameters, the discrete STFT ${\mathcal X}$ of the signal $x$ is given by
\begin{eqnarray} {\mathcal X}(m,k):= \sum_{n=0}^{N-1} x(n+mH)w(n)\exp(-2\pi ikn/N) \end{eqnarray}
with $m\in[0:\lfloor \frac{L-N}{H} \rfloor]$ and $k\in[0:K]$. The complex number ${\mathcal X}(m,k)$ denotes the $k^{\mathrm{th}}$ Fourier coefficient for the $m^{\mathrm{th}}$ time frame, where $K=N/2$ is the frequency index corresponding to the Nyquist frequency. Each Fourier coefficient ${\mathcal X}(m,k)$ is associated with the physical time position (using the start position of the window as reference point)
\begin{equation} {T_{\mathrm{coef}}(m)} := \frac{m\cdot H}{F_\mathrm{s}} \end{equation}
given in seconds (${\mathrm{sec}}$) and with the physical frequency
\begin{equation} F_{\mathrm{coef}}(k) := \frac{k\cdot F_\mathrm{s}}{N} \end{equation}
given in Hertz ($\mathrm{Hz}$). For example, using $F_\mathrm{s}=44100~\mathrm{Hz}$ as for a CD recording, a window length of $N=4096$, and a hop size of $H=N/2$, we obtain a time resolution of $H/F_\mathrm{s}\approx 46.4~\mathrm{ms}$ and frequency resolution of $F_\mathrm{s}/N\approx 10.8~\mathrm{Hz}$.
# write the functions T_coef and F_coef...
Fs, N, H = 22050, 1024, 512
print('Fs = %5d, N = %d, H = %4d: Tcoef = %6.2f msec, Fcoef = %5.2f Hz, Nyquist = %.2f Hz' % (Fs, N, H, T_coef(1, H, Fs)*1000, F_coef(1, N, Fs), Fs/2))
Fs, N, H = 48000, 1024, 256
print('Fs = %5d, N = %d, H = %4d: Tcoef = %6.2f msec, Fcoef = %5.2f Hz, Nyquist = %.2f Hz' % (Fs, N, H, T_coef(1, H, Fs)*1000, F_coef(1, N, Fs), Fs/2))
Fs, N, H = 4000, 4096, 1024
print('Fs = %5d, N = %d, H = %4d: Tcoef = %6.2f msec, Fcoef = %5.2f Hz, Nyquist = %.2f Hz' % (Fs, N, H, T_coef(1, H, Fs)*1000, F_coef(1, N, Fs), Fs/2))
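One possible implementation of these two conversion functions, matching the call signatures used in the exercise code above, is sketched below (a minimal sketch that directly evaluates the formulas for $T_{\mathrm{coef}}$ and $F_{\mathrm{coef}}$; the function and parameter names follow the exercise, not a fixed API):

def T_coef(m, H, Fs):
    # Physical time position (in seconds) of frame index m for hop size H and sampling rate Fs
    return m * H / Fs

def F_coef(k, N, Fs):
    # Physical frequency (in Hertz) of frequency index k for window length N and sampling rate Fs
    return k * Fs / N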
# write the function ex1_2...
Fs, N, H = 44100, 2048, 1024
m, k = 1000, 1000
ex1_2(Fs, N, H, k, m)
m, k = 17, 0
ex1_2(Fs, N, H, k, m)
m, k = 56, 1024
ex1_2(Fs, N, H, k, m)
The STFT is often visualized by means of a spectrogram, which is a two-dimensional representation of the squared magnitude:
\begin{equation} {\mathcal Y}(m,k) = |{\mathcal X}(m,k)|^2. \end{equation}
When generating an image of a spectrogram, the horizontal axis represents time, the vertical axis represents frequency, and the spectrogram value at a particular time-frequency point is encoded by the intensity or color of the image.
import soundfile as sf
from IPython.display import Audio
# your code here...
Audio(x, rate=Fs)
# your code here...
from scipy import signal
# your code here...
import librosa
# your code here...
import numpy as np
# your code here...
# your code here, compute t...
# your code here, compute f...
from matplotlib import pyplot as plt
%matplotlib inline
# your code here...
# your code here...
# your code here...
# your code here...
# your code here...
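As a reference for the steps above, the following minimal sketch loads an audio file, computes a discrete STFT, and visualizes the spectrogram. The file name 'piano.wav' and the parameter values are placeholders; librosa.stft is used here, but scipy.signal.stft works analogously.

import numpy as np
import librosa
from matplotlib import pyplot as plt

# Load the recording (file name is a placeholder) and resample to Fs = 22050 Hz
x, Fs = librosa.load('piano.wav', sr=22050)

# STFT parameters: window length N and hop size H
N, H = 1024, 512
X = librosa.stft(x, n_fft=N, hop_length=H, win_length=N, window='hann')
Y = np.abs(X) ** 2  # spectrogram = squared magnitude

# Physical time and frequency axes
t = np.arange(X.shape[1]) * H / Fs
f = np.arange(X.shape[0]) * Fs / N

plt.figure(figsize=(8, 4))
plt.pcolormesh(t, f, Y, shading='auto', cmap='gray_r')
plt.xlabel('Time (seconds)')
plt.ylabel('Frequency (Hz)')
plt.colorbar()
plt.tight_layout()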
The human sensation of the intensity of a sound is logarithmic in nature. In practice, sounds that have an extremely small intensity may still be relevant for human listeners. Therefore, one often uses a decibel scale, which is a logarithmic unit expressing the ratio between two values. As an alternative to the decibel scale, one often applies in audio processing a step referred to as logarithmic compression, which works as follows. Let $\gamma\in{\mathbb R}_{>0}$ be a positive constant and $\Gamma_\gamma:{\mathbb R}_{>0} \to {\mathbb R}_{>0}$ a function defined by
\begin{equation} \Gamma_\gamma(v):=\log(1+ \gamma \cdot v) \end{equation}
for $v\in{\mathbb R}_{>0}$, where we use the natural logarithm. Note that the function $\Gamma_\gamma$ yields a positive value $\Gamma_\gamma(v)$ for any positive value $v\in{\mathbb R}_{>0}$. Now, for a representation with positive values such as a spectrogram, one obtains a compressed version by applying the function $\Gamma_\gamma$ to each of the values:
\begin{equation} (\Gamma_\gamma\circ {\mathcal Y})(m,k):=\log(1+ \gamma \cdot {\mathcal Y}(m,k)). \end{equation}
Why is this operation called compression, and what is the role of the constant $\gamma$? The problem with representations such as a spectrogram is that their values possess a large dynamic range. As a result, small but still relevant values may be dominated by large values. The idea of compression is therefore to balance out this discrepancy by reducing the difference between large and small values, with the effect of enhancing the small values. This is exactly what the function $\Gamma_\gamma$ does, where the degree of compression can be adjusted by the constant $\gamma$: the larger $\gamma$, the stronger the resulting compression.
# your code here...
# your code here...
# your code here...
# your code here...
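A minimal sketch of logarithmic compression, assuming a spectrogram Y with non-negative values as computed in the sketch above:

def log_compression(v, gamma=1.0):
    # Element-wise logarithmic compression log(1 + gamma * v)
    return np.log(1 + gamma * np.asarray(v))

# Larger gamma leads to stronger compression of the large values
Y_compressed_weak = log_compression(Y, gamma=1)
Y_compressed_strong = log_compression(Y, gamma=100)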
We now derive some audio features from the STFT by converting the frequency axis (given in Hertz) into an axis that corresponds to musical pitches. In Western music, the equal-tempered scale is most often used, where the pitches of the scale correspond to the keys of a piano keyboard. In this scale, each octave (the interval between two tones whose fundamental frequencies differ by a factor of two) is split up into twelve logarithmically spaced units. In MIDI notation, one considers $128$ pitches, which are serially numbered starting with $0$ and ending with $127$. The MIDI pitch $p=69$ corresponds to the pitch $\mathrm{A4}$ (having a center frequency of $440~\mathrm{Hz}$), which is often used as standard for tuning musical instruments. In general, the center frequency $F_{\mathrm{pitch}}(p)$ of a pitch $p\in[0:127]$ is given by the formula
\begin{equation} F_{\mathrm{pitch}}(p) = 2^{(p-69)/12} \cdot 440. \end{equation}
The logarithmic perception of frequency motivates the use of a time-frequency representation with a logarithmic frequency axis labeled by the pitches of the equal-tempered scale. To derive such a representation from a given spectrogram representation, the basic idea is to assign each spectral coefficient ${\mathcal X}(m,k)$ to the pitch with center frequency that is closest to the frequency $F_{\mathrm{coef}}(k)$. More precisely, we define for each pitch $p\in[0:127]$ the set
\begin{equation} P(p) := \{k\in[0:K]:F_{\mathrm{pitch}}(p - 0.5) \leq F_{\mathrm{coef}}(k) < F_{\mathrm{pitch}}(p + 0.5)\}. \end{equation}
From this, we obtain a log-frequency spectrogram ${\mathcal Y}_\mathrm{LF}:{\mathbb Z}\times [0:127]\to{\mathbb R}_{\geq 0}$ defined by
\begin{equation} {\mathcal Y}_\mathrm{LF}(m,p) := \sum_{k \in P(p)}{|{\mathcal X}(m,k)|^2}. \end{equation}
By this definition, the frequency axis is partitioned logarithmically and labeled linearly according to MIDI pitches.
# your code here...
print('Fpitch(%d) = %.2f Hz' % (68, F_pitch(68)))
print('Fpitch(%d) = %.2f Hz' % (69, F_pitch(69)))
print('Fpitch(%d) = %.2f Hz' % (70, F_pitch(70)))
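A direct implementation of the center frequency formula could look as follows (a minimal sketch matching the calls above):

def F_pitch(p):
    # Center frequency (in Hz) of MIDI pitch p in the equal-tempered scale (A4 = 440 Hz)
    return 2 ** ((p - 69) / 12) * 440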
# your code here...
# your code here...
Fs, N = 22050, 4096
print('P(%d) = %s' % (69, P(69, Fs, N)))
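One way to compute the set $P(p)$ is to evaluate $F_{\mathrm{coef}}(k)$ for all $k\in[0:K]$ and keep those indices that fall into the frequency band of pitch $p$ (a sketch matching the call signature above, reusing F_pitch):

def P(p, Fs, N):
    # Indices k in [0:K] whose center frequency F_coef(k) lies in the band of pitch p
    k = np.arange(N // 2 + 1)
    freqs = k * Fs / N
    mask = (freqs >= F_pitch(p - 0.5)) & (freqs < F_pitch(p + 0.5))
    return k[mask]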
# your code here...
# your code here...
# your code here...
Audio(x, rate=Fs)
# your code here...
# your code here...
# your code here...
# your code here...
# your code here...
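The pooling step can be sketched as follows, assuming a spectrogram Y of shape (K+1, number of frames) and the helper functions F_pitch and P from above (the function name compute_Y_LF is only illustrative):

def compute_Y_LF(Y, Fs, N):
    # Sum all spectrogram bins whose center frequency belongs to the band of pitch p
    Y_LF = np.zeros((128, Y.shape[1]))
    for p in range(128):
        k = P(p, Fs, N)
        if len(k) > 0:
            Y_LF[p, :] = Y[k, :].sum(axis=0)
    return Y_LF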
The human perception of pitch is periodic in the sense that two pitches are perceived as similar in color (playing a similar harmonic role) if they differ by one or several octaves (where, in our scale, an octave is defined as the distance of $12$ pitches). For example, the pitches $p=60$ and $p=72$ are one octave apart, and the pitches $p=57$ and $p=81$ are two octaves apart. A pitch can be separated into two components, which are referred to as tone height and chroma. The tone height refers to the octave number and the chroma to the respective pitch spelling attribute. In Western music notation, the $12$ pitch attributes are given by the set $\{\mathrm{C},\mathrm{C}^{\sharp},\mathrm{D},\ldots,\mathrm{B}\}$. Enumerating the chroma values, we identify this set with $[0:11]$, where $c=0$ refers to chroma $\mathrm{C}$, $c=1$ to $\mathrm{C}^{\sharp}$, and so on. A pitch class is defined as the set of all pitches that share the same chroma. For example, the pitch class that corresponds to the chroma $c=0$ ($\mathrm{C}$) consists of the set $\{0,12,24,36,48,60,72,84,96,108,120\}$ (which contains the musical notes $\{\ldots,\mathrm{C0},\mathrm{C1},\mathrm{C2},\mathrm{C3},\ldots\}$).
The main idea of chroma features is to aggregate all spectral information that relates to a given pitch class into a single coefficient. Given a pitch-based log-frequency spectrogram ${\mathcal Y}_\mathrm{LF}:{\mathbb Z}\times[0:127]\to {\mathbb R}_{\geq 0}$, a chroma representation or chromagram ${\mathcal C}:{\mathbb Z}\times[0:11]\to {\mathbb R}_{\geq 0}$ can be derived by summing up all pitch coefficients that belong to the same chroma:
\begin{equation} {\mathcal C}(m,c) := \sum_{\{p \in [0:127]\,|\,p\,\mathrm{mod}\,12 = c\}}{{\mathcal Y}_\mathrm{LF}(m,p)} \end{equation} for $c\in[0:11]$.
# your code here...
# your code here...
# your code here...
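A chromagram can then be obtained by folding the 128 pitch bands of the log-frequency spectrogram onto the 12 chroma values (again a minimal sketch; the function name compute_chromagram is only illustrative):

def compute_chromagram(Y_LF):
    # Sum all pitch bands p that share the same chroma c = p mod 12
    C = np.zeros((12, Y_LF.shape[1]))
    for p in range(128):
        C[p % 12, :] += Y_LF[p, :]
    return C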
Acknowledgment: The International Audio Laboratories Erlangen are a joint institution of the Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU) and Fraunhofer Institute for Integrated Circuits IIS.