If you want to take this lab course, please register before the 15th of October via StudOn. For an overview of this course, click here. For questions, please contact Frank Zalkow.
The lab consists of:
Note: Attendance is mandatory for all meetings and labs.
All of the following lab sessions take place in Room 3R3.06 (P1, LIKE):
Lab 1: Short-Time Fourier Transform and Chroma Features
09.11.2017: 14:15–18:15
Lab 2: Speech Analysis
23.11.2017: 14:15–18:15
Lab 3: Statistical Methods for Audio Experiments
14.12.2017: 14:15–18:15
Lab 4: Speech Enhancement Using Microphone Arrays
18.01.2018: 14:15–18:15
Lab 5: Pitch Estimation and Harmonic to Noise Ratio Estimation
01.02.2018: 14:15–18:15
The objective of this lab course is to give students hands-on experience in audio processing. The lab is organised as follows:
The lab courses will be held weekly for each group and will be supervised by members of the AudioLabs team.
Students are required to have solid MATLAB skills. This document gives a small introduction to MATLAB. Rather than being comprehensive, it only introduces some basic functions that are needed in the subsequent lab courses.
For CME students it is required to have passed the MATLAB Preparation Programming Course Exam.
The Fourier transform, which converts a time-dependent signal into a frequency-dependent signal, is one of the most important mathematical tools in audio signal processing. Applying the Fourier transform to local sections of an audio signal yields the short-time Fourier transform (STFT). In this lab course, we study a discrete version of the STFT. To work with the discrete STFT in practice, one needs to correctly interpret the discrete time and frequency parameters. Using MATLAB, we compute a discrete STFT and visualize its magnitude in the form of a spectrogram representation. Then, we derive from the STFT various audio features that are useful for analyzing music signals. In particular, we develop a log-frequency spectrogram, where the frequency axis is converted into an axis corresponding to musical pitches. From this, we derive a chroma representation, which is a useful tool for capturing harmonic information of music.
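The pipeline described above (STFT, spectrogram, pitch-based frequency binning, chroma folding) can be sketched as follows. The lab itself uses MATLAB; this is only an illustrative Python/NumPy sketch, and all parameter choices (window length, hop size, test tone) are assumptions, not the lab's reference values.

```python
import numpy as np

def stft(x, N=1024, H=512):
    """Discrete STFT with a Hann window: frames of length N, hop size H."""
    w = np.hanning(N)
    frames = [x[m*H:m*H+N] * w for m in range((len(x) - N)//H + 1)]
    return np.fft.rfft(frames, axis=1).T   # shape: (N//2 + 1, num_frames)

Fs = 22050
t = np.arange(Fs) / Fs
x = np.sin(2*np.pi*440*t)                  # 1 s test tone: A4 (440 Hz)
S = np.abs(stft(x))**2                     # magnitude spectrogram (power)

# Map each frequency bin to the nearest MIDI pitch (log-frequency axis),
# then fold all pitches into the 12 chroma classes C, C#, ..., B.
freqs = np.fft.rfftfreq(1024, 1/Fs)
midi = np.round(69 + 12*np.log2(np.maximum(freqs, 1e-6)/440)).astype(int)
chroma = np.zeros((12, S.shape[1]))
for k, p in enumerate(midi):
    if 0 <= p <= 127:
        chroma[p % 12] += S[k]

print(np.argmax(chroma.mean(axis=1)))      # strongest chroma class: 9 = A
```

The chroma folding (`p % 12`) discards octave information, which is what makes the representation robust for harmonic analysis.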
This experiment is designed to give you a brief overview of the physiology of the production of speech. Moreover, it will give a descriptive introduction to the tools of speech coding, their functionality and their strengths but also their shortcomings.
This course intends to teach students the basics of experimental statistics as used for evaluating auditory experiments. Listening tests are a crucial part of assessing the quality of audio systems: there is currently no system that allows researchers and developers to evaluate the quality of audio systems fully objectively, so the best available evaluation instrument remains the human ear. Since only fair and unbiased comparisons between codecs can show that a new development is actually preferred over the previous system, it is important to bring fundamental knowledge of statistics into the evaluation process and to address the main problems of experimental tests, such as uncontrolled environments, subpar headphone or loudspeaker reproduction systems, listeners with no experience in listening tests, and so on.
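A typical statistical question in this setting is whether listeners rate a new codec significantly higher than the previous one. As a minimal sketch (the lab itself uses R), here is a paired t-test on invented ratings; all numbers are hypothetical and only illustrate the computation.

```python
import numpy as np

# Hypothetical quality ratings (0-100 scale) from 8 listeners, each rating
# both the previous codec A and the new codec B. Purely made-up numbers.
a = np.array([62, 70, 58, 65, 74, 60, 68, 63], dtype=float)
b = np.array([71, 75, 66, 70, 80, 69, 74, 72], dtype=float)

d = b - a                                    # paired differences per listener
n = len(d)
t = d.mean() / (d.std(ddof=1) / np.sqrt(n))  # paired t statistic, df = n - 1

print(round(t, 2))  # large |t| -> the preference for B is unlikely to be chance
```

Pairing the ratings per listener removes the between-listener variability that would otherwise mask the codec difference; in R the same test is `t.test(b, a, paired = TRUE)`.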
This module is designed to give the students a practical understanding of performing speech enhancement using microphone arrays and to demonstrate the differences between techniques. This module is closely related to the lecture Speech Enhancement given by Prof. Dr. ir. Emanuël Habets. In this exercise, the students will implement a commonly used spatial signal processing technique known as beamforming, and analyse the performance of two different beamformers for a noise and interference reduction task: a fixed beamformer known as the delay-and-sum beamformer, and a signal-dependent beamformer known as the minimum variance distortionless response (MVDR) beamformer. Their performance will be compared via objective measures to demonstrate the advantages of signal-dependent beamformers.
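To make the delay-and-sum idea concrete, the following Python/NumPy sketch (the lab itself uses MATLAB) applies narrowband delay-and-sum weights for a uniform linear array. The array geometry, frequency, and angles are assumptions chosen for illustration, and the MVDR beamformer is not shown here.

```python
import numpy as np

# Hypothetical setup: M = 4 mics, 4 cm spacing, 1 kHz narrowband, c = 343 m/s.
M, d, f, c = 4, 0.04, 1000.0, 343.0

def steering_vector(theta):
    """Inter-mic phase shifts of a plane wave from angle theta (rad, broadside = 0)."""
    m = np.arange(M)
    return np.exp(-2j*np.pi*f*m*d*np.sin(theta)/c)

target = steering_vector(0.0)               # desired source at broadside
w = target / M                              # delay-and-sum weights (align, then average)
interferer = steering_vector(np.deg2rad(60))

gain_target = abs(np.vdot(w, target))       # distortionless: exactly 1
gain_interf = abs(np.vdot(w, interferer))   # off-axis source is attenuated
print(gain_target, gain_interf)
```

The weights compensate the propagation delays toward the look direction and average the mics, so the target passes with unit gain while sources from other directions add incoherently; an MVDR beamformer would instead choose the weights from the noise statistics while keeping the same distortionless constraint.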
When looking at audio signals, one possible signal model distinguishes between harmonic components and noise-like components. The harmonic components exhibit a periodic structure in time, and it is of interest to express this periodicity via the fundamental frequency F0, i.e. the frequency of the first sinusoidal component of the harmonic source. This fundamental frequency is closely related to the so-called pitch of the source. The pitch is defined as how "low" or "high" a harmonic or tone-like source is perceived. Although strictly speaking this is a perceptual property and is not necessarily equal to the fundamental frequency, it is often used as a synonym for the fundamental frequency; we will use the term pitch in this way in the remaining text. It is also of interest how the energy of an audio signal is distributed between its harmonic and noise-like components. One feature expressing this relationship is the Harmonic to Noise Ratio (HNR). Estimates of the pitch and the HNR can then be used, e.g., for efficiently coding the signal, or to generate a synthetic signal based on this and other information gained from analysing the signal. In this laboratory we will concentrate on a single audio source, and we will restrict ourselves to speech, which is the primary mode of human interaction. We will use these signals to develop simple estimators for both features and compare the results to state-of-the-art solutions for estimating the pitch and the HNR.
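A simple pitch estimator of the kind developed in this lab can be based on the autocorrelation function: a signal with period T has an autocorrelation peak at lag T. The following Python/NumPy sketch (the lab itself uses MATLAB) estimates F0 of a synthetic harmonic frame; the sampling rate, frame length, and 80–400 Hz search range are assumptions for illustration.

```python
import numpy as np

Fs = 8000
t = np.arange(0, 0.04, 1/Fs)                # one 40 ms "voiced" frame
f0 = 200.0                                  # true pitch of the synthetic frame
x = sum(np.sin(2*np.pi*k*f0*t)/k for k in (1, 2, 3))  # harmonic signal

# Autocorrelation for non-negative lags; the first peak inside the plausible
# pitch range marks the fundamental period.
r = np.correlate(x, x, mode='full')[len(x)-1:]
lo, hi = int(Fs/400), int(Fs/80)            # search lags for 80-400 Hz pitch
lag = lo + np.argmax(r[lo:hi])

print(Fs / lag)                             # estimated F0 in Hz: 200.0
```

An HNR estimate can be built on the same quantities, e.g. by comparing the autocorrelation value at the pitch lag (harmonic energy) with the remainder (noise energy), as done in Boersma-style estimators; that extension is left out of this sketch.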
Requirements are a solid mathematical background, a good understanding of the fundamentals of digital signal processing, and a general background and personal interest in audio. Furthermore, students are required to have experience with MATLAB. The Statistics Lab will use the R programming language (a beginners' tutorial is provided in the course material).