Lecture: Selected Topics in Deep Learning for Audio, Speech, and Music Processing (Summer Term 2021)



  • Instructors: Prof. Dr. ir. Emanuël Habets, Prof. Dr. Meinard Müller
  • Credits: 2.5 ECTS
  • Time (Lecture): Summer Term 2021, Monday, 16:00–17:45 (first lecture: 19.04.2021, via ZOOM)
    Link and access information for our ZOOM meetings can be found at StudOn (see below).
  • Exam (graded): Oral examination at the end of term
  • Dates (Lecture): Mon 19.04.2021, Mon 26.04.2021, Mon 03.05.2021, Mon 10.05.2021, Mon 17.05.2021, Mon 31.05.2021, Mon 07.06.2021, Mon 14.06.2021, Mon 21.06.2021, Mon 28.06.2021, Mon 05.07.2021, Mon 12.07.2021
  • Examination Dates (Room 3R4.03): To be announced
Important Notes:
  • Due to the COVID-19 pandemic, the lecture Selected Topics in Deep Learning for Audio, Speech, and Music Processing will be offered as a fully virtual course (via ZOOM).
  • Participation in the ZOOM session is only possible for FAU students. The ZOOM access information for this course will be made available via StudOn. Therefore, you must register via StudOn prior to the first lecture.
  • This course will be based on articles from the research literature. Students are strongly advised to prepare for each lecture by reading these articles. The lecture time will be used for an introduction to the respective problem, for deepening important technical aspects, and for a question-and-answer dialogue with the participants.
  • As a technical requirement, all participants must have access to a computer capable of running the ZOOM video conferencing software (as provided by FAU), including audio and video transmission as well as screen sharing.
  • To ensure privacy, the ZOOM sessions will not be recorded. Also, participants are not permitted to record the ZOOM sessions. Furthermore, ZOOM links may not be distributed.

Content

Many recent advances in audio, speech, and music processing have been driven by techniques based on deep learning (DL). For example, DL-based techniques have led to significant improvements in speaker separation, speech synthesis, acoustic scene analysis, audio retrieval, chord recognition, melody estimation, and beat tracking. Considering specific audio, speech, and music processing tasks, we study various DL-based approaches and their capability to extract complex features and make predictions based on hidden structures and relations. Rather than giving a comprehensive overview, we will study selected and generally applicable DL-based techniques. Furthermore, in the context of challenging application scenarios, we will critically review the potential and limitations of recent deep learning techniques. As one main objective of the lecture, we want to discuss how domain knowledge can be integrated into neural network architectures to obtain explainable models that are less vulnerable to data biases and confounding factors.

The course consists of two overview-like lectures, in which we introduce current research problems in audio, speech, and music processing. We will then continue with 6 to 8 lectures on selected audio processing topics and DL-based techniques. Since these lectures are based on articles from the research literature, we will provide detailed explanations in mathematical depth; we may also try to attract some of the original authors to serve as guest lecturers. Finally, we round off the course with a concluding lecture covering practical aspects (e.g., hardware, software, version control, reproducibility, datasets) that are relevant when working with DL-based techniques.

Course Requirements

In this course, we require a good knowledge of deep learning techniques, machine learning, and pattern recognition as well as a strong mathematical background. Furthermore, we require a solid background in general digital signal processing and some experience with audio, image, or video processing.

It is recommended to have completed the following modules (or to have equivalent knowledge) before starting this module:

Links

Lecture: Topics, Material, Instructions

The course consists of two overview-like lectures, in which we introduce current research problems in audio, speech, and music processing. We will then continue with 6 to 8 lectures which are based on articles from the research literature. The lecture material includes handouts of slides, links to the original articles, and possibly links to demonstrators and further online resources. In the following list, you find links to the material. If you have any questions regarding the lecture, please contact Prof. Dr. ir. Emanuël Habets and Prof. Dr. Meinard Müller.

The following tentative schedule gives an overview:

Lecture 1: Introduction to Audio and Speech Processing

  • Date: Monday, 19.04.2021, Start: 16:00 (ZOOM opens 15:50)
  • Lecturer: Emanuël Habets
  • Slides: Available at StudOn for registered students

Lecture 2: Introduction to Music Processing

  • Date: Monday, 26.04.2021, Start: 16:00 (ZOOM opens 15:50)
  • Lecturer: Meinard Müller
  • Slides (PDF), Handouts (6 slides per page) (PDF)
  1. Meinard Müller
    Fundamentals of Music Processing
    Springer Verlag, 2015.
    @book{Mueller15_FMP_SPRINGER,
    author    = {Meinard M{\"u}ller},
    title     = {Fundamentals of Music Processing},
    type      = {Monograph},
    year      = {2015},
    publisher = {Springer Verlag}
    }
  2. Meinard Müller and Frank Zalkow
    FMP Notebooks: Educational Material for Teaching and Learning Fundamentals of Music Processing
    In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR): 573–580, 2019. DOI
    @inproceedings{MuellerZ19_FMP_ISMIR,
    author    = {Meinard M{\"u}ller and Frank Zalkow},
    title     = {{FMP} {N}otebooks: {E}ducational Material for Teaching and Learning Fundamentals of Music Processing},
    booktitle = {Proceedings of the International Society for Music Information Retrieval Conference ({ISMIR})},
    address   = {Delft, The Netherlands},
    pages     = {573--580},
    year      = {2019},
    doi       = {10.5281/zenodo.3527872}
    }
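
The FMP textbook and notebooks listed above introduce, among other things, chroma (pitch-class) representations of music recordings. As a minimal, self-contained sketch, the following Python code computes a chromagram with librosa (which the FMP notebooks also build on); the synthesized test tone and the STFT parameters are illustrative assumptions:

    import librosa
    import numpy as np

    # Synthesize a 2-second A4 tone (440 Hz) so the example needs no audio file.
    sr = 22050
    y = librosa.tone(440, sr=sr, duration=2.0)

    # 12-dimensional chroma features (one bin per pitch class) from the STFT.
    C = librosa.feature.chroma_stft(y=y, sr=sr, n_fft=4096, hop_length=2048)
    print(C.shape)                       # (12, number_of_frames)
    print(np.argmax(C.mean(axis=1)))     # 9, i.e., pitch class A dominates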

Lecture 3: Permutation Invariant Training Techniques for Speech Separation

  1. John R. Hershey, Zhuo Chen, Jonathan Le Roux, and Shinji Watanabe
    Deep clustering: Discriminative embeddings for segmentation and separation
    In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP): 31–35, 2016. DOI
    @inproceedings{HersheyCRW16_DeepClustering_ICASSP,
    author    = {John R. Hershey and Zhuo Chen and Jonathan Le Roux and Shinji Watanabe},
    title     = {Deep clustering: Discriminative embeddings for segmentation and separation},
    booktitle = {Proceedings of the {IEEE} International Conference on Acoustics, Speech, and Signal Processing ({ICASSP})},
    pages     = {31--35},
    year      = {2016},
    doi       = {10.1109/ICASSP.2016.7471631}
    }
  2. Zhuo Chen, Yi Luo, and Nima Mesgarani
    Deep attractor network for single-microphone speaker separation
    In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP): 246–250, 2017. DOI
    @inproceedings{ChenLM17_DeepAttractor,
    author    = {Zhuo Chen and Yi Luo and Nima Mesgarani},
    title     = {Deep attractor network for single-microphone speaker separation},
    booktitle = {Proceedings of the {IEEE} International Conference on Acoustics, Speech, and Signal Processing ({ICASSP})},
    pages     = {246--250},
    year      = {2017},
    doi       = {10.1109/ICASSP.2017.7952155}
    }
  3. Morten Kolbaek, Dong Yu, Zheng-Hua Tan, and Jesper Jensen
    Multitalker Speech Separation With Utterance-Level Permutation Invariant Training of Deep Recurrent Neural Networks
    IEEE/ACM Transactions on Audio, Speech, and Language Processing, 25(10): 1901–1913, 2017. DOI
    @article{KolbaekYTJ17_SpeechSep_TASLP,
    author    = {Morten Kolbaek and Dong Yu and Zheng-Hua Tan and Jesper Jensen},
    title     = {Multitalker Speech Separation With Utterance-Level Permutation Invariant Training of Deep Recurrent Neural Networks},
    journal   = {{IEEE/ACM} Transactions on Audio, Speech, and Language Processing},
    volume    = {25},
    number    = {10},
    pages     = {1901--1913},
    year      = {2017},
    url       = {https://doi.org/10.1109/TASLP.2017.2726762},
    doi       = {10.1109/TASLP.2017.2726762}
    }
  4. Ilya Kavalerov, Scott Wisdom, Hakan Erdogan, Brian Patton, Kevin W. Wilson, Jonathan Le Roux, and John R. Hershey
    Universal Sound Separation
    In Proceedings of the IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA): 175–179, 2019. DOI
    @inproceedings{KavalerovWEPWRH19_UniversalSoundSep_WASPAA,
    author    = {Ilya Kavalerov and Scott Wisdom and Hakan Erdogan and Brian Patton and Kevin W. Wilson and Jonathan Le Roux and John R. Hershey},
    title     = {Universal Sound Separation},
    booktitle = {Proceedings of the {IEEE} Workshop on Applications of Signal Processing to Audio and Acoustics ({WASPAA})},
    pages     = {175--179},
    year      = {2019},
    url       = {https://doi.org/10.1109/WASPAA.2019.8937253},
    doi       = {10.1109/WASPAA.2019.8937253}
    }
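
The readings above address training separation networks when the assignment of network outputs to speakers is ambiguous. The following sketch illustrates the core idea of an utterance-level permutation invariant training (uPIT) loss in the spirit of Kolbaek et al. (2017); the tensor shapes and the plain MSE criterion are illustrative assumptions, not code from the paper:

    from itertools import permutations
    import torch

    def upit_loss(estimates, targets):
        """estimates, targets: (batch, n_src, time) signals or masks."""
        n_src = targets.shape[1]
        losses = []
        for perm in permutations(range(n_src)):
            # MSE between estimates and the permuted targets, per utterance
            perm_targets = targets[:, list(perm), :]
            losses.append(((estimates - perm_targets) ** 2).mean(dim=(1, 2)))
        # for each utterance, keep the permutation with the smallest error
        return torch.stack(losses, dim=0).min(dim=0).values.mean()

    est = torch.randn(4, 2, 16000)   # two estimated sources per mixture
    ref = torch.randn(4, 2, 16000)   # two reference sources per mixture
    print(upit_loss(est, ref))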

Lecture 4: Deep Clustering for Single-Channel Ego-Noise Suppression

  • Date: Monday, 10.05.2021, Start: 16:00 (ZOOM opens 15:50)
  • Lecturer: Annika Briegleb
  1. Annika Briegleb, Alexander Schmidt, and Walter Kellermann
    Deep Clustering for Single-Channel Ego-Noise Suppression
    In Proceedings of the International Congress on Acoustics (ICA): 2813–2820, 2019. PDF DOI
    @inproceedings{BrieglebSK19_EgoNoise_ICA,
    author    = {Annika Briegleb and Alexander Schmidt and Walter Kellermann},
    title     = {Deep Clustering for Single-Channel Ego-Noise Suppression},
    booktitle = {Proceedings of the International Congress on Acoustics ({ICA})},
    pages     = {2813--2820},
    year      = {2019},
    doi       = {10.18154/RWTH-CONV-239374},
    url-pdf    = {https://pub.dega-akustik.de/ICA2019/data/articles/000705.pdf}
    }
  2. John R. Hershey, Zhuo Chen, Jonathan Le Roux, and Shinji Watanabe
    Deep clustering: Discriminative embeddings for segmentation and separation
    In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP): 31–35, 2016. DOI
    @inproceedings{HersheyCRW16_DeepClustering_ICASSP,
    author    = {John R. Hershey and Zhuo Chen and Jonathan Le Roux and Shinji Watanabe},
    title     = {Deep clustering: Discriminative embeddings for segmentation and separation},
    booktitle = {Proceedings of the {IEEE} International Conference on Acoustics, Speech, and Signal Processing ({ICASSP})},
    pages     = {31--35},
    year      = {2016},
    doi       = {10.1109/ICASSP.2016.7471631}
    }
  3. Zhuo Chen, Yi Luo, and Nima Mesgarani
    Deep attractor network for single-microphone speaker separation
    In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP): 246–250, 2017. DOI
    @inproceedings{ChenLM17_DeepAttractor,
    author    = {Zhuo Chen and Yi Luo and Nima Mesgarani},
    title     = {Deep attractor network for single-microphone speaker separation},
    booktitle = {Proceedings of the {IEEE} International Conference on Acoustics, Speech, and Signal Processing ({ICASSP})},
    pages     = {246--250},
    year      = {2017},
    doi       = {10.1109/ICASSP.2017.7952155}
    }
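
As a complement to the readings, the following sketch evaluates the deep clustering objective of Hershey et al. (2016), i.e., the squared Frobenius norm ||VV^T - YY^T||_F^2 between the affinity matrices of learned time-frequency embeddings V and ideal source assignments Y, in the usual memory-efficient form; the shapes and random inputs are illustrative assumptions:

    import torch

    def deep_clustering_loss(V, Y):
        # V: (batch, T*F, D) unit-norm embeddings, Y: (batch, T*F, C) one-hot labels
        vv = torch.matmul(V.transpose(1, 2), V)   # (batch, D, D)
        vy = torch.matmul(V.transpose(1, 2), Y)   # (batch, D, C)
        yy = torch.matmul(Y.transpose(1, 2), Y)   # (batch, C, C)
        return (vv.pow(2).sum((1, 2)) - 2 * vy.pow(2).sum((1, 2))
                + yy.pow(2).sum((1, 2))).mean()

    V = torch.nn.functional.normalize(torch.randn(4, 100 * 129, 20), dim=-1)
    Y = torch.nn.functional.one_hot(torch.randint(0, 2, (4, 100 * 129)), 2).float()
    print(deep_clustering_loss(V, Y))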

Lecture 5: Music Source Separation

Lecture 6: Nonnegative Autoencoders with Applications to Music Audio Decomposition

  1. Paris Smaragdis and Shrikant Venkataramani
    A neural network alternative to non-negative audio models
    In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP): 86–90, 2017. DOI
    @inproceedings{SmaragdisV17_NMFAutoencoder_ICASSP,
    author    = {Paris Smaragdis and Shrikant Venkataramani},
    title     = {A neural network alternative to non-negative audio models},
    booktitle = {Proceedings of the {IEEE} International Conference on Acoustics, Speech, and Signal Processing ({ICASSP})},
    address   = {New Orleans, Louisiana, USA},
    pages     = {86--90},
    year      = {2017},
    url       = {https://doi.org/10.1109/ICASSP.2017.7952123},
    doi       = {10.1109/ICASSP.2017.7952123}
    }
  2. Sebastian Ewert and Mark B. Sandler
    Structured Dropout for Weak Label and Multi-Instance Learning and Its Application to Score-Informed Source Separation
    In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP): 2277–2281, 2017. DOI
    @inproceedings{EwertS17_StructuredDropout_ICASSP,
    author    = {Sebastian Ewert and Mark B. Sandler},
    title     = {Structured Dropout for Weak Label and Multi-Instance Learning and Its Application to Score-Informed Source Separation},
    booktitle = {Proceedings of the {IEEE} International Conference on Acoustics, Speech, and Signal Processing ({ICASSP})},
    address   = {New Orleans, Louisiana, USA},
    pages     = {2277--2281},
    year      = {2017},
    url       = {https://doi.org/10.1109/ICASSP.2017.7952562},
    doi       = {10.1109/ICASSP.2017.7952562}
    }
  3. Sebastian Ewert and Meinard Müller
    Using Score-Informed Constraints for NMF-based Source Separation
    In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP): 129–132, 2012. Details
    @inproceedings{EwertM12_ScoreInformedNMF_ICASSP,
    author    = {Sebastian Ewert and Meinard M{\"u}ller},
    title     = {Using Score-Informed Constraints for {NMF}-based Source Separation},
    booktitle = {Proceedings of the {IEEE} International Conference on Acoustics, Speech, and Signal Processing ({ICASSP})},
    address   = {Kyoto, Japan},
    year      = {2012},
    pages     = {129--132},
    month     = {March},
    url-details = {http://resources.mpi-inf.mpg.de/MIR/ICASSP2012-ScoreInformedNMF/}
    }
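
The first reading above recasts NMF as a neural network. The following sketch illustrates that idea with a small nonnegative autoencoder in which ReLU activations keep the latent activations and the reconstruction nonnegative, so that the decoder weights play the role of NMF templates; the layer sizes, the plain MSE objective, and the random input are illustrative assumptions:

    import torch
    import torch.nn as nn

    class NonnegAutoencoder(nn.Module):
        def __init__(self, n_bins=513, n_components=16):
            super().__init__()
            self.encoder = nn.Linear(n_bins, n_components)
            self.decoder = nn.Linear(n_components, n_bins, bias=False)

        def forward(self, X):                 # X: (frames, n_bins) magnitude spectra
            H = torch.relu(self.encoder(X))   # nonnegative activations
            return torch.relu(self.decoder(H))

    model = NonnegAutoencoder()
    X = torch.rand(200, 513)                  # stand-in for a magnitude spectrogram
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(10):                       # a few reconstruction steps
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(X), X)
        loss.backward()
        opt.step()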

Lecture 7: Attention in Sound Source Localization and Speaker Extraction

  1. Katerina Zmoliková, Marc Delcroix, Keisuke Kinoshita, Tsubasa Ochiai, Tomohiro Nakatani, Lukás Burget, and Jan Cernocky
    SpeakerBeam: Speaker Aware Neural Network for Target Speaker Extraction in Speech Mixtures
    IEEE Journal on Selected Topics in Signal Processing, 13(4): 800–814, 2019. DOI
    @article{ZmolikovaDKONBC19_SpeakerBeam_JSTSP,
    author    = {Katerina Zmolikov{\'{a}} and Marc Delcroix and Keisuke Kinoshita and Tsubasa Ochiai and Tomohiro Nakatani and Luk{\'{a}}s Burget and Jan Cernocky},
    title     = {{SpeakerBeam}: {S}peaker Aware Neural Network for Target Speaker Extraction
    in Speech Mixtures},
    journal   = {{IEEE} Journal on Selected Topics in Signal Processing},
    volume    = {13},
    number    = {4},
    pages     = {800--814},
    year      = {2019},
    url       = {https://doi.org/10.1109/JSTSP.2019.2922820},
    doi       = {10.1109/JSTSP.2019.2922820}
    }
  2. Wolfgang Mack, Mohamed Elminshawi, and Emanuël A. P. Habets
    Signal-Guided Source Separation
    2021. PDF
    @misc{Mack20_SignalGuidedSourceSep_arXiv,
    title={Signal-Guided Source Separation},
    author={Wolfgang Mack and Mohamed Elminshawi and Emanu{\"e}l A. P. Habets},
    year={2021},
    eprint={2011.04569v2},
    archivePrefix={arXiv},
    primaryClass={eess.AS},
    url-pdf = {http://128.84.4.27/pdf/2011.04569}
    }
  3. Tsubasa Ochiai, Marc Delcroix, Keisuke Kinoshita, Atsunori Ogawa, and Tomohiro Nakatani
    Multimodal SpeakerBeam: Single Channel Target Speech Extraction with Audio-Visual Speaker Clues
    In Proceedings of the Annual Conference of the International Speech Communication Association (Interspeech): 2718–2722, 2019. DOI
    @inproceedings{OchiaiDKON19_SpeakerBeam_Interspeech,
    author    = {Tsubasa Ochiai and Marc Delcroix and Keisuke Kinoshita and Atsunori Ogawa and Tomohiro Nakatani},
    title     = {Multimodal {SpeakerBeam}: {S}ingle Channel Target Speech Extraction with
    Audio-Visual Speaker Clues},
    booktitle = {Proceedings of the Annual Conference of the International Speech Communication Association (Interspeech)},
    pages     = {2718--2722},
    publisher = {{ISCA}},
    year      = {2019},
    url       = {https://doi.org/10.21437/Interspeech.2019-1513},
    doi       = {10.21437/Interspeech.2019-1513}
    }
  4. Ziteng Wang, Junfeng Li, and Yonghong Yan
    Target Speaker Localization Based on the Complex Watson Mixture Model and Time-Frequency Selection Neural Network
    Applied Sciences, 8(11), 2018. PDF
    @article{WangLY18_SpeakerLoc_AppliedSciences,
    author = {Ziteng Wang and Junfeng Li and Yonghong Yan},
    title = {Target Speaker Localization Based on the Complex {W}atson Mixture Model and Time-Frequency Selection Neural Network},
    journal = {Applied Sciences},
    volume = {8},
    year = {2018},
    number = {11},
    url-pdf = {https://www.mdpi.com/2076-3417/8/11/2326}
    }
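
A recurring idea in the readings above is to inform the extraction network about the target speaker, for example via an embedding computed from an enrollment utterance. The following sketch shows one simple form of such speaker-informed adaptation (multiplicative per-channel weights, loosely following the SpeakerBeam idea); all shapes and the sigmoid gating are illustrative assumptions, not the architectures from the papers:

    import torch
    import torch.nn as nn

    class SpeakerAdaptiveLayer(nn.Module):
        def __init__(self, hidden_dim=256, emb_dim=128):
            super().__init__()
            self.scale = nn.Linear(emb_dim, hidden_dim)

        def forward(self, hidden, spk_emb):
            # hidden: (batch, time, hidden_dim), spk_emb: (batch, emb_dim)
            weights = torch.sigmoid(self.scale(spk_emb)).unsqueeze(1)
            return hidden * weights           # element-wise speaker adaptation

    layer = SpeakerAdaptiveLayer()
    hidden = torch.randn(2, 100, 256)         # mixture features from an earlier layer
    spk_emb = torch.randn(2, 128)             # enrollment-derived speaker embedding
    print(layer(hidden, spk_emb).shape)       # torch.Size([2, 100, 256])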

Lecture 8: Recurrent and Generative Adversarial Network Architectures for Text-to-Speech

  1. Jonathan Shen, Ruoming Pang, Ron J. Weiss, Mike Schuster, Navdeep Jaitly, Zongheng Yang, Zhifeng Chen, Yu Zhang, Yuxuan Wang, RJ Skerry-Ryan, Rif A. Saurous, Yannis Agiomyrgiannakis, and Yonghui Wu
    Natural TTS Synthesis by Conditioning Wavenet on MEL Spectrogram Predictions
    In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP): 4779–4783, 2018. DOI
    @inproceedings{ShenPWSJYCZWRSA18_TTS_ICASSP,
    author    = {Jonathan Shen and Ruoming Pang and Ron J. Weiss and Mike Schuster and Navdeep Jaitly and Zongheng Yang and Zhifeng Chen and Yu Zhang and Yuxuan Wang and RJ Skerry-Ryan and Rif A. Saurous and Yannis Agiomyrgiannakis and Yonghui Wu},
    title     = {Natural {TTS} Synthesis by Conditioning Wavenet on {MEL} Spectrogram Predictions},
    booktitle = {Proceedings of the {IEEE} International Conference on Acoustics, Speech, and Signal Processing ({ICASSP})},
    pages     = {4779--4783},
    year      = {2018},
    url       = {https://doi.org/10.1109/ICASSP.2018.8461368},
    doi       = {10.1109/ICASSP.2018.8461368}
    }
  2. Yi Ren, Yangjun Ruan, Xu Tan, Tao Qin, Sheng Zhao, Zhou Zhao, and Tie-Yan Liu
    FastSpeech: Fast, Robust and Controllable Text to Speech
    In Proceedings of the Annual Conference on Neural Information Processing Systems: 3165–3174, 2019. PDF
    @inproceedings{RenRTQZZL19_FastSpeech_NeurIPS,
    author    = {Yi Ren and Yangjun Ruan and Xu Tan and Tao Qin and Sheng Zhao and Zhou Zhao and Tie-Yan Liu},
    title     = {{FastSpeech}: {F}ast, Robust and Controllable Text to Speech},
    booktitle = {Proceedings of the Annual Conference on Neural Information Processing Systems},
    pages     = {3165--3174},
    year      = {2019},
    url-pdf   = {https://proceedings.neurips.cc/paper/2019/file/f63f65b503e22cb970527f23c9ad7db1-Paper.pdf},
    }
  3. Ahmed Mustafa, Nicola Pia, and Guillaume Fuchs
    StyleMelGAN: An Efficient High-Fidelity Adversarial Vocoder with Temporal Adaptive Normalization
    CoRR, abs/2011.01557, 2021. PDF
    @article{MustafaPF21_StyleMelGAN_arXiv,
    author    = {Ahmed Mustafa and Nicola Pia and Guillaume Fuchs},
    title     = {{StyleMelGAN}: {A}n Efficient High-Fidelity Adversarial Vocoder with Temporal Adaptive Normalization},
    journal   = {CoRR},
    volume    = {abs/2011.01557},
    year      = {2021},
    url-pdf   = {https://arxiv.org/pdf/2011.01557.pdf}
    }
  4. Yuxuan Wang, R. J. Skerry-Ryan, Daisy Stanton, Yonghui Wu, Ron J. Weiss, Navdeep Jaitly, Zongheng Yang, Ying Xiao, Zhifeng Chen, Samy Bengio, Quoc V. Le, Yannis Agiomyrgiannakis, Rob Clark, and Rif A. Saurous
    Tacotron: Towards End-to-End Speech Synthesis
    In Proceedings of the Annual Conference of the International Speech Communication Association (Interspeech): 4006–4010, 2017.
    @inproceedings{WangSSWWJYXCBLA17_Tacotron_Interspeech,
    author    = {Yuxuan Wang and R. J. Skerry-Ryan and Daisy Stanton and Yonghui Wu and Ron J. Weiss and Navdeep Jaitly and Zongheng Yang and Ying Xiao and Zhifeng Chen and Samy Bengio and Quoc V. Le and Yannis Agiomyrgiannakis and Rob Clark and Rif A. Saurous},
    title     = {{Tacotron}: {T}owards End-to-End Speech Synthesis},
    booktitle = {Proceedings of the Annual Conference of the International Speech Communication Association (Interspeech)},
    pages     = {4006--4010},
    publisher = {{ISCA}},
    year      = {2017},
    url       = {http://www.isca-speech.org/archive/Interspeech\_2017/abstracts/1452.html}
    }
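
On the adversarial side of the readings above, GAN-based vocoders such as StyleMelGAN train a waveform generator against one or more discriminators. The following sketch shows hinge-type adversarial losses of the kind commonly used in this setting; the random scores stand in for real discriminator outputs and are illustrative assumptions:

    import torch

    def discriminator_hinge_loss(d_real, d_fake):
        # push scores on real audio above +1 and scores on generated audio below -1
        return torch.relu(1.0 - d_real).mean() + torch.relu(1.0 + d_fake).mean()

    def generator_hinge_loss(d_fake):
        # the generator tries to raise the discriminator's score on its output
        return (-d_fake).mean()

    d_real, d_fake = torch.randn(8), torch.randn(8)
    print(discriminator_hinge_loss(d_real, d_fake), generator_hinge_loss(d_fake))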

Lecture 9: Connectionist Temporal Classification (CTC) Loss with Applications to Theme-Based Music Retrieval

  1. Frank Zalkow and Meinard Müller
    Using Weakly Aligned Score—Audio Pairs to Train Deep Chroma Models for Cross-Modal Music Retrieval
    In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR): 184–191, 2020. DOI
    @inproceedings{ZalkowM20_WeaklyAlignedTrain_ISMIR,
    author    = {Frank Zalkow and Meinard M{\"u}ller},
    title     = {Using Weakly Aligned Score--Audio Pairs to Train Deep Chroma Models for Cross-Modal Music Retrieval},
    booktitle = {Proceedings of the International Society for Music Information Retrieval Conference ({ISMIR})},
    address   = {Montr{\'{e}}al, Canada},
    pages     = {184--191},
    year      = {2020},
    doi       = {10.5281/zenodo.4245400}
    }
  2. Daniel Stoller, Simon Durand, and Sebastian Ewert
    End-to-end Lyrics Alignment for Polyphonic Music Using an Audio-To-Character Recognition Model
    In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP): 181–185, 2019. DOI
    @inproceedings{StollerDE19_LyricsAlignment_ICASSP,
    author    = {Daniel Stoller and Simon Durand and Sebastian Ewert},
    title     = {End-to-end Lyrics Alignment for Polyphonic Music Using an Audio-To-Character Recognition Model},
    booktitle = {Proceedings of the {IEEE} International Conference on Acoustics, Speech, and Signal Processing ({ICASSP})},
    pages     = {181--185},
    address   = {Brighton, {UK}},
    year      = {2019},
    doi       = {10.1109/ICASSP.2019.8683470}
    }
  3. Alex Graves, Santiago Fernández, Faustino J. Gomez, and Jürgen Schmidhuber
    Connectionist Temporal Classification: Labelling Unsegmented Sequence Data with Recurrent Neural Networks
    In Proceedings of the International Conference on Machine Learning (ICML): 369–376, 2006. DOI
    @inproceedings{GravesFGS06_CTCLoss_ICML,
    author    = {Alex Graves and Santiago Fern{\'{a}}ndez and Faustino J. Gomez and J{\"{u}}rgen Schmidhuber},
    title     = {Connectionist Temporal Classification: {L}abelling Unsegmented Sequence Data with Recurrent Neural Networks},
    booktitle = {Proceedings of the International Conference on Machine Learning ({ICML})},
    pages     = {369--376},
    address   = {Pittsburgh, Pennsylvania, USA},
    year      = {2006},
    doi       = {10.1145/1143844.1143891}
    }
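
The CTC loss of Graves et al. (2006) aligns an unsegmented label sequence to a frame-wise network output by marginalizing over all valid alignments. The following sketch evaluates the loss with the implementation available in PyTorch; the dimensions, the number of classes, and the choice of blank index are illustrative assumptions:

    import torch
    import torch.nn as nn

    ctc = nn.CTCLoss(blank=0)                  # label 0 is reserved for the blank symbol
    T, N, C = 50, 4, 28                        # input frames, batch size, number of classes
    log_probs = torch.randn(T, N, C, requires_grad=True).log_softmax(dim=-1)
    targets = torch.randint(1, C, (N, 10))     # label sequences (no blanks)
    input_lengths = torch.full((N,), T, dtype=torch.long)
    target_lengths = torch.full((N,), 10, dtype=torch.long)
    loss = ctc(log_probs, targets, input_lengths, target_lengths)
    loss.backward()                            # gradients flow back to the network output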

Lecture 10: From Theory to Practice

  • Date: Monday, 28.06.2021, Start: 16:00 (ZOOM opens 15:50)
  • Lecturer: TBA