
Get Started with Audio Toolbox

Design and analyze speech, acoustic, and audio processing systems

Audio Toolbox™ provides tools for audio processing, speech analysis, and acoustic measurement. It includes algorithms for processing audio signals such as equalization and time stretching, estimating acoustic signal metrics such as loudness and sharpness, and extracting audio features such as MFCC and pitch. It also provides advanced machine learning models, including i-vectors, and pretrained deep learning networks, including VGGish and CREPE. Toolbox apps support live algorithm testing, impulse response measurement, and signal labeling. The toolbox provides streaming interfaces to ASIO™, CoreAudio, and other sound cards; MIDI devices; and tools for generating and hosting VST and Audio Units plugins.
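For example, a minimal sketch of feature extraction and loudness measurement (assuming Audio Toolbox is installed; the 220 Hz test tone stands in for recorded audio):

    % Extract MFCC, pitch, and integrated loudness from a short test signal.
    fs = 16e3;                              % sample rate in Hz
    t  = (0:1/fs:2-1/fs)';                  % two seconds of samples
    x  = 0.1*sin(2*pi*220*t);               % 220 Hz test tone standing in for recorded audio

    coeffs   = mfcc(x,fs);                  % mel-frequency cepstral coefficients, one row per frame
    f0       = pitch(x,fs);                 % fundamental frequency estimate per frame
    loudness = integratedLoudness(x,fs);    % integrated loudness in LUFS

    fprintf("Frames: %d, median pitch: %.1f Hz, loudness: %.1f LUFS\n", ...
        size(coeffs,1), median(f0), loudness);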

With Audio Toolbox you can import, label, and augment audio data sets, and extract features to train machine learning and deep learning models. The pretrained models included with the toolbox can be applied directly to audio recordings for high-level semantic analysis.
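A minimal sketch of that workflow, assuming a hypothetical folder "mySpeechData" of WAV files organized so that folder names serve as labels:

    % Build a labeled datastore, augment one recording, and extract features.
    ads = audioDatastore("mySpeechData", ...
        "IncludeSubfolders",true,"LabelSource","foldernames");

    [x,info] = read(ads);                   % read one labeled recording
    fs = info.SampleRate;

    augmenter = audioDataAugmenter("NumAugmentations",2);   % default augmentation pipeline
    augmented = augment(augmenter,x,fs);    % table of augmented versions of x

    aFE = audioFeatureExtractor("SampleRate",fs, ...
        "mfcc",true,"pitch",true,"spectralCentroid",true);
    features = extract(aFE,x);              % feature matrix, one row per analysis frame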

You can prototype audio processing algorithms in real time or run custom acoustic measurements by streaming low-latency audio to and from sound cards. You can validate your algorithm by turning it into an audio plugin to run in external host applications such as Digital Audio Workstations. Plugin hosting lets you use external audio plugins as regular MATLAB® objects.
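A minimal sketch of a low-latency pass-through loop and plugin hosting, assuming default audio hardware and a hypothetical plugin file "myEQ.vst3":

    % Stream audio from the default input device to the default output device.
    deviceReader = audioDeviceReader;                       % microphone/line input
    deviceWriter = audioDeviceWriter("SampleRate",deviceReader.SampleRate);

    tic
    while toc < 10                                          % run for about 10 seconds
        frame = deviceReader();                             % acquire one frame
        deviceWriter(frame);                                % play it back (insert processing here)
    end
    release(deviceReader); release(deviceWriter);

    % Host an external plugin as a regular MATLAB object ("myEQ.vst3" is a hypothetical path):
    % hostedPlugin = loadAudioPlugin("myEQ.vst3");
    % processedFrame = process(hostedPlugin,frame);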

Installation and Configuration

Tutorials

About Audio Plugins

About Deep Learning and Machine Learning for Audio

Featured Examples

Videos

What Is Audio Toolbox?
Design and test audio processing systems with Audio Toolbox.

Introduction to Deep Learning for Audio and Speech Applications
Create or ingest data sets, extract features, and develop audio and speech analytics using Statistics and Machine Learning Toolbox™, Deep Learning Toolbox™, or other machine learning tools.