
# DDSP: Differentiable Digital Signal Processing

[Tutorials](#tutorials) | [Installation](#installation)

DDSP is a library of differentiable versions of common DSP functions (such as synthesizers, waveshapers, and filters). This allows these interpretable elements to be used as part of a deep learning model, especially as the output layers for audio generation.

## Getting Started

First, follow the steps in the [Installation](#installation) section to install the DDSP package and its dependencies. DDSP modules can be used to generate and manipulate audio from neural network outputs, as in this simple example:

```python
import ddsp

# Get synthesizer parameters from a neural network.
outputs = network(inputs)

# Initialize signal processors.
additive = ddsp.synths.Additive()

# Generate audio from the additive synthesizer.
audio = additive(outputs['amplitudes'],
                 outputs['harmonic_distribution'],
                 outputs['f0_hz'])
```

* [Read the original paper](https://openreview.net/forum?id=B1x1ma4tDr) 📄
* Listen to some examples 🔈

## Tutorials

The best place to start is the step-by-step tutorials for all the major library components, which can be found in `colabs/tutorials`:

* `0_processor`: Introduction to the Processor class.
* `1_synths_and_effects`: Example usage of processors.
* `2_processor_group`: Stringing processors together in a `ProcessorGroup`.
* `3_training`: Example of training on a single sound.
* `4_core_functions`: Extensive examples for most of the core DDSP functions.

## Modules

The DDSP library code is separated into several modules:

* **Core**: All the core differentiable DSP functions.
* **Processors**: Base classes for `Processor` and `ProcessorGroup`.
* **Synths**: Processors that generate audio from network outputs.
* **Effects**: Processors that transform audio according to network outputs.
* **Losses**: Loss functions relevant to DDSP applications.
* **Spectral Ops**: Helper library of Fourier and related transforms.
* **Pretrained Models**: Helper library of models for perceptual loss functions.
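For orientation, the sketch below instantiates one object from several of these modules, using the same classes that appear in the examples later in this README (constructing them with default arguments is assumed to be reasonable here):

```python
import ddsp

# Synths: processors that generate audio from network outputs.
additive = ddsp.synths.Additive()
filtered_noise = ddsp.synths.FilteredNoise()

# Effects: processors that transform audio.
reverb = ddsp.effects.TrainableReverb()

# Losses: training objectives such as multi-scale spectrogram loss.
spectral_loss = ddsp.losses.SpectralLoss()

# Processors: base classes and simple processors for building DAGs.
add = ddsp.processors.Add()

# Core differentiable DSP functions live under ddsp.core (e.g. ddsp.core.cumsum).
```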

## Processor

The `Processor` is the main object type and preferred API of the DDSP library. It inherits from `tfkl.Layer` and can be used like any other differentiable module.

Unlike other layers, Processors (such as Synthesizers and Effects) specifically format their `inputs` into `controls` that are physically meaningful. For instance, a synthesizer might need to remove frequencies above the Nyquist frequency to avoid aliasing, or ensure that its amplitudes are strictly positive. To this end, they have the methods:

* `get_controls()`: inputs -> controls.
* `get_signal()`: controls -> signal.
* `__call__()`: inputs -> signal. (i.e. `get_signal(**get_controls())`)

Where:

* `inputs` is a variable number of tensor arguments (depending on the processor), often the outputs of a neural network.
* `controls` is a dictionary of tensors scaled and constrained specifically for the processor.
* `signal` is an output tensor (usually audio or a control signal for another processor).

For example, here are some inputs to an `Additive` synthesizer:

[figure: raw amplitude, harmonic distribution, and f0 inputs]

And here are the resulting controls after logarithmically scaling amplitudes, removing harmonics above the Nyquist frequency, and normalizing the remaining harmonic distribution:

[figure: the scaled and constrained controls]

Notice that only the harmonics below the Nyquist frequency (8 kHz) are nonzero, and the harmonic distribution sums to 1.0 at all times.
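As a rough illustration of the inputs -> controls -> signal split described above, here is a minimal sketch that calls the two stages of an `Additive` synthesizer separately. The tensor shapes and the random stand-ins for network outputs are illustrative assumptions, not library requirements:

```python
import tensorflow as tf
import ddsp

# Stand-ins for neural network outputs (shapes assumed for illustration):
# [batch, n_frames, 1] amplitudes, [batch, n_frames, n_harmonics] distribution,
# [batch, n_frames, 1] fundamental frequency in Hz.
amplitudes = tf.random.normal([1, 250, 1])
harmonic_distribution = tf.random.normal([1, 250, 60])
f0_hz = 440.0 * tf.ones([1, 250, 1])

additive = ddsp.synths.Additive()

# inputs -> controls: scales amplitudes, zeros harmonics above Nyquist,
# and normalizes the harmonic distribution.
controls = additive.get_controls(amplitudes, harmonic_distribution, f0_hz)

# controls -> signal: renders audio from the constrained controls.
audio = additive.get_signal(**controls)

# Equivalent one-step call (inputs -> signal):
# audio = additive(amplitudes, harmonic_distribution, f0_hz)
```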

## ProcessorGroup

Consider the situation where you want to string together a group of Processors. Since Processors are just instances of `tfkl.Layer`, you could use python control flow, as you would with any other differentiable modules.

In the example below, we have an audio autoencoder that uses a differentiable harmonic + noise synthesizer with reverb to generate audio for a multi-scale spectrogram reconstruction loss.

```python
import ddsp

# Get synthesizer parameters from the input audio.
outputs = network(audio_input)

# Initialize signal processors.
additive = ddsp.synths.Additive()
filtered_noise = ddsp.synths.FilteredNoise()
reverb = ddsp.effects.TrainableReverb()
spectral_loss = ddsp.losses.SpectralLoss()

# Generate audio.
audio_additive = additive(outputs['amplitudes'],
                          outputs['harmonic_distribution'],
                          outputs['f0_hz'])
audio_noise = filtered_noise(outputs['magnitudes'])
audio = audio_additive + audio_noise
audio = reverb(audio)

# Multi-scale spectrogram reconstruction loss.
loss = spectral_loss(audio, audio_input)
```

### ProcessorGroup (with a list)

A `ProcessorGroup` specifies a Directed Acyclic Graph (DAG) of processors. The main advantage of using a `ProcessorGroup` is that the entire signal processing chain can be specified in a `.gin` file, removing the need to write python code for every different configuration of processors.

You can specify the DAG as a list of tuples `dag = [(processor, ['input1', 'input2', ...]), ...]`, where `processor` is a `Processor` instance and `['input1', 'input2', ...]` is a list of strings specifying its input arguments. The output signal of each processor can be referenced as an input by the string `'processor_name/signal'`, where `processor_name` is the name given to the processor at construction. The `ProcessorGroup` takes a dictionary of inputs, whose keys can be referenced in the DAG.

```python
import ddsp
import gin

# Get synthesizer parameters from the input audio.
outputs = network(audio_input)

# Initialize signal processors.
additive = ddsp.synths.Additive()
filtered_noise = ddsp.synths.FilteredNoise()
add = ddsp.processors.Add()
reverb = ddsp.effects.TrainableReverb()
spectral_loss = ddsp.losses.SpectralLoss()

# Processor group DAG.
dag = [
    (additive, ['amps', 'harmonic_distribution', 'f0_hz']),
    (filtered_noise, ['magnitudes']),
    (add, ['additive/signal', 'filtered_noise/signal']),
    (reverb, ['add/signal']),
]
processor_group = ddsp.processors.ProcessorGroup(dag=dag)

# Generate audio.
audio = processor_group(outputs)

# Multi-scale spectrogram reconstruction loss.
loss = spectral_loss(audio, audio_input)
```

### ProcessorGroup (with gin)

The main advantage of a `ProcessorGroup` is that it can be defined with a `.gin` file, allowing flexible configurations without having to write new python code for every new DAG.

In the example below, we pretend we have written an external `.gin` file, which we treat here as a string. After parsing the gin file, the `ProcessorGroup` will have its arguments configured on construction.

```python
import ddsp
import gin

gin_config = """
import ddsp

processors.ProcessorGroup.dag = [
  (@ddsp.synths.Additive(),
    ['amplitudes', 'harmonic_distribution', 'f0_hz']),
  (@ddsp.synths.FilteredNoise(),
    ['magnitudes']),
  (@ddsp.processors.Add(),
    ['filtered_noise/signal', 'additive/signal']),
  (@ddsp.effects.TrainableReverb(),
    ['add/signal'])
]
"""

with gin.unlock_config():
  gin.parse_config(gin_config)

# Get synthesizer parameters from the input audio.
outputs = network(audio_input)

# Initialize signal processors; arguments are configured by gin.
processor_group = ddsp.processors.ProcessorGroup()

# Generate audio.
audio = processor_group(outputs)

# Multi-scale spectrogram reconstruction loss.
loss = spectral_loss(audio, audio_input)
```

### A note on gin

`@gin.configurable` makes a function globally configurable, such that anywhere the function or object is called, gin sets its default arguments / constructor values. This can lead to a lot of unintended side-effects.

`@gin.register` registers a function or object with gin, and only sets the default argument values when the function or object itself is used as an argument to another function.

To "use gin responsibly", we wrap most functions with `@gin.register` so that they can be specified as arguments of more "global" `@gin.configurable` functions / objects, such as `ProcessorGroup` in the main library, and `Model`, `train()`, `evaluate()`, and `sample()` in `ddsp/training`. As you can see in the code, this allows us to flexibly define hyperparameters of most functions without worrying about side-effects. One exception is `ddsp.core.cumsum`, where we configure special optimizations for TPU.
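For intuition, here is a minimal, hypothetical sketch of that convention. The function names below are made up for illustration and are not part of the DDSP API; only the gin decorators and `gin.parse_config` come from the gin library itself:

```python
import gin


# Registered: gin only binds this function's arguments when it is referenced
# from a config (e.g. passed as another configurable's argument); calling it
# directly in python is unaffected.
@gin.register
def scale_gain(audio, gain=1.0):
  return audio * gain


# Configurable: once a config is parsed, gin overrides these defaults
# anywhere train() is called, which is why it is reserved for "global" objects.
@gin.configurable
def train(preprocess_fn=None, learning_rate=1e-3):
  print(preprocess_fn, learning_rate)


gin.parse_config("""
train.learning_rate = 3e-4
train.preprocess_fn = @scale_gain
""")

train()  # Picks up learning_rate=3e-4 and preprocess_fn=scale_gain from gin.
```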

## Installation

Requires tensorflow version >= 2.1.0, but runs in either eager or graph mode.

```bash
sudo apt-get install libsndfile-dev
pip install --upgrade pip
pip install --upgrade ddsp
```
## Contributing

We're eager to collaborate with you! See [CONTRIBUTING.md](CONTRIBUTING.md) for a guide on how to contribute.
## Citation

If you use this code, please cite it as:
```
@inproceedings{
  engel2020ddsp,
  title={{DDSP}: Differentiable Digital Signal Processing},
  author={Jesse Engel and Lamtharn (Hanoi) Hantrakul and Chenjie Gu and Adam Roberts},
  booktitle={International Conference on Learning Representations},
  year={2020},
  url={https://openreview.net/forum?id=B1x1ma4tDr}
}
```
This is not an official Google product.