Music generation

Creating your own music generation engine

Overview

JJazzLab framework → RhythmProvider → Rhythm → MusicGenerator → phrases → post-processing

The framework discovers your RhythmProvider, which supplies Rhythm instances; each Rhythm provides a MusicGenerator that turns a song into musical phrases, which the framework finally post-processes.

RhythmProvider

Your engine must implement the RhythmProvider interface and register it with the @ServiceProvider annotation, so that the RhythmDatabase implementation can automatically find it at startup. This relies on the NetBeans Lookup mechanism.

Example: See RhythmStubProviderImpl.java for a simple RhythmProvider implementation example.
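
For orientation, here is a minimal sketch of the registration; the RhythmProvider import path is assumed from the JJazzLab source layout, and the interface body is elided (RhythmStubProviderImpl.java shows a complete implementation):

    import org.jjazz.rhythm.spi.RhythmProvider;
    import org.openide.util.lookup.ServiceProvider;

    // The annotation generates a META-INF/services registration, so the
    // NetBeans global Lookup can find and instantiate this class at startup.
    @ServiceProvider(service = RhythmProvider.class)
    public class MyRhythmProvider implements RhythmProvider
    {
        // ... implement the RhythmProvider methods here ...
    }

On the framework side, the RhythmDatabase implementation can then enumerate every registered engine with the standard Lookup call Lookup.getDefault().lookupAll(RhythmProvider.class).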

Rhythm, RhythmVoices and RhythmParameters

The engine provides Rhythm instances to the framework.

The Rhythm interface notably defines:

  • name, time signature, preferred tempo, feel, etc.

  • RhythmVoices: the supported tracks (Drums, Bass, Piano, etc.). Each RhythmVoice defines the recommended Midi instrument and settings for the track.

  • RhythmParameters: the "control knobs" given to the user to modulate music generation. Common rhythm parameters are Variation, Intensity, Mute, etc. You see them in the Song Structure editor, where their value can be set for each Song Part. The framework provides a default UI widget for each rhythm parameter type, but you can define your own UI widget if required (see the sketch after this list).
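
As a hedged sketch, the two corresponding Rhythm methods might look as follows. The method names follow the Rhythm interface as described above, but treat the exact signatures as assumptions; drumsVoice, bassVoice, variation and intensity are hypothetical placeholders (see RhythmStub.java for real instances):

    import java.util.List;
    // JJazzLab imports omitted: the exact package layout may differ.

    public class MyRhythm implements Rhythm
    {
        @Override
        public List<RhythmVoice> getRhythmVoices()
        {
            // One RhythmVoice per generated track (Drums, Bass, ...), each
            // carrying the recommended Midi instrument and settings.
            return List.of(drumsVoice, bassVoice);
        }

        @Override
        public List<RhythmParameter<?>> getRhythmParameters()
        {
            // The user-facing "control knobs" (Variation, Intensity, Mute, ...).
            return List.of(variation, intensity);
        }

        // ... name, time signature, preferred tempo, feel, etc. ...
    }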

Rhythm instances are actually provided via RhythmInfo instances, which the RhythmDatabase uses for its cache file; a RhythmInfo is what the user sees in the rhythm selection dialog.

MusicGenerator

A Rhythm instance should implement the MusicGeneratorProvider interface.

Actual music generation is performed by the provided MusicGenerator instance. It receives a song context (chord leadsheet, song structure, tempo, etc.) and returns musical phrases (one per instrument) that form the backing track.

Example: See RhythmStub.java for a simple Rhythm implementation example.

Example: See DummyGenerator.java for a simple MusicGenerator implementation example.
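
Putting it together, a generator skeleton could look like the sketch below. This is only a hedged illustration: the MusicGenerator method signature is assumed from the description above, JJazzLab imports are omitted, and buildPhraseFor() is a hypothetical helper; DummyGenerator.java shows the real contract.

    import java.util.HashMap;
    import java.util.Map;
    // JJazzLab imports omitted: the exact package layout may differ.

    public class MyGenerator implements MusicGenerator
    {
        @Override
        public Map<RhythmVoice, Phrase> generateMusic(SongContext context, RhythmVoice... rvs) throws MusicGenerationException
        {
            var res = new HashMap<RhythmVoice, Phrase>();
            for (RhythmVoice rv : rvs)
            {
                // Read the chord leadsheet, song structure, tempo, etc. from
                // the context and build the backing-track phrase for this voice.
                res.put(rv, buildPhraseFor(rv, context));   // buildPhraseFor() is hypothetical
            }
            return res;
        }
    }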

Post-processing

After generation, the framework applies engine-independent post-processing to the returned phrases:

  • muting instruments in a song part

  • per-channel velocity shift

  • volume fade-out

  • tempo factor changes

  • etc.

The details of post-processing tasks performed by the framework are provided in the MusicGenerator interface comments.
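
For intuition, a per-channel velocity shift boils down to a single pass over the generated notes. The framework operates on its own Phrase objects, so the snippet below is purely illustrative, written against the standard javax.sound.midi types:

    import javax.sound.midi.InvalidMidiDataException;
    import javax.sound.midi.ShortMessage;
    import javax.sound.midi.Track;

    public class VelocityShift
    {
        // Shifts the velocity of every Note On event on the given channel,
        // clamped to the valid Midi range 1..127.
        public static void apply(Track track, int channel, int shift) throws InvalidMidiDataException
        {
            for (int i = 0; i < track.size(); i++)
            {
                var msg = track.get(i).getMessage();
                if (msg instanceof ShortMessage sm
                        && sm.getCommand() == ShortMessage.NOTE_ON
                        && sm.getChannel() == channel
                        && sm.getData2() > 0)
                {
                    int v = Math.max(1, Math.min(127, sm.getData2() + shift));
                    sm.setMessage(ShortMessage.NOTE_ON, channel, sm.getData1(), v);
                }
            }
        }
    }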
