The KAudioSystem's main focus is providing a streamlined development workflow for implementing interactive audio behaviors. The system lets audio designers author gameplay-responsive audio with a combination of Blueprints and MetaSounds while keeping audio implementation separate from other game systems. It sits alongside Unreal Engine's built-in audio system and does not replace it. The key classes audio designers will use are AudioSuperComponent, MusicLayout, and MusicSegment, along with the game parameter data assets AudioGroupParameter, AudioSwitchParameter, and AudioFloatParameter, which cover global states and floats as well as local, per-actor switches and floats.
Intended workflow/conventions:
Game systems use static functions to interact with the KAudioSystem. They can also hold direct references when manual control is needed, but otherwise game parameters set on the actor that requests a sound are the main way game systems indirectly communicate with sounds.
AudioSuperComponents receive game parameters from the KAudioSystem and are the decision-makers that determine, based on those parameters, what audio behavior should occur. Once a decision is made in Blueprint, the AudioSuperComponent sends input data to a MetaSound. The class also exposes many settings that control how the sound is managed by the KAudioSystem. Inherit from this class to write your custom Blueprint code.
MetaSounds receive input and enact changes in response. They are responsible for the actual effect and changes in audio.
MusicLayouts alongside MusicSegments are the building blocks of the music system. Generally speaking, the intended workflow is that MusicLayouts are responsible for the management of horizontal interactive music structures and MusicSegments + MetaSounds are responsible for the management of vertical interactive music structures. MusicLayouts also receive game parameters from the KAudioSystem. Inherit from this class to write your custom Blueprint code.
MusicSegments have all the same functionality as AudioSuperComponents with extra data about BPM and time signature. Inherit from this class to write your custom Blueprint code or use the base class if no extra code is required.
AudioGroupParameter, AudioSwitchParameter, and AudioFloatParameter are data assets you create that are used when setting, getting, and using parameters within the KAudioSystem. Groups are used to classify sets of switches. A Group can only have one switch setting at a time. Note that there are no guards against mismatching groups and switches.
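As a conceptual illustration of the group/switch rule above, here is a minimal, hypothetical C++ sketch (not the actual plugin API; `SwitchStore`, `SetSwitch`, and `GetSwitch` are stand-in names) showing how a group holds at most one switch value at a time, with a newer switch simply replacing the old one:

```cpp
#include <map>
#include <string>

// Hypothetical sketch: per-actor switch storage illustrating that an
// AudioGroupParameter can only hold one AudioSwitchParameter value at a time.
struct SwitchStore {
    // Key: group name, value: the currently active switch for that group.
    std::map<std::string, std::string> ActiveSwitches;

    void SetSwitch(const std::string& Group, const std::string& Switch) {
        ActiveSwitches[Group] = Switch; // replaces any prior value for Group
    }

    std::string GetSwitch(const std::string& Group) const {
        auto It = ActiveSwitches.find(Group);
        return It != ActiveSwitches.end() ? It->second : "";
    }
};
```

Note that, mirroring the documentation, nothing in this sketch guards against passing a switch that does not belong to the given group.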
AudioEnvironments add KAudioSystem functionality on top of Unreal Engine's built-in Audio Volumes. Place these to set switch and float parameters on sounds that enter and exit them, as well as to set states and global floats, which trigger only when the AudioListener enters and exits.
NOTE: If you need to simply fire and forget sounds that don't have any additional functionality, you can still use Unreal Engine's built-in play sound functions. These sounds will ignore the KAudioSystem entirely and only be governed by the built-in audio system. This can provide a small optimization.
This class provides global static functions for other game systems to call to interact with the KAudioSystem.
The AudioManagerSubsystem is a singleton used to interact with the audio system and call global audio system commands. It handles the creation and management of everything needed to pass game parameters around the audio system. It can also be customized in C++ to deny play sound requests before a sound is instantiated, complementing Unreal Engine's built-in "Sound Concurrency" system, which limits sounds after instantiation.
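To illustrate the pre-instantiation denial idea, here is a hypothetical sketch (the `PlayRequest` fields and `ShouldAllowPlayRequest` name are illustrative, not the plugin's actual API): the subsystem inspects a request and can refuse it before any sound instance exists, unlike Sound Concurrency, which culls sounds after they are created.

```cpp
// Hypothetical sketch of pre-instantiation play-request filtering.
struct PlayRequest {
    float Priority = 0.0f;           // illustrative gameplay priority
    float DistanceToListener = 0.0f; // distance in Unreal units
};

struct AudioManagerSketch {
    float MaxAudibleDistance = 5000.0f;
    float MinPriority = 0.0f;

    // Override point: return false to deny the request outright,
    // so no sound instance is ever created for it.
    bool ShouldAllowPlayRequest(const PlayRequest& Request) const {
        if (Request.DistanceToListener > MaxAudibleDistance) return false;
        if (Request.Priority < MinPriority) return false;
        return true;
    }
};
```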
An AudioSuperComponent inherits from the built-in audio component class provided by Unreal Engine and extends the class to be able to receive game parameters from the audio system. These are the main building blocks of the audio system logic for audio implementors and they should be thought of as decision-makers that determine what to do when game parameter information is received. Once a decision is made within the Blueprint code the AudioSuperComponent should then pass parameter configurations to a MetaSound to handle the resulting audio behavior.
AudioSuperComponents can receive game parameters from a variety of sources:
They can receive game parameters set on the actor that requested their instantiation.
They can receive global game parameters set on the AudioManagerSubsystem.
They can receive game parameters triggered by overlaps with AudioEnvironments.
AudioSuperComponents provide several built-in, overrideable functions for receiving game parameters: OnSwitchParameterChanged(), OnFloatParameterChanged(), OnStateParameterChanged(), and OnGlobalFloatParameterChanged().
AudioSuperComponents also have functions that allow custom implementation logic when receiving commands from the audio system: PrePlay(), PostPlay(), CustomStop(), ManualNext(), ManualPrevious(), ManualReset(), and CustomTrigger(). PrePlay() and PostPlay() in particular can be used to make decisions about sound playback before actually playing the sound (e.g., get the surface type to select which footstep SFX to play before playing it).
NOTE: You cannot set inputs on a MetaSound before the MetaSound is actually playing. Therefore, send initialization parameters to the MetaSound in PostPlay(). In practice this does not appear to cause any timing issues.
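The hook ordering described above can be sketched as follows. This is a conceptual stand-in, not engine code: the class, the surface trace, and the logging are illustrative, but the call order (decide in PrePlay(), start playback, then push MetaSound inputs in PostPlay()) matches the workflow the section describes.

```cpp
#include <string>
#include <vector>

// Hypothetical sketch of the PrePlay()/PostPlay() hook order for a
// footstep-style sound: PrePlay() gathers gameplay data before playback,
// and PostPlay() sends initialization inputs, since a MetaSound only
// accepts inputs once it is playing.
struct FootstepComponentSketch {
    std::vector<std::string> Log; // records the call order for clarity
    std::string ChosenSurface;

    void PrePlay() {                  // decide what to play
        ChosenSurface = "Grass";      // stand-in for a surface trace
        Log.push_back("PrePlay");
    }
    void StartMetaSound() { Log.push_back("Play"); }
    void PostPlay() {                 // safe to set MetaSound inputs now
        Log.push_back("PostPlay:SetInput(Surface=" + ChosenSurface + ")");
    }

    void Play() { PrePlay(); StartMetaSound(); PostPlay(); }
};
```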
An AudioGroupParameter defines a set of AudioSwitchParameters. Only one AudioSwitchParameter value may be set per group per actor.
An AudioSwitchParameter defines a value for an AudioGroupParameter.
An AudioFloatParameter defines a numeric parameter and its range within the KAudioSystem.
You can create an AudioListener derived class if you want to add custom code to the listener. The AudioListener is an actor that the AudioManagerSubsystem automatically instantiates.
Project Settings > K Audio System > Audio Listener Class is where you can set the audio system to spawn your custom derived class instead of the base listener class.
Any actor can claim the AudioListener so that the listener follows one of its components (typically the RootComponent) by calling ClaimAudioListener() on the AudioManagerSubsystem. You should also call SetDefaultListener() on the AudioManagerSubsystem in case the listener ever has to be reset.
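The claim-plus-default pattern can be sketched like this. It is a hypothetical model only (the `ListenerManagerSketch` type, string-based targets, and `ResetListener` name are illustrative); it shows why registering a default matters: if the listener is ever reset, it falls back to the registered default target rather than following nothing.

```cpp
#include <string>

// Hypothetical sketch of listener claiming with a default fallback,
// mirroring ClaimAudioListener()/SetDefaultListener() described above.
struct ListenerManagerSketch {
    std::string DefaultTarget; // component the listener falls back to
    std::string CurrentTarget; // component the listener currently follows

    void SetDefaultListener(const std::string& Component) { DefaultTarget = Component; }
    void ClaimAudioListener(const std::string& Component) { CurrentTarget = Component; }
    void ResetListener() { CurrentTarget = DefaultTarget; }
};
```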
The AudioListener is also the only actor that is allowed to set states and global float parameters when overlapping with AudioEnvironments.
You can create an AudioMixerManager derived class if you want to add custom code to the mixer manager. The AudioMixerManager receives state and global float game parameters set on the AudioManagerSubsystem and should be used to enact mix changes on Unreal Engine's audio routing system. It is an actor that the AudioManagerSubsystem automatically instantiates.
Project Settings > K Audio System > Audio Mixer Class is where you can set the audio system to spawn your custom derived class instead of the base mixer manager class.
An AudioEnvironment has the same functionality as Unreal Engine's built-in Audio Volumes, with an extra layer that allows it to communicate game parameters through the KAudioSystem.
AudioAgents are pooled actors used to parent AudioSuperComponents that are spawned by the AudioManagerSubsystem. The size of the pool set in Project Settings > Plugins > K Audio System Settings is used to determine when global sound instance limiting takes effect. AudioAgents are also an AudioSuperComponent's window into the game world, allowing them to overlap with and receive game parameters from AudioEnvironments.
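The pool-based limiting described above can be sketched minimally as follows. This is a hypothetical model (the `AgentPoolSketch` type and `Acquire`/`Release` names are illustrative): a fixed-size pool of agents, where an exhausted pool is the point at which global instance limiting takes effect and new requests are refused until an agent is released.

```cpp
// Hypothetical sketch of AudioAgent pooling and global instance limiting.
struct AgentPoolSketch {
    explicit AgentPoolSketch(int PoolSize) : Free(PoolSize) {}
    int Free; // number of unclaimed agents remaining in the pool

    bool Acquire() {              // returns false when the pool is exhausted,
        if (Free == 0) return false; // i.e. instance limiting kicks in
        --Free;
        return true;
    }
    void Release() { ++Free; }    // a finished sound returns its agent
};
```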
An AudioEvent is a list of AudioEventCommands that can be executed simultaneously with one trigger function. These can be used by audio designers to streamline implementation by packaging several commands for the KAudioSystem into a single, reusable data asset.
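Conceptually, an AudioEvent behaves like the following sketch (hypothetical; the command representation as callables and the `Trigger` name are illustrative): a reusable bundle of commands that one trigger call executes together.

```cpp
#include <functional>
#include <vector>

// Hypothetical sketch of an AudioEvent: a packaged list of commands
// (e.g. play a sound, set a switch, set a float) executed with one trigger.
struct AudioEventSketch {
    std::vector<std::function<void()>> Commands;

    void Trigger() {
        for (auto& Command : Commands) Command(); // fire all commands at once
    }
};
```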
The MusicManager is accessed through the AudioManagerSubsystem and has several callable functions for controlling music. It can only manage one MusicLayout at a time. The MusicManager also has its own game parameter storage for switches and non-global floats. Gameplay systems interact with the music system mainly through game parameters set on the MusicManager and the AudioManagerSubsystem.
MusicLayouts have MusicSegments attached and are responsible for horizontal transition logic between MusicSegments. MusicLayouts use a transition hierarchy set in the "Class Defaults" to determine which transition to use when queueing the next MusicSegment to play. MusicLayouts also receive game parameters: switches and local floats from the MusicManager, and states and global floats from the AudioManagerSubsystem. Only one MusicLayout can be used by the MusicManager at a time.
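A transition hierarchy of this kind can be sketched as an ordered rule list, checked top to bottom, where the first rule matching the outgoing/incoming segment pair wins. This is a hypothetical model of the idea, not the plugin's actual "Class Defaults" layout; the `TransitionRule` fields, the `"*"` wildcard, and the `"Immediate"` fallback are all illustrative assumptions.

```cpp
#include <string>
#include <vector>

// Hypothetical sketch of a MusicLayout transition hierarchy: rules are
// checked in order and the first match for (From, To) wins.
struct TransitionRule {
    std::string From;       // "*" matches any segment
    std::string To;         // "*" matches any segment
    std::string Transition; // e.g. "FadeOnBar", "NextBeat"
};

inline std::string PickTransition(const std::vector<TransitionRule>& Rules,
                                  const std::string& From, const std::string& To) {
    for (const auto& R : Rules) {
        const bool FromOk = (R.From == "*" || R.From == From);
        const bool ToOk = (R.To == "*" || R.To == To);
        if (FromOk && ToOk) return R.Transition;
    }
    return "Immediate"; // illustrative fallback when no rule matches
}
```

Ordering the rules from most to least specific, with a catch-all at the bottom, is what makes this a hierarchy rather than a flat lookup.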
A MusicSegment has all the same features as an AudioSuperComponent. The only difference is that MusicSegments have a setting in the "Class Defaults" for BPM, time signature numerator, and time signature denominator that are used by the music system. There are two ways to use a MusicSegment:
You can attach a MusicSegmentBase component to a MusicLayout and configure its BPM, time signature, and the sound asset it plays. This provides the default, barebones functionality of a MusicSegment.
If you want more complex logic within your MusicSegment, and the ability to have it and/or the MetaSound it uses respond to game parameters, make a new Blueprint with MusicSegmentBase as the parent class and add that as a component to your MusicLayouts. You can use all of the same functionality provided by the AudioSuperComponent.