Set this value to an AudioListener-derived class that adds your custom functionality. If null, the KAudioSystem will instantiate the base class.
Set this value to an AudioMixerManager-derived class that adds your custom functionality. If null, the KAudioSystem will instantiate the base class.
Sets the number of AudioAgents in the AudioManagerSubsystem's pool. The size of this pool is used as the global sound-limiting threshold within the KAudioSystem.
These settings modify the weights and values used when calculating global sound stealing. The calculation is:
//Lowest score gets stolen
//ASC = AudioSuperComponent
//GSSF = GlobalSoundStealFactors
priority = ASC->soundPriority * GSSF.priorityModifier;
distance = GSSF.distanceScale.Eval(FVector::Dist(sound->GetOwner()->GetActorLocation(), location)) * GSSF.distanceModifier;
age = GSSF.ageScale.Eval(ASC->age) * GSSF.ageModifier;
SCORE = priority + distance + age;
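For example, with purely hypothetical factor values and curve results, the score for one candidate sound might work out as follows:
//Hypothetical numbers for illustration only
//priority = 10 * 2.0 = 20.0
//distance = distanceScale.Eval(1500.0) * 50.0 = 0.4 * 50.0 = 20.0
//age = ageScale.Eval(3.5) * 10.0 = 0.6 * 10.0 = 6.0
//SCORE = 20.0 + 20.0 + 6.0 = 46.0
Whichever active sound ends up with the lowest total score is the one chosen to be stolen.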
In editor and non-shipping builds, you can use console commands to open this debug widget at runtime. This can be replaced if you want to use your own debugging widget.
To use game parameters with the KAudioSystem, you'll need to create data assets which will be used to identify and reference your parameters. There are 3 kinds of parameters:
AudioGroupParameter = A group parameter is a label used to mark AudioSwitchParameters. All AudioSwitchParameters belong to a group and within a group only one switch can be active per actor.
AudioSwitchParameter = A switch parameter is a value that is set for an actor. Each AudioSwitchParameter has a reference to an AudioGroupParameter that is evaluated when setting the switch.
AudioFloatParameter = A float parameter acts as an identifier and contains default settings for numeric game parameters, which can be referenced and retrieved when handling out-of-bounds parameter inputs.
Though there are only 3 asset types, the KAudioSystem supports 4 kinds of game parameters:
Local switches = AudioSwitchParameters set using SetSwitchParameter() in KAudioSystemStatics will be applied locally per actor. This means each actor can have its own switch setting per AudioGroupParameter and only AudioSuperComponents linked to this actor will receive the value.
Local floats = AudioFloatParameters set using SetFloatParameter() in KAudioSystemStatics will be applied locally per actor. This means each actor can have its own float value setting for an AudioFloatParameter and only AudioSuperComponents linked to this actor will receive the value.
Global states = AudioSwitchParameters set using SetStateParameter() in KAudioSystemStatics will be applied globally. All AudioSuperComponents listening for global states will receive the value.
Global floats = AudioFloatParameters set using SetGlobalFloatParameter() in KAudioSystemStatics will be applied globally. All AudioSuperComponents listening for global floats will receive the value.
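As a rough illustration of how these four setters might be called from code, here is a pseudocode-style sketch in the spirit of the stealing calculation above. The argument lists, the actor variable, and the parameter asset names are all assumptions, so check KAudioSystemStatics for the real signatures:
//Pseudocode sketch - signatures and variable names are assumptions
//Local switch: only AudioSuperComponents linked to TargetActor receive it
KAudioSystemStatics::SetSwitchParameter(TargetActor, FootstepSurfaceSwitch);
//Local float: a per-actor numeric value
KAudioSystemStatics::SetFloatParameter(TargetActor, EngineRPMParam, 0.75f);
//Global state: every AudioSuperComponent listening for states receives it
KAudioSystemStatics::SetStateParameter(CombatStateSwitch);
//Global float: every AudioSuperComponent listening for global floats receives it
KAudioSystemStatics::SetGlobalFloatParameter(TimeOfDayParam, 0.5f);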
To make a data asset for any of these parameters, right-click in the Content Browser and select Miscellaneous > Data Asset. For the asset type, select one of the above classes. You can then name your data asset and configure the settings inside. All parameters have a GUID property. You can batch roll new unique IDs for all selected parameter data assets by selecting them, right-clicking, and choosing Scripted Asset Actions > Generate GUID.
AudioSuperComponents, MusicLayouts, and MusicSegments can all react to game parameters. The setup is the same for whichever class you are using.
1. First, decide which game parameters you want your class to listen for within the KAudioSystem and enable the corresponding checkbox in Class Defaults > Game Parameter Settings.
Switch Parameters = Local AudioSwitchParameters.
Float Parameters = Local AudioFloatParameters.
State Parameters = Global AudioSwitchParameters.
Global Float Parameters = Global AudioFloatParameters.
2. Next, override the corresponding function for the parameters you want your class to listen for:
OnSwitchParameterChanged()
OnFloatParameterChanged()
OnStateParameterChanged()
OnGlobalFloatParameterChanged()
These functions are called every time a parameter changes. This means it is up to your code to check which parameter changed and respond only to the ones it cares about. To help with this, when you drag off an output pin for any parameter asset reference, you can use a special equals node that only provides data assets matching the type of parameter you're comparing.
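As a sketch of what such an override could look like (the C++ signature and the referenced parameter properties here are assumptions, not the plugin's actual declarations):
//Pseudocode sketch - signature and property names are assumptions
void UMyAudioSuperComponent::OnSwitchParameterChanged(UAudioGroupParameter* Group, UAudioSwitchParameter* NewSwitch)
{
    //Only respond to the group and switch this component cares about
    if (Group == FootstepSurfaceGroup && NewSwitch == GrassSurfaceSwitch)
    {
        //React here, e.g. update a MetaSound input
    }
}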
3. It's worth noting that the overridable functions above are called the moment a parameter change occurs. AudioSuperComponents, MusicSegments, and MusicLayouts do not store any of these values by default, so if your instance misses the signal, your logic in these functions will not run. To account for this, you can manually call getter functions to retrieve parameters from the KAudioSystem.
GetSwitchParameter() = Gets the local AudioSwitchParameter value of an AudioGroupParameter for your sound's linked actor. Note, if your sound is an orphan (the linked actor was destroyed but the sound was allowed to persist), this function executes the "False" output pin.
GetFloatParameter() = Gets the local float value of an AudioFloatParameter for the sound's linked actor. Note, if your sound is an orphan (the linked actor was destroyed but the sound was allowed to persist), this function executes the "False" output pin.
GetStateParameter() = Gets the global AudioSwitchParameter value of an AudioGroupParameter. If the value was never set, this function executes the "False" output pin.
GetGlobalFloatParameter() = Gets the global float value of an AudioFloatParameter. If the value was never set, this function executes the "False" output pin.
These functions tend to be most useful when overriding functions like PrePlay() and PostPlay() where you need to initialize your class and grab parameters immediately.
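For example, a PrePlay() override might look roughly like this, with the Blueprint "False" output pin modeled as a false return value. The signatures and variable names are assumptions:
//Pseudocode sketch - signatures and variable names are assumptions
void UMyAudioSuperComponent::PrePlay()
{
    UAudioSwitchParameter* CurrentState = nullptr;
    if (GetStateParameter(CombatStateGroup, CurrentState))
    {
        //Initialize using CurrentState before the sound starts playing
    }
    else
    {
        //The state was never set - fall back to a sensible default
    }
}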
AudioEnvironments can be used the same way Unreal Engine's built-in audio volumes work. In the "Place Actors" window/dropdown, find "Audio Environment" and simply drag the volume into your level. Use the "Brush Settings" to set the size of the volume which is used for both the AudioEnvironment and built-in audio volume functionality. You'll also want to set the "Priority" value if this volume will overlap with others.
In order for the AudioEnvironment to pass parameters to AudioAgents, there are several arrays under "Audio Environment" you can edit. Four of the arrays correspond with entering the volume and the other four with exiting. Each type of game parameter has its own array.
There is also a checkbox labeled "Force Environment Query On Exit". When you have volumes nested inside one another, AudioAgents exiting the inner volume need to reapply the outer volume's enter parameters. You could configure the exit parameters of the inner volume to match the enter parameters of the outer volume. Alternatively, turn on "Force Environment Query On Exit" on the inner volume so that any AudioAgent exiting it will query its environment again and grab the outer volume's enter parameters.
The AudioMixerManager can similarly be set up to receive game parameters like an AudioSuperComponent. However, there are two key differences.
An AudioMixerManager always listens to all parameters.
Its local parameters are parameters that are set directly on the mixer manager. You can retrieve a reference to the mixer manager through the AudioManagerSubsystem and call SetSwitchParameter() or SetFloatParameter().
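A rough sketch of that flow (how the AudioManagerSubsystem is obtained, the mixer manager accessor, and the setter arguments are all assumptions):
//Pseudocode sketch - accessor names and signatures are assumptions
UAudioManagerSubsystem* AudioManager = GetWorld()->GetSubsystem<UAudioManagerSubsystem>();
UAudioMixerManager* MixerManager = AudioManager->GetAudioMixerManager();
//These become the mixer manager's "local" parameters
MixerManager->SetSwitchParameter(MenuMusicSwitch);
MixerManager->SetFloatParameter(MasterDuckAmountParam, 0.25f);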
You will mainly want to use the AudioMixerManager along with Unreal Engine's built-in audio system to enact global dynamic mix changes. Sound Class Mix assets can be activated at runtime using the Push/PopSoundMixModifier() functions. There are many other built-in functions that might be useful; simply check what functions are available under the "Audio" dropdown in the Blueprint context menu.
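Push/PopSoundMixModifier() live on UGameplayStatics; a minimal usage example follows, where CombatDuckingMix is a hypothetical Sound Class Mix (USoundMix) asset reference:
#include "Kismet/GameplayStatics.h"
//Inside an actor or component, activate a Sound Class Mix at runtime...
UGameplayStatics::PushSoundMixModifier(this, CombatDuckingMix);
//...and later remove it again when the mix change should end
UGameplayStatics::PopSoundMixModifier(this, CombatDuckingMix);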
You should think of a MusicSegment as a horizontal section of self-contained music. MusicSegments have all the same capabilities as AudioSuperComponents, with additional BPM and time signature properties that must be set so they can be used in the music system.
MusicSegments are meant to receive parameters, make decisions, and control a MetaSound. The MetaSound is responsible for enacting the musical changes you want. The most common example would be implementing vertical layering: when the MusicSegment receives a parameter, it decides to add a new layer on top of the music currently playing. It then passes an input value to the MetaSound, which handles fading the new layer in synchronously with the music already playing.
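A rough sketch of that hand-off (the state-change signature, the way the MusicSegment reaches its playing UAudioComponent, and the MetaSound input name are all assumptions):
//Pseudocode sketch - signatures, accessors, and input names are assumptions
void UMyMusicSegment::OnStateParameterChanged(UAudioGroupParameter* Group, UAudioSwitchParameter* NewState)
{
    if (Group == MusicIntensityGroup && NewState == HighIntensityState)
    {
        if (UAudioComponent* AudioComp = GetAudioComponent())
        {
            //The MetaSound reads this input and fades the extra layer in with the music
            AudioComp->SetFloatParameter(TEXT("LayerTwoGain"), 1.0f);
        }
    }
}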
There are two ways to create a MusicSegment. The first and most common way is to make new Blueprints of the MusicSegmentBase class. These are then attached as unique components onto a MusicLayout. The second way is to attach the base class directly if you simply need a segment with no special functionality. Ultimately, to be used within the music system, your MusicSegments must be added to a MusicLayout.