Support & Frequently Asked Questions


Support Request

  • Do you have a question about Sound Particles?
  • Did you find a bug in the application?
  • Is the software crashing?
  • Do you have an idea for a nice feature to be implemented in a future version?

Don't hesitate to send us an email ([email protected]).
Also, feel free to use our Facebook forum.

Frequently Asked Questions

Sound Particles is groundbreaking software, and, as such, many questions arise...

What products do you offer?

At the moment, we have two different products. Doppler+Air is a bundle with two regular plugins (AAX Native, VST/VST3, AU/AUv3), while Sound Particles is a standalone application (it doesn't work as a plugin). In the future, other plugins may be created.

Is there an academic version?

Even better: schools, students, and teachers can have free access to a special academic version of Sound Particles 2.0, with the exact same features as the commercial version.

Does Sound Particles support object-based audio?

Currently the software only supports channel-based audio (e.g. Dolby Atmos 9.1 bed, Auro-3D 11.1/13.1, NHK 22.2) and scene-based audio (Ambisonics/HOA). The main obstacle to supporting object-based audio is that there isn't a way to export metadata to other systems. Nevertheless, we are working with Dolby, DTS, Auro, and Avid to support it in the near future, as soon as there is a technical way to exchange metadata with these systems.

Does Sound Particles work in real time?

Yes, Sound Particles 2.0 supports real-time rendering for relatively simple audio scenes (around 100 particles, depending on scene complexity and computing power).

How long does it take to render a scene?

The render time depends on the complexity of the scene. As you can imagine, if you have thousands of particles, the software will need to render thousands of audio tracks, and that can take a while. A typical 10-second scene with a small number of particles (<100) may take around 10 seconds or less to render. As you increase complexity, the render time usually increases linearly.
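As a rough illustration of that linear scaling, a back-of-envelope estimate might look like the sketch below. The reference values (a ~100-particle scene rendering in about 10 seconds) come from the figures above; the linear model itself is an assumption, not a measured benchmark.

```python
def estimate_render_seconds(num_particles, ref_particles=100, ref_seconds=10):
    """Rough linear estimate of render time.

    ref_particles/ref_seconds are the assumed reference point described
    in the text: a small (~100 particle) scene taking about 10 seconds.
    """
    return ref_seconds * num_particles / ref_particles

# Under this linear model, 1,000 particles would take roughly 100 seconds.
print(estimate_render_seconds(1000))
```

Real render times will also depend on scene duration, effects, and hardware, so treat this only as a first-order guess.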

How many particles can I use?

In theory, each particle group can have a maximum of ~2,000,000,000 particles, and you can create millions of groups. The problem is that the software would need a LOT of time to render such audio scenes, not to mention memory/RAM requirements beyond what current computers have.
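To see why memory becomes a bottleneck at that scale, consider a hypothetical back-of-envelope calculation. The per-particle state size used here (three 64-bit coordinates) is purely an assumption for illustration; the engine's actual per-particle state is not documented here.

```python
# Back-of-envelope memory estimate for a maximal particle group.
# Assumption for illustration: each particle stores only an x, y, z
# position as 64-bit floats (the real engine keeps more state).
particles = 2_000_000_000          # ~2e9 particles per group
bytes_per_particle = 3 * 8         # x, y, z as 64-bit floats
total_gib = particles * bytes_per_particle / 2**30
print(f"{total_gib:.1f} GiB")      # ~44.7 GiB just for positions
```

Even under this minimal assumption, a single maxed-out group would need tens of gigabytes before any audio is rendered.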

How does the audio engine work?

For each particle, the engine calculates its position on a sample-by-sample basis (e.g. at a sample rate of 96 kHz, each particle's position is calculated 96,000 times per second). Internally, all calculations use 64-bit floating-point precision. Since a scene could contain thousands of explosions happening a few inches from the virtual microphone or a simple whisper several miles away, a normalization process is applied to the final stream during rendering, preventing any clipping and optimizing the dynamic range of the output signal.
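A peak-normalization pass of the kind described can be sketched as follows. This is a minimal illustration of the general technique, not the engine's actual code; the function name and 1.0 full-scale target are assumptions.

```python
def peak_normalize(samples, target_peak=1.0):
    """Scale a rendered stream so its loudest sample hits target_peak.

    This prevents clipping for very loud sources (e.g. a nearby explosion)
    and lifts very quiet ones (e.g. a distant whisper) toward full scale.
    """
    peak = max(abs(s) for s in samples)
    if peak == 0.0:
        return list(samples)  # silent stream: nothing to scale
    gain = target_peak / peak
    return [s * gain for s in samples]

loud = [0.5, -2.0, 1.5]            # would clip at |s| > 1.0
print(peak_normalize(loud))        # -> [0.25, -1.0, 0.75]
```

Because the whole stream is scaled by a single gain, the relative balance between loud and quiet events in the scene is preserved.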