Game Audio Programming 2

Principles and Practices

About This Book

Welcome to the second volume of Game Audio Programming: Principles and Practices – the first series of its kind dedicated to the art of game audio programming! This volume features more than 20 chapters containing advanced techniques from some of the top game audio programmers and sound designers in the industry. This book continues the tradition of collecting more knowledge and wisdom about game audio programming than any other volume in history.

Both audio programming beginners and seasoned veterans will find content in this book that is valuable, with topics ranging from extreme low-level mixing to high-level game integration. Each chapter contains techniques that were used in games that have shipped, and there is a plethora of code samples and diagrams. There are chapters on threading, DSP implementation, advanced middleware techniques in FMOD Studio and Audiokinetic Wwise, ambiences, mixing, music, and more.

This book has something for everyone who is programming audio for a game: programmers new to the art of audio programming, experienced audio programmers, and those souls who just got assigned the audio code. This book is for you!


Information

Year: 2018
ISBN: 9781351653947
Edition: 1

CHAPTER 1

Life Cycle of Game Audio

Florian FĂźsslin
Crytek GmbH, Frankfurt, Germany

CONTENTS

  • 1.1 Preamble
  • 1.2 Audio Life Cycle Overview
    • 1.2.1 Audio Asset Naming Convention
    • 1.2.2 Audio Asset Production
  • 1.3 Preproduction
    • 1.3.1 Concept and Discovery Phase—Let’s Keep Our Finger on the Pulse!
    • 1.3.2 Audio Modules
    • 1.3.3 The Audio Quality Bar
    • 1.3.4 The Audio Prototypes
    • 1.3.5 First Playable
    • 1.3.6 Preproduction Summary
  • 1.4 Production
    • 1.4.1 Proof of Concept Phase—Let’s Build the Game!
    • 1.4.2 The Alpha Milestone
    • 1.4.3 Production Summary
  • 1.5 Postproduction
    • 1.5.1 Optimization, Beautification, and Mixing—Let’s Stay on Top of Things!
    • 1.5.2 The Beta Milestone
    • 1.5.3 Postproduction Summary
  • 1.6 Conclusion
  • 1.7 Postscript
  • Reference

1.1 PREAMBLE

Two years have passed since I had the pleasure and opportunity to contribute to this book series—two years in which game audio has made another leap forward, both creatively and technically. Key contributors to this leap are the audio middleware companies and their design tools, which continue to shift power from audio programmers to audio designers. But as always, with great power comes great responsibility, so a structured approach to audio design and implementation has never been more important. This chapter will provide ideas, insights, and an overview of the audio production cycle from an audio designer’s perspective.

1.2 AUDIO LIFE CYCLE OVERVIEW

“In the beginning, there was nothing.” This pithy biblical phrase describes the start of a game project. Unlike linear media, there are no sound effects or dialog recorded on set that we can use as a starting point. This silence is a bleak starting point, and things do not look any better on the technical side. We might have a game engine running with audio middleware integrated, but we still need to create AudioEntity components, set GameConditions, and define GameParameters.
In this vacuum, it can be hard to determine where to start. Ideally, audio is part of a project from day one and is a strong contributor to the game’s overall vision, so it makes sense to follow the production phases of the project and tailor them to our audio requirements. Game production is usually divided into three phases: preproduction, production, and postproduction. The milestone at the end of each phase functions as a quality gate for continuing development.
  • Preproduction → Milestone First Playable (Vertical Slice)
  • Production → Milestone Alpha (Content Complete)
  • Postproduction → Milestone Beta (Content Finalized)
The ultimate goal of this process is to build and achieve the audio vision alongside the project with as little throwaway work as possible. The current generation of game engines and audio middleware caters to this requirement: their architecture allows the audio to be changed, iterated on, extended, and adapted easily, often in real time with the game and the audio middleware connected over the network.
All of these tools treat the audio content and the audio structure separately. So, instead of playing back an audio file directly, the game triggers a container (e.g., an AudioEvent) which holds the specific AudioAssets and all relevant information about their playback behavior, such as volume, positioning, pitch, or other parameters we would like to control in real time. This abstraction layer allows us to change the event data (e.g., how a sound attenuates over distance) without touching the audio data (e.g., the actual audio asset) and vice versa. This is a data-driven system in which the audio container provides all the necessary components for playback in the game engine. Figure 1.1 shows how this works with the audio controls editor in CryEngine.
In CryEngine, we use an AudioControlsEditor, which functions like a patch bay where all parameters, actions, and events from the game are listed and communicated. Once we wire a parameter, action, or event to its counterpart on the audio middleware side, that link can often remain in place for the rest of production while we continue to tweak and change the underlying values and assets.
FIGURE 1.1 The audio controls editor in CryEngine.
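To make the abstraction concrete, here is a minimal C++ sketch of a data-driven trigger-to-event mapping. The type and function names (AudioEventData, AudioSystem, ExecuteTrigger, PostToMiddleware) are hypothetical illustrations, not the actual CryEngine or middleware API.

// A minimal sketch of the abstraction described above. All names here are
// hypothetical illustrations, not the actual CryEngine or middleware API.
#include <string>
#include <unordered_map>
#include <utility>

struct AudioEventData
{
    std::string middlewareEventName; // e.g., "Play_Boss_Music_Stinger"
    float       maxDistance = 50.0f; // attenuation lives in data, not in game code
};

class AudioSystem
{
public:
    // Game code only knows the name of an AudioTrigger, never a file path.
    void ExecuteTrigger(const std::string& triggerName)
    {
        auto it = m_triggerToEvent.find(triggerName);
        if (it != m_triggerToEvent.end())
            PostToMiddleware(it->second);
    }

    // Populated from exported data, so designers can rewire or retune events
    // without touching game code.
    void RegisterTrigger(const std::string& triggerName, AudioEventData data)
    {
        m_triggerToEvent[triggerName] = std::move(data);
    }

private:
    void PostToMiddleware(const AudioEventData& eventData)
    {
        // Hand the event name over to the middleware (e.g., FMOD Studio or Wwise).
    }

    std::unordered_map<std::string, AudioEventData> m_triggerToEvent;
};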

1.2.1 Audio Asset Naming Convention

Staying on top of a hundred AudioControls, several hundred AudioEvents, and the thousands of AudioAssets they contain requires a fair amount of discipline and a solid naming convention. We try to keep the naming consistent throughout the pipeline, so the name of the AudioTrigger represents the name of the AudioEvent (which in turn reflects the name of the AudioAssets it uses), along with the behavior (e.g., Start/Stop of an AudioEvent). By using one-letter identifiers (e.g., w_ for weapon) and abbreviations that keep filename length in check (e.g., pro_ for projectile), we are able to keep a solid overview of the audio content of our projects.
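As a small illustration of such a convention, the hypothetical helper below assembles a name from a one-letter category identifier and abbreviated descriptors; the example result is invented for illustration and not taken from an actual project.

// Hypothetical naming helper illustrating the convention described above.
#include <string>

std::string MakeAudioName(char categoryId, const std::string& abbreviation,
                          const std::string& detail)
{
    // One-letter category + abbreviation + descriptive detail, joined with '_'.
    return std::string(1, categoryId) + "_" + abbreviation + "_" + detail;
}

// Example: MakeAudioName('w', "pro", "impact_stone") returns "w_pro_impact_stone",
// and that same name is reused for the AudioTrigger, AudioEvent, and AudioAssets.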

1.2.2 Audio Asset Production

In general, the production chain of an audio event follows three stages:
  • Create audio asset
    An audio source recorded with a microphone or taken from an existing library, edited and processed in a DAW, and exported into the appropriate format (e.g., boss_music_stinger.wav, 24 bit, 48 kHz, mono).
  • Create audio event
    An exported audio asset implemented into the audio middleware in a way that the game engine can execute it (e.g., Play_Boss_Music_Stinger).
  • Create audio trigger
    An in-game entity which triggers the audio event (e.g., “Player enters area trigger of boss arena”).
While this sequence of procedures is basically true, the complex and occasionally chaotic nature of game production often requires deviating from this linear pipeline. We can, therefore, think of this chain as an endless circle, where starting from the audio trigger makes designing the audio event and audio asset much easier. The more we know about how our sound is supposed to play out in the end, the more granularly and precisely we can design the assets to support it.
For example, we designed, exported, and implemented a deep drum sound plus a cymbal for our music stinger when the player enters the boss arena. While testing the first implementation in the game, we realized that the stinger plays at the same moment as the boss’s loud scream. While there are many possible solutions, such as delaying the scream until the stinger has worn off, we decided to redesign the asset and remove the cymbal so that it does not interfere with the frequency spectrum of the scream in the first place.
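The last link of that chain can also be sketched in code. The component below is a hypothetical illustration (the Entity and IAudioSystem types are invented for this example): an area trigger that executes Play_Boss_Music_Stinger once when the player enters the boss arena, while the stinger’s content stays entirely in the middleware project.

// Hypothetical sketch of an in-game audio trigger; Entity and IAudioSystem
// are invented for this example and are not an actual engine API.
#include <string>

struct Entity
{
    bool isPlayer = false;
};

class IAudioSystem
{
public:
    virtual ~IAudioSystem() = default;
    virtual void ExecuteTrigger(const std::string& triggerName) = 0;
};

class BossArenaTrigger
{
public:
    explicit BossArenaTrigger(IAudioSystem& audio) : m_audio(audio) {}

    // Called by the game when something crosses the arena's trigger volume.
    void OnEntityEnter(const Entity& entity)
    {
        if (entity.isPlayer && !m_fired)
        {
            // The trigger only knows the event name; the drum and cymbal assets
            // (and their later redesign) live entirely in the audio middleware.
            m_audio.ExecuteTrigger("Play_Boss_Music_Stinger");
            m_fired = true;
        }
    }

private:
    IAudioSystem& m_audio;
    bool          m_fired = false;
};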

1.3 PREPRODUCTION

1.3.1 Concept and Discovery Phase—Let’s Keep Our Finger on the Pulse!

Even in the early stages of development, there are determining factors that help us to define the technical requirements, choose the audio palette, and draft the style of the audio language. For example, the genre (shooter, RTS, adventure game), the number of players (single player, small-scale multiplayer, MMO), the perspective (first-person, third-person), or the setting (sci-fi, fantasy, urban) will all contribute to the choices that you will be making.
Because we might not have a game running yet, we have to think about alternative ways to prototype. We want to get as much information about the project as possible, and we can find a lot of valuable material by looking at other disciplines.
For example, our visually-focused colleagues use concept art sketches as a first draft to envision the mood, look, and feel of the game world. Concept art provides an excellent foundation to determine the requirements for the environment, character movement, setting, and aesthetic.
Similarly, our narrative friends have documents outlining the story, timeline, and main characters. These documents help to envision the story arc, characters’ motivations, language, voice, and dialog requirements. Some studios will put together moving pictures or video reels from snippets of the existing concept art or from other media, which gives us an idea about pacing, tension, and relief. These cues provide a great foundation for music requirements.
Combining all this available information should grant us enough material to structure and document the project in audio modules, and to set the audio quality bar.

1.3.2 Audio Modules

Audio modules function both as documentation and as a structural hub. While the user perceives audio as one experience, we like to split it into three big pillars to manage production: dialog, sound effects, and music. In many games, most of the work is required on the sound effects side, so a more granular structure with audio modules helps us stay on top of it. Development teams usually work cross-discipline and therefore tend to structure their work by features, level sections, characters, or missions, which is not necessarily in line with how we want to structure the audio project.
Here are some examples of audio modules we usually create in an online wiki format and update as we move through the production stages:
  • Cast—characters and AI-related information
  • Player—player-relevant information
  • Levels—missions and the game world
  • Environment—possible player interactions with the game world
  • Equipment—tools and gadgets of the game characters
  • Interface—HUD-related requirements
  • Music—music-relevant information
  • Mix—mixing-related information
Each module has subsections containing a brief overview on the creative vision, a collection of examples and references, technical setup for creating the content, and a list of current issues.

1.3.3 The Audio Quality Bar

The audio quality bar functions as the anchor point for the audio vision. We want to give a preview of how the final game is going to sound. We usually achieve this by creating an audio play based on the available information, ideally featuring each audio module.
Staying within a linear format and utilizing the DAW, we can draw from the full palette of audio tools, effects, and plugins to simulate pretty much any acoustic behavior. This audio play serves three purposes: First, it creates an audible basis for the game to discuss all audio requirements f...

Table of contents

  1. Cover
  2. Halftitle Page
  3. Title Page
  4. Copyright Page
  5. Dedication Page
  6. Contents
  7. Preface
  8. Acknowledgments
  9. Editor
  10. Contributors
  11. Chapter 1 Life Cycle of Game Audio
  12. Chapter 2 A Rare Breed: The Audio Programmer
  13. Part I Low-Level Topics
  14. Part II Middleware
  15. Part III Game Integration
  16. Part IV Music
  17. Chapter 20 Note-Based Music Systems
  18. Chapter 21 Synchronizing Action-Based Gameplay to Music
  19. Index