How to Import Sound and Add Lip-Sync

Importing Sound

If you wish to add sound to your animation, it is recommended to edit and mix your sound files in dedicated sound editing software. Working with full-length, pre-mixed soundtracks ensures the audio preserves its timing, mixing and quality if you use third-party software for post-production. Another good practice is to keep your soundtrack separated into tracks for music, sound effects and characters, making it easier to sync your animation with voices and sounds. That said, you can also clip sound effects and adjust their volume directly in Harmony when needed.

If you create your project in Toon Boom Storyboard Pro, you can export all of your project's scenes as separate Harmony scenes. The storyboard's soundtrack is cut up by scene, and each piece is inserted into the corresponding exported scene, saving you the time of splitting and importing the soundtrack yourself.

Harmony can import .wav, .aiff and .mp3 audio files.

NOTE Importing a soundtrack that is longer than your scene will not extend your scene's length. Sound playback stops at the end of the scene.

Automatic Lip-Sync Detection

Adding lip-sync to your animation is essential to making your characters seem alive. However, it is also a particularly tedious part of the animation process.

To solve this problem, Harmony provides an automatic lip-sync detection feature. This feature analyzes the content of a sound track in your scene and associates each phoneme it detects with one of the mouth shapes in the following mouth chart, which is a standard mouth chart in the animation industry.

NOTE The letters assigned to these mouth shapes are standard identifiers; they do NOT correspond to the sounds they are meant to produce.

This is an approximation of the English phonemes each mouth shape can be used to represent:

  • A: m, b, p, h
  • B: s, d, j, i, k, t
  • C: e, a
  • D: A, E
  • E: o
  • F: u, oo
  • G: f, ph
  • X: Silence, undetermined sound

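The chart above can be expressed as a simple lookup table. The sketch below is purely illustrative (it is not Harmony's internal data or API); the `mouth_shape` helper and its fallback to "X" are assumptions made for the example:

```python
# Hypothetical lookup table for the mouth chart above.
# Keys are the standard mouth-shape letters; values are the phonemes
# each shape can represent (approximate, for English).
MOUTH_SHAPES = {
    "A": ["m", "b", "p", "h"],
    "B": ["s", "d", "j", "i", "k", "t"],
    "C": ["e", "a"],
    "D": ["A", "E"],
    "E": ["o"],
    "F": ["u", "oo"],
    "G": ["f", "ph"],
}

# Invert the chart so a phoneme maps directly to its mouth shape.
PHONEME_TO_SHAPE = {
    phoneme: shape
    for shape, phonemes in MOUTH_SHAPES.items()
    for phoneme in phonemes
}

def mouth_shape(phoneme):
    """Return the mouth-shape letter for a phoneme, or 'X' for silence
    or an undetermined sound."""
    return PHONEME_TO_SHAPE.get(phoneme, "X")

print(mouth_shape("m"))   # A
print(mouth_shape("oo"))  # F
print(mouth_shape(""))    # X (silence)
```

Inverting the chart once up front keeps each per-phoneme lookup a constant-time dictionary access, with "X" as the catch-all for anything the chart does not cover.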
When performing automatic lip-sync detection, Harmony does not create mouth drawings. It simply fills the drawing column of your character's mouth layer with the generated lip-sync, inserting the letter for the matching mouth shape into each cell of the column. Therefore, for automatic lip-sync detection to work, your character's mouth layer must already contain a mouth drawing for each shape in the mouth chart, and each drawing must be named after its corresponding letter.
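Conceptually, the detection step assigns one mouth-drawing name to each cell of the column. The following is a minimal sketch of that idea only; the function name, the one-phoneme-per-frame input, and the `None`-means-silence convention are assumptions for illustration, not Harmony's actual implementation:

```python
def fill_mouth_column(phonemes_per_frame, phoneme_to_shape):
    """Map one detected phoneme per frame to the name of the mouth
    drawing to expose in that cell. Frames with no detected phoneme,
    or with an unrecognized one, get 'X' (silence/undetermined)."""
    return [
        phoneme_to_shape.get(phoneme, "X") if phoneme else "X"
        for phoneme in phonemes_per_frame
    ]

# One detected phoneme per frame; None marks a silent frame.
frames = ["h", "e", None, "oo"]
shapes = {"h": "A", "e": "C", "oo": "F"}
print(fill_mouth_column(frames, shapes))  # ['A', 'C', 'X', 'F']
```

This also shows why the drawings must be named by their chart letters: the generated column contains only those letters, so each cell can expose a drawing only if one with that exact name exists.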

Animating Lip-Sync Manually

You can create the lip-sync for your scene manually by selecting which mouth drawing should be exposed at each frame of your character's dialogue. For this process, you will use the Sound Scrubbing functionality, which plays the part of the soundtrack at the current frame whenever you move the Timeline cursor, letting you identify which phoneme your character's mouth should match. You will also use drawing substitution to change which mouth drawing is exposed at each frame.