Animating Lip-Sync

Adding a lip-sync to your animation is essential to making your characters seem alive. However, it is also a particularly tedious part of the animation process.

To solve this problem, Harmony provides an automatic lip-sync detection feature. This feature analyzes the content of a sound track in your scene and associates each phoneme it detects with one of the mouth shapes in the following chart, a standard mouth chart in the animation industry.

NOTE The letters assigned to these mouth shapes are standard identifiers; they do NOT correspond to the sounds they are meant to produce.

This is an approximation of the English phonemes each mouth shape can be used to represent (see the sketch after this list):

  • A: m, b, p, h
  • B: s, d, j, i, k, t
  • C: e, a
  • D: A, E
  • E: o
  • F: u, oo
  • G: f, ph
  • X: Silence, undetermined sound
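
For reference, the chart above can be expressed as a simple lookup table. The following is a minimal sketch in Python; it is not part of Harmony or its scripting interface, and the phoneme spellings are the informal ones used in the chart rather than a formal phonetic alphabet.

    # Illustrative sketch only: a rough phoneme-to-mouth-shape lookup based on
    # the chart above. Not part of Harmony's API.
    MOUTH_SHAPES = {
        "A": ["m", "b", "p", "h"],
        "B": ["s", "d", "j", "i", "k", "t"],
        "C": ["e", "a"],
        "D": ["A", "E"],
        "E": ["o"],
        "F": ["u", "oo"],
        "G": ["f", "ph"],
        "X": [],  # silence or undetermined sound
    }

    # Invert the chart so a detected phoneme can be looked up directly.
    PHONEME_TO_SHAPE = {
        phoneme: shape
        for shape, phonemes in MOUTH_SHAPES.items()
        for phoneme in phonemes
    }

    def mouth_shape_for(phoneme: str) -> str:
        """Return the mouth-shape letter for a phoneme, or X for silence/unknown."""
        return PHONEME_TO_SHAPE.get(phoneme, "X")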

When performing automatic lip-sync detection, Harmony does not create mouth drawings. It simply fills the drawing column of your character's mouth layer with the generated lip-sync by inserting the letter of the matching mouth shape into each cell of the column. Therefore, for automatic lip-sync detection to work, your character's mouth layer should already contain a drawing for each mouth shape in the chart, and these drawings should be named after their corresponding letters.
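To make this concrete, here is a small sketch of the fill step, reusing the mouth_shape_for helper from the sketch above: given one detected phoneme per frame, it produces the drawing name to expose in each cell of the mouth layer's column. The detection itself is performed by Harmony; this only illustrates how the letters end up in the column.

    def fill_mouth_column(detected_phonemes: list[str]) -> list[str]:
        """Map one detected phoneme per frame to the name of the mouth drawing
        exposed on that frame; frames with no phoneme get X (the silence drawing)."""
        return [mouth_shape_for(p) if p else "X" for p in detected_phonemes]

    # Example: a rough per-frame detection for the word "happy", padded with silence.
    frames = ["h", "a", "p", "i", "", ""]
    print(fill_mouth_column(frames))  # ['A', 'C', 'A', 'B', 'X', 'X']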

You can manually create the lip-sync for your scene by selecting which mouth drawing should be exposed at each frame of your character's dialogue. For this process, you will be using the Sound Scrubbing functionality, which plays the part of your sound track at the current frame whenever you move your Timeline cursor, allowing you to identify which phonemes you should match your character's mouth to. You will also be using drawing substitution to change which mouth drawing is exposed at each frame.

TIP You can also press [ to substitute the selected drawing with the previous drawing and ] to substitute it with the next drawing.