
Automatic gestures


Greta can automatically generate gestures from the speech text contained in FML/BML files.

Currently, two mechanisms for automatic gesture generation are supported: NVBG and Meaning Miner.

If you want to try both of them, see this page.

Architecture

[Figure: architecture of the automatic gesture generation pipeline]

The FML-Reader reads either an FML file or plain text (which is converted into FML). The communicative intentions are specified in the FML file. If the input is plain text, the intentions are computed by the Meaning Miner module, and the text is broken down into ideational units (ideationalUnit).
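For reference, a minimal FML-APML input could look like the sketch below. This is only an illustration: the exact tags and attributes (for example, the TTS-related attributes of `speech`) may differ depending on your Greta configuration.

```xml
<?xml version="1.0" encoding="utf-8"?>
<fml-apml>
  <bml>
    <!-- The speech text, with time markers (tm) that intentions can refer to -->
    <speech id="s1" start="0.0" language="english" text="">
      <tm id="tm1"/>
      Hello, nice to meet you.
      <tm id="tm2"/>
    </speech>
  </bml>
  <fml>
    <!-- A communicative intention anchored on the speech time markers -->
    <performative id="p1" type="greet" start="s1:tm1" end="s1:tm2" importance="1.0"/>
  </fml>
</fml-apml>
```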

Then, the Behavior Planner builds a list of signals containing:

  • the behaviors instantiated from communicative intentions
  • behaviors instantiated from NVBG
  • behaviors instantiated from Meaning Miner

Thus the Behavior Planner produces several lists of multimodal behaviors.


Once the list is complete, it is passed to a function called MSE_Selector, which chooses the behaviors to be executed according to an algorithm based on their priority and type.

When two hand gestures are scheduled in the same time frame, they cannot be combined, so one of them must be selected. The hand gesture to be shown is chosen according to the following priorities (see the sketch after this list):

  • A behaviorset that includes gestures for all body parts has priority over gestures of short duration.
  • A beat gesture has lower priority than an iconic or deictic gesture.
  • A gesture of type "image_schema" produced by Meaning Miner has higher priority than other types of gestures.
  • Greta will therefore most often perform behaviorset gestures and metaphorical gestures.
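The selection itself is done inside Greta's Java code; the sketch below only illustrates how such priority-based resolution between overlapping hand gestures could work. All names (GestureSelectionSketch, GestureSignal, priorityOf, select) are hypothetical, and the numeric ranking is an assumption derived from the rules above, not Greta's actual MSE_Selector API.

```java
import java.util.Comparator;
import java.util.List;

// Illustrative sketch only: these class, enum, and method names are hypothetical
// and do not correspond to Greta's actual MSE_Selector implementation.
public class GestureSelectionSketch {

    enum GestureType { BEHAVIOR_SET, IMAGE_SCHEMA, ICONIC, DEICTIC, BEAT }

    record GestureSignal(String id, GestureType type, double start, double end) { }

    // Assumed ranking derived from the rules listed above; the exact ordering
    // used by Greta may differ.
    static int priorityOf(GestureSignal g) {
        return switch (g.type()) {
            case BEHAVIOR_SET -> 4;
            case IMAGE_SCHEMA -> 3;
            case ICONIC, DEICTIC -> 2;
            case BEAT -> 1;
        };
    }

    // Two gestures conflict when their time frames overlap.
    static boolean overlap(GestureSignal a, GestureSignal b) {
        return a.start() < b.end() && b.start() < a.end();
    }

    // Keep only the highest-priority gesture among the overlapping candidates.
    static GestureSignal select(List<GestureSignal> overlapping) {
        return overlapping.stream()
                .max(Comparator.comparingInt(GestureSelectionSketch::priorityOf))
                .orElseThrow();
    }

    public static void main(String[] args) {
        GestureSignal beat = new GestureSignal("g1", GestureType.BEAT, 1.0, 2.0);
        GestureSignal schema = new GestureSignal("g2", GestureType.IMAGE_SCHEMA, 1.5, 2.5);
        if (overlap(beat, schema)) {
            // The image_schema gesture wins over the beat gesture: prints "g2".
            System.out.println(select(List.of(beat, schema)).id());
        }
    }
}
```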
