Turn Management
sagatake edited this page Apr 4, 2025 · 5 revisions
This module controls turn-taking behaviors based on audio-visual signals from both the user and the agent. It is implemented with a voice activity detection (VAD) model and a voice activity projection (VAP) model.
Two sub-modules are implemented in this module:
- VAD-based backchannel (https://github.com/isir/greta/wiki/Backchannel-based-on-VAD)
- VAP-based turn-management (https://github.com/isir/greta/wiki/Turn-management-based-on-VAP)
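To give a feel for the VAD-based path, here is a minimal sketch of a pause detector that could trigger a backchannel. This is illustrative only: Greta's actual sub-module uses a trained VAD model rather than an energy threshold, and all function names and parameters below are hypothetical.

```python
# Minimal sketch of a VAD-driven backchannel trigger (illustrative only;
# the real Greta module uses a trained VAD model, not an energy threshold).

def frame_energy(frame):
    """Mean squared amplitude of one audio frame (a list of samples)."""
    return sum(s * s for s in frame) / len(frame)

def detect_pauses(frames, threshold=0.01, min_silence_frames=3):
    """Yield the frame index at which the user's speech has paused long
    enough that the agent could insert a backchannel (e.g. a nod)."""
    silent_run = 0
    for i, frame in enumerate(frames):
        if frame_energy(frame) < threshold:
            silent_run += 1
            if silent_run == min_silence_frames:
                yield i  # pause detected: trigger a backchannel here
        else:
            silent_run = 0  # speech resumed, reset the silence counter

# Example: four loud frames, three near-silent frames, then speech again
frames = [[0.5, -0.4]] * 4 + [[0.0, 0.001]] * 3 + [[0.6, -0.5]] * 2
print(list(detect_pauses(frames)))  # → [6]
```

A VAP model goes one step further than this sketch: instead of reacting to silence after the fact, it predicts upcoming voice activity, which allows the agent to prepare a turn shift before the pause actually occurs.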
- Install conda or Anaconda from https://www.anaconda.com/
- Install Python 3 (usually installed with Anaconda, but in some cases the path to "python.exe" is not set globally and must be added manually)
- You can test the installation by loading Greta - Microphone - backchannel.xml from Modular.jar. If everything is installed correctly, Greta will nod in response to your utterances.
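The setup steps above can be sketched as shell commands. This is a sketch under stated assumptions: the environment name and Python version below are hypothetical, not part of the Greta documentation.

```shell
# Hypothetical setup sketch; the environment name "greta-turn" and the
# pinned Python version are assumptions, not from the Greta docs.
conda create -n greta-turn python=3.10 -y
conda activate greta-turn

# Verify that python is reachable globally (i.e. on PATH);
# if this fails, add the Python install directory to PATH manually.
python --version
```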