DeepGram
sagatake edited this page Apr 7, 2025
The DeepGram module can be used to recognize the user's spoken input. To set it up:
- Make sure conda is installed on your system.
- Create an API key at https://deepgram.com/ and save it in a file named API_KEY.txt under Common/Data/DeepASR/DeepGram.
- Add the DeepGram module to your configuration.
- Enable the Deep speech module.
- Choose the port and language (the defaults can stay as they are).
- Press the Listen button before talking.
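The API key file created in the steps above is plain text containing only the key. A minimal sketch of how a module might read it at startup (the function name `load_api_key` is illustrative, not part of the actual DeepGram module code):

```python
import pathlib

# Default location expected by the setup steps above.
KEY_FILE = pathlib.Path("Common/Data/DeepASR/DeepGram/API_KEY.txt")

def load_api_key(path: pathlib.Path = KEY_FILE) -> str:
    """Read the Deepgram API key from the text file, stripping
    any surrounding whitespace or trailing newline."""
    key = path.read_text(encoding="utf-8").strip()
    if not key:
        raise ValueError(f"API key file {path} is empty")
    return key
```

Stripping whitespace matters in practice: editors often append a trailing newline when saving the file, and an un-stripped key would fail authentication.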
The DeepGram module can be connected to an LLM module and a Feedback module to create a demo configuration; see the LLM - Deep ASR integration page for a demo of this integration.