Simulate virtual mouse clicks to interact with TouchDesigner while running a diffusion model plugin to re-generate existing animations.
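To give a feel for what "virtual mouse clicks" means here, below is a minimal sketch (assuming pyautogui; the actual repo scripts may use a different library):

```python
import pyautogui

# Move the OS cursor to an absolute screen position, then click.
# TouchDesigner picks this up exactly like a real mouse event.
pyautogui.moveTo(960, 540, duration=0.1)  # x, y in screen pixels (example values)
pyautogui.click()
```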
The .py files are ready to use. I've added notes in between the code explaining what is happening, in case you're wondering. (Otherwise you can always ask your preferred AI chatbot about it ;)
It's strongly recommended to follow a TouchDiffusion online tutorial to get it working properly inside TouchDesigner. I followed this one: https://www.youtube.com/watch?v=3WqUrWfCX1A&t
Make sure to install all the packages the .py scripts import so they run successfully in your preferred IDE. You can double-check those libraries in the provided scripts.
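As a starting point, the install usually boils down to something like this (the package list is my assumption of the usual suspects for this kind of project; defer to the actual imports in the scripts):

```
pip install opencv-python mediapipe pyautogui
```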
In case you're new to TouchDesigner: it's real-time software for creating graphics that can be driven by data from multiple input sources (webcam, audio, sensors, etc.), making it extremely flexible for content creation.
Stable Diffusion plugin installed and working properly inside TouchDesigner.
Early tests using OpenCV to enable gesture-driven features like mouse-moving or clicking modes.
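Here's a rough sketch of that loop, assuming MediaPipe's Hands solution and pyautogui for the cursor (landmark indices 8 and 4 are the index and thumb tips; the pinch threshold of 0.05 is an assumption you'd tune, and this is not the exact repo code):

```python
import cv2
import mediapipe as mp
import pyautogui

screen_w, screen_h = pyautogui.size()
hands = mp.solutions.hands.Hands(max_num_hands=1, min_detection_confidence=0.7)
cap = cv2.VideoCapture(0)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    frame = cv2.flip(frame, 1)  # mirror the image so movement feels natural
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        lm = results.multi_hand_landmarks[0].landmark
        index_tip, thumb_tip = lm[8], lm[4]  # MediaPipe fingertip landmarks
        # Landmarks are normalized 0..1, so scale them to screen pixels.
        pyautogui.moveTo(index_tip.x * screen_w, index_tip.y * screen_h)
        # Thumb-to-index pinch = click. In practice you'd debounce this
        # so a held pinch doesn't fire a click every frame.
        if abs(index_tip.x - thumb_tip.x) + abs(index_tip.y - thumb_tip.y) < 0.05:
            pyautogui.click()
    cv2.imshow("hand tracking", frame)
    if cv2.waitKey(1) & 0xFF == 27:  # Esc quits
        break

cap.release()
cv2.destroyAllWindows()
```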
- I'm running Python 3.8.0 because MediaPipe didn't seem to run on later versions (at least for me).
- I'm using the main version of TouchDiffusion. I tried to install the portable version, but it simply wouldn't run. Don't get discouraged if neither version works properly at first; reinstalling the main version did the trick for me.
- In the -Mouse In- node inside TouchDesigner, you have to adapt the mouse screen values to the extent of your screen. Check the min/max values at the up/down and left/right edges and adjust accordingly (see the remapping sketch after this list).
- I'm currently working on training my own model to improve hand detection, adding a dataset that covers different lighting conditions and hand/finger poses.
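For the Mouse In calibration above, the adjustment is just a linear remap from your screen's pixel range to whatever min/max values the node reports at the edges. A hypothetical helper (the -1.0/1.0 output range and the 1920px width are placeholder values; measure your own by moving the cursor to each screen edge and noting what the node outputs):

```python
def remap(value, in_min, in_max, out_min, out_max):
    """Linearly map value from [in_min, in_max] to [out_min, out_max]."""
    return out_min + (value - in_min) * (out_max - out_min) / (in_max - in_min)

# Example: right edge of a 1920px-wide screen mapped to an assumed -1..1 node range.
x_node = remap(1920, 0, 1920, -1.0, 1.0)  # -> 1.0
```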