Speech recognition, then visualizing the 3D text in Blender Eevee in real time

Voice/speech recognition is handled by my OSC controller, BugOSC; the recognized text is then sent to Blender and rendered by Eevee, all in real time. YouTube: https://youtu.be/KVT-y5a963Y Bilibili (Chinese): https://www.bilibili.com/video/BV1A54y1d7Sh/ OSC controller: BugOSC, an OSC controller I developed as a WeChat mini program (微信小程序). Install the WeChat app first, then search for “BugOSC” inside it. “BugOSC” is NOT a native app; it runs inside WeChat. BugOSC now (v0.4) supports speech/voice recognition! Or you can use
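
As a rough illustration of the Blender side of such a pipeline (a minimal sketch, not BugOSC's or the video's actual code), the snippet below listens for OSC messages and writes the received text into a text object that Eevee renders. It assumes the python-osc package is available in Blender's Python, that a text object named “SpeechText” exists in the scene, and the OSC address /speech and port 9001 are purely illustrative.

```python
# Run inside Blender's Script editor: receive text over OSC, update a text object.
import queue
import threading

import bpy
from pythonosc import dispatcher, osc_server

incoming = queue.Queue()

def on_speech(address, *args):
    # Assumes the controller sends the recognized sentence as string arguments.
    incoming.put(" ".join(str(a) for a in args))

def apply_text():
    # Runs on Blender's main thread, which is the safe place to touch bpy data.
    while not incoming.empty():
        bpy.data.objects["SpeechText"].data.body = incoming.get()
    return 0.1  # re-run every 100 ms

disp = dispatcher.Dispatcher()
disp.map("/speech", on_speech)  # hypothetical OSC address
server = osc_server.ThreadingOSCUDPServer(("0.0.0.0", 9001), disp)
threading.Thread(target=server.serve_forever, daemon=True).start()
bpy.app.timers.register(apply_text)
```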

Tutorial: voice-controlling an HTML GIF animation from a mobile phone using MaxMSP and Node.js

A demo and tutorial on voice-controlling an HTML GIF animation from a mobile phone using OSC + MaxMSP + Node.js + Socket.IO. Source code: https://gum.co/whmyz https://www.patreon.com/posts/35323161 The data flow: OSC controller → MaxMSP → Node for Max → animation (HTML web GIF). OSC controller: BugOSC, an OSC controller I developed as a WeChat mini program (微信小程序). Install the WeChat app first, then search for “BugOSC” inside it. Or you can use
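
If you want to test the Max end of that chain without a phone, a few lines of python-osc can stand in for the controller; the port 8000 and the /voice address here are assumptions for a patch with a matching [udpreceive] object, not BugOSC's actual defaults.

```python
# Stand-in for the phone-side OSC controller, for testing the Max patch locally.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 8000)  # host and port of the Max patch
client.send_message("/voice", "play")        # illustrative address and payload
```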

Interaction between a mobile phone and Blender animation through OSC

Just another demo of interaction between a mobile phone and a Blender animation through OSC. Blender Eevee animation: BLUE FOX Creation https://youtu.be/TYkPvFLDBNI NodeOSC addon for Blender: maybites https://github.com/maybites/blender.NodeOSC OSC controller: BugOSC, an OSC controller I developed as a WeChat mini program (微信小程序). Install the WeChat app first, then search for “BugOSC” inside it. Detailed how-to video: “How to make an interactive audiovisual effect in 5 minutes using Blender and MaxMSP” https://youtu.be/ssVcU8xsRT8 BGM: “Limit

How to make an interactive audiovisual effect in 5 minutes using Blender and MaxMSP

Blender is now a new force in 3D art. New to many, but not young: it is about twenty years old. In [Experimental Programming], I generally use Blender as an out-of-the-box Python runtime environment, as in: Using Blender to run Python and visualizing the Fourier Series. This time, it is a simple and crude VJ / music visualization / audio-visual interaction tool: My fingers hurt a little, so let's change the prop: One more: Controlled by mobile

How to use phone dial tones as an interaction controller and decode DTMF signals

The first article of 2020 accidentally lands on an old-school topic. There is a scene in 名探偵コナン 戦慄の楽譜フルスコア (Detective Conan: Full Score of Fear), released ten years ago: Conan, standing in the middle of the water, first knocks the phone receiver off its hook on the shore with a spectacular long shot, then closes his eyes and shouts loudly, remotely dialing the 110 emergency call. This time I will talk about how to use sound
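
To give a taste of the decoding side, here is a minimal sketch of DTMF detection using the Goertzel algorithm (my own illustration, not necessarily the approach in the article); the 8 kHz sample rate, block length, and synthesized test tone are assumptions, and a real detector would also check energy thresholds and tone timing.

```python
# Detect which DTMF key a block of audio samples contains.
import numpy as np

LOW_FREQS = [697, 770, 852, 941]          # row frequencies (Hz)
HIGH_FREQS = [1209, 1336, 1477, 1633]     # column frequencies (Hz)
KEYPAD = [["1", "2", "3", "A"],
          ["4", "5", "6", "B"],
          ["7", "8", "9", "C"],
          ["*", "0", "#", "D"]]

def goertzel_power(samples, target_freq, sample_rate):
    # Goertzel: measure the energy near one target frequency without a full FFT.
    n = len(samples)
    k = int(0.5 + n * target_freq / sample_rate)
    coeff = 2 * np.cos(2 * np.pi * k / n)
    s_prev, s_prev2 = 0.0, 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev ** 2 + s_prev2 ** 2 - coeff * s_prev * s_prev2

def detect_key(samples, sample_rate=8000):
    low = max(LOW_FREQS, key=lambda f: goertzel_power(samples, f, sample_rate))
    high = max(HIGH_FREQS, key=lambda f: goertzel_power(samples, f, sample_rate))
    return KEYPAD[LOW_FREQS.index(low)][HIGH_FREQS.index(high)]

# Example: synthesize the tone for "5" (770 Hz + 1336 Hz) and decode it back.
sr = 8000
t = np.arange(0, 0.05, 1 / sr)
tone = np.sin(2 * np.pi * 770 * t) + np.sin(2 * np.pi * 1336 * t)
print(detect_key(tone, sr))  # prints "5"
```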