A Web+AR+AI Christmas Tree example

A year ago today, I made a Christmas tree using Blender and Python. This time I tested a workflow: export the model (mesh) generated by the Sverchok plug-in or a Python script from Blender, then import it into a web page for rendering. I also added web-based machine-learning face recognition, so last year’s Christmas tree “sticks” to the face for fun – Web + AI.
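A minimal sketch of the Blender side of that export step, assuming the generated mesh is an ordinary Blender object; the object name and output path are placeholders, and parameter names follow recent Blender releases:

```python
# Export a generated mesh from Blender as glTF for web rendering.
# Run inside Blender's Python console or as a script.
# "XmasTree" and the output path are hypothetical placeholders.
import bpy

obj = bpy.data.objects["XmasTree"]  # the mesh generated by Sverchok / script

# Select only the tree so the export contains just this mesh
bpy.ops.object.select_all(action='DESELECT')
obj.select_set(True)
bpy.context.view_layer.objects.active = obj

# A binary glTF (.glb) is convenient for web renderers
bpy.ops.export_scene.gltf(
    filepath="/tmp/xmas_tree.glb",
    use_selection=True,
)
```

A .glb file exported this way can then be loaded in the page with any web renderer, for example three.js’s GLTFLoader.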

Audio-visual interaction based on AI paintings

Just tried adding some audio-visual effects onto a portrait painting generated by AI. The full video: https://youtu.be/w4WUDmZYKOQ The original painting was generated by AI, using a StyleGAN2 model. The visual effects were created mostly with shaders, and I used the handy and cool software “Fragment:Flow”, based on MaxMSP/Jitter. BGM: “Green Lake Remix 006” by dogone – my old friend who can repair airplanes.
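The shader work itself lives inside Fragment:Flow, but the core idea, mapping a per-frame audio envelope onto a visual parameter, can be sketched in plain Python; the WAV file name is a placeholder and the audio is assumed to be 16-bit mono:

```python
# Sketch: derive a per-frame audio envelope to drive a visual parameter
# (e.g. a shader uniform). Standard library + NumPy only; "bgm.wav" is
# a placeholder for the actual track, assumed 16-bit mono PCM.
import wave
import numpy as np

FPS = 30  # target frame rate of the visuals

with wave.open("bgm.wav", "rb") as wf:
    sr = wf.getframerate()
    raw = wf.readframes(wf.getnframes())
samples = np.frombuffer(raw, dtype=np.int16).astype(np.float32) / 32768.0

hop = sr // FPS  # audio samples per video frame
n_frames = len(samples) // hop
envelope = np.array([
    np.sqrt(np.mean(samples[i * hop:(i + 1) * hop] ** 2))  # RMS per frame
    for i in range(n_frames)
])
envelope /= max(envelope.max(), 1e-9)  # normalize to 0..1

# Each value could be sent to a shader uniform, one per video frame
print(envelope[:10])
```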

“The First Dance” AILog.006 n’ “There is a cycle” AILog.007

More computational audio-reactive visuals, generated by StyleGAN2. There is still a lot to do. AILog.001–005 are images: https://www.instagram.com/avantcontra/ Donate: https://www.patreon.com/avantcontra There are many articles, patches, source code, and some advanced Patron-only content there. Or you can get source code/patches directly on Gumroad: https://gumroad.com/avantcontra If you like something, you could buy me a coffee 😀 More articles: https://medium.com/@contra [Experimental Programming]: the meaning is derived from experimental art, experimental electronics, or experimental music. And that’s…
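A rough sketch of the audio-reactive recipe behind visuals like these, with the StyleGAN2 generator stubbed out; a real run would call a pretrained network in place of the placeholder generate_image, and the envelope would come from the audio analysis sketched above:

```python
# Sketch: audio-reactive latent walk for a StyleGAN2-style generator.
# The generator is stubbed so the sketch runs end to end; latent size
# 512 matches StyleGAN2's z space.
import numpy as np

LATENT_DIM = 512
rng = np.random.default_rng(0)

def generate_image(latent):
    # Placeholder for a real StyleGAN2 forward pass
    return np.zeros((256, 256, 3))

# Two anchor points in latent space; the audio envelope moves between them
z_a = rng.standard_normal(LATENT_DIM)
z_b = rng.standard_normal(LATENT_DIM)

# Hypothetical per-frame audio envelope in 0..1 (e.g. RMS per video frame)
envelope = (np.sin(np.linspace(0, 8 * np.pi, 240)) + 1) / 2

for t, amp in enumerate(envelope):
    # Louder audio pushes the latent toward z_b; quiet frames stay near z_a
    z = (1 - amp) * z_a + amp * z_b
    frame = generate_image(z)
    # ... write `frame` to a video encoder here
```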

Motion capture with TensorFlow.js/PoseNet + MaxMSP + Blender

The key steps of the video above, with a Blender-side sketch after the list:

1. Use TensorFlow.js’s PoseNet model for web-based machine-learning motion capture.
2. Connect the PoseNet page to MaxMSP with the Node for Max module provided by MaxMSP.
3. The human motion data captured by PoseNet is sent back to MaxMSP through Socket.IO.
4. MaxMSP forwards the received data to Blender via OSC.
5. Blender uses the received data to drive the deformation animation in real time.
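A sketch of step 5, assuming the third-party python-osc package is installed into Blender’s bundled Python; the port, OSC address, object name, and shape-key name are all placeholders:

```python
# Sketch: receive OSC pose data in Blender and drive a shape key.
# Assumes "python-osc" is installed into Blender's Python; the
# "/pose" address, "Body" object, and "Deform" key are hypothetical.
import threading
import bpy
from pythonosc import dispatcher, osc_server

latest = {"value": 0.0}  # most recent pose value, written by the OSC thread

def on_pose(address, *args):
    # e.g. MaxMSP sends "/pose <float>"; keep only the newest value
    latest["value"] = float(args[0])

disp = dispatcher.Dispatcher()
disp.map("/pose", on_pose)

# Listen for OSC in a background thread so Blender's UI stays responsive
server = osc_server.ThreadingOSCUDPServer(("127.0.0.1", 9001), disp)
threading.Thread(target=server.serve_forever, daemon=True).start()

def apply_pose():
    # Runs on Blender's main thread; map the value onto a shape key
    key = bpy.data.objects["Body"].data.shape_keys.key_blocks["Deform"]
    key.value = max(0.0, min(1.0, latest["value"]))
    return 1 / 30  # re-run ~30 times per second

bpy.app.timers.register(apply_pose)
```

Splitting the work between a listener thread and a bpy.app.timers callback keeps the socket from blocking Blender while still mutating scene data only on the main thread.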