A green scanner effect with a depth texture

I made a green scanner effect. The steps: prepare a handsome photo; estimate the scene depth in the photo with machine learning to get a depth texture; write shader code to implement the green scan-line effect. Why do we need to calculate the photo's scene depth? Because the photo is two-dimensional; if you directly use the color or grayscale of the two-dimensional…
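
The post doesn't name the depth model it used, so here is a minimal sketch of both steps, assuming MiDaS (loaded via torch.hub) as the depth estimator and re-implementing the per-pixel scan-line logic on the CPU with numpy instead of in a shader. The file names, band width, and green blend factor are all assumptions for illustration.

```python
# Sketch only: MiDaS is an assumed stand-in for the post's unnamed depth model.
import cv2
import numpy as np
import torch

# 1. Estimate a depth texture from a single photo.
midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
midas.eval()
transform = torch.hub.load("intel-isl/MiDaS", "transforms").small_transform

img = cv2.cvtColor(cv2.imread("photo.jpg"), cv2.COLOR_BGR2RGB)
with torch.no_grad():
    depth = midas(transform(img)).squeeze().numpy()
# MiDaS outputs relative (inverse) depth; normalizing to 0..1 is enough here.
depth = (depth - depth.min()) / (depth.max() - depth.min())
depth = cv2.resize(depth, (img.shape[1], img.shape[0]))

# 2. CPU stand-in for the fragment shader: tint pixels whose depth lies
#    inside a narrow band around a scan value that sweeps over time.
def scan_frame(t: float, band: float = 0.03) -> np.ndarray:
    scan = t % 1.0                       # scan plane position in 0..1
    mask = np.abs(depth - scan) < band   # pixels currently being "scanned"
    out = img.astype(np.float32) / 255.0
    out[mask] = 0.3 * out[mask] + 0.7 * np.array([0.0, 1.0, 0.0])  # blend green
    return (out * 255).astype(np.uint8)

cv2.imwrite("scan.png", cv2.cvtColor(scan_frame(0.5), cv2.COLOR_RGB2BGR))
```

In a real shader the same test runs per fragment, sampling the depth texture and animating the scan value with a time uniform.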

A Web + AR + AI Christmas Tree example

A year ago today, I made a Christmas tree with Blender and Python. This time I tested a workflow: export the model (mesh) generated by the Sverchok plug-in or a Python script from Blender, then import it into a web page for rendering. On top of that, a web version of machine-learning face recognition is added, so last year's Christmas tree "sticks" to the face for fun. – Web + AI
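
The post doesn't say which export format it used; assuming the common glTF route (directly loadable by web renderers such as three.js), the Blender-side export is essentially one call in Blender's Python console. The file path is hypothetical.

```python
# Run inside Blender. Sketch only: glTF (.glb) is an assumed export format.
import bpy

# Export the selected object(s) (e.g. the Sverchok-generated tree mesh)
# as a single binary glTF that web renderers can load directly.
bpy.ops.export_scene.gltf(
    filepath="/tmp/christmas_tree.glb",  # hypothetical path
    export_format="GLB",
    use_selection=True,
)
```

On the web side, three.js's GLTFLoader can then load the exported .glb straight into the scene.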

Audio-visual interaction based on AI paintings

Just tried adding some audio-visual effects onto a portrait painting generated by AI. The full video: https://youtu.be/w4WUDmZYKOQ. The original painting was generated by AI using a StyleGAN2 model. The visual effects were created mostly with shaders, and I used the handy and cool software "Fragment:Flow", built on MaxMSP/Jitter. BGM: "Green Lake Remix 006" by dogone – my old friend who can repair airplanes.
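
In the piece this mapping lives inside MaxMSP/Fragment:Flow, but the underlying idea of audio-reactive visuals is simple: follow the loudness of the audio and feed it to a visual parameter. A minimal numpy sketch of that envelope-follower pattern, with assumed attack and gain values:

```python
import numpy as np

class EnvelopeFollower:
    """Map audio-block loudness (RMS) to a smoothed 0..1 visual parameter."""

    def __init__(self, attack: float = 0.4, gain: float = 4.0):
        self.attack = attack  # smoothing factor (assumed)
        self.gain = gain      # loudness-to-parameter scaling (assumed)
        self.level = 0.0

    def __call__(self, block: np.ndarray) -> float:
        rms = float(np.sqrt(np.mean(block ** 2)))       # block loudness
        self.level += self.attack * (rms - self.level)  # one-pole smoothing
        return min(1.0, self.level * self.gain)         # e.g. a shader uniform
```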

Simulating a Robot Arm with MaxMSP and RobotStudio

This project, made in 2017 (by ManaVR ✖ INT++), was about a robotic arm with multiple screens. In the early stage, I used MaxMSP/Jitter together with ABB's RobotStudio to simulate the robotic arm and the large screen. This article focuses only on how to use MaxMSP for the simulation of the project prototype, making full use of MaxMSP's very convenient TCP communication, multi-screen motion simulation, and other functional modules.
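
As a rough illustration of the TCP link the post relies on, here is a sketch in Python of the communication pattern: stream joint angles over a socket at a fixed rate. The host, port, message format, and test motion are all assumptions; a real setup would match whatever the RobotStudio/RAPID socket server expects.

```python
# Sketch of the MaxMSP <-> RobotStudio TCP pattern; protocol is assumed.
import math
import socket
import time

HOST, PORT = "127.0.0.1", 5000  # hypothetical RobotStudio socket server

with socket.create_connection((HOST, PORT)) as sock:
    t = 0.0
    for _ in range(100):
        # Six joint angles (degrees) for a simple sinusoidal test motion.
        joints = [30 * math.sin(t + i) for i in range(6)]
        msg = ",".join(f"{j:.2f}" for j in joints) + "\n"
        sock.sendall(msg.encode("ascii"))
        t += 0.05
        time.sleep(0.05)  # ~20 Hz updates, like a [metro 50] in Max
```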

Motion capture with TensorFlow.js/PoseNet + MaxMSP + Blender

The key steps of the video above: use TensorFlow.js's PoseNet, running machine learning in the browser, for motion capture; link the PoseNet page to MaxMSP with MaxMSP's Node for Max module; send the human motion data captured by PoseNet back to MaxMSP through Socket.IO; have MaxMSP forward the received data to Blender via OSC; let Blender use the received data to drive the deformation animation in real time.
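
The post doesn't include the Blender-side code, so here is a minimal sketch of the last step, assuming the python-osc package is installed into Blender's Python. The OSC address, object name, and shape key are hypothetical; the pattern is to store incoming values from the OSC thread and apply them to the scene on Blender's main thread via a timer.

```python
# Blender-side sketch: receive OSC pose data and drive a deformation.
import threading

import bpy
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import ThreadingOSCUDPServer

latest = {"y": 0.0}  # written by the OSC thread, read on the main thread

def on_pose(address, x, y):
    latest["y"] = y  # just store; never touch bpy data from this thread

def apply_pose():
    # Runs on the main thread: map the tracked value to a shape key.
    key = bpy.data.objects["Tree"].data.shape_keys.key_blocks["Bend"]
    key.value = max(0.0, min(1.0, latest["y"]))
    return 1 / 30  # re-run ~30 times per second

dispatcher = Dispatcher()
dispatcher.map("/pose/nose", on_pose)  # hypothetical OSC address
server = ThreadingOSCUDPServer(("127.0.0.1", 9001), dispatcher)
threading.Thread(target=server.serve_forever, daemon=True).start()
bpy.app.timers.register(apply_pose)
```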