Be a Human Controller: Move Tetris Blocks by Moving Your Body – a Web Application with Nuxt and TensorFlow.js



This post introduces an experimental web application built with TensorFlow.js and Nuxt.js. TensorFlow.js is a splendid library from Google, the browser counterpart of TensorFlow, which pioneered the current AI boom. It can run models generated by machine learning in real time and at high speed using the web browser's rendering engine.

Several games created with TensorFlow.js are showcased on the official site to demonstrate how to use it. Among them, the Pac-Man demo is innovative in that users control the game character with specific poses registered through a web camera before the game starts. In other words, each of the user's poses is mapped to a direction the character moves.

The PoseNet demo is also fantastic: it detects human skeletons in real time and with good accuracy. So this time I tried to create a human Tetris, a demo application in which users control Tetris blocks with their own bodies.


Game Logic

The game logic and display of Tetris itself are drawn using p5.js. The method of linking p5.js and Vue is summarized in the previous article, so please take a peek if you are interested. The image below describes the data flow used to control a Tetris block. First, skeleton data is acquired from PoseNet; it is then passed through Vue to the rendering engine, which moves the block. In principle, Tetris only needs a button for rotating the block and a cursor for moving it from side to side. In this game, the user's gestures take the place of that controller.
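The mapping from a detected pose to a game command can be kept as a small pure function between PoseNet and the p5.js renderer. The sketch below is an illustration, not the article's actual code: the gesture scheme (lean left/right to move, raise the right wrist above the nose to rotate) and the function names are my own assumptions; the keypoint shape `{ part, score, position: { x, y } }` is the one PoseNet returns.

```javascript
// Find one keypoint by part name in a PoseNet keypoints array.
function getPart(keypoints, part) {
  return keypoints.find((k) => k.part === part);
}

// Hypothetical pose-to-command mapping: leaning into the left or right
// third of the video moves the block, raising the right wrist above the
// nose rotates it. Returns 'left' | 'right' | 'rotate' | null.
function poseToCommand(keypoints, videoWidth, minScore = 0.5) {
  const nose = getPart(keypoints, 'nose');
  const rightWrist = getPart(keypoints, 'rightWrist');
  if (nose && rightWrist &&
      nose.score >= minScore && rightWrist.score >= minScore &&
      rightWrist.position.y < nose.position.y) {
    return 'rotate'; // smaller y means higher up in image coordinates
  }
  if (!nose || nose.score < minScore) return null;
  if (nose.position.x < videoWidth / 3) return 'left';
  if (nose.position.x > (videoWidth * 2) / 3) return 'right';
  return null;
}
```

Such a function is easy to unit-test without a webcam, and the p5.js `draw()` loop only has to forward its return value to the Tetris logic. Note that webcam video is often mirrored, so "left" may need flipping depending on how the video is rendered.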


All keypoints are indexed by part id; you can check the index in the official repository. PoseNet also supports multi-person pose estimation, but choosing the multi-person mode slows detection down.
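Concretely, PoseNet returns 17 keypoints in the fixed order listed in the official tfjs-models repository, so each part can be addressed by index or by name. A small helper (my own convenience function, not part of the PoseNet API) turns the array into a name-keyed map:

```javascript
// The 17 PoseNet part names, in part-id order (0 = nose, 16 = rightAnkle),
// as listed in the official repository.
const PART_NAMES = [
  'nose', 'leftEye', 'rightEye', 'leftEar', 'rightEar',
  'leftShoulder', 'rightShoulder', 'leftElbow', 'rightElbow',
  'leftWrist', 'rightWrist', 'leftHip', 'rightHip',
  'leftKnee', 'rightKnee', 'leftAnkle', 'rightAnkle',
];

// Build a name → keypoint map so game code can write pose.nose.position.x
// instead of remembering numeric part ids.
function indexByPart(keypoints) {
  const byPart = {};
  for (const kp of keypoints) byPart[kp.part] = kp;
  return byPart;
}
```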



Since TensorFlow.js is built on WebGL, its performance also depends on the device's GPU. In my environment, single-pose estimation ran at around 15 fps; I expect the speed to drop a little further in multi-pose mode.
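Because per-frame timing jitters, a figure like the 15 fps above is best read from a smoothed counter. A minimal sketch (my own helper, assuming timestamps in milliseconds such as `performance.now()` passed from the render loop):

```javascript
// Exponential-moving-average fps meter. Call the returned tick() once per
// frame with a millisecond timestamp; it returns the smoothed fps so far.
function createFpsMeter(alpha = 0.1) {
  let last = null; // timestamp of the previous frame
  let fps = 0;     // smoothed estimate
  return function tick(now) {
    if (last !== null) {
      const instant = 1000 / (now - last); // fps of this single frame gap
      fps = fps === 0 ? instant : alpha * instant + (1 - alpha) * fps;
    }
    last = now;
    return fps;
  };
}
```

In a p5.js sketch this would be called once inside `draw()` and the result overlaid on the canvas.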

PC: MacBook Pro(Retina, 15-inch, Mid 2015)
Processor: 2.2 GHz Intel Core i7
Memory: 16 GB
Graphics: Intel Iris Pro 1536 MB


Until now, detecting skeleton data was not feasible without an infrared sensor device such as a Kinect. PoseNet, however, is quite attractive because it works with reasonable accuracy and speed using a web camera alone, and I feel it can be applied successfully to interactive content. You can try this web application here. If the application on that site does not start, please build it from GitHub instead.
