What just happened? Machine learning is finding a multitude of applications in the modern world, but Google wants to show that it can be used in less serious ways. The search giant has just unveiled Move Mirror, an experiment that uses AI smarts to match the poses you strike with images of other people in the same position, all in real time.

To try out Move Mirror, simply head to the website and allow access to your computer's camera. The site detects each pose you make using Google's open-source PoseNet computer vision model, and the whole system is powered by TensorFlow.js, a JavaScript library that runs machine learning models directly in the browser.
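For a flavor of how little glue code that takes, here's a minimal sketch using the public @tensorflow-models/posenet package. The `webcam` element ID and the options shown are illustrative assumptions for this sketch, not Move Mirror's actual source:

```js
import * as posenet from '@tensorflow-models/posenet';

async function detectPose() {
  // Load the model; the weights are fetched once, and all inference
  // then runs inside the browser tab.
  const net = await posenet.load();

  // Estimate a single pose from a <video> element showing the webcam
  // feed (the 'webcam' ID is a placeholder for this sketch).
  // flipHorizontal mirrors the image so your movements match the screen.
  const video = document.getElementById('webcam');
  return net.estimateSinglePose(video, { flipHorizontal: true });
}
```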

Google says the system identifies 17 different joint positions and body parts, including shoulders, ankles, and hips. The company adds that elements such as body type, height, and gender aren't taken into account.
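Each of those 17 detections comes back from PoseNet as a named keypoint with an image coordinate and a confidence score, which is why attributes like body type never enter the picture. A short sketch of reading them out, continuing from the example above (the `logKeypoints` helper is a hypothetical name):

```js
// Log PoseNet's 17 keypoints: each has a part name such as
// 'leftShoulder' or 'rightAnkle', an (x, y) position in image
// coordinates, and a confidence score between 0 and 1.
function logKeypoints(pose) {
  for (const { part, position, score } of pose.keypoints) {
    console.log(
      `${part} at (${Math.round(position.x)}, ${Math.round(position.y)}),` +
      ` confidence ${score.toFixed(2)}`
    );
  }
}
```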

The poses are compared against a database of more than 80,000 photos to find those that most closely match your current position. As you move, the matched images change with you, and you can even capture your moves and their mirrored pictures as a GIF.
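Under the hood, Google's write-up describes the matching as a nearest-neighbor search: each pose is flattened into a vector of keypoint coordinates, and candidates are ranked by cosine similarity. Here's a simplified sketch of that idea; the real system also normalizes poses, weights each joint by its confidence score, and uses a faster search structure than this brute-force loop, and the function names are placeholders:

```js
// Flatten a PoseNet pose into a plain vector of (x, y) coordinates.
function toVector(pose) {
  return pose.keypoints.flatMap(k => [k.position.x, k.position.y]);
}

// Cosine similarity between two equal-length vectors: 1 means the
// poses point the same way, -1 means they are opposites.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Brute-force search: return the database pose most similar to the
// user's current pose.
function bestMatch(userPose, databasePoses) {
  const userVector = toVector(userPose);
  let best = null;
  let bestScore = -Infinity;
  for (const candidate of databasePoses) {
    const similarity = cosineSimilarity(userVector, toVector(candidate));
    if (similarity > bestScore) {
      bestScore = similarity;
      best = candidate;
    }
  }
  return best;
}
```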

While the experiment is mostly done in the name of fun, Google hopes Move Mirror will help make machine learning more accessible to those interested in the field. There's a GitHub repository and a companion blog post if you want to learn more about PoseNet for TensorFlow.js.

With companies under the spotlight for the way they handle user data, Google emphasizes that because all of the processing happens in the browser, no images are ever sent to its servers.