We’re entering the domain of magic. Augmented reality will see us swapping outfits in the dressing room of our local store without needing to try anything on. We can already use a Snapchat filter to see what we look like as a toddler. And there’s no need to go to IKEA to see what a couch will look like in your living room.
But what about copying and pasting physical objects into your latest Word doc?
This little prototype shows how it can be done. Point your camera at an object nearby and “paste it” into a document:
And yes…it’s a live video of a prototype. The demo comes courtesy of Cyril Diagne, an artist in residence at Google Arts & Culture.
The app uses machine learning to identify an object and remove the background. Then, the real trick, in Cyril’s words, is to use “the OpenCV SIFT trick to find where the phone is pointing at the screen. Send a camera image + a screenshot and you get accurate x, y screen coordinates!”
Cyril says that “latency is ~2.5s for cut and ~4s for paste. There are tons of ways to speed up the whole flow but that weekend just went too fast”.
You can see his full thread here:
The screen point component of the project is available on GitHub if you want to check it out.
The project sparks some thinking about where this could go next. It’s scrapbooking on steroids, and it demonstrates how seamlessly we might be able to move objects from physical to digital spaces.
You can imagine that with a LiDAR-equipped iPad, this process might capture not just a flat image to throw into Photoshop, but a 3D model you could port into a 3D environment (and back again).
But perhaps the more intriguing idea is how it uses your phone to place objects within another screen. This sort of screen-to-screen portability (from reality to phone to computer) hints that an object’s physicality won’t be a barrier to bringing it into digital spaces.