So, after seeing the coolest touch-free 3D interaction, here’s something nearly as cool, and it ups the cool factor with a single sensor-laden glove and an iPad held up in front of your face as if you’re looking for hidden treasure. The Tangible Media Group at MIT Media Lab is exploring the ability to co-create and manipulate 3D objects away from the computer screen. T(ether) is an experimental app that uses a motion capture camera to map your head and hand, giving you a shared virtual space viewed through the iPad.
The MIT crew includes Matthew Blackshaw, Dávid Lakatos, Hiroshi Ishii, and Ken Perlin. Blackshaw is also one of the creators of Peddl, a simple app for buying and selling things. With T(ether), you wear a glove embedded with sensors on one hand, and in the other you hold an iPad, fitted with the motion capture camera, to see into the virtual world in front of you. By tracking your head and hand movements, the app gives you a perspective into the environment, augmented with the ability to collaboratively create and edit 3D objects.
“T(ether) creates a 1:1 mapping between real and virtual coordinate space allowing immersive exploration of the joint domain. Our system creates a shared workspace in which co-located or remote users can collaborate in both the real and virtual worlds. The system allows input through capacitive touch on the display and a motion-tracked glove. When placed behind the display, the user’s hand extends into the virtual world, enabling the user to interact with objects directly.”
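The 1:1 mapping they describe is worth unpacking: because real and virtual space share one coordinate frame, deciding whether your hand has "extended into the virtual world" reduces to expressing the tracked hand position relative to the tablet and checking which side of the display it sits on. Here's a minimal sketch of that idea (my own names and data shapes, not the MIT code; the tablet is assumed axis-aligned and looking down its negative z-axis for brevity):

```javascript
// Positions are hypothetical {x, y, z} points in the motion-capture
// camera's coordinate frame (meters).

// Express a tracked point in tablet-relative coordinates by
// subtracting the tablet's tracked position.
function toTabletSpace(point, tabletPos) {
  return {
    x: point.x - tabletPos.x,
    y: point.y - tabletPos.y,
    z: point.z - tabletPos.z,
  };
}

// With the tablet facing down -z, a hand with negative relative z is
// behind the display, i.e. reaching "into" the virtual scene.
function handIsBehindDisplay(handPos, tabletPos) {
  return toTabletSpace(handPos, tabletPos).z < 0;
}
```

A real implementation would also apply the tablet's tracked orientation (a full rigid-body transform), but the shared-coordinate-frame idea is the same.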
“T(ether) uses Vicon motion capture cameras to track the position and orientation of tablets, user heads and hands. Server-side synchronization was coded using NodeJS and tablet-side code uses Cinder. The synchronization server forwards tag location to each of the tablets over wifi, which in turn renders the scene. Touch events on each tablet are broadcasted to all other tablets using the synchronization server.”
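The broadcast pattern they describe, where the synchronization server forwards each tablet's touch events to every other tablet, can be sketched in a few lines of JavaScript. This is a guess at the shape of the logic, not their NodeJS code; the function names and event format are mine:

```javascript
// Registry of connected tablets: tabletId -> send function.
const clients = new Map();

// Register a tablet and the function used to push events to it
// (in the real system this would wrap a wifi/socket connection).
function register(tabletId, send) {
  clients.set(tabletId, send);
}

// Forward a touch event from one tablet to all the others. The sender
// is skipped since it already rendered its own touch locally.
function broadcast(senderId, event) {
  for (const [id, send] of clients) {
    if (id !== senderId) send(event);
  }
}
```

The tag-location stream from the Vicon cameras would presumably flow through a parallel channel the same way, with each tablet re-rendering its Cinder scene as updates arrive.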
Ideas for CAD uses? While you’re limited to one hand, I imagine this could be overlaid onto other workspaces, or even mapped in the context of a larger system. Add the ability to read RFID tags or recognize objects, and you could build and interact within an overlay of the actual environment.