Quickly now. Stick your hands against the screen. Now, lift them off slightly and move your head to either side. Did you see that? A physical 3-dimensional object rising from the aether around your digits? No, no you didn’t, because it hasn’t happened yet… but it could.
Three technologies have surfaced showing how we may be able to interact, build, create and manipulate geometry in the future, the near future. Bring all three together and you get something even more spectacular.
Hands-free manipulation and holographic viewing
Finally, some revealing technology from Microsoft which moves 3D off the screen… kind of. Director of Microsoft Applied Sciences Steven Bathiche shows exactly what researchers with usable appendages are doing in the area of hands-free manipulation and holographic imagery. While nothing entirely new, it uses the Microsoft Kinect device to track different people, allowing each of them to view and manipulate different objects.
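To give a feel for what that tracking boils down to, here's a minimal sketch in Python. The sensor feed is entirely faked (a real setup would pull hand coordinates from the Kinect SDK each frame); the point is how a tracked hand position can drive the rotation of an on-screen model.

```python
import math

def hand_to_rotation(hand_x, screen_width, max_angle=math.pi):
    """Map a horizontal hand position to a model rotation angle."""
    # Normalise to [-1, 1] around the centre of view, then scale.
    normalised = (hand_x / screen_width) * 2 - 1
    return normalised * max_angle

def rotate_y(point, angle):
    """Rotate a 3D point about the vertical (y) axis."""
    x, y, z = point
    return (x * math.cos(angle) + z * math.sin(angle),
            y,
            -x * math.sin(angle) + z * math.cos(angle))

# Fake a hand sweeping across a 640px-wide sensor image.
model_vertex = (1.0, 0.0, 0.0)
for hand_x in (0, 320, 640):
    angle = hand_to_rotation(hand_x, 640)
    print(hand_x, rotate_y(model_vertex, angle))
```

A hand at the centre of view leaves the model at rest; sweeping left or right spins it. The real demos layer multi-person tracking and a view-dependent display on top, but the core mapping is this simple.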
Tobii is integrating its eye-tracking technology into Lenovo laptops. These are only prototypes at the moment, but it is coming, believe me, it is coming. It may be up to two years off, but even now it's highly accurate. Your eyes are typically faster than a mouse at navigating around the screen. Apply this to cursor control, model viewing or accessing additional commands or components off the screen and you have a whole new way to design.
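One reason gaze-driven cursors aren't trivial: raw gaze samples jitter, so driving the pointer directly would make it shake. A common fix is simple exponential smoothing. Here's a sketch with faked gaze data standing in for a tracker's output:

```python
def smooth_gaze(samples, alpha=0.3):
    """Exponentially smooth a stream of (x, y) gaze samples.

    Lower alpha = steadier cursor, but more lag behind the eye.
    """
    smoothed = []
    sx, sy = samples[0]
    for x, y in samples:
        sx = alpha * x + (1 - alpha) * sx
        sy = alpha * y + (1 - alpha) * sy
        smoothed.append((sx, sy))
    return smoothed

# Noisy fixation around (400, 300) on screen.
raw = [(402, 298), (397, 305), (404, 296), (399, 301)]
cursor_path = smooth_gaze(raw)
```

The smoothed path hugs the fixation point instead of bouncing between samples, which is what makes gaze usable for selecting commands or orbiting a model.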
This is where you, as 3D geometry, become… virtual. Fabricate Yourself is a fab(ulous) project from Interactive Fabrication. Now, this is being done to produce 3D prints of yourself. Tiny 3D prints, about 3cm x 3cm. However, it mixes the use of the Microsoft Kinect (used above), the production of a real-time 3D mesh on-screen and a Stratasys Dimension uPrint 3D printer. Take a look.
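The core step in a pipeline like this is turning a depth image (which is what the Kinect captures) into a triangle mesh that printing software can consume. A minimal sketch, with a tiny faked depth map in place of live sensor data: each 2x2 block of depth samples becomes two triangles.

```python
def depth_to_mesh(depth):
    """Convert a 2D grid of depth values into vertices and triangles."""
    rows, cols = len(depth), len(depth[0])
    # One vertex per depth sample: (x, y, z).
    vertices = [(x, y, depth[y][x])
                for y in range(rows) for x in range(cols)]
    triangles = []
    for y in range(rows - 1):
        for x in range(cols - 1):
            i = y * cols + x  # index of the quad's top-left vertex
            triangles.append((i, i + 1, i + cols))
            triangles.append((i + 1, i + cols + 1, i + cols))
    return vertices, triangles

# A fake 3x3 depth map (distances from the sensor).
depth = [[1.0, 1.1, 1.2],
         [1.0, 0.9, 1.1],
         [1.1, 1.0, 1.0]]
verts, tris = depth_to_mesh(depth)
print(len(verts), len(tris))  # 9 vertices, 8 triangles
```

Run this every frame and you have the real-time on-screen mesh; capture one frame, close the surface and export it, and you have something a printer can build.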
Time to bring it together
It's certainly easy to see how all of these can relate to designing products in 3D – move objects without touching them, prompt commands with your eyes, see objects produced in physical form in real-time. Put them all together, though, and you get something much more, particularly when you add the physical material of the 3D print. Think of it like this: instantly creating physical 3D objects in front of you, with interaction based on the position of your face, eyes or hands – flash-forming, if you will. It's nice to see all of the technology being developed independently, but it's time for it to be brought together.
How could you use all of these?