ShapeShot can scan a human face in 1/2000 of a second. The result is a high-resolution, watertight, 3D-print-ready mesh file. A favorite Samuel L. Jackson quote comes to mind: “When you absolutely, positively got to [scan] every m*f* in the room, accept no substitutes.” And that’s what the MakerBot store in NYC is planning to do: scan every geeked-out mofo that waddles in the door. All. Day. Long.
ShapeShot at MakerBot
The installation at the MakerBot store will allow people to snap photos of their faces (or their kids’ faces), and the result is automatically uploaded to thingiverse.com; a brilliant marketing move, and an exciting opportunity for those of us who really (really, really) want an extruded-plastic bust of ourselves on the mantel.
The model that’s uploaded to Thingiverse is a simple flat-backed extrusion of the face, since the scanner’s field of view does not include the back of your head (sorry, Ed Mordrake). But if you want to really impress the ladies (or dudes, whatevs), you’ll want to head to ShapeShot.com, where you can download a model of your face superimposed on a bust of the Marquis de Lafayette (not kidding), put your ‘mug on a mug’ (i.e. a bas-relief on a coffee mug), or get an Elven version of yourself, complete with pointy ears and hat, just in time to scare the bejeezus out of any S. Claus foolish enough to set foot in your den.
The technology works the same way Autodesk 123D Catch does, albeit with a more controlled (and therefore more accurate) setup: the machine snaps three photos of you from different angles and knits them together into a surprisingly accurate 3D mesh.
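Under the hood, this kind of photogrammetry boils down to triangulation: once the same facial feature has been matched across two (or three) calibrated photos, its 3D position falls out of a little linear algebra. Here’s a minimal sketch of the classic direct-linear-transform (DLT) triangulation step, with two toy cameras standing in for ShapeShot’s rig. To be clear, this is a generic textbook method, not ShapeShot’s actual code; the matrices and points below are made up for illustration.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: recover a 3D point from two views.

    P1, P2: 3x4 camera projection matrices.
    x1, x2: (u, v) pixel coordinates of the same feature in each image.
    """
    # Each observation u = (P[0]@X)/(P[2]@X) yields one linear equation in X.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous 3D point is the null vector of A (last right singular vector).
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # de-homogenize

# Two toy cameras: one at the origin, one shifted one unit along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

# Project a known 3D point into both cameras, then recover it.
X_true = np.array([0.2, 0.3, 4.0])
h1 = P1 @ np.append(X_true, 1.0)
h2 = P2 @ np.append(X_true, 1.0)
x1, x2 = h1[:2] / h1[2], h2[:2] / h2[2]

print(triangulate(P1, P2, x1, x2))  # ≈ [0.2, 0.3, 4.0]
```

With noiseless data the recovered point matches exactly; with real photos, the SVD gives the least-squares best fit, which is why a controlled rig beats hand-held snapshots.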
Rewind six weeks
Under the flicker of buzzing fluorescent lights in the back room of a nondescript warehouse northwest of Baltimore, at ShapeShot HQ, I’m sitting in front of what looks strikingly similar to my Aiwa stereo from the ’90s, albeit mounted on a t-slot system so the would-be speakers are angled to point directly at my face. And instead of speakers, they’re high-resolution cameras mounted in speaker-shaped rectangular housings. I feel a bit nervous as three or four people gather to watch me try to position my head such that the three images of my face on the screen in front of me–front, left, and right side views–are centered in their respective frames. It’ll be easier in the MakerBot store, they tell me. They’re still tweaking the software.
The flash pops, SLR mirrors click, and my face freezes on the screen momentarily. By the time I’ve adjusted in my seat to take another shot, the first one has been processed and uploaded to ShapeShot.com, where I’m told I can download an .obj file of the resulting mesh. Remember those Chrome ads where the browser loads faster than a potato cannon? They should shoot the same commercial for ShapeShot; it’s that fast.
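One nice thing about the .obj format: it’s plain text, so you can poke at your downloaded face with a few lines of Python before sending it to the printer. This is a bare-bones parser for illustration (the one-triangle input is a stand-in, not an actual ShapeShot download), handling only the `v` and `f` records a simple mesh needs.

```python
import io

def load_obj(lines):
    """Parse vertices and faces from Wavefront .obj text.

    `lines` is any iterable of text lines (an open file works too).
    """
    verts, faces = [], []
    for line in lines:
        parts = line.split()
        if not parts:
            continue
        if parts[0] == "v":
            verts.append(tuple(float(c) for c in parts[1:4]))
        elif parts[0] == "f":
            # .obj face indices are 1-based and may carry /vt/vn suffixes
            faces.append(tuple(int(p.split("/")[0]) - 1 for p in parts[1:]))
    return verts, faces

# A one-triangle stand-in for the real download:
toy = io.StringIO("v 0 0 0\nv 1 0 0\nv 0 1 0\nf 1 2 3\n")
verts, faces = load_obj(toy)
print(len(verts), faces)  # 3 [(0, 1, 2)]
```

Swap the `StringIO` for `open("yourface.obj")` (name hypothetical) and you can count how many triangles your mug is worth.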
The process isn’t perfect. Since eyes are both reflective and refractive, light-based scanning techniques like this don’t handle them well (hence the pitted eyes in the mesh shown above). Teeth are also problematic due to their sheen, so unless the matte green fuzz on your teeth completely covers up the shiny enamel finish, toothy grins are not recommended. Hair never scans well, but the ShapeShot guys have done a surprisingly good job of working with it. Yes, the mesh is very approximate in the fuzzier areas of your head, but unless you’re Sasquatch, you should be able to get a good likeness regardless.
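Those pits matter for printing, because a 3D-print-ready mesh has to be watertight: every edge shared by exactly two triangles. The guts of that check can be sketched in a few lines; this is a generic boundary-edge test, not ShapeShot’s actual pipeline, and the tetrahedron below is just a toy mesh.

```python
from collections import Counter

def boundary_edges(faces):
    """Return edges that belong to exactly one triangle.

    A watertight mesh has none; each edge appearing once marks the rim
    of a hole (like a pitted eye socket in a face scan).
    """
    counts = Counter()
    for face in faces:
        n = len(face)
        for i in range(n):
            edge = tuple(sorted((face[i], face[(i + 1) % n])))
            counts[edge] += 1
    return [e for e, c in counts.items() if c == 1]

# A closed tetrahedron: every edge shared by two faces, hence watertight.
tetra = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
print(boundary_edges(tetra))  # []

# Remove one face and a triangular hole opens up:
print(sorted(boundary_edges(tetra[:3])))  # [(1, 2), (1, 3), (2, 3)]
```

Mesh-repair tools do essentially this to find holes before stitching them shut, which is presumably how a scan with pitted eyes still comes out printable.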
This is exciting stuff, kids. One only wonders when this technology will allow me to print a living, breathing, Victorian-English-speaking replica of myself in another dimension. But I’m guessing that’s a “version 2.0” kind of innovation. Stay tuned.