From the interview:
1. Why choose a projector versus goggles?
Pranav: “I took this from the idea, the concept, and developed the software and the hardware. Pattie is my advisor; she helps me brainstorm: ‘What should we do next?’ From the beginning, I started working on the concept of merging the physical world and the digital world, as with my earlier project, Quickies, which merged physical sticky notes with digital data.”
3. Where’s the battery?
Pranav: “The projector itself contains a battery inside, with 3 hours of battery life. The other thing is, I’m making a small solar panel. I’m trying that out right now because I want to go with sustainable energy, and so you don’t always need to be charging. Whenever you’re outside, you’ll be charging, and with the system, you can be outside more.”
4. How does the software know what you want the system to do next?
Pranav: “The software works on the basis of computer vision. There’s a small camera acting as your eye: your third eye, your digital eye, connecting you to the world of digital information. Processing happens on your mobile phone, using computer vision algorithms that we developed ourselves, taking advantage of some open-source code but mainly writing code ourselves here at the lab. We had to write a lot of algorithms from scratch because there was nothing that did what we wanted. We wrote 50,000 lines of code. The software recognizes 3 kinds of gestures:
+ multitouch gestures, like the ones you see on Microsoft Surface or the iPhone, where you touch the screen and make the map move by pinching and dragging;
+ what I call freehand gestures, like framing your hands to take a picture, or the namaste gesture I do to start the projection on the wall;
+ iconic gestures, drawing an icon in the air: whenever I draw a star, show me the weather; when I draw a magnifying glass, show me the map.
You might want to use other gestures that you use in everyday life; this system is very customizable. Because it’s my choice, I do it my way in the demo, but I don’t want the user to change their habits. I want the Sixth Sense to change for them.”
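SixthSense’s code is not public, so the three-way gesture routing described above can only be sketched. Below is a minimal, hypothetical dispatch in Python: it assumes the vision pipeline has already reduced a frame to a small event record (the keys `touches`, `pose`, and `icon`, and all pose/icon/command names, are illustrative inventions, not the project’s actual API).

```python
# Hypothetical sketch of the three gesture kinds from the interview:
# multitouch, freehand, and iconic. The real system tracks fingertip
# markers with computer vision; here that work is assumed done, so
# only the routing logic remains visible.

# Static hand poses mapped to commands (names are illustrative).
FREEHAND_POSES = {
    "framing": "take_photo",       # framing gesture -> take a picture
    "namaste": "start_projection", # namaste -> project on the wall
}

# Air-drawn icons mapped to commands (names are illustrative).
ICON_COMMANDS = {
    "star": "show_weather",
    "magnifying_glass": "show_map",
}

def classify_gesture(event):
    """Route one recognized gesture event to a (kind, command) pair.

    Hypothetical event keys:
      touches - number of fingertips on the projected surface
      pose    - name of a matched static hand pose, if any
      icon    - name of a matched air-drawn icon, if any
    """
    if event.get("touches", 0) >= 2:
        # Pinch/drag on the projected image, as on a multitouch screen.
        return ("multitouch", "pinch_or_drag")
    pose = event.get("pose")
    if pose in FREEHAND_POSES:
        return ("freehand", FREEHAND_POSES[pose])
    icon = event.get("icon")
    if icon in ICON_COMMANDS:
        return ("iconic", ICON_COMMANDS[icon])
    return ("unknown", None)

print(classify_gesture({"touches": 2}))       # -> ('multitouch', 'pinch_or_drag')
print(classify_gesture({"pose": "namaste"}))  # -> ('freehand', 'start_projection')
print(classify_gesture({"icon": "star"}))     # -> ('iconic', 'show_weather')
```

The customizability Pranav mentions falls out of this shape naturally: remapping the two dictionaries per user changes what each pose or icon does without touching the recognition code.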
5. Have you thought about using this device for gaming?
Pranav: “Definitely. We can do all the kinds of gaming that exist now, but not only that: we can use the physical world inside the game. You can play with physical stuff and invent some new games. Maybe you can hide something in the physical world, open a book and hide something in the pages. We’ve been using “Minority Report” as shorthand to explain the device, or the heads-up screen in “RoboCop.” But was this device influenced by science fiction? I’m not a very big fan of science fiction.
I think that I’m a very big fan of living in the physical world. I’m good with digital technology, but I start to miss the physical world. I miss riding my bike, talking to friends. Technology now separates us from the physical world more and more. Even social networking sites are taking us away from the
physical world. At the lab, we like making things that we can touch, we can feel, we can take with us wherever we want to go, that we know how to interact with. The digital world has power because it has dynamic information, but it’s important that we stay human instead of being another machine sitting in front of a machine. Whatever science fiction movies we watch now, we can make the technology real in two days. What we can do is not important. What we should do is more important.”
There are some really interesting comments, not only from people interested in making computer interfaces, but from people asking, “Why can’t we use this system for people who have accessibility problems, blind people, deaf people?” The camera can act as a third eye for a blind person and tell them what it sees. It could be an ear for a deaf person. Ideas are also coming from developing countries, in part because of the low cost. It cost me $350 to build Sixth Sense in the lab, but the price will come down.