Occipital Scanner and the Future
When you imagine the future, how does it work? Part of the reason this blog exists is to figure out where we are going. Well, there are some very interesting things to look at today.
A company called Occipital has announced a 3D scanner for your iPad. The device will be called the Structure Sensor. It will ship with all of the drivers needed for other devices, and everything for it is open source. The website for the device shows ideas like using the environment around you as a game level, scanning household items, and placing things from the real world into a 3D world at will. It even has an entire SDK for building ideas with it.
That's some pretty amazing tech there. It is basically the Xbox Kinect, only open and ready to be used for anything. Using most of the programs in that link requires a great deal of prior knowledge about installing and running various tools, and it can be fairly difficult to weed your way through it all. There is an official Kinect SDK, but it does not have the same number of programs or the same usability. This is not a joke: some of the projects for the Kinect are being shut down so that people will buy the PC version of the Kinect instead. In other words, the Kinect does the same things as the Structure Sensor, but comes with giant problems.
A device like a Kinect or a Structure Sensor is basically a camera that connects a picture to a 3D image. There are also ways of taking normal pictures and making a 3D image by hand. In fact, a quick search turns up several programs ready to go. From Autodesk's 123D to Make3D and even insight3d, the world can take pictures and create a model. The only problem is that the programs are not perfect. They can make a 3D image, but it can have extra parts that are difficult to remove, or parts of the model can stick out in odd ways. They also demand patience: if you miss a part of the structure, you can't render it.
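To get a feel for what a depth sensor like this actually hands to software, here is a minimal sketch of turning a depth image into a 3D point cloud using the standard pinhole camera model. The function name and the intrinsics values are illustrative assumptions for the example, not anything from Occipital's SDK:

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (in meters) into a 3D point cloud.

    Standard pinhole model: a pixel (u, v) at depth z maps to
    x = (u - cx) * z / fx,  y = (v - cy) * z / fy.
    The intrinsics (fx, fy, cx, cy) come from camera calibration.
    """
    h, w = depth.shape
    # Pixel coordinate grids: u varies across columns, v across rows.
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop pixels with no depth reading

# Toy example: a 2x2 "depth image" where every pixel reads 1 meter.
depth = np.ones((2, 2))
cloud = depth_to_point_cloud(depth, fx=500.0, fy=500.0, cx=1.0, cy=1.0)
```

A real sensor would feed in a 640x480 (or larger) depth frame thirty times a second; stitching those clouds together as the camera moves is what produces the full 3D scan.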
The same problem applies to a single photo. There are manual programs for turning items in pictures into 3D objects, but you are rendering the object on the assumption that it looks the same on the other side of the picture. If you model a pipe, there will not be a pipe hole, so it will not really be a pipe. It will be an object that looks like a pipe from most angles.
We get into an even larger problem when you want to capture something that moves.
Let's say the thing you are trying to film is not standing still. For example, you have a child who suddenly decides to play in the sand. There is no way you can get the child to hold still for several pictures in a row while keeping the same face. The more likely scenario is the child ignoring you and eating the sand. This is where something like a Structure Sensor comes in handy. It can keep recording as the child moves around the sand. From the various angles you get a good bit of data on the child, and on what he is doing.
The point of the sensor on the iPad is more for 3D scanning, but it does have real-time uses. A smart move would be to allow filming and scanning at the same time. Right now the only real use for that is motion capture. You might be surprised to learn that motion capture has a long history, and really needs its own article. The short version is that it came out of medical technology, and then moved over to videogames as they went 3D. So long as the giant blocks moved correctly, they seemed more realistic. Combined with 3D scans of a body and of various props, motion capture can make a person jump off high cliffs as the camera moves around him. In fact, that was one of the first shots to show off the idea, in the movie Batman Forever.
The movie that brought you bat nipples. Yes, the very same one.
With smaller, easier-to-use devices we end up in a very strange world. Instead of scanning a toy at a store and buying it on Amazon, people can now just make it. Using new technologies like 3D printers or fabbers, a person can make whatever toy he wants. Toy companies end up in an odd place, because the store, the manufacturer, and possibly even the designer are no longer necessary. There will be a licensing war as companies try to protect their IPs, but the technology is moving faster than their lawyers. Several companies will go under simply because everyone downloaded and made their toys, gadgets, and so on.
Actually, it gets weirder. Let's say you went to a mall to buy clothes. Traditionally you would check to see if it's your size, which can be a pain for women. The best option is to put it on and check how it feels and looks on you. With a fabber and a scanner, you suddenly have a way to get the exact style and the exact fit you want. You can see how it looks on you before you ever try it on. Once again, the designer cannot profit by the traditional means. Have you noticed in the Huffington Post article how the designers don't realize how much their jobs will change once these things become common in households?
The other side is just as odd for the customer. How does the device know your measurements? It has to know this to create your clothes. Before now, you would need a tailor, or you would type your own measurements into a website. And here is another question: what if you are wearing clothes that do odd things to your figure? A poofy dress or a thick coat can make someone seem larger than they really are.
There are two solutions to this, but they both have the same end result. The first is that you step into your own virtual fitting room naked. It will be able to figure out your measurements perfectly. It will also know what you look like naked.
The other is to scan you with your clothes on. The program figures out where your limbs are and how your body moves. Essentially, the program knows what you look like naked; the more exact the measurements it needs, the more it will know about what you look like. This information will be stored in a database. The next time you pick out clothes, the measurements will already be there, and you can see yourself wearing the new outfit at the push of a button.
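As a rough illustration of how sizing from a scan could work, here is a sketch that slices a body point cloud at a given height and estimates the girth there from the perimeter of the slice's convex hull. Everything here (the function names, the 1 cm slab width) is an assumption made up for the example, not any real fitting-room software:

```python
import numpy as np

def convex_hull(points):
    """Andrew's monotone chain convex hull for an Nx2 set of 2D points."""
    pts = sorted(map(tuple, points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]  # counterclockwise hull, no duplicates

def circumference_at_height(cloud, height, band=0.01):
    """Estimate girth at `height` (meters): take the points within a thin
    horizontal slab, project them to the floor plane, and measure the
    convex-hull perimeter of that slice."""
    slab = cloud[np.abs(cloud[:, 1] - height) < band][:, [0, 2]]
    hull = convex_hull(slab)
    return sum(np.hypot(hull[i][0] - hull[i - 1][0],
                        hull[i][1] - hull[i - 1][1])
               for i in range(len(hull)))
```

A convex hull overestimates slightly on concave cross-sections, but for a chest or waist measurement it is close enough to show the idea: once the scan exists, every traditional tailor's measurement is just a query against the point cloud.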
The more detailed version of this would come in the future: cameras collecting information beyond the light spectrum we can see. The Structure Sensor already uses infrared to help it build a 3D picture.
So, no matter how you look at it, this program will know what you look like naked. That means you can see how you look in a slinky dress or a new bikini. It also means someone else can see what you look like in whatever they want, clown nose and all.
This was a surprisingly fun search to make.
Not only can the scanner add or change your clothing, it can change you. At the surface level of the scan, blemishes can be removed, tans added, and your hair changed to whatever you want. The same skills a Photoshop artist uses to perfect a model can be used on your model. The image of you can be given whatever clown makeup the nefarious villain wants.
Then we get into rendering. Not only will it look like a really nice version of whoever was scanned, the rendering can make them look convincingly human. The technology for this is actually very basic, and can run on a computer today. So the amazing effects that cost hundreds of millions of dollars for The Avengers may soon be in ordinary people's hands.
With this technology you can make a pizza in space. You can also create clothing for yourself, and even make your evil teacher dress up in clown makeup. This should all happen within the next five years. In fact, most of it should start kicking in next year.
After that, we need to look at screens and how they display. If you don't look at the iPad screen, the way-cool trap won't appear. You need the screen itself to see the amazing rendered version of the evil clown teacher you made jump out of your closet. If you want the clown to appear without a screen, you need a hologram. By coincidence, MIT expects them in about ten years. The early versions so far don't look that great, but they are improving. They are also incredibly dangerous, since what is being created today is plasma ignited in mid-air by laser. That will improve over time, and by the time holograms are being used to promote movies they should be safe enough.
In the next five to ten years, when you sit down to watch the big game, your entire room will light up. The stadium will be around you as images are projected off your walls. The players will be holograms of themselves, completely lifelike to your eyes. You will be able to move around the game, the ball, and the stadium at will. If you want to see an amazing catch in slow motion from every angle, you not only can, it will be expected. The announcers will narrate in real time as you move around. There will be pre-chosen angles, but you will be able to go wherever you want; moving around this world will be a game in itself. Every picture of the game will be a fully rendered projection, ready to go. You won't be watching the game, you will have the best view in the stadium.