Graduate Requirement – Meng Fu

“Have you ever encountered this situation? You want to tell your friends on social media what you are doing, but you find that pictures can’t show everything. You want a more lifelike way to express yourself, but you just end up typing a lot and sending similar pictures.” When I ask Richard why his team wants to build the air modeling system, he answers with this question of his own. “We plan to create a generally easier, more autonomous, and more accurate way to capture and create a 3D model of any object of interest.”

Richard is an Electrical Engineering student who dreams of working in the VR field. Before graduation, he decided to team up with his classmates to gain more experience in developing and designing VR products. Inspired by an existing 3D scanner project[1], he started to develop a new approach to 3D scanning.

Existing 3D scanning technology is heavily limited in the scale of objects it can handle, and it works slowly, to say nothing of the long time needed to build the model on a computer after the data is gathered. What if there were a device that is easy to control and scans faster? Richard and his team aspire to make conveying information digitally more convenient. “Our goal is to be able to create a 3D model of anything, ranging from clothes to buildings. We want to remove the constraints of 3D modeling. The limitation of size will no longer be an issue. We also want to simplify it so that there will not be a learning curve to use it properly.” Today’s 3D scanners can be broadly classified into two types: image processing and laser measuring. Richard is trying to combine the two, using image processing and analysis programs together with distance measurement to build 3D models. “It is like the computer can do both ‘watching’ (the image signal) and ‘touching’ (the distance signal). We want it to become a better solution for personal or family 3D scanning.”
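The fusion Richard describes, pairing what the camera “watches” with what the laser “touches”, can be sketched with the standard pinhole back-projection formula: given a pixel and the distance measured along the camera’s optical axis, recover a 3D point. The intrinsic parameters below (focal length, principal point) are made-up values for illustration, not the team’s calibration.

```python
import numpy as np

def depth_pixel_to_point(u, v, depth, fx, fy, cx, cy):
    """Back-project one camera pixel (u, v) with a measured depth
    into a 3D point in the camera's coordinate frame."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

# Hypothetical camera intrinsics, for illustration only.
fx = fy = 500.0         # focal length in pixels
cx, cy = 320.0, 240.0   # principal point of a 640x480 image

# The center pixel at 2 m maps straight down the optical axis.
point = depth_pixel_to_point(320, 240, 2.0, fx, fy, cx, cy)
```

Repeating this for every pixel that has a distance reading yields a point cloud, which is the raw material a surface-reconstruction step would turn into the final model.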

Their project started with a quadcopter, because a flying machine is better suited to scanning an object from every side. The whole device also includes the Autonomous AI Control System, the 3D Modeling System, the Navigation System, and other units that link the parts together.

The quadcopter contains the following parts: a camera and laser distance meter, a microprocessor, distance sensors, accelerometers, and gyroscopes. The camera and laser distance meter collect the information; all pictures and distance data are sent to the server through the wireless communication system. The microcontroller temporarily stores the data, operates the other components, and passes the data on to the transmission unit. The distance sensors detect anything in the way of the scan; with their readings, the server can keep the quadcopter at a safe distance from surrounding objects. The accelerometers and gyroscopes report the quadcopter’s status, helping the server determine where the quadcopter is and keep it on the calculated route. The server is the main controller of the whole system; its jobs include building the 3D model, issuing commands, and recalculating the route based on feedback. To make this more intuitive, Richard shows me a diagram of the whole Air Modeling system.
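As a rough illustration of the transmission step, here is one way a microcontroller could frame a batch of sensor readings (timestamp, laser distance, 3-axis acceleration, 3-axis angular rate) for the wireless link. The field names and layout are hypothetical, not the team’s actual protocol.

```python
import struct

# Little-endian frame: uint32 timestamp + 7 float32 sensor values.
FRAME_FMT = "<Ifffffff"

def pack_frame(t_ms, dist_m, accel, gyro):
    """Serialize one telemetry sample into a fixed-size byte frame."""
    return struct.pack(FRAME_FMT, t_ms, dist_m, *accel, *gyro)

def unpack_frame(buf):
    """Decode a frame back into named fields on the server side."""
    t_ms, dist_m, ax, ay, az, gx, gy, gz = struct.unpack(FRAME_FMT, buf)
    return {"t_ms": t_ms, "dist_m": dist_m,
            "accel": (ax, ay, az), "gyro": (gx, gy, gz)}

# One sample: hovering, laser reads 1.73 m to the object.
frame = pack_frame(1250, 1.73, (0.0, 0.0, -9.81), (0.01, 0.0, 0.02))
```

A fixed binary layout like this keeps each sample small (32 bytes here), which matters when the radio link, not the sensors, is the bottleneck.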

[Diagram: the whole Air Modeling system]

As for the software interface, they tried to make it as simple as possible so that anyone can get used to it easily. In their plan, it should have just a field for the scanning range diameter and a “Start Scanning” button. Once the button is pressed, the command halts any manual movement of the quadcopter and begins the scanning operation. The model is displayed on the interface as it is built, until the scan finishes. The machine should also be clever enough to handle possible issues, such as a low battery or software bugs, by sending messages to the user. After the job finishes, the user is asked whether the scan is good enough. If not, they can select the part they are unhappy with, set a new reference point, and start scanning again.
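The planned workflow can be sketched as a simple control loop with a live preview and an accept-or-rescan decision at the end. Everything here (the class, the streaming preview, the accept callback) is a hypothetical mock of the interface described above, not the team’s code.

```python
class FakeScanner:
    """Stand-in for the quadcopter: streams the model piece by piece,
    the way the live preview on the interface expects."""
    def __init__(self, chunks):
        self.chunks = chunks

    def scan(self, diameter):
        for chunk in self.chunks:
            yield chunk

def run_scan(scanner, diameter, accept):
    """Run one scan pass; `accept` plays the role of the user judging
    whether the result is good enough."""
    preview = []
    for part in scanner.scan(diameter):
        preview.append(part)        # displayed as it arrives
    model = "".join(preview)
    if not accept(model):
        # On the real device the user would select a region and set a
        # new reference point here before the rescan.
        return run_scan(scanner, diameter, lambda m: True)
    return model

result = run_scan(FakeScanner(["head", "-", "body"]), 3.0,
                  accept=lambda m: True)
```

The point of the structure is that rescanning is just another pass through the same loop, which is what keeps the interface down to one field and one button.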

To make the project more practical, Richard and his group ran almost every simulation in Unity 3D, so that people can use or modify the model data directly after scanning. “Our goal is to make a smooth pipeline from scanning to modeling to applying the result in a VR system. We use Unity 3D to work on the logical design of the AI and the modeling process, so that it won’t turn out that the Air Modeling can’t work in practice after we build it. It would be ironic if a VR tool didn’t really connect with a VR development platform,” said Richard.

After designing the whole system, they started to make the Air Modeling real and ran several tests. With limited time and money, they could only test the different modules separately. For the scanner part, they succeeded in motion tracking with the help of the accelerometer, then used it to collect data on a test object. They managed to get the data, but it was not accurate enough to give the computer an exact picture of the scene.
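Motion tracking from an accelerometer boils down to integrating acceleration twice to get position. The naive Euler integration below is a generic sketch of the idea, not the team’s implementation, and it also hints at why accuracy is hard: any sensor bias is integrated twice, so position error grows quadratically with time.

```python
import numpy as np

def integrate_motion(accels, dt):
    """Dead-reckon position from accelerometer samples by double
    integration. Real IMU tracking also fuses gyroscope data and
    corrects for drift; this shows only the core idea."""
    vel = np.zeros(3)
    pos = np.zeros(3)
    positions = []
    for a in accels:
        vel = vel + np.asarray(a) * dt   # first integration
        pos = pos + vel * dt             # second integration
        positions.append(pos.copy())
    return np.array(positions)

# Constant 1 m/s^2 acceleration along x for 1 s, sampled at 100 Hz.
dt = 0.01
samples = [(1.0, 0.0, 0.0)] * 100
track = integrate_motion(samples, dt)
```

Even on this clean input the Euler scheme overshoots the analytic answer of 0.5 m slightly (it lands at 0.505 m); with real sensor noise and bias the drift is far worse, which is consistent with the team getting usable but imprecise data.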

[Image: the test object (a sofa)]

[Image: the resulting model]

They also tested the power system, the RC system, and the balance system with their quadcopter. “We modified our components according to our test results. It took us a long time to try things and fix problems.”[2] Building the Air Modeling has not been easy for a few university students. “We summed up three main problems to solve in the future. First, we need to make the quadcopter steadier during scanning. Second, the control channel is different from the computer’s digital signal, so we still need to find a way to control the quadcopter from the computer. And finally, since the amount of data to process is much larger than we imagined, we have to overcome the low speed of our current processor.”

Although the project still has a long way to go, Richard is firm when talking about the future. “I want to continue. I know it will take a long time to finish the whole project, but I feel it is worth doing. I hope one day the Air Modeling can accelerate the development of VR projects such as games and job simulations. Maybe one day it can even widen the areas where VR can work.”

[1]  http://www-video.eecs.berkeley.edu/
[2] https://www.youtube.com/watch?v=6iPDjQQSW4A
