Thursday, December 18, 2014

Intelligent Robotics Challenge


I participated in the Intelligent Robotics Challenge on December 6.
The objective of the Intelligent Robotics Challenge is to develop home robots.
My robot, Kenseiko-Chan 2 Mobile, is a customized version of the robot I used in the Tsukuba Autonomous Robotics Challenge. It now carries a gun-type (shotgun) microphone.

I entered the "Follow Me" challenge.
In the "Follow Me" challenge, the robot has to track and follow a person in a home environment.
I used PN (Proportional Navigation) and PID control to track the person.
To detect a human, I used a LIDAR and judged the width of each leg.
The robot can also detect several legs at once.
My robot kept following the target until another person cut in between them.
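The leg-width detection and PID steering described above can be sketched roughly like this. This is a minimal illustration, not my actual code: the width and jump thresholds, and the PID gains, are placeholder assumptions.

```python
def find_leg_candidates(ranges, angle_increment,
                        min_width=0.05, max_width=0.25, jump=0.3):
    """Split a LIDAR scan into clusters at depth jumps, then keep the
    clusters whose arc width matches a human leg.
    The thresholds (meters) are illustrative guesses, not tuned values."""
    clusters, start = [], 0
    for i in range(1, len(ranges)):
        if abs(ranges[i] - ranges[i - 1]) > jump:   # depth jump -> new cluster
            clusters.append((start, i - 1))
            start = i
    clusters.append((start, len(ranges) - 1))

    legs = []
    for s, e in clusters:
        r = sum(ranges[s:e + 1]) / (e - s + 1)         # mean distance to cluster
        width = r * (e - s) * angle_increment          # approximate arc length
        if min_width <= width <= max_width:
            bearing = (s + e) / 2.0 * angle_increment  # angle of cluster centre
            legs.append((r, bearing))
    return legs


class PID:
    """Minimal PID controller for steering toward the tracked leg."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def step(self, error, dt):
        self.integral += error * dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv
```

On each scan, the bearing of the chosen leg cluster becomes the error fed into the PID, and its output becomes the angular velocity command.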

My robot's problems were:
1. It tracks pillars that have the same width as a human leg.
2. If an object has two legs, it is very likely a human, but my recognition
   system can only distinguish two legs within a distance of about 1-2 meters.
3. It can't distinguish the tracked person from other people.
4. It doesn't do voice recognition, which is necessary for home robots.

The solution to No. 1 is to use the LIDAR's reflection intensity data to distinguish surface colors.
The only solution to No. 2 is to change the recognition system. I'm thinking of converting the LIDAR data into an image, removing the noise, and then either doing pattern matching or building a probabilistic model of human walking using OpenCV.
It is difficult to solve No. 3 using only the LIDAR's reflection intensity, because two people may wear the same pants, in which case the reflection intensity (color) will be the same. A probabilistic model of human walking, or another sensor such as an Xtion, Kinect, or camera, would solve the problem.
It looks like I have to study voice recognition more to solve No. 4. I'm studying how to use the Julius voice recognition library.
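The intensity idea from solution No. 1 could be sketched like this: compare each leg-width cluster's mean reflection intensity against a reference sampled from the tracked person, and reject pillars whose intensity doesn't match. The relative tolerance value here is an assumption for illustration.

```python
def matches_target_intensity(cluster_intensities, ref_intensity, tol=0.15):
    """Keep only clusters whose mean LIDAR reflection intensity is within
    a relative tolerance of the tracked person's reference intensity.
    tol=0.15 is an illustrative guess, not a tuned value."""
    mean = sum(cluster_intensities) / len(cluster_intensities)
    return abs(mean - ref_intensity) <= ref_intensity * tol
```

As noted above, this alone can't separate two people wearing the same pants; it only helps reject surfaces with a clearly different reflectance.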

There were also other interesting robotics and intelligence topics, and other robots, at the Intelligent Robotics Challenge.

Rospeex is a cloud-based A.I. voice recognition and synthesis library compatible with ROS (Robot Operating System), developed by Prof. Sugiura. http://rospeex.org/top
It currently needs an internet connection. The front end is written in HTML5, so you can use it on the web and on mobile devices too. The server does the voice processing, so your device doesn't need much processing power; all it has to do is send a query and receive the response. The server-side A.I. uses deep learning to classify the data.
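The query-and-respond pattern described above can be sketched like this. To be clear, the field names and the response shape below are my own assumptions for illustration, not Rospeex's actual API:

```python
import base64
import json

def build_speech_query(audio_bytes, lang="ja"):
    """Package raw audio as a JSON query for a hypothetical cloud
    speech-recognition service (field names are assumptions)."""
    return json.dumps({
        "lang": lang,
        "audio": base64.b64encode(audio_bytes).decode("ascii"),
    })

def parse_speech_response(body):
    """Pull the recognized text out of a JSON reply (shape is an assumption)."""
    return json.loads(body).get("text", "")
```

The point is that the client only encodes audio and decodes text; all the heavy signal processing and classification stays on the server.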
Robot from Osaka Institute of Technology
There were many TurtleBots in the challenge.

Daigoro from The University of Electro-Communications, Tokyo
Daigoro uses a LIDAR to follow people, and it follows them very accurately. It uses pattern matching.


The next day, I joined the Intelligent Robotics Study Meetup.
There were a lot of interesting research presentations and talks about robotics and A.I.:
the sophisticated voice recognition system Rospeex; a robot for entertaining children, using a babysitter-and-child model expressed as a Bayesian network; grasping the meaning of imperative sentences using CRF (Conditional Random Field) and SVM (Support Vector Machine); robotics business models, including Roomba's (how to remove the barrier between robots and consumers, namely the prejudice against robots); and Whole Brain Architecture.

I'm going to participate in the next Intelligent Robotics Challenge/Meetups too!

