Robotic Systems

The main goal of this project is to contribute new algorithms and techniques that address the research and application issues encountered in developing Cooperative Autonomous Mobile Robotic Systems (CAMoRoS). For such a system to be operational, the wirelessly networked robots must be able to communicate and navigate reliably. To navigate, they must estimate their position from multi-sensor data about their environment. Over the last few decades, many different types of sensors have been developed in robotics, some of which have achieved very promising results. An important problem that must be addressed is fusing data from different sensor sources so that the uncertainty in the data can be effectively reduced. Multi-sensor fusion matters because it mitigates the effects of measurement errors, and the fusion methods can be grounded in a probabilistic framework; this framework underlies many existing, widely used methods for solving the localization problem. Approaches that have been applied include Kalman filtering for sensor data fusion, Bayesian methods for integrating multi-sensor information in grid-based maps, and the Dempster-Shafer evidence method for multi-ultrasonic sensor fusion in localization.
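The probabilistic fusion step can be illustrated with a scalar Kalman measurement update. The Python sketch below is not the project's implementation; it simply fuses two noisy range readings of the same position, and the prior, readings, and variances are made-up values.

    def kalman_update(x, P, z, R):
        # One scalar Kalman measurement update: fuse the current estimate
        # (mean x, variance P) with a measurement z of noise variance R.
        K = P / (P + R)        # Kalman gain: how much to trust the measurement
        x = x + K * (z - x)    # corrected position estimate
        P = (1.0 - K) * P      # reduced uncertainty after fusion
        return x, P

    # Prior position estimate (e.g., from odometry) and its variance.
    x, P = 0.0, 4.0

    # Readings of the same position from two different sensors, each with
    # its own noise variance (illustrative numbers only).
    for z, R in [(1.2, 0.5), (0.9, 1.0)]:
        x, P = kalman_update(x, P, z, R)

    print("fused estimate: %.3f  variance: %.3f" % (x, P))

Each update shrinks the estimate variance, which is the sense in which fusing multiple sensors reduces the uncertainty in the data.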

In addition to addressing research issues in robot localization, mapping, and path planning, we also consider energy-efficient wireless communication and the application of computer-vision systems. The energy resource of the nodes of the networked system is at a premium and must be spent judiciously, since the performance and lifetime of the network depend on the ability of the robots to maintain communication with each other and with central command/mission control. This component of the project focuses on prolonging the battery life of robotic nodes by (i) developing energy-efficient communication schemes and (ii) extending the network lifetime by addressing the "energy hole" problem.
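To make the "energy hole" effect concrete, the sketch below models the network as concentric rings of uniformly deployed nodes forwarding many-to-one traffic toward a base station; the ring count and energy costs are assumed illustrative values, not project parameters.

    RINGS = 5     # rings of nodes around the base station (assumed)
    E_TX = 1.0    # energy per packet transmitted (assumed units)
    E_RX = 0.5    # energy per packet received (assumed units)

    # With uniform node density, the node (and traffic) count of ring i
    # grows roughly like its area, i.e. proportionally to (2*i - 1).
    nodes = [2 * i - 1 for i in range(1, RINGS + 1)]

    for i in range(1, RINGS + 1):
        own = nodes[i - 1]            # packets generated inside ring i
        relayed = sum(nodes[i:])      # packets forwarded from all outer rings
        # Per-node drain: receive the relayed traffic, then transmit everything.
        drain = (relayed * E_RX + (own + relayed) * E_TX) / nodes[i - 1]
        print("ring %d: per-node drain = %.2f units/round" % (i, drain))

Nodes in the innermost ring carry the aggregated traffic of the whole network and drain many times faster than nodes at the edge, which is exactly the imbalance that energy-efficient communication schemes aim to mitigate.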

An application area involves the increasing interest in modeling and recognizing human affect by computers. To develop an automated, real-time system for stress recognition in practice, we are exploring the use of both thermal infrared and standard visible-light cameras. By fusing the visible and infrared information, we expect stress recognition performance to improve significantly and the system to be fully automated. Several challenges, however, must be addressed, including registration of video from the two modalities, characterization of behavioral stress responses, and parameter extraction.
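One common way to register the two video streams is to estimate a planar homography from a few corresponding points (for example, corners of a calibration target visible to both cameras). The OpenCV-based sketch below is an assumed approach rather than the project's specific method, and the point coordinates and frame sizes are made up.

    import numpy as np
    import cv2

    # Pixel coordinates of the same scene points in the visible and thermal
    # images (hypothetical values for illustration).
    visible_pts = np.float32([[50, 60], [400, 55], [410, 300], [45, 310]])
    thermal_pts = np.float32([[30, 40], [290, 38], [295, 230], [28, 235]])

    # Estimate the homography mapping thermal pixels into visible-image
    # coordinates; RANSAC discards mismatched point pairs.
    H, mask = cv2.findHomography(thermal_pts, visible_pts, cv2.RANSAC)

    # Warp a (placeholder) thermal frame into the visible camera's frame so
    # the two streams can be fused pixel by pixel.
    thermal_frame = np.zeros((256, 320), dtype=np.uint8)
    registered = cv2.warpPerspective(thermal_frame, H, (480, 360))
    print("estimated homography:\n", H)

Once the thermal frame is warped into the visible camera's coordinate frame, per-pixel fusion and subsequent parameter extraction can operate on spatially aligned data.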

Participants

Alade Tokuta, NCCU Mathematics and Computer Science Department