Autonomous Robots
To do what no robot has been able to do before

Research

I am a Senior Lecturer in the Department of Electrical and Computer Engineering at The University of Auckland (NZ). Between October 2008 and May 2014, I was a faculty member at Texas Tech University, where I am currently an Adjunct Associate Professor in the Department of Mathematics and Statistics. Prior to that, I was a Research Fellow in the School of Computer Science (CS) at The University of Birmingham (UK), and a Ph.D. candidate in the Department of Electrical and Computer Engineering at The University of Texas at Austin. My primary research interests are described below.

My Curriculum Vitae (CV): (pdf) (ps)
My Research Statement: (pdf) (ps)
My Teaching Statement: (pdf) (ps)
My Publications.

I direct the Stochastic Estimation and Autonomous Robotics Laboratory (SEARL).
For students: if you are interested in working with me, please read this before you contact me.

Recent papers and talks:
  • Our research on adapting non-parametric Bayesian algorithms to estimate reference evapotranspiration (ET) for agricultural irrigation management has been accepted for publication in the Journal of Hydrology (pdf). An initial version of this research was presented at the Computational Sustainability Track of the International Joint Conference on Artificial Intelligence (IJCAI 2013) (pdf).
  • Our work on online multi-instance active learning for incremental learning of object models from visual cues and limited human (verbal) feedback won the Best Paper Award at the International Conference of the Florida AI Research Society (FLAIRS 2014) (pdf).
  • Our most recent work on an architecture for knowledge representation and reasoning in robotics is described in this workshop paper. Previous work on an architecture for combining non-monotonic logic programming with hierarchical POMDPs is described in a workshop paper. This research builds on our prior work on hierarchical POMDPs that is documented in an IEEE Transactions on Robotics article.

Upcoming and Recent Events/Outreach:

Robot Platforms

A couple of images with representatives from each "category" of robots used in our research projects: (a) the ERRATIC wheeled robot from Videre Design for indoor and outdoor applications; (b) the NAO humanoid robots for indoor applications and robot soccer; (c) some unmanned aerial vehicles (UAVs) used for indoor applications (e.g., surveillance); and (d) the IPRE fluke robot for indoor applications and outreach activities.


Next, images of some robot platforms that were used in my research in the last few years: the AUV ENDURANCE for autonomous underwater navigation (I am the person in the fluorescent blue pants and black jacket in the second image!), a robot wheelchair for the physically challenged, a robot with a manipulator for cognitive HRI, and the SONY ERS-7 Aibo (robot soccer).


Currently, my students and I evaluate our algorithms on wheeled, humanoid, and aerial robot platforms; our algorithms are designed with the long-term goal of enabling robots to socially engage humans. The associated capabilities have tremendous applications, e.g., engaging the elderly and assisting caregivers at elderly care homes. In the past, we also used the robot soccer domain because it provides a moderately structured environment, while still exhibiting many of the challenges observed on robots deployed in the real world. A joint team comprising Texas Tech University and The University of Texas at Austin competed in the Standard Platform League of RoboCup using the Nao robots. The individual research thrusts are described below.

Knowledge Representation and Reasoning

Mobile robots equipped with multiple sensors obtain far more raw data than is possible to process in real-time. Robots (typically) also require substantial knowledge of the domain and tasks, but it is difficult to equip mobile robots with accurate (and complete) domain knowledge. We address these challenges by developing knowledge representation and reasoning architectures that exploit the complementary strengths of declarative programming and probabilistic graphical models to represent, reason with, and learn from, qualitative and quantitative descriptions of knowledge and uncertainty. Currently, our architecture integrates the commonsense reasoning (especially default reasoning) capabilities of Answer Set Programming with the probabilistic uncertainty modeling capabilities of hierarchical POMDPs, enabling robots to adapt learning, sensing, and acting to tasks at hand. Algorithms are evaluated on different mobile robot platforms and application domains. For more details, please look at recent publications.
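To give a flavor of the probabilistic half of this architecture, the sketch below shows a single Bayesian belief update in a discrete POMDP, the basic operation that hierarchical POMDP planning builds on. The two-state "where is the object?" domain, the transition and observation probabilities, and all names are invented for illustration; they are not taken from our architecture.

```python
# Minimal discrete POMDP belief update: b'(s') ∝ O(o | s') * Σ_s T(s' | s, a) b(s).
# States, actions, and probabilities below are illustrative only.

def belief_update(belief, action, obs, T, O):
    """One Bayesian belief update for a discrete POMDP."""
    new_belief = {}
    for s2 in belief:
        # Prediction step: propagate the belief through the transition model.
        pred = sum(belief[s] * T[(s, action, s2)] for s in belief)
        # Correction step: weight by the observation likelihood.
        new_belief[s2] = O[(s2, obs)] * pred
    norm = sum(new_belief.values())
    return {s: p / norm for s, p in new_belief.items()}

# Toy domain: is the target object in the kitchen or the office?
T = {("kitchen", "look", "kitchen"): 1.0, ("kitchen", "look", "office"): 0.0,
     ("office", "look", "kitchen"): 0.0, ("office", "look", "office"): 1.0}
O = {("kitchen", "seen"): 0.8, ("office", "seen"): 0.1}  # P(obs | state)

b = {"kitchen": 0.5, "office": 0.5}
b = belief_update(b, "look", "seen", T, O)  # belief shifts toward "kitchen"
```

In the full architecture, such numeric beliefs would be maintained at multiple levels of a POMDP hierarchy, while the declarative (Answer Set Programming) layer handles commonsense and default knowledge symbolically.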

This research thrust builds on my post-doctoral work in the School of Computer Science at The University of Birmingham (UK), on the CoSy project. My postdoctoral research provided a hierarchical decomposition of POMDPs for visual information processing, enabling a human and a robot to jointly converse about and manipulate objects on a tabletop; this research won a Distinguished Paper Award at the International Conference on Automated Planning and Scheduling (ICAPS), 2008. More recent work won a Paper of Excellence Award at the International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob), 2012.

Bootstrap Learning

Although mobile robots are increasingly being deployed in real-world applications, it is still a challenge for robots to learn models of domain objects and events. Existing algorithms predominantly require substantial training data and/or are computationally expensive. We address these challenges by exploiting certain observations: (a) many objects have distinctive characteristics, locations, and motion patterns, which may not be known in advance and may change over time; (b) images encode information about objects in the form of many visual cues; and (c) any specific task performed by robots typically requires accurate models of a small number of objects. Our algorithms enable robots to automatically identify relevant domain objects using motion cues (and planning algorithms), incrementally learning representative models of these objects using the complementary strengths of appearance-based and contextual visual cues extracted from a small number of images. These object models are used in information fusion and energy minimization algorithms for reliable and efficient object recognition in novel indoor and outdoor scenes. The algorithms are evaluated on different robot platforms in indoor and outdoor domains. Please look at my recent publications for more details.
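As a rough illustration of combining complementary visual cues, the sketch below fuses per-cue likelihoods over candidate object labels under a naive (conditional independence) assumption. The cue names, labels, and numbers are hypothetical; the published information fusion and energy minimization algorithms are considerably more sophisticated.

```python
# Hypothetical fusion of complementary visual cues for object recognition:
# each cue yields a likelihood over candidate labels, and an independence
# (naive Bayes) assumption lets us multiply likelihoods and normalize.
# All cue names and numbers below are illustrative only.

def fuse_cues(cue_likelihoods, prior):
    """Fuse per-cue likelihoods P(cue | label) with a prior over labels."""
    scores = {}
    for label in prior:
        p = prior[label]
        for likelihood in cue_likelihoods:
            p *= likelihood[label]
        scores[label] = p
    norm = sum(scores.values())
    return {label: s / norm for label, s in scores.items()}

appearance = {"mug": 0.6, "bowl": 0.4}  # e.g., color/gradient histogram match
context = {"mug": 0.7, "bowl": 0.2}     # e.g., "on the desk, near a keyboard"
posterior = fuse_cues([appearance, context], {"mug": 0.5, "bowl": 0.5})
best = max(posterior, key=posterior.get)
```

The appeal of this scheme is that a weak appearance match can be rescued (or vetoed) by contextual evidence, which is one reason complementary cues help when only a small number of training images is available.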

My doctoral dissertation at The University of Texas at Austin is related to this research thrust. I developed algorithms to enable a mobile robot to autonomously learn representations for color distributions and illuminations, and to use these representations to detect and adapt to illumination changes. For some related images and videos, please see: color learning and illumination invariance.

Human-Robot and Multirobot Collaboration

Real-world application domains frequently make it difficult for robots to operate without any human supervision. At the same time, human participants may not have the time and expertise to interpret raw sensor readings or provide elaborate and accurate feedback. My students and I develop algorithms that enable robots to acquire and use sensor inputs and human feedback based on need and availability. Human feedback is limited to high-level guidance from non-experts. To facilitate such interactions, our algorithms enable robots to learn multimodal associative models of domain objects, posing specific queries and merging the high-level human feedback with the information extracted from sensor inputs. Recent work on multiple instance active learning of object models from visual and verbal cues won the Best Paper Award at the International Conference of the Florida AI Research Society (FLAIRS), 2014. For more details, please look at recent publications.
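One way to keep human feedback requests rare, in the spirit of active learning, is for the robot to query only the candidate it is least certain about. The sketch below uses simple entropy-based uncertainty sampling; the candidate objects and probabilities are made up, and the published multi-instance active learning algorithms go well beyond this.

```python
# Illustrative uncertainty sampling for active learning: query the human
# only about the candidate whose label posterior has the highest entropy.
# Candidates and probabilities below are hypothetical.
import math

def entropy(dist):
    """Shannon entropy of a discrete label distribution."""
    return -sum(p * math.log(p) for p in dist.values() if p > 0)

def select_query(candidates):
    """Pick the candidate object the robot is least certain about."""
    return max(candidates, key=lambda name: entropy(candidates[name]))

candidates = {
    "object_1": {"mug": 0.95, "bowl": 0.05},  # confident: no query needed
    "object_2": {"mug": 0.55, "bowl": 0.45},  # uncertain: worth asking
}
query = select_query(candidates)  # selects "object_2"
```

Variants of this idea trade off the expected value of a label against the cost of interrupting the human, which matters when feedback comes from busy non-experts.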

As a graduate student at The University of Texas at Austin, I helped develop a robot soccer team, AustinVilla, for the international RoboCup initiative. Robot soccer competitions are a lot of fun! We continue to implement and evaluate some of our research algorithms in the robot soccer domain. A joint team comprising Texas Tech University and The University of Texas at Austin participates in the Standard Platform League of RoboCup using the Nao robots.

Applications of Stochastic Machine Learning

Research challenges in robotics are highly interdisciplinary in nature, drawing upon recent developments in a wide range of fields such as machine learning, control theory, computer vision, psychology and cognitive science. In collaboration with colleagues in other research areas and disciplines, I also investigate the design and adaptation of Bayesian algorithms to address learning and inference challenges in domains such as agricultural irrigation management, climate science, and event/stream processing. For instance, in collaboration with colleagues, I have adapted non-parametric Bayesian algorithms to estimate crop reference evapotranspiration for accurate irrigation management. I also design learning architectures for downscaling global climate models, providing accurate regional climate projections. Prior collaboration has resulted in the adaptation of stochastic sampling algorithms to address software testing challenges, including a tutorial on Bayesian Methods for Data Analysis in Software Engineering at the International Conference on Software Engineering (ICSE-2010, tutorial slides).
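As a loose stand-in for the non-parametric Bayesian regression used in the ET work, the sketch below fits a small Gaussian process with an RBF kernel and predicts a toy "reference ET" value from temperature. The data, kernel parameters, and choice of a single temperature input are entirely invented; the published models are substantially more involved.

```python
# Toy Gaussian process regression (posterior mean only) with an RBF kernel,
# as an illustration of non-parametric Bayesian estimation. All data and
# hyperparameters below are fabricated for illustration.
import math

def rbf(x1, x2, length=5.0):
    return math.exp(-((x1 - x2) ** 2) / (2 * length ** 2))

def solve(A, b):
    """Solve A x = b by Gauss-Jordan elimination (small dense systems only)."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(n):
            if r != col and M[r][col]:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def gp_mean(train_x, train_y, query_x, noise=1e-3):
    """GP posterior mean at query_x given noisy 1-D observations."""
    n = len(train_x)
    K = [[rbf(train_x[i], train_x[j]) + (noise if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    alpha = solve(K, train_y)
    return sum(alpha[i] * rbf(train_x[i], query_x) for i in range(n))

# Toy data: daily mean temperature (°C) vs. reference ET (mm/day).
temps = [10.0, 20.0, 30.0]
et = [2.0, 4.5, 7.0]
pred = gp_mean(temps, et, 25.0)  # interpolates between the 20°C and 30°C days
```

The non-parametric aspect is that the model's capacity grows with the data: predictions are kernel-weighted combinations of the observations rather than a fixed parametric curve.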

For more information on my current (and recent) research projects, please look at my publications.
