Stochastic Estimation and Autonomous Robotics Lab

Objectives

Our primary research interests include knowledge representation and reasoning, machine learning, computer vision and cognitive science as applied to autonomous mobile robots. We seek to develop algorithms that enable robots to collaborate with non-expert human participants, acquiring and using sensor inputs and high-level human feedback based on need and availability. Furthermore, we are interested in developing learning and inference algorithms for application domains characterized by a significant amount of uncertainty.

Contents:
  • Lab members (humans and robots!)
  • Collaborators
  • Robotics projects
  • Other cool videos
  • Non-robotics projects


Lab director: Dr. Mohan Sridharan

Student group: Summer 2013 (including REU students).

Past (Undergraduate) Students:
  • Patricia Andrews. B.S. (Colorado College), REU Student, Summer 2013.
  • Olatide Omojaro. B.S. (Georgia Perimeter College), REU Student, Summer 2013.
  • Aaron Hester. B.S. (Mathematics+CS), REU Student, Summer 2013.
  • Emilie Featherston. B.S. (co-supervised with Drs. Susan and Joseph Urban), REU Student, Summer 2013.
  • Christian Washington. B.S. (Louisiana State University), REU Student, Summer 2012.
  • Catie Meador. B.S. (Swarthmore College), REU Student, Summer 2012.
  • Sabyne Peeler. B.S. (Florida A&M University, co-supervised with Drs. Susan and Joseph Urban), REU Student, Summer 2012.
  • Shiloh Huff. B.S. (co-supervised with Drs. Susan and Joseph Urban), REU Student, Summer 2012.
  • Stephanie Graham. B.S. (co-supervised with Drs. Susan and Joseph Urban), REU Student, Summer 2012.
  • Austin Ray. B.S. (co-supervised with Drs. Susan and Joseph Urban), Spring 2012.
  • David South. B.S. (co-supervised with Drs. Susan and Joseph Urban), Spring 2012.
  • Kevin Thomas. B.S. (co-supervised with Drs. Susan and Joseph Urban), Spring 2012.
  • Jesse Kawell. B.S. (Samford University), REU Student, Summer 2011.
  • David Kari. B.S. (California Baptist University), REU Student, Summer 2011.
  • David Seibert. B.S. (Emory University), REU Student, Summer 2011.
  • James Smith. B.S. (The University of Texas at Austin), REU Student, Summer 2011.
  • Mary Shuman. B.S. (University of North Carolina at Charlotte, co-supervised with Dr. Susan Urban), REU Student, Summer 2011.
  • Matthew Sullivan. B.S. (Computer Engineering), Spring-Summer 2011.
  • Kshira Nadarajan. B.S. (Iowa State University), Summer 2010.
Students interested in working with me should read this before contacting me. Some of the robot platforms used in experimental trials are shown below.

(Photos: socially assistive robot; Nao and Erratic platforms; all robot platforms; UAV versions 1 and 3.)


Current (and recent) collaborators:

Some of our collaborators at TTU and elsewhere are listed below.

Robotics Projects:

In the context of human-robot collaboration, we seek to answer the following key questions:
  1. How can robots best represent and reason with incomplete domain knowledge, incrementally revising that knowledge using information learned from sensors and high-level human feedback?
  2. How can robots best adapt these representation and reasoning capabilities to learn from multimodal sensor inputs and limited feedback from non-expert human participants?
Although many sophisticated algorithms have been developed for the associated learning, adaptation and collaboration challenges, the integration of these challenges poses open problems even as it presents novel opportunities to address the individual challenges. We therefore seek to develop an integrated architecture that jointly addresses the learning, adaptation and collaboration challenges by exploiting their mutual dependencies.

Representative publications:
Mohan Sridharan. Integrating Visual Learning and Hierarchical Planning for Autonomy in Human-Robot Collaboration. In the AAAI Spring Symposium on Designing Intelligent Robots: Reintegrating AI II, Stanford, USA, March 25-27, 2013. (pdf)

Mohan Sridharan. An Integrated Framework for Robust Human-Robot Interaction. In Jose Garcia-Rodriguez and Miguel Cazorla (editors), Robotic Vision: Technologies for Machine Learning and Vision Applications, pages 281-301 (535), IGI Global, 2013 (web: December 28, 2012). (pre-publication pdf) (book website)


The individual components of the architecture have developed into the research projects described below.
  • Knowledge Representation and Reasoning: The objective is to exploit the complementary strengths of declarative programming and probabilistic graphical models to address the knowledge representation and reasoning challenges in robotics. Towards this objective, we integrate the commonsense reasoning capabilities of Answer Set Programming (ASP), a declarative language, with the probabilistic uncertainty modeling capabilities of hierarchical partially observable Markov decision processes (POMDPs). Robots use this architecture to represent and reason with qualitative and quantitative descriptions of knowledge and uncertainty obtained from sensor inputs and high-level human feedback. The image below is an overview of the architecture, and the videos illustrate its use in localizing target objects in indoor domains (learned map). All algorithms are implemented in the Robot Operating System (ROS) framework; a minimal illustrative sketch of the combined symbolic-probabilistic update appears after the publications below.

    (Images: ASP+POMDP for KRR; KRR for robots.)


    Some recent videos of experimental trials can be found on youtube: video-1, video-2.

    Representative publications:
    Shiqi Zhang, Mohan Sridharan and Christian Washington. Active Visual Planning for Mobile Robot Teams using Hierarchical POMDPs. In the IEEE Transactions on Robotics (T-RO), Volume 29, Issue 4, 2013. (pdf)

    Shiqi Zhang and Mohan Sridharan. Integrating Declarative Programming and Probabilistic Planning on Robots. In the AAAI Fall Symposium on How Should Intelligence be Abstracted in AI Research: MDPs, Symbolic Representations, Artificial Neural Networks, or ____?, Arlington, USA, November 15-17, 2013. (pdf)
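
    To make the integration concrete, here is a minimal, hypothetical sketch (not drawn from the lab's ROS code): ASP-style commonsense conclusions prune the set of candidate locations, and a POMDP-style Bayes update then refines the belief over the remaining locations as observations arrive. The location names, priors, and sensor-noise parameters are illustrative assumptions.

    ```python
    # Candidate locations, with ASP-style commonsense defaults, e.g.
    # "books are usually in the library, never in the kitchen".
    locations = ["library", "office", "kitchen", "lab"]
    asp_excluded = {"kitchen"}                 # ruled out by symbolic reasoning
    prior_weight = {"library": 0.6, "office": 0.2, "lab": 0.2}

    # Initialize the belief only over locations consistent with the answer set.
    belief = {loc: prior_weight.get(loc, 0.0)
              for loc in locations if loc not in asp_excluded}

    def update_belief(belief, observed_at, p_true_pos=0.8, p_false_pos=0.1):
        """One POMDP-style observation update: P(s|o) is proportional to P(o|s) P(s)."""
        for loc in belief:
            likelihood = p_true_pos if observed_at == loc else p_false_pos
            belief[loc] *= likelihood
        total = sum(belief.values())
        return {loc: b / total for loc, b in belief.items()}

    # A positive detection in the library sharpens the belief there.
    belief = update_belief(belief, observed_at="library")
    print(max(belief, key=belief.get), belief)
    ```

    The pruning step is what keeps the probabilistic planner tractable: the POMDP only has to model uncertainty over the states the symbolic reasoner considers possible.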





  • Autonomous (visual) learning of object models: The objective is to enable mobile robots to autonomously learn object models using local, global, temporal and contextual visual cues. Learning is triggered by motion cues, and object models consist of probabilistic representations of visual features with complementary properties (see the sketch after the publications below). A pictorial representation of the object model is provided below:

    Object model structure


    The learned models can be used for object recognition in complex scenes. You can watch a video of the learning and recognition algorithm running on a mobile robot. The following image is an illustrative example of object recognition using the learned models:

    Using learned model for recognition


    Representative publications:
    Xiang Li and Mohan Sridharan. Move and the Robot will Learn: Vision-based Autonomous Learning of Object Models. In the International Conference on Advanced Robotics (ICAR), Montevideo, Uruguay, November 25-29, 2013. (pdf)

    Xiang Li, Mohan Sridharan and Shiqi Zhang. Autonomous Learning of Vision-based Layered Object Models on Mobile Robots. In the International Conference on Robotics and Automation (ICRA), Shanghai, China, May 9-13, 2011. (pdf)
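
    A minimal, hypothetical sketch of the motion-triggered learning loop (assuming OpenCV; the thresholds, and the choice of a color histogram plus ORB descriptors, are illustrative stand-ins for the layered feature channels described above):

    ```python
    import cv2
    import numpy as np

    def motion_detected(prev_gray, gray, diff_thresh=25, min_frac=0.01):
        """Trigger learning when enough pixels change between frames."""
        diff = cv2.absdiff(prev_gray, gray)
        return np.count_nonzero(diff > diff_thresh) > min_frac * diff.size

    def learn_model(patch_bgr):
        """Build a simple layered object model from an image patch:
        a global color histogram plus local ORB keypoint descriptors."""
        hsv = cv2.cvtColor(patch_bgr, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0, 1], None, [30, 32], [0, 180, 0, 256])
        hist = cv2.normalize(hist, None).flatten()
        gray = cv2.cvtColor(patch_bgr, cv2.COLOR_BGR2GRAY)
        _, descriptors = cv2.ORB_create().detectAndCompute(gray, None)
        return {"hist": hist, "descriptors": descriptors}

    def match_score(model, patch_bgr):
        """Compare a candidate region against a learned model (global cue only)."""
        candidate = learn_model(patch_bgr)
        return cv2.compareHist(model["hist"], candidate["hist"],
                               cv2.HISTCMP_CORREL)
    ```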





  • Multimodal learning of object descriptions: The objective is to enable robots to learn multimodal associative models of domain objects, using the resultant rich (object and domain) descriptions to pose specific high-level verbal queries to human participants. An overview of the multimodal learning approach is provided below in the context of a robot describing objects using learned visual and verbal vocabularies and the associations between them; a small sketch of such an association model follows the publications below.

    Multimodal learning overview


    Representative publications:
    Kimia Salmani and Mohan Sridharan. Multi-Instance Active Learning with Online Labeling for Object Recognition. In the 27th International Conference of the Florida AI Research Society (FLAIRS), Pensacola Beach, USA, May 21-23, 2014. (pdf)

    Ranjini Swaminathan and Mohan Sridharan. Towards Robust Human-Robot Interaction using Multimodal Cues. In the Human-Agent-Robot Teamwork Workshop (HART) at the International Conference on Human-Robot Interaction (HRI), Boston, USA, March 5, 2012. (pdf)
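
    A hypothetical sketch of the association step: co-occurrence counts link visual words (cluster labels from the vision pipeline) to verbal labels heard from human participants, and low-confidence words trigger a query to the human. The names and data are illustrative.

    ```python
    from collections import defaultdict

    counts = defaultdict(lambda: defaultdict(int))

    def observe(visual_word, verbal_label):
        """Record one co-occurrence of a visual word and a spoken label."""
        counts[visual_word][verbal_label] += 1

    def describe(visual_word):
        """Return the most probable verbal label and its confidence."""
        labels = counts[visual_word]
        total = sum(labels.values())
        if total == 0:
            return None, 0.0          # unknown object: ask the human
        best = max(labels, key=labels.get)
        return best, labels[best] / total

    # Pairing detections with narration during a training session:
    for v, w in [("blob_3", "mug"), ("blob_3", "mug"), ("blob_3", "cup")]:
        observe(v, w)
    print(describe("blob_3"))         # -> ('mug', 0.666...)
    ```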





  • Augmented reinforcement learning: The objective is to incrementally and autonomously merge the information extracted from high-level human feedback with the information extracted from sensory inputs, bootstrapping off both feedback mechanisms to make the best use of the available information. A pictorial overview of the augmented reinforcement learning approach used to achieve this objective is given below, and a small illustrative sketch follows the publication. This approach has been evaluated in single-agent and multiagent (simulated) game domains.

    Augmented reinforcement learning


    Representative publication:
    Mohan Sridharan. Augmented Reinforcement Learning for Interaction with Non-Expert Humans in Agent Domains. In the International Conference on Machine Learning Applications (ICMLA), Honolulu, Hawaii, December 18-21, 2011. (pdf)
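
    A hypothetical sketch of the core idea: environmental reward and sparse, possibly noisy human feedback are merged into one learning signal, with the human term down-weighted as autonomous experience accumulates. The decay schedule and parameters are illustrative assumptions, not the published algorithm.

    ```python
    from collections import defaultdict

    Q = defaultdict(float)            # Q[(state, action)] -> value estimate
    ALPHA, GAMMA = 0.1, 0.95          # illustrative learning parameters

    def merged_reward(env_reward, human_feedback, step, k=100.0):
        """Blend human feedback (+1 / -1 / None) into the reward signal;
        its influence decays as the agent gathers its own experience."""
        if human_feedback is None:
            return env_reward
        weight = k / (k + step)       # trust the human early, sensors later
        return env_reward + weight * human_feedback

    def q_update(state, action, next_state, actions, reward):
        """Standard Q-learning backup using the merged reward."""
        best_next = max(Q[(next_state, a)] for a in actions)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next
                                       - Q[(state, action)])
    ```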



Non-Robotics Projects:

We also design learning and inference algorithms for challenges in critical (big data) application domains.
  • Agricultural irrigation management and yield mapping: Agricultural irrigation management poses tough challenges in arid and semi-arid regions, where crop water demand exceeds rainfall. Since daily grass or alfalfa reference evapotranspiration (ET) values are widely used to estimate crop water demand, inaccurate reference ET estimates can impact irrigation costs and the demands on U.S. freshwater resources. ET networks calculate reference ET using accurate measurements of local meteorological data. Given gaps in the spatial coverage of existing agriculture-based ET networks (e.g., TXHPET) and a lack of funding, there is an immediate need for alternative data sources that can fill these gaps without high maintenance and support costs. In collaboration with USDA-ARS and Texas A&M AgriLife Research, we adapt sophisticated machine learning algorithms that use weather observations from non-ET stations to accurately predict reference ET values; a small illustrative sketch follows the publication below.

    Representative publication:
    Daniel Holman, Mohan Sridharan, Prasanna Gowda, Dana Porter, Thomas Marek, Terry Howell and Jerry Moorhead. Estimating Reference Evapotranspiration for Irrigation Scheduling in the Texas High Plains. In the International Joint Conference on Artificial Intelligence (IJCAI 2013), Beijing, China, August 3-9, 2013. (pdf)
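
    A hypothetical sketch with synthetic data (the real work uses TXHPET-style station records, and the coefficients below are made up): a regression model maps measurements available at non-ET weather stations to daily reference ET values.

    ```python
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.metrics import mean_absolute_error
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    # Columns: air temperature (C), relative humidity (%), wind speed (m/s),
    # solar radiation (MJ/m^2/day) -- typical non-ET station measurements.
    X = rng.uniform([0, 10, 0, 0], [40, 100, 15, 30], size=(500, 4))
    # Synthetic stand-in target: ET rises with temperature, radiation and
    # wind, and falls with humidity.
    y = (0.10 * X[:, 0] + 0.15 * X[:, 3] + 0.05 * X[:, 2]
         - 0.02 * X[:, 1] + rng.normal(0, 0.3, 500))

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = GradientBoostingRegressor().fit(X_tr, y_tr)
    print("MAE (mm/day):", mean_absolute_error(y_te, model.predict(X_te)))
    ```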


  • Downscaling climate models: Climate change and climate forecasts influence policies and planning in fields such as agriculture, ecological preservation and resource management. Although sophisticated global climate models can predict large-scale weather patterns in grids of approximately 100 km x 100 km, they cannot make accurate regional weather predictions because they do not account for local geographic variations (within the grids) such as mountains and lakes. Obtaining high-resolution regional climate predictions by downscaling global models presents formidable big data challenges: (a) the processes contributing to global models are highly non-linear; (b) global models and their relationships with regional observations are non-stationary; (c) predictions are sensitive to initial conditions; and (d) learning predictive models requires petabyte-scale historical data. In collaboration with the Climate Science Center at TTU and GFDL/Princeton, we are developing deep architectures that learn the relationships between global models and regional observations, thus making accurate regional predictions; a small illustrative sketch follows the publication below.

    Representative publication:
    Ranjini Swaminathan, Mohan Sridharan and Katharine Hayhoe. Convolutional Neural Networks for Climate Downscaling. In the Climate Informatics Workshop (CI 2012), Boulder, USA, September 20-21, 2012. (pdf) (poster)
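
    A hypothetical sketch of the downscaling idea (written with PyTorch, which postdates the CI 2012 work; the grid sizes and layer widths are illustrative): a small convolutional network maps a coarse global-model field to a finer regional grid.

    ```python
    import torch
    import torch.nn as nn

    class Downscaler(nn.Module):
        def __init__(self, scale=4):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1),   # local patterns
                nn.ReLU(),
                nn.Upsample(scale_factor=scale, mode="bilinear",
                            align_corners=False),             # coarse -> fine grid
                nn.Conv2d(16, 16, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.Conv2d(16, 1, kernel_size=3, padding=1),   # regional prediction
            )

        def forward(self, coarse_field):
            return self.net(coarse_field)

    model = Downscaler()
    coarse = torch.randn(8, 1, 8, 8)       # batch of coarse global-model cells
    fine = model(coarse)
    print(fine.shape)                      # torch.Size([8, 1, 32, 32])
    ```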



Some cool videos:

  • A screenshot of the 3D range map generated by a vertically-mounted laser on a mobile robot platform. Click on the image to play the video of the robot mapping the entire lab. You can look "into" the map approximately 90 seconds from the start of the video.

    Vertically-mounted laser map

    You can also look at a youtube version of the video.


