Google's DeepMind robotics team has collaborated with 33 research institutes to create Open X-Embodiment, a shared database aimed at advancing robotics through a large, diverse dataset. Similar in spirit to ImageNet for computer vision, Open X-Embodiment features over 500 skills and 150,000 tasks collected from 22 different robot types. The database is being made available to the research community to reduce barriers and accelerate research in robot learning, with the goal of enabling both robots and researchers to learn from one another.
Lerrel Pinto, an associate professor of computer science at New York University, is teaching robots how to perform tasks in the home by allowing them to fail. Pinto set up a simulated home environment, gave the robots a primer based on videos of humans performing the tasks, and then applied reinforcement learning: the robots attempted the tasks around the clock, learning from their mistakes and adjusting. Pinto's work aims to improve robot learning and autonomy in real-world settings.
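The learn-by-failing loop described above can be sketched with a minimal reinforcement-learning example. Everything here is an invented toy (a one-dimensional track standing in for a household task), not Pinto's actual system: the robot tries actions, sometimes fails, and adjusts its value estimates from the outcome using a standard Q-learning update.

```python
import random

# Toy stand-in for a household task: reach position GOAL on a short track.
# The environment, rewards, and hyperparameters are invented for illustration.
GOAL, TRACK_LEN = 4, 5
ACTIONS = (-1, +1)  # step left, step right

def step(state, action):
    """Advance the toy environment; reward 1.0 only on reaching the goal."""
    nxt = max(0, min(TRACK_LEN - 1, state + action))
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward, nxt == GOAL

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(TRACK_LEN) for a in ACTIONS}
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # Explore sometimes (the robot is allowed to fail), else exploit.
            if rng.random() < eps:
                action = rng.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q[(state, a)])
            nxt, reward, done = step(state, action)
            # Learn from the outcome and adjust (Q-learning update).
            best_next = max(q[(nxt, a)] for a in ACTIONS)
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = nxt
    return q

q = train()
# After training, the greedy policy steps toward the goal from every state.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(TRACK_LEN)}
```

The key point mirrored from the article is that no one tells the agent the right action up front: it discovers the policy purely by acting, failing, and updating.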
Researchers are exploring new approaches to robot learning inspired by human toddlers. Carnegie Mellon University (CMU) has developed RoboAgent, a robotic AI agent that combines passive learning (teaching a system through videos and datasets) with active learning (performing tasks and adjusting). The system can learn in one environment and apply that knowledge in another, much like CMU's Vision-Robotics Bridge (VRB) system. The dataset used is open source and compatible with off-the-shelf robotics hardware, making it accessible to researchers and companies alike. The goal is to create multipurpose robots that can adapt and learn in unstructured settings like homes and hospitals.
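The passive-plus-active combination can be illustrated with a small sketch. This is not RoboAgent's actual pipeline or data; the states, actions, demonstrations, and reward function below are all hypothetical. The idea shown is the general one: bootstrap a policy from demonstrations (passive), then refine it by acting and observing outcomes (active).

```python
import random
from collections import defaultdict

# Passive phase: tally which action demonstrators chose in each (invented) state.
demos = [("drawer_closed", "pull"), ("drawer_closed", "pull"),
         ("drawer_open", "grasp"), ("drawer_open", "grasp"),
         ("drawer_closed", "push")]  # one noisy demonstration
prefs = defaultdict(lambda: defaultdict(float))
for state, action in demos:
    prefs[state][action] += 1.0

def reward(state, action):
    # Invented environment: only the "correct" action succeeds.
    correct = {"drawer_closed": "pull", "drawer_open": "grasp"}
    return 1.0 if correct[state] == action else -1.0

# Active phase: act (mostly greedily), then nudge preferences by the outcome.
rng = random.Random(0)
actions = ["pull", "push", "grasp"]
for _ in range(200):
    state = rng.choice(sorted(prefs))
    if rng.random() < 0.2:
        act = rng.choice(actions)  # occasional exploration
    else:
        act = max(prefs[state], key=prefs[state].get)
    prefs[state][act] += 0.1 * reward(state, act)

policy = {s: max(prefs[s], key=prefs[s].get) for s in prefs}
```

The design choice worth noting is that the demonstrations only seed the preferences; interaction is what washes out the noisy demo, which is the division of labor the passive/active framing describes.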