Sulabh Kumra
  • Home
  • Experience
  • Research
  • Publications
  • Contact

Learning Robotic Manipulation Tasks

Multi-step manipulation tasks in unstructured environments are extremely challenging for a robot to learn. Such tasks interleave high-level reasoning about the intermediate states that lead toward the overall task goal with low-level reasoning about which actions will reach those states. We propose a model-free deep reinforcement learning method for learning multi-step manipulation tasks. We introduce the Robotic Manipulation Network (RoManNet), a vision-based model architecture that learns action-value functions and predicts manipulation action candidates. We define a Task Progress based Gaussian (TPG) reward function that computes the reward based on actions that lead to successful motion primitives and on progress towards the overall task goal. To balance exploration and exploitation, we introduce a Loss Adjusted Exploration (LAE) policy that selects actions from the candidates according to a Boltzmann distribution over loss estimates.
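A minimal sketch of Boltzmann-style action selection over loss estimates, in the spirit of the LAE policy described above (the function names, the temperature parameter, and the assumption that higher-loss candidates should be sampled more often are illustrative, not the paper's implementation):

```python
import math
import random

def boltzmann_probs(loss_estimates, temperature=1.0):
    """Softmax (Boltzmann) distribution over per-action loss estimates.
    Candidates with higher estimated loss get higher probability,
    steering exploration toward actions the model is uncertain about."""
    m = max(loss_estimates)  # subtract max for numerical stability
    weights = [math.exp((l - m) / temperature) for l in loss_estimates]
    total = sum(weights)
    return [w / total for w in weights]

def lae_select(action_candidates, loss_estimates, temperature=1.0):
    """Sample one action candidate according to the Boltzmann
    distribution of its loss estimate."""
    probs = boltzmann_probs(loss_estimates, temperature)
    return random.choices(action_candidates, weights=probs, k=1)[0]
```

Lowering the temperature concentrates probability on the highest-loss candidate; raising it flattens the distribution toward uniform sampling.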
Paper
Code

Antipodal Robotic Grasping using GR-ConvNet

In this work, we present a modular robotic system that tackles the problem of generating and performing antipodal robotic grasps for unknown objects from an n-channel image of the scene. We propose a novel Generative Residual Convolutional Neural Network (GR-ConvNet) model that can generate robust antipodal grasps from n-channel input at real-time speeds (~20 ms). We evaluate the proposed model architecture on standard datasets and a diverse set of household objects. We achieved state-of-the-art accuracy of 97.7% and 94.6% on the Cornell and Jacquard grasping datasets, respectively. We also demonstrate grasp success rates of 95.4% and 93% on household and adversarial objects, respectively, using a 7 DoF robotic arm.
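As an illustration of how a generative grasping network's per-pixel output might be decoded into a single grasp, here is a minimal sketch (the map names, the nested-list representation, and the argmax decoding rule are assumptions for illustration, not GR-ConvNet's actual post-processing):

```python
def best_grasp(quality, angle, width):
    """Given per-pixel maps of grasp quality, gripper angle, and gripper
    width (nested lists of equal shape), return the pixel with the
    highest quality together with its angle and width: one simple way
    to turn dense network output into a single grasp pose."""
    best_q, best_rc = max(
        ((q, (r, c))
         for r, row in enumerate(quality)
         for c, q in enumerate(row)),
        key=lambda t: t[0],
    )
    r, c = best_rc
    return {"pixel": (r, c), "angle": angle[r][c], "width": width[r][c]}
```

In practice the chosen pixel is mapped back through the camera intrinsics to a 3-D grasp point before execution on the arm.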
Paper
Code

Robotic Grasp Detection using Deep Learning

Deep learning has significantly advanced computer vision and natural language processing. Although it has seen some successes in robotics, it has not yet been widely adopted there. In this paper, we present a novel robotic grasp detection system that predicts the best grasping pose of a parallel-plate robotic gripper for novel objects from an RGB-D image of the scene. The proposed model uses a deep convolutional neural network to extract features from the scene and a shallow convolutional neural network to predict the grasp configuration for the object of interest. Our multi-modal model achieved an accuracy of 89.21% on the standard Cornell Grasp Dataset and runs at real-time speeds, redefining the state of the art for robotic grasp detection.
Paper

Collaborative Robot Learning from Demonstrations

Robot Learning from Demonstrations (RLfD) enables a human user to add new capabilities to a robot in an intuitive manner without explicitly reprogramming it. In this method, the robot learns a skill from demonstrations performed by a human teacher. The robot extracts features, called key-points, from each demonstration and learns a model of the demonstrated task or trajectory using a Hidden Markov Model (HMM). The learned model is then used to produce a generalized trajectory.
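A toy sketch of one way key-points could be extracted from a demonstrated 2-D trajectory (the direction-change criterion and the threshold are illustrative assumptions, not the feature extraction used in the thesis):

```python
import math

def extract_keypoints(trajectory, angle_threshold=0.3):
    """Pick out key-points of a 2-D trajectory: the start, the end, and
    any point where the motion direction turns by more than
    angle_threshold radians."""
    keypoints = [trajectory[0]]
    for prev, cur, nxt in zip(trajectory, trajectory[1:], trajectory[2:]):
        a1 = math.atan2(cur[1] - prev[1], cur[0] - prev[0])
        a2 = math.atan2(nxt[1] - cur[1], nxt[0] - cur[0])
        # Wrap the heading change into (-pi, pi] before thresholding.
        turn = abs(math.atan2(math.sin(a2 - a1), math.cos(a2 - a1)))
        if turn > angle_threshold:
            keypoints.append(cur)
    keypoints.append(trajectory[-1])
    return keypoints
```

Key-points like these, collected across several demonstrations, are what an HMM can then be trained on to generalize the trajectory.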
Thesis
Paper

Dexto: Eka: - The Humanoid Robot

Dexto: Eka: is a tele-operated anthropomorphic robot, approximately 5' 1" tall, and India's first (and tallest) tele-operated humanoid. The project began with the goal of achieving tele-presence while keeping development costs low. "Dexto" is derived from the word dexterous, and "Eka" is the Sanskrit word for one. The intent behind the project was to save lives: these robots can be controlled from anywhere in the world, so in accident-prone industrial areas and unpredictable disasters, a low-cost tele-operated robot can be deployed and any harm would befall the robot rather than a human. Dexto: Eka: has three modes of operation: dependent, semi-sovereign, and sovereign.
Project Website

Baxter Learns like a Child to Dance

  • Developed a new system that can make a robot learn dance moves according to the input music track
  • Implemented reinforcement learning algorithm in Python to make robot select dance moves
  • Just as a child learns a new task by trying all possible alternatives and learning from mistakes, the robot learns by trial and error. We used the Q-learning algorithm, in which the robot tries all possible ways to perform a task and builds a matrix of Q-values from the rewards it receives for its actions. Using this method, the robot was made to learn dance moves for a given music track.
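The trial-and-error learning described above can be sketched with tabular Q-learning (the state/action encoding, episode structure, and hyperparameters here are illustrative, not the values used on the robot):

```python
import random

def q_learning(n_states, n_actions, reward_fn, episodes=500,
               alpha=0.1, gamma=0.9, epsilon=0.1):
    """Tabular Q-learning: the agent tries actions in each state and
    updates a table of Q-values from the rewards it receives.
    reward_fn(state, action) -> (reward, next_state)."""
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        state = random.randrange(n_states)
        for _ in range(20):  # steps per episode
            # Epsilon-greedy: mostly exploit the best known move,
            # sometimes explore a random one.
            if random.random() < epsilon:
                action = random.randrange(n_actions)
            else:
                action = max(range(n_actions), key=lambda a: Q[state][a])
            reward, next_state = reward_fn(state, action)
            # Q-learning update rule.
            best_next = max(Q[next_state])
            Q[state][action] += alpha * (reward + gamma * best_next
                                         - Q[state][action])
            state = next_state
    return Q
```

For the dancing task, a state would encode the current music segment, an action a candidate dance move, and the reward how well the move matched the music.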
Paper

Robotic Arm Shadowing

  • Developed a human machine interface to make a 5 DOF robotic arm mimic a human arm
  • Determined human arm skeleton using MATLAB image processing toolbox and mapped to robotic arm
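For the shadowing mapping, the core quantity is the angle at each skeleton joint; a minimal sketch in Python (the point-based angle formula is an illustrative assumption — the original system used MATLAB's image processing toolbox):

```python
import math

def joint_angle(a, b, c):
    """Angle at joint b (radians) formed by skeleton points a-b-c,
    e.g. shoulder-elbow-wrist: the basic quantity mapped onto each
    servo of the robotic arm."""
    ang1 = math.atan2(a[1] - b[1], a[0] - b[0])
    ang2 = math.atan2(c[1] - b[1], c[0] - b[0])
    # Wrap the difference into (-pi, pi] and take its magnitude.
    return abs(math.atan2(math.sin(ang1 - ang2), math.cos(ang1 - ang2)))
```

Each joint angle would then be scaled into the corresponding servo's command range before being sent to the 5 DOF arm.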

Logic will get you from A to B. Imagination will take you everywhere.

- Albert Einstein