A vision-based manipulation system that enables robotic arms to grasp and manipulate everyday objects. The system uses deep learning to map raw camera input directly to grasp poses, without requiring explicit 3D models of target objects.
## Key Features
- End-to-End Learning – From RGB-D input directly to grasp actions
- Sim-to-Real Transfer – Train in simulation, deploy on real hardware
- Novel Object Generalization – Handle objects not seen during training
- Multi-Task Capabilities – Pick, place, pour, and other manipulation primitives
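The RGB-D-to-grasp pipeline above can be sketched as a small network that consumes a 4-channel image (RGB stacked with depth) and regresses a grasp pose. This is a hypothetical illustration, not the project's actual architecture: the name `GraspNet`, the layer sizes, and the 5-value output (x, y, z, yaw, gripper width) are all assumptions.

```python
# Hypothetical sketch of an end-to-end grasp predictor.
# The architecture and the 5-DoF output parameterization
# (x, y, z, yaw, gripper width) are illustrative assumptions.
import torch
import torch.nn as nn

class GraspNet(nn.Module):
    def __init__(self, out_dim: int = 5):
        super().__init__()
        # 4 input channels: RGB + depth, stacked into one tensor.
        self.backbone = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global pooling -> (B, 64, 1, 1)
        )
        self.head = nn.Linear(64, out_dim)  # regress the grasp pose

    def forward(self, rgbd: torch.Tensor) -> torch.Tensor:
        feats = self.backbone(rgbd).flatten(1)  # (B, 64)
        return self.head(feats)                 # (B, out_dim)

model = GraspNet()
rgbd = torch.rand(1, 4, 128, 128)  # one simulated RGB-D frame
pose = model(rgbd)                 # shape (1, 5)
```

In a sim-to-real setup, a network like this would be trained on simulated RGB-D frames with domain randomization, then run unchanged on the real camera feed.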
## Status
This project is currently in active development. Check back for updates.
