MIE.70 – Robotic Arm with 3D Computer Vision and Machine Learning
Team Members
- Aleena Albert
- Omar Arrez
- Martin Escobar
- Adkhamjon Khamidullayev
- Ivan Lee
- Diya Patel
Project Description
Autonomous robots are a viable alternative for performing tasks in environments that are dangerous to humans, such as areas exposed to radioactive material. Historically, Fermi National Accelerator Laboratory (FNAL) personnel have performed tasks such as maintenance in radioactive environments, creating a risk of radiation exposure. Prolonged radiation exposure can cause life-altering harm: the Centers for Disease Control and Prevention (CDC) reports that sufficient radiation exposure can alter or kill a person's cells, which may lead to cancer or death. To protect FNAL staff from radiation injuries, a robot can perform tasks in these environments instead. Because robots are far less susceptible to radiation damage, FNAL has decided to pursue development of a robotic arm that can dismantle radioactive parts for proper waste treatment.

This project aims to expand on that idea and progress toward the goal of a fully autonomous robotic system that performs work in hazardous environments in place of FNAL scientists. Such a system would eliminate radiation-related health risks for lab personnel as well as the possibility of human error, improving both safety standards and performance efficiency. The system was trained and programmed to identify bolts using 3D computer vision and machine learning; it then located and traveled to each bolt to perform the unscrewing operation.
For the development of the project, specific requirements were established that the robotic arm had to meet: handle a payload of at least 1 kg, provide six degrees of freedom, integrate with a depth-sensing camera, and be easy to program. To integrate the camera and arm, the team used YOLOv5, an open-source object-detection framework, to train a model that identifies bolts. This model was used in a Python script that combined OpenCV with a depth-sensing camera to obtain the position of the targeted bolt: its x and y coordinates along with its depth. A custom end effector was designed, to which a thumb ratchet and universal socket were attached, allowing the arm to unscrew bolts on a prototype flange with no human intervention. The fully integrated system, comprising the depth-sensing camera, the trained machine learning model, and the bolt-unscrewing end effector, allows humans to stay clear of hazardous locations while their tasks are still accomplished. This project is a proof of concept that demonstrates the potential to keep humans safe. All workers have the right to feel safe in their workplace, and this project can eliminate radiation exposure for FNAL personnel.
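The step from a 2D detection plus a depth reading to a 3D target position can be sketched with the standard pinhole camera model. The code below is a minimal illustration, not the team's actual script: the bounding-box values and the camera intrinsics (`fx`, `fy`, `cx`, `cy`) are made-up placeholder numbers, and a real system would read them from the depth camera's calibration.

```python
def bbox_center(x1, y1, x2, y2):
    """Pixel center of a detection bounding box (e.g., from a YOLOv5 result)."""
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)


def deproject(u, v, depth_m, fx, fy, cx, cy):
    """Map a pixel (u, v) with depth in meters to camera-frame (X, Y, Z)
    using the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)


if __name__ == "__main__":
    # Hypothetical bolt detection at bbox (300, 220)-(340, 260), 0.50 m away.
    u, v = bbox_center(300, 220, 340, 260)
    # Illustrative intrinsics; real values come from camera calibration.
    target = deproject(u, v, 0.50, fx=615.0, fy=615.0, cx=320.0, cy=240.0)
    print(target)  # 3D point the arm would move toward
```

In a full pipeline, this 3D point would then be transformed from the camera frame into the robot's base frame (via a hand-eye calibration) before being sent to the arm as a motion target.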