MIE.61 – Autonomous Robotic Arm with 3D Computer Vision at Fermilab

Team Members

  • Aleksandar Dyulgerov
  • Danny Garcia
  • Danish Khan
  • Ameer Mustafa
  • Serena Odonnell
  • Nedal Salem
  • Saahil Sorakayala

Project Description

Radiation exposure poses a serious threat and can permanently damage the human body. To minimize human exposure, autonomous robots have been adopted as effective substitutes for human operators. At Fermi National Accelerator Laboratory (FNAL), experiments in the Neutrinos at the Main Injector project require scientists to work close to radioactive materials. This project aims to produce, and validate as a proof-of-concept prototype, a completely autonomous robotic system for tasks in hazardous zones, especially radioactive environments; if implemented, it would limit FNAL scientists’ exposure to radiation and reduce human error.

The robotic system must unscrew, place, and screw in bolts on a flange. It consists of an xArm6 robotic arm manipulator, an Intel RealSense depth camera, and a custom end-of-arm effector.

The system uses machine-learning-based computer vision trained on manually annotated images of bolt heads and bolt inserts. The annotated data was exported into a separate coding environment to train the model, and code was written that enables the robot to identify bolt heads and inserts. The same code also generates precision and accuracy metrics for the model for documentation.

Python-written algorithms drive the integration of the robot, sensor data, database, and vision processing and orchestrate their interaction; illustrative sketches of the main software steps follow this description. Intel’s RealSense2 API pulls aligned color and depth frame data, which is fed into YOLOv5, with the machine-learning computations handled by NVIDIA’s parallel-processing platform, CUDA. First, a depth threshold is established by averaging depth readings and applying a z-score filter to remove background objects. At the same time, the robot initializes by centering the camera’s view on the flange using a centroid-based approach. The multi-object tracker ByteTrack assigns and maintains consistent IDs across consecutive frames; its hyperparameters were fine-tuned to reduce the number of spurious newly generated IDs and to reassign the correct IDs. Each detection’s position is identified in clockwise order, and the robot recenters on the detection with the matching ID and moves closer. A calculated-distance approach was considered instead of recentering, but lens distortion skews distances and produces inconsistent pixel locations.

A bolt dispenser was determined to be necessary so the end effector can pick up bolts. The first dispenser design used a spring to move bolts; the current design relies on gravity. After the robot picks up a bolt, the dispenser advances the next bolt to a known location.

The end effector is driven by a DC motor. The initial end-of-arm circuit used a simple relay switch and a battery; the current circuit uses an Arduino and a motor driver to control the motor’s speed and direction. The Arduino changes the speed and direction of the motor when prompted by signals from the xArm6 and the camera system.

The autonomous robotic system successfully screws in, places, and unscrews bolts without human intervention using machine-learning-based computer vision. The prototype has been validated, and FNAL can apply these ideas in highly radioactive environments.
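As referenced above, the sketches below illustrate how the main software steps could be wired together. They are illustrative only; the model weights path, parameter values, and helper names are assumptions rather than the team’s actual code. This first sketch pulls aligned color and depth frames with pyrealsense2 and runs a custom-trained YOLOv5 model on the color image, using CUDA when available:

```python
import numpy as np
import pyrealsense2 as rs
import torch

# Hypothetical weights file for the bolt-head / bolt-insert model.
model = torch.hub.load('ultralytics/yolov5', 'custom', path='bolt_detector.pt')
model.to('cuda' if torch.cuda.is_available() else 'cpu')

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
profile = pipeline.start(config)

# Align depth frames to the color frames so pixel coordinates match.
align = rs.align(rs.stream.color)

try:
    while True:
        frames = align.process(pipeline.wait_for_frames())
        depth_frame = frames.get_depth_frame()
        color_frame = frames.get_color_frame()
        if not depth_frame or not color_frame:
            continue

        color_image = np.asanyarray(color_frame.get_data())
        depth_image = np.asanyarray(depth_frame.get_data())

        # YOLOv5 inference; results.xyxy[0] rows are [x1, y1, x2, y2, conf, class].
        results = model(color_image)
        detections = results.xyxy[0].cpu().numpy()
finally:
    pipeline.stop()
```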
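A depth cutoff can then be derived by averaging the valid depth readings and filtering with z-scores, so pixels significantly farther away than the flange are treated as background. The z-score cutoff here is an assumed placeholder:

```python
import numpy as np

def depth_threshold(depth_image, depth_scale, z_cutoff=1.0):
    """Estimate a cutoff depth (in meters) separating the flange from background.

    depth_scale comes from the camera (first_depth_sensor().get_depth_scale()).
    Pixels whose z-score exceeds z_cutoff are considered background.
    """
    depths = depth_image.astype(np.float32) * depth_scale  # raw units -> meters
    valid = depths[depths > 0]                              # zero means "no reading"
    mean, std = valid.mean(), valid.std()
    cutoff = mean + z_cutoff * std
    foreground_mask = (depths > 0) & (depths <= cutoff)
    return cutoff, foreground_mask
```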
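Centering on the flange can be done by computing the centroid of the detected bolt positions and steering the arm to drive the pixel offset from the image center toward zero; the same centroid also yields a clockwise ordering of the detections. Function names here are hypothetical:

```python
import numpy as np

def centering_offset(detections, frame_width, frame_height):
    """Pixel offset from the image center to the centroid of all detections
    (the flange); the arm recenters by reducing this offset."""
    if len(detections) == 0:
        return None
    # Detection rows are [x1, y1, x2, y2, confidence, class] from YOLOv5.
    cx = (detections[:, 0] + detections[:, 2]) / 2.0
    cy = (detections[:, 1] + detections[:, 3]) / 2.0
    dx = cx.mean() - frame_width / 2.0   # positive: flange is right of center
    dy = cy.mean() - frame_height / 2.0  # positive: flange is below center
    return dx, dy

def clockwise_order(detections):
    """Sort detections clockwise around their shared centroid.
    In image coordinates the y axis points down, so ascending arctan2 angle
    corresponds to clockwise order on screen."""
    cx = (detections[:, 0] + detections[:, 2]) / 2.0
    cy = (detections[:, 1] + detections[:, 3]) / 2.0
    angles = np.arctan2(cy - cy.mean(), cx - cx.mean())
    return detections[np.argsort(angles)]
```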
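Detections can be handed to ByteTrack to keep IDs stable across frames. The import path and argument names below follow the public ByteTrack repository and are stated as assumptions; the tuned values are placeholders, not the team’s final settings:

```python
from types import SimpleNamespace
import numpy as np
# Assumed import path from the public ByteTrack repo (github.com/ifzhang/ByteTrack).
from yolox.tracker.byte_tracker import BYTETracker

# Hyperparameters tuned to keep IDs stable across frames (values illustrative).
args = SimpleNamespace(
    track_thresh=0.5,   # detection confidence needed to start/continue a track
    match_thresh=0.8,   # association threshold for matching detections to tracks
    track_buffer=60,    # frames a lost track is kept before a new ID is issued
    mot20=False,
)
tracker = BYTETracker(args, frame_rate=30)

def update_tracks(detections, frame_h, frame_w):
    """detections: Nx5 array of [x1, y1, x2, y2, confidence]."""
    tracks = tracker.update(np.asarray(detections, dtype=np.float32),
                            [frame_h, frame_w], (frame_h, frame_w))
    # Each track carries a persistent ID and a top-left/width/height box.
    return {t.track_id: t.tlwh for t in tracks}
```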
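Finally, the Python side can signal the Arduino over a serial link to set the DC motor’s speed and direction. The port name and single-character command protocol below are hypothetical, since the actual wiring and Arduino sketch are not described here:

```python
import time
import serial

# Assumed serial port and baud rate for the Arduino running the motor driver.
arduino = serial.Serial('/dev/ttyACM0', 9600, timeout=1)
time.sleep(2)  # give the Arduino time to reset after the port opens

def drive_motor(direction, speed):
    """direction: 'F' (screw in) or 'R' (unscrew); speed: 0-255 PWM duty cycle."""
    command = f"{direction}{speed:03d}\n"
    arduino.write(command.encode('ascii'))

# Example: unscrew a bolt at roughly half speed once the camera confirms alignment.
drive_motor('R', 128)
```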