Machine Learning for Semi-Autonomous Maintenance in a Hazardous Environment
Team Members
- Frantishek Akulich
- Christiane Alford
- Harsh Gupta
- Maz Khoshnood
- Muhammad Ishaq Memon
Advisor: Jonathan Komperda
Sponsor: Fermi National Accelerator Laboratory
Project Description
Implementing maintenance automation with machine learning (ML) in a hazardous environment can not only save time and money but also reduce human exposure to life-threatening working conditions. A primary objective of Fermi National Accelerator Laboratory is a robotic system that semi-autonomously performs maintenance on highly radioactive parts of a newly constructed accelerator beamline enclosure, specifically for the Neutrinos at the Main Injector (NuMI) Project. The first step toward this robotic system is an ML-assisted device capable of detecting nuts and bolts on the beamline's front window flange. Using an NVIDIA GPU together with a 3D stereo camera, a deep learning (DL) model based on convolutional neural networks (CNNs) is trained on a custom dataset of flange images. These images are captured along preset camera paths from a rendered SolidWorks model and are used to identify nuts and bolts via object detection techniques. The final product comprises a model based on SSD Inception V2, the Intel RealSense D435 depth camera, and an NVIDIA Jetson Nano, which houses all the software and Python scripts needed for object detection.

According to preliminary TensorBoard results, the model trained on the SolidWorks image dataset detects nuts and bolts with an average precision of 74.5% and an average recall of 61%. In practical terms, the model detects nuts and bolts on a 3D-printed flange in real time with classification scores of 80% to 90%. These results show not only that a model can be trained accurately on rendered CAD image data, but also that it generalizes to real-world applications, detecting nuts and bolts on a 3D-printed flange captured by a camera in real time.
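The precision and recall figures above come from the evaluation tooling, but the underlying idea is simple: a predicted box counts as a true positive when it overlaps a ground-truth box above an intersection-over-union (IoU) threshold. The sketch below is a minimal, hypothetical illustration of that matching logic, not the project's actual evaluation code; box coordinates, the 0.5 threshold, and function names are assumptions for the example.

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def precision_recall(preds, gts, iou_thresh=0.5):
    """Greedy one-to-one matching of predictions to ground truths."""
    matched, tp = set(), 0
    for p in preds:
        best, best_i = 0.0, None
        for i, g in enumerate(gts):
            if i in matched:
                continue  # each ground-truth box may match only once
            v = iou(p, g)
            if v > best:
                best, best_i = v, i
        if best_i is not None and best >= iou_thresh:
            matched.add(best_i)
            tp += 1
    fp = len(preds) - tp   # unmatched predictions
    fn = len(gts) - tp     # missed ground-truth boxes
    precision = tp / (tp + fp) if preds else 0.0
    recall = tp / (tp + fn) if gts else 0.0
    return precision, recall
```

For instance, one correct detection and one spurious detection against two ground-truth boxes yields a precision and recall of 0.5 each.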
By adding depth analysis to the object detection script, the distance between the target object and the camera lens is obtained, which can be used to build a 3D coordinate system. The current work forms a basis for future development of ML and DL techniques for robotics applications.
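Turning a detected pixel plus its depth reading into a 3D coordinate follows the standard pinhole-camera deprojection; on the D435 the RealSense SDK performs this with the stream's calibrated intrinsics. The sketch below shows only the underlying math, with made-up intrinsics (`fx`, `fy`, `cx`, `cy`) standing in for the values the camera would report.

```python
def deproject(u, v, depth_m, fx, fy, cx, cy):
    """Map a pixel (u, v) and its depth in meters to camera-frame (X, Y, Z).

    fx, fy are the focal lengths in pixels and (cx, cy) is the principal
    point; real values would come from the depth camera's calibration.
    """
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)

# Example with hypothetical intrinsics: a detection at the principal
# point lies on the optical axis, so X = Y = 0 at any depth.
print(deproject(320, 240, 1.0, fx=600.0, fy=600.0, cx=320.0, cy=240.0))
```

Applying this to the center pixel of each detected nut or bolt gives the 3D coordinates a future robotic arm would need for path planning.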
See supporting documentation in the team’s Box drive.