This is the third year we have worked on the vision project. In the 2022-2023 season, we started the project and wrote code for pose estimation using AprilTags. In 2023-2024, we mostly focused on improving the accuracy of our pose estimation, but we also started working on object detection. By competition, our pose estimation was accurate enough to shoot into the speaker, but our object detection was not functional enough to use at the competition.
This year, we are mostly focusing on object detection. The first step was imaging the Jetson Orin Nano that we plan to use, since we have not yet been able to find the one we used last year. We ran into several problems imaging it and installing all of the necessary libraries, but we eventually got it working. Our code runs, but only at about 7 or 8 frames per second. We are currently trying to improve our framerate, and then we plan to mount a camera on our test robot so we can create and tune an algorithm to help the driver intake a game piece.
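As a rough idea of what the intake-assist algorithm could look like, here is a minimal sketch (not our actual code) of turning a detection into a steering hint for the driver. It assumes the detector gives us a bounding box in pixel coordinates; the names and tuning constants below are made up for illustration.

```python
from dataclasses import dataclass


@dataclass
class Detection:
    x_min: float      # left edge of bounding box, pixels
    x_max: float      # right edge of bounding box, pixels
    confidence: float


CAMERA_WIDTH_PX = 640   # assumed camera resolution
STEERING_GAIN = 0.004   # tuning constant: turn output per pixel of error
MIN_CONFIDENCE = 0.5


def steering_correction(detections: list[Detection]) -> float:
    """Return a value in roughly [-1, 1]: negative = turn left, positive = turn right."""
    good = [d for d in detections if d.confidence >= MIN_CONFIDENCE]
    if not good:
        return 0.0  # nothing detected; let the driver steer normally
    # Aim at the most confident detection.
    target = max(good, key=lambda d: d.confidence)
    box_center = (target.x_min + target.x_max) / 2.0
    error_px = box_center - CAMERA_WIDTH_PX / 2.0
    return max(-1.0, min(1.0, STEERING_GAIN * error_px))


if __name__ == "__main__":
    # Example: a game piece slightly right of center gives a small positive turn.
    print(steering_correction([Detection(x_min=330, x_max=410, confidence=0.9)]))
```

The correction would get blended with the driver's joystick input, and the gain is exactly the kind of value we expect to tune once the camera is mounted on the test robot.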
We are also training a practice model. Since FIRST has announced that there will be two different game pieces next year, we decided to use the cones and cubes from 2023 for our model. So far, we have taken hundreds of pictures of the objects and labeled them. Next, we will start training the model.
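For reference, here is a minimal sketch of what that training step could look like, assuming we use the Ultralytics YOLO library (we have not committed to a framework yet, so this is just one possibility). The dataset file name, epoch count, and image size are placeholders, not values from our project.

```python
from ultralytics import YOLO

# Start from a small pretrained checkpoint so the model converges faster
# on our relatively small cone/cube dataset.
model = YOLO("yolov8n.pt")

# Train on the labeled cone and cube images; "cone_cube.yaml" is a hypothetical
# dataset config pointing at our images and labels.
model.train(data="cone_cube.yaml", epochs=100, imgsz=640)

# Export to a TensorRT engine so the model can run accelerated on the Jetson Orin Nano.
model.export(format="engine")
```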