ELEE XXXXXXXXXX Spring XXXXXXXXXX Lab 5 (worth 2 labs) Due 4/12, AoE (Reflection Due 4/17)
The goal of this lab is to practice integrating computer vision. You will be using MediaPipe to detect
poses and count actions, as well as your M5StickCPlus to trigger events. You'll also be using YOLOv5 to
detect when a specific arrangement of objects is present.
All of your work should be found in your class GitHub repository under a folder named lab5.
Part 1. Repetition Counting (lab5_part1.py)
The FaceMesh example was previously demonstrated. Another module is
mediapipe.solutions.pose.Pose. This is documented at
https://google.github.io/mediapipe/solutions/pose.html
In that documentation, they suggest that this library can be used to track the number of repetitions of a
particular exercise. Your goal is to actually do this (in a limited fashion).
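As a starting point, here is a minimal sketch (not the required solution) of reading the head's vertical
position from a webcam with the Pose solution. The nose landmark is used as a stand-in for "the head";
landmark coordinates are normalized to [0, 1], with y increasing downward.

    import cv2
    import mediapipe as mp

    mp_pose = mp.solutions.pose

    cap = cv2.VideoCapture(0)
    with mp_pose.Pose(min_detection_confidence=0.5,
                      min_tracking_confidence=0.5) as pose:
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            # MediaPipe expects RGB; OpenCV captures BGR.
            results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if results.pose_landmarks:
                # y grows downward, so squatting makes nose.y larger.
                nose = results.pose_landmarks.landmark[mp_pose.PoseLandmark.NOSE]
                cv2.putText(frame, f"head y = {nose.y:.3f}", (10, 30),
                            cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
            cv2.imshow("pose", frame)
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break
    cap.release()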
The “user” is the person exercising. Let’s assume they are also wearing a smart watch (the
M5StickCPlus), which will be used to initiate the repetition counting. You can use Bluetooth or MQTT to
communicate.
The user defines what a complete repetition looks like by hitting the M5StickCPlus main button twice.
The first time, the user should be standing up straight. The second time, they should be squatting down
at their target height.
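If you choose MQTT, one possible shape for the laptop side is to count messages that the M5StickCPlus
publishes on each button press. The broker and topic below are placeholders, not a required setup; any
broker both devices can reach will work. (This uses the paho-mqtt 1.x constructor; version 2.x also
requires a CallbackAPIVersion argument.)

    import paho.mqtt.client as mqtt

    BROKER = "test.mosquitto.org"  # placeholder broker
    TOPIC = "lab5/button"          # placeholder topic the watch publishes to

    presses = 0  # 1st press: record standing y; 2nd: record squat y and
                 # start the workout; 3rd: finish the workout

    def on_message(client, userdata, msg):
        global presses
        presses += 1

    client = mqtt.Client()
    client.on_message = on_message
    client.connect(BROKER)
    client.subscribe(TOPIC)
    client.loop_start()  # handles MQTT on a background thread; poll
                         # `presses` from your OpenCV loop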
Once the user hits the button the second time, the workout begins. You can count repetitions by using
the following three-state machine:
• Start -> Top when the user is in state Start and their head gets near or above a target y value.
• Top -> Bottom when the user is in state Top and their head gets near or below a target y value.
• Bottom -> Top when the user is in state Bottom and their head gets near or above a target y
value. This event also triggers a repetition count.
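A minimal sketch of this counter follows, assuming the two calibration presses recorded normalized
nose heights top_y (standing) and bottom_y (squatting); the names and the margin value are
illustrative choices, not part of the assignment. Note that because normalized image y grows
downward, "above" corresponds to a smaller y value.

    START, TOP, BOTTOM = range(3)  # the three states

    class RepCounter:
        def __init__(self, top_y, bottom_y, margin=0.05):
            # top_y < bottom_y, since image y grows downward
            self.top_y, self.bottom_y = top_y, bottom_y
            self.margin = margin
            self.state = START
            self.count = 0

        def update(self, head_y):
            near_top = head_y <= self.top_y + self.margin
            near_bottom = head_y >= self.bottom_y - self.margin
            if self.state == START and near_top:
                self.state = TOP
            elif self.state == TOP and near_bottom:
                self.state = BOTTOM
            elif self.state == BOTTOM and near_top:
                self.state = TOP
                self.count += 1  # Bottom -> Top completes one repetition
            return self.count

Per frame, feed the counter the current nose y (reps = counter.update(nose.y)), compute the rate as
reps divided by elapsed seconds, and overlay both with cv2.putText as in the earlier sketch.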
During the workout, you should display the repetition count, repetitions per second, and the user's
pose targets. At any point, the user should be able to hit the button on the M5StickCPlus again to finish
the workout, which should be indicated on screen.
Take a video of your completed repetition counter, and add a link to it in your README.md file. Verify
that this link works!
Part 2. Arrangement Detection
Using YOLOv5 in OpenCV was previously demonstrated. In this part, you'll be detecting when a specific
arrangement of three different types of objects is in view of a camera. You can find the different object
classes in the file classes.txt. If you don't have 3 real objects, you may instead use pictures of objects
(e.g., on your phone or another computer screen). Your program should determine when all three
objects are in view at the same time, and when they are, it should save the image to a file (use the
current time as the filename) and then quit.
Note that this requires very little modification to the example. Rather, you'll need to get the example
working, and then add logic to determine the moment when the 3 required objects are
present. The list of objects to be recognized should be at the top of your file.
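Concretely, that determination can be a set comparison on each frame's detected class names. In the
sketch below, detect() is a stand-in for the inference and non-max-suppression code from the class
example (it is assumed to return the class ids detected in the frame), and the three class names are
placeholders; pick yours from classes.txt.

    import time
    import cv2

    # Required objects, listed at the top of the file as asked.
    # Placeholder names: choose any three classes from classes.txt.
    REQUIRED = {"cup", "banana", "keyboard"}

    with open("classes.txt") as f:
        CLASS_NAMES = [line.strip() for line in f]

    cap = cv2.VideoCapture(0)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # detect() stands in for the example's YOLOv5 inference + NMS.
        seen = {CLASS_NAMES[i] for i in detect(frame)}
        if REQUIRED <= seen:  # all three objects visible at once
            cv2.imwrite(time.strftime("%Y%m%d-%H%M%S") + ".jpg", frame)
            break
    cap.release()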
Take a video of your completed arrangement detector, and add a link to it in your README.md file.
Verify that this link works!
Submission (Due 4/12)
As with previous labs, create a README.md file in the root of your lab5 folder. This should include links
to your two videos.
Reflection (Due 4/17)
As with previous labs, update your README.md file in the root of your lab folder to include a reflection
on the provided solution. This should compare and contrast your solution with the provided one.
These reflections should not be shallow. There should be no doubt that you understand both the
provided solution and your own solution based on what you write in the reflection. A good reflection
will specifically call out parts of the code in each where there are differences.
• Please use quality formatting in your README.md file.
• Bullet point lists are appreciated.