YOPO — Reviewing ground truth data
This week I’ve been busy with job applications, to be honest, but all that is coming to an end now.
TLDR: yeah, the ground truth data looks good, and I’m currently working through the intersection over union that YOLO uses, to see how I’m going to use it for YOPO to detect limbs and draw a skeleton.
So the YOPO training dataset is MPII. I’ve written some Python code to review the ground truth data against an image, to evaluate whether the image annotations were remotely correct.
So, first things first, the ground truth data:

YOPO Evaluation code results:
Green — Joint is not visible
Red — Joint is visible
Numbers are joint IDs; you can review the full list here!

So what’s going on here?
The ground truth is converted from a MATLAB file into a Python dict, then OpenCV is used to plot the joint coordinates on the image and join the points up with lines to form a skeleton.
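Here’s a minimal sketch of that plotting step, assuming the MATLAB annotations have already been flattened into a plain dict of joint id → (x, y, is_visible). The coordinates, the limb pairs in `skeleton`, and the file names are made up for illustration, not the real MPII structure:

```python
import cv2

# Hypothetical, simplified ground truth: joint_id -> (x, y, is_visible).
# The real MPII .mat file needs flattening (e.g. via scipy.io.loadmat) first.
joints = {
    0: (310, 520, 1),
    1: (305, 420, 1),
    2: (300, 320, 0),
}

# Hypothetical limb list: pairs of joint ids to join up with lines.
skeleton = [(0, 1), (1, 2)]

img = cv2.imread("example.jpg")

# Plot each joint, coloured by its visibility flag (red = visible,
# green = not visible, matching the legend above). Colours are BGR.
for joint_id, (x, y, visible) in joints.items():
    colour = (0, 0, 255) if visible else (0, 255, 0)
    cv2.circle(img, (x, y), 5, colour, -1)
    cv2.putText(img, str(joint_id), (x + 8, y),
                cv2.FONT_HERSHEY_SIMPLEX, 0.5, colour, 1)

# Join the points up with lines to form the skeleton.
for a, b in skeleton:
    cv2.line(img, joints[a][:2], joints[b][:2], (255, 0, 0), 2)

cv2.imwrite("skeleton.jpg", img)
```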
After this I started looking at how YOLO generates its bounding boxes and the confidence score for each box, which is the probability that there is an object inside the box, regardless of the class.
Bounding boxes
- So it uses features from the image to detect N boxes
- We need to work out a confidence score for each box; this tells us how likely it is that an object is in the box and how accurate the bounding box is around the “object”
- If there isn’t an object in the box then the score is zero; however, if there is an object then the score is the intersection over union (IOU) between the predicted box and the ground truth
- So that’s the confidence; the box also has an x, y, w, h, where x and y are the box centre and w and h are the width and height of the box
- The key point is that the confidence score represents the IOU between the predicted box and any ground truth box
- IOU is calculated as Area of Overlap / Area of Union (see the sketch below)
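To make that concrete, here’s a small Python sketch of the IOU calculation for two boxes in the (x, y, w, h) centre format described above (the function name and example numbers are my own):

```python
def iou(box_a, box_b):
    """IOU between two (x_centre, y_centre, w, h) boxes."""
    # Convert centre/size format into corner coordinates.
    ax1, ay1 = box_a[0] - box_a[2] / 2, box_a[1] - box_a[3] / 2
    ax2, ay2 = box_a[0] + box_a[2] / 2, box_a[1] + box_a[3] / 2
    bx1, by1 = box_b[0] - box_b[2] / 2, box_b[1] - box_b[3] / 2
    bx2, by2 = box_b[0] + box_b[2] / 2, box_b[1] + box_b[3] / 2

    # Area of overlap: the intersection rectangle (zero if they don't touch).
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    overlap = inter_w * inter_h

    # Area of union = both areas minus the double-counted overlap.
    union = box_a[2] * box_a[3] + box_b[2] * box_b[3] - overlap
    return overlap / union if union > 0 else 0.0

# Two 40x40 boxes offset by 10px in each direction still overlap a fair bit:
print(iou((50, 50, 40, 40), (60, 60, 40, 40)))  # ~0.39
```

So a perfect prediction gives an IOU of 1, no overlap gives 0, and the confidence score for a box that does contain an object is exactly this value.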
