Body feature extraction package

Submodules

body_feature_extraction_impl module

class dronebuddylib.atoms.bodyfeatureextraction.body_feature_extraction_impl.BodyFeatureExtractionImpl(engine_configurations: EngineConfigurations)[source]

Bases: IFeatureExtraction

The BodyFeatureExtractionImpl class extracts features related to body postures from an image. It is built on top of Mediapipe’s pose landmarking solution. For more information, see https://mediapipe-studio.webapps.google.com/home

draw_landmarks_on_image(rgb_image, detection_result)[source]

Draws pose landmarks on the given RGB image based on the detection result.

This method takes an RGB image and a detection result containing pose landmarks. It copies the original image and then iterates through each detected pose, drawing the landmarks on the copy of the image. The landmarks are drawn according to the specifications provided in the detection result, which includes the coordinates and connections of each landmark point.

Parameters:
  • rgb_image (numpy.ndarray) – The original RGB image on which landmarks need to be drawn.

  • detection_result (object) – An object containing the detected pose landmarks. It typically includes a list of pose landmarks with their x, y, z coordinates.

Returns:

An annotated image with pose landmarks drawn on it.

Return type:

numpy.ndarray

The method utilizes solutions.drawing_utils.draw_landmarks for drawing, which requires converting the landmarks into a format compatible with the drawing utility. Each pose landmark is converted into a NormalizedLandmark and then drawn on the image using the specified pose connections and drawing style.
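Since Mediapipe landmarks are normalized to the [0, 1] range, placing them on an image requires scaling to pixel coordinates before drawing. A minimal sketch of that conversion (the function name and the clamping to image bounds are illustrative, not part of the library):

```python
def landmark_to_pixel(x_norm: float, y_norm: float, width: int, height: int) -> tuple:
    """Map a normalized landmark coordinate to a pixel coordinate,
    clamped to the image bounds."""
    x_px = min(int(x_norm * width), width - 1)
    y_px = min(int(y_norm * height), height - 1)
    return (x_px, y_px)
```

solutions.drawing_utils.draw_landmarks performs an equivalent scaling internally for each landmark it renders.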

get_algorithm_name() → str[source]

Get the algorithm name.

Returns:

The algorithm name.

Return type:

str

get_class_name() → str[source]

Get the class name.

Returns:

The class name.

Return type:

str

get_detected_pose(image) → PoseLandmarkerResult[source]

Get the detected pose from an image.

Parameters:

image – The numpy array image to detect the pose from.

Returns:

The detected pose.

Return type:

PoseLandmarkerResult

get_feature(image) → list[source]

Abstract method to get features from an image. This method should be implemented by subclasses.

Parameters:

image (list) – The image to extract features from.

Returns:

The extracted features.

Return type:

list

get_optional_params() → list[source]

Get the optional parameters for the class.

Returns:

The list of optional parameters.

Return type:

list

get_required_params() → list[source]

Get the required parameters for the class.

Returns:

The list of required parameters.

Return type:

list

get_supported_features() → list[source]

Get the list of supported features for the engine.

Returns:

The list of supported features.

Return type:

list

hand_feature_extraction_impl module

class dronebuddylib.atoms.bodyfeatureextraction.hand_feature_extraction_impl.HandFeatureExtractionImpl(engine_configurations: EngineConfigurations)[source]

Bases: IFeatureExtraction

Implementation of the hand feature extraction using Mediapipe’s hand detection solution.

count_fingers(frame, show_feedback=False) → int[source]

Count the number of fingers in a frame.

Parameters:
  • frame (numpy.ndarray) – The frame to count fingers in.

  • show_feedback (bool) – Whether to show the processed frame.

Returns:

The number of fingers in the frame.

Return type:

int
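Mediapipe’s hand landmarker yields 21 landmarks per hand, with fingertips at indices 8, 12, 16 and 20 and the corresponding PIP joints at 6, 10, 14 and 18. A common heuristic for counting raised non-thumb fingers compares each tip’s y coordinate with its PIP joint’s, since image y grows downward. This is a sketch of that idea, not necessarily the exact logic count_fingers uses:

```python
FINGER_TIPS = (8, 12, 16, 20)   # index, middle, ring, pinky tips
FINGER_PIPS = (6, 10, 14, 18)   # matching PIP joints

def count_raised(landmarks) -> int:
    """Count raised non-thumb fingers from 21 (x, y) normalized landmarks.

    A finger counts as raised when its tip is above its PIP joint
    (smaller y, because image coordinates grow downward).
    """
    return sum(
        1
        for tip, pip in zip(FINGER_TIPS, FINGER_PIPS)
        if landmarks[tip][1] < landmarks[pip][1]
    )
```

The thumb is usually handled separately (by comparing x coordinates relative to handedness), which is why it is omitted from this sketch.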

count_raised_fingers(image)[source]

get_algorithm_name() → str[source]

Get the algorithm name of the engine.

Returns:

The algorithm name of the engine.

Return type:

str

get_class_name() → str[source]

Get the class name of the engine.

Returns:

The class name of the engine.

Return type:

str

get_feature(image) → list[source]

Detect hands in an image.

Parameters:

image (list) – The frame to detect the hand in.

Returns:

The list of landmarks of one hand in the frame, or False if no hand is detected.

Return type:

list | bool
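Because get_feature returns either a landmark list or False, callers should test for False explicitly rather than rely on truthiness (an empty list would also be falsy). A small illustrative handler (the function name is hypothetical):

```python
def describe_hand_result(result):
    """Distinguish 'no hand detected' (False) from a landmark list."""
    if result is False:
        return "no hand detected"
    return f"{len(result)} landmarks detected"
```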

get_gesture(numpy_image) → GestureRecognizerResult[source]

Get the gesture in an image.

Parameters:

numpy_image – The image to recognize the gesture in.

Returns:

The result of gesture recognition.

Return type:

GestureRecognizerResult

get_optional_params() → list[source]

Get the optional parameters for the engine.

Returns:

The list of optional parameters.

Return type:

list

get_required_params() → list[source]

Get the required parameters for the engine.

Returns:

The list of required parameters.

Return type:

list

head_feature_extraction_impl module

class dronebuddylib.atoms.bodyfeatureextraction.head_feature_extraction_impl.HeadFeatureExtractionImpl(engine_configurations: EngineConfigurations)[source]

Bases: IFeatureExtraction

Implementation of the head feature extraction using Mediapipe’s face detection solution.

get_algorithm_name() → str[source]

Get the algorithm name.

Returns:

String containing the algorithm name.

get_class_name() → str[source]

Get the class name.

Returns:

String containing the class name.

get_feature(image) → list[source]

Get the bounding box of the head in front of the drone.

Parameters:

image (numpy.ndarray) – The image to be processed.

Returns:

The bounding box of the head as [x, y, w, h], where x (int) and y (int) are the coordinates of the top-left corner and w (int) and h (int) are the width and height of the bounding box.

Return type:

list
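Mediapipe’s face detector reports its bounding box in coordinates relative to the image size, so producing a pixel-space [x, y, w, h] list involves a scaling step along these lines (the function name is illustrative):

```python
def to_pixel_bbox(rel_x, rel_y, rel_w, rel_h, img_w, img_h):
    """Scale a relative (0..1) bounding box to a pixel-space [x, y, w, h] list."""
    return [
        int(rel_x * img_w),
        int(rel_y * img_h),
        int(rel_w * img_w),
        int(rel_h * img_h),
    ]
```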

get_optional_params() → list[source]

Get the optional parameters for the engine.

Returns:

List containing the optional parameters.

get_required_params() → list[source]

Get the required parameters for the engine.

Returns:

List containing the required parameters.

i_feature_extraction module

class dronebuddylib.atoms.bodyfeatureextraction.i_feature_extraction.IFeatureExtraction[source]

Bases: IDBLFunction

The IFeatureExtraction interface declares the method for extracting features from an image.

abstract get_feature(image) → list[source]

Extract features from an image.

Parameters:

image – The image from which features should be extracted.

Returns:

A list containing the extracted features.
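To satisfy the interface, a subclass only needs to provide get_feature. The sketch below uses a simplified abc-based stand-in for the base class (the real interface extends IDBLFunction and declares further methods) and a hypothetical centroid extractor as the concrete feature:

```python
from abc import ABC, abstractmethod

class IFeatureExtraction(ABC):
    """Simplified stand-in for the real interface, which extends IDBLFunction."""

    @abstractmethod
    def get_feature(self, image) -> list:
        """Extract features from an image."""

class CentroidExtraction(IFeatureExtraction):
    """Hypothetical implementation: the centroid of nonzero pixels as the feature."""

    def get_feature(self, image) -> list:
        points = [(r, c)
                  for r, row in enumerate(image)
                  for c, v in enumerate(row) if v]
        if not points:
            return []
        n = len(points)
        return [sum(r for r, _ in points) / n, sum(c for _, c in points) / n]
```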

Module contents