objectidentification package¶
Subpackages¶
- objectidentification.resources package
- Subpackages
- objectidentification.resources.matching package
- Submodules
- objectidentification.resources.matching.SiameseNetworkAPI module
- objectidentification.resources.matching.dataset module
- objectidentification.resources.matching.inferenceDataset module
- objectidentification.resources.matching.main module
- objectidentification.resources.matching.model module
- objectidentification.resources.matching.tune_api module
- Module contents
- Module contents
Submodules¶
objectidentification.benchmarking module¶
- dronebuddylib.atoms.objectidentification.benchmarking.benchmark_feature_extractors_for_dataset(cnn_name=FeatureExtractors.DENSENET121, dataset_path=None)[source]¶
Benchmarks the feature extractors for a dataset.
- Parameters:
cnn_name – The name of the feature extractor.
dataset_path (str) – The path to the dataset.
- Returns:
None
- dronebuddylib.atoms.objectidentification.benchmarking.benchmark_feature_extractors_for_dataset_all(cnn_name=FeatureExtractors.DENSENET121, dataset_path=None)[source]¶
Benchmarks the feature extractors for a dataset.
- Parameters:
cnn_name – The name of the feature extractor.
dataset_path (str) – The path to the dataset.
- Returns:
None
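A minimal usage sketch for the benchmarking helpers above. The dataset path is illustrative, and the import location of the FeatureExtractors enum is an assumption, not part of this module's documentation.

```python
# Hedged sketch: assumes dronebuddylib is installed and that FeatureExtractors
# is importable as shown (import path assumed, not documented here).
from dronebuddylib.atoms.objectidentification.benchmarking import (
    benchmark_feature_extractors_for_dataset,
)
from dronebuddylib.models import FeatureExtractors  # assumed import path

# Benchmark a single feature extractor against a local dataset (illustrative path).
benchmark_feature_extractors_for_dataset(
    cnn_name=FeatureExtractors.DENSENET121,
    dataset_path="datasets/objects",  # hypothetical dataset location
)
```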
objectidentification.detected_object module¶
- class dronebuddylib.atoms.objectidentification.detected_object.BoundingBox(origin_x, origin_y, width, height)[source]¶
Bases:
object
- class dronebuddylib.atoms.objectidentification.detected_object.DetectedCategories(category_name: str, confidence: float)[source]¶
Bases:
object
- class dronebuddylib.atoms.objectidentification.detected_object.DetectedObject(detected_categories: list[DetectedCategories], bounding_box: BoundingBox)[source]¶
Bases:
object
- class dronebuddylib.atoms.objectidentification.detected_object.ObjectDetectionResult(object_names: list, detected_objects: list[DetectedObject])[source]¶
Bases:
object
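The sketch below shows how these data classes compose into a detection result; it relies only on the constructor signatures documented above.

```python
from dronebuddylib.atoms.objectidentification.detected_object import (
    BoundingBox,
    DetectedCategories,
    DetectedObject,
    ObjectDetectionResult,
)

# Build a single detection: one category guess plus its bounding box.
box = BoundingBox(origin_x=10, origin_y=20, width=100, height=80)
category = DetectedCategories(category_name="cup", confidence=0.92)
detected = DetectedObject(detected_categories=[category], bounding_box=box)

# Aggregate detections into the result container returned by the detectors.
result = ObjectDetectionResult(object_names=["cup"], detected_objects=[detected])
```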
objectidentification.i_object_recognition module¶
objectidentification.mp_object_detection_impl module¶
- class dronebuddylib.atoms.objectidentification.mp_object_detection_impl.MPObjectDetectionImpl(engine_configurations: EngineConfigurations)[source]¶
Bases:
IObjectDetection
- get_algorithm_name() str [source]¶
Returns the algorithm name.
- Returns:
The algorithm name.
- Return type:
str
- get_bounding_boxes_of_detected_objects(image) list [source]¶
Detects objects in the provided image and returns a list of bounding boxes for the detected objects.
- Parameters:
image – The image in which to detect objects.
- Returns:
A list of bounding boxes for the detected objects.
- Return type:
list
- get_detected_objects(image) ObjectDetectionResult [source]¶
Detects objects in the provided image and returns a result containing a list of detected objects.
- Parameters:
image – The image in which to detect objects.
- Returns:
A result containing a list of detected objects.
- Return type:
ObjectDetectionResult
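A usage sketch for MPObjectDetectionImpl. The EngineConfigurations import path, its empty-dict constructor, and the OpenCV image loading are assumptions used only to make the example self-contained.

```python
import cv2  # assumed image source; any array-like image should work

from dronebuddylib.atoms.objectidentification.mp_object_detection_impl import (
    MPObjectDetectionImpl,
)
from dronebuddylib.models.engine_configurations import EngineConfigurations  # assumed path

configs = EngineConfigurations({})  # assumed empty-configuration constructor
detector = MPObjectDetectionImpl(configs)

image = cv2.imread("frame.jpg")  # hypothetical input frame
result = detector.get_detected_objects(image)                   # ObjectDetectionResult
boxes = detector.get_bounding_boxes_of_detected_objects(image)  # list of bounding boxes
print(detector.get_algorithm_name(), len(boxes))
```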
objectidentification.object_identification_engine module¶
- class dronebuddylib.atoms.objectidentification.object_identification_engine.ObjectIdentificationEngine(algorithm: ObjectRecognitionAlgorithm, config: EngineConfigurations)[source]¶
Bases:
object
- identify_object(image)[source]¶
Identifies and categorizes the various objects depicted in the provided image.
- Parameters:
image – The image containing objects to be recognized.
- Returns:
A list of recognized objects, each potentially with associated metadata such as object name or coordinates.
- remember_object(image=None, type=None, name=None, drone_instance=None, on_start=None, on_training_set_complete=None, on_validation_set_complete=None)[source]¶
Remember an object by associating it with a name, facilitating its future identification and recall.
- Parameters:
image – The image containing the object.
type – The type of the object.
name (str) – The name to be associated with the object.
drone_instance – An optional drone instance used to capture images.
on_start – Callback invoked when the process starts.
on_training_set_complete – Callback invoked when the training set is complete.
on_validation_set_complete – Callback invoked when the validation set is complete.
- Returns:
True if the operation was successful, False otherwise.
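A minimal sketch of the engine facade. The ObjectRecognitionAlgorithm member name, the EngineConfigurations import path, and its constructor arguments are assumptions based only on the signature above.

```python
import cv2

from dronebuddylib.atoms.objectidentification.object_identification_engine import (
    ObjectIdentificationEngine,
)
from dronebuddylib.models.engine_configurations import EngineConfigurations  # assumed path
from dronebuddylib.models.enums import ObjectRecognitionAlgorithm  # assumed path

configs = EngineConfigurations({})  # assumed constructor
engine = ObjectIdentificationEngine(
    ObjectRecognitionAlgorithm.OBJECT_IDENTIFICATION_SIAMESE,  # hypothetical member name
    configs,
)

image = cv2.imread("frame.jpg")  # hypothetical input frame
identified = engine.identify_object(image)
engine.remember_object(image=image, type="toy", name="my_mug")
```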
objectidentification.object_identification_gpt_impl module¶
- class dronebuddylib.atoms.objectidentification.object_identification_gpt_impl.ObjectIdentificationGPTImpl(engine_configurations: EngineConfigurations)[source]¶
Bases:
IObjectIdentification
A class to perform object identification using ResNet and GPT integration.
- create_memory_on_the_fly(changes=None)[source]¶
Creates memory on the fly by reading known objects from a JSON file and sending them to the object identifier for processing.
- Parameters:
changes – Optional parameter to pass changes if any.
- describe_image(frame)[source]¶
Describes the image using the GPT model.
- Parameters:
frame – The image to describe.
- Returns:
The description of the image.
- Return type:
str
- format_answers(result)[source]¶
Formats the raw result from the object identifier into IdentifiedObjects.
- Parameters:
result (str) – The raw JSON result from the object identifier.
- Returns:
The formatted identified objects.
- Return type:
IdentifiedObjects
- get_algorithm_name() str [source]¶
Gets the algorithm name of the object detection implementation.
- Returns:
The algorithm name of the object detection implementation.
- Return type:
str
- get_class_name() str [source]¶
Gets the class name of the object detection implementation.
- Returns:
The class name of the object detection implementation.
- Return type:
str
- get_optional_params() list [source]¶
Gets the list of optional configuration parameters for the GPT object detection engine.
- Returns:
The list of optional configuration parameters.
- Return type:
list
- get_required_params() list [source]¶
Gets the list of required configuration parameters for the GPT object detection engine.
- Returns:
The list of required configuration parameters.
- Return type:
list
- identify_object(image) IdentifiedObjects [source]¶
Identifies the objects in the given image using the ResNet object detection engine.
- Parameters:
image – The image of the objects to identify.
- Returns:
The identified objects with their associated probabilities.
- Return type:
IdentifiedObjects
- identify_object_image_path(image_path) IdentifiedObjects [source]¶
Identifies the objects in the given image using the ResNet object detection engine.
- Parameters:
image_path (str) – The path to the image of the objects to identify.
- Returns:
The identified objects with their associated probabilities.
- Return type:
IdentifiedObjects
- progress_event = <threading.Event object>¶
- remember_object(image=None, type=None, name=None)[source]¶
Remembers a new object by sending its image and type to the object identifier.
- Parameters:
image – The image of the object to remember.
type – The type of the object.
name – The name of the object.
- Returns:
The result from the object identifier after processing.
- Return type:
success_result
- validate_reference_image(image, image_type) ImageValidatorResults [source]¶
Validates the reference image using the object validator.
- Parameters:
image – The image to validate.
image_type – The type of the image.
- Returns:
The result of the image validation.
- Return type:
ImageValidatorResults
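A usage sketch for ObjectIdentificationGPTImpl. The configuration contents (e.g. an API key) are not documented in this section and can be inspected via get_required_params(); the attribute names read from the result objects are assumed to mirror the IdentifiedObjectObject constructor arguments.

```python
import cv2

from dronebuddylib.atoms.objectidentification.object_identification_gpt_impl import (
    ObjectIdentificationGPTImpl,
)
from dronebuddylib.models.engine_configurations import EngineConfigurations  # assumed path

configs = EngineConfigurations({})  # real use would populate the required params
gpt_identifier = ObjectIdentificationGPTImpl(configs)
print(gpt_identifier.get_required_params())  # inspect what must be configured

frame = cv2.imread("frame.jpg")  # hypothetical input frame
description = gpt_identifier.describe_image(frame)
identified = gpt_identifier.identify_object(frame)
for obj in identified.get_identified_objects():
    # attribute names assumed from the IdentifiedObjectObject constructor
    print(obj.object_name, obj.confidence)
```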
objectidentification.object_identification_resnet_impl module¶
- class dronebuddylib.atoms.objectidentification.object_identification_resnet_impl.ObjectIdentificationResnetImpl(engine_configurations: EngineConfigurations)[source]¶
Bases:
IObjectIdentification
- create_dataset(object_type, object_name, data_mode, drone_instance)[source]¶
Generates a dataset for a given object, optionally using a drone for image collection. Different modes support training, validation, and testing data collection.
- Parameters:
object_type (str) – The type of the object for which to create the dataset.
object_name (str) – The name of the object for which to create the dataset.
data_mode (int) – The mode of dataset creation (e.g., training, validation).
drone_instance – An optional drone instance to use for collecting images.
- get_algorithm_name() str [source]¶
Gets the algorithm name of the object detection implementation.
- Returns:
The algorithm name of the object detection implementation.
- Return type:
str
- get_class_name() str [source]¶
Gets the class name of the object detection implementation.
- Returns:
The class name of the object detection implementation.
- Return type:
str
- get_optional_params() list [source]¶
Gets the list of optional configuration parameters for the YOLO V8 object detection engine.
- Returns:
The list of optional configuration parameters.
- Return type:
list
- get_required_params() list [source]¶
Gets the list of required configuration parameters for the YOLO V8 object detection engine.
- Returns:
The list of required configuration parameters.
- Return type:
list
- progress_event = <threading.Event object>¶
- recognize_objects(image, top_n=3) IdentifiedObjects [source]¶
Recognizes objects depicted in the given image. If the confidence of the predictions is below a given threshold, the object is classified as ‘unknown’.
- Parameters:
image – The image of the objects to recognize.
- Returns:
The recognized objects with their associated probabilities.
- Return type:
IdentifiedObjects
- remember_object(image=None, type=None, name=None, drone_instance=None, on_start=None, on_training_set_complete=None, on_validation_set_complete=None)[source]¶
Remembers an object by associating it with a name.
- Parameters:
image – The image containing the object.
name (str) – The name to be associated with the object.
- Returns:
True if the operation was successful, False otherwise.
- Return type:
bool
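A sketch of the ResNet-based identification flow. The drone-assisted capture and callbacks are omitted, and the configuration contents are assumptions.

```python
import cv2

from dronebuddylib.atoms.objectidentification.object_identification_resnet_impl import (
    ObjectIdentificationResnetImpl,
)
from dronebuddylib.models.engine_configurations import EngineConfigurations  # assumed path

configs = EngineConfigurations({})  # assumed constructor
identifier = ObjectIdentificationResnetImpl(configs)

frame = cv2.imread("frame.jpg")  # hypothetical input frame
result = identifier.recognize_objects(frame, top_n=3)  # IdentifiedObjects
print(result.get_identified_objects())

# Teach the engine a new object (drone-assisted capture left out of this sketch).
identifier.remember_object(image=frame, type="bottle", name="water_bottle")
```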
objectidentification.object_identification_result module¶
- class dronebuddylib.atoms.objectidentification.object_identification_result.IdentifiedObjectObject(class_name: str, object_name: str, description: str, confidence: float)[source]¶
Bases:
object
- class dronebuddylib.atoms.objectidentification.object_identification_result.IdentifiedObjects(identified_objects: list[IdentifiedObjectObject], available_objects: list[IdentifiedObjectObject])[source]¶
Bases:
object
- add_available_object(available_object: IdentifiedObjectObject)[source]¶
- add_identified_object(identified_object: IdentifiedObjectObject)[source]¶
- get_available_objects() list[IdentifiedObjectObject] [source]¶
- get_identified_objects() list[IdentifiedObjectObject] [source]¶
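The following shows how these result containers fit together, following the documented signatures.

```python
from dronebuddylib.atoms.objectidentification.object_identification_result import (
    IdentifiedObjectObject,
    IdentifiedObjects,
)

mug = IdentifiedObjectObject(
    class_name="cup",
    object_name="my_mug",
    description="blue ceramic mug",
    confidence=0.87,
)

results = IdentifiedObjects(identified_objects=[], available_objects=[])
results.add_identified_object(mug)
print(results.get_identified_objects())
```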
objectidentification.object_identification_siamese_impl module¶
- class dronebuddylib.atoms.objectidentification.object_identification_siamese_impl.ObjectIdentificationSiameseSiamese(engine_configurations: EngineConfigurations)[source]¶
Bases:
IObjectIdentification
A class to perform object identification using YOLO V8 and a Siamese network.
- create_dataset(object_type, object_name, data_mode, drone_instance)[source]¶
Generates a dataset for a given object, optionally using a drone for image collection. Different modes support training, validation, and testing data collection.
- Parameters:
object_type (str) – The type of the object for which to create the dataset.
object_name (str) – The name of the object for which to create the dataset.
data_mode (int) – The mode of dataset creation (e.g., training, validation).
drone_instance – An optional drone instance to use for collecting images.
- create_memory(changes=None, drone_instance=None)[source]¶
Creates and trains a KNN classifier using the collected data.
- Parameters:
changes – Any changes to the model or data.
- Returns:
A dictionary containing performance metrics of the trained model.
- Return type:
dict
- extract_and_plot_features(img, layer_index=20, channel_index=5)[source]¶
Extracts and plots features from an image.
- Parameters:
img – The image to extract and plot features from.
layer_index (int) – The layer index for feature extraction.
channel_index (int) – The channel index for plotting.
- extract_features(model, img, layer_index=20)[source]¶
Extracts features from an image using a specific layer of the model.
- Parameters:
model – The model used for feature extraction.
img – The image to extract features from.
layer_index (int) – The layer index for feature extraction.
- Returns:
Extracted features from the specified layer.
- extract_features_from_image(img, layer_index=20)[source]¶
Extracts features from an image using a specific layer of the model.
- Parameters:
img – The image to extract features from.
layer_index (int) – The layer index for feature extraction.
- Returns:
Extracted features from the specified layer.
- get_algorithm_name() str [source]¶
Gets the algorithm name of the object detection implementation.
- Returns:
The algorithm name of the object detection implementation.
- Return type:
str
- get_class_name() str [source]¶
Gets the class name of the object detection implementation.
- Returns:
The class name of the object detection implementation.
- Return type:
str
- get_optional_params() list [source]¶
Gets the list of optional configuration parameters for the YOLO V8 object detection engine.
- Returns:
The list of optional configuration parameters.
- Return type:
list
- get_required_params() list [source]¶
Gets the list of required configuration parameters for the YOLO V8 object detection engine.
- Returns:
The list of required configuration parameters.
- Return type:
list
- image_files_in_folder(folder)[source]¶
Lists image files in a specified folder.
- Parameters:
folder – Folder to search for image files.
- Returns:
List of image file paths.
- Return type:
list
- preprocess_image(image)[source]¶
Preprocesses an image for model input.
- Parameters:
image – The image to preprocess.
- Returns:
Preprocessed image tensor.
- progress_bar(done_event, title='Training Progress')[source]¶
Displays a progress bar for the training process.
- Parameters:
done_event – Event to signal when the progress is complete.
title (str) – The title of the progress bar.
- progress_event = <threading.Event object>¶
- recognize_objects(image)[source]¶
Recognizes objects depicted in the given image. If the confidence of the predictions is below a given threshold, the object is classified as ‘unknown’.
- Parameters:
image – The image of the objects to recognize.
- Returns:
The recognized objects with their associated probabilities.
- Return type:
RecognizedObjects
- remember_object(image=None, type=None, name=None, drone_instance=None, on_start=None, on_training_set_complete=None, on_validation_set_complete=None)[source]¶
Starts the process to remember an object by creating a training and validation dataset.
- Parameters:
image – The image of the object to remember.
type – The type of object.
name – The name of the object.
drone_instance – The instance of the drone used to capture images.
on_start – Callback when the process starts.
on_training_set_complete – Callback when the training set is complete.
on_validation_set_complete – Callback when the validation set is complete.
- test_image(image_path)[source]¶
Tests an image for object recognition.
- Parameters:
image_path – Path to the image file.
- train(feature_extractor_model='efficientnetv2', num_samples=100, emb_size=20, epochs=10, lr=1e-05, batch_size=4, train_val_split=0.8, num_workers=1, seed=0, output_folder_name=None, lr_scheduler=False, pretrained_weights=None)[source]¶
Trains the model using the specified parameters.
- Parameters:
feature_extractor_model (str) – The model used for feature extraction (default is “efficientnetv2”).
num_samples (int) – The number of samples to use for training (default is 100).
emb_size (int) – The size of the embedding (default is 20).
epochs (int) – The number of training epochs (default is 10).
lr (float) – The learning rate for training (default is 1e-5).
batch_size (int) – The batch size for training (default is 4).
train_val_split (float) – The ratio for splitting training and validation data (default is 0.8).
num_workers (int) – The number of worker threads to use for data loading (default is 1).
seed (int) – The random seed for reproducibility (default is 0).
output_folder_name (str, optional) – The folder name for saving the output.
lr_scheduler (bool) – Flag to use learning rate scheduler (default is False).
pretrained_weights (str, optional) – Path to pretrained weights for the model.
- Returns:
A dictionary containing performance metrics such as accuracy and precision of the trained model.
- Return type:
dict
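A training sketch for the Siamese implementation. The hyperparameters mirror the documented defaults; the configuration contents and the output folder name are assumptions.

```python
from dronebuddylib.atoms.objectidentification.object_identification_siamese_impl import (
    ObjectIdentificationSiameseSiamese,
)
from dronebuddylib.models.engine_configurations import EngineConfigurations  # assumed path

configs = EngineConfigurations({})  # assumed constructor
siamese = ObjectIdentificationSiameseSiamese(configs)

metrics = siamese.train(
    feature_extractor_model="efficientnetv2",
    num_samples=100,
    emb_size=20,
    epochs=10,
    lr=1e-5,
    batch_size=4,
    train_val_split=0.8,
    output_folder_name="runs/siamese_demo",  # hypothetical output folder
)
print(metrics)  # dict of performance metrics such as accuracy and precision
```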
objectidentification.plotter module¶
- dronebuddylib.atoms.objectidentification.plotter.compare_folder_with_others_tsne(root_folder, target_folder, api)[source]¶
objectidentification.plotter_abs module¶
objectidentification.siamese_impl module¶
- dronebuddylib.atoms.objectidentification.siamese_impl.batch_process_images(images, api, class_name, same_class, other_class_name=None)[source]¶
- dronebuddylib.atoms.objectidentification.siamese_impl.compare_folder_with_others(root_folder, target_folder, api)[source]¶
Compare each image in a target folder with images in all other folders under the root directory.
- Parameters:
root_folder (str) – The root directory containing all class folders.
target_folder (str) – The specific folder within the root whose images are compared against all others.
api (object) – An instance of the class with the two_image_inference method.
- Returns:
None. Results are saved directly to a CSV file.
- dronebuddylib.atoms.objectidentification.siamese_impl.compare_folder_with_others_tsne(root_folder, target_folder, api)[source]¶
Compare each image in a target folder with images in all other folders under the root directory.
- Parameters:
root_folder (str) – The root directory containing all class folders.
target_folder (str) – The specific folder within the root whose images are compared against all others.
api (object) – An instance of the class with the two_image_inference method.
- Returns:
None. Results are saved directly to a CSV file.
- dronebuddylib.atoms.objectidentification.siamese_impl.compare_image_with_folder(target_image_path, folder_path, api)[source]¶
Compare a specific image with all images in a given folder using a specified API and save results to a CSV file.
- Parameters:
target_image_path (str) – Path to the target image to compare.
folder_path (str) – Path to the folder containing images to compare against.
api (object) – An instance of the class with the two_image_inference method.
- Returns:
A DataFrame containing the comparison results.
- Return type:
DataFrame
- dronebuddylib.atoms.objectidentification.siamese_impl.compare_image_with_folder_tsne(target_image_path, folder_path, api)[source]¶
Compare a specific image with all images in a given folder using a specified API and save results to a CSV file.
- Parameters:
target_image_path (str) – Path to the target image to compare.
folder_path (str) – Path to the folder containing images to compare against.
api (object) – An instance of the class with the two_image_inference method.
- Returns:
A DataFrame containing the comparison results.
- Return type:
DataFrame
- dronebuddylib.atoms.objectidentification.siamese_impl.compare_images_in_classes(root_folder, api, batch_size=10)[source]¶
- dronebuddylib.atoms.objectidentification.siamese_impl.load_images_from_folder(folder_path, transform=Sequential( (0): Resize(size=(228, 228), interpolation=bilinear, max_size=None, antialias=True) ))[source]¶
- dronebuddylib.atoms.objectidentification.siamese_impl.load_model()[source]¶
Loads the model for object identification.
- dronebuddylib.atoms.objectidentification.siamese_impl.plot_comparison_results(csv_file_path)[source]¶
Plot the comparison results from a CSV file where points are colored based on class similarity.
- Parameters:
csv_file_path (str) – Path to the CSV file containing comparison results.
- dronebuddylib.atoms.objectidentification.siamese_impl.transform = Sequential( (0): Resize(size=(228, 228), interpolation=bilinear, max_size=None, antialias=True) )¶
Input: folder_path – a folder of one or more images. Output: a torch tensor of the images.
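A sketch of the folder-comparison helpers in this module. The SiameseNetworkAPI class name and its no-argument constructor are assumptions based on the matching subpackage listed above; the file paths are illustrative.

```python
from dronebuddylib.atoms.objectidentification.siamese_impl import (
    compare_image_with_folder,
    plot_comparison_results,
)
from dronebuddylib.atoms.objectidentification.resources.matching.SiameseNetworkAPI import (
    SiameseNetworkAPI,  # assumed to expose the two_image_inference method
)

api = SiameseNetworkAPI()  # assumed no-argument construction
df = compare_image_with_folder(
    target_image_path="images/query/mug.jpg",  # hypothetical paths
    folder_path="images/known/mug",
    api=api,
)
print(df.head())

plot_comparison_results("comparison_results.csv")  # hypothetical CSV produced by the comparison
```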