Models package



chat_session module

class dronebuddylib.models.chat_session.ChatSession(configs: GPTConfigs)[source]

Bases: object

Represents a chat session. Each session has a unique id to associate it with the user. It holds the conversation history and provides functionality to get new responses from ChatGPT for user queries.

encode_image_cv(image) → str[source]

Encode an OpenCV image to a base64 string

end_session()[source]
get_chatgpt_response(user_message: str) → str[source]

For the given user_message, get the response from ChatGPT

get_chatgpt_response_for_image_queries(user_message: str, image_path: str) → str[source]

For the given user_message and the image at image_path, get the response from ChatGPT

get_chatgpt_response_for_image_queries_as_encoded(user_message: str, image_path: str) → str[source]

For the given user_message and the image at image_path, sent as a base64-encoded image, get the response from ChatGPT

get_messages() → List[Dict][source]

Return the list of messages from the current conversation

get_response() → ChatCompletionMessage[source]
send_encoded_image_message_to_llm_queue(role, content, image)[source]
send_image_message_to_llm(role, content, image_path)[source]
send_message_with_image(user_message: str, image_path: str) → str[source]

Send a message with an attached image

send_text_message_to_llm(role, content)[source]
set_system_prompt(system_prompt: str)[source]
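
Example: a minimal usage sketch that constructs a ChatSession from a GPTConfigs instance (documented in the gpt_configs module below). The API key, model name, file paths and prompts are placeholders, not values shipped with the library.

    from dronebuddylib.models.chat_session import ChatSession
    from dronebuddylib.models.gpt_configs import GPTConfigs

    # Placeholder credentials and paths; supply your own values.
    configs = GPTConfigs(
        open_ai_api_key="sk-...",
        open_ai_model="gpt-4-vision-preview",
        open_ai_temperature=0.7,
        loger_location="logs/chat_session.log",
    )

    session = ChatSession(configs)
    session.set_system_prompt("You are an assistant controlling a small drone.")

    # Plain text query.
    reply = session.get_chatgpt_response("What should the drone do next?")
    print(reply)

    # Image-grounded query: pass the path of an image on disk.
    reply = session.get_chatgpt_response_for_image_queries(
        "Describe the objects in this frame.", "frames/frame_001.jpg"
    )
    print(reply)

    session.end_session()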

conversation module

class dronebuddylib.models.conversation.Conversation[source]

Bases: object

This class represents a conversation with the ChatGPT model. It stores the conversation history in the form of a list of messages.

add_image_message(role, content, image_url)[source]
add_image_message_as_encoded(role, content, image)[source]
add_message(role, content)[source]
encode_image(image_path: str) → str[source]

Encode the image at the given path to a base64 string

get_base64_encoded_image(frame)[source]
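
Example: a short sketch of building up a conversation history by hand. The "system"/"user" role strings follow the usual OpenAI chat convention, which is an assumption; the image path is a placeholder.

    from dronebuddylib.models.conversation import Conversation

    conversation = Conversation()

    # Role strings are assumed to follow the OpenAI chat convention.
    conversation.add_message("system", "You are an assistant controlling a small drone.")
    conversation.add_message("user", "What is directly ahead of you?")

    # Attach an image by path/URL, or base64-encode it first and attach the encoded form.
    conversation.add_image_message("user", "Describe this frame.", "frames/frame_001.jpg")
    encoded = conversation.encode_image("frames/frame_001.jpg")
    conversation.add_image_message_as_encoded("user", "Describe this frame.", encoded)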

engine_configurations module

class dronebuddylib.models.engine_configurations.EngineConfigurations(configurations: dict)[source]

Bases: object

add_configuration(key: AtomicEngineConfigurations, value: str)[source]
get_configuration(key: AtomicEngineConfigurations) → str[source]
get_configurations() → dict[source]
get_configurations_for_engine(class_name: str) → dict[source]
remove_configurations(key: AtomicEngineConfigurations) → str[source]
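
Example: a sketch of assembling engine configurations using keys from the AtomicEngineConfigurations enum documented below. Passing an empty dict to the constructor and the specific values shown are assumptions for illustration.

    from dronebuddylib.models.engine_configurations import EngineConfigurations
    from dronebuddylib.models.enums import AtomicEngineConfigurations

    # Assumes an empty dict is an acceptable starting configuration.
    engine_configs = EngineConfigurations({})

    # Keys come from AtomicEngineConfigurations; values are plain strings (placeholders here).
    engine_configs.add_configuration(AtomicEngineConfigurations.SPEECH_GENERATION_TTS_RATE, "150")
    engine_configs.add_configuration(AtomicEngineConfigurations.SPEECH_GENERATION_TTS_VOLUME, "1.0")

    rate = engine_configs.get_configuration(AtomicEngineConfigurations.SPEECH_GENERATION_TTS_RATE)
    all_configs = engine_configs.get_configurations()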

enums module

class dronebuddylib.models.enums.AtomicEngineConfigurations(value)[source]

Bases: Enum

Enumeration of the configuration keys accepted by EngineConfigurations.

BODY_FEATURE_EXTRACTION_POSTURE_DETECTION_MODEL_PATH='BODY_FEATURE_EXTRACTION_POSTURE_DETECTION_MODEL_PATH'
FACE_RECOGNITION_KNN_ALGORITHM_NAME='FACE_RECOGNITION_KNN_ALGORITHM_NAME'
FACE_RECOGNITION_KNN_ALGORITHM_NEIGHBORS='FACE_RECOGNITION_KNN_ALGORITHM_NEIGHBORS'
FACE_RECOGNITION_KNN_DRONE_INSTANCE='FACE_RECOGNITION_KNN_DRONE_INSTANCE'
FACE_RECOGNITION_KNN_MODEL_SAVING_PATH='FACE_RECOGNITION_KNN_MODEL_SAVING_PATH'
FACE_RECOGNITION_KNN_MODEL_THRESHOLD='FACE_RECOGNITION_KNN_MODEL_THRESHOLD'
FACE_RECOGNITION_KNN_TESTING_DATA_SET_SIZE='FACE_RECOGNITION_KNN_TESTING_DATA_SET_SIZE'
FACE_RECOGNITION_KNN_TRAINING_DATA_SET_SIZE='FACE_RECOGNITION_KNN_TRAINING_DATA_SET_SIZE'
FACE_RECOGNITION_KNN_USE_DRONE_TO_CREATE_DATASET='FACE_RECOGNITION_KNN_USE_DRONE_TO_CREATE_DATASET'
FACE_RECOGNITION_KNN_VALIDATION_DATA_SET_SIZE='FACE_RECOGNITION_KNN_VALIDATION_DATA_SET_SIZE'
HAND_FEATURE_EXTRACTION_ENABLE_GESTURE_RECOGNITION='HAND_FEATURE_EXTRACTION_ENABLE_GESTURE_RECOGNITION'
HAND_FEATURE_EXTRACTION_GESTURE_RECOGNITION_MODEL_PATH='HAND_FEATURE_EXTRACTION_GESTURE_RECOGNITION_MODEL_PATH'
INTENT_RECOGNITION_OPEN_AI_API_KEY='INTENT_RECOGNITION_OPEN_AI_API_KEY'
INTENT_RECOGNITION_OPEN_AI_API_URL='INTENT_RECOGNITION_OPEN_AI_API_URL'
INTENT_RECOGNITION_OPEN_AI_LOGGER_LOCATION='INTENT_RECOGNITION_OPEN_AI_LOGGER_LOCATION'
INTENT_RECOGNITION_OPEN_AI_MODEL='INTENT_RECOGNITION_OPEN_AI_MODEL'
INTENT_RECOGNITION_OPEN_AI_SYSTEM_ACTIONS_PATH='INTENT_RECOGNITION_OPEN_AI_SYSTEM_ACTIONS_PATH'
INTENT_RECOGNITION_OPEN_AI_SYSTEM_PROMPT='INTENT_RECOGNITION_OPEN_AI_SYSTEM_PROMPT'
INTENT_RECOGNITION_OPEN_AI_TEMPERATURE='INTENT_RECOGNITION_OPEN_AI_TEMPERATURE'
INTENT_RECOGNITION_SNIPS_LANGUAGE_CONFIG='INTENT_RECOGNITION_SNIPS_LANGUAGE_CONFIG'
INTENT_RECOGNITION_SNIPS_NLU_DATASET_PATH='INTENT_RECOGNITION_SNIPS_NLU_DATASET_PATH'
OBJECT_DETECTION_MP_MODELS_PATH='OBJECT_DETECTION_MP_MODELS_PATH'
OBJECT_DETECTION_YOLO_V3_WEIGHTS_PATH='OBJECT_DETECTION_YOLO_V3_WEIGHTS_PATH'
OBJECT_DETECTION_YOLO_VERSION='OBJECT_DETECTION_YOLO_VERSION'
OBJECT_IDENTIFICATION_GPT_API_KEY='OBJECT_IDENTIFICATION_GPT_API_KEY'
OBJECT_IDENTIFICATION_GPT_MODEL='OBJECT_IDENTIFICATION_GPT_MODEL'
OBJECT_IDENTIFICATION_KNN_ALGORITHM_NAME='OBJECT_IDENTIFICATION_KNN_ALGORITHM_NAME'
OBJECT_IDENTIFICATION_KNN_ALGORITHM_NEIGHBORS='OBJECT_IDENTIFICATION_KNN_ALGORITHM_NEIGHBORS'
OBJECT_IDENTIFICATION_KNN_ALGORITHM_WEIGHTS='OBJECT_IDENTIFICATION_KNN_ALGORITHM_WEIGHTS'
OBJECT_IDENTIFICATION_KNN_CLASSIFIER_LOCATION='OBJECT_IDENTIFICATION_KNN_CLASSIFIER_LOCATION'
OBJECT_IDENTIFICATION_KNN_DRONE_INSTANCE='OBJECT_IDENTIFICATION_KNN_DRONE_INSTANCE'
OBJECT_IDENTIFICATION_KNN_END_TRAINING_CALLBACK='OBJECT_IDENTIFICATION_KNN_END_TRAINING_CALLBACK'
OBJECT_IDENTIFICATION_KNN_EXTRACTOR='OBJECT_IDENTIFICATION_KNN_EXTRACTOR'
OBJECT_IDENTIFICATION_KNN_MODEL_NAME='OBJECT_IDENTIFICATION_KNN_MODEL_NAME'
OBJECT_IDENTIFICATION_KNN_MODEL_PATH='OBJECT_IDENTIFICATION_KNN_MODEL_PATH'
OBJECT_IDENTIFICATION_KNN_MODEL_SAVING_PATH='OBJECT_IDENTIFICATION_KNN_MODEL_SAVING_PATH'
OBJECT_IDENTIFICATION_KNN_MODEL_THRESHOLD='OBJECT_IDENTIFICATION_KNN_MODEL_THRESHOLD'
OBJECT_IDENTIFICATION_KNN_MODEL_VERSION='OBJECT_IDENTIFICATION_KNN_MODEL_VERSION'
OBJECT_IDENTIFICATION_KNN_START_TRAINING_CALLBACK='OBJECT_IDENTIFICATION_KNN_START_TRAINING_CALLBACK'
OBJECT_IDENTIFICATION_KNN_TESTING_DATA_SET_SIZE='OBJECT_IDENTIFICATION_KNN_TESTING_DATA_SET_SIZE'
OBJECT_IDENTIFICATION_KNN_TRAINING_DATA_SET_SIZE='OBJECT_IDENTIFICATION_KNN_TRAINING_DATA_SET_SIZE'
OBJECT_IDENTIFICATION_KNN_USE_DRONE_TO_CREATE_DATASET='OBJECT_IDENTIFICATION_KNN_USE_DRONE_TO_CREATE_DATASET'
OBJECT_IDENTIFICATION_KNN_VALIDATION_DATA_SET_SIZE='OBJECT_IDENTIFICATION_KNN_VALIDATION_DATA_SET_SIZE'
OBJECT_IDENTIFICATION_SIAMESE_YOLO_VERSION='OBJECT_IDENTIFICATION_SIAMESE_YOLO_VERSION'
OBJECT_IDENTIFICATION_YOLO_DRONE_INSTANCE='OBJECT_IDENTIFICATION_YOLO_DRONE_INSTANCE'
OBJECT_IDENTIFICATION_YOLO_WEIGHTS_PATH='OBJECT_IDENTIFICATION_YOLO_WEIGHTS_PATH'
PLACE_RECOGNITION_KNN_ALGORITHM_NAME='PLACE_RECOGNITION_KNN_ALGORITHM_NAME'
PLACE_RECOGNITION_KNN_ALGORITHM_NEIGHBORS='PLACE_RECOGNITION_KNN_ALGORITHM_NEIGHBORS'
PLACE_RECOGNITION_KNN_ALGORITHM_WEIGHTS='PLACE_RECOGNITION_KNN_ALGORITHM_WEIGHTS'
PLACE_RECOGNITION_KNN_CLASSIFIER_LOCATION='PLACE_RECOGNITION_KNN_CLASSIFIER_LOCATION'
PLACE_RECOGNITION_KNN_DRONE_INSTANCE='PLACE_RECOGNITION_KNN_DRONE_INSTANCE'
PLACE_RECOGNITION_KNN_END_TRAINING_CALLBACK='PLACE_RECOGNITION_KNN_END_TRAINING_CALLBACK'
PLACE_RECOGNITION_KNN_EXTRACTOR='PLACE_RECOGNITION_KNN_EXTRACTOR'
PLACE_RECOGNITION_KNN_MODEL_SAVING_PATH='PLACE_RECOGNITION_KNN_MODEL_SAVING_PATH'
PLACE_RECOGNITION_KNN_MODEL_THRESHOLD='PLACE_RECOGNITION_KNN_MODEL_THRESHOLD'
PLACE_RECOGNITION_KNN_START_TRAINING_CALLBACK='PLACE_RECOGNITION_KNN_START_TRAINING_CALLBACK'
PLACE_RECOGNITION_KNN_TESTING_DATA_SET_PATH='PLACE_RECOGNITION_KNN_TESTING_DATA_SET_PATH'
PLACE_RECOGNITION_KNN_TRAINING_DATA_SET_PATH='PLACE_RECOGNITION_KNN_TRAINING_DATA_SET_PATH'
PLACE_RECOGNITION_KNN_USE_DRONE_TO_CREATE_DATASET='PLACE_RECOGNITION_KNN_USE_DRONE_TO_CREATE_DATASET'
PLACE_RECOGNITION_KNN_VALIDATION_DATA_SET_PATH='PLACE_RECOGNITION_KNN_VALIDATION_DATA_SET_PATH'
PLACE_RECOGNITION_RF_ALGORITHM_NAME='PLACE_RECOGNITION_RF_ALGORITHM_NAME'
PLACE_RECOGNITION_RF_ALGORITHM_NEIGHBORS='PLACE_RECOGNITION_RF_ALGORITHM_NEIGHBORS'
PLACE_RECOGNITION_RF_ALGORITHM_WEIGHTS='PLACE_RECOGNITION_RF_ALGORITHM_WEIGHTS'
PLACE_RECOGNITION_RF_CLASSIFIER_LOCATION='PLACE_RECOGNITION_RF_CLASSIFIER_LOCATION'
PLACE_RECOGNITION_RF_DRONE_INSTANCE='PLACE_RECOGNITION_RF_DRONE_INSTANCE'
PLACE_RECOGNITION_RF_END_TRAINING_CALLBACK='PLACE_RECOGNITION_RF_END_TRAINING_CALLBACK'
PLACE_RECOGNITION_RF_EXTRACTOR='PLACE_RECOGNITION_RF_EXTRACTOR'
PLACE_RECOGNITION_RF_MODEL_SAVING_PATH='PLACE_RECOGNITION_RF_MODEL_SAVING_PATH'
PLACE_RECOGNITION_RF_MODEL_THRESHOLD='PLACE_RECOGNITION_RF_MODEL_THRESHOLD'
PLACE_RECOGNITION_RF_START_TRAINING_CALLBACK='PLACE_RECOGNITION_RF_START_TRAINING_CALLBACK'
PLACE_RECOGNITION_RF_TESTING_DATA_SET_PATH='PLACE_RECOGNITION_RF_TESTING_DATA_SET_PATH'
PLACE_RECOGNITION_RF_TRAINING_DATA_SET_PATH='PLACE_RECOGNITION_RF_TRAINING_DATA_SET_PATH'
PLACE_RECOGNITION_RF_USE_DRONE_TO_CREATE_DATASET='PLACE_RECOGNITION_RF_USE_DRONE_TO_CREATE_DATASET'
PLACE_RECOGNITION_RF_VALIDATION_DATA_SET_PATH='PLACE_RECOGNITION_RF_VALIDATION_DATA_SET_PATH'
SPEECH_GENERATION_TTS_RATE='SPEECH_GENERATION_TTS_RATE'
SPEECH_GENERATION_TTS_VOICE_ID='SPEECH_GENERATION_TTS_VOICE_ID'
SPEECH_GENERATION_TTS_VOLUME='SPEECH_GENERATION_TTS_VOLUME'
SPEECH_RECOGNITION_GOOGLE_ENCODING='SPEECH_RECOGNITION_GOOGLE_ENCODING'
SPEECH_RECOGNITION_GOOGLE_LANGUAGE_CODE='SPEECH_RECOGNITION_GOOGLE_LANGUAGE_CODE'
SPEECH_RECOGNITION_GOOGLE_SAMPLE_RATE_HERTZ='SPEECH_RECOGNITION_GOOGLE_SAMPLE_RATE_HERTZ'
SPEECH_RECOGNITION_MULTI_ALGO_ALGORITHM_NAME='SPEECH_RECOGNITION_MULTI_ALGO_ALGORITHM_NAME'
SPEECH_RECOGNITION_MULTI_ALGO_ALGO_MIC_TIMEOUT='SPEECH_RECOGNITION_MULTI_ALGO_ALGO_MIC_TIMEOUT'
SPEECH_RECOGNITION_MULTI_ALGO_ALGO_PHRASE_TIME_LIMIT='SPEECH_RECOGNITION_MULTI_ALGO_ALGO_PHRASE_TIME_LIMIT'
SPEECH_RECOGNITION_MULTI_ALGO_IBM_KEY='SPEECH_RECOGNITION_MULTI_ALGO_IBM_KEY'
SPEECH_RECOGNITION_VOSK_LANGUAGE='SPEECH_RECOGNITION_VOSK_LANGUAGE'
SPEECH_RECOGNITION_VOSK_LANGUAGE_MODEL_PATH='SPEECH_RECOGNITION_VOSK_LANGUAGE_MODEL_PATH'
class dronebuddylib.models.enums.DroneCommands(value)[source]

Bases: Enum

Enumeration of commands that can be issued to the drone.

BACKWARD=('move the drone backward',)
BATTERY=('BATTERY',)
DOWN=('go down',)
FLIP=('do a flip',)
FOLLOW_ME=('follow a person',)
FORWARD=('move the drone forward',)
HEIGHT=('HEIGHT',)
LAND=('land the drone',)
LEFT=('move to the left',)
LOCATE_OBJECTS_AND_RECOGNIZE=('find the objects and recognize them',)
MOVE_AROUND=('move around the room',)
NONE=None
RECOGNIZE_OBJECTS=('recognize the objects in the image',)
RECOGNIZE_PEOPLE=('recognize the people in the image',)
RECOGNIZE_TEXT=('recognize a text from the image',)
RIGHT=('move to the right',)
ROTATE_CLOCKWISE=('rotate the drone clockwise',)
ROTATE_COUNTER_CLOCKWISE=('rotate the drone counter clockwise',)
SPEED=('SPEED',)
STOP=('STOP',)
TAKE_A_PHOTO=('take a photo',)
TAKE_OFF=('start flying the drone',)
UP=('go up',)
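
Example: an illustrative dispatch on DroneCommands members; the handler below and the strings it returns are hypothetical and exist only to show how the enum might be switched on.

    from dronebuddylib.models.enums import DroneCommands

    def handle_command(command: DroneCommands) -> str:
        # Hypothetical dispatch; the returned strings are illustrative only.
        if command == DroneCommands.TAKE_OFF:
            return "starting flight"
        if command == DroneCommands.LAND:
            return "landing"
        if command == DroneCommands.NONE:
            return "no recognized command"
        return f"executing {command.name.lower()}"

    print(handle_command(DroneCommands.TAKE_OFF))
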
class dronebuddylib.models.enums.FaceRecognitionAlgorithm(value)[source]

Bases: Enum

Supported face recognition algorithms.

FACE_RECOGNITION_EUCLIDEAN=('FACE_RECOGNITION_EUCLIDEAN',)
FACE_RECOGNITION_KNN=('FACE_RECOGNITION_KNN',)
class dronebuddylib.models.enums.IntentRecognitionAlgorithm(value)[source]

Bases: Enum

Supported intent recognition algorithms.

CHAT_GPT=('CHAT_GPT',)
SNIPS_NLU=('SNIPS_NLU',)
class dronebuddylib.models.enums.Language(value)[source]

Bases: Enum

Supported languages.

ENGLISH=('en-gb',)
FRENCH=('FR',)
class dronebuddylib.models.enums.LoggerColors(value)[source]

Bases: Enum

ANSI escape codes used to color logger output.

BLUE='\x1b[0;34m'
CYAN='\x1b[0;36m'
GREEN='\x1b[0;32m'
PURPLE='\x1b[0;35m'
RED='\x1b[0;31m'
WHITE='\x1b[0;37m'
YELLOW='\x1b[0;33m'
class dronebuddylib.models.enums.ObjectDetectionReturnTypes(value)[source]

Bases: Enum

Enum for the return types of the object detection functions.

ALL='ALL'
BBOX='BBOX'
CONF='CONF'
LABELS='LABELS'
class dronebuddylib.models.enums.ObjectRecognitionAlgorithm(value)[source]

Bases: Enum

Supported object recognition algorithms.

YOLO_TRANSFER_LEARNING=('YOLO_TRANSFER_LEARNING',)
class dronebuddylib.models.enums.PlaceRecognitionAlgorithm(value)[source]

Bases: Enum

Supported place recognition algorithms.

PLACE_RECOGNITION_KNN=('PLACE_RECOGNITION_KNN',)
PLACE_RECOGNITION_RF=('PLACE_RECOGNITION_RF',)
class dronebuddylib.models.enums.SpeechGenerationAlgorithm(value)[source]

Bases: Enum

Supported speech generation (text-to-speech) algorithms.

GOOGLE_TTS_OFFLINE=('GOOGLE_TTS_OFFLINE',)
class dronebuddylib.models.enums.TextRecognitionAlgorithm(value)[source]

Bases: Enum

Supported text recognition algorithms.

GOOGLE_VISION=('GOOGLE_VISION',)
class dronebuddylib.models.enums.VisionAlgorithm(value)[source]

Bases: Enum

Supported vision algorithms.

GOOGLE_VISION=('GOOGLE_VISION',)
MEDIA_PIPE=('MEDIA_PIPE',)
YOLO=('YOLO',)

gpt_configs module

class dronebuddylib.models.gpt_configs.GPTConfigs(open_ai_api_key: str, open_ai_model: str, open_ai_temperature: float, loger_location: str)[source]

Bases: object

i_dbl_function module

class dronebuddylib.models.i_dbl_function.IDBLFunction[source]

Bases: ABC

abstract get_algorithm_name() → str[source]
abstract get_class_name() → str[source]
abstract get_optional_params() → list[source]
abstract get_required_params() → list[source]
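
Example: a sketch of a class implementing the interface; the class name and the values it returns are hypothetical, and only the four abstract methods listed above are assumed.

    from dronebuddylib.models.i_dbl_function import IDBLFunction

    class DummyFunction(IDBLFunction):
        """Hypothetical implementation, shown only to illustrate the interface."""

        def get_algorithm_name(self) -> str:
            return "DUMMY_ALGORITHM"

        def get_class_name(self) -> str:
            return "DummyFunction"

        def get_required_params(self) -> list:
            return []

        def get_optional_params(self) -> list:
            return []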

intent module

session_logger module

class dronebuddylib.models.session_logger.SessionLogger(logger_file_location: str)[source]

Bases: object

close_file()[source]
log_chat(role, token_count, message)[source]
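
Example: a short logging sketch; the file path, role string and token count are placeholders, and their meanings are inferred from the parameter names above.

    from dronebuddylib.models.session_logger import SessionLogger

    logger = SessionLogger("logs/session.log")  # placeholder path
    logger.log_chat("user", 12, "take off and hover at one metre")
    logger.close_file()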

token_counter module

dronebuddylib.models.token_counter.num_tokens_from_messages(messages, model)[source]

Return the number of tokens used by a list of messages.
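
Example: a sketch of counting tokens for a list of OpenAI-style messages; the role/content dict format and the model name are assumptions for illustration.

    from dronebuddylib.models.token_counter import num_tokens_from_messages

    messages = [
        {"role": "system", "content": "You are an assistant controlling a small drone."},
        {"role": "user", "content": "How much battery is left?"},
    ]

    # Model name is a placeholder; pass whichever model the session is configured with.
    print(num_tokens_from_messages(messages, "gpt-3.5-turbo"))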

Module contents