Clusters
CLUSTER_GPT_35_ETE_CONVERSATION = {'openai_speech2text': {'order': 0, 'extra_params': {}, 'component_type': 'task', 'task_name': 'openai_speech2text'}, 'completed_openai_speech2text': {'order': 1, 'extra_params': {}, 'component_type': 'signal', 'task_name': None}, 'created_data_text': {'order': 2, 'extra_params': {}, 'component_type': 'signal', 'task_name': None}, 'completed_openai_gpt_35': {'order': 3, 'extra_params': {'sample_ratio': 10, 'prompt_template': '{text}'}, 'component_type': 'task', 'task_name': 'openai_gpt_35'}, 'completed_openai_text2speech': {'order': 4, 'extra_params': {}, 'component_type': 'task', 'task_name': 'openai_text2speech'}}
module-attribute
Cluster for the end-to-end conversation pipeline using the GPT-3.5 model (also used for GPT-3.5 with RAG).
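For illustration only: each cluster maps a component name to its order, component type ('task' or 'signal'), task name, and extra parameters. The sketch below simply walks that structure in execution order; the printed run/wait behaviour is a hypothetical stand-in for the project's own orchestrator, not an API of this module.

    # Sketch: iterate a cluster definition in execution order.
    # Assumes the cluster constants documented on this page are in scope.
    def walk_cluster(cluster: dict) -> None:
        for name, spec in sorted(cluster.items(), key=lambda kv: kv[1]["order"]):
            if spec["component_type"] == "task":
                # a concrete task such as openai_speech2text or openai_gpt_35
                print(f"run task {spec['task_name']} with {spec['extra_params']}")
            else:
                # a signal marking a state transition between tasks
                print(f"wait for signal {name}")

    walk_cluster(CLUSTER_GPT_35_ETE_CONVERSATION)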
CLUSTER_GPT_4O_TEXT_ETE_CONVERSATION = {'openai_speech2text': {'order': 0, 'extra_params': {}, 'component_type': 'task', 'task_name': 'openai_speech2text'}, 'completed_openai_speech2text': {'order': 1, 'extra_params': {}, 'component_type': 'signal', 'task_name': None}, 'created_data_text': {'order': 2, 'extra_params': {}, 'component_type': 'signal', 'task_name': None}, 'completed_openai_gpt_4o_text_only': {'order': 2, 'extra_params': {'sample_ratio': 10, 'prompt_template': '\n You are a robot, and you are talking to a human.\n\n Your task is to generate a response to the human based on the text\n\n You response will be directly send to end user.\n\n The text is: {text}\n '}, 'component_type': 'task', 'task_name': 'openai_gpt_4o_text_only'}, 'completed_openai_text2speech': {'order': 3, 'extra_params': {}, 'component_type': 'task', 'task_name': 'openai_text2speech'}}
module-attribute
Cluster for the end-to-end conversation pipeline using the GPT-4o model (text input only). Ideally this would take audio and video in and return audio out, but the GPT-4o audio API is not yet available, so the pipeline works around it by running speech-to-text first and then calling GPT-4o on the transcribed text.
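As a hedged illustration of how the prompt_template in extra_params is presumably used: the {text} placeholder can be filled with the speech-to-text output via str.format. The exact call site inside the orchestrator is an assumption here.

    # Assumption: the orchestrator substitutes the transcribed text into
    # the prompt_template before calling GPT-4o.
    params = CLUSTER_GPT_4O_TEXT_ETE_CONVERSATION["completed_openai_gpt_4o_text_only"]["extra_params"]
    prompt = params["prompt_template"].format(text="Hello robot, how are you?")
    print(prompt)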
CLUSTER_HF_ETE_CONVERSATION = {'speech2text': {'order': 0, 'extra_params': {}, 'component_type': 'task', 'task_name': 'speech2text'}, 'completed_speech2text': {'order': 1, 'extra_params': {}, 'component_type': 'signal', 'task_name': 'None'}, 'created_data_text': {'order': 2, 'extra_params': {}, 'component_type': 'signal', 'task_name': None}, 'completed_emotion_detection': {'order': 3, 'extra_params': {}, 'component_type': 'task', 'task_name': 'emotion_detection'}, 'completed_hf_llm': {'order': 4, 'extra_params': {'hf_model_name': 'Qwen/Qwen2-7B-Instruct'}, 'component_type': 'task', 'task_name': 'hf_llm'}, 'completed_text2speech': {'order': 5, 'extra_params': {}, 'component_type': 'task', 'task_name': 'text2speech'}}
module-attribute
This is the pipeline using a Hugging Face LLM (hf_llm, here Qwen/Qwen2-7B-Instruct) for the end-to-end conversation, with an emotion detection step between speech-to-text and the LLM.
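To experiment with a different Hugging Face model, one option (a sketch, not a documented workflow) is to copy the cluster and override the hf_llm task's extra_params; the model id below is only an example and is not shipped with this project.

    import copy

    # Hypothetical example: swap the Hugging Face model used by the hf_llm task.
    my_cluster = copy.deepcopy(CLUSTER_HF_ETE_CONVERSATION)
    my_cluster["completed_hf_llm"]["extra_params"]["hf_model_name"] = "mistralai/Mistral-7B-Instruct-v0.2"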
CLUSTER_Q_ETE_CONVERSATION = {'speech2text': {'order': 0, 'extra_params': {}, 'component_type': 'task', 'task_name': 'speech2text'}, 'completed_speech2text': {'order': 1, 'extra_params': {}, 'component_type': 'signal', 'task_name': None}, 'created_data_text': {'order': 2, 'extra_params': {}, 'component_type': 'signal', 'task_name': None}, 'completed_emotion_detection': {'order': 3, 'extra_params': {}, 'component_type': 'task', 'task_name': 'emotion_detection'}, 'completed_quantization_llm': {'order': 4, 'extra_params': {'llm_model_name': 'SOLAR-10'}, 'component_type': 'task', 'task_name': 'quantization_llm'}, 'completed_text2speech': {'order': 5, 'extra_params': {}, 'component_type': 'task', 'task_name': 'text2speech'}}
module-attribute
This is the pipeline using the quantized LLM model (quantization_llm) for the end-to-end conversation.
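If several of these clusters need to be selectable at runtime, a simple registry keyed by name is one way to do it; the key names and the CLUSTERS dict below are hypothetical, not part of this module.

    # Hypothetical registry so a config value or CLI flag can pick a cluster.
    CLUSTERS = {
        "gpt-3.5": CLUSTER_GPT_35_ETE_CONVERSATION,
        "gpt-4o-text": CLUSTER_GPT_4O_TEXT_ETE_CONVERSATION,
        "hf": CLUSTER_HF_ETE_CONVERSATION,
        "quantization": CLUSTER_Q_ETE_CONVERSATION,
        "quantization-no-emotion": CLUSTER_Q_NO_EMOTION_ETE_CONVERSATION,
    }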
CLUSTER_Q_NO_EMOTION_ETE_CONVERSATION = {'speech2text': {'order': 0, 'extra_params': {}, 'component_type': 'task', 'task_name': 'speech2text'}, 'completed_speech2text': {'order': 1, 'extra_params': {}, 'component_type': 'signal', 'task_name': None}, 'created_data_text': {'order': 2, 'extra_params': {}, 'component_type': 'signal', 'task_name': None}, 'completed_quantization_llm': {'order': 4, 'extra_params': {'llm_model_name': 'SOLAR-10'}, 'component_type': 'task', 'task_name': 'quantization_llm'}, 'completed_text2speech': {'order': 5, 'extra_params': {}, 'component_type': 'task', 'task_name': 'text2speech'}}
module-attribute
Same as the quantization LLM pipeline, but with the emotion detection model removed.
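Based on the values shown above, this cluster is exactly the quantization cluster with the emotion detection entry removed (the remaining orders keep their original values, 0, 1, 2, 4, 5), so it could equivalently be derived like this:

    import copy

    # Derivation check based on the dict values documented on this page.
    derived = copy.deepcopy(CLUSTER_Q_ETE_CONVERSATION)
    del derived["completed_emotion_detection"]
    assert derived == CLUSTER_Q_NO_EMOTION_ETE_CONVERSATION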
logger = get_logger(__name__)
module-attribute
Module-level logger for this module.
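Assuming get_logger returns a standard logging.Logger, the usual logging calls apply, for example:

    # Example only: log which cluster a run is using.
    logger.info("Using cluster: %s", "CLUSTER_HF_ETE_CONVERSATION")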