Historic OpenAI
Description: This module enables integration with Azure OpenAI for intelligent conversation processing.
Functionality: It retrieves the conversation history while preserving its context, and lets the user define how the history is structured to fit their needs.
Technical Prerequisites
- An Azure account with access to the Azure OpenAI service.
- A configured and active Azure OpenAI resource.
- API key and endpoint obtained from the Azure portal (Keys and Endpoint).
Implementation
Name: A String input field to specify the name that will identify the flow extension module.
Azure OpenAI endpoint: A String input field that must contain the URL of the Azure OpenAI service in the format: https://[resource-name].openai.azure.com/.
Azure OpenAI key: A Password input field that must store the authentication key (API Key) required to access the service.
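As a quick sanity check before saving the configuration, the expected endpoint format can be validated with a short sketch (the regex and function name below are illustrative, not part of the module):

```python
import re

# Matches the expected format: https://[resource-name].openai.azure.com/
ENDPOINT_PATTERN = re.compile(r"^https://[a-z0-9-]+\.openai\.azure\.com/?$")

def is_valid_azure_openai_endpoint(url: str) -> bool:
    """Return True if the URL looks like an Azure OpenAI endpoint."""
    return bool(ENDPOINT_PATTERN.match(url.strip().lower()))

print(is_valid_azure_openai_endpoint("https://my-resource.openai.azure.com/"))  # True
print(is_valid_azure_openai_endpoint("https://example.com/"))                   # False
```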
Steps to obtain the Azure OpenAI Endpoint
From the Azure portal:
- Sign in to the Azure Portal.
- In the search bar, type Azure OpenAI and select the service.
- Click on the Azure OpenAI resource you created.
- In the left-side menu, select Keys and Endpoint.
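Once the endpoint is obtained, a chat-completions request URL for the Azure OpenAI REST API is typically assembled as sketched below. The deployment name and api-version are illustrative placeholders; check the values configured on your own resource (the API key itself is sent in the `api-key` request header, not in the URL):

```python
def build_chat_completions_url(endpoint: str, deployment: str, api_version: str) -> str:
    """Assemble an Azure OpenAI chat-completions REST URL.

    The path shape follows the Azure OpenAI REST API; the deployment
    name and api-version used below are illustrative placeholders.
    """
    return (
        f"{endpoint.rstrip('/')}/openai/deployments/{deployment}"
        f"/chat/completions?api-version={api_version}"
    )

url = build_chat_completions_url(
    "https://my-resource.openai.azure.com/",  # hypothetical resource endpoint
    "gpt-4",                                  # hypothetical deployment name
    "2024-02-01",                             # example api-version
)
print(url)
```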
Conversation Evaluation Model: A String input field to specify the model used for conversation evaluation, depending on the type of analysis required.
| Need | Recommended Model |
| --- | --- |
| Evaluate conversation quality | GPT-4 / GPT-3.5-Turbo |
| Analyze emotions and sentiment | Text Analytics (Sentiment Analysis) / GPT-4 |
| Validate intent detection | CLU / GPT-4 with prompting |
| Identify incorrect or out-of-context responses | GPT-4 with prompting |
| Detect anomalies in interactions | Anomaly Detector |
Temperature. Value between 0 and 1: A Number input field to set the value that controls the randomness of the model’s responses.
- Low values (close to 0): More deterministic and predictable responses.
- High values (close to 1): More diverse and creative responses.
TOP_P. Value between 0 and 1: A Number input field to set the value that determines the percentage of the most probable responses to consider, filtering out less relevant options.
- TOP_P = 1: Considers the entire range of possible words.
- TOP_P = 0.5: Considers only words with a 50% cumulative probability.
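The interplay between Temperature and TOP_P can be illustrated with a small, self-contained sketch of how scores become a sampling pool. This is purely illustrative: the real model operates on token logits, not whole words.

```python
import math

def sampling_pool(word_scores, temperature=1.0, top_p=1.0):
    """Return the words kept for sampling after applying
    temperature scaling and nucleus (top_p) filtering."""
    # Temperature rescales the scores: low values sharpen the
    # distribution (more deterministic), high values flatten it (more diverse).
    t = max(temperature, 1e-6)  # avoid division by zero near temperature 0
    exps = {w: math.exp(s / t) for w, s in word_scores.items()}
    total = sum(exps.values())
    probs = {w: e / total for w, e in exps.items()}

    # top_p keeps only the most probable words whose cumulative
    # probability reaches the threshold.
    pool, cumulative = [], 0.0
    for w, p in sorted(probs.items(), key=lambda kv: kv[1], reverse=True):
        pool.append(w)
        cumulative += p
        if cumulative >= top_p:
            break
    return pool

scores = {"yes": 3.0, "maybe": 1.0, "no": 0.5}
print(sampling_pool(scores, top_p=1.0))  # all words remain candidates
print(sampling_pool(scores, top_p=0.5))  # only the top word(s) remain
```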
Max Output Token Count: A Number input field to define the maximum number of tokens (words or word fragments) the model can generate in a response.
- A higher value allows for longer responses.
- Should be adjusted according to the token limit allowed in the request.
Frequency Penalty. Value between 0 and 1: A Number input field to set a value that penalizes word or phrase repetition in the response.
- Values close to 0: Allow more repetition.
- Higher values (close to 1): Reduce the repetition of previously used words.
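A rough sketch of the idea behind the frequency penalty (illustrative only; in the real model the penalty is applied to token logits, and the function below is hypothetical):

```python
def penalized_scores(word_scores, generated_words, frequency_penalty=0.0):
    """Lower each word's score proportionally to how often it has
    already appeared in the generated text."""
    counts = {}
    for w in generated_words:
        counts[w] = counts.get(w, 0) + 1
    return {w: s - frequency_penalty * counts.get(w, 0)
            for w, s in word_scores.items()}

scores = {"great": 2.0, "good": 1.8}
history = ["great", "great"]  # "great" was already used twice
print(penalized_scores(scores, history, frequency_penalty=0.0))  # unchanged
print(penalized_scores(scores, history, frequency_penalty=0.5))  # "great" drops to 1.0
```

With a penalty of 0.5, the twice-used word "great" loses 1.0 point and "good" becomes the more likely choice, reducing repetition.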
Entity to Store the Result: A String input field to specify the name of the entity or database where the interaction result will be stored.
Intent in case of error: A Dropdown input field to select the intent that will be executed if this action fails. If the maximum number of failed attempts is reached, the action flow is interrupted and the error description is stored in the historicOpenAi_Error entity, which is passed to the selected intent.
Expression that defines the prompt: Text input field that defines the structure or template of the prompt to be sent to the Azure OpenAI model. A prompt is an instruction or a set of instructions that guide the model in generating responses.
Format and Examples of Use: The content of this field can be a fixed or dynamic expression, depending on the need.
Example 1: Fixed (static) prompt
If the prompt is always the same, it can be defined as a fixed text string:
'Respond clearly and concisely to the following user question:'
Example 2: Prompt with dynamic variables
If the prompt needs to include dynamic data, such as the user's message or the conversation context, it can be defined using substitution variables:
'User: {user_input}. Respond clearly and helpfully considering the previous context: {conversation_history}.'
In this case, {user_input} will be replaced with the user's actual input, and {conversation_history} with the previous conversation history.
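The substitution described above can be reproduced with plain string formatting. The variable names match Example 2; the helper function itself is illustrative, not part of the module:

```python
def render_prompt(template: str, **variables) -> str:
    """Replace {placeholders} in the prompt template with actual values."""
    return template.format(**variables)

template = ("User: {user_input}. Respond clearly and helpfully "
            "considering the previous context: {conversation_history}.")

prompt = render_prompt(
    template,
    user_input="How do I reset my password?",
    conversation_history="The user already verified their email.",
)
print(prompt)
```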