AI Functions

Rose Heart edited this page Feb 24, 2026 · 2 revisions

Introduction

Imagine having a helpful assistant that can remember your past conversations and adapt to your needs over time. This library is designed to make that possible by providing a simple way to build applications that interact with artificial intelligence. Instead of dealing with complex technical details, it handles the behind-the-scenes work so you can focus on creating meaningful experiences. Whether you're building a chatbot for customer support or a personal helper for daily tasks, this tool gives your application the ability to hold natural, flowing conversations that feel genuinely responsive.

The library works quietly in the background to manage how your AI remembers information. It automatically keeps track of what's been discussed in previous interactions, ensuring the AI doesn't forget important context while staying within practical limits. This means your conversations can build upon themselves naturally, just like human dialogue, without overwhelming the system. You don't need to worry about adjusting settings or handling technical constraints—the library takes care of balancing memory and performance seamlessly as your conversations grow.

What makes this approach special is how it connects to many different AI services through a single, straightforward interface. You can switch between various AI providers without rewriting your entire application, giving you flexibility as technology evolves. The library also includes thoughtful features like timing how long responses take, which helps maintain smooth interactions. Ultimately, it empowers creators to build intelligent applications that feel personal and attentive, turning complex AI capabilities into something accessible and user-friendly for everyone.


class Agent:

Non-technical

This code creates a smart assistant that can talk with different artificial intelligence systems, remembering past conversations to make interactions feel more natural and personal. It carefully manages how much information is shared with the AI at once to stay within technical limits, automatically saves conversation history so nothing gets lost between sessions, and can adjust its personality based on what the user prefers. The assistant handles all the complicated details behind the scenes like connecting to various AI services, measuring text length appropriately for each system, and retrying requests if connections are slow, so users can simply focus on having meaningful conversations without worrying about technical hurdles.

Technical

  • engine is a string specifying which AI service provider to use (like OpenAI or Anthropic), determining which platform handles the conversation and affecting compatibility with other settings.
  • model is a string identifying the specific AI model within the chosen service, influencing the quality and style of responses as different models have unique capabilities.
  • maxtokens is an integer representing the maximum text length the AI can process at once, crucial for preventing errors when handling long conversations.
  • encoding is an optional string that defines how text gets broken into units for processing, automatically set for some services but customizable for others.
  • persona is an optional string that loads a predefined personality or role for the AI, shaping how it responds to create consistent character interactions.
  • user is an optional string used to identify whose conversation history gets saved, allowing multiple users to have separate memory records.
  • userhome is an optional string specifying a custom folder location for storing conversation data instead of the default directory.
  • maxmem is an integer (default 100) setting how many previous conversation turns to remember, balancing context retention with memory usage.
  • freqpenalty is a float (default 0.73) controlling how often the AI repeats itself, with higher values reducing repetition in responses.
  • temperature is a float (default 0.31) adjusting response creativity, where lower values produce more predictable answers and higher values encourage novelty.
  • seed is an integer (default 0) that makes responses reproducible when set to a specific value, useful for testing and consistency.
  • timeout is an integer (default 300) defining how many seconds to wait for an AI response before trying again, preventing hangs during slow connections.
  • reset is a boolean (default False) that clears previous conversation history when starting, useful for fresh interactions without old context.
  • save is a boolean (default True) determining whether to store conversation history between sessions, with False creating temporary interactions.
  • timing is a boolean (default True) enabling performance tracking to measure how long AI responses take, helpful for monitoring service quality.
  • isolation is a boolean (default False) running the AI without accessing past conversations, creating self-contained single-question interactions.
  • retry is an integer (default 7) setting how many times to attempt a failed request before giving up, improving reliability with unstable connections.
  • retrytimeout is an integer (default 37) specifying seconds to wait between retry attempts, preventing overwhelming services with rapid retries.
  • maxrespsize is an integer (default 0) limiting acceptable response length, with non-zero values rejecting unusually large outputs that might indicate errors.
  • maxrespretry is an integer (default 7) determining retries specifically for oversized responses, working with the size limit parameter.
  • maxrespretrytimeout is an integer (default 37) setting delay between size-related retry attempts, similar to regular retry timing but for response length issues.
  • UseOpenAI is a boolean (default False) enabling OpenAI-compatible libraries for certain services, potentially improving compatibility with some AI providers.

Returns

Returns None.
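The parameter surface above can be summarized with a small stand-in class. This is only a sketch of the documented defaults, not the real `Agent` implementation; the engine and model strings ("openai", "gpt-4o") are placeholder values.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical stand-in mirroring the documented constructor parameters and
# their defaults; the real Agent also performs setup work on construction.
@dataclass
class AgentConfig:
    engine: str
    model: str
    maxtokens: int
    encoding: Optional[str] = None
    persona: Optional[str] = None
    user: Optional[str] = None
    userhome: Optional[str] = None
    maxmem: int = 100
    freqpenalty: float = 0.73
    temperature: float = 0.31
    seed: int = 0
    timeout: int = 300
    reset: bool = False
    save: bool = True
    timing: bool = True
    isolation: bool = False
    retry: int = 7
    retrytimeout: int = 37
    maxrespsize: int = 0
    maxrespretry: int = 7
    maxrespretrytimeout: int = 37
    UseOpenAI: bool = False

# Only the three required values need to be supplied; everything else
# falls back to the documented defaults.
cfg = AgentConfig(engine="openai", model="gpt-4o", maxtokens=8192)
```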


def SetEngine(self,engine):

Non-technical

This function lets you switch between different intelligent assistants that can help with your tasks. When you choose a new assistant, the application automatically adjusts how it processes your messages to work optimally with that specific helper. For example, if you select a particular advanced assistant, it will configure the message handling method to ensure the best possible conversation quality. This makes it easy to work with various intelligent helpers without needing to manually tweak settings each time you change your preference.

Technical

  • engine - A string representing the AI engine to use for processing requests. This parameter determines which backend service will handle AI operations and specifically affects the text encoding configuration, with special handling for the OpenAI engine to select appropriate tokenization settings based on the current model.

Returns

Returns None.


def SetModel(self,model):

Non-technical

This function lets you change which specific artificial intelligence brain your assistant uses to understand and respond to conversations, allowing you to switch between different specialized thinking styles that each have their own unique knowledge and problem-solving approaches for various types of discussions and tasks.

Technical

  • model: A string representing the name of the AI processing system to activate, directly determining the knowledge base and reasoning capabilities used during interactions.

Returns

Returns None.


def SetMaxTokens(self,maxtokens):

Non-technical

This function adjusts how much information the AI assistant can process at one time by setting a limit on the amount of text it handles during conversations. When you use this feature, you're telling the system how much content it should consider when forming responses, which helps balance between getting detailed answers and keeping interactions efficient. By modifying this setting, you can control whether the AI focuses on concise exchanges or takes more context into account when responding to your questions.

Technical

  • maxtokens: An integer value representing the maximum number of text units the AI model can process in a single interaction. This parameter directly determines how much conversation history and context the system retains when generating responses, with higher values allowing for more comprehensive but potentially slower interactions.

Returns

Returns None.


def SetEncoding(self,encoding):

Non-technical

This function lets you determine how the system should interpret and process written language by selecting a specific text format standard. When you choose a particular way to represent text characters, the system updates its internal settings to properly handle all special symbols, letters, and language structures according to that standard, ensuring clear and accurate communication regardless of the language or special characters being used.

Technical

  • encoding: Represents the text format standard that determines how characters are converted to numerical values for processing. Expected as a string value (like "utf-8" or "cl100k_base"). Setting this parameter directly controls how the system understands and generates text, affecting compatibility with different AI models and ensuring proper handling of special characters across various languages.

Returns

Returns None.


def FreqPenalty(self,freqpenalty):

Non-technical

This function adjusts how the AI handles repetitive language during conversations by setting a level of discouragement for repeated words or phrases. When you interact with the AI, it sometimes tends to reuse certain expressions too frequently, which can make responses feel unnatural or redundant. This feature lets you control that tendency by establishing how strongly the AI should avoid recycling previous content - higher settings encourage more creative word choices while lower settings permit more repetition. It's like giving the AI a gentle guidance to maintain fresh and varied responses without disrupting the natural flow of conversation.

Technical

  • freqpenalty: A numerical value (float) representing the strength of discouragement applied to repeated words or phrases in AI responses. Higher values increase the penalty for repetition, encouraging more diverse language patterns, while lower values allow for more repetition in generated content.

Returns

Returns None.


def Temperature(self,temperature):

Non-technical

This function adjusts how creatively the artificial intelligence responds during conversations by setting a control that influences the balance between predictable answers and imaginative exploration. When you increase this setting, the AI becomes more willing to take conversational risks and generate unexpected ideas, while lowering it makes responses more focused and reliable. It works like a tuning knob that helps shape the personality of the interaction, allowing you to customize whether the AI should stick closely to established patterns or feel comfortable venturing into novel territory based on what best serves the current discussion needs.

Technical

  • temperature: Represents the randomness factor for AI response generation; expected data type is float (typically between 0.0 and 1.0); higher values increase response creativity while lower values make responses more deterministic and focused.

Returns

Returns None.


def Timeout(self,timeout):

Non-technical

This function lets you control how long the system will wait for a response from the artificial intelligence service before deciding it's taking too long and moving on. This helps prevent the program from getting stuck when connections are slow or services aren't responding promptly, letting you balance between giving enough time for legitimate responses and avoiding indefinite waiting during technical difficulties.

Technical

  • timeout: Represents the maximum number of seconds to wait for a response from the AI service before timing out. Expected to be an integer value. Increasing this value allows more time for slower connections or busy services, while decreasing it makes the system more responsive to connection issues but might cause premature timeouts with legitimate but slow responses.

Returns

Returns None.


def SetStorage(self,user=None,userhome=None):

Non-technical

This function determines where to keep your conversation history and timing information when interacting with the AI system. It intelligently selects a storage location based on whether you're logged in as a specific user or using the system anonymously, ensuring your data is kept in the appropriate place whether you're using the system through a personal account or as a general user. The system automatically creates the necessary folders to store this information securely, adapting to different computing environments while maintaining your interaction history across sessions.

Technical

  • user: Represents the username for whom storage should be configured; expected to be a string. When provided, it directs the system to store memory and timing files in a user-specific location rather than a default shared location.
  • userhome: Specifies a custom home directory path where storage should be placed; expected to be a string. When provided, it completely overrides the standard storage location logic, directing all files to be stored within this specified directory instead of the default locations.

Returns

Returns None.
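The selection logic described for SetStorage can be sketched as a simple precedence rule: an explicit `userhome` overrides everything, a `user` gets a per-user location, and otherwise a shared default is used. The directory names below are placeholders, not the library's actual paths.

```python
import os

# Minimal sketch of the documented precedence: userhome overrides all,
# then per-user storage, then a shared default location.
def choose_storage(user=None, userhome=None, default_dir="shared_memory"):
    if userhome is not None:
        return userhome                     # explicit override wins outright
    if user is not None:
        return os.path.join("users", user)  # user-specific location
    return default_dir                      # anonymous / shared fallback
```

The real function also creates the chosen directory if it does not already exist.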


def Reset(self):

Non-technical

This function gives you a completely fresh start with the AI assistant by clearing away all previous conversations and memories, as if you were meeting for the first time, so the assistant won't remember anything you've discussed before and will respond without any prior context or history influencing its answers.

Technical

  • No arguments required.

Returns

Returns None.


def Get(self):

Non-technical

This feature helps the system recall past conversations by gathering just the essential parts of what was said and who said it, creating a clean record of the dialogue history that can be easily understood and used for continuing the conversation, while leaving out behind-the-scenes technical details that wouldn't be helpful for the actual chat experience.

Technical

No arguments required.

Returns

Returns a list of dictionaries, where each dictionary contains exactly two string-valued keys: 'role' (indicating participant type) and 'content' (holding the message text).
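The filtering Get performs can be illustrated directly: internal memory entries may carry extra processing metadata, but only the two conversational fields are returned. The metadata field names here (`engine`, `tokens`) are illustrative assumptions.

```python
# Sketch of Get's filtering: keep only 'role' and 'content', dropping any
# behind-the-scenes metadata stored alongside each message.
def get_messages(memory):
    return [{"role": m["role"], "content": m["content"]} for m in memory]

memory = [
    {"role": "user", "content": "Hi", "engine": "openai", "tokens": 3},
    {"role": "assistant", "content": "Hello!", "engine": "openai", "tokens": 4},
]
history = get_messages(memory)
```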


def Put(self,role,data):

Non-technical

This function helps the AI remember conversations by storing each message along with who said it and important details about how the message was processed. When you talk to the AI or it responds to you, this function tucks away those exchanges in a special memory storage that keeps track of not just what was said, but also which AI system handled it and how much processing power was needed. This memory system allows the AI to recall previous parts of your conversation when needed, making interactions feel more natural and continuous rather than starting from scratch each time you speak.

Technical

  • role: A string representing who is speaking in the conversation, such as the user or the AI assistant; this determines how the message will be categorized in memory and affects which parts of the conversation history are preserved
  • data: A string containing the actual message content being stored; this directly impacts the memory size and token count calculations that determine how much conversation history can be retained

Returns

Returns None.


def UpdateLast(self,role,data):

Non-technical

This function allows the system to modify the most recent entry in the conversation memory, ensuring that the latest interaction can be adjusted or corrected as needed. When the AI processes a conversation, it maintains a running record of all exchanges, and this feature provides the ability to update specific details of the newest message without affecting the rest of the conversation history. It helps maintain accuracy in the dialogue flow by letting the system refine or replace information in the latest response before proceeding, which is particularly useful when fine-tuning responses or incorporating additional context that emerged after the initial message was recorded.

Technical

  • role: A string representing the specific field within the most recent memory entry to be modified (e.g., "content" or "result"). This determines which aspect of the latest interaction will be updated with new information.
  • data: The new value to assign to the specified role field, typically a string containing updated message content or metadata. This replaces the existing value in the designated field of the most recent memory entry.

Returns

Returns None.
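The in-place update UpdateLast performs can be sketched in a few lines: overwrite one field of the newest memory entry and leave everything earlier untouched. This is an assumption about the mechanism, not the library's exact code.

```python
# Sketch of UpdateLast: modify one field of the most recent entry in place.
def update_last(memory, field, value):
    if memory:                      # nothing to do on an empty history
        memory[-1][field] = value

log = [
    {"role": "user", "content": "Hi"},
    {"role": "assistant", "content": "Helo"},
]
update_last(log, "content", "Hello")  # correct the newest message only
```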


def Read(self):

Non-technical

This function loads previous conversations from storage so the AI assistant can remember past interactions when continuing a discussion. It carefully checks each saved message to ensure compatibility with the current settings, automatically filling in any missing technical details about how the messages were originally processed. The function skips any corrupted or unreadable entries while preserving the meaningful parts of the conversation history, allowing the assistant to pick up where it left off without being confused by incomplete or incompatible data.

Technical

  • No arguments required.

Returns

Returns None.
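The tolerant loading behaviour described above can be sketched as: parse each stored entry, fill in missing metadata with current defaults, and silently skip anything that fails to parse. The JSON-lines storage format and the `engine` metadata field are assumptions for illustration.

```python
import json

# Sketch of Read's tolerance: skip corrupted entries, backfill missing
# metadata, and keep everything that parses cleanly.
def read_history(lines, default_engine="openai"):
    loaded = []
    for line in lines:
        try:
            entry = json.loads(line)
        except json.JSONDecodeError:
            continue  # corrupted entry: skip rather than abort the load
        entry.setdefault("engine", default_engine)  # backfill missing metadata
        loaded.append(entry)
    return loaded

stored = [
    '{"role": "user", "content": "Hi"}',
    "{broken",  # simulated corruption
    '{"role": "assistant", "content": "Hello", "engine": "mistral"}',
]
history = read_history(stored)
```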


def Write(self):

Non-technical

This function saves the conversation history between you and the AI assistant to a file on your device, making sure to keep only the most recent exchanges when there's a limit on how much history can be stored. It carefully excludes internal system messages that help the AI operate but aren't part of your actual conversation, ensuring the saved record contains only the meaningful dialogue between you and the assistant. By managing the size of this history, it helps maintain smooth performance while preserving the context needed for the AI to provide relevant and coherent responses during your interactions.

Technical

  • No arguments required.

Returns

Returns None. The function performs file operations to save conversation history but does not return any value to the caller.
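The two rules Write applies (exclude internal system messages, keep only the newest entries up to the memory limit) can be sketched as a serialization step. The JSON-lines format is an assumption; the real file layout may differ.

```python
import json

# Sketch of Write's filtering: drop system messages, then keep only the
# newest maxmem entries before serializing.
def serialize_history(memory, maxmem=100):
    visible = [m for m in memory if m["role"] != "system"]
    return [json.dumps(m) for m in visible[-maxmem:]]

lines = serialize_history(
    [
        {"role": "system", "content": "persona prompt"},  # excluded from disk
        {"role": "user", "content": "Hi"},
    ],
    maxmem=1,
)
```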


def GetTokenCount(self,data):

Non-technical

This function determines how many tokens a piece of text amounts to when processed by different AI systems. It adapts its counting method based on which specific AI technology is being used, as different systems break down text into smaller units in unique ways. For some AI services, it uses precise counting tools provided by those services, while for others it applies a general estimation method. The result helps ensure that text stays within appropriate length limits required by various AI platforms, which is important for both functionality and cost management since many AI services charge based on these units of text.

Technical

  • data: Represents the text content to be analyzed, expected as a string. This input directly determines the token count result, with longer text generally producing higher values. The function processes this text differently depending on the AI engine in use.

Returns

Returns an integer representing the estimated or exact number of tokens in the provided text. The value is calculated using engine-specific methods when available, or through a general estimation formula for unsupported engines.
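For engines without a dedicated tokenizer, a common fallback heuristic is roughly four characters per token. The exact formula the library uses is not documented here, so this is only an illustrative estimate.

```python
# Rough fallback estimate (~4 characters per token) for engines without a
# precise tokenizer; the library's actual formula may differ.
def estimate_tokens(data: str) -> int:
    return max(1, len(data) // 4)
```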


def MaintainTokenLimit(self):

Non-technical

This function helps keep conversations with the AI within a reasonable length by automatically trimming older parts of the discussion when it gets too long. It carefully removes earlier exchanges between you and the assistant while trying to preserve the most recent and relevant parts of your conversation, ensuring that the AI can still understand the context without being overwhelmed by too much information at once. The system intelligently decides which parts to keep and which to remove based on conversation patterns, maintaining a smooth and responsive interaction experience.

Technical

  • No arguments required.

Returns

Returns a tuple containing a list of message objects (each with 'role' and 'content' fields) that fit within the token limit and an integer representing the current token count. If unable to reduce tokens below the limit, returns a tuple with None as the first element and the current token count as the second element.
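The trimming strategy and the documented return shape can be sketched together: drop the oldest messages until the running token count fits, and return `(None, count)` if the limit still cannot be met. The per-message counting function is passed in here for illustration.

```python
# Sketch of MaintainTokenLimit: drop oldest messages until the total token
# count fits, mirroring the documented (messages, count) / (None, count) return.
def maintain_limit(messages, count, maxtokens):
    msgs = list(messages)
    total = sum(count(m["content"]) for m in msgs)
    while msgs and total > maxtokens:
        dropped = msgs.pop(0)               # oldest exchange goes first
        total -= count(dropped["content"])
    if total > maxtokens:
        return None, total                  # could not get under the limit
    return msgs, total

history = [
    {"role": "user", "content": "aaaa"},
    {"role": "assistant", "content": "bb"},
    {"role": "user", "content": "cc"},
]
kept, total = maintain_limit(history, len, 5)  # len() as a toy token counter
```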


def JumpTable(self,messages,engine,model,freqpenalty,temperature,timeout,seed=0,mt=2048):

Non-technical

This function serves as an intelligent switchboard that automatically connects your conversation to the right artificial intelligence service based on your preferences, ensuring your questions and previous messages reach the appropriate system whether you're using OpenAI, Google's AI, Anthropic, or any of the other supported platforms, handling all the behind-the-scenes communication so you receive a thoughtful response without needing to understand the complex technical processes happening between different AI services.

Technical

  • messages: A list of conversation exchanges expected to be dictionaries with role and content, provides the context and history for the AI to generate an appropriate response
  • engine: A string specifying which AI service provider to use such as "openai" or "anthropic", determines which API endpoint will process the request
  • model: A string identifying the specific AI model within the chosen service, affects the quality, speed, and capabilities of the generated response
  • freqpenalty: A floating-point number between 0 and 2 that controls repetition in responses, higher values make the AI avoid repeating phrases more aggressively
  • temperature: A floating-point number between 0 and 1 that controls response creativity, lower values produce more predictable and focused outputs while higher values increase randomness
  • timeout: An integer representing maximum seconds to wait for a response before aborting, measured in seconds and affects how long the system will attempt to get a reply
  • seed: An optional integer that makes responses reproducible when set to a specific value, defaults to 0 which means responses may vary even with identical inputs
  • mt: An optional integer specifying maximum tokens for the response generation, defaults to 2048 which limits how much text the AI can produce in a single response

Returns

Returns None. The function modifies the object's internal state by setting self.response and self.completion attributes rather than returning values directly.
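The "intelligent switchboard" is essentially dispatch-by-engine-name: a table maps each supported engine string to its handler. The handler names and return values below are placeholders standing in for the real per-service request functions.

```python
# Sketch of engine dispatch: a jump table keyed by the engine string.
# Handlers here are placeholders for the real per-service request functions.
def call_openai(**kw):
    return "openai:" + kw["model"]

def call_anthropic(**kw):
    return "anthropic:" + kw["model"]

JUMP = {"openai": call_openai, "anthropic": call_anthropic}

def dispatch(engine, **kw):
    try:
        handler = JUMP[engine]
    except KeyError:
        raise ValueError(f"unsupported engine: {engine}")
    return handler(**kw)
```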


def Response(self,input):

Non-technical

This function is the main entry point for conversing with the various artificial intelligence systems in a natural, flowing way. It remembers previous exchanges to provide context-aware responses while automatically managing conversation length to stay within practical limits. When you share your thoughts, it prepares your message for the AI, patiently retries if connections are momentarily unavailable, and decides whether to preserve the conversation for future reference. Throughout this process, it balances remembering enough context to be helpful against overwhelming the system, ultimately delivering the AI's response back to you in a clear, usable format.

Technical

  • input: A string representing the user's message to be processed; this text gets incorporated into the conversation history and sent to the selected AI service to generate a meaningful reply.

Returns

Returns a string containing the AI-generated response to the user's input, or returns None if the system encounters communication errors, exceeds token limits, or fails to obtain a valid response after exhausting all retry attempts.
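The retry behaviour described above (attempt the request up to `retry` times, wait `retrytimeout` seconds between failures, and return None when every attempt fails) can be sketched as a small loop. The injectable `sleep` parameter is an addition here to make the sketch testable, not part of the library's interface.

```python
import time

# Sketch of the documented retry policy: up to `retry` attempts with a
# fixed delay between failures, giving up with None once exhausted.
def with_retries(request, retry=7, retrytimeout=37, sleep=time.sleep):
    for attempt in range(retry):
        try:
            return request()
        except ConnectionError:
            if attempt < retry - 1:
                sleep(retrytimeout)  # back off before the next attempt
    return None  # all attempts failed
```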


def GetOpenAI(self,apikey,messages,model,freqpenalty,temperature,timeout):

Non-technical

This function acts as a bridge between your application and an artificial intelligence service, allowing you to have meaningful conversations by sending your questions and previous discussion history to receive thoughtful responses. It carefully manages how the AI thinks about your request, controlling factors like creativity and repetition to ensure the answers feel natural and helpful while handling all the technical complexities behind the scenes so you can focus on the conversation itself.

Technical

  • apikey: A string containing the authentication key required to access the AI service, which verifies your identity and permissions with the provider.
  • messages: A list of conversation history items where each contains a role (such as user or assistant) and corresponding content, providing the context needed for the AI to understand and respond appropriately.
  • model: A string specifying which version of the AI technology to use, determining the capabilities, knowledge base, and performance characteristics of the responses.
  • freqpenalty: A floating-point number between -2.0 and 2.0 that influences how often the AI repeats itself, with higher values encouraging more varied vocabulary and sentence structures.
  • temperature: A floating-point number between 0.0 and 2.0 that controls the randomness of responses, where lower values produce more predictable, focused answers and higher values generate more creative, diverse outputs.
  • timeout: An integer representing the maximum number of seconds to wait for the AI service to respond before terminating the request to prevent indefinite waiting.

Returns

Returns a tuple containing two elements: a string with the AI's response text (with leading and trailing whitespace removed), and an object containing the complete response data from the AI service including metadata about the request processing and response details.


def GetMistral(self,apikey,messages,model,freqpenalty,temperature,timeout):

Non-technical

This function connects to Mistral's artificial intelligence service to generate thoughtful responses to your questions or prompts, handling all the behind-the-scenes communication so you receive a clean, well-formatted answer that considers your specific request along with adjustable settings that influence how creatively or precisely the AI responds, making it feel like you're having a natural conversation with an intelligent assistant who understands context and nuance.

Technical

  • apikey: A string containing the authentication key required to access Mistral's service, which must be valid for the request to be processed successfully.
  • messages: A list of structured conversation elements where each item specifies a role (such as user or assistant) and corresponding content, forming the complete context the AI uses to generate its response.
  • model: A string identifying which specific Mistral AI model to employ for generating the response, with different models offering varying capabilities and performance characteristics.
  • freqpenalty: A floating-point number between -2.0 and 2.0 that determines how strongly the AI avoids repeating phrases, where higher values produce more diverse wording.
  • temperature: A floating-point number between 0.0 and 2.0 controlling the randomness of the response, with lower values yielding more predictable, focused answers and higher values generating more creative, varied outputs.
  • timeout: An integer specifying the maximum number of seconds to wait for Mistral's service to respond before terminating the request attempt.

Returns

Returns a tuple containing two elements: a string with the AI's response content (with leading and trailing whitespace removed), and the complete response object from the Mistral API that includes additional metadata about the request processing and response details.


def GetxAI(self,apikey,messages,model,freqpenalty,temperature,timeout):

Non-technical

This function connects to a specialized artificial intelligence service to generate thoughtful responses in ongoing conversations, carefully considering the discussion history and adjusting its approach based on how creative or focused the responses should be, while handling any communication issues that might arise during the process to ensure users receive clear, well-formatted replies that continue the dialogue naturally.

Technical

  • apikey: A string containing the authentication key required to access the AI service; without a valid key, the function cannot establish a connection to the service.
  • messages: A list of message objects representing the conversation history, where each message has a role and content; this shapes the context and direction of the AI's response.
  • model: A string specifying which AI model to use for generating responses; different models offer varying capabilities, response quality, and processing characteristics.
  • freqpenalty: A floating-point number between -2.0 and 2.0 that controls repetition avoidance; higher values make the AI less likely to repeat phrases, increasing response diversity.
  • temperature: A floating-point number between 0.0 and 2.0 that influences response creativity; lower values produce more predictable, focused responses while higher values encourage more varied, imaginative outputs.
  • timeout: An integer representing the maximum number of seconds to wait for a response from the service before terminating the request.

Returns

Returns a tuple containing two elements: a string representing the AI's response text with leading and trailing whitespace removed, and an object containing the complete response data from the AI service including metadata about the interaction.


def GetGoogleAI(self,apikey,messages,model,freqpenalty,temperature,timeout,UseOpenAI=False):

Non-technical

This function acts as a bridge between your application and Google's artificial intelligence services, allowing you to send questions and conversation history to receive thoughtful responses. It intelligently adapts to different ways of communicating with Google's AI systems, either through a familiar interface used by other services or through Google's specialized connection method. The function carefully manages how the AI responds by adjusting creativity levels and ensuring appropriate boundaries are maintained, while also monitoring for any communication issues that might require retrying the request. It processes your conversation context to provide relevant, coherent answers that build upon previous interactions, making your AI experience smooth and natural.

Technical

  • apikey: A string containing the authentication key required to access Google's AI services; without a valid key, the function cannot establish a connection to the service.
  • messages: A list of conversation history items where each contains a role and content; this provides contextual information that shapes the AI's understanding and response.
  • model: A string specifying which Google AI model to utilize; different models offer varying capabilities, speed, and quality of responses.
  • freqpenalty: A floating-point number between 0 and 2 that controls repetition avoidance; higher values discourage the AI from repeating phrases while lower values allow more repetition.
  • temperature: A floating-point number between 0 and 1 that regulates response creativity; lower values produce more focused, predictable answers while higher values increase variability and creative exploration.
  • timeout: An integer representing the maximum seconds to wait for a response before terminating the request; prevents the application from hanging indefinitely during service delays.
  • UseOpenAI: A boolean determining the communication method; when true, uses an OpenAI-compatible interface for easier integration with existing code, and when false, uses Google's native connection approach.

Returns

Returns a tuple containing two elements: the first element is a string with the AI's response text, and the second element is an object containing the complete response data from the AI service; if an error occurs during the request, it returns (None, None).
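The `UseOpenAI` switch effectively selects between two client paths. A hedged sketch of that branch is below; the base URL is Google's published OpenAI-compatibility endpoint, but the helper and its return shape are assumptions for illustration, not the library's actual code.

```python
# Illustrative sketch of the UseOpenAI branch in GetGoogleAI.
# When True, requests go through Google's OpenAI-compatible endpoint
# (reusing the same chat-completions client path as other providers);
# when False, Google's native client library would be used instead.

def google_request_style(UseOpenAI):
    """Return which client path and base URL would be used."""
    if UseOpenAI:
        # OpenAI-compatible layer, pointed at Google's endpoint.
        return ("openai-compatible",
                "https://generativelanguage.googleapis.com/v1beta/openai/")
    # Native path: Google's own client, no OpenAI base URL needed.
    return ("native", None)

style, base_url = google_request_style(True)
```

This is why `UseOpenAI=True` allows "easier integration with existing code": the request formatting is shared with every other OpenAI-style provider.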


def GetOpenRouter(self,apikey,messages,model,freqpenalty,temperature,timeout):

Non-technical

This function connects to a service that provides access to various artificial intelligence models through a single interface. It takes your conversation history and specific settings to generate a thoughtful reply from the selected AI model. The system carefully manages the connection process, ensuring your request is properly formatted and delivered to the service, then handles the response to provide you with a clean, well-structured answer that continues your conversation naturally while monitoring the entire process to ensure reliability and proper completion.

Technical

  • apikey: A string containing the authentication key needed to access the OpenRouter service; without a valid key, the connection will be denied
  • messages: A list of conversation history items, each containing role and content information; this shapes the context and influences the AI's understanding of the conversation
  • model: A string specifying which AI model to use for generating responses; different models have varying capabilities and response styles
  • freqpenalty: A floating-point number between -2.0 and 2.0 that controls repetition in responses; higher values discourage repetitive content
  • temperature: A floating-point number between 0.0 and 2.0 that affects response creativity; lower values produce more predictable responses while higher values increase randomness
  • timeout: An integer representing the maximum time in seconds to wait for a response; prevents the system from hanging indefinitely if the service is slow or unavailable

Returns

Returns a tuple containing two elements: a string with the AI's response text (with leading/trailing whitespace removed) and the complete response object from the API that includes metadata about the completion process.
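Every `Get*` helper shares this return convention: the response text with surrounding whitespace stripped, paired with the raw completion object for callers that need metadata. A minimal sketch of that convention, with an illustrative helper name:

```python
# Sketch of the shared return convention: (cleaned text, raw object).
# The helper name is illustrative; each Get* function applies this
# shape to its own provider's response.

def to_result(completion_text, completion_obj):
    """Strip surrounding whitespace and keep the raw response alongside."""
    return completion_text.strip(), completion_obj

text, raw = to_result("  Hello there.\n", {"usage": {"total_tokens": 12}})
```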


def GetAnthropic(self,apikey,messages,model,freqpenalty,temperature,timeout):

Non-technical

This function helps you have conversations with an AI assistant by securely sending your questions and chat history to a specialized service that crafts thoughtful replies, using your preferred settings to shape how creative or focused the responses should be while ensuring the interaction completes within a reasonable timeframe.

Technical

  • apikey: A string containing the secret authentication key required to access the AI service, ensuring secure communication with the provider's servers.
  • messages: A list of dictionaries representing conversation history where each entry has role and content details, providing essential context for generating relevant responses.
  • model: A string specifying which version of the AI technology to utilize, directly influencing the quality, capabilities, and characteristics of the generated output.
  • freqpenalty: A floating-point number between 0 and 2 that controls repetition patterns in responses, with higher values making the AI less likely to repeat phrases or ideas.
  • temperature: A floating-point number between 0 and 1 that determines the balance between predictable accuracy and creative variation in responses, where lower values yield more consistent answers.
  • timeout: An integer representing the maximum number of seconds to wait for the AI service to deliver a response before automatically canceling the request.

Returns

Returns a tuple containing two elements: a string with the cleaned AI-generated response text and an object containing detailed metadata about the API interaction and response processing.
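One Anthropic-specific detail worth noting: the Messages API takes the system prompt as a separate top-level field rather than as a message with role "system". A hedged sketch of the conversion this function would need to perform on OpenAI-style history (illustrative helper, not the library's verbatim code):

```python
# Sketch: split OpenAI-style history into Anthropic's shape, where
# the system prompt is a separate top-level "system" field and the
# messages list holds only user/assistant turns.

def split_system(messages):
    """Extract system content and return it with the remaining turns."""
    system = " ".join(m["content"] for m in messages if m["role"] == "system")
    rest = [m for m in messages if m["role"] != "system"]
    return system, rest

system, chat = split_system([
    {"role": "system", "content": "Be terse."},
    {"role": "user", "content": "Hi"},
])
```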


def GetHuggingFace(self,apikey,messages,model,freqpenalty,temperature,timeout):

Non-technical

This function connects to a language model service to generate responses based on conversation history. It takes your conversation so far and sends it to the AI system, which then crafts a thoughtful reply considering how creative or focused you want the response to be. The function manages the connection to the service, handles the conversation flow, and brings back the AI's response in a clean, readable format that continues your discussion naturally. It's designed to make interacting with advanced language models seamless and straightforward, allowing you to focus on the conversation rather than technical details.

Technical

  • apikey: A string containing the authentication key needed to access the Hugging Face service; without a valid key, the function cannot connect to the AI service
  • messages: A list of message dictionaries representing the conversation history, where each message has a role (like "user" or "assistant") and content; this provides context for the AI to generate appropriate responses
  • model: A string specifying which AI model to use for generating responses; different models have varying capabilities, knowledge, and response styles
  • freqpenalty: A floating-point number between -2.0 and 2.0 that controls how much the AI avoids repeating itself; higher values make the AI less likely to repeat phrases
  • temperature: A floating-point number between 0.0 and 2.0 that controls the randomness of the AI's responses; lower values make responses more predictable and focused, while higher values make them more creative and varied
  • timeout: An integer representing the maximum number of seconds to wait for a response from the service; if the service doesn't respond within this time, the request is cancelled

Returns

Returns a tuple containing two elements: a string representing the AI's response text, and a completion object containing detailed information about the API response including metadata, token usage, and other technical details from the Hugging Face service.
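The parameter ranges listed above can be checked before a request is sent. The sketch below is a defensive validation helper under that assumption; the library itself may simply forward values unchecked.

```python
# Illustrative pre-flight check for the documented parameter ranges.
# Not necessarily the library's behavior; it may forward values as-is.

def validate_params(freqpenalty, temperature):
    """Reject values outside the ranges documented for this provider."""
    if not -2.0 <= freqpenalty <= 2.0:
        raise ValueError("freqpenalty must be within [-2.0, 2.0]")
    if not 0.0 <= temperature <= 2.0:
        raise ValueError("temperature must be within [0.0, 2.0]")
    return True
```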


def GetTogetherAI(self,apikey,messages,model,freqpenalty,temperature,timeout):

Non-technical

This function connects you to an AI assistant through the Together AI service, letting you hold conversations where the AI remembers previous messages and responds according to your chosen creativity settings. It takes your conversation history along with parameters that shape how the AI responds, sends this information securely to the service, waits for the reply, and checks that everything went smoothly before delivering the response back to you. The system manages the technical details behind the scenes so you can focus on the conversation itself.

Technical

  • apikey: A string containing the authentication key needed to access the Together AI service; without a valid key, the connection will fail
  • messages: A list of conversation history items that provide context for the AI; more detailed history can lead to more coherent and contextually appropriate responses
  • model: A string specifying which AI model to use for generating responses; different models have varying capabilities, knowledge cutoffs, and response styles
  • freqpenalty: A floating-point number between -2.0 and 2.0 that controls how much the AI avoids repeating phrases; higher values make the output more diverse but potentially less coherent
  • temperature: A floating-point number between 0.0 and 2.0 that affects the randomness of the AI's responses; lower values make responses more predictable and focused, while higher values increase creativity and variability
  • timeout: An integer representing the maximum number of seconds to wait for a response from the AI service; if exceeded, the request will be terminated

Returns

Returns a tuple containing two elements: a string with the AI's response text and an object containing the complete API response data from Together AI.
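The library's retry-on-slow-connection behavior can be sketched as a small wrapper that reissues a failing call a limited number of times. This is a generic illustration of the pattern, not the library's exact logic or error types.

```python
# Generic retry sketch: call fn up to `attempts` times, reraising the
# last error if every attempt fails. Illustrative only.

def with_retries(fn, attempts=3):
    last_err = None
    for _ in range(attempts):
        try:
            return fn()
        except ConnectionError as err:
            last_err = err
    raise last_err
```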


def GetCohere(self,apikey,messages,model,freqpenalty,temperature,timeout):

Non-technical

This function connects to an artificial intelligence service to generate thoughtful responses to conversations, automatically adjusting its approach when needed to ensure helpful and appropriate answers while maintaining the flow of dialogue without requiring the user to manage technical details or safety restrictions behind the scenes.

Technical

  • apikey: A string containing the authentication key required to access the AI service, verifying the user's identity and permissions for the API connection.
  • messages: A list of conversation history items that provide context for the AI, shaping the response based on previous interactions in the dialogue.
  • model: A string specifying which version of the AI to use, with different models offering varying capabilities, response styles, and performance characteristics.
  • freqpenalty: A numeric value between 0 and 1 that controls repetition in responses, where higher values discourage the AI from repeating itself too often.
  • temperature: A numeric value between 0 and 1 that influences response creativity, with lower values producing more predictable answers and higher values generating more diverse and imaginative responses.
  • timeout: A numeric value representing the maximum time in seconds to wait for a response before terminating the connection attempt.

Returns

Returns a tuple containing two elements: the first element is a string with the AI's response text, and the second element is an object containing detailed information about the API call and its results.
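Note that Cohere's penalty and temperature ranges ([0, 1] here) are narrower than the [-2.0, 2.0] and [0.0, 2.0] ranges used by the OpenAI-style providers above. A defensive clamp is one way to handle out-of-range values; this is an assumption for illustration, not necessarily what the library does.

```python
# Illustrative clamp into Cohere's narrower [0, 1] penalty range.

def clamp_for_cohere(freqpenalty):
    """Clamp an OpenAI-style penalty value into Cohere's [0, 1] range."""
    return max(0.0, min(1.0, freqpenalty))
```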


def GetOllama(self,apikey,messages,model,freqpenalty,temperature,timeout,seed=0,mt=2048):

Non-technical

This function helps your computer talk to an artificial intelligence model that runs locally on your machine. It passes along your conversation so the model can understand your questions and provide thoughtful answers, controls how creative or focused the responses should be, and makes sure everything happens within a reasonable timeframe while preserving the context of your discussion.

Technical

  • apikey: A string containing authentication credentials (though not actively used in this implementation as Ollama typically operates without keys for local usage)
  • messages: A list of dictionaries representing the conversation history, where each dictionary contains roles and content that form the context for generating responses
  • model: A string specifying which language model to use for generating responses, determining the AI's capabilities and knowledge base
  • freqpenalty: A floating-point number between 0 and 2 that controls how much the AI avoids repeating itself, with higher values producing more varied responses
  • temperature: A floating-point number between 0 and 2 that determines the randomness of responses, where lower values yield more predictable answers and higher values create more creative outputs
  • timeout: An integer representing the maximum number of seconds to wait for a response before giving up
  • seed: An integer that, when set to a specific value, ensures identical responses for identical inputs, with 0 meaning no fixed pattern
  • mt: An integer specifying the maximum context window size in tokens, determining how much conversation history the AI can consider when formulating responses

Returns

Returns a tuple containing two elements: a string with the AI's response content and a dictionary with the complete response object from the Ollama API, including metadata about the generation process.
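Ollama takes sampling controls in an `options` mapping, where `mt` would map to `num_ctx` (the context window) and a nonzero `seed` makes generation reproducible. The option names below follow Ollama's documented options; the wrapper function itself is an illustrative sketch.

```python
# Sketch of the "options" mapping GetOllama might build. num_ctx and
# seed are documented Ollama options; the helper is illustrative.

def build_ollama_options(freqpenalty, temperature, seed=0, mt=2048):
    opts = {
        "frequency_penalty": freqpenalty,
        "temperature": temperature,
        "num_ctx": mt,          # maximum context window in tokens
    }
    if seed:                    # 0 means "no fixed seed"
        opts["seed"] = seed
    return opts

opts = build_ollama_options(0.5, 0.8, seed=42, mt=4096)
```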


def GetPerplexity(self,apikey,messages,model,freqpenalty,temperature,timeout):

Non-technical

This function connects to an artificial intelligence service to obtain thoughtful, contextually appropriate responses to questions or conversation prompts. It securely transmits the discussion history and returns well-crafted answers that maintain the flow of dialogue while adhering to content guidelines and user preferences.

Technical

  • apikey: A string containing the secret authentication key required to access the AI service, must be kept confidential and properly formatted for API authorization
  • messages: A list of conversation exchanges providing context for the AI, where each message has a role and content that collectively shape the response's relevance and continuity
  • model: A string specifying which version of the artificial intelligence to utilize, determining the response quality, knowledge scope, and processing capabilities
  • freqpenalty: A floating-point number between 0 and 2 that controls repetition in responses, with higher values making the AI less likely to repeat phrases or concepts
  • temperature: A floating-point number between 0 and 1 that determines the creativity and randomness of responses, with lower values producing more predictable answers and higher values yielding more diverse outputs
  • timeout: An integer representing the maximum seconds to wait for the AI service to respond, preventing indefinite waiting if the service is unresponsive or experiencing delays

Returns

Returns a tuple containing two elements: a string with the AI's response text and a dictionary with detailed information about the response generation process including metadata and potential citations.
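Perplexity responses can carry a `citations` list alongside the usual completion fields. A hedged sketch of pulling both the cleaned text and any citations out of the raw response dictionary (field names follow Perplexity's documented schema; the helper name is illustrative):

```python
# Sketch: extract cleaned text plus the optional citations list from
# a Perplexity-style response dictionary.

def extract_perplexity(response):
    text = response["choices"][0]["message"]["content"].strip()
    citations = response.get("citations", [])
    return text, citations

text, cites = extract_perplexity({
    "choices": [{"message": {"content": " See [1]. "}}],
    "citations": ["https://example.com"],
})
```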


def GetPersona(basename,channel=None,nsfw=False):

Non-technical

This function helps an AI assistant find the perfect personality settings for different situations by checking through various configuration files in a specific order. It first looks for specialized settings tailored to a particular conversation channel with mature content options when appropriate, then checks for the same channel with standard content settings, followed by general mature content settings, and finally general standard settings. If it locates a matching configuration file, it delivers the personality instructions in a properly formatted way; otherwise, it indicates that no suitable personality setting could be found, ensuring the AI always has appropriate guidance for its interactions.

Technical

  • basename: A string representing the core identity name of the persona configuration to search for, determining which set of personality files to examine.
  • channel: An optional string specifying a particular communication context or platform where the AI operates, which influences whether channel-specific persona configurations are considered.
  • nsfw: A boolean value indicating whether mature content settings should be included in the search, with True enabling NSFW configurations and False restricting to standard content only.

Returns

Returns a string containing the properly formatted persona configuration text with newline characters escaped and quotes sanitized, or returns None if no matching persona configuration file exists in the expected locations.
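The lookup order described above can be sketched as a pure function that yields candidate names from most to least specific. The naming scheme (`<base>.<channel>.nsfw`, etc.) is an assumption for illustration; the real configuration files may follow a different pattern.

```python
# Sketch of GetPersona's search order, most specific first. The file
# naming scheme here is hypothetical.

def persona_candidates(basename, channel=None, nsfw=False):
    names = []
    if channel and nsfw:
        names.append(f"{basename}.{channel}.nsfw")  # channel + mature
    if channel:
        names.append(f"{basename}.{channel}")       # channel + standard
    if nsfw:
        names.append(f"{basename}.nsfw")            # general + mature
    names.append(basename)                          # general + standard
    return names

order = persona_candidates("helper", channel="general", nsfw=True)
```

The real function would try each candidate in turn, returning the first file's sanitized contents, or None when no candidate exists.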
