---
title: Introduction
description: Intelligent optimization and routing for LLM workflows
---


What is DeepMyst?

DeepMyst is an intelligent LLM gateway that enhances AI interactions through advanced token optimization and routing capabilities. By intelligently directing queries to the most appropriate model and optimizing tokens, DeepMyst helps you achieve better results while reducing costs.

Our platform serves as a unified API layer that connects to all major LLM providers, enabling you to access the best models for each task while maintaining a single, consistent integration point. DeepMyst requires no new libraries or major code changes - simply redirect your existing OpenAI SDK calls to our API endpoint.
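Because DeepMyst exposes an OpenAI-compatible endpoint, the switch can be as small as changing the base URL. The sketch below uses only the Python standard library; the endpoint URL and API key are placeholders, not the real values — take both from your DeepMyst dashboard:

```python
import json
import urllib.request

# Placeholder values -- substitute the endpoint and key from your
# DeepMyst dashboard; this URL is an assumption, not the real one.
DEEPMYST_BASE_URL = "https://api.deepmyst.com/v1"
API_KEY = "YOUR_DEEPMYST_KEY"

def build_chat_request(model: str, messages: list) -> urllib.request.Request:
    """Build an OpenAI-compatible chat completion request.

    The only change from calling OpenAI directly is the base URL --
    the payload and headers keep the standard OpenAI shape.
    """
    body = json.dumps({"model": model, "messages": messages}).encode()
    return urllib.request.Request(
        f"{DEEPMYST_BASE_URL}/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
    )

req = build_chat_request("gpt-4o-mini", [{"role": "user", "content": "Hello"}])
# urllib.request.urlopen(req) would send it; omitted here.
```

If you use the official `openai` Python SDK, the same redirect is a single argument: pass the DeepMyst endpoint as `base_url` when constructing the `OpenAI(...)` client, and leave the rest of your code unchanged.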

Why DeepMyst?

Traditional LLM implementations face several challenges:

  • Cost inefficiency: Using high-performance models for every query leads to unnecessary expenses
  • Token waste: Standard implementations don't optimize token usage, resulting in higher costs
  • Quality inconsistency: Different models excel at different tasks, but selecting the right one is complex
  • Integration complexity: Managing multiple model providers requires maintaining separate integrations

DeepMyst addresses these challenges by providing:

  • Token optimization that reduces costs without sacrificing quality
  • Intelligent routing that matches each query with the optimal model
  • Unified API that works with your existing code and libraries

Key Features

  • Smart routing: automatically route queries to the optimal LLM based on query type, complexity, and required capabilities
  • Token optimization: reduce token usage by up to 75% with our suffix-array compression technology
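To make the optimization idea concrete, here is a deliberately simple sketch that trims redundant text from a prompt before it reaches a model. This is only an illustration of cutting tokens while preserving meaning — it is not DeepMyst's suffix-array technique, which is far more sophisticated:

```python
import re

def squeeze_prompt(prompt: str) -> str:
    """Toy prompt compression: collapse runs of whitespace and drop
    exact-duplicate sentences. Illustrative only -- not DeepMyst's
    actual suffix-array compression."""
    seen, kept = set(), []
    for sentence in re.split(r"(?<=[.!?])\s+", prompt.strip()):
        normalized = " ".join(sentence.split())
        if normalized and normalized not in seen:
            seen.add(normalized)
            kept.append(normalized)
    return " ".join(kept)

verbose = "Summarize the report.   Summarize the report. Keep it short."
compact = squeeze_prompt(verbose)
savings = 1 - len(compact) / len(verbose)  # rough size reduction
```

Even this crude pass shrinks a repetitive prompt measurably; a gateway applying real compression does the same kind of work at the token level, transparently, on every request.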

How DeepMyst Works

DeepMyst operates as an intelligent middleware layer between your application and various LLM providers:

  1. Request Processing: When you send a request to DeepMyst, our system analyzes the query to understand its content, complexity, and required capabilities.
  2. Token Optimization: If enabled, DeepMyst applies sophisticated compression techniques to reduce token usage while preserving content quality.
  3. Model Selection: Based on this analysis, DeepMyst either routes to the model you specified or, if using auto-routing, selects the optimal model from your connected providers.
  4. Response Delivery: The optimized response is returned through the familiar OpenAI-compatible API format, ready for integration into your application.
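The four steps above can be sketched as a small pipeline. Everything here is a stand-in: the model names are examples from the supported list, and the rule-based analysis and routing are assumptions — DeepMyst's real query analysis and model scoring are more involved:

```python
from typing import Optional

def analyze(query: str) -> dict:
    # Step 1 (request processing): a crude complexity signal from
    # length and reasoning keywords; real analysis is richer.
    hard = len(query.split()) > 50 or any(
        k in query.lower() for k in ("prove", "derive", "analyze")
    )
    return {"complex": hard}

def optimize(query: str, enabled: bool) -> str:
    # Step 2 (token optimization): stand-in pass that collapses
    # whitespace; the real gateway compresses at the token level.
    return " ".join(query.split()) if enabled else query

def select_model(requested: Optional[str], analysis: dict) -> str:
    # Step 3 (model selection): honor an explicit model, otherwise
    # auto-route by complexity.
    if requested:
        return requested
    return "o1" if analysis["complex"] else "gpt-4o-mini"

def handle(query: str, model: Optional[str] = None,
           optimize_tokens: bool = True) -> dict:
    analysis = analyze(query)
    prompt = optimize(query, optimize_tokens)
    chosen = select_model(model, analysis)
    # Step 4 (response delivery): the real gateway would now call the
    # chosen provider and return an OpenAI-compatible response; this
    # sketch returns the routing decision instead.
    return {"model": chosen, "prompt": prompt}
```

For example, a short factual query would route to a lightweight model, while a proof-style request would route to a reasoning model — unless the caller pins a specific model, which is always honored.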

Benefits

  • Save up to 65% on token costs without sacrificing quality
  • Get better answers through intelligent routing and reasoning
  • One API for all your LLM needs with standard compatibility

Supported Models

DeepMyst provides access to a wide range of models from leading providers:

  • OpenAI: GPT-4o, GPT-4o-mini, o1, o1-mini, o3-mini
  • Anthropic: Claude 3.5 Sonnet, Claude 3 Haiku, Claude 3 Opus
  • Google: Gemini 2.0 Flash, Gemini 1.5 Pro, Gemini 1.5 Flash
  • Open models: Llama 3.1/3.3, Mixtral 8x7B, Gemma 2, Qwen, DeepSeek

Next Steps

  • Set up your DeepMyst account and make your first API call
  • Learn how our token optimization reduces costs
  • Discover how our intelligent routing system works