diff --git a/chapters/audio-intelligence/audio-to-llm.mdx b/chapters/audio-intelligence/audio-to-llm.mdx
index 6124879..f1d9727 100644
--- a/chapters/audio-intelligence/audio-to-llm.mdx
+++ b/chapters/audio-intelligence/audio-to-llm.mdx
@@ -1,5 +1,5 @@
 ---
-title: Audio to LLM
+title: Audio-to-LLM
 description: "Run your own prompts on a pre-recorded transcript with an LLM - summaries, Q&A, extraction, and more."
 ---

@@ -7,9 +7,9 @@ import PrerecordedBadge from "/snippets/badges/prerecorded.mdx"

-**Audio to LLM** runs once the transcription is generated. You provide **one or more prompts**; each prompt is executed against the **transcript text** from the same job using the configured model, yielding **one LLM response per prompt**. Use it to extract action items, answer questions about the recording, or run any text analysis you express in natural language.
+**Audio-to-LLM** runs once the transcription is generated. You provide **one or more prompts**; each prompt is executed against the **transcript text** from the same job using the configured model, yielding **one LLM response per prompt**. Use it to extract action items, answer questions about the recording, or run any text analysis you express in natural language.

-Unlike the built-in [Summarization](/chapters/audio-intelligence/summarization) feature — which produces a fixed-format summary — Audio to LLM lets you write **your own instructions**: ask for a summary in the exact format, tone, and level of detail your product needs, or combine a summary with other analyses (action items, compliance checks) in a single request.
+Unlike the built-in [Summarization](/chapters/audio-intelligence/summarization) feature — which produces a fixed-format summary — Audio-to-LLM lets you write **your own instructions**: ask for a summary in the exact format, tone, and level of detail your product needs, or combine a summary with other analyses (action items, compliance checks) in a single request.
 ## Usage

@@ -19,7 +19,7 @@ Unlike the built-in [Summarization](/chapters/audio-intelligence/summarization)
 4. The API returns **one result object per prompt** (same order as `prompts`), each containing the original `prompt` and the model `response`.

- Audio to LLM sends **plain transcript text** to the model. Raw audio and other fields from the transcription response are **not** added to the LLM prompt context.
+ Audio-to-LLM sends **plain transcript text** to the model. Raw audio and other fields from the transcription response are **not** added to the LLM prompt context.

 ## Model selection

diff --git a/chapters/audio-intelligence/index.mdx b/chapters/audio-intelligence/index.mdx
index 0c6e055..7e6568a 100644
--- a/chapters/audio-intelligence/index.mdx
+++ b/chapters/audio-intelligence/index.mdx
@@ -57,7 +57,7 @@ Use these capabilities alongside Live or Pre-recorded STT to automate workflows

diff --git a/chapters/pre-recorded-stt/audio-intelligence.mdx b/chapters/pre-recorded-stt/audio-intelligence.mdx
index 4032968..aab4645 100644
--- a/chapters/pre-recorded-stt/audio-intelligence.mdx
+++ b/chapters/pre-recorded-stt/audio-intelligence.mdx
@@ -74,7 +74,7 @@ Audio intelligence turns raw speech into structured, useful data on top of trans
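The docs being edited above state the feature's contract: the request carries a list of `prompts`, and the response carries one result object per prompt, in the same order, each with the original `prompt` and the model `response`. A minimal sketch of that ordering guarantee follows; the payload shapes and field names (`audio_url`, `audio_to_llm`) are illustrative assumptions, not taken from the actual API reference.

```python
# Hypothetical request/response shapes -- field names are illustrative
# assumptions, not the documented API schema.
request = {
    "audio_url": "https://example.com/meeting.wav",
    "audio_to_llm": {
        "prompts": [
            "Summarize the call in three bullet points",
            "List all action items with owners",
        ]
    },
}

# One result object per prompt, returned in the same order as `prompts`,
# each echoing the original `prompt` alongside the model `response`.
response = {
    "audio_to_llm": [
        {"prompt": request["audio_to_llm"]["prompts"][0], "response": "..."},
        {"prompt": request["audio_to_llm"]["prompts"][1], "response": "..."},
    ],
}

# Because order is preserved, callers can zip prompts with responses
# instead of matching them by string comparison.
pairs = list(
    zip(
        request["audio_to_llm"]["prompts"],
        [r["response"] for r in response["audio_to_llm"]],
    )
)
```

The order guarantee is what makes the simple `zip` pairing safe: no lookup by prompt text is needed, which matters when two prompts happen to be identical.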