Feature Request: Consistent Markdown Rendering for LLM Responses #65

@17Swagat

Description

First off—this package is fantastic. The output quality is impressive, and the integration feels smooth. However, I’ve noticed some inconsistencies when trying to render responses in Markdown format, especially compared to how Meta AI handles it.

🧩 Problem

When explicitly prompting the model to respond in Markdown (e.g., “Need the complete answer as <Markdown-code>”), it works well for some queries. But for more complex questions—like:

Explain the transformer architecture and the equations involved in each step.

…the output tends to be partially formatted. Some sections are correctly rendered in Markdown, while others are lumped into a single code block, making it harder to parse or render cleanly.

Equations, in particular, are often returned in raw Markdown or LaTeX-like syntax, but not consistently wrapped or structured for proper rendering.
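
As a stopgap, I've been post-processing responses to detect when the whole answer comes back wrapped in a single fenced block and unwrap it so the Markdown inside can render. This is just an illustrative sketch on my side, not anything the package provides (and the assumption that the response exposes a plain-text `message` field is mine):

```python
import re

def unwrap_single_code_block(text: str) -> str:
    """If the whole response came back wrapped in one fenced code block,
    strip the outer fences so the Markdown inside renders normally.
    Illustrative helper only; not part of the package."""
    match = re.fullmatch(
        r"```[\w-]*\n(.*)\n```",  # a single outer fence around everything
        text.strip(),
        flags=re.DOTALL,
    )
    # If the structure doesn't match (e.g., mixed text and code blocks),
    # return the response unchanged rather than guessing.
    return match.group(1) if match else text

# cleaned = unwrap_single_code_block(response["message"])  # "message" field is an assumption
```

It helps with the fully wrapped cases, but it obviously can't fix partially formatted responses or inconsistently wrapped equations, which is why a package-level option would be much better.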

📷 Example

Here’s a screenshot illustrating the formatting issue:

[Screenshot: Markdown Rendering Issue]

✅ Feature Request

It would be incredibly helpful if the package offered an option to:

  • Enforce consistent Markdown formatting across the entire response
  • Properly segment code blocks, equations, and text
  • Optionally return structured formats (e.g., Markdown, HTML, JSON) for easier rendering downstream

This would make it much easier to integrate the output into front-end components or documentation pipelines.
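
To make the third point concrete, here's the kind of structured shape I'd find easiest to consume downstream. The block types and field names are just my suggestion, not anything the package currently returns:

```python
# Purely illustrative: a shape an optional structured mode could return.
# None of these field names exist in the package today.
example_structured_response = {
    "format": "json",
    "blocks": [
        {"type": "heading", "level": 2, "text": "Scaled Dot-Product Attention"},
        {"type": "text", "text": "Each attention head computes a weighted sum of the values."},
        {"type": "equation",
         "latex": r"\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\left(\frac{QK^\top}{\sqrt{d_k}}\right)V"},
        {"type": "code", "language": "python", "content": "scores = (Q @ K.T) / d_k ** 0.5"},
    ],
}

# A renderer can then dispatch on block type instead of guessing from raw text:
for block in example_structured_response["blocks"]:
    if block["type"] == "equation":
        print(f"$$ {block['latex']} $$")      # hand off to a LaTeX/KaTeX renderer
    elif block["type"] == "code":
        print(block["content"])               # wrap in a fenced block / syntax highlighter
    else:
        print(block.get("text", ""))
```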

Thanks again for the great work :)
