
Commit 08d39f7

fix: eval cards (#129)
1 parent 88651e7 commit 08d39f7

File tree: 1 file changed (+4, −4 lines)


evaluators/made-by-traceloop.mdx

Lines changed: 4 additions & 4 deletions
@@ -67,10 +67,6 @@ Each evaluator comes with a predefined input and output schema. When using an ev
 Measure how well the LLM response follows given instructions to ensure compliance with specified requirements.
 </Card>
 
-<Card title="Prompt Perplexity" icon="brain">
-Measure how predictable/familiar a prompt is to a language model.
-</Card>
-
 <Card title="Measure Perplexity" icon="hashtag">
 Measure text perplexity from logprobs to assess the predictability and coherence of generated text.
 </Card>
@@ -82,6 +78,10 @@ Each evaluator comes with a predefined input and output schema. When using an ev
 <Card title="Conversation Quality" icon="comments">
 Evaluate conversation quality based on tone, clarity, flow, responsiveness, and transparency.
 </Card>
+
+<Card title="Context Relevance" icon="hashtag">
+Validate context relevance to ensure retrieved context is pertinent to the query.
+</Card>
 </CardGroup>
 
 ### Security & Compliance
