Commit d69bdcc

chore: update translations file from Crowdin
1 parent 74184a3 commit d69bdcc

4 files changed: 154 additions & 230 deletions

File shown: basilisk/res/locale/fr/LC_MESSAGES/basiliskLLM.po (38 additions & 57 deletions)
@@ -2,8 +2,8 @@ msgid ""
 msgstr ""
 "Project-Id-Version: basiliskllm\n"
 "Report-Msgid-Bugs-To: EMAIL@ADDRESS\n"
-"POT-Creation-Date: 2025-10-11 13:06+0000\n"
-"PO-Revision-Date: 2025-10-11 13:23\n"
+"POT-Creation-Date: 2026-01-17 13:23+0000\n"
+"PO-Revision-Date: 2026-01-17 13:32\n"
 "Last-Translator: \n"
 "Language-Team: French\n"
 "MIME-Version: 1.0\n"
@@ -1383,48 +1383,56 @@ msgid "Error checking for updates: %s"
 msgstr "Erreur durant la recherche des mises à jour : %s"
 
 #: basilisk/provider_engine/anthropic_engine.py:99
+msgid "Our fastest model with near-frontier intelligenceBest model for complex agents and coding with highest intelligence"
+msgstr ""
+
 #: basilisk/provider_engine/anthropic_engine.py:110
-msgid "Best model for complex agents and coding with highest intelligence"
+msgid "Our fastest model with near-frontier intelligence"
 msgstr ""
 
 #: basilisk/provider_engine/anthropic_engine.py:122
 #: basilisk/provider_engine/anthropic_engine.py:133
+msgid "Best model for complex agents and coding with highest intelligence"
+msgstr ""
+
+#: basilisk/provider_engine/anthropic_engine.py:145
+#: basilisk/provider_engine/anthropic_engine.py:156
 msgid "Exceptional model for specialized complex tasks"
 msgstr ""
 
 #. Translators: This is a model description
-#: basilisk/provider_engine/anthropic_engine.py:144
-#: basilisk/provider_engine/anthropic_engine.py:153
+#: basilisk/provider_engine/anthropic_engine.py:167
+#: basilisk/provider_engine/anthropic_engine.py:176
 msgid "High-performance model"
 msgstr ""
 
 #. Translators: This is a model description
-#: basilisk/provider_engine/anthropic_engine.py:163
-#: basilisk/provider_engine/anthropic_engine.py:172
+#: basilisk/provider_engine/anthropic_engine.py:186
+#: basilisk/provider_engine/anthropic_engine.py:195
 msgid "Our most capable model"
 msgstr ""
 
-#: basilisk/provider_engine/anthropic_engine.py:183
-#: basilisk/provider_engine/anthropic_engine.py:194
+#: basilisk/provider_engine/anthropic_engine.py:206
+#: basilisk/provider_engine/anthropic_engine.py:217
 msgid "High-performance model with early extended thinking"
 msgstr ""
 
 #. Translators: This is a model description
-#: basilisk/provider_engine/anthropic_engine.py:205
+#: basilisk/provider_engine/anthropic_engine.py:228
 msgid "Our previous intelligent model"
 msgstr ""
 
 #. Translators: This is a model description
-#: basilisk/provider_engine/anthropic_engine.py:214
+#: basilisk/provider_engine/anthropic_engine.py:237
 msgid "Our fastest model"
 msgstr ""
 
 #. Translators: This is a model description
-#: basilisk/provider_engine/anthropic_engine.py:223
+#: basilisk/provider_engine/anthropic_engine.py:246
 msgid "Powerful model for complex tasks"
 msgstr ""
 
-#: basilisk/provider_engine/anthropic_engine.py:233
+#: basilisk/provider_engine/anthropic_engine.py:256
 msgid "Fastest and most compact model for near-instant responsiveness"
 msgstr "Modèle le plus rapide et le plus compact pour une réactivité quasi instantanée"
 

@@ -1484,88 +1492,60 @@ msgstr ""
 msgid "GPT-5 model used in ChatGPT"
 msgstr ""
 
+#: basilisk/provider_engine/openai_engine.py:156
+msgid "Version of GPT-5 that produces smarter and more precise responses"
+msgstr ""
+
 #. Translators: This is a model description
-#: basilisk/provider_engine/openai_engine.py:155
+#: basilisk/provider_engine/openai_engine.py:167
 msgid "Flagship GPT model for complex tasks"
 msgstr ""
 
 #. Translators: This is a model description
-#: basilisk/provider_engine/openai_engine.py:165
+#: basilisk/provider_engine/openai_engine.py:177
 msgid "Balanced for intelligence, speed, and cost"
 msgstr ""
 
 #. Translators: This is a model description
-#: basilisk/provider_engine/openai_engine.py:175
+#: basilisk/provider_engine/openai_engine.py:187
 msgid "Fastest, most cost-effective GPT-4.1 model"
 msgstr ""
 
 #. Translators: This is a model description
-#: basilisk/provider_engine/openai_engine.py:185
+#: basilisk/provider_engine/openai_engine.py:197
 msgid "Faster, more affordable reasoning model"
 msgstr ""
 
 #. Translators: This is a model description
-#: basilisk/provider_engine/openai_engine.py:195
+#: basilisk/provider_engine/openai_engine.py:207
 msgid "Our most powerful reasoning model"
 msgstr ""
 
-#: basilisk/provider_engine/openai_engine.py:207
-#: basilisk/provider_engine/openai_engine.py:231
+#: basilisk/provider_engine/openai_engine.py:219
+#: basilisk/provider_engine/openai_engine.py:243
 msgid "Points to one of the most recent iterations of gpt-4o-mini model"
 msgstr "Pointe vers l'une des itérations les plus récentes du modèle gpt-4o-mini"
 
-#: basilisk/provider_engine/openai_engine.py:219
+#: basilisk/provider_engine/openai_engine.py:231
 msgid "Dynamic model continuously updated to the current version of GPT-4o in ChatGPT"
 msgstr "Modèle dynamique mis à jour en continu avec la version actuelle de GPT-4 dans ChatGPT"
 
-#: basilisk/provider_engine/openai_engine.py:243
+#: basilisk/provider_engine/openai_engine.py:255
 msgid "Our most recent small reasoning model, providing high intelligence at the same cost and latency targets of o1-mini. o3-mini also supports key developer features, like Structured Outputs, function calling, Batch API, and more. Like other models in the o-series, it is designed to excel at science, math, and coding tasks."
 msgstr ""
 
-#: basilisk/provider_engine/openai_engine.py:256
+#: basilisk/provider_engine/openai_engine.py:268
 msgid "Points to the most recent snapshot of the o1 model"
 msgstr "Pointe vers l'instantané le plus récent du modèle o1"
 
-#: basilisk/provider_engine/openai_engine.py:269
+#: basilisk/provider_engine/openai_engine.py:281
 msgid "The latest GPT-4 Turbo model with vision capabilities"
 msgstr "Le dernier modèle GPT-4 Turbo avec des capacités de vision"
 
-#: basilisk/provider_engine/openai_engine.py:281
+#: basilisk/provider_engine/openai_engine.py:293
 msgid "Points to one of the most recent iterations of gpt-3.5 model"
 msgstr "Pointe vers l'une des itérations les plus récentes du modèle gpt-3.5"
 
-#: basilisk/provider_engine/openai_engine.py:291
-msgid "Latest snapshot that supports Structured Outputs"
-msgstr "Dernière version prenant en charge les sorties structurées"
-
-#: basilisk/provider_engine/openai_engine.py:303
-msgid "Our high-intelligence flagship model for complex, multi-step tasks"
-msgstr "Notre modèle phare à haute intelligence pour des tâches complexes et multilingues"
-
-#: basilisk/provider_engine/openai_engine.py:315
-msgid "Our affordable and intelligent small model for fast, lightweight tasks. GPT-4o mini is cheaper and more capable than GPT-3.5 Turbo"
-msgstr "Notre modèle petit, abordable et intelligent pour des tâches rapides et légères. GPT-4o mini est moins cher et plus performant que GPT-3.5 Turbo"
-
-#: basilisk/provider_engine/openai_engine.py:327
-msgid "The latest GPT-3.5 Turbo model with higher accuracy at responding in requested formats and a fix for a bug which caused a text encoding issue for non-English language function calls"
-msgstr "Le dernier GPT-3.5 Turbo ProviderAIModel avec une précision accrue dans les réponses aux formats demandés et une correction d’un bogue qui causait un problème d’encodage de texte pour les appels de fonction en langues autres que l’anglais"
-
-#: basilisk/provider_engine/openai_engine.py:337
-msgid "Points to one of the most recent iterations of gpt-4 model"
-msgstr "Pointe vers l'une des itérations les plus récentes du modèle GPT-4"
-
-#: basilisk/provider_engine/openai_engine.py:348
-msgid "The latest GPT-4 model intended to reduce cases of “laziness” where the model doesn’t complete a task"
-msgstr "Le dernier modèle GPT-4 ProviderAIModel conçu pour réduire les cas de \"paresse\" où le ProviderAIModel ne termine pas une tâche"
-
-#: basilisk/provider_engine/openai_engine.py:359
-msgid "GPT-4 Turbo model featuring improved instruction following, JSON mode, reproducible outputs, parallel function calling, and more"
-msgstr "GPT-4 Turbo ProviderAIModel offrant un meilleur suivi des instructions, le mode JSON, des sorties reproductibles, l’appel de fonctions parallèles, et plus encore"
-
-#: basilisk/provider_engine/openai_engine.py:370
-msgid "More capable than any GPT-3.5 model, able to do more complex tasks, and optimized for chat"
-msgstr "Plus performant que n'importe quel modèle GPT-3.5, capable de réaliser des tâches plus complexes et optimisé pour les conversations"
-
 #: basilisk/provider_engine/xai_engine.py:47
 msgid "The most intelligent model from xAI, featuring native tool use, real-time search integration, and a context window of 256,000 tokens."
 msgstr ""
@@ -1601,3 +1581,4 @@ msgstr ""
 #: basilisk/provider_engine/xai_engine.py:130
 msgid "The original Grok model, providing foundational AI capabilities with a context window of 8,192 tokens."
 msgstr ""
+

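The file touched by this commit is a standard gettext catalog: each entry pairs a `msgid` (the English source string) with a `msgstr` (the French translation), and `msgstr ""` marks a string Crowdin has not translated yet, as with the new model descriptions above. As a rough illustration of the format only (not part of the commit), here is a minimal sketch that counts untranslated entries in a fragment like the ones in this diff:

```python
def count_untranslated(po_text: str) -> int:
    """Count entries whose msgstr is empty in a simple .po fragment.

    Assumes single-line msgid/msgstr pairs, which holds for the model
    descriptions in this diff; real catalogs may split strings across
    lines or use plural forms, which this sketch does not handle.
    """
    untranslated = 0
    lines = po_text.splitlines()
    for i, line in enumerate(lines):
        # Skip the header entry, whose msgid is the empty string.
        if line.startswith('msgid "') and line != 'msgid ""':
            # The matching msgstr follows its msgid line.
            for follow in lines[i + 1:]:
                if follow.startswith("msgstr"):
                    if follow.strip() == 'msgstr ""':
                        untranslated += 1
                    break
    return untranslated


fragment = '''#: basilisk/provider_engine/anthropic_engine.py:110
msgid "Our fastest model with near-frontier intelligence"
msgstr ""

#: basilisk/provider_engine/anthropic_engine.py:256
msgid "Fastest and most compact model for near-instant responsiveness"
msgstr "Modèle le plus rapide et le plus compact pour une réactivité quasi instantanée"
'''

print(count_untranslated(fragment))  # 1: only the first entry lacks a translation
```

A real workflow would rely on `msgfmt --statistics` or a library such as polib instead, which also handle multi-line strings, plural forms, and fuzzy flags.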