Hi @JimSalesforce 🤗
I'm Niels and I work as part of the open-source team at Hugging Face. I discovered your work through Hugging Face's daily papers, as it was featured: https://huggingface.co/papers/2507.12806.
The paper page lets people discuss your paper and find artifacts related to it (your dataset, for instance).
You can also claim the paper as yours, which will make it show up on your public profile on HF, and add GitHub and project page URLs.
Your MCPEval framework introduces a novel approach for automated task generation and deep evaluation of LLM agents. I noticed that the framework generates specific evaluation tasks (e.g., as .jsonl files) which are central to benchmarking. Would you consider hosting these MCPEval Evaluation Tasks on https://huggingface.co/datasets?
I see the code to generate them is on your GitHub. Hosting a representative set of these tasks on Hugging Face would give them more visibility and make them easier to discover, and would also allow people to do:
from datasets import load_dataset
dataset = load_dataset("your-hf-org-or-username/your-dataset")
If you're down, here's a guide: https://huggingface.co/docs/datasets/loading.
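If it helps, here's a minimal sketch of how you could push the generated tasks to the Hub with the datasets library (the file name tasks.jsonl and the repo id your-hf-org-or-username/mcpeval-tasks below are just placeholders):
from datasets import load_dataset
# Load the locally generated evaluation tasks (placeholder file name)
tasks = load_dataset("json", data_files="tasks.jsonl", split="train")
# Push them to the Hub under your namespace (placeholder repo id)
tasks.push_to_hub("your-hf-org-or-username/mcpeval-tasks")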
We also support Webdataset, useful for image/video datasets: https://huggingface.co/docs/datasets/en/loading#webdataset.
Besides that, there's the dataset viewer which allows people to quickly explore the first few rows of the data in the browser.
Once uploaded, we can also link the dataset to the paper page (read here) so people can discover your work.
Let me know if you're interested/need any guidance.
Kind regards,
Niels