Merged
134 changes: 118 additions & 16 deletions .github/workflows/test_notebooks.yml
@@ -9,35 +9,137 @@ on:
     - main
 
 jobs:
+  discover:
+    name: Discover notebooks
+    runs-on: ubuntu-22.04
+    outputs:
+      notebooks: ${{ steps.find-notebooks.outputs.notebooks }}
+    steps:
+      - name: Check out code
+        uses: actions/checkout@v4
+
+      - name: Find all notebooks
+        id: find-notebooks
+        run: |
+          echo "=================================================="
+          echo "Discovering notebooks"
+          echo "=================================================="
+          notebooks=$(find . -name "*.ipynb" -type f -not -path "*/venv/*" | jq -R -s -c 'split("\n")[:-1]')
+          echo "Found notebooks:"
+          echo "$notebooks" | jq -r '.[]'
+          echo "notebooks=$notebooks" >> $GITHUB_OUTPUT
+
   test:
+    name: Test ${{ matrix.notebook }}
+    needs: discover
     runs-on: ubuntu-22.04
+    strategy:
+      fail-fast: false
+      matrix:
+        notebook: ${{ fromJson(needs.discover.outputs.notebooks) }}
     steps:
       - name: Check out code
         uses: actions/checkout@v4
 
-      - name: Set up Conda
-        uses: conda-incubator/setup-miniconda@v2
+      - name: Install uv
+        uses: astral-sh/setup-uv@v4
         with:
-          activate-environment: myenv
-          environment-file: environment.yml
-          auto-activate-base: false
+          enable-cache: true
+          cache-dependency-glob: "pyproject.toml"
+
+      - name: Set up Python
+        uses: actions/setup-python@v5
+        with:
+          python-version: "3.11"
 
-      - name: Prepare Conda environment
-        shell: bash -l {0}
+      - name: Install dependencies with uv
         run: |
-          if conda env list | grep -q 'myenv'; then
-            echo "Environment 'myenv' already exists, updating environment"
-            conda env update --name myenv --file environment.yml --prune
-          else
-            echo "Creating new environment 'myenv'"
-            conda env create -f environment.yml
-          fi
+          echo "=================================================="
+          echo "Installing dependencies for ${{ matrix.notebook }}"
+          echo "=================================================="
+          start_time=$(date +%s)
+
+          uv pip install --system -e .
+
+          end_time=$(date +%s)
+          elapsed=$((end_time - start_time))
+          echo "✓ Dependencies installed in ${elapsed}s"
+
+      - name: Verify environment
+        run: |
+          echo "=================================================="
+          echo "Verifying environment setup"
+          echo "=================================================="
+          echo "✓ Python version:"
+          python --version
+          echo ""
+          echo "✓ uv version:"
+          uv --version
+          echo ""
+          echo "✓ Installed packages:"
+          uv pip list | grep -E "(jupyter|qiskit|cirq|pennylane|qbraid|cudaq)" || echo "  (packages installed)"
+          echo ""
+          echo "✓ IONQ_API_KEY status:"
+          if [ -n "$IONQ_API_KEY" ]; then
+            echo "  API key is set (length: ${#IONQ_API_KEY} characters)"
+          else
+            echo "  WARNING: API key is not set!"
+          fi
+        env:
+          IONQ_API_KEY: ${{ secrets.IONQ_API_KEY }}
 
-      - name: Run notebook tests
-        shell: bash -l {0}
+      - name: Execute notebook
         run: |
-          conda activate myenv
-          python tests/test_notebooks.py
+          echo "=================================================="
+          echo "Executing: ${{ matrix.notebook }}"
+          echo "Timeout: 600s"
+          echo "=================================================="
+          start_time=$(date +%s)
+          echo "Starting execution at $(date '+%Y-%m-%d %H:%M:%S')"
+
+          python -m jupyter nbconvert \
+            --to html \
+            --execute \
+            --ExecutePreprocessor.timeout=600 \
+            "${{ matrix.notebook }}"
+
+          exit_code=$?
+          end_time=$(date +%s)
+          elapsed=$((end_time - start_time))
+
+          if [ $exit_code -eq 0 ]; then
+            echo "✓ PASSED in ${elapsed}s"
+            echo "Completed at $(date '+%Y-%m-%d %H:%M:%S')"
+          else
+            echo "✗ FAILED after ${elapsed}s"
+            exit $exit_code
+          fi
         env:
           IONQ_API_KEY: ${{ secrets.IONQ_API_KEY }}
+
+      - name: Upload execution artifacts
+        if: always()
+        uses: actions/upload-artifact@v4
+        with:
+          name: notebook-output-${{ hashFiles(matrix.notebook) }}
+          path: |
+            **/*.html
+          retention-days: 7
+
+  summary:
+    name: Test Summary
+    needs: test
+    runs-on: ubuntu-22.04
+    if: always()
+    steps:
+      - name: Summary
+        run: |
+          echo "=================================================="
+          echo "Test Execution Complete"
+          echo "=================================================="
+          if [ "${{ needs.test.result }}" == "success" ]; then
+            echo "✓ All notebook tests passed successfully!"
+          else
+            echo "✗ Some tests failed. Check individual job logs for details."
+            exit 1
+          fi
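The discovery job's `find | jq` one-liner is what feeds the test matrix: `jq -R -s` slurps the newline-separated paths into one string, and `split("\n")[:-1]` turns it into a JSON array that `fromJson()` can consume. As a quick illustration, the same transformation in Python (the filenames below are made up):

```python
import json

# Simulate the discovery step: `find` prints one notebook path per line.
find_output = "./api.ipynb\n./qiskit/intro.ipynb\n"

# split("\n")[:-1] drops the empty element left after the trailing newline,
# mirroring jq's 'split("\n")[:-1]' filter.
notebooks = find_output.split("\n")[:-1]

# -c in jq emits compact JSON; separators=(",", ":") does the same here.
print(json.dumps(notebooks, separators=(",", ":")))
# -> ["./api.ipynb","./qiskit/intro.ipynb"]
```

The `[:-1]` slice matters: without it, the trailing newline would add an empty string to the matrix and spawn a job with an empty notebook path.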
1 change: 1 addition & 0 deletions .gitignore
@@ -29,6 +29,7 @@ share/python-wheels/
 .Python
 env/
 venv/
+.venv/
 ionq/
 build/
 develop-eggs/
22 changes: 21 additions & 1 deletion README.md
@@ -13,12 +13,32 @@ There are a wide variety of ways to run these notebooks, but for starters you'll
 1. [Python](https://www.python.org/downloads/) installed, using a version between 3.8 and 3.11.
 2. A [virtual environment](https://docs.python.org/3/library/venv.html) to help ensure your dependencies don't conflict with anything else you have installed.
 3. An [IonQ API key](https://cloud.ionq.com/settings/keys), which optionally you can store as an environment variable for ease of use. Our notebooks expect to find it stored as `IONQ_API_KEY`.
-4. An installation of the library you're wanting to run. To install all the libraries at once using Conda, run the following command from the root directory of this repository:
+4. An installation of the library you're wanting to run. You can install all libraries using one of the following methods:
+
+**Option 1: Using uv (Fastest - Recommended)**
+
+[uv](https://github.com/astral-sh/uv) is an extremely fast Python package installer and resolver:
+
+```shell
+# Install uv (if not already installed)
+curl -LsSf https://astral.sh/uv/install.sh | sh
+
+# Install all dependencies
+uv pip install -e .
+```
+
+**Option 2: Using Conda**
+
 ```shell
 conda env create -f environment.yml
 ```
+
+**Option 3: Using pip**
+
+```shell
+pip install -e .
+```
 
 ---
 
 ## Usage
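The README says the notebooks expect the key in the `IONQ_API_KEY` environment variable, and api.ipynb imports a `get_ionq_api_key` helper for exactly this. A minimal sketch of such a loader — the body here is an assumption based on the notebook's comments, not the repository's actual helper:

```python
import os
from getpass import getpass

def load_api_key() -> str:
    """Return the IonQ API key from the environment, or prompt for it.

    Mirrors the behavior the README describes: read IONQ_API_KEY if set,
    otherwise fall back to an interactive prompt.
    """
    return os.getenv("IONQ_API_KEY") or getpass("Enter your IonQ API key: ")
```

Keeping the lookup in one helper means individual notebooks never hard-code a key, which is also what lets the CI workflow inject it via `secrets.IONQ_API_KEY`.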
107 changes: 12 additions & 95 deletions api.ipynb
@@ -13,83 +13,24 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 2,
+   "execution_count": null,
    "metadata": {},
    "outputs": [],
-   "source": [
-    "import json, os, requests, time\n",
-    "from getpass import getpass\n",
-    "\n",
-    "# Before you begin, get your API key from https://cloud.ionq.com/settings/keys\n",
-    "\n",
-    "# If your API key is stored as \"IONQ_API_KEY\" in your local environment, this\n",
-    "# should find it. Otherwise you'll be prompted to enter your API key manually.\n",
-    "\n",
-    "api_key = os.getenv('IONQ_API_KEY') or getpass('Enter your IonQ API key: ')"
-   ]
+   "source": "import json\nimport time\n\nimport requests\n\nfrom helpers import get_ionq_api_key\n\n# Before you begin, get your API key from https://cloud.ionq.com/settings/keys\n\n# If your API key is stored as \"IONQ_API_KEY\" in your local environment, this\n# should find it. Otherwise you'll be prompted to enter your API key manually.\n\napi_key = get_ionq_api_key()"
   },
   {
    "cell_type": "code",
-   "execution_count": 3,
+   "execution_count": null,
    "metadata": {},
    "outputs": [],
-   "source": [
-    "def submit_job(headers, data):\n",
-    "    url = \"https://api.ionq.co/v0.3/jobs\"\n",
-    "    response = requests.post(url, headers=headers, data=data)\n",
-    "    response_json = response.json()\n",
-    "    assert response.status_code == 200, f\"Error: {response_json.get('message', 'Unknown error')}\"\n",
-    "    return response_json[\"id\"]\n",
-    "\n",
-    "def query_job(job_id, headers):\n",
-    "    url = f\"https://api.ionq.co/v0.3/jobs/{job_id}\"\n",
-    "    response = requests.get(url, headers=headers)\n",
-    "    response_json = response.json()\n",
-    "    assert response.status_code == 200, f\"Error: {response_json.get('message', 'Unknown error')}\"\n",
-    "    return response_json[\"status\"]\n",
-    "\n",
-    "def get_job_results(job_id, headers):\n",
-    "    url = f\"https://api.ionq.co/v0.3/jobs/{job_id}/results\"\n",
-    "    response = requests.get(url, headers=headers)\n",
-    "    response_json = response.json()\n",
-    "    assert response.status_code == 200, f\"Error: {response_json.get('message', 'Unknown error')}\"\n",
-    "    return response_json"
-   ]
+   "source": "# Define helper functions for interacting with IonQ's API.\n\ndef submit_job(headers, data):\n    \"\"\"Submit a quantum job to IonQ's API.\"\"\"\n    url = \"https://api.ionq.co/v0.3/jobs\"\n    response = requests.post(url, headers=headers, data=data)\n    response_json = response.json()\n    assert response.status_code == 200, f\"Error: {response_json.get('message', 'Unknown error')}\"\n    return response_json[\"id\"]\n\ndef query_job(job_id, headers):\n    \"\"\"Query the status of a submitted job.\"\"\"\n    url = f\"https://api.ionq.co/v0.3/jobs/{job_id}\"\n    response = requests.get(url, headers=headers)\n    response_json = response.json()\n    assert response.status_code == 200, f\"Error: {response_json.get('message', 'Unknown error')}\"\n    return response_json[\"status\"]\n\ndef get_job_results(job_id, headers):\n    \"\"\"Retrieve the results of a completed job.\"\"\"\n    url = f\"https://api.ionq.co/v0.3/jobs/{job_id}/results\"\n    response = requests.get(url, headers=headers)\n    response_json = response.json()\n    assert response.status_code == 200, f\"Error: {response_json.get('message', 'Unknown error')}\"\n    return response_json"
   },
   {
    "cell_type": "code",
-   "execution_count": 4,
+   "execution_count": null,
    "metadata": {},
    "outputs": [],
-   "source": [
-    "headers = {\n",
-    "    \"Authorization\": f\"apiKey {api_key}\",\n",
-    "    \"Content-Type\": \"application/json\",\n",
-    "}\n",
-    "\n",
-    "data = {\n",
-    "    \"name\": \"API Example Circuit\",\n",
-    "    \"shots\": 100,\n",
-    "    \"target\": \"simulator\",\n",
-    "    \"input\": {\n",
-    "        \"format\": \"ionq.circuit.v0\",\n",
-    "        \"gateset\": \"qis\",\n",
-    "        \"qubits\": 2,\n",
-    "        \"circuit\": [\n",
-    "            {\n",
-    "                \"gate\": \"h\",\n",
-    "                \"target\": 0\n",
-    "            },\n",
-    "            {\n",
-    "                \"gate\": \"cnot\",\n",
-    "                \"control\": 0,\n",
-    "                \"target\": 1\n",
-    "            }\n",
-    "        ]\n",
-    "    }\n",
-    "}\n",
-    "\n"
-   ]
+   "source": "# Set up the request headers and define our circuit.\n# This circuit creates a Bell state with two qubits using IonQ's native format.\n\nheaders = {\n    \"Authorization\": f\"apiKey {api_key}\",\n    \"Content-Type\": \"application/json\",\n}\n\ndata = {\n    \"name\": \"API Example Circuit\",\n    \"shots\": 100,\n    \"target\": \"simulator\",\n    \"input\": {\n        \"format\": \"ionq.circuit.v0\",\n        \"gateset\": \"qis\",\n        \"qubits\": 2,\n        \"circuit\": [\n            {\n                \"gate\": \"h\",\n                \"target\": 0\n            },\n            {\n                \"gate\": \"cnot\",\n                \"control\": 0,\n                \"target\": 1\n            }\n        ]\n    }\n}\n"
   },
{
"cell_type": "markdown",
@@ -121,34 +62,10 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 5,
+   "execution_count": null,
    "metadata": {},
-   "outputs": [
-    {
-     "name": "stdout",
-     "output_type": "stream",
-     "text": [
-      "{'0': 0.5, '3': 0.5}\n"
-     ]
-    }
-   ],
-   "source": [
-    "# Now we'll send the job to our backend for processing.\n",
-    "\n",
-    "job_id = submit_job(headers, json.dumps(data))\n",
-    "\n",
-    "# And wait for the job to be run.\n",
-    "\n",
-    "status = \"ready\"\n",
-    "while status != \"completed\":\n",
-    "    time.sleep(1) # wait for 1 second before querying again\n",
-    "    status = query_job(job_id, headers)\n",
-    "\n",
-    "# And once the job has run, we can plot the results.\n",
-    "\n",
-    "results = get_job_results(job_id, headers)\n",
-    "print(results)"
-   ]
+   "outputs": [],
+   "source": "# Now we'll send the job to our backend for processing.\n\njob_id = submit_job(headers, json.dumps(data))\n\n# And wait for the job to complete.\n\nstatus = \"ready\"\nwhile status != \"completed\":\n    time.sleep(1)  # Wait for 1 second before querying again.\n    status = query_job(job_id, headers)\n\n# Once the job has completed, we can retrieve and display the results.\n\nresults = get_job_results(job_id, headers)\nprint(results)"
},
{
"cell_type": "markdown",
@@ -162,7 +79,7 @@
  ],
  "metadata": {
   "kernelspec": {
-   "display_name": "base",
+   "display_name": "venv (3.14.2)",
    "language": "python",
    "name": "python3"
   },
Expand All @@ -176,10 +93,10 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.8"
"version": "3.14.2"
},
"orig_nbformat": 4
},
"nbformat": 4,
"nbformat_minor": 2
}
}
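The notebook's submit-and-poll loop is the core pattern in this diff: submit the job, poll `query_job` until the status reaches `"completed"`, then fetch results. A self-contained sketch of that loop, with a stand-in for `query_job` so it runs offline without an API key (the status sequence is invented for illustration):

```python
import time

# Stand-in for query_job(): yields a fixed sequence of statuses instead of
# calling the IonQ API, so the polling loop below can run offline.
_statuses = iter(["ready", "running", "completed"])

def fake_query_job(job_id, headers):
    return next(_statuses)

status = "ready"
while status != "completed":
    time.sleep(0.01)  # the real notebook sleeps 1 second between polls
    status = fake_query_job("job-123", {})

print(status)  # -> completed
```

A production version would also break out of the loop on terminal failure states (e.g. `"failed"` or `"canceled"`), since polling only for `"completed"` loops forever if the job errors out.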