I have a .db file with over 30,000 datasets. I need to extract some information from the snapshot and the metadata of each dataset. The fastest method I have found is to load each dataset:

```python
experiment.load_dataset()
```

This takes about 0.2 seconds per dataset, which adds up to roughly 30 minutes for 10,000 datasets. I am only extracting one number from the snapshot and one from the metadata, so it should not be necessary to load the entire dataset each time. Is there a way to speed this up?

Replies: 1 comment
Hi, I do not know any "qcodes" way... If speed is really a concern, I would do something like this:

```python
import sqlite3
import json

database_abs_path = ''  # your database absolute path
run_id = 1              # your run id

conn = sqlite3.connect(database_abs_path)
conn.row_factory = sqlite3.Row  # access columns by name
cur = conn.cursor()

# Get the run's snapshot (parameterized query, no string concatenation)
cur.execute("SELECT snapshot FROM runs WHERE run_id = ?", (run_id,))
row = cur.fetchone()

# Parse the JSON string into a nice dict object
d = json.loads(row['snapshot'])

cur.close()
conn.close()
```

From there, `d` should contain a Python dict that you can handle.
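With 30,000 runs, issuing one query per run still pays per-query overhead. A minimal sketch of a batched variant, assuming the same `runs` table layout as above (`run_id` and `snapshot` columns); it fetches every snapshot in a single query:

```python
import sqlite3
import json

database_abs_path = ''  # your database absolute path

conn = sqlite3.connect(database_abs_path)
conn.row_factory = sqlite3.Row
cur = conn.cursor()

# One query for all runs instead of one query per run
cur.execute("SELECT run_id, snapshot FROM runs")

snapshots = {}
for row in cur:
    # Runs saved without a snapshot have NULL in this column
    if row['snapshot'] is not None:
        snapshots[row['run_id']] = json.loads(row['snapshot'])

cur.close()
conn.close()
```

If you only need a single number out of each snapshot, you can even skip parsing the whole JSON in Python, assuming your SQLite build includes the JSON1 functions (present in most modern builds). The JSON path here is purely illustrative; it depends on your snapshot's layout:

```python
# Hypothetical path: replace '$.station.instruments' with the actual
# location of the value you need inside your snapshot JSON.
cur.execute("SELECT run_id, json_extract(snapshot, '$.station.instruments') FROM runs")
```

As for the metadata: to my knowledge, values stored with `add_metadata` end up as extra columns on the `runs` table, so the number you need from the metadata can likely be selected in the same query; `PRAGMA table_info(runs)` will show you which columns exist.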