Conversation
Thank you for opening this PR. Each PR into dev requires a code review. For the code review, look at the following:
Benchmarking Results
Codecov Report
@@ Coverage Diff @@
##              dev     #540     +/-   ##
==========================================
- Coverage   84.50%   84.48%   -0.02%
==========================================
  Files          45       45
  Lines        3504     3506       +2
==========================================
+ Hits         2961     2962       +1
- Misses        543      544       +1
kjjarvis left a comment:
Great job on a first jupyter notebook example!
    " `runs = [(t, u, z) for t, u, z in zip(times, inputs, outputs)]`\n",
    "\n",
    "\n",
    "Using our optimization function, which runs `calc_error()` as a subroutine (more information about `calc_error()` found in our Calculating Error Example), given each of the runs.\n",
This sentence doesn't make sense to me. Can we reword it?
As discussed in our meeting, let's create a new example with multiple runs to introduce the concept of runs. I have ideas for this, so let me know if you need help.
Then we can reword this to just describe how calc_error is applied to each run independently, without the definition of runs.
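The grouping quoted above can be demonstrated with plain Python. The data values below are invented purely for illustration; only the `zip`-based list comprehension mirrors what the notebook text describes:

```python
# Illustration of the 'runs' grouping quoted in the review above.
# The data values here are made up; only the zip-based grouping
# mirrors what the notebook text describes estimate_params() doing.
times = [[0, 0.5, 1.0], [0, 0.5, 1.0]]            # one time vector per run
inputs = [[{}, {}, {}], [{}, {}, {}]]             # one input sequence per run
outputs = [[{'x': 1.8}, {'x': 6.1}, {'x': 8.0}],  # one output sequence per run
           [{'x': 1.8}, {'x': 5.9}, {'x': 7.7}]]

# Each run pairs one time vector with its matching inputs and outputs.
runs = [(t, u, z) for t, u, z in zip(times, inputs, outputs)]

# calc_error() would then be applied to each run independently.
for t, u, z in runs:
    assert len(t) == len(u) == len(z)

print(len(runs))  # 2
```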
    "You can also adjust the metric that is used to estimate parameters by setting the error_method to a different `calc_error()` method.\n",
    "e.g., m.estimate_params([(times, inputs, outputs)], keys, dt=0.01, error_method='MAX_E')\n",
    "Default is Mean Squared Error (MSE)\n",
    "See calc_error method for list of options."
This is a really nice additional note. Should we include a line of code here to illustrate it? (basically pull out line 206 and make it an executable statement?)
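A simplified sketch of what selecting an error metric by name could look like, based only on the quoted note (MSE as the default, 'MAX_E' as one alternative). These implementations are illustrative, not the library's actual code:

```python
# Simplified sketch of dispatching on an error_method name, as the
# quoted note describes (MSE is the default; 'MAX_E' is an alternative).
# These metric implementations are illustrative, not the library's code.
def _mse(expected, observed):
    # Mean Squared Error over paired samples
    return sum((e - o) ** 2 for e, o in zip(expected, observed)) / len(expected)

def _max_e(expected, observed):
    # Maximum absolute error over paired samples
    return max(abs(e - o) for e, o in zip(expected, observed))

_ERROR_METHODS = {'MSE': _mse, 'MAX_E': _max_e}

def calc_error_sketch(expected, observed, error_method='MSE'):
    """Dispatch to the metric named by error_method."""
    return _ERROR_METHODS[error_method](expected, observed)

print(calc_error_sketch([1.0, 2.0, 3.0], [1.0, 2.0, 5.0]))           # 4/3
print(calc_error_sketch([1.0, 2.0, 3.0], [1.0, 2.0, 5.0], 'MAX_E'))  # 2.0
```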
    "source": [
    "Now we can build a model with your best guess in parameters.\n",
    "\n",
    "We will use a ThrownObject Model and guess that our thrower is 20 meters tall. However, given our times, inputs and outputs, we can clearly tell this is obviously not true! Let's see if parameter estimation can fix this!"
This comment confused me a bit because we don't have times, inputs, or outputs at this point. Maybe we could reword it? It is an unreasonable height for a human; I think that is enough.
    "`estimate_params()` creates a structure of 'runs' by constructing each index of times, inputs, and outputs and placing them into a tuple.\n",
    "\n",
    " `runs = [(t, u, z) for t, u, z in zip(times, inputs, outputs)]`\n",
    "\n",
    "\n",
    "Using our optimization function, which runs `calc_error()` as a subroutine (more information about `calc_error()` found in our Calculating Error Example), given each of the runs.\n",
What is the motivation for including these lines?
Co-authored-by: Katy Jarvis <55932920+kjjarvis@users.noreply.github.com>
teubert left a comment:
This is really great! A great addition.
Just a few comments/suggestions.
    " `runs = [(t, u, z) for t, u, z in zip(times, inputs, outputs)]`\n",
    "\n",
    "\n",
    "Using our optimization function, which runs `calc_error()` as a subroutine (more information about `calc_error()` found in our Calculating Error Example), given each of the runs.\n",
As discussed in our meeting, let's create a new example with multiple runs to introduce the concept of runs. I have ideas for this, so let me know if you need help.
Then we can reword this to just describe how calc_error is applied to each run independently, without the definition of runs.
Co-authored-by: Christopher Teubert <christopher.a.teubert@nasa.gov>
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {},
      "source": []
    }
Suggested change: remove this empty markdown cell.
Also, let's get rid of the estimate_params.py example. It's replaced by this.
kjjarvis left a comment:
Most of these are just small changes, but I'm a bit confused with the example using tolerance. Once we finalize that, this PR will be good to go.
kjjarvis left a comment:
This is looking good, @aqitya. I made just minor suggestions this time, but there are still a handful of outstanding comments from the last review. As soon as those are addressed, I think we'll be ready to approve.
Co-authored-by: Katy Jarvis <55932920+kjjarvis@users.noreply.github.com>
kjjarvis left a comment:
This looks great. Thanks, @aqitya, for being so thorough!
Jupyter Notebook for estimate_params example.