Approximation problem with NN #54

@riberaborrell

Description

We consider two types of algorithms, both of which minimize the MSE loss between the function we want to approximate (the target function) and the NN representation we fit to it.
[x] Classic: the training data is fixed during the whole training period.

  • Choose data points: either sample them from the uniform distribution or take the points of the discretized grid.
  • Split the data into training and test sets (75 % / 25 %).
  • Evaluate the target function once, before training.
  • Start the training loop. We train on the training data, but also evaluate the loss on the test data so we can keep track of the generalization error.
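The classic variant above can be sketched as follows. This is a minimal illustration only: the target function, the domain, the tiny tanh network, and all hyperparameters are assumptions, not the actual model from this repository.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in target function (the issue's real targets are the
# not-controlled / metadynamics / HJB functions; sin is illustrative).
def target(x):
    return np.sin(x)

# Step 1: choose data points, here sampled uniformly on the domain.
x = rng.uniform(-np.pi, np.pi, size=(200, 1))

# Step 2: 75 % / 25 % train / test split.
n_train = int(0.75 * len(x))
x_train, x_test = x[:n_train], x[n_train:]

# Step 3: evaluate the target function once, before training.
y_train, y_test = target(x_train), target(x_test)

# Tiny one-hidden-layer tanh network, full-batch gradient descent.
d = 16
W1 = rng.normal(scale=0.5, size=(1, d)); b1 = np.zeros(d)
W2 = rng.normal(scale=0.5, size=(d, 1)); b2 = np.zeros(1)

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h @ W2 + b2, h

lr = 0.01
train_losses, test_losses = [], []
for it in range(3000):
    pred, h = forward(x_train)
    err = pred - y_train
    train_losses.append(np.mean(err ** 2))

    # Backprop through both layers.
    g_out = 2 * err / len(x_train)
    gW2, gb2 = h.T @ g_out, g_out.sum(axis=0)
    g_h = (g_out @ W2.T) * (1 - h ** 2)
    gW1, gb1 = x_train.T @ g_h, g_h.sum(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

    # Step 4: also evaluate the loss on the held-out test data,
    # so the generalization error can be tracked over training.
    test_losses.append(np.mean((forward(x_test)[0] - y_test) ** 2))
```

The key property is that the data and the target evaluations are computed once and reused in every iteration; only the parameters change.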

[x] Alternative: the training data is sampled at each training iteration.

  • Start the training loop.
  • At each iteration, sample points, evaluate the target function on them, and compute the loss.
  • For the metadynamics case, sample new data at each SGD iteration: draw normally distributed points around each Gaussian center and, in addition, one uniformly distributed batch.
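The alternative variant can be sketched like this, again under assumptions: polynomial features stand in for the NN to keep the sketch short, and sin is a placeholder target. What matters is the structure of the loop, where fresh data is drawn at every SGD step.

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder target function (illustrative assumption).
def target(x):
    return np.sin(x)

# Polynomial features stand in for the NN representation.
def features(x):
    return np.stack([np.ones_like(x), x, x ** 2, x ** 3], axis=1)

w = np.zeros(4)
lr = 1e-3
losses = []
for it in range(5000):
    # Fresh data at every SGD iteration: sample points, evaluate the
    # target on them, compute the loss.
    x = rng.uniform(-np.pi, np.pi, size=64)
    # In the metadynamics case one would instead draw, per iteration,
    # normally distributed points around each Gaussian center plus one
    # uniformly distributed batch.
    y = target(x)
    phi = features(x)
    err = phi @ w - y
    losses.append(np.mean(err ** 2))
    w -= lr * 2 * phi.T @ err / len(x)
```

Because the data changes each step, the per-iteration loss is a noisy estimate of the true MSE; there is no fixed test set, and generalization is assessed through the freshly sampled batches themselves.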

[x] Both algorithms should be applied to different types of target functions (not controlled, metadynamics)

[ ] Both algorithms should be applied to different types of target functions (HJB)
