
Internships in SuReLI

The Supaero Reinforcement Learning Initiative (SuReLI) can host internships this year over the spring and summer. These are generally open to outstanding MSc students, although we may also be able to accommodate PhD students (with more flexible dates). Check our website and publications for our current research interests.

Among current hot topics:

  • offline RL
  • RL in the low data regime
  • robust MDPs
  • learning representations for generalization in RL (possible extensions of [1,2,3])
  • better understanding of variance in SGD and its implications for model-based deep RL (following up on [4])
  • representations of deep NNs (for evolutionary optimization among other things [5])
  • neural architecture search [6]

Common application benchmarks in SuReLI:

  • the usual suspects in RL (OpenAI Gym, ProcGen, DeepMind Control Suite, etc.)
  • a new line of research applying deep RL to the control of tumoral micro-environments
  • control of fluid flows and computational fluid dynamics
  • mobile robotic applications (simulated or real, we have a bunch of platforms in the lab)
  • coupling RL and mixed integer linear programming for industrial engineering topics
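For students new to the field, the benchmarks above all wrap the same basic agent-environment interaction loop. As a minimal illustration (a generic tabular Q-learning sketch on a hypothetical toy chain MDP, not code from any SuReLI project), the core loop looks like this:

```python
import random

# Hypothetical 5-state chain MDP: actions move left (0) or right (1);
# reward 1 is granted only on reaching the rightmost state.
N_STATES, ACTIONS = 5, (0, 1)

def env_step(state, action):
    next_state = min(state + 1, N_STATES - 1) if action == 1 else max(state - 1, 0)
    done = next_state == N_STATES - 1
    return next_state, (1.0 if done else 0.0), done

def q_learning(episodes=500, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]  # tabular Q-values
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # epsilon-greedy action selection
            a = rng.choice(ACTIONS) if rng.random() < eps else max(ACTIONS, key=lambda a: q[s][a])
            s2, r, done = env_step(s, a)
            # Q-learning update: bootstrap on the greedy next-state value
            target = r + (0.0 if done else gamma * max(q[s2]))
            q[s][a] += alpha * (target - q[s][a])
            s = s2
    return q

q = q_learning()
policy = [max(ACTIONS, key=lambda a: q[s][a]) for s in range(N_STATES)]
```

The learned policy moves right from every non-terminal state. Gym-style environments expose exactly this `reset`/`step` interface, which is why the same agent code transfers across the benchmarks listed above.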

We can provide detailed internship topics, but we are particularly interested in students who engage with our research first. We are also open to discussing new research topics. Feel free to reach out to Emmanuel Rachelson to discuss your research proposal. Opportunities to stay in the team as a PhD student may arise during the coming year.

[1] Bertoin, D., & Rachelson, E. (2022). Disentanglement by cyclic reconstruction. IEEE Transactions on Neural Networks and Learning Systems.
[2] Bertoin, D., & Rachelson, E. (2022). Local Feature Swapping for Generalization in Reinforcement Learning. In International Conference on Learning Representations.
[3] Bertoin, D., Zouitine, A., Zouitine, M., & Rachelson, E. (2022). Look where you look! Saliency-guided Q-networks for visual RL tasks. In 36th Conference on Neural Information Processing Systems.
[4] Lahire, T., Geist, M., & Rachelson, E. (2022). Large Batch Experience Replay. In 39th International Conference on Machine Learning.
[5] Templier, P., Rachelson, E., & Wilson, D. G. (2021). A geometric encoding for neural network evolution. In Genetic and Evolutionary Computation Conference.
[6] Maile, K., Rachelson, E., Luga, H., & Wilson, D. G. (2022). When, where, and how to add new neurons to ANNs. In International Conference on Automated Machine Learning.