Summary: James is a "rockstar" among the Python speakers, so much so that he gets accepted without even handing in an abstract or a title. He is well known for his fast-paced presentations using only vim. This time he talked about metaprogramming in Python, but there is no way around watching the talk to get a clue what it is about.
Summary: Having the CEO of a big software company give the keynote at a community-driven conference is unexpected, to say the least. But Peter Wang has definitely proven that it was a very good choice. In his talk "Rethinking Open Source in the Era of Cloud & Machine Learning" he dives deep into how to sustainably run an open source project, commercial or noncommercial. A must-see for everyone interested in the hidden forces behind the tectonic shifts of the IT landscape in recent years.
### Title: Are you sure about that?! Uncertainty Quantification in AI
Summary: The talk introduced the concepts of aleatoric and epistemic uncertainty. It compared various methods for uncertainty estimation according to several criteria, such as performance, implementation effort, etc. A simple, one-variable toy dataset was used to evaluate these methods in practice. Some methods, such as Monte-Carlo dropout, apparently showed poor performance. I personally would have liked to learn why some methods performed better or worse on this dataset and how this generalizes to real-world datasets. However, based on later conversations with the speaker, this seems to be a tough problem for some of the methods used. What I definitely learned was how to explain the difference between aleatoric and epistemic uncertainty, as well as quantile regression, to a broad audience in simple terms. And it was the first time I had been given such a systematic overview of uncertainty quantification.
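As a rough illustration of one of the methods discussed, here is a minimal sketch of quantile regression on a made-up one-variable dataset (my own toy example, not the speaker's benchmark code): the gap between the low and high quantile predictions gives a simple estimate of the aleatoric noise band.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(500, 1))
# Heteroscedastic noise: the spread grows with |x|, i.e. aleatoric uncertainty.
y = np.sin(X[:, 0]) + rng.normal(scale=0.1 + 0.2 * np.abs(X[:, 0]))

# One gradient-boosting model per quantile (toy setup, not the talk's comparison).
models = {
    q: GradientBoostingRegressor(loss="quantile", alpha=q).fit(X, y)
    for q in (0.05, 0.5, 0.95)
}

X_test = np.linspace(-3, 3, 200).reshape(-1, 1)
lower, median, upper = (models[q].predict(X_test) for q in (0.05, 0.5, 0.95))
# The width of the 5%-95% interval is a crude estimate of the noise band.
print("mean interval width:", (upper - lower).mean())
```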
### Title: Time series modelling with probabilistic programming
Speaker: Sean Matthews, Jannes Quer
Summary: The talk presented an extrapolation problem in demand forecasting (the aggregated demand for drugs). It was remarkably different from many other data science talks in several respects.
1. It dealt with seemingly old-school methods on a small dataset.
2. The model choice was made extremely deliberately. For example, the speaker first applied standard methods such as Gaussian process regression and then demonstrated the need to go beyond them, since the data had a secular event at the end of the sample period (a rough sketch of such a baseline follows this list). In the end, he came up with a custom state-space model, and I would need to explore the literature a little further to really understand his final solution.
3. The speaker was extremely explicit about the methods chosen and about the implementation, although the problem was not an academic one but occurred in an industry context. (He explicitly showed parts of his `pystan` code.)
4. The modelling was more of a one-off undertaking and not conceived as a contribution to a production model pipeline that is automatically retrained regularly. When I asked the speaker whether he would recommend fitting the same model again after one year, the answer was a clear "No, since I don't know the future, I can't tell whether the model would still perform well then."
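For context, the kind of standard baseline mentioned in point 2 could look roughly like the sketch below. This is an invented toy series, not the speaker's data or his `pystan` state-space model; it only shows why extrapolating a plain GP past the end of the sample is unsatisfying when the data ends with a structural break.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
t = np.arange(60, dtype=float).reshape(-1, 1)  # made-up monthly time index
demand = 100 + 0.5 * t[:, 0] + 5 * np.sin(t[:, 0] / 6) + rng.normal(scale=2, size=60)

# Standard GP regression baseline: smooth trend plus observation noise.
kernel = 1.0 * RBF(length_scale=10.0) + WhiteKernel(noise_level=1.0)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(t, demand)

# Extrapolation beyond the sample: the mean reverts and the uncertainty widens,
# so a secular event at the very end of the data is handled poorly by this baseline.
t_future = np.arange(60, 72, dtype=float).reshape(-1, 1)
mean, std = gp.predict(t_future, return_std=True)
print(mean[:3], std[:3])
```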
Summary: The speaker gave an overview of the typical feature engineering workflow in machine learning. She illustrated how this can be automated using the [autofeat](https://github.com/cod3licious/autofeat) library. My impression was that this is a viable workflow when you start off with a new dataset, to get a good first iteration and to gain insight into the data. It does not, however, seem to be a tool that can automate away feature engineering completely when you are aiming for best-in-class predictions with a high degree of reliability and traceability. Nevertheless it could be a huge time-saver on the way to that goal.
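A minimal sketch of the kind of workflow she described, assuming the current autofeat API (parameter names may differ between versions) and a made-up regression dataset:

```python
import numpy as np
from autofeat import AutoFeatRegressor

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 3))
y = 2 * X[:, 0] ** 2 + np.exp(X[:, 1]) + rng.normal(scale=0.1, size=300)

# Generates non-linear feature candidates (squares, exponentials, ratios, ...)
# and keeps only those that actually help a linear model on the target.
model = AutoFeatRegressor(feateng_steps=2)
X_new = model.fit_transform(X, y)
print(X_new.shape[1], "features selected from", X.shape[1], "raw columns")
```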
### Title: Why you don’t see many real-world applications of Reinforcement Learning.
Summary: Tim currently works at [Kiwi.com](http://Kiwi.com) and focused his talk on reducing "code debt" by refactoring the code base regularly. The key elements he touched upon were easy wins, like automating some checks with `black`, `mypy`, `coala`, etc., patterns that hint at "smelly code", and possible reasons for debt ranging from "historical reasons" to "high priority urgent hacky requests". Code debt is often not easily apparent; however, tools like "SonarQube" can help in identifying sections of code that might need refactoring. Another element he talked about was the overuse of decorators, which may seem like a good idea but can lead to non-obvious functionality; he recommended that their use be limited and that they at least shouldn't alter the function signature and calls. Another suggestion that I liked was around code reviews, where it may be a good idea to structure the review at the overall scope first, followed by the system scope, and finally the code scope. This can help save coding time in case the architecture or overall scope needs to be changed. Overall, the focus was to always keep an eye out for code debt and try to improve little by little.
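The decorator point can be illustrated with a small example of my own (not from the talk): without `functools.wraps`, a decorated function loses its name, docstring and visible signature, which is exactly the kind of non-obvious behaviour to avoid.

```python
import functools
import inspect

def logged(func):
    @functools.wraps(func)  # preserves __name__, __doc__ and the visible signature
    def wrapper(*args, **kwargs):
        print(f"calling {func.__name__}")
        return func(*args, **kwargs)
    return wrapper

@logged
def book_flight(origin: str, destination: str) -> str:
    """Book a flight between two airports (hypothetical example)."""
    return f"{origin} -> {destination}"

print(book_flight.__name__)            # 'book_flight', not 'wrapper'
print(inspect.signature(book_flight))  # (origin: str, destination: str) -> str
```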
Summary: Vincent provided a very intuitive conception of Gaussian processes, and then gradually extended this intuition to more complex algorithms. It was very well presented (it even included successful live coding) and very helpful in understanding the concept.
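A tiny sketch of that basic intuition (my own, not Vincent's code): a Gaussian process is a distribution over functions, so whole functions can be drawn from its prior by sampling a multivariate normal whose covariance comes from a kernel.

```python
import numpy as np

def rbf_kernel(x1, x2, length_scale=1.0):
    """Squared-exponential covariance between two sets of points."""
    d = x1[:, None] - x2[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

x = np.linspace(-3, 3, 100)
K = rbf_kernel(x, x) + 1e-8 * np.eye(len(x))  # jitter for numerical stability
samples = np.random.default_rng(0).multivariate_normal(np.zeros(len(x)), K, size=3)
print(samples.shape)  # three sampled "functions" evaluated at 100 points
```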