Virtual Workshops
Since the COVID-19 lockdown, we have all had to adapt to a completely new paradigm: working from home. Although it was not encouraged before the pandemic, I have personally found that I am quite efficient at multi-tasking. In this post, I will focus on the virtual workshops which I have attended (or will attend).
- ICLR 2020 (26th April to 1st May)
- MLSS 2020 (28th June to 10th July)
- ICML 2020 (12th July to 18th July)
- OxML 2020 (17th August to 25th August)
- GPSS 2020 (14th September to 17th September)
I had the opportunity to present my work on Weak Lensing, Compression and Gaussian Processes (see here) at ICLR 2020. Beyond my own work, it was quite inspiring to see how Machine Learning has developed from a purely scientific field into an engineering one. Today, it encompasses almost every branch of Science and Engineering. One of the main focuses was Climate Change, which is undeniably one of the hardest problems humanity is currently facing.
Although I did not register for MLSS 2020, everyone had the opportunity to follow the talks live on YouTube. In particular, since my work is closely related to Bayesian analysis, I was inspired by Shakir Mohamed's talks (Bayesian Inference I and II). Another topic which got me thinking was Meta-Learning by Prof. Yee Whye Teh, in which he quoted the following:
"Our training procedure is based on a simple machine learning principle: test and train conditions must match" - Vinyals et al. 2016
ICML 2020 offered a plethora of talks, workshops and tutorials. I was particularly focused on the workshop on Invertible Neural Networks, Normalizing Flows, and Explicit Likelihood Models, which featured an interesting talk by Kyle Cranmer on how deep learning techniques are being used in Science. I also followed the nice tutorial on Bayesian Deep Learning and a Probabilistic Perspective of Model Construction by Andrew Wilson. We also had the opportunity to participate in various mentoring sessions (career advice, rising topics in Machine Learning, equality and many more), which were enormously helpful as a student.
OxML 2020 started with two important lectures closely related to my own research: Bayesian Machine Learning by Cheng Zhang and Gaussian Processes by James Hensman. We then had a series of other interesting lectures on Neural Networks, Natural Language Processing (NLP), Computer Vision, Representation Learning, Causal Machine Learning, Reinforcement Learning and many more. Alongside these lectures, we had various tutorials and unconference sessions, where participants themselves took the initiative to discuss a specific topic in depth.
At GPSS 2020, we had a series of lectures on various aspects of Gaussian Processes (GPs), such as Scalable GPs and Deep GPs, as well as lectures related to, for example, Bayesian Optimisation, kernel design, Bayesian Neural Networks and Composite GPs. Notably, Carl Henrik Ek gave an interesting introduction to GPs, and one funny yet thought-provoking quote from one of his slides was:
"Deep Learning is a bit like smoking: you know that it's wrong, but you do it anyway because you want to look cool."