Fundamental Science in the AI Era
April 26, 2020
Virtual Workshop at ICLR 2020 (originally planned for Addis Ababa, Ethiopia)
We are pleased to announce the first “Fundamental Science in the era of AI” Workshop at the International Conference on Learning Representations (ICLR) 2020.
Invited Speakers (Confirmed)
- Pavlos Protopapas (Harvard)
- Francois Lanusse (CosmoStat Laboratory at CEA Saclay)
- Ashish Mahabal (Caltech)
- Benjamin Nachman (Lawrence Berkeley National Laboratory)
- Anais Moeller (CNRS at Laboratoire de Physique de Clermont)
Scientific/Local/Virtual Organising Committee
- Bruce Bassett - AIMS and SARAO
- Richard Armstrong - SARAO
- Michelle Lochner - SARAO, AIMS, UWC
- Nadeem Oozeer - SARAO
On the Day: How to Participate
This virtual workshop will consist of a mix of pre-recorded and live content, with several ways to participate and engage:
Throughout the day we will be livestreaming invited talks, contributed talks, lightning poster presentations, and two live panel discussions. Please see the schedule below for details.
During the livestream, we encourage you to discuss the workshop content with other participants and to ask questions of our speakers. To aid with global access we have two tools, Rocket.Chat and Sli.do (event code 56754), and the moderator will monitor both.
Our invited speakers and panelists will discuss a mix of curated and audience questions. Please suggest discussion topics on Sli.do to help us put together the best discussion possible.
Code of Conduct
By participating in FSAI, including Q&A sessions, panels, and poster sessions, you agree to follow the ICLR Code of Conduct.
The key themes for the workshop are:
- Rigorous and interpretable ML for fundamental science - “Big” science experiments require precise control of systematic errors, yet deliver exabytes of data that will likely need machine and deep learning techniques to process. The fundamental sciences require models that are interpretable, free from bias, and equipped with precise measures of uncertainty. How best can we blend the speed and generality of machine learning with rigorous statistical methodology in a computationally feasible way on exabytes of data? And how do we optimally embed known physics, laws and symmetries into deep learning models?
- “Unknown Unknowns” - discovering new classes and new laws - “Big” science experiments such as the SKA, CERN, LIGO and LSST offer incredible opportunities for scientific discovery. However, the tsunami of data they produce will require AI to detect anomalies. How do we discover new classes of objects given the No Free Lunch theorems? How can AI best drive new discoveries in fundamental science across exabytes of data?
- What is good science in the era of AI? Dangers and opportunities - AI is impacting virtually every aspect of life and the way we do science is no exception. What will the scientific method, and good science in general, look like in 2030? What are the opportunities and potential dangers of “blackbox” AI science?
- Social Good through AI for Fundamental Science Research - Most big science experiments are funded at least partly because of their long term contribution to the wellbeing of society. What new opportunities does AI-enhanced science provide for the SKA and other big science experiments to contribute to social good through AI, particularly in Africa?
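To give a flavour of what embedding a known law into a learned model can look like, here is a toy sketch (the data, decay law and penalty weight are all invented for illustration, not drawn from any workshop material): the residual of a known differential equation is added as a soft penalty alongside the ordinary data misfit.

```python
import numpy as np

# Toy example: fit a cubic to noisy decay data while softly enforcing the
# known physical law dy/dt = -k*y as an extra least-squares penalty.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 3.0, 40)
k = 0.7                                   # known decay constant (hypothetical)
y_obs = np.exp(-k * t) + 0.02 * rng.normal(size=t.size)

V = np.vander(t, 4, increasing=True)      # basis columns: 1, t, t^2, t^3
dV = np.zeros_like(V)                     # d/dt of each basis column
dV[:, 1] = 1.0
dV[:, 2] = 2.0 * t
dV[:, 3] = 3.0 * t**2

lam = 1.0                                 # weight of the physics penalty
# Stack the data equations with the physics residual dy/dt + k*y ~ 0.
A = np.vstack([V, np.sqrt(lam) * (dV + k * V)])
b = np.concatenate([y_obs, np.zeros(t.size)])
coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)

y_fit = V @ coeffs
print("rms misfit:", np.sqrt(np.mean((y_fit - y_obs) ** 2)))
```

Because both the data term and the physics residual are linear in the coefficients here, the constrained fit collapses to a single least-squares solve; with a neural network in place of the polynomial, the same composite loss is typically minimised by gradient descent, in the spirit of physics-informed neural networks.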
Potential topics for submissions include, but are not limited to:
- Interpretable machine learning
- Building physical laws and insights into deep learning
- Bayesian machine learning
- Dealing with dataset shift and non-representative training data
- ML for visualisation of large datasets
- ML for exabyte scale science
- Human-in-the-loop learning for fundamental science
- Propagating errors correctly through complex pipelines
- Anomaly detection, one-shot and zero-shot learning for fundamental science
- Automated discovery of fundamental laws and relations
- What is good science in an era of black box AI?
- What are the dangers of AI for science?
- How can fundamental science best be used to amplify social good?
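To make the anomaly-detection topic above concrete, here is a minimal density-based sketch (NumPy only; the synthetic data and the 97th-percentile threshold are invented for illustration) that flags points unusually far from their nearest neighbours as candidate members of a new class:

```python
import numpy as np

# Synthetic data: a bulk population plus a small, distinct "new class".
rng = np.random.default_rng(1)
normal = rng.normal(0.0, 1.0, size=(200, 3))      # bulk population
outliers = rng.normal(8.0, 0.5, size=(5, 3))      # well-separated cluster
X = np.vstack([normal, outliers])

def knn_scores(X, k=5):
    """Anomaly score = distance to the k-th nearest neighbour."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    d.sort(axis=1)          # row-wise ascending; column 0 is distance to self
    return d[:, k]

scores = knn_scores(X)
threshold = np.percentile(scores, 97)   # flag the top ~3% as candidates
flagged = np.where(scores > threshold)[0]
print("flagged indices:", flagged)
```

This brute-force version is O(n²) in both time and memory, so at survey scale one would swap in tree- or hash-based neighbour search, or a learned density model; the scoring idea is unchanged.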
We believe this workshop will bring together machine learning experts and scientists doing fundamental research at a key time for science, both worldwide and on the African continent. We hope the workshop will stimulate new collaborations and networks and catalyse radical new ideas on a range of fundamental and important questions.
Extended Abstract Submissions
We invite anonymous submissions of short papers in the ICLR style for two tracks to address the key themes of the workshop. The “papers” track is for published or completed work, or work on which there has already been considerable progress. The “proposals” track is for detailed proposals for future work with a focus on rigorous concepts and ideas.
All machine learning techniques are welcome but particularly those which have the potential to scale to be useful to science in the coming decades. Submissions should make clear how the application will advance at least one of the major themes of the workshop. Submissions will not be published other than on the workshop webpage, and hence do not affect future publication of the work.
Submissions are limited to 4 pages for “Papers Track” (work that is in progress, published, and/or deployed), and 3 pages for the “Proposals Track” (detailed descriptions of ideas for future work). The emphasis should be on clearly presenting the key ideas, and why they are (1) important, (2) relevant and (3) correct.
Accepted submissions will be included in the poster sessions. A subset will be invited to give 10-20 minute spotlight talks.
Abstract Submission deadline:
Tuesday Feb 18, 2020
Please go to https://cmt3.research.microsoft.com/FSEAI2020 to submit.
Travel grant support
Limited funding may be available to support participants from South Africa whose contributions are accepted for a poster or talk, based on need.