Keynote Speakers

Carlos Guestrin

Stanford University, USA

How can you trust machine learning?

Abstract:
Machine learning (ML) and AI systems are becoming integral parts of every aspect of our lives. The definition, development and deployment of these systems are driven by (complex) human choices. And, as these AIs are making more and more decisions for us, and the underlying ML systems are becoming more and more complex, it is natural to ask the question: How can we trust machine learning? In this talk, I'll present a framework anchored on three pillars: Clarity, Competence and Alignment. For each, I'll describe algorithmic and human processes that can help drive towards more effective, impactful and trustworthy AIs. For Clarity, I'll cover methods for making the predictions of machine learning more explainable. For Competence, I will focus on methods for evaluating and testing ML models with the rigor that we apply to complex software products. Finally, for Alignment, I'll describe the complexities of aligning the behaviors of an AI with the values we want to reflect in the world, along with methods that can yield more aligned outcomes. Through this discussion, we will cover both fundamental concepts and actionable algorithms and tools that can lead to increased trust in ML.
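The Clarity pillar refers to explanation methods such as LIME, which Guestrin's team released. As a rough intuition for how such local-surrogate explanations work, here is a minimal, self-contained sketch: perturb the input, query the black box, weight samples by proximity, and read per-feature importances off a locally fitted linear model. The `black_box` function, kernel width, and all numbers below are illustrative assumptions, not material from the talk.

```python
import numpy as np

# A stand-in black-box model (illustrative; in practice a trained classifier).
def black_box(X):
    return 1.0 / (1.0 + np.exp(-(3.0 * X[:, 0] - 2.0 * X[:, 1])))

def explain_locally(predict, x, n_samples=5000, width=0.5, seed=0):
    """Fit a locally weighted linear surrogate around x (the LIME idea)."""
    rng = np.random.default_rng(seed)
    Z = x + rng.normal(scale=width, size=(n_samples, x.size))  # perturb near x
    y = predict(Z)                                             # black-box answers
    d2 = ((Z - x) ** 2).sum(axis=1)
    w = np.exp(-d2 / (2 * width ** 2))                         # proximity kernel
    # Weighted least squares with an intercept column.
    A = np.hstack([Z, np.ones((n_samples, 1))])
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(sw[:, None] * A, sw * y, rcond=None)
    return coef[:-1]  # per-feature local importance (intercept dropped)

x0 = np.array([0.2, -0.1])
importances = explain_locally(black_box, x0)
print(importances)  # feature 0 pushes the score up, feature 1 pushes it down
```

The real LIME library adds interpretable feature representations and sampling schemes per data type (tabular, text, images), but the weighted-surrogate core is the same.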

Bio:
Carlos Guestrin is a Professor in the Computer Science Department at Stanford University. His previous positions include the Amazon Professor of Machine Learning at the Computer Science & Engineering Department of the University of Washington, the Finmeccanica Associate Professor at Carnegie Mellon University, and the Senior Director of Machine Learning and AI at Apple, after the acquisition of Turi, Inc. (formerly GraphLab and Dato), which Carlos co-founded and which developed a platform for developers and data scientists to build and deploy intelligent applications. His team also released a number of popular open-source projects, including XGBoost, MXNet, TVM, Turi Create, LIME, GraphLab/PowerGraph, SFrame, and GraphChi. Carlos’ work has received awards at a number of conferences and journals, including ACL, AISTATS, ICML, IPSN, JAIR, JWRPM, KDD, NeurIPS, UAI, and VLDB. He is also a recipient of the ONR Young Investigator Award, the NSF CAREER Award, an Alfred P. Sloan Fellowship, and an IBM Faculty Fellowship. Carlos was named one of the 2008 ‘Brilliant 10’ by Popular Science magazine, and received the IJCAI Computers and Thought Award and the Presidential Early Career Award for Scientists and Engineers (PECASE). He is a former member of the Information Sciences and Technology (ISAT) advisory group for DARPA.


Cynthia Rudin

Duke University, USA

Do Simpler Machine Learning Models Exist and How Can We Find Them?

Abstract:
While the trend in machine learning has tended towards building more complicated (black box) models, such models are not as useful for high-stakes decisions: black box models have led to mistakes in bail and parole decisions in criminal justice, flawed models in healthcare, and inexplicable loan decisions in finance. Simpler, interpretable models would be better. Thus, we consider questions that diametrically oppose the trend in the field: for which types of datasets would we expect to get simpler models at the same level of accuracy as black box models? If such simpler-yet-accurate models exist, how can we use optimization to find these simpler models? In this talk, I present an easy calculation to check for the possibility of a simpler (yet accurate) model before computing one. This calculation indicates that simpler-but-accurate models do exist in practice more often than you might think. Also, some types of these simple models are (surprisingly) small enough that they can be memorized or printed on an index card.
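The "index card" remark can be made concrete with a toy scoring system, one common form of interpretable model: a handful of integer-weighted rules whose sum is compared to a threshold. The features, weights, and threshold below are invented purely for illustration; real models of this kind are learned with the optimization methods the talk discusses, not written by hand.

```python
# A toy scoring-system classifier: small enough to print on an index card.
# All rules, weights, and the threshold are hypothetical.
RULES = [
    ("age >= 60",        lambda x: x["age"] >= 60,        2),
    ("prior_events > 3", lambda x: x["prior_events"] > 3, 3),
    ("stable_history",   lambda x: x["stable_history"],  -1),
]
THRESHOLD = 3

def score(x):
    # Sum the weights of all rules that fire for this case.
    return sum(w for _, rule, w in RULES if rule(x))

def predict(x):
    # Predict "high risk" when the total score crosses the threshold.
    return score(x) >= THRESHOLD

patient = {"age": 67, "prior_events": 5, "stable_history": False}
print(score(patient), predict(patient))  # 5 True
```

Because every prediction is an auditable sum of a few named rules, a domain expert can check each decision by hand, which is exactly the property the abstract argues for in high-stakes settings.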

Bio:
Cynthia Rudin is a professor of computer science, electrical and computer engineering, statistical science, and biostatistics & bioinformatics at Duke University, and directs the Interpretable Machine Learning Lab. Previously, Prof. Rudin held positions at MIT, Columbia, and NYU. She holds an undergraduate degree from the University at Buffalo, and a PhD from Princeton University. She is the recipient of the 2022 Squirrel AI Award for Artificial Intelligence for the Benefit of Humanity from the Association for the Advancement of Artificial Intelligence (AAAI). This award, comparable in stature to world-renowned recognitions such as the Nobel Prize and the Turing Award, carries a monetary reward at the million-dollar level. She is also a three-time winner of the INFORMS Innovative Applications in Analytics Award, was named one of the "Top 40 Under 40" by Poets and Quants in 2015, and was named by Businessinsider.com as one of the 12 most impressive professors at MIT in 2015. She is a fellow of the American Statistical Association and a fellow of the Institute of Mathematical Statistics. She is past chair of both the INFORMS Data Mining Section and the Statistical Learning and Data Science Section of the American Statistical Association. She has also served on committees for DARPA, the National Institute of Justice, AAAI, and ACM SIGKDD. She has served on three committees for the National Academies of Sciences, Engineering and Medicine, including the Committee on Applied and Theoretical Statistics, the Committee on Law and Justice, and the Committee on Analytic Research Foundations for the Next-Generation Electric Grid. She has given keynote/invited talks at several conferences including KDD (twice), AISTATS, CODE, Machine Learning in Healthcare (MLHC), Fairness, Accountability and Transparency in Machine Learning (FAT-ML), ECML-PKDD, and the Nobel Conference.
Her work has been featured in news outlets including the NY Times, Washington Post, Wall Street Journal, the Boston Globe, Businessweek, and NPR.


Max Welling

University of Amsterdam & Microsoft Research, Netherlands

The PDE Prior for Deep Learning

Abstract:
There is an interesting new field developing at the intersection of the physical sciences and deep learning, sometimes called AI4Science. In one direction, tools developed in the AI community are used to solve problems in science, such as protein folding and molecular simulation. But in the other direction as well, deep insights from mathematics and physics are inspiring new DL architectures, such as Neural ODE solvers and equivariance. In this talk I will start by mapping out some of the opportunities at this intersection and subsequently dive a little deeper into PDE solving. In this subfield, too, cross-fertilization has already happened both ways: people have used DL tools to successfully solve PDEs much faster than with traditional solvers. But conversely, there are also efforts to use PDEs as "infinite width", functional representations of layers in a deep NN architecture. The latter is helpful, for instance, to become independent of gridding choices. In the second half of this talk I will explain our most recent efforts to solve PDEs faster and more accurately using DL, and conversely, new ways to use PDEs as an approximate equivariance prior.
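For readers unfamiliar with the "traditional solvers" that learned surrogates aim to outpace, here is a minimal explicit finite-difference step for the 1-D heat equation u_t = alpha * u_xx. A neural PDE surrogate is trained to approximate (many steps of) exactly this kind of update, typically at a much coarser time step. Grid size, coefficients, and boundary choice are illustrative, not from the talk.

```python
import numpy as np

def heat_step(u, alpha=0.1, dx=1.0, dt=1.0):
    """One explicit Euler step of the 1-D heat equation u_t = alpha * u_xx,
    with zero (Dirichlet) boundary conditions."""
    u_xx = np.zeros_like(u)
    # Central second difference on interior points only.
    u_xx[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    return u + dt * alpha * u_xx

u = np.zeros(11)
u[5] = 1.0           # a heat spike in the middle of the rod
for _ in range(50):  # diffuse; a learned surrogate would emulate this map
    u = heat_step(u)
print(u.round(3))
```

Note the stability constraint of explicit schemes (alpha * dt / dx**2 <= 1/2 here); one appeal of learned solvers is sidestepping such fine-grained time stepping.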

Bio:
Prof. Dr. Max Welling is a research chair in Machine Learning at the University of Amsterdam and a Distinguished Scientist at MSR. He is a fellow at the Canadian Institute for Advanced Research (CIFAR) and the European Lab for Learning and Intelligent Systems (ELLIS), where he also serves on the founding board. His previous appointments include VP at Qualcomm Technologies, professor at UC Irvine, postdoc at U. Toronto and UCL under the supervision of Prof. Geoffrey Hinton, and postdoc at Caltech under the supervision of Prof. Pietro Perona. He finished his PhD in theoretical high energy physics under the supervision of Nobel laureate Prof. Gerard ’t Hooft. Max Welling served as associate editor-in-chief of IEEE TPAMI from 2011 to 2015, has served on the advisory board of the NeurIPS Foundation since 2015, and was program chair and general chair of NeurIPS in 2013 and 2014, respectively. He was also program chair of AISTATS in 2009 and ECCV in 2016 and general chair of MIDL 2018. Max Welling is a recipient of the ECCV Koenderink Prize in 2010 and the ICML Test of Time Award in 2021. He directs the Amsterdam Machine Learning Lab (AMLAB) and co-directs the Qualcomm-UvA deep learning lab (QUVA) and the Bosch-UvA Deep Learning lab (DELTA).


Nikko Strom

Amazon, USA

Edge computing in Alexa

Abstract:
In the evolution of computing, we went from mainframes and minicomputers to personal computers, the cloud, mobile devices, the internet of things, and smart ambient devices like the Amazon Echo. Along the way, new and old workloads are continuously rebalanced between the central server, the cloud, user terminals, and edge devices. Currently, while many workloads are being migrated to the cloud, tasks that require fast responsiveness, including machine learning tasks such as speech recognition and computer vision, are increasingly being performed on edge devices. In this talk, I will discuss how Amazon Devices is developing hardware and software solutions that enable on-device processing and support new features that are only possible with edge computing.

Bio:
Nikko Strom is a technologist and scientist with a deep background in AI and speech technology. He is a Distinguished Scientist / VP and a founding member of the team that built Amazon Echo and Alexa. Nikko has 30+ years of experience in AI and Automatic Speech Recognition, gained at some of the most prominent research laboratories and companies in the world, and has published extensively in international conference proceedings, journals, patents, and books.


Partha Pratim Talukdar

IISc Bangalore & Google Research, India

Scaling Natural Language Processing for the Next Billion Users

Abstract:
Even though there are more than 7000 languages in the world, language technologies are available only for a handful of them. Lack of training data poses a significant challenge in developing language technologies for these languages. Recent advances in Multilingual Representation Learning present an opportunity to transfer knowledge and supervision from high web-resource languages to languages with lower web resources. In this talk, I shall present an overview of research in this exciting and emerging area in the NLU group at Google Research India.

Bio:
Partha is a Research Scientist at Google Research, Bangalore where he leads a group focused on Natural Language Understanding. He is also an Associate Professor (on leave) at IISc Bangalore. Partha founded KENOME, an enterprise Knowledge graph company with the mission to help enterprises make sense of unstructured data. Previously, Partha was a Postdoctoral Fellow in the Machine Learning Department at Carnegie Mellon University, working with Tom Mitchell on the NELL project. He received his PhD (2010) in CIS from the University of Pennsylvania. Partha is broadly interested in Natural Language Processing and Machine Learning. Partha is a recipient of several awards, including an Outstanding Paper Award at ACL 2019. He is a co-author of a book on Graph-based Semi-Supervised Learning.
    
Homepage - https://parthatalukdar.github.io/


Sunita Sarawagi

IIT Bombay, India

Natural Language Interfaces to Database Systems: the deep learning way and the challenges therein

Abstract:
Recently, deep neural models have been shown to surpass traditional rule-based methods for converting natural text to SQL in their capacity to handle diverse natural text. State-of-the-art Text2SQL models leverage large language models and labeled data spanning hundreds of schemas, and have yielded significantly higher accuracy on benchmarks compared to rule-based systems. However, when we look beyond leaderboard numbers, several limitations surface. Text2SQL systems fail to generalize to real-world schemas without additional fine-tuning on schema-specific labeled data, which is often unavailable. This has led to many proposals to generate synthetic (Text, SQL) pairs via another deep model for SQL2Text conversion. The data they generate lacks the diversity of natural queries, and we explore better methods that are hybrids of earlier template-based and neural methods. Another challenge is on-the-fly incorporation of feedback from users. Memory-based models are a natural choice for quick online adaptation, but integrating user feedback with blackbox deep models is non-trivial. The last challenge we discuss is detecting and interactively correcting a user query that is wrongly interpreted when the user does not understand SQL.
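To make the template-based side of the hybrid concrete, here is a toy generator of synthetic (Text, SQL) pairs from hand-written templates over a known schema. The schema, templates, and slot values are invented for illustration; they are not the methods from the talk, only the general idea such methods build on.

```python
import random

# A hypothetical schema and two hand-written (text, SQL) templates.
SCHEMA = {"employees": ["name", "salary", "dept"]}

TEMPLATES = [
    ("show the {col} of all {table}",
     "SELECT {col} FROM {table}"),
    ("which {table} have {col} greater than {val}",
     "SELECT * FROM {table} WHERE {col} > {val}"),
]

def generate_pairs(n=4, seed=0):
    """Sample n synthetic (text, SQL) training pairs by filling template slots."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(n):
        text_t, sql_t = rng.choice(TEMPLATES)
        table = rng.choice(list(SCHEMA))
        slots = {"table": table,
                 "col": rng.choice(SCHEMA[table]),
                 "val": rng.randint(1, 100)}
        pairs.append((text_t.format(**slots), sql_t.format(**slots)))
    return pairs

pairs = generate_pairs()
for text, sql in pairs:
    print(text, "->", sql)
```

Template generation guarantees well-formed SQL and exact text-SQL alignment, but the surface text is stilted; the hybrid methods mentioned above pair this guaranteed correctness with neural paraphrasing for diversity.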

Bio:
Sunita Sarawagi researches in the fields of databases and machine learning. She is head of the Center for Machine Intelligence and Data Science at IIT Bombay. She got her PhD in databases from the University of California at Berkeley and a bachelor's degree from IIT Kharagpur. She has also worked at Google Research (2014-2016), CMU (2004), and IBM Almaden Research Center (1996-1999). She is an ACM Fellow, was awarded the Infosys Prize in 2019 for Engineering and Computer Science, and received the Distinguished Alumnus Award from IIT Kharagpur. Her publications include notable paper awards at the ACM SIGMOD, ICDM, and NeurIPS conferences. She has served on the boards of directors of ACM SIGKDD and the VLDB foundation, as program chair for the ACM SIGKDD 2008 conference, as research track co-chair for the VLDB 2011 conference, and on the editorial boards of the ACM TODS and ACM TKDD journals.


Thorsten Joachims

Cornell University, USA

Beyond Engagement: Optimizing the Long-Term Sustainability of Online Platforms

Abstract:
The feedback that users provide through their choices (e.g. clicks, purchases) is one of the most common types of data readily available for training autonomous systems. However, naively training systems on choice data may only improve short-term engagement, not the long-term sustainability of the platform. In this talk, I will discuss some of the pitfalls of engagement maximization, and explore methods that allow us to supplement engagement with additional criteria that are not limited to individual action-response metrics. The goal is to give platform operators a new set of macroscopic interventions for steering the dynamics of the platform, providing a new level of abstraction that goes beyond the engagement with individual recommendations or rankings.

Bio:
Thorsten Joachims is a Professor in the Department of Computer Science and the Department of Information Science at Cornell University, and has served a two-year term as Chair of the Department of Information Science. He joined Cornell in 2001 after finishing his Ph.D. as a student of Prof. Morik in the AI unit of the University of Dortmund, from where he also received a Diplom in Computer Science in 1997. Between 2000 and 2001 he worked as a postdoc at the GMD in the Knowledge Discovery Team of the Institute for Autonomous Intelligent Systems. From 1994 to 1996 he spent one and a half years at Carnegie Mellon University as a visiting scholar of Prof. Tom Mitchell. Papers written with his students and collaborators have won 10 Best Paper Awards and 4 Test-of-Time Awards. Thorsten Joachims is an ACM Fellow, an AAAI Fellow, and a member of the SIGIR Academy.