Industry Liaison

Saturday, 23 October 2021, 02:00 PM - 04:00 PM




Panel Discussion

02:00 PM - 03:00 PM

Topic: MLOps and the Challenges of Deploying ML Models in the Real World
Moderator

Chandrashekar Ramanathan

International Institute of Information Technology Bangalore, India

Panelists

Jai Ganesh

Mphasis NEXT Labs, Bangalore, India

Sunil Kumar Vuppala

Ericsson Global AI Accelerator (GAIA), Bangalore, India

Manish Gupta

Google Research, India

Mayank Mishra

TCS Research, India





Invited Talks

Dinesh Babu Jayagopi

IIIT Bangalore, India

03:20 PM - 03:40 PM

Virtual Agent based Intelligent Platform for Multimodal Conversations

Abstract: In this talk, we will discuss technologies for real-time multimodal conversational agents and their applications. Multimodal analysis enables understanding the user's emotion and state; multimodal dialog uses this information, along with the spoken text, to generate the surface text and the appropriate state for the virtual agent; and finally, the multimodal synthesis module generates suitable nonverbal behavior and prosody for the agent's reply. These systems find applications in healthcare and education, where information can be solicited from the user and a simple task-oriented conversation can be accomplished. Innovations for quickly customizing the avatar's appearance and animations are also emerging. In particular, we will discuss an attempt to build Margadarshi, a virtual-agent-based UPSC mock interviewing platform that helps students in rural areas prepare for IAS interviews conducted by a panel of 4-5 members.
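
As a rough illustration of the analysis-dialog-synthesis loop described in the abstract, here is a minimal Python sketch. The module names and interfaces (analyze_user_state, DialogManager, synthesize_reply) are hypothetical placeholders, not the actual Margadarshi implementation.

from dataclasses import dataclass

@dataclass
class UserState:
    """Output of multimodal analysis: what the user said and how they seem."""
    transcript: str
    emotion: str       # e.g. "nervous", "confident"
    engagement: float  # 0.0 (disengaged) to 1.0 (fully engaged)

@dataclass
class AgentReply:
    """Output of dialog + synthesis: what the agent says and how it behaves."""
    surface_text: str
    agent_state: str   # e.g. "encouraging", "probing"
    gesture: str       # nonverbal behavior for the avatar
    prosody: str       # speaking style for TTS

def analyze_user_state(audio, video) -> UserState:
    # Placeholder: a real system would run ASR plus audio-visual emotion models.
    return UserState(transcript="I studied public administration.",
                     emotion="nervous", engagement=0.7)

class DialogManager:
    """Maps the analyzed user state to surface text and an agent state."""
    def respond(self, state: UserState) -> tuple[str, str]:
        agent_state = "encouraging" if state.emotion == "nervous" else "probing"
        text = f"Thank you. Could you elaborate on: '{state.transcript}'?"
        return text, agent_state

def synthesize_reply(text: str, agent_state: str) -> AgentReply:
    # Placeholder: a real system would drive avatar animation and expressive TTS.
    gesture = "nod" if agent_state == "encouraging" else "lean_forward"
    return AgentReply(surface_text=text, agent_state=agent_state,
                      gesture=gesture, prosody=agent_state)

# One turn of the conversation loop.
state = analyze_user_state(audio=None, video=None)
text, agent_state = DialogManager().respond(state)
print(synthesize_reply(text, agent_state))
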
Bio: Dr. Dinesh Babu Jayagopi is currently an Associate Professor at IIIT Bangalore, where he heads the Multimodal Perception Lab (mpl.iiitb.ac.in). His research interests are in Audio-Visual Signal Processing, Machine Learning, and Social Computing. He obtained his doctorate from the École Polytechnique Fédérale de Lausanne (EPFL), Switzerland, at the beginning of 2011. He received the Outstanding Paper Award at the International Conference on Multimodal Interaction (ICMI) 2012 and the Idiap PhD student research award for 2009. He also received the Indian Department of Science and Technology (DST) Young Scientist Start-up Grant in 2016. He has successfully collaborated with the Defence Research and Development Organization, Openstream AI, Accenture Labs, and NI Systems. He was a visiting professor at the University of Lausanne in summer 2019.



Rekha Singhal

TCS, India

03:40 PM - 04:00 PM

Next Generation Enterprise IT Systems (EIT 2.0)

Bio: Rekha Singhal is a Principal Scientist and heads the Computing Systems Research area at TCS. She is an ACM Senior Member. Her work focuses on accelerating the development and deployment of enterprise applications in data-driven programming environments. Her research interests include heterogeneous architectures for accelerating ML pipelines, learned systems, high-performance data analytics systems, big data performance analysis, query optimization, storage area networks, and distributed systems. She is associated with SPEC RG Big Data and MLPerf. She has filed numerous patents, 13 of which have been granted in international territories, and has several publications in international and national conferences, workshops, and journals. She led the project on a Disaster Recovery appliance that was runner-up for a NASSCOM award. She received her M.Tech. and Ph.D. in Computer Science from IIT Delhi and was a visiting researcher for a year at Stanford University, United States.



Anurag Dwarakanath

Amazon, India

04:00 PM - 04:20 PM

Spoken Language Understanding for the Indic Region

Abstract: In this talk, we will touch upon some of the key challenges in building Spoken Language Understanding systems for the Indic region. We begin with an insight into the usage of code-mixed multilingual utterances in which many Indic languages (beyond Hindi) are freely used. We show how such Indic languages are represented in transliterated form and, surprisingly, that current state-of-the-art multilingual language models (such as XLM-R and mBERT) do not build common representations for transliterated text. We then introduce research in Continual Language Learning as an emerging area to bridge this gap. In Continual Language Learning, we aim to build methods that can add support for new languages and vocabularies to existing pre-trained language models in an incremental way. The Indic region also sees a wide variety of spoken language variations, including grammatical errors and ambiguous utterances, which lead to noise in the data. We present recent progress in the area of Robust Machine Learning, which aims to build learning algorithms that are resilient to noise in the data. The talk will present results from experiments on both open-source data and Alexa data.
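
As a hedged illustration of how one might probe the claim about transliterated text, the following sketch compares sentence embeddings from a pre-trained multilingual model for a Hindi sentence in Devanagari and its romanized (transliterated) form. The model checkpoint, example sentences, and mean-pooling choice are assumptions for illustration, not the experimental setup used in the talk.

import torch
from transformers import AutoTokenizer, AutoModel

# Assumed checkpoint for illustration; the talk's experiments may differ.
model_name = "xlm-roberta-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
model.eval()

def sentence_embedding(text: str) -> torch.Tensor:
    """Mean-pool the last hidden states to get one vector per sentence."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # (1, seq_len, dim)
    mask = inputs["attention_mask"].unsqueeze(-1)   # (1, seq_len, 1)
    return (hidden * mask).sum(1) / mask.sum(1)

# The same Hindi sentence in native script and in transliterated (roman) form.
native = "मुझे संगीत पसंद है"
transliterated = "mujhe sangeet pasand hai"

similarity = torch.cosine_similarity(
    sentence_embedding(native), sentence_embedding(transliterated)
)
# A low similarity suggests the model does not share representations across
# scripts, which is the gap Continual Language Learning aims to bridge.
print(f"cosine similarity: {similarity.item():.3f}")
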
Bio: Anurag Dwarakanath is an applied science manager in Alexa AI, where he leads a team of scientists building machine learning and statistical models for the Natural Language Understanding components of Alexa. His interests include multilingual natural language processing, robustness in deep learning, and verification & validation of deep learning systems. Anurag holds a PhD from the Indian Institute of Management Calcutta, where he studied the application of Graph Theory in Wireless Sensor Networks. Anurag has over 20 publications and 15 patents.



Sujeeth Gattu

Zentree Labs

04:20 PM - 04:40 PM

Embedded Vision - Challenges

Abstract: In recent times, compact vision systems (based on adapted camera modules) have been integrated directly into machines and devices. Together with bespoke compute platforms and lower power consumption, they have made intelligent image processing possible in a wide range of applications without requiring a classic industrial PC.
     Migrating complex neural networks onto embedded platforms has become a major challenge given the rising complexity of these networks. Here we discuss the challenges of migrating CNNs onto target devices. The key topics covered are pruning, calibration, and quantization. We discuss the challenges we faced and the techniques we used to overcome them in real-world scenarios.
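
As a rough sketch of the pruning and quantization steps mentioned above (not the specific toolchain used at Zentree Labs), the following PyTorch example applies magnitude-based pruning to a small CNN and then post-training dynamic quantization. The layer sizes and the 50% pruning ratio are arbitrary assumptions for illustration.

import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# A toy CNN standing in for the full model being migrated to an embedded target.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 10),
)

# Pruning: zero out the 50% smallest-magnitude weights in each conv/linear layer.
for module in model.modules():
    if isinstance(module, (nn.Conv2d, nn.Linear)):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")  # make the sparsity permanent

# Quantization: convert linear layers to int8 weights (dynamic, post-training).
# Static quantization with a calibration dataset would typically be used for convs.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

# Sanity check: the pruned, quantized model still runs on a dummy input.
dummy = torch.randn(1, 3, 32, 32)
print(quantized(dummy).shape)  # torch.Size([1, 10])
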
Bio: Sujeeth is VP of Engineering at Zentree Labs, a global leader in AI-driven solutions. Sujeeth has more than two decades of experience in vision and signal processing. In recent times he headed the R&D division at an AI semiconductor company. He is a passionate “AI explorer” whose mission is to provide cost-effective, leading-edge computer vision and NLP solutions to customers from domains as diverse as healthcare, fintech, and semiconductors. Sujeeth is an alumnus of IIT Guwahati.
     Sujeeth’s key areas of expertise are:
     ● Machine learning
     ● Computer vision
     ● NLP
     ● Statistical Signal Processing