Industry Invited Speakers

Debdoot Mukherjee

Head of AI, Meesho

Riding the Flywheel of Recommender Systems

Today, recommender systems have an unprecedented influence on what content people consume on the internet and social media and what products they purchase on e-commerce platforms. For many internet companies, the recommender system is the key lever that triggers the flywheel of user growth as well as monetisation. This talk emphasizes why we need to balance the objectives of multiple stakeholders when recommenders are deployed in a marketplace in order to properly ride the flywheel. We explain the need to optimize for long-term success in such a recommender system and what kinds of short-term trade-offs may be necessary. Further, we discuss multiple open problems in the state of the art of deep recommender systems, including addressing the different kinds of biases picked up by trained representations, handling evolving behavioral data, and so on.
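The multi-stakeholder balancing the talk describes can be thought of as a blended ranking score. Below is a minimal, purely illustrative sketch; the feature names, weights, and items are hypothetical and do not describe Meesho's actual system:

```python
# Illustrative sketch: ranking marketplace items by blending per-stakeholder
# objectives (user, supplier, platform) into a single score.
# All feature names and weights below are made up for illustration.

def marketplace_score(item, weights=(0.6, 0.25, 0.15)):
    """Blend user, supplier, and platform objectives with hypothetical weights."""
    w_user, w_supplier, w_platform = weights
    return (w_user * item["relevance"]             # user: predicted engagement
            + w_supplier * item["exposure_boost"]  # supplier: exposure for long-tail sellers
            + w_platform * item["expected_margin"])  # platform: monetisation

def rank(items):
    """Order candidate items by the blended score, best first."""
    return sorted(items, key=marketplace_score, reverse=True)

catalog = [
    {"id": "A", "relevance": 0.9, "exposure_boost": 0.1, "expected_margin": 0.2},
    {"id": "B", "relevance": 0.7, "exposure_boost": 0.8, "expected_margin": 0.5},
]
ranked = [it["id"] for it in rank(catalog)]
print(ranked)  # the slightly less relevant item wins once supplier exposure and margin count
```

Note how item B outranks the more user-relevant item A once the supplier and platform terms are weighed in; tuning those weights is exactly the short-term vs. long-term trade-off the talk discusses.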

Debdoot is Chief Data Scientist at Meesho, where he leads a team that enables every pillar of the e-commerce marketplace to be smarter and more efficient through the use of AI. At Meesho, AI/ML helps increase demand by recommending the right products to every user at every touch point, empowers suppliers to effectively catalog and price products, and optimizes the supply chain to improve fulfillment efficiency. Debdoot has over 15 years of experience building innovative AI products in the social, mobile, and e-commerce domains. Prior to Meesho, he was VP & Head of AI at ShareChat & Moj, where he led teams working on recommender systems, multimodal learning, and camera tech. Before that, he set up the AI team at Hike Messenger, which developed novel methods for conversation modeling in Indic languages, massive-scale social graph mining, and more. Previously, he led ML efforts at Myntra for applications such as personalized search, product discovery, marketing, and merchandising intelligence. Debdoot started his career in the domains of enterprise search and information extraction at IBM Research. He is a gold medallist from IIT Delhi, where he earned a Master’s degree in Computer Science & Engineering.

Girish Nathan

American Physical Society, Society of Exploration Geophysicists

Machine Learning in Fintech

Machine learning has various applications in financial technology, from fraud detection to risk modeling and mitigation to underwriting, among several others. In this talk, we show two interesting and important applications of machine learning at Razorpay: KYC validation for setting up current and savings accounts, and insurance models that predict return-to-origin (RTO) probabilities for merchants. Further, we discuss the role of privacy-preserving machine learning on sensitive and confidential data.
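As a toy illustration of the RTO-prediction idea, an order's return-to-origin risk can be scored with a probabilistic classifier. The features, weights, and model form below are entirely hypothetical and are not Razorpay's actual model:

```python
import math

# Hypothetical sketch: scoring an order's return-to-origin (RTO) probability
# with a simple logistic model. All features and weights are invented.

def rto_probability(features, weights, bias=-1.0):
    """Logistic regression over illustrative order features; returns a probability."""
    z = bias + sum(weights[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))

weights = {                  # made-up weights for illustration
    "cod_order": 1.2,        # cash-on-delivery orders tend to see more RTO
    "address_quality": -0.8, # cleaner addresses reduce RTO risk
    "past_rto_rate": 2.0,    # merchant's historical RTO rate
}
order = {"cod_order": 1.0, "address_quality": 0.4, "past_rto_rate": 0.3}
p = rto_probability(order, weights)
print(round(p, 3))
```

The resulting probability can then feed pricing for the kind of merchant insurance the talk describes; a production system would of course learn the weights from historical order outcomes.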

Girish Nathan heads the Data Science function at Razorpay. He brings 16 years of data science and ML experience to the industry and has been part of the data science practice at technology leaders such as Expedia, Amazon, Microsoft, and Yahoo. Girish holds a Ph.D. in Theoretical Statistical Physics from the University of Houston.

Ramesh Nallapati

Principal Applied Scientist, Amazon Web Services

Using AI to Accelerate Code Development

Although the cloud has democratized application development by providing on-demand access to compute, storage, databases, analytics, and ML, the traditional process of building software applications still requires developers to spend significant time searching for documentation and code samples instead of focusing on core problems. Hence, more automated practices are needed to help developers boost their productivity by finding relevant code, meeting coding best practices, and exploring new APIs without leaving their development environment. In this talk, we will share some of the research work we did as part of Amazon CodeWhisperer on building large language models for automatic code completion and code generation from natural language intents. In line with what is reported in recent literature, we will show how accuracy scales with training data and model size. We will also share insights on how models trained on multiple programming languages outperform monolingual models on zero-shot translation of code to out-of-domain languages and on few-shot prompting of natural-language-to-code generation tasks. Finally, we will discuss experiments with novel contrastive loss functions, at both the token and sequence level, that boosted accuracy further compared to traditional auto-regressive loss functions.
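For readers unfamiliar with the baseline those contrastive losses are compared against, the traditional auto-regressive objective is simply the average next-token cross-entropy over a sequence. A minimal pure-Python sketch (the toy logits and tokens are invented for illustration):

```python
import math

# Minimal sketch of the standard auto-regressive language-modeling loss:
# average cross-entropy of the true next token under a softmax over the
# model's logits. The "model outputs" below are toy numbers, not a real model.

def cross_entropy(logits, target):
    """Negative log-probability of the target token under softmax(logits)."""
    m = max(logits.values())                          # subtract max for stability
    z = sum(math.exp(v - m) for v in logits.values())
    return -(logits[target] - m - math.log(z))

def autoregressive_loss(steps):
    """steps: list of (logits_for_next_token, actual_next_token) pairs."""
    return sum(cross_entropy(lg, t) for lg, t in steps) / len(steps)

# Toy decoding steps for a code snippet: at each position the model scores
# candidate next tokens, and we penalize low probability on the true one.
steps = [
    ({"a": 2.0, "b": 0.0, ")": -1.0}, "a"),
    ({"a": 0.5, "b": 1.5, ")": 0.0}, "b"),
]
loss = autoregressive_loss(steps)
print(round(loss, 3))
```

The contrastive variants the talk mentions add terms that push representations of correct continuations apart from distractors, on top of this per-token objective.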

Ramesh Nallapati earned his Ph.D. in Computer Science from the University of Massachusetts Amherst and has extensive research and development experience at top universities and industry research labs, including CMU, Stanford, SRI, and IBM Research. His primary research interests are in NLP and applied machine learning, and he has published more than 80 conference and workshop papers at top venues such as ACL, SIGIR, NAACL, KDD, CoNLL, CIKM, AISTATS, NeurIPS, and ICML. He currently serves as a Principal Applied Scientist at Amazon and has helped launch several cutting-edge AI/ML products, including Amazon Kendra, Amazon Contact Lens, Amazon QuickSight Q, and Amazon CodeWhisperer.

Parminder Bhatia

Science Manager, AWS AI Labs, Amazon

Parminder is Head of AI/ML for Low-code/No-code, Large Language Models, and CodeWhisperer at AWS AI Labs. Previously, he was one of the founding members of AWS Health AI, where he managed applied science teams building numerous healthcare-specific NLP services that allow enterprises to apply state-of-the-art NLP to clinical documents. He has published a number of academic papers at top-tier NLP and machine learning conferences and has successfully built applications in the areas of healthcare analytics, interactive dialog systems, information retrieval, and speech science. Parminder graduated from Georgia Tech. Prior to joining Amazon, he worked at Microsoft and several startups developing conversational models. He has expertise in building and deploying machine learning services and medical NLP systems at scale, and has co-organized the NLPMC workshop at ACL’20 and ACL’21.

Vignesh Subrahmaniam

Principal Data Scientist, Intuit

Automated Tax Notice Resolution

QuickBooks Online Payroll Full Service customers rely on Intuit to file their tax submissions. However, for a variety of reasons, these customers may still receive tax notices from Federal (IRS) or State tax agencies. Each notice is a physical hard copy mailed to the customer. Such notices are complex and need manual intervention to read, understand, and analyze the cause, and to help resolve it. We have built an AI/ML-backed solution that recognizes the type of tax notice and can therefore provide its type, the cause of the notice, extracted key attributes, and more. In this talk, we’ll discuss the three-part system covering -
- Conversion of the scanned image to text using OCR
- Document classification using NLP techniques to categorize the tax notices
- Extraction of key information from the document required for further resolution by tax experts
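The three stages above can be sketched end to end. This is a deliberately simplified illustration, not Intuit's system: the OCR stage is stubbed out (a real pipeline would call an OCR engine such as Tesseract on the scanned image), simple keyword rules stand in for the trained NLP classifier, and the regexes and sample notice text are invented for the example:

```python
import re

# Simplified three-stage pipeline sketch: OCR -> classify -> extract.
# Everything here (rules, regexes, sample text) is illustrative only.

def ocr(image_bytes):
    """Stage 1: OCR. Stubbed here; a real system would convert the scan to text."""
    return image_bytes.decode("utf-8")  # pretend the "scan" is already text

def classify(text):
    """Stage 2: categorize the notice (keyword rules stand in for an NLP model)."""
    lowered = text.lower()
    if "penalty" in lowered:
        return "penalty_notice"
    if "balance due" in lowered:
        return "balance_due_notice"
    return "other"

def extract_fields(text):
    """Stage 3: pull key attributes for the tax expert (regexes are illustrative)."""
    notice_no = re.search(r"Notice\s+(CP\d+)", text)
    amount = re.search(r"\$([\d,]+\.\d{2})", text)
    return {
        "notice_number": notice_no.group(1) if notice_no else None,
        "amount": amount.group(1) if amount else None,
    }

scan = b"Notice CP161: balance due of $1,234.56 for tax period 2021."
text = ocr(scan)
category = classify(text)
fields = extract_fields(text)
print(category, fields)
```

In the real system, each stage is far richer (layout-aware OCR, a learned document classifier, and model-based entity extraction), but the structured output at the end serves the same purpose: routing the notice and its key attributes to tax experts for resolution.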

Vignesh Subrahmaniam, Ph.D., is a Principal Data Scientist at Intuit India. He has 12+ years of experience in ML research with a focus on building innovative products and services that leverage AI/ML. He is an alumnus of the Indian Statistical Institute and worked with GE Global Research, embedding AI/ML solutions in industries such as Healthcare and Renewables, before joining Intuit India. At Intuit, he leads a team of data scientists and machine learning engineers responsible for delivering key AI initiatives for the QuickBooks Online Advanced and Payroll products. He is also the architect of AI capabilities that 1) standardize the development and delivery of data insights to end customers and 2) detect data entry errors in Intuit products to avoid financial mistakes.