Demonstrations



Demonstration 1

Connected Intelligence

 

Speaker:

  • Amit Mate, GMAC Intelligence

 

Abstract:

AI, 5G, and edge compute are three megatrends converging to create a new class of applications, "connected intelligent applications," characterized by application autonomy, hierarchical intelligence, and a serverless architecture. We demonstrate several consumer applications that leverage on-device, on-premise, and serverless implementations of TensorFlow models.

Demonstration 2

vaAnI - an AI-powered voice-enabled search engine for the visually-challenged community

 

Speakers:

  • Shirsha Ray Chaudhuri, Thomson Reuters
  • Prudhvi Nokku, Thomson Reuters

 

Abstract:

EnAble India is an NGO that provides a platform for People with Disabilities to come together and resolve their challenges, offering learning facilities, self-help channels, employment training, and social navigation aids. Enable Vaani is one such platform set up by the NGO, through which blind users from more than 15 states in India can call in with their questions and receive responses. It is a voice-enabled question-answering, information-retrieval, and crowd-sourced recommendation system for the blind. Today this recommendation workflow is executed manually: mediators work on incoming queries and match each one to the best possible response received in the past. This workflow demands automation with AI. The demo showcases a solution in which AI-powered cloud services automate the workflow: mediators receive auto-suggestions from the system for a given audio query and can select the best-fitting one. Past audio responses and possible suggestions are stored, translated, indexed, and parsed so that they can be used to respond to a given query. The result is a voice-enabled search and recommendation engine for the visually challenged.
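
The core of the described workflow, matching a transcribed audio query against indexed past responses to produce ranked suggestions for mediators, can be sketched minimally. This is an illustrative sketch only, not the deployed system: the transcripts and the bag-of-words cosine ranking are stand-ins for whatever speech-to-text and retrieval services the actual platform uses.

```python
import math
from collections import Counter

def vectorize(text):
    """Bag-of-words term-frequency vector for a transcript."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def suggest(query_transcript, indexed_responses, top_k=3):
    """Rank past response transcripts against a transcribed audio query."""
    qv = vectorize(query_transcript)
    ranked = sorted(indexed_responses,
                    key=lambda r: cosine(qv, vectorize(r)),
                    reverse=True)
    return ranked[:top_k]

# Hypothetical past responses, already transcribed and indexed.
past = ["how to apply for a disability certificate",
        "screen reader training schedule for this month",
        "bus routes with audio announcements in bangalore"]
print(suggest("where can i get disability certificate", past, top_k=1))
```

The mediator would then confirm or override the top suggestion, keeping a human in the loop as the abstract describes.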

Demonstration 3

MLOps at FlAi

 

Speaker:

  • Ruju N Alurkar, FlAi MLOps

 

Abstract:

Artificial Intelligence/Machine Learning faces many hurdles on its journey towards becoming a part of our daily lives. One of the biggest challenges is handling large multi-process, multi-component, multi-algorithm processing pipelines, and doing so without everything falling apart. It is easy to build a single machine learning or deep learning model; the real challenge is to build multiple such models that work together in a coordinated manner and, in the end, produce usable, coherent results: that is what true AI will be. The FlAi MLOps Suite is an end-to-end drag-and-drop SaaS platform that enables users to create their own machine learning pipelines running on Kubernetes. FlAi allows high-level container customisation while completely automating the code needed to provision the infrastructure of the ML pipeline, keeping the user (the data scientist) in full control of their own data science code. The drag-and-drop functionality, inherent containerisation, rapid object-level (block- and pipeline-level) replication, and collaborative development all gear the user up for massive yet manageable AI/ML pipelines. Inherent benefits of using Kubernetes under the hood include a multi-fold increase in processing speed, failure redundancy, and container-level replication for components of the pipeline. Thanks to FlAi's abstraction layer, a data scientist can easily understand and exploit the power of Kubernetes without having to endure the pain of configuring it themselves.

Demonstration 4

MLOps with enhanced performance control and observability

 

Speakers:

  • Dinesh Ghanta, Oracle
  • Prateek Katageri, Oracle
  • Girish Nautiyal, Oracle
  • Indradumna Banerjee, Oracle
  • Pradeep Sanchana, Oracle
  • Atin Modi, Oracle

 

Abstract:

The explosion of data and its ever-increasing complexity in recent years has made MLOps systems more prone to failure, and new tools need to be embedded in such systems to avoid it. In this demo, we introduce crucial tools in the observability module of an MLOps system that target difficult issues such as data drift and model version control for optimum model selection. We believe that integrating these features into our MLOps pipeline goes a long way towards building a robust system immune to early-stage ML system failures.
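
Data drift detection, one of the issues the abstract names, is commonly implemented by comparing the live feature distribution against the training-time baseline. A minimal sketch, assuming a Population Stability Index check (one common drift metric; the demo's actual observability tooling is not specified in the abstract):

```python
import math

def psi(expected, actual, bins=5):
    """Population Stability Index between a training-time feature sample
    and a live sample; larger values indicate distribution drift.
    Rule of thumb: < 0.1 stable, > 0.2 significant drift."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0

    def hist(xs):
        counts = [0] * bins
        for x in xs:
            i = min(max(int((x - lo) / width), 0), bins - 1)
            counts[i] += 1
        # Smooth empty bins to avoid log(0).
        return [max(c / len(xs), 1e-4) for c in counts]

    e, a = hist(expected), hist(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

train = [0.1 * i for i in range(100)]              # baseline distribution
live_same = [0.1 * i for i in range(100)]          # no drift
live_shift = [0.1 * i + 5.0 for i in range(100)]   # shifted distribution
print(psi(train, live_same))   # near zero: stable
print(psi(train, live_shift))  # well above 0.2: drift alert
```

In a pipeline, a PSI value crossing the alert threshold would trigger retraining or a rollback to an earlier model version.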

Demonstration 5

Automating NLP workflows using AutoML NLP

 

Speakers:

  • Shipra Jain, Microsoft R&D India Pvt Ltd
  • Chandra Sekhar Gupta Aravapalli, Microsoft

 

Abstract:

With recent advancements in technology, everything from electronics to household items is purchased online at massive scale, generating thousands to millions of data points. The importance of reviews needs little emphasis: studies by Maslowska et al. and by Askalidis et al. report that online customers read reviews before making a purchase. It is significant for businesses to address unsatisfactory reviews, either by providing a prompt response or by relaying feedback to the manufacturer. What adds to the complexity, however, is the huge volume of multilingual data received from disparate sources and the free-flowing semantics of natural language. There is therefore a need for automated analysis techniques to extract the feedback and opinions conveyed in user comments. With products and services graduating from being data-driven to "data and intelligence"-driven, and progressing towards a more AI-driven world, automating the solution with AI-driven applications is the key to meeting that need. In this segment we show how an automated ML (AutoML) solution can quickly train and pick a best-in-class machine learning model to accept multilingual natural-language data, classify it, and extract the issues behind unsatisfactory customer feedback, enabling manufacturers to harness the true potential of their data and make informed, data-driven decisions.
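
The essence of AutoML-style model selection is to fit several candidate models and keep the one that scores best on held-out data. A deliberately tiny sketch with hypothetical toy classifiers (the actual Microsoft AutoML NLP stack trains far richer multilingual models; every name below is illustrative):

```python
def majority_class(train):
    """Baseline: always predict the most common training label."""
    labels = [y for _, y in train]
    best = max(set(labels), key=labels.count)
    return lambda text: best

def keyword_rule(train):
    """Toy classifier: flag reviews containing complaint keywords."""
    bad = {"broken", "late", "refund", "worst"}
    return lambda text: "negative" if bad & set(text.lower().split()) else "positive"

def accuracy(model, data):
    return sum(model(x) == y for x, y in data) / len(data)

def auto_select(candidates, train, holdout):
    """Fit every candidate, return the name of the best on holdout."""
    fitted = {name: fit(train) for name, fit in candidates.items()}
    return max(fitted, key=lambda name: accuracy(fitted[name], holdout))

train = [("arrived late and broken", "negative"),
         ("works great", "positive"),
         ("refund please", "negative")]
holdout = [("worst purchase ever", "negative"),
           ("very happy", "positive"),
           ("asking for a refund", "negative")]
candidates = {"majority": majority_class, "keywords": keyword_rule}
print(auto_select(candidates, train, holdout))  # "keywords" wins on holdout
```

A real AutoML system replaces these toy candidates with featurization plus model families (linear models, gradient boosting, transformers) searched over hyperparameters, but the select-by-validation-score loop is the same.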

Demonstration 6

COSWARA: A Web Application for Data Collection and Real-time Screening of SARS-CoV-2 Infected Subjects

 

Speakers:

  • Debarpan Bhattacharya, Indian Institute of Science
  • Neeraj Sharma, Indian Institute of Science
  • Srikanth Raj Chetupalli, Indian Institute of Science
  • Sriram Ganapathy, Indian Institute of Science

 

Abstract:

The primary symptom of COVID-19 is respiratory discomfort, and the pandemic is the most serious international health emergency since the 1918 influenza epidemic. A wide range of information has been gathered to understand how this health crisis affected individuals' respiratory health in particular. While vaccinations have provided a first line of defence, widespread and ongoing screening remains one of the most important measures for lowering the incidence of COVID-19, and a variety of screening procedures is needed to trade off cost, testing simplicity, testing speed, and accuracy. In this work, we examine the potential of COVID-19 screening using respiratory acoustics, which can be quickly deployed and scaled to a large population. We present the conception of the data-collection strategy, along with the creation and evaluation of machine learning models that permit COVID-19 screening from audio samples. Specifically, we describe a classifier based on recurrent neural networks that screens for COVID-19 using respiratory sound samples. The classifier, deployed on a cloud-based server, quickly estimates the likelihood of SARS-CoV-2 infection. According to a bias and fairness analysis, the screening tool is largely unaffected by the subject's age, gender, geography, language background, and pre-existing medical conditions.
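
The forward pass of such a recurrent classifier, consuming a sequence of acoustic feature frames and emitting an infection probability, can be sketched as follows. This is a minimal illustration with random weights and a plain Elman RNN cell; the actual Coswara system's architecture, feature extraction, and trained parameters are described in the authors' work, not reproduced here.

```python
import math
import random

random.seed(0)
n_feat, n_hidden = 13, 8  # e.g. MFCC-like frames, small hidden state

# Random weights stand in for parameters that, in the real system,
# would be learned from the crowd-sourced respiratory recordings.
W_x = [[random.gauss(0, 0.1) for _ in range(n_feat)] for _ in range(n_hidden)]
W_h = [[random.gauss(0, 0.1) for _ in range(n_hidden)] for _ in range(n_hidden)]
w_out = [random.gauss(0, 0.1) for _ in range(n_hidden)]

def screen(frames):
    """Run a single-layer Elman RNN over acoustic feature frames and
    map the final hidden state to an infection probability (sigmoid)."""
    h = [0.0] * n_hidden
    for x in frames:
        h = [math.tanh(sum(W_x[i][j] * x[j] for j in range(n_feat)) +
                       sum(W_h[i][j] * h[j] for j in range(n_hidden)))
             for i in range(n_hidden)]
    score = sum(w * hi for w, hi in zip(w_out, h))
    return 1.0 / (1.0 + math.exp(-score))

clip = [[random.gauss(0, 1) for _ in range(n_feat)] for _ in range(50)]
print(f"estimated probability of infection: {screen(clip):.3f}")
```

In the deployed tool, the web application records the audio, extracts features, and sends them to this kind of model running on the cloud server, returning the probability to the user in near real time.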

Demonstration 7

Recommendation system for ML model deployment

 

Speakers:

  • John Barboza, Amazon
  • Mohan Gandhi, Amazon

 

Abstract:

Configuring a deployment setup that runs machine learning models and meets performance requirements at scale is a hard problem. It requires knowing what types of hardware are available, their strengths and weaknesses, the trade-offs between performance and cost, the tuning parameters of the framework, and so on. Inference Recommender is a service that aims to take this burden away from ML customers: it recommends the best-performing instance and tuning parameters to achieve the lowest cost per inference call.
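
The selection criterion at the heart of such a recommender, which is to pick the eligible instance with the lowest cost per inference call, reduces to a small calculation. The catalogue below is entirely hypothetical (made-up instance names, throughputs, and prices); a real service would measure throughput by load-testing the customer's model on each candidate.

```python
# Hypothetical catalogue: instance name -> (throughput in req/s, $/hour).
CATALOGUE = {
    "cpu.small":  (40,  0.10),
    "cpu.large":  (150, 0.40),
    "gpu.medium": (900, 3.00),
}

def cost_per_million(throughput_rps, dollars_per_hour):
    """Cost of serving one million inference calls on this instance."""
    calls_per_hour = throughput_rps * 3600
    return dollars_per_hour / calls_per_hour * 1_000_000

def recommend(catalogue, min_rps=0):
    """Cheapest-per-call instance whose throughput meets the target."""
    eligible = {k: v for k, v in catalogue.items() if v[0] >= min_rps}
    return min(eligible, key=lambda k: cost_per_million(*eligible[k]))

print(recommend(CATALOGUE))               # cheapest per call overall
print(recommend(CATALOGUE, min_rps=500))  # cheapest that sustains 500 req/s
```

Note how the answer flips with the throughput requirement: the small CPU instance is cheapest per call, but only the GPU instance can sustain high traffic, so the recommendation depends on the customer's target load.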

Demonstration 8

We built a wall for the better – running online ML experiments at scale

 

Speakers:

  • Data Science Platform Team, Gojek

 

Abstract:

In the real world, any serious production system must account for performance, scalability, observability, and a slew of other non-functional requirements to be successful. This is no different for ML systems, and the typical Data Scientist either takes on some of the Engineering burden or, in more established companies, relies on a team of savvy Engineers to make the (Jupyter) notebook science "production-ready". But the relationship between the Engineer and the Data Scientist is not an easy one to navigate; it is fraught with back-and-forths, misunderstandings, delays, and disappointments. That is hardly surprising, given that one desires stability and predictability while the other loves to experiment.

This motivated us to look for a solution and build an MLOps product that would give both camps what they wanted. One such area was model experimentation. We surveyed multiple Data Science projects at Gojek that conducted experiments and realized we had a strong opportunity to standardize: while each project had diverse goals, their requirements and workflows were common enough to extract into a set of abstractions. This would give data scientists the flexibility and control they desired while adhering to good engineering standards. And thus we created Turing, a multi-model orchestration and experimentation platform that is also completely self-service.
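
One abstraction that online model experiments invariably need is deterministic traffic splitting: the same user must always see the same model variant for results to be interpretable. A minimal sketch of that routing step (the names, weights, and hashing scheme below are illustrative assumptions, not Turing's actual implementation):

```python
import hashlib

def assign_variant(unit_id, experiment, variants):
    """Deterministically assign a unit (e.g. a customer id) to a model
    variant. `variants` is a list of (name, weight) with weights
    summing to 100; hashing makes assignment stable across requests."""
    digest = hashlib.sha256(f"{experiment}:{unit_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    cumulative = 0
    for name, weight in variants:
        cumulative += weight
        if bucket < cumulative:
            return name
    return variants[-1][0]  # guard against rounding in weights

variants = [("control-model", 80), ("candidate-model", 20)]
print(assign_variant("customer-42", "ranker-experiment", variants))
```

Layering an orchestration platform over primitives like this, along with logging, metrics, and result analysis, is what lets data scientists run experiments self-service while the routing itself stays engineered and predictable.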