Generative AI Workshop 

Recent progress in generative models has resulted in models that can produce realistic text, images, and video, with the potential to revolutionize the way humans work, create content, and interact with machines. The workshop on Generative AI at AIMLSystems will focus on the entire life-cycle of building and deploying such Generative AI systems: data collection and processing, developing systems and the requisite infrastructure, the applications they enable, and the ethics associated with such technology, covering concerns related to fairness, transparency, and accountability. We invite original, unpublished work on Artificial Intelligence with a focus on generative AI and its use cases. Specifically, the topics of interest include, but are not limited to:

  • Systems, architecture, and infrastructure for Generative AI
  • Machine learning and modeling using LLMs and diffusion models
  • Large Language Models and their applications
  • Multi-modal Generative AI and its applications
  • Gen AI-based plugins and agents
  • Deployment of Generative AI solutions
  • Evaluation of language- and diffusion-based models
  • Responsible use of Gen AI

Organizers

  • Anupam Purwar, Senior Research Scientist, Amazon
  • Danish Pruthi, Indian Institute of Science (IISC), Bangalore
  • Sunayana Sitaram, Principal Researcher, Microsoft Research 


Keynote: My Experiments with Large Language Models 

Prof. Mausam, IIT Delhi

Towards reducing hallucination in extracting information from financial reports using Large Language Models

Bhaskarjit Sarmah (BlackRock)*; Dhagash Mehta (BlackRock, Inc.)

ChatGPT for Mental Health Applications: A study on biases

Ritesh S Soun (Sri Venkateswara College); Aadya Nair (PepsiCo)*

Building a Llama2-finetuned LLM for Odia Language Utilizing Domain Knowledge Instruction Set

Guneet S Kohli (Thapar University); Shantipriya Parida (Silo AI)*; Sambit Sekhar (Odia Generative AI); Samirit Saha (Amrita School of Engineering, Bengaluru); Nipun B Nair (Amrita School of Engineering, Bengaluru); Parul Agarwal (Institute of Mathematics and Applications); Sonal Khosla (Odia Generative AI); Kusum Lata (NIT Hamirpur); Debasish Dhal (NISER Bhubaneswar)

Observations on LLMs for Telecom Domain: Capabilities and Limitations

Sumit Soman (Ericsson)*; Ranjani H. G. (Ericsson)

ScripTONES: Sentiment-Conditioned Music Generation for Movie Scripts

Vishruth Veerendranath (PES University)*; Vibha Masti (Carnegie Mellon University); Utkarsh Gupta (PES University); Hrishit Chaudhuri (PES University); Gowri Srinivasa (PES University)

Tea/Coffee Break (1130-1200)


Are you a Foodie looking for New Cookies to try out? Better not ask an LLM

Binay Gupta (Walmart Global Tech India); Saptarshi Misra (Walmart Global Tech)*; Anirban Chatterjee (Walmart Global Tech); Kunal Banerjee (Walmart Global Tech)

CoReGAN: Contrastive Regularized Generative Adversarial Network for Guided Depth Map Super Resolution

Aditya Kasliwal (Manipal Institute of Technology)*; Ishaan Gakhar (Manipal Institute of Technology, Manipal Academy of Higher Education); Aryan Bhavin Kamani (Manipal Institute of Technology)

Applications of Generative AI in Fintech

Kalpesh Barde*; Parth Kulkarni (Adobe)


Keynote: Towards transforming the landscape of Indian language technology

Prof. Mitesh Khapra, IIT Madras

Lunch (1330-1430)


 Tutorial: Enhancing LLM inferencing with RAG and fine-tuned LLMs

Abhinav Kimothi, Head of AI, Yarnit

Tea/Coffee Break (1600-1630)


Panel discussion: Responsible Thinking for Generative AI



Keynote Talks

Title: My Experiments with Large Language Models

Speaker: Prof. Mausam, IITD.

Abstract: The development of large language models, leading up to OpenAI's GPT-4, has caused another AI revolution. These models are being envisaged as foundation models, i.e., a strong starting point for all aspects of AI, including language, knowledge, reasoning, and decision making. However, the strongest models are only available through an API, so the standard fine-tuning paradigm is not applicable to them. In this talk, I describe our initial experiments that assess the extent to which the current best LLMs hold promise as foundation models. I also explore supervised settings, and find that workflows that combine LLMs with trained models obtain the best performance. Finally, I argue that workflows which include LLMs as components will be quite useful, necessitating optimization approaches for obtaining strong cost-quality tradeoffs.

Title: Towards transforming the landscape of Indian language technology

Speaker: Prof. Mitesh Khapra, IITM

Abstract: In this talk, I will reflect on our journey towards transforming the landscape of Indian language technology. I will delve into our engineering-heavy approach to addressing the initial scarcity of data for Indian languages, while gradually establishing the human resources needed to gather high-quality data on a larger scale through Bhashini. The objective is to share our insights into developing high-quality open-source technology for Indian languages. This involves curating extensive data from the internet, constructing multilingual models for transfer learning, and crafting high-quality datasets for fine-tuning and evaluation. I will then transition into how our experiences can benefit the broader AI community, particularly as India aspires to create Large Language Models (LLMs) for Indic languages.

Bio: Mitesh M. Khapra is an Associate Professor in the Department of Computer Science and Engineering at IIT Madras. He heads the AI4Bharat Research Lab at IIT Madras, which focuses on building datasets, tools, models, and applications for Indian languages. His research work has been published in several top conferences and journals, including TACL, ACL, NeurIPS, TALLIP, EMNLP, EACL, AAAI, etc. He has also served as Area Chair or Senior PC member at top conferences such as ICLR and AAAI. Prior to IIT Madras, he was a Researcher at IBM Research India for four and a half years, where he worked on several interesting problems in the areas of Statistical Machine Translation, Cross Language Learning, Multimodal Learning, Argument Mining, and Deep Learning. Prior to IBM, he completed his PhD and M.Tech from IIT Bombay in January 2012 and July 2008, respectively. His PhD thesis dealt with the important problem of reusing resources for multilingual computation. During his PhD, he was a recipient of the IBM PhD Fellowship (2011) and the Microsoft Rising Star Award (2011).
He is also a recipient of the Google Faculty Research Award (2018), the IITM Young Faculty Recognition Award (2019), the Prof. B. Yegnanarayana Award for Excellence in Research and Teaching (2020), and the Srimathi Marti Annapurna Gurunath Award for Excellence in Teaching (2022).



Tutorial: Enhancing LLM inferencing with RAG and fine-tuned LLMs

Presenter: Abhinav Kimothi, Yarnit

Abstract: Today, Large Language Models like GPT-4, Llama 2, Claude, etc. are easily accessible to anyone who wants them, and developers have taken it upon themselves to explore the possibilities these models open up. While large businesses, governments, and organizations are still separating the wheat from the chaff, one point everyone concedes is that LLMs cannot be ignored; those who apply the power of LLMs to the apt use cases will be on the winning side. In this workshop, we will discuss the process of building applications that leverage LLMs. Beginning with an introduction to the LLM landscape, we will focus on the project lifecycle of an LLM-based application. We will then get hands-on with invoking proprietary and open-source LLMs and the process of in-context inferencing. To address the challenge of hallucinations in LLM inferencing, the bulk of the workshop will focus on Retrieval Augmented Generation (RAG) and fine-tuning models for specific tasks. We will leverage embeddings and vector databases to guardrail LLM generations to the user's context. By the end of the session, attendees will be familiar with executing the concepts of in-context learning, fine-tuning, and RAG. The demonstration will be done in Python and use OpenAI APIs and the LangChain framework, among others.
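To illustrate the retrieval half of the RAG pipeline described above, here is a minimal, self-contained sketch in pure Python. It is not the tutorial's actual code: the documents are hypothetical, the "embedding" is a toy bag-of-words vector standing in for a learned embedding model, the in-memory list stands in for a vector database, and the final LLM call is omitted. The point is only the shape of the flow: embed the query, retrieve the most similar passages, and prepend them to the prompt so the model answers from context.

```python
from collections import Counter
import math

def embed(text):
    # Toy "embedding": bag-of-words token counts stand in for a
    # learned dense vector from an embedding model.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# A tiny in-memory "vector database" of hypothetical documents.
documents = [
    "The AIMLSystems workshop covers generative AI systems and infrastructure.",
    "Retrieval Augmented Generation grounds LLM answers in retrieved context.",
    "Fine-tuning adapts a pretrained model to a specific downstream task.",
]
index = [(doc, embed(doc)) for doc in documents]

def retrieve(query, k=1):
    # Rank documents by similarity to the query and keep the top k.
    q = embed(query)
    ranked = sorted(index, key=lambda pair: cosine(q, pair[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

def build_prompt(query):
    # Prepend retrieved passages so the LLM answers from the given
    # context rather than from memory; the actual LLM call is omitted.
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("What does Retrieval Augmented Generation do?")
```

In a real deployment the toy pieces would be swapped for an embedding model, a vector database, and an LLM call on the assembled prompt, which is exactly the substitution the tutorial walks through with OpenAI APIs and LangChain.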

Submission Guidelines

We invite authors to submit original and unpublished research papers (up to 4 pages excluding references). All submissions will undergo a rigorous peer-review process by the program committee. Authors are requested to follow the ACM sigconf template. All accepted papers will be published in the proceedings of AIMLSys 2023.

Submission link:
(please select the Generative AI Workshop track)

    Important Dates

    • Paper Submission Deadline: 21st September 2023 (extended from 12th September 2023)
    • Notification of Acceptance: 3rd October 2023 (extended from 29th September 2023)
    • Camera-Ready Deadline: 8th October 2023
    • Workshop Date: 28th October 2023

    Workshop Organization

    The workshop will feature keynote speeches, technical paper and poster sessions, tutorials, and possibly panel discussions. A detailed outline of the program will be available on the website shortly. The workshop will also provide ample opportunities for attendees to network with leading experts and to gain hands-on experience with Generative AI through the tutorial sessions.


    At least one author of each accepted paper will need to register for the conference; in the case of multiple papers by the same author, co-authors will need to register as well.

    Workshop Venue

    The Chancery Pavilion | Bengaluru, India

    Contact Information

    For any inquiries regarding the workshop, please feel free to contact the workshop organizers.