{"id":1314,"date":"2023-08-18T21:52:13","date_gmt":"2023-08-18T16:22:13","guid":{"rendered":"https:\/\/www.aimlsystems.org\/2023\/?page_id=1314"},"modified":"2023-10-27T16:01:37","modified_gmt":"2023-10-27T10:31:37","slug":"workshop-genai","status":"publish","type":"page","link":"https:\/\/www.aimlsystems.org\/2023\/workshop-genai\/","title":{"rendered":"Workshop-GenAI"},"content":{"rendered":"<p>[et_pb_section fb_built=&#8221;1&#8243; admin_label=&#8221;Header&#8221; _builder_version=&#8221;4.21.0&#8243; _module_preset=&#8221;default&#8221; background_color=&#8221;gcid-1bcf785a-50e1-437b-b09f-65567babc1de&#8221; background_image=&#8221;https:\/\/www.aimlsystems.org\/2023\/wp-content\/uploads\/2023\/05\/grid-bg-2.png&#8221; background_size=&#8221;initial&#8221; background_position=&#8221;bottom_center&#8221; background_repeat=&#8221;repeat&#8221; custom_padding=&#8221;||0px|||&#8221; collapsed=&#8221;on&#8221; global_colors_info=&#8221;{%22gcid-1bcf785a-50e1-437b-b09f-65567babc1de%22:%91%22background_color%22%93}&#8221;][et_pb_row _builder_version=&#8221;4.19.2&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_column type=&#8221;4_4&#8243; _builder_version=&#8221;4.19.2&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_text _builder_version=&#8221;4.21.0&#8243; _module_preset=&#8221;7f63b212-a10a-4d30-afa2-e478a747ca88&#8243; header_2_font_size=&#8221;44px&#8221; custom_margin=&#8221;||10px||false|false&#8221; header_2_font_size_phone=&#8221;33px&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<h2><strong> Generative AI Workshop\u00a0<\/strong><\/h2>\n<p>[\/et_pb_text][\/et_pb_column][\/et_pb_row][\/et_pb_section][et_pb_section fb_built=&#8221;1&#8243; admin_label=&#8221;Features&#8221; module_id=&#8221;about&#8221; _builder_version=&#8221;4.21.0&#8243; _module_preset=&#8221;default&#8221; background_color=&#8221;#dbdbdb&#8221; 
background_image=&#8221;https:\/\/www.aimlsystems.org\/2023\/wp-content\/uploads\/2023\/05\/rm380-10.jpg&#8221; background_blend=&#8221;overlay&#8221; custom_padding=&#8221;3.9%||||false|false&#8221; use_background_color_gradient_phone=&#8221;on&#8221; background_color_gradient_stops_phone=&#8221;#001528 0%|rgba(255, 255, 255, 0) 10%|rgba(255,255,255,0) 70%|#0f0122 100%&#8221; collapsed=&#8221;on&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_row column_structure=&#8221;2_3,1_3&#8243; _builder_version=&#8221;4.21.0&#8243; _module_preset=&#8221;default&#8221; custom_padding=&#8221;27px||43px|||&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_column type=&#8221;2_3&#8243; _builder_version=&#8221;4.21.0&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_text _builder_version=&#8221;4.21.0&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<p>Recent progress in generative models has resulted in systems that can produce realistic text, images, and video, with the potential to revolutionize the way humans work, create content, and interact with machines. The workshop on Generative AI at AIMLSystems will focus on the entire life cycle of building and deploying such systems: data collection and processing, the development of models and the requisite infrastructure, the applications they enable, and the ethics of the technology, covering concerns related to fairness, transparency, and accountability. We invite original, unpublished work on Artificial Intelligence with a focus on generative AI and its use cases. 
Specifically, the topics of interest include but are not limited to:<\/p>\n<ul>\n<li>Systems, architecture and infrastructure for Generative AI<\/li>\n<li>Machine learning and modeling using LLMs and diffusion models<\/li>\n<li>Large Language Models and their applications<\/li>\n<li>Multi-modal Generative AI and its applications<\/li>\n<li>GenAI-based plugins and agents<\/li>\n<li>Deployment of Generative AI solutions<\/li>\n<li>Evaluation of language and diffusion-based models<\/li>\n<li>Responsible use of GenAI<\/li>\n<\/ul>\n<p><span><\/span><\/p>\n<p>[\/et_pb_text][\/et_pb_column][et_pb_column type=&#8221;1_3&#8243; _builder_version=&#8221;4.21.0&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_tabs active_tab_background_color=&#8221;#1c1b3a&#8221; inactive_tab_background_color=&#8221;#0b91c6&#8243; _builder_version=&#8221;4.21.0&#8243; _module_preset=&#8221;default&#8221; tab_text_color=&#8221;#FFFFFF&#8221; background_color=&#8221;rgba(0,0,0,0)&#8221; custom_padding=&#8221;||0px||false|false&#8221; border_radii=&#8221;on|11px|11px|11px|11px&#8221; global_colors_info=&#8221;{%22gcid-f1f9244b-c8ab-43e1-95c3-c0bdf69ac7b5%22:%91%22active_tab_background_color%22,%22active_tab_background_color%22%93}&#8221;][et_pb_tab title=&#8221;Program Committee&#8221; _builder_version=&#8221;4.21.0&#8243; _module_preset=&#8221;default&#8221; body_line_height=&#8221;1.4em&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<ul>\n<li><span style=\"font-weight: 400;\">Anupam Purwar, Senior Research Scientist, Amazon<\/span><\/li>\n<li><span style=\"font-weight: 400;\">Danish Pruthi, Indian Institute of Science (IISc), Bangalore<\/span><\/li>\n<li>Sunayana Sitaram, <span>Principal Researcher<\/span>, Microsoft Research\u00a0<\/li>\n<\/ul>\n<p>[\/et_pb_tab][\/et_pb_tabs][\/et_pb_column][\/et_pb_row][et_pb_row _builder_version=&#8221;4.21.0&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_column 
type=&#8221;4_4&#8243; _builder_version=&#8221;4.21.0&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_text _builder_version=&#8221;4.21.0&#8243; _module_preset=&#8221;default&#8221; hover_enabled=&#8221;0&#8243; locked=&#8221;off&#8221; global_colors_info=&#8221;{}&#8221; sticky_enabled=&#8221;0&#8243;]<\/p>\n<h2><b>Schedule<\/b><\/h2>\n<div class=\"WordSection1\">\n<p>&nbsp;<\/p>\n<table class=\"MsoNormalTable\" border=\"0\" cellspacing=\"0\" cellpadding=\"0\" width=\"100%\" style=\"height: 340px; width: 100%; border-collapse: collapse;\">\n<tbody>\n<tr style=\"height: 15.75pt;\">\n<td width=\"17%\" valign=\"bottom\" style=\"width: 17.52%; border: 1pt solid black; background: #6d9eeb; padding: 1.5pt 2.25pt; height: 15px;\">\n<p class=\"MsoNormal\" align=\"center\" style=\"margin-bottom: 0cm; text-align: center; line-height: normal;\"><b><span style=\"font-size: 10.0pt; font-family: 'Arial',sans-serif; color: #3b3838;\">Session<\/span><\/b><\/p>\n<\/td>\n<td width=\"20%\" valign=\"bottom\" style=\"width: 38.9553%; border-top: 1pt solid black; border-right: 1pt solid black; border-bottom: 1pt solid black; border-image: initial; border-left: none; background: #6d9eeb; padding: 1.5pt 2.25pt; height: 15px;\">\n<p class=\"MsoNormal\" style=\"margin-bottom: 0cm; line-height: normal;\"><b><span style=\"font-size: 10.0pt; font-family: 'Arial',sans-serif; color: #3b3838;\">Title<\/span><\/b><\/p>\n<\/td>\n<td width=\"62%\" valign=\"bottom\" style=\"width: 24.1959%; border-top: 1pt solid black; border-right: 1pt solid black; border-bottom: 1pt solid black; border-image: initial; border-left: none; background: #6d9eeb; padding: 1.5pt 2.25pt; height: 15px;\">\n<p class=\"MsoNormal\" style=\"margin-bottom: 0cm; line-height: normal;\"><b><span style=\"font-size: 10.0pt; font-family: 'Arial',sans-serif; color: #3b3838;\">Author\/Speaker<\/span><\/b><\/p>\n<\/td>\n<\/tr>\n<tr style=\"height: 15.75pt;\">\n<td width=\"17%\" style=\"width: 
17.52%; border-right: 1pt solid black; border-bottom: 1pt solid black; border-left: 1pt solid black; border-image: initial; border-top: none; background: #eaffd9; padding: 1.5pt 2.25pt; height: 15px;\">\n<p class=\"MsoNormal\" align=\"center\" style=\"margin-bottom: 0cm; text-align: center; line-height: normal;\"><b><span style=\"font-size: 10.0pt; font-family: 'Arial',sans-serif; color: #3b3838;\">0930-1015<br \/><\/span><\/b><i><\/i><\/p>\n<\/td>\n<td width=\"40%\" style=\"width: 38.9553%; border-top: none; border-left: none; border-bottom: 1pt solid black; border-right: 1pt solid black; background: #eaffd9; padding: 1.5pt 2.25pt; height: 15px;\">\n<p class=\"MsoNormal\" style=\"margin-bottom: 0cm; line-height: normal;\"><span style=\"font-size: 10.0pt; font-family: 'Arial',sans-serif; color: #3b3838;\"><span>Keynote: My Experiments with Large Language Models\u00a0<\/span><\/span><\/p>\n<\/td>\n<td width=\"42%\" style=\"width: 24.1959%; border-top: none; border-left: none; border-bottom: 1pt solid black; border-right: 1pt solid black; background: #eaffd9; padding: 1.5pt 2.25pt; height: 15px;\">Prof. 
Mausam, IIT Delhi<\/td>\n<\/tr>\n<tr>\n<td style=\"width: 17.52%; border-right: 1pt solid black; border-bottom: 1pt solid black; border-left: 1pt solid black; border-image: initial; border-top: none; background: #eaffd9; padding: 1.5pt 2.25pt;\" rowspan=\"5\"><b><span style=\"font-size: 10.0pt; font-family: 'Arial',sans-serif; color: #3b3838;\">\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 1015-1130<\/span><\/b><\/td>\n<td style=\"border-top: none; border-left: none; border-bottom: 1pt solid black; border-right: 1pt solid black; background: #eaffd9; padding: 1.5pt 2.25pt; width: 38.9553%;\">\n<p class=\"MsoNormal\" style=\"margin-bottom: 0cm; line-height: normal;\"><span style=\"font-size: 10.0pt; font-family: 'Arial',sans-serif; color: #3b3838;\"><span>Towards reducing hallucination in extracting information from financial reports using Large Language Models<\/span><\/span><\/p>\n<\/td>\n<td style=\"border-top: none; border-left: none; border-bottom: 1pt solid black; border-right: 1pt solid black; background: #eaffd9; padding: 1.5pt 2.25pt; width: 24.1959%;\">\n<p class=\"MsoNormal\" style=\"margin-bottom: 0cm; line-height: normal;\"><span style=\"font-size: 10.0pt; font-family: 'Arial',sans-serif; color: #3b3838;\">Bhaskarjit Sarmah (BlackRock)*; Dhagash Mehta (BlackRock, Inc.)<\/span><\/p>\n<\/td>\n<\/tr>\n<tr>\n<td style=\"border-top: none; border-left: none; border-bottom: 1pt solid black; border-right: 1pt solid black; background: #eaffd9; padding: 1.5pt 2.25pt; width: 38.9553%;\">\n<p class=\"MsoNormal\" style=\"margin-bottom: 0cm; line-height: normal;\"><span style=\"font-size: 10.0pt; font-family: 'Arial',sans-serif; color: #3b3838;\"><span>ChatGPT for Mental Health Applications: A study on biases<\/span><\/span><\/p>\n<\/td>\n<td style=\"border-top: none; border-left: none; border-bottom: 1pt solid black; border-right: 1pt solid black; background: #eaffd9; padding: 1.5pt 2.25pt; width: 24.1959%;\">\n<p 
class=\"MsoNormal\" style=\"margin-bottom: 0cm; line-height: normal;\"><span style=\"font-size: 10.0pt; font-family: 'Arial',sans-serif; color: #3b3838;\">Ritesh S Soun (Sri Venkateswara College); Aadya Nair (PepsiCo)*<\/span><\/p>\n<\/td>\n<\/tr>\n<tr>\n<td style=\"border-top: none; border-left: none; border-bottom: 1pt solid black; border-right: 1pt solid black; background: #eaffd9; padding: 1.5pt 2.25pt; width: 38.9553%;\">\n<p class=\"MsoNormal\" style=\"margin-bottom: 0cm; line-height: normal;\"><span style=\"font-size: 10.0pt; font-family: 'Arial',sans-serif; color: #3b3838;\"><span>Building a Llama2-finetuned LLM for Odia Language Utilizing Domain Knowledge Instruction Set<\/span><\/span><\/p>\n<\/td>\n<td style=\"border-top: none; border-left: none; border-bottom: 1pt solid black; border-right: 1pt solid black; background: #eaffd9; padding: 1.5pt 2.25pt; width: 24.1959%;\">\n<p class=\"MsoNormal\" style=\"margin-bottom: 0cm; line-height: normal;\"><span style=\"font-size: 10.0pt; font-family: 'Arial',sans-serif; color: #3b3838;\">Guneet S Kohli (Thapar University); shantipriya parida (Silo AI)*; Sambit Sekhar (Odia Generative AI); Samirit _ Saha (Amrita School of Engineering, Bengaluru); Nipun B Nair (Amrita School of Engineering, Bengaluru); Parul Agarwal (nstitute of Mathematics and Applications); Sonal Khosla (Odia Generative AI); Kusum Lata (NIT hamirpur); Debasish Dhal (NISER Bhubaneswar)<\/span><\/p>\n<\/td>\n<\/tr>\n<tr>\n<td style=\"border-top: none; border-left: none; border-bottom: 1pt solid black; border-right: 1pt solid black; background: #eaffd9; padding: 1.5pt 2.25pt; width: 38.9553%;\">\n<p class=\"MsoNormal\" style=\"margin-bottom: 0cm; line-height: normal;\"><span style=\"font-size: 10.0pt; font-family: 'Arial',sans-serif; color: #3b3838;\"><span>Observations on LLMs for Telecom Domain: Capabilities and Limitations<\/span><\/span><\/p>\n<\/td>\n<td style=\"border-top: none; border-left: none; border-bottom: 1pt solid black; border-right: 
1pt solid black; background: #eaffd9; padding: 1.5pt 2.25pt; width: 24.1959%;\">\n<p class=\"MsoNormal\" style=\"margin-bottom: 0cm; line-height: normal;\"><span style=\"font-size: 10.0pt; font-family: 'Arial',sans-serif; color: #3b3838;\">Sumit Soman (Ericsson)*; Ranjani H. G. (Ericsson)<\/span><\/p>\n<\/td>\n<\/tr>\n<tr>\n<td style=\"border-top: none; border-left: none; border-bottom: 1pt solid black; border-right: 1pt solid black; background: #eaffd9; padding: 1.5pt 2.25pt; width: 38.9553%;\">\n<p class=\"MsoNormal\" style=\"margin-bottom: 0cm; line-height: normal;\"><span style=\"font-size: 10.0pt; font-family: 'Arial',sans-serif; color: #3b3838;\"><span>ScripTONES: Sentiment-Conditioned Music Generation for Movie Scripts<\/span><\/span><\/p>\n<\/td>\n<td style=\"border-top: none; border-left: none; border-bottom: 1pt solid black; border-right: 1pt solid black; background: #eaffd9; padding: 1.5pt 2.25pt; width: 24.1959%;\">\n<p class=\"MsoNormal\" style=\"margin-bottom: 0cm; line-height: normal;\"><span style=\"font-size: 10.0pt; font-family: 'Arial',sans-serif; color: #3b3838;\">Vishruth Veerendranath (PES University)*; Vibha Masti (Carnegie Mellon University); Utkarsh Gupta (PES University); Hrishit Chaudhuri (PES University); Gowri Srinivasa (PES University)<\/span><\/p>\n<\/td>\n<\/tr>\n<tr style=\"height: 15.75pt;\">\n<td width=\"100%\" colspan=\"3\" style=\"border-right: 1pt solid black; border-bottom: 1pt solid black; border-left: 1pt solid black; border-image: initial; border-top: none; background: #cccccc; padding: 1.5pt 2.25pt; height: 15px; width: 80.6712%;\">\n<p class=\"MsoNormal\" align=\"center\" style=\"margin-bottom: 0cm; text-align: center; line-height: normal;\"><b><span style=\"font-size: 10.0pt; font-family: 'Arial',sans-serif; color: #3b3838;\">Tea\/Coffee Break (1130-1200)<\/span><\/b><\/p>\n<\/td>\n<\/tr>\n<tr style=\"height: 15.75pt;\">\n<td width=\"17%\" style=\"width: 17.52%; border-right: 1pt solid black; border-bottom: 1pt solid 
black; border-left: 1pt solid black; border-image: initial; border-top: none; background: #eaffd9; padding: 1.5pt 2.25pt; height: 45px;\" rowspan=\"3\">\n<p class=\"MsoNormal\" align=\"center\" style=\"margin-bottom: 0cm; text-align: center; line-height: normal;\"><b><span style=\"font-size: 10.0pt; font-family: 'Arial',sans-serif; color: #3b3838;\"><\/span><\/b><\/p>\n<p>&nbsp;<\/p>\n<p class=\"MsoNormal\" align=\"center\" style=\"margin-bottom: 0cm; text-align: center; line-height: normal;\"><b><span style=\"font-size: 10.0pt; font-family: 'Arial',sans-serif; color: #3b3838;\"><span>1200-1245<\/span><\/span><\/b><\/p>\n<p>&nbsp;<\/p>\n<p class=\"MsoNormal\" align=\"center\" style=\"margin-bottom: 0cm; text-align: center; line-height: normal;\"><b><span style=\"font-size: 10.0pt; font-family: 'Arial',sans-serif; color: #3b3838;\">\u00a0<\/span><\/b><\/p>\n<p class=\"MsoNormal\" align=\"center\" style=\"margin-bottom: 0cm; text-align: center; line-height: normal;\"><b><span style=\"font-size: 10.0pt; font-family: 'Arial',sans-serif; color: #3b3838;\">\u00a0<\/span><\/b><\/p>\n<\/td>\n<td style=\"width: 38.9553%; border-top: none; border-left: none; border-bottom: 1pt solid black; border-right: 1pt solid black; background: #eaffd9; padding: 1.5pt 2.25pt;\">\n<p class=\"MsoNormal\" style=\"margin-bottom: 0cm; line-height: normal;\"><span style=\"font-size: 10.0pt; font-family: 'Arial',sans-serif; color: #3b3838;\">Are you a Foodie looking for New Cookies to try out? 
Better not ask an LLM<\/span><\/p>\n<\/td>\n<td style=\"width: 24.1959%; border-top: none; border-left: none; border-bottom: 1pt solid black; border-right: 1pt solid black; background: #eaffd9; padding: 1.5pt 2.25pt;\">\n<p class=\"MsoNormal\" style=\"margin-bottom: 0cm; line-height: normal;\"><span style=\"font-size: 10.0pt; font-family: 'Arial',sans-serif; color: #3b3838;\"><span>Binay Gupta (Walmart Global Tech India); Saptarshi Misra (Walmart Global Tech)*; Anirban Chatterjee (Walmart Global Tech); Kunal Banerjee (Walmart Global Tech)<\/span><\/span><\/p>\n<\/td>\n<\/tr>\n<tr>\n<td style=\"width: 38.9553%; border-top: none; border-left: none; border-bottom: 1pt solid black; border-right: 1pt solid black; background: #eaffd9; padding: 1.5pt 2.25pt;\">\n<p class=\"MsoNormal\" style=\"margin-bottom: 0cm; line-height: normal;\"><span style=\"font-size: 10.0pt; font-family: 'Arial',sans-serif; color: #3b3838;\">CoReGAN: Contrastive Regularized Generative Adversarial Network for Guided Depth Map Super Resolution<\/span><\/p>\n<\/td>\n<td style=\"width: 24.1959%; border-top: none; border-left: none; border-bottom: 1pt solid black; border-right: 1pt solid black; background: #eaffd9; padding: 1.5pt 2.25pt;\">\n<p class=\"MsoNormal\" style=\"margin-bottom: 0cm; line-height: normal;\"><span style=\"font-size: 10.0pt; font-family: 'Arial',sans-serif; color: #3b3838;\"><span>Aditya Kasliwal (Manipal Institute of Technology)*; Ishaan Gakhar (Manipal Institute of Technology, Manipal Academy of Higher Education); Aryan Bhavin Kamani (Manipal Institute of Technology)<\/span><\/span><\/p>\n<\/td>\n<\/tr>\n<tr>\n<td style=\"width: 38.9553%; border-top: none; border-left: none; border-bottom: 1pt solid black; border-right: 1pt solid black; background: #eaffd9; padding: 1.5pt 2.25pt;\">\n<p class=\"MsoNormal\" style=\"margin-bottom: 0cm; line-height: normal;\"><span style=\"font-size: 10.0pt; font-family: 'Arial',sans-serif; color: #3b3838;\">Applications of Generative AI in 
Fintech<\/span><\/p>\n<\/td>\n<td style=\"width: 24.1959%; border-top: none; border-left: none; border-bottom: 1pt solid black; border-right: 1pt solid black; background: #eaffd9; padding: 1.5pt 2.25pt;\">\n<p class=\"MsoNormal\" style=\"margin-bottom: 0cm; line-height: normal;\"><span style=\"font-size: 10.0pt; font-family: 'Arial',sans-serif; color: #3b3838;\"><span>Kalpesh Barde (Salesforce.com)*; Parth Kulkarni (Adobe)<\/span><\/span><\/p>\n<\/td>\n<\/tr>\n<tr style=\"height: 15.75pt;\">\n<td width=\"17%\" style=\"width: 17.52%; border-right: 1pt solid black; border-bottom: 1pt solid black; border-left: 1pt solid black; border-image: initial; border-top: none; background: #eaffd9; padding: 1.5pt 2.25pt; height: 45px;\">\n<p class=\"MsoNormal\" align=\"center\" style=\"margin-bottom: 0cm; text-align: center; line-height: normal;\"><b><span style=\"font-size: 10.0pt; font-family: 'Arial',sans-serif; color: #3b3838;\"><br \/><span>1245-1330<\/span><br \/><\/span><\/b><\/p>\n<\/td>\n<td width=\"40%\" style=\"width: 38.9553%; border-top: none; border-left: none; border-bottom: 1pt solid black; border-right: 1pt solid black; background: #eaffd9; padding: 1.5pt 2.25pt; height: 45px;\">\n<p class=\"MsoNormal\" style=\"margin-bottom: 0cm; line-height: normal;\"><span style=\"font-size: 10.0pt; font-family: 'Arial',sans-serif; color: #3b3838;\">Keynote: <span>Towards transforming the landscape of Indian language technology<\/span><\/span><\/p>\n<\/td>\n<td width=\"42%\" style=\"width: 24.1959%; border-top: none; border-left: none; border-bottom: 1pt solid black; border-right: 1pt solid black; background: #eaffd9; padding: 1.5pt 2.25pt; height: 45px;\">\n<p class=\"MsoNormal\" style=\"margin-bottom: 0cm; line-height: normal;\"><span style=\"font-size: 10.0pt; font-family: 'Arial',sans-serif; color: #3b3838;\"><span>Prof. 
Mitesh Khapra, IIT Madras<br \/><\/span><\/span><\/p>\n<\/td>\n<\/tr>\n<tr style=\"height: 15.75pt;\">\n<td width=\"100%\" colspan=\"3\" valign=\"bottom\" style=\"border-right: 1pt solid black; border-bottom: 1pt solid black; border-left: 1pt solid black; border-image: initial; border-top: none; background: #cccccc; padding: 1.5pt 2.25pt; height: 15px; width: 80.6712%;\">\n<p class=\"MsoNormal\" align=\"center\" style=\"margin-bottom: 0cm; text-align: center; line-height: normal;\"><b><span style=\"font-size: 10.0pt; font-family: 'Arial',sans-serif; color: #3b3838;\">Lunch (1330-1430)<\/span><\/b><\/p>\n<\/td>\n<\/tr>\n<tr style=\"height: 15.75pt;\">\n<td width=\"17%\" style=\"width: 17.52%; border-right: 1pt solid black; border-bottom: 1pt solid black; border-left: 1pt solid black; border-image: initial; border-top: none; background: #eaffd9; padding: 1.5pt 2.25pt; height: 45px;\">\n<p class=\"MsoNormal\" align=\"center\" style=\"margin-bottom: 0cm; text-align: center; line-height: normal;\"><b><span style=\"font-size: 10.0pt; font-family: 'Arial',sans-serif; color: #3b3838;\"><br \/><span>1430-1600<\/span><br \/><\/span><\/b><\/p>\n<\/td>\n<td width=\"40%\" style=\"width: 38.9553%; border-top: none; border-left: none; border-bottom: 1pt solid black; border-right: 1pt solid black; background: #eaffd9; padding: 1.5pt 2.25pt; height: 45px;\">\n<p class=\"MsoNormal\" style=\"margin-bottom: 0cm; line-height: normal;\"><span style=\"font-size: 10.0pt; font-family: 'Arial',sans-serif; color: #3b3838;\">\u00a0<span>Tutorial: Enhancing LLM inferencing with RAG and fine-tuned LLMs<\/span><\/span><\/p>\n<\/td>\n<td width=\"42%\" style=\"width: 24.1959%; border-top: none; border-left: none; border-bottom: 1pt solid black; border-right: 1pt solid black; background: #eaffd9; padding: 1.5pt 2.25pt; height: 45px;\">\n<p class=\"MsoNormal\" style=\"margin-bottom: 0cm; line-height: normal;\"><span style=\"font-size: 10.0pt; font-family: 'Arial',sans-serif; color: 
<span>Abhinav">
#3b3838;\"><span>Abhinav Kimothi, Head of AI, Yarnit<\/span><\/span><\/p>\n<\/td>\n<\/tr>\n<tr style=\"height: 15.75pt;\">\n<td width=\"100%\" colspan=\"3\" valign=\"bottom\" style=\"border-right: 1pt solid black; border-bottom: 1pt solid black; border-left: 1pt solid black; border-image: initial; border-top: none; background: #cccccc; padding: 1.5pt 2.25pt; height: 15px; width: 80.6712%;\">\n<p class=\"MsoNormal\" align=\"center\" style=\"margin-bottom: 0cm; text-align: center; line-height: normal;\"><b><span style=\"font-size: 10.0pt; font-family: 'Arial',sans-serif; color: #3b3838;\">Tea\/Coffee Break (1600-1630)<\/span><\/b><\/p>\n<\/td>\n<\/tr>\n<tr style=\"height: 52px;\">\n<td style=\"width: 17.52%; border-right: 1pt solid black; border-bottom: 1pt solid black; border-left: 1pt solid black; border-image: initial; border-top: none; background: #eaffd9; padding: 1.5pt 2.25pt; height: 52px;\">\n<p class=\"MsoNormal\" align=\"center\" style=\"margin-bottom: 0cm; text-align: center; line-height: normal;\"><b><span style=\"font-size: 10.0pt; font-family: 'Arial',sans-serif; color: #3b3838;\">1630-1730<\/span><\/b><\/p>\n<\/td>\n<td style=\"width: 38.9553%; border-top: none; border-left: none; border-bottom: 1pt solid black; border-right: 1pt solid black; background: #eaffd9; padding: 1.5pt 2.25pt; height: 52px;\">\n<p class=\"MsoNormal\" style=\"margin-bottom: 0cm; line-height: normal;\"><span style=\"font-size: 10.0pt; font-family: 'Arial',sans-serif; color: #3b3838;\">Panel discussion: Responsible Thinking for Generative AI<\/span><\/p>\n<\/td>\n<td style=\"width: 24.1959%; border-top: none; border-left: none; border-bottom: 1pt solid black; border-right: 1pt solid black; background: #eaffd9; padding: 1.5pt 2.25pt; height: 52px;\">\n<p class=\"MsoNormal\" style=\"margin-bottom: 0cm; line-height: normal;\"><span style=\"font-size: 10.0pt; font-family: 'Arial',sans-serif; color: 
#3b3838;\"><span>\u00a0<\/span><\/span><\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<\/div>\n<p><b><\/b><\/p>\n<p>&nbsp;<\/p>\n<p>[\/et_pb_text][\/et_pb_column][\/et_pb_row][et_pb_row _builder_version=&#8221;4.21.0&#8243; _module_preset=&#8221;default&#8221; custom_padding=&#8221;4px|||||&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_column type=&#8221;4_4&#8243; _builder_version=&#8221;4.21.0&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_text content_tablet=&#8221;<\/p>\n<h2><b>Keynote Talks<\/b><\/h2>\n<p data-ogsb=%22white%22><span data-ogsc=%22rgb(34, 34, 34)%22><strong>Title:<\/strong> My Experiments with Large Language Models<\/span><\/p>\n<p data-ogsb=%22white%22><span data-ogsc=%22rgb(34, 34, 34)%22><strong>Speaker: <a href=%22https:\/\/www.cse.iitd.ac.in\/~mausam\/%22><span style=%22font-weight: 400;%22>Prof. Mausam, IITD.<\/span><\/a><\/strong><\/span><span data-ogsc=%22rgb(34, 34, 34)%22><strong><span style=%22font-weight: 400;%22><\/span><\/strong><\/span><\/p>\n<p data-ogsb=%22white%22><span data-ogsc=%22rgb(34, 34, 34)%22><strong><span style=%22font-weight: 400;%22><img src=%22https:\/\/www.aimlsystems.org\/2023\/wp-content\/uploads\/2023\/10\/mausam-head_cropped-300x300.jpg%22 width=%22300%22 height=%22300%22 alt=%22%22 class=%22wp-image-4051 alignnone size-medium%22 \/><\/span><\/strong><\/span><\/p>\n<p data-ogsb=%22white%22><span data-ogsc=%22rgb(34, 34, 34)%22><strong>Abstract:<\/strong> The development of large language models, leading up to OpenAI\u2019s GPT4 has caused another AI revolution. These models are being envisaged as foundation models \u2013 i.e., they are a strong starting point for all aspects of AI, including language, knowledge, reasoning and decision making. However, the strongest models are only available through an API, so the standard fine-tuning paradigm is not applicable to them. 
In this talk, I describe our initial experiments that assess the extent to which the current best LLMs hold promise to be foundation models. I also explore supervised settings, and find that workflows that can use LLMs along with trained models obtain best performance. Finally, I argue that workflows which include LLMs as components will be quite useful, necessitating optimization approaches for obtaining strong cost-quality tradeoffs.<\/span><\/p>\n<p><span style=%22font-weight: 400;%22><strong>Title:<span> <\/span><\/strong><span>Towards transforming the landscape of Indian language technology<\/span><strong><\/strong><br aria-hidden=%22true%22 \/><\/span><\/p>\n<p><span style=%22font-weight: 400;%22><strong>Speaker:<\/strong> <a href=%22https:\/\/www.cse.iitm.ac.in\/~miteshk\/%22>Prof. <span>Mitesh Khapra, IITM<\/span><\/a><\/span><\/p>\n<p><span style=%22font-weight: 400;%22><span><img src=%22https:\/\/www.aimlsystems.org\/2023\/wp-content\/uploads\/2023\/10\/mitesh-300x300.jpg%22 width=%22300%22 height=%22300%22 alt=%22%22 class=%22wp-image-4052 alignnone size-medium%22 \/><\/span><\/span><\/p>\n<p><strong>Abstract:<\/strong><span> In this talk, I will reflect on our journey towards transforming the landscape of Indian language technology. I will delve into our engineering-heavy approach in addressing the initial scarcity of data for Indian languages, while gradually establishing the necessary human resources to gather high-quality data on a larger scale through Bhashini. The objective is to share our insights into developing high quality open-source technology for Indian languages. This involves curating extensive data from the internet, constructing multilingual models for transfer learning, and crafting high-quality datasets for fine-tuning and evaluation. 
I will then transition into how our experiences can benefit the broader AI community, particularly as India aspires to create Large Language Models (LLMs) for Indic languages.<\/span><br aria-hidden=%22true%22 \/><br aria-hidden=%22true%22 \/><strong>Bio<\/strong><span>: Mitesh M. Khapra is an Associate Professor in the Department of Computer Science and Engineering at IIT Madras. He heads the AI4Bharat Research Lab at IIT Madras which focuses on building datasets, tools, models and applications for Indian languages. His research work has been published in several top conferences and journals including TACL, ACL, NeurIPS, TALLIP, EMNLP, EACL, AAAI, etc. He has also served as Area Chair or Senior PC member in top conferences such as ICLR and AAAI. Prior to IIT Madras, he was a Researcher at IBM Research India for four and a half years, where he worked on several interesting problems in the areas of Statistical Machine Translation, Cross Language Learning, Multimodal Learning, Argument Mining and Deep Learning. Prior to IBM, he completed his PhD and M.Tech from IIT Bombay in Jan 2012 and July 2008 respectively. His PhD thesis dealt with the important problem of reusing resources for multilingual computation. During his PhD he was a recipient of the IBM PhD Fellowship (2011) and the Microsoft Rising Star Award (2011). He is also a recipient of the Google Faculty Research Award (2018), the IITM Young Faculty Recognition Award (2019), the Prof. B. 
Yegnanarayana Award for Excellence in Research and Teaching (2020) and the Srimathi Marti Annapurna Gurunath Award for Excellence in Teaching (2022).<\/span><\/p>\n<p>\u00a0<\/p>\n<p>\u00a0<\/p>\n<p>&#8221; content_phone=&#8221;<\/p>\n<h2><b>Keynote Talks<\/b><\/h2>\n<p data-ogsb=%22white%22><span data-ogsc=%22rgb(34, 34, 34)%22><strong>Title:<\/strong> My Experiments with Large Language Models<\/span><\/p>\n<p data-ogsb=%22white%22><span data-ogsc=%22rgb(34, 34, 34)%22><strong>Speaker: <a href=%22https:\/\/www.cse.iitd.ac.in\/~mausam\/%22><span style=%22font-weight: 400;%22>Prof. Mausam, IITD.<\/span><\/a><\/strong><\/span><span data-ogsc=%22rgb(34, 34, 34)%22><strong><span style=%22font-weight: 400;%22><\/span><\/strong><\/span><\/p>\n<p data-ogsb=%22white%22><span data-ogsc=%22rgb(34, 34, 34)%22><strong><span style=%22font-weight: 400;%22><img src=%22https:\/\/www.aimlsystems.org\/2023\/wp-content\/uploads\/2023\/10\/mausam-head_cropped-300x300.jpg%22 width=%22300%22 height=%22300%22 alt=%22%22 class=%22wp-image-4051 alignnone size-medium%22 \/><\/span><\/strong><\/span><\/p>\n<p data-ogsb=%22white%22><span data-ogsc=%22rgb(34, 34, 34)%22><strong>Abstract:<\/strong> The development of large language models, leading up to OpenAI\u2019s GPT4 has caused another AI revolution. These models are being envisaged as foundation models \u2013 i.e., they are a strong starting point for all aspects of AI, including language, knowledge, reasoning and decision making. However, the strongest models are only available through an API, so the standard fine-tuning paradigm is not applicable to them. In this talk, I describe our initial experiments that assess the extent to which the current best LLMs hold promise to be foundation models. I also explore supervised settings, and find that workflows that can use LLMs along with trained models obtain best performance. 
Finally, I argue that workflows which include LLMs as components will be quite useful, necessitating optimization approaches for obtaining strong cost-quality tradeoffs.<\/span><\/p>\n<p><span style=%22font-weight: 400;%22><strong>Title:<span> <\/span><\/strong><span>Towards transforming the landscape of Indian language technology<\/span><strong><\/strong><br aria-hidden=%22true%22 \/><\/span><\/p>\n<p><span style=%22font-weight: 400;%22><strong>Speaker:<\/strong> <a href=%22https:\/\/www.cse.iitm.ac.in\/~miteshk\/%22>Prof. <span>Mitesh Khapra, IITM<\/span><\/a><\/span><\/p>\n<p><span style=%22font-weight: 400;%22><span><img src=%22https:\/\/www.aimlsystems.org\/2023\/wp-content\/uploads\/2023\/10\/mitesh-300x300.jpg%22 width=%22300%22 height=%22300%22 alt=%22%22 class=%22wp-image-4052 alignnone size-medium%22 \/><\/span><\/span><\/p>\n<p><strong>Abstract:<\/strong><span> In this talk, I will reflect on our journey towards transforming the landscape of Indian language technology. I will delve into our engineering-heavy approach in addressing the initial scarcity of data for Indian languages, while gradually establishing the necessary human resources to gather high-quality data on a larger scale through Bhashini. The objective is to share our insights into developing high quality open-source technology for Indian languages. This involves curating extensive data from the internet, constructing multilingual models for transfer learning, and crafting high-quality datasets for fine-tuning and evaluation. I will then transition into how our experiences can benefit the broader AI community, particularly as India aspires to create Large Language Models (LLMs) for Indic languages.<\/span><br aria-hidden=%22true%22 \/><br aria-hidden=%22true%22 \/><strong>Bio<\/strong><span>: Mitesh M. Khapra is an Associate Professor in the Department of Computer Science and Engineering at IIT Madras. 
He heads the AI4Bharat Research Lab at IIT Madras which focuses on building datasets, tools, models and applications for Indian languages. His research work has been published in several top conferences and journals including TACL, ACL, NeurIPS, TALLIP, EMNLP, EACL, AAAI, etc. He has also served as Area Chair or Senior PC member in top conferences such as ICLR and AAAI. Prior to IIT Madras, he was a Researcher at IBM Research India for four and a half years, where he worked on several interesting problems in the areas of Statistical Machine Translation, Cross Language Learning, Multimodal Learning, Argument Mining and Deep Learning. Prior to IBM, he completed his PhD and M.Tech from IIT Bombay in Jan 2012 and July 2008 respectively. His PhD thesis dealt with the important problem of reusing resources for multilingual computation. During his PhD he was a recipient of the IBM PhD Fellowship (2011) and the Microsoft Rising Star Award (2011). He is also a recipient of the Google Faculty Research Award (2018), the IITM Young Faculty Recognition Award (2019), the Prof. B. Yegnanarayana Award for Excellence in Research and Teaching (2020) and the Srimathi Marti Annapurna Gurunath Award for Excellence in Teaching (2022).<\/span><\/p>\n<p>\u00a0<\/p>\n<p>\u00a0<\/p>\n<p>&#8221; content_last_edited=&#8221;on|desktop&#8221; _builder_version=&#8221;4.21.0&#8243; _module_preset=&#8221;default&#8221; locked=&#8221;off&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<h2><b>Keynote Talks<\/b><\/h2>\n<p data-ogsb=\"white\"><span data-ogsc=\"rgb(34, 34, 34)\"><strong>Title:<\/strong> My Experiments with Large Language Models<\/span><\/p>\n<p data-ogsb=\"white\"><span data-ogsc=\"rgb(34, 34, 34)\"><strong>Speaker: <a href=\"https:\/\/www.cse.iitd.ac.in\/~mausam\/\" target=\"_blank\" rel=\"noopener\"><span style=\"font-weight: 400;\">Prof. 
Mausam, IITD.<\/span><\/a><\/strong><\/span><span data-ogsc=\"rgb(34, 34, 34)\"><strong><span style=\"font-weight: 400;\"><\/span><\/strong><\/span><\/p>\n<p data-ogsb=\"white\"><span data-ogsc=\"rgb(34, 34, 34)\"><strong><span style=\"font-weight: 400;\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/www.aimlsystems.org\/2023\/wp-content\/uploads\/2023\/10\/mausam-head_cropped-300x300.jpg\" width=\"300\" height=\"300\" alt=\"\" class=\"wp-image-4051 alignnone size-medium\" \/><\/span><\/strong><\/span><\/p>\n<p data-ogsb=\"white\"><span data-ogsc=\"rgb(34, 34, 34)\"><strong>Abstract:<\/strong> The development of large language models, leading up to OpenAI\u2019s GPT-4, has caused another AI revolution. These models are being envisaged as foundation models \u2013 i.e., they are a strong starting point for all aspects of AI, including language, knowledge, reasoning and decision making. However, the strongest models are available only through an API, so the standard fine-tuning paradigm is not applicable to them. In this talk, I describe our initial experiments that assess the extent to which the current best LLMs hold promise as foundation models. I also explore supervised settings, and find that workflows that use LLMs along with trained models obtain the best performance. Finally, I argue that workflows which include LLMs as components will be quite useful, necessitating optimization approaches for obtaining strong cost-quality tradeoffs.<\/span><\/p>\n<p><span style=\"font-weight: 400;\"><strong>Title:<span>\u00a0<\/span><\/strong><span>Towards transforming the landscape of Indian language technology<\/span><strong><\/strong><br aria-hidden=\"true\" \/><\/span><\/p>\n<p><span style=\"font-weight: 400;\"><strong>Speaker:<\/strong> <a href=\"https:\/\/www.cse.iitm.ac.in\/~miteshk\/\" target=\"_blank\" rel=\"noopener\">Prof. 
<span>Mitesh Khapra, IITM<\/span><\/a><\/span><\/p>\n<p><span style=\"font-weight: 400;\"><span><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/www.aimlsystems.org\/2023\/wp-content\/uploads\/2023\/10\/mitesh-300x300.jpg\" width=\"300\" height=\"300\" alt=\"\" class=\"wp-image-4052 alignnone size-medium\" \/><\/span><\/span><\/p>\n<p><strong>Abstract:<\/strong><span>\u00a0In this talk, I will reflect on our journey towards transforming the landscape of Indian language technology. I will delve into our engineering-heavy approach to addressing the initial scarcity of data for Indian languages, while gradually establishing the human resources needed to gather high-quality data on a larger scale through Bhashini. The objective is to share our insights into developing high-quality open-source technology for Indian languages. This involves curating extensive data from the internet, constructing multilingual models for transfer learning, and crafting high-quality datasets for fine-tuning and evaluation. I will then turn to how our experiences can benefit the broader AI community, particularly as India aspires to create Large Language Models (LLMs) for Indic languages.<\/span><br aria-hidden=\"true\" \/><br aria-hidden=\"true\" \/><strong>Bio<\/strong><span>: Mitesh M. Khapra is an Associate Professor in the Department of Computer Science and Engineering at IIT Madras. He heads the AI4Bharat Research Lab at IIT Madras, which focuses on building datasets, tools, models and applications for Indian languages. His research work has been published in several top conferences and journals, including TACL, ACL, NeurIPS, TALLIP, EMNLP, EACL and AAAI. He has also served as Area Chair or Senior PC member in top conferences such as ICLR and AAAI. 
Prior to IIT Madras, he was a Researcher at IBM Research India for four and a half years, where he worked on several interesting problems in the areas of Statistical Machine Translation, Cross Language Learning, Multimodal Learning, Argument Mining and Deep Learning. Prior to IBM, he completed his PhD and M.Tech at IIT Bombay, in January 2012 and July 2008 respectively. His PhD thesis dealt with the important problem of reusing resources for multilingual computation. During his PhD he was a recipient of the IBM PhD Fellowship (2011) and the Microsoft Rising Star Award (2011). He is also a recipient of the Google Faculty Research Award (2018), the IITM Young Faculty Recognition Award (2019), the Prof. B. Yegnanarayana Award for Excellence in Research and Teaching (2020) and the Srimathi Marti Annapurna Gurunath Award for Excellence in Teaching (2022).<\/span><\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p>[\/et_pb_text][\/et_pb_column][\/et_pb_row][et_pb_row _builder_version=&#8221;4.21.0&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_column type=&#8221;4_4&#8243; _builder_version=&#8221;4.21.0&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_text _builder_version=&#8221;4.21.0&#8243; _module_preset=&#8221;default&#8221; locked=&#8221;off&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<p><strong>Tutorial: Enhancing LLM inferencing with RAG and fine-tuned LLMs<\/strong><span><\/span><\/p>\n<p>Presenter: Abhinav Kimothi, Yarnit<\/p>\n<p>Abstract: Today, Large Language Models like GPT-4, Llama 2 and Claude are easily accessible to those who seek them, and developers have taken it upon themselves to explore the possibilities these models offer. While large businesses, governments and organizations are still in the process of separating the wheat from the chaff, one point that everyone concedes is that LLMs cannot be ignored. 
Those who apply the power of LLMs to the right use cases will be on the winning side. In this workshop, we will discuss the process of building applications that leverage LLMs. Beginning with an introduction to the LLM landscape, we will focus on the project lifecycle of an LLM-based application. We will then get hands-on experience invoking proprietary and open-source LLMs and performing in-context inference. To address the challenge of hallucination in LLM inference, the bulk of the workshop will focus on Retrieval Augmented Generation (RAG) and fine-tuning models for specific tasks. We will leverage embeddings and vector databases to constrain LLM generations to the user's context. By the end of the session, attendees will know how to put in-context learning, fine-tuning and RAG into practice. The demonstration will be done in Python and will use the OpenAI API and the LangChain framework, among others.<\/p>\n<p><b>Submission Guidelines<\/b><\/p>\n<p><span style=\"font-weight: 400;\">We invite authors to submit original and unpublished research papers (up to 4 pages, excluding references). All submissions will undergo a rigorous peer-review process by the program committee. Authors are requested to follow the ACM sigconf template (see <\/span><a href=\"https:\/\/www.overleaf.com\/gallery\/tagged\/acm-official\" target=\"_blank\" rel=\"noopener\"><span style=\"font-weight: 400;\">https:\/\/www.overleaf.com\/gallery\/tagged\/acm-official<\/span><\/a><span style=\"font-weight: 400;\">). 
All accepted papers will be published in the proceedings of AIMLSys 2023.<\/span><\/p>\n<p><strong><br \/>Submission link: <a href=\"https:\/\/cmt3.research.microsoft.com\/AIMLSystems2023\/\" target=\"_blank\" rel=\"noopener\">https:\/\/cmt3.research.microsoft.com\/AIMLSystems2023\/<\/a><br \/><\/strong><strong>(please select the Generative AI Workshop track)<\/strong><\/p>\n<p><b>Important Dates<\/b><br \/><span><\/span><\/p>\n<ul>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Paper Submission Deadline: <span style=\"text-decoration: line-through;\">12th September 2023<\/span> <span>21st September 2023<\/span><\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Notification of Acceptance: <span style=\"text-decoration: line-through;\">29th September 2023<\/span> 3rd October 2023<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Camera-Ready Deadline: 8th October 2023<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Workshop Date: 28th October 2023<\/span><\/li>\n<\/ul>\n<p><b>Workshop Organization<\/b><\/p>\n<p><span style=\"font-weight: 400;\">The workshop will feature keynote speeches, technical paper and poster sessions, tutorials and possibly panel discussions. A detailed outline of the program will be available on the website shortly. 
The workshop will also provide ample opportunities for attendees to network with leading experts and to gain hands-on experience with Generative AI through the tutorial sessions.<\/span><\/p>\n<p><b>Registration<\/b><\/p>\n<p><span style=\"font-weight: 400;\">At least one author of each accepted paper must register for the conference; if the same author has multiple accepted papers, a co-author must register for each additional paper.<\/span><\/p>\n<p><b>Workshop Venue<\/b><\/p>\n<p><span style=\"font-weight: 400;\">The Chancery Pavilion | Bengaluru, India<\/span><\/p>\n<p><b>Contact Information<\/b><\/p>\n<p><span style=\"font-weight: 400;\">For any inquiries regarding the workshop, please feel free to contact the workshop organizers.<\/span><\/p>\n<p><span><\/span><\/p>\n<p>[\/et_pb_text][\/et_pb_column][\/et_pb_row][\/et_pb_section]<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Generative AI Workshop\u00a0Recent progress in generative models have resulted in models that can produce realistic text, images and video that can potentially revolutionize the way humans work, create content and interact with machines. The workshop on Generative AI at AIMLSystems will focus on the entire life-cycle of building and deploying such Generative AI systems, including [&hellip;]<\/p>\n","protected":false},"author":5,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"_et_pb_use_builder":"on","_et_pb_old_content":"","_et_gb_content_width":"","inline_featured_image":false,"footnotes":""},"class_list":["post-1314","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/www.aimlsystems.org\/2023\/wp-json\/wp\/v2\/pages\/1314","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.aimlsystems.org\/2023\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/www.aimlsystems.org\/2023\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/www.aimlsystems.org\/2023\/wp-json\/wp\/v2\/users\/5"}],"replies":[{"embeddable":true,"href":"https:\/\/www.aimlsystems.org\/2023\/wp-json\/wp\/v2\/comments?post=1314"}],"version-history":[{"count":32,"href":"https:\/\/www.aimlsystems.org\/2023\/wp-json\/wp\/v2\/pages\/1314\/revisions"}],"predecessor-version":[{"id":4100,"href":"https:\/\/www.aimlsystems.org\/2023\/wp-json\/wp\/v2\/pages\/1314\/revisions\/4100"}],"wp:attachment":[{"href":"https:\/\/www.aimlsystems.org\/2023\/wp-json\/wp\/v2\/media?parent=1314"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}