{"id":6515,"date":"2025-06-23T16:27:18","date_gmt":"2025-06-23T10:57:18","guid":{"rendered":"https:\/\/www.aimlsystems.org\/2025\/?page_id=6515"},"modified":"2025-10-07T10:37:28","modified_gmt":"2025-10-07T05:07:28","slug":"workshop-edge-x","status":"publish","type":"page","link":"https:\/\/www.aimlsystems.org\/2026\/workshop-edge-x\/","title":{"rendered":"Workshop-EDGE-X"},"content":{"rendered":"<p>[et_pb_section fb_built=&#8221;1&#8243; admin_label=&#8221;Header&#8221; _builder_version=&#8221;4.21.0&#8243; _module_preset=&#8221;default&#8221; background_color=&#8221;gcid-1bcf785a-50e1-437b-b09f-65567babc1de&#8221; background_image=&#8221;https:\/\/www.aimlsystems.org\/2023\/wp-content\/uploads\/2023\/05\/grid-bg-2.png&#8221; background_size=&#8221;initial&#8221; background_position=&#8221;bottom_center&#8221; background_repeat=&#8221;repeat&#8221; custom_padding=&#8221;||0px|||&#8221; collapsed=&#8221;on&#8221; global_colors_info=&#8221;{%22gcid-1bcf785a-50e1-437b-b09f-65567babc1de%22:%91%22background_color%22%93}&#8221;][et_pb_row _builder_version=&#8221;4.25.1&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_column type=&#8221;4_4&#8243; _builder_version=&#8221;4.19.2&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_text _builder_version=&#8221;4.25.1&#8243; _module_preset=&#8221;7f63b212-a10a-4d30-afa2-e478a747ca88&#8243; header_2_font_size=&#8221;44px&#8221; custom_margin=&#8221;||10px||false|false&#8221; header_2_font_size_phone=&#8221;33px&#8221; custom_css_free_form=&#8221;selector h2{color:white}&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<h2><strong>EDGE-X 2025<\/strong><\/h2>\n<h3><span style=\"color: #ffffff;\">Reimagining edge intelligence with low-power high efficiency AI Systems<\/span><strong><\/strong><\/h3>\n<p><strong><\/strong><\/p>\n<p><strong><\/strong><\/p>\n<p>[\/et_pb_text][\/et_pb_column][\/et_pb_row][\/et_pb_section][et_pb_section fb_built=&#8221;1&#8243; 
admin_label=&#8221;Features&#8221; module_id=&#8221;about&#8221; _builder_version=&#8221;4.21.0&#8243; _module_preset=&#8221;default&#8221; background_color=&#8221;#dbdbdb&#8221; background_image=&#8221;https:\/\/www.aimlsystems.org\/2023\/wp-content\/uploads\/2023\/05\/rm380-10.jpg&#8221; background_blend=&#8221;overlay&#8221; custom_padding=&#8221;1.9%||||false|false&#8221; use_background_color_gradient_phone=&#8221;on&#8221; background_color_gradient_stops_phone=&#8221;#001528 0%|rgba(255, 255, 255, 0) 10%|rgba(255,255,255,0) 70%|#0f0122 100%&#8221; collapsed=&#8221;on&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_row disabled_on=&#8221;off|off|off&#8221; _builder_version=&#8221;4.21.0&#8243; _module_preset=&#8221;default&#8221; min_height=&#8221;888.4px&#8221; custom_padding=&#8221;4px|||||&#8221; locked=&#8221;off&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_column type=&#8221;4_4&#8243; _builder_version=&#8221;4.21.0&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_text _builder_version=&#8221;4.25.1&#8243; _module_preset=&#8221;default&#8221; min_height=&#8221;1px&#8221; custom_margin=&#8221;||3px|||&#8221; custom_padding=&#8221;||8px|||&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<h2 style=\"text-align: center;\">Schedule<\/h2>\n<p>[\/et_pb_text][et_pb_text _builder_version=&#8221;4.25.1&#8243; _module_preset=&#8221;default&#8221; min_height=&#8221;1px&#8221; custom_margin=&#8221;||7px|||&#8221; custom_padding=&#8221;||8px|||&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<h4 style=\"text-align: center;\"><strong>DATE: October 8, 2025<\/strong><\/h4>\n<h4 style=\"text-align: center;\"><strong>Venue: Sigma 3, Chancery Pavilion, Bangalore<\/strong><\/h4>\n<p>[\/et_pb_text][et_pb_text _builder_version=&#8221;4.25.1&#8243; _module_preset=&#8221;default&#8221; custom_padding=&#8221;||0px|||&#8221; locked=&#8221;off&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<div 
class=\"WordSection1\">\n<p>&nbsp;<\/p>\n<table class=\"MsoNormalTable\" border=\"0\" cellspacing=\"0\" cellpadding=\"0\" width=\"100%\" style=\"border-collapse: collapse; width: 100%;\">\n<tbody>\n<tr>\n<td width=\"20%\" style=\"border: 1pt solid black; background: #6d9eeb; text-align: center; padding: 6px; width: 20%;\"><b><span style=\"font-size: 10pt; font-family: 'Arial',sans-serif; color: black;\">Time<\/span><\/b><\/td>\n<td width=\"80%\" style=\"border: 1pt solid black; background: #6d9eeb; text-align: center; padding: 6px; width: 80%;\"><b><span style=\"font-size: 10pt; font-family: 'Arial',sans-serif; color: black;\">Event<\/span><\/b><\/td>\n<\/tr>\n<tr>\n<td style=\"border: 1pt solid black; background: #ffefe7; text-align: center; width: 20%;\"><strong>09:30 AM \u2013 11:00 AM<\/strong><\/td>\n<td style=\"border: 1pt solid black; background: #ffefe7; width: 80%;\"><b>Keynote Talk \u2013 Prof. Chetan Singh Thakur (IISc Bangalore)<\/b><br \/><i>\u201cRAMAN: An Edge AI Accelerator from High-Speed Imaging to Brain\u2013Computer Interfaces\u201d<\/i><\/td>\n<\/tr>\n<tr>\n<td colspan=\"2\" style=\"border: 1pt solid black; background: #cccccc; text-align: center; width: 100%;\"><b>Tea \/ Coffee Break <\/b><br \/>11:00 AM \u2013 11:30 AM<\/td>\n<\/tr>\n<tr>\n<td style=\"border: 1pt solid black; background: #ffefe7; text-align: center; width: 20%;\"><strong>11:30 AM\u2013 12:30 PM<\/strong><\/td>\n<td style=\"border: 1pt solid black; background: #ffefe7; width: 80%;\"><b>Technical Paper Presentations<\/b><br \/>\u2022 <strong>Cost-Aware Fine-Tuning: Evaluating Hyperparameters, Datasets, and PEFT Methods for Efficient LLM Adaptation<\/strong> \u2013 Aditya Chatterjee, Sankar Menon, Dr. 
Kunal Kishore Korgaonkar, Rupesh Yarlagadda<br \/>\u2022 <strong>Neuromorphic Approaches for Energy-Efficient Object Localization<\/strong> \u2013 Debarati Paul, Sayan Pradhan<br \/>\u2022 <strong>Voice-to-Insight: STM32-Based Audio Logging with Offline AI-Driven Transcription, Translation, and Speaker Profiling<\/strong> \u2013 Ashwini Shinde, Eshal Shaikh, Harshal Patil, Akshay Dhere<\/td>\n<\/tr>\n<tr>\n<td style=\"border: 1pt solid black; background: #ffefe7; text-align: center; width: 20%;\"><strong>12:30 PM \u2013 1:00 PM<\/strong><\/td>\n<td style=\"border: 1pt solid black; background: #ffefe7; width: 80%;\">\n<p><b>Tech Talk: <\/b><b style=\"font-size: 14px;\">Edge Wizard: The Magic Wand for Effortless Automation<\/b><\/p>\n<p>Arijit Mukherjee, TCS Research<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td colspan=\"2\" style=\"border: 1pt solid black; background: #cccccc; text-align: center; width: 100%;\"><b>Lunch Break <\/b><br \/>1:00 PM \u2013 2:00 PM<\/td>\n<\/tr>\n<tr>\n<td style=\"border: 1pt solid black; background: #ffefe7; text-align: center; width: 20%;\"><strong>2:00 PM \u2013 4:00 PM<\/strong><\/td>\n<td style=\"border: 1pt solid black; background: #ffefe7; width: 80%;\"><b>TinyML Hands-on Session by STMicroelectronics (Saurabh Rawat)<\/b><br \/>\u2022 Introduction to STMicroelectronics Edge AI Tools: X-CUBE-AI and Model Zoo<br \/>\u2022 Walkthrough of Quantizing and Deploying Model Zoo Models on the STM32N6 Platform<br \/>\u2022 Analysis of Model Deployment<br \/>\u2022 Other Models: MEMS and Audio<br \/>\u2022 Q&amp;A<\/td>\n<\/tr>\n<tr>\n<td colspan=\"2\" style=\"border: 1pt solid black; background: #cccccc; text-align: center; width: 100%;\"><b>Tea \/ Coffee Break <\/b><br \/>4:00 PM \u2013 4:30 PM<\/td>\n<\/tr>\n<tr>\n<td style=\"border: 1pt solid black; background: #ffefe7; text-align: center; width: 20%;\"><strong>4:30 PM \u2013 5:30 PM<\/strong><\/td>\n<td style=\"border: 1pt solid black; background: #ffefe7; width: 80%;\">\n<p><strong>Panel 
Discussion<\/strong><b> \u2013 \u201cEnergy-Aware Intelligence: Can We Sustain the Edge Revolution?\u201d<\/b><\/p>\n<ul>\n<li>Prof. Chetan Singh Thakur (IISc Bangalore)<\/li>\n<li>Prof. Hemangee K. Kapoor (IIT Guwahati)<\/li>\n<li>Saurabh Rawat (STMicroelectronics)<\/li>\n<li>Dr. Arpan Pal (TCS Research)<br \/>Panel Host: <b>Prasant Misra<\/b><\/li>\n<\/ul>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<\/div>\n<p>[\/et_pb_text][\/et_pb_column][\/et_pb_row][et_pb_row column_structure=&#8221;3_5,2_5&#8243; _builder_version=&#8221;4.25.1&#8243; _module_preset=&#8221;default&#8221; custom_padding=&#8221;27px||43px|||&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_column type=&#8221;3_5&#8243; _builder_version=&#8221;4.21.0&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_button button_url=&#8221;https:\/\/www.aimlsystems.org\/2025\/wp-content\/uploads\/2025\/07\/edgex-cfp-2.0_102315.pdf&#8221; url_new_window=&#8221;on&#8221; button_text=&#8221;Call for Papers&#8221; button_alignment=&#8221;left&#8221; disabled_on=&#8221;off|off|off&#8221; module_id=&#8221;submit_button&#8221; _builder_version=&#8221;4.25.1&#8243; _module_preset=&#8221;default&#8221; custom_button=&#8221;on&#8221; button_text_size=&#8221;15px&#8221; button_text_color=&#8221;#000000&#8243; button_border_width=&#8221;1px&#8221; button_border_radius=&#8221;78px&#8221; button_font=&#8221;Poppins|500||on|||||&#8221; button_icon=&#8221;&#x24;||divi||400&#8243; animation_style=&#8221;fade&#8221; custom_css_free_form=&#8221;#submit_button {color:black}&#8221; locked=&#8221;off&#8221; global_colors_info=&#8221;{}&#8221; button_bg_color__hover_enabled=&#8221;on|hover&#8221; button_bg_color__hover=&#8221;&#8221; button_bg_enable_color__hover=&#8221;off&#8221; button_bg_color_gradient_stops__hover=&#8221;#2b87da 0%|#0d1c63 100%&#8221; button_bg_use_color_gradient__hover=&#8221;on&#8221;][\/et_pb_button][et_pb_text disabled_on=&#8221;on|on|on&#8221; 
_builder_version=&#8221;4.25.1&#8243; _module_preset=&#8221;default&#8221; text_font=&#8221;|600|||||||&#8221; text_text_color=&#8221;#E02B20&#8243; disabled=&#8221;on&#8221; locked=&#8221;off&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<p><span style=\"font-weight: 400;\">Paper Submission Deadline: 10 August 2025, 11:59 pm AoE.<\/span><\/p>\n<p>[\/et_pb_text][et_pb_text _builder_version=&#8221;4.25.1&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<h3><strong>About the workshop<\/strong><\/h3>\n<hr \/>\n<p>As intelligent systems expand into diverse environments \u2014 from IoT sensors to autonomous devices \u2014 traditional applications, architectures, and methodologies face new limits. The increasing demand for real-time, low-power, and context-aware intelligence at the edge is pushing the boundaries of what current computing systems can deliver. Edge devices must now operate under tight constraints of memory, latency, and energy, while still supporting sophisticated AI workloads. These challenges call for a rethinking of how we design, deploy, and optimize intelligent systems at the edge.<\/p>\n<p>The <strong>EDGE-X 2025<\/strong> workshop, part of the Fifth International AI-ML Systems Conference (AIMLSys 2025), aims to address the critical challenges and opportunities in next-generation edge computing. EDGE-X explores innovative solutions across various domains, including on-device learning and inferencing, ML\/DL optimization approaches to achieve efficiency in memory\/latency\/power, hardware-software co-optimization, and emerging beyond-von-Neumann paradigms, including but not limited to neuromorphic, in-memory, photonic, and spintronic computing. 
The workshop seeks to unite researchers, engineers, and architects to share ideas and breakthroughs in devices, architectures, algorithms, tools, and methodologies that redefine performance and efficiency for edge computing.<\/p>\n<p>[\/et_pb_text][et_pb_text _builder_version=&#8221;4.25.1&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;][\/et_pb_text][\/et_pb_column][et_pb_column type=&#8221;2_5&#8243; _builder_version=&#8221;4.21.0&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_tabs active_tab_background_color=&#8221;#1c1b3a&#8221; inactive_tab_background_color=&#8221;#0b91c6&#8243; disabled_on=&#8221;off|off|off&#8221; _builder_version=&#8221;4.25.1&#8243; _module_preset=&#8221;default&#8221; tab_text_color=&#8221;#FFFFFF&#8221; background_color=&#8221;rgba(0,0,0,0)&#8221; border_radii=&#8221;on|11px|11px|11px|11px&#8221; locked=&#8221;off&#8221; global_colors_info=&#8221;{%22gcid-f1f9244b-c8ab-43e1-95c3-c0bdf69ac7b5%22:%91%22active_tab_background_color%22,%22active_tab_background_color%22%93}&#8221;][et_pb_tab title=&#8221;Important Dates&#8221; _builder_version=&#8221;4.25.1&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<div class=\"text-attention\">\n<div class=\"s10\"><strong><span class=\"s5\">\u2022 Paper Submission Deadline: <span style=\"color: #ff0000;\">10th Aug\u00a02025<\/span><\/span><\/strong><\/div>\n<div class=\"s10\"><strong><span class=\"s5\">\u2022 Acceptance Notification: 1st Sept 2025<\/span><\/strong><\/div>\n<div class=\"s10\"><strong><span class=\"s5\">\u2022 Camera-Ready Deadline: 15th Sept 2025<\/span><\/strong><\/div>\n<\/div>\n<p>[\/et_pb_tab][\/et_pb_tabs][et_pb_tabs active_tab_background_color=&#8221;#1c1b3a&#8221; inactive_tab_background_color=&#8221;#0b91c6&#8243; disabled_on=&#8221;off|off|off&#8221; _builder_version=&#8221;4.25.1&#8243; _module_preset=&#8221;default&#8221; tab_text_color=&#8221;#FFFFFF&#8221; 
background_color=&#8221;rgba(0,0,0,0)&#8221; custom_padding=&#8221;||0px||false|false&#8221; link_option_url_new_window=&#8221;on&#8221; border_radii=&#8221;on|11px|11px|11px|11px&#8221; global_colors_info=&#8221;{%22gcid-f1f9244b-c8ab-43e1-95c3-c0bdf69ac7b5%22:%91%22active_tab_background_color%22,%22active_tab_background_color%22%93}&#8221;][et_pb_tab title=&#8221;EDGE &#8211; X Workshop Chair&#8221; _builder_version=&#8221;4.25.1&#8243; _module_preset=&#8221;default&#8221; body_line_height=&#8221;1.4em&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<ul>\n<li><span><a href=\"https:\/\/scholar.google.com\/citations?user=HmMXdB4AAAAJ&amp;hl=it\" target=\"_blank\" rel=\"noopener\">Claudio Gallicchio<\/a>, University of Pisa<\/span><\/li>\n<li><a href=\"https:\/\/www.ee.iitb.ac.in\/web\/people\/udayan-ganguly\/\" target=\"_blank\" rel=\"noopener\">Udayan Ganguly<\/a>, <span>IIT Bombay<\/span><\/li>\n<li><a href=\"https:\/\/www.linkedin.com\/in\/arijit72\" target=\"_blank\" rel=\"noopener\">Arijit Mukherjee<\/a>, TCS Research<\/li>\n<\/ul>\n<p>[\/et_pb_tab][\/et_pb_tabs][et_pb_tabs active_tab_background_color=&#8221;#1c1b3a&#8221; inactive_tab_background_color=&#8221;#0b91c6&#8243; disabled_on=&#8221;off|off|off&#8221; _builder_version=&#8221;4.25.1&#8243; _module_preset=&#8221;default&#8221; tab_text_color=&#8221;#FFFFFF&#8221; background_color=&#8221;rgba(0,0,0,0)&#8221; custom_padding=&#8221;||0px||false|false&#8221; link_option_url_new_window=&#8221;on&#8221; border_radii=&#8221;on|11px|11px|11px|11px&#8221; global_colors_info=&#8221;{%22gcid-f1f9244b-c8ab-43e1-95c3-c0bdf69ac7b5%22:%91%22active_tab_background_color%22,%22active_tab_background_color%22%93}&#8221;][et_pb_tab title=&#8221;Technical Program Committee&#8221; _builder_version=&#8221;4.25.1&#8243; _module_preset=&#8221;default&#8221; body_line_height=&#8221;1.4em&#8221; 
global_colors_info=&#8221;{}&#8221;]<\/p>\n<ul>\n<li><span><a href=\"https:\/\/scholar.google.com\/citations?user=s6EyYlUAAAAJ&amp;hl=en\" target=\"_blank\" rel=\"noopener\">Sounak Dey<\/a>, TCS Research<\/span><\/li>\n<li><span><a href=\"https:\/\/www.linkedin.com\/in\/swarnava-dey-8506454\/?originalSubdomain=in\" target=\"_blank\" rel=\"noopener\">Swarnava Dey<\/a>, TCS Research<\/span><\/li>\n<li><span><a href=\"https:\/\/www.linkedin.com\/in\/manansuri\/?originalSubdomain=in\" target=\"_blank\" rel=\"noopener\">Manan Suri<\/a>, IIT Delhi<\/span><\/li>\n<li><span><a href=\"https:\/\/www.punitrathore.com\/\" target=\"_blank\" rel=\"noopener\">Punit Rathore<\/a>, IISc Bangalore<\/span><\/li>\n<li><span><a href=\"https:\/\/scholar.google.com\/citations?user=Ec2g4ewAAAAJ&amp;hl=en\" target=\"_blank\" rel=\"noopener\">Jay Gubbi<\/a>, TCS Research<\/span><\/li>\n<li><span>Prasant Misra, TCS Research<\/span><\/li>\n<\/ul>\n<p>[\/et_pb_tab][\/et_pb_tabs][\/et_pb_column][\/et_pb_row][et_pb_row disabled_on=&#8221;off|off|off&#8221; _builder_version=&#8221;4.21.0&#8243; _module_preset=&#8221;default&#8221; custom_padding=&#8221;4px|||||&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_column type=&#8221;4_4&#8243; _builder_version=&#8221;4.21.0&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_text _builder_version=&#8221;4.25.1&#8243; _module_preset=&#8221;default&#8221; custom_margin=&#8221;||21px|||&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<h3>Topics<\/h3>\n<hr \/>\n<p>[\/et_pb_text][et_pb_text _builder_version=&#8221;4.25.1&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<p><strong>EDGE-X 2025<\/strong> invites submissions of original research papers, case studies, and review articles in the field of low-power, high-efficiency edge AI. 
The workshop seeks to foster discussions on a wide range of topics, including but not limited to:<\/p>\n<ul>\n<li>Ultra-Efficient Machine Learning &#8211; TinyML, binary\/ternary neural networks, federated learning, model pruning, compression, quantisation, and edge-training<\/li>\n<li>Hardware-Software Co-Design &#8211; RISC-V custom extensions for edge AI, non-von-Neumann accelerators (e.g., in-memory compute, FPGAs)<\/li>\n<li>Beyond CMOS &amp; von Neumann Paradigms &#8211; Neuromorphic computing (spiking networks, event-based sensing), in-memory compute architectures (memristors, ReRAM), photonic integrated circuits, spintronic and quantum-inspired devices<\/li>\n<li>System-Level Innovations &#8211; Near-\/sub-threshold computing, power-aware OS\/runtime frameworks, approximate computing for error-tolerant workloads<\/li>\n<li>Tools &amp; Methodologies &#8211; Simulators for emerging edge devices (photonic, spintronic), energy-accuracy trade-off optimisation, benchmarks for heterogeneous edge platforms<\/li>\n<li>Use Cases &amp; Deployment Challenges &#8211; Self-powered\/swarm systems, ruggedised edge AI, privacy\/security for distributed intelligence, sustainability and lifecycle management<\/li>\n<li>Interdisciplinary approaches &amp; collaborations in low-power, high-efficiency edge AI research<\/li>\n<\/ul>\n<p>[\/et_pb_text][\/et_pb_column][\/et_pb_row][et_pb_row disabled_on=&#8221;off|off|off&#8221; _builder_version=&#8221;4.25.1&#8243; _module_preset=&#8221;default&#8221; custom_padding=&#8221;25px|||||&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_column type=&#8221;4_4&#8243; _builder_version=&#8221;4.25.1&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_accordion open_toggle_background_color=&#8221;#f7f7f7&#8243; icon_color=&#8221;#0C71C3&#8243; use_icon_font_size=&#8221;on&#8221; _builder_version=&#8221;4.25.1&#8243; _module_preset=&#8221;default&#8221; custom_margin=&#8221;||14px|||&#8221; 
animation_style=&#8221;slide&#8221; animation_direction=&#8221;bottom&#8221; animation_intensity_slide=&#8221;18%&#8221; border_radii=&#8221;on|30px|30px|30px|30px&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_accordion_item title=&#8221;Submission Instructions&#8221; open=&#8221;on&#8221; _builder_version=&#8221;4.25.1&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<p>Papers should be at most 4 pages, including the title, abstract, figures, and results but excluding references, and must not be published or under review elsewhere. Papers should be prepared in the IEEE conference proceedings format. Please submit your papers through Microsoft CMT.<br \/><strong><span style=\"color: #ff0000;\">All accepted workshop full papers will be included in the IEEE proceedings<\/span><\/strong>. At least one author of each accepted paper must register for the conference and present the paper. In addition, no-shows of accepted papers at the workshop will result in those papers NOT being included in the proceedings.<\/p>\n<p>[\/et_pb_accordion_item][\/et_pb_accordion][\/et_pb_column][\/et_pb_row][et_pb_row disabled_on=&#8221;off|off|off&#8221; _builder_version=&#8221;4.21.0&#8243; _module_preset=&#8221;default&#8221; custom_padding=&#8221;4px|||||&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_column type=&#8221;4_4&#8243; _builder_version=&#8221;4.21.0&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_text _builder_version=&#8221;4.25.1&#8243; _module_preset=&#8221;default&#8221; custom_padding=&#8221;15px|||||&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<h3>Hands-on Session<\/h3>\n<hr \/>\n<p>[\/et_pb_text][et_pb_text _builder_version=&#8221;4.25.1&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<h4><span>Overview<\/span><\/h4>\n<p><span>Deploying artificial intelligence (AI) models on embedded systems that use microcontrollers (MCUs) presents several challenges. 
These include stringent memory constraints, limited processing power, and the need for real-time responsiveness. Traditional AI models, often designed for resource-rich environments, require significant optimization effort to work efficiently on embedded platforms. Balancing model accuracy with performance, managing quantization trade-offs, and minimizing latency are critical considerations in this context. The STM32AI Model Zoo addresses some of these challenges by offering a comprehensive collection of pre-trained models and tools specifically optimized for STM32 devices. This workshop explains how the STM32AI Model Zoo facilitates efficient edge AI deployment through tailored optimizations and full lifecycle support, enabling developers to seamlessly integrate AI capabilities into applications based on the STM32N6, STMicroelectronics\u2019 neural-accelerator MCU.<\/span><\/p>\n<p><span>It takes developers through typical use cases involving computer vision, audio, and motion sensors, showcasing deployment on the STM32N6 platform.<\/span><\/p>\n<p>[\/et_pb_text][et_pb_text _builder_version=&#8221;4.25.1&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<h4><span>Agenda<\/span><\/h4>\n<ul>\n<li><span>Introduction to STMicroelectronics Edge AI Tools \u2013 X-CUBE-AI and Model Zoo<\/span><\/li>\n<li><span>Walkthrough of Quantizing and Deploying Model Zoo Models on the STM32N6 Platform<\/span><\/li>\n<li><span>Analysis of Model Deployment<\/span><\/li>\n<li><span>Other Models \u2013 MEMS and Audio<\/span><\/li>\n<li><span>Q&amp;A<\/span><\/li>\n<\/ul>\n<p>&nbsp;<\/p>\n<p>[\/et_pb_text][\/et_pb_column][\/et_pb_row][et_pb_row column_structure=&#8221;1_4,3_4&#8243; disabled_on=&#8221;off|off|off&#8221; _builder_version=&#8221;4.25.1&#8243; _module_preset=&#8221;default&#8221; custom_padding=&#8221;27px||43px|||&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_column type=&#8221;1_4&#8243; 
_builder_version=&#8221;4.21.0&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_image src=&#8221;https:\/\/www.aimlsystems.org\/2025\/wp-content\/uploads\/2025\/08\/Saurabh-rawant.png&#8221; title_text=&#8221;Saurabh rawant&#8221; align=&#8221;center&#8221; disabled_on=&#8221;off|off|off&#8221; _builder_version=&#8221;4.25.1&#8243; _module_preset=&#8221;default&#8221; max_width=&#8221;200px&#8221; custom_margin=&#8221;||15px|||&#8221; filter_saturate=&#8221;0%&#8221; animation_style=&#8221;slide&#8221; border_radii=&#8221;on|115px|115px|115px|115px&#8221; border_color_all=&#8221;#FFFFFF&#8221; box_shadow_style=&#8221;preset2&#8243; global_colors_info=&#8221;{}&#8221; transform_styles__hover_enabled=&#8221;on|hover&#8221; transform_scale__hover_enabled=&#8221;on|hover&#8221; transform_translate__hover_enabled=&#8221;on|hover&#8221; transform_rotate__hover_enabled=&#8221;on|hover&#8221; transform_skew__hover_enabled=&#8221;on|hover&#8221; transform_origin__hover_enabled=&#8221;on|hover&#8221; transform_scale__hover=&#8221;104%|104%&#8221; filter_saturate__hover_enabled=&#8221;on|hover&#8221; filter_saturate__hover=&#8221;100%&#8221; border_width_all__hover_enabled=&#8221;on|hover&#8221; border_width_all__hover=&#8221;1px&#8221; border_radii__hover_enabled=&#8221;on|hover&#8221; border_radii__hover=&#8221;on|115px|115px|115px|115px&#8221;][\/et_pb_image][\/et_pb_column][et_pb_column type=&#8221;3_4&#8243; _builder_version=&#8221;4.21.0&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_text _builder_version=&#8221;4.25.1&#8243; _module_preset=&#8221;25d2b0d8-2373-4ae8-9188-0ef4b1bb77f4&#8243; text_text_color=&#8221;#212A4F&#8221; header_4_text_color=&#8221;gcid-5fa2e3a6-d98c-4022-811a-b5fb6fa40d68&#8243; header_4_font_size=&#8221;20px&#8221; custom_margin=&#8221;||15px|||&#8221; 
global_colors_info=&#8221;{%22gcid-5fa2e3a6-d98c-4022-811a-b5fb6fa40d68%22:%91%22header_4_text_color%22%93}&#8221;]<\/p>\n<h4>Saurabh Rawat<\/h4>\n<p>Senior Staff Engineer, STMicroelectronics India Pvt Ltd<\/p>\n<p>[\/et_pb_text][et_pb_accordion open_toggle_background_color=&#8221;#f7f7f7&#8243; icon_color=&#8221;#0C71C3&#8243; use_icon_font_size=&#8221;on&#8221; disabled_on=&#8221;off|off|off&#8221; _builder_version=&#8221;4.25.1&#8243; _module_preset=&#8221;default&#8221; custom_margin=&#8221;||14px|||&#8221; animation_style=&#8221;slide&#8221; animation_direction=&#8221;bottom&#8221; animation_intensity_slide=&#8221;18%&#8221; border_radii=&#8221;on|30px|30px|30px|30px&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_accordion_item title=&#8221;Title&#8221; open=&#8221;on&#8221; _builder_version=&#8221;4.25.1&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<p>TinyML Hands-on Session by STMicroelectronics<\/p>\n<p>[\/et_pb_accordion_item][\/et_pb_accordion][et_pb_accordion open_toggle_background_color=&#8221;#f7f7f7&#8243; icon_color=&#8221;#0C71C3&#8243; use_icon_font_size=&#8221;on&#8221; disabled_on=&#8221;off|off|off&#8221; _builder_version=&#8221;4.25.1&#8243; _module_preset=&#8221;default&#8221; custom_margin=&#8221;||14px|||&#8221; animation_style=&#8221;slide&#8221; animation_direction=&#8221;bottom&#8221; animation_intensity_slide=&#8221;18%&#8221; border_radii=&#8221;on|30px|30px|30px|30px&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_accordion_item title=&#8221;Bio&#8221; open=&#8221;on&#8221; _builder_version=&#8221;4.25.1&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<p>Saurabh Rawat is a Senior Staff Engineer with over 13 years at STMicroelectronics. He holds a BTech in Electronics and Communication Engineering from the National Institute of Technology, Prayagraj (erstwhile Allahabad) and an M.Tech. 
in Augmented Reality and Virtual Reality from IIT Jodhpur. His expertise lies in developing innovative embedded solutions in the field of sensors, connectivity, and the Internet of Things (IoT). He has worked on many MEMS-based solutions and has also developed the STMicroelectronics Bluetooth Mesh Stack for Android.<\/p>\n<p>Currently, Saurabh is at the forefront of augmented and virtual reality technology, developing embedded and edge AI solutions for advanced computer vision, MEMS, and audio sensors using ST\u2019s devices, and building the hardware and software ecosystem around them. Saurabh has been a prolific contributor to the body of knowledge in IoT and sensor technology, with multiple publications and articles to his name. He and his team have filed two patents, specifically in the domains of Augmented\/Virtual Reality and IoT.<\/p>\n<p>[\/et_pb_accordion_item][\/et_pb_accordion][\/et_pb_column][\/et_pb_row][et_pb_row disabled_on=&#8221;off|off|off&#8221; _builder_version=&#8221;4.25.1&#8243; _module_preset=&#8221;default&#8221; custom_padding=&#8221;25px|||||&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_column type=&#8221;4_4&#8243; _builder_version=&#8221;4.25.1&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_text _builder_version=&#8221;4.25.1&#8243; _module_preset=&#8221;default&#8221; custom_padding=&#8221;15px|||||&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<h3>Keynote Speaker<\/h3>\n<hr \/>\n<p>[\/et_pb_text][\/et_pb_column][\/et_pb_row][et_pb_row column_structure=&#8221;1_4,3_4&#8243; disabled_on=&#8221;off|off|off&#8221; _builder_version=&#8221;4.21.0&#8243; _module_preset=&#8221;default&#8221; custom_padding=&#8221;27px||43px|||&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_column type=&#8221;1_4&#8243; _builder_version=&#8221;4.21.0&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_image 
src=&#8221;https:\/\/www.aimlsystems.org\/2025\/wp-content\/uploads\/2025\/07\/Chetan-Singh-Thakur.jpg&#8221; title_text=&#8221;Chetan Singh Thakur&#8221; url=&#8221;https:\/\/scholar.google.com\/citations?user=AO3LyLMAAAAJ&#038;hl=en&#8221; url_new_window=&#8221;on&#8221; align=&#8221;center&#8221; disabled_on=&#8221;off|off|off&#8221; _builder_version=&#8221;4.25.1&#8243; _module_preset=&#8221;default&#8221; max_width=&#8221;200px&#8221; custom_margin=&#8221;||15px|||&#8221; filter_saturate=&#8221;0%&#8221; animation_style=&#8221;slide&#8221; border_radii=&#8221;on|115px|115px|115px|115px&#8221; border_color_all=&#8221;#FFFFFF&#8221; box_shadow_style=&#8221;preset2&#8243; global_colors_info=&#8221;{}&#8221; transform_styles__hover_enabled=&#8221;on|hover&#8221; transform_scale__hover_enabled=&#8221;on|hover&#8221; transform_translate__hover_enabled=&#8221;on|hover&#8221; transform_rotate__hover_enabled=&#8221;on|hover&#8221; transform_skew__hover_enabled=&#8221;on|hover&#8221; transform_origin__hover_enabled=&#8221;on|hover&#8221; transform_scale__hover=&#8221;104%|104%&#8221; filter_saturate__hover_enabled=&#8221;on|hover&#8221; filter_saturate__hover=&#8221;100%&#8221; border_width_all__hover_enabled=&#8221;on|hover&#8221; border_width_all__hover=&#8221;1px&#8221; border_radii__hover_enabled=&#8221;on|hover&#8221; border_radii__hover=&#8221;on|115px|115px|115px|115px&#8221;][\/et_pb_image][\/et_pb_column][et_pb_column type=&#8221;3_4&#8243; _builder_version=&#8221;4.21.0&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_text _builder_version=&#8221;4.25.1&#8243; _module_preset=&#8221;25d2b0d8-2373-4ae8-9188-0ef4b1bb77f4&#8243; text_text_color=&#8221;#212A4F&#8221; header_4_text_color=&#8221;gcid-5fa2e3a6-d98c-4022-811a-b5fb6fa40d68&#8243; header_4_font_size=&#8221;20px&#8221; custom_margin=&#8221;||15px|||&#8221; 
global_colors_info=&#8221;{%22gcid-5fa2e3a6-d98c-4022-811a-b5fb6fa40d68%22:%91%22header_4_text_color%22%93}&#8221;]<\/p>\n<h4><a href=\"https:\/\/scholar.google.com\/citations?user=AO3LyLMAAAAJ&amp;hl=en\" target=\"_blank\" rel=\"noopener\">Chetan Singh Thakur<\/a><\/h4>\n<p><span>Indian Institute of Science (IISc), Bangalore<\/span><\/p>\n<p>[\/et_pb_text][et_pb_accordion open_toggle_background_color=&#8221;#f7f7f7&#8243; icon_color=&#8221;#0C71C3&#8243; use_icon_font_size=&#8221;on&#8221; disabled_on=&#8221;off|off|off&#8221; _builder_version=&#8221;4.25.1&#8243; _module_preset=&#8221;default&#8221; custom_margin=&#8221;||14px|||&#8221; animation_style=&#8221;slide&#8221; animation_direction=&#8221;bottom&#8221; animation_intensity_slide=&#8221;18%&#8221; border_radii=&#8221;on|30px|30px|30px|30px&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_accordion_item title=&#8221;Title&#8221; open=&#8221;on&#8221; _builder_version=&#8221;4.25.1&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<p>RAMAN: An Edge AI Accelerator from High-speed Imaging to Brain-Computer Interfaces<\/p>\n<p>[\/et_pb_accordion_item][\/et_pb_accordion][et_pb_accordion open_toggle_background_color=&#8221;#f7f7f7&#8243; icon_color=&#8221;#0C71C3&#8243; use_icon_font_size=&#8221;on&#8221; disabled_on=&#8221;off|off|off&#8221; _builder_version=&#8221;4.25.1&#8243; _module_preset=&#8221;default&#8221; custom_margin=&#8221;||14px|||&#8221; animation_style=&#8221;slide&#8221; animation_direction=&#8221;bottom&#8221; animation_intensity_slide=&#8221;18%&#8221; border_radii=&#8221;on|30px|30px|30px|30px&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_accordion_item title=&#8221;Abstract&#8221; open=&#8221;on&#8221; _builder_version=&#8221;4.25.1&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<p>This talk will present RAMAN \u2014 our in-house developed, reconfigurable, and sparsity-aware TinyML accelerator, purpose-built 
for edge AI. RAMAN leverages both structured and unstructured sparsity within a highly adaptable framework and incorporates quantization techniques to further minimize latency. We will highlight its versatility through applications in high-speed imaging, acoustic signal processing, and brain-computer interfaces. Remarkably, RAMAN achieves real-time throughput of up to 1000 FPS for video workloads on edge devices.<\/p>\n<p>[\/et_pb_accordion_item][\/et_pb_accordion][et_pb_accordion open_toggle_background_color=&#8221;#f7f7f7&#8243; icon_color=&#8221;#0C71C3&#8243; use_icon_font_size=&#8221;on&#8221; disabled_on=&#8221;off|off|off&#8221; _builder_version=&#8221;4.25.1&#8243; _module_preset=&#8221;default&#8221; custom_margin=&#8221;||14px|||&#8221; animation_style=&#8221;slide&#8221; animation_direction=&#8221;bottom&#8221; animation_intensity_slide=&#8221;18%&#8221; border_radii=&#8221;on|30px|30px|30px|30px&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_accordion_item title=&#8221;Bio&#8221; open=&#8221;on&#8221; _builder_version=&#8221;4.25.1&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<p><span>Prof. Chetan Singh Thakur received his PhD in Neuromorphic Engineering from Western Sydney University, Australia, in 2016, and his MTech from IIT Bombay in 2007. He worked for six years at Texas Instruments Singapore as a Senior Integrated Circuit Design Engineer, and was a research fellow at Johns Hopkins University, USA, before joining IISc as a faculty member. He is also an <\/span><b>Adjunct Faculty<\/b><span> member at the International Centre for Neuromorphic Systems, Australia. 
He is a recipient of several awards, including the <\/span><b>Young Investigator Award<\/b><span> from the Pratiksha Trust, the <\/span><b>Early Career Research Award<\/b><span> from the Science and Engineering Research Board, India, and the <\/span><b>Inspire Faculty Award<\/b><span> from the Department of Science and Technology, India.<\/span><\/p>\n<p><span>Prof. Chetan\u2019s research interest is in understanding the signal-processing principles of the brain and applying them to build novel intelligent systems. His research expertise lies in neuromorphic computing, FPGA &amp; mixed-signal VLSI systems, computational neuroscience, and machine learning for edge computing.<\/span><\/p>\n<p>[\/et_pb_accordion_item][\/et_pb_accordion][\/et_pb_column][\/et_pb_row][et_pb_row disabled_on=&#8221;off|off|off&#8221; _builder_version=&#8221;4.25.1&#8243; _module_preset=&#8221;default&#8221; custom_padding=&#8221;25px|||||&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_column type=&#8221;4_4&#8243; _builder_version=&#8221;4.25.1&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_text _builder_version=&#8221;4.25.1&#8243; _module_preset=&#8221;default&#8221; custom_padding=&#8221;15px|||||&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<h3>Panel Discussion<\/h3>\n<hr \/>\n<p>[\/et_pb_text][\/et_pb_column][\/et_pb_row][et_pb_row disabled_on=&#8221;off|off|off&#8221; _builder_version=&#8221;4.25.1&#8243; _module_preset=&#8221;default&#8221; custom_padding=&#8221;27px||43px|||&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_column type=&#8221;4_4&#8243; _builder_version=&#8221;4.21.0&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_text disabled_on=&#8221;on|on|on&#8221; _builder_version=&#8221;4.25.1&#8243; _module_preset=&#8221;25d2b0d8-2373-4ae8-9188-0ef4b1bb77f4&#8243; text_text_color=&#8221;#212A4F&#8221; header_4_text_color=&#8221;gcid-5fa2e3a6-d98c-4022-811a-b5fb6fa40d68&#8243; 
header_4_font_size=&#8221;20px&#8243; custom_margin=&#8221;||15px|||&#8221; disabled=&#8221;on&#8221; global_colors_info=&#8221;{%22gcid-5fa2e3a6-d98c-4022-811a-b5fb6fa40d68%22:%91%22header_4_text_color%22%93}&#8221;]<\/p>\n<h4><a href=\"https:\/\/scholar.google.com\/citations?user=AO3LyLMAAAAJ&amp;hl=en\" target=\"_blank\" rel=\"noopener\">Chetan Singh Thakur<\/a><\/h4>\n<p><span>Indian Institute of Science (IISc), Bangalore<\/span><\/p>\n<p>[\/et_pb_text][et_pb_accordion open_toggle_background_color=&#8221;#f7f7f7&#8243; icon_color=&#8221;#0C71C3&#8243; use_icon_font_size=&#8221;on&#8221; disabled_on=&#8221;off|off|off&#8221; _builder_version=&#8221;4.25.1&#8243; _module_preset=&#8221;default&#8221; custom_margin=&#8221;||14px|||&#8221; animation_style=&#8221;slide&#8221; animation_direction=&#8221;bottom&#8221; animation_intensity_slide=&#8221;18%&#8221; border_radii=&#8221;on|30px|30px|30px|30px&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_accordion_item title=&#8221;Title: Energy-Aware Intelligence: Can We Sustain the Edge Revolution?&#8221; open=&#8221;on&#8221; _builder_version=&#8221;4.25.1&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<p><strong>Coordinator: <\/strong>Prashant Misra<\/p>\n<p>[\/et_pb_accordion_item][\/et_pb_accordion][et_pb_accordion open_toggle_background_color=&#8221;#f7f7f7&#8243; icon_color=&#8221;#0C71C3&#8243; use_icon_font_size=&#8221;on&#8221; disabled_on=&#8221;on|on|on&#8221; _builder_version=&#8221;4.25.1&#8243; _module_preset=&#8221;default&#8221; custom_margin=&#8221;||14px|||&#8221; animation_style=&#8221;slide&#8221; animation_direction=&#8221;bottom&#8221; animation_intensity_slide=&#8221;18%&#8221; border_radii=&#8221;on|30px|30px|30px|30px&#8221; disabled=&#8221;on&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_accordion_item title=&#8221;Abstract&#8221; open=&#8221;on&#8221; _builder_version=&#8221;4.25.1&#8243; _module_preset=&#8221;default&#8221; 
global_colors_info=&#8221;{}&#8221;]<\/p>\n<p>This talk will present RAMAN \u2014 our in-house developed, reconfigurable, and sparsity-aware TinyML accelerator, purpose-built for edge AI. RAMAN leverages both structured and unstructured sparsity within a highly adaptable framework and incorporates quantization techniques to further minimize latency. We will highlight its versatility through applications in high-speed imaging, acoustic signal processing, and brain-computer interfaces. Remarkably, RAMAN achieves real-time throughput of up to 1000 FPS for video workloads on edge devices.<\/p>\n<p>[\/et_pb_accordion_item][\/et_pb_accordion][et_pb_accordion open_toggle_background_color=&#8221;#f7f7f7&#8243; icon_color=&#8221;#0C71C3&#8243; use_icon_font_size=&#8221;on&#8221; disabled_on=&#8221;on|on|on&#8221; _builder_version=&#8221;4.25.1&#8243; _module_preset=&#8221;default&#8221; custom_margin=&#8221;||14px|||&#8221; animation_style=&#8221;slide&#8221; animation_direction=&#8221;bottom&#8221; animation_intensity_slide=&#8221;18%&#8221; border_radii=&#8221;on|30px|30px|30px|30px&#8221; disabled=&#8221;on&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_accordion_item title=&#8221;Bio&#8221; open=&#8221;on&#8221; _builder_version=&#8221;4.25.1&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<p><span>Prof. Chetan Singh Thakur received his PhD in Neuromorphic Engineering from Western Sydney University, Australia, in 2016, and his MTech from IIT Bombay in 2007. He worked for six years at Texas Instruments Singapore as a Senior Integrated Circuit Design Engineer, and was a research fellow at Johns Hopkins University, USA, before joining IISc as a faculty member. He is also an <\/span><b>Adjunct Faculty<\/b><span> member at the International Centre for Neuromorphic Systems, Australia. 
He is a recipient of several awards, including the <\/span><b>Young Investigator Award<\/b><span> from the Pratiksha Trust, the <\/span><b>Early Career Research Award<\/b><span> from the Science and Engineering Research Board, India, and the <\/span><b>Inspire Faculty Award<\/b><span> from the Department of Science and Technology, India.<\/span><\/p>\n<p><span>Prof. Chetan\u2019s research interest is in understanding the signal-processing principles of the brain and applying them to build novel intelligent systems. His research expertise lies in neuromorphic computing, FPGA &amp; mixed-signal VLSI systems, computational neuroscience, and machine learning for edge computing.<\/span><\/p>\n<p>[\/et_pb_accordion_item][\/et_pb_accordion][\/et_pb_column][\/et_pb_row][et_pb_row disabled_on=&#8221;off|off|off&#8221; _builder_version=&#8221;4.25.1&#8243; _module_preset=&#8221;default&#8221; custom_padding=&#8221;25px|||||&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_column type=&#8221;4_4&#8243; _builder_version=&#8221;4.25.1&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_text _builder_version=&#8221;4.25.1&#8243; _module_preset=&#8221;default&#8221; custom_padding=&#8221;||8px|||&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<h3>Accepted Papers<\/h3>\n<hr>\n<p>[\/et_pb_text][et_pb_accordion open_toggle_background_color=&#8221;#f7f7f7&#8243; icon_color=&#8221;#0C71C3&#8243; use_icon_font_size=&#8221;on&#8221; _builder_version=&#8221;4.25.1&#8243; _module_preset=&#8221;default&#8221; custom_margin=&#8221;||14px|||&#8221; animation_style=&#8221;slide&#8221; animation_direction=&#8221;bottom&#8221; animation_intensity_slide=&#8221;18%&#8221; border_radii=&#8221;on|30px|30px|30px|30px&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_accordion_item title=&#8221;1. 
Currency Recognition for Visually Challenged Individuals Using Enhanced Deep learning model&#8221; open=&#8221;on&#8221; _builder_version=&#8221;4.25.1&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<p><span>Anujna Shetty, Ramakrishna M<\/span><\/p>\n<p>[\/et_pb_accordion_item][\/et_pb_accordion][et_pb_accordion open_toggle_background_color=&#8221;#f7f7f7&#8243; icon_color=&#8221;#0C71C3&#8243; use_icon_font_size=&#8221;on&#8221; _builder_version=&#8221;4.25.1&#8243; _module_preset=&#8221;default&#8221; custom_margin=&#8221;||14px|||&#8221; animation_style=&#8221;slide&#8221; animation_direction=&#8221;bottom&#8221; animation_intensity_slide=&#8221;18%&#8221; border_radii=&#8221;on|30px|30px|30px|30px&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_accordion_item title=&#8221;2. Cost-Aware Fine-Tuning: Evaluating Hyperparameters, Datasets, and PEFT Methods for Efficient LLM Adaptation&#8221; open=&#8221;on&#8221; _builder_version=&#8221;4.25.1&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<p role=\"presentation\"><span>Aditya Chatterjee, Sankar Menon, Dr. Kunal Kishore Korgaonkar, Rupesh Yarlagadda<\/span><\/p>\n<p>[\/et_pb_accordion_item][\/et_pb_accordion][et_pb_accordion open_toggle_background_color=&#8221;#f7f7f7&#8243; icon_color=&#8221;#0C71C3&#8243; use_icon_font_size=&#8221;on&#8221; _builder_version=&#8221;4.25.1&#8243; _module_preset=&#8221;default&#8221; custom_margin=&#8221;||14px|||&#8221; animation_style=&#8221;slide&#8221; animation_direction=&#8221;bottom&#8221; animation_intensity_slide=&#8221;18%&#8221; border_radii=&#8221;on|30px|30px|30px|30px&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_accordion_item title=&#8221;3. 
Real-time Temperature Prediction of PMSM Motor Using Machine Learning&#8221; open=&#8221;on&#8221; _builder_version=&#8221;4.25.1&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<p role=\"presentation\"><span>Nandish Goudar, Divesh Harikant, Yuvaraj Paragond, Sagar Khanade<\/span><\/p>\n<p>[\/et_pb_accordion_item][\/et_pb_accordion][et_pb_accordion open_toggle_background_color=&#8221;#f7f7f7&#8243; icon_color=&#8221;#0C71C3&#8243; use_icon_font_size=&#8221;on&#8221; _builder_version=&#8221;4.25.1&#8243; _module_preset=&#8221;default&#8221; custom_margin=&#8221;||14px|||&#8221; animation_style=&#8221;slide&#8221; animation_direction=&#8221;bottom&#8221; animation_intensity_slide=&#8221;18%&#8221; border_radii=&#8221;on|30px|30px|30px|30px&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_accordion_item title=&#8221;4. Neuromorphic approaches for energy efficient Object Localization&#8221; open=&#8221;on&#8221; _builder_version=&#8221;4.25.1&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<p role=\"presentation\"><span>Debarati Paul, Sayan Pradhan<\/span><\/p>\n<p>[\/et_pb_accordion_item][\/et_pb_accordion][et_pb_accordion open_toggle_background_color=&#8221;#f7f7f7&#8243; icon_color=&#8221;#0C71C3&#8243; use_icon_font_size=&#8221;on&#8221; _builder_version=&#8221;4.25.1&#8243; _module_preset=&#8221;default&#8221; custom_margin=&#8221;||14px|||&#8221; animation_style=&#8221;slide&#8221; animation_direction=&#8221;bottom&#8221; animation_intensity_slide=&#8221;18%&#8221; border_radii=&#8221;on|30px|30px|30px|30px&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_accordion_item title=&#8221;5. 
Voice-to-Insight: STM32-Based Audio Logging with Offline AI-Driven Transcription, Translation, and Speaker Profiling&#8221; open=&#8221;on&#8221; _builder_version=&#8221;4.25.1&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<p role=\"presentation\"><span>Ashwini Shinde, Eshal Shaikh, Harshal Patil, Akshay Dhere<\/span><\/p>\n<p>[\/et_pb_accordion_item][\/et_pb_accordion][et_pb_text disabled_on=&#8221;on|on|on&#8221; _builder_version=&#8221;4.25.1&#8243; _module_preset=&#8221;default&#8221; custom_padding=&#8221;15px|||||&#8221; disabled=&#8221;on&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<h3>Keynote Speakers<\/h3>\n<p>[\/et_pb_text][\/et_pb_column][\/et_pb_row][et_pb_row column_structure=&#8221;1_4,3_4&#8243; disabled_on=&#8221;on|on|on&#8221; _builder_version=&#8221;4.21.0&#8243; _module_preset=&#8221;default&#8221; custom_padding=&#8221;27px||43px|||&#8221; disabled=&#8221;on&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_column type=&#8221;1_4&#8243; _builder_version=&#8221;4.21.0&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_image src=&#8221;https:\/\/www.aimlsystems.org\/2024\/wp-content\/uploads\/2024\/10\/Hamm_jihun_002.jpg&#8221; title_text=&#8221;Hamm_jihun_002&#8243; align=&#8221;center&#8221; disabled_on=&#8221;off|off|off&#8221; _builder_version=&#8221;4.25.1&#8243; _module_preset=&#8221;default&#8221; max_width=&#8221;200px&#8221; custom_margin=&#8221;||15px|||&#8221; filter_saturate=&#8221;0%&#8221; animation_style=&#8221;slide&#8221; border_radii=&#8221;on|115px|115px|115px|115px&#8221; border_color_all=&#8221;#FFFFFF&#8221; box_shadow_style=&#8221;preset2&#8243; global_colors_info=&#8221;{}&#8221; transform_styles__hover_enabled=&#8221;on|hover&#8221; transform_scale__hover_enabled=&#8221;on|hover&#8221; transform_translate__hover_enabled=&#8221;on|hover&#8221; transform_rotate__hover_enabled=&#8221;on|hover&#8221; 
transform_skew__hover_enabled=&#8221;on|hover&#8221; transform_origin__hover_enabled=&#8221;on|hover&#8221; transform_scale__hover=&#8221;104%|104%&#8221; filter_saturate__hover_enabled=&#8221;on|hover&#8221; filter_saturate__hover=&#8221;100%&#8221; border_width_all__hover_enabled=&#8221;on|hover&#8221; border_width_all__hover=&#8221;1px&#8221; border_radii__hover_enabled=&#8221;on|hover&#8221; border_radii__hover=&#8221;on|115px|115px|115px|115px&#8221;][\/et_pb_image][\/et_pb_column][et_pb_column type=&#8221;3_4&#8243; _builder_version=&#8221;4.21.0&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_text _builder_version=&#8221;4.25.1&#8243; _module_preset=&#8221;25d2b0d8-2373-4ae8-9188-0ef4b1bb77f4&#8243; text_text_color=&#8221;#212A4F&#8221; header_4_text_color=&#8221;gcid-5fa2e3a6-d98c-4022-811a-b5fb6fa40d68&#8243; header_4_font_size=&#8221;20px&#8221; custom_margin=&#8221;||15px|||&#8221; global_colors_info=&#8221;{%22gcid-5fa2e3a6-d98c-4022-811a-b5fb6fa40d68%22:%91%22header_4_text_color%22%93}&#8221;]<\/p>\n<h4><a href=\"https:\/\/www.cs.tulane.edu\/~jhamm3\/\" target=\"_blank\" rel=\"noopener\">Dr. 
Jihun Hamm<\/a><\/h4>\n<p>Tulane University, USA<\/p>\n<p>[\/et_pb_text][et_pb_accordion open_toggle_background_color=&#8221;#f7f7f7&#8243; icon_color=&#8221;#0C71C3&#8243; use_icon_font_size=&#8221;on&#8221; disabled_on=&#8221;off|off|off&#8221; _builder_version=&#8221;4.25.1&#8243; _module_preset=&#8221;default&#8221; custom_margin=&#8221;||14px|||&#8221; animation_style=&#8221;slide&#8221; animation_direction=&#8221;bottom&#8221; animation_intensity_slide=&#8221;18%&#8221; border_radii=&#8221;on|30px|30px|30px|30px&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_accordion_item title=&#8221;Title&#8221; open=&#8221;on&#8221; _builder_version=&#8221;4.25.1&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<p>Can synthetic images be better than real images?<br \/>A study of utility and privacy of synthetic images in dermatology<\/p>\n<p>[\/et_pb_accordion_item][\/et_pb_accordion][et_pb_accordion open_toggle_background_color=&#8221;#f7f7f7&#8243; icon_color=&#8221;#0C71C3&#8243; use_icon_font_size=&#8221;on&#8221; disabled_on=&#8221;off|off|off&#8221; _builder_version=&#8221;4.25.1&#8243; _module_preset=&#8221;default&#8221; custom_margin=&#8221;||14px|||&#8221; animation_style=&#8221;slide&#8221; animation_direction=&#8221;bottom&#8221; animation_intensity_slide=&#8221;18%&#8221; border_radii=&#8221;on|30px|30px|30px|30px&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_accordion_item title=&#8221;Abstract&#8221; open=&#8221;on&#8221; _builder_version=&#8221;4.25.1&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<p>Advances in generative models such as GANs, VAEs, and, more recently, diffusion models have revolutionized the field of image generation by enabling photorealistic synthetic images for many potential applications. Alongside these advances, the question of whether synthetic data can replace real data has become increasingly relevant. 
Synthetic data has been demonstrated to improve classification and to overcome data-scarcity issues such as class imbalance, limited robustness, and bias; this is done by generating synthetic data distributions with improved balance and other desirable properties. Furthermore, there is increasing demand for privacy-preserving synthetic image generation across domains such as healthcare, finance, and social media. However, achieving the optimal tradeoff between utility and privacy remains a significant technical challenge, and various privacy-preserving techniques are being studied. In this talk, I will first discuss how generative AI can help with the problem of skin disease diagnosis, and then present benchmarking results on the utility-privacy tradeoff of recent image synthesis methods.<\/p>\n<p>[\/et_pb_accordion_item][\/et_pb_accordion][et_pb_accordion open_toggle_background_color=&#8221;#f7f7f7&#8243; icon_color=&#8221;#0C71C3&#8243; use_icon_font_size=&#8221;on&#8221; disabled_on=&#8221;off|off|off&#8221; _builder_version=&#8221;4.25.1&#8243; _module_preset=&#8221;default&#8221; custom_margin=&#8221;||14px|||&#8221; animation_style=&#8221;slide&#8221; animation_direction=&#8221;bottom&#8221; animation_intensity_slide=&#8221;18%&#8221; border_radii=&#8221;on|30px|30px|30px|30px&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_accordion_item title=&#8221;Bio&#8221; open=&#8221;on&#8221; _builder_version=&#8221;4.25.1&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<p>Dr. Jihun Hamm has been an Associate Professor of Computer Science at Tulane University since 2019. He received his PhD from the University of Pennsylvania in 2008, supervised by Dr. Daniel Lee. Dr. Hamm&#8217;s research interest is in machine learning, from theory to applications. He has worked on the theory and practice of robust and adversarial machine learning, privacy and security, and optimization. Dr. Hamm has also worked on medical data analysis. 
His work in machine learning has been published in top venues such as ICML, NeurIPS, CVPR, JMLR, and IEEE-TPAMI, and in medical research venues such as MICCAI, MedIA, and IEEE-TMI. Among other honors, he has received the Best Paper Award from MedIA, was a finalist for the MICCAI Young Scientist Publication Impact Award, and received a Google Faculty Research Award.<\/p>\n<p>[\/et_pb_accordion_item][\/et_pb_accordion][\/et_pb_column][\/et_pb_row][\/et_pb_section]<\/p>\n","protected":false},"excerpt":{"rendered":"<p>EDGE-X 2025 Reimagining edge intelligence with low-power high efficiency AI Systems ScheduleDATE: October 8, 2025 Venue: Sigma 3, Chancery Pavilion, Bangalore &nbsp; Time Event 09:30 AM \u2013 11:00 AM Keynote Talk \u2013 Prof. Chetan Singh Thakur (IISc Bangalore)\u201cRAMAN: An Edge AI Accelerator from High-Speed Imaging to Brain\u2013Computer Interfaces\u201d Tea \/ Coffee Break 11:00 AM \u2013 [&hellip;]<\/p>\n","protected":false},"author":10,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"_et_pb_use_builder":"on","_et_pb_old_content":"",
"_et_gb_content_width":"","footnotes":""},"class_list":["post-6515","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/www.aimlsystems.org\/2026\/wp-json\/wp\/v2\/pages\/6515","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.aimlsystems.org\/2026\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/www.aimlsystems.org\/2026\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/www.aimlsystems.org\/2026\/wp-json\/wp\/v2\/users\/10"}],"replies":[{"embeddable":true,"href":"https:\/\/www.aimlsystems.org\/2026\/wp-json\/wp\/v2\/comments?post=6515"}],"version-history":[{"count":57,"href":"https:\/\/www.aimlsystems.org\/2026\/wp-json\/wp\/v2\/pages\/6515\/revisions"}],"predecessor-version":[{"id":7592,"href":"https:\/\/www.aimlsystems.org\/2026\/wp-json\/wp\/v2\/pages\/6515\/revisions\/7592"}],"wp:attachment":[{"href":"https:\/\/www.aimlsystems.org\/2026\/wp-json\/wp\/v2\/media?parent=6515"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}