EDGE-X 2025

Reimagining edge intelligence with low-power, high-efficiency AI systems

Schedule

Date: October 8, 2025

Venue: Sigma 3, Chancery Pavilion, Bangalore

09:30 AM – 11:00 AM: Keynote Talk – Prof. Chetan Singh Thakur (IISc Bangalore)
  "RAMAN: An Edge AI Accelerator from High-Speed Imaging to Brain–Computer Interfaces"
11:00 AM – 11:30 AM: Tea / Coffee Break
11:30 AM – 1:00 PM: Technical Paper Presentations
  • Currency Recognition for Visually Challenged Individuals Using Enhanced Deep Learning Model – Anujna Shetty, Ramakrishna M
  • Cost-Aware Fine-Tuning: Evaluating Hyperparameters, Datasets, and PEFT Methods for Efficient LLM Adaptation – Aditya Chatterjee, Sankar Menon, Dr. Kunal Kishore Korgaonkar, Rupesh Yarlagadda
  • Real-time Temperature Prediction of PMSM Motor Using Machine Learning – Nandish Goudar, Divesh Harikant, Yuvaraj Paragond, Sagar Khanade
  • Neuromorphic Approaches for Energy-Efficient Object Localization – Debarati Paul, Sayan Pradhan
  • Voice-to-Insight: STM32-Based Audio Logging with Offline AI-Driven Transcription, Translation, and Speaker Profiling – Ashwini Shinde, Eshal Shaikh, Harshal Patil, Akshay Dhere
1:00 PM – 2:00 PM: Lunch Break
2:00 PM – 4:00 PM: TinyML Hands-on Session by STMicroelectronics (Saurabh Rawat)
  • Introduction to STMicroelectronics Edge AI Tools: X-CUBE-AI and Model Zoo
  • Walkthrough of Quantizing and Deploying Model Zoo Models on the STM32N6 Platform
  • Analysis of Model Deployment
  • Other Models: MEMS and Audio
  • Q&A
4:00 PM – 4:30 PM: Tea / Coffee Break
4:30 PM – 5:30 PM: Panel Discussion – "Energy-Aware Intelligence: Can We Sustain the Edge Revolution?"
  Panel Host: Prasant Misra

About the workshop


As intelligent systems expand into diverse environments — from IoT sensors to autonomous devices — traditional applications, architectures, and methodologies face new limits. The increasing demand for real-time, low-power, and context-aware intelligence at the edge is pushing the boundaries of what current computing systems can deliver. Edge devices must now operate under tight constraints of memory, latency, and energy, while still supporting sophisticated AI workloads. These challenges call for a rethinking of how we design, deploy, and optimize intelligent systems at the edge.

The EDGE-X 2025 workshop, part of the Fifth International AI-ML Systems Conference (AIMLSys 2025), aims to address the critical challenges and opportunities in next-generation edge computing. EDGE-X explores innovative solutions across various domains, including on-device learning and inferencing, ML/DL optimization approaches for memory, latency, and power efficiency, hardware-software co-optimization, and emerging beyond-von-Neumann paradigms, including but not limited to neuromorphic, in-memory, photonic, and spintronic computing. The workshop seeks to unite researchers, engineers, and architects to share ideas and breakthroughs in devices, architectures, algorithms, tools, and methodologies that redefine performance and efficiency for edge computing.

• Paper Submission Deadline: 10th Aug 2025
• Acceptance Notification: 1st Sept 2025
• Camera-Ready Deadline: 15th Sept 2025

Topics


EDGE-X 2025 invites submissions of original research papers, case studies, and review articles in the field of low-power, high-efficiency edge AI. The workshop seeks to foster discussions on a wide range of topics, including but not limited to:

  • Ultra-Efficient Machine Learning – TinyML, binary/ternary neural networks, federated learning, model pruning, compression, quantization, and edge training
  • Hardware-Software Co-Design – RISC-V custom extensions for edge AI, non-von-Neumann accelerators (e.g., in-memory compute, FPGAs)
  • Beyond CMOS & von Neumann Paradigms – Neuromorphic computing (spiking networks, event-based sensing), in-memory compute architectures (memristors, ReRAM), photonic integrated circuits, spintronic and quantum-inspired devices
  • System-Level Innovations – Near-/sub-threshold computing, power-aware OS/runtime frameworks, approximate computing for error-tolerant workloads
  • Tools & Methodologies – Simulators for emerging edge devices (photonic, spintronic), energy-accuracy trade-off optimization, benchmarks for heterogeneous edge platforms
  • Use Cases & Deployment Challenges – Self-powered/swarm systems, ruggedised edge AI, privacy/security for distributed intelligence, sustainability and lifecycle management
  • Interdisciplinary Approaches – Cross-domain collaborations in low-power, high-efficiency edge AI research for edge computing

Submission Instructions

Papers should be at most 4 pages, including the title, abstract, figures, and results, but excluding references. Submissions must not be published or under review elsewhere. Papers should be prepared in the IEEE conference proceedings format. Please submit your papers through Microsoft CMT.

All accepted workshop papers will be included in the IEEE proceedings. At least one author of each accepted paper must register for the conference and present the paper. Papers that are not presented at the workshop (no-shows) will NOT be included in the proceedings.

Hands-on Session


Overview

Deploying artificial intelligence (AI) models on embedded systems built around microcontrollers (MCUs) presents several challenges: stringent memory constraints, limited processing power, and the need for real-time responsiveness. Traditional AI models, often designed for resource-rich environments, require significant optimization effort to run efficiently on embedded platforms. Balancing model accuracy with performance, managing quantization trade-offs, and minimizing latency are critical considerations in this context. The STM32AI Model Zoo addresses some of these challenges by offering a comprehensive collection of pre-trained models and tools specifically optimized for STM32 devices. This session explains how the STM32AI Model Zoo facilitates efficient edge AI deployment through tailored optimizations and full lifecycle support, enabling developers to seamlessly integrate AI capabilities into applications based on the STMicroelectronics STM32N6, which integrates a neural accelerator.

The session takes developers through a few typical use cases covering computer vision, audio, and motion sensors, showcasing deployment on the STM32N6 platform.
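To make the quantization step concrete, the sketch below shows one common way to produce a fully integer (INT8) model before importing it into an MCU toolchain. It is only an illustrative outline, using TensorFlow Lite post-training quantization with a placeholder Keras model and random calibration data; the hands-on session follows the STM32AI Model Zoo scripts and X-CUBE-AI tooling, which can import a quantized .tflite file like the one produced here.

    # Illustrative only: post-training INT8 quantization with TensorFlow Lite.
    # The model and calibration data below are placeholders, not Model Zoo assets.
    import numpy as np
    import tensorflow as tf

    # Placeholder network standing in for a Model Zoo model (e.g., a small image classifier).
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(96, 96, 3)),
        tf.keras.layers.Conv2D(8, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(4, activation="softmax"),
    ])

    def representative_data_gen():
        # In practice, a few hundred real input samples calibrate the activation ranges;
        # random data keeps this sketch self-contained.
        for _ in range(100):
            yield [np.random.rand(1, 96, 96, 3).astype(np.float32)]

    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]            # enable quantization
    converter.representative_dataset = representative_data_gen      # calibration data
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    converter.inference_input_type = tf.int8                        # integer-only I/O suits MCU pipelines
    converter.inference_output_type = tf.int8

    with open("model_int8.tflite", "wb") as f:
        f.write(converter.convert())                                 # file importable by X-CUBE-AI

The resulting integer model typically trades a small amount of accuracy for roughly 4x smaller weights and faster integer inference, which is the kind of trade-off the session analyzes on the STM32N6.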

Agenda

  •  Introduction to STMicroelectronics Edge AI Tools – X-CUBE-AI and Model Zoo
  •  Walkthrough of Quantizing and Deploying Model Zoo Models on the STM32N6 Platform
  •  Analysis of Model Deployment
  •  Other Models – MEMS and Audio
  •  Q&A

 

Saurabh Rawat

Senior Staff Engineer, STMicroelectronics India Pvt Ltd

Title

TinyML Hands-on Session by STMicroelectronics

Bio

Saurabh Rawat is a Senior Staff Engineer with a commendable tenure of over 13 years at STMicroelectronics. He holds a BTech in Electronics and Communication Engineering from the National Institute of Technology, Prayagraj (erstwhile Allahabad) and an MTech in Augmented Reality and Virtual Reality from IIT Jodhpur. His expertise lies in developing innovative embedded solutions in the field of sensors, connectivity, and the Internet of Things (IoT). He has worked on many MEMS-based solutions and has also developed the STMicroelectronics Bluetooth Mesh Stack for Android.

Currently, Saurabh is at the forefront of Augmented and Virtual Reality technology, developing embedded and edge AI solutions for advanced computer vision, MEMS, and audio sensors using ST's devices and building the hardware and software ecosystem around them. Saurabh has been a prolific contributor to the body of knowledge in IoT and sensor technology, with multiple publications and articles to his name. He and his team have filed two patents, specifically in the domains of Augmented/Virtual Reality and IoT.

Keynote Speaker


Chetan Singh Thakur

Indian Institute of Science (IISc), Bangalore

Title

RAMAN: An Edge AI Accelerator from High-Speed Imaging to Brain–Computer Interfaces

Abstract

This talk will present RAMAN — our in-house developed, reconfigurable, and sparsity-aware TinyML accelerator, purpose-built for edge AI. RAMAN leverages both structured and unstructured sparsity within a highly adaptable framework and incorporates quantization techniques to further minimize latency. We will highlight its versatility through applications in high-speed imaging, acoustic signal processing, and brain-computer interfaces. Remarkably, RAMAN achieves real-time throughput of up to 1000 FPS for video workloads on edge devices.
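As a back-of-the-envelope illustration of why sparsity and quantization pay off at the edge (the NumPy sketch below is purely illustrative and does not reflect RAMAN's actual dataflow or sparse encoding), pruning 80% of a layer's weights leaves a zero-skipping engine with only 20% of the multiply-accumulate work, and 8-bit weights further shrink storage:

    # Illustrative only: how unstructured sparsity and 8-bit weights cut the work of one
    # layer. RAMAN's hardware dataflow and sparsity encoding are not modelled here.
    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.standard_normal((256, 256)).astype(np.float32)   # dense layer weights
    x = rng.standard_normal(256).astype(np.float32)          # input activations

    # Unstructured magnitude pruning: zero out the 80% smallest-magnitude weights.
    threshold = np.quantile(np.abs(W), 0.80)
    W_sparse = np.where(np.abs(W) >= threshold, W, 0.0).astype(np.float32)

    # A sparsity-aware engine performs multiply-accumulates (MACs) only for non-zero weights.
    total_macs = W.size
    effective_macs = int(np.count_nonzero(W_sparse))
    print(f"MACs: {effective_macs} of {total_macs} ({100 * effective_macs / total_macs:.0f}% of dense)")

    # 8-bit weights shrink storage and datapath width (sparse-index overhead ignored here).
    print(f"Weights: {W.size * 4} B dense FP32 vs ~{effective_macs} B sparse INT8")

    # The pruned, quantized layer still approximates the original dense output.
    scale = np.abs(W_sparse).max() / 127.0
    W_int8 = np.round(W_sparse / scale).astype(np.int8)
    y_approx = (W_int8.astype(np.float32) @ x) * scale        # ~= W_sparse @ x

In a real network, the achievable sparsity, and hence the speedup, depends on how aggressively the model can be pruned without losing accuracy.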

Bio

Prof. Chetan Singh Thakur received his PhD in Neuromorphic Engineering from Western Sydney University, Australia, in 2016, and his MTech from IIT Bombay in 2007. He worked for six years at Texas Instruments Singapore as a Senior Integrated Circuit Design Engineer, and as a research fellow at Johns Hopkins University, USA, before joining IISc as a faculty member. He is also an Adjunct Faculty at the International Centre for Neuromorphic Systems, Australia. He is the recipient of several awards, such as the Young Investigator Award from the Pratiksha Trust, the Early Career Research Award from the Science and Engineering Research Board (SERB), India, and the INSPIRE Faculty Award from the Department of Science and Technology (DST), India.

Prof. Chetan's research interest is to understand the signal-processing principles of the brain and apply them to build novel intelligent systems. His research expertise lies in neuromorphic computing, FPGA and mixed-signal VLSI systems, computational neuroscience, and machine learning for edge computing.

Panel Discussion


Title
Energy-Aware Intelligence: Can We Sustain the Edge Revolution?

Coordinator: Prasant Misra

Accepted Papers

1. Currency Recognition for Visually Challenged Individuals Using Enhanced Deep Learning Model

Anujna Shetty, Ramakrishna M

2. Cost-Aware Fine-Tuning: Evaluating Hyperparameters, Datasets, and PEFT Methods for Efficient LLM Adaptation

Aditya Chatterjee, Sankar Menon, Dr. Kunal Kishore Korgaonkar, Rupesh Yarlagadda

3. Real-time Temperature Prediction of PMSM Motor Using Machine Learning

Nandish Goudar, Divesh Harikant, Yuvaraj Paragond, Sagar Khanade

4. Neuromorphic Approaches for Energy-Efficient Object Localization

Debarati Paul, Sayan Pradhan

5. Voice-to-Insight: STM32-Based Audio Logging with Offline AI-Driven Transcription, Translation, and Speaker Profiling

Ashwini Shinde, Eshal Shaikh, Harshal Patil, Akshay Dhere