[Blog] How FPGAs Can Push AI to the Edge
Posted 09/25/2025 by Bob O’Donnell, president and chief analyst of TECHnalysis Research, LLC
Everyone, it seems, is now talking about how they’re planning to integrate AI into their devices, their factories, their workflows and, well, everything. But how they actually plan to make that happen isn’t always clear. Part of the challenge, of course, is that different workloads and different environments require different types of solutions.
For those looking to integrate AI-powered capabilities into edge computing-based offerings, there is a relatively broad range of ways to achieve those goals. One option that organizations may not have considered, however, is low power FPGAs, like the kind offered by Lattice Semiconductor.
These FPGAs and their associated software stacks are now capable of running a number of advanced AI applications in an extremely low power envelope, making them very well-suited for a variety of different applications.
Need to fuse the input from a variety of different sensors to do inferencing on a device through an AI-powered model? Check. How about detecting objects or identifying people in real-time and enabling (or disabling) access to a specific machine or set of controls? Check.
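The sensor-fusion scenario above can be sketched in a few lines. This is a minimal illustration, not Lattice's firmware: the sensor names, normalization ranges, weights, and threshold below are all hypothetical stand-ins for a real on-device model.

```python
# Illustrative sketch: fuse readings from several sensors into one feature
# vector, then run a tiny stand-in "model" on-device. Sensor names, ranges,
# weights, and threshold are hypothetical, chosen only for illustration.

def fuse_sensors(readings):
    """Normalize each sensor reading to [0, 1] and concatenate."""
    full_scales = {"pir_motion": 1.0, "mic_level_db": 90.0, "radar_doppler": 10.0}
    return [min(readings[name] / full_scale, 1.0)
            for name, full_scale in full_scales.items()]

def infer_presence(features, weights=(0.5, 0.2, 0.3), threshold=0.4):
    """Stand-in for an AI model: a weighted sum compared to a threshold."""
    score = sum(w * f for w, f in zip(weights, features))
    return score >= threshold

readings = {"pir_motion": 1.0, "mic_level_db": 45.0, "radar_doppler": 2.0}
print(infer_presence(fuse_sensors(readings)))  # → True
```

A real deployment would replace `infer_presence` with a trained, quantized network compiled into the FPGA fabric, but the shape of the pipeline, normalize, fuse, infer, is the same.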
These applications and many others that need to run at very low power (that is, measured in milliwatts) directly on a hardware device are possible with the correct combination of Lattice FPGA hardware and accompanying software. Leveraging this type of on-device computing removes the need to make a cloud-based connection to run AI models, as has typically been the case until recently.
There are many benefits of this local AI computing option, including lower costs, less energy consumption, enhanced data security, the ability to function offline, real-time, low-latency responses, and increased reliability. It’s a powerful combination of benefits that makes the potential for on-device AI very compelling.
Of course, these solutions aren’t going to work for all types of AI applications, nor are they intended to. However, they can unlock a surprisingly large set of capabilities that can create new opportunities for many different organizations.
Lattice offers a range of options across its Lattice iCE40™ Ultra, Lattice Nexus™, Lattice Nexus 2™, and Lattice Avant™ chip families. From a pure computing horsepower perspective, depending on the FPGA selected for the job, they can deliver up to 5 TOPS of AI inferencing performance. Additionally, they can operate in a power range of 20 mW up to 5 W and can respond in as little as 20 milliseconds.
Beyond the basic hardware specs, of course, the other key consideration is software support. Here, again, there are a wide range of choices from which to select. Lattice has a dedicated group of engineers focused on creating AI and ML (Machine Learning) models that are optimized for their FPGAs. In the field of Human-Machine Interface (HMI), in particular, the company has created numerous off-the-shelf models covering presence detection and activation, attention sensing, user identification, hand gesture detection and more. All of these can be easily adapted for use across a wide range of industries and applications.
For an industry like automotive, there are also prebuilt models for applications like automotive external monitoring systems (EMS), which can continuously monitor a vehicle in an extremely low-power state. With an EMS, if an FPGA deployed in a car detects a person without a key approaching the vehicle in a suspicious manner, it can disable the primary domain controller, preventing the car from being operated, and/or activate video recording capabilities to capture a potential thief. Together, these capabilities greatly enhance security for the vehicle, reducing thefts and helping prevent damage from being done while the car is parked.
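The EMS behavior described above boils down to a simple decision rule. The sketch below is a generic illustration of that logic, assuming hypothetical event inputs and action names; it is not Lattice's actual firmware interface.

```python
# Illustrative sketch of the EMS decision logic described above. The input
# flags and action names are hypothetical, not Lattice's actual firmware API.

def ems_decide(person_detected, has_valid_key, approach_suspicious):
    """Return the low-power monitor's actions for one detection event."""
    actions = []
    if person_detected and not has_valid_key and approach_suspicious:
        actions.append("disable_domain_controller")  # immobilize the vehicle
        actions.append("start_video_recording")      # capture the approach
    return actions

print(ems_decide(True, False, True))
# → ['disable_domain_controller', 'start_video_recording']
print(ems_decide(True, True, False))
# → []  (keyholder approaching normally: no action)
```

The hard part in practice is computing `approach_suspicious` reliably from vision input at milliwatt power levels, which is exactly where the FPGA-hosted model comes in.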
In addition, for companies interested in either more customization or creating their own models, Lattice also offers a suite of software tools specifically designed to work with their FPGA hardware. The Lattice sensAI™ solution stack, in particular, provides a platform and several tools that companies can use to build and/or tailor their own AI applications and models.
Lattice sensAI Studio, for example, can be used to train off-the-shelf models with custom data and then compile the resulting ML model into firmware that can be embedded directly into the FPGA. In addition, Lattice’s design tools like Lattice Propel™ and Lattice Radiant™ can help with the physical design and layout of both the FPGA chip and a larger SoC it could be integrated into.
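The internals of sensAI Studio's compile step aren't described here, but toolchains that turn a trained float model into fixed-point firmware typically apply standard post-training quantization. The sketch below shows the common symmetric int8 scheme as generic math, an assumption about this class of tool, not Lattice's actual compiler.

```python
# Generic post-training quantization sketch: the symmetric int8 mapping that
# edge-AI toolchains commonly apply when compiling float weights for
# fixed-point hardware. Standard math, not Lattice's actual compiler.

def quantize_int8(weights):
    """Map float weights to int8 using one shared symmetric scale."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 values."""
    return [x * scale for x in q]

weights = [0.5, -1.27, 0.02]
q, scale = quantize_int8(weights)
print(q)  # → [50, -127, 2]
```

Shrinking weights from 32-bit floats to 8-bit integers is a large part of how a model fits into an FPGA's on-chip memory and runs within a milliwatt-class power budget.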
All these capabilities sit on top of the existing applications for which Lattice’s low power FPGA solutions have traditionally been deployed. So, for example, organizations that may be using FPGAs as a hub to connect and direct sensor input to other computing elements in a system could combine those functions with the AI-enhanced applications to deliver even more advanced features and functions.
There’s little doubt that the potential for on-device AI and ML is an exciting opportunity that can open up new possibilities in existing markets and even the potential for entirely new markets. Low power FPGAs, like the ones offered by Lattice Semiconductor, along with a full suite of software tools, can play a big part in enabling these new possibilities. So, if you’re in the process of trying to figure out how to integrate AI capabilities into devices and have a limited power budget, there’s a clear option you should add to your list of considerations.
Bob O’Donnell is the president and chief analyst of TECHnalysis Research, LLC, a market research firm that provides strategic consulting and market research services to the technology industry and professional financial community. You can follow him on Twitter @bobodtech.