SEMICON West 2019 - AI at the Edge
The Internet of Things #IoT is growing up. The gold rush to put wirelessly connected sensors everywhere is well underway. As many companies have discovered, you quickly end up with a huge pile of semi-structured data but little real intelligence, especially if nobody is dedicated to examining and actually using that data. On top of it all, not all sensor data has the same value. Enter the next wave of investment: the ability to task #AI with learning from (training) and actually acting on (inference) all of this sensor data. Here's the thing: it may not make business sense to send all of this data to the cloud for analysis.
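To make the training/inference split concrete, here is a toy sketch in Python. Everything in it is illustrative (the sensor values, the threshold, the function names are all made up for this post, not any vendor's pipeline): "training" learns what normal readings look like, and "inference" acts on each new reading as it arrives.

```python
import numpy as np

# --- Training: learn what "normal" looks like from historical readings ---
# (toy data standing in for logged sensor values; loc/scale are made up)
history = np.random.normal(loc=21.0, scale=0.5, size=10_000)
mean, std = history.mean(), history.std()

# --- Inference: act on each new reading as it arrives at the edge ---
def is_anomaly(reading, k=3.0):
    """Flag readings more than k standard deviations from the learned mean."""
    return abs(reading - mean) > k * std

for reading in (21.2, 20.8, 27.5):
    print(reading, "->", "ALERT" if is_anomaly(reading) else "ok")
```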
Let's take a look at investment and market trends (courtesy of SEMI via McKinsey, IDC, Gartner, and Objective Analysis).
Cloud hardware investments have stalled
Memory prices are falling and projected to keep falling for anywhere from 9 months to 2 1/2 years (excess inventory and significant oversupply, plus a price trajectory held well above cost for one of the longest periods in history; both are situations that always correct themselves #CommodityPricing)
Automotive semiconductor hardware is off to the races
Industrial applications are also gaining speed
What does this add up to? A trend away from data-center computing and toward distributed computing. Instinctively, we know this already. Self-driving cars need to keep learning and adapting. IoT needs to actually consume and apply data to deliver any of its potential value. Why on-location (a.k.a. edge) computing rather than data-center/cloud-based analysis? The short answers are bandwidth, latency, and power. 5G implementations are still years away, and current IoT data rates would swamp existing networks. Imagine if your self-driving car had to send all of its data to servers at Tesla to know whether you could change lanes. On-location (edge) computing makes a lot of sense, and this is an area of heavy investment growth.
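A quick back-of-envelope comparison shows why. Every number below is an assumption I picked for illustration (per-camera data rates, sensor counts, uplink capacity, round-trip times), not a measurement from any real vehicle or network:

```python
# Back-of-envelope: ship raw sensor data to the cloud vs. process at the edge.
# All figures are illustrative assumptions, not measurements.

CAMERA_MBPS = 8 * 2 * 30        # one camera: 2 MB/frame * 30 fps -> 480 Mb/s raw
SENSORS_PER_CAR = 8             # assumed camera/radar/lidar feeds per vehicle
UPLINK_MBPS = 50                # assumed available cellular uplink per vehicle
CLOUD_RTT_MS = 100              # assumed network round trip to a data center
CONTROL_DEADLINE_MS = 10        # assumed deadline for a lane-change decision

demand = CAMERA_MBPS * SENSORS_PER_CAR
print(f"Raw sensor demand: {demand} Mb/s vs uplink {UPLINK_MBPS} Mb/s "
      f"({demand / UPLINK_MBPS:.0f}x oversubscribed)")
print(f"Cloud round trip {CLOUD_RTT_MS} ms vs control deadline {CONTROL_DEADLINE_MS} ms:",
      "misses deadline" if CLOUD_RTT_MS > CONTROL_DEADLINE_MS else "ok")
```

Even with these generous assumptions, the raw feed oversubscribes the uplink by nearly two orders of magnitude, and the network round trip alone blows the control deadline before any computation happens. That is the bandwidth-and-latency case for the edge in two lines of arithmetic.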
To get to a place where edge and cloud-based AI can realize their potential value, more innovation is needed. Gary Dickerson, CEO of Applied Materials, cites a need to improve Power, Performance, Area, and Cost (PPAC) by anywhere from 50x to 1000x over today's capabilities to enable AI at the edge and in the cloud for the applications already underway. We are currently on pace for 10% of all electrical power produced in the world to be consumed by AI training and inference (think 500M sensor systems running at 5W-8W = >1TW). Historically we have met the innovation challenge. The example Gary used was an iPhone built with 1980s technology: think of a phone 18m tall, drawing 600kW, and costing $100M. And you thought your iPhone XS was expensive…
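The fleet-power arithmetic is worth making explicit. The device count and wattage are the figures quoted above; the arithmetic is my own back-of-envelope. Taken literally, 500M devices at 5W-8W lands in the low gigawatts, so a terawatt-class load presumably also counts the data-center training clusters behind those devices and a fleet growing well beyond 500M nodes:

```latex
P_{\text{fleet}} = N \cdot P_{\text{device}}
                 = 5\times10^{8} \times (5\,\mathrm{W}\ \text{to}\ 8\,\mathrm{W})
                 \approx 2.5\ \text{to}\ 4\ \mathrm{GW},
\qquad
N_{1\,\mathrm{TW}} = \frac{10^{12}\,\mathrm{W}}{8\,\mathrm{W}}
                   \approx 1.25\times10^{11}\ \text{devices}.
```

Either way, the direction of the claim stands: at these device counts and power levels, AI's share of world electricity becomes a first-order engineering constraint.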
Solutions are popping up rapidly to extend AI implementation, including heavy VC investment in AI silicon companies designing chips optimized for AI computing across different families of solutions. One of the biggest families is edge computing with low power, on-location training capability, and 5G integration. Adoption of AI analysis tools will need to follow. At the end of the day, AI will find its way into more and more aspects of our lives and be available in more locations. We are basically one big step closer to serving our future robot overlords.