Microsoft uses Intel FPGAs for real-time AI in the cloud
- Author: Ella Cai
- Released on: 2017-08-24
Microsoft has selected Intel’s Stratix 10 FPGAs as the main hardware accelerator in its new accelerated deep learning platform – code-named Project Brainwave.
The aim is to use the accelerated deep learning platform to deliver artificial intelligence in real time for its cloud services. This would support services such as facial and voice recognition on smartphones, as well as cloud-based data processing for autonomous driving.
Real-time AI is important for processing live data streams, including video, sensor feeds or search queries, and rapidly delivering results back to users.
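To make the latency requirement concrete, the minimal sketch below shows what a client of such a real-time inference service might look like, assuming a generic HTTP endpoint: the URL, payload fields and response format are illustrative placeholders, not Project Brainwave's actual interface.

```python
# Hypothetical sketch: timing a single real-time inference request against a
# cloud-hosted, FPGA-accelerated model endpoint. The URL, payload fields and
# response format are illustrative assumptions, not Project Brainwave's API.
import json
import time
import urllib.request

ENDPOINT = "https://example-cloud-service/api/v1/infer"  # placeholder URL


def infer(frame_features):
    """Send one live-stream sample (e.g. features from a video frame) and
    return the model output plus the observed round-trip latency."""
    payload = json.dumps({"inputs": frame_features}).encode("utf-8")
    request = urllib.request.Request(
        ENDPOINT,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    start = time.perf_counter()
    with urllib.request.urlopen(request, timeout=1.0) as response:
        result = json.load(response)
    latency_ms = (time.perf_counter() - start) * 1000.0
    return result, latency_ms


if __name__ == "__main__":
    # A single dummy request; a real-time workload would stream these
    # continuously and expect low, predictable latency from the accelerator.
    output, latency_ms = infer([0.0] * 128)
    print(f"model output: {output}, round-trip latency: {latency_ms:.1f} ms")
```

In a real-time setting the figure of interest is the latency of each individual request rather than aggregate batch throughput, which is the metric Microsoft highlights for Brainwave.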
Microsoft first demonstrated its FPGA-based deep learning platform at Hot Chips 2017, a symposium that showcases the latest advancements in semiconductor technology.
FPGAs bring the flexibility of customisation to deep learning hardware acceleration, whereas fixed-function accelerators are typically optimised to run a single workload.
Microsoft’s Project Brainwave has demonstrated over 39 teraflops of achieved performance on a single request, setting a new standard for real-time AI computation in the cloud.
“We exploit the flexibility of Intel FPGAs to incorporate new innovations rapidly, while offering performance comparable to, or greater than, many ASIC-based deep learning processing units,” said Doug Burger of Microsoft Research NExT.
Microsoft is currently working to deploy Project Brainwave in the Azure cloud service.