Microsoft Corporation
Come join us in blazing the trail for FPGA-based AI acceleration at datacenter scale!
By leveraging the massive fine-grained parallelism of current and future FPGAs, reconfigurable computing can radically accelerate many types of computation.
We are applying this technology to challenging applications important to Microsoft and our customers, including Bing and Office.
Our work includes developing, optimizing, deploying, and maintaining FPGA accelerators, infrastructure, and tools.
Our current focus is on accelerating deep neural networks (DNNs) via the Brainwave accelerator architecture.
We are heavily involved in developing, extending, and deploying Brainwave, and in bringing DNN models to production on FPGAs using Brainwave.
Working with Microsoft model developers to deploy their models on Brainwave gives us the rare opportunity to work across the whole DNN stack, from cutting-edge DNN models, techniques, and frameworks down to the accelerator architecture itself.
This cutting-edge development takes place in the context of a live, global-scale distributed system that touches millions of users daily, running on the world’s largest distributed FPGA-accelerated hardware platform.