The bandwidth demands on server processors have significantly outpaced the gains in
central processing unit (CPU) performance over the past few years. This is already affecting
cloud and network service providers, which are installing a growing number of
servers and data centers to meet customer demand. Virtualization is key to delivering many of these services but also increases the workload on server CPUs. Server performance can therefore be a critical factor in providing cost-effective cloud and networking services.
Domain-specific accelerators such as SmartNICs and machine-learning and inference coprocessors can offload processing from CPU cores, significantly increasing application performance and freeing additional CPU cores for other revenue-earning workloads. To deliver
the greatest benefit, these accelerators should integrate best-of-breed components such as
processors, hardware engines, memory, and I/O peripherals.
The Open Domain-Specific Accelerator (ODSA) Workgroup was launched in October 2018 by
seven companies to define an open architecture and related specifications for developing
chiplets. The group is working on a reference design and a complete protocol stack to support
domain-specific accelerator development. This white paper is based on inputs from the ODSA
Workgroup and individual member companies including Achronix, Avera Semiconductor,
Aquantia, ESnet, Kandou, Netronome, NXP, SamTec, Sarcina and SiFive.