A key benefit of using general-purpose processors to implement open RAN/vRAN is that the same platforms can support AI inference and other applications at the far edge of the network, such as cell site routers (CSRs), content delivery, and application hosting. These edge platforms can host virtualized applications closer to the user, offering significant benefits in terms of lower latency and shared infrastructure.
AI is becoming a critical application in many market areas, including wireless networks. The use of AI within wireless networks can significantly improve network performance and end-user quality of experience. AI training is usually hosted in data centers on high-power servers equipped with GPUs, FPGAs, or other acceleration hardware. Cloud-native AI inference, by contrast, can readily be moved to far-edge server platforms, reducing latency and sharing resources with open RAN baseband functions.
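As an illustration of how a cloud-native inference workload might be co-located with RAN functions, the sketch below shows a minimal Kubernetes Deployment pinned to far-edge nodes. All names here are assumptions for illustration: the `node-role/far-edge` label, the deployment name, and the container image are hypothetical, and the resource requests/limits would need tuning against the actual RAN baseband footprint on the shared platform.

```yaml
# Illustrative sketch only: label, image, and resource values are assumed.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: edge-ai-inference        # hypothetical workload name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: edge-ai-inference
  template:
    metadata:
      labels:
        app: edge-ai-inference
    spec:
      nodeSelector:
        node-role/far-edge: "true"   # assumed label marking far-edge nodes
      containers:
      - name: inference
        image: registry.example.com/ai/inference:latest  # placeholder image
        resources:
          requests:                  # declared so the scheduler can pack
            cpu: "2"                 # inference alongside RAN workloads
            memory: 4Gi
          limits:
            cpu: "4"
            memory: 8Gi
```

Declaring explicit CPU and memory requests and limits is what allows the scheduler to place inference pods on the same general-purpose servers as open RAN baseband functions without starving either workload.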