Tackling Network Bottlenecks with Hardware Accelerations: Cloud vs. On-Premise

Published by Spark开源社区 · 8232 views
The ever-growing influx of data causes every component in a system to burst at the seams. GPUs and ASICs are helping on the compute side, while in-memory and flash storage devices keep up with local IOPS. All of these can perform extremely well in smaller setups and under contained workloads. However, today's workloads demand ever more power, which translates directly into higher scale. Training major AI models can no longer fit into humble setups. Streaming ingestion systems are barely keeping up with the load. These are just a few examples of why enterprises require a massive, versatile infrastructure that continuously grows and scales. The problems start when workloads are scaled out, revealing how poorly traditional network infrastructures cope with bandwidth-hungry and latency-sensitive applications. In this talk, we dive into how intelligent hardware offloads can mitigate network bottlenecks in Big Data and AI platforms, and compare the offerings and performance of what's available in the major public clouds, as well as à la carte on-premise solutions.
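The bandwidth pressure described above is easy to quantify with back-of-the-envelope math. The sketch below (the 1 TB workload size and 90% link-efficiency figure are illustrative assumptions, not numbers from the talk) compares moving a shuffle-sized dataset over 10 GbE versus 100 GbE:

```python
def transfer_seconds(data_gb: float, link_gbps: float, efficiency: float = 0.9) -> float:
    """Estimate seconds to move data_gb gigabytes over a link_gbps link.

    efficiency accounts for protocol overhead and imperfect utilization
    (0.9 is an assumed value for illustration).
    """
    gigabits = data_gb * 8  # convert gigabytes to gigabits
    return gigabits / (link_gbps * efficiency)

# Moving 1 TB of intermediate (e.g. shuffle) data:
print(f"10 GbE:  {transfer_seconds(1000, 10):.0f} s")   # roughly 15 minutes
print(f"100 GbE: {transfer_seconds(1000, 100):.0f} s")  # roughly 1.5 minutes
```

A 10x faster link cuts the transfer linearly, which is why scaled-out, bandwidth-hungry workloads feel the network fabric long before they exhaust local compute or storage.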