Putting CPUs next to hard drives or SSDs enables a new approach to large-scale scale-out infrastructures, improving overall efficiency while shrinking failure domains.
Enrico describes how to overcome the limitations imposed by fat servers and how to scale storage and compute for next-generation workloads.
The concept of offloading some CPU tasks to the storage infrastructure is not entirely new, but now, thanks to the power and efficiency of modern CPUs and smaller SoC (system-on-chip) designs, it is possible to do more and bring the CPU closer to the data instead of the opposite. Each storage device can perform computational tasks that, in the past, were carried out by the host CPU. Data no longer has to be moved, which saves bandwidth, increases parallelism, and improves overall system efficiency. With an adequate amount of RAM and connectivity, these devices can execute many simple operations locally, keeping data in place and minimizing latency. At the same time, each device constitutes a smaller failure domain, and this improves the scalability of the entire infrastructure.
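The idea described above can be illustrated with a minimal sketch. The `SmartDrive` class, the device names, and the `near_data_query` helper below are all hypothetical: they simulate storage devices that each run a simple predicate on their own data shard, so only the matching records, rather than every record, are returned to the host.

```python
from concurrent.futures import ThreadPoolExecutor

class SmartDrive:
    """Hypothetical computational storage device holding a local data shard."""
    def __init__(self, name, records):
        self.name = name
        self.records = records

    def local_filter(self, predicate):
        # The filter runs "on the device"; only matching records leave it.
        return [r for r in self.records if predicate(r)]

def near_data_query(drives, predicate):
    # The host fans the predicate out to all drives in parallel and merges
    # only the (much smaller) result sets, instead of pulling every record.
    with ThreadPoolExecutor(max_workers=len(drives)) as pool:
        partials = pool.map(lambda d: d.local_filter(predicate), drives)
        return [r for part in partials for r in part]

drives = [
    SmartDrive("ssd0", [{"id": 1, "temp": 20}, {"id": 2, "temp": 75}]),
    SmartDrive("ssd1", [{"id": 3, "temp": 80}, {"id": 4, "temp": 15}]),
]
hot = near_data_query(drives, lambda r: r["temp"] > 70)
print(sorted(r["id"] for r in hot))  # → [2, 3]
```

In a real deployment the per-device filtering would run on the drive's own SoC rather than in host threads; the sketch only shows the data-movement pattern, where each device contributes a small result set and the bulk of the data never crosses the bus.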