
GigaOm Sonar Report for Data Processing Units (DPU)

https://ift.tt/3FDPkiW

A data processing unit (DPU) is a hardware accelerator usually installed in commodity x86 servers as a generic PCIe card or as part of a SmartNIC (a sophisticated network interface card). Its main function is to offload specialized compute tasks from the general-purpose system CPU, improving the performance and efficiency of the entire infrastructure. DPUs help organizations build IT infrastructures that are denser, faster, more efficient, and more cost-effective, with the goal of delivering a better overall TCO. Figure 1 shows the difference between traditional servers (left) and those with DPUs (right).

Figure 1. Comparison of traditional and DPU-based server configurations
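To make the deployment model more concrete, the sketch below (a minimal, illustrative Python script, not part of the report) scans the PCIe bus of a Linux host with lspci and flags devices whose descriptions match a short keyword list of common DPU/SmartNIC product names. Both the keyword list and the reliance on lspci being installed are assumptions made purely for illustration.

```python
# Minimal sketch: list PCIe devices on a Linux host and flag likely
# DPU/SmartNIC candidates by keyword. Assumes `lspci` is installed;
# the keyword list is illustrative, not exhaustive.
import subprocess

DPU_KEYWORDS = ("bluefield", "pensando", "fungible", "smartnic", "dpu")

def list_pci_devices():
    """Return one line of `lspci` output per PCIe device."""
    result = subprocess.run(["lspci"], capture_output=True, text=True, check=True)
    return result.stdout.splitlines()

def find_dpu_candidates(lines):
    """Keep only devices whose description matches a DPU/SmartNIC keyword."""
    return [line for line in lines if any(k in line.lower() for k in DPU_KEYWORDS)]

if __name__ == "__main__":
    for device in find_dpu_candidates(list_pci_devices()):
        print("Possible DPU/SmartNIC:", device)
```

In practice, each vendor ships its own management tooling, so a keyword scan like this is only a rough first-pass inventory aid.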

Organizations can implement DPUs with field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), or proprietary systems on a chip (SoCs). The most sophisticated implementations use a combination of these methods. Each approach has benefits and trade-offs, making them more or less suitable for different use cases, though they share some characteristics, such as a high degree of internal parallelism to ensure consistently low latency and high throughput. Additionally, when building a viable solution for the enterprise, integration with the operating system (OS), hypervisor, and other software components is key. In some areas, such as high-performance computing (HPC) or AI, the availability of software development kits (SDKs) helps users customize the solution and get the most out of the hardware for the most demanding workloads.

These devices accelerate a number of tasks, including network and storage functions concerned with data protection, security, and encryption, as well as data footprint optimization and high availability. In some cases, these accelerators implement additional functions that replace common software data structures, such as key-value stores. They can also include programming libraries to support compute-intensive tasks that need high parallelism and throughput, at times competing with GPUs. More importantly, thanks to their field programmability, these devices are easy to upgrade and adapt to future needs while keeping protocols, algorithms, and features up to date.
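The offload-with-fallback pattern behind these accelerated functions can be sketched in a few lines of Python. The dpu_sdk module, its open_device() call, and the device-side compress() method below are hypothetical placeholders standing in for a vendor SDK; the point is simply the shape of the pattern: hand the task to the DPU when one is present, and let the host CPU do the work otherwise.

```python
# Illustrative offload-with-fallback sketch. `dpu_sdk` and its API are
# hypothetical placeholders, not a real vendor SDK.
import zlib

try:
    import dpu_sdk  # hypothetical vendor SDK binding
    _device = dpu_sdk.open_device(0)
except Exception:
    _device = None  # no DPU (or SDK) present: fall back to the host CPU

def compress(payload: bytes) -> bytes:
    """Compress data on the DPU when available, otherwise on the host CPU."""
    if _device is not None:
        # Offloaded path: the DPU does the work, freeing host CPU cycles.
        return _device.compress(payload)
    # Fallback path: plain CPU-side compression.
    return zlib.compress(payload)

if __name__ == "__main__":
    sample = b"example payload " * 1024
    print(f"{len(sample)} bytes in, {len(compress(sample))} bytes out")
```

Real SDKs typically expose richer primitives (queues, DMA buffers, completion events), but the dispatch decision looks broadly similar.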

How We Got Here

Accelerators have been integral to computers for a long time. Offloading tasks to specialized devices or circuits has often been used to speed up operations and remove bottlenecks.

In recent years, the adoption of new technologies in the data center, such as flash memory and faster networks, has made the CPU a bottleneck. Accelerators ease that bottleneck by taking on specialized tasks, preserving CPU power for applications without moving too much data around.

Accelerators have seen impressive development in both performance and capabilities. Previously, completing a task required the CPU to process every operation and move all the data itself, which led to unbalanced server architectures.

Data centers now require much more computing power and efficiency per square meter. With today’s accelerators, a single server can perform more tasks in parallel, increasing density and overall power efficiency in terms of watts consumed per unit of work done.

Before DPUs, other accelerators, like GPUs or FPGAs, had emerged but were relegated to specific vertical tasks. Now DPUs enable the acceleration of a large number of workloads thanks to their flexibility, programmability, and ease of use.

DPUs are still relatively new for enterprises. However, hyperscale cloud providers are already using accelerators to improve the efficiency and performance of their back ends and to provide bleeding-edge virtual machine instances to customers who require the best performance and optimization for their applications.

About the GigaOm Sonar Report

This GigaOm report is focused on emerging technologies and market segments. It helps organizations of all sizes understand a technology, its strengths and weaknesses, and how it can fit into an overall IT strategy. The report is organized into four sections:

Overview: An overview of the technology, its major benefits, possible use cases, and relevant characteristics of different product implementations already available in the market.

Considerations for Adoption: An analysis of the potential risks and benefits of introducing products based on this technology into an enterprise IT scenario, including table stakes and key differentiating features, as well as considerations on how to integrate the new product with the existing environment.

GigaOm Sonar: A graphical representation of the market and its most important players, focused on their value propositions and their roadmaps for the future. This section also includes a breakdown of each vendor’s offering in the sector.

Near-Term Roadmap: A 12- to 18-month forecast of the development of the technology, its ecosystem, and the major players in this market segment.

The post GigaOm Sonar Report for Data Processing Units (DPU) appeared first on Gigaom.



from Data Infrastructure, AI & Analytics – Gigaom https://ift.tt/3xdfJkp
via RiYo Analytics
