CryptoURANUS Economics: Hardware-FPGA Offloading-CPU-GPU: Cryptocurrency


Saturday, July 20, 2019

Hardware-FPGA Offloading-CPU-GPU: Cryptocurrency





Hardware-FPGA Offloading-CPU-GPU: The industry-standard, preferred TCP/IP, CPU, and GPU hardware-offloading card is built around the Xilinx XCVU440 chip on the HTG-840 card. There is no other work engine like it in the Xilinx Virtex UltraScale line-up: no more powerful FPGA exists in that series, and it is widely used by cryptocurrency mining farms globally. This is a closely held secret in the cryptocurrency mining industry and is rarely put forward for public scrutiny. Research bears this out, so it is a big crypto-mining secret no more!



Many industrial applications can benefit from Ethernet and TCP/IP, as they are well-known and widely supported networking standards.

These industrial applications require higher bandwidth and real-time behaviour, meaning low-latency network response; avoiding high-latency lag is the focus of this article.

Networking becomes a challenge when a TCP/IP stack running at maximum bandwidth overloads the CPU, and this is very often the problem.

These increasing requirements waste processor resources: the CPU spends more time handling network data than running the actual data-center or application workload.
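To make that overhead concrete, here is a minimal sketch (in C, using ordinary Berkeley sockets) of the conventional software path: a blocking receive loop in which every incoming byte is copied through the kernel's TCP/IP stack and charged to the CPU. The port number and buffer size are arbitrary choices for illustration and have nothing to do with the Easics core.

    /* Minimal software TCP sink: every received byte passes through the
     * kernel TCP/IP stack and consumes CPU cycles for copies, checksums
     * and interrupt handling. Port 5001 and the 64 KiB buffer are
     * arbitrary illustrative choices. */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int srv = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(5001);

        bind(srv, (struct sockaddr *)&addr, sizeof(addr));
        listen(srv, 1);

        int conn = accept(srv, NULL, NULL);   /* one client at a time */
        char buf[65536];
        long long total = 0;
        ssize_t n;

        /* The CPU is busy in this loop for as long as data is arriving. */
        while ((n = recv(conn, buf, sizeof(buf), 0)) > 0)
            total += n;

        printf("received %lld bytes\n", total);
        close(conn);
        close(srv);
        return 0;
    }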

The Easics TCP Offload Engine (TOE) can be used to offload the TCP/IP stack from the CPU and implement it in FPGA or ASIC hardware.

This core configuration is an all-hardware IP block. It acts as a TCP server for sending and receiving TCP/IP data.
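Because the core presents itself as an ordinary TCP server, a host can exchange data with it using nothing more than a standard sockets client. The sketch below shows such a client in C; the address 192.168.1.10 and port 5000 are placeholder assumptions, not values defined by the Easics core.

    /* Minimal TCP client talking to the FPGA core, which presents itself
     * as a normal TCP server. The address 192.168.1.10:5000 is only a
     * placeholder; use whatever the core is actually configured for. */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in toe;
        memset(&toe, 0, sizeof(toe));
        toe.sin_family = AF_INET;
        toe.sin_port = htons(5000);
        inet_pton(AF_INET, "192.168.1.10", &toe.sin_addr);

        if (connect(fd, (struct sockaddr *)&toe, sizeof(toe)) < 0) {
            perror("connect");
            return 1;
        }

        const char msg[] = "hello, offload engine";
        send(fd, msg, sizeof(msg) - 1, 0);              /* payload toward the TX path */

        char reply[1500];
        ssize_t n = recv(fd, reply, sizeof(reply), 0);  /* data from the RX path */
        if (n > 0)
            printf("received %zd bytes back\n", n);

        close(fd);
        return 0;
    }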

In this configuration everything is handled in FPGA hardware, so very high throughput and low latency are possible.

The IP block is completely self-sufficient and can be used as a black box module which takes care of all networking tasks.

This means that the rest of the system is freed from networking load: zero percent of the CPU is spent on the TCP/IP stack, and all processing power is available for application logic.

In some application/hardware cases, integrating a full-hardware TCP/IP stack eliminates the need for any embedded processor, because the highly efficient, low-latency FPGA serves as the work engine far better than standard off-the-shelf ICs.


The easics TCP Offload Engine is available as a 1 Gbit/s or 10 Gbit/s version. Both versions support Ethernet packets, IP packets, ICMP packets for ping, TCP packets and ARP packets. The 10 Gbit/s version additionally supports pause frames.
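As a rough illustration of what the core has to recognise on the wire, the following sketch classifies an incoming Ethernet frame by its EtherType and IPv4 protocol field, covering the packet types listed above (ARP, IP, ICMP for ping, TCP, and MAC-control/pause frames). It is a plain software illustration, not part of the Easics IP.

    /* Toy classifier for the frame types the offload engine handles.
     * EtherType and IP-protocol constants are the standard values; the
     * parsing is deliberately minimal and for illustration only. */
    #include <stdint.h>
    #include <stdio.h>

    enum frame_kind { FRAME_ARP, FRAME_ICMP, FRAME_TCP, FRAME_PAUSE, FRAME_OTHER };

    static enum frame_kind classify(const uint8_t *frame, int len)
    {
        if (len < 14)
            return FRAME_OTHER;

        uint16_t ethertype = (frame[12] << 8) | frame[13];

        if (ethertype == 0x0806)                  /* ARP */
            return FRAME_ARP;
        if (ethertype == 0x8808)                  /* MAC control, e.g. pause frames */
            return FRAME_PAUSE;
        if (ethertype == 0x0800 && len >= 34) {   /* IPv4 */
            uint8_t proto = frame[23];            /* IPv4 protocol field */
            if (proto == 1)                       /* ICMP (ping) */
                return FRAME_ICMP;
            if (proto == 6)                       /* TCP */
                return FRAME_TCP;
        }
        return FRAME_OTHER;
    }

    int main(void)
    {
        /* 14-byte Ethernet header with EtherType 0x0806 (ARP), no payload. */
        uint8_t arp_frame[14] = { [12] = 0x08, [13] = 0x06 };
        printf("kind = %d\n", classify(arp_frame, sizeof(arp_frame)));
        return 0;
    }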




IP Core Architecture:
The figure below shows the core’s building blocks and its four most important interfaces. The first of these is an industry-standard (X)GMII interface, which communicates with a 1 Gbit (GMII) or 10 Gbit (XGMII) PHY.

The second is situated on the application side: two FIFOs with a simple push/pop interface, one for RX and one for TX. These FIFO interfaces, as well as an internal TCP block, communicate with a memory system which is to be provided outside of the core (the third interface).
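Seen from the application logic, the RX and TX FIFOs reduce to a very small push/pop contract. The fragment below sketches that contract as a memory-mapped register interface in C for a bare-metal environment; the base address, register offsets, status bits, and names are purely hypothetical and only illustrate the idea, not the core's actual register map.

    /* Hypothetical bare-metal view of the TX/RX FIFO interfaces as seen by
     * application logic. All addresses, names and status bits are invented
     * for illustration; consult the core's documentation for the real map. */
    #include <stdint.h>

    #define TOE_BASE        0x43C00000u          /* assumed AXI base address */
    #define TX_FIFO_DATA    (*(volatile uint32_t *)(TOE_BASE + 0x00))
    #define TX_FIFO_STATUS  (*(volatile uint32_t *)(TOE_BASE + 0x04))
    #define RX_FIFO_DATA    (*(volatile uint32_t *)(TOE_BASE + 0x08))
    #define RX_FIFO_STATUS  (*(volatile uint32_t *)(TOE_BASE + 0x0C))

    #define FIFO_FULL   (1u << 0)                /* assumed status bits */
    #define FIFO_EMPTY  (1u << 1)

    /* Push one word into the TX FIFO (waits while the FIFO is full). */
    static void toe_push(uint32_t word)
    {
        while (TX_FIFO_STATUS & FIFO_FULL)
            ;                                    /* wait for space */
        TX_FIFO_DATA = word;
    }

    /* Pop one word from the RX FIFO (waits while the FIFO is empty). */
    static uint32_t toe_pop(void)
    {
        while (RX_FIFO_STATUS & FIFO_EMPTY)
            ;                                    /* wait for data */
        return RX_FIFO_DATA;
    }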

The size and type of memory can be selected by the user. ARM’s AMBA AXI4 is the protocol used for this communication.

Various FPGA vendors, such as Xilinx and Intel, provide building blocks to interface internal block RAM, SRAM, or DRAM with an AXI bus.

The fourth and final interface is used to configure various networking parameters and to read status info.
[Figure: TCP Offload Engine block diagram]
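As an illustration of what such a configuration interface typically carries, the structure below groups the kind of networking parameters (MAC address, IPv4 address, gateway, netmask, listening port) an application would write before enabling the core. The field names and example values are assumptions made for illustration, not the core's documented parameter set.

    /* Hypothetical bundle of networking parameters written through the
     * configuration interface before the core is enabled. Field names
     * and values are illustrative assumptions only. */
    #include <stdint.h>

    struct toe_config {
        uint8_t  mac_addr[6];    /* local MAC address */
        uint32_t ip_addr;        /* local IPv4 address */
        uint32_t gateway;        /* default gateway */
        uint32_t netmask;        /* subnet mask */
        uint16_t tcp_port;       /* TCP port the core listens on */
    };

    static const struct toe_config example_cfg = {
        .mac_addr = { 0x02, 0x00, 0x00, 0x00, 0x00, 0x01 },  /* locally administered */
        .ip_addr  = 0xC0A8010A,  /* 192.168.1.10  */
        .gateway  = 0xC0A80101,  /* 192.168.1.1   */
        .netmask  = 0xFFFFFF00,  /* 255.255.255.0 */
        .tcp_port = 5000,
    };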



Performance on the Xilinx ZC706:
 

The following data-throughput numbers have been measured on a Xilinx ZC706:
1G TCP (Mbps)
MTU  | TX  | CPU (%) | RX  | CPU (%)
1500 | 905 | 0       | 949 | 0

10G TCP (Gbps)
MTU  | TX   | CPU (%) | RX   | CPU (%)
1500 | 9.18 | 0       | 9.35 | 0
9000 | 9.69 | 0       | 9.73 | 0

The data throughput is thus higher for MTU=9000 (jumbo frames). CPU load is 0% since the full TCP/IP connectivity is in the FPGA (full hardware acceleration).
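These numbers line up with simple protocol-overhead arithmetic: each segment carries 40 bytes of IPv4 and TCP headers plus 38 bytes of per-frame Ethernet overhead (preamble, header, FCS, and inter-frame gap), so a larger MTU wastes proportionally less of the 10 Gbit/s line rate. The short calculation below (assuming IPv4 and no TCP options) reproduces the theoretical ceilings.

    /* Theoretical goodput ceiling of a 10 Gbit/s link for a given MTU,
     * assuming IPv4 + TCP with no options (40 bytes of L3/L4 headers)
     * and 38 bytes of per-frame Ethernet overhead
     * (preamble 8 + header 14 + FCS 4 + inter-frame gap 12). */
    #include <stdio.h>

    static double goodput_gbps(double line_rate_gbps, int mtu)
    {
        double payload = mtu - 40;        /* TCP payload bytes per frame */
        double wire    = mtu + 38;        /* bytes on the wire per frame */
        return line_rate_gbps * payload / wire;
    }

    int main(void)
    {
        printf("MTU 1500: %.2f Gbit/s max goodput\n", goodput_gbps(10.0, 1500));
        printf("MTU 9000: %.2f Gbit/s max goodput\n", goodput_gbps(10.0, 9000));
        return 0;
    }

It prints roughly 9.49 Gbit/s for MTU 1500 and 9.91 Gbit/s for MTU 9000, ceilings which the measured 9.18 to 9.35 and 9.69 to 9.73 Gbit/s approach closely.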
The following latency numbers have been measured for the 10G TOE in simulation, making use of the Xilinx transceiver models (which are responsible for 160 ns of the latency):
  • TX latency = 656 ns
  • RX latency = 640 ns
  • Round Trip Time (RTT) = 1.3 µs
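These one-way figures are consistent with the quoted round trip: 656 ns + 640 ns = 1,296 ns, i.e. roughly 1.3 µs.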


Reference Information Source: Easics
