2006 IEEE International Conference on Cluster Computing
DOI: 10.1109/clustr.2006.311915

Initial Performance Evaluation of the NetEffect 10 Gigabit iWARP Adapter


Cited by 17 publications (4 citation statements)
References 7 publications
“…If the RDMA flow control credits of a connection are used up without being released by the receiver, the communication falls back on the Send/Receive channel. Some work has also looked at the cost of memory registration in RDMA-enabled networks, especially its high costs for small buffers [11,23]. In a recent work presented in [24], researchers have proposed a pinning model in Open-MX based on the decoupling of memory pinning from the application, as a step toward a reliable pinning cache in the kernel.…”
Section: Related Work
confidence: 98%
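
The credit-based fallback this statement describes can be sketched in a few lines of C. Everything below (the conn struct, post_rdma_write, post_send) is hypothetical scaffolding standing in for real verbs work-request postings, not the cited implementation:

    #include <stdio.h>

    /* Minimal sketch of the credit-based scheme described above; all
     * names are hypothetical. The sender consumes one credit per RDMA
     * write and, once the receiver has not yet returned credits, falls
     * back to the two-sided Send/Receive channel. */

    struct conn {
        int rdma_credits;   /* credits granted by the receiver */
    };

    /* Stubs standing in for real verbs work-request postings. */
    static void post_rdma_write(struct conn *c, const char *msg)
    {
        printf("RDMA write (credits left: %d): %s\n", c->rdma_credits, msg);
    }

    static void post_send(struct conn *c, const char *msg)
    {
        (void)c;
        printf("Send/Receive fallback: %s\n", msg);
    }

    static void transmit(struct conn *c, const char *msg)
    {
        if (c->rdma_credits > 0) {
            c->rdma_credits--;          /* consume a credit */
            post_rdma_write(c, msg);    /* one-sided fast path */
        } else {
            post_send(c, msg);          /* credits exhausted: two-sided path */
        }
    }

    int main(void)
    {
        struct conn c = { .rdma_credits = 2 };
        transmit(&c, "msg 1");
        transmit(&c, "msg 2");
        transmit(&c, "msg 3");  /* no credits left: falls back */
        return 0;
    }

Consuming a credit per one-sided write and reverting to the two-sided channel keeps the receiver from being overrun without stalling the sender.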
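The small-buffer registration cost cited in [11,23] shows up directly at the verbs API level. The following is a minimal, hypothetical microbenchmark sketch in C, assuming a libibverbs-capable adapter; device setup and error handling are abbreviated, and the buffer size is illustrative:

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>
    #include <infiniband/verbs.h>

    /* Times a single registration of a small buffer to expose the
     * per-registration pinning overhead discussed above.
     * Compile with -libverbs. */
    int main(void)
    {
        struct ibv_device **devs = ibv_get_device_list(NULL);
        if (!devs || !devs[0]) { fprintf(stderr, "no RDMA device\n"); return 1; }

        struct ibv_context *ctx = ibv_open_device(devs[0]);
        struct ibv_pd *pd = ibv_alloc_pd(ctx);

        size_t len = 4096;              /* small buffer: worst relative cost */
        void *buf = malloc(len);

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        /* Registration pins the pages and installs an address translation
         * on the adapter; this is the step whose cost the cited work
         * measures. */
        struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                       IBV_ACCESS_LOCAL_WRITE |
                                       IBV_ACCESS_REMOTE_WRITE);
        clock_gettime(CLOCK_MONOTONIC, &t1);
        if (!mr) { perror("ibv_reg_mr"); return 1; }

        printf("registered %zu bytes in %.1f us\n", len,
               (t1.tv_sec - t0.tv_sec) * 1e6 +
               (t1.tv_nsec - t0.tv_nsec) / 1e3);

        ibv_dereg_mr(mr);               /* unpins the pages */
        free(buf);
        ibv_dealloc_pd(pd);
        ibv_close_device(ctx);
        ibv_free_device_list(devs);
        return 0;
    }

Because this setup and teardown is paid per registration, the relative overhead is largest for small buffers, which is what motivates the pinning-cache work in [24].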
“…The most common Type 1 implementations in the research community are open source Open-iSCSI [6] and UNH-iSCSI projects [10]. Examples of Type 2 are ASIC-based 10 GbE TOEs: Chelsio's Terminator 3 chip [11] and NetEffect's NE010 adapter [12]. Both adapters show low CPU utilization and near 10 Gbps performance, especially for larger data sizes.…”
Section: Related Work
confidence: 99%
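
Because a TOE such as the Terminator 3 or NE010 offloads the TCP/IP stack transparently, throughput microbenchmarks of the kind cited here need nothing beyond ordinary sockets. A minimal sketch, with an illustrative peer address, port, iteration count, and message size:

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    /* Plain TCP send loop: on a TOE-equipped host the same unmodified
     * code runs with the TCP/IP processing offloaded to the adapter,
     * which is what yields the low CPU utilization reported above. */
    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        struct sockaddr_in addr = { 0 };
        addr.sin_family = AF_INET;
        addr.sin_port = htons(5001);                        /* example port */
        inet_pton(AF_INET, "192.168.1.2", &addr.sin_addr);  /* example peer */

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            perror("connect");
            return 1;
        }

        size_t msg_size = 64 * 1024;   /* larger messages favor the TOE */
        char *buf = calloc(1, msg_size);
        for (int i = 0; i < 10000; i++) {
            ssize_t n = send(fd, buf, msg_size, 0);  /* offloaded on a TOE */
            if (n < 0) { perror("send"); break; }
        }

        free(buf);
        close(fd);
        return 0;
    }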
“…Currently, 10 GBit/s networking capability is supported by a number of commercial products, with a range of interconnect technologies. Specific examples include: (1) 10GbE offerings from Myricom [4] and Chelsio [2], (2) traditional cluster-interconnect technologies from Myricom [7] and Quadrics [24,23], and (3) interconnects based on the Infiniband standards [1] from Mellanox [3], NetEffect [9], and other vendors. [15] presents a representative performance comparison of such interconnects.…”
Section: Introduction
confidence: 99%