The DEGIMA (DEstination for GPU Intensive MAchine) is a high-performance computer cluster used for hierarchical N-body simulations at the Nagasaki Advanced Computing Center, Nagasaki University.
The system consists of a 144-node cluster of PCs connected via an InfiniBand interconnect. Each node is composed of a 2.66 GHz Intel Core i7 920 processor, two GeForce GTX 295 graphics cards, 12 GB of DDR3-1333 memory and a Mellanox MHES14-XTC SDR InfiniBand host adapter on an MSI X58 Pro-E motherboard. Each graphics card has two GT200 GPU chips. As a whole, the system has 144 CPUs and 576 GPUs. It runs astrophysical N-body simulations with over 3,000,000,000 particles using
A Fibre Channel vendor. At the 2011 International Supercomputing Conference, links running at about 56 gigabits per second (known as FDR, see below) were announced and demonstrated by connecting booths in the trade show. In 2012, Intel acquired QLogic's InfiniBand technology, leaving only one independent supplier. By 2014, InfiniBand was the most popular internal connection technology for supercomputers, although within two years, 10 Gigabit Ethernet started displacing it. In 2016, it
A switched fabric topology, as opposed to early shared-medium Ethernet. All transmissions begin or end at a channel adapter. Each processor contains a host channel adapter (HCA) and each peripheral has a target channel adapter (TCA). These adapters can also exchange information for security or quality of service (QoS). InfiniBand transmits data in packets of up to 4 KB that are taken together to form
A 0.8 mm pitch. Each row has eight contacts, a gap equivalent to four contacts, then a further 18 contacts. Boards have a thickness of 1.0 mm, excluding the components. A "Half Mini Card" (sometimes abbreviated as HMC) is also specified, having approximately half the physical length of 26.8 mm. There are also half-size mini PCIe cards that are 30 × 31.90 mm, which is about half the length of
A choice of BSD license for Windows. It has been adopted by most of the InfiniBand vendors, for Linux, FreeBSD, and Microsoft Windows. IBM refers to a software library called libibverbs, for its AIX operating system, as well as "AIX InfiniBand verbs". The Linux kernel support was integrated in 2005 into kernel version 2.6.11. Ethernet over InfiniBand, abbreviated to EoIB, is an Ethernet implementation over
A failure tolerance in case bad or unreliable lanes are present. The PCI Express standard defines link widths of x1, x2, x4, x8, and x16. Up to and including PCIe 5.0, x12 and x32 links were defined as well but never used. This allows the PCI Express bus to serve both cost-sensitive applications where high throughput is not needed, and performance-critical applications such as 3D graphics, networking (10 Gigabit Ethernet or multiport Gigabit Ethernet), and enterprise storage (SAS or Fibre Channel). Slots and connectors are only defined for
A full-size mini PCIe card. PCI Express Mini Card edge connectors provide multiple connections and buses: Despite sharing the Mini PCI Express form factor, an mSATA slot is not necessarily electrically compatible with Mini PCI Express. For this reason, only certain notebooks are compatible with mSATA drives. Most compatible systems are based on Intel's Sandy Bridge processor architecture, using
A message. A message can be: In addition to a board form factor connection, it can use both active and passive copper (up to 10 meters) and optical fiber cable (up to 10 km). QSFP connectors are used. The InfiniBand Association also specified the CXP connector system for speeds up to 120 Gbit/s over copper, active optical cables, and optical transceivers using parallel multi-mode fiber cables with 24-fiber MPO connectors. Mellanox operating system support
A more detailed error detection and reporting mechanism (Advanced Error Reporting, AER), and native hot-swap functionality. More recent revisions of the PCIe standard provide hardware support for I/O virtualization. The PCI Express electrical interface is measured by the number of simultaneous lanes. (A lane is a single send/receive line of data, analogous to a road having one lane of traffic in each direction.) The interface
A parallel interface traveling through conductors of different lengths, on potentially different printed circuit board (PCB) layers, and at possibly different signal velocities. Despite being transmitted simultaneously as a single word, signals on a parallel interface have different travel durations and arrive at their destinations at different times. When the interface clock period is shorter than
A slot of its physical size or larger (with x16 as the largest used), but may not fit into a smaller PCI Express slot; for example, a x16 card may not fit into a x4 or x8 slot. Some slots use open-ended sockets to permit physically longer cards and negotiate the best available electrical and logical connection. The number of lanes actually connected to a slot may also be fewer than the number supported by
a slot to transmit video signals from the host CPU's integrated graphics instead of PCIe, using a supported add-in. The PCIe transaction-layer protocol can also be used over some other interconnects, which are not electrically PCIe: While in early development, PCIe was initially referred to as HSI (for High Speed Interconnect), and underwent a name change to 3GIO (for 3rd Generation I/O) before finally settling on its PCI-SIG name PCI Express. A technical working group named
a subset of these widths, with link widths in between using the next larger physical slot size. As a point of reference, a PCI-X (133 MHz 64-bit) device and a PCI Express 1.0 device using four lanes (x4) have roughly the same peak single-direction transfer rate of 1064 MB/s. The PCI Express bus has the potential to perform better than the PCI-X bus in cases where multiple devices are transferring data simultaneously, or if communication with
a transfer rate of 250 MB/s per lane. The PCI-SIG also expects the norm to evolve to reach 500 MB/s, as in PCI Express 2.0. An example of the uses of Cabled PCI Express is a metal enclosure, containing a number of PCIe slots and PCIe-to-ePCIe adapter circuitry. This device would not be possible had it not been for the ePCIe specification. OCuLink (standing for "optical-copper link", since Cu
is embedded within the serial signal itself. As such, typical bandwidth limitations on serial signals are in the multi-gigahertz range. PCI Express is one example of the general trend toward replacing parallel buses with serial interconnects; other examples include Serial ATA (SATA), USB, Serial Attached SCSI (SAS), FireWire (IEEE 1394), and RapidIO. In digital video, examples in common use are DVI, HDMI, and DisplayPort. Multichannel serial design increases flexibility with its ability to allocate fewer lanes for slower devices. A PCI Express card fits into
InfiniBand (IB) is a computer networking communications standard used in high-performance computing that features very high throughput and very low latency. It is used for data interconnect both among and within computers. InfiniBand is also used as either a direct or switched interconnect between servers and storage systems, as well as an interconnect between storage systems. It
is a high-speed serial computer expansion bus standard, designed to replace the older PCI, PCI-X and AGP bus standards. It is the common motherboard interface for personal computers' graphics cards, capture cards, sound cards, hard disk drive host adapters, SSDs, Wi-Fi and Ethernet hardware connections. PCIe has numerous improvements over the older standards, including higher maximum system bus throughput, lower I/O pin count and smaller physical footprint, better performance scaling for bus devices,
is a standard for connecting graphics processing units (GPUs) to computer power supplies for up to 600 W of power delivery. It was introduced in 2022 to supersede the previous 6- and 8-pin power connectors for GPUs. The primary aim was to cater to the increasing power requirements of high-performance GPUs. It was replaced by a minor revision called 12V-2x6, which changed the connector to ensure that
is also used in a variety of other standards — most notably the laptop expansion card interface called ExpressCard. It is also used in the storage interfaces of SATA Express, U.2 (SFF-8639) and M.2. Formal specifications are maintained and developed by the PCI-SIG (PCI Special Interest Group) — a group of more than 900 companies that also maintains the conventional PCI specifications. Conceptually,
is available for Solaris, FreeBSD, Red Hat Enterprise Linux, SUSE Linux Enterprise Server (SLES), Windows, HP-UX, VMware ESX, and AIX. InfiniBand has no specific standard application programming interface (API). The standard only lists a set of verbs such as ibv_open_device or ibv_post_send, which are abstract representations of functions or methods that must exist. The syntax of these functions
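As a hedged illustration of how the verbs interface is typically consumed in practice, the minimal C sketch below uses the OFED libibverbs library to enumerate InfiniBand devices and open the first one. It assumes libibverbs is installed (compile with -libverbs); the error handling and output are illustrative only, not a complete RDMA application.

```c
/* Minimal sketch using libibverbs (OFED): list InfiniBand devices and open
 * the first one. Assumes libibverbs headers/libraries are present; link with
 * -libverbs. Illustrative only. */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num_devices = 0;
    struct ibv_device **devices = ibv_get_device_list(&num_devices);
    if (!devices || num_devices == 0) {
        fprintf(stderr, "no InfiniBand devices found\n");
        return 1;
    }

    for (int i = 0; i < num_devices; i++)
        printf("device %d: %s\n", i, ibv_get_device_name(devices[i]));

    /* ibv_open_device() is one of the "verbs" named by the standard. */
    struct ibv_context *ctx = ibv_open_device(devices[0]);
    if (!ctx) {
        fprintf(stderr, "could not open %s\n", ibv_get_device_name(devices[0]));
        ibv_free_device_list(devices);
        return 1;
    }
    printf("opened %s\n", ibv_get_device_name(ctx->device));

    ibv_close_device(ctx);
    ibv_free_device_list(devices);
    return 0;
}
```

A real RDMA application would go on to allocate a protection domain, register memory regions, and post work requests with verbs such as ibv_post_send.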
is composed of one or more lanes. Low-speed peripherals (such as an 802.11 Wi-Fi card) use a single-lane (x1) link, while a graphics adapter typically uses a much wider and therefore faster 16-lane (x16) link. A lane is composed of two differential signaling pairs, with one pair for receiving data and the other for transmitting. Thus, each lane is composed of four wires or signal traces. Conceptually, each lane
is designed to be scalable and uses a switched fabric network topology. Between 2014 and June 2016, it was the most commonly used interconnect in the TOP500 list of supercomputers. Mellanox (acquired by Nvidia) manufactures InfiniBand host bus adapters and network switches, which are used by large computer system and database vendors in their product lines. As a computer cluster interconnect, IB competes with Ethernet, Fibre Channel, and Intel Omni-Path. The technology
is duplex. Links can be aggregated: most systems use a 4-lane (4x) connector (QSFP). HDR often makes use of 2x links (aka HDR100, a 100 Gb link using 2 lanes of HDR while still using a QSFP connector). 8x links are called for with NDR switch ports, which use OSFP (Octal Small Form Factor Pluggable) connectors. InfiniBand provides remote direct memory access (RDMA) capabilities for low CPU overhead. InfiniBand uses
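As a rough worked example of the lane aggregation described above, the sketch below scales a per-lane HDR data rate of about 50 Gbit/s (the figure implied by HDR100 being a 100 Gb link over 2 lanes) across common link widths; exact rates depend on the InfiniBand generation and its line encoding.

```c
/* Rough illustration of InfiniBand lane aggregation. The ~50 Gbit/s per-lane
 * HDR data rate is inferred from the text (HDR100 = 2 lanes = 100 Gb); treat
 * the numbers as approximate. */
#include <stdio.h>

int main(void)
{
    const double hdr_lane_gbps = 50.0;   /* approx. data rate per HDR lane */
    const int widths[] = { 1, 2, 4, 8 }; /* 1x, 2x (HDR100), 4x, 8x        */

    for (int i = 0; i < 4; i++)
        printf("HDR %dx: ~%.0f Gbit/s\n", widths[i], widths[i] * hdr_lane_gbps);
    return 0;
}
```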
is encapsulated in packets. The work of packetizing and de-packetizing data and status-message traffic is handled by the transaction layer of the PCI Express port (described later). Radical differences in electrical signaling and bus protocol require the use of a different mechanical form factor and expansion connectors (and thus, new motherboards and new adapter boards); PCI slots and PCI Express slots are not interchangeable. At
is expressed in transfers per second instead of bits per second because the number of transfers includes the overhead bits, which do not provide additional throughput; PCIe 1.x uses an 8b/10b encoding scheme, resulting in a 20% (= 2/10) overhead on the raw channel bandwidth. So in PCIe terminology, transfer rate refers to the encoded bit rate: 2.5 GT/s is 2.5 Gbit/s on the encoded serial link. This corresponds to 2.0 Gbit/s of pre-coded data or 250 MB/s, which
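The relationship between transfer rate and usable throughput can be reproduced with simple arithmetic. The short C sketch below uses the figures quoted in the article (2.5 GT/s with 8b/10b for PCIe 1.x, 5 GT/s with 8b/10b for PCIe 2.0, and the PCI-X 133 MHz 64-bit comparison) to compute per-lane and x16 per-direction throughput; it is a worked example, not part of any specification.

```c
/* Illustrative throughput arithmetic for the figures quoted in the text:
 * PCIe 1.x and 2.0 use 8b/10b encoding, so only 8 of every 10 transferred
 * bits carry payload. Throughput here is per direction. */
#include <stdio.h>

static double lane_mbps(double gtps, double encoding_efficiency)
{
    double gbit_data = gtps * encoding_efficiency;   /* Gbit/s of payload */
    return gbit_data * 1000.0 / 8.0;                 /* MB/s (decimal)    */
}

int main(void)
{
    const double eff_8b10b = 8.0 / 10.0;

    double v1 = lane_mbps(2.5, eff_8b10b);  /* PCIe 1.x: 250 MB/s per lane */
    double v2 = lane_mbps(5.0, eff_8b10b);  /* PCIe 2.0: 500 MB/s per lane */

    printf("PCIe 1.x: %.0f MB/s per lane, %.0f MB/s for x16\n", v1, v1 * 16);
    printf("PCIe 2.0: %.0f MB/s per lane, %.0f MB/s for x16\n", v2, v2 * 16);

    /* For comparison, PCI-X at 133 MHz with a 64-bit (8-byte) bus peaks at
     * 133e6 * 8 bytes = ~1064 MB/s, roughly a PCIe 1.0 x4 link. */
    printf("PCI-X 133 MHz / 64-bit: %.0f MB/s\n", 133.0 * 8.0);
    return 0;
}
```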
is left to the vendors. Sometimes for reference this is called the verbs API. The de facto standard software is developed by the OpenFabrics Alliance and called the Open Fabrics Enterprise Distribution (OFED). It is released under a choice of two licenses, GPL2 or BSD, for Linux and FreeBSD, and as Mellanox OFED for Windows (product names: WinOF / WinOF-2; attributed as host controller driver for matching specific ConnectX 3 to 5 devices) under
is promoted by the InfiniBand Trade Association. InfiniBand originated in 1999 from the merger of two competing designs: Future I/O and Next Generation I/O (NGIO). NGIO was led by Intel, with a specification released in 1998, and joined by Sun Microsystems and Dell. Future I/O was backed by Compaq, IBM, and Hewlett-Packard. This led to the formation of the InfiniBand Trade Association (IBTA), which included both sets of hardware vendors as well as software vendors such as Microsoft. At
is referred to as throughput in PCIe. In 2005, PCI-SIG introduced PCIe 1.1. This updated specification includes clarifications and several improvements, but is fully compatible with PCI Express 1.0a. No changes were made to the data rate. PCI-SIG announced the availability of the PCI Express Base 2.0 specification on 15 January 2007. The PCIe 2.0 standard doubles the transfer rate compared with PCIe 1.0 to 5 GT/s and
is the chemical symbol for copper) is an extension for the "cable version of PCI Express". Version 1.0 of OCuLink, released in October 2015, supports up to 4 PCIe 3.0 lanes (3.9 GB/s) over copper cabling; a fiber optic version may appear in the future. The most recent version of OCuLink, OCuLink-2, supports up to 16 GB/s (PCIe 4.0 x8), while the maximum bandwidth of a USB 4 cable is 10 GB/s. While initially intended for use in laptops for
is used as a full-duplex byte stream, transporting data packets in eight-bit "byte" format simultaneously in both directions between endpoints of a link. Physical PCI Express links may contain 1, 4, 8 or 16 lanes. Lane counts are written with an "x" prefix (for example, "x8" represents an eight-lane card or slot), with x16 being the largest size in common use. Lane sizes are also referred to via
the Arapaho Work Group (AWG) drew up the standard. For initial drafts, the AWG consisted only of Intel engineers; subsequently, the AWG expanded to include industry partners. Since then, PCIe has undergone several major and minor revisions, improving on performance and other features. In 2003, PCI-SIG introduced PCIe 1.0a, with a per-lane data rate of 250 MB/s and a transfer rate of 2.5 gigatransfers per second (GT/s). Transfer rate
the Asus Eee PC, the Apple MacBook Air, and the Dell Mini 9 and Mini 10) use a variant of the PCI Express Mini Card as an SSD. This variant uses the reserved and several non-reserved pins to implement SATA and IDE interface passthrough, keeping only USB, ground lines, and sometimes the core PCIe x1 bus intact. This makes the "miniPCIe" flash and solid-state drives sold for netbooks largely incompatible with true PCI Express Mini implementations. Also,
the root complex (host). Because of its shared bus topology, access to the older PCI bus is arbitrated (in the case of multiple masters), and limited to one master at a time, in a single direction. Furthermore, the older PCI clocking scheme limits the bus clock to the slowest peripheral on the bus (regardless of the devices involved in the bus transaction). In contrast, a PCI Express bus link supports full-duplex communication between any two endpoints, with no inherent limitation on concurrent access across multiple endpoints. In terms of bus protocol, PCI Express communication
the Huron River platform. Notebooks such as Lenovo's ThinkPad T, W and X series, released in March–April 2011, have support for an mSATA SSD card in their WWAN card slot. The ThinkPad Edge E220s/E420s and the Lenovo IdeaPad Y460/Y560/Y570/Y580 also support mSATA. In contrast, the L-series, among others, can only support M.2 cards using the PCIe standard in the WWAN slot. Some notebooks (notably
the InfiniBand protocol and connector technology. EoIB enables multiple Ethernet bandwidths varying with the InfiniBand (IB) version. Ethernet's implementation of the Internet Protocol Suite, usually referred to as TCP/IP, is different in some details compared to the direct InfiniBand protocol in IP over IB (IPoIB). PCI Express (Peripheral Component Interconnect Express), officially abbreviated as PCIe or PCI-e,
the Multiple-Walk parallel treecode. The system is noted for being highly cost- and energy-efficient, having a peak performance of 111 TFLOPS with an energy efficiency of 1376 MFLOPS/watt. The overall cost of the hardware was approximately US$500,000. The name of the system is also derived from the name of a small artificial island called "Dejima" in Nagasaki.
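From the two figures quoted above, the cluster's approximate power draw can be derived: 111 TFLOPS at 1376 MFLOPS/watt implies roughly 80 kW. A minimal sketch of that arithmetic:

```c
/* Derive an approximate power draw from the peak performance and energy
 * efficiency quoted for DEGIMA: 111 TFLOPS at 1376 MFLOPS/watt. */
#include <stdio.h>

int main(void)
{
    double peak_mflops = 111.0e6;      /* 111 TFLOPS = 111,000,000 MFLOPS */
    double mflops_per_watt = 1376.0;

    double watts = peak_mflops / mflops_per_watt;
    printf("approximate power draw: %.1f kW\n", watts / 1000.0); /* ~80.7 kW */
    return 0;
}
```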
the PCI Express bus is a high-speed serial replacement of the older PCI/PCI-X bus. One of the key differences between the PCI Express bus and the older PCI is the bus topology; PCI uses a shared parallel bus architecture, in which the PCI host and all devices share a common set of address, data, and control lines. In contrast, PCI Express is based on a point-to-point topology, with separate serial links connecting every device to
the PCI Express peripheral is bidirectional. PCI Express devices communicate via a logical connection called an interconnect or link. A link is a point-to-point communication channel between two PCI Express ports allowing both of them to send and receive ordinary PCI requests (configuration, I/O or memory read/write) and interrupts (INTx, MSI or MSI-X). At the physical level, a link
the PCIe x1 Mini-Card slot that typically do not support mSATA SSDs. A list of desktop boards that natively support mSATA in the PCIe x1 Mini-Card slot (typically multiplexed with a SATA port) is provided on the Intel Support site. M.2 replaces the mSATA standard and Mini PCIe. Computer bus interfaces provided through the M.2 connector are PCI Express 3.0 (up to four lanes), Serial ATA 3.0, and USB 3.0 (a single logical port for each of
the burst of the dot-com bubble there was hesitation in the industry to invest in such a far-reaching technology jump. By 2002, Intel announced that instead of shipping IB integrated circuits ("chips"), it would focus on developing PCI Express, and Microsoft discontinued IB development in favor of extending Ethernet. Sun Microsystems and Hitachi continued to support IB. In 2003, the System X supercomputer built at Virginia Tech used InfiniBand in what
the card is wake capable. All PCI Express cards may consume up to 3 A at +3.3 V (9.9 W). The amount of +12 V and total power they may consume depends on the form factor and the role of the card: Optional connectors add 75 W (6-pin) or 150 W (8-pin) of +12 V power for up to 300 W total (2 × 75 W + 1 × 150 W). Some cards use two 8-pin connectors, but this has not been standardized yet as of 2018; therefore, such cards must not carry
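The power totals quoted above follow from simple addition over the slot and the auxiliary connectors. The sketch below reproduces the 300 W and 375 W configurations using the figures given in the article (75 W per 6-pin and 150 W per 8-pin connector), reading the "2 × 75 W + 1 × 150 W" breakdown as 75 W from the slot itself plus one 6-pin and one 8-pin connector; that reading is an assumption, not a statement of the specification.

```c
/* Reproduce the quoted power budgets: assumed 75 W from the slot itself,
 * plus 75 W per 6-pin and 150 W per 8-pin auxiliary connector. */
#include <stdio.h>

static int board_power_w(int slot_w, int n_6pin, int n_8pin)
{
    return slot_w + n_6pin * 75 + n_8pin * 150;
}

int main(void)
{
    /* Slot + one 6-pin + one 8-pin: 300 W total (2 x 75 W + 1 x 150 W). */
    printf("75 W slot + 6-pin + 8-pin = %d W\n", board_power_w(75, 1, 1));

    /* Slot + two 8-pin connectors: 375 W (not yet standardized per the text). */
    printf("75 W slot + 2 x 8-pin     = %d W\n", board_power_w(75, 0, 2));
    return 0;
}
```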
the conductors on each side of the edge connector on a PCI Express card. The solder side of the printed circuit board (PCB) is the A-side, and the component side is the B-side. PRSNT1# and PRSNT2# pins must be slightly shorter than the rest, to ensure that a hot-plugged card is fully inserted. The WAKE# pin uses full voltage to wake the computer, but must be pulled high from the standby power to indicate that
the connection of powerful external GPU boxes, OCuLink's popularity lies primarily in its use for PCIe interconnections in servers, a more prevalent application. Numerous other form factors use, or are able to use, PCIe. These include: The PCIe slot connector can also carry protocols other than PCIe. Some 9xx series Intel chipsets support Serial Digital Video Out, a proprietary technology that uses
the full transfer rate. Standard mechanical sizes are x1, x4, x8, and x16. Cards using a number of lanes other than the standard mechanical sizes need to physically fit the next larger mechanical size (e.g. an x2 card uses the x4 size, or an x12 card uses the x16 size). The cards themselves are designed and manufactured in various sizes. For example, solid-state drives (SSDs) that come in the form of PCI Express cards often use HHHL (half height, half length) and FHHL (full height, half length) to describe
the largest time difference between signal arrivals, recovery of the transmitted word is no longer possible. Since timing skew over a parallel bus can amount to a few nanoseconds, the resulting bandwidth limitation is in the range of hundreds of megahertz. A serial interface does not exhibit timing skew because there is only one differential signal in each direction within each lane, and there is no external clock signal since clocking information
the latter two). It is up to the manufacturer of the M.2 host or device to choose which interfaces to support, depending on the desired level of host support and device type. PCI Express External Cabling (also known as External PCI Express, Cabled PCI Express, or ePCIe) specifications were released by the PCI-SIG in February 2007. Standard cables and connectors have been defined for x1, x4, x8, and x16 link widths, with
the market, adopted a "buy to kill" strategy. Cisco successfully killed InfiniBand switching companies such as Topspin via acquisition. Of the top 500 supercomputers in 2009, Gigabit Ethernet was the internal interconnect technology in 259 installations, compared with 181 using InfiniBand. In 2010, market leaders Mellanox and Voltaire merged, leaving just one other IB vendor, QLogic, primarily
the newer M.2 form factor for this purpose. Due to different dimensions, PCI Express Mini Cards are not physically compatible with standard full-size PCI Express slots; however, passive adapters exist that let them be used in full-size slots. Dimensions of PCI Express Mini Cards are 30 mm × 50.95 mm (width × length) for a Full Mini Card. There is a 52-pin edge connector, consisting of two staggered rows on
the official PCI Express logo. This configuration allows 375 W total (1 × 75 W + 2 × 150 W) and will likely be standardized by PCI-SIG with the PCI Express 4.0 standard. The 8-pin PCI Express connector could be confused with the EPS12V connector, which is mainly used for powering SMP and multi-core systems. The power connectors are variants of the Molex Mini-Fit Jr. series connectors. The 16-pin 12VHPWR connector
the other being v1.1 or v1.0a. The PCI-SIG also said that PCIe 2.0 features improvements to the point-to-point data transfer protocol and its software architecture. Intel's first PCIe 2.0 capable chipset was the X38, and boards began to ship from various vendors (Abit, Asus, Gigabyte) as of 21 October 2007. AMD started supporting PCIe 2.0 with its AMD 700 chipset series and Nvidia started with
the overall link width. The lane count is automatically negotiated during device initialization and can be restricted by either endpoint. For example, a single-lane PCI Express (x1) card can be inserted into a multi-lane slot (x4, x8, etc.), and the initialization cycle auto-negotiates the highest mutually supported lane count. The link can dynamically down-configure itself to use fewer lanes, providing
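One hedged way to observe the outcome of this negotiation on a running Linux system is to read the device's PCI sysfs attributes. The sketch below assumes a Linux host and uses an illustrative device address (0000:01:00.0); it is a diagnostic example, not part of the PCIe specification itself.

```c
/* Read the negotiated and maximum PCIe link width/speed of a device from
 * Linux sysfs. The device address below is illustrative; adjust as needed. */
#include <stdio.h>
#include <string.h>

static void print_attr(const char *dev, const char *attr)
{
    char path[256], value[64] = "";
    snprintf(path, sizeof path, "/sys/bus/pci/devices/%s/%s", dev, attr);

    FILE *f = fopen(path, "r");
    if (!f) { printf("%-20s (unavailable)\n", attr); return; }
    if (fgets(value, sizeof value, f))
        value[strcspn(value, "\n")] = '\0';
    fclose(f);
    printf("%-20s %s\n", attr, value);
}

int main(void)
{
    const char *dev = "0000:01:00.0";   /* example PCI address */
    print_attr(dev, "current_link_width");
    print_attr(dev, "max_link_width");
    print_attr(dev, "current_link_speed");
    print_attr(dev, "max_link_speed");
    return 0;
}
```

A card negotiated down to fewer lanes (for example an x16 device in an "x16 @ x4" slot) would report a current_link_width smaller than its max_link_width.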
the per-lane throughput rises from 250 MB/s to 500 MB/s. Consequently, a 16-lane PCIe connector (x16) can support an aggregate throughput of up to 8 GB/s. PCIe 2.0 motherboard slots are fully backward compatible with PCIe v1.x cards. PCIe 2.0 cards are also generally backward compatible with PCIe 1.x motherboards, using the available bandwidth of PCI Express 1.1. Overall, graphics cards or motherboards designed for v2.0 work, with
the physical dimensions of the card. Modern (since c. 2012) gaming video cards usually exceed the height as well as the thickness specified in the PCI Express standard, due to the need for more capable and quieter cooling fans, as gaming video cards often emit hundreds of watts of heat. Modern computer cases are often wider to accommodate these taller cards, but not always. Since full-length cards (312 mm) are uncommon, modern cases sometimes cannot fit those. The thickness of these cards also typically occupies
the physical slot size. An example is a x16 slot that runs at x4, which accepts any x1, x2, x4, x8 or x16 card, but provides only four lanes. Its specification may read as "x16 (x4 mode)", while "mechanical @ electrical" notation (e.g. "x16 @ x4") is also common. The advantage is that such slots can accommodate a larger range of PCI Express cards without requiring motherboard hardware to support
the sense pins only make contact if the power pins are seated properly. PCI Express Mini Card (also known as Mini PCI Express, Mini PCIe, Mini PCI-E, mPCIe, and PEM), based on PCI Express, is a replacement for the Mini PCI form factor. It is developed by the PCI-SIG. The host device supports both PCI Express and USB 2.0 connectivity, and each card may use either standard. Most laptop computers built after 2005 use PCI Express for expansion cards; however, as of 2015, many vendors are moving toward using
the software level, PCI Express preserves backward compatibility with PCI; legacy PCI system software can detect and configure newer PCI Express devices without explicit support for the PCI Express standard, though new PCI Express features are inaccessible. The PCI Express link between two devices can vary in size from one to 16 lanes. In a multi-lane link, the packet data is striped across lanes, and peak data throughput scales with
the space of 2 to 5 PCIe slots. In fact, even the methodology of how to measure the cards varies between vendors, with some including the metal bracket size in dimensions and others not. For instance, comparing three high-end video cards released in 2020: a Sapphire Radeon RX 5700 XT card measures 135 mm in height (excluding the metal bracket), which exceeds the PCIe standard height by 28 mm; another Radeon RX 5700 XT card by XFX measures 55 mm thick (i.e. 2.7 PCI slots at 20.32 mm), taking up 3 PCIe slots; and an Asus GeForce RTX 3080 video card takes up two slots and measures 140.1 mm × 318.5 mm × 57.8 mm, exceeding PCI Express' maximum height, length, and thickness respectively. The following table identifies
the terms "width" or "by"; e.g., an eight-lane slot could be referred to as a "by 8" or as "8 lanes wide". For mechanical card sizes, see below. The bonded serial bus architecture was chosen over the traditional parallel bus because of the inherent limitations of the latter, including half-duplex operation, excess signal count, and inherently lower bandwidth due to timing skew. Timing skew results from
the time it was thought some of the more powerful computers were approaching the interconnect bottleneck of the PCI bus, in spite of upgrades like PCI-X. Version 1.0 of the InfiniBand Architecture Specification was released in 2000. Initially the IBTA vision for IB was simultaneously a replacement for PCI in I/O, Ethernet in the machine room, cluster interconnect and Fibre Channel. IBTA also envisaged decomposing server hardware on an IB fabric. Mellanox had been founded in 1999 to develop NGIO technology, but by 2001 shipped an InfiniBand product line called InfiniBridge at 10 Gbit/second speeds. Following
the typical Asus miniPCIe SSD is 71 mm long, causing the Dell 51 mm model to often be (incorrectly) referred to as half length. A true 51 mm Mini PCIe SSD was announced in 2009, with two stacked PCB layers that allow for higher storage capacity. The announced design preserves the PCIe interface, making it compatible with the standard mini PCIe slot. No working product has yet been developed. Intel has numerous desktop boards with
was estimated to be the third largest computer in the world at the time. The OpenIB Alliance (later renamed the OpenFabrics Alliance) was founded in 2004 to develop an open set of software for the Linux kernel. By February 2005, the support was accepted into the 2.6.11 Linux kernel. In November 2005, storage devices using InfiniBand were finally released, from vendors such as Engenio. Cisco, desiring to keep technology superior to Ethernet off
was reported that Oracle Corporation (an investor in Mellanox) might engineer its own InfiniBand hardware. In 2019, Nvidia acquired Mellanox, the last independent supplier of InfiniBand products. Specifications are published by the InfiniBand Trade Association. Original names for speeds were single data rate (SDR), double data rate (DDR) and quad data rate (QDR), as given below. Subsequently, other three-letter acronyms were added for even higher data rates. Each link