The Cray XK6 is an enhanced version of the Cray XE6 supercomputer, announced by Cray in May 2011. The XK6 uses the same "blade" architecture as the XE6, with each XK6 blade comprising four compute "nodes". Each node consists of a 16-core AMD Opteron 6200 processor with 16 or 32 GB of DDR3 RAM and an Nvidia Tesla X2090 GPGPU with 6 GB of GDDR5 RAM, the two connected via PCI Express 2.0. Two Gemini router ASICs are shared between the nodes on a blade, providing a 3-dimensional torus network topology between nodes. Across a fully populated 96-node cabinet, this works out to 576 GB of graphics memory and over 1,500 Opteron cores (see the arithmetic below).
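These per-cabinet totals follow directly from the per-node figures quoted above (6 GB of GDDR5 and 16 Opteron cores per node, 96 nodes per cabinet):

$$96 \times 6\ \text{GB} = 576\ \text{GB of GDDR5}, \qquad 96 \times 16 = 1536\ \text{Opteron cores}.$$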
An XK6 cabinet accommodates 24 blades (96 nodes). Each of the Tesla processors is rated at 665 double-precision gigaflops, giving 63.8 teraflops per cabinet. The XK6 is capable of scaling to 500,000 Opteron cores, giving up to 50 petaflops of total hybrid peak performance. The XK6 runs the Cray Linux Environment, which incorporates SUSE Linux Enterprise Server and Cray's Compute Node Linux. The first order for an XK6 system
$a/(1+a)$. The maximum occurs when $a$ is at the upper end of its range. The $1+a$ in the denominator is negligible compared to the numerator, so it is left off for expediency, and just $b^{-(p-1)}/2$
a radix, which is also called the base, $b$, and by the precision $p$, i.e. the number of radix-$b$ digits of the significand (including any leading implicit bit). All the numbers with the same exponent, $e$, have the spacing $b^{e-(p-1)}$. The spacing changes at
a 52-bit fraction is $2^{e-52}$ or, equivalently, $2^{-52}\times 2^{e}$. Between $2^{52}$ = 4,503,599,627,370,496 and $2^{53}$ = 9,007,199,254,740,992 the representable numbers are exactly the integers. For the next range, from $2^{53}$ to $2^{54}$, everything is multiplied by 2, so the representable numbers are the even ones, etc. Conversely, for the previous range from $2^{51}$ to $2^{52}$, the spacing is 0.5, etc. The spacing as a fraction of the numbers in the range from $2^{n}$ to $2^{n+1}$
a decimal string with the same number of digits, the final result should match the original string. If an IEEE 754 double-precision number is converted to a decimal string with at least 17 significant digits, and then converted back to double-precision representation, the final result must match the original number. The format is written with the significand having an implicit integer bit of value 1 (except for special data, see
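The 17-digit round-trip property described above can be exercised directly; the following small C sketch is ours (not taken from the text) and prints a double with 17 significant digits, then parses it back:

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        double original = 0.1;                 /* not exactly representable in binary64 */
        char buf[64];

        /* 17 significant decimal digits are enough to reproduce any binary64 value. */
        snprintf(buf, sizeof buf, "%.17g", original);
        double roundtrip = strtod(buf, NULL);

        printf("string: %s  round-trips exactly: %s\n",
               buf, roundtrip == original ? "yes" : "no");
        return 0;
    }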
a factor of two. The more widely used term (referred to as the mainstream definition in this article) is used in most modern programming languages and simply defines machine epsilon as the difference between 1 and the next larger floating-point number. The formal definition can generally be considered to yield an epsilon half the size of the mainstream definition, although its definition does vary depending on
a floating radix point. Double precision may be chosen when the range or precision of single precision would be insufficient. In the IEEE 754 standard, the 64-bit base-2 format is officially referred to as binary64; it was called double in IEEE 754-1985. IEEE 754 specifies additional floating-point formats, including 32-bit base-2 single precision and, more recently, base-10 representations (decimal floating point). One of
a machine number $x_b$ below $x$ and a machine number $x_u$ above $x$. If $x_{b}=\left(1.b_{1}b_{2}\ldots b_{m}\right)_{2}\times 2^{k}$, where $m$
a modifier strictfp was introduced to enforce strict IEEE 754 computations. Strict floating point has been restored in Java 17. As specified by the ECMAScript standard, all arithmetic in JavaScript shall be done using double-precision floating-point arithmetic. The JSON data encoding format supports numeric values, and the grammar to which numeric expressions must conform has no limits on
is $2\varepsilon|x|$. Numerical analysis uses machine epsilon to study the effects of rounding error. The actual errors of machine arithmetic are far too complicated to be studied directly, so instead the following simple model is used. The IEEE arithmetic standard says all floating-point operations are done as if it were possible to perform
is $2^{-52}$, as expected. The following simple algorithm can be used to approximate the machine epsilon, to within a factor of two (one order of magnitude) of its true value, using a linear search; a sketch is given below. The machine epsilon, $\varepsilon_{\text{mach}}$, can also simply be calculated as two to the negative power of the number of bits used for
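The algorithm itself is not reproduced in this text; what follows is a minimal C sketch, under the assumption that the linear search is the usual halving loop that stops once adding the candidate epsilon to 1.0 no longer changes the result:

    #include <stdio.h>

    int main(void)
    {
        /* Halve a candidate epsilon until adding it to 1.0 no longer changes
           the result; the last value that still made a difference approximates
           machine epsilon within a factor of two.  (On hardware that evaluates
           intermediates in extended precision, the loop may run a few
           iterations further than expected.) */
        double eps = 1.0;
        while (1.0 + eps / 2.0 != 1.0)
            eps /= 2.0;

        printf("approximate machine epsilon: %g\n", eps); /* ~2.22e-16 for binary64 */
        return 0;
    }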
is $2^{n-52}$. The maximum relative rounding error when rounding a number to the nearest representable one (the machine epsilon) is therefore $2^{-53}$. The 11-bit width of the exponent allows the representation of numbers between $10^{-308}$ and $10^{308}$, with full 15–17 decimal digits of precision. By compromising precision, the subnormal representation allows even smaller values, down to about $5\times 10^{-324}$. The double-precision binary floating-point exponent
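Those range limits can be inspected through the standard <float.h> constants; the following illustrative C snippet is ours (DBL_TRUE_MIN requires C11, hence the guard):

    #include <stdio.h>
    #include <float.h>

    int main(void)
    {
        printf("largest normal double    : %g\n", DBL_MAX);       /* about 1.8e308  */
        printf("smallest normal double   : %g\n", DBL_MIN);       /* about 2.2e-308 */
    #ifdef DBL_TRUE_MIN
        printf("smallest subnormal double: %g\n", DBL_TRUE_MIN);  /* about 4.9e-324 */
    #endif
        return 0;
    }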
is a commonly used format on PCs, due to its wider range over single-precision floating point, in spite of its performance and bandwidth cost. It is commonly known simply as double. The IEEE 754 standard specifies a binary64 as having: a sign bit (1 bit), an exponent field (11 bits), and a significand with 53 bits of precision (52 bits explicitly stored). The sign bit determines the sign of the number (including when this number is zero, which is signed). The exponent field is an 11-bit unsigned integer from 0 to 2047, in biased form: an exponent value of 1023 represents
is described by $(-1)^{\text{sign}}\times (1.b_{51}b_{50}\ldots b_{0})_{2}\times 2^{e-1023}$. In the case of subnormal numbers ($e$ = 0) the double-precision number is described by $(-1)^{\text{sign}}\times (0.b_{51}b_{50}\ldots b_{0})_{2}\times 2^{-1022}$. Encodings of qNaN and sNaN are not completely specified in IEEE 754 and depend on the processor. Most processors, such as the x86 family and the ARM family processors, use the most significant bit of the significand field to indicate a quiet NaN; this is what is recommended by IEEE 754. The PA-RISC processors use
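To make the encoding concrete, here is an illustrative C sketch of our own (the variable names are not from the text) that splits a double into the sign, biased exponent, and fraction fields described above:

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    /* Illustrative decoder for the binary64 layout:
       1 sign bit, 11 exponent bits (bias 1023), 52 fraction bits. */
    int main(void)
    {
        double x = -6.25;
        uint64_t bits;
        memcpy(&bits, &x, sizeof bits);   /* reinterpret the 64 bits safely */

        unsigned sign     = (unsigned)(bits >> 63);
        unsigned exponent = (unsigned)((bits >> 52) & 0x7FF);  /* biased exponent e */
        uint64_t fraction = bits & 0xFFFFFFFFFFFFFULL;         /* 52-bit fraction F */

        printf("sign=%u biased exponent=%u (unbiased %d) fraction=0x%013llx\n",
               sign, exponent, (int)exponent - 1023,
               (unsigned long long)fraction);
        return 0;
    }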
is encoded using an offset-binary representation, with the zero offset being 1023; this is also known as the exponent bias in the IEEE 754 standard. Examples of such representations would be: e = 001₁₆ represents $2^{1-1023}=2^{-1022}$ (the smallest exponent for normal numbers); e = 3ff₁₆ represents $2^{1023-1023}=2^{0}=1$; e = 7fe₁₆ represents $2^{2046-1023}=2^{1023}$ (the largest exponent). The exponents 000₁₆ and 7ff₁₆ have a special meaning: 000₁₆ is used to represent a signed zero (if F = 0) and subnormal numbers (if F ≠ 0), while 7ff₁₆ is used to represent ∞ (if F = 0) and NaNs (if F ≠ 0), where F is the fractional part of the significand. All bit patterns are valid encodings. Except for the above exceptions, the entire double-precision number
is explored in depth throughout this subsection. Rounding is a procedure for choosing the representation of a real number in a floating point number system. For a number system and a rounding procedure, machine epsilon is the maximum relative error of the chosen rounding procedure. Some background is needed to determine a value from this definition. A floating point number system is characterized by
is rooted in its use in the ISO C Standard for constants relating to floating-point types and corresponding constants in other programming languages. It is also widely used in scientific computing software and in the numerics and computing literature. Where standard libraries do not provide precomputed values (as <float.h> does with FLT_EPSILON, DBL_EPSILON and LDBL_EPSILON for C, and as <limits> does with std::numeric_limits<T>::epsilon() in C++),
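Where those constants are available, they can be printed directly and checked against the "distance from 1 to the next larger double"; a small C sketch of our own, assuming a C99 environment with <math.h>'s nextafter:

    #include <stdio.h>
    #include <float.h>
    #include <math.h>

    int main(void)
    {
        /* DBL_EPSILON is the mainstream machine epsilon for double:
           the gap between 1.0 and the next larger representable double. */
        printf("DBL_EPSILON            = %.17g\n", DBL_EPSILON);
        printf("nextafter(1.0,2.0)-1.0 = %.17g\n", nextafter(1.0, 2.0) - 1.0);
        printf("FLT_EPSILON            = %.9g\n", (double)FLT_EPSILON);
        return 0;
    }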
is taken as machine epsilon. As has been shown here, the relative error is worst for numbers that round to $1$, so machine epsilon also is called unit roundoff, meaning roughly "the maximum error that can occur when rounding to the unit value". Thus, the maximum spacing between a normalised floating point number, $x$, and an adjacent normalised number
is the aforementioned integer reinterpretation of $x$). In languages that allow type punning and that always use IEEE 754-1985 semantics, we can exploit this to compute a machine epsilon in constant time; a C sketch is given below. Such a routine gives a result of the same sign as its argument; if a positive result is always desired, the return value of machine_eps can be wrapped in an absolute-value call. An equivalent example in Python, for 64-bit doubles, gives 2.220446e-16, which
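The original listing is not preserved in this text; the following is a minimal C sketch of the constant-time type-punning idea described above, using memcpy rather than a union and assuming the function name machine_eps from the surrounding prose:

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    /* Reinterpret the double as a 64-bit integer, step to the adjacent
       representation, and reinterpret back; the difference is the spacing of
       the floating-point numbers at `value`. */
    double machine_eps(double value)
    {
        uint64_t bits;
        memcpy(&bits, &value, sizeof bits);  /* safe alternative to a union pun */
        bits++;                              /* next representation in integer order */
        double next;
        memcpy(&next, &bits, sizeof next);
        return next - value;                 /* same sign as value; wrap in fabs()
                                                if a positive result is wanted */
    }

    int main(void)
    {
        printf("%g\n", machine_eps(1.0));    /* prints about 2.22045e-16 */
        return 0;
    }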
is the infinite-precision operation. According to the standard, the computer calculates: $x\bullet y=\operatorname{round}(x\circ y)$. By the meaning of machine epsilon, the relative error of the rounding is at most machine epsilon in magnitude, so: $x\bullet y=(x\circ y)(1+z)$, where $z$ in absolute magnitude is at most $\varepsilon$ or u. The books by Demmel and Higham in
is the number of bits used for the magnitude of the significand, then:

$$
\begin{aligned}
x_{u} &= \left[(1.b_{1}b_{2}\ldots b_{m})_{2}+(0.00\ldots 1)_{2}\right]\times 2^{k} \\
&= \left[(1.b_{1}b_{2}\ldots b_{m})_{2}+2^{-m}\right]\times 2^{k} \\
&= (1.b_{1}b_{2}\ldots b_{m})_{2}\times 2^{k}+2^{-m}\times 2^{k} \\
&= (1.b_{1}b_{2}\ldots b_{m})_{2}\times 2^{k}+2^{-m+k}.
\end{aligned}
$$

Since
the double type corresponds to double precision. However, on 32-bit x86 with extended precision by default, some compilers may not conform to the C standard or the arithmetic may suffer from double rounding. Fortran provides several integer and real types, and the 64-bit type real64, accessible via Fortran's intrinsic module iso_fortran_env, corresponds to double precision. Common Lisp provides
the formal definition and interval machine epsilon or mainstream definition. In the mainstream definition, machine epsilon is independent of the rounding method and is defined simply as the difference between 1 and the next larger floating point number. In the formal definition, machine epsilon is dependent on the type of rounding used and is also called unit roundoff, which has
the relative approximation error due to rounding in floating point number systems. This value characterizes computer arithmetic in the field of numerical analysis, and by extension in the subject of computational science. The quantity is also called macheps, and it is commonly denoted by the Greek letter epsilon, $\varepsilon$. There are two prevailing definitions, denoted here as rounding machine epsilon or
the actual zero. Exponents range from −1022 to +1023 because exponents of −1023 (all 0s) and +1024 (all 1s) are reserved for special numbers. The 53-bit significand precision gives from 15 to 17 significant decimal digits of precision ($2^{-53}\approx 1.11\times 10^{-16}$). If a decimal string with at most 15 significant digits is converted to the IEEE 754 double-precision format, giving a normal number, and then converted back to
the best way to determine machine epsilon is to refer to the table, above, and use the appropriate power formula. Computing machine epsilon is often given as a textbook exercise. The following examples compute interval machine epsilon in the sense of the spacing of the floating point numbers at 1, rather than in the sense of the unit roundoff. Note that results depend on the particular floating-point format used, such as float, double, long double, or similar as supported by
the binary representation of 32-bit floats). They also have the property that $0<|f(x)|<\infty$, and $|f(x+1)-f(x)|\geq |f(x)-f(x-1)|$ (where $f(x)$
the bit to indicate a signaling NaN. By default, 1/3 rounds down, instead of up like single precision, because of the odd number of bits in the significand. In more detail: Using double-precision floating-point variables is usually slower than working with their single-precision counterparts. One area of computing where this is a particular issue is parallel code running on GPUs. For example, when using NVIDIA's CUDA platform, calculations with double precision can take, depending on the hardware, from 2 to 32 times as long to complete compared to those done using single precision. Additionally, many mathematical functions (e.g., sin, cos, atan2, log, exp and sqrt) need more computations to give accurate double-precision results, and are therefore slower. Doubles are implemented in many programming languages in different ways such as
the exponent encoding below). With the 52 bits of the fraction (F) significand appearing in the memory format, the total precision is therefore 53 bits (approximately 16 decimal digits, $53\log_{10}(2)\approx 15.955$). The bits are laid out, from most significant to least significant, as 1 sign bit, 11 exponent bits, and 52 fraction bits. The real value assumed by a given 64-bit double-precision datum with a given biased exponent $e$ and
the first programming languages to provide floating-point data types was Fortran. Before the widespread adoption of IEEE 754-1985, the representation and properties of floating-point data types depended on the computer manufacturer and computer model, and upon decisions made by programming-language implementers. E.g., GW-BASIC's double-precision data type was the 64-bit MBF floating-point format. Double-precision binary floating-point
the following. On processors with only dynamic precision, such as x86 without SSE2 (or when SSE2 is not used, for compatibility purposes) and with extended precision used by default, software may have difficulty fulfilling some requirements. C and C++ offer a wide variety of arithmetic types. Double precision is not required by the standards (except by the optional annex F of C99, covering IEEE 754 arithmetic), but on most systems,
the form of rounding used. The two terms are described at length in the next two subsections. The formal definition for machine epsilon is the one used by Prof. James Demmel in lecture scripts, the LAPACK linear algebra package, numerics research papers and some scientific computing software. Most numerical analysts use the words machine epsilon and unit roundoff interchangeably with this meaning, which
the infinite-precision operation, and then the result is rounded to a floating-point number. Suppose (1) $x$, $y$ are floating-point numbers, (2) $\bullet$ is an arithmetic operation on floating-point numbers such as addition or multiplication, and (3) $\circ$
the mantissa: $\varepsilon_{\text{mach}}=2^{-\text{bits used for magnitude of mantissa}}$. If $y$ is the machine representation of a number $x$, then the absolute relative error in
the numbers that are perfect powers of $b$; the spacing on the side of larger magnitude is $b$ times larger than the spacing on the side of smaller magnitude. Since machine epsilon is a bound for relative error, it suffices to consider numbers with exponent $e=0$. It also suffices to consider positive numbers. For
the precision or range of the numbers so encoded. However, RFC 8259 advises that, since IEEE 754 binary64 numbers are widely implemented, good interoperability can be achieved by implementations processing JSON if they expect no more precision or range than binary64 offers. Rust and Zig have the f64 data type.

Machine epsilon

Machine epsilon or machine precision is an upper bound on
the processor (or coprocessor), not some $1+\varepsilon$ accuracy supported by a specific compiler for a specific operating system, unless it's known to use the best format. IEEE 754 floating-point formats have the property that, when reinterpreted as a two's-complement integer of the same width, they monotonically increase over positive values and monotonically decrease over negative values (see
the programming language, the compiler, and the runtime library for the actual platform. Some formats supported by the processor might not be supported by the chosen compiler and operating system. Other formats might be emulated by the runtime library, including arbitrary-precision arithmetic available in some languages and libraries. In a strict sense the term machine epsilon means the $1+\varepsilon$ accuracy directly supported by
the references can be consulted to see how this model is used to analyze the errors of, say, Gaussian elimination. This alternative definition is significantly more widespread: machine epsilon is the difference between 1 and the next larger floating point number. This definition is used in language constants in Ada, C, C++, Fortran, MATLAB, Mathematica, Octave, Pascal, Python and Rust etc., and defined in textbooks like «Numerical Recipes» by Press et al. By this definition, ε equals
the relative error large. The worst relative error therefore happens when rounding is applied to numbers of the form $1+a$, where $a$ is between $0$ and $b^{-(p-1)}/2$. All these numbers round to $1$ with relative error
the representation is $\left|\frac{x-y}{x}\right|\leq \varepsilon_{\text{mach}}$. The following proof is limited to positive numbers and machine representations using round-by-chop. If $x$ is a positive number we want to represent, it will be between
the representation of $x$ will be either $x_b$ or $x_u$,

$$
\begin{aligned}
\left|x-y\right| &\leq \left|x_{b}-x_{u}\right| \\
&= 2^{-m+k}
\end{aligned}
$$

$$
\begin{aligned}
\left|\frac{x-y}{x}\right| &\leq \frac{2^{-m+k}}{x} \\
&\leq \frac{2^{-m+k}}{x_{b}} \\
&= \frac{2^{-m+k}}{(1.b_{1}b_{2}\ldots b_{m})_{2}\times 2^{k}} \\
&= \frac{2^{-m}}{(1.b_{1}b_{2}\ldots b_{m})_{2}} \\
&\leq 2^{-m}=\varepsilon_{\text{mach}}.
\end{aligned}
$$

Although this proof
the symbol bold Roman u. The two terms can generally be considered to differ by simply a factor of two, with the formal definition yielding an epsilon half the size of the mainstream definition, as summarized in the tables in the next section. The following table lists machine epsilon values for standard floating-point formats. The IEEE standard does not define the terms machine epsilon and unit roundoff, so differing definitions of these terms are in use, which can cause some confusion. The two terms differ by simply
the types SHORT-FLOAT, SINGLE-FLOAT, DOUBLE-FLOAT and LONG-FLOAT. Most implementations provide SINGLE-FLOATs and DOUBLE-FLOATs, with the other types being appropriate synonyms. Common Lisp provides exceptions for catching floating-point underflows and overflows, and the inexact floating-point exception, as per IEEE 754. Infinities and NaNs are not described in the ANSI standard; however, several implementations do provide these as extensions. On Java before version 1.2, every implementation had to be IEEE 754 compliant. Version 1.2 allowed implementations to bring extra precision in intermediate computations for platforms like x87. Thus
the usual round-to-nearest kind of rounding, the absolute rounding error is at most half the spacing, or $b^{-(p-1)}/2$. This value is the biggest possible numerator for the relative error. The denominator in the relative error is the number being rounded, which should be as small as possible to make
the value of the unit in the last place relative to 1, i.e. $b^{-(p-1)}$ (where $b$ is the base of the floating point system and $p$ is the precision), and the unit roundoff is u = ε/2, assuming round-to-nearest mode, and u = ε, assuming round-by-chop. The prevalence of this definition
was an upgrade of an existing XE6m at the Swiss National Supercomputing Centre (CSCS).

Double-precision floating-point format

Double-precision floating-point format (sometimes called FP64 or float64) is a floating-point number format, usually occupying 64 bits in computer memory; it represents a wide range of numeric values by using