Emotion Engine

Article snapshot taken from Wikipedia under the Creative Commons Attribution-ShareAlike license. Give it a read and then ask your questions in the chat. We can research this topic together.

The Emotion Engine is a central processing unit developed and manufactured by Sony Computer Entertainment and Toshiba for use in the PlayStation 2 video game console . It was also used in early PlayStation 3 models sold in Japan and North America (Model Numbers CECHAxx & CECHBxx) to provide PlayStation 2 game support. Mass production of the Emotion Engine began in 1999 and ended in late 2012 with the discontinuation of the PlayStation 2.

58-453: The Emotion Engine consists of eight separate "units", each performing a specific task, integrated onto the same die . These units are: a CPU core, two Vector Processing Units (VPU), a 10-channel DMA unit, a memory controller , and an Image Processing Unit (IPU). There are three interfaces: an input output interface to the I/O processor, a graphics interface (GIF) to the graphics synthesizer, and

116-455: A 128-bit load–store unit (LSU), a Branch Execution Unit (BXU), and a 32-bit VU1 floating-point unit (FPU) coprocessor (which acted as a sync controller for VPU0/VPU1) containing a MIPS base processor core with 32 64-bit FP registers and 15 32-bit integer registers. The ALUs are 64-bit, with a 32-bit FPU that is not IEEE 754 compliant. The custom instruction set MMI (Multimedia Extensions)

174-582: A 540-contact plastic ball grid array (PBGA). The primary use of the Emotion Engine was to serve as the PlayStation 2 's CPU. The first PlayStation 3 revisions produced also featured an Emotion Engine on the motherboard to achieve backwards compatibility with PlayStation 2 games. However, the second revisions of the PlayStation 3 lacked a physical Emotion Engine in order to lower costs, performing all of its functions using software emulation performed by

232-461: A computer system specially designed to carry out operations on floating-point numbers. A number representation specifies some way of encoding a number, usually as a string of digits. There are several mechanisms by which strings of digits can represent numbers. In standard mathematical notation, the digit string can be of any length, and the location of the radix point is indicated by placing an explicit "point" character (dot or comma) there. If

290-497: A few details. Five of these formats are called basic formats, and others are termed extended precision formats and extendable precision format. Three formats are especially widely used in computer hardware and languages: Increasing the precision of the floating-point representation generally reduces the amount of accumulated round-off error caused by intermediate calculations. Other IEEE formats include: Any integer with absolute value less than 2^24 can be exactly represented in

348-565: A floating point divide (FDIV) unit and a local data memory . The data memory for VPU0 is 4 KB in size, while VPU1 features a 16 KB data memory. To achieve high bandwidth, the VPU's data memory is connected directly to the GIF, and both of the data memories can be read directly by the DMA unit. A single vector instruction consists of four 32-bit single-precision floating-point values which are distributed to

406-485: A maximum theoretical bandwidth of 2.4 GB/s. Communication between the Emotion Engine and RAM occurs through two channels of DRDRAM (Direct Rambus Dynamic Random Access Memory) and the memory controller , which interfaces to the internal data bus. Each channel is 16 bits wide and operates at 400 MHz DDR (Double Data Rate). Combined, the two channels of DRDRAM have a maximum theoretical bandwidth of 25.6 Gbit/s (3.2 GB/s), about 33% more bandwidth than
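
As a rough check of the figures above (using decimal units, as is conventional for bus bandwidth): each DRDRAM channel carries 16 bits × 400 MHz × 2 (double data rate) = 12.8 Gbit/s, so the two channels together give 25.6 Gbit/s = 3.2 GB/s, while the 128-bit internal bus at 150 MHz carries 128 bits × 150 MHz = 19.2 Gbit/s = 2.4 GB/s; the ratio 3.2/2.4 ≈ 1.33 is the source of the roughly 33% figure.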

464-537: A memory interface to the system memory. The CPU core is tightly coupled to the first VPU, VPU0. Together, they are responsible for executing game code and high-level modeling computations. The second VPU, VPU1, is dedicated to geometry transformations and lighting and operates independently, parallel to the CPU core, controlled by microcode. VPU0, when not utilized, can also be used for geometry transformations. Display lists generated by CPU/VPU0 and VPU1 are sent to

522-408: A number, the base (10) need not be stored, since it will be the same for the entire range of supported numbers, and can thus be inferred. Symbolically, this final value is s/b^(p−1) × b^e, where s is the significand (ignoring any implied decimal point), p

580-545: A single wafer of electronic-grade silicon (EGS) or other semiconductor (such as GaAs ) through processes such as photolithography . The wafer is cut ( diced ) into many pieces, each containing one copy of the circuit. Each of these pieces is called a die. There are three commonly used plural forms: dice , dies, and die . To simplify handling and integration onto a printed circuit board , most dies are packaged in various forms . Most dies are composed of silicon and used for integrated circuits. The process begins with

638-437: A single instruction). Instructions defined include: add, subtract, multiply, divide, min/max, shift, logical, leading-zero count, 128-bit load/store and 256-bit to 128-bit funnel shift in addition to some not described by Sony for competitive reasons. Contrary to some misconceptions, these SIMD capabilities did not amount to the processor being "128-bit", as neither the memory addresses nor the integers themselves were 128-bit, only
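
As a minimal sketch of the packed-integer idea described above, the following C fragment models a single lane-wise 32-bit addition of the kind the list mentions. The vec128 type and padd_w helper are made-up names for illustration; they are not Sony's MMI mnemonics or intrinsics.

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical 128-bit register modeled as four 32-bit lanes. */
    typedef struct { uint32_t lane[4]; } vec128;

    /* One conceptual "instruction": lane-wise 32-bit addition. Each lane
       wraps around independently; no carry propagates between lanes. */
    static vec128 padd_w(vec128 a, vec128 b) {
        vec128 r;
        for (int i = 0; i < 4; i++)
            r.lane[i] = a.lane[i] + b.lane[i];
        return r;
    }

    int main(void) {
        vec128 a = {{1, 2, 3, 0xFFFFFFFFu}};
        vec128 b = {{10, 20, 30, 1}};
        vec128 c = padd_w(a, b);   /* four additions in one conceptual step */
        printf("%u %u %u %u\n", (unsigned)c.lane[0], (unsigned)c.lane[1],
               (unsigned)c.lane[2], (unsigned)c.lane[3]);   /* 11 22 33 0 */
        return 0;
    }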

696-401: A slimmer design. The Emotion Engine contained 13.5 million metal–oxide–semiconductor (MOS) transistors on an integrated circuit (IC) die measuring 240 mm². It was fabricated by Sony and Toshiba in a 0.25 μm (0.18 μm effective gate length) complementary metal–oxide–semiconductor (CMOS) process with four levels of interconnect. The Emotion Engine was packaged in

754-1115: A syntax to be used that could be entered through a typewriter, as was the case of his Electromechanical Arithmometer in 1920. In 1938, Konrad Zuse of Berlin completed the Z1, the first binary, programmable mechanical computer; it uses a 24-bit binary floating-point number representation with a 7-bit signed exponent, a 17-bit significand (including one implicit bit), and a sign bit. The more reliable relay-based Z3, completed in 1941, has representations for both positive and negative infinities; in particular, it implements defined operations with infinity, such as 1/∞ = 0, and it stops on undefined operations, such as 0 × ∞. Zuse also proposed, but did not complete, carefully rounded floating-point arithmetic that includes ±∞ and NaN representations, anticipating features of

812-831: Is arithmetic on subsets of real numbers formed by a signed string of a fixed number of digits in some base, called a significand, scaled by an integer exponent of that base. Numbers of this form are called floating-point numbers. For example, the number 2469/200 is a floating-point number in base ten with five digits: {\displaystyle 2469/200=12.345=\!\underbrace {12345} _{\text{significand}}\!\times \!\underbrace {10} _{\text{base}}\!\!\!\!\!\!\!\overbrace {{}^{-3}} ^{\text{exponent}}} However, unlike 2469/200 = 12.345, 7716/625 = 12.3456

870-500: Is a rational number, because it can be represented as one integer divided by another; for example 1.45 × 10^3 is (145/100) × 1000 or 145,000/100. The base determines the fractions that can be represented; for instance, 1/5 cannot be represented exactly as a floating-point number using a binary base, but 1/5 can be represented exactly using a decimal base (0.2, or 2 × 10^−1). However, 1/3 cannot be represented exactly by either binary (0.010101...) or decimal (0.333...), but in base 3, it

928-524: Is a smallest positive normal floating-point number, which has a 1 as the leading digit and 0 for the remaining digits of the significand, and the smallest possible value for the exponent. There is a largest floating-point number, which has B − 1 as the value for each digit of the significand and the largest possible value for the exponent. In addition, there are representable values strictly between −UFL and UFL. Namely, positive and negative zeros , as well as subnormal numbers . The IEEE standardized

986-432: Is from 2^−1022 ≈ 2 × 10^−308 to approximately 2^1024 ≈ 2 × 10^308. The number of normal floating-point numbers in a system (B, P, L, U), where B is the base, P is the precision, L is the smallest exponent and U is the largest exponent, is 2(B − 1)(B^(P−1))(U − L + 1). There
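
The double-precision limits quoted above can be read back from any C implementation's <float.h>; a small sketch (standard C, nothing specific to the hardware discussed in this article):

    #include <stdio.h>
    #include <float.h>

    int main(void) {
        printf("significand bits: %d\n", DBL_MANT_DIG);  /* 53, counting the implied leading bit */
        printf("smallest normal:  %g\n", DBL_MIN);       /* 2^-1022, about 2.2e-308 */
        printf("largest finite:   %g\n", DBL_MAX);       /* just under 2^1024, about 1.8e+308 */
        return 0;
    }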

1044-447: Is not a floating-point number in base ten with five digits—it needs six digits. The nearest floating-point number with only five digits is 12.346. And 1/3 = 0.3333… is not a floating-point number in base ten with any finite number of digits. In practice, most floating-point systems use base two , though base ten ( decimal floating point ) is also common. Floating-point arithmetic operations, such as addition and division, approximate

1102-492: Is often used to allow very small and very large real numbers that require fast processing times. The result of this dynamic range is that the numbers that can be represented are not uniformly spaced; the difference between two consecutive representable numbers varies with their exponent. Over the years, a variety of floating-point representations have been used in computers. In 1985, the IEEE 754 Standard for Floating-Point Arithmetic

1160-399: Is similar in concept to scientific notation. Logically, a floating-point number consists of: To derive the value of the floating-point number, the significand is multiplied by the base raised to the power of the exponent , equivalent to shifting the radix point from its implied position by a number of places equal to the value of the exponent—to the right if the exponent is positive or to

1218-1354: Is stored in memory using the IEEE 754 encoding, this becomes the significand s. The significand is assumed to have a binary point to the right of the leftmost bit. So, the binary representation of π is calculated from left-to-right as follows: {\displaystyle {\begin{aligned}&\left(\sum _{n=0}^{p-1}{\text{bit}}_{n}\times 2^{-n}\right)\times 2^{e}\\={}&\left(1\times 2^{-0}+1\times 2^{-1}+0\times 2^{-2}+0\times 2^{-3}+1\times 2^{-4}+\cdots +1\times 2^{-23}\right)\times 2^{1}\\\approx {}&1.57079637\times 2\\\approx {}&3.1415927\end{aligned}}} where p

1276-458: Is the precision (24 in this example), n is the position of the bit of the significand from the left (starting at 0 and finishing at 23 here) and e is the exponent (1 in this example). It can be required that the most significant digit of the significand of a non-zero number be non-zero (except when the corresponding exponent would be smaller than the minimum one). This process is called normalization. For binary formats (which use only
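
The same decomposition can be demonstrated in a few lines of C, assuming the platform's float is the IEEE 754 single-precision format (true of essentially all current hardware). This sketch pulls the sign, exponent and 24-bit significand back out of the stored value of π and reconstructs the number, matching the sum above:

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>
    #include <math.h>

    int main(void) {
        float pi = 3.14159265358979f;    /* rounds to 3.1415927 in single precision */
        uint32_t bits;
        memcpy(&bits, &pi, sizeof bits);              /* reinterpret the 32 stored bits */

        unsigned sign     = bits >> 31;                         /* 1 sign bit */
        int      e        = (int)((bits >> 23) & 0xFF) - 127;   /* 8-bit exponent, bias 127 */
        uint32_t fraction = bits & 0x7FFFFF;                    /* 23 stored significand bits */

        /* The implicit leading 1 restores the full 24-bit significand. */
        double significand = 1.0 + (double)fraction / (double)(1u << 23);

        printf("sign=%u  e=%d  significand=%.8f\n", sign, e, significand);
        printf("value = %.7f\n", (sign ? -1.0 : 1.0) * significand * pow(2.0, e));
        return 0;
    }

For π this prints e = 1 and significand ≈ 1.57079637, in agreement with the expansion shown above.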

1334-621: Is the precision (the number of digits in the significand), b is the base (in our example, this is the number ten ), and e is the exponent. Historically, several number bases have been used for representing floating-point numbers, with base two ( binary ) being the most common, followed by base ten ( decimal floating point ), and other less common varieties, such as base sixteen ( hexadecimal floating point ), base eight (octal floating point ), base four (quaternary floating point ), base three ( balanced ternary floating point ) and even base 256 and base 65,536 . A floating-point number
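
Applying the formula above to the decimal example used elsewhere in the article, with significand s = 1528535047, base b = 10, precision p = 10 and exponent e = 5, gives 1528535047/10^(10−1) × 10^5 = 1.528535047 × 10^5 = 152,853.5047.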

1392-469: Is trivial (0.1 or 1 × 3^−1). The occasions on which infinite expansions occur depend on the base and its prime factors. The way in which the significand (including its sign) and exponent are stored in a computer is implementation-dependent. The common IEEE formats are described in detail later and elsewhere, but as an example, in the binary single-precision (32-bit) floating-point representation, p = 24, and so

1450-615: The Cell Broadband Processor , coupled with a hardware Graphics Synthesizer still present to achieve PlayStation 2 backwards compatibility. In all subsequent revisions, the Graphics Synthesizer was removed; however, a PlayStation 2 software emulator is available in later system software revisions for use with Sony's PS2 Classics titles available for purchase on the Sony Entertainment Network. The Emotion Engine

1508-678: The English Electric DEUCE. The arithmetic is actually implemented in software, but with a one megahertz clock rate, the speed of floating-point and fixed-point operations in this machine was initially faster than that of many competing computers. The mass-produced IBM 704 followed in 1954; it introduced the use of a biased exponent. For many decades after that, floating-point hardware was typically an optional feature, and computers that had it were said to be "scientific computers", or to have "scientific computation" (SC) capability (see also Extensions for Scientific Computation (XSC)). It

1566-437: The scratchpad RAM exists in a separate memory space. A combined 48 double entry instruction and data translation lookaside buffer is provided for translating virtual addresses . Branch prediction is achieved by a 64-entry branch target address cache and a branch history table that is integrated into the instruction cache. The branch misprediction penalty is three cycles due to the short six stage pipeline. The majority of

1624-419: The Emotion Engine's floating point performance is provided by two vector processing units (VPU), designated VPU0 and VPU1. These were essentially DSPs tailored for 3D math, and the forerunner to hardware vertex shader pipelines . Each VPU features 32  128-bit vector SIMD registers (holding 4D vector data), 16 16-bit fixed-point registers, four floating point multiply-accumulate (FMAC) units,

1682-705: The GIF, which prioritizes them before dispatching them to the Graphics Synthesizer for rendering. The CPU core is a two-way superscalar in-order RISC processor. Based on the MIPS R5900, it implements the MIPS-III instruction set architecture (ISA) and much of MIPS-IV, in addition to a custom instruction set developed by Sony which operated on 128-bit wide groups of either 32-bit, 16-bit, or 8-bit integers in single instruction, multiple data (SIMD) fashion (e.g. four 32-bit integers could be added to four others using

1740-690: The IEEE Standard by four decades. In contrast, von Neumann recommended against floating-point numbers for the 1951 IAS machine, arguing that fixed-point arithmetic is preferable. The first commercial computer with floating-point hardware was Zuse's Z4 computer, designed in 1942–1945. In 1946, Bell Laboratories introduced the Model V, which implemented decimal floating-point numbers. The Pilot ACE has binary floating-point arithmetic, and it became operational in 1950 at the National Physical Laboratory, UK. Thirty-three were later sold commercially as

1798-671: The Network Adapter with built-in IDE HDD support, or other extensions that extend functionality and product lifecycle, which can be seen as a competitive advantage. In newer variants (such as the slim edition), however, the interface offers far more bandwidth than the PlayStation's input output devices require, as HDD support was removed and the PCMCIA connector design was abandoned in favor of

1856-603: The Spanish engineer Leonardo Torres Quevedo published Essays on Automatics, where he designed a special-purpose electromechanical calculator based on Charles Babbage's analytical engine and described a way to store floating-point numbers in a consistent manner. He stated that numbers will be stored in exponential format as n × 10^m, and offered three rules by which consistent manipulation of floating-point numbers by machines could be implemented. For Torres, "n will always be

1914-565: The computer representation for binary floating-point numbers in IEEE 754 (a.k.a. IEC 60559) in 1985. This first standard is followed by almost all modern machines. It was revised in 2008 . IBM mainframes support IBM's own hexadecimal floating point format and IEEE 754-2008 decimal floating point in addition to the IEEE 754 binary format. The Cray T90 series had an IEEE version, but the SV1 still uses Cray floating-point format. The standard provides for many closely related formats, differing in only

1972-420: The corresponding real number arithmetic operations by rounding any result that is not a floating-point number itself to a nearby floating-point number. For example, in a floating-point arithmetic with five base-ten digits, the sum 12.345 + 1.0001 = 13.3451 might be rounded to 13.345. The term floating point refers to the fact that the number's radix point can "float" anywhere to the left, right, or between
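
The same rounding happens in the binary arithmetic most hardware uses; a minimal C illustration (any IEEE 754 double-precision environment):

    #include <stdio.h>

    int main(void) {
        double a = 0.1, b = 0.2;
        /* Neither 0.1 nor 0.2 has an exact binary representation, and the exact
           sum of their nearest doubles is not representable either, so the
           result is rounded to the nearest double. */
        printf("%.17g\n", a + b);                               /* 0.30000000000000004 */
        printf("%s\n", (a + b == 0.3) ? "equal" : "not equal"); /* not equal */
        return 0;
    }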

2030-477: The creation of the IEEE 754 standard once the 32-bit (or 64-bit) word had become commonplace. This standard was significantly based on a proposal from Intel, which was designing the i8087 numerical coprocessor; Motorola, which was designing the 68000 around the same time, gave significant input as well. In 1989, mathematician and computer scientist William Kahan was honored with the Turing Award for being

2088-503: The digits 0 and 1 ), this non-zero digit is necessarily 1 . Therefore, it does not need to be represented in memory, allowing the format to have one more bit of precision. This rule is variously called the leading bit convention , the implicit bit convention , the hidden bit convention , or the assumed bit convention . The floating-point representation is by far the most common way of representing in computers an approximation to real numbers. However, there are alternatives: In 1914,

2146-604: The faulty dies. Functional dies are then packaged and the completed integrated circuit is ready to be shipped. A die can host many types of circuits. One common use case of an integrated circuit die is in the form of a Central Processing Unit (CPU) . Through advances in modern technology, the size of the transistor within the die has shrunk exponentially, following Moore's Law . Other uses for dies can range from LED lighting to power semiconductor devices . Images of dies are commonly called die shots . Floating point In computing , floating-point arithmetic ( FP )

2204-668: The four single-precision (32-bit) FMAC units for processing. This scheme is similar to the SSEx extensions by Intel. The FMAC units take four cycles to execute one instruction, but as the units have a six-stage pipeline, they have a throughput of one instruction per cycle. The FDIV unit has a nine-stage pipeline and can execute one instruction every seven cycles. The IPU performs MPEG-2 compressed image decoding, allowing playback of DVDs and game FMV, and also provides vector quantization for 2D graphics data. The memory management unit, RDRAM controller and DMA controller handle memory access within
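
A rough scalar C model of what one such 4-wide multiply-accumulate step computes is shown below. It is purely illustrative: the vec4 type and vmadd function are invented names, and real VU programs are written in VU microcode, not C.

    #include <stdio.h>

    /* Four single-precision lanes, as held in one 128-bit vector register. */
    typedef struct { float x, y, z, w; } vec4;

    /* One conceptual FMAC step: acc = acc + a * b, applied to all four
       lanes at once (one lane per FMAC unit). */
    static vec4 vmadd(vec4 acc, vec4 a, vec4 b) {
        acc.x += a.x * b.x;
        acc.y += a.y * b.y;
        acc.z += a.z * b.z;
        acc.w += a.w * b.w;
        return acc;
    }

    int main(void) {
        vec4 acc = {0.0f, 0.0f, 0.0f, 0.0f};
        vec4 a   = {1.0f, 2.0f, 3.0f, 4.0f};
        vec4 b   = {0.5f, 0.5f, 0.5f, 0.5f};
        acc = vmadd(acc, a, b);
        printf("%g %g %g %g\n", acc.x, acc.y, acc.z, acc.w);   /* 0.5 1 1.5 2 */
        return 0;
    }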

2262-492: The given number is scaled by a power of 10, so that it lies within a specific range—typically between 1 and 10, with the radix point appearing immediately after the first digit. As a power of ten, the scaling factor is then indicated separately at the end of the number. For example, the orbital period of Jupiter's moon Io is 152,853.5047 seconds, a value that would be represented in standard-form scientific notation as 1.528535047 × 10^5 seconds. Floating-point representation

2320-510: The input output interface interfaces a 32-bit wide, 37.5 MHz input output bus with a maximum theoretical bandwidth of 150 MB/s to the internal data bus. The interface provides enough bandwidth for the PCMCIA extension connector which was used for the network adapter with built-in P-ATA interface for faster data access and online functionality. An advantage of the high bandwidth was that it could be easily used to introduce hardware extensions like

2378-544: The internal data bus. Because of this, the memory controller buffers data sent from the DRDRAM channels so the extra bandwidth can be utilised by the CPU. The Emotion Engine interfaces directly to the Graphics Synthesizer via the GIF with a dedicated 64-bit, 150 MHz bus that has a maximum theoretical bandwidth of 1.2 GB/s. To provide communications between the Emotion Engine and the Input Output Processor (IOP),
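
As another quick check of the stated figures: the dedicated GIF bus moves 64 bits × 150 MHz = 9.6 Gbit/s = 1.2 GB/s, and the 32-bit, 37.5 MHz input output bus described elsewhere in the article moves 32 bits × 37.5 MHz = 1.2 Gbit/s = 150 MB/s.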

2436-453: The left if the exponent is negative. Using base-10 (the familiar decimal notation) as an example, the number 152,853.5047, which has ten decimal digits of precision, is represented as the significand 1,528,535,047 together with 5 as the exponent. To determine the actual value, a decimal point is placed after the first digit of the significand and the result is multiplied by 10^5 to give 1.528535047 × 10^5, or 152,853.5047. In storing such

2494-411: The mainframe level was an ongoing problem by the early 1970s for those writing and maintaining higher-level source code; these manufacturer floating-point standards differed in the word sizes, the representations, and the rounding behavior and general accuracy of operations. Floating-point compatibility across multiple computing systems was in desperate need of standardization by the early 1980s, leading to

2552-420: The primary architect behind this proposal; he was aided by his student Jerome Coonen and a visiting professor, Harold Stone . Among the x86 innovations are these: A floating-point number consists of two fixed-point components, whose range depends exclusively on the number of bits or digits in their representation. Whereas components linearly depend on their range, the floating-point range linearly depends on

2610-450: The production of monocrystalline silicon ingots. These ingots are then sliced into disks with a diameter of up to 300 mm. These wafers are then polished to a mirror finish before going through photolithography . In many steps the transistors are manufactured and connected with metal interconnect layers. These prepared wafers then go through wafer testing to test their functionality. The wafers are then sliced and sorted to filter out

2668-457: The radix point is not specified, then the string implicitly represents an integer and the unstated radix point would be off the right-hand end of the string, next to the least significant digit. In fixed-point systems, a position in the string is specified for the radix point. So a fixed-point scheme might use a string of 8 decimal digits with the decimal point in the middle, whereby "00012345" would represent 0001.2345. In scientific notation ,

2726-437: The same number of digits (e.g. six), the first digit of n will be of order of tenths, the second of hundredths, etc, and one will write each quantity in the form: n ; m ." The format he proposed shows the need for a fixed-sized significand as is presently used for floating-point data, fixing the location of the decimal point in the significand so that each representation was unique, and how to format such numbers by specifying

2784-716: The shared SIMD/integer registers. For comparison, 128-bit wide registers and SIMD instructions had been present in the 32-bit x86 architecture since 1999, with the introduction of SSE. However, the internal data paths were 128 bits wide, and its processors were capable of operating on 4 × 32-bit quantities in parallel in single registers. It has a 6-stage integer pipeline and a 15-stage floating-point (FP) pipeline. Its assortment of registers consists of 32 128-bit VLIW SIMD registers (naming/renaming), one 64-bit accumulator and two 64-bit general data registers, 8 16-bit fixed-function registers, and 16 8-bit controller registers. The processor also has two 64-bit integer arithmetic logic units (ALUs),

2842-496: The significand is a string of 24 bits. For instance, the number π's first 33 bits are: {\displaystyle 11001001\ 00001111\ 1101101{\underline {0}}\ 10100010\ 0.} In this binary expansion, let us denote the positions from 0 (leftmost bit, or most significant bit) to 32 (rightmost bit). The 24-bit significand will stop at position 23, shown as

2900-417: The significand range and exponentially on the range of the exponent component, which gives the number a far wider range. On a typical computer system, a double-precision (64-bit) binary floating-point number has a coefficient of 53 bits (including 1 implied bit), an exponent of 11 bits, and 1 sign bit. Since 2^10 = 1024, the complete range of the positive normal floating-point numbers in this format

2958-408: The significant digits of the number. This position is indicated by the exponent, so floating point can be considered a form of scientific notation . A floating-point system can be used to represent, with a fixed number of digits, numbers of very different orders of magnitude — such as the number of meters between galaxies or between protons in an atom . For this reason, floating-point arithmetic

3016-500: The single-precision format, and any integer with absolute value less than 2^53 can be exactly represented in the double-precision format. Furthermore, a wide range of powers of 2 times such a number can be represented. These properties are sometimes used for purely integer data, to get 53-bit integers on platforms that have double-precision floats but only 32-bit integers. The standard specifies some special values, and their representation: positive infinity (+∞), negative infinity (−∞),

3074-452: The system. Communications between the MIPS core, the two VPUs, GIF, memory controller and other units is handled by a 128-bit wide internal data bus running at half the clock frequency of the Emotion Engine but, to offer greater bandwidth, there is also a 128-bit dedicated path between the CPU and VPU0 and a 128-bit dedicated path between VPU1 and GIF. At 150 MHz, the internal data bus provides

3132-547: The underlined bit 0 above. The next bit, at position 24, is called the round bit or rounding bit. It is used to round the 33-bit approximation to the nearest 24-bit number (there are specific rules for halfway values, which is not the case here). This bit, which is 1 in this example, is added to the integer formed by the leftmost 24 bits, yielding: {\displaystyle 11001001\ 00001111\ 1101101{\underline {1}}.} When this

3190-553: Was also used in the PSX (digital video recorder) as well as the HDTV television models Sony WEGA HVX (Model Numbers KDE-xxxHVX/KDL-xxxHVX) and Sony BRAVIA KDL22PX300, all of which used PlayStation 2 hardware. Die (integrated circuit) A die , in the context of integrated circuits , is a small block of semiconducting material on which a given functional circuit is fabricated . Typically, integrated circuits are produced in large batches on

3248-408: Was established, and since the 1990s, the most commonly encountered representations are those defined by the IEEE. The speed of floating-point operations, commonly measured in terms of FLOPS , is an important characteristic of a computer system , especially for applications that involve intensive mathematical calculations. A floating-point unit (FPU, colloquially a math coprocessor ) is a part of

3306-433: Was implemented by grouping the two 64-bit integer ALUs. Both the integer and floating-point pipelines are six stages long. To feed the execution units with instructions and data, there is a 16 KB two-way set-associative instruction cache, an 8 KB two-way set-associative non-blocking data cache and a 16 KB scratchpad RAM. Both the instruction and data caches are virtually indexed and physically tagged, while

3364-1002: Was not until the launch of the Intel i486 in 1989 that general-purpose personal computers had floating-point capability in hardware as a standard feature. The UNIVAC 1100/2200 series , introduced in 1962, supported two floating-point representations: The IBM 7094 , also introduced in 1962, supported single-precision and double-precision representations, but with no relation to the UNIVAC's representations. Indeed, in 1964, IBM introduced hexadecimal floating-point representations in its System/360 mainframes; these same representations are still available for use in modern z/Architecture systems. In 1998, IBM implemented IEEE-compatible binary floating-point arithmetic in its mainframes; in 2005, IBM also added IEEE-compatible decimal floating-point arithmetic. Initially, computers used many different representations for floating-point numbers. The lack of standardization at
