Static random-access memory (static RAM or SRAM) is a type of random-access memory (RAM) that uses latching circuitry (flip-flop) to store each bit. SRAM is volatile memory; data is lost when power is removed.
The static qualifier differentiates SRAM from dynamic random-access memory (DRAM), which must be periodically refreshed. Semiconductor bipolar SRAM was invented in 1963 by Robert Norman at Fairchild Semiconductor. Metal–oxide–semiconductor SRAM (MOS-SRAM) was invented in 1964 by John Schmidt at Fairchild Semiconductor. It was a 64-bit MOS p-channel SRAM. SRAM was the main driver behind any new CMOS-based technology fabrication process since
274-507: A page mode , where words of a page (256, 512, or 1024 words) can be read sequentially with a significantly shorter access time (typically approximately 30 ns). The page is selected by setting the upper address lines and then words are sequentially read by stepping through the lower address lines. With the introduction of the FinFET transistor implementation of SRAM cells, they started to suffer from increasing inefficiencies in cell sizes. Over
411-637: A storage hierarchy , which puts fast but expensive and small storage options close to the CPU and slower but less expensive and larger options further away. Generally, the fast technologies are referred to as "memory", while slower persistent technologies are referred to as "storage". Even the first computer designs, Charles Babbage 's Analytical Engine and Percy Ludgate 's Analytical Machine, clearly distinguished between processing and memory (Babbage stored numbers as rotations of gears, while Ludgate stored numbers as displacements of rods in shuttles). This distinction
548-588: A 16-bit silicon memory chip based on the Farber-Schlig cell, with 84 transistors, 64 resistors, and 4 diodes. In April 1969, Intel Inc. introduced its first product, Intel 3101, a SRAM memory chip intended to replace bulky magnetic-core memory modules; Its capacity was 64 bits (In the first versions, only 63 bits were usable due to a bug) and was based on bipolar junction transistors . It was designed by using rubylith . Though it can be characterized as volatile memory , SRAM exhibits data remanence . SRAM offers
685-423: A DRAM, the bit line is connected to storage capacitors and charge sharing causes the bit line to swing upwards or downwards. The symmetric structure of SRAMs also allows for differential signaling , which makes small voltage swings more easily detectable. Another difference with DRAM that contributes to making SRAM faster is that commercial chips accept all address bits at a time. By comparison, commodity DRAMs have
822-483: A French institute reported on a research of an IoT -purposed 28nm fabricated IC . It was based on fully depleted silicon on insulator -transistors (FD-SOI), had two-ported SRAM memory rail for synchronous/asynchronous accesses, and selective virtual ground (SVGND). The study claimed reaching an ultra-low SVGND current in a sleep and read modes by finely tuning its voltage. Dynamic random-access memory Dynamic random-access memory ( dynamic RAM or DRAM )
959-501: A bipolar dynamic RAM for its electronic calculator Toscal BC-1411. In 1966, Tomohisa Yoshimaru and Hiroshi Komikawa from Toshiba applied for a Japanese patent of a memory circuit composed of several transistors and a capacitor, in 1967 they applied for a patent in the US. The earliest forms of DRAM mentioned above used bipolar transistors . While it offered improved performance over magnetic-core memory , bipolar DRAM could not compete with
1096-475: A bit pattern to each character , digit , or multimedia object. Many standards exist for encoding (e.g. character encodings like ASCII , image encodings like JPEG , and video encodings like MPEG-4 ). By adding bits to each encoded unit, redundancy allows the computer to detect errors in coded data and correct them based on mathematical algorithms. Errors generally occur in low probabilities due to random bit value flipping, or "physical bit fatigue", loss of
1233-531: A capacity of 2 = 2,048 = 2 k words) and an 8-bit word, so they are referred to as 2k × 8 SRAM . The dimensions of an SRAM cell on an IC is determined by the minimum feature size of the process used to make the IC. ‹The template Manual is being considered for merging .› An SRAM cell has three states: SRAM operating in read and write modes should have readability and write stability , respectively. The three different states work as follows: If
1370-493: A computer a brief window of time to move information from primary volatile storage into non-volatile storage before the batteries are exhausted. Some systems, for example EMC Symmetrix , have integrated batteries that maintain volatile storage for several minutes. Utilities such as hdparm and sar can be used to measure IO performance in Linux. Full disk encryption , volume and virtual disk encryption, andor file/folder encryption
1507-466: A cost advantage that grew with every jump in memory size. The MK4096 proved to be a very robust design for customer applications. At the 16 Kbit density, the cost advantage increased; the 16 Kbit Mostek MK4116 DRAM, introduced in 1976, achieved greater than 75% worldwide DRAM market share. However, as density increased to 64 Kbit in the early 1980s, Mostek and other US manufacturers were overtaken by Japanese DRAM manufacturers, which dominated
a database) to represent a string of bits by a shorter bit string ("compress") and reconstruct the original string ("decompress") when needed. This utilizes substantially less storage (tens of percent) for many types of data at the cost of more computation (compress and decompress when needed). Analysis of the trade-off between storage cost saving and costs of related computations and possible delays in data availability
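A small Python sketch of the compress/decompress trade-off just described, using the standard-library zlib module purely as a convenient example codec (the data and sizes are illustrative):

```python
import zlib

original = b"ABABABABAB" * 1000            # 10,000 bytes of highly repetitive data
compressed = zlib.compress(original)        # spends CPU time to save storage space
restored = zlib.decompress(compressed)      # spends CPU time again when the data is needed

assert restored == original
print(len(original), len(compressed))       # e.g. 10000 vs. only a few dozen bytes
```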
1781-439: A density and cost advantage over true SRAM, and without the access complexity of DRAM. In the 1990s, asynchronous SRAM used to be employed for fast access time. Asynchronous SRAM was used as main memory for small cache-less embedded processors used in everything from industrial electronics and measurement systems to hard disks and networking equipment, among many other applications. Nowadays, synchronous SRAM (e.g. DDR SRAM)
1918-702: A drive. When the computer has finished reading the information, the robotic arm will return the medium to its place in the library. Tertiary storage is also known as nearline storage because it is "near to online". The formal distinction between online, nearline, and offline storage is: For example, always-on spinning hard disk drives are online storage, while spinning drives that spin down automatically, such as in massive arrays of idle disks ( MAID ), are nearline storage. Removable media such as tape cartridges that can be automatically loaded, as in tape libraries , are nearline storage, while tape cartridges that must be manually loaded are offline storage. Off-line storage
2055-437: A hard-wired dynamic memory. Paper tape was read and the characters on it "were remembered in a dynamic store." The store used a large bank of capacitors, which were either charged or not, a charged capacitor representing cross (1) and an uncharged capacitor dot (0). Since the charge gradually leaked away, a periodic pulse was applied to top up those still charged (hence the term 'dynamic')". In November 1965, Toshiba introduced
2192-412: A logic one requires the wordline be driven to a voltage greater than the sum of V CC and the access transistor's threshold voltage (V TH ). This voltage is called V CC pumped (V CCP ). The time required to discharge a capacitor thus depends on what logic value is stored in the capacitor. A capacitor containing logic one begins to discharge when the voltage at the access transistor's gate terminal
2329-450: A more complex process is used in practice: The read cycle is started by precharging both bit lines BL and BL , to high (logic 1 ) voltage. Then asserting the word line WL enables both the access transistors M 5 and M 6 , which causes one bit line BL voltage to slightly drop. Then the BL and BL lines will have a small voltage difference between them. A sense amplifier will sense which line has
2466-400: A relatively simple processor may keep state between successive computations to build up complex procedural results. Most modern computers are von Neumann machines. A modern digital computer represents data using the binary numeral system . Text, numbers, pictures, audio, and nearly any other form of information can be converted into a string of bits , or binary digits, each of which has
2603-671: A simple data access model and does not require a refresh circuit. Performance and reliability are good and power consumption is low when idle. Since SRAM requires more transistors per bit to implement, it is less dense and more expensive than DRAM and also has a higher power consumption during read or write access. The power consumption of SRAM varies widely depending on how frequently it is accessed. Many categories of industrial and scientific subsystems, automotive electronics, and similar embedded systems , contain SRAM which, in this context, may be referred to as ESRAM . Some amount (kilobytes or less)
2740-514: A single bitline contact) from a column, then move the DRAM cells from an adjacent column into the voids. The location where the bitline twists occupies additional area. To minimize area overhead, engineers select the simplest and most area-minimal twisting scheme that is able to reduce noise under the specified limit. As process technology improves to reduce minimum feature sizes, the signal to noise problem worsens, since coupling between adjacent metal wires
2877-407: A single chip, to accommodate more capacity without becoming too slow. When such a RAM is accessed by clocked logic, the times are generally rounded up to the nearest clock cycle. For example, when accessed by a 100 MHz state machine (i.e. a 10 ns clock), the 50 ns DRAM can perform the first read in five clock cycles, and additional reads within the same page every two clock cycles. This
a source to read instructions from, in order to start the computer. Hence, non-volatile primary storage containing a small startup program (BIOS) is used to bootstrap the computer, that is, to read a larger program from non-volatile secondary storage to RAM and start to execute it. A non-volatile technology used for this purpose is called ROM, for read-only memory (the terminology may be somewhat confusing as most ROM types are also capable of random access). Many types of "ROM" are not literally read only, as updates to them are possible; however it
3151-462: A time determined by an external timer function that governs the operation of the rest of a system, such as the vertical blanking interval that occurs every 10–20 ms in video equipment. The row address of the row that will be refreshed next is maintained by external logic or a counter within the DRAM. A system that provides the row address (and the refresh command) does so to have greater control over when to refresh and which row to refresh. This
3288-444: A value is read, modified, and then written back as a single, indivisible operation (Jacob, p. 459). The one-transistor, zero-capacitor (1T, or 1T0C) DRAM cell has been a topic of research since the late-1990s. 1T DRAM is a different way of constructing the basic DRAM memory cell, distinct from the classic one-transistor/one-capacitor (1T/1C) DRAM cell, which is also sometimes referred to as 1T DRAM , particularly in comparison to
3425-490: A value of 0 or 1. The most common unit of storage is the byte , equal to 8 bits. A piece of information can be handled by any computer or device whose storage space is large enough to accommodate the binary representation of the piece of information , or simply data . For example, the complete works of Shakespeare , about 1250 pages in print, can be stored in about five megabytes (40 million bits) with one byte per character. Data are encoded by assigning
3562-450: Is 3-4-4-8 with a 200 MHz clock, while premium-priced high performance PC3200 DDR DRAM DIMM might be operated at 2-2-2-5 timing. Minimum random access time has improved from t RAC = 50 ns to t RCD + t CL = 22.5 ns , and even the premium 20 ns variety is only 2.5 times faster than the asynchronous DRAM. CAS latency has improved even less, from t CAC = 13 ns to 10 ns. However,
3699-407: Is volatile memory (vs. non-volatile memory ), since it loses its data quickly when power is removed. However, DRAM does exhibit limited data remanence . DRAM typically takes the form of an integrated circuit chip, which can consist of dozens to billions of DRAM memory cells. DRAM chips are widely used in digital electronics where low-cost and high-capacity computer memory is required. One of
3836-427: Is a form of volatile memory that also requires the stored information to be periodically reread and rewritten, or refreshed , otherwise it would vanish. Static random-access memory is a form of volatile memory similar to DRAM with the exception that it never needs to be refreshed as long as power is applied; it loses its content when the power supply is lost. An uninterruptible power supply (UPS) can be used to give
3973-425: Is a level below secondary storage. Typically, it involves a robotic mechanism which will mount (insert) and dismount removable mass storage media into a storage device according to the system's demands; such data are often copied to secondary storage before use. It is primarily used for archiving rarely accessed information since it is much slower than secondary storage (e.g. 5–60 seconds vs. 1–10 milliseconds). This
4110-466: Is a type of random-access semiconductor memory that stores each bit of data in a memory cell , usually consisting of a tiny capacitor and a transistor , both typically based on metal–oxide–semiconductor (MOS) technology. While most DRAM memory cell designs use a capacitor and transistor, some only use two transistors. In the designs where a capacitor is used, the capacitor can either be charged or discharged; these two states are taken to represent
4247-442: Is able to offer better long-term area efficiencies; since folded array architectures require increasingly complex folding schemes to match any advance in process technology. The relationship between process technology, array architecture, and area efficiency is an active area of research. The first DRAM integrated circuits did not have any redundancy. An integrated circuit with a defective DRAM cell would be discarded. Beginning with
4384-439: Is above V CCP . If the capacitor contains a logic zero, it begins to discharge when the gate terminal voltage is above V TH . Up until the mid-1980s, the capacitors in DRAM cells were co-planar with the access transistor (they were constructed on the surface of the substrate), thus they were referred to as planar capacitors. The drive to increase both density and, to a lesser extent, performance, required denser designs. This
4521-517: Is also embedded in practically all modern appliances, toys, etc. that implement an electronic user interface. SRAM in its dual-ported form is sometimes used for real-time digital signal processing circuits. SRAM is also used in personal computers, workstations, routers and peripheral equipment: CPU register files , internal CPU caches , internal GPU caches and external burst mode SRAM caches, hard disk buffers, router buffers, etc. LCD screens and printers also normally employ SRAM to hold
4658-410: Is computer data storage on a medium or a device that is not under the control of a processing unit . The medium is recorded, usually in a secondary or tertiary storage device, and then physically removed or disconnected. It must be inserted or connected by a human operator before a computer can access it again. Unlike tertiary storage, it cannot be accessed without human interaction. Off-line storage
4795-408: Is done before deciding whether to keep certain data compressed or not. For security reasons , certain types of data (e.g. credit card information) may be kept encrypted in storage to prevent the possibility of unauthorized information reconstruction from chunks of storage snapshots. Generally, the lower a storage is in the hierarchy, the lesser its bandwidth and the greater its access latency
4932-413: Is done to minimize conflicts with memory accesses, since such a system has both knowledge of the memory access patterns and the refresh requirements of the DRAM. When the row address is supplied by a counter within the DRAM, the system relinquishes control over which row is refreshed and only provides the refresh command. Some modern DRAMs are capable of self-refresh; no external logic is required to instruct
5069-609: Is estimable using S.M.A.R.T. diagnostic data that includes the hours of operation and the count of spin-ups, though its reliability is disputed. Flash storage may experience downspiking transfer rates as a result of accumulating errors, which the flash memory controller attempts to correct. The health of optical media can be determined by measuring correctable minor errors , of which high counts signify deteriorating and/or low-quality media. Too many consecutive minor errors can lead to data corruption. Not all vendors and models of optical drives support error scanning. As of 2011 ,
5206-1063: Is from the CPU. This traditional division of storage to primary, secondary, tertiary, and off-line storage is also guided by cost per bit. In contemporary usage, memory is usually fast but temporary semiconductor read-write memory , typically DRAM (dynamic RAM) or other such devices. Storage consists of storage devices and their media not directly accessible by the CPU ( secondary or tertiary storage ), typically hard disk drives , optical disc drives, and other devices slower than RAM but non-volatile (retaining contents when powered down). Historically, memory has, depending on technology, been called central memory , core memory , core storage , drum , main memory , real storage , or internal memory . Meanwhile, slower persistent storage devices have been referred to as secondary storage , external memory , or auxiliary/peripheral storage . Primary storage (also known as main memory , internal memory , or prime memory ), often referred to simply as memory ,
5343-436: Is fully at its highest voltage and the other bit-line is at the lowest possible voltage. To store data, a row is opened and a given column's sense amplifier is temporarily forced to the desired high or low-voltage state, thus causing the bit-line to charge or discharge the cell storage capacitor to the desired value. Due to the sense amplifier's positive feedback configuration, it will hold a bit-line at stable voltage even after
5480-405: Is given as n F , where n is a number derived from the DRAM cell design, and F is the smallest feature size of a given process technology. This scheme permits comparison of DRAM size over different process technology generations, as DRAM cell area scales at linear or near-linear rates with respect to feature size. The typical area for modern DRAM cells varies between 6–8 F . The horizontal wire,
5617-401: Is increased static power due to the constant current flow through one of the pull-down transistors (M1 or M2). This is sometimes used to implement more than one (read and/or write) port, which may be useful in certain types of video memory and register files implemented with multi-ported SRAM circuitry. Generally, the fewer transistors needed per cell, the smaller each cell can be. Since
5754-424: Is inversely proportional to their pitch. The array folding and bitline twisting schemes that are used must increase in complexity in order to maintain sufficient noise reduction. Schemes that have desirable noise immunity characteristics for a minimal impact in area is the topic of current research (Kenner, p. 37). Advances in process technology could result in open bitline array architectures being favored if it
5891-408: Is limited by its capacitance (which increases with length), which must be kept within a range for proper sensing (as DRAMs operate by sensing the charge of the capacitor released onto the bitline). Bitline length is also limited by the amount of operating current the DRAM can draw and by how power can be dissipated, since these two characteristics are largely determined by the charging and discharging of
6028-416: Is mainly used for CPU cache , small on-chip memory, FIFOs or other small buffers. A typical SRAM cell is made up of six MOSFETs , and is often called a 6T SRAM cell . Each bit in the cell is stored on four transistors (M1, M2, M3, M4) that form two cross-coupled inverters. This storage cell has two stable states which are used to denote 0 and 1. Two additional access transistors serve to control
6165-427: Is only slightly overridden by the write process, the opposite transistors pair (M 1 and M 2 ) gate voltage is also changed. This means that the M 1 and M 2 transistors can be easier overridden, and so on. Thus, cross-coupled inverters magnify the writing process. RAM with an access time of 70 ns will output valid data within 70 ns from the time that the address lines are valid. Some SRAM cells have
6302-416: Is primarily useful for extraordinarily large data stores, accessed without human operators. Typical examples include tape libraries and optical jukeboxes . When a computer needs to read information from the tertiary storage, it will first consult a catalog database to determine which tape or disc contains the information. Next, the computer will instruct a robotic arm to fetch the medium and place it in
6439-494: Is rather employed similarly to synchronous DRAM – DDR SDRAM memory is rather used than asynchronous DRAM . Synchronous memory interface is much faster as access time can be significantly reduced by employing pipeline architecture. Furthermore, as DRAM is much cheaper than SRAM, SRAM is often replaced by DRAM, especially in the case when a large volume of data is required. SRAM memory is, however, much faster for random (not block / burst) access. Therefore, SRAM memory
6576-700: Is readily available for most storage devices. Hardware memory encryption is available in Intel Architecture, supporting Total Memory Encryption (TME) and page granular memory encryption with multiple keys (MKTME). and in SPARC M7 generation since October 2015. Distinct types of data storage have different points of failure and various methods of predictive failure analysis . Vulnerabilities that can instantly lead to total loss are head crashing on mechanical hard drives and failure of electronic components on flash storage. Impending failure on hard disk drives
6713-488: Is slow and memory must be erased in large portions before it can be re-written. Some embedded systems run programs directly from ROM (or similar), because such programs are rarely changed. Standard computers do not store non-rudimentary programs in ROM, and rather, use large capacities of secondary storage, which is non-volatile as well, and not as costly. Recently, primary storage and secondary storage in some uses refer to what
6850-489: Is the clearest way to compare between the performance of different DRAM memories, as it sets the slower limit regardless of the row length or page size. Bigger arrays forcibly result in larger bit line capacitance and longer propagation delays, which cause this time to increase as the sense amplifier settling time is dependent on both the capacitance as well as the propagation latency. This is countered in modern DRAM chips by instead integrating many more complete DRAM arrays within
6987-442: Is the only one directly accessible to the CPU. The CPU continuously reads instructions stored there and executes them as required. Any data actively operated on is also stored there in a uniform manner. Historically, early computers used delay lines , Williams tubes , or rotating magnetic drums as primary storage. By 1954, those unreliable methods were mostly replaced by magnetic-core memory . Core memory remained dominant until
is typically automatically fenced out, taken out of use by the device, and replaced with another functioning equivalent group in the device, where the corrected bit values are restored (if possible). The cyclic redundancy check (CRC) method is typically used in communications and storage for error detection. A detected error is then retried. Data compression methods allow in many cases (such as
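A minimal Python illustration of CRC-based error detection as just described, using the standard-library zlib.crc32 (one common CRC variant; real storage and link layers use various polynomials, and the payload here is illustrative):

```python
import zlib

payload = bytearray(b"data block written to storage or sent over a link")
stored_crc = zlib.crc32(payload)        # CRC computed and stored alongside the data

payload[7] ^= 0x01                      # a single bit flips at rest or in transit

if zlib.crc32(payload) != stored_crc:   # mismatch -> error detected, operation retried
    print("CRC mismatch: retry the read/transfer")
```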
7261-505: Is typically measured in milliseconds (thousandths of a second), while the access time per byte for primary storage is measured in nanoseconds (billionths of a second). Thus, secondary storage is significantly slower than primary storage. Rotating optical storage devices, such as CD and DVD drives, have even longer access times. Other examples of secondary storage technologies include USB flash drives , floppy disks , magnetic tape , paper tape , punched cards , and RAM disks . Once
7398-496: Is used to transfer information since the detached medium can easily be physically transported. Additionally, it is useful for cases of disaster, where, for example, a fire destroys the original data, a medium in a remote location will be unaffected, enabling disaster recovery . Off-line storage increases general information security since it is physically inaccessible from a computer, and data confidentiality or integrity cannot be affected by computer-based attack techniques. Also, if
7535-443: Is usually arranged in a rectangular array of charge storage cells consisting of one capacitor and transistor per data bit. The figure to the right shows a simple example with a four-by-four cell matrix. Some DRAM matrices are many thousands of cells in height and width. The long horizontal lines connecting each row are known as word-lines. Each column of cells is composed of two bit-lines, each connected to every other storage cell in
7672-497: The Intel 1103 , in October 1970, despite initial problems with low yield until the fifth revision of the masks . The 1103 was designed by Joel Karp and laid out by Pat Earhart. The masks were cut by Barbara Maness and Judy Garcia. MOS memory overtook magnetic-core memory as the dominant memory technology in the early 1970s. The first DRAM with multiplexed row and column address lines was
7809-468: The JEDEC standard. Some systems refresh every row in a burst of activity involving all rows every 64 ms. Other systems refresh one row at a time staggered throughout the 64 ms interval. For example, a system with 2 = 8,192 rows would require a staggered refresh rate of one row every 7.8 μs which is 64 ms divided by 8,192 rows. A few real-time systems refresh a portion of memory at
7946-490: The Mostek MK4096 4 Kbit DRAM designed by Robert Proebsting and introduced in 1973. This addressing scheme uses the same address pins to receive the low half and the high half of the address of the memory cell being referenced, switching between the two halves on alternating bus cycles. This was a radical advance, effectively halving the number of address lines required, which enabled it to fit into packages with fewer pins,
8083-407: The cache memories in processors . The need to refresh DRAM demands more complicated circuitry and timing than SRAM. This complexity is offset by the structural simplicity of DRAM memory cells: only one transistor and a capacitor are required per bit, compared to four or six transistors in SRAM. This allows DRAM to reach very high densities with a simultaneous reduction in cost per bit. Refreshing
8220-476: The disk read/write head on HDDs reaches the proper placement and the data, subsequent data on the track are very fast to access. To reduce the seek time and rotational latency, data are transferred to and from disks in large contiguous blocks. Sequential or block access on disks is orders of magnitude faster than random access, and many sophisticated paradigms have been developed to design efficient algorithms based on sequential and block access. Another way to reduce
8357-404: The /RAS low to valid data out time. This is the time to open a row, settle the sense amplifiers, and deliver the selected column data to the output. This is also the minimum /RAS low time, which includes the time for the amplified data to be delivered back to recharge the cells. The time to read additional bits from an open page is much less, defined by the /CAS to /CAS cycle time. The quoted number
the 1960s, when CMOS was invented. In 1964, Arnold Farber and Eugene Schlig, working for IBM, created a hard-wired memory cell, using a transistor gate and tunnel diode latch. They replaced the latch with two transistors and two resistors, a configuration that became known as the Farber-Schlig cell. That year they submitted an invention disclosure, but it was initially rejected. In 1965, Benjamin Agusta and his team at IBM created
8631-690: The 1970s, when advances in integrated circuit technology allowed semiconductor memory to become economically competitive. This led to modern random-access memory (RAM). It is small-sized, light, but quite expensive at the same time. The particular types of RAM used for primary storage are volatile , meaning that they lose the information when not powered. Besides storing opened programs, it serves as disk cache and write buffer to improve both reading and writing performance. Operating systems borrow RAM capacity for caching so long as it's not needed by running software. Spare memory can be utilized as RAM drive for temporary high-speed data storage. As shown in
8768-504: The 3T and 4T DRAM which it replaced in the 1970s. In 1T DRAM cells, the bit of data is still stored in a capacitive region controlled by a transistor, but this capacitance is no longer provided by a separate capacitor. 1T DRAM is a "capacitorless" bit cell design that stores data using the parasitic body capacitance that is inherent to silicon on insulator (SOI) transistors. Considered a nuisance in logic design, this floating body effect can be used for data storage. This gives 1T DRAM cells
8905-464: The 3T1C cell for performance reasons (Kenner, p. 6). These performance advantages included, most significantly, the ability to read the state stored by the capacitor without discharging it, avoiding the need to write back what was read out (non-destructive read). A second performance advantage relates to the 3T1C cell's separate transistors for reading and writing; the memory controller can exploit this feature to perform atomic read-modify-writes, where
9042-462: The 64 Kbit generation, DRAM arrays have included spare rows and columns to improve yields. Spare rows and columns provide tolerance of minor fabrication defects which have caused a small number of rows or columns to be inoperable. The defective rows and columns are physically disconnected from the rest of the array by a triggering a programmable fuse or by cutting the wire by a laser. The spare rows or columns are substituted in by remapping logic in
9179-498: The DDR3 memory does achieve 32 times higher bandwidth; due to internal pipelining and wide data paths, it can output two words every 1.25 ns (1 600 Mword/s) , while the EDO DRAM can output one word per t PC = 20 ns (50 Mword/s). Each bit of data in a DRAM is stored as a positive or negative electrical charge in a capacitive structure. The structure providing
9316-514: The DRAM chips in them), such as Kingston Technology , and some manufacturers that sell stacked DRAM (used e.g. in the fastest supercomputers on the exascale ), separately such as Viking Technology . Others sell such integrated into other products, such as Fujitsu into its CPUs, AMD in GPUs, and Nvidia , with HBM2 in some of their GPU chips. The cryptanalytic machine code-named Aquarius used at Bletchley Park during World War II incorporated
9453-400: The DRAM to refresh or to provide a row address. Under some conditions, most of the data in DRAM can be recovered even if the DRAM has not been refreshed for several minutes. Many parameters are required to fully describe the timing of DRAM operation. Here are some examples for two timing grades of asynchronous DRAM, from a data sheet published in 1998: Thus, the generally quoted number is
9590-508: The I/O bottleneck is to use multiple disks in parallel to increase the bandwidth between primary and secondary memory. Secondary storage is often formatted according to a file system format, which provides the abstraction necessary to organize data into files and directories , while also providing metadata describing the owner of a certain file, the access time, the access permissions, and other information. Most computer operating systems use
9727-536: The US and worldwide markets during the 1980s and 1990s. Early in 1985, Gordon Moore decided to withdraw Intel from producing DRAM. By 1986, many, but not all, United States chip makers had stopped making DRAMs. Micron Technology and Texas Instruments continued to produce them commercially, and IBM produced them for internal use. In 1985, when 64K DRAM memory chips were the most common memory chips used in computers, and when more than 60 percent of those chips were produced by Japanese companies, semiconductor makers in
9864-540: The United States accused Japanese companies of export dumping for the purpose of driving makers in the United States out of the commodity memory chip business. Prices for the 64K product plummeted to as low as 35 cents apiece from $ 3.50 within 18 months, with disastrous financial consequences for some U.S. firms. On 4 December 1985 the US Commerce Department's International Trade Administration ruled in favor of
10001-500: The access to a storage cell during read and write operations. 6T SRAM is the most common kind of SRAM. In addition to 6T SRAM, other kinds of SRAM use 4, 5, 7, 8, 9, 10 (4T, 5T, 7T 8T, 9T, 10T SRAM), or more transistors per bit. Four-transistor SRAM is quite common in stand-alone SRAM devices (as opposed to SRAM used for CPU caches), implemented in special processes with an extra layer of polysilicon , allowing for very high-resistance pull-up resistors. The principal drawback of using 4T SRAM
10138-447: The address multiplexed in two halves, i.e. higher bits followed by lower bits, over the same package pins in order to keep their size and cost down. The size of an SRAM with m address lines and n data lines is 2 words, or 2 × n bits. The most common word size is 8 bits, meaning that a single byte can be read or written to each of 2 different words within the SRAM chip. Several common SRAM chips have 11 address lines (thus
10275-476: The bitline, which is almost always made of polysilicon, but is otherwise identical to the COB variation. The advantage the COB variant possesses is the ease of fabricating the contact between the bitline and the access transistor's source as it is physically close to the substrate surface. However, this requires the active area to be laid out at a 45-degree angle when viewed from above, which makes it difficult to ensure that
10412-415: The bitline. Sense amplifiers are required to read the state contained in the DRAM cells. When the access transistor is activated, the electrical charge in the capacitor is shared with the bitline. The bitline's capacitance is much greater than that of the capacitor (approximately ten times). Thus, the change in bitline voltage is minute. Sense amplifiers are required to resolve the voltage differential into
10549-436: The bitlines are divided into multiple segments, and the differential sense amplifiers are placed in between bitline segments. Because the sense amplifiers are placed between bitline segments, to route their outputs outside the array, an additional layer of interconnect placed above those used to construct the wordlines and bitlines is required. The DRAM cells that are on the edges of the array do not have adjacent segments. Since
10686-417: The capacitance can be increased by etching a deeper hole without any increase to surface area (Kenner, p. 44). Another advantage of the trench capacitor is that its structure is under the layers of metal interconnect, allowing them to be more easily made planar, which enables it to be integrated in a logic-optimized process technology, which have many levels of interconnect above the substrate. The fact that
10823-406: The capacitance, as well as the transistors that control access to it, is collectively referred to as a DRAM cell . They are the fundamental building block in DRAM arrays. Multiple DRAM memory cell variants exist, but the most commonly used variant in modern DRAMs is the one-transistor, one-capacitor (1T1C) cell. The transistor is used to admit current into the capacitor during writes, and to discharge
10960-410: The capacitor contact does not touch the bitline. CUB cells avoid this, but suffer from difficulties in inserting contacts in between bitlines, since the size of features this close to the surface are at or near the minimum feature size of the process technology (Kenner, pp. 33–42). The trench capacitor is constructed by etching a deep hole into the silicon substrate. The substrate volume surrounding
11097-417: The capacitor during reads. The access transistor is designed to maximize drive strength and minimize transistor-transistor leakage (Kenner, p. 34). The capacitor has two terminals, one of which is connected to its access transistor, and the other to either ground or V CC /2. In modern DRAMs, the latter case is more common, since it allows faster operation. In modern DRAMs, a voltage of +V CC /2 across
11234-405: The capacitor is required to store a logic one; and a voltage of −V CC /2 across the capacitor is required to store a logic zero. The resultant charge is Q = ± V C C 2 ⋅ C {\textstyle Q=\pm {V_{CC} \over 2}\cdot C} , where Q is the charge in coulombs and C is the capacitance in farads . Reading or writing
11371-410: The capacitor is under the logic means that it is constructed before the transistors are. This allows high-temperature processes to fabricate the capacitors, which would otherwise degrade the logic transistors and their performance. This makes trench capacitors suitable for constructing embedded DRAM (eDRAM) (Jacob, p. 357). Disadvantages of trench capacitors are difficulties in reliably constructing
11508-426: The capacitor to the write bitline just as in the 1T1C cell, but there was a separate read wordline and read transistor which connected an amplifier transistor to the read bitline. By the second generation, the drive to reduce cost by fitting the same amount of bits in a smaller area led to the almost universal adoption of the 1T1C DRAM cell, although a couple of devices with 4 and 16 Kbit capacities continued to use
11645-404: The capacitor's structures within deep holes and in connecting the capacitor to the access transistor's drain terminal (Kenner, p. 44). First-generation DRAM ICs (those with capacities of 1 Kbit), such as the archetypical Intel 1103 , used a three-transistor, one-capacitor (3T1C) DRAM cell with separate read and write circuitry. The write wordline drove a write transistor which connected
11782-463: The cell should be connected to the bit lines: BL and BL. They are used to transfer data for both read and write operations. Although it is not strictly necessary to have two bit lines, both the signal and its inverse are typically provided in order to improve noise margins and speed. During read accesses, the bit lines are actively driven high and low by the inverters in the SRAM cell. This improves SRAM bandwidth compared to DRAMs – in
11919-640: The cell's temperature rises. The cell power drain occurs in both active and idle states, thus wasting useful energy without any useful work done. Even though in the last 20 years the issue was partially addressed by the Data Retention Voltage technique (DRV) with reduction rates ranging from 5 to 10, the decrease in node size caused reduction rates to fall to about 2. With these two issues it became more challenging to develop energy-efficient and dense SRAM memories, prompting semiconductor industry to look for alternatives such as STT-MRAM and F-RAM . In 2019
12056-435: The circuitry used to read/write them. Main memory Computer data storage or digital data storage is a technology consisting of computer components and recording media that are used to retain digital data . It is a core function and fundamental component of computers. The central processing unit (CPU) of a computer is what manipulates data by performing computations. In practice, almost all computers use
12193-482: The column (the illustration to the right does not include this important detail). They are generally known as the + and − bit lines. A sense amplifier is essentially a pair of cross-connected inverters between the bit-lines. The first inverter is connected with input from the + bit-line and output to the − bit-line. The second inverter's input is from the − bit-line with output to the + bit-line. This results in positive feedback which stabilizes after one bit-line
12330-586: The commercialized Z-RAM from Innovative Silicon, the TTRAM from Renesas and the A-RAM from the UGR / CNRS consortium. DRAM cells are laid out in a regular rectangular, grid-like pattern to facilitate their control and access via wordlines and bitlines. The physical layout of the DRAM cells in an array is typically designed so that two adjacent DRAM cells in a column share a single bitline contact to reduce their area. DRAM cell area
12467-580: The complaint. Synchronous dynamic random-access memory (SDRAM) was developed by Samsung . The first commercial SDRAM chip was the Samsung KM48SL2000, which had a capacity of 16 Mb , and was introduced in 1992. The first commercial DDR SDRAM ( double data rate SDRAM) memory chip was Samsung's 64 Mb DDR SDRAM chip, released in 1998. Later, in 2001, Japanese DRAM makers accused Korean DRAM manufacturers of dumping. In 2002, US computer makers made claims of DRAM price fixing . DRAM
12604-524: The concept of virtual memory , allowing the utilization of more primary storage capacity than is physically available in the system. As the primary memory fills up, the system moves the least-used chunks ( pages ) to a swap file or page file on secondary storage, retrieving them later when needed. If a lot of pages are moved to slower secondary storage, the system performance is degraded. The secondary storage, including HDD , ODD and SSD , are usually block-addressable. Tertiary storage or tertiary memory
12741-457: The cost of processing a silicon wafer is relatively fixed, using smaller cells and so packing more bits on one wafer reduces the cost per bit of memory. Memory cells that use fewer than four transistors are possible; however, such 3T or 1T cells are DRAM, not SRAM (even the so-called 1T-SRAM ). Access to the cell is enabled by the word line (WL in figure) which controls the two access transistors M 5 and M 6 which, in turn, control whether
12878-400: The data consumes power, causing a variety of techniques to be used to manage the overall power consumption. For this reason, DRAM usually needs to operate with a memory controller ; the memory controller needs to know DRAM parameters, especially memory timings , to initialize DRAMs, which may be different depending on different DRAM manufacturers and part numbers. DRAM had a 47% increase in
13015-426: The data when the power supply is lost, ensuring preservation of critical information. nvSRAMs are used in a wide range of situations – networking, aerospace, and medical, among many others – where the preservation of data is critical and where batteries are impractical. Pseudostatic RAM (PSRAM) is DRAM combined with a self-refresh circuit. It appears externally as slower SRAM, albeit with
13152-430: The desired data to primary storage. Secondary storage is non-volatile (retaining data when its power is shut off). Modern computer systems typically have two orders of magnitude more secondary storage than primary storage because secondary storage is less expensive. In modern computers, hard disk drives (HDDs) or solid-state drives (SSDs) are usually used as secondary storage. The access time per byte for HDDs or SSDs
13289-491: The desired location of data. Then it reads or writes the data in the memory cells using the data bus. Additionally, a memory management unit (MMU) is a small device between CPU and RAM recalculating the actual memory address, for example to provide an abstraction of virtual memory or other tasks. As the RAM types used for primary storage are volatile (uninitialized at start up), a computer containing only such storage would not have
13426-400: The diagram, traditionally there are two more sub-layers of the primary storage, besides main large-capacity RAM: Main memory is directly or indirectly connected to the central processing unit via a memory bus . It is actually two buses (not on the diagram): an address bus and a data bus . The CPU firstly sends a number through an address bus, a number called memory address , that indicates
13563-412: The differential sense amplifiers require identical capacitance and bitline lengths from both segments, dummy bitline segments are provided. The advantage of the open bitline array is a smaller array area, although this advantage is slightly diminished by the dummy bitline segments. The disadvantage that caused the near disappearance of this architecture is the inherent vulnerability to noise , which affects
13700-434: The ease of interfacing. It is much easier to work with than DRAM as there are no refresh cycles and the address and data buses are often directly accessible. In addition to buses and power connections, SRAM usually requires only three controls: Chip Enable (CE), Write Enable (WE) and Output Enable (OE). In synchronous SRAM, Clock (CLK) is also included. Non-volatile SRAM (nvSRAM) has standard SRAM functionality, but they save
13837-553: The effectiveness of the differential sense amplifiers. Since each bitline segment does not have any spatial relationship to the other, it is likely that noise would affect only one of the two bitline segments. The folded bitline array architecture routes bitlines in pairs throughout the array. The close proximity of the paired bitlines provide superior common-mode noise rejection characteristics over open bitline arrays. The folded bitline array architecture began appearing in DRAM ICs during
13974-418: The forcing voltage is removed. During a write to a particular cell, all the columns in a row are sensed simultaneously just as during reading, so although only a single column's storage-cell capacitor charge is changed, the entire row is refreshed (written back in), as illustrated in the figure to the right. Typically, manufacturers specify that each row must be refreshed every 64 ms or less, as defined by
14111-477: The former using standard MOSFETs and the latter using floating-gate MOSFETs . In modern computers, primary storage almost exclusively consists of dynamic volatile semiconductor random-access memory (RAM), particularly dynamic random-access memory (DRAM). Since the turn of the century, a type of non-volatile floating-gate semiconductor memory known as flash memory has steadily gained share as off-line storage for home computers. Non-volatile semiconductor memory
14248-512: The greatest density as well as allowing easier integration with high-performance logic circuits since they are constructed with the same SOI process technologies. Refreshing of cells remains necessary, but unlike with 1T1C DRAM, reads in 1T DRAM are non-destructive; the stored charge causes a detectable shift in the threshold voltage of the transistor. Performance-wise, access times are significantly better than capacitor-based DRAMs, but slightly worse than SRAM. There are several types of 1T DRAMs:
14385-463: The higher voltage and thus determine whether there was 1 or 0 stored. The higher the sensitivity of the sense amplifier, the faster the read operation. As the NMOS is more powerful, the pull-down is easier. Therefore, bit lines are traditionally precharged to high voltage. Many researchers are also trying to precharge at a slightly low voltage to reduce the power consumption. The write cycle begins by applying
14522-485: The hole is then heavily doped to produce a buried n plate with low resistance. A layer of oxide-nitride-oxide dielectric is grown or deposited, and finally the hole is filled by depositing doped polysilicon, which forms the top plate of the capacitor. The top of the capacitor is connected to the access transistor's drain terminal via a polysilicon strap (Kenner, pp. 42–44). A trench capacitor's depth-to-width ratio in DRAMs of
14659-533: The image displayed (or to be printed). LCDs can have SRAM in their LCD controllers. SRAM was used for the main memory of many early personal computers such as the ZX80 , TRS-80 Model 100 , and VIC-20 . Some early memory cards in the late 1980s to early 1990s used SRAM as a storage medium, which required a lithium battery to keep the contents of the SRAM. SRAM may be integrated on chip for: Hobbyists, specifically home-built processor enthusiasts, often prefer SRAM due to
14796-568: The information stored for archival purposes is rarely accessed, off-line storage is less expensive than tertiary storage. In modern personal computers, most secondary and tertiary storage media are also used for off-line storage. Optical discs and flash memory devices are the most popular, and to a much lesser extent removable hard disk drives; older examples include floppy disks and Zip disks. In enterprise uses, magnetic tape cartridges are predominant; older examples include open-reel magnetic tape and punched cards. Storage technologies at all levels of
14933-449: The largest applications for DRAM is the main memory (colloquially called the RAM) in modern computers and graphics cards (where the main memory is called the graphics memory ). It is also used in many portable devices and video game consoles. In contrast, SRAM, which is faster and more expensive than DRAM, is typically used where speed is of greater concern than cost and size, such as
15070-436: The last 30 years (from 1987 to 2017) with a steadily decreasing transistor size (node size) the footprint-shrinking of the SRAM cell topology itself slowed down, making it harder to pack the cells more densely. Besides issues with size a significant challenge of modern SRAM cells is a static current leakage. The current, that flows from positive supply (V dd ), through the cell, and to the ground, increases exponentially when
15207-425: The lengths of the bitlines and the number of attached DRAM cells attached to them are equal, two basic architectures to array design have emerged to provide for the requirements of the sense amplifiers: open and folded bitline arrays. The first generation (1 Kbit) DRAM ICs, up until the 64 Kbit generation (and some 256 Kbit generation devices) had open bitline array architectures. In these architectures,
15344-471: The levels specified by the logic signaling system. Modern DRAMs use differential sense amplifiers, and are accompanied by requirements as to how the DRAM arrays are constructed. Differential sense amplifiers work by driving their outputs to opposing extremes based on the relative voltages on pairs of bitlines. The sense amplifiers function effectively and efficient only if the capacitance and voltages of these bitline pairs are closely matched. Besides ensuring that
15481-643: The lower price of the then-dominant magnetic-core memory. Capacitors had also been used for earlier memory schemes, such as the drum of the Atanasoff–Berry Computer , the Williams tube and the Selectron tube . In 1966, Dr. Robert Dennard invented modern DRAM architecture in which there's a single MOS transistor per capacitor, at the IBM Thomas J. Watson Research Center , while he was working on MOS memory and
15618-402: The mid-1980s, beginning with the 256 Kbit generation. This architecture is favored in modern DRAM ICs for its superior noise immunity. This architecture is referred to as folded because it takes its basis from the open array architecture from the perspective of the circuit schematic. The folded array architecture appears to remove DRAM cells in alternate pairs (because two DRAM cells share
15755-407: The mid-2000s can exceed 50:1 (Jacob, p. 357). Trench capacitors have numerous advantages. Since the capacitor is buried in the bulk of the substrate instead of lying on its surface, the area it occupies can be minimized to what is required to connect it to the access transistor's drain terminal without decreasing the capacitor's size, and thus capacitance (Jacob, pp. 356–357). Alternatively,
15892-641: The most commonly used data storage media are semiconductor, magnetic, and optical, while paper still sees some limited usage. Some other fundamental storage technologies, such as all-flash arrays (AFAs) are proposed for development. Semiconductor memory uses semiconductor -based integrated circuit (IC) chips to store information. Data are typically stored in metal–oxide–semiconductor (MOS) memory cells . A semiconductor memory chip may contain millions of memory cells, consisting of tiny MOS field-effect transistors (MOSFETs) and/or MOS capacitors . Both volatile and non-volatile forms of semiconductor memory exist,
16029-408: The physical bit in the storage of its ability to maintain a distinguishable value (0 or 1), or due to errors in inter or intra-computer communication. A random bit flip (e.g. due to random radiation ) is typically corrected upon detection. A bit or a group of malfunctioning physical bits (the specific defective bit is not always known; group definition depends on the specific storage device)
16166-530: The price-per-bit in 2017, the largest jump in 30 years since the 45% jump in 1988, while in recent years the price has been going down. In 2018, a "key characteristic of the DRAM market is that there are currently only three major suppliers — Micron Technology , SK Hynix and Samsung Electronics " that are "keeping a pretty tight rein on their capacity". There is also Kioxia (previously Toshiba Memory Corporation after 2017 spin-off) which doesn't manufacture DRAM. Other manufacturers make and sell DIMMs (but not
16303-439: The relatively weak transistors in the cell itself so they can easily override the previous state of the cross-coupled inverters. In practice, access NMOS transistors M 5 and M 6 have to be stronger than either bottom NMOS (M 1 , M 3 ) or top PMOS (M 2 , M 4 ) transistors. This is easily obtained as PMOS transistors are much weaker than NMOS when same sized. Consequently, when one transistor pair (e.g. M 3 and M 4 )
16440-530: The result. It would have to be reconfigured to change its behavior. This is acceptable for devices such as desk calculators , digital signal processors , and other specialized devices. Von Neumann machines differ in having a memory in which they store their operating instructions and data. Such computers are more versatile in that they do not need to have their hardware reconfigured for each new program, but can simply be reprogrammed with new in-memory instructions; they also tend to be simpler to design, in that
16577-422: The row and column decoders (Jacob, pp. 358–361). Electrical or magnetic interference inside a computer system can cause a single bit of DRAM to spontaneously flip to the opposite state. The majority of one-off (" soft ") errors in DRAM chips occur as a result of background radiation , chiefly neutrons from cosmic ray secondaries, which may change the contents of one or more memory cells or interfere with
16714-492: The single-transistor MOS DRAM memory cell. He filed a patent in 1967, and was granted U.S. patent number 3,387,286 in 1968. MOS memory offered higher performance, was cheaper, and consumed less power, than magnetic-core memory. The patent describes the invention: "Each cell is formed, in one embodiment, using a single field-effect transistor and a single capacitor." MOS DRAM chips were commercialized in 1969 by Advanced Memory Systems, Inc of Sunnyvale, CA . This 1024 bit chip
16851-399: The stacked capacitor, based on its location relative to the bitline—capacitor-under-bitline (CUB) and capacitor-over-bitline (COB). In the former, the capacitor is underneath the bitline, which is usually made of metal, and the bitline has a polysilicon contact that extends downwards to connect it to the access transistor's source terminal. In the latter, the capacitor is constructed above
16988-413: The storage hierarchy can be differentiated by evaluating certain core characteristics as well as measuring characteristics specific to a particular implementation. These core characteristics are volatility, mutability, accessibility, and addressability. For any particular implementation of any storage technology, the characteristics worth measuring are capacity and performance. Non-volatile memory retains
17125-422: The stored information even if not constantly supplied with electric power. It is suitable for long-term storage of information. Volatile memory requires constant power to maintain the stored information. The fastest memory technologies are volatile ones, although that is not a universal rule. Since the primary storage is required to be very fast, it predominantly uses volatile memory. Dynamic random-access memory
17262-462: The substrate surface are referred to as trench capacitors. In the 2000s, manufacturers were sharply divided by the type of capacitor used in their DRAMs and the relative cost and long-term scalability of both designs have been the subject of extensive debate. The majority of DRAMs, from major manufactures such as Hynix , Micron Technology , Samsung Electronics use the stacked capacitor structure, whereas smaller manufacturers such Nanya Technology use
17399-453: The trench capacitor structure (Jacob, pp. 355–357). The capacitor in the stacked capacitor scheme is constructed above the surface of the substrate. The capacitor is constructed from an oxide-nitride-oxide (ONO) dielectric sandwiched in between two layers of polysilicon plates (the top plate is shared by all DRAM cells in an IC), and its shape can be a rectangle, a cylinder, or some other more complex shape. There are two basic variations of
17536-561: The two values of a bit, conventionally called 0 and 1. The electric charge on the capacitors gradually leaks away; without intervention the data on the capacitor would soon be lost. To prevent this, DRAM requires an external memory refresh circuit which periodically rewrites the data in the capacitors, restoring them to their original charge. This refresh process is the defining characteristic of dynamic random-access memory, in contrast to static random-access memory (SRAM) which does not require data to be refreshed. Unlike flash memory , DRAM
17673-444: The value to be written to the bit lines. To write a 0, a 0 is applied to the bit lines, such as setting BL to 1 and BL to 0. This is similar to applying a reset pulse to an SR-latch , which causes the flip flop to change state. A 1 is written by inverting the values of the bit lines. WL is then asserted and the value that is to be stored is latched in. This works because the bit line input-drivers are designed to be much stronger than
17810-524: The word line is not asserted, the access transistors M 5 and M 6 disconnect the cell from the bit lines. The two cross-coupled inverters formed by M 1 – M 4 will continue to reinforce each other as long as they are connected to the supply. In theory, reading only requires asserting the word line WL and reading the SRAM cell state by a single access transistor and bit line, e.g. M 6 , BL. However, bit lines are relatively long and have large parasitic capacitance . To speed up reading,
17947-479: The wordline, is connected to the gate terminal of every access transistor in its row. The vertical bitline is connected to the source terminal of the transistors in its column. The lengths of the wordlines and bitlines are limited. The wordline length is limited by the desired performance of the array, since propagation time of the signal that must transverse the wordline is determined by the RC time constant . The bitline length
18084-461: Was extended in the Von Neumann architecture , where the CPU consists of two main parts: The control unit and the arithmetic logic unit (ALU). The former controls the flow of data between the CPU and memory, while the latter performs arithmetic and logical operations on data. Without a significant amount of memory, a computer would merely be able to perform fixed operations and immediately output
18221-431: Was generally described as "5-2-2-2" timing, as bursts of four reads within a page were common. When describing synchronous memory, timing is described by clock cycle counts separated by hyphens. These numbers represent t CL - t RCD - t RP - t RAS in multiples of the DRAM clock cycle time. Note that this is half of the data transfer rate when double data rate signaling is used. JEDEC standard PC3200 timing
18358-441: Was historically called, respectively, secondary storage and tertiary storage . The primary storage, including ROM , EEPROM , NOR flash , and RAM , are usually byte-addressable . Secondary storage (also known as external memory or auxiliary storage ) differs from primary storage in that it is not directly accessible by the CPU. The computer usually uses its input/output channels to access secondary storage and transfer
18495-462: Was sold to Honeywell , Raytheon , Wang Laboratories , and others. The same year, Honeywell asked Intel to make a DRAM using a three-transistor cell that they had developed. This became the Intel 1102 in early 1970. However, the 1102 had many problems, prompting Intel to begin work on their own improved design, in secrecy to avoid conflict with Honeywell. This became the first commercially available DRAM,
18632-477: Was strongly motivated by economics, a major consideration for DRAM devices, especially commodity DRAMs. The minimization of DRAM cell area can produce a denser device and lower the cost per bit of storage. Starting in the mid-1980s, the capacitor was moved above or below the silicon substrate in order to meet these objectives. DRAM cells featuring capacitors above the substrate are referred to as stacked or folded plate capacitors. Those with capacitors buried beneath
18769-456: Was trying to create an alternative to SRAM which required six MOS transistors for each bit of data. While examining the characteristics of MOS technology, he found it was capable of building capacitors, and that storing a charge or no charge on the MOS capacitor could represent the 1 and 0 of a bit, while the MOS transistor could control writing the charge to the capacitor. This led to his development of