
UNIVAC III

Article snapshot taken from Wikipedia with Creative Commons Attribution-ShareAlike license. Give it a read and then ask your questions in the chat. We can research this topic together.

The UNIVAC III was designed as an improved transistorized replacement for the vacuum-tube UNIVAC I and UNIVAC II computers. The project was started by the Philadelphia division of Remington Rand UNIVAC in 1958, with the initial announcement of the system made in the spring of 1960. However, because this division was heavily focused on the UNIVAC LARC project, shipment of the system was delayed until June 1962, with Westinghouse agreeing on June 1, 1962 to furnish system programming and marketing. The machine was designed to be compatible with all UNIVAC I and UNIVAC II data formats. However, the word size and instruction set were completely different; this presented significant difficulty because all programs had to be rewritten, so many customers switched to different vendors instead of upgrading their existing UNIVACs.


The UNIVAC III weighed about 27,225 pounds (13.6 short tons; 12.3 t). The system was engineered to use as little core memory as possible, as it was a very expensive item. The memory system was 25 bits wide and could be configured with from 8,192 to 32,768 words of memory. Memory was built in stacks of 29 planes of 4,096 cores: 25 for the data word, two for "modulo-3 check" bits, and two for spares. Each memory cabinet held up to four stacks (16,384 words). It supported
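The idea behind the two "modulo-3 check" bits per word can be illustrated with a residue check: two bits are enough to hold a word's value modulo 3, and because 2^k mod 3 is always 1 or 2 (never 0), any single flipped bit changes the residue and is detected. The article does not describe the UNIVAC III's exact encoding, so this is only a sketch of the principle:

```python
# Sketch of a modulo-3 residue check on a 25-bit word. The actual
# UNIVAC III check-bit encoding is not described in the article;
# this illustrates only the general idea.

WORD_BITS = 25

def residue_mod3(word: int) -> int:
    """Two check bits suffice to hold a value modulo 3 (0, 1, or 2)."""
    return word % 3

def store(word: int) -> tuple[int, int]:
    assert 0 <= word < (1 << WORD_BITS)
    return word, residue_mod3(word)

def fetch(word: int, check: int) -> int:
    if residue_mod3(word) != check:
        raise ValueError("modulo-3 check failed: memory error detected")
    return word

w, c = store(0b1010110)
assert fetch(w, c) == w        # a clean read passes the check
# Flipping any single bit k changes the word by 2**k, and
# 2**k % 3 is 1 or 2, so the residue always changes:
assert all((1 << k) % 3 != 0 for k in range(WORD_BITS))
```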

A 1 or 0. This writing process also causes electricity to be induced into nearby wires. If the new pulse being applied to the X-Y wires is the same as the last applied to that core, the existing field will do nothing, and no induction will result. If the new pulse is in the opposite direction, a pulse will be generated. This is normally picked up in a separate "sense" wire, allowing the system to know whether that core held

A 1 or 0. As this readout process requires the core to be written, it is known as destructive readout, and requires additional circuitry to reset the core to its original value if the process flipped it. When not being read or written, the cores maintain the last value they had, even if the power is turned off. Therefore, they are a type of non-volatile memory. Depending on how it

A 12-character alphanumeric value. When accumulators were combined in an instruction, the sign bit of the most significant accumulator was used and the others were ignored. The CPU had 15 index registers; a four-bit field (x) allowed selection of one index register as the base register. Operand addresses were determined by adding the contents of the selected base register and the 10-bit displacement field (m). Instructions that modified or stored index registers used

A backing sheet "patch" that supported them during manufacture and later use. Threading needles were butt-welded to the wires; the needle and wire diameters were the same, and efforts were made to eliminate the use of needles. The most important change, from the point of view of automation, was the combination of the sense and inhibit wires, eliminating the need for a circuitous diagonal sense wire. With small changes in layout, this also allowed much tighter packing of

A certain threshold is applied to the wires, the core will become magnetized. The core to be assigned a value – or written – is selected by powering one X and one Y wire to half of the required power, such that only the single core at the intersection is written. Depending on the direction of the currents, the core will pick up a clockwise or counterclockwise magnetic field, storing
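The coincident-current selection described here can be sketched in software: a core changes state only when the sum of the currents on its X and Y wires reaches the full select threshold, so driving one X and one Y line at half current affects exactly one core. This is an idealized model of the selection logic, not a circuit simulation:

```python
# Idealized coincident-current core plane: a core flips only when the
# combined X and Y drive reaches the full select current.

class CorePlane:
    def __init__(self, size: int):
        self.size = size
        self.bits = [[0] * size for _ in range(size)]

    def write(self, x: int, y: int, value: int) -> None:
        # Half current on one X wire and one Y wire: only the core at
        # their intersection sees the full field and is (re)magnetized.
        for i in range(self.size):
            for j in range(self.size):
                drive = (0.5 if i == x else 0.0) + (0.5 if j == y else 0.0)
                if drive >= 1.0:              # full select threshold
                    self.bits[i][j] = value
                # half-selected cores (drive == 0.5) are left unchanged

    def read(self, x: int, y: int) -> int:
        # Destructive read: drive the core toward 0 and watch for a
        # flux-change pulse on the sense wire.
        pulse = self.bits[x][y] == 1          # a pulse only if a 1 flips
        self.bits[x][y] = 0
        return 1 if pulse else 0

plane = CorePlane(8)
plane.write(2, 5, 1)
assert plane.read(2, 5) == 1   # sense pulse: the core held a 1
assert plane.read(2, 5) == 0   # the destructive read left it at 0
```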

A considerable voltage across the whole line due to the superposition of the voltage at each single core. This potential risk of "misread" places a lower limit on the number of Sense wires. Increasing the number of Sense wires also requires more decode circuitry. Core memory controllers were designed so that every read was followed immediately by a write (because the read forced all bits to 0, and because the write assumed this had happened). Instruction sets were designed to take advantage of this. For example,

A converted aspirin press in 1949. Rajchman later developed versions of the Williams tube and led development of the Selectron. Two key inventions led to the development of magnetic core memory in 1951. The first, An Wang's, was the write-after-read cycle, which solved the problem of how to use a storage medium in which the act of reading erased the data read, enabling the construction of

A data word. For instance, a machine might use 32 grids of core with a single bit of the 32-bit word in each one, and the controller could access the entire 32-bit word in a single read/write cycle. Core memory is non-volatile storage: it can retain its contents indefinitely without power. It is also relatively unaffected by EMP and radiation. These were important advantages for some applications like first-generation industrial programmable controllers, military installations and vehicles like fighter aircraft, as well as spacecraft, and led to core being used for

A density of about 2.2 Gbit/in². Single-layer HD DVD and Blu-ray disks offer densities around 7.5 Gbit/in² and 12.5 Gbit/in², respectively. When introduced in 1982, CDs had considerably higher densities than hard disk drives, but hard disk drives have since advanced much more quickly and eclipsed optical media in both areal density and capacity per device. The first magnetic tape drive,

A four-bit field (xo) to select that index register. Indirect addressing or field selection was selected if the one-bit field (i/a) was set. Both indirect addressing and a base register could be selected in the indirect address in memory. Only a base register could be selected in the field selector in memory. Sperry Rand began shipment in June 1962 and produced 96 UNIVAC III systems. The operating system(s) which were developed for


A later time with a random-access FASTRAND drum.

Core memory

In computing, magnetic-core memory is a form of random-access memory. It predominated for roughly 20 years between 1955 and 1975, and is often just called core memory or, informally, core. Core memory uses toroids (rings) of a hard magnetic material (usually a semi-hard ferrite). Each core stores one bit of information. Two or more wires pass through each core, forming an X-Y array of cores. When an electrical current above

A metal or plastic plate. The term "core" comes from conventional transformers whose windings surround a magnetic core. In core memory, the wires pass once through any given core—they are single-turn devices. The properties of materials used for memory cores are dramatically different from those used in power transformers. The magnetic material for a core memory requires a high degree of magnetic remanence,

A number of years after availability of semiconductor MOS memory (see also MOSFET). For example, the Space Shuttle IBM AP-101B flight computers used core memory, which preserved the contents of memory even through the Challenger's disintegration and subsequent plunge into the sea in 1986. Another characteristic of early core was that the coercive force was very temperature-sensitive;

A one and a zero, these diagnostics tested the core memory with worst-case patterns and had to run for several hours. As most computers had just a single core-memory board, these diagnostics also moved themselves around in memory, making it possible to test every bit. An advanced test was called a "Shmoo test" in which the half-select currents were modified along with the time at which the sense line

A plastic surface that is then covered with a thin layer of reflective metal. Compact discs (CDs) offer a density of about 0.90 Gbit/in², using pits which are 0.83 micrometers long and 0.5 micrometers wide, arranged in tracks spaced 1.6 micrometers apart. DVD disks are essentially a higher-density CD, using more of the disk surface, smaller pits (0.64 micrometers), and tighter tracks (0.74 micrometers), offering

A serial, one-dimensional shift register (of 50 bits), using two cores to store a bit. A Wang core shift register is in the Revolution exhibit at the Computer History Museum. The second, Forrester's, was the coincident-current system, which enabled a small number of wires to control a large number of cores, enabling 3D memory arrays of several million bits. The first use of magnetic core was in

A single cycle. A typical machine's register set usually used only one small plane of this form of core memory. Some very large memories were built with this technology, for example the Extended Core Storage (ECS) auxiliary memory in the CDC 6600, which was up to 2 million 60-bit words. Core rope memory is a read-only memory (ROM) form of core memory. In this case, the cores, which had more linear magnetic materials, were simply used as transformers; no information

A value in memory could be read and modified almost as quickly as it could be read and written. In the PDP-6, the AOS* (or SOS*) instructions incremented (or decremented) the value between the read phase and the write phase of a single memory cycle (perhaps signaling the memory controller to pause briefly in the middle of the cycle). This might be twice as fast as the process of obtaining
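The optimization described for the AOS*/SOS* instructions can be sketched as a memory controller that lets the processor modify a value between the destructive-read phase and the mandatory rewrite phase of one cycle, instead of paying for two full read-write cycles. The class and method names here are illustrative, not the PDP-6's actual interface:

```python
# Sketch: an increment folded into the rewrite half of a single
# read-write core cycle, versus two full cycles.

class CoreWord:
    def __init__(self, value: int = 0):
        self.value = value
        self.cycles = 0

    def read_phase(self) -> int:
        old = self.value
        self.value = 0             # the destructive read clears the core
        return old

    def write_phase(self, value: int) -> None:
        self.value = value
        self.cycles += 1           # each rewrite completes one memory cycle

def increment_two_cycles(word: CoreWord) -> None:
    v = word.read_phase(); word.write_phase(v)   # cycle 1: fetch (and restore)
    v = v + 1
    word.read_phase(); word.write_phase(v)       # cycle 2: store the new value

def increment_in_cycle(word: CoreWord) -> None:
    # AOS-style: modify the value before the rewrite of the same cycle.
    word.write_phase(word.read_phase() + 1)

a, b = CoreWord(41), CoreWord(41)
increment_two_cycles(a)
increment_in_cycle(b)
assert a.value == b.value == 42
assert (a.cycles, b.cycles) == (2, 1)   # half the memory traffic
```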

Is a stored 1, while the other is a stored 0. The toroidal shape of a core is preferred since the magnetic path is closed, there are no magnetic poles and thus very little external flux. This allows the cores to be packed closely together without their magnetic fields interacting. The alternating 45-degree positioning used in early core arrays was necessitated by the diagonal sense wires. With

Is applied to each bit sense/write line for a bit to be set. In some designs, the word read and word write lines were combined into a single wire, resulting in a memory array with just two wires per bit. For write, multiple word write lines could be selected. This offered a performance advantage over X/Y line coincident-current in that multiple words could be cleared or written with the same value in


Is made up of what are called floating-gate transistors. Unlike the transistor designs used in DRAM, which must be refreshed multiple times per second, NAND flash is designed to retain its charge state even when not powered up. The highest-capacity drives commercially available are the Nimbus Data ExaDrive DC series, which come in capacities ranging from 16 TB to 100 TB. Nimbus states that for its size

Is normally associated with three independent teams. Substantial work in the field was carried out by the Shanghai-born American physicists An Wang and Way-Dong Woo, who created the pulse transfer controlling device in 1949. The patent described a type of memory that would today be known as a delay-line or shift-register system. Each bit was stored using a pair of transformers, one that held

Is obsolete, computer memory is still sometimes called "core" even though it is made of semiconductors, particularly by people who had worked with machines having actual core memory. The files that result from saving the entire contents of memory to disk for inspection, which is nowadays commonly performed automatically when a major error occurs in a computer program, are still called "core dumps". Algorithms which work on more data than

The IBM 7090, early IBM 7094s, and IBM 7030. Core was heated instead of cooled because the primary requirement was a consistent temperature, and it was easier (and cheaper) to maintain a constant temperature well above room temperature than one at or below it. Diagnosing hardware problems in core memory required time-consuming diagnostic programs to be run. While a quick test checked if every bit could contain

The New York Genome Center published a method known as DNA Fountain which allows perfect retrieval of information from a density of 215 petabytes per gram of DNA, 85% of the theoretical limit. With the notable exception of NAND flash memory, increasing the storage density of a medium typically improves the transfer speed at which that medium can operate. This is most obvious when considering various disk-based media, where

The PDP-6 at the MIT Artificial Intelligence Laboratory by 1967. This was considered "unimaginably huge" at the time, and nicknamed the "Moby Memory". It cost $380,000 ($0.04/bit) and its width, height and depth were 175 cm × 127 cm × 64 cm (69 in × 50 in × 25 in) with its supporting circuitry (189 kilobits/cubic foot = 6.7 kilobits/litre). Its cycle time

The Univac Uniservo, recorded at a linear density of 128 bit/in on half-inch magnetic tape, resulting in an areal density of 256 bit/in². In 2015, IBM and Fujifilm claimed a new record for magnetic tape areal density of 123 Gbit/in², while LTO-6, the highest-density production tape shipping in 2015, provides an areal density of 0.84 Gbit/in². A number of technologies are attempting to surpass

The 100 TB SSD has a 6:1 space-saving ratio over a nearline HDD. Hard disk drives store data in the magnetic polarization of small patches of the surface coating on a disk. The maximum areal density is defined by the size of the magnetic particles in the surface, as well as the size of the "head" used to read and write the data. In 1956 the first hard drive, the IBM 350, had an areal density of 2,000 bit/in². Since then,

The 1980s. The vast majority of PCs included interfaces designed for high-density drives that ran at 500 kbit/s instead. These, too, were completely overwhelmed by newer devices like the LS-120, which were forced to use higher-speed interfaces such as IDE. Although the effect on performance is most obvious on rotating media, similar effects come into play even for solid-state media like flash RAM or DRAM. In this case

The RAMAC. This is without adjusting for inflation, which increased prices nine-fold from 1956 to 2018. Solid-state storage has seen a similar drop in cost per bit. In this case the cost is determined by the yield, the number of viable chips produced in a unit time. Chips are produced in batches printed on the surface of a single large silicon wafer, which is cut up and non-working samples are discarded. Fabrication has improved yields over time by using larger wafers, and producing wafers with fewer failures. The lower limit on this process


The UNIVAC III's were called CHIEF and BOSS. The assembly language was SALT. The majority of UNIVAC III systems were equipped with tape drives; tapes contained an image of the system data at the head of each tape, followed by data. The OS could handle jobs at this time, so some tapes held data relating to job control, and others held data. UNIVAC III systems could have up to 32 tape drives. Some systems were equipped at

The Whirlwind computer, and Project Whirlwind's "most famous contribution was the random-access, magnetic core storage feature." Commercialization followed quickly. Magnetic core was used in peripherals of the ENIAC in 1953, the IBM 702 delivered in July 1955, and later in the 702 itself. The IBM 704 (1954) and the Ferranti Mercury (1957) used magnetic-core memory. It was during the early 1950s that Seeburg Corporation developed one of

The ability to stay highly magnetized, and a low coercivity so that less energy is required to change the magnetization direction. The core can take two states, encoding one bit. The core memory contents are retained even when the memory system is powered down (non-volatile memory). However, when the core is read, it is reset to a "zero" value. Circuits in the computer memory system then restore

The case. As density increases, the number of platters can be reduced, leading to lower costs. Hard drives are often measured in terms of cost per bit. For example, the first commercial hard drive, IBM's RAMAC in 1957, supplied 3.75 MB for $34,500, or $9,200 per megabyte. In 1989, a 40 MB hard drive cost $1,200, or $30/MB. And in 2018, 4 TB drives sold for $75, or 1.9¢/GB, an improvement factor of 1.5 million since 1989 and 520 million since
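The cost-per-byte figures quoted above can be reproduced directly from the prices and capacities given (using decimal units, 1 GB = 1,000 MB):

```python
# Checking the hard-drive cost-per-byte arithmetic quoted above.

per_mb_1957 = 34_500 / 3.75      # IBM RAMAC: 3.75 MB for $34,500
per_mb_1989 = 1_200 / 40         # 40 MB drive for $1,200
per_gb_2018 = 75 / 4_000         # 4 TB (4,000 GB) drive for $75

assert round(per_mb_1957) == 9200            # ≈ $9,200 per MB
assert per_mb_1989 == 30.0                   # $30 per MB
assert abs(per_gb_2018 - 0.01875) < 1e-12    # ≈ 1.9 cents per GB
```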

The chain. Wang and Woo were working at Harvard University's Computation Laboratory at the time, and the university was not interested in promoting inventions created in their labs. Wang was able to patent the system on his own. The MIT Project Whirlwind computer required a fast memory system for real-time aircraft tracking. At first, an array of Williams tubes—a storage system based on cathode-ray tubes—was used, but proved temperamental and unreliable. Several researchers in

The circuitry assumes there has been a read operation and the bit is in the 0 state. The Sense wire is used only during the read, and the Inhibit wire is used only during the write. For this reason, later core systems combined the two into a single wire, and used circuitry in the memory controller to switch the function of the wire. However, when a Sense wire crosses too many cores, the half-select current can also induce

The combined magnetic field generated where the X and Y lines cross (the logical conjunction) is sufficient to change the state; other cores will see only half the needed field ("half-selected"), or none at all. By driving the current through the wires in a particular direction, the resulting induced field forces the selected core's magnetic flux to circulate in one direction or the other (clockwise or counterclockwise). One direction

The cores in each patch. By the early 1960s, the cost of core fell to the point that it became nearly universal as main memory, replacing both inexpensive low-performance drum memory and costly high-performance systems using vacuum tubes, and later discrete transistors, as memory. The cost of core memory declined sharply over the lifetime of the technology: costs began at roughly US$1.00 per bit and dropped to roughly US$0.01 per bit. Core memory

The day.

Memory density

Density is a measure of the quantity of information bits that can be stored on a given physical space of a computer storage medium. There are three types of density: linear density (along the length of a track), areal density (over the area of a surface), and volumetric density (in a given volume). Generally, higher density is more desirable, for it allows more data to be stored in

The densities of existing media. IBM aimed to commercialize their Millipede memory system at 1 Tbit/in² in 2007 but development appears to be moribund. A newer IBM technology, racetrack memory, uses an array of many small nanoscopic wires arranged in 3D, each holding numerous bits to improve density. Although exact numbers have not been mentioned, IBM news articles talk of "100 times" increases. Holographic storage technologies are also attempting to leapfrog existing systems, but they too have been losing


The design that doubles the density of the bits by reducing sample length and keeping the same track spacing. This would double the transfer speed because the bits would be passing under the head twice as fast. Early floppy disk interfaces were designed for 250 kbit/s transfer speeds, but were rapidly outperformed with the introduction of the "high density" 1.44 MB (1,440 KB) floppies in

The early 1970s, and by the mid-70s it was down to 600 ns (0.6 μs). Some designs had substantially higher performance: the CDC 6600 had a memory cycle time of 1.0 μs in 1964, using cores that required a half-select current of 200 mA. Everything possible was done in order to decrease access times and increase data rates (bandwidth), including the simultaneous use of multiple grids of core, each storing one bit of

The elimination of these diagonal wires, tighter packing was possible. The access time plus the time to rewrite is the memory cycle time. To read a bit of core memory, the circuitry tries to flip the bit to the polarity assigned to the 0 state, by driving the selected X and Y lines that intersect at that core. The detection of such a pulse means that the bit had most recently contained a 1. Absence of

The ferrite material used to make the toroids. An electric current in a wire that passes through a core creates a magnetic field. Only a magnetic field greater than a certain intensity ("select") can cause the core to change its magnetic polarity. To select a memory location, one of the X and one of the Y lines are driven with half the current ("half-select") required to cause this change. Only

The first commercial applications of coincident-current core memory storage in the "Tormat" memory of its new range of jukeboxes, starting with the V200 developed in 1953 and released in 1955. Numerous uses in computing, telephony and industrial process control followed. Wang's patent was not granted until 1955, and by that time magnetic-core memory was already in use. This started a long series of lawsuits, which eventually ended when IBM bought

The follow-on core memory systems built by DEC for their PDP line of air-cooled computers. Another method of handling the temperature sensitivity was to enclose the magnetic core "stack" in a temperature-controlled oven. Examples of this are the heated-air core memory of the IBM 1620 (which could take up to 30 minutes to reach operating temperature, about 106 °F (41 °C)) and the heated-oil-bath core memory of

The following data formats: Instructions were 25 bits long. The CPU had four accumulators; a four-bit field (ar) allowed selection of any combination of the accumulators for operations on data from one to four words in length. For backward compatibility with UNIVAC I and UNIVAC II data, two accumulators were needed to store a 12-digit decimal number and three accumulators were needed to store
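Taken together, the article describes a 25-bit instruction with a four-bit accumulator field (ar), a four-bit index-register field (x), a one-bit indirect/field-select flag (i/a), and a 10-bit displacement (m), which leaves six bits for an opcode. The actual bit positions are not given in the article, so the layout below is an assumption, used only to show how such fields would be unpacked and how an effective address would be formed:

```python
# Hypothetical field layout for a 25-bit UNIVAC III instruction.
# The article names the fields (ar, x, i/a, m) and their widths but not
# their positions; the positions below are assumptions for illustration.

AR_BITS, X_BITS, IA_BITS, M_BITS = 4, 4, 1, 10
OP_BITS = 25 - (AR_BITS + X_BITS + IA_BITS + M_BITS)   # 6 bits remain

def decode(word: int) -> dict:
    m  = word & ((1 << M_BITS) - 1);   word >>= M_BITS
    ia = word & 1;                     word >>= IA_BITS
    x  = word & ((1 << X_BITS) - 1);   word >>= X_BITS
    ar = word & ((1 << AR_BITS) - 1);  word >>= AR_BITS
    op = word & ((1 << OP_BITS) - 1)
    return {"op": op, "ar": ar, "x": x, "ia": ia, "m": m}

def effective_address(inst: dict, index_regs: list[int]) -> int:
    # Operand address = selected base register + 10-bit displacement.
    # With 15 index registers, x = 0 plausibly means "no indexing"
    # (an assumption, not stated in the article).
    base = index_regs[inst["x"]] if inst["x"] else 0
    return base + inst["m"]

regs = [0] * 16
regs[3] = 0x400
word = (0b000001 << 19) | (0b0011 << 15) | (0b0011 << 11) | (0 << 10) | 0x25
inst = decode(word)
assert inst["x"] == 3 and inst["m"] == 0x25
assert effective_address(inst, regs) == 0x425
```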

The full current is applied to one or more word read lines; this clears the selected cores and any that flip induce voltage pulses in their bit sense/write lines. For read, normally only one word read line would be selected; but for clear, multiple word read lines could be selected while the bit sense/write lines were ignored. To write words, the half current is applied to one or more word write lines, and half current

The full plane of cores in a "nest" and then pushed an array of hollow needles through the cores to guide the wires. Use of this machine reduced the time taken to thread the straight X and Y select lines from 25 hours to 12 minutes on a 128 by 128 core array. Smaller cores made the use of hollow needles impractical, but there were numerous advances in semi-automatic core threading. Support nests with guide channels were developed. Cores were permanently bonded to

The increase in density has matched Moore's Law, reaching 1 Tbit/in² in 2014. In 2015, Seagate introduced a hard drive with a density of 1.34 Tbit/in², more than 600 million times that of the IBM 350. It is expected that current recording technology can "feasibly" scale to at least 5 Tbit/in² in the near future. New technologies like heat-assisted magnetic recording (HAMR) and microwave-assisted magnetic recording (MAMR) are under development and are expected to allow increases in magnetic areal density to continue. Optical discs store data in small pits in


The information in an immediate re-write cycle. The most common form of core memory, X/Y line coincident-current, used for the main memory of a computer, consists of a large number of small toroidal ferrimagnetic ceramic ferrites (cores) held together in a grid structure (organized as a "stack" of layers called planes), with wires woven through the holes in the cores' centers. In early systems there were four wires: X, Y, Sense, and Inhibit, but later cores combined

The innermost track is about 66 mm long (10.5 mm radius). At 300 rpm the linear speed of the media under the head is thus about 66 mm × 300 rpm = 19,800 mm/minute, or 330 mm/s. Along that track the bits are stored at a density of 686 bit/mm, which means that the head sees 686 bit/mm × 330 mm/s = 226,380 bit/s (or 28.3 KB/s). Now consider an improvement to
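The floppy-disk arithmetic above can be reproduced directly, including the point that doubling the linear density doubles the transfer rate at the same spindle speed:

```python
# Reproducing the 3.5-inch floppy transfer-rate calculation above.

rpm = 300
track_mm = 66                       # innermost track length (10.5 mm radius)
speed_mm_s = track_mm * rpm / 60    # 330 mm/s passing under the head
bits_per_mm = 686

rate = bits_per_mm * speed_mm_s
assert rate == 226_380                      # 226,380 bit/s
assert round(rate / 8 / 1000, 1) == 28.3    # ≈ 28.3 KB/s

# Doubling the linear density doubles the transfer rate:
assert 2 * bits_per_mm * speed_mm_s == 452_760
```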

The late 1940s conceived the idea of using magnetic cores for computer memory, but MIT computer engineer Jay Forrester received the principal patent for his invention of the coincident-current core memory that enabled the 3D storage of information. William Papian of Project Whirlwind cited one of these efforts, Harvard's "Static Magnetic Delay Line", in an internal memo. The first core memory of 32 × 32 × 16 bits

The latter two wires into one Sense/Inhibit line. Each toroid stored one bit (0 or 1). One bit in each plane could be accessed in one cycle, so each machine word in an array of words was spread over a "stack" of planes. Each plane would manipulate one bit of a word in parallel, allowing the full word to be read or written in one cycle. Core relies on the square hysteresis loop properties of

The main memory can fit are likewise called out-of-core algorithms. Algorithms which only work inside the main memory are sometimes called in-core algorithms. The basic concept of using the square hysteresis loop of certain magnetic materials as a storage or switching device was known from the earliest days of computer development. Much of this knowledge had developed due to an understanding of transformers, which allowed amplification and switch-like performance when built using certain materials. The stable switching behavior

The next transformer pair. Those that did not contain a value simply faded out. Stored values were thus moved bit by bit down the chain with every pulse. Values were read out at the end, and fed back into the start of the chain to keep the values continually cycling through the system. Such systems have the disadvantage of not being random access: to read any particular value, one has to wait for it to cycle through
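The recirculating behavior described here, with bits marching down the chain and being fed back into the start, can be sketched as a circular shift register, which also makes the lack of random access concrete:

```python
from collections import deque

# A recirculating delay line: on each pulse, the bit leaving the end
# of the chain is fed back into the start.

class DelayLine:
    def __init__(self, bits):
        self.line = deque(bits)

    def pulse(self) -> int:
        out = self.line.popleft()    # the bit reaching the output...
        self.line.append(out)        # ...is recirculated to the input
        return out

dl = DelayLine([1, 0, 1, 1, 0])
first_pass = [dl.pulse() for _ in range(5)]
assert first_pass == [1, 0, 1, 1, 0]                  # bits emerge in order
assert [dl.pulse() for _ in range(5)] == first_pass   # and keep cycling

# No random access: reading the bit at position 3 means waiting
# three pulses for it to reach the output.
dl2 = DelayLine([0, 0, 0, 1, 0])
waits = 0
while dl2.pulse() != 1:
    waits += 1
assert waits == 3
```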

The order of 10 ns or less. A less obvious effect is that as density improves, the number of DIMMs needed to supply any particular amount of memory decreases, which in turn means fewer DIMMs overall in any particular computer. This often leads to improved performance as well, as there is less bus traffic. However, this effect is generally not linear. Storage density also has a strong effect on

The patent outright from Wang for US$500,000. Wang used the funds to greatly expand Wang Laboratories, which he had co-founded with Dr. Ge-Yao Chu, a schoolmate from China. MIT wanted to charge IBM $0.02 per bit royalty on core memory. In 1964, after years of legal wrangling, IBM paid MIT $13 million for rights to Forrester's patent—the largest patent settlement to that date. In 1953, tested but not-yet-strung cores cost US$0.33 each. As manufacturing volume increased, by 1970 IBM

The performance is generally defined by the time it takes for the electrical signals to travel through the computer bus to the chips, and then through the chips to the individual "cells" used to store data (each cell holds one bit). One defining electrical property is the resistance of the wires inside the chips. As the cell size decreases, through the improvements in semiconductor fabrication that led to Moore's Law,

The price of memory, although in this case, the reasons are not so obvious. In the case of disk-based media, the primary cost is the moving parts inside the drive. This sets a fixed lower limit, which is why the average selling price for both of the major HDD manufacturers has been US$45–75 since 2007. That said, the price of high-capacity drives has fallen rapidly, and this is indeed an effect of density. The highest-capacity drives use more platters, essentially individual hard drives within


The proper half-select current at one temperature is not the proper half-select current at another temperature. So a memory controller would include a temperature sensor (typically a thermistor) to adjust the current levels correctly for temperature changes. An example of this is the core memory used by Digital Equipment Corporation for their PDP-1 computer; this strategy continued through all of

The pulse means that the bit had contained a 0. The delay in sensing the voltage pulse is called the access time of the core memory. Following any such read, the bit contains a 0. This illustrates why a core memory access is called a destructive read: any operation that reads the contents of a core erases those contents, and they must immediately be recreated. To write a bit of core memory,

The race, and are estimated to offer 1 Tbit/in² as well, with about 250 GB/in² being the best demonstrated to date for non-quantum holography systems. Other experimental technologies offer even higher densities. Molecular polymer storage has been shown to store 10 Tbit/in². By far the densest type of memory storage experimentally to date is electronic quantum holography. By superimposing images of different wavelengths into

The resistance is reduced and less power is needed to operate the cells. This, in turn, means that less electric current is needed for operation, and thus less time is needed to send the required amount of electrical charge into the system. In DRAM, in particular, the amount of charge that needs to be stored in a cell's capacitor also directly affects this time. As fabrication has improved, solid-state memory has improved dramatically in terms of performance. Modern DRAM chips have operational speeds on

The same hologram, in 2009 a Stanford research team achieved a bit density of 35 bit/electron (approximately 3 exabytes/in²) using electron microscopes and a copper medium. In 2012, DNA was successfully used as an experimental data storage medium, but required a DNA synthesizer and DNA microchips for the transcoding. As of 2012, DNA holds the record for highest-density storage medium. In March 2017, scientists at Columbia University and

The same physical space. Density therefore has a direct relationship to the storage capacity of a given medium. Density also generally affects the performance within a particular medium, as well as price. Solid-state drives use flash memory to store non-volatile media. They are the latest form of mass-produced storage and rival magnetic disk media. Solid-state media data is saved to a pool of NAND flash. NAND itself

The storage elements are spread over the surface of the disk and must be physically rotated under the "head" in order to be read or written. Higher density means more data moves under the head for any given mechanical movement. For example, we can calculate the effective transfer speed for a floppy disc by determining how fast the bits move under the head. A standard 3½-inch floppy disk spins at 300 rpm, and

The value and a second used for control. A signal generator produced a series of pulses which were sent into the control transformers at half the energy needed to flip the polarity. The pulses were timed so the field in the transformers had not faded away before the next pulse arrived. If the storage transformer's field matched the field created by the pulse, then the total energy would cause a pulse to be injected into

The value with a read-write cycle, incrementing (or decrementing) the value in some processor register, and then writing the new value with another read-write cycle. Word line core memory was often used to provide register memory. Other names for this type are linear select and 2-D. This form of core memory typically wove three wires through each core on the plane: word read, word write, and bit sense/write. To read or clear words,

was 2.75 μs. In 1980, the price of a 16 kW (kiloword, equivalent to 32 kB) core memory board that fitted into a DEC Q-bus computer was around US$3,000. At that time, the core array and supporting electronics could fit on a single printed circuit board about 25 cm × 20 cm (10 in × 8 in) in size; the core array was mounted a few mm above the PCB and was protected with
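From the figures above (16 kilowords of 16-bit words, i.e. 32 kB, for about US$3,000), the cost per bit of that 1980 board works out as:

```python
# Cost per bit of the 1980 DEC Q-bus core board described above:
# 16 kilowords x 16 bits/word = 32 kB, priced at about US$3,000.
words = 16 * 1024
bits = words * 16            # 262,144 bits
cost_per_bit = 3000 / bits
print(f"${cost_per_bit:.4f} per bit")  # roughly a penny per bit
```

That is about 1.1 cents per bit, consistent with the roughly one-cent-per-bit level core had reached by the end of its commercial life.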

was actually stored magnetically within the individual cores. Each bit of the word had one core. Reading the contents of a given memory address generated a pulse of current in a wire corresponding to that address. Each address wire was threaded either through a core to signify a binary 1, or around the outside of that core to signify a binary 0. As expected, the cores were much larger physically than those of read-write core memory. This type of memory
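The through-or-around threading described above amounts to a fixed wiring pattern per address, which a short sketch can model. The addresses and threading patterns below are invented illustrative data:

```python
# Sketch of read-only core rope memory: one core per bit position of the word.
# An address wire threaded *through* a core stores a 1 at that bit; a wire
# routed *around* the core stores a 0. Pulsing the address wire couples energy
# only into the threaded cores, whose sense windings then read out 1s.
# The threading patterns below are invented illustrative data.
rope = {
    # address: set of bit positions whose cores this address wire threads
    0x0: {7, 2, 0},   # stores 0b10000101
    0x1: {3, 1},      # stores 0b00001010
}

def read(address):
    """Pulse the address wire and assemble the word from the sense lines."""
    threaded = rope[address]
    return sum(1 << bit for bit in threaded)

print(bin(read(0x0)))  # 0b10000101
```

Because the data is literally woven into the wiring, the contents are fixed at manufacture — which is why this memory is read-only.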

was almost always carried out by hand in spite of repeated major efforts to automate the process. Core was almost universal until the introduction of the first semiconductor memory chips in the late 1960s, and especially dynamic random-access memory (DRAM) in the early 1970s. Initially around the same price as core, DRAM was smaller and simpler to use. Core was driven from the market gradually between 1973 and 1978. Although core memory

was dominated by the cost of stringing the wires through the cores. Forrester's coincident-current system required one of the wires to be run at 45 degrees to the cores, which proved difficult to wire by machine, so core arrays had to be assembled under microscopes by workers with fine motor control. In 1956, a group at IBM filed for a patent on a machine to automatically thread the first few wires through each core. This machine held

was exceptionally reliable. An example was the Apollo Guidance Computer used for the NASA Moon landings. The performance of early core memories can be characterized in today's terms as being very roughly comparable to a clock rate of 1 MHz (equivalent to early 1980s home computers, like the Apple II and Commodore 64). Early core memory systems had cycle times of about 6 μs, which had fallen to 1.2 μs by

was expensive and complicated. As I recall, which may not be entirely correct, it used two cores per binary bit and was essentially a delay line that moved a bit forward. To the extent that I may have focused on it, the approach was not suitable for our purposes." He described the invention and associated events in 1975. Forrester has since observed, "It took us about seven years to convince the industry that random-access magnetic-core memory

was further refined via five additional patents and ultimately used in the first industrial robot. Frederick Viehe applied for various patents on the use of transformers for building digital logic circuits in place of relay logic beginning in 1947. A fully developed core system was patented in 1947 and later purchased by IBM in 1956. This development was little known, however, and the mainstream development of core

was installed on Whirlwind in the summer of 1953. Papian stated: "Magnetic-Core Storage has two big advantages: (1) greater reliability with a consequent reduction in maintenance time devoted to storage; (2) shorter access time (core access time is 9 microseconds: tube access time is approximately 25 microseconds) thus increasing the speed of computer operation." In April 2011, Forrester recalled, "the Wang use of cores did not have any influence on my development of random-access memory. The Wang memory

was made obsolete by semiconductor integrated circuit memories in the 1970s, though it remained in use for mission-critical and high-reliability applications in the IBM System/4 Pi AP-101 (used in the Space Shuttle until an upgrade in the early 1990s, and in the B-52 and B-1B bombers). An example of the scale, economics, and technology of core memory in the 1960s was the 256K 36-bit word (1.2 MiB) core memory unit installed on

was producing 20 billion cores per year, and the price per core fell to US$0.0003. Core sizes shrank over the same period from around 0.1 inches (2.5 mm) diameter in the 1950s to 0.013 inches (0.33 mm) in 1966. The power required to flip the magnetization of one core is proportional to the volume, so this represents a drop in power consumption by a factor of 125. The cost of complete core memory systems

was tested ("strobed"). The data plot of this test seemed to resemble a cartoon character called "Shmoo," and the name stuck. On many occasions, errors could be resolved by gently tapping the printed circuit board with the core array on a table. This slightly changed the positions of the cores along the wires running through them and could fix the problem. The procedure was seldom needed, as core memory proved to be very reliable compared to other computer components of

was the solution to a missing link in computer technology. Then we spent the following seven years in the patent courts convincing them that they had not all thought of it first." A third developer involved in the early development of core was Jan A. Rajchman at RCA. A prolific inventor, Rajchman designed a unique core system using ferrite bands wrapped around thin metal tubes, building his first examples using

was well known in the electrical engineering field, and its application in computer systems was immediate. For example, J. Presper Eckert and Jeffrey Chuan Chu had done some development work on the concept in 1945 at the Moore School during the ENIAC efforts. Robotics pioneer George Devol filed a patent for the first static (non-moving) magnetic memory on 3 April 1946. Devol's magnetic memory

was wired, core memory could be exceptionally reliable. Read-only core rope memory, for example, was used on the mission-critical Apollo Guidance Computer essential to NASA's successful Moon landings. Using smaller cores and wires, the memory density of core slowly increased. By the late 1960s a density of about 32 kilobits per cubic foot (about 0.9 kilobits per litre) was typical. The cost declined over this period from about $1 per bit to about 1 cent per bit. Reaching this density required extremely careful manufacturing, which
