
English Electric KDF9

Article snapshot taken from Wikipedia under the Creative Commons Attribution-ShareAlike license.

In computer architecture, 48-bit integers can represent 281,474,976,710,656 (2^48, or about 2.814749767×10^14) discrete values. This allows an unsigned binary integer range of 0 through 281,474,976,710,655 (2^48 − 1) or a signed two's complement range of −140,737,488,355,328 (−2^47) through 140,737,488,355,327 (2^47 − 1). A 48-bit memory address can directly address every byte of 256 terabytes of storage. 48-bit can refer to any other data unit that consumes 48 bits (6 octets) in width. Examples include 48-bit CPU and ALU architectures that are based on registers, address buses, or data buses of that size.
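As a quick check of these figures, the following Python sketch (illustrative only, not part of the article) computes the unsigned and two's-complement ranges of a 48-bit integer and the number of bytes a 48-bit address can reach.

```python
# Illustrative sketch: values representable by a 48-bit integer.
BITS = 48

unsigned_max = 2**BITS - 1            # 281,474,976,710,655
signed_min = -(2**(BITS - 1))         # -140,737,488,355,328
signed_max = 2**(BITS - 1) - 1        # 140,737,488,355,327
addressable_bytes = 2**BITS           # bytes reachable by a 48-bit address

print(f"unsigned range: 0 .. {unsigned_max:,}")
print(f"signed range:   {signed_min:,} .. {signed_max:,}")
print(f"addressable:    {addressable_bytes / 2**40:.0f} TiB")  # 256 TiB
```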


KDF9 was an early British 48-bit computer designed and built by English Electric (which in 1968 was merged into International Computers Limited (ICL)). The first machine came into service in 1964 and the last of 29 machines was decommissioned in 1980 at the National Physical Laboratory. The KDF9 was designed for, and used almost entirely in, the mathematical and scientific processing fields – in 1967, nine were in use in UK universities and technical colleges. The KDF8, developed in parallel,

$\mathbf{a}$ and $\mathbf{b}$ separated by angle $\theta$ (see the upper image), they form a triangle with a third side $\mathbf{c} = \mathbf{a} - \mathbf{b}$. Let

$\mathbf{a}$ and $\mathbf{b}$ is defined by $\mathbf{a}\cdot\mathbf{b} = \left\|\mathbf{a}\right\|\left\|\mathbf{b}\right\|\cos\theta,$ where $\theta$

$$\begin{aligned}
\mathbf{c}\cdot\mathbf{c} &= (\mathbf{a}-\mathbf{b})\cdot(\mathbf{a}-\mathbf{b})\\
&= \mathbf{a}\cdot\mathbf{a} - \mathbf{a}\cdot\mathbf{b} - \mathbf{b}\cdot\mathbf{a} + \mathbf{b}\cdot\mathbf{b}\\
&= a^{2} - \mathbf{a}\cdot\mathbf{b} - \mathbf{a}\cdot\mathbf{b} + b^{2}\\
&= a^{2} - 2\,\mathbf{a}\cdot\mathbf{b} + b^{2}\\
c^{2} &= a^{2} + b^{2} - 2ab\cos\theta
\end{aligned}$$

which

$\left\langle u,v\right\rangle_{r} = \int_{a}^{b} r(x)\,u(x)\,v(x)\,dx.$ A double-dot product for matrices is the Frobenius inner product, which is analogous to the dot product on vectors. It is defined as the sum of the products of the corresponding components of two matrices $\mathbf{A}$ and $\mathbf{B}$ of

$a$, $b$ and $c$ denote the lengths of $\mathbf{a}$, $\mathbf{b}$, and $\mathbf{c}$, respectively. The dot product of this with itself is: $\mathbf{c}\cdot\mathbf{c} =$

$\mathbf{a}\cdot\mathbf{e}_{i} = \left\|\mathbf{a}\right\|\left\|\mathbf{e}_{i}\right\|\cos\theta_{i} = \left\|\mathbf{a}\right\|\cos\theta_{i} = a_{i},$ where $a_{i}$ is the component of vector $\mathbf{a}$ in

$\mathbf{a}\cdot\mathbf{b} = \mathbf{a}\cdot\sum_{i} b_{i}\mathbf{e}_{i} = \sum_{i} b_{i}\,(\mathbf{a}\cdot\mathbf{e}_{i}) = \sum_{i} b_{i}a_{i} = \sum_{i} a_{i}b_{i},$ which is precisely

$\mathbf{a}\cdot\mathbf{b} = 0.$ At the other extreme, if they are codirectional, then the angle between them is zero with $\cos 0 = 1$ and $\mathbf{a}\cdot\mathbf{b} = \left\|\mathbf{a}\right\|\left\|\mathbf{b}\right\|.$ This implies that

$\mathbf{a}\times(\mathbf{b}\times\mathbf{c}) = (\mathbf{a}\cdot\mathbf{c})\,\mathbf{b} - (\mathbf{a}\cdot\mathbf{b})\,\mathbf{c}.$ This identity, also known as Lagrange's formula, may be remembered as "ACB minus ABC", keeping in mind which vectors are dotted together. This formula has applications in simplifying vector calculations in physics. In physics,
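A small numerical check of Lagrange's formula, written as a sketch in Python with NumPy (the specific vectors are arbitrary examples, not taken from the article):

```python
import numpy as np

# Verify the "ACB minus ABC" identity: a x (b x c) = (a.c) b - (a.b) c
a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, -1.0, 0.5])
c = np.array([-2.0, 0.0, 1.0])

lhs = np.cross(a, np.cross(b, c))
rhs = np.dot(a, c) * b - np.dot(a, b) * c

print(lhs, rhs)
assert np.allclose(lhs, rhs)   # the identity holds for any a, b, c
```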



$= \alpha\gamma\,(\mathbf{a}\cdot\mathbf{c}) + \alpha\delta\,(\mathbf{a}\cdot\mathbf{d}) + \beta\gamma\,(\mathbf{b}\cdot\mathbf{c}) + \beta\delta\,(\mathbf{b}\cdot\mathbf{d}).$ Given two vectors

$= \alpha\,(\mathbf{a}\cdot(\gamma\mathbf{c}+\delta\mathbf{d})) + \beta\,(\mathbf{b}\cdot(\gamma\mathbf{c}+\delta\mathbf{d})) = \alpha\gamma\,($

$\mathbf{a}\cdot\mathbf{b} = \sum_{i} a_{i}\,\overline{b_{i}},$ where $\overline{b_{i}}$ is the complex conjugate of $b_{i}$. When vectors are represented by column vectors,

a Euclidean vector is a geometric object that possesses both a magnitude and a direction. A vector can be pictured as an arrow. Its magnitude is its length, and its direction is the direction to which the arrow points. The magnitude of a vector $\mathbf{a}$ is denoted by $\left\|\mathbf{a}\right\|$. The dot product of two Euclidean vectors

a compact subset $K$ of $\mathbb{R}^{n}$ with the standard Lebesgue measure, the above definition becomes: $\left\langle f,g\right\rangle = \int_{K} f(\mathbf{x})\,g(\mathbf{x})\,\operatorname{d}^{n}\mathbf{x}.$ Generalized further to complex continuous functions $\psi$ and $\chi$, by analogy with
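To make the function-space inner product concrete, here is a rough numerical sketch (an assumed example, using a simple midpoint Riemann sum over an interval rather than a general compact set K):

```python
import math

def inner_product(f, g, a, b, n=100_000):
    """Approximate <f, g> = integral over [a, b] of f(x) g(x) dx with a midpoint sum."""
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) * g(a + (k + 0.5) * h) for k in range(n)) * h

# Example: sin and cos are orthogonal on [0, 2*pi]; <sin, sin> equals pi there.
print(inner_product(math.sin, math.cos, 0.0, 2 * math.pi))  # ~0
print(inner_product(math.sin, math.sin, 0.0, 2 * math.pi))  # ~3.14159
```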

a field of scalars, being either the field of real numbers $\mathbb{R}$ or the field of complex numbers $\mathbb{C}$. It is usually denoted using angular brackets by $\left\langle\mathbf{a},\mathbf{b}\right\rangle$. The inner product of two vectors over

a network interface controller uses a 48-bit address space. In digital images, 48 bits per pixel, or 16 bits per color channel (red, green and blue), is used for accurate processing. For the human eye, it is almost impossible to see any difference between such an image and a 24-bit image, but the existence of more shades of each of the three primary colors (65,536 as opposed to 256) means that more operations can be performed on
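As an illustration of one such 48-bit quantity, the sketch below (arbitrary example values, not from the article) packs six octets into a 48-bit MAC address and formats it in the usual colon-separated hexadecimal notation:

```python
# A MAC address is a 48-bit value, conventionally shown as six octets in hex.
octets = [0x00, 0x1A, 0x2B, 0x3C, 0x4D, 0x5E]   # example octets, chosen arbitrarily

mac_int = 0
for octet in octets:
    mac_int = (mac_int << 8) | octet            # pack into one 48-bit integer

assert mac_int < 2**48
print(f"{mac_int:012X}")                        # 001A2B3C4D5E
print(":".join(f"{o:02X}" for o in octets))     # 00:1A:2B:3C:4D:5E
```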

a 16-bit address offset, most jump instructions, and 16-bit literal load instructions all used 3 syllables. Dense instruction coding and intensive use of the register sets meant that relatively few store accesses were needed for common scientific codes, such as scalar product and polynomial inner loops. This did much to offset the relatively slow core cycle time, giving the KDF9 about a third of

a commercially oriented computer.) The EGDON operating system was so named because one was going to UKAEA Winfrith: in Thomas Hardy's book The Return of the Native, Winfrith Heath is called Egdon Heath. EGDON Fortran was called EGTRAN. Eldon was so named because Leeds University's computer was located in a converted Eldon chapel. The machine weighed more than 10,300 pounds (5.2 short tons; 4.7 t): control desk with interruption typewriter, 300 lb (136 kg); main store and input/output control unit, 3,500 lb (1,587 kg); arithmetic and main control unit, 3,500 lb (1,587 kg); power supply unit, 3,000 lb (1,360 kg).

48-bit computing

Computers with 48-bit words include

a cycle time of 6 microseconds. Each word could hold a single 48-bit integer or floating-point number, two 24-bit integer or floating-point numbers, six 8-bit instruction syllables, or eight 6-bit characters. There was also provision for efficient handling of double-word (96-bit) numbers in both integer and floating-point formats. However, there was no facility for byte or character addressing, so that non-numerical work suffered by comparison. Its standard character set
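To illustrate the word formats described above, here is a Python sketch (not KDF9 software, just bit arithmetic on an arbitrary example value) that splits one 48-bit word into six 8-bit syllables or eight 6-bit characters:

```python
# One 48-bit KDF9 word viewed as six 8-bit syllables or eight 6-bit characters.
word = 0x123456789ABC            # arbitrary 48-bit example value
assert word < 2**48

syllables = [(word >> (8 * (5 - i))) & 0xFF for i in range(6)]    # six 8-bit syllables
characters = [(word >> (6 * (7 - i))) & 0x3F for i in range(8)]   # eight 6-bit characters

print([f"{s:02X}" for s in syllables])    # ['12', '34', '56', '78', '9A', 'BC']
print([f"{c:02o}" for c in characters])   # the same bits as eight 6-bit values, in octal
```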



a distinct interrupt in that case. Later operating systems, including Eldon 2 at the University of Leeds, and COTAN, developed by UKAEA Culham Laboratories with the collaboration of Glasgow University, were fully interactive multi-access systems, with PDP-8 front ends to handle the terminals. The Kidsgrove and Whetstone Algol 60 compilers were among the first of their class. The Kidsgrove compiler stressed optimization;

a function which weights each term of the inner product with a value). Explicitly, the inner product of functions $u(x)$ and $v(x)$ with respect to the weight function $r(x) > 0$ is $\left\langle u,v\right\rangle_{r} = \int$

a function with domain $\{k \in \mathbb{N} : 1 \leq k \leq n\}$, and $u_{i}$ is a notation for the image of $i$ by the function/vector $u$. This notion can be generalized to square-integrable functions: just as

a matrix as a dyadic, we can define a different double-dot product (see Dyadics § Product of dyadic and dyadic); however, it is not an inner product. The inner product between a tensor of order $n$ and a tensor of order $m$ is a tensor of order $n + m - 2$; see Tensor contraction for details. The straightforward algorithm for calculating

a presentation, the notions of length and angle are defined by means of the dot product. The length of a vector is defined as the square root of the dot product of the vector by itself, and the cosine of the (non-oriented) angle between two vectors of length one is defined as their dot product. So the equivalence of the two definitions of the dot product is a part of the equivalence of the classical and

is homogeneous under scaling in each variable, meaning that for any scalar $\alpha$, $(\alpha\mathbf{a})\cdot\mathbf{b} = \alpha\,(\mathbf{a}\cdot\mathbf{b}) = \mathbf{a}\cdot(\alpha\mathbf{b}).$ It also satisfies

is positive definite, which means that $\mathbf{a}\cdot\mathbf{a}$ is never negative, and is zero if and only if $\mathbf{a} = \mathbf{0}$, the zero vector. If $\mathbf{e}_{1},\cdots,\mathbf{e}_{n}$ are

is a legend that the KDF9 was developed as project KD9 (Kidsgrove Development 9) and that the 'F' in its designation was contributed by the then chairman after a long and tedious discussion on what to name the machine at launch: "I don't care if you call it the F—". The truth is more mundane: the name was chosen essentially at random by a marketing manager. (See also KDF8 for the parallel development and use of

is a non-negative real number, and it is nonzero except for the zero vector. However, the complex dot product is sesquilinear rather than bilinear, as it is conjugate linear and not linear in $\mathbf{a}$. The dot product is not symmetric, since $\mathbf{a}\cdot\mathbf{b} = \overline{\mathbf{b}\cdot\mathbf{a}}.$ The angle between two complex vectors

is based on the notions of angle and distance (magnitude) of vectors. The equivalence of these two definitions relies on having a Cartesian coordinate system for Euclidean space. In modern presentations of Euclidean geometry, the points of space are defined in terms of their Cartesian coordinates, and Euclidean space itself is commonly identified with the real coordinate space $\mathbb{R}^{n}$. In such


is defined as the product of the projection of the first vector onto the second vector and the magnitude of the second vector. For example: For vectors with complex entries, using the given definition of the dot product would lead to quite different properties. For instance, the dot product of a vector with itself could be zero without the vector being the zero vector (e.g. this would happen with

is defined as: $\mathbf{a}\cdot\mathbf{b} = \sum_{i=1}^{n} a_{i}b_{i} = a_{1}b_{1} + a_{2}b_{2} + \cdots + a_{n}b_{n},$ where $\Sigma$ denotes summation and $n$
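A minimal sketch of the algebraic definition in Python (the helper name is ours, not from the article), checked against the worked examples used elsewhere in the text:

```python
def dot(a, b):
    """Sum of the products of corresponding entries of two equal-length sequences."""
    if len(a) != len(b):
        raise ValueError("vectors must have the same length")
    return sum(x * y for x, y in zip(a, b))

# The article's worked examples:
print(dot([1, 3, -5], [4, -2, -1]))   # 4 - 6 + 5 = 3
print(dot([1, 3, -5], [1, 3, -5]))    # 1 + 9 + 25 = 35, the squared length of [1, 3, -5]
```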

is not associative.) Before long, however, it was recognized that error rates with transistor machines were not an issue; they either worked correctly or did not work at all. Consequently, the idea of sum checks was abandoned. The initial matrix package proved a very useful system-testing tool, as it was able to generate lengthy performance checks well before the more formal test packages that were subsequently developed. There

is the Kronecker delta. Also, by the geometric definition, for any vector $\mathbf{e}_{i}$ and a vector $\mathbf{a}$, we note that $\mathbf{a}\cdot\mathbf{e}_{i} = \left\|\mathbf{a}\right\|\left\|\mathbf{e}_{i}\right\|\cos\theta_{i} =$

is the angle between $\mathbf{a}$ and $\mathbf{b}$. In particular, if the vectors $\mathbf{a}$ and $\mathbf{b}$ are orthogonal (i.e., their angle is $\frac{\pi}{2}$ or $90^{\circ}$), then $\cos\frac{\pi}{2} = 0$, which implies that

is the determinant of the matrix whose columns are the Cartesian coordinates of the three vectors. It is the signed volume of the parallelepiped defined by the three vectors, and is isomorphic to the three-dimensional special case of the exterior product of three vectors. The vector triple product is defined by $\mathbf{a}\times(\mathbf{b}\times\mathbf{c}) =$

is the dimension of the vector space. For instance, in three-dimensional space, the dot product of vectors $[1, 3, -5]$ and $[4, -2, -1]$ is:

$$\begin{aligned}[1,3,-5]\cdot[4,-2,-1] &= (1\times 4) + (3\times -2) + (-5\times -1)\\ &= 4 - 6 + 5\\ &= 3\end{aligned}$$

Likewise,

is the law of cosines. There are two ternary operations involving dot product and cross product. The scalar triple product of three vectors is defined as $\mathbf{a}\cdot(\mathbf{b}\times\mathbf{c}) = \mathbf{b}\cdot(\mathbf{c}\times\mathbf{a}) = \mathbf{c}\cdot(\mathbf{a}\times\mathbf{b}).$ Its value
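The following NumPy sketch (example vectors are arbitrary) checks that the scalar triple product equals the determinant of the matrix whose columns are the three vectors:

```python
import numpy as np

a = np.array([1.0, 0.0, 2.0])
b = np.array([0.0, 3.0, -1.0])
c = np.array([4.0, 1.0, 1.0])

triple = np.dot(a, np.cross(b, c))                 # a . (b x c)
det = np.linalg.det(np.column_stack((a, b, c)))    # det of the matrix [a | b | c]

print(triple, det)
assert np.isclose(triple, det)   # the signed volume of the parallelepiped
```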

is the unit vector in the direction of $\mathbf{b}$. The dot product is thus characterized geometrically by $\mathbf{a}\cdot\mathbf{b} = a_{b}\left\|\mathbf{b}\right\| = b_{a}\left\|\mathbf{a}\right\|.$ The dot product, defined in this manner,

is the angle between $\mathbf{a}$ and $\mathbf{b}$. In terms of the geometric definition of the dot product, this can be rewritten as $a_{b} = \mathbf{a}\cdot\widehat{\mathbf{b}},$ where $\widehat{\mathbf{b}} = \mathbf{b}/\left\|\mathbf{b}\right\|$
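A short Python sketch of the scalar projection $a_{b} = \mathbf{a}\cdot\widehat{\mathbf{b}}$ (the function name and the example vectors are ours, chosen for illustration):

```python
import math

def scalar_projection(a, b):
    """Scalar component of a in the direction of b: a . (b / ||b||)."""
    norm_b = math.sqrt(sum(x * x for x in b))
    return sum(x * y for x, y in zip(a, b)) / norm_b

a = [3.0, 4.0]
b = [1.0, 0.0]
print(scalar_projection(a, b))   # 3.0: the x-component of a, since b points along x
```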


is then given by $\cos\theta = \frac{\operatorname{Re}(\mathbf{a}\cdot\mathbf{b})}{\left\|\mathbf{a}\right\|\left\|\mathbf{b}\right\|}.$ The complex dot product leads to

the AN/FSQ-32, CDC 1604/upper-3000 series, BESM-6, Ferranti Atlas, Philco TRANSAC S-2000 and Burroughs large systems. The Honeywell DATAmatic 1000, H-800, the MANIAC II, the MANIAC III, the Brookhaven National Laboratory Merlin, the Philco CXPQ, the Ferranti Orion, the Telefunken Rechner TR 440, the ICT 1301, and many other early transistor-based and vacuum tube computers used 48-bit words. The IBM System/38, and

the IBM AS/400 in its CISC variants, use 48-bit addresses. The address size used in logical block addressing was increased to 48 bits with the introduction of ATA-6. The Ext4 file system physically limits the file block count to 48 bits. The minimal implementation of the x86-64 architecture provides 48-bit addressing encoded into 64 bits; future versions of the architecture can expand this without breaking properly written applications. The media access control address (MAC address) of

the cosine of the angle between them. These definitions are equivalent when using Cartesian coordinates. In modern geometry, Euclidean spaces are often defined by using vector spaces. In this case, the dot product is used for defining lengths (the length of a vector is the square root of the dot product of the vector by itself) and angles (the cosine of the angle between two vectors is the quotient of their dot product by

the distributive law, meaning that $\mathbf{a}\cdot(\mathbf{b}+\mathbf{c}) = \mathbf{a}\cdot\mathbf{b} + \mathbf{a}\cdot\mathbf{c}.$ These properties may be summarized by saying that the dot product is a bilinear form. Moreover, this bilinear form

the inner product (or rarely the projection product) of Euclidean space, even though it is not the only inner product that can be defined on Euclidean space (see Inner product space for more). Algebraically, the dot product is the sum of the products of the corresponding entries of the two sequences of numbers. Geometrically, it is the product of the Euclidean magnitudes of the two vectors and

the norm squared, $\mathbf{a}\cdot\mathbf{a} = \left\|\mathbf{a}\right\|^{2}$, after the Euclidean norm; it is a vector generalization of the absolute square of a complex scalar (see also: Squared Euclidean distance). The inner product generalizes the dot product to abstract vector spaces over

the standard basis vectors in $\mathbb{R}^{n}$, then we may write

$$\begin{aligned}\mathbf{a} &= [a_{1},\dots,a_{n}] = \sum_{i} a_{i}\mathbf{e}_{i}\\ \mathbf{b} &= [b_{1},\dots,b_{n}] = \sum_{i} b_{i}\mathbf{e}_{i}.\end{aligned}$$

The vectors $\mathbf{e}_{i}$ are an orthonormal basis, which means that they have unit length and are at right angles to each other. Since these vectors have unit length, $\mathbf{e}_{i}\cdot\mathbf{e}_{i} = 1$, and since they form right angles with each other, if $i \neq j$, $\mathbf{e}_{i}\cdot\mathbf{e}_{j} = 0.$ Thus in general, we can say that $\mathbf{e}_{i}\cdot\mathbf{e}_{j} = \delta_{ij},$ where $\delta_{ij}$
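A tiny sketch checking that the standard basis vectors of $\mathbb{R}^{n}$ satisfy $\mathbf{e}_{i}\cdot\mathbf{e}_{j} = \delta_{ij}$ (the dimension 4 is an arbitrary choice for illustration):

```python
n = 4
basis = [[1 if i == k else 0 for k in range(n)] for i in range(n)]   # e_1 .. e_n

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

for i in range(n):
    for j in range(n):
        expected = 1 if i == j else 0          # the Kronecker delta
        assert dot(basis[i], basis[j]) == expected
print("e_i . e_j equals the Kronecker delta for all i, j")
```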

the KDF9 were entirely solid-state. The KDF9 used transformer-coupled diode–transistor logic, built from germanium diodes, about 20,000 transistors, and about 2,000 toroid pulse transformers. They ran on a 1 MHz clock that delivered two pulses of 250 ns, separated by 500 ns, in each clock cycle. The maximum configuration incorporated 32K words of 48-bit core storage (192K bytes) with

the NOL register. Somewhat different was the Lock-Out interrupt, which resulted from trying to access an area of store that was currently being used by an I/O device, so that there was hardware mutual exclusion of access to DMA buffers. When a program blocked on a Lock-Out, or voluntarily waited for an I/O transfer to terminate, it was interrupted and the Director switched to the program of highest priority that



the Whetstone compiler produced an interpretive object code aimed at debugging. It was by instrumenting the latter that Brian Wichmann obtained the statistics on program behaviour that led to the Whetstone benchmark for scientific computation, which in turn inspired the Dhrystone benchmark for non-numerical workloads. Machine code orders were written in a form of octal officially named syllabic octal (also known as 'slob-octal' or 'slob' notation). It represented 8 bits with three octal digits, but
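Based on the description here and the 2-3-3 bit grouping mentioned later in the article, the following Python sketch (our reconstruction for illustration, not an official KDF9 tool) renders an 8-bit syllable in syllabic 'slob' octal:

```python
def slob_octal(byte):
    """Render an 8-bit value as syllabic octal: one 2-bit digit then two 3-bit digits."""
    if not 0 <= byte <= 0xFF:
        raise ValueError("expected an 8-bit value")
    return f"{(byte >> 6) & 0b11:o}{(byte >> 3) & 0b111:o}{byte & 0b111:o}"

print(slob_octal(0xFF))   # '377': top 2 bits give 3, then two groups of 3 bits give 7 and 7
print(slob_octal(0x41))   # '101'
```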

the above example in this way, a 1 × 3 matrix (row vector) is multiplied by a 3 × 1 matrix (column vector) to get a 1 × 1 matrix that is identified with its unique entry:

$$\begin{bmatrix}1 & 3 & -5\end{bmatrix}\begin{bmatrix}4\\-2\\-1\end{bmatrix} = 3.$$

In Euclidean space,
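The same identification, sketched with NumPy (the row and column shapes mirror the 1 × 3 by 3 × 1 product above):

```python
import numpy as np

row = np.array([[1, 3, -5]])          # 1 x 3 row vector
col = np.array([[4], [-2], [-1]])     # 3 x 1 column vector

product = row @ col                   # a 1 x 1 matrix
print(product)                        # [[3]]
print(product.item())                 # 3, identified with the unique entry
```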

the algebraic definition of the dot product. So the geometric dot product equals the algebraic dot product. The dot product fulfills the following properties if $\mathbf{a}$, $\mathbf{b}$, $\mathbf{c}$ and $\mathbf{d}$ are real vectors and $\alpha$, $\beta$, $\gamma$ and $\delta$ are scalars. The commutative property can also be easily proven with

the algebraic definition, and in more general spaces (where the notion of angle might not be geometrically intuitive but an analogous product can be defined) the angle between two vectors can be defined as $\theta = \arccos\left(\frac{\mathbf{a}\cdot\mathbf{b}}{\left\|\mathbf{a}\right\|\left\|\mathbf{b}\right\|}\right).$ $= \alpha\,($

the complex inner product above, gives: $\left\langle\psi,\chi\right\rangle = \int_{K}\psi(z)\,\overline{\chi(z)}\,\mathrm{d}z.$ Inner products can have a weight function (i.e.,

the contents of the I part. This made the coding of counting loops very efficient. Three additional Nest levels and one additional SJNS level were reserved to the Director, the operating system, allowing short-path interrupts to be handled without explicit register saving and restoring. As a result, the interrupt overhead was only three clock cycles. Instructions were of one, two, or three syllables. Although
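A speculative Python sketch of how a Q Store register (Counter, Increment, Modifier) supports address modification and counting loops, as described above; the class and its field handling are our simplification for illustration, not KDF9 microcode:

```python
class QStore:
    """Simplified model of one KDF9 Q Store: 16-bit Counter, Increment and Modifier parts."""
    def __init__(self, counter, increment, modifier):
        self.c, self.i, self.m = counter, increment, modifier

    def modified_address(self, base, update=True):
        """Return base + M; optionally decrement C by 1 and step M by I afterwards."""
        address = (base + self.m) & 0xFFFF
        if update:
            self.c = (self.c - 1) & 0xFFFF
            self.m = (self.m + self.i) & 0xFFFF
        return address

# Walk five consecutive words starting at address 100, one per loop iteration.
q = QStore(counter=5, increment=1, modifier=0)
while q.c > 0:
    print(q.modified_address(100))    # 100, 101, 102, 103, 104
```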

the direction of $\mathbf{e}_{i}$. The last step in the equality can be seen from the figure. Now applying the distributivity of the geometric version of the dot product gives $\mathbf{a}\cdot\mathbf{b} = \mathbf{a}\cdot\sum_{i} b_{i}\mathbf{e}_{i} = \sum_{i} b_{i}\,($

the dot product can also be written as a matrix product $\mathbf{a}\cdot\mathbf{b} = \mathbf{a}^{\mathsf{T}}\mathbf{b},$ where $\mathbf{a}^{\mathsf{T}}$ denotes the transpose of $\mathbf{a}$. Expressing

the dot product can be expressed as a matrix product involving a conjugate transpose, denoted with the superscript H: $\mathbf{a}\cdot\mathbf{b} = \mathbf{b}^{\mathsf{H}}\mathbf{a}.$ In the case of vectors with real components, this definition is the same as in the real case. The dot product of any vector with itself

the dot product of a vector $\mathbf{a}$ with itself is $\mathbf{a}\cdot\mathbf{a} = \left\|\mathbf{a}\right\|^{2},$ which gives $\left\|\mathbf{a}\right\| = \sqrt{\mathbf{a}\cdot\mathbf{a}},$



the dot product of the vector $[1, 3, -5]$ with itself is:

$$\begin{aligned}[1,3,-5]\cdot[1,3,-5] &= (1\times 1) + (3\times 3) + (-5\times -5)\\ &= 1 + 9 + 25\\ &= 35\end{aligned}$$

If vectors are identified with column vectors,

the dot product takes two vectors and returns a scalar quantity. It is also known as the "scalar product". The dot product of two vectors can be defined as the product of the magnitudes of the two vectors and the cosine of the angle between the two vectors. Thus, $\mathbf{a}\cdot\mathbf{b} = |\mathbf{a}|\,|\mathbf{b}|\cos\theta.$ Alternatively, it

the field of complex numbers is, in general, a complex number, and is sesquilinear instead of bilinear. An inner product space is a normed vector space, and the inner product of a vector with itself is real and positive-definite. The dot product is defined for vectors that have a finite number of entries. Thus these vectors can be regarded as discrete functions: a length-$n$ vector $u$ is, then,

the first digit represented only the two most-significant bits, whilst the other two digits represented the remaining groups of three bits each. Within English Electric, its predecessor, DEUCE, had a well-used matrix scheme based on GIP (General Interpretive Programme). The unreliability of valve machines led to the inclusion of a sum-check mechanism to detect errors in matrix operations. The scheme used block floating-point implemented with fixed-point arithmetic hardware, in which

the formula for the Euclidean length of the vector. The scalar projection (or scalar component) of a Euclidean vector $\mathbf{a}$ in the direction of a Euclidean vector $\mathbf{b}$ is given by $a_{b} = \left\|\mathbf{a}\right\|\cos\theta,$ where $\theta$

the image without risk of noticeable banding or posterization.

Scalar product

In mathematics, the dot product or scalar product is an algebraic operation that takes two equal-length sequences of numbers (usually coordinate vectors), and returns a single number. In Euclidean geometry, the dot product of the Cartesian coordinates of two vectors is widely used. It is often called

the inner product on vectors uses a sum over corresponding components, the inner product on functions is defined as an integral over some measure space $(X, \mathcal{A}, \mu)$: $\left\langle u,v\right\rangle = \int_{X} u\,v\,\mathrm{d}\mu.$ For example, if $f$ and $g$ are continuous functions over

the modern formulations of Euclidean geometry. The dot product of two vectors $\mathbf{a} = [a_{1}, a_{2}, \cdots, a_{n}]$ and $\mathbf{b} = [b_{1}, b_{2}, \cdots, b_{n}]$, specified with respect to an orthonormal basis,

the notions of Hermitian forms and general inner product spaces, which are widely used in mathematics and physics. The self dot product of a complex vector $\mathbf{a}\cdot\mathbf{a} = \mathbf{a}^{\mathsf{H}}\mathbf{a}$, involving the conjugate transpose of a row vector, is also known as

the product of their lengths). The name "dot product" is derived from the dot operator " · " that is often used to designate this operation; the alternative name "scalar product" emphasizes that the result is a scalar, rather than a vector (as with the vector product in three-dimensional space). The dot product may be defined algebraically or geometrically. The geometric definition

the same size:

$$\mathbf{A}:\mathbf{B} = \sum_{i}\sum_{j} A_{ij}\overline{B_{ij}} = \operatorname{tr}(\mathbf{B}^{\mathsf{H}}\mathbf{A}) = \operatorname{tr}(\mathbf{A}\mathbf{B}^{\mathsf{H}}).$$

And for real matrices,

$$\mathbf{A}:\mathbf{B} = \sum_{i}\sum_{j} A_{ij}B_{ij} = \operatorname{tr}(\mathbf{B}^{\mathsf{T}}\mathbf{A}) = \operatorname{tr}(\mathbf{A}\mathbf{B}^{\mathsf{T}}) = \operatorname{tr}(\mathbf{A}^{\mathsf{T}}\mathbf{B}) = \operatorname{tr}(\mathbf{B}\mathbf{A}^{\mathsf{T}}).$$

Writing
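A NumPy check of the real-matrix identities above (the example matrices are arbitrary):

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0, -1.0], [0.5, 2.0]])

frobenius = np.sum(A * B)                 # sum over i, j of A_ij * B_ij
print(frobenius)
assert np.isclose(frobenius, np.trace(B.T @ A))
assert np.isclose(frobenius, np.trace(A @ B.T))
assert np.isclose(frobenius, np.trace(A.T @ B))
assert np.isclose(frobenius, np.trace(B @ A.T))
```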

the speed of its much more famous, but eight times more expensive and much less commercially successful contemporary, the Manchester/Ferranti Atlas Computer. The KDF9 was one of the earliest fully hardware-secured multiprogramming systems. Up to four programs could be run at once under the control of its elegantly simple operating system, the Timesharing Director, each being confined to its own core area by BA (Base Address) and NOL (Number of Locations) registers. Each program had its own sets of stack and Q store registers, which were activated when that program

the sum-checks were precise. However, when the corresponding scheme was implemented on the KDF9, it used floating point, a new concept that had received only limited mathematical analysis. It quickly became clear that sum checks were no longer precise, and a project was established in an attempt to provide a usable check. (In floating point, (A + B) + C is not necessarily the same as A + (B + C), i.e. the + operation
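The non-associativity mentioned here is easy to reproduce with modern double-precision floats; a minimal sketch:

```python
# Floating-point addition is not associative: (A + B) + C can differ from A + (B + C).
a, b, c = 1e16, -1e16, 1.0

left = (a + b) + c    # 1.0
right = a + (b + c)   # 0.0: the 1.0 is lost when added to -1e16 first
print(left, right, left == right)   # 1.0 0.0 False
```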

the vector $\mathbf{a} = [1\ \ i]$). This in turn would have consequences for notions like length and angle. Properties such as the positive-definite norm can be salvaged at the cost of giving up the symmetric and bilinear properties of the dot product, through the alternative definition $\mathbf{a}\cdot\mathbf{b} = \sum_{i}$
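A sketch contrasting the naive (unconjugated) sum of products with the conjugate-linear definition for the vector [1, i]; the second vector b is an arbitrary example used only for the symmetry check:

```python
# For a = [1, i], the naive sum of products gives 0 even though a is not the zero vector;
# conjugating the second factor restores a positive-definite self product.
a = [1 + 0j, 1j]
b = [2 + 1j, 3 - 2j]   # arbitrary second vector

naive = sum(x * y for x, y in zip(a, a))                     # 1 + i*i = 0
hermitian = sum(x * y.conjugate() for x, y in zip(a, a))     # 1 + 1 = 2

a_dot_b = sum(x * y.conjugate() for x, y in zip(a, b))
b_dot_a = sum(x * y.conjugate() for x, y in zip(b, a))
print(naive, hermitian)                                      # 0j (2+0j)
print(a_dot_b == b_dot_a.conjugate())                        # True: a.b equals conj(b.a)
```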

the word 'byte' had been coined by the designers of the IBM 7030 Stretch for a group of eight bits, it was not yet well known, and English Electric used the word 'syllable' for what is now called a byte. Most arithmetic took place at the top of the Nest and used zero-address, one-syllable instructions, although address arithmetic and index updating were handled separately in the Q Store. Q Store handling and some memory reference instructions used two syllables. Memory reference instructions with

was a similar stack of return addresses. The Q Store was a set of 16 index registers, each of 48 bits divided into Counter (C), Increment (I) and Modifier (M) parts of 16 bits each. Flags on a memory-reference instruction specified whether the address should be modified by the M part of a Q Store, and, if so, whether the C part should be decremented by 1 and the M part incremented by

was a version of the Friden Flexowriter paper tape code that was oriented to Algol 60, and included unusual characters such as the Algol subscript 10. However, each other I/O device type implemented its own subset of that. Not every character that could be read from paper tape could be successfully printed, for example. The CPU architecture featured three register sets. The Nest was a 16-deep pushdown stack of arithmetic registers. The SJNS (Subroutine Jump Nesting Store)

was aimed at commercial processing workloads. The KDF9 was an early example of a machine that directly supported multiprogramming, using offsets into its core memory to separate the programs into distinct virtual address spaces. Several operating systems were developed for the platform, including some that provided fully interactive use through PDP-8 machines acting as smart terminal servers. A number of compilers were available, notably both checkout and globally optimizing compilers for Algol 60. The logic circuits of

was dispatched, so that context switching was very efficient. Each program could drive hardware I/O devices directly, but was limited by hardware checks to those that the Director had allocated to it. Any attempt to use an unallocated device caused an error interrupt. A similar interrupt resulted from overfilling (or over-emptying) the Nest or SJNS, or attempting to access storage at an address above that given in

was not itself blocked. When a Lock-Out cleared, or an awaited transfer terminated, and the responsible program was of higher priority than the program currently running, the I/O Control (IOC) unit interrupted to allow an immediate context switch. IOC also made provision to avoid priority inversion, in which a program of high priority waits for a device made busy by a program of lower priority, requesting
