Han unification

Article snapshot taken from Wikipedia under the Creative Commons Attribution-ShareAlike license. Give it a read and then ask your questions in the chat. We can research this topic together.

Han unification is an effort by the authors of Unicode and the Universal Character Set to map multiple character sets of the Han characters of the so-called CJK languages into a single set of unified characters. Han characters are a feature shared in common by written Chinese (hanzi), Japanese (kanji), Korean (hanja) and Vietnamese (chữ Hán).

134-636: Modern Chinese, Japanese and Korean typefaces typically use regional or historical variants of a given Han character. In the formulation of Unicode, an attempt was made to unify these variants by considering them as allographs – different glyphs representing the same "grapheme" or orthographic unit – hence, "Han unification", with the resulting character repertoire sometimes contracted to Unihan. Nevertheless, many characters have regional variants assigned to different code points, such as Traditional 個 (U+500B) versus Simplified 个 (U+4E2A). The Unicode Standard details
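
For illustration, the following Python sketch (standard library only; the two characters are the ones cited above) confirms that the Traditional and Simplified forms occupy distinct code points:

    import unicodedata

    # Traditional 個 and Simplified 个 are separately encoded, so plain text
    # can carry either form without any font or language metadata.
    for ch in "個个":
        print(f"U+{ord(ch):04X}  {unicodedata.name(ch, '<unnamed>')}")

    # Typical output:
    # U+500B  CJK UNIFIED IDEOGRAPH-500B
    # U+4E2A  CJK UNIFIED IDEOGRAPH-4E2A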

268-430: A monospaced ( non-proportional or fixed-width ) typeface uses a single standard width for all glyphs in the font. Duospaced fonts are similar to monospaced fonts, but characters can also be two character widths instead of a single character width. Many people generally find proportional typefaces nicer-looking and easier to read, and thus they appear more commonly in professionally published printed material. For

402-403: A vector font . Bitmap fonts were more commonly used in the earlier stages of digital type, and are rarely used today. These bitmapped typefaces were first produced by Casady & Greene, Inc. and were also known as Fluent Fonts. Fluent Fonts became mostly obsolete with the creation of downloadable PostScript fonts, and these new fonts are called Fluent Laser Fonts (FLF). When an outline font

536-604: A (B)TRON-based system was adopted by the Japanese government organization "Center for Educational Computing" as the system of choice for school education, including compulsory education. However, in April, a report titled "1989 National Trade Estimate Report on Foreign Trade Barriers" from the Office of the United States Trade Representative specifically listed the system as a trade barrier in Japan. The report claimed that

670-458: A computer file containing scalable outline letterforms ( digital font ), in one of several common formats. Some typefaces, such as Verdana , are designed primarily for use on computer screens . Digital type became the dominant form of type in the late 1980s and early 1990s. Digital fonts store the image of each character either as a bitmap in a bitmap font , or by mathematical description of lines and curves in an outline font , also called

804-405: A font family is a set of fonts within the same typeface: for example Times Roman 8, Times Roman 10, Times Roman 12, etc. In web typography (using the CSS font-family property), a 'font family' equates to a 'typeface family' or even to a very broad category such as sans-serif that encompasses many typeface families. Another way to look at the distinction between font and typeface is that a font

938-431: a base character, they signal that the two-character sequence selects a variation (typically in terms of grapheme, but also in terms of underlying meaning, as in the case of a location name or other proper noun) of the base character. This, then, is not a selection of an alternate glyph, but the selection of a grapheme variation or a variation of the base abstract character. Such a two-character sequence, however, can be easily mapped to

1072-457: A bracketed serif and a substantial difference in weight within the strokes. Though some argument exists as to whether Transitional fonts exist as a discrete category among serif fonts, Transitional fonts lie somewhere between Old Style and Modern style typefaces. Transitional fonts exhibit a marked increase in the variation of stroke weight and a more horizontal serif compared to Old Style. Slab serif designs have particularly large serifs, and date to

1206-666: A character in a specific typeface . One character may be represented by many distinct glyphs, for example a "g" or an "a", both of which may have one loop ( ɑ , ɡ ) or two ( a , g ). Yet for a reader of Latin script based languages the two variations of the "a" character are both recognized as the same grapheme. Graphemes present in national character code standards have been added to Unicode, as required by Unicode's Source Separation rule, even where they can be composed of characters already available. The national character code standards existing in CJK languages are considerably more involved, given

1340-451: A complementary set of numeric digits. Numbers can be typeset in two main independent sets of ways: lining and non-lining figures , and proportional and tabular styles. Most modern typefaces set numeric digits by default as lining figures, which are the height of upper-case letters. Non-lining figures , styled to match lower-case letters, are often common in fonts intended for body text, as they are thought to be less disruptive to

1474-608: A comprehensive vocabulary for describing the many aspects of typefaces and typography. Some vocabulary applies only to a subset of all scripts . Serifs , for example, are a purely decorative characteristic of typefaces used for European scripts, whereas the glyphs used in Arabic or East Asian scripts have characteristics (such as stroke width) that may be similar in some respects but cannot reasonably be called serifs and may not be purely decorative. Typefaces can be divided into two main categories: serif and sans serif . Serifs comprise

1608-582: a cursive version of the 糸 component. However, in mainland China, the standards bodies wanted to standardize the cursive form when used in characters like 红. Because this change happened relatively recently, there was a transition period. Both 紅 (U+7D05) and 红 (U+7EA2) got separate code points in the PRC's text encoding standards so that Chinese-language documents could use both versions. The two variants received distinct code points in Unicode as well. The case of

1742-585: a device may come with only one font pre-installed. The system font must decide on the default glyph for each code point, and these glyphs can differ greatly, indicating different underlying graphemes. Consequently, relying on language markup across the board is an approach beset with two major issues. First, there are contexts where language markup is not available (code commits, plain text). Second, any solution would require every operating system to come pre-installed with many glyphs for semantically identical characters that have many variants. In addition to

1876-543: A document with "foreign" glyphs: variants of 骨 can appear as mirror images, 者 can be missing a stroke/have an extraneous stroke, and 令 may be unreadable to Non-Japanese people. (In Japan, both variants are accepted). In some cases, often where the changes are the most striking, Unicode has encoded variant characters, making it unnecessary to switch between fonts or lang attributes. However, some variants with arguably minimal differences get distinct codepoints, and not every variant with arguably substantial changes gets

2010-540: A document without changing the document's text flow are said to be "metrically identical" (or "metrically compatible"). Several typefaces have been created to be metrically compatible with widely used proprietary typefaces to allow the editing of documents set in such typefaces in digital typesetting environments where these typefaces are not available. For instance, the free and open-source Liberation fonts and Croscore fonts have been designed as metrically compatible substitutes for widely used Microsoft fonts. During

2144-453: A film strip (in the form of a film negative, with the letters as clear areas on an opaque black background). A high-intensity light source behind the film strip projected the image of each glyph through an optical system, which focused the desired letter onto the light-sensitive phototypesetting paper at a specific size and position. This photographic typesetting process permitted optical scaling , allowing designers to produce multiple sizes from

2278-453: A font) suitable to the specified language. (Besides actual character variation—look for differences in stroke order, number, or direction—the typefaces may also reflect different typographical styles, as with serif and non-serif alphabets.) This only works for fallback glyph selection if you have CJK fonts installed on your system and the font selected to display this article does not include glyphs for these characters. No character variant that

2412-449: A form of semantic variant. Unicode classifies 丟 and 丢 as each other's respective traditional and simplified variants and also as each other's semantic variants. However, while Unicode classifies 億 (U+5104) and 亿 (U+4EBF) as each other's respective traditional and simplified variants, Unicode does not consider 億 and 亿 to be semantic variants of each other. Unicode claims that "Ideally, there would be no pairs of z-variants in
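
These variant classifications are published as fields such as kSemanticVariant, kSimplifiedVariant, kTraditionalVariant and kZVariant in the Unihan database. The following Python sketch collects them from the Unihan_Variants.txt data file, assuming the usual tab-separated "U+XXXX<TAB>field<TAB>values" layout of the Unihan distribution (the file path is illustrative):

    from collections import defaultdict

    def load_unihan_variants(path="Unihan_Variants.txt"):
        """Collect variant relations, e.g. variants["U+4E1F"]["kSemanticVariant"]."""
        variants = defaultdict(dict)
        with open(path, encoding="utf-8") as f:
            for line in f:
                line = line.strip()
                if not line or line.startswith("#"):
                    continue  # skip comments and blank lines
                codepoint, field, value = line.split("\t", 2)
                # Values may carry source annotations such as "U+4E22<kMatthews";
                # keep only the code points themselves.
                variants[codepoint][field] = [v.split("<")[0] for v in value.split()]
        return variants

    # Example (assuming the file is present):
    #   load_unihan_variants()["U+4E1F"].get("kSemanticVariant")  ->  ["U+4E22"]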

2546-417: A glyph cannot possibly still, for example, mean the same grapheme understood as the small letter "a"—Unicode separates those into separate code points. For Unihan the same thing is done whenever the abstract meaning changes, however rather than speaking of the abstract meaning of a grapheme (the letter "a"), the unification of Han ideographs assigns a new code point for each different meaning—even if that meaning

2680-439: a glyph rising above the x-height as the ascender. The distance from the baseline to the top of the ascender or of a regular uppercase glyph (the cap line) is also known as the cap height. The height of the ascender can have a dramatic effect on the readability and appearance of a font. The ratio between the x-height and the ascent or cap height often serves to characterize typefaces. Typefaces that can be substituted for one another in

2814-451: A grapheme and an abstract character assigned as a sememe. In contrast, consider ASCII 's unification of punctuation and diacritics , where graphemes with widely different meanings (for example, an apostrophe and a single quotation mark) are unified because the glyphs are the same. For Unihan the characters are not unified by their appearance, but by their definition or meaning. For a grapheme to be represented by various glyphs means that

2948-461: A graphical representation or rendering problem to be overcome by more artful fonts, the widespread use of Unicode would make it difficult to preserve such distinctions. The problem of one character representing semantically different concepts is also present in the Latin part of Unicode. The Unicode character for a curved apostrophe is the same as the character for a right single quote (’). On the other hand,

3082-589: A main typeface have been in use for centuries. In some formats they have been marketed as separate fonts. In the early 1990s, the Adobe Systems type group introduced the idea of expert set fonts, which had a standardized set of additional glyphs, including small caps , old style figures , and additional superior letters, fractions and ligatures not found in the main fonts for the typeface. Supplemental fonts have also included alternate letters such as swashes , dingbats , and alternate character sets, complementing

3216-664: a monospaced font for proper viewing, with the exception of Shift JIS art, which takes advantage of the proportional characters in the MS PGothic font. In a web page, the <tt>, <code> or <pre> HTML tags most commonly specify monospaced fonts. In LaTeX, the verbatim environment or the Teletype font family (e.g., \texttt{...} or {\ttfamily ...}) uses monospaced fonts (in TeX, use {\tt ...}). Any two lines of text with

3350-448: A plan would also eliminate the very visually distinct variations for characters like 直 (U+76F4) and 雇 (U+96C7). One would expect that all simplified characters would simultaneously also be z-variants or semantic variants with their traditional counterparts, but many are neither. It is easier to explain the strange case that semantic variants can be simultaneously both semantic variants and specialized variants when Unicode's definition

3484-406: A result of revival, such as Linotype Syntax , Linotype Univers ; while others have alternate styling designed as compatible replacements of each other, such as Compatil , Generis . Font superfamilies began to emerge when foundries began to include typefaces with significant structural differences, but some design relationship, under the same general family name. Arguably the first superfamily

3618-439: a separate grapheme added to the dotless "ı". To deal with the use of different graphemes for the same Unihan sememe, Unicode has relied on several mechanisms, especially as they relate to rendering text. One has been to treat it as simply a font issue, so that different fonts might be used to render Chinese, Japanese or Korean. Font formats such as OpenType also allow for the mapping of alternate glyphs according to language, so that

3752-596: A separate single glyph in modern fonts. Since Unicode has assigned 256 separate variation selectors, it is capable of assigning 256 variations for any Han ideograph. Such variations can be specific to one language or another and enable the encoding of plain text that includes such grapheme variations. Since the Unihan standard encodes "abstract characters", not "glyphs", the graphical artifacts produced by Unicode have been considered temporary technical hurdles, and at most, cosmetic. However, again, particularly in Japan, due in part to

3886-654: A shared code point, the reference glyph image is usually biased toward the Traditional Chinese version. Also, the decision of whether to classify pairs as semantic variants or z-variants is not always consistent or clear, despite rationalizations in the handbook. So-called semantic variants of 丟 (U+4E1F) and 丢 (U+4E22) are examples that Unicode gives as differing in a significant way in their abstract shapes, while Unicode lists 佛 and 仏 as z-variants, differing only in font styling. Paradoxically, Unicode considers 兩 and 両 to be near identical z-variants while at

4020-433: A single font, although physical constraints on the reproduction system used still required design changes at different sizes; for example, ink traps and spikes to allow for spread of ink encountered in the printing stage. Manually operated photocomposition systems using fonts on filmstrips allowed fine kerning between letters without the physical effort of manual typesetting, and spawned an enlarged type design industry in

4154-418: a single grapheme while being composed of multiple Unicode abstract characters. In addition, Unicode assigns a small number of code points (other than for compatibility reasons) to formatting characters, whitespace characters, and other abstract characters that are not graphemes, but are instead used to control the breaks between lines, words, graphemes and grapheme clusters. With the unified Han ideographs,

4288-444: A specific size is known as optical sizing . Others will be offered in only one style, but optimised for a specific size. Optical sizes are particularly common for serif fonts, since the fine detail of serif fonts can need to be bulked up for smaller sizes. Typefaces may also be designed differently considering the type of paper on which they will be printed. Designs to be printed on absorbent newsprint paper will be more slender as

4422-518: A specific variant in the case given, only the language-specific font more likely to depict a character as that variant. (At this point, merely stylistic differences do enter in, as a selection of Japanese and Chinese fonts are not likely to be visually compatible.) Chinese users seem to have fewer objections to Han unification, largely because Unicode did not attempt to unify Simplified Chinese characters with Traditional Chinese characters . (Simplified Chinese characters are used among Chinese speakers in

4556-488: A standard feature of so-called monospaced fonts , used in programming and on typewriters. However, many fonts that are not monospaced use tabular figures. More complex font designs may include two or more combinations with one as the default and others as alternate characters. Of the four possibilities, non-lining tabular figures are particularly rare since there is no common use for them. Fonts intended for professional use in documents such as business reports may also make

4690-1067: A symbolic event for the loss of momentum and eventual demise of the BTRON system, which led to the widespread adoption of MS-DOS in Japan and the eventual adoption of Unicode with its successor Windows. There has not been any push for full semantic unification of all semantically linked characters, though the idea would treat the respective users of East Asian languages the same, whether they write in Korean, Simplified Chinese, Traditional Chinese, Kyūjitai Japanese, Shinjitai Japanese or Vietnamese. Instead of some variants getting distinct code points while other groups of variants have to share single code points, all variants could be reliably expressed only with metadata tags (e.g., CSS formatting in webpages). The burden would be on all those who use differing versions of 直 , 別 , 兩 , 兔 , whether that difference be due to simplification, international variance or intra-national variance. However, for some platforms (e.g., smartphones),

4824-568: A text rendering system can look to the user's environmental settings to determine which glyph to use. The problem with these approaches is that they fail to meet the goals of Unicode to define a consistent way of encoding multilingual text. So rather than treat the issue as a rich text problem of glyph alternates, Unicode added the concept of variation selectors , first introduced in version 3.2 and supplemented in version 4.0. While variation selectors are treated as combining characters, they have no associated diacritic or mark. Instead, by combining with

4958-422: A typeface. Typefaces with serifs are often considered easier to read in long passages than those without. Studies on the matter are ambiguous, suggesting that most of this effect is due to the greater familiarity of serif typefaces. As a general rule, printed works such as newspapers and books almost always use serif typefaces, at least for the text body. Websites do not have to specify a font and can simply respect

5092-406: A unique codepoint. As an example, take a character such as 入 (U+5165), for which the only way to display the variants is to change font (or lang attribute) as described in the previous table. On the other hand, for 內 (U+5167), the variant of 内 (U+5185) gets a unique codepoint. For some characters, like 兌 / 兑 (U+514C/U+5151), either method can be used to display the different glyphs. In

5226-407: A user thinks of as a "character" and should not be confused with a grapheme . However, this quote refers to the fact that some graphemes are composed of several graphic elements or "characters". So, for example, the character U+0061 a LATIN SMALL LETTER A combined with U+030A ◌̊ COMBINING RING ABOVE (generating the combination "å") might be understood by a user as
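
A short Python illustration (standard library only) of how one user-perceived character can be composed of two Unicode abstract characters, and of the canonical equivalence between the decomposed and pre-composed forms:

    import unicodedata

    decomposed = "a\u030A"   # U+0061 LATIN SMALL LETTER A + U+030A COMBINING RING ABOVE
    composed = unicodedata.normalize("NFC", decomposed)   # pre-composed å (U+00E5)

    print(len(decomposed), len(composed))                 # 2 code points vs. 1
    print(decomposed == composed)                         # False: different code point sequences
    print(unicodedata.normalize("NFD", composed) == decomposed)   # True: canonically equivalent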

5360-450: A written language do not necessarily map one-to-one. In English the combining diaeresis , "¨", and the "o" it modifies may be seen as two separate graphemes, whereas in languages such as Swedish, the letter "ö" may be seen as a single grapheme. Similarly in English the dot on an "i" is understood as a part of the "i" grapheme whereas in other languages, such as Turkish, the dot may be seen as

5494-681: is also noted that Traditional and Simplified characters should be encoded separately according to Unicode Han Unification rules, because they are distinguished in pre-existing PRC character sets. Furthermore, as with other variants, the mapping from Traditional to Simplified characters is not one-to-one. There are several alternative character sets that are not encoded according to the principle of Han Unification, and are thus free from its restrictions: These region-dependent character sets are also seen as unaffected by Han Unification because of their region-specific nature: However, none of these alternative standards has been as widely adopted as Unicode, which

5628-411: Is exclusive to Korean or Vietnamese has received its own code point, whereas almost all Shinjitai Japanese variants or Simplified Chinese variants each have distinct code points and unambiguous reference glyphs in the Unicode standard. In the twentieth century, East Asian countries made their own respective encoding standards. Within each standard, there coexisted variants with distinct code points, hence

5762-535: Is expressed by distinct graphemes in different languages. Although a grapheme such as "ö" might mean something different in English (as used in the word "coördinated") than it does in German (as used in the word "schön"), it is still the same grapheme and can be easily unified so that English and German can share a common abstract Latin writing system (along with Latin itself). This example also points to another reason that "abstract character" and grapheme as an abstract unit in

5896-457: Is no longer valid, as a single font may be scaled to any size. The first "extended" font families, which included a wide range of widths and weights in the same general style emerged in the early 1900s, starting with ATF 's Cheltenham (1902–1913), with an initial design by Bertram Grosvenor Goodhue, and many additional faces designed by Morris Fuller Benton . Later examples include Futura , Lucida , ITC Officina . Some became superfamilies as

6030-609: is not exhaustive. In order to address issues brought about by Han unification, a Unicode Technical Standard known as the Unicode Ideographic Variation Database has been created to resolve the problem of specifying a specific glyph in a plain-text environment. By registering glyph collections in the Ideographic Variation Database (IVD), it is possible to use Ideographic Variation Selectors to form an Ideographic Variation Sequence (IVS) to specify or restrict
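
A minimal Python sketch of how such a sequence is formed in plain text. The base character and selector below are illustrative only; whether a given sequence selects a registered glyph depends on its IVD registration and on font support:

    # An Ideographic Variation Sequence is a base ideograph followed by one of
    # the ideographic variation selectors VS17..VS256 (U+E0100..U+E01EF).
    base = "\u845B"        # 葛 (U+845B)
    vs17 = "\U000E0100"    # VARIATION SELECTOR-17
    ivs = base + vs17

    print(len(ivs))                                  # 2 code points...
    print(f"U+{ord(ivs[0]):04X} U+{ord(ivs[1]):X}")  # U+845B U+E0100
    # ...but a font that supports the registered sequence renders it as a single glyph.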

6164-579: Is now the base character set for many new standards and protocols, internationally adopted, and is built into the architecture of operating systems ( Microsoft Windows , Apple macOS , and many Unix-like systems), programming languages ( Perl , Python , C# , Java , Common Lisp , APL , C , C++ ), and libraries (IBM International Components for Unicode (ICU) along with the Pango , Graphite , Scribe , Uniscribe , and ATSUI rendering engines), font formats ( TrueType and OpenType ) and so on. In March 1989,

6298-483: Is relevant in structural semiotics . A seme is a proposed unit of transmitted or intended meaning; it is atomic or indivisible. A sememe can be the meaning expressed by a morpheme, such as the English pluralizing morpheme -s , which carries the sememic feature [+ plural]. Alternatively, a single sememe (for example [go] or [move]) can be conceived as the abstract representation of such verbs as skate , roll , jump , slide , turn , or boogie . It can be thought of as

6432-557: is similar to how U+212B Å ANGSTROM SIGN is canonically equivalent to a pre-composed U+00C5 Å LATIN CAPITAL LETTER A WITH RING ABOVE. Much software (such as the MediaWiki software that hosts Wikipedia) will replace all canonically equivalent characters that are discouraged (e.g. the angstrom symbol) with the recommended equivalent. Despite the name, CJK "compatibility variants" are canonically equivalent characters and not compatibility characters. 漢 (U+FA9A)
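
This behaviour can be observed directly with Python's standard unicodedata module (a small sketch; the specific code points are the ones discussed above):

    import unicodedata

    # Canonical equivalence: every normalization form, including NFC, replaces
    # the discouraged ANGSTROM SIGN with the recommended pre-composed Å.
    print(unicodedata.normalize("NFC", "\u212B") == "\u00C5")   # True

    # Despite sitting in a "compatibility" block, U+FA9A is canonically
    # equivalent to the unified ideograph U+6F22, so it too is replaced
    # under NFC/NFD, not only under the compatibility forms NFKC/NFKD.
    print(unicodedata.normalize("NFC", "\uFA9A") == "\u6F22")   # True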

6566-512: Is someone who uses typefaces to design a page layout). Every typeface is a collection of glyphs , each of which represents an individual letter, number, punctuation mark, or other symbol. The same glyph may be used for characters from different writing systems , e.g. Roman uppercase A looks the same as Cyrillic uppercase А and Greek uppercase alpha (Α). There are typefaces tailored for special applications, such as cartography , astrology or mathematics . In professional typography ,

6700-466: Is still used by TeX and its variants. Applications using these font formats, including the rasterizers, appear in Microsoft and Apple Computer operating systems , Adobe Systems products and those of several other companies. Digital fonts are created with font editors such as FontForge , RoboFont, Glyphs, Fontlab 's TypeTool, FontLab Studio, Fontographer, or AsiaFont Studio. Typographers have developed

6834-568: Is that specialized semantic variants have the same meaning only in certain contexts. Languages use them differently. A pair whose characters are 100% drop-in replacements for each other in Japanese may not be so flexible in Chinese. Thus, any comprehensive merger of recommended code points would have to maintain some variants that differ only slightly in appearance even if the meaning is 100% the same for all contexts in one language, because in another language

6968-462: Is the common form in all three countries, while the second and third are used on financial instruments to prevent tampering (they may be considered variants). However, Han unification has also caused considerable controversy, particularly among the Japanese public, who, with the nation's literati, have a history of protesting the culling of historically and culturally significant variants. (See Kanji § Orthographic reform and lists of kanji . Today,

7102-468: Is the name of the class of typefaces used with the earliest printing presses in Europe, which imitated the calligraphy style of that time and place. Various forms exist including textualis , rotunda , schwabacher and fraktur . (Some people refer to Blackletter as " gothic script " or "gothic font", though the term "Gothic" in typography refers to sans serif typefaces. ) Gaelic fonts were first used for

7236-518: Is the smallest abstract unit of meaning in a writing system. Any grapheme has many possible glyph expressions, but all are recognized as the same grapheme by those with reading and writing knowledge of a particular writing system. Although Unicode typically assigns characters to code points to express the graphemes within a system of writing, the Unicode Standard ( section 3.4 D7 ) cautions: An abstract character does not necessarily correspond to what

7370-463: Is the vessel (e.g. the software) that allows you to use a set of characters with a given appearance, whereas a typeface is the actual design of such characters. Therefore, a given typeface, such as Times, may be rendered by different fonts, such as computer font files created by this or that vendor, a set of metal type characters etc. In the metal type era, a font also meant a specific point size, but with digital scalable outline fonts this distinction

7504-453: Is typically a group of related typefaces which vary only in weight, orientation, width , etc., but not design. For example, Times is a typeface family, whereas Times Roman, Times Italic and Times Bold are individual typefaces making up the Times family. Typeface families typically include several typefaces, though some, such as Helvetica , may consist of dozens of fonts. In traditional typography,

7638-410: Is used, a rasterizing routine (in the application software, operating system or printer) renders the character outlines, interpreting the vector instructions to decide which pixels should be black and which ones white. Rasterization is straightforward at high resolutions such as those used by laser printers and in high-end publishing systems. For computer screens , where each individual pixel can mean

7772-502: The Irish language in 1571, and were used regularly for Irish until the early 1960s, though they continue to be used in display type and type for signage. Their use was effectively confined to Ireland, though Gaelic typefaces were designed and produced in France, Belgium, and Italy. Gaelic typefaces make use of insular letterforms, and early fonts made use of a variety of abbreviations deriving from

7906-598: The People's Republic of China , Singapore , and Malaysia . Traditional Chinese characters are used in Hong Kong and Taiwan ( Big5 ) and they are, with some differences, more familiar to Korean and Japanese users.) Unicode is seen as neutral with regards to this politically charged issue, and has encoded Simplified and Traditional Chinese glyphs separately (e.g. the ideograph for "discard" is 丟 U+4E1F for Traditional Chinese Big5 #A5E1 and 丢 U+4E22 for Simplified Chinese GB #2210). It

8040-465: The cap-height , the height of the capital letters. Font size is also commonly measured in millimeters (mm) and q s (a quarter of a millimeter, kyu in romanized Japanese) and inches. Type foundries have cast fonts in lead alloys from the 1450s until the present, although wood served as the material for some large fonts called wood type during the 19th century, particularly in the United States . In

8174-406: The denotation ): The operational definition of synonymy depends on the distinctions between these classes of sememes. For example, the differentiation between what some academics call cognitive synonyms and near-synonyms depends on these differences. A related concept is that of the episememe (as described in the works of Leonard Bloomfield ), which is a unit of meaning corresponding to

8308-408: the "grass" example, happens to appear more typically in another language style. (That is to say, it would be difficult to access "grass" with the four-stroke radical more typical of Traditional Chinese in a Japanese environment, whose fonts would typically depict the three-stroke radical.) Unihan proponents tend to favor markup languages for defining language strings, but this would not ensure the use of

8442-574: The 1890s, the mechanization of typesetting allowed automated casting of fonts on the fly as lines of type in the size and length needed. This was known as continuous casting, and remained profitable and widespread until its demise in the 1970s. The first machine of this type was the Linotype machine , invented by Ottmar Mergenthaler . During a brief transitional period ( c.  1950s –1990s), photographic technology, known as phototypesetting , utilized tiny high-resolution images of individual glyphs on

8576-516: The 1960s and 1970s. By the mid-1970s, all of the major typeface technologies and all their fonts were in use: letterpress; continuous casting machines; phototypositors; computer-controlled phototypesetters; and the earliest digital typesetters – bulky machines with primitive processors and CRT outputs. From the mid-1980s, as digital typography has grown, users have almost universally adopted the American spelling font , which has come to primarily refer to

8710-480: the Japanese position was unclear). Endorsing Unicode's Han unification was a necessary step for the heated ISO 10646/Unicode merger. Much of the controversy surrounding Han unification is based on the distinction between glyphs, as defined in Unicode, and the related but distinct idea of graphemes. Unicode assigns abstract characters (graphemes), as opposed to glyphs, which are particular visual representations of

8844-506: The Shinjitai version and 海 (U+FA45) as the Kyūjitai version (which is identical to the traditional version in written Chinese and Korean). The radical 糸 (U+7CF8) is used in characters like 紅 / 红 , with two variants, the second form being simply the cursive form. The radical components of 紅 (U+7D05) and 红 (U+7EA2) are semantically identical and the glyphs differ only in the latter using

8978-409: The Unicode Standard makes a departure from prior practices in assigning abstract characters not as graphemes, but according to the underlying meaning of the grapheme: what linguists sometimes call sememes . This departure therefore is not simply explained by the oft quoted distinction between an abstract character and a glyph, but is more rooted in the difference between an abstract character assigned as

9112-413: The Unicode Standard." This would make it seem that the goal is to at least unify all minor variants, compatibility redundancies and accidental redundancies, leaving the differentiation to fonts and to language tags. This conflicts with the stated goal of Unicode to take away that overhead, and to allow any number of any of the world's scripts to be on the same document with one encoding system. Chapter One of

9246-444: The added compatibility character lists the already present version of 車 as both its compatibility variant and its z-variant. The compatibility variant field overrides the z-variant field, forcing normalization under all forms, including canonical equivalence. Despite the name, compatibility variants are actually canonically equivalent and are united in any Unicode normalization scheme and not only under compatibility normalization. This

9380-516: the adoption of the TRON-based system by the Japanese government would be advantageous to Japanese manufacturers, thus excluding US operating systems from the huge new market; specifically, the report lists MS-DOS, OS/2 and UNIX as examples. The Office of the USTR was allegedly under Microsoft's influence, as its former officer Tom Robertson was then offered a lucrative position by Microsoft. While the TRON system itself

9514-401: the appropriate glyph in text processing in a Unicode environment.

Typeface

A typeface (or font family) is a design of letters, numbers and other symbols, to be used in printing or for electronic display. Most typefaces include variations in size (e.g., 24 point), weight (e.g., light, bold), slope (e.g., italic), width (e.g., condensed), and so on. Each of these variations of

9648-451: The baseline and the top of the glyph that reaches farthest from the baseline. The ascent and descent may or may not include distance added by accents or diacritical marks. In the Latin , Greek and Cyrillic (sometimes collectively referred to as LGC) scripts, one can refer to the distance from the baseline to the top of regular lowercase glyphs ( mean line ) as the x-height , and the part of

9782-449: The bold-style tabular figures take up the same width as the regular (non-bold) numbers, so a bold-style total would appear just as wide as the same sum in regular style. Because an abundance of typefaces has been created over the centuries, they are commonly categorized according to their appearance. At the highest level (in the context of Latin-script fonts), one can differentiate Roman, Blackletter, and Gaelic types. Roman types are in

9916-399: The browser settings of the user. But of those web sites that do specify a font, most use modern sans serif fonts, because it is commonly believed that, in contrast to the case for printed material, sans serif fonts are easier than serif fonts to read on the low-resolution computer screen. A proportional typeface, also called variable-width typeface, contains glyphs of varying widths, while

10050-415: The capacity to encode all characters used for the written languages of the world – more than 1 million characters can be encoded. No escape sequence or control code is required to specify any character in any language. The Unicode character encoding treats alphabetic characters, ideographic characters, and symbols equivalently, which means they can be used in any mixture and with equal facility." This leaves

10184-540: The capital Latin letter A is not unified with the Greek letter Α or the Cyrillic letter А . This is, of course, desirable for reasons of compatibility, and deals with a much smaller alphabetic character set. While the unification aspect of Unicode is controversial in some quarters for the reasons given above, Unicode itself does now encode a vast number of seldom-used characters of a more-or-less antiquarian nature. Some of

10318-468: The characters i, t, l, and 1) use less space than the average. In the publishing industry, it was once the case that editors read manuscripts in monospaced fonts (typically Courier ) for ease of editing and word count estimates, and it was considered discourteous to submit a manuscript in a proportional font. This has become less universal in recent years, such that authors need to check with editors as to their preference, though monospaced fonts are still

10452-602: The characters which were missing on either Macintosh or Windows computers, e.g. fractions, ligatures or some accented glyphs. The goal was to deliver the whole character set to the customer regardless of which operating system was used. The size of typefaces and fonts is traditionally measured in points ; point has been defined differently at different times, but now the most popular is the Desktop Publishing point of 1 ⁄ 72  in (0.0139 in or 0.35 mm). When specified in typographic sizes (points, kyus),

10586-540: The controversy stems from the fact that the very decision of performing Han unification was made by the initial Unicode Consortium, which at the time was a consortium of North American companies and organizations (most of them in California), but included no East Asian government representatives. The initial design goal was to create a 16-bit standard, and Han unification was therefore a critical step for avoiding tens of thousands of character duplications. This 16-bit requirement
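
As a concrete illustration of why the 16-bit design goal mattered (a sketch; the sample character is one of the supplementary-plane CJK ideographs mentioned elsewhere in this article), a Han ideograph outside the Basic Multilingual Plane requires a surrogate pair when encoded in UTF-16:

    # U+27EAF lies in CJK Extension B, beyond the 16-bit range (U+0000..U+FFFF),
    # so UTF-16 must represent it with two 16-bit code units (a surrogate pair).
    ch = "\U00027EAF"
    utf16 = ch.encode("utf-16-be")

    print(len(utf16) // 2)                          # 2 code units
    print(" ".join(f"{b:02X}" for b in utf16))      # D8 5F DE AF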

10720-410: The desired glyph in a specific typeface in order to convey the text as written, defeating the purpose of a unified character set. Unicode has responded to these needs by assigning variation selectors so that authors can select grapheme variations of particular ideographs (or even other characters). Small differences in graphical representation are also problematic when they affect legibility or belong to

10854-524: The difference between legible and illegible characters, some digital fonts use hinting algorithms to make readable bitmaps at small sizes. Digital fonts may also contain data representing the metrics used for composition, including kerning pairs, component creation data for accented characters, glyph substitution rules for Arabic typography and for connecting script faces, and for simple everyday ligatures like "fl". Common font formats include TrueType , OpenType and PostScript Type 1 , while Metafont

10988-456: The distinct code points in Unicode for certain sets of variants. Taking Simplified Chinese as an example, the two character variants of 內 (U+5167) and 内 (U+5185) differ in exactly the same way as do the Korean and non-Korean variants of 全 (U+5168). Each respective variant of the first character has either 入 (U+5165) or 人 (U+4EBA). Each respective variant of the second character has either 入 (U+5165) or 人 (U+4EBA). Both variants of

11122-429: The domestic bodies view the characters themselves. China went through a process in the twentieth century that changed (if not simplified) several characters. During this transition, there was a need to be able to encode both variants within the same document. Korean has always used the variant of 全 with the 入 (U+5165) radical on top. Therefore, it had no reason to encode both variants. Korean language documents made in

11256-434: The early nineteenth century. The earliest known slab serif font was first shown around 1817 by the English typefounder Vincent Figgins . Roman , italic , and oblique are also terms used to differentiate between upright and two possible slanted forms of a typeface. Italic and oblique fonts are similar (indeed, oblique fonts are often simply called italics) but there is strictly a difference: italic applies to fonts where

11390-583: The entry for 龜 does not list 亀 as a z-variant, even though 龜 was obviously already in the database at the time that the entry for 亀 was written. Some clerical errors led to doubling of completely identical characters such as 﨣 (U+FA23) and 𧺯 (U+27EAF). If a font has glyphs encoded to both points so that one font is used for both, they should appear identical. These cases are listed as z-variants despite having no variance at all. Intentionally duplicated characters were added to facilitate bit-for-bit round-trip conversion . Because round-trip conversion

11524-537: The features at the ends of their strokes. Times New Roman and Garamond are common examples of serif typefaces. Serif fonts are probably the most used class in printed materials, including most books, newspapers and magazines. Serif fonts are often classified into three subcategories: Old Style , Transitional , and Didone (or Modern), representative examples of which are Garamond , Baskerville , and Bodoni respectively. Old Style typefaces are influenced by early Italian lettering design. Modern fonts often exhibit

11658-444: The first character got their own distinct code points. However, the two variants of the second character had to share the same code point. The justification Unicode gives is that the national standards body in the PRC made distinct code points for the two variations of the first character 內 / 内 , whereas Korea never made separate code points for the different variants of 全 . There is a reason for this that has nothing to do with how

11792-399: The following table, each row compares variants that have been assigned different code points. For brevity, note that shinjitai variants with different components will usually (and unsurprisingly) take unique codepoints (e.g., 氣/気 ). They will not appear here nor will the simplified Chinese characters that take consistently simplified radical components (e.g., 紅 / 红 , 語 / 语 ). This list

11926-503: the glyphs found in brush calligraphy during the Tang dynasty. These later evolved into the Song style (宋体字), which used thick vertical strokes and thin horizontal strokes in wood block printing.

Sememe

A sememe (/ˈsɛmiːm/; from Ancient Greek σημαίνω (sēmaínō) 'mean, signify') is a semantic language unit of meaning, analogous to a morpheme. The concept

12060-433: The grapheme has glyph variations that are usually determined by selecting one font or another or using glyph substitution features where multiple glyphs are included in a single font. Such glyph variations are considered by Unicode a feature of rich text protocols and not properly handled by the plain text goals of Unicode. However, when the change from one glyph to another constitutes a change from one grapheme to another—where

12194-480: The handbook states that "With Unicode, the information technology industry has replaced proliferating character sets with data stability, global interoperability and data interchange, simplified software, and reduced development costs. While taking the ASCII character set as its starting point, the Unicode Standard goes far beyond ASCII's limited ability to encode only the upper- and lowercase letters A through Z. It provides

12328-569: The height of an em-square , an invisible box which is typically a bit larger than the distance from the tallest ascender to the lowest descender , is scaled to equal the specified size. For example, when setting Helvetica at 12 point, the em square defined in the Helvetica font is scaled to 12 points or 1 ⁄ 6  in or 4.2 mm. Yet no particular element of 12-point Helvetica need measure exactly 12 points. Frequently measurement in non-typographic units (feet, inches, meters) will be of
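
The arithmetic behind these figures, as a small Python sketch (assuming the desktop-publishing point of exactly 1/72 inch and 25.4 mm per inch):

    # Desktop-publishing point: 1 pt = 1/72 inch; 1 inch = 25.4 mm.
    def points_to_mm(pt: float) -> float:
        return pt / 72 * 25.4

    print(round(points_to_mm(1), 3))     # 0.353 -> the "0.35 mm" point size above
    print(round(points_to_mm(12), 1))    # 4.2   -> the 12-point em square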

12462-427: The ink will naturally spread out as it absorbs into the paper, and may feature ink traps : areas left blank into which the ink will soak as it dries. These corrections will not be needed for printing on high-gloss cardboard or display on-screen. Fonts designed for low-resolution displays, meanwhile, may avoid pure circles, fine lines and details a screen cannot render. Most typefaces, especially modern designs, include

12596-421: The language tagging strategy. There is no universal tag for the traditional and "simplified" versions of Japanese as there are for Chinese. Thus, any Japanese writer wanting to display the Kyūjitai form of 海 may have to tag the character as "Traditional Chinese" or trust that the recipient's Japanese font uses only the Kyūjitai glyphs, but tags of Traditional Chinese and Simplified Chinese may be necessary to show

12730-403: The letter forms are redesigned, not just slanted. Almost all serif faces have italic forms; some sans-serif faces have oblique designs. (Most faces do not offer both as this is an artistic choice by the font designer about how the slanted form should look.) Sans serif (lit. without serif) designs appeared relatively recently in the history of type design. The first, similar to slab serif designs,

12864-581: the list of characters officially recognized for use in proper names continues to expand at a modest pace.) In 1993, the Japan Electronic Industries Development Association (JEIDA) published a pamphlet titled "未来の文字コード体系に私達は不安をもっています" ("We Are Anxious About the Future Character Encoding System", JPNO 20985671), summarizing major criticism of the Han Unification approach adopted by Unicode. A grapheme

12998-433: The manuscript tradition. Various forms exist, including manuscript, traditional, and modern styles, chiefly distinguished as having angular or uncial features. Monospaced fonts are typefaces in which every glyph is the same width (as opposed to variable-width fonts, where the w and m are wider than most letters, and the i is narrower). The first monospaced typefaces were designed for typewriters, which could only move

13132-400: The metal type era, all type was cut in metal and could only be printed at a specific size. It was a natural process to vary a design at different sizes, making it chunkier and clearer to read at smaller sizes. Many digital typefaces are offered with a range of fonts (or a variable font axis) for different sizes, especially designs sold for professional design use. The art of designing fonts for

13266-750: The most widespread use today, and are sub-classified as serif, sans serif, ornamental, and script types. Historically, the first European fonts were blackletter, followed by Roman serif, then sans serif and then the other types. The use of Gaelic faces was restricted to the Irish language, though these form a unique if minority class. Typefaces may be monospaced regardless of whether they are Roman, Blackletter, or Gaelic. Symbol typefaces are non-alphabetic. The Cyrillic script comes in two varieties, Roman-appearance type (called гражданский шрифт graždanskij šrift ) and traditional Slavonic type (called славянский шрифт slavjanskij šrift ). Serif, or Roman , typefaces are named for

13400-410: The norm. Most scripts share the notion of a baseline : an imaginary horizontal line on which characters rest. In some scripts, parts of glyphs lie below the baseline. The descent spans the distance between the baseline and the lowest descending glyph in a typeface, and the part of a glyph that descends below the baseline has the name descender . Conversely, the ascent spans the distance between

13534-494: The numbers to blend into the text more effectively. As tabular spacing makes all numbers with the same number of digits the same width, it is used for typesetting documents such as price lists, stock listings and sums in mathematics textbooks, all of which require columns of numeric figures to line up on top of each other for easier comparison. Tabular spacing is also a common feature of simple printing devices such as cash registers and date-stamps. Characters of uniform width are

13668-401: The option to settle on one unified reference grapheme for all z-variants, which is contentious since few outside of Japan would recognize 佛 and 仏 as equivalent. Even within Japan, the variants are on different sides of a major simplification called Shinjitai. Unicode would effectively make the PRC's simplification of 侣 (U+4FA3) and 侶 (U+4FB6) a monumental difference by comparison. Such

13802-470: The original standards, as well as accidental mergers that are later corrected, providing precedent for dis-unifying characters. For native speakers, variants can be unintelligible or be unacceptable in educated contexts. English speakers may understand a handwritten note saying "4P5 kg" as "495 kg", but writing the nine backwards (so it looks like a "P") can be jarring and would be considered incorrect in any school. Likewise, to users of one CJK language reading

13936-489: The principles of Han unification. The Ideographic Research Group (IRG), made up of experts from the Chinese-speaking countries, North and South Korea, Japan, Vietnam, and other countries, is responsible for the process. One rationale was the desire to limit the size of the full Unicode character set, where CJK characters as represented by discrete ideograms may approach or exceed 100,000 characters. Version 1 of Unicode

14070-530: the radical 艸 (U+8278) proves how arbitrary the state of affairs is. When used to compose characters like 草 (U+8349), the radical is placed at the top, but it has two different forms. Traditional Chinese and Korean use a four-stroke version: the top of 草 should look like two plus signs (⺿). Simplified Chinese, Kyūjitai Japanese and Shinjitai Japanese use a three-stroke version, like two plus signs sharing their horizontal strokes (⺾, i.e. 草). The PRC's text encoding bodies did not encode

14204-401: The range of typeface designs increased and requirements of publishers broadened over the centuries, fonts of specific weight (blackness or lightness) and stylistic variants (most commonly regular or roman as distinct from italic , as well as condensed ) have led to font families , collections of closely related typeface designs that can include hundreds of styles. A typeface family

14338-427: The regular fonts under the same family. However, with introduction of font formats such as OpenType , those supplemental glyphs were merged into the main fonts, relying on specific software capabilities to access the alternate glyphs. Since Apple's and Microsoft's operating systems supported different character sets in the platform related fonts, some foundries used expert fonts in a different way. These fonts included

14472-806: The same distance forward with each letter typed. Their use continued with early computers, which could only display a single font. Although modern computers can display any desired typeface, monospaced fonts are still important for computer programming , terminal emulation, and for laying out tabulated data in plain text documents; they may also be particularly legible at small sizes due to all characters being quite wide. Examples of monospaced typefaces are Courier , Prestige Elite , Fixedsys , and Monaco . Most monospaced fonts are sans-serif or slab-serif as these designs are easiest to read printed small or display on low-resolution screens, though many exceptions exist. CJK, or Chinese, Japanese and Korean typefaces consist of large sets of glyphs. These typefaces originate in

14606-399: The same number of characters in each line in a monospaced typeface should display as equal in width, while the same two lines in a proportional typeface may have radically different widths. This occurs because in a proportional font, glyph widths vary, such that wider glyphs (typically those for characters such as W, Q, Z, M, D, O, H, and U) use more space, and narrower glyphs (such as those for

14740-575: The same reason, GUI computer applications (such as word processors and web browsers ) typically use proportional fonts. However, many proportional fonts contain fixed-width ( tabular ) numerals so that columns of numbers stay aligned. Monospaced typefaces function better for some purposes because their glyphs line up in neat, regular columns. No glyph is given any more weight than another. Most manually operated typewriters use monospaced fonts. So do text-only computer displays and third- and fourth-generation game console graphics processors, which treat

14874-405: The same time classifying them as significantly different semantic variants. There are also cases of some pairs of characters being simultaneously semantic variants and specialized semantic variants and simplified variants: 個 (U+500B) and 个 (U+4E2A). There are cases of non-mutual equivalence. For example, the Unihan database entry for 亀 (U+4E80) considers 龜 (U+9F9C) to be its z-variant, but

15008-517: The screen as a uniform grid of character cells. Most computer programs which have a text-based interface ( terminal emulators , for example) use only monospaced fonts (or add additional spacing to proportional fonts to fit them in monospaced cells) in their configuration. Monospaced fonts are commonly used by computer programmers for displaying and editing source code so that certain characters (for example parentheses used to group arithmetic expressions) are easy to see. ASCII art usually requires

15142-445: The semantic counterpart to any of the following: a meme in a culture, a gene in a genome, or an atom (or, more generally, an elementary particle ) in a substance. A seme is the name for the smallest unit of meaning recognized in semantics , referring to a single characteristic of a sememe. There are five types of sememes: two denotational and three connotational , the latter occurring only in phrase units (they do not reflect

15276-504: The simplified Chinese, Japanese, and Korean glyphs [ ⺾ ] use three. But there is only one Unicode point for the grass character (U+8349) [ 草 ] regardless of writing system. Another example is the ideograph for "one," which is different in Chinese, Japanese, and Korean. Many people think that the three versions should be encoded differently. In fact, the three ideographs for "one" ( 一 , 壹 , or 壱 ) are encoded separately in Unicode, as they are not considered national variants. The first

15410-547: The small features at the end of strokes within letters. The printing industry refers to typeface without serifs as sans serif (from French sans , meaning without ), or as grotesque (or, in German , grotesk ). Great variety exists among both serif and sans serif typefaces. Both groups contain faces designed for setting large amounts of body text, and others intended primarily as decorative. The presence or absence of serifs represents only one of many factors to consider when choosing

15544-525: The standard character sets in Simplified Chinese, Traditional Chinese, Korean, Vietnamese, Kyūjitai Japanese and Shinjitai Japanese, there also exist "ancient" forms of characters that are of interest to historians, linguists and philologists. Unicode's Unihan database has already drawn connections between many characters. The Unicode database catalogs the connections between variant characters with distinct code points already. However, for characters with

15678-411: The style of running text. They are also called lower-case numbers or text figures for the same reason. The horizontal spacing of digits can also be proportional , with a character width tightly matching the width of the figure itself, or tabular , where all digits have the same width. Proportional spacing places the digits closely together, reducing empty space in a document, and is thought to allow

15812-445: The technological limitations under which they evolved, and so the official CJK participants in Han unification may well have been amenable to reform. Unlike European versions, CJK Unicode fonts, due to Han unification, have large but irregular patterns of overlap, requiring language-specific fonts. Unfortunately, language-specific fonts also make it difficult to access a variant which, as with

15946-522: The term typeface is not interchangeable with the word font (originally "fount" in British English, and pronounced "font"), because the term font has historically been defined as a given alphabet and its associated characters in a single size. For example, 8-point Caslon Italic was one font, and 10-point Caslon Italic was another. Historically, a font came from a type foundry as a set of " sorts ", with number of copies of each character included. As

16080-736: the twentieth century had little reason to represent both versions in the same document. Almost all of the variants that the PRC developed or standardized got distinct code points, owing simply to the fortune of the Simplified Chinese transition carrying through into the computing age. This privilege, however, seems to have been applied inconsistently: while most simplifications performed in Japan and mainland China with code points in national standards, including characters simplified differently in each country, did make it into Unicode as distinct code points, sixty-two Shinjitai "simplified" characters with distinct code points in Japan were merged with their Kyūjitai traditional equivalents, like 海. This can cause problems for

the two characters may not be 100% drop-in replacements. In each row of the following table, the same character is repeated in all six columns. However, each column is marked (by the lang attribute) as being in a different language: Chinese (simplified and two types of traditional), Japanese, Korean, or Vietnamese. The browser should select, for each character, a glyph (from a font) appropriate to the specified language.
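
The markup being described might look like the following sketch, which generates one such table row; it is not from the original article, and the specific language tags used (zh-Hans, zh-Hant, zh-Hant-HK, ja, ko, vi) are illustrative assumptions:

    # Build one table row that repeats the same code point under different
    # BCP 47 language tags, so a browser can pick a language-appropriate
    # glyph (in practice, a language-appropriate font) for each cell.
    LANG_TAGS = ["zh-Hans", "zh-Hant", "zh-Hant-HK", "ja", "ko", "vi"]

    def table_row(char: str) -> str:
        cells = "".join(f'<td lang="{tag}">{char}</td>' for tag in LANG_TAGS)
        return f"<tr><th>U+{ord(char):04X}</th>{cells}</tr>"

    print(table_row("直"))
    print(table_row("海"))

Whether the cells actually look different depends entirely on the fonts installed and on the renderer honoring the language tag; the code points themselves are identical in every column.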

the two forms side by side in a Japanese textbook. This would preclude using the same font for an entire document, however. There are two distinct code points for 海 in Unicode, but only for "compatibility reasons". Any Unicode-conformant font must display the two equivalent code points, Kyūjitai and Shinjitai, as the same character. Unofficially, a font may render them differently, with 海 (U+6D77) as the Shinjitai form and the compatibility code point 海 (U+FA45) as the Kyūjitai form.
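
The practical consequence can be demonstrated with Unicode normalization: because the second code point exists only for compatibility, every normalization form folds it back into U+6D77, so a Kyūjitai/Shinjitai distinction encoded this way does not survive normalization. A minimal Python sketch (not from the article):

    import unicodedata

    compat  = "\uFA45"  # compatibility code point for 海
    unified = "\u6D77"  # unified code point for 海

    # The compatibility ideograph is canonically equivalent to the unified one,
    # so all four normalization forms replace it.
    for form in ("NFC", "NFD", "NFKC", "NFKD"):
        assert unicodedata.normalize(form, compat) == unified
    print("U+FA45 normalizes to U+6D77 under NFC, NFD, NFKC and NFKD")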

the two variants differently. The fact that almost every other change brought about by the PRC, no matter how minor, did warrant its own code point suggests that this exception may have been unintentional. Unicode copied the existing standards as is, preserving such irregularities. The Unicode Consortium has recognized errors in other instances. The myriad Unicode blocks for CJK Han Ideographs have redundancies in original standards, redundancies brought about by flawed importation of

the visual representations of the characters. There are four basic traditions for East Asian character shapes: traditional Chinese, simplified Chinese, Japanese, and Korean. While the Han root character may be the same for CJK languages, the glyphs in common use for the same characters may not be. For example, the traditional Chinese glyph for "grass" uses four strokes for the "grass" radical [⺿], whereas

the way in which Chinese characters were incorporated into Japanese writing systems historically, the inability to specify a particular variant was considered a significant obstacle to the use of Unicode in scholarly work. For example, the unification of "grass" (explained above) means that a historical text cannot be encoded so as to preserve its peculiar orthography. Instead, the scholar would be required to locate

the wrong cultural tradition. Besides making some Unicode fonts unusable for texts involving multiple "Unihan languages", names or other orthographically sensitive terminology might be displayed incorrectly. (Proper names tend to be especially orthographically conservative; compare this to changing the spelling of one's name to suit a language reform in the US or UK.) While this may be considered primarily

was added to the database later than 漢 (U+6F22) was, and its entry informs the user of the compatibility information. On the other hand, 漢 (U+6F22) does not have this equivalence listed in its entry. Unicode demands that entries, once admitted, cannot change their compatibility or equivalence, so that normalization rules for already existing characters do not change. Some pairs of Traditional and Simplified characters are also considered to be semantic variants. According to Unicode's definitions, it makes sense that all simplifications (that do not result in wholly different characters being merged for their homophony) will be a form of semantic variant.
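
These relationships can be read straight out of the Unihan database files. The following Python sketch is not from the article and assumes a local copy of Unihan_Variants.txt from the Unicode Character Database (distributed in Unihan.zip); each data line there holds three tab-separated fields: code point, field name, and value(s).

    from collections import defaultdict

    def load_variants(path="Unihan_Variants.txt"):
        """Map each character to the variant relations recorded for it."""
        variants = defaultdict(list)
        with open(path, encoding="utf-8") as f:
            for line in f:
                line = line.strip()
                if not line or line.startswith("#"):
                    continue  # skip comments and blank lines
                cp, field, value = line.split("\t", 2)
                char = chr(int(cp[2:], 16))  # "U+6F22" -> 漢
                # Values may carry "<source" annotations; keep only code points.
                targets = [v.split("<")[0] for v in value.split()]
                variants[char].append((field, targets))
        return variants

    # Example (assuming the file is present): inspect what is recorded for 漢.
    # print(load_variants()["漢"])

Fields such as kSemanticVariant, kSimplifiedVariant, kTraditionalVariant and kZVariant are where the connections discussed above are catalogued.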

was an early selling point of Unicode, this meant that if a national standard in use unnecessarily duplicated a character, Unicode had to do the same. Unicode calls these intentional duplications "compatibility variants", as with 漢 (U+FA9A), which calls 漢 (U+6F22) its compatibility variant. As long as an application uses the same font for both, they should appear identical. Sometimes, as in the case of 車 with U+8ECA and U+F902,
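
In the Unicode Character Database this intentional duplication is recorded as a singleton canonical decomposition from the compatibility code point back to the unified one, which Python's unicodedata module exposes directly (a sketch, not part of the original article):

    import unicodedata

    # Each compatibility ideograph decomposes to its unified counterpart.
    for compat in ("\uFA9A", "\uF902"):  # the 漢 and 車 duplicates named above
        print(f"U+{ord(compat):04X} -> U+{unicodedata.decomposition(compat)}")
    # Expected output: U+FA9A -> U+6F22 and U+F902 -> U+8ECA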

was designed to fit into 16 bits, and only 20,940 characters (32%) out of the possible 65,536 were reserved for these CJK Unified Ideographs. Unicode was later extended to 21 bits, allowing many more CJK characters (97,680 are assigned, with room for more). An article hosted by IBM attempts to illustrate part of the motivation for Han unification: The problem stems from the fact that Unicode encodes characters rather than "glyphs," which are

was later abandoned, making the size of the character set less of an issue today. The controversy later extended to the internationally representative ISO: the initial CJK Joint Research Group (CJK-JRG) favored a proposal (DIS 10646) for a non-unified character set, "which was thrown out in favor of unification with the Unicode Consortium's unified character set by the votes of American and European ISO members" (even though

was subsequently removed from the list of sanctions under Section 301 of the Trade Act of 1974 after protests by the organization in May 1989, the trade dispute caused the Ministry of International Trade and Industry to accept a request from Masayoshi Son to cancel the Center for Educational Computing's selection of the TRON-based system for the use of educational computers. The incident is regarded as a symbolic event in the loss of momentum and eventual demise of the BTRON system.
