Latin Extended-A is a Unicode block and the third block of the Unicode standard. It encodes Latin letters from the Latin ISO character sets other than Latin-1 (which is already encoded in the Latin-1 Supplement block), as well as legacy characters from the ISO 6937 standard.
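The block spans the 128 code points U+0100 through U+017F. As a quick illustration of its contents, here is a minimal Python sketch using the standard unicodedata module (the range bounds are the block's published limits):

```python
import unicodedata

# Latin Extended-A spans the 128 code points U+0100..U+017F.
for cp in range(0x0100, 0x0180):
    ch = chr(cp)
    print(f"U+{cp:04X}  {ch}  {unicodedata.name(ch)}")
# U+0100  Ā  LATIN CAPITAL LETTER A WITH MACRON
# ...
# U+017F  ſ  LATIN SMALL LETTER LONG S
```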
The Latin Extended-A block has been in the Unicode Standard since version 1.0, with its entire character repertoire, except for the Latin Small Letter Long S, which was added during unification with ISO 10646 in version 1.1. Its block name in Unicode 1.0 was European Latin. The Latin Extended-A block contains only two subheadings: European Latin and Deprecated letter. The European Latin subheading contains all but one character in the Latin Extended-A block.
The size of a block is always a multiple of 16, and is often a multiple of 128, but is otherwise arbitrary. Characters required for a given script may be spread out over several different, potentially disjunct blocks within the codespace. Each code point is assigned a classification, listed as the code point's General Category property. Here, at the uppermost level, code points are categorized as one of Letter, Mark, Number, Punctuation, Symbol, Separator, or Other. Under each category, each code point is then further subcategorized.
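The two-letter General Category codes can be queried directly; a minimal Python sketch using the standard unicodedata module:

```python
import unicodedata

# The uppermost level of the General Category: L (Letter), M (Mark),
# N (Number), P (Punctuation), S (Symbol), Z (Separator), C (Other).
for ch in ["A", "5", "\u00BC", "+", " ", "\u0301"]:
    print(f"U+{ord(ch):04X}  category={unicodedata.category(ch)}")
# 'A'    -> 'Lu' (Letter, uppercase)
# '5'    -> 'Nd' (Number, decimal digit)
# U+00BC -> 'No' (Number, other: the vulgar fraction 1/4)
# '+'    -> 'Sm' (Symbol, math)
# ' '    -> 'Zs' (Separator, space)
# U+0301 -> 'Mn' (Mark, nonspacing: COMBINING ACUTE ACCENT)
```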
For instance, in April 2020, a month after version 13.0 was published, the Unicode Consortium announced they had changed the intended release date for version 14.0, pushing it back six months to September 2021 due to the COVID-19 pandemic. Unicode 16.0, the latest version, was released on 10 September 2024. It added 5,185 characters and seven new scripts: Garay, Gurung Khema, Kirat Rai, Ol Onal, Sunuwar, Todhri, and Tulu-Tigalari. Thus far, a total of 168 scripts are included in the latest version of Unicode (covering alphabets, abugidas and syllabaries), although there are still scripts that are not yet encoded, particularly those mainly used in historical, liturgical, and academic contexts.
Previously, The Unicode Standard was sold as a print volume containing the complete core specification, standard annexes, and code charts. However, version 5.0, published in 2006, was the last version printed this way. Starting with version 5.2, only the core specification, published as a print-on-demand paperback, may be purchased.
The Unicode Bulldog Award is given to people deemed to be influential in Unicode's development, with recipients including Tatsuo Kobayashi, Thomas Milo, Roozbeh Pournader, Ken Lunde, and Michael Everson. The origins of Unicode can be traced back to the 1980s, to a group of individuals with connections to Xerox's Character Code Standard (XCCS).
The philosophy that underpins Unicode seeks to encode the underlying characters—graphemes and grapheme-like units—rather than graphical distinctions considered mere variant glyphs thereof, that are instead best handled by the typeface, through the use of markup, or by some other means. In particularly complex cases, such as the treatment of orthographical variants in Han characters, there is considerable disagreement regarding which differences justify their own encodings, and which are only graphical variants of other characters.
In principle, these code points cannot otherwise be used, though in practice this rule is often ignored, especially when not using UTF-16. A small set of code points are guaranteed never to be assigned to characters, although third parties may make independent use of them at their discretion. There are 66 of these noncharacters: U+FDD0–U+FDEF and the last two code points in each of the 17 planes (e.g. U+FFFE, U+FFFF, U+1FFFE, U+1FFFF, ..., U+10FFFE, U+10FFFF).
Moreover, the widespread adoption of Unicode was in large part responsible for the initial popularization of emoji outside of Japan. Unicode is ultimately capable of encoding more than 1.1 million characters. Unicode has largely supplanted the previous environment of a myriad of incompatible character sets, each used within different locales and on different computer architectures. Unicode is used to encode the vast majority of text on the Internet, including most web pages, and relevant Unicode support has become a common consideration in contemporary software development.
The project has become a major source of proposed additions to the standard in recent years. The Unicode Consortium together with the ISO have developed a shared repertoire following the initial publication of The Unicode Standard: Unicode and the ISO's Universal Coded Character Set (UCS) use identical character names and code points.
This design decision was made based on the assumption that only scripts and characters in "modern" use would require encoding: Unicode gives higher priority to ensuring utility for the future than to preserving past antiquities. Unicode aims in the first instance at the characters published in the modern text (e.g. in the union of all newspapers and magazines printed in the world in 1988), whose number is undoubtedly far below 2¹⁴ = 16,384.
It can handle some combining marks by simple overstriking methods, but cannot display Hebrew (bidirectional), Devanagari (one character to many glyphs) or Arabic (both features). Most GUI applications use standard OS text drawing routines which handle such scripts, although the applications themselves still do not always handle them correctly. ISO/IEC 10646, a general, informal citation for the ISO/IEC 10646 family of standards, is acceptable in most prose.
Unicode, formally The Unicode Standard, is a text encoding standard maintained by the Unicode Consortium designed to support the use of text in all of the world's writing systems that can be digitized. Version 16.0 of the standard defines 154,998 characters and 168 scripts used in various ordinary, literary, academic, and technical contexts.
Further additions of characters to the already encoded scripts, as well as symbols, in particular for mathematics and music (in the form of notes and rhythmic symbols), also occur. The Unicode Roadmap Committee (Michael Everson, Rick McGowan, Ken Whistler, V.S. Umamaheswaran) maintains the list of scripts that are candidates or potential candidates for encoding and their tentative code block assignments on the Unicode Roadmap page of the Unicode Consortium website.
The software companies refused to accept the complexity and size requirement of the ISO standard and were able to convince a number of ISO National Bodies to vote against it. ISO officials realised they could not continue to support the standard in its current state and negotiated the unification of their standard with Unicode.
Version 1.0 of Microsoft's TrueType specification, published in 1992, used the name "Apple Unicode" instead of "Unicode" for the Platform ID in the naming table. The Unicode Consortium is a nonprofit organization that coordinates Unicode's development. Full members include most of the main computer software and hardware companies (and a few others) with any interest in text-processing standards, including Adobe, Apple, Google, IBM, Meta (previously as Facebook), Microsoft, Netflix, and SAP. Over the years several countries or government agencies have been members of the Unicode Consortium.
The UCS has over 1.1 million possible code points available for use/allocation, but only the first 65,536, which is the Basic Multilingual Plane (BMP), had entered into common use before 2000.
In this document, entitled Unicode 88, Becker outlined a scheme using 16-bit characters: Unicode is intended to address the need for a workable, reliable world text encoding. Unicode could be roughly described as "wide-body ASCII" that has been stretched to 16 bits to encompass the characters of all the world's living languages. In a properly engineered design, 16 bits per character are more than sufficient for this purpose.
To support these rules and algorithms, Unicode adds many properties to each character in the set such as properties determining a character's default bidirectional class and properties to determine how the character combines with other characters. If the character represents a numeric value such as the European number '8', or the vulgar fraction '¼', that numeric value is also added as a property of the character.
There are a total of 2²⁰ + (2¹⁶ − 2¹¹) = 1,112,064 valid code points within the codespace. (This number arises from the limitations of the UTF-16 character encoding, which can encode the 2¹⁶ code points in the range U+0000 through U+FFFF except for the 2¹¹ code points in the range U+D800 through U+DFFF, which are used as surrogate pairs to encode the 2²⁰ code points in the range U+10000 through U+10FFFF.)
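The arithmetic is easy to verify; a minimal Python check:

```python
# Valid code points: all of U+0000..U+FFFF except the 2**11 surrogates,
# plus the 2**20 supplementary code points reachable via surrogate pairs.
bmp = 2**16 - 2**11        # 63,488 scalar values in the BMP
supplementary = 2**20      # 1,048,576 code points in planes 1 through 16
assert bmp + supplementary == 1_112_064
```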
The Unicode Standard defines a codespace: a sequence of integers called code points in the range from 0 to 1,114,111, notated according to the standard as U+0000–U+10FFFF. The codespace is a systematic, architecture-independent representation of The Unicode Standard; actual text is processed as binary data via one of several Unicode encodings, such as UTF-8. In this normative notation, the two-character prefix U+ always precedes a written code point, and the code points themselves are written as hexadecimal numbers.
In most cases, other properties must be used to adequately describe all the characteristics of any given code point. The 1024 points in the range U+D800–U+DBFF are known as high-surrogate code points, and code points in the range U+DC00–U+DFFF (1024 code points) are known as low-surrogate code points. A high-surrogate code point followed by a low-surrogate code point forms a surrogate pair in UTF-16 in order to represent code points greater than U+FFFF.
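The pairing itself is simple bit arithmetic; a minimal Python sketch of the standard UTF-16 mapping:

```python
def to_surrogate_pair(cp: int) -> tuple[int, int]:
    """Split a supplementary code point (U+10000..U+10FFFF) into a
    UTF-16 high/low surrogate pair."""
    assert 0x10000 <= cp <= 0x10FFFF
    offset = cp - 0x10000                 # a 20-bit value
    high = 0xD800 + (offset >> 10)        # top 10 bits -> high surrogate
    low = 0xDC00 + (offset & 0x3FF)       # bottom 10 bits -> low surrogate
    return high, low

# U+13254 EGYPTIAN HIEROGLYPH O004 -> (0xD80C, 0xDE54)
print(tuple(hex(u) for u in to_surrogate_pair(0x13254)))
```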
The Unicode character repertoire is synchronized with ISO/IEC 10646, each being code-for-code identical with one another. However, The Unicode Standard is more than just a repertoire within which characters are assigned. To aid developers and designers, the standard also provides charts and reference data, as well as annexes explaining concepts germane to various scripts, providing guidance for their implementation.
The repertoire, character names, and code points of Unicode Version 2.0 exactly match those of ISO/IEC 10646-1:1993 with its first seven published amendments. After Unicode 3.0 was published in February 2000, corresponding new and updated characters entered the UCS via ISO/IEC 10646-1:2000. In 2003, parts 1 and 2 of ISO/IEC 10646 were combined into a single part, which has since had a number of amendments adding characters to the standard in approximate synchrony with the Unicode standard.
In 1987, Xerox employee Joe Becker, along with Apple employees Lee Collins and Mark Davis, started investigating the practicalities of creating a universal character set. With additional input from Peter Fenwick and Dave Opstad, Becker published a draft proposal for an "international/multilingual text character encoding system, tentatively called Unicode" in August 1988. He explained that "the name 'Unicode' is intended to suggest a unique, unified, universal encoding".
It does this to allow for future expansion or to minimise conflicts with other encoding forms. The original edition of the UCS defined UTF-16, an extension of UCS-2, to represent code points outside the BMP. A range of code points in the S (Special) Zone of the BMP remains unassigned to characters. UCS-2 disallows use of code values for these code points, but UTF-16 allows their use in pairs. Unicode also adopted UTF-16, but in Unicode terminology, the high-half zone elements become "high surrogates" and the low-half zone elements become "low surrogates".
However, the Unicode versions do differ from their ISO equivalents in two significant ways. While the UCS is a simple character map, Unicode specifies the rules, algorithms, and properties necessary to achieve interoperability between different platforms and languages. Thus, The Unicode Standard includes more information, covering in-depth topics such as bitwise encoding, collation, and rendering. It also provides a comprehensive catalog of character properties, including those needed for supporting bidirectional text, as well as visual charts and reference data sets to aid implementers.
And even though it is a separate standard, the term Unicode is used just as often, informally, when discussing the UCS. However, any normative references to the UCS as a publication should cite the year of the edition in the form ISO/IEC 10646:{year}, for example: ISO/IEC 10646:2014. Since 1991, the Unicode Consortium and the ISO/IEC have developed The Unicode Standard ("Unicode") and ISO/IEC 10646 in tandem.
It is populated with accented and variant majuscule and minuscule Latin letters for writing mostly eastern European languages. The Deprecated letter subheading contains a single character, Latin Small Letter N Preceded by Apostrophe, which was included for compatibility with the ISO/IEC 6937 standard. It was deprecated as of Unicode version 5.2.0, with the comment that U+0149 ʼn LATIN SMALL LETTER N PRECEDED BY APOSTROPHE was encoded for use in Afrikaans.
The full text, on the other hand, is published as a free PDF on the Unicode website. A practical reason for this publication method highlights the second significant difference between the UCS and Unicode—the frequency with which updated versions are released and new characters added. The Unicode Standard has regularly released annual expanded versions, occasionally with more than one version released in a calendar year and with rare cases where the scheduled release had to be postponed.
However, partially with the intent of encouraging rapid adoption, the simplicity of this original model has become somewhat more elaborate over time, and various pragmatic concessions have been made over the course of the standard's development. The first 256 code points mirror the ISO/IEC 8859-1 standard, with the intent of trivializing the conversion of text already written in Western European scripts.
This situation began changing when the People's Republic of China (PRC) ruled in 2006 that all software sold in its jurisdiction would have to support GB 18030. This required software intended for sale in the PRC to move beyond the BMP. The system deliberately leaves many code points not assigned to characters, even in the BMP.
Update versions, which do not include any changes to character repertoire, are signified by the third number (e.g., "version 4.0.1") and are omitted in the table below. The Unicode Consortium normally releases a new version of The Unicode Standard once a year. Version 17.0, the next major version, is projected to include 4,301 new unified CJK characters.
By the end of 1990, most of the work of remapping existing standards had been completed, and a final review draft of Unicode was ready. The Unicode Consortium was incorporated in California on 3 January 1991, and the first volume of The Unicode Standard was published that October. The second volume, now adding Han ideographs, was published in June 1992. In 1996, a surrogate character mechanism was implemented in Unicode 2.0, so that Unicode was no longer restricted to 16 bits.
Another encoding, UTF-32 (previously named UCS-4), uses four bytes (total 32 bits) to encode a single character of the codespace. UTF-32 thereby permits a binary representation of every code point, and (as of 2024) is used mostly in internal APIs and software applications. The International Organization for Standardization (ISO) set out to compose the universal character set in 1989, and published the draft of ISO 10646 in 1990.
To preserve the distinctions made by different legacy encodings, therefore allowing for conversion between them and Unicode without any loss of information, many characters nearly identical to others, in both appearance and intended function, were given distinct code points. For example, the Halfwidth and Fullwidth Forms block encompasses a full semantic duplicate of the Latin alphabet, because legacy CJK encodings contained both "fullwidth" (matching the width of CJK characters) and "halfwidth" (matching ordinary Latin script) characters.
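The duplication can be observed directly: compatibility normalization (NFKC) folds a fullwidth letter back onto its ordinary Latin counterpart. A minimal Python sketch:

```python
import unicodedata

full = "\uFF21"  # U+FF21 FULLWIDTH LATIN CAPITAL LETTER A
print(full, unicodedata.name(full))
# NFKC compatibility normalization maps it to the ordinary U+0041 'A',
# discarding the width distinction that the separate code point preserves.
print(unicodedata.normalize("NFKC", full) == "A")   # True
```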
The set of noncharacters is stable, and no new noncharacters will ever be defined. Like surrogates, the rule that these cannot be used is often ignored, although the operation of the byte order mark assumes that U+FFFE will never be the first code point in a text.
Rob Pike and Ken Thompson, the designers of the Plan 9 operating system, devised a new, fast and well-designed mixed-width encoding that was also backward-compatible with 7-bit ASCII, which came to be called UTF-8, and is currently the most popular UCS encoding. ISO/IEC 10646 and Unicode have an identical repertoire and numbers—the same characters with the same numbers exist on both standards, although Unicode releases new versions and adds new characters more often.
For some scripts on the Roadmap, such as Jurchen and Khitan large script, encoding proposals have been made and they are working their way through the approval process. For other scripts, such as Numidian and Rongorongo, no proposal has yet been made, and they await agreement on character repertoire and other details from the user communities involved.
Beyond those modern-use characters, all others may be defined to be obsolete or rare; these are better candidates for private-use registration than for congesting the public list of generally useful Unicode. In early 1989, the Unicode working group expanded to include Ken Whistler and Mike Kernaghan of Metaphor, Karen Smith-Yoshimura and Joan Aliprand of Research Libraries Group, and Glenn Wright of Sun Microsystems. In 1990, Michel Suignard and Asmus Freytag of Microsoft and NeXT's Rick McGowan had also joined the group.
The Unicode codespace is divided into 17 planes, numbered 0 to 16. Plane 0 is the Basic Multilingual Plane (BMP), and contains the most commonly used characters. All code points in the BMP are accessed as a single code unit in UTF-16 encoding and can be encoded in one, two or three bytes in UTF-8. Code points in planes 1 through 16 (the supplementary planes) are accessed as surrogate pairs in UTF-16 and encoded in four bytes in UTF-8. Within each plane, characters are allocated within named blocks of related characters.
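These encoding-length rules can be observed directly; a minimal Python sketch comparing a BMP character with a supplementary-plane character:

```python
# 'é' (U+00E9) sits in the BMP; U+13254 sits in plane 1.
for ch in ["\u00E9", "\U00013254"]:
    utf8 = ch.encode("utf-8")
    utf16 = ch.encode("utf-16-be")   # big-endian, no byte order mark
    print(f"U+{ord(ch):04X}: {len(utf8)} UTF-8 bytes, "
          f"{len(utf16) // 2} UTF-16 code unit(s)")
# U+00E9:  2 UTF-8 bytes, 1 UTF-16 code unit(s)
# U+13254: 4 UTF-8 bytes, 2 UTF-16 code unit(s)  (a surrogate pair)
```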
Unicode has rules and specifications outside the scope of ISO/IEC 10646. ISO/IEC 10646 is a simple character map, an extension of previous standards like ISO/IEC 8859. In contrast, Unicode adds rules for collation, normalisation of forms, and the bidirectional algorithm for right-to-left scripts such as Arabic and Hebrew. For interoperability between platforms, especially if bidirectional scripts are used, it is not enough to support ISO/IEC 10646; Unicode must be implemented.
Many common characters, including numerals, punctuation, and other symbols, are unified within the standard and are not treated as specific to any given writing system. Unicode encodes 3,790 emoji, with the continued development thereof conducted by the Consortium as a part of the standard.
Topics covered by these annexes include character normalization, character composition and decomposition, collation, and directionality. Unicode text is processed and stored as binary data using one of several encodings, which define how to translate the standard's abstracted codes for characters into sequences of bytes.
The Latin capital letter A, for example, had a location in group 0x20, plane 0x20, row 0x20, cell 0x41. One could code the characters of this primordial ISO/IEC 10646 standard in one of three ways. In 1990, therefore, two initiatives for a universal character set existed: Unicode, with 16 bits for every character (65,536 possible characters), and ISO/IEC 10646.
For that reason, ISO/IEC 10646 was limited to contain as many characters as could be encoded by UTF-16 and no more, that is, a little over a million characters instead of over 679 million. The UCS-4 encoding of ISO/IEC 10646 was incorporated into the Unicode standard with the limitation to the UTF-16 range and under the name UTF-32, although it has almost no use outside programs' internal data.
The Unicode Standard itself defines three encodings: UTF-8, UTF-16, and UTF-32, though several others exist. Of these, UTF-8 is the most widely used by a large margin, in part due to its backwards-compatibility with ASCII. Unicode was originally designed with the intent of transcending limitations present in all text encodings designed up to that point: each encoding was relied upon for use in its own context, but with no particular expectation of compatibility with any other.
At the most abstract level, Unicode assigns a unique number called a code point to each character. Many issues of visual representation—including size, shape, and style—are intended to be up to the discretion of the software actually rendering the text, such as a web browser or word processor.
At least four hexadecimal digits are always written, with leading zeros prepended as needed. For example, the code point U+00F7 ÷ DIVISION SIGN is padded with two leading zeros, but U+13254 𓉔 EGYPTIAN HIEROGLYPH O004 is not padded.
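The notation is easy to produce programmatically; the helper below (u_notation, a hypothetical name chosen here for illustration) encodes exactly the padding rule just described:

```python
def u_notation(cp: int) -> str:
    """Format a code point in the standard U+ notation:
    hexadecimal digits, padded to at least four."""
    return f"U+{cp:04X}"

print(u_notation(0x00F7))    # U+00F7  (padded with two leading zeros)
print(u_notation(0x13254))   # U+13254 (five digits, no padding needed)
```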
Two changes took place: the lifting of the limitation upon characters (prohibition of control code values), thus opening code points for allocation; and the synchronisation of the repertoire of the Basic Multilingual Plane with that of Unicode. Meanwhile, in the passage of time, the situation changed in the Unicode standard itself: 65,536 characters came to appear insufficient, and the standard from version 2.0 and onwards supports encoding of 1,112,064 code points from 17 planes by means of the UTF-16 surrogate mechanism.
Hugh McGregor Ross was one of its principal architects. This work happened independently of the development of the Unicode standard, which had been in development since 1987 by Xerox and Apple. The original ISO 10646 draft differed markedly from the current standard. It defined 128 groups of 256 planes of 256 rows of 256 cells, for an apparent total of 2,147,483,648 characters, but actually the standard could code only 679,477,248 characters, as the policy forbade byte values of C0 and C1 control codes (0x00 to 0x1F and 0x80 to 0x9F, in hexadecimal notation) in any one of the four bytes specifying a group, plane, row and cell.
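Both totals can be checked arithmetically. The decomposition below (96 usable values in the 7-bit group octet once C0 is excluded, and 192 in each of the other three octets once C0 and C1 are excluded) is an inference from the figures quoted above, not a statement taken from the draft itself:

```python
# Apparent total: 128 groups x 256 planes x 256 rows x 256 cells.
assert 128 * 256**3 == 2_147_483_648

# Forbidding the 64 C0/C1 values leaves 192 usable values per 8-bit octet,
# and 96 usable values in the 7-bit group octet (only C0 falls in its range).
assert 96 * 192**3 == 679_477_248
```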
Some modern invented scripts which have not yet been included in Unicode (e.g., Tengwar) or which do not qualify for inclusion in Unicode due to lack of real-world use (e.g., Klingon) are listed in the ConScript Unicode Registry, along with unofficial but widely used Private Use Areas code assignments. There is also a Medieval Unicode Font Initiative focused on special Latin medieval characters. Some of these proposals have already been included in Unicode. The Script Encoding Initiative, a project run by Deborah Anderson at the University of California, Berkeley, was founded in 2002 with the goal of funding proposals for scripts not yet encoded in the standard.
Unicode intends these properties to support interoperable text handling with a mixture of languages. Some applications support ISO/IEC 10646 characters but do not fully support Unicode. One such application, Xterm, can properly display all ISO/IEC 10646 characters that have a one-to-one character-to-glyph mapping and a single directionality.
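Several of the character properties mentioned above, such as the numeric value and the default bidirectional class, are exposed by Python's standard unicodedata module; a minimal sketch:

```python
import unicodedata

for ch in ["8", "\u00BC", "\u05D0"]:   # digit, vulgar fraction 1/4, Hebrew alef
    print(f"U+{ord(ch):04X}",
          "numeric:", unicodedata.numeric(ch, None),
          "bidi class:", unicodedata.bidirectional(ch))
# '8'    -> numeric 8.0,  bidi class 'EN' (European Number)
# U+00BC -> numeric 0.25, bidi class 'ON' (Other Neutral)
# U+05D0 -> no numeric value, bidi class 'R' (Right-to-Left)
```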
Presently only the Ministry of Endowments and Religious Affairs (Oman) is a full member with voting rights. The Consortium has the ambitious goal of eventually replacing existing character encoding schemes with Unicode and its standard Unicode Transformation Format (UTF) schemes, as many of the existing schemes are limited in size and scope and are incompatible with multilingual environments. Unicode currently covers most major writing systems in use today. As of 2024, the following versions of The Unicode Standard have been published.
The character is deprecated, and its use is strongly discouraged. In nearly all cases it is better represented by a sequence of an apostrophe followed by the letter "n": 'n. The following Unicode-related documents record the purpose and process of defining specific characters in the Latin Extended-A block. The Universal Coded Character Set (UCS, Unicode) is a standard set of characters defined by the international standard ISO/IEC 10646, Information technology — Universal Coded Character Set (UCS) (plus amendments to that standard), which is the basis of many character encodings, improving as characters from previously unrepresented typing systems are added.
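Because U+0149 carries a compatibility decomposition, the recommended two-character replacement can be derived mechanically with NFKC normalization; a minimal Python sketch:

```python
import unicodedata

deprecated = "\u0149"   # ʼn LATIN SMALL LETTER N PRECEDED BY APOSTROPHE
replacement = unicodedata.normalize("NFKC", deprecated)
print([f"U+{ord(c):04X} {unicodedata.name(c)}" for c in replacement])
# ['U+02BC MODIFIER LETTER APOSTROPHE', 'U+006E LATIN SMALL LETTER N']
```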
This increased the Unicode codespace to over a million code points, which allowed for the encoding of many historic scripts, such as Egyptian hieroglyphs, and thousands of rarely used or obsolete characters that had not been anticipated for inclusion in the standard. Among these characters are various rarely used CJK characters—many mainly being used in proper names, making them far more necessary for a universal encoding than the original Unicode architecture envisioned.
Indeed, any two encodings chosen were often totally unworkable when used together, with text encoded in one interpreted as garbage characters by the other. Most encodings had only been designed to facilitate interoperation between a handful of scripts—often primarily between a given script and Latin characters—not between a large number of scripts, and not with all of the scripts supported being treated in a consistent manner.