InterPro is a database of protein families, protein domains and functional sites, in which identifiable features found in known proteins can be applied to new protein sequences in order to functionally characterise them.
The contents of InterPro consist of diagnostic signatures and the proteins that they significantly match. The signatures consist of models (simple types, such as regular expressions, or more complex ones, such as hidden Markov models) which describe protein families, domains or sites. Models are built from the amino acid sequences of known families or domains and they are subsequently used to search unknown sequences (such as those arising from novel genome sequencing) in order to classify them. Each of
A recursive descent parser via sub-rules. The use of regexes in structured information standards for document and database modeling started in the 1960s and expanded in the 1980s when industry standards like ISO SGML (whose precursor was ANSI "GCA 101-1983") consolidated. The kernel of these structure-specification language standards consists of regexes; their use is evident in the DTD element group syntax. Prior to
A "lazy match" (see below) extension. As a result, very few programs actually implement the POSIX subexpression rules (even when they implement other parts of the POSIX syntax). The meaning of metacharacters escaped with a backslash is reversed for some characters in the POSIX Extended Regular Expression (ERE) syntax. With this syntax, a backslash causes the metacharacter to be treated as a literal character. So, for example, \( \)
A "standard" which has since been adopted as the default syntax of many tools, where the choice of BRE or ERE modes is usually a supported option. For example, GNU grep has the following options: "grep -E" for ERE, "grep -G" for BRE (the default), and "grep -P" for Perl regexes. Perl regexes have become a de facto standard, having a rich and powerful set of atomic expressions. Perl has no "basic" or "extended" levels. As in POSIX EREs, ( ) and { } are treated as metacharacters unless escaped; other metacharacters are known to be literal or symbolic based on context alone. Additional functionality includes lazy matching, backreferences, named capture groups, and recursive patterns. In
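These Perl-derived features carried over into most modern engines. A minimal sketch, using Python's re module (which adopted Perl's syntax) and invented sample strings, shows lazy matching, a backreference and a named capture group:

    import re

    # Lazy (non-greedy) matching: ".+?" stops at the first ">",
    # while the greedy ".+" swallows everything up to the last ">".
    print(re.findall(r"<.+?>", "<a><b>"))    # ['<a>', '<b>']
    print(re.findall(r"<.+>", "<a><b>"))     # ['<a><b>']

    # Backreference: \1 must repeat exactly what group 1 matched.
    print(re.search(r"(\w+) \1", "hello hello world").group())   # 'hello hello'

    # Named capture group: refer to a group by name instead of by number.
    m = re.match(r"(?P<year>\d{4})-(?P<month>\d{2})", "2024-05")
    print(m.group("year"), m.group("month"))    # 2024 05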
A bracket expression if it is the first (after the ^, if present) character: []abc], [^]abc]. According to Russ Cox, the POSIX specification requires ambiguous subexpressions to be handled in a way different from Perl's. The committee replaced Perl's rules with a rule that is simple to explain, but the new "simple" rules are actually more complex to implement: they were incompatible with pre-existing tooling and made it essentially impossible to define
A concise and flexible way to direct the automation of text processing of a variety of input data, in a form easy to type using a standard ASCII keyboard. A very simple case of a regular expression in this syntax is to locate a word spelled two different ways in a text editor: the regular expression seriali[sz]e matches both "serialise" and "serialize". Wildcard characters also achieve this, but are more limited in what they can pattern, as they have fewer metacharacters and
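To make the example concrete, here is the same pattern exercised in Python's re module (any regex-capable tool behaves equivalently):

    import re

    pattern = re.compile(r"seriali[sz]e")
    for word in ["serialise", "serialize", "serial"]:
        print(word, bool(pattern.fullmatch(word)))
    # serialise True, serialize True, serial False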
A finite alphabet Σ, the following constants are defined as regular expressions: the empty set ∅, the empty string ε, and each literal character a in Σ. Given regular expressions R and S, the following operations over them are defined to produce regular expressions: concatenation (RS), alternation (R|S), and the Kleene star (R*). To avoid parentheses, it is assumed that the Kleene star has the highest priority, followed by concatenation, then alternation. If there is no ambiguity, then parentheses may be omitted. For example, (ab)c can be written as abc, and a|(b(c*)) can be written as a|bc*. Many textbooks use
A given pattern or process a number of instances of it. Pattern matches may vary from a precise equality to a very general similarity, as controlled by the metacharacters. For example, . is a very general pattern, [a-z] (match all lowercase letters from 'a' to 'z') is less general, and b is a precise pattern (matches just 'b'). The metacharacter syntax is designed specifically to represent prescribed targets in
A large number of possible strings, rather than compiling a large list of all the literal possibilities. Depending on the regex processor, there are about fourteen metacharacters, characters that may or may not have their literal character meaning, depending on context, or whether they are "escaped", i.e. preceded by an escape sequence, in this case the backslash \. Modern and POSIX extended regexes use metacharacters more often than their literal meaning, so to avoid "backslash-osis" or leaning toothpick syndrome, they have
A larger set of characters. For example, [A-Z] could stand for any uppercase letter in the English alphabet, and \d could mean any digit. Character classes apply to both POSIX levels. When specifying a range of characters, such as [a-Z] (i.e. lowercase a to uppercase Z), the computer's locale settings determine the contents by the numeric ordering of the character encoding. They could store digits in that sequence, or
A metacharacter escape to a literal mode; starting out, however, they instead have the four bracketing metacharacters ( ) and { } be primarily literal, and "escape" this usual meaning to become metacharacters. Common standards implement both. The usual metacharacters are {}[]()^$.|*+? and \. The usual characters that become metacharacters when escaped are dswDSW and N. When entering
A mismatch. A simple and inefficient way to see where one string occurs inside another is to check at each index, one by one. First, we see if there is a copy of the needle starting at the first character of the haystack; if not, we look to see if there's a copy of the needle starting at the second character of the haystack, and so forth. In the normal case, we only have to look at one or two characters for each wrong position to see that it
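A direct transcription of this brute-force check into Python (the worst case discussed below appears when searching for "aaaab" inside "aaaaaaaaab"):

    def naive_search(haystack, needle):
        """Return the start index of every occurrence of needle in haystack,
        checking each alignment one character at a time."""
        positions = []
        n, m = len(haystack), len(needle)
        for i in range(n - m + 1):          # every candidate start position
            j = 0
            while j < m and haystack[i + j] == needle[j]:
                j += 1
            if j == m:                      # the whole needle matched here
                positions.append(i)
        return positions

    print(naive_search("aaaaaaaaab", "aaaab"))   # [5]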
A pattern can be found quickly. As an example, a suffix tree can be built in Θ(n) time, and all z occurrences of a pattern can be found in O(m) time, under the assumption that the alphabet has a constant size and all inner nodes in
A place where one or several strings (also called patterns) are found within a larger string or text. A basic example of string searching is when the pattern and the searched text are arrays of elements of an alphabet (finite set) Σ. Σ may be a human language alphabet, for example the letters A through Z, while other applications may use a binary alphabet (Σ = {0,1}) or a DNA alphabet (Σ = {A,C,G,T}) in bioinformatics. In practice,
A range of lines (matching the pattern), which can be combined with other commands on either side, most famously g/re/p as in grep ("global regex print"), which is included in most Unix-based operating systems, such as Linux distributions. A similar convention is used in sed, where search and replace is given by s/re/replacement/ and patterns can be joined with a comma to specify a range of lines, as in /re1/,/re2/. This notation
A regex in a programming language, they may be represented as a usual string literal, hence usually quoted; this is common in C, Java, and Python, for instance, where the regex re is entered as "re". However, they are often written with slashes as delimiters, as in /re/ for the regex re. This originates in ed, where / is the editor command for searching, and an expression /re/ can be used to specify
A regular expression (that is, each character in the string describing its pattern) is either a metacharacter, having a special meaning, or a regular character, having a literal meaning. For example, in the regex b., 'b' is a literal character that matches just 'b', while '.' is a metacharacter that matches every character except a newline. Therefore, this regex matches, for example, 'b%', or 'bx', or 'b5'. Together, metacharacters and literal characters can be used to identify text of
A regular expression in the above syntax into an internal representation that can be executed and matched against a string representing the text being searched in. One possible approach is Thompson's construction algorithm to construct a nondeterministic finite automaton (NFA), which is then made deterministic, and the resulting deterministic finite automaton (DFA) is run on the target text string to recognize substrings that match
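Thompson's construction itself is too long to sketch here, but the set-of-states simulation it enables is short. The following is a sketch under stated assumptions: a hand-built ε-NFA for the regex a(b|c)*, with state numbering invented for illustration, simulated directly without ever constructing the DFA:

    # Hand-built Thompson-style epsilon-NFA for the regex a(b|c)*.
    # State numbering is arbitrary: 0 is the start state, 9 the accept state.
    EPS = ""
    TRANS = {
        (0, "a"): {1},
        (1, EPS): {8},
        (8, EPS): {2, 9},   # enter the starred group, or skip it entirely
        (2, EPS): {3, 5},   # choose the b-branch or the c-branch
        (3, "b"): {4},
        (5, "c"): {6},
        (4, EPS): {7},
        (6, EPS): {7},
        (7, EPS): {2, 9},   # loop back for another repetition, or leave
    }
    ACCEPT = {9}

    def eps_closure(states):
        """All states reachable from `states` using only epsilon moves."""
        stack, seen = list(states), set(states)
        while stack:
            s = stack.pop()
            for t in TRANS.get((s, EPS), ()):
                if t not in seen:
                    seen.add(t)
                    stack.append(t)
        return seen

    def nfa_match(text):
        current = eps_closure({0})
        for ch in text:
            current = eps_closure({t for s in current
                                   for t in TRANS.get((s, ch), ())})
        return bool(current & ACCEPT)

    for s in ["a", "abcb", "ba", ""]:
        print(repr(s), nfa_match(s))   # True, True, False, False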
A regular expression. For instance, determining the validity of a given ISBN requires computing the modulus of the integer base 11, and can be easily implemented with an 11-state DFA. However, converting it to a regular expression results in a 2.14 megabyte file. Given a regular expression, Thompson's construction algorithm computes an equivalent nondeterministic finite automaton. A conversion in
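The mod-11 check that the DFA encodes is tiny when written as ordinary code, which is the point of the contrast. A sketch for ISBN-10, where a trailing 'X' stands for the value 10:

    def isbn10_valid(isbn):
        """ISBN-10 checksum: the weighted sum of the digits (weights 10 down
        to 1, with 'X' standing for 10) must be divisible by 11."""
        digits = [10 if c == "X" else int(c) for c in isbn]
        if len(digits) != 10:
            return False
        return sum(w * d for w, d in zip(range(10, 0, -1), digits)) % 11 == 0

    print(isbn10_valid("0306406152"))   # True
    print(isbn10_valid("0306406153"))   # False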
A significant difference in compactness. Some classes of regular languages can only be described by deterministic finite automata whose size grows exponentially in the size of the shortest equivalent regular expressions. The standard example here is the languages L_k consisting of all strings over the alphabet {a, b} whose k-th-from-last letter equals a. On the one hand, a regular expression describing L_4
A simple language-base. The usual context of wildcard characters is in globbing similar names in a list of files, whereas regexes are usually employed in applications that pattern-match text strings in general. For example, the regex ^[ \t]+|[ \t]+$ matches excess whitespace at the beginning or end of a line. An advanced regular expression that matches any numeral is [+-]?(\d+(\.\d*)?|\.\d+)([eE][+-]?\d+)?. A regex processor translates
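That numeral pattern is valid Python re syntax as written; a quick check against a few sample strings:

    import re

    NUMBER = re.compile(r"[+-]?(\d+(\.\d*)?|\.\d+)([eE][+-]?\d+)?")

    for s in ["42", "-3.14", ".5", "6.02e23", "+1E-9", "abc", "."]:
        print(repr(s), bool(NUMBER.fullmatch(s)))
    # '42' True, '-3.14' True, '.5' True, '6.02e23' True, '+1E-9' True,
    # 'abc' False, '.' False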
A whole needle-length at each step. Baeza–Yates keeps track of whether the previous j characters were a prefix of the search string, and is therefore adaptable to fuzzy string searching. The bitap algorithm is an application of Baeza–Yates' approach. Faster search algorithms preprocess the text. After building a substring index, for example a suffix tree or suffix array, the occurrences of
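A minimal sketch of the bitap (shift-and) idea for exact matching, with invented helper names: one bitmask per pattern character marks its positions, and a single integer of state tracks every partially matched prefix at once (the fuzzy variant keeps one such state per allowed error):

    def bitap_search(text, pattern):
        """Exact bitap (shift-and): return the index of the first occurrence
        of pattern in text, or -1. Bit i of `state` means that the first
        i+1 pattern characters match, ending at the current text position."""
        m = len(pattern)
        if m == 0:
            return 0
        masks = {}                          # per-character position bitmasks
        for i, ch in enumerate(pattern):
            masks[ch] = masks.get(ch, 0) | (1 << i)
        state = 0
        for j, ch in enumerate(text):
            # Extend every partial match (and start a new one) by ch.
            state = ((state << 1) | 1) & masks.get(ch, 0)
            if state & (1 << (m - 1)):      # the whole pattern just matched
                return j - m + 1
        return -1

    print(bitap_search("abracadabra", "cad"))   # 4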
A wide range of programs, with these early forms standardized in the POSIX.2 standard in 1992. In the 1980s, the more complicated regexes arose in Perl, which originally derived from a regex library written by Henry Spencer (1986), who later wrote an implementation for Tcl called Advanced Regular Expressions. The Tcl library is a hybrid NFA/DFA implementation with improved performance characteristics. Software projects that have adopted Spencer's Tcl regular expression implementation include PostgreSQL. Perl later expanded on Spencer's original library to add many new features. Part of
is deprecated, in favor of BRE, as both provide backward compatibility. The subsection below covering the character classes applies to both BRE and ERE. BRE and ERE work together. ERE adds ?, +, and |, and it removes the need to escape the metacharacters ( ) and { }, which are required in BRE. Furthermore, as long as the POSIX standard syntax for regexes is adhered to, there can be, and often is, additional syntax to serve specific (yet POSIX compliant) applications. Although POSIX.2 leaves some implementation specifics undefined, BRE and ERE provide
is a greedy quantifier or not); a logical OR character, which offers a set of alternatives; a logical NOT character, which negates an atom's existence; and backreferences to refer to previous atoms of a completing pattern of atoms. A match is made not when all the atoms of the string are matched, but rather when all the pattern atoms in the regex have matched. The idea is to make a small pattern of characters stand for
is a sequence of characters that specifies a match pattern in text. Usually such patterns are used by string-searching algorithms for "find" or "find and replace" operations on strings, or for input validation. Regular expression techniques are developed in theoretical computer science and formal language theory. The concept of regular expressions began in the 1950s, when the American mathematician Stephen Cole Kleene formalized
is a simple mapping from regular expressions to the more general nondeterministic finite automata (NFAs) that does not lead to such a blowup in size; for this reason, NFAs are often used as alternative representations of regular languages. NFAs are a simple variation of the type-3 grammars of the Chomsky hierarchy. In the opposite direction, there are many languages easily described by a DFA that are not easily described by a
is a wrong position, so in the average case this takes O(n + m) steps, where n is the length of the haystack and m is the length of the needle; but in the worst case, searching for a string like "aaaab" in a string like "aaaaaaaaab", it takes O(nm) steps. In this approach, backtracking is avoided by constructing a deterministic finite automaton (DFA) that recognizes a stored search string. These are expensive to construct—they are usually created using
is given by (a|b)*a(a|b)(a|b)(a|b). Generalizing this pattern to L_k gives the expression (a|b)*a(a|b)^{k−1}, i.e. any prefix, then an a, then k−1 arbitrary letters. On the other hand, it is known that every deterministic finite automaton accepting the language L_k must have at least 2^k states. Luckily, there
is given in § Syntax. Regular expressions describe regular languages in formal language theory. They have the same expressive power as regular grammars. Regular expressions consist of constants, which denote sets of strings, and operator symbols, which denote operations over these sets. The following definition is standard, and found as such in most textbooks on formal language theory. Given
is in the public domain, since its content can be used "by any individual and for any purpose". InterPro aims to release data to the public every 8 weeks, typically within a day of the UniProtKB release of the same proteins. InterPro provides an API for programmatic access to all InterPro entries and their related entries in JSON format. There are six main endpoints for the API corresponding to
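A minimal sketch of such programmatic access using Python's requests library; the base URL and the entry accession IPR000001 are assumptions made for illustration, so consult the current InterPro documentation for the exact paths:

    import requests

    BASE = "https://www.ebi.ac.uk/interpro/api"   # assumed base URL

    # IPR000001 is used purely as an example accession.
    resp = requests.get(f"{BASE}/entry/interpro/IPR000001", timeout=30)
    resp.raise_for_status()
    entry = resp.json()
    print(sorted(entry))            # inspect the top-level keys of the payload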
is now ( ) and \{ \} is now { }. Additionally, support is removed for \n backreferences, and several metacharacters are added. POSIX Extended Regular Expressions can often be used with modern Unix utilities by including the command line flag -E. The character class is the most basic regex concept after a literal match. It makes one small sequence of characters match
is particularly well known due to its use in Perl, where it forms part of the syntax distinct from normal string literals. In some cases, such as sed and Perl, alternative delimiters can be used to avoid collision with contents, and to avoid having to escape occurrences of the delimiter character in the contents. For example, in sed the command s,/,X, will replace a / with an X, using commas as delimiters. The IEEE POSIX standard has three sets of compliance: BRE (Basic Regular Expressions), ERE (Extended Regular Expressions), and SRE (Simple Regular Expressions). SRE
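Python's re.sub avoids the delimiter problem altogether, since the pattern and replacement are ordinary string arguments; the sed command s,/,X, from the text corresponds roughly to:

    import re

    print(re.sub(r"/", "X", "a/b/c", count=1))   # 'aXb/c'  (like sed s,/,X,)
    print(re.sub(r"/", "X", "a/b/c"))            # 'aXbXc'  (like sed s,/,X,g)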
is the fourth word; or all occurrences, of which there are three; or the last, which is the fifth word from the end. Very commonly, however, various constraints are added. For example, one might want to match the "needle" only where it consists of one (or more) complete words—perhaps defined as not having other letters immediately adjacent on either side. In that case a search for "hew" or "low" should fail for
is to list its elements or members. However, there are often more concise ways: for example, the set containing the three strings "Handel", "Händel", and "Haendel" can be specified by the pattern H(ä|ae?)ndel; we say that this pattern matches each of the three strings. However, there can be many ways to write a regular expression for the same set of strings: for example, (Hän|Han|Haen)del also specifies
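Checking the first of those two patterns in Python confirms that it accepts exactly the three spellings:

    import re

    pattern = re.compile(r"H(ä|ae?)ndel")
    for name in ["Handel", "Händel", "Haendel", "Haandel"]:
        print(name, bool(pattern.fullmatch(name)))
    # Handel True, Händel True, Haendel True, Haandel False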
is used by many modern tools including PHP and Apache HTTP Server. Today, regexes are widely supported in programming languages, text processing programs (particularly lexers), advanced text editors, and some other programs. Regex support is part of the standard library of many programming languages, including Java and Python, and is built into the syntax of others, including Perl and ECMAScript. In
the Compatible Time-Sharing System, an important early example of JIT compilation. He later added this capability to the Unix editor ed, which eventually led to the popular search tool grep's use of regular expressions ("grep" is a word derived from the command for regular expression searching in the ed editor: g/re/p, meaning "Global search for Regular Expression and Print matching lines"). Around
the Kleene star and set unions over finite words. This is a surprisingly difficult problem. As simple as regular expressions are, there is no method to systematically rewrite them to some normal form. The lack of a complete axiomatization in the past led to the star height problem. In 1991, Dexter Kozen axiomatized regular expressions as a Kleene algebra, using equational and Horn clause axioms. Already in 1964, Redko had proved that no finite set of purely equational axioms can characterize
the POSIX standard, Basic Regular Syntax (BRE) requires that the metacharacters ( ) and { } be designated \(\) and \{\}, whereas Extended Regular Syntax (ERE) does not. The - character is treated as a literal character if it is the last or the first (after the ^, if present) character within the brackets: [abc-], [-abc], [^-abc]. Backslash escapes are not allowed. The ] character can be included in
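These placement rules carry over to most modern engines; a quick check with Python's re module:

    import re

    # "-" is literal when first or last inside the brackets;
    # "]" is literal when it comes first (after "^" in a negated class).
    print(bool(re.fullmatch(r"[abc-]", "-")))    # True
    print(bool(re.fullmatch(r"[-abc]", "-")))    # True
    print(bool(re.fullmatch(r"[]abc]", "]")))    # True
    print(bool(re.fullmatch(r"[^]abc]", "x")))   # True
    print(bool(re.fullmatch(r"[^]abc]", "]")))   # False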
the SNOBOL language, which did not use regular expressions, but instead its own pattern matching constructs. Regular expressions entered popular use from 1968 in two uses: pattern matching in a text editor and lexical analysis in a compiler. Among the first appearances of regular expressions in program form was when Ken Thompson built Kleene's notation into the editor QED as a means to match patterns in text files. For speed, Thompson implemented regular expression matching by just-in-time compilation (JIT) to IBM 7094 code on
the powerset construction—but are very quick to use. For example, the DFA shown to the right recognizes the word "MOMMY". This approach is frequently generalized in practice to search for arbitrary regular expressions. Knuth–Morris–Pratt computes a DFA that recognizes inputs with the string to search for as a suffix; Boyer–Moore starts searching from the end of the needle, so it can usually jump ahead
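A compact Python sketch of the Knuth–Morris–Pratt idea: a failure table computed from the needle records how far a partial match can fall back on a mismatch, so no text character is ever re-read:

    def kmp_search(haystack, needle):
        """Knuth–Morris–Pratt: O(n + m) search returning all match positions."""
        if not needle:
            return list(range(len(haystack) + 1))
        # fail[i] = length of the longest proper prefix of needle[:i+1]
        # that is also a suffix of it.
        fail = [0] * len(needle)
        k = 0
        for i in range(1, len(needle)):
            while k > 0 and needle[i] != needle[k]:
                k = fail[k - 1]
            if needle[i] == needle[k]:
                k += 1
            fail[i] = k
        # Scan the haystack, falling back via the table on each mismatch.
        result, k = [], 0
        for i, ch in enumerate(haystack):
            while k > 0 and ch != needle[k]:
                k = fail[k - 1]
            if ch == needle[k]:
                k += 1
            if k == len(needle):            # full match ending at index i
                result.append(i - len(needle) + 1)
                k = fail[k - 1]
        return result

    print(kmp_search("aaaaaaaaab", "aaaab"))    # [5]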
the British equivalent "colour", instead of searching for two different literal strings, one might use a regular expression such as colou?r, where the "?" conventionally makes the preceding character ("u") optional. This article mainly discusses algorithms for the simpler kinds of string searching. A similar problem introduced in the field of bioinformatics and genomics is the maximal exact matching (MEM). Given two strings, MEMs are common substrings that cannot be extended left or right without causing
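In Python's re module, the same optional-character pattern finds both spellings in one pass:

    import re

    print(re.findall(r"colou?r", "color colour colouur"))   # ['color', 'colour']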
the InterPro database. Signatures which represent equivalent domains, sites or families are put into the same entry, and entries can also be related to one another. Additional information such as a description, consistent names and Gene Ontology (GO) terms are associated with each entry, where possible. InterPro contains three main entities: proteins, signatures (also referred to as "methods" or "models") and entries. The proteins in UniProtKB are also
the algebra of regular languages. A regex pattern matches a target string. The pattern is composed of a sequence of atoms. An atom is a single point within the regex pattern which it tries to match to the target string. The simplest atom is a literal, but grouping parts of the pattern to match an atom will require using ( ) as metacharacters. Metacharacters help form: atoms; quantifiers telling how many atoms (and whether it
the central protein entities in InterPro. Information regarding which signatures significantly match these proteins is calculated as the sequences are released by UniProtKB, and these results are made available to the public (see below). The matches of signatures to proteins are what determine how signatures are integrated together into InterPro entries: comparative overlap of matched protein sets and
the complement operator is redundant, because it does not grant any more expressive power. However, it can make a regular expression much more concise—eliminating a single complement operator can cause a double exponential blow-up of its length. Regular expressions in this sense can express the regular languages, exactly the class of languages accepted by deterministic finite automata. There is, however,
the concept of a regular language. They came into common use with Unix text-processing utilities. Different syntaxes for writing regular expressions have existed since the 1980s, one being the POSIX standard and another, widely used, being the Perl syntax. Regular expressions are used in search engines, in search and replace dialogs of word processors and text editors, in text processing utilities such as sed and AWK, and in lexical analysis. Regular expressions are supported in many programming languages. Library implementations are often called an "engine", and many of these are available for reuse. Regular expressions originated in 1951, when mathematician Stephen Cole Kleene described regular languages using his mathematical notation called regular events. These arose in theoretical computer science, in
the different InterPro data types: entry, protein, structure, taxonomy, proteome and set. InterProScan is a software package that allows users to scan sequences against member database signatures. Users can use this signature scanning software to functionally characterize novel nucleotide or protein sequences. InterProScan is frequently used in genome projects in order to obtain a "first-pass" characterisation of
the effort in the design of Raku (formerly named Perl 6) is to improve Perl's regex integration, and to increase their scope and capabilities to allow the definition of parsing expression grammars. The result is a mini-language called Raku rules, which are used to define Raku grammar as well as provide a tool to programmers in the language. These rules maintain existing features of Perl 5.x regexes, but also allow BNF-style definition of
the encoding is specifically designed to avoid it. The most basic case of string searching involves one (often very long) string, sometimes called the haystack, and one (often very short) string, sometimes called the needle. The goal is to find one or more occurrences of the needle within the haystack. For example, one might search for "to" within "Some books are to be tasted, others to be swallowed, and some few to be chewed and digested." One might request the first occurrence of "to", which
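Collecting every occurrence of the needle with Python's built-in substring search (note that plain substring search would also count a "to" embedded inside a longer word):

    haystack = ("Some books are to be tasted, others to be swallowed, "
                "and some few to be chewed and digested.")
    needle = "to"

    positions = []
    i = haystack.find(needle)               # -1 once there are no more hits
    while i != -1:
        positions.append(i)
        i = haystack.find(needle, i + 1)

    print(positions[0], positions, positions[-1])   # first, all three, last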
the example sentence above, even though those literal strings do occur. Another common example involves "normalization". For many purposes, a search for a phrase such as "to be" should succeed even in places where there is something else intervening between the "to" and the "be". Many symbol systems include characters that are synonymous (at least for some purposes). Finally, for strings that represent natural language, aspects of
the genome of interest. As of December 2020, the public version of InterProScan (v5.x) uses a Java-based architecture. The software package is currently only supported on a 64-bit Linux operating system. InterProScan, along with many other EMBL-EBI bioinformatics tools, can also be accessed programmatically using RESTful and SOAP Web Services APIs.

Regular expressions

A regular expression (shortened as regex or regexp), sometimes referred to as rational expression,
the language itself become involved. For example, one might wish to find all occurrences of a "word" despite it having alternate spellings, prefixes or suffixes, etc. Another more complex type of search is regular expression searching, where the user constructs a pattern of characters or other symbols, and any match to the pattern should fulfill the search. For example, to catch both the American English word "color" and
the late 2010s, several companies started to offer hardware, FPGA and GPU implementations of PCRE-compatible regex engines that are faster than CPU implementations. The phrase regular expressions, or regexes, is often used to mean the specific, standard textual syntax for representing patterns for matching text, as distinct from the mathematical notation described below. Each character in
the location of the signatures' matches on the sequences are used as indicators of relatedness. Only signatures deemed to be of sufficient quality are integrated into InterPro. As of version 81.0 (released 21 August 2020), InterPro entries annotated 73.9% of residues found in UniProtKB, with another 9.2% annotated by signatures that are pending integration. InterPro also includes data for splice variants and
the member databases of InterPro contributes towards a different niche, from very high-level, structure-based classifications (SUPERFAMILY and CATH-Gene3D) through to quite specific sub-family classifications (PRINTS and PANTHER). InterPro's intention is to provide a one-stop shop for protein classification, where all the signatures produced by the different member databases are placed into entries within
the choice of a feasible string-search algorithm may be affected by the string encoding. In particular, if a variable-width encoding is in use, then it may be slower to find the Nth character, perhaps requiring time proportional to N. This may significantly slow some search algorithms. One of many possible solutions is to search for the sequence of code units instead, but doing so may produce false matches unless
the number of patterns each uses. In the following compilation, m is the length of the pattern, n the length of the searchable text, and k = |Σ| is the size of the alphabet. The Boyer–Moore string-search algorithm has been the standard benchmark for the practical string-search literature. In the following compilation, M is the length of the longest pattern, m their total length, n
the opposite direction is achieved by Kleene's algorithm. Finally, many real-world "regular expression" engines implement features that cannot be described by the regular expressions in the sense of formal language theory; rather, they implement regexes. See below for more on this. As seen in many of the examples above, there is more than one way to construct a regular expression to achieve
the ordering could be abc...zABC...Z, or aAbBcC...zZ. So the POSIX standard defines a character class, which will be known by the regex processor installed. POSIX character classes can only be used within bracket expressions. For example, [[:upper:]ab] matches the uppercase letters and lowercase "a" and "b".

String-searching algorithm

In computer science, string-searching algorithms, sometimes called string-matching algorithms, are an important class of string algorithms that try to find
the proteins contained in the UniParc and UniMES databases. The signatures from InterPro come from 13 "member databases", which are listed below. InterPro consists of seven types of data provided by different members of the consortium. InterPro entries can be further broken down into five types. The database is available for text- and sequence-based searches via a webserver, and for download via anonymous FTP. Like other EBI databases, it
the regular expression. The picture shows the NFA scheme N(s*) obtained from the regular expression s*, where s denotes a simpler regular expression in turn, which has already been recursively translated to the NFA N(s). A regular expression, often called a pattern, specifies a set of strings required for a particular purpose. A simple way to specify a finite set of strings
the same regular language, for all regular expressions X, Y, it is necessary and sufficient to check whether the particular regular expressions (a + b)* and (a* b*)* denote the same language over the alphabet Σ = {a, b}. More generally, an equation E = F between regular-expression terms with variables holds if, and only if, its instantiation with different variables replaced by different symbol constants holds. Every regular expression can be written solely in terms of
the same results. It is possible to write an algorithm that, for two given regular expressions, decides whether the described languages are equal; the algorithm reduces each expression to a minimal deterministic finite state machine, and determines whether they are isomorphic (equivalent). Algebraic laws for regular expressions can be obtained using a method by Gischer which is best explained along an example: In order to check whether (X + Y)* and (X* Y*)* denote
the same set of three strings in this example. Most formalisms provide the following operations to construct regular expressions. These constructions can be combined to form arbitrarily complex expressions, much like one can construct arithmetical expressions from numbers and the operations +, −, ×, and ÷. The precise syntax for regular expressions varies among tools and with context; more detail
the same time when Thompson developed QED, a group of researchers including Douglas T. Ross implemented a tool based on regular expressions that is used for lexical analysis in compiler design. Many variations of these original forms of regular expressions were used in Unix programs at Bell Labs in the 1970s, including lex, sed, AWK, and expr, and in other programs such as vi and Emacs (which has its own, incompatible syntax and behavior). Regexes were subsequently adopted by
the subfields of automata theory (models of computation) and the description and classification of formal languages, motivated by Kleene's attempt to describe early artificial neural networks. (Kleene introduced it as an alternative to McCulloch & Pitts's "prehensible", but admitted "We would welcome any suggestions as to a more descriptive term.") Other early implementations of pattern matching include
the suffix tree know what leaves are underneath them. The latter can be accomplished by running a DFS algorithm from the root of the suffix tree. Some search methods, for instance trigram search, are intended to find a "closeness" score between the search string and the text rather than a "match/non-match". These are sometimes called "fuzzy" searches. The various algorithms can be classified by
the symbols ∪, +, or ∨ for alternation instead of the vertical bar. The formal definition of regular expressions is minimal on purpose, and avoids defining ? and +—these can be expressed as follows: a+ = aa*, and a? = (a|ε). Sometimes the complement operator is added, to give a generalized regular expression; here the complement of R matches all strings over Σ* that do not match R. In principle,
the use of regular expressions, many search languages allowed simple wildcards, for example "*" to match any sequence of characters, and "?" to match a single character. Relics of this can be found today in the glob syntax for filenames, and in the SQL LIKE operator. Starting in 1997, Philip Hazel developed PCRE (Perl Compatible Regular Expressions), which attempts to closely mimic Perl's regex functionality and