VNP

Article snapshot taken from Wikipedia, available under the Creative Commons Attribution-ShareAlike license.

In computational complexity theory, arithmetic circuits are the standard model for computing polynomials. Informally, an arithmetic circuit takes as inputs either variables or numbers, and is allowed to either add or multiply two expressions it has already computed. Arithmetic circuits provide a formal way to understand the complexity of computing polynomials. The basic type of question in this line of research is "what is the most efficient way to compute a given polynomial $f$?"
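To make this question concrete, here is a small illustrative sketch in Python (not taken from the article): the polynomial $x^{16}$ can be computed by multiplying $x$ by itself fifteen times, or by a circuit that only squares four times; counting product gates shows the gap. The function names are my own.

```python
# Illustrative sketch: two ways to compute the polynomial x^16,
# compared by their number of product gates.

def naive_power(x, n):
    """Multiply x by itself n - 1 times: n - 1 product gates."""
    result, gates = x, 0
    for _ in range(n - 1):
        result, gates = result * x, gates + 1
    return result, gates

def squaring_power(x, k):
    """Repeated squaring computes x**(2**k) with only k product gates."""
    result, gates = x, 0
    for _ in range(k):
        result, gates = result * result, gates + 1
    return result, gates

value_naive, gates_naive = naive_power(2, 16)      # evaluate at x = 2
value_square, gates_square = squaring_power(2, 4)
assert value_naive == value_square == 2 ** 16
print(gates_naive, gates_square)  # 15 4
```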

VNP may refer to:

VNP (class), standing for Valiant's NP, an arithmetic circuit complexity class
Ventricular Natriuretic Peptide, an alternative name for Brain natriuretic peptide
Voters Not Politicians, a political movement in Michigan
Voyageurs National Park, northern Minnesota, USA
Vulcan nerve pinch, a fictional manoeuvre
Video Network Platform
Virtual Network Protocol
Beijing South railway station, China Railway telegraph code VNP
von Neumann probe

a directed acyclic graph as follows. Every node in it with indegree zero is called an input gate and is labeled by either a variable $x_{i}$ or a field element in $F$. Every other gate is labeled by either $+$ or $\times$; in

a circuit of size $s$, then $f$ also has a circuit of size polynomial in $r$ and $s$ and of depth $O(\log(r)\log(s))$. For example, any polynomial of degree $n$ that has

a circuit of size on the order of $n^{3}$. Strassen showed that we can, in fact, multiply two matrices using a circuit of size roughly $n^{2.807}$. Strassen's basic idea is a clever way of multiplying $2\times 2$ matrices (a recursive sketch of this scheme is given below). This idea
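As a hedged illustration (not part of the article), the following sketch implements Strassen's seven-product scheme recursively for square matrices whose dimension is a power of two; the helper name `strassen` and the use of NumPy are my own choices.

```python
import numpy as np

def strassen(A, B):
    """Multiply square matrices whose size is a power of two
    using Strassen's seven recursive products (illustrative sketch)."""
    n = A.shape[0]
    if n == 1:
        return A * B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    M1 = strassen(A11 + A22, B11 + B22)
    M2 = strassen(A21 + A22, B11)
    M3 = strassen(A11, B12 - B22)
    M4 = strassen(A22, B21 - B11)
    M5 = strassen(A11 + A12, B22)
    M6 = strassen(A21 - A11, B11 + B12)
    M7 = strassen(A12 - A22, B21 + B22)
    C11 = M1 + M4 - M5 + M7
    C12 = M3 + M5
    C21 = M2 + M4
    C22 = M1 - M2 + M3 + M6
    return np.block([[C11, C12], [C21, C22]])

A = np.random.randint(0, 10, (4, 4))
B = np.random.randint(0, 10, (4, 4))
assert np.array_equal(strassen(A, B), A @ B)  # matches the ordinary product
```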

a complete polynomial $f$ for this class is a polynomial with two properties: (1) it is part of the class, and (2) any other polynomial $g$ in the class is easier than $f$, in the sense that if $f$ has a small circuit then so does $g$. Valiant showed that

a directed graph is its incidence matrix. See direction for more definitions. For a vertex, the number of head ends adjacent to a vertex is called the indegree of the vertex and the number of tail ends adjacent to a vertex is its outdegree (called branching factor in trees). Let $G=(V,E)$ and $v\in V$. The indegree of $v$ is denoted $\deg^{-}(v)$ and its outdegree is denoted $\deg^{+}(v)$. A vertex with $\deg^{-}(v)=0$

a directed graphic or directed graphical sequence. This problem can either be solved by the Kleitman–Wang algorithm or by the Fulkerson–Chen–Anstee theorem. A directed graph is weakly connected (or just connected) if the undirected underlying graph obtained by replacing all directed edges of the graph with undirected edges is a connected graph; a small weak-connectivity check is sketched below. A directed graph is strongly connected or strong if it contains
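A minimal sketch of the weak-connectivity test mentioned above (my own, with made-up vertex names): forget the arc directions and check that a breadth-first search from one vertex reaches every vertex.

```python
from collections import deque

def is_weakly_connected(vertices, arcs):
    """Replace every arc by an undirected edge, then check that a
    breadth-first search from one vertex reaches all vertices."""
    vertices = list(vertices)
    if not vertices:
        return True
    neighbors = {v: set() for v in vertices}
    for tail, head in arcs:
        neighbors[tail].add(head)
        neighbors[head].add(tail)  # the direction is forgotten
    seen = {vertices[0]}
    queue = deque([vertices[0]])
    while queue:
        v = queue.popleft()
        for u in neighbors[v]:
            if u not in seen:
                seen.add(u)
                queue.append(u)
    return len(seen) == len(vertices)

# a -> b -> c is weakly connected (but not strongly connected).
print(is_weakly_connected(["a", "b", "c"], [("a", "b"), ("b", "c")]))  # True
```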

a given problem can be solved as easily as it can be shown that a solution to the given problem exists. In his seminal work, Valiant suggested an algebraic analog of this problem, the VP vs. VNP problem. The class VP is the algebraic analog of P; it is the class of polynomials $f$ of polynomial degree that have polynomial size circuits over a fixed field $K$. The class VNP

a multidigraph with loops is the integer-valued matrix with rows and columns corresponding to the vertices, where a nondiagonal entry $a_{ij}$ is the number of arcs from vertex $i$ to vertex $j$, and the diagonal entry $a_{ii}$ is the number of loops at vertex $i$. The adjacency matrix of a directed graph is a logical matrix, and is unique up to permutation of rows and columns. Another matrix representation for
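A minimal sketch of this construction (my own; it assumes vertices are indexed $0,\ldots,n-1$): parallel arcs are counted in the off-diagonal entries and loops on the diagonal.

```python
def adjacency_matrix(n, arcs):
    """Entry [i][j] counts the arcs from vertex i to vertex j;
    the diagonal entry [i][i] counts the loops at vertex i."""
    A = [[0] * n for _ in range(n)]
    for i, j in arcs:
        A[i][j] += 1
    return A

# Two parallel arcs 0 -> 1, one arc 1 -> 2, and a loop at vertex 2.
print(adjacency_matrix(3, [(0, 1), (0, 1), (1, 2), (2, 2)]))
# [[0, 2, 0], [0, 0, 1], [0, 0, 1]]
```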

a polynomial $f$, one can construct a new circuit of size at most $O(s)$ that computes $f$ and all the $n$ partial derivatives of $f$. Since the partial derivatives of $x_{1}^{d}+\cdots+x_{n}^{d}$ are $dx_{1}^{d-1},\ldots,dx_{n}^{d-1}$,
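A quick symbolic check of these partial derivatives, as a sketch using SymPy (my own choice of tool and of the values $n=4$, $d=5$):

```python
import sympy as sp

n, d = 4, 5
xs = sp.symbols(f"x1:{n + 1}")   # the variables x1, x2, x3, x4
f = sum(x ** d for x in xs)      # the polynomial x1^d + ... + xn^d

# Each partial derivative is d * xi^(d-1), as stated above.
for x in xs:
    assert sp.diff(f, x) == d * x ** (d - 1)
print([sp.diff(f, x) for x in xs])
```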

a polynomial $f$, we may ask ourselves what is the best way to compute it; for example, what is the smallest size of a circuit computing $f$. The answer to this question consists of two parts. The first part is finding some circuit that computes $f$; this part

a polynomial of degree $2^{2^{n}}$ requires a circuit of size roughly $2^{n}$. So, the main goal is to prove lower bounds for polynomials of small degree, say, polynomial in $n$. In fact, as in many areas of mathematics, counting arguments tell us that there are polynomials of polynomial degree that require circuits of superpolynomial size. However, these counting arguments usually do not improve our understanding of computation. The following problem

a polynomial size circuit, also has a polynomial size circuit of depth roughly $\log^{2}(n)$. This result generalizes the circuit of Berkowitz to any polynomial of polynomial degree that has a polynomial size circuit (such as the determinant). The analog of this result in the Boolean setting is believed to be false. One corollary of this result

is a simulation of circuits by relatively small formulas, formulas of quasipolynomial size: if a polynomial $f$ of degree $r$ has a circuit of size $s$, then it has a formula of size $s^{O(\log(r))}$. This simulation

is called a source, as it is the origin of each of its outgoing arcs. Similarly, a vertex with $\deg^{+}(v)=0$ is called a sink, since it is the end of each of its incoming arcs. The degree sum formula states that, for a directed graph, $\sum_{v\in V}\deg^{-}(v)=\sum_{v\in V}\deg^{+}(v)=|A|$. If for every vertex $v\in V$, $\deg^{+}(v)=\deg^{-}(v)$, the graph is called a balanced directed graph. The degree sequence of a directed graph is

is considered to be directed from $x$ to $y$; $y$ is called the head and $x$ is called the tail of the arc; $y$ is said to be a direct successor of $x$, and $x$ is said to be a direct predecessor of $y$. If a path leads from $x$ to $y$, then $y$ is said to be a successor of $x$ and reachable from $x$, and $x$ is said to be a predecessor of $y$. The arc $(y,x)$ is called the reversed arc of $(x,y)$. The adjacency matrix of

VNP (class)

An arithmetic circuit $C$ over the field $F$ and the set of variables $x_{1},\ldots,x_{n}$ is

is easier than the depth reduction of Valiant et al. and was shown earlier by Hyafil.

Outdegree

In mathematics, and more specifically in graph theory, a directed graph (or digraph) is a graph that is made up of a set of vertices connected by directed edges, often called arcs. In formal terms, a directed graph is an ordered pair $G=(V,A)$ where $V$ is a set whose elements are called vertices or nodes, and $A$ is a set of ordered pairs of vertices, called arcs or directed edges. It differs from an ordinary or undirected graph, in that

is given by Ryser's formula: for an $n\times n$ matrix $X=(x_{i,j})$, $\operatorname{perm}(X)=(-1)^{n}\sum_{S\subseteq\{1,\ldots,n\}}(-1)^{|S|}\prod_{i=1}^{n}\sum_{j\in S}x_{i,j}$ (this is a depth three circuit; a direct evaluation is sketched below). In terms of proving lower bounds, our knowledge is very limited. Since we study the computation of formal polynomials, we know that polynomials of very large degree require large circuits, for example,
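A minimal sketch (mine, not from the article) that evaluates Ryser's formula term by term; it sums over all $2^{n}$ subsets of columns, which matches the roughly $2^{n}$ circuit size mentioned above.

```python
from itertools import combinations

def permanent_ryser(X):
    """Evaluate Ryser's formula: (-1)^n times the sum over column
    subsets S of (-1)^{|S|} * prod_i (sum_{j in S} X[i][j])."""
    n = len(X)
    total = 0
    for k in range(1, n + 1):          # the empty subset contributes 0
        for S in combinations(range(n), k):
            prod = 1
            for i in range(n):
                prod *= sum(X[i][j] for j in S)
            total += (-1) ** k * prod
    return (-1) ** n * total

# The permanent of [[1, 2], [3, 4]] is 1*4 + 2*3 = 10.
print(permanent_ryser([[1, 2], [3, 4]]))  # 10
```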

is the analog of NP. VNP can be thought of as the class of polynomials $f$ of polynomial degree such that, given a monomial, we can determine its coefficient in $f$ efficiently, with a polynomial size circuit. One of the basic notions in complexity theory is the notion of completeness. Given a class of polynomials (such as VP or VNP),

is the main open problem in this area of research: find an explicit polynomial of polynomial degree that requires circuits of superpolynomial size. The state of the art is an $\Omega(n\log d)$ lower bound for the size of a circuit computing, e.g., the polynomial $x_{1}^{d}+\cdots+x_{n}^{d}$, given by Strassen and by Baur and Strassen. More precisely, Strassen used Bézout's theorem to show that any circuit that simultaneously computes

is the problem of finding a directed graph with the degree sequence being a given sequence of positive integer pairs. (Trailing pairs of zeros may be ignored since they are trivially realized by adding an appropriate number of isolated vertices to the directed graph.) A sequence which is the degree sequence of some directed graph, i.e. for which the directed graph realization problem has a solution, is called

is the starting point of the best theoretical way of multiplying two matrices, which takes time roughly $n^{2.376}$. Another interesting story lies behind the computation of the determinant of an $n\times n$ matrix. The naive way of computing the determinant requires circuits of size roughly $n!$. Nevertheless, we know that there are circuits of size polynomial in $n$ for computing

is usually called upper bounding the complexity of $f$. The second part is showing that no other circuit can do better; this part is called lower bounding the complexity of $f$. Although these two tasks are strongly related, proving lower bounds is usually harder, since in order to prove a lower bound one needs to argue about all circuits at

the $n$ polynomials $x_{1}^{d},\ldots,x_{n}^{d}$ is of size $\Omega(n\log d)$, and later Baur and Strassen showed the following: given an arithmetic circuit of size $s$ computing

the best circuit known for the permanent of an $n\times n$ matrix. As for the determinant, the naive circuit for the permanent has size roughly $n!$. However, for the permanent the best circuit known has size roughly $2^{n}$, which

the circuit in the figure has size six and depth two. An arithmetic circuit computes a polynomial in the following natural way. An input gate computes the polynomial it is labeled by. A sum gate $v$ computes the sum of the polynomials computed by its children (a gate $u$ is a child of $v$ if

the determinant. These circuits, however, have depth that is linear in $n$. Berkowitz came up with an improvement: a circuit of size polynomial in $n$, but of depth $O(\log^{2}(n))$. We would also like to mention

the directed edge $(v,u)$ is in the graph). A product gate computes the product of the polynomials computed by its children. Consider the circuit in the figure, for example: the input gates compute (from left to right) $x_{1},x_{2}$ and $1$,

the field elements are nonnegative real numbers), constant depth circuits, and multilinear circuits (in which every gate computes a multilinear polynomial). These restricted models have been studied extensively and some understanding and results were obtained. The most interesting open problem in computational complexity theory is the P vs. NP problem. Roughly, this problem is to determine whether

the first case it is a sum gate and in the second a product gate. An arithmetic formula is a circuit in which every gate has outdegree one (and so the underlying graph is a directed tree). A circuit has two complexity measures associated with it: size and depth. The size of a circuit is the number of gates in it, and the depth of a circuit is the length of the longest directed path in it. For example,

the latter is defined in terms of unordered pairs of vertices, which are usually called edges, links or lines. The aforementioned definition does not allow a directed graph to have multiple arrows with the same source and target nodes, but some authors consider a broader definition that allows directed graphs to have such multiple arcs (namely, they allow the arc set to be a multiset). Sometimes these entities are called directed multigraphs (or multidigraphs). On

the list of its indegree and outdegree pairs; for the above example we have degree sequence ((2, 0), (2, 2), (0, 2), (1, 1)). The degree sequence is a directed graph invariant, so isomorphic directed graphs have the same degree sequence. However, the degree sequence does not, in general, uniquely identify a directed graph; in some cases, non-isomorphic digraphs have the same degree sequence. The directed graph realization problem
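The figure referred to above is not reproduced in this snapshot, so the sketch below uses a hypothetical four-vertex digraph of my own that realizes the quoted degree sequence ((2, 0), (2, 2), (0, 2), (1, 1)) and computes the sequence from the arc list.

```python
def degree_sequence(vertices, arcs):
    """Return the list of (indegree, outdegree) pairs, one per vertex."""
    indeg = {v: 0 for v in vertices}
    outdeg = {v: 0 for v in vertices}
    for tail, head in arcs:
        outdeg[tail] += 1
        indeg[head] += 1
    return [(indeg[v], outdeg[v]) for v in vertices]

# A hypothetical 4-vertex digraph realizing the sequence quoted above.
vertices = ["a", "b", "c", "d"]
arcs = [("c", "a"), ("c", "b"), ("b", "a"), ("b", "d"), ("d", "b")]
print(degree_sequence(vertices, arcs))  # [(2, 0), (2, 2), (0, 2), (1, 1)]
```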

the lower bound of Strassen applies to $x_{1}^{d}+\cdots+x_{n}^{d}$ as well. This is one example where some upper bound helps in proving lower bounds; the construction of a circuit given by Baur and Strassen implies a lower bound for more general polynomials. The inability to prove lower bounds brings us to consider simpler models of computation. Some examples are: monotone circuits (in which all

the other hand, the aforementioned definition allows a directed graph to have loops (that is, arcs that directly connect nodes with themselves), but some authors consider a narrower definition that does not allow directed graphs to have loops. Directed graphs without loops may be called simple directed graphs, while directed graphs with loops may be called loop-digraphs (see section Types of directed graph). An arc $(x,y)$

the permanent is complete for the class VNP. So in order to show that VP is not equal to VNP, one needs to show that the permanent does not have polynomial size circuits. This remains an outstanding open problem. One benchmark in our understanding of the computation of polynomials is the work of Valiant, Skyum, Berkowitz and Rackoff. They showed that if a polynomial $f$ of degree $r$ has

the same time. Note that we are interested in the formal computation of polynomials, rather than the functions that the polynomials define. For example, consider the polynomial $x^{2}+x$; over the field of two elements this polynomial represents the zero function, but it is not the zero polynomial. This is one of the differences between

the study of arithmetic circuits and the study of Boolean circuits. In Boolean complexity, one is mostly interested in computing a function, rather than some representation of it (in our case, a representation by a polynomial). This is one of the reasons that make Boolean complexity harder than arithmetic complexity. The study of arithmetic circuits may also be considered as one of the intermediate steps towards

the study of the Boolean case, which we hardly understand. As part of the study of the complexity of computing polynomials, some clever circuits (alternatively, algorithms) were found. A well-known example is Strassen's algorithm for matrix product. The straightforward way of computing the product of two $n\times n$ matrices requires

the sum gates compute $x_{1}+x_{2}$ and $x_{2}+1$, and the product gate computes $(x_{1}+x_{2})x_{2}(x_{2}+1)$. Given
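As a hedged sketch (not from the article), the example circuit can be written as a small DAG and evaluated gate by gate; the gate names and the dictionary encoding are assumptions made for illustration.

```python
import math

# Gate table for the example circuit: inputs carry a variable name or a
# constant, and internal gates list their children. Gate names are made up.
circuit = {
    "in1": ("input", "x1"),
    "in2": ("input", "x2"),
    "in3": ("input", 1),
    "sum1": ("+", ["in1", "in2"]),           # computes x1 + x2
    "sum2": ("+", ["in2", "in3"]),           # computes x2 + 1
    "prod": ("*", ["sum1", "in2", "sum2"]),  # computes (x1 + x2) x2 (x2 + 1)
}

def evaluate(gate, assignment, cache=None):
    """Evaluate the polynomial computed at `gate` on the given assignment."""
    if cache is None:
        cache = {}
    if gate not in cache:
        op, arg = circuit[gate]
        if op == "input":
            cache[gate] = assignment[arg] if isinstance(arg, str) else arg
        else:
            values = [evaluate(child, assignment, cache) for child in arg]
            cache[gate] = sum(values) if op == "+" else math.prod(values)
    return cache[gate]

# At x1 = 2, x2 = 3 the circuit outputs (2 + 3) * 3 * (3 + 1) = 60.
print(evaluate("prod", {"x1": 2, "x2": 3}))  # 60
```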
