A fully polynomial-time approximation scheme (FPTAS) is an algorithm for finding approximate solutions to function problems, especially optimization problems. An FPTAS takes as input an instance of the problem and a parameter ε > 0. It returns as output a value which is at least (1 − ε) times the correct value, and at most (1 + ε) times the correct value.
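As a concrete illustration of this definition, the following is a minimal sketch (in Python, with illustrative names) of the classic value-scaling FPTAS for the 0-1 knapsack maximization problem. This is the standard textbook construction, not the state-trimming scheme described later in this article; it assumes all weights and values are positive integers.

    def knapsack_fptas(weights, values, capacity, eps):
        """Classic value-scaling FPTAS for 0-1 knapsack (maximization).
        Returns a value v with (1 - eps) * OPT <= v <= OPT, in time
        polynomial in the number of items and in 1/eps."""
        # Items that do not fit on their own can never be used.
        items = [(w, v) for w, v in zip(weights, values) if w <= capacity]
        if not items:
            return 0
        n = len(items)
        vmax = max(v for _, v in items)
        scale = eps * vmax / n                  # scaling factor
        scaled = [int(v // scale) for _, v in items]
        # dp[p] = minimum weight needed to reach scaled profit p
        max_profit = sum(scaled)
        INF = float("inf")
        dp = [0.0] + [INF] * max_profit
        for (w, _), p in zip(items, scaled):
            for q in range(max_profit, p - 1, -1):
                if dp[q - p] + w < dp[q]:
                    dp[q] = dp[q - p] + w
        best = max(q for q in range(max_profit + 1) if dp[q] <= capacity)
        return best * scale                     # at least (1 - eps) * OPT

The run-time is O(n^3/ε), polynomial in both the number of items and 1/ε, which is what distinguishes an FPTAS from a general PTAS.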
In the context of optimization problems, the correct value is understood to be the value of the optimal solution, and it is often implied that an FPTAS should produce a valid solution (and not just the value of the solution). Returning a value and finding a solution with that value are equivalent assuming that the problem possesses self-reducibility. Importantly, the run-time of an FPTAS is polynomial in
a y satisfying (x, y) ∈ R, the algorithm produces one such y, and if there are no such y, it rejects. A promise function problem is allowed to do anything (thus may not terminate) if no such y exists. A well-known function problem
a certain class of dynamic programs to an FPTAS. The scheme handles optimization problems in which the input is defined as follows: It is assumed that the problem has a dynamic-programming (DP) algorithm using states. Each state is a vector made of some b non-negative integers, where b is independent of
a certificate y for x". This function problem is called the function variant of L; it belongs to the class FNP. FNP can be thought of as the function class analogue of NP, in that solutions of FNP problems can be efficiently (i.e., in polynomial time in terms of
a common due-date on any fixed number of machines: m|| ∑ w_j |C_j − d|. Simple dynamic programs add to the above formulation the following components: The original DP is modified as follows: A problem is called benevolent if it satisfies the following conditions (which extend conditions 1, 2, 3 above): For every benevolent problem,
a minimization problem. Here are some examples of extremely-benevolent problems that have an FPTAS by the above theorem. 1. Multiway number partitioning (equivalently, Identical-machines scheduling) with the goal of minimizing the largest sum is extremely-benevolent. Here, we have a = 1 (the inputs are integers) and b = the number of bins (which is considered fixed). Each state is a vector of b integers representing
a single machine: 1|deteriorate| max C_j. 5. Total late work on a single machine: 1|| ∑ V_j. 6. Total weighted late work on a single machine: 1|| ∑ w_j V_j. Despite
is (d, r^k)-close to s_s. Consider now the state s* in S_n which corresponds to the optimal solution (that is, g(s*) = OPT). By the lemma above, there is a state t* in T_n which is (d, r^n)-close to s*. Since proximity is preserved by the value function, g(t*) ≥ r^{−Gn} · g(s*) for a maximization problem. By definition of r, r^{−Gn} ≥ (1 − ε). So g(t*) ≥ (1 − ε) · OPT. A similar argument works for
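The last inequality depends on how r is chosen; the following is a sketch assuming the common choice r = 1 + ε/(2Gn), which may differ from the exact definition used elsewhere. With that choice, r^{Gn} = (1 + ε/(2Gn))^{Gn} ≤ e^{ε/2} ≤ 1/(1 − ε) for every 0 < ε < 1, and therefore r^{−Gn} ≥ 1 − ε, which is exactly the bound used above.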
is a subset of the S_k in the original (untrimmed) DP. The main lemma for proving the correctness of the FPTAS is: For every step k in 0, ..., n, for every state s_s in S_k, there is a state s_t in T_k that is (d, r^k)-close to s_s. The proof is by induction on k. For k = 0 we have T_k = S_k; every state is (d, 1)-close to itself. Suppose the lemma holds for k − 1. For every state s_s in S_k, let s_{s−} be one of its predecessors in S_{k−1}, so that f(s_{s−}, x) = s_s. By
is an irrational number. To approximate it by a rational number, we can compute the sum of the first k elements, for some finite k. One can show that the error in approximation is about 1/(2k^2). Therefore, to get an error of ε, we need about √(1/(2ε)) elements, so this
is an FPTAS. Note that this particular sum can be represented by another sum in which only O(log(1/ε)) elements are needed, so the sum can actually be approximated in polynomial time in the encoding length of ε. In computational complexity theory, a function problem is a computational problem where a single output (of a total function) is expected for every input, but
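For the series example above, a minimal sketch of the truncation-based approximation (the function name is illustrative):

    import math

    def approx_sum_of_inverse_cubes(eps):
        """Approximate sum_{i >= 1} 1/i^3 by a partial sum.
        The tail after k terms is at most 1/(2*k^2), so taking
        k = ceil(sqrt(1/(2*eps))) terms keeps the error below eps."""
        k = math.ceil(math.sqrt(1 / (2 * eps)))
        return sum(1 / i**3 for i in range(1, k + 1))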
is answered 'yes' has a polynomial-size certificate y which serves as a proof for the 'yes' answer. Thus, the set of these tuples (x, y) forms a relation, representing the function problem "given x in L, find
is called extremely-benevolent if it satisfies the following three conditions: For every extremely-benevolent problem, the dynamic program can be converted into an FPTAS. Define: The FPTAS runs similarly to the DP, but in each step, it trims the state set into a smaller set T_k that contains exactly one state in each r-box. The algorithm of the FPTAS is: The run-time of the FPTAS
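A minimal sketch of the trimming step, assuming states are tuples of non-negative integers and that the r-box of a state is identified, coordinate by coordinate, by the index floor(log_r(s_j)/d_j) (coordinates with d_j = 0 must match exactly, and zero entries get a box of their own):

    import math

    def trim(states, r, d):
        """Keep exactly one representative state per r-box (a sketch, not the
        exact pseudocode of the scheme).  Any two states that fall into the
        same box are (d, r)-close, so dropping all but one of them is safe
        up to a factor depending on r."""
        boxes = {}
        for s in states:
            key = tuple(
                s[j] if d[j] == 0                        # exact match required
                else -1 if s[j] == 0                     # zero is only close to zero
                else math.floor(math.log(s[j], r) / d[j])
                for j in range(len(s))
            )
            boxes.setdefault(key, s)                     # first state in the box wins
        return set(boxes.values())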
is fixed (this is required because R, the number of r-boxes, is exponential in b). Denoted Pm|| max C_j or Qm|| max C_j or Rm|| max C_j. 2. Sum of cubed job completion time on any fixed number of identical or uniform machines -
is given by the Functional Boolean Satisfiability Problem, FSAT for short. The problem, which is closely related to the SAT decision problem, can be formulated as follows: In this case the relation R is given by tuples of suitably encoded Boolean formulas and satisfying assignments. While a SAT algorithm, fed with a formula φ, only needs to return "unsatisfiable" or "satisfiable", an FSAT algorithm needs to return some satisfying assignment in
is in P (easy), while the integer factorization problem is believed to be hard for a classical computer. There are several (slightly different) notions of self-reducibility. Function problems can be reduced much like decision problems: given function problems Π_R and Π_S, we say that Π_R reduces to Π_S if there exist polynomial-time computable functions f and g such that for all instances x of R and possible solutions y of S, it holds that whenever (f(x), y) ∈ S, also (x, g(x, y)) ∈ R. It
is polynomial in the total number of possible states in each T_i, which is at most the total number of r-boxes, which is at most R, which is polynomial in n, log(X), and 1/ε. Note that, for each state s_u in U_k, its subset T_k contains at least one state s_t that is (d, r)-close to s_u. Also, each U_k
is solvable in polynomial time using an oracle deciding SAT. In general, a problem in NP is called self-reducible if its function variant can be solved in polynomial time using an oracle deciding the original problem. Every NP-complete problem is self-reducible. It is conjectured that the integer factorization problem is not self-reducible, because deciding whether an integer is prime
is therefore possible to define FNP-complete problems analogous to NP-complete problems: a problem Π_R is FNP-complete if every problem in FNP can be reduced to Π_R. The complexity class of FNP-complete problems is denoted by FNP-C or FNPC. Hence
the drawback of being incomplete: not every input x has a counterpart y such that (x, y) ∈ R. Therefore, the question of computability of proofs is not separated from the question of their existence. To overcome this problem it is convenient to consider
the dynamic program can be converted into an FPTAS similarly to the one above, with two changes: Here are some examples of benevolent problems that have an FPTAS by the above theorem. 1. The 0-1 knapsack problem is benevolent. Here, we have a = 2: each input is a 2-vector (weight, value). There is a DP with b = 2: each state encodes (current weight, current value). There are two transition functions: f_1 corresponds to adding
the formula φ is satisfiable. After that the algorithm can fix variable x_1 to TRUE and ask again. If the resulting formula is still satisfiable, the algorithm keeps x_1 fixed to TRUE and continues to fix x_2; otherwise it decides that x_1 has to be FALSE and continues. Thus, FSAT
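A minimal sketch of this search, assuming a hypothetical decision subroutine sat_oracle(formula) and a hypothetical formula.substitute(var, value) method (neither is part of any standard library):

    def fsat_via_sat_oracle(formula, variables, sat_oracle):
        """Find a satisfying assignment with at most len(variables) + 1
        oracle calls, fixing one variable at a time as described above."""
        if not sat_oracle(formula):
            return None                          # the formula is unsatisfiable
        assignment = {}
        for var in variables:
            fixed = formula.substitute(var, True)
            if sat_oracle(fixed):                # still satisfiable with var = TRUE
                assignment[var] = True
            else:                                # otherwise var must be FALSE
                assignment[var] = False
                fixed = formula.substitute(var, False)
            formula = fixed
        return assignment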
the generality of the above result, there are cases in which it cannot be used. 1. In the total tardiness problem 1|| ∑ T_j, the dynamic programming formulation of Lawler requires updating all states in the old state space some B times, where B is of the order of X (the maximum input size). The same is true for a DP for economic lot-sizing. In these cases,
the induction assumption, there is a state s_{t−} in T_{k−1} that is (d, r^{k−1})-close to s_{s−}. Since proximity is preserved by transitions (Condition 1 above), f(s_{t−}, x) is (d, r^{k−1})-close to f(s_{s−}, x) = s_s. This f(s_{t−}, x) is in U_k. After the trimming, there is a state s_t in T_k that is (d, r)-close to f(s_{t−}, x). This s_t
the input problem: it can be in O(nV), where V is the largest integer that can appear in a state. If V is in O(X), then the run-time is in O(nX), which is only pseudo-polynomial time, since it is exponential in the problem size, which is in O(log X). The way to make it polynomial is to trim the state-space: instead of keeping all possible states in each step, keep only a subset of
the input. The DP works in n steps. At each step i, it processes the input x_i, and constructs a set of states S_i. Each state encodes a partial solution to the problem, using inputs x_1, ..., x_i. The components of the DP are: The algorithm of the DP is: The run-time of the DP is linear in the number of possible states. In general, this number can be exponential in the size of
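A minimal sketch of this state-based DP loop (illustrative Python; transitions plays the role of the set of transition functions, filters the matching feasibility tests):

    def run_dp(inputs, initial_states, transitions, filters):
        """Build the state sets S_0, S_1, ..., S_n by applying every
        transition function to every state of the previous step and
        keeping only the transitions allowed by the matching filter."""
        states = set(initial_states)             # S_0
        for x in inputs:                         # step i processes input x_i
            next_states = set()
            for s in states:
                for f, h in zip(transitions, filters):
                    if h(s, x):                  # is this transition feasible here?
                        next_states.add(f(s, x))
            states = next_states
        return states                            # S_n: optimize g over these states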
the latter case. Other notable examples include the travelling salesman problem, which asks for the route taken by the salesman, and the integer factorization problem, which asks for the list of factors. Consider an arbitrary decision problem L in the class NP. By the definition of NP, each problem instance x that
the latter denoted by Qm|| ∑ C_j^3 - is extremely-benevolent with a = 1, b = 3, d = (1, 1, 3). It can be extended to any fixed power of the completion time. 3. Sum of weighted completion time on any fixed number of identical or uniform machines - the latter denoted by Qm|| ∑ w_j C_j. 4. Sum of completion time on any fixed number of identical or uniform machines, with time-dependent processing times: Qm|time-dep| ∑ C_j. This holds even for weighted sum of completion time. 5. Weighted earliness-tardiness about
the length of the input) verified, but not necessarily efficiently found. In contrast, the class FP, which can be thought of as the function class analogue of P, consists of function problems whose solutions can be found in polynomial time. Observe that the problem FSAT introduced above can be solved using only polynomially many calls to a subroutine which decides the SAT problem: an algorithm can first ask whether
the next input item, and f_2 corresponds to not adding it. The corresponding filter functions are: h_1 verifies that the weight with the next input item is at most the knapsack capacity; h_2 always returns True. The value function g(s) returns s_2. The initial state-set is {(0, 0)}. The degree vector is (1, 1). The dominance relation is trivial. The quasi-dominance relation compares only
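A minimal sketch of this knapsack DP with the two transitions and filters just described (untrimmed, so it runs in pseudo-polynomial time; adding the trimming step from the scheme above is what yields the FPTAS):

    def knapsack_state_dp(items, capacity):
        """Exact state-based DP for 0-1 knapsack.  A state is the pair
        (current weight, current value); f1 adds the next item, guarded by
        the capacity filter h1; f2 skips it (h2 always passes); g(s) = s2."""
        states = {(0, 0)}                            # S_0
        for w, v in items:
            next_states = set()
            for cw, cv in states:
                if cw + w <= capacity:               # h1: capacity check for f1
                    next_states.add((cw + w, cv + v))
                next_states.add((cw, cv))            # f2: do not add the item
            states = next_states
        return max(cv for _, cv in states)           # maximize g(s) = s2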
the number of transition functions in F is B, which is exponential in log(X), so the second technical condition is violated. The state-trimming technique is not useful, but another technique - input-rounding - has been used to design an FPTAS. 2. In the variance minimization problem 1|| CTV, the objective function is g(s) = s_5 − (s_4 − s_3)^2 / n, which violates Condition 2, so
the output is more complex than that of a decision problem. For function problems, the output is not simply 'yes' or 'no'. A functional problem P is defined by a relation R over strings of an arbitrary alphabet Σ, that is, a relation R ⊆ Σ* × Σ*. An algorithm solves P if, for every input x such that there exists
the problem FSAT is also an FNP-complete problem, and it holds that P = NP if and only if FP = FNP. The relation R(x, y) used to define function problems has
the problem size and in 1/ε. This is in contrast to a general polynomial-time approximation scheme (PTAS). The run-time of a general PTAS is polynomial in the problem size for each specific ε, but might be exponential in 1/ε. The term FPTAS may also be used to refer to the class of problems that have an FPTAS. FPTAS is a subset of PTAS, and unless P = NP, it is a strict subset. All problems in FPTAS are fixed-parameter tractable with respect to
the standard parameterization. Any strongly NP-hard optimization problem with a polynomially bounded objective function cannot have an FPTAS unless P = NP. However, the converse fails: for example, knapsack with two constraints is not strongly NP-hard, yet (unless P = NP) it has no FPTAS even when the optimal objective is polynomially bounded. Woeginger presented a general scheme for converting
the states; remove states that are "sufficiently close" to other states. Under certain conditions, this trimming can be done in a way that does not change the objective value by too much. To formalize this, we assume that the problem at hand has a non-negative integer vector d = (d_1, ..., d_b), called the degree vector of the problem. For every real number r > 1, we say that two state-vectors s_1, s_2 are (d, r)-close if, for each coordinate j in 1, ..., b: r^{−d_j} · s_{1,j} ≤ s_{2,j} ≤ r^{d_j} · s_{1,j} (in particular, if d_j = 0 for some j, then s_{1,j} = s_{2,j}). A problem
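A direct transcription of this definition as a small predicate (illustrative Python):

    def dr_close(s1, s2, d, r):
        """Return True iff the state-vectors s1 and s2 are (d, r)-close:
        r**(-d[j]) * s1[j] <= s2[j] <= r**(d[j]) * s1[j] in every coordinate j,
        with exact equality required wherever d[j] == 0."""
        return all(
            s1[j] == s2[j] if d[j] == 0
            else r ** (-d[j]) * s1[j] <= s2[j] <= r ** (d[j]) * s1[j]
            for j in range(len(s1))
        )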
the sums of the b bins. There are b functions: each function j represents inserting the next input into bin j. The function g(s) picks the largest element of s. S_0 = {(0, ..., 0)}. The conditions for extreme-benevolence are satisfied with degree-vector d = (1, ..., 1) and G = 1. The result extends to Uniform-machines scheduling and Unrelated-machines scheduling whenever the number of machines
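A minimal sketch of this DP for multiway number partitioning (untrimmed; the FPTAS additionally trims each state set into r-boxes as described earlier):

    def multiway_partition_dp(inputs, b):
        """Exact DP whose states are b-tuples of bin sums; transition j
        inserts the next input into bin j, and g(s) is the largest bin sum."""
        states = {(0,) * b}                          # S_0 = {(0, ..., 0)}
        for x in inputs:
            next_states = set()
            for s in states:
                for j in range(b):                   # put x into bin j
                    next_states.add(s[:j] + (s[j] + x,) + s[j + 1:])
            states = next_states
        return min(max(s) for s in states)           # minimize the largest sum g(s)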
the theorem cannot be used. But different techniques have been used to design an FPTAS. A different kind of problem in which an FPTAS may be useful is finding rational numbers that approximate some real numbers. For example, consider the infinite series ∑_{i=1}^∞ 1/i^3. The sum
the weight coordinate: s quasi-dominates t iff s_1 ≤ t_1. The implication of this is that, if state t has a higher weight than state s, then the transition functions are allowed to not preserve the proximity between t and s (it is possible, for example, that s has a successor and t does not have a corresponding successor). A similar algorithm was presented earlier by Ibarra and Kim. The run-time of this FPTAS can be improved to O(n log(1/ε) + 1/ε^4) operations on integers. The exponent
was later improved to 2.5. 2. Minimizing the weighted number of tardy jobs, or maximizing the weighted number of early jobs, on a single machine; denoted 1|| ∑ w_j U_j. 3. Batch scheduling for minimizing the weighted number of tardy jobs: 1|batch| ∑ w_j U_j. 4. Makespan of deteriorating jobs on