SAT requires exhaustive search

Ke XU, Guangyan ZHOU

Front. Comput. Sci., 2025, 19(12): 1912405. DOI: 10.1007/s11704-025-50231-4
Theoretical Computer Science
RESEARCH ARTICLE


Abstract

In this paper, we identify the distinction between non-brute-force computation and brute-force computation as the most fundamental problem in computer science. Subsequently, we prove, by the diagonalization method, that the constructed self-referential CSPs cannot be solved by non-brute-force computation, which is a result stronger than P ≠ NP. This constructive method for proving impossibility results is very different from (and missing among) existing approaches in computational complexity theory, but aligns with Gödel’s technique for proving logical impossibility. Just as Gödel showed that proving formal unprovability is feasible in mathematics, our results show that proving computational hardness is not hard in mathematics. Specifically, proving lower bounds for many problems, such as 3-SAT, can be challenging because these problems have various effective strategies available for avoiding exhaustive search. However, for self-referential examples that are extremely hard, exhaustive search becomes unavoidable, making its necessity easier to prove. Consequently, the separation between non-brute-force computation and brute-force computation is much simpler to establish than that between P and NP. Finally, our results are akin to Gödel’s incompleteness theorem, as they reveal the limits of reasoning and highlight the intrinsic distinction between syntax and semantics.


Keywords

CSP / SAT / exhaustive search / brute-force computation / lower bounds / self-reference / Model RB

Cite this article

Ke XU, Guangyan ZHOU. SAT requires exhaustive search. Front. Comput. Sci., 2025, 19(12): 1912405. DOI: 10.1007/s11704-025-50231-4


1 Introduction

Model RB is a random constraint satisfaction problem (CSP) model proposed by Xu and Li [1] in 2000, which can also be encoded into well-known NP-complete problems such as SAT and CLIQUE. The purpose of this model was to address the issue of trivial unsatisfiability that was prevalent in previous random CSP models. One of the key features of Model RB is that its domain size d grows with the number of variables n. Additionally, Model RB has been proved to exhibit exact phase transitions from satisfiability to unsatisfiability, making it a useful tool for analyzing and evaluating the performance of algorithms. Over the last two decades, Model RB has been extensively studied from multiple perspectives (e.g., [2–21]). Moreover, this model has gained significant popularity and widespread use in renowned international algorithm competitions. A random instance of Model RB with a planted solution, named frb100-40 (with n = 100 and d = 40), has remained unsolved since it was made available online in 2005 as a 20-year challenge for algorithms; despite numerous attempts, no one has been able to solve it thus far. In summary, these results suggest that Model RB possesses nice mathematical properties that can be easily derived. In contrast to its mathematical tractability, the random instances of this model, particularly those generated in the phase transition region, present a significant challenge for a wide range of algorithms and have proved extremely difficult to solve.

As shown in the proof of Gödel’s incompleteness theorem [22], the constructive approach plays an indispensable role in revealing the fundamental limitations of finite formal systems. An algorithm is essentially a mechanized finite formal system. In this paper, we study the limitations of algorithms based on the constructive approach. Specifically, we explore whether non-exhaustive (non-brute-force) algorithms can always replace exhaustive (brute-force) ones, or whether some computable problems inherently lack such alternatives. The Strong Exponential Time Hypothesis [23] conjectures that there is no non-brute-force algorithm for SAT. This problem is very similar to the foundational problem resolved by Gödel’s incompleteness theorem, originally proposed by David Hilbert nearly a century ago: whether a finite formal system can always replace a branch of mathematics (e.g., arithmetic) that contains infinitely many true mathematical statements. These two problems raise essentially the same deep philosophical question: can the part always replace the whole within the limits of reasoning? This inquiry concerns the limits of human knowledge, a subject extensively explored by many great philosophers (e.g., Laozi, Zeno, Socrates, Descartes, Kant, and Wittgenstein). Therefore, the most fundamental problem in computer science is non-brute-force computation vs. brute-force computation, rather than P vs. NP. From a mathematical perspective, the distinction between non-brute-force computation, which takes $O(T^{c})$ time (where $c<1$ is a constant), and brute-force computation, which takes $T$ time, seems more natural and intuitive than that between P and NP. From a practical standpoint, the framework of non-brute-force computation vs. brute-force computation is also broader and more universally applicable than that of P vs. NP. For instance, the P vs. NP paradigm does not apply to problems whose brute-force algorithms run in polynomial time, or to problems that lie outside NP. However, even in such cases, we can still explore the possibility of developing non-brute-force algorithms. Finally, from a historical perspective, non-brute-force computation vs. brute-force computation extends Gödel’s framework, which unveils the limits of mathematics. Similarly, P vs. NP builds on Turing’s framework, shedding light on the limits of machines. The limits of mathematics are more fundamental than those of machines, as anything mathematics cannot achieve is also beyond the reach of machines.

The advantages of Model RB enable us to choose specific threshold points at which instances with a symmetry requirement are on the edge between satisfiability and unsatisfiability. In fact, we will show that there exist instances, at exactly the same point, that are either satisfiable with exactly one solution or unsatisfiable but fail on only one constraint. The satisfiability of such instances can be flipped under a special symmetry mapping. As a result, they form a fixed point set under this mapping, allowing us to create the most indistinguishable examples (self-referential examples), which are a source of computational hardness. Based on the symmetry mapping, and driven by the famous method of diagonalization and self-reference, we show that unless exhaustive search is executed, the satisfiability of a certain constraint (and thus of the whole instance) can be changed while the subproblems of the whole instance remain unchanged. Therefore, whether the whole instance is satisfiable or unsatisfiable cannot be distinguished without exhaustive search. In summary, if we can construct the most indistinguishable examples with exactly the same method and the same parameter values (a task that is both rare and challenging), then it is not hard to understand and prove why they are extremely hard to solve.

2 Model RB

A random instance I of Model RB consists of the following:

● A set of variables $X=\{x_1,\ldots,x_n\}$: each variable $x_i$ takes values from its domain $D_i$, and the domain size is $|D_i|=d$, where $d=n^{\alpha}$ for $i=1,\ldots,n$, and $\alpha>0$ is a constant.

● A set of constraints $C=\{C_1,\ldots,C_m\}$ ($m=rn\ln d$, where $r>0$ is a constant): for each $i=1,\ldots,m$, the constraint $C_i=(X_i,R_i)$, where $X_i=(x_{i_1},x_{i_2},\ldots,x_{i_k})$ ($k\ge 2$ is a constant) is a sequence of $k$ distinct variables chosen uniformly at random without repetition from $X$, and $R_i$ is the permitted set of value tuples, selected uniformly at random without repetition from $D_{i_1}\times D_{i_2}\times\cdots\times D_{i_k}$, with $|R_i|=(1-p)d^{k}$, where $0<p<1$ is a constant.

In this paper, we impose a symmetry requirement on the permitted set of each constraint, and the $m$ permitted sets are generated in the following way. Initially, we generate a symmetry set $R$ that contains $(1-p)d^{k}$ tuples of values; then we generate each permitted set $R_i$ of the constraint $C_i$ ($i=1,2,\ldots,m$) by running random permutations of the domains of $k-1$ variables in $X_i$ based on $R$. For example, if $k=2$ and the domains are $D_1=D_2=\{1,2,3,4\}$, then $R=\{(1,1),(1,2),(2,1),(2,2),(3,3),(3,4),(4,3),(4,4)\}$ is a symmetry set. If we run a random permutation of $D_1$, e.g., $f(1)=3$, $f(2)=1$, $f(3)=4$, $f(4)=2$, then we get the permitted set $\{(3,1),(3,2),(1,1),(1,2),(4,3),(4,4),(2,3),(2,4)\}$. Through this method all $R_i$ ($i=1,\ldots,m$) are isomorphic and every domain value of every variable shares the same properties.
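To make the construction concrete, here is a minimal Python sketch of the above generation procedure for the binary case $k=2$. The function names (`symmetry_set`, `generate_rb_instance`) and the particular block-diagonal template are ours, chosen only to reproduce the paper's 4-value example; for simplicity we assume $(1-p)d$ is an integer that divides $d$.

```python
import math
import random

def symmetry_set(d, p):
    """Build a symmetric permitted set R with (1 - p) * d**2 tuples for k = 2.

    Mirroring the paper's 4-value example, consecutive values are grouped into
    diagonal blocks, so every domain value of either variable occurs in exactly
    (1 - p) * d permitted tuples.  We assume (1 - p) * d is an integer dividing d.
    """
    block = round((1 - p) * d)            # permitted partners per value
    R = set()
    for u in range(d):
        start = (u // block) * block      # first value of u's diagonal block
        for j in range(block):
            R.add((u, start + j))
    return R

def generate_rb_instance(n, alpha, r, p, seed=0):
    """Generate a binary (k = 2) Model RB instance with the symmetry requirement.

    Returns (d, constraints), where d is the domain size and constraints is a
    list of ((i, j), permitted_set) pairs over variables 0, ..., n - 1.
    """
    rng = random.Random(seed)
    d = round(n ** alpha)                 # domain size grows with n
    m = round(r * n * math.log(d))        # m = r * n * ln d constraints
    template = symmetry_set(d, p)         # the common symmetry set R
    constraints = []
    for _ in range(m):
        i, j = rng.sample(range(n), 2)    # two distinct variables
        perm = list(range(d))
        rng.shuffle(perm)                 # random permutation of one domain
        Ri = {(perm[u], v) for (u, v) in template}
        constraints.append(((i, j), Ri))
    return d, constraints
```

With $d=4$ and $p=0.5$, `symmetry_set` reproduces the example set above (shifted to 0-based values).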

A constraint $C_i=(X_i,R_i)$ is said to be satisfied by an assignment $\sigma\in D_1\times D_2\times\cdots\times D_n$ if the values assigned to $X_i$ form a tuple in $R_i$. An assignment $\sigma$ is called a solution if it satisfies all the constraints. $I$ is called satisfiable if there exists a solution, and unsatisfiable if there is no solution. It has been proved that Model RB not only avoids the trivial asymptotic behavior but also exhibits exact phase transitions from satisfiability to unsatisfiability. Indeed, denote by $\Pr[I \text{ is SAT}]$ the probability that a random instance $I$ of Model RB is satisfiable; then

Theorem 2.1 ([1]). Let $r_{cr}=-\frac{1}{\ln(1-p)}$. If $\alpha>\frac{1}{k}$ and $0<p<1$ are constants and $k$, $p$ satisfy the inequality $k\ge\frac{1}{1-p}$, then

$$\lim_{n\to\infty}\Pr[I \text{ is SAT}]=1 \ \text{ if } r<r_{cr}, \qquad \lim_{n\to\infty}\Pr[I \text{ is SAT}]=0 \ \text{ if } r>r_{cr}.$$

In the following we present some properties of Model RB that are important for proving our main theorems in the next section. From here on we tacitly take $r=r_{cr}+\frac{\delta}{n\ln d}$, where $\delta=-\frac{\ln 2}{\ln(1-p)}$, and take

$$\alpha>\max\left\{1,\ \inf\{\alpha:\omega<0\},\ \frac{2\left(\frac{100}{99}\right)^{2}\ln\frac{1}{1-p}}{k},\ \frac{100\ln(1-p)}{k\ln\left(1-\frac{p}{3}\right)}\right\}, \tag{2.1}$$

where $\omega=1+\alpha(1-r_{cr}pk)$. In fact, note that $\omega=1+\alpha\left(1+\frac{pk}{\ln(1-p)}\right)$, and a simple calculation yields that $pk+\ln(1-p)>0$ for all $p\in(0,1)$ under the condition that $k\ge\frac{1}{1-p}$. Thus it is possible to take $\alpha>0$ large enough such that $\omega<0$.

First, we bound the probability that a random RB instance is satisfiable.

Lemma 2.2 Let $I$ be a random CSP instance of Model RB with $n$ variables and $rn\ln d$ constraints. Then

$$\frac{1}{3}\le\Pr(I \text{ is SAT})\le\frac{1}{2}.$$

Proof Let $X$ be the number of solutions of $I$; then

$$\Pr(X>0)\le E[X]=d^{\,n}(1-p)^{rn\ln d}=\frac{1}{2}. \tag{2.2}$$
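For completeness, the last equality above is forced by the choice of $r$ made earlier; a short derivation, using $r=r_{cr}+\frac{\delta}{n\ln d}$, $r_{cr}\ln(1-p)=-1$ and $\delta=-\frac{\ln 2}{\ln(1-p)}$:

$$E[X]=d^{\,n}(1-p)^{rn\ln d}=d^{\,n}(1-p)^{r_{cr}n\ln d}\,(1-p)^{\delta}=d^{\,n}e^{-n\ln d}\,(1-p)^{\delta}=(1-p)^{-\ln 2/\ln(1-p)}=e^{-\ln 2}=\frac{1}{2}.$$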

As shown in [1],

$$E[X^{2}]=\sum_{S=0}^{n} d^{\,n}\binom{n}{S}(d-1)^{\,n-S}\left((1-p)\frac{\binom{S}{k}}{\binom{n}{k}}+(1-p)^{2}\left(1-\frac{\binom{S}{k}}{\binom{n}{k}}\right)\right)^{rn\ln d}=E[X]^{2}\left(1+O\!\left(\tfrac{1}{n}\right)\right)\sum_{S=0}^{n}F(S), \tag{2.3}$$

where $F(S)=\binom{n}{S}\left(1-\frac{1}{d}\right)^{n-S}\left(\frac{1}{d}\right)^{S}\left[1+\frac{p}{1-p}s^{k}\right]^{rn\ln d}$, and $S=ns$ is the number of variables on which a pair of assignments takes the same values (so $s=S/n$). Note that $\alpha>1$; using an argument similar to that in [1,7], we obtain that only the terms near $S=0$ and $S=n$ are not negligible, and

$$\frac{E[X^{2}]}{E[X]^{2}}\le 1+\frac{1}{E[X]}+o(1)=3+o(1). \tag{2.4}$$

Indeed, asymptotic calculations show that

$$F(0)=1-o(1),\quad F(i)=(1+o(1))\,n^{\,i(1-\alpha)},\ \ldots,\quad F(n-i)=(1+o(1))\,\frac{\exp\{i(\ln n+\ln d-pkr\ln d)\}}{E[X]},\quad F(n)=\frac{1}{E[X]}, \tag{2.5}$$

where $i=1,2,\ldots$ is an integer. Note that $\exp\{i(\ln n+\ln d-pkr\ln d)\}=n^{\,i\omega+o(1)}$ and $\alpha>1$, $\omega<0$ are constants; thus the upper bound in (2.4) comes from $F(0)$ and $F(n)$.

Using the Cauchy inequality, we get $\Pr(X>0)\ge\frac{E[X]^{2}}{E[X^{2}]}\ge\frac{1}{3}$.  □

As an immediate consequence of Lemma 2.2 we obtain a lower bound of the probability that I has exactly one solution.

Corollary 2.3 Let $I$ be a random CSP instance of Model RB with $n$ variables and $rn\ln d$ constraints. Then the probability that $I$ has exactly one solution is at least $1/6$.

Proof Let $\rho_1$ be the probability that $I$ has exactly one solution, and $\rho_2$ the probability that $I$ has at least two solutions. Then from Lemma 2.2, we have

$$E[X]=\frac{1}{2}\ge\rho_1+2\rho_2, \qquad \Pr(X>0)=\rho_1+\rho_2\ge\frac{1}{3}.$$

Multiplying the second inequality by 2 and subtracting the first, we get $\rho_1\ge\frac{2}{3}-\frac{1}{2}=\frac{1}{6}$.  □

Next we show that if a random instance is unsatisfiable, then w.h.p. it fails at only one constraint. We introduce the following definitions.

Definition 2.1 Let I be a CSP instance. A constraint C is called a self-unsatisfiable constraint if there exists an assignment under which C is the only unsatisfied constraint in I. If variable x is contained in C, then x is called a self-unsatisfiable variable. If I is unsatisfiable and every variable is a self-unsatisfiable variable, then I is called a self-unsatisfiable formula.

Lemma 2.4 Let $I$ be a random CSP instance of Model RB with $n$ variables and $rn\ln d$ constraints. If $I$ is unsatisfiable, then w.h.p. $I$ is a self-unsatisfiable formula.

Proof First we show that, for any constraint $C$ of $I$, with positive probability there exists an assignment that satisfies all constraints except $C$. In fact, let $N$ be the number of such assignments; then

$$E[N]=d^{\,n}(1-p)^{rn\ln d-1}\,p.$$

Using a similar argument as in [1], we have

$$E[N^{2}]=\sum_{S=0}^{n} d^{\,n}\binom{n}{S}(d-1)^{\,n-S}\left((1-p)\frac{\binom{S}{k}}{\binom{n}{k}}+(1-p)^{2}\left(1-\frac{\binom{S}{k}}{\binom{n}{k}}\right)\right)^{rn\ln d-1}\left(p\frac{\binom{S}{k}}{\binom{n}{k}}+p^{2}\left(1-\frac{\binom{S}{k}}{\binom{n}{k}}\right)\right).$$

Hence, (2.3), (2.4) and (2.5) ensure that

$$\frac{E[N^{2}]}{E[N]^{2}}=\frac{E[X^{2}]}{E[X]^{2}}\cdot\frac{1+\frac{1-p}{p}s^{k}}{1+\frac{p}{1-p}s^{k}}\le\frac{1}{p}\cdot\frac{E[X^{2}]}{E[X]^{2}}\le\frac{3}{p}.$$

Then $\Pr(N>0)\ge\frac{E[N]^{2}}{E[N^{2}]}\ge\frac{p}{3}$. Therefore the probability that $C$ is a self-unsatisfiable constraint is at least $\frac{p}{3}$.

Next, since the number of constraints is $rn\ln d$, the average degree of each variable $x\in X$ is $rk\ln d$. By the Chernoff bound,

$$\Pr\left[\mathrm{Deg}(x)\le\tfrac{1}{100}rk\ln d\right]\le e^{-(99/100)^{2}rk\ln d/2}=n^{-(99/100)^{2}rk\alpha/2}, \tag{2.6}$$

where $\mathrm{Deg}(x)$ denotes the degree of variable $x$. From the requirement (2.1) we know that $1-(99/100)^{2}rk\alpha/2<0$, thus

$$n\Pr\left[\mathrm{Deg}(x)\le\tfrac{1}{100}rk\ln d\right]=o(1).$$

Therefore, almost surely all variables have degree at least $\frac{1}{100}rk\ln d$.

Furthermore, note that each variable appears in at least $\frac{1}{100}rk\ln d$ constraints and the probability that each such constraint is a self-unsatisfiable constraint is at least $\frac{p}{3}$; thus the probability that $x$ is not a self-unsatisfiable variable is at most $\left(1-\frac{p}{3}\right)^{\frac{1}{100}rk\ln d}$.

Note that (2.1) entails that $1+\frac{1}{100}rk\alpha\ln\left(1-\frac{p}{3}\right)<0$; therefore the probability that there exists a variable which is not self-unsatisfiable is at most

$$n\left(1-\frac{p}{3}\right)^{\frac{1}{100}rk\ln d}=n^{\,1+\frac{1}{100}rk\alpha\ln(1-p/3)}=o(1). \tag{2.7}$$

Thus w.h.p. all variables are self-unsatisfiable variables.     □

Remark 2.1 We claim that Lemmas 2.2 and 2.4 hold for any domain size growing faster than polynomially in $n$. Take the exponential domain size $d=\beta^{n}$ (where $\beta>1$ is a constant) for example; similarly, the dominant contributions to $E[X^{2}]/E[X]^{2}$ come from $F(0)=1-o(1)$ and $F(n)=1/E[X]$, and asymptotic calculations show that $F(i)=\Theta\!\left(\left(\frac{n}{d}\right)^{i}\right)$ and

$$F(n-i)=(1+o(1))\,\frac{1}{E[X]}\exp\left\{\,i\ln n+i\left(1+\frac{pk}{\ln(1-p)}\right)\ln d\right\}$$

are negligible for small integers $i$, since $1+\frac{pk}{\ln(1-p)}<0$. Moreover, the probability analysis in the proof of Lemma 2.4 ((2.6) and (2.7)) holds even more easily if $d=\beta^{n}$.

Next we define a symmetry mapping of a constraint which changes its permitted set slightly.

Definition 2.2 Consider a random instance $I$ of Model RB with $k=2$. Assume that $C=(X,R)$ is a constraint of $I$ and $X=(x_1,x_2)$. A symmetry mapping of $C$ changes $R$ as follows: choose $u_1,u_2\in D_1$ ($u_1\ne u_2$) and $v_1,v_2\in D_2$ ($v_1\ne v_2$) such that $(u_1,v_1),(u_2,v_2)\in R$ and $(u_1,v_2),(u_2,v_1)\notin R$, and then exchange $u_1$ with $u_2$ (see Fig. 1).
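A minimal sketch of this mapping on a binary constraint's permitted set, in the representation used by the generator sketch above. The function name is ours, and we read "exchanging $u_1$ with $u_2$" as swapping the two corresponding rows of $R$, which is one natural reading and the one used in the proof of Theorem 2.5 below.

```python
def apply_symmetry_mapping(R, u1, u2):
    """Exchange the first-variable values u1 and u2 in the permitted set R.

    Per Definition 2.2, u1 and u2 are chosen together with v1, v2 such that
    (u1, v1), (u2, v2) are permitted while (u1, v2), (u2, v1) are not; swapping
    the rows of u1 and u2 then flips the permitted status of the tuples in
    those two rows, while every value keeps its degree (1 - p) * d.
    """
    swap = {u1: u2, u2: u1}
    return {(swap.get(u, u), v) for (u, v) in R}
```

Applying the mapping twice restores $R$, and since only two rows are interchanged, the symmetry requirement (every value appearing in exactly $(1-p)d$ permitted tuples) is preserved.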

With the above properties of Model RB, we obtain the following interesting results.

Theorem 2.5 There exists an infinite set of satisfiable and unsatisfiable instances of Model RB such that this set is a fixed point under the satisfiability-changing symmetry mapping.

Proof Let $\mathbb{I}$ be the set of RB instances with $n$ variables and $rn\ln d$ constraints, where each instance either has a unique solution or has no solution.

Assume that $I\in\mathbb{I}$ has exactly one solution $\sigma$, which happens with probability at least $1/6$ by Corollary 2.3. For an arbitrary constraint $C$, assume without loss of generality that $C=(X,R)$, $X=(x,x_1)$, $\sigma(x)=u$, $\sigma(x_1)=v$, and that $D$, $D_1$ are the domains of $x$ and $x_1$, respectively. By the symmetry requirement, there exist $u'\in D$ and $v'\in D_1$ such that $(u,v')\notin R$, $(u',v)\notin R$ and $(u',v')\in R$; we then exchange $u$ with $u'$. It is easy to see that this symmetry mapping converts $(u,v)$ into an unpermitted tuple and converts $(u',v)$, $(u,v')$ into permitted tuples, so $\sigma$ is no longer a solution. However, it is possible that at most $2(1-p)d$ pairs $(u,\cdot),(u',\cdot)\in R$ can be expanded to new solutions (this is because, by the symmetry requirement, the degree of each domain value of each variable is $(1-p)d$). Specifically, the probability that $(u,v')$ can be expanded to a new solution is at most $d^{\,n-2}(1-p)^{rn\ln d-1}=\frac{1}{2(1-p)d^{2}}$; thus a simple calculation yields that the probability that $I$ is still satisfiable after the symmetry mapping is at most $O(\frac{1}{d})=o(1)$.

Assume that $I\in\mathbb{I}$ is an unsatisfiable instance; then w.h.p. all variables in $I$ are self-unsatisfiable variables by Lemma 2.4. This implies that, for a given variable $x$, there exist a constraint $C\in C_x$ (the set of constraints containing $x$) and an assignment $\tau$ such that $C$ is the only unsatisfied constraint under $\tau$. Assume without loss of generality that $C=(X,R)$, $X=(x,x_2)$, $\tau(x)=u$, $\tau(x_2)=w$, and that $D$, $D_2$ are the domains of $x$ and $x_2$, respectively. It is apparent that $(u,w)\notin R$. By our symmetry requirement, there exist $u'\in D$ and $w'\in D_2$, where $(u',w),(u,w')\in R$ and $(u',w')\notin R$, such that a symmetry mapping exchanging $u$ with $u'$ converts $(u,w)$ into a permitted tuple; thus $C$ becomes satisfied under $\tau$. Moreover, using a similar argument as above, we can see that the probability that the new pairs $(u,\cdot),(u',\cdot)\in R$ could be expanded to further solutions is at most $O(\frac{1}{d})=o(1)$. Thus w.h.p. $I$ has exactly one solution after this symmetry mapping.

From the above two cases we can see that, for any $I\in\mathbb{I}$, the symmetry mapping changes its satisfiability; however, $I$ still belongs to $\mathbb{I}$ after the mapping. Thus $\mathbb{I}$ can be considered a fixed point under the symmetry mapping.  □

In this section, it has been shown that we can construct satisfiable and unsatisfiable instances using exactly the same method and the same parameter values. Moreover, these satisfiable and unsatisfiable instances can be transformed into each other by performing the same mapping. This property is very similar to that of the self-referential proposition introduced by Gödel [22] in order to prove that such a proposition can be neither proved nor disproved (i.e., whether this proposition is true or false cannot be distinguished in finite formal systems). Gödel’s results reveal the fundamental difference between the syntax defined by rules and the semantics defined by models (or assignments). An algorithm is essentially a finite sequence of rules, which can also be viewed as a finite formal system. In contrast, the exhaustive search method determines, using the semantic definition of a property, whether this property can be satisfied through trying every possible model (or assignment) one by one. Inspired by Gödel’s idea, we refer to these satisfiable and unsatisfiable instances as the most indistinguishable examples or self-referential examples. Their self-referential property makes them extremely hard to differentiate syntactically. Simply put, in this context, syntax cannot replace semantics.

3 Main results

Proving complexity lower bounds (algorithmic impossibility results) for a given problem essentially amounts to reasoning about, and drawing conclusions on, an infinite set of algorithms. In mathematics, any such conclusion should be based on assumptions about the nature of this infinite set. These assumptions must be consistent with reality and usually appear as axioms. As with many combinatorial problems, the general CSP has no global structure that can be exploited to design algorithms. The only exact algorithm currently available for solving CSPs is a divide-and-conquer method that systematically explores the solution space while employing various pruning strategies to enhance efficiency.
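As an illustration of this divide-and-conquer scheme (and of the subproblem notion used in Lemma 3.1 below), here is a minimal backtracking sketch over the instance representation introduced in Section 2. The function name is ours, and practical solvers add propagation and pruning strategies on top of this skeleton.

```python
def solve_by_branching(n, d, constraints, partial=None):
    """Divide-and-conquer search for a binary CSP in the representation above.

    Branch on the next unassigned variable: each of its d values yields a
    subproblem with one variable fewer.  Without further pruning this explores
    up to d**n assignments, i.e., it degenerates into exhaustive search.
    """
    if partial is None:
        partial = {}
    if len(partial) == n:                 # every variable has a value
        return dict(partial)
    x = len(partial)                      # branch on variable x
    for a in range(d):                    # d candidate subproblems
        partial[x] = a
        ok = all((partial[i], partial[j]) in R
                 for ((i, j), R) in constraints
                 if i in partial and j in partial)
        if ok:
            solution = solve_by_branching(n, d, constraints, partial)
            if solution is not None:
                return solution
        del partial[x]
    return None
```

Branching on one variable produces at most $d$ subproblems with $n-1$ variables; without effective pruning the recursion can touch up to $d^{\,n}$ assignments.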

In this paper, we view an algorithm as a finite formal system, which is defined by a finite set of symbols and rules. It is easy to see that finite formal systems possess greater expressive power than algorithms (Turing machines), because finite formal systems are essentially finite sets of rules while algorithms are essentially finite sequences of rules. Here we use finite formal systems to solve CSPs (i.e., to determine whether a CSP instance is satisfiable). We only need to assume that this task is accomplished by dividing the original problem into subproblems. Under this assumption, we have the following lemma.

Lemma 3.1 If a CSP problem with $n$ variables and domain size $d$ can be solved in $T(n)=O(d^{\,cn})$ time ($0<c<1$ is a constant), then at most $O(d^{\,c})$ subproblems with $n-1$ variables are needed to solve the original problem.

Proof It is easy to see that $d$ subproblems with $n-1$ variables are sufficient to solve the original problem. By assumption, a CSP problem with $n-1$ variables can be solved in $T(n-1)$ time. Note that

$$T(n)=O(d^{\,cn})=O(d^{\,c}\cdot d^{\,c(n-1)})=O(d^{\,c})\,T(n-1),$$

thus at most $O(d^{\,c})$ subproblems with $n-1$ variables are needed to solve the original problem.  □

The above lemma is key to proving the main results of this paper. For a better understanding, we take the classical sorting problem as an example. Assume that our goal is to prove that sorting $n$ elements cannot be done in $T(n)=O(n)$ time. By the assumed bound, we would have $T(n)=O(n-1)+O(1)=T(n-1)+O(1)$. This means that we need a subproblem of $n-1$ elements, together with $O(1)$ additional operations, to solve the original problem. One can then show by contradiction that this is impossible, which finishes the proof.

Theorem 3.2 Model RB cannot be solved in $O(d^{\,cn})$ time for any constant $0<c<1$.

Proof For the sake of simplicity, we prove by contradiction that Theorem 3.2 holds for Model RB with $k=2$. Let $I$ be an RB instance with $n$ variables and $rn\ln d$ constraints. Suppose there exists some constant $0<c<1$ such that $I$ can be solved in $O(d^{\,cn})$ time; then Lemma 3.1 implies that $I$ can also be solved by assigning at most $O(d^{\,c})$ values to an arbitrary variable, say $x$, and then solving the resulting subproblems (with $n-1$ variables), which require $O(d^{\,c(n-1)})$ time. We will show that there exist instances for which the $O(d^{\,c})$ subproblems produced by assigning $O(d^{\,c})$ values to $x$ are insufficient to determine satisfiability. For convenience, let $D$ be the domain of $x$, $\widetilde{D}$ the set of $O(d^{\,c})$ values that have been assigned to $x$ ($|\widetilde{D}|=O(d^{\,c})$), and $C_x$ the set of constraints containing $x$.

Following the strategy of the proof of Theorem 2.5, if $I$ is an instance having exactly one solution $\sigma$, then the constraint $C$ is selected arbitrarily from $C_x$. Note that $O(d^{\,c})=o(d)$, so the probability that $u'$ belongs to $\widetilde{D}$ is $o(1)$. Therefore the symmetry mapping in Theorem 2.5 can be performed by exchanging $u$ with $u'$, where $u'$ is chosen from $D\setminus\widetilde{D}$; then $I$ becomes unsatisfiable. Similarly, if $I$ is an unsatisfiable instance, then the symmetry mapping of exchanging $u$ with $u'$, where $u'$ is chosen from $D\setminus\widetilde{D}$, converts $(u,w)$ into a permitted tuple, and thus $I$ becomes satisfiable.

Note that in either of the above cases, the $O(d^{\,c})$ subproblems with $n-1$ variables remain unchanged, while the satisfiability of the original problem with $n$ variables has been changed by the symmetry mapping. We conclude that the $O(d^{\,c})$ subproblems are insufficient to determine whether $I$ is satisfiable or unsatisfiable. This completes the proof of Theorem 3.2.  □
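The structural fact used in this last step can be checked mechanically: a subproblem obtained by fixing $x$ to a value $a\in\widetilde{D}$ only reads the tuples of each constraint in $C_x$ in which $x$ takes the value $a$, so swapping two values $u,u'\notin\widetilde{D}$ of $x$'s domain in one such constraint leaves every subproblem literally identical. A small sketch in the same representation as before (the helper names are ours):

```python
def restrict_to_value(constraints, x, a):
    """Subproblem obtained by assigning variable x := a.

    Constraints containing x become unary restrictions on the other variable;
    constraints not containing x are kept unchanged.
    """
    sub = []
    for ((i, j), R) in constraints:
        if i == x:
            sub.append((j, frozenset(v for (u, v) in R if u == a)))
        elif j == x:
            sub.append((i, frozenset(u for (u, v) in R if v == a)))
        else:
            sub.append(((i, j), frozenset(R)))
    return sub

def subproblems_unchanged(constraints, mapped_constraints, x, assigned_values):
    """True iff every subproblem x := a (a in the assigned set D~) is identical
    before and after a symmetry mapping was applied to some constraint on x."""
    return all(restrict_to_value(constraints, x, a) ==
               restrict_to_value(mapped_constraints, x, a)
               for a in assigned_values)
```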

As mentioned before, finite formal systems possess greater expressive power than algorithms (Turing machines). The above theorem indicates that no finite formal system for solving Model RB can significantly outperform or replace the exhaustive search method, which evaluates the $d^{\,n}$ possibilities one by one. That is to say, non-brute-force computation cannot replace brute-force computation for Model RB. More interestingly, the above proof follows the same method (i.e., diagonalization and self-reference) used by Kurt Gödel [22] and Alan Turing [24], respectively, in their epoch-making papers proving logical and computational impossibility results. This classical method, pioneered by Cantor, stands out not only as simple, elegant, and powerful, but also as the only approach that has successfully provided lower bounds on complexity classes. Moreover, the distinctive mathematical properties and the inherent extreme hardness of Model RB make it both feasible and effective for establishing algorithmic impossibility results. Importantly, applying this method to constructed instances, rather than directly to complexity classes, is crucial for achieving a successful proof. This study highlights the significance of creatively utilizing classical methods, particularly when addressing long-standing open problems. We believe that the underlying idea of this paper (i.e., constructing the most indistinguishable examples) will unlock the door to proving complexity lower bounds, which has always been a challenging task, even for many polynomial-time solvable problems.

It is worth mentioning that Xu and Li [3] established a direct link between proven exact phase transitions and an abundance of hard examples by showing that Model RB has both of these properties. They also made a detailed comparison between Model RB and some other well-studied models with phase transitions, and stated that “such mathematical tractability should be another advantage of RB/RD, making it possible to obtain some interesting results which do not hold or cannot be easily obtained for random 3-SAT”. This paper confirms that remarkable results can indeed be achieved through a simple and elegant approach, as expected. More than two thousand years ago, Laozi, a great Chinese thinker and philosopher, said: “The greatest truths are the simplest.” We believe that the truth of computational hardness should also adhere to this fundamental and universal principle advocated by Laozi. In this sense, both the problem and the results of this paper are simple yet intuitively accessible. Additionally, if a problem is so difficult that there are no nontrivial results, then the most likely scenario is that the problem under study is not inherently foundational. In such a case, what should be done is to redefine the problem in alignment with Laozi’s principle. Foundational problems must be grounded in fundamental concepts. In this paper, we argue that non-brute-force computation vs. brute-force computation is more foundational than P vs. NP. This is because the concept of brute-force computation is inherently more fundamental than that of polynomial-time computation.

CSP can be encoded into SAT by means of the log-encoding [25], which introduces a Boolean variable for each bit of the domain value of each CSP variable and thus uses a logarithmic number of Boolean variables to encode domains. It must be emphasized that each clause of these encoded SAT instances can be very long, with $\Theta(\log_2 d)$ Boolean variables, when the domain size $d$ grows with the number of CSP variables $n$. This is in sharp contrast to 3-SAT, whose clauses are very short and which has received the most attention in the past half-century. For the encoded SAT instances of Model RB under the log-encoding, we have in total $N=n\log_2 d$ Boolean variables, $O(Nd^{k})$ clauses and $O(Nd^{k}\log_2 d)$ literals. Note that $d^{\,cn}=2^{\,cn\log_2 d}=2^{\,cN}$, so it is easy to derive the following corollary from Theorem 3.2.

Corollary 3.3 SAT with $N$ Boolean variables cannot be solved in $O(2^{\,cN})$ time for any constant $0<c<1$.

The above corollary holds for SAT with no restriction on the clause length, and so it is not directly applicable to $k$-SAT with constant $k\ge 3$, whose lower bounds can be obtained by reduction from encoded SAT instances of Model RB. Other CSP instances with domain size $l$ can also be encoded from Model RB using $N=n\log_l d$ variables, and thus cannot be solved in $O(l^{\,cN})$ time for any constant $0<c<1$.
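To make the encoding discussed above concrete, here is a minimal sketch of the log-encoding [25] of a single binary constraint into CNF clauses. We assume $d$ is a power of two for simplicity, and the helper names and the DIMACS-style numbering of Boolean variables are ours.

```python
import itertools
import math

def log_encode_constraint(i, j, R, d):
    """Encode the binary constraint ((i, j), R) with domain size d into CNF clauses.

    Each CSP variable x_i is represented by b = ceil(log2 d) Boolean variables
    giving the bits of its value; for every *forbidden* pair (u, v) we emit one
    clause ruling out the corresponding bit pattern, so each clause has
    2 * b = Theta(log2 d) literals.  Boolean variables are numbered from 1;
    variable x_i uses slots i*b + 1, ..., i*b + b.
    """
    b = math.ceil(math.log2(d))

    def blocking_literals(var_index, value):
        # Negation of "the bits of x_{var_index} spell out `value`":
        # a bit that must be 1 contributes a negative literal, and vice versa.
        base = var_index * b
        return [-(base + t + 1) if (value >> t) & 1 else (base + t + 1)
                for t in range(b)]

    clauses = []
    for (u, v) in itertools.product(range(d), repeat=2):
        if (u, v) not in R:                      # forbidden tuple -> blocking clause
            clauses.append(blocking_literals(i, u) + blocking_literals(j, v))
    return clauses
```

Each forbidden pair yields one blocking clause of length $2\lceil\log_2 d\rceil=\Theta(\log_2 d)$, matching the clause lengths discussed above.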

It is well known that SAT is NP-complete [26], and so it follows that P $\ne$ NP.

References

[1] Xu K, Li W. Exact phase transitions in random constraint satisfaction problems. Journal of Artificial Intelligence Research, 2000, 12(1): 93–103

[2] Xu K, Boussemart F, Hemery F, Lecoutre C. A simple model to generate hard satisfiable instances. In: Proceedings of the 19th International Joint Conference on Artificial Intelligence. 2005, 337–342

[3] Xu K, Li W. Many hard examples in exact phase transitions. Theoretical Computer Science, 2006, 355(3): 291–302

[4] Xu K, Boussemart F, Hemery F, Lecoutre C. Random constraint satisfaction: easy generation of hard (satisfiable) instances. Artificial Intelligence, 2007, 171(8–9): 514–534

[5] Liu T, Lin X, Wang C, Su K, Xu K. Large hinge width on sparse random hypergraphs. In: Proceedings of the 22nd International Joint Conference on Artificial Intelligence. 2011, 611–616

[6] Cai S, Su K, Sattar A. Local search with edge weighting and configuration checking heuristics for minimum vertex cover. Artificial Intelligence, 2011, 175(9–10): 1672–1696

[7] Zhao C, Zheng Z. Threshold behaviors of a random constraint satisfaction problem with exact phase transitions. Information Processing Letters, 2011, 111(20): 985–988

[8] Saitta L, Giordana A, Cornuejols A. Phase Transitions in Machine Learning. Cambridge: Cambridge University Press, 2011

[9] Zhao C, Zhang P, Zheng Z, Xu K. Analytical and belief-propagation studies of random constraint satisfaction problems with growing domains. Physical Review E, 2012, 85(1): 016106

[10] Fan Y, Shen J, Xu K. A general model and thresholds for random constraint satisfaction problems. Artificial Intelligence, 2012, 193: 1–17

[11] Lecoutre C. Constraint Networks: Targeting Simplicity for Techniques and Algorithms. John Wiley & Sons, 2013

[12] Huang P, Yin M. An upper (lower) bound for Max (Min) CSP. Science China Information Sciences, 2014, 57(7): 1–9

[13] Xu W, Zhang P, Liu T, Gong F. The solution space structure of random constraint satisfaction problems with growing domains. Journal of Statistical Mechanics: Theory and Experiment, 2015, 2015: P12006

[14] Liu T, Wang C, Xu K. Large hypertree width for sparse random hypergraphs. Journal of Combinatorial Optimization, 2015, 29(3): 531–540

[15] Knuth D E. The Art of Computer Programming. Addison-Wesley Professional, 2015

[16] Xu W, Gong F. Performances of pure random walk algorithms on constraint satisfaction problems with growing domains. Journal of Combinatorial Optimization, 2016, 32(1): 51–66

[17] Fang Z, Li C M, Xu K. An exact algorithm based on MaxSAT reasoning for the maximum weight clique problem. Journal of Artificial Intelligence Research, 2016, 55(1): 799–833

[18] Wang X F, Xu D Y. Convergence of the belief propagation algorithm for RB model instances. Journal of Software, 2016, 27(11): 2712–2724

[19] Li C M, Fang Z, Jiang H, Xu K. Incremental upper bound for the maximum clique problem. INFORMS Journal on Computing, 2018, 30(1): 137–153

[20] Karalias N, Loukas A. Erdős goes neural: an unsupervised learning framework for combinatorial optimization on graphs. In: Proceedings of the 34th International Conference on Neural Information Processing Systems. 2020, 6659–6672

[21] Zhou G, Xu W. Super solutions of the model RB. Frontiers of Computer Science, 2022, 16(6): 166406

[22] Gödel K. Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I. Monatshefte für Mathematik und Physik, 1931, 38(1): 173–198

[23] Calabro C, Impagliazzo R, Paturi R. The complexity of satisfiability of small depth circuits. In: Proceedings of the 4th International Workshop on Parameterized and Exact Computation. 2009, 75–85

[24] Turing A M. On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, 1937, s2-42(1): 230–265

[25] Walsh T. SAT v CSP. In: Proceedings of the 6th International Conference on Principles and Practice of Constraint Programming. 2000, 441–456

[26] Cook S A. The complexity of theorem-proving procedures. In: Proceedings of the 3rd Annual ACM Symposium on Theory of Computing. 1971, 151–158

RIGHTS & PERMISSIONS

© The Author(s) 2025. This article is published with open access at link.springer.com and journal.hep.com.cn
