My current strategy to prove P=NP
Since I have already opened up about my intent and strategy to prove P=NP in some HN comments (https://news.ycombinator.com/item?id=36082578 and https://news.ycombinator.com/item?id=35729626), I decided I should explain it in a series of posts. After all, this was my original goal when I started the blog: to explain in more depth the exciting possibility that P=NP (with a nice constructive algorithm), so that more people could get involved and contribute to this approach.
As for my motivation, that's probably a topic for another blog post; let's just say I think it is an extremely important problem to work on. I have been working on understanding the P vs NP question as a serious hobby for about 6 years now; however, it was a discovery I made about a year ago that made me really optimistic that P=NP, and that a proof could be close. (Up to that point, I always tried to stay agnostic about the question, although I admit I don't really buy many of the arguments or intuitions for P!=NP, as most of them can also be interpreted in a contrarian way.)
So, the current strategy is a sort of three-step plan, outlined as follows:
- Prove that 2XSAT, a subclass of SAT that only allows 2-SAT and XORSAT clauses, is NP-complete, by reduction from 3-SAT.
- Prove that the deduction system (logic) that governs 2XSAT - I call it poset-linear - is refutation complete. That means that if you generate all tautologies of any theory in 2XSAT logic and never reach a contradiction, then the theory is satisfiable.
- Find a polynomial representation of all tautologies of any 2XSAT theory, and show that all tautologies can be generated in polynomial time.
I might detail each of the steps in upcoming blog posts; for now, let me just explain each of them briefly. I will also add a link to my work-in-progress rewritten paper (the previous blog post was, in retrospect, a rather naive attempt to use step 1 to combine the algorithms for 2-SAT and XORSAT without a proper theoretical framework). Step 1 is done and detailed in the paper (section 1). As I was writing this blog post (over a few months, in fact), step 2 turned out to be false (I used to think I almost had a proof): poset-linear logic is not refutation complete. However, current research indicates that this problem might be fixable by adding a small number of additional variables without changing the semantics (this is detailed in the draft). Finally, step 3 is still a big unknown; while I have some ideas about what to do, it is not described in the paper.
Step 1 - 2XSAT Decomposition
The first step, the NP-completeness of 2XSAT, is what I discovered a year ago, and to my surprise, even though the result is pretty simple, I couldn't find it mentioned anywhere in the literature. Back then, it was based on the following boolean identity:
$$ x \vee y \vee z = (\neg a \vee x) \wedge (\neg b \vee \neg y) \wedge (c \vee z) \wedge (a \oplus b \oplus x \oplus \top) \wedge (b \oplus c \oplus y \oplus \top) $$
(I have since found even more efficient methods of representing logic gates; see the WIP paper for more details.) The above equation means that we can characterize the solutions of a SAT instance as a projection (shedding the extra variables $a,b,c$ introduced for each clause on literals $x,y,z$, similar to e.g. the 1-IN-3-EXACT-SAT reduction) of the intersection of the solution sets of a XORSAT instance and a 2-SAT instance. Since both XORSAT and 2-SAT instances can be decided in P (but also "described", in the sense of finding all tautologies in the underlying logics), this opens the exciting possibility that there is a clever generalization of both algorithms.
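The decomposition above is small enough to check exhaustively. Here is a quick sanity check (my own sketch, not code from the paper), reading the XOR clause $(a \oplus b \oplus x \oplus \top)$ as requiring $a \oplus b \oplus x = 0$:

```python
from itertools import product

def or3(x, y, z):
    return x or y or z

def rhs(x, y, z, a, b, c):
    # the two-and-XOR side of the identity
    c1 = (not a) or x            # 2-SAT clause ¬a ∨ x
    c2 = (not b) or (not y)      # 2-SAT clause ¬b ∨ ¬y
    c3 = c or z                  # 2-SAT clause  c ∨ z
    c4 = (a ^ b ^ x) == 0        # XOR clause a ⊕ b ⊕ x ⊕ ⊤
    c5 = (b ^ c ^ y) == 0        # XOR clause b ⊕ c ⊕ y ⊕ ⊤
    return c1 and c2 and c3 and c4 and c5

# Projection property: for every (x, y, z), the clause x ∨ y ∨ z holds
# iff SOME choice of the auxiliary variables (a, b, c) satisfies all five.
for x, y, z in product([False, True], repeat=3):
    exists = any(rhs(x, y, z, a, b, c)
                 for a, b, c in product([False, True], repeat=3))
    assert exists == or3(x, y, z)
print("decomposition verified on all 8 assignments of (x, y, z)")
```

Running this confirms the identity on all 64 total assignments, which is the same kind of experimental check mentioned later for the factoring instances, just in miniature.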
To me, the 2XSAT reduction answered an important question: what exactly is the role of XORSAT (linear equations over $\mathbb{Z}_2^n$) in boolean satisfiability problems? I have felt for a long time that XORSAT exhibits many of the complications of 3-SAT; if we didn't know Gaussian elimination, we would be similarly baffled, and the only thing we could do would be to try all solutions with some kind of backtracking search. So it is natural to look to XORSAT for hints on how to solve the more general problem of SAT (in particular, how to deal with the "circularity" inherent in a given instance).
The 2XSAT reduction can be interpreted this way: the only clause you really need (if you allow enough extra variables) to represent an NP-complete problem is $(x \oplus y) \vee z$. This is very interesting to compare with 3-SAT, because $x \vee y \vee z$ disallows only one out of 8 combinations of the literals $x,y,z$. However, because $$(x \oplus y) \vee z = (x \vee y \vee z) \wedge (\neg x \vee \neg y \vee z),$$ we can see that the clause $(x \oplus y) \vee z$ disallows two out of 8 combinations of the literals $x,y,z$. Many argue that 2-SAT is easy and 3-SAT is hard because a 2-SAT clause disallows 1 out of 4 combinations, while a 3-SAT clause disallows only 1 out of 8. Here, however, we see that we can always, in a sense, "pair up" the 3-SAT clauses of the underlying instance so that each clause has the "easy" ratio of 2-SAT.
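The equality of $(x \oplus y) \vee z$ with the pair of 3-SAT clauses is again an 8-row truth table, so it can be confirmed mechanically (my own sanity check, not from the paper):

```python
from itertools import product

def xor_clause(x, y, z):
    # (x ⊕ y) ∨ z
    return (x ^ y) or z

def paired_3sat(x, y, z):
    # (x ∨ y ∨ z) ∧ (¬x ∨ ¬y ∨ z)
    return (x or y or z) and ((not x) or (not y) or z)

# the two formulas agree on every assignment, so each forbids the same
# 2 of the 8 combinations of (x, y, z)
assert all(xor_clause(*t) == paired_3sat(*t)
           for t in product([False, True], repeat=3))
print("identity holds on all 8 assignments")
```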
Another way to look at 2XSAT is that it is pretty much just 2-SAT confined to an affine subspace of $\mathbb{Z}_2^n$. One can solve the linear system (the XORSAT portion of the instance) and substitute the dependent variables into the 2-SAT portion; we then obtain a sort of 2-SAT instance, not over the original variables, but over some other set of vectors in $\mathbb{Z}_2^n$. It is really strange to think that 2-SAT itself would be easy, but after a linear transform (which can be done piecemeal), it would suddenly become really hard.
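The "solve and substitute" step can be sketched concretely. Below is a minimal GF(2) Gaussian elimination (my own toy encoding, not the paper's: each equation is a bitmask of variables plus a right-hand-side bit); it expresses each dependent variable as an XOR of free variables, which is exactly the affine form that 2-SAT literals get rewritten into:

```python
def gauss_gf2(eqs):
    """eqs: list of (mask, rhs) rows, each meaning  XOR of masked vars = rhs.
    Returns None if the system is inconsistent, otherwise a dict
    {pivot_var: (free_mask, const)} meaning x_pivot = (XOR of free vars) ⊕ const."""
    pivot_of = {}
    for mask, rhs in eqs:
        row = [mask, rhs]
        for v, p in pivot_of.items():          # reduce by existing pivots
            if row[0] >> v & 1:
                row[0] ^= p[0]
                row[1] ^= p[1]
        if row[0] == 0:
            if row[1]:
                return None                    # derived 0 = 1: inconsistent
            continue                           # redundant equation
        pivot_of[row[0].bit_length() - 1] = row
    # back-substitute so each pivot row mentions only free variables
    for v, p in pivot_of.items():
        for w, q in pivot_of.items():
            if w != v and q[0] >> v & 1:
                q[0] ^= p[0]
                q[1] ^= p[1]
    return {v: (p[0] ^ (1 << v), p[1]) for v, p in pivot_of.items()}

# single equation x0 ⊕ x1 = 1 yields the dependency x1 = x0 ⊕ 1
sol = gauss_gf2([(0b011, 1)])
assert sol == {1: (0b001, 1)}
# substituting into a 2-SAT literal: ¬x1 becomes ¬(x0 ⊕ 1) = x0,
# i.e. the 2-SAT portion now ranges over affine forms, not plain variables
```

This is only meant to illustrate why the transformed instance is "2-SAT over vectors": each former literal becomes an XOR of free variables plus a constant.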
This perspective also suggests what the "hard" instances of SAT actually are: those with a dominating XORSAT portion. There, the affine subspace of solutions has a really small dimension, so the solutions are hard to find by testing variables. Resolution (the method used for 2-SAT) doesn't help either, because it won't "hone in" on the affine subspace where the solutions live.
I will also note that I have been able to run the 2XSAT reduction on small factoring problems (with several thousand variables after the reduction), so I have also verified experimentally, to a good extent, that it works. Of the current three-step strategy, this is the result I am most certain about.
Step 2 - Refutation-completeness
So it was really the above similarity of 2XSAT to 2-SAT that spurred my current research project and got me so excited about the 2XSAT reduction. This takes us to step 2 of the strategy. I decided to read Melven Krom's original 1967 paper on 2-SAT, and I was blown away. This guy, it seemed, had everything I needed to find a polynomial algorithm for 2XSAT (note that his paper was written before the class NP was even defined!).
The key idea of Krom's paper, as I interpret it, is the following. We face a decision problem: boolean satisfiability for a certain class of formulas. If we want a polynomial algorithm, then clearly we have to be able to reason about exponentially many variable assignments at once, because there are exponentially many assignments to begin with. So how do we efficiently represent these sets of assignments? This question haunted me for many years, as I understood it to be the key to a polynomial algorithm for SAT.
Krom answers the question as follows. Create a custom deduction system (a logic) with a small number of possible formulas, a finite subset of classical propositional logic (also make sure your deduction rules are sound, but that is easy). A boolean satisfiability instance becomes a theory in that logic (additional axioms coming from the instance). The variable assignments are just the classical interpretations of the theory, and the satisfying assignments are its models. Then you prove that every theory in that small logic is refutation complete. That means that instead of checking all variable assignments (interpretations) to verify satisfiability, you only have to generate all true sentences of the theory; if you don't generate a contradiction, refutation completeness guarantees that the instance (from which the theory was built) is satisfiable. So even though there can be exponentially many models, we can control the number of true sentences to be checked by the choice of the deduction system.
So this is what Krom does for 2-SAT. He constructs a simple deduction system (so simple he doesn't even give it a formal name) that governs 2-SAT, which only has formulas of the form $x \vee y$ for any literals $x,y$, and the deduction rule (which represents resolution, but I call it transitivity in my paper) $$ x \vee y,\ \neg y \vee z\ \vdash\ x \vee z.$$ Then he proves that this deduction system is refutation complete, and also proves a nice criterion for detecting the unsatisfiability of 2-SAT theories (the existence of two chains of resolutions that derive both $x \vee x$ and $\neg x \vee \neg x$ for some variable $x$). However, he doesn't really need the criterion to show polynomial decidability of 2-SAT: as you can see, the total number of possible formulas in this deduction system is pretty small (quadratic in the number of variables), so if we generate all of them using the deduction rule, refutation completeness lets us decide the instance (though this is much less efficient than using the criterion).
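The "generate everything and look for a contradiction" procedure is easy to render executable. Here is a deliberately naive saturation (my own sketch of the argument, not Krom's algorithm verbatim): literals are encoded as `2*var + (1 if negated)`, a clause is a set of at most two literals (a singleton `{x}` stands for $x \vee x$), and we resolve until nothing new appears:

```python
def neg(lit):
    # literal encoding: 2*var, with the low bit marking negation
    return lit ^ 1

def satisfiable_2sat(clauses):
    """Saturate the clause set under resolution; the theory is unsatisfiable
    exactly when the empty clause is derived (equivalently, when both
    x ∨ x and ¬x ∨ ¬x appear for some variable x)."""
    S = {frozenset(c) for c in clauses}
    changed = True
    while changed:
        changed = False
        for A in list(S):
            for B in list(S):
                for lit in list(A):
                    if neg(lit) in B:
                        R = (A - {lit}) | (B - {neg(lit)})
                        if not R:
                            return False       # refutation found
                        if R not in S:
                            S.add(R)
                            changed = True
    return True

# x0, ¬x0 ∨ x1, ¬x1 ∨ ¬x0  forces x0 and ¬x0: unsatisfiable
assert not satisfiable_2sat([(0, 0), (1, 2), (3, 1)])
# x0 ∨ x1, ¬x0 ∨ ¬x1  is satisfied by x0 = T, x1 = F
assert satisfiable_2sat([(0, 2), (1, 3)])
```

Since there are only $O(n^2)$ possible 2-clauses, the saturation terminates in polynomial time, which is exactly the point of keeping the deduction system small.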
It's also worth noting that using a refutation-complete deduction system to prove satisfiability is a kind of non-deterministic algorithm. We don't really know in what order we arrive at the true sentences of a given theory (deriving a true sentence can be seen as one "step" of the algorithm), but we know, from refutation completeness, that if the theory is not satisfiable, we will derive a contradiction eventually. We can keep the algorithm polynomial by keeping the deduction system small (relative to the number of variables involved). In retrospect, I think it is very fitting to approach the search for a polynomial algorithm for NP through this class of non-deterministic, deduction-system-based algorithms.
The second step of the strategy is then an adaptation of Krom's approach to 2XSAT. First, we need to define our logic. After some experimentation, I picked a logic similar to that of 2-SAT, but instead of the usual literals we allow what I call generalized literals: any combination of literals joined by the XOR operation (with an optional constant $\top$). I denote generalized literals with greek letters; they are essentially linear equations over $\mathbb{Z}_2^n$. In our logic, the allowed formulas are disjunctions of two generalized literals. I call this logic poset-linear ("poset" refers to the implications represented by the disjunctions, "linear" to the linear equations). The logic also needs deduction rules. In classical propositional logic, there is only the modus ponens rule, and everything is done with it plus substitution into "logical axioms" (IMHO these should also be called deduction rules, but that's a matter of taste). Here we only have a limited ability to substitute (we can substitute one generalized literal for another), so instead we have deduction rules for everything. After some experimentation, I arrived at the following set of deduction rules (the greek letters $\alpha, \beta, \gamma$ stand for arbitrary generalized literals):
- Identity (I): $$\vdash\ \neg \alpha \vee \alpha$$
- Commutativity (C): $$\alpha \vee \beta \ \vdash\ \beta \vee \alpha$$
- Linearity (L): $$\alpha \vee \beta,\ \alpha \vee \gamma \ \vdash\ \alpha \vee \neg (\beta \oplus \gamma)$$
- Transitivity (T): $$\alpha \vee \beta,\ \neg \beta \vee \gamma \ \vdash\ \alpha \vee \gamma$$
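To make the rules concrete, here is a small executable model (my own encoding, not the paper's): a generalized literal is a pair `(frozenset_of_var_indices, c)` read as $x_{i_1} \oplus \dots \oplus x_{i_k} \oplus c$, negation flips the constant, XOR of two generalized literals is the symmetric difference of their variable sets, and a formula is an ordered pair $(\alpha, \beta)$ meaning $\alpha \vee \beta$:

```python
from itertools import product

def neg(a):                      # ¬α just flips the constant bit
    return (a[0], a[1] ^ 1)

def xor(a, b):                   # β ⊕ γ is again a generalized literal
    return (a[0] ^ b[0], a[1] ^ b[1])   # ^ on frozensets = symmetric difference

def identity(a):                 # (I)  ⊢ ¬α ∨ α
    return (neg(a), a)

def commute(f):                  # (C)  α ∨ β ⊢ β ∨ α
    return (f[1], f[0])

def linearity(f, g):             # (L)  α ∨ β, α ∨ γ ⊢ α ∨ ¬(β ⊕ γ)
    assert f[0] == g[0], "(L) needs a shared first disjunct α"
    return (f[0], neg(xor(f[1], g[1])))

def transitivity(f, g):          # (T)  α ∨ β, ¬β ∨ γ ⊢ α ∨ γ
    assert g[0] == neg(f[1]), "(T) needs matching ¬β"
    return (f[0], g[1])

def holds(f, assign):            # classical semantics of α ∨ β
    val = lambda a: (sum(assign[v] for v in a[0]) + a[1]) % 2 == 1
    return val(f[0]) or val(f[1])

# soundness spot-check of (T): whenever both premises hold under an
# assignment of three variables, so does the conclusion
a = (frozenset({0}), 0); b = (frozenset({1, 2}), 1); c = (frozenset({0, 2}), 0)
f, g = (a, b), (neg(b), c)
for assign in product([0, 1], repeat=3):
    if holds(f, assign) and holds(g, assign):
        assert holds(transitivity(f, g), assign)
```

The same brute-force check passes for the other three rules, which is the "soundness is easy" half; refutation completeness is the part that, as noted above, fails for this system as stated.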