Is P vs. NP a practical problem?

Here’s a recent quote from ACM TechNews:

The ACM’s Special Interest Group on Algorithms and Computing Theory honored Rudich and Razborov for their contributions to addressing the P vs. NP problem, which involves the computational complexity behind the security of ATM cards, computer passwords, and electronic commerce.

The implication here is that the P vs. NP problem is important for computer security. This sounds like saying that General Relativity is important for establishing a mining operation on the Moon.

This may be a naïve question, but would proving that P=NP (or disproving it) change anything in computer security?

Yes, I can appreciate the fundamental nature of the P vs. NP problem. But does it have any practical consequences?

Note that whether a problem requires 2^n or n^150 time will not make much difference: both are intractable.

As a database researcher, I consider anything requiring n^4 time already intractable. Don’t believe me? If n is 1 million and a computer can do 10^12 operations per second, it takes about 30 thousand years to run an n^4-time algorithm. And I am not even getting into the constants: what if your complexity is 10^120 n?
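
To make the arithmetic concrete, here is a quick back-of-the-envelope check in Python (the 10^12 operations-per-second machine speed is the same round figure assumed above):

    # Rough running time of an n^4 algorithm on a fast machine
    n = 10**6                  # one million items
    operations = n**4          # 10^24 elementary operations
    ops_per_second = 10**12    # assumed machine speed
    seconds = operations / ops_per_second
    years = seconds / (3600 * 24 * 365)
    print(round(years))        # roughly 31,700 years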

Oh yes! Please, give prizes to anyone who makes progress toward resolving the P vs. NP problem, but I am still waiting for the practical implications.

(If I am making a crucial mistake here, please tell me! I want to know.)

Update. André pointed me to a web site that pretty much says that P vs. NP is not so important for cryptography.

2 thoughts on “Is P vs. NP a practical problem?”

  1. Daniel,

    I think the issue is essentially relative scaling.

    If you can solve an exponential problem, e.g. one taking 2^n time on n bits, adding a single bit will double the required time, and adding 10 bits will multiply it by roughly 1000 (2^10 = 1024). So adding a few bits makes it intractable quite fast.

    In linear time, adding a single bit will only _add_ a constant amount of time. So if you could do it with n bits, you can probably do it with n+1 or n+10 bits.

    So of course your example holds: there are some polynomial algorithms that won’t run for large values of n. But as computing power and capacity increase, the situation is a lot more favourable for those. If you double your computing power, this will _double_ the size of the problems you can solve in linear time; you gain 19% with O(n^4), and 0.6% with O(n^120). In O(2^n) you will gain 1 bit… (see the sketch after these comments).

    Hope this makes sense.

  2. Well, to use your analogy – do advances in General Relativity have any consequences for mining minerals on the moon? Maybe not – as far as we can tell right now. The point is – we don’t actually know.

    Suppose your N^4 algorithm could be parallelized and we harnessed 30,000 teraflop computers to solve your problem; then your formerly impractical algorithm would become feasible (a rough check appears after these comments).

    I think that’s the point with knowing whether P=NP. It is true that there are some exponential-time algorithms (e.g. the Simplex method for solving LP problems) which are nevertheless practical, and some polynomial-time algorithms (e.g. Karmarkar’s algorithm for the same LP problems) which are *impractical* because of the size of the constants involved.

    One valuable thing about knowing that LP is in the class P is that it at least offers the hope that there exists a feasible solution for large N, whereas knowing only that a problem is NP-hard (and not knowing whether P=NP) sustains the hope that there is *no* feasible solution.
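
As a small check on the relative-scaling numbers in the first comment above, here is a short Python sketch (purely illustrative):

    # If an algorithm runs in n^k time, doubling the computing budget
    # lets the feasible n grow by a factor of 2^(1/k).
    for k in (1, 4, 120):
        growth = 2 ** (1.0 / k) - 1
        print(f"O(n^{k}): problem size grows by {growth:.1%}")
    # Prints: 100.0%, 18.9%, 0.6%
    # For an O(2^n) algorithm, doubling the budget buys exactly one more bit,
    # since 2^(n+1) = 2 * 2^n.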
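
And a rough check of the parallelization claim in the second comment, treating "30,000 teraflop computers" as a combined 3 × 10^16 operations per second (a simplification, using the one-million-record n^4 example from the post):

    # 10^24 operations spread over 3 * 10^16 operations per second
    operations = (10**6) ** 4               # the n^4 problem with n = 1 million
    total_ops_per_second = 30_000 * 10**12  # 30,000 teraflops combined
    days = operations / total_ops_per_second / (3600 * 24)
    print(round(days))                      # about 386 days, i.e. roughly a year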
