thibault

P and NP are Incomparable


@Leo Gura, I've been inspired by your latest video about being wrong to share an intuition which I've had for some time and which I can't budge on, even though I might be wrong about it.

 

P vs NP, as those of you familiar with computer science may know, is the most famous open problem in the field. Essentially, it asks whether every problem whose solution can be efficiently verified by a computer (NP) can also be solved efficiently by a computer (P). If so, P = NP; if not, P ≠ NP.

 

I have a deep intuition, one that eats at me from the inside, that this problem cannot be solved. In Gödelian terms we could say it is undecidable, but since we are talking about comparing two classes of problems, I prefer the term incomparable.

 

I know it is arrogant to think I know the "answer" to this problem (that there is none), and even more so to think I can formally prove it, knowing I did not even complete my bachelor's in computer science. Yet I can't help but research it, trying to find a way to tell the world: "stop working on this, you will never find the answer."

 

Here's how I think about the problem and my justification for why it's bogus:

We know that all problems that can be solved efficiently by a computer (call them P problems from now on) have solutions that can also be verified efficiently by a computer (which makes them NP problems too). This is because you could just solve the problem yourself on your computer and compare your result against the solution that was given to you.
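To make that concrete, here's a toy Python sketch (my own illustration, using sorting as a stand-in for any efficiently solvable problem): the verifier simply re-solves the instance and compares.

def solve_sort(xs):
    # An "efficient solver" for the sorting problem: O(n log n).
    return sorted(xs)

def verify_sort(xs, claimed):
    # Verify a claimed answer by solving the instance yourself
    # and comparing, exactly as described above.
    return solve_sort(xs) == claimed

print(verify_sort([3, 1, 2], [1, 2, 3]))  # True
print(verify_sort([3, 1, 2], [3, 2, 1]))  # False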

The problem arises when we find NP problems for which we have no efficient algorithm to solve. A classic example of this is the Traveling Salesman Problem.

"Given a list of cities and the distances between each pair of cities, what is the shortest possible route that visits each city exactly once and returns to the origin city?"

We can check a proposed solution to this problem easily: check that each city is visited exactly once and calculate the total distance travelled. But it's not obvious how we can efficiently find the shortest possible route.
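To illustrate the asymmetry, here's a toy Python sketch (my own, with made-up helper names): the checker runs in polynomial time, while the only obvious solver tries every possible tour.

from itertools import permutations
import math

def tour_length(cities, tour):
    # Total straight-line distance along the tour, returning to the start.
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def verify(cities, tour, claimed_length):
    # Efficient check: every city appears exactly once and the total
    # distance matches the claim. Polynomial time in the number of cities.
    return (sorted(tour) == list(range(len(cities)))
            and math.isclose(tour_length(cities, tour), claimed_length))

def solve_brute_force(cities):
    # The obvious solver: try all (n-1)! tours. Hopeless as n grows.
    n = len(cities)
    return min(([0] + list(p) for p in permutations(range(1, n))),
               key=lambda t: tour_length(cities, t))

cities = [(0, 0), (0, 3), (4, 0), (4, 3), (2, 5)]
best = solve_brute_force(cities)
print(best, round(tour_length(cities, best), 2))
print(verify(cities, best, tour_length(cities, best)))  # True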

Solving P vs NP in this case would mean proving either that we can efficiently find the shortest possible route or that we can't.

I propose we can't definitively prove either! There are an infinite number of city and distance configurations, and proving P vs NP would mean being able to write an algorithm that works efficiently on all of them. Unless you allow room for an infinite algorithm that is infinitely complex, we can't ever account for the infinite diversity of problems that can be created within this framework. But also, our inability to write and execute this infinite algorithm means that we can never prove one way or another whether we can or can't actually write and execute it.

If you have one guy who thinks yes and another who thinks no, and they start writing the algorithm to prove each other wrong, they will die before finishing; and even if their descendants continue their work, humanity will die long before it is completed, considering it is... infinite.

 

I did my best to summarize my current understanding of the problem and put it in layman's terms. Ever since I started looking into this problem, I have been disappointed that many think it will one day be solved one way or the other. I'm curious to be challenged and proven wrong myself, because I know there's no good reason I would have the "answer" to a problem that has stumped every expert in the field for years. But I feel so deeply that this is the truth of the matter. I'd love to hear what any and everyone thinks about it.

Reciprocality

If a problem can be solved in a formalised system, a computer with sufficient horsepower or sufficient time will always be able to calculate that this is the case; anything else would contradict the laws of logic.

However, though proof-checking is excessive force (brute force), it does not involve the much more excessive force of variability; there is no solving for x in proof-checking.

Solving a problem via brute force begins with x. X can be solved for by the machine that checks proofs, even x, y and z together, but the curve exponentiates very quickly. Still, it never goes vertical, so here too the laws of logic stay intact.

 

Logically speaking, all problems that can be verified by a computer can be solved by a computer. Practically speaking, certain exponential curves outgrow the limits of computation imposed by the matter and time in the universe. So, effectively speaking, not all problems that can be verified by a computer can be solved by one.
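For scale (my own back-of-the-envelope numbers, not a claim from the post): a brute-force search over just 300 yes/no variables has 2^300 ≈ 10^90 candidate assignments, more than the roughly 10^80 atoms in the observable universe. That is the sense in which an exponential curve outgrows the matter and time available for computation.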

 

To conceptualise this we should turn to basic epistemology and proportion the known to the unknown.

When it comes to the unknown in proof-checking there is none, just as there is none when it comes to exceptions to the identity of existence.

When it comes to the unknown of a problem that is not constructed out of the axioms that check solutions, you must generalise new principles from those axioms blindly. The complexity of the problem (how many variables it consists of) will output an exponential difficulty in proportion to the richness of the algorithmic instructions. It follows by definition that a quantity is of lesser magnitude than itself exponentiated.

The only possible counterargument takes issue with the premise of the exponentiation itself, which can only be done by introducing a theory of inherent limits or "upper bounds": in other words, that if a problem becomes sufficiently complex it exhausts itself, such that what in the initial instructions held only for one case now holds for exponentially more.

Until someone creates such a theory of upper bounds, I will say that P does not equal NP.



Reciprocality

Also, if NP includes all subjective or semantic problems, then it is trivial that NP does not equal P.

If there is no structure to the problem itself, if it does not analogically involve known quantities as well as unknown ones, then the fact that it cannot be effectively solved by an algorithm is trivially obvious.

The real question in dispute must then structure all problems relevant to NP on some basis that overlaps with the basis of the algorithm.

 

In other words, that which constitutes the variability of the problem must relate to that which constitutes its non-variability in a merely quantitative way; if their qualities are entirely separate, then it is a contradictory problem.

The relation between the known and the unknown of the problem, by being a mere proportion between the two and bearing the same essence as the known, must also bear the same essence as the instructions employed in the algorithm. This is the nature of deduction. Can you induce red from a composition of wavelengths? No. Is that an unsolved problem? No.

 

So how can an algorithm whose instructions share their basis with the problem it attempts to solve fail if given "almost" infinite time?

For the same reason that your headphones sound somewhere between two and four times as loud when they are close to your ears as when they are at twice that distance.
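Assuming the analogy is the inverse-square law for sound (my reading, not stated in the post), the arithmetic goes: intensity falls off as I ∝ 1/r², so I(r/2) / I(r) = (r ÷ (r/2))² = 4. Halving the distance quadruples the intensity (about +6 dB), while perceived loudness grows more slowly than physical intensity, so the felt change is less than fourfold.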

What about infinite time? 

Since several infinities would here contradict the laws of logic, as in undermine the structure on which the very meaning of the problem is contingent, infinite time would always be sufficient for even the most rudimentary algorithm to solve the most complex conceivable problem within their mutual limits.

 



thibault

Hey, thanks for the reply. I think we're on the same page for much of the reasoning.

One thing about the P and NP classes is that they refer to problems that can be solved in polynomial time and whose solutions can be checked in polynomial time, respectively.

This means, roughly, that for an input of size x to a given problem, the algorithm takes about x^k steps (for some constant k) to solve it or to check a solution. There are problems, like the boolean satisfiability problem or the traveling salesman problem, for which we have polynomial-time algorithms to check a solution, but only exponential-time algorithms are known to solve them (for input size x, on the order of 2^x steps).
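Here's a toy Python sketch of that gap for satisfiability (my own encoding, not a standard library): checking an assignment takes one pass over the formula, while the naive solver may walk through all 2^n assignments.

from itertools import product

def check(assignment, clauses):
    # Verification: one pass over the formula -- polynomial time.
    # Literal +i means "variable i is True", -i means "variable i is False".
    return all(any((lit > 0) == assignment[abs(lit)] for lit in clause)
               for clause in clauses)

def solve(n_vars, clauses):
    # Solving by brute force: up to 2^n candidate assignments.
    for bits in product([False, True], repeat=n_vars):
        assignment = {i + 1: b for i, b in enumerate(bits)}
        if check(assignment, clauses):
            return assignment
    return None

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
print(solve(3, [[1, 2], [-1, 3], [-2, -3]]))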

LastThursday

It would be interesting to know whether, specifically, there is a polynomial time solution to the travelling salesman problem, at least in the 2D case where cities are on a coordinate plane and are joined by straight lines.

There are some heuristic arguments. The first is to start with the base case of three cities: this is just a triangle, whose solution takes zero effort (just join the cities to each other). For each city you add to the mix, it seems there should be a polynomial time way to extend the solution. In the case of four cities you could make a triangle around the three outermost cities and then find the shortest way to connect the remaining point to the others.

Intuitively, it seems the way to solve the general case is to start with a convex polygon which joins all the outermost cities (this is definitely polynomial time), and then "suck the air out" of the polygon, so to speak. In other words, with each iteration the deflated polygon "sticks to" more and more cities. The result is a radially spiky polygon whose edges are the solution. Each step is polynomial time, as you just have to check which cities are closest to the polygon edges in the current iteration of "sucking the air out". Whether this actually produces the shortest route is beyond my ability to check!
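Here's a rough Python sketch of that "sucking the air out" idea, rendered as a convex-hull-plus-cheapest-insertion heuristic (a known family of TSP heuristics; the rendering is mine, and nothing here guarantees the shortest route):

import math

def convex_hull(points):
    # Andrew's monotone chain, O(n log n): the outermost polygon.
    pts = sorted(set(points))
    if len(pts) <= 2:
        return list(pts)
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def hull_insertion_tour(points):
    # Deflate the hull: repeatedly splice in whichever remaining city
    # is cheapest to insert into the current tour. Each step is polynomial.
    tour = convex_hull(points)
    remaining = [p for p in points if p not in tour]
    while remaining:
        best = None  # (extra cost, city, insertion position)
        for city in remaining:
            for i in range(len(tour)):
                a, b = tour[i], tour[(i + 1) % len(tour)]
                extra = math.dist(a, city) + math.dist(city, b) - math.dist(a, b)
                if best is None or extra < best[0]:
                    best = (extra, city, i + 1)
        _, city, pos = best
        tour.insert(pos, city)
        remaining.remove(city)
    return tour

cities = [(0, 0), (6, 0), (6, 6), (0, 6), (2, 3), (4, 2)]
print(hull_insertion_tour(cities))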

Anyway, just throwing it out there.



thibault
21 hours ago, LastThursday said:

"It would be interesting to know whether, specifically, there is a polynomial time solution to the travelling salesman problem [...]"

This is kind of the trap of this problem. It seems deceptively easy, like you should immediately know the answer, yet it is one of the hardest problems in computer science. For the TSP we have polynomial-time heuristics and approximations that find good routes most of the time. The problem is that we don't have an algorithm guaranteed to find the optimal route in polynomial time every time. Furthermore, the real difficulty of NP-complete problems like the TSP is the infinite variety of instances: whatever algorithm you come up with, I can come up with an edge case that it can't solve efficiently.
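To make the edge-case point concrete, here's a small experiment in Python (my own illustration, using the classic nearest-neighbour heuristic as the stand-in algorithm): random instances usually expose a tour where the heuristic is not optimal within a few tries.

import math, random
from itertools import permutations

def tour_length(cities, tour):
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def nearest_neighbour(cities):
    # Polynomial-time heuristic: always visit the closest unvisited city.
    tour, left = [0], set(range(1, len(cities)))
    while left:
        nxt = min(left, key=lambda j: math.dist(cities[tour[-1]], cities[j]))
        tour.append(nxt)
        left.remove(nxt)
    return tour

def optimal(cities):
    # Exhaustive search, feasible only for tiny instances.
    n = len(cities)
    return min(([0] + list(p) for p in permutations(range(1, n))),
               key=lambda t: tour_length(cities, t))

random.seed(1)
for trial in range(200):
    cities = [(random.random(), random.random()) for _ in range(7)]
    heur, best = nearest_neighbour(cities), optimal(cities)
    if tour_length(cities, heur) > tour_length(cities, best) + 1e-9:
        print(f"trial {trial}: heuristic {tour_length(cities, heur):.3f} "
              f"vs optimal {tour_length(cities, best):.3f}")
        break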
