Gödel's Disjunction [video]

(youtube.com)

36 points | by raoof 947 days ago

4 comments

  • simonh 945 days ago
    Where this all falls down is in the assumption that human minds are consistent systems. I have to wonder if anyone who thinks such a thing has ever met any actual humans. Even a brief interrogation of a typical human as to their beliefs and assumptions about the world should rapidly disabuse us of the thought.

    Humans are perfectly capable of assuming unproved axioms, changing their set of axioms, and accepting contradictory axioms. We do these things all the time, applying one axiom in one situation and its opposite in another. Just ask someone about their political beliefs for a while.

    The fact is human intelligence is not an end in itself; it's a tool we use to achieve goals set by our evolutionary priorities, as encoded in our emotions and needs. These are the things that drive us, not logical axioms and proven truths. Even smart people have an emotional need to be correct, and many will resist tooth and nail having their beliefs challenged and changed. It takes constant effort and self-discipline to maintain an open mind to new ideas and the rejection of existing assumptions, and it certainly doesn't come naturally to us.

    So this systematic theorising all seems somewhat beside the point. Don't get me wrong. It's interesting and useful philosophical work, no question, but it's not really applicable to actual human minds.

    • mannykannot 945 days ago
      By "all this", I assume you mean the work of first Lucas, and then Penrose, in using Gödel's disjunction to argue against Strong AI [1]. This argument cannot be dismissed simply by noting that humans are often inconsistent, and it is not plausible that people of the intellectual stature of Lucas and Penrose would make so simple an error.

      What these arguments claim is that if a digital computer could emulate a human mind, then the human mind is ultimately algorithmic (so there is an algorithm that can produce all the theorems that the human mind is capable of producing.) The Church-Turing thesis says that every algorithmically computable function is computable with a Turing machine. This is equivalent to the proposition that the collection of humanly knowable theorems can be recursively axiomatized in some formal theory T. This theory would then be consistent [2].

      Given that this hypothetical system is consistent, it has at least one Gödel sentence: a true statement which cannot be proven from within the system. Lucas and Penrose assert, however, that humans could see its truth by "stepping outside of the system", as they do, for example, in proving the consistency of Peano arithmetic. Therefore, there would be at least one thing that a human mind could do, but which its supposedly equivalent digital computer could not.
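
      To make the skeleton of that step explicit (my own rendering in symbols of what is described above):

        \text{If } T \text{ is consistent, recursively axiomatizable, and extends } \mathsf{PA},
        \text{then there is a sentence } G_T \text{ with } T \nvdash G_T \text{ and } \mathbb{N} \models G_T.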

      So Lucas and Penrose are not assuming that humans are consistent; on the contrary, they are saying that Strong AI proponents themselves are implying that minds are reducible to a consistent formal theory. As Lucas and Penrose do not accept the Strong AI premise, they are not making this implication, and they do not have to reconcile it with the observed inconsistency of humans - that is, as it were, left as an exercise for Strong AI proponents to solve (this, however, is not the crux of their argument, which concerns deducing the truth of the implied system's Gödel sentence, as outlined above.)

      I believe these arguments can be plausibly challenged, but not by anything so simple as observing that humans are inconsistent.

      [1] https://iep.utm.edu/lp-argue/

      [2] https://www.maa.org/press/maa-reviews/g-dels-disjunction

      • simonh 944 days ago
        I've never heard of Peano arithmetic before, and I'm not a logician or mathematician, so I'll rely on Wikipedia, for what it's worth:

        "Whether or not Gentzen's proof meets the requirements Hilbert envisioned is unclear: there is no generally accepted definition of exactly what is meant by a finitistic proof, and Hilbert himself never gave a precise definition.

        The vast majority of contemporary mathematicians believe that Peano's axioms are consistent, relying either on intuition or the acceptance of a consistency proof such as Gentzen's proof. A small number of philosophers and mathematicians, some of whom also advocate ultrafinitism, reject Peano's axioms because accepting the axioms amounts to accepting the infinite collection of natural numbers."

        - Me again - Frankly, that doesn't seem to me to be a case of stepping out of anything to prove it. All they did was expand the set of axioms to include an additional set that allowed the construction of a separate proof, but that still leaves you in the same position you started in, because you can't prove the new, expanded system in terms of its aggregate axioms either. In fact, it seems we don't even have a clear definition of what a finitistic proof even is. I see no reason why an algorithmic system couldn't engage in such games.

        • mannykannot 943 days ago
          Here's a quote that I think you may find interesting, as it seems to be along the lines of what you are saying:

          "A meta-mathematical proof of the consistency of arithmetic is not excluded by...Goedel's analysis. In point of fact, meta-mathematical proofs of the consistency of arithmetic have been constructed, notably by Gerhard Gentzen, a member of the Hilbert school, in 1936. But such proofs are in a sense pointless if, as can be demonstrated, they employ rules of inference whose own consistency is as much open to doubt as is the formal consistency of arithmetic itself. Thus, Gentzen used the so-called "principle of transfinite mathematical induction" in his proof. But the principle in effect stipulates that a formula is derivable from an infinite class of premises. Its use therefore requires the employment of nonfinitistic meta - mathematical notions, and so raises once more the question which Hilbert's original program was intended to resolve." -Ernest Nagel and James Newman

          Taken from https://www.mathpages.com/home/kmath347/kmath347.htm
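
          To put the technical content of that remark in symbols (my addition, from standard presentations of Gentzen's result): the proof goes through in primitive recursive arithmetic supplemented with quantifier-free transfinite induction up to the ordinal \varepsilon_0,

            \mathsf{PRA} + \mathrm{TI}(\varepsilon_0) \vdash \mathrm{Con}(\mathsf{PA}), \qquad \text{whereas} \qquad \mathsf{PA} \nvdash \mathrm{Con}(\mathsf{PA}),

          so the extra strength lies precisely in the induction principle Nagel and Newman point to.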

        • mannykannot 944 days ago
          Lucas himself foresaw the 'expand the axioms' response, and pointed out that once you do this, you get a new formal system with a new Gödel sentence. This means that no such argument against his thesis - i.e. by adding the truth of the formal system's Gödel sentence to that system's axioms - will terminate.
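
          Schematically (my own notation, not Lucas's): adding the Gödel sentence as a new axiom just produces

            T_0 = T, \qquad T_{n+1} = T_n \cup \{ G_{T_n} \},

          and each T_{n+1}, being again consistent and recursively axiomatizable, has its own Gödel sentence G_{T_{n+1}}, so the regress never terminates.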

          Note that I am not saying the Lucas-Penrose argument is indefeasible; I am saying that it cannot be defeated simply by the argument that goes 1) it claims the human mind is consistent; 2) we can see that the human mind is not consistent; ergo, it is false. This argument fails because premise 1 is false. If it were this simple, the video here would not be nearly two hours long!

          Addendum: The "stepping out" argument is not a matter of adding axioms, it is a sort of meta-argument that a Gödel sentence for a consistent system must be true[1], and the consistency of the hypothetical system under discussion follows via the Church-Turing thesis (which you could challenge if you want to, but I suspect that, to most people who understand the implications of doing so, this would be an undesirable trade of one implausibility for another.)

          This leaves me wondering if the Lucas-Penrose argument can be challenged on the grounds that the algorithm in question probably does not halt (other than for causes outside the system; namely, the causes of death or the causes of a power outage), or that the question of its halting is undecidable. I am sure this response has been debated somewhere, if it is not obviously wrong.

          [1] https://math.stackexchange.com/questions/1491061/paradoxhow-...

    • contravariant 945 days ago
      That basically implies the second option: some mathematical truths are beyond the grasp of humans. Which I suppose isn't too weird; it's entirely conceivable that for some theorems the shortest possible statement of them simply can't be understood in a lifetime, or ever.

      Which to me makes more sense at least than the first option, which seems to argue that human minds cannot be mechanized because humans are capable of proving mathematical truths and machines have mathematical truths they cannot prove.

  • gjm11 945 days ago
    The argument discussed here was first published by Lucas in 1961 ("Minds, Machines and Goedel"). What seems to me a conclusive refutation of it was published by Hilary Putnam in 1960 ("Minds and Machines"). Putnam's point is much the same as simonh's in a comment here: the Lucas(/Penrose) argument makes assumptions about human mathematicians that do not in fact apply to human mathematicians.

    [EDITED to add:] Contra simonh, though, one can refute the argument without going so far as to say that human mathematicians are definitely inconsistent. (Maybe a sufficiently careful human mathematician is consistent.) All that's required is that we not be able to prove that we are consistent, and I think it is extremely clear that we can't.

    In other words, the Lucas(/Penrose) argument was refuted before it was ever published.

    (Penrose's version isn't really any improvement on Lucas's.)

    Note: The video is 2 hours long and highly technical. I haven't watched anything like all of it. The speaker is _not_ endorsing Lucas's or Penrose's conclusion that Goedel's theorem shows that minds cannot be mechanized; he makes observations similar to Putnam's, simonh's, and mine, but clearly makes them with more subtlety and intricacy :-).

    • raoof 945 days ago
      I have to say I'm just an amateur programmer, but if there are absolutely undecidable statements, then what is the point of thinking? And if we are inconsistent, what is the point of science? Why not call it religion?
      • simonh 945 days ago
        Science isn't about formal and absolutely certain proofs. That's why we have things like the 5-sigma standard for evidence in physics. We come up with a plausible theory, then look for evidence that it makes useful predictions. No experimental verification can be truly absolute though. Also consider that Newtonian mechanics was superseded by relativity, and that even though relativity and quantum mechanics are thoroughly verified across many domains, they are also inconsistent with each other in some ways.
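
        (For a concrete sense of the 5-sigma threshold mentioned above, a minimal sketch under a simple normal background model; assumes scipy is available:)

          # One-sided tail probability of a fluctuation at least five standard
          # deviations above the mean under a normal model: about 2.9e-7, i.e.
          # roughly 1 chance in 3.5 million that background noise alone
          # produces the observed excess.
          from scipy.stats import norm

          p = norm.sf(5.0)   # survival function: 1 - CDF at 5 sigma
          print(f"{p:.2e}")  # ~2.87e-07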

        We don't have absolute knowledge in physics, or science generally, in the strict sense that this video and philosophers of logic discuss. We only have useful theories that describe the world very accurately across most situations.

        We also, very importantly, get to conclusively rule out an awful lot of ideas and theories. That's very important work. It is all about levels of confidence though. If you think about it, you probably already knew all this; you just hadn't considered the consequences yet. We're all in that boat about a lot of things. I'm afraid that in the grand scheme of things, despite all our truly incredible accomplishments, it's also true that we're stuck with fairly limited mental faculties.

        As to why? Well, we need to feed and clothe our children somehow. That means figuring out how to get stuff done. Working out explanations of the world that verifiably work, more or less, has turned out pretty well for us.

        • raoof 945 days ago
          If I am inconsistent, how can I trust the conclusion that I am a machine, considering that my desire is not to be one?
          • simonh 945 days ago
            I don't think you can. We don't have enough information to be certain yet; we can just try to estimate the least inconsistent hypothesis.
            • raoof 945 days ago
              I don't understand your philosophical position. The mind-body problem is not about the data, it's about explanation. How much data do you need? We are bombarded with data every second of our lives.
              • simonh 944 days ago
                We don't have a sufficiently robust explanation, backed up by evidence for its accuracy.
    • simonh 945 days ago
      Thanks so much for the reference, it's great to see someone expressed this much more rigorously.
  • simias 945 days ago
    I've only barely started with this video, but I'm currently two thirds of the way into Penrose's "The Emperor's New Mind" and I've read a bunch of other opinion pieces on the subject along the way.

    I think I'm just too dumb to understand this discussion. For me, we could split the discussion into two potentially separate questions:

    - Can a machine emulate a human mind so well that it would be indistinguishable from a "real" human to an external observer (that's the Turing test, effectively)

    - Can a machine emulate human consciousness

    And maybe a third bonus question:

    - Is there a meaningful difference between these two propositions from a scientific perspective? I.e. can we make falsifiable claims that would let us suss out philosophical zombies?

    At this point I'm absolutely convinced that the answer to the first question is affirmative. We're not there yet, and there's quite a long way to go until we are, but I really don't see why there would be a fundamental mathematical hurdle on the way there. Maybe it exists, but I have yet to find a really compelling argument for where that hurdle would lie concretely. Give me an example of a thought that we couldn't teach a machine.

    Regarding the 2nd question (and the 3rd) it just boils down to "what's consciousness exactly? Is it even knowable?" and I don't think anybody has an answer to that. My personal intuition is that it's unknowable.

    • sva_ 945 days ago
      > Regarding the 2nd question (and the 3rd) it just boils down to "what's consciousness exactly? Is it even knowable?" and I don't think anybody has an answer to that. My personal intuition is that it's unknowable.

      You can't even know if anyone other than you is conscious. For all you know, the rest of us might all be robots that are good at fooling you.

      I think that if we achieve AGI, at some point there will be a division between people: those who acknowledge them as living, feeling entities, and those who don't.

      Just like even nowadays there are still people who feel that others (based on race, skin color, etc.) are not really people.

      • mannykannot 945 days ago
        > You can't even know if anyone other than you is conscious.

        This claim comes up quite often in these sorts of discussions, but it is very hard to maintain this level of skepticism about the external world consistently (to start with, do you believe any of your thoughts are about an external world?) To take this extremely skeptical position only about other people's minds would be quite tendentious.

      • roenxi 945 days ago
        > You can't even know if anyone other than you is conscious.

        Given that we can't define consciousness, even that might be implying more knowledge than someone can possess.

    • prometheus76 945 days ago
      EDIT: I am watching the video now, and made my comment below before I started watching the video. I realize now that my comment is somewhat outside the scope of the video itself, but I'll leave the comment up anyway, because I think it reaches a similar conclusion to the video.

      I'll take a stab at what seems to me to be an insurmountable problem for AI: vision. And not just vision, but "seeing". Pretend I am standing next to a table, and the AI successfully identifies me, and identifies a table next to me (which, in itself, is a very difficult, if not impossible, thing for AI to do right now). Now, let's say I sit on the table. Now, we ask the AI, "Am I sitting on a table, or sitting on a chair?"

      Further to the point (and a less "human" scenario): Musk has been attempting for years to make a lights-out facility for building cars. As of yet, there are still many things that a robot cannot do, or that a human can do faster and more reliably, in spite of throwing billions of dollars and millions of work-hours at the problem. Another example: shoe manufacturing robots. Another example: brick laying robots. Another example: pipe welding robots. None of those can even come close to a human's ability to adapt on the fly to small variations, or to learn new behaviors. AI sees the world in a very low-resolution way, and humans are generally unaware of how much "constructing of the world" our brains do compared to the input data they receive from our senses. Replicating this is going to take more than a GPT-3, for example.

      • worldsayshi 945 days ago
        Feels like what you're referring to is just "understanding". GPT-3 fails at this as well. It can string together sentences and whole paragraphs that seem to sort of make sense together, but it never really puts hard constraints on any concepts. It doesn't try to build a comprehensive world with rules. Everything is just a sort of fuzzy continuum. If you train it on enough data, this fuzziness generates a lot of plausible-sounding output, but it will have a hard time coming up with the right answer in a new situation.

        I feel the missing ingredient is to be able to combine this fuzzy or intuitive understanding with some kind of rule engine.

        It's like in the book "Thinking, Fast and Slow". It feels like GPT and the best image recognition algorithms we have today are good at the fast kind of thinking but don't have the slow and logical part.
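
        (A toy sketch of that combination, purely illustrative; every name here is hypothetical and the "fuzzy model" is just a hard-coded stand-in for a neural component:)

          # Fast/fuzzy component proposes candidate interpretations with
          # confidences; a slow/rule layer rejects candidates that violate
          # explicit constraints.
          from dataclasses import dataclass

          @dataclass
          class Candidate:
              claim: str
              confidence: float

          def fuzzy_model(prompt: str) -> list[Candidate]:
              # Placeholder for a neural model's ranked guesses.
              return [Candidate("the person is sitting on the table", 0.6),
                      Candidate("the table is sitting on the person", 0.4)]

          RULES = [
              # Hard constraint: furniture does not sit on people.
              lambda c: "table is sitting on" not in c.claim,
          ]

          def slow_filter(candidates: list[Candidate]) -> list[Candidate]:
              return [c for c in candidates if all(rule(c) for rule in RULES)]

          print(slow_filter(fuzzy_model("person and table")))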

      • abecedarius 945 days ago
        > very difficult, if not impossible, thing for AI to do right now

        Is that right? I thought computer vision was getting pretty good at object detection in the past few years.

        • simonh 945 days ago
          Those are just image classifiers though. They associate a pixel pattern with the character string "table". They might be able to recognise an image as relating to both a table and a person, but most likely as competing possible classifications. They generally don't have a concept of relatedness between classifications. They have no spatial conceptual model relating concepts like 'above', 'beside', 'below', distance, etc.

          They also have no idea what a table is, no general concept of furniture, or even of a table as a physical or three-dimensional object. They certainly have no idea how it relates to the human form or what "sitting" on something means. It's literally just pixel pattern => string("table"). That's it.
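
          (To make that "pixel pattern => string" point concrete, a minimal sketch using a pretrained torchvision classifier; the image path is a placeholder and this is illustrative, not a claim about any particular system:)

            # The full extent of what a plain image classifier returns: a score
            # per class name. No spatial relations, no concept of what a table is.
            import torch
            from PIL import Image
            from torchvision import models

            weights = models.ResNet50_Weights.IMAGENET1K_V2
            model = models.resnet50(weights=weights).eval()
            preprocess = weights.transforms()

            img = preprocess(Image.open("photo.jpg")).unsqueeze(0)
            with torch.no_grad():
                probs = model(img).softmax(dim=1)

            top = probs.argmax(dim=1).item()
            print(weights.meta["categories"][top])   # e.g. "dining table"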

          There may be some experimental models that attempt such things, but the image classifiers that get all the press these days are very good at classification but nothing else.

          • abecedarius 945 days ago
            Image classifiers aren't what I said. BabyTalk in 2013 could take a photo and write out "This is a photo of one person and one brown sofa and one dog. The person is against the brown sofa. And the dog is near the person, and beside the brown sofa." (All correct.) That's almost prehistoric in neural-network years.
            • heyitsguay 945 days ago
              The problem is going from results on curated datasets - usually with at least a little implicit overfitting, since even if you reserve test data, it's typically chosen in the dataset creation process - to data in the wild. BabyTalk had that cool result in 2013, and there have been great papers following up in that area, but there's still not, e.g., a website where you can upload a photo and reliably get an accurate description. It's not because nobody's tried, it's because it's hard - neural network vision methods are still generally quite brittle to unexpected variations in data, and "accurate description" is actually a very loaded term that typically needs some sort of in-the-world contextualization to unpack.
    • mannykannot 945 days ago
      I think it is worth pointing out that the Lucas-Penrose argument goes beyond Gödel's disjunction: apparently Gödel himself could not rule out both sides of his disjunction being true, but Penrose, expanding Lucas's argument[1] to book length, thinks he can. It seems, however, that most experts now find their arguments inadequate[2], and the speaker in this video is prominent among them.

      Wikipedia has a short summary of some of the objections[3], and I think Minsky's is particularly to the point: as humans can believe false ideas to be true, human mathematical understanding need not be consistent and consciousness may easily have a deterministic basis.

      [1] http://cogprints.org/356/1/lucas.html

      [2] https://www.maa.org/press/maa-reviews/g-dels-disjunction

      [3] https://en.wikipedia.org/wiki/Penrose%E2%80%93Lucas_argument...

    • trashtester 945 days ago
      My take on your third question:

      If you program a mind specifically to say "I'm conscious.", that does not prove that it is.

      However, if consciousness depends on something we cannot simulate, and that consciousness is the causal reason we think (or say) we are conscious, I would not expect an exact simulation of a brain to be able to do the things we attribute to consciousness.

      So if you emulate a mind to the best accuracy available, and it suddenly says "I'm conscious.", that is at least a very strong indication that it really is, at least in the same sense that humans are.

    • chromaton 945 days ago
      We do have a natural experiment which may provide some insight into consciousness: conjoined twins. They seem to be able to both think independently but also communicate wordlessly.
      • prometheus76 945 days ago
        This applies to any two humans, not just twins. Ever smiled at someone and had them smile back? Ever waved at someone and they waved back? Ever shook your head at someone's behavior? We communicate wordlessly all the time.
        • mannykannot 945 days ago
          If one twin had direct neural access to the other twin's emotional states or "what it's like" aspect of sensory perception, that would be different than any form of communication involving one making gestures that the other interprets.
          • simonh 945 days ago
            I don't believe we have any reliable evidence for that though.
            • mannykannot 945 days ago
              No; the cases of conjoined-at-the-brain twins where both twins are conscious to the degree of each knowing that it is a person are extremely rare, and in any case, a dualist could always claim that whatever is physically observable is not the whole story, as dualism is unfalsifiable so long as its proponents avoid making any affirmative statements about how the mind works.
      • amw-zero 945 days ago
        They have independent brains though. I don’t think it’s a useful data point.
    • bobthechef 945 days ago
      The first task is to define our terms. It's impossible to reason about anything without a good grasp of terms.

      So, for example, if by "consciousness" we mean "intentionality" or at least that it entails intentionality, then you have to explain a) how anything within a computer can be about anything in the world, and b) how abstract concepts can exist in a computer. And you have to think very carefully about this because the internet is full of flippant and unexamined responses that confuse the interpreter with the interpreted.

      Let's say I take a picture of a bike wheel and store it on a computer. What makes this about the bike wheel? I know that the picture is about the wheel, but the picture does not contain that information. No "metadata" can accomplish that either because metadata is of the same nature as the image. Metadata isn't even intrinsically about the image in question. It's there, maybe adjacent to it, but its aboutness is, like the image's aboutness, entirely in my mind. A program can be written to simulate aboutness, but again, the aboutness is simulated in the sense that it is not a real feature of either the metadata or the program. It is an interpretation the programmer or user brings to the computer. So again, no aboutness is to be found in the program either.

      Then there's the question of abstract concepts. A wheel is a concrete thing, but to know that it is circular is no longer a concrete matter. Circularity is nowhere to be found in the world as such. All you ever have are particular circular things. The concept of circularity requires abstraction from particulars. So why can't a computer do that? The reason is the same as the reason why circularity does not exist in the world on its own and that is that matter is always particular. Matter is not abstract. But computers are material things. So how could a computer abstract circularity from an image of a wheel? Sure, you can have an algorithm that is written to match circles in images, but this is not the same as having a concept of circularity on its own. For that you need an intellect.

      The problem of qualia is also related. A materialist account of physical reality entails the denial of qualities like color because all that a materialist accepts is a world of geometric extension in space. Qualities are therefore taken to be features of consciousness. But if consciousness is a physical phenomenon, then how can that be? Here again computers provide no answer. Sure, I can represent the color "red" using #FF0000, but this is not the color red. I can construct a monitor that, when fed a signal that corresponds to #FF0000 activates some physical elements to produce red light, but this is just a conventional alignment between that representation and the construction of the monitor. There is nothing intrinsically red about #FF0000. And besides, the monitor isn't really producing red light according to the materialist. It is producing light that is in a state that, when observed by a conscious being, appears red (which, again, runs into the problem of how even that is possible given that consciousness is physical).

      So the whole idea that computers can be understood as truly autonomous things apart from human beings is bankrupt. They are better understood like all other technology: as an "extension" of human beings. While most animals have all they need in themselves to achieve their ends, human beings can construct technological artefacts to perform actions that they could not otherwise. A turtle has a shell to protect itself, but it is also limited to that shell. Human beings, on the other hand, lack any such protection; we're soft and easier to hurt, but we have a mind that allows us to make protection when we need it, and in (in principle at least) uncountably many ways, depending on the needs facing us. We make coats for winter weather, astronaut suits for the low-pressure environment of space, a beekeeper's helmet when extracting honey from hives, and so on. Computers are just another expression of the same principle.

      • simonh 945 days ago
        Context comes from internal models of how the world works. A database doesn't know what a bike is, exactly as you say, because no matter how many tags it associates with it they have no purpose to it. To understand a bike in the way we do it would need to have kinematic models of the bike, a person, a road, physics simulations of their behaviour and models of human behaviour for the activity of riding the bike.

        We do have kinematic simulations of this kind, but what we don't have is a way to map a visual image of a bike to an internal kinematic model, or associate them together in a useful way, then contextualise that within operational and intentional models, and then relate runs of the simulation to the actual physical environment in an intentional way. I don't think there's anything impossible about any of that, but we certainly don't have this capability at the moment. There's a long, long way to go.