Pure Maths in Crisis?

(plus.maths.org)

72 points | by lelf 1861 days ago

8 comments

  • klyrs 1860 days ago
    As a computational mathematician... I should clarify; I write programs to construct and verify proofs in discrete mathematics, and I'm also a hacker obsessed with high-performance code... I agree that mathematics is in crisis. Why? Because I'm an oddball rarity. My first supervisor was a hacker too, but my PhD supervisor thought I was a wizard. Professors, by and large, can't hack. Those who could, abandoned the field for CS. Students are rarely hackers - often the celebrated kids are groomed through the Olympiad, then the Putnam, and jump straight into grad school with nary a glance towards a computer -- until they are broken against the only language seen as practical: LaTeX.

    Mathematics is innately about computation; every "proof" is an algorithm in disguise, and yet the field fails to attract talent because computation itself is treated with such disdain. Math has its own reproducibility crisis -- we don't write code that performs the tedious operations "left to the reader", and even when we do, journals won't post it.
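
    To make "every proof is an algorithm" concrete, here's the Curry-Howard view in a couple of lines of Lean (my choice of prover is incidental; just a sketch):

      -- A proof of the proposition A → (B → A) is literally a program:
      -- the function that takes a proof of A, ignores its second
      -- argument, and returns the proof of A (the K combinator).
      theorem k_comb (A B : Prop) : A → (B → A) :=
        fun a _ => a

    Checking that proof is just type-checking that program.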

    And I don't see a ton of progress. My old department has hired a few young professors since I graduated... none can hack. It's sad.

    • EGreg 1860 days ago
      I totally disagree. I’m an amateur mathematician who dropped out of his PhD program at NYU with a Master’s. But I remember, back when I was doing my math major in college, seeing the Appel and Haken proof of the four color theorem and thinking it wasn’t a real “proof”, only a “half proof”, because it just came down to a computer checking thousands of cases. The “last mile” was not proven.

      Proofs are basically being able to explain something with a small number of concepts at every step. That’s the human way. What we lose when we turn everything over to ML is the ability to understand anything. And I don’t just mean by humans. I mean by machines too.

      I can accept that human logic and thinking is just whatever we can fit into a small vector, and small bandwidth. If that’s all there was, and understanding was synonymous with pattern recognition, then machine learning would just be an amazing upgrade to that, with far larger vectors and so on.

      And although there may be a “superlogic” out there where each statement involves millions of variables, I do not think we are achieving “reasoning” using this logic when it comes to computers. A project like “Cyc” tries to do it. But Machine Learning is something completely different.

      For the real world, the results have been undeniable. So far MCTS is king, followed by RNNs and gradient descent I think. And these may actually be really good for learning situations.

      But proofs are supposed to be more elegant. In mathematics, the whole explanatory elegance is in the “bandwidth” at every step - in how many concepts you invoke - and in the question of why something is true.

      MCTS simply seems to say “well, dunno why it is but in these 927282781618 cases things just seem to be worse than in these other 91273737833738 cases”.

      Although I am open to the idea that a lot of human learning doesn’t really have the “why” explanation either. Maybe “why” is out of place to ask in many questions. But NOT in mathematics!

      • klyrs 1859 days ago
        Appel and Haken were pioneers in the field of computer proofs. They didn't do it to modern (or contemporary) standards, and it did leave a bad taste for those who seek "elegance" in proof. Since then, theory and practice have made great strides; but we may have to agree to disagree on the thrust of your post.

        I'd strongly recommend Gonthier's treatment of that particular theorem. In my opinion, [1] is beautifully written and illustrated; a shining example of how computer proofs should be written... and [2] is the actual proof.

        > What we lose when we turn everything over to ML is the ability to understand anything.

        Yuck, we agree here.

        [1] https://www.ams.org/notices/200811/tx081101382p.pdf [2] https://github.com/math-comp/fourcolor

    • dTal 1859 days ago
      Mathematics may be innately about computation, but hacking is in large part about social constructs. I wouldn't be at all surprised if the same mental tendency that causes one to appreciate the beauty and elegance of pure mathematics, also causes one to flee in horror away from the computing edifice we've built so far.
    • stabbles 1860 days ago
      I have no idea what you mean by hacking or hacker in this context
      • klyrs 1860 days ago
        This is literally Hacker News; I assumed that would be obvious. There's more to the culture than programming. There are two types of programmers: those who learn the grammar, and those who learn the interpreter / compiler.
        • ruang 1860 days ago
          Best description I’ve seen! Can sometimes see it in the bimodality of the types of comments too.
        • burrows 1860 days ago
          But that’s wrong.
      • alex-wallish 1860 days ago
        program
      • 0815test 1860 days ago
        "Learn to code", perhaps.
    • woopwoop 1860 days ago
      To clarify, are you saying you've met people who have been successful at Olympiads and the Putnam who fail out of grad school because they can't learn LaTeX?
      • klyrs 1860 days ago
        lol no, nobody (within epsilon...) fails because of LaTeX. But they aren't exactly endeared to programming as a result of the experience. I came from the other direction -- a hacker who had a dozen programming languages under her belt, including bf and Perl, before I was introduced to the moderately arcane LaTeX.

        They're quite frequently brilliant mathematicians... but far too often, they're steeped in the "pure" math culture that favors pencil over keyboard. In the context of this crisis, my opinion is that their education is not preparing them for the future. They succeed in academia... but their work is too often unverifiable.

        • mcguire 1860 days ago
          It probably doesn't help that all of the proof tools I've seen are incredibly obscure and unrelated to any formal techniques taught to human beings.
      • chongli 1860 days ago
        Yeah, that seems very odd to me. I sat next to a successful math Olympiad student in first year advanced algebra. He typed his lecture notes in LaTeX in real time, producing textbook-like results on the fly. It looked like he was running Linux on his laptop. Suffice it to say, I think he could accomplish anything he wants to with his math and computer abilities.
      • pmiller2 1860 days ago
        I'd be shocked if that's the case. Basic LaTeX isn't that hard to learn, and the basics will take you really far.
        • wolfgke 1860 days ago
          > Basic LaTeX isn't that hard to learn

          Basic LaTeX is probably not that hard to learn. But LaTeX is a really deep rabbit hole and you need a lot more knowledge of it if you write math papers.

          • pmiller2 1860 days ago
            You don’t have to go very far down the rabbit hole to write papers. If you can draw a commutative diagram, and insert a PDF graphic, there isn’t really anything much more complex than that.
            • wolfgke 1860 days ago
              > If you can draw a commutative diagram, and insert a PDF graphic, there isn’t really anything much more complex than that.

              Not true. I know what I am talking about since I work in mathematical research (I like math, of course, but I openly admit not to like LaTeX).

              • pmiller2 1860 days ago
                Well, I’ve been to grad school, done significant amounts of mathematical writing, read my share of papers, so I know what I’m talking about, too. The depth of the rabbit hole isn’t that deep for any one person; it’s more like a gopher hole, but everyone has their own little gopher hole.
              • j7ake 1860 days ago
                Could you give an example where advanced LaTeX skills are required to write a paper?
                • wolfgke 1860 days ago
                  - TikZ wizardry.

                  - Often I had to dig through forum posts to get LaTeX to format a formula the way I imagined (and the solution frequently involved deep LaTeX wizardry)

                  - Creating a document that involves text in multiple alphabets (e.g. Latin alphabet, Greek alphabet, cyrillic alphabet). Actually trivial with Microsoft Word. ;-)

                  Not to even mention problems where I have no idea how to start, such as how to do reproducible build processes for LaTeX documents (i.e. each run of the build process produces a bit-for-bit identical PDF document).

    • thatoneuser 1860 days ago
      Oh God, LaTeX breaks them? They should just use LyX. I could take physics notes on a computer faster than the profs could copy out their notes.
  • dannykwells 1860 days ago
    I think articles like this are hilarious because they act like written down proofs are a gold standard of unassailable logic. Of course they aren't - what constitutes proof to one person (or in one age) may or may not constitute proof to others. Look at Mochizuki's work - someone so convinced they are right, and yet no one else can understand it:

    https://www.nature.com/news/the-biggest-mystery-in-mathemati...

    Another good example: Euler's proofs in his day would be failed in even a basic real analysis class now.

    Having known many many pure mathematicians, I believe that they don't want to use computers for the simple reason that many of them don't know how to, and thus denying the legitimacy of computer-generated (or assisted) proofs protects their livelihood/careers.

    • AlexCoventry 1860 days ago
      Most mathematicians would love for a machine to check their proofs. The problem, as Buzzard says, is that it is much too expensive. Another way to put it: at the moment, the math you can do with machine-checkable proofs is a bit boring compared to math on the cutting edge, outside formal methods.
    • tacomonstrous 1860 days ago
      >denying the legitimacy of computer-generated proofs protects their livelihood/careers.

      While this might be an issue, the bigger problem is that there are no interesting new theorems that were obtained in this way yet.

      What is true is that there is still serious prejudice against theorems verified by reducing to a finite (yet large) number of concrete cases, which are then verified by an algorithm. This is the case even after the whole brouhaha over and resolution of the 4 colors theorem by Hales.

      • auggierose 1860 days ago
        Hales didn't resolve the 4 colors theorem, Gonthier did that. Hales proved the Kepler Conjecture and afterwards facilitated its formal verification via the Flyspeck project.
        • wolfgke 1860 days ago
          > Hales didn't resolve the 4 colors theorem, Gonthier did that.

          Gonthier didn't resolve the 4 colors theorem, Appel and Haken did that. Gonthier just created the first computer-checked proof for the Four-color theorem.

        • tacomonstrous 1860 days ago
          Yes, thank you. I mixed up my machine-assisted proofs.
      • dwheeler 1860 days ago
        >> denying the legitimacy of computer-generated proofs protects their livelihood/careers.

        > While this might be an issue, the bigger problem is that there are no interesting new theorems that were obtained in this way yet.

        I think the issue is that there is a need for computer verified proofs, not necessarily computer generated proofs. It's much easier to create a proof verifier than a proof generator, and verifiers can give us much greater confidence that the proofs are correct (regardless of whether a human or computer generated them).
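
        To illustrate the asymmetry, here is a sketch of a checker for a toy Hilbert-style system in Lean (one axiom schema plus modus ponens; all the names here are mine). Verifying is one linear pass over the proof, while generating a proof would mean searching an enormous space:

          -- Formulas of a toy implicational logic.
          inductive Fml where
            | var : Nat → Fml
            | imp : Fml → Fml → Fml
          deriving DecidableEq, Repr

          open Fml

          -- Recognize instances of axiom schema K: A → (B → A).
          def isAxK : Fml → Bool
            | imp a (imp _ a') => a == a'
            | _ => false

          -- A proof is a list of lines: each is an axiom instance, or
          -- modus ponens citing two earlier line numbers.
          inductive Line where
            | axK : Fml → Line
            | mp  : Nat → Nat → Fml → Line

          def checkLine (proved : Array Fml) : Line → Option Fml
            | .axK f => if isAxK f then some f else none
            | .mp i j f =>
              match proved[i]?, proved[j]? with
              | some p, some (imp p' q) =>
                if p == p' && q == f then some f else none
              | _, _ => none

          -- Replay the whole proof line by line; no search involved.
          def check (lines : List Line) : Option Fml := go #[] lines
          where
            go (proved : Array Fml) : List Line → Option Fml
              | [] => proved.back?
              | l :: rest =>
                match checkLine proved l with
                | some f => go (proved.push f) rest
                | none   => none

          -- e.g. a one-line proof of p0 → (p1 → p0):
          #eval check [ .axK (imp (var 0) (imp (var 1) (var 0))) ]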

      • dannykwells 1860 days ago
        Good catch on the generated vs. assisted - I edited my comment.
    • smadge 1860 days ago
      Why do you insist mathematicians become programmers? They have different skills, have different ways of thinking, and solve different problems. As an analogy, plenty of famous musicians were obviously very musically talented but didn’t know how to read or write musical notation. Proofs are just a medium to convey mathematical ideas to other mathematicians. I bet if Erdos were required to publish his ideas in a theorem prover language his mathematical output would have at least halved.
      • dwheeler 1860 days ago
        > Why do you insist mathematicians become programmers?

        No one is insisting that. We're insisting that proofs actually be correct, something that demonstrably cannot be done (to an adequate level) when we depend only on humans. Humans get tired and make mistakes such as failing to notice missing or invalid steps. Humans are still really good at coming up with proofs - but not so good at verifying them.

        > Proofs are just a medium to convey mathematical ideas to other mathematicians.

        I disagree. For example, the claim that a^n + b^n = c^n has no solutions in positive integers for any integer n greater than 2 is clearly a mathematical idea. But that idea is not a proof of Fermat's Last Theorem; it is simply a claim. A proof should establish, beyond doubt, that the claim is actually true.
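
        In LaTeX form, the claim (not its proof!) is one line:

          % Fermat's Last Theorem, the claim itself (needs amsmath/amssymb):
          \[
            a^{n} + b^{n} \neq c^{n}
            \quad \text{for all } a, b, c, n \in \mathbb{Z}_{>0},\ n > 2 .
          \]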

        > I bet if Erdos were required to publish his ideas in a theorem prover language his mathematical output would have at least halved.

        I'm not sure about that; Erdos was pretty smart. There's relatively little investment in theorem prover languages; a little investment would go a long way. For example, today you have to prove lots of little things that really should be in libraries and tools, but the lack of investment holds things back.

        This also assumes that "mathematical output" is acceptable if it is not computer-verified. At some point in the future I think a "proof" that is not computer-verified will be viewed as an unproven claim. Humans simply aren't very good at verifying proofs - and computers are extremely good at it.

        • smadge 1860 days ago
          Well, I guess what I meant is that proofs are for convincing other mathematicians. If someone is unconvinced by the proof they’ll try to fill in the gaps or further specify certain parts of the proof themselves. If that doesn’t work, they might ask for clarification. Eventually a consensus on the validity of the proof will be reached. I agree that if the proof were presented in a machine-verifiable format to begin with we wouldn’t need this process. I think, though, that the reason it isn’t done this way is partly because the effort required isn’t worth it yet. My professor would actually get annoyed if I was too specific in my proofs, because it makes the proof more difficult for a human to understand - and human understanding, as I said, is ultimately the purpose of a proof.

          Additionally the proof itself sometimes conveys an interesting idea that isn’t captured in the theorem. Some theorems have many different proofs that use different methods to prove the same theorem. These different proofs provide different perspectives that lead to a better understanding of the topic.

          I have a background in both computer science and pure math and am very interested in highly formal methods; I just don’t think we should disparage mathematicians for doing more human-oriented math.

          • dwheeler 1860 days ago
            > Some theorems have many different proofs that use different methods to prove the same theorem. These different proofs provide different perspectives that lead to a better understanding of the topic.

            Agreed, but there's no conflict. You can create multiple formal proofs, each of which provides a different perspective that can lead to a better understanding.

            > Well, I guess what I meant is that proofs are for convincing other mathematicians.

            Not everyone agrees with this claim, and this disagreement is the essence of the "crisis". Is the purpose of a proof (1) to show that something is actually true, or (2) to convince another mathematician that it's true (whether or not it's actually true)? Historically the two were conflated, but that is no longer the case. Now we must ask if #1 or #2 is the real goal of a proof, because we now can choose between those alternatives.

            In short, I disagree with you. I believe that the purpose of a proof is to show that something is actually true (#1). Whether or not you can convince a human - any human - is historically how we determined #1, but increasingly this alternative (#2) is no longer an adequate measure of anything.

            Mathematicians are humans. They can be fooled, fail to see errors due to fatigue, or simply be overwhelmed because they have limited time. When humans were the only way to verify proofs, we had to live with those limitations - because there were no alternatives.

            But computers today can verify proofs. They can do it far more accurately than humans, they don't get tired, and they can do it 24x7. Computers aren't as good (today) at creating proofs, but they don't need to create proofs to verify proofs. We can take a number of actions that make computer-verified proofs far far more reliable than proofs merely checked by humans. As I stated earlier, it's odd that we make humans do the verification job, because a computer is much better at doing it.

            The question now is: should we continue to accept proofs that are only human-verified? Why? Why is it acceptable if "other mathematicians" can be convinced, if the proofs are invalid? Do we really believe mathematicians can't make mistakes?

            It's certainly useful to show "summary proofs" so that humans can see the big picture. But once you have the details, they can be summarized. If only "summaries of proofs" are available, without computer verification, errors may lurk that a formal proof would have caught. Large proofs are easily shared via the Internet, so size is no longer a serious concern.

            I don't disparage mathematicians for doing more "human oriented math". Indeed, we should honor the work of past math giants, who had no alternatives available. But we need to move beyond "human oriented math" because it is not trustworthy enough and we have a better alternative available.

            I'm not saying we can instantly do that today, by the way. There are tools that make it possible, but it's hard. It will take persistent work over many years to transition to a world where proofs are routinely computer-verified. There will need to be decisions on what to prioritize for formalization. Different approaches will need to be tried, and I doubt there will be a single approach everyone uses. People with different skills will need to work together. Libraries and tools will need creation and refinement. But the first step in that journey is to have a general agreement that it's a journey worth taking.

            Do mathematicians and those who fund them care about truth, or do they care about merely convincing other mathematicians? We must now choose between those two options. I choose truth.

    • throwawaymath 1860 days ago
      > Look at Mochizuki's work - someone so convinced they are right, and yet no one else can understand it:

      I don't think this is a good characterization of Mochizuki or his Inter-universal Teichmüller theory. The vast majority of new results in both pure and applied mathematics are accepted by the mathematics community without incident - this applies to extremely sophisticated and groundbreaking work as well. For example, if you look at the work of almost any Fields medalist in the last century, you'll see that it was contemporaneously well-received.

      What you're citing is one example of 1) an extremely complicated proof, 2) delivered to the mathematics community in a very unorthodox manner, 3) by an author who has demonstrated significant reluctance to refine their work to make it more comprehensible to others. This is very uncommon and doesn't actually support your point (not that your point doesn't have merit, but this just isn't really applicable).

      I feel your framing of Mochizuki's IUT could be interpreted as saying that he somehow developed a robust proof which is, for a variety of reasons, not accepted by the community at large. This is not the case - his proof is not robust! There is significant evidence that it doesn't actually work. Furthermore, if your attempt at a proof does not agree with the general consensus of the mathematics community, it makes more sense to conclude that your proof is likely in error than it does to use it as an example of how proofs are subjective and different people could require different levels of rigor. That's almost never the case in real-world peer review - you're more likely to have your math paper dinged for a subjective assessment of how interesting or useful it is than because the reviewer actually finds it to be in error.

      I'm also going to have to strongly disagree with you in regards to Euler's proofs. Analysis as a discipline is far more mature than it was in Euler's time, but Euler still clearly proved the things we attribute to him. I'm not really sure what your expectation is here - we have more sophisticated means of articulating proofs, and likewise much more mathematical scaffolding to support new theories. But Euler's work is rigorous.

      > Having known many many pure mathematicians, I believe that they don't want to use computers for the simple reason that many of them don't know how to, and thus denying the legitimacy of computer-generated (or assisted) proofs protects their livelihood/careers.

      Trading anecdote for anecdote: I've never known a single mathematician who felt that computer-generated proofs posed a threat to their careers. I don't mean to call your experience into question, but frankly I'm a little shocked that this is your interpretation of their reluctance to use computers. In my experience mathematicians are loath to use computers not because they feel threatened by them, but because they just don't need to and never learned to. That's a very different attitude!

      As for my own opinion: the legitimacy of computer-generated proofs and the work of professional mathematicians are independent of each other. It would be naive to think that computer-assisted proofs would actually threaten a mathematician's livelihood. From the discussions I've had with mathematicians both in person and online (such as on /r/math), I'm fairly sure most of them would consider this idea incoherent and not actually be concerned about it. In fact, most mathematicians I know (myself included) would be thrilled to have more computer-assisted proofs available.

      • akalin 1860 days ago
        > I'm also going to have to strongly disagree with you in regards to Euler's proofs. Analysis as a discipline is far more mature than it was in Euler's time, but Euler still clearly proved the things we attribute to him. I'm not really sure what your expectation is here - we have more sophisticated means of articulating proofs, and likewise much more mathematical scaffolding to support new theories. But Euler's work is rigorous.

        What about the Euler product formula and using it to prove the infinitude of primes? At best it has gaps that can be easily filled, but it wouldn't meet modern standards of rigor. See http://www.math.harvard.edu/~elkies/M259.06/euler.pdf (the best source I can find on my phone)
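
        For reference, the formula in question (valid for s > 1) is

          % Euler's product formula (needs amsmath):
          \[
            \sum_{n=1}^{\infty} \frac{1}{n^{s}}
            \;=\;
            \prod_{p \,\mathrm{prime}} \frac{1}{1 - p^{-s}} ,
          \]

        and the modern version of the argument lets s tend to 1 from above: if there were only finitely many primes, the product would stay bounded while the sum tends to the divergent harmonic series. Euler instead worked directly at s = 1 with the divergent series itself, which is exactly the kind of step that wouldn't pass today.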

  • romwell 1860 days ago
    Mathematics has always been like that.

    The Fundamental Theorem of Algebra went through many "proofs" over decades before settling on one we accept as rigorous.

    And rigor itself? A relatively modern requirement. Calculus was put on a rigorous basis only in the 19th century. It existed, and was applied, for centuries before that.

    The real truth about mathematics is that it's way more about intuition than rigor, and that things that most people think are true are taken as such. It's human nature, and mathematics is a human endeavor.

    Moreover, math itself resists rigor in a fundamental way, as Gödel showed us. Machine proofs? Eh. Ultimately, everything is up to humans, even the axioms. Will the computer believe in the axiom of choice?

    And stuff that's too complex to be understood except for the select few? So be it. It means nobody really cares about that stuff. Maybe someone will discover a better proof later. Maybe someone will stumble upon a counterexample. Maybe nobody will touch that for a thousand years. Math will go on.

    Not that all math is accessible - but good math is not limited to in-groups.

    For an extreme example, Grigori Perelman's famous work is surely not accessible, and is famously incomplete. It didn't matter. The ideas were clear to enough people around the world that it didn't take long for them to spread, and for others to step up and close the gaps.

    I've always said that mathematics is a form of art, perhaps with a smaller audience than most. The only criterion in mathematics is beauty: whether the work is interesting or not. Some works lack in that aspect, as it happens in all arts, and will not be appreciated by future generations. And it's fine.

    Disclaimer: I do math, sometimes.

    • dwheeler 1860 days ago
      > And rigor itself? A relatively modern requirement.

      Perhaps. I would argue that it is LONG overdue.

      > Moreover, math itself resists rigor in a fundamental way, as Gödel showed us.

      This merely shows that some things cannot be proved, be it by human or machine. That's not the issue. The issue is whether or not we should be moving towards machine-verified proofs, since "proofs" verified only by humans often turn out to be wrong.

      > Machine proofs? Eh. Ultimately, everything is up to humans, even the axioms. Will the computer believe in the axiom of choice?

      A computer will "believe" the axiom of choice if it is told that it is an axiom. I agree that the humans get to decide what is an axiom (at least in their system), but it should be immediately obvious exactly what axioms are being accepted, and then rigorously shown that only those accepted axioms are being used.
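
      Lean, to pick another system, even makes this auditable per theorem (a small illustration; the output format is approximate):

        -- Excluded middle in Lean is derived from choice, and the system
        -- can report exactly which axioms a given proof depends on.
        theorem em_demo (p : Prop) : p ∨ ¬p := Classical.em p

        #print axioms em_demo
        -- 'em_demo' depends on axioms: [propext, Classical.choice, Quot.sound]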

      I can even point to a demonstration. One of the mathematical formalization tools available today is Metamath ( http://us.metamath.org/ ). In Metamath you can state the axioms you wish to accept, and then generate proofs that are automatically checked; every step must lead back to a previously-proved theorem or an axiom.
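
      A toy database (not set.mm itself; the names are mine) shows the shape of the format: constants and variables are declared, axioms are stated with $a, and a $p statement carries its proof as a list of labels that the verifier replays:

        $( Declare the symbols we will use. $)
        $c term |- R $.
        $v x $.

        $( Typing hypothesis: x is a term. $)
        tx $f term x $.

        $( The one axiom we choose to accept: R is reflexive. $)
        ax-refl $a |- x R x $.

        $( A theorem; the label list after $= is the proof, which the
           verifier replays step by step. $)
        thm-refl $p |- x R x $= tx ax-refl $.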

      There are several existing Metamath databases. The "set.mm" database (Metamath Proof Explorer) is based on classical logic, ZFC, and the axiom of choice. Since the axiom of choice is included as an axiom, you can use it. See here: http://us.metamath.org/mpeuni/mmset.html

      In contrast, the "iset.mm" database (Intuitionistic Logic Explorer) is based on intuitionistic logic and does not include the axiom of choice. Since the axiom of choice is not included, you cannot verify a proof that tries to use it - the verifier will simply complain that the step is invalid if you try to do it. See here: http://us.metamath.org/ileuni/mmil.html

  • fspeech 1860 days ago
    For people not familiar with the subject (or Buzzard and his cohort) I recommend checking out the Zulip archive https://leanprover-community.github.io/archive/116395maths/i... Some of the (esp. earlier) threads are very illuminating on the process of constructing proofs using ITPs (interactive theorem provers).

    My personal experience is that formally modeling a suitable domain of math can be a very good pedagogical exercise. There are many fine points that are easy to ignore when doing math informally but that a formal tool will not let you skip. This is particularly helpful when working on foundational subjects. When successful, you are rewarded with the feeling of possessing an understanding at a level deeper than everyone else's (most likely untrue, but certainly a typical expository paper or book could not cover the subject as thoroughly).

    That said note my use of the word "modeling". In a formalization attempt you may have in your head an isomorphism between the code and the mathematical objects under study but the isomorphism is not necessarily apparent or universally recognized. Beyond the most quotidian matters other people's modeling code (think of it as an API) may or may not fit with your conceptual models. Just like there are different programming languages and libraries, there is no universally accepted approach to encode mathematical constructs. Your claim of correctness is only as good as your axioms and definitions. Your theorem, with all definitions completely unfolded, will be an unreadable tangle of first order logic formulas that anyone would be hard pressed to recognize as universally meaningful -- though it is extremely likely to be logically sound. Even if no one questions the soundness of your logic there is no tool that can certify that your formal expression actually means what you say it means in a mathematical domain.
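
    A tiny illustration of the modeling point, in Lean (the definitions are mine): two perfectly reasonable encodings of the same informal notion, with neither one privileged, and relating them is itself a proof obligation:

      -- Two encodings of "n is even". No tool can certify that either
      -- one matches the informal notion; that judgment stays with us.
      def EvenA (n : Nat) : Prop := ∃ k, n = 2 * k
      def EvenB (n : Nat) : Prop := n % 2 = 0

      -- Each is easy to use on concrete cases...
      example : EvenA 4 := ⟨2, rfl⟩
      example : EvenB 4 := rfl
      -- ...but code written against one doesn't automatically fit the other.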

  • 0815test 1860 days ago
    There's not much that's new here - Freek Wiedijk estimated the cost of formalizing the undergrad math curriculum in its entirety at 140 man-years, for a rough monetary cost well above the millions-of-dollars range - link @ https://www.cs.ru.nl/~freek/notes/index.html Expensive to be sure, but not excessively so for a mostly one-time effort. Part of the problem though is that we are still far from having a reasonably-standard platform for formal math that could be expected to fully unlock its benefits. Most of the effort in formalization so far seems to be occurring in domains of "synthetic mathematics", which seem to be mostly self-contained and don't take much formalization effort to reach close to the research frontier.
    • philipkglass 1860 days ago
      Billions of dollars? If it only takes 140 man-years that implies more than $14 million per man-year.
      • 0815test 1860 days ago
        Yup, I suppose that it's closer to the hundreds-of-millions range for the undergrad curriculum only, in a single system - maybe even a bit less than that. But that then leaves open the question of how to go the rest of the way to the actual research frontier in whatever domains of math you care about, plus the inherent uncertainty as to whether these efforts will in fact be useful, since these systems are in such an early state. You're right though that the way I phrased that was a bit confusing, so thanks!
  • mathgenius 1860 days ago
    > Proof is the essence of mathematics.

    Proof may be essential, but it is not the essence of mathematics. Like many things, the essence of mathematics is beauty.

    In this mad world, of which academia is a part, we only seek to measure things. And how can you measure beauty? You cannot - or, at best, it is subjective. So, in place of beauty we have big complicated proofs that no one understands.

    John Conway, later on in life, decided to quit being a "real mathematician" and focus instead on trying to understand mathematics. His student Borcherds came up with a proof of Conway & Norton's moonshine conjecture, but Conway himself never accepted it. Why? Because it was too complicated, it couldn't possibly be the right explanation for moonshine. What a dude.

  • mcguire 1860 days ago
    As an aside, "The Endoscopic Classification of Representations: Orthogonal and Symplectic Groups" is a damn good title. I'll assume it means something to someone, but all I can say is that I expect it to make a big hole in the ground.
  • F-0X 1860 days ago
    I cannot understand this "solution" of turning to machine-verified proofs. Coq, Isabelle, etc. invariably contain bugs. Even if they didn't, you can still run into hardware bugs. Computers are not a source of perfect computation and software is certainly not a source of perfect logic.
    • dwheeler 1860 days ago
      > I cannot understand this "solution" of turning to machine-verified proofs. Coq, Isabelle, etc. invariably contain bugs. Even if they didn't, you can still run into hardware bugs. Computers are not a source of perfect computation and software is certainly not a source of perfect logic.

      The issue is not that computers are always perfect. The issue is that humans are far, far worse than computers at detecting errors in proofs. So-called proofs are later found to be wrong, even after lots of human peer review, and there are often doubts about published papers. If pure math is about proof (and it is), then checking proofs with only humans is unacceptable in the long term. Humans simply aren't good enough at it, and why should we make humans do a job (verifying proofs) when computers are obviously better at it?

      In addition, there are many mechanisms for countering computer errors that make their likelihood essentially zero. Hardware bugs happen, but executing software on multiple different computers using diverse CPUs basically makes that disappear.

      Software can have errors, but there are ways to counter that too. The Metamath community verifies set.mm (its main database at http://us.metamath.org/mpeuni/mmset.html ) by running four different verifiers that were implemented by four different developers in four different programming languages (C, Rust, Python, and Java). There are at least 13 different Metamath verifiers, so a few more could be added if desired. The underlying Metamath language is extremely simple, too, so a verifier can be written in only a few hundred lines of code (reducing the risk of error in any one of the verifiers). There's evidence that N-version programming doesn't reduce errors as quickly as if errors were independent (see Knight and Leveson's paper), but even so, it's still quite difficult to slip by that many independent verifiers. In the long term, proof verifiers can be proven correct. It's hard to prove large programs correct, but you only need to verify the verifier; there are already examples such as ivy for prover9.

      I can imagine that computers might someday be better at creating proofs. But that isn't necessary for computers to be useful. The main issue today is that proofs are typically not computer-verified, and computers can do that today. Many tools have managed to do well on Freek's "Formalizing 100 Theorems" challenge list at http://www.cs.ru.nl/%7Efreek/100/ - and for the most part that is without a lot of investment.

      It's not easy to use existing tools, that's definitely true. But a big part of the problem is that there hasn't been a lot of work to use or grow them. If more work and money were invested in improving the systems for verifying proofs, and they were held in higher regard, a lot more would happen.

      I believe future mathematicians will view proofs unchecked by computers the same way we view alchemy today. I suspect they'll say something like, "once in a while those alchemists and pre-mathematicians happened to be correct, but of course we don't accept their work without computer verification today." I think we should be working to make that future happen sooner.

    • moomin 1860 days ago
      They’re not, but they’re spectacularly good at spotting dumb errors that humans are spectacularly bad at spotting.

      Me, I still haven’t got my head around how to prove something on a computer, but the principle is sound. Theoretically one could build the proof as some form of literate program.

      • chess93 1860 days ago
        A computer just does symbolic manipulation according to a list of rules (axioms): the human/programmer specifies which sequence of symbolic manipulations to apply, and the computer simply states whether or not the specified manipulations transform the theorem into the truth symbol.

        http://us.metamath.org/mpegif/mmcomplex.html

        (Warning: I've never actually done much of this before so some details might be wrong.)

    • adamnemecek 1860 days ago
      You mean unlike humans?