12 comments

  • iujjkfjdkkdkf 1143 days ago
    Pour water into a maze (or create a potential difference across a conductor) and you'll see that as new current flows in, it doesn't explore every pathway but takes the correct path through.

    This intelligent behavior arises from the simple compulsion of things to reach equal potential.

    The slime mold experiments are cool because they connect simple compulsions with emergent intelligent behavior in an organism. I have wondered if it's the same for us, if consciousness is really just the sum of all our simple compulsions, arising from basic rules - like is water "conscious" of wanting to seek its own level, are ions conscious of wanting to react, etc., and together that makes up what human consciousness is?

    • fiftyfifty 1143 days ago
      This article says that they've found that the slime mold's memories are stored in tubes that form intricate networks inside the cell. It's interesting because neurons have lots of microtubules and there has been some recent research showing that they may be more than just structural components of the neuron:

      https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3979999/

      Could there be a relationship between these slime mold memories and memories in more complex organisms?

    • cercatrova 1143 days ago
      I don't understand the point about the water. Why wouldn't it uniformly cover the entire maze? Assuming the maze is level with the ground.

      Edit: thanks for the clarification all, I didn't realize it was an entrance-exit maze; I was thinking of one sealed on all sides where the water can't get out.

      • jfoutz 1143 days ago
        I think if a maze has an entrance and an exit, and there's some surface tension at the leading edge, when you pour water into the entrance, it'll fill the maze like a breadth-first search. As soon as the water can flow freely at the exit, it'll drain because there's less resistance at the exit.

        Pour a little water on a counter and you'll get a round area with little walls at the edge. If the counter isn't clean, the puddle will kind of break down where it's dirty (the surface tension doesn't hold up); once it hits the edge of the counter, the surface tension pushes all the water over the edge. If it's a perfectly flat, clean surface, it'll make a perfect circle until it hits the edge.

        So the water won't fill every nook and cranny of the maze; it'll start a new circle at every decision point, until one of those circles goes over the edge.
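        The fill pattern described here really is a breadth-first flood fill, and the analogy can be made literal. A toy sketch (not from the thread; the maze layout, cell symbols, and function name are all made up for illustration):

```python
from collections import deque

def water_fill(maze, start, exit_):
    """Breadth-first flood fill: 'water' advances one cell per tick in
    every open direction at once. '#' is wall, '.' is open. Returns the
    tick at which the exit is reached and the set of wetted cells."""
    rows, cols = len(maze), len(maze[0])
    frontier = deque([(start, 0)])
    wet = {start}
    while frontier:
        (r, c), t = frontier.popleft()
        if (r, c) == exit_:
            return t, wet   # water spills out; flow starts along this path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and maze[nr][nc] == '.' and (nr, nc) not in wet):
                wet.add((nr, nc))
                frontier.append(((nr, nc), t + 1))
    return None, wet        # sealed maze: everything reachable just fills up

maze = ["#.###",
        "#.#.#",
        "#...#",
        "###.#"]
ticks, wet = water_fill(maze, start=(0, 1), exit_=(3, 3))
```

        The frontier advances into side corridors too, so dead ends get wet; the "solved" path only emerges once the exit is reached and flow begins.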

      • sgtnoodle 1143 days ago
        It would to an extent. The idea is that the maze has an exit, and once the water made it to the exit, it would spill out. The steady spilling out of water would create a current of flowing water all the way back to the water's source, following the "solved" path of the maze. The water didn't intelligently solve the maze, though; rather, the solution emerged out of the simple but massively parallel interactions between collisions of atoms (i.e. "weak forces") and gravity.
        • lrem 1143 days ago
          I would leave gravity out of this.
      • Stratoscope 1143 days ago
        I think OP was talking about a maze with an exit where the water can drain out, not a maze sealed all around the edges.

        It would be a fun experiment to test this with and without an open exit drain at the end of the maze.

        • cercatrova 1143 days ago
          Ah OK makes sense now, I was assuming like a kids toy maze where it's covered on all sides. An entrance and exit maze makes a lot more sense.
      • coryrc 1143 days ago
        Surface tension will keep it from spreading beyond a certain point unless the exit is the longest path of the entire maze.
    • Aerroon 1143 days ago
      The universe is moving towards maximizing entropy. We (life) are just an artifact on that path. In some ways you could argue that we are 'compelled' by the same thing the water is.

      It could be that every atom is conscious. But my guess is that it requires some kind of a larger physical construction for consciousness to appear. It would still mean that our consciousness is a collection of simple compulsions. I just think atoms are too small to contain that behavior on their own.

      • adamrezich 1143 days ago
        people sure do go to great lengths to explain away/around the concept of God
        • uselpa 1142 days ago
          They’ve been saying that for centuries. But it has served us well so far.
        • imvetri 1142 days ago
          I am god.
    • adolph 1143 days ago
      The Deep History of Ourselves is a book that goes from single cell to consciousnesses. Here is a review:

      https://www.nature.com/articles/d41586-019-02475-x

    • ta1234567890 1143 days ago
      > and together that makes up what human conciousness is?

      If you believe consciousness is completely materialistic, then maybe it is like that.

      I personally think that to be conscious means to be aware of being aware.

      You could say it’s a circular definition (or recursive), and it is, but it’s the only way to define something by itself instead of as reference to something else.

      • neatze 1143 days ago
        consciousness is dissimilar to awareness; in my limited understanding it is about feelings (experience) in itself, and not about being aware of being aware of feelings.
        • jhickok 1143 days ago
          That is highly controversial. The most popular theories of consciousness hold that it is related to certain thoughts about thoughts, i.e. higher-order awareness rather than first-order access.

          The qualia realist views that you are describing are likely more popular in philosophy classrooms than in CogSci or neuroscience programs.

    • rland 1143 days ago
      I believe it's panpsychism to believe so. I definitely do.

      If dogs have consciousness, then there is no line between dogs and humans.

      If mice have consciousness, then there is no line between mice and dogs.

      If bugs have consciousness, then there is no line between bugs and mice.

      etc.

      Until you get to, say, a rock rolling down a hill to find the low-energy spot, or an electron... Does that have consciousness? Most people would say no, but I would say then, where does "no line" become "line"?

      This is a fairly solid proof of panpsychism imo.

      edit:

      You can go up the chain as well, from the complex (human beings) to the very-complex: groups of friends, to companies, which are collections of human beings, to the global economy, which is a collection of collections of human beings. I think that's as far up as it goes, but it is interesting to think about.

      • Darvon 1142 days ago
        The Buddhists put in the legwork ages ago: the line is whether the organism has a chemical stress response to stimuli.

        Like, could a motivated human torture it, and could it try to escape the situation.

        • mromanuk 1142 days ago
          But that's assuming the organism/thing can move. Easy counter-example: a paralyzed human being.
          • sidpatil 1134 days ago
            How is that a counter-example? A paralyzed human can still exhibit a chemical or physical response to an external stressor or stimulus in general, such as changes in body temperature, sweating, neural activity, etc.
    • johnsmith4739 1143 days ago
      Exactly, this is the core concept of homeostasis. Organism in equilibrium? Passive. Stimuli toy with the balance? Compensatory behaviour to restore equilibrium. Humans work just the same, the only difference is the complex brains that allow complex compensatory behaviours.
    • TaupeRanger 1143 days ago
      Well it wouldn't tell us much about consciousness per se, because what you're describing is an explanation of behavior, not the first-person experiential thing we call consciousness. Although I think there must at least be a correlation between the two. After all, when you put your hand on a burning stove, something somewhere goes "out of equilibrium", causing the reaction AND the accompanying experience of pain. It's just that we don't really understand why or how the latter accompanies the former.
    • eevilspock 1143 days ago
      Your notion seems to belong to an existing school of thought on consciousness. I forget its technical name, but the idea is that the thing we perceive as consciousness is just a side effect, an artifact of the process, an effect not the cause, a shadow. That we perceive it to be "the decider" is an illusion. Per this school it is inconsequential except that, like any illusion, it can affect our brain's decisions.
      • taylorfinley 1143 days ago
        I think the term you're looking for is epiphenomenalism.
      • ksaj 1143 days ago
        Artificial Life folks would say that consciousness is an emergent property of sufficient complexity.
    • asimpletune 1143 days ago
      Wait, um, can you explain this more or tell me what to search for to learn more about this?

      I googled “simple compulsions” already

    • blowski 1143 days ago
      Like this? https://www.youtube.com/watch?v=ztOk-v8epAg

      (I'm a complete numpty here, so need very basic explanations!)

    • mrmonkeyman 1143 days ago
      You call it equal potential, like it is some obvious thing, but what if that is intelligence?
      • sidpatil 1143 days ago
        I think it's something interesting to consider. I'd describe it more as computation than as intelligence, but I also do keep in mind that there is no single universally-agreed-upon definition of intelligence, so the goalposts can shift.

        For example, consider the AI effect [1].

        [1] https://en.m.wikipedia.org/wiki/AI_effect

  • AndrewKemendo 1143 days ago
    With few exceptions, the AI research community completely overlooks the "rest of the body" when it comes to thinking about intelligent systems and focuses too much on the brain.

    The amount of computing going on in the peripheral nervous system is staggering - and when you look at HOW and WHERE this computing works with effectors and sensors you realize how much of intelligence is reliant on those systems being there.

    Brains are interesting - but they actually don't do all that much when it comes to the majority of how people interact with the real world, and frankly you don't need that much (physical mass of) brain to be intelligent.

    • patmorgan23 1143 days ago
      This. Mind vs. body is a false dichotomy. Your mind is fully integrated throughout your body. Physical and mental health are heavily intertwined.
      • ravi-delia 1143 days ago
        And yet I can go a-chopping anywhere but the brain without cognitive deficit, but even a little scraping of the cortex has a notable effect. The brain is fed and maintained by the body, and as such is vulnerable to the body's failures, but such a connection doesn't exactly break down the difference between mind and body.
        • AndrewKemendo 1143 days ago
          This is a perfect example of what I'm talking about. You are framing this in terms of "cognitive deficit" which is specifically a (poorly defined) function of a generally high level capability test focused on brain function evaluation.

          Intelligence is not simply the theoretical capability to do higher reasoning in a structured test in my opinion - it's the functional capability to actually prove you can do higher reasoning through demonstration.

          You can't just go "a-chopping" anywhere and have the same functional capabilities. If I chop off your thumb, you are significantly less functionally intelligent in a practical test: if my intelligence test requires you to zip up your pants as a step, you would do much more poorly without your thumbs than someone with 40 fewer IQ points.

          Think differently - if you can't prove you're more intelligent by actually DOING something then you aren't more intelligent.

          • ravi-delia 1142 days ago
            On one hand, I appreciate the way this perspective smooths out a lot of pointless argument in favor of an observable truth. On the other, I worry it falls prey to the streetlight effect. Sure, capacity to do things in an absolute sense is a useful thing to measure, but we already have lots of words for it; 'capacity to do things' is a bit wordy, but 'ability' as in 'disability' isn't too far off.

            Similarly, even if we don't like the words we use to describe 'ability', why steal intelligence? We still need a word for the pretty obvious cluster of things relating generally to cognition, information crunching, and problem solving.

            There are many axes on which Stephen Hawking and I differ (pre-death, of course). I, for instance, can go jogging, or speak without machine assistance, or zip up my pants. Stephen Hawking could understand advanced mathematics and come up with physics breakthroughs. Am I smarter than Stephen Hawking? Are we even tied?

            It seems evident that there are at least two types of ability that are at least somewhat decoupled from each other. Some tasks involve both, some tasks involve one, and some tasks involve the other. Certainly we see correlations between them, and it would be unwise to completely discount one when considering the other, but grouping them under one heading needlessly confuses the issue, especially when that heading is generally understood to refer to one of the types of ability specifically.

            It seems clear that the word we use for the axis on which Stephen Hawking had me beat is 'intelligence'. Now that doesn't mean that the definition is set in stone whatsoever. If you want to use the same sequence of letters to describe a kind of fruit, you can do so. But regardless of if you give it a name or not, that axis, that real empirical grouping of ability, still exists. And while that grouping exists, and while a word is commonly used to refer to it, there is little reason to try and use that word to refer to something else except to try and transfer some of the associations with the grouping to that other thing. No matter what it confuses the issue.

            Regardless, how we choose to define intelligence actually has little to do with the relevance of the body to the development of AI. Computers already have lots of ways to interact with the world, from a myriad of sensors to motors to screens. There is no problem with the statement "intelligence is highly dependent on the body", except the potential confusion that I noted above. There is, however, a problem with the statement "cognition is highly dependent on the body". The problem with that statement is that it is demonstrably false. Most of the body doesn't do any sort of informational computation except the simple control systems needed to handle the local area. Those control systems are fascinating (every joint has a little PID loop with incredibly clever ways of essentially integrating and deriving!), but hardly beyond the understanding or evaluation of AI researchers. So we shouldn't expect some secret of better AI in the body. We shouldn't expect it in the brain either, but certainly not the body.

            edit: Having actually commented I now see how pointlessly long and confusing my comment is. Sorry about that, I'm having trouble actually translating my thoughts into words here.
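            (Aside on the parenthetical above: the "little PID loop" at a joint is just a textbook proportional-integral-derivative controller. A minimal sketch, with made-up gains and a toy first-order plant standing in for a joint:)

```python
def make_pid(kp, ki, kd, dt):
    """Textbook PID: the 'integrating and deriving' mentioned above.
    Returns a stateful step function mapping error -> correction."""
    state = {"integral": 0.0, "prev_error": None}

    def step(error):
        state["integral"] += error * dt                        # integrate
        deriv = 0.0 if state["prev_error"] is None else \
            (error - state["prev_error"]) / dt                 # differentiate
        state["prev_error"] = error
        return kp * error + ki * state["integral"] + kd * deriv

    return step

# Toy joint: its angle simply integrates the controller's output.
pid = make_pid(kp=2.0, ki=0.5, kd=0.1, dt=0.01)
angle, target = 0.0, 1.0
for _ in range(2000):                      # 20 simulated seconds
    angle += pid(target - angle) * 0.01    # joint settles onto the target
```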

        • loveistheanswer 1143 days ago
          >And yet I can go a-chopping anywhere but the brain without cognitive deficit, but even a little scraping of the cortex has a notable effect.

          Is this not obviously false? Cut out someone's heart, lungs, stomach, liver, kidneys, etc. and they will surely have a "cognitive deficit" in the form of death. (Assuming no transplant is used)

          • ravi-delia 1143 days ago
            A-chopping anywhere but the brain without nontrivial cognitive deficit would have perhaps been better, but a little less pithy. The next sentence admits as much.
      • avaldeso 1143 days ago
        [Citation needed]
        • carapace 1143 days ago
          Check out Levin's lab's work: "What Bodies Think About: Bioelectric Computation Outside the Nervous System" - NeurIPS 2018

          https://www.youtube.com/watch?v=RjD1aLm4Thg

          https://news.ycombinator.com/item?id=18736698

          In short, the biomolecular machinery that neurons use to think is present in all cells.

          • Teever 1143 days ago
            This may be the case but a quad amputee is still able to form and recall memories as well as tell jokes and sing songs.
            • ksaj 1143 days ago
              I don't understand the meaning of this response. You seem to be suggesting that all memories are in the limbs. I don't think that's what was being suggested here.

              Trying to stay within what I think you are saying, don't forget that if you amputate some insect's legs, they will keep flicking around for a while, even though there is no connection to the brain. Surely the insect doesn't suddenly forget how to move its now-missing leg. But the leg does seem to have its own capacity to remember how to jump or whatever even without the brain. Until it runs out of energy, of course.

              • Teever 1143 days ago
                Insect legs don't have the capacity to remember how to jump. They retain the ability to move for a brief period of time.

                You're ascribing a higher meaning to the limited spasms of a dismembered limb.

        • tiborsaas 1143 days ago
          You can cite your body. Before you get offended, really, just examine it as a system, and try to explain how you can have a conscious experience without any sensory input.

          Even with a lame comparison to computers, the machines also need a lot of stuff to put a CPU to work.

          • avaldeso 1143 days ago
            > You can cite your body.

            Anecdotal evidence.

            Also, if the mind is fully integrated with the body, how do you explain seemingly inconsistent states that seem to work just fine? E.g., people with ALS, or people who are quadriplegic, severely injured, or mutilated. If the mind can work perfectly without a perfectly abled body, where's this mind-body connection? Also, where's such a connection in a comatose brain with a completely functional body? Maybe I misunderstood what this mind-body connection is supposed to be.

            • AndrewKemendo 1143 days ago
              Show me these people who aren't significantly augmented to replace their bodily functions, e.g. Hawking.

              I'm not sure how you consider that working "just fine."

          • quesera 1143 days ago
            > try to explain how can you have a conscious experience without any sensory input.

            John Lilly, sensory deprivation tanks?

            https://en.wikipedia.org/wiki/John_C._Lilly

            • tiborsaas 1141 days ago
              Sensory deprivation is interesting, but you already have a conscious experience when you enter the tank. It even reinforces that your mind is tied to your body.
          • SkyPuncher 1143 days ago
            > You can cite your body.

            At best, this is an anecdote.

        • ErikVandeWater 1143 days ago
          I think the second sentence is opinion, not something that could be objectively tested. Last sentence is mostly true. Sick people are much less happy than when they are healthy.
    • 01100011 1143 days ago
      Sure but think about what happened. 50+ years ago, researchers figured out some aspects of a neuron, simulated a network of grossly simplified neurons, and found out they could do useful things. Much of modern NN stuff is just following that trajectory.

      I don't think many people seriously believe that artificial neurons are in any way comparable to a real neuron, much less believe that an ANN is comparable to what goes on in the human body. Maybe in some very limited cases like the visual cortex, but even then I think most people would admit that it's a poor model valid only to a 1st approximation.

      That said, there is still merit in pushing the current approach further while other researchers continue to try to understand how biology implements intelligence and consciousness.

      • AndrewKemendo 1143 days ago
        I don't know a single luminary in AI who seriously considers the whole of body approach to their work.

        Pretty much everyone talks/reasons specifically only about the brain and never about how it works holistically with sensors and effectors.

        For example, in computer vision, all of the biggest work assumes a 2D (RGB+grey) matrix as the starting point. It never makes any assumptions about how that image is generated. Only in the LIDAR world is the sensor really considered, and even then everyone is trying to jam LIDAR returns into a 2D matrix.

    • jtsiskin 1143 days ago
      Most AI tasks that I think of - image labeling, NLP - the majority of that happens in the brain? Do we process language in our peripheral nervous system?
      • _Microft 1143 days ago
        Edge-enhancement happens at the retina already by clever combinations of inputs from different photoreceptor cells for example.
        • ravi-delia 1143 days ago
          But it's also pretty obviously just a convolution, so not exactly a big unknown. It's super neat, and it makes sense that it would be in the eye, but at the end of the day the interesting processing is done in the brain.
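          The "just a convolution" point can be made concrete with a center-surround kernel: an excitatory center and an inhibitory surround, the standard cartoon of a retinal receptive field. A pure-Python sketch (toy image and kernel values, not real retinal parameters):

```python
def edge_enhance(image, kernel):
    """2D 'valid' convolution (strictly cross-correlation, but the kernel
    is symmetric so it makes no difference): each output pixel is a
    weighted sum of its neighborhood, like a center-surround cell."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            row.append(sum(kernel[a][b] * image[i + a][j + b]
                           for a in range(kh) for b in range(kw)))
        out.append(row)
    return out

# Excitatory middle, inhibitory ring.
CENTER_SURROUND = [[-1, -1, -1],
                   [-1,  8, -1],
                   [-1, -1, -1]]

# A flat field with a step edge down the middle.
img = [[0, 0, 0, 9, 9, 9]] * 5
response = edge_enhance(img, CENTER_SURROUND)
```

          On the flat regions the response is zero; it is nonzero only at the step edge, which is exactly the edge enhancement being described.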
      • AndrewKemendo 1143 days ago
        Of course it does - physical phenomena must have a biological transducer to interact with [1]

        The structure of these transducers is critically important as they gate/filter the interaction types with the physical world that ultimately are the bounds on what humans can reason about. They are doing transformations and "pre-processing" if you like to compress real world signal into something that can be interpreted by the other systems in the body.

        [1] https://www.umms.org/ummc/health-services/hearing-balance/pa...

        • jtsiskin 1143 days ago
          By language I mean words, the symbols that NLP operates on. Raw audio -> words is a much simpler problem than words -> understanding/response.
    • Darvon 1143 days ago
      Artificial Life community does this research. Simulating worm brains and modelling ants, etc.
    • IdiocyInAction 1143 days ago
      How do you suggest that AI research should incorporate that? Most modern AI research isn't even brain-inspired anymore; the origins of ANNs are brain-inspired, but most SOTA approaches don't really seem to be.
      • AndrewKemendo 1143 days ago
        Start from first principles in the physical world. Do more work on what kind of processing we can do at the edge of the sensor and work our way up from there.

        For example, build a system that learns to have reflexes. That is, has processing close to or near a sensor and can work collaboratively with other sensors to learn (not explicitly programmed) to take action based on input without a central processing system.

        I would argue that if you can build a complex enough physical reflex learning system, then you have enough of the building blocks for a human level system.

  • lisper 1143 days ago
    Meh. A thermostat makes decisions without a central nervous system too.

    But the title of the article is actually "A memory without a brain", which is much more interesting. A better rewrite of the title would be "A single-cell slime mold can remember the locations of food sources", which is actually pretty cool.

    • robotresearcher 1143 days ago
      An alternate stance is that a thermostat is, or takes the role of, a central nervous system in the system it regulates.
  • Barrin92 1143 days ago
    Great book on the topic is Wetware: A Computer in Every Living Cell. It really does a lot to show the complexity and amount of work that is done within every-single cell purely at a mechanical or chemical level, and has made me a lot more skeptical about the reductionism that is common today in a lot of AI related fields.
  • fiftyfifty 1143 days ago
    Previous studies were already zeroing in on the cytoskeleton (made of microtubules) as the likely place where slime molds stored their memories: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4594612/

    The breakthrough here is that they've found the memories are encapsulated in the diameter of the microtubules: “Past feeding events are embedded in the hierarchy of tube diameters, specifically in the arrangement of thick and thin tubes in the network.”

  • szhu 1143 days ago
    This doesn't feel as shocking or startling to me as it probably does for many.

    A slime mold is a collection of adjacent cells without a hierarchy that can act together to make decisions. Our brain is also a collection of adjacent cells that can act together to make decisions.

    They're fundamentally the same thing. It seems like people are shocked primarily because we arbitrarily defined a notion of certain collections of cells being an "organism", and a slime mold doesn't fit within this ontology.

    • dangerbird2 1143 days ago
      When slime molds are in the plasmodial life stage (when they become macroscopic and "smart" according to the paper), it is not a collection of cells, but a single cytoplasm sharing multiple nuclei, making it a technically single-celled organism. Of course, because it contains so many nuclei over such a large area, it ends up behaving like a multicellular colonial organism.
  • tapoxi 1143 days ago
    Highly recommend this episode of Nova: https://www.pbs.org/wgbh/nova/video/secret-mind-of-slime/
  • imvetri 1143 days ago
    A single slime mold cell is a neuron cell; a neuron is capable of learning without a central nervous system. Don't compare a leaf to a forest.
  • rhyn00 1143 days ago
    This sort of reminds me of the book "Vehicles: Experiments in Synthetic Psychology" by Valentino Braitenberg. In this book the author starts a series of thought experiments by constructing small "vehicles" which drive around on a table top. The vehicles start with very simple behaviors, then he applies evolution (by vehicles falling off, or being selectively removed) while adding more complex behaviors until the vehicles eventually become intelligent.

    In a way the slime mold is analogous to one of the simple vehicles that ends up becoming more intelligent through simple mechanisms and evolution.

    Book link: https://mitpress.mit.edu/books/vehicles

  • emrah 1143 days ago
    Without the "How" in the title, the meaning changes a bit and makes it sound like scientists were shocked that a single-celled organism could be intelligent without a nervous system, when of course a nervous system is not an absolute requirement for behaving intelligently.
  • peignoir 1143 days ago
    Reminds me of the book Wetware, describing a similar behavior.