pedalpete 2 hours ago

I believe that training a system to understand the electrical signals that define a movement is significantly different from a system that understands thought.

I work in neurotech, and I don't believe that the electrical signals of the brain define thought or memory.

When humans understood hydrodynamics, we applied that understanding to the body and thought we had it all figured out. The heart pumped blood, which brought nutrients to the organs, etc etc.

When humans discovered electricity, we slapped ourselves on the forehead and exclaimed "of course!! it's electric" and we have now applied that understanding on top of our previous understanding.

But we still don't know what consciousness or thought is, and the idea that it is a bunch of electrical impulses is not quite proven.

Neurons do fire electrically, absolutely, but does that firing directly define thought?

I'm happy to say we don't know, and that "mind-reading" devices are as yet unproven.

A few start-ups are doing things like showing people images while reading brain activity and then trying to understand which areas of the brain "light up" for certain images, but I think this path will prove fruitless in understanding thought and how the mind works.
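
As a toy caricature of that pipeline (everything below is simulated data with invented numbers, not any real lab's method): a nearest-template decoder looks impressive on the image categories it was trained on, yet it has no way to even represent a thought it has never seen - it can only force the nearest known label.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated "activity maps": 100 voxels, one mean pattern per image category.
patterns = {"face": rng.normal(0, 1, 100), "house": rng.normal(0, 1, 100)}

def trial(category):
    """One noisy recording while the subject views an image."""
    return patterns[category] + 0.5 * rng.normal(0, 1, 100)

# "Training": average a few trials per category into a template.
templates = {c: np.mean([trial(c) for _ in range(20)], axis=0) for c in patterns}

def decode(x):
    """Nearest-template decoder: which known category does this best match?"""
    return max(templates, key=lambda c: np.dot(templates[c], x))

# In distribution it looks impressive (high accuracy on held-out trials)...
hits = sum(decode(trial("face")) == "face" for _ in range(50))

# ...but for activity it was never trained on, it must still answer
# "face" or "house": it cannot express "this is something new".
forced_guess = decode(rng.normal(0, 1, 100))
```

Matching templates is not the same as understanding what the activity means.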

  • Night_Thastus an hour ago

    Agree completely. The brain is so incredibly complex that we've barely scratched the surface. It's not just neurons, which are very complex and vary wildly in genetics from one to the next - it's hundreds of other kinds of helper cells, all interacting with each other in sometimes bizarre ways.

    To try to boil it all down to any simple signal is just never going to work. If we want to map consciousness, it's going to be as complex as simulating it ourselves - creating something as dense and detailed as a real brain.

  • PaulRobinson 2 hours ago

    We know that the brain is a structure that works through electrochemical reactions. Axons carry signals that synapses pass on to other neurons. We can test this. We can measure it. There's nothing else going on that we can't describe using known science.

    Ah, we might say, maybe there is an unknown science - we didn't know about so much before, like electricity, like X-rays, like quantum physics, and then we did, and the world changed.

    The difference is that we observed something that science could not explain, and then we found the new science that explained it, and a new science was born.

    It's pretty clear to me - but you may know more - that we can explain all brain activity through known science. It might be hard to think of us as nothing more than a bunch of electrochemical reactions in a real-world reinforcement learning system, but that's what we are: there's no gap that needs new science, is there?

    • estimator7292 30 minutes ago

      No, none of this is settled. We cannot adequately explain brain function with current science.

      There have been studies this year implying that some brain functions rely on quantum interactions.

    • jaapz 34 minutes ago

      Can we? We can only see whatever we can measure with the tools we currently have, which are based on the knowledge we currently have. Who's to say there isn't something out there we haven't discovered yet? There's more than enough we still don't understand in many domains of science

  • quantummagic an hour ago

    > There are electrical firing of neurons, absolutely, but do they directly define thought?

    Well, surgeons and researchers have shown that electrical stimulation of certain brain regions can induce "perception" during procedures. They can make a patient have the conscious experience of certain smells, for instance.

    It's not conclusive proof of anything, but I wouldn't bet against us being closer to the mark than we were when we only had hydrodynamics as the model.

  • baxtr an hour ago

    This sounds logical and convincing.

    At the same time, it should also be easy to falsify.

    Has an experimental setup like this been tested? If I'm not mistaken, it should be able to falsify your claim.

    Train a decoder on rich neural recordings, then test it on entirely new thoughts chosen under blinded conditions.

    If it can still recover the precise unseen content from signals alone, the claim that electrical activity is insufficient is overturned.

    • plastic-enjoyer an hour ago

      > Train a decoder on rich neural recordings, then test it on entirely new thoughts chosen under blinded conditions.

      There have been enough studies about this, and the result is mostly the same: it's difficult to nearly impossible to reliably decode neural recordings that differ from the distribution of recordings the decoder was trained on. There are a lot of reasons why this happens; electrical activity being insufficient is not one of them.

  • PunchyHamster an hour ago

    seems like trying to take a single pixel's signal (so to speak) and extrapolate an entire image out of it.

  • fainpul 2 hours ago

    Does it make sense to think of thoughts, consciousness etc. as an emergent property of the neuronal activity in our brains?

  • observationist an hour ago

    This is silly. It's the sum of electrical and chemical network activity in the brain. There's nothing else it can be. We've got a good enough handle on physics to know that it's not some weird quantum thing, it's not picking up radio signals from some other dimension, and it's not some sort of spirit or mystical phlogiston.

    Your mind is the state of your brain as it processes information. It's a computer, in the sense that anything that processes information is a computer. It's not much like silicon chips or the synthetic computers we build, as far as specific implementation details go.

    There's no scientific evidence that anything more is needed to explain everything the mind and brain does. Electrical and chemical signaling activity is sufficient. We can induce emotions, sights, sounds, smells, memories, moods, pleasure, pain, and anything you can experience through targeted stimulation of neurons in the brain. The scale of our experiments has been gross, only able to read and write from large numbers of neurons, but all the evidence is consistent.

    There's not a single rigorously documented phenomenon, experiment, or any data in existence that suggests anything more than electrical and chemical signaling is needed to explain the full and wonderful and awe-inspiring phenomenon of the human mind.

    It's the brain. We are self-constructing software running on 2 lb chunks of fancy electric meat stored in a bone vat, with a sophisticated network of sensors and actuators in a wonderful biomechanical mobility platform that empowers us to interact with the world.

    It explains consciousness, intelligence, qualia, and every other facet and nuance of the phenomena of mind - there's no need to tack on other explanations. It'd be like insisting that gasoline also requires the rage of fire spirits in order to ignite and power combustion engines - once you get to the point of understanding chemical combustion and expansion of gases and transfer of force, you don't need the fire spirits. They don't bring anything to the table. The scientific explanation is sufficient.

    Neocortical networks, with thalamic and hippocampal system integrations, are sufficient to explain the entirety of human experience, in principle. We don't need fire spirits animating cortical stacks, or phlogiston or ether or spirit.

    Could spirit exist as a distinct, separate phenomenon? Sure. It's not intrinsic to subjective experience, consciousness, and biological intelligence, though, and we should use tools of rational thinking when approaching these subjects, because a whole lot of pseudo-scientific BS gets passed as legitimate scientific and philosophical discourse without having any firm grounding in reality.

    We are brains in bone vats - nothing says otherwise. Unless or until there's evidence to the contrary, let that be enough.

    • Night_Thastus an hour ago

      I think you misunderstood the person you're responding to. They did not say there was some higher force beyond the physical pieces.

      What they're saying is that the brain is really really complicated and our understanding of biology is far too rudimentary right now to be saying "yes, absolutely, 100% sure that we know the nature of consciousness from this one measurement of one type of signal".

      * Neurons are very complex and all have unique mutations from one another

      * Hundreds of other types of cells in the brain interact with them and each other in ways we don't understand

      * The various other parts of the body chemically interact with the brain in ways we don't understand yet, like the gut microbiome

      Trying to flatten all of consciousness to one measurement is just not sufficient. It's like trying to simulate the entire planet as a perfect sphere of uniform density. That works OK for some things but falls apart for more complex questions.

      • observationist 42 minutes ago

        I get that, but there's no need to complicate things unnecessarily.

        I'll make an even stronger claim: biological brains are not only computers, but they operate in binary as well. Active and inactive - the mechanisms that trigger activation are incredibly nuanced and sophisticated, but the transfer of information through a network of biological neurons is a matter of zeroes and ones. A signal happens, or it doesn't. Intensity, from a qualia perspective, ends up being a matter of frequency and spread, as opposed to level of stimulation. That, in conjunction with all sorts of models of brain function, is allowing neuroscience to make steady, plodding progress in determining the function and behavior of different neurons, networks, clusters, and cell types in the contexts in which they are found.

        All else being equal, at the rate neuroscience is proceeding, we should be able to precisely simulate a human brain, in functionally real-time, using real brain networks as models, by around 2040. We should have a handle on every facet of brain chemistry, networking, electrical signaling, and individual neuronal behavior based on a comprehensive and total taxonomy of feature types down to the molecular level.

        Figure out the underlying algorithms and you can migrate those functional structures purely to code. If you can run a mind on code, then it doesn't matter whether you're executing the sequence of computations in a meat brain, in a silicon chip, or using a billion genetically engineered notebook monkeys to painstakingly and tediously do the computations and information transfer manually, passing sheets of paper between them. (The monkeys, of course, could not operate in real time.)

        There won't be another significant phase change, like we saw from hydraulics to computation equivalence. Computation is what it actually, physically is, at the level of electrical signals and molecular behaviors. It's just extremely complex and sophisticated and elegantly interwoven with the rest of the human organism.

        Brain-gut interactions aren't necessary for human subjective experience or cognition. You could remove your brain entirely from your skull while maintaining an equivalent level of electrical and chemical signaling from an entirely artificial platform of some sort, and as long as the interface between the biological and the synthetic maintains the same signaling frequency, chemistry, and connectivity, it doesn't matter what's on the synthetic end.

        There are independently intelligent aspects to things like the gut biome, and other complex biological systems. Those aren't necessary for brains to do what brains do, except in a supportive role. Decouple the nutrition and evolutionary drives from the mind, and you're left with a fairly small chunk of brain - something like 5B neocortical neurons is the bare minimum of what you'd need to get human level intelligence. Everything on top of that is nice to have, but not strictly necessary from a proof of concept perspective.

Terr_ 4 hours ago

From some dystopic device log:

    [alert] Pre-thought match blacklist: 7f314541-abad-4df0-b22b-daa6003bdd43
    [debug] Perceived injustice, from authority, in-person
    [info]  Resolution path: eaa6a1ea-a9aa-42dd-b9c6-2ec40aa6b943
    [debug] Generate positive vague memory of past encounter
Not a reason to stop trying to help people with spinal damage, obviously, but a danger to avoid. It's easy to imagine a creepy machine arguing with you or reminding you of things, but consider how much worse it would be if it derailed your chain of thought before you're even aware you have one.
  • callamdelaney 3 hours ago

    Can you imagine having chatgpt in your brain to constantly police wrongthink? Would save the British media a job.

  • iberator 3 hours ago

    You should make a text-based game.

guiand 4 hours ago

Split brain experiments show that a person rationalizes and accommodates their own behavior even when "they" didn't choose to perform an action[1]. I wonder if ML-based implants which extrapolate behavior from CNS signals may actually drive behavior that a person wouldn't intrinsically choose, yet the person accommodates that behavior as coming from their own free will.

[1]: "The interpreter" https://en.wikipedia.org/wiki/Left-brain_interpreter

  • brnaftr361 4 hours ago

    Split brain experiments have been called into question.[0]

    [0]: https://www.sciencedaily.com/releases/2017/01/170125093823.h...

    • comboy 3 hours ago

      > The patients could accurately indicate whether an object was present in the left visual field and pinpoint its location, even when they responded with the right hand or verbally. This despite the fact that their cerebral hemispheres can hardly communicate with each other and do so at perhaps 1 bit per second

      1 bit per second and we are passing complex information about location in 3d space?

      • vilhelm_s 2 hours ago

        Yeah, that sounds very unlikely. The full paper dismisses the possibility:

        > Another possible explanation to consider is that the current findings were caused by cross-cueing (one hemisphere informing the other hemisphere with behavioural tricks, such as touching the left hand with the right hand). We deem this explanation implausible for four reasons. First, cross-cueing is thought to only allow the transfer of one bit of information (Baynes et al., 1995). Yet, both patients could localize stimuli throughout the entire visual field irrespective of response mode (Experiments 1 and 5), and localizing a stimulus requires more than one bit of information. Second, [...]

        I get the impression that the authors of the paper have some kind of woo (nonmaterialist) view of consciousness. But they also mention this possibility, which seems more plausible to me:

        > Finally, a possibility is that we observed the current results because we tested these patients well after their surgical removal of the corpus callosum (Patient DDC and Patient DDV were operated on at ages 19 and 22 years, and were tested 10–16 and 17–23 years after the operation, respectively). This would raise the interesting possibility that the original split brain phenomenon is transient, and that patients somehow develop mechanisms or even structural connections to re-integrate information across the hemispheres, particularly when operated at early adulthood.

    • pinkmuffinere 3 hours ago

      Wow this is fascinating, and gets rid of one of my eldritch memetic horrors. Thanks for sharing, I’m going to submit it as its own post as well!

    • empath75 3 hours ago

      That's a great paper, but I don't think it calls into question anything about post-hoc rationalizations, and it might actually put that idea on more solid ground.

      • debo_ 3 hours ago

        Maybe you are just rationalizing it.

zh3 3 hours ago

AI following the Libet (1983) paper [0] about preconscious activity apparently preceding 'voluntary' acts (which really elevated the question of what 'free will' means).

[0]: https://pubmed.ncbi.nlm.nih.gov/6640273/

  • Lerc 2 hours ago

    The prima facie case for free will* is that it feels free. If you can predict the action before the feeling, that removes the argument (unless you want to invoke time travel as an option).

    *one of the predominant characterisations of free will, anyway. I'm a compatibilist, so I have no issue with caused feelings of decision-making being in conflict with free will. I also have a variation of Tourette's, so I have a different perception of doing things wilfully compared to most people. It's really hard to describe how sometimes you can't tell whether something was a tic or not.

    • kelseyfrog 2 hours ago

      There are a lot of things I feel that end up not being "real", like embarrassment, failure, and anxiety. Why should free will not be like any of those?

      • nicoty 2 hours ago

        Like how capsaicin makes food feel hot even when it isn't?

        • kelseyfrog 2 hours ago

          Yes, my point is that our senses often portray a reality that doesn't exist. Why should we assume free will is any different?

          We don't even have a coherent, agreed-upon definition. Every attempt at operationalizing it results in it not being detectable. It's time we admit that there is no scientific basis for free will. It's not a scientific belief.

          • Lerc 42 minutes ago

            That's kind of where compatibilism comes in. For most definitions of free will, it's a fight between the feeling that we would like to have it - because it feels like we have it - and the evidence that says the feeling of having it does not provide any proof.

            Sapolsky's approach is to come up with a definition that can't be met and then declare that it can't be met. Compatibilism is more about finding a definition that is consistent with the feeling, because without the feeling there isn't really anything to anchor the idea to anything meaningful. It doesn't use the feeling of making decisions as proof of any power as such; it treats the feeling more like a measurement of the concept. Doing what we feel like doing is free will. What we feel like doing can be caused by anything, and it still wouldn't matter.

            Considering the inverse situation makes it seem like any other definition of freedom would be intolerable. If you 'freely' chose to do X but consistently had the perception of wanting to do not-X, it seems like you would not have a happy life. Similarly, considering the alternative to determinism for your decisions seems like literal chaos. If something has no cause, it is literally random. Any pattern you can discern would indicate that it has a cause.

            Then of course you get into the nature of what is a cause, and consequently the nature of time itself. Something travelling backwards in time and interacting with something you can see appears uncaused, because there was literally no preceding event that indicated it was going to happen (it wasn't preceding; it came from the other direction).

            That's one of those weird things I wonder about when people ask why there is more matter than antimatter, or why time's arrow mainly points in one direction. It feels like riding the crest of a wave, wondering why all the water is going the same direction.

    • bananaflag 2 hours ago

      Hm, but maybe you can predict the feeling before you can predict the action. Checkmate atheists :)

      (for the record I am also a compatibilist)

    • immibis an hour ago

      I don't see why having some latency in the path of free will makes it no longer free. Before my arm moves up, there is a motor neuron that fires that is always correlated with my arm moving up; doesn't that just mean the free will occurs earlier in the process than the motor neuron firing?

      • Lerc 37 minutes ago

        The signal preceding the feeling is not an argument against free will. It is an argument against the feeling of free will being evidence for free will.

  • criddell 2 hours ago

    Well, what does free will mean to scientists?

    • czl 2 hours ago

      There is no single definition shared by all scientists. However, if you define free will as choices that are completely free of deterministic (or even statistically deterministic) causes that science could in principle predict, then most scientists would say: no, that kind of free will probably doesn't exist.

  • keybored 2 hours ago

    That it precedes voluntary acts tells us that most of what we do is not conscious. Which has been known for over a century, maybe millennia.

    (opinion stolen from some Chomsky video)

handedness 4 hours ago

> is it time to worry?

Shouldn't the device be the judge of that?

rpq 4 hours ago

I think the real danger lies in how many people will accept that output as the unadulterated, unmistakable truth - for actions, for judgment. Talk about a sinister device.

  • smilebot 3 hours ago

    You don’t need a sinister device. This is essentially how propaganda works.

    • rpq an hour ago

      Propaganda mostly comes without science. This comes with it.

    • analog8374 2 hours ago

      A handsome, well-dressed alpha speaking with confidence and certainty. That's truth right there.

mostertoaster 3 hours ago

Ok, does anyone else's mind just immediately jump to "The Minority Report", and how it's soon going to no longer be just a sci-fi dystopia?

  • retox 2 hours ago

    [dead]

analog8374 2 hours ago

Install one of these on every citizen!

bryanrasmussen 2 hours ago

I guess I will start paying attention when it can predict word choice in my internal monologue.

  • robot-wrangler 2 hours ago

    Of course, for shallow people in need of validation, doing exactly that is the point of sycophancy. "I have a great idea, I will ask the AI.." followed immediately by "What an insightful question, this gets to the heart of the matter"

fjfaase 4 hours ago

I wonder how similar this experience is to Alien Hand Syndrome, where people experience part of their body, usually a hand, acting on its own.

ryandv an hour ago

You don't need a device to do this.

  • rpq an hour ago

    It's so much more convincing with a device, though. Backed by scientific consensus. Where do I drop off my brain, offload my judgment, and let it tell me what I need to know about anyone?

    • ryandv an hour ago

      > It’s so much more convincing with a device though. Backed by scientific consensus.

      Hold on, I dropped my Nobel Prize for lobotomy on the ground somewhere... going to need some help looking for it...

      On second thought, maybe it's better off not being found.

amarant 3 hours ago

I find the take that a quirk in how state-of-the-art assistive technology works is reason for privacy fear-mongering to be tired, unimaginative, and typical of today's journalism, which cares more about clicks than reporting fact.

It's a very interesting quirk of an immensely useful device for those who need it, but it's not an ethical dilemma.

I for one am sick and tired of these so-called ethicists, whose only work appears to be stirring up outrage over nothing and holding back medical progress.

Similar disingenuous articles appeared when stem-cell research was new, and they still do from time to time. Saving lives and improving life for the least fortunate is not an ethical dilemma; it's an unequivocally good thing.

Quit the concern trolling, nature.com - you're supposed to be better than that.

  • fsckboy 2 hours ago

    >it's an unequivocally good thing.

    the miracle that is humans and humanity came about through millennia of unrestrained fertility and lots of sex℠, producing many babies, most of whom didn't survive to adulthood. Insects and fish still do this on a grand scale. It's where we came from and who we are, and that's Not Bad™, and I think it's unequivocally a bad thing that we keep thinking we know better and interfering with it. We are failing miserably to propel our species forward into the future. "Survival of the fittest" is a mistake; it's the destruction of the no-longer-adequate that works the magic.

    (pretty proud of myself for realizing after I put in the tm that I could go back and put in a service mark too)

  • pixl97 2 hours ago

    This is quite the spicy take for something that could have far more than one purpose.

    The problem with humanity is that some people pick up the hammer and build a house, while others will crack your head open with it and eat the pink gooey insides. The discussion of a technology should be able to withstand the good and bad points of its conception.

    • amarant 44 minutes ago

      >The discussion of technology should be able to withstand the good and bad points of its conception

      True, but that's a poor excuse to make up hypothetical problems that don't even exist. Tying it into the current craze about data privacy is a bit too transparent imo.

      The journalists should do better; the article makes a downright dishonest interpretation of the issue by shoehorning it into the lens of a long-running controversy about data privacy that is in no way applicable to the technology they're discussing.

      The privacy issue is not only hypothetical, but far enough off in the future that it would be a better fit for a sci-fi novel than for nature.com reporting.

      The problem is not discussing the downsides, the problem is doing it dishonestly, as the article does.

cma 3 hours ago

Rather than the Karpathy thing about in-class essays for everything, maybe random selections of students will be sent to the school fMRI machine and asked to recall the details of writing their essay homework away from school.

  • Lerc 27 minutes ago

    fMRI machines are not cheap, nor plentiful.

    If one day someone can make a small, cheap device that does the job of an fMRI, it will be more world-changing than you can imagine. If you had easy access to real-time data about what is going on in your brain, there is evidence to suggest that you could learn to influence that data and literally change your own mind.

j45 3 hours ago

Maybe skulls will need a faraday cage.

  • aareselle 3 hours ago

    We already have tinfoil hats for that.

idiotsecant 4 hours ago

It's interesting that the path from 'decide to do something' to performing the action is hundreds of ms long. It's also interesting that grabbing the data early in the process and acting on it can perform the action before the conscious 'self' understands fully that the action will take place. It's just another reminder that the 'you' that you consider to be running the show is really just a thin translation layer on top of an ocean of instinct, emotion, and hormones that is the real 'you'.

  • svieira 4 hours ago

    I rather prefer the holistic take that we are our whole selves and not just the part that reflects on what we do or the part that reacts to external and internal material stimuli. We know we can change the instincts, emotions, and hormones when they conflict with what we know by reflection to be just and good. To put it another way, we know that we can do things "without thinking" that are either just or unjust and by reflection can achieve some level of mastery over the direction of our impetuses.

    • sturbes 3 hours ago

      I’ve been saying “There is a real you, unfortunately, you’re not it.”

      • technothrasher 3 hours ago

        I watched an interview with Carrie Fisher years ago where she was talking about her struggle with drug abuse. She said something that I thought was quite insightful: "I am but a spy in the house of myself."

      • throw4847285 3 hours ago

        I think the opposite is equally true.

        "There is a real you. Unfortunately, it's you."

      • Terr_ 3 hours ago

        "That's ridiculous", said the hivemind-module of a mobile megafortress powered by a swarm of nanomachines.

        • analog8374 2 hours ago

          Speaking as a guy who meditates, EVERYTHING dissolves when you look at it closely. To reveal another "reality". And then that reality dissolves. And so on.

keybored 3 hours ago

Unlike the vast sea of the subconscious, we can try to take direct control of technology. But we don't. So we are left to fret about what technology will do to us (meaning: what the people in power will use it for).

  • analog8374 2 hours ago

    The Amish take control of technology like that. If it doesn't pass review it's forbidden. It's a pretty good idea.

    Consider that heroin used to be legal. We might have a dozen such technologies that we know are hazardous but have yet to kill enough people to force the issue.

thomastjeffery 2 hours ago

Seems like they are really jumping to conclusions here.

> Smith’s BCI system, implanted as part of a clinical trial, trained on her brain signals as she imagined playing the keyboard. That learning enabled the system to detect her intention to play hundreds of milliseconds before she consciously attempted to do so

There are some serious problems lurking in the narrative here.

Let's look at it this way: they trained a statistical model on all of the brain patterns that happen when the patient performs a specific task. Next, the model was presented with the same brain pattern. When would you expect the model to complete the pattern? As soon as it recognizes the pattern, of course!
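
A toy sketch of that objection (all numbers invented; this has nothing to do with the actual BCI's internals): annotate the "conscious attempt" at sample 300, give the trained pattern a preparatory ramp before that point, and a bare threshold detector will "predict" the attempt simply because the earliest part of the pattern already clears the bar.

```python
import numpy as np

rng = np.random.default_rng(1)

# One toy trial: a preparatory ramp for the first 300 samples, then a
# sustained plateau. Sample 300 is where the researchers would annotate
# the "conscious attempt".
t = np.arange(600)
template = np.where(t < 300, t / 300.0, 1.0)
signal = template + 0.05 * rng.normal(size=600)

def first_crossing(x, threshold=0.5):
    """Fire as soon as the running match clears the threshold."""
    above = np.nonzero(x > threshold)[0]
    return int(above[0]) if above.size else None

onset_label = 300
fired_at = first_crossing(signal)  # fires mid-ramp, before the annotation
```

The detector is not seeing intention; it is completing the front end of the pattern it was fit to.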

> That learning enabled the system to detect her intention to play hundreds of milliseconds before she consciously attempted to do so

There are two overconfident assumptions at play here:

1. Researchers can accurately measure the moment she "consciously attempted" to perform the pretrained task.

2. Whatever brain patterns that happened before this arbitrary moment are relevant to the patient's intention.

There's supposed to be a contradiction here: both assumptions have to be correct at once, which would mean the model detected her intention before she had it. How is that resolved? By declaring whatever happened before that arbitrary moment a special thing called "precognition"... Tautological nonsense.

Not only do these assumptions blatantly contradict each other, they are totally irrelevant to the model itself. The BCI system was trained on her brain signals during the entirety of her performance. It did not model "her intention" as anything distinct from the rest of the session. It modeled the performance. How can we know that when the patient begins a totally different task, the model won't just "play the piano" like it was trained to? Oh wait, we do know:

> But there was a twist. For Smith, it seemed as if the piano played itself. “It felt like the keys just automatically hit themselves without me thinking about it,” she said at the time. “It just seemed like it knew the tune, and it just did it on its own.”

So the model is not responding to her intention. That's supposed to support your hypothesis how?

---

These are exactly the kind of narrative problems I expect to find buried in any "AI" research. How did we get here? I'll give you a hint:

> Along the way, he says, AI will continue to improve decoding capabilities and change how these systems serve their users.

This is the fundamental miscommunication. Statistical models are not decoders. Decoding is a symbolic task. The entire point of a statistical model is to overcome the limitations of symbolic logic by not doing symbolic logic.

By failing to recognize this distinction, the narrative leads us right to all the familiar tropes:

LLMs are able to perform logical deduction. They solve riddles, math problems, and find bugs in your code. Until they don't, that is. When an LLM performs any of these tasks wrong, that's simply a case of "hallucination". The more practice it gets, the fewer instances of hallucination, right? We are just hitting the current "limitation".

This entire story is predicated on the premise that statistical models somehow perform symbolic logic. They don't. The only thing a statistical model does is hallucinate. So how can it finish your math homework? It's seen enough examples to statistically stumble into the right answer. That's it. No logic, just weighted chance.
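
To caricature that point (nothing here resembles a real LLM's internals): a pure lookup over seen examples answers familiar sums perfectly and emits a confident wrong answer for anything outside its training set - no arithmetic ever happens.

```python
# Toy "model": memorized question -> answer pairs, zero symbolic logic.
seen = {f"{a}+{b}": str(a + b) for a in range(10) for b in range(10)}

def model(prompt):
    # A memorized answer if the prompt appeared in training,
    # otherwise a confident fabrication (a "hallucination").
    return seen.get(prompt, "42")

familiar = model("3+4")    # "7" - right, because it was seen in training
unseen = model("17+26")    # "42" - wrong, because nothing computes anything
```

From the outside, the first call looks like deduction; only the second reveals there was never any logic to begin with.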

Correlation is not causation. Statistical relevance is not symbolic logic. If we fail to recognize the latter distinction, we are doomed to be ignorant of the former.