
Philosophical Foundations of Neuroscience — Eighteen Years Later

The Masterclass in Logical Analysis and Analytic Philosophy is Needed Now More Than Ever

By Everyday Junglist · Published 9 months ago · 7 min read
My own well-read copy.

Author's preface: I first published this review on Vocal two years ago, and before that I had published a version on Medium. With each republication I have updated or revised various sections. I thought it made sense to resurrect it one more time for a Vocal Book Club Challenge to "write about a book that changed you." This particular book, PFoN, is, I happen to believe, one of the most important ever written, and it has impacted my own thinking on a huge range of topics very deeply. At the time this was written I was reading tons and tons about neuroscience. Specifically, I had recently completed a number of works by Patricia Smith Churchland, a neuroscientist/philosopher of great renown. I found her views disturbing and her positions misguided. She is the queen of the mereological fallacy (see below for what this is), often using "the brain" and "the person" interchangeably, as she views them as one and the same. She also believes that if we fully understood everything about how the brain works, we could recreate particular states of consciousness. She is the ultimate hard-core reductionist and views consciousness as nothing more than a particular series of electro-chemical reactions in the brain which, like particular states of consciousness, we could replicate artificially if we fully understood them. No body would be required for this miraculous achievement. My guess is the AI crowd is a big fan of her work; myself, not so much. In any event, Dr. Churchland is a frequent target of Bennett and Hacker's logical breakdowns of various neuroscientific studies and claims about the brain and consciousness, which they dissect and show to be in error point by point.

It has been almost exactly 18 years since Bennett and Hacker published their seminal work “Philosophical Foundations of Neuroscience.” In that modern classic they first described the mereological fallacy and discussed its implications for neuroscience (circa 2003). This logical position grows naturally out of the philosophical tradition of Wittgenstein and, in my view, represents the most coherent and comprehensive deconstruction of the arguments of cognitive neuroscience ever published. In the book they methodically, through example after example, exposed and described in great detail the logical contradictions at the core of some of the field's most mainstream claims. At a high level, and greatly simplified, the mereological fallacy names the logical contradictions that are inherent in assigning states of consciousness to a part of a being rather than to the being as a whole. Specifically, they demonstrated that attempting to locate consciousness in parts of the brain is doomed to failure. Logic dictates that a brain, while necessary to the condition of consciousness, is not alone sufficient to be IN a state of consciousness. The impact of that book on my own beliefs was seismic, and the field was thrown into an uproar. As you might imagine, having one’s entire life’s work exposed as resting on logical contradictions did not sit well with many. When I first read the book, shortly after it was published, artificial intelligence was still mostly considered impossible, and the hedge term “machine learning” had yet to hit the mainstream. It is interesting now to note the parallels between the mereological fallacy and machine learning. Both are logical fallacies based on logical contradictions: the first because of the nature of human beings as whole persons, needing both a (mostly) functioning brain and nervous system to be alive and conscious; the other because of the meanings (which are based on the nature of the things themselves) of the words of which it is composed.
There is no term yet, that I am aware of, for the particular logical fallacy that machine learning exemplifies. Certainly it is a category error, but that is not specific enough. How about the “compulogical fallacy,” in honor of B&H? The compulogical fallacy is the category mistake people make when they assign to computers or machines abilities or attributes that only human beings, or some intelligent non-human animals, are capable of, e.g. the ability to learn. AI would probably not technically commit this fallacy, as it might be possible for an artificial consciousness to arise/be invented/emerge/be born someday. While highly unlikely, that is not logically impossible, as would be the case for a machine with the ability to learn. Any machine that could learn would no longer be a machine, or the definition of “learning” would no longer be the same.

Apparently some of today’s neuroscientists have failed to learn the lessons B&H taught. For just one example, look no further than a recent book by the neuroscientist Dr. Lisa Feldman Barrett, titled How Emotions Are Made: The Secret Life of the Brain. In it Dr. Barrett claims that your brain is capable of many things it is patently not capable of, including "adding stuff from the full photograph into its vast array of prior experiences….” Only a human being, and possibly some intelligent non-human animals, with a (mostly) fully functional brain and nervous system is capable of doing this. Your brain also does not have a “vast array of prior experiences,” or indeed any prior experiences, or any experiences at all; you do. Only a living being (a human being in this case) with a (mostly) functioning brain and nervous system is capable of having experiences. Moreover, it is not the neurons in your visual cortex that link the blobs into shapes that aren’t actually there; you do. Only a human being (and perhaps some intelligent non-human animals, or even some insects) with a (mostly) functioning brain and nervous system is capable of perceiving that linked-together blobs are actually recognizable shapes. While functioning neurons in your visual cortex are a requirement for you to be able to do this, the neurons cannot do it without your (the full human being you) participation in the process. Incidentally, the mereological fallacy provides some of the strongest support for my belief in an absolute requirement for embodiment for any future “artificial intelligence,” though that is a complicated and difficult argument and not for this post. It also provides an interesting window for discussing some thorny and, theoretically at least, plausible modern medical procedures, such as full human head transplantation, an issue I have written about previously.

Dr. Barrett is not alone in her crimes against logic in neuroscience. If you read any of the technical or popular literature in the field you will find that these examples are only the tip of the iceberg when it comes to claims about what the brain is supposedly able to do. The brain can taste, eat, dream, sleep, imagine, etc. In fact, given how impressively capable our brains reportedly are, it is a wonder we (human beings) are needed at all.

I have not read the entirety of Dr. Barrett’s book, so I cannot say how damaging her apparently rampant commission of logical fallacies is to her arguments, but I will say I was, and continue to be, surprised by the amount of attention this book has received. Mostly because this idea of the brain as “constructor” or “simulator” of our reality has been around in philosophy since before philosophers even knew what a brain was, and in modern times has been the position of many philosophers and neuroscientists as well. My point here is not to respond to Dr. Barrett’s position in the book and, not having read the whole thing, I have no idea how strong or weak her supporting arguments are. Supposing that her position is the correct one, the one thing I can say for certain is that it is not our brains that are “simulators” of our reality; it is ourselves.

Author's postscript: Another interesting discussion would be the impact of the correctness of this theory on the simulation hypothesis. If we are already simulated beings in a simulated reality, would it make any difference that we are also ‘simulators’ of the simulation itself? For instance, at first blush it seems to weaken the probability of SH1, for it would mean the Simulators ‘programmed’ the simulation in such a way as to make us also do exactly what they have already done: ‘simulate reality for us.’ That seems a very odd thing to do. Perhaps it is a way for the Simulators to hide the reality of the simulation from us? Since all of our experiences are simulated internally, we will never ‘see’ the reality of the simulation for what it is. Interesting indeed. Incidentally, that is another great example of the core weakness of SH1: the too-good-to-be-true problem. Need a reason why your particular theory of neuroscience is correct? I guarantee I can find support in some version of SH1. The only limits are those of the imagination.


About the Creator

Everyday Junglist

Practicing mage of the natural sciences (Ph.D. micro/mol bio), Thought middle manager, Everyday Junglist, Boulderer, Cat lover, No tie shoelace user, Humorist, Argan oil aficionado. Occasional LinkedIn & Facebook user

