Philosophical Foundations of Neuroscience — Sixteen Years Later
The Masterclass in Logical Analysis and Analytic Philosophy is Needed Now More Than Ever
Author’s note: Portions of this are directly reposted from a response I had written to a Jack Preston King review/discussion of the book How Emotions Are Made: The Secret Life of the Brain by Lisa Feldman Barrett. From the little I have read of her work (I have not read her current book, which is the subject of Jack’s discussion), Dr. Barrett writes and thinks very much in the tradition of Patricia Smith Churchland, a neuroscientist/philosopher of great renown. That said, I view Dr. Churchland’s positions as misguided: she is the queen of the mereological fallacy, often using “the brain” and “the person” interchangeably, as she views them as one and the same. She also believes that if we fully understood everything about how the brain works, we could recreate particular states of consciousness. She is the ultimate reductionist, viewing consciousness as nothing more than a particular series of electro-chemical reactions in the brain which, like particular states of consciousness, we could replicate artificially if we fully understood it. No body would be required for this miraculous achievement. My guess is that the AI crowd is a big fan of her work; myself, not so much. I do not know whether Dr. Barrett would go quite so far, but it seems she commits the mereological fallacy almost as frequently. In any event, neither would fare well at the hands of Bennett and Hacker. In fact, Dr. Churchland is a frequent target of their logical breakdowns of various neuroscientific studies and claims about the brain and consciousness, which they dissect and show to be in error point by point.
It has been almost exactly 16 years since Bennett and Hacker published their seminal work “Philosophical Foundations of Neuroscience.” In that modern classic they first described the mereological fallacy and discussed its implications for neuroscience (circa 2003). This logical position grows naturally out of the philosophical tradition of Wittgenstein and, in my view, represents the most coherent and comprehensive deconstruction of the arguments of cognitive neuroscience ever published. In the book they methodically, through example after example, exposed and described in great detail the logical contradictions at the core of some of the field’s most mainstream claims. At a high level and greatly simplified, the mereological fallacy names the logical contradictions that arise when states of consciousness are assigned to a part of a being rather than to the being as a whole. Specifically, they demonstrated that attempting to locate consciousness in parts of the brain is doomed to failure. Logic dictates that a brain, while necessary to the condition of consciousness, is not alone sufficient to be IN a state of consciousness. The impact of that book on my own beliefs was seismic, and the field was thrown into an uproar. As you might imagine, having one’s entire life work exposed as resting on logical contradictions did not sit well with many. When I first read the book shortly after it was published, artificial intelligence was still mostly considered impossible, and the hedge term “machine learning” had yet to hit the mainstream. It is interesting now to note the parallels between the mereological fallacy and machine learning. Both are logical fallacies based on logical contradictions: the first because of the nature of human beings as whole persons, needing both a (mostly) functioning brain and nervous system to be alive and conscious; the other because of the meanings (which are based on the nature of the things themselves) of the words of which the term is composed.
There is no term yet, that I am aware of, for the particular logical fallacy that machine learning exemplifies. Certainly it is a category error, but that is not specific enough. How about the compulogical fallacy, in honor of B&H? The compulogical fallacy is the category mistake people make when they assign to computers or machines abilities or attributes that only human beings or some intelligent non-human animals are capable of, e.g. the ability to learn. AI would probably not technically commit this fallacy, as it might be possible for an artificial consciousness to arise/be invented/emerge/be born someday. While highly unlikely, it is not logically impossible, as would be the case for a machine with the ability to learn. Any machine that could learn would no longer be a machine, or the definition of “learning” would no longer be the same.
Apparently some of today’s neuroscientists have failed to learn the lessons B&H taught. Contrary to what Dr. Barrett might claim in her book, your brain is not capable of “adding stuff from the full photograph into its vast array of prior experiences…”; only a human being with a (mostly) fully functional brain and nervous system is. Your brain also does not have a “vast array of prior experiences,” or indeed any prior experiences, or any experiences at all; you do. Only a living being (a human being in this case) with a (mostly) functioning brain and nervous system is capable of having experiences. Moreover, it is not the neurons in your visual cortex that link the blobs into shapes that aren’t actually there; you do. Only a human being (and perhaps some intelligent non-human animals or even some insects) with a (mostly) functioning brain and nervous system is capable of perceiving that linked-together blobs are actually recognizable shapes. While functioning neurons in your visual cortex are a requirement for you to be able to do this, the neurons cannot do it without the participation of you, the full human being, in the process. Incidentally, the mereological fallacy provides some of the strongest support for my belief in an absolute requirement of embodiment for any future “artificial intelligence,” though that is a complicated and difficult argument and not for this post. It also provides an interesting window for discussing some thorny and, theoretically at least, plausible modern medical procedures, such as full human head transplantation, an issue I have written about previously.
Dr. Barrett is not alone in her crimes against logic in neuroscience. If you read any of the technical or popular literature in the field, you will find that those two examples are only the tip of the iceberg when it comes to claims about what the brain is supposedly able to do. The brain can taste, eat, dream, sleep, imagine, etc. In fact, given how impressively capable our brains reportedly are, it is a wonder we (human beings) are needed at all.
I have not read Dr. Barrett’s book, so I cannot say how damaging her apparently rampant commission of logical fallacies is to her arguments, but I will say I was and continue to be surprised by the amount of attention the book has received. Mostly because this idea of the brain as “constructor” or “simulator” of our reality has been around in philosophy since before philosophers even knew what a brain was, and in modern times has been a position of many philosophers and neuroscientists as well. My point here is not to respond to Dr. Barrett’s position in the book; not having read it, I have no idea how strong or weak her supporting arguments are. Supposing her position is the correct one, the one thing I can say for certain is that it is not our brains that are “simulators” of our reality; it is we ourselves.
Another interesting discussion would be the impact of this theory, if correct, on the Simulation Hypothesis. If we are already simulated beings in a simulated reality, would it make any difference that we are also “simulators” of the simulation itself? For instance, at first blush it seems to weaken the probability of SH1, for it would mean the Simulators “programmed” the simulation in such a way as to make us do exactly what they have already done: “simulate reality for us.” That seems a very odd thing to do. Perhaps it is a way for the Simulators to hide the reality of the simulation from us? Since all of our experiences are simulated internally, we will never “see” the reality of the simulation for what it is. Interesting indeed. Incidentally, that is another great example of the core weakness of SH1, the too-good-to-be-true problem: need a reason why your particular theory of neuroscience is correct? I guarantee I can find support in some version of SH1. The only limits are those of the imagination.