“The biggest concern is that we might one day create conscious machines: sentient beings with beliefs, desires and, most morally pressing, the capacity to suffer,” write psychologist Paul Bloom and neuroscientist Sam Harris. This is bad psychology, worse philosophy, and awful spirituality.
For starters, the assumptions about sentience are wrongheaded. No machine can be a “sentient being.” That is anthropomorphism and self-love run amok.
But the concern with thought machines suffering is the real hoot. Here we have academic, thought-venerating eggheads expressing sentimental worry about hypothetical robots in the future, without expressing a byte of concern for the real suffering of human beings in the present.
“Philosophers and scientists remain uncertain about how consciousness emerges from the material world, but few doubt that it does. This suggests that the creation of conscious machines is possible.”
This is risible stuff, since philosophers and scientists can’t even agree on what consciousness is, much less how it emerges from the material world (though “few doubt that it does,” as they conceive the material world, of course).
The authors make a classic philosophical mistake by inferring a logical progression from “consciousness that emerges from the material world” to “the creation of conscious machines.” Even allowing for their extremely limited idea of “the material world,” the fact that consciousness emerges within the material brain does not imply that humans will be able to “create conscious machines.”
Of course scientists and programmers are already beginning to make good imitations of the self’s programs and contents, which they are using through AI to accurately predict our desires and behaviors.
That may soon give the appearance of self-awareness. But that’s a far cry from sentience and self-knowing, much less suffering and transcendence.
Boil it down, and these unexamined premises and prognostications rest on what these two and their ilk mean by “the material world.”
In the broadest sense, all energy and matter belong to the material world. But what these men actually mean is mechanism, a simplistic, Cartesian view of matter that has long since been overtaken by quantum physics, with all its strange and as yet unexplained properties.
Defining consciousness as that which “arises in a sufficiently complex system that processes information” is not only woefully inadequate; it’s circular.
Even if humans fabricate and program a thought machine with enough complexity and sophistication for the machine to agree it is conscious, does that make it conscious?
No, it doesn’t, and ascribing poorly understood human capacities to thought machines (like self-awareness and self-knowing, which aren’t the same thing) is malignant folly.
The notion that it’s “only a matter of time before we either emulate the workings of the human brain in our computers or build conscious minds of another sort” suffers from the fallacy that consciousness is only what the programmed brain generates.
In a spontaneously arising meditative state emerging out of complete attention to thought and emotion, the information processing of words, memories, knowledge and experience ceases. One is much more conscious. But there is no place for a consciousness beyond thought in Bloom and Harris’ small and squalid worldview.
They assert, “there is no reason to think that such a system need be made of meat,” and that “conscious minds are most likely platform-independent — ultimately the product of the right software.”
This too is bad philosophy and fallacious reasoning. How in the hell did they come to the conclusion that “conscious minds are most likely platform-independent”? Can their precious computer programs exist without computer hardware?
More importantly, the use of the word “meat” with regard to the human brain evinces a level of removal from nature and the evolution of the human brain that characterizes extreme technophiles and Cartesian materialists. Descartes, my friends, is long dead.
Bloom and Harris tell us to “think not of a machine with visible wires, cartoonish eyes and a voice that sounds like Siri but of a beautiful stranger who engages you in intelligent conversation and who may be more aware of your emotions than your spouse or best friends ever were.”
Translation? Being fooled, and projecting human qualities onto a simulacrum of a human being, a thought machine that can out-think and anticipate our thoughts and emotions, is their criterion for consciousness.
Why are Bloom and Harris more concerned with the suffering of future thought machines than they are with the suffering of living people in the present? Their essay represents not just a monumental failure of imagination, but a perverse sense of morality in which the actual world of people today is less important than the pictured world of AI tomorrow.
They degrade humanity, greatly diminish the capacity of the human brain, and contribute to the destruction of the human potential and prospect.
There is consciousness beyond thought. It is awakened in the human brain when the mind-as-thought falls completely silent in intense, inclusive and undirected attention to what is.
If humanity is not to be taken over by Bloom and Harris’ thought machines, and suffer much more as a result, it’s imperative to awaken consciousness beyond thought within ourselves as human beings.
Martin LeFevre
Link: https://www.nytimes.com/2018/04/23/opinion/westworld-conscious-robots-morality.html