
Reviews for Brainchildren


The average rating for Brainchildren, based on 2 reviews, is 2.5 stars.

Review #1 was written on 2020-09-01 by Angel Batallan, who gave a rating of 1 star.
The pomposity and arrogance of this book is really something else. Also, it's rare to see an entire field of academic thought based on a category mistake (literally!); Dennett's most famous essays on artificial intelligence serve as the philosophical core of the strange fantasy-land of 'Strong AI' / AGI theory. The best succinct critique that I've found -- though there are many out there -- is from D. B. Hart, which I've quoted elsewhere (in a Bostrom review) but it certainly bears repeating:

For some theorists, an artificial computer comparable in complexity to the human brain and nervous system could achieve something like our conscious states. This entire theory is an incorrigible confusion of categories, for a very great number of reasons, foremost among them the absolute dependency of all computational processes upon the prior reality of intentional consciousness. I do not mean this simply in the sense that computers and their programs happen to have been designed by human minds, which is an important but ancillary issue; rather that, as Searle has correctly argued, apart from specific representations produced by intentional consciousness, the operations of a computer are merely physical events devoid of meaning. The physical brain may be something very remotely like the physical object we use to run software programs, but to speak of the mind in terms of computation is really no better than speaking of representation in terms of photography. The analogy is momentarily compelling, but it cannot help us make sense of what consciousness is, because it involves comparing the mind to a device that exists only relative to the logically prior and mysteriously irreducible reality of mind itself. We have become so accustomed to speaking of computers as artificial minds and of their operations as thinking we have forgotten that these are mere figures of speech. We speak of computer memory, for instance, but of course computers recall nothing.
They do not even store any 'remembered' information -- in the sense of symbols with real semantic content, real meaning -- but only preserve the binary patterns of certain electronic notations. And I do not mean simply that the computers are not aware of the information they contain; I mean that, in themselves, they do not contain any semantic information at all. They are merely the silicon parchment and electrical ink on which we record symbols that possess semantic content only in respect to our intentional representations of their meanings. Nor can one credibly argue that, even though computer 'memory' may have no intentional meaning, still the 'higher functions' of the computer's software transform those notations into coherent meanings by integrating them into a larger functional system. There are no higher functions and no programs as such, either in the computer considered purely as a physical object or in its operations considered purely as physical events; there are only the material components of the machine, electrical impulses, and binary patterns, which we use to construct certain representations and which have meanings only so long as they are the objects of the representing mind's attention. We have imposed the metaphor of an artificial mind on computers and then reimported the image of a thinking machine and imposed it upon our own minds. Computational models of the mind would make sense if what a computer actually does could be characterized as an elementary version of what the mind does, or at least as something remotely like thinking. In fact, though, there is not even a useful analogy to be drawn here. A computer does not even really compute. We compute, using it as a tool. We can set a program in motion to calculate the square root of pi, but the stream of digits that will appear on the screen will have mathematical content only because of our intentions, and because we -- not the computer -- are running algorithms. 
The computer, in itself, as an object or a series of physical events, does not contain or produce any symbols at all; its operations are not determined by any semantic content but only by binary sequences that mean nothing in themselves. The visible figures that appear on the computer's screen are only the electronic traces of sets of binary correlates, and they serve as symbols only when we represent them as such, and assign them intelligible significances. The computer could just as well be programmed so that it would respond to the request for the square root of pi with the result "Rupert Bear"; nor would it be wrong to do so, because an ensemble of merely material components and purely physical events can be neither right nor wrong about anything -- in fact, it cannot be about anything at all. Software no more 'thinks' than a minute hand knows the time or the printed word 'pelican' knows what a pelican is. We might just as well liken the mind to an abacus, a typewriter, or a library. No computer has ever used language, or responded to a question, or assigned a meaning to anything. No computer has ever so much as added two numbers together, let alone entertained a thought, and none ever will. The only intelligence or consciousness or even illusion of consciousness in the whole computational process is situated, quite incommutably, in us; everything seemingly analogous to our minds in our machines is reducible, when analyzed correctly, only back to our own minds once again, and we end where we began. Rational thought -- understanding, intention, will, consciousness -- is not a species of computation. To imagine that it is involves an error regarding not only what the mind does, but what a computer does as well. One of the assumptions underlying artificial intelligence theory is that the brain, like a computer, uses algorithms in the form of complex neuronal events, which translate neural information into representational symbols. 
If we were to undertake a "homuncular decomposition" of the mind in computationalist terms, supposedly, we would descend through a symbolic level of operations down to a level of something like binary functions, then down further until we reached the simple "switches" in the brain that underlie those functions. But this reverses the order of causality on both sides of the analogy. Neither brains nor computers, considered purely as physical systems, contain algorithms or symbols; it is only as represented to consciousness that the physical behaviors of those systems yield any intentional content. It is in the consciousness of the person who programs or uses a computer, and in the consciousness that operates through the physical apparatus of the brain, that symbols reside. In fact, it is only for this reason that symbolic translation is possible, because the metabolism of data into meaning, or of one kind of meaning into another, is the work of an intentional subjectivity that already transcends the difference between the original 'text' and its translation. A computer programmer can translate meanings or functions into algorithms because, being intentionally conscious, he or she is capable of representing the operations of the computer not merely as physical events but as intelligible symbolic transcriptions of something else; it is in his or her consciousness, at either end of the process, that the physical serves the purposes of the mental. If the brain produces "symbols" of the world perceived by the senses, for instance, it is not a physical transaction but a mental act of representation that already intends perception as an experience of a world beyond. And it is solely there, where symbolic thinking exists, that anything we might call thinking occurs. 
Thus, even if we could imaginatively or deductively descend from the level of consciousness down through strata of symbols, simple notational functions, and neural machinery, we would not be able then to ascend back again the way we came. Once more, the physicalist reduction of any phenomenon to purely material forces explains nothing if one cannot then reconstruct that phenomenon from its material basis without invoking any higher causes; but this no computational picture of thought can ever do. Symbols exist only from above, as it were, in the consciousness looking downward along the path of that descent, acting always as a higher cause upon material reality. Looking up in the opposite direction, from below to above, one finds only an untraversable abyss, separating the intentional nullity of matter from the intentional plenitude of mind. It is an absolute error to imagine that the electrical activity in a computer is itself computation; and, when a believer in A.I. claims that the electrochemical operations of a brain are a kind of computation, and that consciousness arises from that computation, he or she is saying something utterly without meaning. All computation is ontologically dependent on consciousness, simply said, and so computation cannot provide the foundation upon which consciousness rests. One might just as well attempt to explain the existence of the sun as the result of the warmth and brightness of summer days.
Review #2 was written on 2020-04-29 by Lindsey Gautreaux, who gave a rating of 4 stars.
If you like Dennett, then this is for you. However, being a fan, I found there wasn't much here that I hadn't already read. So it's more for the true Dennett heads. I skimmed most of the essays... but the ones I read fully were fun, as always.



