The Black Box

March 10, 2007 at 10:27 am (Uncategorized)

According to Ned Block, a black box is a system whose internal workings are unknown or irrelevant to current purposes. The computer model of the mind treats the mind as a system that is itself composed of interacting systems, which themselves may be composed of further interacting systems, and so on. The bottom-level primitive processors, the black boxes that cognitive science leaves unopened, are understood behavioristically: what they do (their input-output function) is in the domain of cognitive science, but how they do it is not (how they do it is in the domain of electronics or neurophysiology, etc.). Via the hierarchy of systems, cognitive science explains intelligence by reducing the capacities of an intelligent system to the interactions among the capacities of unintelligent systems, grounded in the bottom-level black boxes. But the model does not explain mental intentionality (a.k.a. aboutness) in this way since, according to Block, the bottom-level black boxes are themselves intentional systems.
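To make the hierarchy concrete, here is a minimal toy sketch in Python (my own illustration, not Block's): an "intelligent" capacity, multiplication, is reduced to interactions among dumber capacities (repeated addition), which bottom out in a primitive processor specified only by its input-output function.

    # Toy illustration only; the decomposition and names are mine, not Block's.
    def primitive_add(a, b):
        # Bottom-level black box: cognitive science cares only about the
        # input-output function; how it is realized (silicon, neurons, ...)
        # belongs to electronics or neurophysiology.
        return a + b

    def multiply(a, b):
        # Higher-level capacity, explained by interactions among adders.
        # (Assumes b is a non-negative integer; this is only a sketch.)
        total = 0
        for _ in range(b):
            total = primitive_add(total, a)
        return total

    assert multiply(6, 7) == 42

The point of the sketch is that multiply is "intelligent" only in virtue of how its unintelligent parts interact; Block's worry is that the analogous story for intentionality fails, because the bottom-level boxes are themselves intentional systems.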

14 Comments

  1. rabeldin said,

    soniarott wrote:
    …But the model does not explain mental intentionality (a.k.a. aboutness) in this way since, according to Block, the bottom-level black boxes are themselves intentional systems.

    I had heard that intentionality is an emergent property of the system composed of the lowest level black boxes. Is that the same idea or a different one?

  2. soniarott said,

    Yes, intentionality emerges. But the black boxes are supposed to be unintelligent (i.e. lacking intentionality). See the problem?

  3. faustus said,

    If the level of the black box is the operation of a particular module or location such as the fusiform face area (responsible for recognizing faces in the brain), then you can plausibly make the case that the black box is an intentional system and that its intentionality remains unexplained. This would still leave the upper levels of intentionality explained so long as we knew enough of what the relevant black boxes did, even if figuring out how they did it remained a task yet undone.

    If your black box is a very small collection of a few hundred neurons, its status as an intentional system is beginning to look a bit shaky. When your black box is an individual neuron, it's shakier still, and when it's a system of molecules like RNA, the concept is really starting to fray. Once you descend to the level of atoms, I can't see any case for intentionality remaining.

    My point is that if you see intentionality not as an intrinsic property but rather a macroscopic pattern of behavior which invites intentional interpretations and vocabulary, then this argument by Block is pretty harmless. He has a point, but not a particularly important one for cognitive scientists.
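    To make the "macroscopic pattern of behavior" idea concrete, here is a toy sketch in Python (the thermostat and its thresholds are my own hypothetical example, not anything of Block's):

        # Nothing inside this function is intrinsically "about" temperature,
        # yet the overall pattern of behavior invites intentional vocabulary
        # like "it wants to keep the room near 20 degrees."
        def thermostat_step(current_temp, target=20.0, band=0.5):
            if current_temp < target - band:
                return "heat_on"   # "it noticed the room is cold"
            if current_temp > target + band:
                return "heat_off"  # "it decided to stop heating"
            return "idle"

        assert thermostat_step(18.0) == "heat_on"
        assert thermostat_step(22.0) == "heat_off"

    On this reading, the intentional description is our interpretive overlay on the input-output pattern, not an intrinsic property of the mechanism.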

  4. no soul said,

    Well, this still seems like an interesting argument. It seems to respond directly to the counter-argument of "eliminating homunculi."

    Faustus, I'll state this up front: I'm not necessarily challenging you. My position, I feel, is sincerely one of curiosity & intellectual honesty.

    The "black box" argument is simply the counterpart to the "eliminating homunculi" argument. Both arguments presuppose something, apparently a priori. "Black box" seems to presume that "black-box-ness" never disappears entirely; that is, whatever properties the original larger Black Box originally possessed, if they haven't been explained (i.e. "Un-Black-Box-ified"), then the smaller constituent parts of the first Black Box themselves remain "unresolved" in the form of yet more black boxes. I can't really say exactly "why" this seems intuitively satisfying to assert, but it does. In other words, Ned Block's "Black Box" is really John Searle's "Chinese Room" argument restated.

    "Eliminating Homunculi," on the other hand, apparently just as a priori presupposes that the simple act of dissecting & chopping up the initial largest Black Box will NOT produce merely more black boxes, but instead real, actual components which, apparently by simple virtue of being (material, physical, natural) components, actually will EXPLAIN the workings of the first large Black Box.

    (Of course, "Black Box" is a much more general term for any phenomenon or process which is not completely understood; while "Eliminating Homunculi" refers specifically to the act of eliminating a Homunculus, a "Little Man," i.e. a specific, 'central,' executive (Cartesian) Conscious Observer. But this Homunculus' inner operations are, initially, given as NOT being understood; hence it is a specific kind of Black Box. So to be more clear & concise, it might be helpful to call the "Black Box," instead, "The Black Box Homunculus.")

    Let me cut right to the point: Both arguments rest on fundamentally a priori presuppositions which seem, no matter what, merely to be a matter of intuitive preference. Thus, if probeman et al. are to be believed, and "intuition" is not to be trusted on its own, whenever it is used or relied upon without reference to empirical data... then in one way at least it seems very problematic to assert either argument as true and the other false.

    I have just been reading probeman's "Should Metaphysics Conform to Known Scientific Principles?" thread, primarily the posts debating the validity of intuitive insight & the utility of intuition (I was going to create a thread on that topic but finally decided the debate there was probably of greater quality than any thread I made could aspire to). The entire question of metaphysics & intuition seems importantly exemplified in the very question here: If the only "real" criterion for distinguishing which argument ("Black Box" or "Eliminating Homunculi") is true is intuition itself, does this not directly point to the important relevance of metaphysical orientation (and of course intuition) even in the hardcore materialistic scientist's most skeptical moments? (But I realize that probeman there did not actually overtly dismiss either metaphysics or intuition, but rather simply argued/asserted/implied that they should conform to "known scientific principles.")

    Ah! But Faustus & probeman et al. would probably counter: The evidence of the material processes that make up the brain & its activities is empirical data which supports one intuition over the other. The intuition that favors the "Eliminating Homunculi" argument is supported.

    But, again, it is impossible to escape raw intuition, IMO. Intuition is nonetheless required to "get over the hump," the "roadblock" to understanding exactly "why" we should accept that external materialistic structures & processes actually explain the inner workings of the first large Black Box, the Cartesian Homunculus.

    In one sense, this of course is not simply "raw intuition." It is in fact intuition guided by previously acquired empirical data and, most importantly, well-accepted (but in most cases "received") rigorous scientific theory. It starts, of course, from the well-founded but nonetheless presuppositional assumption that consciousness IS brain activity, which is in turn amenable to scientific study & the like (an assumption belonging to Metaphysical Materialism, a legitimate philosophical school).

    On the other side, what about the intuitions which inform the "Black Box Homunculus-cum-Chinese-Room" argument? What are those intuitions? Are they necessarily anti-materialist? IMO, no, BUT: of the people who seem to intuit that Searle & the "Black Box Homunculus" are correct, I honestly suspect most are at least implicitly anti-materialist in at least this one case.

    What other forms might these "Black Box Homunculus" intuitions take? IMO, even if the intuition is not anti-materialist, its overriding feature is the unshakeable feeling that "external," "non-felt," cold, non-sentient material objects simply cannot explain, cannot account for, cannot compose the feeling, sentient, "internal" object that is "me."

    If such an intuition is sworn NOT to be anti-materialist BUT is nonetheless unwavering in its insistence that "normal" material objects/processes cannot constitute "me," what must that intuition comprise? I think such an intuition would either have to adopt some type of "Panpsychism" or "Neutral Monism," likely with the implausible assumption that all matter & physical processes in the universe are somehow rudimentarily "sentient"... Or the intuition would have to be based upon the hypothesis that certain forms of non-sentient matter (or other physical processes) are simply capable of attaining certain extremely complex configurations in which certain behaviors ultimately emerge which are completely mind-blowing and, yes, counter-intuitive. That is, consciousness itself simply arises — emerges — as a state of matter which occurs in certain types of configurations of matter.

    This last intuition from the “Black Box Homunculus” camp, which is neither anti-materialism nor “Neutral Monism,” happily also conforms with one of the possible consequences of the “Eliminating Homunculi” camp. In other words, it would seem intuitively sound to surmise/guess that “Black Box Homunculus” AND “Eliminating Homunculi” can be miraculously reconciled simply by hypothesizing/asserting that consciousness is (“simply”) an amazingly unique, emergent state of certain very complex configurations of matter.

    ———————————————————————–

    (Lastly, to Faustus: You say that as we scrutinize the brain ever more minutely, the Homunculus seems to "shrink" until "homunculus-ness" dissects into the tiniest "atoms" and then evaporates into nothingness, e.g.

    a brain module [the fusiform face area] -> a few hundred neurons -> an individual neuron -> RNA molecules -> individual atoms -> etc.

    You state that the black box’s status of intentionality grows “shakier” with each subsequent division into finer elemental parts.

    I think you are probably correct, but what irks me about this type of explanation is that it seems possible to explain such things from an "intuitive," a priori, reasoned basis, rather than solely from a posteriori, descriptive explanations. The latter so often confuse people largely because, IMO, so little deference is given to intuitively graspable explanations, partly out of snide disrespect for "intuition," and partly because the empirical, descriptivist & analytic training & methods used aren't easily amenable to such intuitive explanations.)

  5. monroe said,

    Consciousness is merely a pattern of behavior? Then it would be analytically true that “zombies” are impossible. But it’s not…

  6. soniarott said,

    I seem to have misconstrued Block’s point. Block makes a distinction between intelligence and intentionality. Here’s what he wrote to me:
    Ned Block wrote:

    what I am saying is that the computer model of the mind is geared towards explaining intelligence rather than intentionality. The distinction is explained in this paper:
    http://www.nyu.edu/gsas/dept/philo/faculty/block/

  7. faustus said,

    I might be missing something, but I don't, in the end, see that there are really two different doctrines opposed to one another (i.e., Black Box versus Eliminating Homunculi as described by NoSoul). Insofar as I'm familiar with Block's work (not very), I do have some disagreements, but despite the fact that soniarott may have misrepresented his argument, the version handed down to us makes a certain amount of sense.

    The part where Block has a point reminds me of the arrogant practice of some cognitive scientists in the '60s and '70s, who thought they could get away with putting together a science of mind without consulting what neurologists were finding out. It wasn't exactly armchair science, since their boxes and flowcharts were based on some empirical findings discoverable without brain science. But it was a big mistake, and it was partly responsible for a backlash against computational models by neurologists and fans of connectionist models. Even today, some neurologists seem to have a big chip on their shoulders over this, and I think they are justified. By studying how the brain actually works, you may find not only that the boundaries and roles of your boxes are violated and the flows of processes in and out of them more tangled than you first thought... you might find there's no reason to even think they exist any more. Back to square one.

    "Boxology" is still around, but I think the folks who practice it now have a greater appreciation of the fact that empirical findings from brain science can and should trump their little black boxes, and also that the very justification of assigning roles to boxes might be an example of a priori theorizing rather than good science. In the speech production chapter of the Consciousness Explained discussion, Dennett plays around with a black box model and uncovers the latter sort of limitation.

    Now, to circle back to what I think of as the faux opposition, Block's argument as soniarott originally described it is just an example of the sort of thing an "eliminate the homunculi" person would point out. The only way I think you can turn it into a problem for computational models is if you define intentionality in such a way as to make sure it isn't a matter of interpretation, that it is (by way of Searle) somehow intrinsic to a person or system. And once you've done that, you've simply manufactured a problem out of thin air, guaranteeing the result by a wave of a definitional wand. Best to understand intentionality some other way.

    As long as you have systems that are as yet unexplained (in this case, systems which it is useful to think of as having intentionality or other representational capacities), you are still leaving an explanatory task undone. That might be a task you hand off to someone who specializes in another discipline, but the hand-off has to occur eventually.

    Now, your theory, insofar as it addresses phenomena of interest to you and your field of study, might be "complete" in the sense that all the mysteries you wanted solved are indeed solved to the degree of tolerance you find acceptable... but leaving black boxes opens up the real possibility that you've just been fooling yourself. I guess it depends on the individual cases. If you are trying to discover how networks of neurons in one particular area respond to visual inputs, you might be justified in having lots of microbiological black boxes in your model, since "surely" (you tell yourself) we understand enough about that to know there aren't any fatal surprises likely to rear their ugly heads.

    And it can also work the other way, now that I think about it. In fact, it mostly does! For instance, the impression I get is that insofar as neurons as cells are understood, we know a lot more about how they and their own internal black boxes work than we know about how their activities create intelligence, intentionality and consciousness at more macroscopic scales. As far as black boxes go, neurons aren't very black at all: lots of stuff is already filled in. So here we have a wealth of detail at the bottom level, but lots and lots of very black boxes further "up" whose nature we have yet to comprehend. This sort of thing is what motivates "bottom-up" theorists, of course. What the top-down theorists have going in their favor is that eventually we have to figure out a way to map what our science says onto what our ordinary language about mental states says. So with hope, they will eventually meet somewhere in the middle.

    Now, about the intuition issue, I'm not sure what to say. I certainly have the impression that Searle (and to some degree Block, perhaps) is desperate to reify intuitions against any attempt by science or philosophy to dethrone them. This comes out particularly well in Searle's notion that intentionality is somehow "intrinsic" rather than a matter of interpretation or derivation. I can't see how the view that intentionality is intrinsic could possibly be compatible with any sort of scientific approach, especially when you consider that evolution simply has to have built up intentionality and everything else in our mental lives from elements that completely lack them, step by minute step. That's enough as far as I'm concerned to show the concept is fatally flawed.

    At any rate, intuitions are certainly a necessary starting point. When they conflict, as they are bound to do, then I can think of two ways to resolve the problem. Think of a way in which they might yield falsifiable or confirming predictions, and do an experiment. Or if they are simply conceptual and involve differences of vocabulary, then ditch the intuition that encourages you to wallow in mystery.

  8. monroe said,

    Say intentionality is a matter of interpretation. But what is interpretation? Interpretation is an intentional activity. Let’s say you interpret a system that seems suited to intentional and representational ascriptions. This interpretation is a representation of what the system is representing. The notion of “it’s all a matter of interpretation” suggests that under different interpretations, the system would be seen as representing differently (with different senses and referents). But following the same reasoning, your own interpretation is still all a matter of interpretation. So it is indeterminate, under this theory. BUT IT’S NOT, OBVIOUSLY. When you have a representation, you know what it is about, by your own assignment of sense, reference and interpretation.

    In conclusion, mental representations, unlike language, must somehow assign their own interpretations. Searle is right.

  9. no soul said,

    Monroe wrote:
    Consciousness is merely a pattern of behavior? Then it would be analytically true that “zombies” are impossible. But it’s not…

    Why do you say this? Where are you getting that "consciousness is merely a pattern of behavior"? That is so vague & overgeneralized, IMO, as to be quite useless (or useful only in terms of, as Faustus says, creating an undefeatable black box with a wave of a conceptual wand).

    BTW, since you bring it up, Monroe, could you or someone else please explain the Zombie argument to me succinctly? Why is it "analytically possible"? How? In what way?

  10. no soul said,

    Here's how I perceive the "Zombie" argument in a nutshell: I, the conscious mind & deeply profound analytic mathematician-philosopher, may, for all I can analytically prove, be a Solipsistic Mind, that is, the only conscious mind in the universe. Or, short of actual Solipsism, it might still be the case that I am indeed the only conscious & sentient mind in the universe, even if the rest of the universe as I perceive it actually exists.

    Therefore, it could be the case that no other humans I perceive, nor animals nor insects nor plants etc., are capable of conceptual thinking, nor of perceiving anything sentiently, nor of any emotions at all.

    That is, it could be the case that all external (apparently-living) creatures I perceive simply lack not only the rational mind, but also the emotional-sensual faculties of subjective experience, that I possess.

    If this is essentially the Zombie argument, then how is it substantively different from Descartes' own black hole of Solipsism, in which he couldn't "prove" anything other than that he himself could think & perceive & "therefore" (allegedly) existed?

  11. monroe said,

    What the zombie argument actually says is that it is logically possible for there to be a material human body (in a possible world) which operates exactly like any normal human, has the same kind of input/output patterns and adaptive capabilities, and the brain activity is the same physically, but there is no consciousness. Now, if consciousness were merely a kind of input/output pattern, then this would not be possible. But we recognize that the scenario is possible because consciousness is something different from any input/output pattern.

    "Brain" and "mind" are different concepts, and many viewpoints throughout history have seen them as separate, sometimes even unrelated (e.g. Egyptian religion, Aristotle). The only way to explain this is that "brain" and "mind" refer to different things. How does one make the identity claim?

  12. no soul said,

    Monroe wrote:
    What the zombie argument actually says is that it is logically possible for there to be a material human body (in a possible world) which operates exactly like any normal human, has the same kind of input/output patterns and adaptive capabilities, and the brain activity is the same physically, but there is no consciousness.

    You have simply restated what I already wrote. I don’t think you understand what I said. Chalmers’ Zombie Argument is really simply a special case of Descartes’ solipsism. The only difference is that the Zombie argument assumes that the external world is real & objectively exists — but then the Zombie argument retains Cartesian Radical Doubt in its total skepticism towards the reality of “consciousness” in other human beings (as well as other animals & other living sentient creatures).

    I have the feeling I could keep restating that till I’m blue in the face but you will likely never understand that. Is English not your first language?

    Monroe wrote:
    Now, if consciousness were merely a kind of input/output pattern, then this would not be possible. But we recognize that the scenario is possible because consciousness is something different from any input/output pattern.

    That is completely unsubstantiated, merely an assertion. How can you prove that consciousness is "something different from input/output patterns"? At the very most, all you can honestly assume is an agnostic viewpoint on the subject. You can't simply presume that "consciousness" cannot possibly arise from (as you call it) "input/output processes."

    What about the theory that consciousness is simply an exotic state of matter, sort of the way solidity is a state of matter (e.g. ice & water), etc.?

    I guess you don't grasp the implications of the three theories I outlined in my first post: Anti-materialism, Panpsychism, and Emergence. Emergence by far most easily reconciles with modern scientific principles of Materialism. Anti-materialism obviously violates Materialism. Panpsychism calls for a completely alien understanding of nature which no serious scientific school today even remotely holds as tenable.

    Can you tell me if there is some other possible alternative to these 3 broadest ways of explaining “what” consciousness is — Anti-materialism, Panpsychism, Emergence?

    Monroe wrote:
    "Brain" and "mind" are different concepts, and many viewpoints throughout history have seen them as separate, sometimes even unrelated (e.g. Egyptian religion, Aristotle). The only way to explain this is that "brain" and "mind" refer to different things. How does one make the identity claim?

    What? Because other cultures or people have held beliefs that certain concepts are different, they must refer to different things? So anyone’s random arbitrary intuition about anything is to be taken seriously as completely valid & perfectly descriptive of objective reality?

    I can’t take you seriously anymore I’m afraid.

  13. monroe said,

    NoSoul wrote:
    That is completely unsubstantiated, merely an assertion. How can you prove that consciousness is "something different from input/output patterns"? At the very most, all you can honestly assume is an agnostic viewpoint on the subject. You can't simply presume that "consciousness" cannot possibly arise from (as you call it) "input/output processes."

    Maybe it can "arise" from such. Sure, I believe that consciousness is caused by the brain; that is, I believe in mental-physical interactions. But I believe these causal laws to be just contingent laws of nature connecting two phenomena. The first-person view which is consciousness is something different from the behavior that is observable from the outside. We can observe the way people react to stimuli and observe their brain processes to some extent, but we cannot see their perceptions and thoughts. From the outside, we can observe the activity of the eye and what happens in the brain, but we cannot see the conscious images this process creates.

    NoSoul wrote:
    What about the theory that consciousness is simply an exotic state of matter, sort of the way solidity is a state of matter (e.g. ice & water), etc.?

    Perhaps, but I think it would be an independent, irreducible property of the matter. This is unlike solidity, because solidity can be reduced to a pattern of molecular motions and tendencies.

    NoSoul wrote:
    Can you tell me if there is some other possible alternative to these 3 broadest ways of explaining "what" consciousness is — Anti-materialism, Panpsychism, Emergence?

    No. Can you tell me what emergence is?

    NoSoul wrote:
    What? Because other cultures or people have held beliefs that certain concepts are different, they must refer to different things? So anyone's random arbitrary intuition about anything is to be taken seriously as completely valid & perfectly descriptive of objective reality? I can't take you seriously anymore I'm afraid.

    When people establish a word for mind or consciousness on the one hand, and a word for brain on the other, and then believe that these are different things, the senses of the words must have been different. Otherwise people wouldn't have thought they were different things. (Same sense implies same referent.) So the senses of the words either pick out different things, or different properties of one thing.

    People often bring up the morning star / evening star example in this discussion. This is a case where two senses pick out the same thing, and people say maybe it is like that for consciousness and certain brain processes. But senses pick out things via properties. The evening star is that which is first to be seen in the evening; the morning star is the last to be seen in the morning. These are different predicates, different properties. It just happens that the same object fits both. Now, if somehow (let's just assume it, to give you a head start) "consciousness" and "brain processes" are different ways of referring to the same thing, they must pick out that same thing by different properties. In that case, there must be on the one hand conscious properties and on the other hand brain properties.

    Now, you could say that this is like the case of solidity. The properties of solids are understood in different senses than complex arrangements of molecules with certain tendencies. Different senses are invoked here. However, one is logically reducible to the other. But how can consciousness be logically reducible to the brain? I think the burden is on the materialists for that one.
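    To put the sense/reference machinery in concrete terms, here is a toy sketch in Python (my own illustration; modeling a sense as a predicate is an assumption of the example, not a claim about Frege's text):

        # A sense is modeled as a predicate (a property) that picks out a referent.
        heavenly_bodies = [
            {"name": "Venus", "first_seen_in_evening": True,  "last_seen_in_morning": True},
            {"name": "Mars",  "first_seen_in_evening": False, "last_seen_in_morning": False},
        ]

        def referent(sense):
            # A sense determines its referent via the property it expresses.
            return next(body["name"] for body in heavenly_bodies if sense(body))

        morning_star = referent(lambda b: b["last_seen_in_morning"])   # one sense
        evening_star = referent(lambda b: b["first_seen_in_evening"])  # a different sense

        # Different senses, different properties, yet the same referent.
        assert morning_star == evening_star == "Venus"

    The question for the materialist, on this picture, is what the "conscious properties" are by which the sense of "consciousness" picks out the same thing as the sense of "brain processes."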

  14. arthur s said,

    sonia:
    Don't the black boxes instantiate a program? Programs have syntax, at least. And in order to understand a program AS a program, don't we have to think of it as pointing at the world, that is, as having a job to do?
    Maybe we can think of it as whistling a song that would go "I am doing work right now, right now I'm doing work."
    I mean, we only "have a job to do" insofar as that phrase is applicable. And I apply that phrase to my computer — or, if I haven't, I could. Then I could substitute, salva veritate, 'have a job to do' in that phrase when, in normal circumstances, rational people would be inclined to attribute the normal human phrase 'have a job to do' to me. That is, the content of the phrases is, or seems, the same. So likewise with the little black boxes: the program provides the content.
    Maybe you don't like 'have a job to do' as our phrase — pick whatever. I think it is "about" in the relevant way.
    Erm: this is just a first pass.
