NCSU Philosophy Club

Representation and Consciousness


Post by Tim on Thu Oct 04, 2012 9:36 pm

Disclaimer: 'Consciousness' and 'experience' are used interchangeably, as are 'intentionality' and 'representation.'

Consciousness poses a problem for those who think the world is entirely "physical" or "material", as defined by the branches of science. In a narrow sense, consciousness is what it's like to be a certain organism or to have its subjective point of view. Science tries to describe things objectively, and so the realm of subjectivity seems to be out of its reach.

In humans, consciousness obviously has a basis in the brain. But to say it is literally identical with any particular part of the brain is problematic: Many organisms seem to be conscious without needing the parts of the brain that we have.

Beyond consciousness, all mental states are thought to be multiply realizable, so what matters is what they do, not what they're made of. A knife is defined functionally, so knives can be made out of metal, plastic, wood, etc. If this is true about mental states, then computers can have minds and perhaps our minds are quite literally computers because they're defined in terms of abstractions, like math and logic. Moreover, if mental states were identical to brain states, then it's hard to see how any two people could share the same belief.

This brings up another problem in philosophy of mind: How do our beliefs refer to things in the world? How are they about things? How do you define meaning in a non-circular way? This is the problem of Intentionality or Mental Representation. Some say that I can believe things about objects because of my causal interaction with those objects. Intentionality itself is thought to be kind of a necessary one-to-one correspondence between two things, such as the number of rings on a tree representing how old it is, or a set of computer circuits corresponding to a set of 0s and 1s, which in turn correspond to higher-level representations such as programming languages and so forth. This last comparison is especially apt because neurons are thought to be at least functionally like logic gates in circuitry, aside from the huge differences between how brains and computers process information. Other popular theories of intentionality, known as psychosemantics, define intentional states in terms of their evolutionary histories or functional roles...
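
To make the logic-gate comparison concrete, here's a rough sketch in Python of a McCulloch-Pitts-style threshold unit; the weights and thresholds are just made up for illustration, not a claim about real neurons:

# A McCulloch-Pitts-style threshold unit: it "fires" (returns 1) when the
# weighted sum of its inputs reaches the threshold, and stays silent (0) otherwise.
def threshold_neuron(inputs, weights, threshold):
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

# With suitable (made-up) weights and thresholds, the same unit behaves
# like familiar logic gates.
def and_gate(a, b):
    return threshold_neuron([a, b], [1, 1], 2)

def or_gate(a, b):
    return threshold_neuron([a, b], [1, 1], 1)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", and_gate(a, b), "OR:", or_gate(a, b))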

In short: If intentionality/representation can be explained purely in terms of physical processes, mostly as information, then maybe we can explain consciousness by saying it's just a kind of nonconceptual intentionality. My beliefs are about objects because they're determined by them, and so my experiences are also about objects because they're determined by them.

I'm willing to grant that intentionality can be reduced, based on the examples above. And I think that our experiences always represent things, so they are intentional. But even granting those two points, I don't think consciousness is just representation. I will explain why I think so, but first I want to address some preliminary points:

Some people, like John Searle, say that intentionality requires consciousness: Any "meaning" we find on a computer is given by our experiences and our interpretations. If this is true, then reducing consciousness to representation is circular. Of course, computers aren't conscious but seem to be genuine intentional or representational systems. But Chris made a good point that we can never get out of our experience, so finding unexperienced meaning is not only a contradiction but epistemically impossible.

But aside from that, I think that humans can represent things without being conscious of them: Most of our beliefs and memories seem to affect our behavior in virtue of being representations, and yet we don't need to experience them for this to happen.

Chris objects that these aren't really beliefs:
If beliefs are mental representations that I need not be conscious of, it seems that there is an equivocation between consciousness and 'conscious.' Consciousness is, as I indicated above, "what it is like," but when discussing beliefs you seem to be verbalizing the word. This is common, but it also seems to include the vernacular meaning of 'conscious' which is essentially, 'awareness.' The sentence in which this is uttered reads thus, "beliefs seem to be mental representations and I don't need to be aware of them." This seems to be the difference in what we were discussing at the meeting among consciousness, subconsciousness, and unconsciousness and whether such distinctions were needed. If this is the case, then it seems odd to say that things in the unconscious are even beliefs. If anything, they seem like tacit inculcated structures which we cannot access. A belief places us in relation to a proposition which has semantic content, but unconscious beliefs (the lowest level where there is no access) seem to lack such content.
To this I reply:
1. What is a belief without content? If there's something there, then what is it and how does it influence our behavior?
2. Many of our "unconscious beliefs" may be inaccessible, but many of them also seem accessible. Isn't that what memory is?
3. How can we become conscious of old beliefs if they aren't stored somewhere for retrieval, in a storage system that can exist without us constantly experiencing it?

I defend the view that representation doesn't require consciousness not only because I think this vindicates what we mean when we speak of the "unconscious" or "subconscious", but also because the dominant physicalist view of consciousness as representation can't even get off the ground without this.

Re: Representation and Consciousness

Post by Tim on Sat Oct 06, 2012 1:47 am

Could it be that unconscious beliefs and memories are represented in our bodies by processes (or parts) that are themselves conscious but aren't directed towards our "mind's eye"?

Related: Galen Strawson - Real intentionality V.2: Why intentionality entails consciousness



Re: Representation and Consciousness

Post by Satirical on Sat Oct 06, 2012 1:47 am

I will take this paragraph by paragraph labeled sequentially (for appropriate reference):

(a) I agree, science is presented with a definitional problem. Phenomenal experience, if not reducible to physical terms, will either require an extension of what it means to be science (possibly with recourse to leniency in what counts as a physical model), or will eliminate consciousness as a substantive ontological entity.

(b) I agree, identity theory runs multiple risks including sorites objections (of which I am quite fond) and issues with Leibniz's Law (it's not just a suggestion, it's the law). I would take issue with the statement that many organisms have consciousness, since I cannot possibly know this. This is no mere 'other-minds' problem, though this is a significant point, but one of selective terminology. If Nagel is right, and we can't know what it is like to be a bat, then we can't know that it is like anything to be a bat. Of course, this is ancillary, and so I will not continue the point here. I include this, however, as a thematic prelude to the remainder of my response.

(c) I take issue with several points in this paragraph, notably:
If this is true about mental states, then computers can have minds and perhaps our minds are quite literally computers because they're defined in terms of abstractions, like math and logic.
Computers process symbols; they shift and order syntactical structures. No matter how complex a mathematical or computational abstraction, any function capable of performing this feat is Turing computable (Church's Thesis), and so is doing nothing more than syntactic maneuvering by definition. When coupled with the Chinese Room argument against Strong A.I., the thesis that machines can think and have minds, this seems to preclude the proposition that brains are quite literally computers. This argument does not imply that a human brain could not operate computationally, we do it all of the time (!), but what it does rule out is that this is a sufficient description of the human brain. What additional requirements are necessary, I do not know. The paragraph continues:
Moreover, if mental states were identical to brain states, then it's hard to see how any two people could share the same belief.
This seems like an odd thing to say. Who says that two people share the same belief? It might seem commonsensical, but it seems much more plausible to say that no two people can have the same beliefs. Later, Tim seems to argue that objects in the world cause our beliefs (see (e) below). If this is true, then the simple fact that no two people ever have identical experiences of objects seems to entail that no two people ever have identical beliefs. This desk before me is a certain shade of color, which is experienced differently by others. Moreover, if they were in my place they would not have the same experience unless they had identical sensory mechanisms (whatever that means). But note that this voids the hypothesis anyway, since two people could not share the same belief if one is replaced by another.

(d) I take issue once again, but for novel reasons (lest the reader gets bored):
Intentionality itself is thought to be kind of a necessary one-to-one correspondence between two things, such as the number of rings on a tree representing how old it is...
Necessary one-to-one correspondences are hard to come by. Rings on a tree emerge from environmental and seasonal constraints, and this seems to be an adequate representation. Saying that they are necessary one-to-one implies that this representation is unique, but this cannot possibly be correct. An equally valid alternative representative model could adequately describe the rings on a tree (perhaps, every two rings equals two autumns, or something similar). I think I must be misunderstanding you here though. Please correct me, I despise being incorrect...More to the point of my disagreement, however, is the following:
...or a set of computer circuits corresponding to a set of 0s and 1s, which in turn correspond to higher-level representations such as programming languages and so forth. This last comparison is especially apt because neurons are thought to be at least functionally like logic gates in circuitry, aside from the huge differences between how brains and computers process information.
Some do argue that neurons can be represented with logic gates, but unfortunately they are 'not the neurons you are looking for.' Motor neurons, comparatively simple cells, can be simulated with logic gates, but neurons in the brain, specifically neurons in the cortex, are much more complex. The complexity is not a mere computational matter, but a difference in kind. Cortical neurons are dynamic: they respond to environmental changes and to graded electrical variation, and they alter shape, firing speed, and extent based on the needs of the living organism. Logic gates are capable of, and quite good at, modeling and imitating static behavior of increasing complexity, but they lack the dynamism of cortical neurons. This interesting and idiosyncratic biochemist has more to say about the issue, though.
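
To put a toy contrast behind this (purely illustrative, not a serious neuron model; the leak and threshold numbers are invented): a logic gate's output is fixed by its current inputs, while even a crude stateful "neuron" responds differently to the same stimulus depending on its recent history.

# Static gate: the output is a fixed function of the current inputs alone.
def and_gate(a, b):
    return 1 if (a and b) else 0

# Crude leaky integrate-and-fire "neuron": it accumulates input, leaks charge
# over time, and fires only when its internal state crosses a threshold, so
# the very same stimulus can yield different outputs at different moments.
class LeakyNeuron:
    def __init__(self, threshold=1.0, leak=0.5):
        self.potential = 0.0
        self.threshold = threshold
        self.leak = leak

    def step(self, stimulus):
        self.potential = self.potential * self.leak + stimulus
        if self.potential >= self.threshold:
            self.potential = 0.0  # reset after firing
            return 1
        return 0

n = LeakyNeuron()
print([n.step(0.6) for _ in range(6)])     # same stimulus, history-dependent output
print([and_gate(1, 1) for _ in range(6)])  # same inputs, same output every time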

(e) This is where the object-belief statement is made (mentioned above).

(f) Basic definition of intentionality, nothing to take umbrage with.

(g) Oh Searle! His arguments for the Connection Thesis are, to be sure, weak. I am not seeing how the argument is circular though. Perhaps you could give a deeper explanation of why requiring consciousness for intentionality entails the premise of consciousness? Perhaps I am having a 'dense' moment... Still, I can see why Searle would be motivated to such a thesis based on my arguments against being able to get out of our own experience. I do not think this is a reason to endorse Searle's view though, since I prefer proofs by construction. In any event, Searle seems indefensible, and methinks Tim's view is correct.

(h) Tim argues that we can represent things without being conscious of them. I take this to mean that we can have intentionality about things without having phenomenal experience of them (taken broadly to include mental content), or we can have intentionality without experience. I disagree. Intentionality is the capacity of the mind to be directed towards objects (again, painting with a broad brush), but what does such direction entail without an object? Brentano famously posited that consciousness is always consciousness of something, and this capacity is intentionality. Intentional contents, like beliefs, are such 'objects,' but if someone were to have an intentional belief without consciousness, then how is this a belief? Beliefs have semantic content (truth value, reference, truth conditions, among other things), but how can an intentional belief have truth conditions? This might seem like an odd thing to say. Certainly, intentional beliefs, regardless of whether the individual having them is conscious of them, are true or false. I would agree. If an individual is influenced by childhood abuse, which reflects itself unknowingly in his or her life later, then this is surely true or false. It is true or false to, maybe, a psychotherapist, or (Scientologist!) who attempts to 'unearth' these factoids, but it is not true to the individual for whom this proposition holds! The issue seems to be one of access, which I will turn to in my attempted answers to Tim's questions.

Question 1a) A belief without content is not a belief. Perhaps it is a structure underlying later beliefs? This point should become clear in later questions.
Question 1b) Inaccessible 'beliefs' seem to structure our later actions. I love Norse literature, but Alicia prefers Indian literature (which is enjoyable, but not a preference for me). We can rationalize our preferences, but this always seems to leave out something important. For instance, I prefer the Edda structure, while she prefers the purple prose of the Mahabharata. We can justify these beliefs arguably well, but when it comes down to it, the preference is always a choice made for reasons I do not fully comprehend. Aesthetic causes? Perhaps, but it seems reasonable to suggest that some experience I had early in my life, which is now inaccessible to me, informs my decisions on certain matters (like aesthetics, or taste, or political opinion). Tim has pointed out implicitly that I do not have a mechanism for this interaction between the inaccessible and the accessible or experience-laden mental life. I admit, I do not have a satisfactory response, but I will attempt to formulate something! I would be inclined, for example, to say that experiences at an early age, which are so far removed from the present day that I cannot recall them from memory, were at some point relevant to my mind and influenced a series of habitual patterns of action which I became accustomed to over time. This is a weak point, however, and will require more work...
2) I am not sure what accessible beliefs as memory has to do with my argument. Memory is the process of retrieving information/beliefs, but I am arguing that those 'beliefs' which are inaccessible to memory are not 'beliefs,' since they lack semantic content. I think there may be a bit of confusion over the terminology. "Unconscious beliefs" does not mean the same thing as a 'belief' in the unconscious. The former seems to imply a lack of awareness, whereas the latter implies a lack of accessibility, and consequently, awareness.
3) I would hope that accessible beliefs could be accessed by a healthy individual. When I play trivia, I am often able to correctly answer questions, and usually able to recall where I learned the information. The information, up to that retrieval, was not in my immediate awareness until I retrieved it through sheer force of will. I am not clear where I indicated that storage of information in this sense was an issue. I did not intend that. I did intend to remark that beliefs in the unconscious, an inaccessible region, cannot be retrieved, and yet influence us, at least causally, in many ways. Hence the distinction between conscious (immediate awareness), subconscious (retrievable memory), and unconscious (irretrievable due to lack of access).

Whew! Come at me bro!

Re: Representation and Consciousness

Post by Tim on Sat Oct 06, 2012 3:31 pm

Thanks so much for taking the time to respond. In my replies I'll include a lot of background information, which you're probably already aware of, for the benefit of other readers. I'll leave the quotes out for the sake of limiting space and scrolling time. Listed in roughly descending order of satisfaction:

(b) Either Other Minds or Solipsism
If we can't know what it's like to be another organism, then doesn't that entail that there is something it's like to be another organism? The only other option seems to be that nothing else is conscious besides you, so of course we can't know otherwise. But I reject that because I know that I'm conscious Mwuahahahaha. I take the problem of other minds and "what it's like" to support the view that consciousness isn't physical or at least objectively knowable, not solipsism or even idealism.

(c) Strong AI
If Strong AI is false, then by definition, computers can't think. But if thinking just is syntactic manipulation, then I don't see why Strong AI couldn't be true. And if computers could become conscious (see (b)), then it seems it has to be true. You won't find meanings by looking closely at a brain. Likewise, you won't find algorithms by looking at computer circuits. This seems to be the closest analogy we have, and the aspects of the mind that are algorithmic seem best explained by computation. If mentality is multiply realizable, then describing or replicating a human brain isn't a necessary goal for describing a mind. But then again it all depends on the definition of 'mind.'

Identical Mental States
This is a very interesting topic and my comment about sharing beliefs was way too quick. I think you're mostly right about all of this. There's a sense in which two things can't be truly identical unless they're literally the same thing - the same token. So that alone would preclude people from sharing beliefs. Yet there's another sense in which we obviously share beliefs. What are we sharing when we communicate or believe "the same thing"? Need to think a lot more about this.
Related: Max Black - The Identity of Indiscernibles
Max Black used Frege's Puzzle to object to Mind-Brain Identity. Ned Block has a long, in-depth paper devoted to refuting this objection.

(d) Natural Representation
In the case of alternative interpretations being available, I think this is where teleological psychosemantics (teleosemantics) comes in. People like William Lycan and Fred Dretske will say that what determines a necessary representation is its history of purposes (in the case of man-made symbols) and evolutionary adaptations (in the case of traits). Others like Jerry Fodor think that all that matters is having the right causal connection between a symbol and what it symbolizes. Honestly I'm not as familiar with this issue. I'm taking for granted that intentionality can be naturalized, and still aiming to show that consciousness can't be (which I haven't discussed yet).
Related: Teleosemantics & Causal Psychosemantics

Connectionism and Dynamic Systems
The brain is vastly more complex than any computer we've built and neural networks don't process information linearly like circuits do. But my takeaway from all of this, given multiple realizability, is that these are just two different ways of implementing computation. The only thing that can explain our ability to produce and connect a seemingly infinite amount of beliefs is computation - where the key idea is that computation is defined abstractly. I think this all goes back to Chomsky's arguments for universal grammar, and, on the computation side, Alan Turing's formal definitions of computation.
Related: Computationalism vs Connectionism

(g) The Connection Thesis
Searle's argument isn't circular. But if he's right about grounding all intentionality in consciousness (the "Connection Thesis"), then any attempts by Representationalists to ground consciousness in (non-experiential) intentionality will lead to circularity. By the way, isn't the Connection Thesis his basis for rejecting Strong AI? Or is it Type-Identity Theory? I don't see what else it could be, so if neither are true, then Strong AI seems to be true.

I think what's lurking in the background is whether or not consciousness and intentionality are one and the same thing... so neither is more fundamental. That would avoid circularity, or whatever you wanna call it. Perhaps machine consciousness is possible only if its parts are conscious (in addition to them functioning together the right way). I feel sympathetic to that even though it seems like a part/whole fallacy. Maybe Leibniz would have equated (first-order) consciousness with intentionality. Again, though, I'm trying to deny grounding consciousness in intentionality even while accepting that intentionality can be naturalized. But perhaps the panpsychist must be committed to some form of the Connection Thesis?... I remain puzzled.

(h) Unconscious vs Subconscious
Neither the "unconscious" (inaccessible) nor the "subconscious" (accessible) states seem to involve consciousness in the first-order sense of the word, and consequently not in the higher-order sense either. The main question is: does either of these states represent anything?

The literature about higher-order mental states is very, very confusing. I'll just say how I'd put things.
A conscious belief (in the higher-order sense) is a belief you're aware of. It involves two different levels of intentional objects: the intentional object of the higher-order awareness is the belief itself, while the intentional object of the first-order belief state is its semantic content.

I can't see how accessibility determines whether or not mental states (beliefs, memories, whatever) can have semantic content even when we aren't aware of them.

I guess the clearest way of putting it is... what is a memory (unconscious or subconscious) without content? If it can't have content without being the object of higher-order awareness, then that seems to imply the Connection Thesis. Well, OK. But how do memories exist without us being aware of them?
Maybe I'm conflating beliefs with memories, and representation with information.

In continuance of the best debate this week ;)

Post by Satirical on Sun Oct 07, 2012 2:00 pm

First, I would have written a shorter response but I didn't have the time. All kidding aside, I apologize if some of the responses are superficial; I was a little busy today, but I didn't want to cede the field by default...

(b) Either Other Minds or Solipsism
You have a sharp eye. I was hoping to leave out solipsism due to the complexity (simplicity?) of the issue, but you are correct, there does seem to be no way around this consequence. I do not think that solipsism derives specifically from my arguments though, but from Nagel's 'what it's like' characterization of consciousness. Saying that there is something it is like to be a bat, that we cannot know, seems to postulate some entity, inaccessible, which, for some reason, must be there. Why must it? Well, by analogy of course! This is too strong though when applied to other species, since not only is 'consciousness' for said species inaccessible, but we cannot even refer to it. Anyway, the real issue is not Nagel-ness, but the consequences of my claim (which, admittedly, seems like it leaves us in a closed system of awareness). I do see the consequence of solipsism, but rely on Russell for support. As a human being, familiar with experience, consciousness, etc., I infer that you, an entity outside of my subjective experience, also have a similar 'what it's like' experience. We walk, talk, act, speak, and seem to think similarly. Now, of course, you could be a robot (lacking consciousness, though this is not my definition of 'robot'), but I think it is more probable that the squirrel sitting on my windowsill is a robot. Why more probable? There is no airtight, deductive reason (hence the argument via analogy), but if forced to make a judgment based on experience, it seems more likely that something other than a human being (some animal, alien, plant, etc.) has a consciousness, if at all, different from mine in kind. Moreover, since I cannot refer to this kind of consciousness (and alternatively, I think I'm referring to the same kind of consciousness when talking about other humans), it can't be what I mean when I say consciousness. Geez, this is really depressing... On a brighter note, I am sympathetic to the view that consciousness is not physical, and here is where I run into problems (aside from the obvious reference issue)...

(c) Strong AI
I think there may be confusion over the term 'think' here. On the face of it, I would disagree that "thinking just is syntactic manipulation," but not because it is not, for I am not sure, but more because it seems that consciousness is not merely syntactic manipulation (this does not exclude emergent properties, and it would be disingenuous for me to leave this out, but I think this is another issue entirely; I mention it for future reference). Searle argues in The Rediscovery of the Mind that Strong A.I. fails because linguistic symbols, used by thinking humans, are endowed with meaning from which consciousness is derived (I am a little fuzzy on the details, but I think he is too). This is in his Systems Reply, and honestly, it is not very convincing. What is convincing, however, is the takeaway that algorithms manipulate symbols, but there is no understanding in that manipulation by definition. The power of Turing machines is their breadth. They are impressively capable of solving and imitating algorithms or programs as long as certain initial conditions are met. Now, Strong A.I. argues that machines can think, and therefore brains would be analogous to hardware while minds would be analogous to software, but neither software nor hardware has understanding. This is because of the definition of computation spelled out by computability as symbol manipulation. Now, to say that a computer can be made to imitate thinking, Weak A.I., is another matter (see Eliza!). But Strong A.I. seems to be ruled out by definition. I think the issue of the definition of 'mind' trades on the distinction between Strong and Weak programs here. I actually think that the Weak program is definitely doable...
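
Since Eliza came up, here is roughly the sort of thing I mean (a toy sketch; the patterns and canned replies are invented for illustration): a handful of purely syntactic substitution rules can keep up an imitation of conversation with no understanding anywhere in the loop.

import re

# A toy ELIZA-style responder: purely syntactic pattern matching and
# substitution on the input string, with no grasp of what any word means.
rules = [
    (r"i am (.*)", "Why do you say you are {0}?"),
    (r"i believe (.*)", "What leads you to believe {0}?"),
    (r".*consciousness.*", "Does consciousness trouble you?"),
]

def respond(text):
    for pattern, template in rules:
        match = re.match(pattern, text.lower())
        if match:
            return template.format(*match.groups())
    return "Tell me more."

print(respond("I am worried that I'm just a Turing machine"))
print(respond("I believe Big Ben is in London"))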

Identical Mental States
Praise from Caesar! As any good citizen Tim not only praises, but forces a rendering unto Caesar that which belongs to him, namely, an explanation of previous statements, he writes:
there's another sense in which we obviously share beliefs. What are we sharing when we communicate or believe "the same thing"?
I once argued in a lengthy paper on ethnology, and would still stand by the claim, that we agree on approximations of approximations of approximations. In short, sense is approximations all the way down! For instance, "Hey, Tim, that thing is a rock (ostensive pointing to get us off the ground here)." Tim looks over and responds, "sure, okay, let's go with that." Now I have a certain reference from my view that seems to be necessarily different from Tim's. What he 'agrees' is a rock is different from what I 'agree' is a rock. I could be calling a rock, in fact, what we (in English) call a greeble (namely, a small protuberance on the side of an approximately linear object; think of the spaceships in Star Wars). On the other hand, Tim might mean what we in English mean by a rock. Tim-rock and John-rock are different, and one would assume that continued language use would make this clear. Still, each clarification of language use remains an approximation. Of what, the astute (and still awake) reader might ask? Of a conscious experience of course! Language use approximates subjective, what I would call phenomenal, experience, and precludes identity agreement. Even more troubling might be the following corollary: Tim-rock cannot be identical to John-rock because Tim-rock experience, if held as a quality of Tim-rock, is held by Tim, whereas John-rock is held by John. But this is more of a metaphysical quandary, so I think it is safe to leave it aside for our purposes. Does a description of the universe as turtles all the way down make you nauseated? Of course not; that old lady was crazy. But language as approximation, all the way down... that gives me rickets...

(d) Natural Representation
I am also not very familiar with the issue, but defer to Gareth Evans's arguments against causal reference (the Madagascar example). I will have to get back with you after I read a bit about the issue, but I wanted to mention Evans so you could see where I will probably be going with this argument in the near future.

Connectionism and Dynamic Systems
Tim makes the strong statement that
...the only thing that can explain our ability to produce and connect a seemingly infinite amount of beliefs is computation.
I find this hard to believe for multiple reasons. First, literally speaking, computation does not explain our ability to connect a seemingly infinite amount of beliefs; if it did, then I doubt that we would be having this debate! If anything, and slightly weaker, it is a potential explanation, but certainly not the only explanation. Now, a more combative point, contrary to Tim's view, would be that computation cannot explain our ability to produce and connect... (see above). I think this is what I have been arguing. Of course, there is also the point that defining something abstractly enough can explain just about anything. This is a crucial factor in the application of Turing machine computation. It is an extremely abstract mathematical result, and hence can be applied to almost anything (rats in a maze, for instance, has been given as an example). But the breadth, while a strength, is also a weakness as indicated above.

(g) The Connection Thesis
Searle objected to Strong A.I. due to the persistence of the Chinese Room argument (which he developed on a plane trip to Oxford, where he was to give a series of lectures in support of advancements in A.I. at the time). As he defined Strong A.I., he noted that it appears self-refuting due to a lack of meaning in the symbols employed by Turing computation, and, by extension via Church's Thesis, in any computable system by itself (i.e. without something else supplying meaning; like I said, we are computers in some sense since we compute, but that is not all that we are).

Taking consciousness and intentionality as primitive is something I think Searle tries to do (and Brentano before him, if I recall correctly), but if there is a distinction to be made, one not of definition but of kind (read: exclusive function), then it seems problematic. I can't redefine a system as primitive just because I disagree with it (like, for instance, the introduction of primitive modal operators to describe the movement of time for A-theorists). The 'parts are consciousness' thought is interesting, and something I thought about bringing up in the last discussion when you mentioned it with regards to organisms, but it does seem to suffer from the fallacy of composition, and therefore is extremely susceptible to sorites-style arguments (which I LOVE). I think Leibniz does ascribe 'apperception' (what I read as first-order consciousness) to intentionality in a primitive manner in his correspondence with Clarke (check it out, great discussion).

(h) Unconscious vs Subconscious
First, I would hope that the "unconscious" and "subconscious" states lack "consciousness." If not, there is a severe series of misnomers (namely, "un" and "sub"). Accessibility is a fine demarcation, including what we know directly and what we can retrieve, and excluding what we cannot retrieve. I am not sure about the HOT literature, and I will check it out, but I was arguing that beliefs, described as requiring semantic content, can only have that content when there is something 'it is like to be' for that belief for an individual. So, I believe, at this moment, that Big Ben is in London. Nothing wrong here. Now, when I am not having an intentional experience of that belief, I can retrieve it easily and, then, when retrieved, nothing is wrong. Conversely, if I cannot retrieve that 'belief' then it does not have a semantic value (how could it?) even though I might have a feeling that it is true (this would almost certainly derive from a previous experience which I do recall, in which I stated that Big Ben was in London, a statement which, if retrieved, has semantic content). Would a lack of retrieval capability place a formerly held 'belief' in the unconscious? Possibly. In fact, I think that is precisely what I mean by unconscious. Tim asks what these states then represent, if not consciousness. I say: causal links to accessible and inaccessible 'beliefs' (I keep using this term for conservation of vocabulary, but I should probably introduce another term) which influence currently held beliefs.

Tim also states:
I can't see how accessibility determines whether or not mental states (beliefs, memories, whatever) can have semantic content even when we aren't aware of them.

I think the best response to this is twofold: 1) How do beliefs which you are not aware of have semantic content, and 2) assuming they do, how plausible is that?

I argue that beliefs of which you are not aware have semantic content only with respect to an outside perspective. Thus, my 'belief' that Big Ben is in London, if I cannot retrieve it, is true in one sense. That is, it is true because Big Ben is, in fact, in London. But, it cannot be true for me since by hypothesis I cannot retrieve that proposition. In fact, it cannot have semantic content for me since I cannot retrieve it. Certainly the proposition has objective semantic content, but it is easy to confuse objective and subjective distinctions in what we mean by content. Now, let's assume that I cannot access a 'belief' and yet it has semantic content for me. What would this entail? It seems to entail that the proposition can be true or false, has reference, etc. As indicated, surely it is an objective fact that the proposition is true, but it is a far cry from this to saying that if I do not know that the proposition is true, then it is either true or false, or has a reference. This is akin to saying that, for instance, Goldbach's conjecture is true or false, despite not having any proof of the matter either way (to be sure, many mathematicians believe this). Like I mentioned earlier, I am partial to constructive proofs. The conjecture may turn out to be true, false, meaningless, inapplicable, super-true, or something else. Similarly, Big Ben might not have existed, and hence would have no reference, or it might have been in Dresden just before the Allied fire-bombings, and so would no longer exist. In either event, it seems more reasonable to remain agnostic about the 'belief' without retrieval, as opposed to saying it has content as a matter of course. My agnosticism leads me to the position that these 'beliefs' are not rightly dubbed, and there may be a better description than what we currently have available.

One last tidbit, are you arguing that we have to be aware in some sense of all of our beliefs? Is that what you mean by memory? If you want to hold that beliefs have semantic content, even when we are not 'aware' of them, then it seems you must hold that we are aware of all of them to some extent, if we are to "have" our myriad beliefs ('have' doesn't seem to be the right word to use here though). I think the name 'belief' is wrong, but I am not sure what to replace it with. I am hoping to clarify, since from what I gather in the readings, these guys are throwing around nomenclature like they're Christmas miracles (Scrooge!). I mean honestly, we are not MADE of nomenclature!

Anyway, this is great, I am enjoying myself, and I eagerly await your response!

Re: Representation and Consciousness

Post by Tim on Mon Oct 08, 2012 4:08 pm

Praise from Caesar! As any good citizen Tim not only praises, but forces a rendering unto Caesar that which belongs to him, namely, an explanation of previous statements
Okay Socrates...

(c) Strong AI in Searle's sense may not be possible, but I think machines will one day pass the Turing Test with flying colors. Now the question is, given the problem of other minds, do we judge whether another human has a mind based on something equivalent to the Turing Test? And if so, then does that mean our minds just are Turing machines?

I'm still inclined to say yes for that and other reasons given, but I grant that consciousness is a key part that's missing in computers. Maybe that's where I'd have to say the electrons in a computer have some sort of proto-consciousness after all. If you accept the possibility of machine consciousness but deny that it comes only from pushing around syntax, then I'm not sure how else to explain it. But I think it's best to remain agnostic about speculation like that... I'm not (yet) sure what's to be gained by switching a ghost in the machine for a machine composed of one (or a bunch).

(g)
The 'parts are consciousness' thought is interesting, and something I thought about bringing up in the last discussion when you mentioned it with regards to organisms, but it does seem to suffer from the fallacy of composition, and therefore is extremely susceptible to sorites-style arguments (which I LOVE). I think Leibniz does ascribe 'apperception' (what I read as first-order consciousness) to intentionality in a primitive manner in his correspondence with Clarke (check it out, great discussion).
If I say that a system is conscious only if its parts are conscious then it seems like I might be committing the converse "fallacy of division." But then there's the question of how these parts combine: "if each part is conscious, then they can combine into a single consciousness." I think that might run into the fallacy of composition. Panpsychism lacks explanatory power in this regard. Then again there seems to be a disunity of consciousness in split-brain patients, schizophrenics, etc... Anyways just wanted to make the terminological point about the informal fallacies for myself since I had to look them both up.

On Leibniz: Stephen Puryear (of NCSU) - Perception and Representation in Leibniz. From what I understand, 'perception' is the first-order representational quality of monads, while 'apperception' is a higher-order awareness of that representation only found in higher level monads. I could be wrong though, I'm holding off on my Leibniz research for now.

(h)
I argue that beliefs of which you are not aware have semantic content only with respect to an outside perspective. Thus, my 'belief' that Big Ben is in London, if I cannot retrieve it, is true in one sense. That is, it is true because Big Ben is, in fact, in London. But, it cannot be true for me since by hypothesis I cannot retrieve that proposition. In fact, it cannot have semantic content for me since I cannot retrieve it. Certainly the proposition has objective semantic content, but it is easy to confuse objective and subjective distinctions in what we mean by content. Now, let's assume that I cannot access a 'belief' and yet it has semantic content for me. What would this entail? It seems to entail that the proposition can be true or false, has reference, etc.
I now understand what you mean by a mental state only having content if it's accessed...
What you call objective semantic content is what I identify as the content of non-conscious (unconscious or subconscious) mental states. If this state is accessed and I'm experiencing it, perhaps it then has subjective content. Some would call this the phenomenology of intentionality. But it's another thing to say objective content requires subjective content of this sort. Most philosophers who try to naturalize intentionality aim to explain the latter in terms of the former, whereas Searle would do the converse. But maybe neither is more fundamental, as we said.

As indicated, surely it is an objective fact that the proposition is true, but it is a far cry from this to saying that if I do not know that the proposition is true, then it is either true or false, or has a reference.
What I would say is the proposition or memory or "belief" is either true or false and has reference regardless of whether I'm consciously aware of it, much less whether or not I know it's veridical. If that seems implausible or you take issue with abstract objects, then all I can say is computers store plenty of unaccessed representations--but that just brings us back to functionalism.
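
A trivial illustration of the kind of thing I have in mind (just a sketch; the file name and contents are made up): a program can write down a "belief" that then sits unread, its content fixed the whole time, and only later guides behavior when retrieved.

import json
import os
import tempfile

# Write down a "belief"; from this point on, nothing is reading or experiencing it.
path = os.path.join(tempfile.gettempdir(), "stored_beliefs.json")  # made-up file name
with open(path, "w") as f:
    json.dump({"big_ben_location": "London"}, f)

# ...arbitrarily later, the stored content is retrieved, and only then does it
# guide behavior.
with open(path) as f:
    beliefs = json.load(f)
print("Retrieved belief:", beliefs["big_ben_location"])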


One last tidbit, are you arguing that we have to be aware in some sense of all of our beliefs? Is that what you mean by memory? If you want to hold that beliefs have semantic content, even when we are not 'aware' of them, then it seems you must hold that we are aware of all of them to some extent, if we are to "have" our myriad beliefs ('have' doesn't seem to be the right word to use here though).

Let's say I believe Big Ben is in London, but I'm not thinking about it right now. I say I'm not aware of this belief, but it still has content. It has content because the content is what influences my behavior through the causal links you mentioned. It also has content because that's the thing that is (in)accessible. Whatever it's called, it has to have content, otherwise there'd be nothing to potentially retrieve/recall. So there is a kind of internal representation that exists in my mind that doesn't presently depend on my consciousness. So representation doesn't entail consciousness, contra Searle.

I really appreciate the discussion. Not sure if I'll have time for philosophy club tomorrow but soon I'll present my argument for why even naturalized/objective representational content (that doesn't require consciousness) isn't sufficient for consciousness.

Three Simple Rules for Data-Mining my Robot

Post by John on Mon Oct 08, 2012 9:11 pm

(c) I too think that machines will eventually pass the Turing test, but this test seems to reek of behaviorism. I say that because of the truth of your assertion, namely, that it would seem to imply that minds are just Turing machines. Say, some robot which passes the Turing test has been created, call it Aye, since if anyone ever asks it if it’s a robot, it must respond truthfully with “Aye!” Now, Aye-Robot can pass any test we throw at it, and thus seems very much conscious. Would we say it is conscious though? How could we? As you have clearly stated, there is no way to penetrate Aye-Robot’s point of view, if there is such a thing, to look for consciousness, in the same way that there is no way I can penetrate your thoughts to assure myself that you are not some clever artifice. Perhaps there is a way to program our experience into Aye-Robot in such a way that it has causal reference to what we mean when we say ‘consciousness.’ Maybe then we could pose the question, “are you conscious?” and receive a truthful “Aye!”

You raise an interesting point with electrons. When we hop down to the quantum level, the terminology gets extra murky. We start saying things like, “electrons have perspectives,” and “causal links between transition states require observers,” which could be taken as grounds for proto-consciousness. It could be that our explanations of quantum theory imply a necessary perspective from sub-atomic particles, but I am unclear on the specifics. Of course, this all follows from an assumption that machines can have consciousness, which I am willing to grant, but not without trepidation. I mostly take issue with the reference though, which you mentioned that you are currently working on...

Also,
I'm not (yet) sure what's to be gained by switching a ghost in the machine for a machine composed of one.
So far, the only thing metaphysics can ever hope to gain: closer adherence to descriptive theories. I mean, at one time human brains were compared to locomotives. Of course, this was due to the lack of knowledge concerning computer paradigms, modern physics, etc. But then again, so are we compared to future generations. Still, point taken… and a good one…

(g) I apologize for my jump in reasoning here; my intent was that the fallacy of composition charge relate to the eventual combination of parts into consciousness, as you pointed out. The accounts of split-brain patients perplex me to no end. I can think of no modern account of consciousness that can satisfactorily explain them (outside of a broad, identity-as-social-interaction perspective, which I defended during the first meeting this semester, yet which itself suffers from logical issues). It seems interesting to think that we might be able to split consciousness into multiple patients, each with a relatively clear justification for ownership of the experiences of the other. How far could such experiments go? Say we augment the patient with any biological requirement which is lacking post-op. That is, we split a brain, place one hemisphere into one body, and keep the other intact. Then, we augment both bodies with whatever is required (even a mechanical brain half!) to bring them back ‘up to par’ as it were. Moreover, what if this procedure were continued, with larger portions of each hemisphere slowly split and replaced? It seems like we would end up with hundreds of people capable of responding “Aye!” while passing Turing tests out and about in public!

I believe that you are correct about Leibniz, but I meant ‘primitive’ in the sense that intentionality and apperception were on equal footing (and so non-circular in his view). I was wrong to read apperception as first-order since he clearly argues that perception fills this category. I am not sure why I even wrote that to be honest…

(h) I do find it implausible that my unconscious “beliefs” have reference, since that supposes a relation between a real entity, in the case of Big Ben, and a proposition of which I am not currently aware. Of course, I would take issue with abstraction, but my arguments against abstract objects apply to their literal ontology, not their epistemological status, and hence would not be an option for criticism here. Now, it is nice to talk as if there were relations existing between Big Ben and thoughts that I do not currently entertain, but I am unclear on how this relation would be explained. I think this will become clear in the following section though.

Time for a slogan! “I give my beliefs content.” More pretentiously, “Reflection bestows upon objects significance.” I am merely attempting to describe the world here. When I believe that Big Ben is in London, I do so based on evidence, and a system of holistic beliefs, which directly influence my decision to give content to this proposition. Surely, I am not directly thinking that gravity works in this proposition, or else Big Ben might fly away, but I must be! I say, ‘directly’ but what I mean is that beliefs which I have entertained in the past have a causal link to beliefs which I am directly entertaining now, and these previous beliefs are such that they have influenced my decisions to hold propositions like, “Big Ben is in London,” and “Gravity works” explicitly in the first sense, and derivatively in the second. The literal proposition “Gravity works” has no content for me until I reflect on it, though at one time it did have content. I think this is why, for instance, people have such a difficult time understanding Relativity. They say, “it’s counter-intuitive,” and they are correct. This seems to entail that “gravity” is intuitive, but this is only so with respect to previous inculcation. Does that make any sense?

Tim, I certainly hope you can make it to the meeting. I would love to continue our discussion in person and see what the rest of the club thinks. I eagerly anticipate your further arguments. And as always, “Aye-Robot!”


Re: Representation and Consciousness

Post by Tim on Tue Oct 09, 2012 12:15 am

(g) From Ship of Theseus to Sorites...

(1) A brain is conscious
(2) A brain minus one neuron is still conscious
~*~Induction~*~
(3) Therefore a single neuron is conscious
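
Spelled out schematically (just my rough rendering, writing C(n) for "a brain of n neurons is conscious"), the induction would run:

\[
C(N), \qquad \forall k > 1 \; \bigl( C(k) \rightarrow C(k-1) \bigr) \;\vdash\; C(1)
\]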

(h) Come to think of it, it seems unlikely we "store" all of our beliefs. Maybe the causal links you mentioned are just inferential roles or functions. You've certainly given me a lot to think about, and I definitely see the intuition behind grounding all meaning in experience. I've gone back and forth on this. But in any case, my targets right now are attempts to naturalize or reduce consciousness, not so much intentionality. I'll get back to you after I look over Galen Strawson's paper I linked above.

Relevant: https://www.youtube.com/watch?v=fdfC4_IEoa4#t=120s

