  #51  
Old 11-15-2008, 01:05 AM
naturalist.atheist naturalist.atheist is offline
Reality Adventurer
 
Join Date: Jul 2007
Posts: VMMCXXX
Images: 7
Default Re: Chinese Room experiment

Quote:
Originally Posted by Clutch Munny View Post
Quote:
Originally Posted by naturalist.atheist View Post
Quote:
Originally Posted by Clutch Munny View Post
Both. And more besides: lessons from ethology and "artificial life" research.
So 15 years ago exactly what was the state of embodied robotics?
Are you familiar with Google?

I'm aware of your thesis that other people should find evidence for your statements, and are to blame for your not having any in the first place. I demur from the invitation to attempt to educate you, however.
I've checked google and as far as I can tell in the field of robotics there isn't much before 2001. So you have piqued my curiosity. I am not asking you to prove anything, just share what you know. But if that is too much for you or you figure out that maybe you misspoke then that is fine. Carry on.
Reply With Quote
  #52  
Old 11-15-2008, 04:27 PM
Dragar's Avatar
Dragar Dragar is offline
Now in six dimensions!
 
Join Date: Jan 2005
Location: The Cotswolds
Gender: Male
Posts: VCII
Default Re: Chinese Room experiment

I think the Chinese Room is a good thought experiment, in that it highlights that we really don't know what constitutes understanding of Chinese, or even how to look for it.
__________________
The miracle of the appropriateness of the language of mathematics for the formulation of the laws of physics is a wonderful gift which we neither understand nor deserve. -Eugene Wigner
Reply With Quote
Thanks, from:
Adam (11-15-2008)
  #53  
Old 11-15-2008, 04:34 PM
Dragar's Avatar
Dragar Dragar is offline
Now in six dimensions!
 
Join Date: Jan 2005
Location: The Cotswolds
Gender: Male
Posts: VCII
Default Re: Chinese Room experiment

Quote:
Originally Posted by seebs
The laws of physics don't understand Chinese. However, the laws of physics are manipulating a set of symbols (we call them "particles" and "energy states"), and this seems to produce occasional patterns of those symbols which we describe as "understanding Chinese".

The neurons don't have to understand Chinese for the brain to understand Chinese. Searle, in this story, is performing the roles of a large number of non-understanding neurons in turn. The internal state of understanding isn't in searle, it's in the pattern.
It sounds almost like you're saying the room does understand Chinese, seebs, when taken as a whole. Perhaps it doesn't understand in the same way a native Chinese speaker does, but it does understand in a different way. Is that what you are saying?
__________________
The miracle of the appropriateness of the language of mathematics for the formulation of the laws of physics is a wonderful gift which we neither understand nor deserve. -Eugene Wigner
Reply With Quote
  #54  
Old 11-19-2008, 09:12 PM
seebs seebs is offline
God Made Me A Skeptic
 
Join Date: Jul 2004
Location: Minnesota
Posts: VMMMCLXX
Images: 1
Default Re: Chinese Room experiment

My thought is that if something can consistently conduct discussions in Chinese such that people think it understands Chinese, it probably does.

(Note, BTW, that the original experiment presumed you were given access to the questions in advance, and could simply look up answers in a table -- I think it also assumed no continuity of discussion.)

So I think "the room as a whole" probably understands Chinese, for the same reason that I think some people do -- the only test I have available is to compare behaviors with the behaviors I exhibit about things I understand.

I don't know whether it's the same way or a different way. I don't think it necessarily matters. But, in the fairly obvious case of a perfect physics-level simulation of a human brain, I think it's clear that either the simulated brain understands, or the human brain doesn't -- I don't think they can be different. (Unless understanding really DOES come from souls, and furthermore there are no souls for simulated brains. I have no reason to think that.)
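
As a purely illustrative Python sketch of that behavioural test (the probes, the canned_room respondent and the helper name are all invented for this example, not anyone's actual proposal):

Code:
def behaves_like_understander(respond, probes):
    """respond: a callable taking a question string and returning a reply string.
    probes: (question, check) pairs, where check inspects only the reply."""
    return all(check(respond(question)) for question, check in probes)

# Toy probes: we can only inspect outward behaviour, never inner states.
probes = [
    ("What colour is the sky on a clear day?", lambda reply: "blue" in reply.lower()),
    ("Is a whale a fish?", lambda reply: "no" in reply.lower()),
]

def canned_room(question):
    # Stand-in for the room: nothing inside but a table of canned replies.
    table = {
        "What colour is the sky on a clear day?": "Blue.",
        "Is a whale a fish?": "No, it is a mammal.",
    }
    return table.get(question, "I do not know.")

print(behaves_like_understander(canned_room, probes))  # True: it passes the only test we have

The point is only that the test inspects behaviour; it is silent about what, if anything, is going on inside.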
__________________
Hear me / and if I close my mind in fear / please pry it open
See me / and if my face becomes sincere / beware
Hold me / and when I start to come undone / stitch me together
Save me / and when you see me strut / remind me of what left this outlaw torn
Reply With Quote
Thanks, from:
Farren (11-25-2008)
  #55  
Old 11-25-2008, 12:07 PM
Dragar's Avatar
Dragar Dragar is offline
Now in six dimensions!
 
Join Date: Jan 2005
Location: The Cotswolds
Gender: Male
Posts: VCII
Default Re: Chinese Room experiment

Quote:
Originally Posted by seebs
So I think "the room as a whole" probably understands Chinese, for the same reason that I think some people do -- the only test I have available is to compare behaviors with the behaviors I exhibit about things I understand.
I think we believe other people understand Chinese for more than just their responses to our questions in Chinese, though. I think we also believe that these other people have a 'mind', or at least mental states, similar to the ones we have and can explore through introspection. When we talk about 'understanding', it's surely as much a reference to these mental states as it is to responses to questions, even if responses to questions are the only observables.

The power of the Chinese Room is that quite clearly the room as a whole does not have a mental state we can identify with being 'understanding Chinese'. In fact (perhaps by design), the only mental state we can identify is Searle's state of confusion as he slips mysterious symbols back through the slots of the room - most certainly not 'understanding'.
__________________
The miracle of the appropriateness of the language of mathematics for the formulation of the laws of physics is a wonderful gift which we neither understand nor deserve. -Eugene Wigner
Reply With Quote
  #56  
Old 11-25-2008, 01:13 PM
Farren's Avatar
Farren Farren is offline
Pistachio nut
 
Join Date: Jul 2004
Location: South Africa
Gender: Male
Posts: MMMDCCXXIII
Images: 26
Default Re: Chinese Room experiment

That's because imagination is taken too far. To simulate something as complex and subtle as a Chinese speaker convincingly, you need something with a topologically similar internal state. There are lots of patterns in nature like that.
__________________
:ilovesa:
Reply With Quote
  #57  
Old 11-25-2008, 11:27 PM
Clutch Munny's Avatar
Clutch Munny Clutch Munny is offline
Clutchenheimer
 
Join Date: Jul 2004
Location: Canada
Gender: Male
Posts: VMMMXCII
Images: 1
Default Re: Chinese Room experiment

Quote:
Originally Posted by Dragar View Post

The power of the Chinese Room is that quite clearly the room as a whole does not have a mental state we can identify with being 'understanding Chinese'.
Well, the room as described also doesn't answer questions written in Chinese, though, so this fact alone doesn't tell us anything. And if you consider what you'd see if you peeked inside the room once it was augmented so as to be able to answer Chinese questions in real time -- operations performed over an Earth-sized rule book by something moving at c+n, for some large n -- I submit that it would be very far from clear what the room as a whole has in the way of mental states.

More precisely, it would seem every bit as mysterious as how the operations of a bunch of neurons generate human behaviour, and we would be driven back on basically the same reasoning we use with humans: if it robustly acts like it understands, it understands.
__________________
Your very presence is making me itchy.
Reply With Quote
Thanks, from:
erimir (11-26-2008), Farren (11-26-2008)
  #58  
Old 11-26-2008, 02:10 AM
seebs seebs is offline
God Made Me A Skeptic
 
Join Date: Jul 2004
Location: Minnesota
Posts: VMMMCLXX
Images: 1
Default Re: Chinese Room experiment

Quote:
Originally Posted by Dragar View Post
I think we believe other people understand Chinese for more than just their responses to our questions in Chinese, though. I think we also believe that these other people have a 'mind', or at least mental states, similar to the ones we have and can explore through introspection. When we talk about 'understanding', it's surely as much a reference to these mental states as it is to responses to questions, even if responses to questions are the only observables.
Sure -- but we're inferring those mental states from the observed behaviors.

Quote:
The power of the Chinese Room is that quite clearly the room as a whole does not have a mental state we can identify with being 'understanding Chinese'.
It's not clear at all to me. If you show me two identical things, and then assert that one is magically different from the other, I don't necessarily find it "clear" that the difference is real.

Quote:
In fact (perhaps by design), the only mental state we can identify is Searle's state of confusion as he slips mysterious symbols back through the slots of the room - most certainly not 'understanding'.
Exactly -- that's the red herring. You're looking at Searle because you have a heuristic telling you that "people" understand things, and that "non-people" don't.

Replace Searle with a purely mechanical device, and the expectation that if anything understood Chinese, it would be the simple mechanical device... well, that doesn't happen. The design of the thought experiment is very specifically to trick you by making you assign all the understanding of the system to the one component specifically excluded from having the understanding.
__________________
Hear me / and if I close my mind in fear / please pry it open
See me / and if my face becomes sincere / beware
Hold me / and when I start to come undone / stitch me together
Save me / and when you see me strut / remind me of what left this outlaw torn
Reply With Quote
  #60  
Old 11-27-2008, 11:08 AM
Dragar's Avatar
Dragar Dragar is offline
Now in six dimensions!
 
Join Date: Jan 2005
Location: The Cotswolds
Gender: Male
Posts: VCII
Default Re: Chinese Room experiment

Quote:
Originally Posted by Clutch
More precisely, it would seem every bit as mysterious as how the operations of a bunch of neurons generate human behaviour, and we would be driven back on basically the same reasoning we use with humans: if it robustly acts like it understands, it understands.
I'm questioning this assertion. Is it the case that we attribute understanding to other humans because they act as if they understand? Or is it because, long before we've had a discussion in Chinese, we've attributed to them various properties and mental states (similar to our own), and we reason that conversing in Chinese is impossible for ourselves without understanding, and therefore impossible for them?


Quote:
Originally Posted by Seebs
Exactly -- that's the red herring. You're looking at Searle because you have a heuristic telling you that "people" understand things, and that "non-people" don't.

Replace Searle with a purely mechanical device, and the expectation that if anything understood Chinese, it would be the simple mechanical device... well, that doesn't happen. The design of the thought experiment is very specifically to trick you by making you assign all the understanding of the system to the one component specifically excluded from having the understanding.
Fair enough (and I agree about the rather duplicitous design of the Room).

But let's replace Searle, then, by a (simple) mechanical device. It still isn't apparent to me that a machine looking up how to respond in a book is at all the same as understanding Chinese.

I can see how - given sufficient complexity of the machine/algorithm - we might approach something like understanding. But to answer Clutch's point:

Quote:
Originally Posted by Clutch
And if you consider what you'd see if you peeked inside the room once it was augmented so as to be able to answer Chinese questions in real time...I submit that it would be very far from clear what the room as a whole has in the way of mental states.
I don't see why speed would have anything to do with it. If, on the other hand, we were to replace looking up the correct response in a very large book by some very complicated algorithm, carried out mechanically (via electrons or gears), then I would begin to be swayed. But 'looking up the answers in a book' is not (to my best understanding) how brains function in the slightest, and I only have a definition of 'understanding' suitable for the sorts of processes carried out in brains.

Maybe just to summarise, because I feel I've gone off point: my answer would be that the Chinese Room does not understand Chinese not because Chinese rooms do not understand Chinese, but because this Chinese room does not understand Chinese. This Chinese Room operates (at whatever speed) via an algorithm that is so simplistic as to be incomparable to human brains - the only thing we agree can understand Chinese, and the only thing we have to define what 'understanding' means in the first place.
__________________
The miracle of the appropriateness of the language of mathematics for the formulation of the laws of physics is a wonderful gift which we neither understand nor deserve. -Eugene Wigner
Reply With Quote
Thanks, from:
Adam (11-28-2008), Clutch Munny (11-27-2008)
  #61  
Old 11-27-2008, 12:10 PM
Farren's Avatar
Farren Farren is offline
Pistachio nut
 
Join Date: Jul 2004
Location: South Africa
Gender: Male
Posts: MMMDCCXXIII
Images: 26
Default Re: Chinese Room experiment

Quote:
Originally Posted by Dragar View Post

I'm questioning this assertion. Is it the case that we attribute understanding to other humans because they act as if they understand? Or is it because, long before we've had a discussion in Chinese, we've attributed to them various properties and mental states (similar to our own), and we reason that conversing in Chinese is impossible for ourselves without understanding, and therefore impossible for them?
I'm fairly certain, as a programmer, that your fundamental error lies here. Physical and logical similarity are not the same. If a pattern in one medium closely resembles a pattern in another medium, and the pattern itself is what gives meaning to the applicable term (in this case, consciousness), it doesn't matter whether the physical representation of the pattern is dissimilar; what matters is whether the logical structure of the pattern is the same. The physical, microscopic nature is essentially irrelevant - in much the same way that it's irrelevant whether I type "a" on my keyboard or render it in ink. Either way, you'll still read it the same.
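
A toy illustration of that point in Python (my own invented example): the same logical pattern - here a humble two-bit XOR - realised in two physically unrelated ways is indistinguishable at the level that matters.

Code:
def xor_as_table(a, b):
    # Realisation 1: an explicit lookup table.
    return {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}[(a, b)]

def xor_as_arithmetic(a, b):
    # Realisation 2: arithmetic, with no table anywhere.
    return (a + b) % 2

# Behaviourally identical, like "a" typed on a keyboard or rendered in ink.
assert all(xor_as_table(a, b) == xor_as_arithmetic(a, b)
           for a in (0, 1) for b in (0, 1))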

In fact, no two humans are physically even close in terms of microscopic similarity. The nature of neural nets is that we are born with structures that predispose us to certain abilities, without those abilities themselves being inherent. For example, according to Chomsky we have "deep structures" which facilitate the learning of language, but no actual language when we're born.

Glossolalia, or the "speaking in tongues" of many spiritual traditions, taps this primeval predisposition in the brain of Homo sapiens to produce human-sounding but ultimately meaningless patterns. But we have only the basic, meaning-free structures in Wernicke's and Broca's areas of the brain when we are born. Absent the intervention of adult humans, we do not actually learn meaningful language. Our brain structure predisposes us to extract meaning from arbitrary sound waves but, alone, does not supply understanding.

And learning to understand involves a long series of Pavlovian response training that ultimately leaves our brains wired differently for exactly the same words. Examine two human brains in enough detail and you will observe that the same simple statement triggers two entirely dissimilar neuron paths. The only similarity will be that they are in a common region of the brain, since genetic predisposition sets up different parts of the brain to learn specific kinds of lessons faster or slower.
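
A small numerical sketch of that claim (a toy of my own devising, not a model of real neurons): two tiny perceptrons with different random starting weights learn the same lesson, end up with different weights, yet give identical answers.

Code:
import random

def train(seed, data, lr=0.1, max_epochs=1000):
    rng = random.Random(seed)
    w = [rng.uniform(-1, 1), rng.uniform(-1, 1)]
    b = rng.uniform(-1, 1)
    for _ in range(max_epochs):
        mistakes = 0
        for (x1, x2), label in data:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred
            if err:
                mistakes += 1
                w[0] += lr * err * x1
                w[1] += lr * err * x2
                b += lr * err
        if mistakes == 0:        # stop once every "lesson" is answered correctly
            break
    return w, b

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 1), ((1, 1), 1)]  # toy task: label = first input
(wa, ba), (wb, bb) = train(1, data), train(2, data)

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

print(all(predict(wa, ba, x) == predict(wb, bb, x) for x, _ in data))  # True: same answers
print((wa, ba) == (wb, bb))                                            # False: different "wiring"

Same competence, different wiring.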

To put predisposition in perspective, take any (Old- or New-World) monkey and damage the motor center in a way that disables the left hand. In time, another part of the brain will take over the role of the deliberately damaged part of the motor center. But in the absence of interference, the part of the brain congenitally predisposed to controlling the left hand will most likely learn to do so quickest, resulting in a commonality of location for control of the left hand apparent in most monkeys. But the fact that another part of the brain will learn those motor functions should the congenitally predisposed part fail illustrates that it is not a hard wiring but a "soft" wiring with a congenital bias (I use this example because it is well tested).

This has been specifically observed in people who have suffered left-brain damage (the left hemisphere houses both Wernicke's and Broca's areas) and suffered temporary speech impairment as a result. Those who, for instance, have learned to communicate from the right brain using newly adapted structures represent a vanishingly small exception to the rule of where comprehension of speech is processed. But they exist. Therefore, we must acknowledge both predispositions in the brain and the fact that brain plasticity allows exceptions to exist.

Ultimately, what I'm saying is that even in healthy, undamaged humans, processing the word "the" involves fundamentally different circuits in any two individuals. Predisposition dictates that for any arbitrary humans they'll probably be in the same region of the brain, but they will be dissimilar circuits. In the case of outliers they'll be in entirely different regions of the brain. In light of this, Searle's (implicit) assumption of a "normal human", against which the apparently dissimilar room is measured, is actually fucking absurd. Searle is really acting as the very stereotype of a big fat fluffy philosopher whose paucity of knowledge outside of philosophy renders his arguments, um, fucking stupid, once the scientific understanding of the agents of his thought experiment is actually considered.

If any two actual humans process the word "the" in a dissimilar fashion, the only thing that allows us to claim similarity is that we have a similar topology of logic at a higher level of abstraction. In turn, this implies that something entirely dissimilar at a certain level of granularity can be logically similar at a higher level of abstraction and, by implication, experience consciousness as we understand it. Make the claim that they're dissimilar and you're basically claiming that any two actual human consciousnesses are dissimilar, and hence screwing up your own argument, big time.
__________________
:ilovesa:

Last edited by Farren; 11-27-2008 at 01:06 PM.
Reply With Quote
  #62  
Old 11-27-2008, 06:05 PM
erimir's Avatar
erimir erimir is offline
Projecting my phallogos with long, hard diction
 
Join Date: Sep 2005
Location: Dee Cee
Gender: Male
Posts: XMMMCMVI
Images: 11
Default Re: Chinese Room experiment

Quote:
Originally Posted by Dragar View Post
Quote:
Originally Posted by Clutch
And if you consider what you'd see if you peeked inside the room once it was augmented so as to be able to answer Chinese questions in real time...I submit that it would be very far from clear what the room as a whole has in the way of mental states.
I don't see why speed would have anything to do with it. If, on the other hand, we were to replace looking up the correct response in a very large book by some very complicated algorithm, carried out mechanically (via electrons or gears), then I would begin to be swayed. But 'looking up the answers in a book' is not (to my best understanding) how brains function in the slightest, and I only have a definition of 'understanding' suitable for the sorts of processes carried out in brains.
I think the point Clutch was making was that the algorithm Searle proposes couldn't possibly reply in real time: even if it were put into a very fast computer, if it were merely using a lookup table it would still take too long to respond, and thus fail the Turing test. And I'm not sure a lookup table for human conversation is even possible, as it would have to have trillions and trillions of entries, since it has to remember all of what has been said before in the conversation. That would be possible in theory, but it would be enormous - certainly it would be impossible to work with a human looking things up in paper books. Just think of the trillions of ways a conversation could go using only standard grammatical utterances; and in order to pass the Turing test, a machine would also have to be able to interpret sentences using non-standard grammar, production errors, nonce words, etc., increasing the number of entries a lookup table would need by orders of magnitude.
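
To put rough numbers on that, here is a small Python sketch; the vocabulary size, sentence length and turn count are purely illustrative assumptions, but any remotely realistic figures give the same shape of answer.

Code:
VOCAB = 1_000         # assume a tiny 1,000-word vocabulary
SENTENCE_LEN = 10     # assume every utterance is exactly 10 words long
TURNS = 5             # a short five-turn exchange

utterances = VOCAB ** SENTENCE_LEN       # 1e30 possible single utterances
table_entries = utterances ** TURNS      # one entry per possible conversation history
print(f"{table_entries:.1e} entries needed")   # ~1.0e+150, far beyond "trillions"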

Thus Searle's question is misleading because he suggests that something that cannot pass the Turing test could pass the Turing test.

On the other hand, a program that could process human language and could also respond as fast as a human would not necessarily be so "obviously non-mental".
Quote:
Maybe just to summarise, because I feel I've gone off point: my answer would be that the Chinese Room does not understand Chinese not because Chinese rooms do not understand Chinese, but because this Chinese room does not understand Chinese. This Chinese Room operates (at whatever speed) via an algorithm that is so simplistic as to be incomparable to human brains - the only thing we agree can understand Chinese.
I think we're agreeing here actually, it's just that I think that this Chinese room would also fail the Turing test.
Reply With Quote
Thanks, from:
Adam (11-28-2008), Clutch Munny (11-27-2008)
  #63  
Old 11-27-2008, 06:45 PM
seebs seebs is offline
God Made Me A Skeptic
 
Join Date: Jul 2004
Location: Minnesota
Posts: VMMMCLXX
Images: 1
Default Re: Chinese Room experiment

Quote:
Originally Posted by Dragar View Post
I'm questioning this assertion. Is it the case that we attribute understanding to other humans because they act as if they understand? Or is it because, long before we've had a discussion in Chinese, we've attributed to them various properties and mental states (similar to our own), and we reason that conversing in Chinese is impossible for ourselves without understanding, and therefore impossible for them?
I think it's a bit of both.

Quote:
But let's replace Searle, then, by a (simple) mechanical device. It still isn't apparent to me that a machine looking up how to respond in a book is at all the same as understanding Chinese.
It doesn't have to be apparent that it is the same. The key is that once you take the red herring in a suit out of the room, it is no longer immediately obvious that it's not understanding.

I'm not trying to argue that there is necessarily understanding going on; I'm merely arguing that Searle's argument is completely free of substance at any level. It's a pure deception.

Quote:
I don't see why speed would have anything to do with it. If, on the other hand, we were to replace looking up the correct response in a very large book by some very complicated algorithm, carried out mechanically (via electrons or gears), then I would begin to be swayed. But 'looking up the answers in a book' is not (to my best understanding) how brains function in the slightest, and I only have a definition of 'understanding' suitable for the sorts of processes carried out in brains.
Hmm.

In Searle's original description, the questions are known in advance, and "looking up the answers" might make sense (in which case, obviously, the understanding occurred in the person writing the algorithm). In the version most people talk about, it's a conversation, and the only way it can be even plausible to imagine pulling that off is an algorithm with saved state. (Think about pronouns.)
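
A minimal Python sketch of why saved state matters for something as mundane as a pronoun (the class and its behaviour are invented for illustration; nothing this simple would pass a Turing test, but a stateless question-to-answer table cannot do even this):

Code:
class StatefulRoom:
    def __init__(self):
        self.last_topic = None   # the saved state

    def reply(self, utterance):
        words = utterance.rstrip("?.").split()
        if words[-1] == "it" and self.last_topic:
            return f"You are asking about {self.last_topic}."
        self.last_topic = words[-1]     # remember the last noun mentioned
        return f"Noted: {self.last_topic}."

room = StatefulRoom()
print(room.reply("Tell me about tea."))        # Noted: tea.
print(room.reply("Where do people grow it?"))  # You are asking about tea.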

Quote:
Maybe just to summarise, because I feel I've gone off point: my answer would be that the Chinese Room does not understand Chinese not because Chinese rooms do not understand Chinese, but because this Chinese room does not understand Chinese. This Chinese Room operates (at whatever speed) via an algorithm that is so simplistic as to be incomparable to human brains - the only thing we agree can understand Chinese.
I would argue that there are two possibilities:
1. The room doesn't actually work.
2. The algorithm is comparable in total complexity to human brains.

In short, "assume we could simulate intelligence on something much simpler than a human brain; if so, then simulation of intelligence wouldn't imply similarity to the human brain" turns out to be begging the question...
__________________
Hear me / and if I close my mind in fear / please pry it open
See me / and if my face becomes sincere / beware
Hold me / and when I start to come undone / stitch me together
Save me / and when you see me strut / remind me of what left this outlaw torn
Reply With Quote
  #64  
Old 11-27-2008, 07:09 PM
Clutch Munny's Avatar
Clutch Munny Clutch Munny is offline
Clutchenheimer
 
Join Date: Jul 2004
Location: Canada
Gender: Male
Posts: VMMMXCII
Images: 1
Default Re: Chinese Room experiment

Quote:
Originally Posted by Dragar View Post
Quote:
Originally Posted by Clutch
And if you consider what you'd see if you peeked inside the room once it was augmented so as to be able to answer Chinese questions in real time...I submit that it would be very far from clear what the room as a whole has in the way of mental states.
I don't see why speed would have anything to do with it. If, on the other hand, we were to replace looking up the correct response in a very large book by some very complicated algorithm, carried out mechanically (via electrons or gears), then I would begin to be swayed. But 'looking up the answers in a book' is not (to my best understanding) how brains function in the slightest, and I only have a definition of 'understanding' suitable for the sorts of processes carried out in brains.
Speed has everything to do with it, because (i) the thought experiment hinges on intuitions arising from the described implementation of the system -- how could a guy at a desk with a book in English thereby understand Chinese? -- and (ii) systems that could perform the way the CR is described as performing (i.e., passing the TT in real time) could not be implemented the way the CR is constituted, as described. Not without the laws of the universe being very different, at the very least.

Even if "the book" was the size of the moon, and could actually contain enough canned responses to make the CR robustly sensitive to arbitrarily chosen inputs -- which is far from clear -- how does one "flip through" such a book fast enough? At most you'd be talking about a process that's in some sense functionally analogous to flipping through a book, but implemented very, very differently. And what are our intuitions about whether that functional analogue -- whatever it looked like -- could instantiate genuine understanding? I submit that any alleged intuition to this effect is sheer confabulation based on the lingering mental image of a man flipping through a rule book at a desk.

Indeed, at this level of functional analogy, there's little reason for confidence that the brain isn't doing some such thing, perhaps broken down modularly and by sub-task. (Which shouldn't matter, by Searle's reasoning).
__________________
Your very presence is making me itchy.
Reply With Quote
Thanks, from:
Adam (11-28-2008), Dragar (11-28-2008), Farren (11-28-2008)
  #65  
Old 11-28-2008, 12:16 PM
Dragar's Avatar
Dragar Dragar is offline
Now in six dimensions!
 
Join Date: Jan 2005
Location: The Cotswolds
Gender: Male
Posts: VCII
Default Re: Chinese Room experiment

Quote:
Originally Posted by Clutch
Not without the laws of the universe being very different, at the very least.
But it's a thought experiment! But I take your point.
__________________
The miracle of the appropriateness of the language of mathematics for the formulation of the laws of physics is a wonderful gift which we neither understand nor deserve. -Eugene Wigner
Reply With Quote
  #66  
Old 11-28-2008, 10:35 PM
seebs seebs is offline
God Made Me A Skeptic
 
Join Date: Jul 2004
Location: Minnesota
Posts: VMMMCLXX
Images: 1
Default Re: Chinese Room experiment

The key relevance of the physics is that the presumption that someone is "just flipping through a book" contributes to our intuitive sense that this must be a simple process with no preserved state and not much computation, which is why it seems like something which wouldn't have understanding. It's a pure appeal to intuition; there's no actual argument for the conclusion really asserted!
__________________
Hear me / and if I close my mind in fear / please pry it open
See me / and if my face becomes sincere / beware
Hold me / and when I start to come undone / stitch me together
Save me / and when you see me strut / remind me of what left this outlaw torn
Reply With Quote
Thanks, from:
beyelzu (12-02-2008), Farren (11-29-2008)
  #67  
Old 12-02-2008, 11:43 AM
Luke Culpitt's Avatar
Luke Culpitt Luke Culpitt is offline
Member
 
Join Date: Aug 2008
Posts: XX
Default Re: Chinese Room experiment

Maybe I'm just thick, but if you allow for a machine that can pass the Turing test, what difference does it make to replace a mindless cog in that machine with Searle, who performs the same functions as, and supposedly understands no more (about Chinese) than, the mindless cog? What does that prove?
Reply With Quote
  #68  
Old 12-02-2008, 05:46 PM
seebs seebs is offline
God Made Me A Skeptic
 
Join Date: Jul 2004
Location: Minnesota
Posts: VMMMCLXX
Images: 1
Default Re: Chinese Room experiment

Quote:
Originally Posted by Luke Culpitt View Post
Maybe I'm just thick, but if you allow for a machine that can pass the Turing test, what difference does it make to replace a mindless cog in that machine, with Searle, who performs the same functions as, and supposedly understands no more (about Chinese) than, the mindless cog? What does that prove?
It distracts the reader.
__________________
Hear me / and if I close my mind in fear / please pry it open
See me / and if my face becomes sincere / beware
Hold me / and when I start to come undone / stitch me together
Save me / and when you see me strut / remind me of what left this outlaw torn
Reply With Quote