Views : 59,099
Genre: Science & Technology
License: Standard YouTube License
Uploaded: Mar 22, 2023 ^^
warning: returnyoutubedislikes may not be accurate, this is just an estimate ehe :3
Rating: 4.83 (108/2,427 LTDR)
95.74% of the users lieked the video!!
4.26% of the users dislieked the video!!
User score: 93.61 - Overwhelmingly Positive
RYD date created: 2024-09-15T00:29:08.666064Z
Top Comments of this video!! :3
Searle did not imagine a rulebook that translated Chinese into English. The entire point is that English is not involved at all, and the person in the Chinese Room has no idea what they are actually communicating. They are just following rules in a rulebook that are completely meaningless to them. That is the underlying basis of the thought experiment.
68 |
This remains an extremely powerful case against strong AI - that's why so many AI advocates resort to quantum flapdoodle in desperation. If you've ever programmed a CPU to do anything, the lesson is reinforced: put this number into that register, divide it by this number, store it at this address, etc. It's claimed that neural nets are different, but in principle, at their core, I think not.
1 |
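For readers who haven't programmed at that level, here is a rough sketch of the register-level steps the commenter means, written as a toy machine in Python (the instruction names and the little program are invented for illustration, not any real instruction set):

```python
# Toy register machine: every step is "put a number somewhere, operate on it,
# store it". Nothing in the machine refers to what the numbers mean.
registers = {"r0": 0, "r1": 0}
memory = [0] * 16

program = [
    ("load_imm", "r0", 84),    # put this number into that register
    ("load_imm", "r1", 4),
    ("div",      "r0", "r1"),  # divide it by this number: r0 = 84 // 4 = 21
    ("store",    "r0", 7),     # store it at this address
]

for op, a, b in program:
    if op == "load_imm":
        registers[a] = b
    elif op == "div":
        registers[a] //= registers[b]
    elif op == "store":
        memory[b] = registers[a]

print(registers["r0"], memory[7])  # 21 21
```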
The Chinese Room thought experiment has always been a non-starter for me, because Searle uses the term "rule book" very loosely. There have always been travel translation books that translate one phrase to another. Does such a book understand its responses? Of course not. If that's all Searle's rule book is doing, a kind of "when you see this, respond with this" rule book, then of course he won't understand, and there's no conflict here.

But if the rule book is one that teaches words and grammar, still all in Chinese, then Searle will slowly learn Chinese the same way a child learns their first language, and he will eventually understand it. He will start to see associations: "I've seen this word before, but in a different context, and it's related to this and that usage", etc. He'd start learning simple words like Yes and No as they appear in answers, he'd pick up frequently used groupings of words as meaning something by themselves, and he'd be able to differentiate questions from statements. And any AI that is pulling together answers in this way is doing the same thing. It will come to tell Yes from No, and so on, as it associates the components of language from conversational experience. The only thing the AI will not have (at least not at this point) is a sense of subjectivity, or "feelings" about what the words mean to itself.
2 |
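A rough sketch of the two readings of "rule book" in the comment above, in Python; phrase_table, observe, and the sample sentences are invented for illustration:

```python
from collections import Counter

# Reading 1: a fixed phrase table -- "when you see this, respond with this".
phrase_table = {"how are you?": "i'm fine."}

def phrasebook_reply(utterance):
    return phrase_table.get(utterance, "...")

# Reading 2: a learner that tracks which words co-occur across exchanges,
# the kind of association-building the commenter describes.
associations = Counter()

def observe(question, answer):
    for q in question.split():
        for a in answer.split():
            associations[(q, a)] += 1

observe("how are you?", "i'm fine.")
observe("how was lunch?", "it was fine.")

print(phrasebook_reply("how are you?"))  # i'm fine.
print(associations.most_common(2))       # ('how', 'fine.') shows up twice
```

The first reading retains nothing between exchanges; the second at least accumulates the cross-context associations the commenter describes, though whether counting co-occurrences amounts to understanding is exactly what is in dispute.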
The problem with this analogy, when applied to an intelligent agent, is that "you", the part that reads the rule book and gives the correct response, are not really supposed to understand what you are saying. Your eyes can "read" words and your mouth can output a response, but your eyes and your mouth don't understand English. Your brain does. And in this analogy, the brain is the rule book.
Just because a part of an agent doesn't understand what it is doing, it doesn't necessarily follow that the agent as a whole doesn't either.
1 |
If emotions, memory, predisposed assumptions, etc. are added to a robot's ability to understand, then what is the difference between us and them? Nothing. (Except that, by the very nature of the fact that it was constructed and can improve itself, it's very possible they will very quickly become far more intelligent than us.)
1 |
@hexeddecimals
1 year ago
A correction about the Chinese Room thought experiment: the manual doesn't tell you how to translate a Chinese sentence into an English one. It tells you how to construct a response in Chinese to the Chinese input. For example, the manual could tell you that the response to the input "你好吗?" ("How are you?") is "我没事" ("I'm fine").
175 |
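Taking the corrected manual above at face value, it is a pure Chinese-in, Chinese-out mapping; a minimal sketch in Python, where everything beyond the quoted input/response pair (the room function and the fallback phrase) is invented for illustration:

```python
# The manual as the comment describes it: Chinese input mapped straight
# to a Chinese response. The person applying it (this function) matches
# symbols; no translation and no English are involved anywhere.
rulebook = {
    "你好吗?": "我没事",  # "How are you?" -> "I'm fine"
}

def room(message: str) -> str:
    # Invented fallback: "please say that again" in Chinese.
    return rulebook.get(message, "请再说一遍")

print(room("你好吗?"))  # 我没事
```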