The Chinese Room argument
Imagine a man sitting in a room with a rule book of computer-like instructions for responding to Chinese messages. The man doesn't speak Chinese, but he can produce correct answers to any Chinese prompt he receives simply by following the room's instructions, which are based purely on manipulating Chinese characters. Now picture someone outside, oblivious to who is inside the room, passing slips of paper with written Chinese questions under the door. As correct answers come back, the person outside may assume that someone inside the room speaks Chinese, even though the man doesn't understand a word of it. The program therefore enables the man in the room to pass the Turing test.
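The rule-following setup can be sketched as a simple lookup table. This is only an illustrative toy, not a serious model of Searle's thought experiment or of an LLM; the Chinese phrases and the fallback reply are hypothetical placeholders chosen for the example.

```python
# A toy "Chinese Room": a rule book (here, a lookup table of
# hypothetical question/answer pairs) maps input symbols to output
# symbols. The operator applies the rules mechanically, without
# knowing what any symbol means.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "今天天气很好。",  # "How's the weather?" -> "It's nice today."
}

def room_operator(message: str) -> str:
    """Return the answer dictated by the rule book, or a fallback symbol."""
    # The operator matches shapes, not meanings: a dictionary lookup.
    return RULE_BOOK.get(message, "对不起，我不明白。")  # "Sorry, I don't understand."

print(room_operator("你好吗？"))
```

From the outside, the scripted replies look like competent Chinese; inside, there is only symbol matching, which is exactly the intuition the thought experiment trades on.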
This is the Chinese Room Argument (CRA), developed by Berkeley philosopher John Searle in his 1980 paper "Minds, Brains, and Programs".
Applied to the latest developments in large language models (LLMs), the Chinese Room Argument implies that machines, LLMs included, cannot truly understand the language they manipulate when completing strings of text. However, the argument has faced considerable criticism since its inception. For instance, some critics argue (in what is often called the "systems reply") that a machine's ability to answer questions correctly must indicate some level of understanding of language within the system as a whole, even if not in the man himself.
How do current AI and LLMs compare to the hypothetical man in the room when it comes to processing and understanding language inputs?