The philosopher and cognitive scientist Andy Clark argues that our minds are best understood as generative prediction models and that our subjective experience derives from the interface between the model's predictions and the incoming sensory signal. We use our senses to validate our expectations, and we use "prediction errors" (i.e., the mismatch between prediction and the actual sensory signal) to train our model to make better predictions.
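To make the idea concrete, here is a minimal sketch of that loop under my own toy assumptions (a one-parameter linear model tracking a sine wave; none of this is Clark's formalism): the model predicts the next sensory sample, and the prediction error drives the learning update.

```python
import numpy as np

# Toy predictive-processing loop. The model's top-down prediction is compared
# with noisy bottom-up input, and the prediction error nudges the model's
# parameter to make better predictions. Names and model are illustrative.
rng = np.random.default_rng(0)

weight = 0.0                               # the model's single learned parameter
learning_rate = 0.05
true_signal = lambda t: np.sin(0.1 * t)    # the "world" the senses report on

for t in range(200):
    sensory_input = true_signal(t) + rng.normal(scale=0.1)  # noisy sense data
    prediction = weight * np.sin(0.1 * t)                   # top-down guess
    prediction_error = sensory_input - prediction           # mismatch signal
    weight += learning_rate * prediction_error * np.sin(0.1 * t)  # reduce error

print(f"learned weight: {weight:.2f} (ideal: 1.0)")
```

After a couple hundred samples the weight converges toward 1.0: the model has learned to predict the signal, and the residual prediction error is just sensory noise.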
I wholeheartedly agree that we impose our understanding of the world onto raw sensory data, and that indeed there is no such thing as raw sensory data "untouched by our own expectations." I share most of Clark's conclusions, but I find his arguments and conceptualizations hand-wavy. I was particularly unconvinced by his explanation of "action as self-fulfilling prediction."
Clark (or his publisher) tries to present this theory as revolutionary: "For as long as we've studied human cognition, we've believed that our senses give us direct access to the world." Really? I don't think we've believed that since the 18th century, and Clark himself cites earlier authorities going back at least to the mid-19th. His thesis is just an au courant version of the long-standing tendency to understand the mind in terms of the latest technological advance, in this case generative AI modeling.
To me, the most intriguing innovation in Clark's conception is his treatment of conscious attention as the adjustment of the model's precision-weightings:
By increasing or decreasing these "precision-weightings," the impact of certain predictions or of certain bits of sensory evidence can be amplified or dampened. ... What we informally think of as "attention" is implemented in these systems by mechanisms that alter these precision-weightings.
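The underlying idea is standard Gaussian cue combination, where each source of information is weighted by its precision (inverse variance). Here is a small sketch of how "attending to the senses" might look in those terms; the function and its parameters are my own illustration, not code from Clark's book:

```python
# Precision-weighted fusion of a prior prediction and sensory evidence.
# "Attention" corresponds to raising the precision assigned to one source.

def fuse(prediction, prediction_precision, sensation, sensory_precision):
    """Return the precision-weighted average of prediction and sensation."""
    total = prediction_precision + sensory_precision
    return (prediction_precision * prediction
            + sensory_precision * sensation) / total

# Same inputs, different precision-weightings:
print(fuse(prediction=10.0, prediction_precision=4.0,
           sensation=14.0, sensory_precision=1.0))  # 10.8: trust the model
print(fuse(prediction=10.0, prediction_precision=1.0,
           sensation=14.0, sensory_precision=4.0))  # 13.2: attend to the senses
```

With identical evidence, shifting the precision-weightings moves the experienced estimate from model-dominated to sense-dominated, which is exactly the role Clark assigns to attention.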