The argument from intentionality (AFI) relies on the claim that one physical state can't be "about" another. That is to say, intentionality, the property of mental phenomena being directed upon some object, can't reduce to physical states of the brain, and because minds clearly have this capability, dualism must be true. Most simply: A can't be "about" B if both are purely physical. Many dualists argue this is because intentionality is fundamentally irreducible, and that to reduce it would be to explain something else entirely. They claim any attempt to reduce intentionality to something nonmental will always fail because it leaves out intentionality itself. As the philosopher John Searle argues:
Suppose for example that you had a perfect causal account of the belief that water is wet. This account is given by stating the set of causal relations in which a system stands to water and to wetness and these relations are entirely specified without any mental component. The problem is obvious: a system could have all those relations and still not believe that water is wet… You cannot reduce intentional content (or pains, or "qualia") to something else, because if you did they would be something else, and it is not something else. - The Rediscovery of the Mind, p. 51
There are many variations of this argument, and many include unnecessary complications, but in essence the AFI claims intentionality can't arise from physical systems because… well, because proponents can't imagine a way to reconcile their first-person ideas of intentionality with physical descriptions of the processes of the brain. Of course they will deny this is what's happening, claiming instead that something in the fundamental nature of thinking "about" something makes it irreducible, but this line of thinking generally has two fundamental problems. The first, and most obvious, is that arguing something can't be physical because you can't understand how it is physical makes this, like its familial arguments against naturalism, an argument from ignorance. Worse yet, like those arguments, as you'll soon see, this is an argument from ignorance on a topic for which we already have a reasonable answer. So even in the absence of the kind of physical description of intentionality you'll find below, the argument would still not be effective, because it rests on specious reasoning.
So how can intentionality arise in a physical system? It's a complex question, and admittedly my explanation here won't cover all aspects of it, but the answer will lead me to the second general problem which almost always accompanies the AFI: it commits the fallacy of composition. Just because the parts of something lack a property doesn't mean the combination of those parts can't have that property. Atoms don't have wetness, but that doesn't mean combinations of atoms can't have wetness. With that said, how exactly do I know my thoughts are about something?
Imagine I think of a brick. What makes this thought "about" a brick? Well, my thought "about" a brick is based on past experiences I've had with bricks, and the thought is just a model of those experiences. The model is only "about" a real object in the sense that it allows me to recognize the corresponding real brick if I encounter it. In other words, my intentionality rests on my ability to form a model "about" objects and to recognize what that model is supposed to represent. How exactly does this come about physically?
Well, very roughly: I see a brick, and the image of the brick feeds into my brain via my visual cortex. Neural networks in my brain learn this image and can play it back through that same visual cortex at different scales and in different locations. Later, when I don't have the direct stimulus but think "about" a brick, what I am thinking of is the external object that would create in my visual cortex the same pattern my thought is currently producing. Granted, this is a very rough and incomplete sketch (though one I could outline in greater detail if I so desired), but already the question must be asked: where does the need for something non-physical come in? It seems apparent that this type of model could readily be extrapolated or generalized to every instance of "aboutness."
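To make the idea concrete, here is a toy sketch, in code, of the learn-then-recognize loop described above. It is only an illustration under some simplifying assumptions I'm introducing myself: percepts are reduced to small feature vectors, "learning" is just averaging past experiences into a prototype, and "recognition" is closeness to that prototype. The class name, feature values, and threshold are all hypothetical, not a claim about how brains actually encode anything.

```python
import math

class AboutnessModel:
    """Toy model: store a pattern learned from past experiences,
    'play it back' on demand, and recognize matching stimuli."""

    def __init__(self):
        self.prototype = None  # running average of experienced feature vectors
        self.count = 0

    def experience(self, features):
        # Learn: fold each new percept into the running average.
        if self.prototype is None:
            self.prototype = list(features)
        else:
            self.prototype = [(p * self.count + f) / (self.count + 1)
                              for p, f in zip(self.prototype, features)]
        self.count += 1

    def recall(self):
        # 'Thinking about' the object without direct stimulus:
        # reproduce the stored pattern.
        return self.prototype

    def recognizes(self, features, threshold=1.0):
        # The model is 'about' whatever external object would
        # produce a pattern close to the stored one.
        dist = math.sqrt(sum((p - f) ** 2
                             for p, f in zip(self.prototype, features)))
        return dist < threshold

brick = AboutnessModel()
brick.experience([0.9, 0.2, 0.4])   # hypothetical feature vectors from past bricks
brick.experience([1.0, 0.3, 0.5])
print(brick.recognizes([0.95, 0.25, 0.45]))  # brick-like stimulus
print(brick.recognizes([0.0, 5.0, 3.0]))     # nothing like a brick
```

The point of the sketch is only that every step, storing, replaying, and matching a pattern, is an ordinary physical operation; nothing nonmental is left out and nothing nonphysical is smuggled in.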
This really is a very simple physical model, but at no point did it need some supernatural ghost in the machine to account for any step. With this simple system, intentionality was reduced to the kinds of modeling, generalization, and pattern recognition that physical systems can produce. The only ways I see to avoid this conclusion are either to simply insist that intentionality isn't reducible ("no, that isn't intentionality!") or to simply fail to consider what it means for thoughts to be "about" something. Of course, dualists could also argue that we can't yet create physical systems, computers, that can do what I claim brains do in the model above, but again, to say we currently can't is far different from saying it's impossible.
The relationship of minds to the external world has a long and interesting philosophical history but one of the defining aspects is the repeated claim by dualists that some aspect of mental life can’t be explained by merely physical phenomena simply because they can’t imagine how it can be. The argument from intentionality seems to be just yet another case in which in which dualists are demonstrably wrong about an aspect of mental life.