This article is in response to Josh Willms' "screen argument".
This is a rather impressive photo of a waterspout. In the same vein as Josh's investigation into the colours green and red, I ask "Does the waterspout exist?"
Note that in this case I am not speaking about a category associated with the waterspout; I'm not asking whether wetness exists, or twistiness, or whiteness.
Josh's approach to thinking about green and red was relentlessly reductive. We experience red and green in the image of an apple, but where is the image or at least the experience of the image? Eventually he gets to the point where he is looking very closely at the brain and sees … neurons, nothing but neurons.
We could do the same thing with the waterspout. If we get closer and closer to the waterspout (assuming that we don't mind getting wet and we are immune to being tossed about by it), it will eventually disappear and we might see nothing more than water molecules banging together. Is a waterspout more than the water molecules?
Well, yes, of course it is. Despite its name, a waterspout is largely an air-based phenomenon - it's a tornado-like event above water, so we are just seeing water pulled up into the air and tossed around. But even if we accept that, is a waterspout more than the air and water molecules banging together?
Again, yes, it is. The waterspout is a located process.
By this I mean that we are not so much observing a thing that exists but rather we are observing a thing that is happening. And in order for it to happen, just where it is happening, there are factors that have to be conducive to a waterspout - a body of warm water, an air mass and (in order for this to be an observed process so that it becomes an experience) an observer.
I think that this is a major issue in Josh's thinking. He's looking for a thing when he should, at the very least, be accepting the possibility that what he is looking at (consciousness) is more of a process than a thing.
There are factors that are conducive to (and I would go so far as to suggest necessary as a prerequisite to) consciousness happening - like the presence of lots and lots of neurons and sensory input, but consciousness exists in neither the neurons nor the sensory input.
Another problem with the example given by Josh is associated with looking at an apple.
The implication is that a detailed, faithful image of the apple is generated in the brain, similar to the image above as displayed on a screen. Then Josh asks, "where is this image?"
The problem is that Josh is begging the question here. By assuming that there is an image of the sort we believe we experience (which is essentially indistinguishable from saying simply "of the sort that we experience"), Josh is heading off in the wrong direction.
There is plenty of research that indicates that we don't generate and then maintain images directly from our sensory input - at least not comprehensive and faithful images. We don't have the ability to take in as much information as exists in our standard interface with the world. For example, if I stop and look around me, I will see dozens of artefacts which are in my field of view. Then when I go back to typing, the artefacts are still there in my peripheral vision, but I am not strictly looking at them with enough focus to paint in the details - my brain is doing the hard work via imagination and memory to put items in locations which make sense to me.
What seems to happen (and perhaps I might be wrong in this) is that the brain has a bunch of labels and applies them to those things within my field of vision - cup, bottle of water, mouse, screen, roll of tape, aircon controller, plastic bag, cables, piece of paper, pad of post-it notes, receipt, calling card, keyboard, hands, roll of paper. Each of those, if I think about it, triggers a memory such that I don't even have to consciously look at the item. But if I don't think about them, they simply don't exist - like the fork, the battery, the pair of scissors and the two jars that I didn't "see" until I stopped and looked around more carefully to find what I had missed.
These items also exist in a context, so I have a method by which I can place them in my reconstruction of what I "see". (So the cup is to the left and just behind the post-it notes and between them is the pair of previously invisible scissors.)
I think the same happens even with things we are looking directly at. Melvin will break the view of the apple up into labels, "apple", "red skin", "green leaf", "moderate size", "generally unblemished" and this will allow him to regenerate the image if he is required to recall it later. But it is the "regeneration" of the image that he is experiencing, not the image itself.
And furthermore, I am pretty sure that neuroscience has shown that the parts of the brain that are brought to bear in imagining tasks are the same as those used in image processing based on sensory input. So my argument would be that when looking at an apple, Melvin is not experiencing an image per se but is experiencing the process of generating an image. As soon as he stops generating the image, it's gone - there's no created image that an external observer could extract from his brain.
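The labels-and-regeneration idea can be illustrated with a loose computational analogy - not a model of the brain, just a sketch of the general principle that storing compact labels and reconstructing a scene on demand is cheaper than keeping a full image around. All names and details here are hypothetical:

```python
# A loose analogy, not a claim about neural mechanisms: rather than
# storing a full bitmap of the apple, we store a handful of compact
# labels and regenerate a description only at the moment of recall.

scene_labels = ["apple", "red skin", "green leaf",
                "moderate size", "generally unblemished"]

def regenerate(labels):
    """Rebuild a description from labels on demand. Nothing image-like
    persists between calls - the 'image' exists only while this
    regeneration process is running."""
    return "an " + labels[0] + " with " + ", ".join(labels[1:])

print(regenerate(scene_labels))
```

The point of the analogy is that what gets stored is tiny and symbolic; the rich "image" is a transient product of the regeneration process, which matches the suggestion that Melvin experiences the generating, not a stored picture.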
This is all complicated, a little, by "attentional circuitry" in the brain. Most people who looked at the image of the waterspout above will not have noticed that the person who took the photo was standing on a balcony with a metal fence around it to the left, nor that there is a set of stairs heading down to the right towards the beach, nor that what looks like a red chair is close to a brick wall that surrounds the remainder of the balcony. Most would probably have noticed the impressive lensing effect on the sun, the plume to the left of the waterspout, the twisted clouds, the palm tree, the beach, the jetty and the fact that the waterspout is happening in a bay (due to the landmasses to the left and right).