The most important photography skill you can learn is seeing like a camera. That means, among other things:
- Abandoning three-dimensional vision. A camera flattens everything into two dimensions. All the unconscious information about the depth and relative size of objects in a scene disappears. All the information binocular vision gives us to separate objects from each other is gone. All that is left is tone, color, sharpness, and local contrast, plus the viewer’s assumptions about the relative size of known objects. An effective composition uses these to create the illusion of what is not present in a photograph.
- Eliminating selective perception. Why didn’t you see the guy wearing the T-shirt with the obscene message behind your friend when you were taking her picture? Because you were paying too much attention to your main subject. As a photographer, you have to cultivate an attitude of detachment from the scene, treating each object in it as potentially of equal value and weight. Short of photoshopping things out in post, you are pretty much stuck with everything in the frame.
- Photographing light, not stuff. A CMOS or CCD sensor is not a stuff sensor. It’s a light sensor. You are, as the word literally means, writing with light. That light reflects off, contours, limns, silhouettes, and colors the physical objects of the world, then makes its way to the sensor or the film. If you are totally focused on what cool, funny, amazing, or interesting stuff you are photographing, you will be confounded by how dull a photograph it makes. Seeing like a camera means seeing light first. Yes, photographs are about things, but weirdly, they are not pictures of things. They are pictures of light.
- Working within the limitations of your camera system’s dynamic range and depth of field. In any given moment the optics of your eye are probably much more limited than your camera’s lens and sensor, which are miracles of modern engineering and manufacturing. However, your eyes are connected to your brain, which is a massive neural network processing system devoted to creating the illusion of unlimited dynamic range and depth of field. If you’re shooting with the native camera app on an iPhone XS, you are using one of the first systems that attempts to emulate some of that neural processing. Otherwise, you need to learn that the camera cannot always, in a single exposure, capture all the tones in the scene or keep everything in focus the way you experience it through your visual system.
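The single-exposure limitation can be sketched as a toy model. This is a minimal illustration, not a real sensor model: the luminance values and sensor range below are made-up numbers, and the "sensor" simply clips any signal outside its range, the way highlights blow out and shadows block up in a real capture.

```python
# Toy model: a sensor with limited dynamic range clips scene tones
# that fall outside its usable range. Luminance values are hypothetical.

def capture(luminance, exposure, floor=0.01, ceiling=1.0):
    """Scale scene luminance by exposure, then clip to the sensor's range."""
    signal = luminance * exposure
    return min(max(signal, floor), ceiling)

# Hypothetical high-contrast scene: deep shadow, midtone, bright sky.
scene = {"shadow": 0.005, "midtone": 0.18, "sky": 4.0}

for exposure in (0.25, 1.0, 4.0):  # under-, normally, over-exposed
    rendered = {name: capture(lum, exposure) for name, lum in scene.items()}
    print(exposure, rendered)
```

No single exposure holds all three tones: expose for the sky and the shadow clips to the floor; expose for the shadow and the sky clips to the ceiling. That is exactly the gap your brain papers over and your camera cannot.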
- If you are shooting in black and white, becoming sensitive to luminance rather than color, and in particular recognizing that objects visually distinguished by color may merge together when color information is stripped out. This can be mitigated by shooting in color and remapping colors to tones in post, but if you are using black and white film or a monochrome sensor, the die is cast with the click of the shutter.
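The color-merging problem above can be shown with a quick calculation. The luma weights below are the real Rec. 709 coefficients used for HD video; the two specific colors are contrived for illustration, chosen so their luminances match.

```python
# Two clearly different colors can map to the same gray tone.
# Weights are the standard Rec. 709 luma coefficients.

def luma(r, g, b):
    """Approximate luminance of a linear RGB triple (Rec. 709 weights)."""
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

red = (1.0, 0.0, 0.0)                 # a saturated red
green = (0.0, 0.2126 / 0.7152, 0.0)  # a dimmer green picked to match it

print(luma(*red))    # ≈ 0.2126
print(luma(*green))  # ≈ 0.2126 — in black and white, the two merge
```

To your eye the red and green are unmistakably distinct; to black and white film or a monochrome sensor they are the same tone, which is why colored filters on the lens (or channel mixing in post) matter so much in monochrome work.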
There is lots more to learn, and I feel that composition skills come a close second, but I think this process of learning to “unsee” is the hardest and takes the longest. As a personal anecdote, about ten years ago I began to develop multiple cataracts in one of my eyes, and I slowly lost my stereo vision over the course of several years. During that time my photography improved significantly, and I attribute much of that improvement to the enforced training I received in how to see the world in two dimensions. As an exercise, you may want to cover one eye with an eyepatch while shooting. If people look at you funny, just tell them you’re a pirate.