OK Computer: Computational photography is here to stay!


Whilst many photographers remain deep in conversation concerning the evolution from analog to digital photography—and the positives and negatives of both—the world of photographic technology has paid little heed to their protestations and continued on with its research and development. As a result, we find ourselves today — in fact we have been here for some time — in the age of computational photography.

In 2014, the World Press Photo Foundation, the publishers of Witness, commissioned research on "The Integrity of the Image" to assess contemporary industry standards worldwide. The report set out to establish an understanding of the ethics of the digital image, in particular the issue of post-production manipulation, in response to widespread concerns within the photographic community about the credibility of news and documentary images. Nearly four years later, things have changed, and understanding that change is what I want and need to address here.

One of the findings outlined in "The Integrity of the Image" is that "we are now in an era of computational photography, where most cameras capture data rather than images. This means that there is no original image, and that all images require processing to exist." The report continues:

In the digital era, we still think of the camera [as] a picture-making device. This, however, is a mistake. In the digital era, we need to understand the camera as a data-collection device, a device which is “gathering as much data as you can about the scene, and then later using advanced computational techniques to process that data into the final image…”

However, we are no longer in the digital era; we are now in the computational one. Pedantic? No, accurate. Not yet in all areas of photographic capture, but certainly in the principal one: smartphone photography.

According to “The Integrity of the Image”:

Debates about digital manipulation often proceed in terms of how images are captured in camera and then post-processed outside the camera. However, this is a rendering of the problem dependent on an analogue view of photography, one which fails to appreciate the radical changes of the digital era. If we understand that digital photography is computational, then every image requires “post-processing” in order to be an image. We have no original image in computational, digital photography. At the point of capture there is only data that has to be processed. This means “post-processing” is a necessity in the making of an image. Therefore, the assumption that we have an in-camera image which can function as the authentic, original image is no longer sustainable.

I agree with much of this, and in 2014 it was true. But things have changed, and this is how.

We are in the early days of computational photography, and its implementation within smartphones is still being developed. It is therefore worth discussing as a potential indicator of the future of photographic capture outside of the smartphone format. Despite all of the technological developments of the medium, a traditional camera, whether analog or digital, is still based on the basic principle of the camera obscura, and as such produces linear-perspective images. A computational camera, however, uses unconventional optics to capture a coded image and software to decode that image into new forms of visual information.

In computational photography, the aim is to achieve a potentially richer representation of a scene during the camera's encoding process. In some cases, this reduces to epsilon photography, where a scene is recorded as a series of multiple images, each captured with a small (epsilon) variation of the camera's parameters. For example, successive images (or neighbouring pixels) may have a different exposure, focus, aperture, view, illumination, or moment of capture. Each setting records partial information about the scene, and the final image is reconstructed from these multiple observations.
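To make that concrete, here is a minimal sketch in Python of the reconstruction step for one epsilon variation, exposure: given a bracketed series of frames and their exposure times, a single radiance estimate is assembled from the well-exposed parts of each frame. The hat-shaped weighting and the assumption of a linear sensor response are simplifications of my own, not any camera maker's actual pipeline.

```python
import numpy as np

def merge_exposure_bracket(images, exposure_times):
    """Estimate scene radiance from an exposure-bracketed series.

    images: list of float arrays in [0, 1], one per exposure.
    exposure_times: matching exposure times in seconds.
    Assumes a linear sensor response; each frame contributes most
    where it is well exposed.
    """
    numerator = np.zeros_like(images[0])
    denominator = np.zeros_like(images[0])
    for img, t in zip(images, exposure_times):
        weight = 1.0 - np.abs(2.0 * img - 1.0)  # peaks at mid-grey, discounts clipped pixels
        numerator += weight * (img / t)         # per-frame radiance estimate
        denominator += weight
    return numerator / np.maximum(denominator, 1e-6)
```

Each frame on its own records only partial information; the merged result is the "reconstruction from multiple observations" described above.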

You may not be aware of the term 'epsilon', but you will most likely have used, or be aware of, a number of its applications already implemented within digital cameras, such as high-dynamic-range imaging, multi-image panorama stitching and confocal stereo imaging. The common thread within all of these techniques is that multiple images are captured in order to produce a composite image of perceived higher quality: richer color information, a wider field of view, a more accurate depth map, less image noise and blur, or greater image resolution.
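One of those techniques, multi-image panorama stitching, is now available off the shelf in libraries such as OpenCV. A minimal sketch, assuming an overlapping series of frames on disk (the file names are placeholders):

```python
import cv2

# Load an overlapping series of frames (placeholder file names).
frames = [cv2.imread(f"frame_{i}.jpg") for i in range(4)]

# OpenCV's high-level stitcher aligns the frames and blends them
# into a single wide-field composite.
stitcher = cv2.Stitcher_create()
status, panorama = stitcher.stitch(frames)

if status == 0:  # Stitcher::OK
    cv2.imwrite("panorama.jpg", panorama)
else:
    print("Stitching failed with status", status)
```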

Computational photography also includes developments that require specialized equipment, such as the light-field, or 'plenoptic', camera. This captures information about the light field emanating from a scene: that is, both the intensity of light in the scene and the direction in which the light rays are traveling through space, in contrast with a conventional camera, which records only light intensity. This is a form of image creation we are most used to experiencing as a hologram. In other cases, computational photography techniques lead to 'coded photography', where the recorded photos capture an encoded representation of the world. The raw sensed photos may appear distorted or random to a human observer, but the corresponding decoding recovers valuable information about the captured scene.
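As a toy illustration of what that extra directional information allows: given a four-dimensional light field L[u, v, y, x], where u and v index the sub-aperture views, refocusing after capture reduces to shifting each view in proportion to its offset from the centre view and averaging, the classic shift-and-add method. The sketch below uses integer pixel shifts for simplicity; the array layout and the alpha focus parameter are illustrative assumptions, not a production implementation.

```python
import numpy as np

def refocus(light_field, alpha):
    """Synthetically refocus a 4D light field L[u, v, y, x].

    alpha sets the virtual focal plane: each sub-aperture view is
    shifted in proportion to its angular offset from the centre
    view, then all views are averaged (shift-and-add refocusing).
    """
    U, V, H, W = light_field.shape
    cu, cv = U // 2, V // 2
    out = np.zeros((H, W), dtype=np.float64)
    for u in range(U):
        for v in range(V):
            dy = int(round(alpha * (u - cu)))
            dx = int(round(alpha * (v - cv)))
            out += np.roll(light_field[u, v], shift=(dy, dx), axis=(0, 1))
    return out / (U * V)
```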

Still with me on this? I hope so.

The sense that photography is now in the hands of coders focused on the photographic capabilities of the smartphone was further illustrated by the 2017 launch of Apple's iPhone 8 Plus and iPhone X, which Apple promoted on the claim that their cameras were the best it had ever produced. Its continued quest for these cameras to be seen as "professional" pieces of equipment was evident in its emphasis on software: a Portrait Lighting feature promising to let the user create "professional"-looking images. Amongst the professional photographic community, such claims are often viewed with skepticism and disdain, but the manufacturer's continued desire to deliver what it believes to be professional images gives, I believe, an accurate indication of how these cameras will develop in the future.

A number of reviews of the iPhone 8 obsessed over the camera; the website TechCrunch, for example, chose to review the phone purely as a camera, dismissing all of its other functionality. What is interesting about the iPhone 8 is that its camera features do not merely rely on filters; instead, they come closer to computational photography in their ability to sense a scene, map it for depth, and then change the lighting contours over the subject, all in real time. To achieve this, Apple reportedly studied the work of artists and photographers such as Richard Avedon and Annie Leibovitz to inform what its software was able to mimic and imitate. This attention to detail when designing the camera functionality within a smartphone must point not only to how we will capture what we see in the future, but also to what we will use to document what we see and how we will view the medium of photography.
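Apple has not published how Portrait Lighting works, but the underlying move, using a per-pixel depth map to light subject and background differently, can be sketched in a few lines. The toy below is my own stand-in, not Apple's algorithm; the threshold and dimming factor are arbitrary, and a real pipeline would feather the mask rather than cut it hard.

```python
import numpy as np

def stage_light(image, depth, near_threshold=0.5, background_dim=0.15):
    """Toy depth-aware relighting: keep the near subject lit and
    fade the background toward black, loosely mimicking a 'stage
    light' portrait effect.

    image: float array (H, W, 3) in [0, 1].
    depth: float array (H, W) in [0, 1], smaller = closer.
    """
    subject_mask = (depth < near_threshold).astype(np.float64)  # hard cut-out
    gain = background_dim + (1.0 - background_dim) * subject_mask
    return image * gain[..., np.newaxis]
```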

Apple is not a traditional camera manufacturer recognized by professional photographers in the same way as Nikon, Canon or Leica. It does not have a photographic history, and many believe that this form of computational implementation merely leads to a “dumbing down” of the photographic medium. But the photographic medium is changing, not only in its capture but also in its forms of engagement. We are now in an environment that includes virtual reality (VR) and augmented reality (AR), both of which are revolutionising our forms of engagement with the medium. Based primarily on the imposition of objects, characters and filters to adapt our photographic viewing, AR is the logical next step in a lineage of image enhancement that can be traced back to colored lens filters, practical effects, polarizers, digital CGI, and most recently, Snapchat filters.

We engage with AR without realizing it every time we use our smartphones; their cameras and each new camera function are a direct result of AR development. But it is in the ability to recognize an object without needing to record that object as a photograph that the opportunities for computational photography to affect our daily lives become most exciting. If we see AR as a broad computational platform, and not just as a collection of individual apps and software fixes to manipulate images, its possibilities are endless and life-changing, as it turns your phone into an all-seeing, connected tool. By capturing an image and transferring it, AR will be able to use the digital image as an information artifact to instigate actions and implement a decision-making process. VR is changing the way in which we see, but AR will be able to change the way in which we act.

What we are talking about here is a radical change in the moment of capture and a re-imagining of the basic process of what photography is. Whereas photography offers an interpretation of a situation controlled by the photographer, computational photography places control of what is interpreted into the hands of software that aims to create a heightened experience of what is seen, potentially replicating the experience of being present at the moment of capture. This heightened experience, in turn, alters our understanding of what we see and how we see. We have already seen this expectation in the saturated color and extreme contrast of so much digital photography. The digital images we see are rarely the world we experience, thanks to the pre-programmed nature of our digital visual devices and their desire to present images that are bright, colorful and sharp.
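That "bright, colorful and sharp" default is easy to state in code. The fragment below is a crude stand-in of my own for a device's baked-in rendering, not any manufacturer's actual pipeline; the boost factors are arbitrary.

```python
import numpy as np

def default_look(image, saturation=1.4, contrast=1.2):
    """Crude stand-in for a baked-in device rendering: push
    saturation and contrast so the result reads as punchier
    than the measured scene.

    image: float array (H, W, 3) in [0, 1].
    """
    grey = image.mean(axis=2, keepdims=True)
    boosted = grey + saturation * (image - grey)  # amplify colour away from grey
    boosted = 0.5 + contrast * (boosted - 0.5)    # stretch contrast around mid-grey
    return np.clip(boosted, 0.0, 1.0)
```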

In that sense, we have already left the photographic world of accurate representation and entered the hyper-realism of computer-coded image making, as “The Integrity of the Image” report outlined. However, we should no longer be solely focused on the process of manipulation by the photographer post-capture; we now need to be aware of a similar form of manipulation being placed in the heart of the device that we are using at the very moment of capture.


Grant Scott is the founder/curator of United Nations of Photography, a Senior Lecturer in Editorial and Advertising Photography at the University of Gloucestershire, a working photographer, and the author of Professional Photography: The New Global Landscape Explained (Focal Press 2014) and The Essential Student Guide to Professional Photography (Focal Press 2015). His next book, New Ways of Seeing: The Democratic Language of Photography, will be published by Bloomsbury Academic in 2018.

His documentary film, Do Not Bend: The Photographic Life of Bill Jay (see donotbendfilm.com) will be screened across the UK and the US in 2018.

You can follow Grant on Twitter and on Instagram @UNofPhoto.

Text © Grant Scott 2018
