
Until very recently, photography had not changed much in decades. Most advances in film emulsions, lens optics, and camera mechanics were quantitative rather than qualitative. Even the first switch to digital, which essentially amounted to putting a digital sensor where the film used to be, was not revolutionary.

The art of photography is translating the image of the real world into a still picture on some medium (paper, screen, slide). It was never about fidelity, since precisely replicating what the human eye sees is physically impossible. The “art” lies in how the picture is transformed. The arsenal of effects in traditional photography was limited: depth of field, focal distance, motion blur, color curves, vignetting, and various optical distortions such as chromatic aberration and the results of using optical filters, to name a few. Post-processing adds a few more, but let us set those aside for now.

A photographer builds a mental model of the camera system and learns to estimate how changing its parameters will affect the image. Fundamentally, there are very few of them: focus, sensor (ISO) sensitivity, focal length (zoom), aperture, and exposure. For more specialized cases you may also want to know the type of shutter (a running curtain or a circular leaf) and the properties of your optical filters. Most cameras, whether point-and-shoot or SLR, simply help you manage these very same parameters.
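
As a rough illustration of how just a few of these parameters interact, here is a minimal Python sketch of the standard exposure-value formula, EV = log2(N²/t), adjusted for ISO. The function name and the example values are mine, chosen for illustration; they are not from the original post.

```python
import math

def exposure_value(aperture_f_number: float, shutter_seconds: float, iso: float = 100.0) -> float:
    """Exposure value referenced to ISO 100.

    EV = log2(N^2 / t) - log2(ISO / 100)
    A higher EV corresponds to a brighter scene for a correctly exposed shot.
    """
    ev_at_iso100 = math.log2(aperture_f_number ** 2 / shutter_seconds)
    return ev_at_iso100 - math.log2(iso / 100.0)

# f/8 at 1/125 s, ISO 100 -> about EV 13, a typical overcast-daylight exposure.
print(round(exposure_value(8.0, 1 / 125), 1))               # ~13.0
# Opening to f/5.6 and doubling ISO shifts the same settings by about two stops.
print(round(exposure_value(5.6, 1 / 125, iso=200.0), 1))     # ~10.9
```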

The early generations of mobile phone cameras worked exactly like that: you could switch to a “Pro” mode and tweak focus, exposure, aperture, and sensitivity, much as some now long-dead photographer did on his TLR camera 130 years ago!

But unlike cameras, smartphones have formidable computational resources. They can process data from the camera sensor in real time using their fast CPUs and GPUs, and they can do it while the picture is being taken. Suddenly, techniques such as using imperceptible hand motion to capture several slightly offset images, from which a higher-resolution image can be reconstructed, become possible. Another burst technique, HDR, used to achieve better dynamic range, has become so standard that it is enabled by default on some base models. The camera sensors are also evolving.
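
The post does not describe any vendor’s actual pipeline, but the basic burst/HDR idea can be sketched with off-the-shelf tools. The snippet below uses OpenCV’s exposure fusion on a small bracketed burst: the frames are roughly aligned, then blended so the best-exposed parts of each contribute to the result. The file names are placeholders, and this is only a simplified stand-in for the proprietary algorithms phones actually run.

```python
import cv2
import numpy as np

# Placeholder file names: any burst of differently exposed frames of the same scene.
frames = [cv2.imread(f"burst_{i}.jpg") for i in range(3)]

# Roughly align the frames first: handheld bursts are never perfectly registered.
align = cv2.createAlignMTB()
align.process(frames, frames)

# Mertens exposure fusion: blends the best-exposed regions of each frame into a
# single image with a wider apparent dynamic range, without explicit tone mapping.
fused = cv2.createMergeMertens().process(frames)
cv2.imwrite("fused.jpg", np.clip(fused * 255, 0, 255).astype(np.uint8))
```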

Companies have developed sensors that capture information about the direction in which light rays travel through space. Apple popularized phones with dual cameras, which opened up a world of possibilities. Now you can simultaneously take two pictures at different focal lengths, with different parameters, and combine the information from both into a composite image. This can also be combined with burst techniques. And because the two cameras are physically apart, giving two distinct vantage points, one can try to reconstruct some of the 3D features of a scene.
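
As a rough sketch of the depth-from-two-viewpoints idea, here is a classical stereo block-matching example with OpenCV. The file names and parameters are placeholders I chose for illustration; phone vendors use far more sophisticated (and proprietary) methods, but the geometric principle is the same.

```python
import cv2

# Placeholder rectified captures from two horizontally offset cameras.
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block matching across the two viewpoints yields a disparity map:
# nearby objects shift more between the lenses than distant ones.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype("float32") / 16.0  # fixed-point -> pixels

# Given the baseline (lens separation) and focal length in pixels,
# depth is inversely proportional to disparity: depth = focal_length * baseline / disparity.
```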

Comparing digital cameras has become more difficult. You used to look at sensor resolution (megapixels), dynamic range, color response curves, focal length, and the range of apertures and exposures as the starting point for comparing two cameras. This is no longer true. Phones now have multiple cameras, take multiple sensor readings for each picture, and are controlled by complex, AI-driven proprietary algorithms, which are sometimes smart enough to detect the types of objects in your viewfinder and choose the best way to render each of them in a shot.

And this is not because they chose to “dumb it down” for the consumer. The reason is that the set of controls we are used to no longer applies. The world has changed, and many photography skills no longer apply. This real disruption to the camera industry has both good and bad consequences; some are already apparent, while many others remain to be seen. (Ref.: blog of Vadim Zaliva)
