
Why isn’t camera image quality improving as fast as smartphone IQ?

Smartphone image quality has come on in leaps and bounds in the past decade. Not every phone delivers results that hold up to pixel peeping, but the exposure, color, and noise performance increasingly outstrips the requirements of most people. At a time when social media is the main destination for most photographs and smartphones are the most common way to view them, smartphones passed the ‘good enough’ threshold for most people some time ago.

Smartphones, like Apple’s iPhone 14 Pro, outnumber standalone cameras by a wide margin, and the image quality is more than “good enough” for most people.

So why isn’t mirrorless camera IQ improving at a similar rate? There are a number of factors, but it’s not because large sensors are missing out on the latest technology.

Small sensors have different needs

It’s true that smartphones receive the latest sensor technologies before dedicated cameras do. You could put it down simply to the relative size of the two markets: estimates suggest around 1.45B smartphones will be sold in 2023, whereas the market for interchangeable lens cameras (ILCs) is expected to be around 5.8M units. Of course development will focus on the market that’s roughly 250x larger (1.45 billion ÷ 5.8 million ≈ 250).

But smartphones have rather different requirements. To keep the phones small and battery usage under control, smartphones need tiny sensors, and that creates demand for tiny, tiny pixels.

Hence smartphone and compact camera sensors got backside illumination (BSI) technology years before large sensors did. In very small pixels, there was a major low-light benefit to be had by flipping the sensor and pushing the wiring to the back. In large sensors, the wiring makes up a much smaller proportion of each pixel, so there’s much less of a benefit to be had. Instead, the benefits come from putting the light-sensitive region at the front of the pixel, letting the pixels at the edge of the sensor receive light at much more acute angles. Moving the wiring to the back also brings the freedom to build more complex (i.e. faster) readout circuitry.
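To get a sense of the scale involved, here’s a minimal sketch of the geometry, assuming a front-illuminated pixel loses a fixed-width border of its face to wiring; the 0.25μm border and the pixel pitches are illustrative numbers, not real process figures:

```python
# Rough illustration of why BSI matters more for tiny pixels: assume a
# front-side-illuminated pixel loses a fixed-width wiring border from
# its light-gathering face. All numbers are illustrative, not real
# process figures.

def fill_factor(pixel_pitch_um, wiring_border_um=0.25):
    """Fraction of the pixel face left open for the photodiode."""
    open_side = max(pixel_pitch_um - 2 * wiring_border_um, 0)
    return open_side ** 2 / pixel_pitch_um ** 2

for pitch in (1.0, 1.4, 4.0, 6.0):  # μm: phone-sized up to full-frame-sized pixels
    print(f"{pitch:.1f}μm pixel: ~{fill_factor(pitch):.0%} light-gathering area")
```

Under these toy assumptions, a 1μm phone pixel loses around three quarters of its face to wiring while a 6μm pixel keeps the large majority, which is why moving the wiring behind the photodiode was transformative for one and merely incremental for the other.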

New technologies like Stacked CMOS make their way to cameras eventually, but with a very different balance of cost and capability.

Likewise, Stacked CMOS technology arrived in smartphones around five years before it cropped up in the Sony a9. Again, in smartphones this allowed more space for the photodiode section of the pixel, enabling smaller pixels and faster readout. The technology is more difficult to produce in large sensors, so for now we’re only seeing it in cameras that need that fast readout. The next development in Stacked CMOS for smartphones appears to be further separating the elements of the pixel, allowing small pixels with greater storage capacity to help boost their dynamic range; but again, this is focused on a problem unique to the tiniest pixels.
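Fast readout matters because sensors without a global shutter read one row at a time, so the full scan time determines how much rolling shutter distortion you see. A back-of-envelope comparison, with row readout times that are invented placeholders rather than measured figures for any real sensor:

```python
# Total rolling-shutter scan time ≈ number of rows × time to read one row.
# Row times below are invented placeholders, not measured sensor specs.

def readout_ms(rows, row_time_us):
    """Full-frame scan time in milliseconds."""
    return rows * row_time_us / 1000

print(f"conventional large sensor:  {readout_ms(6000, 5.0):.1f} ms")  # ~30 ms
print(f"stacked large sensor:       {readout_ms(6000, 0.8):.1f} ms")  # ~4.8 ms
print(f"small stacked phone sensor: {readout_ms(3000, 1.0):.1f} ms")  # ~3 ms
```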

Similarly, non-silicon approaches such as Quad Bayer/Tetracell color filter layouts are being adopted, giving the option of combining pixels in low light, using different exposures or gain levels on alternate lines of pixels to boost DR in high-contrast situations, or trying to deconvolve the full nominal resolution in bright light. Again, these are workarounds for the challenges presented by sub-1μm pixels, and they would bring less benefit when applied to large sensor cameras (though there are some Quad Bayer and Quad Pixel AF cameras that don’t always promote or acknowledge the technology).
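The low-light binning mode is the easiest of those options to picture: each 2×2 group of pixels sharing a color filter is averaged into a single output value. A minimal numpy sketch of just that step, with Poisson samples standing in for a noisy capture:

```python
import numpy as np

def bin_2x2(mosaic):
    """Average each 2x2 same-color block of a Quad Bayer mosaic.
    Averaging four samples cuts random noise in half (1/sqrt(4))."""
    h, w = mosaic.shape
    return mosaic.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

raw = np.random.poisson(lam=4, size=(8, 8)).astype(float)  # toy noisy capture
print(bin_2x2(raw).shape)  # (4, 4): quarter the resolution, half the noise
```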

A rising tide…

To a degree, all dedicated cameras benefit from these developments. The technologies, and the fine-scale production lines they’re made on, are later used to make larger sensors. This can result in a situation where the worst of the development costs have already been borne by the smartphone market, rather than being shouldered by camera buyers.

But the closer you look at the latest technology, the less there is to suggest that more money spent directly on large sensors would result in a significant improvement in IQ. The leaps forward in smartphone image quality in the past few years have come from the sophisticated alignment and combination of multiple shots, along with machine learning-derived processing. Sensors have contributed to this through faster readout but not by any inherent improvement in the quality of their output that is somehow being denied to large sensor users.
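To see what that faster readout is actually enabling, consider the simplest form of multi-frame merging: averaging N aligned frames reduces random noise by roughly √N. A toy demonstration of that statistical core (real pipelines also align frames, reject motion, and layer machine-learning processing on top):

```python
import numpy as np

# Averaging N aligned frames cuts random noise by ~sqrt(N). This toy
# ignores alignment and motion rejection, which real pipelines need.

rng = np.random.default_rng(0)
scene = np.full((100, 100), 50.0)                       # "true" scene brightness
frames = [scene + rng.normal(0, 10, scene.shape) for _ in range(8)]

merged = np.mean(frames, axis=0)
print(f"single-frame noise: {np.std(frames[0] - scene):.1f}")  # ~10
print(f"8-frame merge noise: {np.std(merged - scene):.1f}")    # ~10/sqrt(8) ≈ 3.5
```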

Quad Bayer offers several clever ways to get around the limitations of the very small pixels used in smartphones. It’s seen limited use in dedicated cameras, so far.

It’s difficult even for the most expensive large Stacked CMOS sensor to match the super-fast readout of a small sensor with tiny pixels, so it’s not easy to deliver these computational approaches in a large sensor camera (even assuming camera makers could match the amount of R&D the likes of Apple and Google are throwing at the problem). But would you want them to? It’s worth stopping to think about what photography as a pursuit is for. Is it simply to get an attractive version of whatever you point the camera at, or does understanding the tools and making decisions about how to capture and convey the image play a role?


It may be that some technologies become fundamental to smartphone photography yet irrelevant to dedicated cameras. For instance, the adoption of time-of-flight sensors that measure depth in the scene makes it easier for smartphones to selectively blur backgrounds, but is that something most photographers want in their ILCs?
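As a sketch of what that depth data enables: given a per-pixel depth map, anything behind the subject can be swapped for a blurred copy. This is a toy mask-and-blend, not any vendor’s actual portrait-mode pipeline:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fake_bokeh(image, depth_m, subject_depth_m=1.5, sigma=5):
    """Blur everything deeper than the subject. This uses a hard mask;
    real pipelines feather the transition and scale blur with depth."""
    blurred = gaussian_filter(image, sigma=sigma)
    background = depth_m > subject_depth_m          # boolean background mask
    return np.where(background, blurred, image)

img = np.random.rand(120, 160)                      # stand-in grayscale image
depth = np.linspace(0.5, 4.0, 160)[None, :].repeat(120, axis=0)  # toy depth map
out = fake_bokeh(img, depth)
```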

The boundary between smartphones and dedicated cameras will, perhaps, be increasingly philosophical, rather than technological. But for now, it’s not obvious that technology is being withheld that would make dedicated cameras better.
