Hey there!
Arnab Chatterjee, from DT Scientific here.
For those of you that aren’t aware, DT is heavily involved in scientific and industrial imaging, with our own internal R&D department always testing and exploring new applications of Phase One technology.
But like most of our team at DT, I’m also a passionate photographer, and spend nearly every weekend out-and-about with some sort of Phase One system. And with the exciting new release of the IQ4 150MP, I was asked to take some time to write a thorough, but accessible technical article on what resolution means, and what the medium format advantage is.
Today, more than ever, with a crowded landscape of high-megapixel cameras at all price points, photographers deserve to know why their hard-earned money is well-invested when they purchase a medium format system. Let’s jump right in.
The Megapixel Arms Race
Since the dawn of the digital camera, the race for higher and higher resolution has been characterized by ever-increasing numbers of megapixels.
From Kodak’s first 0.01 megapixel miracle (or monstrosity, depending on who you ask) in 1975 to today’s 151 megapixel medium format machines, the megapixel wars continue to rage on with no end in sight.
But now that Nikon, Canon, and Sony’s 35mm Full Frame systems are catching up to basic medium format systems in the megapixel race, there are a few questions on everyone’s mind: is medium format really better? Or is it just a gimmick to get people to spend exorbitant sums of their hard-earned money? Isn’t 150 MP too much? Why would anyone need that kind of resolution?
I’ll give you a quick spoiler – medium format does offer significant advantages over 35mm Full Frame systems, and these medium format megapixels work somewhat differently when placed into a larger, higher quality lens/sensor optical system.
We’ll address this further in depth moving forward, but first, we have to ask a very simple question – what is resolution, and what do megapixels have to do with it?
Spot the Difference
Resolution is a word that gets thrown around a lot, and in the camera world these days, it’s largely become defined by a single metric – how many megapixels your camera has.
But do megapixels tell the whole story? Not quite.
The concept of resolution was around long before the first pixels were even dreamed up, let alone when the first sensors were built.
It’s surprisingly straightforward to define:
Resolution refers to the ability of a system to determine that one thing is different than another thing.
That’s literally it. No mention of megapixels, MTF curves, or even cameras at all.
That’s because resolution isn’t unique to cameras – it can apply to anything.
All of our senses (and the associated processing our brain does to actualize them) have varying degrees of resolving power.
I can resolve an apple from a peach by feeling its texture, a friend’s voice from the chatter in a noisy room by listening carefully, or even my ex-girlfriend from her twin sister by paying close enough attention to their appearance and behavior.
For the record, that last example was not a joke, and there were moments where my resolution wasn’t so good. It led to some awkward conversations.
Lost in Space
Personal problems aside, when it comes to cameras, we’re talking about a very specific kind of resolution – spatial resolution, or the ability to differentiate objects based on their location in space.
Now there’s no denying it, space is really cool. Perhaps you dreamed of being an astronaut when you were young, or still feel that sense of cosmic wonder when you look out into the starlit night sky.
I’m sorry to inform you that we’re not talking about the sexy, “Neil Armstrong” kind of space – it’s more like the enigmatic, French, “René Descartes” kind of space.
Descartes is credited with developing the Cartesian coordinate system, by which we identify locations using x and y coordinates. Way less cool, but arguably more useful. Let’s see how this applies to cameras.
I Get Buckets (of Photons)
If you’re the kind of person that spends time reading technical articles by companies like DT, you probably know most of this already, but I want to walk through the process of image formation and highlight some very specific points relevant to the high megapixel discussion.
All digital cameras use lenses to collect photons – tiny particles of light. These lenses focus incoming photons into an image, which is projected onto a sensor. This sensor contains our precious pixels, which take the projected image and chop it up into tiny, micron-sized boxes. Each pixel captures the photons focused on it and converts them into electrical signals, which are then measured and eventually converted into the RGB or LAB values you see in Capture One.
So pixels are like tiny, tiny buckets that catch and count photons. And because each pixel corresponds to a different point in the projected image, it should only receive photons originating from that point on the real-world object. Let’s look at an example.
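Before we get to that example, here’s what the “bucket” idea looks like as a minimal code sketch (Python, with made-up numbers – nothing here is specific to any real camera): a finely sampled projected image gets chopped into pixel-sized boxes, and each pixel simply sums up the photons that land in its box.

```python
import numpy as np

def bin_into_pixels(projected_photons: np.ndarray, pixel_size: int) -> np.ndarray:
    """Chop a finely sampled projected image into pixel-sized boxes and
    count (sum) the photons landing in each box."""
    h, w = projected_photons.shape
    # Trim so the image divides evenly into whole pixels.
    h, w = h - h % pixel_size, w - w % pixel_size
    trimmed = projected_photons[:h, :w]
    # Group the fine samples into (pixel_size x pixel_size) blocks and sum each block.
    blocks = trimmed.reshape(h // pixel_size, pixel_size, w // pixel_size, pixel_size)
    return blocks.sum(axis=(1, 3))

# A toy "projected image": photon counts on a very fine grid.
rng = np.random.default_rng(0)
scene = rng.poisson(lam=50, size=(400, 400))

coarse = bin_into_pixels(scene, pixel_size=4)  # bigger buckets, fewer of them
fine = bin_into_pixels(scene, pixel_size=1)    # smaller buckets, more of them
print(coarse.shape, fine.shape)                # (100, 100) vs (400, 400)
```

The bigger the buckets, the fewer of them you get – which is exactly the trade-off we’re about to dig into.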
Eye Spy
Let’s say I’m taking a macro picture of my eye at 1-to-1 magnification with the brand new Nikon Z7 mirrorless system. That means that the size of each pixel (in this case we’ll say 4 microns to make the math easy) corresponds to 4 microns on my face.
Note that the Nikon Z7 actually has 4.35 micron pixels, but rounding the pixel size down effectively credits the camera with slightly more resolution than it really has, so we’re not shortchanging it here. We’ll see later that there can be some advantages to having larger pixels, but we’ll revert to the true value then, to be fair.
With some rough measurements and quick maths, I can tell you that my eyeball is about 24 by 24 millimeters, which corresponds to roughly 6000 x 6000 pixels, or 36 megapixels. So that’s neat, and now I have a nice pretty photo of my eye.
But knowing my pixel size and its relationship to real-world distances gives me an important ability – I can use my pixels to make measurements on real-world objects!
One pixel spans 4 microns, two pixels correspond to 8 microns, and 250 pixels make one millimeter.
This means that if two points in my image are 4 microns apart, the light coming from each of them should fall into different pixels, and I can tell you how far apart they are.
If they’re further apart than that, then discerning them is even easier – there will be a nice gap of pixels between the objects.
And what did we say earlier? If we can tell that two points are different from one another, we’ve resolved them! This is the basis of the definition of resolution. If I can resolve smaller distances, I can bring out more details in my image, and have higher resolution.
But what happens if my points are less than 4 microns apart from one another? All of a sudden the light coming from those different points falls into the same pixel. When that happens, measurement-wise, those photons appear to have come from the same point in space.
So I can’t resolve those points, or tell you how far apart they are, or even know for sure that there are two points at all! My camera only sees one overall blob.
But what if we add more pixels, without changing anything else? If I cram in four times as many pixels along the same length and width, I’d now be looking at a 576 megapixel image of my eye (a factor of 4 on length and width corresponds to a factor of 16 in area), and each pixel would measure out 1 micron instead of 4. Now I can resolve details even further – so I have even higher resolution. As long as points are at least 1 micron apart from one another, I can discern between them.
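If you want to play with these numbers yourself, here’s the same back-of-the-envelope math as a quick sketch (plain Python, using the same simplified 1-to-1 macro assumptions as above – no lens effects yet):

```python
# Back-of-the-envelope numbers for the 1:1 macro example above.
# Assumes the simplified case where one pixel maps to exactly one
# pixel-sized patch on the subject (no lens effects yet).

def macro_resolution(subject_mm: float, pixel_um: float) -> None:
    subject_um = subject_mm * 1000.0
    pixels_per_side = subject_um / pixel_um
    megapixels = (pixels_per_side ** 2) / 1e6
    print(f"{pixel_um:.0f} um pixels: {pixels_per_side:.0f} x {pixels_per_side:.0f} px "
          f"(~{megapixels:.0f} MP), smallest resolvable spacing ~{pixel_um:.0f} um")

macro_resolution(subject_mm=24, pixel_um=4)  # ~6000 x 6000 px, ~36 MP
macro_resolution(subject_mm=24, pixel_um=1)  # ~24000 x 24000 px, ~576 MP
```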
As you can see, the more pixels you add to a given sensor, the smaller and smaller those pixels get, and the better and better your resolution gets. So obviously, more pixels (and more importantly, smaller pixels) give us better resolution! Right?
The Caveat
Well, it’s true – the simple answer is that in a perfect environment, the only limit to your resolution on a fixed-size sensor is the size of the pixels you can cram on board.
But there is one critical element outside the sensor that has an enormous impact on image quality that we’ve completely ignored until now – the lens.
You see, our friendly-neighborhood photons only ever “touch” two things throughout their journey from our subject to their eternal, digital home in our camera – the lens, and the sensor.
Let’s explore the lens with a quick thought experiment.
A lens transmits and focuses light to form an image. So let’s imagine a tiny beam of photons traveling in single file, shooting straight down the center of the lens. What happens?
When we focus a perfect lens, we should get a perfect, little point at our focal plane, exactly the same width as our beam going into the lens.
In the real world, this won’t ever happen though. The point at your focal plane will always be wider than your beam going in.
That’s because of a tiny little thing called the point spread function.
Everything the Light Touches… Has a PSF
Everything our light passes through – air, filters, lens elements, etc. – has what’s called a point-spread function (PSF). That term might sound intimidating, but all it does is tell you what happens when a beam of photons enters the lens.
As we saw before, if you send in a perfectly straight beam of photons, one after the other, they’ll always spread into a larger shape. Hence the point the photons originate from is spread into a larger shape according to this function. See? It’s not as complicated a name as you might think.
Oddly enough, even if PSFs are new to you, you’ve probably heard of the PSF’s frequent companion, the MTF, or modulation transfer function. The MTF is fundamentally linked to the PSF by a mathematical operation called the Fourier transform. But unlike “point spread function,” which is quite descriptive, “modulation transfer function” tells you nothing about what the function does. So we’ll stick with talking about PSFs, because they’re easy to visualize.
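For the curious, that Fourier-transform relationship is easy to poke at numerically. Here’s a toy 1-D sketch (not tied to any particular lens): take a PSF, Fourier transform it, and the normalized magnitude is the MTF.

```python
import numpy as np

# Toy 1-D example: the MTF is the (normalized) magnitude of the
# Fourier transform of the PSF.
n = 256
psf = np.zeros(n)
psf[:9] = 1.0     # a simple 9-sample-wide "box" PSF
psf /= psf.sum()  # normalize so the total energy is 1

mtf = np.abs(np.fft.rfft(psf))
mtf /= mtf[0]     # the MTF is defined to be 1 at zero spatial frequency

print(mtf[:5])    # contrast falls off as spatial frequency rises
```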
A great example of a PSF is the box blur filter in Photoshop. Let’s explore this with an old picture of my puppy here:
Now I’m going to apply a box blur with a radius of 100. But what does choosing that radius do?
It defines a PSF! In this case, our PSF takes each point and spreads it out over a roughly 200-by-200 pixel box, so that every pixel ends up as the average of the values in the box around it. Here’s a visualization of the PSF superimposed on his little paw:
I swear there’s a tiny red dot in the center pixel of that box, which indicates the point that will be spread out when I apply the blur. The white transparent box indicates the area over which the point will be spread. When we apply this blur, Photoshop uses a special operation called a convolution to apply our PSF to every single pixel in the image simultaneously. That results in the following:
See? It’s not terribly complicated. One tiny point gets averaged out across an area with a certain size. As a result, the image is blurred, decreasing in resolution.
Make your PSF too big though, and you can’t even tell what’s in the image anymore:
See? If our PSF is too wide, all of our points get blended together and we can’t tell which pixels came from which real-world points, making image reconstruction impossible.
As a final example, what if our PSF shrinks all the way down to a single pixel?
Well, each pixel gets spread out across…itself. So no blurring occurs, and we don’t lose any resolution because we’re not averaging out any data. If there were such a thing as a perfect lens, it would have a PSF with just one single point, and nothing else. If you want to be super technical and pretentious about it, there’s a word for this – it’s called a Dirac delta distribution. Nerd words are fun.
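If you’d rather see the box blur in code than in Photoshop, here’s a minimal sketch with NumPy and SciPy (the image and radius are made up): a single bright point, convolved with a box-shaped PSF, spreads out into a box – and a radius of zero gives back the single-pixel, “perfect lens” PSF.

```python
import numpy as np
from scipy.signal import convolve2d

def box_psf(radius: int) -> np.ndarray:
    """A box-blur PSF: a square of side (2 * radius + 1) that averages everything under it."""
    size = 2 * radius + 1
    return np.full((size, size), 1.0 / (size * size))

# A toy image: one bright point on a dark background.
image = np.zeros((101, 101))
image[50, 50] = 1.0

# Applying the PSF to every point at once is a convolution.
blurred = convolve2d(image, box_psf(radius=10), mode="same")

print(np.count_nonzero(image), "lit pixel before")    # 1
print(np.count_nonzero(blurred), "lit pixels after")  # 441 = a 21 x 21 box

# box_psf(radius=0) is a single pixel - the Dirac-delta, "perfect lens" case.
```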
But as I mentioned earlier, there’s no such thing as a perfect lens. So what determines the size of the real-world PSF in your camera lens?
The Master Function
Well, every element in your lens, every coating on each surface, and even the air between elements, has its own, specific PSF. And as light passes from your subject to the sensor, each beam is spread ever so slightly according to these PSFs, in sequence.
This creates a cascading effect which can be boiled down into one big PSF that describes the entire lens. This master PSF is most heavily impacted by the quality of the materials, the actual geometry of the lens design, and how precisely the lens is manufactured.
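If it helps to see the cascade in action, here’s a rough 1-D sketch (assuming, as is standard for this kind of model, that each stage acts as a linear, shift-invariant blur): the master PSF is simply the individual PSFs convolved together in sequence, and it always ends up wider than any single stage.

```python
import numpy as np

def gaussian_psf(sigma: float, half_width: int = 50) -> np.ndarray:
    """A 1-D Gaussian blur kernel, normalized to sum to 1."""
    x = np.arange(-half_width, half_width + 1, dtype=float)
    psf = np.exp(-0.5 * (x / sigma) ** 2)
    return psf / psf.sum()

# Pretend each stage of the lens (elements, coatings, air gaps) blurs a little.
stages = [gaussian_psf(1.5), gaussian_psf(2.0), gaussian_psf(1.0)]

master = np.array([1.0])               # start with a perfect point
for psf in stages:
    master = np.convolve(master, psf)  # cascade: convolve the PSFs in sequence

def width(psf: np.ndarray) -> float:
    """Standard-deviation width of a normalized PSF."""
    x = np.arange(len(psf), dtype=float)
    mean = (x * psf).sum()
    return float(np.sqrt(((x - mean) ** 2 * psf).sum()))

# For Gaussian stages the widths combine roughly in quadrature:
# sqrt(1.5^2 + 2.0^2 + 1.0^2) is about 2.69.
print([round(width(p), 2) for p in stages], "->", round(width(master), 2))
```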
It also varies at different points across the lens. That’s why many lenses are nice and sharp in the center, but can get blurry and soft around the edges – the PSFs at the edges of the lens are naturally wider than at the center (for reasons I won’t get into in this article), turning what should be sharp points in the corners of the frame into incoherent, blurry messes.
But now let’s throw our sensor back into the equation. We know that our pixels have a certain size, and if they’re very small and we have a lot of them, we get really high resolution.
But with the introduction of the PSF, this is no longer the end of the story. Instead, it leaves us with two cases to consider, based on the size of our PSF relative to our pixel size.
Two Cases
In the first case, our lens has a relatively small PSF, and while the point of light entering my lens is still spread out by some amount, it’s still contained in an area smaller than (or equal to) my sensor’s pixel size.
So I’ve lost a small amount of resolution – the existence of PSFs means that I will always lose some resolution when I transmit light outside of a vacuum, no matter how good my system is – but my spread-out point still only ever falls (mostly) in one corresponding pixel. So our resolution is still only limited by the size of our pixel. We’re good to go!
In the second case though, our lens has a PSF that is larger than our pixel size.
Now we have a problem.
A point in our image gets spread out into an area so large that it can never fit into a single corresponding pixel, and will always bleed into the surrounding pixels. Remember that the PSF is determined by the lens, so whether you have one pixel or one trillion pixels on your sensor, you can never resolve any points finer than the diameter of that spread-out point. When a lens has a PSF larger than (or even very close to) the sensor’s pixel size, we refer to the sensor as “stressing” the lens. Other terms you might hear include “straining” and “taxing” – these all mean the same thing.
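To make the two cases concrete, here’s a toy comparison in code (the PSF diameters below are invented purely for illustration):

```python
def describe_system(psf_diameter_um: float, pixel_pitch_um: float) -> str:
    """Classify a lens/sensor pairing using the two cases described above."""
    if psf_diameter_um <= pixel_pitch_um:
        return (f"Case 1: PSF ({psf_diameter_um} um) fits inside a pixel "
                f"({pixel_pitch_um} um) - resolution is pixel-limited.")
    return (f"Case 2: PSF ({psf_diameter_um} um) is wider than a pixel "
            f"({pixel_pitch_um} um) - the sensor is stressing the lens.")

# Invented example numbers, just to show the comparison:
print(describe_system(psf_diameter_um=3.5, pixel_pitch_um=5.3))  # Case 1
print(describe_system(psf_diameter_um=6.0, pixel_pitch_um=4.1))  # Case 2
```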
So what does this mean for you? You’ve probably heard it before, and if no one has told you yet, I’m honored to be the first:
Do. Not. Put. A. Shitty. Lens. On. A. Good. Camera.
Do not do it. You’re wasting your time, you’re wasting your money, and you’re going to end up very frustrated with soft images from your very expensive system.
This phenomenon of lens stress is why you may have heard people express concern about the sensors in modern, high-resolution cameras out-resolving the lenses designed for those systems. And it’s a real problem.
Old Lenses + New Sensors = Not-So-Great Systems
Take, for example, some of the world’s finest lensmakers – Leica, Zeiss, Schneider, and Rodenstock to name a few. In the film era, these companies produced some of the sharpest optics ever seen. Whether they were wicked-fast 35mm lenses, or large-format lenses covering massive image circles, they were all well engineered enough to resolve down to the size of a few film grains.
To put it in terms of what we just discussed, as long as their PSFs could be engineered to be smaller than those clusters of grains, the lenses would generate razor-sharp images. And they certainly delivered, for a long time.
But modern pixels can be much smaller than film grain clusters, and the PSFs of those old lenses – good enough back then – may now be too wide for the increasingly tiny pixels on new sensors. That’s why sharper, “digital” versions of these lenses have been released.
Let’s look at a much more recent, widespread example though. On the 35mm Full Frame DSLR (I’ll just refer to this as 35mm FF going forward) side of things, consider that Canon’s excellent 5D line more than doubled in megapixels between 2012 and 2015, reaching 50 MP by the end of that short three-year period. But even the newest entries in their flagship line of L lenses were designed with the 22 MP 5D Mk III and its predecessors in mind. This led to widespread complaints that the 50MP 5DS R produced soft images. And it did – but not because the sensor was lacking in resolution. The lenses were not of high enough quality to accommodate the level of stress the sensor subjected them to; their corresponding PSFs were too wide to take advantage of the jump in megapixels.
So more pixels might mean better images, but only if you have the glass to support them.
The Medium Format Advantage
So why would one use a camera like the $15,000 Phase One IQ3 50MP when one could buy a Canon 5DS R, Nikon D850, or Sony A7R III, and an excellent lens, for a third of the price? Each of their 35mm FF sensors has nearly as many megapixels as the Phase One. So they should provide the same resolution, right?
Not quite. Because with larger sensors, and thereby larger pixels, medium format systems apply less lens stress, and can make better use of good glass.
But wait – didn’t we just establish that larger pixels mean lower resolution?
Absolutely, but until now we’ve been examining sensors of the same size. One of the big medium format advantages lies in the ability of these larger sensors to host a far greater number of pixels onboard, without making those pixels too small. Remember that if our pixels get too small, our lenses can become over-stressed, causing any additional sensor resolution to be wasted.
So the IQ3 50MP, with its crop-frame 44 x 33 mm sensor, covers 1.68 times the area of a 35mm FF sensor, which translates into a correspondingly wider field of view when using a lens of the same focal length. (Similarly, a full frame 645 sensor, such as the ones on all other current Phase One digital backs, covers roughly 2.5 times the area of a 35mm FF sensor under the same conditions.) This means that we can fit more of our peripheral field into the image.
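Here’s a quick numeric sanity check on those area ratios, using commonly published (and slightly rounded) sensor dimensions:

```python
# Approximate sensor dimensions in millimeters (published values, rounded).
sensors_mm = {
    "35mm full frame":      (36.0, 24.0),
    "Phase One 44x33 crop": (44.0, 33.0),
    "Full frame 645":       (53.7, 40.4),
}

ff_area = 36.0 * 24.0
for name, (w, h) in sensors_mm.items():
    area = w * h
    print(f"{name:22s} {area:7.0f} mm^2  ({area / ff_area:.2f}x a 35mm FF sensor)")

# Prints roughly: 1.00x for 35mm FF, 1.68x for the 44 x 33 crop,
# and about 2.5x for a full frame 645 sensor.
```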
But what if we move closer, to match our fields of view as closely as possible?
Doing this effectively magnifies the projection of the subject across the sensor, so far more pixels are devoted to the subject (because now our subject is taking up the whole frame, without extraneous borders). But our pixel size hasn’t changed, so our lenses aren’t under any more stress, as they would be with smaller pixels – we’ve literally just gained resolution on our subject without any trade-offs.
So when using the Phase One 50MP sensor, with its 5.3 micron pixels, we’re well within the tolerance of the PSFs of the Schneider-Kreuznach Blue Ring line of lenses.
The larger medium format sensor allows the pixels to remain at a comfortable size.
But when we examine the Canon 5DS R, with the same resolution on a small 35mm FF sensor, each pixel is squeezed down to 4.1 microns. That might not sound like much of a difference, but it means that the Canon 5DS R is more demanding of lenses than even the Phase One 100MP, let alone the 50MP.
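You can back these pixel sizes out yourself from the sensor width and the horizontal pixel count. A quick sketch (the specs below are published values, rounded, so the real pitches may differ slightly):

```python
# Rough pixel-pitch estimates: sensor width divided by horizontal pixel count.
cameras = {
    "Canon 5DS R (50 MP, 36.0 mm wide)":  (36.0, 8688),
    "Phase One IQ3 50MP (44.0 mm wide)":  (44.0, 8280),
    "Phase One IQ3 100MP (53.7 mm wide)": (53.7, 11608),
}

for name, (width_mm, pixels_across) in cameras.items():
    pitch_um = width_mm * 1000.0 / pixels_across
    print(f"{name:36s} ~{pitch_um:.1f} um pixels")

# Roughly 4.1 um, 5.3 um, and 4.6 um respectively - which is why the 5DS R
# stresses its lenses harder than either Phase One back.
```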
This is part of the miracle of medium format. Allowing for larger pixels while maintaining extremely high resolution relaxes the increasingly high stress-level being forced on lenses by sensors with tiny pixels.
Medium Format Lenses
So as you can see, medium format sensors play by different rules than 35mm FF, and for a given level of lens performance they will always be able to deliver superior resolution.
But that doesn’t mean medium format camera makers like Phase One are lazy with the extra latitude they’re granted in their lens design. In particular, Phase One’s lenses are all designed to be future-proof, as they must be, since megapixel counts in medium format systems have always grown in leaps and bounds. The recently announced IQ4 150 MP is a prime example, jumping an enormous 50MP from the previous generation IQ3 100MP. If Phase One only ever did the minimum in terms of lens design, they would be forced to go back to the drawing board every three or four years, making entirely new lens lineups.
Instead, Phase One invested in building a lineup of some of the best lenses available today, most of which easily manage the stress of a 100MP sensor. We’ll be publishing another article on this shortly, stress-testing different medium format lenses to see how they perform on the even higher resolution 151 megapixel sensor. Based on some of our preliminary tests, we have high hopes that the Blue Ring line will continue to provide the superior performance demanded by Phase One photographers.
Conclusion
If you’ve read through this article all the way, I greatly appreciate it. I hope that you’ve come to understand resolution in a more nuanced way, beyond sheer megapixel counts, and that the dizzying array of variables – sensor size, pixel size, pixel count, lens choice, and the impact of point-spread functions – is just a little bit clearer.
As you can see, the high-resolution camera world is a technical and complicated place, but our job here at DT CommercialPhoto is to demystify the technology, teach it to you in a way you can understand, and help you keep the gear out of the way, so you can focus on your creative vision. Feel free to reach out to me directly at abc@digitaltransitions.com if you have any questions, concerns, or corrections – I’ve done my best to make sure everything is correct and airtight, but our clients are a sharp bunch, so if you see something, say something.
Until next time!
Arnab