Review Redo: Sony A7S II vs Panasonic GH5S High ISO Comparison

posted Wednesday, January 17, 2018 at 5:10 PM EST


Last week we published our field test of the Panasonic GH5S and lauded its high ISO performance, praise we still stand by considering how relatively small the camera's sensor is. However, in that review we compared video from the GH5S to still images from the A7S II pulled from our previous review archive. In the past we have focused less on video and more on stills, hence the lack of A7S II video samples in that review. After publication, comments from our ever-faithful readers pointed out the error, and we immediately set about rectifying it. We had based our initial conclusions on incomplete evidence, and after retesting both cameras, we came to a different conclusion.

To that end, we've published a new video review, focused specifically on ISO comparisons of the GH5S and the A7S II:

Additionally, the original Field Test has been updated with a new section on the ISO comparisons of the GH5S versus the A7S II. We urge you to take a look at that section as well as the video above.

At Imaging Resource we pride ourselves on conducting accurate tests in order to provide the best possible information to you, our readers. In our original posting of this review, we failed to do that. I failed to do that. I apologize for the error, and hopefully this helps set the record straight.

Background: Why is video noise reduction so different from that for still images?

By Dave Etchells:

Jaron claims full responsibility for missing this, but I need to note that it wasn't all on him; I myself didn't realize just how much difference there was between still and video noise reduction. So I definitely carry some of the responsibility, as the ultimate technical authority at IR, and I thought this might be a good opportunity to explain why video noise reduction can work so much better than that for still images.

The big difference between stills and video imagery is that the camera has a lot more information available to it with video, to help separate the subject from the noise. This is because images are flying by at 24, 30, 60 or more frames per second, while the subject content isn't changing nearly that fast. What's more, parts of the subject that are moving quickly are going to be pretty blurred, so the camera can really crank up the noise reduction in those areas of each frame, because it doesn't need to worry about preserving subject detail. Still cameras try to do the same by guessing, but there's always a question of whether what the camera is seeing is noise or just subtle subject detail. (And yes, you certainly could use very high shutter speeds and freeze the action in each video frame, but that makes for extremely choppy-looking video, hence the rule of thumb not to let your shutter speed get faster than half the frame time. By the nature of things, you're going to get a lot more motion blur in video images, because you actually want it.)
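To put that rule of thumb in concrete terms, here's a tiny sketch of the shutter-duration floor implied by the half-the-frame-time guideline. The helper function is purely illustrative, not anything from either camera's menus:

```python
def rule_of_thumb_shutter(fps):
    """Fastest recommended shutter time for natural-looking motion blur:
    keep the shutter open for roughly half of each frame's duration."""
    frame_time = 1.0 / fps      # seconds per frame
    return frame_time / 2.0     # don't use a shutter shorter than this

for fps in (24, 30, 60):
    t = rule_of_thumb_shutter(fps)
    print(f"{fps} fps -> shutter no faster than ~1/{round(1 / t)} s")
```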

There are several different types of noise in camera sensors, but for the purposes of our discussion here, they boil down to just two: random/uncorrelated noise and pattern noise. Pattern noise varies across the surface of the sensor but stays fixed from frame to frame; manufacturing variations mean that some pixels may generate more noise than others, be stuck, or be leaky (that is, tend to accumulate charge not caused by incoming light). This is pretty easy to deal with, though: once it's identified, it will always appear in the same places, so it can be compensated for regardless of what's going on with the subject.
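For a rough sense of why pattern noise is the easy case, here's a minimal NumPy sketch. The dark-frame-averaging calibration and all of the numbers here are assumptions for illustration only, not a description of what either camera actually does internally:

```python
import numpy as np

rng = np.random.default_rng(0)
h, w = 480, 640

# Hypothetical fixed-pattern noise: a per-pixel offset that never changes
# (hot/leaky pixels, manufacturing variation).
pattern = rng.normal(loc=0.0, scale=3.0, size=(h, w))

# Calibration: average many dark frames (shutter closed).  The random noise
# in each frame averages out, leaving an estimate of the fixed pattern.
dark_frames = pattern + rng.normal(scale=2.0, size=(64, h, w))
pattern_map = dark_frames.mean(axis=0)

# A simulated exposure = scene + the same fixed pattern + fresh random noise.
scene = np.full((h, w), 100.0)
raw = scene + pattern + rng.normal(scale=2.0, size=(h, w))

# Because the pattern never moves, subtracting the stored map removes it
# regardless of what the subject is doing.
corrected = raw - pattern_map
print("RMS error before:", np.sqrt(((raw - scene) ** 2).mean()))
print("RMS error after: ", np.sqrt(((corrected - scene) ** 2).mean()))
```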

Random noise is more difficult, since we don't know where it's going to show up, but that also means it varies randomly from one frame to the next. In the extreme case of a camera on a tripod shooting a still life, it gets easier and easier to figure out what's subject and what's noise, just by watching for long enough. Over long periods of time, all the random noise will average out to zero, and we'll be left with a very clean view of the subject.
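Here's a toy NumPy sketch of that tripod-and-still-life case; the scene and noise levels are made up purely to show the effect of averaging more frames:

```python
import numpy as np

rng = np.random.default_rng(1)
scene = np.full((480, 640), 50.0)        # a perfectly static "still life"

def noisy_frame():
    # Each frame is the same scene plus fresh, uncorrelated random noise.
    return scene + rng.normal(scale=10.0, size=scene.shape)

for n in (1, 4, 16, 64):
    stack = np.mean([noisy_frame() for _ in range(n)], axis=0)
    rms = np.sqrt(((stack - scene) ** 2).mean())
    print(f"{n:3d} frames averaged -> residual noise ~{rms:.2f}")

# The residual noise drops roughly as 1/sqrt(n): the longer you "watch"
# a static scene, the cleaner the averaged view becomes.
```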

But stationary subjects aren't very interesting, and with video, the scene in front of the camera is always going to be changing. Not all of it changes at once, though, and even when things are moving, it's often the same content simply shifting around on the sensor surface. If we're clever enough, we can figure out what's moving where and still look across multiple frames to remove the noise.

The catch, of course, is that "if we're clever enough" part. Motion estimation does in fact require a lot of cleverness and a boatload of processing, but a lot of smart engineers have been working on the problem for decades, and camera image processors are getting pretty crazy-powerful these days. So cameras like the A7S II and GH5S can do a lot to reduce image noise by looking at the multiple frames streaming through them.
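In very rough outline, motion-compensated temporal noise reduction works something like the sketch below. This is a deliberately crude block-matching toy, not what the A7S II or GH5S firmware actually does; the block size, search range, and match threshold are all arbitrary values chosen just for illustration:

```python
import numpy as np

def motion_compensated_denoise(prev, cur, block=16, search=4, blend=0.5):
    """Toy temporal noise reduction: for each block of the current frame,
    find the best-matching block in the previous frame (crude motion
    estimation), then average the two.  Where no good match exists
    (fast or complex motion), keep the current frame alone."""
    out = cur.copy()
    h, w = cur.shape
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            target = cur[y:y + block, x:x + block]
            best, best_err = None, np.inf
            # Search a small window around the block's position in prev.
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy <= h - block and 0 <= xx <= w - block:
                        cand = prev[yy:yy + block, xx:xx + block]
                        err = np.mean((cand - target) ** 2)
                        if err < best_err:
                            best, best_err = cand, err
            # Only blend when the match is plausible (the same content has
            # simply moved); otherwise keep the noisy current block rather
            # than smear unrelated detail together.
            if best is not None and best_err < 25.0:
                out[y:y + block, x:x + block] = (
                    blend * target + (1 - blend) * best)
    return out
```

Real implementations use far more frames, sub-pixel motion vectors, and adaptive blending, but the basic idea is the same: align what moved, then average away the noise that didn't repeat.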

The bottom line is that still images from modern cameras are pretty much always going to show more visible noise than video frame-grabs at the same ISO, and both the A7S II and GH5S do a great job of taking advantage of frame-to-frame processing to ferret out noise.

So that's why our original comparisons showing resized stills from the A7S II next to frame grabs from the GH5S showed the GH5S doing so much better. The GH5S really does produce remarkably clean video for a Micro Four Thirds sensor, and holds its own surprisingly well against the A7S II up to about ISO 6,400. (And there's a lot to like about the GH5S besides just low noise; be sure to check its color rendering in the video samples we show on the GH5S Field Test page!) The balance starts to shift at ISO 12,800, though, and at ISO 25,600 and above the A7S II remains surprisingly usable, while the GH5S does not.

Panasonic GH5S Field Test Part I