The Case for and against 4K

I first wrote this article in 2014 in response to what seemed the unnecessary challenge of producing programmes in 4K, and I updated it in 2017, as the argument remains a very interesting one.

I am not against 4K.  I agree that 4K images ARE stunning; we just need to be clear about the reasons, the practicalities and the cost, and whether all of the hassle is actually worth the trouble right now in order to produce 4K images.  That is the crux of my argument.

4K is not new – RED have been trailblazing 4K acquisition ever since the introduction of the RED One some years ago, and this has been bolstered recently by a range of new cameras from all manufacturers – even down to the lowly consumer cameras and GoPros – all of which claim to offer unrivalled image clarity and detail.

Firstly, do we actually need 4K – or UltraHD (UHD), as the consumer electronics industry is calling it?

Retiring HBO pioneer Bob Zitter reckons that consumers "barely notice the difference between 4K and existing HD" on sets around 60in–70in in size, which he expected would be the maximum size widely deployed. On this basis, the industry certainly need look no further than 1080p HD, which is double the resolution of 1080i or 720p, and which is itself not widely deployed for linear TV delivery. Indeed, the conventional wisdom within the industry is that 1080p HD does not deliver a substantial enough increase in quality over 1080i or 720p HD to be worth the effort of deploying across the content ecosystem, and that the logical next step would be straight to 4K.

John Galt, SVP at Panavision, claimed at a 2012 Creative Cow seminar that viewing 4K at normal theatre distances means you cannot perceive sharpness at the pixel level, as the pixels are too small.  He concluded that, "…in order to see the detail provided by the pixels in a 4K image, you would need to sit in the first six rows of a theatre, otherwise you won't see the extra image detail."  And this is before acknowledging that many digital cinemas are presently capable of displaying only 2K images.

His argument continued: "We perceive edge sharpness as contrast between the edge and what's behind it."  So do we, the audience, really want a 4K image when HD does the job perfectly well?  In many types of material – fast-moving drama, for instance – you can't see the detail that 4K enables because the pictures are moving too fast.  John Galt recommends, "If you want greater resolution, shoot at higher frame rates, not more pixels."  We can achieve this now with existing HD technology and shoot at 48P instead of 24P; otherwise you simply create very high-resolution images with a lot of motion blur.

Whilst I am on the subject of high-resolution blurry images, it is virtually unheard of for top DoPs to shoot with very expensive lenses at extremely high resolution without inserting a softening piece of glass to reduce the sharpness of the images – how can 4K benefit the viewer under these conditions?

This is part of the '4K is not always 4K' argument, where the devil of the detail is in the pattern… Debayering is the process of interpolating full colour information from a single sensor, and it reduces the effective colour detail, so that the actual resolution is less than the sensor's pixel count promises.  This means that 4K images from a RED EPIC, whilst better than 4:4:4 HD, are not as much better as the numbers suggest.  This is a very important point, as whilst the ARRI Alexa is not a 4K camera, it does use a 3.5K sensor. As well as recording on-board to SxS cards in 1080, its de-bayered RAW output can be recorded on external devices at 2880 x 2160 – almost 3K. These RAW files offer improved dynamic latitude for excellent highlight control, as well as greatly increased resolution. Skyfall was shot by Roger Deakins using the Arri RAW 2K+ workflow, with the output recorded by a Codex device. Versions prepared for 4K cinemas were simply up-converted from these files, and they looked fantastic!  In contrast, the Blackmagic camera shoots 2.4K before debayering and the GoPro Hero 3 only shoots 4K images at 16 frames per second – so neither of these is in contention for a 4K level of acceptability.
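The debayering point above can be sketched numerically. A commonly quoted rule of thumb is that a Bayer sensor resolves somewhere around 70–80% of its stated linear resolution in real detail; the 0.75 factor below is an illustrative assumption, not a measured figure for any particular camera.

```python
# Illustrative sketch of the '4K is not always 4K' point: a Bayer
# sensor's effective resolution is lower than its photosite count.
# The 0.75 debayer factor is an assumed rule-of-thumb value.

def effective_resolution(width, height, debayer_factor=0.75):
    """Rough effective resolution of a Bayer sensor after debayering."""
    return round(width * debayer_factor), round(height * debayer_factor)

print(effective_resolution(4096, 2160))  # a "4K" Bayer sensor resolves nearer 3K
print(effective_resolution(2880, 2160))  # the Alexa's 2.8K-wide RAW output
```

On this rough basis, a nominally 4K Bayer sensor and the Alexa's near-3K RAW output are closer in real resolving power than the headline numbers suggest.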

There are serious technical challenges for shooting and delivering in 4K.

Firstly, the file sizes are enormous – 8.6 million pixels in a frame at 16-bit colour depth means around 1.7TB per hour!  This is HUGE!  An example of this was given recently by Nicholas Recagno, who provided technical support for the final online of The Hobbit.  The final online master – 48 fps, stereoscopic 3D, uncompressed 16-bit 4K files – required 2.2GB/second of data, twice as fast as a fully loaded RAID equipped with Thunderbolt can deliver…  It really is at the limit of what current computing technology can provide.
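A back-of-envelope check on those storage numbers. The sketch below models raw (single-channel Bayer) sensor data; debayered RGB would be roughly three times larger, and real figures vary with exact frame size, frame rate, packing and format overheads, so treat these as order-of-magnitude estimates rather than the precise values quoted above.

```python
# Order-of-magnitude data rates for uncompressed 4K capture.
# channels=1 models raw Bayer sensor data; use channels=3 for RGB.

def data_rate(width, height, bit_depth, fps, channels=1):
    """Uncompressed data rate in bytes per second."""
    return width * height * bit_depth * channels * fps / 8

rate = data_rate(4096, 2160, 16, 24)            # 4K DCI, 16-bit raw, 24 fps
print(f"{rate / 1e9:.2f} GB/s")                 # ~0.42 GB/s
print(f"{rate * 3600 / 1e12:.1f} TB per hour")  # ~1.5 TB/hour
```

Doubling the frame rate to 48 fps and doubling again for two stereoscopic eyes quickly pushes the sustained rate into the gigabytes-per-second territory described for The Hobbit.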

Plus you have to find a way to archive this data.  The current LTO-6 backup tape format has a lifetime of only 25 years and therefore needs to be copied to new media every third hardware generation – a very important consideration, as there is no helpful HDCAM SR-style system at present which can capture and archive these files in true 4K.

So if we acknowledge the need to compress images – which will by definition be a 'lossy' form of compression – then isn't there an argument that it might be better to record at a lower rate of compression, but in 2K/HD?  Especially if you had at your disposal a way of capturing images at 48P instead of just 24P?  HDCAM SR and existing technology can cope with this now in fully uncompressed form without spending anything extra on expensive infrastructure…

And then how do you view 4K?

Firstly, you need a lot of money for a UHD TV set.  Until Sony released sets in April 2013 for $5,000, you needed $40,000 for an 86" UHD TV, and the larger sets are still the price of an executive car!  OK, the prices will come down in time, but having spent such a vast sum on a screen, how can you ensure that you will see the images properly?

There was a study some years ago in the US which found that most people sit too far from their TVs – the average distance was 9ft.  Under optimal viewing conditions for 20/20 vision, the optimal viewing distance has been calculated to be 1.8x the width of the screen, so at a 9ft distance you need a screen of around 70" diagonal to distinguish the extra detail of UHD.  Displays are becoming bigger – this is the basic argument for 4K – but houses are not, especially in the UK!
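The viewing-distance rule of thumb above is easy to check with a little arithmetic, deriving the screen width from the diagonal (assuming a 16:9 aspect ratio) and applying the 1.8x factor the study quotes.

```python
import math

# Optimal UHD viewing distance per the article's rule of thumb:
# 1.8x the screen width, with width derived from the diagonal
# assuming a 16:9 aspect ratio.

def uhd_viewing_distance_ft(diagonal_in, ratio_w=16, ratio_h=9, factor=1.8):
    """Optimal viewing distance in feet for a given diagonal in inches."""
    width_in = diagonal_in * ratio_w / math.hypot(ratio_w, ratio_h)
    return width_in * factor / 12  # inches -> feet

print(f'70" set: {uhd_viewing_distance_ft(70):.1f} ft')  # ~9 ft
```

A 70" set works out at roughly 9ft – exactly the average sitting distance the US study found, which is why 70" is about the minimum size at which UHD detail becomes visible in a typical living room.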

But where does the 4K content come from?  The gigantic files are way beyond the capability of the internet at present, and we will require more bandwidth for terrestrial, cable or satellite, whichever is the designated vehicle for 4K transmission.  First, a new compression standard will need to be universally adopted: the present H.264 will be superseded by H.265, though this will take years and will still involve a large degree of compression.

Blu-ray discs are possible, but a 2K film currently uses all 50GB, so a 4K film would need a 3rd and a 4th layer to shoehorn it onto a Blu-ray disc – and the technology isn't here to achieve this… yet.
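The disc-capacity arithmetic behind that point is straightforward: 4K has four times the pixels of 2K, so at the same compression ratio a film that fills a 50GB dual-layer disc would need roughly four times the space.

```python
# Simple capacity arithmetic for the Blu-ray point: 4K has four
# times the pixels of 2K, so a film filling a 50GB dual-layer disc
# would need roughly 200GB at the same compression ratio.
pixels_2k = 2048 * 1080
pixels_4k = 4096 * 2160
scale = pixels_4k / pixels_2k
print(scale, "x ->", 50 * scale, "GB")  # 4.0 x -> 200.0 GB
```

Hence the need for extra layers (or far more aggressive compression) before 4K on optical disc becomes practical.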

Finally, all cabling for the "last mile" to every house would need to be replaced in order to carry 4K images, not to mention a complete replacement of all equipment in the production and post-production process to be able to edit, route, post-produce, copy and archive 4K images…

"It's about orchestrating a seismic shift in the broadcast and entertainment infrastructure, not to mention rewriting the consumer electronics handbook," according to TechRadar.  Additionally, who is going to foot the bill for this infrastructure replacement in these cash-strapped times?

4K will come, though – so if we acknowledge this, just how long will it take to reach the mass consumer?

The best guide is the penetration of HDTV.  The first American nationwide broadcast of an HD programme was John Glenn's Space Shuttle lift-off in 1998.  It took another 12 years for HDTV to go mainstream; by that reckoning, it should be 2025 before UHD 4K is in half of all American homes.

In 2014, the advantages of 4K looked subtle... and then came HDR – the most important reason for 4K

With the display technology of 2014, the conclusion was that the benefits of 4K were, at best, subtle on the display equipment of the time.

However, that technology was based on the previous Rec. 709 standard, displaying around 5 stops of latitude and a limited colour gamut – for all of us, this was NORMAL, so the 4K of 2014 simply offered higher-resolution images of the same.

However, now we have the next generation: HDR. This stands for High Dynamic Range, a technical term describing a new type of television screen and the technical standard associated with it. It is not to be confused with the HDR commonly used in stills photography, where several exposures are captured at once and compressed into a single image that then displays full detail in both highlight and shadow tones.

What HDR in TV does is allow us to display images of higher latitude, a larger colour gamut and higher contrast, creating more lifelike images – or, put simply, giving them the 'WOW' factor.

At IBC in 2015, I was told by an industry insider that HDR was the most important development in television since the transition from black and white to colour.  Once I saw it for myself, I wholeheartedly agreed with him.

                                                   Barry Bassett, Managing Director, VMI

HDR turns the advantages of 4K into a MUST HAVE, and consumers and producers are now switching to 4K/UHD at an unstoppable rate.  Read the VMI article HDR Survival Guide for more details.

Barry Bassett, Managing Director VMI.TV Ltd.

First written in 2014 and updated in 2017

