One of the most common refrains in editing forums and Facebook groups is "Don't use H.264 for editing!" There are good reasons for this, and in many ways this simple rule is one to live by. But if you have time to delve a little deeper, you will see that things aren't always as simple as they seem.

But let's back up a little and go over some ground that will be familiar to many.

Video files use a codec to keep file sizes and bitrates manageable – whether that's to help the camera give you more than ten minutes on your SD card or to help you upload your final video to social media in a reasonable time. A camera, for example, encodes the video data to compress it into a smaller file, leaving the job of decoding, or decompressing, it until later. A video file will also use a wrapper, or container format.

In any given week, an editor might be dealing with a number of different codecs from different cameras. Many consumer and prosumer cameras record in either H.264 (aka AVC) or its successor H.265 (aka HEVC). Within those options, the better cameras will record 10-bit 4:2:2, keeping more colour information and avoiding banding (especially if shooting LOG – LOG and 8-bit are not friends), whereas older cameras might shoot in 8-bit 4:2:0. Often there will also be the option to record intra-frame or inter-frame – more on this later. If you're not sure what you have, drag a file into MediaInfo and change the view to Tree to check it out.

This Sony FS5 file for example is H.264, intra-frame, 10-bit 4:2:2

If cameras aren't recording in these codecs, then generally it is either ProRes or one of the proprietary RAW formats. Arri cameras, for example, can record in ARRIRAW or one of the ProRes formats, Blackmagic in BRAW or various ProRes flavours, and so on. A camera that doesn't have these options can also gain them from an external recorder from manufacturers like Atomos. There is also an increasing need to shoot in HDR, and that is coming down to cheaper cameras (as well as the iPhone!). One of the main advantages of a RAW codec is that it's easier to fix mistakes made during the shoot – ISO or colour balance, for example, isn't yet burnt into the file – but it has the downside of much larger file sizes.

The advantage of shooting ProRes is that your editor will buy you a drink.

Codecs in post

As an editor, my interest in all of the above is what it means to me in the edit suite. Thankfully, Premiere Pro is remarkably forgiving when it comes to the codecs, formats and wrappers it can accept. That said, there is a great deal of difference in how taxing things will be for the computer, and therefore how responsive the edit will be. The basic rule is that the more compressed the codec, the more work your computer needs to do in order to play back or export the video.

The main DP I work with shoots ProRes HQ on his Arri Alexa and it edits like a charm. Other clients, however, tend to send me files from Sony, Canon or Panasonic cameras that tend to be H.264 10-bit 4:2:2, albeit often intra-frame, meaning each frame stands alone, which makes it a little easier to decode.

The worst case is an inter-frame codec, as this means that not all the frames are actually there in the data. Instead the file is made up of I, P and B frames – the I frames are whole frames, while the P frames are predicted, or put back together, from the frames that came before them. The B frames are bi-directionally predicted from the I and P frames around them. If you think about a video at 24 frames per second, a lot of any given frame will be the same as the frame before it, so it actually makes a lot of sense to do it this way. Inter-frame may be the worst case for editing, but it is the best case for keeping quality high and file size low, which is why it is standard as an export codec as well as being used in shooting.

I, P and B frames that make up an inter-frame codec. Source: Wikipedia

Transcode vs Proxies

But even an intra-frame H.264 codec (where every frame is an I frame) will still be mighty taxing for your computer. This is why the general advice is to either transcode or make proxies, and this is good advice almost all of the time.

Re-encode the H.264 file into a high-quality, edit-friendly codec like one of the flavours of ProRes, DNxHR or Cineform, with ProRes 422 HQ being perhaps the most common. These become the new master files from this moment forward. It might be a time commitment up front, but for all the time you are editing you will be thankful for your responsive system (here I'm talking about the lag between hitting play and it actually playing, or the time it takes to show a frame when parked on it, or whether trim mode is even usable at all). The only real downside to this method is the high storage requirement and the need for good drive speed, as the bitrates are also high – but this is not a massive issue any more given that most of us are no longer using HDDs.
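To put a rough number on what the 4:2:2 versus 4:2:0 subsampling mentioned above actually saves, here is a back-of-the-envelope sketch. It counts raw samples per 2x2 block of pixels, which is the standard way the schemes are defined; real codec bitrates depend on much more than this, so treat it only as an illustration of the ratios.

```python
# Chroma subsampling trades colour resolution for size. In a 2x2 block of
# pixels, 4:4:4 stores 4 luma + 8 chroma samples, 4:2:2 stores 4 + 4,
# and 4:2:0 stores 4 + 2 – the luma (detail) channel is never touched.

SAMPLES_PER_2X2 = {"4:4:4": 12, "4:2:2": 8, "4:2:0": 6}

def relative_size(scheme):
    """Raw sample count relative to full 4:4:4 colour."""
    return SAMPLES_PER_2X2[scheme] / SAMPLES_PER_2X2["4:4:4"]

print(relative_size("4:2:2"))  # ~0.67: a third of the raw samples saved
print(relative_size("4:2:0"))  # 0.5: half the raw samples of 4:4:4
```

This is why 4:2:0 is so common in delivery codecs: half the colour data for a loss most viewers never notice, though it gives a grade or a keyer much less to work with.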
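The I and P frame idea described above can be sketched in a few lines. This is purely a toy illustration of "store a whole frame, then store only the changes" – real inter-frame codecs like H.264 work on motion-compensated macroblocks, not per-pixel diffs, and the frame rhythm here (an I frame every 4 frames) is an arbitrary choice for the example.

```python
# Toy sketch of inter-frame compression: I frames are stored whole,
# P frames store only the pixels that changed since the previous frame.

def encode(frames, gop=4):
    """Store a full I frame every `gop` frames, crude P frames otherwise."""
    stream, prev = [], None
    for n, frame in enumerate(frames):
        if n % gop == 0 or prev is None:
            stream.append(("I", list(frame)))  # whole frame
        else:
            diffs = [(i, v) for i, (p, v) in enumerate(zip(prev, frame)) if p != v]
            stream.append(("P", diffs))        # only the changes
        prev = frame
    return stream

def decode(stream):
    frames, prev = [], None
    for kind, data in stream:
        if kind == "I":
            prev = list(data)
        else:
            prev = list(prev)
            for i, v in data:                  # apply the deltas
                prev[i] = v
        frames.append(list(prev))
    return frames

# A mostly static "video": 8 frames of 16 pixels, one pixel moving per frame.
video = [[0] * 16 for _ in range(8)]
for n in range(8):
    video[n][n % 16] = 255

encoded = encode(video)
assert decode(encoded) == video  # decodes losslessly, yet P frames are tiny
```

Notice that decoding a P frame requires first decoding everything back to the last I frame – which is exactly why scrubbing and trimming inter-frame footage makes an NLE work so hard.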
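One common way to do the transcode step is with ffmpeg's `prores_ks` encoder, where `-profile:v 3` selects ProRes 422 HQ. The snippet below only builds the command line (the filenames are placeholders); actually running it of course requires ffmpeg to be installed, and this is just one tool among several that can do the job.

```python
# Build an ffmpeg command to transcode an H.264 file to ProRes 422 HQ.
import subprocess

def prores_hq_cmd(src, dst):
    return [
        "ffmpeg", "-i", src,
        "-c:v", "prores_ks", "-profile:v", "3",  # 3 = ProRes 422 HQ
        "-c:a", "pcm_s16le",                     # uncompressed PCM audio
        dst,
    ]

cmd = prores_hq_cmd("clip.mp4", "clip.mov")
# subprocess.run(cmd, check=True)  # uncomment to actually transcode
```

Wrapping this in a small script and pointing it at a card's worth of footage is an easy way to batch the up-front time commitment overnight.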
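To make the storage downside concrete, here is a quick estimate of what ProRes 422 HQ masters cost in disk space. The ~220 Mbit/s figure is Apple's published target for 1920x1080 at 29.97 fps; actual ProRes bitrates vary with content, resolution and frame rate, so treat this as a ballpark only.

```python
# Rough storage cost of a constant-ish bitrate codec, in GB per hour.

def gb_per_hour(mbit_per_s):
    bytes_per_s = mbit_per_s * 1_000_000 / 8
    return bytes_per_s * 3600 / 1_000_000_000

print(round(gb_per_hour(220)))  # ProRes 422 HQ 1080p: ~99 GB per hour
```

Roughly 100 GB per shooting hour is why fast, capacious SSD storage goes hand in hand with this workflow.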