I spent the last six months accumulating video gear of all sorts and learning about pro-level cinematography, and filmmaking in general, in all their aspects. Not much changed in my practical ability to do better video work, however... a filmmaker is not something you become overnight. It takes practice, persistence, plenty of creative thinking, and of course lots of skill and experience. Amateur videographers like myself need to do everything on their own: screenwriting, directing, cinematography (staging, lighting, shooting), colour correction and style grading, non-linear editing, sound and music. Where professional productions employ anywhere from a few dozen to thousands of specialised workers for many months or even years, a simple amateur videographer has to wear all sorts of hats and do the same tasks (far smaller in scope, though) alone. One learns a lot by reading specialised blogs and watching instructional and other films by pros and gifted amateurs on Youtube and Vimeo. Knowing what it takes in terms of equipment, technical knowledge, skill, creativity, and post-production work, it's remarkable to watch what young filmmakers achieve in practice. When did they learn to do all this, you may wonder? Eternal sceptics might claim that making a motion picture is quite simple and anybody can do it. I think not. Over many years I have pursued, with some success, a variety of 'artistic' endeavours, like painting and drawing, and stills photography, both digital and film (I've got almost 30K pictures on my Flickr channel), but despite my recently acquired knowledge and equipment in filmmaking, I can definitely state that I'll never come anywhere near most of these kids, who successfully produce and direct entire feature films at barely half my age...
I initially thought of shooting shorts, but even that requires seasoned skills. So I basically resorted to spending leisure time shooting simple scenes of the 'miracles' of nature... flowers, animals and landscapes. I am obsessed with technical picture quality in terms of colour and sharpness, but video compression may kill all that if you don't pay the necessary attention. With professional gear the captured signal might still be near-perfect as output by the sensor, but since recorders and NLE software use their own codecs, quality can easily be damaged during storage, post-processing, and delivery. Furthermore, a technically almost perfect video file, encoded in a high-quality delivery codec, can and will be further degraded when uploaded to Youtube or Vimeo, as both streaming services re-encode those files with their own codecs for obvious streaming-optimisation reasons. In analog videography we used to suffer huge quality loss from one copy generation to the next. In digital, accordingly, we have to be careful and knowledgeable about the type of compression our codecs apply, in order to avoid similar degrees of quality damage.
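To make that analog comparison concrete, here is a toy sketch of my own (purely illustrative, not any real pipeline): each analog dub adds fresh noise on top of the previous copy, so the error from the original grows with every generation. Lossy digital re-encoding does something similar, just more mildly per pass.

```python
import random

random.seed(42)

def analog_copy(samples, noise_sigma=2.0):
    """Each analog dub adds independent noise on top of the previous copy."""
    return [s + random.gauss(0, noise_sigma) for s in samples]

def rms_error(a, b):
    """Root-mean-square difference between two signals."""
    return (sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)) ** 0.5

# A simple 8-bit-style test signal (values 0..255)
original = [float(i % 256) for i in range(1000)]

copy = original
for gen in range(1, 6):
    copy = analog_copy(copy)
    print(f"generation {gen}: RMS error ~ {rms_error(original, copy):.1f}")
```

The error roughly follows sigma times the square root of the generation count, which is why dub-of-a-dub VHS looked so much worse than a first copy, and why a visually lossless intermediate codec like ProRes matters when footage passes through several tools.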
Shooting video is like shooting stills. Sort of. Composition, lighting, depth of field, ISO, aperture and exposure times, and the lenses used are all much the same. Of course, there are quite a few filmmaking-specific elements that have little to do with stills photography. Examples include frame rates (fps) and the 'rule' that the shutter speed denominator 'has to be' twice the fps (e.g. 1/48 s at 24 fps) to guarantee fluent motion of objects and persons in the shots. Camera movement during individual shots is another major filmmaking factor: viewers are emotionally 'manipulated' by cinematographers through the way the latter hold and move the camera while shooting, as well as through the camera angles used. Types of lenses, apertures and focal distances, camera movement and points of view (angles), along with edit 'cuts' and the time between cuts, are some of the most critical cinematographic tools for triggering viewer emotion during storytelling. Some of those tools hold true for stills shooting as well. Stills tell stories too, you see. Like painting and all man-made 'artificial' imaging. It's all about the story. Only, filmmaking is the most explicit and dominant among all known imaging art forms in the process of visual storytelling.
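The shutter 'rule' above (often called the 180-degree shutter rule) reduces to simple arithmetic. Here is a minimal sketch; the helper names and the list of selectable shutter speeds are my own illustration, since real cameras only offer a fixed set of denominators:

```python
def ideal_shutter_denominator(fps: float) -> float:
    """180-degree rule: ideal shutter speed is 1/(2*fps) seconds."""
    return 2 * fps

def nearest_available(ideal: float, available: list) -> float:
    """Pick the closest shutter denominator the camera actually offers."""
    return min(available, key=lambda d: abs(d - ideal))

# Typical selectable shutter denominators on a stills/video hybrid (illustrative)
shutters = [30, 40, 50, 60, 80, 100, 125]

for fps in (24, 25, 30, 60):
    ideal = ideal_shutter_denominator(fps)
    chosen = nearest_available(ideal, shutters)
    print(f"{fps} fps -> ideal 1/{ideal:g} s, nearest 1/{chosen:g} s")
```

At 24 fps the ideal shutter is 1/48 s, and 1/50 is the closest setting most cameras offer, which is exactly the compromise described later in this post.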
The particular clip embedded above is something I shot and edited yesterday, during the usual rainy afternoon, like so many we've seen this August in Belgium. My compatriots back in the fatherland are heavily sweating, as I type this, under 30+ Celsius temperatures, whereas I have to wear a sweater to get through the day. Unfortunately, this year autumn started in Belgium at the end of July.
I haven't tried any special camera movements this time, other than a couple of focus pulls, and it was all done handheld, with only a few shots slightly stabilised after the fact, in post. To preserve maximum resolution and sharpness I avoided crops and Ken Burns effects in post as well. I used a brand-new Lumix GH4 (firmware 1.0) to output a clean 1080p 4:2:2 10-bit signal over HDMI and recorded it in ProRes HQ on an Atomos Ninja Blade. I used two different lenses, the 14-140 mm that came with the Lumix and a Canon 24-105 mm on an MFT adaptor. With the Canon lens, which gives me no aperture control through the adaptor, I should have used ND filters too, but I didn't. I was too lazy to go back to my room and fetch them, as the shooting took place two floors below, in my backyard. Thank God it was quite dark outside, and lowering the ISO solved my problem. Shooting at 24 fps meant I had to keep the shutter speed at 1/50 and get on with it. The GH4 picture style I used was Cinelike D, without any further parameter adjustments (many experts suggest lowering contrast and sharpness further to yield flat, LOG-like footage for colour correction and grading in post).
The ProRes footage captured by the Blade was readily usable in FCP without further transcoding, since Apple uses ProRes as its standard intermediate format for post-processing. What I was particularly awed by, though, were the fine luma and chroma adjustments made possible in post, thanks to the extra 2 bits of colour depth (on top of the 4:2:2 chroma subsampling) that the Blade recording gave me. Most experts emphasise the elimination of banding at higher bit depths, but my personal experience points more to the ability to make subtle colour and tonality changes with more bits than the traditional 8-bit encoding of consumer VDSLRs. In other words, the extra bits help colourists first, before anything else. The final video file, which I watched on an FHD TV, and not the YT-encoded stream you are watching here, practically convinced me to let go, for the time being, of the 4K workflow that I initially bought the GH4 for, and to continue shooting 1080p at 4:2:2 10-bit instead, until I find a way to get a similar ProRes encoding at 4K / 10 bits minimum. Does this sound a bit like the upcoming Atomos Shogun? Am I looking for more excuses? I might be...
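Why those 2 extra bits matter for grading can be shown with basic arithmetic. The sketch below is my own illustration (the 20% figure is an arbitrary example of a flat, under-exposed range being stretched in post, not a measured value):

```python
def levels(bit_depth: int) -> int:
    """Number of distinct code values per channel at a given bit depth."""
    return 2 ** bit_depth

print(levels(8))    # 256 code values per channel
print(levels(10))   # 1024 code values per channel

def surviving_steps(bit_depth: int, used_fraction: float) -> int:
    """If only a fraction of the input range carried the scene, this is how
    many distinct steps remain to be stretched over the full output range."""
    return int(levels(bit_depth) * used_fraction)

# Stretching a flat 20%-of-range exposure across the full output range:
print(surviving_steps(8, 0.2))   # ~51 steps: visible banding risk
print(surviving_steps(10, 0.2))  # ~204 steps: much smoother gradients
```

Four times as many code values means a strong tonal push in the grade spreads across far more intermediate levels, which is the headroom I noticed when nudging colour and tonality in FCP.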