Tuesday, December 31, 2013

What’s another year?


What does one do during the last day of the year? The unemotional simply goes on with the usual: goes to work, pretends it's a day like any other. The emotional will think back and 'live' again the moments of happiness and sorrow he/she has known during the last 364 days. It's all about feelings for him/her. The desperate will hope for a better year to come, as he/she only recalls moments of loss, despair, and sorrow. The 'deep thinker' will probably think again about the meaning of life, why we are here, and wouldn't it be better if we weren't? The pessimist will see another step closer to his/her grave. And the optimist will feel obliged to the spiritual leader of his/her religion, or, if an atheist, to his/her good fortune, for staying alive and healthy and enjoying the works of God and Man surrounding him/her. So, as with most questions, there are many ways to 'skin a cat'. Which one of those do I choose? I reckon, a bit of everything. Like most of us do, really, unless we are obsessively stubborn and think we are the only ones who are right, and all the rest plainly wrong.

I read this on BBC News this morning: some people sharing their ideas about what has been their 'number' of the past year. An interesting, sort of, pastime read for those with nothing better to do on this last and final day of 2013. Among those defending their choice of a number, there was this dude, David Spiegelhalter his name (sounds kinda German to me), of Cambridge University, who touched on an item that has been debated over and over, a gazillion times in the past: 'How happy did you feel, yesterday?' Some good stats here and there, nice to know, like the Danes being the happiest people around (probably most of them have obscure vision deficiencies, or are used to 'looking too deep into their glass', as we colloquially say in Belgium about those with a conspicuous preference for alcoholic beverages). Near the end of his explanatory arguments, good ol' David came up with a populist subject matter though: Bulgarians seem to be the 'unhappiest' Europeans, he said, with an average score of 5.5. So, he says, if busloads of Bulgarians left their country on January 1st, tomorrow that is, to enter Britain of all places, then the average happiness of both countries would fall (mind you, David had told us earlier that Brits score 7.3 on the happiness scale! Yes sir, 7.3 was his choice of number of the year). Here's what he writes:

«Those smug Danes said 8.4 even in spite of the gloomy crime dramas they watch, while Bulgarians said 5.5, the lowest in the European Union. That means that if the happiest Bulgarians come to the UK the average happiness in both countries would go down.»

Curious as I am, I wanted to dig deeper into the argument and see whether Mr. Spiegelhalter got his math right, and what made him come up with a sentence like this in the first place. The first memory neurons that fired in my skull on reading this statement were those of a 'classic' anecdote, when the New Zealand PM said some time ago, on the occasion of the emigration of the poorest classes from his country into neighbouring Australia, that the average IQ of both countries would 'increase' as a result. What the PM indirectly suggested was that you've got to be a poor and stupid bastard to leave New Zealand for Australia, and that Australians are definitely much stupider than even the stupidest among New Zealanders. This makes sense. Subtract a chunk off the left of the NZ IQ bell-curve distribution and the average moves to the right. Or, in the case of Australia, add a chunk to the right of the Aussie IQ bell curve and the average moves to the right again. The fact of the matter is that the chunk that left New Zealand, in the opinion of a cunning PM, belonged in the left half of the NZ bell curve and the right half of the Aussie one. I thought the PM's response was stated in an extremely funny way without hurting too many national feelings. Maggie Thatcher was quite good at similar catchphrases too... Like her infamous "I love Germany so much that I would have preferred to still have two of them around...", when talking to Gorby about Germany's reunification in the '80s.
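For the arithmetic-minded, the Kiwi logic fits in a few lines of code. Here's a toy sketch in Python, with every number invented purely for illustration:

```python
# Made-up numbers, purely to illustrate the averages argument.
def mean(xs):
    return sum(xs) / len(xs)

nz  = [90, 95, 100, 105, 110, 115]   # hypothetical NZ IQs, mean 102.5
aus = [85, 90, 95, 100]              # hypothetical Aussie IQs, mean 92.5

movers    = [95, 100]                # below NZ's mean, above Australia's
nz_after  = [90, 105, 110, 115]      # NZ without the movers
aus_after = aus + movers             # Australia with them

print(mean(nz),  "->", mean(nz_after))   # 102.5 -> 105.0, up
print(mean(aus), "->", mean(aus_after))  # 92.5  -> ~94.2, also up
```

The trick only works because the movers' average sits between the two national averages. Swap in happiness scores that jump on arrival, as I argue below, and the same arithmetic points the other way.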

David’s statement however is all over the map. There maybe scarce cases when his statement could be mathematically correct, but in most other cases I could think of it's all dead wrong. For this simple reason. IQ is typically constant and could theoretically change, but quite slowly, over a long time, if at all. You are born with your IQ! Whereas, happiness is a state of mind. One moment you are ecstatically happy, another moment you want to cut your veins. Like, one moment you hear you won five million in lottery, next moment you get a call from your best friend telling you he's off to South Africa with your girl friend that you are insanely in love with. Or more like that. Happiness can change in a heartbeat. IQ stays put!

If I were in Cambridge myself, I'd probably attempt a scientific and systematic proof of my claim: that David is wrong and that his argument might have been inspired, voluntarily or not, by a... sorry to say, rather 'racist' and 'populist' spirit. Or simply by lightheadedness or stupidity (he might be an Aussie after all). Take this, for example: a 5.5 Bulgarian entering the UK jumps towards 8-9 points on the happiness scale because of the very fact of immigration, feeling good about the future and about having escaped a miserable life in the Balkans for good. In that case, the alien 'chunk' entering the UK will have a higher average than all those moaning Brits, who nevertheless think their welfare state is worth less than the Kingdom of Denmark's... Also, when an average Brit finds out from his Bulgarian neighbour how bad things were back in Bulgaria, he/she might even feel a lot better about his/her life in Britain instead. It might even move his/her average up from 7.3! Who knows? So, the average UK happiness will have to go up, not down! OK, David?

Of course, British xenophobes will initially get unhappier, influenced by their tabloids, which have been campaigning for some time now about the implications of an imminent Balkan immigration as of January 1st. This negative campaigning is not happening only in Britain. People are being terrified in this country too, to the point that Van Rompuy, the President of the European Council, came on a TV chat show yesterday to claim that this whole negative campaign was simply baloney.

How about Bulgaria itself, then? While David claims the exodus will make Bulgarians more unhappy, I can convincingly claim it will most probably raise their average HQ instead, and this for a variety of reasons. If good news eventually reaches those back home from their relatives arriving in Britain, then their happiness quotient might also increase. Emigrants' parents back home will feel happy about their offspring seeking their fortunes elsewhere, and the good news creates more hope for the still miserable ones left behind, who will soon be preparing for their very own move next. Or, it might increase because there will be more new job opportunities for those left behind. For one thing, the national unemployment stats will fall, as I don't think there'll be lots of Bulgarians with a job and a good life back home who will want to leave all this and move to Britain! It's mainly the unemployed who are desperate to go away. Why otherwise leave the soil of your ancestors at all? Emigration is the result of some sort of despair about your immediate and long-term future. You initially leave to find your fortune elsewhere and, if possible, come back home some day and start all over again, but under better conditions this time, on the savings you built up while abroad. Simple logic, that is. (Mind you, some things in Bulgaria are still far better than in Britain. Speedtest a server in Bulgaria and compare the response and download speeds with another one in Central London. You'll see who's got the better pings and Mbps.) And, BTW, the sun still shines far more often in Bulgaria than in Britain, and that's a confirmed fact of life.

And I could go on and on. Sorry to say, David, most of the arguments I can think of will raise both countries' HQ (happiness quotient). The Kiwi's argument was about IQ (highly invariable), whereas yours was about HQ (highly variable). I rest my case...

Friday, December 13, 2013

Over gamuts and end-to-end workflows...


Introduction
I have been using digital cameras for a very long time. Many years ago, the Nikon D1 became my first decent DSLR camera body. I've also used all the usual suspects in terms of imaging software, like Photoshop, which I had been using since its legendary version 2.5, up until Lightroom came along. Go figure...

During all this time I had to continuously swallow a huge frustration. The guilty party was obviously Colour Management (CM), or rather the lack of it. I generally thought I understood the implications of lacking CM in digitally created and printed images, like experiencing one colour vista on a monitor and getting a different one coming out of the printer. However, I had neither the tools nor the understanding of CM, especially of how to 'calibrate' the hardware in my workflow and how to balance my colours throughout. We've all been there, haven't we?

That much I knew. But how do you practically do CM? What do you need to do when you graciously witness a scene worth capturing, shoot it with your camera, bring it to your computer for postprocessing, and finally print your 'masterpiece'?

I'm not planning to exhaustively explain CM theory here, as I am merely a gifted amateur and not a Subject Matter Expert, by any means! I only wanted to share my recent experience and create a sort of personal notebook to return to in case I forget something about 'how I done it' down the road. Besides, the internet is packed with reference and teaching material. I personally learned what I know from two excellent teachers, Joe Brady and Ben Long, who offer online classes on YouTube and Lynda.com. I'd only like to pen down my thoughts about it and possibly demystify some of the concepts that bothered me personally for over a decade and made me lose hundreds of work-hours in frustration, experimenting and never being able to consistently assure the quality of my output. If you are interested, feel free to read on.

Colour Space
The science of colours is quite a complex discipline, packed with theories full of mathematical formulas and research experiments, involving various fields (mathematics, physics, psychology, physiology...) and aiming to explore and formally define what 'colour' is and how we experience it as humans in the world that surrounds us. And, being a science, it takes quite a long time to grasp its fundamentals, study it in depth, practice it, really learn it and consider yourself a colour expert. For the rest of us, who only seek consistency and truthfulness of colours throughout our imaging workflows, merely for the love of the Photograph, thankfully none of this expertise is necessary. It's simply a 'nice to have' thing, not a 'must have'...

Well, here's the thing, as I understand it. It all starts with the Colour Space. This is a term used to describe the universe of all colours defined within a 'scope'. If the scope is 'all the colours visible to humans', the corresponding colour space will contain colours from all electromagnetic wave frequencies of the visible spectrum, from the limit between infrared and red to the limit between violet and ultraviolet. This particular colour space was first scientifically formalised in the 1930s (the CIE 1931 colour space). Since the advent of digital home computers and (colour) peripheral devices, new colour spaces have emerged to help establish colour communication among the digital devices involved. HP together with Microsoft assembled a large number of relevant manufacturers (including Pantone) back in 1996 and established 'sRGB' as the new standard colour space that most digital devices would henceforth comply with.

Is sRGB the only standard in the digital world? Unfortunately, not so. There are quite a few more, like AdobeRGB (which contains many more colours than sRGB!), ProPhotoRGB, and CMYK (for printers). To this day, however, sRGB is the dominant standard, and it is the one that is supposed to be supported by all computer colour monitors anywhere. In other words, if an image file was created within the sRGB space by a compliant device (computer, scanner, camera), then all its colours should be faithfully visible on the screen of a monitor that supports sRGB as well (most of them do, as mentioned), and there would be no clipping of colours during display. The colour space specification is usually embedded in the file's metadata (header). Whether the displayed monitor colours are visually 'identical' to the corresponding standard sRGB colours, however, is an entirely different issue. It all depends on the quality of the monitor itself and its current state of 'calibration' vis-à-vis the standard sRGB colour set! But more on that later.
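If you want to see that embedded specification for yourself, a few lines of code will do. Here's a minimal sketch in Python with the Pillow imaging library (my tool choice for these examples, not something the workflow above depends on); the file name is a placeholder:

```python
# Check whether an image file carries an embedded ICC colour profile.
from io import BytesIO
from PIL import Image, ImageCms

img = Image.open("myshot.jpg")             # placeholder file name
icc_bytes = img.info.get("icc_profile")    # None if nothing is embedded

if icc_bytes:
    profile = ImageCms.ImageCmsProfile(BytesIO(icc_bytes))
    print("Embedded profile:", ImageCms.getProfileDescription(profile))
else:
    print("No embedded profile; by convention the file is assumed to be sRGB.")
```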

Internals of a workflow
Let's follow a workflow from beginning to end to understand what happens. Suppose you take a picture with your digital camera of a scene of your liking. Modern in-camera processing software will commonly process the captured image within one of two spaces, sRGB or AdobeRGB. The better camera manufacturers let you define which of the two you want; others won't bother and will simply use sRGB instead.

After you transfer your image files to your computer, your imaging application (Photoshop, iPhoto, Lightroom, Aperture or the like) will read the colour of each image pixel in the files and ask the OS to map them to the same sRGB values (colours) on your monitor. If all devices and software apps up to this point properly spoke the sRGB language, then your image on the monitor should very accurately reflect all the colours of your original scene and remind you exactly what triggered your urge to shoot that picture in the first place. Unfortunately, in the early days, this used to be a moment of sheer disappointment because of colour mismatches and hardware quality, but thankfully free-market competition and shareholder pressures forced manufacturers to aggressively innovate and improve their gear and software to the point that quality has dramatically improved.

But there's more. Imagine your shot came out so stunning that you decide to print it on paper. Your imaging software displays the print configuration panels, where you simply select the paper quality of your liking (remember, different papers render pictures in quite different looks). For ease of use, you explicitly configure the print job to be 'managed by the printer' and subsequently click 'Print'. The computer then sends the file to the printer, declaring it an sRGB object. Since you configured the print to be managed by the printer itself, the computer can only send the image files based on the bare sRGB specification; this is in order to avoid 'misunderstandings'. It does the same when you send your files to a printing lab to create enlarged versions of the marvels that you can't possibly print on your 50-buck inkjet. In turn, the printer's software interprets the received sRGB data accordingly and prints on paper, to the best of its capability, a near-stunning representation of your beloved scene, for future generations to admire! And pigs will fly... If only things were that simple...

Three devices were involved in this workflow: the camera, the computer (with its monitor) and the printer. For a scanner workflow, replace the camera with a scanner and the same story repeats. In reality, what typically happens is that none of the devices mentioned does a perfect job respecting the sRGB specification. It's not that they change the sRGB model mechanics, but when sRGB tells them to use a particular colour, their practical interpretation of that colour is slightly to severely different from the standard colour proposed. So, devices need corrections. In fact, there exist well-defined, dedicated procedures and hardware tools that examine how much a device's colours deviate from the standard, and then issue the necessary corrections. These corrections are known as 'profiles'! Hence, profiling is an automated process that occurs during the various stages of the workflow and corrects a device's interpretation of the standard colours.

In other words, Colour Management is in fact the set of activities applied to a given workflow in order to measure device deviations from sRGB's standard colours, issue corrections (a.k.a. ICC profiles) and thereby assure colour consistency from the originally captured scene, throughout the workflow, to the final print. The process of testing and adjusting each individual device is called 'calibration'. So, when you hear someone say 'my monitor needs calibration', he/she really means the monitor needs a profile to correct its current interpretation of the sRGB standard colours and make them look more like what they are supposed to be.
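The yardstick behind all that measuring deserves a line of its own. Deviations are usually expressed as a colour difference, Delta E, computed between the colour the device produced and the colour the standard asked for, both in Lab coordinates. A minimal sketch of the classic 1976 definition (the Lab values below are illustrative, not measured):

```python
# Delta E (CIE 1976): straight-line distance between two colours in Lab space.
def delta_e_76(lab1, lab2):
    return sum((a - b) ** 2 for a, b in zip(lab1, lab2)) ** 0.5

standard = (53.2, 80.1, 67.2)  # Lab of a pure sRGB red, approximately
measured = (51.0, 75.0, 64.0)  # what an uncalibrated device might show
print(round(delta_e_76(standard, measured), 1))  # ~6.4; above ~2 is visible
```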

The Gamut
The gamut is a continuous subset of a colour space containing the colours that a given digital device can actually reproduce. Usually, it is represented as a 2D or 3D plot in a system of fundamental colour coordinates. The gamut plot literally delineates, within a colour space, all the colours that the device can support. As an example, the animated GIF at the beginning of this blogpost shows a 3D representation of two gamuts: the transparent wireframe is the standard sRGB container, whereas the coloured 3D solid, contained for the largest part inside the sRGB plot, is the gamut of my Canon Pro9000 inkjet printer calibrated for Epson's Colorlife pearl-touch paper and the Canon CLI-8 ink cartridges.

There are some interesting observations to make here. Since the two gamuts are not identical, I might capture colours with my camera, and subsequently display them on my monitor, that my particular combination of printer/inks/paper would never be able to reproduce on a print. Regardless of how you combine the ink dots coming out of the printer head's nozzles in the four basic colours Cyan, Magenta, Yellow and Black (Key), a.k.a. the CMYK colour space (in which all printers operate), there is no way to reproduce colours of the sRGB specification that lie outside the printer's gamut. Even worse: as you can observe in the GIF plot above, there are colours in the blue-green part of the printer gamut that fall outside sRGB altogether! Put otherwise, the printer could possibly print beautiful blue and green colours that an sRGB monitor is not capable of displaying. How can you print such colours then, if you happen to be a sucker for blue-greens? One way is to process your images in another space, like AdobeRGB, which is larger than your monitor's sRGB, and hope these colours will eventually show up in your print. You won't be able to tell, however, until the print comes out, because you can't possibly softproof the images beforehand; your monitor is indeed incapable of displaying them, remember? Or, you can forget about printing deep blue-greens forever. Sounds pretty awful, doesn't it?

What happens, then, if your image contains colours that you can witness on the monitor but your printer can't print? These colours are called 'out-of-gamut' and have to be dealt with somehow, otherwise images would print with visible white holes in them. Like certain Van Gogh or Lautrec masterpieces, where the absence of paint in large parts of the canvas left the bare unpainted base to serve as an integral part of the work. Being the Masters that they were, of course, they used it to their advantage. I suppose they didn't do it because they ran out of paint, although one might never find out...

In digital printing practice, out-of-gamut colours are replaced with similar colours that are 'in-gamut'. There are four standard methods to do that, and the methods are called 'rendering intents'. However, only two of them are of value to us end-users, and these are 'perceptual' and 'relative colorimetric'. Worth mentioning: the former not only replaces out-of-gamut colours with similar ones inside the gamut, but also moves neighbouring in-gamut colours to new gamut positions to create a more pleasing result to the human eye, which at the same time might cause overall tonal shifts. It is therefore up to the user to decide which of the two methods he/she wants applied, depending on the general look the impacted image acquires after the rendering intent is applied. One can actually inspect the rendered images before printing by 'softproofing', a function we find in some good imaging software like Adobe Lightroom. Softproofing presents to the user the effect of either method, as well as the pixels that caused the trouble in the first place; it actually shows all out-of-gamut pixels in a conspicuous colouring. The phenomenon of out-of-gamut colours in an image is known as clipping. I generally prefer the perceptual method. It seems to preserve the depth of colours. The few times I used relative colorimetric, the images came out with less contrast, a bit on the 'flat' side.
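For the curious, softproofing is not Lightroom magic; it's a standard facility of colour-management engines. Here's a hedged sketch using Pillow's ImageCms module (a wrapper around LittleCMS); the profile paths are placeholders, and the exact constant spellings vary a bit between Pillow versions:

```python
# Soft-proof an image against a printer profile and mark out-of-gamut pixels.
from PIL import Image, ImageCms

img = Image.open("masterpiece.jpg").convert("RGB")   # placeholder file

proof = ImageCms.buildProofTransform(
    "sRGB.icc",               # profile of the image / monitor (placeholder)
    "sRGB.icc",               # we want the preview back on the monitor
    "Pro9000_Colorlife.icc",  # printer/ink/paper profile (placeholder)
    "RGB", "RGB",
    renderingIntent=ImageCms.Intent.PERCEPTUAL,  # or RELATIVE_COLORIMETRIC
    flags=ImageCms.Flags.SOFTPROOFING | ImageCms.Flags.GAMUTCHECK,
)
preview = ImageCms.applyTransform(img, proof)
preview.show()  # clipped pixels show up in the gamut-warning colour
```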

What I forgot to mention is that the 3D solid plot of a colour space in the animated GIF above is shown in the Lab coordinate system, where L denotes the 'lightness' or 'brightness' of a given colour point, and a, b are plot coordinates used to describe the remainder of a colour's properties, like saturation and hue. The app I used to create that GIF is Apple's ColorSync utility. There also exist apps that offer more than ColorSync. One I have seen is ColorThink Pro by Chromix. It can even display the 3D gamut plot of any given image (!) in the form of a set of coloured 3D points in space, whereby you can visually inspect whether your particular image falls within your target printer's gamut or contains out-of-gamut areas. It's like softproofing, but in ColorThink you get a 3D plot representation of the image as a cloud of dots superposed upon the printer's gamut solid. However, this is far less attractive than softproofing, as the latter also presents the clipping effect on the image itself and visibly simulates the overall impact of the rendering intent as well...

Practically, the theory above explains why we often see prints that don't quite match what we see on our monitor, even when both our printer and monitor are perfectly calibrated. Colour management cannot guarantee you perfect prints, only the best possible within the limits of the capabilities of your printer/ink/paper combination. So, if you don't usually like your prints and prefer the monitor display instead, start by printing on a different paper. And if that is still not enough, try buying another printer. If you are out shopping for printers, it would be nice to know how much of the standard sRGB space is practically 'covered' by the gamut of the printer you are interested in, and that for papers with quality and surface feel of your liking. I don't believe that, if you posed a printer salesperson a query like this, she'd rush to answer it without hesitation. Most probably, she might think you are pulling her leg or have fallen to Earth from an alien planet... true story!
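If you have the printer's ICC profile file, you can get a back-of-envelope answer to that coverage question yourself. A rough heuristic, not a lab-grade method: push random sRGB colours through the printer profile and back; colours that survive the round trip largely unchanged are treated as in-gamut. This sketch assumes an RGB-class inkjet profile (most desktop printer profiles are) and a placeholder path:

```python
# Estimate roughly what fraction of sRGB a printer profile can reproduce.
import random
from PIL import Image, ImageCms

N = 10000
src = Image.new("RGB", (N, 1))
src.putdata([tuple(random.randrange(256) for _ in range(3)) for _ in range(N)])

srgb    = ImageCms.createProfile("sRGB")
printer = "Pro9000_Colorlife.icc"          # placeholder profile path
intent  = ImageCms.Intent.RELATIVE_COLORIMETRIC
there = ImageCms.buildTransform(srgb, printer, "RGB", "RGB", renderingIntent=intent)
back  = ImageCms.buildTransform(printer, srgb, "RGB", "RGB", renderingIntent=intent)
trip  = ImageCms.applyTransform(ImageCms.applyTransform(src, there), back)

ok = sum(max(abs(a - b) for a, b in zip(p, q)) <= 8      # arbitrary tolerance
         for p, q in zip(src.getdata(), trip.getdata()))
print(f"~{100 * ok / N:.0f}% of sampled sRGB colours survived the round trip")
```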

Profiling your devices (and doing proper CM)
Here's the moment of truth. I'll tell you what I did with my gear, since there are several commercial solutions for doing this, varying from affordably inexpensive to unaffordably dear; from a few hundred to several thousand bucks, that is. Having the calibration done by specialised services is rarely a practical solution for us amateur users, I think. It's pretty easy to calibrate a camera, but bringing your computer and printers to a specialised service is too much of a hassle. At least, for me it is.

I acquired Datacolor's SpyderSTUDIO for less than half a mille (in €). There are also solutions offered by X-Rite and others. I also bought X-Rite's ColorChecker Passport for very little money, a fabulous camera calibration target, very handy to carry around whenever you are out shooting.

My workflow consists of a Canon 5D Mark III with about half a dozen lenses, a 27-inch iMac (the latest model), a Canon Pro9000, and a pigment-based Epson R3000. All these devices can be profiled by the CM gear I described in the previous paragraph.

Profiling the camera
I have decided to calibrate my camera each and every time I deal with a serious shoot, studio or nature photography. The advantage of doing this lies in the time saved in postprocessing and the image quality obtained. If you wonder why I'm doing this instead of maintaining a few standard profiles created earlier, like one for studio work with strobes or LEDs, one for tungsten lighting and interior shooting, and another for landscapes in ambient light (sunshine and cloud), then, well, you may be right. It could work either way. If I occasionally forget to calibrate, I can effectively fall back on a corresponding profile created in the past. Mind you, light is a peculiar phenomenon. It changes all the time. Summer light is not the same as autumn light, different parts of the day have different light, and colours often look different in it. I don't mean different because of wrong colour balance, just that the colours our eyes experience seem to have a different look and feel. That's part of what makes a scene scream to be photographed. The only way to achieve perfection in representing the 'atmosphere' of your scene is by consistently recalibrating your camera during each and every shoot. Does this meticulous profiling work show up in postproduction? You bet! Certain colours come out significantly different from what you'd get with the standard situational profiles offered by your camera manufacturer, even if that manufacturer is called Canon.

X-Rite ColorChecker (CC) Passport, an incredible little marvel!
For such a calibration, I first custom white-balance my camera by shooting a neutral 18% gray card by Sekonic, and next I take a shot of the ColorChecker Passport target, filled with colour patches (see right), holding it at arm's length in front of me, or having someone else do it for me. I then continue to shoot my subjects, paying attention only to my focus and framing. Oh yes, there's one thing I forgot to mention before: ever since I started with CM, I do everything in 'manual', even the focusing bit. To this end, I use an external light meter by Sekonic, their latest marvel the L-478DR, which is also configured with the necessary exposure correction profiles issued for my specific Canon camera body; I built those myself with the help of a Sekonic app. In other words, I evaluate the light of a scene by measuring ambient incident light, and I only measure reflected light (spot) when necessary (high-contrast scenes), simply to locate the zones of strong highlights and darks in the scene and their position within my camera's dynamic range, in order to avoid clipping. All this is done, of course, with the Sekonic L-478DR. I stopped using my camera's light-metering functions after I bought the Sekonic. Even with this Canon camera, light measurements kept producing very inconsistent results. It's quite embarrassing, especially during serious commercial shoots, and it also requires loads of postprocessing to correct the result. But that's a subject for another post.

X-Rite offers a free app and a Lightroom plugin that reads the CC Passport image (raw format, please; profiling only works on raw), figures out where the patches are precisely located, and then compares the average colour values of each individual patch with the standard values they were originally meant to be. You see, the actual CC Passport patch colours are hard-coded in X-Rite's profiler plugin; since the patches may eventually fade, X-Rite recommends buying a new Passport once the one you own passes its expiration date, a couple of years (nice try!). Based on the deviations found, the plugin creates a correction profile for the camera, which can then be systematically applied during/after import to all your other photographs in the shoot. In Lightroom, you can even create a preset with the camera profile applied, in addition to many more corrections that LR itself provides, all based on the CC Passport image: for example, lens corrections, sharpness, curve, noise reduction, cropping, and more. I normally apply white balance, colour calibration, lens aberration corrections, and sharpness. It becomes interesting to work in tethered mode, that is, with your camera connected via USB to your computer, and watch each shot enter Lightroom in real time with the necessary preset applied. If you've done your homework well, what you get this way is almost a final product, only awaiting your own personal 'creative' touch... Huge time savings, mark my words!
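To demystify what such a plugin does at its core, here is a deliberately simplified stand-in: compare the patch colours the camera recorded against the reference values and fit a 3x3 correction matrix by least squares. Real camera profiles (DNG/ICC) are far richer than a single matrix, and the numbers below are placeholders, not X-Rite's actual data:

```python
# Fit a simple linear colour correction from measured vs. reference patches.
import numpy as np

reference = np.array([[115,  82,  68],   # 'dark skin' patch, nominal value
                      [194, 150, 130],   # 'light skin'
                      [ 98, 122, 157]])  # 'blue sky'  ...24 rows in reality
measured  = np.array([[109,  80,  75],   # per-patch averages from the raw
                      [188, 145, 138],
                      [ 92, 118, 166]])

M, *_ = np.linalg.lstsq(measured, reference, rcond=None)  # 3x3 correction
corrected = measured @ M
print(np.round(corrected))  # should now land close to 'reference'
```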

Profiling the monitor
Depending on the solution you adopt, there's always a formal procedure to follow, led by a software app that goes with it. In my case, I obviously use Datacolor's profiling procedure. There's a shedload of 'how to' YouTube clips about this; take your pick. Google a few keywords together, such as 'Datacolor Spyder4 Elite', and Bob's your uncle. During calibration, a measuring device, hung face-down at the centre of your monitor, measures the light and colour coming out of the monitor while the latter obeys a series of display instructions from the calibration software. The software then corrects the deviations from the standard (sRGB), along with your monitor's gamma and colour temperature, and creates a corrective profile that is henceforth applied to the monitor. That's all. There's not much more I could write on the subject, other than that one needs to recalibrate a monitor regularly, as the ambient light might evolve and change, and monitors themselves tend to age, their display colours fading over time. When I read the full report that the calibration app created about my fancy iMac monitor, I came close to jumping off the building. Apart from the one good verdict, that sRGB was 100% supported** (thank God for that!), the homogeneity of performance over the different parts of my screen real estate seemed quite appalling in a few areas and far from what it should normally be, almost 20% below par in the lower parts of the screen. In the 2D gamut plot shown above, you can see one result of my iMac's monitor testing and calibration: 100% sRGB and 78% AdobeRGB coverage. When I saw that, I decided to abandon AdobeRGB altogether. What is the point of shooting colours that I can't possibly see on my iMac? It's like buying a stereo with outstanding hi-fi sound reproduction very near the 20 kHz end of the human audible spectrum. Why bother? I'm pretty deaf to those sounds! Only my neighbour's dog can hear them!
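A word on that 'gamma' the calibrator keeps adjusting: sRGB prescribes a precise transfer curve between stored channel values and emitted light. It's short enough to write down; a sketch straight from the published sRGB definition:

```python
# The sRGB transfer curve: a linear toe near black, then a 2.4-exponent ramp.
def srgb_to_linear(s):  # s: stored channel value, 0..1
    return s / 12.92 if s <= 0.04045 else ((s + 0.055) / 1.055) ** 2.4

def linear_to_srgb(l):  # l: linear light, 0..1
    return 12.92 * l if l <= 0.0031308 else 1.055 * l ** (1 / 2.4) - 0.055

mid = srgb_to_linear(0.5)
print(round(mid, 3))                  # ~0.214: half the code value is ~21% light
print(round(linear_to_srgb(mid), 3))  # 0.5 again; the round trip holds
```

Calibration, in essence, bends the monitor's real behaviour until it tracks this curve.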

Color patches on target for calibrating the printer.
Print on the target printer without Colour Management
Profiling the printer
This is the last bit of the puzzle: calibrating the printer. One has to first print a few targets with a bunch of colour patches (see left), and then measure those same patches on the target print with another tool, a so-called (mouthful) spectrocolorimeter. This baby actually measures the reflected light bouncing back from each patch. To do that, the spectrocolorimeter momentarily shines white light onto each printed patch, which the patch then reflects back to the sensor. That's why the target print needs to dry well before you start the procedure. It's also imperative that the targets be printed on the same paper quality for which the printer profile is being prepared. Also, no ColorSync or printer colour management should interfere with this print job; this is a critical condition of the procedure. The computer simply transmits the target file to the printer, using plain vanilla sRGB. In turn, the calibrating software, being well aware of the average sRGB colour value that corresponds to each individual patch, subsequently yields the set of deviations between the expected (standard) values and the measured ones. All this is reflected in a brand new ICC profile. In all future CM print jobs after calibration, the process should be consistently managed by the ICC profiles for properly colour-managed results.
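Once that profile file exists, 'managed by the application' boils down to a single colour conversion before the data goes to the driver. A minimal sketch, again with Pillow's ImageCms and placeholder paths:

```python
# Convert an sRGB image into the freshly built printer profile for printing.
from PIL import Image, ImageCms

img = Image.open("masterpiece.jpg").convert("RGB")      # placeholder file
printable = ImageCms.profileToProfile(
    img,
    ImageCms.createProfile("sRGB"),   # the working space of the file
    "Pro9000_Colorlife.icc",          # the new printer profile (placeholder)
    renderingIntent=ImageCms.Intent.PERCEPTUAL,
)
printable.save("masterpiece_for_print.tif")
```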

It's a wrap!
Is a Colour Management effort worth the cost and the time spent on calibration? I'll rush to respond: I won't be going back to my old habits any time soon. CM is worth every bit of it. It's not just subtle corrections we are talking about here. For the first time ever, I could experience images on the monitor with colours very close to those I witnessed when I took the picture, and the prints matched my monitor experience more than ever before. I strongly felt that I had cracked the CM puzzle. It really seems to be working for me!

_____________________________________________________

** It was funny: during a session of Ben Long's class 'Printing for Photographers' (Lynda.com), where he was calibrating the brand new monitor he had been bragging about all along, his process reported that it could only cover 80-and-change percent of sRGB. He looked flabbergasted at the result and openly took the piss out of his calibration gear instead, as he was convinced his monitor was still the best thing money could buy. What a disappointment! When my process showed 100% coverage I had a serious ROFLMAO. I beat you, Ben, hands down. Apple still rules!

Friday, November 29, 2013

Oὐ γὰρ ἔδωκεν ἡμῖν ὁ θεὸς πνεῦμα δειλίας, ἀλλὰ δυνάμεως καὶ ἀγάπης καὶ σωφρονισμοῦ. (For God hath not given us the spirit of fear; but of power, and of love, and of a sound mind. — 2 Timothy 1:7)

I watched two 2008-crisis-related documentaries yesterday. I started with The Flaw and finished with Hank. Did I learn or hear much I hadn't heard before? Not really, if one thinks of the specifics we all heard and read in the blogosphere and the Press back then, with only a few notable exceptions. It's about those exceptions that I'm posting this.

One** was the moment when Alan Greenspan, the Almighty Warlord of Capitalism of the '90s, admitted that he became "distressed to find out about a flaw in the model of how the free markets were actually structured", a flaw he personally hadn't realised during the 40+ years he'd been active in the finance universe, and that he eventually admitted before a US Congress committee that "we were not all that smart", falling only just short of actually admitting that "we were basically plain stupid and arrogant, and have allowed Armageddon to happen!"

My 'favorite' Yale professor of finance, Nobel laureate Robert Shiller*, said he had found that housing prices corrected for inflation had not fundamentally changed from before WW2 up until the year 2000, from which point they started heading up... up... up... towards the stratosphere (a.k.a. the housing bubble).

I liked a clear distinction someone made** between 'assets' and 'goods'. Assets are thingies one uses for investment in order to generate financial income, and goods are thingies that people typically use and/or consume to get through another day. People should not mix the two and employ 'goods' as 'assets'. This was an explicit criticism of all those responsible for packaging sub-prime mortgages, slicing them into tranches of increasing risk and selling them to investors in the form of non-transparent investment vehicles, a.k.a. CDOs. It seems CDOs used to return lots more profit than traditional investment alternatives. That's why investors loved them and they became so popular. Until the financial tsunami drowned, by surprise, everyone with CDOs in their portfolios.

Another point that was made** was something we have often heard: that despite the crisis, the world's richest keep increasing their wealth. Money disappears from the pockets of the middle class and the poor into the pockets of the richest 1% of the population, especially the 0.1%, and even more so the 0.01%. We are talking tens to hundreds of billions of dollars here, mind you. The commentator said that money obviously shifts from millions of households, who lose purchasing power and in the process severely hurt the economy, into the pockets of those who have no use for it whatsoever (other than boosting their ego and showing off in the Forbes list of the World's Billionaires). As luck would have it, this morning a similar article popped up in Ta Nea's online edition discussing Portugal's richest and how, during just one of the nation's worst crisis years, they added more than €12 billion to their personal fortunes. Mind you, many of these Midases are 75+. Do they ever retire, them greedy bastards? This kind of money won't buy them love anymore... that's for sure.

Hank*** was a sort of biography of Henry Paulson, ex-Goldman Sachs Chairman and CEO, who served as Treasury Secretary during George W. Bush's final years (2006-2009). His name is indelibly connected to the events at the eye of the storm of the September 2008 financial outburst: the Lehman Brothers bankruptcy, the Bear Stearns takeover, the AIG bailout, TARP, and the further bailout of the largest US banks. It was good to hear it from the horse's mouth, although what he or anyone involved had said and done during the Doomsday of that dreadful weekend, while trying to sell Lehman to Barclays and failing, had already been shown on TV and written about a million times.

One little detail was striking to me, though. It was about Paulson's wife Wendy, who emerged from this documentary as a very strong and straightforward personality, and about what she responded when, after the collapse of the Barclays deal, Hank in desperation stepped out of his meeting to call her, admittedly overwhelmed by fear. When he mentioned to her that he had suddenly felt very afraid, she cited a line from Apostle Paul's second letter to Timothy, 1:7... "For God hath not given us the spirit of Fear but of Power and of Love and of a Sound Mind****" (KJV). That phrase alone shook him up and gave him the courage to go back and wrap up business, with Ben Bernanke and Tim Geithner at his side.

A final remarkable point, which we also know and have heard a lot about, was Paulson's reaction to the interviewer's question about banks who, despite failing miserably, used bailout money to pay hefty bonuses to their executives; indeed the banks, not just the investment banks but also the likes of Citi and BoA, shared a huge part of the responsibility for the notorious 2008 financial clusterf@ck. The last thing one could consider acceptable was bonus payments to the top in a business-as-usual fashion. No wonder 'Occupy Wall Street' became a world movement in the years that followed. Hank (you don't mind me calling him Hank, everybody else does) smiled bitterly, practically jumped, saying 'not that one again', and reacted emotionally, showing obvious sadness at even having to talk about it. He had forgotten, at that moment, the fundamental dogma of the Wall Street he had been part of for decades before then: that 'Greed is Good'...

____________________________________________
* I actually watched a number of his 2011 "Financial Markets" lectures on iTunes U before 'The Flaw' yesterday, and, my goodness, Mr. Nobel Laureate is such a bore in terms of lecturing technique. Someone should really talk to him. Teachers should look towards their students most of the time, not try to 'uncover' hidden messages from God by staring at the blackboard instead...
** The Flaw
*** Hank
**** οὐ γὰρ ἔδωκεν ἡμῖν ὁ θεὸς πνεῦμα δειλίας, ἀλλὰ δυνάμεως καὶ ἀγάπης καὶ σωφρονισμοῦ.

Friday, November 15, 2013

Autumn Light

[Photo gallery: CH-BW-1.jpg through CH-BW-12.jpg]

In the kingdom of Belgium, where during most of the year a drizzling rain turns our landscapes into a sad, colourless, dull mixture of all shades of grey, the autumn light of a rainless afternoon becomes a pleasant surprise. As the sun plays hide 'n seek with the scattered clouds, the last yellow leaves that are still hanging in there become the subject of an interesting interplay of light and shadow, of the kind one can only recreate in a studio with strobes, filters and strict light control with shaders, reflectors and other purpose-made strobe accessories. I like to seek spots in my garden where nature does the same with unpredictable effects. My lens, especially my 135mm, vacillates from leaf to leaf seeking the ultimate shadow/light interplay, the leaves becoming members of an orchestra that plays a different symphony, one without sound but with lights and darks. On days like today, nature on the cusp of autumn and winter becomes a photographer's paradise, especially if you adore warm colours as much as I do: yellows, reds, rusty orange and vanishing greens.


Sunday, October 20, 2013

First Autumn Colours



First Autumn Colours 2013

Three things I deeply cherish almost more than anything: light, colours, and the games they interplay inside our perceptive brains, almost touching our soul. I make a serious habit of capturing that visual miracle inside my various cameras, be they professional and expensive or more of the cheap stuff, like point-'n-shoots and iPhones/iPads. Of all light I cherish autumn light most, and on days when we are blessed with sun-rays escaping the Belgian clouds and making the warm colours of decaying tree leaves really pop, I feel so fortunate. Equipped with my Canon gear I usually go for a walk not far from my house, by the river Schelde, only a few hundred yards away, and shoot the beauty surrounding me. I must have photographed the same subjects a thousand times, for 25 years now. Nevertheless, there's always a different glimpse of the nature around me ready to amaze me again, as the trees and greenery are rarely the same. Everything changes in time, and so do the autumn vistas. There are always new ways to enjoy it and photograph new scenes unseen before, like when I came across a peculiar colony of mushrooms today in a formation for which I could think of only one word: Abundance.
Yellow, green, magenta, orange, and deep reds, combined with scarce spots of clear blue sky escaping the scattered white/gray clouds, have created for me one of the brightest symphonies of autumn colours I could ever have dreamed of. I feel blessed... indeed!

Feel free to experience my enthusiasm by clicking the links above and seeing the entire set on Flickr, called First Autumn Colours.

Wednesday, October 2, 2013

Τι Λωζάνη, τι Κοζάνη; (What's the difference between Lausanne (Switzerland) and Kozani (Greece)? - old Greek pop song)

In the '60s there was a hit pop song in Greece with the title 'What's the difference between Lausanne and Kozani'. I guess the lyricist picked Lausanne for its rhyme with Kozani when pronounced in Greek. I searched for the lyrics on the net and, reading them, I realised that the comparison between the cities is about both being located in 'mountainous' areas that 'suffer' snowfalls in the winter (so, what's the difference?). The lyrics are furthermore about a soldier serving in the military (still compulsory in Greece) and moaning about his lonely and sexless life... I thought about it the first time I visited Lausanne, back in 1986. A thought flashed by then, 'dream on, baby', as an imaginary answer to the songwriter, reckoning he'd never seen Lausanne before writing his lyrics. Or he might have, and the whole thing was simply a catchphrase for laughs. In any case, the song stuck and became an instant hit. However, the issue of city comparison between North and South still remains. Do cities in Southern Europe indeed look different from cities in Central and Northern Europe, and why?
Coming myself from Alexandroupolis, a Greek city that grew to 75 thousand inhabitants from the 15 thousand of when I used to live there, and having become utterly frustrated when I visited it after being absent for a very long time, I have a 'weak' spot when it comes to how cities are built, organised and maintained to make life pleasing and comfortable for their inhabitants. It's not sufficient to simply claim that 'the south is chaotic, dirty and disorganised' (which indeed it mostly is, not just in the streets but also inside public administration buildings, schools and universities) and that 'the north is clean, organised and disciplined under the law' (which it often is, incidentally). There is also no imaginary line separating the troubled south from the disciplined north, and it's not like crossing the line will bring you from 'heaven' to 'hell', kind of thing... It is definitely a matter of the culture of the individuals living in those cities and managing them daily; it's about tradition, vision, individuals' dignity, and applicable laws. But it is also about certain quite simple policies that various nations with the proper long-term vision seem to apply, whereas others (mostly in the South of Europe) don't have a clue. And their impact can be humongous. A small perturbation with immense repercussions. Like chaos theory itself...
To illustrate my argument I have included here two pictures of two cities of similar size in two representative countries of our dearest EU. One is Rethymnon, Crete, in Greece, and the second is Aalter, East Flanders, Belgium. I shot both pictures myself, a few weeks apart. Click on them for a larger view and tell the difference by inspecting the two. I will only mention one policy that would make a hell of a difference to Rethymnon and would bring it much closer to order, out of its current chaos. In one word: Cables!
No cables for communications, power and TV/Internet distribution can be seen in the open in Aalter, and, just like in Aalter, none in the town where I live and in so many other Flemish towns, cities and villages. Only houses, streets and street lanterns. I am not even talking about the old Rethymnon balconies and the parts of buildings that seem to be falling apart. There, they will tell you that there's a crisis and no money available for maintenance and cleanup of the towns. Sorry to say, I'm not buying that. As it appears, despite what they say, there's still a lot of money left in Greece, for things like top-brand fashion and clothing (much more conspicuous than anything I ever saw in Belgium) and driving cars. With 600 thousand inhabitants in Crete, I reckon they must have at least 300 thousand cars and pickup trucks. There are cars everywhere you turn. Appalling! I am not even talking about their larger cities. I almost went ballistic when I arrived at Anogia, a large village not far from Rethymnon, where it was almost impossible to drive around because of the hundreds of cars parked on both sides of the street in ways that would make Parisian drivers look like the obedient angels of car parking. Apparently, even in Crete there are initiatives for burying the cables, as I heard about Archanes (I think), a village near Knossos, where there's an EU-funded programme for doing with cables what has been the policy in Belgium for more than twenty years now.
In conclusion, Τι Λωζάνη, τι Κοζάνη; I bet ya, a hell of a difference between the two! For a long time still, deep into the distant future... Unless hell freezes over.

For kicks, I photoshopped out the cables and did some minor repairs and painting, and that's how it comes out... At least it looks safer now...


Saturday, September 14, 2013

iTunes Festival

It is the 7th edition of the Festival this year and, as always, it lasts the entire month of September. Since last year I have been watching its individual shows meticulously and with great pleasure. Needless to say, I watch it via all my Apple devices and computers, but it appears at its best on my AppleTV connected to a Samsung HDTV. The show takes place in the London Roundhouse, which has been converted for the purpose into an outstanding venue to accommodate the many acts (or gigs, to use the youth's jargon). Apparently about 20 million people requested free tickets this year to attend the live shows, which is more or less normal given the stars appearing this time, the likes of Lady Gaga (who kicked off the Festival this year) and Elton John. I watched the latter's show twice and I'm gonna have to watch it again and again before they take it off the air (or off the cable, to be more accurate). A genuine virtuoso, still performing as if time hadn't passed. An incredible pianist and singer. And with plenty of energy: he spent 1 hour and 42 minutes on stage! Gaga's gig was also exceptional performance-wise, if you like that sort of music, that is. I have been amazed by the discipline and professionalism of the artists, and their humility too, regardless of their stardom status. I love such pros. Respect!

Last year I enjoyed for the first time a show by my favorite band, Mumford & Sons, and also watched for the first time the wunderkind Ed Sheeran in a one-man-orchestra performance. The most romantic voice and music money can buy. But also Elbow and Emeli Sandé. We had seen all of them during the Olympics' opening and closing ceremonies before the Festival started in September 2012. The Olympics shows had catapulted many of them internationally and established them in the ranks of the best of Britain.

So far this year I have been impressed by a 22-year-old kid, Tom Odell, whose voice reminds me of the lead singer of Mumford & Sons, though his style is heavily influenced by Elton John and Bowie too. An extremely romantic voice and romantic lyrics. The kid plays the piano and does his best to sound like Elton John, but he'll have to spend a few extra performance hours to come anywhere close to the great Artist (with a capital A).

The iTunes Festival is something for all ages. The production and its TV coverage are simply impeccable, and one can watch the shows via an iPad app, AppleTV and iTunes on a Mac/PC. No excuses for missing any gigs. If not 'live', at least one has the opportunity to watch the shows after the fact, multiple times, to his/her heart's desire, even deep into the following month of October... and buy the records linked to many of the gigs.

Thursday, August 22, 2013

Focus stacking cont'd.

A picture I made today using method 2, explained further on
Focus stacking (FS) is like 3D scanning. In such scanning, human bodies or other objects are scanned and images of virtual slices of the object are created at discrete positions along an imaginary axis. Software can then blend the obtained slice images into high-resolution 3D virtual objects, through which experts navigate to establish medical diagnoses or other useful 'stuff'. In FS, we similarly shoot a number of photographs of the same object, and, as in traditional scanners, the frames are only sharp (and useful) on discrete planes perpendicular to the lens axis. Each such frame has a very shallow depth of field (DOF) that spreads incrementally in front of and behind the focus plane. Shallow DOF is typical of macro photography, which is what FS is mainly used for. Next, FS software maps out the blurred (out-of-focus) areas in each of the frames and keeps only the sharp parts. By finally blending the latter together, a razor-sharp photograph is obtained with a 'humongous' DOF (almost spectacular compared to conventional photography).
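The core of that blur/sharp mapping is simpler than it sounds. Here's a bare-bones sketch of the idea (assuming the OpenCV and NumPy Python libraries, and placeholder file names): score per-pixel sharpness with a Laplacian and, for every pixel, keep the frame where that pixel scores highest. Real FS packages add frame alignment, smoothing of the decision map and seam blending on top:

```python
# Minimal focus stack: per pixel, pick the frame with the strongest detail.
import cv2
import numpy as np

def focus_stack(paths):
    frames = [cv2.imread(p) for p in paths]
    sharpness = []
    for f in frames:
        gray = cv2.cvtColor(f, cv2.COLOR_BGR2GRAY)
        lap = cv2.Laplacian(cv2.GaussianBlur(gray, (5, 5), 0), cv2.CV_64F)
        sharpness.append(np.abs(lap))              # high where detail is crisp
    best = np.argmax(np.stack(sharpness), axis=0)  # winning frame per pixel
    stack = np.stack(frames)                       # shape (n, h, w, 3)
    h, w = best.shape
    return stack[best, np.arange(h)[:, None], np.arange(w)[None, :]]

cv2.imwrite("stacked.jpg", focus_stack([f"frame_{i}.jpg" for i in range(9)]))
```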

There are two ways to focus a camera on the aforementioned focus planes (incidentally, those same planes are parallel to the plane of the camera's sensor, if you haven't guessed it yet):

1. Keep the camera immovable on a steady tripod, pointing at the object, and use the lens ring to focus and shoot frames on a number of equally spaced imaginary planes along the lens axis of the camera.

2. Focus at one extreme of the object (say, its front end), and then micro-displace the camera to new focus plane positions in a discrete number of incremental steps, shooting a new frame at each step, until you reach the object's other extreme (say, its back end).


I added those arrows on the 454's knob for better control
After all is said and done, (a) I still don't know which of the two methods yields better results, (b) whether better results can indeed be obtained by one of the two methods, and (c) what the reasons for possible differences in quality would be. If there are any differences, they must be the consequence of the blur/sharp region mapping algorithms used in the FS apps, right? I can only say that I tried both methods and found that the two of them yield equally great results. I should probably run a formal test and compare the results systematically. Until that is done, one thing I have found to be true:

To systematically focus on those incremental planes by precision turning of the focus ring, one does indeed need to tether the camera to a computer and control it with a dedicated app (like Helicon Remote). It's virtually impossible to obtain similar increments between focus planes by manually turning the focus ring. If you can manage to do it, then you're a champ. Also, the measurement scales typically depicted on focus rings are not nearly accurate enough to be used for such micro-adjustments. For all practical purposes, only computers can manage this precision focusing properly. So, for method 1 you really need to tether your camera to a computer and control the shoot with an app. Furthermore, most people own cameras that are not readily supported by the available tethering software (e.g. Helicon Remote). So, you could really be left out in the cold...


Manfrotto 454 Micrometric Positioning Sliding Plate
Doing 2 is much simpler. You mount a micro-positioning sliding plate like the one shown here on the head of your tripod, and then mount your camera on the sliding plate. After you focus your camera on one extreme of the object and shoot a first frame, you slide the camera forwards (or backwards, depending on which extreme you shot first) at regular increments, carefully turning the finger-tip control for precise micro-movements; I used three entire revolutions of the knob in a few tests I did, and it turned out fine. The larger the increments, the fewer frames you'll have to shoot, obviously.
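The arithmetic behind choosing that increment is back-of-envelope stuff. Every number below is a placeholder (look up the DOF for your own aperture and magnification, and check how far your own rail travels per knob revolution); the point is only the relationship:

```python
# How far to slide between frames so the sharp slices overlap, and how many
# frames that implies. All values are illustrative placeholders.
import math

dof_per_frame_mm = 1.2   # DOF of one frame at your aperture/magnification
overlap          = 0.3   # keep 30% overlap between consecutive sharp slices
subject_depth_mm = 20.0  # front-to-back depth of the subject

step_mm = dof_per_frame_mm * (1 - overlap)           # 0.84 mm per move
frames  = math.ceil(subject_depth_mm / step_mm) + 1  # 25 frames
print(f"slide {step_mm:.2f} mm per step, shoot {frames} frames")
```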

In other words, method 2 only requires you to invest in a sliding plate (plenty of solutions for a few bucks, less than 100 anyway) and any camera of your liking, without worrying about the availability of tethering control apps. The FS itself you can eventually do with any dedicated app (options range from free, through shareware, to a few good commercial solutions like Photoshop and Helicon).


Camera setup for shooting the coins
Of course, in case you have the proper gear supported by solutions like Helicon's, it's a much simpler and far more fine-tuned experience, as you control everything from the computer, including all detailed camera settings and shooting parameters. At the same time you can keep the camera truly immobile vis-à-vis the object and avoid the need for frame alignment later. Indeed, even using the sliding plate described above, a remote control to trigger the camera release button, and a super-steady tripod like the 055XPROB I use, the resulting frames may still need to be precision-aligned afterwards for perfect results. The FS apps I use, Photoshop and Helicon Focus, are able to perform such alignments, but the less they are needed, the better the result. I think...

Tuesday, August 20, 2013

Helicon Focus vs. Photoshop focus stacking

To clear things up beforehand, this is not an exhaustive comparison of the two, as that would defeat the purpose. It's only about focus stacking, which is what Helicon Focus is actually made for, whereas for Photoshop it's only one of the thousands of functionalities it otherwise offers. I'll only share my experience of the results of focus stacking based on a series of frames shot with my 5D Mark III and the Canon 100mm macro lens. The meerkat is a present I got on my latest b-day. The shots were programmed with Helicon Remote tethered to the 5D. I received Remote together with Helicon Focus Pro on a one-year user license. Photoshop is part of my Creative Suite 6 Academic Edition.

Helicon has plenty of functions and parameters to both shoot pictures and post-process them. I used mostly its default settings. To understand how Helicon Remote works: you define the closest focus point, then the farthest, and it calculates the number of shots needed for best results between the two extremes, as well as the interval between two consecutive focus points. You can preview the results live (for Canon gear it can also show the actual depth of field, DOF) and you can define the manual settings for exposure and shutter speed, as well as the ISO setting, with their direct visual effect shown on your computer monitor. An additional Remote bonus is the ability to bracket exposure at up to 15 different points and define the shooting parameters for each one of them individually (if you've got the patience and know what you're doing, that is). I applied exposure bracketing for this demonstration and defined 4 bracket points with 1/3 of a stop between them. Once all my shots were done (4 exposure brackets at each of 9 focus points, that is 36 frames in total), I loaded them into Helicon Focus for post-processing and let Helicon do the rest. In all honesty, I have no idea how they do HDR first and focus stacking next; I couldn't find anywhere an explanation of that particular detail, so I asked them by email yesterday, but have received no reaction yet... They seem to process all the frames at once. The combined HDR/FS result out of Helicon Focus is shown on the left of the picture above (click for a larger view). That of Photoshop is shown on the right.

Returning to Photoshop with the original 36 source files, I first created 9 HDR files with the Photoshop HDR function, which I then combined into a final shot via Photoshop's focus stacking function (Edit > Auto-Blend Layers). I'm not getting into the details of how to do this in Photoshop, as the net is loaded with reviews and videos about it. Here I'm only comparing results.

The two final images, one from Helicon and one from Photoshop, I then opened in Lightroom and only corrected for colour balance by picking a gray spot on the meerkat's nose. The resulting pictures were combined on a common canvas in Photoshop, and the texts and arrows were added subsequently in Voila. This could have been done in a thousand different ways, but that's how I did it. It's not important anyway.

As for the results...

Photoshop is far superior to Focus, but it takes longer and is more complex to handle (unless one somehow automates the batch process in a script). Focus didn't quite handle the HDR effect properly and appeared less sharp; its colour balance may look more pleasing to the eye at first sight, but in reality the Photoshop colour balance is more correct, as I can tell by comparing my impression of the image with the real thing next to me as I write this. Worst of all, though, the Focus stacking post-processing created an awful stain artefact on the meerkat's face and blouse, probably due to the processing algorithms, whereas the Photoshop rendering is almost impeccable. It's also sharper, and the HDR effect shows excellent dynamics with proper detail in both the dark and the highlighted areas.
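About that batch automation aside: here's one hedged way such a script could look in Python, under the same OpenCV assumption as the stacking sketch in the post above. Mertens exposure fusion stands in for Photoshop's HDR merge (it fuses brackets without tone mapping, so it's a cousin, not a clone), and the file names are placeholders:

```python
# Fuse each focus point's exposure brackets, ready for focus stacking.
import cv2
import numpy as np

BRACKETS, FOCUS_POINTS = 4, 9          # 36 frames, as in the shoot above
merger = cv2.createMergeMertens()

fused = []
for fp in range(FOCUS_POINTS):
    group = [cv2.imread(f"shot_{fp * BRACKETS + b}.jpg")
             for b in range(BRACKETS)]
    hdr = merger.process(group)        # float32 result, roughly 0..1
    fused.append(np.clip(hdr * 255, 0, 255).astype(np.uint8))

# 'fused' now holds 9 exposure-fused frames, one per focus point; feed them
# to a focus-stacking step such as the sketch in the post above.
```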

I can't stress enough how useful Helicon Remote is, though. This is an excellent tool that, curiously enough, they don't charge you for but offer along with Focus itself. I tried Focus without Remote and my results were appalling. You just can't get the right frames to blend without serious artefacts unless you use Remote. For them it's a necessity, for us it's a bonus. I had to pay for an annual subscription to Focus, but having seen what Photoshop does, I'll now only use Helicon Remote to shoot the pictures and subsequently do the HDR and stacking within Photoshop. Kind of a peculiar conclusion, but that's what I think. The fact that Photoshop didn't create the stain was alone good enough for me to abandon Focus for this FS purpose.

To be fair to Helicon, I am a novice at using Focus, and maybe there are tricks and hints and points to pay attention to, because of the stacking algorithms they use, that would avoid the problems I had. But, problem is, I've got no time to waste trying to find out. Photoshop does the job by design, without 'special' hints and tricks, so why bother? Software packages should perform as specified, not by wishful thinking... And do it simply, without fuss, even for geeks like myself...

UPDATE: I'll be damned. Just moments after I posted this, I received the following email from Helicon Support. Enjoy:
--------------------------------------

Dear *******,
You have received an answer to your message (***********) from our support.
=================================================
No, Helicon Focus can only focus merge stack, at least in current version. So I would suggest to focus stack each exposure and then merge it HDR. Or, merge to HDR first and then focus stack. In the latter case please select HDR with global, not local operator. Otherwise artefacts on the resulting image are possible.
=================================================
Best regards,
Stas Yatsenko
Helicon Help Desk
http://support.heliconfilter.com
------------------------------------
You received this email as a registered user of one of the Helicon Soft Products.

--------------------------------------
So, it's either a huge coincidence or the guys at Helicon are continuously checking, in real time, whatever is written about their products! I am impressed! So it was the HDR factor that brought about the stain on the Helicon result, then. Now you know. This of course changes the entire situation. I'll have to try what they say and come back to you. I'll feed Focus the HDR files combined in Photoshop and then compare the two. More work. What else is there to do during summer anyway?

UPDATE 2: Here is the result, as promised, of focus stacking in Helicon Focus the HDR files produced in Photoshop. This time Focus didn't create the aforementioned artefact stains either. The problem was Helicon's lack of documentation about combined focus and exposure bracketing. What a difference a proper Help file makes! Actually, there's not much noticeable difference between the two after all. In that case, maybe Focus is preferable to some folks because of its simplicity of use and the many configuration parameters it provides. As for me, I dunno. I might use both, just in case one offers me better results than the other. For HDR-related, exposure-bracketed source files, I think I'll go Photoshop for the entire workflow from the start. For plain vanilla FS without HDR, though, and since Photoshop takes somewhat longer, I might try Focus first, and if I don't like the result, try Photoshop next... Keeping busy, in other words...


UPDATE 3: In posts like this one needs to be extremely fair and accurate. I therefore performed another test, based again on the same kind of source files from another subject shoot (31 focus points in total), without exposure bracketing this time, to take HDR out of the equation. The result is shown below; click for a larger view. I have printed 'Fail' on the points where one method performed worse than the other. In this example Helicon Focus wins hands down. Like I said, one needs to try both methods in each individual case and keep the best result. Unless one is an expert on FS algorithms and capable of predicting, by simple inspection of the source files, which of the two methods will perform better... I'm not able to do that. Not yet...