
Saturday, December 1, 2012

AGFA. CCD vs CMOS

My compact camera just broke. I say "just broke," but in reality it had been slowly dying for a few years. I dropped it too many times, all the edges were dented, and the lens cover wouldn't close anymore unless I hit it hard. Now it refuses to focus no matter what. Goodbye, Coolpix L18! It was one of the best hundred dollars I ever spent!

I badly needed a small camera that fits in my pocket and still takes quality pictures for when I can't take the big DSLR with me (a Sony DSLR-A300, which, by the way, has an APS-C CCD sensor).

So I started looking around, and a brand popped up on the screen. An old acquaintance. In the 70s, at the Children's Club in Câmpulung, Suceava, we had 16 mm film cameras that used AGFA-brand film. AGFA also made the magnetic tape for our recorders. I looked on Wikipedia and saw the brand still exists, as a holding company, still in Germany. Most compact cameras in the US today are Japanese; I had never seen a compact camera brand from Germany until now.

So I ordered an AGFAPHOTO Digital PRECISA 1430 from Amazon for 75 bucks (on the AGFA site it is 59,95 €). Then I downloaded the manual and looked at the specs.

Many of the specs are similar to, or slightly below, the competition at that price.

But a few things caught my eye: a 7-element lens, live histogram, CCD sensor, and "Artificial Intelligence Metering" (wow, what can that be? LOL).

I have been saying for years that CCD sensors always seemed better to me. Since there is no on-pixel processing and the signal is sent off the sensor area to be processed, the light-capturing area of each pixel is larger than on a CMOS sensor, where part of the pixel is taken up by other components instead of light capture. I don't know exactly, but probably about twice as large. So it captures more light and there is less noise in the output signal; an equivalent CMOS sensor would probably need about twice the area. By this reasoning, a good 2/3" CCD could give better-quality pictures than a full-frame CMOS, but that's my opinion, of course. A CCD also gives more signal uniformity, because each pixel is not processed by its own separate components. There are more arguments in the site framed at the bottom. Even before having it in my hands, I dare recommend this camera to everyone. I don't know yet whether color depth could be improved as well on any compact, but I will come back to that.

For my Romanian friends: I see it is not available there in 14 MP yet, but it is in 12. Who needs those megapixels anyway?



"In a CCD sensor, every pixel's charge is transferred through a very limited number of output nodes (often just one) to be converted to voltage, buffered, and sent off-chip as an analog signal. All of the pixel can be devoted to light capture, and the output's uniformity (a key factor in image quality) is high. In a CMOS sensor, each pixel has its own charge-to-voltage conversion, and the sensor often also includes amplifiers, noise-correction, and digitization circuits, so that the chip outputs digital bits. These other functions increase the design complexity and reduce the area available for light capture."



Got it today, 10-05. QED, or this is why they say one picture = 1000 words.

From kitchen window, Dec.1 2012. clickable! (try middle click)

Directly

https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj6JMM9-izvqh3HDR960rLp8k241_ag3IPXeMJXJETWrVKvITxyVunkRintSmiycZAZM21vIeKAxjQXaxgYXRgH6aet6TaZx7AWVDjxYCD3b6ZLW5vO8bWpKoR_hyphenhyphendr1-O-Sy4EuqUMoeM/s1600/APDC0035.JPG

Friday, November 23, 2012

Film Rules

And Flickr started to hiccup. After a certain number of pictures in my photostream they reduce the default viewing size, and I have also noticed a significant slowdown lately. That's why I will try to move some of my better pictures here. I don't know how yet, but I'll figure something out. Until then, here is a series I recently "digitized" from old paper prints pulled out of a shoe box, using my DSLR-A300 from the closest distance at which it would still focus; any closer and I can hear the gears in the lens getting stuck, unable to focus. That is about 9 1/2 inches from the first lens element to the print. I hope I'm not going to break the gears again, as I did in the past when shooting at seagulls near the ocean, with the lens going past its limit trying to focus at infinity. The focusing motor on that camera (which is in the camera body, as opposed to in the lens as on most current models) is a bit too strong for a cheap stock lens, the Sony DT 18-70mm.

The light source was a 50 W halogen bulb, and I had to move it around until I saw no more reflections in the coating of the prints. For white balance on the camera I used 2600 K.

Each picture is enlargeable by clicking on it; best results with a middle click, opening it in a new tab.

Near Mt Hood @2000
Near Jewell, OR, @2000
Sisters Quilt Festival @2000
Sisters Quilt Festival, OR, @2000
Near Mt.Hood, OR, @2000
@2000
The 3 Sisters Mountains, Near Sisters, OR, @2000
Near Tillamook, OR, @2000
Near Cannon Beach, OR, @2000
Near Sisters, OR, @2000

Wednesday, October 24, 2012

Unprocessing Photographs

In my own experience, when using automatic exposure I got better quality (exposure, color realism) with film than with digital.

That's why I say film photography is much more forgiving of bad exposure than digital, maybe by several steps on the indicator. To get the same quality you get with automatic exposure on film, with digital you often have to offset the exposure a bit in the... lucky direction. Some will say automatically, as I did for years after I bought my first digital cameras: there's always Photoshop. Based on the rather impressive achievements with it I saw here, there, and everywhere.

But then, by pure luck, I saw on several occasions that heavily processed photographs can easily be outperformed by the... non-processed ones.

Silver grains on film are harder to saturate. They will always carry more meaningful information after a slightly off exposure (and anyone who has worked with film can agree on this).

Current camera sensors translate light (actually the three "fundamental" colors) into numbers at the pixel level (each pixel split into three colors) and put those numbers into computer-type files, written to some kind of memory organized like an ordinary file system, usually FAT32.

Their capabilities rely mainly on how many bits of information, or color depth, each physical pixel on the sensor can generate: usually 10-14 bits per pixel per color in a RAW file and 8 in a compressed JPEG.

But after jumping from film to digital you will soon be unsatisfied. You can spend hours in Photoshop on a picture and still not be satisfied. Here's why.

10, 12, or 14 bits per color is not that much of a range (compared to the almost continuous, practically infinite range of analog), so you have to use it properly. To make things worse, JPEG, which is what your computer (through the browser and other software) actually displays, reduces the color depth to 8 bits per color, a total of 24 bits per pixel. Even amateur photographers know that 24 bits per pixel was one of the earliest standards for graphics cards; today's better settings and capabilities go to 30, 48 bits and more. Too bad, because there are hardly any pictures we can view at that depth, even though we now have the cameras to generate them. The bottleneck is now JPEG, and nobody seems to care.
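Just to put numbers on the bit-depth gap, here is a tiny back-of-the-envelope sketch (plain Python, nothing camera-specific, no real image involved):

```python
# How much tonal information fits in N bits per color, and how many RAW
# levels get folded into a single JPEG level. Pure arithmetic.

for bits in (8, 10, 12, 14):
    levels = 2 ** bits
    print(f"{bits:2d} bits per color -> {levels:5d} distinct levels per channel")

# Converting a 14-bit sample to 8 bits collapses 2**14 / 2**8 = 64 RAW levels
# into every single JPEG level, so fine tonal gradations are discarded.
raw_levels_per_jpeg_level = 2 ** 14 // 2 ** 8
print("RAW levels folded into one JPEG level:", raw_levels_per_jpeg_level)
```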

Although I have not thoroughly researched this area, my experience tells me digital pixels saturate pretty fast when exposure is not handled carefully. That means that, for a given color, all 8 bits will end up either all 0s or all 1s and carry no meaningful information when exposed to too much or too little light. Those are what we call burnt pixels (not to be confused with the defective, dead pixels sometimes present on sensors or displays as "accepted" fabrication flaws): the dreaded blinking areas you see on the playback display of some of the more advanced cameras right after taking a picture.

Current cameras usually offer several kinds of exposure indicators: classical metering arrays borrowed from late film-era SLRs, tied to the classical scale indicator at the bottom of the optical viewfinder; dense arrays with histograms, using the main sensor directly on compacts and mirrorless cameras or an extra "live" sensor on some DSLRs; and finally the display of saturated (burnt) pixel areas as blinking regions on top of the resulting image (I have seen this only for already-taken pictures, on certain cameras in certain display modes).

Unfortunately, some of the current metering systems are designs taken directly from the film era, not sufficiently updated for the technological differences of digital photography. That's why, if you use a classical metering system (not a "live" one through the main or an auxiliary sensor) and want to raise quality, it is a good idea to balance exposure manually, to avoid those saturated areas and make the most of your sensor. One good way to do this, if you have time, is by trial: eliminate, through compromise, as many burnt pixels as possible at both ends of the range, which also leaves more meaningful information across the rest of the range.

In fewer words: if you push the exposure toward the middle of the sensor's range over as much of the sensor's area as possible, you get increased dynamics (meaningful use of the entire numerical range of the sensor's color depth), ideally with only a single burnt pixel at each end of the range, and/or increased information density (a higher pixel count, a "thicker" histogram, across all intensities) for the same subject, relative to pictures taken with different, less ideal exposures. But this can only be done by looking at the resulting picture on the back display, and that takes time.
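To illustrate that "count the burnt pixels at both ends" check, here is a rough off-camera sketch; it assumes NumPy and Pillow are installed, and the file name is made up:

```python
# Count clipped ("burnt") pixels at both ends of the range for an 8-bit image.
import numpy as np
from PIL import Image

img = np.asarray(Image.open("test_shot.jpg").convert("RGB"))  # placeholder file

burnt_white = np.sum(np.all(img == 255, axis=-1))   # fully saturated pixels
burnt_black = np.sum(np.all(img == 0, axis=-1))     # fully black pixels
total = img.shape[0] * img.shape[1]

print(f"clipped white: {burnt_white} ({100 * burnt_white / total:.2f}%)")
print(f"clipped black: {burnt_black} ({100 * burnt_black / total:.2f}%)")
```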

The next way to predict the quality of a picture is the live histogram. In digital photography, a histogram is a graph whose X axis is intensity (the numerical value of the pixels, from darkest to brightest) and whose Y axis is the count of pixels at that intensity. Histograms sometimes come per color, and sometimes, like the "live" or real-time ones on a camera's display, as a single cumulative graph.
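For the curious, this is roughly what such a graph boils down to off-camera: per-channel counts plus a cumulative (luminance) one, computed with NumPy and Pillow on a made-up file name, purely as an illustration:

```python
# X axis = intensity (0..255 for an 8-bit JPEG), Y axis = pixel count.
import numpy as np
from PIL import Image

img = np.asarray(Image.open("test_shot.jpg").convert("RGB"))  # placeholder file

for i, name in enumerate("RGB"):
    counts, _ = np.histogram(img[..., i], bins=256, range=(0, 256))
    print(name, "peak at intensity", counts.argmax(), "with", counts.max(), "pixels")

# Cumulative / luminance histogram, a rough average of the three channels,
# which is what most "live histogram" displays show.
lum = img.mean(axis=-1)
lum_counts, _ = np.histogram(lum, bins=256, range=(0, 256))
print("intensities actually used (non-empty bins):", np.count_nonzero(lum_counts))
```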

There is not much explanation of how to use a histogram in the manufacturers' manuals.

I have read many different interpretations on forums of what it is and how to use it. I assume some manufacturer added it as a feature and others followed, with little explanation of what it is and how to use it, leaving photographers and teachers to figure it out, maybe for the manufacturers too. Below are a few examples.

And finally, the scale indicator tied to the metering array. With experience, by categorizing types of scenes, you learn which scenes require adjustment and in what direction. Lately I have come to use this in most situations, for speed and convenience. From time to time, though, I still take a look at the final picture too.

Compression, using mathematical algorithms based on eye perception to reduce the size of the output file, works by taking advantage of low information density (at least in some areas) when transforming or packing the adjusted raw data. If there are no big variations in the numerical values from one pixel to the next across the picture, that is, less dynamics, the file will be smaller.

A good way to measure information density, besides the histogram, is how big the output file is compared to other shots taken under identical conditions in fast succession (bracketing). The better the exposure, the bigger the output file.

(This could be a good idea for an exposure-optimization algorithm, even in real time on the cameras of the future: generate and measure the size of a temporary compressed file while bracketing exposure around the value suggested by the metering array, even at a lower resolution or on selected areas, for speed.)
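As a rough off-line illustration of that idea (not in-camera code; NumPy and Pillow assumed, and the source file name is made up), one can fake exposure offsets by multiplication and watch how the compressed size changes:

```python
# Save the same frame at simulated exposure offsets and compare JPEG sizes.
import io
import numpy as np
from PIL import Image

base = np.asarray(Image.open("test_shot.jpg").convert("RGB")).astype(np.float32)

for stops in (-2, -1, 0, 1, 2):
    # multiplying by 2**stops crudely simulates an exposure change
    exposed = np.clip(base * (2.0 ** stops), 0, 255).astype(np.uint8)
    buf = io.BytesIO()
    Image.fromarray(exposed).save(buf, format="JPEG", quality=90)
    print(f"{stops:+d} stops -> {buf.tell():7d} bytes")
```

In this toy version the best-exposed variant generally compresses to the largest file, which is exactly the proxy described above.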

High dynamics (the histogram touches both the right and left limits of the graph), low information density (thin histogram), but, due to the monotonous nature of the subject, burnt pixels on white only over an insignificant area (the cloud in the upper left). Most of the pixels are in the thin, high-intensity area on the right. No major exposure or WB mistakes. If I had lowered the exposure I would have had a more even histogram, but unrealistic brightness, colors, and general look.

High dynamics, high information density, well-balanced histogram. However, the trees look unrealistic due to the extreme contrast of the subject, though details and colors are still discernible. Realistic overall impression.

Pretty underexposed (by about one step), due to hurrying to catch the subject, the failure of the automatic exposure, and the lack of more manual compensation (it is only one step up). Not very high dynamics (the highest-intensity pixels are missing on the right side of the histogram). Below-average information density (histogram thickness). Burnt pixels on white (the histogram touches the upper limit on the right only). Still useful to show, for composition and drama. It still wouldn't get better through processing.
I used the word dynamics earlier in relation to digital photography. I would make an analogy here with digital sound recording, when (instead of finding the optimal recording level) you record at too low a volume: you do not use the full range of the analog-to-digital converter, never reaching the highest values available for sampling, and instead use only the lower numbers, leaving part of the range unused.

Similarly, if your volume is too high, you get saturation: the samples above the maximum range of the converter carry no meaningful information, because they all end up at the same value, the full depth of the available bits (all 1s).
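Here is the audio analogy in numbers, a small sketch with NumPy only (no audio hardware assumed): the same sine wave quantized at full scale and 24 dB lower, counting how many converter codes actually get used:

```python
# Quantize a sine wave with a 16-bit "converter" at two recording levels and
# count how many distinct codes are used: fewer codes = fewer effective bits.
import numpy as np

t = np.linspace(0, 1, 48000, endpoint=False)
sine = np.sin(2 * np.pi * 440 * t)

for gain_db in (0, -24):
    gain = 10 ** (gain_db / 20)
    samples = np.round(sine * gain * 32767).astype(np.int16)
    print(f"{gain_db:+d} dB -> {len(np.unique(samples))} distinct 16-bit codes used")
```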

If you have more than one type of exposure measurement on the same camera, as happens on later models, they usually tend to "disagree," sometimes by several steps. The best idea is to compromise between them. Using the classical indicator as a reference, you will learn with time that some types of scene always require the same kind of exposure compensation to reconcile the indicator with the histogram and the final result.

You don't even need to learn the basic principles of photography in order to balance exposure manually. All digital cameras, compacts and DSLRs alike, now have a function called exposure compensation. You can use it with any automatic exposure program, such as aperture priority or fully automatic, and tell the camera you want more or less exposure if you don't like what you see on the display after the first try. (I never correct more than 2 steps in either direction of the indicator, even if the histogram, when I have time for it, tells me otherwise. I call it the common-sense rule.)

You can't burn pixels if you use the histogram, but so far I haven't seen a DSLR with a live histogram in the optical viewfinder.

There are some with a live (real-time) histogram on the LCD on the back, but framing through a display is not the same as through the optical viewfinder, because of ambient light or even sun falling on it, the lower level of detail, and even the comfort of holding the camera. So, when it is possible and necessary, you can go back and forth between the optical viewfinder for framing and the back display for the histogram, though that can be inconvenient.

The best way to balance exposure (if you have time) is to look at the result on the camera's playback display set to the histogram view (without necessarily using the histogram itself) and try for the best compromise between saturated white and saturated colored (burnt) pixels at both ends of the range, which most cameras represent as blinking areas in two different colors overlaid on the actual picture. (If I'm lucky and have no burnt pixels after the first shot, and again if I have time, I still play around, going for maximum contrast, with the histogram in the middle and as thick as possible.)

Trouble is... when there is plenty of light you can't see the picture well on the back display to evaluate contrast. Sometimes I think of getting some sort of hood, like the cloth photographers used in the early days of glass-plate photography...

About processing: there is no software algorithm capable of predicting how a pixel would have looked under different light than the original.

Correcting on the computer afterwards takes far more time and cannot deliver the same result by any means: it cannot recover dynamics, and above all it cannot recover burnt pixels (except for isolated ones, by borrowing or averaging neighboring values); information, once lost (again, a burnt or saturated pixel records nothing), cannot be recreated. And even for the useful pixels, correcting with the linear or monotonic algorithms that processing software probably uses (judging by the processing time and CPU usage observed while it runs) is not the same thing as photographing the subject with the right exposure achieved by optical means. For example, when compensating underexposed images, those algorithms simply multiply the value of each pixel (or the opposite for overexposed ones); the same goes for increasing contrast, where they multiply the more intense pixels by more, and so on.

When the analog-to-digital converter samples light (color) at the pixel level and converts it to a numerical value, it always rounds the result in the last, least significant bit, and rounding means losing some information. We all know that if you multiply a rounded value, you multiply the inherent rounding error as well. What happens when two adjacent pixels have both been rounded, but with different rounding errors (one up and one down)? Multiplying the digitized values does not give the same result as multiplying the values before digitizing. From experience, I can tell you that if you try to correct more than 1-2 steps in software, you will destroy the quality of an image gathered with great effort and such sophisticated, expensive optics; in fact you are magnifying errors, and the real loss already happened at exposure time.
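A toy example of the rounding argument, in plain Python with made-up values (not real sensor data):

```python
# Two nearly identical analog levels, rounded by the converter, then multiplied
# by a strong brightness correction. The rounding gap gets multiplied too.
true_a, true_b = 100.4, 100.6             # nearly identical "analog" levels
q_a, q_b = round(true_a), round(true_b)   # digitized: 100 and 101

gain = 4.0                                # a strong, "+2 stop" style correction
print("corrected from digitized values:", q_a * gain, q_b * gain)                    # 400.0 vs 404.0
print("digitized after correction     :", round(true_a * gain), round(true_b * gain))  # 402 vs 402
```

The two pixels that should have stayed practically identical end up four levels apart once the correction is applied to the already-rounded values.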

A good way to prove this would be to take a good picture, process it to simulate underexposure by lowering the brightness, save the result, reload it, try to bring it back to the original brightness by reversing the process, and then compare the results.
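Here is a rough sketch of that experiment, assuming NumPy and Pillow and a made-up file name; it is only an off-line simulation, not a claim about any particular processing software:

```python
# Simulate underexposure on a good JPEG, round-trip it through JPEG again,
# "correct" it back, and measure how far the result drifts from the original.
import io
import numpy as np
from PIL import Image

orig = np.asarray(Image.open("good_shot.jpg").convert("RGB")).astype(np.float32)

dark = np.clip(orig * 0.25, 0, 255).astype(np.uint8)   # ~2 stops darker
buf = io.BytesIO()
Image.fromarray(dark).save(buf, format="JPEG", quality=90)
buf.seek(0)
reloaded = np.asarray(Image.open(buf).convert("RGB")).astype(np.float32)

restored = np.clip(reloaded * 4.0, 0, 255)              # reverse the "correction"
print("mean absolute error vs original:", np.abs(restored - orig).mean())
```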

I also leave all the camera settings that imply real-time, in-camera software processing, such as contrast, sharpness, vivid, dynamic range, and so on, at normal or zero, to make sure no in-camera processing occurs at all. The only things left on are noise reduction in low light and the stabilizer, which on better cameras actually moves the sensor mechanically to compensate for unwanted camera motion.

Color temperature in photography is not the same thing as in physics, but it imitates it. It is a convention that allows the final compressed, displayable JPEG, generated from the RAW data read directly off the sensor, to be adjusted for the light conditions at the time the picture was taken.

We need that adjustment for the same reason our eyes adapt to different color temperatures between dawn and dusk: it lets us see objects in roughly similar colors, making the differences between light sources bearable for our perception.

We all know from school physics that a hotter light source puts out bluer light, and a cooler one redder light. That is the opposite of our intuition, which treats reddish light as warm and bluish light as cold; maybe that's why it is so confusing to many beginners. Probably we associate (psychologically) the blue of the sky with cold and the red of fire with warmth.

The equivalent color temperatures of visible light sources are well beyond our thermal sensory capabilities. As an example, an incandescent bulb filament actually runs at about 2700 K and a flash plasma at about 6800 K; the surface of the Sun is about 5778 K. What would those temperatures feel like to our skin? Not much difference.

With the Sun's actual surface temperature in mind, consider the equivalent color temperature of the light at the Earth's surface on an average sunny day at noon, which is about 5500 K. Why lower? Because of atmospheric filtering: the atmosphere scatters some of the blue photons in all directions, preventing them from reaching the surface directly (hence the sky's blue color), which "cools" the light reaching the surface a little, making it slightly redder (as I said in the previous paragraph, the total opposite of our intuition). The effect is much stronger at dawn and sunset, when the light comes in at a shallower angle and has to travel farther through the air.

There are also light sources that are not hot, or barely warm, like LEDs, that put out light equivalent to daylight color temperatures. So it's not really about temperature but about the composition, the mixture of visible wavelengths in the light source; in other words, how much of each color (wavelength) a hot source of the equivalent temperature would emit.

All the colors we see around us usually come from the same "white" light, reflected in different ways by different objects that absorb some colors and reflect others. White light is actually a mixture of colors, and not always the same one (we perceive a whole range of mixtures as white thanks to eye adaptation; we are so used to this that we never think about it). But if the light source is unbalanced and puts out more of some colors than others, the camera needs a reference for what the eye, or brain, does automatically for us: the equivalent temperature of the light source. It needs it in order to transform a RAW file into a JPEG, or to generate a JPEG directly, that matches our perception of the real scene in that light, a perception given by the automatic adjustment of our eyes.

There is an automatic function that makes the camera detect the average color composition of the light on the sensor; in fact, it is the only option most photographers use. But what if the composition of the image is unbalanced, like when photographing a red tree in the fall that fills most of the frame? The camera averages the framed image, "thinking," as programmed, that the average should be neutral; with that red tree occupying most of the frame, it over-corrects, and the result is a cold, bluish-looking red tree.
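For illustration, this is roughly what that "the average should be neutral" (gray-world) assumption looks like as code; NumPy and Pillow assumed, the file name is made up, and real cameras are certainly more sophisticated than this:

```python
# Gray-world white balance: scale each channel so the frame's average comes out
# neutral. If the scene really is dominated by one color, the scaling over-corrects.
import numpy as np
from PIL import Image

img = np.asarray(Image.open("red_tree.jpg").convert("RGB")).astype(np.float32)

means = img.reshape(-1, 3).mean(axis=0)   # average R, G, B of the whole frame
gains = means.mean() / means              # push that average toward gray
balanced = np.clip(img * gains, 0, 255).astype(np.uint8)

print("channel means before:", means.round(1), "gains applied:", gains.round(3))
Image.fromarray(balanced).save("red_tree_gray_world.jpg")
```

With a frame full of red, the red gain comes out well below 1 and the blue gain well above it, which is exactly the bluish over-correction described above.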

There are ways to measure the color temperature of the light source. A well-known one, which I used until I learned enough, is the camera's main sensor plus a gray card. A gray card is a piece of cardboard, about A4 size, with a standardized gray coating. You calibrate the camera by taking a picture of the card placed near the subject, in the same light, using the camera's dedicated function for measuring color temperature just before taking the actual picture. But after a little experimentation that lasted a few months, I learned to do without it.

Most situations between sunrise and sunset fall in the range of about 5500 to 5700 K. Then it climbs quickly, all the way to around 7500 K within half an hour after sunset, and then, as we start losing color perception in what we perceive as dark but not yet complete night, it goes quickly all the way up to the limit of the camera's setting.

Flash is about 6800 K, but if the camera is on automatic it will know you used it.

Most cameras have presets like daylight (5500), cloudy (5700), incandescent (2700), flash (6800), and so on. So you don't have to set the color temperature itself, just recognize the type of light and pick the preset.

For color temperature, any approximate manual setting is better than automatic.

During the day it is better to leave it on daylight or cloudy all day long than on automatic. You will be off by at most 100 or 200 degrees when shooting outside in different conditions, which is less than when the camera automatically picks the wrong correction, and it does so far too often. A little effort to learn how to set it and you will get dramatically better pictures, with little or no correction needed in processing (if you still want to do processing after reading this).

If your camera has white balance settings in degrees, setting 5500 covers 99% of daylight situations. I use 5500 when I'm in the middle of a field with no clouds and the sun above me, and 5600 or 5700 in a forest, on a cloudy day, or toward evening. Or... you can start playing around and make subjective photographs, over-correcting by 100 degrees one way or the other: + for a warmer impression, - for a colder one, according to everything I tried to explain above. And since you will want a colder impression in a forest, you'd better leave it at 5600! If you don't have settings in degrees, the best choice is daylight or cloudy, depending on the situation, the camera, and the results (some cameras work better on cloudy even in sunlight). But not automatic!

I set the camera to output both RAW and JPEG, but I never use the RAW files; I just keep them for reference, just in case. Camera manufacturers are very interested in giving you the best possible JPEG straight out of the camera, because many people never use RAW, so I simply use those. For a while I used the converter software that came with the camera to generate JPEGs from RAW on the computer, but it had a very slight tendency to push the pictures toward green; I like the JPEGs straight from the camera better. But in case you don't like the colors in the final JPEG, or simply didn't have time to set the color temperature on the camera when you took the picture, you can always generate a new JPEG from the RAW on the computer, adjusting the color temperature. That is not really processing, merely re-doing the JPEG generation the camera does.

In a RAW file the color temperature information is not baked into the pixels as it is in a JPEG; it is just a parameter in the file's header. The value of each pixel in the RAW file comes directly from the sensor's physical pixels, as they were when you took the photograph. When converting to JPEG, the correction is taken from the header and applied to every pixel. Re-processing a JPEG just to adjust colors, or for any other reason, is the worst possible idea; you should always use the RAW as the source for processing, if possible. Still, there is software that allows it, because many photographers are not aware of the options RAW files give them, or they use cameras without a RAW output option, or they have to correct an older JPEG photograph.
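As an illustration of re-developing instead of re-processing, here is a sketch using the third-party rawpy package (a libraw wrapper, not the camera maker's own converter); the file name and the white-balance multipliers are made up:

```python
# "Re-develop" a RAW file with a different white balance instead of editing the JPEG.
import rawpy
from PIL import Image

with rawpy.imread("some_shot.ARW") as raw:   # placeholder file name
    # use_camera_wb=True would reproduce the camera's own as-shot balance;
    # user_wb takes four multipliers (R, G, B, G) for a hand-picked one.
    rgb = raw.postprocess(user_wb=[2.2, 1.0, 1.4, 1.0])

Image.fromarray(rgb).save("some_shot_rebalanced.jpg")
```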

Now that I'm writing this, I remember a conversation I had with someone a few days ago. I agree now that by using optical filters toward sunset you can level the color balance: you increase the amount of useful information, or dynamics, in the RAW file for the colors that are weaker due to the imbalance of the source, and decrease the more saturated ones, by purely optical means. If you do this, you should then limit the amount of correction applied through the color temperature parameter when generating the JPEG. But I also believe this is only necessary where the correction would exceed +/- 1500 K. Close to 5500 K there are no major differences in dynamics between the color channels coming straight from the sensor with no filters, and I like the freedom of not having to fiddle with filters on a photography trip.

 http://cdn.cambridgeincolour.com/images/tutorials/wb_bbdiagram.png

And for taking pictures under fluorescent lamps without flash, the problem is more than color temperature (which, by the way, is around 4500 K): you need a filter to correct for the holes in the spectrum that fluorescent lamps have.

Here are a few examples of what I was talking about above. Click on each picture for a better view.

5600 - Mixed light: Sun, and clouds.

Lake Oswego, OR, street intersection. Click to enlarge


6000 - Mixed light: daylight, fluorescent, flash, done by trials.

My place, 2012-10-20 09:26:18, click to enlarge


9900, 7+ magenta, almost night, set by trial (LOL, had to be quick), beyond the camera's capabilities. Everything is too blue.

January 29, 2012 at 9.54PM PST, Cannon Beach, OR. Click to enlarge.
Since current processing software offers focus adjustment as an option (it is more an imitation or simulation of focus adjustment), and true focus adjustment can only be achieved optically, a few words on that too.

Many current cameras won't let you shoot until they have achieved focus, which they detect through various means. But the big problem is the angular motion, the angular speed of the camera's optical axis, at the moment the button clicks. Most cameras have a shutter button built on a two-step switch: one or two slightly dome-shaped flat springs that deform when pushed and then pop, delivering a fast switching action but also a tiny, non-negligible mechanical jolt at the moment they pop. Pressing halfway pops it once, and the camera meters and focuses; pressing fully pops it again and sometimes slightly shakes the camera. That jolt tends to give the camera just enough angular motion to blur the image. Everything is ruined because of that.

My experience tells me that most blurred shots happen because of this. There is no remedy except holding the camera firmly, squeezing rather than pushing the button, and praying not to move. Lately I discovered a new trick: if you skip the first step (the one that triggers focusing and metering) and push the button quickly all the way down and hold it there, most cameras will add a slight delay for focusing and metering, anywhere from a few tenths of a second to a full second, which is enough time for the camera to stabilize mechanically.

That is, short of using a monopod, a tripod, or some other mechanical fixture that holds the camera in place. The smaller and lighter the camera, the more likely you are to get blurred images from the angular motion of the camera while pressing the button.

Tuesday, September 25, 2012

Live Histogram In Optical Viewfinder


The different metering methods in modern photography all have something in common: they measure light in various ways using a sensor array and display the average, a weighted average, or some related indication in the optical viewfinder, as an indicator under a scale at the bottom of the image. So you get an intuitive indication that even tells you which way to correct the exposure, turning a dial or ring one way or the other until you get it right. You can turn the dial until the indicator comes close to center, or let it happen completely automatically, sometimes using exposure compensation to keep the indicator permanently one or a few steps off center.

But then came digital cameras, with megapixel sensors and real-time processors. They can use the main or an auxiliary sensor to measure light far better. Why? Because instead of having, say, 9 sensors measuring light in a conventional array, they have millions of pixels, each one effectively a sensor, and not just broad-spectrum but color-specific.

But how to use this fantastic new possibility? The first thing they thought of is the histogram. I looked up the definition of histogram specifically for this post, and it does not quite match my intuition on the subject.

What the "live histogram" function in my camera does is it displays in real time an animation with number of vertical bars in the left lower corner of the LCD display when in that mode, each bar giving an indication of the number of pixels of a certain intensity (for live histogram, black and white only thus an average of all colors). So actually the software in the camera counts many times a second the different intensity pixels on the sensors and displays them in an animated graphic. I don't know why it is called a histogram. To me is just a graphic with light intensity on the X axis and number of pixels on the Y.

Nobody really needs to know how many pixels of a certain intensity are on the screen. But by looking at the "histogram" one can instantly... well, it depends on one's definition of instant... tell whether pixels of certain intensities are missing, which usually happens at the left or right end of the graph, and how "thick" the "histogram" is, which tells you how much usable information is about to be written to the memory card when you press the button. Missing bars on the right mean I'm underexposing; on the left, that I'm overexposing. If the rightmost bar touches the top of the graph, I have "burnt," saturated, useless white pixels; on the left, burnt black pixels. It's that simple. I say "by one's definition of instant" because it actually takes seconds to look at the histogram on the LCD and interpret it even in the most intuitive way, and while doing that you lose the framing and change the histogram itself. Then you have to go back to the optical viewfinder to reframe, and that is serious time when you are trying to take instant or candid shots.

The histogram is the best, unmatched exposure measuring method before taking the picture, the next best thing to looking at the picture itself afterwards. Even looking at the picture you sometimes cannot tell whether you have burnt pixels, but the histogram will tell you. Another way to visualize burnt pixels is one of the many playback display options on some cameras, where they show up as blinking areas.

If you have a big, fat histogram with no bars touching the ends, you will get the most incredible fine-toned, balanced, high-contrast, high-dynamics, colorful image. And that will not always happen with the exposure indicator in the middle; sometimes the indicator is 3 or 4 steps off center while the histogram shows the best picture.


But then I kept thinking... why a histogram at all? What if you had an indicator like the one in the picture above, driven by the same information used to draw a histogram from the megapixel sensor (instead of from the metering array): count the pixels at each intensity, but instead of displaying the histogram, display an indication of its thickness (the white area of the graph) and let you center it, optimizing the histogram without ever displaying it, while adjusting the exposure the same way as with the current indicators based on the old metering systems? After all, that is all the information you need and use from the histogram. The histogram itself could be kept as an option.
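To sketch what such an indicator could be computed from (off-camera, NumPy and Pillow assumed, with a made-up file name and made-up thresholds), the whole histogram really collapses into a width and a center:

```python
# Boil the histogram down to two numbers, width (dynamics) and center, that
# could drive a simple needle-style indicator instead of a full graph.
import numpy as np
from PIL import Image

img = np.asarray(Image.open("test_shot.jpg").convert("L"))   # luminance only
counts, _ = np.histogram(img, bins=256, range=(0, 256))

used = np.nonzero(counts)[0]
width = used[-1] - used[0] + 1           # how much of the range is covered
center = (used[-1] + used[0]) / 2        # where that coverage sits

print(f"range used: {used[0]}..{used[-1]} (width {width}), center {center:.0f}")
print("suggestion:", "expose +" if center < 112 else "expose -" if center > 144 else "keep")
```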

My Sony DSLR-A300 has two sensors and two mirrors. One is the main sensor behind the lens and mirror, as in any DSLR. The other, smaller one, like the sensor of a compact camera, sits up near the prism, as in the images below; it works when the main mirror is closed and the secondary mirror is tilted, and it feeds the LCD display on the back of the camera body, not the picture itself.

The optical viewfinder is unbeatable for framing. The histogram is unbeatable for exposure. So what if they could be combined?

Today (the date of this post) I have been looking on the web for such a camera. Couldn't find one. I found some with an electronic viewfinder (which might include a histogram), which is better than the LCD on the back because no ambient light falls on it (especially on sunny days) when you seal it with your eye, but it has poor resolution. It is a compromise with poor results.

And then I said to myself... what if the manufacturer added a crude histogram, made of, say, 10 bars, to the optical viewfinder? The problem with building a histogram while you look through the viewfinder is that while your eye sees the image through the optical viewfinder, none of the sensors does.





I can think of two ways this could be solved. One uses the same second tilting mirror: a button (or a periodic, brief action) tilts it temporarily, somewhat like the main mirror flips when you take the shot, redirecting all the light to the live (secondary) sensor for short periods just to build the histogram. The other uses a semi-transparent mirror, a bit like a car's rear-view mirror in night mode, permanently sending a percentage of the light to the live sensor, at the expense of the quality of the optical image in the viewfinder; you don't actually need much of the image on that sensor, since you only want to build a histogram. My favorite is the first, but either would be better than switching between the LCD and the optical viewfinder and back, as I do now, and probably better than an electronic viewfinder (I don't know, I haven't had one in my hands yet). And you are not that interested in the quality of the image when looking at the histogram or the indicator anyway. (Or maybe even an eye-direction sensor that tilts the mirror while you are looking down at the histogram or indicator.)

September 30, 2012 at 4.25pm PDT, Near Mt.Hood, OR