Simulation comments and explanations

Some aspects here may turn into a rather serious journey for a casual DIY hobbyist.. Maybe I'm wrong somewhere.. If you have double-checked something and are certain about it - please, do let me know.

See image_proc_commands.txt for image processing commands and more comments.

UPD, Nov 2016: "Far" below refers to the older setup, when this optic was used as the Far light. Now it is used upside-down as the High-beam light.


Raw vs. Blackbox

The camera software is closed, proprietary, and "enhances" the image in an unknown way. Here's a simulation (Far-wide) built from the *camera* images (2 of them = HDR), left. Below is the same built from the *raw* files - a copy from the simulation page, for comparison.

Clear enough, without comments.

Also note the unfaithful shape of the curve around the brightest area, and the overall shrinking. (The camera also cheats somewhat with resolution; taking that into account would shrink the curves by ~4% more. I ignored that.)

Of course, identical sensor data is used in both cases, and the visualizing script is the same. The only difference is in how the script input images were obtained:

# camera: de-gamma = make input data linear:
OPTS="-type grayscale -background black -rotate -3 -quality 50 \
-gamma 0.45455"
convert $OPTS P1010323.JPG camera_far_d_a.jpg
convert $OPTS P1010327.JPG camera_far_d_c.jpg
# raw (this is a copy from image_proc_commands.txt for your convenience):
OPTS="--temperature=4058 --green=0.851 --exp=0 --out-type=jpeg \
--compression=50 --gamma=1 --grayscale=lightness"
ufraw-batch $OPTS --rotate=-3 --output=far_d_a.jpg P1010323.RW2
ufraw-batch $OPTS --rotate=-3 --output=far_d_c.jpg P1010327.RW2

The de-gamma step for the camera images is not exact - a fixed power (2.2) instead of the detailed gamma curve (with its linear part).. But I think the result would be about as bad anyway.
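
For reference, here is what that difference amounts to, as a small Octave sketch (assuming the camera JPEG uses the standard sRGB curve, which is itself only a guess - the camera's actual encoding is unknown):

% De-gamma of a normalized pixel value c in [0,1]:
% fixed power 2.2 (what "-gamma 0.45455" does) vs. the sRGB curve
% with its linear segment.
c = 0:0.01:1;
lin_pow  = c .^ 2.2;                                % fixed-power approximation
lin_srgb = (c <= 0.04045) .* (c / 12.92) + ...
           (c >  0.04045) .* (((c + 0.055) / 1.055) .^ 2.4);
max(abs(lin_pow - lin_srgb))   % well under 1% of full scale, but relatively
                               % larger in the shadows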

An alternative, photographic point of view. Below are two crops from the same shot: left - from the camera, right - from raw.

How these 2 images were obtained:

ufraw-batch --exp=2 --crop-left=1650 --crop-right=2070 --crop-top=205 \
--crop-bottom=655 --wb=camera --out-type=jpeg --compression=70 \
--output=_tmp_sim-com-raw.jpg P1010274.RW2
# adjust levels in linear scale for clarity: 25%=2EV
convert -crop "420x450+1650+194" -gamma 0.45455 -level 0,25% -gamma 2.2 \
-quality 50 P1010274.JPG _tmp_sim-com-cam.jpg
# thumbs are not visually different from originals
convert -thumbnail 420 _tmp_sim-com-cam.jpg sim-com-cam.jpg
convert -thumbnail 420 _tmp_sim-com-raw.jpg sim-com-raw.jpg

A few observations of what the camera did:

However, for general picture-taking by someone not skilled at photo editing (like me), the camera gives great-looking results - with none of your time spent.

Conclusion: Never ever use camera-software images for measurement-related purposes! :) Even though they look better.


HDR vs. single image

Here are 2 simulations, each using only 1 source jpg image: the first from raw, the second from a camera image. Compare them with the good (HDR) image from the previous section.

The black areas are a tight mesh of isolines around noise (again, identical sensor data were used). The camera software adds a lot more noise. Not surprising: the "enhancements" work by extracting the valuable information and leaving the noise.
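
For illustration only, a toy Octave sketch of why merging 2 exposures suppresses shadow noise. This is not the actual HDR pipeline (see image_proc_commands.txt for that); the file names and the 4x exposure ratio are made up:

% Toy HDR merge of two linearized grayscale shots: a short exposure
% (highlights intact) and a 4x longer one (cleaner shadows).
short = double(imread('shot_short.jpg')) / 255;   % linear data assumed
long4 = double(imread('shot_long.jpg'))  / 255;   % 4x the exposure time
hdr = long4 / 4;                      % bring to the short shot's scale
sat = long4 > 0.98;                   % where the long shot clipped...
hdr(sat) = short(sat);                % ...fall back to the short shot
% shadows now come from the long shot, so their relative noise is lower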


Misc visualization notes

All images are normalized per 1W (electrical consumption) of the LED. This gives a "common denominator" for comparison of different optics.
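
For example (a sketch with made-up numbers, not my actual measurements):

% Normalize an illuminance map per 1W of LED electrical consumption.
U  = 3.0;                 % LED forward voltage, V (example)
I  = 0.5;                 % LED current, A (example)
zM = rand(480, 640);      % placeholder for the map from get_LID.m
zM = zM / (U * I);        % values are now "per 1W electrical"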

Reminder (see formulas at the end of this page):

My isolines are not fixed at particular values. They step down from the maximum value by factors of sqrt(2). For example, the illuminance drops by a factor of 2 at the 2nd isoline and by a factor of 8 at the 6th.
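
In Octave terms, roughly (a sketch, not the exact code from the script):

% Isoline levels: step down from the maximum by factors of sqrt(2).
% zM here is a synthetic stand-in for the map produced by get_LID.m.
[x, y] = meshgrid(linspace(-2, 2, 200));
zM     = exp(-(x.^2 + 2*y.^2));          % fake "beam" just for illustration
levels = max(zM(:)) ./ sqrt(2).^(1:12);  % 2nd level = max/2, 6th = max/8
contour(zM, levels);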

The 3D-view is modelled using the (simplified) Oren-Nayar model for diffuse reflection from matte/rough surfaces (sigma=0.4). The 4 red marking lines running forward are 2m apart. To better judge the perspective, there is a blue bar and a circle - representing the handlebar and the headlight. Also a (very) small part of the front wheel can be seen.
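
For reference, a sketch of the standard simplified ("qualitative") Oren-Nayar term, e.g. as a function file oren_nayar_factor.m; my script's exact parametrization may differ:

% Simplified (qualitative) Oren-Nayar model, sigma = 0.4 as used above.
% Returns the factor multiplying (rho/pi)*E0 in the reflected luminance.
% th_i/th_r: incidence/viewing angles from the surface normal;
% dphi: azimuth difference between the two directions.
function f = oren_nayar_factor(th_i, th_r, dphi, sigma)
  A = 1 - 0.5 * sigma^2 / (sigma^2 + 0.33);
  B = 0.45 * sigma^2 / (sigma^2 + 0.09);
  alpha = max(th_i, th_r);
  beta  = min(th_i, th_r);
  f = cos(th_i) .* (A + B * max(0, cos(dphi)) .* sin(alpha) .* tan(beta));
end
% oren_nayar_factor(0, 0, 0, 0.4) reduces to A, about 0.84
% (a pure Lambertian surface would give 1)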

Different colorbar limits were used for the luminance (cd/m^2, 0.01-3) and illuminance (lux, 0.03-30) images, although the _colormap_ is the same.


Advanced

The below (especially) assumes some basic background. See for example a review (8MB) by Techno Team. In the terms of that review, I was making far-field measurements (so the light source is point-like) of the LID (luminous intensity distribution) using the digital image processing (indirect) method (section 2.1.2.1).

When properly done in a properly equipped environment, this is a fully professional method for measuring the LID of, e.g., car headlights. I'm a poor man.., so I'm doing it quick and dirty, off the ceiling of my room. I ignore the spectral luminous efficiency for photopic vision (V(lambda) in Eq. 1.1), so radiance = luminance = whatever the camera measures.


Camera calibration

Given a ceiling (wall) beamshot, how does one get lux values? Trigonometry aside, consider only the calibration (normalization). The camera measures (= the sensel signal is proportional to) the luminance (cd/m^2) of the surface of a spatially extended (not point-like) light source.

One way is to measure the illuminance from the headlight with a luxmeter at some distance; let it be E lux. Take a photo of your white screen (ceiling) at that distance (same light power), and use the resulting pixel value - let it be V - as the reference. If later a beamshot pixel has the value V/2, you know that at that point the illuminance is E/2 lux. Simple.
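
As a worked example (all numbers are made up for illustration):

% Calibration, method 1 (luxmeter reference).
E   = 120;     % luxmeter reading at the screen, lux
V   = 97;      % pixel value of the white screen in the reference shot
pix = 48.5;    % some beamshot pixel value (linear data, same exposure)
illum = E * pix / V    % = 60 lux at that point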

Another way works if the brightness (luminance) of a computer monitor is known (either from colorimeter measurements, or simply from the manufacturer's specs), say 78 cd/m^2. Take a photo of something white on the monitor; let the pixel value be V. If instead of the monitor screen there were a white diffusing screen, we would need some external light giving an illuminance of E = 78*pi lm/m^2 = lux on that screen to make the sensels read the same value V (assuming Lambertian diffusion and no power losses). So if I adjust the power of the light source until the sensels read the value V, I know that at that point I have an illuminance of E lux. Also simple.
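
In numbers (78 cd/m^2 being the example value from above):

% Calibration, method 2 (known monitor luminance).
L_mon = 78;          % monitor white luminance, cd/m^2
E = pi * L_mon       % equivalent illuminance on an ideal Lambertian white
                     % screen: ~245 lux, corresponding to pixel value V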

I don't have a luxmeter, and experimented a bit in a local store. To make it short, the lux values from the first method were about two times smaller than those from the second. However, since two luxmeters there sometimes gave readings differing by 50%, I decided that I trust my colorimeter calibration more. There's also another (later) confirmation of the colorimeter method's accuracy.

You should edit the get_LID.m script, line 82, with your calibration. In my case the value of V was 97.06 (white on my monitor), so the line looks like: zM=zM*78.58/97.06/rEVs(1);

The first element of rEVs (one of the input parameters of get_LID()) is the exposure value of the (brightest) beamshot in units of the exposure value of this (monitor calibration) reference shot. For example, if you keep all your camera settings (shutter, aperture, sensitivity) for the beamshot the same as for the calibration shot, rEVs(1) should be 1. If you (only) increase the shutter _time_ from 1s (calibration) to 4s (beam), it should be 4. Or, if you increase the shutter _speed_ from 0.5s (calibration) to 0.1s (beam), it should be 1/5.
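
A sketch of computing rEVs(1) from the camera settings; the shutter-time cases match the examples above, while folding aperture and ISO in via the standard photographic relation (t*ISO/N^2) is my assumption about what get_LID() expects if those settings also change:

% rEVs(1): beamshot exposure relative to the (monitor) calibration shot.
t_cal  = 1;   N_cal  = 2.8;  iso_cal  = 100;   % calibration shot (examples)
t_beam = 4;   N_beam = 2.8;  iso_beam = 100;   % beamshot (examples)
rEVs(1) = (t_beam/t_cal) * (iso_beam/iso_cal) * (N_cal/N_beam)^2   % -> 4
% 0.5s -> 0.1s shutter (all else equal) would give 1/5 instead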


Some related formulas


(I'm spelling it out because even the Wikipedia page on the Oren-Nayar model confused light luminance and illuminance for many years.. Wiki-lesson: doubt, and verify.)

For the curious: the Oren-Nayar multiplicative term straight along (at alpha=0), for my setup: