How to /manually/ process digital HDR?

Pskhaat

2005 Expedition Trophy Champion
I don't want to hear about any pre-made software here; I'm curious about the algorithm for combining multiple variously exposed pictures into a single HDR image.

Can anyone speak to the basic processing behind the scenes?

I have been playing around with adding, subtracting, and multiplying each corresponding pixel's values (all color channels, of course), but it does not render as I would expect. E.g., you cannot simply add (or subtract) the RGB values from each picture, as the result is a whitening (it resembles an over-exposed shot).
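The whitening described here falls straight out of the arithmetic: summing exposures pushes values past the clip point, and everything pins at white. A minimal NumPy sketch (the three synthetic one-row "exposures" are invented for illustration, not real image data) contrasts a naive sum with dividing each shot by its exposure time before averaging:

```python
import numpy as np

# Three fake "exposures" of the same scene: the same underlying
# radiance scaled by different exposure times, then clipped to [0, 1]
# the way a sensor clips.
radiance = np.array([0.05, 0.4, 2.5])   # "true" scene brightness
times = [4.0, 1.0, 0.25]                # relative exposure times
shots = [np.clip(radiance * t, 0.0, 1.0) for t in times]

# Naive sum: the bright pixels pin at the clip limit -- the "whitening".
naive = np.clip(sum(shots), 0.0, 1.0)
print(naive)

# Dividing each shot by its exposure time first, then averaging,
# estimates the underlying radiance instead. (Clipped pixels still
# bias this simple mean, which is what per-pixel weighting fixes.)
estimate = np.mean([s / t for s, t in zip(shots, times)], axis=0)
print(estimate)
```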

Thanks!!!
 

adventureduo

Dave Druck [KI6LBB]
Scott, I'm interested in hearing this too. I've been playing around with HDR myself and my results weren't as good as I expected either.
 

Photog

Explorer
Are you trying to do this within layers, in Photoshop, or are you trying to write code in C++, or some other language (you mentioned "no pre-made software")?
 

Pskhaat

2005 Expedition Trophy Champion
Yes, looking to program this myself. I find that to be the best way for me to understand things. Yes, I'm a total dork.
 

neliconcept

Spirit Overland
You can take three different exposures and mask sections off to get the best tonal and color range, but that's about the only manual way I know of to do it.

It takes a bit of work, though. You don't just mask out half a photo; you have to make sure all the shadows from the lighter exposure and all the highlights from the darker exposure make it in.

You've got to brush it in gradually to get the desired effect.
 

Pskhaat

2005 Expedition Trophy Champion
But don't some programs do this automatically? I mean, you don't have to mask off sections in many pre-made programs, right? How do they know when to use pixels from one picture over another, or some combination thereof?
 

Michael Slade

Untitled
But don't some programs do this automatically? I mean, you don't have to mask off sections in many pre-made programs, right? How do they know when to use pixels from one picture over another, or some combination thereof?

That's a really good question, and one reason I have never really been a big fan of HDR that's done automatically. I'm not really a believer in anything fully automatic, as I don't think the computer can read my mind (yet..).

I do a few shots here and there with a quasi-HDR approach, but I do the blending manually via layers in Photoshop. I have one image in my project that is probably 7-8 layers and would be an N-10 if you understand the Zone System methodology.

The problem that I have visually with HDR is that it tries to look *too* perfect, and the images usually end up looking flat and unnatural IMO. There are a few practitioners of HDR that manage to make it look and feel 'right', and that's when I start to really like it. How they do it, and what's the magic formula for making things look 'right'? It's probably different for each photographer.

Programming your own HDR code sounds like an amazing challenge. Let us know how it goes!
 

Lost Canadian

Expedition Leader
The problem that I have visually with HDR is that it tries to look *too* perfect, and the images usually end up looking flat and unnatural IMO.

Not to side track the thread, but that's interesting. I'm also not a fan of the "trying to look natural but not" HDR image. I do however like the HDR shots that push way beyond the boundaries of what is real, with their sublime, painted quality.
Kind of like Chris Alvanas' "edge" portfolio found on his site here.
 

Pskhaat

2005 Expedition Trophy Champion
Photomatix from HDRsoft does this

OK, so any insight into how it does this.

Example: let's assume three pictures, 3-channel RGB, arbitrary depth, at a meager 320x240 pixels. Each pixel in this configuration has a red, green, and blue value, each a fraction between 0.0 and 1.0. An overexposed image will have, let's say, most values at 0.7 and above; an underexposed one at 0.3 and below; and the "normal" exposure at roughly 0.3 to 0.7. As one iterates across the 76,800 pixels (230,400 values), how does a program know whether to use the high value or the low one? If you just add them together you get basically white (1.0, 1.0, 1.0), or pretty close to it. If you assign a high or low threshold value, the overexposed or underexposed image always wins.

My thinking is that there is some quantization level between the distinct color values, and the ones with the highest deviation win? Or is it a matrix (dither-style) calculation comparing adjacent pixels or channels?

I just dunno. I mean, I get the concept of the projected final, but I just don't know how it is implemented even at the most basic level.
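One common answer, sketched under assumptions: no single picture "wins" at a pixel. Every exposure contributes everywhere, weighted by how well exposed that pixel is, so clipped pixels get weight near zero. This is the weighted-average idea behind Debevec and Malik's 1997 radiance-map method, greatly simplified (a real implementation also recovers the camera's nonlinear response curve first); the function names and the triangle weight here are illustrative, not any particular program's actual code:

```python
import numpy as np

def hat_weight(z):
    # Triangle ("hat") weight: trust mid-tones, distrust pixels near
    # the clipping points 0.0 and 1.0, where information is lost.
    return 1.0 - np.abs(2.0 * z - 1.0)

def merge_hdr(images, exposure_times):
    # Per-pixel weighted average of each shot's radiance estimate.
    # images: float arrays in [0, 1], all the same shape;
    # exposure_times: matching relative shutter times.
    num = np.zeros_like(images[0])
    den = np.zeros_like(images[0])
    for img, t in zip(images, exposure_times):
        w = hat_weight(img)
        num += w * (img / t)            # this shot's radiance estimate
        den += w
    return num / np.maximum(den, 1e-6)  # guard against all-zero weights

# Demo with three synthetic exposures of the same scene.
radiance = np.array([0.05, 0.4, 2.5])   # "true" scene brightness
times = [4.0, 1.0, 0.25]
shots = [np.clip(radiance * t, 0.0, 1.0) for t in times]
print(merge_hdr(shots, times))          # recovers ~[0.05, 0.4, 2.5]
```

Note the merged result can exceed 1.0; that's the point of HDR. Squeezing it back into a displayable 0.0-1.0 range is a separate step (tone mapping).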
 

neliconcept

Spirit Overland
Not to side track the thread, but that's interesting. I'm also not a fan of the "trying to look natural but not" HDR image. I do however like the HDR shots that push way beyond the boundaries of what is real, with their sublime, painted quality.
Kind of like Chris Alvanas' "edge" portfolio found on his site here.


Just about everybody gets this wrong:

HDR and tone mapping are not the same thing. It's the tone-mapped images that have the cartoonish effect or a flat lighting effect to them.

A really good HDR will just bring out the colors and a bit more light and shadow, without being overdone. To get this you really need to do it manually.
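The distinction can be made concrete: merging produces scene radiance values that can be far above 1.0, and tone mapping is the second stage that compresses them for display. A minimal sketch of a global Reinhard-style operator (simplified from Reinhard et al. 2002; the `key` parameter name follows that paper, the rest is illustrative):

```python
import numpy as np

def reinhard_tonemap(radiance, key=0.18):
    # Global Reinhard-style operator: squeeze unbounded scene radiance
    # into [0, 1) for display. 'key' controls overall brightness.
    log_avg = np.exp(np.mean(np.log(radiance + 1e-6)))  # log-average luminance
    scaled = key * radiance / log_avg
    return scaled / (1.0 + scaled)   # compresses highlights, preserves order

display = reinhard_tonemap(np.array([0.05, 0.4, 2.5]))
print(display)   # all values now in [0, 1); brighter pixels stay brighter
```

The "cartoonish" look usually comes from more aggressive local tone-mapping operators that compress each neighborhood differently; a global curve like this one is much more restrained.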

Photomatix and other programs do it for you. Just make sure you are not taking a single raw file and pushing the EV +2 or -2; that's not a true HDR.

Set your camera to EV bracketing and you should be fine with 3-5 exposures, sometimes even 2.

Here's an HDR I did that was not overly done, manually too:

n85500505_30622430_778.jpg


Toronto, BTW.
 
