Astrophotography Filters part 2: Narrowband vs. RGB filters and how they work

This is part 2 of a series of blogs about filters and narrowband astrophotography. Bill explains how filters work, why you might use different filters and what a photo taken through a filter looks like.

In the previous blog, I went into a bit of my own history: how from DSLR photography, I gradually moved into broadband (RGB) filter photography, and the steps I made in that journey. I also introduced my three “narrowband” filters, hydrogen, oxygen and sulphur, and talked about the advantages of these strange filters.

In this part, I’ll start by showing you what an image taken through a filter will look like. We call these “component images” because they’ll eventually get combined.

Next, I’ll try to explain how the human eye sees in colour. I’ll show how this enables us to combine the component images into an image we see as colours. I’ll also talk about how we can swap the colours around and trick your brain, and why on earth we’d want to do that.

Finally, I’ll end with a question my wife asked that completely floored me. This was actually the whole point of this blog; I’ve just taken ages getting around to it.

Component images

First, I’ve produced a short guide on broadband (red, green and blue) and narrowband (hydrogen, sulphur and oxygen) filters for our website. This shows what they do and how they work. I’d recommend you have a quick look.

Single component images

To get a single filter image, you simply take photos through the relevant filter. If you use a colour camera, each image will be the same colour as the filter, just like looking through cellophane. Astrophotographers would normally use a monochrome camera to do this, because it gives a higher resolution than a colour camera. (Why is this? Because a colour camera has to dedicate each pixel to a single colour: some to red, some to green and some to blue. A monochrome camera doesn’t have to do this, so every pixel records the image through whichever filter you’ve put in front of it. Therefore, each component from a colour camera will only have a fraction of the resolution you’d get from a monochrome camera with the same sensor.)
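If you’re comfortable with a little code, here’s a minimal sketch in Python (using numpy) of that resolution trade-off. It assumes the common RGGB Bayer layout, where each block of four pixels carries one red, two green and one blue filter; the sensor size and raw data are invented purely for the example.

import numpy as np

height, width = 1000, 1500                               # hypothetical sensor
raw = np.random.randint(0, 4096, size=(height, width))  # fake 12-bit readout

# Monochrome camera: every pixel records the image through whatever filter
# is in front of the telescope, so a component image keeps the full
# 1000 x 1500 resolution.
mono_component = raw

# Colour camera with an RGGB Bayer mosaic: only one pixel in each 2x2 block
# is red, so the red component comes from a quarter of the pixels (500 x 750).
red_component = raw[0::2, 0::2]
green_component = (raw[0::2, 1::2] + raw[1::2, 0::2]) / 2  # two green pixels per block
blue_component = raw[1::2, 1::2]

print(mono_component.shape, red_component.shape)  # (1000, 1500) vs (500, 750)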

When you look at component photos taken using a monochrome camera, they tend to look like a pretty plain black and white picture. The one below is NGC 6188, which is also the nebula with the most awesome name of all – the Fighting Dragons of Ara. I used a hydrogen alpha filter. To be honest, this was only a test image. Three 20 minute exposures make up the stack, and I took them through a tree!

NGC 6188 - the Fighting Dragons of Ara. Photo taken in hydrogen alpha light.

In the previous part, I referred to “false colour” images. These are different to true colour images (duh!). With true colour, the colours in the image are as close as possible to what you would see with your eye. In false colour images, the colours are mixed up in any one of a number of ways. The image taken through the red filter might eventually come out as green or blue. False colour images can look terribly weird, especially if we’re using narrowband filters instead of coloured ones.

Components coming together

If you’re trying to produce an image with colours, either true or false, you’re going to need to take more than one image (usually three) through different filters and combine them afterwards. While images taken through different filters might look kinda similar, each filter produces subtly different results. Each filter highlights the parts of the photo that match its favourite colours, and at the same time downplays the others. Filters aren’t really fair, are they?

To show you, here is a photo of me in the office. I’ve separated the colour photo on the left into red, green and blue components. This is as though I’ve taken the shot through red, green and blue filters. As you can see, they’re the same photo, but look slightly different. The most striking example is the Celestron logo and the mounting plate on the telescope. They’re orange, which means they show up brightly through the red filter. However, the blue filter discriminates heavily against orange, and so the logo appears quite dark through the blue filter.

Colour photo on the left, with red, green and blue components to the right.
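If you want to reproduce this kind of channel split yourself, it only takes a few lines of Python with the Pillow library. This is just a sketch; "office.jpg" is a placeholder for whatever colour photo you feel like pulling apart.

from PIL import Image

photo = Image.open("office.jpg").convert("RGB")

# split() hands back three greyscale images, one per channel. These are the
# component images, and each one looks like a plain black and white picture.
red, green, blue = photo.split()

red.save("office_red.png")
green.save("office_green.png")
blue.save("office_blue.png")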

So, we have a series of photos (normally three) we’ve taken through our filters. We will recombine these images to get a single one that appears as colour to our eyes.

Next, I’ll describe how we put these images together and show how our eye sees the result as a full colour picture. I’ll also describe how we can use this to highlight different aspects of the picture. This is what we call a false colour image.

How a human eye works

Our eyes are quite similar to cameras. The retina is covered with different types of light-sensitive cells, and they act like the pixels in a sensor.

Rod cells detect light, but can’t tell what its wavelength is. Incidentally, that’s the reason why you’re totally colour-blind in very dim light. Try it next time you’re walking down a dark street with no lights. See if you can tell what colour that parked car is.

Cone cells aren’t as sensitive as rod cells. However, their trick is that they come in three different types, which respond mainly to red, green or blue light respectively. Yellow light mostly triggers the green cones, but it also gives the red cones a bit of a nudge. Blue cones aren’t fussed by yellow light. The impulses from all these cones are sent to the brain.

When your brain gets the news that there’s something in front of you that the green cones can see, the red cones can see dimly, but the blue cones can’t see at all, it says “ahh – yellow”. What your brain is doing is receiving the three channels and reassembling them into a colour image. Brains are pretty awesome.

How we can use this to combine component images

Look at this macro photograph of my television screen (the things I do for a blog post…). Like your retina, it’s made up of pixels, so it’s built in a similar way, just working in reverse: it produces light rather than detecting it. Some pixels produce green light, some produce red and some produce blue.

The picture on the left shows a little bit of the screen that was displaying a grey area, a white band and then a yellow area. To prove it, I blurred the same photo and got what you see on the right-hand side of the page.

The grey area has all the pixels turned on, but not brightly. The white area is the same with the brightness turned to the max. But remember the example I mentioned before? The yellow area is made up of green pixels and dimmer red pixels, but there are no blue pixels visible.

Macro photo of a TV - sharp on left, blurred on right

So pictures on the screen have to be split into the red, green and blue channels. The television displays just the red component image on the red pixels, and the blue and green component images on the blue and green pixels.

It’s then sent to our eyes simply by our looking at the television.

True colour

When the red pixels on your computer screen display the red component image, the red sensors in your retinas pick this up. The brain recombines them as the red parts of the colour image. (It’s the same with green and blue, of course.)

We call this “true colour”, or “RGB”, because the red component image gets transmitted as red, the blue component image gets transmitted as blue and the green component image gets transmitted as green. Each component comes through as its corresponding display pixel colour, and our brains recombine a nice colour image.
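In code terms, a true colour merge is just the reverse of the split from earlier. Here’s a minimal sketch with Pillow, reusing the hypothetical component files saved in the previous snippet.

from PIL import Image

red = Image.open("office_red.png").convert("L")
green = Image.open("office_green.png").convert("L")
blue = Image.open("office_blue.png").convert("L")

# Merge in (R, G, B) order: the red component drives the red pixels, the green
# component drives the green pixels and the blue component drives the blue pixels.
true_colour = Image.merge("RGB", (red, green, blue))
true_colour.save("office_true_colour.png")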

But it’s not necessarily so. What happens if I take the red component and display it on the green pixels of the computer screen, put the green component onto the blue pixels and put the blue component onto the red pixels?

This is exactly what I’ve done here. The photo on the left displays the red information on the red pixels on your screen. Similarly, it displays the green and blue channels on the green and blue pixels.

But I’ve mixed up the image on the right. The green pixels on your computer screen are displaying the red information. Likewise, the green information is now on the blue pixels and the blue is “mapped” to the red. I’ve tricked your brain into seeing the wrong colours.

True colour image on left. False colour image on right.

So the reddish sandy ground appears green (and the makers of my nice red ASI cameras aren’t happy…). Likewise, the green box I use for my laptop has gone blue, and the sky looks a sort of sunset pinky colour. Trippy!
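If you want to try the same trick, the swap is a one line change to the merge above. The mapping follows the description: red information onto the green pixels, green onto the blue pixels and blue onto the red pixels.

from PIL import Image

photo = Image.open("office.jpg").convert("RGB")   # placeholder filename for whichever photo you fancy
red, green, blue = photo.split()

# Image.merge expects its components in (R, G, B) order, so putting the blue
# component first sends the blue information to the red pixels, the red
# component to the green pixels and the green component to the blue pixels.
false_colour = Image.merge("RGB", (blue, red, green))
false_colour.save("false_colour.png")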

We call a picture that maps colours from their originals to other colours a “false colour” image.

False colour

Up to here, we’ve mostly talked about broadband (red, green and blue) components. But what about those narrowband filters we talked about earlier?

Well, we combine narrowband filters in exactly the same way. We display one component image on the computer screen’s red pixels, and the others on green and blue.

But, I hear you ask, which component goes to which pixel on the computer display? (I know you’re asking this, because you’re intelligent.) The short answer is that it doesn’t matter.

Sometimes astrophotographers like to transmit the sulphur on the red pixels, hydrogen alpha on the green pixels and oxygen on the blue pixels. This is known as “SHO”, or the Hubble palette. Another popular palette is “HOS”, where hydrogen goes to red, oxygen goes to green and sulphur goes to blue. The photos look different and sometimes one looks more aesthetically pleasing than another.
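In code, a narrowband palette is exactly the same merge, just with stacked hydrogen alpha, oxygen and sulphur frames as the inputs. Here’s a rough sketch with Pillow; the filenames are made up, and the three stacks are assumed to be aligned and the same size.

from PIL import Image

h_alpha = Image.open("ha_stack.png").convert("L")
oxygen = Image.open("oiii_stack.png").convert("L")
sulphur = Image.open("sii_stack.png").convert("L")

# SHO (the Hubble palette): sulphur -> red pixels, hydrogen -> green, oxygen -> blue.
sho = Image.merge("RGB", (sulphur, h_alpha, oxygen))
sho.save("nebula_sho.png")

# HOS: hydrogen -> red pixels, oxygen -> green, sulphur -> blue.
hos = Image.merge("RGB", (h_alpha, oxygen, sulphur))
hos.save("nebula_hos.png")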

For scientific reasons, false colour narrowband can be very useful. Last time, you’ll remember, I noted that narrowband filters highlight different elements. Recombining them using different channels can really make the element you’re looking for stand out.

An accomplished astrophotographer friend, Andy Campbell, has kindly provided me with a pair of photographs of the Great Nebula in Orion. This is essentially the same photo with the channels being transmitted to different receptors in your eyes.

Orion Nebula complex in narrowband - copyright Andys_Astropix 2016

Now, everyone’s eyes are different, but I think that the hydrogen ripples in the dark band between the Great Nebula (actually, M43) and the Running Man Nebula seem more pronounced in the “red” version than the “blue” version.

I was going to make an Andy Warhol – Marilyn Monroe reference here, but I probably shouldn’t.

What does that tree look like in narrowband?

After confusing my wife with all this, she asked me the obvious question: so what does that tree look like in narrowband?

I had utterly no idea. She’d floored me.

I hunted all over the place, through Google, and I even asked on some astrophotography forums. As far as I could tell, nobody had ever uploaded a narrowband photo of anything other than skies.

So it was up to me.

More in the next blog.

Bill is Optics Central’s expert on astrophotography, telescopes and bird watching. You’ll find him in the Mitcham store on Fridays and Saturdays. Come in for advice on how to get the best out of your current telescope, what your next telescope should be, how to take photos of the sky, or even how to see some rare birds.
