
NOISE ESTIMATION AND REDUCTION WITH EMVA1288

1 Chih-Jou Yang (楊至柔), 2 Chiou-Shann Fuh (傅楸善)

1 Graduate Institute of Networking and Multimedia,

National Taiwan University, Taipei, Taiwan, 2 Department of Computer Science and Information Engineering,

National Taiwan University, Taipei, Taiwan

E-mail: [email protected]; [email protected];

ABSTRACT

This thesis proposes a method of image quality enhancement that uses the European Machine Vision Association standard EMVA1288 as the evaluation method. We first analyze the EMVA1288 parameters that influence image quality and how the measurement environment affects them, and then analyze how these parameters can be improved. Finally, we use real images to verify whether all the parameters match the model assumptions.

Keywords: noise reduction, noise level estimation,

EMVA1288

1. INTRODUCTION

1.1. Overview

People use digital cameras every day, and the camera module has become an important component of daily life. In industry, Automated Optical Inspection (AOI) plays an important role in industrial automation: it uses machine vision to enforce product quality standards, improving on manual inspection in criteria such as error rate and judgement speed to achieve higher reliability and productivity.

Sensor defect characterization is an important step when evaluating sensor quality, yet each sensor manufacturer publishes its own datasheet format, and these datasheets are mostly incomparable [4]. EMVA1288 is a standard created for exactly this scenario; we use it as our evaluation standard and to test our noise reduction performance.

With sensor defect characterization we can estimate whether a defect is reducible, and weigh the pros and cons of each noise reduction method. Many noise reduction algorithms are available; we discuss several of them and their feasibility for reducing sensor defect noise.

1.2. Sensor Defect Estimation

When sensor manufacturers produce their products, each defines its own datasheet format. The datasheet may not provide enough information about the sensor, or may not be comparable with other datasheets, which is a problem for anyone who wants to compare camera sensors and estimate the overall system performance of an imaging system.

With a standard datasheet format, a camera manufacturer can compare sensors directly from the datasheets and select a sensor by the key factors it needs, which is far more convenient than buying many sensors and testing them extensively.

Sensor defect estimation must be done without a lens, because once a lens is attached the system has to be recalibrated: the sensor defect characteristics are masked, depend on the alignment accuracy between sensor and lens, or are dominated by lens defects rather than the sensor itself.

In the estimation process we illuminate the bare sensor with a calibrated light source, capture test images directly, and analyze them to obtain the sensor characteristics. The sensor defect characterization process is as follows: we take photographs at different exposure values and check whether the digital values the sensor reports match the expected output for each exposure value.

1.3. The EMVA1288 Standard

EMVA stands for the European Machine Vision Association. EMVA1288 is a standard developed by the EMVA that defines methods for measuring, characterizing, and reporting the quality of image sensors and cameras, together with a series of guidelines. The standard is aimed at industrial cameras, where sensor accuracy is the key to the final Automatic Optical Inspection (AOI) quality. The standard is free to use and free to download, but users must register with the EMVA to obtain the right to use the "EMVA1288 compliant" logo on their publications or products.

EMVA1288 covers all sensors and cameras with a linear response. The philosophy behind the standard is to find a suitable mathematical model for each element of the sensor and to build a standard testing process that retrieves the value of each parameter in the model. Many parameters are characterized, including linearity, sensitivity, noise, dark current, sensor array nonuniformities, and defect pixels. EMVA1288 also includes an overview of the required tests for all parameters, the required setup, and the report format, so that sensors can be compared.

2. RELATED WORKS

2.1. EMVA1288 Parameters

EMVA1288 models the process of taking a photograph, from the number of input photons to the final digital value, with a physical model and a mathematical model.

Figure 1 (a) Physical model of the camera and (b) mathematical model of a single pixel [4].

1) Quantum Efficiency (η)

2) Overall System Gain (K)

3) Temporal Dark Noise (σd)

4) Signal-to-Noise Ratio (SNR)

5) Saturation Capacity

6) Absolute Sensitivity Threshold

7) Dynamic Range

8) Spatial Nonuniformities

The detailed explanation of each parameter is as

follows.

Quantum Efficiency (η)

The basic equation of quantum efficiency is

\eta(\lambda) = \frac{\mu_e}{\mu_p}    (1)

which describes the ability of the sensor to convert photons into electrons: the mean number of accumulated electrons (μ_e) divided by the mean number of photons (μ_p) hitting each pixel, where λ is the wavelength of the light.

The mean number of photons that hit a pixel of area A is

\mu_p = \frac{A E t_{\mathrm{exp}}}{h\nu} = \frac{A E t_{\mathrm{exp}}}{hc/\lambda}    (2)

where E is the irradiance of the calibrated light source, t_exp is the exposure time, c is the speed of light, and h is the Planck constant.
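As a quick numerical illustration of Eq. (2), the sketch below evaluates the mean photon count per pixel with NumPy; the pixel area, irradiance, exposure time, and wavelength are example values chosen for the sketch, not measurements from this work.

```python
import numpy as np

H = 6.62607015e-34   # Planck constant [J*s]
C = 2.99792458e8     # speed of light [m/s]

def mean_photons(pixel_area_m2, irradiance_w_m2, t_exp_s, wavelength_m):
    """Mean number of photons hitting one pixel, Eq. (2):
    mu_p = A * E * t_exp / (h * c / lambda)."""
    photon_energy = H * C / wavelength_m      # energy of a single photon [J]
    return pixel_area_m2 * irradiance_w_m2 * t_exp_s / photon_energy

# Example: 3.45 um square pixel, 0.1 W/m^2, 1 ms exposure, 530 nm light.
print(mean_photons((3.45e-6) ** 2, 0.1, 1e-3, 530e-9))
```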

Overall System Gain (𝐾)

The charge collected by the sensor is amplified by an overall system gain K and then converted to the final digital value y by an ADC (analog-to-digital converter). The gain relation is

\mu_y = K(\mu_e + \mu_d) \quad \text{or} \quad \mu_y = \mu_{y.\mathrm{dark}} + K\mu_e    (3)

Combining this with Eqs. (1) and (2) gives

\mu_y = \mu_{y.\mathrm{dark}} + K\eta\,\frac{\lambda A}{hc}\,E t_{\mathrm{exp}}    (4)

By measuring the mean gray value against the mean number of photons incident on the pixel, we obtain the product Kη. Once the overall system gain K is determined, η can be estimated.

Figure 2 Example of measurement of Kη.

Shot noise

According to the laws of quantum physics and the particle nature of light, the number of photons detected by the sensor fluctuates statistically:

\sigma_p^2 = \mu_p    (5)

Shot noise follows a Poisson distribution, so the variance of the number of received photons equals its mean.

Thermal Noise (Temporal Dark Noise)

Thermal noise is the electronic noise generated whenever the temperature is above absolute zero, regardless of any applied voltage. The random thermal motion of electrons causes an independent, normally distributed noise. All temperature-related noise can be described as a signal-independent noise with variance σ_d².


Quantization Noise

After the amplifier circuit, the analog signal is converted to a digital value, the final digital number stored in the image. The quantization process rounds every value to an integer, which introduces quantization noise.

Noise Model

Since the signal model is linear, the variance of the final digital value y is the sum of all noise sources in the sensor:

\sigma_y^2 = K^2(\sigma_d^2 + \sigma_e^2) + \sigma_q^2    (6)

Combining this with Eqs. (5) and (3) gives

\sigma_y^2 = K^2\sigma_d^2 + \sigma_q^2 + K(\mu_y - \mu_{y.\mathrm{dark}})    (7)

This equation is central to the characterization of the

sensor.

By measuring the variance of the gray value against the mean gray value, we can obtain the overall system gain K as the slope of this relation.

Figure 3 Example of measurement of K.
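As a sketch of how K can be extracted from such a measurement: assuming we have pairs of flat-field frames and pairs of dark frames at each illumination level (the dataset described in Sec. 5.1 provides two images per exposure), the temporal variance can be estimated from the frame differences and K taken as the slope of variance versus mean, per Eq. (7). The function below is an illustrative implementation, not the official EMVA1288 evaluation code.

```python
import numpy as np

def photon_transfer_gain(bright_pairs, dark_pairs):
    """Estimate the overall system gain K from the photon transfer curve.

    bright_pairs / dark_pairs: lists of (img_a, img_b) float arrays captured
    at identical settings.  The temporal variance is taken from the frame
    difference (which cancels spatial nonuniformity), and K is the slope of
    (sigma_y^2 - sigma_dark^2) versus (mu_y - mu_dark), cf. Eq. (7).
    """
    means, variances = [], []
    for (a, b), (da, db) in zip(bright_pairs, dark_pairs):
        mu_y = 0.5 * (a.mean() + b.mean())
        mu_dark = 0.5 * (da.mean() + db.mean())
        var_y = 0.5 * np.var(a - b)      # temporal noise from the frame difference
        var_dark = 0.5 * np.var(da - db)
        means.append(mu_y - mu_dark)
        variances.append(var_y - var_dark)
    K, _ = np.polyfit(means, variances, 1)   # slope of variance vs. mean
    return K
```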

Signal-to-Noise Ratio (SNR)

The quality of the signal is expressed by the signal-to-noise ratio (SNR), which is defined as

\mathrm{SNR} = \frac{\mu_y - \mu_{y.\mathrm{dark}}}{\sigma_y}    (8)

Using Eqs. (3) and (7), the SNR can then be written as

\mathrm{SNR}(\mu_p) = \frac{\eta\mu_p}{\sqrt{\sigma_d^2 + \sigma_q^2/K^2 + \eta\mu_p}}    (9)

Considering the two limiting cases, the high-photon range with \eta\mu_p \gg \sigma_d^2 + \sigma_q^2/K^2 and the low-photon range with \eta\mu_p \ll \sigma_d^2 + \sigma_q^2/K^2, we obtain

\mathrm{SNR}(\mu_p) \approx \begin{cases} \sqrt{\eta\mu_p}, & \eta\mu_p \gg \sigma_d^2 + \sigma_q^2/K^2 \\ \eta\mu_p \big/ \sqrt{\sigma_d^2 + \sigma_q^2/K^2}, & \eta\mu_p \ll \sigma_d^2 + \sigma_q^2/K^2 \end{cases}    (10)
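Eq. (9) and the asymptotes of Eq. (10) can be evaluated directly; the sketch below uses placeholder sensor parameters (not values measured in this work) just to show the computation.

```python
import numpy as np

def snr(mu_p, eta, K, sigma_d, sigma_q=np.sqrt(1.0 / 12.0)):
    """Signal-to-noise ratio as a function of the mean photon count, Eq. (9)."""
    noise_floor = sigma_d ** 2 + (sigma_q / K) ** 2
    return eta * mu_p / np.sqrt(noise_floor + eta * mu_p)

eta, K, sigma_d = 0.6, 0.3, 5.0            # illustrative parameters only
mu_p = np.logspace(0, 6, 7)
print(snr(mu_p, eta, K, sigma_d))          # full model, Eq. (9)
print(np.sqrt(eta * mu_p))                 # high-photon asymptote, Eq. (10)
```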

Saturation and Absolute Sensitivity Threshold

For a k-bit digital camera the digital value can theoretically range from 0 to 2^k − 1. In practice, however, not the whole range is meaningful; the useful range is limited by saturation and by the absolute sensitivity threshold.

Below the saturation point, the variance of the digital value grows as the mean digital value increases. Beyond the saturation point the variance gradually decreases again as the mean goes up, because the digital range can no longer hold the full spread of values: when we read 2^k − 1 we cannot tell whether it is a normal value or an overflowed one. The saturation point can therefore be located easily on the photon transfer curve.

The absolute sensitivity threshold is the minimum irradiation at which the signal is still meaningful; it is most commonly defined as the point where the signal-to-noise ratio equals 1.

Inverting Eq. (9), we can find the μ_p that yields a given SNR value:

\mu_p(\mathrm{SNR}) = \frac{\mathrm{SNR}^2}{2\eta}\left(1 + \sqrt{1 + \frac{4(\sigma_d^2 + \sigma_q^2/K^2)}{\mathrm{SNR}^2}}\right)    (11)

Setting SNR = 1 gives

\mu_p(\mathrm{SNR}=1) = \mu_{p.\mathrm{min}} \approx \frac{1}{\eta}\left(\sqrt{\sigma_d^2 + \sigma_q^2/K^2} + \frac{1}{2}\right) = \frac{1}{\eta}\left(\frac{\sigma_{y.\mathrm{dark}}}{K} + \frac{1}{2}\right)    (12)
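Eq. (12) is straightforward to compute once the dark noise and system gain are known; a minimal sketch with example parameter values (assumptions, not measurements from this work):

```python
def absolute_sensitivity_threshold(eta, K, sigma_y_dark):
    """Minimum detectable mean photon count mu_p.min (SNR = 1), Eq. (12)."""
    return (sigma_y_dark / K + 0.5) / eta

# Example: eta = 0.6, K = 0.3 DN/e-, dark noise 2 DN (illustrative values).
print(absolute_sensitivity_threshold(eta=0.6, K=0.3, sigma_y_dark=2.0))
```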

The ratio of signal saturation to absolute sensitivity

threshold is defined as the Dynamic Range (DR).

Dark Current

The main component of the dark signal is thermally induced electrons, which grow as the exposure time increases:

\mu_d = \mu_{d.0} + \mu_{\mathrm{therm}} = \mu_{d.0} + \mu_I t_{\mathrm{exp}}    (13)

The quantity μ_I is called the dark current; it is the rate at which the dark signal increases with exposure time.
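Following Eq. (13), the dark current can be estimated as the slope of the mean dark signal over exposure time; a minimal sketch, assuming a series of dark frames and their exposure times (as in the dark image set of Sec. 5.1):

```python
import numpy as np

def dark_current(dark_frames, exposure_times_s):
    """Estimate the dark current mu_I and the offset mu_d.0 from Eq. (13)
    by a linear fit of the mean dark signal against exposure time."""
    mean_dark = [np.mean(frame) for frame in dark_frames]
    mu_I, mu_d0 = np.polyfit(exposure_times_s, mean_dark, 1)
    return mu_I, mu_d0   # dark current (dark signal per second) and dark offset
```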

Spatial Nonuniformity

The parameters of individual pixels are not identical: some pixels may be brighter or darker than other pixels of the same sensor, which is called spatial nonuniformity. The two basic types of nonuniformity are Photo Response Nonuniformity (PRNU) and Dark Signal Nonuniformity (DSNU), i.e. the nonuniformity with and without illumination.

3. NOISE REDUCTION METHOD

3.1. Noise Reduction Basic

Noise reduction on an image aims to enhance image quality, where quality can be regarded as the amount of information contained in the image. In machine vision, noise reduction is needed as a preprocessing step before any other algorithm; otherwise the machine vision algorithm may be influenced by the noise.

There is a tradeoff between reducing noise and preserving the detailed information in the image. Because the noise is nearly white and uncorrelated between pixels, it is impossible to separate noise from signal perfectly, and noise reduction behaves much like smoothing the image. The basic idea behind noise reduction is to recover the true value of a pixel from the pixel itself, from its surrounding pixels, or from global image features.

The main concern of noise reduction is that it may treat detailed, high-frequency parts of the image as noise and remove this important information. If the information gained by noise reduction is less than the information lost, the result is over-smoothed and the noise reduction method is a bad one.

Noise reduction methods may operate in the spatial domain or the frequency domain, locally or globally, pixel-wise or block-based, linearly or non-linearly, among other categorizations. Each has its advantages and disadvantages in different scenarios.

Edges are a basic and important feature in machine vision; many algorithms rely on edges as their key feature, for example for object boundaries and foreground-background separation. Edge-preserving filters focus on removing noise while reducing edge blurring artifacts such as halos. Examples of edge-preserving filters include the median filter, the bilateral filter, non-local means, and total variation denoising. We introduce some of these filters below.

3.2. Median Filter

The median filter reduces noise while preserving edges. Its basic idea is to use the median to remove extreme pixel values in a neighborhood. To apply a median filter, we first select a window size (typically odd) and use symmetric padding at the border; then, for each pixel, we take the surrounding window, sort the values in the window, and replace the original value with their median.

Figure 4 Example of median filter.

The median filter is a simple non-linear filter that is especially effective against impulsive noise such as salt-and-pepper noise, because in mean-based methods such as the mean filter or a Gaussian weighted average, a single defective pixel acts as an outlier and can easily distort the result at the averaging step.
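A minimal median filter sketch using SciPy, where mode="reflect" corresponds to symmetric padding at the border; the 3-by-3 window matches the setting used later in Sec. 4.2.1.

```python
import numpy as np
from scipy.ndimage import median_filter

def median_denoise(image, window=3):
    """Median filter with symmetric border padding."""
    return median_filter(image.astype(np.float64), size=window, mode="reflect")
```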

3.3. Gaussian Bilateral Filter

Bilateral filtering [3] is also a spatial-domain edge-preserving method; it combines values with a weighted average rather than using only the pixel intensity, combining spatial similarity with computed radiometric differences. Because the weights depend on more than the pixel intensity alone, bilateral filtering is a non-linear filter. The bilateral filter computes

h(x) = k^{-1}(x) \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} f(\xi)\, c(\xi, x)\, s(f(\xi), f(x))\, d\xi

where h is the output value; k(x) is the normalization factor; f is the input image; x is the neighborhood center; ξ is any nearby point considered for x; c(ξ, x) is the geometric closeness between ξ and x; and s(f(ξ), f(x)) is the photometric similarity between ξ and x. Here f(ξ) and f(x) are used instead of ξ and x because the photometric similarity operates on the range of the image function f. Both c and s take a Gaussian form of a distance:

c(\xi, x) = e^{-\frac{1}{2}\left(\frac{d(\xi, x)}{\sigma_g}\right)^2} \quad\text{and}\quad s(f(\xi), f(x)) = e^{-\frac{1}{2}\left(\frac{\delta(f(\xi), f(x))}{\sigma_\delta}\right)^2}

where d is simply the geometric distance and δ is the intensity or color distance: in a grayscale image δ is simply the intensity difference, while in a color image it can be defined as a distance in another color space.
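A compact, unoptimized sketch of the bilateral filter for a grayscale image in [0, 1], directly following the Gaussian closeness c and similarity s defined above; the default radius and sigma values mirror the settings listed in Sec. 4.2.1 and are otherwise free parameters.

```python
import numpy as np

def bilateral_filter(image, sigma_g=3.0, sigma_delta=0.1, radius=2):
    """Brute-force bilateral filter (5-by-5 window for radius=2)."""
    img = image.astype(np.float64)
    pad = np.pad(img, radius, mode="symmetric")
    # Geometric closeness c: Gaussian of the spatial distance.
    yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-0.5 * (yy ** 2 + xx ** 2) / sigma_g ** 2)

    out = np.zeros_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # Photometric similarity s: Gaussian of the intensity difference.
            intensity = np.exp(-0.5 * ((patch - img[i, j]) / sigma_delta) ** 2)
            weights = spatial * intensity
            out[i, j] = np.sum(weights * patch) / np.sum(weights)
    return out
```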

3.4. Non-local Means

The non-local means algorithm [1] is also a spatial-domain edge-preserving filter. It extends local means, which considers only a surrounding box of pixels: non-local means takes every pixel of the image into consideration when computing a single output pixel. When averaging, it weights each pixel by how similar that pixel is to the target pixel. This similarity-based average makes it better than the local mean method.


Figure 5 Scheme of NL-means strategy.

The basic function of NL-means is

NL[v](i) = \sum_{j \in I} w(i, j)\, v(j)

where v = \{v(i) \mid i \in I\} is the noisy input image and w(i, j) is the weighting function measuring similarity. The weights are usually computed from the neighborhoods around i and j with a Gaussian-weighted distance:

w(i, j) = \frac{1}{Z(i)}\, e^{-\frac{\| v(\mathcal{N}_i) - v(\mathcal{N}_j) \|_{2,a}^2}{h^2}}

where a > 0 is the standard deviation of the Gaussian kernel used in the weighted norm, \mathcal{N}_i and \mathcal{N}_j are the square neighborhoods around i and j, Z(i) is the normalizing term, and h is a filtering parameter that controls the decay of the weights.

Consider the example in Figure 5. The pixel values of p, q1, q2, and q3 are similar, but when the surrounding patches are compared, the neighborhoods of p and q3 differ considerably, so w(p, q3) will be small.
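A straightforward (and slow) NL-means sketch restricted to a search window, following the weight definition above; it uses a plain mean-squared patch difference instead of the Gaussian-weighted norm of [1], and the filtering parameter h is an assumption. The search and patch radii mirror the settings listed later in Sec. 4.2.1. In practice a library implementation such as skimage.restoration.denoise_nl_means can be used instead.

```python
import numpy as np

def nl_means(image, search_radius=5, patch_radius=2, h=10.0):
    """Basic NL-means for a grayscale image.

    Each pixel i is replaced by a weighted average of the pixels j in its
    search window, with w(i, j) = exp(-||patch_i - patch_j||^2 / h^2) / Z(i).
    """
    img = image.astype(np.float64)
    pad = np.pad(img, patch_radius, mode="symmetric")
    rows, cols = img.shape
    out = np.zeros_like(img)

    def patch(r, c):
        return pad[r:r + 2 * patch_radius + 1, c:c + 2 * patch_radius + 1]

    for i in range(rows):
        for j in range(cols):
            p_ref = patch(i, j)
            weight_sum, value = 0.0, 0.0
            for di in range(-search_radius, search_radius + 1):
                for dj in range(-search_radius, search_radius + 1):
                    r, c = i + di, j + dj
                    if 0 <= r < rows and 0 <= c < cols:
                        d2 = np.mean((p_ref - patch(r, c)) ** 2)
                        w = np.exp(-d2 / h ** 2)
                        weight_sum += w
                        value += w * img[r, c]
            out[i, j] = value / weight_sum
    return out
```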

4. METHODOLOGY

4.1. Noise Reduction Limitation

Noise reduction has a hard limitation. Most noise reduction algorithms essentially decide whether a pixel value is noise or photograph detail. Since signal and noise cannot be distinguished perfectly, reducing noise without damaging information is impossible. When the noise is correlated between pixels, that correlation can be exploited in the noise reduction process; for pixelwise uncorrelated, independent noise there is no such advantage to exploit.

4.2. Our Proposed Method

A noise reduction algorithm for real images is needed when the EMVA1288 test is used. We illuminate the sensors with uniform light, so the test images are expected to be uniform and the variance within an image essentially represents the noise. The noise modeled in EMVA1288 has three types: shot noise with a Poisson distribution, dark noise with a Gaussian distribution, and quantization noise with a uniform distribution.

According to the PointGrey sensor reviews [5, 6], we can estimate the approximate magnitude of each noise source. The digital dark noise σ_y.dark in digital numbers (DN) can be computed as the specified dark noise in e⁻ multiplied by the system gain K (DN/e⁻). Over 54 monochrome sensors the dark noise ranges from 0.38 to 7.96 DN with a mean of 1.98 DN, and over 60 color sensors it ranges from 0.33 to 16.76 DN with a mean of 1.95 DN.

Figure 6 Distribution of dark noise. (a) Monochrome

sensor. (b) Color sensor.

Figure 7 Distribution of maximum shot noise. (a)

Monochrome camera. (b) Color camera.

The noise reduction process should be bounded by a threshold that is a function of the shot noise and the dark noise, since we do not want to change the original image too much. When real color images are used in the test, the color channels should be converted from RGB to YCbCr, because this separates color from intensity and is more suitable for the noise reduction algorithm.

4.2.1. Blending

We blend the results from each noise reduction method. The parameters of each method are:

Median filter: 3-by-3 pixel window, symmetric padding at the border.

Bilateral filter: 5-by-5 Gaussian window, spatial-domain standard deviation σ_g = 3, intensity-domain standard deviation σ_δ = 0.1.


Non-local mean filter: search window radius 5 pixels, similarity window radius 2 pixels, Gaussian filter standard deviation 1, intensity similarity Gaussian distance standard deviation 10.

The blending process is as follows (a code sketch is given below):

• Original: the original (noisy) image.
• MEDresult: the median filter result image.
• BLresult: the bilateral filter result image.
• NLresult: the non-local mean filter result image.
• d_MED = MEDresult − Original
• d_BL = BLresult − Original
• d_NL = NLresult − Original
• d_blending = a·d_MED + b·d_BL + c·d_NL, with 0 < a, b, c < 1 and a + b + c = 1.

We test different values of a, b, c to obtain a better result.
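A sketch of the blending step, assuming the three filtered images have already been computed (with the sketches above or any equivalent implementations); the default weights are the final parameters reported in Sec. 4.2.2.

```python
import numpy as np

def blend_differences(original, med_result, bl_result, nl_result,
                      a=0.2, b=0.25, c=0.55):
    """d_blending = a*d_MED + b*d_BL + c*d_NL with a + b + c = 1."""
    assert abs(a + b + c - 1.0) < 1e-9
    d_med = med_result - original
    d_bl = bl_result - original
    d_nl = nl_result - original
    return a * d_med + b * d_bl + c * d_nl
```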

4.2.2. Variance Correction

As discussed in Sec. 4.2, the noise reduction process should be bounded by a threshold derived from the shot noise and the dark noise, since we do not want to change the original image too much. From the analysis of each noise source in a sensor, we can obtain the expected noise of each pixel. The shot noise is related to light intensity: brighter pixels suffer more from shot noise. Adding up all the expected noise contributions, we can build an expected variance map of the image. Using Eq. (7), this variance map is computed from the sensor parameters and the image intensity:

• varmap = K²σ_d² + 0.08 + K·μ_y (the constant 0.08 corresponds to the quantization noise variance σ_q² ≈ 1/12)
• d_scale = d_blending .* √varmap
• varmean = K²σ_d² + 0.08 + K·mean(Original)
• d_result = d_scale · √(varmean · v_scale) / d_scale

We test v_scale together with the previous a, b, c to obtain a better result; a code sketch follows below. Final parameters: v_scale = 0.75, a = 0.2, b = 0.25, c = 0.55.
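A sketch of the variance correction step following the list above. The per-pixel variance map and varmean follow Eq. (7) with 0.08 as the quantization variance; how d_scale is normalized in the last bullet is ambiguous in the extracted text, so this sketch divides by the standard deviation of d_scale and then adds the correction back onto the original image. Both choices are assumptions made for illustration only.

```python
import numpy as np

def variance_correction(original, d_blending, K, sigma_d, v_scale=0.75):
    """Scale the blended correction by the expected per-pixel noise (Eq. (7)).

    The final rescaling (division by the std of d_scale) and the addition
    back onto the original image are assumptions of this sketch.
    """
    img = original.astype(np.float64)
    var_map = K ** 2 * sigma_d ** 2 + 0.08 + K * img        # expected variance per pixel
    d_scale = d_blending * np.sqrt(var_map)                 # weight correction by expected noise
    var_mean = K ** 2 * sigma_d ** 2 + 0.08 + K * img.mean()
    d_result = d_scale * np.sqrt(var_mean * v_scale) / (np.std(d_scale) + 1e-12)
    return img + d_result
```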

4.2.3. Flowchart

Figure 8 Our proposed flowchart.

5. EXPERIMENTAL RESULT

5.1. Overview

First, we introduce our experiment environment:

CPU: AMD Ryzen 7 1800X, 3.6 GHz

Operating System: Windows 10

Development Environment: Matlab R2017b

Datasets: Two different types of datasets.

The first dataset consists of real measurement images of 11 sensors, captured by Delta Electronics, and is used to evaluate the correctness of the real sensor test. Each sensor dataset contains:

Light images at different exposure values (for the photon transfer method): 306 images (102 images for each color channel). The 102 images cover 51 different exposure values with 2 images per exposure value, according to the EMVA1288 standard.

Dark images at different exposure times (for dark current): 10 images.

Nonuniformity images: 6 × 104 images (a dark image set and a 50% saturation image set for each color channel); the 104 images of each set are averaged to suppress temporal noise.

The other dataset consists of general everyday photographs. To test the noise reduction method on real photographs, we add noise based on real sensor parameters and compare the processed image against the original image.


5.2. Evaluation of the EMVA1288 Standard

We have standard testing equipment for EMVA1288, and we want to verify that every quantity mentioned in the report is correctly computed from the test images. We run the test on every sensor dataset and compare our calculated parameters against the official EMVA report.

Figure 9 Comparison of our calculated parameters with the official EMVA report by AEON.

5.3. Experiment on Real Image

Our experiment uses the system gain to determine the threshold and tests different thresholds. The testing process is as follows: we first add noise based on the EMVA1288 parameters, and then measure the Root Mean Square Error (RMSE) between the original image and the output image.
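A sketch of this evaluation loop. The noise synthesis follows the EMVA1288 model (Poisson shot noise scaled by K, Gaussian dark noise, and rounding for quantization); the dark-noise level and the denoise() placeholder are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_sensor_noise(clean_dn, K, sigma_dark_dn, max_dn=255):
    """Synthesize sensor noise on a clean image given in digital numbers (DN)."""
    electrons = np.clip(clean_dn, 0, None) / K       # back-project DN to electrons
    shot = rng.poisson(electrons)                    # Poisson shot noise
    noisy = K * shot + rng.normal(0.0, sigma_dark_dn, clean_dn.shape)
    return np.clip(np.round(noisy), 0, max_dn)       # quantization and clipping

def rmse(a, b):
    a, b = np.asarray(a, dtype=np.float64), np.asarray(b, dtype=np.float64)
    return float(np.sqrt(np.mean((a - b) ** 2)))

# Usage, with denoise() standing for any of the methods above:
# noisy = add_sensor_noise(original, K=0.05, sigma_dark_dn=2.0)
# print(rmse(original, noisy), rmse(original, denoise(noisy)))
```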

Example image of noise reduction result: (a) Original image. (b) Noisy image, K=0.05. (c) Output image without variance correction. (d) Output image with variance correction.

RMSE before noise reduction: 2.4421
RMSE with pure median filter: 4.2683
RMSE with pure bilateral filter: 4.9344
RMSE with pure non-local mean filter: 3.5931
RMSE with pure blending but without variance correction: 3.7404
RMSE with blending and variance correction: 2.2608

Another example of noise reduction result: (a) Original image. (b) Noisy image, K=0.20. (c) Output image without variance correction. (d) Output image with variance correction.

RMSE before noise reduction: 4.8817
RMSE with pure median filter: 4.7135
RMSE with pure bilateral filter: 4.9851
RMSE with pure non-local mean filter: 3.6504
RMSE with pure blending but without variance correction: 3.8421
RMSE with blending and variance correction: 3.2867

Another example of noise reduction result: (a) Original image. (b) Noisy image, K=0.35. (c) Output image without variance correction. (d) Output image with variance correction.

RMSE before noise reduction: 6.4572
RMSE with pure median filter: 5.0865
RMSE with pure bilateral filter: 5.0741
RMSE with pure non-local mean filter: 3.8084
RMSE with pure blending but without variance correction: 3.9829
RMSE with blending and variance correction: 3.7281

Another example of noise reduction result: (a) Original image. (b) Noisy image, K=0.50. (c) Output image without variance correction. (d) Output image with variance correction.

RMSE before noise reduction: 7.7242
RMSE with pure median filter: 5.4259
RMSE with pure bilateral filter: 5.1683
RMSE with pure non-local mean filter: 4.0458
RMSE with pure blending but without variance correction: 4.1507
RMSE with blending and variance correction: 4.0251

6. CONCLUSION

In this thesis we develop an effective way to reduce sensor noise while preserving details. The noise reduction algorithm can be used as a general preprocessing step before any further machine vision algorithm. From the sensor parameters we take the system gain K and use it to derive the threshold for a better algorithm. We also propose a variance map method that corrects each pixel according to its theoretical noise. Our algorithm shows a good RMSE reduction, and the result images show that it preserves the detailed parts of the image well.

REFERENCES

[1] A. Buades, B. Coll, and J. M. Morel, "A Non-Local Algorithm for Image Denoising," Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, San Diego, CA, pp. 1-6, 2005.

[2] A. Darmont, J. Chahiba, J. F. Lemaitre, M. Pirson, and D. Dethier, "Implementing and Using the EMVA1288 Standard," http://adsabs.harvard.edu/abs/2012SPIE.8298E..0HD, 2012.

[3] C. Tomasi and R. Manduchi, "Bilateral Filtering for Gray and Color Images," Proceedings of International Conference on Computer Vision, Bombay, India, pp. 839-846, 1998.

[4] European Machine Vision Association, "EMVA Standard 1288 Release 3.1," http://www.emva.org/wp-content/uploads/EMVA1288-3.1a.pdf, 2016.

[5] PointGrey, "MonoCameraSensorPerformanceReview2017-Q1.pdf," 2017.

[6] PointGrey, "ColorCameraSensorPerformanceReview2017-Q1.pdf," 2017.