
Average Sampling and Signal Denoising

Wenchang SUN (孙文昌)

Nankai University

Average Sampling for Band-limited Functions

The sampling theorem says that under certain conditions, a signal f in a function space V is uniquely determined and can be reconstructed from the sampled values {f(x_k) : k ∈ Z}.

f(x) = \sum_{k} f(x_k)\, S_k(x).

Average Sampling for Band-limited Functions

For example, the classical Shannon sampling theorem says that if a signal f ∈ L²(R) is band-limited to [−π, π], i.e., \hat{f}(ω) = 0 for |ω| > π, where

\hat{f}(\omega) = \int_{\mathbb{R}} f(x)\, e^{-ix\omega}\, dx

is the Fourier transform of f, then f is uniquely determined by the sampled values f(k) and can be reconstructed by

f(x) = \sum_{k \in \mathbb{Z}} f(k)\, \frac{\sin \pi(x-k)}{\pi(x-k)},

where the convergence is both in L²(R) and uniform on R.
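The truncated Shannon series is easy to experiment with numerically. Below is a minimal NumPy sketch (not part of the slides; the test signal and the truncation range are my own choices) that reconstructs a band-limited signal from finitely many integer samples:

```python
import numpy as np

def shannon_reconstruct(samples, ks, x):
    """Truncated Shannon series: sum_k f(k) * sin(pi(x-k)) / (pi(x-k)).

    np.sinc(t) = sin(pi t)/(pi t), so np.sinc(x - k) is exactly the kernel above.
    """
    x = np.asarray(x, dtype=float)
    return sum(fk * np.sinc(x - k) for fk, k in zip(samples, ks))

# f(x) = sinc(x/2) is band-limited to [-pi/2, pi/2], hence to [-pi, pi].
f = lambda t: np.sinc(t / 2)
ks = np.arange(-50, 51)                     # finitely many samples f(k), |k| <= 50
x = np.linspace(-5, 5, 201)
err = np.max(np.abs(shannon_reconstruct(f(ks), ks, x) - f(x)))
print(f"max truncation error on [-5, 5]: {err:.1e}")
```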

Average Sampling for Band-limited Functions

For physical reasons, e.g., the inertia of the measurement apparatus, it is sometimes difficult to measure the value of a signal precisely at time x. In practice, only a local average near x can be measured. Specifically, let {x_k} be an increasing real sequence such that x_k → ±∞ as k → ±∞. Then the sampled values will be ⟨f, u_k⟩ for some collection of averaging functions u_k satisfying

\operatorname{supp} u_k \subset \Big[x_k - \frac{\delta}{2},\, x_k + \frac{\delta}{2}\Big], \qquad u_k \ge 0, \qquad \int u_k(x)\, dx = 1.
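To make the sampling model concrete, here is a small sketch (my own illustration, assuming the simplest choice of u_k: a uniform averaging window of width δ) that computes local-average samples ⟨f, u_k⟩ numerically:

```python
import numpy as np

def local_average_samples(f, xk, delta, n=200):
    """<f, u_k> for u_k = (1/delta) * indicator([x_k - delta/2, x_k + delta/2]),
    approximated by the midpoint rule with n nodes per window."""
    xk = np.asarray(xk, dtype=float)
    out = np.empty_like(xk)
    for i, c in enumerate(xk):
        t = c - delta / 2 + (np.arange(n) + 0.5) * delta / n   # midpoints of the window
        out[i] = np.mean(f(t))      # equals (1/delta) * integral of f over the window
    return out

f = lambda t: np.sinc(t / 2)        # a band-limited test signal
print(local_average_samples(f, np.arange(-3, 4), delta=0.2))
```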

Average Sampling for Band-limited Functions

It is clear that from local averages one should obtain at least a good approximation of the original signal if δ is small enough. Wiley, Butzer and Lei studied the approximation error when local averages are used as sampled values.

Problem: can we reconstruct a signal exactly from local averages?


Average Sampling for Band-limited Functions

Gröchenig (1992) proved that if

x_{k+1} - x_k \le \delta < \frac{1}{\sqrt{2}\,\Omega},

then every f ∈ B_Ω := {f ∈ L²(R) : supp \hat{f} ⊂ [−Ω, Ω]} is uniquely determined by the local averages ⟨f, u_k⟩ around x_k, and f can be reconstructed by an iterative algorithm.

Gröchenig's result works for arbitrary averaging functions, but it requires a sampling rate greater than the Nyquist rate.

Average Sampling for Band-limited Functions

Feichtinger and Gröchenig (1994) proved that if δ := sup_{k∈Z}(x_{k+1} − x_k) < π/Ω, then every f which is band-limited to [−Ω, Ω] is uniquely determined by the interval averages \frac{1}{y_k - y_{k-1}} \int_{y_{k-1}}^{y_k} f(t)\, dt, where y_k = (x_k + x_{k+1})/2, k ∈ Z, and

f(x) = \sum_{k \in \mathbb{Z}} \frac{1}{y_k - y_{k-1}} \int_{y_{k-1}}^{y_k} f(t)\, dt \cdot S_k(x).

Average Sampling for Band-limited Functions

Problem. Given a sequence of sampling points which satisfies

0 < x_{k+1} - x_k \le \beta < \frac{\pi}{\Omega},

find a positive constant δ such that for any averaging functions u_k(x) with supp u_k ⊂ [x_k − δ/2, x_k + δ/2], we can reconstruct f from ⟨f, u_k⟩ in a numerically stable way.

Average Sampling for Band-limited Functions

Suppose that x_{k+1} − x_k ≤ β < π/Ω. How large can δ be?

Theorem (Sun & Zhou, Constr. Approx. 2002)
Stable reconstruction from ⟨f, u_k⟩ is possible for arbitrary averaging functions if δ < π/Ω − β; the conclusion fails if δ > π/Ω − β.

Average Sampling for Band-limited Functions

Suppose that x_{k+1} − x_k = επ/Ω with 0 < ε < 1. How large can δ be?

Theorem (Sun & Zhou, Constr. Approx. 2002)
Regular sampling: x_k = kεπ/Ω for some 0 < ε < 1. Define

\Delta_\varepsilon = \frac{m\varepsilon\pi}{\Omega} \quad \text{for} \quad \frac{1}{m+1} \le \varepsilon < \frac{1}{m}, \; m \ge 1.

Then the reconstruction is possible if 0 < δ < Δ_ε; the conclusion fails if δ ≥ Δ_ε and 1/ε is not an integer.

Average Sampling for Band-limited Functions

Suppose that x_{k+1} − x_k = π/Ω. How large can δ be?

Theorem (Sun & Zhou, Constr. Approx. 2002)
Regular sampling: x_k = kπ/Ω. Then the reconstruction is possible if 0 < δ < π/(2Ω); the conclusion fails if δ > π/(2Ω).

Average Sampling for Band-limited Functions

Theorem (Sun & Zhou, Constr. Approx. 2002)
Let x_k = kεπ/Ω with 0 < ε ≤ 1, and let

u_k(x) = \frac{1}{\delta_k}\, \chi_{[x_k - \delta_k/2,\; x_k + \delta_k/2]}(x).

Then the reconstruction is possible whenever sup_k δ_k ≤ δ < π/Ω.

Average Sampling for Band-limited Functions

What can be said for more general averaging functions u_k(x)?

Theorem (Sun & Zhou, IEEE-IT 2002)
Let x_k = kεπ/Ω with 0 < ε ≤ 1. Suppose that supp u_k ⊂ [x_k − δ/2, x_k + δ/2] and that u_k(x + x_k) is even and nonincreasing on [0, δ/2]. Then the reconstruction is possible if 0 < δ < 1.8830453 π/Ω; the conclusion fails if δ ≥ 2π/Ω.

Average Sampling for Band-limited Functions

Theorem (Song, Sun, Zhou & Hou, IEEE-IT, 2007)
Let {u_k : k ∈ Z} be a sequence of averaging functions and {t_k : k ∈ Z} a relatively separated sequence of real numbers. Suppose that {S_k : k ∈ Z} is a frame for B_Ω such that

f(t) = \sum_{k \in \mathbb{Z}} \langle f, u_k \rangle\, S_k(t), \quad \forall f \in B_\Omega, \qquad (1)

where the convergence is both in L²(R) and uniform on R. Let X(t, ω) be a band-limited process whose autocorrelation function R_X satisfies |R_X(t)| ≤ R_X(0)(1 + |t|)^{−1−η} for some η > 0. Then we have

\lim_{N \to \infty} E\left| X(t, \omega) - \sum_{k=-N}^{N} \langle X(\cdot, \omega), u_k \rangle\, S_k(t) \right|^2 = 0, \quad \forall t \in \mathbb{R}.

Average Sampling: Determining the Averaging Functions

Theorem (Liu & Sun, IEEE-SPL 2007)

Let f ∈ B_Ω be such that \hat{f}(ω) ≠ 0 on [−Ω, Ω]. Then for ω ∈ [2mΩ − Ω, 2mΩ + Ω], m ∈ Z, we have

\hat{u}_k(\omega) = \frac{1}{\hat{f}(\omega - 2m\Omega)} \sum_{n \in \mathbb{Z}} \frac{\pi}{\Omega} \Big\langle f\Big(\frac{n\pi}{\Omega} - \cdot\Big)\, e^{-2m\Omega i\,\cdot},\, u_k \Big\rangle\, e^{-i\frac{n\pi}{\Omega}\omega}. \qquad (2)

Average Sampling: Determining the Averaging Functions

Theorem (Liu & Sun, IEEE-SPL 2007)

Let {x_k : k ∈ Z} and {u_k : k ∈ Z} be sequences of sampling points and averaging functions, respectively. Suppose that f ∈ L²(R) and f(x) ≠ 0 for x ∈ [x_k − σ, x_k + σ]. Denote f_n(x) = f(x)\, e^{i n\pi x/\sigma}. Then we have

u_k(x) = \frac{1}{2\sigma f(x)} \sum_{n \in \mathbb{Z}} \langle f_n, u_k \rangle\, e^{-i\frac{n\pi}{\sigma} x}, \quad x \in [x_k - \sigma, x_k + \sigma]. \qquad (3)
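Formula (3) is essentially a Fourier series expansion of f·u_k on the window [x_k − σ, x_k + σ] (assuming u_k is real). The following sketch, with a made-up f and a made-up smooth averaging function (neither is from the slides), checks the identity numerically with a truncated sum:

```python
import numpy as np

# Numerical check of (3) on one window: if f is nonzero on [x_k - sigma, x_k + sigma]
# and u_k is real and supported there, then
#   u_k(x) = (1 / (2 sigma f(x))) * sum_n <f_n, u_k> e^{-i n pi x / sigma},
# where f_n(x) = f(x) e^{i n pi x / sigma}.
sigma, xk, N = 1.0, 0.0, 60
x = np.linspace(xk - sigma, xk + sigma, 4001)
dx = x[1] - x[0]

f = 2.0 + np.cos(x)                                            # nonzero on the window
u = np.where(np.abs(x - xk) <= 0.3,
             np.cos(np.pi * (x - xk) / 0.6) ** 2 / 0.3, 0.0)   # smooth average, integral 1

recon = np.zeros_like(x, dtype=complex)
for n in range(-N, N + 1):
    fn = f * np.exp(1j * n * np.pi * x / sigma)                # f_n on the window
    inner = np.sum(fn * u) * dx                                # <f_n, u_k> (u_k is real)
    recon += inner * np.exp(-1j * n * np.pi * x / sigma)
recon /= 2 * sigma * f

print("max abs deviation from u_k:", np.max(np.abs(recon.real - u)))
```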

Average Sampling for Shift Invariant Spaces

Although the assumption that a signal is band-limited is eminently useful, it is not always realistic, since a band-limited signal is of infinite duration. Thus, it is natural to investigate other signal classes for which a sampling theorem holds. A simple model is to consider shift-invariant subspaces, e.g., wavelet subspaces, which generalize the space of band-limited functions and are of the form

V = \operatorname{span}\{\varphi(\cdot - k) : k \in \mathbb{Z}\}

for some kernel function ϕ(x). In fact, there have been many results concerning sampling in shift-invariant subspaces, for both regular and irregular sampling.

Average Sampling for Shift Invariant Spaces

In particular, for the spline subspace

V_m = \Big\{ \sum_{k \in \mathbb{Z}} c_k\, \varphi_m(\cdot - k) : (c_k) \in \ell^2 \Big\}

generated by the cardinal B-spline

\varphi_m = \chi_{[0,1]} * \cdots * \chi_{[0,1]} \quad (m+1 \text{ factors}), \; m \ge 1,

many sampling theorems were established.
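As a side illustration (mine, not from the slides), the cardinal B-spline ϕ_m can be evaluated with the standard B-spline recurrence on the uniform knots 0, 1, …, m+1, which is handy when experimenting with V_m:

```python
import numpy as np

def cardinal_bspline(m, x):
    """Cardinal B-spline phi_m = chi_[0,1] * ... * chi_[0,1] (m+1 factors),
    evaluated via the recurrence
    phi_m(x) = ( x * phi_{m-1}(x) + (m+1-x) * phi_{m-1}(x-1) ) / m."""
    x = np.asarray(x, dtype=float)
    if m == 0:
        return np.where((0 <= x) & (x < 1), 1.0, 0.0)
    return (x * cardinal_bspline(m - 1, x)
            + (m + 1 - x) * cardinal_bspline(m - 1, x - 1)) / m

# phi_1 is the hat function on [0, 2]; a signal in V_m is a combination of integer shifts.
x = np.linspace(-1.0, 4.0, 11)
print(cardinal_bspline(1, x))
print(cardinal_bspline(3, np.array([1.0, 2.0, 3.0])))   # cubic case: 1/6, 2/3, 1/6
```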

Average Sampling for Shift Invariant Spaces

Liu (IEEE-IT, 1996) proved that every f ∈ V_m is uniquely determined and can be reconstructed by an iterative algorithm from its samples f(x_k) if x_{k+1} − x_k is small enough.

Furthermore, Aldroubi and Gröchenig (JFAA, 2000) proved that if

0 < \alpha \le x_{k+1} - x_k \le \beta < 1,

then every f ∈ V_m is uniquely determined by {f(x_k)}.

Average Sampling for Shift Invariant Spaces

Theorem (Sun & Zhou, Proc AMS 2003)

Let {x_k : k ∈ Z} be a real sequence such that lim_{k→±∞} x_k = ±∞ and

0 < x_{k+1} - x_k \le \beta < 1, \quad k \in \mathbb{Z},

for some constant β. Then for any 0 < δ < 1 − β and averaging functions u_k(x) with supp u_k ⊂ [x_k − δ/2, x_k + δ/2], there is a frame {S_k(x) : k ∈ Z} for V_m such that for any f ∈ V_m,

f(x) = \sum_{k \in \mathbb{Z}} \Big( \frac{x_{k+1} - x_{k-1}}{2} \Big)^{1/2} \langle f, u_k \rangle\, S_k(x), \qquad (4)

where the convergence is both in L²(R) and uniform on R. Furthermore, the conclusion fails if δ ≥ 1 − β.

Average Sampling for Shift Invariant Spaces

For the standard averaging function, we have the following result.

Theorem
Let \hat{S}(\omega) = \dfrac{\hat{\varphi}_m(\omega)}{Z\varphi_{m+1}(m/2, \omega)}. Then {S(· − k)} is a Riesz basis for V_m and, for any f ∈ V_m,

f(x) = \sum_{k \in \mathbb{Z}} \langle f, u(\cdot - k) \rangle\, S(x - k), \qquad (5)

where u(x) = \chi_{[m/2 - 1,\, m/2]}(x) and the convergence is both in L²(R) and uniform on R.

Average Sampling for Shift Invariant Spaces

Theorem
Suppose that {x_k} is a real sequence such that

0 < \alpha \le x_{k+1} - x_k \le \beta < 1

for some constants α and β. Then there is a frame {S_k} for V_m such that for any f ∈ V_m,

f(x) = \sum_{k \in \mathbb{Z}} \langle f, u(\cdot - x_k) \rangle\, S_k(x),

where u(x) = \chi_{[-1/2,\, 1/2]}(x) and the convergence is both in L²(R) and uniform on R.

Signal Denoising

Signal Denoising

In practice, measured sampled values are usually corrupted by noise. They are assumed to be of the following form:

y_k := f\Big(\frac{k\tau\pi}{\Omega}\Big) + \varepsilon_k, \qquad (6)

where {ε_k : k ∈ Z} is a sequence of independent and identically distributed random variables with zero mean and variance σ².

When we replace f(kτπ/Ω) by y_k in the classical Shannon sampling theorem, the new series might not converge. That is, the partial sums

f_n(x) := \sum_{|k| \le n} y_k\, \frac{\sin(\Omega x/\tau - k\pi)}{\Omega x/\tau - k\pi}

might diverge as n tends to infinity. In fact, it was shown that

E \int_{\mathbb{R}} |f_n(x) - f(x)|^2\, dx \to \infty, \quad n \to \infty, \qquad (7)

where E stands for the mathematical expectation.

Signal Denoising

To fix this defect, a linear smoothing method was considered. Let

\bar{y}_k = \sum_{|i| \le M} \omega_i\, y_{k-i}, \qquad (8)

where M ≥ 1 is a constant, ω_i ≥ 0 and \sum_{|i| \le M} ω_i = 1. The smoothed value \bar{y}_k is the weighted moving average of the y_i in a neighborhood of f(kτπ/Ω), and {\bar{y}_k} is often called the moving average sequence. Let

\bar{f}_n(x) := \sum_{|k| \le n} \bar{y}_k\, \frac{\sin(\Omega x/\tau - k\pi)}{\Omega x/\tau - k\pi}. \qquad (9)

It was shown that E‖\bar{f}_n − f‖² can be made arbitrarily small by choosing appropriate τ and n. However, for fixed τ, \bar{f}_n does not converge to f in general, even if ε_k = 0 for all k ∈ Z.
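A toy illustration of the moving-average smoothing (8) (my own example with uniform weights; the signal, sample count and noise level are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 4 * np.pi, 400)
clean = np.sin(t)                             # stand-in for the exact samples f(k*tau*pi/Omega)
yk = clean + rng.normal(0.0, 0.3, t.size)     # noisy samples, as in (6)

M = 3
w = np.ones(2 * M + 1) / (2 * M + 1)          # weights w_i >= 0 with sum 1
yk_bar = np.convolve(yk, w, mode="same")      # moving average sequence (8)

print("rms error before smoothing:", np.sqrt(np.mean((yk - clean) ** 2)))
print("rms error after smoothing: ", np.sqrt(np.mean((yk_bar - clean) ** 2)))
```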

Signal Denoising

Motivated by average sampling, we present a new reconstruction algorithm which has the following features:

1. If the sampled values are noisy, then the reconstructed signal is denoised; and

2. If the sampled values are exact, so is the reconstructed signal.

Signal Denoising

Theorem (Zhang, Wang & Sun, Digital Signal Processing, 2012)
Let N ≥ 2 be an integer and δ, τ be constants such that 0 < τ < 2/N and 0 < δ < (1/τ − 1)Ω. Define

\hat{S}_N(\omega) =
\begin{cases}
\dfrac{N\pi\tau/\Omega}{1 + e^{i\omega\pi\tau/\Omega} + \cdots + e^{i(N-1)\omega\pi\tau/\Omega}}, & \omega \in [-\Omega, \Omega],\\[2mm]
\dfrac{-N(\omega - \Omega - \delta)\,\pi\tau/\Omega}{\delta\big(1 + e^{i\omega\pi\tau/\Omega} + \cdots + e^{i(N-1)\omega\pi\tau/\Omega}\big)}, & \omega \in (\Omega, \Omega + \delta],\\[2mm]
\dfrac{N(\omega + \Omega + \delta)\,\pi\tau/\Omega}{\delta\big(1 + e^{i\omega\pi\tau/\Omega} + \cdots + e^{i(N-1)\omega\pi\tau/\Omega}\big)}, & \omega \in [-\Omega - \delta, -\Omega),\\[2mm]
0, & \text{otherwise}.
\end{cases}
\qquad (10)

Then for any f ∈ B_Ω, we have

f(t) = \sum_{k \in \mathbb{Z}} \frac{f(k\pi\tau/\Omega) + \cdots + f((k+N-1)\pi\tau/\Omega)}{N}\; S_N\Big(t - \frac{k\pi\tau}{\Omega}\Big), \quad t \in \mathbb{R}, \qquad (11)

where the convergence is both in L²(R) and uniform on R.

Signal Denoising

Error Estimate:

If the sampled values f(kτπ/Ω) in R_N f are replaced by f(kτπ/Ω) + ε_k, where {ε_k : k ∈ Z} is a sequence of independent and identically distributed random variables with zero mean and variance σ², then for any f ∈ B_Ω we have

E\,|f(t) - (R_N f)(t)|^2 = \frac{\Omega \sigma^2}{2\pi^2 \tau} \int_{-\Omega-\delta}^{\Omega+\delta} \frac{1 - \cos(N\omega\pi\tau/\Omega)}{N^2\big(1 - \cos(\omega\pi\tau/\Omega)\big)}\, |\hat{S}_N(\omega)|^2\, d\omega.

Signal Denoising

Based on the previous result, we get a reconstruction algorithm. Assume that f is a signal band-limited to [−Ω, Ω]. We can reconstruct f from sampled values with the following steps (a numerical sketch follows the list).

1. Choose an integer N ≥ 2 and positive constants τ, δ such that 0 < τ < 2/N and 0 < δ < (1/τ − 1)Ω.

2. Sample f on some interval [t_1, t_2] with sampling period τπ/Ω. Denote the sampled values by {f(kπτ/Ω) : k_1 ≤ k ≤ k_2}.

3. Compute S_N from (10) via the inverse Fourier transform.

4. Compute R_N f by

(R_N f)(t) = \sum_{k=k_1}^{k_2-N+1} \frac{f(k\pi\tau/\Omega) + \cdots + f((k+N-1)\pi\tau/\Omega)}{N}\; S_N\Big(t - \frac{k\pi\tau}{\Omega}\Big).
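Here is a rough numerical sketch of steps 1–4 (my own illustration: the values of Ω, τ, δ, N, the test signal, and the truncation ranges are all assumptions, and S_N is obtained by brute-force quadrature of the inverse Fourier integral rather than an FFT):

```python
import numpy as np

# Step 1: parameters satisfying 0 < tau < 2/N and 0 < delta < (1/tau - 1)*Omega (assumed values).
Omega, N, tau = np.pi, 2, 0.5
delta = 0.4 * Omega
T = np.pi * tau / Omega                          # sampling period

def SN_hat(w):
    """Fourier transform of the kernel, following (10)."""
    denom = sum(np.exp(1j * j * w * T) for j in range(N))
    g = np.clip((Omega + delta - np.abs(w)) / delta, 0.0, 1.0)   # 1 on [-Omega, Omega], linear taper, 0 outside
    return N * T * g / denom

def SN(t, M=4000):
    """Step 3: S_N(t) = (1/2pi) * integral of SN_hat(w) e^{i w t} dw (midpoint rule)."""
    w = np.linspace(-Omega - delta, Omega + delta, M, endpoint=False)
    w = w + (w[1] - w[0]) / 2
    vals = np.sum(SN_hat(w) * np.exp(1j * np.outer(t, w)), axis=1) * (w[1] - w[0]) / (2 * np.pi)
    return vals.real

# Steps 2 and 4: noiseless samples of a test signal band-limited to [-Omega, Omega],
# averaged over N consecutive samples and fed into the kernel series.
f = lambda t: np.sinc(Omega * t / (2 * np.pi)) ** 2
ks = np.arange(-40, 41)
avg = np.mean([f((ks + j) * T) for j in range(N)], axis=0)

t = np.linspace(-4, 4, 81)
rec = sum(a * SN(t - k * T) for a, k in zip(avg, ks))
print("max |f - R_N f| on [-4, 4]:", np.max(np.abs(rec - f(t))))
```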

Signal Denoising

[Figure: denoising example for a test signal f(x); panels compare the original signal with the reconstructions by [PRK] and by this paper.]

Signal Denoising

[Figure: the same denoising comparison for a second test signal g(x): the original signal and the reconstructions by [PRK] and by this paper.]

Local Sampling Theorems

Local Sampling Theorem

Most known results concern global sampling. That is, to recover a function at a point or on an interval, we have to know all of the samples, which are usually infinitely many.

For example, in the Shannon sampling theorem,

f(x) = \sum_{k \in \mathbb{Z}} f(k)\, \frac{\sin \pi(x-k)}{\pi(x-k)}.

Local Sampling Theorem

Another example: consider the spline subspaces defined by

V_m = \Big\{ \sum_{k \in \mathbb{Z}} c_k\, \varphi_m(\cdot - k) : (c_k) \in \ell^2 \Big\},

where ϕ_m denotes the degree-m cardinal B-spline, i.e.,

\varphi_m = \chi_{[0,1]} * \cdots * \chi_{[0,1]} \quad (m+1 \text{ factors}), \; m \ge 1.

It was shown that

f(x) = \sum_{k \in \mathbb{Z}} f\Big(k + \frac{m+1}{2}\Big)\, S(x - k), \quad \forall f \in V_m,

where S(x) is defined by

\hat{S}(\omega) = \frac{\hat{\varphi}_m(\omega)}{\sum_{k \in \mathbb{Z}} \varphi_m(k + (m+1)/2)\, e^{-ik\omega}}.

Observe that \sum_{k \in \mathbb{Z}} \varphi_m(k + (m+1)/2)\, e^{-ik\omega} is a trigonometric polynomial for m ≥ 2, and its reciprocal has infinitely many non-zero Fourier coefficients.

Local Sampling Theorem

On the other hand, since the generating function ϕ_m is compactly supported, for a given f(x) = \sum_{k \in \mathbb{Z}} c_k\, \varphi_m(x - k), the value f(x_0) depends on only finitely many c_k. This implies that it could be possible to reconstruct f at one point from only finitely many samples.

Local Sampling Theorem

We consider the following local sampling problem:

Find conditions on the sampling points {x_k : 0 ≤ k ≤ K − 1} ⊂ [N_1, N_2], where N_1 < N_2 are integers, such that we can reconstruct every f ∈ V_m on the interval [N_1, N_2] from only the finitely many samples {f(x_k) : 0 ≤ k ≤ K − 1}. That means, there is a sequence of functions {S_k : 0 ≤ k ≤ K − 1} such that

f(x) = \sum_{k=0}^{K-1} f(x_k)\, S_k(x), \quad \forall f \in V_m, \; x \in [N_1, N_2]. \qquad (12)

Local Sampling Theorem

Local sampling is practically useful: in many cases we only need to consider a signal on a bounded interval, and computers can process only finitely many samples.

Local Sampling Theorem

As far as we know, there are only two results in this direction. One is the periodic nonuniform sampling theorem. Specifically, let 0 ≤ x_0 < x_1 < ⋯ < x_m < 1 be fixed. Then we can find compactly supported S_0, S_1, …, S_m ∈ V_m such that

f(x) = \sum_{k \in \mathbb{Z}} \sum_{0 \le p \le m} f\big(x_p + k(m+1)\big)\, S_p\big(x - k(m+1)\big).

In this case, f(x) is determined by finitely many samples near x. However, the distribution of sampling points is not balanced: there are m+1 sampling points in every interval of the form [k(m+1), k(m+1)+1), while there is no sampling point in the subsequent interval of length m.

Local Sampling Theorem

The other appeared in a recent paper by Gröchenig and Schwab, which was stated for general shift-invariant spaces with compactly supported generators. As a consequence, it was shown that if

0 < x_{k+1} - x_k \le \frac{N_2 - N_1}{N_2 - N_1 + 2m + 1},

then one can reconstruct every f ∈ V_m on [N_1, N_2] exactly with finitely many samples.

Local Sampling Theorem

Note that there is no local sampling sequence for band-limited functions. In fact, since band-limited functions are restrictions to the real line of analytic functions, if

f(x) = \sum_{k=0}^{K-1} f(x_k)\, S_k(x)

holds on some interval, then f is determined on the whole real line, which is impossible since a band-limited function is not determined by finitely many samples in general.

Local Sampling Theorem

For the case of spline subspaces, however, such a local samplingsequence does exist. In fact, we have the following.

Definition
We call a sequence E := {x_k : 0 ≤ k ≤ K − 1} a local sampling sequence for V_m on [N_1, N_2] if E ⊂ [N_1, N_2] and there is a sequence of functions {S_k : 0 ≤ k ≤ K − 1} such that

f(x) = \sum_{k=0}^{K-1} f(x_k)\, S_k(x), \quad \forall f \in V_m, \; x \in [N_1, N_2].

Local Sampling Theorem

Theorem (Sun & Zhou, AiCM, 2009)

A sequence E of distinct points is a local sampling sequence for V_m on [N_1, N_2] if and only if it satisfies the following conditions:

\#E \ge N_2 - N_1 + m. \qquad (13)

\#\big(E \cap [N_1, N_1 + k)\big) \ge k, \quad 0 \le k \le N_2 - N_1. \qquad (14)

\#\big(E \cap (N_2 - k, N_2]\big) \ge k, \quad 0 \le k \le N_2 - N_1. \qquad (15)

\#\big(E \cap (n_1, n_2)\big) \ge n_2 - n_1 - m, \quad N_1 \le n_1 \le n_2 \le N_2. \qquad (16)
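Conditions (13)–(16) are purely combinatorial, so they are easy to verify by counting. The following sketch (my own, with made-up point sets) tests whether a finite set of distinct points satisfies them:

```python
def is_local_sampling_sequence(E, N1, N2, m):
    """Check conditions (13)-(16) for a finite set E of distinct points in [N1, N2]
    (a direct transcription of the characterization; integers N1 < N2, spline degree m)."""
    E = sorted(E)
    count = lambda a, b, left_closed, right_closed: sum(
        (a <= x if left_closed else a < x) and (x <= b if right_closed else x < b) for x in E)
    if len(E) < N2 - N1 + m:                                       # (13)
        return False
    for k in range(N2 - N1 + 1):
        if count(N1, N1 + k, True, False) < k:                     # (14)
            return False
        if count(N2 - k, N2, False, True) < k:                     # (15)
            return False
    for n1 in range(N1, N2 + 1):
        for n2 in range(n1, N2 + 1):
            if count(n1, n2, False, False) < n2 - n1 - m:          # (16)
                return False
    return True

# Example with m = 2 on [0, 3]: at least 5 well-distributed points are needed.
print(is_local_sampling_sequence([0.1, 0.9, 1.5, 2.1, 2.9], 0, 3, 2))   # True
print(is_local_sampling_sequence([0.1, 0.2, 0.3, 0.4, 2.9], 0, 3, 2))   # False
```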

Local Sampling Theorem

The proof of this theorem depends on the celebrated Schoenberg-Whitney Theorem. In fact, they are equivalent.


Local Sampling Theorem

Aldroubi and Gröchenig proved that if the sampling sequence {x_k : k ∈ Z} satisfies

0 < \varepsilon_0 \le x_{k+1} - x_k \le \delta_0 < 1, \quad k \in \mathbb{Z}, \qquad (17)

then there are constants C_1, C_2 > 0 such that

C_1 \|f\|_2^2 \le \sum_{k \in \mathbb{Z}} |f(x_k)|^2 \le C_2 \|f\|_2^2, \quad \forall f \in V_m, \qquad (18)

and every f ∈ V_m can be reconstructed from {f(x_k) : k ∈ Z}. The standard procedure to reconstruct a function from the sampled values is carried out with the frame algorithm.

We give an explicit solution for {S_k : k ∈ Z} with the help of local sampling, where every S_k is compactly supported.

Local Sampling Theorem

Theorem (Sun & Zhou, AiCM, 2009)

Suppose that N_1 = x_0 < x_1 < ⋯ < x_K = N_2 and

x_{k+1} - x_k < 1, \quad 0 \le k \le K - 1,

where K ≥ N_2 − N_1 + m. Let {u_k : 0 ≤ k ≤ K − 1} be a sequence of averaging functions such that supp u_k ⊂ [x_k, x_{k+1}), 0 ≤ k ≤ K − 1. Then there exist functions S_k ∈ V_m satisfying supp S_k ⊂ [N_1 − m, N_2 + m] such that

f(x) = \sum_{k=0}^{K-1} \langle f, u_k \rangle\, S_k(x), \quad \forall f \in V_m, \; x \in [N_1, N_2].

Local Sampling Theorem

Corollary (Sun & Zhou, AiCM, 2009)

Suppose that N_1 = x_0 < x_1 < ⋯ < x_K = N_2 and

x_{k+1} - x_k < 1, \quad 0 \le k \le K - 1,

where K ≥ N_2 − N_1 + m. Then there exist functions S_k ∈ V_m such that supp S_k ⊂ [N_1 − m, N_2 + m] and

f(x) = \sum_{k=0}^{K-1} \frac{1}{x_{k+1} - x_k} \int_{x_k}^{x_{k+1}} f(t)\, dt\; S_k(x), \quad \forall f \in V_m, \; x \in [N_1, N_2].

Local Sampling Theorem

Let Γ = {t_k : k ∈ Z} be a sequence of real numbers such that

t_k \le t_{k+1} \quad \text{and} \quad t_k < t_{k+m}, \quad k \in \mathbb{Z}, \qquad (19)

where m ≥ 1 is a fixed integer. Note that the above inequalities show that each point can appear at most m times in the sequence. Let ϕ_n be the degree-m B-spline with knots (t_n, t_{n+1}, …, t_{n+m+1}) and

V_m = \Big\{ \sum_{n \in \mathbb{Z}} c_n\, \varphi_n : c_n \in \mathbb{C} \Big\}.

Local Sampling Theorem

Observe that V_m stands for a large class of function spaces. For example, for Γ = Z, ϕ_n is exactly the degree-m cardinal B-spline, i.e., ϕ_n = B_m(· − n), where B_m = χ_[0,1] ∗ ⋯ ∗ χ_[0,1] (m + 1 factors).

Another interesting example is Γ = {⌊n/r⌋ : n ∈ Z}, where 1 ≤ r ≤ m and we use the notation ⌊x⌋ := max{n ∈ Z : n ≤ x}. In this case,

V_m^{(r)} = \Big\{ \sum_{n \in \mathbb{Z},\, 1 \le l \le r} c_{n,l}\, \psi_l(\cdot - n) : c_{n,l} \in \mathbb{C} \Big\}, \qquad (20)

where ψ_l is the normalized B-spline with knots (⌊l/r⌋, ⌊(l+1)/r⌋, …, ⌊(l+m+1)/r⌋). These splines are investigated in the study of wavelets of multiplicity r.

Local Sampling Theorem

For convenience, we introduce the following definitions. Given a knot sequence {t_k : k ∈ Z}, we define

L(k) := L_\Gamma(k) = \min\{k' \le k : t_{k'} = t_k\}, \qquad R(k) := R_\Gamma(k) = \max\{k' \ge k : t_{k'} = t_k\}.

For simplicity, we write L(k) and R(k) instead of L_Γ(k) and R_Γ(k), respectively. It is easy to see that t_k appears exactly R(k) − L(k) + 1 times in Γ. By (19), we have

0 \le R(k) - L(k) \le m - 1, \quad \forall k \in \mathbb{Z}. \qquad (21)

Let #E denote the cardinality of a sequence E.

Local Sampling Theorem

Theorem (Sun, Math Comput 2009)

A sequence E of distinct points is a local sampling sequence for V_m on [t_{N_1}, t_{N_2}] if and only if it satisfies the following conditions:

\#E \ge L(N_2) - R(N_1) + m.

\#\big(E \cap [t_{N_1}, t_n)\big) \ge R(n) - R(N_1), \quad N_1 \le n \le N_2.

\#\big(E \cap (t_n, t_{N_2}]\big) \ge L(N_2) - L(n), \quad N_1 \le n \le N_2.

\#\big(E \cap (t_{n_1}, t_{n_2})\big) \ge R(n_2) - L(n_1) - m, \quad N_1 \le n_1 \le n_2 \le N_2.

Local Sampling Theorem

By setting t_n = ⌊n/r⌋, we get a characterization of local sampling sequences for the space V_m^{(r)} defined by (20).

Corollary (Sun, Math Comput 2009)
A sequence E of distinct points is a local sampling sequence for V_m^{(r)} on [N'_1, N'_2] if and only if it satisfies the following conditions:

\#E \ge r(N'_2 - N'_1 - 1) + m + 1.

\#\big(E \cap [N'_1, N'_1 + k)\big) \ge rk, \quad 0 \le k \le N'_2 - N'_1.

\#\big(E \cap (N'_2 - k, N'_2]\big) \ge rk, \quad 0 \le k \le N'_2 - N'_1.

\#\big(E \cap (n'_1, n'_2)\big) \ge r(n'_2 - n'_1 + 1) - m - 1, \quad N'_1 \le n'_1 \le n'_2 \le N'_2.

Thank you!
