
Asymptotic expansion of oscillatory integrals

satisfying Varchenko’s condition

Maxim Gilula

October 25, 2015

Abstract

We consider scalar oscillatory integrals with real analytic phase φ satisfying the analytic condition used by Varchenko in [15]. We first show Varchenko's condition implies a decay rate for ∇φ close to the origin. This decay rate allows us to integrate by parts away from the singularities of ∇φ. We decompose our integral into dyadic boxes close enough to the origin, estimate each piece using the decay rate, and apply linear programming to obtain Varchenko's estimates. The techniques in this proof allow us to compute the exponents appearing in the asymptotic expansion of scalar oscillatory integrals satisfying Varchenko's condition. Moreover, we show the asymptotic expansion holds for all λ > 2.

1 Introduction

In 1976, Varchenko proved what is now a very well known result quantifying decay of scalar oscillatory integrals

I(λ) = ∫_{R^d} e^{iλφ(x)} ψ(x) dx

without assumptions on a uniform lower bound on any derivatives of the phase, nor on the Hessian [15]. In his revolutionary paper, Varchenko used techniques from complex analysis and algebraic geometry to show that under certain analytic conditions on a real-valued phase φ, for any smooth amplitude ψ supported in a sufficiently small neighborhood of the origin, there is a positive constant C independent of λ such that

|I(λ)| ≤ C λ^{−1/t} log^{d−1−k}(λ)   (1)

as λ → ∞, where t > 0 and 0 ≤ k ≤ d − 1 are read off from the Newton polyhedron of φ, and the exponent of λ is sharp (over all ψ) if t > 1. The importance of Varchenko's discovery inspired new proofs of these estimates, e.g., Greenblatt [5] via resolution of singularities, and Kamimoto-Nose [9], with both papers including some generalizations. The first aim of this paper is to provide a proof of (1) using only integration by parts, linear programming, and the linear algebraic structure of R^d. Sharpness of the bounds will not be proved. The second is to develop an asymptotic expansion for I(λ) for large λ. One can find, for example in Malgrange [12], that as λ → ∞,

I(λ) ∼ Σ_p Σ_{k=0}^{d−1} a_{p,k}(ψ) λ^{−p} log^{d−1−k}(λ),

where p runs through finitely many arithmetic progressions, independent of ψ, constructed from positive rational numbers. We will compute the powers explicitly from the Newton polyhedron of φ without making use of previously known results about the expansion: there is a simple geometric description of the arithmetic progressions. We will even be able to be more precise about which exponents of log appear, and show this holds for λ > 2. Since we do not have sharp estimates for the case t ≤ 1, we cannot be precise about when the coefficients of the expansion are nonzero in general.
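Before turning to the general theory, the model case d = 1, φ(x) = x² (Newton distance t = 2) already illustrates the λ^{−1/t} decay in estimates like (1). The following Python sketch is our own illustration, not part of the paper; a Gaussian amplitude is chosen so that a closed form is available for comparison.

```python
import cmath, math

# Model case of (1) in d = 1 with phi(x) = x^2 (Newton distance t = 2) and
# amplitude psi(x) = exp(-x^2), chosen so that the closed form
# I(lam) = sqrt(pi/(1 - i*lam)) is available for comparison.
def I_numeric(lam, a=8.0, n=400_000):
    """Midpoint-rule approximation of int e^{i*lam*x^2} e^{-x^2} dx over [-a, a]."""
    dx = 2 * a / n
    total = 0j
    for k in range(n):
        x = -a + (k + 0.5) * dx
        total += cmath.exp((1j * lam - 1) * x * x) * dx
    return total

lam = 50.0
exact = cmath.sqrt(math.pi / (1 - 1j * lam))
approx = I_numeric(lam)
print(abs(approx), abs(exact))  # both ~ sqrt(pi) * lam^{-1/2}
```

The grid is fine enough that the midpoint rule resolves the oscillation at this λ; for much larger λ one would have to refine n accordingly.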

Most results in this paper rely on examining the Newton polyhedron of a real analytic function. Over the past few decades, this combinatorial object has been an important tool related to oscillatory integrals. For example, in 2001, Phong, Stein, and Sturm [13] used the Newton polyhedron to find decay rates of oscillatory integral operators with polynomial phases and interpreted their multilinear operators as the analytic notions corresponding to the geometric notion of Newton distance.

In addition to [13], there have been many inspirational results in the study of van der Corput's lemma and multilinear operators in higher dimensions, e.g., by Carbery, Christ and Wright in [2], Gressman's geometric perspective of these two papers in [7], by Carbery-Wright in [3], assuming only a smooth phase in R^2 by Rychkov in [14], a simple proof assuming convex functions and domains by Carbery in [1], and many more. There are even results still being discovered in one dimension, e.g., by Do-Gressman in [4].

2 Terminology

Without loss of generality, we work towards estimating

∫_{R^d} e^{iλφ(x)} ψ(x) χ_{R^d_≥}(x) dx,   (2)

since estimating each orthant above is a symmetric problem for phases satisfying Varchenko's condition. The estimate of I(λ) can be deduced by summing the estimates over all 2^d orthants. Also by symmetry, we assume that φ(x) = Σ_α c_α x^α has uniformly and absolutely convergent power series in [0, 4]^d, although 4 is for convenience: it can be replaced by any positive real number we wish. Without loss of generality we assume that φ(0) = 0, or else we could factor out e^{iλφ(0)}. We also assume φ is not identically zero in [0, 4]^d.

To estimate (2), we use a partition of unity and reduce to estimating

I(λ, ε) = ∫_{[ε,4ε]} e^{iλφ(x)} ψ(x) η_ε(x) dx

where ε = (ε_1, . . . , ε_d) ∈ (0, 1)^d is small enough, [ε, 4ε] is the box ∏_{j=1}^d [ε_j, 4ε_j], η_ε is smooth with support in [ε, 4ε], ψ is our smooth amplitude, and φ satisfies Varchenko's condition. We choose {[ε, 4ε]} to be a set of dyadic boxes and decompose our amplitude into a sum of amplitudes supported in [ε, 4ε] to prove the final estimate. In order to discuss the main results, we require some more terminology and notation.

2.1 Basic notation

We write N for the set of nonnegative integers and R_≥ for the set of nonnegative reals. The convention for elements x ∈ R^d is x = (x_1, . . . , x_d), i.e., x_i is the ith component of x. Next, some algebraic conventions are introduced. In addition to the standard notation, for y ∈ R^d_≥ and α ∈ N^d, that

∂^α = ∂^{α_1}/∂x_1^{α_1} · · · ∂^{α_d}/∂x_d^{α_d},

y^α = y_1^{α_1} · · · y_d^{α_d} and |y| = y_1 + · · · + y_d, we make use of some less standard notation for c ∈ R and y, z ∈ R^d_≥:

• yz = (y_1 z_1, . . . , y_d z_d);

• if c > 0, denote the vector (c^{y_1}, . . . , c^{y_d}) by c^y;

• [y, 4y] is defined to be the box ∏_{j=1}^d [y_j, 4y_j];

• if the components of y are positive, f_y(x) = f(y_1^{−1} x_1, . . . , y_d^{−1} x_d);

• c is the vector (c, . . . , c).

In particular, note that (c^y)^z = c^{⟨y,z⟩} and (c^y x)^z = c^{⟨y,z⟩} x^z. Lastly, we write

f(x) ≲ g(x)

for positive real-valued functions f and g to express that there is a positive constant C independent of x such that f(x) ≤ C g(x) wherever this expression makes sense.

2.2 Newton polyhedron

Now we move on to some definitions involving the Newton polyhedron, a key object of study in the upcoming sections. Note: one can find the facts we assume about polyhedra in, e.g., Grünbaum [8].

We assume φ : R^d → R is analytic around the origin throughout.


Definition 1 (Newton polyhedron). Denote the set of indices of the nonzero coefficients in the expansion φ(x) = Σ_α c_α x^α by

supp+(φ) = {α ∈ N^d : c_α ≠ 0}.

We define the Newton polyhedron of φ to be the convex hull of the union

∪_{α ∈ supp+(φ)} (α + R^d_≥),

and we denote the Newton polyhedron of φ by N+(φ).

The Newton polyhedron is a slightly bulkier object than we require: most of the time we will only refer to the compact faces of the Newton polyhedron, so we define the Newton diagram of φ as the union of all compact faces of N+(φ) and denote it by N(φ); furthermore, we define the finite set supp(φ) = N(φ) ∩ supp+(φ). We remind the reader that a subset F of a polyhedron P is a face if there is some supporting hyperplane H of P satisfying H ∩ P = F. Moreover, we say F is a face of N(φ) if it is a face of N+(φ), and say H is a supporting hyperplane of N(φ) if H is a supporting hyperplane of N+(φ) and if H ∩ N+(φ) is compact.

Definition 2 (Newton distance). The Newton distance of φ is defined by the infimum

t = inf{s ∈ (0, ∞) : (s, . . . , s) ∈ N+(φ)}.

One can check t ≥ 1/d if φ(0) = 0.
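For a concrete phase the Newton distance can be computed by a small linear program over the support. The following sketch is our own two-dimensional illustration, not part of the paper; it finds the least s with (s, s) ∈ N+(φ) exactly, using the fact that in the plane the optimum sits over a support point or over a point on a segment between two support points with equal coordinates.

```python
from fractions import Fraction as F
from itertools import combinations

def newton_distance(supp):
    """Newton distance t of a 2D phase with support supp (list of exponent
    pairs): the least s with (s, s) in conv(supp) + R^2_{>=0}.
    Illustrative 2D-only sketch; exact arithmetic via Fraction."""
    pts = [(F(a), F(b)) for a, b in supp]
    best = min(max(p) for p in pts)           # (s, s) dominating a vertex
    for (a1, a2), (b1, b2) in combinations(pts, 2):
        da, db = a1 - a2, b1 - b2
        if da == db:
            continue
        lam = db / (db - da)                  # point on segment with equal coords
        if 0 <= lam <= 1:
            best = min(best, lam * a1 + (1 - lam) * b1)
    return best

# phi(x, y) = x^2*y^2 + x^4 + y^4: the diagonal meets N(phi) at (2, 2), t = 2.
print(newton_distance([(2, 2), (4, 0), (0, 4)]))  # -> 2
# phi(x, y) = x^2 + y^4: the face has normal w = (1/2, 1/4), t = 4/3.
print(newton_distance([(2, 0), (0, 4)]))          # -> 4/3
```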

Definition 3 (The polynomials φ_F). For any face F ⊂ N(φ), denote by φ_F the polynomial

φ_F(x) = Σ_{α ∈ F ∩ supp(φ)} c_α x^α.

We can finally define the analytic condition we impose on our phase, originally used by Varchenko [15] to prove (1):

Definition 4 (Varchenko's condition). We say that φ satisfies Varchenko's condition if for all faces F ⊂ N(φ) the polynomials φ_F satisfy

‖x∇φ_F(x)‖ ≠ 0

for all x such that x_1 · · · x_d ≠ 0, where ‖ · ‖ is the ℓ^∞(R^d) norm (of the vector x∇φ_F(x) for fixed x). We say a scalar oscillatory integral ∫_{R^d} e^{iλφ(x)} ψ(x) dx satisfies Varchenko's condition if φ does.

Varchenko's condition is equivalent to the property that for all x away from the coordinate hyperplanes and all F ⊂ N(φ) there is some component of ∇φ_F(x) that is nonzero; the phrasing used in the definition is preferred because working with x∇φ_F(x) (x∇φ(x)) is easier than working with ∇φ_F(x) (∇φ(x)), since the Newton polyhedron of each component of x∇φ(x) is contained in the Newton polyhedron of φ(x). We see later on why this is important.
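As a quick illustration (our own sketch, not part of the paper), one can probe the condition for a single face polynomial by sampling ‖x∇φ_F(x)‖_∞ on a box away from the coordinate hyperplanes: a monomial face like x²y² passes, while (x − y)², whose gradient vanishes on the line x = y, fails.

```python
# Brute-force look at Varchenko's condition for a single face polynomial:
# sample ||x grad phi_F(x)||_inf over a grid in [1,4]^2.  Our illustration
# only; a genuine verification must treat every face F and all sign choices.
def min_x_grad(phi_x, phi_y, n=200):
    """Minimum over a grid in [1,4]^2 of max(|x*phi_x|, |y*phi_y|)."""
    m = float("inf")
    for i in range(n):
        for j in range(n):
            x = 1 + 3 * i / (n - 1)
            y = 1 + 3 * j / (n - 1)
            m = min(m, max(abs(x * phi_x(x, y)), abs(y * phi_y(x, y))))
    return m

# phi_F(x,y) = x^2*y^2: x * dphi/dx = 2x^2y^2 never vanishes off the axes.
ok = min_x_grad(lambda x, y: 2 * x * y ** 2, lambda x, y: 2 * x ** 2 * y)
# phi_F(x,y) = (x-y)^2: the gradient vanishes on the whole line x = y.
bad = min_x_grad(lambda x, y: 2 * (x - y), lambda x, y: -2 * (x - y))
print(ok, bad)  # -> 2.0 0.0
```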


3 Main results

A crucial tool used to prove the main results is quantifying how ∇φ behaves near the origin:

Lemma 1. Assume φ satisfies Varchenko's condition. For all ε ∈ (0, 1)^d small enough, for all x in the box [ε, 4ε], and for all α ∈ N+(φ), we have the lower bound

‖x∇φ(x)‖ ≳ ε^α,

where the implicit constant is independent of ε.

So we see Varchenko's condition implies a very useful growth rate for ∇φ around the origin. "Small enough" will be made explicit in (11), as we don't yet have all of the necessary information to state it here. With this lemma we are able to prove the most useful result in the paper:

Lemma 2. Assume φ satisfies Varchenko's condition. Let ψ : R^d → R be smooth and supported in [1, 4]^d. For all ε ∈ (0, 1)^d small enough, we have the estimate

| ∫_{R^d} e^{iλφ(x)} ψ_ε(x) dx | ≲ λ^{−N} ε^{−(Nα−1)}   (3)

for all λ > 0, all N ∈ N, and all α ∈ N+(φ), where the implicit constant above is independent of ε and λ.

Using induction and techniques from the proof of lemma 2, we are able to prove theorem 1. Next, to introduce theorem 1 we need to briefly discuss another convention. From now on, when we talk about supporting hyperplanes H of N+(φ) we mean only those not containing the origin, and we use a normalization convention for normal vectors to H: we pick the unique vector w ∈ R^d_≥ satisfying H = {ξ : ⟨ξ, w⟩ = 1}. Any supporting hyperplane H of N+(φ) (not containing the origin) can be defined this way, and we write H_w for such H, namely H_w = {ξ : ⟨ξ, w⟩ = 1}. It is a simple exercise in linear algebra to show such normals exist and have rational components, along with other properties; we discuss these facts in more detail later on. We say w is a normal of the face F of N+(φ) if H_w ∩ N+(φ) = F, and say w is a normal of the face F of N(φ) if F is a compact face of N+(φ) with normal w. Note that we can say the normal of F if F is codimension 1. We also say w is a normal of N(φ) if w is a normal of any F ⊂ N(φ). It is important to remember that we only refer to normals w of supporting hyperplanes of N+(φ) not containing the origin. Such normals are guaranteed to exist since φ(0) = 0 implies N+(φ) doesn't contain the origin.

We use the convention of writing w(β) for the set {w : ⟨β, w⟩ is minimal}, where the minimum is taken over all (finitely many) normals w of codimension 1 faces F of N+(φ). We write ⟨β, w(β)⟩ for the scalar min_w ⟨β, w⟩. This convention will be used mainly in section 7.


Theorem 1. Assume φ satisfies Varchenko's condition. If ψ : R^d → R is smooth and supported close enough to the origin, then for λ > 2,

∫_{R^d_≥} e^{iλφ(x)} ψ(x) dx ∼ Σ_{j=0}^∞ Σ_{k=0}^{d_j−1} a_{j,k}(ψ) λ^{−p_j} log^{d_j−1−k}(λ),   (4)

where p_0 < p_1 < · · · is the ordering of the set {⟨α + 1, w(α + 1)⟩}_{α ∈ N^d} and 1 ≤ d_j ≤ d is the greatest codimension over all faces intersecting the lines

{s · (α + 1) : s ∈ R, α ∈ N^d, ⟨α + 1, w(α + 1)⟩ = p_j}.

If we rewrite the sum (4) as Σ_{ℓ=0}^∞ a_ℓ F_ℓ(λ), where a_n ∈ {a_{j,k}(ψ)}_{j,k} and F_{n+1}(λ) ≲ F_n(λ) for all n ∈ N, the asymptotic expansion (4) holds in the sense that there is an implicit constant independent of λ such that

| ∫_{R^d_≥} e^{iλφ(x)} ψ(x) dx − Σ_{ℓ=0}^N a_ℓ F_ℓ(λ) | ≲ F_{N+1}(λ).

Some points of theorem 1 require clarification. First, we show that {⟨α + 1, w(α + 1)⟩}_{α ∈ N^d} runs through finitely many arithmetic progressions of positive rationals (and therefore can be ordered as claimed in the theorem). Each normal w of a codimension 1 face F of N+(φ) can be uniquely defined by d linearly independent vectors α^i in supp+(φ) ∩ F. If A is the matrix with rows α^i, then w must satisfy Aw = 1, by definition. Hence, w = A^{−1}1. The matrix A^{−1} must have rational entries, since A has rational entries, therefore w ∈ Q^d. In fact, each component of w must be nonnegative because w is oriented towards the interior of the Newton polyhedron, so in particular w ∈ R^d_≥. The Newton polyhedron has finitely many codimension 1 faces, so there are finitely many such w and therefore finitely many rational components w_j. The arithmetic progressions come from these components.
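The computation w = A^{−1}1 is easy to carry out exactly. The sketch below is our own illustration (not the paper's), using rational arithmetic; it recovers the normal of the codimension 1 face of N+(x² + y⁴), a running toy example.

```python
from fractions import Fraction as F

def face_normal(alphas):
    """Solve A w = (1,...,1) for the normal w of a codimension-1 face whose
    affine hull contains the d linearly independent points `alphas`.
    Plain Gauss-Jordan elimination over the rationals: w = A^{-1} 1."""
    d = len(alphas)
    M = [[F(x) for x in row] + [F(1)] for row in alphas]  # augmented [A | 1]
    for c in range(d):
        p = next(r for r in range(c, d) if M[r][c] != 0)  # partial pivoting
        M[c], M[p] = M[p], M[c]
        M[c] = [v / M[c][c] for v in M[c]]
        for r in range(d):
            if r != c and M[r][c] != 0:
                M[r] = [a - M[r][c] * b for a, b in zip(M[r], M[c])]
    return [M[r][d] for r in range(d)]

# Face of N+(x^2 + y^4) through (2,0) and (0,4): w = (1/2, 1/4), so the
# hyperplane is {xi : xi_1/2 + xi_2/4 = 1} and 1/t = <1, w> = 3/4.
w = face_normal([(2, 0), (0, 4)])
print(w)  # -> [Fraction(1, 2), Fraction(1, 4)]
```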

Varchenko showed that the first term of the expansion (4) with nonzero coefficient is λ^{−1/t} log^{d−1−k}(λ), where k is the smallest dimension over all faces containing t (d − k the largest codimension), assuming the Newton distance t of N+(φ) satisfies t > 1. It is easy to see for any positive scalar c that w(c(α + 1)) = w(α + 1). Therefore w(1) = w(t) and since ⟨t, w(t)⟩ = 1, we conclude ⟨1, w(1)⟩ = 1/t. Clearly p_0 = ⟨1, w(1)⟩ = 1/t, because over all w with nonnegative components and all α ∈ N^d, ⟨α + 1, w⟩ ≥ ⟨1, w⟩, which is bounded below by 1/t for all normals w of N+(φ): ⟨t, w⟩ ≥ 1 implies ⟨1, w⟩ ≥ 1/t.

Note: there is an easy geometric way to describe the exponents in theorem 1. First, write ⟨α + 1, w⟩ = c⟨(α + 1)/c, w⟩. If we take c to be such that (α + 1)/c ∈ ∂N+(φ), then ⟨(α + 1)/c, w⟩ = 1 for some normal w of N+(φ) (of a codimension 1 face whose affine hull does not contain the origin), so w ∈ w(α + 1). Hence, the set of powers {−p_j : j ∈ N} of λ is equal to the set {−c ∈ Q : (α + 1)/c ∈ ∂N+(φ) for some α ∈ N^d}. The power of log multiplying λ^{−c} is equal to the largest codimension over all faces containing (α + 1)/c. Varchenko's estimate is the case α = 0 (c = 1/t).
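Continuing the toy example (our illustration, not the paper's): for φ = x² + y⁴ the single normal w = (1/2, 1/4) generates the exponents p_j directly from the recipe above.

```python
from fractions import Fraction as F

def exponents(normals, alpha_max=4):
    """Candidate exponents p_j = <alpha + 1, w(alpha + 1)> of theorem 1,
    enumerated over alpha in {0,...,alpha_max}^2 given the normals of the
    codimension 1 faces (2D sketch)."""
    ps = set()
    for a1 in range(alpha_max + 1):
        for a2 in range(alpha_max + 1):
            ps.add(min((a1 + 1) * w1 + (a2 + 1) * w2 for w1, w2 in normals))
    return sorted(ps)

# phi = x^2 + y^4 has the single normal w = (1/2, 1/4); the powers of lambda
# begin at p_0 = 1/t = 3/4 and run through {3/4 + m/2 + n/4 : m, n in N}.
ps = exponents([(F(1, 2), F(1, 4))])
print([str(p) for p in ps[:4]])  # -> ['3/4', '1', '5/4', '3/2']
```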


4 Proof of lemma 1

4.1 Motivation

To prove lemma 2, we integrate the left side of (3) by parts N times. Lemma 1 will be used for this purpose, although it is an interesting result in its own right, reminiscent of Łojasiewicz's theorem 17 [11] (an English version can be found in [10]). Lemma 1 has much stronger assumptions, but gives a much stronger result. Greenblatt also proved a very nice version of this lemma in [6] (lemma 3.6, under an assumption on the order of the zero, but not on the zero locus).

From now on, x always lies in [1, 4]^d and we scale by ε when talking about elements outside the box [1, 4]^d. To integrate by parts, we use Varchenko's condition to prove a lower bound on ‖y∇φ(y)‖ for y close to the origin. We now illustrate the ideas used to approach this problem. The jth component of y∇φ(y) is equal to

Σ_α c_α α_j y^α.   (5)

For each F ⊂ N(φ), we can write (5) as

Σ_{α ∈ F} c_α α_j y^α + Σ_{α ∉ F} c_α α_j y^α.   (6)

The first sum in (6) equals the jth component of y∇φ_F(y), which is a nonzero vector by Varchenko's condition. The goal is to show that for all y small enough there are F and 1 ≤ j ≤ d for which the second sum is very small, so that the significant contribution comes from the polynomial we know. If for all y = εx ∈ [ε, 4ε] we can find F ⊂ N(φ) such that we only have to worry about the first sum, and if we can scale so that ε^α = S for all α ∈ F, we will be able to conclude that for some 1 ≤ j ≤ d, (5) is bounded below by a uniform constant times

|Σ_{α ∈ F} c_α α_j y^α| = |Σ_{α ∈ F} c_α α_j x^α ε^α| = |S Σ_{α ∈ F} c_α α_j x^α| ≳ S = ε^α

for all α ∈ F. The compact face F is chosen so that the terms ε^α contribute most when α ∈ F, so we conclude (5) is bounded below by ε^α for all α ∈ N+(φ). The difficulty is in showing the second sum of (6) is negligible for appropriate F. We will recursively define finitely many boxes [b, 4b^{−1}]^d, where 0 < b < 1, on which to apply Varchenko's condition, because the second sum of (6) is not always negligible if we naively try to use the logic presented above. We might need to move to larger faces F_{d−1} ⊇ · · · ⊇ F_0 = F by moving relatively large summands of the second sum of (6) to the first, checking whether all the summands remaining in the second sum are negligible, and applying Varchenko's condition on larger and larger boxes depending only on φ. This is the content of the main proposition below: proposition 3.


4.2 Supporting hyperplanes of N+(φ) and scaling

The following proposition is used to define some constants necessary for applying Varchenko's condition to (5). It basically says that we can move from one hyperplane not containing all vectors from some set to a new hyperplane that does contain them, and such that some scaling holds with respect to the new hyperplane. The other conditions in the statement have to do with the specific case in which we apply this proposition, but really we are just proving a basic statement in linear algebra together with some bound on a vector.

Proposition 1. Let 0 < S, C < 1. Let v^1, . . . , v^m ∈ R^d be linearly independent points satisfying ⟨v^i, w⟩ ≥ 1 for all 1 ≤ i ≤ m ≤ d. Let η_i = ⟨v^i, w⟩ − 1 ≥ 0 and assume that 1 ≥ S^{η_i} ≥ C for all i. Let x ∈ [1, 4]^d. There is some b(v^1, · · · , v^m, C) ∈ (0, 1) and some y ∈ [b, 4b^{−1}]^d satisfying the equalities

y^{v^i} = S^{η_i} x^{v^i}.   (7)

Furthermore, there is some d-tuple σ ∈ R^d_≥ satisfying the bound ‖σ‖_∞ ≤ ρ(v^1, . . . , v^m)‖η‖_1 such that y = S^σ x, so C^{dρ} ≤ S^{σ_i} x_i ≤ 4C^{−dρ}, and therefore we can take b = C^{dρ}.

Proof. Let V be the m × d matrix with rows v^1, . . . , v^m. Let σ_i ∈ R for 1 ≤ i ≤ m be indeterminate. Without loss of generality, assume that the first m columns of V are linearly independent and define the d-tuples σ = (σ_1, . . . , σ_m, 0, . . . , 0) and η = (η_1, . . . , η_m, 0, . . . , 0). Solving the equation Vσ = η can be reduced to solving V̄σ̄ = η̄, where σ̄ = (σ_1, . . . , σ_m), η̄ = (η_1, . . . , η_m) and V̄ = {v_{ij}}_{1≤i,j≤m}. Since V̄ has full rank, we can solve σ̄ = V̄^{−1}η̄. Denoting ‖V̄^{−1}‖_∞ = ρ, we bound

‖σ‖_∞ ≤ ‖V̄^{−1}‖_∞ ‖η‖_1 = ρ‖η‖_1.

Since the η_i are nonnegative,

−ρ(η_1 + · · · + η_m) ≤ σ_i ≤ ρ(η_1 + · · · + η_m).

We can use these bounds to estimate each S^{σ_i} x_i and find precisely which bigger box we are looking for. We use the inequalities C ≤ S^{η_i} ≤ 1 to bound

C^{dρ} ≤ (S^{η_1} · · · S^{η_m})^ρ ≤ 1 ≤ (S^{η_1} · · · S^{η_m})^{−ρ} ≤ C^{−dρ}.

Therefore

C^{dρ} ≤ S^{σ_i} x_i ≤ 4C^{−dρ}.

Hence, letting b = C^{dρ} ∈ (0, 1), we see that y ∈ [b, 4b^{−1}]^d defined by

y = S^σ x

satisfies the system of equations (7) because

y^{v^i} = (S^σ x)^{v^i} = S^{⟨v^i, σ⟩} x^{v^i} = S^{η_i} x^{v^i}.
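A numerical sanity check of the scaling (7), with made-up sample data v^i, w, S, x rather than constants from the paper (our sketch, in d = m = 2):

```python
# Check proposition 1 numerically in d = m = 2: solve V sigma = eta and verify
# y = S^sigma x satisfies y^{v^i} = S^{eta_i} x^{v^i}.  Sample data only.
v = [(2.0, 1.0), (1.0, 3.0)]
w = (0.4, 0.3)                                   # <v1,w> = 1.1, <v2,w> = 1.3
eta = [sum(vi[k] * w[k] for k in range(2)) - 1 for vi in v]
S, x = 0.2, (1.5, 3.0)

det = v[0][0] * v[1][1] - v[0][1] * v[1][0]      # Cramer's rule for V sigma = eta
sigma = ((v[1][1] * eta[0] - v[0][1] * eta[1]) / det,
         (v[0][0] * eta[1] - v[1][0] * eta[0]) / det)
y = tuple(S ** sigma[k] * x[k] for k in range(2))

checks = []
for i in range(2):
    lhs = y[0] ** v[i][0] * y[1] ** v[i][1]      # y^{v^i}
    rhs = S ** eta[i] * x[0] ** v[i][0] * x[1] ** v[i][1]
    checks.append((lhs, rhs))
print(checks)  # the pairs agree up to rounding
```

The identity holds exactly in real arithmetic because ⟨v^i, σ⟩ = η_i by construction; the only discrepancy is floating-point rounding.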

Since the sums over the face in (6) run over all lattice points v ∈ supp(φ) ∩ F, we cannot only consider the linearly independent ones as the proposition suggests. Therefore, another proposition is required to make sure the scaling works over all points in the face we are considering.

Proposition 2. Let x ∈ [1, 4]^d, η_1, . . . , η_m ∈ R, and S > 0. Assume η is the linear combination η = Σ_{i=1}^m λ_i η_i. If v^i ∈ R^d satisfy y^{v^i} = S^{η_i} x^{v^i} for all 1 ≤ i ≤ m, then y^v = S^η x^v, where v = Σ_{i=1}^m λ_i v^i.

Proof. This is simply because

y^v = ∏_{i=1}^m y^{λ_i v^i} = ∏_{i=1}^m (S^{η_i} x^{v^i})^{λ_i} = ∏_{i=1}^m S^{λ_i η_i} x^{λ_i v^i} = S^η x^v.

4.3 More notation and the main proposition

Motivated by proposition 1, we define constants required to talk about scaling over faces F ⊂ N(φ) in order to apply Varchenko's condition.

For any codimension 1 face F of N(φ) and linearly independent v^1, · · · , v^m ∈ supp(φ) ∩ F, define V to be the m × d matrix with rows v^i, and for each V pick V̄, a full rank m × m matrix defined by taking m independent columns of V. Define the constant

ρ = max_{V̄} ‖V̄^{−1}‖_∞ ∈ (0, ∞),

where the maximum is taken over all finitely many V̄ (supp(φ) is finite).

Since φ has absolutely and uniformly convergent Taylor series in [0, 4]^d, there is a nonzero a ∈ R such that

a = 2 max_{1≤i≤d} Σ_{α ∈ N^d} α_i |c_α| 4^α.

We define the constant C_1:

C_1 = min_{F ⊂ N(φ)} inf_{x ∈ [1,4]^d} ‖x∇φ_F(x)‖_{ℓ^∞(R^d)}.

By Varchenko's condition, over each compact face F, the infimum over [1, 4]^d defines some positive constant. Since there are finitely many compact faces, C_1 must exist and is nonzero. We make the observation that C_1 < a because

C_1 ≤ max_{1≤i≤d} Σ_{α ∈ F} α_i |c_α| 4^α ≤ a/2.
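For a concrete polynomial phase these constants are computable. The sketch below is our own illustration: it evaluates a exactly and estimates C_1 by grid search for φ(x, y) = x²y² + x⁴ + y⁴, sampling only the full Newton diagram F = N(φ) for brevity.

```python
# Estimate the constants a and C1 for phi(x, y) = x^2*y^2 + x^4 + y^4, whose
# support is {(2,2), (4,0), (0,4)} with all c_alpha = 1.  Our illustration:
# only the face F = N(phi) is sampled for C1, and the grid minimum is exact
# here because both components of x*grad(phi) are increasing on [1,4]^2.
supp = [(2, 2), (4, 0), (0, 4)]

# a = 2 max_i sum_alpha alpha_i |c_alpha| 4^alpha (a finite sum for polynomials)
a = 2 * max(sum(al[i] * 4 ** (al[0] + al[1]) for al in supp) for i in range(2))

def c1_estimate(n=120):
    m = float("inf")
    for i in range(n):
        for j in range(n):
            x, y = 1 + 3 * i / (n - 1), 1 + 3 * j / (n - 1)
            gx = x * (2 * x * y ** 2 + 4 * x ** 3)   # x * d(phi)/dx
            gy = y * (2 * x ** 2 * y + 4 * y ** 3)   # y * d(phi)/dy
            m = min(m, max(abs(gx), abs(gy)))
    return m

C1 = c1_estimate()
print(a, C1)  # -> 3072 6.0, and indeed C1 < a as observed in the text
```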


Now for 1 ≤ i ≤ d − 1, recursively define the constants

b_{i+1} = (C_i/a)^{dρ},   (8)

C′_{i+1} = min_{F ⊂ N(φ)} inf_{x ∈ [b_{i+1}, 4b_{i+1}^{−1}]^d} ‖x∇φ_F(x)‖_{ℓ^∞(R^d)},

and finally,

C_{i+1} = min{C′_{i+1}, C_i/a}.   (9)

Using the convention b_1 = 1, it is easy to see that C_1 > C_2 > · · · > C_d > 0 and therefore b_1 > b_2 > · · · > b_d > 0.

We define one last constant used in the proof of the main proposition. Let

δ = inf_{α^1, α^2, w} ⟨(α^1 + α^2)/2, w⟩ − 1,

where the infimum is taken over all α^1, α^2 ∈ supp+(φ) not contained in the same codimension 1 face, and all normals w of N+(φ) (corresponding to hyperplanes H_w). In the case where there exist such α^1, α^2, we claim that δ > 0. First notice all α ∈ N+(φ) and all such w satisfy ⟨α, w⟩ ≥ 1. Since N+(φ) is convex, α^1 and α^2 must lie on some nontrivial line segment contained in N+(φ). If δ = 0 then ⟨α^1, w⟩ = ⟨α^2, w⟩ = 1 for some w (we can reduce the infimum to be taken over boundedly many α), so any convex combination of α^1 and α^2 satisfies ⟨λ_1 α^1 + λ_2 α^2, w⟩ = 1. This implies α^1, α^2 lie on the same codimension 1 face. The fact that δ > 0 is intuitive because the average of α^1, α^2 not lying on the same codimension 1 face lies in the interior of N+(φ), and we expect all points ξ in the interior to satisfy ⟨ξ, w⟩ > 1. If all elements of supp+(φ) are contained in the same codimension 1 face, we use the convention δ = 1.

Now we are ready to set up the main proposition required to estimate y∇φ(y).

Proposition 3. Let φ have absolutely and uniformly convergent power series in [0, 4]^d satisfying Varchenko's condition. Let x ∈ [1, 4]^d. Fix a face F_0 ⊂ N(φ) and a corresponding normal w. Define the constants b_i, C_i as in (8) and (9) for 1 ≤ i ≤ d. Let S ∈ (0, (C_d/a)^{1/(2δ)}). There is a vector σ ∈ R^d_≥, a compact face F′ ⊇ F_0, and 0 ≤ j′ ≤ d − 1 such that

(i) for all v ∈ F′ we have the scaling

S^{⟨v,w⟩−1} x^v = (S^σ x)^v, where S^σ x ∈ [b_{j′+1}, 4b_{j′+1}^{−1}]^d, and

(ii) for all u ∈ supp(φ) − F′ we have the upper bound

S^{⟨u,w⟩−1} ≤ C_{j′+1}/a.


Proof. First, if every u ∉ F_0 satisfies S^{⟨u,w⟩−1} ≤ C_1/a, we are done with (ii) by letting σ = 0 and j′ = 0; (i) is also easy to see since ⟨v, w⟩ = 1 for all v ∈ F_0. Otherwise, for 1 ≤ j ≤ d − 1 define Λ_j = {u ∈ supp(φ) : S^{⟨u,w⟩−1} > C_j/a}. Let us first show that each Λ_j is contained in N(φ). If some u ∈ Λ_j is not in N(φ), then there is some normal w̄ to N+(φ) such that u ∉ H_{w̄} ⊃ F ∩ supp(φ). Therefore, for any v ∈ F,

⟨u, w̄⟩ − 1 = ⟨u + v, w̄⟩ − 2 > 2δ.

Since S^{2δ} < C_d/a, the vector u cannot lie in any Λ_j. This implies Λ_j is contained in N(φ). By a similar computation, one sees that Λ_j must actually lie in some codimension 1 face. Because Λ_j is contained in a codimension 1 face of N(φ), the affine hull of Λ_j must contain some face F_j ⊃ F of N(φ) of maximal dimension, which implies there is an affine basis {v^1, . . . , v^{dim(F_j)+1}} ⊂ F_j ∩ Λ_j for the affine hull of Λ_j. By proposition 1, we know there is some d-tuple σ^j such that for all 1 ≤ i ≤ dim(F_j) + 1 we have the equalities

S^{⟨v^i,w⟩−1} x^{v^i} = (S^{σ^j} x)^{v^i},

where S^{σ^j} x ∈ [b_j, 4b_j^{−1}]^d. By the definition of b_j, we can apply proposition 1, since S^{⟨v^i,w⟩−1} > C_j/a. Proposition 2 tells us that for all v ∈ F_j we have

S^{⟨v,w⟩−1} x^v = (S^{σ^j} x)^v.

Since this holds for all 0 ≤ j ≤ d − 1, we are left with claim (ii). Notice that dim(F_0), dim(F_1), . . . is an increasing list of natural numbers strictly bounded above by d, and in particular dim(F_j) ≥ j. That means there is some 0 ≤ j ≤ d − 1 such that F_j = F_{j+1}, as there can be no d-dimensional face, so let j′ = min{1 ≤ j ≤ d − 1 : F_j = F_{j+1}}. Letting F′ = F_{j′} and σ = σ^{j′} completes the proof, as (i) was shown for all j.

With proposition 3 we can finish the proof of lemma 1. It turns out the scaling we want to consider over the face F = H_w ∩ N(φ) is S^w. The way we think about this is that y = S^w x for some S > 0 and some w normal to N(φ) with H_w ∩ N(φ) = F. We will prove shortly that every y ∈ (0, (C_d/a)^{d/(2δ)})^d can be written this way. We now return to the sums (6). For all 1 ≤ i ≤ d we can write the ith component of y∇φ(y) for y = S^w x as

Σ_{α ∈ F_{j′}} α_i c_α (S^w x)^α + Σ_{α ∉ F_{j′}} α_i c_α (S^w x)^α

= Σ_{α ∈ F_{j′}} α_i c_α S^{⟨α,w⟩} x^α + Σ_{α ∉ F_{j′}} α_i c_α S^{⟨α,w⟩} x^α

= S ( Σ_{α ∈ F_{j′}} α_i c_α (S^{σ^{j′}} x)^α + Σ_{α ∉ F_{j′}} α_i c_α S^{⟨α,w⟩−1} x^α ),   (10)

where the last equality uses propositions 1 and 2.


Applying the triangle inequality, the bounds on S^{⟨α,w⟩−1} guaranteed by proposition 3, and Varchenko's condition on the box [b_{j′}, 4b_{j′}^{−1}]^d, for some 1 ≤ i ≤ d we can bound (10) below by

S(C′_{j′} − C_{j′}/a · a/2) ≥ S(C_{j′}/2) ≳ S.

We now show that for all ε ∈ (0, (C_d/a)^{d/(2δ)})^d there is some S ∈ (0, (C_d/a)^{1/(2δ)}) and some supporting hyperplane H_w of N(φ) such that S^w = ε, and therefore S = S^{⟨α,w⟩} = (S^w)^α = ε^α for all α ∈ H_w, completing the proof of lemma 1.

First note that the d-tuple (1/d, . . . , 1/d) lies on or below N+(φ). Therefore, for all α ∈ N(φ) there is some 1 ≤ i ≤ d such that α_i ≥ 1/d. If H_w is a supporting hyperplane of N(φ) containing α, then

1 = ⟨α, w⟩ ≥ α_i w_i ≥ w_i/d,

since every component of α and w is nonnegative. Hence, for every supporting hyperplane H_w there is some 1 ≤ i ≤ d such that w_i ≤ d.

Next, let ε ∈ (0, (C_d/a)^{d/(2δ)})^d. Without loss of generality, assuming ε_1 is the largest component, we can solve the equations ε_1^{q_i} = ε_i with q_i ≥ 1. For all q ∈ R^d with positive components there is some supporting hyperplane H_w of N(φ) and positive constant c such that q = cw: we can just take a hyperplane not intersecting the first orthant with normal q, and translate so that it intersects only ∂N(φ). Since q_1 = 1 ≤ q_i, we see that w_1 ≤ w_i and therefore w_1 ≤ d. Hence, 1 = q_1 = cw_1 ≤ cd, so that c ≥ 1/d. Now we can solve for S in the required interval:

ε_i = ε_1^{q_i} = ε_1^{cw_i} = (ε_1^c)^{w_i},

so that S = ε_1^c ≤ (C_d/a)^{dc/(2δ)} ≤ (C_d/a)^{1/(2δ)}.

This finishes the proof of lemma 1, summarizing again that "ε small enough" means

ε ∈ (0, (C_d/a)^{d/(2δ)})^d.   (11)

5 Proof of lemma 2

5.1 Estimating an integration by parts operator

For ψ supported in [1, 4]^d, we want to integrate

I(λ, ε) = ∫_{[1,4]^d} e^{iλφ(εx)} ψ(x) dx

by parts. Let f(x) = ∇φ(x)/‖∇φ(x)‖^2. We define the operator D = D_{ε,φ} on smooth functions g : R^d → R by

D(g)(x) = ∇g(x) · f(εx)/(iλ).   (12)


We can check that e^{iλφ(εx)} is an eigenfunction of D, one of the main reasons D is considered. We estimate (D^t)^N(g)(x), where the adjoint D^t of D is given by the divergence

D^t(g)(x) = −∇ · (g(x) f(εx))/(iλ).   (13)
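In one dimension, with a phase free of critical points on the support, the duality between D and D^t is just integration by parts. The following numeric sketch is our own toy check (not from the paper), with φ(x) = x so that f ≡ 1 and ε = 1; it verifies ∫ e^{iλφ}ψ = ∫ e^{iλφ} D^t(ψ).

```python
import cmath, math

# 1D check of the integration-by-parts duality behind (12)-(13): toy phase
# phi(x) = x on [1,4] (so f = phi'/|phi'|^2 = 1) and amplitude
# psi(x) = sin^4(pi(x-1)/3), which vanishes to high order at the endpoints.
# Then int e^{i lam x} psi dx should equal int e^{i lam x} (-psi'/(i lam)) dx.
lam, n = 10.0, 200_000
dx = 3.0 / n

def psi(x):
    return math.sin(math.pi * (x - 1) / 3) ** 4

def dpsi(x):
    u = math.pi * (x - 1) / 3
    return 4 * math.sin(u) ** 3 * math.cos(u) * math.pi / 3

lhs = rhs = 0j
for k in range(n):
    x = 1 + (k + 0.5) * dx                 # midpoint rule on [1, 4]
    e = cmath.exp(1j * lam * x)
    lhs += e * psi(x) * dx
    rhs += e * (-dpsi(x) / (1j * lam)) * dx
print(abs(lhs - rhs))  # ~ 0: boundary terms vanish after one integration by parts
```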

To proceed in estimating (D^t)^N(g), consider the components f_k of f. If we could show ∂^β f_k(x) is a linear combination of terms of the form

∂^{γ_1}φ(x) · · · ∂^{γ_{2^n−1}}φ(x) / ‖∇φ(x)‖^{2^n} = (∂^{γ_1}φ(x)/‖∇φ(x)‖) · · · (∂^{γ_{2^n−1}}φ(x)/‖∇φ(x)‖) · ‖∇φ(x)‖^{−1},   (14)

where γ_i ∈ N^d, we could deduce an upper bound on f_k(εx), namely

|f_k(εx)| ≲ ε^{−α}   (15)

for any α ∈ N(φ), for ε small enough: (14) implies ∂^β f_k(εx) is a linear combination of products of

ε^{γ_i} ∂^{γ_i}φ(εx) ‖ε∇φ(εx)‖^{−1}   (16)

for 1 ≤ i ≤ 2^n − 1, times ‖ε∇φ(εx)‖^{−1}. We claim the first 2^n − 1 factors can be bounded above by a uniform constant, while the last factor, ‖ε∇φ(εx)‖^{−1}, we know is bounded above by ε^{−α} for any α ∈ N+(φ) by lemma 1. The claim is easy to verify, since each function x^{γ_i} ∂^{γ_i}φ(x) has uniformly and absolutely convergent power series wherever φ(x) does. Therefore, for any ε small enough, ε^{γ_i} ∂^{γ_i}φ(εx) = x^{−γ_i}(εx)^{γ_i} ∂^{γ_i}φ(εx) is bounded above by a uniform constant times ε^η for some η ∈ supp+(φ) (expand the power series and use the triangle inequality). Lemma 1 then guarantees the first 2^n − 1 factors of (14) evaluated at εx, namely the terms (16), are indeed bounded above by a constant independent of ε, since ‖ε∇φ(εx)‖^{−1} ≲ ‖εx∇φ(εx)‖^{−1} ≲ ε^{−α} for all α ∈ N+(φ), in particular α = η.

We proceed by examining some derivatives necessary to prove (14). Consider ‖∇φ(x)‖^{2^n}, a sum of products of 2 · 2^{n−1} = 2^n functions, each equal to some derivative of φ. Its partial derivative in the j direction is

∂_{e_j} ‖∇φ(x)‖^{2^n} = 2^n ‖∇φ(x)‖^{2^n−2} Σ_{i=1}^d φ′_{x_i}(x) φ′′_{x_i x_j}(x),

which is a sum of products of (2^n − 2) + 2 = 2^n functions, each equal to some partial derivative of φ; more precisely, 2(2^{n−1} − 1) from the norm and 2 more from the chain rule. Writing γ = γ_1 + · · · + γ_{2^n−1}, the function

∂_{e_j} Σ_γ a_γ ∂^{γ_1}φ(x) · · · ∂^{γ_{2^n−1}}φ(x) = Σ_γ Σ_{m=1}^{2^n−1} a_γ ∂^{γ_1}φ(x) · · · ∂^{γ_m+e_j}φ(x) · · · ∂^{γ_{2^n−1}}φ(x)

is again a sum of products of 2^n − 1 functions, each equal to some partial derivative of φ. Therefore the numerator of

∂_{e_j} [ Σ_γ a_γ ∂^{γ_1}φ(x) · · · ∂^{γ_{2^n−1}}φ(x) / ‖∇φ(x)‖^{2^n} ]

is equal to

‖∇φ(x)‖^{2^n} Σ_γ Σ_{m=1}^{2^n−1} a_γ ∂^{γ_1}φ(x) · · · ∂^{γ_m+e_j}φ(x) · · · ∂^{γ_{2^n−1}}φ(x) − Σ_γ a_γ ∂^{γ_1}φ(x) · · · ∂^{γ_{2^n−1}}φ(x) · 2^n ‖∇φ(x)‖^{2^n−2} Σ_{i=1}^d φ′_{x_i}(x) φ′′_{x_i x_j}(x).

After reorganizing, we see that we get a sum of products of 2^n + 2^n − 1 = 2^{n+1} − 1 functions, each equal to some partial derivative of φ. We are left with the denominator of the partial derivative in the j direction: ‖∇φ(x)‖^{2^{n+1}}. So by induction, the proof of (15) is complete: we let |β| = n above and wrote β = β′ + e_j for arbitrary j (the base case β = 0 holds trivially).

We can now compute by induction without much work that for β^j ∈ N^d there are a_β = a_{β^0,...,β^N} ∈ {0, 1} such that

(D^t)^N(g)(x) = (iλ)^{−N} Σ_{1≤j_1,...,j_N≤d, |β^0+β^1+···+β^N|=N} a_β ∂^{β^0}g(x) (∂^{β^1}f_{j_1})(εx) · · · (∂^{β^N}f_{j_N})(εx).

By (15),

|(D^t)^N(g)(x)| ≤ λ^{−N} Σ_{1≤j_1,...,j_N≤d, |β^0+β^1+···+β^N|=N} a_β |∂^{β^0}g(x)| · |(∂^{β^1}f_{j_1})(εx)| · · · |(∂^{β^N}f_{j_N})(εx)|

≲ λ^{−N} Σ_{j_1,...,j_N=1}^d |∂^{β^0}g(x)| ε^{−α^1} · · · ε^{−α^N}

for all α^1, · · · , α^N ∈ N+(φ). In particular, for all α ∈ N+(φ),

|(D^t)^N(g)(x)| ≲ λ^{−N} ε^{−Nα},   (17)

where the implicit constant is independent of ε and λ.


5.2 Final estimate for lemma 2

We now put everything together for ε small enough (ε_i ≤ (C_d/a)^{d/(2δ)}):

I(λ, ε) = ∫_{[ε,4ε]} e^{iλφ(x)} ψ_ε(x) dx = ε^1 ∫_{[1,4]^d} e^{iλφ(εx)} ψ(x) dx

= ε^1 ∫_{[1,4]^d} D^N(e^{iλφ(ε·)})(x) ψ(x) dx = ε^1 ∫_{[1,4]^d} e^{iλφ(εx)} (D^t)^N(ψ)(x) dx.

By (17),

∫_{[1,4]^d} |(D^t)^N(ψ)(x)| dx ≲ ∫_{[1,4]^d} λ^{−N} ε^{−Nα} dx ≲ λ^{−N} ε^{−Nα}.

Therefore we have proved lemma 2:

|I(λ, ε)| ≲ λ^{−N} ε^{−(Nα−1)}.

6 Proof of Varchenko’s upper bounds

We now use lemma 2 and linear programming to prove Varchenko's upper bounds. We can sum over positive j_i to get a bound on the integral

∫_{[0,1]^d} e^{iλφ(x)} ψ(x) dx,

where ψ is supported in a sufficiently small neighborhood of the origin. This is achieved by decomposing ψ(x) = Σ_{j_1,...,j_d=0}^∞ ψ(x) f_j(x), where {f_j(x)} is a partition of unity subordinate to the cover {(2^{−j}, 2^{−j+2})}_{j∈N^d} of (0, 4)^d. One should choose a family {f_j} for which there exists a uniform constant C > 0 such that

|∂^α f_j(x)/∂x^α| ≤ C x^{−α},

so that when one scales the support of (ψf_j)(x) to [1, 4]^d by 2^{−j}, all derivatives of (ψf_j)(2^{−j}x) are bounded above by a uniform constant independent of j. Since it is sufficient to find some f_0(x) such that the functions f_j(x) = f_0(2^j x) define our partition of unity, it is not difficult to prove that indeed we can choose a family {f_j}_{j∈N^d} as required.
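A concrete construction of such an f_0 (our one-dimensional sketch, not from the paper): normalize a bump h supported in (1, 4) by the sum of its dyadic dilates, so that the functions f_j(x) = f_0(2^j x) sum to 1 on (0, 1).

```python
import math

# 1D sketch of the required partition of unity: a smooth bump h supported in
# (1, 4), normalized by its dyadic dilates so that f_j(x) = f_0(2^j x) sum to 1.
def h(x):
    return math.exp(-1 / ((x - 1) * (4 - x))) if 1 < x < 4 else 0.0

def f0(x):
    # enough dyadic shifts so that every scale sampled below is covered
    s = sum(h(2 ** j * x) for j in range(-20, 60))
    return h(x) / s if s > 0 else 0.0

# On (0, 1) the sum over j >= 0 equals 1 (cover by the intervals (2^{-j}, 2^{-j+2})).
for x in (0.75, 0.3, 0.01, 0.0007):
    total = sum(f0(2 ** j * x) for j in range(40))
    print(x, total)  # total -> 1.0 (up to rounding)
```

The normalizing sum is the same at every dyadic dilate of x, which is exactly why the family sums to 1; the smoothness and derivative bounds of f_0 come from the standard bump h.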

Now we apply lemma 2. For the rest of the section, we use i as an index. Fix λ > 2. Let t be the Newton distance of φ and assume k is the smallest integer such that t lies in a face F ⊂ N+(φ) of dimension k − 1 (d − k + 1 the greatest codimension). By lemma 2, it is enough to show

Σ_{j_1,...,j_d=0}^∞ min_{N≥0, α∈N+(φ)} {λ^{−N} 2^{j·(Nα−1)}} ≲ λ^{−1/t} log^{d−k}(λ).

First, setting N = 0, we see

Σ_{j_1=log(λ)/t}^∞ Σ_{j_2,...,j_d=0}^∞ 2^{−|j|} ≲ λ^{−1/t}.

Hence, it is enough to bound

Σ_{j_1,...,j_d=0}^{log(λ)/t} min_{N≥0, α∈N+(φ)} {λ^{−N} 2^{j·(Nα−1)}}   (18)

above by a uniform positive constant times λ^{−1/t} log^{d−k}(λ). Since t lies in a face of dimension k − 1 that cannot lie in a coordinate hyperplane (t > 0), there are linearly independent α^1, . . . , α^k ∈ F whose convex hull contains t, so we write

t = Σ_{i=1}^k λ_i α^i.

For the rest of the proof we fix N > 1/t. For 1 ≤ i ≤ k let θ_i = λ_i/(Nt) and θ_0 = 1 − 1/(Nt). Then all θ_i are positive and sum to 1. Moreover, we can check

θ_0(−1) + Σ_{i=1}^k θ_i(Nα^i − 1) = 0.

For x ∈ R^d, denote x_+ = (x_1, . . . , x_k, 0, . . . , 0) and x_− = x − x_+. Without loss of generality assume that {α^i_+}_{i=1}^k is a linearly independent set, as some k columns of the k × d matrix with rows α^i must be linearly independent, so we simply assume it is the first k columns. We estimate (18) by fixing j_−, i.e., we consider over j_1, . . . , j_k the sum

Σ_{j_1,...,j_k=0}^{log(λ)/t} min_{1≤i≤k} {J_0 2^{−|j_+|}, J_i 2^{j_+·(Nα^i−1)}},   (19)

where J_0 = 2^{−|j_−|} and the coefficients J_i equal λ^{−N} 2^{j_−·(Nα^i−1)} for 1 ≤ i ≤ k; (19) clearly bounds (18) above, since we fixed N. Letting A be the matrix {α^i_ℓ}_{1≤i,ℓ≤k}, we can solve Az = 1 ∈ R^k. Write the solution as z = (z_1, . . . , z_k). Since the convex hull of {α^1, . . . , α^k} contains t ∈ R^k, we conclude ⟨t, z⟩ = 1, hence ⟨1, z⟩ = 1/t. Denoting the d-tuple log(λ)(z_1, . . . , z_k, 0, . . . , 0) by j^0, we compute

J_0 2^{−|j^0|} = λ^{−1/t} = J_i 2^{j^0·(Nα^i−1)}.

Hence, by reindexing and factoring out λ^{−1/t}, we see

Σ_{j_1,...,j_k=0}^{log(λ)/t} min_{1≤i≤k} {J_0 2^{−|j_+|}, J_i 2^{j_+·(Nα^i−1)}}

≲ λ^{−1/t} Σ_{j_1=−log(λ)z_1}^{log(λ)/t−log(λ)z_1} · · · Σ_{j_k=−log(λ)z_k}^{log(λ)/t−log(λ)z_k} min_{1≤i≤k} {2^{−|j_+|}, 2^{j_+·(Nα^i−1)}}.   (20)

Now notice that the vectors (α_{i1}, …, α_{ik}) and −1 ∈ R^k do not all lie in the same hyperplane: 〈−1, z〉 = −|z| = −1/t ≠ 1. This finishes the claim, since {ξ ∈ R^k : 〈ξ, z〉 = 1} is the unique hyperplane in R^k containing the k linearly independent vectors (α_{i1}, …, α_{ik}). Therefore,
\[
\sup_{\|x\|_\infty=1}\ \min_{1\le i\le k}\big\{-1_+\cdot x,\ (N\alpha_i-1)_+\cdot x\big\} < 0.
\]
By homogeneity, there is some c > 0 such that for all x ∈ R^d,
\[
\min_{1\le i\le k}\big\{-1_+\cdot x,\ (N\alpha_i-1)_+\cdot x\big\} \le -c\|x\|_\infty.
\]

Apply this fact to bound the sum in (20) by
\[
\sum_{j_1,\dots,j_k\in\mathbb{Z}} 2^{\min_i\{-|j_+|,\ j_+\cdot(N\alpha_i-1)\}} \lesssim \sum_{n=0}^{\infty}\sum_{\|j_+\|_\infty=n} 2^{-cn} \lesssim \sum_{n=0}^{\infty} n^{k-1} 2^{-cn} \lesssim 1.
\]

Now taking the sum over the remaining indices 0 ≤ j_{k+1}, …, j_d ≤ log(λ)/t, we see that (18) is bounded above by a uniform constant times
\[
\sum_{j_{k+1},\dots,j_d=0}^{\log(\lambda)/t} \lambda^{-1/t} \lesssim \lambda^{-1/t}\log^{d-k}(\lambda),
\]
which is exactly the estimate we were looking for, as there are d − k remaining indices.
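The bound just proved can be illustrated numerically; this is only a sanity check under assumed toy data, not the paper's computation. For φ(x, y) = x² + y² we have t = 1 and d = k = 2, so (18) should be O(λ^{−1}) with no log factor. The vertex set and the finite grid of values of N below are assumptions made to keep the computation finite.

```python
# Toy data (assumed): Newton polyhedron of phi(x, y) = x^2 + y^2,
# sampled at the vertices (2, 0), (0, 2) and the diagonal point (1, 1).
alphas = [(2, 0), (0, 2), (1, 1)]
Ns = [k * 0.5 for k in range(17)]  # finite grid approximating N >= 0

def dyadic_sum(L):
    """Evaluate the sum (18) with lambda = 2**L, t = 1, base-2 logs."""
    total = 0.0
    for j1 in range(L + 1):
        for j2 in range(L + 1):
            # min over the (N, alpha) grid of lambda^-N * 2^(j.(N*alpha - 1))
            best = min(-N * L + N * (j1 * a1 + j2 * a2) - (j1 + j2)
                       for N in Ns for (a1, a2) in alphas)
            total += 2.0 ** best
    return total

# The bound predicts dyadic_sum(L) <~ 2**(-L), uniformly in L.
ratios = [dyadic_sum(L) * 2.0 ** L for L in (8, 12, 16)]
print(ratios)
```

The ratios stay bounded as λ grows, consistent with the λ^{−1/t} estimate; only dyadic boxes whose exponent j lies near the diagonal contribute at the top order.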

7 Theorem 1: Asymptotic expansion of I(λ)

7.1 Corollary to lemma 2

Under the same assumptions as lemma 2, we can show
\[
\Big|\int_{[\varepsilon,4\varepsilon]} e^{i\lambda\phi(x)} x^\beta \psi_\varepsilon(x)\,dx\Big| \lesssim \lambda^{-N}\varepsilon^{-(N\alpha-\beta-1)} \tag{21}
\]
for all λ > 0, all N ∈ N, all α ∈ N_+(φ), and all β ∈ N^d, where the implicit constant above is independent of ε and λ. The proof is a simple change of variables and an application of lemma 2. It is not necessary for β to be a d-tuple of nonnegative integers, but this is the setting we are interested in for theorem 1. More importantly, one can show the following corollary by using the inequality (21) together with the same proof of Varchenko's upper bounds in the previous section:


Corollary 1. Under the same assumptions of lemma 2,
\[
\Big|\int_{[0,1]^d} e^{i\lambda\phi(x)} x^\beta \psi(x)\,dx\Big| \lesssim \lambda^{-\langle\beta+1,\,w(\beta+1)\rangle}\log^{k-1}(\lambda),
\]
where 1 ≤ k ≤ d is the maximum codimension over all faces F ⊂ N_+(φ) intersecting the line {s(β + 1) : s ∈ R}.

We make heavy use of the corollary in the proof of theorem 1 below.

7.2 Derivatives of I(λ)

Denoting the integral ∫_{R^d} e^{iλφ(x)}ψ(x) dx by I(λ), where ψ is supported close enough to the origin, we want to prove that I(λ) has an asymptotic expansion for large λ of the form
\[
I(\lambda) \sim \sum_{j=0}^{\infty}\sum_{k=0}^{d_j-1} a_{j,k}(\psi)\,\lambda^{-p_j}\log^{d_j-1-k}(\lambda),
\]
where p_0 < p_1 < ⋯ is the ordering of {〈β + 1, w(β + 1)〉}_{β∈N^d} and d_j is the greatest codimension over all faces intersecting the lines {s(β + 1) : s ∈ R, β ∈ N^d, 〈β + 1, w(β + 1)〉 = p_j}.
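For a toy phase, the candidate exponents 〈β + 1, w(β + 1)〉 can be enumerated directly. The snippet below does this for φ(x, y) = x² + y² (an assumed example), whose Newton polyhedron has the single relevant normal w = (1/2, 1/2); it recovers p_0 = 1/t = 1 and subsequent exponents in increments of 1/2.

```python
from fractions import Fraction as F

# Assumed toy phase phi(x, y) = x^2 + y^2: the only compact facet of its
# Newton polyhedron has normal w = (1/2, 1/2), so <alpha, w> = 1 on it.
w = (F(1, 2), F(1, 2))

# Enumerate <beta + 1, w> over a box of multi-indices beta in N^2.
exponents = sorted({(b1 + 1) * w[0] + (b2 + 1) * w[1]
                    for b1 in range(6) for b2 in range(6)})

print(exponents[:4])  # [Fraction(1, 1), Fraction(3, 2), Fraction(2, 1), Fraction(5, 2)]
```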

The first step in proving theorem 1 is to estimate for β ∈ N^d the quantity
\[
\lambda\frac{d}{d\lambda} I_\beta(\lambda) = \lambda\frac{d}{d\lambda}\int_{\mathbb{R}^d} e^{i\lambda\phi(x)} x^\beta \psi(x)\,dx.
\]

We write φ(x) as the series
\[
\phi(x) = \sum_{\alpha} c_\alpha x^\alpha = \sum_{\alpha}\sum_{j=1}^{d} \alpha_j v_j c_\alpha x^\alpha,
\]
where we are free to choose any v = v_α ∈ R^d_{≥} satisfying 〈α, v〉 = 1; we suppress the dependence on α for notational convenience. Next, let w ∈ w(β + 1) be arbitrary. Recall: geometrically, F = H_w ∩ N_+(φ) is a codimension 1 face hit by the line {s(β + 1) : s ∈ R}, so in particular F does not lie in a coordinate hyperplane. We can rewrite φ as

\[
\sum_{\alpha}\sum_{j=1}^{d} \alpha_j (v_j - w_j) c_\alpha x^\alpha + \sum_{\alpha}\sum_{j=1}^{d} \alpha_j w_j c_\alpha x^\alpha. \tag{22}
\]
It is easy to see that both sums in (22) converge uniformly on [0, 4]^d, since the quantities |v| and |w| are bounded above by d by their definitions. Letting v = w for all α ∈ F, (22) simplifies to

\[
\sum_{\alpha\notin F}\sum_{j=1}^{d} \alpha_j (v_j - w_j) c_\alpha x^\alpha + \sum_{\alpha}\sum_{j=1}^{d} \alpha_j w_j c_\alpha x^\alpha.
\]


Denote by Φ(x) the sum on the left. Applying the operator λ d/dλ, using the identity (wx)·∇φ(x) = ∑_α ∑_{j=1}^d α_j w_j c_α x^α (where wx denotes the vector (w_1x_1, …, w_dx_d)), and integrating by parts,
\[
\begin{aligned}
\lambda\frac{d}{d\lambda}\int e^{i\lambda\phi(x)} x^\beta\psi(x)\,dx
&= \int e^{i\lambda\phi(x)}\, i\lambda\phi(x)\, x^\beta\psi(x)\,dx\\
&= \int e^{i\lambda\phi(x)}\, i\lambda\Phi(x)\, x^\beta\psi(x)\,dx + \int e^{i\lambda\phi(x)}\, i\lambda \sum_{\alpha}\sum_{j=1}^{d}\alpha_j w_j c_\alpha x^\alpha\, x^\beta\psi(x)\,dx\\
&= \int e^{i\lambda\phi(x)}\, i\lambda\Phi(x)\, x^\beta\psi(x)\,dx - \int e^{i\lambda\phi(x)}\, \nabla\cdot\big(x^\beta\psi(x)(wx)\big)\,dx\\
&= \lambda\int e^{i\lambda\phi(x)}\, i\Phi(x)\, x^\beta\psi(x)\,dx - \langle\beta+1, w\rangle\int e^{i\lambda\phi(x)} x^\beta\psi(x)\,dx\\
&\quad - \sum_{j=1}^{d}\langle e_j, w\rangle\int e^{i\lambda\phi(x)} x^{\beta+e_j}\psi'_{x_j}(x)\,dx.
\end{aligned}
\]

Letting D_β be the operator (λ d/dλ + 〈β + 1, w〉), we estimate
\[
D_\beta I_\beta(\lambda) = \lambda\int e^{i\lambda\phi(x)}\, i\Phi(x)\, x^\beta\psi(x)\,dx - \sum_{j=1}^{d}\langle e_j, w\rangle\int e^{i\lambda\phi(x)} x^{\beta+e_j}\psi'_{x_j}(x)\,dx = \lambda I_1(\lambda) + I_2(\lambda).
\]

We use corollary 1 to conclude
\[
|I_2(\lambda)| \lesssim \sum_{j=1}^{d}\langle e_j, w\rangle\, \lambda^{-\langle\beta+e_j+1,\,w(\beta+e_j+1)\rangle}\log^{k_j}(\lambda). \tag{23}
\]
If 〈β + e_j + 1, w(β + e_j + 1)〉 > 〈β + 1, w〉, we are done with the estimate. Otherwise,
\[
\langle\beta+1, w\rangle = \langle\beta+e_j+1,\, w(\beta+e_j+1)\rangle = \langle\beta+1,\, w(\beta+e_j+1)\rangle + \langle e_j,\, w(\beta+e_j+1)\rangle,
\]
a quantity strictly greater than 〈β + 1, w〉 unless 〈e_j, w(β + e_j + 1)〉 = 0 and 〈β + 1, w(β + e_j + 1)〉 = 〈β + 1, w(β + 1)〉. In this case, either 〈e_j, w〉 = 0 or else w(β + e_j + 1) ⊊ w(β + 1). So if 〈e_j, w〉 ≠ 0, we conclude that β + e_j + 1 lies in a strictly smaller intersection of codimension one faces. Since β + e_j + 1 ≠ β + 1, we can from there conclude that the codimension of the face containing β + e_j + 1 is strictly larger than that of the face containing β + 1. Therefore, the power k_j of log in (23) is strictly smaller than in the estimate of I_β guaranteed by corollary 1. If 〈e_j, w〉 = 0, the jth term in (23) vanishes. In either case, we get a strictly better estimate, consistent with corollary 1 (and theorem 1).


To estimate I_1(λ), we use the fact that ∑_{α∉F} ∑_{j=1}^d α_j(v_j − w_j) c_α x^α converges uniformly, so we just have to estimate ∫ e^{iλφ(x)} x^{α+β} ψ(x) dx for α ∉ F. Applying corollary 1, the estimate is
\[
|I_1(\lambda)| \lesssim \max_{\alpha\notin F}\, \lambda^{-\langle\alpha+\beta+1,\,w(\alpha+\beta+1)\rangle}\log^{k_\alpha}(\lambda) \tag{24}
\]

for some 0 ≤ k_α ≤ d − 1. Note that
\[
\langle\beta+1, w\rangle + \langle\alpha,\, w(\alpha+\beta+1)\rangle \le \langle\alpha+\beta+1,\, w(\alpha+\beta+1)\rangle.
\]
Moreover, 〈α, w(α + β + 1)〉 ≥ 1, since α must lie on or above H_{w′} for w′ ∈ w(α + β + 1) by convexity of N_+(φ). Let us examine what happens if 1 + 〈β + 1, w〉 = 〈α + β + 1, w(α + β + 1)〉, namely if 〈α, w(α + β + 1)〉 = 1. It is impossible for w(α + β + 1) to contain w, since we assumed α ∉ F. In particular, α + β + 1 must lie in a face of strictly smaller codimension than β + 1. Therefore corollary 1 guarantees that k_α must be strictly smaller than the power of log in the estimate of ∫ e^{iλφ(x)} ψ(x) x^β dx, so the estimate in this case is strictly better because of the power of log. If α ∉ H_{w′}, the power of λ must be strictly smaller. Therefore, using the bound
\[
\Big|\int e^{i\lambda\phi(x)} x^\beta\psi(x)\,dx\Big| \lesssim \lambda^{-\langle\beta+1, w\rangle}\log^{k_\beta}(\lambda)
\]
guaranteed by corollary 1, we have shown

\[
|D_\beta I_\beta(\lambda)| \lesssim \max\Big\{ \max_{\langle e_j, w\rangle\neq 0} \lambda^{-\langle\beta+1,\,w(\beta+1)\rangle}\log^{k_j}(\lambda),\ \max_{\alpha\notin F} \lambda^{1-\langle\alpha+\beta+1,\,w(\alpha+\beta+1)\rangle}\log^{k_\alpha}(\lambda) \Big\},
\]
where the first case bounds 1 ≤ j ≤ d in (23), where we concluded k_j < k_β by corollary 1 for those j satisfying 〈e_j, w〉 ≠ 0, and the second case occurs otherwise, for k_α guaranteed by corollary 1, where 1 − 〈α + β + 1, w(α + β + 1)〉 ≤ −〈β + 1, w(β + 1)〉. It is important in the second case to show, for consistency with theorem 1, that we can find γ ∈ N^d such that 1 − 〈α + β + 1, w(α + β + 1)〉 = −〈γ, w(α + β + 1)〉. This is because there is some c > 0 such that δ = c(α + β + 1) ∈ ∂N_+(φ). Since δ lies in the convex hull of points in N^d, there must be some α′ ∈ N^d lying at most 1 in each component away from δ, and therefore α + β + 1 − α′ = γ ∈ N^d.

7.3 Estimating derivatives of I(λ)

We now set up the problem of estimating derivatives of I(λ), which we use to figure out its asymptotic expansion. Let p_0 = 1/t. Define for j ≥ 1,
\[
p_j = \min\big\{\langle v+1, w\rangle > p_{j-1} : v\in\mathbb{N}^d,\ w\big\},
\]

where w runs over normals of N_+(φ). We want to show that for all n ∈ N, there are 0 ≤ d_0, d_1, …, d_n ≤ d − 1 such that
\[
\Big|\Big(\lambda\frac{d}{d\lambda}+p_n\Big)^{k} \Big(\lambda\frac{d}{d\lambda}+p_{n-1}\Big)^{d_{n-1}+1}\cdots\Big(\lambda\frac{d}{d\lambda}+p_0\Big)^{d_0+1} I(\lambda)\Big| \lesssim \lambda^{-p_n}\log^{d_n-k}(\lambda) \tag{25}
\]

for all 1 ≤ k ≤ d_n. The proof is by induction on n ∈ N and 1 ≤ k ≤ d_n. For all such n and k, let G_{n,k}(λ) = (λ d/dλ + p_n)^k (λ d/dλ + p_{n−1})^{d_{n−1}+1} ⋯ (λ d/dλ + p_0)^{d_0+1} I(λ). The induction hypothesis states that G_{n,k} is of the form
\[
G_{n,k}(\lambda) = \sum_{j=0}^{d_0+\cdots+d_{n-1}+n+k} \lambda^j J_{j,n,k}(\lambda),
\]

where there are

• ψ_{j,n,k} smooth,

• β = β_{j,n,k} and w (depending on β) such that p_n = 〈β + 1, w〉 − j,

• a set Γ such that ∑_{γ∈Γ} b_γ(j, n, k) x^γ converges uniformly and absolutely,

and all γ ∈ Γ satisfy

• 〈γ + 1, w′〉 ≥ 〈β_{j,n,k} + 1, w〉 for all w′ normal to N_+(φ),

such that
\[
J_{j,n,k}(\lambda) = \int_{\mathbb{R}^d} e^{i\lambda\phi(x)}\,\psi_{j,n,k}(x) \sum_{\gamma\in\Gamma} b_\gamma(j,n,k)\, x^\gamma\,dx.
\]

From now on the dependence on j, k, n is suppressed. The case n = 0, k = 1 was shown in the previous section, taking β = 0 and Γ = {α + β : α ∈ supp_+(φ)}.

Assuming k < d_n, we apply (λ d/dλ + p_n) to G_{n,k−1} (otherwise, apply (λ d/dλ + p_{n+1}); the proof is identical). By the induction hypothesis, G_{n,k−1} is a sum of terms λ^j J(λ) satisfying the conditions above, where J depends on j, k, n. For all j there is some β and w corresponding to a codimension 1 face such that p_n = 〈β + 1, w〉 − j. Therefore,

\[
\begin{aligned}
\Big(\lambda\frac{d}{d\lambda}+p_n\Big)\lambda^j J(\lambda)
&= \lambda^j\Big(\int e^{i\lambda\phi(x)}\, i\lambda\phi(x)\,\psi(x)\sum_\gamma b_\gamma x^\gamma\,dx + \langle\beta+1, w\rangle\int e^{i\lambda\phi(x)}\,\psi(x)\sum_\gamma b_\gamma x^\gamma\,dx\Big)\\
&= \lambda^j\big(J'(\lambda) + \langle\beta+1, w\rangle J(\lambda)\big).
\end{aligned}
\]

We separately estimate J′ and J. For simplicity, write J′ = J′_1 + J′_2, where
\[
J_1'(\lambda) = \int e^{i\lambda\phi(x)}\, i\lambda\,\psi(x)\sum_{\gamma} b_\gamma x^\gamma \sum_{j=1}^{d}\sum_{\alpha} c_\alpha \alpha_j w_j x^\alpha\,dx
\]
and
\[
J_2'(\lambda) = \int e^{i\lambda\phi(x)}\, i\lambda\,\psi(x)\sum_{\gamma} b_\gamma x^\gamma \sum_{j=1}^{d}\sum_{\alpha} c_\alpha \alpha_j (v_j - w_j) x^\alpha\,dx.
\]

We reuse the notation v and w from the previous section. The argument is very similar to the one already given. We use integration by parts along with the equality
\[
(wx)\cdot\nabla\phi(x) = \sum_{j=1}^{d}\sum_{\alpha} c_\alpha \alpha_j w_j x^\alpha
\]
to write J′_1 as
\[
\int e^{i\lambda\phi(x)}\, i\lambda\,(wx)\cdot\nabla\phi(x)\,\psi(x)\sum_{\gamma} b_\gamma x^\gamma\,dx \tag{26}
\]
\[
= -\sum_{j=1}^{d}\sum_{\gamma} w_j b_\gamma \int e^{i\lambda\phi(x)}\,\psi'_{x_j}(x)\, x^{\gamma+e_j}\,dx - \sum_{\gamma}\langle\gamma+1, w\rangle\int e^{i\lambda\phi(x)}\,\psi(x)\, b_\gamma x^\gamma\,dx. \tag{27}
\]

The integrals under the sum over j can be bounded above by λ^{−〈γ+e_j+1, w(γ+e_j+1)〉} times the appropriate power of log by corollary 1. By the induction hypothesis,
\[
\langle\gamma+e_k+1,\, w(\gamma+e_k+1)\rangle \ge \langle\beta+1, w\rangle + \langle e_k,\, w(\gamma+e_k+1)\rangle.
\]
If 〈e_k, w(γ + e_k + 1)〉 ≠ 0, the exponent of λ in our approximation is strictly better than 〈β + 1, w〉. Otherwise, the kth coefficient in (27) is zero, so the estimate of these integrals is strictly better. Next, adding the second integral of (27) to 〈β + 1, w〉J, we get
\[
\sum_{\gamma} b_\gamma\big(\langle\beta+1, w\rangle - \langle\gamma+1, w\rangle\big)\int e^{i\lambda\phi(x)}\,\psi(x)\, x^\gamma\,dx = \sum_{\langle\gamma+1, w\rangle > \langle\beta+1, w\rangle} b'_\gamma \int e^{i\lambda\phi(x)}\,\psi(x)\, x^\gamma\,dx.
\]
Here we can bound each summand above by λ^{−〈γ+1, w(γ+1)〉} log^{k_γ}(λ), and it is the same story as in the base case.

The last integral we have to estimate is J′_2. By dominated convergence, we rewrite it as
\[
\sum_{j=1}^{d}\sum_{\gamma}\sum_{\alpha\notin H_w} b_\gamma c_\alpha \alpha_j (v_j - w_j) \int e^{i\lambda\phi(x)}\, i\lambda\,\psi(x)\, x^{\gamma+\alpha}\,dx.
\]
Each integral in this sum is bounded above by λ · λ^{−〈α+γ+1, w(α+γ+1)〉} times some power of log that we must treat delicately, depending on the codimension of the face generated by the origin and α + γ + 1. The argument is exactly the same as in the base case. So we are done proving (25).

7.4 A differential inequality

The last thing we need to do is show that (25) implies the asymptotic expansion of I(λ) we have been looking for. The expansion will be a corollary of the following result:

Lemma 3. Let f : (2, ∞) → C be smooth. Assume there are positive rationals p_0 < p_1 < ⋯ < p_{n+1} and positive integers d_0, …, d_{n+1} such that
\[
\Big|\Big(\lambda\frac{d}{d\lambda}+p_n\Big)^{d_n}\cdots\Big(\lambda\frac{d}{d\lambda}+p_0\Big)^{d_0} f(\lambda)\Big| \lesssim \lambda^{-p_{n+1}}\log^{d_{n+1}-1}(\lambda). \tag{28}
\]
Then, there are constants a_{jk} ∈ C such that
\[
f(\lambda) = \sum_{j=0}^{n}\sum_{k=0}^{d_j-1} a_{jk}\,\lambda^{-p_j}\log^{d_j-1-k}(\lambda) + O\big(\lambda^{-p_{n+1}}\log^{d_{n+1}-1}(\lambda)\big).
\]

First, we require some more basic results about the differential operator we are considering. We let p_j and d_j be as in lemma 3. We assume f : R → C is smooth for all statements below. Also, big-O statements are for λ → ∞.

Proposition 4. Let h : (2, ∞) → C be smooth. Assume there are positive rationals p_0 < p_1 < ⋯ < p_{n+1} and positive integers d_0, …, d_{n+1} such that
\[
\Big(\lambda\frac{d}{d\lambda}+p_n\Big)^{d_n}\cdots\Big(\lambda\frac{d}{d\lambda}+p_0\Big)^{d_0} h(\lambda) = 0.
\]
Then, there are a_{jk} ∈ C such that
\[
h(\lambda) = \sum_{j=0}^{n}\sum_{k=0}^{d_j-1} a_{jk}\,\lambda^{-p_j}\log^{k}(\lambda).
\]

Proposition 4 can be shown by a simple induction argument on 0 ≤ m ≤ d_0 + ⋯ + d_n.
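Proposition 4 can also be checked symbolically: representing a function ∑ c_{q,k} λ^{−q} log^k(λ) by its coefficient table, the operator λ d/dλ + p acts exactly via (λ d/dλ + p)(λ^{−q} log^k λ) = (p − q) λ^{−q} log^k λ + k λ^{−q} log^{k−1} λ. A minimal sketch of this check, with assumed toy exponents p_0 = 1, p_1 = 3/2 and d_0 = d_1 = 2:

```python
from collections import defaultdict
from fractions import Fraction as F

def apply_op(p, func):
    """Apply (lam d/dlam + p) to sum_{(q,k)} c * lam^-q * log^k(lam), using
    (lam d/dlam + p)(lam^-q log^k) = (p - q) lam^-q log^k + k lam^-q log^(k-1)."""
    out = defaultdict(F)
    for (q, k), c in func.items():
        out[(q, k)] += (p - q) * c
        if k > 0:
            out[(q, k - 1)] += k * c
    return {key: c for key, c in out.items() if c != 0}

# Toy data (assumed): p_0 = 1 with d_0 = 2, p_1 = 3/2 with d_1 = 2.
ps_ds = [(F(1), 2), (F(3, 2), 2)]

# A generic element of the claimed solution space:
# h = sum_j sum_{k < d_j} a_jk * lam^-p_j * log^k(lam), with a_jk = 1 here.
h = {(p, k): F(1) for p, d in ps_ds for k in range(d)}

# Apply (lam d/dlam + p_1)^2 (lam d/dlam + p_0)^2 and check annihilation.
for p, d in ps_ds:
    for _ in range(d):
        h = apply_op(p, h)

print(h)  # {}
```

The computation is exact (rational coefficients), so the empty table confirms that the stated span is annihilated by the product operator.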

Proposition 5. Let f : (2, ∞) → C be smooth. Let 0 < p < q and let d ∈ N. If |(λ d/dλ + p) f(λ)| ≲ λ^{−q} log^d(λ), then |f(λ)| ≲ λ^{−q} log^d(λ).


Proof. We multiply both sides of the inequality by λ^{p−1}, notice the left-hand side becomes exact, and integrate:
\[
\Big|\int_\lambda^\infty \frac{d}{dt}\big(t^p f(t)\big)\,dt\Big| \le \int_\lambda^\infty \Big|\frac{d}{dt}\big(t^p f(t)\big)\Big|\,dt \lesssim \int_\lambda^\infty t^{p-q-1}\log^{d}(t)\,dt.
\]
Since p − q − 1 < −1, the rightmost side is integrable, and therefore so is the leftmost. Integrating by parts (differentiating the log term if d ≠ 0), we conclude
\[
|\lambda^p f(\lambda)| \lesssim \lambda^{p-q}\log^{d}(\lambda).
\]
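The key step of multiplying by λ^{p−1} to make the left-hand side exact is just the product rule: λ^{p−1}(λ f′(λ) + p f(λ)) = d/dλ (λ^p f(λ)). A quick numerical sanity check of this identity, with an arbitrary sample function (f(λ) = λ^{−2} log λ and p = 1/2 are assumed choices):

```python
import math

p = 0.5
f = lambda lam: lam ** -2.0 * math.log(lam)   # arbitrary smooth sample

def deriv(g, x, h=1e-5):
    """Central finite-difference derivative of g at x."""
    return (g(x + h) - g(x - h)) / (2.0 * h)

for lam in (3.0, 10.0, 50.0):
    # lam^(p-1) * ((lam d/dlam + p) f)(lam) ...
    lhs = lam ** (p - 1.0) * (lam * deriv(f, lam) + p * f(lam))
    # ... equals d/dlam (lam^p f(lam)).
    rhs = deriv(lambda s: s ** p * f(s), lam)
    assert abs(lhs - rhs) < 1e-6
print("identity verified")
```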

This technique is the same one we would use to prove proposition 4. Proposition 5 provides the base case for the proof of lemma 3:

Proof. Let D_n be the differential operator (λ d/dλ + p_n)^{d_n} ⋯ (λ d/dλ + p_0)^{d_0}. Let h be the general solution to the homogeneous equation D_n(h) = 0 guaranteed by proposition 4. Then to solve for f in the differential inequality (28), we need to solve |D_n(f + h)| ≲ λ^{−p_{n+1}} log^{d_{n+1}−1}(λ). We use induction the same way as in the proof of proposition 5, making use of p_0 < ⋯ < p_n < p_{n+1}. We conclude
\[
|f(\lambda) + h(\lambda)| \lesssim \lambda^{-p_{n+1}}\log^{d_{n+1}-1}(\lambda).
\]
Hence, there are constants a_{jk} ∈ C such that
\[
f(\lambda) = \sum_{j=0}^{n}\sum_{k=0}^{d_j-1} a_{jk}\,\lambda^{-p_j}\log^{k}(\lambda) + O\big(\lambda^{-p_{n+1}}\log^{d_{n+1}-1}(\lambda)\big).
\]

Now we can conclude that for all n ∈ N, there are a_{jk} ∈ C such that
\[
\Big|I(\lambda) - \sum_{j=0}^{n}\sum_{k=0}^{d_j-1} a_{jk}\,\lambda^{-p_j}\log^{k}(\lambda)\Big| \lesssim \lambda^{-p_{n+1}}\log^{d_{n+1}-1}(\lambda).
\]
Finally, taking p_j and d_j as in theorem 1, the proof is complete. Note that in the propositions above, we should take d_j + 1 for the codimension.

8 Acknowledgments

I would like to deeply thank my adviser, Professor Philip Gressman, for his truly inspiring enthusiasm, inexhaustible patience, and invaluable advice. I would also like to thank Professors Michael Greenblatt and Robert Strain for some helpful comments and suggestions.

24

References

[1] Anthony Carbery. A uniform sublevel set estimate. In Harmonic analysis and partial differential equations, volume 505 of Contemp. Math., pages 97–103. Amer. Math. Soc., Providence, RI, 2009.

[2] Anthony Carbery, Michael Christ, and James Wright. Multidimensional van der Corput and sublevel set estimates. J. Amer. Math. Soc., 12(4):981–1015, 1999.

[3] Anthony Carbery and James Wright. What is van der Corput's lemma in higher dimensions? Publ. Mat., (Vol. Extra):13–26, 2002.

[4] Yen Do and Philip T. Gressman. An operator van der Corput estimate arising from oscillatory Riemann-Hilbert problems. 2013. Available online at arXiv:1308.1367.

[5] Michael Greenblatt. Oscillatory integral decay, sublevel set growth, and the Newton polyhedron. Math. Ann., 346(4):857–895, 2010.

[6] Michael Greenblatt. Maximal averages over hypersurfaces and the Newton polyhedron. J. Funct. Anal., 262(5):2314–2348, 2012.

[7] Philip T. Gressman. Uniform geometric estimates for sublevel sets. 2009. Available online at arXiv:0707.3168.

[8] Branko Grünbaum. Convex polytopes, volume 221 of Graduate Texts in Mathematics. Springer-Verlag, New York, second edition, 2003.

[9] Joe Kamimoto and Toshihiro Nose. Newton polyhedra and weighted oscillatory integrals with smooth phases, 2014. arXiv:1406.4325.

[10] Steven G. Krantz and Harold R. Parks. A primer of real analytic functions, volume 4 of Basler Lehrbücher [Basel Textbooks]. Birkhäuser Verlag, Basel, 1992.

[11] S. Łojasiewicz. Sur le problème de la division. Studia Math., 18:87–136, 1959.

[12] Bernard Malgrange. Intégrales asymptotiques et monodromie. Annales Scientifiques de l'École Normale Supérieure, 7(3):405–430, 1974.

[13] D. H. Phong, E. M. Stein, and Jacob Sturm. Multilinear level set operators, oscillatory integral operators, and Newton polyhedra. Math. Ann., 319(3):573–596, 2001.

[14] Vyacheslav S. Rychkov. Sharp L² bounds for oscillatory integral operators with C^∞ phases. Math. Z., 236(3):461–489, 2001.

[15] A. N. Varchenko. Newton polyhedra and estimation of oscillating integrals. Funktsional'nyi Analiz i Ego Prilozheniya, 10(3):13–38, 1976.
