
TRANSFORM MATHEMATICS

Orhan Özhan


GİRİŞ

Bismillahirrahmanirrahim

Matematik... Onunla ilgili söylenmiş çok şey var şüphesiz. Söylenenlerin en güzeli, sözlerin en güzelinde söylenmiş: "Güneş ve ay hesap iledir1", "... ve geceyi sükunet için, güneş ve ayı da hesap için yarattı. İşte bu aziz olan, her şeyi kemali ile bilen Allah'ın takdiridir2". Fiziksel evrende bilime medar olan her tabii hadise matematik ile ifade edilebilmektedir. Evrende maddeyle ilgili olan yaratılmış her şey bu dille anlaşılıyor. Galileo'nun da dediği gibi: "Kainat dediğimiz kitap, yazıldığı dil ve harfler öğrenilmedikçe okunamaz. O, matematik dilinde yazılmıştır; harfleri üçgen, daire ve diğer geometrik şekillerdir. Bu dil ve harfler olmaksızın kitabın bir tek kelimesini anlamaya beşeri olarak imkan yoktur.3" Matematiğin insanlar tarafından yapıldığı iddiasının aksine, bu varlıkların kendileri her ne kadar ezeli olmasalar da, onların dili olan matematik Allah'ın ilmidir; Allah nasıl kadim ise matematik de kadimdir, ezelidir. Bu itibarla matematik insana ayrı bir haz ve tatmin verir; onun tadı ve güzelliği akıl ile idrak edilir. "O'nun istemesi dışında onun ilminden hiç bir şeyi idrak edemezler4" ilahi kelamı gereğince, eğer biz matematik biliyorsak bu tamamen Allah'ın lutfu ve ikramı iledir; ve bu insan için büyük bir şeref ve nimettir. İnsanlık ve bilim tarihinde kazanılmış olan bütün matematik müktesebatı kütüphaneler doldursa da, yine de bu müktesebat Allah'ın ilmi yanında okyanustan bir damla gibidir veya daha azdır.

İşte bu küçük müktesebatla bugün atomun yapısı ve temel parçacıklar anlaşılmış; astronomi ilmi güneş sistemi dışına araç gönderecek, Mars'a araç indirecek seviyeye gelmiş; elektronik ve haberleşme teknolojisi akıllı telefonlarla insanları yüz yüze görüşür hale getirmiş; hava ve deniz araçları ile dünya küçük ve global bir köye dönüşmüş; bilgisayar ve internet teknolojisi ile bilgi sistemlerinde adeta bir patlama yaşanmış; tıpta inanılmaz görüntüleme ve tedavi sistemleri ile hastalıkların teşhis ve tedavisinde büyük ilerlemeler sağlanmıştır. Maddi ilimlerin şahı olan fiziğin, bu müktesebatı en çok ve en iyi kullanan ilim olduğu herkesçe bilinmekte ve kabul edilmektedir. Mühendislik dalları ise fizik ilminin uygulamalarından ibarettir. Klasik ve modern fiziğin yüzlerce konusundan bazıları günlük hayatın belirli sahalarında, hayatı kolaylaştırmak üzere değişik mühendislik dallarını oluşturmuştur. Makine mühendisliği ve inşaat mühendisliği çıkış itibarıyla oldukça eskidirler. Bunlara nazaran elektrik mühendisliği daha yenidir; ondokuzuncu yüzyılın sonu ve yirminci yüzyılın başları bu mühendisliğin serpilip geliştiği zamanlardır.

Matematiğin yaşantımıza olan etkisi bağlamında bir örnek verelim. Ondokuzuncu yüzyılda James Clerk Maxwell kendisinden önce bilinen Ampere kanununa küçük bir terim ekledi. Yer değiştirme akımı5 adı verilen bu basit terim sayesinde Ampere kanunu ve Faraday kanunu birbirlerine eklemlendiler. Bu iki kanunun uzay ve zamanda çözümü sayesinde, ışığın özelliklerini taşıyan elektromanyetik dalgaların varlığı öngörüldü. Ondokuzuncu yüzyılın sonlarına doğru Heinrich Hertz bu dalgaların varlığını ve ışık gibi davrandıklarını deneysel olarak ispatladı. Yirminci yüzyılın başlarında J. A. Fleming ve Lee De Forest tarafından diyot ve triyot tüpleri icat edilince, Maxwell ve Hertz'in buluşlarından elektronik diye yeni bir mühendislik alanı doğdu.

Maxwell'in katkıda bulunduğu Ampere kanunu ile birlikte Gauss kanunları ve Faraday kanununun oluşturduğu dört kanuna Maxwell kanunları denir. Maxwell'in küçük katkısı olan $\frac{d}{dt}\int_S \vec{D}\cdot d\vec{S}$ terimi aslında o kadar büyük ve önemlidir ki bu dört denklem takımı Maxwell'in ismi ile anılagelmiştir.

Gauss Kanunu: $\oint_S \vec{D}\cdot d\vec{S} = Q$

Manyetizmada Gauss Kanunu: $\oint_S \vec{B}\cdot d\vec{S} = 0$

Faraday Kanunu: $\oint_C \vec{E}\cdot d\vec{l} = -\frac{d}{dt}\int_S \vec{B}\cdot d\vec{S}$

Ampere Kanunu: $\oint_C \vec{H}\cdot d\vec{l} = I + \frac{d}{dt}\int_S \vec{D}\cdot d\vec{S}$

Elektrik, elektronik, mikrodalga ve anten mühendisliklerinin temeli bu dört kanundur. Kirchhoff akım ve gerilim kanunları, elektromanyetik indüksiyon, Ohm kanunu, devre elemanlarının terminal denklemleri vb'nin kaynağı Maxwell denklemleridir. Mühendislik müfredatının ilk yıllarında verilen fizik ve matematik derslerinin önemli bir kısmı bu denklemlerin anlaşılmasının alt yapısını oluşturur. Kalkulus ve lineer cebir bu meyanda sayılabilecek önemli derslerdir. Bunun

1 Rahman suresi: 5
2 En'am Suresi: 96
3 [The universe] cannot be read until we have learnt the language and become familiar with the characters in which it is written. It is written in mathematical language, and the letters are triangles, circles and other geometrical figures, without which means it is humanly impossible to comprehend a single word. Opere Il Saggiatore p. 171
4 Bakara suresi: 255
5 Displacement current


yanında, sistemler ve sistemlerin işledikleri işaret kavramları ortaya çıkmıştır. Sistemler ve işaretler, Laplace dönüşümleri, Fourier dönüşümleri, Fourier serileri gibi matematiksel aletlerle kolayca ifade edilebilmektedir. Bu da mühendislik analiz ve sentezine kolaylık sağlamaktadır. 80'li yılların sonundan başlayarak rakamsal teknolojinin ucuzlaması ve yayılması ile z-dönüşümleri de sistemler ve işaretler alanında kendine sağlam bir yer edinmiştir. Devre konuları kendi dersleri içinde ele alınmakla beraber, sistemler ve işaretler devre dersleri içinde verilememektedir.

Kitap esas olarak Laplace, Fourier, z-dönüşümlerini ve Fourier serilerini hedef almıştır. Devre analiz ve sentezi, geribesleme (feedback) teorisi, filtre teorisi, işaretler ve sistemler, sayısal işaret işleme ve kontrol teorisi elektrik-elektronik ve biyomedikal mühendisliklerinin en önemli yapı taşları arasındadır. Adı geçen konular bu yapı taşlarının harcı mesabesindedir. Bu konuları kavrayabilmenin ön şartı ise karmaşık sayılar ve karmaşık sayı fonksiyonları ile ilgili konseptlerin iyi öğrenilmesidir. Bu yüzden kitap 1. bölümde kompleks sayıları ve 2. bölümde kompleks fonksiyonları ve analitisiteyi ele almıştır. İster analog, ister sayısal olsun, sistem tasarımında karmaşık sayı ve analizinin büyük bir ağırlığı vardır. Karmaşık sayılar ve analitik fonksiyonlar kalkulusun bıraktığı noktadan bir adım sonrasına kapı aralamaktadır. Bu sebeple kalkulus bu dersin olmazsa olmaz ön şartıdır; birçok konu ve kavram kalkulus bilgileri üzerine inşa edilmektedir. Amacımız, öğrenciyi elektrik mühendisliğinde çok önemli bir araç olan dönüşüm teknikleri matematiği ile tanıştırmaktır. Okuyucunun kalkulus ve diferansiyel denklemler derslerini aldığını kabul ediyoruz. Örneklerde zaman zaman elektrik devreleri kullanılmakta ise de burada gaye elektrik devrelerini öğretmek değil, elektrik devrelerini vasıta olarak ve motivasyon gayesiyle kullanmaktır. Elektrik devreleri yanında mekanik ve bilgisayar uygulamalarına da yer verilmiştir.

1970'li yıllara kadar matematik, logaritma ve trigonometri tablolarıyla, elle hesaplayarak; hesap makineleriyle veya sürgülü hesap cetvelleriyle yapılıyordu. Mainframe tabir edilen bilgisayarlar şimdiki bilgisayarlarımızla karşılaştırıldığında hem çok pahalı, hem çok yavaş, bellek kapasitesi çok düşük, görsel arayüzleri ilkel; çok özel ve elit kullanıcıların dışında ulaşılması çok zor makinelerdi. Üniversitede ikinci sınıf öğrencisi olduğum yıl bir aylık harçlığımı ARISTO Multilog sürgülü hesap cetveline yatırdığımı hatırlıyorum. Yaz tatilinde Almanya'ya giden bir arkadaşıma bu parayı verip heyecanla sürgülü hesap cetvelimin gelmesini beklemiştim. Daha sonra bu cetveli, öğle sıcağında arabada unutup, rakamların boyalarını kabarmış bir şekilde bulunca büyük bir hayal kırıklığı yaşamıştım. Şimdi artık hesap makinelerini ve sürgülü hesap cetvellerini bilen ve hatırlayan kaç kişi kalmıştır bilemem6... Bugün bunların yerine bilgisayarlarımız var; çok kuvvetli matematik yazılımlarımız var. Siz sürgülü hesap cetveli yerine MATLAB, SCILAB, MATHCAD, MATHEMATICA, MAXIMA vb. kullanabilirsiniz. Gördüğünüz gibi bizim FACIT ve ARISTO'muza karşı sizin ne kadar fazla seçeneğiniz var! FSMVÜ öğrencilerine LabVIEW kullanma imkanı vermektedir. Konuyu cazip hale getirmek için konu anlatımlarında ve örneklerde LabVIEW yazılımını sıkça kullandık. Bizim eskiden hesap makinelerimizden ve sürgülü hesap cetvellerimizden aldığımız hazzın çok üstünde bir zevk alacağınızı garanti edebilirim. Elbette ki LabVIEW tek seçenek değil; biz lisanslı kullanıcısı olduğumuz için LabVIEW üzerine yoğunlaştık. Lisans dışında, LabVIEW programlama, veri toplama ve analiz etme, sayısal sistem ve cihaz kontrol etme gibi pek çok yeteneği sebebiyle öğrenciye büyük katkı sağlayan bir platformdur. Örneklerimizde ve bölüm sonu problemlerinde LabVIEW kullanmamızın saiklerinden birisi de budur.

Kitabı İngilizce yazmaya karar verdik. Bunun birkaç sebebi var.

1. FSMVÜ'nde %30 İngilizce eğitim uygulanmaktadır. Kitabın İngilizce te'lifi bu eğitime destek niteliğindedir.

2. Üniversitelerimizin teknik bölümlerinin mezunlarında ciddi bir İngilizce zafiyeti söz konusudur.

3. Kitaba konu olan sahalarda Türkiye'deki akademiya içinde bir fikir birliği bulunmamaktadır. Aynı kavram veya terminoloji farklı üniversitelerde, farklı akademisyenler tarafından farklı kelime ve terminoloji ile ifade edilmeye çalışılmaktadır. Bazı terminolojinin Türkçe'de henüz stabil olmaması ve yazarın mevcut terminoloji ile zaman zaman sıkıntı yaşaması sebebiyle, bunların zaman kaybına yol açacağı endişesi doğmuştur. Dil konusunun milli bir problem olduğunu düşünüyor ve Türkiye çapında bütün üniversitelerin ve Türk Dil Kurumu'nun katkılarıyla çözülmesi ve fikir birliği sağlanması gerektiğine inanıyoruz.

4. İleride bu kitabı belki iki dilli olarak basma ihtimali. Bu tabii ki müstakbel bir proje olabilir. Türkçe terminoloji problemi halledildikten sonra bu adım düşünülebilir. Baştan işi bu boyutu ile ele almak ise bu kitaba belki hiç başlayamamak anlamına gelecekti. "Tamamına ulaşılmayan tamamen terkedilmez" kaidesince İngilizce başlayıp sonra inşaallah kitabın iki dilli versiyonuna geçeriz düşüncesi ağırlık kazandı.

Derslerimizin Türkçe olması, kitaptaki konuların sınıfta Türkçe anlatılması demektir. Kitabın İngilizce, dersin Türkçe olması sebebiyle öğrencilerimizin teknik literatüre ve terminolojiye daha kolay aşina olmalarını umuyoruz.

Orhan Özhan

İstanbul, 2016

6 "ARISTO slide rule", "FACIT calculator" anahtar kelimeleri ile bir internet taraması yaparak veya http://www.sliderulemuseum.com/Aristo.htm sitesinden o zaman kullanılan teknoloji hakkında yeterli bilgiye ulaşabilirsiniz.


PREFACE

In the name of Allah, the Beneficent, the Merciful

Mathematics... Undoubtedly there is much said about it. The most beautiful of what is said has been said in the most beautiful of words: "The sun and the moon [move] by precise calculation7", "[He is] the cleaver of daybreak and has made the night for rest and the sun and moon for calculation. That is the determination of the Exalted in Might, the Knowing8". In the physical universe every natural event which is the subject matter of science can be expressed in mathematics. Every material creation in the universe can be understood in this language. As Galileo once said: "Philosophy [i.e. physics] is written in this grand book — I mean the universe — which stands continually open to our gaze, but it cannot be understood unless one first learns to comprehend the language and interpret the characters in which it is written. It is written in the language of mathematics, and its characters are triangles, circles, and other geometrical figures, without which it is humanly impossible to understand a single word of it; without these, one is wandering around in a dark labyrinth.9" Contrary to the claim that mathematics is created by man, mathematics, the language of the created as opposed to the creation, has no beginning, for mathematics is within Allah's knowledge; Allah has no beginning and His Knowledge has no beginning either. As such, mathematics imparts to man a distinct pleasure and satisfaction, its taste and beauty being perceived by the intellect. In accordance with the divine word "And they encompass nothing of His knowledge except for what He wills10", if we ever know mathematics it is solely due to Allah's benefaction and endowment; and this is a grand honor and blessing for man. Even if the legacy of mathematics acquired during the history of humanity and science should fill libraries, this acquisition is nothing but a droplet from the ocean, or even less.

It is with this minuscule legacy that the atomic structure and the elementary particles have been understood; astronomy has reached a level which made it possible to send spacecraft beyond the solar system and land probes on Martian soil; electronics and communication technology have devised smart phones which let people talk to each other face-to-face from a distance; the world has been transformed into a small village by air and sea transportation; an explosion has been witnessed in information technology because of developments in computer and internet technologies; incredible imaging and treatment systems in medicine have ushered in unprecedented advances in diagnosing and treating diseases. It is known and accepted by everyone that physics, the master of the material sciences, is the branch of knowledge that best uses this legacy. The various engineering branches are but applications of physics. Some fields of classical and modern physics have crystallized into specific engineering disciplines to ease certain aspects of our daily lives. Mechanical engineering and civil engineering are older by birth than electrical engineering; the late 19th century and early 20th century are the time slots when the latter was born and flourished.

Let us mention an example of the influence of mathematics on our lives. In the 19th century James Clerk Maxwell added a small term to Ampere's Law, already known before him. Thanks to this simple term, the so-called displacement current, Ampere's Law and Faraday's Law became coupled. The simultaneous spatial and temporal solution of these equations led to the prediction of electromagnetic waves which possessed the properties of light. Indeed, toward the end of the 19th century Heinrich Hertz demonstrated through experiments the existence of these waves and that they behaved like light. In the early 20th century J. A. Fleming and Lee De Forest invented the diode and triode vacuum tubes, whereby a new engineering discipline was born; the new discipline leveraged Maxwell's and Hertz's ideas and was called electronics11.

Four canonic laws, i.e., Gauss's laws, Faraday's law and Ampere's law, are collectively called Maxwell's equations. Maxwell's minuscule contribution to Ampere's law, the displacement current $\frac{d}{dt}\int_S \vec{D}\cdot d\vec{S}$, is such a big discovery that these four laws are known by Maxwell's name. Without this term electromagnetic waves would not have been discovered.

Gauss's Law (electric): $\oint_S \vec{D}\cdot d\vec{S} = Q$

Gauss's Law (magnetic): $\oint_S \vec{B}\cdot d\vec{S} = 0$

Faraday's Law: $\oint_C \vec{E}\cdot d\vec{l} = -\frac{d}{dt}\int_S \vec{B}\cdot d\vec{S}$

Ampere's Law: $\oint_C \vec{H}\cdot d\vec{l} = I + \frac{d}{dt}\int_S \vec{D}\cdot d\vec{S}$

7 Surah Ar-Rahman: 5
8 Surah Al-An'am: 96
9 The Assayer (Il Saggiatore), p. 171, Galileo Galilei, Rome, 1623.
10 Surah al-Baqara: 255
11 The naming, electronics, was probably because the diode, triode and later pentode were electron devices.


These four laws are fundamental to electrical, electronics, microwave and antenna engineering. Kirchhoff's current and voltage laws, electromagnetic induction, Ohm's law, and the terminal equations of circuit elements are derived from Maxwell's equations. The physics and mathematics courses in the freshman and sophomore years of the electrical engineering curriculum serve as the infrastructure towards understanding these equations. Calculus and linear algebra are important subjects towards this end. Besides these, signals and systems concepts have emerged in electrical engineering. Signals and systems can be studied using such mathematical tools as Laplace transforms, Fourier transforms and Fourier series. These tools provide ease of analysis and synthesis with signals and systems. With the spread and affordability of digital technology starting in the late 1980s, the z-transform too has found a niche and secured its place in signals and systems applications.

At FSMVU we have observed that the existing math courses in the curriculum are insufficient to ready the student for electrical engineering courses. As such, it has been deemed appropriate to set up a course which would fill this very gap. As a result MAT235 has been introduced into the biomedical engineering and electrical-electronics engineering curricula. This book is a compilation of the author's lecture notes since the course was assigned to him. The idea was that notes in pdf or printed form would be better than loose notes.

The main objective of the book is to teach Fourier series and the Laplace, Fourier, and z-transforms. Circuit analysis and synthesis, feedback theory, filter theory, signals and systems, and control theory are the most important building blocks of electrical-electronics and biomedical engineering. The topics targeted in this book are indispensable tools for studying these fields. These tools cannot be grasped well without a working knowledge of complex number theory. Therefore the book starts in Chapter 1 with complex numbers and continues with functions of complex numbers and analyticity in Chapter 2. Whether the design is analog or digital, complex numbers and functions play a crucial role in system design. Complex numbers and analytic functions open a door to a wonderful world where calculus has left off. Therefore calculus is an essential prerequisite for this course, as many novel concepts are built on calculus and some "old" ideas are refined or generalized. Our aim is to introduce the student to transform mathematics. We assume that the reader has taken courses in calculus and differential equations. Although we liberally resort to electrical circuits in examples time and again, it is not our intention in this course to teach electric circuits; we use them as a vehicle and a motivation. Besides electric circuits, mechanical and computing examples also appear in the book.

Until the 1970s mathematical calculations were done manually using logarithmic and trigonometric tables, or using mechanical calculators or slide rules. Mainframe computers were machines which, compared to our modern desktop or laptop computers, were too expensive, too slow and too bulky; had too small a memory and a primitive user interface; and were hardly accessible to average users. I remember having saved a month of my pocket money to order an ARISTO Multilog slide rule through a friend who was traveling to Germany that summer vacation. I had eagerly waited for the arrival of the slide rule with excitement and impatience. Later on I was utterly disappointed and shocked to find that the dye of the letters and numbers had come off due to heat, because I had forgotten it inside a car under intense noon temperatures. Now I don't know how many people remain who know about and remember mechanical calculators and slide rules12... Today we enjoy computers on which we run extremely powerful mathematics software. In lieu of a slide rule you can use MATLAB, SCILAB, MATHCAD, MATHEMATICA, MAXIMA, etc. All of these packages have provisions for complex number operations and functions of complex numbers. As you will agree and appreciate, you have far more math tool alternatives than we once had! LabVIEW is just another fantastic math tool for students. In order to make the subject more interesting and attractive, we have often used LabVIEW in the text and examples. With LabVIEW or similar tools you can derive pleasure and satisfaction much superior to what we derived from those archaic tools; I can guarantee it! Since we are a licensed user, we have focused on LabVIEW, with a few examples in MATLAB or SCILAB. As a data acquisition, analysis, and instrument control platform, LabVIEW contributes to the student's intellectual development. These very attributes are the motives for our choice of LabVIEW in the text, examples and chapter problems.

However, a warning is in order for the astute reader. All this super software will do you no good unless you know and understand the underlying mathematics. It merely makes your job easier, more fun and more enjoyable. Convince yourself that, without proper understanding, even with this software you cannot go very far. Do not settle for a trial-and-error strategy with software, for the knowledge itself inspires and gives you insight. Never succumb to the common fallacy that you are taught useless theories in this and the other courses that you have to take. Hopefully the software of our choice will also help you like mathematics as it is, and make it a joy to learn through its number-crunching power and breath-taking graphics.

Finally as for the language, we have decided to write the book in English for a few reasons.

1. Some Turkish universities teach in English exclusively; some teach 30% of their curricula in English. Our choice of English hopefully supports this education.

12 Enter the keywords "ARISTO slide rule", "FACIT calculator" or the url http://www.sliderulemuseum.com/Aristo.htm into a web browser to learn about the computing technology used then.


2. A serious weakness and insufficiency of university graduates in English is of concern. A text written in English while the course is taught in Turkish should help the student learn the profession in two languages.

3. Regarding the fields touched upon in the book, there exists no consensus on Turkish terminology among Turkish academia. Different Turkish terminology can be used for the same concepts at different universities and by different academicians. Some terminology, being as yet unstable in Turkish, has caused the author trouble in lectures at times; as such, inevitable delays in writing could be incurred. We think that the language issue is a national problem which has to be solved in order to arrive at a consensus, with collective contributions from universities across Turkey and the Turkish Language Institute.

4. The viable possibility of publishing the book in bilingual form. Certainly this can be a future project which can be taken up after solving the terminology problem. To start the bilingual project from the beginning would render the work a mission impossible. In compliance with the maxim "What cannot be grasped in its entirety is not abandoned entirely", the thought of starting in English first, then continuing in bilingual form, has gained weight.

Definitely, chapters one and two must be taught in sequence and before the other chapters. The Fourier series and the Fourier transform can be taught in any desired order, before or after the Laplace transform. In a one-semester course the z-transform can perhaps be left as the last topic.

Orhan Özhan

Istanbul, 2016


Contents

1 Complex Numbers
  1.1 Representation of Complex Numbers
  1.2 Euler's Formula
  1.3 Mathematical Operations
    1.3.1 Conjugate
    1.3.2 Identity
    1.3.3 Addition and Subtraction
    1.3.4 Multiplication and Division
    1.3.5 Rotating a number in complex plane
  1.4 Roots of a complex number
  1.5 Applications of Complex Numbers
    1.5.1 Complex Numbers versus Trigonometry
    1.5.2 Phasors
    1.5.3 3-Phase Electrical Circuits
    1.5.4 Negative Frequency
    1.5.5 Complex Numbers in Mathematics Software

2 Functions of a Complex Variable
  2.1 Limit of a Complex Function
  2.2 Derivative of Complex Functions and Analyticity
  2.3 Harmonic Functions
  2.4 Some Elementary Functions
    2.4.1 Polynomial and Rational Function
    2.4.2 Exponential Function of a Complex Variable
    2.4.3 Logarithm of a Complex Number
    2.4.4 Trigonometric Functions of a Complex Variable
    2.4.5 Hyperbolic Functions of a Complex Variable

3 The Laplace Transform
  3.1 Motivation to use Laplace Transform
  3.2 Definition of the Laplace Transform
  3.3 Properties of the Laplace Transform
  3.4 The Inverse Laplace Transform
    3.4.1 Real roots
    3.4.2 Complex roots
    3.4.3 Multiple roots
  3.5 Poles and Zeros
    3.5.1 Factoring polynomials
    3.5.2 Poles and time response
    3.5.3 An alternative way to solve differential equations
  3.6 Applications of Laplace Transform
    3.6.1 Electrical systems
    3.6.2 Evaluation of definite integrals

4 The Fourier Series
  4.1 Vectors and Signals
  4.2 Periodicity
  4.3 Calculating Fourier Series Coefficients
  4.4 Properties of Fourier Series
    4.4.1 Symmetry Conditions
    4.4.2 Time Operations
  4.5 Parseval's Relation
    4.5.1 Convergence of Fourier Series
    4.5.2 Gibbs Phenomenon
  4.6 Applications of Fourier Series

5 The Fourier Transform
  5.1 Introduction
  5.2 Definition of the Fourier Transform
  5.3 Fourier Transform versus Fourier Series
  5.4 Convergence of the Fourier Transform
  5.5 Properties of the Fourier Transform
    5.5.1 Symmetry Issues
    5.5.2 Linearity
    5.5.3 Time Scaling
    5.5.4 Time Shifting
    5.5.5 Frequency Shifting (Amplitude Modulation)
    5.5.6 Differentiation with respect to time
    5.5.7 Integration with respect to time
    5.5.8 Duality
    5.5.9 Convolution
    5.5.10 Multiplication in Time Domain
    5.5.11 Parseval's Relation
  5.6 Fourier Transform versus Laplace Transform
  5.7 Fourier Transform of Discrete Signals
    5.7.1 Short-Term Fourier Transform
    5.7.2 The Discrete Fourier Transform
  5.8 Two Dimensional Fourier Transform
  5.9 Applications
    5.9.1 Cepstrum Analysis
    5.9.2 Circuit Applications
    5.9.3 Filtering
    5.9.4 Amplitude Modulation and Demodulation

6 z-Transform
  6.1 Definition of the z-Transform
  6.2 Region of Convergence for the z-Transform
  6.3 z-Transform Properties
  6.4 The Inverse z-Transform
    6.4.1 Partial Fraction Expansion
  6.5 Parseval's Theorem
  6.6 One-Sided z-Transform
    6.6.1 Difference Equations in LabVIEW
  6.7 Frequency Response
  6.8 Applications of z-Transform

7 Power Series
  7.1 Taylor Series
  7.2 Maclaurin Series
  7.3 Laurent Series


Chapter 1

Complex Numbers

Euler formula, the amazing link from the exponential function to complex numbers. The second line, derived from the first, is called by Feynman the most elegant formula in mathematics.

The invention of the decimal number system, zero and negative numbers have been great intellectual strides for us to understand and work with mathematical problems. The set of all positive and negative numbers, and zero, constitutes a framework that we call "real numbers" and denote by R. With real numbers we can tackle a large set of problems in our daily lives. However, as our intellectual realm has widened further, we have encountered instances which our established number system is unable to handle.

The roots of -1 were shunned by people as being impossible, undefined or unrealistic. In the 16th century the Italian mathematician Gerolamo Cardano sought two numbers whose sum is 10 and whose product is 40. He gave two "impossible" results, $5+\sqrt{-15}$ and $5-\sqrt{-15}$. Cardano warned that these were "meaningless, fictitious and imaginary [Cardano]." In the 18th century the Swiss mathematician Leonhard Euler wrote a book on algebra and gave several examples and applications of imaginary numbers. He made, however, apologetic remarks about these fictitious numbers, saying that:

"All such expressions as $\sqrt{-1}$, $\sqrt{-2}$, etc. are impossible or imaginary numbers, since they represent roots of negative quantities, and of such numbers we may truly assert that they are neither nothing1, nor greater than nothing, nor less than nothing, which necessarily constitutes them imaginary or impossible [Euler]."

To generalize the concept of number, we can invent a number i (j in electrical engineering) which, when multiplied by itself, produces -1. This very number, being strange, unreal, or whatever, has been called an imaginary number, as suggested by Euler. Our familiar old numbers have been called real numbers by the same token. By mixing real and imaginary numbers we form a new framework which we call complex numbers. The set of all complex numbers is denoted by C.

Real numbers are traditionally illustrated by a line which has a point for 0. The numbers to the right of zero are positive; those to its left are negative. Likewise we can use the plane to represent complex numbers. We draw two perpendicular lines for the real and imaginary axes. On the real and imaginary number axes 0 is common. On this plane every point has two components: one representing the real part and another representing the imaginary part.

1.1 Representation of Complex Numbers

A complex number z is a pair of real numbers (x, y) in the Cartesian coordinate system and is denoted by

$z = x + jy$    (1.1)

where $j = \sqrt{-1}$. x and y are called the real and imaginary parts of z respectively:

$x = \mathrm{Re}(z)$
$y = \mathrm{Im}(z)$

1 By "nothing" Euler means "zero."


Figure 1.1: A complex number can be represented by a point in the complex plane.

From the definition of j one can deduce that $j^2 = -1$, $\frac{1}{j} = -j$, $j^3 = -j$ and $j^4 = 1$.

z can also be represented as

$z = r\angle\theta = |z|\angle\theta$    (1.2)

where r = |z| is called the norm or magnitude of z, and θ is its argument. θ is the angle that the line drawn from the origin to the point (x, y) makes with the positive real axis. Polar coordinates are obtained from rectangular coordinates using the right triangle in Fig. 1.1:

$r = \sqrt{x^2 + y^2}$
$\theta = \tan^{-1}\left(\frac{y}{x}\right)$

Eqn. 1.1 and Eqn. 1.2 are called the rectangular and polar representations of z respectively.
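The conversion between the rectangular and polar forms is a one-liner in the mathematics packages mentioned in the preface. The following MATLAB/Octave sketch (an added illustration, not part of the original text) converts a sample number both ways; note that angle uses atan2(y, x) internally, so it returns the correct quadrant, which a bare tan⁻¹(y/x) does not.

% Rectangular <-> polar conversion of a complex number in MATLAB/Octave.
z = 3 + 4j;                 % rectangular form x + jy
r = abs(z);                 % magnitude, sqrt(3^2 + 4^2) = 5
theta = angle(z);           % argument in radians, atan2(4, 3) = 0.9273
z_back = r*exp(1j*theta);   % back to rectangular form, 3 + 4i again
fprintf('r = %.4f, theta = %.4f rad\n', r, theta);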

1.2 Euler's Formula

Let z be a complex number whose magnitude is 1, i.e., $|z| = r = \sqrt{x^2 + y^2} = 1$. Then Eqn. (1.2) yields z as 1∠θ. Another beautiful notation for z = 1∠θ is $e^{j\theta}$, which is technically more elegant. As the following development is about to demonstrate, this representation gives us a deeper insight into the nature of complex numbers, and helps us establish useful results.

Complex Exponential. We can carry out the Maclaurin series expansion of the exponential function $e^{j\theta}$:

$e^{j\theta} = 1 + j\theta + \frac{(j\theta)^2}{2!} + \frac{(j\theta)^3}{3!} + \frac{(j\theta)^4}{4!} + \frac{(j\theta)^5}{5!} + \cdots + \frac{(j\theta)^n}{n!} + \cdots$

$= 1 + j\theta - \frac{\theta^2}{2!} - j\frac{\theta^3}{3!} + \frac{\theta^4}{4!} + j\frac{\theta^5}{5!} - \cdots$

$= \left(1 - \frac{\theta^2}{2!} + \frac{\theta^4}{4!} - \cdots\right) + j\left(\theta - \frac{\theta^3}{3!} + \frac{\theta^5}{5!} - \cdots\right)$

Since we recognize $1 - \frac{\theta^2}{2!} + \frac{\theta^4}{4!} - \cdots$ as $\cos\theta$, and $\theta - \frac{\theta^3}{3!} + \frac{\theta^5}{5!} - \cdots$ as $\sin\theta$, we conclude that

$e^{j\theta} = \cos\theta + j\sin\theta$    (1.3)

A special case of Eqn. (1.3) is called Euler's Identity and obtained by setting θ = π:

$e^{j\pi} + 1 = 0$    (1.4)

which Richard Feynman called "the most remarkable formula in mathematics" [Feynman].
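A quick numerical check of Eqns. (1.3) and (1.4) with any of the packages mentioned in the preface is reassuring. The MATLAB/Octave sketch below (added for illustration, not part of the original text) compares exp(1j*theta) against cos(theta) + 1j*sin(theta) over a grid of angles; the differences are at round-off level.

% Numerical check of Euler's formula and Euler's identity.
theta = linspace(0, 2*pi, 1000);
lhs = exp(1j*theta);
rhs = cos(theta) + 1j*sin(theta);
max_err = max(abs(lhs - rhs));         % ~1e-16, i.e. machine precision
euler_identity = abs(exp(1j*pi) + 1);  % Eqn. (1.4); ~1.2e-16 instead of exactly 0
fprintf('max error = %g, |e^{j pi} + 1| = %g\n', max_err, euler_identity);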

1.3 Mathematical Operations

1.3.1 Conjugate

The conjugate of a complex number z = x + jy is defined as

$z^* = x - jy$

In polar coordinates the conjugate is expressed by

$z^* = re^{-j\theta} = r\angle{-\theta}$

The conjugate of z is also denoted by $\bar{z}$. From this definition it follows that

$\mathrm{Re}(z) = \frac{z + z^*}{2}$ and $\mathrm{Im}(z) = \frac{z - z^*}{2j}$.

Replace all j's with -j to find the conjugate of an expression of complex numbers. Using Eqn. (1.3) we have the following relations:

$z = re^{j\theta} = r\cos\theta + jr\sin\theta$
$z^* = re^{-j\theta} = r[\cos(-\theta) + j\sin(-\theta)]$
$z^* = r(\cos\theta - j\sin\theta)$

1.3.2 Identity

Two complex numbers $z_1$ and $z_2$ are said to be identical if and only if their real parts are equal and their imaginary parts are equal, i.e.,

$z_1 = z_2$ if $x_1 = x_2$ and $y_1 = y_2$.

This is equivalent to saying that $z_1$ and $z_2$ are identical if and only if their magnitudes are equal and their arguments are equal, i.e.,

$z_1 = z_2$ if $r_1 = r_2$ and $\theta_1 = \theta_2$.

Figure 1.2: (a) Addition and (b) subtraction of complex numbers.

1.3.3 Addition and Subtraction

Addition of two complex numbers is accomplished by adding their respective real and imaginary parts using the rectangular form (Eqn. (1.1)):

$z = x + jy = z_1 + z_2 = x_1 + jy_1 + (x_2 + jy_2)$
$x + jy = (x_1 + x_2) + j(y_1 + y_2)$

Hence:

$x = x_1 + x_2$, and
$y = y_1 + y_2$.

Subtraction is likewise performed as:

$z = x + jy = z_1 - z_2 = x_1 + jy_1 - (x_2 + jy_2)$
$x + jy = (x_1 - x_2) + j(y_1 - y_2)$

Hence:

$x = x_1 - x_2$, and
$y = y_1 - y_2$.

The conjugate of the sum of complex numbers is equal to the sum of the conjugates of those numbers, i.e.,

$(z_1 + z_2)^* = z_1^* + z_2^*$

which can be proven using the definitions of addition and conjugation:

$(z_1 + z_2)^* = (x_1 + jy_1 + x_2 + jy_2)^* = [(x_1 + x_2) + j(y_1 + y_2)]^* = (x_1 + x_2) - j(y_1 + y_2) = (x_1 - jy_1) + (x_2 - jy_2) = z_1^* + z_2^*$

For addition and subtraction of complex numbers in polar coordinates see Problems 16 and 17.
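These rectangular-form rules are exactly what the software of Section 1.5.5 carries out internally. A two-line MATLAB/Octave check (an added illustration, with arbitrary sample values) of the addition rule and of the conjugate-of-a-sum property:

% Checking addition and (z1 + z2)* = z1* + z2* numerically.
z1 = 2 - 3j;  z2 = -1 + 5j;
s = z1 + z2                                        % = 1 + 2i: real and imaginary parts add separately
err = abs(conj(z1 + z2) - (conj(z1) + conj(z2)))   % = 0, confirming the conjugation property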

1.3.4 Multiplication and Division

Multiplication of $z_1$ and $z_2$ in rectangular form is carried out as:

$z = z_1 z_2$
$x + jy = (x_1 + jy_1)(x_2 + jy_2) = x_1 x_2 - y_1 y_2 + j(x_1 y_2 + y_1 x_2)$

Thus

$x = x_1 x_2 - y_1 y_2$
$y = x_1 y_2 + y_1 x_2$

Multiplication of $z_1$ and $z_2$ in polar form as expressed by Eqn. (1.2) is more convenient:

$z = re^{j\theta} = r\angle\theta$
$z_1 z_2 = r_1 e^{j\theta_1} r_2 e^{j\theta_2} = (r_1\angle\theta_1)(r_2\angle\theta_2)$
$z_1 z_2 = r_1 r_2 e^{j(\theta_1 + \theta_2)} = r_1 r_2 \angle(\theta_1 + \theta_2)$    (1.5)
$r = r_1 r_2$
$\theta = \theta_1 + \theta_2$

Division of $z_1$ by $z_2$ in Cartesian form is carried out as:

$z = \frac{z_1}{z_2} = \frac{x_1 + jy_1}{x_2 + jy_2}$

Multiplying the numerator and denominator by $z_2^*$ yields

$z = \frac{z_1 z_2^*}{z_2 z_2^*} = \frac{(x_1 + jy_1)(x_2 - jy_2)}{(x_2 + jy_2)(x_2 - jy_2)} = \frac{x_1 x_2 + y_1 y_2 + j(-x_1 y_2 + y_1 x_2)}{x_2^2 + y_2^2}$

Again, division is much easier in polar form:

$z = \frac{z_1}{z_2} = \frac{r_1\angle\theta_1}{r_2\angle\theta_2} = \frac{r_1}{r_2}\angle(\theta_1 - \theta_2)$    (1.6)

Special case:

$\frac{z}{z^*} = 1\angle 2\theta$

We can state the relations involving complex conjugates of multiplication and division:

$(z_1 z_2)^* = z_1^* z_2^*$
$\left(\frac{z_1}{z_2}\right)^* = \frac{z_1^*}{z_2^*}$

We leave the proof to the reader as an exercise. As a special case of the preceding relations, the magnitude and square of a complex number can be written in rectangular coordinates as:

$z\cdot z^* = r^2 = x^2 + y^2$
$z^2 = x^2 - y^2 + j2xy$

which become in polar representation:

$zz^* = (r\angle\theta)(r\angle{-\theta}) = r^2$ and
$z^2 = (r\angle\theta)(r\angle\theta) = r^2\angle 2\theta$    (1.7)

Generalizing Eqn. (1.7) to n > 2 we obtain $z^n = r^n\angle n\theta$, which with r = 1 yields

$(\cos\theta + j\sin\theta)^n = \cos n\theta + j\sin n\theta$    (1.8)

which is called De Moivre's formula.
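Polar multiplication and De Moivre's formula are easy to verify numerically; the MATLAB/Octave sketch below (added for illustration, sample values chosen arbitrarily) checks Eqns. (1.5) and (1.8).

% Polar multiplication: magnitudes multiply, arguments add.
z1 = 2*exp(1j*pi/6);  z2 = 3*exp(1j*pi/4);
p = z1*z2;
fprintf('|p| = %.4f (expect 6), arg p = %.4f (expect %.4f)\n', abs(p), angle(p), pi/6 + pi/4);
% De Moivre: (cos t + j sin t)^n = cos(nt) + j sin(nt).
n = 5;  t = 0.7;
err = abs((cos(t) + 1j*sin(t))^n - (cos(n*t) + 1j*sin(n*t)))   % round-off level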

Example 1. Consider a complex function of ω given by $H(\omega) = 1 + \frac{j\omega}{100}$. Plot $G(\omega) = 20\log|H(\omega)|$ and $\arg[H(\omega)]$.

$G(\omega) = 20\log\left|1 + \frac{j\omega}{100}\right| = 20\log\sqrt{1 + \left(\frac{\omega}{100}\right)^2}$

If $\omega \ll 100$ then $1 + \left(\frac{\omega}{100}\right)^2 \approx 1$ and we get $G(\omega) \approx 0$, $\arg[H(\omega)] \approx 0$.

If $\omega \gg 100$ then $1 + \left(\frac{\omega}{100}\right)^2 \approx \left(\frac{\omega}{100}\right)^2$ and we get $G(\omega) \approx 20\log\omega - 40$, $\arg[H(\omega)] = \arctan(\omega/100) \approx \pi/2$. On the logarithmic frequency axis $G(\omega)$ is a straight line which crosses 0 dB at $\log_{10}\omega = 2$ with a slope of 20 dB per decade.

When $\omega = 100$ then $1 + \left(\frac{\omega}{100}\right)^2 = 2$ and we get $G(\omega) = 20\log\sqrt{2} \approx 3$ dB, $\arg[H(\omega)] = \arg[1 + j] = \arctan 1 = \pi/4$.

In electrical engineering $G(\omega)$ and $\arg[H(\omega)]$ are called the dB gain and the phase. In Fig. 1.3, $20\log|H(\omega)|$ and $\arg[H(\omega)]$ are plotted versus ω. These two graphs, where the horizontal scale is logarithmic, are called Bode plots.


Figure 1.3: Plot of $20\log|1 + j\omega/100|$ and $\arg[1 + j\omega/100]$ versus log ω.
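The plots of Fig. 1.3 are easily reproduced with a few lines of MATLAB/Octave (an added illustration, not from the original text):

% Bode plots of H(w) = 1 + jw/100 over 1 <= w <= 10^4 rad/s.
w = logspace(0, 4, 400);
H = 1 + 1j*w/100;
subplot(2,1,1); semilogx(w, 20*log10(abs(H))); grid on; ylabel('20 log|H| (dB)');
subplot(2,1,2); semilogx(w, angle(H)); grid on; ylabel('arg H (rad)'); xlabel('\omega (rad/s)');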

1.3.5 Rotating a number in complex plane

As a result of Eqn. (1.5), multiplying a complex number by $e^{j\theta}$ increases the argument of the number by θ with no change to the magnitude. This is nothing but rotating the complex number counterclockwise by an angle θ in the complex plane. If $z = re^{j\varphi}$, multiplying it by $e^{j\theta}$ yields

$ze^{j\theta} = re^{j\varphi}e^{j\theta} = r\exp[j(\varphi + \theta)]$

Since $e^{j\pi/2} = j$, multiplying a complex number by j rotates the number counterclockwise by 90 degrees, and multiplication by -j causes a 90 degree rotation clockwise.

This idea can be applied to rotating 2-D images. Every image is a 2-D array of pixel values. A pixel is characterized by its x, y coordinates and its gray level. The coordinates can be thought of as the real and imaginary parts of a complex number z = x + jy. Multiplying z by $e^{j\theta}$ rotates the pixel by θ. After rotation the gray level at z is attributed to the new coordinate $ze^{j\theta}$. When this is done to all pixels of the image, we obtain the rotated image. Fig. 1.4 shows an image rotated by -45°.

Figure 1.4: Rotating graphical objects.

Complex numbers are not monstrous, but you can build monsters with complex numbers. The next example shows how to build a Dragon fractal using complex number operations. These fractals are constructed by repeatedly adding new complex numbers between adjacent complex numbers. Let $z_1$ and $z_2$ be existing numbers. From $z_1$ and $z_2$ a third point $z_3$ can be created through $z_3 = z_1 + 0.5(1 \mp j)(z_2 - z_1)$, applied successively. Multiplication by $0.5(1 \mp j)$ scales the difference $z_2 - z_1$ by $1/\sqrt{2}$ and rotates it through ∓45°. If we rotate by 45° in one step we rotate by -45° in the next step. The process is repeated as many times as desired. After a sufficiently large number of iterations the dragon shape emerges. Fig. 1.5 shows the development of the fractal until we eventually obtain the dragon figure after 524287 steps.
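The construction just described is compact enough to program directly. Below is a rough MATLAB/Octave sketch (an added illustration; the starting segment from 0 to 1 and the restart of the left/right alternation at each pass are assumptions consistent with the description above). Sixteen passes already make the dragon visible; 19 passes insert 2^19 - 1 = 524287 new points, as in Fig. 1.5.

% Dragon fractal by repeated midpoint insertion between adjacent complex numbers.
z = [0; 1];                               % the initial pair of points
for pass = 1:16                           % each pass doubles the number of segments
    zn = zeros(2*length(z) - 1, 1);
    zn(1:2:end) = z;                      % keep the existing points at odd positions
    s = 1;                                % alternate the sign: rotate +45, then -45, ...
    for k = 1:length(z) - 1
        zn(2*k) = z(k) + 0.5*(1 - s*1j)*(z(k+1) - z(k));
        s = -s;
    end
    z = zn;
end
plot(real(z), imag(z)); axis equal        % the dragon shape emerges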

1.4 Roots of a complex number

Let $Z = R\angle\theta$ be a known complex number. We want to determine $z = r\angle\varphi$ such that $Z = R\angle\theta = z^N$; in other words, we want to find $z = \sqrt[N]{Z} = Z^{1/N}$.


Figure 1.5: Fractals can be generated using complex numbers.

$z^N = (r\angle\varphi)^N = r^N\angle N\varphi$
$R\angle\theta = r^N\angle N\varphi$

This necessitates that

$r = R^{1/N}$, and
$\varphi = \frac{\theta}{N}$

However, we expect $z^N - Z = 0$ to have N distinct roots; we cannot have just one solution with $\varphi = \theta/N$. Note that $e^{j\theta} = \exp[j(\theta + 2\pi n)]$ where n is an integer. Letting $n = 0, \cdots, N-1$ we have

$N\varphi = \theta + 2\pi n$,
$\varphi = \frac{\theta + 2\pi n}{N}, \quad n = 0, \cdots, N-1$

Example 2. Find the 5-th roots of -1.

Solution: z = -1 can be expressed as $z = 1\,e^{j\pi}$. Hence the 5-th roots are given as:

$z_n = \sqrt[5]{1}\, e^{j\frac{\pi + 2\pi n}{5}}, \quad n = 0, 1, \cdots, 4$

$z_0 = 1\,e^{j\frac{\pi}{5}} = 0.809 + j0.588$
$z_1 = 1\,e^{j\frac{\pi + 2\pi}{5}} = -0.309 + j0.951$
$z_2 = 1\,e^{j\frac{\pi + 4\pi}{5}} = -1$
$z_3 = 1\,e^{j\frac{\pi + 6\pi}{5}} = -0.309 - j0.951$
$z_4 = 1\,e^{j\frac{\pi + 8\pi}{5}} = 0.809 - j0.588$



Figure 1.6: 5-th roots of z =−1.
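The roots function used later in Section 1.5.5 gives the same answer, since the 5-th roots of -1 are the zeros of $z^5 + 1$. A short MATLAB/Octave check (added illustration):

% The 5-th roots of -1, from the formula above and from the polynomial z^5 + 1 = 0.
n = 0:4;
z_direct = exp(1j*(pi + 2*pi*n)/5)      % 0.809+-0.588i, -0.309+-0.951i, -1
z_poly = roots([1 0 0 0 0 1]).'         % the same five numbers, possibly in another order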

1.5 Applications of Complex Numbers

Complex numbers lend themselves to plenty of applications in science and engineering. Euler's formula, rotational properties and other properties can be exploited to solve otherwise difficult problems. Problem 30 is a good example making use of complex number rotations. Dragon and Mandelbrot fractals are fun and extremely interesting. In this section we cite applications to trigonometry and electrical engineering.

1.5.1 Complex Numbers versus Trigonometry

Trigonometric identities become a matter of fun with complex numbers. The formulas which needed memorization can now be derived in a few easy steps using complex numbers.

Let $z = e^{j\theta}$. Then

$z + z^* = e^{j\theta} + e^{-j\theta} = \cos\theta + j\sin\theta + \cos\theta - j\sin\theta = 2\cos\theta$

and

$z - z^* = e^{j\theta} - e^{-j\theta} = \cos\theta + j\sin\theta - \cos\theta + j\sin\theta = j2\sin\theta$

Thus,

$\cos\theta = \frac{e^{j\theta} + e^{-j\theta}}{2}$    (1.9)

and

$\sin\theta = \frac{e^{j\theta} - e^{-j\theta}}{2j}$    (1.10)

Using these relations several trigonometric identities are readily established (see Problems 7 through 11). For example:


$\cos\alpha\cdot\cos\beta = \left(\frac{e^{j\alpha} + e^{-j\alpha}}{2}\right)\left(\frac{e^{j\beta} + e^{-j\beta}}{2}\right) = \frac{1}{2}\cdot\frac{e^{j(\alpha+\beta)} + e^{j(\alpha-\beta)} + e^{-j(\alpha-\beta)} + e^{-j(\alpha+\beta)}}{2}$

$\cos\alpha\cdot\cos\beta = \frac{1}{2}\left[\cos(\alpha+\beta) + \cos(\alpha-\beta)\right]$

and

$\cos x + \cos y = 2\cos\frac{x+y}{2}\cdot\cos\frac{x-y}{2}$

Example 3. Convert $5\sec x + 12\csc x$ into a product or ratio of sinusoidal functions.

After converting the secant and cosecant functions into sine and cosine, using De Moivre's formula (Eqn. 1.8) we can express the sine and cosine terms as the real or imaginary part of a complex exponential.

$5\sec x + 12\csc x = \frac{5}{\cos x} + \frac{12}{\sin x} = \frac{5\sin x + 12\cos x}{\sin x\cos x}$

First we express the numerator as either the real part or the imaginary part of a complex exponential. Since $\cos x = \mathrm{Re}\{e^{jx}\} = \mathrm{Im}\{je^{jx}\}$ and $\sin x = \mathrm{Im}\{e^{jx}\} = \mathrm{Re}\{-je^{jx}\}$, we can write

$5\sin x + 12\cos x = \mathrm{Im}\{5e^{jx}\} + \mathrm{Im}\{12je^{jx}\} = \mathrm{Im}\{5e^{jx} + 12je^{jx}\}$
$= \mathrm{Im}\{e^{jx}(5 + 12j)\}$
$= \mathrm{Im}\{e^{jx}\sqrt{5^2 + 12^2}\exp[j\tan^{-1}(12/5)]\}$
$= \mathrm{Im}\{13\exp[j(x + \tan^{-1}2.4)]\} = 13\sin(x + 1.176)$

As for the denominator we can use Eqns. 1.9 and 1.10:

$\sin x\cos x = \frac{e^{jx} - e^{-jx}}{2j}\cdot\frac{e^{jx} + e^{-jx}}{2} = \frac{1}{2}\cdot\frac{e^{j2x} + 1 - 1 - e^{-j2x}}{2j} = \frac{1}{2}\sin 2x$

And finally we obtain

$5\sec x + 12\csc x = \frac{13\sin(x + 1.176)}{\frac{1}{2}\sin 2x} = \frac{26\sin(x + 1.176)}{\sin 2x}$
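A numerical spot check of the result of Example 3 (MATLAB/Octave, added illustration) over a range of x away from the poles of sec and csc:

% Verify 5*sec(x) + 12*csc(x) = 26*sin(x + atan(12/5))/sin(2x) numerically.
x = linspace(0.1, 1.4, 200);
lhs = 5./cos(x) + 12./sin(x);
rhs = 26*sin(x + atan(12/5))./sin(2*x);
max(abs(lhs - rhs))        % round-off level, confirming the identity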

1.5.2 Phasors

Phasors are valuable tools to solve network problems in electrical engineering. For example, using phasors one can easily add voltages and currents and calculate impedances. The frequency of the variables to be added, subtracted, etc., is omitted from the calculation and only the amplitudes and phases are kept. We illustrate this approach with an example.


Suppose $x_1 = A_1\cos(\omega t + \theta_1)$ and $x_2 = A_2\cos(\omega t + \theta_2)$, $x_1$ and $x_2$ both having the same frequency ω. Let us assign the sum of $x_1$ and $x_2$ to another variable x. Thus

$x = x_1 + x_2 = A_1\cos(\omega t + \theta_1) + A_2\cos(\omega t + \theta_2)$
$= \mathrm{Re}\{A_1 e^{j(\omega t + \theta_1)}\} + \mathrm{Re}\{A_2 e^{j(\omega t + \theta_2)}\}$
$= \mathrm{Re}\{e^{j\omega t}(A_1 e^{j\theta_1} + A_2 e^{j\theta_2})\}$

Let us denote the sum of the two complex numbers in parentheses by $Ae^{j\theta}$. With this we have

$x = x_1 + x_2 = \mathrm{Re}\{e^{j\omega t}Ae^{j\theta}\} = A\cos(\omega t + \theta)$

We see that the sum of two sinusoids with the same frequency is a third sinusoid of that frequency. Combining these steps we get

$A_1\cos(\omega t + \theta_1) + A_2\cos(\omega t + \theta_2) = A\cos(\omega t + \theta)$
$\mathrm{Re}\{e^{j\omega t}(A_1 e^{j\theta_1} + A_2 e^{j\theta_2})\} = \mathrm{Re}\{e^{j\omega t}Ae^{j\theta}\}$
$e^{j\omega t}(A_1 e^{j\theta_1} + A_2 e^{j\theta_2}) = e^{j\omega t}Ae^{j\theta}$
$A_1 e^{j\theta_1} + A_2 e^{j\theta_2} = Ae^{j\theta}$    (1.11)

Thus addition of sinusoids in the time domain can be transformed into an addition in the phasor domain:

$X = X_1 + X_2$
$A\angle\theta = A_1\angle\theta_1 + A_2\angle\theta_2$    (1.12)

We can exploit this result to find $x = x_1 + x_2$. First we drop the cosine terms from the addition and find A and θ; then we plug A and the phase θ back into a cosine term to obtain the result. The last equation is the sum of phasors. A phasor is a complex number which has a magnitude and an argument (phase), obtained by suppressing ωt in a sinusoid.

$X = Ae^{j\theta} = A\angle\theta$    (1.13)

is the phasor representing $x(t) = A\cos(\omega t + \theta)$. Obviously

$x(t) = \mathrm{Re}\{Xe^{j\omega t}\}$    (1.14)

Note that x(t) is a real-valued time function which changes with t, whereas X is a complex number independent of time. If a sine function is given rather than a cosine function, we have to convert the sine into a cosine by adding an extra phase angle of -π/2 rad (-90°), using the relation $\sin\omega t = \cos(\omega t - \pi/2)$.

Example 4. A voltage source $V = ve^{j\theta}$ drives two electrical loads connected in series whose voltages are measured to be $v_1(t) = 5\cos(1000t + 0.9273)$ Volts and $v_2(t) = \sqrt{5}\cos(1000t + 2.6779)$ Volts. Find v(t).

$v_1(t) = \mathrm{Re}[5e^{j1000t}e^{j0.9273}]$ Volts
$v_2(t) = \mathrm{Re}[\sqrt{5}e^{j1000t}e^{j2.6779}]$ Volts
$v(t) = V\cos(1000t + \theta)$ Volts $= \mathrm{Re}[Ve^{j1000t}e^{j\theta}]$ Volts


Figure 1.7: Phasor addition. (a) Adding complex numbers, (b) adding voltage phasors.

Hence we find $V_1 = 5e^{j0.9273} = 3 + j4$ Volts and $V_2 = \sqrt{5}e^{j2.6779} = -2 + j$ Volts (Fig. 1.7). Then

$V = V_1 + V_2 = 3 + j4 + (-2) + j$ Volts $= 1 + j5$ Volts $= \sqrt{26}e^{j1.3734}$ Volts

and

$v(t) = \sqrt{26}\cos(1000t + 1.3734)$ Volts
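The arithmetic of Example 4 is exactly what a few lines of MATLAB/Octave do (added illustration):

% Phasor addition for Example 4.
V1 = 5*exp(1j*0.9273);        % = 3 + 4i Volts
V2 = sqrt(5)*exp(1j*2.6779);  % = -2 + 1i Volts
V = V1 + V2;                  % = 1 + 5i Volts
fprintf('v(t) = %.3f cos(1000t + %.4f) Volts\n', abs(V), angle(V));   % 5.099, 1.3734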

We may want to differentiate and integrate x(t) as given by Eqn. 1.14.

$\frac{dx}{dt} = \frac{d}{dt}\mathrm{Re}\{Xe^{j\omega t}\} = \mathrm{Re}\left\{\frac{d}{dt}Xe^{j\omega t}\right\} = \mathrm{Re}\{j\omega Xe^{j\omega t}\}$    (1.15)

Likewise the integral of x(t) in time becomes

$\int x(t)\,dt = \int \mathrm{Re}\{Xe^{j\omega t}\}\,dt = \mathrm{Re}\left\{\int Xe^{j\omega t}\,dt\right\} = \mathrm{Re}\left\{\frac{X}{j\omega}e^{j\omega t}\right\}$    (1.16)

We can show the transformations in Eqns. 1.15 and 1.16 as follows:

$\frac{dx}{dt} \Longleftrightarrow j\omega X$
$\int x\,dt \Longleftrightarrow \frac{1}{j\omega}X$

In electric circuits resistance, which is the ratio of voltage to current, can be generalized to electrical impedance when we work with sinusoidal voltages and currents in AC circuits. Voltages and currents are transformed into phasors to obtain the steady-state voltage and current values. Remember the terminal equations of R, L and C:


Figure 1.8: (a) Series-connected RL circuit driven by current source. (b) The phasor diagram.

R: $v(t) = Ri(t)$
L: $v(t) = L\frac{di(t)}{dt}$
C: $v(t) = \frac{1}{C}\int i(t)\,dt$    (1.17)

These terminal equations, when converted to the phasor domain, become

R: $V = RI$
L: $V = j\omega L\, I$
C: $V = \frac{1}{j\omega C}I$    (1.18)

The impedance is defined as the ratio of the voltage phasor to the current phasor, that is, Z = V/I. Hence the impedances of R, L, C are found to be

R: $Z = R$
L: $Z = j\omega L$
C: $Z = \frac{1}{j\omega C}$    (1.19)

The impedance of a resistor in AC is the same as its DC value, i.e., R. The impedance of L is directly proportional to frequency, and the impedance of C is inversely proportional to the angular frequency ω, which is in rad/sec.

Admittance is defined as the inverse of impedance and is given by Y = 1/Z. Using the phasor notion with Kirchhoff's laws yields useful results. Assume that a current source $i(t) = I_p\cos\omega t$, i.e., $I = I_p\angle 0$, drives a resistance and an inductance connected in series (Fig. 1.8). Let us derive the relation between the voltage developed across the drive terminals and the current. Kirchhoff's voltage law dictates

$v(t) = RI_p\cos\omega t + L\frac{d}{dt}(I_p\cos\omega t)$

In the phasor domain this equation becomes

$V = RI + j\omega L I = (R + j\omega L)I = ZI$
$V_p\angle\theta = (|Z|\angle\varphi)(I_p\angle 0) = |Z|I_p\angle\varphi$

Hence we find that $v(t) = I_p\sqrt{R^2 + \omega^2 L^2}\cos(\omega t + \varphi)$ where $\varphi = \tan^{-1}(\omega L/R)$. We have an important observation here: the total impedance of the circuit is obtained by adding the individual impedances of the resistance and the inductance, that is, $Z = R + j\omega L$. We can generalize this observation to n impedances connected in series:

R2 +ω2L2 cos(ωt +ϕ) where ϕ = tan−1 (ωL/R). We have an important observation here:The total impedance of the circuit is obtained by adding the individual impedances of resistance and inductance, that is,Z = R+ jωL. We can generalize this observation to n impedances connected in series:


$Z = Z_1 + \cdots + Z_n$    (1.20)

Likewise, if n impedances are connected in parallel, then the total admittance is given by:

$Y = Y_1 + \cdots + Y_n$    (1.21)

Example 5. Find the impedance of the circuit in Fig. 1.9

Fig. 1.9 and Fig. 24 show two circuits of utmost importance which appear in different forms in various engineering fields. In electrical engineering these circuits are called the parallel resonant and series resonant RLC circuits, as well as tank circuits. Here we demonstrate the phasor concept using the parallel tank circuit. Because the circuit elements are connected in parallel we use Eqn. 1.21. The phasor diagram for the admittances is shown in Fig. 1.9.

$Y = Y_R + Y_C + Y_L = \frac{1}{R} + j\omega C + \frac{1}{j\omega L}$
$= \frac{1}{50} + j\,10^3\cdot 5\cdot 10^{-5} + \frac{1}{j\,10^3\cdot 0.1}$
$= 0.02 + j\,5\cdot 10^{-2} + \frac{1}{j\,100}$
$= 0.02 + j(0.05 - 0.01) = 0.02 + j0.04$ mho

$Z = \frac{1}{Y} = \frac{1}{0.02 + j0.04} = \frac{50}{1 + j2} = 10\sqrt{5}\exp\left(-j\tan^{-1}2\right) = 22.36\angle{-63.4°}$ ohms

The voltage phasor across the tank circuit is given by

$V = ZI = (22.36\angle{-63.4°})(1\angle 0°) = 22.36\angle{-63.4°}$ Volts

Note that this is the phasor quantity. The voltage in the time domain is

$v(t) = 22.36\cos\left(10^3 t - \frac{63.4\pi}{180}\right) = 22.36\cos(10^3 t - 1.107)$ Volts

We can rewrite the admittance as

$Y = \frac{1}{R} + j\left(\omega C - \frac{1}{\omega L}\right)$

When $\omega = 1/\sqrt{LC}$ the imaginary part of Y becomes 0, i.e., Y and Z become purely resistive. This is called resonance. At resonance the inductive and capacitive admittances (impedances) cancel each other. In engineering there are plenty of applications of resonance. For this RLC circuit the resonant frequency is $\omega = 1/\sqrt{0.1\cdot 5\cdot 10^{-5}} = 447.21$ rad/sec, i.e., 71.2 Hz. The same phenomenon occurs with series RLC circuits. See Prob. 24.
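For reference, the computation of Example 5 in MATLAB/Octave (added illustration; the element values R = 50 Ω, L = 0.1 H, C = 50 µF, I = 1∠0° A and ω = 10³ rad/s are those used in the calculation above):

% Parallel RLC tank circuit of Example 5.
R = 50; L = 0.1; C = 5e-5; w = 1e3; I = 1;
Y = 1/R + 1j*w*C + 1/(1j*w*L);    % 0.02 + 0.04i mho
Z = 1/Y;                          % 22.36 ohms at -63.4 degrees
V = Z*I;
fprintf('|Z| = %.2f ohm, arg Z = %.1f deg\n', abs(Z), angle(Z)*180/pi);
w0 = 1/sqrt(L*C);                 % resonant frequency: 447.2 rad/s, about 71.2 Hz
fprintf('w0 = %.1f rad/s = %.1f Hz\n', w0, w0/(2*pi));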


Figure 1.9: (a) Parallel RLC circuit, (b) Phasor diagram.

Figure 1.10: Series/parallel connected RL-RC components: $Z_1$ is $R_1 = 1$ ohm in series with an inductor of $j3$ ohms; $Z_2$ is $R_2 = 1$ ohm in parallel with a capacitor of $-j2$ ohms.

Example 6. Find the impedance Z of the circuit in Fig. 1.10.

Let $Z_1$ and $Z_2$ denote the inductive and capacitive parts of the circuit respectively. Since $R_1$ and L are connected in series:

$Z_1 = 1 + j3$

On the other hand, $R_2$ and C are connected in parallel. Therefore we can write

$Z_2 = \frac{1\cdot(-j2)}{1 - j2} = \frac{-j2(1 + j2)}{(1 - j2)(1 + j2)} = \frac{4 - j2}{1 + 4} = 0.8 - j0.4$

Hence the total impedance is

$Z = Z_1 + Z_2 = 1 + j3 + 0.8 - j0.4 = 1.8 + j2.6$ ohms.
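The same result in MATLAB/Octave (added illustration), which also makes the sign of the imaginary part easy to double-check:

% Example 6: series RL branch plus a parallel RC branch.
Z1 = 1 + 3j;                    % R1 in series with the j3-ohm inductor
Z2 = (1*(-2j))/(1 + (-2j));     % R2 = 1 ohm in parallel with the -j2-ohm capacitor
Z = Z1 + Z2                     % = 1.8 + 2.6i ohms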

1.5.3 3 - Phase Electrical Circuits

The ubiquitous 3-phase electrical system is utilized to supply electric energy all over the world. Only the amplitude of the voltages and the frequency vary from country to country. In European countries the 220 V, 50 Hz 3-phase system is in use in homes and small industry; in North American countries 117 V, 60 Hz 3-phase was adopted. Industry depends on 3-phase electric power. Electric energy is generated three-phase, transmitted three-phase and consumed three-phase in industry. One of the main reasons for the three-phase system is the ease with which rotating magnetic fields are generated with three phases. Rotating magnetic fields are used to drive electric motors2.

A 3-phase system with a neutral wire produces three voltages of equal amplitude and frequency whose phases differ by 120° from each other. These phases can be denoted by three phasors:

$V_A = V_m\angle 0$
$V_B = V_m\angle 120°$
$V_C = V_m\angle 240°$

2 The interested reader is urged to read about the AC/DC war between Nikola Tesla and Thomas Alva Edison in the early twentieth century.


Figure 1.11: 3-Phase voltage phasors.

Figure 1.12: 3-Phase load.

The phase-to-phase voltages between phases A, B, C are the voltage differences between the corresponding phases (Fig. 1.11). Thus the voltage between phases A and B is

$V_{AB} = V_m\angle 0 - V_m\angle 120° = V_m(1 - \cos 120° - j\sin 120°)$
$= V_m\left(1 + \frac{1}{2} - j\frac{\sqrt{3}}{2}\right)$
$= V_m\frac{3 - j\sqrt{3}}{2} = V_m\sqrt{3}\angle{-30°}$

For the other phase-to-phase voltage phasors see Problem 31. If the three phases with a neutral wire are connected across a 3-phase load as shown in Fig. 1.12, the phase currents are determined by the load impedances driven by the phases. Letting the phase load impedances be $Z_A$, $Z_B$ and $Z_C$, the phase currents become

$I_A = \frac{V_A}{Z_A}, \quad I_B = \frac{V_B}{Z_B} \quad \text{and} \quad I_C = \frac{V_C}{Z_C}$

From Kirchhoff's current law the sum of these currents and the neutral current is equal to zero, that is,

$I_N + I_A + I_B + I_C = 0$, or
$I_N = -(I_A + I_B + I_C)$


If ZA = ZB = ZC then IN = 0. This type of 3-phase load is called a balanced 3-phase load. An unbalanced loading is undesired and should be avoided because IN ≠ 0. See Problem 32.
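A hedged numerical illustration in Matlab (with Vm and the load impedance chosen arbitrarily) confirms both the line-voltage result and the vanishing neutral current of a balanced load:

>> Vm = 1;
>> VA = Vm; VB = Vm*exp(1j*2*pi/3); VC = Vm*exp(1j*4*pi/3);
>> VAB = VA - VB; [abs(VAB) rad2deg(angle(VAB))]   % sqrt(3)*Vm at -30 degrees
ans =
    1.7321  -30.0000
>> ZA = 3 + 4j;                       % an arbitrary load, the same on all three phases
>> IN = -(VA + VB + VC)/ZA;           % |IN| is zero up to rounding error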

1.5.4 Negative Frequency

Figure 1.13: Positive and negative frequencies.

A sinusoidal time function may be given by x(t) = A cos(ωt + ϕ), where ω is called the angular frequency with units of rad/sec and ϕ is the phase angle. With this representation ω is non-negative, and ω = 0 is called DC. A negative frequency would not seem to make sense. However, when one does a frequency analysis, say by running an FFT³, one often comes across negative frequencies. How can one interpret a negative frequency, ω < 0?

Using Eqn. 1.9 one can rewrite x(t) = A cos(ωt + ϕ) as

x(t) = A · (e^{j(ωt+ϕ)} + e^{−j(ωt+ϕ)})/2
     = (A/2) e^{jϕ} e^{jωt} + (A/2) e^{−jϕ} e^{−jωt}

The two complex exponential terms e^{j(ωt+ϕ)} and e^{−j(ωt+ϕ)} contain the frequency information: one term has a frequency of ω and the other a frequency of −ω. This situation is shown in Fig. 1.13: the sinusoidal function of amplitude A and positive frequency ω can be interpreted as the sum of two complex exponentials of amplitude A/2 each, one having a positive frequency +ω and the other a negative frequency −ω.

Negative frequency is a mere mathematical convenience; it can’t be physically generated by signal generators in thelaboratory.
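As a hedged illustration (not from the text), a two-sided FFT in Matlab shows the two A/2 components of a real cosine sitting at +f and −f:

>> Fs = 1000; t = 0:1/Fs:1-1/Fs;           % 1 second of samples (assumed values)
>> x = 3*cos(2*pi*50*t + pi/4);            % A = 3, f = 50 Hz
>> X = fftshift(fft(x))/length(x);         % two-sided spectrum, scaled by N
>> f = -Fs/2:Fs/2-1;
>> [f(abs(X) > 1); abs(X(abs(X) > 1))]     % frequencies and magnitudes of the two peaks
ans =
  -50.0000   50.0000
    1.5000    1.5000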

1.5.5 Complex Numbers in Mathematics Software

All professional mathematics software has provisions for complex numbers and complex arithmetic. Matlab accepts i or j for √−1; in Scilab you have to use %i instead. Solutions of quadratic equations can be complex.

For instance if you want to solve x2 + x+1 = 0 you enter the following commands in Matlab:

>>C = [1 1 1]';

>>roots(C)

to get two complex roots:

ans =

-0.5 + 0.8660i

-0.5 - 0.8660i

You can find the N-th roots of a complex number. To find the five 5-th roots of 1 we use the same roots function after setting up a coefficient vector C.

3Fast Fourier Transform


>> C=[1 0 0 0 0 -1]';

>> roots(C)

ans =

-0.8090 + 0.5878i

-0.8090 - 0.5878i

0.3090 + 0.9511i

0.3090 - 0.9511i

1.0000

Some functions like abs, sin, cos, and the logarithmic and exponential functions are overloaded. If z = 1 + j and we use these functions we get

>>z=1+j;

>>abs(z)

ans =

1.4142

>>sin(z)

ans =

1.2985 + 0.6350i

>>log(z)

ans =

0.3466 + 0.7854i

LabVIEW is no exception regarding complex number arithmetic. Complex numbers are represented in three resolutions in LabVIEW, namely complex single, complex double and complex extended; the default is complex double. The Numeric subpalette under Programming presents the user with different ways of using complex numbers. You can create a complex number in rectangular or polar coordinates, convert from rectangular to polar and vice versa, and conjugate complex numbers. Once you use a complex number, all math functions are overloaded with the complex versions that we studied in this chapter (Fig. 1.14).

If you have access to these software, you are strongly urged to use them.

Figure 1.14: Complex numbers in LabVIEW. (a) Complex number palette, (b) Using complex numbers.


Afterthought on Euler’s Identitiy

Why has the physicist Richard Feynman declared that Euler’s identity is the most elegant formula in mathematics? Whatcontemplation and reasoning may have led him to this conclusion? Well, I don’t know. But let us try our way about thiselegance. The Euler’s identity is

e jπ +1 = 0

where e and π are irrational numbers. j =√−1 is an imaginary number. In this identity we have exponentiation,

summation with 1 and equality to 0. Can the elegance be due to this?

WHEN MAN, THE IRRATIONAL BEING, IS RAISED TO AN IMAGINARY IRRATIONAL ENTITY AND MEETSUNITY, WHO IS HIS CREATOR, BECOMES NIL.

Let us rearrange Euler’s identity

e jπ =−1

Does this mean then:

AS IF IRRATIONAL MAN, WHO RAISES HIMSELF TO AN IMAGINARY IRRATIONAL ENTITY, HAS NEVEREXISTED. 0 IS FOR VANISHING; -1 IS THE STATE OF HAVING NEVER EXISTED.

What do you think? Was Richard Feynman right? Could this formula be the synopsis of vanishing by Allah which is taught by Sufism?


Problems

1. Let z1 = 1 − j3, z2 = −3 + j4, z3 = 2e^{−jπ/3} and z4 = 5e^{jπ/6}. Find

(a) z1 + z2, z1 + z3, z3 + z4,

(b) z1− z∗2, z1− z∗3, z3− z4,

(c) z1z∗2, z1z3, z∗3z4,

(d) z1/z∗2, z1/z3, z∗3/z4, z∗3/z3.

2. Prove the following relations

(a) (z1 z2)∗ = z1∗ z2∗

(b) (z1/z2)∗ = z1∗/z2∗

3. Show that |z²| = r².

4. Show that we can represent a circle of radius R with center z0 as

(a) z − z0 = R e^{jθ} where 0 ≤ θ ≤ 2π

(b) From (a) derive (x − x0)² + (y − y0)² = R² and θ = arctan[(y − y0)/(x − x0)].

Problem 4.

5. Show that the locus of z described below is an ellipse in the complex plane with center at z0 and major and minor axis lengths 2a and 2b respectively: z = z0 + a cosθ + jb sinθ, where 0 ≤ θ ≤ 2π.

6. Show that

(a) 1+ cos2x = 2cos2 x

(b) sinx+ sin3x = 2cosxsin2x

7. Using Euler’s formula show that

(a) A cosx + B sinx =√

A2 +B2 cos(x− tan−1 BA)

(b) A sinx + B cosx =√

A2 +B2 sin(x+ tan−1 AB)

8. Express sin(2x) · cos(5x) as a sum of two sinusoidal functions.

9. Show that tan x = sin 2x/(1 + cos 2x).

10. Show that cos x + cos y = 2 cos((x − y)/2) cos((x + y)/2).


11. Calculate (1 + j√3)⁴.

12. Find the roots below:

(a) 8^{1/3}

(b) (−j8)^{1/3}

(c) (1 + j)^{1/4}

(d) √(1 − √j)

(e) z^{1/5} where z = j = e^{jπ/2}.

13. Find all four roots of x4−2x2 +5 = 0.

14. Find two numbers z1, z2 such that z1 + z2 = 10, andz1z2 = 40.

15. Let z1 = x1 + jy1, z2 = x2 + jy2 and z = z1 + z2. Show that

|z| = √((x1 + x2)² + (y1 + y2)²)

arg z = tan⁻¹((y1 + y2)/(x1 + x2))

16. Consider two complex numbers z1 = r1 e^{jθ1} and z2 = r2 e^{jθ2}. The angle between z1 and z2 is θ1 − θ2. Show that the magnitude of the difference z = z1 − z2 is given by the cosine theorem

r² = |z1 − z2|² = r1² + r2² − 2 r1 r2 cos(θ1 − θ2)

Hint: Use the relation |z|² = z·z∗.

17. Consider two complex numbers z1 = r1 e^{jθ1} and z2 = r2 e^{jθ2}. Show that the magnitude of the sum z = z1 + z2 is given by

r² = |z1 + z2|² = r1² + r2² + 2 r1 r2 cos(θ1 − θ2)

18. Prove the triangle inequalities

(a) |z1 + z2| ≤ |z1| + |z2|

(b) |z1 − z2| ≥ |z1| − |z2|

19. Show that 1 + e^{j2π/3} + e^{j4π/3} = 0. Using this result also show that 1 + 2cos(2π/3) = 0.

20. Consider the following relation:

e^{jθ} + e^{j(θ+2π/n)} + e^{j(θ+4π/n)} + ··· + e^{j(θ+2(n−1)π/n)} = 0

(a) Prove this relation by geometric construction (you may try n = 3, 5, 7).

(b) Verify the relation numerically using LabVIEW.

21. A picture is first reduced by 25% and then rotated clockwise by 5°. A pixel on the picture before rotation is located by its coordinates (x, y). After rotation the pixel attains new coordinates (x′, y′). Calculate x′ and y′ in terms of x and y.

22. The complex number z = 2 + 2j is rotated clockwise about another complex number z0 = 1 + j by 90°. Obtain the new location of z after the rotation.


23. Calculate the following complex numbers:

(a) z = 1 + 1/(j + 1/(1 + 1/(j + ···)))   (Answer: z = (1 ± √(1 − j4))/2)

(b) z = j + 1/(j + 1/(j + 1/(j + ···)))   (Answer: z = (±√3 + j)/2)

24. For the following RLC circuit find

Problem 24.

(a) the impedance Z

(b) the current supplied by the voltage source

(c) the resonance frequency.

25. Given Z(ω) = R/(1 + jQ(ω/ω0 − ω0/ω)), find the ω for which |Z(ω)| is maximum.

26. Consider the circuit in Problem 13. Resonance frequency ω is defined as that frequency for which the impedanceZ is purely resistive, i.e., a real number. Find the resonance frequency of this circuit.

27. Show that when the x-y coordinates are rotated about the origin by an angle θ, the new coordinates (x′, y′) of a point are related to its original coordinates (x, y) through the transformation

[x′]   [ cosθ   sinθ ] [x]
[y′] = [−sinθ   cosθ ] [y]

Hint: Note that rotating the x-y axes in one direction by a certain angle is equivalent to rotating the point in the opposite direction by the same angle.

28. Suppose you are hired by CIBERGRAPHICS INC. as a software engineer in the graphics software department. Yourassignment is to write a code to rotate an image about its center. Describe how you would achieve the job.

29. What is “negative” frequency?

(a) Why is it called “negative”?

(b) Can we generate negative frequencies?

(c) Can we generate negative-only frequencies without generating “positive” frequencies?


30. This problem is excerpted from George Gamow’s book “One Two Three... Infinity[Gamow].” Being in archaicEnglish, the original of the message within quotes is given as a footnote, and we present it here in modern English.

[There was a young and adventurous man who found among his great-grandfather's papers a piece of parchment that revealed the location of a hidden treasure. The instructions read:

“Sail to ... North latitude and ... West longitude where you will find a deserted island. There lies a large meadow,not pent, on the north shore of the island where a lonely oak and a lonely pine stand. There you will see also an oldgallows where we used to hang traitors. Start from the gallows and walk to the oak counting your steps. At the oakyou must turn right by a right angle and take the same number of steps. Put here a spike in the ground. Now youmust turn to the gallows and walk to the pine counting your steps. At the pine you must turn left by a right angleand you see that you take the same number of steps, and put another spike in the ground. Dig halfway between thespikes; the treasure is there.”4

The instructions were quite clear and explicit, so our young man chartered a ship and sailed to the South Seas.He found the island, the field, the oak and the pine, but to his great sorrow the gallows was gone. Too long a timehad passed since the document had been written; rain and sun and wind had disintegrated the wood and returned itto the soil, leaving no trace even of the place where it once stood.]

As you may guess the young man tried and tried, digging here and there with no success, and the island beingso big, he eventually gave up, and sailed back home. Now you, equipped with the knowledge of complex numbers,are assigned to find the hidden treasure.

Problem 30: Treasure island.

31. A 3-phase system with a neutral wire produces three voltages of equal amplitude and frequency whose phasesdiffer 120º from each other. Let these phases be denoted by the phasors VA =Vm∠0,VB =Vm∠120º,VC =Vm∠240º.Calculate the phasors VBC and VCA.

32. Refer to the 3-phase load in Fig. 1.12. If ZA = ZB = ZC show that IN = 0.Hint: Use the results of Problems 18 and 19.

33. Computer project. In Fig. ?? is shown the virtual instrument that constructs the Dragon fractal of Sec. 1.3.5. Build this virtual instrument and operate it.

(a) Find and explain the code which is responsible for adding a new point between adjacent points.
(b) When this virtual instrument completes execution, how many points are generated?

4“Sail to ... North latitude and ... West longitude where thou wilt find a deserted island. There lieth a large meadow, not pent, on the north shore ofthe island where standeth a lonely oak and a lonely pine. There thou wilt see also an old gallows on whıch we once were wont to hang traitors. Startthou from the gallows and walk to the oak counting thy steps. At the oak thou must turn right by a right angle and take the same number of steps. Puthere a spike in the ground. Now must thou turn to the gallows and walk to the pine counting thy steps. At the pine thou must turn left by a right angleand see that thou takest the same number of steps, and put another spike in the ground. Dig halfway between the spikes; the treasure is there.”


Problem 33: Building Dragon fractal virtual instrument.



Chapter 2

Functions of a Complex Variable

Real and imaginary parts of complex functions are 3-D surfaces. Shown here is the imaginary part of the complex function cos z.

In the previous chapter we studied complex numbers and learned several interesting and novel concepts. Complex numbers can be put to even better use by constructing functions from them. In fact we use functions of complex variables in real-life applications more than we use complex numbers themselves. The wave function used in quantum mechanics to predict the position of a particle is a complex-valued function. The solution of Maxwell's equations in free space is the wave equation, whose solutions are also conveniently written as complex-valued functions. More examples of complex-valued functions can be cited. In this chapter we discuss very important topics pertaining to functions of a complex variable. Many concepts are extensions of our knowledge from calculus. Thus we start by defining limits and continuity to set the stage for differentiation. We introduce the very important notions of differentiability and analytic functions and study derivatives of complex functions. We find that differentiability imposes strict requirements on the function, known as the Cauchy-Riemann conditions. We also introduce the extended definitions of the elementary functions for complex arguments. To augment the topic of analytic functions, power series expansions (the Taylor, Maclaurin and Laurent series) are included at the end of the chapter, because these series pave the way to complex integration and residues.

A complex function w = f(z) is a mapping from a domain in the complex plane to a region in the complex plane, called the range of f. To a point z = x + jy is assigned a unique number w = f(z) = u(x,y) + jv(x,y), where u(x,y) and v(x,y) are the real and imaginary parts of f(z). For example w = z² results in u + jv = x² − y² + j2xy, from which we obtain two functions (3-D surfaces over the variables x and y):

u(x,y) = x² − y²
v(x,y) = 2xy                  (2.1)

We cannot visualize f directly because it would require a 4-D plot. However we can sketch the mapping f : (x,y) → w as well as draw the 3-D surfaces u(x,y) and v(x,y). Fig. 2.1 illustrates the mapping of the domain |x| ≤ 1, |y| ≤ 1 into the (u,v) plane, which shows the distribution of w = z² = u + jv. Fig. 2.2 is another way to visualize w = z²: the real and imaginary parts obtained from Eqn. 2.1 are plotted as 3-D surfaces.
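If you would like to reproduce plots similar to Fig. 2.2 yourself, a minimal Matlab sketch (grid spacing chosen arbitrarily) is:

>> [x, y] = meshgrid(-1:0.05:1);           % the square |x| <= 1, |y| <= 1
>> w = (x + 1j*y).^2;                      % w = z^2
>> surf(x, y, real(w)), title('u(x,y) = x^2 - y^2')
>> figure, surf(x, y, imag(w)), title('v(x,y) = 2xy')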

Trigonometric, hyperbolic, exponential and polynomial functions of a real variable in calculus can easily be generalized to take complex variables as their independent variables. Inasmuch as functions are single-valued, finite assignments to (x,y) pairs, rational functions, which may assign infinite values, and logarithmic functions, which assign multiple values, need special attention, as we will see shortly.



Figure 2.1: Mapping from the complex domain |x| ≤ 1, |y| ≤ 1 under w = z².

Figure 2.2: u-v surfaces for w = z². (a) u = x² − y², (b) v = 2xy.



Figure 2.3: Fractals. Self-similar patterns like those in these pictures can be generated using complex numbers. The firstfigure is the Mandelbrot fractal generated by Ultra Fractal 5. The second figure is from the dome of Selimiye Mosque inEdirne, Turkiye.

Fractals

Fractals are self-similar repeating figures (Fig. 2.3). Fractal geometry has attracted mathematicians and scientists as well as artists because it can model many natural objects and events, such as the shapes of plants, mountains and clouds, using simple mathematical formulas in complex numbers. There are hundreds of different fractals, each of which can be described mathematically. It is very interesting to find fractals in the dome of Selimiye Mosque in Edirne.

For instance the Mandelbrot fractals are generated by the iterative formula

zk = zk−1² + z0        (2.2)

where z0 = x0 + jy0 and zk = xk + jyk. The first two iterations of the Mandelbrot equation yield

z1 = z0² + z0
   = x0² − y0² + x0 + j(y0 + 2x0y0)

z2 = z1² + z0
   = x1² − y1² + x0 + j(y0 + 2x1y1)
   = (x0² − y0² + x0)² − (y0 + 2x0y0)² + x0 + j[y0 + 2(x0² − y0² + x0)(y0 + 2x0y0)]

Where the magnitude of zk, rk = √(xk² + yk²), does not tend to infinity, we obtain beautiful repeating figures at every scale of magnification. The set of initial points z0 for which rk remains bounded is called the Mandelbrot set. Fig. 2.3 shows these self-repeating figures obtained from Eqn. 2.2. Apparently the iterations quickly become formidable to carry out by hand; see the LabVIEW implementation of these calculations in the computer experiments of the problems.
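A minimal sketch of such an iteration in Matlab (not the book's LabVIEW implementation; the grid, iteration count and escape test are arbitrary choices) is:

% Iterate Eqn. 2.2 on a grid of starting points and keep those that stay bounded.
[x0, y0] = meshgrid(linspace(-2, 1, 600), linspace(-1.2, 1.2, 480));
z0 = x0 + 1j*y0;
z = z0;
for k = 1:50
    z = z.^2 + z0;                         % the Mandelbrot iteration
end
inside = abs(z) < 2;                       % bounded points approximate the Mandelbrot set
imagesc(linspace(-2, 1, 600), linspace(-1.2, 1.2, 480), inside), axis xy equal tight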

2.1 Limit of a Complex Function

Let w = f(z) be a function defined in a disk D around z = z0, though possibly not at z = z0 itself. If w comes closer and closer to a point l in the w-plane as z approaches z0 in D, then l is said to be the limit of f(z) as z approaches z0. More formally, we call l the limit of the function w = f(z) at z = z0 provided that for an arbitrarily small number ε we can find another number δ such that whenever |z − z0| < δ we have |f(z) − l| < ε, and we write this as

l = lim_{z→z0} f(z)

Whereas for a real function f(x), x can approach x0 only along a line (the x axis), for complex functions z can approach z0 from any direction. The limit of a function is unique if it exists.

A function is said to be continuous if


Figure 2.4: Limit of f (z)

f(z0) = lim_{z→z0} f(z)

To be more precise, we can rewrite this so that, given two real numbers k and l,

f(z0) = f(x0, y0) = lim_{k,l→0} f(x0 + k, y0 + l)

regardless of whether k and l approach zero together (k, l → 0), or we keep l equal to 0 and let k approach 0 (k → 0, l = 0), or we keep k equal to 0 and let l approach 0 (k = 0, l → 0).

2.2 Derivative of Complex Functions and Analyticity

We can define the derivative of a complex function similarly to the way we define the derivative of a real function. The derivative of w = f(z) at z = z0 is defined as

f′(z0) = lim_{∆z→0} [f(z0 + ∆z) − f(z0)]/∆z        (2.3)

provided that the limit is unique regardless of the path along which ∆z approaches 0. Path independence is a rather stricter requirement compared with differentiation of real functions. To comply with calculus notation we denote the 1st, 2nd, ..., n-th order derivatives of the function f(z) with respect to z as

df/dz, d²f/dz², ···, dⁿf/dzⁿ

As an example let us consider w = f(z) = z². Applying the definition of the derivative we obtain f′(z) as follows:

f′(z) = lim_{∆z→0} [(z + ∆z)² − z²]/∆z
      = lim_{∆z→0} [z² + 2z∆z + ∆z² − z²]/∆z
      = lim_{∆z→0} [2z∆z + ∆z²]/∆z
      = lim_{∆z→0} (2z + ∆z)
      = 2z

To illustrate the independence of the derivative from the path, let us approach z along two directions: once along the x-direction and then along the y-direction. In general ∆z = ∆x + j∆y. First let ∆z = ∆x + j0:


f′(z) = lim_{∆x→0} [(x + ∆x + jy)² − (x + jy)²]/∆x
      = lim_{∆x→0} [(x + ∆x)² + j2(x + ∆x)y − y² − (x² + j2xy − y²)]/∆x
      = lim_{∆x→0} [2x∆x + (∆x)² + j2y∆x]/∆x
      = lim_{∆x→0} (2x + j2y + ∆x)
      = 2(x + jy)
      = 2z

Now let ∆z = 0 + j∆y:

f′(z) = lim_{∆y→0} [(x + j(y + ∆y))² − (x + jy)²]/(j∆y)
      = lim_{∆y→0} [x² + j2x(y + ∆y) − (y + ∆y)² − (x² + j2xy − y²)]/(j∆y)
      = lim_{∆y→0} [x² + j2x(y + ∆y) − (y² + 2y∆y + (∆y)²) − x² − j2xy + y²]/(j∆y)
      = lim_{∆y→0} [j2x∆y − 2y∆y − (∆y)²]/(j∆y)
      = lim_{∆y→0} (2x + j2y + j∆y)
      = 2(x + jy)
      = 2z

On the other hand, the conjugate w = z̄ fails to have a derivative at any point z. This can be readily shown by evaluating the limit once along the x-direction and then along the y-direction:

f′(z) = d(z̄)/dz = lim_{∆x+j∆y→0} [(x + ∆x) − j(y + ∆y) − (x − jy)]/(∆x + j∆y)
      = lim_{∆x+j∆y→0} (∆x − j∆y)/(∆x + j∆y)

Along the x-direction ∆y = 0 and

f′(z) = lim_{∆x→0} ∆x/∆x = 1

whereas along the y-direction ∆x = 0 and

f′(z) = lim_{∆y→0} (−j∆y)/(j∆y) = −1

Hence the conjugate function w = z̄ is not differentiable at any point in the z-plane.
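A hedged numerical illustration of this path dependence in Matlab (test point and step size chosen arbitrarily):

>> z = 0.8 + 0.6j; h = 1e-6;
>> ((z+h)^2 - z^2)/h                 % f(z) = z^2 along the x-direction: about 2z
ans =
   1.6000 + 1.2000i
>> ((z+1j*h)^2 - z^2)/(1j*h)         % f(z) = z^2 along the y-direction: same limit
ans =
   1.6000 + 1.2000i
>> (conj(z+h) - conj(z))/h           % the conjugate along the x-direction
ans =
     1
>> (conj(z+1j*h) - conj(z))/(1j*h)   % the conjugate along the y-direction
ans =
    -1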

Decomposing ∆z into the sum of a real increment and an imaginary increment, we can rewrite Eqn. 2.3 as

f′(z0) = lim_{∆x+j∆y→0} [f(x0 + jy0 + ∆x + j∆y) − f(x0 + jy0)]/(∆x + j∆y)

which can be further translated into


f′(z0) = lim_{∆x+j∆y→0} { [u(x0 + jy0 + ∆x + j∆y) − u(x0 + jy0)]/(∆x + j∆y) + j [v(x0 + jy0 + ∆x + j∆y) − v(x0 + jy0)]/(∆x + j∆y) }

Apparently, if the derivative exists, one can obtain f′ along the x-axis or the y-axis, i.e.,

f′(z) = lim_{∆y=0, ∆x→0} [f(x + jy + ∆x) − f(x + jy)]/∆x = ∂f(z)/∂x

or

f′(z) = lim_{∆x=0, ∆y→0} [f(x + jy + j∆y) − f(x + jy)]/(j∆y) = −j ∂f(z)/∂y

Along the x-direction

f′(z) = lim_{∆y=0, ∆x→0} [u(x + ∆x, y) + jv(x + ∆x, y) − u(x, y) − jv(x, y)]/∆x = ∂u(x,y)/∂x + j ∂v(x,y)/∂x

and along the y-direction

f′(z) = lim_{∆x=0, ∆y→0} [u(x, y + ∆y) + jv(x, y + ∆y) − u(x, y) − jv(x, y)]/(j∆y) = ∂v(x,y)/∂y − j ∂u(x,y)/∂y

A function which is defined and differentiable at a point z = z0 is said to be analytic at z = z0. A function which is analytic at a point z = z0 is also analytic in a domain D around z0. A function differentiable at every point in D is said to be analytic in D. A function which is analytic everywhere in the complex plane is called entire. If the function is analytic at z then the derivative exists, and the limit definition of the derivative necessitates that the two evaluations along the x- and y-directions be equal to each other. Hence

∂u(x,y)/∂x + j ∂v(x,y)/∂x = ∂v(x,y)/∂y − j ∂u(x,y)/∂y

Equating real and imaginary parts leads us to the Cauchy-Riemann conditions for analyticity. The Cauchy-Riemann conditions are necessary for a complex function to be analytic; together with continuity of the first partial derivatives they are also sufficient:

∂u(x,y)/∂x = ∂v(x,y)/∂y        (2.4)
∂u(x,y)/∂y = −∂v(x,y)/∂x

Eqn. 2.4 can be expressed in shorthand notation as

ux = vy
uy = −vx

If f(z) is analytic in a certain domain D, its derivative is independent of the manner in which ∆z approaches 0, so we can choose the x-direction to express its derivative. Hence, if a function f has a derivative at a point z, then we can write

f ′ (z) = ux + jvx

Now that we have stated the Cauchy-Riemann conditions, let us investigate the analyticity of the two functions above.


Example 7. w = z²

We have z² = x² − y² + j2xy, so u(x,y) = x² − y² and v(x,y) = 2xy.

∂u/∂x = 2x,   ∂v/∂y = 2x
∂u/∂y = −2y,  ∂v/∂x = 2y

Hence w = z² is analytic for all z.

Example 8. w = z̄

We have w = x − jy and u(x,y) = x, v(x,y) = −y.

∂u/∂x = 1,   ∂v/∂y = −1
∂u/∂y = 0,   ∂v/∂x = 0

We see that the first condition is not satisfied. Consequently w = z̄ is not an analytic function.

Example 9. Check whether f(x,y) = cos x cosh y − j sin x sinh y is analytic or not.

We have u(x,y) = cos x cosh y and v(x,y) = −sin x sinh y. From u and v we obtain the partial derivatives ux, uy, vx and vy:

ux = −sinxcoshy

uy = cosxsinhy

vx = −cosxsinhy

vy = −sinxcoshy

Since

ux =−sinxcoshy = vy

uy = cosxsinhy =−(−cosxsinhy) =−vx

we deduce that f (x,y) is analytic for all x,y.
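The same conclusion can be reached numerically; a hedged Matlab check (test point chosen arbitrarily) approximates the four partial derivatives by central differences and verifies the two Cauchy-Riemann conditions:

>> u = @(x,y) cos(x).*cosh(y);  v = @(x,y) -sin(x).*sinh(y);
>> x = 0.7; y = -0.3; h = 1e-6;
>> ux = (u(x+h,y) - u(x-h,y))/(2*h);  uy = (u(x,y+h) - u(x,y-h))/(2*h);
>> vx = (v(x+h,y) - v(x-h,y))/(2*h);  vy = (v(x,y+h) - v(x,y-h))/(2*h);
>> [ux - vy, uy + vx];               % both entries are zero up to rounding (about 1e-10)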

The rules of differentiation for analytic complex functions are similar to those for real-valued functions studied incalculus. We list below these rules which are very easy to prove.

Multiplication by a constant  Let c0 = x0 + jy0 be a constant complex number and f(z) an analytic function in some domain D. Then in that domain we have d[c0 f(z)]/dz = c0 df(z)/dz.

Linearity  Let the complex functions f1(z), f2(z), ..., fn(z) be analytic over the domains D1, D2, ..., Dn and let c1, c2, ..., cn be complex constants. Then the function f(z) = ∑_{i=1}^{n} ci fi(z) is analytic over the domain D1 ∩ D2 ∩ ... ∩ Dn with derivative f′(z) = ∑_{i=1}^{n} ci fi′(z). The proof is trivial and left as an exercise to the student.


Multiplication  Let the complex functions f(z) and g(z) be analytic over the domains D1 and D2. Then the function h(z) = f(z)g(z) is analytic over the domain D1 ∩ D2 with derivative

dh(z)/dz = g(z) df(z)/dz + f(z) dg(z)/dz

Division  Let the complex functions f(z) and g(z) be analytic over the domains D1 and D2. The rational function h(z) = f(z)/g(z) is differentiable over the domain D1 ∩ D2 except at points for which g(z) = 0, and its derivative is

dh(z)/dz = [f′(z)g(z) − f(z)g′(z)]/[g(z)]²

Chain Rule  Let w = f(z) and h = g(w) = g[f(z)]. The domain Dw over which f is analytic is mapped into another domain Dh. If g is analytic over Dh, then the derivative of h with respect to z over Dw is given by

dh(z)/dz = dg(w)/dw · dw(z)/dz

Example 10. As an exercise let us prove the multiplication rule. The claim that h = fg is analytic over the intersection of their domains of analyticity must be obvious, because in order for h to be analytic, f(z) and g(z) must both be analytic, and this is only possible in D1 ∩ D2, the intersection of their domains. We prove the rule by applying the definition of the derivative, that is,

dh(z)dz

= lim∆z→0

f (z+∆z)g(z+∆z)− f (z)g(z)∆z

= lim∆z→0

f (z+∆z)g(z+∆z)− f (z)g(z+∆z)+ f (z)g(z+∆z)− f (z)g(z)∆z

= lim∆z→0

f (z+∆z)g(z+∆z)− f (z)g(z+∆z)∆z

+ lim∆z→0

f (z)g(z+∆z)− f (z)g(z)∆z

= lim∆z→0

[ f (z+∆z)− f (z)]g(z+∆z)∆z

+ lim∆z→0

f (z) [g(z+∆z)−g(z)]∆z

= f ′ (z)g(z)+ f (z)g′ (z)

QED.

Example 11. The derivative of the entire function w = zn is similar to the derivative of xn for real variable x:

ddz

zn = nzn−1

We can prove this assertion by mathematical induction.

1. For n = 1 w = z is analytic everywhere in the z-plane. It is easy to verify from definition that

ddz

z = ux + jvx

=ddx

x+ jddx

y

= 1+ j0= 1 · z1−1

Hence our claim is true for n = 1.

2. Assume that our assertion is true for n > 1. Let us show that the assertion is true for n+1 as well. Making use ofthe fact that if f and g are functions of z in some domain D then from Example 10, ( f g)′ = f ′g+ f g′, we have

ddz

zn+1 =ddz

(znz)

= zddz

zn + zn ddz

z

= z(nzn−1)+ zn ·1

= nzn + zn

= (n+1)zn


QED.

A complex function f = u+ jv can be expressed in polar coordinates as well as rectangular coordinates. If w = f (z) isanalytic in some domain D around z = z0, then u and v have continuous partial derivatives in r and θ as well as x and yin D. Just as analyticity of f imposes Cauchy-Riemann conditions on u and v in rectangular coordinates, it also imposesCauchy-Riemann conditions on r and θ in polar coordinates. In order to obtain these constraints, we can proceed bydecomposing z = x+ jy into its polar components:

x = r cosθ

y = r sinθ

Since x and y are functions of r and θ , the partial derivative ux,uy,vx,vy can be expressed using the chain rule for twoindependent variables:

∂u∂ r

=∂u∂x

∂x∂ r

+∂u∂y

∂y∂ r

,∂u∂θ

=∂u∂x

∂x∂θ

+∂u∂y

∂y∂θ

and

∂v∂ r

=∂v∂x

∂x∂ r

+∂v∂y

∂y∂ r

,∂v∂θ

=∂v∂x

∂x∂θ

+∂v∂y

∂y∂θ

We have

∂x∂ r

= cosθ ,∂y∂ r

= sinθ

∂x∂θ

=−r sinθ ,∂y∂θ

= r cosθ

Using shorthand notations for the partial derivatives we can write

ur = cosθux + sinθuy, uθ =−r sinθux + r cosθuy

vr = cosθvx + sinθvy, vθ =−r sinθvx + r cosθvy

which can be combined in matrix form as[uruθ

]=

[cosθ sinθ

−r sinθ r cosθ

][uxuy

]and [

vrvθ

]=

[cosθ sinθ

−r sinθ r cosθ

][vxvy

]Solving these matrix equations we have [

uxuy

]=

[cosθ − 1

r sinθ

sinθ1r cosθ

][uruθ

][

vxvy

]=

[cosθ − 1

r sinθ

sinθ1r cosθ

][vrvθ

]Since ux = vy and uy =−vx in Caretesian coordinates


ux = vy

cosθur−1r

sinθuθ = sinθvr +1r

cosθvθ

uy = −vx

sinθur +1r

cosθuθ = −sinθvr−1r

cosθvθ

In matrix form [cosθ − 1

r sinθ

sinθ1r cosθ

][uruθ

]=

[sinθ

1r cosθ

−sinθ − 1r cosθ

][vrvθ

]We can solve this linear system for ur and uθ

[uruθ

]=

[cosθ − 1

r sinθ

sinθ1r cosθ

]−1 [sinθ

1r cosθ

−sinθ − 1r cosθ

][vrvθ

to obtain

[ur]   [ 0    1/r] [vr]
[uθ] = [−r     0 ] [vθ]

In summary, analyticity of a function f in some domain imposes the Cauchy-Riemann conditions in the polar coordinates of z = re^{jθ}:

ur = (1/r) vθ,   uθ = −r vr        (2.5)

in that domain.

Example 12. Show that w = zⁿ is analytic for all z.

Let us express z in polar coordinates, z = re^{jθ}, so that w = rⁿe^{jnθ}, and check the Cauchy-Riemann conditions in polar coordinates. Using De Moivre's law we can decompose w into its real and imaginary parts as follows:

w = rⁿ cos nθ + j rⁿ sin nθ

Thus

u(r,θ) = rⁿ cos nθ,   v(r,θ) = rⁿ sin nθ

ur = n rⁿ⁻¹ cos nθ
uθ = −n rⁿ sin nθ
vr = n rⁿ⁻¹ sin nθ
vθ = n rⁿ cos nθ

We see that the Cauchy-Riemann conditions for analyticity are satisfied:

ur = n rⁿ⁻¹ cos nθ = (n rⁿ cos nθ)/r = vθ/r
uθ = −n rⁿ sin nθ = −r (n rⁿ⁻¹ sin nθ) = −r vr

f(z) can also be expressed in the polar representation f(z) = u(r,θ) + jv(r,θ). Since differentiation is independent of the manner in which ∆z approaches 0, we should be able to express f′(z) in polar coordinates as well. Let us opt to approach z in the r-direction, so that ∆z = ∆r e^{jθ}. Then by the definition of the derivative

f′(z) = lim_{∆r→0} [u(r + ∆r, θ) + jv(r + ∆r, θ) − (u(r,θ) + jv(r,θ))]/(∆r e^{jθ})

Thus

f′(z) = e^{−jθ}(ur + jvr)


2.3 Harmonic Functions

A real-valued function f(x,y) of real variables x and y is said to be harmonic in a certain domain of the xy-plane if it has continuous first and second derivatives in that domain and satisfies the following partial differential equation, called Laplace's equation:

∂²f(x,y)/∂x² + ∂²f(x,y)/∂y² = 0        (2.6)

or, in shorthand, fxx(x,y) + fyy(x,y) = 0.

Laplace’s equation arises in several applications in physics such as the electric potential distribution in a plane. Regardingthe analytic complex functions we have the following very important theorem.

Theorem. If a complex-valued function f (x,y) = u(x,y)+ jv(x,y) is analytic in a region D of the complex z-plane,then u(x,y) and v(x,y) are harmonic in region D.

Proof. Since f is analytic in D, Cauchy-Riemann conditions which are repeated below are satisfied.

∂u(x,y)∂x

=∂v(x,y)

∂y(2.7)

∂u(x,y)∂y

= −∂v(x,y)∂x

(2.8)

Let us differentiate Eqn. 2.7 with respect to x and Eqn. 2.8 with respect to y:

∂ 2u(x,y)∂x2 =

∂ 2v(x,y)∂x∂y

∂ 2u(x,y)∂y2 =− ∂ 2v(x,y)

∂y∂x

From calculus we know that the order of partial differentiation doesn’t matter, i.e.,

∂ 2v(x,y)∂x∂y

=∂ 2v(x,y)

∂y∂x

Adding the two equations we obtain

∂ 2u(x,y)∂x2 +

∂ 2u(x,y)∂y2 =

∂ 2v(x,y)∂x∂y

− ∂ 2v(x,y)∂y∂x

= 0

To prove the second assertion, this time we differentiate Eqn. 2.7 with respect to y and Eqn. 2.8 with respect to x:

∂ 2u(x,y)∂y∂x

=∂ 2v(x,y)

∂y2

∂ 2u(x,y)∂x∂y

= −∂ 2v(x,y)∂x2

Subtracting the second equation from the first one completes the proof:

∂ 2v(x,y)∂x2 +

∂ 2v(x,y)∂y2 = 0

The functions u(x,y) and v(x,y) are said to be conjugates of each other. However, the meaning of conjugate in this context has nothing to do with the usual meaning we have used so far in connection with complex quantities.

Example 13. For w = cos z show that the component functions are harmonic.

In Section 2.4.4 we show that cos z = cos x cosh y − j sin x sinh y, with u(x,y) = cos x cosh y and v(x,y) = −sin x sinh y. Thus w is entire with continuous partial derivatives. Let us obtain the first and second derivatives:


ux = −sin x cosh y,  uy = cos x sinh y
vx = −cos x sinh y,  vy = −sin x cosh y

We see that ux = vy and uy = −vx (the Cauchy-Riemann conditions are satisfied).

uxx = −cos x cosh y,  uyy = cos x cosh y  ⟹  uxx + uyy = 0
vxx = sin x sinh y,   vyy = −sin x sinh y  ⟹  vxx + vyy = 0

Example 14. w = f (z) = z3. Show that u(x,y) and v(x,y) are harmonic.

w is entire with continuous partial derivatives.

w = (x + jy)³ = x³ + 3x²(jy) + 3x(jy)² + (jy)³
  = x³ + j3x²y − 3xy² − jy³
  = x³ − 3xy² + j(3x²y − y³)

u(x,y) = x³ − 3xy²,   v(x,y) = 3x²y − y³

ux = 3x² − 3y²,  uy = −6xy
vx = 6xy,        vy = 3x² − 3y²

uxx = 6x,  uyy = −6x  ⟹  uxx + uyy = 0,  and
vxx = 6y,  vyy = −6y  ⟹  vxx + vyy = 0.
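A hedged numerical check in Matlab (arbitrary test point) approximates the Laplacian of u = x³ − 3xy² by second differences:

>> u = @(x,y) x.^3 - 3*x.*y.^2;
>> x = 1.3; y = 0.4; h = 1e-4;
>> uxx = (u(x+h,y) - 2*u(x,y) + u(x-h,y))/h^2;
>> uyy = (u(x,y+h) - 2*u(x,y) + u(x,y-h))/h^2;
>> uxx + uyy;                        % effectively zero, as the analytic result 6x - 6x predicts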

Example 15. If real (or imaginary) part of an analytic function is given, we can find its harmonic conjugate and build thecomplex function. Let v(x,y) = ex siny. Let us construct u(x,y) and f (z) from v(x,y).

uy (x,y) =−vx (x,y) =−ex sinydu(x,y) =−ex sinydy. Integrating uy with respect to y

u(x,y) =−ˆ

ex sinydy

we obtain u(x,y) = ex cosy+ c1 (x). Where c1 (x) is constant of integration.On the other hand ux (x,y) = vy (x,y) = ex cosy. Integrating ux with respect to x

u(x,y) =ˆ

ex cosydx

we obtain the harmonic conjugate of v(x,y): u(x,y) = ex cosy+ c2 (y), where c2 (y) is constant of integration. Com-paring the two integrations

u(x,y) = ex cosy+ c1 (x) = ex cosy+ c2 (y)we deduce that the integration constants must be independent of x and y, i.e., c1 (x) = c2 (y) = c. Hence u(x,y) =

ex cosy+ c. Thus

f (x,y) = u(x,y)+ jv(x,y)

= ex cosy+ c+ jex siny

= exe jy + c

= ex+ jy + c

Hence f (z) = ez + c.

2.4 Some Elementary Functions

We mean by elementary functions those functions encountered in calculus defined for real variables. Those functionsare extended to their complex forms by setting z = x+ jy ; therefore these extended forms are naturally reduced to theirfamiliar forms when z = x+ j0.


2.4.1 Polynomial and Rational Functions

We have mentioned that a complex number raised to an integer power (w = zⁿ) is entire, which implies that a linear combination of such terms is also entire, i.e., f(z) = a0 + a1z + a2z² + ... + anzⁿ is analytic everywhere in the z-plane. In Example 11 we have shown that d(zⁿ)/dz = nzⁿ⁻¹. Therefore

df(z)/dz = ∑_{i=0}^{n−1} (i+1) a_{i+1} zⁱ

We can form the quotient of two such polynomials and call the new function a rational function:

f(z) = p(z)/q(z) = (∑_{i=0}^{m} bi zⁱ)/(∑_{i=0}^{n} ai zⁱ)        (2.9)

We have seen in Chapter 1, Section 1.4 that q(z) has n roots. f(z) is analytic in the z-plane except where q(z) = 0. We call the z values for which q(z) = 0 the poles of f(z). Poles have a tremendous effect on the behaviour of physical systems. The derivative of 2.9 is

f′(z) = [p′(z)q(z) − p(z)q′(z)]/[q(z)]²

2.4.2 Exponential Function of a Complex Variable

The exponential function is basic to the functions discussed below, in which it appears in one way or another. Some of its properties mimic those of the real exponential function. We can define w = e^z in terms of Re(z) and Im(z). Thus

w = e^z = exp(x + jy) = e^x e^{jy}

hence

e^z = e^x cos y + j e^x sin y

which yields

u(x,y) = Re(e^z) = e^x cos y
v(x,y) = Im(e^z) = e^x sin y

Note that e^z ≠ 0 for all z; |e^z| = e^x and arg(e^z) = y + 2nπ (n = 0, ±1, ±2, ...). From this property we see that e^z is periodic in y with a period of 2π. These relations result in the following identities, which we are familiar with from our knowledge of the real-valued exponential function.

ez1ez2 = ez1+z2

ez1

ez2= ez1−z2

(ez1)z2 = ez1z2

(ez1)1z2 = ez1/z2

Exponential function is entire and its derivative isddz

ez = ez

.

Example 16. Calculate j^j.

j^j = (e^{jπ/2})^j = e^{j²π/2} = e^{−π/2} = 0.2079
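In Matlab the same value is obtained directly (a quick check, not part of the text):

>> (1j)^(1j)
ans =
    0.2079
>> exp(-pi/2)
ans =
    0.2079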


Figure 2.5: ln z is a multi-valued assignment from z to w. For ln z to qualify as a function, the principal value of ln z is adopted to define the complex logarithmic function. The new function, Ln z, is distinguished from the former by the capital L in its name.

Example 17. Calculate (−1)2/π .

(−1)2/π =(e jπ)2/π

= e j2

= cos2+ j sin2= −0.4161+ j0.909

2.4.3 Logarithm of a Complex Number

In the following discussion of logarithms we will be referring to natural logarithms rather than logarithms to base 10. The logarithm of a complex variable z, defined for z ≠ 0, may be a little troublesome and different from the logarithm of real numbers. The logarithm of a real number is defined only for numbers greater than 0 and assigns a unique value to ln x, where x is real and positive. This is not quite so for complex logarithms: the complex logarithm can accept any nonzero argument, including negative ones, and assigns infinitely many values to ln z. In fact the complex treatment is general, and the logarithm of real numbers follows as a special case.

Consider a complex variable z = re jθ expressed in polar form. This number can have an argument which is θ plusinteger multiples of 2π . We form the function w = lnz:

lnz = ln(

re jθ)

where e jθ is periodic with period 2π , that is, e jθ = e j(θ+2πn) (n is an integer). Hence

lnz = ln[re j(θ+2πn)

]= lnr+ j (θ +2πn)

Thus the complex logarithm of z = re jθ is a multi-valued assignment with imaginary parts separated from each other by2π . Certainly this assignment does not qualify lnz to be a function. Therefore we modify the above assignment such thatwe take n = 0 and rename the assignment as Lnz which is called the principal value of lnz. We use capital L in Ln todiscern it from its multi-valued cousin ln (See Fig. 2.5).

Lnz = lnr+ jθ (2.10)

If θ = 0 then z = r is real and Ln z = ln r is the ordinary natural logarithm. If θ = π then z = −r is real and negative.Thus for a negative number we have come up with a novel logarithm

Lnz = ln r+ jπ


Example 18. Complex logarithms allow us to find logarithms of negative numbers. Find Ln (−1).

−1 = e^{jπ}

Ln(−1) = Ln e^{jπ} = jπ
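Matlab agrees, since its log function returns the principal value for negative (and complex) arguments (a quick check, not from the text):

>> log(-1)
ans =
   0.0000 + 3.1416i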

To raise an arbitrary complex number a to the power z, we first find another complex number k such that a = re^{jθ} = e^k, whose solution yields k = Ln a = ln r + jθ. Then

a^z = (e^k)^z = exp(z ln r + jzθ) = r^z exp(jzθ)

2.4.4 Trigonometric Functions of a Complex Variable

Extending the definitions of the real-valued trigonometric functions to complex-valued trigonometric functions is straightforward. We know that cos x = (e^{jx} + e^{−jx})/2 and sin x = (e^{jx} − e^{−jx})/2j. In a similar way we can define cos z and sin z, which are entire:

cos z = (e^{jz} + e^{−jz})/2
sin z = (e^{jz} − e^{−jz})/2j

Expanding e^{jz} and e^{−jz} with z = x + jy we have

cos z = (e^{−y}e^{jx} + e^{y}e^{−jx})/2
      = [e^{−y}(cos x + j sin x) + e^{y}(cos x − j sin x)]/2
      = (e^{y} + e^{−y})/2 · cos x + j (e^{−y} − e^{y})/2 · sin x

Recalling that cosh y = (e^{y} + e^{−y})/2 and sinh y = (e^{y} − e^{−y})/2, we obtain cos z = cos x cosh y − j sin x sinh y; in particular cos(jx) = cosh x and, similarly, sin(jx) = j sinh x.

We also derive the following expansions of complex trigonometric functions

cosz = cosx coshy− j sinx sinhy

sinz = sinx coshy+ j cosx sinhy

tanz =sinx coshy+ j cosx sinhycosx coshy− j sinx sinhy

cotz =cosx coshy− j sinx sinhysinx coshy+ j cosx sinhy

In Fig. 2.6 real and imaginary parts of cosz are illustrated. Derivatives of these functions can be readily obtained fromtheir definitions and the linearity property of derivatives. Thus


Figure 2.6: cosz function. (a) The real part u(x,y) = cosxcoshy, (b) The imaginary part v(x,y) =−sinxsinhy

d/dz sin z = (1/2j)[d/dz e^{jz} − d/dz e^{−jz}]
           = (1/2j)[j e^{jz} − (−j e^{−jz})]
           = (1/2)(e^{jz} + e^{−jz})
           = cos z

and

ddz

cosz = −sinz

ddz

tanz = sec2 z

ddz

cotz = −csc2 z

When z is a real number z = x+ j0 the formulas above reduce to cosz = cosx and sinz = sinx. If z is purely imaginaryz = 0+ jy we obtain cosz = coshy and sinz = j sinhy.
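A hedged numerical confirmation in Matlab (arbitrary test value z = 1 + j), using its built-in complex cosine against the expansion above:

>> z = 1 + 1j;
>> cos(z)
ans =
   0.8337 - 0.9889i
>> cos(1)*cosh(1) - 1j*sin(1)*sinh(1)
ans =
   0.8337 - 0.9889i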

2.4.5 Hyperbolic Functions of a Complex Variable

Extending the definitions of the real-valued hyperbolic functions to complex-valued hyperbolic functions is also straightforward. We know that cosh x = (e^x + e^{−x})/2 and sinh x = (e^x − e^{−x})/2. We can define cosh z, sinh z and tanh z in a similar way:

cosh z = (e^z + e^{−z})/2
sinh z = (e^z − e^{−z})/2
tanh z = sinh z/cosh z = (e^z − e^{−z})/(e^z + e^{−z})
coth z = (e^z + e^{−z})/(e^z − e^{−z})


One can readily show that

coshz = coshx cosy+ j sinhx siny

sinhz = sinhx cosy+ j coshx siny

Derivatives of these functions can be obtained from their definitions and the linearity property of derivatives. Thus

d/dz sinh z = (1/2)[d/dz e^z − d/dz e^{−z}]
            = (1/2)[e^z − (−e^{−z})]
            = (1/2)(e^z + e^{−z})
            = cosh z

and

ddz

coshz = sinhz

ddz

tanhz = sech2z

ddz

cothz = −csch2z

Again note that when z is a real number z = x+ j0 the formulas above reduce to coshz = coshx and sinhz = sinhx. If z ispurely imaginary, that is z = 0+ jy, we obtain coshz = cosy and sinhz = j siny.

Problems

1. Let f be an analytic function in a domain D. Starting with the definition of the derivative of a complex function, show that the derivative of f in D is given by

f′(z) = ux + jvx = vy − juy

2. Let complex functions f (z) and g(z) be analytic over the domains D1 and D2. Rational functions h(z) = f (z)/g(z)are differentiable over the domain D1 ∩D2 except at points for which g(z) = 0. Using the definition of derivativeshow that h′ (z) is given by

dh(z)dz

=f ′ (z)g(z)− f (z)g′ (z)

[g(z)]2

3. Use the definition of derivative to show that the derivative of w = zn is ddz zn = nzn−1.

4. Show that in polar coordinates f′(z) = −j z⁻¹ (uθ + jvθ), making use of

(a) the Cauchy-Riemann conditions in polar coordinates (Eqn. 2.5)

(b) the definition of the derivative.
Hint: In polar coordinates, keeping r constant and incrementing θ, ∆z is given by ∆z = jz∆θ.

5. Show that e^z is analytic for all z.

6. Solve equation ez = 1− j for z.

7. Although ex > 0 for all real x, prove that ez can be negative.

8. Given the complex function w = z3,


(a) Show that w is analytic checking the Cauchy-Riemann conditions in

i. Cartesian coordinatesii. Polar coordinates.

(b) Verify that u(x,y) and v(x,y) satisfy the Laplace equation.

9. Calculate (1+ j)1+ j.

10. z0 = x0 + jy0 is a constant complex number. Show that

(a) if w(z) is a differentiable function, then d(x0 w)/dz = x0 w′

(b) d/dz e^{x0 z} = x0 e^{x0 z}.

11. Show that Lnz is analytic except at z = 0.

12. Let z be a complex number. Show that

(a) cos2 z+ sin2 z = 1

(b) sin(−z) =−sinz, cos(−z) = cosz

(c) sin(z1± z2) = sinz1 cosz2± cosz1 sinz2

(d) cos(z1± z2) = cosz1 cosz2∓ sinz1 sinz2

13. Calculate sin(π/6+ j0.5) using the definition of the sin function and compare with the result obtain in Example ?.

14. Computer experiment. Consider a domain in the complex z−plane D : |x|< 1 and 0≤ y < 2π . Let w = ez.

(a) Derive u = u(x,y) and v = v(x,y).

(b) Using your favorite programming platform depict the mapping from D onto w−plane. Below is shown themapping generated by LabVIEW.

Figure 2.7: Problem 14: Mapping from the z-plane to the w-plane as defined by the function w = e^z.

15. Find z3 using the Mandelbrot iteration formula in Eqn. 2.2.

16. Consider two Mandelbrot series zk and wk defined by

zk = z2k−1 + z0

wk = w2k−1 +w0

Show that wk = z∗k if w0 = z∗0, that is, the figures generated by these formulas are symmetric about x axis.

17. Consider Eqn. 2.2. Show that rk = |zk| → ∞ as k→ ∞ if r0 > 1.


18. Computer experiment. The Mandelbrot figures can be generated on computer. The following implementation isdone on a LabVIEW platform. The tedious job of iteration is carried out by the FOR loop. Pay attention to howcomplex numbers are generated and handled in LabVIEW. Once all the numbers zk are produced they are plotted yversus x on an XY graph.

Problem 18

19. The complex tangent function is given by tan z = (sin x cosh y + j cos x sinh y)/(cos x cosh y − j sin x sinh y). Show that

tan z = (sin 2x + j sinh 2y)/(cos 2x + cosh 2y)

Hint: sinh x = (e^x − e^{−x})/2 and cosh x = (e^x + e^{−x})/2.


Chapter 3

The Laplace Transform

"Pierre-Simon, marquis de Laplace (23 March 1749 – 5March 1827) was an influential French scholar whose workwas important to the development of mathematics, statistics,physics, and astronomy. He summarized and extended thework of his predecessors in his five-volume MécaniqueCéleste (Celestial Mechanics). Laplace formulatedLaplace’s equation, and pioneered the Laplace transformwhich appears in many branches of mathematical physics, afield that he took a leading role in forming. The Laplaciandifferential operator, widely used in mathematics, is alsonamed after him. He restated and developed the nebularhypothesis of the origin of the Solar System and was one ofthe first scientists to postulate the existence of black holesand the notion of gravitational collapse.Sometimes referred to as the French Newton or Newton ofFrance, Laplace has been described as possessing a phenom-enal natural mathematical faculty superior to that of any ofhis contemporaries."Abridged from Wikipedia

Laplace Transform serves as a convenient tool to solve andunderstand linear systems. In your study of differentialequations you have learned how to find functions that aresolutions to differential equations. You may wonder whyone should resort to another tool to solve linear systemswhile one can do it with differential equations.

The first answer to the question posed above is theease with which differential equations can be solved ifwe can transform them into algebraic equations. Lineardifferential equations can indeed be transformed into al-gebraic equations by Laplace Transform. Transformationrules and transform tables then simplify the solution pro-cedure.

A second and more subtle answer is the insight thatLaplace transform provides us. Time-domain signalsare mapped into their complex frequency representationswhich give us a deeper and probably a better insightinto the system behavior than differential equations. Incomplex-frequency domain, namely the s-domain, the no-tion of frequency is extended to complex frequency. Thevery attributes, real and imaginary, can be deceptive, mis-leading, absurd, improper or whatever you may choose todescribe them. Indeed the complex frequency s = σ + jω1

introduced by the Laplace transform has two components:the real part σ and the imaginary part ω . Ironically ω ,which is on the imaginary axis, is the real frequency whichwe can physically generate. σ , the real part, on the otherhand seems unreal, having no period, and appears physi-cally unrealizable. σ , which lacks a period, appears to bea mathematical necessity of the Laplace transform. So is itunrealizable? After expending some thought, the answershould be negative, because we can generate a decayingsinusoidal wave e−σt sin ωt in the lab in which both σ

and ω have dimensions of frequency. As will be appar-ent shortly, this signal transforms into a real and a pair ofcomplex-conjugate poles. These poles are the natural fre-quencies of the system. Even this insight makes study ofLaplace transform worthwhile.

Another advantage of transforming a system into the complex frequency domain is the insight it provides into stability. Systems with poles in the left half of the s-plane are stable. Simple poles with σ = 0 imply a bounded but non-decaying (marginally stable) response, while multiple poles with σ = 0 indicate an unstable system. Systems with poles in the right half plane (σ > 0) are unstable regardless of multiplicity.

1Some authors prefer to use p instead of s. In this book we will consistently use s.


Figure 3.1: RC circuit driven by a 10 sin 2t volt source, with R = 100 kΩ and C = 10 µF; a first-order linear system which can be modeled with a linear differential equation.

As we will discover when we study Chapter 4, Fourier transform is closely related to Laplace transform. WhileLaplace transform is used to obtain transient and steady state responses for general stimuli, Fourier transform is used forsteady state analysis of systems with sinusoidal excitation. More remarks will be made when we take that topic in itsplace. For now suffice it to say that mastering the Laplace transform helps us to master the Fourier transform.

3.1 Motivation to use Laplace Transform

Many engineering systems can be modeled as linear systems and can be mathematically described by constant-coefficient linear differential equations. Such a system, an RC circuit, is shown in Fig. 3.1, and its differential equation can be derived from Kirchhoff's Voltage Law:

10 sin 2t = v(t) + RC dv(t)/dt

The capacitor voltage is initially 10 Volts. Since RC = 10⁵ · 10⁻⁵ = 1 sec:

dv(t)/dt + v(t) = 10 sin 2t        (3.1)

v(t) is the sum of the homogeneous and particular solutions:

v(t) = vh(t) + vp(t)
vh(t) = A e^{−t}
vp(t) = B sin(2t + θ)
v(t) = A e^{−t} + B sin(2t + θ)

The particular solution can be found from the steady state:

dvp(t)/dt + vp(t) = 10 sin 2t
d/dt [B sin(2t + θ)] + B sin(2t + θ) = 10 sin 2t
2B cos(2t + θ) + B sin(2t + θ) = 10 sin 2t
√5 B sin[2t + θ + arctan(2B/B)] = 10 sin 2t
√5 B sin[2t + θ + arctan 2] = 10 sin 2t

We deduce from these equations:

B = 2√5 Volts
θ = −arctan 2 = −1.107149 rad


A can be found from the total solution evaluated at t = 0⁻. With v(0⁻) = 10:

v(t)|_{t=0} = [A e^{−t} + B sin(2t + θ)]_{t=0}
10 = A + B sinθ
A − 2√5 sin(arctan 2) = 10
A − 2√5 · 2/√(1 + 2²) = 10
A − 4 = 10
A = 14

Hence

v(t) = 14 e^{−t} + 2√5 sin(2t − arctan 2) Volts
     = 14 e^{−t} + 2√5 sin(2t − 1.10715) Volts

To verify this solution, substitute it into the differential equation dv(t)/dt + v(t) = 10 sin 2t:

d/dt [14 e^{−t} + 2√5 sin(2t − arctan 2)] + 14 e^{−t} + 2√5 sin(2t − arctan 2) = 10 sin 2t
−14 e^{−t} + 4√5 cos(2t − arctan 2) + 14 e^{−t} + 2√5 sin(2t − arctan 2) = 10 sin 2t
2√5 [2 cos(2t − arctan 2) + sin(2t − arctan 2)] = 10 sin 2t
2√5 · √5 sin(2t − arctan 2 + arctan 2) = 10 sin 2t
10 sin 2t = 10 sin 2t

and evaluate it at t = 0:

14 + 2√5 sin(−1.10714871779409) = 10 Volts

Q.E.D.

Compare this solution with the Laplace transform solution of Section 3.5.3 to see the ease with which we can arrive at the answer. For higher-order differential equations, finding the homogeneous and particular solutions is very tedious indeed. As will be apparent there, there is no need to find separate homogeneous and particular solutions; the total solution is found in one step.
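As an additional, hedged cross-check (not in the text), the differential equation can be integrated numerically in Matlab and compared with the closed-form solution found above:

>> f = @(t, v) 10*sin(2*t) - v;                    % dv/dt = 10 sin 2t - v
>> [t, vnum] = ode45(f, [0 10], 10);               % initial condition v(0) = 10
>> vexact = 14*exp(-t) + 2*sqrt(5)*sin(2*t - atan(2));
>> max(abs(vnum - vexact));                        % small (limited only by ode45's tolerance)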

3.2 Definition of the Laplace Transform

The Laplace transform of a function f(t) is defined as²

L[f(t)] = F(s) = ∫_{0⁻}^{∞} f(t) e^{−st} dt        (3.2)

where

²The definition given above is called the unilateral Laplace transform. There is also a bilateral Laplace transform, with the lower integral limit set to −∞. The unilateral transform enables us to use initial conditions for the differential equation. We will be using the unilateral Laplace transform in this book.


Figure 3.2: Region of convergence (ROC) for an exponential function f(t) = e^{−at} where a is complex.

s = σ + jω        (3.3)

is a complex quantity called the complex frequency. The integral exists provided that the function is absolutely integrable. Absolute integrability is a result of the requirement

|∫_{0⁻}^{∞} f(t) e^{−st} dt| < ∞

Due to the triangle inequality,

|∫_{0⁻}^{∞} f(t) e^{−st} dt| ≤ ∫_{0⁻}^{∞} |f(t)| e^{−σt} dt

If the integral on the right is finite, the requirement is met. Thus

|∫_{0⁻}^{∞} f(t) e^{−st} dt| ≤ ∫_{0⁻}^{∞} |f(t)| e^{−σt} dt < ∞        (3.4)

A bounded function f(t) has a Laplace transform for σ > 0. Boundedness dictates that |f(t)| ≤ M for some positive number M > 0. With this, (3.4) becomes

∫_{0⁻}^{∞} |f(t)| e^{−σt} dt < M ∫_{0⁻}^{∞} e^{−σt} dt,   where ∫_{0⁻}^{∞} e^{−σt} dt = 1/σ for σ > 0 and diverges for σ ≤ 0.

Thus

|∫_{0⁻}^{∞} f(t) e^{−σt} dt| ≤ M/σ < ∞

Let f(t) = e^{−at}. Then F(s) = L[f(t)] becomes

∫_{0⁻}^{∞} e^{−at} e^{−st} dt = ∫_{0⁻}^{∞} exp[−(σ + Re a)t] exp[−j(ω + Im a)t] dt

|∫_{0⁻}^{∞} e^{−at} e^{−st} dt| ≤ ∫_{0⁻}^{∞} exp[−(σ + Re a)t] dt

As shown in Fig. 3.2, for this integral to exist we must have σ > −Re a. This is called the region of convergence (ROC) of F(s).

In this text all the functions we deal with satisfy the convergence requirement in Equation 3.4. If the above integral exists, the Laplace transform is invertible; that is, we can retrieve the original function f(t) from its Laplace transform F(s) using the complex integral below:

L⁻¹[F(s)] = f(t) = (1/2πj) ∫_{σ1−j∞}^{σ1+j∞} F(s) e^{st} ds        (3.5)


However, with the transforms of some basic functions and the properties of the transform at hand, and using the partial fraction expansion outlined in Sec. 3.4, the inverse transform is much easier to find from a table; hence there is no need to use complex contour integration.

The function and its transform as defined in Eqn. 3.2 are usually denoted symbolically as

f(t) ←→ F(s)

The function f(t) on the left depends on a real variable which usually represents time (t stands for time). One way of representing a signal is to describe it by a function f(t) in the time domain. The Laplace transform is another representation that describes the signal in another domain, namely the complex-frequency domain. Note the e^{−st} multiplier in Eqn. 3.2. The quantity s is called the complex frequency. Since st is a dimensionless quantity and t has the dimension of time (T), the dimensions of σ, ω and s have to be T⁻¹. Ironically, ω, the imaginary part of the complex frequency, is physically realizable because we can generate it with signal generators in the laboratory.

Figure 3.3: Damped sine wave is a time function.

To illustrate the notion of the two domains, consider the signal depicted in Fig. 3.3, expressed in the time domain by f(t) = e^{−0.5t} sin 2t. Referring to Table 3.1, this signal is expressed in the complex-frequency domain by

F(s) = 2/[(s + 0.5)² + 4]

We say that f(t) is mapped onto F(s) by the Laplace transform. As s is a complex number, F(s) is also complex, having real and imaginary parts, or a magnitude and a phase. Substituting s = σ + jω into F(s) we get

F(s) = 2[(σ + 0.5)² + 4 − ω²] / {[(σ + 0.5)² + 4 − ω²]² + 4(σ + 0.5)²ω²}
       − j 4(σ + 0.5)ω / {[(σ + 0.5)² + 4 − ω²]² + 4(σ + 0.5)²ω²}

|F(s)| = 2 / √{[(σ + 0.5)² + 4 − ω²]² + 4(σ + 0.5)²ω²}

arg[F(s)] = −tan⁻¹[2(σ + 0.5)ω / ((σ + 0.5)² + 4 − ω²)]

You can appreciate the complexity of these expressions. It is not our intention to scare you away from the subject with these awful expressions. We give these results so that you can obtain the s-domain graphs in Fig. 3.4 using your favourite math program. Fortunately we do not bother with such complexities in formulas or in s-plane graphs. We just manipulate algebraic equations using transform rules and transform tables. All that matters is that you understand what we are up to!
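For instance, a hedged numerical check in Matlab evaluates the defining integral of the transform at one (arbitrary) test point inside the ROC and compares it with the closed form:

>> ft = @(t) exp(-0.5*t).*sin(2*t);
>> s = 1 + 2j;                                     % arbitrary test point with sigma > -0.5
>> Fnum = integral(@(t) ft(t).*exp(-s*t), 0, Inf); % numerical Laplace integral
>> Fformula = 2/((s + 0.5)^2 + 4);
>> abs(Fnum - Fformula);                           % agreement to within integration tolerance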

Example 19. Find Laplace transform of (a) Unit impulse function δ (t), (b) Unit step function u(t), (c) e−atu(t), (d)sin ωt.


Figure 3.4: f (t) = e−0.5t sin2t in s-domain. (a) Real part, (b) imaginary part, (c) magnitude of F (s).


(a) Applying the unit impulse function's sifting property, ∫_{−∞}^{+∞} f(t)δ(t) dt = f(0), we get

∫_{0⁻}^{∞} δ(t) e^{−st} dt = e^{−s·0} = 1

(b)

L[u(t)] = ∫_{0⁻}^{∞} u(t) e^{−st} dt
        = ∫_{0⁻}^{∞} 1 · e^{−st} dt
        = −(1/s) · [e^{−st}]_{0⁻}^{+∞}
        = −(1/s) · (0 − 1)
        = 1/s

(c)

L[e^{−at}u(t)] = ∫_{0⁻}^{∞} e^{−at} u(t) e^{−st} dt
              = ∫_{0⁻}^{∞} e^{−(s+a)t} dt
              = 1/(s + a)

(d) Since

sin ωt = (e^{jωt} − e^{−jωt})/2j

using the result from (c) we get

L[sin ωt] = L[(e^{jωt} − e^{−jωt})/2j]
          = (1/2j) · [1/(s − jω) − 1/(s + jω)]
          = (1/2j) · 2jω/(s² + ω²)
          = ω/(s² + ω²)
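The last result can be spot-checked numerically in Matlab (arbitrary ω and test point s, not part of the text):

>> w = 3; s = 2 + 1j;
>> integral(@(t) sin(w*t).*exp(-s*t), 0, Inf)      % defining integral
ans =
   0.2250 - 0.0750i
>> w/(s^2 + w^2)
ans =
   0.2250 - 0.0750i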

3.3 Properties of the Laplace Transform

Linearity

The Laplace transform of a linear combination of transformable functions is given by the same linear combination of the Laplace transforms of those functions, i.e.,

L[∑ᵢ aᵢ fᵢ(t)] = ∑ᵢ aᵢ L[fᵢ(t)]

Real Differentiation

Given a function f(t) with initial condition f(0⁻), the Laplace transform of its derivative is

L[df(t)/dt] = sF(s) − f(0⁻)

This result can be obtained using integration by parts as follows:

L[f′(t)] = ∫_{0⁻}^{∞} f′(t) e^{−st} dt


∫_{0⁻}^{∞} e^{−st} f′(t) dt = e^{−st} f(t)|_{0⁻}^{∞} − (−s) ∫_{0⁻}^{∞} f(t) e^{−st} dt
                          = e^{−st} f(t)|_{0⁻}^{∞} + s ∫_{0⁻}^{∞} f(t) e^{−st} dt

L[f′(t)] = sF(s) − f(0⁻)

which can be readily generalized to derivatives of order n:

L[dⁿf(t)/dtⁿ] = sⁿF(s) − sⁿ⁻¹f(0⁻) − sⁿ⁻²f′(0⁻) − ··· − f⁽ⁿ⁻¹⁾(0⁻)
             = sⁿF(s) − ∑_{i=0}^{n−1} sⁱ f⁽ⁿ⁻ⁱ⁻¹⁾(0⁻)

Real Integration

Given a function f(t) with Laplace transform F(s), the Laplace transform of its integral is given by

L[∫_{0⁻}^{t} f(τ) dτ] = F(s)/s

Proof.

L[∫_{0⁻}^{t} f(τ) dτ] = ∫_{0⁻}^{+∞} [∫_{0⁻}^{t} f(τ) dτ] e^{−st} dt

Integrating by parts we obtain

L[∫_{0⁻}^{t} f(τ) dτ] = −[(e^{−st}/s) ∫_{0⁻}^{t} f(τ) dτ]_{0⁻}^{∞} + (1/s) ∫_{0⁻}^{∞} e^{−st} f(t) dt

Since lim_{t→∞} e^{−st} = 0 and ∫_{0⁻}^{0⁻} f(τ) dτ = 0, we obtain

L[∫_{0⁻}^{t} f(τ) dτ] = F(s)/s

Differentiation by s

L[t f(t)] = −dF(s)/ds

Proof. We refer back to the definition of the Laplace transform (Eqn. 3.2):

L[f(t)] = F(s) = ∫_{0⁻}^{∞} f(t) e^{−st} dt

Page 61: transform mathematics

3.3. PROPERTIES OF THE LAPLACE TRANSFORM 61

Differentiating F (s) with respect to s we have

dF (s)ds

=dds

ˆ∞

0−f (t)e−stdt

=

ˆ∞

0−f (t)

dds

(e−st)dt

=

ˆ∞

0−(−t) f (t)e−stdt

Thus we get

dF (s)ds

=−ˆ

0−t f (t)e−stdt = −L [t f (t)]

Real Translation
Delaying f(t) by a in the time domain amounts to multiplying its transform by e^{-as}:

\mathcal{L}[f(t-a)u(t-a)] = e^{-as}F(s)

Let us make the change of variable x = t - a, hence t = x + a and dt = dx:

\mathcal{L}[f(t-a)u(t-a)] = \int_{0^-}^{\infty}f(x)e^{-s(x+a)}\,dx = \int_{0^-}^{\infty}e^{-sa}f(x)e^{-sx}\,dx = e^{-as}\int_{0^-}^{\infty}f(x)e^{-sx}\,dx = e^{-as}F(s)

Complex Translation
Shifting F(s) by a in the s-domain amounts to multiplying the time function by e^{at}:

F(s-a) = \mathcal{L}\left[e^{at}f(t)\right]

Referring back to the Laplace transform definition (Eqn. 3.2):

F(s-a) = \int_{0^-}^{+\infty}f(t)e^{-(s-a)t}\,dt = \int_{0^-}^{+\infty}e^{at}f(t)e^{-st}\,dt = \mathcal{L}\left[e^{at}f(t)\right]

Periodic Functions
A periodic function with period T satisfies f(t) = f(t+nT) for all integers n, i.e., n \in \mathbb{Z}. The Laplace transform of such a function follows from the definition of the transform and from periodicity:

F(s) = \int_{0^-}^{\infty}f(t)e^{-st}\,dt = \int_{0^-}^{T}f(t)e^{-st}\,dt + \int_{T}^{2T}f(t)e^{-st}\,dt + \int_{2T}^{3T}f(t)e^{-st}\,dt + \cdots

Since f(t) = f(t+nT), we can evaluate each integral \int_{nT}^{(n+1)T}f(t)e^{-st}\,dt from t = 0^- to t = T by shifting f(t) left by nT. Thus

F(s) = \int_{0^-}^{T}f(t)e^{-st}\,dt + \int_{0^-}^{T}f(t+T)e^{-s(t+T)}\,dt + \int_{0^-}^{T}f(t+2T)e^{-s(t+2T)}\,dt + \cdots
     = \int_{0^-}^{T}f(t)e^{-st}\,dt + e^{-sT}\int_{0^-}^{T}f(t)e^{-st}\,dt + e^{-2sT}\int_{0^-}^{T}f(t)e^{-st}\,dt + \cdots
     = \left(1 + e^{-sT} + e^{-2sT} + \cdots\right)\int_{0^-}^{T}f(t)e^{-st}\,dt

F(s) = \frac{1}{1-e^{-sT}}\int_{0^-}^{T}f(t)e^{-st}\,dt


Figure 3.5: Sawtooth waveform with T = 1.

Example 2. Find the Laplace transform of the sawtooth function in Fig. 3.5, defined by f(t) = t for 0 \leq t < 1 and periodic with T = 1.

Solution:

F(s) = \frac{1}{1-e^{-sT}}\int_{0^-}^{T}f(t)e^{-st}\,dt = \frac{1}{1-e^{-s}}\int_{0^-}^{1}t\,e^{-st}\,dt = \frac{1}{1-e^{-s}}\left[-\frac{(st+1)e^{-st}}{s^2}\right]_{t=0}^{1} = \frac{1}{1-e^{-s}}\left[-\frac{(s+1)e^{-s}-1}{s^2}\right] = \frac{1-e^{-s}-se^{-s}}{s^2\left(1-e^{-s}\right)}
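The periodic-function formula for this sawtooth can be checked in Maxima by evaluating the one-period integral directly; assume(s > 0) just keeps the integrator from asking about the sign of s.

assume(s > 0);
F : (1/(1 - exp(-s))) * integrate(t*exp(-s*t), t, 0, 1);
ratsimp(F);   /* expect (1 - e^{-s} - s e^{-s}) / (s^2 (1 - e^{-s})) */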

Transform of Convolution

The convolution of two functions x(t) and h(t) is denoted by x(t) * h(t) and defined as

x(t) * h(t) = \int_{-\infty}^{\infty}x(\tau)h(t-\tau)\,d\tau = \int_{-\infty}^{\infty}h(\tau)x(t-\tau)\,d\tau    (3.6)

which is called the convolution integral. If h(t) and x(t) are zero for t \leq 0, then Eqn. 3.6 reduces to

x(t) * h(t) = \int_{0^-}^{t}x(\tau)h(t-\tau)\,d\tau = \int_{0^-}^{t}h(\tau)x(t-\tau)\,d\tau    (3.7)

Figure 3.6: A linear system driven by (a) an impulse, (b) an arbitrary input.

The convolution integral plays a crucial role in the study of signals and systems (Fig. 3.6). A linear system which responds to an impulse with h(t) responds to an arbitrary input x(t) with an output y(t) which is the convolution of x(t) and h(t). That is,

y(t) = x(t) * h(t)    (3.8)

Don't worry about the details of convolving two signals at this point; you will have ample time to work with convolution when you study signals and systems. For now it suffices to mention that the Laplace transform of Eqn. 3.8 is given by

Y(s) = X(s)H(s)

where X(s) and H(s) are the Laplace transforms of x(t) and h(t) respectively.

Proof. From the definition of the Laplace transform we can write

Y(s) = \int_{0^-}^{\infty}\left[x(t) * h(t)\right]e^{-st}\,dt    (3.9)

Substituting Eqn. 3.7 into 3.9 we obtain

Y(s) = \int_{0^-}^{\infty}\left[\int_{0^-}^{\infty}h(\tau)x(t-\tau)\,d\tau\right]e^{-st}\,dt = \int_{0^-}^{\infty}dt\left[\int_{0^-}^{\infty}d\tau\,h(\tau)x(t-\tau)\,e^{-st}\right]

We can readily change the order of integration in this double integral and write

Y(s) = \int_{0^-}^{\infty}d\tau\left[\int_{0^-}^{\infty}dt\,h(\tau)x(t-\tau)\,e^{-st}\right] = \int_{0^-}^{\infty}d\tau\,h(\tau)\left[\int_{0^-}^{\infty}dt\,x(t-\tau)\,e^{-st}\right]

With the change of variable u = t - \tau and a rearrangement we get

Y(s) = \int_{0^-}^{\infty}h(\tau)e^{-s\tau}\,d\tau\cdot\int_{0^-}^{\infty}x(u)e^{-su}\,du

Y(s) = H(s)X(s)    (3.10)

Q.E.D.

Example 3. A causal system's impulse response is given as

h(t) = \frac{1}{RC}e^{-t/RC}u(t)

Find the unit-step response.

Solution:
x(t) = u(t), where u(t) is the unit-step function. The convolution integral (Eqn. 3.6) gives the response of the system as

y(t) = \int_{-\infty}^{+\infty}x(\tau)h(t-\tau)\,d\tau

Since x(t) = 0 for t < 0 and the system is causal, we use Eqn. 3.7 to find the output. Evaluating this integral we get

y(t) = \int_{0}^{+\infty}u(\tau)\cdot\frac{1}{RC}e^{-(t-\tau)/RC}u(t-\tau)\,d\tau
     = \frac{1}{RC}\int_{0}^{t}e^{-(t-\tau)/RC}\,d\tau
     = \frac{e^{-t/RC}}{RC}\int_{0}^{t}e^{\tau/RC}\,d\tau
     = \frac{e^{-t/RC}}{RC}\left[RC\,e^{\tau/RC}\right]_{0}^{t}
     = e^{-t/RC}\left(e^{t/RC}-1\right)
     = \left(1-e^{-t/RC}\right)u(t)

From this result the Laplace transform of y(t) follows:

Y(s) = \mathcal{L}\left[\left(1-e^{-t/RC}\right)u(t)\right] = \frac{1}{s}-\frac{1}{s+\frac{1}{RC}} = \frac{1}{s(1+sRC)}

Now let us use the convolution property of the Laplace transform. From the transform table (Table 3.1), or by direct integration, one readily obtains

X(s) = \frac{1}{s}

For h(t), using the complex-frequency shift property, we have

H(s) = \frac{1}{RC}\cdot\frac{1}{s+\frac{1}{RC}} = \frac{1}{1+sRC}

Hence

Y(s) = X(s)H(s) = \frac{1}{s(1+sRC)}

Thus the convolution property gives a result in agreement with that obtained by applying the convolution integral directly.
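The same inversion can be done in Maxima with a symbolic RC; if your version balks at the parameter, substitute a numerical value such as RC = 1 first.

assume(RC > 0);
Y : 1/(s*(1 + s*RC));
ilt(Y, s, t);    /* expect 1 - exp(-t/RC) */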


Initial Value Theorem

\lim_{t\to 0^+}f(t) = \lim_{s\to\infty}sF(s)

Proof. From the Laplace transform definition,

\mathcal{L}\left[\frac{df(t)}{dt}\right] = \int_{0^-}^{\infty}\frac{df(t)}{dt}e^{-st}\,dt = sF(s)-f(0^-)

\lim_{s\to\infty}\int_{0^-}^{\infty}\frac{df(t)}{dt}e^{-st}\,dt = \lim_{s\to\infty}\left[sF(s)-f(0^-)\right]

The left-hand side is 0 since \lim_{s\to\infty}e^{-st} = 0. Therefore

\lim_{s\to\infty}\left[sF(s)-f(0^-)\right] = 0

\lim_{s\to\infty}sF(s) = f(0^-)

If f(t) is continuous at t = 0, then f(0^+) = f(0^-), from which the assertion follows.

Now consider the case where f(t) has a step discontinuity at t = 0, i.e., f(0^+) = f(0^-)+A, where A is a constant. We can split f(t) into a continuous part g(t) and a step function:

f(t) = g(t)+Au(t)

It is clear that f(0^-) = g(0^-). Upon differentiation we get

\frac{df(t)}{dt} = \frac{dg(t)}{dt}+A\delta(t)

so that

sF(s)-f(0^-) = \mathcal{L}\left[\frac{df(t)}{dt}\right] = \mathcal{L}\left[\frac{dg(t)}{dt}\right]+A = sG(s)-g(0^-)+A

Because \lim_{s\to\infty}\left[sG(s)-g(0^-)\right] = 0 and g(0^-) = f(0^-),

\lim_{s\to\infty}sF(s) = f(0^-)+A = f(0^-)+f(0^+)-f(0^-) = f(0^+)

Q.E.D.

Example 4. Find \lim_{t\to 0^+}\dfrac{\sin t}{t}.

Using tables or wxMaxima we get \mathcal{L}\left[\dfrac{\sin t}{t}\right] = \dfrac{\pi}{2}-\tan^{-1}s = \tan^{-1}\left(\dfrac{1}{s}\right). Therefore

\lim_{t\to 0^+}\frac{\sin t}{t} = \lim_{s\to\infty}s\,\mathcal{L}\left[\frac{\sin t}{t}\right] = \lim_{s\to\infty}s\left[\frac{\pi}{2}-\tan^{-1}s\right] = \lim_{s\to\infty}s\tan^{-1}\left(\frac{1}{s}\right) = 1

in agreement with the well-known limit \sin t/t\to 1.
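Both sides of this initial-value check can be evaluated in Maxima:

limit(s*(%pi/2 - atan(s)), s, inf);   /* expect 1 */
limit(sin(t)/t, t, 0, plus);          /* expect 1 as well */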


Final Value Theorem
Provided that the poles of sF(s) lie in the left half of the s-plane, f(\infty) can be found from the final value theorem:

\lim_{t\to\infty}f(t) = \lim_{s\to 0}sF(s)

Proof. From the Laplace transform definition,

\mathcal{L}\left[\frac{df(t)}{dt}\right] = \int_{0^-}^{\infty}\frac{df(t)}{dt}e^{-st}\,dt = sF(s)-f(0^-)

As \lim_{s\to 0}e^{-st} = 1, the left-hand side becomes

\lim_{s\to 0}\int_{0^-}^{\infty}\frac{df(t)}{dt}\,dt = \lim_{t\to\infty}\left[f(t)-f(0^-)\right]

Therefore

\lim_{t\to\infty}\left[f(t)-f(0^-)\right] = \lim_{s\to 0}\left[sF(s)-f(0^-)\right]

\lim_{t\to\infty}f(t) = \lim_{s\to 0}sF(s)

Q.E.D.

Example 5. Find \lim_{t\to\infty}t\,e^{-t}.

Solution: Since

\mathcal{L}\left[t\,e^{-t}\right] = \frac{1}{(s+1)^2}

we have

\lim_{t\to\infty}t\,e^{-t} = \lim_{s\to 0}s\,\mathcal{L}\left[t\,e^{-t}\right] = \lim_{s\to 0}\frac{s}{(s+1)^2} = 0

Example 6. Find \lim_{t\to\infty}\cosh t.

Solution:

\mathcal{L}[\cosh t] = \frac{s}{s^2-1}

\lim_{t\to\infty}\cosh t = \lim_{s\to 0}s\,\mathcal{L}[\cosh t] = \lim_{s\to 0}\frac{s^2}{s^2-1} = 0

On the other hand we get a different answer if we use

\lim_{t\to\infty}\cosh t = \lim_{t\to\infty}\frac{e^t+e^{-t}}{2} = \frac{1}{2}\left(\lim_{t\to\infty}e^t+\lim_{t\to\infty}e^{-t}\right) = \infty+0 = \infty

The two answers are inconsistent; the first one is incorrect because the hyperbolic cosine has a pole at s = 1, in the right half of the s-plane, so the final value theorem does not apply.


3.4 The Inverse Laplace Transform

Eqn. 3.2 defines the Laplace transform of a function provided that the function meets the conditions for existence. Given a function F(s) in the complex-frequency domain, one can use the inverse transform formula (3.5)

f(t) = \frac{1}{2\pi j}\int_{\sigma_1-j\infty}^{\sigma_1+j\infty}F(s)e^{st}\,ds

to retrieve the time function. Eqn. 3.5 involves complex contour integration, which is beyond the scope of this book. Linear systems give rise to Laplace transforms which are quotients of polynomials in s. This allows us to use a simpler approach to invert Laplace transforms: partial fraction expansions, the properties of the Laplace transform, and the table of transforms of simple functions (Table 3.1).

Partial Fraction Expansions. The linear systems mentioned above are governed by constant-coefficient differential equations of various orders. Their response to an input is obtained by convolving the input with the system impulse response. The solution of the differential equation includes a homogeneous and a particular solution. As Section 3.1 demonstrated, the process is tedious and prone to human error.

Transforming the differential equation and the excitation function results in an algebraic equation of the form 3.10. By doing this we transfer the system formulation from the time domain to the complex-frequency domain. In this domain we deal with Y(s), which is usually a quotient of polynomials in s. Obtaining y(t) back from Y(s) is the process of inverse Laplace transformation, which can be performed with the inversion integral (3.5); this involves complex integration and is not easy to carry out. On the other hand, by exploiting the fact that Y(s) is a rational function of s, and then using the Laplace transform rules and the table of Laplace transforms, we can obtain the inverse transform much more easily. This greatly facilitates finding solutions to differential equations. Y(s) = H(s)X(s) is the quotient of two polynomials, N(s) and D(s):

Y(s) = \frac{N(s)}{D(s)} = \frac{a_m s^m+a_{m-1}s^{m-1}+\cdots+a_0}{b_n s^n+b_{n-1}s^{n-1}+\cdots+b_0}

Table 3.1: Laplace Transforms

    Function                                               Laplace Transform
 1  f(t)                                                   \int_{0^-}^{\infty}f(t)e^{-st}\,dt
 2  a_1 f_1(t)+a_2 f_2(t)                                  a_1 F_1(s)+a_2 F_2(s)
 3  \frac{df(t)}{dt}                                       sF(s)-f(0^-)
 4  \frac{d^n f(t)}{dt^n}                                  s^n F(s)-\sum_{j=1}^{n}s^{n-j}f^{(j-1)}(0^-)
 5  \int_{0^-}^{t}f(\tau)\,d\tau                           \frac{1}{s}F(s)
 6  \int_{0^-}^{t}\int_{0^-}^{\sigma}f(\tau)\,d\tau\,d\sigma   \frac{1}{s^2}F(s)
 7  (-t)^n f(t)                                            \frac{d^n F(s)}{ds^n}
 8  f(t-a)u(t-a)                                           e^{-as}F(s)
 9  e^{at}f(t)                                             F(s-a)
10  \delta(t)                                              1
11  \frac{d^n}{dt^n}\delta(t)                              s^n
12  u(t)                                                   \frac{1}{s}
13  t                                                      \frac{1}{s^2}
14  \frac{t^n}{n!}                                         \frac{1}{s^{n+1}}
15  e^{-\alpha t}                                          \frac{1}{s+\alpha}
16  \frac{1}{\beta-\alpha}\left(e^{-\alpha t}-e^{-\beta t}\right)   \frac{1}{(s+\alpha)(s+\beta)}
17  \sin\omega t                                           \frac{\omega}{s^2+\omega^2}
18  \cos\omega t                                           \frac{s}{s^2+\omega^2}
19  \sinh at                                               \frac{a}{s^2-a^2}
20  \cosh at                                               \frac{s}{s^2-a^2}
21  e^{-\alpha t}\sin\omega t                              \frac{\omega}{(s+\alpha)^2+\omega^2}
22  e^{-\alpha t}\cos\omega t                              \frac{s+\alpha}{(s+\alpha)^2+\omega^2}

which can be factored as

\frac{N(s)}{D(s)} = \frac{a_m}{b_n}\cdot\frac{(s-z_1)(s-z_2)\cdots(s-z_m)}{(s-p_1)(s-p_2)\cdots(s-p_n)}    (3.11)

where m \leq n. The p_j's and z_i's are called the poles and zeros of the system respectively. We are interested in two cases:

1. all the poles are distinct, or

2. some poles are equal, i.e., p_i = p_j = \cdots = p_k.

The first case is that of simple poles (with multiplicities equal to 1), whereas the second is the case of multiple poles. The poles and zeros can be either real or complex-conjugate pairs. This statement is dictated by a well-known


theorem in algebra that says "the roots of a polynomial with all real coefficients are real or occur in complex conjugate pairs."

In order to facilitate the inversion of the Laplace transform \frac{N(s)}{D(s)}, we expand it in partial fractions. For the case of simple poles we can expand \frac{N(s)}{D(s)} as

\frac{N(s)}{D(s)} = \frac{A_1}{s-p_1}+\frac{A_2}{s-p_2}+\cdots+\frac{A_n}{s-p_n}

For a pole of multiplicity r the partial fraction expansion contains the terms

\frac{A_1}{s-p_1},\ \frac{A_2}{(s-p_1)^2},\ \cdots,\ \frac{A_r}{(s-p_1)^r}

Let us study these cases with examples.

3.4.1 Real roots
Example 7. Let

\frac{N(s)}{D(s)} = \frac{s-4}{s^2+4s+3} = \frac{s-4}{(s+1)(s+3)}

We want to expand this in partial fractions as

\frac{s-4}{s^2+4s+3} = \frac{A}{s+1}+\frac{B}{s+3}

where A and B must be determined. Multiplying both sides of the equality by s+1 and setting s = -1 we obtain A:

\frac{s-4}{s+3}\Big|_{s=-1} = A+\frac{B(s+1)}{s+3}\Big|_{s=-1}

A = -2.5

Likewise, multiplying both sides by s+3 and setting s = -3 we obtain B:

\frac{s-4}{s+1}\Big|_{s=-3} = \frac{A(s+3)}{s+1}\Big|_{s=-3}+B

B = 3.5

Thus we obtain

\frac{s-4}{s^2+4s+3} = -\frac{2.5}{s+1}+\frac{3.5}{s+3}

Referring to Table 3.1 one can readily obtain the inverse Laplace transform:

\mathcal{L}^{-1}\left[\frac{s-4}{s^2+4s+3}\right] = \mathcal{L}^{-1}\left[-\frac{2.5}{s+1}+\frac{3.5}{s+3}\right] = \mathcal{L}^{-1}\left[-\frac{2.5}{s+1}\right]+\mathcal{L}^{-1}\left[\frac{3.5}{s+3}\right] = \left(-2.5e^{-t}+3.5e^{-3t}\right)u(t)
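Maxima reproduces Example 7 in two commands (it reports the coefficients as the rationals 5/2 and 7/2):

partfrac((s-4)/(s^2+4*s+3), s);   /* expect 7/(2*(s+3)) - 5/(2*(s+1)) */
ilt((s-4)/(s^2+4*s+3), s, t);     /* expect (7/2)*exp(-3*t) - (5/2)*exp(-t) */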

3.4.2 Complex roots

Denominator terms like s^2+p^2 can be factored into (s+jp)(s-jp), while (s+p)^2+r^2 can be factored into (s+p+jr)(s+p-jr). These terms give rise to fractions of the form

\frac{A}{s+jp},\ \frac{A^*}{s-jp},\ \frac{B}{s+p+jr}\ \text{and}\ \frac{B^*}{s+p-jr}

The theorem mentioned above forces the coefficients to occur as the complex conjugates A, A^* and B, B^*. Therefore once A and B are obtained, the process of finding the coefficients is complete.

Example 8. Let

F(s) = \frac{N(s)}{D(s)} = \frac{s+1}{s(s^2+4)}

Decompose \frac{N(s)}{D(s)} into partial fractions and find f(t).


Solution

\frac{s+1}{s(s^2+4)} = \frac{A}{s}+\frac{B}{s+j2}+\frac{B^*}{s-j2}

A = \frac{s+1}{s^2+4}\Big|_{s=0} = \frac{1}{4} = 0.25

B = \frac{s+1}{s(s-j2)}\Big|_{s=-j2} = \frac{-j2+1}{-j2(-j2-j2)} = \frac{1-j2}{-8} = -\frac{1}{8}(1-j2)

Hence B^* is automatically found to be B^* = -\frac{1}{8}(1+j2), and

\frac{s+1}{s(s^2+4)} = \frac{1/4}{s}-\frac{1}{8}\cdot\frac{1-j2}{s+j2}-\frac{1}{8}\cdot\frac{1+j2}{s-j2}

The decomposition can be stopped at this point, since the s+j2 and s-j2 in the denominators give rise to e^{-j2t} and e^{+j2t} terms during the inverse transformation. Referring to Table 3.1 we can invert this result:

\mathcal{L}^{-1}\left[\frac{s+1}{s(s^2+4)}\right] = \mathcal{L}^{-1}\left(\frac{1/4}{s}-\frac{1}{8}\cdot\frac{1-j2}{s+j2}-\frac{1}{8}\cdot\frac{1+j2}{s-j2}\right)
 = \left(\frac{1}{4}-\frac{1-j2}{8}e^{-j2t}-\frac{1+j2}{8}e^{j2t}\right)u(t)
 = \frac{1}{4}\left[1-\sqrt{5}\cos\left(2t+\tan^{-1}2\right)\right]u(t)

Alternatively, we can simplify the decomposition further by combining the complex terms:

\frac{s+1}{s(s^2+4)} = \frac{1/4}{s}-\frac{1}{8}\cdot\frac{(1-j2)(s-j2)+(1+j2)(s+j2)}{s^2+4} = \frac{1}{4}\left(\frac{1}{s}+\frac{-s+4}{s^2+4}\right) = \frac{1}{4}\left(\frac{1}{s}-\frac{s}{s^2+4}+\frac{4}{s^2+4}\right)

This last form probably makes it easier to use Table 3.1 to find the inverse transform:

\mathcal{L}^{-1}\left[\frac{s+1}{s(s^2+4)}\right] = \frac{1}{4}\mathcal{L}^{-1}\left(\frac{1}{s}-\frac{s}{s^2+4}+\frac{4}{s^2+4}\right) = \frac{1}{4}\left(1-\cos 2t+2\sin 2t\right)u(t) = \frac{1}{4}\left[1-\sqrt{5}\cos\left(2t+\tan^{-1}2\right)\right]u(t)
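A one-line check of Example 8 with Maxima's inverse Laplace transform:

ilt((s+1)/(s*(s^2+4)), s, t);   /* expect (1/4)(1 - cos 2t + 2 sin 2t), possibly arranged differently */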

3.4.3 Multiple roots
To demonstrate the case of poles with multiplicities greater than 1, let us consider the following function:

F(s) = \frac{N(s)}{D(s)} = \frac{s^2-s-1}{(s+1)^3(s+3)}

We can write this as

\frac{s^2-s-1}{(s+1)^3(s+3)} = \frac{A_1}{s+1}+\frac{A_2}{(s+1)^2}+\frac{A_3}{(s+1)^3}+\frac{B}{s+3}

We will outline two methods to decompose the function into partial fractions. The first method can also be applied to the case of simple poles.


Method 1. Identical polynomials. Gather the four terms together by multiplying denominators and numerators by factors so that all the denominators are equal to the least common multiple. For our problem

\frac{s^2-s-1}{(s+1)^3(s+3)} = \frac{A_1(s+1)^2(s+3)+A_2(s+1)(s+3)+A_3(s+3)+B(s+1)^3}{(s+1)^3(s+3)}
 = \frac{A_1\left(s^3+5s^2+7s+3\right)+A_2\left(s^2+4s+3\right)+A_3(s+3)+B\left(s^3+3s^2+3s+1\right)}{(s+1)^3(s+3)}
 = \frac{(A_1+B)s^3+(5A_1+A_2+3B)s^2+(7A_1+4A_2+A_3+3B)s+3A_1+3A_2+3A_3+B}{(s+1)^3(s+3)}

For this equality to hold we require the coefficients of s^0, s^1, s^2 and s^3 to be equal in the numerator polynomials. Thus

A_1+B = 0
5A_1+A_2+3B = 1
7A_1+4A_2+A_3+3B = -1
3A_1+3A_2+3A_3+B = -1

The solution of these equations yields

A_1 = \frac{11}{8},\quad A_2 = -\frac{7}{4},\quad A_3 = \frac{1}{2},\quad B = -\frac{11}{8}

Hence the partial fraction expansion becomes

F(s) = \frac{s^2-s-1}{(s+1)^3(s+3)} = \frac{1.375}{s+1}-\frac{1.75}{(s+1)^2}+\frac{0.5}{(s+1)^3}-\frac{1.375}{s+3}

Method 2. Differentiation. In this method, to get rid of (s+1)^3 in the denominator of the left-hand side, we multiply both sides of the equation by (s+1)^3 to obtain

\frac{s^2-s-1}{s+3} = A_1(s+1)^2+A_2(s+1)+A_3+\frac{B(s+1)^3}{s+3}

Now we can obtain A_3 as outlined under Method 1:

A_3 = \frac{s^2-s-1}{s+3}\Big|_{s=-1} = \frac{1}{2} = 0.5

Now differentiate both sides once with respect to s to obtain

\frac{2s-1}{s+3}-\frac{s^2-s-1}{(s+3)^2} = 2A_1(s+1)+A_2+0+B\left[\frac{3(s+1)^2}{s+3}-\frac{(s+1)^3}{(s+3)^2}\right]

We can obtain A_2 by substituting s = -1:

\left[\frac{2s-1}{s+3}-\frac{s^2-s-1}{(s+3)^2}\right]_{s=-1} = 2A_1(s+1)\Big|_{s=-1}+A_2+0+B\left[\frac{3(s+1)^2}{s+3}-\frac{(s+1)^3}{(s+3)^2}\right]_{s=-1}

-\frac{7}{4} = 0+A_2+0+0

A_2 = -\frac{7}{4} = -1.75

A_1 can be found by differentiating both sides of the equation twice and evaluating the two sides at s = -1:

\left[\frac{2}{s+3}-\frac{2(2s-1)}{(s+3)^2}+\frac{2\left(s^2-s-1\right)}{(s+3)^3}\right]_{s=-1} = 2A_1+0+0+B\left[\frac{6(s+1)}{s+3}-\frac{6(s+1)^2}{(s+3)^2}+\frac{2(s+1)^3}{(s+3)^3}\right]_{s=-1}

\frac{11}{4} = 2A_1+0+0+B\cdot 0

\frac{11}{4} = 2A_1 \quad\Rightarrow\quad A_1 = \frac{11}{8} = 1.375

Finally, multiplying the original expansion by s+3 and setting s = -3 yields B:

B = \left[\frac{s^2-s-1}{(s+1)^3}\right]_{s=-3} = -\frac{11}{8} = -1.375

Thus

\frac{s^2-s-1}{(s+1)^3(s+3)} = \frac{1.375}{s+1}-\frac{1.75}{(s+1)^2}+\frac{0.5}{(s+1)^3}-\frac{1.375}{s+3}

Once the partial fraction expansion is obtained, we can find f(t) using the Laplace transform table and the properties of the Laplace transform. Thus

f(t) = \left(1.375e^{-t}-1.75te^{-t}+0.25t^2e^{-t}-1.375e^{-3t}\right)u(t)
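The multiple-pole expansion and its inverse can be checked in Maxima as follows:

partfrac((s^2-s-1)/((s+1)^3*(s+3)), s);
ilt((s^2-s-1)/((s+1)^3*(s+3)), s, t);
/* expect (11/8)e^{-t} - (7/4) t e^{-t} + (1/4) t^2 e^{-t} - (11/8)e^{-3t} */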

3.5 Poles and Zeros
In Eqn. 3.11 the numbers z_i for which N(z_i) = 0, i = 1, \ldots, m, are called the zeros of F, or the zeros of the system. The numbers p_k for which D(p_k) = 0, k = 1, \ldots, n, are called the poles of F, or the poles of the system. As discussed in Section 3.4, the zeros and poles of a system can be simple or multiple, real or complex-conjugate pairs. Fig. 3.7 shows possible ways in which the poles and zeros of a signal or system can be distributed in the complex-frequency plane. Poles and zeros are indicated with a small cross "×" and a small circle "○". In the case of multiple poles or zeros, as many ×'s and ○'s are inserted into the graphic as there are poles or zeros. Poles and zeros with multiplicities two and three can be stacked on the pole-zero diagram; higher-order poles and zeros may be indicated by a figure next to the cross or circle. Poles and zeros are important system parameters.

System behavior can be judged from the system poles and zeros. In particular, the poles of a system have a decisive role in determining stability. By inserting or manipulating poles we can design oscillators, or we can stop spurious oscillations in circuits. Systems, filters or networks can be synthesized from a pole-zero description. In the following paragraphs we will demonstrate synthesis by examples. However, one of our prerequisites is factoring polynomials, which we consider next. If the reader is comfortable with factoring polynomials and finding roots, the next section can be skipped.

3.5.1 Factoring polynomials
Poles and zeros are obtained by factoring the polynomials a_m s^m+a_{m-1}s^{m-1}+\cdots+a_0 and b_n s^n+b_{n-1}s^{n-1}+\cdots+b_0, a procedure which can be tedious for large values of m and n. The polynomial identities below, together with splitting and regrouping techniques, can be used for factorization.

a^2 \pm 2ab+b^2 = (a\pm b)^2

a^3 \pm 3a^2b+3ab^2 \pm b^3 = (a\pm b)^3

a^2-b^2 = (a-b)(a+b)

a^3-b^3 = (a-b)(a^2+ab+b^2)

a^3+b^3 = (a+b)(a^2-ab+b^2)

a^4-b^4 = (a-b)(a+b)(a-jb)(a+jb)

\sum_{k=0}^{n}\frac{n!}{k!(n-k)!}a^k b^{n-k} = (a+b)^n


Figure 3.7: Examples of the ways in which the poles and zeros of a system can be distributed in the s-plane. (a) A simple real pole in the left-half plane, (b) a double real pole in the left-half plane, (c) a simple real pole in the right-half plane, (d) a pair of simple conjugate poles on the jω axis with a simple zero in the left-half plane, (e) a pair of simple conjugate poles on the jω axis with a simple zero at the origin, (f) a pair of double conjugate poles on the jω axis, (g) a real zero in the left-half plane and a pair of simple conjugate poles in the right-half plane.

If m (or n) is odd, then N(s) (or D(s)) has at least one real root. In this case one can use the Newton-Raphson method to find the real zero (pole). Once this is achieved, the degree of N(s) or D(s) is reduced by one through factorization. For example, let D(s) be a polynomial of odd order. Then there exists a real root a of D(s) such that

D(s) = (s-a)D_1(s)

The degree of D_1 is one less, and D_1 is a polynomial of even order which may or may not have real roots. Searching for zeros and poles (the roots of N(s) and D(s)) can be exhausting, and the procedure is error-prone. As an example, consider factoring the polynomial s^4+6s^3+12s^2+10s+3. This is a fourth-order polynomial; it may or may not have a real root. To help us guess a root, we can sketch the polynomial to see whether it has any real roots. Looking at the sketch in Fig. 3.8 we suspect a root around s = -3. Substitution of s = -3 into s^4+6s^3+12s^2+10s+3 indeed yields zero: 81+6(-27)+12(9)+10(-3)+3 = 0. We are lucky; we can reduce the polynomial by dividing through by s+3:

s^4+6s^3+12s^2+10s+3 = (s+3)\left(s^3+As^2+Bs+1\right)
s^4+6s^3+12s^2+10s+3 = s^4+(A+3)s^3+(B+3A)s^2+(1+3B)s+3

From this we obtain A = 3 and B = 3. Now we have

s^4+6s^3+12s^2+10s+3 = (s+3)\left(s^3+3s^2+3s+1\right)

We recognize the second factor as s^3+3s^2+3s+1 = (s+1)^3. With these results we arrive at the factorization

s^4+6s^3+12s^2+10s+3 = (s+3)(s+1)^3

If this polynomial appears in the denominator, we have a simple real pole at p_1 = -3 and a real pole of multiplicity 3 (m = 3) at p_2 = -1.
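The same factor() command used below for the fifth-order polynomial confirms this hand factorization:

factor(s^4 + 6*s^3 + 12*s^2 + 10*s + 3);   /* expect (s+1)^3*(s+3) */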

Figure 3.8: Plot of the polynomial s^4+6s^3+12s^2+10s+3.

However, we are not always this lucky, and we can spend a lot of time seeking irreducible factors by hand. Fortunately we can employ mathematical software either to factor the polynomial or to find its roots. For example, if D(s) = s^5+9s^4+51s^3+159s^2+280s+300 the job becomes quite formidable. In this case let us ask Maxima³ to factor the polynomial for us. At the Maxima prompt we enter:

(%i1) factor(s^5+9*s^4+51*s^3+159*s^2+280*s+300);

to get

(%o1) (s+3)(s^2+2s+5)(s^2+4s+20)

Now we are much better off than we would be trying to work out a factorization by hand. One last step is left for us to complete. We rewrite the Maxima answer in the following form:

D(s) = (s+3)(s^2+2s+1+4)(s^2+4s+4+16)

to get

D(s) = (s+3)\left[(s+1)^2+4\right]\left[(s+2)^2+16\right] = (s+3)(s+1+j2)(s+1-j2)(s+2+j4)(s+2-j4)

The roots of D(s) are p_1 = -3, p_2 = -1-j2, p_3 = -1+j2, p_4 = -2-j4, p_5 = -2+j4; all of them are simple roots (m = 1), one being real and the other four forming two complex-conjugate pairs. All the roots lie in the left half of the s-plane. If this polynomial is the denominator of a function F(s), then they are the poles of F(s).

3.5.2 Poles and time response
Poles (and zeros) can be real or occur as complex-conjugate pairs; they can be simple or multiple; and they can reside in the left-half plane, in the right-half plane, or on the imaginary axis. Signals and systems can involve any combination of such poles (and zeros). Pole locations and multiplicities have a profound effect on system behavior: stability, sensitivity, settling time, ringing, and so on. Zeros can be used to fine-tune a system for compensation, to remove undesired oscillations, and to tailor filter responses.

With the poles and zeros known, the system function or output can be expressed in partial fraction terms, and the time function can be obtained through the inverse Laplace transform. As a consequence of the linearity principle, each term contributes

³Maxima is free software. You are strongly advised to familiarize yourself with such mathematical software.


to the output additively. Fig. 3.7 shows some combinations of poles and zeros. Table 3.1 is sufficient to find the inverse Laplace transform of each term. Below we repeat some important transform pairs:

\frac{1}{s} \;\xrightarrow{\mathcal{L}^{-1}}\; u(t)

\frac{1}{s^{n+1}} \;\xrightarrow{\mathcal{L}^{-1}}\; \frac{t^n}{n!}u(t)

\frac{1}{(s-a)^{n+1}} \;\xrightarrow{\mathcal{L}^{-1}}\; \frac{t^n}{n!}e^{at}u(t)

\frac{\omega}{s^2+\omega^2} \;\xrightarrow{\mathcal{L}^{-1}}\; \sin\omega t

In view of these transforms we deduce that:

1. A simple pole at the origin produces a step function in the time domain.

2. A multiple pole at the origin makes a system unstable because it gives rise to a t^n dependence in the time domain.

3. A pole in the right-half plane makes a system unstable because e^{at} grows without bound.

4. A simple pole on the jω axis gives rise to a bounded sinusoid.

5. A multiple pole on the jω axis gives rise to an unbounded sinusoid.

Figure 3.9 shows examples of time responses for some common pole locations. A growing time function is not desired in system design. Oscillations and ringing behavior might also be objectionable in some situations. Therefore engineers must pay due attention to the pole locations of the systems they design, and must also fight unexpected oscillations and ringing that arise from poles introduced unintentionally by parasitic (spurious) effects.

3.5.3 An alternative way to solve differential equations
Linear systems (or nonlinear systems linearized around an operating point) are described by n-th order constant-coefficient linear differential equations. Given initial conditions at time t = 0, the behavior of such systems is obtained by solving these differential equations. The first-order RC circuit example of Section 3.1 illustrated the steps involved in this approach. Mass-spring-friction problems from mechanical systems are another example. As the RC circuit example demonstrated, the solution process becomes more and more difficult as the system order increases. Below we repeat the solution of the RC circuit of Section 3.1, this time using Laplace transform techniques.

The differential equation for the circuit, obtained there from KVL in terms of the capacitor voltage, is

v(t)+\frac{dv(t)}{dt} = 10\sin 2t

Taking the Laplace transform of both sides of this equation and using Table 3.1 we have

\mathcal{L}\left[v(t)+\frac{dv(t)}{dt}\right] = 10\,\mathcal{L}[\sin 2t]

V(s)+sV(s)-v(0^-) = \frac{20}{s^2+4}

(s+1)V(s) = v(0^-)+\frac{20}{s^2+4}

With v(0^-) = 10 volts we get

Figure 3.9: Pole locations and corresponding time functions. Poles on the real axis (a), complex-conjugate poles in the right-half and left-half s-plane (b), and complex-conjugate poles on the imaginary axis (c). m denotes the multiplicity of the pole(s).

V(s) = \frac{10}{s+1}+\frac{20}{(s+1)(s^2+4)}

Expanding the right-hand side in partial fractions we have

V(s) = \frac{10}{s+1}+\frac{A}{s+1}+\frac{B}{s+j2}+\frac{B^*}{s-j2}

Applying the methods described in Section 3.4 we get

A = \left[\frac{20}{s^2+4}\right]_{s=-1} = \frac{20}{5} = 4

B = \left[\frac{20}{(s+1)(s-j2)}\right]_{s=-j2} = -2+j

B^* = -2-j


Figure 3.10: Driven mass-spring system. The mass moves against a drag force caused by the surface friction.

Thus

V(s) = \frac{10}{s+1}+\frac{4}{s+1}+\frac{-2+j}{s+j2}+\frac{-2-j}{s-j2}
 = \frac{14}{s+1}-\left(\frac{2-j}{s+j2}+\frac{2+j}{s-j2}\right)
 = \frac{14}{s+1}-\sqrt{5}\left(\frac{e^{-j\tan^{-1}0.5}}{s+j2}+\frac{e^{j\tan^{-1}0.5}}{s-j2}\right)

Now v(t) can be found by using the Laplace transform table:

v(t) = \mathcal{L}^{-1}[V(s)]
 = \left[14e^{-t}-2\sqrt{5}\cos\left(2t+\tan^{-1}0.5\right)\right]u(t)
 = \left[14e^{-t}+2\sqrt{5}\sin\left(2t-\tan^{-1}2\right)\right]u(t)

which agrees with the classical differential equation method. The brevity and elegance of the Laplace transform method are evident.
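The whole problem can also be handed to Maxima's desolve, which solves ordinary differential equations through the Laplace transform; atvalue supplies the initial condition.

atvalue(v(t), t = 0, 10);
desolve(diff(v(t), t) + v(t) = 10*sin(2*t), v(t));
/* expect v(t) = 14*exp(-t) + 2*sin(2*t) - 4*cos(2*t), i.e. 14 e^{-t} + 2*sqrt(5) sin(2t - atan 2) */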

To further illustrate the ease which the Laplace transform provides in the solution of differential equations, consider the mechanical system shown in Fig. 3.10. This system is governed by the equilibrium of three forces, as described by the following differential equation:

F_m = F_k+F_d

m\frac{d^2x(t)}{dt^2} = -k\left[x(t)-l(t)\right]-b\frac{dx(t)}{dt}    (3.12)

Taking the Laplace transform of both sides we obtain

m\left[s^2X(s)-sx(0)-x'(0)\right] = -k\left[X(s)-L(s)\right]-b\left[sX(s)-x(0)\right]

which can be rearranged as

\left[s^2m+bs+k\right]X(s) = kL(s)+(sm+b)x(0)+mx'(0)

X(s) = \frac{kL(s)}{s^2m+bs+k}+\frac{sm+b}{s^2m+bs+k}x(0)+\frac{m}{s^2m+bs+k}x'(0)

X(s) = \frac{k}{m}\cdot\frac{L(s)}{s^2+\frac{b}{m}s+\frac{k}{m}}+\frac{s+\frac{b}{m}}{s^2+\frac{b}{m}s+\frac{k}{m}}x(0)+\frac{1}{s^2+\frac{b}{m}s+\frac{k}{m}}x'(0)    (3.13)

Eqns. 3.12 and 3.13 are the descriptions of the mass-spring system in the time and complex-frequency domains. Either representation gives us the behavior of the mass position at any time t > 0. Before we illustrate the solution with specific values of m, k, b and l(t), we would like to dwell for a few moments on Eqn. 3.13.

The first term in (3.13) is the zero-state system response. The ratio of X(s) to L(s) is very important and is called the transfer function of the system:

H(s) = \frac{X(s)}{L(s)} = \frac{\frac{k}{m}}{s^2+\frac{b}{m}s+\frac{k}{m}}    (3.14)


The second and third terms taken together comprise the zero-input response, or the transient response, of the mass-spring system (l(t) = 0). The coefficients of the denominator polynomial determine the mode of the response. With 2\alpha = b/m and \omega_0^2 = k/m, Eqn. 3.13 can be rewritten as

X(s) = \frac{\omega_0^2}{s^2+2\alpha s+\omega_0^2}L(s)+\frac{s+2\alpha}{s^2+2\alpha s+\omega_0^2}x(0)+\frac{1}{s^2+2\alpha s+\omega_0^2}x'(0)

Three modes of operation arise from the coefficients \alpha and \omega_0:

1. Overdamped case: \alpha > \omega_0, i.e., b > 2\sqrt{km}. The transient response contains two decaying exponentials.

2. Critically damped case: \alpha = \omega_0, i.e., b = 2\sqrt{km}. The transient response has the form te^{-\alpha t}.

3. Underdamped case: \alpha < \omega_0, i.e., b < 2\sqrt{km}. The transient response has the form e^{-\alpha t}\sin(\beta t+\theta).

A study of these systems and modes is beyond the scope of this text. However, as an illustration of a second-order differential equation, we will work through a few numerical examples for the above mass-spring system.

Let m = 0.2 kg, b = 0.4 N·s/m, and k = 1 N/m. Thus 2\alpha = 2 s^{-1} and \omega_0^2 = 5 s^{-2}, and Eqn. 3.13 becomes

X(s) = \frac{5}{s^2+2s+5}L(s)+\frac{s+2}{s^2+2s+5}x(0)+\frac{1}{s^2+2s+5}x'(0)

Let us consider a few different cases:

1. Zero-state response, x(0) = 0 m, x'(0) = 0 m/s:

(a) l(t) = 0.1u(t) m, L(s) = 0.1/s, and

X(s) = \frac{5\cdot 0.1}{s\left(s^2+2s+5\right)} = \frac{0.5}{s\left(s^2+2s+5\right)}\ \mathrm{m} = \left[\frac{0.1}{s}-\frac{0.1(s+2)}{s^2+2s+5}\right]\mathrm{m} = \left[\frac{0.1}{s}-\frac{0.1(s+2)}{(s+1)^2+2^2}\right]\mathrm{m}

Taking the inverse Laplace transform we get

x(t) = \left[0.1u(t)-0.05e^{-t}(\sin 2t+2\cos 2t)\right]\ \mathrm{m}

(b) l(t) = 0.1\sin t m, L(s) = \dfrac{0.1}{s^2+1}, and

X(s) = \frac{0.5}{\left(s^2+1\right)\left(s^2+2s+5\right)}\ \mathrm{m}

For this case we strongly recommend that the reader use a math package to obtain the partial fraction expansion and the inverse Laplace transform. We use wxMaxima for this purpose:

X(s) = 0.05\left(\frac{s}{s^2+2s+5}-\frac{s-2}{s^2+1}\right)\mathrm{m}

The inverse Laplace transform yields

x(t) = \left[0.025e^{-t}(2\cos 2t-\sin 2t)+0.05(2\sin t-\cos t)\right]\ \mathrm{m}

2. Zero-input response, l(t) = 0 m, x(0) = 0.1 m, x'(0) = 0 m/s:

X(s) = \frac{s+2}{s^2+2s+5}x(0) = \frac{0.1(s+2)}{s^2+2s+5}


Thus we obtain the time-domain zero-input response:

x(t) = 0.05e^{-t}(\sin 2t+2\cos 2t)\,u(t)\ \mathrm{m}

The time-domain responses for these excitations are plotted in Fig. 3.11.

Figure 3.11: Mass-spring system response in the time domain to a step input, to a sinusoidal input, and the zero-input response.
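The wxMaxima computation referred to in case 1(b) can be reproduced along these lines (rational coefficients are used so that ilt stays symbolic):

X : 1/(2*(s^2 + 1)*(s^2 + 2*s + 5));      /* 0.5 / ((s^2+1)(s^2+2s+5)) */
partfrac(X, s);   /* expect 0.05*( s/(s^2+2s+5) - (s-2)/(s^2+1) ), up to arrangement */
ilt(X, s, t);     /* expect 0.025 e^{-t}(2 cos 2t - sin 2t) + 0.05 (2 sin t - cos t) */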

3.6 Applications of the Laplace Transform

3.6.1 Electrical systems

Electrical systems are specified by the system transfer function H(s) = \frac{Y(s)}{X(s)}, where X(s) and Y(s) are the system input and output respectively. H(s) is a ratio of two polynomials. Network functions other than the transfer function, such as impedance and admittance, are also defined. The Laplace transform is a very powerful tool for analyzing and synthesizing electrical networks. The basis for the analysis and synthesis of such networks is the representation of the passive elements L, C, R in the complex-frequency domain.

In the time domain the terminal equations of these elements are:

v(t) = Ri(t),\quad i(t) = v(t)/R = Gv(t)    (Resistance)

v(t) = v(0)+\frac{1}{C}\int_0^t i(\tau)\,d\tau,\quad i(t) = C\frac{dv(t)}{dt}    (Capacitance)

v(t) = L\frac{di(t)}{dt},\quad i(t) = i(0)+\frac{1}{L}\int_0^t v(\tau)\,d\tau    (Inductance)

Taking the Laplace transform of these equations and assuming zero state⁴, we find the terminal equations of these elements in the complex-frequency domain⁵. Thus we have:

⁴Zero state here means v(0^-) = 0 and i(0^-) = 0.
⁵G is called conductance and is defined as G = 1/R. The unit of conductance is the mho, or siemens.


V(s) = RI(s),\quad I(s) = V(s)/R = GV(s)    (Resistance)

V(s) = \frac{1}{sC}I(s),\quad I(s) = sC\,V(s)    (Capacitance)

V(s) = sL\,I(s),\quad I(s) = \frac{1}{sL}V(s)    (Inductance)

From these transforms we derive the impedances of these elements in the s-plane. Impedance is defined as the ratio of voltage to current; therefore the impedances are found to be

Z = R    (Resistance)
Z = \frac{1}{sC}    (Capacitance)    (3.15)
Z = sL    (Inductance)

Analysis

The Laplace transforms of differentiation and integration in the time domain replace \frac{d}{dt} and \int dt with s and \frac{1}{s} respectively. One can analyze circuits that contain resistance, inductance and capacitance by using their impedance representations as given in Eqn. 3.15: replace each L with sL, each C with \frac{1}{sC}, and keep the R's unchanged. Series and parallel connections of impedances follow the series and parallel connection rules for resistances. Thus

Z = Z_1+Z_2+\cdots+Z_n = \sum_{i=1}^{n}Z_i    (series connection)

\frac{1}{Z} = \frac{1}{Z_1}+\frac{1}{Z_2}+\cdots+\frac{1}{Z_n} = \sum_{i=1}^{n}\frac{1}{Z_i}    (parallel connection)

Example 9. Consider the circuit in Fig. 3.12(a).

1. Find the input impedance and the voltage transfer function of the circuit.

2. Find the poles and zeros of the transfer function.

3. From the pole-zero distribution, describe the time-domain behavior of the circuit.

Solution:

1. Input impedance and transfer function:

Z(s) = sL+\frac{\frac{1}{sC}\cdot R}{R+\frac{1}{sC}} = L\,\frac{s^2+\frac{1}{RC}s+\frac{1}{LC}}{s+\frac{1}{RC}}

Calling \frac{1}{LC} = \omega_o^2 and \frac{1}{RC} = 2\alpha, we can rewrite Z(s) as

Z(s) = L\,\frac{s^2+2\alpha s+\omega_o^2}{s+2\alpha}

and the voltage transfer function becomes


H(s) = \frac{\dfrac{\frac{1}{sC}\cdot R}{R+\frac{1}{sC}}}{Z(s)} = \frac{\dfrac{1}{C}\cdot\dfrac{1}{s+2\alpha}}{L\cdot\dfrac{s^2+2\alpha s+\omega_o^2}{s+2\alpha}} = \frac{1}{LC}\cdot\frac{1}{s^2+2\alpha s+\omega_o^2}

H(s) = \frac{\omega_o^2}{s^2+2\alpha s+\omega_o^2}

2. Poles and zeros of the transfer function. We see that Z(s) has simple zeros at s = -\alpha\pm\sqrt{\alpha^2-\omega_o^2} = -\alpha\pm\beta and a real pole at s = -\frac{1}{RC} = -2\alpha; H(s) has simple poles at s = -\alpha\pm\sqrt{\alpha^2-\omega_o^2}.

3. The time-domain response depends on the values of R, L, C. We distinguish three cases:

1. \alpha > \omega_o, that is R < \frac{1}{2}\sqrt{L/C}. There are two distinct real poles at s = -\alpha\pm\beta, with time response c_1e^{(-\alpha-\beta)t}+c_2e^{(-\alpha+\beta)t}, where \beta = \sqrt{\alpha^2-\omega_o^2}.

2. \alpha = \omega_o, that is R = \frac{1}{2}\sqrt{L/C}. There is a double pole at s = -\alpha, with time response c\,te^{-\alpha t}.

3. \alpha < \omega_o, that is R > \frac{1}{2}\sqrt{L/C}. There is a pair of complex-conjugate poles at s = -\alpha\pm j\beta, with time response c\,e^{-\alpha t}\sin(\beta t+\varphi), where \beta = \sqrt{\omega_o^2-\alpha^2}.

In Fig. 3.12(b) and (c), examples of these cases are given together with their respective time-domain responses.
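A quick numerical look at the three cases, using the hypothetical values L = 1 H and C = 1 F (so that the boundary resistance is 0.5 Ω); only the pole locations are computed.

poles(R) := solve(s^2 + s/R + 1 = 0, s);   /* denominator of H(s) with L = C = 1 */
poles(3/10);   /* R < 1/2: two distinct real poles (overdamped) */
poles(1/2);    /* R = 1/2: double real pole at s = -1 (critically damped) */
poles(2);      /* R > 1/2: complex-conjugate poles (underdamped) */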

Synthesis

Often a system needs to be built from a specification of H(s) = \frac{Y(s)}{X(s)}. As we have seen, H(s) is the ratio of two polynomials. This is a synthesis problem, which can be solved in various ways. Here we give two methods, one using differentiator blocks and the other using integrator blocks. Differentiator and integrator blocks can be easily implemented with operational amplifiers, capacitors and resistors (Fig. 3.14).

Differentiator synthesis. Let us synthesize \frac{Y(s)}{X(s)} = \frac{10s}{s^2+2s+10} using differentiators. Rearranging \frac{Y(s)}{X(s)}, we can write s^2Y+2sY+10Y = 10sX. Solving for Y(s) we obtain

Y(s) = sX(s)-\left(0.1s^2+0.2s\right)Y(s) = sX(s)-\left(0.1\cdot s\cdot s+0.2\cdot s\right)Y(s)

Figure 3.13(a) shows the implementation of H(s) in block diagram form. Each rectangular block multiplies its input by the expression written inside the block. The circle with a "+" sign in it is an adder; when you want subtraction, you add a "-" sign at the subtrahend input. The signal proceeds in the direction of the arrows. The blocks that contain "s" are differentiators, which can readily be made of an operational amplifier, a capacitor and a resistor (Figures 3.14, 3.15).

Integrator synthesis. Let us redesign the same transfer function using integrators. Dividing numerator and denominator by s^2, we obtain

\frac{Y(s)}{X(s)} = \frac{10s/s^2}{\left(s^2+2s+10\right)/s^2} = \frac{10/s}{1+2/s+10/s^2}

Again solving for Y(s) we obtain

Figure 3.12: Second-order LPF with R, L, C elements; the inductive and capacitive impedances are sL and 1/(sC). (a) Laplace transform implementation on the LTSpice simulator, and (b), (c) the pertinent time-domain responses to a 0.5 sec pulse.

Figure 3.13: (a) Block diagram for the derivative implementation and (b) the integral implementation of a transfer function.

Y(s) = \frac{10}{s}X(s)-\left(\frac{10}{s^2}+\frac{2}{s}\right)Y(s) = \frac{10}{s}X(s)-\left(10\cdot\frac{1}{s}\cdot\frac{1}{s}+2\cdot\frac{1}{s}\right)Y(s)

Figure 3.13(b) shows the implementation of H(s) with integrators. The blocks that contain "1/s" are integrators which, like differentiators, can readily be made of an operational amplifier, a capacitor and a resistor as in Fig. 3.14.
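As a sanity check on the transfer function being synthesized, its step response can be computed symbolically:

H : 10*s/(s^2 + 2*s + 10);
ilt(H/s, s, t);    /* step input X(s) = 1/s; expect (10/3) e^{-t} sin(3t) */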

3.6.2 Evaluation of definite integrals
The Laplace transform can be used to evaluate integrals of the form \int_0^{\infty}e^{-kx}\cos mx\,dx. For instance, let us evaluate \int_0^{\infty}e^{-3x}\cos 2x\,dx. Replacing e^{-3x} with e^{-sx} we can write

I = \int_{0^-}^{\infty}\cos 2x\,e^{-sx}\,dx = \mathcal{L}[\cos 2x]

Referring to the Laplace transform table we have

I = \frac{s}{s^2+4}


Figure 3.14: Building blocks for network synthesis in s-domain.

Substituting s = 3 we get

I = \frac{3}{9+4} = \frac{3}{13}
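The result is easy to confirm by direct integration in Maxima:

integrate(exp(-3*x)*cos(2*x), x, 0, inf);   /* expect 3/13 */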

Problems

1. A transform T that acts upon a function f maps f into F. Given two functions f and g, their transforms T[f] = F, T[g] = G, and two constants a and b, T is said to be linear if it satisfies the condition

T[af+bg] = aF+bG

Show that the Laplace transform is a linear transform.

2. Find the Laplace transform of f(t) = t\,u(t).

3. Hyperbolic sine and cosine are defined by

\sinh t = \frac{e^t-e^{-t}}{2},\qquad \cosh t = \frac{e^t+e^{-t}}{2}

Find the Laplace transforms of the sinh and cosh functions.

4. Solve Problem 2 using the differentiation rule of the Laplace transform.

5. Find the Laplace transform of the full-wave rectified sine function f(t) = |\sin t|.

6. Using the real translation property, obtain the Laplace transform of a full-wave rectified cosine function.

7. Find the Laplace transform of f(t) = \cos t.

(a) Using \frac{d}{dt}\sin t = \cos t, find the Laplace transform of \sin t.

(b) Using the phase shift property of sinusoidal functions, i.e., \sin t = \cos\left(\frac{\pi}{2}-t\right), find the Laplace transform of \sin t.

Figure 3.15: Differentiator synthesis.

8. Using the fact that \mathcal{L}[\cos\omega t] = \frac{s}{s^2+\omega^2} and \frac{d}{dt}\cos t = -\sin t, find \mathcal{L}[\sin\omega t].

9. Find the Laplace transform of f(t) = \cos(\omega t+\theta).

10. Find the Laplace transform of f(t) = \sin(\omega t+\theta).

11. Do the functions \tan t, \cot t, \sec t, \csc t have Laplace transforms?

12. Complete the proof of Eqn. 3.10.

13. Prove that

(a) \mathcal{L}\left[\frac{d^2f(t)}{dt^2}\right] = s^2F(s)-sf(0^-)-f'(0^-)

(b) \mathcal{L}\left[\frac{d^3f(t)}{dt^3}\right] = s^3F(s)-s^2f(0^-)-sf'(0^-)-f''(0^-)

14. Derive the following transform yourself:

\mathcal{L}\left[\int_{0^-}^{t}f(\tau)\,d\tau\right] = \frac{F(s)}{s}

15. Prove that \lim_{t\to\infty}t^ne^{-at} = 0, where a > 0.

16. Prove that \lim_{t\to\infty}e^{-at}\sin\omega t = 0, where a > 0.

17. Calculate \lim_{t\to\infty}t^2e^{-2t}.

18. Obtain the Laplace transform of the following integrodifferential equation and solve it for the case where the initial conditions are zero:

A\frac{dx(t)}{dt}+B\int_{0^-}^{t}x(\tau)\,d\tau+Cx(t) = u(t),\qquad x(0^-) = 0,\ x'(0^-) = 0

19. Solve the differential equation x''(t)+3x'(t)+2x(t) = 4e^t subject to the initial conditions x(0^-) = 1, x'(0^-) = -1.

20. Given the differential equation \frac{d^2y}{dt^2}+10\frac{dy}{dt}+169y = 0 and the initial conditions y(0) = 1, y'(0) = -10:

(a) Solve for y(t) using the Laplace transform.

(b) Find the poles and zeros of the system represented by this differential equation.

21. Verify the solutions for the three responses of the mass-spring system.

22. Given the system of coupled first-order differential equations

Lx'(t) = e(t)-y(t)-Rx(t)
Cy'(t) = x(t)

and assuming zero initial conditions, show that the system can be expressed in matrix form as

s\begin{bmatrix}1 & 0\\ 0 & 1\end{bmatrix}\begin{bmatrix}X(s)\\ Y(s)\end{bmatrix} = \begin{bmatrix}-\frac{R}{L} & -\frac{1}{L}\\ \frac{1}{C} & 0\end{bmatrix}\begin{bmatrix}X(s)\\ Y(s)\end{bmatrix}+\begin{bmatrix}\frac{1}{L}\\ 0\end{bmatrix}E(s)

23. With x(0^-) = X_0 and y(0^-) = Y_0, solve

x'(t) = 2y(t)
y'(t) = -x(t)

24. A linear system with impulse response h(t) responds to an arbitrary input x(t) through the convolution relation

y_1(t) = x(t) * h(t)

Should this system be excited by the derivative of x(t), its response is

y_2(t) = x'(t) * h(t)

Show that y_2(t) = \frac{dy_1(t)}{dt}.

25. A linear system with impulse response h(t) responds to an arbitrary input x(t) through the convolution relation

y_1(t) = x(t) * h(t)

Should this system be excited by the integral of x(t), its response is

y_2(t) = \left[\int_{0^-}^{t}x(\tau)\,d\tau\right] * h(t)

Show that y_2(t) = \int_{0^-}^{t}y_1(\tau)\,d\tau.

26. Decomposing H(s) = \frac{N(s)}{D(s)} = \frac{s}{(s+3)(s^2+16)} into partial fractions, find h(t) = \mathcal{L}^{-1}[H(s)].

27. Evaluate the integral

I = \int_{-1}^{+1}t^2e^{-2|t|}\,dt

28. Evaluate the integral

\int_{0}^{\infty}\frac{e^{-t}}{t}\,dt

29. Using Laplace transform properties, show that convolution in the time domain satisfies the following laws:

(a) Commutativity: x(t) * y(t) = y(t) * x(t)

(b) Associativity: [x(t) * y(t)] * z(t) = x(t) * [y(t) * z(t)]

(c) Distributivity: x(t) * [y(t)+z(t)] = x(t) * y(t)+x(t) * z(t)


Chapter 4

The Fourier Series

A prominent French mathematician, physicist and statesman, Jean-Baptiste Joseph Fourier (1768-1830) was actively involved in promoting the French Revolution in his homeland and barely escaped the guillotine. He accompanied Napoleon on his Egypt expedition, serving as his advisor and governor. He served at the École Polytechnique after Lagrange. In 1807 he published his work on heat propagation and the series that was to be known by his name. Aside from his mathematical career he contributed to the work Description de l'Égypte about Egypt.

Periodic signals constitute an important class of signals in electrical engineering. Some signals are truly periodic, like the carrier wave of TRT FM, while others are quasi-periodic, like voiced speech utterances and noisy periodic signals. Linear time-invariant (LTI) systems treat complex exponential signals and their derivatives - the sinusoidal signals - as eigenfunctions. If a linear system is driven by an eigenfunction, then the output is the same eigenfunction, with its magnitude and phase modified by the system. Because of linearity, superposition applies to a sum of eigenfunctions applied to the input of an LTI system. Thus if we know the eigenfunctions contained in a signal, we are able to find the response of LTI systems to such inputs.

There is a strong parallelism between vectors and signals. In this chapter we will learn how to decompose a periodic signal into an infinite-dimensional vector whose basis vectors are orthogonal exponential functions. Fourier showed that the frequencies of these eigenfunctions are integer multiples of the frequency of the periodic function of interest. The procedure for finding these eigenfunctions is very similar to finding the components of a vector along its basis vectors. As we will discover, the same idea carries over to aperiodic signals; we will deal with those in the next chapter, when we study the Fourier transform.

4.1 Vectors and Signals
Recall that a vector A with components A_x, A_y, A_z in the x, y, z directions can be expressed as A = A_x i + A_y j + A_z k, where i, j, k are the unit vectors in the x, y, z directions respectively. Since i\cdot i = j\cdot j = k\cdot k = 1 and i\cdot j = j\cdot k = i\cdot k = 0, A can be decomposed into its x-, y- and z-components using

A_x = A\cdot i,\quad A_y = A\cdot j,\quad A_z = A\cdot k

Two vectors A and B are said to be orthogonal if

A\cdot B = 0

If in addition

A\cdot A = B\cdot B = 1

then they are said to be orthonormal. Hence the direction vectors i, j, k are orthonormal vectors. We call them the basis vectors of the three-dimensional vector space.

We can readily think of vectors in n dimensions:


A = A_1a_1+A_2a_2+\cdots+A_na_n \quad\text{or}\quad A = \sum_{k=1}^{n}A_ka_k

where a_k is the basis vector in the k-th dimension. Thus the vector A can be decomposed into n components. Decomposing A into the A_k amounts to finding the projection of A along the k-th dimension. We can extend the idea so that A has infinitely many discrete components A_k along infinitely many basis vectors a_k, i.e.,

A = \sum_{k=1}^{\infty}A_ka_k

The scalar product (also known as the dot product or inner product) of A and B is defined as

\langle A, B\rangle = A\cdot B = \sum_{k=1}^{n}A_kB_k    (4.1)

which is the sum of the products of the corresponding components of A and B. This follows logically from the orthonormality of the basis vectors a_k.

Consider a piecewise-continuous function f(t) defined on an interval [a,b]. For every value \alpha in [a,b] we assign to f a unique number f(\alpha). The set of numbers \{f(\alpha)\,|\,a\leq\alpha\leq b\} can be thought of as a vector with infinitely many components. Let f(t) and g(t) be piecewise-continuous functions defined on an interval [a,b]. We allow g(t) to be a complex-valued function. Then the integral \int_a^b f(t)g^*(t)\,dt is the continuous form of the sum in Eqn. 4.1, and we call it the scalar product of the functions f(t) and g(t) over the interval [a,b]. Hence

\langle f, g\rangle = \int_a^b f(t)g^*(t)\,dt

where g is the basis function, and \langle f, g\rangle, the inner product of f and g, represents the projection of f on g. If

\int_a^b f(t)g^*(t)\,dt = 0

then the functions f(t) and g(t) are orthogonal on the interval [a,b].

Example 20. Consider the functions f_1(x) = \sin x, f_2(x) = \sin 2x, f_3(x) = \cos x, f_4(x) = \cos 2x. Since

\int_{-\pi}^{\pi}\sin x\sin 2x\,dx = 0
\int_{-\pi}^{\pi}\sin x\cos x\,dx = 0
\int_{-\pi}^{\pi}\cos x\cos 2x\,dx = 0
\int_{-\pi}^{\pi}\sin x\cos 2x\,dx = 0
\int_{-\pi}^{\pi}\cos x\sin 2x\,dx = 0

f_1, f_2, f_3 and f_4 are orthogonal on the interval [-\pi,\pi], but they are not orthonormal, because \int_{-\pi}^{\pi}f_k^2(x)\,dx = \pi, not 1, for k = 1,\ldots,4.
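The orthogonality integrals, and the norm that shows the set is not orthonormal, take one line each in Maxima:

integrate(sin(x)*sin(2*x), x, -%pi, %pi);   /* expect 0 */
integrate(sin(x)*cos(x),   x, -%pi, %pi);   /* expect 0 */
integrate(cos(x)*cos(2*x), x, -%pi, %pi);   /* expect 0 */
integrate(sin(x)^2,        x, -%pi, %pi);   /* expect %pi, not 1 */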

4.2 Periodicity
In the 19th century, while working on heat conduction, Fourier came up with the brilliant idea that a periodic function can be expressed as an infinite series of sinusoids whose frequencies are integer multiples of the frequency of the periodic function. A function f(t) is said to be periodic with period T if it satisfies


Figure 4.1: Periodic function with four periods shown.

f (t +T ) = f (t) (4.2)

A periodic function has infinitely many such values of T. The smallest T which satisfies 4.2 is called the fundamental period, which we denote by T_0. The inverse of the fundamental period is called the fundamental frequency f_0, related to T_0 through

f_0 = \frac{1}{T_0}

We also define the angular frequency \omega_0, related to f_0 and T_0 through \omega_0 = 2\pi f_0 = 2\pi/T_0. In Fig. 4.1, T = 1 s, 2 s, 3 s, ... are all periods; the smallest of them, 1 s, is the fundamental period.

What Fourier said can be formulated as

f(t) = a_0+\sum_{n=1}^{\infty}\left(a_n\cos n\omega_0t+b_n\sin n\omega_0t\right)    (4.3)

where \cos n\omega_0t and \sin n\omega_0t are the basis functions, a_n and b_n are the magnitudes of these components, and n\omega_0 is the n-th harmonic. Note that \cos n\omega_0t and \sin n\omega_0t are to f(t) what a_k is to A in A = \sum_{k=1}^{\infty}A_ka_k (Eqn. 4.1). We can interpret this by saying that Fourier found infinitely many trigonometric basis functions for a periodic function, from which the function can be built through the Fourier series (Eqn. 4.3).

a_n\cos n\omega_0t and b_n\sin n\omega_0t are the quadrature components of f(t), where n = 0, 1, \ldots, \infty. This discovery had far-reaching consequences in mathematics, science and engineering - especially electrical engineering. Decomposing a periodic function into a series of trigonometric components and using the superposition principle for linear time-invariant systems greatly eases the analysis of linear systems. In the following we give three different forms of the Fourier series and show that they are equivalent. Among these forms we will focus on the complex exponential Fourier series, due to its elegance and because the set of complex exponentials constitutes a set of eigenfunction bases.

Let f(t) be a periodic function with period T_0. Then f(t) can be written as an infinite sum of weighted complex exponential terms e^{jn\omega_0t}:

f(t) = \sum_{n=-\infty}^{\infty}c_ne^{jn\omega_0t}    (4.4)
 = \cdots+c_{-n}e^{-jn\omega_0t}+\cdots+c_{-1}e^{-j\omega_0t}+c_0+c_1e^{j\omega_0t}+\cdots+c_ne^{jn\omega_0t}+\cdots

which is called the complex Fourier series. The fact that f(t) is a real-valued function makes it necessary that c_0 be real and that c_n and c_{-n} be complex-conjugate pairs. Hence we rearrange Eqn. 4.4 as follows:


f(t) = c_0+\sum_{n=1}^{\infty}\left(c_ne^{jn\omega_0t}+c_{-n}e^{-jn\omega_0t}\right)
 = c_0+\sum_{n=1}^{\infty}\left[c_ne^{jn\omega_0t}+\left(c_ne^{jn\omega_0t}\right)^*\right]
 = c_0+\sum_{n=1}^{\infty}\left(|c_n|e^{j\theta_n}e^{jn\omega_0t}+|c_n|e^{-j\theta_n}e^{-jn\omega_0t}\right)
 = c_0+\sum_{n=1}^{\infty}2|c_n|\cos\left(n\omega_0t+\theta_n\right)

f(t) = A_0+\sum_{n=1}^{\infty}A_n\cos\left(n\omega_0t+\theta_n\right)    (4.5)

where A_0 = c_0 is the average or DC value of f(t), and A_n = 2|c_n| and \theta_n are the amplitude and phase angle of the n-th harmonic respectively. This is the phase-amplitude trigonometric representation of the Fourier series.

The cosine terms in Eqn. 4.5 can be expanded to yield the Fourier series in quadrature trigonometric form:

f(t) = A_0+\sum_{n=1}^{\infty}A_n\cos\left(n\omega_0t+\theta_n\right)
f(t) = A_0+\sum_{n=1}^{\infty}A_n\left(\cos\theta_n\cos n\omega_0t-\sin\theta_n\sin n\omega_0t\right)
f(t) = a_0+\sum_{n=1}^{\infty}\left(a_n\cos n\omega_0t+b_n\sin n\omega_0t\right)    (4.6)

where a_n = A_n\cos\theta_n and b_n = -A_n\sin\theta_n are the amplitudes of the cosine and sine components of the n-th harmonic. Also note that

A_n = \sqrt{a_n^2+b_n^2}\quad\text{and}\quad\theta_n = -\tan^{-1}\left(\frac{b_n}{a_n}\right)

4.3 Calculating Fourier Series Coefficients
Let f(t), with period T_0, be represented by the complex Fourier series

f(t) = \sum_{k=-\infty}^{\infty}c_ke^{jk\omega_0t}

Multiplying f(t) by e^{-jn\omega_0t} and integrating over one period yields c_nT_0. This is nothing but the scalar product of the function with a basis function:

\langle f(t), e^{jn\omega_0t}\rangle = \int_0^{T_0}f(t)e^{-jn\omega_0t}\,dt = \int_0^{T_0}\sum_{k=-\infty}^{\infty}c_ke^{jk\omega_0t}e^{-jn\omega_0t}\,dt = \sum_{k=-\infty}^{\infty}c_k\int_0^{T_0}e^{j(k-n)\omega_0t}\,dt

The integral in the sum yields 0 when k \neq n and T_0 when k = n, that is,

\int_0^{T_0}e^{j(k-n)\omega_0t}\,dt = \begin{cases}T_0 & k = n\\ 0 & k\neq n\end{cases}

Thus


\langle f(t), e^{jn\omega_0t}\rangle = \int_0^{T_0}f(t)e^{-jn\omega_0t}\,dt = c_nT_0

Hence we obtain

c_0 = \frac{1}{T_0}\int_0^{T_0}f(t)\,dt    (4.7)

and

c_n = \frac{1}{T_0}\int_0^{T_0}f(t)e^{-jn\omega_0t}\,dt    (4.8)

As mentioned before, c_0 is real, and c_n is in general complex, in which case c_n and c_{-n} are conjugates.

With c_n determined from Eqn. 4.8, the a_n, b_n of the quadrature series as well as the A_n, \theta_n of the phase-amplitude series are readily calculated from

A_n = 2|c_n|,\quad\theta_n = \arg(c_n)    (4.9)
a_n = A_n\cos\theta_n,\quad b_n = -A_n\sin\theta_n    (4.10)

In all three cases c_0, A_0 and a_0 are equal to the average value of f(t).
An alternative calculation for the quadrature series exists, whereby f(t) is multiplied by \cos n\omega_0t or \sin n\omega_0t, then integrated and averaged over one period. That is, we set out to evaluate the integrals \int_0^{T_0}f(t)\cos n\omega_0t\,dt and \int_0^{T_0}f(t)\sin n\omega_0t\,dt:

\int_0^{T_0}f(t)\cos n\omega_0t\,dt = \int_0^{T_0}\left[a_0+\sum_{k=1}^{\infty}\left(a_k\cos k\omega_0t+b_k\sin k\omega_0t\right)\right]\cos n\omega_0t\,dt
 = a_0\int_0^{T_0}\cos n\omega_0t\,dt+\sum_{k=1}^{\infty}a_k\int_0^{T_0}\cos k\omega_0t\cos n\omega_0t\,dt+\sum_{k=1}^{\infty}b_k\int_0^{T_0}\sin k\omega_0t\cos n\omega_0t\,dt

Invoking our trigonometry and calculus knowledge,

\int_0^{T_0}\cos n\omega_0t\,dt = \begin{cases}T_0 & n = 0\\ 0 & \text{otherwise}\end{cases}
\int_0^{T_0}\cos k\omega_0t\cos n\omega_0t\,dt = \begin{cases}\frac{T_0}{2} & k = n\\ 0 & \text{otherwise}\end{cases}
\int_0^{T_0}\sin k\omega_0t\cos n\omega_0t\,dt = 0\quad\text{for all }n

we deduce that

a_0 = \frac{1}{T_0}\int_0^{T_0}f(t)\,dt    (4.11)
a_n = \frac{2}{T_0}\int_0^{T_0}f(t)\cos n\omega_0t\,dt    (4.12)
b_n = \frac{2}{T_0}\int_0^{T_0}f(t)\sin n\omega_0t\,dt    (4.13)
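Eqn. 4.8 translates directly into a Maxima function; here it is applied to the arbitrarily chosen (hypothetical) example f(t) = t^2 over one period T_0 = 2π.

f(t) := t^2;                   /* any piecewise-continuous function over one period */
c(n) := (1/(2*%pi)) * integrate(f(t)*exp(-%i*n*t), t, 0, 2*%pi);
c(0);                          /* the DC term a_0 */
ratsimp(rectform(c(1)));
ratsimp(rectform(c(2)));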

Example 2. Find the Fourier coefficients of the following function. The period of the function is T_0 and it is defined as

f(t) = \begin{cases}1 & |t| < \frac{T_0}{4}\\ 0 & \frac{T_0}{4} < |t| < \frac{T_0}{2}\end{cases}


The complex series coefficients are obtained from Eqns. 4.7 and 4.8 as

c_n = \frac{1}{T_0}\int_{-T_0/2}^{T_0/2}f(t)e^{-jn\omega_0t}\,dt
 = \frac{1}{T_0}\left(\int_{-T_0/2}^{-T_0/4}0\cdot e^{-jn\omega_0t}\,dt+\int_{-T_0/4}^{T_0/4}1\cdot e^{-jn\omega_0t}\,dt+\int_{T_0/4}^{T_0/2}0\cdot e^{-jn\omega_0t}\,dt\right)
 = \frac{1}{T_0}\int_{-T_0/4}^{T_0/4}e^{-jn\omega_0t}\,dt
 = \frac{1}{T_0}\left[\frac{e^{-jn\omega_0t}}{-jn\omega_0}\right]_{-T_0/4}^{T_0/4}
 = \frac{e^{-jn\omega_0T_0/4}-e^{jn\omega_0T_0/4}}{-jn\omega_0T_0}

Using the relation \omega_0T_0 = 2\pi we have

c_n = \frac{1}{n\pi}\sin\left(\frac{n\pi}{2}\right) = \frac{1}{2}\cdot\frac{\sin\left(n\pi/2\right)}{n\pi/2}

c_n = \frac{1}{2}\,\mathrm{sinc}\left(\frac{n\pi}{2}\right)

It turns out that c_n is real, and since \mathrm{sinc}(-x) = \mathrm{sinc}\,x, we have c_{-n} = c_n. The first few c_n are c_0 = \frac{1}{2}, c_1 = \frac{1}{\pi}, c_2 = 0, c_3 = -\frac{1}{3\pi}, c_4 = 0, c_5 = \frac{1}{5\pi}, \ldots Note that the even harmonics (coefficients with even n) vanish in this expansion. Also note that because f(t) is even-symmetric, only cosine terms, which are themselves even-symmetric, appear in the Fourier series expansion.

We can synthesize f(t) using these coefficients:

f(t) = \frac{1}{2}+\sum_{n=1}^{\infty}\left[\frac{1}{2}\,\mathrm{sinc}\left(\frac{n\pi}{2}\right)e^{-jn\omega_0t}+\frac{1}{2}\,\mathrm{sinc}\left(\frac{n\pi}{2}\right)e^{jn\omega_0t}\right]
 = \frac{1}{2}+\sum_{n=1}^{\infty}\mathrm{sinc}\left(\frac{n\pi}{2}\right)\cos n\omega_0t
 = \frac{1}{2}+\sum_{n=1}^{\infty}\frac{\sin\left(n\pi/2\right)}{n\pi/2}\cos n\omega_0t

f(t) = \frac{1}{2}+\frac{2}{\pi}\left(\cos\omega_0t-\frac{1}{3}\cos 3\omega_0t+\frac{1}{5}\cos 5\omega_0t-\frac{1}{7}\cos 7\omega_0t+\cdots\right)    (4.14)

This can also be written as

f(t) = \frac{1}{2}+\frac{2}{\pi}\sum_{n=1}^{\infty}\frac{(-1)^{n-1}}{2n-1}\cos\left[(2n-1)\omega_0t\right]    (4.15)
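The coefficients of this square wave can be generated in Maxima; with T_0 = 1 the fundamental is ω_0 = 2π, and the integral only needs to run over the interval where f(t) = 1.

c(n) := integrate(exp(-%i*2*%pi*n*t), t, -1/4, 1/4);      /* (1/T0)*integral with T0 = 1 */
c(0);                                                      /* expect 1/2 */
makelist(ratsimp(rectform(c(n))), n, 1, 5);                /* expect 1/%pi, 0, -1/(3*%pi), 0, 1/(5*%pi) */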

Figure 4.2: Square wave with 50 % duty cycle and even symmetry.

Figure 4.3: (a) Twenty coefficients of f(t). Since in this example all the phases are 0, these coefficients are the amplitudes of the harmonics. Further note that the even harmonics are all 0, because the square wave's duty cycle is 50 %. (b) f(t) and its construction from twenty coefficients. Note the Gibbs effect at the discontinuities.

Figure 4.4: Periodic impulse train of period T0.

Example 3. Find the Fourier series coefficients of the periodic unit impulse train whose period is T_0.

The impulse train can be expressed as

x(t) = \sum_{k=-\infty}^{\infty}\delta(t-kT_0)    (4.16)

The Fourier series coefficients are given by

c_n = \frac{1}{T_0}\int_{-T_0/2}^{T_0/2}x(t)e^{-jn\omega_0t}\,dt

Substituting Eqn. 4.16,

c_n = \frac{1}{T_0}\int_{-T_0/2}^{T_0/2}\sum_{k=-\infty}^{\infty}\delta(t-kT_0)e^{-jn\omega_0t}\,dt = \frac{1}{T_0}\sum_{k=-\infty}^{\infty}\int_{-T_0/2}^{T_0/2}\delta(t-kT_0)e^{-jn\omega_0t}\,dt = \frac{1}{T_0}\int_{-T_0/2}^{T_0/2}\delta(t)e^{-jn\omega_0t}\,dt = \frac{1}{T_0}\cdot 1

c_n = \frac{1}{T_0}

We see that all of the Fourier series coefficients are real and equal to f_0.


4.4 Properties of Fourier Series
The Fourier series definition (Eqns. 4.4, 4.5 and 4.6) gives us both shortcuts for calculating coefficients and insight into the nature of the series under certain conditions on f(t) and under time-axis operations applied to f(t). These special situations are classified below as symmetry conditions and time-axis operations.

4.4.1 Symmetry Conditions
We identify three symmetry conditions below.

Even Symmetry

A function is even-symmetric if f(-t) = f(t). c_n is given by Eqn. 4.8:

c_n = \frac{1}{T_0}\int_{-T_0/2}^{T_0/2}f(t)e^{-jn\omega_0t}\,dt

Invoking Euler's formula we can write

c_n = \frac{1}{T_0}\int_{-T_0/2}^{T_0/2}f(t)\left(\cos n\omega_0t-j\sin n\omega_0t\right)dt = \frac{1}{T_0}\int_{-T_0/2}^{T_0/2}f(t)\cos n\omega_0t\,dt-j\frac{1}{T_0}\int_{-T_0/2}^{T_0/2}f(t)\sin n\omega_0t\,dt

Since f(t)\sin n\omega_0t is an odd function, the integrand of the second integral is odd and the integral vanishes. f(t)\cos n\omega_0t is an even function, and we get

c_n = \frac{2}{T_0}\int_0^{T_0/2}f(t)\cos n\omega_0t\,dt

A_0 = \frac{2}{T_0}\int_0^{T_0/2}f(t)\,dt

A_n = \frac{4}{T_0}\int_0^{T_0/2}f(t)\cos n\omega_0t\,dt

\theta_n = 0

We see that even functions have Fourier series coefficients which are all real; the series comprises only cosine terms and no sine terms. Even functions can be constructed from cosine functions, which are themselves even.

Example 21. Obtain the Fourier series coefficients of the full-wave rectified cosine function.

The rectified cosine function f(t) = |\cos t| has period \pi instead of 2\pi and is even-symmetric, i.e., f(-t) = f(t). The complex Fourier coefficients are given by

c_n = \frac{1}{\pi}\int_{-\pi/2}^{\pi/2}\cos t\,e^{-j2nt}\,dt\qquad\text{because }\omega_0 = \frac{2\pi}{\pi} = 2
 = \frac{2}{\pi}\int_0^{\pi/2}\cos t\cos 2nt\,dt
 = \frac{1}{\pi}\left[\int_0^{\pi/2}\cos(2n+1)t\,dt+\int_0^{\pi/2}\cos(2n-1)t\,dt\right]
 = \frac{2}{\pi}\cdot\frac{(-1)^{n+1}}{4n^2-1}

c_0 = \frac{2}{\pi}\int_0^{\pi/2}\cos t\,dt = \frac{2}{\pi}


Figure 4.5: Full-wave rectified cosine function constructed from (a) 3, (b) 4, (c) 5 and (d) 50 coefficients.

Thus

A_0 = c_0 = \frac{2}{\pi}

and

A_n = 2|c_n| = \frac{4}{\pi}\cdot\frac{1}{4n^2-1},\qquad\theta_n = \begin{cases}0 & n\ \text{odd}\\ \pi & n\ \text{even}\end{cases}

Hence we can express the rectified cosine wave as

f(t) = \frac{2}{\pi}+\frac{4}{\pi}\sum_{n=1}^{\infty}\frac{(-1)^{n+1}}{4n^2-1}\cos 2nt

Fig. 4.5 depicts f(t) constructed from 3, 4, 5 and 50 coefficients. Note that the cos 2nt terms are even harmonics of cos t, because the frequency of |\cos t| is twice the frequency of \cos t.
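The first few coefficients of |cos t| can be checked symbolically; with period π the fundamental is ω_0 = 2.

c(n) := (1/%pi) * integrate(cos(t)*exp(-2*%i*n*t), t, -%pi/2, %pi/2);
makelist(ratsimp(rectform(c(n))), n, 0, 4);
/* expect 2/%pi, 2/(3*%pi), -2/(15*%pi), 2/(35*%pi), -2/(63*%pi) */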

Odd Symmetry

A function is odd-symmetric if f(-t) = -f(t).

Figure 4.6: Sawtooth function (a) synthesized from the 15 Fourier coefficients shown in (b).

Again invoking Euler's formula, the Fourier series coefficients become

c_n = \frac{1}{T_0}\int_{-T_0/2}^{T_0/2}f(t)e^{-jn\omega_0t}\,dt = \frac{1}{T_0}\int_{-T_0/2}^{T_0/2}f(t)\cos n\omega_0t\,dt-j\frac{1}{T_0}\int_{-T_0/2}^{T_0/2}f(t)\sin n\omega_0t\,dt

Since f(t)\cos n\omega_0t is an odd function, the first integral vanishes. Since f(t)\sin n\omega_0t is an even function, we obtain

c_n = -j\frac{2}{T_0}\int_0^{T_0/2}f(t)\sin n\omega_0t\,dt

A_0 = 0

A_n = \frac{4}{T_0}\int_0^{T_0/2}f(t)\sin n\omega_0t\,dt

\theta_n = -\frac{\pi}{2}\cdot\mathrm{sgn}\left(\int_0^{T_0/2}f(t)\sin n\omega_0t\,dt\right)

where \mathrm{sgn}(\cdot) is the signum function.

Odd functions have Fourier series coefficients which are all imaginary; the series comprises only sine terms. Furthermore there is no DC term. Odd functions can be constructed from sine functions, which are themselves odd.

Example 22. Compute the Fourier series coefficients of

f(t) = t,\qquad -\pi < t \le \pi

Here T_0 = 2π and ω_0 = 1. Since f(t) is odd,

c_n = -j\,\frac{2}{T_0}\int_{0}^{T_0/2} f(t)\sin n\omega_0 t\,dt
    = -j\,\frac{2}{2\pi}\int_{0}^{\pi} t\,\sin nt\,dt
    = -\frac{j}{\pi}\left[\frac{\sin nt - nt\cos nt}{n^2}\right]_{0}^{\pi}

c_n = j\,\frac{(-1)^n}{n}

Thus we get

A_0 = 0,\qquad A_n = \frac{2}{n},\qquad \theta_n = (-1)^n\cdot\frac{\pi}{2}

and

f(t) = 2\sum_{n=1}^{\infty}\frac{(-1)^{n+1}}{n}\sin nt
     = 2\left(\sin t - \frac{1}{2}\sin 2t + \frac{1}{3}\sin 3t - \frac{1}{4}\sin 4t + \dots\right)
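The partial sums of this series can be synthesized numerically. The following SciLab sketch (our own, using the coefficients derived above) reconstructs the sawtooth from a modest number of harmonics:

// Synthesize f(t) = t, -pi < t <= pi, from its first N sine harmonics.
N = 25;                                   // number of harmonics (our choice)
t = linspace(-%pi, %pi, 601);
f = zeros(t);
for n = 1:N
    f = f + (2/n)*(-1)^(n+1)*sin(n*t);    // A_n*cos(n*t + theta_n) simplified
end
plot(t, f, t, t);                         // partial sum versus the ideal ramp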

Half-Period Symmetry

A function is half-period symmetric if f(t − T_0/2) = −f(t). Then

c_n = \frac{1}{T_0}\left[\int_{-T_0/2}^{0} f(t)e^{-jn\omega_0 t}dt + \int_{0}^{T_0/2} f(t)e^{-jn\omega_0 t}dt\right]
    = \frac{1}{T_0}\left[\int_{-T_0/2}^{0} -f\!\left(t+\frac{T_0}{2}\right)e^{-jn\omega_0 t}dt + \int_{0}^{T_0/2} f(t)e^{-jn\omega_0 t}dt\right]

With the change of variables u = t + T_0/2 and du = dt,

c_n = \frac{1}{T_0}\left[\int_{0}^{T_0/2} -f(u)\,e^{-jn\omega_0\left(u-\frac{T_0}{2}\right)}du + \int_{0}^{T_0/2} f(t)e^{-jn\omega_0 t}dt\right]
    = \frac{1}{T_0}\left[-e^{jn\omega_0 T_0/2}\int_{0}^{T_0/2} f(u)e^{-jn\omega_0 u}du + \int_{0}^{T_0/2} f(t)e^{-jn\omega_0 t}dt\right]
    = \frac{1}{T_0}\left[-e^{jn\pi}\int_{0}^{T_0/2} f(t)e^{-jn\omega_0 t}dt + \int_{0}^{T_0/2} f(t)e^{-jn\omega_0 t}dt\right]

Because e^{jπ} = −1 we obtain

c_n = \frac{1}{T_0}\left[1-(-1)^n\right]\int_{0}^{T_0/2} f(t)e^{-jn\omega_0 t}dt

The factor 1 − (−1)^n is equal to 0 if n is even, and 2 if n is odd. Thus

c_n = \begin{cases} 0 & n \text{ even} \\ \dfrac{2}{T_0}\displaystyle\int_{0}^{T_0/2} f(t)e^{-jn\omega_0 t}dt & n \text{ odd} \end{cases}

As in the case of odd functions there is no DC term; moreover, the even harmonics vanish from the series.

Example 23. Find the Fourier series coefficients of the function shown in Fig. 4.7a:

f(t) = \begin{cases} -t-\pi & -\pi \le t < 0 \\ t & 0 \le t < \pi \end{cases}

This function is half-period symmetric. The portion of f(t) for 0 ≤ t < π is the same as the sawtooth of the previous example. Therefore


Figure 4.7: (a) Half-period symmetric sawtooth and its construction from 20 FS coefficients, (b) magnitude, (c) angle of the FS coefficients.


c_n = \begin{cases} 0 & n \text{ even} \\ \dfrac{1}{\pi}\displaystyle\int_{0}^{\pi} t\,e^{-jnt}dt & n \text{ odd} \end{cases}

For n odd,

c_n = \frac{1}{\pi}\left(\int_{0}^{\pi} t\cos nt\,dt - j\int_{0}^{\pi} t\sin nt\,dt\right)
    = \frac{1}{\pi}\left[\frac{nt\sin nt + \cos nt}{n^2} - j\,\frac{\sin nt - nt\cos nt}{n^2}\right]_{0}^{\pi}
    = \frac{1}{n^2\pi}\left[(-1)^n - 1\right] - \frac{j}{n}

c_n = -\frac{1}{n}\left(\frac{2}{n\pi} + j\right)\qquad (n \text{ odd})

With c_0 = 0 we obtain

A_0 = 0,\qquad
A_n = \begin{cases} 0 & n \text{ even} \\ \dfrac{2}{n}\sqrt{1+\dfrac{4}{n^2\pi^2}} & n \text{ odd} \end{cases},\qquad
\theta_n = \pi + \tan^{-1}\left(\frac{n\pi}{2}\right)

Thus

f(t) = \sum_{n\ \text{odd}} \frac{2}{n}\sqrt{1+\frac{4}{n^2\pi^2}}\,\cos\left[nt + \pi + \tan^{-1}\left(\frac{n\pi}{2}\right)\right]

Fig. 4.7b, c show the magnitude and phase of 20 coefficients.

4.4.2 Time Operations

Shifting in Time

Let f(t) = \sum_{n=-\infty}^{\infty} c_n e^{jn\omega_0 t} be the Fourier series expansion of f(t), periodic with period T_0. Shifting f(t) by Δt in time generates a phase shift proportional to n in c_n without affecting |c_n|.

Proof. Let g(t) = f(t − Δt) and denote the Fourier series coefficients of g(t) by k_n. Then

k_n = \frac{1}{T_0}\int_{0}^{T_0} g(t)e^{-jn\omega_0 t}dt
    = \frac{1}{T_0}\int_{0}^{T_0} f(t-\Delta t)e^{-jn\omega_0 t}dt
    = \frac{1}{T_0}\int_{-\Delta t}^{T_0-\Delta t} f(u)e^{-jn\omega_0(u+\Delta t)}du
    = e^{-jn\omega_0\Delta t}\cdot\frac{1}{T_0}\int_{-\Delta t}^{T_0-\Delta t} f(u)e^{-jn\omega_0 u}du
    = e^{-jn\omega_0\Delta t}\,c_n

(the last integral is taken over one full period and therefore equals T_0 c_n). Since ω_0 = 2π/T_0,

n\omega_0\Delta t = n\cdot\frac{2\pi}{T_0}\cdot\Delta t = 2\pi n\left(\frac{\Delta t}{T_0}\right)

Thus we obtain

k_n = c_n \exp\left[-j2\pi n\left(\frac{\Delta t}{T_0}\right)\right],\qquad
\arg(k_n) = \arg(c_n) - 2\pi n\left(\frac{\Delta t}{T_0}\right)

Time Reversal

Time reversal is flipping f(t) about t = 0, that is, substituting −t for t. Thus g(t) = f(−t) has the same appearance as f(t) except that "past" becomes "future" and future becomes past. As usual the Fourier series coefficients are

k_n = \frac{1}{T_0}\int_{-T_0/2}^{T_0/2} g(t)e^{-jn\omega_0 t}dt = \frac{1}{T_0}\int_{-T_0/2}^{T_0/2} f(-t)e^{-jn\omega_0 t}dt

With the change of variable u = −t,

k_n = \frac{1}{T_0}\int_{T_0/2}^{-T_0/2} f(u)e^{jn\omega_0 u}(-du)
    = \frac{1}{T_0}\int_{-T_0/2}^{T_0/2} f(u)e^{jn\omega_0 u}du
    = \left(\frac{1}{T_0}\int_{-T_0/2}^{T_0/2} f(u)e^{-jn\omega_0 u}du\right)^{*}\qquad\text{(for real } f(t)\text{)}

k_n = c_n^{*}\quad\text{and}\quad k_n^{*} = c_n

Hence

g(t) = f(-t) = \sum_{n=-\infty}^{\infty} k_n e^{jn\omega_0 t} = \sum_{n=-\infty}^{\infty} c_n^{*} e^{jn\omega_0 t} = \sum_{n=-\infty}^{\infty} |c_n|\,e^{j(n\omega_0 t - \theta_n)}

Recall that for f(t) we had

f(t) = \sum_{n=-\infty}^{\infty} |c_n|\,e^{j(n\omega_0 t + \theta_n)}

We conclude that the magnitudes of the basis functions remain the same but each phase is negated, i.e., shifted by −2θ_n relative to the original.


Differentiation

If a periodic function is the derivative of another periodic function, then its Fourier series coefficients are jnω_0 times the coefficients of that function. Let

g(t) = \frac{df(t)}{dt}

f(t) can be expressed by the synthesis equation

f(t) = \sum_{n=-\infty}^{\infty} c_n \exp(jn\omega_0 t)

Writing g(t) = \sum_{n=-\infty}^{\infty} k_n \exp(jn\omega_0 t) and differentiating the synthesis equation term by term, we obtain

\frac{df(t)}{dt} = \sum_{n=-\infty}^{\infty} c_n \frac{d}{dt}\exp(jn\omega_0 t)
 = \sum_{n=-\infty}^{\infty} jn\omega_0\,c_n \exp(jn\omega_0 t)
 = \sum_{n=-\infty}^{\infty} k_n \exp(jn\omega_0 t)

Hence the Fourier series coefficients of g(t) are k_n = jnω_0 c_n. It is interesting to note that differentiation emphasizes the higher-order harmonic terms because of the nω_0 factor. An important corollary of this result is that the DC component (the average value of the periodic function) cannot pass through a differentiator.

Integration

If a periodic function is the integral of another function, then its Fourier series coefficients are the coefficients of that function divided by jnω_0. This property follows from the differentiation property we just derived. Note, however, that the average value of the function being integrated must be zero; otherwise the integration generates an aperiodic (growing) term for which we cannot talk about Fourier series analysis. With this condition imposed we have

g(t) = \int^{t} f(\tau)\,d\tau,\qquad k_n = \frac{c_n}{jn\omega_0}

Also note that integration de-emphasizes the higher-order harmonic terms because of the 1/(nω_0) factor, and we must have c_0 = 0.

4.5 Parseval's Relation

Suppose that a periodic voltage function is applied across a 1-ohm resistance. The energy dissipated by the resistance during one period is given by

E = \int_{0}^{T_0} [f(t)]^2\,dt

Let us substitute into this expression the Fourier series representation of f(t):

E = \int_{0}^{T_0}\left(\sum_{n=-\infty}^{\infty} c_n e^{jn\omega_0 t}\right)^{2} dt
  = \int_{0}^{T_0}\left(\sum_{m=-\infty}^{\infty}\sum_{n=-\infty}^{\infty} c_m c_n e^{jm\omega_0 t}e^{jn\omega_0 t}\right)dt
  = \sum_{m=-\infty}^{\infty}\sum_{n=-\infty}^{\infty} c_m c_n \int_{0}^{T_0} e^{j(m+n)\omega_0 t}dt

We have for the integral

\int_{0}^{T_0} e^{j(m+n)\omega_0 t}dt = \begin{cases} 0 & m \ne -n \\ T_0 & m = -n \end{cases}

Thus, since c_{-n} = c_n^{*} for real f(t),

E = \int_{0}^{T_0} [f(t)]^2\,dt = T_0\sum_{n=-\infty}^{\infty} |c_n|^2 \qquad (4.17)

The power dissipated by the resistance is P = E/T_0. Therefore

P = \frac{1}{T_0}\int_{0}^{T_0} [f(t)]^2\,dt = \sum_{n=-\infty}^{\infty} |c_n|^2

\frac{1}{T_0}\int_{0}^{T_0} [f(t)]^2\,dt = c_0^2 + \sum_{n=1}^{\infty} 2|c_n|^2 \qquad (4.18)
 = A_0^2 + \frac{1}{2}\sum_{n=1}^{\infty} A_n^2 \qquad (4.19)
 = a_0^2 + \frac{1}{2}\sum_{n=1}^{\infty}\left(a_n^2 + b_n^2\right) \qquad (4.20)

This is a very important and fundamental formula in signal processing. Eqn. 4.19 is in the amplitude-phase representation and Eqn. 4.20 is in the quadrature representation. The period-average of the squared function in the time domain is equal to the sum of the squared magnitudes of all Fourier series components of the function, and it gives the power supplied by the function to a 1-ohm resistance.
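Eqn. 4.18 is easy to check numerically. The SciLab sketch below (our own, with an arbitrary test signal) compares the period-average of f² with the sum of the |c_n|² obtained from fft:

// Parseval check: period-average power equals the sum of |c_n|^2.
N = 1024;
t = (0:N-1)*2*%pi/N;
f = 1 + 2*cos(t) + 0.5*sin(5*t);       // test signal: c_0 = 1, A_1 = 2, A_5 = 0.5
P_time = mean(f.^2);                   // (1/T0) * integral of f^2 over a period
c = fft(f)/N;
P_freq = sum(abs(c).^2);               // sum over all n of |c_n|^2
disp([P_time P_freq]);                 // both equal 1 + 2 + 0.125 = 3.125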

4.5.1 Convergence of Fourier Series

It is natural to ask whether Fourier's assertion that periodic functions can be expressed by an infinite sum of harmonics holds for every periodic function; if not, under which conditions it holds; and, when it does hold, what is meant by saying that the function can be "expressed". A thorough discussion of this topic is beyond the scope of this text; nevertheless we will treat it briefly.

Let f(t) be defined on an interval 0 ≤ t ≤ T_0 and let its period be T_0. We form the partial sum

S_N(t) = \sum_{n=-N}^{N} c_n e^{jn\omega_0 t}

As N → ∞, S_N converges to f(t), except at points of discontinuity, if the following conditions (the Dirichlet conditions) are satisfied:

1. f(t) is absolutely integrable over a period:

\int_{0}^{T_0} |f(t)|\,dt < \infty

2. f(t) is bounded in one period.


3. f (t) has a finite number of discontinuities in one period.

4. f (t) has a finite number of maxima and minima in one period.

S_N converges to f(t) at every point in the period where the function is continuous. Let f(t) be discontinuous at a point t_0, with f(t_0^-) = \lim_{t\to t_0^-} f(t) and f(t_0^+) = \lim_{t\to t_0^+} f(t); then S_N converges to the midpoint of the jump, i.e.,

\lim_{N\to\infty} S_N(t_0) = \frac{f(t_0^+) + f(t_0^-)}{2}

and to f(t) wherever f(t) is continuous. In Fig. 4.8 note that the Fourier series evaluates to half the step of the square wave.

There is another interpretation of the Fourier series representation of a periodic function if the function is square integrable over a period. Let

\int_{0}^{T_0} [f(t)]^2\,dt < \infty

f(t) − S_N(t) is the error between the function and its partial Fourier series representation. This difference can be positive or negative, but its square is always positive and can be viewed as the power of the error. Integrating it over one period yields the error energy in one period, \int_{0}^{T_0}[f(t)-S_N(t)]^2\,dt. The Mean Square Error is this quantity divided by T_0:

E = \frac{1}{T_0}\int_{0}^{T_0} [f(t)-S_N(t)]^2\,dt

Then as N → ∞ the Mean Square Error approaches zero:

\lim_{N\to\infty} E = \frac{1}{T_0}\lim_{N\to\infty}\int_{0}^{T_0} [f(t)-S_N(t)]^2\,dt = 0

While absolute integrability guarantees that S_N approaches f(t) at all points except discontinuities, square integrability guarantees that the MSE becomes zero as N tends to infinity.

4.5.2 Gibbs Phenomenon

Since any finite sum of continuous functions is continuous, a partial Fourier sum can only approximate a discontinuity, and the Fourier series exhibits unexpected behavior near discontinuities. In Section 4.5.1 we saw one such behavior: the Fourier series settles to the midpoint of the jump at a discontinuity. To complement this, another phenomenon is observed near discontinuities.

The American physicist Albert A. Michelson built a device in 1898 which accepted Fourier series coefficients as input, constructed the function from the coefficients and drew it. When the Fourier series coefficients of a square wave were input to the machine, it generated decaying oscillations near the discontinuity: the reconstruction would overshoot and undershoot before and after the discontinuity and die out. Increasing the number of coefficients neither eliminated the oscillations nor reduced the overshoots and undershoots; rather, it pushed the oscillations closer to the discontinuity. Michelson did not pay the phenomenon due attention; it was attributed to imperfections of the machine, and he did not mention it in his paper [Michelson]. The British mathematician Henry Wilbraham had discovered the effect and published it in 1848 [Wilbraham], but his work went unnoticed. Then in 1898 J. Willard Gibbs, completely unaware of Wilbraham's work, published a paper [Gibbs] that described the phenomenon for a sawtooth function. Gibbs is now credited for the work and the effect is known as the Gibbs effect.

The Gibbs effect arises at a discontinuity and causes a maximum overshoot (or undershoot) of about 8.95 % of the jump at the discontinuity (see Fig. 4.8). The oscillations die out and the Fourier sum converges to the function. However, as mentioned above, the ringing does not go away as the number of coefficients is increased; the oscillations are instead squeezed closer and closer to the discontinuity.

A thorough mathematical treatment of Gibbs Effect is beyond the scope of this text.
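The overshoot is, however, easy to observe numerically. The SciLab sketch below (our own) sums the odd harmonics of a ±1 square wave and prints the overshoot relative to the jump; it stays near 9 % no matter how many terms are added:

// Gibbs phenomenon: the overshoot of the partial sums of a +/-1 square wave
// does not decrease as more harmonics are added.
t = linspace(-%pi, %pi, 4001);
for N = [10 50 250]
    s = zeros(t);
    for n = 1:2:N                            // odd harmonics only
        s = s + (4/%pi)*sin(n*t)/n;
    end
    // the jump height is 2, so the relative overshoot is (max(s)-1)/2
    mprintf("N = %3d   overshoot = %.4f of the jump\n", N, (max(s)-1)/2);
end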


Figure 4.8: Gibbs phenomenon for a square wave function. Oscillations are shown for N = 2, 4, 10 and 50.

Figure 4.9: Spectrogram of the speech utterance "she had your dark suit in greasy wash water all year". The horizontal axis, frame, is related to time; each frame is 32 ms long in this case. Fourier coefficients are computed for every frame and plotted along the frequency axis. The darkness of a coefficient is proportional to its magnitude. As hearing is insensitive to phase, only the magnitude information is used.

4.6 Applications of Fourier Series

Spectrogram

A spectrogram is a frequency-versus-time display of the Fourier transform. Short-duration chunks of a signal are analyzed and their Fourier series coefficients are computed. The magnitudes (or the angles) of the coefficients are color coded and displayed as the intensity of the related frequency. This is called the STFT (Short-Term Fourier Transform) and is an important Joint Time-Frequency Analysis tool in signal processing. In speech science it depicts the formant variations and the voiced or unvoiced speech segments; in mechanical engineering it highlights the vibration harmonics generated by a rotating engine. Spectrograms display the switching on and off of transmitters in communication, and their frequency hops in time. Another related term for spectrogram is waterfall diagram.

In Fig. 4.9 the horizontal axis is time (in frames) and the vertical axis is frequency. The values and intensities of the coefficients give an idea of the flow of the speech formants during the course of the utterance: the darker the trace, the larger the magnitude of the FS coefficient at that frequency. The spectrogram in the figure represents the utterance "she had your dark suit in greasy wash water all year" from the TIMIT speech database [Özhan, ?, TIMIT, ?, ?].

Mozer speech synthesis

Human hearing is sensitive to the frequency and amplitude of a sound wave and insensitive to its phase. Thus the following two sounds are perceived as the same sound:

s_1(t) = \sin 2000\pi t + 0.5\sin 200\pi t
s_2(t) = \sin(2000\pi t + \pi/3) + 0.5\sin(200\pi t - \pi/4)


Forrest S. Mozer exploited this property of human hearing and invented the first speech synthesizer in 1974. "He first licensed this technology to TeleSensory Systems, which used it in the "Speech+" talking calculator for blind persons. Later National Semiconductor also licensed the technology, used for its "DigiTalker" speech synthesizer, the MM54104."

Mozer extracted the Fourier coefficients from short speech recordings (words or phonemes). To cut in half the memory required to store a recording, he adjusted the phases of the sound harmonics so that, when used in a partial sum, the result was a speech sound with even symmetry about the midpoint of the record. Once this was achieved he could preserve the frequencies, phases and amplitudes of half of the speech, save them in memory, and discard the second half of the recording. During synthesis he played the stored speech forward and then in reverse to produce the intended phoneme or word.

There is much more to Mozer speech synthesis; however, this is the most crucial part and the one that concerns us within the context of this chapter.

Figure 4.10: Mozer speech synthesis depends on the phase insensitivity of human hearing. (a) shows the original waveform without phase adjustment. In (b) the phases of the individual frequencies are adjusted to make the speech symmetric about the midpoint of the recording. The portion to the left of the midline at t = 0.02 s is retained and stored; the portion after t = 0.02 s is discarded. Although the waveforms appear different, they are heard as the same sound.

Frequency Tripler

A periodic function has infinitely many harmonics. When such a function is applied to an LTI system, the response to each harmonic contributes additively to the overall response of the system. The output for the n-th harmonic, y_n(t), is the convolution of A_n cos(ω_n t + θ_n) with h(t). In the frequency domain the n-th harmonic component c_n e^{jω_n t} + c_{-n}e^{-jω_n t} is weighted by H(jω_n), which yields

|Y_n(j\omega_n)| = A_n\,|H(j\omega_n)|,\qquad \arg[Y_n(j\omega_n)] = \arg[H(j\omega_n)] + \arg(c_n)

Thus y(t) is the sum of all the y_n(t):

y(t) = \sum_{n=0}^{\infty} y_n(t) = \sum_{n=0}^{\infty} A_n\,|H(j\omega_n)|\cos\left(\omega_n t + \theta_n + \arg[H(j\omega_n)]\right)


In Fig. 4.11 a square wave of 50 % duty cycle is applied to an RLC tank circuit. The tank circuit has a response whose magnitude resembles a bell shape that peaks at ω_T = 1/\sqrt{LC}, i.e., f_T = 1/(2\pi\sqrt{LC}). This response is the voltage transfer function and is expressed as

H(j\omega) = \frac{1}{1 - jQ\left(\dfrac{\omega}{\omega_T} - \dfrac{\omega_T}{\omega}\right)}
\qquad\text{or}\qquad
H(jf) = \frac{1}{1 - jQ\left(\dfrac{f}{f_T} - \dfrac{f_T}{f}\right)}

where Q = ω_T RC = R\sqrt{C/L} is called the quality factor, which determines the "sharpness" of |H(jω)|.

With R = 100 ohms, L = 53.052 µH and C = 53.052 µF,

f_T = \frac{1}{2\pi\cdot 53.052\cdot 10^{-6}} = 3000\ \text{Hz},\qquad
Q = 100\cdot\sqrt{\frac{53.052\cdot 10^{-6}}{53.052\cdot 10^{-6}}} = 100

we obtain

H(j\omega) = \frac{1}{1 - j100\left(\dfrac{\omega}{\omega_T} - \dfrac{\omega_T}{\omega}\right)},\qquad
|H(j\omega)| = \frac{1}{\sqrt{1 + 10^4\left(\dfrac{\omega}{\omega_T} - \dfrac{\omega_T}{\omega}\right)^2}},\qquad
\arg H(j\omega) = \arctan\left[100\left(\dfrac{\omega}{\omega_T} - \dfrac{\omega_T}{\omega}\right)\right]

Let us compute the harmonics at the output of the tank circuit:

1. The DC component, f = 0 Hz, i.e., ω = 0 rad/s. Fourier series coefficient:

A_0 = 10\cdot\frac{1}{2} = 5\ \text{V},\qquad \theta_0 = 0

Tank circuit response: the ω_T/ω term grows without bound as ω → 0, so

|H(j0)| = 0,\qquad \arg H(j0) = -\frac{\pi}{2}\ \text{(immaterial, since the magnitude is zero)}

y_0(t) = 5\cdot 0\cdot\cos\left(0\cdot t - \frac{\pi}{2}\right) = 0\ \text{V}


2. The fundamental harmonic, f = 1 kHz, i.e., ω = 2π krad/s. Fourier series coefficient:

A_1 = 10\cdot\frac{2}{\pi} = 6.366\ \text{V},\qquad \theta_1 = 0

Tank circuit response:

|H(j2000\pi)| = \frac{1}{\sqrt{1 + 10^4\left(\frac{1000}{3000} - \frac{3000}{1000}\right)^2}} \approx \frac{1}{10^2\left(3 - \frac{1}{3}\right)} = 0.00375

\arg H(j2000\pi) = \arctan\left[100\left(\frac{1}{3} - 3\right)\right] = \arctan\left(-\frac{800}{3}\right) \approx -\frac{\pi}{2} = -90°

y_1(t) = 6.366\cdot 0.00375\cdot\cos\left(2000\pi t - \frac{\pi}{2}\right) = 0.024\,\sin 2000\pi t\ \text{V}

3. The third harmonic, f = 3 kHz, i.e., ω = 6π krad/s. Fourier series coefficient:

A_3 = 10\cdot\frac{2}{3\pi} = 2.122\ \text{V},\qquad \theta_3 = \pi = 180°

Tank circuit response: |H(j6000π)| = 1 and arg H(j6000π) = 0, so

y_3(t) = 2.122\cdot 1\cdot\cos(6000\pi t + \pi + 0) = -2.122\,\cos 6000\pi t\ \text{V}

4. The fifth harmonic, f = 5 kHz, i.e., ω = 10π krad/s. Fourier series coefficient:

A_5 = 10\cdot\frac{2}{5\pi} = 1.273\ \text{V},\qquad \theta_5 = \pi = 180°

Tank circuit response:

|H(j10000\pi)| = \frac{1}{\sqrt{1 + 10^4\left(\frac{5000}{3000} - \frac{3000}{5000}\right)^2}} \approx \frac{1}{10^2\cdot\frac{16}{15}} = 0.009375

\arg H(j10000\pi) = \arctan\left[100\left(\frac{5}{3} - \frac{3}{5}\right)\right] \approx \frac{\pi}{2} = 90°

y_5(t) = 1.273\cdot 0.009375\cdot\cos\left(10000\pi t + \pi + \frac{\pi}{2}\right) = 0.012\,\sin 10000\pi t\ \text{V}

One can easily compute the other harmonics in this fashion. Using superposition, the tank circuit output is the sum of the individual contributions from all harmonics:

y(t) = y_0(t) + y_1(t) + y_3(t) + y_5(t) + \dots
     = 0.024\,\sin 2000\pi t - 2.122\,\cos 6000\pi t + 0.012\,\sin 10000\pi t + \dots\ \text{V}

We observe that the ratio of the first harmonic to the third harmonic is 0.024/2.122 = 0.0113, and the ratios for the higher harmonics are even smaller, decreasing with the harmonic number. In Fig. 4.11 the circuit (a) and its input and output waveforms (b) are shown. Although the output contains all the harmonics, the 3rd harmonic is dominant. This is a simple way of producing frequencies derived synchronously from a periodic input waveform.
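The harmonic bookkeeping above is easily automated. The following SciLab sketch (our own, restating the numbers of this example rather than simulating the circuit) evaluates H(jω) at the odd harmonics of the square wave and prints the output amplitudes:

// Amplitudes of the square-wave harmonics at the tank-circuit output.
fT = 3000;  Q = 100;                 // tank tuned to the 3rd harmonic
f0 = 1000;                           // fundamental of the 10 V, 50 % square wave
for n = [1 3 5 7 9]
    An = 20/(n*%pi);                 // input harmonic amplitude (odd n)
    x  = n*f0/fT - fT/(n*f0);
    H  = 1/(1 - %i*Q*x);             // tank transfer function at n*f0
    mprintf("n = %d   |Yn| = %8.4f V\n", n, An*abs(H));
end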

Figure 4.11: Using an RLC tank circuit we can generate a sine wave which is the n-th harmonic of the square wave. The tank circuit in (a) has L and C tuned to the 3rd harmonic of the square wave; R sets the quality factor Q of the circuit to 100. In (b) the amplitude of the sine wave equals the magnitude of the 3rd harmonic, A_3, as given by the graph in (c). The voltage transfer characteristic of the tank circuit is shown in (d). Note that the tank circuit passes the 3rd harmonic unattenuated.


Figure 4.12: SciLab graph of the first twenty Fourier series coefficients of the full-wave rectified cosine wave. The fft function, with scaling, computes the Fourier series. (That n starts at 1 rather than 0 is because SciLab and MatLab arrays start with index 1, as opposed to LabVIEW, which starts with 0.)

Fourier Series in Software

Mathematical and circuit-simulation software have functions to obtain the Fourier series coefficients of a periodic function. Mathematical packages like Mathematica and Maxima provide the coefficients on the basis of Eqn. 4.5 or 4.6. Packages like LabVIEW, MatLab and SciLab, on the other hand, compute the coefficients using Discrete Fourier Transform (DFT) algorithms. One basic difference between the analytical techniques of this chapter and the DFT technique is that the DFT generates only a finite number of coefficients: if the DFT is computed from N samples, half of the Fourier coefficients are c_n and half are c_{−n}. This and other more subtle differences will be discussed later in the related chapters 5 and 6.

You can run the following SciLab script to compute and display twenty Fourier series coefficients of the full-wave rectified cosine wave. In Fig. 4.12, n = 1 corresponds to the DC coefficient, and n = 2, 3, 4 are the 1st, 2nd and 3rd harmonics of cos t:

t = -128:127;

t = t*%pi/128;

y = abs(cos(t));

Y = fft(y)/256;

Ysub = 2*Y(2:20);

Y = [Y(1) Ysub];

plot(abs(Y),'.')

The LabVIEW implementation is shown in Fig. 4.13. The graphs display c_n for n > 0. Recall that A_n = 2|c_n|. In the FFT the c_{−n} are displayed in the right half of the graph. Compare the graphs of the LabVIEW and SciLab computations. We will have more to say about the Discrete Fourier Transform and the Fast Fourier Transform (FFT) in chapters 5 and 6.

Problems

1. If f(t + T_0) = f(t), prove that f(t + nT_0) = f(t) for all integers n.

2. For a square wave the duty cycle is defined as the ratio d = τ/T_0. Compute the Fourier series coefficients for the following square wave and show that

(a) Even harmonics vanish for d = 0.5

(b) Every N-th harmonic vanishes if d = 1/N

3. Consider a square wave with period T0 and duty cycle d as defined in Problem 2.

(a) Find the Fourier Series coefficients


Figure 4.13: (a) LabVIEW program to find the Fourier series coefficients of the full-wave rectified cosine function. (b) 256 coefficients c_n, (c) first 20 coefficients.

[Figure for Problems 2-3: square wave f(t) of amplitude 1 and period T0.]

(b) Show that the magnitudes of the Fourier series coefficients for a square wave with duty cycle 1−d and a square wave with duty cycle d are the same, but the phases of the even coefficients differ by 180°.

4. Given two periodic functions f(t) and g(t) with period T_0, let g(t) = df(t)/dt, f(t) = \sum_{n=-\infty}^{+\infty} c_n e^{jn\omega_0 t} and g(t) = \sum_{n=-\infty}^{+\infty} k_n e^{jn\omega_0 t}. Prove that k_n = jn\omega_0 c_n.

5. Given two periodic functions f(t) and g(t) with period T_0, let g(t) = \int f(t)\,dt, f(t) = \sum_{n=-\infty}^{+\infty} c_n e^{jn\omega_0 t} and g(t) = \sum_{n=-\infty}^{+\infty} k_n e^{jn\omega_0 t}. Prove that k_n = \dfrac{c_n}{jn\omega_0}.

6. Obtain the Fourier series coefficients for g(t); then, using the result of Problem 4, find the Fourier series coefficients for f(t).

7. LabVIEW project. Implement the following virtual instrument.

(a) Change the controls for the number of coefficients N and the duty cycle d. Watch the effect of these changes on the series coefficients and on the reconstructed square wave.

(b) Note that if 1/d is an integer, the n-th harmonics with n = k/d disappear (k = 1, 2, ...). Prove this analytically.

(c) Also note that if 1/d is an integer, the n-th harmonics with n = k/(1−d) disappear too (k = 1, 2, ...). How do you explain this?


Problem 5

8. Synthesize the rectified cosine function from N Fourier series components using the following SciLab program. Run the program for N = 2, 5, 10, 20, 50 and 100. [Note: You can convert this program to a MatLab m-file and run it on MatLab, or convert it to a LabVIEW VI if you prefer.]

// DC average value of Rectified Cosine function

A0=2/%pi;

// Number of Fourier Series coefficients

N=50;

// Empty array of Fourier Series coefficients

A=[];

// Compute N coefficients

for n=1:N

A=[A ((-1)^(n+1)*4/((4*n^2-1)*%pi))];

end

// Define a 601 point 1-D time array from -3pi/2 to 3pi/2

t=[-300:300]*%pi/200;

// Define a 601 point y array y=f(t)

y=[];

for i=-300:300

y=[y 0];

end

// Synthesize the cosine function from N Fourier coefficients

for n=1:N

y=y+A(n)*cos(2*n*t)

end

// Add the average value

y=y+A0;

// Plot y=f(t)

plot(t, y)


Problem 4. Square Wave Fourier Series front panel.


Problem 4. Square Wave Fourier Series block diagram.


Chapter 5

The Fourier Transform

5.1 Introduction

Recall that the unilateral Laplace transform was defined in Chapter 2 as

X(s) = \int_{0^-}^{+\infty} x(t)e^{-st}\,dt

[Figure: Discrete Fourier Transform of an FM signal shown on an oscilloscope screen. As opposed to the continuous Fourier transform, the DFT is calculated on a finite number of samples of the signal.]

where s = σ + jω is the complex frequency. The Laplace transform is suitable for solving linear differential equations with initial conditions. A differential equation has two solutions: one produced by the initial conditions, called the homogeneous solution, and another produced by an excitation, called the particular solution. The total solution is the superposition of the homogeneous and particular solutions. The homogeneous solution includes decaying exponentials, while the particular solution is similar to the excitation; thus the solution includes transient and steady-state constituents. We have seen that a linear system with an impulse response h(t) responds to an excitation x(t) according to y(t) = h(t) ∗ x(t). In the complex frequency domain this becomes Y(s) = H(s)X(s), where x(t) can be any function for which X(s) exists.

An important class of applications arises when x(t) is sinusoidal and we are solely interested in the sinusoidal steady-state response of linear systems. Setting σ = 0 in s = σ + jω, the relation above becomes Y(ω) = H(ω)X(ω). This is an extremely useful outcome of the Laplace transform: once we obtain the system response function in the complex frequency domain (the s-plane), we can restrict our attention to the system response to sinusoidal signals by setting s = jω. In the last chapter we obtained the sinusoidal components of periodic time signals; each component (harmonic) of the decomposition is treated separately by H(ω), and their responses are added up in accordance with linearity.

Aperiodic signals are another important class of signals. Fourier series analysis is not well suited to analyzing aperiodic signals, and in practice a broad range of signals is not periodic. In this chapter we set out to handle such signals, and we will also discover that periodic signals have discrete line spectra which are shaped by the envelope of the continuous spectrum of an aperiodic signal. As the period approaches infinity, the line spectrum of the periodic signal merges into a continuum of frequencies. This is a powerful feature of the Fourier transform: it is capable of handling both periodic and aperiodic signals.

5.2 Definition of the Fourier Transform

Given a function of time x(t), its Fourier transform is defined as the inner product of x(t) and e^{j2πft}:

X(jf) = \left\langle x(t),\, e^{j2\pi f t}\right\rangle = \int_{-\infty}^{+\infty} x(t)e^{-j2\pi f t}\,dt \qquad (5.1)



The inverse transform is given by

x(t) = \int_{-\infty}^{+\infty} X(jf)e^{j2\pi f t}\,df \qquad (5.2)

This is one common form of the Fourier transform, with the property

x(t) = \int_{-\infty}^{+\infty}\left[\int_{-\infty}^{+\infty} x(t)e^{-j2\pi f t}\,dt\right] e^{j2\pi f t}\,df

which simply states that x(t) is the inverse Fourier transform of its Fourier transform. Another common form uses ω = 2πf instead of f. In this case we get

X(j\omega) = \int_{-\infty}^{+\infty} x(t)e^{-j\omega t}\,dt \qquad (5.3)

x(t) = \frac{1}{2\pi}\int_{-\infty}^{+\infty} X(j\omega)e^{j\omega t}\,d\omega \qquad (5.4)

Eqns. 5.3 and 5.4 are called the analysis and synthesis equations respectively. x(t) and X(jω) are a Fourier transform pair, shown by the following notations:

x(t) \overset{\mathcal{F}}{\longleftrightarrow} X(j\omega),\qquad
\mathcal{F}[x(t)] = X(j\omega),\qquad
\mathcal{F}^{-1}[X(j\omega)] = x(t)

Being a complex function of ω, the Fourier transform of x(t) is written X(jω) rather than just X(ω); this emphasizes the dependence of the transform on the imaginary variable jω. In electrical engineering, |X(jω)| and arg[X(jω)], the magnitude and phase of X(jω), are of more interest to engineers than the real and imaginary parts; it is also easier for spectrum analyzers to derive the magnitude and phase than the real and imaginary parts.¹

5.3 Fourier Transform versus Fourier Series

Besides setting s = jω in the Laplace transform to obtain the Fourier transform, we can interpret the Fourier transform as a limiting case of the Fourier series with the period extending to infinity. Let x̃(t) be a periodic function whose period is T_0. In Chapter 4 we saw that under certain conditions x̃(t) can be represented by

\tilde{x}(t) = \sum_{n=-\infty}^{+\infty} c_n e^{jn\omega_0 t}

where the coefficients c_n are the Fourier series coefficients:

c_n = \frac{1}{T_0}\int_{-T_0/2}^{T_0/2} \tilde{x}(t)e^{-jn\omega_0 t}\,dt \qquad (5.5)

Fig. 5.1 shows a periodic function with fundamental period T_0. Let us call the following function, which is continuous in ω, the envelope of the coefficients T_0 c_n:

X(\omega) = \int_{-T_0/2}^{T_0/2} \tilde{x}(t)e^{-j\omega t}\,dt \qquad (5.6)

Then we can view the c_n as samples of the envelope X(ω) at ω = nω_0. Eqn. 5.5 can be normalized by multiplying through by T_0, so that

T_0 c_n = \int_{-T_0/2}^{T_0/2} \tilde{x}(t)e^{-jn\omega_0 t}\,dt

and

¹Here, swept-frequency spectrum analyzers, which obtain X(jω) electronically through modulators and filters, are meant. Digital spectrum analyzers, by contrast, employ DSP techniques and compute the transform by number-crunching. For DSP-based analyzers the real and imaginary parts of X(jω) pop up first, and the magnitude and phase are calculated from them.


Figure 5.1: A non-periodic function obtained from a periodic function.

T_0 c_n = X(n\omega_0) \qquad (5.7)

Now if we let the fundamental period approach infinity, the periodic function x̃(t) becomes the aperiodic function x(t), i.e.,

x(t) = \lim_{T_0\to\infty} \tilde{x}(t)

Let us rearrange Eqn. 5.6. In the interval −T_0/2 ≤ t ≤ T_0/2 we have x̃(t) = x(t). Furthermore, since x(t) = 0 outside the range of integration, we can write

T_0 c_n = \int_{-T_0/2}^{T_0/2} x(t)e^{-jn\omega_0 t}\,dt,\qquad
X(\omega) = \int_{-\infty}^{+\infty} x(t)e^{-j\omega t}\,dt

Therefore

T_0 c_n = X(n\omega_0)\quad\text{and}\quad c_n = \frac{1}{T_0}X(n\omega_0) \qquad (5.8)

With c_n expressed as in Eqn. 5.8 and ω_0 = 2π/T_0 (so that 1/T_0 = ω_0/2π), x̃(t) can be written as

\tilde{x}(t) = \sum_{n=-\infty}^{+\infty} \frac{1}{T_0}X(n\omega_0)e^{jn\omega_0 t}
            = \sum_{n=-\infty}^{+\infty} \frac{1}{2\pi}X(n\omega_0)e^{jn\omega_0 t}\,\omega_0

As T_0 approaches infinity, x̃(t) → x(t), nω_0 → ω, ω_0 → dω and the sum becomes an integral, that is,

x(t) = \frac{1}{2\pi}\int_{-\infty}^{+\infty} X(j\omega)e^{j\omega t}\,d\omega \qquad (5.9)

and the envelope of T_0 c_n is called the Fourier transform of x(t):


X(j\omega) = \int_{-\infty}^{+\infty} x(t)e^{-j\omega t}\,dt \qquad (5.10)

With ω = 2πf we can rewrite Eqns. 5.9 and 5.10 as

x(t) = \int_{-\infty}^{+\infty} X(jf)e^{j2\pi f t}\,df,\qquad
X(jf) = \int_{-\infty}^{+\infty} x(t)e^{-j2\pi f t}\,dt

Using f instead of ω, we see that an aperiodic function is equal to the inverse Fourier transform of its Fourier transform and vice versa:

x(t) = \mathcal{F}^{-1}\left\{\mathcal{F}[x(t)]\right\},\qquad
X(jf) = \mathcal{F}\left\{\mathcal{F}^{-1}[X(jf)]\right\}

provided that the Fourier transform exists.

5.4 Convergence of the Fourier Transform

In order that Eqn. 5.9 can faithfully synthesize x(t), it is sufficient that x(t) fulfill the following criteria, which are called the Dirichlet conditions. With these conditions satisfied, x(t) and \mathcal{F}^{-1}\{\mathcal{F}[x(t)]\} become identical.

Dirichlet Conditions

1. x(t) must have a finite number of extrema.

2. x(t) must be piecewise continuous, with discontinuities that are finite jumps.

3. x(t) must be absolutely integrable.

In the following we discuss the last condition.

The Fourier transform as defined by Eqn. 5.3 is a complex function of ω and is independent of t. For the transform to exist we require that X(ω) have finite magnitude, that is, |X(ω)| must be less than infinity for the transform to be of any use:

|X(\omega)| = \left|\int_{-\infty}^{+\infty} x(t)e^{-j\omega t}\,dt\right| < \infty

For two numbers x and y, real or complex, the triangle inequality states that

|x+y| \le |x| + |y|

Because integration is a way of summing infinitely many quantities, the triangle inequality holds for integrals as well. Since |e^{-j\omega t}| = 1,

\left|\int_{-\infty}^{+\infty} x(t)e^{-j\omega t}\,dt\right| \le \int_{-\infty}^{+\infty} \left|x(t)e^{-j\omega t}\right| dt
 = \int_{-\infty}^{+\infty} |x(t)|\left|e^{-j\omega t}\right| dt
 = \int_{-\infty}^{+\infty} |x(t)|\,dt

Therefore, if x(t) is absolutely integrable on −∞ ≤ t ≤ +∞,


\int_{-\infty}^{+\infty} |x(t)|\,dt < \infty

then we must have

|X(j\omega)| \le \int_{-\infty}^{+\infty} |x(t)|\,dt < \infty

and X(jω) exists.

It is interesting to note that some functions whose Laplace transforms exist have no Fourier transform. This can be seen by expanding the exponential factor in the Laplace transform integrand:

X(s) = \int_{0^-}^{+\infty} x(t)e^{-st}\,dt
     = \int_{0^-}^{+\infty} x(t)e^{-(\sigma+j\omega)t}\,dt
     = \int_{0^-}^{+\infty} x(t)e^{-\sigma t}e^{-j\omega t}\,dt
     = \mathcal{F}\left[x(t)e^{-\sigma t}u(t)\right]

We deduce that the Laplace transform of x(t) is given by the Fourier transform of x(t)e^{-σt}u(t). While \int_{-\infty}^{+\infty} x(t)\,dt may be infinite, we can often find some σ for which \int_{-\infty}^{+\infty} x(t)e^{-\sigma t}u(t)\,dt < \infty. Recall that \mathcal{L}[u(t)] = 1/s because \int_{-\infty}^{+\infty} e^{-\sigma t}u(t)\,dt < \infty for σ > 0. However, \int_{-\infty}^{+\infty} u(t)\,dt does not converge, so \mathcal{F}[u(t)] "should not" exist. This is not the case: as will be apparent shortly, u(t) does have a Fourier transform. The reason is that the Dirichlet conditions are sufficient but not necessary; every function which satisfies the Dirichlet conditions has a Fourier transform, but not every function which has a Fourier transform satisfies the Dirichlet conditions. Using impulse functions we can produce useful transforms for such functions.

Example 24. Let x(t) = u(t).

Since \int_{-\infty}^{+\infty} u(t)\,dt = \int_{0}^{+\infty} 1\,dt = \infty, the defining integral does not converge (but see Eqn. 5.22 for a transform obtained with impulse functions).

Example 25. Let x(t) = e^{at}.

Since \int_{-\infty}^{+\infty} e^{at}\,dt = \infty, X(jω) does not exist.

Example 26. Let x(t) = e^{-at}u(t).

\int_{-\infty}^{+\infty} e^{-at}u(t)\,dt = \int_{0}^{+\infty} e^{-at}\,dt = \left.\frac{e^{-at}}{-a}\right|_{0}^{+\infty} = \begin{cases} 1/a & a > 0 \\ +\infty & a < 0 \end{cases}

Therefore X(jω) does not exist if a < 0.

Example 27. Find the Fourier transform of the x(t) shown in Fig. 5.2:

x(t) = \begin{cases} A & |t| \le \tau/2 \\ 0 & |t| > \tau/2 \end{cases}

where A = 100 and τ = 0.005 s.

Figure 5.2: A pulse waveform and its Fourier Transform.

X(j\omega) = \int_{-\infty}^{+\infty} x(t)e^{-j\omega t}\,dt = A\int_{-\tau/2}^{+\tau/2} e^{-j\omega t}\,dt = \tau A\,\frac{\sin(\omega\tau/2)}{\omega\tau/2}

With A = 100 and τ = 0.005 s we obtain

X(j\omega) = 0.5\,\frac{\sin(\omega\tau/2)}{\omega\tau/2} = 0.5\,\frac{\sin(\pi f\tau)}{\pi f\tau}
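The transform can also be approximated numerically by a Riemann sum, which is a useful sanity check. The SciLab sketch below (our own; the frequency grid is an arbitrary choice) compares the numerical integral with the Aτ·sinc result above:

// Numerical Fourier integral of the pulse versus the closed-form A*tau*sinc.
A = 100;  tau = 0.005;
dt = tau/1000;
t  = -tau/2:dt:tau/2;                        // the pulse is zero elsewhere
f  = 0:10:2000;                              // frequency grid in Hz
X  = zeros(f);
for k = 1:length(f)
    X(k) = sum(A*exp(-%i*2*%pi*f(k)*t))*dt;  // Riemann sum of the integral
end
Xc = A*tau*sin(%pi*f*tau + %eps)./(%pi*f*tau + %eps);   // closed form
plot(f, abs(X), f, abs(Xc));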

Example 28. Let X(ω) = 2πδ(ω − ω_0). Find x(t).

Using the inverse Fourier transform formula and the sifting property of the impulse function we get

x(t) = \frac{1}{2\pi}\int_{-\infty}^{+\infty} 2\pi\delta(\omega-\omega_0)e^{j\omega t}\,d\omega = e^{j\omega_0 t}

Thus

\mathcal{F}\left(e^{j\omega_0 t}\right) = 2\pi\delta(\omega-\omega_0)

Recall the Fourier series expansion of a periodic function:

x(t) = \sum_{n=-\infty}^{+\infty} c_n e^{jn\omega_0 t}

Taking the Fourier transform of both sides yields

\mathcal{F}[x(t)] = \sum_{n=-\infty}^{+\infty} c_n\,\mathcal{F}\left(e^{jn\omega_0 t}\right),\qquad
X(j\omega) = \sum_{n=-\infty}^{+\infty} 2\pi c_n\,\delta(\omega - n\omega_0)

Hence the Fourier transform of a periodic time function is an impulse train whose individual impulse strengths are the Fourier series coefficients c_n multiplied by 2π. Utilizing this result, let us find the Fourier transforms of the sine and cosine functions:


Figure 5.3: Fourier Transform of (a) cosine and (b) sine function.

\mathcal{F}(\cos\omega_0 t) = \mathcal{F}\left(\frac{e^{j\omega_0 t}+e^{-j\omega_0 t}}{2}\right)
 = \frac{1}{2}\int_{-\infty}^{+\infty} e^{j\omega_0 t}e^{-j\omega t}\,dt + \frac{1}{2}\int_{-\infty}^{+\infty} e^{-j\omega_0 t}e^{-j\omega t}\,dt
 = \frac{1}{2}\cdot 2\pi\delta(\omega-\omega_0) + \frac{1}{2}\cdot 2\pi\delta(\omega+\omega_0)
 = \pi\delta(\omega-\omega_0) + \pi\delta(\omega+\omega_0)

And

\mathcal{F}(\sin\omega_0 t) = \mathcal{F}\left(\frac{e^{j\omega_0 t}-e^{-j\omega_0 t}}{2j}\right)
 = \frac{1}{2j}\int_{-\infty}^{+\infty} e^{j\omega_0 t}e^{-j\omega t}\,dt - \frac{1}{2j}\int_{-\infty}^{+\infty} e^{-j\omega_0 t}e^{-j\omega t}\,dt
 = \frac{1}{2j}\cdot 2\pi\delta(\omega-\omega_0) - \frac{1}{2j}\cdot 2\pi\delta(\omega+\omega_0)
 = \frac{\pi}{j}\delta(\omega-\omega_0) - \frac{\pi}{j}\delta(\omega+\omega_0)

5.5 Properties of the Fourier Transform

Many properties of the Fourier transform are straightforward extensions of the bilateral Laplace transform properties, obtained by evaluating the transform on the jω axis, i.e., setting s = 0 + jω. Besides these familiar properties, we have some additional ones, like duality and symmetry, which we introduce in this section.

5.5.1 Symmetry Issues

Let x(t) be a complex function of t. The Fourier transform of the conjugate function x*(t) can be obtained from the Fourier transform definition. Since

X(j\omega) = \int_{-\infty}^{+\infty} x(t)e^{-j\omega t}\,dt


taking the conjugates of both sides yields

X^{*}(j\omega) = \left[\int_{-\infty}^{+\infty} x(t)e^{-j\omega t}\,dt\right]^{*} = \int_{-\infty}^{+\infty} x^{*}(t)e^{j\omega t}\,dt

Negating ω we get

X^{*}(-j\omega) = \int_{-\infty}^{+\infty} x^{*}(t)e^{-j\omega t}\,dt

Thus we see that the Fourier transform of the conjugate of x(t) is the conjugate of X(−jω), that is,

\mathcal{F}[x^{*}(t)] = X^{*}(-j\omega) \qquad (5.11)

If x(t) is real, then x*(t) = x(t), and it follows that Re[X(jω)] is even-symmetric and Im[X(jω)] is odd-symmetric:

X^{*}(-j\omega) = \mathcal{F}[x^{*}(t)] = \mathcal{F}[x(t)] = X(j\omega)

Thus

X^{*}(j\omega) = X(-j\omega)

Separating the left- and right-hand sides into real and imaginary parts,

\mathrm{Re}[X(j\omega)] - j\,\mathrm{Im}[X(j\omega)] = \mathrm{Re}[X(-j\omega)] + j\,\mathrm{Im}[X(-j\omega)]

which necessitates that

\mathrm{Re}[X(-j\omega)] = \mathrm{Re}[X(j\omega)],\qquad
\mathrm{Im}[X(-j\omega)] = -\mathrm{Im}[X(j\omega)]

These are called the conjugate symmetry relations for Fourier transforms of real functions. It is straightforward to show that the conjugate symmetry of X(jω) extends to |X(jω)| and arg[X(jω)]; this is left as an exercise for the reader:

|X(-j\omega)| = |X(j\omega)|,\qquad \arg[X(-j\omega)] = -\arg[X(j\omega)]

Now assume that x(t) is real and even. Let us take the conjugate of the transform:

X^{*}(j\omega) = \int_{-\infty}^{+\infty} x(t)e^{j\omega t}\,dt = \int_{-\infty}^{+\infty} x(t)e^{-j\omega(-t)}\,dt

With the change of variable u = −t and using the evenness of x(·) we get

X^{*}(j\omega) = \int_{+\infty}^{-\infty} x(-u)e^{-j\omega u}(-du) = \int_{-\infty}^{+\infty} x(u)e^{-j\omega u}\,du = X(j\omega)


Thus we see that X(jω) is real. Should x(t) be an odd function, X(jω) turns out to be imaginary (see Problem 6). From the conjugate symmetry of real functions we deduce that the transform of an even function is real and even-symmetric, and the transform of an odd function is imaginary and odd-symmetric. A real function can be expressed as the sum of an even function and an odd function,

x(t) = e(t) + o(t)

Then X(jω) = E(jω) + jO(jω), where E(jω) is even-symmetric and O(jω) is odd-symmetric. We can summarize these results as follows:

1. If x(t) is complex, then x*(t) and X*(−jω) are a Fourier transform pair.

2. If x(t) is real, then X*(jω) = X(−jω).

(a) If x(t) is even, then X(jω) is real and even.

(b) If x(t) is odd, then X(jω) is imaginary and odd.

5.5.2 Linearity

The linearity property stems from the linearity of integration. The Fourier transform of a linear combination of functions is equal to the same linear combination of the Fourier transforms of those functions, that is,

\mathcal{F}\left[\sum_i a_i x_i(t)\right] = \sum_i a_i\,\mathcal{F}[x_i(t)] = \sum_i a_i X_i(j\omega) \qquad (5.12,\ 5.13)

which can easily be proved from the definition:

\mathcal{F}\left[\sum_i a_i x_i(t)\right] = \int_{-\infty}^{+\infty}\left[\sum_i a_i x_i(t)\right]e^{-j\omega t}\,dt
 = \sum_i \int_{-\infty}^{+\infty} a_i x_i(t)e^{-j\omega t}\,dt
 = \sum_i a_i \int_{-\infty}^{+\infty} x_i(t)e^{-j\omega t}\,dt
 = \sum_i a_i\,\mathcal{F}[x_i(t)]

5.5.3 Time Scaling

When a signal is compressed in the time domain by a factor a, it is expanded by the same factor in the frequency domain:

\mathcal{F}[x(at)] = \frac{1}{|a|}X\!\left(\frac{j\omega}{a}\right)

Let a be a positive constant.

\mathcal{F}[x(at)] = \int_{-\infty}^{+\infty} x(at)e^{-j\omega t}\,dt

Replacing at with u we have

\mathcal{F}[x(at)] = \int_{-\infty}^{+\infty} x(u)e^{-j\frac{\omega u}{a}}\,\frac{du}{a} = \frac{1}{a}\int_{-\infty}^{+\infty} x(u)e^{-j\frac{\omega}{a}u}\,du


\mathcal{F}[x(at)] = \frac{1}{a}X\!\left(\frac{j\omega}{a}\right) \qquad (5.14)

Now let us consider the case \mathcal{F}[x(-at)]:

\mathcal{F}[x(-at)] = \int_{-\infty}^{+\infty} x(-at)e^{-j\omega t}\,dt
 = \int_{+\infty}^{-\infty} x(u)e^{j\frac{\omega u}{a}}\,\frac{du}{-a}
 = \frac{1}{a}\int_{-\infty}^{+\infty} x(u)e^{j\frac{\omega}{a}u}\,du
 = \frac{1}{a}\int_{-\infty}^{+\infty} x(u)e^{-j\left(-\frac{\omega}{a}\right)u}\,du
 = \frac{1}{a}X\!\left(-\frac{j\omega}{a}\right) \qquad (5.15)

Removing the constraint a > 0, we can combine Eqns. 5.14 and 5.15 to get

\mathcal{F}[x(at)] = \frac{1}{|a|}X\!\left(\frac{j\omega}{a}\right) \qquad (5.16)

Time Reversal

Setting a = −1 above flips x(t) about the time axis. By Eqn. 5.16 the transform of x(−t) is

\mathcal{F}[x(-t)] = X(-j\omega)

which, for real x(t), is also the conjugate of the transform:

\mathcal{F}[x(-t)] = X(-j\omega) = X^{*}(j\omega) \qquad (5.17)

We conclude that by flipping x(t) about the time axis, we flip X(jω) about the jω axis.

5.5.4 Time Shifting

Shifting a function in time by t_0 amounts to a phase shift of −ωt_0 at every frequency. Let us denote the phase spectrum of X(jω) by θ(ω).

\mathcal{F}[x(t-t_0)] = \int_{-\infty}^{+\infty} x(t-t_0)e^{-j\omega t}\,dt

Replacing t − t_0 with u, so that u = t − t_0, dt = du and t = u + t_0,

\mathcal{F}[x(t-t_0)] = \int_{-\infty}^{+\infty} x(u)e^{-j\omega(u+t_0)}\,du = e^{-j\omega t_0}\int_{-\infty}^{+\infty} x(u)e^{-j\omega u}\,du = e^{-j\omega t_0}X(j\omega) \qquad (5.18)

This can be rewritten to exhibit the phase shift:

\mathcal{F}[x(t-t_0)] = |X(j\omega)|\exp\left(j\left[\arg[X(j\omega)] - \omega t_0\right]\right) \qquad (5.19)


5.5.5 Frequency Shifting (Amplitude Modulation)

Multiplying x(t) by the complex exponential e^{jω_0 t} shifts X(jω) right by ω_0:

\mathcal{F}\left[e^{j\omega_0 t}x(t)\right] = \int_{-\infty}^{+\infty} e^{j\omega_0 t}x(t)e^{-j\omega t}\,dt = \int_{-\infty}^{+\infty} x(t)e^{-j(\omega-\omega_0)t}\,dt = X[j(\omega-\omega_0)] \qquad (5.20)

This is a very important result and is the basis of amplitude modulation in communication. A sinusoidal signal is amplitude-modulated when multiplied by the time signal x(t); the frequency spectrum of x(t) is shifted left and right when x(t) is multiplied by cos ω_0 t. In Section 5.9.4 we give examples of the frequency-shifting property.

\mathcal{F}[x(t)\cos\omega_0 t] = \frac{1}{2}X[j(\omega+\omega_0)] + \frac{1}{2}X[j(\omega-\omega_0)] \qquad (5.21)
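A sampled illustration of Eqn. 5.21 (our own sketch; the test signal, sampling rate and carrier frequency are arbitrary choices) shows the spectrum of a lowpass signal reappearing around the carrier after multiplication by a cosine:

// Amplitude modulation shifts the spectrum to +/- f0.
N  = 2048;  fs = 1000;                 // samples and sampling rate (Hz)
t  = (0:N-1)/fs;
x  = exp(-20*t);                       // a lowpass-like test signal
f0 = 100;                              // carrier frequency
y  = x .* cos(2*%pi*f0*t);
X  = abs(fft(x));   Y = abs(fft(y));
f  = (0:N-1)*fs/N;
plot(f(1:N/2), X(1:N/2), f(1:N/2), Y(1:N/2));   // Y peaks near f0 = 100 Hz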

5.5.6 Differentiation with Respect to Time

Differentiation in time is analogous to the differentiation property of the Laplace transform; in fact, replacing s with jω one readily obtains

\mathcal{F}\left[\frac{dx(t)}{dt}\right] = j\omega X(j\omega)

To obtain this property from the definition, we differentiate the synthesis equation with respect to time:

x(t) = \frac{1}{2\pi}\int_{-\infty}^{+\infty} X(j\omega)e^{j\omega t}\,d\omega

\frac{dx(t)}{dt} = \frac{1}{2\pi}\int_{-\infty}^{+\infty} X(j\omega)\frac{d}{dt}e^{j\omega t}\,d\omega = \frac{1}{2\pi}\int_{-\infty}^{+\infty} j\omega X(j\omega)e^{j\omega t}\,d\omega

The right-hand side is the synthesis equation of jωX(jω); hence dx(t)/dt and jωX(jω) are a transform pair, which is the desired relation. This result can be generalized to the n-th derivative:

\mathcal{F}\left[\frac{d^{n}x(t)}{dt^{n}}\right] = (j\omega)^{n}X(j\omega)

5.5.7 Integration with Respect to Time

Let y(t) = ỹ(t) + c, where ỹ(t) is a function with zero average, c is a constant, and x(t) = y′(t). Since differentiation removes the constant term c, x(t) = y′(t) = ỹ′(t). Using the differentiation property we have

X(j\omega) = j\omega Y(j\omega) = j\omega\tilde{Y}(j\omega)

If c = 0, then Y(jω) and Ỹ(jω) are identical and equal to X(jω)/(jω). If, on the other hand, c ≠ 0, then there must be a difference between Y(jω) and Ỹ(jω). This is a situation where the Dirichlet convergence conditions are not met, that is, we cannot find the Fourier transform of y(t) in the ordinary sense. The problem can be worked around by allowing impulse functions in the solution. Remember that u(t) is neither square integrable nor absolutely integrable, so strictly speaking its Fourier transform should not exist; however, by introducing an impulse function into the transform, we can define a transform for the unit step function. Here we state without proof that the Fourier transform of u(t) is given by


U(j\omega) = \mathcal{F}[u(t)] = \frac{1}{j\omega} + \pi\delta(\omega) \qquad (5.22)

y(t), the integral of x(t) with respect to time, can be restated as the convolution of x(t) with u(t):

y(t) = \int_{-\infty}^{t} x(\tau)\,d\tau = \int_{-\infty}^{+\infty} x(\tau)u(t-\tau)\,d\tau

Now we can invoke the convolution property:

Y(j\omega) = X(j\omega)U(j\omega) = X(j\omega)\left[\frac{1}{j\omega} + \pi\delta(\omega)\right]

Since δ(ω) = 0 for ω ≠ 0, X(jω)δ(ω) = X(0)δ(ω), and we obtain the Fourier transform of the integral as

\mathcal{F}\left[\int_{-\infty}^{t} x(\tau)\,d\tau\right] = \frac{X(j\omega)}{j\omega} + \pi X(0)\delta(\omega) \qquad (5.23)

5.5.8 Duality

If x(t) and X(jω) are a Fourier transform pair, then X(t) and 2πx(−ω) are also a Fourier transform pair. Known as the duality property, this symmetry enables us to use already known transforms with the roles of the time and frequency domains exchanged. We know that

x(t) = \frac{1}{2\pi}\int_{-\infty}^{+\infty} X(j\omega)e^{j\omega t}\,d\omega

Hence

2\pi x(t) = \int_{-\infty}^{+\infty} X(j\omega)e^{j\omega t}\,d\omega,\qquad
2\pi x(-t) = \int_{-\infty}^{+\infty} X(j\omega)e^{-j\omega t}\,d\omega

Interchanging ω and t in the last expression yields

2\pi x(-\omega) = \int_{-\infty}^{+\infty} X(t)e^{-j\omega t}\,dt = \mathcal{F}[X(t)] = \mathcal{F}\left\{\mathcal{F}[x(t)]\right\}

Example 29. If x(t) = e^{-|t|} then X(jω) = 2/(ω² + 1). By the duality property, if X(t) = 2/(t² + 1) then its Fourier transform is 2πx(−ω) = 2πe^{-|ω|}.

5.5.9 Convolution

Convolving two functions in the time domain results in their product in the frequency domain. Assume that \mathcal{F}[x(t)] = X(jω), \mathcal{F}[h(t)] = H(jω) and y(t) = x(t) ∗ h(t); then in the frequency domain Y(jω) = X(jω)H(jω).

Remember that

x(t)*h(t) = \int_{-\infty}^{+\infty} x(\tau)h(t-\tau)\,d\tau \qquad (5.24)

Then

Y(j\omega) = \mathcal{F}[x(t)*h(t)] = \int_{-\infty}^{+\infty}\left(\int_{-\infty}^{+\infty} x(\tau)h(t-\tau)\,d\tau\right)e^{-j\omega t}\,dt

Interchanging the order of integration yields

Y(j\omega) = \int_{-\infty}^{+\infty} x(\tau)\left[\int_{-\infty}^{+\infty} h(t-\tau)e^{-j\omega t}\,dt\right]d\tau

Invoking the time-shift property (Eqn. 5.18) we obtain

Y(j\omega) = \int_{-\infty}^{+\infty} x(\tau)e^{-j\omega\tau}H(j\omega)\,d\tau = X(j\omega)H(j\omega) \qquad (5.25)

5.5.10 Multiplication in the Time Domain

Just as convolution in the time domain results in multiplication in the frequency domain, convolution in the frequency domain results in multiplication in the time domain. This is a direct consequence of the duality principle and can be stated as

x(t)y(t) \longleftrightarrow \frac{1}{2\pi}\left[X(j\omega)*Y(j\omega)\right]

To prove this property we use X(jf) and Y(jf) rather than X(jω) and Y(jω); furthermore we omit the j in front of the frequency to avoid nested parentheses inside brackets. Using X(jf) and Y(jf) rids us of the 1/2π factor multiplying the integrals. In Section 4.2 we showed that

\frac{1}{2\pi}\int_{-\infty}^{+\infty} X(\omega)e^{j\omega t}\,d\omega = \int_{-\infty}^{+\infty} X(f)e^{j2\pi f t}\,df

Exploiting this fact, we write the convolution of X(f) and Y(f):

\int_{-\infty}^{+\infty} X(\varphi)Y(f-\varphi)\,d\varphi

whose inverse Fourier transform

\int_{-\infty}^{+\infty}\left[\int_{-\infty}^{+\infty} X(\varphi)Y(f-\varphi)\,d\varphi\right]e^{j2\pi f t}\,df

yields, after interchanging the order of integration,

\int_{-\infty}^{+\infty} X(\varphi)\left[\int_{-\infty}^{+\infty} Y(f-\varphi)e^{j2\pi f t}\,df\right]d\varphi
 = \int_{-\infty}^{+\infty} X(\varphi)e^{j2\pi\varphi t}y(t)\,d\varphi
 = y(t)\int_{-\infty}^{+\infty} X(\varphi)e^{j2\pi\varphi t}\,d\varphi
 = x(t)y(t)

Thus x(t)y(t) and X(jf) ∗ Y(jf) are a transform pair. Reverting to angular frequency,

x(t)y(t) \longleftrightarrow \frac{1}{2\pi}\left[X(j\omega)*Y(j\omega)\right] \qquad (5.26)

Example 30. Find the convolution of X(jω) and δ[j(ω − ω_0)] and the corresponding time-domain function.

X(j\omega)*\delta[j(\omega-\omega_0)] = \int_{-\infty}^{+\infty}\delta[j(\Omega-\omega_0)]\,X[j(\omega-\Omega)]\,d\Omega = X[j(\omega-\omega_0)]

We see that convolution with an impulse function shifts the Fourier transform of the function to the frequency where the impulse occurs. Using the frequency-shift property we obtain the inverse transform:

\mathcal{F}^{-1}\left\{X[j(\omega-\omega_0)]\right\} = \frac{1}{2\pi}\int_{-\infty}^{+\infty} X[j(\omega-\omega_0)]e^{j\omega t}\,d\omega = e^{j\omega_0 t}x(t)

Example 31. The pulse waveform p(t) in Example ... is multiplied by a sinusoidal signal x(t) = sin(2π·40t). The pulse occurs between 400 ms ≤ t ≤ 600 ms.


Figure 5.4: Modulation of a sinusoidal wave by a pulse waveform. Modulation is the product of two signals in the time domain. (a), (c) The pulse waveform and its magnitude spectrum; (b), (d) the modulated pulse waveform and its magnitude spectrum.

p(t) = \begin{cases} 1 & 400\ \text{ms} \le t \le 600\ \text{ms} \\ 0 & \text{elsewhere} \end{cases},\qquad
x(t) = \sin(2\pi\cdot 40t)

For the Fourier transforms we use the previous results:

P(jf) = 200\,\mathrm{sinc}(0.2f)\ \text{(in ms)},\qquad
X(jf) = \frac{1}{2j}\left[\delta(f-40) - \delta(f+40)\right]

The magnitude spectrum of p(t) is a sinc function that goes through zero at multiples of 1/(0.2 s) = 5 Hz. Since taking the magnitude of the transform rectifies the negative portions of the sinc function, those portions appear above the zero level. The Fourier transform of the modulated pulse waveform is the convolution of X(jf) and P(jf). The peak of the sinc function equals 1·(600 − 400) = 200 ms; the modulated signal peaks have half this value (100 ms) and they are 40 Hz away from DC.

5.5.11 Parseval's Relation

Parseval's relation concerns the total energy a signal contains; the theorem relates the time-domain and frequency-domain expressions for energy. In Chapter 4 we derived Parseval's theorem in terms of power because of periodicity. In the following development we follow the same reasoning as before, except for the energy/power distinction. Aperiodic signals are energy signals if their Fourier transforms exist and if they are square integrable; square integrability was discussed in Chapter 4 under the convergence topic, and the same considerations hold for aperiodic as well as periodic signals. From circuit theory the energy of a signal is defined as


E = \int_{-\infty}^{+\infty} |x(t)|^2\,dt \qquad (5.27)

which can be rewritten as

E = \int_{-\infty}^{+\infty} x(t)x^{*}(t)\,dt

where we allow the signal to be complex. Let us substitute the synthesis equation for x(t) and x*(t):

E = \int_{-\infty}^{+\infty}\left[\frac{1}{2\pi}\int_{-\infty}^{+\infty} X(j\omega)e^{j\omega t}\,d\omega\right]\left[\frac{1}{2\pi}\int_{-\infty}^{+\infty} X(j\xi)e^{j\xi t}\,d\xi\right]^{*}dt
 = \left(\frac{1}{2\pi}\right)^{2}\int_{-\infty}^{+\infty}\left[\int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty} X(j\omega)X^{*}(j\xi)e^{j(\omega-\xi)t}\,d\omega\,d\xi\right]dt

Interchanging the order of integration we get

E = \left(\frac{1}{2\pi}\right)^{2}\int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty} X(j\omega)X^{*}(j\xi)\left[\int_{-\infty}^{+\infty} e^{-j(\xi-\omega)t}\,dt\right]d\omega\,d\xi

We recognize \int_{-\infty}^{+\infty} e^{-j(\xi-\omega)t}\,dt as 2πδ(ξ − ω). Therefore

E = \left(\frac{1}{2\pi}\right)^{2}\int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty} X(j\omega)X^{*}(j\xi)\,2\pi\delta(\xi-\omega)\,d\omega\,d\xi
 = \frac{1}{2\pi}\int_{-\infty}^{+\infty} X(j\omega)\left[\int_{-\infty}^{+\infty} X^{*}(j\xi)\delta(\xi-\omega)\,d\xi\right]d\omega

Using the sifting property of the impulse function in the inner integral we obtain

E = \frac{1}{2\pi}\int_{-\infty}^{+\infty} X(j\omega)X^{*}(j\omega)\,d\omega = \frac{1}{2\pi}\int_{-\infty}^{+\infty} |X(j\omega)|^2\,d\omega

Hence

\int_{-\infty}^{+\infty} |x(t)|^2\,dt = \frac{1}{2\pi}\int_{-\infty}^{+\infty} |X(j\omega)|^2\,d\omega \qquad (5.28)

A useful variant of this relation results if we substitute ω = 2πf and use cyclic frequency instead of angular frequency:

\int_{-\infty}^{+\infty} |x(t)|^2\,dt = \int_{-\infty}^{+\infty} |X(jf)|^2\,df

The area under the |x(t)|² curve over all t is equal to the area under the |X(jf)|² curve over all f.
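In sampled form the relation can be checked with the DFT. The SciLab sketch below (our own; the decaying test signal is chosen so that truncation is negligible) compares the two sides of Eqn. 5.28 computed numerically:

// Energy computed in the time domain and from samples of X(jf).
N  = 4096;  dt = 0.001;
t  = (0:N-1)*dt;
x  = exp(-5*t) .* sin(2*%pi*30*t);     // an energy signal, ~0 by t = 4 s
E_time = sum(x.^2)*dt;                 // integral of |x(t)|^2 dt
X  = fft(x)*dt;                        // samples of X(jf)
df = 1/(N*dt);
E_freq = sum(abs(X).^2)*df;            // integral of |X(jf)|^2 df
disp([E_time E_freq]);                 // the two energies agree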


5.6 Fourier Transform versus Laplace Transform

As mentioned at the beginning of the chapter, the Fourier transform ignores the initial conditions of an LTI system at t = 0. This is in contrast with the unilateral Laplace transform, whose concern is to find the total solution of an LTI system with possibly nonzero initial conditions and nonzero excitation. The bilateral Laplace transform, on the other hand, spans time from minus infinity to plus infinity with zero initial state at minus infinity; if we split time into [−∞, 0⁻) and (0⁻, +∞], in effect we allow an initial state to build up before t = 0⁻. Since the initial state is not of concern for the Fourier transform, we examine the bilateral Laplace transform of a function and check whether the jω axis lies in the ROC. If the bilateral Laplace transform exists on the jω axis, then we set s = jω and obtain the Fourier transform from the bilateral Laplace transform. This comes in very handy once we know the Laplace transform and saves us time. Note that by adopting the bilateral Laplace transform we also get rid of the s^i x^{(n-i)}(0^-) terms for derivatives.

Assuming H(s) is the system function of an LTI system and the ROC of H(s) includes the jω axis, a useful graphical technique can be used to investigate the Fourier transform H(jω). Assume H(s) is a rational function given by Eqn. 3.11, i.e.,

H(s) = A\cdot\frac{\prod_{i=1}^{m}(s-z_i)}{\prod_{i=1}^{n}(s-p_i)}

whose evaluation on the jω axis is

H(j\omega) = A\cdot\frac{\prod_{i=1}^{m}(j\omega-z_i)}{\prod_{i=1}^{n}(j\omega-p_i)}

This is a complex-valued function having a magnitude function and a phase function,

H(j\omega) = |H(j\omega)|\,e^{\,j\arg H(j\omega)}

where

|H(j\omega)| = |A|\cdot\frac{\prod_{i=1}^{m}|j\omega-z_i|}{\prod_{i=1}^{n}|j\omega-p_i|} \qquad (5.29)

and

\arg H(j\omega) = \arg A + \sum_{i=1}^{m}\arg(j\omega-z_i) - \sum_{i=1}^{n}\arg(j\omega-p_i) \qquad (5.30)

Eqn. 5.29 and Eqn. 5.30 together suggest a vectorial analysis of H(jω), called the pole-zero diagram analysis method. The factors |jω − z_i| and |jω − p_i| are the magnitudes of the vectors drawn from the zeros z_i and the poles p_i to s = jω, and the terms arg(jω − z_i) and arg(jω − p_i) are likewise the arguments of those vectors. Evaluating these magnitudes and arguments from the pole-zero diagram can facilitate the analysis, sometimes simplify it, and often give us insight and inspiration into the behavior of the system.

Assume we have a system with a zero at s = 0, two poles at s = p_1 = −5 + j12 and s = p_2 = −5 − j12, and a gain factor A = −10. The pole-zero diagram of this system is shown in Fig. 5.5; the vectors drawn from the zero z_1 = 0 and the poles p_1, p_2 to s = jω are shown in red. The system function is

H(j\omega) = \frac{-10\,j\omega}{(j\omega+5-j12)(j\omega+5+j12)}

Thus, using the right triangles formed by these vectors, we can write

|H(j\omega)| = |-10|\cdot\frac{|j\omega-0|}{|j\omega-p_1|\,|j\omega-p_2|}
 = \frac{10\,\omega}{\sqrt{(\omega-12)^2+5^2}\,\sqrt{(\omega+12)^2+5^2}}
 = \frac{10\,\omega}{\sqrt{(\omega^2-24\omega+169)(\omega^2+24\omega+169)}}
 = \frac{10\,\omega}{\sqrt{(\omega^2+169)^2 - 576\,\omega^2}}


Figure 5.5: Finding frequency response by using pole-zero diagram.

\arg H(j\omega) = \arg(-10) + \arg(j\omega-0) - \arg(j\omega-p_1) - \arg(j\omega-p_2)
 = -\pi + \frac{\pi}{2} - \tan^{-1}\left(\frac{\omega-12}{5}\right) - \tan^{-1}\left(\frac{\omega+12}{5}\right)
 = -\left[\frac{\pi}{2} + \tan^{-1}\left(\frac{\omega-12}{5}\right) + \tan^{-1}\left(\frac{\omega+12}{5}\right)\right]

These formulas can be plotted to view the magnitude and phase spectra. At first sight there seems to be little insight to derive

from these results. However, assume that |Re[p_i]| ≪ |Im[p_i]|. This brings the poles close to the jω axis. Moreover, in the vicinity of the upper pole ω ≈ Im[p_1], |jω − p_2| ≈ 2ω and arg(jω − p_2) ≈ π/2. This gives us the insight and inspiration we seek. Thus

|H(j\omega)| \approx \frac{10\,\omega}{2\omega\,|\omega-12|} \approx \frac{5}{|\omega-12|}

and

\arg H(j\omega) = -\left[\frac{\pi}{2} + \tan^{-1}\left(\frac{\omega-12}{\alpha}\right) + \frac{\pi}{2}\right] \approx -\pi - \tan^{-1}\left(\frac{\omega-12}{\alpha}\right)

where α = −Re[p_i]. The method described here becomes the Bode plot method when we use 20 log|H(jω)| instead of |H(jω)| and make the frequency scale logarithmic. In Chapter 1, Example 1, we have already seen an example of a Bode plot; in Sec. 5.9.3 the Bode plot method is explored in more depth.
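The pole-zero evaluation is also convenient to do numerically. The short SciLab sketch below (our own) evaluates H(jω) of the example directly from its zero, poles and gain, and plots the magnitude and phase; the magnitude indeed peaks near ω = 12:

// Frequency response from pole-zero data: zero at 0, poles at -5 +/- j12, A = -10.
A  = -10;  z1 = 0;  p1 = -5 + %i*12;  p2 = -5 - %i*12;
w  = linspace(0.1, 30, 1000);
s  = %i*w;
H  = A*(s - z1) ./ ((s - p1).*(s - p2));
subplot(2,1,1); plot(w, abs(H));                     // peaks near w = 12
subplot(2,1,2); plot(w, atan(imag(H), real(H)));     // phase in radians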

5.7 Fourier Transform of Discrete Signals

A discrete impulse function δ[n] is defined as

\delta[n] = \begin{cases} 1 & n = 0 \\ 0 & n \ne 0 \end{cases} \qquad (5.31)


Figure 5.6: The unit circle z = e^{jω} on which the Fourier transform of a discrete signal is evaluated. With the ordinary Fourier transform all the points on the unit circle contribute to the transform. With the Discrete Fourier Transform, we use only the discrete frequencies ω_k = 2πk/N on the unit circle, where k is an integer between 0 and N−1.

A discrete function is defined at integer values of n. Let a discrete signal consisting of a sequence of numbers be x[n] = [..., a_{−2}, a_{−1}, a_0, a_1, a_2, ...]. x[n] can be written as a linear combination of shifted impulse functions:

x[n] = \sum_{k=-\infty}^{+\infty} a_k\,\delta[n-k] \qquad (5.32)

Similar to what we have done with continuous-time signals, we can transform x[n] into a complex z-domain representation. Although the z-transform is the subject of the next chapter, here we will be content with employing a unit-delay operator z^{−1} to transform a signal delayed by one sample into the z-domain. Thus we define

\mathcal{Z}\{\delta[n-1]\} = z^{-1}

Using this notation in Eqn. 5.32 we obtain the z-transform of x[n]:

X(z) = \mathcal{Z}\{x[n]\} = \mathcal{Z}\left\{\sum_{k=-\infty}^{+\infty} a_k\,\delta[n-k]\right\} \qquad (5.33)
     = \sum_{k=-\infty}^{+\infty} a_k z^{-k} \qquad (5.34)

One readily recognizes the resemblance of X(z) to the X(jω) we derived for the continuous-time Fourier transform. Indeed, the summation replaces the integration, and a_k, z and k stand for x(t), e^{jω} and t respectively. In this section we focus on the Fourier transform of the discrete signal x[n]. Recall that in order to transition from the Laplace transform to the Fourier transform we set s equal to jω, provided the function satisfies the Dirichlet conditions, with ω spanning frequencies from −∞ to +∞; there ω lay on a straight line, the jω axis. Here the Fourier transform of a discrete-time sequence is evaluated on the unit circle z = e^{jω}. ω = 0 and ω = π correspond to DC and to f_s/2, half the sampling frequency, respectively; these points are the discrete counterparts of ω = 0 and ω = +∞ of the continuous Fourier transform. Hence Eqn. 5.34 gives the Fourier transform of the discrete-time sequence x[n]:

X(e^{j\omega}) = \sum_{k=-\infty}^{+\infty} a_k e^{-j\omega k},\qquad -\pi \le \omega \le \pi \qquad (5.35)

Here we note that X(e^{j(\omega+2\pi)}) = X(e^{j\omega}e^{j2\pi}) = X(e^{j\omega}), which implies that \mathcal{F}\{x[n]\} is periodic with period 2π.

Example 32. Let x[n] = [1, α, α², α³, ...] with |α| < 1 (see Fig. 5.7). Find X(e^{jω}).


Figure 5.7: Decaying exponential sequence x[n] = \sum_{k=0}^{+\infty}\alpha^{k}\,\delta[n-k] with α = 0.5.


x[n] = \sum_{k=0}^{+\infty}\alpha^{k}\,\delta[n-k]

Hence

X(z) = \sum_{k=0}^{+\infty}\alpha^{k}z^{-k} = 1 + \alpha z^{-1} + \alpha^{2}z^{-2} + \dots = 1 + \alpha z^{-1}X(z)

X(z) = \frac{1}{1-\alpha z^{-1}}

Without elaboration we state that X(z) exists because |α| < 1.² By setting z = e^{jω} we get

X(e^{j\omega}) = \frac{1}{1-\alpha e^{-j\omega}}
 = \frac{1-\alpha\cos\omega}{1+\alpha^2-2\alpha\cos\omega} - j\,\frac{\alpha\sin\omega}{1+\alpha^2-2\alpha\cos\omega}

Specifically, for α = 0.5 we find

X(e^{j\omega}) = \frac{1-0.5\cos\omega}{1.25-\cos\omega} - j\,\frac{0.5\sin\omega}{1.25-\cos\omega}
 = \frac{1}{\sqrt{1.25-\cos\omega}}\exp\left[-j\arctan\left(\frac{0.5\sin\omega}{1-0.5\cos\omega}\right)\right]

Example 33. Let x[n] = [1, −α, α², −α³, ...] with |α| < 1. Find X(e^{jω}).

x[n] = \sum_{k=0}^{+\infty}(-\alpha)^{k}\,\delta[n-k]

Hence

X(z) = \sum_{k=0}^{+\infty}(-\alpha)^{k}z^{-k} = 1 - \alpha z^{-1} + \alpha^{2}z^{-2} - \dots = 1 - \alpha z^{-1}X(z)

X(z) = \frac{1}{1+\alpha z^{-1}}

Without elaboration we state that X(z) exists because |α| < 1. By setting z = e^{jω} we get

X(e^{j\omega}) = \frac{1}{1+\alpha e^{-j\omega}}
 = \frac{1+\alpha\cos\omega}{1+\alpha^2+2\alpha\cos\omega} + j\,\frac{\alpha\sin\omega}{1+\alpha^2+2\alpha\cos\omega}

Specifically, for α = 0.5 we find

X(e^{j\omega}) = \frac{1+0.5\cos\omega}{1.25+\cos\omega} + j\,\frac{0.5\sin\omega}{1.25+\cos\omega}
 = \frac{1}{\sqrt{1.25+\cos\omega}}\exp\left[\,j\arctan\left(\frac{0.5\sin\omega}{1+0.5\cos\omega}\right)\right]

The magnitude and phase of X(e^{jω}) are depicted in Fig. 5.8.
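Since |α| < 1, the series converges quickly, and the closed form can be checked against a truncated direct evaluation of Eqn. 5.35. A SciLab sketch (our own):

// Truncated sum of Eqn. 5.35 versus the closed form 1/(1 + alpha*exp(-jw)).
alpha = 0.5;
w  = linspace(-%pi, %pi, 401);
K  = 60;                                   // alpha^K is negligible
Xs = zeros(w);
for k = 0:K
    Xs = Xs + (-alpha)^k * exp(-%i*k*w);   // partial sum of the series
end
Xc = 1 ./ (1 + alpha*exp(-%i*w));          // closed form derived above
disp(max(abs(Xs - Xc)));                   // difference ~ alpha^(K+1)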


Figure 5.8: Frequency response of the decaying exponential sequence x[n] = \sum_{k=0}^{+\infty}(-\alpha)^{k}\,\delta[n-k] with α = 0.5.


Figure 5.9: A time signal whose frequency content changes with time.

5.7.1 Short-Term Fourier Transform

The defining integral of the Fourier transform covers the entire time domain (−∞ ≤ t ≤ +∞). As long as the signal does not "change" along the way, this is fine: by inspecting the Fourier transform of the signal we can tell which frequencies it is made of. On the other hand, should the signal change locally, we are at a loss to predict where those changes have occurred. We can cite several examples of such signals: the sound of an approaching and passing ambulance, a speech signal, an arrhythmic ECG signal, the chirp of a singing bird, radar and sonar echoes, etc. Fig. 5.9 shows a signal that contains sin ωt, sin 2ωt and sin 4ωt; however, the combination of frequencies changes in time. For 0 ≤ n ≤ 255 it consists of sin ωt + sin 2ωt, for 256 ≤ n ≤ 511 of sin ωt + sin 4ωt, and for 512 ≤ n ≤ 767 of sin 2ωt + sin 4ωt. A typical Fourier transform analysis would yield all three sinusoids with their respective amplitudes, but we would not know where these frequencies occurred. This is a typical case where the need for Joint Time-Frequency Analysis (JTFA) arises. Before better mathematical tools such as wavelet analysis became available, the Fourier transform was the sole tool for localizing frequencies in time.

To incorporate time information into the Fourier Transform, the entire signal is multiplied by a time function whose position in time is known in advance and can be shifted as desired. This time function is called a window function. The multiplication passes the portion of the signal between two points in time, and stops the portions beyond the window. Calling the signal x(t) and the window function w(t), we generate the product signal y(t):

y(t) = x(t)w(t) (5.36)

When Fourier analysis is then run on y(t), the resulting spectrum Y(jω) presents a spectral image of x(t) within the "window". Since the position of the window in time is well known, the frequency information about the signal is confined to the position the window occupies in time. Figs. 5.10a, b show the signal after multiplication by a rectangular window of length N = 256. In (a) the window is centered at n = 256 and in (b) the window is shifted to n = 384. The respective magnitude Fourier Transforms are depicted under the windowed signals. While the signal contains ω and 4ω in (a), it contains ω, 2ω and 4ω in (b).

Taking the Fourier Transform of Eqn. 5.36 we obtain

Y(jω) = (1/2π) [ X(jω) ∗ W(jω) ]        (5.37)

We see here that windowing, besides localizing the frequency information in time, smears that information somewhat because of the convolution in the frequency domain. Thus the original X(jω) within the window location is never known exactly, but can only be estimated from Y(jω) and the spectral characteristics of W(jω). This is a small price we must pay to localize the frequency information of the signal.
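As a concrete illustration, the sketch below (an assumption-laden reconstruction: NumPy, a base frequency ω = 2π/64 and the three-segment signal of Fig. 5.9 are all choices of this sketch, not taken from the text) windows the signal with a length-256 rectangular window at two positions and lists the strongest DFT bins in each slot.

```python
import numpy as np

# Assumed reconstruction of the test signal of Fig. 5.9: three 256-sample segments
n = np.arange(768)
w0 = 2*np.pi/64                                   # assumed base frequency (DFT bin 4 for N = 256)
x = np.concatenate([np.sin(w0*n[:256])    + np.sin(2*w0*n[:256]),
                    np.sin(w0*n[256:512]) + np.sin(4*w0*n[256:512]),
                    np.sin(2*w0*n[512:])  + np.sin(4*w0*n[512:])])

def stft_slot(x, start, N=256):
    """y = x*w with a rectangular window starting at `start` (Eqn. 5.36), then the DFT of the slot."""
    return np.abs(np.fft.fft(x[start:start+N]))

for start in (256, 384):
    mag = stft_slot(x, start)
    top = np.sort(np.argsort(mag[:128])[-3:])     # strongest positive-frequency bins
    print(start, top)                             # multiples of bin 4 present in that slot
```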

There are several windows that are in wide use; rectangular, Hamming, Hanning, Bartlett and Blackman are some of them. Table 5.1 gives the mathematical definitions of these windows, and Fig. 5.11 and Fig. 5.12 show them in the time domain and frequency domain respectively.


Figure 5.10: STFT with a rectangular window on the signal of Fig. 5.9. (a) The window of length 256 starts at n = 256, (b) at n = 384.


Table 5.1: Some window functions. Window sampling instances 0 ≤ n ≤ N−1, where N is the length of the window.

Name         Function
Rectangular  1
Hamming      0.54 − 0.46 cos(2πn/(N−1))
Hanning      0.5 [1 − cos(2πn/(N−1))]
Bartlett     2n/(N−1) for 0 ≤ n < (N−1)/2;  2 − 2n/(N−1) for (N−1)/2 ≤ n ≤ N−1
Blackman     0.42 − 0.5 cos(2πn/(N−1)) + 0.08 cos(4πn/(N−1))

Figure 5.11: Some popular window functions: Rectangular, Hanning, Bartlett and Blackman.

Because of the convolution in the frequency domain (Eqn. 5.37), choosing a particular window function has a profound effect on the observed spectrum Y(jω). Some windows suppress frequencies close to the center more than others because of the breadth of the main lobe; compare the Fourier Transforms of the above-mentioned windows in Fig. 5.12. For example, the signal used in this section is shown in Fig. 5.13a multiplied by a rectangular window and in Fig. 5.13b by a Blackman window. Notice that the resolution performance of the Blackman window in this example is inferior to that of the rectangular window.

The STFT slots can be aggregated to generate useful JTFA graphs. A waterfall graph shows the STFT analyses in a 3-D plot where the horizontal axis is frequency, the vertical axis is magnitude (or phase) and the third axis is time. Another useful technique is a 2-D intensity graph where the horizontal axis, the vertical axis and the pixel value represent time, frequency and the spectrum (magnitude or phase) respectively. This graph is commonly known as the signal spectrogram. The spectrogram of the signal we use in this section is illustrated in Fig. 5.14. To generate this spectrum we have used a rectangular window with a width of 256. At every 16 samples we start a new Fourier Transform computation, resulting in an overlap of 240 samples. Note the localization of the frequencies ω, 2ω, 4ω and the transition from one frequency to another in the vicinity of n = 256 and n = 512.
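A minimal spectrogram computation along these lines might look as follows (again a sketch under the same assumed test signal; the window length, hop size and the use of NumPy are choices of the sketch).

```python
import numpy as np

# Assumed test signal of Fig. 5.9 (same construction as in the earlier sketch)
n = np.arange(768); w0 = 2*np.pi/64
x = np.concatenate([np.sin(w0*n[:256])    + np.sin(2*w0*n[:256]),
                    np.sin(w0*n[256:512]) + np.sin(4*w0*n[256:512]),
                    np.sin(2*w0*n[512:])  + np.sin(4*w0*n[512:])])

def spectrogram(x, N=256, hop=16):
    """Stack the magnitude DFTs of length-N rectangular-window slots taken every `hop` samples."""
    starts = range(0, len(x) - N + 1, hop)
    return np.array([np.abs(np.fft.fft(x[s:s+N]))[:N//2] for s in starts])

S = spectrogram(x)           # rows: window position (time), columns: frequency bin
print(S.shape)               # (33, 128) for 768 samples, N = 256, hop = 16
print(S.argmax(axis=1))      # strongest bin per slot; it changes as the window crosses n = 256, 512
```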

Another pitfall to watch for is the uncertainty inherent in JTFA. As one makes the window function narrower for more precise time localization, one loses precision in the frequency information. This trade-off is like a seesaw and is analogous to Heisenberg's uncertainty principle in physics. If the time location and the frequency of a signal are known with precisions ∆t and ∆f, then we have

∆t ∆f ≥ 1/(4π)        (5.38)

5.7.2 The Discrete Fourier Transform

It is much easier to compute the Fourier Transform of signals in the discrete time domain than in the continuous time domain. Analog signal processing hardware to compute the Fourier Transform is less flexible, prone to aging and drift problems, more difficult to adapt to new needs (more difficult to program), and the analysis result is more difficult to store in analog form compared to digital signal processing. Moreover, once converted to digital form, the signal is less affected by noise. Another difficulty with the continuous Fourier Transform is the implementation of the Inverse Fourier Transform. We


Figure 5.12: Fourier Transform of the rectangular, Hanning, Bartlett and Blackman windows.

believe that computation of (1/2π) ∫_{−∞}^{+∞} X(ω) e^{jωt} dω with analog hardware should be much more challenging than computing ∫_{−∞}^{+∞} x(t) e^{−jωt} dt. As will be apparent shortly, there is no difference between the levels of difficulty when the Inverse Fourier Transform and the Fourier Transform are computed in digital hardware. Moreover, dedicated algorithms to speed up the computation of the Discrete Fourier Transform (DFT), namely the Fast Fourier Transform (FFT), are available for signals consisting of 2^n samples where n is an integer. These factors altogether motivate performing the Fourier Transform in the discrete time domain. Fig. 5.23 shows a test instrument which employs the DFT to display the Fourier Transform of a signal.

Fourier Transform of a continuous signal x(t) as defined by Eqn. 5.3 is

X(ω) = ∫_{−∞}^{+∞} x(t) e^{−jωt} dt

In the discrete time domain the signal becomes a sequence of samples of the continuous signal taken regularly at t = nTs, that is

x[n] = x(nTs)

where Ts is the sampling period. In the continuous case the range of time is [−∞, +∞]. In discrete time the range is limited to [0, N−1] if we take N samples. The frequency too becomes discrete, spanning the range from 0 to (2π/N)(N−1), two adjacent frequencies being separated by 2π/N. Hence the discrete version of the Fourier Transform takes the form of a finite sum

X[k] = ∑_{n=0}^{N−1} x[n] e^{−j2πkn/N},   0 ≤ k ≤ N−1        (5.39)

where k and n are the frequency and time indices respectively. Just as x[n] is a short-hand notation for x(nTs), X[k] is the short form of X(2πk/N). X[0] is the DC term, the sum of the N samples (N times their average). Should we denote 2πk/N by Ω, then Ω = 2π corresponds to the sampling frequency ωs = 2π/Ts in the continuous-time domain. Properties of the continuous transform hold for the discrete transform. For instance, for a real sequence x[n] the even and odd symmetry properties of the continuous Fourier Transform apply to the DFT too. That is, Re{X[k]} and |X[k]| are even-symmetric whereas Im{X[k]} and arg{X[k]} are odd-symmetric.
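The finite sum in Eqn. 5.39 can be evaluated directly. The sketch below (assuming NumPy; the test sequence is an arbitrary choice) implements it, checks the result against the library FFT and verifies the symmetry properties just mentioned.

```python
import numpy as np

def dft(x):
    """Direct evaluation of Eqn. 5.39: X[k] = sum_n x[n] exp(-j 2*pi*k*n/N)."""
    N = len(x)
    n = np.arange(N)
    k = n.reshape(-1, 1)
    return (x * np.exp(-2j*np.pi*k*n/N)).sum(axis=1)

x = 0.25 + np.sin(np.pi*np.arange(64)/8)                # an arbitrary real sequence
X = dft(x)

print(np.allclose(X, np.fft.fft(x)))                    # matches the library FFT
print(np.allclose(np.abs(X[1:]), np.abs(X[1:][::-1])))  # |X[k]| is even-symmetric about k = N/2
print(np.isclose(X[0].real, x.sum()))                   # X[0] is the sum, N times the average
```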

Given the discrete Fourier Transform X [k] of a signal, we can recover the discrete time signal using the inverse DFT:

x[n] = (1/N) ∑_{k=0}^{N−1} X[k] e^{j2πkn/N}        (5.40)

This is clear if we expand Eqn. 5.40 and change order of summation


Figure 5.13: Effects of window on observed STFT. Signal is multiplied by (a) Rectangular window, (b) Blackman window.


Figure 5.14: Spectrogram of the signal used in this section with a rectangular window of length N = 256 and a new transform computed every 16 samples (an overlap of 240 samples).


(1/N) ∑_{k=0}^{N−1} X[k] e^{j2πkn/N}
  = (1/N) ∑_{k=0}^{N−1} [ ∑_{m=0}^{N−1} x[m] e^{−j2πkm/N} ] e^{j2πkn/N}
  = (1/N) ∑_{m=0}^{N−1} ∑_{k=0}^{N−1} x[m] e^{j2πkn/N} e^{−j2πkm/N}
  = (1/N) ∑_{m=0}^{N−1} x[m] ∑_{k=0}^{N−1} exp[ j2πk(n−m)/N ]

We immediately recognize exp[j2πk(n−m)/N] as an N-th root of 1, and from the properties of the N-th roots of unity we know that

∑_{k=0}^{N−1} exp[ j2πk(n−m)/N ] = N for n = m, and 0 otherwise.

Thus

(1/N) ∑_{k=0}^{N−1} X[k] e^{j2πkn/N} = (1/N) ∑_{m=0}^{N−1} x[m] · N δ[n−m]
  = (1/N) · x[n] · N
  = x[n]
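The orthogonality argument above is easy to verify numerically. Below is a minimal sketch (assuming NumPy; the random test sequence is arbitrary) that applies the forward DFT and then the directly-coded inverse of Eqn. 5.40 to recover the original samples.

```python
import numpy as np

def idft(X):
    """Direct evaluation of Eqn. 5.40: x[n] = (1/N) sum_k X[k] exp(+j 2*pi*k*n/N)."""
    N = len(X)
    k = np.arange(N)
    n = k.reshape(-1, 1)
    return (X * np.exp(2j*np.pi*k*n/N)).sum(axis=1) / N

rng = np.random.default_rng(0)
x = rng.standard_normal(128)          # an arbitrary real sequence
X = np.fft.fft(x)                     # forward DFT (Eqn. 5.39)
x_rec = idft(X)

print(np.allclose(x_rec.real, x))     # True: orthogonality of the N-th roots of unity
print(np.max(np.abs(x_rec.imag)))     # imaginary residue is at round-off level
```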

In Fig. 5.15 a discrete-time signal x[n] = sin(πn/64) + 0.5 sin(πn/4) is shown. The first component has an amplitude of unity and a discrete frequency of 1/128 cycles/sample = 2π/128 rad/sample. The second component, with an amplitude of 0.5, is inserted as noise and has a frequency of 1/8 cycles/sample = 2π/8 rad/sample. We run a DFT on x[n]. Because the DFT is periodic with a period of 2π, the display of the discrete Fourier Transform repeats itself after Ω = 2π. The magnitude and phase of the DFT exhibit the even and odd symmetry just mentioned. On the horizontal axis of the DFT display k = 0 is the DC term; k = 256 (Ω = 2π) corresponds to the sampling frequency fs = 1/Ts and k = 128 (Ω = π) corresponds to the Nyquist frequency fN = fs/2. Thus we have the following relations:

X[0] = X*[256]
X[1] = X*[255]
⋯
X[m] = X*[256−m]

The DFT display is symmetric about the Nyquist frequency fN. We can exploit this symmetry property to move X[m] = X*[256−m] to the center of the display. Then k = 128 is interpreted as f = 0, the frequencies for which k > 128 are positive, and those with k < 128 are negative.

As discussed in Section 5.9.3, ideal filters, also known as brickwall filters, are noncausal and can not be realized in real time. In non-real-time operations, however, causality is not required and ideal filters are possible. One can remove the noise from the noisy sine wave of Fig. 5.15 by multiplying the DFT by an ideal low-pass filter or a bandstop filter (notch filter). The 0.5 V noise is represented by the spikes X[31] and X[225]. Low-pass filtering or bandstop filtering makes X[31] = X[225] = 0. Now taking the inverse DFT eliminates the noise and yields the desired sine wave. Later in this chapter we see filtering applied to 2-D images.
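A sketch of this ideal, non-real-time noise removal is given below (assuming NumPy and a record length of N = 256; with this choice the 1/8 cycles/sample component falls in bins N/8 and N − N/8, which the sketch zeroes directly).

```python
import numpy as np

N = 256                                             # assumed record length
n = np.arange(N)
x = np.sin(np.pi*n/64) + 0.5*np.sin(np.pi*n/4)      # desired sine plus the 0.5 V "noise"

X = np.fft.fft(x)
k_noise = N // 8                                    # 1/8 cycles/sample -> bin N/8
X[k_noise] = 0                                      # ideal notch: zero the noise bin ...
X[N - k_noise] = 0                                  # ... and its mirror image

x_clean = np.fft.ifft(X).real
print(np.allclose(x_clean, np.sin(np.pi*n/64)))     # True: the noise is removed exactly
```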

5.8 Two Dimensional Fourier Transform

So far we have dealt with the Fourier Transform of one-dimensional signals, which are typically functions of time. In image processing, we encounter signals that are two-dimensional or three-dimensional functions of the space variables x, y and z. We can extend the Fourier Transform concepts we studied in this chapter to multidimensional signals. 2-D and 3-D signals can be continuous or discrete. 3-D signals are images of 3-D objects while 2-D signals are pictures, or projections


Figure 5.15: 1-D Discrete Fourier Transform. (a) Signal, (b) Normal FFT magnitude, (c) FFT phase.

Figure 5.16: Due to symmetry, f = 0 can be brought to the center of the DFT display. This way frequencies can be interpreted as positive and negative. (a) Normal and (b) centered view of the DFT magnitude spectrum.

of 3-D objects on a particular plane. Pictures can be considered to be planar slices of 3-D images, and 2-D images can be stacked on top of each other to create 3-D images. Just as the 1-D Fourier Transform enables us to view the signal in a different domain, which we called the frequency domain or spectrum, the multidimensional transform too represents the signal in a 2-D or 3-D frequency space. In the photographic-film era pictures used to be considered continuous functions of x and y. With the introduction of digital cameras and imaging systems, sampled versions of the continuous images were produced, which are classified as discrete 2-D signals.

Before proceeding to these transforms a few definitions are in order. For a continuous 1-D signal a particular value of the independent variable is a moment in time, and the signal has a temporal frequency whose unit is s⁻¹ (Hz). For a 2-D signal a particular point in xy space is a pixel; for a 3-D signal a particular point in xyz space is a voxel. Pixel stands for picture element, and voxel for volume element. Similarly we define spatial frequencies for multidimensional signals, whose unit has dimensions cm⁻¹. A 3-D signal can be expressed as f3D(x,y,z) where x, y, z are rectangular coordinates; f3D assigns a unique voxel value to each (x,y,z). For mathematical objects, expressions f3D(r,θ,z) in cylindrical coordinates or f3D(r,θ,ϕ) in spherical coordinates are also possible. In the rectangular representation, if we keep one coordinate constant we obtain a 2-D signal expressed as a function of the other coordinates. Thus

f (x,y) = f2D (x,y) = f3D (x,y,z)|z=c

which is a slice from 3-D image f3D (x,y,z) along z = c plane. Since Ax+By+Cz = 0 defines a plane in rectangularcoordinates any 3-D image produces a 2-D image under the constraint Ax+By+Cz = 0. Since we will be talking about2-D images in this section, we can drop the subscript 2-D from f2D (x,y). Thus

f (x,y) = f3D (x,y,z)|Ax+By+Cz=0

is an intersection of the 3-D image and the plane Ax+By+Cz = 0.

f (x,y) can sometimes be expressed by a mathematical function as is the case in computer graphics where objects arecreated by mathematical formulas. Elsewhere it may be very difficult or impossible to represent an image in mathematicalform. Fig. 5.18(a) and (b) were generated by mathematical formulas, whereas (c) was created by a digital camera andcan hardly be described mathematically. These figures are not continuous functions of x and y because they are actually


Figure 5.17: Ideal filtering in non-real time. (a) The spectrum of the noisy sine wave is bandstop filtered (the solid red line). The DFT is inverse transformed after filtering. (b) The result is the sine wave without the disturbing noise.

Figure 5.18: Images (a) and (b) can be expressed by f(x,y) = 64(2 + sin 16πx + sin 8πy) and f(x,y) = 64(1 + sin 16πx)(1 + sin 8πy) respectively. Image (c) can scarcely be expressed mathematically.

obtained from their continuous counterparts by sampling. x and y have been discretized by m∆x and n∆y where ∆x and∆y are horizontal and vertical distances between adjacent pixels; m,n are integers such that 0≤m,n < 128 for (a), (b) and0≤ m,n < 256 for (c). Thus what we see in Fig. 5.18 is actually f (m∆x,n∆y). Regardless of whether these images canbe expressed mathematically, we observe that they are single valued, and absolutely integrable. Recall from 1-D FourierTransform that absolute integrability makes these functions eligible for Fourier Transform.

The frequency content of a signal is directly related to its information content: the more frequencies, the more information the image contains. For instance, once we see a small part of the images in Fig. 5.18(a) and (b), since this small patch repeats itself over the whole picture, we can guess what the whole image looks like. The image in (c), however, has a lot of frequencies, and its features do not repeat themselves predictably; by seeing the upper left wing of the butterfly we can not predict the damage in its upper right wing. Hence we are interested in knowing the spatial frequencies. Just as we have done with 1-D time signals, we can invoke the Fourier Transform to derive the spatial frequency information from the image. To accomplish this, we extend the 1-D definition of the Fourier Transform to an image signal:

F(u,v) = ∫_{−∞}^{+∞} ∫_{−∞}^{+∞} f(x,y) exp[−j2π(ux + vy)] dx dy        (5.41)

and its inverse transform by

f(x,y) = ∫_{−∞}^{+∞} ∫_{−∞}^{+∞} F(u,v) exp[ j2π(ux + vy)] du dv        (5.42)

where u and v are the spatial frequencies in the x and y directions. We can show these transform pairs using the notation that we used before:

f(x,y) ⟷ F(u,v)


The 2-D Fourier Transform and its inverse can be implemented as two successive 1-D Fourier Transforms. Let us consider Eqn. 5.41 and rewrite it:

F(u,v) = ∫_{−∞}^{+∞} ∫_{−∞}^{+∞} f(x,y) exp[−j2π(ux + vy)] dx dy
       = ∫_{−∞}^{+∞} [ ∫_{−∞}^{+∞} f(x,y) e^{−j2πux} dx ] e^{−j2πvy} dy        (5.43)
       = ∫_{−∞}^{+∞} F(u,y) e^{−j2πvy} dy        (5.44)

or

F(u,v) = ∫_{−∞}^{+∞} ∫_{−∞}^{+∞} f(x,y) exp[−j2π(ux + vy)] dx dy
       = ∫_{−∞}^{+∞} [ ∫_{−∞}^{+∞} f(x,y) e^{−j2πvy} dy ] e^{−j2πux} dx        (5.45)
       = ∫_{−∞}^{+∞} F(x,v) e^{−j2πux} dx        (5.46)

In Fig. 5.19 we show the 2-D DFT operation as a sequence of row-wise 1-D DFTs on the rows of the image followed by column-wise 1-D DFTs on F(u,y). The result is F(u,v). Had we preferred a column-wise DFT on the image and then a row-wise DFT on F(x,v), the result would be the same.
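The row–column decomposition is easy to demonstrate numerically. The sketch below (assuming NumPy; the random test image is arbitrary) transforms along one axis and then the other, in both orders, and checks both against the library's direct 2-D FFT.

```python
import numpy as np

rng = np.random.default_rng(1)
f = rng.random((32, 48))                         # an arbitrary M x N "image"

# 1-D DFT along every row, then along every column of the intermediate result
F_rows_first = np.fft.fft(np.fft.fft(f, axis=1), axis=0)
# The opposite order gives the same 2-D transform
F_cols_first = np.fft.fft(np.fft.fft(f, axis=0), axis=1)

print(np.allclose(F_rows_first, F_cols_first))   # True
print(np.allclose(F_rows_first, np.fft.fft2(f))) # True: both equal the direct 2-D FFT
```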

An effective way of performing the 2-D Fourier Transform is to use the 2-D Discrete Fourier Transform on the image. Eqn. 5.41 can easily be converted into discrete form. For the sake of simplicity we use the integer indices x and y instead of m∆x and n∆y in f(m∆x, n∆y). The integrals are replaced by sums which run from 0 to M−1 and from 0 to N−1 for an M×N image:

F(u,v) = ∑_{x=0}^{M−1} ∑_{y=0}^{N−1} f(x,y) exp[−j2π(ux/M + vy/N)]        (5.47)

Discrete versions of Eqns. 5.43 through 5.46 become

F(u,v) = ∑_{y=0}^{N−1} [ ∑_{x=0}^{M−1} f(x,y) e^{−j2πux/M} ] e^{−j2πvy/N}
       = ∑_{y=0}^{N−1} F(u,y) e^{−j2πvy/N}        (5.48)

The column-wise DFT calculation can be performed in a similar way. Fig. 5.19 shows the 2-D DFT being evaluated by 1-D DFTs on the rows of the image, then on the columns of the intermediate DFT to obtain the final 2-D DFT. F(0,0) obtained from Eqn. 5.47 is the DC value of the image, which one might expect to equal the average of all pixel values of the picture (the average brightness). A keen eye on the equation, however, notices that it is M×N times the average brightness. Indeed, evaluating F(0,0) from Eqn. 5.47 we obtain

F(0,0) = ∑_{x=0}^{M−1} ∑_{y=0}^{N−1} f(x,y)
       = MN · (1/(MN)) ∑_{x=0}^{M−1} ∑_{y=0}^{N−1} f(x,y)
       = MN · f̄(x,y)

where f̄(x,y) denotes the average brightness. This is a huge number, especially for large images. Therefore in practice F(u,v) is divided by MN to obtain reasonable numbers. This is indeed what we did in the examples of this section.

You need not be concerned about performing these calculations by hand; all mathematical software includes these transforms. What you do need is a sound understanding of the underlying concepts so that you can use these functions correctly in your favorite software.


Figure 5.19: Performing the 2-D Fourier Transform on an image by successive applications of 1-D transforms on the image and the intermediate transform.

Example 34. Find the 2-D Fourier Transform of f (x,y) = 64(2+ sin8πx+ sin16πy).

F(u,v) = ∫_{−∞}^{+∞} ∫_{−∞}^{+∞} f(x,y) exp[−j2π(ux + vy)] dx dy
       = ∫_{−∞}^{+∞} ∫_{−∞}^{+∞} 64(2 + sin 8πx + sin 16πy) exp[−j2π(ux + vy)] dx dy
       = 64 [ 2 ∫∫ exp[−j2π(ux + vy)] dx dy + ∫∫ sin 8πx · exp[−j2π(ux + vy)] dx dy
              + ∫∫ sin 16πy · exp[−j2π(ux + vy)] dx dy ]

The first integral separates into the product of two 1-D integrals:

∫∫ exp[−j2π(ux + vy)] dx dy = ∫_{−∞}^{+∞} exp[−j2πux] dx · ∫_{−∞}^{+∞} exp[−j2πvy] dy
                            = δ(u) · δ(v)
                            = δ(u,v)

For the second integral we write the sine in exponential form:

∫∫ sin 8πx · exp[−j2π(ux + vy)] dx dy = ∫∫ (e^{j2π·4x} − e^{−j2π·4x})/(2j) · exp[−j2π(ux + vy)] dx dy
                                      = δ(v) · (1/2j)[δ(u−4) − δ(u+4)]
                                      = (j/2)[δ(u+4, v) − δ(u−4, v)]

Likewise

∫∫ sin 16πy · exp[−j2π(ux + vy)] dx dy = (j/2)[δ(u, v+8) − δ(u, v−8)]

Combining these results we obtain

F(u,v) = 128 δ(u,v) + 32j [δ(u+4, v) − δ(u−4, v)] + 32j [δ(u, v+8) − δ(u, v−8)]

which gives us a DC component and the spatial frequencies (±4, 0) cm⁻¹ and (0, ±8) cm⁻¹.
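These impulse locations can be checked against the discrete transform of the sampled image of Fig. 5.20 (the 128×128 unit-square grid and the division by MN are assumptions of this sketch, following the normalization suggested above).

```python
import numpy as np

M = N = 128
m, n = np.meshgrid(np.arange(M), np.arange(N), indexing='ij')   # m ~ x index, n ~ y index
x, y = m/M, n/N                                                 # assumed sampling of the unit square
f = 64*(2 + np.sin(8*np.pi*x) + np.sin(16*np.pi*y))

F = np.fft.fft2(f) / (M*N)               # divide by MN to get "reasonable numbers"
mag = np.abs(F)

peaks = np.argwhere(mag > 1.0)           # everything else is at round-off level
print(peaks)                             # [[0 0] [0 8] [0 120] [4 0] [124 0]]; 120 = -8, 124 = -4
print(mag[0, 0], mag[4, 0], mag[0, 8])   # ~128 (DC) and ~32, ~32 (the sinusoidal lines)
```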


Figure 5.20: (a) The image f(x,y) = 64(2 + sin 8πx + sin 16πy), (b) 2-D DFT magnitude of the signal, (c) 3-D view of the DFT.

Just as electronic filters can be used to enhance 1-D time signals, spatial filters can be used for similar purposes on images. Recall that filtering is performed by convolution in the signal domain, and by multiplication in the frequency domain. Electrical systems are causal systems, therefore ideal electrical filters can not be physically implemented. In image processing, however, causality is not a problem, so we can design ideal 2-D brickwall filters. While 1-D convolution is carried out naturally by analog hardware, 2-D convolution must be performed on a computer, and it turns out that shifting a 2-D filter across the spatial coordinates x and y is more costly than performing a DFT, a multiplication and an inverse DFT. Also, designing a filter in the spatial frequency domain is much easier and more intuitive than designing it in spatial coordinates. Hence we prefer frequency domain filtering over direct 2-D convolution. Once we have the 2-D Fourier transform of the image, we can manipulate F(u,v) to remove unwanted spatial frequencies or to enhance certain desired frequencies in preference to others. The manipulated 2-D spectrum can then be inverse-Fourier-transformed back to the image domain (Fig. 5.21). The resulting image is an improved version of the original image. The manipulation can be smoothing the image, enhancing the edges, or removing some interfering process like Moiré patterns. This manipulation, called filtering, is a topic in image processing. In Fig. 5.22 we show an original image, its magnitude DFT and the image recovered from the low-pass filtered DFT by performing an inverse DFT operation.
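Here is the DFT → mask → inverse-DFT pipeline on a synthetic noisy image (a sketch: the test image, the noise level and the brickwall cutoff radius are all assumptions, and NumPy is assumed available).

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.arange(128) / 128
clean = 64*(2 + np.sin(8*np.pi*x))[:, None] * np.ones((1, 128))   # smooth test image (assumed)
img = clean + 10*rng.standard_normal(clean.shape)                 # add broadband noise

# Ideal (brickwall) low-pass mask in the 2-D frequency plane, cutoff radius 12 bins
F = np.fft.fftshift(np.fft.fft2(img))
u = np.arange(-64, 64)
U, V = np.meshgrid(u, u, indexing='ij')
mask = (U**2 + V**2) <= 12**2

img_lp = np.fft.ifft2(np.fft.ifftshift(F*mask)).real              # back to the image domain

print(round(np.std(img - clean), 2), round(np.std(img_lp - clean), 2))  # noise std drops sharply
```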

Figure 5.21: 2-D filtering improves some aspects of images as dictated by the spatial filter. In image processing idealfilters can be implemented because causality is not an issue with 2-D signals.

5.9 Applications

When we deal with the steady-state behavior of (linear) systems, the Fourier transform and its inverse prove to be very useful tools. Lengthy differential equation analyses and solutions give way to algebraic equations involving complex variables and functions. Differentiation, integration, time and frequency shift operations, and modulation/demodulation are easily handled through the proper Fourier Transform substitutes of these operations. As you have already learned, the Fourier and Laplace Transforms are linear transforms; consequently they can not be used with nonlinear systems. However, it is possible to derive piecewise linear systems from nonlinear ones. Linearization is beyond the scope of this book; once you manage to linearize a system you can use the Laplace and Fourier Transforms in a restricted region of operation.

The convolution and multiplication properties place many useful techniques at the electrical engineer's disposal. These are analysis and design tools in software and hardware. Using simulators on computers you can model systems and analyze their time and frequency domain behaviour. Simulators save time and money by sparing us from actually building a system. Simulating a system on a silicon chip without actually manufacturing the chip is invaluable, because design errors and mistakes are very costly in the semiconductor industry. Circuit simulators are of great help to a circuit designer and are widely used alongside breadboarding. Cepstrum analysis is an interesting application of convolution. Actually it performs


Figure 5.22: Low-pass filtering of an image to smooth out the noise and blur the picture details. The Fourier transform is multiplied by the cylinder-shaped ideal LPF. The product does not contain high frequencies. When the filter output is inverse-transformed, the image is blurred and the noise is suppressed; as a result the signal-to-noise ratio is higher than that of the original image.

deconvolution of two signals, extracting them from their convolution. It finds applications in speech science and in radar and sonar technologies.

Filters are a good example of applying convolution to shape signals in the frequency domain. Signal magnitude and phase can be modified using filters. We resort to filters to pass or stop signals within certain frequency bands. We design filters using the Laplace and Fourier transforms, and we can connect them in parallel or in cascade to further enhance the overall frequency domain behavior. During the design phase we might use tools such as the Bode plot to visualize the filter behavior.

Time domain multiplication results in a class of very useful applications, namely modulation, demodulation andmixing. We talk about this very important application in Section 5.9.4.

In the past, spectrum analyzers were designed using swept frequency oscillators and mixers and as such they were veryexpensive. The oscillator output was multiplied by the input signal and the mixer output was fed into a band pass filterand rectified to a DC value. As the oscillator frequency was swept up an electron beam was also swept from left to rightacross the cathode ray tube. The mixer output fell into the passband of the filter at a certain horizontal position and thetrace showed a peaked spike. The horizontal position of the beam gave us information about the input signal’s frequencycontent, that is, its magnitude Fourier Transform. The phase spectrum was usually discarded. Today clock frequencieswell above 1 GHz, very high speed DSP processors, Fast Fourier Transform (FFT) techniques, large capacity memorydevices and smart LCD screens have revolutionized the test instruments. The instrument in Fig. 5.23 can perform FourierTransform through FFT computation on the input signal and can demodulate it. In contrast with swept frequency spectrum


Figure 5.23: Thanks to digital signal processing, modern test equipment can display RF signals, run spectrum analysis onthem and demodulate baseband signals. A 867.93 MHz RF carrier was quad-FSK modulated to produce the lower trace.The instrument demodulated the RF signal and produced the baseband signal shown in the upper trace. The instrumentuses Discrete Fourier Transform to display the spectrum and digital demodulation to produce modulating signal.

analyzers, the digital spectrum analyzer in this figure does not utilize mixers. While mixers use analog multiplication, DSP processors use digital multiplication.

Cepstrum analysis in the next section is another interesting application of Fourier Transform.

5.9.1 Cepstrum Analysis

We have seen that the output of a linear time-invariant system is related to its impulse response and the input through convolution. It is not obvious how to deduce or recover the two elements of a convolution from the convolution itself. Fortunately there is a way around the difficulty through the Fourier transform and logarithms. We saw in Chapter 2 that the natural logarithm of a complex number z = re^{jθ} is given by Eqn. 2.10 as

Ln z = ln r + jθ        (5.49)

A linear system with system function H(jω) and input X(jω) produces the output Y(jω) = H(jω)X(jω). Being a complex-valued function, Y(jω) has a magnitude and a phase. Thus

Y(jω) = |H(jω)X(jω)| e^{j arg H(jω)X(jω)}

Taking the natural logarithm of both sides we obtain the logarithmic spectrum of the convolution:

Ln Y(jω) = ln|H(jω)X(jω)| + j arg H(jω)X(jω)
         = ln|H(jω)| + ln|X(jω)| + j arg H(jω)X(jω)        (5.50)

In Eqn. 5.50 we notice that the system function and the input have been decoupled from each other: they appear as additive terms in the logarithmic spectrum. A graph of the real part of Ln Y(jω) includes ln|H(jω)| and ln|X(jω)|


Figure 5.24: Speech signal seen in the (a) time domain, (b) logarithmic spectrum and (c) real cepstrum domain.

superimposed on top of each other. In Fig. 5.24(a) a portion of a voiced speech³ signal is shown. In part (b) the highly repetitive trace in red is the vocal-fold excitation, and the green trace is the frequency response of the vocal tract. As you can see, the two constituents of speech are separated from each other. The inverse Fourier Transform of Eqn. 5.50 is called cepstrum⁴ analysis.

y(t) = F^{−1}[ ln|H(jω)| + ln|X(jω)| + j arg H(jω)X(jω) ]        (5.51)

The inverse transform of the whole equation is called the complex cepstrum, whereas the inverse transform of the realpart is called the real cepstrum.

y(t) = F^{−1}[ ln|H(jω)| + ln|X(jω)| ] = F^{−1} ln|H(jω)| + F^{−1} ln|X(jω)|        (5.52)

Eqn. 5.52 clearly demonstrates the separation of the source and the system. The cepstrum brings us back to the time domain, where the unit is the second. Although we are in the time domain, the independent variable of cepstrum analysis is called quefrency rather than time; the horizontal axis in Fig. 5.24(c) is quefrency. Since the imaginary part is discarded in the real cepstrum, the system-related and input-related functions can not be faithfully restored from it. The complex cepstrum, on the other hand, restores these functions with a phase within 0…2π. In Fig. 5.24(c) the two spikes correspond to the vocal-fold vibration quefrency, which is known in audio science as the pitch. The lower quefrency end of the axis represents the vocal tract filter in the time domain.
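A toy real-cepstrum computation in the spirit of Fig. 5.24 is sketched below (the pulse-train "excitation", the smooth "vocal tract" response, the record length and the pitch period are all assumptions of the sketch; NumPy is assumed).

```python
import numpy as np

N, P = 512, 80                                       # record length and pitch period (assumed)
excitation = (np.arange(N) % P == 0).astype(float)   # impulse train modelling vocal-fold pulses
h = np.exp(-np.arange(64)/8.0)                       # smooth "vocal tract" impulse response (assumed)
y = np.convolve(excitation, h)[:N]                   # the observed signal is their convolution

# Real cepstrum: inverse DFT of the log magnitude spectrum (the real part in Eqn. 5.52)
log_mag = np.log(np.abs(np.fft.fft(y)) + 1e-12)      # small offset guards against log(0)
cepstrum = np.fft.ifft(log_mag).real

q = np.arange(20, N//2)                              # skip the low-quefrency (vocal tract) region
print(np.sort(q[np.argsort(cepstrum[q])[-3:]]))      # strongest peaks near multiples of P = 80
```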

5.9.2 Circuit Applications

The impulse function is a basic stimulus by which systems can be studied: an LTI system is fully described by its impulse response, and the response of an LTI system to an arbitrary excitation is the familiar convolution integral which we have come across several times. Suppose now that an LTI system with impulse response h(t) is excited by a complex exponential e^{jωt}, another basic stimulus. The response to this excitation is given by convolution as

³Voiced speech is generated by vibrating vocal folds. The pseudo-periodic air stream excites the oral and nasal cavities to produce voiced sounds like the /i/ at the beginning of evening. Unvoiced speech is produced by turbulent air coming from the lungs without vocal-fold vibration, giving sounds like the /p/ of park.

⁴Cepstrum is a word coined by flipping the first syllable of spec-trum. It is pronounced in two ways: with an initial /k/ as in cat or an initial /s/ as in center.


y(t) = ∫_{−∞}^{+∞} h(τ) e^{jω(t−τ)} dτ
     = e^{jωt} ∫_{−∞}^{+∞} h(τ) e^{−jωτ} dτ
     = e^{jωt} H(jω)
     = |H(jω)| exp[ j(ωt + arg H(jω)) ]        (5.53)

We note that when an LTI system is excited with a complex exponential, the output is the same complex exponential, its magnitude and phase modified by the magnitude and phase of H(jω). Excitation functions which possess this property are called eigenfunctions; the complex exponential is an eigenfunction.

Should we now apply e^{−jωt} to our LTI system, we obtain

y(t) = e^{−jωt} H(−jω) = e^{−jωt} H*(jω)        (5.54)

which is the complex conjugate of the previous response. Therefore the response to a cosine function would be

y(t) = |H(jω)| cos(ωt + θ(jω))        (5.55)

where θ(jω) = arg H(jω). This is an important result in the analysis of LTI systems: if we determine H(jω) of a system, we can readily find its response to a cosine excitation. H(jω) can be obtained by first finding h(t) and then taking its Fourier transform. However, this is a rather lengthy route and is rarely practised by engineers. They would rather use the Fourier-Transform equivalents of the circuit components and apply Kirchhoff's current and voltage laws, mesh and node equations, etc. to determine H(jω). Just as we did in the Laplace Transform applications, we transform the terminal equations of inductors and capacitors to the frequency domain using the differentiation and integration properties. We denote impedances by Z and admittances by Y. For the inductance

v_L(t) = L di_L(t)/dt
F[v_L(t)] = F[ L di_L(t)/dt ]
V_L(jω) = jωL I_L(jω)

The impedance of the inductance thus becomes

Z_L = jωL        (5.56)

Likewise, for the capacitance

i_C(t) = C dv_C(t)/dt
F[i_C(t)] = F[ C dv_C(t)/dt ]
I_C(jω) = jωC V_C(jω)

The admittance of the capacitance thus becomes

Y_C = jωC        (5.57)

The current through an inductance and the voltage across a capacitance are the time integrals of the inductance voltage and the capacitance current respectively, so their Fourier Transforms should normally include an impulse at DC. However, assuming zero average for the inductance voltage and the capacitance current, we can omit these impulse terms to get

Y_L = 1/(jωL)        (5.58)

Z_C = 1/(jωC)        (5.59)

The resistance value R is the same in the frequency domain as in the time domain.

Example 35. For the circuit shown in Fig. 5.25: a) derive the voltage transfer function; b) select C1 so that the circuit becomes a resistive divider at all frequencies.


Solution

The voltage transfer function of the circuit can be derived by replacing C1 and C2 with their impedance values and finding the output of the complex-impedance voltage divider:

Vout = Z1/(Z1 + Z2) · Vin        (5.60)

where

Y1 = jωC1 + 1/R1 = (1 + jωR1C1)/R1

Z1 = Y1^{−1} = R1/(1 + jωR1C1)

Likewise

Z2 = R2/(1 + jωR2C2)

Substituting Z1 and Z2 into Eqn. 5.60 we have

Z1/(Z1 + Z2) = [ R1/(1 + jωR1C1) ] / [ R1/(1 + jωR1C1) + R2/(1 + jωR2C2) ]
             = R1(1 + jωR2C2) / [ R1(1 + jωR2C2) + R2(1 + jωR1C1) ]

Hence the voltage transfer function is found to be

Vout/Vin = R1(1 + jωR2C2) / [ R1(1 + jωR2C2) + R2(1 + jωR1C1) ]        (5.61)

Here τ1 = R1C1 and τ2 = R2C2 are the time constants of the two RC arms of the circuit. Should we let τ1 = τ2 = τ, we obtain

Vout/Vin = R1(1 + jωτ) / [ R1(1 + jωτ) + R2(1 + jωτ) ] = R1/(R1 + R2)

which implies that the circuit becomes independent of frequency. For this circuit τ = 4 µs, therefore C1 must be 4 nF. C1 is called a speedup capacitance since it can be set to render the circuit in Fig. 5.25 resistive. Oscilloscope probes have similar compensation circuits that use a trimmer capacitor for the speedup capacitance C1; R2 and C2 represent the input channel of the oscilloscope. Without a speedup, or more precisely a compensation capacitance, C2 smears the sharp edges of input signals. To compensate for the effect of C2, a square wave is applied to the probe and the C1 trimmer is adjusted with a screwdriver until the square wave is displayed without overshoot or exponential rise.
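The compensation condition τ1 = τ2 is easy to explore numerically. In the sketch below the value R1C1 = 4 µs follows the example, while R2 and C2 are illustrative assumptions (the actual values are those of Fig. 5.25); NumPy is assumed.

```python
import numpy as np

R1, R2, C2 = 1e3, 4e3, 1e-9        # assumed component values, with R2*C2 = 4 us
w = np.logspace(2, 8, 7)           # angular frequencies from 1e2 to 1e8 rad/s

def transfer(C1):
    """Eqn. 5.61: Vout/Vin of the two-arm RC divider."""
    Z1 = R1 / (1 + 1j*w*R1*C1)
    Z2 = R2 / (1 + 1j*w*R2*C2)
    return Z1 / (Z1 + Z2)

print(np.round(np.abs(transfer(4e-9)), 4))   # R1*C1 = R2*C2: flat 0.2 at every frequency
print(np.round(np.abs(transfer(1e-9)), 4))   # miscompensated: the gain varies with frequency
```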

5.9.3 Filtering

As mentioned at the beginning of this section, filters are used to shape the spectral characteristics of a signal through convolution. Shaping the signal spectrum occurs as a result of multiplying the filter spectrum with the signal spectrum: the magnitudes of the signal and the filter function are multiplied, and their phases are added. Filters are therefore required either to shape the magnitude spectrum or the phase spectrum of a signal. Without proof let us state that the magnitude and phase spectra of a signal can not be shaped independently; a desired modification of the magnitude spectrum can result in an undesired phase spectrum and vice versa.

Filters come in five flavors when classified with respect to their magnitude responses:

1. Low Pass Filter (LPF),

2. High Pass Filter (HPF),

3. Band Pass Filter (BPF),


Figure 5.25: Speedup capacitance in an oscilloscope probe compensation circuit.

4. Band Stop Filter, Band Reject Filter or Notch Filter (BSF),

5. All Pass Filter (APF).

In Fig. 5.26 the ideal forms of these filters are shown. Regarding the magnitude response, the filters possess regions designated as passband and stopband; the passbands in Fig. 5.26 are filled in with gray. Ideal filters pass signals within the passband unaltered but somewhat delayed. As seen in the figure, the ideal filter passbands start and stop abruptly at the cutoff frequencies ωc, ω1 and ω2.

The ideal lowpass filter in Fig. 5.26(a) has the transfer function

H_LP(jω) = 1 for |ω| ≤ ωc, and 0 for |ω| > ωc        (5.62)

The filters other than the allpass filter can be built from a lowpass filter. For example, a highpass filter is synthesized by subtracting a lowpass filter function from 1:

H_HP(jω) = 1 − H_LP(jω)        (5.63)

and a bandpass filter is realized by shifting a lowpass filter to ω = ω0 and ω = −ω0 and adding them up:

H_BP(jω) = (1/2) H_LP[ j(ω − ω0) ] + (1/2) H_LP[ j(ω + ω0) ]        (5.64)

Though mathematically correct, this synthesis implies modulating the lowpass filter function. Instead, bandpass filters can alternatively be synthesized by multiplying suitable LPF and HPF functions in the frequency domain, i.e., by physically cascading them (Problem 14). Similar observations and remarks apply to the bandstop filter; see Problem 15 for the bandstop case.

Ideal filters can not be implemented physically because they are noncausal. Causality requires that the filter respond only after an input is applied to it. All of the ideal filters above have impulse responses that are nonzero from −∞ to 0; that is, the filter responds to the input before it is applied at t = 0! We can easily verify this for the lowpass filter by taking the inverse Fourier transform of its transfer function in Fig. 5.26(a):

h(t) = (1/2π) ∫_{−∞}^{+∞} H(jω) e^{jωt} dω
     = (1/2π) ∫_{−ωc}^{+ωc} 1 · e^{jωt} dω        (5.65)
     = (ωc/π) · sin(ωc t)/(ωc t)

The sinc function is nonzero before t = 0, hence the ideal lowpass filter is noncausal and therefore unrealizable. Since the other ideal filters are derived from the LPF, they too are noncausal and unrealizable.

Let us design a simple bandpass filter by cascading a lowpass filter and a highpass filter. Note that because convolution in time is commutative, the order of cascading does not matter. For nonideal filters the cutoff frequencies are specified as those


Figure 5.26: Basic filter types. (a) Lowpass, (b) highpass, (c) bandpass, (d) bandstop and (e) allpass. These filters, except the allpass filter, are ideal filters; since they are noncausal they can not be physically realized.

frequencies for which |H(jω)| = 1/√2. These frequencies are also called 3 dB cutoff frequencies since 20 log(1/√2) = −3 dB. Let the BPF cutoff frequencies be ω1 = 10 rad/sec and ω2 = 1000 rad/sec. To realize this BPF, the LPF and HPF shall have cutoff frequencies of 1000 rad/sec and 10 rad/sec respectively. These specifications lead to the following filter transfer function

H_BP(s) = (s/ω1) / [ (1 + s/ω1)(1 + s/ω2) ] = 0.1s / [ (1 + 0.1s)(1 + 0.001s) ]

We can split this equation into its HPF and LPF parts:

H_BP(s) = [ 0.1s/(1 + 0.1s) ] · [ 1/(1 + 0.001s) ] = H_HP(s) · H_LP(s)

Thus


Figure 5.27: BPF implemented by cascading a LPF and a HPF. The lower cutoff frequency is 10 rad/sec and the higher cutoff frequency is 1000 rad/sec. The HPF and LPF can be swapped without altering the filter characteristics.

H_HP(s) = 0.1s/(1 + 0.1s)
H_HP(jω) = H_HP(s)|_{s = jω} = 0.1jω/(1 + 0.1jω)

H_LP(s) = 1/(1 + 0.001s)
H_LP(jω) = 1/(1 + 0.001jω)

H_BP(jω) = 0.1jω / [ (1 + 0.1jω)(1 + 0.001jω) ]        (5.66)

We can use RC lowpass and highpass filters to implement these sections. Since τ1 = R1C1 = 1/ω1 = 0.1 sec for the HPF and τ2 = R2C2 = 1/ω2 = 0.001 sec for the LPF, we can select R1 = R2 = 100 kΩ, C1 = 1 µF and C2 = 10 nF. Cascading is achieved using a unity-gain buffer amplifier (Fig. 5.27).
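A quick numerical check of the cascaded response in Eqn. 5.66 (a sketch assuming NumPy; the sample frequencies are arbitrary):

```python
import numpy as np

w = np.array([1.0, 10.0, 100.0, 1000.0, 1e4])     # rad/s, spanning both cutoff frequencies

H_hp = 0.1j*w / (1 + 0.1j*w)                      # RC highpass section, cutoff 10 rad/s
H_lp = 1 / (1 + 0.001j*w)                         # RC lowpass section, cutoff 1000 rad/s
H_bp = H_hp * H_lp                                # cascade through a unity-gain buffer (Eqn. 5.66)

print(np.round(np.abs(H_bp), 3))                  # ~0.1, ~0.707, ~1, ~0.707, ~0.1
print(np.round(20*np.log10(np.abs(H_bp)), 1))     # the same response in dB
```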

The Bode plot technique can be used to graph the magnitude and phase response of our BPF. The technique was developed in the 1930s by Hendrik Wade Bode. It uses asymptotes to approximate the magnitude and phase characteristics of systems. Because hand calculations are preferred during plotting, the approximation uses addition and subtraction instead of multiplication and division. This is achieved by taking logarithms of the magnitude function; the standard 20 log form was chosen as it was already in use to express gain in decibels. Consider our design example where the system transfer function is given by Eqn. 5.66. Let us rewrite this equation as

H_BP(jω) = j(ω/10) / { [1 + j(ω/10)] [1 + j(ω/1000)] }

The magnitude and phase functions are given by

|H_BP(jω)| = (ω/10) / ( |1 + j(ω/10)| · |1 + j(ω/1000)| )

arg H_BP(jω) = π/2 − arctan(ω/10) − arctan(ω/1000)

Taking 20 log of the magnitude function converts the multiplications and divisions into additions and subtractions:

20 log|H_BP(jω)| = 20 log(ω/10) − 20 log|1 + j(ω/10)| − 20 log|1 + j(ω/1000)|
                 = 20 logω − 20 − 20 log|1 + j(ω/10)| − 20 log|1 + j(ω/1000)|

A logarithmic rather than linear scale is used for the frequency axis. You make a mental change of variables x = log ω when you work with Bode plots. Then the numerator ω/10 gives rise to 20 log ω − 20 = 20x − 20, which is a line with a slope


Figure 5.28: Bode plot of the BPF.

of +20 dB/decade that intercepts the frequency axis at x = 1 (or ω = 10). The interval from x = 1 to x = 2, or generally from x = i to x = i+1, corresponds to a ten-fold increase in frequency; this ten-fold increase is called a decade. As for the factors in the denominator, we have

20 log|1 + j(ω/10)| ≈ 0 for ω ≪ 10,  3 dB at ω = 10,  and 20 logω − 20 for ω ≫ 10

and

20 log|1 + j(ω/1000)| ≈ 0 for ω ≪ 1000,  3 dB at ω = 1000,  and 20 logω − 60 for ω ≫ 1000

These three sections and the overall magnitude graph are drawn separately in Fig. 5.29.
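The asymptotic construction can be compared with the exact magnitude numerically (a sketch assuming NumPy; the asymptote is taken as 0 dB up to the corner and +20 dB/decade beyond it).

```python
import numpy as np

w = np.logspace(0, 5, 6)           # 1, 10, 100, ..., 1e5 rad/s
exact = (20*np.log10(w/10)
         - 20*np.log10(np.abs(1 + 1j*w/10))
         - 20*np.log10(np.abs(1 + 1j*w/1000)))

def corner(w, wc):
    """Straight-line (asymptotic) approximation of 20 log|1 + jw/wc|."""
    return np.where(w <= wc, 0.0, 20*np.log10(w/wc))

asymptotic = 20*np.log10(w/10) - corner(w, 10) - corner(w, 1000)

print(np.round(exact, 1))          # exact magnitude of H_BP in dB
print(np.round(asymptotic, 1))     # Bode approximation; worst error is about 3 dB at the corners
```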

5.9.4 Amplitude Modulation and Demodulation

As we have seen in Section 4.5, time domain multiplication convolves the Fourier transforms of the signals. Convolution in the frequency domain causes signals to be translated from one frequency band to another. This property has been widely exploited in the history of radio communication. High frequency signals are more easily radiated than low frequency signals because antenna lengths are proportional to the signal's wavelength. To make antenna sizes feasible, both physically and economically, the information signal is transmitted on a carrier signal. In amplitude modulation, an electronic circuit called an AM modulator multiplies the information and carrier signals; thus the information content of the signal is translated to a band around the carrier frequency. At the receiving end, modulation is undone by another electronic circuit called a demodulator, which extracts the information signal from the modulated carrier. In the so-called synchronous demodulator, the modulated signal is multiplied again by a copy of the carrier signal. The product contains the information signal among other components; filtering the product recovers the information at the receiver. Mixers are similar to modulators/demodulators in that they also operate on the basis of multiplication. They are used in radios, TV sets and the telephone industry. In the telephone industry thousands of conversations are squeezed onto a wide-band communication channel, where each conversation is allocated a band that does not overlap with the other conversation channels. The post office


Figure 5.29: Bode magnitude plot construction.

equipment assigns these carriers to subscribers, multiplies them with the baseband signals, translates the conversations to those frequencies and restores them back to the speaker's phone at the other end. This operation, called Frequency Division Multiplexing (FDM), is achieved through multiplication by mixers.

Invoking the frequency shifting property (Eqn. 5.20) or time domain multiplication property (Eqn. 5.26) we have

F{ e^{jω0t} x(t) } = X[ j(ω − ω0) ]

Using the identity cos ω0t = (e^{jω0t} + e^{−jω0t})/2 and the linearity of the Fourier Transform we had also derived

F{ x(t) cos ω0t } = ( X[ j(ω + ω0) ] + X[ j(ω − ω0) ] ) / 2

which simply states that the message in x(t) gets translated to the frequencies ω = −ω0 and ω = ω0. This is the double-sided spectrum of the modulated carrier. In Fig. 5.30 the double-sided spectrum of a sinusoidal carrier modulated by a complex baseband signal is shown. This figure is for illustration only, to show how the portions of the baseband signal are shifted in frequency.

As an example, let x(t) = 0.8 sin 4πt be our baseband signal, whose amplitude is 0.8 and frequency is 2 Hz. Let us choose a carrier with amplitude 1 and frequency 20 Hz. We can generate an amplitude modulated signal with injected carrier using

y(t) = sin 40πt + 0.8 sin 4πt · sin 40πt = (1 + 0.8 sin 4πt) sin 40πt


Figure 5.30: AM modulation. (a) Baseband signal and carrier, (b) Modulated carrier.

Figure 5.31: Amplitude modulation with injected carrier (a) the baseband signal (blue), and the modulated signal (red),(b) The double-sided spectrum of the AM signal.

where the first term is the injected carrier and the second term is the modulated carrier. The Fourier Transform of this modulated signal contains two impulses of magnitude 0.5 at f = ±20 Hz and the baseband signal translated to f = ±18 Hz and f = ±22 Hz, each component having an amplitude of 0.2. We can demodulate this signal by multiplying it by the carrier signal and filtering out the high frequency components:

y(t) sin 40πt = (1 + 0.8 sin 4πt) sin² 40πt
             = (1 + 0.8 sin 4πt)(1 − cos 80πt)/2
             = 0.4 sin 4πt + 1/2 − (1/2) cos 80πt − 0.4 sin 4πt cos 80πt

The first term is the desired baseband signal; the second term is the DC component, and the third and fourth terms arehigh frequency components. They can be filtered out leaving us with the baseband signal.
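The whole modulation/demodulation chain can be sketched numerically as follows (a minimal illustration assuming NumPy; the sampling rate, record length and the ideal DFT-domain low-pass filter are choices of the sketch).

```python
import numpy as np

fs = 1000                                      # assumed sampling rate, Hz
t = np.arange(0, 2, 1/fs)                      # 2-second record
baseband = 0.8*np.sin(4*np.pi*t)               # 2 Hz message
y = (1 + baseband) * np.sin(40*np.pi*t)        # AM with injected 20 Hz carrier

# Synchronous demodulation: multiply by the carrier, then ideal low-pass in the DFT domain
prod = y * np.sin(40*np.pi*t)
P = np.fft.fft(prod)
f = np.fft.fftfreq(len(t), 1/fs)
P[np.abs(f) > 5] = 0                           # keep only |f| < 5 Hz (DC and the 2 Hz message)
recovered = np.fft.ifft(P).real

# recovered ~ 0.5 + 0.4 sin(4 pi t): half the message amplitude riding on a DC term
print(round(recovered.mean(), 3), round((recovered - recovered.mean()).max(), 3))
```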

Problems

1. Obtain the Fourier transform of f (t) = A [u(t + τ)−u(t− τ)].

2. Fourier transform of f (t) is F (ω). Obtain the Fourier transform of m(t) = f (t)cos ω0t.

3. If f(t) ⟷ F(ω) are a Fourier transform pair, show that

f(t) = (1/2π) ∫_{−∞}^{+∞} ( ∫_{−∞}^{+∞} f(τ) e^{−jωτ} dτ ) e^{jωt} dω


and

f(t) = ∫_{−∞}^{+∞} ( ∫_{−∞}^{+∞} f(τ) e^{−j2πfτ} dτ ) e^{j2πft} df

4. If f(t) ⟷ F(ω) are a Fourier transform pair, show that f(at) ⟷ (1/|a|) F(ω/a).

5. Two LTI systems H1 and H2 are described in the time domain by

H1:  y1(t) + k1 ∫_0^t y1(τ) dτ = x1(t)

H2:  dy2(t)/dt + k2 y2(t) = k2 x2(t),  with y2(0⁻) = 0

These two systems are connected in cascade to form a composite system H with h(t) = h1(t) ∗ h2(t). Obtain the mathematical representation of H.

6. If x(t) is a real function, then show that

|X(−jω)| = |X(jω)|   and   arg[X(−jω)] = −arg[X(jω)]

7. If x(t) is an odd function, prove that X(jω) is imaginary.

8. The Fourier Transform of a filter function is given as

H(jω) = |H(jω)| exp{ j [arg H(jω) − ωt0] }

A signal x(t) applied to the input of this filter produces y(t) at the filter output. Prove that the output of the filter is given by

y(t) = h(t) ∗ x(t − t0)

9. A signal y(t) is related to another signal x(t) through the relation y(t) = x(t) cos(ω0t + θ). Given that

F{ x(t) cos ω0t } = (1/2) X[ j(ω + ω0) ] + (1/2) X[ j(ω − ω0) ],

find Y(jω) and discuss the effect of θ.

10. Using the fact that the impulse function is the derivative of the unit-step function and that the Fourier Transform of the unit-step function is

U(jω) = 1/(jω) + πδ(ω),

find the Fourier Transform of the impulse function.

11. Using Fourier Transform properties show that the convolution in time domain satisfies the following laws:

(a) Commutativity: x(t)∗ y(t) = y(t)∗ x(t)

(b) Associativity: [x(t)∗ y(t)]∗ z(t) = x(t)∗ [y(t)∗ z(t)]

(c) Distributivity: x(t)∗ [y(t)+ z(t)] = x(t)∗ y(t)+ x(t)∗ z(t)

12. Prove Eqn. 5.55.


13. For the circuit in Fig. 5.32 the voltage sources are identical, with equal frequency and phase. Show that |Vout| = 1 Volt for all frequencies and all R, C values, and that arg{Vout} = arctan(ωRC)−1.

Figure 5.32: Constant amplitude output RC circuit.

14. A HPF and a LPF with cutoff frequencies ω1 and ω2 such that ω1 < ω2 are connected in cascade. Show that theresultant filter is a BPF with cutoff frequencies ω1 and ω2.

15. A HPF and a LPF with cutoff frequencies ω1 and ω2 such that ω1 > ω2 are connected in cascade. Show that theresultant filter is a BSF with cutoff frequencies ω1 and ω2.

16. A HPF and a LPF with cutoff frequencies ω1 and ω2 such that ω1 > ω2 are connected in parallel and their outputsare summed. Show that the resultant filter is a BSF with cutoff frequencies ω1 and ω2.

17. Let x(t) be a time signal, y(t) = e^{jω0t} and z(t) = x(t)y(t). Using the time domain multiplication property of Eqn. 5.26, show that Z(jω) = X[j(ω − ω0)].

18. A QAM signal is generated by z(t) = x(t)cosω0t + y(t)sinω0t. Show that x(t) and y(t) can be recovered fromz(t) using

x(t) = LPF [z(t)cosω0t]

y(t) = LPF [z(t)sinω0t]

where LPF(·) is lowpass filter operation on the signal.

19. Prove that the sum of the N complex exponentials below is equal to 0 (see Chapter 1, Problem 20).

∑_{n=0}^{N−1} exp( j2πn/N ) = 0

20. Study Eqn. 5.48. Using a similar procedure derive the column-wise DFT.

21. Find the 2-D Fourier Transform of f (x,y) = 64(1+ sin8πx)(1+ sin16πy).

22. Find the magnitude and phase of the Fourier Transform for the following system functions:

(a)  H(s) = (s − 1)/(s + 1)

(b)  H(s) = (1 + 0.1s)/(1 + s)


23. Computer project. Implement the following block diagram in LabVIEW and experiment with the DFT, inverse DFT and filtering.

24. Computer project. Implement the following block diagram in LabVIEW and obtain the 2-D Fourier Transform of the test image.

25. Computer project. In this project you implement a LabVIEW virtual instrument which performs low-pass orhigh-pass filtering on a black-and-white BMP image.

(a) First choose a small picture which contains considerable fine detail. If it is a color picture, convert it to a monochrome BMP image.

(b) Save the picture to your computer. Enter the path where you saved the picture into the path constant of the Read BMP vi.

(c) On the front panel create a numerical control for the 2-D filter cutoff frequency. The accepted frequencies are from 0 to 0.5. Make 0.1 the default frequency value.

(d) On the front panel create an enum control for the 2-D filter type. Edit the control and make the first and second selections Low Pass and High Pass respectively.

(e) The vi will generate a raw-image array and a processed-image array.

(f) On the front panel create two intensity graph indicators for the raw and filtered images.

(g) On the block diagram window draw the block diagram shown in the figure for Problem 25.

(h) Operate the vi with several cutoff frequencies and filter types.


Problem (25). 2-D filtering with LabVIEW.


Chapter 6

z - Transform

Electronics engineering was born digital when the telegraph was invented. Even the first radio communication at the beginning of the 20th century was digital. Guglielmo Marconi¹ made a transmitter using a high-voltage Ruhmkorff coil which generated long arcs of electric discharge between two brass balls. His radio transmitters used Morse code² to send messages. The telephone, Alexander Graham Bell's³ invention, was definitely analog: it could carry voice signals over a pair of wires. Later on, Lee De Forest's⁴ invention of the vacuum triode tube (audion) made it possible to receive and amplify weak signals. The triode tube and the telephone triggered further developments that ushered in radio communication. For long years to come electronics meant radio engineering, and radio communication was analog. Digital lay dormant in the lap of Morse telegraphy.

With electronics being analog, signal processing was analog too. The Laplace Transform, Fourier Series and Fourier Transform, developed in the 19th century, were mature and well understood. The physics needed to make new devices or to design systems was also on the side of the "analog designer". For decades circuit theory, control theory, wave generation and shaping, filtering theory, modulation and demodulation walked hand in hand in peace without ever uttering the word digital. Mr. Boole's⁵ algebra had been around, but who cared about Boolean algebra? Even the mechanical calculators and the first programmed electronic computer, ENIAC, which came in 1946, were decimal, not binary.

The late arrival of digital signal processing - discrete signal processing to be more accurate - may be attributed to the unavailability of appropriate technology, the formidable cost of the existing technology and the scarcity of skilled people to work in the field. ENIAC was designed to work with "17,468 vacuum tubes, 7,200 crystal diodes, 1,500 relays, 70,000 resistors, 10,000 capacitors and 5,000,000 hand-soldered joints⁶." It had no memory, consumed 150 kW of power and exceeded 30 tons in weight. Its 2016-equivalent cost is $6,816,000. Discrete signal processing as we know it today demands high resolution ADC's and DAC's, high sampling rates and high-speed digital signal processors built with special architectures. Discrete signal processing was not in the curricula of most universities in the 1960's; it was taught in graduate courses of elite universities. Even then, discrete signal processing was run on mainframe computers, and later minicomputers, which were not accessible to an average engineer.

The digital age started with the advent of digital integrated circuits in the late 1960's and early 1970's. As predicted by Moore's Law⁷, the number of on-chip transistors has doubled every 18 months or so. At the beginning small-scale

¹25 April 1874 – 20 July 1937. An Italian electrical engineer, inventor and entrepreneur. He invented radio and achieved radio transmission across the Atlantic. He was awarded the 1909 Nobel Prize in Physics.

²"Morse code is a method of transmitting text information as a series of on-off tones, lights, or clicks that can be directly understood by a skilled listener or observer without special equipment." - Wikipedia.

³"March 3, 1847 – August 2, 1922. A Scottish-born scientist, inventor, engineer and innovator who is credited with patenting the first practical telephone." - Wikipedia.

⁴August 26, 1873 – June 30, 1961. "An American inventor, 'Father of Radio', and a pioneer in the development of sound-on-film recording used for motion pictures." - Wikipedia.

⁵"George Boole (2 November 1815 – 8 December 1864) was an English mathematician, educator, philosopher and logician. He worked in the fields of differential equations and algebraic logic, and is best known as the author of The Laws of Thought (1854) which contains Boolean algebra. Boolean logic is credited with laying the foundations for the information age." - Wikipedia.

⁶https://en.wikipedia.org/wiki/ENIAC

⁷Statement by Gordon E. Moore that the number of transistors in an integrated circuit doubles almost every two years. Moore was one of the founders of Intel Corporation.


integration (SSI) devices, then medium-scale (MSI), then large-scale (LSI) and very-large-scale (VLSI) devices became available in the arsenal of engineers. At first the main application domain of these devices was purely digital, leaving analog applications to analog devices. The prospects of the analog realm were sound and healthy: quality opamps and analog function blocks were introduced with rising performance/price ratios, and the progress in analog and RF devices paved the way toward UHF and beyond. However, the two main domains of electronics were unable to merge because the technology was still not mature or not cheap enough.

Birth of the microprocessors in late 1970’s has started another revolution in electronics, namely the programmed logic.There was a boom in all aspects of digital electronics: microprocessors increased their performance with wider busses,faster clocks; memory devices flourished with ever increasing storage sizes and faster access times; ADC’s and DAC’salso followed the trend in bit resolution and conversion time. The good thing was that cost of these devices did not in-crease as their performance improved. The big beakthrough happened in 1979 when Apple introduced the first personalcomputer designed with an 8-bit microprocessor. Myriad of small game computers flooded the markets until IBM intro-duced the IBM-PC and published its technical workings. These developments encouraged discrete signal processing onPC’s albeit nonreal-time. By this time the DSP had already been gaining momentum in popularity. Cooley and Tukey hadinvented Fast Fourier Transform in 1965 which was included in Top 10 Algorithms of 20th Century by the IEEE journalComputing in Science & Engineering. FFT brought Fourier Transform from analog domain to digital domain. 1990’s sawrapid development of DSP processors which gradually invaded analog applications zone to open the door to mixed signaldevices. The main difference between a general purpose microprocessor and a DSP processor is that the DSP processormultiplies and adds in a single clock cycle. This is in compliance with the Nyquist sampling theorem which imposes thatthe processor finish its job until the next sample arrives. For real-time applications this is a must. Companies such asAnalog Devices and Texas Instruments pioneered in the DSP chips which implemented floating point processors. Todaywe are even one more step ahead with ubiquitous FPGA devices and development systems. Although DSP processorswere a breakthrough, they still ran program-coded algorithms. FPGA’s on the other hand run wire-coded algorithms inreal time; they are uncommitted digital hardware that can be wired to perform multiplication, addition, FFT etc.

To benefit from the exciting field of discrete signal processing, one must grasp certain relevant concepts. A sound understanding of discrete signals, difference equations, and discrete time- and frequency-domain analysis is a prerequisite to progress. The rather lengthy introduction above was meant to stimulate your curiosity and motivate you toward discrete signal processing. This chapter does not teach a lesson in DSP; rather, it teaches a very important analysis tool called the z-transform. The z-transform is to discrete signal processing what the Laplace and Fourier transforms are to continuous-time signal processing. It is extremely important for assessing discrete systems, for example whether they are stable or not. Emphasizing the strong relation between the z-transform and digital hardware, a D-type register is denoted in DSP work by the unit-delay operator z^{-1}. We will see LabVIEW examples of this in the applications.

6.1 Definition of the z-Transform

The discrete delta function δ[n] is defined for all integer values of n as

$$\delta[n] = \begin{cases} 1, & n = 0 \\ 0, & n \neq 0 \end{cases}$$

δ[n] produces the value of a function at n = 0 when multiplied by a continuous-time function h(t):

$$h(t)\,\delta[n] = h_0 = h(0)$$

We can combine shifted delta functions through addition to form another function Δ[n], called the comb function:

$$\Delta[n] = \sum_{k=-\infty}^{\infty} \delta[n-k] \qquad (6.1)$$

Multiplying the continuous-time function h(t) by Δ[n] produces a discrete sequence of numbers hₙ:

$$h_n = h(t)\,\Delta[n] = h(t)\sum_{k=-\infty}^{\infty}\delta[n-k] = \sum_{k=-\infty}^{\infty} h(t)\,\delta[n-k] = \cdots, h_{-2}, h_{-1}, h_0, h_1, h_2, \cdots \qquad (6.2)$$


Multiplication by the comb function Δ[n] samples the continuous function h(t) at discrete instants and converts it into a discrete signal. We can write Eqn. 6.2 in series form as

$$h[n] = \sum_{k=-\infty}^{\infty} h_k\,\delta[n-k] = \sum_{k=-\infty}^{\infty} h[k]\,\delta[n-k] \qquad (6.3)$$

We can interpret Eqn. 6.3 both as the convolution of h[n] with Δ[n] and as the impulse response of a discrete system H. Extending this idea to an arbitrary input x[n] instead of Δ[n], we obtain the discrete output y[n]:

$$y[n] = \sum_{k=-\infty}^{\infty} h[k]\,x[n-k] = h[n] * x[n] \qquad (6.4)$$

Now let us choose the complex-valued function zⁿ for x[n]. Then y[n] becomes

$$y[n] = \sum_{k=-\infty}^{\infty} h[k]\,z^{n-k} = z^{n}\sum_{k=-\infty}^{\infty} h[k]\,z^{-k} = H(z)\,z^{n}$$

We observe that, just like the continuous-time complex exponential e^{jωt}, the discrete-time exponential zⁿ is an eigenfunction of the system. Recall that when a system is excited by an eigenfunction, its response contains that eigenfunction with its magnitude and phase altered by the system. This leads us to define the z-transform of a sequence h[n]:

$$H(z) = \sum_{n=-\infty}^{\infty} h[n]\,z^{-n} \qquad (6.5)$$

This is the definition of the z-transform, which can also be viewed as an operator that maps the input sequence into the z-plane. The relation is also denoted with the following notations:

$$H(z) = \mathcal{Z}\{h[n]\} \qquad (6.6)$$

$$h[n] \;\overset{\mathcal{Z}}{\longleftrightarrow}\; H(z) \qquad (6.7)$$

The discrete-time index n in the definition (Eqn. 6.5) runs from −∞ to +∞. If we restrict n to run from 0 to +∞ we obtain the so-called unilateral z-transform:

$$H(z) = \sum_{k=0}^{\infty} h[k]\,z^{-k} \qquad (6.8)$$

For signals that are zero for n < 0, the unilateral and bilateral z-transforms are identical. The unilateral z-transform can be used to solve difference equations with initial conditions.
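As a quick illustration (a minimal Python sketch of my own, not part of the text's LabVIEW material), for a finite-length sequence the unilateral z-transform of Eqn. 6.8 can be evaluated at any complex point simply by carrying out the sum; the sequence values below are arbitrary.

```python
# Direct numerical evaluation of the unilateral z-transform, Eqn. 6.8:
# H(z) = sum_{k=0}^{N-1} h[k] z^{-k}  (finite-length h, so the sum is exact)

def unilateral_z_transform(h, z):
    """Evaluate H(z) = sum_k h[k] * z**(-k) for a finite sequence h."""
    return sum(hk * z ** (-k) for k, hk in enumerate(h))

h = [1.0, 2.0, 3.0]          # h[0], h[1], h[2]
z = 2.0 + 1.0j               # any nonzero complex point

# By hand: H(z) = 1 + 2/z + 3/z**2; compare with the direct sum.
print(unilateral_z_transform(h, z))
print(1 + 2 / z + 3 / z ** 2)
```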

6.2 Region of Convergence for the z-Transform

In Section 5.7 we saw the Fourier transform of a discrete signal h[n] defined as H(e^{jω}) = Σₙ h[n]e^{-jωn}. We notice that this relation and the z-transform are very similar. For that transform to converge we had required that h[n] be absolutely summable, i.e.,

$$\sum_{n=-\infty}^{\infty} |h[n]| < \infty$$

Denoting z = re^{jω}, the z-transform can be rewritten as

$$H(re^{j\omega}) = \sum_{n=-\infty}^{\infty} h[n]\left(re^{j\omega}\right)^{-n} = \sum_{n=-\infty}^{\infty} h[n]\,r^{-n}e^{-j\omega n}$$

where 0 ≤ r < ∞ and 0 ≤ ω ≤ 2π. Thus we deduce that H(z) is the Fourier transform of the sequence h[n]r^{-n}:

$$H(z) = \sum_{n=-\infty}^{\infty} h[n]\,r^{-n}e^{-j\omega n} \qquad (6.9)$$

In order for the Fourier transform of h[n]r^{-n} to converge we demand absolute summability:

$$|H(z)| = \left|\sum_{n=-\infty}^{\infty} h[n]\,r^{-n}e^{-j\omega n}\right| < \infty \quad\text{or}\quad \sum_{n=-\infty}^{\infty} |h[n]|\,r^{-n} < \infty \qquad (6.10)$$

H(z) may not converge everywhere in the z-plane. If H(z) exists, then Eqn. 6.10 converges in a region of the z-plane called the region of convergence (ROC), beyond which H(z) does not converge. As will be apparent shortly, the ROC is bounded by circles and can extend to the origin or to infinity. The power series in the definition of the z-transform is a Laurent series; inside its region of convergence it is an analytic function and its derivatives of all orders are continuous. Eqn. 6.9 shows that H(z) is the Fourier transform of the sequence h[n] multiplied by the exponential r^{-n}. Because of this multiplication by r^{-n}, a discrete signal whose Fourier transform does not converge may have a z-transform that converges for r > 1. Provided that the ROC includes the unit circle, the Fourier transform of a discrete signal is its z-transform evaluated on the unit circle |z| = 1.

It is interesting to note the parallels between the Laplace transform and the continuous-time Fourier transform on one hand, and the z-transform and the discrete-time Fourier transform on the other. Both s and z are continuous complex variables. If the Laplace transform of a continuous-time function converges and has a ROC that includes the jω axis, then the function possesses a Fourier transform which is obtained simply by setting s = jω. Likewise, if the z-transform of a discrete-time sequence converges and has a ROC that includes the unit circle |z| = 1, then the discrete-time Fourier transform of the sequence exists and is obtained by setting z = e^{jω} in the z-transform. s = jω is the imaginary axis of the s-plane and covers the range of frequencies −∞ < ω < +∞. On the other hand |z| = 1, or z = e^{jω}, is the unit circle; ω in z = e^{jω} is the frequency, which falls in the range 0 ≤ ω ≤ 2π. Obviously ω is periodic with period 2π, and the frequencies ω and ω + 2π are identical. One should keep in mind this very important fact: if a continuous function is sampled and converted into a discrete-time sequence, then the jω axis of the s-plane is mapped onto the unit circle of the z-plane; ω = +∞ and ω = −∞ of the s-plane are mapped to ω = π and ω = −π of the z-plane. Also note that ω = π corresponds to the Nyquist frequency (half the sampling frequency f_s). As pointed out previously, the frequency response of a discrete signal is periodic, i.e., H(e^{jω}) and H(e^{j(ω+2πn)}) are identical for any integer n.

The z-transform H(z) of a discrete-time signal h[n] is always associated with its ROC. H(z) alone does not specify the z-transform of a signal, since two different signals may have the same H(z), as will be apparent shortly. The only difference between the z-transforms of such signals is their ROCs.

Example 1

Let x[n] = aⁿu[n]. Find X(z) and the ROC. We proceed from the z-transform definition.


Figure 6.1: The region of convergence for the z-transform. The Fourier transforms of continuous-time and discrete-time functions are evaluated on the jω axis of the s-plane (a) and on the unit circle of the z-plane (b), respectively, provided that the Laplace and z-transforms converge there. Continuous-time frequencies ω = ±∞ map to discrete-time frequencies ω = ±π. (c) The ROC does not include the unit circle, hence the Fourier transform of the discrete-time function does not exist. (d), (e) If the ROC includes the unit circle, the Fourier transform of the discrete-time function exists.


Figure 6.2: z-transform of (a) x[n] = aⁿu[n] and (b) x[n] = aⁿu[−n−1].

$$X(z) = \sum_{n=-\infty}^{+\infty} a^{n}u[n]\,z^{-n} = \sum_{n=0}^{+\infty}\left(az^{-1}\right)^{n} = \frac{1}{1-az^{-1}} = \frac{z}{z-a}, \qquad \text{ROC: } |z| > |a|$$

The complex geometric series converges provided that |az^{-1}| < 1. This gives us the ROC: |z| > |a| (Fig. 6.2a). Clearly the Fourier transform does not exist for |a| ≥ 1. The z-transforms of the two signals x[n] = (1/3)ⁿu[n] and y[n] = (−1/3)ⁿu[n] are

$$X(z) = \frac{1}{1-\tfrac{1}{3}z^{-1}} = \frac{z}{z-\tfrac{1}{3}} \qquad\text{and}\qquad Y(z) = \frac{1}{1+\tfrac{1}{3}z^{-1}} = \frac{z}{z+\tfrac{1}{3}}$$

with the same ROC |z| > 1/3.
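The role of the ROC is easy to see numerically. The Python sketch below (my own illustration, with a = 0.5 and two test points chosen arbitrarily) sums the geometric series of Example 1 term by term: inside the ROC (|z| > |a|) the partial sums settle to z/(z − a); outside, they grow without bound and the closed form is meaningless.

```python
# Partial sums of X(z) = sum_{n>=0} (a/z)^n for x[n] = a^n u[n]
a = 0.5

def partial_sums(z, terms=60):
    s, out = 0.0, []
    for n in range(terms):
        s += (a / z) ** n
        out.append(s)
    return out

for z in (2.0, 0.4):                      # |z| > |a| and |z| < |a|
    sums = partial_sums(z)
    print(f"z = {z}: last partial sums {sums[-3:]}, closed form {z / (z - a):.6f}")
# For z = 2.0 the sums converge to z/(z-a) = 1.3333...; for z = 0.4 they diverge.
```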

Example 2

Let x[n] = aⁿu[−n−1]. Find X(z) and the ROC. Proceeding as we did in the previous example:

$$X(z) = \sum_{n=-\infty}^{+\infty} a^{n}u[-n-1]\,z^{-n} = \sum_{n=-\infty}^{-1} a^{n}z^{-n}$$

With the substitution k = −n we obtain the z-transform and the ROC:


$$X(z) = \sum_{k=1}^{\infty} a^{-k}z^{k} = \sum_{k=0}^{\infty}\left(a^{-1}z\right)^{k} - 1 = -1 + \frac{1}{1-a^{-1}z} = \frac{a^{-1}z}{1-a^{-1}z} = -\frac{z}{z-a}, \qquad \text{ROC: } |z| < |a|$$

Fig. 6.2b shows the ROC for |a| < 1.

Example 1 and Example 2 demonstrate that two different sequences may share the same z-transform expression; indeed, aⁿu[n] and −aⁿu[−n−1] both have H(z) = z/(z−a). The only difference between the two transforms is their region of convergence.

If the discrete sequence is a linear combination of exponential sequences, then the z-transform can be expressed in closed form as a rational function

$$H(z) = \frac{P(z)}{Q(z)}$$

where the numerator P(z) and denominator Q(z) are polynomials in z or z^{-1}. As we have seen in Section 3.5, the z values for which P(z) = 0 and Q(z) = 0 are called the zeros and poles of H(z), respectively. In general, assuming N simple poles and M simple zeros, H(z) can be described by one of the following forms:

$$H(z) = \frac{\sum_{i=0}^{M} b_i z^{-i}}{\sum_{i=0}^{N} a_i z^{-i}} = \frac{b_0}{a_0}\cdot\frac{\prod_{i=1}^{M}\left(1-n_i z^{-1}\right)}{\prod_{i=1}^{N}\left(1-d_i z^{-1}\right)} = \frac{b_0}{a_0}\cdot\frac{z^{N}\prod_{i=1}^{M}(z-n_i)}{z^{M}\prod_{i=1}^{N}(z-d_i)} \qquad (6.11)$$

We deduce from these forms that:

1. for a right-sided (causal) sequence the ROC lies outside the circle that passes through the pole farthest from the origin, i.e., |z| > max |d_i|, i = 1, ..., N;

2. there are N poles that are not at the origin;

3. there are M zeros that are not at the origin;

4. if M > N there are M − N additional poles at the origin;

5. if M < N there are N − M additional zeros at the origin;

6. none of the poles is at infinity.

We illustrate this in the next example. We will use the linearity property of the z-transform, which is very similar to the linearity we have encountered in the Laplace and Fourier transforms: the z-transform of a linear combination of sequences is the same linear combination of the individual z-transforms. The proof of linearity is given in Section 6.3.


Figure 6.3: ROC for Example 3.

Example 3

Find the z-transform of the sequence x[n] = Σ_{k=0}^{+∞} aᵏδ[n−k] + Σ_{k=0}^{+∞} bᵏδ[n−k] = aⁿu[n] + bⁿu[n].

$$X(z) = \mathcal{Z}\{x[n]\} = \mathcal{Z}\left\{\sum_{k=0}^{+\infty} a^{k}\delta[n-k]\right\} + \mathcal{Z}\left\{\sum_{k=0}^{+\infty} b^{k}\delta[n-k]\right\} = \sum_{n=0}^{+\infty} a^{n}z^{-n} + \sum_{n=0}^{+\infty} b^{n}z^{-n} = \frac{1}{1-az^{-1}} + \frac{1}{1-bz^{-1}}$$

For this transform to converge both series must converge. The ROC for the first series is |z| > |a| and |z| > |b| for the second. Therefore the ROC for X(z) is |z| > max(|a|, |b|) (Fig. 6.3). We can express this sum as a rational function:

$$X(z) = \frac{P(z)}{Q(z)} = \frac{z}{z-a} + \frac{z}{z-b} = \frac{z\left[2z-(a+b)\right]}{(z-a)(z-b)}$$

X(z) has two zeros and two poles: z₁ = 0, z₂ = (a+b)/2 and p₁ = a, p₂ = b.

6.3 z-Transform Properties

These properties enable us to study discrete signals and systems. Using these properties, the transforms of basic signals, and the inverse z-transform techniques, we can obtain the response of systems with complicated z-transforms. The properties studied below bear a striking resemblance to the properties of the Laplace transform, which we have already studied in Chapter 2. Paying attention to the differences between continuous and discrete transforms, we can carry those properties over to the ones outlined in the following paragraphs. Continuous functions are transformed through integration of infinitesimal quantities such as x(t)e^{-st}dt; in the z-transform, integration is replaced by a summation of the discrete quantities x[n]z^{-n}. In the following paragraphs we assume a discrete signal x[n] with transform X(z) valid in some region, upon which certain operations are performed; the z-transforms of the resulting discrete signals are given.


Table 6.1: z-Transform Pairs

No. | Discrete signal                     | z-Transform                                                | ROC
1   | δ[n]                                | 1                                                          | entire z-plane
2   | u[n]                                | 1/(1 − z^{-1})                                             | |z| > 1
3   | −u[−n−1]                            | 1/(1 − z^{-1})                                             | |z| < 1
4   | δ[n−m]                              | z^{-m}                                                     | z ≠ 0 (m > 0); |z| < ∞ (m < 0)
5   | aⁿu[n]                              | 1/(1 − az^{-1})                                            | |z| > |a|
6   | −aⁿu[−n−1]                          | 1/(1 − az^{-1})                                            | |z| < |a|
7   | n aⁿu[n]                            | az^{-1}/(1 − az^{-1})²                                     | |z| > |a|
8   | −n aⁿu[−n−1]                        | az^{-1}/(1 − az^{-1})²                                     | |z| < |a|
9   | (cos ω₀n) u[n]                      | (1 − cos ω₀ z^{-1})/(1 − 2cos ω₀ z^{-1} + z^{-2})          | |z| > 1
10  | (sin ω₀n) u[n]                      | (sin ω₀ z^{-1})/(1 − 2cos ω₀ z^{-1} + z^{-2})              | |z| > 1
11  | (rⁿ cos ω₀n) u[n]                   | (1 − r cos ω₀ z^{-1})/(1 − 2r cos ω₀ z^{-1} + r²z^{-2})    | |z| > r
12  | (rⁿ sin ω₀n) u[n]                   | (r sin ω₀ z^{-1})/(1 − 2r cos ω₀ z^{-1} + r²z^{-2})        | |z| > r
13  | aⁿ for n ∈ [0, N−1], 0 otherwise    | (1 − aᴺz^{-N})/(1 − az^{-1})                               | |z| > 0

Linearity

Summation is a linear operation, like integration: a summation over a linear combination of discrete signals equals the same linear combination of the individual summations. Let w[n] = a x[n] + b y[n], X(z) = Z{x[n]} and Y(z) = Z{y[n]}, where a and b are constants. Then W(z) = aX(z) + bY(z).

The proof is similar to the Laplace transform case and follows from the definition:

$$W(z) = \mathcal{Z}\{a x[n] + b y[n]\} = \sum_{n=-\infty}^{\infty} w[n]\,z^{-n} = \sum_{n=-\infty}^{\infty}\left(a x[n] + b y[n]\right)z^{-n}$$

Linearity of the summation lets us interchange the order of summation and grouping. Thus

$$W(z) = \sum_{n=-\infty}^{\infty} a x[n]\,z^{-n} + \sum_{n=-\infty}^{\infty} b y[n]\,z^{-n} = a\sum_{n=-\infty}^{\infty} x[n]\,z^{-n} + b\sum_{n=-\infty}^{\infty} y[n]\,z^{-n}$$

which in z-transform notation becomes

$$W(z) = aX(z) + bY(z) \qquad (6.12)$$

with the ROC containing the intersection of the ROCs of X(z) and Y(z).

Time Shifting

This is a very useful property which we often encounter in LTI discrete systems. Many discrete-time signal processing operations work on shifted samples of a known sequence. The time-shifting property relates the z-transform of a shifted signal to that of the original (unshifted) signal. If X(z) = Z{x[n]}, then Z{x[n−m]} = z^{-m}X(z).

Proof: By the definition of the z-transform,

$$\mathcal{Z}\{x[n-m]\} = \sum_{n=-\infty}^{\infty} x[n-m]\,z^{-n} \qquad (6.13)$$

With the change of variables k = n − m we have n = k + m and

$$\mathcal{Z}\{x[n-m]\} = \sum_{k=-\infty}^{\infty} x[k]\,z^{-(k+m)} = \sum_{k=-\infty}^{\infty} x[k]\,z^{-k}z^{-m} = z^{-m}\sum_{k=-\infty}^{\infty} x[k]\,z^{-k} = z^{-m}X(z)$$

Multiplication by an Exponential Sequence

Multiplication of a discrete signal by an exponential sequence scales the variable of its z-transform. Let a be a constant. Then

$$\mathcal{Z}\{a^{n}x[n]\} = \sum_{n=-\infty}^{\infty} a^{n}x[n]\,z^{-n} = \sum_{n=-\infty}^{\infty} x[n]\left(\frac{z}{a}\right)^{-n} = X\!\left(\frac{z}{a}\right) \qquad (6.14)$$

Differentiation of X(z)

Multiplying a discrete signal x[n] by n amounts to differentiating X(z) with respect to z, i.e.,

$$\frac{dX(z)}{dz} = \frac{d}{dz}\sum_{n=-\infty}^{\infty} x[n]\,z^{-n} = \sum_{n=-\infty}^{\infty} (-n)\,x[n]\,z^{-n-1} = -z^{-1}\,\mathcal{Z}\{n\,x[n]\}$$

$$\mathcal{Z}\{n\,x[n]\} = -z\,\frac{dX(z)}{dz} \qquad (6.15)$$

Conjugation of a Complex Sequence

$$\mathcal{Z}\{x^{*}[n]\} = \sum_{n=-\infty}^{\infty} x^{*}[n]\,z^{-n} = \left[\sum_{n=-\infty}^{\infty} x[n]\,(z^{*})^{-n}\right]^{*} = \left[X(z^{*})\right]^{*}$$


Convolution of Sequences

Let w[n] = x[n] ∗ y[n]. Then

$$W(z) = \mathcal{Z}\{x[n]*y[n]\} = \sum_{n=-\infty}^{\infty}\left(x[n]*y[n]\right)z^{-n} = \sum_{n=-\infty}^{\infty}\left(\sum_{k=-\infty}^{\infty} x[k]\,y[n-k]\right)z^{-n}$$

$$= \sum_{k=-\infty}^{\infty} x[k]\sum_{n=-\infty}^{\infty} y[n-k]\,z^{-n} = \sum_{k=-\infty}^{\infty} x[k]\,z^{-k}\,Y(z) = Y(z)\sum_{k=-\infty}^{\infty} x[k]\,z^{-k} = X(z)\,Y(z)$$

where the inner sum was evaluated using the time-shifting property.

Time Reversal

With the substitution k = −n,

$$\mathcal{Z}\{x[-n]\} = \sum_{n=-\infty}^{\infty} x[-n]\,z^{-n} = \sum_{k=-\infty}^{\infty} x[k]\,z^{k} = \sum_{k=-\infty}^{\infty} x[k]\left(\frac{1}{z}\right)^{-k} = X\!\left(z^{-1}\right)$$

Initial Value Theorem

For a causal sequence x[n] (x[n] = 0 for n < 0),

$$X(z) = \sum_{n=0}^{\infty} x[n]\,z^{-n} = x[0] + x[1]z^{-1} + \cdots + x[n]z^{-n} + \cdots$$

All terms except the first vanish as z → ∞, hence

$$x[0] = \lim_{z\to\infty} X(z)$$
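The properties above are easy to sanity-check numerically. The Python sketch below (my own illustrative code, using short arbitrary sequences and an arbitrary evaluation point z₀) verifies the time-shifting, convolution and initial-value properties by evaluating the defining sums directly.

```python
import numpy as np

def Z(x, z, n0=0):
    """Bilateral z-transform of a finite sequence x whose first sample sits at index n0."""
    n = np.arange(n0, n0 + len(x))
    return np.sum(np.asarray(x) * z ** (-n.astype(float)))

x = [1.0, -2.0, 3.0, 0.5]     # x[0..3]
y = [2.0, 1.0, -1.0]          # y[0..2]
z0 = 1.5 + 0.5j

# Time shifting: Z{x[n-m]} = z^{-m} X(z)
m = 2
print(np.isclose(Z(x, z0, n0=m), z0 ** (-m) * Z(x, z0)))

# Convolution: Z{x*y} = X(z) Y(z)
w = np.convolve(x, y)
print(np.isclose(Z(w, z0), Z(x, z0) * Z(y, z0)))

# Initial value theorem: x[0] = lim_{z->inf} X(z); a very large |z| approximates the limit
print(Z(x, 1e9))              # approximately x[0] = 1.0
```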

6.4 The Inverse z-Transform

Taking the z-transform of a difference equation results in an algebraic equation of the form of Eqn. 6.11; this in effect transfers the system description from the discrete-time domain to the z-domain. Obtaining h[n] back from H(z) is the process of inverse z-transformation, which can be performed by the inverse transform integral given later in this section and involves complex contour integration. Below we outline methods of inverting the z-transform, contour integration being the most sophisticated and ultimate tool. Fortunately, simpler alternative methods are usually sufficient.

6.4.1 Partial Fraction Expansion

When the z-transform is a rational function, we have an inverse transform problem similar to that of inverting Laplace transforms, and the methods used there apply here as well. When X(z) is a quotient of polynomials in z we can use the method of partial fraction expansion. By decomposing X(z) and using the z-transform rules and the table of z-transform pairs, x[n] can be obtained rather easily. This facilitates our job of finding solutions to difference equations.


A linear time-invariant discrete system relates its output to its input as

$$a_0 y[n] = b_0 x[n] + b_1 x[n-1] + \ldots + b_M x[n-M] - \left(a_1 y[n-1] + \ldots + a_N y[n-N]\right)$$

Collecting terms and taking the z-transform of both sides of the equation:

$$a_0 y[n] + a_1 y[n-1] + \ldots + a_N y[n-N] = b_0 x[n] + b_1 x[n-1] + \ldots + b_M x[n-M]$$

$$\sum_{k=0}^{N} a_k\, y[n-k] = \sum_{k=0}^{M} b_k\, x[n-k]$$

$$\mathcal{Z}\left\{\sum_{k=0}^{N} a_k\, y[n-k]\right\} = \mathcal{Z}\left\{\sum_{k=0}^{M} b_k\, x[n-k]\right\}$$

$$Y(z)\sum_{k=0}^{N} a_k z^{-k} = X(z)\sum_{k=0}^{M} b_k z^{-k}$$

$$Y(z) = \frac{\sum_{k=0}^{M} b_k z^{-k}}{\sum_{k=0}^{N} a_k z^{-k}}\, X(z)$$

As we have already encountered with continuous-time linear time-invariant systems, the quotient Y(z)/X(z) is called the system function and is denoted by H(z):

$$H(z) = \frac{Y(z)}{X(z)} = \frac{\sum_{k=0}^{M} b_k z^{-k}}{\sum_{k=0}^{N} a_k z^{-k}} \qquad (6.16)$$

$$= \frac{p(z)}{q(z)} \qquad (6.17)$$

Clearly H(z) is a rational function of z^{-1}, which can be converted to a function of z:

$$\frac{p(z)}{q(z)} = \frac{z^{-M}\sum_{k=0}^{M} b_k z^{M-k}}{z^{-N}\sum_{k=0}^{N} a_k z^{N-k}} = \frac{z^{N}\sum_{k=0}^{M} b_k z^{M-k}}{z^{M}\sum_{k=0}^{N} a_k z^{N-k}} = \frac{b_0}{a_0}\, z^{N-M}\, \frac{\sum_{k=0}^{M} (b_k/b_0)\, z^{M-k}}{\sum_{k=0}^{N} (a_k/a_0)\, z^{N-k}}$$

We can factor the numerator and the denominator:

$$\frac{p(z)}{q(z)} = \frac{b_0}{a_0}\, z^{N-M}\, \frac{(z-z_1)(z-z_2)\cdots(z-z_M)}{(z-p_1)(z-p_2)\cdots(z-p_N)} \qquad (6.18)$$

$$= \frac{b_0}{a_0}\, z^{N-M}\, \frac{\prod_{k=1}^{M}(z-z_k)}{\prod_{k=1}^{N}(z-p_k)} \qquad (6.19)$$

It is evident that H(z) has M nonzero zeros and N nonzero poles. We can distinguish three cases:

1. N < M, when M − N extra poles are located at z = 0;

2. N = M, when no additional zeros or poles are located at z = 0;

3. N > M, when N − M extra zeros are located at z = 0.

We also notice that the total numbers of poles and zeros are equal and that there are no zeros or poles at infinity. Below we outline the partial-fractions method, which puts p(z)/q(z) into a sum of terms of the form A_i/(1 − p_i z^{-1}). Then, using linearity and the transform table, sequence terms A_i p_iⁿ u[n] are obtained whose sum yields h[n]. The remaining cases are dealt with afterwards.


When M < N the poles can be simple or multiple. The first case is that of simple poles (with multiplicities equal to 1), whereas the second is the case of multiple poles. Poles and zeros can be either real or occur in complex conjugate pairs. For the simplest case of poles with multiplicity one we can expand p(z)/q(z) as

$$\frac{p(z)}{q(z)} = \frac{A_1}{z-p_1} + \frac{A_2}{z-p_2} + \cdots + \frac{A_N}{z-p_N}$$

Let us study these cases with examples.

Real roots

Example 7. Let H(z) = p(z)/q(z) represent a system with right-sided signals, where

$$\frac{p(z)}{q(z)} = \frac{-1+3.6z^{-1}}{1+0.3z^{-1}-0.54z^{-2}}$$

By factoring q(z), H(z) can be written as

$$\frac{-1+3.6z^{-1}}{1+0.3z^{-1}-0.54z^{-2}} = \frac{-1+3.6z^{-1}}{\left(1-0.6z^{-1}\right)\left(1+0.9z^{-1}\right)}$$

which can be expanded in partial fractions as follows:

$$\frac{p(z)}{q(z)} = \frac{A_1}{1-0.6z^{-1}} + \frac{A_2}{1+0.9z^{-1}}$$

where A₁ and A₂ must be determined. Multiplying both sides by 1 − 0.6z^{-1} and evaluating at z = 0.6 we obtain A₁:

$$\left.\frac{-1+3.6z^{-1}}{1+0.9z^{-1}}\right|_{z=0.6} = A_1 + \left.\frac{A_2\left(1-0.6z^{-1}\right)}{1+0.9z^{-1}}\right|_{z=0.6}, \qquad 2 = A_1$$

Likewise, multiplying both sides by 1 + 0.9z^{-1} and evaluating at z = −0.9 we obtain A₂:

$$\left.\frac{-1+3.6z^{-1}}{1-0.6z^{-1}}\right|_{z=-0.9} = \left.\frac{A_1\left(1+0.9z^{-1}\right)}{1-0.6z^{-1}}\right|_{z=-0.9} + A_2, \qquad -3 = A_2$$

Thus we obtain

$$\frac{p(z)}{q(z)} = \frac{-1+3.6z^{-1}}{1+0.3z^{-1}-0.54z^{-2}} = \frac{2}{1-0.6z^{-1}} - \frac{3}{1+0.9z^{-1}}$$

Referring to Table 6.1 one can readily obtain the inverse z-transform:

$$\mathcal{Z}^{-1}\!\left(\frac{-1+3.6z^{-1}}{1+0.3z^{-1}-0.54z^{-2}}\right) = \mathcal{Z}^{-1}\!\left(\frac{2}{1-0.6z^{-1}}\right) - \mathcal{Z}^{-1}\!\left(\frac{3}{1+0.9z^{-1}}\right)$$

For right-sided signals the ROC is |z| > 0.9 and

$$h[n] = 2\cdot 0.6^{n}u[n] - 3\cdot(-0.9)^{n}u[n] = \left[2\cdot 0.6^{n} - 3\cdot(-0.9)^{n}\right]u[n]$$

Expressed in positive powers of z,

$$H(z) = -\frac{z(z-3.6)}{(z-0.6)(z+0.9)}$$

H(z) has zeros at z = 0 and z = 3.6, and poles at z = 0.6 and z = −0.9. In Fig. 6.4 the pole-zero diagram and the corresponding discrete-time signal are shown.
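A quick numerical cross-check (an illustrative Python sketch I am adding, not the LabVIEW realization discussed in the text) compares the closed-form h[n] with the impulse response obtained by iterating the difference equation implied by H(z) = (−1 + 3.6z^{-1})/(1 + 0.3z^{-1} − 0.54z^{-2}).

```python
# Impulse response of H(z) = (-1 + 3.6 z^-1) / (1 + 0.3 z^-1 - 0.54 z^-2)
# by direct recursion, compared with h[n] = 2*0.6^n - 3*(-0.9)^n.
N = 12
b = [-1.0, 3.6]               # numerator coefficients
a = [1.0, 0.3, -0.54]         # denominator coefficients
x = [1.0] + [0.0] * (N - 1)   # unit impulse

h = []
for n in range(N):
    acc = sum(b[k] * x[n - k] for k in range(len(b)) if 0 <= n - k)
    acc -= sum(a[k] * h[n - k] for k in range(1, len(a)) if 0 <= n - k)
    h.append(acc / a[0])

closed = [2 * 0.6 ** n - 3 * (-0.9) ** n for n in range(N)]
print(all(abs(u - v) < 1e-12 for u, v in zip(h, closed)))   # True
```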


Figure 6.4: Example 7. (a) Pole-zero diagram, (b) discrete-time sequence.

Complex roots

Denominator terms like z² + p² can be factored as (z + jp)(z − jp), while (z + p)² + r² can be factored as (z + p + jr)(z + p − jr). These terms give rise to fractions like A/(z + jp), A*/(z − jp), B/(z + p + jr) and B*/(z + p − jr). Once A and B are obtained, the process of finding the coefficients is over.

Example 8. Let

$$X(z) = \frac{z+1}{z\left(z^{2}+1.21\right)}$$

Decomposing X(z) = p(z)/q(z) into partial fractions, find the discrete-time sequence x[n].

Solution

$$\frac{z+1}{z\left(z^{2}+1.21\right)} = \frac{A}{z} + \frac{B}{z+j1.1} + \frac{B^{*}}{z-j1.1}$$

$$A = \left.\frac{z+1}{z^{2}+1.21}\right|_{z=0} = \frac{1}{1.21} = 0.8264$$

$$B = \left.\frac{z+1}{z(z-j1.1)}\right|_{z=-j1.1} = \frac{-j1.1+1}{-j1.1(-j1.1-j1.1)} = \frac{1-j1.1}{-2.42} = -0.4132\,(1-j1.1)$$

Hence B* is automatically found to be B* = −0.4132(1 + j1.1). Since 1 − j1.1 = 1.4866 e^{−j0.8330} and 1/(z ± j1.1) = z^{-1}/(1 ± j1.1z^{-1}), we can write

$$\frac{z+1}{z\left(z^{2}+1.21\right)} = \frac{0.8264}{z} - 0.4132\left(\frac{1-j1.1}{z+j1.1} + \frac{1+j1.1}{z-j1.1}\right) = z^{-1}\left(0.8264 - \frac{0.6143\,e^{-j0.8330}}{1+j1.1z^{-1}} - \frac{0.6143\,e^{+j0.8330}}{1-j1.1z^{-1}}\right)$$


We can invert this result to the discrete-time domain in two steps. First we recognize the z^{-1} factor, which will generate a unit time delay in the inverse transform. Assuming a right-sided signal (ROC |z| > 1.1), let us obtain the inverse z-transform of the interior of the parentheses:

$$\mathcal{Z}^{-1}\!\left\{0.8264 - \frac{0.6143\,e^{-j0.8330}}{1+j1.1z^{-1}} - \frac{0.6143\,e^{+j0.8330}}{1-j1.1z^{-1}}\right\} = 0.8264\,\delta[n] - 0.6143\,e^{-j0.8330}(-j1.1)^{n}u[n] - 0.6143\,e^{j0.8330}(j1.1)^{n}u[n]$$

$$= 0.8264\,\delta[n] - 0.6143\cdot 1.1^{n}\left[e^{-j\left(0.8330+\frac{n\pi}{2}\right)} + e^{j\left(0.8330+\frac{n\pi}{2}\right)}\right]u[n] = 0.8264\,\delta[n] - 1.2285\cdot 1.1^{n}\cos\!\left(\frac{n\pi}{2}+0.8330\right)u[n]$$

Applying the unit time delay to this result we obtain x[n]:

$$x[n] = 0.8264\,\delta[n-1] - 1.2285\cdot 1.1^{n-1}\cos\!\left(\frac{(n-1)\pi}{2}+0.8330\right)u[n-1]$$
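Because arithmetic like this is easy to slip on, a numerical check is reassuring (an illustrative Python sketch of my own, not part of the text): writing X(z) = (z + 1)/(z(z² + 1.21)) = (z^{-2} + z^{-3})/(1 + 1.21z^{-2}) gives the recursion x[n] + 1.21x[n−2] = δ[n−2] + δ[n−3], which should agree with the closed form above.

```python
import math

N = 12
x_rec = []
for n in range(N):
    val = (1.0 if n == 2 else 0.0) + (1.0 if n == 3 else 0.0)   # delta[n-2] + delta[n-3]
    if n - 2 >= 0:
        val -= 1.21 * x_rec[n - 2]
    x_rec.append(val)

def x_closed(n):
    if n < 1:
        return 0.0
    return 0.8264 * (1.0 if n == 1 else 0.0) \
        - 1.2285 * 1.1 ** (n - 1) * math.cos((n - 1) * math.pi / 2 + 0.8330)

print([round(v, 4) for v in x_rec])
print([round(x_closed(n), 4) for n in range(N)])   # agrees to about three decimals (rounded constants)
```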

Multiple roots

Once again we can follow the methods we used with the Laplace transform for poles of multiplicity greater than one. For a pole p of multiplicity r the partial fraction expansion will contain the terms A₁/(z−p), A₂/(z−p)², ..., A_r/(z−p)^r. To demonstrate the case of poles with multiplicities greater than one, let us consider the following example:

$$X(z) = \frac{p(z)}{q(z)} = \frac{3z+1}{(z-0.5)^{2}}$$

We can write this as

$$\frac{3z+1}{(z-0.5)^{2}} = \frac{A_1}{z-0.5} + \frac{A_2\,z}{(z-0.5)^{2}}$$

Two methods can be used to decompose the function into partial fractions; the first can also be applied to the case of simple poles.

Method 1: Identical polynomials. Gather the two terms over a common denominator, the least common multiple of all the denominators. For our problem

$$\frac{3z+1}{(z-0.5)^{2}} = \frac{A_1(z-0.5)}{(z-0.5)^{2}} + \frac{A_2\,z}{(z-0.5)^{2}} = \frac{(A_1+A_2)z - 0.5A_1}{(z-0.5)^{2}}$$

For this equality to hold, the coefficients of z⁰ and z must be equal in the numerator polynomials. Thus

$$A_1 + A_2 = 3, \qquad -0.5A_1 = 1$$

whose solution yields A₁ = −2, A₂ = 5. Hence the partial fraction expansion becomes

$$\frac{3z+1}{(z-0.5)^{2}} = -\frac{2}{z-0.5} + \frac{5z}{(z-0.5)^{2}} = -2\,\frac{z^{-1}}{1-0.5z^{-1}} + 10\,\frac{0.5z^{-1}}{\left(1-0.5z^{-1}\right)^{2}}$$


Figure 6.5: Inverse z-transform of the multiple-roots example. (a) LabVIEW implementation of x[n] = −2·0.5^{n−1}u[n−1] + 10n·0.5ⁿu[n], (b) graph of x[n].

Assuming a right-sided signal, the ROC is |z| > 0.5. Referring to Table 6.1, x[n] is given as

$$x[n] = \mathcal{Z}^{-1}\!\left\{-2\,\frac{z^{-1}}{1-0.5z^{-1}} + 10\,\frac{0.5z^{-1}}{\left(1-0.5z^{-1}\right)^{2}}\right\} = -2\cdot 0.5^{n-1}u[n-1] + 10\,n\cdot 0.5^{n}u[n]$$

The LabVIEW implementation of this result is shown in Fig. 6.5.
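As an additional check (a small Python sketch of my own, independent of the LabVIEW VI in Fig. 6.5), the same x[n] can be generated by iterating the recursion implied by X(z) = (3z + 1)/(z − 0.5)² = (3z^{-1} + z^{-2})/(1 − z^{-1} + 0.25z^{-2}).

```python
# x[n] - x[n-1] + 0.25 x[n-2] = 3*delta[n-1] + delta[n-2]
N = 10
x = []
for n in range(N):
    val = 3.0 * (n == 1) + 1.0 * (n == 2)
    if n - 1 >= 0:
        val += x[n - 1]
    if n - 2 >= 0:
        val -= 0.25 * x[n - 2]
    x.append(val)

closed = [-2 * 0.5 ** (n - 1) * (n >= 1) + 10 * n * 0.5 ** n for n in range(N)]
print(all(abs(u - v) < 1e-12 for u, v in zip(x, closed)))    # True
```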

Method 2: Differentiation. To get rid of (z − 0.5)² in the denominator of the left-hand side, we multiply both sides of the equation by (z − 0.5)² to obtain

$$3z + 1 = A_1(z-0.5) + A_2\,z$$

Differentiating both sides once with respect to z gives 3 = A₁ + A₂. Evaluating the undifferentiated equation at z = 0.5 eliminates the A₁ term:

$$A_2\,z\big|_{z=0.5} = (3z+1)\big|_{z=0.5} \;\;\Rightarrow\;\; 0.5A_2 = 2.5 \;\;\Rightarrow\;\; A_2 = 5, \quad A_1 = -2$$

Once the z-transform of a system or signal is given, the inverse transform can also be obtained recursively: collect the dependent variable's present value on the left-hand side, and its past values together with the input and its past values on the right-hand side of a difference equation. Assume the z-transform is expressed by the rational function

$$\frac{Y(z)}{X(z)} = \frac{\sum_{k=0}^{M} b_k z^{-k}}{\sum_{k=0}^{N} a_k z^{-k}}$$

Then the output samples follow from y[n] = (Σ_{k=0}^{M} b_k x[n−k] − Σ_{k=1}^{N} a_k y[n−k]) / a₀, computed for n = 0, 1, 2, ...
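The recursive idea is easy to put into code. The Python helper below (an illustrative sketch of my own, not from the text) takes the coefficient lists {b_k} and {a_k} and generates output samples from the difference equation; driving it with a unit impulse yields the inverse z-transform of H(z) sample by sample.

```python
def difference_equation(b, a, x):
    """y[n] = ( sum_k b[k] x[n-k] - sum_{k>=1} a[k] y[n-k] ) / a[0]."""
    y = []
    for n in range(len(x)):
        acc = sum(bk * x[n - k] for k, bk in enumerate(b) if 0 <= n - k < len(x))
        acc -= sum(ak * y[n - k] for k, ak in enumerate(a) if k >= 1 and 0 <= n - k)
        y.append(acc / a[0])
    return y

# Inverse transform of H(z) = 1 / (1 - 0.5 z^-1): h[n] = 0.5^n u[n]
impulse = [1.0] + [0.0] * 7
print(difference_equation([1.0], [1.0, -0.5], impulse))
# [1.0, 0.5, 0.25, 0.125, ...]
```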

Inverse z-Transform Using Contour Integration

The Cauchy integral theorem states that

$$\frac{1}{2\pi j}\oint_C z^{-k}\,dz = \begin{cases} 1, & k = 1 \\ 0, & k \neq 1 \end{cases} \qquad (6.20)$$

where C is a counterclockwise closed contour encircling the origin and lying in the ROC. Multiplying X(z) by z^{k−1} and integrating over C,

$$\frac{1}{2\pi j}\oint_C X(z)\,z^{k-1}\,dz = \frac{1}{2\pi j}\oint_C\left[\sum_{n=-\infty}^{\infty} x(n)\,z^{-n}\right] z^{k-1}\,dz = \sum_{n=-\infty}^{\infty} x(n)\left[\frac{1}{2\pi j}\oint_C z^{k-n-1}\,dz\right]$$

The bracketed integral equals 1 only when n = k, so the right-hand side reduces to x(k). Hence

$$x[n] = \frac{1}{2\pi j}\oint_C X(z)\,z^{n-1}\,dz$$

Power Series Expansion

6.5 Parseval’s Theorem

6.6 One-Sided z-Transform

The one-sided (unilateral) z-transform can be used to solve difference equations with specified initial conditions. To illustrate this we take a simple second-order causal system as an example. In this system let y[n] and x[n] be two discrete signals related to each other through the difference equation

$$y[n] = -a_1 y[n-1] - a_2 y[n-2] + b_0 x[n] + b_1 x[n-1], \qquad x[n] = 0 \;\;\text{for } n < 0$$

Taking z-transforms of both sides we have

$$\sum_{n=-\infty}^{\infty}\left(y[n] + a_1 y[n-1] + a_2 y[n-2]\right)z^{-n} = \sum_{n=-\infty}^{\infty}\left(b_0 x[n] + b_1 x[n-1]\right)z^{-n}$$

Since x[n] = 0 for n < 0, the summation on the right-hand side reduces to Σ_{n=0}^{∞}(b₀x[n] + b₁x[n−1])z^{-n}; hence we arrive at a transform which differs from the two-sided (bilateral) transform only in the starting value of its index:

$$\sum_{n=-\infty}^{\infty}\left(y[n] + a_1 y[n-1] + a_2 y[n-2]\right)z^{-n} = \sum_{n=0}^{\infty}\left(b_0 x[n] + b_1 x[n-1]\right)z^{-n} = \left(b_0 + b_1 z^{-1}\right)X(z)$$

The two-sided z-transform of y[n] is found as usual:

$$Y(z) = \frac{b_0 + b_1 z^{-1}}{1 + a_1 z^{-1} + a_2 z^{-2}}\, X(z)$$

But the two-sided z-transform of y[n] can be split into two sums:

$$\sum_{n=-\infty}^{\infty} y[n]z^{-n} = \sum_{n=-\infty}^{-1} y[n]z^{-n} + \sum_{n=0}^{\infty} y[n]z^{-n} = y[-2]z^{2} + y[-1]z + \sum_{n=0}^{\infty} y[n]z^{-n} = \frac{b_0 + b_1 z^{-1}}{1 + a_1 z^{-1} + a_2 z^{-2}}\, X(z)$$

$$\sum_{n=0}^{\infty} y[n]z^{-n} = \frac{b_0 + b_1 z^{-1}}{1 + a_1 z^{-1} + a_2 z^{-2}}\, X(z) - \left(y[-2]z^{2} + y[-1]z\right)$$

where we assumed y[n] = 0 for n < −2. If we denote the unilateral transform Σ_{n=0}^{∞} y[n]z^{-n} by 𝒴(z), the relation between Y(z) and 𝒴(z) becomes

$$\mathcal{Y}(z) = Y(z) - y[-2]z^{2} - y[-1]z$$

The values y[−2] and y[−1] are called the initial conditions. Obviously, should the initial conditions vanish, the one-sided and two-sided transforms become identical.
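In the time domain the role of the initial conditions is transparent: the recursion simply starts from the stored values y[−1] and y[−2]. The Python sketch below (my own illustration; the coefficients are arbitrary) runs the second-order system above with zero input, so the response is driven entirely by the initial conditions.

```python
# y[n] = -a1*y[n-1] - a2*y[n-2] + b0*x[n] + b1*x[n-1], here with x[n] = 0 for all n
a1, a2 = -1.2, 0.5            # arbitrary illustrative coefficients
y_m1, y_m2 = 1.0, 0.0         # initial conditions y[-1], y[-2]

y = []
for n in range(10):
    yp1 = y[n - 1] if n - 1 >= 0 else y_m1                              # y[n-1]
    yp2 = y[n - 2] if n - 2 >= 0 else (y_m1 if n - 2 == -1 else y_m2)   # y[n-2]
    y.append(-a1 * yp1 - a2 * yp2)                                      # zero input
print([round(v, 4) for v in y])
# Nonzero output with zero input: the initial conditions alone excite the system.
```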

6.6.1 Difference Equations in LabVIEW

Figure 6.6: Time-delay operations in LabVIEW. The first version, shown in (a), can be used in FOR and WHILE loops; it is extendable to any number of delays and accepts initial conditions. The second version, in (b), uses a loop set to run only once; no initial condition is connected to the register.

Rather than handling the transformed equations in the z-plane, difference equations can be solved directly in the discrete-time domain. Recall that the z-transform of a sequence delayed by one sample is Z{x[n−1]} = z^{-1}X(z). LabVIEW has a facility called the register port on FOR and WHILE loops. Using the register port one can introduce delays in discrete time and draw the block diagram of the difference equation. As shown in Fig. 6.6, there are two ways to introduce time delays in LabVIEW. Registers can be added to the loop and then extended to multiple delays on the left wall of the loop; initial conditions for x[−1], x[−2], ..., x[−n] can be wired as seen in the figure, and the loop can iterate as many times as needed. The second version uses a loop set to run only once; the register input becomes available as an output on the next run. Since the output will be needed on the next run, no initial conditions may be connected to the register.

Example 36. As an example of the one-sided z-transform, consider the following iterative algorithm to compute square roots. The algorithm starts with an initial guess for √A; call it x₀. We assume the iteration produces a sequence of numbers xₙ which converges to √A, and we seek a difference equation that computes √A for a positive number A.

Since the sequence is assumed to converge to the square root,

$$\lim_{n\to\infty} x_n = \sqrt{A} \qquad (6.21)$$

Due to the convergence of the sequence we must also have

$$\lim_{n\to\infty} x_n = \lim_{n\to\infty} x_{n-1} \qquad (6.22)$$

Squaring Eqn. 6.21 and using Eqn. 6.22 we obtain

$$\lim_{n\to\infty} x_n^{2} = A$$

$$\lim_{n\to\infty}\left(2x_n^{2} - x_n^{2}\right) = A$$

$$\lim_{n\to\infty}\left(2x_n x_{n-1} - x_{n-1}^{2}\right) = A$$

$$2\lim_{n\to\infty} x_n x_{n-1} = \lim_{n\to\infty} x_{n-1}^{2} + A$$

$$\lim_{n\to\infty} x_n x_{n-1} = \frac{1}{2}\lim_{n\to\infty}\left(x_{n-1}^{2} + A\right)$$

$$\lim_{n\to\infty} x_n = \lim_{n\to\infty}\left[\frac{1}{2}\left(x_{n-1} + \frac{A}{x_{n-1}}\right)\right]$$


We can drop the limit operation and replace the equality with an approximation to arrive at the iteration in the form of a difference equation

$$x_n \approx \frac{1}{2}\left(x_{n-1} + \frac{A}{x_{n-1}}\right) \qquad (6.23)$$

with initial condition

$$x_0 = G \qquad (6.24)$$

We start the computation by assigning a guess G to x₀. This is a very interesting and very fast algorithm; in just a few iterations one obtains the square root with great accuracy, the number of iterations depending somewhat on the initial guess and the desired precision.

$$x_0 = G \;\;\text{(initial guess)}, \qquad x_1 = \frac{1}{2}\left(x_0 + \frac{A}{x_0}\right), \qquad x_2 = \frac{1}{2}\left(x_1 + \frac{A}{x_1}\right), \qquad \cdots$$

Figure 6.7 shows the LabVIEW implementation of the algorithm and a sample run for A = 24 and G = 7. The iteration stops when |xₙ − xₙ₋₁| ≤ 10⁻¹².
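For readers without LabVIEW, the same iteration takes a few lines of Python (an illustrative sketch of mine mirroring the stopping rule described above).

```python
def sqrt_iter(A, guess, tol=1e-12):
    """Iterate x_n = (x_{n-1} + A/x_{n-1}) / 2 from Eqn. 6.23 until successive values agree."""
    x_prev, x = guess, 0.5 * (guess + A / guess)
    steps = 1
    while abs(x - x_prev) > tol:
        x_prev, x = x, 0.5 * (x + A / x)
        steps += 1
    return x, steps

root, steps = sqrt_iter(24.0, 7.0)
print(root, steps)    # approx 4.898979485566356 in a handful of iterations
```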

Although the difference equation we have just derived looks elegant, it describes a nonlinear system and is very tough to analyze using the z-transform. Let us give it a try, at least to gain further insight. Transforming both sides of Eqn. 6.23 we obtain

$$X(z) = \frac{1}{2}\left[z^{-1}X(z) + A\,\mathcal{Z}\!\left\{\frac{1}{x_{n-1}}\right\}\right] \qquad (6.25)$$

$$\left(2 - z^{-1}\right)X(z) = A\,z^{-1}\,\mathcal{Z}\!\left\{\frac{1}{x_n}\right\}$$

Here Z{1/xₙ} is given by

$$\mathcal{Z}\!\left\{\frac{1}{x_n}\right\} = \sum_{n=0}^{\infty}\frac{z^{-n}}{x_n}$$

and unfortunately it cannot be evaluated in terms of X(z) using the properties and the transform table of this chapter, except for the unhelpful rearrangement

$$\mathcal{Z}\!\left\{\frac{1}{x_n}\right\} = \frac{1}{A}\left(2z - 1\right)X(z)$$

which is of no use for obtaining X(z) in closed form.

6.7 Frequency Response

As mentioned earlier, the frequency response of a discrete signal can be found by evaluating the z-transform of the sequence on the unit circle, provided that the ROC includes the unit circle, by setting z equal to e^{jω}. A discrete signal whose z-transform X(z) converges on the unit circle has a Fourier transform X(e^{jω}) with 0 ≤ ω < 2π.
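Numerically, this just means sampling X(z) at z = e^{jω}. The Python sketch below (an illustrative example I am adding, with a = 0.8 chosen arbitrarily) evaluates X(z) = 1/(1 − az^{-1}) for x[n] = aⁿu[n] on the unit circle and confirms it against a long truncated Fourier sum.

```python
import numpy as np

a = 0.8
w = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)   # sample frequencies
z = np.exp(1j * w)                                      # points on the unit circle

X_z = 1.0 / (1.0 - a / z)                               # z-transform evaluated at z = e^{jw}

n = np.arange(0, 200)                                   # truncated DTFT sum (a^n decays fast)
X_dtft = np.array([np.sum(a ** n * np.exp(-1j * wi * n)) for wi in w])

print(np.allclose(X_z, X_dtft, atol=1e-8))              # True: ROC |z| > 0.8 includes |z| = 1
```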

6.8 Applications of z-Transform

The z-transform can be successfully applied to solve difference equations that represent linear time-invariant discrete systems. Modern mathematical programs like MATLAB, MathCAD and SciLAB provide functions that accept the coefficients of linear time-invariant systems. LabVIEW has a good capacity for arithmetic on complex numbers; however, it is not practical to form the complex number z = re^{jθ} in the z-transform equations, let r and θ vary over a range, compute the equation over the region defined by r and θ, and then obtain two 2-dimensional functions for the real and imaginary parts. This is tedious; we rather recommend working with the z-transform results in the discrete-time domain. Here we will use LabVIEW to illustrate the application of the z-transform with an example. Suppose we have a system with the difference equation


Figure 6.7: Square root algorithm implemented iteratively using a difference equation. (a) LabVIEW block diagram; the iteration stops when the difference between successive values is less than 10⁻¹². (b) Front panel showing a sample run with A = 24 and G = 7. It takes only 5 iterations to calculate √24 = 4.89898.

$$y[n] = x[n] + 2\cos 10°\; y[n-1] - y[n-2]$$

Taking the z-transform of both sides we obtain

$$Y(z) = 2\cos 10°\; z^{-1}Y(z) - z^{-2}Y(z) + X(z)$$

Rearranging the terms we derive H(z) = Y(z)/X(z):

$$H(z) = \frac{1}{1 - 2\cos 10°\, z^{-1} + z^{-2}} = \frac{z^{2}}{z^{2} - 2\cos 10°\, z + 1}$$

The system has two zeros at z = 0 and poles at p₁ = e^{j10°} and p₂ = e^{−j10°}; its pole-zero diagram is shown in Fig. 6.8b. Since the system possesses poles on the unit circle it oscillates, with a frequency of (10/360)·(1/T_s) = f_s/36. Given nonzero initial conditions at y[−2] and/or y[−1], the system oscillates without any need for x[n]; hence we can remove the input. The LabVIEW implementation in the figure sets y[−2] = y[−1] = 1. The output of the oscillator is shown in Fig. 6.8(d). Note how the z^{-1} operators are implemented and how the initial conditions y[−2] = y[−1] = 1 are fed in LabVIEW.

Our second example is a 3-tap finite impulse response (FIR) filter given by the difference equation

$$y[n] = 0.25\,x[n] + 0.50\,x[n-1] + 0.25\,x[n-2]$$

The block diagram of this filter, its LabVIEW implementation, and its response to x[n] = 0.98ⁿ cos(πn/25) are shown in Fig. 6.9.
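A text-based rendering of the same filter (an illustrative Python sketch, not the LabVIEW block diagram of Fig. 6.9) applies the three-tap difference equation directly to x[n] = 0.98ⁿ cos(πn/25); being a symmetric moving average, it smooths the sequence while passing its slow oscillation almost unchanged.

```python
import numpy as np

n = np.arange(0, 200)
x = 0.98 ** n * np.cos(np.pi * n / 25)

b = np.array([0.25, 0.50, 0.25])                  # y[n] = 0.25x[n] + 0.5x[n-1] + 0.25x[n-2]
y = np.convolve(x, b)[: len(x)]                   # FIR filtering = convolution with the taps

# At low frequencies the gain is close to H(e^{j0}) = 0.25 + 0.5 + 0.25 = 1
print(round(float(np.max(np.abs(y))), 3), round(float(np.max(np.abs(x))), 3))
```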


Figure 6.8: Digital oscillator implemented in LabVIEW. The frequency of the oscillator is 1/36 of the sampling frequency. (a) Signal-flow graph, (b) pole-zero diagram, (c) LabVIEW block diagram, (d) output.

Figure 6.9: 3-tap FIR filter in LabVIEW. (a) Filter block diagram, (b) LabVIEW implementation using a unit-delay loop, (c) response to x[n] = 0.98ⁿ cos(πn/25).


Figure: Problem 2.

Problems

1. Fig. 6.5 shows how x[n] = −2·0.5^{n−1}u[n−1] + 10n·0.5ⁿu[n] is implemented in LabVIEW.
(a) Build this VI and run it.
(b) Identify the terms of the equation and interpret the realization.

2. The system with transfer function X(z) = (3z+1)/(z−0.5)² has the unit impulse response x[n] = −2·0.5^{n−1}u[n−1] + 10n·0.5ⁿu[n].
(a) Derive the difference equation for this system.
(b) Implement the system in LabVIEW as shown in the Problem 2 figure.
(c) Verify that this block diagram produces the output x[n] = −2·0.5^{n−1}u[n−1] + 10n·0.5ⁿu[n].


Chapter 7

Power Series

A linear combination of power terms (z − z₀)ⁿ for −∞ ≤ n ≤ +∞ is called a complex power series, denoted by

$$f(z) = \ldots + a_{-2}(z-z_0)^{-2} + a_{-1}(z-z_0)^{-1} + a_0 + a_1(z-z_0) + a_2(z-z_0)^{2} + \ldots = \sum_{n=-\infty}^{\infty} a_n\,(z-z_0)^{n}$$

A power series in powers of z that converges for every z defines an entire function. Interestingly, complex functions which are analytic in some region can be evaluated using a power series at points in that region. Such functions can be expanded in a Taylor series about a point z₀ where f is analytic; only nonnegative powers are involved in the Taylor series, and the Maclaurin series is its special case. If there exists a point in the region at which f fails to be analytic, then another series, the Laurent series, evaluates f(z) in that region. In contrast to the Taylor series, the Laurent series comprises negative as well as positive powers of z − z₀. For functions which are analytic throughout the region, the Laurent series reduces to the Taylor series. The reader is referred to [Brown and Churchill] for further study of these series.

7.1 Taylor Series

If f(z) is analytic in some domain |z − z₀| < R, then the function can be expanded into a Taylor series:

$$f(z) = \sum_{n=0}^{\infty}\frac{f^{(n)}(z_0)}{n!}\,(z-z_0)^{n} \qquad (7.1)$$

The coefficient of the n-th power is f^{(n)}(z₀)/n!, with f^{(n)}(z₀) being the n-th derivative evaluated at z₀; f^{(0)}(z₀) = f(z₀) is the function itself evaluated at z₀.

Example 37. Using the fact that sin(π/6) = 0.5, calculate sin(π/6 + j0.5) using the Taylor series.

We are given z₀ = π/6, z = π/6 + j0.5, z − z₀ = j0.5. The Taylor series for f(z) = sin z is

$$\sin z = \frac{f^{(0)}(z_0)}{0!}(z-z_0)^{0} + \frac{f^{(1)}(z_0)}{1!}(z-z_0)^{1} + \frac{f^{(2)}(z_0)}{2!}(z-z_0)^{2} + \frac{f^{(3)}(z_0)}{3!}(z-z_0)^{3} + \frac{f^{(4)}(z_0)}{4!}(z-z_0)^{4} + \ldots$$

$$= \sin z_0 + \cos z_0\,(z-z_0) - \frac{\sin z_0}{2}(z-z_0)^{2} - \frac{\cos z_0}{6}(z-z_0)^{3} + \frac{\sin z_0}{24}(z-z_0)^{4} + \ldots$$

$$= 0.5 + \frac{\sqrt{3}}{2}(j0.5) - \frac{1}{4}(j0.5)^{2} - \frac{\sqrt{3}}{12}(j0.5)^{3} + \frac{1}{48}(j0.5)^{4} + \ldots$$

$$= 0.5638 + j0.4511$$

This can be verified by using sin z = sin x cosh y + j cos x sinh y. MATLAB evaluation yields sin(π/6 + j0.5) = 0.5638 + j0.4513.

7.2 Maclaurin Series

The Maclaurin series is the Taylor series taken about z₀ = 0:

$$f(z) = \sum_{n=0}^{\infty}\frac{f^{(n)}(0)}{n!}\,z^{n} \qquad (7.2)$$



Example 38. Calculate sin(π/6 + j0.5), this time using the Maclaurin series.

We are given z₀ = 0 and z = π/6 + j0.5. The Maclaurin series for f(z) = sin z is

$$\sin z = \frac{f^{(0)}(0)}{0!}z^{0} + \frac{f^{(1)}(0)}{1!}z^{1} + \frac{f^{(2)}(0)}{2!}z^{2} + \frac{f^{(3)}(0)}{3!}z^{3} + \frac{f^{(4)}(0)}{4!}z^{4} + \ldots$$

$$= \sin 0 + \cos 0\cdot z - \frac{\sin 0}{2}z^{2} - \frac{\cos 0}{6}z^{3} + \frac{\sin 0}{24}z^{4} + \ldots$$

$$= z - \frac{1}{6}z^{3} + \frac{1}{120}z^{5} - \ldots$$

$$= (\pi/6 + j0.5) - \frac{1}{6}(\pi/6 + j0.5)^{3} + \frac{1}{120}(\pi/6 + j0.5)^{5} - \ldots$$

$$= 0.5638 + j0.4513$$
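Both examples are easy to reproduce with a few lines of Python (an illustrative check I am adding; it uses cmath rather than MATLAB).

```python
import cmath, math

z0 = math.pi / 6
z = z0 + 0.5j
h = z - z0                     # = j0.5

# Taylor series of sin about z0 = pi/6 (five terms, as in Example 37)
taylor = (math.sin(z0) + math.cos(z0) * h - math.sin(z0) / 2 * h**2
          - math.cos(z0) / 6 * h**3 + math.sin(z0) / 24 * h**4)

# Maclaurin series of sin (terms up to z^5, as in Example 38)
maclaurin = z - z**3 / 6 + z**5 / 120

print(taylor)       # approx (0.5638+0.4511j)
print(maclaurin)    # approx (0.5638+0.4513j)
print(cmath.sin(z)) # exact value for comparison
```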

7.3 Laurent Series

If f(z) is analytic in a region D: |z − z₀| < R but not at z = z₀ itself, the Taylor series cannot be used to evaluate f in that region. If C is a simple closed contour in D encircling z₀, then f(z) can be expanded in a Laurent series. While the powers of z − z₀ in a Taylor series are non-negative integers, the Laurent series includes negative as well as positive powers:

$$f(z) = \sum_{n=-\infty}^{-1} b_n\,(z-z_0)^{n} + \sum_{n=0}^{\infty} a_n\,(z-z_0)^{n} \qquad (7.3)$$

The coefficients of the positive and negative powers are given by the contour integrals

$$a_n = \frac{1}{2\pi j}\oint_C \frac{f(z)}{(z-z_0)^{n+1}}\,dz, \quad n \ge 0, \qquad b_n = \frac{1}{2\pi j}\oint_C \frac{f(z)}{(z-z_0)^{n+1}}\,dz, \quad n \le -1 \qquad (7.4)$$

Problems

1. Calculate cos(π/3 + j0.1) using the definition of the cos function. Then, using 5 terms of the series, calculate an approximation to cos(π/3 + j0.1) using
(a) the Taylor series expansion,
(b) the Maclaurin series expansion.


Bibliography

[Cardano] Gerolamo Cardano, "Artis magnae, sive de regulis algebraicis" (also known as "Ars magna"), Nuremberg, 1545.

[Euler] Leonhard Euler, "Elements of Algebra."

[Feynman] Richard Feynman, "Chapter 22: Algebra," The Feynman Lectures on Physics, Volume I, p. 10, June 1970.

[Gamow] G. Gamow, "One Two Three... Infinity: Facts and Speculations of Science," Bantam Books, 9th printing, 1960.

[Brown and Churchill] J. W. Brown, R. V. Churchill, "Complex Variables and Applications," 8th edition, McGraw-Hill, 2009, ISBN 978-0-07-305194-9.

[Sadiku] Alexander Sadiku, "Fundamentals of Electric Circuits," 3rd edition, McGraw-Hill, 2007, ISBN 978-0-07-110582-8.

[Kuo] Franklin F. Kuo, "Network Analysis and Synthesis," 2nd edition, Wiley International, 1966, ISBN 0-471-51118-8.

[1] "Engineering Mathematics."

[LabVIEW] http://www.ni.com/

[Matlab] http://www.mathworks.com/

[wxMaxima]

[Michelson] 1898.

[Wilbraham] 1848.

[Gibbs] 1898.

[Özhan] Yıldız Teknik Üniversitesi Fen Bilimleri Enstitüsü, 30 April 1999.

[TIMIT] University of Pennsylvania, http://www.ldc-upenn.edu


