
Upload: istas

Post on 13-Jan-2016


TRANSCRIPT

Page 1: Embedded System

1

Embedded System

Lecturer: 黃永賜

Page 2: Embedded System

2

Outline

• Introduction to Embedded Systems
 – What is an embedded system?
• Introduction to RTOS (ucosii)
 – Background/foreground systems
 – Advantages and disadvantages of RTOS kernels
• Background/foreground solution for IP cam
• Embedded software modules
 – Keyboard software module
 – LED software module

Page 3: Embedded System

3

What is Embedded system

An embedded system is a special-purpose computer system designed for a specific application and completely embedded inside the device it controls. Unlike a general-purpose computer system such as a personal computer, an embedded system usually performs pre-defined tasks with specific requirements.

Page 4: Embedded System

4

Application of Embedded system

• Avionics, such as inertial navigation systems, flight control hardware and software, and other integrated systems in aircraft and missiles.
• Mobile phones.
• Computer network equipment, including routers, time servers, and firewalls.
• Printers.
• Copiers.
• Disk drives (floppy and hard disk drives).
• Automotive engine controllers and anti-lock braking systems (ABS).
• Home automation products, such as thermostats, air conditioners, sprinklers, and security monitoring systems.
• Handheld calculators.
• Home appliances, including microwave ovens, washing machines, televisions, and DVD players and recorders.
• Medical equipment.
• Test equipment, such as digital storage oscilloscopes, logic analyzers, and spectrum analyzers.
• Multi-function watches.
• …

A wide range of applications = plenty of job opportunities!!

Page 5: Embedded System

5

Advantages and Disadvantages of RTOS kernels

Advantages:
• Applications can be designed and expanded easily.
• Splitting the application into separate tasks simplifies the design.
• Valuable services such as semaphores, mailboxes, queues, and time delays.

Disadvantages:
• More ROM/RAM usage.
• 4 percent additional CPU overhead.
• Extra cost (royalties, maintenance, …).

Page 6: Embedded System

6

Introduction to RTOS (ucosii)

Page 7: Embedded System

7

Introduction (1)

• Critical Section of Code
• Resource
• Shared Resource
• Multitasking
• Task
• Context Switch (or Task Switch)
• Kernel
• Scheduler
• Non-Preemptive Kernel
• Preemptive Kernel
• Reentrancy

Page 8: Embedded System

8

Introduction (2)

• Task Priority
• Static Priority
• Dynamic Priority
• Priority Inversion
• Assigning Task Priority
• Mutual Exclusion
• Deadlock

Page 9: Embedded System

9

Critical Section of Code
• A critical section of code, also called a critical region, is code that needs to be treated indivisibly. Once the section of code starts executing, it must not be interrupted. To ensure this, interrupts are typically disabled before the critical code is executed and enabled when the critical code is finished (see also Shared Resource).

Resources
• A resource is any entity used by a task. A resource can thus be an I/O device such as a printer, a keyboard, or a display, or a variable, a structure, an array, etc.

Shared Resources

• A shared resource is a resource that can be used by more than one task. Each task should gain exclusive access to the shared resource to prevent data corruption. This is called Mutual Exclusion and techniques to ensure mutual exclusion are discussed in section Mutual Exclusion.

Page 10: Embedded System

10

Multitasking

• Multitasking is the process of scheduling and switching the CPU (Central Processing Unit) between several tasks; a single CPU switches its attention between several sequential tasks.

• Multitasking is like foreground/background with multiple backgrounds.

• Multitasking maximizes the utilization of the CPU and also provides for modular construction of applications.

• One of the most important aspects of multitasking is that it allows the application programmer to manage complexity inherent in real-time applications.

• Application programs are typically easier to design and maintain if multitasking is used.

Page 11: Embedded System

11

Task (1/3)

A task, also called a thread, is a simple program that thinks it has the CPU all to itself.

The design process for a real-time application involves splitting the work to be done into tasks which are responsible for a portion of the problem.

Each task is assigned a priority, its own set of CPU registers, and its own stack area.

[Figure: several tasks (TASK #1 … TASK #n) resident in memory. Each task has its own stack, and its Task Control Block records the task's stack pointer (SP), priority, and status; the CPU registers hold the context of the currently running task.]

Page 12: Embedded System

12

Tasks (2/3)

Each task is typically an infinite loop that can be in any one of five states: DORMANT, READY, RUNNING, WAITING FOR AN EVENT, or INTERRUPTED.

• The DORMANT state corresponds to a task that resides in memory but has not been made available to the multitasking kernel.
• A task is READY when it can execute but its priority is lower than that of the currently running task.
• A task is RUNNING when it has control of the CPU.
• A task is WAITING FOR AN EVENT when it requires the occurrence of an event (waiting for an I/O operation to complete, a shared resource to become available, a timing pulse to occur, a time delay to expire, etc.).
• Finally, a task is INTERRUPTED when an interrupt has occurred and the CPU is in the process of servicing the interrupt.

Page 13: Embedded System

13

Tasks (3/3)

[Task state transition diagram:
• DORMANT → READY: OSTaskCreate(), OSTaskCreateExt()
• READY → RUNNING: OSStart(), OS_TASK_SW(), OSIntExit()
• RUNNING → READY: task is preempted
• RUNNING → WAITING: OSMBoxPend(), OSQPend(), OSSemPend(), OSTaskSuspend(), OSTimeDly(), OSTimeDlyHMSM()
• WAITING → READY: OSMBoxPost(), OSQPost(), OSQPostFront(), OSSemPost(), OSTaskResume(), OSTimeDlyResume(), OSTimeTick()
• RUNNING → ISR: interrupt; ISR → RUNNING: OSIntExit()
• READY, RUNNING, WAITING → DORMANT: OSTaskDel()]

Page 14: Embedded System

14

Context Switch (or Task Switch)
• When a multitasking kernel decides to run a different task, it simply saves the current task's context (CPU registers) in the current task's context storage area: its stack.
• Once this operation is performed, the new task's context is restored from its storage area, and execution of the new task's code resumes. This process is called a context switch or a task switch.
• Context switching adds overhead to the application. The more registers a CPU has, the higher the overhead. The time required to perform a context switch is determined by how many registers have to be saved and restored by the CPU.

Kernel
• The kernel is the part of a multitasking system responsible for the management of tasks (that is, for managing the CPU's time) and for communication between tasks.
• The fundamental service provided by the kernel is context switching.
• The use of a real-time kernel generally simplifies the design of systems by allowing the application to be divided into multiple tasks managed by the kernel.
• A kernel adds overhead to your system because it requires extra ROM (code space) and additional RAM for the kernel data structures, but most importantly, each task requires its own stack space, which tends to eat up RAM quite quickly. A kernel will also consume CPU time (typically between 2 and 5%).
• A kernel can allow you to make better use of your CPU by providing indispensable services such as semaphore management, mailboxes, queues, and time delays.

Page 15: Embedded System

15

Scheduler (1/3)

The scheduler, also called the dispatcher, is the part of the kernel responsible for determining which task will run next.

Most real-time kernels are priority based. Each task is assigned a priority based on its importance. The priority for each task is application specific.

In a priority-based kernel, control of the CPU will always be given to the highest priority task ready-to-run.

When the highest-priority task gets the CPU, however, is determined by the type of kernel used. There are two types of priority-based kernels: non-preemptive and preemptive.

Page 16: Embedded System

16

Scheduler (2/3)

[Figure (a): non-preemptive scheduling. Tasks A, B, C, D, E run over time under a scheduler with task priorities 1 > 2 > 3; interrupt B makes task D ready to run. And then?]

• Advantages
 – Low interrupt latency
 – Non-reentrant code can be used
 – Less effort is needed to protect shared memory
• Drawback
 – Time response cannot be evaluated

Page 17: Embedded System

17

Scheduler (3/3)

[Figure (b): preemptive scheduling. Tasks A, B, C, D, E run over time under a scheduler with task priorities 1 > 2 > 3; interrupt B makes task D ready to run. And then?]

• Advantage
 – Time response is deterministic
• Drawbacks
 – The scheduling is more complex than non-preemptive
 – The reentrancy of routines must be considered
 – More effort is needed to protect shared memory

Page 18: Embedded System

18

Non-Preemptive Kernel (1/3)
• Non-preemptive kernels require that each task does something to explicitly give up control of the CPU. Non-preemptive scheduling is also called cooperative multitasking; tasks cooperate with each other to share the CPU.
• Asynchronous events are handled by ISRs. An ISR can make a higher priority task ready to run, but the ISR always returns to the interrupted task. One of the advantages of a non-preemptive kernel is that interrupt latency is typically low (see the later discussion on interrupts).

At the task level, non-preemptive kernels can also use non-reentrant functions (discussed later). Non-reentrant functions can be used by each task without fear of corruption by another task. This is because each task can run to completion before it relinquishes the CPU.  

Another advantage of non-preemptive kernels is the lesser need to guard shared data through the use of semaphores. Each task owns the CPU and you don't have to fear that a task will be preempted.

In some instances, semaphores should still be used. Shared I/O devices may still require the use of mutual exclusion semaphores; for example, a task might still need exclusive access to a printer.

Page 19: Embedded System

19

Non-Preemptive Kernel (2/3)
• A task is executing (1) but gets interrupted.
• If interrupts are enabled, the CPU vectors (i.e., jumps) to the ISR (2).
• The ISR handles the event (3) and makes a higher priority task ready-to-run.
• Upon completion of the ISR, a Return From Interrupt instruction is executed and the CPU returns to the interrupted task (4).
• The task code resumes at the instruction following the interrupted instruction (5).
• When the task code completes, it calls a service provided by the kernel to relinquish the CPU to another task (6).
• The new higher priority task then executes to handle the event signaled by the ISR (7).

[Figure: the ISR makes the high priority task ready, but the low priority task must relinquish the CPU before the high priority task can run.]

Page 20: Embedded System

20

Non-Preemptive Kernel (3/3)

• The most important drawback of a non-preemptive kernel is responsiveness.

• A higher priority task that has been made ready to run may have to wait a long time to run, because the current task must give up the CPU when it is ready to do so.

• As with background execution in foreground/background systems, task-level response time in a non-preemptive kernel is non-deterministic; you never really know when the highest priority task will get control of the CPU. It is up to your application to relinquish control of the CPU. 

• To summarize, a non-preemptive kernel allows each task to run until it voluntarily gives up control of the CPU.

• An interrupt will preempt a task. Upon completion of the ISR, the ISR will return to the interrupted task. Task-level response is much better than with a foreground/background system but is still non-deterministic.

• Very few commercial kernels are non-preemptive.

Page 21: Embedded System

21

Preemptive Kernel (1/3)

A preemptive kernel is used when system responsiveness is important.

Because of this, µC/OS-II and most commercial real-time kernels are preemptive.

The highest priority task ready to run is always given control of the CPU.

When a task makes a higher priority task ready to run, the current task is preempted (suspended) and the higher priority task is immediately given control of the CPU.

If an ISR makes a higher priority task ready, when the ISR completes, the interrupted task is suspended and the new higher priority task is resumed.

Page 22: Embedded System

22

Preemptive Kernel (2/3)
• A task is executing (1) but is interrupted.
• If interrupts are enabled, the CPU vectors (i.e., jumps) to the ISR (2).
• The ISR handles the event (3) and makes a higher priority task ready-to-run.
• Upon completion of the ISR, a service provided by the kernel is invoked. This function knows that a more important task has been made ready to run, and thus, instead of returning to the interrupted task, the kernel performs a context switch and executes the code of the more important task (4).
• When the more important task is done, another kernel-provided function is called to put the task to sleep waiting for the event (i.e., the ISR) to occur (5).
• The kernel then sees that a lower priority task needs to execute, and another context switch is done to resume execution of the interrupted task (6)(7).

[Figure: the ISR makes the high priority task ready; the high priority task preempts the low priority task immediately after the ISR completes.]

Page 23: Embedded System

23

Preemptive Kernel (3/3)

• With a preemptive kernel, execution of the highest priority task is deterministic; you can determine when the highest priority task will get control of the CPU.

• Task-level response time is thus minimized by using a preemptive kernel.  

• Application code using a preemptive kernel should not make use of non-reentrant functions unless exclusive access to these functions is ensured through the use of mutual exclusion semaphores, because both a low priority task and a high priority task can make use of a common function. Corruption of data may occur if the higher priority task preempts a lower priority task that is making use of the function.  

• To summarize, a preemptive kernel always executes the highest priority task that is ready to run. An interrupt will preempt a task. Upon completion of an ISR, the kernel will resume execution to the highest priority task ready to run (not the interrupted task).

• Task-level response is optimum and deterministic. • µC/OS-II is a preemptive kernel.

Page 24: Embedded System

24

Reentrant Functions (1/4)

A reentrant function is a function that can be used by more than one task without fear of data corruption.

A reentrant function can be interrupted at any time and resumed at a later time without loss of data.

Reentrant functions either use local variables (i.e., CPU registers or variables on the stack) or protect data when global variables are used.

void strcpy(char *dest, char *src)
{
    /* The assignment copies each character, including the terminating
       NUL, which also ends the loop. */
    while (*dest++ = *src++) {
        ;
    }
}

Because copies of the arguments to strcpy() are placed on the task's stack, strcpy() can be invoked by multiple tasks without fear that the tasks will corrupt each other's pointers.

Page 25: Embedded System

25

Reentrant Functions (2/4)

An example of a non-reentrant function is shown in listing 2.2. swap() is a simple function that swaps the contents of its two arguments.

For the sake of discussion, assume that you are using a preemptive kernel, that interrupts are enabled, and that Temp is declared as a global integer:

int Temp;

void swap(int *x, int *y)
{
    Temp = *x;
    *x   = *y;
    *y   = Temp;
}

Page 26: Embedded System

26

Reentrant Functions (3/4)
• The following figure shows what could happen if a low priority task is interrupted while swap() (1) is executing. Note that at this point Temp contains 1.
• The ISR makes the higher priority task ready to run, so at the completion of the ISR (2), the kernel (assuming µC/OS-II) is invoked to switch to this task (3).
• The high priority task sets Temp to 3 and swaps the contents of its variables correctly (that is, z is 4 and t is 3).
• The high priority task eventually relinquishes control to the low priority task (4) by calling a kernel service to delay itself for 1 clock tick (described later).
• The lower priority task is thus resumed (5). Note that at this point, Temp is still set to 3! When the low priority task resumes execution, it sets y to 3 instead of 1.

[Figure: the low priority task runs swap(&x, &y) with x = 1, y = 2 and is interrupted just after Temp = *x (Temp == 1). Through the ISR and OSIntExit(), the kernel switches to the high priority task, which runs z = 3; t = 4; swap(&z, &t); leaving Temp == 3, and then calls OSTimeDly(1). The low priority task resumes with Temp == 3, so *y receives 3 instead of 1.]

Page 27: Embedded System

27

Reentrant Functions (4/4)

An error caused by a non-reentrant function may not show up in your application during the testing phase; it will most likely occur once the product has been delivered!

If you are new to multitasking, you will need to be careful when using non-reentrant functions.  

We can make swap() reentrant by using one of the following techniques:

a) Declare Temp local to swap(). b) Disable interrupts before the operation and enable them after. c) Use a semaphore (described later).

If the interrupt occurs either before or after swap(), the x and y values for both tasks will be correct.
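Technique (a) is the simplest of the three. A minimal sketch of swap() made reentrant by declaring Temp local:

```c
/* Technique (a): Temp is declared local to swap(), so each caller's
   copy lives on that task's own stack and the function is reentrant. */
void swap(int *x, int *y)
{
    int Temp;    /* local, not global: safe to preempt at any point */

    Temp = *x;
    *x   = *y;
    *y   = Temp;
}
```

With this version, an ISR or a higher priority task can interrupt swap() anywhere without corrupting another task's values, because there is no shared Temp to clobber.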

Page 28: Embedded System

28

Task Priority
A priority is assigned to each task. The more important the task, the higher the priority given to it.

Static Priorities
• Task priorities are said to be static when the priority of each task does not change during the application's execution.
• Each task is thus given a fixed priority at compile time.
• All the tasks and their timing constraints are known at compile time in a system where priorities are static.

Dynamic Priorities
• Task priorities are said to be dynamic if the priority of tasks can be changed during the application's execution; each task can change its priority at run time.
• This is a desirable feature to have in a real-time kernel to avoid priority inversions.

Page 29: Embedded System

29

Priority Inversions (1/4)
• Priority inversion is a problem in real-time systems and occurs mostly when you use a real-time kernel.
• The following figure illustrates a priority inversion scenario. Task#1 has a higher priority than Task#2, which in turn has a higher priority than Task#3.
• Task#1 and Task#2 are both waiting for an event to occur, and thus Task#3 is executing (1). At some point, Task#3 acquires a semaphore that it needs before it can access a shared resource (2).
• Task#3 performs some operations on the acquired resource (3) until it gets preempted by the high priority task, Task#1 (4).
• Task#1 executes for a while until it also wants to access the resource (5). Because Task#3 owns the resource, Task#1 will have to wait until Task#3 releases the semaphore.
• As Task#1 tries to get the semaphore, the kernel notices that the semaphore is already owned, and thus Task#1 gets suspended and Task#3 is resumed (6).
• Task#3 continues execution until it gets preempted by Task#2 because the event that Task#2 was waiting for occurred (7).
• Task#2 handles the event (8) and, when it is done, relinquishes the CPU back to Task#3 (9).
• Task#3 finishes working with the resource (10) and releases the semaphore (11).
• At this point, the kernel knows that a higher priority task is waiting for the semaphore, and a context switch is done to resume Task#1. Task#1 now has the semaphore and can access the shared resource (12).

Page 30: Embedded System

30

Priority Inversions (2/4)
• The priority of Task#1 has been virtually reduced to that of Task#3 because it was waiting for the resource that Task#3 owned. The situation was aggravated when Task#2 preempted Task#3, which further delayed the execution of Task#1.

[Figure: timeline of Task 1 (H), Task 2 (M), and Task 3 (L) showing the priority inversion, steps (1)–(12): Task 3 gets the semaphore, Task 1 preempts Task 3, Task 1 tries to get the semaphore and blocks, Task 2 preempts Task 3, Task 3 resumes and finally releases the semaphore.]

Page 31: Embedded System

31

Priority Inversions (3/4)
• You can correct this situation by raising the priority of Task#3 (above the priority of the other tasks competing for the resource) for the time Task#3 is accessing the resource, and restoring the original priority level when the task is finished.
• A multitasking kernel should allow task priorities to change dynamically to help prevent priority inversions.
• What is really needed to avoid priority inversion is a kernel that changes the priority of a task automatically. This is called priority inheritance.

[Figure: the same timeline with priority inheritance, steps (1)–(11): when Task 1 tries to get the semaphore, the priority of Task 3 is raised to Task 1's; Task 3 releases the semaphore, Task 1 resumes and completes, and only then does Task 2 run.]

Page 32: Embedded System

32

Priority Inversions (4/4)

• The figure illustrates what happens when a kernel supports priority inheritance. As with the previous example, Task#3 is running(1) and then acquires a semaphore to access a shared resource(2).

• Task#3 accesses the resource(3) and then gets preempted by Task#1 (4).

• Task#1 executes (5) and then tries to obtain the semaphore (6). The kernel sees that Task#3 has the semaphore but has a lower priority than Task#1. In this case, the kernel raises the priority of Task#3 to the same level as Task#1.

• The kernel then switches back to Task#3 so that this task can continue with the resource (7).

• When Task#3 is done with the resource, it releases the semaphore (8).

• At this point, the kernel reduces the priority of Task#3 to its original value and gives the semaphore to Task#1 which is now free to continue (9).

• When Task#1 is done executing (10), the medium priority task (i.e. Task#2) gets the CPU (11).

• Note that Task#2 could have been ready-to-run anytime between (3) and (10) without affecting the outcome. There is still some level of priority inversion but, this really cannot be avoided.

Page 33: Embedded System

33

Mutual Exclusion (1/1)

• The easiest way for tasks to communicate with each other is through shared data structures. This is especially easy when all the tasks exist in a single address space.

• Tasks can thus reference global variables, pointers, buffers, linked lists, ring buffers, etc.

• While sharing data simplifies the exchange of information, you must ensure that each task has exclusive access to the data to avoid contention and data corruption.

• The most common methods to obtain exclusive access to shared resources are:
 – a) Disabling interrupts
 – b) Test-And-Set
 – c) Disabling scheduling
 – d) Using semaphores

Page 34: Embedded System

34

Disabling and Enabling Interrupts
• The easiest and fastest way to gain exclusive access to a shared resource is by disabling and enabling interrupts, as shown in the pseudo-code of listing 2.3.

Disable interrupts;

Access the resource (read/write from/to variables);

Reenable interrupts;

• µC/OS-II uses this technique (as do most, if not all kernels) to access internal variables and data structures. In fact, µC/OS-II provides two macros to allow you to disable and then enable interrupts from your C code: OS_ENTER_CRITICAL() and OS_EXIT_CRITICAL(), respectively.

void Function (void)
{
    OS_ENTER_CRITICAL();
    .
    .    /* You can access shared data in here */
    .
    OS_EXIT_CRITICAL();
}

• You must be careful, however, to not disable interrupts for too long because this affects the response of your system to interrupts. This is known as interrupt latency. Also, this is the only way that a task can share variables or data structures with an ISR.

• If you use a kernel, you are basically allowed to disable interrupts for as much time as the kernel does without affecting interrupt latency.

• Obviously, you need to know how long the kernel will disable interrupts.

Page 35: Embedded System

35

Test-And-Set (1/1)
• If you are not using a kernel, two functions could "agree" that, to access a resource, they must check a global variable; if the variable is 0, the function has access to the resource.
• To prevent the other function from accessing the resource, however, the first function that gets the resource simply sets the variable to 1.
• This is commonly called a Test-And-Set (or TAS) operation. The TAS operation must either be performed indivisibly (by the processor) or you must disable interrupts when doing the TAS on the variable, as shown in listing 2.5.

Disable interrupts;
if ('Access Variable' is 0) {
    Set variable to 1;
    Reenable interrupts;
    Access the resource;
    Disable interrupts;
    Set the 'Access Variable' back to 0;
    Reenable interrupts;
} else {
    Reenable interrupts;
    /* You don't have access to the resource, try back later; */
}

• Some processors actually implement a TAS operation in hardware (e.g. the 68000 family of processors have the TAS instruction).
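On toolchains with C11 atomic support, the same indivisible test-and-set is available portably; this sketch (not from the slides) replaces the disable/enable-interrupts pseudo-code with atomic_flag_test_and_set(), which performs the test and the set as one indivisible operation:

```c
#include <stdatomic.h>
#include <stdbool.h>

static atomic_flag in_use = ATOMIC_FLAG_INIT;   /* clear = resource free */

/* Try to claim the resource: returns true on success. The test and
   the set happen as a single indivisible operation, so two callers
   can never both see "free". */
bool resource_acquire(void)
{
    return !atomic_flag_test_and_set(&in_use);
}

/* Release the resource: set the 'access variable' back to 0. */
void resource_release(void)
{
    atomic_flag_clear(&in_use);
}
```

A caller that gets false simply does not have access and must try again later, exactly as in the pseudo-code above.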

Page 36: Embedded System

36

Disabling and enabling the scheduler (1/1)

• If your task is not sharing variables or data structures with an ISR, then you can disable and enable scheduling instead of disabling interrupts.

• You should note that while the scheduler is locked, interrupts are enabled and, if an interrupt occurs while in the critical section, the ISR will immediately be executed.

• At the end of the ISR, the kernel will always return to the interrupted task even if a higher priority task has been made ready-to-run by the ISR.

• The scheduler will be invoked when OSSchedUnlock() is called to see if a higher priority task has been made ready to run by the task or an ISR.

• A context switch will result if there is a higher priority task that is ready to run.

• Although this method works well, you should avoid disabling the scheduler because it defeats the purpose of having a kernel in the first place. The next method should be chosen instead.

void Function (void)
{
    OSSchedLock();
    .
    .    /* You can access shared data in here (interrupts are recognized) */
    .
    OSSchedUnlock();
}

Page 37: Embedded System

37

Semaphores (1/2)

• A semaphore is a protocol mechanism offered by most multitasking kernels. Semaphores are used to:

– a) control access to a shared resource (mutual exclusion); – b) signal the occurrence of an event; – c) allow two tasks to synchronize their activities.

• A semaphore is a key that your code acquires in order to continue execution.
• If the semaphore is already in use, the requesting task is suspended until the semaphore is released by its current owner. In other words, the requesting task says: "Give me the key. If someone else is using it, I am willing to wait for it!"
• There are two types of semaphores: binary semaphores and counting semaphores.
• As its name implies, a binary semaphore can only take two values: 0 or 1.
• A counting semaphore allows values between 0 and 255, 65535, or 4294967295, depending on whether the semaphore mechanism is implemented using 8, 16, or 32 bits, respectively. The actual size depends on the kernel used.

• Along with the semaphore's value, the kernel also needs to keep track of tasks waiting for the semaphore's availability. 

• There are generally only three operations that can be performed on a semaphore: INITIALIZE (also called CREATE), WAIT (also called PEND), and SIGNAL (also called POST).  

• The initial value of the semaphore must be provided when the semaphore is initialized. The waiting list of tasks is always initially empty.

Page 38: Embedded System

38

Semaphores (2/2)

• Semaphores are especially useful when tasks share I/O devices.
• Imagine what would happen if two tasks were allowed to send characters to a printer at the same time. The printer would contain interleaved data from each task. For instance, if task #1 tried to print "I am task #1!" and task #2 tried to print "I am task #2!", the printout could look like this:

I Ia amm t tasask k#1 #!2!

• In this case, we can use a semaphore and initialize it to 1 (i.e., a binary semaphore).
• The rule is simple: to access the printer, each task must first obtain the resource's semaphore. The tasks compete for the semaphore to gain exclusive access to the printer.
• Note that the semaphore is represented symbolically by a key, indicating that each task must obtain this key to use the printer.

[Figure: TASK 1 and TASK 2 each acquire the PRINTER SEMAPHORE (drawn as a key) before printing "I am task #1!" / "I am task #2!".]
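A minimal sketch of the printer scenario in portable C, using POSIX semaphores in place of the kernel's PEND/POST calls (the names printer_sem, print_task, and run_print_demo are illustrative, not from the slides). The binary semaphore starts at 1, and each task acquires it before writing its whole message:

```c
#include <stdio.h>
#include <string.h>
#include <pthread.h>
#include <semaphore.h>

static sem_t printer_sem;          /* the "key" to the printer          */
static char  printout[128];        /* stands in for the printer output  */

static void *print_task(void *msg)
{
    sem_wait(&printer_sem);                  /* PEND: obtain the key    */
    strcat(printout, (const char *)msg);     /* whole message at once,
                                                so no interleaving      */
    sem_post(&printer_sem);                  /* POST: return the key    */
    return NULL;
}

/* Run both "tasks" against the shared printer. */
void run_print_demo(void)
{
    pthread_t t1, t2;

    sem_init(&printer_sem, 0, 1);  /* binary semaphore, initial value 1 */
    pthread_create(&t1, NULL, print_task, (void *)"I am task #1!\n");
    pthread_create(&t2, NULL, print_task, (void *)"I am task #2!\n");
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
}
```

Whichever task gets the key first prints its complete line; the other waits, so both lines come out intact instead of interleaved.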

Page 39: Embedded System

39

Deadlock (or Deadly Embrace) (1/3)
• A deadlock, also called a deadly embrace, is a situation in which two tasks are each unknowingly waiting for resources held by the other.
• If task T1 has exclusive access to resource R1 and task T2 has exclusive access to resource R2, then if T1 needs exclusive access to R2 and T2 needs exclusive access to R1, neither task can continue. They are deadlocked.

Task 1
{
    ...
    Wait S1;      // wait for semaphore S1
    Access R1;    // get S1 and access resource R1
    ...
    Wait S2;      // wait for semaphore S2
    Access R2;    // get S2 and access resource R2
    Release S2;   // finish using R2 and release S2
    Release S1;   // finish using R1 and release S1
    ...
}

Task 2
{
    ...
    Wait S2;      // wait for semaphore S2
    Access R2;    // get S2 and access resource R2
    ...
    Wait S1;      // wait for semaphore S1
    Access R1;    // get S1 and access resource R1
    Release S1;   // finish using R1 and release S1
    Release S2;   // finish using R2 and release S2
    ...
}

Page 40: Embedded System

40

Deadlock (or Deadly Embrace) (2/3)

• The simplest way to avoid a deadlock is for tasks to:
 – a) acquire all resources before proceeding,
 – b) acquire the resources in the same order, and
 – c) release the resources in the reverse order.

Task 1
{
    ...
    Wait S1;      // wait for semaphore S1
    Access R1;    // get S1 and access resource R1
    Release S1;   // finish using R1 and release S1
    Wait S2;      // wait for semaphore S2
    Access R2;    // get S2 and access resource R2
    Release S2;   // finish using R2 and release S2
    ...
}

Task 2
{
    ...
    Wait S2;      // wait for semaphore S2
    Access R2;    // get S2 and access resource R2
    Release S2;   // finish using R2 and release S2
    Wait S1;      // wait for semaphore S1
    Access R1;    // get S1 and access resource R1
    Release S1;   // finish using R1 and release S1
    ...
}
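Rule (b), acquiring the semaphores in the same order in every task, can be sketched with POSIX mutexes standing in for S1 and S2 (use_r1_r2 is an illustrative name, not from the slides): because every task takes S1 before S2 and releases in reverse order, a circular wait can never form.

```c
#include <pthread.h>

static pthread_mutex_t S1 = PTHREAD_MUTEX_INITIALIZER;  /* guards R1 */
static pthread_mutex_t S2 = PTHREAD_MUTEX_INITIALIZER;  /* guards R2 */
static int R1, R2;

/* Every task acquires S1 then S2 and releases in the reverse order,
   so no cycle of tasks waiting on each other can form. */
void use_r1_r2(int v)
{
    pthread_mutex_lock(&S1);       /* Wait S1    */
    pthread_mutex_lock(&S2);       /* Wait S2    */
    R1 += v;                       /* Access R1  */
    R2 += v;                       /* Access R2  */
    pthread_mutex_unlock(&S2);     /* Release S2 */
    pthread_mutex_unlock(&S1);     /* Release S1 */
}
```

Any number of tasks may call use_r1_r2() concurrently; the fixed S1-then-S2 order is what rules the deadly embrace out.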

Page 41: Embedded System

41

Background /Foreground solution for IP cam

Page 42: Embedded System

42

Foreground/Background Systems

An application consists of an infinite loop that calls modules (that is, functions) to perform the desired operations (background). Interrupt Service Routines (ISRs) handle asynchronous events (foreground).

Foreground is also called interrupt level, while background is called task level. Critical operations must be performed by the ISRs to ensure that they are dealt with in a timely fashion.

The worst case task level response time depends on how long the background loop takes to execute. Because the execution time of typical code is not constant, the time for successive passes through a portion of the loop is non-deterministic.

[Figure: background code execution over time, periodically interrupted by foreground ISRs.]

Page 43: Embedded System

43

Background /Foreground solution for IP cam(1/6)

1. The architecture of the whole MPG45X firmware
 (1) The data flow of the whole firmware
 (2) The dma_task_scheduler() function
 (3) The dma_task_execute() function

2. How to communicate with the PC
 (1) Function pointers
 (2) How to add a new command to the function pointer table

Page 44: Embedded System

44

Background /Foreground solution for IP cam(2/6)

The data flow of the program

[Figure: user commands from the PC and interrupt commands from the ASIC enter a QUEUE through Dma_task_scheduler() (foreground, dispatched via a function pointer table); Dma_task_execute() (background) consumes the queue. Insertion sort is used, with priority deciding the insertion position.]

Page 45: Embedded System

45

Background /Foreground solution for IP cam(3/6)

Main program flow (background)

[Flowchart: initialize the chip; check whether there is information to send to the PC; if so, check whether the bus is idle and send it; otherwise (or when done) loop back.]

Page 46: Embedded System

46

Background /Foreground solution for IP cam(4/6)

Insertion sort
Dma_task_scheduler: uses a preemptible insertion sort. A new key K is inserted into a sorted circular array L; each element whose priority is lower than the key's is shifted down one position, so higher priority keys are always placed toward the front. To avoid starvation of low priority keys, each time an element is shifted, its priority is raised by one level.

Suppose the array size is four and events A, B, C, D occur in order with priorities 1, 3, 2, 4. The array contents evolve as follows:

Array index: 1    2    3    4
After A:     A(1)
After B:     B(3) A(2)                (A shifted once, priority raised to 2)
After C:     B(3) A(2) C(2)
After D:     D(4) B(4) A(3) C(3)      (B, A, C each shifted once)

Dma_task_execute: simply takes entries from the head of the array, retrieving D, B, A, C in order, and decides what to do based on each event's content.
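The scheduler's insert step can be sketched in self-contained C (sched_insert and sched_execute are illustrative stand-ins for Dma_task_scheduler()/Dma_task_execute(), and a plain linear array replaces the firmware's circular one). It reproduces the D, B, A, C order from the example above:

```c
#define QSIZE 4

typedef struct { char event; int prio; } Entry;

static Entry q[QSIZE];
static int   q_len;

/* Insert event 'e' with priority 'p' in front of all lower-priority
   entries; every entry that gets shifted has its priority raised by
   one level so low-priority events cannot starve. */
void sched_insert(char e, int p)
{
    int i = q_len;

    while (i > 0 && q[i - 1].prio < p) {   /* shift lower-priority keys */
        q[i] = q[i - 1];
        q[i].prio++;                       /* aging: one level per shift */
        i--;
    }
    q[i].event = e;
    q[i].prio  = p;
    q_len++;
}

/* Take the next event from the head of the queue. */
char sched_execute(void)
{
    char e = q[0].event;
    int  i;

    for (i = 1; i < q_len; i++) {
        q[i - 1] = q[i];
    }
    q_len--;
    return e;
}
```

Inserting A(1), B(3), C(2), D(4) leaves the array as D(4), B(4), A(3), C(3), matching the trace on the slide.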

Page 47: Embedded System

47

Background /Foreground solution for IP cam(5/6)

Function Pointer (函式指標) (1/2)

code void (code *Func_Table[])(void) =
{
    init_mpg45x,
    reset_mpg45x,
    start_video,
    start_empg,
    start_dmpg,
    start_display,
    start_eadpcm,
    start_dadpcm,
    set_video_size,
    set_video_frame_rate,
    set_video_motion_ctrl,
    set_video_motion_region,
    ........
};

void pcic51_int_hander(void)
{
    uchar SDATA cmd;
    uchar SDATA status = ptr_reg[PCIC_INT_STS];

    if (status & PCIC_MPU_READ_PARA) {
        cmd = ptr_reg[PCIC_PC2MPU_CMD3];
        ptr_reg[PCIC_MPU2PC_CMD3] = MSG_SUCCESS;
        (*Func_Table[cmd])();    /* dispatch the command through the table */
        .......
        .......
    }
}
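The same dispatch pattern can be sketched in standard C, without the Keil C51 `code` memory qualifiers; the handler names here (cmd_init, cmd_reset, cmd_start) are illustrative stand-ins for the firmware's functions. The received command byte indexes straight into the table, so the handler stays a few lines long no matter how many commands exist:

```c
static int last_action = -1;            /* records which handler ran */

static void cmd_init(void)  { last_action = 0; }
static void cmd_reset(void) { last_action = 1; }
static void cmd_start(void) { last_action = 2; }

/* One function pointer per command, ordered to match the command
   codes the host sends. */
static void (*func_table[])(void) = { cmd_init, cmd_reset, cmd_start };

/* Dispatch a received command byte through the table; out-of-range
   codes are ignored rather than jumping through a garbage pointer. */
void dispatch(unsigned char cmd)
{
    if (cmd < sizeof(func_table) / sizeof(func_table[0])) {
        (*func_table[cmd])();           /* one indirect call, no switch */
    }
}
```

Adding a new command means appending one entry here and the matching enumerator on the host side, in the same position.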

Page 48: Embedded System

48

Background /Foreground solution for IP cam(6/6)

Function Pointer (函式指標) (2/2)

typedef enum
{
    INIT_MPG45X,
    RESET_MPG45X,
    START_VIDEO,
    START_EMPG,
    START_DMPG,
    START_DISPLAY,
    START_EADPCM,
    START_DADPCM,
    SET_VIDEO_SIZE,
    SET_VIDEO_FRAME_RATE,
    SET_VIDEO_MOTION_CTRL,
    SET_VIDEO_MOTION_REGION,
    SET_VIDEO_DEINTER_MODE,
    ........
} MPU_COMMAND;

The order of the commands in this table, shared between the driver and the DLL, must match the function pointer index (order) inside the 8051. Otherwise the executed command will not match the one issued; for example, command a is sent but the 8051 executes command b.

Page 49: Embedded System

49

Keyboard software module

Page 50: Embedded System

50

Key Boards (1/7)

Why do embedded products need a keyboard?
• Input numerical data
• Control the device

Types of keyboards
• Keyboard scanning chip
• Software approach to keyboard scanning
• Cost??

Page 51: Embedded System

51

Key Boards (2/7)

What features can this keyboard software module provide?
• Scans any keyboard arrangement from a 3x3 to an 8x8 key matrix
• Provides buffering (user-configurable buffer size)
• Supports auto-repeat
• Keeps track of how long a key has been pressed
• Allows up to three Shift keys

Page 52: Embedded System

52

Key Boards (3/7)

Keyboard Basics
• De-bounce
• Matrix arrangement

Matrix Keyboard Scanning Algorithm
• All rows are forced low
• Output a low on only one of the rows
• Read the input port and determine which key is pressed
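The scanning algorithm above can be sketched in plain C. The simulated key_down matrix is an illustrative stand-in for the hardware: on a real board, selecting a row would be a port write driving that row line low, and reading the columns would be a port read.

```c
#define ROWS 4
#define COLS 4

/* Simulated hardware: key_down[r][c] is 1 while that key is pressed. */
static int key_down[ROWS][COLS];

/* Read the column lines with only row 'r' driven low: a pressed key
   in that row pulls its column low (bit = 0). */
static unsigned char read_columns(int r)
{
    unsigned char cols = 0xFF;      /* all columns high (no key)  */
    int c;

    for (c = 0; c < COLS; c++) {
        if (key_down[r][c]) {
            cols &= (unsigned char)~(1u << c);
        }
    }
    return cols;
}

/* Scan the matrix one row at a time; return key code r*COLS + c for
   the first pressed key found, or -1 if no key is pressed. */
int keyboard_scan(void)
{
    int r, c;

    for (r = 0; r < ROWS; r++) {
        unsigned char cols = read_columns(r);  /* row r low, rest high */
        for (c = 0; c < COLS; c++) {
            if (!(cols & (1u << c))) {
                return r * COLS + c;
            }
        }
    }
    return -1;
}
```

De-bouncing would sit on top of this: the driver only reports a key after the same code has been read on two or three consecutive scans.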

Page 53: Embedded System

53

Key Boards (4/7)

Matrix scanning

Page 54: Embedded System

54

Matrix Keyboard driver flow diagram

Key Boards (5/7)

state machine !!

Page 55: Embedded System

55

Matrix Keyboard driver state machine

Key Boards (6/7)

Highly readable and easily extensible

Page 56: Embedded System

56

How do you modify this module to make it your own?

Key Boards (7/7)

Page 57: Embedded System

57

LED software module

Page 58: Embedded System

58

Multiplexed LED Display (1/4)

Why do embedded products need an LED display?
• Display limited ASCII using seven-segment digits
• Display numbers
• Turn individual LEDs ON or OFF

How to turn on an LED: watch the current!!

Page 59: Embedded System

59

Multiplexed LED Display (2/4)

What features can this LED software module provide?
• Multiplexes up to 64 LEDs
• Drives seven-segment or discrete LEDs
• Easy to extend (be careful of the scan time!)
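The multiplexing idea can be sketched in plain C. The two port variables are illustrative stand-ins for the real segment and digit latches, and a periodic timer interrupt would call led_mux_isr() each tick; the patterns are the usual common-cathode seven-segment encodings (bit order gfedcba):

```c
#define NUM_DIGITS 4

/* Common-cathode seven-segment patterns for digits 0-9, bits gfedcba. */
static const unsigned char seg_pattern[10] = {
    0x3F, 0x06, 0x5B, 0x4F, 0x66, 0x6D, 0x7D, 0x07, 0x7F, 0x6F
};

static unsigned char display_buf[NUM_DIGITS]; /* digits to show          */
static unsigned char seg_port;                /* simulated segment latch */
static unsigned char dig_port;                /* simulated digit select  */
static int           cur_digit;

/* Called from a periodic timer interrupt: advance to the next digit,
   output its segment pattern, and enable only that digit's common
   line. Cycling through all digits faster than the eye can follow
   makes them all appear lit at once. */
void led_mux_isr(void)
{
    cur_digit = (cur_digit + 1) % NUM_DIGITS;
    seg_port  = seg_pattern[display_buf[cur_digit]];
    dig_port  = (unsigned char)(1u << cur_digit);  /* one digit at a time */
}
```

The scan time matters: with more digits, each one is on for a smaller fraction of the time, so it appears dimmer and the refresh rate must stay high enough to avoid visible flicker.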

Page 60: Embedded System

60

Multiplexed LED Display (3/4)

Multiplexing LEDs

Page 61: Embedded System

61

Internals block diagram

Multiplexed LED Display (4/4)

Page 62: Embedded System

62

END

Thanks for your attention to this course !!