Recognition and Classification of Human Motion Based on Hidden Markov Model for Motion Database

Yoshihiro Ohnishi
Department of System Design Engineering, Keio University
3-14-1 Hiyoshi, Kohoku, Yokohama 223-8522, Japan
Email: [email protected]

Seiichiro Katsura
Department of System Design Engineering, Keio University
3-14-1 Hiyoshi, Kohoku, Yokohama 223-8522, Japan
Email: [email protected]

Abstract—In some countries, many problems caused by aging have been pointed out. The decrease of workers' physical ability is one of them. Older workers have advanced skills, but their physical ability is lower than that of young workers, and it becomes difficult to maintain high quality. Hence, power assist by robots is considered necessary. The mainstream conventional power-assist method simply amplifies human motion. However, to assist accurately, the robot has to recognize the human motion and provide assistance that fits it. Hence, a system that saves and reproduces human motion, a "motion database", is necessary. Here, to assist accurately, motion that includes force information is saved to the database. In this research, the trajectory information and the force information of human motion are extracted by using bilateral control and modeled.

To reproduce an appropriate motion from the database, a search system is needed. For power assist, the search system should run in real time and be able to search at all times. Therefore, in this research, a real-time motion searching method is proposed. The searching method is based on a hidden Markov model because human motion has the Markov property. The proposed method can search human motion in real time while the human performs the motion.

The viability of the proposed method is confirmed by a motion search experiment.

I. INTRODUCTION

Recently, many problems related to the low birthrate and aging have been pointed out in some countries. The aging workforce is one of these problems. Older workers have advanced skills, but their physical ability is declining. Therefore, to maintain the quality of work, a power assist system is needed. Here, a robot that assists a human has to know the human's motions so that it can assist accurately. There are many studies on power assist by robots, such as the Human Extender proposed by Kazerooni [1] and HAL proposed by Sankai [2]. In these methods, the robot only amplifies human motion. However, for accurate power assist, especially for advanced techniques, the robot has to know what the human wants to do. If the robot knows the human motion, it can apply the right assistance to the human.

For these reasons, a system that saves, searches, and reproduces motion, a "motion database", is needed. By copying the techniques in the motion database to a robot, handing down of techniques [3] and power assist become possible.

Fig. 1. Target of this research (extraction of normative motion data from the user, modeling into the motion database, and searching for assist).

Here, to construct the motion database, motion modeling is needed. Acceleration sensors [4] and motion capture [5], [6] are common methods of motion extraction. However, these methods extract only trajectory information and cannot obtain force information. When human motion is modeled, not only trajectory information but also force information is considered important. Especially in processing and molding, even if the trajectories of two motions are the same, the finished products may differ. Thus, to model human motion, force information is important in addition to position information. Therefore, in this research, human motion is extracted by bilateral control [7]. By using bilateral control, position information and force information can be extracted at once. Bilateral control is a system that transmits the haptic sense by exchanging information between a master and a slave. In bilateral control, the "law of action and reaction" is realized between the master and the slave, and the slave tracks the master position. Hence, when a human performs a motion through bilateral control, the force and position information is extracted. A system that copies human motion by using bilateral control, the "motion copying system" [8], has been realized. To construct the motion database, a search system is needed in addition to motion extraction. In this research, a hidden Markov model is used to search motion. As a conventional search method based on hidden Markov models, handwritten character recognition has been proposed [9].



Fig. 2. The block diagram of motion extraction by using bilateral control (master system: linear motor with DOB and RTOB; slave system: finger robot with DOB and RTOB; modal space; motion database).

In this method, handwritten characters were recognized by using a hidden Markov model. However, this method did not use force information. Thus, it could not recognize human motion in which force information is dominant. Moreover, it could not recognize motions in succession. It is thought that the search system should run in real time and be able to search at all times.

To resolve these problems, this research proposes a real-time motion search method based on a hidden Markov model. This method can search motions in real time and in succession. In addition, because the motion is modeled by using bilateral control, this method can recognize human motions in which force information is dominant. In this paper, a hand-type robot is used to extract human motion. The overview of this research is shown in Fig. 1. First, human motion is extracted. Next, the human motion is modeled and saved to the motion database. When the robot assists the human, the motion is searched and reproduced.

This paper is organized as follows. In the following section, the system that extracts human motion by using bilateral control is explained. In Section III, the motion modeling method is introduced. The proposed motion search method is described in Section IV. A motion search experiment is shown in Section V. The last section summarizes this research.

II. MOTION EXTRACTION BY BILATERAL CONTROL

A. Extraction of the trajectory and the force information by bilateral control

In this research, to extract human motion, the trajectory information and the force information are extracted by bilateral control. Bilateral control is a system that transmits the haptic sense by exchanging information between a master and a slave. To transmit the haptic sense, it is necessary to realize the "law of action and reaction" between the master and the slave so that force information is transmitted. In addition, the slave has to track the master position.

Fig. 3. Hand type robot.

The position and the force of the linear motor are transformed so that the movable range of the linear motor accords with the rotary range of the rotary motor:

\theta_M = \frac{x_M}{L_v} \quad (1)

\tau_M = F_M L_v \quad (2)

where θ_M, τ_M, x_M, and L_v denote the virtual angle, the virtual torque, the position of the linear motor, and the virtual link length, respectively. From these relations, the goals of force control and position control are given as follows:

F_M^{ext} + \tau_S^{ext} = 0 \quad (3)

\theta_M - \theta_S = 0 \quad (4)

where F_M^ext, τ_S^ext, θ_M, and θ_S denote the external force applied to the master, the external torque applied to the slave, the angle response of the master, and the angle response of the slave, respectively. Eq. (3) represents force control and Eq. (4) represents position control. By satisfying these control goals, the haptic sense can be transmitted.


However, the goal of position control differs from the goal of force control. Thus, the position response and the force response are transformed into the common and differential modes by the second-order quarry matrix Q so that the two goals are realized at once. The second-order quarry matrix Q is defined by (5) [10].

\boldsymbol{Q}_2 = \frac{1}{2} \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} \quad (5)

The disturbance force added to each motor is compensated by the DOB (disturbance observer) [11], and the external force is estimated by the RFOB (reaction force observer) [12] without force sensors.
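
A minimal numerical sketch of Eqs. (1)-(5) is given below (Python). It is not the authors' implementation; the virtual link length value and the function names master_to_virtual and modal_decomposition are illustrative assumptions.

import numpy as np

# Virtual link length L_v [m]; placeholder value for illustration only.
L_v = 0.05

def master_to_virtual(x_M, F_M):
    """Eqs. (1)-(2): map the linear-motor position and force of the master
    to the virtual angle and virtual torque."""
    theta_M = x_M / L_v   # Eq. (1)
    tau_M = F_M * L_v     # Eq. (2)
    return theta_M, tau_M

# Second-order quarry matrix of Eq. (5).
Q2 = 0.5 * np.array([[1.0,  1.0],
                     [1.0, -1.0]])

def modal_decomposition(theta_M, theta_S):
    """Transform master and slave responses into common and differential modes."""
    common, differential = Q2 @ np.array([theta_M, theta_S])
    return common, differential

# Control goals of Eqs. (3)-(4): regulate the common mode of the external
# force/torque to zero (law of action and reaction) and the differential mode
# of the position to zero (position tracking).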

In this research, a hand-type robot is used as the slave system to extract grip motions. This robot has three fingers, as shown in Fig. 3, and a linear motor is used as the master motor. By using this system, human motions can be extracted. The block diagram of the bilateral control is shown in Fig. 2, where the subscripts and superscripts C, D, res, ext, ^, ref, M, and S denote the common mode, the differential mode, the response, the external force, the estimated value, the reference, the master system, and the slave system, respectively. In this research, the slave-side force information and position information are saved to the motion database.

III. MODELING OF HUMAN MOTION

In this section, the extracted motion is modeled. First, human motion is defined as follows for modeling.

∙ A human motion is constructed from a trajectory and a force.
∙ When the trajectory and the force of two motions are the same, they are the same motion.

A. The state division of the motion

It is difficult to use raw motion data directly. Hence, the motion should be transformed before it is managed in the database. In this section, a method for dividing the motion into states is explained.

Human motion is considered to be constructed from many elements, and every element carries much information. For motion modeling, the motion must be divided into states and every element must be parameterized. Here, human motion is considered to be constructed from many constant-velocity elements.

In this research, an element is defined as a state that moves at constant velocity. Hence, if the velocity of the motion changes, the state of the motion changes. However, it is impossible to recognize force information by using only this state. Hence, the vector of the applied force is added to the states as a feature quantity.

According to the above definition, the motion is modeled by trajectory and force information.
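
As a rough illustration of this state division (not the authors' implementation; the velocity tolerance and array shapes are assumptions), sampled finger velocities and torques can be segmented as follows, keeping the mean velocity and the mean force (torque) vector of each segment as its feature:

import numpy as np

def divide_into_states(omega, tau, vel_tol=0.05):
    """Split a recorded motion into constant-velocity states.

    omega : (T, 3) angular velocities of the three fingers
    tau   : (T, 3) torques of the three fingers
    Returns a list of (mean velocity, mean torque) pairs, one per state."""
    states = []
    start = 0
    for t in range(1, len(omega)):
        # A new state begins when the velocity deviates from the segment start.
        if np.linalg.norm(omega[t] - omega[start]) > vel_tol:
            states.append((omega[start:t].mean(axis=0), tau[start:t].mean(axis=0)))
            start = t
    states.append((omega[start:].mean(axis=0), tau[start:].mean(axis=0)))
    return states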

IV. MOTION SEARCH METHOD BASED ON HIDDEN MARKOV MODEL

In this section, a real-time motion searching method that uses the motion model is proposed. The model defined in the previous section is considered to have the Markov property, which means that each state depends only on the state just before it. Hence, in this paper a hidden Markov model is used to search the motion. A hidden Markov model is a probabilistic model whose inner states have the Markov property.

Fig. 4. State transition diagram of this research.

A hidden Markov model estimates the state transition from the observed information.

Hidden Markov models are used in voice recognition [13] and character recognition [9]. Recently, hidden Markov models have also been used to recognize human motion and behavior [14], [15]. In this paper, the "left-to-right model", one type of hidden Markov model, is used. In this model, the state transition advances unidirectionally; in addition, the initial state and the last state are known. The state transition diagram of the hidden Markov model in this research is shown in Fig. 4, where a^n_{i_t,i_{t+1}} and b^n_{i_t,i_{t+1}}(y_t) denote the probability of a transition from state i_t to state i_{t+1} and the probability of observing the feature vector y_t when the state transits from i_t to i_{t+1}, respectively, and i_t denotes the state at time t. The probability that y_t occurs from model λ_n is given as

P(\boldsymbol{y}_t \mid \lambda_n) = \prod_{t=1}^{T} a^n_{i_t, i_{t+1}} b^n_{i_t, i_{t+1}}(\boldsymbol{y}_t) \quad (6)

where P(y_t | λ_n) denotes the probability. The parameters a^n_{i_t,i_{t+1}} and b^n_{i_t,i_{t+1}} are obtained by building a model from the statistics of the motion. In this paper, motion is recognized by using the Viterbi algorithm, which finds the most likely sequence of hidden states. The model that outputs the highest probability is considered the recognition result.
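
A generic sketch of the Viterbi evaluation for a left-to-right model is shown below (Python, under an assumed data layout that is not from the paper): log_A holds the log transition probabilities of one model and log_b(i, y) returns the log emission probability of y in state i. It only illustrates how Eq. (6) is evaluated along the most likely state sequence.

import numpy as np

def viterbi_left_to_right(Y, log_A, log_b):
    """Most likely state path and its log probability for one motion model.

    Y     : (T, 6) sequence of feature vectors
    log_A : (N, N) log transition probabilities (left-to-right structure)
    log_b : function(state, y) -> log emission probability"""
    T, N = len(Y), log_A.shape[0]
    delta = np.full((T, N), -np.inf)
    psi = np.zeros((T, N), dtype=int)
    delta[0, 0] = log_b(0, Y[0])               # the initial state is known
    for t in range(1, T):
        for j in range(N):
            scores = delta[t - 1] + log_A[:, j]
            psi[t, j] = int(np.argmax(scores))
            delta[t, j] = scores[psi[t, j]] + log_b(j, Y[t])
    path = [N - 1]                             # the last state is known
    for t in range(T - 1, 0, -1):
        path.append(psi[t, path[-1]])
    return list(reversed(path)), delta[-1, N - 1]

The model λ_n whose resulting log probability is the highest is then taken as the recognition result.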

A. Feature vector

In this paper, the feature vector is constructed from the velocity and the torque of each finger:

\boldsymbol{y}_t = (\omega_1\ \omega_2\ \omega_3\ \tau_1\ \tau_2\ \tau_3)^T \quad (7)

where ω denotes the angular velocity and the subscript n denotes the finger number. The probability of observing the feature vector y_t is modeled by the six-dimensional normal distribution

b_i(\boldsymbol{y}_t) = \frac{1}{(\sqrt{2\pi})^{6} \sqrt{|\boldsymbol{S}|}} \exp\!\left( -\frac{1}{2} (\boldsymbol{y}_t - \boldsymbol{\mu})^T \boldsymbol{S}^{-1} (\boldsymbol{y}_t - \boldsymbol{\mu}) \right) \quad (8)

where S and μ denote the covariance matrix and the mean vector, respectively. Here, the velocity and the torque of each finger are treated as independent; in fact, most correlation coefficients are less than 0.2. In this research, average values are used because the motions are very simple. Even if only average values are used, a complex motion model can be constructed accurately by dividing the motion into finer states.

By using Eqs. (6) and (8), the probability that a motion accords with a model is obtained, and the model with the highest probability is taken as the recognition result.
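
The emission probability of Eqs. (7)-(8) can be sketched as follows (Python; the per-state mean vector and covariance matrix are assumed to be estimated beforehand from recorded motions, and the function names are illustrative):

import numpy as np

def feature_vector(omega, tau):
    """Eq. (7): stack the three angular velocities and the three torques."""
    return np.concatenate([omega, tau])        # shape (6,)

def log_gaussian_emission(y, mu, S):
    """Logarithm of Eq. (8): six-dimensional normal distribution."""
    d = y - mu
    _, logdet = np.linalg.slogdet(S)
    return -0.5 * (6.0 * np.log(2.0 * np.pi) + logdet + d @ np.linalg.solve(S, d))

Because the velocity and torque of each finger are treated as independent, the covariance matrix S can in practice be taken as nearly diagonal.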

B. Real-time searching

In this section, the method of searching the motion in real time is explained. The feature vector extracted by bilateral control is input into each model at every sampling instant, and the probability is output at every sampling instant; therefore, the motion is recognized in real time. Eq. (6) can recognize a motion only once; however, for power assist, the motion must be recognized repeatedly in real time. Therefore, in this research, a search method that can recognize repeatedly in real time is proposed. First, to recognize repeatedly, a window function is added to Eq. (6):

P(\boldsymbol{y}_t \mid \lambda_n) = \prod_{t=1}^{T} a^n_{i_t, i_{t+1}} b^n_{i_t, i_{t+1}}(\boldsymbol{y}_t) \, w(t - \alpha) \quad (9)

The window function is given as follows:

w(t) = \begin{cases} 1 & \text{if } t < 10 \\ 0 & \text{otherwise} \end{cases} \quad (10)

By using models that have different values of the variable α, the probability of an arbitrary interval can be obtained. By running the search algorithm inside the window, the part of the motion that lies in the window can be recognized. However, the probability of Eq. (9) decreases with time. Thus, to normalize the probability, the log likelihood is used. Applying the log likelihood to Eq. (9), the probability of the j-th window, P_j(y_t | λ_n), is described as follows:

P_j(\boldsymbol{y}_t \mid \lambda_n) = \frac{1}{m} \sum_{i=1}^{m} \log \left( a^n_{i_t, i_{t+1}} b^n_{i_t, i_{t+1}}(\boldsymbol{y}_t) \, w(t - \alpha) \right) \quad (11)

where m denotes the number of samples. The closer the log likelihood is to 0, the closer the motion is to the corresponding model. The probability of the window with the largest probability is taken as the probability of the model, and the model whose log likelihood is highest is taken as the recognition result.
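
A possible sketch of the windowed search of Eqs. (9)-(11) is shown below (Python; log_terms is assumed to hold the per-sample values log(a b w) of one model, e.g. evaluated along its Viterbi path, and the names are illustrative):

import numpy as np

def windowed_log_likelihood(log_terms, alpha, width):
    """Eq. (11): mean log likelihood over one window starting at offset alpha."""
    window = log_terms[alpha:alpha + width]
    m = len(window)                    # number of samples in the window
    return window.sum() / m if m > 0 else -np.inf

def recognize(log_terms_per_model, alpha, width):
    """Return the index of the model whose windowed log likelihood is closest to zero."""
    scores = [windowed_log_likelihood(lt, alpha, width) for lt in log_terms_per_model]
    return int(np.argmax(scores)), scores

By sliding α every sampling period (or every second, as in the experiment), the motion is recognized repeatedly while it is being performed.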

V. MOTION SEARCH EXPERIMENT

A. Experimental setup

In this paper, a hand-type robot is used; it is shown in Fig. 5. The system is constructed from a linear motor as the master system and a hand robot as the slave system. The hand robot's motors are rotary geared motors, and the resolution of each encoder is 1600 pulses/rev. The master system uses linear motors with position encoders whose resolution is 100 nm. C++ is used for control, and the controller runs on RTAI 3.7. A soft ball is used as the environment. The parameters of the experimental system are shown in Table I. In the experiment, the width of the window is 10 s, and the distance between successive windows is 1 s.

Fig. 5. Experimental setup (master system: linear motor; slave system: hand robot; environment).

TABLE I
PARAMETERS OF EXPERIMENTAL SYSTEM.

Parameter   Description                                     Value
T_s         Sampling time                                   0.2 ms
K_tMn       Force coefficient of linear motor               22.0 N/A
K_tSn       Torque coefficient of finger's motor            0.0243 Nm/A
K_f         Force gain                                      2
K_p         Position gain                                   1600
K_v         Velocity gain                                   80
g_dis       Cut-off frequency of disturbance observer       15 rad/s
g_reac      Cut-off frequency of reaction force observer    15 rad/s
l_1         Link 1 length                                   0.05398 m
l_2         Link 2 length                                   0.03175 m
l_3         Link 3 length                                   0.02553 m
m_1         Mass of link 1                                  0.035 kg
m_2         Mass of link 2                                  0.024 kg
m_3         Mass of link 3                                  0.018 kg

B. Normative motions

In this paper, four motions are searched. Schematic views of the motions used in the experiment are shown in Fig. 6. Motion 1 and motion 2 have the same trajectory; the difference is only the force. Motion 3 and motion 4 have the same relation. Motions 5 to 8 have the same trajectory, and only the angle of the force vector in the last state differs.

C. Recognition results

In this paper, each motion was searched 10 times, and the motions were searched in series. The recognition results are shown in Table II, which shows that almost all motions were recognized at a high rate. Even though some of these motions have the same trajectory, they were recognized at a high rate; the conventional method could not recognize these motions.

D. The log likelihood transition

Figs. 7 to 11 show the log likelihood transition of each window. In this experiment, motion 1 is conducted from 7.5 s to 17.5 s, and motion 3 is conducted from 20 s to 30 s. Fig. 7 shows the log likelihood of the motions in windows 1 and 2. The red solid line is the log likelihood transition of motion 1.


Fig. 6. Schematic view of motions (normative motions 1 to 4; each panel shows the environment and the applied force).

TABLE II
RECOGNITION RESULTS.

Motion        1     2     3     4
Result [%]    100   100   100   90

Because motion 1 is conducted from 10 s to 20 s, the log likelihood of motion 1 in window 1 is the highest. Thus, the motion is recognized as motion 1 between 17 s and 20 s. Motion 3 is conducted from 20 s to 30 s. In the same way, the log likelihood of motion 3 in window 3 is the highest, since motion 3 begins at 20 s while window 4 observes the motion from 23 s to 33 s; this is shown in Fig. 8. Fig. 12 shows the recognition result transition. This figure shows that the motion is recognized in succession.

VI. CONCLUSIONS

In this research, motions were modeled to construct the motion database. The human motion was extracted by using bilateral control, and a real-time motion searching method was proposed. The proposed method can search motion in real time at all times. In the search experiment, the motions were recognized at a high rate by using the proposed method. The validity of the proposed model and of the real-time search method was confirmed by the experiment.

By using this model and search method, the motion database can be applied to many applications. If a robot can estimate human motion, advanced power assist will be realized.

ACKNOWLEDGMENTS

This research was partially supported by the Ministry of Education, Culture, Sports, Science and Technology, Grant-in-Aid for Scientific Research for Young Scientists (A), 20686019, 2008.

REFERENCES

[1] H. Kazerooni: "Human-robot Interaction Via the Transfer of Power and Information Signals," IEEE Transactions on Systems, Man and Cybernetics, Vol. 20, No. 2, pp. 450–463, March/April, 1990.

[2] H. Kawamoto, L. Suwoong, S. Kanbe, and Y. Sankai: "Power Assist Method for HAL-3 Using EMG-based Feedback Controller," IEEE International Conference on Systems, Man and Cybernetics, Vol. 2, No. 2, pp. 1648–1653, November, 2003.

[3] S. Hyodo and K. Ohnishi: "A Method for Motion Extraction and Guide Based on Haptic Information Relationship," 2008 10th IEEE International Workshop on Advanced Motion Control, AMC '08-TRENTO, pp. 428–433, March, 2008.

[4] Jaeseok Yun, Shwetak N. Patel, Matthew S. Reynolds, and Gregory D. Abowd: "Design and Performance of an Optimal Inertial Power Harvester for Human-Powered Devices," IEEE Transactions on Mobile Computing, Vol. 10, No. 5, pp. 669–683, May, 2011.

[5] T. B. Moeslund and E. Granum: "A Survey of Computer Vision-Based Human Motion Capture," Computer Vision and Image Understanding, Vol. 81, No. 3, pp. 231–268, March, 2001.

[6] Gwenaelle Piriou, Patrick Bouthemy, and Jian-Feng Yao: "Recognition of Dynamic Video Contents With Global Probabilistic Models of Visual Motion," IEEE Transactions on Image Processing, Vol. 15, No. 11, pp. 3418–3430, November, 2006.

[7] S. Katsura, W. Iida, and K. Ohnishi: "Medical Mechatronics - An Application to Haptic Forceps," IFAC Annual Reviews in Control, Vol. 29, No. 2, pp. 237–245, November, 2005.

[8] Y. Yokokura, S. Katsura, and K. Ohishi: "Motion Copying System Based on Real-World Haptics," 10th IEEE International Workshop on Advanced Motion Control, AMC '08, pp. 613–618, March, 2008.

[9] M. Nakai, N. Akira, H. Shimodaira, and S. Sagayama: "Substroke Approach to HMM-based On-line Kanji Handwriting Recognition," Proceedings of the Sixth International Conference on Document Analysis and Recognition, pp. 491–495, September, 2001.

[10] S. Katsura and K. Ohnishi: "Advanced Motion Control for Wheelchair in Unknown Environment," in Proc. IEEE Int. Conf. SMC, Taipei, Taiwan, pp. 4926–4931, October, 2006.

[11] K. Ohnishi, M. Shibata, and T. Murakami: "Motion Control for Advanced Mechatronics," IEEE/ASME Transactions on Mechatronics, Vol. 1, No. 1, pp. 56–67, March, 1996.

[12] T. Murakami, F. Yu, and K. Ohnishi: "Torque Sensorless Control in Multidegree-of-freedom Manipulator," IEEE Transactions on Industrial Electronics, Vol. 40, No. 2, pp. 259–265, April, 1993.

[13] S. Nakagawa and Y. Hashimoto: "A Method for Continuous Speech Segmentation Using HMM," 9th International Conference on Pattern Recognition, Vol. 1, pp. 509–512, November, 1988.

[14] Jacinto C. Nascimento, Mario A. T. Figueiredo, and Jorge S. Marques: "Trajectory Classification Using Switched Dynamical Hidden Markov Models," IEEE Transactions on Image Processing, Vol. 19, No. 5, pp. 1338–1348, May, 2010.

[15] Junghyun Kwon and Frank C. Park: "Natural Movement Generation Using Hidden Markov Models and Principal Components," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, Vol. 38, No. 5, pp. 1184–1194, October, 2008.


Fig. 7. Log likelihood transition (log likelihood vs. time [s] for motions 1–4). (a) window 1 (b) window 2.

Fig. 8. Log likelihood transition. (a) window 3 (b) window 4.

Fig. 9. Log likelihood transition. (a) window 5 (b) window 6.

Fig. 10. Log likelihood transition. (a) window 7 (b) window 8.

Fig. 11. Log likelihood transition. (a) window 9 (b) window 10.

Fig. 12. Recognition result transition (motion No. vs. time [s]).