TELECOM ENGINEERING STUDIES

AUTOMATIC DETECTION OF ON-BODY SENSOR LOCATION FOR OPPORTUNISTIC AND ROBUST INFERENCE OF BEHAVIOR

AUTHOR: Marta Calles Gálvez
SUPERVISORS: Oresti Baños Legrán, Héctor Pomares Cintas
DEPARTMENT: Computer Architecture and Technology

Granada, February 2014



“Don't give up, you still have time
to reach out and start anew,
to accept your shadows,
to bury your fears,
to let go of the ballast,
to take flight again.

Don't give up, for that is what life is:
to carry on the journey,
to pursue your dreams,
to unblock time,
to clear away the rubble
and uncover the sky.

Don't give up, please don't yield,
even if the cold burns,
even if fear bites,
even if the sun hides
and the wind falls silent,
there is still fire in your soul,
there is still life in your dreams.

Because life is yours, and yours is the longing,
because you have wanted it and because I love you,
because wine exists, and love, it is true,
because there are no wounds that time does not heal.

To open the doors,
to remove the locks,
to abandon the walls that protected you,
to live life and accept the challenge,
to recover laughter,
to rehearse a song,
to lower your guard and extend your hands,
to spread your wings
and try again,
to celebrate life and take back the skies.

Don't give up, please don't yield,
even if the cold burns,
even if fear bites,
even if the sun sets and the wind falls silent,
there is still fire in your soul,
there is still life in your dreams,
because every day is a new beginning,
because this is the hour and the best moment,
because you are not alone, because I love you.”

Mario Benedetti, “No te rindas” (“Don't Give Up”)


Héctor Pomares Cintas, Professor of the Department of Computer Architecture and Technology of the University of Granada, and Oresti Baños Legrán, Researcher of the Department of Computer Architecture and Technology of the University of Granada, as co-supervisors of the Final Degree Project of Ms. Marta Calles Gálvez,

hereby certify:

that the present work, entitled

Automatic detection of on-body sensor location for opportunistic and robust inference of behavior,

has been carried out and written by the aforementioned student under their supervision, and that as of this date they authorize its submission.

Granada, ____ February 2014

Signed: ________________________


The undersigned authorize this copy of the Final Degree Project to be deposited in the Library of the Centre and/or Department so that it may be freely consulted by anyone who wishes to do so.

Granada, ____ September 2012

Oresti Baños Legrán   DNI: ____________   Signature: ___________

Héctor Pomares Cintas   DNI: ____________   Signature: ____________

Marta Calles Gálvez   DNI: 75157314-F   Signature: ____________


Acknowledgments: “To my family. Thank you for all your patience and dedication.”


Automatic detection of on-body sensor location for opportunistic and robust inference of behavior.

Marta Calles Gálvez
Department of Computer Architecture and Technology

Universidad de Granada


Contents

1 Abstract  11
2 Introduction  12
3 Inertial sensing concepts  15
   3.1 IMU signals  15
      3.1.1 Accelerometer  15
      3.1.2 Gyroscope  18
      3.1.3 Magnetic field sensor  19
   3.2 General Activity Recognition chain  21
4 Issues in Activity Recognition  23
   4.1 On-body sensor location  23
   4.2 IMU sensor orientation  24
      4.2.1 Rigid body concept  26
      4.2.2 Consequences for displaced sensors  27
5 Initial movement dataset  28
   5.1 File distribution  28
   5.2 Data collection  29
      5.2.1 Static motion  29
      5.2.2 Quasi-static motion  36
      5.2.3 Observations  39
6 Fitness dataset  41
   6.1 Data collection  41
      6.1.1 Activity set  41
      6.1.2 Sensor deployment  43
      6.1.3 File distribution and log files  45
   6.2 Experiment setup  46
      6.2.1 Data experiment  46
      6.2.2 Data treatment  49
7 Methodology for on-body sensor location  53
   7.1 Method I: Visual Inspection  53
      7.1.1 Introduction  53
      7.1.2 Results and discussion  54
   7.2 Method II: Time signal features  57
      7.2.1 Introduction  57
      7.2.2 Correlation  58
      7.2.3 Best fit  58
      7.2.4 Mutual Information  59
      7.2.5 Results and discussion  59
   7.3 Method III: Statistical feature sets  62
      7.3.1 Introduction  62
      7.3.2 Results  63
      7.3.3 Discussion  68
8 Conclusion and future work  70
References  71


List of Figures

1 The coordinate system.  16
2 Physical functioning of an accelerometer.  16
3 Quasi-static motions 2.1 and 2.3 performed with the accelerometer.  17
4 Motions of a gyroscopic sensor.  18
5 Quasi-static motions 2.1 and 2.3 performed with the gyroscope.  19
6 Quasi-static motions 2.1 and 2.3 performed with the magnetic field sensor.  20
7 General Activity Recognition chain [13].  21
8 Roll, pitch and yaw [50].  25
9 Activity summary of the Initial Movement dataset.  28
10 An example of a recorded video exercise for the initial movement dataset.  29
11 Acceleration, gyroscopic and magnetic signals of the static motions exercise 1.  29
12 Acceleration, gyroscopic and magnetic signals of the static motions exercise 2.  30
13 Acceleration, gyroscopic and magnetic signals of the static motions exercise 3.1.  30
14 Acceleration, gyroscopic and magnetic signals of the static motions exercise 3.2.  30
15 Acceleration, gyroscopic and magnetic signals of the static motions exercise 3.3.  31
16 Acceleration, gyroscopic and magnetic signals of the static motions exercise 4.1.  31
17 Acceleration, gyroscopic and magnetic signals of the static motions exercise 4.2.  31
18 Acceleration, gyroscopic and magnetic signals of the static motions exercise 4.3.  32
19 Acceleration, gyroscopic and magnetic signals of the static motions exercise 4.4.  32
20 Acceleration, gyroscopic and magnetic signals of the static motions exercise 4.5.  32
21 Acceleration, gyroscopic and magnetic signals of the static motions exercise 4.6.  33
22 Acceleration, gyroscopic and magnetic signals of the static motions exercise 4.7.  33
23 Acceleration, gyroscopic and magnetic signals of the static motions exercise 4.8.  33
24 Acceleration, gyroscopic and magnetic signals of the static motions exercise 4.9.  34
25 Acceleration, gyroscopic and magnetic signals of the static motions exercise 4.10.  34
26 Acceleration, gyroscopic and magnetic signals of the static motions exercise 4.11.  34
27 Acceleration, gyroscopic and magnetic signals of the static motions exercise 4.12.  35
28 Acceleration, gyroscopic and magnetic signals of the static motions exercise 4.13.  35
29 Acceleration, gyroscopic and magnetic signals of the static motions exercise 4.14.  35
30 Acceleration, gyroscopic and magnetic signals of the static motions exercise 4.15.  36
31 Acceleration, gyroscopic and magnetic signals of the static motions exercise 4.16.  36
32 Acceleration, gyroscopic and magnetic signals of the quasi-static motions exercise 1.  36
33 Acceleration, gyroscopic and magnetic signals of the quasi-static motions exercise 2.1.  37
34 Acceleration, gyroscopic and magnetic signals of the quasi-static motions exercise 2.2.  37
35 Acceleration, gyroscopic and magnetic signals of the quasi-static motions exercise 2.3.  37
36 Acceleration, gyroscopic and magnetic signals of the quasi-static motions exercise 3.  38
37 Acceleration, gyroscopic and magnetic signals of the quasi-static motions exercise 4.  38
38 Acceleration, gyroscopic and magnetic signals of the quasi-static motions exercise 5.  38
39 Acceleration, gyroscopic and magnetic signals of the quasi-static motions exercise 6.  39
40 Acceleration, gyroscopic and magnetic signals of the quasi-static motions exercise 7.  39
41 Acceleration, gyroscopic and magnetic signals of the quasi-static motions exercise 8.  39
42 The activity set performed in the Fitness dataset.  42
43 Missing activity data: (a) ideal and self-placement and (b) mutual displacement.  43
44 Sensor positioning.  44
45 Pre-defined positions for the wearable sensor set.  44
46 Displaced sensors: (a) self-placement and (b) mutual displacement.  45
47 Description of the log file.  46
48 Activity description for the SET parameter.  47
49 Activity description for the N parameter.  47
50 Summary of the different parameters involved in the Fitness dataset.  47
51 An example of a Decision Tree classifier [1].  49
52 An example of a K-Nearest Neighbor classifier [2].  49


53 Data distribution.  50
54 Leave One Subject Out: training phase.  51
55 Leave One Subject Out: testing phase.  51
56 Confusion matrix with setting parameters including the ideal placement, the activity of walking, a window size of 1 second, a Decision Tree classifier and the Feature Set 1 (mean).  52
57 Marker programmed interface tool. Acceleration signals of walking activity for left calf and left thigh.  53
58 Acceleration signals represented at the same initial time and normalized linear acceleration for a standard window size, 4 seconds and 6 seconds.  54
59 Gyroscopic signals represented at the same initial time and normalized linear acceleration for a standard window size, 4 seconds and 6 seconds.  54
60 Magnetic field signals represented at the same initial time and normalized linear acceleration for a standard window size, 4 seconds and 6 seconds.  55
61 Acceleration signals represented at different initial times and normalized units.  55
62 Gyroscopic signals represented at different initial times and normalized units.  55
63 Magnetic signals represented at different initial times and normalized units.  56
64 Acceleration signals represented at the same initial time and normalized units for subject 1, subject 2 and subject 3.  56
65 Gyroscopic signals represented at the same initial time and normalized units for subject 1, subject 2 and subject 3.  56
66 Magnetic field signals represented at the same initial time and normalized units for subject 1, subject 2 and subject 3.  57
67 Correlation, Best Fit and Mutual Information for ideal placement, activity of walking, using an accelerometer and a window size of 6 seconds.  60
68 Universal Correlation for ideal placement, activity of walking, using an accelerometer, a gyroscope and a window size of 6 seconds.  60
69 Universal Correlation and Universal Best Fit for ideal placement, activity of walking, using a magnetic field sensor, an accelerometer and a window size of 6 seconds.  60
70 Universal Best Fit and Universal Mutual Information for ideal placement, activity of walking, using a magnetic field sensor, an accelerometer and a window size of 6 seconds.  61
71 Universal Mutual Information for ideal placement, activity of walking, using a gyroscope, a magnetic field sensor and a window size of 6 seconds.  61
72 Results obtained from the accelerometer, the gyroscope and the magnetic field sensor, walking activity and using a window size of 1 second.  63
73 Results obtained from the accelerometer, the gyroscope and the magnetic field sensor, walking activity and using a window size of 3 seconds.  63
74 Results obtained from the accelerometer, the gyroscope and the magnetic field sensor, walking activity and using a window size of 6 seconds.  63
75 Results obtained from the combination of the accelerometer, the gyroscope and the magnetic field sensor, walking activity and using a window size of 1 second.  64
76 Results obtained from the combination of the accelerometer, the gyroscope and the magnetic field sensor, walking activity and using a window size of 3 seconds.  64
77 Results obtained from the combination of the accelerometer, the gyroscope and the magnetic field sensor, walking activity and using a window size of 6 seconds.  64
78 Results obtained from the combination of the accelerometer, the gyroscope and the magnetic field sensor, walking activity and using window sizes of 1, 3 and 6 seconds.  65
79 Results obtained from ideal placement using an accelerometer, walking activity, a window size of 1 second, the DT classifier and different statistical feature sets.  65
80 Results obtained from ideal placement using an accelerometer, walking activity, a window size of 1 second, the KNN classifier and different statistical feature sets.  65
81 Results obtained from self placement using an accelerometer, walking activity, a window size of 1 second, the DT classifier and different statistical feature sets.  66
82 Results obtained from self placement using an accelerometer, walking activity, a window size of 1 second, the KNN classifier and different statistical feature sets.  66


83 Results obtained from mutual displacement using an accelerometer, walking activity, a window size of 1 second, the DT classifier and different statistical feature sets.  66
84 Results obtained from mutual displacement using an accelerometer, walking activity, a window size of 1 second, the KNN classifier and different statistical feature sets.  67
85 Results obtained from all placements using an accelerometer, all activity sets, a window size of 6 seconds and the two types of classifiers.  67
86 Results obtained from all placements using an accelerometer, a window size of 6 seconds, the DT classifier, different activity sets and feature sets.  67
87 Results obtained from all placements using an accelerometer, different activity sets, classifiers and a window size of 6 seconds.  68
88 Results obtained from all placements using an accelerometer, different activity sets, a window size of 6 seconds, the DT classifier and the Feature Set 1 (mean).  68


1 Abstract

Activity recognition models are normally based on predefined on-body sensor positioning. In fact, most of the related literature reports outstanding recognition results, but typically under the assumption that the sensor setup never changes. This is unrealistic, since the sensor deployment may vary for several reasons: user self-attachment, the firmness of the attachment (looseness of fit), or displacements caused by use may all introduce variations with respect to the original setup. Furthermore, the accuracy of the recognition system may strongly depend on the particular body position chosen for mounting the sensor. In this project we aim to analyze the effects that different sensor configurations and positions may have on the system's recognition capabilities, as well as to define techniques that attempt to autonomously identify where on the body a sensor is located. This information is particularly useful for adapting the recognition methods to the best sensor configuration at any given time (opportunistic configuration).


2 Introduction

The increase in life expectancy is one of the most noteworthy achievements in human history. Over time, the average lifespan has risen significantly, and advanced medical knowledge, better health behaviors, continued improvements in living standards and lowered mortality, especially from heart disease and stroke, are believed to have been key factors. The most significant changes in life expectancy have occurred in the wealthier parts of the world, where it is considered a vital social goal supported by research funding.

According to figures from LeDuc Media [6], in the middle of the twentieth century, places like Northern and Western Europe, Canada or Australia had the longest life expectancy, at 71-74 years, whereas in Southern and Eastern Europe or the United States it was only 69 years.

In 2010, a significant improvement in life expectancy of almost ten years took place in nations such as Australia, Canada, France and Spain, reaching 81 years. However, countries like the United States or Northern Europe did not increase their figures (78 years) as much as the others. As a consequence of the wide variety of diseases and a less technological environment, Third World countries have kept lifespans shorter than developed countries, although there are regions like Asia where life expectancy has experienced a dramatic rise. In fact, India has doubled its life expectancy to 64 years over the last century. Nowadays, it is worth noting that Japan and Singapore lead the list with the best recorded figures, 82 years, especially considering that it was only 68 years in the 1960s.

Because of this, a considerable drop in mortality has occurred across a wide range of major injuries, chronic diseases and potential hazards to human health. In its 2009 report, the World Health Organization (WHO) [43] estimates the leading global risks for mortality to include high blood pressure (responsible for 13% of deaths globally), tobacco use (9%), high blood glucose (6%), physical inactivity (6%), and overweight and obesity (5%). In fact, physical inactivity and high cholesterol account for part of the 61% of deaths due to cardiovascular disease, the leading cause of mortality in the world.

On the other hand, the increase in life expectancy has produced a phenomenon of population ageing. This is not an isolated event, but one widespread across the world. Looking at the international situation over the last fifty years, the population pyramid has almost reversed its ideal shape, widening towards the median and older ages. While population ageing has arisen from increasing longevity and declining fertility, quality of life has been enhanced through advanced treatments, better drugs and even proper hygiene.

With the shift towards greater longevity, short-term diseases have turned into chronic ones, so a considerable number of elderly people suffer from two or more conditions in the body at the same time, which in formal terms is defined as comorbidity. The list of comorbidities affecting this sort of patient includes heart disease, osteoporosis, diabetes and chronic obstructive pulmonary disease (COPD). In addition, such diseases often go hand in hand, making patients who suffer several of them a common and worrisome situation. Because of this, people who endure them should pay particular attention to their health at all times, especially when another condition occurs. As for heart disease, there are two main issues: high blood pressure and cor pulmonale (heart failure that results from lung disease). To reduce the likelihood of developing heart problems, people should regularly engage in light cardiovascular activities such as walking, biking or swimming. In the case of osteoporosis, some elderly people who live with lung infections have other associated risk factors, including smoking, low vitamin D levels and the use of steroids for treatment, especially women, who are at greater risk of falls and hip fractures. Hence, weight-bearing and strengthening exercises may help to prevent them. To deal with diabetes, a healthy diet and exercise are basic patterns that usually lead to a better prognosis. Most of the conditions mentioned above share daily exercise as a helpful treatment to overcome motion limitations; therefore, monitoring patients' progress represents an interesting improvement.

As a consequence of the lifelong nature of chronic diseases, governments have been driven to major investments as the cost of healthcare has shot up in recent years. Along with this, the reduced availability of resources in hospitals and medical centers, together with staff shortages, has created a significant quandary in which health is treated as a purely private good. In fact, health-system workers are often subject to workforce pressures, producing lobbies and rising insecurity. They therefore have to be prepared to deal with ageing and the appearance of new diseases, making their training an added cost.

According to a WHO report on healthcare models around the world (2010) [5], there is a


recent public demand for access to affordable healthcare through the struggle for universal coverage. The International Labour Organization has estimated that only one in five people in the world is covered by a comprehensive social security scheme that preserves their full salary in case of illness. Moreover, more than half of the world's population lacks any social protection. Even in developed nations with social services, it is in fact common to find overwhelming queues for physician appointments, patients waiting for long periods to receive rehabilitation, or relatives having to take care of their elders by themselves.

Therefore, international health is still far from complete coverage because of the lack of resources and investment. While governments do not provide as much steady financial support as they could, enterprises and consulting firms are taking advantage of their market position to provide services the public system does not cover.

Several alternatives have been pursued because of the urgent need for quality healthcare. Nursing home care has established itself as one of the most used services for elderly people in the last decade. It not only provides supervision and assistance, but also embodies a philosophy of care and services promoting independence and dignity. Nursing home care has also triggered new advances in the healthcare environment, such as the assisted-living home care framework, which targets disorders that heavily affect daily motion, tackling a wide variety of solutions based on telecom and high-tech services.

By providing services for patients without leaving home within assisted-living facilities, Activity Recognition has broken into the market and quickly become relevant for researchers. Activity Recognition (AR) plays an important role in facilitating daily life. There is ongoing background work by several computer science communities, and it connects to many different fields of study, including medicine, human-computer interaction and sociology. The focus is almost entirely on human activity, which is inherently complex due to the wide variety of movements and gestures. Applications range from smart homes, on-demand information systems, surveillance and monitoring systems, and interactive interfaces for mobile services and games, up to healthcare applications for both inpatient and outpatient treatment. AR research, particularly for elder care and health applications, has demonstrated that it is possible to recognize a variety of activities such as driving, walking or climbing stairs. By recognizing activities, long-term and chronic patients might be monitored daily and supervised when undesirable patterns such as falls occur and a proper diagnosis is needed.

The home care industry has taken advantage of common appliances to deliver these breakthroughs in the easiest way. As a suitable solution, inertial sensing is crucial for inferring human activity, not only for home-based rehabilitation but also for monitoring activities and physical exercises. Sensors allow systems to record data continuously and report an activity together with its duration.

Such sensors are combined into a single device often used to gather this activity data. An Inertial Measurement Unit is composed of an accelerometer, a gyroscope and a magnetic field sensor, which can provide signal information while an activity is being performed, by placing them at certain on-body positions. IMUs used as wearable sensors distributed throughout the body report individual information representative of a particular body part. As a consequence of the breakthroughs achieved with wearable sensors, smartphones now also embed several built-in sensors in one integrated device. These sensors are used in software applications as a measurement platform, capturing information for recognition technologies. The availability of these sensors in mass-marketed communication devices creates exciting new opportunities for data mining applications.

However, wearable sensors, and especially mobile phones, are usually worn in natural conditions rather than attached to a specific part of the body. In fact, most people, and especially the elderly, do not carry mobile phones in any particular way, but wear them with comfort in mind. Most research has relied on the unrealistic assumption of wearable sensors and mobile phones in a predetermined setup, and it has been demonstrated that recognizing activities under real conditions then fails [53]. Systems built around ideally attached on-body sensors therefore become incapable of matching the signal patterns to the actual activity when that ideal procedure is not followed. In addition, the greatest difficulty lies in the displacements and variations that occur while an activity is being performed, since people in their daily life put smartphones into any pocket position.

There is a large body of work on the use of sensing for Activity Recognition and behavior profiling, but so far there are no comparative studies researching optimal sensor placement for activities of daily living. In contrast to some approaches, our work mainly concentrates on optimizing the AR process by


generalizing the on-body sensor position in order to deduce where the sensor is placed on the body, without assuming fixed configurations. The future of inertial sensing lies in sensor integration within our daily life, with systems able to recognize activities regardless of the on-body position. Smart textile research represents a new model for generating creative and novel solutions for integrating electronics into unusual environments, and will result in new discoveries that push the boundaries of science forward [19].

Throughout this work, we understand on-body positioning as the combination of the location and the orientation of the device. These issues should be addressed individually, making use of specific wearable sensors for each one, since there are well-established differences between them, including the accelerometer's lack of sensitivity for measuring orientation. Along with this, considerations such as natural body motions and rotation angles prohibited by torsional rigidity will be taken into account in the following sections.


3 Inertial sensing concepts

3.1 IMU signals

Inertial sensor technology grows day by day and will advance considerably over the next few years. Inertial sensors are commonly applied to measuring acceleration, angular velocity and magnetic fields.

An IMU is an electronic device that measures and reports velocity, orientation and gravitational forces using a combination of accelerometers and gyroscopes. The term IMU is widely used to refer to a box containing three accelerometers (one per axis) and three gyroscopes; thus, a calibrated IMU measures 3D angular velocity, 3D acceleration and gravity with respect to the sensor housing.

There are multiple applications making the most of devices equipped with IMUs. Real-time signal readings from accelerometers and gyroscopes are often required in video game consoles, vehicle-installed inertial guidance systems, commercial and military water-going vessels, smartphones, PDAs (Personal Digital Assistants) and other portable devices. In fact, new frontiers are being explored within inertial sensing environments, including the concept of "smart textiles", in which miniature sensors are integrated into clothing [19].

Accelerometers and gyroscopes are used in the vast majority of activity classification studies, since most software applications take advantage of the information provided by combining accelerometer and gyroscope simultaneously. However, other inertial sensors, including magnetometers, may provide relevant information as well. As a consequence, a number of approaches have obtained data from the accelerometer, gyroscope and magnetic field sensor as a whole to deduce context within the AR framework [29]. We have deemed it appropriate to combine the three of them, as different features supply extra information about the location and orientation of the device.

As a first contact, and to introduce beginners to IMU sensor signal behaviour, we have created a dataset of basic motions performed with a smartphone (see section 5 below). In addition, the basics of each type of sensor are described in the following sections, based on the information provided in [3].

3.1.1 Accelerometer

Accelerometers are the most widely used transducers to measure vibration. An important reason is that, at high frequencies, displacement and velocity readings drop off quickly into instrumental noise, which often makes acceleration the easiest vibration characteristic to measure. In addition, accelerometers offer relevant advantages: they are robust, inexpensive, self-generating and easy to calibrate. Several types are currently found on the market, including industrial grade, high vibration, premium grade and triaxial accelerometers. We have focused on the triaxial accelerometers included in smartphones.

Triaxial accelerometers measure vibration along the three axes X, Y and Z. They have three crystals positioned so that each one reacts to vibration along a different axis, and the output consists of three signals, each representing the vibration along one of the three axes. The coordinate system of all sensors is defined relative to the screen of the phone, and its axes are not swapped when the device's screen orientation changes.


Figure 1: The coordinate-system.

To explain the physical basis of an accelerometer embedded in an IMU, picture a ball enclosed inside a box, subjected to motion forces and gravity and acting as a spring system. The measured acceleration is a combination of the forces applied to the sensor itself and the force of gravity, which always influences the measurement. Forces due to the effect of gravity constitute the "static acceleration", which provides a measure of the sensor's inclination with respect to the vertical. Furthermore, "dynamic acceleration" results from the movement and vibration of the body to which the accelerometer is attached. By sensing the amount of dynamic acceleration, the way the device is moving may be analyzed as follows [4].

Figure 2: Physical functioning of an accelerometer.
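The static/dynamic decomposition described above can be sketched with a simple first-order exponential low-pass filter. This is a minimal illustration, not the thesis' actual processing pipeline; the smoothing factor `alpha` is an arbitrary illustrative choice.

```python
# Separate "static" (gravity) from "dynamic" (motion) acceleration with a
# first-order exponential low-pass filter. alpha is illustrative, not tuned.
def split_static_dynamic(samples, alpha=0.1):
    gravity = samples[0]                      # initial gravity estimate
    static, dynamic = [], []
    for a in samples:
        # low-pass: track the slowly varying component (gravity)
        gravity = tuple(alpha * ai + (1 - alpha) * gi
                        for ai, gi in zip(a, gravity))
        static.append(gravity)
        # high-pass: the remainder is motion-induced acceleration
        dynamic.append(tuple(ai - gi for ai, gi in zip(a, gravity)))
    return static, dynamic

# A device lying flat and still: gravity appears entirely on the z axis.
static, dynamic = split_static_dynamic([(0.0, 0.0, 9.81)] * 50)
print(static[-1])   # ≈ (0.0, 0.0, 9.81): pure static acceleration
print(dynamic[-1])  # ≈ (0.0, 0.0, 0.0): no motion component
```

In practice the cutoff (here, `alpha`) must be chosen against the sampling rate and the slowest motion of interest, since too aggressive a filter leaks real motion into the gravity estimate.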

The main equations of the accelerometer arise from the measurement of specific force, also known as proper acceleration. Proper acceleration is not the same as linear acceleration, since it is relative to a free-fall, or inertial, observer who is momentarily at rest with respect to the object being measured. Gravitation, therefore, does not cause proper acceleration, since gravity acts upon the inertial observer that any proper


acceleration must depart from. A corollary is that all inertial observers always have a proper acceleration of zero.

Starting from basic physics, velocity is the change in position over time:

v = Δx/Δt

Deriving from this, acceleration is the change in velocity over time. In addition, the acceleration a body is subjected to is proportional to the force acting upon it:

a = F/m

Furthermore, the magnitude of the acceleration quantifies its amount without directional information, making it insensitive to the orientation of the mobile phone:

Ad = −g − ∑F/mass

Therefore, when the device lies flat on a table, the magnitude read is g = 9.81 m/s². Conversely, when the device is in free fall towards the ground at 9.81 m/s², the measured acceleration is 0 m/s². In order to read the real acceleration, it is necessary to eliminate the component of gravity.
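The two cases above can be checked numerically. The sketch below computes the orientation-independent acceleration magnitude and reproduces the flat-on-table, free-fall and tilted-device readings just described (values in m/s²); the tilt angle is an arbitrary example.

```python
import math

def magnitude(ax, ay, az):
    """Orientation-independent norm of the acceleration vector."""
    return math.sqrt(ax * ax + ay * ay + az * az)

# Flat on a table: the sensor reads the reaction to gravity.
print(round(magnitude(0.0, 0.0, 9.81), 2))   # 9.81
# In free fall: proper acceleration is zero.
print(round(magnitude(0.0, 0.0, 0.0), 2))    # 0.0
# Tilted device: the per-axis components change, the magnitude does not.
tilt = 0.3  # rad, arbitrary example angle
print(round(magnitude(9.81 * math.sin(tilt), 0.0, 9.81 * math.cos(tilt)), 2))  # 9.81
```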

By gathering data from IMUs through a software application, signal readings from the sensors vary in real time, describing a pattern matched to the activity performed. Conversely, from any particular signal, the type of activity may be identified, as well as a certain on-body position. Since this is the aim of our research, a dataset of basic motions was previously created to become familiar with signal behaviour on mobile phones (see section 5 below). As an example, consider raising a smartphone up and down at two different rates (see quasi-static motions 2.1 and 2.3 in section 5).

Figure 3: Quasi-static motions 2.1 and 2.3 performed with the accelerometer (acceleration in G vs. time in min).

As may be observed, each axis of the accelerometer is represented by a colour that changes depending on the orientations induced on the mobile phone. Furthermore, in the second plot a larger variation in each axis may be noticed, due to the greater acceleration associated with a higher rate.


3.1.2 Gyroscope

Gyroscopes allow devices to measure and maintain orientation. Gyroscopic sensors may monitor and control device position, orientation, direction, angular motion and rotation. Applied to a smartphone, they commonly support gesture recognition functions. Additionally, gyroscopes in smartphones help to determine the position and orientation of the device while being insensitive to gravity and linear acceleration.

A mechanical gyroscope consists of at least three rings of descending size with a spinning wheel placed in the middle. Each ring spins in a different direction but maintains an orientation parallel to the ring inside it. The rings follow a concept in physics known as the conservation of angular momentum.

The fundamental equation describing the gyroscopic behavior is:

τ = dL/dt = d(Iω)/dt = Iα

where the pseudovectors τ and L are, respectively, the torque on the gyroscope and its angular momentum, the scalar I is its moment of inertia, the vector ω is its angular velocity, and the vector α is its angular acceleration. It follows that a torque τ applied perpendicular to the axis of rotation, and therefore perpendicular to L, results in a rotation about an axis perpendicular to both τ and L. This motion is called precession. The angular velocity of precession Ωp is given by the cross product:

τ = Ωp × L

Precession may be demonstrated by placing a spinning gyroscope with its axis horizontal and supported loosely (frictionless toward precession) at one end. Instead of falling, as might be expected, the gyroscope appears to defy gravity by keeping its axis horizontal when the other end of the axis is left unsupported; the free end of the axis slowly describes a circle in a horizontal plane, producing the precession turning. The torque on the gyroscope is supplied by a couple of forces: gravity acting downward on the device's centre of mass, and an equal force acting upward to support one end of the device. The rotation resulting from this torque is not downward, as might intuitively be expected, causing the device to fall, but perpendicular to both the gravitational torque (horizontal and perpendicular to the axis of rotation) and the axis of rotation (horizontal and outwards from the point of support), causing the device to rotate slowly about the supporting point.

Figure 4: Motions of a gyroscopic sensor.

Similar to triaxial accelerometers, the three gyroscopes are placed in an orthogonal pattern, measuring rotational position with reference to an arbitrarily chosen coordinate system. Considering the coordinate system


in a mobile phone described previously (see Figure 1), the gyroscopes share it with the rest of the sensors at the same position. Around the X, Y and Z axes, all values are in radians/second and measure the rate of rotation, which is positive in the counter-clockwise direction. That is, an observer looking from some positive location on the x, y or z axis at a device positioned at the origin would report positive rotation if the device appeared to be rotating counter-clockwise. In practice, gyroscope noise and offset introduce errors which need to be compensated using the information from the other sensors [3].
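Orientation about one axis is obtained from these rates by integration, and the offset problem mentioned above shows up directly as drift. A minimal sketch, with an arbitrary sampling rate and bias value chosen for illustration:

```python
import math

# Dead-reckon the rotation angle about one axis by integrating the gyroscope
# rate (rad/s, positive counter-clockwise). dt is the sample period in seconds.
def integrate_rate(rates, dt):
    angle, angles = 0.0, []
    for w in rates:
        angle += w * dt          # rectangular (Euler) integration
        angles.append(angle)
    return angles

# A quarter turn: pi/2 rad/s held for 1 s, sampled at 100 Hz.
angles = integrate_rate([math.pi / 2] * 100, 0.01)
print(round(angles[-1], 3))      # 1.571 rad, i.e. 90 degrees

# A small constant offset (gyro bias) accumulates without bound:
biased = integrate_rate([0.01] * 6000, 0.01)   # 60 s of a 0.01 rad/s bias
print(round(biased[-1], 2))      # 0.6 rad of accumulated error
```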

In general, gyroscopes serve in several environments, including computer pointing devices (in effect a mouse), the racing car industry, motorcycles, Segways, virtual reality, autopilots and artificial horizons, robotics and anti-roll stabilisers. Though gyroscopes use the same basic components and mechanisms, there are many types of gyroscopes specific to their purposes: fiber optic gyroscopes use the interference of light passed through optical fiber to determine orientation, while vibrating structure gyroscopes use vibrations of their internal rings to measure orientation and balance.

Proceeding similarly to the motion performed with the accelerometer (see section 3.1.1), an example of gyroscopic signals is shown as well.

Figure 5: Quasi-static motions 2.1 and 2.3 performed with the gyroscope (rate in radians/second vs. time in min).

Comparing the information acquired from the accelerometer and the gyroscope (see Figure 3), it may be observed that the higher the rate of a motion, the noisier the gyroscope measurement becomes. Looking at the slowest motion, the data from the two sensors seem similar, whereas those belonging to the highest rate barely overlap.

This shift occurs because accelerometers measure the rate of change in velocity, that is, the acceleration; at constant velocity there is no output signal. To indicate position, a signal is generated only when the device moves away from its initial position; otherwise there is no measurement.

For this reason, combining both sensors is recommended when different rates may occur. The gyroscope is not free from noise, but when measuring rotation it is less sensitive to linear mechanical movements, the type of noise an accelerometer suffers from. However, gyroscopes have other problems, such as drift (not coming back to the zero-rate value when rotation stops). Nevertheless, by fusing data from the accelerometer and the gyroscope, we may obtain a better estimate of the current device inclination than would be obtained using the accelerometer data alone.
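One common way of fusing the two estimates is a complementary filter: the gyro-integrated angle is trusted in the short term and the accelerometer tilt angle corrects the drift. This is a generic textbook sketch, not the method used in this project; the weight k = 0.98, the 100 Hz rate and the 0.05 rad/s bias are illustrative values.

```python
# Complementary filter: blend the gyro-integrated angle (accurate short-term,
# but drifting) with the accelerometer tilt angle (noisy, but drift-free).
# k = 0.98 is a typical illustrative weight, not a tuned value.
def complementary(angle, gyro_rate, accel_angle, dt, k=0.98):
    return k * (angle + gyro_rate * dt) + (1 - k) * accel_angle

angle = 0.0
# Device held still at 0 rad while the gyro reports a constant 0.05 rad/s bias.
for _ in range(2000):                 # 20 s at 100 Hz
    angle = complementary(angle, 0.05, 0.0, 0.01)
print(round(angle, 4))  # settles near 0.0245 rad; pure integration would reach 1.0
```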

3.1.3 Magnetic field sensor

The magnetic field sensor is the most popular type of motion sensor after the accelerometer and the gyroscope, and more and more smartphone, handset and tablet manufacturers have been equipping their devices with one in recent years.

The Android platform provides two different sensors for determining the position of the mobile phone: the geomagnetic field sensor and the orientation sensor. Focusing on the geomagnetic field sensor, it is


useful for determining a mobile phone's physical position with regard to the earth's magnetic North. Thereby, by providing raw field strength data from the three coordinate axes over time, magnetometers may enable accurate orientation measurement about the vertical axis.

The magnetometer measures the strength and direction of the local magnetic field. The measured field is a combination of the earth's magnetic field and any magnetic field created by nearby objects. The three field strength components that a 3D magnetic field sensor outputs represent a vector that is tangential to the magnetic field line at the sensor location; geomagnetic field strengths are supplied in µT. Comparing the magnetometer with an analog compass, both point north; what they really do, however, is orient themselves parallel to the tangent of the magnetic field line at the specific location. Thereby, if the sensor is oriented in such a way that one sensor axis (say the y axis) points in the direction of the magnetic field (which is tangential to the field line), the sensor reading will be B(t) = (0, b, 0). Generalizing to arbitrary orientations of the sensor with respect to the field line, the equation is as follows.

Bn(t) = ‖B(t)‖ · cos(ϕn)

The angle ϕn(B(t)) = arccos(Bn(t)/‖B(t)‖) is referred to the n-th axis and the magnetic field strength vector measured at time t.

As a matter of fact, magnetic field measurements are not always reliable, because of the inhomogeneity of the earth's magnetic field and the magnetic disturbances caused by electrical appliances and metallic objects in the environment. However, these issues do not arise when measuring the orientation of the sensor with respect to the local magnetic field lines, since that estimation does not involve absolute orientation.
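The per-axis angle formula above can be verified numerically. A small sketch, where the field strength values (in µT) are arbitrary illustrative numbers:

```python
import math

def axis_field_angle(b, n):
    """Angle phi_n = arccos(B_n / ||B||) between the n-th sensor axis and
    the local magnetic field vector b (components in uT)."""
    norm = math.sqrt(sum(c * c for c in b))
    return math.acos(b[n] / norm)

# Field aligned with the sensor's y axis, B(t) = (0, b, 0): angle is 0.
print(round(math.degrees(axis_field_angle((0.0, 48.0, 0.0), 1)), 1))  # 0.0
# Field halfway between the x and y axes: 45 degrees to either of them.
print(round(math.degrees(axis_field_angle((30.0, 30.0, 0.0), 0)), 1))  # 45.0
```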

On the one hand, magnetic field lines are curved, so linear (non-rotational) displacements of the sensor may change the angle between the sensor and the field lines. This forces us to deal only with the earth's magnetic field, owing to the smaller curvature of its field lines, although strong curvatures in environmental fields may likewise provoke errors. Moreover, noisy estimations are produced when the estimation depends on rotations around axes that are close to parallel to the field line [29].

We have also analyzed how magnetic signals behave by performing the same motion (raising the mobile phone up and down) as was carried out with the accelerometer and gyroscope in sections 3.1.1 and 3.1.2.

Figure 6: Quasi-static motions 2.1 and 2.3 performed with the magnetic field sensor (field strength in µT vs. time in min).

As for the different axes, the measured magnetic fields shift when the mobile phone turns into a new position. The signal variations, as well as the rather noisy samples, are related to the higher frequency and the lack of accuracy involved in the second motion, because of rotations around the axes.


3.2 General Activity Recognition chain

Human Activity Recognition (HAR) mainly targets the development of intelligent healthcare systems, connecting the human behaviour modeling domain with the human-machine interaction domain [20]. Recognizing human gestures and movements has become a complex task, given the vast range of particular natural behaviours present in human beings. In fact, the problem becomes even harder when it is linked to disorders which heavily affect natural motions and introduce unexpected conditions. A HAR model built under free-living conditions for a specific subject often cannot yield accurate results when used on a different person.

This hinders the implementation of a single human behaviour model and, therefore, most approaches involve a multi-stage AR procedure in which each part is continuously optimized. In the multi-stage AR chain (see Figure 7), signals are first acquired from the sensors and then go through several stages until the activity is properly recognized at the end.

Figure 7: General Activity Recognition chain [13].

While a continuous real-time activity profile is produced with body-fixed sensors, the AR system starts with an initial acquisition of raw sensor data containing hidden information and noisy samples; the three acceleration axes are then preprocessed and turned into the acceleration magnitude.

From that point on, the magnitude is divided into smaller time segments by applying windowing techniques. This is a sensitive procedure, since the determination of a properly sized window depends on parameters such as the type of activity or the dismissal of samples corresponding to idle periods. Three different techniques are used: sliding, event-defined and activity-defined windows. The sliding window approach divides the signal into segments of fixed length with no gaps between windows. While sliding windows do not require signal pre-processing, and are therefore the most frequently selected by researchers, event-defined windows must apply pre-processing to locate certain events. Such events are not equally spaced in time, so different approaches have been proposed for determining a specific window by using search windows ([8],[52],[25]). It is even possible to identify times at which adjacent components of the trunk acceleration change sign ([7],[59],[60]). When activity-defined windows are used, the times at which the activity changes have to be identified ([42],[51]), together with an optimal window size. On this account, previous approaches have employed a size of 0.25 s [54] for a smaller window, matched the time size of adjacent windows ([31],[46]), or used bigger window sizes of 6.7 s [31]. Focusing on a specific activity like walking, Kunze et al. [26] first propose a window walking


recognition of 1 s, then a walking recognition smoothing with a length of 10 s, and a walking segment localization that uses the results of the previous recognition to localize walking segments long enough to allow reliable recognition; such segments last at least a few tens of seconds and no longer than 2 or 3 min. They then apply frame-by-frame location recognition, in which a sliding window of 1 s is used in each segment identified as a relevant walking event. Finally, a majority decision is performed on the frame-by-frame location classification.
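The sliding-window segmentation described above can be sketched in a few lines; window size and step here are in samples, and the example values are arbitrary:

```python
def sliding_windows(signal, size, step):
    """Split a signal into fixed-length windows. step == size gives
    back-to-back windows with no gaps; step < size gives overlap."""
    return [signal[i:i + size]
            for i in range(0, len(signal) - size + 1, step)]

samples = list(range(10))
print(sliding_windows(samples, 4, 4))       # no gaps: [[0,1,2,3], [4,5,6,7]]
print(len(sliding_windows(samples, 4, 2)))  # 4 windows with 50% overlap
```

Incomplete trailing windows are simply dropped here; whether to pad or discard them is one of the design choices the window-size discussion above refers to.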

Subsequently, the AR model proceeds with an analysis based on threshold quantization of the acceleration, and magnitude-based features are extracted to characterize the windows. The resulting estimated feature pattern is then fed to the classification schemes.

Features derived by analyzing aspects of the movements and gestures that produce characteristic body-worn sensor signals are considered heuristic features. To distinguish static postures and postural transitions in the absence of motion, the acceleration angle relative to the vertical component is often applied as a discriminative feature ([21],[41]). Since gyroscopes tackle orientation issues, features from those sensor signals may identify postures and transitions ([40],[41]). Generally, movement patterns result in variations in acceleration segments. Methods such as the Signal Magnitude Area (SMA), which quantifies the level of intensity of physical activity [37, 45], the peak-to-peak acceleration [36], the mean rectified value [16, 17] and the root mean square [58] allow distinguishing between static and dynamic activities by quantifying the acceleration magnitude. In addition, heuristic features are used to deal with unexpected situations such as falls, by extracting characteristics of velocity [14] and orientation. Robust methods to accurately differentiate between daily activities and falls using body-worn sensors remain an ongoing area of study.
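Three of the intensity measures named above (SMA, peak-to-peak, RMS) can be sketched directly; the synthetic "still" and "moving" windows below are placeholders, not real sensor data:

```python
import math

def sma(ax, ay, az):
    """Signal Magnitude Area: mean of the summed per-axis absolute values,
    a common intensity measure for telling static from dynamic activity."""
    return sum(abs(x) + abs(y) + abs(z)
               for x, y, z in zip(ax, ay, az)) / len(ax)

def peak_to_peak(samples):
    return max(samples) - min(samples)

def rms(samples):
    return math.sqrt(sum(s * s for s in samples) / len(samples))

# Synthetic z-axis windows (m/s^2): standing still vs. a periodic motion.
still_z = [9.81] * 120
moving_z = [9.81 + 2.0 * math.sin(i / 3.0) for i in range(120)]
print(peak_to_peak(still_z))         # 0.0
print(peak_to_peak(moving_z) > 3.0)  # True: clearly a dynamic window
```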

Otherwise, features that do not depend on aspects of individual movements and postures, such as time-domain, frequency-domain and time-frequency features, may also be selected for classification algorithms. Statistical features including the mean, variance, median and standard deviation are commonly computed over sensor data windows. Other approaches have used a cross-correlation coefficient to measure the similarity between an event-defined window and an activity pattern previously obtained by classifying the activity [32]. Frequency-domain features represent the amplitudes of the signal frequency components together with the distribution of the signal energy; such features allow distinguishing between simple acceleration patterns and more complex activities [32]. The complexity of selecting suitable features to optimize classification schemes has motivated methods developed for this purpose. One example of a feature selection method is forward-backward search, in which features are continuously added to and deleted from a larger set, so that, depending on the classification results, features may be included or removed [44].

The final stage of the procedure involves a classifier trained to produce a classification model. Once the selected features represent a specific window of signal data, they are used as inputs to classification methods. These range in complexity from threshold-based methods to advanced machine learning algorithms, such as hidden Markov models and neural networks, that associate patterns in the input features with each activity. The most popular machine learning techniques used in activity classification include threshold-based classifiers [15], hierarchical methods [22], k-nearest neighbour [38], support vector machines (SVM) [18], Naive Bayes [32] and Markov models [35]. To evaluate the accuracy of a system, classifiers may be trained with data from all subjects except a few and then tested on the remaining ones. Alternatively, training is performed using a portion of the windows of a specific subject, while testing takes place on the remaining samples of the same subject. Combining different classifiers has recently gained popularity due to the enhancement obtained by using several techniques: with stacked generalization, for example, it is possible to train base classifiers and then use their predictions as data for a new learning stage, while boosting assigns weights to the training patterns and combines the performance of weak classifiers [49].
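The final train-then-classify step can be illustrated with a nearest-centroid classifier, a much simpler stand-in for the techniques cited above (kNN, SVM, Naive Bayes); the feature vectors below are synthetic placeholders, not real sensor data:

```python
import math

# Minimal nearest-centroid classifier over per-window feature vectors.
def centroid(vectors):
    return [sum(col) / len(vectors) for col in zip(*vectors)]

def classify(x, centroids):
    # Assign x to the label whose class centroid is closest.
    return min(centroids, key=lambda label: math.dist(x, centroids[label]))

train = {
    "walking": [[1.0, 0.9], [1.1, 1.0]],   # e.g. [SMA, peak-to-peak], made up
    "resting": [[0.1, 0.0], [0.0, 0.1]],
}
centroids = {label: centroid(vs) for label, vs in train.items()}
print(classify([0.95, 1.05], centroids))  # walking
print(classify([0.05, 0.02], centroids))  # resting
```

For the subject-wise evaluation described above, the `train` dictionary would be built from all subjects but the held-out ones, and `classify` applied to the held-out subjects' windows.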

In general, most approaches follow the general Activity Recognition chain to aim for reliable results, and each of its stages is still expected to be the subject of ongoing study over the coming years.


4 Issues in Activity Recognition

The relevance of on-body sensor positioning lies in enabling appliances to play a significant role within the AR model. While subjects should not have to watch out for whether the sensor is properly fixed, situations such as carrying a mobile phone in a handbag or a call event make the recognition of the activity even more difficult.

The same situation occurs when body-worn sensors are used to monitor free-living activities, as complexities about location and orientation arise from displacement events and deviations far from the ideal on-body sensor positioning, together with self-attachment by the user. Variations with respect to the general coordinate system hinder the proper recognition of the sensor's on-body placement and the activity reported. The aim of this project is to properly determine the position and location of the sensor and to overcome such limits. Taking into account that different types of sensors provide uncorrelated data, we mainly address the location issue with the acceleration, and the orientation with gyroscopic and magnetic information. Therefore, in the following sections, we will study location and orientation separately, going into depth on both.

4.1 On-body sensor location

Accelerometer data is heavily affected by varying locations and orientations. When a sensor device is deployed at varying locations but without shifts in orientation, the accelerometer experiences different forces, because the movement patterns of different body parts are distinct even while the user is doing the same activity. On the other hand, when the device is deployed with varying orientations but at the same location, the accelerometer experiences the same force. This is because the force decomposed along the coordinates of the device is related to the angles between the resultant force and the three axes; thus each acceleration axis has a different component when the orientation shifts, but the combined acceleration magnitude always has the same value for a certain body part. As a consequence, the acceleration magnitude is insensitive to orientation changes, and accelerometers therefore need to be combined with other sensors. Moreover, using a wide range of positions for accelerometers may induce errors, since a change in position within the daily living context could lead to a shift in the received signal with regard to the standardized signal predicted by the AR system.
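The orientation invariance of the magnitude can be verified by applying an explicit rotation to a reading; the reading and rotation angle below are arbitrary illustrative values:

```python
import math

def rotate_z(v, theta):
    """Rotate a vector about the z axis: a pure orientation change."""
    x, y, z = v
    return (x * math.cos(theta) - y * math.sin(theta),
            x * math.sin(theta) + y * math.cos(theta),
            z)

def norm(v):
    return math.sqrt(sum(c * c for c in v))

a = (1.2, -0.4, 9.6)                 # an arbitrary raw accelerometer reading
r = rotate_z(a, math.radians(73))    # same motion, device held differently
print(abs(norm(a) - norm(r)) < 1e-9)  # True: the magnitude is unchanged
```

The per-axis components of `r` differ from those of `a`, which is exactly why orientation-dependent features break while magnitude-based ones survive.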

Most researchers have developed approaches that deal with the localization issue through the elimination of an acceleration component. Kunze et al. [26] propose a method that derives the location of an acceleration sensor placed on the user's body solely from the sensor's signal; they base their analysis on the norm of the acceleration vector, which is independent of the sensor orientation. In an effort to design a lightweight and effective signal processing method for eliminating the gravity acceleration, Baek et al. [12] use a second-order Butterworth high-pass filter to extract the motion acceleration component in order to recognize the activity with a user interface on a mobile device.

There have been attempts to generalize recognition methods across specific appliances such as mobile phones. In order to create a system that is efficient regardless of the device used, several studies tackle the complexities triggered when signals are not recognized due to variations caused by the daily-living use of a device. Alanezi et al. [28] address how poor accuracy is a major barrier to the large-scale proliferation of the context-aware applications available on smartphones. They show that smartphone positions significantly affect the values of the sensor data being collected, which has a significant impact on the accuracy of the application. The paper demonstrates that the accuracy of an existing context-aware service is enhanced when run in conjunction with the proposed smartphone position discovery service.

Focusing on wearable sensors, Lester et al. [24] classify activities regardless of orientation and positioning with a combination of sensors that includes magnitudes such as audio and barometric pressure. They identify three main issues: location sensitivity, variations across users and the required sensor modalities. Within this group, Atallah et al. [34] investigate the ideal sensor placement for a given group of activities by extracting several time-frequency features from wearable accelerometers. Their experiment was carried out with seven wearable accelerometer positions, using statistical and time-frequency features that are not strongly affected by orientation. Some of these features are the averaged multidimensional


entropy, the main FFT frequency, the main frequency over the total FFT energy, and the averaged mean of the cross-covariance between each pair of axes. Using KNN and Bayesian classifiers, they obtain fairly reliable results, which oscillate according to the level of the activity performed.

Studying the effectiveness of activity classifiers in a multi-sensor system when the wearing positions of the sensors are varied, Maurer et al. [57] base their activity recognition and monitoring system on the eWatch. Locations including the belt, shirt pocket, trouser pocket, backpack and necklace are analyzed by gathering data from the X and Y axes of the accelerometer. The values were combined by calculating the squared length of the acceleration vector, thereby reducing the dependency on the orientation. Their conclusions show that the bag position with time-domain features is the best recognized by the system, with an efficiency of 92.8%.

Concerning recognition systems based on mobile phones, Grokop et al. [33] classify activities and device placements by fusing data from an accelerometer and multiple light sensors attached to fixed body positions. Along with this, they use smartphones placed in unknown on-body positions including pocket, holster and hand. With seven fixed positions and over six activities, the figures achieved are between 92.6% and 66.8%, respectively.

Besides using smartphones, Kunze and Lukowicz [27] choose a set of sensors integrated in everyday appliances such as PDAs, watches, headsets and so on. Their approach is based on the fact that different body parts show distinct movement patterns and varying degrees of freedom. Using different window sizes and an HMM (Hidden Markov Model) classifier, they achieve a maximum accuracy of 82%, although the classifier fails to predict the right class when there is little movement, and fixed positions are used.

Commonly, on-body localization failures in signal recognition systems occur because many approaches set up fixed positions within unnatural environments. The axes of triaxial sensors shift completely when a sensor is fixed to the upper left arm instead of the upper right arm, or when the sensor's fixed position varies because of movement or an incorrect setup by the user. A further difficulty arises when discriminating parts of the body in which sensors have the same orientation between their axes, such as arms or legs; distinguishing between the upper and lower parts then relies on observing differences in the signal amplitudes.

4.2 IMU sensor orientation

Sensor orientation has become an issue in its own right within the AR field, since the analysis of orientation is of great interest on its own.

Recalling the accelerometer basics presented in section 3.1.1, the accelerometer measures the difference between the linear acceleration and the earth's gravitational field vector. When there is no linear acceleration, the pitch and roll orientation angles may be obtained from the measurement of the rotated gravitational field vector. Accelerometers are, however, insensitive to rotation about the earth's gravitational field vector itself.
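Under the no-linear-acceleration assumption above, pitch and roll can be recovered from the measured gravity vector. One common convention (there are several; this sketch is mine, not the thesis's code) is:

```python
import math

def pitch_roll_from_gravity(ax, ay, az):
    """Estimate pitch and roll (radians) from a static accelerometer
    reading (ax, ay, az), assuming the only measured acceleration is
    gravity. Yaw (rotation about the gravity vector itself) cannot be
    observed with an accelerometer alone.
    """
    roll = math.atan2(ay, az)                    # rotation about X
    pitch = math.atan2(-ax, math.hypot(ay, az))  # rotation about Y
    return pitch, roll

# Device lying flat on a table: gravity along +Z, so pitch and roll
# are both (numerically) zero.
print(pitch_roll_from_gravity(0.0, 0.0, 1.0))
```

Note that any rotation about the gravity vector leaves (ax, ay, az) unchanged, which is exactly the insensitivity described above.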

When considering the orientation of a consumer product such as a smartphone, the coordinate system used is aligned with the mobile phone; since the accelerometer may be mounted at any orientation, it may sit at an arbitrary angle within the device. Conventionally, the X axis is aligned along the body of the smartphone, the Y axis at right angles to it, and the Z axis with gravity when the smartphone is placed flat on a table. Regarding the orientation of the appliance, any shifts produced by rotations in roll Φ, pitch θ and yaw ψ are directly related to the X, Y and Z axes, respectively.


Figure 8: Roll, pitch and yaw [50].

Focusing on the background of previous orientation studies, Kunze et al. [30] demonstrate that, when variations of device placement occur, the acceleration caused by rotational motion is location sensitive, and that combining a gyroscope with an accelerometer helps to reduce this sensitivity.

As a consequence, adding a gyroscope to an accelerometer might make recognition more robust against displacements, such as a mobile phone in a trouser pocket shifting from deep down on one side to the other. Another possibility for dealing with orientation is to make use of the gyroscope and the magnetic field sensor, which smartphones are equipped with as well. Their research is based on combining the angular velocity computed from the gyroscope with the position determined through the magnetic field sensor in order to infer the orientation of the mobile phone.

The signal of a body-worn accelerometer is the sum of three components: acceleration due to rotation, acceleration due to translation, and acceleration due to orientation with respect to gravity. Reddy et al. [47] propose to recognize transportation modes using features extracted from the series of acceleration magnitudes, i.e. the value obtained by synthesizing the accelerations of the three axes. Their experimental results show that the acceleration decomposition-based method performs slightly better than the acceleration synthesization-based method. However, Wang et al. [61] show that the synthesization-based method outperforms the decomposition-based method for recognizing six activities; they demonstrate that the gravity estimation error degrades the performance of the decomposition-based method.

Other papers also tackle the issue of varying orientations. Yang et al. [62] propose to compute the vertical and horizontal components of each accelerometer reading to compensate for the effect of gravity, building on gravity-estimation work. However, Yang's work does not show the performance improvement of this approach compared to the case without the orientation-independent feature.

In order to deal with sensor displacements, Kunze and Lukowicz et al. [30] show how, within certain limits and with modest quality degradation, motion sensor-based activity recognition can be implemented in a displacement-tolerant way. They evaluate a set of synthetic lower-arm motions and illustrate the strengths and limits of their approach. Their work makes assumptions including accelerometer signal segments which are dominated by rotation and are possibly 'contaminated' with displacement noise. They introduce gyroscopes into their experiment, which are insensitive to displacement within a single body part but provide information on rotation only, ignoring translations and vertical orientation. As a consequence, the displacement issue cannot be solved with simple calibration gestures. They show how, within certain limits, proper recognition is possible with respect to sensor displacement within a single body part. By combining gyroscope and accelerometer, and ignoring in the latter all signal frames dominated by rotation, it is possible to remove placement sensitivity while retaining the relevant information. Their heuristic increases the displaced recognition rate from 24% for a displaced accelerometer (which had 96% recognition when not displaced) to 82% when combining both sensors. Used individually, the accelerometer reaches a rate of 33%, or 35% with a majority decision over the motions based on a KNN classifier, while the gyroscope shows better results with 43%-44%.

Within the orientation issue, another task arises: determining the horizontal component. The orientation in the vertical plane (the angle towards gravity) may be computed, but the main complexity of inferring the orientation of the device resides in knowing the horizontal component. Here, later work builds on the research of Mizell et al. [39], who estimate the constant gravity vector by averaging accelerometer samples. This vector enables estimation of the vertical component and of the magnitude of the horizontal component of the user's motion, independently of how the three-axis accelerometer system is oriented. They conjecture that the vertical acceleration component is sufficient information for most such activity detection, since the orientation of the horizontal component is unknown. It is also possible to deduce the direction onto which the horizontal component is projected by using the magnetic field vector.
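Mizell's averaging decomposition can be sketched as follows; a minimal illustration assuming the gravity estimate is simply the per-axis mean of the samples (helper names and values are mine, not from [39]):

```python
def mean3(samples):
    """Per-axis mean of a list of (x, y, z) samples."""
    n = len(samples)
    return tuple(sum(s[i] for s in samples) / n for i in range(3))

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def decompose(samples):
    """Split each sample into a vertical part (along the estimated
    gravity vector) and a horizontal remainder."""
    g = mean3(samples)          # gravity estimate by averaging
    g2 = dot(g, g)
    parts = []
    for s in samples:
        k = dot(s, g) / g2      # projection coefficient onto gravity
        vert = tuple(k * gi for gi in g)
        horiz = tuple(si - vi for si, vi in zip(s, vert))
        parts.append((vert, horiz))
    return g, parts
```

With purely vertical samples such as (0, 0, 1.1) and (0, 0, 0.9), the gravity estimate is (0, 0, 1) and the horizontal remainders vanish; only the magnitude of the horizontal part is recoverable, since the horizontal direction stays unknown, as stated above.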

In later research, Lukowicz and Kunze et al. [10] present a method to deduce the orientation, within the horizontal plane, of a mobile device carried in a trouser pocket from the acceleration signal acquired while the user is walking. Once they infer the vertical component, they project the accelerometer signal onto the plane perpendicular to the vertical gravity vector (the horizontal plane) and, after applying principal component analysis to the projected data points, obtain the direction in which the acceleration variation is greatest. This axis is parallel to the walking direction. Moreover, they differentiate whether the user is walking forward or backward.
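The PCA step can be sketched for points already projected onto the horizontal plane; for two-dimensional data the first principal component has a closed form, so no linear-algebra library is needed (a simplified illustration, not the authors' implementation):

```python
import math

def dominant_direction(points):
    """Return the unit vector of greatest variance (first principal
    component) of 2-D points. For horizontal-plane accelerometer data
    gathered while walking, this axis is parallel to the walking
    direction, up to sign.
    """
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    # Entries of the 2x2 covariance matrix
    sxx = sum((p[0] - mx) ** 2 for p in points) / n
    syy = sum((p[1] - my) ** 2 for p in points) / n
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    # Angle of the leading eigenvector of [[sxx, sxy], [sxy, syy]]
    theta = 0.5 * math.atan2(2 * sxy, sxx - syy)
    return math.cos(theta), math.sin(theta)
```

For points scattered along the 45-degree diagonal, the returned direction is (cos 45°, sin 45°); the sign ambiguity corresponds to the forward/backward distinction the authors resolve separately.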

Following the same line, Thiemjarus et al. [56] describe device orientation as a classification issue, using four predefined attachments and data acquired at a specific sensor orientation, and then applying the classifier to other input signals regardless of the device orientation.

To address the varying location and orientation issues simultaneously, Sun et al. [53] attempt to extract features that are independent of, or insensitive to, orientation change, using a single classifier for all physical activities in all pocket positions. They propose an orientation-independent sensor reading dimension which can relieve the effect of varying orientation on the performance of activity recognition. They assume that most people put their mobile phones in one of six pockets around the waist, and consider 4 possible orientations in which people normally put the mobile phone vertically into the front trouser pocket. Note that, after a mobile phone is placed inside the pocket, it may slip or rotate when the user moves. They investigate the influence of the four mobile phone orientations and use an additional feature, the magnitude of acceleration, to relieve such influences; their experimental results also show that orientation is itself a discriminative feature for the considered physical activities.

Alternatively, by rectifying the acceleration signals into the same coordinate system, Henpraserttae et al. [9] present two experiments dealing with the orientation and location tasks in conjunction: one with a device fixed on the waist in sixteen different orientations, and another with three different device locations while performing 6 activities. On the dataset with sixteen different device orientations, the experimental results illustrate that the method is significantly efficient. The basic idea is to transform all input signals into the same global reference coordinate system.
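The global-frame rectification idea can be sketched by building an orthonormal basis from the measured gravity vector (global "up") and the magnetic field vector (towards magnetic north), and expressing each device-frame reading in that basis. This is a hedged illustration; the helper names and sample values are mine, not from [9]:

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def global_frame(gravity, magnetic):
    """East/north/up unit vectors expressed in device coordinates."""
    up = normalize(gravity)
    east = normalize(cross(magnetic, gravity))
    north = cross(up, east)
    return east, north, up

def to_global(reading, frame):
    """Project a device-frame reading onto the global basis."""
    east, north, up = frame
    d = lambda u, v: sum(a * b for a, b in zip(u, v))
    return (d(reading, east), d(reading, north), d(reading, up))
```

Once every sensor reading is expressed in the same global east/north/up frame, the device may be mounted in any of the sixteen orientations and still produce comparable signals.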

The interest in solving the orientation problem is explained by the way a reference system is created from a training dataset of movements for a certain appliance: when this appliance completely varies its position, and even more so if the on-body location changes as well, the system may fail to recognize the activity performed.

4.2.1 Rigid body concept

A rigid body is an ideal solid body of finite size in which the relative position of any two given points remains constant in time, regardless of the external forces exerted on it. Any motion of a rigid body can be described as a combination of a translation and a rotation. Consequently, both concepts are of significant interest when dealing with the orientation task.

During a translation, every point in a rigid body is moved by exactly the same vector with exactly the same speed and acceleration. Thus, given a translatory motion, at any point in time during this motion, all points of a rigid body will have been moved by exactly the same vector.

In an analogous way it may be shown for rotation that the angular velocity vector (and the angular acceleration) are the same for all points of a rigid body during a rotation around an arbitrary point in space. For instance, if any part of the rigid body rotates by the same angle as the others, the angular speed ω and the angular acceleration must also be the same for all points.

To sum up, during a rotation of a rigid body around an arbitrary point in space, a gyroscope will produce the same signal no matter where in the rigid body it is placed. Herein resides the importance of the rigid body concept. This does not apply to accelerometers, since different points in a rigid body generally experience different non-zero acceleration vectors during a rotation.

During a pure translation, gyroscopes will provide no signal at all (there is by definition no rotational component), while accelerometers will all give the same readings no matter where they are placed.

In a rigid body, all points rotate with the same angular velocity ω and experience the same angular acceleration α. Thus, the gyroscope signal is invariant with respect to sensor displacement.
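The rigid-body property above can be illustrated numerically (invented values): during a pure rotation at angular velocity ω, every point sees the same ω (what a gyroscope measures), while the centripetal acceleration ω × (ω × r) depends on the sensor's offset r from the rotation axis:

```python
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def centripetal(w, r):
    """Centripetal acceleration w x (w x r) at offset r from the axis."""
    return cross(w, cross(w, r))

w = (0.0, 0.0, 2.0)       # rad/s, rotation about the Z axis
r_near = (0.1, 0.0, 0.0)  # sensor 10 cm from the axis
r_far = (0.3, 0.0, 0.0)   # sensor 30 cm from the axis

print(centripetal(w, r_near))  # (-0.4, 0.0, 0.0)
print(centripetal(w, r_far))   # (-1.2, 0.0, 0.0)
```

Both sensor positions report the same gyroscope value w, yet the accelerometer readings differ by a factor of three, which is exactly the displacement sensitivity discussed above.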

Taking this one step further, the human body is not exactly composed of rigid body parts; however, for many sensor positions and motions it can be a proper approximation. The most common deviations that occur in the human body arise in the following scenarios [30].

• Vibrations with short, intensive accelerations from soft body parts, such as some muscles and fat, which are not considered rigid. To deal with this sort of deviation, the system may exclude them from its activity recognition.

• Movements arising from active muscles, specifically large muscles, may cause motion signals incompatible with the rigid body approximation because of the different accelerations across the muscle. For instance, it would not be appropriate to fix a sensor on a well-developed biceps.

• Within the same body part, such as the arm, different deviations may appear depending on where the sensor is fixed. For instance, the lower arm, specifically the wrist, rotates about the axis of the arm differently from the upper part near the elbow: the rotation at the elbow involves less twisting than the rotation executed at the wrist.

Taking these scenarios into account, gestures involving such rotations carry significant discriminative information, and distinguishing different body locations may become a complex task. Consequently, the rigid body approximation has the advantage of enabling proper recognition with a displaced sensor as regards the orientation issue, but also the disadvantage of being location invariant as regards the location issue.

4.2.2 Consequences for displaced sensors

Regarding an acceleration signal, we need to differentiate between three contributions: the contribution caused by orientation with respect to gravity, the contribution caused by translation, and the contribution caused by rotation. The first two are location invariant; only the rotation component is location sensitive. Sensor displacement noise during rotational movement depends only on the amount of displacement with respect to the center of rotation; it is independent of the actual angular velocity or angular acceleration.

The distortion of the acceleration signal is thus related to rotation, as the other components are not affected by displacement.

It is only during rotations that shifted accelerometers produce different signals, whereas the gyroscope signal is insensitive to such shifts. By using the ratio of acceleration to rotation, it is possible to determine whether a motion is translation or rotation dominated, and hence to select either acceleration or gyroscope features. Therefore, the gyroscope can discriminate whether the motion is due to translation or rotation; moreover, combined with the magnetic field sensor, which is rotation sensitive, the initial position for the horizontal component can be obtained to deduce the device orientation.
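The frame-selection heuristic above can be sketched by comparing gyroscope energy against accelerometer energy in a window; the threshold, units and sample values are illustrative assumptions, not taken from the cited work:

```python
import math

def dominant_motion(acc_window, gyro_window, threshold=1.0):
    """Label a window 'rotation' if the mean gyroscope magnitude,
    relative to the mean accelerometer magnitude, exceeds the
    threshold; otherwise 'translation'. A rotation-dominated window
    would then be handled with gyroscope features, a
    translation-dominated one with accelerometer features.
    """
    mag = lambda v: math.sqrt(sum(x * x for x in v))
    acc = sum(mag(s) for s in acc_window) / len(acc_window)
    gyro = sum(mag(s) for s in gyro_window) / len(gyro_window)
    return 'rotation' if gyro / acc > threshold else 'translation'

# A wrist twist: strong gyroscope signal, modest acceleration.
print(dominant_motion([(0.1, 0.0, 0.0)] * 4, [(3.0, 0.0, 0.0)] * 4))
# prints: rotation
```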


5 Initial movement dataset

The motivation for creating a basic movement dataset is to provide an easier introduction for researchers starting out in the Activity Recognition field. We considered it important not only to have the theoretical AR awareness but also to become familiar with signal recognition through visualization. In fact, the first methodology applied in this study (see section 7.1) has been the visual inspection of signals to detect the best-performed activity pattern. Detecting properly visualized window patterns may be a useful initial step towards making correct assumptions.

We have used an Android smartphone (HTC Sensation) with three embedded tri-axial sensors: accelerometer, gyroscope and magnetic field sensor. Through an application previously developed for this purpose, real-time data is recorded while the application is running. In the following sections, the file distribution and the data collection are explained in detail.

5.1 File distribution

The file distribution is organized in a repository called “Initial_movement_dataset”, where the following files may be found:

• IMU_signal_data: contains the recorded data from the two types of exercises performed, quasi-static motions and static motions. Within each file, the whole activity set corresponding to that type of motion may be found. Files are numbered after the exercises concerned, because of the similarity of the exercises within each of the two motion types.

• IMU_signal_image: contains the signal images that represent the whole recorded data. All the signal images shown in this document have been taken from this repository; they were generated in Matlab and converted to Portable Document Format. In addition, a video has been made for each exercise in order to give an easier description of the movement performed.

Activity set

Static motion                                   Quasi-static motion
1 (3:52 min)                                    1 (2:11 min)
2 (2:43 min)                                    2: from 2.1 (5:29 min) until 2.3 (1:39 min)
3: from 3.1 (3:10 min) until 3.3 (4:23 min)     3 (2:15 min)
4: from 4.1 (2:54 min) until 4.16 (2:21 min)    4 (2:25 min)
                                                5 (3:20 min)
                                                6 (2:59 min)
                                                7 (3:09 min)
                                                8 (6:02 min)

Figure 9: Activity summary of the Initial Movement dataset.

When the mobile application used to gather the information is running, it creates a unique file per session with signal data from each sensor; thus, each file within the repository contains the performed activity in its Matlab data files.

The number of such files depends on the activity duration. There is also a “log” file with details about the triaxial sensors (accelerometer, gyroscope and magnetic field sensor) and the duration of each session, three Matlab scripts to represent the signal data and, finally, a video in two qualities in order to give a complete understanding of the activity undertaken.


Figure 10: An example of a recorded video exercise for the initial movement dataset.

5.2 Data collection

The dataset is composed of two types of movement exercises: quasi-static motions and static motions. The exercises have been carried out by picking the mobile phone up, twisting it about its axes, changing the rate of velocity or using instruments to set certain rotation angles. For each exercise, real-time data from the three types of sensors (accelerometer, gyroscope and magnetic field sensor) has been plotted.

5.2.1 Static motion

The static exercises form a set of four main movements in which the mobile phone is moved about its axes. Such basic movements have allowed us to train the eye in AR signal recognition. As mentioned above, our initial method, “Visual Inspection”, was born from the creation of this dataset. The importance of recognizing signals just by looking at them resides in those cases in which a window pattern is classified as the most representative because of the feature selected but, once the signal is plotted, mistakes in the computational analysis become apparent. Therefore, we have created a repository with all the signal images related to the recorded data from the exercises shown below. An extended description of each activity performed can be found in the corresponding repository (see section 5.1, File distribution).

• 1: With the aid of angles drawn on a sheet of paper, the mobile phone is moved through them, varying its axes.


Figure 11: Acceleration, gyroscopic and magnetic signals of the static motions exercise 1.


• 2: With the aid of angles drawn on a sheet of paper, the mobile phone is moved through them, varying its axes.


Figure 12: Acceleration, gyroscopic and magnetic signals of the static motions exercise 2.

• 3: With the aid of angles drawn on a sheet of paper, the mobile phone is moved through them, varying its axes.

• 3.1:


Figure 13: Acceleration, gyroscopic and magnetic signals of the static motions exercise 3.1.

• 3.2:


Figure 14: Acceleration, gyroscopic and magnetic signals of the static motions exercise 3.2.

• 3.3:



Figure 15: Acceleration, gyroscopic and magnetic signals of the static motions exercise 3.3.

• 4: With the aid of a set-square, the mobile phone is positioned throughout the exercises at an angle of 45º between the X axis and the Y axis, using movements that involve the Z axis in order to set distinguishing breaks between them.

• 4.1:


Figure 16: Acceleration, gyroscopic and magnetic signals of the static motions exercise 4.1.

• 4.2:


Figure 17: Acceleration, gyroscopic and magnetic signals of the static motions exercise 4.2.

• 4.3:



Figure 18: Acceleration, gyroscopic and magnetic signals of the static motions exercise 4.3.

• 4.4:


Figure 19: Acceleration, gyroscopic and magnetic signals of the static motions exercise 4.4.

• 4.5:


Figure 20: Acceleration, gyroscopic and magnetic signals of the static motions exercise 4.5.

• 4.6:



Figure 21: Acceleration, gyroscopic and magnetic signals of the static motions exercise 4.6.

• 4.7:


Figure 22: Acceleration, gyroscopic and magnetic signals of the static motions exercise 4.7.

• 4.8:


Figure 23: Acceleration, gyroscopic and magnetic signals of the static motions exercise 4.8.

• 4.9:



Figure 24: Acceleration, gyroscopic and magnetic signals of the static motions exercise 4.9.

• 4.10:


Figure 25: Acceleration, gyroscopic and magnetic signals of the static motions exercise 4.10.

• 4.11:


Figure 26: Acceleration, gyroscopic and magnetic signals of the static motions exercise 4.11.

• 4.12:



Figure 27: Acceleration, gyroscopic and magnetic signals of the static motions exercise 4.12.

• 4.13:


Figure 28: Acceleration, gyroscopic and magnetic signals of the static motions exercise 4.13.

• 4.14:


Figure 29: Acceleration, gyroscopic and magnetic signals of the static motions exercise 4.14.

• 4.15:



Figure 30: Acceleration, gyroscopic and magnetic signals of the static motions exercise 4.15.

• 4.16:


Figure 31: Acceleration, gyroscopic and magnetic signals of the static motions exercise 4.16.

5.2.2 Quasi-static motion

Quasi-static data is composed of eight exercises that are not carried out with precision, since their purpose is only to give a general view of the signal behavior. For the following motions, descriptions of the exercises are provided where the differences between them require it, and the signal representations are shown below.

• 1: With the aid of a paper sheet with drawn angles (0º, 45º, 90º, 135º, 180º, 225º, 270º, 315º, 360º), the mobile phone is moved, lying on the sheet, through the different angles at a certain velocity, varying the Z axis between its two possible positions.


Figure 32: Acceleration, gyroscopic and magnetic signals of the quasi-static motions exercise 1.


• 2: It is composed of three exercises with an increasing rate in each one (2.1, 2.2, 2.3). Holding the mobile phone, the movement goes up/down and right/left, with the Z axis involved in the variations in most cases.

• 2.1:


Figure 33: Acceleration, gyroscopic and magnetic signals of the quasi-static motions exercise 2.1.

• 2.2:


Figure 34: Acceleration, gyroscopic and magnetic signals of the quasi-static motions exercise 2.2.

• 2.3:


Figure 35: Acceleration, gyroscopic and magnetic signals of the quasi-static motions exercise 2.3.

• 3: Similar to the previous exercise, the movement consists of going through the different angles.



Figure 36: Acceleration, gyroscopic and magnetic signals of the quasi-static motions exercise 3.

• 4: With the aid of a drawn circle, the mobile phone has been moved along its edge.


Figure 37: Acceleration, gyroscopic and magnetic signals of the quasi-static motions exercise 4.

• 5: With the aid of a drawn square, the mobile phone has been moved along its edge.

Figure 38: Acceleration, gyroscopic and magnetic signals of the quasi-static motions exercise 5.

• 6: With the aid of a drawn triangle, the mobile phone has been moved along its edge.

Figure 39: Acceleration, gyroscopic and magnetic signals of the quasi-static motions exercise 6.

• 7: With the aid of a drawn square, the mobile phone has been moved along its diagonal.

Figure 40: Acceleration, gyroscopic and magnetic signals of the quasi-static motions exercise 7.

• 8: Varying its axes, the mobile phone has been moved along an imaginary line.

Figure 41: Acceleration, gyroscopic and magnetic signals of the quasi-static motions exercise 8.

5.2.3 Observations

Analyzing the results obtained from the sensor signals, some conclusions may be drawn through visualization alone. Within the static motions, the accelerometer signal is almost always a stretched, square-like waveform with zero crossings when the mobile phone shifts from one side to another. If movements are carried out at a higher rate, as in the quasi-static motions, the accelerometer signal becomes more compressed and shows bursts of vibration caused by the natural unsteadiness of the hand while the mobile phone is moved.


Regarding the gyroscope signal, data is represented only in those periods of the accelerometer signal with zero crossings. The reason is that gyroscopes are only sensitive to rotations in which the rate of turn changes. As mentioned before, both sensors are commonly combined to differentiate linear from angular acceleration, corresponding to linear movements and rotations of the mobile phone. The faster the movement performed, the more amplitude the gyroscope signals acquire, although the acceleration signal is then represented less clearly. As for the magnetic field signals, their characteristics are similar to those of the acceleration, since they vary with the shift of the mobile phone from side to side. However, the magnetic signals can be observed to be more unsteady, because external agents such as electronic appliances may cause an incorrect measured magnitude.


6 Fitness dataset

The Fitness dataset is an open benchmark dataset for investigating sensor displacement effects in activity recognition. It was created and developed by a group of researchers, Oresti Baños and Máté Attila Tóth, from the University of Granada in collaboration with TU Eindhoven [11]. Therefore, all the information offered below about the Fitness dataset has been extracted from their work.

Datasets are built with the aim of analyzing the effects of a number of activities, a number of sensors, ways of attaching sensors to the body, certain body parts, or even deliberately introduced sensor displacements. The background of developed datasets is not wide, because awareness within the activity recognition field is relatively recent. Among the first to introduce a dataset recorded under natural conditions were Bao and Intille [31], with 20 subjects wearing 5 on-body biaxial accelerometers and performing 20 activities. A later advance in activity recognition datasets was made by D. Roggen [48], with 25 hours of sensor data for 12 subjects stemming from 72 sensors of 10 modalities. The importance of this dataset resides in having collected data in a room equipped with a kitchen, where subjects behaved in a daily, naturalistic way while performing morning activities. Using different sensor modalities such as audio, video, IMU or RFID, the Carnegie Mellon University Multimodal Activity database [23] contains multimodal measurements of subjects cooking and preparing food. In addition, anomalous situations were introduced to make the human behaviour more realistic.

In general terms, there is a lack of datasets with a realistic user-self-placement mode of attachment and, even more uncommon, with a mutual-displacement mode. The Fitness dataset has the purpose of evaluating the variability introduced by the self-positioning of sensors with respect to an ideal setup, as well as investigating the effects of large sensor displacements. As mentioned before, Oresti Baños and Máté Attila Tóth [11] have described three sorts of scenarios to tackle sensor displacements in the Fitness dataset.

• Ideal-placement: The user-sensor attachment, together with the activity recognition system, is supervised by the instructor, who predefines certain body sensor positions.

• Self-placement: The user is asked to place 3 sensors himself on certain body parts specified by the instructor. This is an attempt to simulate the variability that may occur in the daily usage of an activity recognition system with wearable, self-attached sensors. In this scenario, sensor displacements are introduced with respect to the ideal-placement, although the effects may be minimal if the user positions the sensors close to the ideal placement.

• Mutual-displacement: Rotations and translations are introduced by the instructor in order to investigate how the performance of a certain method degrades when the device shifts far from the ideal position. In fact, this scenario has allowed us to test our study methods and their functional worth significantly. In each activity, the number of sensors displaced in this scenario ranges from 4 to 7.

In the following sections, the sensor positioning and the experiment setup are described, since the Fitness dataset has been used to train and afterwards test the methodology developed in this study.

6.1 Data collection

6.1.1 Activity set

The Fitness dataset is composed of a set of 33 different activities. Specifically, there are activities that involve translation (L1-L3) and jumps (L5-L8), as well as general fitness exercises (L31-L33). In addition, there are body-part-specific activities focused on the trunk (L9-L18), the upper extremities (L19-L25) and the lower extremities (L26-L29). Thus, some of the activities imply the motion of the whole body, such as walking or jumping, while others are based on training individual body parts, like cycling. Since the activities were easy to perform, the subjects had no difficulty practising them, and there was also an interest in choosing exercises that could well represent normal daily situations. Translations are observed only when the sensor is in motion, but rotations even when the sensor is not in motion; both are relevant cases for analyzing the effects involved with the gyroscope.

41

Page 42: TELECOM ENGINEERING STUDIES AUTOMATIC DETECTION OF …

Activity set

L1: Walking (1 min)
L2: Jogging (1 min)
L3: Running (1 min)
L4: Jump up (20x)
L5: Jump front & back (20x)
L6: Jump sideways (20x)
L7: Jump leg/arms open/closed (20x)
L8: Jump rope (20x)
L9: Trunk twist (arms outstretched) (20x)
L10: Trunk twist (elbows bended) (20x)
L11: Waist bends forward (20x)
L12: Waist rotation (20x)
L13: Waist bends (reach foot with opposite hand) (20x)
L14: Reach heels backwards (20x)
L15: Lateral bend (10x to the left + 10x to the right)
L16: Lateral bend arm up (10x to the left + 10x to the right)
L17: Repetitive forward stretching (20x)
L18: Upper trunk and lower body opposite twist (20x)
L19: Arms lateral elevation (20x)
L20: Arms frontal elevation (20x)
L21: Frontal hand claps (20x)
L22: Arms frontal crossing (20x)
L23: Shoulders high amplitude rotation (20x)
L24: Shoulders low amplitude rotation (20x)
L25: Arms inner rotation (20x)
L26: Knees (alternatively) to the breast (20x)
L27: Heels (alternatively) to the backside (20x)
L28: Knees bending (crouching) (20x)
L29: Knees (alternatively) bend forward (20x)
L30: Rotation on the knees (20x)
L31: Rowing (1 min)
L32: Elliptic bike (1 min)
L33: Cycling (1 min)

Figure 42: The activity set performed in Fitness dataset.

For the ideal-placement and self-placement scenarios, all the activities were performed by 17 subjects, while for mutual-displacement only 3 subjects were involved. In addition, the different sensor positionings allowed the three different scenarios to be introduced by designating some sensors of the whole sensor set as anomalous. Thus, there were no anomalous sensors for the ideal-placement, because this scenario attempts to represent a supervised, non-natural attachment; 3 anomalous sensors for the self-placement, which allowed us to introduce user self-attachment; and, finally, from 4 to 7 anomalous sensors for the mutual-displacement. The entire dataset has a duration of 10 hours of exercise data and over 39 hours in total; the difference between the two corresponds to periods of activity unrelated to the exercise concerned. All the exercises were recorded with a video camera, so it is possible to check unexpected patterns in the data. In addition, in some exercises two subjects performed the same activity in parallel for better efficiency.

Within the Fitness dataset, some parts of the recordings were identified as missing. In the following figure, the missing activity data is described for each subject. For subject 7, there is almost no data available in the ideal-placement scenario. For the self-placement setup, no activity data is available for subjects 6 and 13. For mutual-displacement, only 3 subjects (subjects 2, 5 and 15) have data for the performed activities. In addition, there is further missing activity data for the remaining subjects, which may be identified by checking the recorded videos; these are especially useful for spotting erroneous labels as well as for checking the validity of the annotated data.


Figure 43: Missing activity data: (a) ideal and self-placement and (b) mutual displacement.

6.1.2 Sensor deployment

The technology used as wearable sensors for the Fitness dataset has been the Inertial Measurement Unit (IMU), in particular Xsens units [55]. A sensor set of 9 IMUs has been positioned on the body of each subject, as shown in the next figure.


Figure 44: Sensor positioning.

The selected sensor set is composed of eight sensor units distributed on the middle of each body limb and another one centered on the back (Figure 13 shows sensor details). They have been attached to the body with the aid of elastic straps and velcro. In addition, specific trousers and sport jackets were provided to fit them properly to the subject.

Sensor set

LC: Left calf
RC: Right calf
LT: Left thigh
RT: Right thigh
LLA: Left lower arm
RLA: Right lower arm
LUA: Left upper arm
RUA: Right upper arm
BACK: Back

Figure 45: Pre-defined positions for the wearable sensor set.

Each device provides several sensing modalities, including acceleration, rate of turn and magnetic field, and derives orientation estimates of the sensor frame with respect to the Earth reference. In fact, each sensor node provides tri-directional acceleration, gyroscope and magnetic field measurements, as well as orientation in quaternion format (4D). Each sensor output thus comprises 13 measurement values, leading to an overall set of 117 recorded signals. A sampling rate of 50 Hz has been established, because it suffices for the requirements of the exercises. As for those sensors which have been displaced in the different deployment scenarios, the following figure marks the identified displaced sensors with shaded spots.

Figure 46: Displaced sensors: (a) self-placement and (b) mutual-displacement.

6.1.3 File distribution and log files.

The whole Fitness dataset has been developed with Matlab tools. It is composed of files that include data from the activity sessions, labels that reference the data with deeper details of the activity sessions, and a Matlab programmed interface (Marker) to represent all the activity signals (see section 7.1). The data is organized in individual files indicating the subject ID and the type of sensor scenario: ideal, self and mutual. Within each individual file, there is all the data associated with a subject, including all the activities performed (from L1 to L33). The label file is organized analogously to the data file, and each file contains the names of the whole activity set and the duration of each activity (when it started and finished), among other secondary details. There are 9 sensors in total, with 13 modalities each; the order in which the sensors and their respective modalities appear in the log file is described below.

Log files are structured in rows, each row corresponding to one consecutive sample of the measurements, sampled at 50 Hz. Thus, each log file contains a fixed 120 columns and a number of rows equal to the number of samples in the session. The first two columns of each row correspond to the integer and fractional parts of the timestamp, in seconds and microseconds respectively, and the last column corresponds to the activity label. The label is denoted by a positive integer identifying the activity; a zero denotes that no activity label is present. Finally, the columns from the 3rd to the 119th correspond to the sensor measurements. The following figures show how data, modalities and activities are structured within the log file.

Activity set: L1 L2 L3 L4 L5 L6 L7 L8 L9 L10 L11 L12 L13 L14 L15 ... L33

Sensor ordering: RLA RUA BACK LUA LLA RC RT LT LC

Modality ordering: ACC:X ACC:Y ACC:Z GYR:X GYR:Y GYR:Z MAG:X MAG:Y MAG:Z QUAT:1 QUAT:2 QUAT:3 QUAT:4

Figure 47: Description of the log file.
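The dataset tooling itself is Matlab-based, but the column layout just described can be illustrated with a short Python loader. This is a hedged sketch: the `parse_log` helper and its names are ours, not part of the dataset; only the column convention (2 timestamp columns, 117 sensor columns in the stated sensor/modality ordering, 1 label column) is taken from the text.

```python
import numpy as np

# Orderings as described in the log-file layout above.
SENSORS = ["RLA", "RUA", "BACK", "LUA", "LLA", "RC", "RT", "LT", "LC"]
MODALITIES = ["ACC:X", "ACC:Y", "ACC:Z", "GYR:X", "GYR:Y", "GYR:Z",
              "MAG:X", "MAG:Y", "MAG:Z", "QUAT:1", "QUAT:2", "QUAT:3", "QUAT:4"]

def parse_log(rows):
    """Split raw log rows (n x 120) into timestamps, per-sensor data and labels.

    Columns 1-2: timestamp (seconds, microseconds); columns 3-119: the 117
    sensor measurements (9 sensors x 13 modalities); column 120: activity label.
    """
    rows = np.asarray(rows, dtype=float)
    t = rows[:, 0] + rows[:, 1] * 1e-6            # full timestamp in seconds
    labels = rows[:, -1].astype(int)              # 0 means "no activity label"
    data = {}
    for i, s in enumerate(SENSORS):
        block = rows[:, 2 + 13 * i : 2 + 13 * (i + 1)]
        data[s] = {m: block[:, j] for j, m in enumerate(MODALITIES)}
    return t, data, labels
```

A loaded session can then be sliced per sensor and modality, e.g. `data["RLA"]["ACC:X"]` for the right-lower-arm acceleration along X.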

6.2 Experiment setup

6.2.1 Data experiment

Once the data from the Fitness dataset is available, we have distributed its activities so as to represent individual results of interest. The whole set of activities has been divided into two types of groups: in the first, from SET1 to SET5, activities are organized depending on the kind of fitness exercise; the second has been named from N1 to N4. Within the first type, each SET is composed of activities that are associated with each other; in the second type, each group serves only to analyze the effects of introducing a larger number of activities. Thus, N1 contains a small number of activities, 5, and the last group, N4, contains the whole activity set. Both groups are described as follows.

Group: Activity

SET1: 1 2 3
SET2: 4 5 6 7 8
SET3: 9 10 11 12 13 14 15 16
SET4: 17 18 19 20 21 22 23 24 25
SET5: 26 27 28 29 30 31 32 33

Figure 48: Activity description for the SET parameter.

Group: Activity

N1: 1 2 3 4 5
N2: 1 2 3 4 5 6 7 8 9 10
N3: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
N4: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33

Figure 49: Activity description for the N parameter.

The way we have proceeded within our experimental protocol, taking into account the groups of activities mentioned before, has been to analyze the data from each modality, using all wearable sensors, for a given activity. On the one hand, when a single activity has been used, walking has been the one chosen in all cases; in addition, all data from the subjects of the ideal, self and mutual-placement scenarios has been used at the same time to analyze the results. On the other hand, there are results obtained for types of activities (SET1-SET5) and for an increasing number of activities (N1-N4).

In addition, with regard to the activities, several parameters from the dataset have been used in the experimental protocol. Because of the importance of a proper window size selection, the data has been cut into different segments; the window sizes chosen have been 1, 3 and 6 seconds, corresponding to W1, W3 and W6 respectively. The three window sizes provide the opportunity of reading the data in different ways. Window sizes of 1 and 3 seconds are recommended for detecting which sensor is the most representative, and window segmentation studies consider them the best lengths of time for analyzing patterns. A window size of 6 seconds has also been used to compare results with the others. For a better understanding, a table with the different parameters used from the dataset, together with their initials, is shown as follows.

PARAMETER: DESCRIPTION

Placement: Ideal, Self, Mutual
Window size: W1, W3, W6
Modality: ACC, GYR, MAG
Classifier: DT, KNN
Sensor: RLA, RUA, BACK, LUA, LLA, RC, RT, LT, LC

Figure 50: Summary of the different parameters involved in the Fitness dataset.
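The windowing step described above can be sketched as follows. This is a minimal illustration assuming non-overlapping windows at the stated 50 Hz sampling rate (the study's actual segmentation code is Matlab-based); the `segment` helper is hypothetical.

```python
def segment(signal, fs=50, window_s=1.0):
    """Cut a 1-D signal sampled at fs Hz into non-overlapping windows of
    window_s seconds; W1/W3/W6 correspond to window_s = 1, 3 and 6."""
    n = int(fs * window_s)                       # samples per window
    # Trailing samples that do not fill a whole window are dropped.
    return [signal[i:i + n] for i in range(0, len(signal) - n + 1, n)]
```

For a 10-second signal at 50 Hz, W1 yields 10 windows of 50 samples, W3 yields 3 windows of 150 samples, and W6 yields 1 window of 300 samples.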

As to the algorithms applied, two sorts of classifiers have been used to decide which of the nine wearable sensors (on-body locations) is the most representative for the signal analyzed: the Decision Tree algorithm (DT) and the K-Nearest Neighbors algorithm (KNN). Both have been considered because their different characteristics show the variability of the results.

The Decision Tree classifier organizes a series of test questions and conditions in a tree structure. Within the decision tree, the root and the internal nodes contain attribute test conditions that separate data with different characteristics. In addition, each terminal node is assigned a class label, such as “yes” or “no”.

Once the decision tree has been built, classifying a test sample is straightforward. Starting from the root node, the test condition is applied to the sample, which follows the appropriate branch based on the outcome of the test. This leads either to another internal node, for which a new test condition is applied, or, otherwise, to a leaf node. When a leaf node is reached, the class label associated with it is assigned to the sample. Building an optimal decision tree is a key issue in decision tree classification. In general, many decision trees may be constructed from a given set of attributes; while some of the trees are more accurate than others, seeking the optimal tree is computationally complex due to the exponential size of the search space.

The decision tree algorithm must provide a method for specifying the test condition for several attribute types, as well as an objective measure for evaluating the goodness of each test condition. First, the specification of an attribute test condition and its corresponding outcomes depends on the attribute type: the condition may be a two-way or a multi-way split, discretizing or grouping attribute values as needed. Binary attributes lead to a two-way split test condition. For nominal attributes, which have many values, the test condition may be expressed as a multi-way split on each distinct value, or as a two-way split obtained by grouping the attribute values into two subsets. Similarly, ordinal attributes can also produce binary or multi-way splits, as long as the grouping does not violate the order property of the attribute values. For continuous attributes, the test condition can be expressed as a comparison test with two outcomes, or as a range query; alternatively, the continuous values can be discretized into a nominal attribute and then split two-way or multi-way.

Since there are many choices for specifying the test conditions from the given training set, we need a measure to determine the best way to split the data. The goal of the best test condition is to lead to a homogeneous class distribution in the nodes, that is, to increase the purity of the child nodes relative to the parent. The larger the degree of purity, the better the class distribution.

To determine how well a test condition performs, the degree of impurity of the parent node before splitting has to be compared with the degree of impurity of the child nodes after splitting: the larger their difference, the better the test condition. Some properties of a decision tree classifier include its similarity to the human decision process, its ease of interpretation, and its ability to deal with both discrete and continuous features. The latter property is highly suited to the methodology used in this study, because of the time-domain and statistical features selected.
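The parent-versus-children impurity comparison can be made concrete with the Gini impurity, one common impurity measure; the text does not specify which measure the toolbox uses, so this choice and both helper names are illustrative.

```python
from collections import Counter

def gini(labels):
    """Gini impurity of a set of class labels: 1 - sum over classes of p_i^2.
    0 means a pure node; higher values mean a more mixed class distribution."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def split_gain(parent, children):
    """Impurity decrease of a candidate split: parent impurity minus the
    size-weighted impurity of the child nodes. Larger is better."""
    n = len(parent)
    weighted = sum(len(ch) / n * gini(ch) for ch in children)
    return gini(parent) - weighted
```

For example, splitting a perfectly mixed two-class node into two pure children yields the maximum possible gain of 0.5.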

The K-Nearest Neighbors classifier is a non-parametric method used in pattern recognition for classification and regression. The input consists of the k closest training examples in the feature space, and the output depends on whether KNN is used for classification or regression. In classification, the output is a class membership: each object is classified by a majority vote of its neighbors, the object being assigned to the class most common among its k nearest neighbors (k is a positive integer, typically small). In regression, the output is the property value for the object; this value is the average of the values of its k nearest neighbors. The KNN classifier is a type of instance-based learning, where the function is only approximated locally and all computation is deferred until classification. The KNN algorithm is among the simplest of all machine learning algorithms. In both classification and regression, it may be useful to weight the contributions of the neighbors, so that the nearer neighbors contribute more to the average than the more distant ones; for instance, a common scheme gives each neighbor a weight inversely proportional to its distance. Neighbors are taken from a set of objects for which the class (in KNN classification) or the object property value (in KNN regression) is known.

The training examples are vectors in a multidimensional feature space, each with a class label. The training phase of the algorithm consists only of storing the feature vectors and class labels of the training samples.

The best choice of k depends upon the data: larger values of k reduce the effect of noise on the classification, but make the boundaries between classes less distinct. A good k may be selected by various heuristic techniques. The special case where the class is predicted to be the class of the single closest training sample is called the nearest neighbor algorithm.
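The majority-vote rule described above can be sketched in a few lines; this is a minimal, unweighted version with plain Euclidean distance, not the toolbox implementation used in the study.

```python
import math
from collections import Counter

def knn_classify(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training points.
    `train` is a list of (feature_vector, label) pairs."""
    # Sort training points by Euclidean distance to the query, keep the k closest.
    nearest = sorted(train, key=lambda p: math.dist(p[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]
```

With k=1 this reduces to the nearest neighbor algorithm mentioned above.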


Figure 51: An example of a Decision Tree classifier [1].

Figure 52: An example of a K-Nearest Neighbor classifier [2].

Both classifiers, the Decision Tree and K-Nearest Neighbors, have been implemented with the aid of Matlab tools, specifically with a toolbox (IDTC_toolbox) that is included in the Fitness dataset repository and has also been provided by the Fitness dataset's owners [11].

6.2.2 Data treatment

• Leave One subject Out (LOO): training and testing phases.

The Leave One Out procedure is a model validation technique used to train and test on independent data subsets. As explained in the file distribution of the Fitness dataset, the data is organized in columns distinguished by modality, window size, wearable sensor, activity and subject. The activity selection is at the most superficial level; then, at a lower level, there is data from every subject, and within a given subject, the sensors and all the windows may be found (see Figure 53). Once the data is selected as a subset of the whole dataset, it has been trained by applying the methodology that will be explained in depth in section 7 and subsequently tested. For this training and testing process, the Leave One subject Out (LOO) procedure has been used.


Figure 53: Data distribution.

The Leave One Out technique for training has consisted in dividing the data extracted with the selected parameters from the Fitness dataset into a number of iterations equal to the number of subjects selected. In a first step, the data from a particular subject is separated from the rest, and all sample windows from the other subjects are compared among themselves by applying a set of certain features. After performing as many iterations as subjects selected, a number of representative patterns equal to the number of subjects has been obtained for each sensor. Making another feature comparison between the patterns of a given sensor, the pattern with the maximum value is set as the Universal Pattern (Figure 54). For instance, an initial loop left the data from subject 1 out of the rest, and the first feature of the methodology, correlation, was computed over subjects 2 to 17; in this case, subject 1 is the one left out and the rest of the subjects constitute the training data. Iterations are formed by selecting a certain window from a subject and comparing it with the next window and the subsequent ones. The patterns have been named Universal because the data from the triaxial sensors has been compared as a whole, with no segmentation between the X, Y and Z axes. This generalizes the data for the training phase and may make the selection of the Universal window pattern independent of anomalous variations appearing between the axes.
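The training loop just described can be sketched roughly as follows. This is a strongly simplified stand-in, not the study's Matlab implementation: the feature comparison is reduced to the mean pairwise correlation between whole windows, and the helper name is ours.

```python
import numpy as np

def universal_pattern(windows_by_subject):
    """Pick the 'Universal Pattern': the window with the highest average
    correlation to all other windows, computed with each subject left out in turn.

    windows_by_subject: one entry per subject, each an (n_windows x length) array.
    """
    best, best_score = None, -np.inf
    for left_out in range(len(windows_by_subject)):
        # Pool the windows of every subject except the one left out.
        pool = np.vstack([w for i, w in enumerate(windows_by_subject) if i != left_out])
        corr = np.corrcoef(pool)                           # window-vs-window correlation
        scores = (corr.sum(axis=1) - 1) / (len(pool) - 1)  # mean corr, excluding self
        i = int(np.argmax(scores))
        if scores[i] > best_score:
            best, best_score = pool[i], scores[i]
    return best
```

The window most consistently correlated with the rest across all leave-one-out iterations is retained as the sensor's representative pattern.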


Figure 54: Leave One subject Out: training phase.

Once the Universal signal pattern of each wearable sensor has been deduced, the testing phase consists in matching the data from each window segment used in the training phase against each Universal Pattern, evaluated with the confusion matrix procedure. The methodology applied to determine the most representative sensor for the current window segment is again the time-domain and statistical feature set. This process is then repeated with the rest of the subjects, yielding a final confusion matrix that represents the accuracy of the method. The next figure shows the testing flow, from the selection of each window segment of a certain subject, through its matching with the different Universal sensor patterns, until the system provides the on-body location outcome.
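The matching step of the testing phase can be sketched as follows; as above, the full feature set is simplified here to a single correlation score, and the helper name is hypothetical.

```python
import numpy as np

def predict_location(window, universal_patterns):
    """Assign the window to the sensor whose Universal Pattern it correlates
    with most strongly (a simplified stand-in for the feature-set matching).

    universal_patterns: dict mapping sensor name (e.g. "RLA") to its pattern.
    """
    def corr(a, b):
        return float(np.corrcoef(a, b)[0, 1])
    return max(universal_patterns, key=lambda s: corr(window, universal_patterns[s]))
```

Each predicted location is then compared with the sensor the window actually came from, which is exactly what the confusion matrix of the next subsection accumulates.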


Figure 55: Leave One subject Out: testing phase.


• Confusion Matrix

Once the data results have been obtained from the Matlab analysis and simulation of the developed methodology, the data treatment has been to measure their accuracy with a machine learning technique known as the Confusion Matrix. This performance measurement tool, also known as a contingency table or an error matrix, allows the performance of a certain algorithm to be visualized through a table layout. Its name comes from checking whether the system is confusing two classes, typically mislabeling one as another. On the one hand, each column of the matrix represents the instances of a predicted class; on the other hand, each row represents the instances of an actual class.

When the classification system has been trained to differentiate between a set of classes, the confusion matrix summarizes the results of testing the algorithm for further inspection. As mentioned before, incorrectly classified results receive a class label different from their correct one. All the correct guesses are located on the diagonal of the table; therefore, it is easy to inspect the table for errors, as they are represented by any non-zero values outside the diagonal.

In this work, together with the confusion matrix results, several accuracy measures are available that may be applied to the results, including sensitivity, positive predictive value, positive ratio, negative ratio, correct rate, error rate, mean and balanced accuracy. The correct rate, also known as accuracy, has been selected to represent all the results obtained in this study. The confusion matrix alone has been enough to draw conclusions, because of the efficiency and accuracy of the confusion matrix method.
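Using the row/column convention stated above (rows for the actual class, columns for the predicted class), the confusion matrix and the correct rate can be sketched as a small illustrative implementation; it is not the Matlab tooling used in the study.

```python
import numpy as np

def confusion_matrix(actual, predicted, classes):
    """Rows = actual class, columns = predicted class."""
    idx = {c: i for i, c in enumerate(classes)}
    cm = np.zeros((len(classes), len(classes)), dtype=int)
    for a, p in zip(actual, predicted):
        cm[idx[a], idx[p]] += 1
    return cm

def correct_rate(cm):
    """Accuracy: fraction of samples on the diagonal of the matrix."""
    return np.trace(cm) / cm.sum()
```

Off-diagonal counts immediately show which pairs of on-body locations the classifier confuses, which is why a single glance at figures like the one below suffices.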


Figure 56: Confusion Matrix with setting parameters including the ideal placement, the activity of walking,a window size of 1 second, a Decision Tree classifier and the Feature Set 1 (mean).


7 Methodology for on-body sensor location

7.1 Method I: Visual Inspection

7.1.1 Introduction

The Visual Inspection methodology, called Visual Inspection Ideal Walking because of the walking activity and the ideal placement, arose from the observation of the Fitness dataset signals through the Marker tool provided with the Fitness dataset [11]. The lack of prior knowledge about the behaviour of AR signals was, in a first step, a motivation to identify differences between several activities, sensors, modalities and the axes of triaxial modalities. Subsequently, the necessity arose of distinguishing different patterns by dividing the samples of an activity performance into window segments. Such window segment identification was treated as an issue in its own right, since the results vary and are completely different when window sizes go, for instance, from 1 second to 6 seconds. Within the Activity Recognition field, window segmentation has a background of computational analysis aimed at determining which segments are the most appropriate to represent the required activities.

As a consequence, the visual inspection method of AR signals has the capacity to recognize, with an independent degree of freedom, a suitable pattern without memorizing window after window: natural human sight is not influenced by previously seen window segments. This advantage eliminates the necessity of making comparisons between windows and allows us to choose a common window pattern. With this method, it is possible to identify anomalous situations that do not appear when a computational analysis is carried out. In fact, one of the most problematic issues of statistical methods is that different relationships between signal samples can yield the same feature results. The time-domain and statistical methodology often compares each window with the rest and obtains a universal pattern which attempts to provide the best relational approach.

Figure 57 shows two signals over time from sensors attached to the left leg, in particular the left calf and the left thigh. The triaxial accelerometer signals exhibit an evident pattern, as the samples describe the activity performed. If the interval from 30 to 34 seconds is observed, the red signal shows moments of higher amplitude that correspond to the gesture of raising the leg when changing from one step to the next while walking. As mentioned before, by watching this red signal it is possible to identify anomalous leg movements, such as the one at approximately 32.5 seconds. Its amplitude is lower than the others, which might indicate, for instance, the presence of an obstacle. With a computational analysis it would be harder to detect such a deviation: with a statistical feature such as correlation, two different random variables with quite different relationships may yield the same result if they have the same total distance to their fitted regression line. Another significant issue related to window size is the instant at which a signal pattern is determined. Two universal patterns may be very similar, but if they are compared with signal features that only match samples at the same time instants, a computational analysis may fail to recognize the window patterns as equal, whereas visual inspection would.

Figure 57: Marker programmed interface tool. Acceleration signals of walking activity for left calf and left thigh.


7.1.2 Results and discussion

In order to identify the variabilities mentioned above across different window sizes, we have selected three kinds of window time segments: periods of 4 seconds, 6 seconds, and a third window whose length is not fixed but depends only on the most suitable visualized pattern. This makes it possible to compare them afterwards with the universal patterns obtained from the time and statistical methods, to measure how efficient those are with respect to the patterns selected in this methodology. A selection of visualized patterns from the three types of window segments is shown below.

Varying window sizes:

Walking, ideal placement, accelerometer, 1 subject, Back sensor.


Figure 58: Acceleration signals represented at the same initial time and normalized linear acceleration for a standard window size, 4 seconds and 6 seconds.

Walking, ideal placement, gyroscope, 1 subject, Back sensor.


Figure 59: Gyroscopic signals represented at the same initial time and normalized units for a standard window size, 4 seconds and 6 seconds.

Walking, ideal placement, magnetic field sensor, 1 subject, Back sensor.



Figure 60: Magnetic field signals represented at the same initial time and normalized units for a standard window size, 4 seconds and 6 seconds.

Looking at the signal axes in the first figure, the red signal (Z axis) and the green signal (Y axis) show a unique pattern corresponding to one period, whereas for the blue signal (X axis) the selected window size captures almost three patterns of the activity. However, for longer window sizes (the third panel, 6 seconds), identifying anomalous movements by observing a group of sequential signal periods is less complex than applying a computational feature analysis. The back sensor pattern is the hardest to distinguish from the rest of the sensors because of the smaller differentiation in its movement amplitude. The situation gets worse for data from the self and mutual displacement scenarios, which are closer to everyday movements. Moreover, as described in Section 3.2 (General Activity Recognition chain), the longer the window size, the less precise the pattern recognition results are. Therefore, the visual inspection methodology may provide characteristics which, combined with a computational feature analysis, help infer the variability of unexpected situations such as falls and several disease states.

Varying sensor modalities, with different initial times but the same time period as the first modality selected:

Walking, ideal placement, 1 subject, Left Calf sensor, standard window size.


Figure 61: Acceleration signals represented at different initial times and normalized units.

Walking, ideal placement, 1 subject, Left Calf sensor, standard window size.


Figure 62: Gyroscopic signals represented at different initial times and normalized units.



Figure 63: Magnetic signals represented at different initial times and normalized units.

These three different initial times show how complex it is to compare the determined universal patterns with recorded input data in a recognition system. Using time domain features, most such comparisons rely on matching input samples with universal samples at the same time instants, and they end up providing an incorrect on-body location even though both signals may be clearly similar under visual inspection. In fact, observing the third acceleration panel, the significant amplitude corresponding to the movement of interest is split into two amplitudes because its initial time differs from that of the first panel. These two amplitudes result from the two steps of the walking movement, but they would not be the best representation for a recognition system.

Varying subjects for a unique wearable sensor and a window size of 4 seconds:

Walking, ideal placement, Right Lower Arm, a window size of 4 seconds.


Figure 64: Acceleration signals represented at the same initial time and normalized units for subject 1, subject 2 and subject 3.

Walking, ideal placement, Right Lower Arm, a window size of 4 seconds.


Figure 65: Gyroscopic signals represented at the same initial time and normalized units for subject 1, subject 2 and subject 3.


Walking, ideal placement, Right Lower Arm, a window size of 4 seconds.


Figure 66: Magnetic field signals represented at the same initial time and normalized units for subject 1, subject 2 and subject 3.

When several subjects are assessed, anomalous and non-standard ways of walking appear and cause failures in deducing the sensor placement. Looking at the acceleration signals, the three subjects have a similar walking pattern, although the third subject takes a softer step than the others, as the amplitude of the red signal is lower. If we look at the gyroscope results, however, the signals of the second and third subjects are more representative than those of the first. On the one hand, this may mean that, combining the accelerometer and the gyroscope, the second subject would probably yield the most representative universal pattern. On the other hand, the magnetic field sensor results show a larger variability than the other modalities, so an in-depth computational analysis would be more suitable to obtain a universal pattern.

7.2 Method II: Time signal features

7.2.1 Introduction

One of the most restrictive aspects of using wearable sensors in Activity Recognition is the difficulty of predicting which on-body location will provide the most relevant features for activity classification. The optimal set of features is usually unknown, and it is desirable to have a number of features available while keeping that number limited, given the computational cost of each one.

A wide group of researchers have applied features from the signal time and frequency domains in their approaches. As monitoring activities in real time becomes a tendency in our daily life, extracting and classifying time domain features has become an essential procedure within the Activity Recognition chain. The signal characteristics are then analyzed and finally allow a unique, representative pattern to be identified for the specific body part where a wearable sensor is attached.

On the one hand, a first step has been carried out to tackle time domain features in depth and to observe failures within recognition systems. These time domain features include Correlation, Best Fit and Mutual Information. Selecting fixed parameters from the Fitness dataset, namely the walking activity, the accelerometer, all subjects (17) and all wearable sensors (9), we have obtained the most representative window pattern for each time domain feature. Since the signals from the accelerometer, gyroscope and magnetic field sensor are triaxial, when an X-axis window has been selected, the Y and Z signals are assigned to the same window as the X signal, so the XYZ patterns correspond to the same temporal segment. This choice follows from the dependency of the signals on the activity movement within each temporal segment, and from the consideration that each axis of a given window may be representative of the remaining axes: the correlation between an X axis and the other X axes should match the corresponding correlations between the Y signals and between the Z signals. Applying Correlation, Best Fit and Mutual Information, the window pattern has been extracted by comparing each window segment with all the others and selecting the one with the best feature results. Secondly, once 17 window patterns have been obtained, one per subject and per wearable sensor, 17 iterations of the Leave-One-Subject-Out (LOO) procedure explained earlier have been applied, in which each subject left out is later used in the testing phase, in order to select 9 Universal patterns, one per sensor. Each Universal pattern is thus obtained through the comparison between the 16 window patterns
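The leave-one-subject-out loop described above can be sketched as follows. This is a toy example: the random stand-in patterns and the 10-dimensional feature size are assumptions for illustration, not the thesis data.

```python
import numpy as np

def loo_splits(n_subjects):
    """Leave-One-Subject-Out: yield (left-out subject, training subjects)."""
    for s in range(n_subjects):
        train = [i for i in range(n_subjects) if i != s]
        yield s, train

# One window pattern per subject (17 subjects, toy 10-dimensional features)
rng = np.random.default_rng(0)
patterns = rng.normal(size=(17, 10))

folds = list(loo_splits(17))
# 17 iterations: each Universal pattern candidate is built from the 16
# training patterns, and the left-out subject is kept for the testing phase
```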


once one of them is left out, as was done with the subjects' window patterns, applying the three time domain features again in order to retain those with the best results. A final step consists in testing each left-out subject against the Universal patterns corresponding to each wearable sensor; finally, the results are shown through the Confusion Matrix.

On the other hand, statistical domain features have been applied to the same data parameters described for the time domain features. Within this procedure, three statistical feature sets have been created containing mean, standard deviation, maximum, minimum, energy and kurtosis. For each window segment, its statistical features are computed, turning the time domain representation into a statistical one. Unexpectedly efficient results have been obtained by applying the same data procedure as in the time signal methodology. Both methodologies are described in detail below.

7.2.2 Correlation

In order to evaluate how efficient the results of a certain time domain feature are, this methodology splits the analysis into three features: Correlation, Best Fit and Mutual Information. The correlation coefficient between two variables is mainly based on how they change together, i.e. their covariance. It is used to seek relevant characteristics when two signals are compared, possibly with a displacement gap applied between them. Correlation exists when the instantaneous values of the compared signals are similar; otherwise their distance is significant and the signals are uncorrelated. Being based on the covariance, the correlation coefficient assumes that the relationship between the variables is linear. This means that the magnitude of the correlation coefficient alone is not enough information to know how the two variables are related: two pairs of variables may have the same mean, correlation, standard deviation and linear regression while the actual relationships between them are completely different. The correlation coefficient has been applied within the training phase to select an initial representative pattern for each on-body sensor across all subjects, the best candidate being the one with the largest correlation value, evaluated between 0 and 1. The population correlation coefficient between two random variables X and Y is defined as:

ρX,Y = corr(X, Y) = cov(X, Y) / (σX σY) = E[(X − μX)(Y − μY)] / (σX σY)
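A direct numerical reading of this definition, applied to two synthetic windows (a sketch; the gait-like signals are made up for illustration):

```python
import numpy as np

def corr(x, y):
    """Population correlation: cov(X, Y) / (sigma_x * sigma_y)."""
    cov = ((x - x.mean()) * (y - y.mean())).mean()
    return cov / (x.std() * y.std())

t = np.linspace(0, 4, 200)
w1 = np.sin(2 * np.pi * t)                        # one gait-like window
w2 = np.sin(2 * np.pi * t) + 0.1 * np.cos(5 * t)  # a similar, noisier window
r = corr(w1, w2)   # close to 1 for similar windows; matches np.corrcoef
```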

When the results of the Confusion Matrix obtained with the correlation method are analyzed, the amount of mistakes committed in the sensor prediction phase is evident. Looking at the results in Figure 67, where the X, Y and Z axes are analyzed independently, wearable sensors attached to similar body parts, such as RT and RC, are predicted incorrectly and the recognition system identifies them as equal. But the recognition also fails between dissimilar sensors from different body parts, which generally means that the on-body location recognition fails in almost every case. In a later analysis (Figure 68), where XYZ are taken together, since the different X rows share the same variation as the corresponding Y and Z rows within each window segment, the results improve significantly. In fact, the recognition system correctly identifies the RT sensor, and LLA, LUA and LT with less precision, although the LC sensor data is confused with the RC sensor. For the sensors on the upper body, namely RLA, RUA and BACK, mistakes dominate the recognition. It is therefore reasonable to assume that the recognition suffers from incorrect predictions on the upper body because those movements are less representative than the leg movements: since the walking activity is being analyzed, it makes sense that the legs represent a possible Universal pattern more efficiently than the arms.

7.2.3 Best fit

The Best Fit feature is the quotient of norms between the distance of two window segments, value by value, and the distance of one window segment to its mean. The maximum value of the Best Fit feature is 1, and it tends to minus infinity as the fit worsens. Thus, the larger the quotient, the worse the result, and the greater the distance between the values of the two window segments. Best Fit therefore compares the values of two window segments while using one window segment and its mean as the reference. Best Fit is defined as:


BF = 1 − ‖Wind1 − Wind2‖ / ‖Wind2 − mean(Wind2)‖
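This definition translates directly into code (a sketch with synthetic windows; Wind1 and Wind2 are the two compared segments):

```python
import numpy as np

def best_fit(w1, w2):
    """BF = 1 - ||w1 - w2|| / ||w2 - mean(w2)||; 1 means a perfect match."""
    return 1.0 - np.linalg.norm(w1 - w2) / np.linalg.norm(w2 - w2.mean())

t = np.linspace(0, 4, 200)
w = np.sin(2 * np.pi * t)
bf_same = best_fit(w, w)        # exactly 1: identical windows
bf_far = best_fit(w + 5.0, w)   # a large offset drives BF far below 1
```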

Although the Best Fit method compares distances between values, as the Correlation method does, it does not rely on the statistical parameters the latter depends on. Because of this, mistakes and recognition failures are not as frequent as they were in the Correlation Confusion Matrix. The results of applying the Best Fit method in the training phase to select the Universal pattern for each on-body sensor improve sensor prediction considerably. As opposed to the Correlation results, with XYZ analyzed independently Best Fit obtains good figures for the sensors RLA, LUA and LLA, but errors are committed with the LT and RT sensors; in fact, Best Fit mixes up sensors from the same kind of extremity. A possible explanation is that the Best Fit feature is based on distances between window segment values, and the distances for the LT and RT sensors may be similar because of the movement performed with those body parts. With XYZ analyzed as a whole, results improve slightly and the diagonal of correct predictions can be distinguished in the Confusion Matrix.

7.2.4 Mutual Information

The Mutual Information methodology measures the information that two random variables share; in particular, it measures how much knowing one variable reduces our uncertainty about the other. For two independent variables, the mutual information is zero, since knowing one of them provides no information about the other. The mutual information of a variable with itself corresponds to the information contained in that variable alone. For two continuous random variables X and Y, Mutual Information has two properties that set this feature apart from the others: the capacity to measure any type of relationship between variables, and its invariance under space transformations. Its strength resides in not using statistics of any order, but rather the joint and marginal pdfs of the variables.

The mathematical definition of Mutual Information is:

I(X;Y) = ∫∫ p(x, y) log [ p(x, y) / ( p(x) p(y) ) ] dx dy
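Since the joint and marginal pdfs of sensor windows are not available in closed form, in practice the integral is estimated from samples, for example with a histogram estimator (a sketch; the bin count, sample size and Gaussian test data are arbitrary choices of ours):

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram estimate of I(X;Y), in nats, from two sample vectors."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()                      # joint pmf
    px = pxy.sum(axis=1, keepdims=True)   # marginal of X
    py = pxy.sum(axis=0, keepdims=True)   # marginal of Y
    nz = pxy > 0                          # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(1)
a = rng.normal(size=5000)
b = rng.normal(size=5000)                 # independent of a
mi_ab = mutual_information(a, b)          # near zero: independent signals
mi_aa = mutual_information(a, a)          # equals the binned entropy of a
```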

As a consequence of the logarithmic argument, the expression is dimensionless, and the value of the integral is independent of the selected coordinates. Because of this, the MI characteristic is robust to rotations, translations and, ultimately, any transformation away from the original placement. For data from the self-placement and mutual-displacement scenarios, one would predict better results than with the Correlation and Best Fit methods thanks to this independence, since the wearable sensors are not attached as carefully as in the ideal placement. Although this independence from attachment variations should make it more efficient, the results are surprisingly worse than with the other methodologies. Looking at the Confusion Matrix, we conclude that the variables compared from two window segments behave as if they were independent, in which case Mutual Information provides no awareness at all.

7.2.5 Results and discussion

When XYZ are analyzed independently:

[Confusion matrix panels (current sensor vs. predicted sensor over RLA, RUA, BACK, LUA, LLA, RC, RT, LT, LC): Correlation, Ideal, ACC; Best Fit, Ideal, ACC; Mutual Information, Ideal, MAG; window size 6 s, walking]


Figure 67: Correlation, Best Fit and Mutual Information for ideal placement, activity of walking, using an accelerometer and a window size of 6 seconds.

When XYZ are analyzed as a whole and defined as Universal:

[Confusion matrix panels: Universal Correlation, Ideal, ACC and Universal Correlation, Ideal, GYR; window size 6 s, walking]

Figure 68: Universal Correlation for ideal placement, activity of walking, using an accelerometer, a gyroscope and a window size of 6 seconds.

[Confusion matrix panels: Universal Correlation, Ideal, MAG and Universal Best Fit, Ideal, ACC; window size 6 s, walking]

Figure 69: Universal Correlation and Universal Best Fit for ideal placement, activity of walking, using a magnetic field sensor, an accelerometer and a window size of 6 seconds.


[Confusion matrix panels: Universal Best Fit, Ideal, MAG and Universal Mutual Information, Ideal, ACC; window size 6 s, walking]

Figure 70: Universal Best Fit and Universal Mutual Information for ideal placement, activity of walking, using a magnetic field sensor, an accelerometer and a window size of 6 seconds.

[Confusion matrix panels: Universal Mutual Information, Ideal, GYR and Universal Mutual Information, Ideal, MAG; window size 6 s, walking]

Figure 71: Universal Mutual Information for ideal placement, activity of walking, using a gyroscope, a magnetic field sensor and a window size of 6 seconds.


7.3 Method III: Statistical feature sets

7.3.1 Introduction

Statistical feature sets allow us to move signals from the time domain into a statistical space where they are defined by parameters such as the mean or the standard deviation. In addition, the linear errors associated with time domain features, like the Correlation coefficient discussed in the previous methodology, may be eliminated. In the time domain, when Universal patterns are compared with input window segments, the initial time instant may decide whether the recognition is correct or incorrect, sometimes regardless of how similar the patterns are. This fact motivates us to set aside the signal time domain and to define signals only by their statistical parameters, isolating them from time. This modification therefore introduces independence from the initial time of each window, as well as from the variations produced by an incorrect attachment, since signals are compared only within the statistical feature domain. In fact, this method does not rest on further assumptions, since the chosen features are independent of the device orientation, and the analysis consequently remains efficient when the recognition is subject to such changes.

Throughout the statistical methodology, the training and testing phases follow the procedure explained in the data treatment. In this case, the Universal patterns are represented by statistical parameters and are selected by choosing their maximum values. Three different feature sets have been used in order to assess the efficiency of the results obtained.

• Feature set 1: mean.

• Feature set 2: mean and standard deviation.

• Feature set 3: mean, standard deviation, maximum, minimum, energy and kurtosis.
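The three feature sets can be sketched as a single extraction function; a minimal sketch, where the function names and the synthetic window are ours, and the kurtosis is computed as the excess kurtosis, consistent with the scaling law given below:

```python
import numpy as np

def excess_kurtosis(x):
    """Sample excess kurtosis: E[(X - mu)^4] / sigma^4 - 3."""
    d = x - np.mean(x)
    return float(np.mean(d ** 4) / np.mean(d ** 2) ** 2 - 3.0)

def extract_features(window, feature_set=3):
    """Statistical features of one window segment, per feature sets 1-3."""
    f = [np.mean(window)]
    if feature_set >= 2:
        f.append(np.std(window))
    if feature_set == 3:
        f += [np.max(window), np.min(window),
              np.sum(window ** 2),         # signal energy
              excess_kurtosis(window)]     # distribution shape
    return np.array(f)

w = np.sin(np.linspace(0, 8 * np.pi, 300))  # a synthetic window
sizes = [extract_features(w, k).size for k in (1, 2, 3)]  # 1, 2 and 6 features
```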

Within these feature sets, the mean is the average value of the signal over its domain, while the standard deviation shows how much variation or dispersion exists around the average. A low standard deviation indicates that the data values tend to be very close to the mean, also known as the expected value; when the standard deviation is high, the values are spread over a large range. In an empirical setting, the maximum and minimum of a signal are the largest and smallest values the signal takes, either within a given neighbourhood or over the entire signal domain. The maximum and minimum of a set correspond to its greatest and least elements.

In signal processing, the energy of a signal is defined as

E_s = ⟨x(t), x(t)⟩ = ∫ |x(t)|² dt

Taking x(t) as a signal representing the magnitude of the electric field component of a wave propagating through free space, X(f) would represent the signal's spectral energy density as a function of the frequency f. The last parameter used in the third statistical feature set is the kurtosis. In probability theory and statistics, kurtosis is a measure of the shape of a distribution: kurtosis measures attempt to explain the variance through the combination of data far from the mean with data that are closer to it. A high kurtosis implies a larger concentration of data near the mean of the distribution coexisting, at the same time, with frequent data far from the mean. Kurtosis also behaves simply under the addition of random variables: if Y is the sum of n statistically independent variables, all with the same distribution as X, then

Kurt[Y] = Kurt[X] / n
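This 1/n scaling, which holds for the excess kurtosis, can be checked empirically; a sketch, where the exponential distribution (excess kurtosis 6) and the sample size are arbitrary choices of ours:

```python
import numpy as np

def excess_kurtosis(x):
    """Sample excess kurtosis: E[(X - mu)^4] / sigma^4 - 3."""
    d = x - np.mean(x)
    return float(np.mean(d ** 4) / np.mean(d ** 2) ** 2 - 3.0)

# Y = sum of n i.i.d. exponentials (Kurt[X] = 6), so Kurt[Y] should be 6/n
rng = np.random.default_rng(2)
n = 4
y = rng.exponential(size=(200_000, n)).sum(axis=1)
k = excess_kurtosis(y)   # close to 6 / 4 = 1.5
```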


7.3.2 Results

[Accuracy bar charts: ACC,W1; GYR,W1; MAG,W1; feature sets FS1, FS2, FS3 for each dataset/classifier combination (IDEAL, SELF, MUTUAL with DT and KNN)]

Figure 72: Results obtained from the accelerometer, the gyroscope and the magnetic field sensor, walking activity and using a window size of 1 second.

[Accuracy bar charts: ACC,W3; GYR,W3; MAG,W3; feature sets FS1, FS2, FS3 for each dataset/classifier combination]

Figure 73: Results obtained from the accelerometer, the gyroscope and the magnetic field sensor, walking activity and using a window size of 3 seconds.

[Accuracy bar charts: ACC,W6; GYR,W6; MAG,W6; feature sets FS1, FS2, FS3 for each dataset/classifier combination]

Figure 74: Results obtained from the accelerometer, the gyroscope and the magnetic field sensor, walking activity and using a window size of 6 seconds.


[Accuracy bar charts: ACC,GYR,W1; GYR,MAG,W1; ACC,MAG,W1; feature sets FS1, FS2, FS3 for each dataset/classifier combination]

Figure 75: Results obtained from the combination of the accelerometer, the gyroscope and the magnetic field sensor, walking activity and using a window size of 1 second.

[Accuracy bar charts: ACC,GYR,W3; GYR,MAG,W3; ACC,MAG,W3; feature sets FS1, FS2, FS3 for each dataset/classifier combination]

Figure 76: Results obtained from the combination of the accelerometer, the gyroscope and the magnetic field sensor, walking activity and using a window size of 3 seconds.

[Accuracy bar charts: ACC,GYR,W6; GYR,MAG,W6; ACC,MAG,W6; feature sets FS1, FS2, FS3 for each dataset/classifier combination]

Figure 77: Results obtained from the combination of the accelerometer, the gyroscope and the magnetic field sensor, walking activity and using a window size of 6 seconds.


[Accuracy bar charts: ACC,GYR,MAG at W1, W3 and W6; feature sets FS1, FS2, FS3 for each dataset/classifier combination]

Figure 78: Results obtained from the combination of the accelerometer, the gyroscope and the magnetic field sensor, for the walking activity, using window sizes of 1, 3 and 6 seconds.

[Confusion matrices for IDEAL, ACT1, ACC, W1, DT with feature sets FS1, FS2 and FS3; current sensor vs. predicted sensor over RLA, RUA, BACK, LUA, LLA, RC, RT, LT and LC; colour scale 0 to 1.]

Figure 79: Results obtained from ideal placement using an accelerometer, the walking activity, a window size of 1 second, the DT classifier and different statistical feature sets.

[Confusion matrices for IDEAL, ACT1, ACC, W1, KNN with feature sets FS1, FS2 and FS3; current sensor vs. predicted sensor over RLA, RUA, BACK, LUA, LLA, RC, RT, LT and LC; colour scale 0 to 1.]

Figure 80: Results obtained from ideal placement using an accelerometer, the walking activity, a window size of 1 second, the KNN classifier and different statistical feature sets.


[Confusion matrices for SELF, ACT1, ACC, W1, DT with feature sets FS1, FS2 and FS3; current sensor vs. predicted sensor over RLA, RUA, BACK, LUA, LLA, RC, RT, LT and LC; colour scale 0 to 1.]

Figure 81: Results obtained from self placement using an accelerometer, the walking activity, a window size of 1 second, the DT classifier and different statistical feature sets.

[Confusion matrices for SELF, ACT1, ACC, W1, KNN with feature sets FS1, FS2 and FS3; current sensor vs. predicted sensor over RLA, RUA, BACK, LUA, LLA, RC, RT, LT and LC; colour scale 0 to 1.]

Figure 82: Results obtained from self placement using an accelerometer, the walking activity, a window size of 1 second, the KNN classifier and different statistical feature sets.

[Confusion matrices for MUTUAL, ACT1, ACC, W1, DT with feature sets FS1, FS2 and FS3; current sensor vs. predicted sensor over RLA, RUA, BACK, LUA, LLA, RC, RT, LT and LC; colour scale 0 to 1.]

Figure 83: Results obtained from mutual displacement using an accelerometer, the walking activity, a window size of 1 second, the DT classifier and different statistical feature sets.


[Confusion matrices for MUTUAL, ACT1, ACC, W1, KNN with feature sets FS1, FS2 and FS3; current sensor vs. predicted sensor over RLA, RUA, BACK, LUA, LLA, RC, RT, LT and LC; colour scale 0 to 1.]

Figure 84: Results obtained from mutual displacement using an accelerometer, the walking activity, a window size of 1 second, the KNN classifier and different statistical feature sets.

Combining several types of activities into sets gives the following figures:

[Accuracy plots over activity sets SET1 to SET5; panels IDEAL, SELF and MUTUAL (DT, ACC, W6); y-axis: accuracy (0.9 to 1); series: FS1, FS2, FS3.]

Figure 85: Results obtained from all placements using an accelerometer, all activity sets, a window size of 6 seconds and the DT classifier.

[Confusion matrices for Ideal (SET2, FS3), Self (SET2, FS1) and Mutual (SET1, FS2), all with ACC and W6; current sensor vs. predicted sensor over RLA, RUA, BACK, LUA, LLA, RT, RC, LT and LC; colour scale 0 to 1.]

Figure 86: Results obtained from all placements using an accelerometer, a window size of 6 seconds, the DT classifier and different activity and feature sets.


[Accuracy plots over activity sets N1 to N4; panels IDEAL, SELF and MUTUAL (DT, ACC, W6); y-axis: accuracy (0.9 to 1); series: FS1, FS2, FS3.]

Figure 87: Results obtained from all placements using an accelerometer, different activity sets, the DT classifier and a window size of 6 seconds.

[Three confusion matrices, each titled Self, SET2, ACC, W6, FS1; current sensor vs. predicted sensor over RLA, RUA, BACK, LUA, LLA, RT, RC, LT and LC; colour scale 0 to 1.]

Figure 88: Results obtained from all placements using an accelerometer, different activity sets, a window size of 6 seconds, the DT classifier and Feature Set 1 (mean).

7.3.3 Discussion

Using the available data from the Fitness dataset, a wide range of combinations has been evaluated in order to draw conclusions. Results have been obtained by varying the sensor modality (accelerometer, gyroscope and magnetic field sensor), the activity, the window segment size, the classifier, the type of user attachment and the feature set.

On the one hand, an initial comparison has been carried out by varying the modality together with the window size (Figure 72, Figure 73, Figure 74). The gyroscope is clearly less accurate than the accelerometer and the magnetic field sensor for window segments of 1, 3 and 6 seconds, dropping to 66% with feature set 1 (mean). Within the gyroscope figures, however, the gain from the richer feature sets is significant, reaching an accuracy of 83% with feature set 3. Given these results, a recognition system based only on gyroscope data might be more efficient with statistical features than with time domain features. The benefit of adding features is also more pronounced for the gyroscope than for the other modalities, bearing in mind that the others already perform better. Surprisingly, for a window size of 1 second the accelerometer shows lower accuracy (80%) in the ideal-placement scenario than in the self-placement and mutual-displacement scenarios. This marks a difference within the Activity Recognition field, since realistic data are used here instead of the fixed user attachment assumed in earlier work. The gyroscope, by contrast, obtains its worst accuracy in the mutual-displacement scenario. The magnetic field sensor shows an efficiency similar to the accelerometer overall, but behaves like the gyroscope in the mutual-displacement scenario. In addition, increasing the window size to 6 seconds improves the gyroscope results in the mutual-displacement scenario, with 83% in the worst case.
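The window-based evaluation above can be illustrated with a minimal segmentation sketch. Everything here is invented for illustration (a synthetic 50 Hz signal and a hypothetical `window_features` helper); the thesis itself operates on the Fitness dataset recordings:

```python
import numpy as np

def window_features(signal, fs=50, win_s=3.0):
    """Split a 1-D sensor stream into non-overlapping windows of win_s
    seconds and return the mean of each window (Feature Set 1)."""
    n = int(fs * win_s)                      # samples per window
    usable = (len(signal) // n) * n          # drop the incomplete tail
    return signal[:usable].reshape(-1, n).mean(axis=1)

# 10 s of a synthetic 50 Hz "accelerometer" axis
t = np.arange(0, 10, 1 / 50)
acc = np.sin(2 * np.pi * 1.5 * t)
feats = window_features(acc, fs=50, win_s=3.0)
print(feats.shape)   # one mean value per complete 3 s window
```

The same segmentation applies unchanged to the 1 s and 6 s windows compared in the text; only `win_s` varies.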


Having seen the need to combine modalities to make the figures more efficient, the assumptions made in previous sections, before the feature methodology was implemented, have been corroborated: gyroscope data, the only modality sensitive to rotations and translations, has been combined with the accelerometer and the magnetic field sensor (Figure 75, Figure 76, Figure 77) in a subsequent analysis. Combining the accelerometer and the gyroscope, accuracy lies between 85% and 90% in the worst cases, but increasing the window size brings a smaller improvement than when the modalities are analyzed separately. Likewise, adding statistical features in the mutual-displacement scenario with the DT classifier does not affect the efficiency either; in that case the mean alone would be enough to recognize the on-body location. With the KNN classifier, also in the mutual-displacement scenario, results decline by approximately 0.5%. The combination of gyroscope and magnetic field sensor data is less efficient, at 83% for window sizes of 1, 3 and 6 seconds. A final mixture combines data from all three modalities (Figure 78) while varying the window size; results are less sensitive to the window size here, although a window segment of 6 seconds brings a slight improvement. In addition to the accuracy plots, the predictions from the confusion matrices have been represented as well (Figure 79 to Figure 84). Concluding on the use of the different modalities, the combination of the accelerometer, the gyroscope and the magnetic field sensor, whose data are mutually independent, proves to be the best combination that may be carried out with the statistical feature methodology.
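The overall accuracies quoted for each configuration can be read off the corresponding confusion matrix as its trace over its total. A sketch with an invented matrix over the nine on-body locations (the real matrices are those of Figures 79 to 84):

```python
import numpy as np

LOCATIONS = ["RLA", "RUA", "BACK", "LUA", "LLA", "RC", "RT", "LT", "LC"]

def accuracy_from_confusion(cm):
    """Overall accuracy: correctly classified windows (the diagonal)
    divided by all windows (the whole matrix)."""
    cm = np.asarray(cm, dtype=float)
    return np.trace(cm) / cm.sum()

# toy matrix: 90 correct windows per location, plus a symmetric
# RLA/LLA confusion (both lower arms move similarly while walking)
cm = np.eye(len(LOCATIONS)) * 90
cm[0, 4] = cm[4, 0] = 10
print(round(accuracy_from_confusion(cm), 3))
```

Row-normalising `cm` by `cm / cm.sum(axis=1, keepdims=True)` gives the per-location colour scale (0 to 1) used in the figures.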

On the other hand, a subsequent analysis varied the type of activity (Figure 85). As shown in section 6.2.1 (Figure 48), sets of activities were selected depending on the type of activities involved. Evaluating the three user-placement scenarios with the accelerometer, the DT classifier and a window size of 6 seconds, results show that for sets 1 and 2 the efficiency declines by only 0.3% with respect to sets 3, 4 and 5, since the accuracy is in all cases very high (99% in the best case). It should be noted that sets 1 and 2 contain fewer activities, although they were divided by activity type; thus activities such as walking, running and jogging are strongly affected when the statistical feature set changes. Sets 3, 4 and 5 contain almost the same number of activities, made up of fitness exercises performed with a particular body part (arms, knees, waist, trunk or heels) through rotations, jumps and cycling. When the user-placement scenario changes, the mutual-displacement scenario yields worse results than the ideal placement, but only for sets 1 and 2. This confirms that realistic mutual-displacement situations can also be recognized and that on-body identification might be carried out successfully. As in the previous analysis, data from the confusion matrices are shown in Figure 86.

Finally, a last evaluation considered an increasing number of activities. In Figure 49, within section 6.2.1, sets of N activities were described by progressively increasing the number of activities up to the last set (N4), which contains all the activities performed. As with the variation of the activity type in the previous results, this analysis used the three scenarios, the DT classifier, the accelerometer and a window size of 6 seconds. A considerably higher efficiency, 99%, has been obtained in the self-placement scenario when the number of activities is increased to set N4. In the mutual-displacement scenario, results clearly improve as the number of activities grows, although not as much as in the ideal-placement scenario, where the variability is higher. The richer statistical feature sets yield better results in all cases, which can also be observed in the confusion matrix representations (Figure 88).
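The KNN classifier used throughout reduces, in its simplest form, to a nearest-neighbour vote over window feature vectors. A self-contained sketch with invented one-dimensional features and location labels (the thesis works on multi-dimensional feature sets):

```python
import numpy as np

def knn_predict(train_X, train_y, x, k=3):
    """Label a window feature vector x by majority vote among its k
    nearest training windows (Euclidean distance)."""
    dists = np.linalg.norm(train_X - x, axis=1)
    nearest = np.argsort(dists)[:k]
    labels, counts = np.unique(train_y[nearest], return_counts=True)
    return labels[np.argmax(counts)]

# toy features: mean acceleration alone separates two sensor locations
train_X = np.array([[0.1], [0.2], [0.9], [1.0]])
train_y = np.array(["RLA", "RLA", "RT", "RT"])
print(knn_predict(train_X, train_y, np.array([0.15])))
```
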


8 Conclusion and future work

The need to give a realistic treatment to the on-body sensor placement recognition problem has been our motivation for carrying out this work. Most approaches undertaken in earlier stages of the Activity Recognition framework have tried to monitor patients' daily life, as well as fitness exercises, under an unnaturally fixed user self-attachment. In practice, sensor variations arise from mutual displacements, together with the intensity the sensor device undergoes while activities are performed. Throughout this work we have tackled this limitation in order to help recognition systems generalize the identification of on-body locations through the use of wearable sensors.

First, related work on orientation and location tasks was gathered to build a deeper understanding. The lack of a broad background within the Activity Recognition field left us a range of options for tackling these limitations. Therefore, as a first step, a basic introductory signal database was created and analyzed to become familiar with the behaviour of sensor signals. Then, building on this awareness of signals from the accelerometer, gyroscope and magnetic field sensor, the provided Fitness dataset was used to develop one methodology based on previous approaches and another introducing a change in the way Activity Recognition proceeds.

The Visual Inspection methodology gave us a degree of freedom by making the recognition of window patterns independent of computational analysis. In order to detect anomalous window segments, which could later be compared with a computational Universal Pattern obtained from the time domain and statistical methodologies, we structured the method with increasing knowledge of the feature domains. A set of time domain features, including Correlation, Best Fit and Mutual Information, helped us understand the problems that arise when initial time offsets limit the recognition of a window data segment compared with the selected Universal Patterns. In addition, linear relationships between data segments strongly influenced the failures observed with the Correlation methodology. Having initially expected better results from Best Fit and Mutual Information, we saw the need to move from the time domain to a domain with no time dependency. Although Best Fit achieved the best accuracy among the time domain features analyzed, it was not efficient enough to implement as an on-body placement recognizer.
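The Correlation step described above, matching a window segment against candidate Universal Patterns, can be sketched as a Pearson-correlation score. The patterns and location labels below are synthetic placeholders, not data from the thesis:

```python
import numpy as np

def pearson_match(segment, patterns):
    """Score a window segment against each candidate pattern with the
    Pearson correlation coefficient: a purely linear, time-aligned
    match, which is exactly why time offsets hurt this method."""
    return {name: float(np.corrcoef(segment, p)[0, 1])
            for name, p in patterns.items()}

t = np.linspace(0, 1, 100)
patterns = {"RLA": np.sin(2 * np.pi * 2 * t),   # slow arm swing
            "RT": np.sin(2 * np.pi * 5 * t)}    # faster leg motion
segment = np.sin(2 * np.pi * 2 * t + 0.1)       # slightly shifted swing
scores = pearson_match(segment, patterns)
best = max(scores, key=scores.get)
print(best)
```

A larger phase shift in `segment` quickly degrades the winning score, illustrating the time dependency that motivated the move to the statistical domain.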

Once the time domain results had confirmed the problems reported in previous investigations, a statistical space arose in which signals could be described purely by their parameters and classified according to those features. As the procedure was entirely new, the selected feature sets contain basic features, including the mean, standard deviation, maximum, minimum and energy, and finally the more complex kurtosis. Different combinations of sensor modalities, such as accelerometer with gyroscope or gyroscope with magnetic field sensor, have shown accuracy improvements by neutralizing rotations and translations of the device. In addition, different window sizes were applied in order to determine which is most suitable for our analysis: windows of 3 and 6 seconds, for instance, gave better results than a window size of 1 second.
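The richest feature set named above (mean, standard deviation, maximum, minimum, energy and kurtosis) can be sketched per window as follows; the helper name `fs3` and the Gaussian test signal are assumptions for illustration, not the thesis implementation:

```python
import numpy as np

def fs3(window):
    """Statistical features of one window segment: mean, standard
    deviation, maximum, minimum, energy and (Pearson) kurtosis."""
    w = np.asarray(window, dtype=float)
    mu, sd = w.mean(), w.std()
    energy = np.sum(w ** 2) / len(w)          # mean signal power
    kurt = np.mean((w - mu) ** 4) / sd ** 4   # ~3 for a Gaussian window
    return np.array([mu, sd, w.max(), w.min(), energy, kurt])

rng = np.random.default_rng(0)
f = fs3(rng.normal(size=300))
print(f.shape)   # six features per window
```

Unlike the time domain measures, none of these features depends on where the window starts within the signal.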

Several types of activity sets, together with sets formed by an increasing number of activities from the Fitness dataset, have been analyzed in depth and presented through a wide range of accuracy plots and confusion matrix representations. The statistical feature methodology achieved significant efficiency in almost all results, with a worst-case minimum accuracy of 66% for gyroscope data using a window size of 1 second across the three types of scenarios (ideal placement, self placement and mutual displacement). The remaining accuracies range from 80% to 90%, which might make implementation in a recognition system feasible within an assumed error margin.

Finally, with the statistical signal domain established as the significant methodology of this work, future investigations might explore statistical feature sets in greater depth through computational analysis, in order to detect which are suitable for application to recorded data segments. Starting from the features obtained in the time domain, most of them may also be tested in the statistical domain, along with the possibility of combining groups of them in different ways. In addition, one issue that may determine the future of Activity Recognition is the device in use. Throughout this work, wearable sensors have been used to simulate a generalization of sensors beyond the current leading technology. This might allow activity recognition appliances to be introduced naturally into patients' daily lives, as well as into activity monitoring in general, in a future not far from today.

