

Photo-model-based Cloth Recognition/Handling and Application to Visual Servoing

Graduate School of Natural Science and Technology, Okayama University

Doctoral Program, Division of Industrial Innovation Sciences

(Academic Year 2018)

Student ID 51427356, Khaing Win Phyu


Photo-model-based Cloth Recognition/Handling

and Application to Visual Servoing

Intelligent Robotics and Control Laboratory

 Khaing Win Phyu (51427356)

Abstract:

Nowadays, most garment companies, especially in Japan, face two main difficulties:

• a growing shortage of labor caused by the progressing aging of the population, and

• human workers' weaknesses, such as boredom and fatigue caused by long working hours.

Globally, robots are being introduced into garment factories because of advantages such as high precision, flexibility, productivity, and adaptability. Humans can easily recognize and handle (pick and place) objects of various shapes, colors, and sizes, and human eyes adapt to a wide range of lighting environments with a certain tolerance. From birth, human beings are adept at managing their activities under such environmental variations as climate, lighting, and temperature. While humans can carry out intended tasks under changing circumstances, an automated robot is not similarly adaptable. Researchers have therefore tried to improve the adaptability of automated robots.

Nowadays, industrial robots are utilized to perform a wide variety of tasks in place of human workers. However, it is difficult for robots to handle deformable objects such as cloth and string, especially when each object is unique. Of course, handling deformable objects is more difficult than handling rigid objects.


A robot control technology that uses visual information, called visual servoing, plays an important role in applications where deformable objects are recognized and handled by a robot. The main tasks of such robots are to recognize a deformable object and to pick it up and place it at a designated position automatically. The deformable nature of cloth is a major hindrance to automatic handling by robots. A deformable cloth can take a wide variety of poses (positions and orientations), and it must nevertheless be recognized and handled automatically, which requires capabilities in both vision-based recognition and visual servoing. Furthermore, it has been difficult for camera-equipped robots to detect and handle objects accurately under varying lighting environments.

This dissertation proposes a photo-model-based cloth handling system that recognizes a unique cloth appearing in front of a robot. The photo-model-based approach was adopted because a photo-model can be generated at once simply by taking a photograph of the unique cloth. In the proposed pose estimation method, which uses stereo vision, the photo-model is projected from 3D space into the 2D camera images, so the system does not require the object's size, shape, color, pattern, or design to be defined in advance. The cloth is detected and its pose estimated through a model-based matching method combined with a Genetic Algorithm (GA). The handling performance of the proposed method with stereo-vision cameras has been verified, showing that the system can recognize and handle a unique cloth under illuminances from 100 (lx) to 1300 (lx) and under different light sources such as fluorescent light and light-emitting diode (LED) light.
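To make the search scheme outlined above concrete, the following is a minimal, hypothetical sketch of a GA-style pose search of the kind the dissertation describes. It is not the dissertation's implementation: the pose encoding (x, y, z, θ), the search bounds, the population size, and the genetic operators are all illustrative assumptions, and the fitness function that projects the photo-model into the left and right camera images is passed in as a placeholder callable.

```python
import random
import numpy as np

# Illustrative search ranges for a candidate pose: x, y, z in [mm], theta in [deg].
# These bounds and all GA parameters are assumptions, not the thesis's values.
POSE_BOUNDS = np.array([[-200.0, 200.0],
                        [-200.0, 200.0],
                        [ 400.0, 600.0],
                        [ -90.0,  90.0]])

def ga_pose_search(fitness_fn, pop_size=30, generations=50, mutation=0.1):
    """Evolve candidate poses. fitness_fn(pose) should return the matching score
    obtained by projecting the photo-model at 'pose' into the left and right
    camera images (that evaluation is outside this sketch)."""
    lo, hi = POSE_BOUNDS[:, 0], POSE_BOUNDS[:, 1]
    span = hi - lo
    population = [lo + span * np.random.rand(4) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness_fn, reverse=True)    # best candidates first
        elite = population[:pop_size // 2]               # selection: keep top half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            child = (a + b) / 2.0                        # blend crossover
            child += mutation * span * np.random.uniform(-1.0, 1.0, 4)  # mutation
            children.append(np.clip(child, lo, hi))
        population = elite + children
    return max(population, key=fitness_fn)               # estimated pose

# Quick self-test with a dummy fitness that peaks at a known pose.
if __name__ == "__main__":
    true_pose = np.array([50.0, -30.0, 500.0, 20.0])
    print(ga_pose_search(lambda p: -np.linalg.norm(p - true_pose)))
```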

The aim of the proposed system is to be applied in a cloth mail-delivery operation of garment companies, in which employees currently classify a large number of second-hand cloths manually. Since the cloths are deformable objects and each cloth is single/unique, no definition of the cloths can be predefined in a computer. Consequently, it is difficult to handle a wide variety of cloths that are irregular and unique in shape and size. To overcome this difficulty, a cloth handling robot has been developed.

Because practicality is indispensable for industrial robots used in factories, the


pick-and-place accuracy and the tolerance against changing illumination have been verified in this dissertation. In addition, this dissertation includes evaluation results for illumination varieties and for the fitness function distribution, which explain why the proposed photo-model-based system is tolerant of illumination varieties. Moreover, the 3D recognition and handling accuracy have been confirmed to be practically effective by conducting recognition/handling experiments under different light conditions. Furthermore, the proposed system recognizes the target cloth and estimates its pose for handling even when the target cloth is partially occluded and another cloth is present in the field of view of the robot's cameras. The proposed photo-model-based cloth handling system is intended to reduce staff costs while achieving better performance and high accuracy.

The proposed system is summarized as follows:

(1) A new recognition method using a photo-model of each cloth has been proposed, which can recognize target objects (cloths) with different sizes, shapes, colors, patterns, and designs.

(2) A cloth handling robot built with current mechatronics technology cannot by itself discern the identity of the target cloth. A unique cloth could be identified by a barcode attached to it, but photo-model-based whole-cloth recognition is indispensable for error-free identification of a unique cloth. Of course, photo-model-based recognition may be combined with barcodes for enhanced reliability.

(3) The illumination tolerance of the proposed system was verified experimentally under different lighting environments, revealing that the system is robust to both the deformation of cloths and variations in lighting conditions.

The ingenuity of this photo-model-based approach to cloth identification and handling has been confirmed to be practical in the real world.


Contents

1 Introduction 1

1.1 Background and motivation . . . . . . . . . . . . . . . . . . . . . . . . . . 3

1.2 Aim and objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7

1.3 Principal contributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8

1.4 Dissertation structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10

1.5 Publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11

2 Literature review 15

2.1 Camera configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15

2.2 Vision-based cloth operation . . . . . . . . . . . . . . . . . . . . . . . . . . 17

2.3 Object recognition methods . . . . . . . . . . . . . . . . . . . . . . . . . . 20

2.4 Optimal solution search method . . . . . . . . . . . . . . . . . . . . . . . . 21

2.5 Robustness against changing environmental conditions . . . . . . . . . . . 23

2.5.1 Changing light environment . . . . . . . . . . . . . . . . . . . . . . 23

2.5.2 Occlusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24

2.5.3 Unknown environment . . . . . . . . . . . . . . . . . . . . . . . . . 24

2.6 System configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24

3 Orientation representation by quaternion 27

4 Photo-model-based recognition 41

4.1 HSV color system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41


4.2 Kinematics of stereo-vision . . . . . . . . . . . . . . . . . . . . . . . . . . . 44

4.2.1 Projective transformation of three-dimensional (3D) model . . . . . 44

4.2.2 Projective transformation matrix . . . . . . . . . . . . . . . . . . . 45

4.2.3 Projective transformation to left and right camera . . . . . . . . . . 47

4.2.4 Perspective projection . . . . . . . . . . . . . . . . . . . . . . . . . 56

4.3 Cloth Model generation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58

4.4 3D Model-based matching . . . . . . . . . . . . . . . . . . . . . . . . . . . 60

4.5 Definition of the fitness function . . . . . . . . . . . . . . . . . . . . . . . . 62

4.6 Genetic Algorithm (GA) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66

4.7 Spatial resolution in terms of pixel and bit . . . . . . . . . . . . . . . . 68

4.8 Verification of cloth recognizable illumination range . . . . . . . . . . . . . 68

4.9 Sensor unit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70

5 Experimental Environment 73

6 Verification of unique cloth handling performance based on 3D recognition accuracy of cloth by dual-eye cameras with photo-model-based matching 77

6.1 Recognition without disturbance . . . . . . . . . . . . . . . . . . . . . . . . 79

6.2 Recognition of partly hidden cloth . . . . . . . . . . . . . . . . . . . . . . . 79

6.3 Identification of two cloths . . . . . . . . . . . . . . . . . . . . . . . . . . . 80

6.4 1000 times recognition experiment . . . . . . . . . . . . . . . . . . . . . . . 84

6.5 100 times handling experiment . . . . . . . . . . . . . . . . . . . . . . . . . 85

6.6 Handling accuracy verification experiment . . . . . . . . . . . . . . . . . . 86

6.7 Handling experimental environment and contents . . . . . . . . . . . . . . 86

6.8 100 times handling experiment of cloth No.3 . . . . . . . . . . . . . . . . . 88

7 Verification of illumination tolerance for photo-model-based cloth recognition 93


8 Verification of photo-model-based pose estimation and handling of unique cloth under illumination varieties 111

8.1 Accuracy with fixed illumination . . . . . . . . . . . . . . . . . . . . . . . 111

8.2 Robustness confirmation against illumination and lighting source varieties . 116

8.3 Accuracy with illumination varieties . . . . . . . . . . . . . . . . . . . . . . 118

8.4 Handling experiment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129

9 Application of photo-model-based recognition to visual servoing 133

9.1 Photo-model-based real-time 3D-pose estimation and tracking of solid object with real-time multi-step genetic algorithm (RM-GA) . . . . . . . . . 133

9.2 Photo-model-based recognition . . . . . . . . . . . . . . . . . . . . . . . . 137

9.2.1 Kinematics of stereo-vision . . . . . . . . . . . . . . . . . . . . . . . 137

9.2.2 Model generation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139

9.2.3 3D model-based matching . . . . . . . . . . . . . . . . . . . . . . . 140

9.2.4 Definition of the fitness function . . . . . . . . . . . . . . . . . . . . 142

9.2.5 Real-time Multi-step Genetic Algorithm (RM-GA) . . . . . . . . . 144

9.3 Experimental environment . . . . . . . . . . . . . . . . . . . . . . . . . . . 148

9.4 Experimental results and discussion . . . . . . . . . . . . . . . . . . . . . . 149

9.4.1 Recognition of blue crab among different kinds of underwater animal pictures . . . . . . . . . . . . . . . . . . . . . . . . 149

9.4.2 Blue crab recognition and pose estimation under changing orientation 154

9.4.3 Real-time 3D pose estimation and tracking of solid object with RM-GA . . . . . . . . . . . . . . . . . . . . . . . . 161

10 Conclusion 163

Reference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167

Acknowledgement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181


List of Figures

1.1 System configuration of a cloth handling system: During the application

process, the cloth moves along the conveyor automatically. The single camera installed at the cloth entry point at the beginning of the conveyor is used

to generate a photo model. The left and right cameras which are attached

at the end-effector of the PA-10 robot are used to recognize and estimate

the pose of the cloth that appears in the field of view of the two cameras using the generated model. The application process in Fig. 1.1 is the same as the

cloth handling application in Fig. 2.3. . . . . . . . . . . . . . . . . . . . . . 7

2.1 Camera configuration: (a) Robot eye-in-hand system (b) Robot eye-to-

hand system. Each figure shows coordinate system of robot and end-

effector: (hand coordinate system (ΣH) and world coordinate system (ΣW )) 17

2.2 Evolution process of genetic algorithm (GA) . . . . . . . . . . . . . . . . . 22

2.3 A photo of a cloth handling robot system with dual-eye cameras: PA-10

robot is equipped with two cameras (vision sensors are used as dual-eye

vision system) for recognition and vacuum cups (four suction pads driven by an air compressor that make it possible to pick and place the target

object (cloth)) for handling. In the test, the robot picks up the cloth and

places it into the collection box. . . . . . . . . . . . . . . . . . . . . . . . . 26

3.1 Angle/Axis model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28


3.2 Rotation matrix of angle/axis representation . . . . . . . . . . . . . . . . . 34

3.3 Cross Products . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38

4.1 Red, Green and Blue (RGB) color system . . . . . . . . . . . . . . . . . . 43

4.2 Cyan color is the color value of RGB = (0,255,255) . . . . . . . . . . . . . 44

4.3 Projection Matrix. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45

4.4 Perspective projection of dual-eye vision-system: In the searching area, a

3D solid model is represented by the picture of cloth with black point (j-th

photo-model). The coordinate systems of photo-model, camera and image

are represented by ΣMj, ΣCL, ΣCR, ΣIL and ΣIR respectively. A 3D solid

model that is assumed to be in the searching area is projected from 3D

space to 2D left and right camera images. . . . . . . . . . . . . . . . . . . . 48

4.5 Model generation process: (a) get background image and calculate the hue

value, (b) get image including cloth and calculate the hue value of each

point, (c) distinguish background from difference between the hue value,

(d) determine the frame of model. Generated surface space of model and

outside space of model are denoted as Sin and Sout. . . . . . . . . . . . . . 60

4.6 A 3D solid model in the 3D searching space (sub-figure on the top of Fig.

4.6) and left and right 2D searching models represented as SL(φjM) and

SR(φjM) (sub-figures on the left/right bottom of Fig. 4.6). . . . . . . . . . . 61

4.7 Calculation of the matched degree of each point in model space (SL,in and

SL,out) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64

4.8 GA evolution process in which 3D models with different poses converge to

the real target object (cloth) through GA operation and the pose of the

model with the highest fitness function represents the estimated pose of

the cloth. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67


4.9 Photo of experiment for recognition of cloth No.3 under 700 (lx): (a) the illuminance (lx) is measured by a lux sensor (MW700), (b) recognition under 700 (lx) using

PA10 robot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69

4.10 Target objects (No.1 ∼ No.12) cloths: each has different colors, sizes,

shapes, patterns and weights (unit is [mm] and [g] in Table 5.1). . . . . . . 70

4.11 Sensor unit configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72

5.1 Sketch map of the manipulator. (Note that the distances, l2, l4, l5 are zero

[mm], but the joints are illustrated in the figure so that they can be seen clearly.) . . . 74

5.2 Coordinate systems of robot and end-effector: (hand coordinate system

(ΣH), world coordinate system (ΣW ) and target object coordinate system

(ΣM)). Note that W xM , W yM , W zM are offsets of robot-base center and

work table center in x-, y-, z-directions. (unit is [mm] in Fig. 5.2.) . . . . . 75

6.1 Experiments of No.3 cloth (a) recognition without disturbance (b) recog-

nition of partly hidden cloth (c) identification of two cloths . . . . . . . . 78

6.2 Fitness function distribution of No.3 cloth in x-y plane (a) recognition

without disturbance as 2D view (b) recognition without disturbance as 3D

view (c) recognition of partly hidden cloth as 2D view (d) recognition of

partly hidden cloth as 3D view (e) identification of two cloths as 2D view (f)

identification of two cloths as 3D view (each black point in Fig. 6.2 represents the result of 100 GA evaluations, indicating the pose of each model, and the mountain represents the pose searched by the full-search method) . . . . . 81


6.3 Fitness function distribution of No.3 cloth in x-y plane (a) recognition

without disturbance as 2D view (b) recognition without disturbance as 3D

view (c) recognition of partly hidden cloth as 2D view (d) recognition of

partly hidden cloth as 3D view (e) identification of two cloths as 2D view (f)

identification of two cloths as 3D view (each black point in Fig. 6.3 represents the result of 200 GA evaluations, indicating the pose of each model, and the mountain represents the pose searched by the full-search method) . . . . . 82

6.4 Histogram of error in (a) position x [mm] (b) position y [mm] (c) angle [◦]

(No.1∼No.12 Cloths) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84

6.5 Coordinate system of robot and end-effector (unit is [mm] in Figure 6.5) . 85

6.6 Coordinate system of target object (unit is [mm] in Figure 6.6) . . . . . . . 86

6.7 Flowchart of handling experiment . . . . . . . . . . . . . . . . . . . . . . . 87

6.8 Process of handling experiment . . . . . . . . . . . . . . . . . . . . . . . . 89

6.9 Histogram of 100 times handling experiment of cloth No.3 (a) X [mm]

position error (b)Y [mm] position error (c) Z [mm] position error (d) angle

θ[◦] orientation error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90

7.1 Fitness function value of each cloth (No.1∼ No.12) under fluorescent light

(see Table 7.1 for the entire numerical results of Fig. 7.1) . . . . . . . . . . 96

7.2 Fitness function value of each cloth (No.1∼ No.12) under LED light (see

Table 7.2 for the entire numerical results of Fig. 7.2) . . . . . . . . . . . . . 98

7.3 Fitness function distribution of cloth No.2 in x-y plane (fluorescent light) . 99

7.4 Fitness function distribution of angle for cloth No.2 (fluorescent light) . . . 100

7.5 Fitness function distribution of cloth No.2 in x-y plane (LED light) . . . . 101

7.6 Fitness function distribution of angle for cloth No.2 (LED light) . . . . . . 102

7.7 Fitness function distribution of cloth No.3 in x-y plane (fluorescent light) . 103

7.8 Fitness function distribution of angle for cloth No.3 (fluorescent light) . . . 104

7.9 Fitness function distribution of cloth No.3 in x-y plane (LED light) . . . . 105


7.10 Fitness function distribution of angle for cloth No.3 (LED light) . . . . . . 106

7.11 Fitness function distribution of cloth No.11 in x-y plane (fluorescent light) 107

7.12 Fitness function distribution of angle for cloth No.11 (fluorescent light) . . 108

7.13 Fitness function distribution of cloth No.11 in x-y plane (LED light) . . . . 109

7.14 Fitness function distribution of angle for cloth No.11 (LED light) . . . . . 110

8.1 Average error and ±3σ concerning three positions (x, y, z) and one orien-

tation (θ) of different cloths (No.1 ∼ No.12) that have been confirmed by

the experimental result of 1000 times recognition under fixed illumination

700 (lx), where σ is standard deviation. . . . . . . . . . . . . . . . . . . . . 114

8.2 Frequency distributions of x, y, z and θ of cloth No.6, obtained by conducting the recognition experiment 1000 times. . . . . . . . . . 115

8.3 Average error and average error±3σ of 100 times recognition experimental

result of 12 unique cloths under 100 (lx). . . . . . . . . . . . . . . . . . . . 118

8.4 Average error and average error±3σ of 100 times recognition experimental

result of 12 unique cloths under 400 (lx). . . . . . . . . . . . . . . . . . . . 119

8.5 Average error and average error±3σ of 100 times recognition experimental

result of 12 unique cloths under 700 (lx). . . . . . . . . . . . . . . . . . . . 120

8.6 Average error and average error±3σ of 100 times recognition experimental

result of 12 unique cloths under 1000 (lx). . . . . . . . . . . . . . . . . . . 121

8.7 Average error and average error±3σ of 100 times recognition experimental

result of 12 unique cloths under 1300 (lx). . . . . . . . . . . . . . . . . . . 122

8.21 Four suction pads under the robot hand . . . . . . . . . . . . . . . . . . . . 129

8.22 Average error and average error±3σ of 100 times handling experiments of

No.2 cloth under different light conditions (100 [lx], 400 [lx], 700 [lx], 1000

[lx] and 1300 [lx]). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130


9.1 A photo of a cloth handling robot system with dual-eye cameras: PA-10

robot is equipped with two cameras (vision sensors are used as dual-eye

vision system) for recognition and vacuum cups (four suction pads driven by an air compressor that make it possible to pick and place the target

object (cloth)) for handling. In the test, the robot picks up the cloth and

places it into the collection box. . . . . . . . . . . . . . . . . . . . . . . . . 135

9.2 A photo of the blue crab recognition robot system with dual-eye cameras:

a PA-10 robot is equipped with two cameras (vision sensors are used as

dual-eye vision system) for recognition and pose estimation . . . . . . . . . 136

9.3 Perspective projection of dual-eye vision-system: In the searching area, a

3D solid model is represented by the picture of blue crab with black point

(j-th photo model) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138

9.4 Model generation process, described in Fig. 9.4(a) ∼ Fig. 9.4(d): Fig. 9.4(a) shows a photograph of the background image, Fig. 9.4(b) shows a photograph of the target object (the blue crab) against the background, Fig. 9.4(c) represents the surface space model Sin formed by the inner point group (also shown in the enlarged portion of the photo-model), and Fig. 9.4(d) represents the outside space of the model Sout enveloping Sin (also shown in the enlarged portion of the photo-model) . . . . . . . . . . . . 141

9.5 A 3D solid model in the 3D searching space (sub figure on the top of Fig.

9.5) and left and right 2D searching models represented as SL(φjM) and

SR(φjM) (sub figures on the left/right bottom of Fig. 9.5) . . . . . . . . . . 142


9.6 RM-GA evolution process in which 3D models with random poses converge

to the real 3D solid target object (blue crab) through RM-GA operation,

and the pose of the model with the best fitness function represents the

estimated pose of the blue crab: (a) graphical representation of solution

evaluation and chromosome evolution during each 24 [ms] control period

and (b) flowchart of RM-GA process steps during convergence performance

from the first to final generation. . . . . . . . . . . . . . . . . . . . . . . . 147

9.7 Coordinate systems of robot and end-effector: Hand coordinate system

(ΣH), world coordinate system (ΣW ) and target coordinate system (ΣM) . 148

9.8 Recognition of a blue crab among different kinds of underwater animals on

the underwater background . . . . . . . . . . . . . . . . . . . . . . . . . . 150

9.9 Recognition of a blue crab in left and right camera images in experiment . 151

9.10 Experiment of recognizing a blue crab (a) a photograph of the blue crab

on the underwater background (b) fitness distribution in x-y plane (c) a

photograph of the blue crab among different kinds of underwater animals

(d) fitness distribution in x-y plane (e) fitness distribution of orientation in

ε1-ε2 (f) fitness distribution of orientation in ε1-ε3 (g) fitness distribution

of orientation in ε2-ε3. Note that the output of the orientations (ε1, ε2, and

ε3) are represented in quaternion. . . . . . . . . . . . . . . . . . . . . . . . 152

9.11 Experiment of recognizing a blue crab (a) a photograph of blue crab without

disturbance (b) fitness distribution in x-y plane (c) fitness distribution of

orientation in ε1 and ε2 (d) fitness distribution of orientation in ε2 and ε3.

Note that the output of the orientations (ε1, ε2, and ε3) are represented in

quaternion. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155


9.12 Experiment of recognizing the blue crab orientation around yM -axis, 0 [◦]

(a) recognition of a blue crab in left and right camera images (b) fitness

distribution in x-y plane (c) fitness distribution of orientation in ε1-ε2 (d)

fitness distribution of orientation in ε2-ε3 (e) fitness distribution of orien-

tation in ε1-ε3. Note that the output of the orientations (ε1, ε2, and ε3)

are represented in quaternion. . . . . . . . . . . . . . . . . . . . . . . . . . 156

9.13 Experiment of recognizing the blue crab orientation around yM -axis, -10

[◦] (a) recognition of a blue crab in left and right camera images (b) fitness

distribution in x-y plane (c) fitness distribution of orientation in ε1-ε2 (d)

fitness distribution of orientation in ε2-ε3 (e) fitness distribution of orien-

tation in ε1-ε3. Note that the output of the orientations (ε1, ε2, and ε3)

are represented in quaternion. . . . . . . . . . . . . . . . . . . . . . . . . . 157

9.14 Experiment of recognizing the blue crab orientation around yM -axis, -20

[◦] (a) recognition of a blue crab in left and right camera images (b) fitness

distribution in x-y plane (c) fitness distribution of orientation in ε1-ε2 (d)

fitness distribution of orientation in ε2-ε3 (e) fitness distribution of orien-

tation in ε1-ε3. Note that the output of the orientations (ε1, ε2, and ε3)

are represented in quaternion. . . . . . . . . . . . . . . . . . . . . . . . . . 158

9.15 Experiment of recognizing the blue crab orientation around yM -axis, -30

[◦] (a) recognition of a blue crab in left and right camera images (b) fitness

distribution in x-y plane (c) fitness distribution of orientation in ε1-ε2 (d)

fitness distribution of orientation in ε2-ε3 (e) fitness distribution of orien-

tation in ε1-ε3. Note that the output of the orientations (ε1, ε2, and ε3)

are represented in quaternion. . . . . . . . . . . . . . . . . . . . . . . . . . 159


9.16 Experiment of recognizing the blue crab orientation around yM -axis, -40

[◦] (a) recognition of a blue crab in left and right camera images (b) fitness

distribution in x-y plane (c) fitness distribution of orientation in ε1-ε2 (d)

fitness distribution of orientation in ε2-ε3 (e) fitness distribution of orien-

tation in ε1-ε3. Note that the output of the orientations (ε1, ε2, and ε3)

are represented in quaternion. . . . . . . . . . . . . . . . . . . . . . . . . . 160

9.17 Pose estimation and tracking performance experiment of the 3D solid target object (blue crab) orientation in the yM-direction, ε2M. Note that the orientations ε2M, ε2Hd, and ε2H are represented in quaternion. . . . . . . . . . . . 161


List of Tables

4.1 Information for three cameras FCB-IX11A . . . . . . . . . . . . . . . . . . 71

5.1 Size, thickness and weight of all cloths (No.1 ∼ No.12). . . . . . . . . . . . 76

6.1 Fitness function value under different GA evaluation times . . . . . . . . . 83

6.2 Average error and standard deviation of cloth No.3 . . . . . . . . . . . . . 91

7.1 Fitness function value under different light conditions (100 (lx), 400 (lx),

700 (lx), 1000 (lx) and 1300 (lx)) using fluorescent light . . . . . . . . . . 97

7.2 Fitness function value under different light conditions (100 (lx), 400 (lx),

700 (lx), 1000 (lx) and 1300 (lx)) using LED light . . . . . . . . . . . . . . 97

8.1 Average error and standard deviation (3σ) measured by 1000 times recogni-

tion experiments of different cloths (No.1 ∼ No.12) under fixed illumination

700 (lx). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112

8.2 Average error measured by 100 times recognition experiments of each cloth

No.1∼No.12 under different light conditions (100 (lx), 400 (lx), 700 (lx),

1000 (lx) and 1300 (lx)). . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123

8.3 Standard deviation (σ) measured by 100 times recognition experiments of

each cloth No.1∼No.12 under different light conditions (100 (lx), 400 (lx),

700 (lx), 1000 (lx) and 1300 (lx)). . . . . . . . . . . . . . . . . . . . . . . . 126

8.4 Average error and standard deviation (3σ) measured by 100 times handling

experiments of cloth No.2 under different light conditions. . . . . . . . . . 131


9.1 List of the different kinds of underwater animals (Ref: https://www.google.co.jp) 153


Chapter 1

Introduction

In recent years, the aging of the population and the decline in the birth rate have led to a growing shortage of labor in Japan. In 2009, the proportion of Japan's population aged 65 years or older reached 23%, the highest in the world [1]. Japan has the highest proportion of older adults of any country, as discussed in detail in [2]. The Japanese aging rate is expected to continue rising in the future. The aging population is not only a personal matter for individuals but also an important issue for local and national financial systems such as pension, health, and long-term care systems [2]. The declining birthrate and the aging population together mean a shrinking labor force, which poses a serious problem for the Japanese labor market. Japan's labor force population has continued to decrease from its level around 1988, and it is expected to keep decreasing in the future as well. As a result, most garment companies, especially in Japan, face two main difficulties:

(1) a growing shortage of labor caused by the progressing aging of the population, and

(2) human workers' weaknesses, such as boredom and fatigue caused by long working hours.


Because of this situation, the demand for robots that can take over tasks that humans have performed so far is increasing. An automated robot can work at a constant speed without pausing for rest breaks, sleep, or vacations, and ultimately achieves higher productivity than a human worker. In the mail-order operation of a garment company, employees classify a large number of goods manually every day; searching for items and sorting them are daily tasks in such a textile workplace. A robot, on the other hand, offers advantages over human employees such as higher work efficiency and precision. However, many challenges remain for robots in recognition and pose (position and orientation) detection operations, especially when the object to be handled is deformable and each object has a unique shape and color. Considering these problems, IT-based technology and robotics have begun to be used in garment (cloth) companies. Previous research efforts [3] have typically addressed sub-tasks within an autonomous laundering pipeline, such as recognizing cloth varieties [4], [5], [6], cloth pose estimation [7], [8], [9], [10], grasping a cloth from a pile of cloths [4], [11], folding [12], [13], [14], [15], [16], and unfolding [17], [5], [18], [19], [20], [21]. However, robots in the garment industry are capable of such operations only if preconditions such as the following are met:

(1) the lighting environment surrounding the operating area is guaranteed not to change over time,

(2) the shape of the object to be handled can be defined in advance, and

(3) the design of the robot hand is predefined based on the object shape.

Therefore, our research group has developed a robot that is capable of automatically classifying, identifying, and handling cloths by using a visual servoing system.


1.1 Background and motivation

Many studies on recognizing deformable objects, especially cloths, have been conducted [22]. However, most of them are based on monocular vision. In our previous work, we developed a three-dimensional move-on-sensing system, named 3D-MoS, which uses two cameras as stereo-vision sensors. Pose tracking with two stereo cameras as vision sensors [23] has also been demonstrated in our laboratory. Other studies have used multiple cameras [24] or two cameras, one fixed on the end-effector and the other fixed in the workspace [25]. In this dissertation, a 3D model-based matching method

and dual-eye cameras are applied to estimate the pose (position and orientation) of the robot relative to the target cloth without defining the target cloth's size, shape, color, design, or weight. The correlation between the model and the target object is evaluated by the designed model-based matching. A Genetic Algorithm (GA) has been developed to search for the best model, that is, the one that represents the pose of the target object [26], [27]. The basic concepts of GA are discussed in [28]. The effectiveness of the 3D model-based matching method and the developed GA has already been confirmed in the guidance and control of an underwater robot (3D-MoS/AUV) [29].
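The later chapters define the fitness as a matched degree evaluated over the model's surface points Sin and an enveloping outer band Sout (see Figs. 4.5 and 4.7). As a rough illustration only, and under the assumption that matching is done by comparing hue values, a correlation-like evaluation for one camera image could look like the following sketch; the thresholds, weighting, and image access are hypothetical, not the dissertation's exact definition.

```python
import numpy as np

def matched_degree(hue_image, model_in_pts, model_out_pts, model_hues, hue_tol=10.0):
    """Toy matched-degree evaluation for one camera image (illustrative only).

    hue_image     : 2D array of hue values of the captured image
    model_in_pts  : (N, 2) pixel coordinates of projected inner points S_in
    model_out_pts : (M, 2) pixel coordinates of projected outer points S_out
    model_hues    : (N,) hue values of the photo-model at the inner points
    """
    h, w = hue_image.shape
    mean_hue = float(np.mean(model_hues))
    score = 0.0
    # Inner points vote positively when the image hue matches the model hue.
    for (u, v), hm in zip(model_in_pts, model_hues):
        ui, vi = int(round(u)), int(round(v))
        if 0 <= vi < h and 0 <= ui < w and abs(float(hue_image[vi, ui]) - hm) < hue_tol:
            score += 1.0
    # Outer (background) points penalize poses that overlap the cloth only partially.
    for (u, v) in model_out_pts:
        ui, vi = int(round(u)), int(round(v))
        if 0 <= vi < h and 0 <= ui < w and abs(float(hue_image[vi, ui]) - mean_hue) < hue_tol:
            score -= 0.5
    return score / max(len(model_in_pts), 1)  # normalized matching score

# Tiny self-test with a uniform synthetic hue image.
img = np.full((480, 640), 30.0)
print(matched_degree(img, [(100, 100), (101, 100)], [(300, 300)], [30.0, 31.0]))
```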

A cloth detection method based on an image wrinkle feature was proposed in [30]. The purpose of [31] is to recognize the configuration of cloth articles, but a folding model is always needed to initialize the proper fold direction, which is a weak point of that study. Robots equipped with a hand-eye camera have been widely used as industrial robots, e.g., bin-picking robots, because such a robot can choose its observation point arbitrarily. Since these robots conventionally use a single camera in factories, a problem arises in that the accuracy of the estimated position along the camera's viewing direction tends to deteriorate. Therefore, a recognition method [9] that combines a camera and a laser range finder has been studied. However, this kind of method inherently suffers from a difficulty referred to here as the "Object Identification Problem." The method that estimates the target's pose (position and


orientation) using a camera and a laser range finder presupposes that the object projected into the camera image and the object detected by the laser range finder are identical. If this assumption does not hold, the robot's motion may become erratic. There is also a concept called "Sensor Fusion" [32], which integrates information from multiple sensors for measurement. In the sensor fusion strategy, the same Object Identification Problem remains unsolved, because it cannot be guaranteed that the sensors are detecting the same object.

When the 3D position and orientation are estimated from the image information of multiple cameras, a similar problem can occur: it cannot be guaranteed that the points on an object in 3D space correspond correctly to the points in the images of the multiple cameras. This gives rise to the "Corresponding Points Identification Problem" in the process of reconstructing a 3D pose from points in multiple camera images. It is common to search for corresponding points in multiple camera images using epipolar geometry [33]. When the corresponding points found are not truly identical, the reconstructed 3D points inevitably include position errors, meaning that the 3D object reconstructed from the multiple camera images has an incorrect 3D shape and pose. Concerning the control of robots by visual servoing [34], a prediction method that uses an object's dynamical model and a nonlinear observer was proposed in [35]; however, it suffers from the problem that some time is required before the recognition error reduces to nearly zero.
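To see why epipolar search alone does not settle the correspondence question, recall that a candidate pair of image points (x_l, x_r) is only required to satisfy the epipolar constraint x_r^T F x_l = 0 for the fundamental matrix F; every point on the epipolar line satisfies it, so a pair that passes the test may still belong to different physical points. The short sketch below, with an arbitrary assumed fundamental matrix, merely checks that algebraic condition.

```python
import numpy as np

def epipolar_residual(F, x_left, x_right):
    """Return |x_r^T F x_l| for pixel points given in homogeneous coordinates.
    A small residual means the pair lies on corresponding epipolar lines, but
    it does NOT guarantee the two pixels image the same 3D point."""
    xl = np.append(np.asarray(x_left, dtype=float), 1.0)
    xr = np.append(np.asarray(x_right, dtype=float), 1.0)
    return abs(xr @ F @ xl)

# Example with an arbitrary (assumed) fundamental matrix.
F = np.array([[ 0.0, -1e-5,  5e-3],
              [ 1e-5,  0.0, -8e-3],
              [-5e-3,  8e-3,  1.0]])
print(epipolar_residual(F, (320, 240), (300, 238)))  # small value = consistent pair
```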

To avoid the above problems, a single-camera system may be a possible choice. In single-camera observation, however, it is understood that position measurement along the camera depth direction is difficult [36] and that the orientation is hard to estimate correctly [37]. In [37], the position and orientation (heading angle only) of an underwater vehicle with respect to a target object are estimated using images from a single camera. To compensate for these demerits of single-camera pose estimation, Luca et al. [34] proposed a method that estimates the distance between the camera


and an object by utilizing the time profile of the camera's motion trajectory, called "Moving View Points." As the above discussion of pose estimation by sensor fusion, multiple cameras, and a single camera shows, pose estimation remains difficult even when the target object is solid and not deformable.

Based on the above discussion, handling deformable, single/unique second-hand cloths is a challenging topic in robotics research. Under these circumstances, the authors were asked by T2K Co., Ltd. (a logistics company dealing with second-hand cloths) to construct a robotic system to handle second-hand cloths. The authors therefore started to build a robot able to handle single/unique cloths.

A preceding work [9] is a pioneer whose strategy is based on a combination of a sensor and cameras; it therefore cannot escape from the above-mentioned Object Identification Problem. In our previous work [38], [39], automatic recognition of deformable cloths and estimation of the 3D pose of a cloth using three cameras were proposed. In the proposed pose estimation method, a photo-model-based projection from 3D to 2D is used, so the system does not require the object's size, shape, design, color, or weight to be defined. It detects the cloth through the model-based matching method and a Genetic Algorithm (GA). A merit of the proposed handling system is that, because it generates photo-models of target cloths automatically, it can recognize a unique object and estimate its pose with lighting adaptiveness, enabling the system to handle unique targets under varying lighting conditions.
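The photo-model-based matching described above relies on projecting each candidate model pose from 3D space into the left and right camera images (detailed in Chapter 4). As a minimal sketch under standard pinhole-camera assumptions, with made-up intrinsic parameters and a simple horizontal baseline between the two cameras, such a projection could be written as follows; the calibrated values of the actual system would replace these constants.

```python
import numpy as np

# Assumed intrinsics (focal lengths in pixels, principal point) and stereo baseline.
FX, FY, CX, CY = 800.0, 800.0, 320.0, 240.0
BASELINE = 100.0  # [mm] offset of the right camera along x in the left-camera frame

def project_point(p_cam):
    """Perspective projection of a 3D point (camera frame, z forward) to pixels."""
    x, y, z = p_cam
    return np.array([FX * x / z + CX, FY * y / z + CY])

def project_model(model_pts, pose):
    """Transform photo-model points by a candidate pose (x, y, z, theta[deg],
    rotation about the optical axis) and project into both images."""
    tx, ty, tz, th = pose
    c, s = np.cos(np.radians(th)), np.sin(np.radians(th))
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    pts_cam = model_pts @ R.T + np.array([tx, ty, tz])        # into left-camera frame
    left = np.array([project_point(p) for p in pts_cam])
    right = np.array([project_point(p - [BASELINE, 0.0, 0.0]) for p in pts_cam])
    return left, right  # pixel coordinates used by the matching/fitness step

# Example: a flat 3-point fragment of a photo-model, about 500 mm in front of the cameras.
model = np.array([[0.0, 0.0, 0.0], [40.0, 0.0, 0.0], [0.0, 40.0, 0.0]])
print(project_model(model, (10.0, -20.0, 500.0, 15.0)))
```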

Since practicality is indispensable for industrial robots used in factories, the tolerance against changing illumination has been checked [40]. The paper [41] is an extension of [40] that includes a new evaluation of illumination varieties and of the fitness function distribution, which explains why the proposed photo-model-based system is tolerant of illumination varieties. In [12], the cloth-grasping points are detected using four cameras, without other sensors, for a robotic towel-folding application. The


main task in [12] is to detect the corners of the cloth rather than to recognize the whole cloth. Recognition of cloth shape based on strategic observation during handling was reported in [9], using multiple views from a trinocular stereo-vision system. In most such studies, including [12] and [9], deformable cloths are handled, but few studies have addressed the handling of deformable cloths under illumination variations. Meanwhile, recognizing a 3D solid object and estimating its pose have long been regarded as difficult problems. Pose estimation of solid objects usually relies on dual-eye recognition methodology [42], [9], [43]. Dual-eye image processing exploits epipolar geometry [42] to reduce the dimension of the search space defined in the dual-eye images [43]. Even though epipolar geometry is effective for 3D image processing, the "Corresponding Points Identification Problem," that is, how to make a point in one camera image correspond correctly to a point in the other camera image (confirming that the two points in the dual camera images represent the same point on the 3D target object), has yet to be solved [44], [45]. The reconstruction of an object's pose in 3D space from dual-eye cameras therefore remains difficult, and the estimated pose inevitably includes errors.

Therefore, our research group developed a vision-based robot system to avoid the problems mentioned above, as shown in Fig. 1.1. In this configuration, three cameras are used as vision sensors. The first camera is used for model generation. The other two, which are fixed at the end-effector of the manipulator (PA-10 robot), are used for recognition based on the photograph model. The PA-10, with 7 degrees of freedom (DoF), is used for the recognition/handling and pose detection operations.


[Fig. 1.1 component labels: two cameras for recognition; PA-10 robot and controller; PC and PA-10 cable; sensor PC and cable; relay terminal block; RCA composite cable; hand with four suction pads; sensor unit; robot controller unit; cloth; camera for model generation (image saved as BMP bitmap file); Internet or USB communication; transport conveyor; collection box.]

Fig. 1.1: System configuration of a cloth handling system: During the application process,

the cloth moves along the conveyor automatically. The single camera installed at the cloth entry point at the beginning of the conveyor is used to generate a photo-model. The left and right cameras, which are attached at the end-effector of the PA-10 robot, are used to recognize and estimate the pose of the cloth that appears in the field of view of the two cameras using the generated model. The application process in Fig. 1.1 is the same as the cloth handling application in Fig. 2.3.

1.2 Aim and objectives

The key aims of the verification experiments presented in this thesis are as follows:

* to improve the performance of the PA-10 robot with less positioning error,

* to become familiar with the control algorithms and the motion of the PA-10 robot's end-effector,

* to study the handling of multiple cloths, both randomly placed cloths and the target cloths No. 1∼12,


* to control the PA-10 robot so as to obtain good performance,

* to recognize, estimate, and handle various deformable objects with different colors, sizes, shapes, patterns, and weights through the proposed photo-model-based recognition method, and

* to apply the system automatically in various optical and unknown environments.

1.3 Principal contributions

From birth, human beings are adept at managing their activities under such variability as climate, lighting environment, and temperature. While humans can carry out intended tasks under changing circumstances, an automated robot is not similarly adaptable. Researchers have therefore tried to improve the adaptability of automated robots.

Nowadays, industrial robots are utilized to perform a wide variety of tasks in place of human workers; however, 3D pose estimation of deformable cloths under different lighting conditions has hardly been discussed in existing studies. These automated robots are required to handle a wide variety of deformable objects, including cloths, strings, ropes, electric cables, and so on. Of course, handling deformable objects is more difficult than handling rigid objects. A robot control technology using visual information, called visual servoing, plays an important role in applications where deformable objects are recognized and handled by a robot.

Each deformable target object can take a wide variety of poses (positions and orientations) to be recognized and handled, which requires capabilities in both vision-based recognition and visual servoing. The appearance of a deformable cloth is affected not only by how it is placed in the camera's view but also by the lighting condition, which is one of the main difficulties in visual servoing.


Based on the above discussion, estimating the 3D pose of even a solid target with dual-eye cameras is not easy, so pose estimation and handling of deformable cloths is even more challenging. Under these circumstances in robotics research, the authors were asked by T2K Co., Ltd. (a logistics company dealing with cloths) to construct a robotic system to handle single/unique second-hand cloths.

Considering this backdrop, a photo-model-based cloth recognition method has been devised [12], [38], [39], [40], [41], [46], since a photo-model can be made at once by taking a photograph of the unique cloth.

The aim of the proposed system is to be applied in the cloth mail-delivery operation of T2K Co., Ltd., in which employees currently classify a large number of second-hand cloths manually. Since the cloths are deformable and each cloth is single/unique, no definition of the cloths can be predefined in a computer. Consequently, it is difficult to handle a wide variety of cloths that are irregular and unique in shape and size. T2K explained this situation to the authors and asked our research group to collaborate in developing a cloth handling robot. Twelve different cloth samples were chosen by T2K so that they may represent the enormous variety of cloths.

Despite the author's efforts to find published papers concerning vision-based robot systems using a photo-model, no related paper could be found. Since practicality is indispensable for industrial robots used in factories, the pick-and-place accuracy and the tolerance against illumination changes have been checked [38], [40]. This thesis is also an extension of [38] and [40], but it includes new evaluation results concerning illumination varieties and the fitness function distribution, which explain why the proposed photo-model-based system is tolerant of illumination varieties. In our previous study, two different light sources, fluorescent light and light-emitting diode (LED) light, were used to provide different illumination environments. The reason these illumination conditions were chosen has been published in [41]. According to the experimental results, the recognition performance under fluorescent light is better than under light-emitting


diode light. Moreover, 3D recognition tolerance against occlusion, identification of two cloths, and 3D handling with a predetermined position and orientation have been verified and published in [47]. Finally, 3D recognition and handling accuracy have been confirmed to be practically effective by conducting the recognition/handling experiments under different light conditions [48].

This thesis focuses on the handling of a deformable object after recognizing it. Other researchers working on recognition, handling, 3D pose estimation, and visual servoing of deformable objects under changing environments can use and cite the present work. For the application of photo-model-based recognition to visual servoing, the author chose a blue crab as a particularly difficult target for the following reasons:

• a blue crab is a kind of deformable object,

• a blue crab is found mostly underwater, in areas covered with eelgrass, mud, etc., and

• a blue crab can be encountered under a variety of lighting conditions (such as daytime and night-time).

1.4 Dissertation structure

This thesis is organized as follows:

Chapter 2 presents a literature review of robot vision-based approaches, including cloth recognition, pose estimation, and handling operations such as grasping, unfolding, folding, and flattening.

Chapter 3 describes the representation of orientation by quaternion, among the different representations of the orientation of a rigid body such as Euler angles.

Chapter 4 describes photo-model-based recognition, with detailed explanations of the hue, saturation, value (HSV) color system, the kinematics of stereo vision, cloth model generation,


3D model-based matching, the definition of the designed fitness function, the genetic algorithm (GA), spatial resolution in terms of pixels and bits, verification of the cloth-recognizable illumination range, and the sensor unit used in the experiments.

Chapter 5 presents the experimental environment for cloth recognition and handling

experiments under different light conditions.

Chapter 6 describes the experimental results and discussion for the verification of unique cloth handling performance based on the 3D recognition accuracy of cloth by dual-eye cameras with the photo-model-based matching method.

Chapter 7 presents the experimental contents and results on the illumination tolerance of the photo-model-based cloth recognition system.

Chapter 8 describes the verification of photo-model-based pose estimation and han-

dling of unique cloth under illumination varieties.

Chapter 9 presents the application of photo-model-based recognition and dual-eye

cameras to visual servoing using “Real-time Multi-step Genetic Algorithm (RM-

GA)” with the trial experimental results.

Chapter 10 concludes this thesis.

1.5 Publications

The research work presented in this thesis has resulted in the following publications.

Journals

(1) “Verification of Illumination Tolerance for Photo-model-based Cloth Recognition,”

Khaing Win Phyu, Ryuki Funakubo, Ryota Hagiwara, Hongzhi Tian, and Mamoru

Minami, Artificial Life and Robotics, Vol. 23, DOI: 10.1007/s10015-017-0391-0, pp.

118-130 (2018)


(2) “Verification of Photo-model-based Pose Estimation and Handling of Unique Clothes

under Illumination Varieties,” Khaing Win Phyu, Ryuki Funakubo, Ryota Hagi-

wara, Hongzhi Tian, Mamoru Minami, Journal of Advanced Mechanical Design, Sys-

tems, and Manufacturing, JSME, Vol. 12, No. 2, DOI: 10.1299/jamdsm.2018jamdsm0047,

(2018) (23 pages)

(3) “Verification of Unique Cloth Handling Performance Based on 3D Recognition Ac-

curacy of Cloth by Dual-eyes Cameras with Photo-model-based Matching,” Khaing

Win Phyu, Ryuki Funakubo, Fumiya Ikegawa, and Mamoru Minami, International

Journal of Mechatronics and Automation (IJMA) (In Press) (2018) (8 pages)

International Conferences

(1) “Accuracy on Photo-model-based Clothes Recognition,” Khaing Win Phyu, Yu Cui,

Hongzhi Tian, Ryota Hagiwara, Ryuki Funakubo, Akira Yanou, and Mamoru Mi-

nami, Proceedings of the SICE Annual Conference 2016, pp. 1366-1371, Tsukuba,

Japan, September 20-23 (2016)

(2) “Verification of Recognition Performance of Cloth Handling Robot with Photo-

model-based Matching,” Khaing Win Phyu, Ryuki Funakubo, Ikegawa Fumiya,

Yasutake Shinichiro, and Mamoru Minami, Proceedings of 2017 IEEE International

Conference of Mechatronics and Automation (ICMA), pp. 1750-1756, Takamatsu,

Japan, August 6-9 (2017)

(3) “Recognition and Handling of Clothes with Different Pattern by Dual Hand-eyes

Robotic System,” Ryuki Funakubo, Khaing Win Phyu, Hongzhi Tian, and Mamoru

Minami, Proceedings of the 2016 IEEE/SICE International Symposium on System

Integration, pp. 742-747, Sapporo Convention Center, Sapporo, Japan, December

13-15 (2016)

(4) “Verification of Illumination Tolerance for Clothes Recognition,” Ryuki Funakubo,

Khaing Win Phyu, Ryota Hagiwara, Hongzhi Tian, Mamoru Minami, The Twenty-

Second International Symposium on Artificial Life and Robotics 2017 (AROB 22nd

2017), pp. 807-812, B-Con Plaza, Beppu, Japan, January 19-21 (2017)

(5) “Robust Translational/Rotational Eye-Vergence Visual Servoing under Illumina-

tion Varieties,” Hongzhi Tian, Yejun Kou, Khaing Win Phyu, Daiki Yamada and

Mamoru Minami, Proceedings of the 2017 IEEE International Conference on Robotics

and Biomimetics, pp. 2032-2037, Macau Sar, China, December 5-8 (2017)

National Conferences

(1) “Recognition of Cloth Target Based on Photograph Modeling and Handling,” Khaing Win Phyu, Yu Cui, Hongzhi Tian, Ryota Hagiwara, Ryuki Funakubo, Akira Yanou, and Mamoru Minami, JSME Robotics and Mechatronics Conference 2016 (ROBOMECH 2016), Paper No. 2A1-18b2, Yokohama, Japan, June 8-11 (2016) (4 pages)

(2) “Handling of a Deformable Unique Cloth by a Dual Hand-Eye Robot” (in Japanese), Ryuki Funakubo, Khaing Win Phyu, Hongzhi Tian, Yejun Kou, and Mamoru Minami, The 34th Annual Conference of the Robotics Society of Japan (RSJ), Paper No. RSJ2016ACIV2-07, Yamagata University, Japan, September 7-9 (2016) (4 pages)

(3) “Handling and Visual Servoing of a Deformable Unique Cloth Using a Dual Hand-Eye Robot” (in Japanese), Ryuki Funakubo, Khaing Win Phyu, Hongzhi Tian, Yejun Kou, Mamoru Minami, and Akira Yanou, The 27th Intelligent System Symposium (FAN2017 in Okayama), pp. 122-125, Okayama University, Japan, November 7-8 (2017)


Chapter 2

Literature review

This chapter reviews background topics related to this study: configurations of the robot end-effector and camera, robot vision-based cloth operations, object recognition techniques, the effective searching method GA, and robustness against changing environments and lighting conditions.

2.1 Camera configuration

Visual servoing, also known as vision-based robot control, is a technique that controls the motion of a robot using visual feedback information obtained from a vision sensor (camera) [51]-[55]. There are two fundamental configurations of the robot end-effector (hand) and the camera [56]:

(1) Robot eye-in-hand system: Eye-in-hand [57] and [58], or end-point closed-loop control, where the camera is attached to the manipulator end-effector and observes the relative position and orientation of the target object in the workspace.

• Our work uses vision sensors (dual-eye cameras) mounted at the end-effector of a PA-10 robot manipulator, as shown in Fig. 2.1 (a). In eye-in-hand positioning, the unknown pose of the camera coordinate system with respect to the robot hand coordinate system is identified, since the camera is fixed on the end-effector of the robot. Recognizing and handling a deformable 3D target object while controlling the robot from visual feedback obtained by the camera is difficult; the aim is to realize “3D-MoS (Three Dimensional Move on Sensing).” However, when visual servoing estimates the relative pose of a moving target with an eye-in-hand camera configuration, the estimate is inevitably affected by the dynamical oscillation of the hand. To deal with this weakness, we use dual-eye cameras: the relative pose between the robot and the 3D deformable target object is estimated using the images of the left and right cameras, and the estimated pose is used as visual feedback information in controlling the PA-10 robot.

(2) Robot eye-to-hand system: Eye-to-hand [59], or end-point open-loop control, where the camera is fixed in the workspace and observes the target object and the motion of the hand, as shown in Fig. 2.1 (b).

• This configuration requires a working plane to be assigned in advance [60], so the workspace area is limited. The working plane must be placed within the field of view of the camera (or cameras). The target object is then placed on the plane and the robot is commanded to estimate its pose. For the current pose, an image must be taken and the pose recorded; if the target object is moved to another pose, an image must be taken and saved again and the robot moved accordingly. This configuration is cumbersome because these essential steps are repeated and time-consuming.


Fig. 2.1: Camera configuration: (a) robot eye-in-hand system, (b) robot eye-to-hand system. Each figure shows the coordinate systems of the robot and end-effector: the hand coordinate system (ΣH) and the world coordinate system (ΣW).

2.2 Vision-based cloth operation

This section gives an overview of previously reported robot vision-based approaches that have been applied to efficient cloth operation tasks. The tasks involve cloth recognition, pose estimation and handling, such as grasping, folding [15]∼[16], unfolding [17]∼[21] and flattening [67]∼[71].

A. Vision-based cloth operations

(1) Cloth recognition

Kita et al. [61] proposed a method of cloth handling using visual recognition combined with actions. Their method classifies the possible 3D shapes into states (State 1∼State 17). The limitation of their cloth recognition is that one state can represent only one point of the cloth. Among the 17 states, some states allow the cloth to be recognized while others lead to recognition failure. A data-driven method that achieves 100% accuracy in active recognition is proposed in [18].

(2) Cloth pose estimation

There are generally two types of objects that require different considerations when designing solutions to the recognition and pose (position and orientation) estimation problems: rigid and deformable. Pose estimation means estimating the exact pose of the object in 3D space, generally expressed in Euler angles. A deformable object, such as a string, cloth or paper, is difficult for a robot to recognize and estimate because it is soft, can change its shape easily, and comes in different shapes, sizes, designs, patterns, weights and colors. Demand for autonomous robots that can manipulate deformable objects has escalated rapidly. Progress in robotics research on deformable objects is expected to automate the pose estimation of cloth.

(3) Cloth grasping/handling

Kita et al. [62] proposed a method to recognize cloth shape using strategic observation during handling. In their approach, a robot handles typically deformable objects such as cloths, and it is difficult for the robot to recognize a continuously changing shape. The aim of the above approaches is to obtain 3D information from a stereo vision system; by using the 3D information, the cloth region can easily be separated from the background. The proposed method [62] was used to grasp the cloths based on a recognition method [63] and [64]. The restriction of their experiments [62] is that they used only one item of cloth. In contrast, [4] and [11] proposed a grasping position detection approach that detects collars in deformed polo shirts among different items of cloth, using a Kinect range camera and RGB-D data. They built a bag-of-features-based detector that combines appearance and 3D geometry features. In their proposed system, SIFT (Scale-Invariant Feature Transform) appearance descriptors and GDH (Geodesic-Depth Histogram) shape descriptors are extracted as local features on wrinkled regions of the RGB image and its corresponding depth image. The image is scanned using a sliding window with non-linear SVM (Support Vector Machine)-like sparse classifiers and a “grasp goodness”

specification to select the best grasping points. The main restriction of the above-mentioned approaches is that the non-linear SVM is not able to quantify the quality of a graspable point for a robot hand.

(4) Cloth folding

The successful folding of different types of deformable garments with an optimal trajectory by a Baxter robot has been proposed in [16]. Using a Willow Garage PR-2 robot, folding results for four different types of garments (towels, T-shirts, sweaters and slacks) have been presented in [13]. The work of [65] shows how to fold a T-shirt following a popular YouTube video titled “Japanese way of folding T-shirt [66].” In this Japanese method, the fold can be made by grasping the cloth at three points without re-grasping.

(5) Cloth unfolding

The essential phase of cloth unfolding is to detect grasping regions that can lead to an unfolding position, such as the waist of a pair of pants, the shoulders of a shirt, the corners of a towel and so on. The proposed system [17] is used to estimate the identity and configuration of cloths of different sizes, colors and shapes, such as one short-sleeve shirt, one long-sleeve shirt, one pair of pants, one skirt, one towel and two infant cloths. These tasks mainly focus on bringing a cloth from an unknown configuration into a desired one. In [18], both a data-driven method for cloth recognition from depth images using Random Decision Forests and a method for unfolding an article of clothing after estimating and grasping two key-points using Hough forests are implemented in a POMDP framework, allowing the robot to interact optimally with the garments while taking into account uncertainty in the recognition and point estimation process.

(6) Cloth flattening

The vision architecture and visually guided manipulation skills are proposed

as the main contribution of the previous paper [3]. Moreover, their system consists of a customized dual-arm robot, a stereo robot head, stereo head calibration, a stereo matcher and robot motion control. L. Sun et al. [67] presented a perception pipeline approach to track the flattening state of the cloth with a visually guided, dual-arm industrial robot system. The perception-manipulation cycle fully interprets a high-quality RGB-D image of the cloth scene based on an active stereo robot hand. Novel heuristic manipulation strategies to flatten wrinkled cloth on an operating table with autonomous robotics are presented in [68]. The proposed approach [68] requires the imaged cloth to be separated from the background (a pre-processing step).

2.3 Object recognition methods

According to a literature review, object recognition methods are mainly classified into

three categories:

• Image-based (Feature-based)

• Appearance-based (Template-based)

• Model-based

The feature-based approach [72] is based on the error between current and desired features on the image plane, and does not involve any estimation of the position and orientation of the target object. The main task of this method is to select a set of visual features: points, lines or moments of regions. This method is mostly applied to head pose estimation. In this method, the features may be set at points on faces, such as the corners of the eyes or mouth, which may be inaccurate and sometimes cannot be recognized reliably because the method cannot resist the influence of facial expression, changing light environments and occlusion.

In an appearance-based method [73], template matching must be performed over the whole image. The image is compared with the reference templates to determine which template corresponds most closely to the image. This method can therefore be time-consuming.

The model-based approach [74] searches for the target in the image using a model, which is constructed based on known information of the target object. Our proposed method belongs to this category. In the proposed 3D model-based recognition method, the problem of recognizing the target object and detecting its pose is converted into an optimization problem. GA is used as the optimization method for cloth recognition in this experiment. The reasons we choose the GA are its simplicity, repeatability and, especially, its effectiveness in recognition performance. To evaluate the recognition accuracy, the full search method is used for comparison with the pose estimated by the proposed system. In the full search method, the pose of the target cloth is searched by scanning all possible pixels in the entire searching area.

2.4 Optimal solution search method

GA is a simple and effective searching method for seeking the solution pose of the 3D model. The pose of the 3D model, consisting of three position components and the orientation represented by ε_1, ε_2, ε_3, is defined as φ^j_M = [x^j_M, y^j_M, z^j_M, ε^j_1, ε^j_2, ε^j_3]; the designed 3D pose of the model φ^j_M is explained in detail in Section 4.6. GA is an adaptive solution-search and optimization algorithm and has been used in previous studies [75] and [76]. The GA individuals represent different poses of the model φ^j_M. Each individual has the same color and shape as the target object but a different position and orientation. GA is used as the optimization method to find the best pose of the 3D model, i.e., the pose with the maximum fitness value, without excessive computation time. Fig. 2.2 shows the evolution process of the GA, in which the population evolves from one generation to the next through the GA operators ((1) selection operator, (2) crossover operator and (3) mutation operator).

An elitist model of a GA that preserves the best individual in the population at every

generation is utilized.

[Figure content: a population of p individuals of the k-th generation, each encoding a candidate pose (positions t_x, t_y, t_z and orientation parameters ε_1, ε_2, ε_3), is evaluated by its fitness F^i_k, sorted, and evolved into the (k+1)-th generation through selection, crossover and mutation.]

Fig. 2.2: Evolution process of genetic algorithm (GA)

(1) Selection operator: selects the best individuals and removes weak individuals so that new individuals can be reproduced in the next generation. The selection operator consists of selecting the individuals for reproduction (after sorting based on their fitness values), which will be used to create the new population. A selection rate is used to remove weak individuals with low-ranking fitness values (objective function values); those individuals are replaced by the selected top-ranking individuals, which undergo crossover and mutation.

(2) Crossover operator: crosses two individuals, considered as parents, to create two new ones, considered as offspring. The binary strings of two randomly selected individuals are partly interchanged at two randomly chosen crossover sites in the binary individual structure. Crossover is applied to take valuable information from both parents.

(3) Mutation operator: mutates bits, changing a bit of value “1” to “0” and a bit of value “0” to “1.” A random bit-by-bit mutation is performed on all the bits of the intermediate population obtained after crossover.
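To make the three operators concrete, the following is a minimal Python sketch of such an elitist, bit-string GA; it is illustrative only (the chromosome length, population size, selection rate, mutation rate and the fitness function are assumed placeholders, not the parameters of the proposed system).

import random

BITS_PER_GENE = 12      # assumed resolution of each pose component
GENES = 6               # x, y, z, e1, e2, e3
CHROMOSOME_LEN = BITS_PER_GENE * GENES

def decode(chromosome, bounds):
    # Map each 12-bit gene to a real value within its search range.
    pose = []
    for g, (lo, hi) in enumerate(bounds):
        bits = chromosome[g * BITS_PER_GENE:(g + 1) * BITS_PER_GENE]
        value = int("".join(map(str, bits)), 2) / (2 ** BITS_PER_GENE - 1)
        pose.append(lo + value * (hi - lo))
    return pose

def evolve(fitness, bounds, pop_size=30, generations=100,
           selection_rate=0.5, mutation_rate=0.01):
    pop = [[random.randint(0, 1) for _ in range(CHROMOSOME_LEN)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Evaluation and sorting by fitness (descending).
        pop.sort(key=lambda c: fitness(decode(c, bounds)), reverse=True)
        elite = pop[0][:]                                    # elitist model: keep the best individual
        survivors = pop[:int(pop_size * selection_rate)]     # selection
        children = [elite]
        while len(children) < pop_size:
            p1, p2 = random.sample(survivors, 2)
            # Two-point crossover between randomly selected parents.
            a, b = sorted(random.sample(range(CHROMOSOME_LEN), 2))
            child = p1[:a] + p2[a:b] + p1[b:]
            # Bit-by-bit mutation: flip "1" to "0" and "0" to "1".
            child = [1 - bit if random.random() < mutation_rate else bit
                     for bit in child]
            children.append(child)
        pop = children
    pop.sort(key=lambda c: fitness(decode(c, bounds)), reverse=True)
    return decode(pop[0], bounds)

# Toy usage: search for a pose close to a hypothetical target pose.
# bounds = [(-100, 100)] * 3 + [(-1, 1)] * 3
# best = evolve(lambda p: -sum((a - b) ** 2 for a, b in zip(p, [10, 20, 400, 0, 0, 0.5])), bounds)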

2.5 Robustness against changing environmental conditions

It is difficult for robots to recognize and estimate the pose of a deformable target object under the following kinds of environmental conditions:

• Changing light environment

• Occlusion

• Unknown environment

2.5.1 Changing light environment

The proposed system is intended to be applied in the real world under varying lighting conditions, for example in a working room that has windows. When robots equipped with cameras are used to operate day and night in both indoor and outdoor spaces, most of them are limited to working in restricted environments in terms of illumination variation. Therefore, the robustness of visual recognition against illumination variations is important, and the illumination tolerance of the vision-based system has to be confirmed. Based on this motivation, cloth recognition experiments under different lighting conditions (100 (lx), 400 (lx), 700 (lx), 1000 (lx) and 1300 (lx)) for different light sources were conducted to verify the illumination tolerance of the proposed system. The reason these illumination conditions were chosen is discussed in Section 4.8. In this study, two different light sources, fluorescent and light-emitting diode (LED), are used to provide different levels of illumination for the recognition experiments. The recognition performances under the different light conditions are analyzed.

2.5.2 Occlusion

It is difficult for robots to recognize deformable target objects such as cloth, string, etc., especially if the object is unique. Additionally, it has been difficult for robots with vision sensors (cameras) to accurately detect and handle an object under changeable environments. The object may be found in the 3D searching area partially covered by another object. The proposed system recognizes the target object and estimates the pose of that cloth for handling even though the target cloth is partially occluded.

2.5.3 Unknown environment

Finding an object in a natural environment is very challenging; in this research the target object is identified using the dual-eye cameras. Even if other objects appear in the field of view of the robot's vision sensors (cameras), our system has been confirmed to recognize the target object in an unknown environment.

2.6 System configuration

Our research group has developed a robot that can automatically classify and handle garments by visual servoing. The configuration of the system is shown in Fig. 1.1. In this configuration, three cameras are used as vision sensors. The first camera is used for generating photo-models. The other two cameras, which are fixed at the end-effector of the manipulator (PA-10 robot), are used for recognition based on the photo-model. The PA-10 robot with 7 DoF (Degrees of Freedom) is used for the recognition and pose detection operations. The aim of the robot using the proposed system is to pick up the cloth after recognizing it and to place the cloth at the desired position. As described in Section 2.5.1, the proposed system is intended to be applied in the real world under varying lighting conditions, and its illumination tolerance was verified through cloth recognition experiments under different lighting conditions (100∼1300 (lx)) with two light sources (fluorescent and LED); the choice of these illumination conditions is discussed in Section 4.8.

The developed vision-based robot system is shown in Fig. 2.3. The dual-eye cameras fixed at the end-effector of the PA-10 robot perform the cloth recognition and pose estimation process based on the photograph model. The cloth absorption pads shown in Fig. 2.3 make it possible to pick up the target object (cloth) by suction. The aim of the PA-10 robot using the proposed system is to pick up the cloth after recognizing it and to set the cloth into the desired collection box, as shown in Fig. 2.3. Figure 2.3 represents the cloth handling system, which consists of the robot with a dual-eye sensor unit for measuring the cloth's pose and a transport conveyor. There are three cameras (vision sensors) in this configuration. After a cloth is placed on the conveyor from the right-hand side of the figure, the single camera makes a photograph model. The model is then used to recognize the cloth and to measure the cloth's pose on the left-hand side, where the cloth is picked up and set inside a mailing box by the robot.

A unique cloth might be identified by a barcode attached to it, but photo-model-based whole-cloth recognition is indispensable for non-erroneous identification of a unique cloth; of course, the photo-model-based recognition may be combined with a barcode for enhanced reliability. After recognizing and estimating the pose of the target cloth, handling was performed to pick and place the cloth. The proposed photo-model-based cloth recognition system is intended to save labor cost and to achieve better performance and higher accuracy than human workers. Moreover, this system is aimed at being applied in the real world regardless of lighting condition varieties. Therefore, the robustness of the proposed system against different illuminations was verified experimentally in this study.


Fig. 2.3: A photo of the cloth handling robot system with dual-eye cameras: the PA-10 robot is equipped with two cameras (vision sensors used as the dual-eye vision system) for recognition and vacuum cups (four absorption pads driven by an air compressor, which can pick and place the target object (cloth)) for handling. In the test, the robot picks up the cloth and places it into the collection box.


Chapter 3

Orientation representation by

quaternion

There are several representations used to describe the orientation of a rigid body. Euler angles are a well-known one, consisting of a set of three angles of successive rotations around the coordinate axes z, y, z. A drawback of the Euler angles is the occurrence of representation singularities (for the manipulator, the Jacobian matrix is singular for some orientations). An alternative representation is angle/axis, which describes the general orientation of a rigid body as a rotation by an angle around an axis. Figure 3.1 shows the angle/axis model; a general angle/axis representation is not unique, since a rotation by an angle −θ around an axis −k cannot be distinguished from a rotation by θ around k. Moreover, a representational singularity happens at θ = 0, where the rotation axis cannot be determined. This does not suit our research: since we use GA to search for the best position/orientation, a problem appears at θ = 0, where the rotation axis k cannot converge because of the above singularity. Thus, the angle/axis representation cannot be used to detect the 3D pose or to apply it to visual servoing tasks in which the desired target pose is θ = 0. A unit quaternion is a different angle/axis representation; it can represent the orientation of a rigid body without singularities.

For the reader's convenience, a few basic concepts regarding the use of a unit quaternion to describe the orientation of a rigid body are summarized hereafter [78].


Fig. 3.1: Angle/axis model: the rotations (θ, k) and (−θ, −k) represent the same orientation

The unit quaternion is defined as

Q = {η, ε},   (3.1)

where

η = cos(θ/2),   (3.2)

ε = sin(θ/2) k,   (3.3)

and

• k (‖k‖ = 1): rotation axis

• θ: rotation angle around k

• η: scalar part of the quaternion

• ε: vector part of the quaternion

• cos²(θ/2) + sin²(θ/2)‖k‖² = 1

They are constrained by

η² + εᵀε = 1,   (3.4)

hence it is named the unit quaternion. It is worth remarking that, differently from the angle/axis representation, a rotation by an angle −θ about an axis −k gives the same quaternion as that associated with a rotation by θ about k, and θ = 0 gives η = 1, ε = 0, which solves the non-uniqueness and singularity problems.

The rotation matrix corresponding to a given quaternion is

R(η, ε) = εεᵀ/(1 − η²) + (2η² − 1)(I − εεᵀ/(1 − η²)) + 2ηS(ε)

        = (2η² − 1)I + (2 − 2η²) εεᵀ/(1 − η²) + 2ηS(ε)

        = (2η² − 1)I + 2εεᵀ + 2ηS(ε)

        = (η² − εᵀε)I + 2εεᵀ + 2ηS(ε),   (3.5)

using η² + εᵀε = 1 in the last step,

where S(·) is the operator performing the cross product between two (3 × 1) vectors.

Given a = [a_x, a_y, a_z]ᵀ, S(a) takes the form

S(a) = [   0    −a_z    a_y
          a_z     0    −a_x
         −a_y    a_x     0  ].   (3.6)

For arbitrary vectors a and b, we have

a × b = −(b × a)
      = −[(b_2a_3 − b_3a_2) i + (b_3a_1 − b_1a_3) j + (b_1a_2 − b_2a_1) k]
      = −S(b) a,

hence

S(a)b = a × b,   S(a)b = −S(b)a.   (3.7)

The rotation matrix in terms of the angle/axis parameters is

R(θ, k) = cosθ I + (1 − cosθ) kkᵀ + sinθ S(k)

        = [ (1 − cosθ)k_1² + cosθ         (1 − cosθ)k_1k_2 − k_3 sinθ   (1 − cosθ)k_1k_3 + k_2 sinθ
            (1 − cosθ)k_1k_2 + k_3 sinθ   (1 − cosθ)k_2² + cosθ         (1 − cosθ)k_2k_3 − k_1 sinθ
            (1 − cosθ)k_1k_3 − k_2 sinθ   (1 − cosθ)k_2k_3 + k_1 sinθ   (1 − cosθ)k_3² + cosθ       ].

Using this rotation matrix,

R_11 + R_22 + R_33 = 3cosθ + (1 − cosθ)(k_1² + k_2² + k_3²)
                   = 1 + 2cosθ
                   = 4cos²(θ/2) − 1
                   = 4η² − 1.

Note: η = cos(θ/2) and θ ranges from −π to π, so η ≥ 0 can be concluded.

On the other hand, the quaternion corresponding to a given rotation matrix R = {R_ij} (i, j = 1, 2, 3) is

η = (1/2) √(1 + R_11 + R_22 + R_33),   (3.8)

ε = 1/(4η) [ R_32 − R_23
             R_13 − R_31
             R_21 − R_12 ].   (3.9)

The relations between the time derivative of the quaternion parameters and the body angular velocity ω are established as:

η̇ = −(1/2) εᵀω,   (3.10)

ε̇ = (1/2)(ηI − S(ε))ω.   (3.11)
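As a numerical illustration of Eqs. (3.5), (3.8) and (3.9), the following Python sketch converts a unit quaternion to a rotation matrix and back; it is a toy check, not part of the proposed system.

import numpy as np

def skew(a):
    # S(a): the skew-symmetric matrix such that S(a) b = a x b, Eq. (3.6).
    return np.array([[0.0, -a[2], a[1]],
                     [a[2], 0.0, -a[0]],
                     [-a[1], a[0], 0.0]])

def quat_to_rot(eta, eps):
    # R(eta, eps) = (eta^2 - eps^T eps) I + 2 eps eps^T + 2 eta S(eps), Eq. (3.5).
    eps = np.asarray(eps, dtype=float)
    return ((eta ** 2 - eps @ eps) * np.eye(3)
            + 2.0 * np.outer(eps, eps)
            + 2.0 * eta * skew(eps))

def rot_to_quat(R):
    # Inverse mapping, Eqs. (3.8) and (3.9); valid while eta > 0 (|theta| < pi).
    eta = 0.5 * np.sqrt(1.0 + R[0, 0] + R[1, 1] + R[2, 2])
    eps = np.array([R[2, 1] - R[1, 2],
                    R[0, 2] - R[2, 0],
                    R[1, 0] - R[0, 1]]) / (4.0 * eta)
    return eta, eps

# Example: a rotation of 60 degrees about the z-axis.
theta, k = np.deg2rad(60.0), np.array([0.0, 0.0, 1.0])
eta, eps = np.cos(theta / 2.0), np.sin(theta / 2.0) * k
print(rot_to_quat(quat_to_rot(eta, eps)))   # recovers (eta, eps) up to rounding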

Deductions of Eq. (3.10) and Eq. (3.11) are given below.

Deduction of the quaternion differential equations (η̇, ε̇)

Deduction of the angle/axis differential equations (θ̇, k̇)

The angle/axis representation describes the orientation in terms of a rotation angle θ around an axis k. Then, the angular velocity vector ω is given by

ω = k θ̇,   (3.12)

where

• k = [k_1, k_2, k_3]ᵀ: (3×1) unit vector

• kᵀk = 1

• The inner (dot) products satisfy

  kᵀk̇ = k_1k̇_1 + k_2k̇_2 + k_3k̇_3 = k̇ᵀk

• k̇: velocity (time derivative) of k

Note: k and k̇ are orthogonal vectors.

By using kᵀk = 1, Eq. (3.12) can be rewritten as

θ̇ = kᵀω.   (3.13)

In Fig. 3.2, the vector r rotates around the axis k by an angle θ to give a new vector r′, so

r′ = R(θ, k) r.   (3.14)

From Fig. 3.2,

r′ = ON + NV + VQ
   = ON + NP cosθ − QV            (where OP = ON + NP)
   = ON + (OP − ON) cosθ − QV
   = k(kᵀr) + [r − k(kᵀr)] cosθ − (r × k) sinθ
   = (kkᵀ)r + [r − (kkᵀ)r] cosθ − sinθ(−S(k)r)
   = ( kkᵀ + cosθ(I − kkᵀ) + sinθ S(k) ) r.   (3.15)

From Eq. (3.14) and Eq. (3.15), the rotation matrix corresponding to an angle/axis representation is

R(θ, k) = kkᵀ + cosθ(I − kkᵀ) + sinθ S(k).   (3.16)
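A short numerical check of Eq. (3.16) in Python is shown below; the axis and angle are arbitrary example values, and the code only illustrates the formula.

import numpy as np

def skew(a):
    return np.array([[0.0, -a[2], a[1]],
                     [a[2], 0.0, -a[0]],
                     [-a[1], a[0], 0.0]])

def angle_axis_to_rot(theta, k):
    # R(theta, k) = k k^T + cos(theta)(I - k k^T) + sin(theta) S(k), Eq. (3.16).
    k = np.asarray(k, dtype=float)
    k = k / np.linalg.norm(k)                 # enforce ||k|| = 1
    return (np.outer(k, k)
            + np.cos(theta) * (np.eye(3) - np.outer(k, k))
            + np.sin(theta) * skew(k))

R = angle_axis_to_rot(np.deg2rad(30.0), [1.0, 1.0, 0.0])
print(np.allclose(R @ R.T, np.eye(3)), np.isclose(np.linalg.det(R), 1.0))   # orthonormal, det = 1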

Here, we define

R_a = kkᵀ,   (3.17)

R_b = cosθ (I − kkᵀ),   (3.18)

R_c = sinθ S(k).   (3.19)

Substituting Eqs. (3.17), (3.18) and (3.19) into Eq. (3.16), Eq. (3.16) and its time derivative can be written as

R = R_a + R_b + R_c,   (3.20)

and

Ṙ = Ṙ_a + Ṙ_b + Ṙ_c.   (3.21)

Multiplying both sides of Eq. (3.21) by k from the right, we have

Fig. 3.2: Rotation matrix of the angle/axis representation: (a) the vector r rotates around the axis k by the angle θ to give r′; (b) the plane QNP of (a)

Ṙk = Ṙ_a k + Ṙ_b k + Ṙ_c k,   (3.22)

where

Ṙ_a k = d/dt(kkᵀ) k
      = (k̇kᵀ + kk̇ᵀ) k
      = k̇ kᵀk + k k̇ᵀk
      = k̇ + kkᵀk̇
      = (I + kkᵀ) k̇,   (3.23)

Ṙ_b k = d/dt( cosθ (I − kkᵀ) ) k
      = ( −sinθ θ̇ (I − kkᵀ) + cosθ(−k̇kᵀ − kk̇ᵀ) ) k
      = −sinθ θ̇ (I − kkᵀ) k − cosθ (k̇kᵀ + kk̇ᵀ) k
      = −sinθ θ̇ (k − k) − cosθ (k̇ + kkᵀk̇)
      = −cosθ (I + kkᵀ) k̇.   (3.24)

By using Ṡ(k) = S(k̇) and S(k̇)k = −S(k)k̇, we get

Ṙ_c k = d/dt( sinθ S(k) ) k
      = ( cosθ θ̇ S(k) + sinθ S(k̇) ) k
      = cosθ θ̇ S(k) k + sinθ S(k̇) k
      = cosθ θ̇ (k × k) − sinθ S(k) k̇
      = −sinθ S(k) k̇.   (3.25)

Substituting Eq. (3.23), Eq. (3.24) and Eq. (3.25) into Eq. (3.22), we get

Ṙk = Ṙ_a k + Ṙ_b k + Ṙ_c k
   = (I + kkᵀ) k̇ − cosθ (I + kkᵀ) k̇ − sinθ S(k) k̇
   = [ (1 − cosθ)(I + kkᵀ) − sinθ S(k) ] k̇.   (3.26)

On the other hand,

Ṙ = [ẋ, ẏ, ż]
  = [ω × x, ω × y, ω × z]
  = [S(ω)x, S(ω)y, S(ω)z]
  = S(ω)R.   (3.27)

Multiplying both sides of Eq. (3.27) by k from the right and substituting Eq. (3.17), Eq. (3.18) and Eq. (3.19) into Eq. (3.27), we get

Ṙk = S(ω)Rk
   = S(ω)R_a k + S(ω)R_b k + S(ω)R_c k
   = S(ω)kkᵀk + S(ω)cosθ(I − kkᵀ)k + S(ω)sinθ S(k)k
   = S(ω)k + cosθ S(ω)(k − k) + sinθ S(ω)(k × k)
   = S(ω) k
   = −S(k) ω.   (3.28)

From Eq. (3.26) and Eq. (3.28), we have

[ (1 − cosθ)(I + kkᵀ) − sinθ S(k) ] k̇ = −S(k) ω.   (3.29)

To move the matrix before k̇ to the right side of Eq. (3.29), we consider the following equation:

[ 1/(1 − cosθ) (I − kkᵀ) − 1/sinθ S(k) ] [ (1 − cosθ)(I + kkᵀ) − sinθ S(k) ]

= (I − kkᵀ)(I + kkᵀ) − sinθ/(1 − cosθ) (I − kkᵀ)S(k) − (1 − cosθ)/sinθ S(k)(I + kkᵀ) + S(k)S(k)

= (I − kkᵀ) − sinθ/(1 − cosθ) S(k) − (1 − cosθ)/sinθ S(k) + (kkᵀ − I)

= − 2/sinθ S(k),   (3.30)

where

(I − kkᵀ)(I + kkᵀ) = I − kkᵀ + kkᵀ − kkᵀkkᵀ = I − kkᵀ,   (3.31)

(I − kkᵀ)S(k) = S(k) − kkᵀS(k) = S(k) − k(Sᵀ(k)k)ᵀ = S(k) − k(−S(k)k)ᵀ = S(k),   (3.32)

S(k)(I + kkᵀ) = S(k) + S(k)kkᵀ = S(k),   (3.33)

and

S(k)S(k) = kkᵀ − kᵀk I = kkᵀ − I,   (3.34)

since S(a)S(b) = baᵀ − aᵀb I.

Multiplying both sides of Eq. (3.29) by [ 1/(1 − cosθ)(I − kkᵀ) − 1/sinθ S(k) ] from the left, and using Eq. (3.30), we get

− 2/sinθ S(k) k̇ = −[ 1/(1 − cosθ)(I − kkᵀ) − 1/sinθ S(k) ] S(k) ω.   (3.35)

Then,

S(k) k̇ = (1/2)[ cot(θ/2)(I − kkᵀ) − S(k) ] S(k) ω
        = (1/2) S(k)[ cot(θ/2)(I − kkᵀ) − S(k) ] ω,   (3.36)

that is,

k × k̇ = (1/2) k × [ cot(θ/2)(I − kkᵀ) − S(k) ] ω.   (3.37)

Since k̇ is the velocity of k, k and k̇ are orthogonal vectors. Moreover, because of the following equation,

kᵀ( (1/2)[ cot(θ/2)(I − kkᵀ) − S(k) ] ω )
= (1/2)[ cot(θ/2)(kᵀ − kᵀkkᵀ) − kᵀS(k) ] ω
= (1/2)[ 0 − (Sᵀ(k)k)ᵀ ] ω
= (1/2)[ −(−S(k)k)ᵀ ] ω
= 0,   (3.38)

k and [ cot(θ/2)(I − kkᵀ) − S(k) ] ω are also orthogonal vectors.

From Eq. (3.37), and the orthogonality of k and k̇, and of k and [ cot(θ/2)(I − kkᵀ) − S(k) ]ω, we can get

k̇ = (1/2)[ cot(θ/2)(I − kkᵀ) − S(k) ] ω,   (3.39)


Fig. 3.3: Cross products of the vectors r, a, b and c, with angles θa and θb

which can be explained by the following example:

Let vectors r, a, b, c in Fig. 3.3 satisfy

r × a = r × b = c, (3.40)

And we have

||r × a|| = ||r × b|| = ||c||, (3.41)

||r|| ||a||sinθa = ||r|| ||b||sinθb. (3.42)

If θa = θb, then from Eq. (3.42), we can get ||a|| = ||b||, that is a = b.

In the case of Eq. (3.37), the pairs k and k̇, and k and [cot(θ/2)(I − kkᵀ) − S(k)]ω, are orthogonal, that is θa = θb = 90 [deg], so we obtain the result of Eq. (3.39).

Putting together Eq. (3.13) and Eq. (3.39), the angle/axis differential equations of θ and k are

θ̇ = kᵀω,   (3.43)

k̇ = (1/2)[ cot(θ/2)(I − kkᵀ) − S(k) ] ω.   (3.44)

Deduction of the quaternion differential equations (η̇, ε̇)

The unit quaternion is defined as

η = cos(θ/2),   (3.45)

ε = sin(θ/2) k,   (3.46)

where k (‖k‖ = 1) is the rotation axis and θ is the rotation angle around k.

Differentiating Eq. (3.45) with respect to time, we have

η̇ = −(1/2) sin(θ/2) θ̇.   (3.47)

Substituting θ̇ expressed in Eq. (3.43) into Eq. (3.47), we have

η̇ = −(1/2) sin(θ/2) θ̇
  = −(1/2) sin(θ/2) kᵀω
  = −(1/2) εᵀω.   (3.48)

Differentiating Eq. (3.46) with respect to time, we have

ε̇ = (1/2) cos(θ/2) θ̇ k + sin(θ/2) k̇.   (3.49)

Substituting θ̇ and k̇ expressed in Eq. (3.43) and Eq. (3.44) into Eq. (3.49), we have

ε̇ = (1/2) cos(θ/2) θ̇ k + sin(θ/2) k̇
  = (1/2) cos(θ/2) kᵀω k + sin(θ/2) (1/2)[ cot(θ/2)(I − kkᵀ) − S(k) ] ω
  = (1/2) η kkᵀ ω + (1/2)[ cos(θ/2)(I − kkᵀ) − sin(θ/2) S(k) ] ω
  = (1/2) η kkᵀ ω + (1/2)[ η(I − kkᵀ) − S(sin(θ/2) k) ] ω
  = (1/2)[ η kkᵀ + ηI − η kkᵀ − S(ε) ] ω
  = (1/2)[ ηI − S(ε) ] ω.   (3.50)

Putting together Eq. (3.48) and Eq. (3.50), the quaternion differential equations of η and ε are

η̇ = −(1/2) εᵀω,   (3.51)

ε̇ = (1/2)[ ηI − S(ε) ] ω.   (3.52)
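The quaternion differential equations above can be propagated numerically; the following is a minimal Python sketch using simple Euler integration, where the angular velocity profile, step size and re-normalization strategy are illustrative assumptions rather than the method used in this thesis.

import numpy as np

def skew(a):
    return np.array([[0.0, -a[2], a[1]],
                     [a[2], 0.0, -a[0]],
                     [-a[1], a[0], 0.0]])

def integrate_quaternion(eta, eps, omega_of_t, dt=1e-3, steps=1000):
    # Integrate eta_dot = -(1/2) eps^T w and eps_dot = (1/2)(eta I - S(eps)) w, Eqs. (3.51)-(3.52).
    eps = np.asarray(eps, dtype=float)
    for n in range(steps):
        w = omega_of_t(n * dt)
        eta_dot = -0.5 * eps @ w
        eps_dot = 0.5 * (eta * np.eye(3) - skew(eps)) @ w
        eta, eps = eta + dt * eta_dot, eps + dt * eps_dot
        norm = np.sqrt(eta ** 2 + eps @ eps)   # keep eta^2 + eps^T eps = 1, Eq. (3.4)
        eta, eps = eta / norm, eps / norm
    return eta, eps

# Example: constant rotation about the x-axis at 0.5 rad/s for 1 s.
eta, eps = integrate_quaternion(1.0, np.zeros(3),
                                lambda t: np.array([0.5, 0.0, 0.0]))
print(eta, eps)   # approximately cos(0.25) and [sin(0.25), 0, 0]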


Chapter 4

Photo-model-based recognition

This section explains the photo-model-based cloth recognition method. The model-based matching method [83] has been utilized with the adoption of set-point-model thinking: all points of the solid 3D model in the 3D searching space are projected as a group onto the left and right camera image planes (2D images) without the corresponding-points identification problem, which has been pointed out as a difficulty in pose estimation using plural cameras. Since all points on the 3D model are projected into the 2D camera images in our method, all projections for each point are correct. This means that forward projection of the 3D object is used, which does not raise the corresponding-points identification problem. The HSV color system is described below before the proposed system is explained in detail.

4.1 HSV color system

Color information has three attributes: hue (the color difference, such as the difference between red and blue), saturation (colorfulness) and value (HSV). In this study, the saturation (S) and value (V) components are not relied upon; the value of the hue (H) is studied and used for the extraction of the target color. By using H, it is possible to reduce the apparent color change caused by changes in the lighting of the target. The merit of this color system is that the hue can be examined while reducing the influence of brightness. Furthermore, colors are easy to interpret in the HSV color system. The red, green and blue values received from the camera images are defined in the RGB color system shown in Fig. 4.1. The conversion formulas from rgb values to HSV values are as follows, where each of r, g and b ranges from 0 (minimum) to 255 (maximum).

V = max{r, g, b},   (4.1)

v = min{r, g, b}.   (4.2)

In the case of r = V,

H* = (g − b)/(V − v).   (4.3)

When g = V,

H* = 2 + (b − r)/(V − v).   (4.4)

When b = V,

H* = 4 + (r − g)/(V − v).   (4.5)

In order to get the value of H in the range of 0∼359,

H = 60 H*.   (4.6)

For example, consider calculating the hue value of cyan, Hcyan; cyan is the color between blue and green, so its RGB value is (0, 255, 255) as shown in Fig. 4.2. Treating green as the maximum component, we use Eq. (4.4):

H*_G = 2 + (b − r)/(V − v) = 2 + (255 − 0)/(255 − 0) = 2 + 1 = 3.   (4.7)

Treating blue as the maximum component, we use Eq. (4.5):

H*_B = 4 + (r − g)/(V − v) = 4 + (0 − 255)/(255 − 0) = 4 − 1 = 3.   (4.8)

Eqs. (4.7) and (4.8) give the same result, so substituting the value 3 into Eq. (4.6) gives the hue value of cyan:

Hcyan = 60 H* = 60 × 3 = 180.   (4.9)


Fig. 4.1: Red, Green and Blue (RGB) color system


Fig. 4.2: Cyan color is the color value of RGB = (0,255,255)
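The hue computation of Eqs. (4.1) to (4.6) can be written compactly as the following Python sketch, which reproduces the worked example for cyan; it is a simplified illustration (the achromatic case V = v is handled by returning 0 here, which is an assumption, not part of the equations above).

def hue(r, g, b):
    V = max(r, g, b)                         # Eq. (4.1)
    v = min(r, g, b)                         # Eq. (4.2)
    if V == v:                               # achromatic pixel: hue undefined
        return 0
    if V == r:
        h_star = (g - b) / (V - v)           # Eq. (4.3)
    elif V == g:
        h_star = 2 + (b - r) / (V - v)       # Eq. (4.4)
    else:
        h_star = 4 + (r - g) / (V - v)       # Eq. (4.5)
    return (60 * h_star) % 360               # Eq. (4.6), kept in the range 0-359

print(hue(0, 255, 255))   # cyan -> 180, as obtained in Eq. (4.9)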

4.2 Kinematics of stereo-vision

Many computer vision operations, such as feature extraction, structure from motion, object recognition, pose estimation, extraction of depth information, etc., are used to extract information from images. To control a robot using information provided by a computer vision system, it is necessary to understand the geometric modeling of the imaging process. Each camera maps information from 3D space to the 2D left and right camera images. A 3D solid model that is assumed to be in the searching area is modeled by perspective projection, as commonly used in computer vision. Subsections 4.2.1 ∼ 4.2.4 describe the kinematics of stereo-vision before the proposed system is explained in detail.

4.2.1 Projective transformation of three-dimensional (3D) model

The pose of the 3D model φ = (x, y, z, ε1, ε2, ε3) is decided by the GA parameters. By using the information on the pose of the 3D model, it can easily be understood how the 3D model would be reflected in an image. In order to find out how the 3D model appears in the image from the relationship between the camera and the pose of the 3D solid model, it is necessary to project the 3D model onto the image plane through the center of projection to generate a plane model. In this section, the projective transformation is described in detail.

4.2.2 Projective transformation matrix

In this section, the derivation of the projective transformation matrix P is explained in detail. Figure 4.3 shows the projection of a point from 3D space to a 2D image. In Fig. 4.3, the focal distance is denoted by f, and the coordinates of the central position of the image are defined as (^I x_0, ^I y_0). The scale factors between distances [mm] along the x- and y-axes in the camera coordinate system ΣC and distances [pixel] along the x- and y-axes in the image coordinate system ΣI are defined as ηx, ηy [mm/pixel]. The distance between the origins of these two coordinate systems is defined as e. A point (^C x_i, ^C y_i, ^C z_i) in ΣC passes through the lens and creates an image point (^I x_i, ^I y_i) in ΣI.


Fig. 4.3: Projection Matrix.

In Fig. 4.3, a point is situated in front of the camera lens ΣC at the position (^C x_i, ^C y_i, ^C z_i). The corresponding point appears at the position (^I x_i, ^I y_i) in the image coordinate plane according to the projection matrix. The projection matrix can be derived as follows. From the similar triangles in Fig. 4.3, we obtain the ratios

a′b′ : ab = b′o : bo,   (4.10)

X : Y = A : B,   (4.11)

X/Y = A/B.   (4.12)

By using these equations, the position of the object can be translated from 3D to 2D by the projection:

(^I y_i − ^I y_0) ηy / ^C y_i = e / ^C z_i,   (4.13)

(^I x_i − ^I x_0) ηx / ^C x_i = e / ^C z_i.   (4.14)

As shown in Fig. 4.3, provided that the thickness of the lens is not considered, the relation between points in space and their image coordinates is described by Eqs. (4.13) and (4.14). Summarizing Eq. (4.13) and Eq. (4.14),

[ ^I x_i ; ^I y_i ] = (1/^C z_i) [ e/ηx   0     ^I x_0   0
                                    0     e/ηy  ^I y_0   0 ] [ ^C x_i ; ^C y_i ; ^C z_i ; 1 ].   (4.15)

Since the distance between the target and the lens is much longer than the focal distance, the distance e can be approximated by the focal length f, and Eq. (4.15) becomes:

[ ^I x_i ; ^I y_i ] = (1/^C z_i) [ f/ηx   0     ^I x_0   0
                                    0     f/ηy  ^I y_0   0 ] [ ^C x_i ; ^C y_i ; ^C z_i ; 1 ].   (4.16)

In Eq. (4.16), the projective transformation matrix P is

P = (1/^C z_i) [ f/ηx   0     ^I x_0   0
                  0     f/ηy  ^I y_0   0 ],   (4.17)

where

• (^I x_i, ^I y_i): coordinates of the projected point in the image

• (^I x_0, ^I y_0): coordinates of the central position of the image

• e: distance between the origins of the two coordinate systems

• f: focal length, which approximates the distance e

• ηx, ηy: [mm/pixel] scale factors along the x- and y-axes

• ^C z_i: distance of the i-th point from the camera along the sight direction

However, ηx and ηy need to be determined before the experiment, because these values are calculated from the known size of the target and the size of the target in the image [pixel]. Depending on the experimental environment, the spatial resolution in terms of pixels for the X and Y positions is explained in detail in Section 4.7.
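As an illustration of Eqs. (4.16) and (4.17), the following Python sketch projects a camera-frame point to pixel coordinates; the focal length, pixel pitch and image center below are assumed example values, not the calibration of the actual sensor unit.

import numpy as np

f = 8.0                      # focal length [mm] (assumed)
eta_x = eta_y = 0.01         # pixel pitch [mm/pixel] (assumed)
ix0, iy0 = 320.0, 240.0      # image center [pixel] (assumed)

def project(c_point):
    # Apply P of Eq. (4.17) to a homogeneous camera-frame point [Cx, Cy, Cz, 1].
    cx, cy, cz, _ = c_point
    P = (1.0 / cz) * np.array([[f / eta_x, 0.0, ix0, 0.0],
                               [0.0, f / eta_y, iy0, 0.0]])
    return P @ np.asarray(c_point, dtype=float)

# Example point in the camera frame (any consistent length unit).
print(project([0.05, -0.02, 0.5, 1.0]))   # [Ix, Iy] in pixels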

4.2.3 Projective transformation to left and right camera

As shown in Fig. 4.4, two cameras are mounted on the PA-10 robot, and the reference is taken to be the hand coordinate system ΣH. The world coordinate system ΣW, the right camera coordinate system ΣCR, the left camera coordinate system ΣCL and the image coordinate system ΣI are also assumed. Firstly, the relationship between the hand coordinate system ΣH and the model coordinate system ΣM is obtained by using the homogeneous transformation matrix ^H T_M from ΣH to ΣM. An arbitrary i-th point on the target object defined on the solid model is expressed in ΣH as follows:

^H r^j_i = ^H T_M ^M r^j_i.   (4.18)

The relationships between the right camera coordinate system ΣCR, the left camera coordinate system ΣCL and the PA-10 hand coordinate system ΣH shown in Fig. 4.4 are represented by the vectors ^{CR}r^j_i and ^{CL}r^j_i as in Eq. (4.19) and Eq. (4.20).


Fig. 4.4: Perspective projection of dual-eye vision-system: In the searching area, a 3D

solid model is represented by the picture of cloth with black point (j-th photo-model).

The coordinate systems of photo-model, camera and image are represented by ΣMj, ΣCL,

ΣCR, ΣIL and ΣIR respectively. A 3D solid model that is assumed to be in the searching

area is projected from 3D space to 2D left and right camera images.

^{CR}r^j_i = ^{CR}T_H ^H r^j_i,   (4.19)

^{CL}r^j_i = ^{CL}T_H ^H r^j_i.   (4.20)

An arbitrary i-th point on the target object defined on the j-th 3D model is expressed in ΣCR (right camera) and ΣCL (left camera) as in Eq. (4.21) and Eq. (4.22):

^{CR}r^j_i = ^{CR}T_M ^M r^j_i,   (4.21)

^{CL}r^j_i = ^{CL}T_M ^M r^j_i.   (4.22)

From Eq. (4.21) and Eq. (4.19),

^{CR}T_M ^M r^j_i = ^{CR}T_H ^H r^j_i,   (4.23)

where

• ^{CR}r^j_i: an arbitrary i-th point on the target object defined on the j-th model, expressed in ΣCR (right camera)

• ^{CR}T_M: homogeneous transformation matrix from ΣCR to ΣM

• ^M r^j_i: the i-th searched point on the j-th model, expressed in ΣM

• ^{CR}T_H: homogeneous transformation matrix from ΣCR to ΣH

• ^H r^j_i: the i-th searched point on the j-th model, expressed in ΣH

According to the inverse homogeneous transformation matrix, writing T = [ R  r ; 0 0 0 1 ] with R = [x  y  z] (x, y and z being the column vectors of R), and using the row-wise notation [ · ; · ] for matrices,

T T⁻¹ = [ R  r ; 0 0 0 1 ] [ Rᵀ  (−rᵀx, −rᵀy, −rᵀz)ᵀ ; 0 0 0 1 ]

      = [ RRᵀ   R(−rᵀx, −rᵀy, −rᵀz)ᵀ + r ; 0 0 0 1 ]

      = [ I₃   −RRᵀr + r ; 0 0 0 1 ]

      = [ I₃   −r + r ; 0 0 0 1 ]

      = [ I₃   0 ; 0 0 0 1 ]

      = I₄,   (4.24)

so this T⁻¹ is indeed the inverse of T.

The inverse homogeneous transformation matrix can also be written explicitly as

T⁻¹ = [ xᵀ   −rᵀx
        yᵀ   −rᵀy
        zᵀ   −rᵀz
        0 0 0  1  ]

    = [ Rᵀ   −(xᵀr, yᵀr, zᵀr)ᵀ
        0 0 0  1 ]

    = [ Rᵀ   −Rᵀr
        0 0 0  1 ].   (4.25)

(^H T_{CR})⁻¹ = ^{CR}T_H = [ (^H R_{CR})ᵀ   −(^H R_{CR})ᵀ ^H r_{CR}
                              0  0  0         1                    ].   (4.26)
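The inverse relations of Eqs. (4.24) to (4.26) can be checked numerically; the following Python sketch builds a homogeneous transformation from assumed rotation angles and an assumed offset and verifies that the closed-form inverse satisfies T T⁻¹ = I₄.

import numpy as np

def rot_y(t):
    return np.array([[np.cos(t), 0.0, np.sin(t)],
                     [0.0, 1.0, 0.0],
                     [-np.sin(t), 0.0, np.cos(t)]])

def rot_x(p):
    return np.array([[1.0, 0.0, 0.0],
                     [0.0, np.cos(p), -np.sin(p)],
                     [0.0, np.sin(p), np.cos(p)]])

def homogeneous(R, r):
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, r
    return T

def inverse_homogeneous(T):
    # T^{-1} = [[R^T, -R^T r], [0, 1]], Eq. (4.25).
    R, r = T[:3, :3], T[:3, 3]
    return homogeneous(R.T, -R.T @ r)

# Assumed example: rotation Rot(y, theta) Rot(x, phi) and a small camera offset.
T = homogeneous(rot_y(0.3) @ rot_x(-0.1), np.array([0.05, 0.0, 0.12]))
print(np.allclose(T @ inverse_homogeneous(T), np.eye(4)))   # True, cf. Eq. (4.24)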

Substituting (^H T_{CR})⁻¹ for ^{CR}T_H in Eq. (4.19), and ^H T_M ^M r^j_i for ^H r^j_i in Eq. (4.19), Eq. (4.27) is obtained:

^{CR}r^j_i = (^H T_{CR})⁻¹ ^H T_M ^M r^j_i.   (4.27)

Eq. (4.26) represents the inverse of the 4×4 homogeneous transformation matrix from ΣH to ΣCR. (^H T_{CR})⁻¹ contains the orientation as the 3×3 rotation matrix ^H R_{CR}, the position as the 3×1 column vector ^H r_{CR}, and the scalar 1.

^H R_{CR} = Rot(y, θ_{CR}) Rot(x, φ_{CR}).   (4.28)

In this system, the rotation about the y-axis is through θ_{CR} and the rotation about the x-axis is through φ_{CR}.

Rot(y, θ_{CR}) = [  cos θ_{CR}   0   sin θ_{CR}
                    0            1   0
                   −sin θ_{CR}   0   cos θ_{CR} ],   (4.29)

Rot(x, φ_{CR}) = [ 1   0             0
                   0   cos φ_{CR}   −sin φ_{CR}
                   0   sin φ_{CR}    cos φ_{CR} ],   (4.30)

^H r_{CR} = [ ^H x_{CR} ; ^H y_{CR} ; ^H z_{CR} ],   (4.31)

(^H T_{CR})⁻¹ = [  cos θ_{CR}   sin θ_{CR} sin φ_{CR}   sin θ_{CR} cos φ_{CR}   ^H x_{CR}
                   0            cos φ_{CR}              −sin φ_{CR}              ^H y_{CR}
                  −sin θ_{CR}   cos θ_{CR} sin φ_{CR}   cos θ_{CR} cos φ_{CR}   ^H z_{CR}
                   0            0                        0                        1        ]⁻¹,   (4.32)

^H T_M = [ ^H R_M(ε_1, ε_2, ε_3)   ^H x_M
                                   ^H y_M
                                   ^H z_M
           0   0   0               1      ].   (4.33)

^M r^j_i = [ ^M x^j_i ; ^M y^j_i ; ^M z^j_i ; 1 ].   (4.34)

Substituting Eqs. (4.32), (4.33) and (4.34) into Eq. (4.27), we get ^{CR}r^j_i in Eq. (4.35):

^{CR}r^j_i = [  cos θ_{CR}   sin θ_{CR} sin φ_{CR}   sin θ_{CR} cos φ_{CR}   ^H x_{CR}
                0            cos φ_{CR}              −sin φ_{CR}              ^H y_{CR}
               −sin θ_{CR}   cos θ_{CR} sin φ_{CR}   cos θ_{CR} cos φ_{CR}   ^H z_{CR}
                0            0                        0                        1        ]⁻¹

             [ ^H R_M(ε_1, ε_2, ε_3)   ^H x_M
                                       ^H y_M
                                       ^H z_M
               0   0   0               1      ]
             [ ^M x^j_i ; ^M y^j_i ; ^M z^j_i ; 1 ].   (4.35)

Similarly, ^{CL}r^j_i, an arbitrary i-th point on the target object defined on the model in ΣCL (left camera), is obtained by the same procedure. Then, ^{CL}r^j_i becomes as in Eq. (4.36):

^{CL}r^j_i = ^{CL}T_M ^M r^j_i,   (4.36)

^W r^j_i = ^W T_{CR} ^{CR}r^j_i.   (4.37)

Using Fig. 4.4, the relationships between the world coordinate system ΣW, the hand coordinate system ΣH, the right and left camera coordinate systems ΣCR, ΣCL and the image coordinate system ΣI can be understood.

^W r^j_i = [ ^W x^j_i ; ^W y^j_i ; ^W z^j_i ],   (4.38)

^W T_{CR} = [ ^W R_{CR}   ^W x_{CR}
                          ^W y_{CR}
                          ^W z_{CR}
              0  0  0     1         ].   (4.39)

^W T_{CR} is the homogeneous transformation matrix from the world coordinate system ΣW to the right camera coordinate system ΣCR. For the right camera, the rotation from ΣH to ΣCR is about the y-axis through θ_{CR} and about the x-axis through φ_{CR}; for the left camera, the rotation from ΣH to ΣCL is about the y-axis through θ_{CL} and about the x-axis through φ_{CL}.

^H R_{CR} = Rot(y, θ_{CR}) Rot(x, φ_{CR}),   (4.40)

^W R_{CR} = ^W R_H ^H R_{CR}.   (4.41)

In our system, the rotation from ΣW to ΣH is about the y-axis through θ and about the x-axis through φ:

^W R_H = Rot(y, θ) Rot(x, φ),   (4.42)

^H R_{CL} = Rot(y, θ_{CL}) Rot(x, φ_{CL}),   (4.43)

^W T_{CL} = [ ^W R_{CL}   ^W x_{CL}
                          ^W y_{CL}
                          ^W z_{CL}
              0  0  0     1         ],   (4.44)

^W R_{CL} = ^W R_H ^H R_{CL},   (4.45)

^W r^j_i = ^W T_{CL} ^{CL}r^j_i.   (4.46)

4.2.4 Perspective projection

Figure 4.4 shows a perspective projection of the dual-eye vision system. The coordinate systems of the dual-eye cameras and the target object (cloth) in Fig. 4.4 consist of the world coordinate system ΣW, the j-th model coordinate system ΣMj, the hand coordinate system ΣH, the camera coordinate systems ΣCL and ΣCR, and the image coordinate systems ΣIL and ΣIR. In Fig. 4.4, the position vectors of an arbitrary i-th point of the j-th 3D model ΣMj based on each coordinate system are as follows:

• ^W r^j_i: position of an arbitrary i-th point on the j-th 3D model based on ΣW

• ^M r^j_i: position of an arbitrary i-th point on the j-th 3D model in ΣMj, where ^M r^j_i is a constant vector

• ^{CR}r^j_i and ^{CL}r^j_i: positions of an arbitrary i-th point on the j-th 3D model based on ΣCR and ΣCL

• ^{IR}r^j_i and ^{IL}r^j_i: projected positions on ΣIR and ΣIL of an arbitrary i-th point on the j-th 3D model

The homogeneous transformation matrix from the right camera coordinate system $\Sigma_{CR}$ to the target object coordinate system $\Sigma_{M}$ is defined as ${}^{CR}T_{M}(\phi^{j}_{M}, q)$, where $\phi^{j}_{M}$ is the j-th model's pose and $q$ is the robot's joint angle vector. Then, ${}^{CR}r^{j}_{i}$ can be calculated by Eq. (4.47),
\[
{}^{CR}r^{j}_{i} = {}^{CR}T_{M}(\phi^{j}_{M}, q)\,{}^{M}r^{j}_{i} \qquad (4.47)
\]

where ${}^{M}r^{j}_{i}$ is predetermined as a fixed vector since $\Sigma_{Mj}$ is fixed on the j-th model. ${}^{CL}r^{j}_{i}$, which represents the same i-th point on the j-th model based on $\Sigma_{CL}$, is calculated in the same way by using ${}^{CL}T_{M}(\phi^{j}_{M}, q)$. Since $q$ can be measured by the robot's joint sensors, it can be regarded as known, and $q$ is omitted hereafter. Equation (4.48) represents the projective transformation matrix $P^{k}$.

\[
P^{k}=\frac{1}{{}^{k}z_{i}}
\begin{bmatrix}
\dfrac{f}{\eta_{x}} & 0 & I_{x0} & 0\\
0 & \dfrac{f}{\eta_{y}} & I_{y0} & 0
\end{bmatrix}
\qquad (4.48)
\]

where $k = CL, CR$; ${}^{k}z_{i}$ is the position of the i-th point in the camera sight direction in $\Sigma_{CR}$ and $\Sigma_{CL}$; $f$ is the focal length; and $\eta_{x}$, $\eta_{y}$ are the [mm/pixel] ratios in the x- and y-axes. In Eq. (4.49),


IRrji is the position vector of the i-th point on the j-th model in the right camera image

coordinates.

\[
{}^{IR}r^{j}_{i}=
\begin{bmatrix}
{}^{IR}x^{j}_{i}\\
{}^{IR}y^{j}_{i}
\end{bmatrix}
\qquad (4.49)
\]

The position vector of the i-th point on the j-th model in the right camera image coordinates, ${}^{IR}r^{j}_{i}$, can be described by using $P^{k}$ as
\[
{}^{IR}r^{j}_{i} = P^{k}\,{}^{CR}r^{j}_{i} = P^{k}\,{}^{CR}T_{M}(\phi^{j}_{M})\,{}^{M}r^{j}_{i} \qquad (4.50)
\]

\[
\begin{bmatrix}
{}^{IR}x^{j}_{i}\\
{}^{IR}y^{j}_{i}
\end{bmatrix}
=\frac{1}{{}^{k}z_{i}}
\begin{bmatrix}
\dfrac{f}{\eta_{x}} & 0 & I_{x0} & 0\\
0 & \dfrac{f}{\eta_{y}} & I_{y0} & 0
\end{bmatrix}
\begin{bmatrix}
\cos\theta_{CR} & \sin\theta_{CR}\sin\phi_{CR} & \sin\theta_{CR}\cos\phi_{CR} & {}^{H}x_{CR}\\
0 & \cos\phi_{CR} & -\sin\phi_{CR} & {}^{H}y_{CR}\\
-\sin\theta_{CR} & \cos\theta_{CR}\sin\phi_{CR} & \cos\theta_{CR}\cos\phi_{CR} & {}^{H}z_{CR}\\
0 & 0 & 0 & 1
\end{bmatrix}^{-1}
\begin{bmatrix}
 &  &  & {}^{H}x_{M}\\
 & {}^{H}R_{M}(\varepsilon_{1},\varepsilon_{2},\varepsilon_{3}) &  & {}^{H}y_{M}\\
 &  &  & {}^{H}z_{M}\\
0 & 0 & 0 & 1
\end{bmatrix}
\begin{bmatrix}
{}^{M}x^{j}_{i}\\
{}^{M}y^{j}_{i}\\
{}^{M}z^{j}_{i}\\
1
\end{bmatrix}
\qquad (4.51)
\]

Then, ${}^{IR}r^{j}_{i}$ can be described as
\[
{}^{IR}r^{j}_{i}(\phi^{j}_{M}) = f_{R}(\phi^{j}_{M}, {}^{M}r^{j}_{i}), \qquad
{}^{IL}r^{j}_{i}(\phi^{j}_{M}) = f_{L}(\phi^{j}_{M}, {}^{M}r^{j}_{i})
\qquad (4.52)
\]
where ${}^{IL}r^{j}_{i}$ is described in the same manner as ${}^{IR}r^{j}_{i}$. The measurement of the pose $\phi^{j}_{M} = [x_{M}, y_{M}, z_{M}, \theta_{M}]^{T}$ will be explained in the following sections.
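A hedged sketch of the projection of Eqs. (4.48)-(4.50) is given below. The focal length, pixel pitches and image-center values are illustrative assumptions, not the calibration of the cameras used in the experiments.

```python
import numpy as np

def project(point_cam, f=0.006, eta_x=5.6e-6, eta_y=5.6e-6, Ix0=320.0, Iy0=240.0):
    """Project a 3D point in camera coordinates (x, y, z) to image coordinates.

    Follows the structure of Eq. (4.48): the 2x4 projection matrix scaled by
    1/z is applied to the homogeneous camera-frame point, which reduces to
    (f/eta)(x/z) + Ix0 for the image x coordinate (and similarly for y).
    """
    x, y, z = point_cam
    P = (1.0 / z) * np.array([[f / eta_x, 0.0,        Ix0, 0.0],
                              [0.0,        f / eta_y, Iy0, 0.0]])
    return P @ np.array([x, y, z, 1.0])

# Image position of one model point expressed in the camera frame [m]:
print(project(np.array([0.05, -0.02, 0.42])))
```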

4.3 Cloth Model generation

Apart from conventional model-based matching methods in which models are predefined

in a computer system using known information of the object, a model of deformable cloths


is generated directly from a photo-image of a single camera. There are two main portions

in the proposed robot handling system. The first portion is for cloth model generation and

the latter is for relative pose estimation of actual cloths using the generated model through the model-based matching method. This subsection describes the first portion.

The hue value in HSV color representation is used to describe cloths’ photo-model

that comprises positions of plural pixel dots and the hue value of each dot. By using

the hue value, it is possible to make the color recognition tolerant of lighting-condition varieties, a property afforded by the HSV representation. The HSV color system is described in detail in Section 4.1.

In the proposed system, three cameras are used as the vision sensors. Among them,

the first camera is used for generating a model, which is depicted at the right-hand-side in

Fig. 1.1. The model generation process is represented in Fig. 4.5. Firstly, a background

image is captured by the first camera as shown in Fig. 4.5 (a) and the averaged hue

value of the background image is calculated. Then, the cloth comes in on the conveyor

with the green background as shown in Fig. 4.5 (b). In Fig. 4.5 (c), the hue value of

each predefined pixel point in the image is compared with the averaged hue value of the

background image, then the area of cloth is detected. The set made by dots on the cloth

is named Sin. The dots defined in an enveloping strip around Sin constitute the set Sout. The combined set S of Sin and Sout, made of the dots' positions and hue values, represents the photo-model that is used for recognizing the single/unique cloth and estimating the cloth's pose. The procedure is explained in the next subsection.
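The model-generation step of Fig. 4.5 can be sketched as follows. This is a simplified illustration, assuming OpenCV is available; the strip width and the hue tolerance of 30 (borrowed from Eq. (4.54)) are assumptions, and circular hue wrap-around is ignored for brevity.

```python
import cv2
import numpy as np

def generate_photo_model(background_bgr, cloth_bgr, hue_tol=30, strip_px=10):
    """Build a photo-model (S_in, S_out) from a background image and a cloth image."""
    hue_bg = cv2.cvtColor(background_bgr, cv2.COLOR_BGR2HSV)[:, :, 0].astype(np.int16)
    hue_img = cv2.cvtColor(cloth_bgr, cv2.COLOR_BGR2HSV)[:, :, 0].astype(np.int16)
    avg_bg = int(hue_bg.mean())                       # averaged background hue H_B

    inside = np.abs(hue_img - avg_bg) > hue_tol       # cloth area S_in (boolean mask)

    # S_out: an enveloping strip around S_in, obtained by dilating S_in
    kernel = np.ones((2 * strip_px + 1, 2 * strip_px + 1), np.uint8)
    dilated = cv2.dilate(inside.astype(np.uint8), kernel).astype(bool)
    outside = dilated & ~inside

    # The photo-model stores dot positions and, for S_in, their hue values
    s_in = [(y, x, int(hue_img[y, x])) for y, x in zip(*np.where(inside))]
    s_out = [(y, x) for y, x in zip(*np.where(outside))]
    return avg_bg, s_in, s_out
```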



Fig. 4.5: Model generation process: (a) get background image and calculate the hue value,

(b) get image including cloth and calculate the hue value of each point, (c) distinguish

background from difference between the hue value, (d) determine the frame of model.

Generated surface space of model and outside space of model are denoted as Sin and Sout.

4.4 3D Model-based matching

The 3D pose of the 3D model, comprising three position components and one orientation angle, is defined as $\phi^{j}_{M} = [x_{M}, y_{M}, z_{M}, \theta_{M}]^{T}$. The angle $\theta$ is measured around the normal direction of the clothing bench, that is, around the z-axis of $\Sigma_{Mj}$, as shown in Fig. 4.4. The generated model is projected from the 3D searching space onto the left and right 2D image planes, as shown in Fig. 4.6. The upper part of Fig. 4.6


shows the appearance of a generated 3D solid model in the 3D searching space, and the left and right 2D searching models (sub-figures on the left and right bottom of Fig. 4.6) are projected onto the 2D image planes. Although the searching model set in the 3D searching space has a flat shape, it can take a variety of poses in the space, so we call it a 3D model. $S(\phi^{j}_{M})$ is made of $S_{in}(\phi^{j}_{M})$ (inner dotted points) and the outside strip $S_{out}(\phi^{j}_{M})$ enveloping $S_{in}(\phi^{j}_{M})$, denoted by the outer dotted line. The sub-figures on the left/right bottom of Fig. 4.6 show the left/right 2D searching models $S_{L}(\phi^{j}_{M})$ and $S_{R}(\phi^{j}_{M})$, where the two models are projected by using $\phi^{j}_{M}$, which represents the pose of the j-th model as one of the genes in the evolution process of the Genetic Algorithm.

A fitness function (explained in the next section) is defined to evaluate the correlation between the projected models (how different models with different poses are generated in GA is explained in Section 2.4 and Section 4.6) and the images from the dual-eye cameras.


Fig. 4.6: A 3D solid model in the 3D searching space (sub-figure on the top of Fig. 4.6) and left and right 2D searching models represented as $S_{L}(\phi^{j}_{M})$ and $S_{R}(\phi^{j}_{M})$ (sub-figures on the left/right bottom of Fig. 4.6).


The pose of the best model, i.e., the one with the highest fitness value that coincides with the captured images from the left and right cameras, is selected as the estimated pose. Please refer to previous works [74]-[82] for a detailed explanation of 3D model-based matching.

4.5 Definition of the fitness function

The fitness function is constructed to estimate the matching degree between the projected model defined by its pose $\phi^{j}_{M}$ and the captured images on the left and right 2D searching areas. If the projected 2D searching models ($S_{L}(\phi^{j}_{M})$ and $S_{R}(\phi^{j}_{M})$) coincide completely with the captured target object (cloth) in the left and right images, the fitness function that represents the correlation of that model with the pose $\phi^{j}_{M}$ has been designed to take its maximum value. Therefore, the fitness value distribution over all models forms a mountain shape with a peak that represents the pose of the real 3D target object. The concept of the fitness function in this study can be said to be an extension of the work in [83], in which different models including a rectangular surface-strips model were evaluated using images from a single camera.

The correlation function between the projected model and the actual cloth images input from the dual-eye cameras attached at the end-effector is used as the fitness function [83]. In the fitness distribution, the values of position and orientation that give the highest peak represent the best pose of the model, i.e., the pose that coincides with the captured cloth images from the left and right cameras as shown in Fig. 4.6. The pose $\phi^{j}_{M}$ that gives the peak can therefore be regarded as representing the true pose of the target cloth placed in the 3D searching space, as shown in Fig. 4.6. The correlation between the projected model with pose $\phi^{j}_{M}$ and


the captured images of the actual cloth projected on the left and right 2D searching areas is calculated by Eqs. (4.53)-(4.55). $F(\phi^{j}_{M})$ is calculated by averaging the fitness functions of the left camera image, $F_{L}(\phi^{j}_{M})$, and the right camera image, $F_{R}(\phi^{j}_{M})$, as shown in Eq. (4.53).

\[
F(\phi^{j}_{M}) = \Biggl\{
\Biggl( \sum_{{}^{IR}r^{j}_{i} \in S_{R,in}(\phi^{j}_{M})} p_{R,in}\bigl({}^{IR}r^{j}_{i}(\phi^{j}_{M})\bigr)
+ \sum_{{}^{IR}r^{j}_{i} \in S_{R,out}(\phi^{j}_{M})} p_{R,out}\bigl({}^{IR}r^{j}_{i}(\phi^{j}_{M})\bigr) \Biggr)
+ \Biggl( \sum_{{}^{IL}r^{j}_{i} \in S_{L,in}(\phi^{j}_{M})} p_{L,in}\bigl({}^{IL}r^{j}_{i}(\phi^{j}_{M})\bigr)
+ \sum_{{}^{IL}r^{j}_{i} \in S_{L,out}(\phi^{j}_{M})} p_{L,out}\bigl({}^{IL}r^{j}_{i}(\phi^{j}_{M})\bigr) \Biggr)
\Biggr\} \Big/ 2
= \bigl\{ F_{R}(\phi^{j}_{M}) + F_{L}(\phi^{j}_{M}) \bigr\} \big/ 2
\qquad (4.53)
\]

The points on a 3D matching model $S(\phi^{j}_{M})$ are projected onto the left and right image planes. The points of $S_{in}(\phi^{j}_{M})$ and $S_{out}(\phi^{j}_{M})$ projected onto the left camera image are described as ${}^{IL}r^{j}_{i} \in S_{L,in}(\phi^{j}_{M})$ and ${}^{IL}r^{j}_{i} \in S_{L,out}(\phi^{j}_{M})$, respectively. For a detailed explanation of Eq. (4.53), the following definitions should be stated here.

• $S_{L,in}$: the inside area projected onto the left image plane,

• $S_{L,out}$: the space on a strip area surrounding $S_{L,in}$,

• $H_{IL}({}^{IL}r^{j}_{i}(\phi^{j}_{M}))$: the hue value of the left camera image at the point ${}^{IL}r^{j}_{i}(\phi^{j}_{M})$,

• $H_{ML}({}^{IL}r^{j}_{i}(\phi^{j}_{M}))$: the hue value of the model at the point ${}^{IL}r^{j}_{i}(\phi^{j}_{M})$ (the i-th point on the j-th model),

• $H_{B}$: the average hue value of the background image.


Fig. 4.7: Calculation of the matched degree of each point in the model space ($S_{L,in}$ and $S_{L,out}$). (a) Evaluation position ${}^{IL}r^{j}_{i}(\phi^{j}_{M})$, that is, the i-th point of the j-th model, projected on the left image, whose pose $\phi^{j}_{M}$ is given by the evolutionary process of GA. (b) Classification of evaluation points (A)-(D) on the photo-model based on the cases in Eqs. (4.54) and (4.55): (A) represents points that satisfy the first case of Eq. (4.54), $|H_{IL}({}^{IL}r^{j}_{i}(\phi^{j}_{M})) - H_{ML}({}^{IL}r^{j}_{i}(\phi^{j}_{M}))| \le 30$, and (B) points that satisfy $|H_{B} - H_{ML}({}^{IL}r^{j}_{i}(\phi^{j}_{M}))| \le 30$, both in $S_{in}$; (C) points that satisfy $|H_{B} - H_{ML}({}^{IL}r^{j}_{i}(\phi^{j}_{M}))| \le 20$ and (D) points that satisfy the converse, both in $S_{out}$.

The following Eqs. (4.54) and (4.55) are used for calculating $p_{L,in}({}^{IL}r^{j}_{i}(\phi^{j}_{M}))$ and $p_{L,out}({}^{IL}r^{j}_{i}(\phi^{j}_{M}))$, which appear in Eq. (4.53).

\[
p_{L,in}\bigl({}^{IL}r^{j}_{i}(\phi^{j}_{M})\bigr)=
\begin{cases}
2, & \text{if } |H_{IL}({}^{IL}r^{j}_{i}(\phi^{j}_{M})) - H_{ML}({}^{IL}r^{j}_{i}(\phi^{j}_{M}))| \le 30;\\
-0.005, & \text{if } |H_{B} - H_{ML}({}^{IL}r^{j}_{i}(\phi^{j}_{M}))| \le 30;\\
0, & \text{otherwise.}
\end{cases}
\qquad (4.54)
\]

\[
p_{L,out}\bigl({}^{IL}r^{j}_{i}(\phi^{j}_{M})\bigr)=
\begin{cases}
0.1, & \text{if } |H_{B} - H_{ML}({}^{IL}r^{j}_{i}(\phi^{j}_{M}))| \le 20;\\
-0.5, & \text{otherwise.}
\end{cases}
\qquad (4.55)
\]

Equations (4.54) and (4.55) are designed to provide a peak in the fitness value distribution $F(\phi^{j}_{M})$ when $\phi^{j}_{M}$ coincides with the true pose of the target cloth, which has been


confirmed by [39]. Figure 4.7 (a) shows the j-th model, the real cloth (target object), and the evaluation points of the hue value, $\cdots, {}^{IL}r^{j}_{i-1}(\phi^{j}_{M}), {}^{IL}r^{j}_{i}(\phi^{j}_{M}), {}^{IL}r^{j}_{i+1}(\phi^{j}_{M}), \cdots$, in the inside area $S_{L,in}$ and in the outside strip $S_{L,out}$. Figure 4.7 (b) shows a situation in which the overlapping area of the real cloth and the model has increased compared with the one depicted in (a). The hue value of the left camera input image at the point ${}^{IL}r^{j}_{i}(\phi^{j}_{M})$, i.e., the i-th point of the j-th model in $S_{L,in}$ or $S_{L,out}$, is represented by $H_{IL}({}^{IL}r^{j}_{i}(\phi^{j}_{M}))$, and the hue value of the same point ${}^{IL}r^{j}_{i}(\phi^{j}_{M})$ on the model is defined as $H_{ML}({}^{IL}r^{j}_{i}(\phi^{j}_{M}))$. The average hue value of the background is defined as $H_{B}$.

In Eq. (4.54), if the hue value of a point of the captured image, $H_{IL}({}^{IL}r^{j}_{i}(\phi^{j}_{M}))$, which lies inside the surface model frame $S_{L,in}$, and the hue value of the corresponding point of the model, $H_{ML}({}^{IL}r^{j}_{i}(\phi^{j}_{M}))$, have similar values within a tolerance of 30, that is, $|H_{IL}({}^{IL}r^{j}_{i}(\phi^{j}_{M})) - H_{ML}({}^{IL}r^{j}_{i}(\phi^{j}_{M}))| \le 30$, then the model's hue value and the input image's hue value are close at the same checking point ${}^{IL}r^{j}_{i}(\phi^{j}_{M})$. This means that the photo-model overlaps the real cloth projected in the left camera image in $S_{in}$, which is represented by the dots designated (A) in Fig. 4.7 (b). In this case the fitness value is increased by the voting value of "+2." The fitness value decreases by "-0.005" for every point at which the model's point ${}^{IL}r^{j}_{i}(\phi^{j}_{M})$ lies on the background. This situation is described by the condition $|H_{B} - H_{ML}({}^{IL}r^{j}_{i}(\phi^{j}_{M}))| \le 30$ in Eq. (4.54); it means that the model's cloth area overlaps the green background, i.e., the model does not overlap the cloth precisely, which is represented by (B) in Fig. 4.7 (b). Then "-0.005" is given as a penalty to decrease $F(\phi^{j}_{M})$. Otherwise, the voting value is "0."

Similarly, in Eq. (4.55), if the hue value of a point in the left camera image lying in $S_{L,out}$ is similar to the average hue value of the background $H_{B}$ within a tolerance of 20, the fitness value is increased by "+0.1." This means that the $S_{L,out}$ strip area surrounding $S_{L,in}$ overlaps the green background, expressing that the model and the cloth overlap rather correctly, as (C) in Fig. 4.7 (b). Since this situation means that the model's position and orientation match the real cloth, the plus value "0.1" is given to the function $p_{L,out}$, as described in Eq. (4.55). Otherwise, the fitness value decreases by the penalty value of "-0.5." This represents points on $S_{L,out}$ that overlap the real cloth, as (D) in Fig. 4.7 (b). The detailed explanation of the fitness function is discussed in [74], [83]. How the designed fitness function explained above is effective and provides robustness against illumination and lighting-source varieties is described in Section 8.2.
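To make the voting scheme concrete, here is a minimal sketch of Eqs. (4.53)-(4.55) for the left camera; the right camera is handled identically and the two results are averaged. The function names and data layout are assumptions, and the hue arguments stand for the hue values observed at an evaluation point.

```python
def p_in(h_image, h_model, h_background):
    """Voting value for a point of S_L,in (Eq. (4.54))."""
    if abs(h_image - h_model) <= 30:        # model overlaps the real cloth (case A)
        return 2.0
    if abs(h_background - h_model) <= 30:   # model's cloth area lies on the background (case B)
        return -0.005
    return 0.0

def p_out(h_point, h_background):
    """Voting value for a point of S_L,out (Eq. (4.55))."""
    if abs(h_background - h_point) <= 20:   # the outer strip lies on the background (case C)
        return 0.1
    return -0.5                              # the strip overlaps the real cloth (case D)

def fitness_left(points_in, points_out, h_background):
    """F_L(phi): points_in is a list of (H_IL, H_ML) pairs, points_out a list of hues."""
    return (sum(p_in(hi, hm, h_background) for hi, hm in points_in)
            + sum(p_out(hp, h_background) for hp in points_out))

# F(phi) = (F_R(phi) + F_L(phi)) / 2, as in Eq. (4.53)
```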

4.6 Genetic Algorithm (GA)

The main problem of searching for the true pose of the object can be converted into an

optimization problem because the fitness function has been designed to give the maximum

value if and only if the pose of the model (GA’s gene represented by φjM) and the target

object coincide with the image in the 3D space [39].

The maximum value of the fitness function can be searched by a number of optimiza-

tion methods. Among them, the full search method is the simplest and easiest way. It is

envisaged that the peak in fitness distribution will be searched by scanning all possible

points. However, this method suffers from a drawback in terms of computing time. GA can find the target in a short time without the need to scan all possible points. Therefore, the Genetic Algorithm (GA) is applied to find the maximum value as the optimal solution because of its simplicity, effectiveness, and ease of implementation.

Figure 4.8 shows the GA’s evolution process in which 3D models converge into the

real target object (cloth). In Fig. 4.8, the target object is represented by a cloth shape and the models are represented by rectangular shapes with dotted lines. The models have the same shape and the same color information as the target object, since the model is made from the photograph of the target cloth, but each model has a different pose $\phi^{j}_{M}$ (j=1, 2, ..., 60), as shown in Fig. 4.8. Sixty GA individuals are used in this experiment. Each individual's chromosome consists of four variables, and each variable is coded by 12 bits, which makes it easy to obtain the optimal solution. The first three variables ($t_x$, $t_y$, $t_z$) represent the position of a model in 3D space, and the last one, $\theta$, is the angle around the z-axis of $\Sigma_{Mj}$ shown in Fig. 4.4. The genes of GA representing a possible pose solution are defined as below;

\[
\underbrace{\overbrace{01\cdots01}^{t_x}}_{12\,\mathrm{bits}}\;
\underbrace{\overbrace{00\cdots01}^{t_y}}_{12\,\mathrm{bits}}\;
\underbrace{\overbrace{11\cdots01}^{t_z}}_{12\,\mathrm{bits}}\;
\underbrace{\overbrace{01\cdots10}^{\theta}}_{12\,\mathrm{bits}}.
\]

These 60 individuals are evaluated by the fitness function value, and the fitter ones are selected to generate the next generation. In the final generation, the gene that gives the highest fitness value stands for the most reliable pose, as shown at the bottom-right of Fig. 4.8.
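As a rough illustration of this search, the sketch below decodes the 12-bit genes into a pose and runs a simple elitist GA loop. The search ranges and the crossover/mutation scheme are illustrative assumptions, not the exact operators used in the thesis; `fitness` stands for $F(\phi)$ of Eq. (4.53).

```python
import random

BITS = 12
# Assumed search ranges: tx, ty, tz in [mm], theta in [deg]
RANGES = [(-204.8, 204.8), (-204.8, 204.8), (-204.8, 204.8), (-180.0, 180.0)]

def decode(chromosome):
    """Map four 12-bit genes to (tx, ty, tz, theta)."""
    pose = []
    for k, (lo, hi) in enumerate(RANGES):
        gene = chromosome[k * BITS:(k + 1) * BITS]
        value = int("".join(map(str, gene)), 2)            # 0 .. 4095
        pose.append(lo + (hi - lo) * value / (2 ** BITS - 1))
    return pose

def evolve(fitness, generations=100, pop_size=60, p_mut=0.02):
    pop = [[random.randint(0, 1) for _ in range(4 * BITS)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda c: fitness(decode(c)), reverse=True)
        elite = pop[:pop_size // 2]                        # keep the fitter half
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, 4 * BITS)            # one-point crossover
            child = a[:cut] + b[cut:]
            children.append([1 - g if random.random() < p_mut else g for g in child])
        pop = elite + children
    return decode(max(pop, key=lambda c: fitness(decode(c))))
```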


Fig. 4.8: GA evolution process in which 3D models with different poses converge to the

real target object (cloth) through GA operation and the pose of the model with the

highest fitness function represents the estimated pose of the cloth.


4.7 Spatial resolution in terms of pixel and bit

Regarding the GA, the search widths for the positions X, Y, Z and the orientation θ correspond to 12 bits and to pixels. This information is important for knowing how fine the spatial resolution of the gene is. Depending on the experimental environment, the spatial resolution of the gene in terms of pixels and bits can be stated accurately. In this experiment, the spatial resolution in terms of pixels is 0.89 [mm/pixel] for the X position and 0.92 [mm/pixel] for the Y position. The spatial resolutions in terms of bits are 0.1 [mm/bit] for the positions X, Y, Z and 0.00022 [quaternion/bit] for the orientation θ expressed in quaternion. The spatial resolutions in terms of pixel and bit are explained in detail as follows:

(1) Millimeter per pixel [mm/pixel] measures the spatial resolution distance [mm] in terms of the pixel density (resolution) of electronic image devices, such as a computer monitor or television display, or image digitizing devices such as a camera or image scanner.

(2) Millimeter per bit [mm/bit] and quaternion per bit [quaternion/bit] measure the spatial resolution of the distance [mm] and of the orientation in quaternion in terms of a bit (binary digit). A bit is a basic unit of information used in computing and digital communications that takes either the value "0" or "1."
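As a worked example of how these figures arise, assume (this is a back-calculation, not a value stated in this section) that the position search width per axis is 409.6 [mm] and that the searched quaternion-component width is about 0.9. With 12 bits, i.e. $2^{12}=4096$ quantization steps,
\[
\frac{409.6\ \mathrm{mm}}{2^{12}} = 0.1\ \mathrm{mm/bit},
\qquad
\frac{0.9}{2^{12}} \approx 0.00022\ \mathrm{quaternion/bit},
\]
which reproduces the resolutions quoted above.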

4.8 Verification of cloth recognizable illumination range

The different types of lighting sources applied in these experiments are as follows:

• Fluorescent

• Light-emitting diode (LED)

Five common light conditions such as 100 (lx), 400 (lx), 700 (lx), 1000 (lx) and 1300

(lx) are provided using both lighting types. According to the recommendations of the Japanese Industrial Standard (JIS), 75 (lx)∼150 (lx) is commonly used in warehouses and 750 (lx)∼1500 (lx) is the recommended range for fine visual inspection work. Additionally, 700 (lx) is used for ordinary illumination. Therefore, 100 (lx), 400 (lx), 700 (lx), 1000 (lx)

and 1300 (lx) are selected in the experiment to verify the proposed system’s robustness

in all possible working areas. Figure 4.9 shows the photo of experiment for recognition

of cloth No.3 under 700 (lx) lighting condition, where all tested cloths are listed in Fig.

4.10. Illuminance is measured by a Lux sensor (MW700) as shown in Fig. 4.9 (a) and

recognition under 700 (lx) using PA-10 robot is shown in Fig. 4.9 (b) respectively.


Fig. 4.9: Photo of experiment for recognition of cloth No.3 under 700 (lx) (a) (lx) is

measured by a Lux sensor (MW700), (b) recognition under 700 (lx) using PA10 robot

The initial aim of the proposed system is to be applied in a mail-sending procedure that has been conducted by T2K Co., Ltd, in which employees classify a large number of second-hand cloths manually every day. Since the cloths are deformable and, being second-hand, each one is unique, no definition of the cloth can be predefined in a computer beforehand. Consequently, it is difficult to handle a wide variety of cloths that are irregular in shape and size. T2K explained this condition to us and requested a collaboration with our research group to solve the above-mentioned problem. Therefore, we conducted a


joint research with T2K Co., Ltd. Twelve different cloth samples were chosen in cooperation with T2K Co., Ltd, as shown in Fig. 4.10. T2K Co., Ltd considered that these 12 samples represent the general variety of cloths, which is why they were chosen for the experiment.


Fig. 4.10: Target objects (No.1 ∼ No.12) cloths: each has different colors, sizes, shapes,

patterns and weights (unit is [mm] and [g] in Table 5.1).

4.9 Sensor unit

The configuration of the sensor unit is shown in Fig. 4.11. In this configuration, three cameras (FCB-IX11A) are used as vision sensors. Table 4.1 summarizes the specifications of the FCB-IX11A cameras used in this experiment. The first camera, placed on the model making unit as shown in Fig. 4.11, is used for generating models. The other two cameras, which are fixed at the end-effector of the manipulator (PA-10 robot), are used for recognition based on the photograph model. In Fig. 4.11, the robot control unit is used for the recognition and pose detection operations. The sensor unit, named NS001,


is operated in response to an instruction from the robot PC by serial communication.

Configuration of the sensor unit NS001 is as follows:

• Image board PEX-530115

• Relay terminal block TNS-6871B

• Two cameras FCB-IX11A

• Camera mounting base

Table 4.1: Information for three cameras FCB-IX11A

Camera Category: Area Scan

Camera Series: FCB-IX

Frame Per Second [FPS]: 30

Interface: Analog

Manufacturer: Sony

Resolution: Under 1 MP

Scanning Mode: Interlace

Signal System: Color


(Figure 4.11 depicts the cloth, the model making unit and the robot control unit (Knight AH), the three FCB-IX11A cameras, an RCA composite camera cable, cable CWB-5608, relay terminal block TNS-6871B, and image board PEX-530115.)

Fig. 4.11: Sensor unit configuration


Chapter 5

Experimental Environment

The structure of the robot also has been depicted in Fig. 5.1 with the lengths of all links

being shown in the figure. Figure 5.2 shows the experimental layout and the coordinate

systems of the cloth-handling robot, i.e. the world coordinate system (ΣW ), the hand

coordinate system (ΣH) and the cloth coordinate system (ΣM) that are used in the ex-

periments respectively. ΣM, which defines the center position and neutral orientation, is set at x=0 [mm], y=0 [mm], z=685 [mm] with respect to ΣH. The offsets of the robot-base center and the work-table center in the x-, y-, and z-directions are described in Fig. 5.2. The explanation of WxM, WyM and WzM is given in the caption of Fig. 5.2.

Figure 4.10 shows the 12 different cloth samples (No.1∼ No.12) that have been chosen by the collaborating company, T2K. Each cloth used in this experiment has different colors, sizes, shapes, and weights. This paper focuses on both the recognition tolerance under light varieties and the handling accuracy. All 12 cloths are recognized and handled individually to confirm the influence of illumination on recognition and handling accuracy. The authors assumed that, in the real implementation, each cloth is packed in a plastic bag. The proposed method can use even the reflections on a plastic bag as recognizable information in the images. Although the reflection may change from time to time and may disturb the recognition, the total accuracy could be within an allowable extent for practical operations.

Since the thickness of the 12 cloths is different as shown in Table 5.1, the handling

robot needs to detect the distance from the origin of the ΣH to the surface of the vinyl

package of the cloth. The authors have chosen the dual-eye pose estimation method shown in Figs. 1.1 and 5.2, which has been used for visual servoing [85], [38] and [40], since the method has been proved to be practical and credible. Depending on the application of this proposed system, two cameras (vision sensors used as a dual-eye vision system) for recognition and vacuum cups (four absorption pads driven by an air compressor, able to absorb the target cloth) for handling are attached at the end-effector of the PA-10 robot. The distance between the two cameras is 323.4 [mm]. The position of the origin of ΣM with respect to ΣW is (WxM, WyM, WzM) = (-1050, 0, -180) [mm], as depicted in Fig. 5.2.

(Figure 5.1 indicates the joint frames Σ0-Σ7, ΣH and the dual-eye cameras, with printed link lengths including 317 [mm], 450 [mm], 480 [mm], 70 [mm] and 45 [mm].)

Fig. 5.1: Sketch map of the manipulator. (Note that the distances, l2, l4, l5 are zero [mm],

but the joints are illustrated in figure to be seen clearly.)



Fig. 5.2: Coordinate systems of robot and end-effector: (hand coordinate system (ΣH),

world coordinate system (ΣW ) and target object coordinate system (ΣM)). Note that

W xM , W yM , W zM are offsets of robot-base center and work table center in x-, y-, z-

directions. (unit is [mm] in Fig. 5.2.)


Table 5.1: Size, thickness and weight of all cloths (No.1 ∼ No.12).

No.   Size [mm]×[mm]   Thickness [mm]   Weight [g]
1     140×170          10               38
2     145×145          10               37
3     200×200          5                53
4     190×200          5                58
5     150×150          15               69
6     200×150          13               46
7     130×130          14               31
8     190×200          5                45
9     130×205          15               50
10    205×125          10               55
11    200×260          30               177
12    220×260          40               199


Chapter 6

Verification of unique cloth handling performance based on 3D recognition accuracy of cloth by dual-eye cameras with photo-model-based matching

In this experiment, 100 and 200 generations are applied to the evolutionary process. The recognition accuracy is analyzed in terms of the histogram of the pose estimation error. It has been confirmed experimentally that the proposed photo-model-based cloth recognition system can automatically recognize different deformable cloths, estimate the 3D pose of the target object, and recognize the cloth under changing and unknown environments, so that it can be applied in garment factories. Moreover, the unique cloth handling performance based on the 3D recognition accuracy has been verified experimentally. In the 100-times handling experiment, the handling error in each trial lies within ±15 [mm] in the positions (x, y, z) and 15 [◦] in the orientation θ, which is smaller than the desired 40 [mm] in positions (x, y, z) and 40 [◦] in orientation θ. The numerical values of the histogram of the pose estimation error for the cloth No.3 handling accuracy are summarized in Table 6.2. According to the experimental results, the practical experiments can approach the target handling performance. The proposed unique cloth handling system aims to save staff cost and to obtain better performance and higher accuracy than a human.

Fig. 6.1 shows the different experiments of cloth No.3. Fig. 6.1 (a), (b) and (c) show

environment without disturbance, recognition of partly hidden cloth and identification of

two pieces of cloth such as No.3 and No.4 cloth respectively. To evaluate the recognition

accuracy, the full search method is used to compare with the estimated pose by the pro-

posed system. In full search method, the pose of the target cloth is searched by scanning

all possible pixels in the entire searching area. Figs. 6.2 and 6.3 show the recognition

comparison of fitness distribution in x-y plane between the full search and the GA search

process. Fig. 6.4 shows the recognition accuracy of all cloth (No.1∼No.12) in positions

[mm]- x,y and angle [◦]- θ.


Fig. 6.1: Experiments of No.3 cloth (a) recognition without disturbance (b) recognition

of partly hidden cloth (c) identification of two cloths


6.1 Recognition without disturbance

Among the 12 different cloths shown in Fig. 4.10 and Table 5.1, the experiments on cloth No.3 are discussed in detail in this section, because of its three distinctive features: small size, colorful pattern, and light weight. As shown in Fig. 6.1 (a), cloth No.3 is placed in the vicinity of (x,y)=(0,0) [mm]. Figures 6.2 and 6.3 show the experiments on cloth No.3 with 100 and 200 GA evaluations, respectively. In each figure, sub-figures (a)(b), (c)(d) and (e)(f) correspond to the environment without disturbance, recognition of the partly hidden cloth, and identification of two pieces of cloth (No.3 and No.4), respectively. Figures 6.2 (a) and 6.3 (a) show the fitness distribution of cloth No.3 in the x-y plane as a 2D view, and Figs. 6.2 (b) and 6.3 (b) show it as a 3D view. Figures 6.2 (b) and 6.3 (b) show clearly

that the peak of the fitness function distribution can be found by the GA evaluation process after 100 and 200 evaluations. The fitness value reaches its maximum of 0.85 at the peak of the mountain, located at (9,11) [mm] in the x-y plane, for both 100 and 200 evaluations.

There was only one peak in the fitness distribution which means that the target object

(cloth No.3) and the model are matching well. The numerical values of Figs. 6.2 and 6.3

are summarized in Table 6.1.

6.2 Recognition of partly hidden cloth

In this experiment, about 45% of cloth is hidden by the blue paper as shown in Fig. 6.1

(b) and that hidden cloth No.3 is put near (x,y)=(0,0) [mm] for recognition performance.

After 100 times and 200 times evaluations, the evolution processes of GA are shown in

Figs. 6.2 (c) and (d) and 6.3 (c) and (d). The maximum fitness value 0.32 can be seen


at (x,y)=(-6,0) [mm] in the fitness distribution as the peak of the mountain shape. Figs.

6.2 (c) and (d) and 6.3 (c) and (d) confirmed that GA converged correctly with the pose

of the target object and recognized the partly hidden No.3 cloth.

6.3 Identification of two cloths

In Fig. 6.1 (c), cloth No.3 is placed in the range x<0 and cloth No.4 in the range x>0. The evolution process of GA with 100 evaluations is shown in Fig. 6.2 (e) and (f), and that with 200 evaluations in Fig. 6.3 (e) and (f). The peak of cloth No.3 appears at (x,y)=(-111,21) [mm] with a fitness value of 0.788. GA converged on the target individually in this experiment. The maximum of the highest mountain peak represents cloth No.3, confirming that cloth No.3 has been recognized among the two different cloths.



Fig. 6.2: Fitness function distribution of No.3 cloth in x-y plane (a) recognition without

disturbance as 2D view (b) recognition without disturbance as 3D view (c) recognition

of partly hidden cloth as 2D view (d) recognition of partly hidden cloth as 3D view (e)

identification of two cloths as 2D view (f) identification of two cloths as 3D view (each

black point in Fig.6.2 represents the result of GA evaluation times 100 which indicates

the pose of each model and mountain means pose searched by full search method)



Fig. 6.3: Fitness function distribution of No.3 cloth in x-y plane (a) recognition without

disturbance as 2D view (b) recognition without disturbance as 3D view (c) recognition

of partly hidden cloth as 2D view (d) recognition of partly hidden cloth as 3D view (e)

identification of two cloths as 2D view (f) identification of two cloths as 3D view (each

black point in Fig.6.3 represents the result of GA evaluation times 200 which indicates

the pose of each model and mountain means pose searched by full search method)


Table 6.1: Fitness function value under different GA evaluation times

Recognition without disturbance
GA evaluation times   Full search (X, Y) [mm]   GA search (X, Y) [mm]   Fitness (full search)   Fitness (GA search)
100                   (9, 11)                   (13.7, 13.6)            0.85                    0.73
200                   (9, 11)                   (4.5, 13.4)             0.85                    0.73

Recognition of partly hidden cloth
GA evaluation times   Full search (X, Y) [mm]   GA search (X, Y) [mm]   Fitness (full search)   Fitness (GA search)
100                   (-6, 0)                   (-5.85, 2.24)           0.32                    0.253
200                   (-6, 0)                   (-6.7, -2.0)            0.32                    0.30

Identification of two cloths
GA evaluation times   Full search (X, Y) [mm]   GA search (X, Y) [mm]   Fitness (full search)   Fitness (GA search)
100                   (-111, 21)                (-111.9, 17.3)          0.78                    0.62
200                   (-111, 21)                (-111.8, 20.6)          0.78                    0.73


6.4 1000 times recognition experiment

Figure 6.4 shows the histogram of the pose estimation error for the 1000-times recognition experiment for all cloths. Figures 6.4 (a) and (b) show that the mean error ±3σ (σ: standard deviation) is within ±10 [mm] for the positions x and y, and the orientation error of the angle θ is within ±10 [◦] as shown in Fig. 6.4 (c). From these histogram results, all the cloths (No.1∼No.12) can be recognized without any problems.
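For reference, the mean error and ±3σ bounds reported here and in Section 6.8 can be computed from the logged errors as in the following sketch (illustrative code with synthetic placeholder data, not the analysis script used in the thesis):

```python
import numpy as np

def error_band(errors):
    """Return (mean, sigma, mean - 3*sigma, mean + 3*sigma) for one error axis."""
    e = np.asarray(errors, dtype=float)
    mean, sigma = e.mean(), e.std(ddof=0)
    return mean, sigma, mean - 3.0 * sigma, mean + 3.0 * sigma

# Example with synthetic x-direction errors in [mm] (random placeholder data):
rng = np.random.default_rng(0)
print(error_band(rng.normal(-3.0, 6.5, size=100)))
```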


Fig. 6.4: Histogram of error in (a) position x [mm] (b) position y [mm] (c) angle [◦]

(No.1∼No.12 Cloths)


6.5 100 times handling experiment

The coordinate systems of the robot and the cloths used in these experiments are shown in Fig. 6.5 and Fig. 6.6. The cloth coordinate system is represented as ΣM, and ΣH is defined as the hand coordinate system of the robot end-effector. ΣM is viewed from (x=0, y=0, z=580 [mm]) and is centered in the recognition range of 510 [mm] × 390 [mm], which serves as the position reference. The size of the collection box is 220 [mm] × 220 [mm] as shown in Fig. 6.6. The 12 different cloths (No.1, No.2, ..., No.12) used in this experiment have unique colors, sizes, shapes and weights, as shown in Fig. 4.10. Each cloth is recognized individually to confirm the recognition accuracy of the proposed system.


Fig. 6.5: Coordinate system of robot and end-effector (unit is [mm] in Figure 6.5)



Fig. 6.6: Coordinate system of target object (unit is [mm] in Figure 6.6)

6.6 Handling accuracy verification experiment

The proposed system is used to develop cloth handling that can substitute for humans.

In the 1000 times recognition experiment, the target object recognition accuracy (average

error ±3σ) is within ±20 [mm] for positions x,y,z and 20 [◦] for orientation angle θ. In

the handling experiment, the handling accuracy of the proposed system is verified. The handling experimental environment and the contents of the handling experiment are described below, before the experimental results are examined.

6.7 Handling experimental environment and contents

In robotics, the device at the end of a robotic arm is called an end-effector, which is designed to interact with the environment. Depending on the application of this proposed system, two cameras (vision sensors used as a dual-eye vision system) for recognition and vacuum cups (four absorption pads driven by an air compressor, able to absorb the target object (cloth)) for handling are attached at the end-effector


of a PA-10 robot. Figure 2.3 shows the experimental environment for 3D pose recognition

and handling performance.

Figure 6.7 shows the flowchart of the handling experiment. In Fig. 6.7, the approximate pose of the object is obtained by the first recognition; the robot then moves, changing the z position automatically, so that the object comes to the camera focus for the second, highly accurate 3D recognition. Handling is performed based on this recognition result, and the result is then evaluated by the third recognition. If the third recognition result satisfies the stop condition, the system outputs the result and stops; otherwise, the system restarts, as shown in Fig. 6.7. The experiment was carried out assuming that the position of the object

(Flowchart: Start → first recognition → move to the recognized pose (position and orientation) → second recognition → handle based on the recognition result → raise the robot's hand to recover the same pose relationship between cloth and hand as at the second recognition → third recognition → if the stop condition is satisfied, output the result and end; otherwise, repeat.)

Fig. 6.7: Flowchart of handling experiment

differs every time. The state of the actual experiment is shown in Fig. 6.8 ((a)∼(f)).

First, as shown in Fig. 6.8 (a), the first recognition of cloth is performed in the initial


position (x = 0 [mm], y = 0 [mm], z = 580 [mm]). Next, based on the first recognition result, the robot hand moves to the recognized pose as shown in Fig. 6.8 (b). At this time, the distance between the camera and the object in the z direction is 420 [mm], which is the focal distance of the camera. In Fig. 6.8 (c), handling is performed based on the second recognition result. In Fig. 6.8 (d) and (e), the cloth is placed at the position where the absorption pads come to the center of the pickup tray. Finally, in order to reproduce the same condition as the first recognition, the robot hand is raised and the third recognition is performed as shown in Fig. 6.8 (f). The center of the third recognition is the center of the collection tray. In this experiment, the aim is that the average error ±3σ of the third recognition result stays within 40 [mm] for the (x, y, z) positions and 40 [◦] for the orientation θ.
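The sequence of Figs. 6.7 and 6.8 can be summarized in code form as below. This is only a structural sketch: recognize(), move_to(), pick_and_place() and raise_hand() are hypothetical stand-ins for the actual robot-control functions, and the tolerances are the stated targets of 40 [mm] and 40 [°].

```python
def handling_trial(recognize, move_to, pick_and_place, raise_hand,
                   pos_tol=40.0, ang_tol=40.0):
    """One handling trial following the flow of Fig. 6.7 (sketch only)."""
    pose1 = recognize()                  # first recognition: approximate pose
    move_to(pose1, z_camera=420.0)       # bring the cloth to the camera focal distance
    pose2 = recognize()                  # second recognition: accurate 3D pose
    pick_and_place(pose2)                # absorb the cloth and place it on the collection tray
    raise_hand()                         # recover the same camera-cloth relation as before
    error = recognize()                  # third recognition, expressed relative to the tray
                                         # center, so it directly gives the placement error
    ok = all(abs(e) <= pos_tol for e in error[:3]) and abs(error[3]) <= ang_tol
    return ok, error
```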

6.8 100 times handling experiment of cloth No.3

The third recognition result of the handling verification experiment conducted with cloth No.3 is shown in Fig. 6.9 (a-d) and Table 6.2. According to Fig. 6.9 (a-d), the results roughly follow a normal distribution, and the average error ±3σ stays below the targets of ±40 [mm] in position and 40 [◦] in orientation, so it can be said that handling is possible.



Fig. 6.8: Process of handling experiment



Fig. 6.9: Histogram of 100 times handling experiment of cloth No.3 (a) X [mm] position

error (b)Y [mm] position error (c) Z [mm] position error (d) angle θ[◦] orientation error


Table 6.2: Average error and standard deviation of cloth No.3

x [mm] y [mm] z [mm] θ[◦]

Average error (-3σ) -22.5 -14.3 -13.6 -16.0

Average error (-2σ) -16.0 -5.70 -6.15 -10.6

Average error (-1σ) -9.53 2.90 1.26 -5.14

Average error -3.02 11.5 8.67 0.279

Average error (+1σ) 3.48 20.1 16.1 5.70

Average error (+2σ) 9.99 28.7 23.5 11.1

Average error (+3σ) 16.5 37.3 30.9 16.5

Standard deviation (σ) 6.50 8.60 7.41 5.42


Chapter 7

Verification of illumination tolerance for photo-model-based cloth recognition

The illumination tolerance of the photo-model-based cloth recognition has been validated experimentally under different light conditions of different light sources. Fluorescent light and light-emitting diode (LED) light are used in this experiment, and it has been confirmed that the proposed system can recognize cloths even when the light environment varies. Based on this motivation, cloth recognition experiments

under different lighting conditions (100 (lx), 400 (lx), 700 (lx), 1000 (lx) and 1300 (lx)) for

different light sources were conducted to verify the illumination tolerance of the proposed

system. Why these illumination conditions are chosen is discussed in section 3.7. Figures

7.1 and 7.2 show the fitness function value of 12 cloths separately recognition under five

common light conditions (100 (lx), 400 (lx), 700 (lx), 1000 (lx) and 1300 (lx)) using

fluorescent light and LED light. In each figure, the vertical axis represents the highest

fitness function value that is found by GA and the horizontal axis shows the cloths number

(No.1∼ No.12).

In Fig. 7.1, the fitness values of recognition of cloth No.1 are 1.0 for 100 (lx), 0.675 for


400 (lx), 0.685 for 700 (lx), 0.625 for 1000 (lx) and 0.511 for 1300 (lx) respectively. The

fitness function value of No.3 cloth is nearly around 1.0∼1.3 for all illuminances. No.4

and No.11 cloth for the illuminance of 100 (lx) have larger fitness function value than the

other cloths under different illuminances. There is no significant difference among the fitness values for cloth No.5 under four illuminance conditions, with values of 0.738 (100 (lx)), 0.757 (400 (lx)), 0.864 (700 (lx)) and 0.789 (1300 (lx)); the highest fitness value for cloth No.5, about 0.911, occurs under 1000 (lx). Similarly, cloth No.10 has a nearly equal fitness function value of around 0.75 for all illuminances. Cloth No.7 has approximately equal values of about 0.7 at 700 (lx) and 1300 (lx), and the two fitness values of cloth No.8 are nearly equal to 1.0 at 100 (lx) and 700 (lx). For cloth No.9, one fitness value approaches the minimum value (0.3), namely 0.459 at 1300 (lx), whereas the fitness value is about 1.0 at 100 (lx). The fitness value of No.12 cloth remains nearly

1.0 for five light conditions in case of the fluorescent light environment. In Fig. 7.2, the

fitness function values for all different cloths numbers are not prominently changed under

all light conditions. But, the fitness value for No.9, No.10 and No.12 cloths under 100 (lx)

are 0.301, 0.317 and 0.315 respectively. The highest fitness value appears at cloth No.7

with 1.097 (400 (lx)).

In summary, according to the graphical results shown in Figs. 7.1 and 7.2, it can be

generally said that the overall fitness function value of recognition for all cloths under

fluorescent light is higher than that under the LED light condition. In Fig. 7.1, the brighter the light condition, the lower the fitness function value becomes, except for cloths No.2 and No.6. On the other hand, in Fig. 7.2, the brighter the light condition, the higher the fitness function value becomes, except for cloth No.11. According to the experimental results, it is

confirmed that all cloths can be recognized with the minimum fitness value of 0.3 under

different light conditions. The results of the fitness values are summarized in Table 7.1

and Table 7.2 numerically.

The cloth recognition experiment has been conducted under the five illuminances 100 (lx), 400 (lx), 700 (lx), 1000 (lx) and 1300 (lx) for the verification of the illumination tolerance. The step between adjacent illuminance levels is a constant 300 (lx), so the differences between conditions are small and fine-grained. The detailed analysis of the experimental results confirms that the system can be applied in different lighting environments, and these results are intended to reflect actual working environments.

Figures 7.3, 7.5, 7.7, 7.9, 7.11 and 7.13 show the fitness distribution in x-y plane of

cloths No.2, No.3 and No.11 with five different illuminations (100 (lx), 400 (lx), 700 (lx),

1000 (lx) and 1300 (lx)) under fluorescent light condition and the LED light condition.

The overall fitness distribution results in the x-y plane for cloths No.2, No.3 and No.11 can also be seen as graphical results in Figs. 7.1 and 7.2 and as numerical values in Tables 7.1 and 7.2, respectively. Additionally, Figures 7.4, 7.6, 7.8, 7.10, 7.12 and 7.14 show the fitness distribution for the orientation of cloths No.2, No.3 and No.11 with five different illuminations (100 (lx), 400 (lx), 700 (lx), 1000 (lx) and 1300 (lx)) under two different light sources (fluorescent and LED). The orientation of these three cloths was analyzed because of their distinct characteristics, which are as follows:

• Cloth No.2 has a black color, a small size, and some reflection.

• Cloth No.3 is colorful and small.

• Cloth No.11 has a white color without colorful patterns and a large size.

From the experimental results, it can be analyzed how the different illuminations affect the fitness distributions of cloths whose colors, textures and patterns differ. For example, there is a significant difference in the height of the fitness value

of cloth No.11 between the case of 100 (lx) and 1300 (lx) as shown in Figs. 7.11 and 7.13.

However, the positions recognized by the proposed system do not change even though the

heights of the fitness distributions are affected by illumination. These experimental results

show the robustness of the proposed system against illumination conditions. However,


there are some errors in the recognized orientation caused by illumination. For example, the highest fitness peaks of cloth No.11 under 1300 (lx) of fluorescent light, and under 100 (lx), 400 (lx), 1000 (lx) and 1300 (lx) of LED light, do not appear at 0 (degree), as shown in Figs. 7.12 and 7.14. Nevertheless, it is confirmed experimentally that the system can recognize the different cloths under different light conditions with a fitness value above 0.3, which is the threshold of the fitness value that guarantees recognition of a cloth. The reason the proposed cloth recognition system behaves this way seems to be that the recognition criterion is defined as the result of an optimization; the tolerance derived from the optimization makes the result insensitive to how high the fitness peaks are [82].
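As a concrete illustration of this recognition criterion, the following minimal sketch (Python; the function and variable names are hypothetical and not part of the actual implementation) applies the 0.3 threshold to the peak fitness value returned by the search:

# Minimal sketch of the recognition criterion described above (hypothetical names).
# A cloth is judged as "recognized" when the peak fitness found by the optimization
# reaches the experimentally chosen threshold of 0.3.
FITNESS_THRESHOLD = 0.3

def is_recognized(peak_fitness, threshold=FITNESS_THRESHOLD):
    """Return True if the peak fitness guarantees recognition of the cloth."""
    return peak_fitness >= threshold

# Example with the peak fitness values of cloths No.9, No.10 and No.12 under 100 (lx) LED light.
for cloth_no, fitness in [(9, 0.301), (10, 0.317), (12, 0.315)]:
    print(f"cloth No.{cloth_no}: fitness={fitness:.3f}, recognized={is_recognized(fitness)}")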

Fig. 7.1: Fitness function value of each cloth (No.1 ∼ No.12) under fluorescent light (see Table 7.1 for the entire numerical results of Fig. 7.1)


Table 7.1: Fitness function value under different light conditions (100 (lx), 400 (lx), 700 (lx), 1000 (lx) and 1300 (lx)) using fluorescent light

Fitness function value of each cloth (No.1 ∼ No.12)
Illuminance (lx)   No.1    No.2    No.3    No.4    No.5    No.6    No.7    No.8    No.9    No.10   No.11   No.12
100                1.005   0.475   1.293   1.512   0.738   0.372   0.477   0.973   0.965   0.756   1.550   1.019
400                0.675   0.722   1.203   1.296   0.757   0.515   0.739   0.683   0.887   0.715   1.148   0.915
700                0.685   0.668   1.14    1.27    0.864   0.475   0.784   0.859   0.965   0.756   1.55    1.02
1000               0.625   0.705   1.147   1.260   0.911   0.662   0.861   0.748   0.535   0.631   0.837   0.994
1300               0.511   0.713   1.10    1.25    0.789   0.749   0.778   0.696   0.459   0.750   0.766   0.937

Table 7.2: Fitness function value under different light conditions (100 (lx), 400 (lx), 700 (lx), 1000 (lx) and 1300 (lx)) using LED light

Fitness function value of each cloth (No.1 ∼ No.12)
Illuminance (lx)   No.1    No.2    No.3    No.4    No.5    No.6    No.7    No.8    No.9    No.10   No.11   No.12
100                0.478   0.702   0.570   0.955   0.460   0.389   0.851   0.696   0.301   0.317   0.382   0.315
400                0.548   0.812   0.655   1.014   0.563   0.475   1.097   0.893   0.456   0.549   0.572   0.3956
700                0.484   0.766   0.588   0.864   0.378   0.535   1.02    0.835   0.317   0.533   0.556   0.406
1000               0.498   0.843   0.609   0.777   0.414   0.606   0.915   0.687   1.076   0.857   0.529   0.408
1300               0.497   0.891   0.591   0.884   0.823   0.724   0.887   0.779   0.482   0.794   0.338   0.590


Fig. 7.2: Fitness function value of each cloth (No.1 ∼ No.12) under LED light (see Table 7.2 for the entire numerical results of Fig. 7.2)


Fig. 7.3: Fitness function distribution of cloth No.2 in x-y plane (fluorescent light)


Fig. 7.4: Fitness function distribution of angle for cloth No.2 (fluorescent light)


Fig. 7.5: Fitness function distribution of cloth No.2 in x-y plane (LED light)


Fig. 7.6: Fitness function distribution of angle for cloth No.2 (LED light)


Fig. 7.7: Fitness function distribution of cloth No.3 in x-y plane (fluorescent light)


Fig. 7.8: Fitness function distribution of angle for cloth No.3 (fluorescent light)


Fig. 7.9: Fitness function distribution of cloth No.3 in x-y plane (LED light)


Fig. 7.10: Fitness function distribution of angle for cloth No.3 (LED light)


Fig. 7.11: Fitness function distribution of cloth No.11 in x-y plane (fluorescent light)


Fig. 7.12: Fitness function distribution of angle for cloth No.11 (fluorescent light)


Fig. 7.13: Fitness function distribution of cloth No.11 in x-y plane (LED light)


Fig. 7.14: Fitness function distribution of angle for cloth No.11 (LED light)


Chapter 8

Verification of photo-model-based pose estimation and handling of unique cloth under illumination varieties

The handling performance of the proposed method with dual-eye cameras has been verified, revealing that the proposed system has sufficient leeway to recognize and handle the unique cloth under lighting varieties from 100 (lx) to 1300 (lx). In addition, the 3D recognition and handling accuracy have been confirmed to be practically effective by conducting recognition/handling experiments under different light conditions.

8.1 Accuracy with fixed illumination

In this section, each pose (x, y, z in ΣH in Fig. 5.2 and θ around the z-axis) of all cloths in Table 5.1 is estimated 1000 times repeatedly under a fixed illumination of 700 (lx). The average pose estimation errors and their ±3σ extents for all cloths (No.1 ∼ No.12) are shown in Fig. 8.1. Errors and the extent of ±3σ concerning the x and y positions of all


12 cloths are less than 10 [mm], and those of θ are less than 10 [◦] with a probability of 99.7%. However, the errors in the z-direction are less than 30 [mm], which is about three times larger than in the x and y cases. The numerical data of Fig. 8.1 are listed in Table 8.1. The frequency distributions of x, y, z and θ for cloth No.6 are shown in Fig. 8.2. Cloth No.6 was chosen to show the actual data of the 1000 repetitions because it exhibits comparatively large 3σ values in the x, y and z directions. From graph (c) in Fig. 8.2, the recognition system tends to estimate the z value as nearer than it actually is (a negative value indicates a nearer position).
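The statistics reported in Fig. 8.1 and Table 8.1 are plain sample statistics over the repeated estimates. The following sketch (Python with NumPy; the error data are randomly generated placeholders, not measured values) shows how the average error and the ±3σ extent can be computed from 1000 repeated pose estimates of one cloth:

import numpy as np

# Hypothetical array of 1000 repeated pose-estimation errors for one cloth:
# columns are x [mm], y [mm], z [mm], theta [deg] (estimated pose minus true pose).
rng = np.random.default_rng(0)
errors = rng.normal(loc=[0.0, 0.0, -5.0, 0.0],
                    scale=[2.0, 2.0, 7.0, 2.0], size=(1000, 4))

average_error = errors.mean(axis=0)        # average error per component
three_sigma = 3.0 * errors.std(axis=0)     # 3-sigma extent (99.7% coverage for normal errors)

for name, avg, s3 in zip(["x", "y", "z", "theta"], average_error, three_sigma):
    print(f"{name}: average error = {avg:+.3f}, 3 sigma = {s3:.3f}")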

Table 8.1: Average error and standard deviation (3σ) measured by 1000 times recognition experiments of different cloths (No.1 ∼ No.12) under fixed illumination 700 (lx). For each cloth, the upper row is the average error and the lower row is the standard deviation (3σ).

Cloth No.   x [mm]    y [mm]    z [mm]    θ [◦]
No.1        0.652     -0.565    -2.13     0.274
            5.28      3.42      15.9      3.84
No.2        0.858     -1.20     -3.06     1.86
            4.80      5.82      18.9      7.38
No.3        0.449     0.136     -3.46     0.379
            3.69      3.03      17.8      2.83
No.4        0.0494    0.294     -4.26     0.31
            3.45      2.79      19.5      2.94
No.5        0.244     0.00557   -7.31     0.908
            4.50      3.75      23.7      6.21
No.6        -1.08     0.601     -6.75     0.235
            6.36      8.49      22.5      5.49
No.7        0.197     0.488     -5.80     0.691
            3.27      3.90      21.2      5.76
No.8        0.453     -0.0191   -2.94     0.128
            3.33      3.75      17.0      3.06
No.9        0.764     0.683     -6.95     0.360
            6.39      3.51      23.6      4.35
No.10       0.373     0.598     -5.91     -0.159
            3.15      6.39      21.4      5.28
No.11       0.346     0.586     -6.70     0.205
            4.71      4.29      23.2      3.84
No.12       0.137     -0.120    -3.74     0.0310
            6.36      3.30      19.2      3.63


Fig. 8.1: Average error and ±3σ concerning three positions (x, y, z) and one orientation (θ) of different cloths (No.1 ∼ No.12) that have been confirmed by the experimental result of 1000 times recognition under fixed illumination 700 (lx), where σ is standard deviation.


Fig. 8.2: Frequency distribution concerning x, y, z and θ of cloth No.6, obtained by conducting the 1000 times recognition experiment; panels (a)–(d) show the frequency distributions of the errors in the x, y, z and θ directions, respectively.


8.2 Robustness confirmation against illumination and lighting source varieties

If the lighting source changes (e.g. light bulb, fluorescent lamp, mercury-arc lamp, sunlight, etc.), it is necessary to check the robustness of the system against the different light sources. The tolerance against illumination variations has already been confirmed under two different lighting sources. Three different illumination conditions (100 (lx), 700 (lx), and 1300 (lx)) using fluorescent light and light-emitting diode (LED) light separately were prepared as experimental environments in [40]. In [41], five different illumination conditions (100 (lx), 400 (lx), 700 (lx), 1000 (lx) and 1300 (lx)) using fluorescent light and LED light were prepared separately for the experiments. Fluorescent light was used for the experiments of the present work.

Figures 7.7, 7.8, 7.9 and 7.10 show the fitness distributions for the position and orientation of cloth No.3 under the five different illuminations (100 (lx), 400 (lx), 700 (lx), 1000 (lx) and 1300 (lx)) and the two different lighting sources (Figs. 7.7 and 7.8: fluorescent light; Figs. 7.9 and 7.10: LED). In the experiments, cloth No.3 was chosen because of its distinct characteristics such as colorful patterns, small size, and light weight. According

to experimental results in Figs. 7.7, 7.8, 7.9 and 7.10, it can be seen that the height of

the peak of the fitness distribution changes with illumination variations. The difference

between the height of fitness distribution in position under 100 (lx) and 1300 (lx) can

be clearly seen in Figs. 7.7 and 7.9. Even though the height of the peak changes with

illumination strength, it can be seen from Figs. 7.7 ∼ 7.10 that there exist peaks at

the position of the cloth and the positions represented by the peaks are maintained in

all cases. It means that the proposed system is robust against illumination and lighting

source varieties. The reason why the system is robust is that the searching problem is

converted into the optimization problem in our proposed system. The conversion enables

the pose estimation system to be robust against lighting condition varieties since the op-


timization procedures do not care about the height of the peaks but only about the existence of the highest peak at the pose to be estimated. Figs. 7.7 ∼ 7.10 are introduced from our previous paper [41].

The first conditions in Eqs. 4.54 and 4.55 contributed to making the peak higher, and the second conditions (penalties) in Eqs. 4.54 and 4.55 helped to suppress the lower peaks generated by image noise. In Figs. 7.7 and 7.9, the fitness function values with a minus sign were all replaced by zero, so all the fitness distributions in Figs. 7.7 and 7.9 take an external form with a single peak. The values in Eqs. 4.54 and 4.55, that is, 2, -0.005, 0.1 and -0.5, were set experimentally by adjusting them before the pose estimation and handling experiments.


8.3 Accuracy with illumination varieties

Figures 8.3 to 8.7 show the average error of each cloth No.1∼No.12 under the five different illuminations (100 (lx), 400 (lx), 700 (lx), 1000 (lx) and 1300 (lx)). The numerical values of the average error of the pose estimation results are listed in Table 8.2 and the standard deviations in Table 8.3. From Figs. 8.3 to 8.7, the average errors obtained from the 100 times recognition experiments under the five illuminations for all cloths (No.1∼No.12) show almost the same tendency as Fig. 8.2. With the variation of cloths and also with the variety of light conditions (100 (lx)∼1300 (lx)), it has been confirmed that the ±3σ of the x and y positions is less than 10 [mm], that of the z position is less than 30 [mm], and that of θ is less than 10 [◦].

Fig. 8.3: Average error and average error±3σ of 100 times recognition experimental result of 12 unique cloths under 100 (lx).


Fig. 8.4: Average error and average error±3σ of 100 times recognition experimental result of 12 unique cloths under 400 (lx).


Fig. 8.5: Average error and average error±3σ of 100 times recognition experimental result of 12 unique cloths under 700 (lx).


Fig. 8.6: Average error and average error±3σ of 100 times recognition experimental result of 12 unique cloths under 1000 (lx).


Fig. 8.7: Average error and average error±3σ of 100 times recognition experimental result of 12 unique cloths under 1300 (lx).


Table 8.2: Average error measured by 100 times recognition experiments of each cloth

No.1∼No.12 under different light conditions (100 (lx), 400 (lx), 700 (lx), 1000 (lx) and

1300 (lx)).

Average error

Cloth No. Illuminance (lx) x [mm] y [mm] z [mm] θ [◦]

No.1

100 -0.179 0.282 -1.586 0.169

400 -0.813 0.234 -2.273 0.533

700 -0.550 -0.291 -4.727 0.499

1000 -0.487 0.119 -2.459 0.233

1300 -0.645 0.181 -2.310 0.408

No.2

100 0.074 -0.067 -2.57 0.667

400 0.109 -0.227 -1.52 0.084

700 0.043 -0.239 -1.49 0.251

1000 0.182 -0.204 -1.84 0.335

1300 0.0840 -0.156 -0.968 0.381

No.3

100 0.287 0.170 -2.32 -0.364

400 0.250 0.543 -3.17 0.676

700 0.913 0.407 -6.68 0.7290

1000 0.715 0.254 -5.98 0.933

1300 0.521 0.399 -5.04 0.600

No.4

100 0.462 0.095 -5.124 -0.462

400 0.392 0.099 -1.416 0.435

700 0.113 0.042 -2.061 0.186

1000 0.057 0.222 -2.810 0.390

1300 0.185 0.208 -1.749 0.486


No.5

100 0.641 0.296 -4.312 0.2496

400 0.291 0.015 -2.314 0.008

700 0.209 0.027 -1.944 0.139

1000 0.234 0.062 -2.730 -0.017

1300 0.198 0.019 -2.466 -0.021

No.6

100 -0.545 -0.341 -3.880 -0.164

400 -0.278 -0.181 -0.100 0.027

700 -0.484 -0.676 -1.356 0.021

1000 -0.248 -0.561 -2.246 0.011

1300 -1.22 0.215 -1.909 -0.126

No.7

100 -0.308 -0.115 -1.260 0.413

400 -0.145 -0.291 -0.571 0.334

700 -0.143 -0.321 -1.112 0.288

1000 -0.168 -0.268 -1.257 0.472

1300 -0.253 -0.336 -1.973 0.263


No.8

100 1.319 -0.094 -3.551 -0.523

400 0.416 0.173 -1.984 0.582

700 0.893 -0.2976 -3.864 -0.492

1000 0.052 0.234 -2.495 0.821

1300 0.047 0.300 -2.018 0.5871

No.9

100 -0.103 0.186 -0.947 -0.166

400 -0.614 0.304 -1.185 0.116

700 -0.393 0.195 -1.507 0.255

1000 -0.197 0.195 -2.300 0.255

1300 -0.349 0.410 -3.043 0.213

No.10

100 1.622 2.194 -1.850 -0.795

400 0.773 0.497 -0.613 -0.878

700 0.349 1.691 -2.214 -1.449

1000 1.739 0.679 -1.153 -1.480

1300 0.525 1.437 -1.551 -1.376

No.11

100 -0.016 0.059 -3.49 -0.168

400 -0.740 1.67 -1.86 -0.122

700 -0.506 0.790 -1.55 -0.073

1000 0.103 -0.325 -2.95 0.197

1300 0.109 -0.235 -1.68 -0.092

No.12

100 0.674 -0.687 -3.910 0.226

400 0.625 -0.394 1.981 0.622

700 0.678 -0.861 -2.381 0.372

1000 0.537 -0.620 -3.981 1.028

1300 0.340 -0.733 -4.625 1.000


Table 8.3: Standard deviation (σ) measured by 100 times recognition experiments of each

cloth No.1∼No.12 under different light conditions (100 (lx), 400 (lx), 700 (lx), 1000 (lx)

and 1300 (lx)).

Standard deviation (σ)

Cloth No. Illuminance (lx) x [mm] y [mm] z [mm] θ [◦]

No.1

100 0.829 0.713 3.796 0.535

400 1.869 1.270 6.577 1.510

700 2.093 1.619 6.508 1.823

1000 1.534 1.041 5.805 1.635

1300 1.685 1.117 5.964 1.591

No.2

100 1.02 0.699 5.68 1.48

400 0.570 0.874 4.22 0.877

700 0.455 0.858 4.39 1.02

1000 0.664 0.640 4.58 1.34

1300 0.793 0.755 3.57 1.05

No.3

100 0.812 0.739 4.62 0.795

400 0.988 1.12 5.64 1.21

700 1.54 1.12 7.28 1.35

1000 1.52 1.27 7.73 1.58

1300 1.26 1.28 6.50 1.34

No.4

100 1.124 1.148 5.988 1.289

400 0.974 0.837 3.440 0.808

700 0.571 0.500 4.646 0.844

1000 0.876 0.668 5.279 1.026

1300 0.634 0.716 4.154 0.932


No.5

100 1.468 0.824 6.729 0.889

400 0.855 0.355 5.325 0.762

700 0.697 0.489 4.884 0.752

1000 0.918 0.490 5.128 0.892

1300 0.957 0.460 5.515 0.863

No.6

100 1.426 2.462 6.847 1.821

400 1.096 0.794 2.922 0.748

700 1.653 1.797 4.439 1.626

1000 1.249 1.572 5.023 1.078

1300 2.237 1.087 5.217 1.425

No.7

100 1.105 0.954 3.495 1.166

400 0.730 0.875 2.767 1.019

700 0.639 0.907 3.747 1.015

1000 0.692 0.835 3.421 1.261

1300 0.984 1.074 4.625 1.079


No.8

100 1.865 1.071 5.846 1.476

400 1.213 0.888 4.816 1.429

700 1.565 1.630 6.011 1.582

1000 1.078 0.911 4.587 1.314

1300 0.872 0.782 4.863 1.174

No.9

100 1.060 0.578 3.825 0.869

400 1.773 0.734 4.493 0.803

700 2.192 0.686 4.807 0.795

1000 1.612 0.634 5.336 0.933

1300 1.944 0.772 6.114 0.889

No.10

100 1.542 2.557 4.978 2.202

400 1.360 1.596 3.220 1.946

700 0.950 2.637 4.686 2.411

1000 1.731 2.828 4.735 2.286

1300 1.067 2.720 3.655 2.536

No.11

100 1.01 1.28 6.01 0.952

400 1.83 2.05 5.26 1.20

700 1.45 1.28 3.79 0.905

1000 1.27 1.39 5.39 1.05

1300 0.813 1.08 4.20 0.924

No.12

100 1.693 1.442 6.711 1.314

400 2.472 1.629 7.233 1.635

700 2.398 1.999 6.368 1.331

1000 2.325 1.787 7.038 1.979

1300 1.768 1.98 7.801 1.864


8.4 Handling experiment

For handling the cloth with the PA-10 robot, four absorption pads driven by an air pump are used to pick up the cloth and place it into the collection box automatically. Figure 8.21 shows the four absorption pads used in the experiment. Although the required accuracy for grasping the cloth is not high, it ultimately relates to the handling accuracy, which has been confirmed as shown in Fig. 8.22. This shows that the proposed system can be useful from a practical viewpoint.

Fig. 8.21: Four absorption pads under robot hand

In the mail-order system of the company T2K, human workers classify and handle a large number of cloths manually every day. A robot that helps human workers should therefore be capable of handling and classifying cloths automatically. The results of 100 handling experiments under the different light conditions (100 [lx], 400 [lx], 700 [lx], 1000 [lx] and 1300 [lx]) are summarized numerically in Table 8.4 and shown in Fig. 8.22. Table 8.4 lists the numerical data of the average error and the standard deviation (3σ) when handling cloth No.2. Since the maximum 3σ of the z-axis is about 30 [mm], the robot hand needs some adaptive mechanism to pick up the cloths, such as a spring-rubber suction (absorbing) mechanism, to compensate for possible z-axis hand position errors. Since the horizontal position errors in the x and y axes are less than 10 [mm] and the orientation error is roughly less than 15 [◦], the proposed handling robot can insert the cloths into a box whose size is 20 [mm] larger than the biggest of the cloths' sizes. The experimental results shown in Fig. 8.22 and Table 8.4 have thus confirmed experimentally that the proposed system is able


to handle the 12 different cloths (No.1 ∼ No.12) under different light conditions without

the need for human assistance.
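The box-size argument can be checked with simple arithmetic. The sketch below (Python; the numbers and the per-side interpretation of the 20 [mm] margin are assumptions for illustration only) verifies that the worst-case horizontal handling error still allows a cloth to be inserted into the collection box:

# Hypothetical worst-case handling errors taken from the discussion above.
max_xy_error_mm = 10.0   # horizontal position error bound in x and y
box_margin_mm = 20.0     # the box is 20 mm larger than the biggest cloth (assumed per axis)

# Assuming the margin is shared between both sides of the box, the cloth fits
# as long as the horizontal placement error stays within half of the margin.
fits = max_xy_error_mm <= box_margin_mm / 2.0
print("cloth can be inserted into the box:", fits)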

Fig. 8.22: Average error and average error±3σ of 100 times handling experiments of No.2 cloth under different light conditions (100 [lx], 400 [lx], 700 [lx], 1000 [lx] and 1300 [lx]).


Table 8.4: Average error and standard deviation (3σ) measured by 100 times handling experiments of cloth No.2 under different light conditions. For each illuminance, the upper row is the average error and the lower row is the standard deviation (3σ).

Illuminance [lx]   x [mm]   y [mm]   z [mm]   θ [◦]
100                0.436    0.392    14.2     -0.466
                   10.2     6.93     15.5     12.1
400                1.20     0.527    13.6     0.559
                   8.37     7.59     17.6     9.87
700                0.732    0.666    13.0     -0.0440
                   9.00     6.93     17.0     11.2
1000               0.525    0.502    9.96     0.464
                   11.3     7.29     21.4     9.48
1300               0.366    0.772    13.4     0.0171
                   7.53     5.85     17.1     10.2


Chapter 9

Application of photo-model-based recognition to visual servoing

9.1 Photo-model-based real-time 3D-pose estimation and tracking of solid object with real-time multi-step genetic algorithm (RM-GA)

Naturally, the underwater environment contains many kinds of fish and other sea animals. Among them, the blue crab has been chosen to verify the 3D pose estimation and tracking performance with dual-eye cameras because the blue crab has not only a flat body but also other interesting shape features. Moreover, the blue crab is very popular in the Chesapeake Bay and is a keystone species with an integral ecological, economic and sociological role. According to a literature review in [88] and [89], the blue crab is found mostly in three areas in Japan (Hamana Lake, Osaka Bay, and Okayama Prefecture). Approximately 1 million juveniles are currently released per year. Nowadays, Japan's coastal fisheries are controlled by the Japanese Fishing Cooperative Association. The catch of the blue crab has increased remarkably due to overfishing. Automatic recognition, pose estimation and tracking of a 3D solid object with a robot have been widely used in


industrial applications owing to advantages such as high precision, flexibility, and productivity. However, pose estimation of deformable objects is more difficult than that of rigid objects, and the blue crab is one such deformable object. Many studies have been conducted on the automated handling of deformable objects including cloth, strings, ropes, electric cables and so on. To the author's knowledge, however, there is no study on other natural 3D solid deformable objects or animals. Therefore, our research group decided to apply the vision-based robot to fishery products in place of the fishermen.

Visual servoing, a method of controlling a robot using visual information in the feedback loop, is expected to allow the robot to adapt to changing or unknown environments. Some methods have already been proposed to improve observation abilities by using stereo cameras [23], multiple cameras [24], and two cameras with one fixed on the end-effector and the other fixed in the workspace [25]. These methods obtain multiple different views of an object by increasing the number of cameras. In the case of single-camera observation, it is understood that the position of an object is hard to estimate correctly. In the case of 3D pose estimation from the image information of multiple cameras, a problem may arise: whether the points on an object in 3D space correspond correctly to the points in the images of the multiple cameras may not be guaranteed. Consequently, a problem exists in the process of reconstructing a 3D pose from points in the images of multiple cameras. When epipolar geometry is used, the object reconstructed from 2D to 3D through multiple camera images may represent an incorrect 3D shape and pose. To avoid the above-mentioned problems, the model-based matching method [83] has been utilized with the adoption of a set-point-model way of thinking; that is, all points of the solid 3D model in the 3D searching space are projected as a group onto the left and right camera image planes (2D images) without the Corresponding Points Identification Problem [33] that has been pointed out as a difficulty in pose estimation using plural cameras. Since all points on the 3D model are projected into the 2D camera images in our method, all projections of each point are correct. This means that forward projection


of the 3D object is used, in which the Corresponding Points Identification Problem does not occur.

Fig. 9.1: A photo of a cloth handling robot system with dual-eye cameras: PA-10 robot is equipped with two cameras (vision sensors are used as dual-eye vision system) for recognition and vacuum cups (four absorption pads by the air compressor possible to perform the pick and place of the target object (cloth)) for handling. In the test, the robot picks up the cloth and places it into the collection box.

Therefore, our research group developed the vision-based robot system shown in Fig. 9.1. In Fig. 9.1, the dual-eye cameras fixed on the end-effector of a PA-10 robot (an eye-in-hand system) perform the cloth recognition and pose estimation process based on the digital photo model. The cloth absorption pads shown in Fig. 9.1 allow the target object (cloth) to be picked up by suction. The aim of the PA-10 robot using the proposed system is to pick up the cloth after recognizing it and to place it into the desired collection box, as shown in Fig. 9.1. Figure 9.1 is introduced from our previous papers [38], [40], [41] and [46], since the dual-eye pose estimation method for the flat-shaped 3D unique cloth has been used for visual servoing and proved to be practical and credible.


Fig. 9.2: A photo of the blue crab recognition robot system with dual-eye cameras: a PA-10 robot is equipped with two cameras (vision sensors are used as dual-eye vision system) for recognition and pose estimation

The effectiveness of the proposed photo-model-based matching system (Fig. 9.1) for the 12 different kinds of flat 3D deformable target objects, in terms of recognition, pose estimation and handling performance, has been verified under illumination varieties in previous works [41] and [48].

The dual-eye cameras fixed at the end-effector of a PA-10 robot perform the blue crab recognition and pose estimation process based on the photograph model, as shown in Fig. 9.2. The major differences between the two system configurations, the types of target object and the backgrounds can be clearly seen by comparing Fig. 9.1 and Fig. 9.2. In the previous experiment, our research group used a flat 3D target object (cloth) and a plain green background, as shown in Fig. 9.1. In this experiment, the target object changed from the flat deformable target object (cloth) to a 3D solid target object (blue crab), and the background changed from the plain green one to a natural underwater background, as shown in Fig. 9.2. A digital photo model is used to recognize the blue crab and to estimate its pose. This paper introduces a novel method of photo-model-based recognition for PA-10 robot


real-time pose estimation and tracking, which uses an RM-GA. The photo-model-based recognition method presented in this study performs pose estimation of a 3D solid target object from the 2D left and right dynamic input images captured by the dual-eye cameras. Improvement of the photo-model-based matching system by utilizing the RM-GA for real-time pose estimation is one of the main contributions of this paper. In Figure 9.2, a 3D solid target object (blue crab) of known size, shape, and color is used. However, other 3D target objects with shape varieties can also be recognized and estimated by the proposed model-based matching approach; for example, in [97] a model of a fish is used to track fish in real time, and in [98] a model of a human face is used for human perception, in which perspective projection is employed as the projection transformation. The proposed photo-model-based blue crab 3D pose estimation system is intended to achieve better performance and higher accuracy than fishermen. Moreover, this system is aimed at application to real fishery products, regardless of the shape of the target object. Therefore, the robustness of the proposed system against the shape of the target object has been verified experimentally in this study.

9.2 Photo-model-based recognition

9.2.1 Kinematics of stereo-vision

Figure 9.3 shows a perspective projection of the dual-eye vision system. The coordinate systems of the dual-eye cameras and the target object (blue crab) in Fig. 9.3 consist of the world coordinate system $\Sigma_W$, the j-th model coordinate system $\Sigma_{Mj}$, the hand coordinate system $\Sigma_H$, the camera coordinate systems $\Sigma_{CL}$ and $\Sigma_{CR}$, and the image coordinate systems $\Sigma_{IL}$ and $\Sigma_{IR}$. In Fig. 9.3, the position vectors of an arbitrary i-th point of the j-th 3D model $\Sigma_{Mj}$ expressed in each coordinate system are as follows:

• ${}^{W}\boldsymbol{r}^{j}_{i}$ : position of an arbitrary i-th point on the j-th 3D model based on $\Sigma_W$

• ${}^{M}\boldsymbol{r}^{j}_{i}$ : position of an arbitrary i-th point on the j-th 3D model in $\Sigma_{Mj}$, where ${}^{M}\boldsymbol{r}^{j}_{i}$ is a constant vector

• ${}^{CR}\boldsymbol{r}^{j}_{i}$ and ${}^{CL}\boldsymbol{r}^{j}_{i}$ : position of an arbitrary i-th point on the j-th 3D model based on $\Sigma_{CR}$ and $\Sigma_{CL}$

• ${}^{IL}\boldsymbol{r}^{j}_{i}$ and ${}^{IR}\boldsymbol{r}^{j}_{i}$ : projected position on $\Sigma_{IL}$ and $\Sigma_{IR}$ of an arbitrary i-th point on the j-th 3D model

Fig. 9.3: Perspective projection of dual-eye vision-system: In the searching area, a 3D solid model is represented by the picture of blue crab with black points (j-th photo-model)

The homogeneous transformation matrix from the right camera coordinate system $\Sigma_{CR}$ to the target object coordinate system $\Sigma_M$ is defined as ${}^{CR}\boldsymbol{T}_M(\boldsymbol{\phi}^j_M, \boldsymbol{q})$, where $\boldsymbol{\phi}^j_M$ is the j-th model's pose and $\boldsymbol{q}$ is the robot's joint angle vector. Then, ${}^{CR}\boldsymbol{r}^j_i$ can be calculated by using Eq. (9.1),

$$ {}^{CR}\boldsymbol{r}^j_i = {}^{CR}\boldsymbol{T}_M(\boldsymbol{\phi}^j_M, \boldsymbol{q})\, {}^{M}\boldsymbol{r}^j_i \qquad (9.1) $$

where ${}^{M}\boldsymbol{r}^j_i$ is predetermined as a fixed vector since $\Sigma_{Mj}$ is fixed on the j-th model. ${}^{CL}\boldsymbol{r}^j_i$, which represents the same i-th point on the j-th model based on $\Sigma_{CL}$, is also calculated by using ${}^{CL}\boldsymbol{T}_M(\boldsymbol{\phi}^j_M, \boldsymbol{q})$. Since $\boldsymbol{q}$ can be measured by the robot's joint sensors, it can be regarded as known, and $\boldsymbol{q}$ is omitted hereafter. Equation (9.2) represents the projective transformation matrix $\boldsymbol{P}^k$.

transformation matrix P k.

P k =1

kzi

fηx

0 Ix0 0

0 fηy

Iy0 0

. (9.2)

where,

• k = CL, CR,

• kzi ; position of the i-th point in the camera sight direction in ΣCR and ΣCL,

• f ; focal length,

• ηx; [mm/pixel] in x-axis,

• ηy; [mm/pixel] in y-axis.

The position vectors of the i-th point in the right and left camera image coordinates can be described by using $\boldsymbol{P}^k$; for the right image,

$$ {}^{IR}\boldsymbol{r}^j_i = \boldsymbol{P}^k\, {}^{CR}\boldsymbol{r}^j_i = \boldsymbol{P}^k\, {}^{CR}\boldsymbol{T}_M(\boldsymbol{\phi}^j_M)\, {}^{M}\boldsymbol{r}^j_i \qquad (9.3) $$

Then, ${}^{IR}\boldsymbol{r}^j_i$ can be described as,

$$ {}^{IR}\boldsymbol{r}^j_i(\boldsymbol{\phi}^j_M) = \boldsymbol{f}_R(\boldsymbol{\phi}^j_M, {}^{M}\boldsymbol{r}^j_i), \qquad {}^{IL}\boldsymbol{r}^j_i(\boldsymbol{\phi}^j_M) = \boldsymbol{f}_L(\boldsymbol{\phi}^j_M, {}^{M}\boldsymbol{r}^j_i) \qquad (9.4) $$

where ${}^{IL}\boldsymbol{r}^j_i$ is described in the same manner as ${}^{IR}\boldsymbol{r}^j_i$.
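The forward projection of Eqs. (9.1)–(9.3) can be sketched compactly in code. The following fragment (Python with NumPy; the camera parameters and the example transformation are hypothetical values, and the pinhole model is a simplified reading of Eq. (9.2)) projects one model point given in the model frame into right-image pixel coordinates:

import numpy as np

def project_point(T_CR_M, r_M, f, eta_x, eta_y, Ix0, Iy0):
    """Forward projection of one model point (simplified sketch of Eqs. (9.1)-(9.3)).

    T_CR_M : 4x4 homogeneous transform from the model frame to the right-camera frame.
    r_M    : 3D point on the model, expressed in the model frame [mm].
    """
    r_CR = T_CR_M @ np.append(r_M, 1.0)   # Eq. (9.1): point expressed in the camera frame
    x, y, z = r_CR[:3]
    u = (f / eta_x) * (x / z) + Ix0       # Eq. (9.2)/(9.3): perspective projection to pixels
    v = (f / eta_y) * (y / z) + Iy0
    return np.array([u, v])

# Hypothetical example: the model origin is 400 mm in front of the camera along the optical axis.
T = np.eye(4)
T[2, 3] = 400.0
print(project_point(T, np.array([10.0, -5.0, 0.0]),
                    f=4.0, eta_x=0.01, eta_y=0.01, Ix0=320.0, Iy0=240.0))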

9.2.2 Model generation

In the proposed system, an artificial blue crab is selected to verify experimentally the occlusion tolerance of the photo-model-based recognition. The artificial blue crab selected for these experiments has approximately the same color, shape and size as a real one. There are two main portions in the proposed system: the first is the generation of the blue crab model template, and the second is relative pose estimation using the generated model template through the model-based matching method. This subsection describes the first portion before the model-based matching method is explained. The hue value is used for the extraction of the target color. By using the hue value, it is possible to avoid color changes caused by changes in the light on the target; the merit of this color representation is that the hue can be examined regardless of illumination. The model generation process is shown in Figure 9.4. Firstly, a background image is captured by the first camera and the averaged hue value of the background image is calculated, as shown in Fig. 9.4 (a). Then, the blue crab is placed on the background as shown in Fig. 9.4 (b). As shown in Fig. 9.4 (c), the hue value of each point in the image, acquired by scanning individual pixels, is compared with the averaged hue value of the background image to generate the surface space Sin of the model. Finally, the outside space Sout of the model is generated by enveloping Sin, as shown in Fig. 9.4 (d).

9.2.3 3D model-based matching

After a model is generated by the blue crab model generation process, it is used for recognizing the target object (blue crab) through model-based matching. The 3D pose of the 3D model, including three position variables and three orientation variables in quaternion, is represented as ${}^{C}\boldsymbol{\phi}^j_M = [{}^{C}x_M, {}^{C}y_M, {}^{C}z_M, {}^{C}\varepsilon_{1M}, {}^{C}\varepsilon_{2M}, {}^{C}\varepsilon_{3M}]^T$. Figure 9.5 shows the definition of a generated solid model in the 3D searching space (sub-figure on the top of Fig. 9.5) and the left and right 2D searching models (sub-figures on the left and right bottom of Fig. 9.5), respectively. In Fig. 9.5, a generated solid model is projected from the 3D space onto the left and right 2D searching planes. The sub-figure on the top of Fig. 9.5 shows a generated 3D solid model with its pose, $S_{in}(\boldsymbol{\phi}^j_M)$ (inner dotted points), and the outside space enveloping $S_{in}(\boldsymbol{\phi}^j_M)$, denoted by the outer dotted line ($S_{out}(\boldsymbol{\phi}^j_M)$). The sub-figures on the left/right bottom of Fig. 9.5 show the left/right 2D searching models


Fig. 9.4: Model generation process, described as Fig. 9.4 (a) ∼ Fig. 9.4 (d): Fig. 9.4 (a) shows a photograph of the background image, Fig. 9.4 (b) shows a photograph of the target object (the blue crab) on the background, Fig. 9.4 (c) represents the surface space model Sin as the inner points group (see also the enlarged portion of the photo model), and Fig. 9.4 (d) represents the outside space of the model Sout enveloping Sin (also shown in the enlarged portion of the photo model)

$S_L(\boldsymbol{\phi}^j_M)$ and $S_R(\boldsymbol{\phi}^j_M)$, respectively. Both $S_L(\boldsymbol{\phi}^j_M)$ and $S_R(\boldsymbol{\phi}^j_M)$ consist of $S_{L,in}(\boldsymbol{\phi}^j_M)$ and $S_{L,out}(\boldsymbol{\phi}^j_M)$, and $S_{R,in}(\boldsymbol{\phi}^j_M)$ and $S_{R,out}(\boldsymbol{\phi}^j_M)$. The correlation between the projected model and the images from the dual-eye cameras attached at the end-effector is evaluated and defined as a fitness function. In the fitness distribution, the highest fitness value represents the best pose of the model $\boldsymbol{\phi}$ that coincides with the captured images from the left and right cameras as shown in Fig. 9.5, and this pose is selected as the estimated pose. After the pose has been estimated, the target object can be handled.


Fig. 9.5: A 3D solid model in the 3D searching space (sub-figure on the top of Fig. 9.5) and left and right 2D searching models represented as $S_L(\boldsymbol{\phi}^j_M)$ and $S_R(\boldsymbol{\phi}^j_M)$ (sub-figures on the left/right bottom of Fig. 9.5)

9.2.4 Definition of the fitness function

The concept of the fitness function in this study can be said to be an extension of the work in [49], in which different models, including a rectangular surface-strips model, were evaluated using images from a single camera. The correlation between the projected model $\boldsymbol{\phi}$ and the captured images on the left and right 2D searching areas is calculated by Eqs. (9.5) to (9.8). The evaluation function $F(\boldsymbol{\phi}^j_M)$ is designed to take its maximum value as in Eq. (9.5). The maximum of the evaluation function $F(\boldsymbol{\phi}^j_M)$ is obtained by averaging the fitness functions of the left camera image, $F_L({}^{CL}\boldsymbol{\phi}^j_M)$, and the right camera image, $F_R({}^{CR}\boldsymbol{\phi}^j_M)$, as shown in Eq. (9.6).

$$ F(\boldsymbol{\phi}^j_M) = \Biggl[ \Bigl( \sum_{{}^{IR}\boldsymbol{r}^j_i \in S_{R,in}({}^{CR}\boldsymbol{\phi}^j_M)} p_{R,in}({}^{IR}\boldsymbol{r}^j_i) + \sum_{{}^{IR}\boldsymbol{r}^j_i \in S_{R,out}({}^{CR}\boldsymbol{\phi}^j_M)} p_{R,out}({}^{IR}\boldsymbol{r}^j_i) \Bigr) + \Bigl( \sum_{{}^{IL}\boldsymbol{r}^j_i \in S_{L,in}({}^{CL}\boldsymbol{\phi}^j_M)} p_{L,in}({}^{IL}\boldsymbol{r}^j_i) + \sum_{{}^{IL}\boldsymbol{r}^j_i \in S_{L,out}({}^{CL}\boldsymbol{\phi}^j_M)} p_{L,out}({}^{IL}\boldsymbol{r}^j_i) \Bigr) \Biggr] \Big/ 2 \qquad (9.5) $$

$$ F(\boldsymbol{\phi}^j_M) = \bigl\{ F_R({}^{CR}\boldsymbol{\phi}^j_M) + F_L({}^{CL}\boldsymbol{\phi}^j_M) \bigr\} \big/ 2 \qquad (9.6) $$

The evaluation of every point in the input image that lies inside the surface model frame and in the outside area of the model frame is represented by ${}^{IL}\boldsymbol{r}^j_i \in S_{L,in}({}^{CL}\boldsymbol{\phi}^j_M)$ and ${}^{IL}\boldsymbol{r}^j_i \in S_{L,out}({}^{CL}\boldsymbol{\phi}^j_M)$, respectively. Eqs. (9.7) and (9.8) are used for calculating $p_{L,in}({}^{IL}\boldsymbol{r}^j_i)$ and $p_{L,out}({}^{IL}\boldsymbol{r}^j_i)$.

$$ p_{L,in}({}^{IL}\boldsymbol{r}^j_i) = \begin{cases} 2, & \text{if } |H_{IL}({}^{IL}\boldsymbol{r}^j_i) - H_{ML}({}^{IL}\boldsymbol{r}^j_i)| \le 30; \\ -0.005, & \text{if } |H_B - H_{IL}({}^{IL}\boldsymbol{r}^j_i)| \le 30; \\ 0, & \text{otherwise.} \end{cases} \qquad (9.7) $$

$$ p_{L,out}({}^{IL}\boldsymbol{r}^j_i) = \begin{cases} 0.1, & \text{if } |H_B - H_{IL}({}^{IL}\boldsymbol{r}^j_i)| \le 20; \\ -0.5, & \text{otherwise.} \end{cases} \qquad (9.8) $$

where

• $S_{L,in}$ : the space of coordinates on the surface area of the model,

• $S_{L,out}$ : the space of coordinates on the outside area of the model,

• $H_{IL}({}^{IL}\boldsymbol{r}^j_i)$ : the hue value of the left camera image at the point ${}^{IL}\boldsymbol{r}^j_i$ (i-th point in $S_{L,in}$),

• $H_{ML}({}^{IL}\boldsymbol{r}^j_i)$ : the hue value of the point ${}^{IL}\boldsymbol{r}^j_i$ (i-th point in $S_{L,in}$) on the model,

• $H_B$ : the average hue value of the background image.

Equations (9.7) and (9.8) are designed to produce a clear peak in the fitness value distribution while suppressing noise; the voting values were tuned experimentally. In Eq. (9.7), if the hue value of a point of the captured image lying inside the surface-model frame S_{L,in} matches the hue value of the corresponding point of the model within a tolerance of 30, the fitness value increases by the voting value of “+2.” For every point inside the model area whose hue is close to the average hue value of the background, the fitness value decreases by “−0.005.” Similarly, in Eq. (9.8), if the hue value of a point of the left camera image lying in S_{L,out} matches the hue of the background within a tolerance of 20, the fitness value increases by “0.1”; otherwise it decreases by “−0.5.” The corresponding functions p_{R,in}(^{IR}r_i^j) and p_{R,out}(^{IR}r_i^j) are defined for the right camera image.
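As a concrete illustration of the voting rules above, the following is a minimal sketch of Eqs. (9.5)-(9.8) for precomputed hue values; the way points and hue values are collected from the images is assumed here for illustration, and the normalization that maps the final score into the 0-1 range seen in the experimental figures is omitted.

```python
def p_in(hue_img, hue_model, hue_bg):
    """Voting value for a point inside the surface model, S_in (Eq. (9.7))."""
    if abs(hue_img - hue_model) <= 30:
        return 2.0        # image hue matches the model hue
    if abs(hue_bg - hue_img) <= 30:
        return -0.005     # image hue looks like background inside the model
    return 0.0

def p_out(hue_img, hue_bg):
    """Voting value for a point outside the model, S_out (Eq. (9.8))."""
    return 0.1 if abs(hue_bg - hue_img) <= 20 else -0.5

def fitness_one_camera(points_in, points_out, hue_bg):
    """F_L or F_R: sum of the votes over S_in and S_out for one camera.
    points_in : iterable of (image_hue, model_hue) pairs inside the model
    points_out: iterable of image hues outside the model"""
    score = sum(p_in(h_img, h_mod, hue_bg) for h_img, h_mod in points_in)
    score += sum(p_out(h_img, hue_bg) for h_img in points_out)
    return score

def fitness(in_L, out_L, in_R, out_R, hue_bg):
    """F(phi_M^j): average of the left and right camera scores (Eqs. (9.5), (9.6))."""
    return 0.5 * (fitness_one_camera(in_L, out_L, hue_bg)
                  + fitness_one_camera(in_R, out_R, hue_bg))
```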

9.2.5 Real-time Multi-step Genetic Algorithm (RM-GA)

The problem of searching for the best pose of the object can be converted into an optimization problem, because the fitness function has been designed to give its maximum value only when the pose of the model (the RM-GA's gene) and the pose of the target object in the image coincide. The maximum of the fitness function could be found by a number of optimization methods; among them, the RM-GA evaluation process is applied because of its simplicity, effectiveness and quick implementation. Moreover, the RM-GA can recognize the target object within a short time without scanning all possible points. Twenty individuals are used in this experiment. Each individual chromosome consists of six variables, and each variable is coded with 12 bits, which makes the optimal solution easy to obtain. The first three variables (t_x, t_y, t_z) represent the position of a model in 3D space and the last three variables (ε1, ε2, ε3) represent its orientation. The genes of the RM-GA representing a possible pose solution are defined as below:

$$
\underbrace{\overbrace{01\cdots01}^{t_x}}_{12\,\text{bits}}\;
\underbrace{\overbrace{00\cdots01}^{t_y}}_{12\,\text{bits}}\;
\underbrace{\overbrace{11\cdots01}^{t_z}}_{12\,\text{bits}}\;
\underbrace{\overbrace{01\cdots01}^{\varepsilon_1}}_{12\,\text{bits}}\;
\underbrace{\overbrace{01\cdots11}^{\varepsilon_2}}_{12\,\text{bits}}\;
\underbrace{\overbrace{01\cdots10}^{\varepsilon_3}}_{12\,\text{bits}}
$$
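Each chromosome therefore carries 6 × 12 = 72 bits. The sketch below decodes such a bit string into a pose candidate; the searching-range limits used here are illustrative assumptions, since the exact bounds of the 3D searching space are not restated in this section.

```python
import random

BITS = 12
# Assumed search ranges (illustrative): position [mm] and quaternion components.
RANGES = [(-180.0, 180.0),   # t_x [mm]
          (-180.0, 180.0),   # t_y [mm]
          (400.0, 600.0),    # t_z [mm]
          (-0.375, 0.375),   # eps_1
          (-0.375, 0.375),   # eps_2
          (-0.375, 0.375)]   # eps_3

def decode(chromosome):
    """Map a 72-bit string to a pose candidate (t_x, t_y, t_z, eps_1, eps_2, eps_3)."""
    assert len(chromosome) == len(RANGES) * BITS
    pose = []
    for k, (lo, hi) in enumerate(RANGES):
        gene = chromosome[k * BITS:(k + 1) * BITS]
        ratio = int(gene, 2) / (2 ** BITS - 1)    # normalise the 12-bit gene to [0, 1]
        pose.append(lo + ratio * (hi - lo))       # scale to the search range
    return pose

# Example: decode one random 72-bit individual.
individual = "".join(random.choice("01") for _ in range(len(RANGES) * BITS))
print(decode(individual))
```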

Figure 9.6 (a) shows the process flow of the RM-GA, in which the 3D models converge onto the real 3D solid target object, i.e. the blue crab. In Fig. 9.6 (a), the target object is the blue crab and each model is represented by a rectangular shape drawn with dotted lines. The models carry the same shape and color information as the target object, but each model has a different pose, as shown at the top of Fig. 9.6 (a). Note that the evaluation process is performed in the left and right 2D image planes, whereas the convergence occurs in the 3D searching space. The 20 individuals of the RM-GA are evaluated by the fitness function, and the fitter individuals are selected to reproduce the next generation. In this way the most trustworthy pose is preserved through the five generations executed within each 24 [ms] period, as shown before the final generation in Fig. 9.6 (a). In the final generation, the gene that gives the best fitness value can be regarded as the true pose of the real solid target, as shown at the bottom of Fig. 9.6 (a).

Figure 9.6 (b) shows a flowchart of the RM-GA evolution process for recognition and pose estimation:

• Firstly, the individuals of the RM-GA are randomly generated in the 3D searching area as the first generation.

• Secondly, a new image captured by the dual-eye cameras is input.

• Thirdly, the fitness value of each individual is evaluated.

• Fourthly, the individuals are sorted according to their calculated fitness values.

• Fifthly, the best individuals are selected from the current population and the weak individuals are removed.

• Then, the next generation is reproduced by applying crossover and mutation to the selected individuals.

• The above procedures are repeated within 24 [ms], i.e. five evolutions per input image.

• Finally, the RM-GA outputs the best individual, and the process is repeated each time a new image is input (a minimal sketch of this loop is given after the list).
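The following is a minimal sketch of that loop; the population size (20 individuals), the 12-bit genes and the five generations per 24 [ms] video frame follow the description above, while the concrete selection, crossover and mutation operators shown here are simplifying assumptions rather than the exact operators of the implemented system.

```python
import random

POP_SIZE = 20
GENES = 6 * 12                     # 72-bit chromosome
GENERATIONS_PER_FRAME = 5          # five evolutions within each 24 [ms] period

def random_individual():
    return "".join(random.choice("01") for _ in range(GENES))

def crossover(a, b):
    cut = random.randrange(1, GENES)
    return a[:cut] + b[cut:]

def mutate(ind, rate=0.02):
    flip = lambda bit: "1" if bit == "0" else "0"
    return "".join(flip(bit) if random.random() < rate else bit for bit in ind)

def rm_ga_frame(population, fitness):
    """One 24 [ms] control period: evaluate, sort, select, reproduce, output."""
    for _ in range(GENERATIONS_PER_FRAME):
        ranked = sorted(population, key=fitness, reverse=True)
        elite = ranked[:POP_SIZE // 2]                      # keep the fitter half
        children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                    for _ in range(POP_SIZE - len(elite))]
        population = elite + children
    best = max(population, key=fitness)                     # output of this frame
    return population, best

# The population survives across frames, so convergence continues over
# successively input images; a placeholder fitness is used in this sketch.
population = [random_individual() for _ in range(POP_SIZE)]
placeholder_fitness = lambda ind: ind.count("1")
population, best = rm_ga_frame(population, placeholder_fitness)
```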

The convergence performance of the RM-GA, treated as a dynamic evaluation process, has been confirmed experimentally using the Lyapunov analysis in [98]. Real-time 3D pose estimation using 3D-model-based recognition and the RM-GA has been presented in detail in a previous paper [99], and real-time pose estimation using the RM-GA is discussed briefly in a previous study [100].


Fig. 9.6: RM-GA evolution process in which 3D models with random poses converge to the real 3D solid target object (blue crab) through the RM-GA operation, and the pose of the model with the best fitness value represents the estimated pose of the blue crab: (a) graphical representation of solution evaluation and chromosome evolution during each 24 [ms] control period and (b) flowchart of the RM-GA process steps (initialization, image input, evaluation, sorting, selection, crossover and mutation, output of the best individual's position and orientation) during convergence from the first to the final generation.


9.3 Experimental environment

Figure 9.7 shows the experimental layout and the coordinate systems used in the experiments: the world coordinate system (ΣW), the hand coordinate system (ΣH) and the blue crab coordinate system (ΣM). The blue crab coordinate system (ΣM) is set at x = 0, y = 0, z = 500 [mm] with respect to the hand coordinate system (ΣH).

In robotics, the device at the end of a robotic arm, designed to interact with the environment, is called an end-effector. In the proposed system, two cameras (vision sensors used as a dual-eye vision system) for recognition are attached to the end-effector of a PA-10 robot.

Fig. 9.7: Coordinate systems of the robot and end-effector: hand coordinate system (ΣH), world coordinate system (ΣW) and target coordinate system (ΣM), with ΣM located 500 [mm] from ΣH along the z-axis.
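Since ΣM is located at (0, 0, 500) [mm] in the hand frame ΣH, a point expressed in the target frame can be mapped into the hand frame with a homogeneous transformation. The short sketch below illustrates this under the simplifying assumption that the two frames are aligned in orientation; the actual relative orientation is defined by the robot kinematics.

```python
import numpy as np

def homogeneous(R, t):
    """Build a 4x4 homogeneous transform from a rotation matrix R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Target frame Sigma_M placed 500 mm in front of the hand frame Sigma_H;
# orientations assumed aligned for this illustration.
T_H_M = homogeneous(np.eye(3), [0.0, 0.0, 500.0])

# A point given in Sigma_M, e.g. the (25, 5) [mm] position of Fig. 9.10 (c),
# expressed in the hand frame Sigma_H:
p_M = np.array([25.0, 5.0, 0.0, 1.0])
p_H = T_H_M @ p_M
print(p_H[:3])        # -> [ 25.   5. 500.]
```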


9.4 Experimental results and discussion

Firstly, the 3D solid target object (blue crab) is recognized among pictures of different kinds of underwater animals at the vertical position φ_M = [x_M [mm], y_M [mm], z_M [mm], ε1, ε2, ε3] = [0, 0, 500, 0, 0, 0]. The 3D recognition accuracy has been confirmed experimentally using pictures of different kinds of underwater animals together with the blue crab. Secondly, blue crab recognition under changing orientations has been analyzed thoroughly in terms of the fitness function value in both position and orientation. Successful experimental results have been confirmed by rotating the target object around the yM-direction by 0 [◦], -10 [◦], -20 [◦], -30 [◦] and -40 [◦] respectively. The rotation of the target object around the xM-, yM- and zM-directions is described in degrees [◦] in this experiment to improve understandability, whereas the output of the RM-GA for the orientations about the xM-, yM- and zM-directions (ε1, ε2, ε3) is given in quaternion, as can be seen in the experimental result figures. Finally, the tracking performance of the PA-10 robot end-effector, ε2Hd, under a changing target object orientation in the yM-direction, ε2M, has been confirmed practically over 5.5 minutes.
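As a point of reference for reading those figures, a rotation of θ about the yM-axis corresponds to a unit quaternion whose vector part is (0, sin(θ/2), 0); this is why the -10 [◦], -20 [◦], -30 [◦] and -40 [◦] inputs appear as ε2 ≈ -0.087, -0.174, -0.259 and -0.342 in the reference rows of Figs. 9.13∼9.16. The short snippet below reproduces this conversion.

```python
import math

def quaternion_vector_for_y_rotation(theta_deg):
    """Vector part (eps1, eps2, eps3) of the unit quaternion for a rotation
    of theta_deg degrees about the y-axis."""
    half = math.radians(theta_deg) / 2.0
    return (0.0, math.sin(half), 0.0)

for deg in (0, -10, -20, -30, -40):
    eps = quaternion_vector_for_y_rotation(deg)
    print(f"{deg:4d} deg -> eps2 = {eps[1]:+.3f}")
# -10 deg -> -0.087, -20 deg -> -0.174, -30 deg -> -0.259, -40 deg -> -0.342
```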

9.4.1 Recognition of blue crab among different kinds of underwater animal pictures

Figure 9.8 shows the recognition of a blue crab among different kinds of underwater animals on the underwater background, and Table 9.1 lists the underwater animal pictures and their names. Each underwater animal used in this experiment has different colors, sizes and shapes. Among the different kinds of underwater animals, the experiments of the blue crab are discussed in detail in this paper.

Fig. 9.8: Recognition of a blue crab (target object) among different kinds of underwater animals on the underwater background.

Firstly, the recognition of the blue crab among the different kinds of underwater animal pictures has been verified experimentally. The blue crab recognition among the different underwater animal pictures was analyzed in terms of the fitness function value for the position x-y and the orientations ε1-ε2, ε1-ε3 and ε2-ε3. Figure 9.9 shows the recognition experiment of a blue crab in the left and right camera images. In Fig. 9.9, the target object is the blue crab and the model is represented by a rectangular shape with dotted lines. When a model completely overlaps the target object (Fig. 9.9), its fitness function reaches the maximum. In Fig. 9.10 (a), the blue crab is placed near (x, y) = (0, 0) [mm] on the underwater background to examine the recognition performance. In Fig. 9.10 (c), the blue crab among the different kinds of underwater animal pictures is placed near (x, y) = (25, 5) [mm] for recognition. The maximum fitness value of 0.7432 can be seen at the position (x, y) = (0, 0) [mm], as shown in Fig. 9.10 (b). The maxima of the fitness distributions for Fig. 9.10 (c) in position x-y and orientations ε1-ε2, ε1-ε3 and ε2-ε3 can also be seen as the peaks of a mountain shape in Fig. 9.10 (d), (e), (f) and (g) respectively. In Fig. 9.10 (d) ∼ (g), the maximum fitness value of 0.611 appears at position (x, y) = (25, 5) [mm], 0.6507 at orientation (ε1, ε2) = (-0.025, 0), 0.65 at orientation (ε1, ε3) = (0, 0), and 0.6707 at orientation (ε2, ε3) = (0, 0) respectively. According to the experimental results of Figs. 9.10 (b) and (d) ∼ (g), the peak of the mountain shape appears clearly and the position error in the fitness distributions in the x-y plane and the orientations ε1-ε2, ε1-ε3 and ε2-ε3 of the blue crab is small.


Fig. 9.9: Recognition of a blue crab in the left and right camera images in the experiment (the model is overlaid on the target object in both images).

Fig. 9.10: Experiment of recognizing a blue crab: (a) a photograph of the blue crab on the underwater background; (b) fitness distribution in the x-y plane; (c) a photograph of the blue crab among different kinds of underwater animals; (d) fitness distribution in the x-y plane; (e) fitness distribution of orientation in ε1-ε2; (f) fitness distribution of orientation in ε1-ε3; (g) fitness distribution of orientation in ε2-ε3. Note that the orientations (ε1, ε2, and ε3) are represented in quaternion.


Table 9.1: List of the different kinds of underwater animals (Ref: https://www.google.co.jp)

No.  Name
1    Angle
2    Blue Crab
3    Thailand Flag Betta Fish
4    Blue Starfish
5    Mandarin Dragonet
6    Copadichromis Azureus
7    Sea Dragon
8    Starfish


9.4.2 Blue crab recognition and pose estimation under changing orientation

To evaluate the recognition accuracy, a full search method is used for comparison with the pose estimated by the proposed system. In the full search method, the pose of the target object (the blue crab) is found by scanning all possible points in the entire searching area. Figure 9.11 (a) shows the experiment with the blue crab without disturbance. Fig. 9.11 (b), (c) and (d) show the fitness distributions for the position x-y [mm], the orientation ε1-ε2 and the orientation ε2-ε3 respectively. In Fig. 9.11 (b), the maximum fitness value of 0.931 appears at (x, y) = (2, 0) [mm] as the peak of a mountain shape in the fitness distribution of the position x-y. In Fig. 9.11 (c), the maximum fitness value of 0.9484 appears at (ε1, ε2) = (0, 0) in the fitness distribution of orientation ε1-ε2, and in Fig. 9.11 (d) the peak of the mountain with the maximum value of 0.9368 appears at (ε2, ε3) = (0, 0) in the fitness distribution of orientation ε2-ε3.
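The mountain-shaped fitness distributions of Figs. 9.10∼9.16 are produced in exactly this exhaustive way: the fitness is evaluated over a grid of candidate poses and the peak is taken as the reference for comparison with the RM-GA output. Below is a minimal sketch of such a full search over the x-y plane; the grid range, the step and the toy fitness function are placeholders for illustration.

```python
import numpy as np

def full_search_xy(fitness, x_range=(-180.0, 180.0), y_range=(-180.0, 180.0), step=5.0):
    """Exhaustively evaluate fitness(x, y) on a grid and return the peak pose,
    the peak value and the whole distribution (the 'mountain')."""
    xs = np.arange(x_range[0], x_range[1] + step, step)
    ys = np.arange(y_range[0], y_range[1] + step, step)
    distribution = np.array([[fitness(x, y) for y in ys] for x in xs])
    i, j = np.unravel_index(np.argmax(distribution), distribution.shape)
    return (xs[i], ys[j]), distribution[i, j], distribution

# Toy fitness peaked near (5, 0) [mm], mimicking the single-peak distributions above.
toy_fitness = lambda x, y: 0.93 * np.exp(-((x - 5.0) ** 2 + y ** 2) / 2000.0)
peak_xy, peak_value, dist = full_search_xy(toy_fitness)
# peak_xy -> (5.0, 0.0), peak_value -> 0.93
```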

In the previous Section 9.4.1, recognition of the solid 3D deformable object (blue crab) by the photo-model-based recognition proposed by the authors was confirmed only at the vertical position. The relative orientation calculation is more complicated and difficult than the position detection. In this work, the relative orientation calculations have been confirmed practically with experimental results in which the orientation of the target object around the yM-direction is changed in steps of −10 [◦]. According to the experimental results, the proposed system can distinguish each −10 [◦] step. Each of Figs. 9.12∼9.16 (a) shows the recognition experiment of the 3D solid object (blue crab) in the left and right camera images; the target object is the 3D solid blue crab and the model is represented by a rectangular shape with dotted lines. When a model completely overlaps the target object in each of Figs. 9.12∼9.16 (a), its fitness function reaches the maximum. The maxima of the fitness distributions in position x-y and orientations ε1-ε2, ε1-ε3 and ε2-ε3 can be seen as the peaks of mountain shapes in each of Figs. 9.12∼9.16 (b)∼(e) respectively.

Fig. 9.11: Experiment of recognizing a blue crab: (a) a photograph of the blue crab without disturbance; (b) fitness distribution in the x-y plane (maximum 0.931 at (x, y) = (2, 0) [mm]); (c) fitness distribution of orientation in ε1 and ε2 (maximum 0.9484 at (0, 0)); (d) fitness distribution of orientation in ε2 and ε3 (maximum 0.9368 at (0, 0)). Note that the orientations (ε1, ε2, and ε3) are represented in quaternion.


Fig. 9.12: Experiment of recognizing the blue crab orientation around the yM-axis, 0 [◦]: (a) recognition of the blue crab in the left and right camera images; (b) fitness distribution in the x-y plane; (c) fitness distribution of orientation in ε1-ε2; (d) fitness distribution of orientation in ε2-ε3; (e) fitness distribution of orientation in ε1-ε3. Note that the orientations (ε1, ε2, and ε3) are represented in quaternion. Panel data: object rotated 0 [◦] around the yM-axis; reference pose (x, y, z) [mm] = (0, 0, 500), (ε1, ε2, ε3) = (0, 0, 0); RM-GA estimate (6.73, 4.98, 509.14) [mm], (-0.037, 0.043, 0.004); maximum fitness values: 0.702 at (x, y) = (5, 5) [mm], 0.75 at (ε1, ε2) = (-0.075, 0.05), 0.7367 at (ε2, ε3) = (0.05, 0), 0.7353 at (ε1, ε3) = (-0.025, 0).


Fig. 9.13: Experiment of recognizing the blue crab orientation around the yM-axis, -10 [◦]: (a) recognition of the blue crab in the left and right camera images; (b) fitness distribution in the x-y plane; (c) fitness distribution of orientation in ε1-ε2; (d) fitness distribution of orientation in ε2-ε3; (e) fitness distribution of orientation in ε1-ε3. Note that the orientations (ε1, ε2, and ε3) are represented in quaternion. Panel data: object rotated -10 [◦] around the yM-axis; reference pose (x, y, z) [mm] = (0, 0, 500), (ε1, ε2, ε3) = (0, -0.087, 0); RM-GA estimate (-4.98, -2.63, 503.86) [mm], (-0.009, 0.025, -0.034); maximum fitness values: 0.6553 at (x, y) = (-5, -5) [mm], 0.7393 at (ε1, ε2) = (0.05, -0.025), 0.7147 at (ε2, ε3) = (-0.025, -0.05), 0.7147 at (ε1, ε3) = (0.05, -0.05).


Fig. 9.14: Experiment of recognizing the blue crab orientation around the yM-axis, -20 [◦]: (a) recognition of the blue crab in the left and right camera images; (b) fitness distribution in the x-y plane; (c) fitness distribution of orientation in ε1-ε2; (d) fitness distribution of orientation in ε2-ε3; (e) fitness distribution of orientation in ε1-ε3. Note that the orientations (ε1, ε2, and ε3) are represented in quaternion. Panel data: object rotated -20 [◦] around the yM-axis; reference pose (x, y, z) [mm] = (0, 0, 500), (ε1, ε2, ε3) = (0, -0.174, 0); RM-GA estimate (-5.37, -2.53, 503.86) [mm], (0.026, -0.124, -0.0415); maximum fitness values: 0.6393 at (x, y) = (-5, -5) [mm], 0.7167 at (ε1, ε2) = (0.025, -0.125), 0.7107 at (ε2, ε3) = (-0.15, -0.05), 0.6953 at (ε1, ε3) = (0, -0.05).


Fig. 9.15: Experiment of recognizing the blue crab orientation around the yM-axis, -30 [◦]: (a) recognition of the blue crab in the left and right camera images; (b) fitness distribution in the x-y plane; (c) fitness distribution of orientation in ε1-ε2; (d) fitness distribution of orientation in ε2-ε3; (e) fitness distribution of orientation in ε1-ε3. Note that the orientations (ε1, ε2, and ε3) are represented in quaternion. Panel data: object rotated -30 [◦] around the yM-axis; reference pose (x, y, z) [mm] = (0, 0, 500), (ε1, ε2, ε3) = (0, -0.259, 0); RM-GA estimate (-6.93, -2.63, 507.38) [mm], (0.085, -0.232, -0.034); maximum fitness values: 0.6227 at (x, y) = (-5, -5) [mm], 0.7067 at (ε1, ε2) = (0.05, -0.25), 0.682 at (ε2, ε3) = (-0.25, -0.025), 0.7407 at (ε1, ε3) = (0.025, -0.05).


Fig. 9.16: Experiment of recognizing the blue crab orientation around the yM-axis, -40 [◦]: (a) recognition of the blue crab in the left and right camera images; (b) fitness distribution in the x-y plane; (c) fitness distribution of orientation in ε1-ε2; (d) fitness distribution of orientation in ε2-ε3; (e) fitness distribution of orientation in ε1-ε3. Note that the orientations (ε1, ε2, and ε3) are represented in quaternion. Panel data: object rotated -40 [◦] around the yM-axis; reference pose (x, y, z) [mm] = (0, 0, 500), (ε1, ε2, ε3) = (0, -0.342, 0); RM-GA estimate (-7.12, -3.12, 504.94) [mm], (0.033, -0.318, -0.05); maximum fitness values: 0.5533 at (x, y) = (-5, -5) [mm], 0.6107 at (ε1, ε2) = (0.075, -0.325), 0.6173 at (ε2, ε3) = (-0.325, -0.05), 0.6107 at (ε1, ε3) = (0.075, -0.05).


9.4.3 Real-time 3D pose estimation and tracking of solid object with RM-GA

The rotation of the target object around the yM-direction is commanded as a step input of 0 [◦] ∼ 30 [◦] ∼ 0 [◦] ∼ -30 [◦] ∼ 0 [◦] within 5.5 minutes, with a step of 5 [◦] between successive inputs. The input and output of the RM-GA are in fact given in quaternion, as shown in Fig. 9.17, and the input is described here in degrees to improve readability. The output of the RM-GA for the orientation in the yM-direction, ε2Hd, is given in quaternion and can be seen in Fig. 9.17. Figure 9.17 shows the time profiles of the estimated orientation of the PA-10 robot end-effector around the yM-direction, ε2Hd, recognized by the RM-GA, the orientation of the target object in the yM-direction, ε2M, and the actual orientation of the PA-10 robot end-effector, ε2H. Fig. 9.17 thus shows the tracking performance of the PA-10 robot end-effector, ε2Hd, under the changing target object orientation in the yM-direction, ε2M, over T = 0 [s] ∼ 330 [s].
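The step reference of this experiment can be written down directly: the commanded angle moves 0 → 30 → 0 → -30 → 0 [◦] in 5 [◦] increments, and each commanded angle is converted to the quaternion component ε2M. A short sketch generating this reference is given below; the dwell time per step is an assumption obtained by spreading the 25 steps evenly over the 330 [s] experiment, since the exact timing is not restated here.

```python
import math

def step_reference(hold_time=13.2):
    """Step-input reference for the yM rotation: 0 -> 30 -> 0 -> -30 -> 0 [deg]
    in 5 [deg] increments; hold_time per step is an assumed value chosen so
    that the whole profile spans roughly 330 [s]."""
    angles = (list(range(0, 35, 5))          # 0 .. 30
              + list(range(25, -35, -5))     # 25 .. -30
              + list(range(-25, 5, 5)))      # -25 .. 0
    profile = []
    t = 0.0
    for deg in angles:
        eps2 = math.sin(math.radians(deg) / 2.0)   # quaternion component eps_2M
        profile.append((t, deg, eps2))
        t += hold_time
    return profile

for t, deg, eps2 in step_reference()[:4]:
    print(f"t = {t:6.1f} s   angle = {deg:4d} deg   eps2M = {eps2:+.3f}")
```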

Fig. 9.17: Pose estimation and tracking performance experiment for the 3D solid target object (blue crab) orientation in the yM-direction, ε2M. The plot shows, over 0 ∼ 330 [s], the orientation value of the target object ε2M, the orientation of the end-effector of the PA-10 robot estimated by the RM-GA ε2Hd, and the actual orientation of the end-effector of the PA-10 robot ε2H. Note that the orientations ε2M, ε2Hd, and ε2H are represented in quaternion.


Chapter 10

Conclusion

In this dissertation, a cloth model generation method, a new recognition method using a photo-model of each cloth (the photo-model-based matching method) and a pose estimation method that detects the target cloth and estimates its pose using model-based cloth recognition have been presented. By using a photo-model of each cloth, target objects (cloths) with different colors, shapes, sizes, designs and patterns can be recognized. The merits of the proposed system are as follows:

• A model-based approach is used to search for the target in the image with a model constructed from known information about the target object,

• The proposed method can resist the influence of changes in the light environment,

• The best match between the target object and the model can be obtained quickly and accurately,

• The photo-model-based method can recognize different deformable cloths automatically,

• 3D pose measurement is possible, and

• Photo-model-based 3D pose recognition is not limited to cloth; any object can be recognized by the proposed system.


The GA can converge correctly between the generated model and the previously unknown target cloth. In this experiment, 100 generations are used for the evolutionary process, and the recognition accuracy is analyzed in terms of the histogram of the pose estimation error. Automatic recognition of different deformable cloths, estimation of the 3D pose of the target object and recognition of the cloth under changing and unknown environments by the proposed photo-model-based cloth recognition system have been confirmed experimentally, with a view to application in garment factories.

In this dissertation, the unique cloth handling performance based on the 3D recognition accuracy has also been verified experimentally. In 100 handling trials, the error of each handling stayed within ±15 [mm] in the positions (x, y, z) and 15 [◦] in the orientation θ, which is within the desired tolerance of 40 [mm] in the positions (x, y, z) and 40 [◦] in the orientation θ. The numerical values of the histogram of the pose estimation errors for the handling accuracy of cloth No. 3 are summarized in the corresponding table. According to these experimental results, the practical experiments approach the desired goal of the handling performance. The proposed unique cloth handling system aims to save labor cost and to achieve better performance and higher accuracy than a human worker.

Verification of the unique cloth recognition and handling performance using the photo-model-based cloth recognition under different illuminations from 100 (lx) to 1300 (lx) has been presented, and the handling performance of the PA-10 robot has been verified. The experimental results indicate that the errors and the extent of ±3σ concerning the x and y positions of all 12 cloths are less than ±10 [mm], and those of θ around the z-axis are less than 10 [◦], with a probability of 99.7%, in the recognition experiments with illumination varieties. According to these results, the proposed system has been confirmed to be able to recognize and handle the 12 unique cloths under different light conditions without the need for human assistance.
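The ±3σ bound quoted above follows the standard normal-distribution argument that about 99.7% of samples lie within three standard deviations of the mean. The following is a minimal sketch of how such a bound is computed from recorded handling errors; the sample values are placeholders for illustration, not measured data.

```python
import statistics

# Placeholder position errors in x [mm] (illustrative values, not measured data).
errors_x = [1.2, -0.8, 2.5, 0.3, -1.9, 0.7, 1.1, -2.2, 0.0, 1.6]

mean = statistics.mean(errors_x)
sigma = statistics.stdev(errors_x)      # sample standard deviation
low, high = mean - 3.0 * sigma, mean + 3.0 * sigma
print(f"mean = {mean:+.2f} mm, 3-sigma band = [{low:+.2f}, {high:+.2f}] mm")
# If the errors are normally distributed, about 99.7% of them fall inside this band.
```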

Moreover, the maximum value of the highest peak in the fitness distribution corresponds to the blue crab, and it has been confirmed that the target object (crab) is recognized among different kinds of underwater animals. Automatic recognition of the 3D solid deformable target object (blue crab), estimation of its 3D pose and recognition of the crab under changing and unknown environments by the proposed photo-model-based crab recognition system have been confirmed experimentally, with a view to application to fishery products. According to the experimental results, the system can recognize the blue crab at target object orientations in the yM-direction of 0 [◦], −10 [◦], −20 [◦], −30 [◦] and −40 [◦] with fitness values above 0.3. The maxima of the fitness distributions in the orientations ε1-ε2, ε1-ε3 and ε2-ε3 have been confirmed experimentally as mountain-shaped peaks expressed in quaternion. In terms of the fitness function distribution, the validity of the present photo-model-based 3D pose estimation by stereo vision, extended to visual servoing, has thus been examined beyond the trial of the flat-shaped cloth handling robotic system. Finally, it has been confirmed that the end-effector of the PA-10 robot tracks the target object under the changing step orientation in the yM-direction, ε2M, within 0 [s] ∼ 330 [s], and that the RM-GA can estimate the pose of the target object correctly while the target object is changing its pose.


Bibliography

[1] Bureau, Statistics, “Ministry of internal affairs and communications,” Government

of Japan (2010).

[2] Muramatsu, Naoko, and Hiroko Akiyama, “Japan: super-aging society preparing for

the future,” The Gerontologist 51.4 : pp. 425-432 (2011)

[3] Sun, Li, Gerardo Aragon-Camarasa, Simon Rogers, and J. Paul Siebert, “Robot

Vision Architecture for Autonomous Clothes Manipulation,” arXiv preprint

arXiv:1610.05824 (2016)

[4] A. Ramisa, G. Alenya, F. Moreno-Noguer, and C. Torras, “Using depth and appear-

ance features for informed robot grasping of highly wrinkled clothes,” In Robotics

and Automation (ICRA),2012 IEEE International Conference on. IEEE, pp. 1703-708

(2012)

[5] B. Willimon, S. Birchfield, and I. D. Walker, “Model for unfolding laundry using

interactive perception,” In IROS, pp. 4871-4876 (2011)

[6] B. Willimon, I. Walker, and S. Birchfield, “A new approach to clothing classification

using mid-level layers,” In Robotics and Automation(ICRA), 2013 IEEE Interna-

tional Conference on, May, pp. 4271-4278 (2013)

[7] Y. Kita, T. Ueshiba, E. S. Neo, and N. Kita, “Clothes state recognition using 3d

observed data,” In Robotics and Automation, 2009. ICRA’09. IEEE International

Conference on. IEEE, pp. 1220-1225 (2009)


[8] Y. Kita, T. Ueshiba, E. S. Neo, and N. Kita, “A method for handling a specific

part of clothing by dual arms,” In Intelligent Robots and Systems, 2009. IROS 2009.

IEEE/RSJ International Conference on. IEEE, pp. 4180-4185 (2009)

[9] Yinxiao Li, Chih-Fan Chen and Peter K Allen., “Recognition of deformable object

category and pose,” Robotics and Automation (ICRA), IEEE International Confer-

ence, Hong Kong, pp. 5558-5564, May 31-June 7 (2014)

[10] Y. Li, Y. Wang, M. Case, S.-F. Chang, and P. K. Allen, “Real-time pose estima-

tion of deformable objects using a volumetric approach,” In IEEE/RSJ International

Conference on Intelligent Robots and Systems. IEEE, pp. 1046-1052 (2014)

[11] A. Ramisa, G. Alenya, F. Moreno-Noguer, and C. Torras, “Finddd: A fast 3d de-

scriptor to characterize textiles for robot manipulation,” In Intelligent Robots and

Systems (IROS), 2013 IEEE/RSJ International Conference on, Nov, pp. 824-830

(2013)

[12] Maitin-Shepard J, Cusumano-Towne M, Lei J and Abbeel., “Cloth grasp point detec-

tion based on multiple-view geometric cues with application to robotic towel folding,”

In Robotics and Automation (ICRA), IEEE International Conference, pp. 2308-2315,

May (2010)

[13] J. Van Den Berg, S. Miller, K. Goldberg, and P. Abbeel, “Gravitybased robotic cloth

folding,” In Algorithmic Foundations of Robotics IX. Springer, pp. 409-424 (2011)

[14] S. Miller, J. Van Den Berg, M. Fritz, T. Darrell, K. Goldberg, and P. Abbeel, “A ge-

ometric approach to robotic laundry folding,” The International Journal of Robotics

Research, vol. 31, no. 2, pp. 249-267 (2012)

[15] Stria, Jan, Daniel Pra, Vclav Hlav, Libor Wagner, Vladimr Petrk, Pavel Krsek, and

Vladimr Smutn, “Garment perception and its folding using a dual-arm robot,” In In-


telligent Robots and Systems (IROS 2014), 2014 IEEE/RSJ International Conference

on, pp. 61-67. IEEE (2014)

[16] Li, Yinxiao, Yonghao Yue, Danfei Xu, Eitan Grinspun, and Peter K. Allen, “Folding

deformable objects using predictive simulation and trajectory optimization,” In In-

telligent Robots and Systems (IROS), 2015 IEEE/RSJ International Conference on,

pp. 6000-6006. IEEE (2015)

[17] Cusumano-Towner, Marco, Arjun Singh, Stephen Miller, James F.

O’Brien, and Pieter Abbeel, “Bringing clothing into desired configura-

tions with limited perception,” In Robotics and Automation (ICRA),

2011 IEEE International Conference on, pp. 3893-3900 (2011). [Online].

Available:http://graphics.berkeley.edu/papers/CusumanoTowner-BCD-2011-05/

[18] Doumanoglou, Andreas, Andreas Kargakos, Tae-Kyun Kim, and Sotiris Malassio-

tis, “Autonomous active recognition and unfolding of clothes using random decision

forests and probabilistic planning,” In Robotics and Automation (ICRA), 2014 IEEE

International Conference on, pp. 987-993 (2014)

[19] A. Doumanoglou, T.-K. Kim, X. Zhao, and S. Malassiotis, “Active random forests:

An application to autonomous unfolding of clothes,” In Computer Vision ECCV

2014, ser. Lecture Notes in Computer Science, D. Fleet, T. Pajdla, B. Schiele, and

T. Tuytelaars, Eds. Springer International Publishing, vol. 8693, pp. 644-658 (2014)

[Online]. Available: http://dx.doi.org 10.1007/978-3-319-10602-1 42

[20] Y. Li, D. Xu, Y. Yue, Y. Wang, S.-F. Chang, E. Grinspun, and P. K. Allen, “Regrasp-

ing and unfolding of garments using predictive thin shell modeling,” In Proceedings

of the IEEE International Conference on Robotics and Automation (ICRA) (2015)


[21] D. Triantafyllou, I. Mariolis, A. Kargakos, S. Malassiotis, and N. Aspragathos, “A

geometric approach to robotic unfolding of garments,” Robotics and Autonomous

Systems, Vol. 75, pp. 233-243 (2016)

[22] Awai M, Shimizu T, Yamashita A, Kaneko T, “Person following and autonomous

returning by mobile robot equipped with monocular camera and laser range finder,”

Proceedings of the 2011 JSME Conference on Robotics and Mechatronics, 2A1-H10

(in Japanese) (2011)

[23] Song, Wei, Mamoru Minami, Yasushi Mae, and Seiji Aoyagi, “On-line evolutionary

head pose measurement by feedforward stereo model matching,” In Robotics and

Automation, 2007 IEEE International Conference, Roma, Italy, pp. 4394-4400, April

(2007)

[24] J. Stavnitzky, D. Capson, “Mutiple Camera Model-Based 3-D Visual Servoing,”

IEEE Trans. on Robotics and Automation, Vol. 16, No. 6, pp. 732-739, December

(2000)

[25] C. Dune, E. Marchand, C. Leroux, “One Click Focus with Eye-in-hand/Eye-to-hand

Cooperation?,” IEEE Int. Conf. on Robotics and Automation (ICRA), pp. 2471-2476

(2007)

[26] Wei Song, Mamoru Minami, Fujia Yu, Yanan Zhang and Akira Yanou, “3-D Hand

and Eye-Vergence Approaching Visual Servoing with Lyapunov-Stable Pose Track-

ing,” IEEE International Conference on Robotics and Automation Shanghai Inter-

national Conference Center, Shanghai China, May 9-13, (2011)

[27] S. Nakamura, K. Okumura, A. Yanou and M. Minami, “Visual Servoing of Patient

Robot’s Face and Eye-Looking Direction to Moving Human,” SICE Annual Confer-

ence, Tokyo, Japan pp. 1314-1319 (2011)


[28] Man, Kim-Fung, Kit-Sang Tang, and Sam Kwong, “Genetic algorithms: concepts and

applications [in engineering design],” IEEE transactions on Industrial Electronics,

Vol. 43, No. 5, pp. 519-534 (1996)

[29] Myo Myint, Kenta Yonemori, Akira Yanou, Shintaro Ishiyama and Mamoru Minami,

“Robustness of Visual-Servo against Air Bubble Disturbance of Underwater Vehicle

Using Three-dimensional Marker and Dual-eye Cameras,” International Conference

OCEANS 15 MTS/IEEE, Washington DC, USA, October 19-22, (2015)

[30] Kimitoshi Yamazaki, Masayuki Inaba, “A Cloth Detection Method Based on Im-

age Wrinkle Feature for Daily Assistive Robots,” MVA 2009 IAPR Conference on

Machine Vision Applications, Yokohama, Japan, May 20-22, (2009)

[31] S.Miller, M.Fritz, T.Darrell and P.Abbeel, “Parametrized Shape Models for Cloth-

ing,” IEEE International Conference on Robotics and Automation (ICRA), Shanghai

China, 9-13 May (2011)

[32] Ishikawa M, “Sensor fusion system-mechanism for integration of sensory informa-

tion,” Journal of the Robotics Society of Japan, Vol.6, No.3, pp. 251-255 (in Japanese)

(1998)

[33] Matsuyama T, Kuno Y, Imiya A, “Computer vision: technical review and future

view,” New Technology Communications (in Japanese) (1998)

[34] De Luca A, Oriolo G, Giordano P R, “On-line estimation of feature depth for image-

based visual servoing schemes,” Proceedings of 2007 IEEE International Conference

on Robotics and Automation (ICRA2007), pp. 2823-2828 (2007)

[35] Hashimoto K, Kimura H, “Visual servoing-nonlinear observer approach,” Journal of

the Robot Society of Japan, Vol. 13, No.7 , pp. 986-993 (in Japanese) (1995)


[36] Ono k, Ogawa T, Maeda Y, Nakatani S, Nagayasu G, Shimizu R, Ouchi N, “Recog-

nition and bin-picking of coil springs by stereo vision,” Transactions of the Japan So-

ciety of Mechanical Engineers, Series C, Vol.79, No. 804, pp. 2769-2779 (in Japanese)

(2013)

[37] Yu Son-Cheol, Tamaki Ura, Teruo Fujii, Hayato Kondo, “Navigation of autonomous

underwater vehicles based on artificial underwater landmarks,” In OCEANS,

MTS/IEEE Conference and Exhibition, Vol. 1, pp. 409-416 (2001)

[38] Ryuki Funakubo, Khaing Win Phyu, Hongzhi Tian, and Mamoru Minami, “Recogni-

tion and handling of clothes with different pattern by dual hand-eyes robotic system,”

Proceedings of the 2016 IEEE/SICE International Symposium on System Integra-

tion, pp. 742-747, Sapporo Convention Center, Sapporo, Japan, December 13-15

(2016)

[39] Khaing Win Phyu, Yu Cui, Hongzhi Tian, Ryota Hagiwara, Ryuki Funakubo, Akira

Yanou, and Mamoru Minami, “Accuracy on photo-model-based clothes recognition,”

Proceedings of the SICE Annual Conference 2016, pp. 1366-1371, Tsukuba, Japan,

September 20-23 (2016)

[40] Ryuki Funakubo, Khaing Win Phyu, Ryota Hagiwara, Hongzhi Tian, Mamoru Mi-

nami, “Verification of illumination tolerance for clothes recognition,” The Twenty-

Second International Symposium on Artificial Life and Robotics 2017 (AROB 22nd

2017), pp. 807-812, B-Con Plaza, Beppu, Japan, January 19-21 (2017)

[41] Khaing Win Phyu, Ryuki Funakubo, Ryota Hagiwara, Hongzhi Tian, and Mamoru

Minami, “Verification of Illumination Tolerance for Photo-model-based Cloth Recog-

nition,” Artificial Life and Robotics, Vol. 23, DOI: 10.1007/s10015-017-0391-0, pp.

118-130 (2018)


[42] W. E. L. Grimson., “A computer implementation of a theory of human stereo vision,”

Philosophical Transactions of the Royal Society of London, B., Vol. 292, No. 1058,

pp. 217-253 (1981)

[43] Z. Zhang, R. Deriche, O. Faugeras, and Q.-T. Luong., “A robust technique for match-

ing two uncalibrated images through the recovery of the unknown epipolar geometry,”

Artificial Intelligence Journal, Vol. 78, pp. 87-119 (1995)

[44] T. Poggio and S. Edelman., “A network that learns to recognize 3d objects,” Nature,

Vol. 343, pp. 263-266 (1990)

[45] S. Ullman and R. Basri., “Recognition by linear combinations of models,” IEEE

Trans. PAMI, Vol. 13, No. 10, pp. 992-1106 (1991)

[46] Khaing Win Phyu, Ryuki Funakubo, Ikegawa Fumiya, Yasutake Shinichiro, and

Mamoru Minami, “Verification of Recognition Performance of Cloth Handling Robot

with Photo-model-based Matching,” Proceedings of 2017 IEEE International Con-

ference of Mechatronics and Automation (ICMA), pp. 1750-1756, Takamatsu, Japan,

August 6-9 (2017)

[47] Khaing Win Phyu, Ryuki Funakubo, Fumiya Ikegawa, and Mamoru Minami, “Veri-

fication of Unique Cloth Handling Performance Based on 3D Recognition Accuracy

of Cloth by Dual-eyes Cameras with Photo-model-based Matching,” International

Journal of Mechatronics and Automation (IJMA) (In Press) (2018) (8 pages)

[48] Khaing Win Phyu, Ryuki Funakubo, Ryota Hagiwara, Hongzhi Tian, Mamoru

Minami, “Verification of Photo-model-based Pose Estimation and Handling

of Unique Clothes under Illumination Varieties,” Journal of Advanced Me-

chanical Design, Systems, and Manufacturing, JSME, Vol. 12, No. 2, DOI:

10.1299/jamdsm.2018jamdsm0047, (2018) (23 pages)


[49] Minami M, Agbanhan J and Asakura T., “Evolutionary scene recognition and si-

multaneous position/orientation detection,” in Soft Computing in Measurement and

Information Acquisition, Springer Berlin Heidelberg, pp. 178-207 (2003)

[50] Myint Myo, Kenta Yonemori, Akira Yanou, Shintaro Ishiyama and Mamoru Mi-

nami., “Robustness of visual-servo against air bubble disturbance of underwater ve-

hicle system using three-dimensional marker and dual-eye cameras,” In OCEANS’15

MTS/IEEE Washington, pp. 1-8 (2015)

[51] Agin, Gerald J, “Real time control of a robot with a mobile camera,” SRI Interna-

tional (1979)

[52] Hutchinson, Seth, Gregory D. Hager, and Peter I. Corke, “A tutorial on visual servo

control,” IEEE transactions on robotics and automation 12, No. 5, pp. 651-670 (1996)

[53] Oh, Paul Y., and K. Allen, “Visual servoing by partitioning degrees of freedom,”

IEEE Transactions on Robotics and Automation 17, No. 1, pp. 1-17 (2001)

[54] Malis, Ezio, Francois Chaumette, and Sylvie Boudet, “2 1/2 D visual servoing,”

IEEE Transactions on Robotics and Automation 15, No. 2, pp. 238-250 (1999)

[55] Allen, Peter K., Aleksandar Timcenko, Billibon Yoshimi, and Paul Michelman,“ Au-

tomated tracking and grasping of a moving object with a robotic hand-eye system,”

IEEE Transactions on Robotics and Automation 9, No. 2, pp. 152-165 (1993)

[56] Chaumette, Francois, and Seth Hutchinson, “Visual servo control. II. Advanced ap-

proaches (Tutorial),” IEEE Robotics and Automation Magazine 14, No. 1, pp. 109-

118 (2007)

[57] Middelplaats, L. N. M, “Automatic Extrinsic Calibration and Workspace Mapping:

Algorithms to Shorten the Setup time of Camera-guided Industrial Robots” (2014)


[58] Kou Yejun, Hongzhi Tian, Mamoru Minami, and Takayuki Matsuno, “Improved eye-

vergence visual servoing system in longitudinal direction with RM-GA,” Artificial Life

and Robotics, Vol. 23, No. 1, pp. 131-139 (2018)

[59] Wang, Ming-Shyan, “Eye to hand calibration using ANFIS for stereo vision-based

object manipulation system,” Microsystem Technologies 24, No. 1, pp. 305-317 (2018)

[60] Hu, Jwu-Sheng, Yung-Jung Chang, “Simultaneous Hand-Eye-Workspace and Cam-

era Calibration using Laser Beam Projection,” International Journal of Automation

and Smart Technology 4, No. 1, pp. 27-42 (2014)

[61] Kita, Yasuyo, Ee Sian Neo, Toshio Ueshiba, and Nobuyuki Kita, “Clothes handling

using visual recognition in cooperation with actions,” In Intelligent Robots and Sys-

tems (IROS), 2010 IEEE/RSJ International Conference on, pp. 2710-2715 (2010)

[62] Kita Yasuyo, Fumio Kanehiro, Toshio Ueshiba, and Nobuyuki Kita, “Clothes han-

dling based on recognition by strategic observation,” In Humanoid Robots (Hu-

manoids), 2011 11th IEEE-RAS International Conference on, pp. 53-58 (2011)

[63] Y. Kita, T. Ueshiba, E. S. Neo and N. Kita, “Clothes state recognition using 3D

observed data,” In Proc. of International Conference on Robotics and Automation,

pp. 1220-1225 (2009)

[64] Y. Kita, T. Ueshiba, E. S. Neo and N. Kita, “A method for handling a specific part of

clothes by dual arms,” In Proc. of International Conference on Robots and Systems,

pp. 480-485 (2009)

[65] Bell, Matthew, “Flexible object manipulation,” Dartmouth College (2010)

[66] Miller, Stephen, Jur Van Den Berg, Mario Fritz, Trevor Darrell, Ken Goldberg, and

Pieter Abbeel, “A geometric approach to robotic laundry folding,” The International

Journal of Robotics Research 31, No. 2, pp. 249-267 (2012)


[67] Sun, Li, Gerarado Aragon-Camarasa, Paul Cockshott, Simon Rogers, and J. Paul

Siebert, “A heuristic-based approach for flattening wrinkled clothes,” In Conference

Towards Autonomous Robotic Systems, Springer, Berlin, Heidelberg, pp. 148-160

(2013)

[68] Sun, Li, Gerardo Aragon-Camarasa, Simon Rogers, and J. Paul Siebert, “Accu-

rate garment surface analysis using an active stereo robot head with application to

dual-arm flattening,” In Robotics and Automation (ICRA), 2015 IEEE International

Conference on, pp. 185-192 (2015)

[69] Miller, Stephen, Mario Fritz, Trevor Darrell, and Pieter Abbeel, “Parametrized shape

models for clothing,” In Robotics and Automation (ICRA), 2011 IEEE International

Conference on, pp. 4861-4868. IEEE (2011)

[70] Ramisa, Arnau, Guillem Alenya, Francesc Moreno-Noguer, and Carme Torras,

Finddd, “A fast 3d descriptor to characterize textiles for robot manipulation,” In

Intelligent Robots and Systems (IROS), 2013 IEEE/RSJ International Conference

on, pp. 824-830. IEEE (2013)

[71] Willimon, Bryan, Stan Birchfield, and Ian Walker, “Classification of clothing using

interactive perception,” In Robotics and Automation (ICRA), 2011 IEEE Interna-

tional Conference on, pp. 1862-1868 (2011)

[72] Gee A, Cipolla R, “Determining the gaze of faces in images,” Image and Vision

Computing, Vol. 12, No. 10, pp. 639-647 (1994)

[73] Huang Chien-Yuan, Octavia I Camps, Tapas Kanungo, “Object recognition using

appearance-based parts and relations,” In Computer Vision and Pattern Recognition,

IEEE Computer Society Conference on, pp. 877-883 (1997)


[74] Yu F, Minami M, Song W, Zhu J, Yanou A, “On-line head pose estimation with

binocular hand-eye robot based on evolutionary model-based matching,” Journal of

Computer and Information Technology, Vol.2, No.1, pp. 43-54 (2012)

[75] McCall, John, “Genetic algorithms for modelling and optimisation,” Journal of Com-

putational and Applied Mathematics 184, No. 1, pp. 205-222 (2005)

[76] Uddin, M. Nasir, Mohammed Ali Abido, and M. Azizur Rahman, “Real-time perfor-

mance evaluation of a genetic-algorithm-based fuzzy logic controller for IPM motor

drives,” IEEE Transactions on Industry Applications 41, No. 1, pp. 246-252 (2005)

[77] Ishiguro, Akio, Kenta Kawasumi, and Akinobu Fujii, “Increasing evolvability of a

locomotion controller using a passive-dynamic-walking embodiment,” In Intelligent

Robots and Systems, 2002. IEEE/RSJ International Conference on, vol. 3, pp. 2581-

2586. IEEE, 2002.

[78] B.Siciliano and L.Villani: “Robot force control,” ISBN 0-7923-7733-8.

[79] Pengyu Liu, Mamoru Minami, Sen Hou, Koichi Maeda, Akira Yanou, “Eye-vergence

visual servoing on moving target,” System Control Information Society Research

Presentation Lecturer,Okayama University, Japan, May 15-17 (2013)

[80] Y. Cui, X. Li, M. Minami, A. Yanou, M. Yamashita and S. ISHIYAMA, “Robot for

Decontamination with 3D Move on Sensing,” Nuclear Safety and Simulation, Vol. 6,

No. 2, pp. 142-154 , June (2015)

[81] Myint M, Yonemori K, Yanou A, Lwin KN, Minami M, Ishiyama S, “Visual servoing for underwater vehicle using dual-eyes evolutionary real-time pose tracking,” Journal of Robotics and Mechatronics, Vol. 28, No. 4, pp. 543-558 (2016)


[82] Lwin KN, Yonemori K, Myint M, Naoki M, Minami M, Yanou A, Matsuno T, “Performance analyses and optimization of real-time multi-step GA for visual-servoing based underwater vehicle,” In Techno-Ocean, pp. 519-526 (2016)

[83] Minami M, Agbanhan J, Asakura T, “Evolutionary scene recognition and simultaneous position/orientation detection,” In Soft Computing in Measurement and Information Acquisition, Springer Berlin Heidelberg, pp. 178-207 (2003)

[84] Castillo CD and Jacobs DW, “Using stereo matching with general epipolar geometry for 2D face recognition across pose,” IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 31, No. 12, pp. 2298-2304 (2009)

[85] Cui Y, Li X, Minami M, Yanou A, Yamashita M and Ishiyama S, “Robot for decontamination with 3D move on sensing,” Nuclear Safety and Simulation, Vol. 6, No. 2, pp. 142-154 (2015)

[86] Paton M, Pomerleau F, Barfoot TD, “In the dead of winter: Challenging vision-based path following in extreme conditions,” In Field and Service Robotics, Springer International Publishing, pp. 563-576 (2016)

[87] Maeda Koichi, Mamoru Minami, Akira Yanou, Hiroaki Matsumoto, Fujia Yu, and Sen Hou, “Frequency response experiments of 3-D pose full-tracking visual servoing with eye-vergence hand-eye robot system,” In SICE Annual Conference (SICE), Proceedings of IEEE, pp. 101-107 (2012)

[88] www.mdsg.umd.edu

[89] Secor, D.H. and E.D. Houde, “Use of larval stocking in restoration of Chesapeake Bay striped bass,” ICES Journal of Marine Science, Vol. 55, pp. 228-239 (1998)

[90] Maitin-Shepard J, Cusumano-Towner M, Lei J, Abbeel P, “Cloth grasp point detection based on multiple-view geometric cues with application to robotic towel folding,” In Robotics and Automation (ICRA), 2010 IEEE International Conference on, pp. 2308-2315, May (2010)

[91] Bologna, P.A., Heck, K.L., “Differential predation and growth rates of bay scallops within a seagrass habitat,” Journal of Experimental Marine Biology and Ecology, Vol. 239, No. 2, pp. 299-314 (1999)

[92] NATIONAL AQUARIUM, 501 East Pratt Street, Baltimore, MD 21202, aqua.org

[93] Nishida, Yuya, Tamaki Ura, Tomonori Hamatsu, Kenji Nagahashi, Shogo Inaba, and Takeshi Nakatani, “Resource investigation for Kichiji rockfish by autonomous underwater vehicle in Kitami-Yamato bank off Northern Japan,” ROBOMECH Journal, Vol. 1, No. 1, p. 2 (2014)

[94] Ohba, Kohtaro, Yoichi Sato, and Katsushi Ikeuchi, “Appearance-based visual learning and object recognition with illumination invariance,” Machine Vision and Applications, Vol. 12, No. 4, pp. 189-196 (2000)

[95] Sugawara, Mai, Kaoru Kiyomitsu, Tatsuya Namae, Toshiya Nakaguchi, and Norimichi Tsumura, “An optical projection system with mirrors for laparoscopy,” Artificial Life and Robotics, Vol. 22, No. 1, pp. 51-57 (2017)

[96] Watanabe, Toshihiro, and Taro Hirose, “Estimation of the snow crab Chionoecetes opilio population density using the deep-sea video monitoring system on a towed sledge,” Bulletin of the Japanese Society of Scientific Fisheries (Japan) (2001)

[97] Minami Mamoru et al., “Visual servoing to fish and catching using global/local GA search,” Advanced Intelligent Mechatronics, Proceedings, IEEE/ASME International Conference on, Vol. 1, IEEE (2001)


[98] Song, Wei, Yu Fujia, and Mamoru Minami, “3D visual servoing by feedforward evolutionary recognition,” Journal of Advanced Mechanical Design, Systems, and Manufacturing, Vol. 4, No. 4, pp. 739-755 (2010)

[99] Myint Myo, Kenta Yonemori, Akira Yanou, Khin Nwe Lwin, Mamoru Minami and Shintaro Ishiyama, “Visual servoing for underwater vehicle using dual-eyes evolutionary real-time pose tracking,” Journal of Robotics and Mechatronics, Vol. 28, No. 4, pp. 543-558 (2016)

[100] Myint Myo, Kenta Yonemori, Khin Nwe Lwin, Akira Yanou, and Mamoru Minami, “Dual-eyes vision-based docking system for autonomous underwater vehicle: an approach and experiments,” Journal of Intelligent & Robotic Systems, pp. 1-28 (2017)


Acknowledgement

I would first like to say a very big thank you to my supervisor, Professor Mamoru Minami, for all the support and encouragement he gave me throughout these years. Without his steadfast support, guidance, and constant feedback, this Ph.D. would not have been achievable.

A special mention of thanks goes to all the people in the Visual Servoing Group for making it such a helpful, delightful, and enjoyable environment. I would also like to express my thanks to T2K Co., Ltd. for their help and support. My heartfelt thanks go to my family for their love, kindness, moral support, and understanding of my goals and aspirations. Finally, I would like to thank the Japanese Government (MONBUKAGAKUSHO: MEXT) Scholarship for the financial support granted throughout my doctoral degree.

September, 2018

Khaing Win Phyu
