
Semi-Automatic Visual Support System with Drone for Teleoperated Construction Robot

Takahiro Ikeda∗, Naoyuki Bando∗∗, and Hironao Yamada∗

∗Gifu University, 1-1 Yanagido, Gifu city, Gifu 501-1193, Japan

E-mail: {ikedata, yamada}@gifu-u.ac.jp
∗∗Industrial Technology and Support Division, Gifu Prefectural Government

2-1-1 Yabuta-minami, Gifu city, Gifu 500-8570, Japan
E-mail: [email protected]

[Received September 7, 2020; accepted February 1, 2021]

In this paper, we describe a semi-automatic viewpoint moving system that employs a drone to provide visual assistance to the operator of a teleoperated robot. The objective of this system is to improve the operational efficiency and reduce the mental load of the operator. The operator changes the position of the drone through an interface to acquire the optimal assist image for teleoperation. We confirmed through an evaluation experiment that, in comparison with our previous study in which the final positions of the drone were determined in advance, the proposed method improves the operational accuracy and reduces the mental load of the operator.

Keywords: drone, teleoperated construction robot, visual support

1. Introduction

Teleoperated robots provide an effective means of operating in places that are difficult or dangerous for human workers to enter, such as recovery operations following disasters or maintenance work in nuclear power plants. In the past, they were deployed at Mount Unzen in Nagasaki Prefecture (eruption), Mount Usu in Hokkaido (eruption), Miyakejima Island in Tokyo (eruption), and Nagaoka City in Niigata Prefecture (the 2004 Chuetsu earthquake) [1].

The first case involved tests of teleoperated unmanned operations for post-disaster recovery at Mount Unzen in 1990 undertaken by several construction contractors under a test field system set up by the Ministry of Construction. These tests involved the removal of boulders at a location that posed the risk of pyroclastic flows and thus the risk of a secondary disaster, making them extremely dangerous. In the teleoperation system used at that time, the joysticks of the construction equipment were teleoperated via simple feed-forward control. In this operation, the operator performed work using the fed-back visual information captured by a stereoscopic camera. However, this system failed to provide the operator with sufficient site information, resulting in a low work efficiency of 30%–50% compared with the case of direct operations [2]. Subsequently, the basic technology of teleoperation was established and deployed in the recovery operations at Mount Usu, Miyakejima Island, Nagaoka City, and Ibigawa town. In these operations, multiple cameras were deployed to capture images from several viewpoints and simultaneously present them to the operator, instead of using stereoscopic cameras, which are known to cause considerable eye fatigue, to compensate for the loss of perspective. However, the limitation of this approach was the need for larger facilities due to the use of multiple cameras. Thus, there has been a demand for readily deployable systems that require less equipment to assist teleoperation.

A study on teleoperation systems attempted to reduce the communication time delay in teleoperation by introducing model-based teleoperation [3]. Studies on improving the operability of teleoperated surgical robots using haptic feedback have also been conducted [4]. Furthermore, a study conducted teleoperation through the head movements of the operator based on the camera images captured using the operated device [5]. The objective of these studies was to improve tele-existence in teleoperation. Moreover, studies in which images from the viewpoint of a third party in the operation environment are provided to the operator have been conducted to improve the operational accuracy and reduce the load of the operator.

On uneven ground, mounting a camera on a robot capable of moving in three dimensions is believed to be effective for capturing multiple viewpoints using a small number of cameras. A representative example of a robot capable of three-dimensional movements is a multicopter (commonly known as a drone). Drones have been used in various fields in recent years because of their high mobility. Active research and development efforts have been undertaken in the fields of aerial photography [6, 7] and transport [8, 9]. Research and development have also been conducted on drones used for the visual or hammering inspection of bridges [10–13].

Among studies on the application of drones in the teleoperation of construction equipment, Kiribayashi et al. developed a system to assist teleoperation by capturing and displaying an overview video captured using a camera on a drone wire-connected to the construction equipment [14]. Although this system makes it possible to present an overall image of the environment surrounding the construction equipment, it makes the working space in the image smaller. The present authors have developed a visual assist system that presents the images of the workspace and its surroundings to improve the operational accuracy and reduce the mental load of the operator.

Previously, we developed a visual assist system (viewpoint moving system) that employs a drone [15]. We employed a drone to capture and present the images of the workspace from several predetermined locations depending on the operational content, for the task of transporting objects of a fixed shape using a teleoperated robot. Consequently, we could confirm through experiments that the operational accuracy could be improved, and the mental load of the operator could be reduced. However, in real work sites, blind spots often occur in the point of view of the drone because of an object (visual obstacle) lying between the fixation point and the drone, so that the image required by the operator cannot always be obtained from the predetermined positions. This can lead to a reduced operational accuracy or increased mental load of the operator.

Therefore, to improve the operational accuracy and reduce the mental load of the operator in cases where blind spots are created in the point of view of the drone, we developed a function that allows the operator to make fine adjustments to the viewpoint by manipulating the drone to remove such blind spots. With this function, the operator can manipulate the drone and move it to a position where no blind spots are created, in cases where an obstacle that can potentially create blind spots is present. This makes it possible to capture the image required by the operator, which can be expected to improve the operational accuracy and reduce the mental load of the operator.

In our previous study, the drone was automatically moved to switch the viewpoint by judging the work content based on the information of the internal pressure of a hydraulic cylinder. In the present study, in addition to the automatic switching of the viewpoint, we could make fine adjustments to the vertical direction (height) of the drone manually using a slide button on the joystick. By combining automatic switching and manual fine adjustments, a viewpoint preferred by the operator could be obtained while minimizing the load involved in the drone operation.

The main objective of this study is to examine the operational accuracy and mental load of the operator by adding the fine-adjustment function. Therefore, although the position of the drone has three degrees of freedom (DOFs), we consider fine adjustments in only one DOF, namely, the vertical direction. This avoids complicating the joystick handling, and thus minimizes the load of the operator.

In this study, we verify the validity of the present function from the standpoints of the operational accuracy, operating time, and mental load of the operator, based on experiments with human subjects using the viewpoint moving system implemented with the present function.

Fig. 1. Configuration of the teleoperated robot system.

Fig. 2. Employed drone.

2. Semi-Automatic Visual Assist System with Drone

Figure 1 shows the teleoperated robot system employed in the present study. The system consists of a construction robot, a computer (PC1) to control the construction robot, joysticks (JS) connected to PC1, a computer (PC2) to process camera images, a camera-mounted drone (AR.Drone 2.0, Parrot, Fig. 2), a stationary camera, and a screen on which the video image provided for visual assistance is projected. The joystick displacement and the displacement of the piston of the construction robot are fed back to a leader-follower control system. The video image generated by PC2 is projected onto the screen, which is viewed by the operator to conduct the operation. PC2 is also used to control the drone.

2.1. Visual Assist System

The viewpoint fine-adjustment function developed in this study is newly added to the viewpoint moving system developed in our previous study. In this section, we describe the viewpoint moving system. This system is intended for use in the teleoperated transport of objects of a given shape. A schematic of the operational environment in which this system is used is shown in Fig. 3. Here, in addition to the images captured using a normal camera vehicle (stationary camera) used in conventional teleoperation, a camera-mounted drone moves to appropriate positions depending on the work content and presents images to the operator. The images from the viewpoint of the grasping position (point A in Fig. 3) are presented when the robot is grasping the block, and those from the viewpoint of the placement position (point B in Fig. 3) are presented when the block is placed.

Fig. 3. Work field in our previous study.

The system measures the drone position and uses the measurements to perform the proportional-integral-derivative (PID) control of the drone position. The attitude of the drone is controlled by the attitude control function mounted in the AR.Drone based on the measurements of the built-in inertial measurement unit (IMU). The return values of position control are used for roll and pitch, whereas the values at the time when flight is initiated are maintained for yaw. Therefore, position control is performed via position feedback, whereas the target attitude is outputted by

φ_d = (G_p + G_i/s + G_d s) ε_y,   . . . . . . . . (1)

θ_d = (G_p + G_i/s + G_d s) ε_x,   . . . . . . . . (2)

where φ_d and θ_d are the target roll and pitch angles, respectively, and G_p, G_i, and G_d are the proportional, integral, and differential gains, respectively. ε_x and ε_y are the position errors in the x and y directions, respectively. As the built-in height control is executed based on the input of the target travel velocity, the target travel velocity V_zd is obtained as

V_zd = (G_zp + G_zi/s + G_zd s) ε_z,   . . . . . . . (3)

where G_zp, G_zi, and G_zd are the proportional, integral, and differential gains, respectively, and ε_z is the position error in the z direction.
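To make Eqs. (1)–(3) concrete, the following minimal Python sketch evaluates the same PID structure in discrete time; the gains, the control period, and the error values are illustrative assumptions, not the parameters used in the actual system.

```python
class PIDAxis:
    """Discrete PID controller acting on a single-axis position error (sketch)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, err):
        # Accumulate the integral term and approximate the derivative of the error.
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Hypothetical gains and a 50 ms control period (placeholder values).
dt = 0.05
roll_ctrl  = PIDAxis(kp=0.4, ki=0.05, kd=0.10, dt=dt)  # Eq. (1): phi_d from eps_y
pitch_ctrl = PIDAxis(kp=0.4, ki=0.05, kd=0.10, dt=dt)  # Eq. (2): theta_d from eps_x
vz_ctrl    = PIDAxis(kp=0.8, ki=0.10, kd=0.20, dt=dt)  # Eq. (3): V_zd from eps_z

eps_x, eps_y, eps_z = 0.20, -0.10, 0.35  # example position errors [m]
phi_d   = roll_ctrl.update(eps_y)   # target roll angle
theta_d = pitch_ctrl.update(eps_x)  # target pitch angle
v_zd    = vz_ctrl.update(eps_z)     # target vertical velocity
print(phi_d, theta_d, v_zd)
```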

In this system, a drone-mounted camera is used for position measurement. Sato et al. used a camera set up in the environment and color markers attached to the drone to measure the position of the drone [16]; however, this system requires an additional camera for measurement. As the drone is mounted with a camera in our study, position measurement with the mounted camera and markers without any additional equipment is more suitable. Hatakeyama et al. proposed a method of position measurement using a mirror-based omnidirectional camera and 3D markers [17]. In our study, the drone always faces the same rough direction so that the relative position of the markers and camera does not vary considerably; hence, a monocular camera and 2D markers are sufficient for position measurement.

Fig. 4. Work field in this paper.

Fig. 5. Coordinate systems.

Therefore, we use ARToolKit [18], which is a marker-based image processing method, for position measurement. As shown in Fig. 4, three markers are placed at fixed positions in the environment for position measurement. The position is expressed by a homogeneous transformation matrix from the coordinate system of the marker (marker coordinate system Σm) to the coordinate system of the camera (camera coordinate system Σc). The drone coordinate system is denoted as Σd. The coordinate systems are shown in Fig. 5.

As the camera is affixed to the front of the drone body, the relative position of Σd and Σc does not change. Therefore, Σc is related to Σd such that zc and xd, yc and −zd, and xc and yd are vectors in the same directions.
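As a rough illustration of how such a marker measurement can be chained into a drone pose, the Python sketch below assumes the detector already provides the Σm-to-Σc homogeneous transform and uses the axis correspondence above for the fixed camera-drone rotation; the marker placement, translation offsets, and numerical values are placeholders, not the calibration of the actual system.

```python
import numpy as np

def invert_homogeneous(T):
    """Invert a 4x4 homogeneous transform [R t; 0 1]."""
    R, t = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3] = R.T
    Ti[:3, 3] = -R.T @ t
    return Ti

# Fixed camera-to-drone rotation taken from the stated axis correspondence
# (x_d <-> z_c, y_d <-> x_c, z_d <-> -y_c); the translation offset is a placeholder.
T_cam_drone = np.eye(4)
T_cam_drone[:3, :3] = np.array([[0.0, 1.0,  0.0],
                                [0.0, 0.0, -1.0],
                                [1.0, 0.0,  0.0]])

def drone_pose_in_world(T_world_marker, T_cam_marker):
    """Chain world <- marker <- camera <- drone to obtain the drone pose."""
    T_marker_cam = invert_homogeneous(T_cam_marker)
    return T_world_marker @ T_marker_cam @ T_cam_drone

# Placeholder example: the marker sits at the world origin and is detected
# 1 m in front of the camera along the optical axis.
T_cam_marker = np.eye(4)
T_cam_marker[2, 3] = 1.0
T_world_marker = np.eye(4)
print(drone_pose_in_world(T_world_marker, T_cam_marker)[:3, 3])  # drone position
```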

In position measurement using markers, when only a single marker is used, the fork grab may obstruct the camera's view of the marker, making it impossible to recognize the marker and measure the position and attitude of the drone. Therefore, we employ several markers to measure the position and attitude of the drone. The above problem is avoided by setting up several markers, whose relative positions and attitudes are known, at different locations in the work site. Then, the position and attitude of the drone are measured from the position and attitude of a marker that can be recognized.

In our previous study, the drone was made to travel to a pre-designated position when the fork grab grasped the object. This method was based on the assumption that, in the operation of transporting blocks, the image of the position at which the object has been placed is the optimal image when the object is not yet grasped, and that of the position at which the object is to be placed is the optimal one when it has already been grasped. Here, whether the object is grasped is judged by the change in the internal pressure of the hydraulic cylinder of the fork grab. The cylinder pressure rises drastically when an object is grasped. It is judged that the object is not grasped when the cylinder pressure is below a given threshold, and the drone is moved to position A, which affords a view of the position at which the object is grasped. In contrast, it is judged that the object is grasped when the pressure is at or above a given threshold, and the drone is moved to position B.
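A minimal sketch of this switching logic is shown below; the pressure threshold, the viewpoint coordinates, and the function name are illustrative assumptions rather than values from the actual system.

```python
# Pre-designated viewpoints (placeholder coordinates in the work-site frame [m]).
POSITION_A = (1.5, 0.0, 1.2)   # viewpoint of the grasping position
POSITION_B = (0.0, 2.0, 1.2)   # viewpoint of the placement position
PRESSURE_THRESHOLD = 2.0e6     # hypothetical grasp-detection threshold [Pa]

def select_viewpoint(cylinder_pressure):
    """Send the drone to position A while no object is grasped, and to
    position B once the fork-grab cylinder pressure indicates a grasp."""
    if cylinder_pressure >= PRESSURE_THRESHOLD:
        return POSITION_B
    return POSITION_A

print(select_viewpoint(0.5e6))  # below the threshold -> POSITION_A
print(select_viewpoint(3.0e6))  # at/above the threshold -> POSITION_B
```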

In this study, the proposed viewpoint fine-adjustment function is mounted on the viewpoint moving system developed in our previous study described above. Then, an experiment with subjects is conducted to evaluate the validity of this function and examine the optimal manner of implementing the additional operation of the drone.

2.2. Semi-Automatic Viewpoint Adjustment

Figure 4 shows a schematic diagram of the work site assumed in this study. In the conventional teleoperation system using a vehicle-mounted camera, the operator performs the operation based on only the video image captured using the stationary camera (① in Fig. 4). In the previous system, the drone mounted with a camera (② in Fig. 4) automatically travels to a suitable position according to the task based on the internal pressure of the hydraulic cylinder of the fork grab and displays the image to the operator. An image of the grasping position (point a in Fig. 4) is presented when the task is to grasp the block, and that of the placement position (point b in Fig. 4) is presented when the task is to place the block. In the present experimental setup, a visual obstacle is placed in front of the placement position (point b) so that the visual information necessary to place the block is not available. The operator must execute an additional operation to move the drone vertically to remove the blind spot caused by the obstacle and obtain an image from a viewpoint (point c) that affords a view of the placement position. As the drone operation must be performed at the same time as the handling of the construction equipment, the additional operation of the drone is simply limited to changes in the vertical direction to minimize the load of the operator.

Fig. 6. Image from point a in Fig. 4.

Fig. 7. Image from point b in Fig. 4.

Fig. 8. Image from point c in Fig. 4.

In this study, we propose using a slide button on the joystick to operate the drone.

The images captured from points a, b, and c in the experimental setup that simulates the work site are shown in Figs. 6, 7, and 8, respectively.

The operator performs the task while viewing the images captured using the stationary and drone cameras. An example of the images displayed during the task is shown in Fig. 9.

When the drone receives a command to move vertically, it moves at a preset constant speed. The travel speed was set to 0.3 m/s.

Fig. 9. Views provided by proposed system (left: drone, right: fixed camera).

Fig. 10. Experimental environment.

We next describe the operation using the slide button on the joystick. The slide button is mounted on the joystick used to control the construction equipment. The slide button can be moved with slight movements of the fingertip; the command to raise or lower the drone is entered by sliding the slide button up or down, and that to stop and hover in place is entered by returning the button to the home position.
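A minimal sketch of this mapping is given below; the 0.3 m/s travel speed is taken from the text, while the button-state names and the function name are assumptions made for illustration.

```python
VERTICAL_SPEED = 0.3  # constant vertical travel speed [m/s], as stated in the text

def vertical_command(slide_state):
    """Map the slide-button state to a target vertical velocity for the drone:
    'up'/'down' raise or lower the drone, 'home' makes it stop and hover."""
    if slide_state == "up":
        return +VERTICAL_SPEED
    if slide_state == "down":
        return -VERTICAL_SPEED
    return 0.0  # button returned to the home position: hover in place

for state in ("up", "home", "down"):
    print(state, vertical_command(state))
```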

3. Experiment on Viewpoint Adjustment Function

3.1. Experimental Method

In this experiment, we verify the validity of the fine adjustments of the viewpoint made by the operator by further manipulating the drone using the viewpoint adjustment function. The experiment consisted of performing the task under two conditions:

I) with the viewpoint adjustment function,

II) without the viewpoint adjustment function.

The viewpoint adjustment function is engaged by using the slide button on the joystick.

Figure 10 shows the experimental setup. Fig. 11 shows the block that must be transported. Here, the task undertaken by the subject is described. The subject is asked to transport two blocks, in the order instructed beforehand, and place them within the green frame on the other platform. The environment is such that, when the block is to be placed, a blind spot is created in the field of view of the drone by an obstacle as long as the drone is only transported automatically. Therefore, under condition I (with the function), the viewpoint can be finely adjusted by additionally operating the drone to obtain the visual information necessary to place the block. The subjects are asked to transport the two blocks placed side by side on the start platform and stack them on the goal platform (Fig. 12).

Fig. 11. Objective block.

Fig. 12. Experimental task.

Before the experiment, the subjects were briefed about the operation of the construction robot and the task, and then given a period to practice operating the robot without the proposed visual assist system using the drone. They were given approximately an hour each to operate the construction robot freely (including grasping objects) using the joysticks. The practice session was conducted to minimize the effects of habituation on the experimental results. The subjects were four males ranging in age from 22 to 24 years, with an average age of 22.75 years. None of the subjects was licensed as a heavy equipment operator, nor did they have any experience operating heavy equipment. As they were inexperienced operators, precautions were taken to cut off the supply of hydraulic fluid to the construction robot immediately if an unexpected operation was performed.

The subjects performed the task under the two aforementioned conditions. To counter an order effect, the order of the tasks was alternated with each subject. After two trials were performed under each condition, the averages of the values described below were taken and were considered as the experimental results.

The experiment was conducted as follows:

1. The subjects were briefed on how to operate the construction robot and the task they were to perform. Then, they were allowed to practice operating the construction robot.


Fig. 13. Score of operating accuracy.

2. A pairwise comparison test was conducted to determine the NASA task load index (NASA-TLX) [19], which is used to evaluate the mental workload of the operator.

3. The experiment was performed, and the results were evaluated using NASA-TLX.

4. Step 3 was repeated under the other condition.

5. The experiment was concluded when the subject had undertaken the task under both conditions.

3.2. Evaluation Method

Next, we describe the method of evaluating the experimental results. The experimental results were evaluated using three indices: operational accuracy, operation time, and NASA-TLX. The experiment was conducted with the approval (approval number: 25-199) of the ethics committee for medical research at Gifu University.

3.2.1. Operating Accuracy Evaluation

The manner in which the blocks were placed was given scores, which were used as the index of the operational accuracy. Fig. 13 shows the scores given to the placement condition; each block was given a score ranging from 0 to 5 points so that the maximum score for a single trial was 10 points. Specifically, ① 5 points were given when the block was placed within the marked frame; ② 3 points were given when the block crossed one side of the zone; ③ 2 points were given when the block crossed two sides; ④ 1 point was given when the block crossed three or more sides; and ⑤ 0 points were given when the block fell outside the platform. The size of the marked zone is shown in Fig. 14.
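A minimal sketch of this scoring rule follows; the function name and the way a block's placement is encoded (number of frame sides crossed plus an on-platform flag) are assumptions made for illustration.

```python
def block_score(sides_crossed, on_platform=True):
    """Score one block by how it lies relative to the marked frame:
    5 = fully inside, 3/2/1 = crossing one/two/three-or-more sides,
    0 = the block fell outside the platform."""
    if not on_platform:
        return 0
    if sides_crossed == 0:
        return 5
    if sides_crossed == 1:
        return 3
    if sides_crossed == 2:
        return 2
    return 1

# One trial transports two blocks, so the maximum trial score is 10 points.
trial_score = block_score(0) + block_score(2)
print(trial_score)  # 7
```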

3.2.2. Operating Time Evaluation

The time [s] taken to execute the task was measured and considered as the operating time. To isolate the task time, time measurement was started when the fork grab of the construction robot arrived at the position at which the block was to be grasped, and stopped when the second block had been transported and placed.

Fig. 14. Size of goal area.

Fig. 15. Flow diagram of subject by NASA-TLX.

3.2.3. NASA-TLX

NASA-TLX is an index developed to evaluate mainly the mental load of aircraft pilots. It is used as a psychological index in various fields. Thus, this study uses NASA-TLX to evaluate the mental load of the operator, as this index has been applied widely and is relatively easy to employ.

NASA-TLX is composed of six scales: mental demand (MD), physical demand (PD), temporal demand (TD), overall performance (OP), effort (EF), and frustration level (FR).

Figure 15 shows the flow of the evaluation using NASA-TLX. It can be broadly divided into three processes: 1) the pairwise comparison of each scale, 2) the operation to be evaluated for the load level, and 3) the evaluation of the load level of each scale.

First, before undertaking the task to be evaluated for the load level, the pairwise comparisons of the six scales are performed to assign levels of importance, which are used as the weights of the respective scales (maximum 5, minimum 0). The description of the scales (Table 1) presented to the subjects for the pairwise comparison is a modified version of the Japanese-version NASA-TLX developed by Haga et al. [20]. Next, the subjects undertake the task to be evaluated for the load level. Finally, the subjects evaluate each of the six scales in the range 0–100. The lower the score, the lighter is the load. The scores given to the six scales are then weighted by the respective values obtained via the pairwise comparison described above and then averaged to obtain the mean weighted workload score (WWL), as the evaluation results of NASA-TLX.
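The WWL computation can be sketched as follows; the scale abbreviations match the six NASA-TLX scales above, while the rating and weight values are invented for illustration (with six scales, the 15 pairwise comparisons make the weights sum to 15).

```python
def weighted_workload(ratings, weights):
    """Mean weighted workload (WWL): each 0-100 scale rating is multiplied by
    its pairwise-comparison weight (0-5) and the sum is divided by the total
    weight, giving a single workload score per subject and condition."""
    total_weight = sum(weights.values())  # 15 when all pairs are compared
    return sum(weights[s] * ratings[s] for s in ratings) / total_weight

# Hypothetical example values (not data from the experiment).
ratings = {"MD": 40, "PD": 20, "TD": 35, "OP": 30, "EF": 45, "FR": 25}
weights = {"MD": 5, "PD": 1, "TD": 3, "OP": 2, "EF": 3, "FR": 1}
print(round(weighted_workload(ratings, weights), 1))
```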


Table 1. Expository writing of improved NASA-TLX for Japanese.

Scale                | Description
---------------------|------------------------------------------------------------------
Mental demand        | Were you able to operate the device "intuitively"?
Physical demand      | Were you physically tired after the task?
Temporal demand      | Were you able to perform the task without being rushed for time?
Overall performance  | Were you able to complete the task accurately?
Effort               | Did you make an effort to execute the task?
Frustration level    | Did you feel any anxiety or stress when performing the task?

Fig. 16. Mean of operating score: ** shows p < 0.01.

3.3. Experimental Result

3.3.1. Result of Operating Accuracy

Figure 16 shows the mean operating score and standard deviation of the four subjects under each condition. The ordinate represents the mean operating score, and the abscissa represents the two conditions, I (with the function) and II (without the function).

The results show that the mean operational accuracy was higher in condition I than in condition II. A t-test was conducted to confirm the significance, and a significant difference between conditions I and II was observed at the 1% level. This is because the proposed method made it possible to raise the viewpoint to circumvent the obstacle and made it easier to place the object in the marked frame by presenting the images of the working environment. This was achieved by allowing the operator to adjust the height of the drone to acquire images from a viewpoint that made it easier for the operator to execute the task. These results show that the viewpoint adjustment function makes it easier to recognize the work environment and improves the operational accuracy.
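The paper does not state which t-test variant was used; the sketch below assumes a paired two-sample test on per-subject mean scores, which matches the within-subjects design, and the numbers are invented for illustration.

```python
from scipy import stats

# Hypothetical per-subject mean operating scores (not the measured data).
scores_with_function    = [9.5, 10.0, 8.5, 9.0]  # condition I
scores_without_function = [6.0,  7.5, 5.5, 6.5]  # condition II

# Paired test, since the same four subjects performed both conditions.
t_stat, p_value = stats.ttest_rel(scores_with_function, scores_without_function)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # significant at the 1% level if p < 0.01
```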

3.3.2. Result of Operating Time

Figure 17 shows the mean operating time and standard deviation of the four subjects under each condition. The ordinate represents the mean operating time, and the abscissa represents the two conditions, I and II.

Fig. 17. Mean of operating time: * shows p < 0.05.

Fig. 18. Mean values of NASA-TLX.

The results show that the mean operating time was longer in condition I than in condition II. A t-test was conducted to confirm the significance, and a significant difference between conditions I and II was observed at the 5% level. This indicates that the viewpoint adjustment function increases the operating time. The operating time is believed to increase because the operator must additionally operate the drone using the viewpoint adjustment function, wait for the drone to move, and operate it repeatedly while monitoring the image until it arrives at a position from which it is possible to acquire images that make it easy to operate the construction robot.

3.3.3. Result of NASA-TLX

Figure 18 shows the average scores of the six evaluation scales of NASA-TLX for the four subjects for the two conditions. The mean scores are plotted on the respective axes, where the bold-line hexagon is the result for condition I (with the function) and the broken line is the result for condition II (without the function).

The results show that the scores are lower for all the six NASA-TLX evaluation indices for condition I (with the function). This indicates that the mental load in terms of all the evaluation items is lower when the viewpoint adjustment function is used as compared with the case without the function. In particular, the difference in mental (perceptual) demand is considerable, which is believed to be because the additional operation of the drone by the viewpoint adjustment function makes it possible for the operator to obtain sufficient visual information.

Fig. 19. Mean WWL scores: ** shows p < 0.01.

Next, the mean WWL and its standard deviation for the four subjects, obtained from the NASA-TLX results, are shown in Fig. 19. The ordinate represents the mean WWL, and the abscissa represents the two conditions, I (with the function) and II (without the function).

The results show that the WWL scores are lower when the viewpoint adjustment function is used compared with the case without the function. A t-test was conducted, and a significant difference between conditions I and II was observed at the 1% level. This indicates that the mental load is lower when the viewpoint adjustment function is used as compared with the case without the function. This is because, without this function, the operator must operate in an environment with less visual information because of the blind spot created by the obstacle when he/she cannot perform the additional drone operation. In contrast, when this function is available, he/she can additionally operate the drone to obtain more visual information when placing the block; consequently, he/she feels lower stress.

Meanwhile, some subjects voiced the view that using the same joystick to operate the construction robot and drone makes it more difficult to perform complex tasks (with the construction robot). This comment suggests that the operator will experience an increased load if the operation of either the construction robot or the drone is made more complex. Therefore, if we are to extend the drone-position adjustment function to three DOFs in the future, an interface for drone operation that is independent from the joystick will need to be developed.

4. Conclusion

In this study, we developed a semi-automatic viewpoint moving system that employs a drone to provide visual assistance to the operator of a teleoperated robot. The objective of this system is to improve the operational efficiency and reduce the mental load of the operator. In our previous study, we developed a system that automatically moves the drone to positions to capture the images of the grasping and placement points, according to the transport task. In this system, the operational accuracy decreased whenever a blind spot was created by an obstacle. Therefore, in the present study, we made it possible for the remote operator to make fine adjustments to the position of the drone to remove the obstacle from the field of view and recognize the target. The operator uses a slide switch to adjust the position of the drone to acquire the optimal assist image for the teleoperation of construction equipment. We confirmed through an evaluation experiment that, in comparison with our previous study, the proposed method improved the operational accuracy and reduced the mental load of the operator, although the operating time was lengthened due to the drone operation.

In the actual work site, there are likely to be cases in which fine adjustments in the forward/backward and sideways directions, in addition to the vertical direction, would be effective. In the present study, we implemented an adjustment function for only vertical movements to avoid making the joystick operation complex, as the drone is operated using the joystick used to operate the construction equipment. Thus, our future investigation will focus on developing a drone-control interface independent of the joystick for the operation of construction equipment, with added adjustment functions for forward/backward and sideways movements. We also plan to verify that the visual assist system with a three-DOF adjustment function will improve operability through an experiment with a larger number of subjects.

Acknowledgements

In this study, we received the generous cooperation of Mr. Ryohei Narita, to whom we wish to express our deep gratitude.

References:
[1] "Special Issue: Unmanned Construction to Become Familiar," Nikkei Construction, Vol.296, pp. 38-56 (Nikkei Business Publications Inc., Archive CD-ROM2002, No.155490), 2002.
[2] Advanced Construction Technology Center, "Guidebook for Unmanned Construction in Emergencies," 2002 (in Japanese).
[3] W.-K. Yoon, T. Goshozono, H. Kawabe, M. Kinami, Y. Tsumaki, M. Uchiyama, M. Oda, and T. Doi, "Model-Based Space Robot Teleoperation of ETS-VII Manipulator," IEEE Trans. on Robotics and Automation, Vol.20, No.3, pp. 602-612, doi: 10.1109/TRA.2004.824700, 2004.
[4] A. M. Okamura, "Methods for haptic feedback in teleoperated robot-assisted surgery," Industrial Robot, Vol.31, No.6, pp. 499-508, doi: 10.1108/01439910410566362, 2004.
[5] J. M. Teixeira, R. Ferreira, M. Santos, and V. Teichrieb, "Teleoperation Using Google Glass and AR, Drone for Structural Inspection," 2014 XVI Symp. on Virtual and Augmented Reality, pp. 28-36, doi: 10.1109/SVR.2014.42, 2014.
[6] N. Hallermann, G. Morgenthal, and V. Rodehorst, "Vision-based deformation monitoring of large scale structures using Unmanned Aerial Systems," Proc. of IABSE Symp.: Engineering for Progress, Nature and People, pp. 2852-2859, 2014.
[7] F. Bonnin-Pascual, A. Ortiz, E. Garcia-Fidalgo, and J. P. Company, "A Micro-Aerial Platform for Vessel Visual Inspection based on Supervised Autonomy," Proc. of 2015 IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), pp. 46-52, doi: 10.1109/IROS.2015.7353353, 2015.
[8] C. Wu, J. Qi, D. Song, X. Qi, T. Lin, and J. Han, "Development of an unmanned helicopter automatic barrels transportation system," Proc. of 2015 IEEE Int. Conf. on Robotics and Automation, pp. 4686-4691, doi: 10.1109/ICRA.2015.7139849, 2015.
[9] G. Garimella and M. Kobilarov, "Towards Model-predictive Control for Aerial Pick-and-Place," Proc. of 2015 IEEE Int. Conf. on Robotics and Automation, pp. 4692-4697, doi: 10.1109/ICRA.2015.7139850, 2015.
[10] T. Ikeda, S. Yasui, S. Minamiyama, K. Ohara, S. Ashizawa, A. Ichikawa, A. Okino, T. Oomichi, and T. Fukuda, "Stable Impact and Contact Force Control by UAV for Inspection of Floor Slab of Bridge," Advanced Robotics, Vol.32, Issue 19, pp. 1061-1076, doi: 10.1080/01691864.2018.1525075, 2018.
[11] T. Ikeda, S. Minamiyama, S. Yasui, K. Ohara, A. Ichikawa, S. Ashizawa, A. Okino, T. Oomichi, and T. Fukuda, "Stable camera position control of unmanned aerial vehicle with three-degree-of-freedom manipulator for visual test of bridge inspection," J. of Field Robotics, Vol.36, pp. 1212-1221, doi: 10.1002/rob.21899, 2019.
[12] Y. Hada, M. Nakao, M. Yamada, H. Kobayashi, N. Sawasaki, K. Yokoji, S. Kanai, F. Tanaka, H. Date, S. Pathak, A. Yamashita, M. Yamada, and T. Sugawara, "Development of a Bridge Inspection Support System Using Two-Wheeled Multicopter and 3D Modeling Technology," J. Disaster Res., Vol.12, No.3, pp. 593-606, doi: 10.20965/jdr.2017.p0593, 2017.
[13] K. Hidaka, D. Fujimoto, and K. Sato, "Autonomous Adaptive Flight Control of a UAV for Practical Bridge Inspection Using Multiple-Camera Image Coupling Method," J. Robot. Mechatron., Vol.31, No.6, pp. 845-854, doi: 10.20965/jrm.2019.p0845, 2019.
[14] S. Kiribayashi, K. Yakushigawa, and K. Nagatani, "Design and Development of Tether-Powered Multirotor Micro Unmanned Aerial Vehicle System for Remote-Controlled Construction Machine," Springer, pp. 637-648, 2018.
[15] H. Yamada, N. Bando, K. Ootsubo, and Y. Hattori, "Teleoperated Construction Robot Using Visual Support with Drones," J. Robot. Mechatron., Vol.30, No.3, pp. 406-415, doi: 10.20965/jrm.2018.p0406, 2018.
[16] K. Sato and R. Daikoku, "A Simple Autonomous Flight Control of Multicopter Using Only Web Camera," J. Robot. Mechatron., Vol.28, No.3, pp. 286-294, doi: 10.20965/jrm.2016.p0286, 2016.
[17] N. Hatakeyama, T. Sasaki, K. Terabayashi, M. Funato, and M. Jindai, "Position and Posture Measurement Method of the Omnidirectional Camera Using Identification Markers," J. Robot. Mechatron., Vol.30, No.3, pp. 354-362, doi: 10.20965/jrm.2018.p0354, 2018.
[18] H. Kato and M. Billinghurst, "Marker Tracking and HMD Calibration for a Video-based Augmented Reality Conferencing System," Proc. of 2nd IEEE and ACM Int. Workshop on Augmented Reality (IWAR'99), pp. 85-94, doi: 10.1109/IWAR.1999.803809, 1999.
[19] S. G. Hart and L. E. Staveland, "Development of NASA-TLX (Task Load Index): Results of empirical and theoretical research," Advances in Psychology, Vol.52, pp. 139-183, doi: 10.1016/S0166-4115(08)62386-9, 1988.
[20] S. Haga and N. Mizukami, "Japanese version of NASA Task Load Index: Sensitivity of its workload score to difficulty of three different laboratory tasks," The Japanese J. of Ergonomics, Vol.32, No.2, pp. 71-79, 1996 (in Japanese).

Name: Takahiro Ikeda

Affiliation: Assistant Professor, Gifu University

Address: 1-1 Yanagido, Gifu city, Gifu 501-1193, Japan
Brief Biographical History:
2015-2019 Postdoctoral Fellow, Meijo University
2019- Assistant Professor, Gifu University
Main Works:
• "Muscle activity during gait-like motion provided by MRI compatible lower-extremity motion simulator," Advanced Robotics, Vol.30, No.7, pp. 459-475, doi: 10.1080/01691864.2015.1122552, 2016.
• "Stable Impact and Contact Force Control by UAV for Inspection of Floor Slab of Bridge," Advanced Robotics, Vol.32, Issue 19, pp. 1061-1076, 2018.
• "Stable camera position control of unmanned aerial vehicle with three-degree-of-freedom manipulator for visual test of bridge inspection," J. of Field Robotics, Vol.36, pp. 1212-1221, 2019.
Membership in Academic Societies:
• The Robotics Society of Japan (RSJ)
• The Society of Instrument and Control Engineers (SICE)
• The Institute of Electrical and Electronics Engineers (IEEE)

Name: Naoyuki Bando

Affiliation: Industrial Technology and Support Division, Gifu Prefectural Government

Address: 2-1-1 Yabuta-minami, Gifu city, Gifu 500-8570, Japan
Brief Biographical History:
2004- Researcher, Gifu Prefectural Research Institute for Human Life Technology
2011- Senior Researcher, Research Institute for Machinery and Materials, Gifu Prefectural Government
2014- Gifu Prefectural Research Institute of Information Technology
2019- Industrial Technology and Support Division, Gifu Prefectural Government
Main Works:
• "Teleoperated Construction Robot Using Visual Support with Drones," J. Robot. Mechatron., Vol.30, No.3, pp. 406-415, 2018.
• "Research of Anxiety When Seat Rises in Standing Up Assistance Chair," Trans. of the Japan Society of Mechanical Engineers, Series C, Vol.74, pp. 3028-3035, 2008.
• "Development of Wheelchair Simulator with the Consideration of the Feeling of Body Attitude," Trans. of the Virtual Reality Society of Japan, Vol.9, pp. 405-412, 2004.
Membership in Academic Societies:
• The Japan Society of Mechanical Engineers (JSME)
• The Society of Instrument and Control Engineers (SICE)

Name: Hironao Yamada

Affiliation: Professor, Gifu University

Address: 1-1 Yanagido, Gifu city, Gifu 501-1193, Japan
Brief Biographical History:
1991-1992 Research Associate, Nagoya University
1992-1993 Visiting Research Fellow, the Aachen Institute of Technology
2007- Professor, Gifu University
Main Works:
• "Teleoperated Construction Robot Using Visual Support with Drones," J. Robot. Mechatron., Vol.30, No.3, pp. 406-415, 2018.
• "Support System for Slope Shaping Based on a Teleoperated Construction Robot," J. Robot. Mechatron., Vol.28, No.2, pp. 149-157, 2016.
• "Operability of a Control Method for Grasping Soft Objects in a Construction Teleoperation Robot Tested in Virtual Reality," Int. J. of Fluid Power, Vol.13, No.3, pp. 39-48, 2012.
Membership in Academic Societies:
• The Japan Society of Mechanical Engineers (JSME)
• The Japan Fluid Power System Society (JFPS)
• The Society of Instrument and Control Engineers (SICE)
• The Robotics Society of Japan (RSJ)

