    1506 IEEE TRANSACTIONS ON INDUSTRIAL ELECTRONICS, VOL. 52, NO. 6, DECEMBER 2005

    A Multimodal Interface to Control a Robot Arm via the Web: A Case Study on Remote Programming

    Raúl Marín, Member, IEEE, Pedro J. Sanz, Member, IEEE, P. Nebot, and R. Wirz

    Abstract—In this paper, we present the user interface and the system architecture of an Internet-based telelaboratory, which allows researchers and students to remotely control and program two educational online robots. In fact, the challenge has been to demonstrate that remote programming combined with an advanced multimedia user interface for remote control is very suitable, flexible, and profitable for the design of a telelaboratory. The user interface has been designed by using techniques based on augmented reality and nonimmersive virtual reality, which enhance the way operators get/put information from/to the robotic scenario. Moreover, the user interface provides the possibility of letting the operator manipulate the remote environment by using multiple ways of interaction (i.e., from the simplification of the natural language to low-level remote programming). In fact, the paper focuses on the lowest level of interaction between the operator and the robot, which is remote programming. As explained in the paper, the system architecture permits any external program (i.e., remote experiment, speech-recognition module, etc.) to have access to almost every feature of the telelaboratory (e.g., cameras, object recognition, robot control, etc.). The system validation was performed by letting 40 Ph.D. students within the European Robotics Research Network Summer School on Internet and Online Robots for Telemanipulation workshop (Benicàssim, Spain, 2003) program several telemanipulation experiments with the telelaboratory. Some of these experiments are shown and explained in detail. Finally, the paper focuses on the analysis of the network performance for the proposed architecture (i.e., time delay). In fact, several configurations are tested through various networking protocols (i.e., Remote Method Invocation, Transmission Control Protocol/IP, User Datagram Protocol/IP). Results show the real possibilities offered by these remote-programming techniques, in order to design experiments intended to be performed from both home and the campus.

    Index Terms—Distributed systems, education and training, human–robot interaction, remote programming, robotics, telelabs.

    I. INTRODUCTION

    ONLINE robots enable multiple users to have control over a remote robotic system in a teleoperated manner. These kinds of systems are very useful for teaching robotics, due to

    Manuscript received January 31, 2004; revised August 13, 2004. Abstract published on the Internet September 26, 2005. This work was funded in part by the Spanish Ministry of Science and Technology (CICYT) under Grant TSI2004-05165-C02-01, Grant TIC2003-08496, and Grant DPI2004-01920, by the Generalitat Valenciana under Grant GV04A/698, and by the Fundació Caixa Castelló under Grant P1-1A2003-10, Grant P1-1B2002-07, and Grant P1-1B2003-15.

    The authors are with the Department of Computer Science and Engineering, University of Jaume I (UJI), E-12006 Castellón, Spain (e-mail: [email protected]).

    Digital Object Identifier 10.1109/TIE.2005.858733

    the fact that they enable the general public (e.g., students) to gain experience of robotics technology directly [8]. In fact, the next step is giving more freedom to the operator, and not only to enable remote control of the robotic system, but also to enable remote programming. To accomplish this, the user-interface design is fundamental, and techniques like virtual reality (VR) and augmented reality (AR) provide help to improving the way students and scientists remotely program the telelaboratory.

    Enabling remote programming of the robot system permits the operator to develop external programs that take control over the whole set of robotic functions. Thus, for example, we could design an experiment in Java for performing a visual-servoing manipulation, or we could even use this interface for designing a voice-operated robot. In fact, this is what we have done for the telelaboratory.

    Interest in the design of Internet-based telelaboratories is increasing enormously, and this technique is still very new. A very good example of already-existing experiments in this area can be found in [13].

    A. System Overview

    In our case, the robot scenario is presented in Fig. 1, where three cameras are presented: one taking images from the top of the scene, a second camera from the side, and a third camera from the front. The first camera is calibrated, and used as an input to the automatic-object-recognition module and three-dimensional (3-D)-model construction. The other two cameras give different points of view to the user when a teleoperation mode is necessary in order to accomplish a difficult task (e.g., manipulating overlapped objects).

    Once the user gets an online connection to the robot, the manipulator goes to the initial position, so the whole scene information is accessible from the top camera. At that moment, the computer-vision module calculates a set of mathematical features that identify every object on the board [9]. Afterwards, the contour information is used in order to determine every stable grasp associated with each object [14], which is necessary for vision-based autonomous manipulation. This process is a must when high-level task specification from a user is required, and also in order to diminish network latency.
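The grasp-determination step builds on contour-level features of this kind. As a rough illustration (this is not the authors' actual feature set from [9]; the class and method names are invented for the sketch), descriptors such as an object's area and centroid can be computed from its contour points:

```java
import java.util.List;

// Sketch of contour-based object features of the kind the vision module
// computes. The actual descriptors of [9] are not reproduced here; these
// two are generic stand-ins.
class ContourFeatures {

    // Area enclosed by the contour polygon, via the shoelace formula.
    public static double area(double[][] contour) {
        double sum = 0.0;
        int n = contour.length;
        for (int i = 0; i < n; i++) {
            double[] p = contour[i];
            double[] q = contour[(i + 1) % n];   // next vertex, wrapping around
            sum += p[0] * q[1] - q[0] * p[1];
        }
        return Math.abs(sum) / 2.0;
    }

    // Centroid of the contour vertices: a simple position descriptor.
    public static double[] centroid(double[][] contour) {
        double cx = 0, cy = 0;
        for (double[] p : contour) { cx += p[0]; cy += p[1]; }
        return new double[] { cx / contour.length, cy / contour.length };
    }
}
```

Because such features are computed once on the server side, only their compact values need to cross the network, which is consistent with the latency argument above.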

    Meanwhile, we shall use the output from these visual algorithms in order to construct a complete 3-D model of the robot scenario (i.e., the objects and the robot arm). Thus, users will be able to interact with the model in a predictive manner, and then, once the operator confirms the task, the real robot will execute the action.

    0278-0046/$20.00 © 2005 IEEE


    MARÍN et al.: MULTIMODAL INTERFACE TO CONTROL ROBOT ARM VIA THE WEB

    Fig. 1. Telelaboratory experimental setup (left) and its associated user interface (right).

    Fig. 2. Different interaction channels enable users to select the most suitable interaction with the robot.

    II. TELEROBOTIC INTERFACE: A COMPARATIVE STUDY

    One of the essential points in the design of a Web-based telerobotic system is the definition of the human–robot interaction. In most telerobotic systems, user interaction is still very computer oriented, since input to the robot is accomplished by filling in forms or selecting commands from a panel. Very little attention has been paid to more natural ways of communication such as natural language or gestures.

    Another essential point is making the robot able to learn from the operator and from practice. It means that the system's robustness and effectiveness increase as time goes by, as the process is performed repeatedly.

    A. University of Jaume I (UJI) Robotics Telelaboratory

    The multiple interaction channels between the user and the robot system can be seen in Fig. 2. In this figure, a pyramidal representation indicates that the complexity degree in the communication with the robot is decreasing from the bottom to the top. It means that at the bottom, the complexity is at a maximum for the user, and at the top (i.e., the ideal case in which the natural language is available), the complexity is at a minimum.

    Note that the different levels of interaction (i.e., voice commands, high-level text commands, direct robot position, etc.) have already been implemented and are fully operational within the system. Moreover, as explained before, this article focuses


    Fig. 3. UJI robotics telelaboratory architecture.

    on low-level remote-programming interaction, which has demonstrated a very interesting application in the education-and-training domain.

    In particular, as we can see in Fig. 3, the telelaboratory accepts experiments (i.e., algorithms) as inputs using any programming language capable of managing Transmission Control Protocol (TCP)/IP sockets. We have already provided a Java library for using the robot in a simple manner. Future efforts will be oriented towards facilitating this library not only in Java, but also with other programming languages (i.e., Matlab, C, etc.). The outputs of the experiments are returned to the user by means of the telelaboratory client-side user interface, which permits the operator to see the results of their programmed actions directly on a multimedia Java 3-D user interface.

    Besides this, the UJI telelaboratory can be accessed by using a Java applet downloaded from a Web page (Web access), and by executing it on the client machine as a Java application. Web access is more convenient when the number of users is big and when we need a robust and reliable way to distribute the most recent version of the software. Please note that Java applets have a security restriction that prevents them from opening a port on the client-side machine when the applet has not been certified by the operator. It means, up to now, that in order to perform the low-level remote programming, it is necessary to accept the Java applet as a trusted source.

    B. Some Issues on Telerobotic Interfaces

    A telerobotic system basically consists of three main parts, which are: 1) the remote robot; 2) the communication network; and 3) the user interface. In our situation, in order to compare the UJI telelaboratory with other online robots, we are going to focus on the user interface, which is our main area of interest. Note that the remote robot depends on the current application, and the communication network is assumed to be always the same (i.e., the Internet).

    First of all, we are going to enumerate the parameters that, in our opinion, will make the elaboration of the posterior comparative analysis clear.

    1) Internet-based versus Web-based: Although every telerobotic system included in the comparative analysis uses the Internet as the connection medium, the ones that are really interesting for us are the online robots, which means that their remote control can be accomplished using the worldwide web (WWW). For a list of the advantages of using the Web, refer to [5].

    2) Predictive display: The use of predictive displays helps diminish the Internet-latency and delay effects. Besides this, it can help with the implementation of virtual task-specification environments [10].

    3) AR: By using AR techniques, user information on the client side is enhanced enormously. It obviously helps robot programming and monitoring [8].

    4) VR (3-D model): By using 3-D-model construction from camera input, the user is able to interact with the system from any point of view. Besides this, it is a very useful technique for implementing the predictive-display functionality and the task-specification environment [10].

    5) Automatic object recognition: In order to specify tasks in a more natural manner, the automatic-object-recognition module is very helpful in those situations where the environment is structured enough to perform the operation


    TABLE I
    COMPARATIVE STUDY AMONG DIFFERENT TELEROBOTIC SYSTEMS CURRENTLY AVAILABLE ON THE WEB

    in a reasonable time. Moreover, defining robotic tasks by using object names instead of object numbers implies that the operations are going to be invariant to the initial position, which significantly enhances the system performance and robustness [9].

    6) Incremental learning: In our opinion, a system should have the ability to learn from experience. It means that the system is becoming more intelligent as time goes by. This learning feature should at least enhance the automatic-object-recognition system performance [9].

    7) Autonomous grasping: One of the key modules, in order to enable high-level interaction, is autonomous grasping. By default, the telerobotic system should calculate the best grasping alternatives for every object in the scene [14].

    8) Very-high-level commands: In order to avoid the cognitive fatigue of the user when manipulating a scenario remotely, the specification of very-high-level commands is allowed. It means that the user is able to operate the robot in a more supervised manner, e.g., "grasp the pencil" [12].

    9) Voice recognition: In our opinion, it would be very interesting if the system allowed the user to issue these high-level commands using a microphone. This means that the user is able to interact with the remote robot at a distance using voice as the only input [11].

    10) Off-line programming: Another important situation to take into account when implementing an online robot is what to do when someone else is using the real manipulator. The off-line-programming technique allows the user to program the robot in a simulated environment, without needing to have physical control of the manipulator. This is very important because it allows people (e.g., students) to work with the robot as long as possible [12].

    11) Tasks specification: In our opinion, it would be interesting to have a way of specifying tasks that have previously been recorded from another interaction. This capability could be used for increasing the possibilities of the robot (e.g., task planning).

    12) Multiple-users management: An online robot should implement some criteria in order to allow multiple users to share robot control. Initially, the first-in–first-out (FIFO) alternative could be a way of doing it, but the possibility of associating priorities depending on the context is more interesting.

    13) Remote programming: This feature is, in our opinion, very interesting because it means that the system can be programmed remotely by using any programming language (i.e., Java, C, etc.). The advantages of having this feature are evident, and these mean that students and scientists can use the system to validate their algorithms in a particular area (e.g., computer vision, object recognition, visual servoing, control, etc.).

    C. A Comparative Study

    Bearing in mind all these properties, a comparative study among several Web telerobotic systems focused on manipulation is presented in Table I. Note that only those systems that are available online and enable the remote control of a robot arm will be considered. Nevertheless, an exhaustive comparison including other manipulation systems or other kinds of robots (e.g., mobile robots, etc.) is out of the scope of this work. Moreover, it is worth remarking that some of the presented information has been acquired by asking the authors directly (i.e., [3]–[5], etc.). Others have been obtained by revising proceedings, books, and research magazines and by using them directly via the Web. In the following, a short description of each system shown in the comparison is presented.

    1) The Mercury project is known as the first telerobotic system on the Web. In August 1994, Ken Goldberg's


    team mounted a digital camera and an air-jet device on a robot arm so that anyone on the Internet could view and excavate for artifacts in a sandbox located in the robotics laboratory at the University of Southern California (USC) [3]. The primary goal was creating a very reliable system capable of operating 24 h/day and surviving sabotage attempts. Moreover, the system had to be low in cost, because of budget limitations. The secondary goal was creating an attractive Web site that encourages users to repeat their visits [5]. To facilitate use by a wide audience of nonspecialists, all robot controls were made available via Mosaic, the only browser available at that time. The two-dimensional (2-D) interface matched the workspace of the robot, so users could move by clicking on the screen, with a few buttons for out-of-plane effects. The Mercury project, starting in late August 1994, was online for seven months. During this time, it received over 87 700 user connections. An off-center air nozzle caused sand to accumulate in one side of the box, so the sand had to be periodically regroomed. Other than a few random downtimes, the robot was online 24 h/day.

    2) Telegarden: This telerobotic installation allows WWW users to view and interact with a remote garden filled with living plants. Members can plant, water, and monitor the progress of seedlings via the movements of an industrial-robot arm. The telegarden was developed at USC and went online in June 1995, and it has been online since August 1995 [4]. In its first year at USC, over 9000 members helped cultivate the telegarden. In September 1996, the telegarden was moved to the new Ars Electronica Center in Austria. This project uses an Adept-1 robot and a multitasking control structure so that many operators can be accommodated simultaneously [17]. Users are presented with a simple interface that displays the garden from a top view, the garden from a global composite view, and a navigation and information view in the form of a robot schematic. By clicking on any of the images, one commands the robot to move to a new absolute location or to a location relative to where they just were. Whereas the Mercury project required operators to wait up to 20 min on a queue for a 5-min turn to control the robot, the telegarden applies multitasking to allow simultaneous access. After X–Y coordinates are received from a client, a command is sent to move the robot arm; a color charge-coupled-device (CCD) camera takes an image; the image is compressed based on user options, such as color resolution, lighting, and size; and then returned to the client.

    3) Australia's telerobot on the Web: The University of Western Australia's telerobot went online in late September 1994, enabling users to manipulate different kinds of objects over a board, following a pick-and-place strategy [2]. The challenge with the interface design is to provide enough information to make interpretation easy while minimizing transmitted data and maintaining a simple layout. One way of doing this is by restricting the number of degrees of freedom (DOFs) that an operator controls, as well as making use of Java and JavaScript technology [16]. One example is the remotely operated 6-DOF Allmänna Svenska Elektriska Aktiebolaget (ASEA) manipulator located at the University of Western Australia, which is believed to be the first remotely operated industrial robot on the Web.

    4) The AR interface for telerobotic applications via Internet (ARITI) system presents a display interface enabling any person to remotely control a robot via the Web [1]. A mixed perceptual concept based on VR and AR technologies is part of the control user interface, allowing one to easily perform a task and describe the desired environment transformation that the robot has to perform.

    5) A very interesting system is the one reported in [7] at the University of British Columbia. A central problem in model-based telerobotic technology is obtaining and maintaining accurate models of the remote site. The system facilitates this by using a fast grayscale vision system at the remote site, which can recognize objects of known type and return their spatial positions to the operator. Basically, it takes a 2-D image and finds edges for every object on the scene. By computing the geometrical properties of those edges, the system is able to recognize an object as belonging to a particular class. This work focuses on 3-D-model construction from vision, enabling the previous simulated interaction through an Internet connection (not Web-based). The UJI online robot offers a 3-D-model construction from vision too, which has the special feature of allowing the manipulation of not only known, but also unknown objects, owing to the inclusion of a grasp-determination-and-execution algorithm. Besides this, it allows many other advantages, such as interacting by using voice as the input.

    6) Another project to be taken into account is Visual Interface for Smart Interactive Task execution (VISIT), which uses the predictive approach in order to communicate the user interface and the robot with existing time delays [6]. The system is designed based on advanced robotic technologies, such as computer vision, advanced motion control, etc., so that the operator can execute a task in a remote site by utilizing a simple mouse. The computer-vision module extracts object characteristics from a camera input, in order to incorporate them into the predictive model. No object recognition is presented.

    III. REMOTE PROGRAMMING: THE EXPERIMENTS JAVA LIBRARY

    In this section, we are going to explain the low-level details of the remote-programming capability.

    As seen in Fig. 3, the experiments-server module provides a TCP/IP interface to remote applications (i.e., experiments) that can take control over most of the telelaboratory features (i.e., cameras, object recognition, robot movements, etc.). The experiments server maintains an open TCP/IP socket over a well-known port (i.e., 7745). It is at this socket where the external applications must be connected to be able to communicate with the telelaboratory. Moreover, the experiments server is responsible for receiving the commands and the data
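The connection handshake an external experiment performs can be sketched as follows. The paper only fixes that the server listens on a well-known TCP port (7745) and accepts text commands; the line-based framing and the "ok: …" reply format below are assumptions, and the demo binds an ephemeral loopback port with a stub server so it runs without the real telelaboratory:

```java
import java.io.*;
import java.net.*;

// Minimal sketch of the socket exchange between an experiment and the
// experiments server. A stub server stands in for the telelaboratory;
// the reply format is hypothetical.
class ExperimentSocketSketch {

    public static String roundTrip(String command) {
        try (ServerSocket server = new ServerSocket(0)) {   // stand-in for port 7745
            Thread stub = new Thread(() -> {
                try (Socket s = server.accept();
                     BufferedReader in = new BufferedReader(
                             new InputStreamReader(s.getInputStream()));
                     PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
                    // Hypothetical acknowledgement: echo the received command.
                    out.println("ok: " + in.readLine());
                } catch (IOException ignored) { }
            });
            stub.start();
            try (Socket client = new Socket("127.0.0.1", server.getLocalPort());
                 PrintWriter out = new PrintWriter(client.getOutputStream(), true);
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(client.getInputStream()))) {
                out.println(command);      // send one text command
                return in.readLine();      // block until the server answers
            }
        } catch (IOException e) {
            return "error: " + e.getMessage();
        }
    }
}
```

Any language able to open such a socket can drive the telelaboratory the same way, which is what makes the architecture language-independent.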


    TABLE II
    ROBOT COMMANDS AVAILABLE

    Fig. 6. Remote-programming algorithm for the objects-attributes experiment.


    Fig. 7. Execution of the objects-attributes experiment in a single step: remote-experiment-execution console (left), telelaboratory user-interface status, and the real-robot state (right).

    Fig. 8. Remote-programming algorithm for the path-planning experiment.

    remote-programming experiment, which is: 1) extending the experiment class; 2) creating an instance of the experiment; 3) calling getSceneManagerSer to obtain the serialized objects from the telelaboratory; 4) executing the corresponding actions on the telelaboratory; and 5) closing the connection (see Fig. 5).

    A. Available Remote-Programming Commands

    The experiments Java library provides the user with the following methods.

    1) connect(): It connects the experiment with the telelaboratory through the TCP/IP interface.

    2) disconnect(): It disconnects the experiment from the telelaboratory.

    3) getSceneManagerSer(): It retrieves the current computer-vision data from the telelaboratory. Information about the existing objects on the scene, their geometrical information, as well as the result from the object-recognition module, are provided.

    4) sendOrder(String command, Boolean prediction): It sends a text command to the telelaboratory (e.g., "grasp the allen", "move to position 10 105", and "move up").

    For a description of the available commands, see Table II. Moreover, if the prediction variable is true, the command will only be executed by the predictive display provided with the telelaboratory 3-D virtual interface. Otherwise, the command will be executed for both the predictive interface and the real robot.
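The five-step experiment pattern outlined earlier, expressed with these method names, might look like the following sketch. The real Experiment base class of the library is not reproduced here; this stand-in merely logs each call so the control flow is visible without a connection to the telelaboratory:

```java
import java.util.ArrayList;
import java.util.List;

// Stand-in for the library's experiment base class: same method names as
// the paper's API, but each call only records itself, so the five-step
// pattern can be exercised offline.
abstract class ExperimentSketch {
    final List<String> log = new ArrayList<>();

    void connect()              { log.add("connect"); }
    void disconnect()           { log.add("disconnect"); }
    String getSceneManagerSer() { log.add("getSceneManagerSer"); return "scene"; }
    void sendOrder(String command, boolean prediction) {
        // prediction = true: predictive display only; false: real robot too.
        log.add("sendOrder(" + command + ", " + prediction + ")");
    }

    abstract void run();
}

// Step 1: extend the experiment class.
class GraspExperiment extends ExperimentSketch {
    @Override void run() {
        connect();                          // open the TCP/IP connection
        getSceneManagerSer();               // step 3: serialized scene objects
        sendOrder("grasp object 1", false); // step 4: act on the real robot
        disconnect();                       // step 5: close the connection
    }
}
```

Step 2 is simply instantiating the subclass (new GraspExperiment()) before invoking run().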

    IV. EXAMPLES OF EXPERIMENTS: EDUCATION AND TRAINING VALIDATION

    In order to validate and evaluate the interest of the remote-programming feature for students and scientists, we proposed several exercises to 40 international Ph.D. students who were participating at the 2003 EURON International Summer School on Internet and Online Robots for Telemanipulation (Benicàssim, Spain, 2003). Every student was able to perform the whole set of remote-programming exercises during a 2-h session. In our opinion, the system provides the students an easy way to access all features of the remote telelaboratory, so they can focus on the implementation of robotic algorithms instead of making efforts to understand the remote-programming procedure. In this section, we present some of the exercises that were implemented by this group of researchers.


    Fig. 9. Execution of the path-planning experiment.

    Fig. 10. Remote-programming algorithm for pick-and-place experiment I.

    A. Objects-Attributes Experiment

    The first example consists of designing a remote program that simply uses the experiment library in order to obtain information about the objects that are currently in the robotics scenario (i.e., number of objects, area, etc.). In Fig. 6, we can see the two single steps (in yellow) that should be performed to accomplish the experiment. First of all, we obtain the computer-vision data related to the robot scenario from the telelaboratory. For example, we can print the dimensions of the calibrated-camera


    Fig. 11. Snapshots of the educational robot when executing pick-and-place experiment I.

    image. The second step gets the information for every object in the scene and shows their area. The result of the execution of the experiment can be seen in Fig. 7.

    B. Path-Planning Experiment

    In this second experiment, the student uses the remote-programming feature to bring the robot from one point to another (i.e., from [x1, y1, z1] to [x2, y2, z2]) by following a straight line. See Figs. 8 and 9 for the algorithm and the execution output, respectively.
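A minimal sketch of such a straight-line path: intermediate set-points are obtained by linear interpolation between the two endpoints, and each waypoint would then be sent to the robot as a movement order. The step count and the class name are illustrative choices, not the values used in Fig. 8:

```java
// Generates evenly spaced waypoints on the segment from a to b, each a
// 3-D position [x, y, z]. Sending these in sequence approximates a
// straight-line motion of the manipulator.
class StraightLinePath {

    public static double[][] interpolate(double[] a, double[] b, int steps) {
        double[][] path = new double[steps + 1][3];
        for (int i = 0; i <= steps; i++) {
            double t = (double) i / steps;          // t runs from 0 to 1
            for (int k = 0; k < 3; k++) {
                path[i][k] = a[k] + t * (b[k] - a[k]);
            }
        }
        return path;
    }
}
```

Each waypoint could be issued, for instance, through sendOrder with a "move to position" command, with prediction set to true to rehearse the path on the predictive display first.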

    C. Pick-and-Place Experiment I

    The third experiment consists of initiating a grasping action over one of the objects in the scene, by using the most stable grasping points. In particular, we execute the action "grasp object 1" on the telelaboratory, then we move the gripper to quadrant 8 of the robot scenario, and finally, we ungrasp the object by executing the command "ungrasp". As can be seen in the algorithm (Fig. 10), the parameter that we pass to the sendOrder command is false, which means that we deactivate the prediction feature and the action is performed on the real robot.

    In Fig. 11, we can see the states of the robot during the execution of the pick-and-place experiment I.

    D. Pick-and-Place Experiment II

    The second pick-and-place experiment enables the user to execute a grasping action over two particular points of the object contour. For this example, the two selected points are defined by the intersection of the object's maximum-inertia axis and its contour. For a broad range of applications, this alternative is sufficient.


    Fig. 12. Remote-programming algorithm for pick-and-place experiment II.

    Fig. 13. Remote-programming algorithm for the autonomous-robot experiment.

    As can be seen in the algorithm (Fig. 12), in this situation, we add a grasping to the object by using the object's attributes p1 and p2, which represent the contour points that intersect with the maximum-inertia axis. After that, we execute the grasping command on that object by using the already-created grasping.
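How such an axis can be derived is sketched below: the orientation of the maximum-inertia axis follows from the second-order central moments of the object's points, theta = 0.5 * atan2(2*mu11, mu20 - mu02), and p1/p2 are then the contour points extremal along that axis. The library's internal derivation is not shown in the paper, so this is a generic reconstruction with an invented class name:

```java
// Orientation of the principal (maximum-inertia) axis of a 2-D point set,
// computed from central moments. Given the contour, p1 and p2 would be
// the contour points farthest apart along this direction.
class InertiaAxis {

    public static double orientation(double[][] points) {
        // Centroid of the point set.
        double cx = 0, cy = 0;
        for (double[] p : points) { cx += p[0]; cy += p[1]; }
        cx /= points.length;
        cy /= points.length;

        // Second-order central moments.
        double mu20 = 0, mu02 = 0, mu11 = 0;
        for (double[] p : points) {
            double dx = p[0] - cx, dy = p[1] - cy;
            mu20 += dx * dx;
            mu02 += dy * dy;
            mu11 += dx * dy;
        }

        // Angle of the axis of maximum inertia, in radians.
        return 0.5 * Math.atan2(2 * mu11, mu20 - mu02);
    }
}
```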

    E. Autonomous-Robot Experiment

    In order to validate the telelaboratory user interface and the system architecture itself, we proposed to the students that they design a remote experiment for the control of a robot in a more autonomous manner. The exercise proposed was to leave the


    Fig. 14. UCS_IP software architecture.

    system waiting for a specific kind of object to appear in the robotic scenario, and once this occurs, grasp the object and classify it into a particular position on the table. Autonomous control is especially related to the kind of industrial application that classifies objects coming along a conveyor belt using camera-vision techniques. The solution proposed for this experiment can be seen in Fig. 13.

    V. RESULTS

    In order to determine the convenience of using the remote-programming telelaboratory technique, different configurations of the architecture have been tested. For every one of these configurations, some experiments have been performed, consisting of 400 remote movements on the robot following a square path. These experiments have been done using TCP, User Datagram Protocol (UDP), and Remote Method Invocation (RMI) protocols. Performance details obtained from these experiments are given below.

    The first configuration that has been tested is a configuration with a Unique Client/Server IP (UCS_IP). This configuration can be seen in Fig. 14. Two different experiments have been implemented with this configuration, one using TCP and the other using UDP.

    In Fig. 15, it can be seen that the latency on campus, for both TCP and UDP, is convenient enough to perform teleoperation experiments. Although there are some robot movements that have required almost 14 ms in communicating with the server side, the average is less than 1 ms. Moreover, by looking at Fig. 16, it can be seen that by teleoperating the robot from home (same city) using the TCP/IP approach, the operation is rapid enough for our purposes (less than 90 ms on average).
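The reported figures are summaries over the per-movement round-trip times of the 400 movements; a sketch of that summary step (the sample values in the test are invented, not measured data from the paper):

```java
// Summarizes a series of per-movement round-trip latencies (in ms) into
// the average and worst-case values reported per configuration.
class LatencyStats {

    public static double average(double[] samplesMs) {
        double sum = 0;
        for (double s : samplesMs) sum += s;
        return sum / samplesMs.length;
    }

    public static double max(double[] samplesMs) {
        double m = samplesMs[0];
        for (double s : samplesMs) m = Math.max(m, s);
        return m;
    }
}
```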

    The next configuration, telelaboratory experiment RMI (TE_RMI), consists of using the telelaboratory features in order for the students and scientists to program their own algorithms remotely. This means that the experiment will connect first via TCP/IP on the local machine with the telelaboratory user interface (client side), and then, once the operation is confirmed, the action is executed on the real robot via an RMI connection with the server's manager. Again, the server's manager connects via Common Object Request Broker Architecture (CORBA) with the remote robot, as can be seen in Fig. 17.

    It must be taken into account that once an operation is sent to the client side, this operation is executed virtually over the predictive interface, and then, once the action is confirmed by sending the command "confirm", this operation goes through the RMI and CORBA interfaces to move the real robot. It means that this predictive configuration is very useful for students who are working off-line or when they want to check the next robot movement in the virtual environment. The tradeoff is that the time latency increases considerably. In fact, the average for the experiments executed on campus is 2770 ms, and for those executed from home, 3975 ms (Fig. 18).

    As can be seen, the first configuration (UCS_IP) is the fastest one, and is the most convenient for traditional telerobotic applications. In fact, the latencies shown in Table II justify this.

    In summary, we have presented a telelaboratory multirobot architecture (TE_RMI) that permits any student or scientist not only to exercise robot control from a user interface, but also to control the manipulator from a Java program (remote programming). This second configuration is much more expensive because it requires a predictive-display feature (i.e., requires confirmation of each command) and it also uses not only one protocol (TCP/IP) but three (i.e., TCP, RMI, and CORBA).

    VI. CONCLUSION

    One of the major difficulties associated with the design of online robots is Internet latency and time delay. When using the Internet as a connectivity medium, it cannot be assured that a certain bandwidth is going to be available at a given moment. It means that once the information is sent from the server to the client (and vice versa), there are no guarantees that these data will reach their destination in a minimum period of time. The delay associated with Internet communications is unpredictable, so the system design must resolve this situation by means of giving more intelligence to the server side and using predictive-display techniques.

    Hence, the user interface has been designed to allow the operator to predict the robot movements before sending the programmed commands to the real robot (predictive system). This kind of interface has the special feature of saving network bandwidth, and it allows for a whole-task-specification, off-line-programming interface. Using a predictive virtual environment and giving more intelligence to the robot means a higher level of interaction, which avoids the cognitive fatigue associated with many teleoperated systems and helps to enormously diminish the effects of Internet latency and time delay.


    1518 IEEE TRANSACTIONS ON INDUSTRIAL ELECTRONICS, VOL. 52, NO. 6, DECEMBER 2005

    Fig. 15. Graphical representation of the latency for the configuration UCS_IP on campus, using both TCP and UDP protocols.

    Fig. 16. Graphical representation of the latency for the configuration UCS_IP in the same city, using both TCP and UDP protocols.

    Such a complex and friendly user interface would not be possible without the help of VR and AR techniques. With these two technologies, the user is able to control almost every viewpoint of the real-robot scenario, and then simulate the programmed tasks before sending them to the real manipulator. Moreover, automatic object recognition and autonomous-grasping execution are crucial in order to provide such a level of user interaction.

    The present paper presents an extension of the UJI online-robot architecture, which not only enables the control of a robot by interacting with an advanced user interface, but also lets scientists and students program their own experiments by using the Java programming language. This has been possible through the definition of the experiments library, which provides, in an encapsulated way, the serialized objects necessary to perform these actions to the telelaboratory clients.

    The researchers who have been using the telelaboratory until now have found it very useful, due to the fact that it enables the operator to control the whole robotic system (cameras, object recognition, predictive interface, robot control, etc.) from a single library. Using a real robotic system that is accessible from any computer can be considered a very good alternative for researchers performing rapid prototyping of their algorithms. Moreover, a pilot group of students have been using the experiments library to design their own experiments.

    MARÍN et al.: MULTIMODAL INTERFACE TO CONTROL A ROBOT ARM VIA THE WEB

    Fig. 17. TE_RMI software architecture.

    Fig. 18. Graphical representation of the latency for the configuration TE_RMI, using both on-campus and in-the-same-city configurations.

    TABLE III
    SUMMARY OF TIME LATENCIES FOR EVERY CONFIGURATION EXPERIMENTED ON

    They became very enthusiastic and were motivated by the experience.

    Future efforts will focus on the design of more advanced experiments for remote manipulation (e.g., remote visual servoing, etc.), as well as the study of automatic methods for the evaluation of the experiments.


    Raúl Marín (M'03-A'04-M'04) received the B.S. degree in computer science engineering and the Ph.D. degree in computer engineering from the University of Jaume I (UJI), Castellón, Spain, in 1996 and 2002, respectively. His Ph.D. studies focused on human-robot interaction for Internet robotics. The title of his thesis was "The UJI online robot: A distributed architecture for pattern recognition, autonomous grasping and augmented reality."

    He previously worked as a Software Developer in the Multimedia Department, BYG Systems Ltd., Nottingham, U.K. In 1997, he joined Lucent Technologies (Bell Laboratories Innovations) and was a Researcher, Software Developer, and Software Architect in the Switching and Access Division. Since 1999, he has been a Researcher and Professor in the Department of Computer Science, UJI. His research interests lie mainly in the field of robotics, including telerobotics using the Internet, multimedia, 3-D virtual environments, and educational tutorials. He is the author or coauthor of a broad range of research publications on telerobotics and multimedia training systems.

    Pedro J. Sanz (M'98) received the B.S. degree in physics from the University of Valencia, Valencia, Spain, the M.S. degree in engineering [computer-aided design/computer-aided manufacturing (CAD/CAM)] from the Polytechnic University of Valencia, Valencia, Spain, and the Ph.D. degree in computer engineering from the University of Jaume I (UJI), Castellón, Spain, in 1985, 1991, and 1996, respectively.

    He is an Associate Professor of Computer Science and Artificial Intelligence, a Researcher at the Robotic Intelligence Laboratory, and the Head of the Multimedia Research Group, UJI. His current research interests include robotics, visually guided grasping, telerobotics, and human-computer interaction. Since 1990, he has been active in research and development within several projects on advanced robotics and multimedia. He is the author or coauthor of a broad range of research publications.

    Dr. Sanz was a recipient of the Best Thesis in Electronics and Computer Science Domains, National Research Prize from the Artigas Spanish Foundation, Polytechnic University of Madrid, Madrid, Spain, in 1996. He collaborates with different scientific societies, such as the IEEE Robotics and Automation Society, the International Association for Pattern Recognition (IAPR), and the Spanish Association for Artificial Intelligence (ECCAI).

    P. Nebot received the M.S. degree in computer science from the University of Jaume I (UJI), Castellón, Spain, in 2002, and is currently working toward the Ph.D. degree in computer science and telerobotics at UJI. The title of his Ph.D. thesis is "Collective tasks for a heterogeneous team of mobile robots."

    In 2002, he joined the Robotics Intelligence Laboratory, UJI, and worked on some research projects. Since 2001, he has been a Research Assistant at UJI, where he teaches courses related to robotics and artificial intelligence. His current research interest is robotics: the implementation of architectures and tasks that allow cooperation and collaboration between mobile robots, and systems of teleoperation to manage any type of robot.

    R. Wirz is currently working toward the Ph.D. degree in computer engineering at the University of Jaume I (UJI), Castellón, Spain.

    In 2003, he joined the Robotic Intelligence Laboratory, UJI, and worked on research projects related to Internet telerobotics and human-robot interfaces. His current research interests include teleoperation, telemanipulation, visual servoing, 3-D virtual environments, and human-robot interfaces.