The Role of the Control Framework for Continuous Teleoperation of a Brain–Machine Interface-Driven Mobile Robot

Luca Tonin, Member, IEEE, Felix Christian Bauer, and José del R. Millán, Fellow, IEEE

Abstract—Despite the growing interest in brain–machine interface (BMI)-driven neuroprostheses, the translation of the BMI output into a suitable control signal for the robotic device is often neglected. In this article, we propose a novel control approach based on dynamical systems that was explicitly designed to take into account the nature of the BMI output, to actively support the user in delivering real-valued commands to the device, and, at the same time, to reduce the false positive rate. We hypothesize that such a control framework would allow users to continuously drive a mobile robot and would enhance the navigation performance. Thirteen healthy users evaluated the system during three experimental sessions. Users exploited a 2-class motor imagery BMI to drive the robot to five targets in two experimental conditions: with a discrete control strategy, traditionally exploited in the BMI field, and with the novel continuous control framework developed herein. Experimental results show that the new approach: 1) allows users to continuously drive the mobile robot via BMI; 2) leads to significant improvements in the navigation performance; and 3) promotes a better coupling between user and robot. These results highlight the importance of designing a suitable control framework to improve the performance and the reliability of BMI-driven neurorobotic devices.

Index Terms—Brain–machine interface (BMI), control framework, motor imagery (MI), neurorobotics.

Manuscript received May 21, 2019; accepted August 6, 2019. Date of publication October 22, 2019; date of current version February 4, 2020. This paper was recommended for publication by Associate Editor B. Argall and Editor P. R. Giordano upon evaluation of the reviewers' comments. This work was supported in part by the Hasler Foundation, Bern, Switzerland, under Grant 17061 and in part by the Swiss National Centre of Competence in Research (NCCR) Robotics. (Corresponding author: Luca Tonin.) L. Tonin is with the Intelligent Autonomous System Lab, Department of Information Engineering, University of Padova, 35122 Padua, Italy (e-mail: luca.tonin@dei.unipd.it). F. C. Bauer is with aiCTX AG, 8050 Zurich, Switzerland (e-mail: felix.bauer@aictx.ai). J. D. R. Millán is with the Department of Electrical and Computer Engineering and the Department of Neurology, University of Texas at Austin, Austin, TX 78705 USA, and also with the Defitech Chair in Brain-Machine Interface, École Polytechnique Fédérale de Lausanne, 1202 Geneva, Switzerland (e-mail: jose.millan@austin.utexas.edu). Color versions of one or more of the figures in this article are available online at http://ieeexplore.ieee.org. Digital Object Identifier 10.1109/TRO.2019.2943072

I. INTRODUCTION

RECENT years have seen a growing interest in the field of neurorobotics, a new interdisciplinary research topic that aims at studying brain-inspired approaches in robotics and at developing innovative human–machine interfaces. In this scenario, brain–machine interfaces (BMIs) represent a promising technology to directly decode the user's intentions from neurophysiological signals and translate them into actions for external devices.
The ultimate goal of BMI systems is to enable people suffering from severe motor disabilities to control new generations of neuroprostheses [1], [2]. Several works have already shown the feasibility and the potential of such a technology with different devices [3]–[7]. However, despite these achievements, the integration between BMI systems and robotics is still in its infancy.

In recent years, different interactions between BMIs and robotic devices have been explored, according to the nature of the mental task performed by the user and to the neural processes involved. For instance, researchers have shown the possibility of exploiting electroencephalography (EEG) correlates of responses to external stimuli (e.g., visual flashes) to control the navigation of mobile devices. In such systems, users can either select the turning direction or the final destination of the robot (e.g., kitchen or bedroom) by looking at the corresponding stimuli on the screen [8]–[13]. Although such interactions have shown promising results, they do not allow full control of the device and they require the user to continuously fixate on the source of the external stimulation (e.g., the screen).

A more natural approach is based on BMI systems able to detect the self-paced modulation of brain patterns and, thus, to allow the user to deliver commands to the robot at any time without the need for exogenous stimulation. In this context, one of the most explored approaches relies on the detection of the neural correlates of motor imagery (MI). MI BMIs detect and classify the endogenous modulation of sensorimotor rhythms while the user is imagining the movement of a specific part of his/her body (e.g., imagination of the movement of the right or left hand). At the neurophysiological level, such a modulation is characterized by the decrease/increase (event-related desynchronization/synchronization, ERD/ERS) of the EEG power in specific frequency bands (i.e., μ and β bands, 8–12 and 16–30 Hz, respectively) and in localized regions of the motor/premotor cortex [14]–[16]. MI BMI systems continuously decode such brain patterns associated with the motor imagery tasks by means of machine learning algorithms. The responses of the BMI decoder (a probability distribution over possible commands) are integrated over time and, finally, a command is delivered to the robot only when a given threshold is reached, i.e., when the control framework is confident about the user's intention. Therefore, although in principle such BMI systems would allow a continuous interaction between user and robot, in practice they result in a discrete control modality, both in terms of timing and nature of the commands, with a low information transfer rate (on average, 0.3 commands per second [17]). This article aims at investigating a novel control approach to generate a continuous control command for MI BMI-driven mobile robots.
Herein, continuous control refers to the direct translation of each decoded BMI output (a probability distribution) into a control signal for the robotic device, explicitly in contrast to the aforementioned discrete interaction modality of most BMI systems.

A. Related Work

Several studies have shown the effectiveness of a discrete control strategy in driving a variety of MI BMI-based devices with healthy subjects and users with motor disabilities. An example of discrete BMI control is the brain-driven wheelchair developed by Vanacker et al. [18], where the authors exploited a 2-class MI BMI to interact with the external device. In this implementation, the user could change the default behavior of the wheelchair (i.e., move forward) by asynchronously delivering discrete commands to make it turn left or right. Furthermore, an intelligent navigation system was in charge of generating the continuous trajectory and of taking care of all the low-level details (e.g., obstacle avoidance) in order to reduce the user's workload. Other works developed BMI-driven wheelchairs following the same discrete user interaction principles [5], [19], [20]. Similarly, in [6], [21]–[24] the authors demonstrated the validity of such an approach to drive a telepresence robot with both healthy subjects and end-users. A discrete interaction modality has also been proposed by Kuhner et al. [25], where the user is allowed to control a mobile robot by selecting specific actions in a hierarchical, menu-based assistant environment.

Enabling BMI users to have a continuous interaction modality and, for instance, to precisely control the extent of the turning direction of the robotic device would instead be desirable. However, the generation of a continuous control signal can be challenging considering the nonstationary nature of EEG patterns and the resulting uncertainty of the decoded classifier output. In the literature, only a few studies have investigated new approaches to use the BMI output as a continuous control signal for robotic devices. From a theoretical point of view, Satti et al. [26] proposed to apply a postprocessing chain based on a Savitzky–Golay filter, an antibiasing strategy, and multiple thresholding in order to remove spikes/outliers and possible bias from the BMI classifier output. The method was evaluated on artificial and real EEG datasets, and the results showed a reduction in the false positive rate. This approach was also tested in an online experiment where three users were asked to continuously control a videogame with a 3-class MI BMI [27]. In [28], Doud et al. proposed a different approach to achieve continuous control of a virtual helicopter. In this case, the modulation of EEG activity (i.e., ERD/ERS during the imagination of six different motor tasks) was linearly mapped to the control signal of the virtual device. However, such a paradigm required a high workload from the user, who needed to be in an active control state at all times. In [29], LaFleur et al. described the follow-up of the previous study with a real quadcopter. More interestingly, in this work the authors introduced a nonlinear quadratic transformation of the EEG signals before the control signal was sent to the device. Furthermore, they applied a fixed threshold to remove minor perturbations that were unlikely to have been generated by intentional control. A linear mapping of the EEG activity into a control signal has also been proposed by Meng et al. [7] in order to control a robotic arm.
In this case, users were asked to perform reaching and grasping tasks in a sequential synchronous paradigm.

B. Contribution and Overview

In this article, we propose a novel control framework for MI BMI that allows a continuous control modality of a telepresence mobile robot in a navigation task. Our aim is to provide a control system able to generate a continuous robot trajectory from the stream of BMI outputs. We decided to use a BMI decoder (instead of regressing the EEG neural patterns into a control signal, as in [28] and [29]) because classifiers have proven to be stable over long periods of time and highly reliable for end-users [6], [24], [30], [31]. However, current control frameworks are specifically conceived for a discrete interaction with the external devices. In particular, BMI systems are designed to maximize the accuracy and the speed in delivering discrete commands (a state known as intentional control, IC). This approach certainly works in experimental settings but can hardly cope with real-world scenarios in which the user wants to continuously drive the robotic device to accomplish daily tasks. Furthermore, current systems do not take into account the situation in which the user does not want to deliver any command to the device. This particular state is known as intentional noncontrol (INC). In the past, researchers have mainly faced INC in two different ways: by exploiting multiclass classification techniques to model the resting state [28], [29], [32], or by leaving the user the burden of actively controlling the BMI so as not to deliver any command [5], [6], [21]. However, the first solution is affected by the complexity of modeling the unbounded resting class, while the second implies a high workload for the user, who needs to actively control the system to counteract possible unintended BMI outputs.

Herein, we hypothesize that the generation of a continuous control signal can be achieved by providing a new framework designed to specifically deal with the particular nature of the BMI decoder output and to explicitly take into account the IC and INC situations. In other terms, the framework: 1) should handle the erratic behavior of the BMI decoder output; 2) should support users when they are actively involved in the MI task (IC); and, at the same time, 3) should prevent them from delivering unintended commands during the resting state (INC). To the best of our knowledge, this is the first time that such a continuous interaction modality for BMI-driven devices is specifically targeted from a pure control perspective. Our proposed control framework is inspired by Schöner and colleagues' work [33]–[35].

Fig. 1. (a) Classical MI BMI closed loop and the mobile robot used in this article: EEG data are acquired, and task-related features (channel-frequency pairs) are extracted and classified in real time by the BMI decoder. Then, the BMI decoder output stream (e.g., posterior probabilities) is integrated in order to accumulate evidence of the user's intention. Finally, when enough evidence is accumulated, a discrete command is sent to the device. (b) Distribution of the posterior probabilities generated by the BMI decoder during the motor imagery task. The solid black line represents the distribution fit computed by an Epanechnikov kernel function. (c) Distribution of the posterior probabilities while the user is resting. The dotted black line represents the distribution fit computed by an Epanechnikov kernel function.
The rest of this article is organized as follows. In Section II, we first model the BMI decoder output with real EEG data from the participants in the study. Second, we briefly review the traditional approach to smoothing the BMI decoder output. Third, we describe the novel approach based on a dynamical system developed herein. Lastly, we use real prerecorded data to simulate the behavior of the new control framework in comparison with the traditional one. Section III describes the online experiment designed to evaluate the new control framework, in which healthy subjects are asked to mentally teleoperate a mobile robot. Finally, in Section IV we present the experimental results; in Section V we discuss them in comparison to prior literature and propose possible extensions of the work to different BMI robotic applications. Section VI concludes this article.

II. CONTROL FRAMEWORK FOR BMI

The first step in designing a new control framework is to model and characterize the output of the BMI system. Then, we describe the traditional strategy based on low-pass smoothing filtering and our new approach based on dynamical systems. Since our focus is on the BMI control framework, we consider the other modules (e.g., acquisition, processing, and decoder) as given [Fig. 1(a)]. We refer to a classical, state-of-the-art BMI based on two motor imagery classes that has been extensively evaluated in previous studies with healthy subjects and end-users driving robotic devices [6], [21], [24]. Furthermore, such an MI BMI system was successfully exploited (winning the gold medal and establishing the world record) in the BMI Race discipline of the Cybathlon 2016 event, the first international neurorobotic competition, held in Zurich in 2016 [30], [31]. Section III.B gives the details of this BMI.

A. Modeling the BMI Decoder Output

The BMI decoder output can be seen as a continuous stream of posterior probabilities indicating the estimated user's intention. It is worth modeling the posterior probability distributions in two specific cases: while the user is actively involved in the motor imagery task and while he/she is at rest. Fig. 1(b) and (c) depict the distributions of real data (user S4) in these two scenarios. Extreme values of the posterior probabilities (close to 0.0 or to 1.0) indicate high-confidence detection of one of the two classes. In the first case [Fig. 1(b)], the BMI correctly classified most of the samples (i.e., posterior probabilities close to 1.0), resulting in a beta-like density function. On the other hand, when the user is resting, we would expect a normal-like distribution centered at 0.5. Instead, the posterior probabilities assume extreme values (close to 0.0 or 1.0), resulting in the bimodal distribution shown in Fig. 1(c). This behavior of the BMI output generalizes to most users. Such an erratic decoder output calls for a control framework able to turn it into a proper control signal for the robotic device.
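As an illustration of this modeling step, the minimal Python sketch below fits an Epanechnikov kernel density, as used for the fits in Fig. 1(b) and (c), to synthetic posterior streams. The beta mixtures merely stand in for real decoder output, and the bandwidth is an assumed value, not a parameter reported in this article.

```python
import numpy as np
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(0)

# stand-ins for real decoder output: beta-like during MI, bimodal at rest
mi_posteriors = rng.beta(5, 1, 5000)                      # mass close to 1.0
rest_posteriors = np.concatenate([rng.beta(1, 5, 2500),   # mass close to 0.0
                                  rng.beta(5, 1, 2500)])  # and close to 1.0

def epanechnikov_fit(samples, bandwidth=0.05):
    """Density estimate over [0, 1] with an Epanechnikov kernel."""
    kde = KernelDensity(kernel="epanechnikov", bandwidth=bandwidth)
    kde.fit(samples.reshape(-1, 1))
    grid = np.linspace(0.0, 1.0, 200).reshape(-1, 1)
    return grid.ravel(), np.exp(kde.score_samples(grid))  # density, not log

grid, density = epanechnikov_fit(rest_posteriors)  # cf. Fig. 1(c)
```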
B. Traditional Approach: Smoothing Filter

In the traditional BMI system, such as the one exploited in this article, the raw posterior probabilities originating from the decoder are accumulated over time with a leaky integrator based on exponential smoothing [36]. Given $x_t$, the posterior probability at time $t$, and $y_{t-1}$, the previous integrated control signal, $y_t$ is computed as follows:

$$y_t = \alpha \cdot x_t + (1 - \alpha) \cdot y_{t-1} \tag{1}$$

where $\alpha \in [0.0, 1.0]$ is the smoothing factor. The closer $\alpha$ is to 1.0, the faster the weight of older values decays and the more $y_t$ tends to follow $x_t$. On the other hand, the closer $\alpha$ is to 0.0, the smaller the contribution of the current posterior probability, leading to a slow response of the system. It is worth noticing that $\alpha$ is adjusted at the beginning (individually for each user) and is then kept fixed during BMI operation. Usual values of $\alpha$ are around 0.03 (slow response) to allow the user to control the system more precisely (the $\alpha$ values used in this article are reported in Section III.C, Table I). Finally, thresholding strategies are used to translate the smoothed signal $y_t$ into specific commands for the robot. As already mentioned, this kind of discrete interaction modality between the BMI user and the device results in an average information transfer rate of 0.3 commands per second [17].
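A minimal sketch of this integrator follows; the threshold value and the left/right labels are illustrative assumptions (the actual thresholds are user-dependent and not listed here), while the update itself is eq. (1).

```python
import numpy as np

def leaky_integrator(posteriors, alpha=0.03, y0=0.5):
    """Exponential smoothing of the decoder output stream, eq. (1)."""
    y = np.empty(len(posteriors))
    prev = y0
    for t, x in enumerate(posteriors):
        prev = alpha * x + (1.0 - alpha) * prev
        y[t] = prev
    return y

def to_command(y, th=0.8):
    """Illustrative thresholding of the smoothed signal into a discrete
    command: one class above th, the other below 1 - th, otherwise none."""
    if y >= th:
        return "right"
    if y <= 1.0 - th:
        return "left"
    return None
```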
Fig. 2. Design of the novel control framework. (a) Free force profile. Blue squares and red circles refer to the attractors and repellers of the system, respectively. The interval [0.0, 1.0] is divided into three basins where a conservative force (dark gray) or a pushing force (light gray) is applied. (b) Representation of the free potential derived from the free force function. (c) Function applied to the decoder output in order to generate the BMI force.

TABLE I. CONTROL FRAMEWORK PARAMETERS. Control framework parameters chosen for each user in the evaluation runs. Parameters' names are the same as used in Section II.

C. Novel Approach: Dynamical System

The control framework proposed in this article is designed to generate a continuous signal for the robotic device. Following the hypotheses stated in Section I.B, it should be able: 1) to handle the erratic behavior of the BMI decoder output described in Section II.A; 2) to support the user's IC when the current state of the system $y_t$ is close to one of the extreme values of the two classes (i.e., 0.0 or 1.0); and 3) to prevent $y_t$ from reaching high values due to random perturbations of the BMI decoder output, and thus to handle the INC state. We define $\Delta y_t$ as a linear combination of two forces

$$\Delta y_t = F_{free}(y_{t-1}) + F_{BMI}(x_t) \tag{2}$$

where $F_{free}(y_{t-1})$ only depends on the previous state of the system and $F_{BMI}(x_t)$ depends on the current BMI output.

$F_{free}$ can be explicitly designed to take care of the IC and INC states. Inspired by Schöner and colleagues' formal technique [33]–[35], we define $F_{free}$ so that it exerts a conservative force when the current state of the system is close to 0.5 and a pushing force otherwise [see Fig. 2(a)]. Theoretically, this makes the system less sensitive to random perturbations (INC state) while, at the same time, pushing $y_t$ to high values if the previous state $y_{t-1}$ was in one of the external regions (IC state). As mentioned before, we hypothesized that meeting these two requirements would support the generation of a reliable continuous control signal for the robot. Hence, the force was chosen so that:

1) $F_{free}(y) = 0$ and $\frac{dF_{free}(y)}{dy} < 0$ for $y \in \{0.0, 0.5, 1.0\}$. These are defined as stable equilibrium points. Note that these points represent the maximum values for the two classes (0.0 and 1.0) and the equally distributed value (0.5).

2) $F_{free}(y) = 0$ and $\frac{dF_{free}(y)}{dy} > 0$ for $y = 0.5 - \omega$ and $y = 0.5 + \omega$, where $\omega \in (0.0, 0.5)$. These are defined as unstable equilibrium points.

According to these requirements, the points $y = 0$, $y = 0.5$, and $y = 1.0$ are attractors for the system, while $y = 0.5 - \omega$ and $y = 0.5 + \omega$ are repellers [see Fig. 2(a)]. A function $F_{free}$ with these properties divides the interval [0.0, 1.0] into three attractor basins separated by the points $0.5 - \omega$ and $0.5 + \omega$: depending on its current value, $y$ will converge toward one of the three attractors [see Fig. 2(a)]. This helps the user not to deliver false positive commands (attractor at $y = 0.5$) and, at the same time, to reach the maximum value if $y_{t-1} < 0.5 - \omega$ or $y_{t-1} > 0.5 + \omega$. Given that, we define the following force $F_{free}$:

$$F_{free}(y) = \begin{cases} -\sin\left(\frac{\pi}{0.5 - \omega} \cdot y\right) & \text{if } y \in [0, 0.5 - \omega) \\ -\psi \sin\left(\frac{\pi}{\omega} \cdot (y - 0.5)\right) & \text{if } y \in [0.5 - \omega, 0.5 + \omega] \\ \sin\left(\frac{\pi}{0.5 - \omega} \cdot (y - 0.5 - \omega)\right) & \text{if } y \in (0.5 + \omega, 1] \end{cases} \tag{3}$$

with $\psi \geq 0$ corresponding to the height of the potential valley [see Fig. 2(b)]. The force has rotational symmetry with respect to 0.5 and, thus, the same force is exerted for the two classes. However, it is worth noticing that an asymmetrical response of the system for the two classes can be achieved by defining $\omega_1 \neq \omega_2$.

$F_{BMI}$ is the second term of (2), and it represents the external force perturbing the system according to the output of the BMI decoder (i.e., the user's intention). As in the previous case, we designed $F_{BMI}$ to reduce or enhance the impact of BMI responses with low or high confidence, respectively (posterior probabilities close to 0.5, or close to 0.0 and 1.0). Hence, the force was chosen so that:

1) $F_{BMI}$ must have rotational symmetry with respect to $x = 0.5$ to map the two BMI classes in the same way.

2) $F_{BMI}(x_t) \approx 0$ for $x_t \in [0.5 - \tilde{x}, 0.5 + \tilde{x}]$. This means that with an uncertain output of the BMI decoder (i.e., around 0.5), the resulting force applied to the system is limited.

Given that, we define the following cubic transformation function:

$$F_{BMI}(x) = 6.4 \cdot (x - 0.5)^3 + 0.4 \cdot (x - 0.5) \tag{4}$$

where $x \in [0.0, 1.0]$ is the posterior probability from the BMI decoder. This function promotes BMI outputs with high confidence (i.e., posterior probabilities close to 0.0 or 1.0) and limits the impact of uncertain decoding (i.e., close to 0.5). The coefficients of the function were chosen through simulations with prerecorded EEG data. Fig. 2(c) depicts a representation of $F_{BMI}$.

Finally, the two forces ($F_{free}$, $F_{BMI}$) are combined according to

$$\Delta y_t = \chi \cdot \left[\phi \cdot F_{free}(y_{t-1}) + (1 - \phi) \cdot F_{BMI}(x_t)\right] \tag{5}$$

with $\chi > 0$ and $\phi \in [0.0, 1.0]$. The parameter $\chi$ controls the overall velocity of the system, while $\phi$ determines the relative contribution of $F_{free}$ and $F_{BMI}$, or, in other terms, how much to trust the BMI decoder output. These two parameters can be tuned by the operator according to the requirements of the application (e.g., by increasing $\chi$ if high reactiveness of the system is required) and to the BMI decoder accuracy (e.g., by decreasing $\phi$ in the case of a highly confident decoder).
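Putting the pieces together, the update of eqs. (2)–(5) can be sketched as below. The parameter values are placeholders (the per-user values are reported in Table I), and clipping the state to [0.0, 1.0] is our assumption to keep y in the valid probability range.

```python
import numpy as np

def f_free(y, omega=0.3, psi=0.5):
    """Free force, eq. (3): attractors at 0.0, 0.5, and 1.0;
    repellers at 0.5 - omega and 0.5 + omega."""
    if y < 0.5 - omega:
        return -np.sin(np.pi / (0.5 - omega) * y)
    if y <= 0.5 + omega:
        return -psi * np.sin(np.pi / omega * (y - 0.5))
    return np.sin(np.pi / (0.5 - omega) * (y - 0.5 - omega))

def f_bmi(x):
    """BMI force, eq. (4): cubic transform of the posterior probability x."""
    return 6.4 * (x - 0.5) ** 3 + 0.4 * (x - 0.5)

def step(y_prev, x, chi=0.1, phi=0.5, omega=0.3, psi=0.5):
    """One integration step, eq. (5); parameters are placeholder values
    and the clip to [0, 1] is an assumption not stated in the text."""
    dy = chi * (phi * f_free(y_prev, omega, psi) + (1.0 - phi) * f_bmi(x))
    return float(np.clip(y_prev + dy, 0.0, 1.0))
```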
Fig. 3. Simulated temporal evolution of the control signal generated (a) by the traditional smoothing filter and (b) by the new dynamical system. Real data from user S4. Black lines represent the integrated control signal during the motor imagery task (solid) and at rest (dotted). Time points when the integrated control signal crosses a predefined fixed threshold (dashed black line) are highlighted in green (during the motor imagery task) or in red (during rest).

D. Simulated Temporal Evolution of the Control Signal

We compared the temporal evolution of the two control frameworks on real data (BMI decoder output) from user S4; the results are depicted in Fig. 3. On the one hand, the traditional control framework [Fig. 3(a)] generates a control signal $y_t$ (starting at 0.5, equal probability for the two classes) that quickly increases (high derivative) toward the correct side when the user is actively performing the task (IC state, solid black line). However, after the initial phase, the velocity of $y_t$ decreases, making it difficult to reach high values and reducing the extent of the control signal. Furthermore, in the case of resting (INC state, dotted black line), random perturbations of $x_t$ may result in locally large changes of $y_t$, making it difficult to keep the control signal below the predefined threshold. Moreover, repeated simulations (N = 10 000) showed that during rest the control signal crossed the given threshold 96.2% of the time, with an average crossing time of 7.2 ± 4.1 s. This is mainly due to the nature of the distribution of the BMI output (Section II.A). It is clear that continuous BMI operation with such an unstable control signal is difficult to achieve.

On the other hand, Fig. 3(b) depicts the temporal evolution of the control signal in the case of the new control approach developed herein. The same data as before have been used. While the user is actively involved in the mental task (black solid line in the figure), the output control signal $y$ quickly converges toward the maximum value (1.0), crossing the given threshold after 1.1 s. It is worth highlighting how the behavior of the signal follows the design requirements of the new control framework: a slow initial velocity (to favor the INC state) that quickly increases to implement the user's intention (to support the IC state). The new control framework also works properly when the user is at rest. In this case, the random perturbations of the BMI output do not affect the control signal, which keeps oscillating around 0.5 (black dotted line in the figure). Repeated simulations (N = 10 000) showed that during the task the control signal crossed the threshold 100% of the time, in 1.4 ± 0.6 s. Importantly, during rest, the control signal crossed the threshold due to random perturbations only 15.5% of the time (compared to 96.2% in the case of the traditional control framework). Furthermore, the few random crossings occurred on average at 10.4 ± 5.5 s, more than 3 s later than with the traditional approach. The simulated results confirm the desired behavior of the control signal generated by the new approach. In the next section, we present an online closed-loop BMI experiment in which users are asked to teleoperate a mobile robot with both the traditional and the new control frameworks.
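The repeated simulations above can be approximated with a simple Monte Carlo harness such as the one below. Since the prerecorded decoder outputs are not available here, a bimodal beta mixture stands in for the rest-state posteriors of Section II.A; the 0.8 threshold and the 60 s horizon are assumptions, while the 16 Hz rate matches the decoder described in Section III.B.

```python
import numpy as np

rng = np.random.default_rng(0)

def crossing_stats(update, n_runs=1_000, n_steps=16 * 60, th=0.8):
    """Fraction of simulated rest-state runs in which the integrated signal
    crosses either threshold, plus the mean time of the first crossing.
    The article reports N = 10 000 repetitions on real data."""
    n_crossed, times = 0, []
    for _ in range(n_runs):
        # bimodal stand-in for rest-state posteriors [cf. Fig. 1(c)]
        x = np.where(rng.random(n_steps) < 0.5,
                     rng.beta(1, 5, n_steps), rng.beta(5, 1, n_steps))
        y = 0.5
        for t, xt in enumerate(x):
            y = update(y, xt)
            if y >= th or y <= 1.0 - th:
                n_crossed += 1
                times.append(t / 16.0)  # decoder runs at 16 Hz
                break
    return n_crossed / n_runs, (float(np.mean(times)) if times else None)

# traditional leaky integrator, eq. (1); the dynamical-system `step`
# sketched in Section II.C can be passed in exactly the same way
rate, t_first = crossing_stats(lambda y, x: 0.03 * x + 0.97 * y)
```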
III. MATERIAL AND METHODS

A. Participants

Thirteen healthy users participated in the study (S1–S13, 25.8 ± 4.3 years old, four females). Users had no history of neurological or psychiatric disorders and were not under any psychiatric medication. Eleven users did not have any previous experience with MI BMI; two had already participated in other BMI experiments (S10 and S11), and only one (S13) had already controlled a mobile robot via MI BMI. Written informed consent was obtained from all experimental subjects in accordance with the principles of the Declaration of Helsinki. The study was approved by the Cantonal Committee of Vaud (Switzerland) for ethics in human research under protocol number PB_2017-00295.

B. Brain–Machine Interface Implementation

In this article, we used a BMI based on 2-class motor imagery (both hands versus both feet motor imagination) to drive the mobile robot. EEG signals were acquired with an active 16-channel amplifier at a 512 Hz sampling rate (g.USBamp, Guger Technologies, Graz, Austria). Data were band-pass filtered between 0.1 and 100 Hz and notch-filtered at 50 Hz (hardware filters). Electrodes were placed over the sensorimotor cortex (Fz, FC3, FC1, FCz, FC2, FC4, C3, C1, Cz, C2, C4, CP3, CP1, CPz, CP2, CP4; international 10–20 system layout) to detect the neural patterns related to MI. We removed the dc component from the signals and spatially filtered them by means of a Laplacian derivation (closest neighbors in a cross layout [37]). We used the spectral power of the EEG signals as features for the BMI system. We computed the power spectral density via Welch's periodogram algorithm with 2 Hz resolution (from 4 to 48 Hz) in 1-s windows sliding every 62.5 ms. Feature selection was performed during the calibration phase (Section III.C) by ranking the candidate spatiospectral features according to their discriminant power [38], calculated through canonical variate analysis, and their neurophysiological meaning. The most discriminative features (channel-frequency pairs, subject-specific) were thus extracted and used to train a Gaussian decoder with a gradient-descent supervised learning approach on the labeled dataset obtained during the calibration phase [6], [24], [39]. In the evaluation phase, the same features were classified into a probability distribution over the two MI tasks (imagination of both hands versus both feet). Outputs of the decoder (posterior probabilities) with an uncertain probability distribution were rejected (rejection parameter fixed at 0.55). As a result of the aforementioned procedures (processing and decoding), the overall BMI system produced a continuous stream of posterior probabilities at a rate of 16 Hz. Afterward, the posterior probabilities were fed to the control framework to accumulate evidence about the current user's intention and to generate a suitable visual feedback for the user and a proper control signal for the robot (for details, refer to Section II). The BMI system relies on open source C libraries for the acquisition of EEG signals¹ and on our own C++ software for the communication between modules and the feedback visualization. The processing and decoding algorithms have been implemented in MATLAB.

¹[Online]. Available: http://neuro.debian.net/pkgs/libeegdev-dev.html
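A sketch of this processing chain is given below. The 512 Hz rate, the 1-s window, and the 2 Hz resolution (nperseg = 256) follow the description above, whereas the Welch overlap and the neighbor indices of the Laplacian are illustrative assumptions that depend on the montage.

```python
import numpy as np
from scipy.signal import welch

FS = 512  # Hz, amplifier sampling rate

def laplacian(eeg, center, cross):
    """Cross-layout Laplacian derivation: center channel minus the mean
    of its closest neighbors. eeg has shape (n_samples, n_channels)."""
    return eeg[:, center] - eeg[:, cross].mean(axis=1)

def psd_features(window_1s):
    """Welch PSD of a 1-s window; nperseg = FS/2 yields 2 Hz resolution.
    The 50% overlap is an assumed choice."""
    f, pxx = welch(window_1s, fs=FS, nperseg=FS // 2, noverlap=FS // 4)
    band = (f >= 4) & (f <= 48)
    return f[band], pxx[band]

# usage on a fake 1-s buffer of the 16 channels listed above
buf = np.random.randn(FS, 16)
c3 = laplacian(buf, center=6, cross=[1, 7, 11])  # C3 vs. FC3, C1, CP3 (illustrative)
freqs, powers = psd_features(c3)
```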
C. BMI Calibration, Evaluation, and Navigation Task

The study was organized in three recording sessions (days). Sessions were interleaved by 34.2 ± 9.0 days, and each one lasted 45 ± 12 min (mean ± standard deviation). As is common practice in the field, we first needed to acquire data to create, calibrate, and evaluate the BMI model for each subject. Fig. 4(a) shows the structure of the recording sessions. During calibration, users performed the two motor imagery tasks (both hands versus both feet) in front of a monitor, following the instructions of a cued protocol. In this phase, a positive visual feedback was always provided and no control of the robot was established. Three runs (60 trials, 30 per class) were recorded, and the data were used to train the Gaussian classifier, which remained fixed for the rest of the experiment. During evaluation, we tested the classifier performance in at least two consecutive runs in which the users actually controlled the movement of the visual feedback using each of the two integration approaches (traditional and new dynamical-system strategy), so as to find the optimal, user-dependent parameters of the two control frameworks. In this phase, users were not controlling the robot. The values for each user are reported in Table I. The initial values of the parameters were selected based on simulations with prerecorded data (Sections II.B and II.C). During the first recording session, we adjusted these values according to the individual performance of each user; they then remained unchanged for the rest of the experiment. Once subjects achieved good BMI performance (>70%), they moved to the next phase, where they completed the navigation tasks. During navigation, users operated the robot with their individual classifier and the two integration frameworks. The navigation field was defined as a rectangular area (width: 900 cm; length: 600 cm) with five circular targets (T1–T5; radius: 25 cm) located at 300 cm and at −90°, −45°, 0°, 45°, and 90° from the starting point (S) at the center (450, 150 cm). A task consisted in driving the robot from the initial position toward one of the five predefined targets [Fig. 4(b)].

Fig. 4. Experimental design. (a) Schematic representation of the experimental structure. In the first session (day), each user performed three BMI calibration runs (without controlling the robot) in order to create the model for the decoder. Afterward, the BMI decoder was tested in two BMI evaluation runs (again, without controlling the robot). In the evaluation block, users also tested both control frameworks (traditional and new dynamical approach) to determine the optimal parameters of the system. Finally, users performed two BMI navigation runs driving the robot. The navigation runs were equally divided per control modality. Sessions 2 and 3 (days 2 and 3) repeated the evaluation and navigation blocks. (b) Experimental field for the navigation tasks. Five targets (T1–T5) were defined for each task. Targets were placed at 3 m from the start position of the robot and at 45° from each other. The user was sitting outside the navigation field so as to be able to see the position of the robot at any time. (c) BMI visual feedback controlled by the user and the corresponding change of the robot's heading direction in the traditional (discrete) and dynamical (continuous) control modality.
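For reference, the target centers follow directly from this geometry. The sketch below assumes that 0° points "ahead" along the field length and that T1–T5 are ordered from −90° to 90°; neither convention is stated explicitly in the article.

```python
import math

START = (450.0, 150.0)                     # cm, starting point S
ANGLES = (-90.0, -45.0, 0.0, 45.0, 90.0)   # degrees from straight ahead
DIST = 300.0                               # cm from S to each target center

def target_centers():
    """Hypothetical mapping of T1-T5 to the five angles (ordering assumed)."""
    sx, sy = START
    return {f"T{i + 1}": (sx + DIST * math.sin(math.radians(a)),
                          sy + DIST * math.cos(math.radians(a)))
            for i, a in enumerate(ANGLES)}
```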
As soon as the robot crossed the target's edge, the trial was considered successfully completed and the robot was manually repositioned at the starting point. Users were not instructed to follow specific trajectories, but we asked them to try to reach the target in the shortest possible time. Furthermore, a trial was considered unsuccessful if the robot left the rectangular area or if the target was not reached within 60 s. Finally, during the navigation tasks, users were able to see the robot, the targets, and the monitor displaying the visual feedback. Users performed between two and six navigation runs per session (depending on their level of fatigue). Each run consisted of ten navigation tasks (two repetitions per target), randomly shuffled. The two control modalities (discrete control with the traditional approach versus continuous control with the new dynamical-system approach) were pseudorandomly assigned to each run (equal number of runs per control modality per session). In total, users performed 88 navigation runs (44 runs per control modality) and 880 tasks. A visual representation of the behavior of the robot according to the BMI feedback in the discrete and continuous control modalities is reported in Fig. 4(c).

D. Mobile Robot

The robot is based on the Robotino platform by FESTO AG (Esslingen am Neckar, Germany), shown in Fig. 1(a). It is a small circular robot (diameter: 370 mm; height: 210 mm; weight: ∼11 kg) equipped with three holonomic wheels, an embedded PC 104 with a compact flash card, and nine infrared proximity sensors mounted in the robot's chassis at an angle of 40° from each other, with a working range up to ∼150 mm (depending on light conditions). Furthermore, we added a laptop (Lenovo X201, Intel Core i5 2.53 GHz, 4 GB RAM, integrated Intel HD video controller) to the robot configuration to overcome the limited computational power of the embedded PC. The laptop was placed on a custom metallic structure fixed to the robot chassis and connected to the robot itself via an Ethernet interface.

E. Navigation System

The motion of the mobile robot relies on a navigation system based on local potential fields, inspired by the work of Bicho et al. [34] and Steinhage et al. [35]. Furthermore, it has already been extensively and successfully evaluated with healthy subjects and end-users in previous works with BMI-driven mobile robots [6], [22]–[24]. In this article, the robot moves forward at a constant speed (0.2 m/s). The angular velocity $v$ of the robot is generated by the following equation:

$$v = (\xi - \xi_{ego}) \, e^{-\frac{(\xi - \xi_{ego})^2}{2}} \tag{6}$$

where $(\xi - \xi_{ego})$ represents the difference between the turning direction and the heading direction of the robot. The user controls the turning direction $\xi$ by delivering BMI commands. In the case of the discrete control modality (Sections II and III.C), $\xi$ may assume two discrete angular values ($\pm\frac{\pi}{4}$), according to the BMI command delivered by the user (left or right). Conversely, in the case of the continuous modality, the control signal is linearly mapped to the interval $[-\frac{\pi}{2}, \frac{\pi}{2}]$ in order to continuously generate the robot's turning direction $\xi$.
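The steering law can be sketched as follows; eq. (6) is taken as written, while the sign convention of the mapping (which MI class pushes left or right) and the thresholding used for the discrete case are our assumptions.

```python
import math

def angular_velocity(xi, xi_ego):
    """Eq. (6): the response grows with the heading difference for small
    differences and vanishes again for large ones."""
    d = xi - xi_ego
    return d * math.exp(-d * d / 2.0)

def turning_direction(y, continuous=True):
    """Map the control signal y in [0, 1] to the turning direction xi:
    linearly onto [-pi/2, pi/2] (continuous) or to +/- pi/4 (discrete)."""
    if continuous:
        return (y - 0.5) * math.pi
    return math.pi / 4.0 if y >= 0.5 else -math.pi / 4.0
```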
The entire navigation system was developed in the Robot Operating System (ROS) ecosystem. Robotino native libraries were wrapped into ROS packages in order to access the sensors' information and the motor controller. We developed two packages for bidirectional communication between the BMI and the ROS framework. In detail, we integrated standard interfaces used in the BMI field (Tobi Interface C and Tobi Interface D [40]) into the ROS environment.

Fig. 5. Initial BMI decoder results. (a) Topographic representation of the most selected features during the calibration block for the μ and β bands. (b) BMI trial accuracy in the evaluation runs. The overall trial accuracy is reported in black; the trial accuracy per control framework in blue and red. (c) BMI trial duration in the evaluation runs. The overall trial duration is reported in black; the trial duration per control framework in blue and red. Mean and standard error of the mean are reported. Statistically significant differences are shown with two-sided Wilcoxon rank-sum tests, (∗): p < 0.05; (∗∗∗): p < 0.001.

F. Tracking System

Given the unreliability of the robot's odometry, trajectories were recorded by an external camera (Microsoft Kinect v2) located 6 m above the navigation field. A red spherical marker was placed on top of the robot to perform automatic detection of the robot within each frame of the recorded video stream. Detection was based on HSV colors and the previous position. Image coordinates were then mapped to real-world trajectories with a homographic transform determined from ten world-image coordinate pairs. Localization and coordinate transformation were done a posteriori using the OpenCV library (version 3.2.0²). Finally, trajectories were smoothed using a moving average filter over 25 data points for each time step.

²[Online]. Available: http://opencv.org/
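The a posteriori localization can be sketched with OpenCV as below. The HSV bounds are illustrative placeholders, and the previous-position constraint mentioned above is omitted; the homography from ten point pairs, the centroid detection, and the 25-point moving average follow the description.

```python
import cv2
import numpy as np

def detect_marker(frame_bgr, lo=(0, 120, 120), hi=(10, 255, 255)):
    """Centroid (px) of the red marker from an HSV mask; bounds illustrative."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lo, np.uint8), np.array(hi, np.uint8))
    m = cv2.moments(mask)
    if m["m00"] == 0:
        return None
    return m["m10"] / m["m00"], m["m01"] / m["m00"]

def image_to_world(img_pts, world_pts, track_px):
    """Fit a homography from (here, ten) world-image pairs and map a track."""
    H, _ = cv2.findHomography(np.float32(img_pts), np.float32(world_pts))
    pts = np.float32(track_px).reshape(-1, 1, 2)
    return cv2.perspectiveTransform(pts, H).reshape(-1, 2)

def smooth(track_cm, k=25):
    """Moving average over k = 25 samples, as applied to the trajectories."""
    kernel = np.ones(k) / k
    return np.column_stack([np.convolve(track_cm[:, i], kernel, mode="same")
                            for i in range(2)])
```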
G. Statistical Analyses

All statistical analyses were performed by comparing and testing for significant differences at the 95% confidence level using unpaired, two-sided Wilcoxon nonparametric rank-sum tests.

IV. RESULTS

A. Initial BMI Decoder Screening

At the beginning of each recording session (day), we evaluated the BMI decoder in a classical cued protocol without the robot. The rationale was to obtain a ground truth of the BMI performance before starting the navigation tasks. Participants were instructed to control a feedback bar on the screen according to the direction provided by a visual cue (see Section III.C). While using the same BMI decoder, participants performed the initial screening with both of the aforementioned control frameworks.

First, the spatial and spectral distribution of the features selected during calibration is coherent with the motor imagery tasks performed by the users. Indeed, Fig. 5(a) shows that channels C3 and C4 were the most selected in the μ band (50 and 52 times versus ten times for Cz) and channel Cz in the β band (24 times versus ten and 11 times for C3 and C4, respectively). These results are in line with the literature regarding the brain cortical regions involved in both-hands and both-feet motor imagery tasks [14]–[16]. Second, Fig. 5(b) and (c) report the BMI performances during the evaluation runs in terms of accuracy (i.e., percentage of successful trials) and time (i.e., duration of each trial). On average, participants achieved an accuracy of 89.9 ± 2.3% and were able to complete a trial in 4.6 ± 0.2 s. In more detail, the traditional control framework seems to perform better in such a classical BMI paradigm, with higher accuracy (93.1 ± 4.1% versus 86.7 ± 2.2%; p = 0.0006) and reduced time (4.0 ± 0.3 s versus 5.2 ± 0.4 s; p = 0.022).

B. Navigation Performance

We evaluated the navigation performance of the two control modalities according to three objective metrics: 1) distance to the ideal (manual) trajectory (Fréchet distance [41]); 2) percentage of reached targets; and 3) time to reach the target. Fig. 6(a) illustrates the heat maps of the trajectories followed by all participants in the case of the traditional (left) and the new (right) control modality. The maps have a 10 cm resolution, targets are indicated by white circles, and the color code ranges from blue (low coverage) to yellow (high coverage). Black lines represent the average trajectories per target and dashed lines the ideal (manual) trajectories. Subpanels around the main image show the individual target heat maps. A preliminary visual inspection of the heat maps already highlights the advantages of the newly proposed control framework, especially in the case of the lateral targets (T1 and T5), where the participants required finer control of the robot to reach them. This observation is substantiated by the results in Fig. 6(b). On average (left column), the new control modality allows users to follow the ideal trajectories significantly better (Fréchet distance of 117.3 ± 7.7 cm versus 85.4 ± 5.0 cm, mean ± STD; p = 0.026). The results stand if we consider each target separately (middle column), with statistical differences in the case of the most lateral ones (T1: p = 0.002; T5: p = 0.039). In addition, the evolution of the distance over runs shows a significant improvement after the first day (right column; p = 0.013).

Fig. 6. Navigation results. (a) Heat maps of the trajectories performed by the robot for the discrete (left) and continuous (right) control modality. Map resolution is 10 cm. Targets T1–T5 are identified by white circles, and the color code ranges from blue (low) to yellow (high coverage). The average trajectories (solid black lines) and the ideal manual trajectories (dashed black lines) are shown per target. Subpanels around the maps report the coverage, the average, and the ideal trajectories for each individual target. (b) Fréchet distance to the ideal trajectories per control framework. From left to right: the overall average distance, the average distance per target, and the evolution of the distance over runs. (c) Navigation accuracy per control framework, corresponding to the percentage of targets successfully reached. From left to right: the overall average accuracy, the average accuracy per target, and the evolution of the accuracy over runs. The black dashed line represents the chance level. (d) Duration in seconds of the navigation tasks per control framework. From left to right: the overall average duration, the average duration per target, and the evolution of the duration over runs. Mean and standard error of the mean are reported. Results for the traditional and the new dynamical-system control framework are shown in blue and red, respectively. Statistically significant differences are shown with two-sided Wilcoxon rank-sum tests, (∗): p < 0.05; (∗∗): p < 0.01.
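For reference, metric 1 can be computed with the classic discrete Fréchet distance recursion below (a standard dynamic-programming sketch; the article does not detail the exact variant used on the recorded trajectories).

```python
import numpy as np

def discrete_frechet(P, Q):
    """Discrete Frechet distance between two 2-D trajectories
    (arrays of shape (n, 2) and (m, 2)), after Eiter and Mannila."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=2)
    n, m = d.shape
    ca = np.empty((n, m))
    ca[0, 0] = d[0, 0]
    for i in range(1, n):
        ca[i, 0] = max(ca[i - 1, 0], d[i, 0])
    for j in range(1, m):
        ca[0, j] = max(ca[0, j - 1], d[0, j])
    for i in range(1, n):
        for j in range(1, m):
            ca[i, j] = max(min(ca[i - 1, j], ca[i - 1, j - 1], ca[i, j - 1]),
                           d[i, j])
    return float(ca[-1, -1])
```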
Fig. 7. Behavioral results from the navigation questionnaires. Users could answer with a score between 1 and 5. The average scores are shown in blue for the traditional and in red for the new dynamical control framework. Mean and standard error of the mean are reported. Statistically significant differences are shown with two-sided Wilcoxon rank-sum tests, (∗): p < 0.05; (∗∗): p < 0.01; (∗∗∗): p < 0.0001.

TABLE II. NAVIGATION QUESTIONNAIRE

The second evaluation metric is related to the percentage of reached targets in the two conditions. Also in this case, the new approach ensures better navigation performance [Fig. 6(c)] and, on average (left column), a significant increment with respect to the traditional control framework (77.3 ± 3.3% versus 86.1 ± 2.6%; p = 0.048). The results in the middle column show a similar consistency across targets, with significantly better performances especially for targets T3 and T4 (p = 0.043 and p = 0.015, respectively). Furthermore, the accuracy with the new control framework consistently improves over runs (right column), reaching a statistically significant difference on the second day (run 3; p = 0.022). Finally, in Fig. 6(d) we report an overall time improvement in the case of the new control framework (33.6 ± 1.1 s versus 31.1 ± 0.8 s). Although such a reduction is in line with the previous results (in terms of distance to the ideal trajectory and accuracy), no significant difference was found (p = 0.42).

C. Behavioral Results

At the end of each recording session, participants were asked to answer two questionnaires in order to assess their subjective evaluation of the two control modalities. Each questionnaire was composed of the same eight questions, and participants could rate them with a score from 1 to 5, as reported in Table II. The average scores for the eight questions are reported in Fig. 7. The results show a general trend in favor of the new approach proposed in this article. In particular, questions Q2 (control precision, p = 0.006), Q4 (keeping the forward direction, p = 0.030), Q5 (effort, p = 0.045), and Q8 (behavior preference, p = 0.000001) show a significant positive impact. These questions are directly related to the design goals of the new dynamical-system control framework. Furthermore, in both conditions participants felt in control of the robot (Q1, score: 3.8 ± 0.2 versus 4.1 ± 0.1; Q3, score: 3.8 ± 0.2 versus 3.7 ± 0.2). Finally, the fact that we let them decide whether to focus their attention on the robot itself or on the visual feedback does not seem to be a confounding factor for the experiment (Q6, score: 3.4 ± 0.3 versus 3.8 ± 0.3; Q7, score: 3.5 ± 0.3 versus 3.7 ± 0.3).

V. DISCUSSION

This article aims at providing a continuous control modality for a BMI-driven mobile robot. Most BMI research focuses on applications based on discrete interaction strategies to drive robotic devices [5], [6], [18]–[25]. Although there exist some examples of continuous BMI control [7], [28], [29], they are scarce, and the investigation of new formal techniques to interpret the user's intention is often neglected. In this scenario, we have hypothesized that a key aspect in achieving such a continuous interaction is the control approach that translates the BMI decoder output into a control signal for the robotic device.
For the first time, we have faced this challenge by formally designing a new control framework for BMI-driven mobile robots and by directly comparing its performance with a traditional approach in a demanding scenario where users were enabled to continuously drive the device.

A. Continuous Interaction and Navigation Performances

First of all, the results showed that the proposed control framework enabled such a continuous interaction modality between the user and the mobile robot. As a consequence, users were able to reliably generate continuous navigation trajectories decoded from their brain activity. In the literature, other works using a continuous control strategy rely on the ability of the users to perform up to six motor imagery tasks and, consequently, to generate the corresponding discriminant brain patterns to control the robotic devices [28], [29]. However, these approaches can hardly be applied in real-world scenarios or for the daily usage of MI BMI applications, due to the high physical and mental demands on the user. This is particularly true in the case of end-users with motor disabilities, who have never been reported to operate an MI BMI with more than two or three classes. It is worth noticing that our approach achieved a continuous interaction between BMI user and robot without any modification of the classical workflow of a 2-class motor imagery BMI, which has been largely demonstrated to be suitable for end-users [6], [24], [31].

Furthermore, the comparison between the traditional and the new approach highlighted consistent and significant improvements in terms of navigation performance. Specifically, the distance to the ideal (manual) trajectory [Fig. 6(b)] is significantly reduced (p < 0.05). Moreover, the new control framework allowed users to increase the percentage of successfully completed navigation tasks [Fig. 6(c)]. This holds particularly for the most difficult targets (T1 and T5), where users required finer control to complete the task. In the case of the duration of the navigation tasks, we did not find significant differences between the two conditions [although the time is slightly reduced for the new approach, Fig. 6(d)]. This is probably due to the short duration of the navigation task (∼30 s), which prevents a clear differentiation between the two control conditions. Finally, the results of the subjective evaluation (Fig. 7) suggest the positive impact of the new continuous interaction modality with the robotic device. In summary, the achieved results support our hypothesis that it is feasible to achieve a continuous interaction by designing a new control framework for an MI BMI-actuated robot.

B. Coupling Between BMI User and Machine

The improvement of the coupling between user and machine is a fundamental aspect of any robotic application, and especially of BMI-driven devices. In the literature, it has been suggested that the enhancement of such an interaction not only increases the operational performance but also promotes the acquisition of BMI skills by the user, namely the ability to generate more reliable and stable brain patterns [31]. Here, we suggest that the new control framework facilitates this coupling in comparison to traditional approaches.
Although it is difficult to directly evaluate this coupling with quantitative metrics, we propose to infer it from the results presented in this article and, in particular, from the temporal evolution of the navigation performances. Interestingly, the temporal evolution over runs of the three navigation metrics [Fig. 6(b)–(d), right column] suggests that the new control framework fosters the user's learning to better control the mobile robot. Indeed, the results show that while users had similar performances in the first run [Fig. 6(b), right column], a significant reduction of the Fréchet distance occurred already in the second run for the newly proposed approach (red line, p < 0.05). With the traditional control framework, users were able to reach similar performances only in the last run of the experiment. In other words, the new control framework allowed users to learn to drive the robot more precisely in a shorter time. The evolution of the task accuracy and duration may be interpreted in a similar way. In the first run, users achieved the same task accuracy in both conditions [∼75%, Fig. 6(c), right column]. For the traditional control condition, the accuracy remained stable until the last run (blue line, run 5), while with the new approach it reached a plateau of ∼90% already in the second run (red line). Although the task duration does not show any statistical difference, the trend is the same as for the two previous metrics: already in the second run, the duration of the task is reduced only in the case of the new control approach [Fig. 6(d), red line]. Subjective results from the questionnaire are in line with these considerations (Fig. 7): users indicated not only an overall significant preference for the new control framework (question Q8, p < 0.0001) but also a more natural, precise, and easy interaction with it (questions Q2, p < 0.01, and Q4, p < 0.05). Moreover, it is worth highlighting that users reported less effort to control the robot in the continuous control modality (question Q5, p < 0.05), even if, theoretically, it is more demanding.

Furthermore, it is worth mentioning the apparent discrepancy between the outcomes of the initial BMI screening (without the robot) and of the navigation tasks. Indeed, users achieved substantially higher BMI accuracy with the traditional approach (p < 0.001) in the evaluation runs, when they were asked only to control the visual feedback on the screen [Fig. 5(b) and (c)]. However, as already discussed, the introduction of the new dynamical-system control framework led to significant improvements at the robotic application level, and it suggests a better coupling between user and machine. This opens the discussion on the fact that metrics commonly used in the BMI field (such as decoder accuracy) might not be fully informative for predicting and evaluating the performance of neurorobotic applications [42]. Indeed, to accomplish complex tasks, such as driving a mobile robot, users not only need to repetitively deliver mental commands as fast as possible (as in common BMI protocols) but also to plan ahead and make eventual corrections. This spotlights the importance of designing a control framework that explicitly handles the requirements of the specific BMI application in order to improve the coupling between user and machine and, as a consequence, the overall performance of the system.
C. Extension to Other BMI Robotic Applications

The proposed control framework has been explicitly designed and successfully evaluated for a teleoperated mobile robotic platform. From a control perspective, the extension to similar BMI applications for motor substitution (e.g., a powered wheelchair) is straightforward: for instance, users may drive a powered wheelchair by continuously controlling the turning direction with the proposed approach, as in the case of the mobile robot. Similarly, the new control framework may be applied to BMI-driven lower-limb exoskeletons. In the literature, most studies use a discrete interaction modality to deliver commands to the device (e.g., go forward, turn right or left) [43]–[47]. In these cases, the proposed approach might support the generation of continuous trajectories for the exoskeleton. A different scenario is trying to decode brain patterns related to the user's intention to make a left or right step [48]. During a walking task, the intended action (the step) is discrete per se, and it does not make sense to provide a continuous interaction modality. However, the generation of a continuous control signal might be useful when users are asked to perform leg extension/flexion robot-assisted exercises (e.g., in a rehabilitation scenario). In this case, our approach might promote fine control of the robotic device and thus improve the rehabilitation outcomes.

The need for a continuous interaction modality is not limited to mobile applications. The same approach can also be applied to operate robotic arms or upper-limb exoskeletons, where three-dimensional (3-D) control would be desirable. In the literature, the operation of such devices is limited to two-dimensional (2-D) control strategies that directly remap the EEG brain patterns into arm trajectories [7], [25]. Herein, we speculate on the possibility of generating 3-D continuous trajectories by properly designing the control framework of a 3-class MI BMI. While two classes would be used to control the device in the x-y plane (as for the mobile platform developed in this work), the third one would be translated into motion along the z dimension. The motion trajectories would be generated by extending the dynamical-system equations to 3-D space. Nevertheless, an extensive evaluation in real closed-loop BMI experiments is definitely required to prove the feasibility of this approach.

D. Future Work

We plan to further improve the new control framework by facilitating the choice of the parameters of the dynamical-system equations. Although the results demonstrated the validity of this approach, the parameterization of the control system is still suboptimal. Equations (3), (4), and (5) depend on the parameters ψ, ω, φ, and χ, which adjust the strength and position of the attractors/repellers and balance the contribution of F_free and F_BMI as well as the overall reactiveness of the system. The initial ranges of these parameters were obtained through analyses of prerecorded data. However, in the first session of the experiment, the operator had to heuristically tune the parameters to optimize the behavior for each user. This should be avoided in order to reduce human intervention as well as possible variability in the performances.
D. Future Work

We plan to further improve the new control framework by facilitating the choice of the parameters in the dynamical system equations. Although the results demonstrated the validity of this approach, the parameterization of the control system is still suboptimal. Equations (3), (4), and (5) depend on the parameters ψ, ω, φ, and χ to adjust the strength and position of the attractors/repellers, to balance the contributions of Ffree and FBMI, and to set the overall reactiveness of the system. The initial ranges of these parameters were obtained from analyses of prerecorded data. However, in the first session of the experiment, the operator had to tune the parameters heuristically to optimize the behavior for each user. This should be avoided in order to reduce both human intervention and possible variability in the performance. For this reason, we performed an a posteriori analysis with a twofold goal: 1) to reduce the number of parameters controlling the behavior of the dynamical system; and 2) to predict the optimal subject-specific values of the parameters from the calibration data. Preliminary results suggest that the overall behavior of the framework can be controlled by using only the two parameters related to the strength and position of the attractors/repellers (i.e., ψ and ω). Furthermore, simulated online performance supports the possibility of predicting the optimal values from calibration data; a sketch of such a search is given below. However, further studies are required to verify these preliminary results and, especially, to evaluate them in a closed-loop online experiment.
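As a concrete illustration of this second goal, one could replay the calibration decoder outputs through the dynamical system for a grid of (ψ, ω) candidates and keep the pair maximizing a simulated navigation score. The sketch below assumes a hypothetical simulate_run function (replaying one probability trace through the dynamics and returning a score); it illustrates the shape of such a search, not the analysis actually performed.

```python
import itertools
import numpy as np

def optimize_parameters(calibration_probs, simulate_run,
                        psi_grid=np.linspace(1.0, 8.0, 15),
                        omega_grid=np.linspace(0.2, 1.2, 11)):
    """Offline grid search over the two remaining parameters.

    calibration_probs : sequence of decoder probability traces
                        recorded during calibration.
    simulate_run      : hypothetical callable replaying one trace
                        through the dynamical system and returning
                        a navigation score (higher is better).
    """
    best_pair, best_score = None, -np.inf
    for psi, omega in itertools.product(psi_grid, omega_grid):
        # Average the simulated score over all calibration traces.
        score = np.mean([simulate_run(trace, psi=psi, omega=omega)
                         for trace in calibration_probs])
        if score > best_score:
            best_pair, best_score = (psi, omega), score
    return best_pair, best_score
```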
A second future development will be to integrate information from the environment by exploiting the robot's sensors. The effectiveness of this approach, namely shared control, has already been demonstrated in the past [5], [6], [18], [21]–[24], where the robot's intelligence was exploited to avoid obstacles in the path. In the case of our new approach, we plan to directly modify the force fields of the BMI control framework according to environmental information, so as to adjust the BMI outputs to the arrangement of objects around the robot (e.g., walls, tables, chairs) and to prevent the execution of wrong or suboptimal user commands. Such a system will need to be evaluated in scenarios more complex than the one considered in this article, in which the user will have to accomplish demanding navigation tasks even in the presence of moving obstacles.
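As a toy illustration of how sensor information could reshape the force fields, each detected obstacle may contribute a repeller centered on its bearing, with a strength that decays with distance, in the spirit of the dynamical-system navigation literature [33], [34]. The functional form and parameters below are assumptions, not the shared-control design we plan to evaluate.

```python
import numpy as np

def obstacle_force(yaw, obstacles, strength=2.0, decay=1.0, width=0.4):
    """Sum of repulsive contributions to the yaw dynamics.

    obstacles : list of (bearing, distance) pairs from the robot's
                sensors, with bearings relative to the current heading.
    Each obstacle adds a repeller centered on its bearing whose
    influence decays exponentially with distance.
    """
    f = 0.0
    for bearing, distance in obstacles:
        delta = yaw - bearing
        # Gaussian-windowed repeller: pushes yaw away from the
        # obstacle bearing, weighted by proximity.
        f += (strength * delta * np.exp(-delta**2 / (2 * width**2))
              * np.exp(-distance / decay))
    return f

# The total heading dynamics would then combine the BMI-driven field
# with the environmental repellers, e.g.:
#   dyaw_dt = f_free(yaw) + f_bmi(yaw, probs) + obstacle_force(yaw, obs)
```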
VI. CONCLUSION

In this article, we proposed a new control framework for an MI BMI-driven mobile robot. We hypothesized that such a novel approach would allow users to continuously control the robot and would have a significant impact on the navigation performance as well as on the human–machine interaction. Thirteen healthy users evaluated the new control framework in comparison with a discrete approach usually exploited in the BMI field. The experiment lasted three sessions (days) and consisted of 880 repetitions of the navigation tasks in total. Results confirmed our hypothesis and showed the possibility of using a continuous control strategy to drive the robot via a classical 2-class MI BMI system. Furthermore, results highlighted an improvement of the navigation performance in all three evaluation metrics: distance to the ideal trajectory, percentage of reached targets, and time to complete the tasks. In addition to providing a new approach that allows BMI users to continuously drive a mobile robotic platform, this article aimed at spotlighting the importance of the control framework in promoting successful operation and fostering the translational impact of BMI-driven robotic applications.

REFERENCES

[1] J. R. Wolpaw, N. Birbaumer, D. J. McFarland, G. Pfurtscheller, and T. M. Vaughan, "Brain–computer interfaces for communication and control," Clin. Neurophysiol., vol. 113, no. 6, pp. 767–791, 2002.
[2] D. Borton, S. Micera, J. del R. Millán, and G. Courtine, "Personalized neuroprosthetics," Sci. Transl. Med., vol. 5, no. 210, 2013, Art. no. 210rv2.
[3] J. D. R. Millán, F. Renkens, J. Mouriño, and W. Gerstner, "Noninvasive brain-actuated control of a mobile robot by human EEG," IEEE Trans. Biomed. Eng., vol. 51, no. 6, pp. 1026–1033, Jun. 2004.
[4] J. D. R. Millán, F. Galán, D. Vanhooydonck, E. Lew, J. Philips, and M. Nuttin, "Asynchronous non-invasive brain-actuated control of an intelligent wheelchair," in Proc. Int. Conf. IEEE Eng. Med. Biol. Soc., 2009, pp. 3361–3364.
[5] T. Carlson and J. D. R. Millán, "Brain-controlled wheelchairs: A robotic architecture," IEEE Robot. Autom. Mag., vol. 20, no. 1, pp. 65–73, Mar. 2013.
[6] R. Leeb, L. Tonin, M. Rohm, L. Desideri, T. Carlson, and J. D. R. Millán, "Towards independence: A BCI telepresence robot for people with severe motor disabilities," Proc. IEEE, vol. 103, no. 6, pp. 969–982, Jun. 2015.
[7] J. Meng, S. Zhang, A. Bekyo, J. Olsoe, B. Baxter, and B. He, "Noninvasive electroencephalogram based control of a robotic arm for reach and grasp tasks," Sci. Rep., vol. 6, no. 1, 2016, Art. no. 38565.
[8] B. Rebsamen et al., "Controlling a wheelchair indoors using thought," IEEE Intell. Syst., vol. 22, no. 2, pp. 18–24, Mar./Apr. 2007.
[9] C. J. Bell, P. Shenoy, R. Chalodhorn, and R. P. N. Rao, "Control of a humanoid robot by a noninvasive brain–computer interface in humans," J. Neural Eng., vol. 5, no. 2, pp. 214–220, 2008.
[10] A. Chella et al., "A BCI teleoperated museum robotic guide," in Proc. Int. Conf. Complex, Intell. Softw. Intensive Syst., 2009, pp. 783–788.
[11] I. Iturrate, J. M. Antelis, A. Kübler, and J. Minguez, "A noninvasive brain-actuated wheelchair based on a P300 neurophysiological protocol and automated navigation," IEEE Trans. Robot., vol. 25, no. 3, pp. 614–627, Jun. 2009.
[12] C. Escolano, A. R. Murguialday, T. Matuz, N. Birbaumer, and J. Minguez, "A telepresence robotic system operated with a P300-based brain–computer interface: Initial tests with ALS patients," in Proc. Int. Conf. IEEE Eng. Med. Biol. Soc., 2010, pp. 4476–4480.
[13] B. Rebsamen et al., "A brain controlled wheelchair to navigate in familiar environments," IEEE Trans. Neural Syst. Rehabil. Eng., vol. 18, no. 6, pp. 590–598, Dec. 2010.
[14] G. Pfurtscheller and F. H. Lopes da Silva, "Event-related EEG/MEG synchronization and desynchronization: Basic principles," Clin. Neurophysiol., vol. 110, no. 11, pp. 1842–1857, 1999.
[15] G. Pfurtscheller and C. Neuper, "Motor imagery and direct brain–computer communication," Proc. IEEE, vol. 89, no. 7, pp. 1123–1134, Jul. 2001.
[16] G. Pfurtscheller, C. Brunner, A. Schlögl, and F. H. Lopes da Silva, "μ rhythm (de)synchronization and EEG single-trial classification of different motor imagery tasks," NeuroImage, vol. 31, no. 1, pp. 153–159, 2006.
[17] E. Thomas, M. Dyson, and M. Clerc, "An analysis of performance evaluation for motor-imagery based BCI," J. Neural Eng., vol. 10, no. 3, 2013, Art. no. 031001.
[18] G. Vanacker et al., "Context-based filtering for assisted brain-actuated wheelchair driving," Comput. Intell. Neurosci., vol. 2007, 2007, Art. no. 25130.
[19] F. Galán et al., "A brain-actuated wheelchair: Asynchronous and noninvasive brain–computer interfaces for continuous control of robots," Clin. Neurophysiol., vol. 119, no. 9, pp. 2159–2169, 2008.
[20] R. Zhang et al., "Control of a wheelchair in an indoor environment based on a brain–computer interface and automated navigation," IEEE Trans. Neural Syst. Rehabil. Eng., vol. 24, no. 1, pp. 128–139, Jan. 2016.
[21] L. Tonin, R. Leeb, M. Tavella, S. Perdikis, and J. D. R. Millán, "The role of shared-control in BCI-based telepresence," in Proc. IEEE Int. Conf. Syst., Man Cybern., 2010, pp. 1462–1466.
[22] L. Tonin, T. Carlson, R. Leeb, and J. D. R. Millán, "Brain-controlled telepresence robot by motor-disabled people," in Proc. Int. Conf. IEEE Eng. Med. Biol. Soc., 2011, pp. 4227–4230.
[23] T. Carlson, L. Tonin, S. Perdikis, R. Leeb, and J. D. R. Millán, "A hybrid BCI for enhanced control of a telepresence robot," in Proc. Int. Conf. IEEE Eng. Med. Biol. Soc., 2013, pp. 3097–3100.
[24] R. Leeb et al., "Transferring brain–computer interfaces beyond the laboratory: Successful application control for motor-disabled users," Artif. Intell. Med., vol. 59, no. 2, pp. 121–132, 2013.
[25] D. Kuhner, L. D. J. Fiederer, J. Aldinger, F. Burget, and M. Völker, "A service assistant combining autonomous robotics, flexible goal formulation, and deep-learning-based brain–computer interfacing," Robot. Auton. Syst., vol. 116, pp. 98–113, 2019.
[26] A. Satti, D. Coyle, and G. Prasad, "Continuous EEG classification for a self-paced BCI," in Proc. Int. Conf. IEEE Eng. Med. Biol. Soc., 2009, pp. 315–318.
[27] D. Coyle, J. Garcia, A. R. Satti, and T. M. McGinnity, "EEG-based continuous control of a game using a 3 channel motor imagery BCI," in Proc. IEEE Symp. Comput. Intell., Cogn. Algorithms, Mind, Brain, 2011, pp. 1–7.
[28] A. J. Doud, J. P. Lucas, M. T. Pisansky, and B. He, "Continuous three-dimensional control of a virtual helicopter using a motor imagery based brain–computer interface," PLoS One, vol. 6, no. 10, 2011, Art. no. e26322.
[29] K. LaFleur, K. Cassady, A. Doud, K. Shades, E. Rogin, and B. He, "Quadcopter control in three-dimensional space using a noninvasive motor imagery-based brain–computer interface," J. Neural Eng., vol. 10, no. 4, 2013, Art. no. 046003.
[30] S. Perdikis, L. Tonin, and J. D. R. Millán, "Brain racers: How paralyzed athletes used a brain-computer interface to win gold at the Cyborg Olympics," IEEE Spectr., vol. 54, no. 9, pp. 44–51, Sep. 2017.
[31] S. Perdikis, L. Tonin, S. Saeedi, C. Schneider, and J. D. R. Millán, "The Cybathlon BCI race: Successful longitudinal mutual learning with two tetraplegic users," PLoS Biol., vol. 16, no. 5, 2018, Art. no. e2003787.
[32] B. Blankertz et al., "The BCI competition III: Validating alternative approaches to actual BCI problems," IEEE Trans. Neural Syst. Rehabil. Eng., vol. 14, no. 2, pp. 153–159, Jun. 2006.
[33] G. Schöner and M. Dose, "A dynamical systems approach to task-level system integration used to plan and control autonomous vehicle motion," Robot. Auton. Syst., vol. 10, pp. 253–267, 1992.
[34] E. Bicho and G. Schöner, "The dynamic approach to autonomous robotics demonstrated on a low-level vehicle platform," Robot. Auton. Syst., vol. 21, no. 1, pp. 23–35, 1997.
[35] A. Steinhage and G. Schöner, "The dynamic approach to autonomous robot navigation," in Proc. IEEE Int. Symp. Ind. Electron., vol. 1, 2002, pp. SS7–S12.
[36] E. S. Gardner, "Exponential smoothing: The state of the art—Part II," Int. J. Forecast., vol. 22, no. 4, pp. 637–666, 2006.
[37] D. J. McFarland, L. M. McCane, S. V. David, and J. R. Wolpaw, "Spatial filter selection for EEG-based communication," Electroencephalogr. Clin. Neurophysiol., vol. 103, no. 3, pp. 386–394, 1997.
[38] F. Galán, P. W. Ferrez, F. Oliva, J. Guardia, and J. D. R. Millán, "Feature extraction for multi-class BCI using canonical variates analysis," in Proc. IEEE Int. Symp. Intell. Signal Process., 2007, p. 111.
[39] J. D. R. Millán, P. W. Ferrez, F. Galán, E. Lew, and R. Chavarriaga, "Noninvasive brain-machine interaction," Int. J. Pattern Recognit. Artif. Intell., vol. 22, no. 5, pp. 959–972, 2008.
[40] G. R. Müller-Putz et al., "Tools for brain-computer interaction: A general concept for a hybrid BCI," Front. Neuroinform., vol. 5, 2011, Art. no. 30.
[41] H. Alt and M. Godau, "Computing the Fréchet distance between two polygonal curves," Int. J. Comput. Geom. Appl., vol. 5, nos. 1–2, pp. 75–91, 1995.
[42] R. Chavarriaga, M. Fried-Oken, S. Kleih, F. Lotte, and R. Scherer, "Heading for new shores! Overcoming pitfalls in BCI design," Brain-Comput. Interfaces, vol. 4, nos. 1–2, pp. 60–73, 2017.
[43] Y. He, D. Eguren, J. M. Azorín, R. G. Grossman, T. P. Luu, and J. L. Contreras-Vidal, "Brain–machine interfaces for controlling lower-limb powered robotic systems," J. Neural Eng., vol. 15, no. 2, 2018, Art. no. 021004.
[44] A. H. Do, P. T. Wang, C. E. King, S. N. Chun, and Z. Nenadic, "Brain-computer interface controlled robotic gait orthosis," J. Neuroeng. Rehabil., vol. 10, no. 1, 2013, Art. no. 111.
[45] A. Kilicarslan, S. Prasad, R. G. Grossman, and J. L. Contreras-Vidal, "High accuracy decoding of user intentions using EEG to control a lower-body exoskeleton," in Proc. Int. Conf. IEEE Eng. Med. Biol. Soc., 2013, pp. 5606–5609.
[46] E. López-Larraz et al., "Control of an ambulatory exoskeleton with a brain–machine interface for spinal cord injury gait rehabilitation," Front. Neurosci., vol. 10, 2016, Art. no. 359.
[47] K. Lee, D. Liu, L. Perroud, R. Chavarriaga, and J. D. R. Millán, "A brain-controlled exoskeleton with cascaded event-related desynchronization classifiers," Robot. Auton. Syst., vol. 90, pp. 15–23, 2016.
[48] D. Liu et al., "EEG-based lower-limb movement onset decoding: Continuous classification and asynchronous detection," IEEE Trans. Neural Syst. Rehabil. Eng., vol. 26, no. 8, pp. 1626–1635, Aug. 2018.
Luca Tonin (M'19) received the Ph.D. degree in robotics from the École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland, in 2013. He then pursued three years of postdoctoral research at the Intelligent Autonomous Systems Laboratory (IAS-Lab), University of Padova, Padua, Italy. Since 2016, he has been a Postdoctoral Researcher with the Defitech Chair in Brain-Machine Interface at EPFL. He is currently a Senior Postdoctoral Researcher with the IAS-Lab, University of Padova. His research focuses on advanced techniques for brain–machine interface (BMI)-driven robotic devices. His main contribution to the BMI field is the design of novel shared control approaches that improve reliability and enhance the coupling between user and robot. In 2016, Dr. Tonin won the first international Cybathlon paralympic event in the BMI race discipline as a co-leader of the BrainTweakers team.

Felix Christian Bauer received the M.Sc. degree in physics in 2017 from ETH Zurich, Zurich, Switzerland, where he is currently working toward the Teaching Diploma in physics. He is currently a Research and Development Engineer with aiCTX AG, Zurich, Switzerland, working on the development of neuromorphic hardware applications. His research interests include noninvasive brain–machine interfaces, artificial intelligence, neural network architectures, and neuromorphic hardware.

José del R. Millán (F'17) received the Ph.D. degree in computer science from the Universitat Politècnica de Catalunya, Barcelona, Spain, in 1992.
He is currently with the Department of Electrical and Computer Engineering and the Department of Neurology, University of Texas at Austin, Austin, TX, USA, where he holds the Carol Cockrell Curran Endowed Chair. He previously held the Defitech Foundation Chair at the École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland, from 2009 to 2019, where he helped establish the Center for Neuroprosthetics. Dr. Millán has made several seminal contributions to the field of brain–machine interfaces (BMIs), especially those based on electroencephalogram signals. Most of his achievements revolve around the design of brain-controlled robots. He has received several recognitions for these pioneering achievements, notably the IEEE SMC Society Norbert Wiener Award in 2011. In recent years, he has prioritized the translation of BMIs to end users suffering from motor disabilities. As an example of this endeavor, his team won the first Cybathlon BMI race in October 2016.