Daniel Foley committed on
Commit 74eb75e · 1 Parent(s): 59d29cb

Added sample knowledgebase


Co-authored-by: Dan dfoley3838@gmail.com
Co-authored-by: Enrico enricoll@bu.edu
Co-authored-by: Brandon bmv2021@bu.edu

Files changed (1)
  1. knowledge.txt +2222 -0
knowledge.txt ADDED
IEEE TRANSACTIONS ON ROBOTICS, VOL. 36, NO. 1, FEBRUARY 2020

The Role of the Control Framework for Continuous Teleoperation of a Brain–Machine Interface-Driven Mobile Robot

Luca Tonin, Member, IEEE, Felix Christian Bauer, and José del R. Millán, Fellow, IEEE
Abstract—Despite the growing interest in brain–machine interface (BMI)-driven neuroprostheses, the translation of the BMI output into a suitable control signal for the robotic device is often neglected. In this article, we propose a novel control approach based on dynamical systems, explicitly designed to take into account the nature of the BMI output, that actively supports the user in delivering real-valued commands to the device and, at the same time, reduces the false positive rate. We hypothesize that such a control framework would allow users to continuously drive a mobile robot and would enhance navigation performance. Thirteen healthy users evaluated the system during three experimental sessions. Users exploited a 2-class motor imagery BMI to drive the robot to five targets under two experimental conditions: with a discrete control strategy, traditionally exploited in the BMI field, and with the novel continuous control framework developed herein. Experimental results show that the new approach: 1) allows users to continuously drive the mobile robot via BMI; 2) leads to significant improvements in navigation performance; and 3) promotes a better coupling between user and robot. These results highlight the importance of designing a suitable control framework to improve the performance and the reliability of BMI-driven neurorobotic devices.

Index Terms—Brain–machine interface (BMI), control framework, motor imagery (MI), neurorobotics.
Manuscript received May 21, 2019; accepted August 6, 2019. Date of publication October 22, 2019; date of current version February 4, 2020. This paper was recommended for publication by Associate Editor B. Argall and Editor P. R. Giordano upon evaluation of the reviewers' comments. This work was supported in part by the Hasler Foundation, Bern, Switzerland, under Grant 17061 and in part by the Swiss National Centre of Competence in Research (NCCR) Robotics. (Corresponding author: Luca Tonin.)

L. Tonin is with the Intelligent Autonomous System Lab, Department of Information Engineering, University of Padova, 35122 Padua, Italy (e-mail: luca.tonin@dei.unipd.it). F. C. Bauer is with aiCTX AG, 8050 Zurich, Switzerland (e-mail: felix.bauer@aictx.ai). J. d. R. Millán is with the Department of Electrical and Computer Engineering and the Department of Neurology, University of Texas at Austin, Austin, TX 78705 USA, and also with the Defitech Chair in Brain-Machine Interface, École Polytechnique Fédérale de Lausanne, 1202 Geneva, Switzerland (e-mail: jose.millan@austin.utexas.edu).

Digital Object Identifier 10.1109/TRO.2019.2943072

I. INTRODUCTION

Recent years have seen a growing interest in the neurorobotics field, a new interdisciplinary research topic that aims at studying brain-inspired approaches in robotics and at developing innovative human–machine interfaces. In this scenario, brain–machine interfaces (BMIs) represent a promising technology to directly decode the user's intentions from neurophysiological signals and translate them into actions for external devices. The ultimate goal of BMI systems is to enable people suffering from severe motor disabilities to control new generations of neuroprostheses [1], [2]. Several works have already shown the feasibility and the potential of such a technology with different devices [3]–[7]. However, despite these achievements, the integration between BMI systems and robotics is still in its infancy.
In recent years, different interactions between BMI and robotic devices have been explored according to the nature of the mental task performed by the user and to the neural processes involved. For instance, researchers have shown the possibility of exploiting EEG correlates of external stimuli (e.g., visual flashes) to control the navigation of mobile devices. In such systems, users can select either the turning direction or the final destination of the robot (e.g., kitchen or bedroom) by looking at the corresponding stimuli on the screen [8]–[13]. Although such interactions have shown promising results, they do not allow full control of the device, and they require the user to continuously fixate on the source of the external stimulation (e.g., the screen).
A more natural approach is based on BMI systems able to detect the self-paced modulation of brain patterns and, thus, to allow the user to deliver commands to the robot at any time, without the need for exogenous stimulation. In this context, one of the most explored approaches relies on the detection of the neural correlates of motor imagery (MI). MI BMIs detect and classify the endogenous modulation of sensorimotor rhythms while the user is imagining the movement of a specific part of his/her body (e.g., the movement of the right or left hand). At the neurophysiological level, such a modulation is characterized by the decrement/increment (event-related de/synchronization, ERD/ERS) of the EEG power in specific frequency bands (i.e., the μ and β bands, 8–12 and 16–30 Hz, respectively) and in localized regions of the motor/premotor cortex [14]–[16]. MI BMI systems continuously decode such brain patterns associated with the motor imagery tasks by means of machine learning algorithms. The responses of the BMI decoder (a probability distribution over possible commands) are integrated over time and, finally, a command is delivered to the robot only when a given threshold is reached, i.e., when the control framework is confident about the user's intention. Therefore, although in principle such BMI systems would allow a continuous interaction between user and robot, in practice they result in a discrete control modality, both in terms of the timing and the nature of the commands, with a low information transfer rate (on average 0.3 commands/second [17]).
This article investigates a novel control approach to generate a continuous control command for MI BMI-driven mobile robots. Herein, continuous control refers to the direct translation of each decoded BMI output (a probability distribution) into a control signal for the robotic device, explicitly in contraposition to the aforementioned discrete interaction modality of most BMI systems.
A. Related Work

Several studies have shown the effectiveness of the discrete control strategy in driving a variety of MI BMI-based devices with healthy subjects and users with motor disabilities.

An example of discrete BMI control is the brain-driven wheelchair developed by Vanacker et al. [18], where the authors exploited a 2-class MI BMI to interact with the external device. In this implementation, the user could change the default behavior of the wheelchair (i.e., move forward) by asynchronously delivering discrete commands to make it turn left or right. Furthermore, an intelligent navigation system was in charge of generating the continuous trajectory and of taking care of all the low-level details (e.g., obstacle avoidance) in order to reduce the user's workload. Other works developed BMI-driven wheelchairs following the same discrete user interaction principles [5], [19], [20].
Similarly, in [6], [21]–[24] the authors demonstrated the validity of such an approach to drive a telepresence robot with both healthy subjects and end-users. A discrete interaction modality has also been proposed by Kuhner et al. [25], where the user is allowed to control a mobile robot by selecting specific actions in a hierarchical, menu-based assistant environment.
Enabling BMI users to have a continuous interaction modality and, for instance, to precisely control the extent of the turning direction of the robotic device would be desirable. However, the generation of a continuous control signal can be challenging considering the nonstationary nature of EEG patterns and the resulting uncertainty of the decoded classifier output.
In the literature, only a few studies have investigated new approaches to use the BMI output as a continuous control signal for robotic devices. From a theoretical point of view, Satti et al. [26] proposed to apply a postprocessing chain based on a Savitzky–Golay filter, an antibiasing strategy, and multiple thresholding in order to remove spikes/outliers and possible bias from the BMI classifier output. The method was evaluated on artificial and real EEG datasets, and the results showed a reduction in the false positive rate. This approach has also been tested in an online experiment where three users were asked to continuously control a videogame via a 3-class MI BMI [27].
In Doud et al. [28], the authors proposed a different approach to achieve continuous control of a virtual helicopter. In this case, the modulation of EEG activity (i.e., ERD/ERS during the imagination of six different motor tasks) was linearly mapped to the control signal of the virtual device. However, such a paradigm required a high workload from the user, who needed to be always in an active control state.
In [29], LaFleur et al. described the follow-up of the previous study with a real quadcopter. More interestingly, in that article the authors introduced a nonlinear quadratic transformation of the EEG signals before the control signal was sent to the device. Furthermore, they applied a fixed threshold to remove minor perturbations that were not likely to have been generated by intentional control.
A linear mapping of the EEG activity into a control signal has also been proposed by Meng et al. [7] in order to control a robotic arm. In this case, users were asked to perform reaching and grasping tasks in a sequential synchronous paradigm.
B. Contribution and Overview

In this article, we propose a novel control framework for MI BMI that allows a continuous control modality of a telepresence mobile robot in a navigation task. Our aim is to provide a control system able to generate a continuous robot trajectory from the stream of BMI outputs. We decided to use a BMI decoder (instead of regressing the EEG neural patterns into a control signal, as in [28] and [29]) because classifiers have proven to be stable over long periods of time and highly reliable for end-users [6], [24], [30], [31].
However, current control frameworks are specifically conceived for a discrete interaction with external devices. In particular, BMI systems are designed to maximize the accuracy and the speed in delivering discrete commands (a state also known as intentional control, IC). This approach works in experimental situations but can hardly cope with real-case scenarios in which the user wants to continuously drive the robotic device to accomplish daily tasks. Furthermore, current systems do not take into account the situation in which the user does not want to deliver any command to the device. This particular state is known as intentional noncontrol (INC). In the past, researchers have mainly faced INC in two different ways: by exploiting multiclass classification techniques to model the resting state [28], [29], [32], or by leaving to the user the burden of actively controlling the BMI so as not to deliver any command [5], [6], [21]. However, the first solution is affected by the complexity of modeling the unbounded resting class, while the second implies a high workload for the user, who needs to actively control the system to counteract possible unintended BMI outputs.
Herein, we hypothesize that the generation of a continuous control signal can be achieved by providing a new framework designed to deal specifically with the particular nature of the BMI decoder output and to explicitly take into account the IC and INC situations. In other terms, the framework: 1) should handle the erratic behavior of the BMI decoder output; 2) should support users when they are actively involved in the MI task (IC); and, at the same time, 3) should prevent them from delivering unintended commands during the resting state (INC).
To the best of our knowledge, this is the first time that such a continuous interaction modality for BMI-driven devices is specifically targeted from a pure control perspective. Our proposed control framework is inspired by the work of Schöner and colleagues [33]–[35].
Fig. 1. (a) Classical MI BMI closed loop and the mobile robot used in this article: EEG data are acquired, and task-related features (channel–frequency pairs) are extracted and classified in real time by the BMI decoder. Then, the BMI decoder output stream (e.g., posterior probabilities) is integrated in order to accumulate evidence of the user's intention. Finally, when enough evidence is accumulated, a discrete command is sent to the device. (b) Distribution of the posterior probabilities generated by the BMI decoder during the motor imagery task. The solid black line represents the distribution fit computed by an Epanechnikov kernel function. (c) Distribution of the posterior probabilities while the user is resting. The dotted black line represents the distribution fit computed by an Epanechnikov kernel function.
The rest of this article is organized as follows. In Section II, we first model the BMI decoder output with real EEG data from the participants in the study. Second, we briefly review the traditional approach to smoothing the BMI decoder output. Third, we describe the novel approach based on a dynamical system developed herein. Lastly, we use real prerecorded data to simulate the behavior of the new control framework in comparison with the traditional one. Section III is devoted to the description of the experiment designed to evaluate the new control framework with healthy subjects in an online experiment where they are asked to mentally teleoperate a mobile robot. Finally, in Section IV we present the experimental results, and in Section V we discuss them in comparison to prior literature and propose possible extensions of the work to different BMI robotic applications. Section VI concludes this article.
II. CONTROL FRAMEWORK FOR BMI

The first step in designing a new control framework is to model and characterize the output of the BMI system. We then describe the traditional strategy based on low-pass smoothing filtering and our new approach based on dynamical systems. Since our focus is on the BMI control framework, we consider the other modules (e.g., acquisition, processing, and decoder) as given [Fig. 1(a)]. We refer to a classical, state-of-the-art BMI based on two motor imagery classes that has been extensively evaluated in previous studies with healthy subjects and end-users driving robotic devices [6], [21], [24]. Furthermore, such an MI BMI system was successfully exploited (winning the gold medal and establishing the world record) in the BMI Race discipline of the Cybathlon 2016 event, the first international neurorobotic competition, held in Zurich in 2016 [30], [31]. Section III.B gives the details of this BMI.
A. Modeling the BMI Decoder Output

The BMI decoder output can be seen as a continuous stream of posterior probabilities indicating the estimated user's intention. It is worth modeling the posterior probability distributions in two specific cases: while the user is actively involved in the motor imagery task and while he/she is at rest. Fig. 1(b) and (c) depict the distributions of real data (user S4) in these two scenarios. Extreme values of the posterior probabilities (close to 0.0 or to 1.0) indicate high-confidence detection of one of the two classes. In the first case [Fig. 1(b)], the BMI correctly classified most of the samples (i.e., posterior probabilities close to 1.0), resulting in a beta-like density function. On the other hand, when the user is resting, we would expect a normal-like distribution centered at 0.5. Instead, the posterior probabilities assume extreme values (close to 0.0 or 1.0), resulting in the bimodal distribution shown in Fig. 1(c). The aforementioned behavior of the BMI output generalizes to most users.

Such an erratic behavior of the BMI decoder output calls for a control framework in order to generate a proper control signal for the robotic device.
B. Traditional Approach: Smoothing Filter

In the traditional BMI system, such as the one exploited in this article, the raw posterior probabilities originating from the decoder are accumulated over time with a leaky integrator based on exponential smoothing [36]. Given the posterior probability xt at time t and the previous integrated control signal yt−1, the control signal yt is computed as follows:

yt = α · xt + (1 − α) · yt−1    (1)
where α ∈ [0.0, 1.0] is the smoothing factor. The closer α is to 1.0, the faster the weight of older values decays, and yt tends to follow xt. Conversely, the closer α is to 0.0, the smaller the contribution of the current posterior probability, leading to a slow response of the system. It is worth noticing that α is adjusted at the beginning (individually for each user) and is then fixed during BMI operation. Usual values of α are around 0.03 (slow response), allowing the user to control the system more precisely (examples of the α values used in this article are reported in Section III.C, Table I).
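
For illustration, a minimal Python sketch of the integrator in (1), together with the fixed-threshold translation into discrete commands discussed next; the α value, the thresholds, and the class-to-command mapping are placeholders, not the per-user settings of Table I.

def integrate(x_t, y_prev, alpha=0.03):
    # Exponential smoothing (1) of the raw posterior probability x_t.
    return alpha * x_t + (1.0 - alpha) * y_prev

def to_command(y_t, th_low=0.2, th_high=0.8):
    # Translate the smoothed signal into a discrete command, if any.
    if y_t >= th_high:
        return "turn_right"   # hypothetical mapping of class 1
    if y_t <= th_low:
        return "turn_left"    # hypothetical mapping of class 2
    return None               # no command delivered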
Finally, thresholding strategies are used to translate the smoothed signal yt into specific commands for the robot. As already mentioned, this kind of discrete interaction modality between the BMI user and the device results in an average information transfer rate of 0.3 commands/second [17].

Fig. 2. Design of the novel control framework. (a) Free force profile. Blue squares and red circles refer to the attractors and repellers of the system, respectively. The interval [0.0, 1.0] is divided into three basins where a conservative force (dark gray) or a pushing force (light gray) is applied. (b) Representation of the free potential derived from the free force function. (c) Function applied to the decoder output in order to generate the BMI force.

TABLE I: CONTROL FRAMEWORK PARAMETERS. Control framework parameters chosen for each user in the evaluation runs. Parameters' names are the same as those used in Section II.
C. Novel Approach: Dynamical System

The control framework proposed in this article is designed to generate a continuous signal for the robotic device. Following the hypotheses mentioned in Section I.B, it should be able: 1) to handle the erratic behavior of the BMI decoder output described in Section II.A; 2) to support the user's IC when the current state of the system yt is close to one of the extreme values of the two classes (i.e., 0.0 or 1.0); and 3) to prevent yt from reaching high values due to random perturbations of the BMI decoder output, and thus to handle the INC state.
We defined Δyt as a linear combination of two forces:

Δyt = Ffree(yt−1) + FBMI(xt)    (2)

where Ffree(yt−1) depends only on the previous state of the system and FBMI(xt) depends on the current BMI output.
Ffree can be explicitly designed to take care of the IC and INC states. Inspired by the formal technique of Schöner and colleagues [33]–[35], we define Ffree so as to exert a conservative force when the current state of the system is close to 0.5 and a pushing force otherwise [see Fig. 2(a)]. Theoretically, this makes the system less sensitive to random perturbations (INC state) while, at the same time, pushing yt to high values if the previous state yt−1 was in the external regions (IC state). As mentioned before, we hypothesized that matching these two requirements would support the generation of a reliable continuous control signal for the robot.
Hence, such a force was chosen so that:
1) Ffree(y) = 0 and dFfree(y)/dy < 0 for y ∈ {0.0, 0.5, 1.0}. These are defined as stable equilibrium points. Note that these points represent the maximum values for the two classes (0.0 and 1.0, respectively) and the equally distributed value (0.5).
2) Ffree(y) = 0 and dFfree(y)/dy > 0 for y = 0.5 − ω and y = 0.5 + ω, where ω ∈ (0.0, 0.5). These are defined as unstable equilibrium points.
According to these requirements, the points y = 0.0, y = 0.5, and y = 1.0 are attractors for the system, while y = 0.5 − ω and y = 0.5 + ω are repellers [see Fig. 2(a)]. A function Ffree with these properties divides the interval [0.0, 1.0] into three attractor basins separated by the points 0.5 − ω and 0.5 + ω: depending on its current value, y will converge toward one of the three attractors [see Fig. 2(a)]. This helps the user not to deliver false positive commands (attractor at y = 0.5) and, at the same time, to reach the maximum value if yt−1 < 0.5 − ω or yt−1 > 0.5 + ω.
Given that, we defined the following force Ffree:

Ffree(y) =
    −sin( π/(0.5 − ω) · y )               if y ∈ [0, 0.5 − ω)
    −ψ · sin( π/ω · (y − 0.5) )           if y ∈ [0.5 − ω, 0.5 + ω]
    sin( π/(0.5 − ω) · (y − 0.5 − ω) )    if y ∈ (0.5 + ω, 1]    (3)

with ψ ≥ 0 corresponding to the height of the potential valley [see Fig. 2(b)]. The force has rotational symmetry with respect to 0.5 and, so, the same force is exerted for the two classes. However, it is worth noticing that it is possible to achieve an asymmetrical response of the system for the two classes by defining ω1 ≠ ω2.

Fig. 3. Simulated temporal evolution of the control signal generated (a) by the traditional smoothing filter and (b) by the new dynamical system. Real data from user S4. Black lines represent the integrated control signal during the motor imagery task (solid) and at rest (dotted). Time points when the integrated control signal crosses a predefined fixed threshold (dashed black line) are highlighted in green (during the motor imagery task) or in red (during rest).
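
A direct Python transcription of (3) may help; this is a sketch with placeholder values for the user-specific parameters ω and ψ of Table I.

import numpy as np

def f_free(y, omega=0.3, psi=1.0):
    # Piecewise sinusoidal force (3): attractors at y = 0.0, 0.5, 1.0,
    # repellers at y = 0.5 - omega and y = 0.5 + omega.
    if y < 0.5 - omega:
        return -np.sin(np.pi / (0.5 - omega) * y)
    if y <= 0.5 + omega:
        return -psi * np.sin(np.pi / omega * (y - 0.5))
    return np.sin(np.pi / (0.5 - omega) * (y - 0.5 - omega))

Each branch vanishes at its equilibria and is continuous at the basin boundaries 0.5 − ω and 0.5 + ω, matching the two requirements listed above.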
FBMI is the second term of (2), and it represents the external force perturbing the system according to the output of the BMI decoder (i.e., the user's intention). As in the previous case, we designed FBMI in order to reduce or enhance the impact of BMI responses with low or high confidence, respectively (posterior probabilities close to 0.5, or close to 0.0 and 1.0).

Hence, such a force was chosen so that:
1) FBMI must have rotational symmetry with respect to x = 0.5 in order to map the two BMI classes in the same way.
2) FBMI(xt) ≈ 0 for xt ∈ [0.5 − x̃, 0.5 + x̃]. This means that with an uncertain output of the BMI decoder (e.g., around 0.5), the resulting force applied to the system is limited.

Given that, we defined the following cubic transformation function:
FBMI(x) = 6.4 · (x − 0.5)³ + 0.4 · (x − 0.5)    (4)

where x ∈ [0.0, 1.0] is the posterior probability from the BMI decoder. This function was selected in order to promote BMI outputs with high confidence (i.e., posterior probabilities close to 0.0 or 1.0) and to limit the impact of uncertain decoding (i.e., close to 0.5). The coefficients of the function were chosen through simulations with prerecorded EEG data. Fig. 2(c) depicts a representation of FBMI.
Finally, the two forces Ffree and FBMI are combined according to:

Δyt = χ · [φ · Ffree(yt−1) + (1 − φ) · FBMI(xt)]    (5)

with χ > 0 and φ ∈ [0.0, 1.0]. The parameter χ controls the overall velocity of the system, while φ determines the relative contribution of Ffree and FBMI or, in other terms, how much to trust the BMI decoder output. These two parameters can be tuned by the operator according to the requirements of the application (e.g., by increasing χ if high reactiveness of the system is required) and to the BMI decoder accuracy (e.g., by decreasing φ in the case of a highly confident decoder).
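
Putting (4) and (5) together, one update step of the dynamical system can be sketched as follows, reusing f_free from the sketch above; the χ and φ defaults are placeholders, and clamping yt to [0.0, 1.0] is our assumption rather than part of the article's formulation.

def f_bmi(x):
    # Cubic transformation (4) of the posterior probability x.
    return 6.4 * (x - 0.5) ** 3 + 0.4 * (x - 0.5)

def step(y_prev, x_t, chi=1.0, phi=0.5, omega=0.3, psi=1.0):
    # One integration step of (5): y_t = y_{t-1} + dy_t.
    dy = chi * (phi * f_free(y_prev, omega, psi)
                + (1.0 - phi) * f_bmi(x_t))
    return min(max(y_prev + dy, 0.0), 1.0)  # assumption: keep y in [0, 1]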
D. Simulated Temporal Evolution of the Control Signal

We compared the temporal evolution of the two control frameworks with real data (BMI decoder output) from user S4; the results are depicted in Fig. 3.

On the one hand, the traditional control framework [Fig. 3(a)] generates a control signal yt (starting at 0.5, equal probability for the two classes) that quickly increases (high derivative value) toward the correct side when the user is actively performing the task (IC state, solid black line). However, after the initial phase, the velocity of yt decreases, making it difficult to reach high values and reducing the extent of the control signal. Furthermore, in the case of resting (INC state, dotted black line), random perturbations of xt might result in locally large changes of yt, making it difficult to keep the control signal below the predefined threshold. Moreover, repeated simulations (N = 10 000) showed that during rest the control signal crossed the given threshold 96.2% of the time, with an average time of 7.2 ± 4.1 s. This is mainly due to the nature of the distribution of the BMI output (Section II.A). It is clear that continuous BMI operation using such an unstable control signal is difficult to achieve.
On the other hand, Fig. 3(b) depicts the temporal evolution of the control signal in the case of the new control approach developed herein. The same data as before were used. While the user is actively involved in the mental task (black solid line in the figure), the output control signal y quickly converges toward the maximum value (1.0), crossing the given threshold after 1.1 s. It is worth highlighting how the behavior of the signal follows the design requirements of the new control framework: a slow initial velocity (to favor the INC state) that quickly increases to implement the user's intention (to support the IC state). The new control framework also works properly when the user is at rest. In this case, the random perturbations of the BMI output do not affect the control signal, which keeps oscillating around 0.5 (black dotted line in the figure). Repeated simulations (N = 10 000) showed that during the task the control signal crossed the threshold 100% of the time, in 1.4 ± 0.6 s. Importantly, during rest, the control signal crossed the threshold due to random perturbations only 15.5% of the time (in comparison to 96.2% in the case of the traditional control framework). Furthermore, the few random crossings occurred on average at 10.4 ± 5.5 s, more than 3 s later than with the traditional approach.
The simulated results confirm the desired behavior of the control signal generated by the new approach. In the next section, we present an online closed-loop BMI experiment where users are asked to teleoperate a mobile robot with the traditional and the new control frameworks.
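
The threshold-crossing protocol can be mimicked with purely synthetic data; the sketch below is illustrative only (the Beta mixture merely imitates the bimodal resting distribution of Fig. 1(c), the threshold is a placeholder, and integrate and step come from the earlier sketches), not a reproduction of the article's simulations.

import numpy as np

rng = np.random.default_rng(0)

def crossed(probs, update, th=0.9):
    # Integrate a stream of posteriors from y = 0.5 and report whether
    # the control signal ever crosses the (upper) command threshold.
    y = 0.5
    for x in probs:
        y = update(y, x)
        if y >= th:
            return True
    return False

# 1000 synthetic resting (INC) runs of 10 s at 16 Hz, with the
# extreme-valued samples typical of the decoder output at rest.
rest = np.where(rng.random((1000, 160)) < 0.5,
                rng.beta(1, 8, (1000, 160)),
                rng.beta(8, 1, (1000, 160)))

fp_filter = np.mean([crossed(p, lambda y, x: integrate(x, y)) for p in rest])
fp_dynsys = np.mean([crossed(p, lambda y, x: step(y, x)) for p in rest])
print(f"rest crossings: {fp_filter:.1%} (filter) vs {fp_dynsys:.1%} (dyn. system)")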
III. MATERIAL AND METHODS

A. Participants

Thirteen healthy users participated in the study (S1–S13; 25.8 ± 4.3 years old; four females). Users did not have any history of neurological or psychiatric disorders, and they were not under any psychiatric medication. Eleven users did not have any previous experience with MI BMI; two had already participated in other BMI experiments (S10 and S11), and only one (S13) had already controlled a mobile robot via MI BMI.

Written informed consent was obtained from all experimental subjects in accordance with the principles of the Declaration of Helsinki. The study was approved by the Cantonal Committee of Vaud (Switzerland) for ethics in human research under protocol number PB_2017-00295.
B. Brain–Machine Interface Implementation

In this article, we used a BMI based on 2-class motor imagery (both hands versus both feet motor imagination) to drive the mobile robot. EEG signals were acquired with an active 16-channel amplifier at a 512 Hz sampling rate (g.USBamp, Guger Technologies, Graz, Austria). Data were band-pass filtered between 0.1 and 100 Hz and notch-filtered at 50 Hz (hardware filters). Electrodes were placed over the sensorimotor cortex (Fz, FC3, FC1, FCz, FC2, FC4, C3, C1, Cz, C2, C4, CP3, CP1, CPz, CP2, CP4; international 10–20 system layout) to detect the neural patterns related to MI. We removed the dc component from the signals and spatially filtered them by means of a Laplacian derivation (closest neighbors in a cross layout [37]).

We used the spectral power of the EEG signals as features for the BMI system. We computed the power spectral density via Welch's periodogram algorithm with 2 Hz resolution (from 4 to 48 Hz) in 1-s windows sliding every 62.5 ms.
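
A sketch of this feature-extraction step under the stated parameters (512 Hz sampling, 1-s windows, 62.5 ms hop, 2 Hz resolution); the function and variable names are ours, and eeg is assumed to be a (channels, samples) array at least one window long.

import numpy as np
from scipy.signal import welch

FS = 512          # sampling rate in Hz
WIN = FS          # 1-s analysis window
HOP = FS // 16    # 62.5 ms hop -> 16 feature vectors per second

def sliding_psd(eeg):
    # Welch PSD per window; nperseg=256 at 512 Hz gives 2 Hz resolution.
    feats = []
    for start in range(0, eeg.shape[1] - WIN + 1, HOP):
        f, pxx = welch(eeg[:, start:start + WIN], fs=FS, nperseg=256)
        band = (f >= 4) & (f <= 48)   # keep the 4-48 Hz range
        feats.append(pxx[:, band])
    return np.asarray(feats)          # (windows, channels, frequencies)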
Feature selection was performed during the calibration phase (Section III.C) by ranking the candidate spatiospectral features according to their discriminant power [38], calculated through canonical variate analysis, and to their neurophysiological meaning. The most discriminative features (channel–frequency pairs, subject-specific) were then extracted and used to train a Gaussian decoder with a gradient-descent supervised learning approach, using the labeled dataset obtained during the calibration phase [6], [24], [39]. In the evaluation phase, the same features were classified into a probability distribution over the two MI tasks (imagination of both hands versus both feet). Outputs of the decoder (posterior probabilities) with an uncertain probability distribution were rejected (rejection parameter fixed at 0.55). As a result of the aforementioned procedures (processing and decoding), the overall BMI system produced a continuous stream of posterior probabilities at a rate of 16 Hz. Afterward, the posterior probabilities were fed to the control framework to accumulate evidence about the current user's intention and to generate suitable visual feedback for the user and a proper control signal for the robot (for details, refer to Section II). The BMI system relies on open source C libraries for the acquisition of EEG signals (libeegdev, http://neuro.debian.net/pkgs/libeegdev-dev.html) and on our own C++ software for the communication between modules and the feedback visualization. The processing and decoding algorithms were implemented in MATLAB.
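
As a generic stand-in for this decoder (the article trains its Gaussian classifier by gradient descent; plain maximum-likelihood fitting with equal priors is used here for brevity), the posterior computation and the 0.55 rejection rule might look like:

import numpy as np
from scipy.stats import multivariate_normal

def fit_gaussians(X, labels):
    # X: (n, d) selected features; labels in {0, 1} for the two MI tasks.
    return [(X[labels == c].mean(axis=0), np.cov(X[labels == c].T))
            for c in (0, 1)]

def decode(model, x, reject=0.55):
    # Posterior of class 1 by Bayes' rule with equal priors; uncertain
    # outputs are rejected, as described above.
    like = [multivariate_normal.pdf(x, mean=m, cov=C) for m, C in model]
    p1 = like[1] / (like[0] + like[1])
    return p1 if max(p1, 1.0 - p1) >= reject else None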
C. BMI Calibration, Evaluation, and Navigation Task

The study was organized in three different recording sessions (days). Sessions were separated by 34.2 ± 9.0 days, and each one lasted 45 ± 12 min (mean ± standard deviation). Following the common approach in the field, we first acquired initial data to create, calibrate, and evaluate the BMI model for each subject. Fig. 4(a) shows the structure of the recording sessions.

During calibration, users performed the two motor imagery tasks (both hands versus both feet) in front of a monitor following the instructions of a cued protocol. In this phase, positive visual feedback was always provided, and no control of the robot was established. Three runs (60 trials, 30 per class) were recorded, and the data were used to train the Gaussian classifier, which remained fixed for the rest of the experiment.

During evaluation, we tested the classifier performance in at least two consecutive runs where the users actually controlled the movement of the visual feedback using each of the two integration approaches (traditional and new dynamical system strategy), so as to find the optimal, user-dependent parameters of the two control frameworks. In this phase, users were not controlling the robot. The values for each user are reported in Table I. The initial values of the parameters were selected based on simulations with prerecorded data (Sections II.B and II.C). During the first recording session, we adjusted these values according to the individual performance of each user; they did not change during the rest of the experiment. Once subjects achieved good BMI performance (>70%), they moved to the next phase, where they completed the navigation tasks.

During navigation, users operated the robot with their individual classifier and the two integration frameworks. The navigation field was defined as a rectangular area (width: 900 cm; length: 600 cm) with five circular targets (T1–T5; radius: 25 cm) located at 300 cm and at −90°, −45°, 0°, 45°, and 90° from the starting point (S) at the center (450, 150 cm). A task consisted of driving the robot from the initial position toward one of the five predefined targets [Fig. 4(b)]. As soon as the robot crossed the target's edge, the trial was considered successfully completed, and the robot was manually repositioned at the starting point. Users were not instructed to follow specific trajectories, but we asked them to try to reach the target in the shortest possible time. Furthermore, a trial was considered unsuccessful if the robot left the rectangular area or if the target was not reached within 60 s. Finally, during the navigation tasks, users were able to see the robot, the targets, and the monitor displaying the visual feedback.

Users performed between two and six navigation runs per session (depending on their level of fatigue). Each run consisted of ten navigation tasks (two repetitions per target), randomly shuffled. The two control modalities (discrete control with the traditional approach versus continuous control with the new dynamical system approach) were pseudorandomly assigned to each run (equal number of runs per control modality per session). Users performed 88 navigation runs in total (44 runs per control modality) and 880 tasks. A visual representation of the behavior of the robot according to the BMI feedback in the discrete and continuous control modalities is reported in Fig. 4(c).

Fig. 4. Experimental design. (a) Schematic representation of the experimental structure. In the first session (day), each user performed three BMI calibration runs (without controlling the robot) in order to create the model for the decoder. Afterward, the BMI decoder was tested in two BMI evaluation runs (again, without controlling the robot). In the evaluation block, users also tested both control frameworks (traditional and new dynamical approach) to determine the optimal parameters of the system. Finally, users performed two BMI navigation runs driving the robot. The navigation runs were equally divided per control modality. Sessions 2 and 3 (days 2 and 3) repeated the evaluation and navigation blocks. (b) Experimental field for the navigation tasks. Five targets (T1–T5) were defined for each task. Targets were placed at 3 m from the start position of the robot and at 45° from each other. The user sat outside the navigation field so as to be able to see the position of the robot at any time. (c) BMI visual feedback controlled by the user and the corresponding change of the robot's heading direction in the traditional (discrete) and dynamical (continuous) control modalities.
D. Mobile Robot

The robot is based on the Robotino platform by FESTO AG (Esslingen am Neckar, Germany), shown in Fig. 1(a). It is a small circular robot (diameter: 370 mm; height: 210 mm; weight: ∼11 kg) equipped with three holonomic wheels, an embedded PC/104 with a compact flash card, and nine infrared proximity sensors mounted in the robot's chassis at an angle of 40° from each other, with a working range of up to ∼150 mm (depending on light conditions). Furthermore, we added a laptop (Lenovo X201, Intel Core i5 2.53 GHz, 4 GB RAM, integrated Intel HD video controller) to the robot configuration to overcome the limited computational power of the embedded PC. The laptop was placed on a custom metallic structure fixed to the robot chassis and connected to the robot itself via an Ethernet interface.
E. Navigation System

The motion of the mobile robot relies on a navigation system based on local potential fields and inspired by the work of Bicho et al. [34] and Steinhage et al. [35]. Furthermore, it had already been extensively and successfully evaluated with healthy subjects and end-users in previous works with BMI-driven mobile robots [6], [22]–[24].
In this article, the robot moves forward at a constant speed (0.2 m/s). The angular velocity v of the robot is generated by the following equation:

v = (ξ − ξego) · e^( −(ξ − ξego)² / 2 )    (6)

where (ξ − ξego) represents the difference between the turning direction and the heading direction of the robot. The user is allowed to control the turning direction ξ by delivering BMI commands. In the case of the discrete control modality (Sections II and III.C), ξ may assume two discrete angular values (±π/4), according to the BMI command delivered by the user (left or right). Conversely, in the case of the continuous modality, the control signal is linearly mapped to the interval [−π/2, π/2] in order to continuously generate the robot's turning direction ξ.
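
In code, the two mappings and the velocity law (6) can be sketched as follows; the names are ours, y is the control signal of Section II, and xi_ego is the robot's current heading.

import numpy as np

def turning_direction(y, continuous=True):
    if continuous:
        return (y - 0.5) * np.pi                  # linear map to [-pi/2, pi/2]
    return np.pi / 4 if y > 0.5 else -np.pi / 4   # discrete +/- pi/4 commands

def angular_velocity(xi, xi_ego):
    d = xi - xi_ego
    return d * np.exp(-d ** 2 / 2.0)              # equation (6)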
The entire navigation system was developed in the Robot Operating System (ROS) ecosystem. The Robotino native libraries were wrapped into ROS packages in order to access the sensors' information and the motor controller. We developed two packages for bidirectional communication between the BMI and the ROS framework. In detail, we integrated standard interfaces used in the BMI field (TOBI Interface C and TOBI Interface D [40]) into the ROS environment.
Fig. 5. Initial BMI decoder results. (a) Topographic representation of the most selected features during the calibration block for the μ and β bands. (b) BMI trial accuracy in the evaluation runs. The overall trial accuracy is reported in black; the trial accuracy per control framework in blue and red. (c) BMI trial duration in the evaluation runs. The overall trial duration is reported in black; the trial duration per control framework in blue and red. Mean and standard error of the mean are reported. Statistically significant differences are shown with two-sided Wilcoxon rank-sum tests; (∗): p < 0.05; (∗∗∗): p < 0.001.
F. Tracking System

Given the unreliability of the robot's odometry, trajectories were recorded by an external camera (Microsoft Kinect v2) located 6 m above the navigation field. A red spherical marker was placed on top of the robot to perform automatic detection of the robot within each frame of the recorded video stream. Detection was based on HSV colors and the previous position. Image coordinates were then mapped to real-world trajectories with a homographic transform determined by ten world-image coordinate pairs. Localization and coordinate transformation were done a posteriori using the OpenCV library (version 3.2.0, http://opencv.org/). Finally, trajectories were smoothed using a moving average filter over 25 data points for each time step.
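
A sketch of this offline pipeline with OpenCV; the HSV bounds are placeholders, and image_pts/world_pts stand for the ten measured coordinate pairs.

import cv2
import numpy as np

def build_homography(image_pts, world_pts):
    # Matching (10, 2) float32 arrays of world-image coordinate pairs.
    H, _ = cv2.findHomography(image_pts, world_pts)
    return H

def locate_marker(frame_bgr, H, lo=(0, 120, 70), hi=(10, 255, 255)):
    # Threshold the red marker in HSV space and take the mask centroid.
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, lo, hi)
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None                               # marker not found
    px = np.array([[[xs.mean(), ys.mean()]]], dtype=np.float32)
    return cv2.perspectiveTransform(px, H)[0, 0]  # field coordinates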
G. Statistical Analyses

All statistical analyses were performed by comparing and testing for significant differences at the 95% confidence level using unpaired, two-sided Wilcoxon nonparametric rank-sum tests.
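
Each comparison reduces to a single SciPy call; for example (array contents are hypothetical):

from scipy.stats import ranksums

def compare(a, b, alpha=0.05):
    # Unpaired, two-sided Wilcoxon rank-sum test on two metric samples.
    stat, p = ranksums(a, b)
    return p, p < alpha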
IV. RESULTS

A. Initial BMI Decoder Screening

At the beginning of each recording session (day), we evaluated the BMI decoder in a classical cued protocol without the robot. The rationale was to have a ground truth of the BMI performance before starting the navigation tasks. Participants were instructed to control a feedback bar on the screen according to the direction provided by a visual cue (see Section III.C). While using the same BMI decoder, participants performed the initial screening with both of the aforementioned control frameworks.

First, the spatial and spectral distribution of the features selected during calibration is coherent with the motor imagery tasks performed by the users. Indeed, Fig. 5(a) shows that channels C3 and C4 were the most selected in the μ band (50 and 52 times versus ten times for Cz) and channel Cz in the β band (24 times versus ten and 11 times for C3 and C4, respectively). These results are in line with the literature regarding the brain cortical regions involved in both hands and both feet motor imagery tasks [14]–[16].
Second, Fig. 5(b) and (c) report the BMI performances during the evaluation runs in terms of accuracy (i.e., percentage of successful trials) and time (i.e., duration of each trial). On average, participants achieved an accuracy of 89.9 ± 2.3% and were able to complete a trial in 4.6 ± 0.2 s. In more detail, the traditional control framework seems to perform better in such a classical BMI paradigm, with higher accuracy (93.1 ± 4.1% versus 86.7 ± 2.2%; p = 0.0006) and reduced time (4.0 ± 0.3 s versus 5.2 ± 0.4 s; p = 0.022).
B. Navigation Performance

We evaluated the navigation performance of the two control modalities according to three objective metrics: 1) distance to the ideal (manual) trajectory (Fréchet distance [41]); 2) percentage of reached targets; and 3) time to reach the target.
1299
+
1300
+ Fig. 6(a) illustrates the heat maps of trajectories followed by
1301
+
1302
+ all participants in the case of the traditional (left) and the new
1303
+
1304
+ control modality (right). The maps have a 10 cm resolution,
1305
+
1306
+ targets are indicated by white circles, and the color code ranges
1307
+
1308
+ from blue (low coverage) to yellow (high coverage). Black
1309
+
1310
+ lines represent the average trajectories per target and dashed
1311
+
1312
+ lines the ideal (manual) trajectories. Subpanels around the main
1313
+
1314
+ image show the individual target heat maps. A preliminary visual
1315
+
1316
+ inspection of the heat maps already highlights the advantages of
1317
+
1318
+ the new proposed control framework, especially in the case of
1319
+
1320
+ the lateral targets (T1 and T5) where the participants required a
1321
+
1322
+ finer control of the robot to reach them. Such an observation
1323
+
1324
+ is substantiated by the results in Fig. 6(b). On average (left
1325
+
1326
+ column), the new control modality allows users to follow the
1327
+
1328
+ ideal trajectories significantly better (Frechet distance of 117.3
1329
+
1330
+ ±7.7 cm versus 85.4±5.0 cm, mean±STD; p=0.026). Results
1331
+
1332
+ stand if we consider each target separately (middle column), with
1333
+
1334
+ statistical difference in case of the most lateral ones (T1: p =
1335
+
1336
+ 0.002; T5: p = 0.039). In addition, the evolution of the distance
1337
+
1338
+ Authorized licensed use limited to: BOSTON UNIVERSITY. Downloaded on November 24,2024 at 19:01:27 UTC from IEEE Xplore. Restrictions apply.
1339
+
1340
+ 86 IEEE TRANSACTIONS ON ROBOTICS, VOL. 36, NO. 1, FEBRUARY 2020
1341
+
1342
+
+ Fig. 6. Navigation results. (a) Heat maps of the trajectories performed by the robot for the discrete (left) and continuous (right) control modality. Map resolution is 10 cm. Targets T1–T5 are identified by white circles and the color code ranges from blue (low) to yellow (high coverage). In black, the average trajectories (solid lines) and the ideal manual trajectories (dashed lines) per target. Subpanels around the maps report the coverage, the average, and the ideal trajectories for each individual target. (b) Fréchet distance to the ideal trajectories per control framework. From left to right: the overall average distance, the average distance per target, and the evolution of the distance over runs. (c) Navigation accuracy per control framework, corresponding to the percentage of targets successfully reached. From left to right: the overall average accuracy, the average accuracy per target, and the evolution of accuracy over runs. The black dashed line represents the chance level. (d) Duration in seconds of the navigation tasks per control framework. From left to right: the overall average duration, the average duration per target, and the evolution of the duration over runs. Mean and standard error of the mean are reported. In blue and in red, the results for the traditional and the new dynamical system control framework, respectively. Statistically significant differences are shown with two-sided Wilcoxon rank-sum tests, (∗): p < 0.05; (∗∗): p < 0.01.
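+
+ The significance markers reported in Figs. 6 and 7 come from two-sided Wilcoxon rank-sum tests. A minimal sketch of such a comparison using SciPy follows; the numbers below are illustrative, not study data:
+
+ from scipy.stats import ranksums
+
+ # Per-subject metric under each condition (illustrative numbers),
+ # e.g., Frechet distances (cm) to the ideal trajectory.
+ traditional = [118.2, 130.5, 101.7, 125.0, 98.4, 121.9]
+ new_control = [88.1, 79.6, 92.3, 81.0, 85.9, 83.4]
+
+ stat, p_value = ranksums(traditional, new_control)  # two-sided by default
+ stars = "**" if p_value < 0.01 else "*" if p_value < 0.05 else "n.s."
+ print(f"W = {stat:.3f}, p = {p_value:.4f} ({stars})")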
+
+ Fig. 7. Behavioral results from the navigation questionnaires. Users could answer with a score between 1 and 5. In blue, the average scores for the traditional and, in red, for the new dynamical control framework. Mean and standard error of the mean are reported. Statistically significant differences are shown with two-sided Wilcoxon rank-sum tests, (∗): p < 0.05; (∗∗): p < 0.01; (∗∗∗): p < 0.0001.
+
+ TABLE II
+ NAVIGATION QUESTIONNAIRE
+
+ The second evaluation metric is the percentage of reached targets in the two conditions. Also in this case, the new approach ensures better navigation performance [Fig. 6(c)] and, on average (left column), a significant increase with respect to the traditional control framework (77.3 ± 3.3% versus 86.1 ± 2.6%; p = 0.048). Results in the middle column show similar consistency across targets, with significantly better performance especially for targets T3 and T4 (p = 0.043 and p = 0.015, respectively). Furthermore, the accuracy with the new control framework consistently improves over runs (right column), reaching a statistically significant difference on the second day (run 3; p = 0.022).
+
+ Finally, in Fig. 6(d) we report an overall time improvement in the case of the new control framework (33.6 ± 1.1 s versus 31.1 ± 0.8 s). Although this reduction is in line with the previous results (in terms of distance to the ideal trajectory and accuracy), no significant difference was found (p = 0.42).
+
+ C. Behavioral Results
+
+ At the end of each recording session, participants were asked to answer two questionnaires in order to assess their subjective evaluations of the two control modalities. Each questionnaire was composed of the same eight questions, which participants could rank with a score from 1 to 5, as reported in Table II. The average scores for the eight questions are reported in Fig. 7. Overall, results show a general trend in favor of the new approach proposed in this article. In particular, questions Q2 (control precision, p = 0.006), Q4 (keeping the forward direction, p = 0.030), Q5 (effort, p = 0.045), and Q8 (behavior preference, p = 0.000001) show a significant positive impact. These questions are directly related to the design goals of the new dynamical system control framework. Furthermore, in both conditions participants felt in control of the robot (Q1, score: 3.8 ± 0.2 versus 4.1 ± 0.1; Q3, score: 3.8 ± 0.2 versus 3.7 ± 0.2). Finally, the fact that we let them decide whether to focus their attention on the robot itself or on the visual feedback does not seem to be a confounding factor for the experiment (Q6, score: 3.4 ± 0.3 versus 3.8 ± 0.3; Q7, score: 3.5 ± 0.3 versus 3.7 ± 0.3).
+
+ V. DISCUSSION
+
+ This article aims at providing a continuous control modality for a BMI-driven mobile robot. Most BMI research focuses on applications based on discrete interaction strategies to drive robotic devices [5], [6], [18]–[25]. Although there are some examples of continuous BMI control [7], [28], [29], they are scarce, and the investigation of new formal techniques to interpret the user's intention is often neglected. In this scenario, we hypothesized that a key aspect in achieving such a continuous interaction is the control approach used to translate the BMI decoder output into a control signal for the robotic device. For the first time, we have faced this challenge by formally designing a new control framework for BMI-driven mobile robots and by directly comparing its performance with a traditional approach in a demanding scenario where we enabled users to continuously drive the device.
+
+ A. Continuous Interaction and Navigation Performances
+
+ First of all, results showed that the proposed control framework allowed such a continuous interaction modality between the user and the mobile robot. As a consequence, users were able to reliably generate continuous navigation trajectories decoded from their brain activity. In the literature, other works using a continuous control strategy rely on the ability of the users to perform up to six motor imagery tasks and, consequently, to generate corresponding discriminant brain patterns to control the robotic devices [28], [29]. However, these approaches may hardly be applied in real-world scenarios or for daily usage of any MI BMI application due to the high physical and mental demands placed on the user. This is particularly true in the case of end-users with motor disabilities, who have never been reported to utilize an MI BMI with more than two or three classes.
+
+ It is worth noticing that our approach achieved the continuous interaction between BMI user and robot without any modification of the classical workflow of a 2-class motor imagery BMI, which has been largely demonstrated to be suitable for end-users [6], [24], [31].
+
+ Furthermore, the comparison between the traditional and the new approach highlighted consistent and significant improvements in terms of navigation performance. Specifically, the distance to the ideal (manual) trajectory [Fig. 6(b)] is significantly reduced (p < 0.05). Moreover, the new control framework allowed users to increase the percentage of successfully completed navigation tasks [Fig. 6(c)]. This particularly holds in the case of the most difficult targets (T1 and T5), where users required finer control to complete the task. In the case of the duration of the navigation tasks, we did not find significant differences between the two conditions [although the time is slightly reduced for the new approach, Fig. 6(d)]. This is probably due to the short duration of the navigation task (∼30 s), which prevents a clear differentiation between the two control conditions. Finally, results from the subjective evaluation (Fig. 7) suggest the positive impact of the new continuous interaction modality with the robotic device.
+
+ In summary, the achieved results support our hypothesis that it is feasible to achieve a continuous interaction by means of the design of a new control framework for an MI BMI-actuated robot.
+
+ B. Coupling Between BMI User and Machine
+
+ The improvement of the coupling between user and machine is a fundamental aspect of any robotic application, and especially of BMI-driven devices. In the literature, it has been suggested that enhancing such an interaction not only increases the operational performance but also promotes the acquisition of BMI skills by the user, namely, the ability to generate more reliable and stable brain patterns [31].
+
+ Here, we suggest that the new control framework facilitates this coupling in comparison to traditional approaches. Although it is difficult to directly evaluate the coupling with quantitative metrics, we propose to infer it from the results presented in this article and, in particular, from the temporal evolution of the navigation performance.
+
+ Interestingly, the temporal evolution over runs of the three navigation metrics [Fig. 6(b)–(d), right column] suggests that the new control framework fosters the user's learning to better control the mobile robot. Indeed, results show that while users had similar performances in the first run [Fig. 6(b), right column], a significant reduction of the Fréchet distance occurred already in the second run for the new proposed approach (red line, p < 0.05). In the case of the traditional control framework, users were able to reach similar performances only in the last run of the experiment. In other words, the new control framework allowed users to learn to drive the robot more precisely in a shorter time.
+
+ The evolution of the task accuracy and duration may be interpreted in a similar way. In the first run, users achieved the same task accuracy in both conditions [∼75%, Fig. 6(c), right column]. For the traditional control condition, the accuracy remained stable until the last run (blue line, run 5), while with the new approach it reached a plateau of ∼90% already in the second run (red line). Although the task duration does not show any statistical difference, the trend is the same as for the two previous metrics: already in the second run, the duration of the task is reduced only in the case of the new control approach [Fig. 6(d), red line].
+
+ Subjective results from the questionnaire are in line with these considerations (Fig. 7), as users indicated not only an overall significant preference for the new control framework (question Q8, p < 0.0001) but also a more natural, precise, and easy interaction with it (questions Q2, p < 0.01, and Q4, p < 0.05). Moreover, it is worth highlighting that users reported less effort to control the robot in the continuous control modality (question Q5, p < 0.05), even if, theoretically, it is more demanding.
+
+ Furthermore, it is worth mentioning the apparent discrepancy between the outcomes in the initial BMI screening (without the robot) and in the navigation tasks. Indeed, users achieved substantially higher BMI accuracy with the traditional approach (p < 0.001) in the evaluation runs, when they were asked only to control the visual feedback on the screen [Fig. 5(b) and (c)]. However, as already discussed, the introduction of the new dynamical system control framework led to significant improvements at the robotic application level, suggesting a better coupling between user and machine. This opens the discussion on the fact that metrics commonly used in the BMI field (such as decoder accuracy) might not be fully informative for predicting and evaluating the performance of neurorobotic applications [42]. Indeed, to accomplish complex tasks, such as driving a mobile robot, users not only need to repetitively deliver mental commands as fast as possible (as in common BMI protocols) but also to plan ahead and make eventual corrections. This spotlights the importance of designing a control framework that explicitly handles the requirements of the specific BMI application in order to improve the coupling between user and machine and, as a consequence, the overall performance of the system.
+
+ C. Extension to Other BMI Robotic Applications
+
+ The proposed control framework has been explicitly designed and successfully evaluated for a teleoperated mobile robotic platform. From a control perspective, the extension to similar BMI applications for motor substitution (e.g., a powered wheelchair) is straightforward: for instance, users may drive a powered wheelchair by continuously controlling the turning direction with the proposed approach, as in the case of the mobile robot. Similarly, the new control framework may be applied to BMI-driven lower-limb exoskeletons. In the literature, most studies use a discrete interaction modality to deliver commands to the device (e.g., go forward, turn right or left) [43]–[47]. In these cases, the proposed approach might support the generation of continuous trajectories for the exoskeleton. A different scenario is trying to decode brain patterns related to the user's intention to make a left or right step [48]. During a walking task, the intended action (the step) is discrete per se, and it does not make sense to provide a continuous interaction modality. However, the generation of a continuous control signal might be useful when users are asked to perform leg extension/flexion robot-assisted exercises (e.g., in a rehabilitation scenario). In this case, our approach might promote fine control of the robotic device and, thus, improve the rehabilitation outcomes.
+
+ The need for a continuous interaction modality is not limited to mobile applications. The same approach can also be applied to operate robotic arms or upper-limb exoskeletons, where three-dimensional (3-D) control would be desirable. In the literature, the operation of such devices is limited to two-dimensional (2-D) control strategies that directly remap the EEG brain patterns into arm trajectories [7], [25]. Herein, we speculate on the possibility of generating 3-D continuous trajectories by properly designing the control framework of a 3-class MI BMI. While two classes would be used to control the device in the x-y plane (as for the mobile platform developed in this work), the third one would be translated into motion along the z dimension. The motion trajectories would be generated by extending the dynamical system equations to 3-D space. Nevertheless, an extensive evaluation in real closed-loop BMI experiments is definitely required to prove the feasibility of this approach.
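+
+ A purely illustrative sketch of this speculated 3-D mapping follows; all names, gains, and the specific mapping below are our assumptions, not equations from the article:
+
+ import numpy as np
+
+ def velocity_3d(p_left, p_right, p_third, phi, v=0.1, k=0.5):
+     """Map 3-class MI decoder probabilities to a 3-D velocity command:
+     two classes steer the heading phi in the x-y plane (as for the
+     mobile robot); the third class is translated into motion along z."""
+     phi = phi + k * (p_left - p_right)   # planar turning, as in the 2-D case
+     vz = v * (2.0 * p_third - 1.0)       # third class: +/- z motion
+     return phi, np.array([v * np.cos(phi), v * np.sin(phi), vz])
+
+ phi, vel = velocity_3d(p_left=0.2, p_right=0.3, p_third=0.8, phi=0.0)
+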
+ D. Future Work
+
+ We plan to further improve the new control framework by facilitating the choice of the parameters of the dynamical system equations. Although the results demonstrated the validity of this approach, the parameterization of the control system is still suboptimal. Equations (3), (4), and (5) depend on the parameters ψ, ω, φ, and χ to adjust the strength and the position of the attractors/repellers, to balance the contributions of Ffree and FBMI, and to set the overall reactiveness of the system. The initial ranges of these parameters were obtained by analysis of prerecorded data. However, in the first session of the experiment, the operator had to heuristically tune the parameters to optimize the behavior for each user. This should be avoided in order to reduce human intervention as well as possible variability in the performances. For this reason, we performed a posteriori analyses with a twofold goal: 1) to reduce the number of parameters controlling the behavior of the dynamical system; and 2) to predict the optimal subject-specific values of the parameters from the calibration data. Preliminary results suggest the feasibility of controlling the overall behavior of the framework by using only the two parameters related to the strength and position of the attractors/repellers (i.e., ψ and ω). Furthermore, simulated online performances support the possibility of predicting the optimal values from calibration data. However, further studies are required to verify these preliminary results and, especially, to evaluate them in a closed-loop online experiment.
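+
+ To make the role of these parameters concrete, the sketch below shows one classical attractor/repeller formulation of heading dynamics in the spirit of [33]–[35]. Since (3), (4), and (5) are defined earlier in the article, the functional forms, default values, and names used here (heading_rate, f_free, f_bmi) are our illustrative assumptions, not the authors' equations:
+
+ import numpy as np
+
+ def heading_rate(phi, p_left, p_right, psi=1.0, omega=0.6, chi=0.5):
+     """Illustrative heading dynamics (not the article's (3)-(5)).
+     phi: current heading; p_left/p_right: BMI decoder probabilities.
+     psi scales the attractor/repeller strength, omega their angular
+     position, and chi balances the free and BMI-driven force fields."""
+     f_free = -np.sin(phi)                      # attractor at phi = 0 (go straight)
+     f_bmi = (p_left * np.sin(omega - phi)      # decoder evidence pulls the
+              - p_right * np.sin(omega + phi))  # heading toward +/- omega
+     return psi * (chi * f_free + (1.0 - chi) * f_bmi)
+
+ # One Euler step of the heading, e.g., at a 16 Hz decoding rate
+ phi, dt = 0.0, 1.0 / 16
+ phi += dt * heading_rate(phi, p_left=0.8, p_right=0.2)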
+
+ A second future development will be to integrate information from the environment by exploiting the robot's sensors. The effectiveness of this approach, namely shared control, has already been demonstrated in the past [5], [6], [18], [21]–[24], where the robot's intelligence was exploited in order to avoid obstacles in the path. In the case of our new approach, we plan to directly change the force fields in the BMI control framework according to environment information, in order to adjust the BMI outputs to the arrangement of objects around the robot (i.e., walls, tables, chairs) and to prevent the execution of wrong or suboptimal user commands. Such a system needs to be evaluated in scenarios more complex than the one in this article, where the user will need to achieve complicated navigation tasks even in the presence of moving obstacles.
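+
+ Under the same illustrative assumptions as the sketch above, environment information could enter as additional repeller terms superimposed on the BMI force field, bending the commanded heading away from sensed obstacles. This is a speculative sketch of that idea, not the authors' planned implementation:
+
+ import numpy as np
+
+ def obstacle_field(phi, obstacle_angles, strength=0.8, width=0.4):
+     """Illustrative repellers: each obstacle sensed at angle theta pushes
+     the heading phi away from theta with a Gaussian-shaped force."""
+     force = 0.0
+     for theta in obstacle_angles:
+         force += strength * (phi - theta) * np.exp(-(phi - theta) ** 2 / (2 * width ** 2))
+     return force
+
+ # Superimposed on the BMI field of the previous sketch:
+ # phi += dt * (heading_rate(phi, p_left, p_right) + obstacle_field(phi, [0.3]))
+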
+ VI. CONCLUSION
+
+ In this article, we proposed a new control framework for an MI BMI-driven mobile robot. We hypothesized that such a novel approach would allow users to continuously control the robot and would have a significant impact on the navigation performance as well as on the human–machine interaction. Thirteen healthy users evaluated the new control framework in comparison to a discrete approach usually exploited in the BMI field. The experiment lasted three sessions (days) and consisted, in total, of 880 repetitions of the navigation tasks. Results confirmed our hypothesis and showed the possibility of using a continuous control strategy to drive the robot via a classical 2-class MI BMI system. Furthermore, results highlighted an improvement of the navigation performance in all three evaluation metrics: distance to the ideal trajectory, percentage of reached targets, and time to complete the tasks.
+
+ In addition to providing a new approach that allows BMI users to continuously drive a mobile robotic platform, this article aimed at spotlighting the importance of the control framework in promoting successful operation and fostering the translational impact of BMI-driven robotic applications.
+
+ REFERENCES
+
+ [1] J. R. Wolpaw, N. Birbaumer, D. J. McFarland, G. Pfurtscheller, and T. M. Vaughan, “Brain–computer interfaces for communication and control,” Clin. Neurophysiol., vol. 113, no. 6, pp. 767–791, 2002.
+ [2] D. Borton, S. Micera, J. del R. Millán, and G. Courtine, “Personalized neuroprosthetics,” Sci. Transl. Med., vol. 5, no. 210, p. 210rv2, 2013.
+ [3] J. D. R. Millán, F. Renkens, J. Mouriño, and W. Gerstner, “Noninvasive brain-actuated control of a mobile robot by human EEG,” IEEE Trans. Biomed. Eng., vol. 51, no. 6, pp. 1026–1033, Jun. 2004.
+ [4] J. D. R. Millán, F. Galán, D. Vanhooydonck, E. Lew, J. Philips, and M. Nuttin, “Asynchronous non-invasive brain-actuated control of an intelligent wheelchair,” in Proc. Int. Conf. IEEE Eng. Med. Biol. Soc., 2009, pp. 3361–3364.
+ [5] T. Carlson and J. D. R. Millán, “Brain-controlled wheelchairs: A robotic architecture,” IEEE Robot. Autom. Mag., vol. 20, no. 1, pp. 65–73, Mar. 2013.
+ [6] R. Leeb, L. Tonin, M. Rohm, L. Desideri, T. Carlson, and J. D. R. Millán, “Towards independence: A BCI telepresence robot for people with severe motor disabilities,” Proc. IEEE, vol. 103, no. 6, pp. 969–982, Jun. 2015.
+ [7] J. Meng, S. Zhang, A. Bekyo, J. Olsoe, B. Baxter, and B. He, “Noninvasive electroencephalogram based control of a robotic arm for reach and grasp tasks,” Sci. Rep., vol. 6, no. 1, 2016, Art. no. 38565.
+ [8] B. Rebsamen et al., “Controlling a wheelchair indoors using thought,” IEEE Intell. Syst., vol. 22, no. 2, pp. 18–24, Mar.–Apr. 2007.
+ [9] C. J. Bell, P. Shenoy, R. Chalodhorn, and R. P. N. Rao, “Control of a humanoid robot by a noninvasive brain–computer interface in humans,” J. Neural Eng., vol. 5, no. 2, pp. 214–220, 2008.
+ [10] A. Chella et al., “A BCI teleoperated museum robotic guide,” in Proc. Int. Conf. IEEE Comp. Intell. Soft. Int. Syst., 2009, pp. 783–788.
+ [11] I. Iturrate, J. M. Antelis, A. Kübler, and J. Minguez, “A noninvasive brain-actuated wheelchair based on a P300 neurophysiological protocol and automated navigation,” IEEE Trans. Robot., vol. 25, no. 3, pp. 614–627, Jun. 2009.
+ [12] C. Escolano, A. R. Murguialday, T. Matuz, N. Birbaumer, and J. Minguez, “A telepresence robotic system operated with a P300-based brain-computer interface: Initial tests with ALS patients,” in Proc. Int. Conf. IEEE Eng. Med. Biol. Soc., 2010, pp. 4476–4480.
+ [13] B. Rebsamen et al., “A brain controlled wheelchair to navigate in familiar environments,” IEEE Trans. Neural Syst. Rehabil. Eng., vol. 18, no. 6, pp. 590–598, Dec. 2010.
+ [14] G. Pfurtscheller and F. H. Lopes da Silva, “Event-related EEG/MEG synchronization and desynchronization: Basic principles,” Clin. Neurophysiol., vol. 110, no. 11, pp. 1842–1857, 1999.
+ [15] G. Pfurtscheller and C. Neuper, “Motor imagery and direct brain-computer communication,” Proc. IEEE, vol. 89, no. 7, pp. 1123–1134, Jul. 2001.
+ [16] G. Pfurtscheller, C. Brunner, A. Schlögl, and F. H. Lopes da Silva, “μ rhythm (de)synchronization and EEG single-trial classification of different motor imagery tasks,” Neuroimage, vol. 31, no. 1, pp. 153–159, 2006.
+ [17] E. Thomas, M. Dyson, and M. Clerc, “An analysis of performance evaluation for motor-imagery based BCI,” J. Neural Eng., vol. 10, no. 3, 2013, Art. no. 031001.
+ [18] G. Vanacker et al., “Context-based filtering for assisted brain-actuated wheelchair driving,” Comput. Intell. Neurosci., vol. 2007, p. 25130, 2007.
+ [19] F. Galán et al., “A brain-actuated wheelchair: Asynchronous and noninvasive brain-computer interfaces for continuous control of robots,” Clin. Neurophysiol., vol. 119, no. 9, pp. 2159–2169, 2008.
+ [20] R. Zhang et al., “Control of a wheelchair in an indoor environment based on a brain–computer interface and automated navigation,” IEEE Trans. Neural Syst. Rehabil. Eng., vol. 24, no. 1, pp. 128–139, Jan. 2016.
+ [21] L. Tonin, R. Leeb, M. Tavella, S. Perdikis, and J. D. R. Millán, “The role of shared-control in BCI-based telepresence,” in Proc. IEEE Int. Conf. Syst., Man, Cybern., 2010, pp. 1462–1466.
+ [22] L. Tonin, T. Carlson, R. Leeb, and J. D. R. Millán, “Brain-controlled telepresence robot by motor-disabled people,” in Proc. Int. Conf. IEEE Eng. Med. Biol. Soc., 2011, pp. 4227–4230.
+ [23] T. Carlson, L. Tonin, S. Perdikis, R. Leeb, and J. D. R. Millán, “A hybrid BCI for enhanced control of a telepresence robot,” in Proc. Int. Conf. IEEE Eng. Med. Biol. Soc., 2013, pp. 3097–3100.
+ [24] R. Leeb et al., “Transferring brain–computer interfaces beyond the laboratory: Successful application control for motor-disabled users,” Artif. Intell. Med., vol. 59, no. 2, pp. 121–132, 2013.
+ [25] D. Kuhner, L. D. J. Fiederer, J. Aldinger, F. Burget, and M. Völker, “A service assistant combining autonomous robotics, flexible goal formulation, and deep-learning-based brain-computer interfacing,” Robot. Auton. Syst., vol. 116, pp. 98–113, 2019.
+ [26] A. Satti, D. Coyle, and G. Prasad, “Continuous EEG classification for a self-paced BCI,” in Proc. Int. Conf. IEEE Eng. Med. Biol. Soc., 2009, pp. 315–318.
+ [27] D. Coyle, J. Garcia, A. R. Satti, and T. M. McGinnity, “EEG-based continuous control of a game using a 3 channel motor imagery BCI,” in Proc. IEEE Symp. Comput. Intell., Cogn. Algorithms, Mind, Brain, 2011, pp. 1–7.
+ [28] A. J. Doud, J. P. Lucas, M. T. Pisansky, and B. He, “Continuous three-dimensional control of a virtual helicopter using a motor imagery based brain-computer interface,” PLoS One, vol. 6, no. 10, 2011, Art. no. e26322.
+ [29] K. LaFleur, K. Cassady, A. Doud, K. Shades, E. Rogin, and B. He, “Quadcopter control in three-dimensional space using a noninvasive motor imagery-based brain–computer interface,” J. Neural Eng., vol. 10, no. 4, 2013, Art. no. 046003.
+ [30] S. Perdikis, L. Tonin, and J. D. R. Millán, “Brain racers: How paralyzed athletes used a brain-computer interface to win gold at the Cyborg Olympics,” IEEE Spectr., vol. 54, no. 9, pp. 44–51, Sep. 2017.
+ [31] S. Perdikis, L. Tonin, S. Saeedi, C. Schneider, and J. D. R. Millán, “The Cybathlon BCI race: Successful longitudinal mutual learning with two tetraplegic users,” PLoS Biol., vol. 16, no. 5, p. e2003787, 2018.
+ [32] B. Blankertz et al., “The BCI competition III: Validating alternative approaches to actual BCI problems,” IEEE Trans. Neural Syst. Rehabil. Eng., vol. 14, no. 2, pp. 153–159, Jan. 2006.
+ [33] G. Schöner and M. Dose, “A dynamical systems approach to task-level system integration used to plan and control autonomous vehicle motion,” Robot. Auton. Syst., vol. 10, pp. 253–267, 1992.
+ [34] E. Bicho and G. Schöner, “The dynamic approach to autonomous robotics demonstrated on a low-level vehicle platform,” Robot. Auton. Syst., vol. 21, no. 1, pp. 23–35, 1997.
+ [35] A. Steinhage and R. Schöner, “The dynamic approach to autonomous robot navigation,” in Proc. IEEE Int. Symp. Ind. Electron., vol. 1, 2002, pp. SS7–S12.
+ [36] E. S. Gardner, “Exponential smoothing: The state of the art—Part II,” Int. J. Forecast., vol. 22, no. 4, pp. 637–666, 2006.
+ [37] D. J. McFarland, L. M. McCane, S. V. David, and J. R. Wolpaw, “Spatial filter selection for EEG-based communication,” Electroencephalogr. Clin. Neurophysiol., vol. 103, no. 3, pp. 386–394, 1997.
+ [38] F. Galán, P. W. Ferrez, F. Oliva, J. Guardia, and J. D. R. Millán, “Feature extraction for multi-class BCI using canonical variates analysis,” in Proc. IEEE Int. Symp. Intell. Signal Process., 2007, p. 111.
+ [39] J. D. R. Millán, P. W. Ferrez, F. Galán, E. Lew, and R. Chavarriaga, “Noninvasive brain-machine interaction,” Int. J. Pattern Recognit. Artif. Intell., vol. 22, no. 5, pp. 959–972, 2008.
+ [40] G. R. Müller-Putz et al., “Tools for brain-computer interaction: A general concept for a hybrid BCI,” Front. Neuroinform., vol. 5, p. 30, 2011.
+ [41] H. Alt and M. Godau, “Computing the Fréchet distance between two polygonal curves,” Int. J. Comput. Geom. Appl., vol. 5, nos. 1–2, pp. 75–91, 1995.
+ [42] R. Chavarriaga, M. Fried-Oken, S. Kleih, F. Lotte, and R. Scherer, “Heading for new shores! Overcoming pitfalls in BCI design,” Brain-Comput. Interfaces, vol. 4, nos. 1–2, pp. 60–73, 2017.
+ [43] Y. He, D. Eguren, J. M. Azorín, R. G. Grossman, T. P. Luu, and J. L. Contreras-Vidal, “Brain-machine interfaces for controlling lower-limb powered robotic systems,” J. Neural Eng., vol. 15, no. 2, 2018, Art. no. 021004.
+ [44] A. H. Do, P. T. Wang, C. E. King, S. N. Chun, and Z. Nenadic, “Brain-computer interface controlled robotic gait orthosis,” J. Neuroeng. Rehabil., vol. 10, no. 1, p. 111, 2013.
+ [45] A. Kilicarslan, S. Prasad, R. G. Grossman, and J. L. Contreras-Vidal, “High accuracy decoding of user intentions using EEG to control a lower-body exoskeleton,” in Proc. Int. Conf. IEEE Eng. Med. Biol. Soc., 2013, pp. 5606–5609.
+ [46] E. López-Larraz et al., “Control of an ambulatory exoskeleton with a brain-machine interface for spinal cord injury gait rehabilitation,” Front. Neurosci., vol. 10, p. 359, 2016.
+ [47] K. Lee, D. Liu, L. Perroud, R. Chavarriaga, and J. D. R. Millán, “A brain-controlled exoskeleton with cascaded event-related desynchronization classifiers,” Robot. Auton. Syst., vol. 90, pp. 15–23, 2016.
+ [48] D. Liu et al., “EEG-based lower-limb movement onset decoding: Continuous classification and asynchronous detection,” IEEE Trans. Neural Syst. Rehabil. Eng., vol. 26, no. 8, pp. 1626–1635, Aug. 2018.
+
+ Luca Tonin (M’19) received the Ph.D. degree in robotics from the École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland, in 2013. He then pursued three years of postdoctoral research at the Intelligent Autonomous Systems Laboratory (IAS-Lab), University of Padova, Padua, Italy. Since 2016, he has been a Postdoctoral Researcher with the Defitech Chair in Brain-Machine Interface at EPFL. He is currently a Senior Postdoctoral Researcher with the Intelligent Autonomous Systems Laboratory (IAS-Lab), University of Padova. His research is currently focused on exploring advanced techniques for brain–machine interface (BMI)-driven robotic devices. His main contribution to the BMI field is related to the design of novel shared control approaches to improve the reliability and to enhance the coupling between user and robot.
+
+ In 2016, Dr. Tonin won the first international Cybathlon paralympic event in the BMI race discipline as a co-leader of the BrainTweakers team.
+
+ Felix Christian Bauer received the M.Sc. degree in physics in 2017 from ETH Zurich, Zurich, Switzerland, where he is currently working toward a Teaching Diploma in physics.
+
+ He is currently working as a Research and Development Engineer with aiCTX AG, Zurich, Switzerland, on the development of neuromorphic hardware applications. His research interests include noninvasive brain–machine interfaces, artificial intelligence, neural network architectures, and neuromorphic hardware.
+
+ José del R. Millán (F’17) received the Ph.D. degree in computer science from the Universitat Politècnica de Catalunya, Barcelona, Spain, in 1992. He is currently with the Department of Electrical & Computer Engineering and the Department of Neurology of the University of Texas at Austin, Austin, TX, USA, where he holds the Carol Cockrell Curran Endowed Chair. Previously, he held the Defitech Foundation Chair at the École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland, from 2009 to 2019, where he helped establish the Center for Neuroprosthetics.
+
+ Dr. Millán has made several seminal contributions to the field of brain–machine interfaces (BMI), especially based on electroencephalogram signals. Most of his achievements revolve around the design of brain-controlled robots. He has received several recognitions for these seminal and pioneering achievements, notably the IEEE-SMC Norbert Wiener Award in 2011. For the last few years, he has been prioritizing the translation of BMI to end-users suffering from motor disabilities. As an example of this endeavor, his team won the first Cybathlon BMI race in October 2016.