The Role of the Control Framework for Continuous Teleoperation of a Brain–Machine Interface-Driven Mobile Robot

Luca Tonin, Member, IEEE, Felix Christian Bauer, and José del R. Millán, Fellow, IEEE

Abstract—Despite the growing interest in brain–machine interface (BMI)-driven neuroprostheses, the translation of the BMI output into a suitable control signal for the robotic device is often neglected. In this article, we propose a novel control approach based on dynamical systems, explicitly designed to take into account the nature of the BMI output, that actively supports the user in delivering real-valued commands to the device and, at the same time, reduces the false positive rate. We hypothesize that such a control framework allows users to continuously drive a mobile robot and enhances navigation performance. Thirteen healthy users evaluated the system during three experimental sessions. Users exploited a 2-class motor imagery BMI to drive the robot to five targets in two experimental conditions: with a discrete control strategy, traditionally exploited in the BMI field, and with the novel continuous control framework developed herein. Experimental results show that the new approach: 1) allows users to continuously drive the mobile robot via BMI; 2) leads to significant improvements in navigation performance; and 3) promotes a better coupling between user and robot. These results highlight the importance of designing a suitable control framework to improve the performance and reliability of BMI-driven neurorobotic devices.

Index Terms—Brain–machine interface (BMI), control framework, motor imagery (MI), neurorobotics.

I. INTRODUCTION

Recent years have seen a growing interest in the neurorobotics field, an interdisciplinary research topic that aims at studying brain-inspired approaches in robotics and at developing innovative human–machine interfaces. In this scenario, brain–machine interfaces (BMIs) represent a promising technology to directly decode the user's intentions from neurophysiological signals and translate them into actions for external devices.
The ultimate goal of BMI systems is to enable people suffering from severe motor disabilities to control new generations of neuroprostheses [1], [2]. Several works have already shown the feasibility and potential of such a technology with different devices [3]–[7]. However, despite these achievements, the integration between BMI systems and robotics is still in its infancy. In recent years, different interactions between BMI and robotic devices have been explored, according to the nature of the mental task performed by the user and to the neural processes involved. For instance, researchers have shown the possibility of exploiting electroencephalography (EEG) correlates of external stimuli (e.g., visual flashes) to control the navigation of mobile devices. In such systems, users can either select the turning direction or the final destination of the robot (e.g., kitchen or bedroom) by looking at the corresponding stimuli on a screen [8]–[13]. Although such interactions have shown promising results, they do not allow full control of the device and they require the user to continuously fixate on the source of the external stimulation (e.g., the screen). A more natural approach is based on BMI systems able to detect the self-paced modulation of brain patterns, thus allowing the user to deliver commands to the robot at any time without the need for exogenous stimulation. In this context, one of the most explored approaches relies on the detection of the neural correlates of motor imagery (MI). MI BMIs detect and classify the endogenous modulation of sensorimotor rhythms while the user is imagining the movement of a specific part of his/her body (e.g., the movement of the right or left hand). At the neurophysiological level, such a modulation is characterized by the decrement/increment (event-related desynchronization/synchronization, ERD/ERS) of the EEG power in specific frequency bands (the μ and β bands, 8–12 and 16–30 Hz, respectively) and in localized regions of the motor/premotor cortex [14]–[16]. MI BMI systems continuously decode the brain patterns associated with the motor imagery tasks by means of machine learning algorithms. The responses of the BMI decoder (a probability distribution over possible commands) are integrated over time and, finally, a command is delivered to the robot only when a given threshold is reached, that is, when the control framework is confident about the user's intention. Therefore, although in principle such BMI systems would allow a continuous interaction between user and robot, in practice they result in a discrete control modality, both in terms of timing and nature of the commands, with a low information transfer rate (on average 0.3 command/second [17]). This article investigates a novel control approach to generate a continuous control command for MI BMI-driven mobile robots.
Herein, continuous control refers to the direct translation of each decoded BMI output (a probability distribution) into a control signal for the robotic device, in explicit contrast to the aforementioned discrete interaction modality of most BMI systems.

A. Related Work

Several studies have shown the effectiveness of the discrete control strategy in driving a variety of MI BMI-based devices with healthy subjects and users with motor disabilities. An example of discrete BMI control is the brain-driven wheelchair developed by Vanacker et al. [18], where the authors exploited a 2-class MI BMI to interact with the external device. In this implementation, the user could change the default behavior of the wheelchair (i.e., move forward) by asynchronously delivering discrete commands to make it turn left or right. Furthermore, an intelligent navigation system was in charge of generating the continuous trajectory and of handling all the low-level details (e.g., obstacle avoidance) in order to reduce the user's workload. Other works developed BMI-driven wheelchairs following the same discrete user-interaction principles [5], [19], [20]. Similarly, the authors of [6], [21]–[24] demonstrated the validity of such an approach for driving a telepresence robot with both healthy subjects and end-users. A discrete interaction modality has also been proposed by Kuhner et al. [25], where the user controls a mobile robot by selecting specific actions in a hierarchical, menu-based assistant environment.

Enabling BMI users to have a continuous interaction modality and, for instance, to precisely control the extent of the turning direction of the robotic device would be desirable. However, the generation of a continuous control signal is challenging given the nonstationary nature of EEG patterns and the resulting uncertainty of the decoded classifier output. In the literature, only a few studies have investigated new approaches to use the BMI output as a continuous control signal for robotic devices. From a theoretical point of view, Satti et al. [26] proposed to apply a postprocessing chain based on a Savitzky–Golay filter, an antibiasing strategy, and multiple thresholding in order to remove spikes/outliers and possible bias from the BMI classifier output. The method was evaluated on artificial and real EEG datasets, and the results showed a reduction in the false positive rate. This approach has also been tested in an online experiment where three users were asked to continuously control a videogame via a 3-class MI BMI [27]. Doud et al. [28] proposed a different approach to achieve continuous control of a virtual helicopter. In this case, the modulation of EEG activity (i.e., ERD/ERS during the imagination of six different motor tasks) was linearly mapped to the control signal of the virtual device. However, such a paradigm required a high workload from the user, who needs to always be in an active control state. In [29], LaFleur et al. described the follow-up of the previous study with a real quadcopter. More interestingly, in that article the authors introduced a nonlinear quadratic transformation of the EEG signals before the control signal was sent to the device. Furthermore, they applied a fixed threshold to remove minor perturbations that were unlikely to have been generated by intentional control. A linear mapping of the EEG activity into a control signal has also been proposed by Meng et al. [7] to control a robotic arm.
In this case, users performed reaching and grasping tasks in a sequential, synchronous paradigm.

B. Contribution and Overview

In this article, we propose a novel control framework for MI BMI that allows a continuous control modality of a telepresence mobile robot in a navigation task. Our aim is to provide a control system able to generate a continuous robot trajectory from the stream of BMI outputs. We decided to use a BMI decoder (instead of regressing the EEG neural patterns into a control signal, as in [28] and [29]) because classifiers have proven to be stable over long periods of time and highly reliable for end-users [6], [24], [30], [31]. However, current control frameworks are specifically conceived for a discrete interaction with the external devices. In particular, BMI systems are designed to maximize the accuracy and the speed of delivering discrete commands (the so-called intentional control state, IC). This approach works in experimental settings but can hardly cope with real-world scenarios in which the user wants to continuously drive the robotic device to accomplish daily tasks. Furthermore, current systems do not take into account the situation in which the user does not want to deliver any command to the device. This particular state is known as intentional noncontrol (INC). In the past, researchers mainly faced INC in two different ways: by exploiting multiclass classification techniques to model the resting state [28], [29], [32], or by leaving the user the burden of actively controlling the BMI so as not to deliver any command [5], [6], [21]. However, the first solution is affected by the complexity of modeling the unbounded resting class, while the second implies a high workload for the user, who needs to actively control the system to counteract possible unintended BMI outputs. Herein, we hypothesize that the generation of a continuous control signal can be achieved by providing a new framework designed specifically to deal with the particular nature of the BMI decoder output and to explicitly take into account the IC and INC situations. In other words, the framework: 1) should handle the erratic behavior of the BMI decoder output; 2) should support users when they are actively involved in the MI task (IC); and, at the same time, 3) should prevent them from delivering unintended commands during the resting state (INC). To the best of our knowledge, this is the first time that such a continuous interaction modality for BMI-driven devices is specifically targeted from a pure control perspective. Our proposed control framework is inspired by Schöner and colleagues' work [33]–[35].

Fig. 1. (a) Classical MI BMI closed loop and the mobile robot used in this article: EEG data are acquired, and task-related features (channel-frequency pairs) are extracted and classified in real time by the BMI decoder. The BMI decoder output stream (e.g., posterior probabilities) is then integrated in order to accumulate evidence of the user's intention. Finally, when enough evidence is accumulated, a discrete command is sent to the device. (b) Distribution of the posterior probabilities generated by the BMI decoder during the motor imagery task. Solid black line represents the distribution fit computed by the Epanechnikov kernel function.
(c) Distribution of the posterior probabilities while the user is resting. Dotted black line represents the distribution fit computed by the Epanechnikov kernel function.

The rest of this article is organized as follows. In Section II, we first model the BMI decoder output with real EEG data from the participants in the study; second, we briefly review the traditional approach to smoothing the BMI decoder output; third, we describe the novel approach based on a dynamical system developed herein; lastly, we use real prerecorded data to simulate the behavior of the new control framework in comparison with the traditional one. Section III describes the experiment designed to evaluate the new control framework with healthy subjects in an online setting where they are asked to mentally teleoperate a mobile robot. Finally, in Section IV we present the experimental results, and in Section V we discuss them in comparison to prior literature and propose possible extensions of the work to different BMI robotic applications. Section VI concludes this article.

II. CONTROL FRAMEWORK FOR BMI

The first step in designing a new control framework is to model and characterize the output of the BMI system. We then describe the traditional strategy based on low-pass smoothing filtering and our new approach based on dynamical systems. Since our focus is on the BMI control framework, we consider the other modules (e.g., acquisition, processing, and decoder) as given [Fig. 1(a)]. We refer to a classical, state-of-the-art BMI based on two motor imagery classes that has been extensively evaluated in previous studies with healthy subjects and end-users driving robotic devices [6], [21], [24]. Furthermore, this MI BMI system was successfully exploited (winning the gold medal and establishing the world record) in the BMI Race discipline of the Cybathlon 2016 event, the first international neurorobotic competition, held in Zurich in 2016 [30], [31]. Section III.B gives the details of this BMI.

A. Modeling the BMI Decoder Output

The BMI decoder output can be seen as a continuous stream of posterior probabilities indicating the estimated user's intention. It is worth modeling the posterior probability distributions in two specific cases: while the user is actively involved in the motor imagery task, and while he/she is at rest. Fig. 1(b) and (c) depict the distributions of real data (user S4) in these two scenarios. Extreme values of the posterior probabilities (close to 0.0 or to 1.0) indicate high-confidence detection of one of the two classes. In the first case [Fig. 1(b)], the BMI correctly classified most of the samples (i.e., posterior probabilities close to 1.0), resulting in a beta-like density function. On the other hand, when the user is resting, one would expect a normal-like distribution centered at 0.5. Instead, the posterior probabilities assume extreme values (close to 0.0 or 1.0), resulting in the bimodal distribution shown in Fig. 1(c). This behavior of the BMI output generalizes to most users. Such an erratic BMI decoder output calls for a control framework that turns it into a proper control signal for the robotic device.
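The density fits of Fig. 1(b) and (c) can be reproduced with any kernel density estimator that supports the Epanechnikov kernel; a minimal sketch in Python (scikit-learn is our choice here, and the bandwidth is an illustrative assumption, as the paper does not report one):

```python
import numpy as np
from sklearn.neighbors import KernelDensity

def posterior_density(posteriors, bandwidth=0.05):
    """Fit the distribution of decoder posteriors with an Epanechnikov kernel.

    posteriors: 1-D array of BMI posterior probabilities in [0, 1].
    Returns a grid over [0, 1] and the estimated density on it.
    """
    kde = KernelDensity(kernel="epanechnikov", bandwidth=bandwidth)
    kde.fit(np.asarray(posteriors).reshape(-1, 1))
    grid = np.linspace(0.0, 1.0, 201).reshape(-1, 1)
    # score_samples returns the log-density; exponentiate for plotting
    return grid.ravel(), np.exp(kde.score_samples(grid))
```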
B. Traditional Approach: Smoothing Filter

In the traditional BMI system, such as the one exploited in this article, the raw posterior probabilities originating from the decoder are accumulated over time with a leaky integrator based on exponential smoothing [36]. Given the posterior probability $x_t$ at time $t$ and the previous integrated control signal $y_{t-1}$, the control signal $y_t$ is computed as

$$y_t = \alpha\, x_t + (1-\alpha)\, y_{t-1} \qquad (1)$$

where $\alpha \in [0.0, 1.0]$ is the smoothing factor. The closer α is to 1.0, the faster the weights of older values decay, and the more closely yt follows xt. Conversely, the closer α is to 0.0, the smaller the contribution of the current posterior probability, leading to a slow response of the system. It is worth noting that α is adjusted at the beginning (individually for each user) and then kept fixed during BMI operation. Usual values of α are around 0.03 (slow response) to allow the user to control the system more precisely (the α values used in this article are reported in Section III.C, Table I). Finally, thresholding strategies are used to translate the smoothed signal yt into specific commands for the robot. As already mentioned, this kind of discrete interaction modality between the BMI user and the device results in an average information transfer rate of 0.3 command/second [17].

TABLE I: CONTROL FRAMEWORK PARAMETERS. Control framework parameters chosen for each user in the evaluation runs. Parameter names are the same as used in Section II.
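For concreteness, the exponential smoothing of (1) fits in a few lines; a sketch in Python (the authors' implementation is in MATLAB/C++, and the threshold values in the comment are an assumption):

```python
import numpy as np

def leaky_integrator(posteriors, alpha=0.03, y0=0.5):
    """Exponential-smoothing integrator of Eq. (1).

    posteriors: stream of decoder posteriors x_t in [0, 1] at 16 Hz.
    Returns the integrated control signal y_t for every sample.
    """
    y = np.empty(len(posteriors))
    prev = y0
    for t, x in enumerate(posteriors):
        prev = alpha * x + (1.0 - alpha) * prev  # Eq. (1)
        y[t] = prev
    return y

# A discrete command is delivered when y crosses a fixed threshold, e.g.:
# command = "left" if y[t] > 0.8 else ("right" if y[t] < 0.2 else None)
```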
C. Novel Approach: Dynamical System

Fig. 2. Design of the novel control framework. (a) Free force profile. Blue squares and red circles refer to the attractors and repellers of the system, respectively. The interval [0.0, 1.0] is divided into three basins where a conservative force (dark gray) or a pushing force (light gray) is applied. (b) Representation of the free potential derived from the free force function. (c) Function applied to the decoder output in order to generate the BMI force.

The control framework proposed in this article is designed to generate a continuous signal for the robotic device. Following the hypotheses mentioned in Section I.B, it should be able: 1) to handle the erratic behavior of the BMI decoder output described in Section II.A; 2) to support the user's IC when the current state of the system yt is close to one of the extreme values of the two classes (i.e., 0.0 or 1.0); and 3) to prevent yt from reaching high values due to random perturbations of the BMI decoder output, and thus to handle the INC state. We define Δyt as a linear combination of two forces:

$$\Delta y_t = F_{free}(y_{t-1}) + F_{BMI}(x_t) \qquad (2)$$

where $F_{free}(y_{t-1})$ depends only on the previous state of the system and $F_{BMI}(x_t)$ depends on the current BMI output. F_free can be explicitly designed to take care of the IC and INC states. Inspired by Schöner and colleagues' formal technique [33]–[35], we define F_free so that it exerts a conservative force when the current state of the system is close to 0.5 and a pushing force otherwise [see Fig. 2(a)]. Theoretically, this makes the system less sensitive to random perturbations (INC state) while, at the same time, pushing yt to high values if the previous state yt−1 was in one of the external regions (IC state). As mentioned before, we hypothesized that meeting these two requirements would support the generation of a reliable continuous control signal for the robot. Hence, the force was chosen so that:

1) $F_{free}(y) = 0$ and $\frac{dF_{free}(y)}{dy} < 0$ for $y \in \{0.0, 0.5, 1.0\}$. These are stable equilibrium points. Note that these points represent the maximum values for the two classes (0.0 and 1.0, respectively) and the equally balanced value (0.5).

2) $F_{free}(y) = 0$ and $\frac{dF_{free}(y)}{dy} > 0$ for $y = 0.5-\omega$ and $y = 0.5+\omega$, where $\omega \in (0.0, 0.5)$. These are unstable equilibrium points.

According to these requirements, the points y = 0, y = 0.5, and y = 1.0 are attractors for the system, while y = 0.5 − ω and y = 0.5 + ω are repellers [see Fig. 2(a)]. A function F_free with these properties divides the interval [0.0, 1.0] into three attractor basins separated by the points 0.5 − ω and 0.5 + ω: depending on the current value y, the system converges toward one of the three attractors [see Fig. 2(a)]. This helps the user not to deliver false positive commands (attractor at y = 0.5) and, at the same time, to reach the maximum value if yt−1 < 0.5 − ω or yt−1 > 0.5 + ω. Given that, we defined the following force F_free:

$$F_{free}(y) = \begin{cases} -\sin\left(\dfrac{\pi}{0.5-\omega}\, y\right) & \text{if } y \in [0,\ 0.5-\omega) \\[4pt] -\psi \sin\left(\dfrac{\pi}{\omega}\,(y-0.5)\right) & \text{if } y \in [0.5-\omega,\ 0.5+\omega] \\[4pt] \sin\left(\dfrac{\pi}{0.5-\omega}\,(y-0.5-\omega)\right) & \text{if } y \in (0.5+\omega,\ 1] \end{cases} \qquad (3)$$

with ψ ≥ 0 corresponding to the height of the potential valley [see Fig. 2(b)]. The force has rotational symmetry with respect to 0.5, so the same force is exerted for the two classes. However, it is worth noting that an asymmetrical response of the system for the two classes can be achieved by defining ω₁ ≠ ω₂.

Fig. 3. Simulated temporal evolution of the control signal generated (a) by the traditional smoothing filter and (b) by the new dynamical system. Real data from user S4. Black lines represent the integrated control signal during the motor imagery task (solid) and at rest (dotted). Time points when the integrated control signal crosses a predefined fixed threshold (dashed black line) are highlighted in green (during motor imagery) or in red (during rest).

F_BMI is the second term of (2): it represents the external force perturbing the system according to the output of the BMI decoder (i.e., the user's intention). As in the previous case, we designed F_BMI to reduce or enhance the impact of BMI responses with low or high confidence, respectively (posterior probabilities close to 0.5, or close to 0.0 and 1.0). Hence, the force was chosen so that:

1) F_BMI must have rotational symmetry with respect to x = 0.5, to map the two BMI classes in the same way.

2) $F_{BMI}(x_t) \approx 0$ for $x_t \in [0.5-\tilde{x},\ 0.5+\tilde{x}]$, for some small $\tilde{x} > 0$. This means that, with an uncertain output of the BMI decoder (i.e., around 0.5), the resulting force applied to the system is limited.

Given that, we defined the following cubic transformation function:

$$F_{BMI}(x) = 6.4\,(x-0.5)^3 + 0.4\,(x-0.5) \qquad (4)$$

where x ∈ [0.0, 1.0] is the posterior probability from the BMI decoder. This function was selected to promote BMI outputs with high confidence (posterior probabilities close to 0.0 or 1.0) and to limit the impact of uncertain decoding (close to 0.5). The coefficients of the function were chosen through simulations with prerecorded EEG data. Fig. 2(c) depicts a representation of F_BMI. Finally, the two forces (F_free, F_BMI) are combined according to

$$\Delta y_t = \chi \left[\, \phi\, F_{free}(y_{t-1}) + (1-\phi)\, F_{BMI}(x_t) \,\right] \qquad (5)$$

with χ > 0 and φ ∈ [0.0, 1.0]. The parameter χ controls the overall velocity of the system, while φ determines the relative contribution of F_free and F_BMI, in other words, how much to trust the BMI decoder output. These two parameters can be tuned by the operator according to the requirements of the application (e.g., increasing χ if high reactiveness is required) and to the BMI decoder accuracy (e.g., decreasing φ in the case of a highly confident decoder).
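Putting (3)–(5) together, the integrator reduces to a few lines; a sketch with illustrative parameter values (the per-user values of ω, ψ, χ, and φ are those of Table I, which this sketch does not reproduce):

```python
import numpy as np

def f_free(y, omega=0.3, psi=0.8):
    """Free force of Eq. (3): attractors at 0, 0.5, 1; repellers at 0.5 +/- omega."""
    if y < 0.5 - omega:
        return -np.sin(np.pi / (0.5 - omega) * y)
    if y <= 0.5 + omega:
        return -psi * np.sin(np.pi / omega * (y - 0.5))
    return np.sin(np.pi / (0.5 - omega) * (y - 0.5 - omega))

def f_bmi(x):
    """Cubic transformation of the decoder posterior, Eq. (4)."""
    return 6.4 * (x - 0.5) ** 3 + 0.4 * (x - 0.5)

def step(y_prev, x, chi=0.1, phi=0.5, omega=0.3, psi=0.8):
    """One 16-Hz update of the control signal, Eqs. (2) and (5).

    chi absorbs the time step; the state is kept inside [0, 1].
    """
    dy = chi * (phi * f_free(y_prev, omega, psi) + (1 - phi) * f_bmi(x))
    return float(np.clip(y_prev + dy, 0.0, 1.0))
```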
D. Simulated Temporal Evolution of the Control Signal

We compared the temporal evolution of the two control frameworks with real data (BMI decoder output) from user S4; the results are depicted in Fig. 3. On the one hand, the traditional control framework [Fig. 3(a)] generates a control signal yt (starting at 0.5, equal probability for the two classes) that quickly increases (high derivative) toward the correct side when the user is actively performing the task (IC state, solid black line). However, after the initial phase, the velocity of yt decreases, making it difficult to reach high values and reducing the extent of the control signal. Furthermore, in the case of resting (INC state, dotted black line), random perturbations of xt may result in locally large changes of yt, making it difficult to keep the control signal below the predefined threshold. Repeated simulations (N = 10 000) showed that, during rest, the control signal crossed the given threshold 96.2% of the time, with an average crossing time of 7.2 ± 4.1 s. This is mainly due to the nature of the distribution of the BMI output (Section II.A). Clearly, continuous BMI operation with such an unstable control signal is difficult to achieve.

On the other hand, Fig. 3(b) depicts the temporal evolution of the control signal for the new control approach developed herein, using the same data as before. While the user is actively involved in the mental task (black solid line), the output control signal y quickly converges toward the maximum value (1.0), crossing the given threshold after 1.1 s. It is worth highlighting how the behavior of the signal follows the design requirements of the new control framework: a slow initial velocity (to favor the INC state) that quickly increases to implement the user's intention (to support the IC state). The new control framework also works properly when the user is at rest: random perturbations of the BMI output do not affect the control signal, which keeps oscillating around 0.5 (black dotted line). Repeated simulations (N = 10 000) showed that, during the task, the control signal crossed the threshold 100% of the time, in 1.4 ± 0.6 s. Importantly, during rest, the control signal crossed the threshold due to random perturbations only 15.5% of the time (versus 96.2% for the traditional control framework). Furthermore, the few random crossings occurred on average at 10.4 ± 5.5 s, more than 3 s later than with the traditional approach. These simulated results confirm the desired behavior of the control signal generated by the new approach. In the next section, we present an online closed-loop BMI experiment where users are asked to teleoperate a mobile robot with both the traditional and the new control framework.
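The rest-state simulations can be reproduced in spirit with a synthetic posterior stream; a sketch under stated assumptions (Beta(0.5, 0.5) posteriors as a bimodal stand-in for Fig. 1(c), an arbitrary 0.8 threshold), not the authors' exact protocol, which used prerecorded EEG:

```python
import numpy as np

rng = np.random.default_rng(0)

def rest_crossing_rate(step_fn, n_runs=10_000, n_samples=16 * 60, thr=0.8):
    """Fraction of simulated resting runs in which the signal crosses a threshold.

    step_fn(y, x) performs one integrator update given the previous state y
    and the current posterior x; each run starts at y = 0.5 and lasts 60 s
    at the 16 Hz decoder rate.
    """
    crossed = 0
    for _ in range(n_runs):
        y = 0.5
        for _ in range(n_samples):
            y = step_fn(y, rng.beta(0.5, 0.5))  # random, confident rest output
            if y > thr or y < 1.0 - thr:
                crossed += 1
                break
    return crossed / n_runs

# Traditional integrator:    rest_crossing_rate(lambda y, x: 0.03 * x + 0.97 * y)
# Dynamical-system approach: rest_crossing_rate(step)  # `step` from the sketch above
```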
III. MATERIAL AND METHODS

A. Participants

Thirteen healthy users participated in the study (S1–S13, 25.8 ± 4.3 years old, four females). Users had no history of neurological or psychiatric disorders and were not taking any psychiatric medication. Eleven users did not have any previous experience with MI BMIs; two had already participated in other BMI experiments (S10 and S11), and only one (S13) had already controlled a mobile robot via MI BMI. Written informed consent was obtained from all experimental subjects in accordance with the principles of the Declaration of Helsinki. The study was approved by the Cantonal Committee of Vaud (Switzerland) for ethics in human research under protocol number PB_2017-00295.

B. Brain–Machine Interface Implementation

In this article, we used a BMI based on 2-class motor imagery (both hands versus both feet) to drive the mobile robot. EEG signals were acquired with an active 16-channel amplifier at a 512 Hz sampling rate (g.USBamp, Guger Technologies, Graz, Austria). Data were band-pass filtered between 0.1 and 100 Hz and notch-filtered at 50 Hz (hardware filters). Electrodes were placed over the sensorimotor cortex (Fz, FC3, FC1, FCz, FC2, FC4, C3, C1, Cz, C2, C4, CP3, CP1, CPz, CP2, CP4; international 10–20 system layout) to detect the neural patterns related to MI. We removed the dc component from the signals and spatially filtered them by means of a Laplacian derivation (closest neighbors in a cross layout [37]). We used the spectral power of the EEG signals as features for the BMI system: we computed the power spectral density via Welch's periodogram algorithm with 2 Hz resolution (from 4 to 48 Hz) in 1-s windows sliding every 62.5 ms. Feature selection was performed during the calibration phase (Section III.C) by ranking the candidate spatiospectral features according to their discriminant power [38], calculated through canonical variate analysis, and to their neurophysiological meaning. The most discriminative features (subject-specific channel-frequency pairs) were then extracted and used to train a Gaussian decoder with a gradient-descent supervised learning approach on the labeled dataset obtained during the calibration phase [6], [24], [39]. In the evaluation phase, the same features were classified into a probability distribution over the two MI tasks (imagination of both hands versus both feet). Outputs of the decoder (posterior probabilities) with an uncertain probability distribution were rejected (rejection parameter fixed at 0.55). As a result of the aforementioned processing and decoding procedures, the overall BMI system produced a continuous stream of posterior probabilities at a rate of 16 Hz. Afterward, the posterior probabilities were fed to the control framework to accumulate evidence about the current user's intention and to generate a suitable visual feedback for the user and a proper control signal for the robot (for details, refer to Section II). The BMI system relies on open source C libraries for the acquisition of EEG signals (http://neuro.debian.net/pkgs/libeegdev-dev.html) and on our own C++ software for the communication between modules and the feedback visualization. The processing and decoding algorithms were implemented in MATLAB.
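The feature-extraction stage maps naturally onto standard signal-processing tooling; a sketch under the paper's stated parameters (window length, slide, and 2 Hz resolution), with Welch's internal segmentation left as our assumption:

```python
import numpy as np
from scipy.signal import welch

FS = 512            # sampling rate (Hz)
WIN = FS            # 1-s window
STEP = FS // 16     # 62.5-ms slide -> 16 Hz feature rate

def psd_features(eeg, fmin=4, fmax=48):
    """Welch PSD features per sliding 1-s window at 2-Hz resolution.

    eeg: (n_samples, n_channels) array, already Laplacian-filtered.
    Returns an array of shape (n_windows, n_channels, n_freqs).
    """
    feats = []
    for start in range(0, eeg.shape[0] - WIN + 1, STEP):
        seg = eeg[start:start + WIN]
        f, pxx = welch(seg, fs=FS, nperseg=FS // 2, axis=0)  # 2-Hz bins
        band = (f >= fmin) & (f <= fmax)
        feats.append(pxx[band].T)  # (n_channels, n_freqs)
    return np.asarray(feats)
```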
C. BMI Calibration, Evaluation, and Navigation Task

The study was organized in three recording sessions (days). Sessions were interleaved by 34.2 ± 9.0 days and each lasted 45 ± 12 min (mean ± standard deviation). As is common in the field, initial data are needed to create, calibrate, and evaluate the BMI model for each subject. Fig. 4(a) shows the structure of the recording sessions. During calibration, users performed the two motor imagery tasks (both hands versus both feet) in front of a monitor following the instructions of a cued protocol. In this phase, positive visual feedback was always provided and no control of the robot was established. Three runs (60 trials, 30 per class) were recorded, and the data were used to train the Gaussian classifier, which remained fixed for the rest of the experiment. During evaluation, we tested the classifier performance in at least two consecutive runs where the users actually controlled the movement of the visual feedback using each of the two integration approaches (traditional and new dynamical system strategy), so as to find the optimal, user-dependent parameters of the two control frameworks. In this phase, users were not controlling the robot. The parameter values for each user are reported in Table I. The initial values were selected based on simulations with prerecorded data (Sections II.B and II.C). During the first recording session, we adjusted these values according to the individual performance of each user; they did not change during the rest of the experiment. Once subjects achieved good BMI performance (>70%), they moved to the next phase, where they completed the navigation tasks. During navigation, users operated the robot with their individual classifier and the two integration frameworks. The navigation field was defined as a rectangular area (width: 900 cm; length: 600 cm) with five circular targets (T1–T5; radius: 25 cm) located at 300 cm and at −90°, −45°, 0°, 45°, and 90° from the starting point (S) at the center (450, 150 cm). A task consisted in driving the robot from the initial position toward one of the five predefined targets [Fig. 4(b)].

Fig. 4. Experimental design. (a) Schematic representation of the experimental structure. In the first session (day), each user performed three BMI calibration runs (without controlling the robot) in order to create the model for the decoder. Afterward, the BMI decoder was tested in two BMI evaluation runs (again, without controlling the robot). In the evaluation block, users also tested both control frameworks (traditional and new dynamical approach) to determine the optimal parameters of the system. Finally, users performed two BMI navigation runs driving the robot. The navigation runs were equally divided per control modality. Sessions 2 and 3 (days 2 and 3) repeated the evaluation and navigation blocks. (b) Experimental field for the navigation tasks. Five targets (T1–T5) were defined. Targets were placed at 3 m from the start position of the robot and 45° from each other. The user sat outside the navigation field so as to see the position of the robot at any time. (c) BMI visual feedback controlled by the user and the corresponding change of the robot's heading direction in the traditional (discrete) and dynamical (continuous) control modality.
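For reference, the target coordinates implied by this geometry can be computed directly; a worked example that assumes "forward" is the +y direction of the field (our convention, not stated in the paper):

```python
import numpy as np

# Field: 900 x 600 cm, start S at (450, 150); targets 300 cm away at
# -90, -45, 0, 45, 90 degrees from the forward direction.
S = np.array([450.0, 150.0])
angles = np.deg2rad([-90, -45, 0, 45, 90])
targets = S + 300.0 * np.column_stack([np.sin(angles), np.cos(angles)])
# T1..T5 ~ (150, 150), (238, 362), (450, 450), (662, 362), (750, 150) cm
```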
As soon as the robot crossed the target's edge, the trial was considered successfully completed and the robot was manually repositioned at the starting point. Users were not instructed to follow specific trajectories, but they were asked to try to reach the target in the shortest possible time. A trial was considered unsuccessful if the robot left the rectangular area or if the target was not reached within 60 s. During the navigation tasks, users were able to see the robot, the targets, and the monitor displaying the visual feedback. Users performed between two and six navigation runs per session (depending on their level of fatigue). Each run consisted of ten navigation tasks (two repetitions per target), randomly shuffled. The two control modalities (discrete control with the traditional approach versus continuous control with the new dynamical system approach) were pseudorandomly assigned to each run (equal number of runs per control modality per session). Users performed 88 navigation runs in total (44 per control modality), for 880 tasks. A visual representation of the behavior of the robot according to the BMI feedback in the discrete and continuous control modalities is given in Fig. 4(c).

D. Mobile Robot

The robot is based on the Robotino platform by FESTO AG (Esslingen am Neckar, Germany), shown in Fig. 1(a). It is a small circular robot (diameter: 370 mm; height: 210 mm; weight: ∼11 kg) equipped with three holonomic wheels, an embedded PC/104 with a compact flash card, and nine infrared proximity sensors mounted in the robot's chassis at an angle of 40° from each other, with a working range of up to ∼150 mm (depending on light conditions). Furthermore, we added a laptop (Lenovo X201, Intel Core i5 2.53 GHz, 4 GB RAM, integrated Intel HD video controller) to the robot configuration to overcome the limited computational power of the embedded PC. The laptop was placed on a custom metallic structure fixed to the robot chassis and connected to the robot itself via an Ethernet interface.

E. Navigation System

The motion of the mobile robot relies on a navigation system based on local potential fields, inspired by the work of Bicho et al. [34] and Steinhage et al. [35]. It has already been extensively and successfully evaluated with healthy subjects and end-users in previous works with BMI-driven mobile robots [6], [22]–[24]. In this article, the robot moves forward at a constant speed (0.2 m/s). The angular velocity v of the robot is generated by the following equation:

$$v = (\xi - \xi_{ego})\; e^{-\frac{(\xi - \xi_{ego})^2}{2}} \qquad (6)$$

where (ξ − ξego) is the difference between the turning direction and the heading direction of the robot. The user controls the turning direction ξ by delivering BMI commands. In the discrete control modality (Sections II and III.C), ξ may assume two discrete angular values (±π/4), according to the BMI command delivered by the user (left or right). Conversely, in the continuous modality, the control signal is linearly mapped to the interval [−π/2, π/2] in order to continuously generate the robot's turning direction ξ. The entire navigation system was developed in the robot operating system (ROS) ecosystem. Robotino native libraries were wrapped into ROS packages in order to access the sensors' information and the motor controller. We developed two packages for bidirectional communication between the BMI and the ROS framework; in particular, we integrated standard interfaces used in the BMI field (Tobi Interface C and Tobi Interface D [40]) into the ROS environment.
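The steering law of (6) and the two mappings from control signal to turning direction can be sketched as follows (Python rather than the authors' ROS/C++ stack; the thresholds and the class-to-side assignment in the discrete branch are our assumptions):

```python
import numpy as np

FORWARD_SPEED = 0.2  # m/s, constant

def angular_velocity(xi, xi_ego):
    """Attractor dynamics of Eq. (6): steer toward the turning direction xi."""
    d = xi - xi_ego
    return d * np.exp(-d ** 2 / 2.0)

def turning_direction(y, continuous=True):
    """Map the control signal y in [0, 1] to the turning direction xi.

    Continuous mode: linear map onto [-pi/2, pi/2], as in the paper.
    Discrete mode: +/- pi/4 once a command threshold is crossed.
    """
    if continuous:
        return (y - 0.5) * np.pi
    if y > 0.8:
        return np.pi / 4
    if y < 0.2:
        return -np.pi / 4
    return 0.0
```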
Fig. 5. Initial BMI decoder results. (a) Topographic representation of the most selected features during the calibration block for the μ and β bands. (b) BMI trial accuracy in the evaluation runs. Overall trial accuracy in black; trial accuracy per control framework in blue and red. (c) BMI trial duration in the evaluation runs. Overall trial duration in black; trial duration per control framework in blue and red. Mean and standard error of the mean are reported. Statistically significant differences are shown with two-sided Wilcoxon rank-sum tests, (∗): p < 0.05; (∗∗∗): p < 0.001.

F. Tracking System

Given the unreliability of the robot's odometry, trajectories were recorded by an external camera (Microsoft Kinect v2) located 6 m above the navigation field. A red spherical marker was placed on top of the robot to perform automatic detection of the robot within each frame of the recorded video stream. Detection was based on HSV colors and the previous position. Image coordinates were then mapped to real-world trajectories with a homographic transform determined from ten world–image coordinate pairs. Localization and coordinate transformation were performed a posteriori using the OpenCV library (version 3.2.0, http://opencv.org/). Finally, trajectories were smoothed using a moving average filter over 25 data points for each time step.
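The homography and smoothing steps correspond to standard OpenCV calls; a sketch with made-up calibration pairs (the real world–image correspondences are not reported in the paper):

```python
import numpy as np
import cv2

# Ten world-image calibration pairs (illustrative values, not the authors').
img_pts = np.float32([[100, 80], [540, 75], [980, 90], [120, 400], [520, 410],
                      [960, 395], [140, 700], [540, 690], [940, 705], [545, 380]])
world_pts = np.float32([[0, 0], [450, 0], [900, 0], [0, 300], [450, 300],
                        [900, 300], [0, 600], [450, 600], [900, 600], [450, 280]])

H, _ = cv2.findHomography(img_pts, world_pts)

def to_world(traj_px):
    """Map an (N, 2) pixel trajectory to field coordinates (cm)."""
    pts = traj_px.reshape(-1, 1, 2).astype(np.float32)
    return cv2.perspectiveTransform(pts, H).reshape(-1, 2)

def smooth(traj, k=25):
    """Moving-average smoothing over k points, as described above."""
    kernel = np.ones(k) / k
    return np.column_stack([np.convolve(traj[:, i], kernel, mode="same")
                            for i in range(2)])
```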
G. Statistical Analyses

All statistical analyses were performed by testing for significant differences at the 95% confidence level using unpaired, two-sided Wilcoxon nonparametric rank-sum tests.

IV. RESULTS

A. Initial BMI Decoder Screening

At the beginning of each recording session (day), we evaluated the BMI decoder in a classical cued protocol without the robot. The rationale is to obtain a ground truth of BMI performance before starting the navigation tasks. Participants were instructed to control a feedback bar on the screen according to the direction provided by a visual cue (see Section III.C). While using the same BMI decoder, participants performed the initial screening with both the aforementioned control frameworks. First, the spatial and spectral distribution of the features selected during calibration is coherent with the motor imagery tasks performed by the users. Indeed, Fig. 5(a) shows that channels C3 and C4 were the most selected in the μ band (50 and 52 times, versus ten times for Cz) and channel Cz in the β band (24 times, versus ten and 11 times for C3 and C4, respectively). These results are in line with the literature on the cortical regions involved in both-hands and both-feet motor imagery tasks [14]–[16]. Second, Fig. 5(b) and (c) report the BMI performance during the evaluation runs in terms of accuracy (i.e., percentage of successful trials) and time (i.e., duration of each trial). On average, participants achieved an accuracy of 89.9 ± 2.3% and were able to complete a trial in 4.6 ± 0.2 s. In more detail, the traditional control framework seems to perform better in such a classical BMI paradigm, with higher accuracy (93.1 ± 4.1% versus 86.7 ± 2.2%; p = 0.0006) and reduced time (4.0 ± 0.3 s versus 5.2 ± 0.4 s; p = 0.022).

B. Navigation Performance

We evaluated the navigation performance of the two control modalities according to three objective metrics: 1) distance to the ideal (manual) trajectory (Fréchet distance [41]; a computational sketch follows Fig. 6's caption below); 2) percentage of reached targets; and 3) time to reach the target. Fig. 6(a) illustrates the heat maps of the trajectories followed by all participants for the traditional (left) and the new (right) control modality. The maps have a 10 cm resolution, targets are indicated by white circles, and the color code ranges from blue (low coverage) to yellow (high coverage). Black lines represent the average trajectories per target, and dashed lines the ideal (manual) trajectories. Subpanels around the main image show the individual target heat maps. A preliminary visual inspection of the heat maps already highlights the advantages of the proposed control framework, especially for the lateral targets (T1 and T5), where participants required finer control of the robot. This observation is substantiated by the results in Fig. 6(b). On average (left column), the new control modality allows users to follow the ideal trajectories significantly better (Fréchet distance of 117.3 ± 7.7 cm versus 85.4 ± 5.0 cm, mean ± STD; p = 0.026). The results hold when considering each target separately (middle column), with statistical differences for the most lateral ones (T1: p = 0.002; T5: p = 0.039). In addition, the evolution of the distance over runs shows a significant improvement after the first day (right column; p = 0.013).

Fig. 6. Navigation results. (a) Heat maps of the trajectories performed by the robot for the discrete (left) and continuous (right) control modality. Map resolution is 10 cm. Targets T1–T5 are identified by white circles and the color code ranges from blue (low coverage) to yellow (high coverage). Solid black lines show the average trajectories and dashed lines the ideal manual trajectories per target. Subpanels around the maps report the coverage, the average, and the ideal trajectories for each individual target. (b) Fréchet distance to the ideal trajectories per control framework. From left to right: overall average distance, average distance per target, and evolution of the distance over runs. (c) Navigation accuracy per control framework, i.e., the percentage of targets successfully reached. From left to right: overall average accuracy, average accuracy per target, and evolution of the accuracy over runs. The black dashed line represents the chance level. (d) Duration (in seconds) of the navigation tasks per control framework. From left to right: overall average duration, average duration per target, and evolution of the duration over runs. Mean and standard error of the mean are reported. Results for the traditional and the new dynamical system control framework are shown in blue and red, respectively. Statistically significant differences are shown with two-sided Wilcoxon rank-sum tests, (∗): p < 0.05; (∗∗): p < 0.01.
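For reference, the trajectory-distance metric admits the standard discrete-Fréchet dynamic program; a sketch (the discrete variant is our assumption, since the exact formulation of [41] is not reproduced here):

```python
import numpy as np

def frechet_distance(p, q):
    """Discrete Frechet distance between trajectories p (N, 2) and q (M, 2)."""
    n, m = len(p), len(q)
    d = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=2)  # pairwise distances
    ca = np.full((n, m), np.inf)
    ca[0, 0] = d[0, 0]
    for i in range(1, n):
        ca[i, 0] = max(ca[i - 1, 0], d[i, 0])
    for j in range(1, m):
        ca[0, j] = max(ca[0, j - 1], d[0, j])
    for i in range(1, n):
        for j in range(1, m):
            ca[i, j] = max(min(ca[i - 1, j], ca[i - 1, j - 1], ca[i, j - 1]),
                           d[i, j])
    return ca[-1, -1]
```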
Fig. 7. Behavioral results from the navigation questionnaires. Users could answer with a score between 1 and 5. Average scores for the traditional control framework are shown in blue and for the new dynamical control framework in red. Mean and standard error of the mean are reported. Statistically significant differences are shown with two-sided Wilcoxon rank-sum tests, (∗): p < 0.05; (∗∗): p < 0.01; (∗∗∗): p < 0.0001.

TABLE II: NAVIGATION QUESTIONNAIRE

The second evaluation metric is the percentage of reached targets in the two conditions. Also in this case, the new approach ensures better navigation performance [Fig. 6(c)], with, on average (left column), a significant increment with respect to the traditional control framework (77.3 ± 3.3% versus 86.1 ± 2.6%, p = 0.048). Results in the middle column show similar consistency across targets, with significantly better performance especially for targets T3 and T4 (p = 0.043 and p = 0.015, respectively). Furthermore, the accuracy with the new control framework consistently improves over runs (right column), reaching a statistically significant difference on the second day (run 3; p = 0.022). Finally, in Fig. 6(d) we report an overall time improvement for the new control framework (33.6 ± 1.1 s versus 31.1 ± 0.8 s). Although this reduction is in line with the previous results (in terms of distance to the ideal trajectory and accuracy), no significant difference was found (p = 0.42).

C. Behavioral Results

At the end of each recording session, participants were asked to answer two questionnaires in order to assess their subjective evaluation of the two control modalities. Each questionnaire was composed of the same eight questions, which participants could rank with a score from 1 to 5, as reported in Table II. The average scores for the eight questions are reported in Fig. 7. Overall, the results show a general trend in favor of the new approach proposed in this article. In particular, questions Q2 (control precision, p = 0.006), Q4 (keeping the forward direction, p = 0.030), Q5 (effort, p = 0.045), and Q8 (behavior preference, p = 0.000001) show a significant positive impact. These questions are directly related to the design goals of the new dynamical system control framework. Furthermore, in both conditions participants felt in control of the robot (Q1, score: 3.8 ± 0.2 versus 4.1 ± 0.1; Q3, score: 3.8 ± 0.2 versus 3.7 ± 0.2). Finally, the fact that we let them decide whether to focus their attention on the robot itself or on the visual feedback does not seem to be a confounding factor for the experiment (Q6, score: 3.4 ± 0.3 versus 3.8 ± 0.3; Q7, score: 3.5 ± 0.3 versus 3.7 ± 0.3).

V. DISCUSSION

This article aims at providing a continuous control modality for a BMI-driven mobile robot. Most BMI research focuses on applications based on discrete interaction strategies to drive robotic devices [5], [6], [18]–[25]. Although there are some examples of continuous BMI control [7], [28], [29], they are scarce, and the investigation of new formal techniques to interpret the user's intention is often neglected. In this scenario, we have hypothesized that a key aspect to achieve such a continuous interaction is the control approach used to translate the BMI decoder output into a control signal for the robotic device.
For the first time, we have faced this challenge by formally designing a new control framework for BMI-driven mobile robots and by directly comparing its performance with that of a traditional approach in a demanding scenario where users continuously drove the device.

A. Continuous Interaction and Navigation Performances

First of all, the results showed that the proposed control framework allowed such a continuous interaction modality between the user and the mobile robot. As a consequence, users were able to reliably generate continuous navigation trajectories decoded from their brain activity. In the literature, other works using a continuous control strategy rely on the ability of the users to perform up to six motor imagery tasks and, consequently, to generate as many discriminant brain patterns to control the robotic devices [28], [29]. However, these approaches can hardly be applied in real-world scenarios or for daily usage of MI BMI applications because of the high physical and mental demands they place on the user. This is particularly true for end-users with motor disabilities, who have never been reported to operate an MI BMI with more than two or three classes. It is worth noting that our approach achieved a continuous interaction between the BMI user and the robot without any modification of the classical workflow of a 2-class motor imagery BMI, which has been largely demonstrated to be suitable for end-users [6], [24], [31].

Furthermore, the comparison between the traditional and the new approach highlighted consistent and significant improvements in terms of navigation performance. Specifically, the distance to the ideal (manual) trajectory [Fig. 6(b)] is significantly reduced (p < 0.05). Moreover, the new control framework allowed users to increase the percentage of successfully completed navigation tasks [Fig. 6(c)]. This is particularly evident in the case of the most difficult targets (T1 and T5), where users required finer control to complete the task. In the case of the duration of the navigation tasks, we did not find significant differences between the two conditions [although the time is slightly reduced for the new approach, Fig. 6(d)]. This is probably due to the short duration of the navigation task (∼30 s), which prevents a clear differentiation between the two control conditions. Finally, the results of the subjective evaluation (Fig. 7) suggest a positive impact of the new continuous interaction modality with the robotic device. In summary, the achieved results support our hypothesis that a continuous interaction is feasible by means of a purposely designed control framework for an MI BMI-actuated robot.

B. Coupling Between BMI User and Machine

Improving the coupling between user and machine is a fundamental aspect of any robotic application, and especially of BMI-driven devices. In the literature, it has been suggested that enhancing this interaction not only increases the operational performance but also promotes the acquisition of BMI skills by the user, namely, the ability to generate more reliable and stable brain patterns [31]. Here, we suggest that the new control framework facilitates this coupling in comparison to traditional approaches.
Although it is difficult to evaluate the coupling directly with quantitative metrics, we propose to infer it from the results presented in this article and, in particular, from the temporal evolution of the navigation performance. Interestingly, the evolution over runs of the three navigation metrics [Fig. 6(b)–(d), right column] suggests that the new control framework fosters the user's learning to better control the mobile robot. Indeed, the results show that, while users had similar performance in the first run [Fig. 6(b), right column], a significant reduction of the Fréchet distance occurred already in the second run, but only for the new proposed approach (red line, p < 0.05). With the traditional control framework, users reached similar performance only in the last run of the experiment. In other words, the new control framework allowed users to learn to drive the robot more precisely in a shorter time.

The evolution of task accuracy and duration can be interpreted in a similar way. In the first run, users achieved the same task accuracy in both conditions [∼75%, Fig. 6(c), right column]. In the traditional control condition, the accuracy remained stable until the last run (blue line, run 5), while with the new approach it reached a plateau of ∼90% already in the second run (red line). Although the task duration does not show any statistical difference, the trend is the same as for the two previous metrics: already in the second run, the duration is reduced only with the new control approach [Fig. 6(d), red line].

The subjective results from the questionnaire are in line with these considerations (Fig. 7): users indicated not only an overall significant preference for the new control framework (question Q8, p < 0.0001) but also a more natural, precise, and easy interaction with it (questions Q2, p < 0.01, and Q4, p < 0.05). Moreover, it is worth highlighting that users reported less effort to control the robot in the continuous control modality (question Q5, p < 0.05), even if, theoretically, it is more demanding.

Furthermore, it is worth mentioning the apparent discrepancy between the outcomes of the initial BMI screening (without the robot) and of the navigation tasks. Indeed, users achieved substantially higher BMI accuracy with the traditional approach (p < 0.001) in the evaluation runs, when they were asked to control only the visual feedback on the screen [Fig. 5(b) and (c)]. However, as already discussed, the introduction of the new dynamical system control framework led to significant improvements at the robotic application level, suggesting a better coupling between user and machine. This opens the discussion on the fact that metrics commonly used in the BMI field (such as decoder accuracy) might not be fully informative for predicting and evaluating the performance of neurorobotic applications [42]. Indeed, to accomplish complex tasks, such as driving a mobile robot, users not only need to repeatedly deliver mental commands as fast as possible (as in common BMI protocols) but also to plan ahead and make corrections when needed. This spotlights the importance of designing a control framework that explicitly handles the requirements of the specific BMI application in order to improve the coupling between user and machine and, as a consequence, the overall performance of the system.
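The run-wise and condition-wise comparisons in this section rely on two-sided Wilcoxon rank-sum tests (see Figs. 6 and 7). A minimal sketch of such a comparison with SciPy, using made-up placeholder values (not the study's data) for the thirteen participants, could look as follows.

```python
from scipy.stats import ranksums

# Hypothetical per-participant Frechet distances (cm) in one run;
# these numbers are placeholders for illustration only.
traditional = [130.2, 118.4, 125.9, 140.1, 112.7, 128.3, 121.5,
               135.0, 117.8, 126.4, 131.9, 122.2, 129.6]
dynamical   = [ 92.1,  81.5,  88.7,  95.2,  79.9,  86.3,  84.0,
                90.8,  82.6,  87.1,  93.4,  80.2,  85.5]

stat, p = ranksums(traditional, dynamical)  # two-sided by default
print(f"Wilcoxon rank-sum statistic = {stat:.2f}, p = {p:.4g}")
```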
C. Extension to Other BMI Robotic Applications

The proposed control framework has been explicitly designed and successfully evaluated for a teleoperated mobile robotic platform. From a control perspective, the extension to similar BMI applications for motor substitution (e.g., a powered wheelchair) is straightforward: for instance, users may drive a powered wheelchair by continuously controlling the turning direction with the proposed approach, as in the case of the mobile robot. Similarly, the new control framework may be applied to BMI-driven lower limb exoskeletons. In the literature, most studies use a discrete interaction modality to deliver commands to the device (e.g., go forward, turn right or left) [43]–[47]. In these cases, the proposed approach might support the generation of continuous trajectories for the exoskeleton. A different scenario is the decoding of brain patterns related to the user's intention to make a left or right step [48]. During a walking task, the intended action (the step) is discrete per se, and it does not make sense to provide a continuous interaction modality. However, the generation of a continuous control signal might be useful when users are asked to perform robot-assisted leg extension/flexion exercises (e.g., in a rehabilitation scenario). In this case, our approach might promote fine control of the robotic device and thus improve the rehabilitation outcomes.

The need for a continuous interaction modality is not limited to mobile applications. The same approach can also be applied to operate robotic arms or upper limb exoskeletons, where three-dimensional (3-D) control would be desirable. In the literature, the operation of such devices is limited to two-dimensional (2-D) control strategies that directly remap the EEG brain patterns into arm trajectories [7], [25]. Herein, we speculate on the possibility of generating 3-D continuous trajectories by properly designing the control framework of a 3-class MI BMI. While two classes would be used to control the device in the x-y plane (as for the mobile platform developed in this work), the third one would be translated into motion along the z dimension. The motion trajectories would be generated by extending the dynamical system equations to 3-D space. Nevertheless, an extensive evaluation in real closed-loop BMI experiments is definitely required to prove the feasibility of this approach.

D. Future Work

We plan to further improve the new control framework by facilitating the choice of the parameters in the dynamical system equations. Although the results demonstrated the validity of this approach, the parameterization of the control system is still suboptimal. Equations (3), (4), and (5) depend on the parameters ψ, ω, φ, and χ, which adjust the strength and the position of the attractors/repellers, balance the contributions of Ffree and FBMI, and set the overall reactiveness of the system. The initial ranges of these parameters were obtained by analyses of prerecorded data. However, in the first session of the experiment, the operator had to heuristically tune the parameters to optimize the behavior for each user. This should be avoided in order to reduce both the human intervention and possible variability in the performance.
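As a sketch of how such a data-driven parameterization could look, the snippet below sweeps candidate values of ψ and ω while holding φ and χ fixed, in the spirit of the a posteriori analysis described next. The parameter container and the scoring function are illustrative stand-ins; the actual semantics of ψ, ω, φ, and χ are defined by (3)–(5).

```python
from dataclasses import dataclass
from itertools import product

@dataclass
class ControlParams:
    psi: float    # strength of the attractors/repellers
    omega: float  # position of the attractors/repellers
    phi: float    # balance between Ffree and FBMI
    chi: float    # overall reactiveness of the system

def simulated_score(params):
    # Stand-in objective: a real analysis would replay prerecorded BMI
    # outputs through the control law and score the simulated
    # trajectories (e.g., negative Frechet distance to the ideal path).
    return -((params.psi - 0.6) ** 2 + (params.omega - 0.3) ** 2)

def select_parameters(psis, omegas, phi=0.5, chi=1.0):
    """Grid search over (psi, omega) only, with phi and chi fixed."""
    candidates = (ControlParams(p, w, phi, chi)
                  for p, w in product(psis, omegas))
    return max(candidates, key=simulated_score)

best = select_parameters(psis=[0.2, 0.4, 0.6, 0.8],
                         omegas=[0.1, 0.3, 0.5])
```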
To reduce this manual tuning, we performed a posteriori analyses with a twofold goal: 1) to reduce the number of parameters controlling the behavior of the dynamical system; and 2) to predict the optimal subject-specific values of the parameters from the calibration data. Preliminary results suggest that it is feasible to control the overall behavior of the framework with only the two parameters related to the strength and position of the attractors/repellers (i.e., ψ and ω). Furthermore, simulated online performances support the possibility of predicting the optimal values from calibration data. However, further studies are required to verify these preliminary results and, especially, to evaluate them in a closed-loop online experiment.

A second future development will be to integrate information from the environment by exploiting the robot's sensors. The effectiveness of this approach, namely shared control, has already been demonstrated in the past [5], [6], [18], [21]–[24], where the robot's intelligence was exploited to avoid obstacles in the path. In our new approach, we plan to change the force fields of the BMI control framework directly according to environmental information, in order to adapt the BMI outputs to the arrangement of objects around the robot (e.g., walls, tables, chairs) and to prevent the execution of wrong or suboptimal user commands. Such a system needs to be evaluated in scenarios more complex than the one in this article, where the user will need to accomplish demanding navigation tasks even in the presence of moving obstacles.

VI. CONCLUSION

In this article, we proposed a new control framework for an MI BMI-driven mobile robot. We hypothesized that such a novel approach would allow users to continuously control the robot and would have a significant impact on the navigation performance as well as on the human–machine interaction. Thirteen healthy users evaluated the new control framework in comparison to a discrete approach usually exploited in the BMI field. The experiment lasted three sessions (days) and consisted in total of 880 repetitions of the navigation tasks. The results confirmed our hypothesis and showed the possibility of using a continuous control strategy to drive the robot via a classical 2-class MI BMI system. Furthermore, the results highlighted an improvement of the navigation performance in all three evaluation metrics: distance to the ideal trajectory, percentage of reached targets, and time to complete the tasks. In addition to providing a new approach that allows BMI users to continuously drive a mobile robotic platform, this article aimed at spotlighting the importance of the control framework in promoting successful operation and fostering the translational impact of BMI-driven robotic applications.

REFERENCES

[1] J. R. Wolpaw, N. Birbaumer, D. J. McFarland, G. Pfurtscheller, and T. M. Vaughan, "Brain–computer interfaces for communication and control," Clin. Neurophysiol., vol. 113, no. 6, pp. 767–791, 2002.
[2] D. Borton, S. Micera, J. del R. Millán, and G. Courtine, "Personalized neuroprosthetics," Sci. Transl. Med., vol. 5, no. 210, p. 210rv2, 2013.
[3] J. D. R. Millán, F. Renkens, J. Mouriño, and W. Gerstner, "Noninvasive brain-actuated control of a mobile robot by human EEG," IEEE Trans. Biomed. Eng., vol. 51, no. 6, pp. 1026–1033, Jun. 2004.
[4] J. D. R. Millán, F. Galán, D. Vanhooydonck, E. Lew, J. Philips, and M. Nuttin, "Asynchronous non-invasive brain-actuated control of an intelligent wheelchair," in Proc. Int. Conf. IEEE Eng. Med. Biol. Soc., 2009, pp. 3361–3364.
[5] T. Carlson and J. D. R. Millán, "Brain-controlled wheelchairs: A robotic architecture," IEEE Robot. Autom. Mag., vol. 20, no. 1, pp. 65–73, Mar. 2013.
[6] R. Leeb, L. Tonin, M. Rohm, L. Desideri, T. Carlson, and J. D. R. Millán, "Towards independence: A BCI telepresence robot for people with severe motor disabilities," Proc. IEEE, vol. 103, no. 6, pp. 969–982, Jun. 2015.
[7] J. Meng, S. Zhang, A. Bekyo, J. Olsoe, B. Baxter, and B. He, "Noninvasive electroencephalogram based control of a robotic arm for reach and grasp tasks," Sci. Rep., vol. 6, no. 1, 2016, Art. no. 38565.
[8] B. Rebsamen et al., "Controlling a wheelchair indoors using thought," IEEE Intell. Syst., vol. 22, no. 2, pp. 18–24, Mar.–Apr. 2007.
[9] C. J. Bell, P. Shenoy, R. Chalodhorn, and R. P. N. Rao, "Control of a humanoid robot by a noninvasive brain–computer interface in humans," J. Neural Eng., vol. 5, no. 2, pp. 214–220, 2008.
[10] A. Chella et al., "A BCI teleoperated museum robotic guide," in Proc. Int. Conf. Complex, Intell. Softw. Intensive Syst., 2009, pp. 783–788.
[11] I. Iturrate, J. M. Antelis, A. Kübler, and J. Minguez, "A noninvasive brain-actuated wheelchair based on a P300 neurophysiological protocol and automated navigation," IEEE Trans. Robot., vol. 25, no. 3, pp. 614–627, Jun. 2009.
[12] C. Escolano, A. R. Murguialday, T. Matuz, N. Birbaumer, and J. Minguez, "A telepresence robotic system operated with a P300-based brain-computer interface: Initial tests with ALS patients," in Proc. Int. Conf. IEEE Eng. Med. Biol. Soc., 2010, pp. 4476–4480.
[13] B. Rebsamen et al., "A brain controlled wheelchair to navigate in familiar environments," IEEE Trans. Neural Syst. Rehabil. Eng., vol. 18, no. 6, pp. 590–598, Dec. 2010.
[14] G. Pfurtscheller and F. H. Lopes da Silva, "Event-related EEG/MEG synchronization and desynchronization: Basic principles," Clin. Neurophysiol., vol. 110, no. 11, pp. 1842–1857, 1999.
[15] G. Pfurtscheller and C. Neuper, "Motor imagery and direct brain-computer communication," Proc. IEEE, vol. 89, no. 7, pp. 1123–1134, Jul. 2001.
[16] G. Pfurtscheller, C. Brunner, A. Schlögl, and F. H. Lopes da Silva, "μ rhythm (de)synchronization and EEG single-trial classification of different motor imagery tasks," Neuroimage, vol. 31, no. 1, pp. 153–159, 2006.
[17] E. Thomas, M. Dyson, and M. Clerc, "An analysis of performance evaluation for motor-imagery based BCI," J. Neural Eng., vol. 10, no. 3, 2013, Art. no. 031001.
[18] G. Vanacker et al., "Context-based filtering for assisted brain-actuated wheelchair driving," Comput. Intell. Neurosci., vol. 2007, 2007, Art. no. 25130.
[19] F. Galán et al., "A brain-actuated wheelchair: Asynchronous and noninvasive brain-computer interfaces for continuous control of robots," Clin. Neurophysiol., vol. 119, no. 9, pp. 2159–2169, 2008.
[20] R. Zhang et al., "Control of a wheelchair in an indoor environment based on a brain–computer interface and automated navigation," IEEE Trans. Neural Syst. Rehabil. Eng., vol. 24, no. 1, pp. 128–139, Jan. 2016.
[21] L. Tonin, R. Leeb, M. Tavella, S. Perdikis, and J. D. R. Millán, "The role of shared-control in BCI-based telepresence," in Proc. IEEE Int. Conf. Syst., Man, Cybern., 2010, pp. 1462–1466.
[22] L. Tonin, T. Carlson, R. Leeb, and J. D. R. Millán, "Brain-controlled telepresence robot by motor-disabled people," in Proc. Int. Conf. IEEE Eng. Med. Biol. Soc., 2011, pp. 4227–4230.
[23] T. Carlson, L. Tonin, S. Perdikis, R. Leeb, and J. D. R. Millán, "A hybrid BCI for enhanced control of a telepresence robot," in Proc. Int. Conf. IEEE Eng. Med. Biol. Soc., 2013, pp. 3097–3100.
[24] R. Leeb et al., "Transferring brain–computer interfaces beyond the laboratory: Successful application control for motor-disabled users," Artif. Intell. Med., vol. 59, no. 2, pp. 121–132, 2013.
[25] D. Kuhner, L. D. J. Fiederer, J. Aldinger, F. Burget, and M. Völker, "A service assistant combining autonomous robotics, flexible goal formulation, and deep-learning-based brain-computer interfacing," Robot. Auton. Syst., vol. 116, pp. 98–113, 2019.
[26] A. Satti, D. Coyle, and G. Prasad, "Continuous EEG classification for a self-paced BCI," in Proc. Int. Conf. IEEE Eng. Med. Biol. Soc., 2009, pp. 315–318.
[27] D. Coyle, J. Garcia, A. R. Satti, and T. M. McGinnity, "EEG-based continuous control of a game using a 3 channel motor imagery BCI," in Proc. IEEE Symp. Comput. Intell., Cogn. Algorithms, Mind, Brain, 2011, pp. 1–7.
[28] A. J. Doud, J. P. Lucas, M. T. Pisansky, and B. He, "Continuous three-dimensional control of a virtual helicopter using a motor imagery based brain-computer interface," PLoS One, vol. 6, no. 10, 2011, Art. no. e26322.
[29] K. LaFleur, K. Cassady, A. Doud, K. Shades, E. Rogin, and B. He, "Quadcopter control in three-dimensional space using a noninvasive motor imagery-based brain–computer interface," J. Neural Eng., vol. 10, no. 4, 2013, Art. no. 046003.
[30] S. Perdikis, L. Tonin, and J. D. R. Millán, "Brain racers: How paralyzed athletes used a brain-computer interface to win gold at the Cyborg Olympics," IEEE Spectr., vol. 54, no. 9, pp. 44–51, Sep. 2017.
[31] S. Perdikis, L. Tonin, S. Saeedi, C. Schneider, and J. D. R. Millán, "The Cybathlon BCI race: Successful longitudinal mutual learning with two tetraplegic users," PLoS Biol., vol. 16, no. 5, 2018, Art. no. e2003787.
[32] B. Blankertz et al., "The BCI competition III: Validating alternative approaches to actual BCI problems," IEEE Trans. Neural Syst. Rehabil. Eng., vol. 14, no. 2, pp. 153–159, Jan. 2006.
[33] G. Schöner and M. Dose, "A dynamical systems approach to task-level system integration used to plan and control autonomous vehicle motion," Robot. Auton. Syst., vol. 10, pp. 253–267, 1992.
[34] E. Bicho and G. Schöner, "The dynamic approach to autonomous robotics demonstrated on a low-level vehicle platform," Robot. Auton. Syst., vol. 21, no. 1, pp. 23–35, 1997.
[35] A. Steinhage and R. Schöner, "The dynamic approach to autonomous robot navigation," in Proc. IEEE Int. Symp. Ind. Electron., vol. 1, 2002, pp. SS7–S12.
[36] E. S. Gardner, "Exponential smoothing: The state of the art—Part II," Int. J. Forecast., vol. 22, no. 4, pp. 637–666, 2006.
[37] D. J. McFarland, L. M. McCane, S. V. David, and J. R. Wolpaw, "Spatial filter selection for EEG-based communication," Electroencephalogr. Clin. Neurophysiol., vol. 103, no. 3, pp. 386–394, 1997.
[38] F. Galán, P. W. Ferrez, F. Oliva, J. Guardia, and J. D. R. Millán, "Feature extraction for multi-class BCI using canonical variates analysis," in Proc. IEEE Int. Symp. Intell. Signal Process., 2007, p. 111.
[39] J. D. R. Millán, P. W. Ferrez, F. Galán, E. Lew, and R. Chavarriaga, "Noninvasive brain-machine interaction," Int. J. Pattern Recognit. Artif. Intell., vol. 22, no. 5, pp. 959–972, 2008.
[40] G. R. Müller-Putz et al., "Tools for brain-computer interaction: A general concept for a hybrid BCI," Front. Neuroinform., vol. 5, p. 30, 2011.
[41] H. Alt and M. Godau, "Computing the Fréchet distance between two polygonal curves," Int. J. Comput. Geom. Appl., vol. 5, no. 1–2, pp. 75–91, 1995.
[42] R. Chavarriaga, M. Fried-Oken, S. Kleih, F. Lotte, and R. Scherer, "Heading for new shores! Overcoming pitfalls in BCI design," Brain-Comput. Interfaces, vol. 4, no. 1–2, pp. 60–73, 2017.
[43] Y. He, D. Eguren, J. M. Azorín, R. G. Grossman, T. P. Luu, and J. L. Contreras-Vidal, "Brain-machine interfaces for controlling lower-limb powered robotic systems," J. Neural Eng., vol. 15, no. 2, 2018, Art. no. 021004.
[44] A. H. Do, P. T. Wang, C. E. King, S. N. Chun, and Z. Nenadic, "Brain-computer interface controlled robotic gait orthosis," J. Neuroeng. Rehabil., vol. 10, no. 1, p. 111, 2013.
[45] A. Kilicarslan, S. Prasad, R. G. Grossman, and J. L. Contreras-Vidal, "High accuracy decoding of user intentions using EEG to control a lower-body exoskeleton," in Proc. Int. Conf. IEEE Eng. Med. Biol. Soc., 2013, pp. 5606–5609.
[46] E. López-Larraz et al., "Control of an ambulatory exoskeleton with a brain-machine interface for spinal cord injury gait rehabilitation," Front. Neurosci., vol. 10, p. 359, 2016.
[47] K. Lee, D. Liu, L. Perroud, R. Chavarriaga, and J. D. R. Millán, "A brain-controlled exoskeleton with cascaded event-related desynchronization classifiers," Robot. Auton. Syst., vol. 90, pp. 15–23, 2016.
[48] D. Liu et al., "EEG-based lower-limb movement onset decoding: Continuous classification and asynchronous detection," IEEE Trans. Neural Syst. Rehabil. Eng., vol. 26, no. 8, pp. 1626–1635, Aug. 2018.

Luca Tonin (M'19) received the Ph.D. degree in robotics from the École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland, in 2013. He then pursued three years of postdoctoral research at the Intelligent Autonomous Systems Laboratory (IAS-Lab), University of Padova, Padua, Italy. Since 2016, he has been a Postdoctoral Researcher with the Defitech Chair in Brain-Machine Interface at EPFL. He is currently a Senior Postdoctoral Researcher with the Intelligent Autonomous Systems Laboratory (IAS-Lab), University of Padova. His research focuses on advanced techniques for brain–machine interface (BMI)-driven robotic devices. His main contribution to the BMI field is the design of novel shared control approaches to improve reliability and to enhance the coupling between user and robot. In 2016, Dr. Tonin won the first international Cybathlon paralympic event in the BMI race discipline as a co-leader of the BrainTweakers team.

Felix Christian Bauer received the M.Sc. degree in physics in 2017 from ETH Zurich, Zurich, Switzerland, where he is currently working toward a Teaching Diploma in physics. He is currently a Research and Development Engineer with aiCTX AG, Zurich, Switzerland, working on the development of neuromorphic hardware applications. His research interests include noninvasive brain–machine interfaces, artificial intelligence, neural network architectures, and neuromorphic hardware.

José del R. Millán (F'17) received the Ph.D. degree in computer science from the Universitat Politècnica de Catalunya, Barcelona, Spain, in 1992.
He is currently with the Department of Electrical & Computer Engineering and the Department of Neurology of the University of Texas at Austin, Austin, TX, USA, where he holds the Carol Cockrell Curran Endowed Chair. Previously, he held the Defitech Foundation Chair at the École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland, from 2009 to 2019, where he helped establish the Center for Neuroprosthetics. Dr. Millán has made several seminal contributions to the field of brain–machine interfaces (BMI), especially based on electroencephalogram signals. Most of his achievements revolve around the design of brain-controlled robots. He has received several recognitions for these seminal and pioneering achievements, notably the IEEE SMC Norbert Wiener Award in 2011. In recent years, he has prioritized the translation of BMI to end-users suffering from motor disabilities. As an example of this endeavor, his team won the first Cybathlon BMI race in October 2016.