text | __index_level_0__ |
---|---|
Title: Text-based Interpretable Depression Severity Modeling via Symptom Predictions
Abstract: Mood disorders in general and depression in particular are common, and their impact on individuals and society is high. Roughly 5% of adults worldwide suffer from depression. Commonly, depression diagnosis involves questionnaires, either clinician-rated or self-reported. Due to the subjectivity of questionnaire methods and the high human-related costs involved, there are ongoing efforts to find more objective and easily attainable depression markers. As is the case with recent audio, visual and linguistic applications, state-of-the-art approaches for automated depression severity prediction heavily depend on deep learning and black-box modeling without explainability and interpretability considerations. However, for reasons ranging from regulation to understanding the extent and limitations of a model, clinicians need to understand its decision-making process to form their decisions confidently. In this work, we focus on text-based depression severity prediction on the DAIC-WOZ corpus and use the PHQ-8 questionnaire items to predict symptoms as interpretable high-level features. We show that, using a multi-task regression approach with state-of-the-art text-based features to predict the depression symptoms, it is possible to reach a test-set Concordance Correlation Coefficient (CCC) comparable to state-of-the-art systems. | 711,023 |
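The evaluation metric named in this abstract, the Concordance Correlation Coefficient (CCC), has a standard closed form. The sketch below (not taken from the paper) shows how it could be computed for predicted vs. reference PHQ-8 scores; the example arrays are purely illustrative.

```python
import numpy as np

def concordance_ccc(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Concordance Correlation Coefficient (Lin, 1989) between two score series."""
    mu_t, mu_p = y_true.mean(), y_pred.mean()
    var_t, var_p = y_true.var(), y_pred.var()            # population variances
    cov = ((y_true - mu_t) * (y_pred - mu_p)).mean()      # population covariance
    return 2 * cov / (var_t + var_p + (mu_t - mu_p) ** 2)

# Example: reference PHQ-8 totals vs. model predictions (illustrative values only)
ref = np.array([4, 10, 15, 7, 21], dtype=float)
pred = np.array([5, 9, 13, 8, 18], dtype=float)
print(f"CCC = {concordance_ccc(ref, pred):.3f}")
```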
Title: SpiceWare: Simulating Spice Using Thermally Adjustable Dinnerware to Bridge Cultural Gaps
Abstract: Preference and tolerance towards spicy food may vary depending on culture, location, upbringing, personality and even gender. Because of this, spicy food can often affect social interaction at the dining table, especially if it is presented as a cultural dish. We propose SpiceWare, a thermally adjustable spoon that alters the perception of spice to improve cross-cultural communication. SpiceWare is a 3D-printed aluminium spoon that houses a Peltier element providing thermal feedback up to 45°C, which can alter the taste perception of the user. As an initial evaluation, we conducted a workshop among participants of varying cultural backgrounds and observed their interaction when dining on spicy food. We found that the overall interaction was perceived to be more harmonious, and we discuss potential future work on improving the system. | 711,024 |
Title: RemoconHanger: Making Head Rotation in Remote Person using the Hanger Reflex
Abstract: For remote collaboration, it is essential to intuitively grasp the situation and spatial location. However, the difficulty in grasping information about the remote user’s orientation can hinder remote communication. For example, if a remote user turns his or her head to the right to operate a device on the right, and this sensation cannot be shared, the image sent by the remote user suddenly appears to flow laterally, and it will lose the positional relationship like Figure 1 (left). Therefore, we propose a device using the “hanger reflex” to experience the sensation of head rotation intuitively. The “hanger reflex” is a phenomenon in which the head turns unconsciously when a wire hanger is placed on the head. It has been verified that the sensation of turning is produced by the distribution of pressure exerted by a device worn on the head. This research aims to construct a mechanism to verify its effectiveness for telecommunication that can unconsciously experience the remote user’s rotation sensation using the hanger reflex phenomenon. An inertial measurement unit (IMU) grasps the remote user’s rotation information like Figure 1 (right). | 711,025 |
Title: Wemoji: Towards Designing Complementary Communication Systems in Augmented Reality
Abstract: ABSTRACT Augmented Reality (AR) can enable new forms of self-expression and communication. However, little is known about how AR experiences should be designed to complement face-to-face communication. We present an initial set of insights derived from the iterative design of a mobile AR app called Wemoji. It enables people to emote or react to one another by spawning visual AR effects in a shared physical space. As an additional communication medium, it can help add volume and dimension to what is exchanged. We outline a design space for AR complementary communication systems, and offer a set of insights from initial testing that points towards how AR can be used to enhance same-time same-place social interactions. | 711,026 |
Title: Towards using Breathing Features for Multimodal Estimation of Depression Severity
Abstract: Breathing patterns are shown to have strong correlations with emotional states, and hence have promise for automatic mood disorder prediction and analysis. An essential challenge here is the lack of ground truth for breathing sounds, especially for medical and archival datasets. In this study, we provide a cross-dataset approach for breathing pattern prediction and analyse the contribution of predicted breath signals to the detection of depressive states, using the DAIC-WOZ corpus. We use interpretable features in our models to provide actionable insights. Our experimental evaluation shows that in participants with higher depression scores (as indicated by the eight-item Patient Health Questionnaire, PHQ-8), breathing events tend to be shallow or slow. We furthermore tested linear and non-linear regression models with breathing, linguistic sentiment and conversational features, and show that these simple models outperform the AVEC17 Real-life Depression Recognition Sub-challenge baseline. | 711,027 |
Title: ARfy: A Pipeline for Adapting 3D Scenes to Augmented Reality
Abstract: ABSTRACT Virtual content placement in physical scenes is a crucial aspect of augmented reality (AR). This task is particularly challenging when the virtual elements must adapt to multiple target physical environments unknown during development. AR authors use strategies such as manual placement performed by end-users, automated placement powered by author-defined constraints, and procedural content generation to adapt virtual content to physical spaces. Although effective, these options require human effort or annotated virtual assets. As an alternative, we present ARfy, a pipeline to support the adaptive placement of virtual content from pre-existing 3D scenes in arbitrary physical spaces. ARfy does not require intervention by end-users or asset annotation by AR authors. We demonstrate the pipeline capabilities using simulations on a publicly available indoor space dataset. ARfy makes any generic 3D scene automatically AR-ready and provides evaluation tools to facilitate future research on adaptive virtual content placement. | 711,028 |
Title: Rapid Prototyping Dynamic Robotic Fibers for Tunable Movement
Abstract: Liquid crystal elastomers (LCEs) are promising shape-changing actuators for soft robotics in human–computer interaction (HCI). Current LCE manufacturing processes, such as fiber-drawing, extrusion, and 3D printing, face limitations in form-giving and accessibility. We introduce a novel rapid-prototyping approach for thermo-responsive LCE fiber actuators based on vacuum molding extrusion. Our contributions are threefold: a) a vacuum fiber molding (VFM) machine, b) LCE actuators with customizable fiber shapes, and c) open-source hackability of the machine. We build and test the VFM machine to generate shape-changing movements from four fiber actuators (pincer, curl, ribbon, and hook), and we look at how these new morphologies bridge towards soft robotic device integration. | 711,029 |
Title: A11yBoard: Using Multimodal Input and Output to Make Digital Artboards Accessible to Blind Users
Abstract: ABSTRACT We present A11yBoard, an interactive multimodal system that makes interpreting and authoring digital artboards, such as presentation slides or vector drawings, accessible to blind and low-vision (BLV) users. A11yBoard combines a web-based application with a mobile touch screen device such as a smartphone or tablet. The artboard is mirrored from the PC onto the touch screen, enabling spatial exploration of the artboard via touch and gesture. In addition, speech recognition and non-speech audio are used for input and output, respectively. Finally, keyboard input is used with a custom search-driven command line interface to access various commands and properties. These modalities combine into a rich, accessible system in which artboard contents, such as shapes, lines, text boxes, and images, can be interpreted, generated, and manipulated with ease. With A11yBoard, BLV users can not only consume accessible content, but create their own as well. | 711,030 |
Title: HapticPuppet: A Kinesthetic Mid-air Multidirectional Force-Feedback Drone-based Interface
Abstract: ABSTRACT Providing kinesthetic force-feedback for human-scale interactions is challenging due to the relatively large forces needed. Therefore, robotic actuators are predominantly used to deliver this kind of haptic feedback; however, they offer limited flexibility and spatial resolution. In this work, we introduce HapticPuppet, a drone-based force-feedback interface which can exert multidirectional forces onto the human body. This can be achieved by attaching strings to different parts of the human body such as fingers, hands or ankles, which can then be affixed to multiple coordinated drones - puppeteering the user. HapticPuppet opens up a wide range of potential applications in virtual, augmented and mixed reality, exercising, physiotherapy, remote collaboration as well as haptic guidance. | 711,031 |
Title: Evaluating Just-In-Time Vibrotactile Feedback for Communication Anxiety
Abstract: ABSTRACT Wrist-worn vibrotactile feedback has been heralded as a promising intervention for reducing state anxiety during stressor events. However, current work has focused on the continuous delivery of the vibrotactile stimulus, which entails the risk of habituation to the potentially relieving effects of the feedback. This paper examines the just-in-time administration of vibrotactile feedback during a public speaking task in an effort to reduce communication apprehension. We evaluate two types of vibrotactile feedback delivery mechanisms compared to a control in a between-subjects design – one that delivers stimulus over random time points and one that delivers stimulus during moments of heightened physiological reactivity, as determined by changes in electrodermal activity. The results from these interventions indicate that vibrotactile feedback administered during high physiological arousal improves stress-related physiological measures (e.g., heart rate) and self-reported stress annotations early on in the intervention, and contributes to increased vocal stability during the public speaking task, but these effects diminish over time. Delivering the vibrotactile feedback over random points in time appears to worsen stress-related measures overall. | 711,032 |
Title: Bodyweight Exercise based Exergame to Induce High Intensity Interval Training
Abstract: Exergames have been proposed as an attractive way of making exercise fun; however, most of them do not reach the recommended intensity. Although HCI research has explored how exergames can be designed to follow High Intensity Interval Training (HIIT), an effective form of exercise consisting of intermittent vigorous activity and short rest or low-intensity exercise, there are limited studies on designing bodyweight exercise (BWE) based exergames to follow HIIT. In this paper, we propose a BWE-based exergame to encourage users to maintain high-intensity exercise. Our initial study (n=10) showed that the exergame had a significant effect on enjoyment, while the ratio of incorrect postures (e.g., squats) also increased due to participants’ concentration on the exergame, which implies future design implications for BWE-based exergames. | 711,033 |
Title: Towards using Involuntary Body Gestures for Measuring the User Engagement in VR Gaming
Abstract: ABSTRACT Understanding the degree of user engagement in a VR game is vital to provide a better gaming experience. While prior work has suggested self-reports, and biological signal-based methods, measuring game engagement remains a challenge due to its complex nature. In this work, we provide a preliminary exploration of using involuntary body gestures to measure user engagement in VR gaming. Based on data collected from 27 participants performing multiple VR games, we demonstrate a relationship between foot gesture-based models for measuring arousal and physiological responses while engaging in VR games. Our findings show the possibility of using involuntary body gestures to measure engagement. | 711,034 |
Title: Keep in Touch: Combining Touch Interaction with Thumb-to-Finger µGestures for People with Visual Impairment
Abstract: We present a set of 8 thumb-to-finger microgestures (TTF µGestures) that can be used as an additional modality to enrich touch interaction in eyes-free situations. TTF µGestures possess characteristics especially suited for people with visual impairment (PVI). They have never been studied specifically for PVI to improve accessibility of touchscreen devices. We studied a set of 33 common TTF µGestures to determine which are feasible and usable without seeing while the index is touching a surface. We found that the constrained position of the hand and the absence of vision prevent participants from being able to efficiently target a specific phalanx. Thus, we propose a set of 8 TTF µGestures (6 taps, 2 swipes) balancing resiliency (i.e., low error-rate) and expressivity (i.e., number of possible inputs): as a dimension combined with the touch modality, it would realistically multiply the touch command space by eight. Within our set of 8 TTF µGestures, we chose a subset of 4 µGestures (2 taps and 2 swipes) and implemented an exploration scenario of an audio-tactile map with a raised-line overlay on a touchscreen and tested it with 7 PVI. Their feedback was positive on the potential benefits of TTF µGestures in enhancing the touch modality and supporting PVI interaction with touchscreen devices. | 711,035 |
Title: MoonBuddy: A Voice-based Augmented Reality User Interface That Supports Astronauts During Extravehicular Activities
Abstract: ABSTRACT As NASA pursues Artemis missions to the moon and beyond, it is essential to equip astronauts with the appropriate human-autonomy enabling technology necessary for the elevated demands of lunar surface exploration and extreme terrestrial access. We present MoonBuddy, an application built for the Microsoft HoloLens 2 that utilizes Augmented Reality (AR) and voice-based interaction to assist astronauts in communication, navigation, and documentation on future lunar extravehicular activities (EVAs), with the goal of reducing cognitive load and increasing task completion. User testing results for MoonBuddy under simulated lunar conditions have been positive overall, with participants indicating that the application was easy to use and helpful in completing the required tasks. | 711,036 |
Title: Puppeteer: Manipulating Human Avatar Actions with Intuitive Hand Gestures and Upper-Body Postures
Abstract: We present Puppeteer, an input prototype system that allows players to directly control their avatars through intuitive hand gestures and upper-body postures. We selected 17 avatar actions discovered in a pilot study and conducted a gesture elicitation study with 12 participants to design the hand gestures and upper-body postures that best represent each action. We then implemented a prototype system using the MediaPipe framework to detect keypoints and a self-trained model to recognize the 17 hand gestures and 17 upper-body postures. Finally, three applications demonstrate the interactions enabled by Puppeteer. | 711,037 |
Title: RCSketch: Sketch, Build, and Control Your Dream Vehicles
Abstract: ABSTRACT We present RCSketch, a system that lets children sketch their dream vehicles in 3D, build moving structures of those vehicles, and control them from multiple viewpoints. As a proof of concept, we implemented our system and designed five vehicles that could perform a wide variety of realistic movements. | 711,038 |
Title: Exploring the Detection of Spontaneous Recollections during Video-viewing In-the-Wild using Facial Behavior Analysis
Abstract: Intelligent systems might benefit from automatically detecting when a stimulus has triggered a user’s recollection of personal memories, e.g., to identify that a piece of media content holds personal significance for them. While computational research has demonstrated the potential to identify related states based on facial behavior (e.g., mind-wandering), the automatic detection of spontaneous recollections specifically has not been investigated thus far. Motivated by this, we present machine learning experiments exploring the feasibility of detecting whether a video clip has triggered personal memories in a viewer based on the analysis of their Head Rotation, Head Position, Eye Gaze, and Facial Expressions. Concretely, we introduce an approach for automatic detection and evaluate its potential for video-dependent and video-independent predictions using in-the-wild webcam recordings. Overall, our findings demonstrate the capacity for above-chance detection in both settings, with substantially better performance for the video-independent variant. Beyond this, we investigate the role of person-specific recollection biases in the predictions of our video-independent models and the importance of specific modalities of facial behavior. Finally, we discuss the implications of our findings for detecting recollections and user modeling in adaptive systems. | 711,039 |
Title: What’s Cooking? Olfactory Sensing Using Off-the-Shelf Components
Abstract: ABSTRACT We present Project Sniff, a hardware and software exploration into olfactory sensing with an application in digital communication and social presence. Our initial results indicate that a simple hardware design using off-the-shelf sensors and the application of supervised learning to the sensor data allows us to detect several common household scents reliably. As part of this exploration, we developed a scent-sensing IoT prototype and placed it in the kitchen area to sense “what’s cooking?”, and share the olfactory information via a Slack bot. We conclude by outlining our plans for future steps and potential applications of this research. | 711,040 |
Title: Using a Dual-Camera Smartphone to Recognize Imperceptible 2D Barcodes Embedded in Videos
Abstract: ABSTRACT Invisible screen-camera communication is promising in that it does not interfere with the video viewing experience. In the imperceptible color vibration method, which displays two colors of the same luminance alternately at high speed for each pixel, embedded information is decoded by taking the difference between distant frames on the time axis. Therefore, the interframe differences of the original video contents affect the decoding performance. In this study, we propose a decoding method which utilizes simultaneously captured images using a dual-camera smartphone with different exposure times. This allows taking the color difference between the frames that are close to each other on the time axis. The feasibility of this approach is demonstrated through several application examples. | 711,041 |
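The decoding principle described above (recovering the embedded signal from the color difference between two near-simultaneous captures) can be illustrated with a toy sketch. The grid layout, sign-based thresholding rule, and file-based input below are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np
import cv2  # OpenCV, used only for image loading

def decode_color_vibration(frame_a_path: str, frame_b_path: str, grid=(8, 8)):
    """Toy decoder: sign of the mean color difference in each grid cell -> one bit."""
    a = cv2.imread(frame_a_path).astype(np.float32)
    b = cv2.imread(frame_b_path).astype(np.float32)
    diff = (a - b).mean(axis=2)                # average signed difference over B, G, R
    h, w = diff.shape
    gh, gw = h // grid[0], w // grid[1]
    bits = []
    for r in range(grid[0]):
        for c in range(grid[1]):
            cell = diff[r * gh:(r + 1) * gh, c * gw:(c + 1) * gw]
            bits.append(1 if cell.mean() > 0 else 0)  # positive difference -> bit 1
    return bits
```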
Title: A bonding technique for electric circuit prototyping using conductive transfer foil and soldering iron
Abstract: ABSTRACT Several electric circuit prototyping techniques have been proposed. While most focus on creating the conductive traces, we focus on the bonding technique needed for this kind of circuit. Our technique is an extension of existing work in that we use the traces themselves as the bonding material for the components. The bonding process is not soldering but yields joints of adequate connectivity. A hot soldering iron is used to activate the traces and bond the component to the circuit. Simple circuits are created on MDF, paper, and acrylic board to show the feasibility of the technique. It is also confirmed that the resistance of the contact points is sufficiently low. | 711,042 |
Title: Interactive 3D Zoetrope with a Strobing Flashlight
Abstract: ABSTRACT We propose a 3D printed zoetrope mounted on a bike wheel where users can watch the 3D figures come to life in front of their eyes. Each frame of our animation is a 9 by 16 cm 3D fabricated diorama containing a small scene. A strobed flashlight synced with the spinning of the wheel shows the viewer each frame at just the right time, creating the illusion of 3D motion. The viewer can hold and shine the flashlight into the scene, illuminating each frame from their own point of view. Our zoetrope is modular and can have different 16 frame animations substituted in and out for fast prototyping of many cinematography, fabrication, and strobe lighting techniques. Our interactive truly 3D movie experience will push the zoetrope format to tell more complex stories and better engage viewers. | 711,043 |
Title: InfraredTags Demo: Invisible AR Markers and Barcodes Using Infrared Imaging and 3D Printing
Abstract: ABSTRACT We showcase InfraredTags, which are 2D markers and barcodes imperceptible to the naked eye that can be 3D printed as part of objects, and detected rapidly by low-cost near-infrared cameras. We achieve this by printing objects from an infrared-transmitting filament, which infrared cameras can see through, and by having air gaps inside for the tag’s bits, which appear at a different intensity in the infrared image. We built a user interface that facilitates the integration of common tags (QR codes, ArUco markers) with the object geometry to make them 3D printable as InfraredTags. We also developed a low-cost infrared imaging module that augments existing mobile devices and decodes tags using our image processing pipeline. We demonstrate how our method enables various applications, such as object tracking and embedding metadata for augmented reality and tangible interactions. | 711,044 |
Title: Explorations of Wrist Haptic Feedback for AR/VR Interactions with Tasbi
Abstract: ABSTRACT Most widespread haptic feedback devices for augmented and virtual reality (AR/VR) fall into one of two categories: simple hand-held controllers with a single vibration actuator, or complex glove systems with several embedded actuators. In this work, we explore haptic feedback on the wrist for interacting with virtual objects. We use Tasbi, a compact bracelet device capable of rendering complex multisensory squeeze and vibrotactile feedback. Leveraging Tasbi’s haptic rendering, and using standard visual and audio rendering of a head mounted display, we present several interactions that tightly integrate sensory substitutive haptics with visual and audio cues. Interactions include push/pull buttons, rotary knobs, textures, rigid body weight and inertia, and several custom bimanual manipulations such as shooting an arrow from a bow. These demonstrations suggest that wrist-based haptic feedback substantially improves virtual hand-based interactions in AR/VR compared to no haptic feedback. | 711,045 |
Title: Calligraphy Z: A Fabricatable Pen Plotter for Handwritten Strokes with Z-Axis Pen Pressure
Abstract: Today, desktop publishing software and printing presses make it possible to produce a wide variety of expressions. On the other hand, it is difficult for a printer to perfectly replicate the ink grazing and subtle pressure fluctuations that occur when characters are written with a writing implement. In this study, we reproduce such incidental brushstrokes by using a writing implement to output a text layout created in software. To replicate slight variations in strokes, we developed Calligraphy Z, a system that consists of a writing device and an application. The writing device controls the vertical position of the writing tool in addition to the writing position, thus producing handwritten-like character output; the application generates G-code for operating the device from user input. With the application, users can select their favorite fonts, input words, and adjust the layout to operate the writing device, using several types of extended font data with writing pressure data prepared in advance. After developing our system, we compared the strokes of several writing implements to select the most suitable one for Calligraphy Z. We also conducted evaluations of the identification of characters output by Calligraphy Z and those output by a printing machine. We found that participants in the evaluation experiment perceived the features of handwritten characters, such as ink blotting and fine blurring of strokes, in the characters produced by our system. | 711,046 |
Title: Evaluating Calibration-free Webcam-based Eye Tracking for Gaze-based User Modeling
Abstract: Eye tracking has been a research tool for decades, providing insights into interactions, usability, and, more recently, gaze-enabled interfaces. Recent work has utilized consumer-grade and webcam-based eye tracking, but is limited by the need to repeatedly calibrate the tracker, which becomes cumbersome for use outside the lab. To address this limitation, we developed an unsupervised algorithm that maps gaze vectors from a webcam to fixation features used for user modeling, bypassing the need for screen-based gaze coordinates, which require a calibration process. We evaluated our approach using three datasets (N=377) encompassing different UIs (computerized reading, an Intelligent Tutoring System), environments (laboratory or the classroom), and a traditional gaze tracker used for comparison. Our research shows that webcam-based gaze features correlate moderately with eye-tracker-based features and can model user engagement and comprehension as accurately as the latter. We discuss applications for research and gaze-enabled user interfaces for long-term use in the wild. | 711,047 |
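Mapping raw gaze estimates to fixation features is commonly done with a dispersion-threshold grouping (I-DT style); the sketch below illustrates that generic idea and is not the authors' unsupervised algorithm. The dispersion and duration thresholds are assumed values.

```python
import numpy as np

def idt_fixations(gaze_xy: np.ndarray, timestamps: np.ndarray,
                  max_dispersion: float = 0.03, min_duration: float = 0.1):
    """Group consecutive gaze samples into fixations (generic I-DT sketch).

    gaze_xy: (N, 2) gaze vectors (e.g., normalized yaw/pitch); timestamps in seconds.
    Returns a list of (start_time, duration, centroid_x, centroid_y).
    """
    fixations, start = [], 0
    for end in range(len(gaze_xy)):
        window = gaze_xy[start:end + 1]
        dispersion = (window.max(0) - window.min(0)).sum()   # x-range + y-range
        if dispersion > max_dispersion:
            duration = timestamps[end - 1] - timestamps[start]
            if duration >= min_duration:                     # keep long-enough groups
                cx, cy = gaze_xy[start:end].mean(0)
                fixations.append((timestamps[start], duration, cx, cy))
            start = end                                      # start a new candidate group
    return fixations
```

From the returned fixations, window-level user-modeling features (e.g., fixation count, mean duration) can be aggregated without ever converting gaze to screen coordinates.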
Title: Knitted Force Sensors
Abstract: ABSTRACT In this demo, we present two types of knitted resistive force sensors for both pressure and strain sensing. They can be manufactured ready-made on a two-bed weft knitting machine, without requiring further post-processing steps. Due to their softness, elasticity, and breathability our sensors provide an appealing haptic experience. We show their working principle, discuss their advantages and limitations, and elaborate on different areas of application. They are presented as standalone demonstrators, accompanied by exemplary applications to provide insights into their haptic qualities and sensing capabilities. | 711,048 |
Title: Designing a Hairy Haptic Display using 3D Printed Hairs and Perforated Plates
Abstract: ABSTRACT Haptic displays that can convey various material sensations and physical properties on conventional 2D displays or in virtual reality (VR) environments, have been widely explored in the field of human-computer interaction (HCI). We introduce a fabrication technique of haptic apparatus using 3D printed hairs, which can stimulate the users’ sensory perception with hair-like bristles mimicking furry animals. Design parameters that determine 3D printed hair’s properties such as length, density, and direction, can affect the stiffness and roughness of the contact area between the hair tip and the user’s sensory receptors on their skin, thus changing stimulation patterns. To further explore the expressivity of this apparatus, we present a haptic display built with controlling mechanisms. The device is constructed by threading many 3D printed hairs through a perforated plate, manipulating the length and direction of hairs via the connected inner actuator. We present the design specifications including printing parameters, assembly, and electronics through a demonstration of prototypes, and future works. | 711,049 |
Title: Silent subwoofer system using myoelectric stimulation to present the acoustic deep bass experiences
Abstract: This study demonstrates a portable, low-noise system that utilizes electrical muscle stimulation (EMS) to present a body-sensory acoustic experience similar to that experienced during live concerts. Twenty-four participants wore head-mounted displays (HMDs), headphones, and the proposed system and experienced a live concert in a virtual reality (VR) space to evaluate the system. We found that the system was not inferior, in precision of rhythm and harmony, to a system with loudspeakers and subwoofers, for which ambient noise is a concern. These results could be explained by the user perceiving the EMS experience as a single signal when the EMS stimulation is presented in conjunction with visual and acoustic stimuli (e.g., the kicking of a bass drum, the bass sound generated from the kicking, and the acoustic sensation caused by the bass sound). The proposed method offers a novel EMS-based body-sensory acoustic experience, and the results of this study may lead to an improved experience not only for live concerts in VR space but also for everyday music listening. | 711,050 |
Title: SpinOcchietto: A Wearable Skin-Slip Haptic Device for Rendering Width and Motion of Objects Gripped Between the Fingertips
Abstract: ABSTRACT Various haptic feedback techniques have been explored to enable users to interact with their virtual surroundings using their hands. However, investigation on interactions with virtual objects slipping against the skin using skin-slip haptic feedback is still at its early stages. Prior skin-slip virtual reality (VR) haptic display implementations involved bulky actuation mechanisms and were not suitable for multi-finger and bimanual interactions. As a solution to this limitation, we present SpinOcchietto, a wearable skin-slip haptic feedback device using spinning discs for rendering the width and movement of virtual objects gripped between the fingertips. SpinOcchietto was developed to miniaturize and simplify SpinOcchio[1], a 6-DoF handheld skin-slip haptic display. With its smaller, lighter, and wearable form factor, SpinOcchietto enables users with a wide range of hand sizes to interact with virtual objects with their thumb and index fingers while freeing the rest of the hand. Users can perceive the speed of virtual objects slipping against the fingertips and can use varying grip strengths to grab and release the objects. Three demo applications were developed to showcase the different types of virtual object interactions enabled by the prototype. | 711,051 |
Title: TouchVR: A Modality for Instant VR Experience
Abstract: In the near future, we envision instant and ubiquitous access to VR worlds. However, existing highly portable VR devices usually lack rich and convenient input modalities. In response, we introduce TouchVR, a system that enables BoD interaction in instant VR supported by mobile HMDs. We present the overall architecture and prototype of the TouchVR system on the Android platform, which can be easily integrated into Unity-based mobile VR applications. Furthermore, we implement a sample application (a 360° video player) to demonstrate the usage of our system. We expect TouchVR to be a step toward enriching interaction methods in instant VR. | 711,052 |
Title: Extail: a Kinetic Inconspicuous Wearable Hair Extension Device
Abstract: Wearable devices that present the wearer’s information have typically been designed in ways that stand out when worn, making it difficult to conceal that they are being worn. Therefore, we have been working on developing and evaluating dynamic hair expressions to present the wearer’s information through an inconspicuous wearable device. In prior work on hair interaction, the hairstyles and expressions to which the techniques can be applied are limited. In this paper, we focus on the output mechanism and present Extail, a hair-extension-type device with a control mechanism that moves bundled hair like a tail. The results of a questionnaire survey on the correspondence between the movement of hair bundles and emotional expression were generally consistent with the results of the evaluation of tail devices in related studies. | 711,053 |
Title: Comfortability Recognition from Visual Non-verbal Cues
Abstract: As social agents, we experience situations in which we enjoy being involved and others from which we desire to withdraw. Being aware of others’ “comfort towards the interaction” helps us enhance our communication, and thus becomes a fundamental skill for any interactive agent (either a robot or an Embodied Conversational Agent (ECA)). For this reason, the current paper considers Comfortability, the internal state that focuses on a person’s desire to maintain or withdraw from an interaction, exploring whether it is possible to recognize it from human non-verbal behaviour. To this aim, videos collected during real Human-Robot Interactions (HRI) were segmented, manually annotated and used to train four standard classifiers. Concretely, different combinations of facial and upper-body movements (i.e., Action Units, Head Pose, Upper-body Pose and Gaze) were fed to the following feature-based Machine Learning (ML) algorithms: Naive Bayes, Neural Networks, Random Forest and Support Vector Machines. The results indicate that the best model, obtaining 75% recognition accuracy, is trained with all the aforementioned cues together and based on Random Forest. These findings indicate, for the first time, that Comfortability can be automatically recognized, paving the way to its future integration into interactive agents. | 711,054 |
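The classification setup described (hand-crafted nonverbal features fed to a Random Forest, among other classifiers) maps onto a standard scikit-learn pipeline. The sketch below is a generic illustration; the feature files and binary labels are placeholders rather than the study's actual data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical design matrix: one row per interaction segment, columns are
# statistics of Action Units, head pose, upper-body pose and gaze features.
X = np.load("nonverbal_features.npy")       # placeholder path
y = np.load("comfortability_labels.npy")    # placeholder path (binary labels)

clf = RandomForestClassifier(n_estimators=300, class_weight="balanced",
                             random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"5-fold accuracy: {scores.mean():.2f} ± {scores.std():.2f}")
```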
Title: Music Scope Pad: Video Selecting Interface by Natural Movement in VR Space
Abstract: ABSTRACT This paper describes a novel video selecting interface that enables us to select videos without having to click a mouse or touch a screen. Existing video players enable us to see and hear only one video at a time, and thus we have to play pieces individually to select the one we want to hear from numerous new videos such as music videos, which involves a large number of mouse and screen-touch operations. The main advantage of our video selecting interface is that it detects natural movements, such as head or hand movements when users are listening to sounds and they can focus on a particular sound source that they want to hear. By moving their head left or right, users can hear the source from a frontal position as the tablet detects changes in the direction they are facing. By putting their hand behind their ear, users can focus on a particular sound source. | 711,055 |
Title: Demonstrating p5.fab: Direct Control of Digital Fabrication Machines from a Creative Coding Environment
Abstract: ABSTRACT Machine settings and tuning are critical for digital fabrication outcomes. However, exploring these parameters is non-trivial. We seek to enable exploration of the full design space of digital fabrication. To do so, we built p5.fab, a system for controlling digital fabrication machines from the creative coding environment p5.js and informed by a qualitative study of 3D printer maintenance practices. p5.fab prioritizes material exploration, fine-tuned control, and iteration in fabrication workflows. We demonstrate p5.fab with examples of 3D prints that cannot be made with traditional 3D printing software, including delicate bridging structures and prints on top of existing objects. | 711,056 |
Title: Personalized Productive Engagement Recognition in Robot-Mediated Collaborative Learning
Abstract: In this paper, we propose and compare personalized models for Productive Engagement (PE) recognition. PE is defined as the level of engagement that maximizes learning. Previously, in the context of robot-mediated collaborative learning, a framework of productive engagement was developed by utilizing multimodal data from 32 dyads, and learning profiles, namely Expressive Explorers (EE), Calm Tinkerers (CT), and Silent Wanderers (SW), were identified that categorize learners according to their learning gain. Within the same framework, a PE score was constructed in a non-supervised manner for real-time evaluation. Here, we use these profiles and the PE score within an AutoML deep learning framework to personalize PE models. We investigate two approaches for this purpose: (1) Single-task Deep Neural Architecture Search (ST-NAS), and (2) Multitask NAS (MT-NAS). In the former approach, personalized models for each learner profile are learned from multimodal features and compared to non-personalized models. In the MT-NAS approach, we investigate whether jointly classifying the learners’ profiles with the engagement score through multi-task learning would serve as an implicit personalization of PE. Moreover, we compare the predictive power of two types of features: incremental and non-incremental features. Non-incremental features correspond to features computed from the participant’s behaviours in fixed time windows. Incremental features are computed by accounting for the behaviour from the beginning of the learning activity until the time window where productive engagement is observed. Our experimental results show that (1) personalized models improve the recognition performance with respect to non-personalized models when training models for the gainer vs. non-gainer groups, (2) multitask NAS (implicit personalization) also outperforms non-personalized models, (3) the speech modality contributes strongly to prediction, and (4) non-incremental features outperform the incremental ones overall. | 711,057 |
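The distinction between non-incremental (fixed-window) and incremental (activity-start-to-window) features can be made concrete with a small pandas sketch; the column names, window length, and aggregation choice below are illustrative assumptions, not the paper's feature set.

```python
import pandas as pd

def windowed_features(log: pd.DataFrame, window_s: float = 10.0) -> pd.DataFrame:
    """Compare non-incremental vs. incremental aggregation of one behaviour signal.

    `log` is assumed to have a column 't' (seconds since the activity started) and
    a column 'speech_activity' (e.g., 1 if the learner was speaking in that sample).
    """
    log = log.sort_values("t").copy()
    log["window"] = (log["t"] // window_s).astype(int)

    # Non-incremental: statistic computed within each fixed time window only.
    non_incremental = log.groupby("window")["speech_activity"].mean()

    # Incremental: statistic over everything from t=0 up to the end of each window.
    cum = log.groupby("window")["speech_activity"].agg(["sum", "count"]).cumsum()
    incremental = cum["sum"] / cum["count"]

    return pd.DataFrame({"non_incremental": non_incremental,
                         "incremental": incremental})
```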
Title: Demonstrating Finger-Based Dexterous Phone Gestures
Abstract: This hands-on demonstration enables participants to experience single-handed “dexterous gestures”, a novel approach for the physical manipulation of a phone using the fine motor skills of fingers. A recognizer is developed for variations of “full” and “half” gestures that spin (yaw axis), rotate (roll axis), and flip (pitch axis), all detected using the built-in phone IMU sensor. A functional prototype demonstrates how recognized dexterous gestures can be used to interact with a variety of current smartphone applications. | 711,058 |
Title: Touchibo: Multimodal Texture-Changing Robotic Platform for Shared Human Experiences
Abstract: Touchibo is a modular robotic platform for enriching interpersonal communication in human-robot group activities, suitable for children with mixed visual abilities. Touchibo incorporates several modalities, including dynamic textures, scent, audio, and light. Two prototypes are demonstrated for supporting storytelling activities and mediating group conversations between children with and without visual impairment. Our goal is to provide an inclusive platform for children to interact with each other, perceive their emotions, and become more aware of how they impact others. | 711,059 |
Title: Shadowed Speech: an Audio Feedback System which Slows Down Speech Rate
Abstract: ABSTRACT In oral communication, it is important to speak at a speed appropriate for the situation. However, we need a lot of training in order to control our speech rate as intended. This paper proposes a speech rate control system which enables the user to speak at a pace closer to the target rate using Delayed Auditory Feedback (DAF). We implement a prototype and confirm that the proposed system can slow down the speech rate when the user speaks too fast without giving any instructions to the speaker on how to respond to the audio feedback. | 711,060 |
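Delayed Auditory Feedback itself reduces to routing the microphone signal back to the ears after a short, fixed delay. A minimal duplex-audio sketch is shown below; the delay length, block size, and use of the sounddevice library are assumptions for illustration, not the paper's implementation.

```python
import numpy as np
import sounddevice as sd
from collections import deque

SAMPLE_RATE = 44_100
BLOCK = 1_024                                    # assumed audio block size (samples)
DELAY_BLOCKS = 9                                 # ~0.21 s of delay at these settings

# Queue pre-filled with silence; each callback pops the oldest block for playback.
delay_line = deque([np.zeros((BLOCK, 1), dtype="float32")] * DELAY_BLOCKS)

def callback(indata, outdata, frames, time, status):
    delay_line.append(indata.copy())             # store the newest microphone block
    outdata[:] = delay_line.popleft()            # play the block captured ~0.2 s ago

with sd.Stream(samplerate=SAMPLE_RATE, blocksize=BLOCK, channels=1,
               callback=callback):
    sd.sleep(10_000)                             # run the feedback loop for 10 seconds
```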
Title: Anywhere Hoop: Virtual Free Throw Training System
Abstract: To successfully complete a high percentage of free throws in basketball, the shooter must achieve a stable ball trajectory. A player must practice shooting repeatedly to throw the ball with a stable trajectory. However, traditional practice methods require a real basketball hoop, which makes it difficult for some players to prepare a practice environment. We propose a training method for free throws using a virtual basketball hoop. In this paper, we present an implementation of the proposed method and experimental results showing its effectiveness. | 711,061 |
Title: DATALEV: Acoustophoretic Data Physicalisation
Abstract: Here, we demonstrate DataLev, a data physicalisation platform with a physical assembly pipeline that allows us to computationally assemble 3D physical charts using acoustically levitated contents. DataLev consists of several enhancement props that allow us to incorporate high-resolution projection, different 3D printed artifacts and multi-modal interaction. DataLev supports reconfigurable and dynamic physicalisations that we animate and illustrate for different chart types. Our work opens up new opportunities for data storytelling using acoustic levitation. | 711,062 |
Title: A Triangular Actuating Device Stand that Dynamically Adjusts Mobile Screen’s Position
Abstract: This demo presents a triangular actuating device stand that can automatically adjust the height and tilt angle of a mounted mobile device (e.g., smartphone) to adapt to the user’s varying interaction needs (e.g., touch, browsing, and viewing). We employ a unique mechanism to deform the stand’s triangular shape with two extendable reel actuators, which enables us to reposition the mobile screen mounted on the hypotenuse side. Each actuator is managed by the mobile device and controls the height and base of the stand’s triangular shape, respectively. To demonstrate the potential of our new actuating device stand, we present two types of interaction scenarios: manual device repositioning based on the user’s postures or gestures captured by the device’s front camera, and automatic device repositioning that adapts to the on-screen content the user will interact with (i.e., touch-based menus, web browsers, illustrator, and video viewers). | 711,063 |
Title: X-Norm: Exchanging Normalization Parameters for Bimodal Fusion
Abstract: Multimodal learning aims to process and relate information from different modalities to enhance the model’s capacity for perception. Current multimodal fusion mechanisms either do not align the feature spaces closely or are expensive for training and inference. In this paper, we present X-Norm, a novel, simple and efficient method for bimodal fusion that generates and exchanges limited but meaningful normalization parameters between the modalities, implicitly aligning the feature spaces. We conduct extensive experiments on two tasks, emotion and action recognition, with different architectures, including Transformer-based and CNN-based models, using IEMOCAP and MSP-IMPROV for emotion recognition and EPIC-KITCHENS for action recognition. The experimental results show that X-Norm achieves comparable or superior performance compared to existing methods, including early and late fusion, Gradient-Blending (G-Blend) [44], Tensor Fusion Network [48], and Multimodal Transformer [40], with a relatively low training cost. | 711,064 |
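The core idea named in this abstract, conditioning one modality's normalization parameters on the other modality, can be sketched in a few lines of PyTorch. The layer choices and dimensions below are illustrative and do not reproduce the published X-Norm architecture.

```python
import torch
import torch.nn as nn

class CrossModalNorm(nn.Module):
    """Normalize one modality's features, with scale/shift predicted from the other.

    Illustrative sketch of exchanging normalization parameters between modalities;
    dimensions and layers are assumptions, not the published X-Norm design.
    """
    def __init__(self, dim_self: int, dim_other: int):
        super().__init__()
        self.norm = nn.LayerNorm(dim_self, elementwise_affine=False)
        self.to_gamma = nn.Linear(dim_other, dim_self)
        self.to_beta = nn.Linear(dim_other, dim_self)

    def forward(self, x_self: torch.Tensor, x_other: torch.Tensor) -> torch.Tensor:
        gamma = self.to_gamma(x_other)     # scale conditioned on the other modality
        beta = self.to_beta(x_other)       # shift conditioned on the other modality
        return gamma * self.norm(x_self) + beta

# Toy usage: audio features modulated by video features (batch of 4)
audio, video = torch.randn(4, 128), torch.randn(4, 256)
fused_audio = CrossModalNorm(128, 256)(audio, video)
```

A symmetric module in the opposite direction would let the two streams exchange normalization parameters with each other rather than one-way.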
Title: Point Cloud Capture and Editing for AR Environmental Design
Abstract: ABSTRACT We present a tablet-based system for AR environmental design using point clouds. It integrates point cloud capture and editing in a single AR workflow to help users quickly prototype design ideas in their spatial context. We hypothesize that point clouds are well suited for prototyping, as they can be captured rapidly and then edited immediately on the capturing device in situ. Our system supports a variety of point cloud editing operations in AR, including selection, transformation, hole filling, drawing, and animation. This enables a wide range of design applications for objects, interior environments, buildings, and landscapes. | 711,065 |
Title: Thermoformable Shell for Repeatable Thermoforming
Abstract: ABSTRACT We propose a thermoformable shell called TF-Shell that allows repeatable thermoforming. Due to the low thermal conductivity of typical printing materials like polylactic acid (PLA), thermoforming 3D printed objects is largely limited. Through embedding TF-Shell, users can thermoform target parts in diverse ways. Moreover, the deformed structures can be restored by reheating. In this demo, we introduce the TF-Shell and demonstrate four thermoforming behaviors with the TF-Shell embedded figure. With our approach, we envision bringing the value of hands-on craft to digital fabrication. | 711,066 |
Title: ConfusionLens: Dynamic and Interactive Visualization for Performance Analysis of Multiclass Image Classifiers
Abstract: Building higher-quality image classification models requires better performance analysis (PA) methods to help understand their behaviors. We propose ConfusionLens, a dynamic and interactive visualization interface that augments a conventional confusion matrix with focus+context visualization. This interface makes it possible to adaptively provide relevant information for different kinds of PA tasks. Specifically, it allows users to seamlessly switch table layouts among three views (overall view, class-level view, and between-class view) while observing all of the instance images in a single screen. This paper presents a ConfusionLens prototype that supports hundreds of instances and its several extensions to further support practical PA tasks, such as activation map visualization and instance sorting/filtering. | 711,067 |
Title: Demonstration of Lenticular Objects: 3D Printed Objects with Lenticular Lens Surfaces That Can Change their Appearance Depending on the Viewpoint
Abstract: We present Lenticular Objects, which are 3D objects that appear different from different viewpoints. We accomplish this by 3D printing lenticular lenses across the curved surface of objects and computing underlying surface color patterns, which makes it possible to show the user a different appearance at each viewpoint. In addition, we present the Lenticular Objects 3D editor, which takes as input the 3D model and multiple surface textures, i.e. images, that are visible at multiple viewpoints. Our tool calculates the distribution of lens placements on the surface of the object and the underlying color pattern. On export, the user can use ray tracing to live-preview the resulting appearance from each angle. The 3D model, color pattern, and lenses are then 3D printed in one pass on a multi-material 3D printer to create the final 3D object. We demonstrate our system in practice with a range of use cases that benefit from appearing different under different viewpoints. | 711,068 |
Title: Multimodal classification of interruptions in humans’ interaction
Abstract: During an interaction, interruptions occur frequently. Interruptions may arise to fulfill different goals, such as changing the topic of conversation abruptly, asking for clarification, or completing the current speaker’s turn. Interruptions may be cooperative or competitive depending on the interrupter’s intention. Our main goal is to endow a Socially Interactive Agent with the capacity to handle user interruptions in dyadic interaction. This requires the agent to detect an interruption and recognize its type (cooperative/competitive), and then to plan its behaviours to respond appropriately. As a first step towards this goal, we developed a multimodal classification model using acoustic features, facial expression, head movement, and gaze direction from both the interrupter and the interruptee. The classification model learns from the sequential information to automatically identify interruption types. We also present studies we conducted to measure the shortest delay needed (0.6s) for our classification model to identify interruption types with high classification accuracy (81%). On average, most interruption overlaps last longer than 0.6s, so a Socially Interactive Agent has time to detect and recognize an interruption type and can respond in a timely manner to its human interlocutor’s interruption. | 711,069 |
Title: GazeScale: Towards General Gaze-Based Interaction in Public Places
Abstract: Gaze-based interaction has until now been almost an exclusive prerogative of the assistive field, as it is considered not sufficiently performant compared to traditional communication methods based on keyboards, pointing devices, and touch screens. However, situations such as the one we are experiencing now due to the COVID-19 pandemic highlight the importance of touchless communication to minimize the spread of disease. In this paper, as an example of the potential pervasive use of eye tracking technology in public contexts, we propose and study five interfaces for a gaze-controlled scale, to be used in supermarkets to weigh fruits and vegetables. Given the great heterogeneity of potential users, the interaction must be as simple and intuitive as possible and occur without the need for calibration. The experiments carried out confirm that this goal is achievable and show the strengths and weaknesses of the five interfaces. | 711,070 |
Title: Demonstrating ex-CHOCHIN: Shape/Texture-changing cylindrical interface with deformable origami tessellation
Abstract: We demonstrate ex-CHOCHIN which is a cylindrical shape/texture-changing display inspired by the chochin, a traditional Japanese foldable lantern. Ex-CHOCHIN achieves complex control over origami such as local deformation and control in the intermediate process of folding by attaching multiple mechanisms to the origami tessellation. It thereby results in flexible deformations that can be adapted to a wide range of shapes, a one-continuous surface without gaps, and even changes in texture. It creates several deformed shapes from a crease pattern, allowing flexible deformation to function as a display. We have also produced an application using ex-CHOCHIN. During this hands-on demo at UIST, attendees will manipulate the amount of change in shape and texture and interact with ex-CHOCHIN by seeing and touching. | 711,071 |
Title: KineCAM: An Instant Camera for Animated Photographs
Abstract: The kinegram is a classic animation technique that involves sliding a striped overlay across an interlaced image to create the effect of frame-by-frame motion. While there are known tools for generating kinegrams from pre-existing videos and images, there exists no system for capturing and fabricating kinegrams in situ. To bridge this gap, we created KineCAM, an open source instant camera that captures and prints animated photographs in the form of kinegrams. KineCAM combines the form factor of instant cameras with the expressiveness of animated photographs to explore and extend creative applications for instant photography. | 711,072 |
Title: Demonstration of Geppetteau: Enabling haptic perceptions of virtual fluids in various vessel profiles using a string-driven haptic interface
Abstract: ABSTRACT Liquids sloshing around in vessels produce unique unmistakable tactile sensations of handling fluids in daily life, laboratory environments, and industrial contexts. Providing nuanced congruent tactile sensations would enrich interactions of handling fluids in virtual reality (VR). To this end, we introduce Geppetteau, a novel string-driven weight-shifting mechanism capable of providing a continuous spectrum of perceivable tactile sensations of handling virtual liquids in VR vessels. Geppetteau’s weight-shifting actuation system can be housed in 3D-printable shells, adapting to varying vessel shapes and sizes. A variety of different fluid behaviors can be felt using our haptic interface. In this work, Geppetteau assumes the shape of conical, spherical, cylindrical, and cuboid flasks, widening the range of augmentable shapes beyond the state-of-the-art of existing mechanical systems. | 711,073 |
Title: Artistic User Expressions in AI-powered Creativity Support Tools
Abstract: Novel AI algorithms introduce a new generation of AI-powered Creativity Support Tools (AI-CSTs). These tools can inspire and surprise users with algorithmic outputs that the users could not expect. However, users can struggle to align their intentions with unexpected algorithmic behaviors. My dissertation research studies how user expressions in art-making AI-CSTs need to be designed. With an interview study with 14 artists and a literature survey on 111 existing CSTs, I first isolate three requirements: 1) allow users to express under-constrained intentions, 2) enable the tool and the user to co-learn the user expressions and the algorithmic behaviors, and 3) allow easy and expressive iteration. Based on these requirements, I introduce two tools, 1) Artinter, which learns how the users express their visual art concepts within their communication process for art commissions, and 2) TaleBrush, which facilitates the under-constrained and iterative expression of user intents through sketching-based story generation. My research provides guidelines for designing user expression interactions for AI-CSTs while demonstrating how they can suggest new designs of AI-CSTs. | 711,074 |
Title: Touchless touch with biosignal transfer for online communication
Abstract: The widespread use of audio-visual online communication highlights the importance of enhancing its ability to transmit emotional color and personal experience. Non-verbal biometric cues and signals have recently been recognized to convey such missing emotional context when added to virtual interactions. Motivated by that, we present a haptic system allowing for biometric signal transfer in a fully touchless and seamless way. We utilize camera-based heart-rate signal readings and ultrasonic mid-air haptic technology to affect the audience with a temporal tactile pattern representing the speaker’s heart rate. We assess the usability and engagement enhancement of such a system in two user studies involving one-way communication, i.e., watching a short emotional video. Our analysis of biometric data and subjective responses hints toward changed values of arousal and valence, as well as physiological responses, when the haptic feedback was applied in a group of participants. Finally, we outline a further research agenda to confirm our observations with different emotions and communication scenarios. | 711,075 |
Title: Empowering domain experts to author valid statistical analyses
Abstract: ABSTRACT Reliable statistical analyses are critical for making scientific discoveries, guiding policy, and informing decisions. To author reliable statistical analyses, integrating knowledge about the domain, data, statistics, and programming is necessary. However, this is an unrealistic expectation for many analysts who may possess domain expertise but lack statistical or programming expertise, including many researchers, policy makers, and other data scientists. How can our statistical software help these analysts? To address this need, we first probed into the cognitive and operational processes involved in authoring statistical analyses and developed the theory of hypothesis formalization. Authoring statistical analyses is a dual-search process that requires grappling with assumptions about conceptual relationships and iterating on statistical model implementations. This led to our key insight: statistical software needs to help analysts translate what they know about their domain and data into statistical modeling programs. To do so, statistical software must provide programming interfaces and interaction models that allow statistical non-experts to express their analysis goals accurately and reflect on their domain knowledge and data. Thus far, we have developed two such systems that embody this insight: Tea and Tisane. Ongoing work on rTisane explores new semantics for more accurately eliciting analysis intent and conceptual knowledge. Additionally, we are planning a summative evaluation of rTisane to assess our hypothesis that this new way of authoring statistical analyses makes domain experts more aware of their implicit assumptions, able to author and understand nuanced statistical models that answer their research questions, and avoid previous analysis mistakes. | 711,076 |
Title: Design and Fabricate Personal Health Sensing Devices
Abstract: With the development of low-cost electronics and rapid prototyping techniques, as well as widely available mobile devices (e.g. mobile phones, smart watches), users are able to develop their own basic interactive functional applications, either on top of existing device platforms or as stand-alone devices. However, the barrier to creating personal health sensing devices, in terms of both function prototyping and fabrication, is still high. In this paper, I present my work on designing and fabricating personal health sensing devices with rapid function prototyping techniques and novel sensing technologies. Through these projects and ongoing future research, I am working towards my vision that everyone can design and fabricate highly customized health sensing devices based on their body form and desired functions. | 711,077 |
Title: Age Regression for Human Voices
Abstract: ABSTRACT The human voice is one of our most important tools for communicating with other people. Besides pure semantic meaning, it also conveys syntactical information such as emphasis as well as personal information such as emotional state, gender, and age. While the physical changes that occur to a person’s voice are well studied, there is surprisingly little work on the perception of those changes. To hold the range of subtleties present in a given utterance constant and thus focus on the changes caused by age, this paper takes adult recordings (three male, three female) and artificially resynthesizes them (using values from measurements of real children’s voices) to create childlike versions of the utterance at different target ages. In particular, we focus on a systematic, factorial combination of pitch shifting and formant shifting. To gain insight into the influence of these factors on the estimated age, we performed a perceptual experiment. Since the resynthesis method we used can produce a wide range of voices, not all of which are physically consistent, we also asked the participants to rate how natural the voices sounded. Furthermore, since former studies suggest that people are not able to distinguish between males and females of young ages, participants were also asked to rate how male or female the voices sounded. Overall, we found that although the synthesis method produced physically plausible signals (compared to average values for real children), the degree of signal manipulation was correlated with perceived unnaturalness. We also found that pitch shift had only a small effect on perceived age, that formant shift had a strong effect on perceived age, and that these effects depended on the original gender of the recording. As expected, people had difficulty guessing the gender of younger-sounding voices. | 711,078 |
Title: Extending Computational Abstractions with Manual Craft for Visual Art Tools
Abstract: ABSTRACT Programming and computation are powerful tools for manipulating visual forms, but working with these abstractions is challenging for artists who are accustomed to direct manipulation and manual control. The goal of my research is to develop visual art tools that extend programmatic capabilities with manual craft. I do so by exposing computational abstractions as transparent materials that artists may directly manipulate and observe in a process that accommodates their non-linear workflows. Specifically, I conduct empirical research to identify challenges professional artists face when using existing software tools—as well as programming their own—to make art. I apply principles derived from these findings in two projects: an interactive programming environment that links code, numerical information, and program state to artists’ ongoing artworks, and an algorithm that automates the rigging of character clothing to bodies to allow for more flexible and customizable 2D character illustrations. Evaluating these interactions, my research promotes authoring tools that support arbitrary execution by adapting to the existing workflows of artists. | 711,079 |
Title: Designing Tools for Autodidactic Learning of Skills
Abstract: ABSTRACT In the last decade, HCI researchers have designed and engineered several systems to lower the entry barrier for beginners and support novices in learning hands-on creative skills, such as motor skills, fabrication, circuit prototyping, and design. In my research, I contribute to this body of work by designing tools that enable learning by oneself, also known as autodidactism. My research lies at the intersection of system design, learning sciences, and technologies that support physical skill-learning. Through my research projects, I propose to re-imagine the design of systems for skill-learning through the lens of learner-centric theories and frameworks. I present three sets of research projects: (1) adaptive learning of motor skills, (2) game-based learning for fabrication skills, and (3) reflection-based learning of maker skills. Through these projects, I demonstrate how we can leverage existing theories, frameworks, and approaches from the learning sciences to design autodidactic systems for skill-learning. | 711,080 |
Title: Environmental physical intelligence: Seamlessly deploying sensors and actuators to our everyday life
Abstract: ABSTRACT Weiser predicted that the third generation of computing would result in individuals interacting with many computing devices that ultimately “weave themselves into the fabric of everyday life until they are indistinguishable from it” [17]. However, how to achieve this seamlessness and what associated interactions should be developed are still under investigation. On the other hand, the material composition, structures and operating logic of the variety of physical objects existing in everyday life determine how we interact with them [13]. The intelligence of the built environment relies not only on the computational abilities encoded within the “brain” (like the controllers of home appliances), but also on the physical intelligence encoded in the “body” (e.g., materials, mechanical structures). In my research, I work on creating computational materials with different encoded material properties (e.g., conductivity, transparency, water-solubility, self-assembly, etc.) that can be seamlessly integrated into our living environment to enrich different modalities of information communication. | 711,081 |
Title: Towards Future Health and Well-being: Bridging Behavior Modeling and Intervention
Abstract: ABSTRACT With the advent of always-available, ubiquitous devices with powerful passive sensing and active interaction capabilities, the opportunities to integrate AI into this ecosystem have matured, providing an unprecedented opportunity to understand and support user well-being. A wide array of research has demonstrated the potential to detect risky behaviors and address health concerns, using human-centered ML to understand longitudinal, passive behavior logs. Unfortunately, it is difficult to translate these findings into deployable applications without better approaches to providing human-understandable explanations of the relationships between behavior features and predictions, and to generalizing to new users and new time periods. My past work has made significant headway in addressing modeling accuracy, interpretability, and robustness. Moreover, my ultimate goal is to build deployable, intelligent interventions for health and well-being that make use of such ML-based behavior models. I believe that just-in-time interventions are particularly well suited to ML support. I plan to test the value of ML for providing users with a better, interpretable, and robust experience in supporting their well-being. | 711,082 |
Title: DAWBalloon: An Intuitive Musical Interface Using Floating Balloons
Abstract: ABSTRACT The development of music synthesis technology has created ways to enjoy listening to music and to actively manipulate it. However, it is difficult for an amateur to operate a DAW (Digital Audio Workstation) to combine sounds or change pitches. Therefore, we focused on ultrasonic levitation and haptic feedback to develop an appropriate interface for a DAW. We propose "DAWBalloon", a system that uses ultrasonic levitation arrays to visualize rhythms using floating balloons as a metaphor for musical elements and to combine sounds by manipulating the balloons. DAWBalloon realizes the intuitive manipulation of sounds in three dimensions, even for people without knowledge of music. | 711,083 |
Title: Influence of Passive Haptic and Auditory Feedback on Presence and Mindfulness in Virtual Reality Environments
Abstract: ABSTRACT Virtual Reality (VR) is increasingly being used to promote mindfulness practice. However, the impact of virtual and multimodal interactive spaces on mindfulness practice and presence is still underexplored. To address this gap, we conducted a mixed-method user study (N=12). We explored the impact of various multimodal feedback, in particular tactile feedback (passive haptic feedback by artificial grass) and auditory feedback (footstep sound) at feet level on both mindfulness and presence. We conducted semi-structured interviews and collected quantitative data through three validated questionnaires (SMS, IPQ, WSPQ). We found a significant effect of passive haptic feedback on presence and mindfulness. Tactile feedback improves the focus on the self, facilitates spatial presence and increases involvement. Auditory feedback did not significantly affect presence or mindfulness. While ambient sound was perceived as beneficial for presence, footstep sound subjectively disrupted both presence and mindfulness. Based on our results, we derive design recommendations for multimodal VR applications supporting mindfulness practice. | 711,084 |
Title: Improving 3D-Editing Workflows via Acoustic Levitation
Abstract: ABSTRACT We outline how to improve common 3D-editing workflows such as modeling or character animation by utilizing an acoustic levitation kit as an interactive 3D display. Our proposed system allows users to directly interact with models in 3D space and perform multi-point gestures to manipulate them. Editing of complex 3D objects can be enabled by combining the 3D display with an LCD, projector or HMD to display additional context. | 711,085 |
Title: Garnish into Thin Air
Abstract: ABSTRACT We propose Garnish into Thin Air, dynamic and three-dimensional food presentation with acoustic levitation. In contrast to traditional plating on the dishes, we make the whole garnishing process an interactive experience to stimulate users’ appetite by leveraging acoustic levitation’s capacity to decorate edibles dynamically in mid-air. To achieve Garnish into Thin Air, our system is built to orchestrate a range of edible materials, such as flavored droplets, edible beads, and rice paper cutouts. We demonstrate Garnish into Thin Air with two examples, including a glass of cocktail named “The Floral Party” and a plate of dessert called “The Winter Twig”. | 711,086 |
Title: Towards Commensal Activities Recognition
Abstract: ABSTRACT Eating meals together is one of the most frequent human social experiences. When eating in the company of others, we talk, joke, laugh, and celebrate. In this paper, we focus on commensal activities, i.e., the actions related to food consumption (e.g., food chewing, in-taking) and the social signals (e.g., smiling, speaking, gazing) that appear during shared meals. We analyze the social interactions in a commensal setting and provide a baseline model for automatically recognizing such commensal activities from video recordings. In more detail, starting from a video dataset containing pairs of individuals having a meal remotely using a video-conferencing tool, we manually annotate commensal activities. We also compute several metrics, such as the number of reciprocal smiles, mutual gazes, etc., to estimate the quality of social interactions in this dataset. Next, we extract the participants’ facial activity information, and we use it to train standard classifiers (Support Vector Machines and Random Forests). Four activities are classified: chewing, speaking, food in-taking, and smiling. We apply our approach to more than 3 hours of videos collected from 18 subjects. We conclude the paper by discussing possible applications of this research in the field of Human-Agent Interaction. | 711,087 |
Title: Shadow Play using Ultrasound Levitated Props
Abstract: ABSTRACT Shadow play is a traditional art form for communicating narratives. However, its transmission to future generations is at risk, and traditional methods limit its expressiveness. We propose a novel system that performs a shadow play by levitating props using an ultrasound speaker array. Our system computes the ideal positions of the levitating props to create a shadow of the desired image. A shadow play is then performed by displaying a sequence of images as shadows. Since the performance is automated, our work makes shadow play accessible to people across generations. Also, our system allows 6-DoF and floating movement of props, which expands the limits of expression. Through this system, we aim to enhance shadow plays both informatically and aesthetically. | 711,088 |
Title: Magic Drops: Food 3D Printing of Colored Liquid Balls by Ultrasound Levitation
Abstract: ABSTRACT We introduce the concept of “Magic Drops”, a process that uses ultrasound levitation to mix multiple liquid drops into a single one in the air, move it to a specified position, and let it fall freely below. For molecular gastronomy applications, mixture drops containing sodium alginate solution are dropped into a container filled with calcium lactate solution. With this, drops encased in a calcium alginate film are formed in the container; these are edible, and the color and flavor of the mixture are also controlled through the process. We will also demonstrate stacking these drops to create larger edible structures. Our novel mixture-drop control technology has other potential applications such as painting techniques and drug development. Thus, we believe that this concept will become a new technology for mixing liquids in the future. | 711,089 |
Title: Top-Levi: Multi-User Interactive System Using Acoustic Levitation
Abstract: ABSTRACTTop-Levi is a public multi-user interactive system that requires a pair of users to cooperate with an audience around them. Based on acoustic levitation technology, Top-Levi leverages a unique attribute of dynamic physical 3D contents displayed and animated in the air: such systems inherently provide different visual information to users depending on where they are around the device. In Top-Levi, there are two primary users on opposite (left/right) sides of the device, and audience members to the front. Each sees different instructions displayed on a floating cube. Their collective task is to cooperate to move the cube from a start point to a final destination by synchronizing their responses to the instructions. | 711,090 |
Title: ShadowAstro: Levitating Constellation Silhouette for Spatial Exploration and Learning
Abstract: ABSTRACT We introduce ShadowAstro, a system that uses the levitating particles’ cast shadows to produce a constellation pattern. In contrast to the traditional approach of making astronomical observations via AR, planetariums, and computer screens, we intend to use the shadows created by each levitated bead to construct the silhouette of constellations - a natural geometrical pattern that can be represented by a set of particles. In this proposal, we show that ShadowAstro can help users inspect the 12 constellations on the ecliptic plane and augment users’ experience with a projector that will serve as the light source. Through this, we draw a future vision where ShadowAstro can serve as an interactive tool for educational purposes or as an art installation in a museum. We believe the concept of designing interactions between the levitated objects and their cast shadows will provide a brand-new experience to end users. | 711,091 |
Title: CreativeBot: a Creative Storyteller robot to stimulate creativity in children
Abstract: ABSTRACT We present the design and evaluation of a storytelling activity between children and an autonomous robot aiming at nurturing children’s creativity. We assessed whether a robot displaying creative behavior will positively impact children’s creativity skills in a storytelling context. We developed two models for the robot to engage in the storytelling activity: creative model, where the robot generates creative story ideas, and the non-creative model, where the robot generates non-creative story ideas. We also investigated whether the type of the storytelling interaction will have an impact on children’s creativity skills. We used two types of interaction: 1) Collaborative, where the child and the robot collaborate together by taking turns to tell a story. 2) Non-collaborative: where the robot first tells a story to the child and then asks the child to tell it another story. We conducted a between-subjects study with 103 children in four different conditions: Creative collaborative, Non-creative collaborative, Creative non-collaborative and Non-Creative non-collaborative. The children’s stories were evaluated according to the four standard creativity variables: fluency, flexibility, elaboration and originality. Results emphasized that children who interacted with a creative robot showed higher creativity during the interaction than children who interacted with a non-creative robot. Nevertheless, no significant effect of the type of the interaction was found on children’s creativity skills. Our findings are significant to the Child-Robot interaction (cHRI) community since they enrich the scientific understanding of the development of child-robot encounters for educational applications. | 711,092 |
Title: POLLY: A Multimodal Cross-Cultural Context-Sensitive Framework to Predict Political Lying from Videos
Abstract: ABSTRACT Politicians lie. Frequently. Depending on the country they are from, politicians may lie more frequently on some topics than others. We develop the novel concept of a tripartite “VAT” graph (Video-Article-Topic) with three types of nodes: videos (with a politician featured in each), news articles that mention the politician, and topics discussed in the videos or articles. We develop several novel types of audio and video deception scores for each audio/video, as well as a topic deception score and an edge deception score for each edge in the graph. Our POLLY (POLitical LYing) system builds upon past work by others to generate predictions for whether a politician is lying or not. We test POLLY on a novel dataset (which will be made publicly available upon publication of this paper) consisting of 146 videos and 6337 news articles involving 73 politicians from 18 countries from all major continents. We show that POLLY achieves AUC and F1 scores over 77%, beating out several baselines. We further show that POLLY is robust to translation errors made by Google Translate. | 711,093 |
Title: Inclusive Multimodal Voice Interaction for Code Navigation
Abstract: ABSTRACT Navigation of source code typically requires extensive use of a traditional mouse and keyboard, which can present significant barriers for developers with physical impairments. We present research exploring how commonly used code navigation approaches (e.g. locating references to user-defined identifiers, jumping to function definitions, conducting a search for specific syntax, etc.) can be optimized for multimodal voice interaction. An exploratory study was initially conducted with five developers who have physical impairments to elicit insights around their experiences in navigating code within existing voice-controlled development environments. Findings from this study informed the design of a code editor integrating different navigation features tailored for multimodal speech input. A user evaluation with 14 developers with physical impairments was conducted with results demonstrating that all participants were able to successfully complete a series of standard navigation tasks. Participants also highlighted that the code navigation techniques were intuitive to use and provided a high level of usability. | 711,094 |
Title: Improved Word-level Lipreading with Temporal Shrinkage Network and NetVLAD
Abstract: ABSTRACT In most recent word-level lipreading architectures, the temporal feature extraction module tends to employ a Multi-scale Temporal Convolution Network (MS-TCN). In our experiments, we have noticed that it is hard for MS-TCN to deal with the noisy information that image sequences may contain. To solve this problem, we propose a lipreading architecture based on a temporal shrinkage network and NetVLAD. We first propose a Temporal Shrinkage Unit, based on the Residual Shrinkage Network, and replace the temporal convolution unit with it. The improved network, named Multi-scale Temporal Shrinkage Network (MS-TSN), can focus more on relevant information. Following the MS-TSN, which deals with noisy frames, NetVLAD is used to integrate local information into a global feature. Compared with Global Average Pooling, NetVLAD can extract key features by clustering. Our experiments on Lipreading in the Wild (LRW) show that the proposed architecture achieves an accuracy of 89.41%, attaining a new state of the art in word-level lipreading. In addition, we build a new Mandarin Chinese lipreading dataset named MCLR-100 and verify our proposed architecture on it. | 711,095 |
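As a rough illustration of the NetVLAD-style pooling mentioned in this abstract, a minimal PyTorch sketch is given below; the cluster count, feature dimension, and sequence length are placeholder assumptions, not the configuration used in the paper.

```python
# Hypothetical NetVLAD pooling layer: aggregates local temporal features into a
# single global descriptor via soft cluster assignment. Illustrative sketch only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NetVLAD(nn.Module):
    def __init__(self, num_clusters=8, dim=512):
        super().__init__()
        self.assign = nn.Linear(dim, num_clusters)            # soft-assignment scores
        self.centroids = nn.Parameter(torch.randn(num_clusters, dim))

    def forward(self, x):                                      # x: (batch, time, dim)
        a = F.softmax(self.assign(x), dim=-1)                  # (batch, time, K)
        res = x.unsqueeze(2) - self.centroids                  # residuals to each centre
        vlad = (a.unsqueeze(-1) * res).sum(dim=1)              # (batch, K, dim)
        vlad = F.normalize(vlad, dim=-1)                       # intra-normalization
        return F.normalize(vlad.flatten(1), dim=-1)            # (batch, K*dim)

features = torch.randn(4, 29, 512)                             # e.g. 29 frames of lip features
print(NetVLAD()(features).shape)                               # torch.Size([4, 4096])
```

The intuition is that each local frame feature contributes a soft-assigned residual to every cluster center, which tends to preserve salient frames better than plain averaging.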
Title: Toward Causal Understanding of Therapist-Client Relationships: A Study of Language Modality and Social Entrainment
Abstract: ABSTRACT The relationship between a therapist and their client is one of the most critical determinants of successful therapy. The working alliance is a multifaceted concept capturing the collaborative aspect of the therapist-client relationship; a strong working alliance has been extensively linked to many positive therapeutic outcomes. Although therapy sessions are decidedly multimodal interactions, the language modality is of particular interest given its recognized relationship to similar dyadic concepts such as rapport, cooperation, and affiliation. Specifically, in this work we study language entrainment, which measures how much the therapist and client adapt toward each other’s use of language over time. Despite the growing body of work in this area, however, relatively few studies examine causal relationships between human behavior and these relationship metrics: does an individual’s perception of their partner affect how they speak, or does how they speak affect their perception? We explore these questions in this work through the use of structural equation modeling (SEM) techniques, which allow for both multilevel and temporal modeling of the relationship between the quality of the therapist-client working alliance and the participants’ language entrainment. In our first experiment, we demonstrate that these techniques perform well in comparison to other common machine learning models, with the added benefits of interpretability and causal analysis. In our second analysis, we interpret the learned models to examine the relationship between working alliance and language entrainment and address our exploratory research questions. The results reveal that a therapist’s language entrainment can have a significant impact on the client’s perception of the working alliance, and that the client’s language entrainment is a strong indicator of their perception of the working alliance. We discuss the implications of these results and consider several directions for future work in multimodality. | 711,096 |
Title: Continual Learning about Objects in the Wild: An Interactive Approach
Abstract: ABSTRACTWe introduce a mixed-reality, interactive approach for continually learning to recognize an open-ended set of objects in a user’s surrounding environment. The proposed approach leverages the multimodal sensing, interaction, and rendering affordances of a mixed-reality headset, and enables users to label nearby objects via speech, gaze, and gestures. Image views of each labeled object are automatically captured from varying viewpoints over time, as the user goes about their everyday tasks. The labels provided by the user can be propagated forward and backwards in time and paired with the collected views to update an object recognition model, in order to continually adapt it to the user’s specific objects and environment. We review key challenges for the proposed interactive continual learning approach, present details of an end-to-end system implementation, and report on results and lessons learned from an initial, exploratory case study using the system. | 711,097 |
Title: Explicit nulls with unsafe nulls
Abstract: ABSTRACT The explicit nulls feature was merged into the Scala 3 compiler at the end of 2019; it makes Null no longer a subtype of all reference types. This is the first step toward enforcing null safety in the Scala language. Since then, we have been continuously improving usability to help users migrate to explicit nulls more easily. UncheckedNull (originally named JavaNull) was introduced in the original design to allow calling Java members unsafely (Nieto et al. 2020). Due to limited usage and difficulty of implementation, we decided to discard this notion and introduced a new language feature, called UnsafeNulls. By simply importing scala.language.unsafeNulls, users can create an "unsafe" scope. Inside this scope, Null has similar semantics as in Scala without explicit nulls, which allows selecting members on nullable variables and assigning nullable values to non-nullable variables without checking. This is useful when a large chunk of Scala code mainly interacts with nullable values from a Java library. Community projects are used to evaluate this new feature. We found that UnsafeNulls can significantly reduce the work of initial migration. This gives users more flexibility to migrate their projects gradually. We also migrated the Scala 3 compiler itself to explicit nulls. | 711,098 |
Title: Enhancing closures in scala 3 with spores3
Abstract: ABSTRACTThe use of closures, a core language feature of functional programming languages, has become popular in the context of concurrent and distributed programming. Using closures in a concurrent or distributed setting increases safety hazards, however, due to captured variables. Previous work proposed spores, enhanced closures that increase safety by constraining their environment using types. This paper presents Spores3, a completely new library-based implementation of spores for Scala 3. It is shown how the new design is enabled by a unique combination of several new features in Scala 3. Moreover, Spores3 contributes a new, portable approach to serializing closures based on type classes. Its implementation supports the same serialization approach on both the JVM and the JavaScript backends of Scala. | 711,099 |
Title: Unimodal vs. Multimodal Prediction of Antenatal Depression from Smartphone-based Survey Data in a Longitudinal Study
Abstract: ABSTRACT Antenatal depression impacts 7-20% of women globally, and can have serious consequences for both the mother and the infant. Preventative interventions are effective, but are cost-efficient only among those at high risk. As such, being able to predict and identify those at risk is invaluable for reducing the burden of care and adverse consequences, as well as improving treatment outcomes. While several approaches have been proposed in the literature for the automatic prediction of depressive states, there is a scarcity of research on automatic prediction of perinatal depression. Moreover, while there exist some works on the automatic prediction of postpartum depression that use data collected in clinical settings and then apply the model in a smartphone application, to the best of our knowledge, no previous work has investigated the automatic prediction of late antenatal depression using data collected via a smartphone app in the first and second trimesters of pregnancy. This study utilizes data measuring various aspects of self-reported psychological, physiological and behavioral information, collected from 915 women in the first and second trimester of pregnancy using a smartphone app designed for perinatal depression. By applying machine learning algorithms to these data, this paper explores the possibility of automatic early detection of antenatal depression (i.e., during week 36 to week 42 of pregnancy) in everyday life without the administration of healthcare professionals. We compare uni-modal and multi-modal models and identify predictive markers related to antenatal depression. With the multi-modal approach, the model reaches a BAC of 0.75 and an AUC of 0.82. | 711,100 |
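A toy sketch of the uni-modal versus multi-modal comparison described here, using synthetic stand-in features and scikit-learn's balanced accuracy (BAC) and AUC metrics; the feature sets, model choice, and split are illustrative assumptions, not the study's pipeline.

```python
# Synthetic illustration of comparing uni-modal vs. multi-modal depression models.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import balanced_accuracy_score, roc_auc_score

rng = np.random.default_rng(0)
n = 915                                     # cohort size mentioned in the abstract
psych = rng.normal(size=(n, 20))            # placeholder psychological survey features
physio = rng.normal(size=(n, 10))           # placeholder physiological features
behav = rng.normal(size=(n, 15))            # placeholder behavioral features
y = rng.integers(0, 2, size=n)              # synthetic depression labels

def evaluate(X, y):
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
    prob = clf.predict_proba(Xte)[:, 1]
    return (balanced_accuracy_score(yte, (prob > 0.5).astype(int)),
            roc_auc_score(yte, prob))

print("uni-modal  (psych only):", evaluate(psych, y))
print("multi-modal (all):      ", evaluate(np.hstack([psych, physio, behav]), y))
```

With random labels the scores hover around chance; the point of the sketch is only the structure of a modality comparison.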
Title: Design patterns for parser combinators in scala
Abstract: ABSTRACTParser combinators provide a parsing experience that balances flexibility and abstraction with writing parsers in a style that remains close to the grammar. Parser combinators can benefit from the design patterns and structure of an object-oriented world, however, and this paper showcases the implementation and implications of various design patterns tailored at parsers in an object-oriented and functional world. In particular, features of Scala, such as implicits and path-dependent types, along with general object-oriented design help make it easy to write and maintain such parsers. | 711,101 |
Title: End-to-End Learning and Analysis of Infant Engagement During Guided Play: Prediction and Explainability
Abstract: ABSTRACT Infant engagement during guided play is a reliable indicator of early learning outcomes, psychiatric issues and familial wellbeing. An obstacle to using such information in real-world scenarios is the need for a domain expert to assess the data. We show that an end-to-end Deep Learning approach can perform well in automatic infant engagement detection from a single video source, without requiring a clear view of the face or the whole body. To tackle the problem of explainability in learning methods, we evaluate how four common attention mapping techniques can be used to perform subjective evaluation of the network’s decision process and identify multimodal cues used by the network to discriminate engagement levels. We further propose a quantitative comparison approach, by collecting a human attention baseline and evaluating its similarity to each technique. | 711,102 |
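For context, one widely used attention-mapping technique, Grad-CAM, can be sketched as below in PyTorch; this is a generic illustration and not necessarily one of the four techniques evaluated in the paper, and the backbone, layer choice, and input are placeholders.

```python
# Rough Grad-CAM sketch: weight the activations of a late conv layer by the
# gradient of the predicted score, producing a coarse spatial attention map.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None).eval()       # random weights; a pretrained model could be used
activations, gradients = {}, {}

def fwd_hook(module, inputs, output): activations["a"] = output
def bwd_hook(module, grad_in, grad_out): gradients["g"] = grad_out[0]

model.layer4.register_forward_hook(fwd_hook)
model.layer4.register_full_backward_hook(bwd_hook)

x = torch.randn(1, 3, 224, 224)             # stand-in for a video frame
score = model(x)[0].max()                   # score of the top class
score.backward()

weights = gradients["g"].mean(dim=(2, 3), keepdim=True)        # pooled gradients
cam = F.relu((weights * activations["a"]).sum(dim=1))          # (1, 7, 7)
cam = F.interpolate(cam.unsqueeze(1), size=(224, 224), mode="bilinear")
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)       # normalized heatmap
```

A map like this is what would be compared against a collected human attention baseline.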
Title: Type-safe regular expressions
Abstract: ABSTRACTRegular expressions can easily go wrong. Capturing groups, in particular, require meticulous care to avoid running into off-by-one errors and null pointer exceptions. In this chapter, we propose a new design for Scala's regular expressions which completely eliminates this class of errors. Our design makes extensive use of match types, Scala's new feature for type-level programming, to statically analyze regular expressions during type checking. We show that our approach has a minor impact on compilation times, which makes it suitable for practical use. | 711,103 |
Title: DynaTags: Low-Cost Fiducial Marker Mechanisms
Abstract: ABSTRACT Printed fiducial markers are inexpensive, easy to deploy, robust and deservedly popular. However, their data payload is also static, unable to express any state beyond being present. For this reason, more complex electronic tagging technologies exist, which can sense and change state, but either require special equipment to read or are orders of magnitude more expensive than printed markers. In this work, we explore an approach between these two extremes: one that retains the simple, low-cost nature of printed markers, yet has some of the expressive capabilities of dynamic tags. Our “DynaTags” are simple mechanisms constructed from paper that express multiple payloads, allowing practitioners and researchers to create new and compelling physical-digital experiences. We describe a library of 23 mechanisms that can be read by standard smartphone reader apps. Through a series of demo applications (augmenting reality through e.g., sounds, environmental lighting and graphics) we show how our tags can bring new interactivity to previously static experiences. | 711,104 |
Title: Comparative Analysis of Entity Identification and Classification of Indian Epics
Abstract: ABSTRACT Despite the relevance and accessibility of Indian epics such as the Mahabharata, little research has been done on the corpus in Natural Language Processing (NLP). To revitalize NLP research on the Indian epics, we present a computational analysis of SOTA supervised Machine Learning (ML) methods for Named Entity Recognition (NER) using our annotated dataset of labeled tokens of the Mahabharata text. The text contains English and Sanskrit words, offering a different vocabulary than modern literature. The characters also change their nature throughout the storyline, challenging many SOTA NER methods as tokens with the same name can have different entity types. For example, we discover that NLTK inaccurately identifies 95% of tokens as named entities. We also note that spaCy and Stanford NER perform adequately only after training. However, since they produce one embedding per token, they fail to distinguish between entities with the same name. In contrast, context-driven methods such as BERT and ELMO tackle the issue as they produce different embeddings. Yet, ELMO delivers poor results due to its character-based embeddings and pseudo-bidirectionality. BERT performs the best with a 0.98 micro-avg F1 score but overfits the dataset. Therefore, the current SOTA techniques are unsuitable for NER on the Indian epics, and more research is needed to build a universally suited NER agent capable of recognizing named entities from diverse cultural contexts. | 711,105 |
Title: Real-Time Multimodal Emotion Recognition in Conversation for Multi-Party Interactions
Abstract: ABSTRACT In order to improve multi-party social interaction with artificial companions such as robots or virtual agents, real-time Emotion Recognition in Conversation (ERC) is required. In this context, ERC is a challenging task which involves multiple challenges, such as processing multimodal data over time, taking into account the multi-party context with any number of participants, understanding implied relevant commonsense knowledge during interaction and taking into account each participant’s emotional attitude. To deal with the aforementioned challenges, we design a multimodal off-the-shelf model that meets the requirements of real-life scenarios, specifically dyadic and multi-party interactions. We propose a Knowledge Aware Multi-Headed Network that integrates various sources including the dialog history and commonsense knowledge about the speaker and other participants. The weights of these pieces of information are modulated using a multi-head attention mechanism. The proposed model is learnt in a Multi-Task Learning framework which combines the ERC task with a Dialogue Act (DA) recognition task and an Emotion Shift (ES) detection task through a joint learning strategy. Our proposition obtains competitive and stable results on several benchmark datasets that vary in number of participants and length of conversations, and outperforms the state-of-the-art on one of these datasets. The importance of DA and ES prediction in determining the speaker’s current emotional state is investigated. | 711,106 |
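A highly simplified sketch of attention-based fusion with multi-task heads (emotion, dialogue act, emotion shift) of the kind described above; module names, dimensions, and label counts are assumptions, and the actual Knowledge Aware Multi-Headed Network is more elaborate.

```python
# Toy fusion module: the current utterance attends over a context sequence that
# concatenates dialog history and commonsense-knowledge vectors; three task heads
# share the fused representation (multi-task learning).
import torch
import torch.nn as nn

class KnowledgeAwareFusion(nn.Module):
    def __init__(self, dim=256, heads=4, num_emotions=7, num_dialog_acts=12):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.erc_head = nn.Linear(dim, num_emotions)    # emotion recognition task
        self.da_head = nn.Linear(dim, num_dialog_acts)  # dialogue-act task
        self.es_head = nn.Linear(dim, 2)                # emotion-shift task

    def forward(self, utterance, context):
        # utterance: (batch, 1, dim); context: (batch, ctx_len, dim)
        fused, _ = self.attn(query=utterance, key=context, value=context)
        h = fused.squeeze(1)
        return self.erc_head(h), self.da_head(h), self.es_head(h)

model = KnowledgeAwareFusion()
emotion_logits, da_logits, es_logits = model(torch.randn(8, 1, 256), torch.randn(8, 10, 256))
```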
Title: A cognitive knowledge-based system for hair and makeup recommendation based on facial features classification
Abstract: ABSTRACTThis paper aims at building a knowledge-based system of smart mirrors for the cosmetic industry. In order to better understand hair and cosmetic experts, we first conduct interviews for knowledge acquisition. Then, we obtain insights from each category of answers collected from expert interviews. To design this knowledge-based system, we define concepts, main tasks and subtasks in order to extract rules to design the system. This system can suggest hairstyles and make-up colors by considering users’ face shapes and their personal colors. | 711,107 |
Title: A Framework for Video-Text Retrieval with Noisy Supervision
Abstract: ABSTRACT A key challenge in extending vision-linguistic models to new video domains is curating large annotated datasets. We propose a framework that leverages videos with noisy linguistic descriptions, such as sports broadcasts, to train a model using an uncurated dataset. We introduce an unsupervised model that uses the corpus membership between a target and an auxiliary corpus to assign a relevance probability to the linguistic description of examples in the target domain. We examine these probabilities to evaluate the effect of noisy data in the video-text retrieval task. Our framework provides a domain-invariant recipe for enhancing multi-modal datasets by reducing the noise without requiring the costly manual curation effort. We show that our unsupervised model improves the performance of the video-text retrieval model using readily available hockey broadcast videos with closed-captioning. Furthermore, we propose a multi-modal cross-correlation objective function to obtain additional performance gains. We showcase our proposed framework in the context of a new multi-modal dataset of temporally labeled hockey videos with noisy textual descriptions. | 711,108 |
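The multi-modal cross-correlation objective is not specified in detail here, but a loss in that spirit (in the style of redundancy-reduction objectives) might be sketched as follows; the exact formulation used in the paper may differ.

```python
# Illustrative cross-correlation loss between paired video and text embeddings:
# pull the diagonal of the (feature x feature) cross-correlation toward 1 and
# suppress off-diagonal correlations.
import torch

def cross_correlation_loss(video_emb, text_emb, off_diag_weight=0.005):
    # video_emb, text_emb: (batch, dim), assumed to be paired clips and captions
    v = (video_emb - video_emb.mean(0)) / (video_emb.std(0) + 1e-6)
    t = (text_emb - text_emb.mean(0)) / (text_emb.std(0) + 1e-6)
    c = (v.T @ t) / v.shape[0]                        # (dim, dim) cross-correlation
    on_diag = ((torch.diagonal(c) - 1) ** 2).sum()
    off_diag = (c ** 2).sum() - (torch.diagonal(c) ** 2).sum()
    return on_diag + off_diag_weight * off_diag

loss = cross_correlation_loss(torch.randn(32, 128), torch.randn(32, 128))
```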
Title: Investigating the relationship between dialogue and exchange-level impression
Abstract: ABSTRACT Multimodal dialogue systems (MDS) have recently attracted increasing attention. The automatic evaluation of user impression with spoken dialog at the dialog level plays a central role in managing dialog systems. A user usually forms an overall impression through the experience of each exchange of turns in the conversation. Thus, the user’s exchange-level sentiment should be considered when recognizing the user’s overall impression of the dialog. Previous research has focused on modeling user impressions during individual exchanges or during the overall conversation. Thus, the relationship between user sentiment at the exchange level and user impression at the dialog level is still unclear, and appropriately utilizing this relationship in impression analysis remains unexplored. In this paper, we first investigate the relation between sentiment at the exchange level and 18 labels that indicate different aspects of the user impression at the dialog level. Then, we present a multitask learning model (MTL) that uses exchange-level annotations to recognize dialog-level labels. The experimental results demonstrate that our proposed model achieves better performance at the dialog level, outperforming the single-task model by a maximum of 15.7%. | 711,109 |
Title: Transformer-Based Physiological Feature Learning for Multimodal Analysis of Self-Reported Sentiment
Abstract: ABSTRACT One of the main challenges in realizing dialog systems is adapting to a user’s sentiment state in real time. Large-scale language models, such as BERT, have achieved excellent performance in sentiment estimation; however, the use of only linguistic information from user utterances in sentiment estimation still has limitations. In fact, self-reported sentiment is not necessarily expressed by user utterances. To mitigate the issue that the true sentiment state is not expressed as observable signals, psychophysiology and affective computing studies have focused on physiological signals that capture involuntary changes related to emotions. We address this problem by efficiently introducing time-series physiological signals into a state-of-the-art language model to develop an adaptive dialog system. Compared with linguistic models based on BERT representations, physiological long short-term memory (LSTM) models based on our proposed physiological signal processing method have competitive performance. Moreover, we extend our physiological signal processing method to the Transformer language model and propose the Time-series Physiological Transformer (TPTr), which captures sentiment changes based on both linguistic and physiological information. In ensemble models, our proposed methods significantly outperform the previous best result (p < 0.05). | 711,110 |
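A loose sketch of combining a transformer over physiological time series with a sentence-level text embedding, as this abstract describes conceptually; the channel count, dimensions, and three-class output are placeholder assumptions rather than the TPTr architecture.

```python
# Toy model: encode physiological samples with a small transformer, pool over
# time, concatenate with a language-model sentence embedding, and classify sentiment.
import torch
import torch.nn as nn

class PhysioTextModel(nn.Module):
    def __init__(self, physio_channels=4, d_model=64, text_dim=768, num_classes=3):
        super().__init__()
        self.proj = nn.Linear(physio_channels, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.classifier = nn.Linear(d_model + text_dim, num_classes)

    def forward(self, physio, text_emb):
        # physio: (batch, time, channels), e.g. skin conductance / pulse samples
        # text_emb: (batch, text_dim), e.g. a [CLS] embedding from BERT
        h = self.encoder(self.proj(physio)).mean(dim=1)   # temporal pooling
        return self.classifier(torch.cat([h, text_emb], dim=-1))

model = PhysioTextModel()
logits = model(torch.randn(8, 200, 4), torch.randn(8, 768))
```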
Title: Cognitive Workload Assessment via Eye Gaze and EEG in an Interactive Multi-Modal Driving Task
Abstract: ABSTRACT Assessing the cognitive workload of human interactants in mixed-initiative teams is a critical capability for autonomous interactive systems to enable adaptations that improve team performance. Yet, it is still unclear, due to diverging evidence, which sensing modality might work best for the determination of human workload. In this paper, we report results from an empirical study that was designed to answer this question by collecting eye gaze and electroencephalogram (EEG) data from human subjects performing an interactive multi-modal driving task. Different levels of cognitive workload were generated by introducing secondary tasks like dialogue, braking events, and tactile stimulation in the course of driving. Our results show that pupil diameter is a more reliable indicator for workload prediction than EEG. And more importantly, none of the five different machine learning models combining the extracted EEG and pupil diameter features were able to show any improvement in workload classification over eye gaze alone, suggesting that eye gaze is a sufficient modality for assessing human cognitive workload in interactive, multi-modal, multi-task settings. | 711,111 |
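A toy version of the modality comparison described above, training the same classifier on pupil-diameter features, EEG features, and their combination; the features and labels are synthetic and purely illustrative.

```python
# Synthetic comparison of workload classifiers across sensing modalities.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_windows = 600
pupil = rng.normal(size=(n_windows, 6))    # e.g. mean/std/slope of pupil diameter per window
eeg = rng.normal(size=(n_windows, 20))     # e.g. band-power features per window
y = rng.integers(0, 2, size=n_windows)     # low vs. high workload labels

for name, X in [("pupil", pupil), ("eeg", eeg), ("pupil+eeg", np.hstack([pupil, eeg]))]:
    acc = cross_val_score(RandomForestClassifier(n_estimators=100, random_state=0),
                          X, y, cv=5).mean()
    print(f"{name:10s} cross-validated accuracy: {acc:.3f}")
```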
Title: RGBDGaze: Gaze Tracking on Smartphones with RGB and Depth Data
Abstract: ABSTRACT Tracking a user’s gaze on smartphones offers the potential for accessible and powerful multimodal interactions. However, phones are used in a myriad of contexts and state-of-the-art gaze models that use only the front-facing RGB cameras are too coarse and do not adapt adequately to changes in context. While prior research has showcased the efficacy of depth maps for gaze tracking, they have been limited to desktop-grade depth cameras, which are more capable than the types seen in smartphones, that must be thin and low-powered. In this paper, we present a gaze tracking system that makes use of today’s smartphone depth camera technology to adapt to the changes in distance and orientation relative to the user’s face. Unlike prior efforts that used depth sensors, we do not constrain the users to maintain a fixed head position. Our approach works across different use contexts in unconstrained mobile settings. The results show that our multimodal ML model has a mean gaze error of 1.89 cm; a 16.3% improvement over using RGB data alone (2.26 cm error). Our system and dataset offer the first benchmark of gaze tracking on smartphones using RGB+Depth data under different use contexts. | 711,112 |
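A minimal two-branch sketch of RGB + depth gaze estimation of the kind described above; the encoders, input sizes, and output parameterization are assumptions rather than the authors' architecture.

```python
# Toy RGB+Depth gaze model: separate CNN encoders per modality, concatenated
# features regress a 2D gaze point; error could be reported in cm against ground truth.
import torch
import torch.nn as nn

def small_cnn(in_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, 16, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten())

class RGBDGazeNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.rgb_branch = small_cnn(3)
        self.depth_branch = small_cnn(1)
        self.head = nn.Linear(32 + 32, 2)        # (x, y) gaze location

    def forward(self, rgb, depth):
        feats = torch.cat([self.rgb_branch(rgb), self.depth_branch(depth)], dim=1)
        return self.head(feats)

net = RGBDGazeNet()
pred = net(torch.randn(4, 3, 128, 128), torch.randn(4, 1, 128, 128))   # (4, 2)
# mean error would then be torch.norm(pred - target, dim=1).mean()
```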
Title: WEDAR: Webcam-based Attention Analysis via Attention Regulator Behavior Recognition with a Novel E-reading Dataset
Abstract: ABSTRACT Human attention is a critical yet challenging cognitive process to measure due to its diverse definitions and non-standardized evaluation. In this work, we focus on the attention self-regulation of learners, which commonly occurs as an effort to regain focus, contrary to attention loss. We focus on easy-to-observe behavioral signs in real-world settings to grasp learners’ attention in e-reading. We collected a novel dataset of 30 learners, which provides clues of learners’ attentional states through various metrics, such as learner behaviors, distraction self-reports, and questionnaires for knowledge gain. To achieve automatic attention regulator behavior recognition, we annotated 931,440 frames into six behavior categories every second in short clip form, using attention self-regulation from the literature study as our labels. A preliminary Pearson correlation coefficient analysis indicates certain correlations between distraction self-reports and unimodal attention regulator behaviors. Baseline model training has been conducted to recognize the attention regulator behaviors by applying classical neural networks to our WEDAR dataset, with the highest prediction results of 75.18% and 68.15% in subject-dependent and subject-independent settings, respectively. Furthermore, we present a baseline of using attention regulator behaviors to recognize the attentional states, showing a promising performance of 89.41% (leave-five-subject-out). Our work inspires the detection & feedback loop design for attentive e-reading, connecting multimodal interaction, learning analytics, and affective computing. | 711,113 |
Title: Multi-level Fusion of Multi-modal Semantic Embeddings for Zero Shot Learning
Abstract: ABSTRACT Zero shot learning aims to recognize objects whose instances may not be covered by the training data. To generalize knowledge from seen classes to novel ones, a semantic space is built to embed knowledge from various views into multi-modal semantic embeddings. Existing semantic embeddings neglect the relationships between classes, which are essential for transferring knowledge between classes. Moreover, existing zero shot learning models ignore the complementarity between semantic embeddings from different modalities. To tackle these problems, in this work we resort to graph theory to explicitly model the interdependence between classes and then obtain new modal semantic embeddings. Furthermore, we propose a multi-level fusion model to effectively combine the knowledge encoded in the multi-modal semantic embeddings. By virtue of a subsequent fusion block, the results of multi-level fusion can be further enriched and fused. Experiments show that our model achieves promising results on various datasets. An ablation study suggests that our method is well suited for zero shot learning. | 711,114 |
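A simplified sketch of the two ingredients named in this abstract, propagating class embeddings over a class-relation graph and fusing two modalities; the single-step propagation, gated fusion, and all dimensions are assumptions, not the paper's model.

```python
# Toy graph-based fusion of two class-level semantic embeddings (e.g. word vectors
# and attribute vectors): one normalized-adjacency propagation step per modality,
# followed by a learned gate that blends the two views.
import torch
import torch.nn as nn

class GraphFusion(nn.Module):
    def __init__(self, adj, dim_a=300, dim_b=1024, dim_out=512):
        super().__init__()
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        self.register_buffer("norm_adj", adj / deg)       # row-normalized class graph
        self.proj_a = nn.Linear(dim_a, dim_out)
        self.proj_b = nn.Linear(dim_b, dim_out)
        self.gate = nn.Linear(2 * dim_out, dim_out)

    def forward(self, emb_a, emb_b):
        a = torch.relu(self.proj_a(self.norm_adj @ emb_a))  # graph-propagated view A
        b = torch.relu(self.proj_b(self.norm_adj @ emb_b))  # graph-propagated view B
        g = torch.sigmoid(self.gate(torch.cat([a, b], dim=-1)))
        return g * a + (1 - g) * b                          # fused class embeddings

num_classes = 50
adj = (torch.rand(num_classes, num_classes) > 0.8).float()  # toy class-relation graph
fused = GraphFusion(adj)(torch.randn(num_classes, 300), torch.randn(num_classes, 1024))
```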
Title: Sound Scope Pad: Controlling a VR Concert with Natural Movement
Abstract: ABSTRACT We developed Sound Scope Pad, an application that provides an active music listening experience that combines AI, virtual reality, and spatial acoustics. Users can emphasize the sounds of certain performers by turning their head to the left or right or bringing their hands closer to their ears to find and focus on the performer they want to listen to. In the Sound Scope Headphones that we previously built, the user’s head direction was detected by an accelerometer mounted on the arch of the headphones. In Sound Scope Pad, the head direction is detected by combining the angle information detected by the acceleration gyro sensor of a tablet and the angle information of the head recognized from the front camera image of the tablet. | 711,115 |
Title: The DeepMotion entry to the GENEA Challenge 2022
Abstract: ABSTRACT This paper describes the method and evaluation results of our DeepMotion entry to the GENEA Challenge 2022. One difficulty in data-driven gesture synthesis is that there may be multiple viable gesture motions for the same speech utterance. Therefore, deterministic regression methods cannot resolve such conflicting samples and may produce overly damped motions. We propose a two-stage model to address this uncertainty issue in gesture synthesis. Inspired by recent text-to-image synthesis methods, our gesture synthesis system utilizes a VQ-VAE model to first extract smaller gesture units as codebook vectors from training data. An autoregressive model based on the GPT-2 transformer is then applied to model the probability distribution over the discrete latent space of the VQ-VAE. The user evaluation results show the proposed method is able to produce gesture motions with reasonable human-likeness and gesture appropriateness. | 711,116 |
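A minimal sketch of the vector-quantization step of a VQ-VAE, which underlies the "gesture unit" codebook idea described above; the codebook size, feature dimension, and loss weighting are placeholder assumptions, not the entry's configuration.

```python
# Toy VQ-VAE quantizer with a straight-through estimator: continuous motion
# features are snapped to their nearest codebook entry, yielding discrete indices
# that an autoregressive (GPT-2-style) prior could later model.
import torch
import torch.nn as nn

class VectorQuantizer(nn.Module):
    def __init__(self, num_codes=512, dim=64, beta=0.25):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, dim)
        self.beta = beta

    def forward(self, z):                                    # z: (batch, time, dim)
        flat = z.reshape(-1, z.shape[-1])
        dist = torch.cdist(flat, self.codebook.weight)       # distances to all codes
        idx = dist.argmin(dim=1)                             # nearest codebook entry
        q = self.codebook(idx).view_as(z)
        # codebook loss + commitment loss (standard VQ-VAE objective terms)
        loss = ((q - z.detach()) ** 2).mean() + self.beta * ((q.detach() - z) ** 2).mean()
        q = z + (q - z).detach()                             # straight-through gradient
        return q, idx.view(z.shape[:-1]), loss

quantizer = VectorQuantizer()
codes, indices, vq_loss = quantizer(torch.randn(2, 40, 64))
# `indices` (2 x 40) is the discrete sequence a GPT-2-style prior would be trained on.
```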
Title: The IVI Lab entry to the GENEA Challenge 2022 – A Tacotron2 Based Method for Co-Speech Gesture Generation With Locality-Constraint Attention Mechanism
Abstract: ABSTRACT This paper describes the IVI Lab entry to the GENEA Challenge 2022. We formulate the gesture generation problem as a sequence-to-sequence conversion task with text, audio, and speaker identity as inputs and the body motion as the output. We use the Tacotron2 architecture as our backbone with the locality-constraint attention mechanism that guides the decoder to learn the dependencies from the neighboring latent features. The collective evaluation released by GENEA Challenge 2022 indicates that our two entries (FSH and USK) for the full body and upper body tracks statistically outperform the audio-driven and text-driven baselines on both two subjective metrics. Remarkably, our full-body entry receives the highest speech appropriateness (60.5% matched) among all submitted entries. We also conduct an objective evaluation to compare our motion acceleration and jerk with two autoregressive baselines. The result indicates that the motion distribution of our generated gestures is much closer to the distribution of natural gestures. | 711,117 |
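The locality-constraint attention is not specified in detail in this abstract; one way to realize the idea, biasing attention scores with a Gaussian window around the previously attended position so the decoder favors neighboring latent features, is sketched below under that assumption.

```python
# Illustrative locality-biased attention step for a sequence-to-sequence decoder.
import torch

def locality_attention(query, keys, prev_pos, sigma=3.0):
    # query: (batch, dim); keys: (batch, src_len, dim); prev_pos: (batch,) indices
    scores = torch.einsum("bd,bsd->bs", query, keys) / keys.shape[-1] ** 0.5
    positions = torch.arange(keys.shape[1], device=keys.device).float()
    # Gaussian bias centered on the previously attended source position
    bias = -((positions.unsqueeze(0) - prev_pos.unsqueeze(1).float()) ** 2) / (2 * sigma ** 2)
    weights = torch.softmax(scores + bias, dim=-1)           # locality-constrained weights
    context = torch.einsum("bs,bsd->bd", weights, keys)
    return context, weights

ctx, w = locality_attention(torch.randn(2, 128), torch.randn(2, 50, 128),
                            prev_pos=torch.tensor([10, 25]))
```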
Title: Exemplar-based Stylized Gesture Generation from Speech: An Entry to the GENEA Challenge 2022
Abstract: ABSTRACT We present our entry to the GENEA Challenge of 2022 on data-driven co-speech gesture generation. Our system is a neural network that generates gesture animation from an input audio file. The motion style generated by the model is extracted from an exemplar motion clip. Style is embedded in a latent space using a variational framework. This architecture allows for generating motion in styles unseen during training. Moreover, the probabilistic nature of our variational framework enables the generation of a variety of outputs given the same input, addressing the stochastic nature of gesture motion. The GENEA challenge evaluation showed that our model produces full-body motion with highly competitive levels of human-likeness. | 711,118 |
Title: GestureMaster: Graph-based Speech-driven Gesture Generation
Abstract: ABSTRACT This paper describes the GestureMaster entry to the GENEA (Generation and Evaluation of Non-verbal Behaviour for Embodied Agents) Challenge 2022. Given speech audio and text transcriptions, GestureMaster can automatically generate a high-quality gesture sequence to accompany the input audio and text transcriptions in terms of style and rhythm. The GestureMaster system is based on the recent ChoreoMaster publication [12]. ChoreoMaster can generate dance motion given a piece of music. We make some adjustments to ChoreoMaster to suit the speech-driven gesture generation task. We are pleased to see that among the participating systems, our entry attained the highest median score in the human-likeness evaluation. In the appropriateness evaluation, we ranked first in the upper-body study and second in the full-body study. | 711,119 |
Title: TransGesture: Autoregressive Gesture Generation with RNN-Transducer
Abstract: ABSTRACT This paper presents a gesture generation model based on an RNN-transducer, submitted to the GENEA Challenge 2022. The proposed model consists of three neural networks: Encoder, Prediction Network, and Joint Network, which can be jointly trained in an end-to-end manner. We also introduce new loss functions, namely statistical losses, as the additional term to the standard MSE loss to put motion statistics of generated gestures close to the ground truths’. Finally, we show the subjective evaluation results and discuss the results and takeaways from the challenge. | 711,120 |
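A small sketch of what a "statistical loss" term added to the standard MSE could look like, matching the mean and standard deviation of generated and ground-truth motion; the exact statistics and weighting used in the paper are not specified here.

```python
# Illustrative motion loss: frame-wise MSE plus terms that pull the per-joint
# mean and standard deviation of the generated motion toward the ground truth's.
import torch
import torch.nn.functional as F

def motion_loss(pred, target, w_stat=0.1):
    # pred, target: (batch, time, joints)
    mse = F.mse_loss(pred, target)
    mean_term = F.mse_loss(pred.mean(dim=1), target.mean(dim=1))
    std_term = F.mse_loss(pred.std(dim=1), target.std(dim=1))
    return mse + w_stat * (mean_term + std_term)

loss = motion_loss(torch.randn(4, 120, 57), torch.randn(4, 120, 57))
```

Matching these statistics is one simple way to discourage the overly damped, low-variance motions that plain MSE regression tends to produce.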
Title: Hybrid Seq2Seq Architecture for 3D Co-Speech Gesture Generation
Abstract: ABSTRACT This paper describes the co-speech gesture generation system developed by DSI team for the GENEA challenge 2022. The proposed framework features a unique hybrid encoder-decoder architecture based on transformer networks and recurrent neural networks. The proposed framework has been trained using only the official training data split of the challenge and its performance has been evaluated on the testing split. The framework has achieved promising results on both the subjective (specially the human-likeness) and objective evaluation metrics. | 711,121 |
Title: 3rd ICMI Workshop on Bridging Social Sciences and AI for Understanding Child Behaviour
Abstract: ABSTRACT Child behaviour is a topic of great scientific interest across a wide range of disciplines, including social sciences and artificial intelligence (AI). Knowledge in these different fields is not yet integrated to its full potential. The aim of this workshop was to bring researchers from these fields together. The first two workshops had a significant impact. In this workshop, we discussed topics such as the use of AI techniques to better examine and model interactions and children’s emotional development, analyzing head movement patterns with respect to child age. This workshop was a successful new step towards the objective of bridging social sciences and AI, attracting contributions from various academic fields on child behaviour analysis. This document summarizes the accepted papers. | 711,122 |