SA '18 - SIGGRAPH Asia 2018 Emerging Technologies

Demo of olfactory display with less residual odor

An olfactory display is a device that presents various scents to a user. Integrating such olfactory stimuli with a conventional VR system strongly influences human emotion and creates a more immersive experience. One of the main issues in implementation is that odorants spread into the ambient air, leaving the user unsure which smell is being presented at a given moment. We address this with an innovative yet simple design concept for the olfactory display: an air intake combined with an internal deodorizing filter. Based on this method, we incorporate olfaction into a VR game, giving the user a remarkable sensation through the clear presentation of various smells.
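
A minimal sketch of the presentation cycle this design implies (the timings and the device API are assumptions, not the authors' implementation):

```python
import time

# Hypothetical presentation cycle for an olfactory display with an air
# intake and internal deodorizing filter: between scents, the intake fan
# pulls ambient air through the filter so the next odor starts from clean air.

def present_scent(display, channel: int, emit_s: float = 2.0, purge_s: float = 1.5):
    display.open_valve(channel)      # release the selected odorant
    time.sleep(emit_s)
    display.close_valve(channel)
    display.intake_fan(on=True)      # draw residual odor into the filter
    time.sleep(purge_s)
    display.intake_fan(on=False)     # ambient air is clean again
```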

Edible projection mapping

This installation exhibits dynamic projection mapping onto a pancake with an edible retroreflector as an optical marker. Visitors can see a projected character that welcomes and attracts them depending on the position of the pancake they are holding.

FacePush: experiencing pressure forces on face with HMDs

Simulated haptics is a key component for enhancing immersion in virtual environments. Previous research has proposed various mechanisms to generate different forms of haptic feedback. While many were deployed on the limbs, e.g., through wearable interfaces or handheld controllers, recent research has started to explore displaying haptics directly through HMDs. Examples include thermal [Peiris et al. 2017], vibrotactile [de Jesus Oliveira et al. 2017] and force [Gugenheimer et al. 2016] feedback. We present FacePush, a pulley system incorporated into an HMD to display pressure forces on the user's face. Unlike GyroVR [Gugenheimer et al. 2016], which produced tangential/rotational force on the user's head, FacePush's pulley system creates normal force on the facial region covered by the HMD. The main concept of FacePush is converting the torque provided by the two motors into a normal force pushing on the face, as shown in Figure 1. This normal force pushes the HMD into the user's face, resulting in pressure feedback on the face.
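
The torque-to-force conversion is a one-line calculation; here is a minimal sketch in Python (the pulley radius, strap geometry, and contact area are our assumptions for illustration):

```python
# Sketch of a FacePush-style torque-to-force mapping: each motor winds a
# strap over a pulley, so strap tension is T = tau / r, and the straps
# press the HMD against the face with roughly the sum of the tensions.

def strap_tension(torque_nm: float, pulley_radius_m: float) -> float:
    """Tension in one strap: T = tau / r."""
    return torque_nm / pulley_radius_m

def normal_force(torque_nm: float, pulley_radius_m: float, n_motors: int = 2) -> float:
    """Total normal force on the face, assuming the straps pull roughly
    perpendicular to the face plane (an idealization)."""
    return n_motors * strap_tension(torque_nm, pulley_radius_m)

# Example: two motors at 0.05 N*m each on 10 mm pulleys give 10 N; spread
# over ~30 cm^2 of face contact, that is roughly 3.3 kPa of pressure.
F = normal_force(0.05, 0.010)   # 10.0 N
print(F, F / 30e-4)             # force, pressure in Pa
```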

Gill+Man: breathing through gills experience system

We propose Gill+Man, a gill-breathing simulation system that presents the sensation of breathing through gills like a fish. The system comprises three devices: a breath-sensing device, a swallowing-sensation-presenting device, and a gill-sensation-presenting device. These devices use simple stimulation and combine to produce the sense of having gills.

Hap-link: wearable haptic device on the forearm that presents haptics sensations corresponding to the fingers

We developed a device that presents the haptic sensation of the fingertip to the forearm rather than to the fingertip itself, as a new haptic presentation method for objects in a virtual reality environment. The device adopts a five-bar linkage mechanism and a Peltier element, and presents the strength and direction of a force, vibration, and thermal sensation to the forearm. Compared with a fingertip-mounted display, it can address the issues of weight and size that hinder the free movement of the fingers. Users can feel differences in the texture and hardness/softness of objects, and experiences in the virtual reality environment are better than those without haptic cues, even though the haptic information is not presented directly to the fingertip.
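
The abstract names a five-bar linkage as the force-presenting mechanism. As a reference, here is the standard forward kinematics of a symmetric five-bar in Python (the link lengths and base spacing are illustrative, not Hap-link's actual dimensions):

```python
import math

# Forward kinematics of a symmetric five-bar linkage: two motors set the
# proximal link angles; the end effector P is the intersection of circles
# of radius L2 around the elbow joints A1 and B1.

L1, L2, BASE = 0.04, 0.06, 0.05   # proximal/distal link lengths, base spacing (m)

def five_bar_fk(theta_a: float, theta_b: float):
    ax, ay = -BASE / 2 + L1 * math.cos(theta_a), L1 * math.sin(theta_a)
    bx, by = BASE / 2 + L1 * math.cos(theta_b), L1 * math.sin(theta_b)
    dx, dy = bx - ax, by - ay
    d = math.hypot(dx, dy)
    if d == 0 or d > 2 * L2:
        raise ValueError("unreachable configuration")
    h = math.sqrt(L2 * L2 - (d / 2) ** 2)     # elbow-to-midpoint offset
    mx, my = ax + dx / 2, ay + dy / 2
    return mx - h * dy / d, my + h * dx / d   # take the solution above the base

print(five_bar_fk(math.radians(100), math.radians(80)))  # ~(0.0, 0.090)
```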

Leg-jack: generation of the sensation of walking by electrical and kinesthetic stimuli to the lower limbs

We developed a neurosensory and kinesthetic stimulation system that generates a walking sensation for a seated user. An electrical stimulus is applied to the Achilles and tibialis anterior tendons, together with a kinesthetic stimulus generated by a lower-limb device driven synchronously with an egocentric visual scene during virtual walking. The system works as part of an experience-replication scheme that aims to convey another person's physical activity; we focused on walking as a common bodily activity of humans. An evaluation experiment showed that each stimulus increased the walking sensation at the 1% significance level. In this demonstration, a user seated on a chair feels as if he/she is walking through a haunted house. The user can move the upper body freely to look around with a virtual flashlight, but the lower body is possessed by another: the user walks into the house despite his/her own intention. The work gives the user a realistic experience that cannot be sufficiently generated by video and sound alone.
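
The key requirement is that all three channels stay locked to one gait phase. A sketch of such a scheduler (the gait timing, stimulation windows, and device API are assumptions, not the authors' implementation):

```python
# One shared gait phase keeps the egocentric visual scene, the tendon
# electrical stimuli, and the lower-limb kinesthetic device in sync.

GAIT_PERIOD_S = 1.2  # assumed duration of one full gait cycle

def gait_phase(t: float) -> float:
    """Normalized position within the gait cycle, 0..1, at time t seconds."""
    return (t % GAIT_PERIOD_S) / GAIT_PERIOD_S

def update(t: float, dev) -> None:
    p = gait_phase(t)
    dev.update_visual_scene(p)      # egocentric camera advances with the gait
    dev.move_lower_limb(p)          # kinesthetic stimulus from the leg device
    # Electrical stimuli: Achilles tendon around push-off, tibialis anterior
    # during swing (illustrative windows, not measured gait events).
    dev.stimulate("achilles", on=0.30 <= p < 0.45)
    dev.stimulate("tibialis_anterior", on=0.60 <= p < 0.80)
```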

Luciola: a light-emitting particle moving in mid-air based on ultrasonic levitation and wireless powering

In this paper, we present an approach to levitating a small object with an embedded electronic circuit. Luciola is a light-emitting particle, 3.5 mm in diameter and weighing 16.2 mg, that moves in mid-air. The novelty of this work is an ultrasonically levitated electronic object powered by resonant inductive coupling. To enable levitation of the particle, a custom IC chip is essential for reducing its size and weight. This custom IC chip is designed to light the LED intermittently, which increases the maximum distance between the transmitter and receiver coils. Luciola is applied as a self-luminous pixel in a three-dimensional (3D) mid-air display, and the drawing of characters in mid-air is also demonstrated.
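
Why intermittent lighting extends the coil range can be seen with back-of-envelope arithmetic (our reasoning, not from the paper): averaging the LED load over a duty cycle D cuts the required received power to D times the peak, and for small, well-separated coaxial coils the mutual inductance falls roughly as 1/d^3, so transferred power falls roughly as M^2 ~ 1/d^6:

```python
# Under the 1/d^6 power falloff assumption, the usable coil distance
# scales as duty_cycle ** (-1/6).

def range_gain(duty_cycle: float) -> float:
    return duty_cycle ** (-1.0 / 6.0)

for d in (1.0, 0.25, 0.05):
    print(f"duty {d:>5.0%}: max distance x{range_gain(d):.2f}")
# duty  100%: max distance x1.00
# duty   25%: max distance x1.26
# duty    5%: max distance x1.65
```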

Magic zoetrope: representation of animation by multi-layer 3D zoetrope with a semitransparent mirror

In this research, we propose a multilayered 3D zoetrope called the "Magic Zoetrope", which makes it possible to animate two independent object groups concurrently and to represent various alterations in the animation, unlike a conventional 3D zoetrope. A conventional 3D zoetrope has only one object group illuminated by a single strobe light, so the presented animation is always periodic and unchanging. Some studies [Miyashita et al. 2016; Smoot et al. 2010; Yoshida et al. 2016] and artworks, for example the Time Stratum series by Toshio Iwai, attempted to expand the expressive range of 3D zoetropes, but they did not focus on animating multiple subjects with alterations in their mutual relation, as in video animation.
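
The underlying strobe arithmetic is standard zoetrope timing (the Magic Zoetrope's actual frame counts and speeds are not given in the abstract; the numbers below are illustrative):

```python
# With N sculpted frames on a platter spinning at R revolutions per second,
# each frame moves into its neighbor's position N*R times per second, so a
# strobe flashing at f = N * R freezes the motion. Two layers with
# different frame counts therefore need two independently timed strobes.

def strobe_hz(n_frames: int, rev_per_sec: float) -> float:
    return n_frames * rev_per_sec

print(strobe_hz(24, 0.75))  # outer layer: 18.0 flashes/s
print(strobe_hz(18, 0.75))  # inner layer: 13.5 flashes/s
```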

Magnetact: magnetic-sheet-based haptic interfaces for touch devices

This paper presents a rapid prototyping method for haptic interfaces on touch devices, utilizing magnetic rubber sheets and conductive materials. When a magnetic sheet is thin enough, the capacitive sensor of the touch device can detect the user's finger behind the sheet owing to the sheet's dielectric behavior. Furthermore, by changing the magnetic pattern of the sheet with a handheld magnetizing tool, the tactile feedback can be customized within seconds. Because the construction of the interface is so simple, this method lets users customize not only the size and shape but also the haptic feedback of the tangible interface. We demonstrate several types of interface, such as buttons, sliders, switches, and cross-keys.

Relaxushion: controlling the rhythm of breathing for relaxation by overwriting somatic sensation

In this study, we propose a method to control the rhythm of breathing for relaxation by overwriting somatic sensation. The way we breathe plays an essential role in controlling the state of mind and body, and many studies have tried to guide breathing rhythm using lights, sounds, vibrations, and so on. However, these approaches require prior training to adjust a user's breathing rhythm to the system. To control the breathing rhythm more effectively, we focus on overwriting somatic sensation: we hypothesized that we could modify the breathing rhythm if users confuse a device's motion with their own breathing motion. We therefore constructed "Relaxushion", a cushion-type device that presents breathing motion. When we embrace this breathing cushion, it feels like placing our hands on our stomachs. A brief user study shows that our method can control the rhythm of breathing without prior training or conscious attention to the device.
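
One plausible way to drive such a cushion is to start in sync with the user's measured breathing rate and drift slowly toward a calmer target; a minimal sketch, with the rates, time constant, and actuator interface all assumed:

```python
import math

class Relaxushion:
    """Sketch of an entrainment signal: match the user's breathing rate,
    then slow toward a relaxed target so breathing follows unconsciously."""

    def __init__(self, user_bpm: float, target_bpm: float = 6.0, tau: float = 60.0):
        self.rate = user_bpm     # currently presented rate (breaths/min)
        self.target = target_bpm
        self.tau = tau           # seconds over which the rate drifts down
        self.phase = 0.0         # accumulated breathing phase (radians)

    def step(self, dt: float) -> float:
        """Advance dt seconds; return normalized inflation in 0..1."""
        self.rate += (self.target - self.rate) * (dt / self.tau)
        self.phase += 2.0 * math.pi * (self.rate / 60.0) * dt
        return 0.5 * (1.0 + math.sin(self.phase))

cushion = Relaxushion(user_bpm=15.0)
for _ in range(10):
    inflation = cushion.step(dt=0.05)  # drive the pump/actuator with this
```

Accumulating phase rather than computing sin(2*pi*f*t) directly keeps the motion continuous while the rate changes.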

RFIDesk: an interactive surface for multi-touch and rich-ID stackable tangible interactions

This work introduces RFIDesk, an interactive surface that enables both multi-touch and rich-ID stackable tangible interactions. Using ultra-high-frequency (UHF) radio-frequency identification (RFID) technology, RFIDesk can effectively identify the elements of a stack. Furthermore, the system integrates capacitive multi-touch sensing based on indium tin oxide (ITO) to detect touch events while preserving interface transparency, enabling rich visual feedback to be displayed under the stackable objects. Interference between the two sensing technologies is resolved by time-division multiplexed sampling. We use a tangible tower-defense game to demonstrate the interaction possibilities of the system.
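
A sketch of the time-division multiplexed sampling described above (the slot lengths and the reader/touch APIs are assumptions): the RFID reader and the capacitive scan never run in the same slot, so the RF field cannot corrupt the capacitive measurements.

```python
RFID_SLOT_S = 0.040   # one RFID inventory round
TOUCH_SLOT_S = 0.010  # one capacitive scan

def sense_loop(rfid, touch, handle):
    while True:
        rfid.field_on()
        stack_ids = rfid.inventory(timeout=RFID_SLOT_S)  # IDs in the stack
        rfid.field_off()                                 # quiet the RF field
        touches = touch.scan(duration=TOUCH_SLOT_S)      # now read touches
        handle(stack_ids, touches)                       # app logic / feedback
```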

Spatially augmented depth and transparency in paper materials

The human visual system uses cast shadows to judge the three-dimensional layout of an object. The purpose of this installation is to demonstrate novel visual illusions of depth and transparency for paper materials, induced by conventional light projection of cast-shadow patterns; thus, the installation focuses on perceptual rather than technical aspects. While a target object is illuminated, its spatial vicinity is darkened to produce the visual impression of the object's shadow. By controlling the blurriness of the cast shadow and/or the spatial distance between the object and its shadow, a perceptual change in the layout of the object is induced. The audience can interactively enjoy visual experiences in which objects and letters on a paper appear to float in the air. We also demonstrate that the material appearance of a real object can be edited by manipulating the shape of the shadow: an opaque colored paper appears to be a transparent color film floating in the air.
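
The shadow manipulation can be sketched in a few lines of image code (the offsets, blur scaling, and darkening gain are illustrative, not the installation's calibrated values):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, shift

# Given a binary mask of the target object as seen by the projector, darken
# its surroundings with an offset, blurred copy of the mask. Larger offsets
# and blur make the object appear to float higher above the paper.

def shadow_layer(mask: np.ndarray, height: float, light_dir=(1.0, 1.0)):
    """mask: HxW array in [0,1]; height: simulated elevation in pixels."""
    offset = (height * light_dir[0], height * light_dir[1])
    moved = shift(mask.astype(float), offset, order=1, cval=0.0)
    blurred = gaussian_filter(moved, sigma=0.3 * height + 1.0)
    shadow = blurred * (1.0 - mask)   # never darken the object itself
    return 1.0 - 0.6 * shadow         # projector gain image in [0.4, 1]
```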

TactGAN: vibrotactile designing driven by GAN-based automatic generation

In this study, we propose a vibrotactile feedback design system built on a GAN (Generative Adversarial Network)-based vibrotactile signal generator, TactGAN. Preparing appropriate vibrotactile signals for applications is difficult and time-consuming, because signals must be recorded or hand-tuned whenever the required signals do not exist in a database of vibrotactile stimuli. To solve these problems, TactGAN generates signals that present a specific tactile impression based on user-defined parameters; it can also automatically generate signals presenting the tactile impression of images. This enables the rapid design of vibrotactile signals for applications with such feedback. Users can experience the rapid design process of vibrotactile stimuli for specific user interfaces or specific application content. TactGAN allows us to apply various vibrotactile stimuli to UI components such as buttons, using material categories or tactile words, and to attach textures with vibrotactile feedback to 3D models.
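
The abstract does not give the architecture, so the following is only a minimal sketch of label-conditioned generation in the spirit of TactGAN; the label vocabulary, dimensions, and network are all assumptions:

```python
import torch

MATERIALS = ["metal", "wood", "fabric", "paper"]   # assumed label vocabulary
Z_DIM, SIGNAL_LEN = 64, 1024

generator = torch.nn.Sequential(                   # stand-in for a trained G
    torch.nn.Linear(Z_DIM + len(MATERIALS), 256),
    torch.nn.ReLU(),
    torch.nn.Linear(256, SIGNAL_LEN),
    torch.nn.Tanh(),                               # waveform in [-1, 1]
)

def generate_vibration(material: str) -> torch.Tensor:
    cond = torch.zeros(len(MATERIALS))
    cond[MATERIALS.index(material)] = 1.0          # one-hot tactile condition
    z = torch.randn(Z_DIM)                         # varied z, same impression
    with torch.no_grad():
        return generator(torch.cat([z, cond]))

waveform = generate_vibration("fabric")            # send to a vibrotactile actuator
```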

Tangible projection mapping: dynamic appearance augmenting of objects in hands

We propose Tangible Projection Mapping, a technique for dynamically augmenting the appearance of an object in a user's hands by projection. The technique allows users to hold a target object freely and augments its appearance in various postures as the user manipulates it. In addition, Tangible Projection Mapping has the potential to contribute to the widespread use of dynamic projection mapping because it uses only off-the-shelf devices. In our demonstration, arbitrary textures and movies can be projected onto objects of various shapes, providing a deep sense of unity with the attractively enhanced object in the hands.
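
At the core of any such system is mapping tracked model points to projector pixels; a sketch with a standard pinhole model (the intrinsics below are placeholders, not the authors' calibration):

```python
import numpy as np

K = np.array([[1400.0, 0.0, 960.0],   # assumed projector intrinsics
              [0.0, 1400.0, 540.0],
              [0.0, 0.0, 1.0]])

def project(points_obj: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """points_obj: Nx3 model points; R, t: tracked object-to-projector pose.
    Returns Nx2 projector pixel coordinates."""
    cam = points_obj @ R.T + t        # transform into the projector's frame
    pix = cam @ K.T
    return pix[:, :2] / pix[:, 2:3]   # perspective divide
```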

The living wall display: physical augmentation of interactive content using an autonomous mobile display

The Living Wall Display shows interactive content on a mobile wall screen that moves in concert with the content's animation. To augment the interaction experience, the display dynamically changes its position and orientation in response to content animation triggered by user interactions. We implemented three proof-of-concept prototypes that represent the pseudo-force impact of the interactive content through physical screen movement. Pilot studies show that the Living Wall augments content expressiveness and increases the sense of presence of the screen content.
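
One simple way to realize such a pseudo-force mapping is a spring-damper recoil (the gains and the mobile-base interface are assumptions, not the authors' controller):

```python
# When content strikes the wall, the display recoils along the impact
# direction and a spring-damper pulls it back to its home pose.

def recoil_step(x: float, v: float, impulse: float, dt: float,
                k: float = 20.0, c: float = 6.0, mass: float = 1.0):
    """x: displacement from home (m); v: velocity (m/s); impulse: N*s."""
    v += impulse / mass              # kick from the virtual impact
    a = (-k * x - c * v) / mass      # spring-damper pull toward home
    v += a * dt
    x += v * dt
    return x, v                      # command x to the mobile base
```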

TuVe: a flexible display with a tube

Ordinary displays, e.g., liquid crystal displays (LCDs), can only provide two-dimensional information; expressions and interactions with such displays are limited to the surface. Many studies on novel display systems have been proposed to tackle these limitations and provide three-dimensional information. A display made of a tube carrying fluids could be one solution: a fluid-filled tube can take various forms according to its environment, making it possible to present information on variously shaped surfaces. Dietz et al. proposed a kit for prototyping user interfaces with fluids [Dietz 2014], discussing components, e.g., pumps, tubes, and tubing accessories, with which users can easily construct fluidic user interfaces. Popp proposed an art piece, "bit.flow" [Popp 2011], which consists of several tubes, two-phase flows inside them, and pumps that control the flow in each tube. However, neither work discusses adapting the control method to the various shapes a tube can take; information could only be presented in a known, simple shape, and users could not change the shape of the surface. To tackle these issues, we propose TuVe, a novel tube display consisting of a tube and fluids that offers a dynamically shape-changing display with computer-vision-based calibration.
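
A sketch of how such vision-based calibration could work (our reading of the abstract; the marker color and pump API are assumptions): pump a single dyed slug through the tube while a camera tracks it, recording where each pumped step lands on screen. The resulting lookup table maps "position along the tube" to an image location, whatever shape the tube takes.

```python
import cv2

def calibrate(camera, pump, n_steps: int):
    lookup = []                              # pumped step -> (x, y) in image
    for _ in range(n_steps):
        pump.advance_one_step()              # move the dyed slug one unit
        ok, frame = camera.read()
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, (100, 120, 80), (130, 255, 255))  # blue dye
        m = cv2.moments(mask)
        if m["m00"] > 0:                     # slug visible: record centroid
            lookup.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return lookup  # the display then schedules slugs to land where needed
```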

VarioLight: hybrid dynamic projection mapping using high-speed projector and optical axis controller

Projection mapping has attracted much attention in recent years, and many studies have tried to extend it to dynamic objects. However, such systems face problems in the range of motion, the resolution, and the kinds of objects onto which they can project. In this paper, we propose a method for dynamic projection mapping that combines a high-speed/low-latency projector with a mirror-based high-speed optical axis controller. Based on high-speed visual feedback using multiple dot markers, dynamic projection mapping onto rotating/deforming objects moving over a wide range, with almost imperceptible delay, becomes possible.
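
The feedback loop can be sketched as follows (the gain and the device APIs are assumptions, not VarioLight's implementation): each frame, estimate the target's image position from the dot markers and steer the two-axis mirror so the optical axis stays locked onto the moving object.

```python
def track_step(camera, mirror, gain: float = 0.4):
    dots = camera.detect_dot_markers()            # marker centroids in pixels
    if not dots:
        return
    cx = sum(x for x, _ in dots) / len(dots)      # target center in the image
    cy = sum(y for _, y in dots) / len(dots)
    ex = cx - camera.width / 2                    # pixel error from the axis
    ey = cy - camera.height / 2
    mirror.pan += gain * ex * camera.rad_per_px   # proportional steering that
    mirror.tilt += gain * ey * camera.rad_per_px  # recenters the target
```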

Xpression: mobile real-time facial expression transfer

We developed Xpression, a mobile application that allows users to reenact faces from images and videos using only the RGB camera of a mobile device. It transfers facial expressions from a source user to a target. Unlike other reenactment research, our method works on video as well as still images and requires only a mobile device. The application is freely available to the public on the iOS App Store.
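
The app's actual method is not described; one common way to realize such transfer is via blendshape weights, sketched below (the landmark tracker and blendshape basis are assumptions): fit per-frame expression weights on the source face, then apply the same weights to the target's neutral face.

```python
import numpy as np

def fit_weights(landmarks: np.ndarray, neutral: np.ndarray,
                basis: np.ndarray) -> np.ndarray:
    """landmarks, neutral: Nx2 source points; basis: K x N x 2 blendshapes."""
    delta = (landmarks - neutral).ravel()          # source deformation
    B = basis.reshape(basis.shape[0], -1).T        # (2N) x K design matrix
    w, *_ = np.linalg.lstsq(B, delta, rcond=None)  # least-squares fit
    return np.clip(w, 0.0, 1.0)                    # keep weights plausible

def reenact(target_neutral: np.ndarray, target_basis: np.ndarray,
            weights: np.ndarray) -> np.ndarray:
    """Deform the target's neutral landmarks by the source's expression."""
    return target_neutral + np.tensordot(weights, target_basis, axes=1)
```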