We developed an application-layer multiplexing scheme for teleoperation systems with multimodal feedback (video, audio and haptics). The available transmission resources are carefully allocated to shield the haptic signal from delay jitter that the size and arrival time of the video and audio data would otherwise introduce. The multiplexing scheme gives high priority to the haptic signal and applies a preemptive-resume scheduling strategy to stream the audio and video data. The proposed approach estimates the available transmission rate in real time and adapts the video bitrate, data throughput and force buffer size accordingly. Furthermore, the proposed scheme detects sudden transmission rate drops and applies congestion control to avoid abrupt delay increases and to converge promptly to the altered transmission rate. The performance of the proposed scheme is measured objectively in terms of end-to-end signal latencies, packet rates and peak signal-to-noise ratio (PSNR) for visual quality. Moreover, peak-delay and convergence-time measurements are carried out to investigate the performance of the congestion control mode of the system.
B. Cizmeci, X. Xu, R. Chaudhari, C. Bachhuber, N. Alt, E. Steinbach,
A Multiplexing Scheme for Multimodal Teleoperation,
ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM), vol. 13, no. 2, May 2017.
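The rate estimation and congestion control behavior described above can be sketched as a throughput estimator that smooths normal samples but snaps down immediately when a sudden rate drop is detected. This is a minimal illustrative sketch, not the paper's algorithm; the function name, the EWMA smoothing factor and the drop threshold are assumptions.

```python
def update_rate(est, sample, alpha=0.125, drop_ratio=0.5):
    """Update the transmission-rate estimate from a new throughput sample.

    Normal samples are blended in with an exponentially weighted moving
    average. A sample far below the current estimate (less than
    drop_ratio * est) is treated as a sudden rate drop: the estimator
    converges to it immediately instead of averaging slowly, which avoids
    an abrupt delay build-up. Returns (new_estimate, congestion_flag).
    """
    if sample < drop_ratio * est:
        return sample, True            # congestion: snap down to the new rate
    return (1 - alpha) * est + alpha * sample, False
```

The immediate snap-down mirrors the goal stated above: converging promptly to the altered transmission rate instead of letting queues (and thus delay) grow while a slow average catches up.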
This video shows the general concept of the point cloud-based model-mediated teleoperation (pcbMMT) approach, an extension of model-mediated teleoperation (MMT) that can deal with complex environments using point cloud surface models. In our system, a time-of-flight camera is used to capture a high-resolution point cloud model of the object surface. The point cloud model and the physical properties of the object (stiffness and surface friction coefficient) are estimated at the slave side in real time and transmitted to the master side. A simple point cloud-based haptic rendering algorithm is adopted to generate the force feedback signals directly from the point cloud model without first converting it into a 3D mesh. The video shows that the teleoperation system remains stable in the presence of 500 ms communication delay.
X. Xu, B. Cizmeci, A. Al-Nuaimi, E. Steinbach,
Point Cloud-based Model-mediated Teleoperation with Dynamic and Perception-based Model Updating,
IEEE Transactions on Instrumentation and Measurement, vol. 63, no. 11, pp. 2558-2569, 2014.
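Rendering forces directly from a point cloud, as described above, can be illustrated with a penalty-based sketch: find the surface point nearest to the haptic interaction point (HIP) and produce a force proportional to the penetration depth along that point's normal. This is an assumed minimal rendering rule for illustration, not the algorithm from the paper; the function name and the brute-force nearest-neighbor search are assumptions.

```python
import numpy as np

def render_force(hip, points, normals, stiffness):
    """Penalty force from the point cloud point nearest to the HIP.

    depth > 0 means the HIP lies below the local surface (measured along
    the point's normal); the force then pushes back along that normal
    with magnitude stiffness * depth. No mesh is ever constructed.
    """
    hip = np.asarray(hip, dtype=float)
    points = np.asarray(points, dtype=float)
    normals = np.asarray(normals, dtype=float)
    i = int(np.argmin(np.linalg.norm(points - hip, axis=1)))  # nearest point
    depth = float(np.dot(points[i] - hip, normals[i]))        # penetration
    if depth <= 0.0:
        return np.zeros(3)             # HIP above the surface: no contact
    return stiffness * depth * normals[i]
```

A real implementation would use a spatial index (e.g. a k-d tree) instead of the brute-force search, but the force rule itself stays this simple.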
We propose a novel radial function-based deformation (RFBD) approach to enable real-time modeling and haptic rendering of deformable objects including frictional contact. Due to the simplicity of the RFBD approach, the model parameters of the remote environment can be obtained in real time.
X. Xu, E. Steinbach,
Towards Real-time Modeling and Haptic Rendering of Deformable Objects for Point Cloud-based Model-mediated Teleoperation,
5th IEEE International Workshop on Hot Topics in 3D (Hot3D), Proc. of the IEEE International Conference on Multimedia & Expo (ICME), Chengdu, China, July 2014.
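A radial function-based deformation can be pictured as displacing surface points with a weight that decays radially from the contact point. The sketch below uses an assumed quadratic falloff kernel and a press along -z purely for illustration; the actual RFBD formulation, kernel and parameters are those of the paper, not this code.

```python
import math

def deform(points, contact, depth, radius):
    """Displace surface points radially around a contact (illustrative).

    Each point within `radius` of the contact moves along -z by
    w * depth, where w = 1 - (r/radius)^2 decays smoothly to zero at the
    rim; points outside the radius stay fixed.
    """
    out = []
    for p in points:
        r = math.dist(p, contact)
        w = (1.0 - (r / radius) ** 2) if r < radius else 0.0
        out.append((p[0], p[1], p[2] - w * depth))
    return out
```

Because such a deformation is a closed-form function of the contact point and a few scalar parameters, it can be evaluated fast enough for real-time model updates, which is the property the RFBD approach exploits.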
A bilateral haptic teleoperation system operated with delays of 0 ms, 200 ms, and 500 ms. The master device is an Omega.6; the slave robot is a KUKA lightweight arm. A perceptual-deadband-based haptic data reduction scheme is integrated into the time-domain passivity approach, which reduces the number of haptic data packets transmitted over the network while guaranteeing system stability in the presence of communication delay.
X. Xu, C. Schuwerk, B. Cizmeci, E. Steinbach,
Energy Prediction for Teleoperation Systems that Combine the Time Domain Passivity Approach with Perceptual Deadband-based Haptic Data Reduction,
IEEE Transactions on Haptics, vol. 9, no. 4, pp. 560-573, October 2016.
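The core of perceptual-deadband-based data reduction is a Weber-style relative threshold: a new haptic sample is transmitted only if it deviates from the last transmitted value by more than a fixed fraction of that value; otherwise the receiver keeps using the last value. The sketch below shows this rule for a scalar signal; the function name and the example deadband parameter k are assumptions, and it omits the passivity-layer integration described above.

```python
def deadband_filter(samples, k=0.1):
    """Perceptual deadband transmission for a scalar haptic signal.

    A sample is sent only when it differs from the most recently sent
    value by more than k * |last sent value| (just-noticeable-difference
    threshold). Returns the list of transmitted samples.
    """
    sent, last = [], None
    for s in samples:
        if last is None or abs(s - last) > k * abs(last):
            sent.append(s)             # perceivable change: transmit
            last = s
        # else: change is below the perception threshold, drop the packet
    return sent
```

In a full system the dropped samples are reconstructed at the receiver by holding (or extrapolating) the last transmitted value, and the deadband interacts with the passivity controller as analyzed in the paper.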
Impact of delay on system stability for a bilateral haptic teleoperation system. Even a delay as small as 10 ms can lead to instability for hard contact.
Imagine you are using your mobile device to browse the Internet for new furniture, home decoration or clothes. Today’s systems provide us only with information about the look of the products, but how does their surface feel when touched? For the future, we imagine systems that allow us to remotely enjoy the look and feel of products. The impact of such technology could be enormous, especially for E-Commerce. Tangible example applications are product customization, selection of materials, product browsing or virtual product showcases.
The "Remote Texture Exploration" app displays surface textures on the TPad Phone; the textures can be received from a texture database or from remote smartphones. Vibration and audio feedback are included to enrich the user experience. The texture models used for display recreate important dimensions of human tactile perception, such as roughness and friction. New texture models are created from live recordings of the smartphone sensors (IMU, camera, microphone).
When a rigid tool is stroked over an object surface, the vibrations induced on the tool, which represent the interaction between the tool and the surface texture, can be measured with an accelerometer. Such acceleration signals can be used to recognize or classify object surface textures. The temporal and spectral properties of the acquired signals, however, depend heavily on parameters such as the force applied to the surface and the lateral velocity during exploration. Robust features that are invariant to such scan-time parameters are currently lacking, but would enable texture classification and recognition from uncontrolled human exploratory movements. We introduce a haptic texture database that allows for a systematic analysis of feature candidates. The database includes accelerations recorded during controlled and well-defined texture scans, as well as uncontrolled human free-hand texture explorations, for 43 different textures.
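One simple candidate for a scan-parameter-tolerant feature, of the kind such a database lets you evaluate, is a normalized spectral shape: pooling the acceleration magnitude spectrum into bands and normalizing out total energy removes overall signal strength, which scales with contact force. This is a hypothetical baseline feature for illustration, not a feature proposed in the paper; the function name and band count are assumptions.

```python
import numpy as np

def band_feature(accel, n_bands=8):
    """Energy-normalized spectral band feature for an acceleration trace.

    Computes the magnitude spectrum (DC removed), pools it into n_bands
    contiguous bands, and normalizes the band energies to sum to one, so
    uniformly scaling the signal (e.g. pressing harder) leaves the
    feature unchanged. Velocity-induced frequency shifts are NOT removed
    by this simple scheme.
    """
    spec = np.abs(np.fft.rfft(np.asarray(accel, dtype=float)))[1:]  # drop DC
    energy = np.array([b.sum() for b in np.array_split(spec, n_bands)])
    total = energy.sum()
    return energy / total if total > 0 else energy
```

The controlled scans in the database (fixed force and velocity) versus the free-hand recordings make it possible to test exactly how far such a feature's invariance actually holds.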
We developed a novel multiplexing scheme for teleoperation over constant-bitrate (CBR) communication links. The approach uniformly divides the channel into 1 ms resource buckets and controls the size of the transmitted video packets as a function of the irregular haptic transmission events. The performance of the proposed multiplexing scheme was measured objectively in terms of delay jitter and packet rates. Our evaluation shows that the proposed approach provides a guaranteed constant delay for the time-critical force signal while introducing only acceptable video delay.
B. Cizmeci, R. Chaudhari, X. Xu, N. Alt, E. Steinbach,
A Visual-Haptic Multiplexing Scheme for Teleoperation over Constant-Bitrate Communication Links,
Proc. of the EuroHaptics Conference (accepted for oral presentation), Versailles, France, June 2014.
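The bucket idea above can be sketched as follows: in each 1 ms bucket of the CBR link, any pending haptic packet is transmitted first, and the remaining bytes of the bucket are filled from the video backlog (preemptive-resume). This is an illustrative simulation under assumed units (bytes per bucket), not the paper's implementation; the function name and event format are assumptions.

```python
def schedule(events, bucket_bytes, video_backlog):
    """Fill 1 ms CBR resource buckets with haptic-first priority.

    events: list of (ms, haptic_packet_bytes) per bucket (0 if no haptic
    event fell into that bucket). Each bucket first carries the haptic
    packet, then as much of the video backlog as still fits, so the
    haptic delay stays constant while video absorbs the slack.
    Returns the per-bucket (ms, haptic_sent, video_sent) timeline and the
    remaining video backlog.
    """
    timeline = []
    for t, haptic_bytes in events:
        capacity = bucket_bytes
        sent_haptic = min(haptic_bytes, capacity)   # haptic preempts video
        capacity -= sent_haptic
        sent_video = min(video_backlog, capacity)   # video resumes in the gap
        video_backlog -= sent_video
        timeline.append((t, sent_haptic, sent_video))
    return timeline, video_backlog
```

Because the haptic packet never waits behind video data, its delay is fixed by the bucket period alone, which is exactly the constant-delay guarantee claimed above.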
This video gives an overview of our haptic teleoperation system setup in our lab at the Chair of Media Technology. On the operator side we use a Force Dimension sigma.7 device with 7 active degrees of freedom. The teleoperator side is equipped with a KUKA LWR lightweight robot arm, a JR3 6-DoF force/torque sensor and various cameras and sensors, e.g. a depth-sensing camera and distance sensors. As end-effector we use either a single-point-of-contact stylus or a two-finger parallel gripper with force sensors integrated into the fingers. To simulate a networked haptic teleoperation system with the robot working in a remote environment, the teleoperator can be hidden from the operator with a curtain. Within the local network between the operator and the teleoperator, a network emulator simulates different QoS parameters such as limited bandwidth, delay, jitter or packet loss. This testbed is used to evaluate different coding schemes for the transmission of multimodal media streams.
This video shows a stiffness discrimination experiment performed in a virtual environment. The aim of this study is to determine the Weber fraction of stiffness perception for virtual rigid surface objects explored with the Sigma.7 haptic device. The Weber fraction values will be used to develop a mathematical model of human haptic perception.
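The Weber fraction is the just-noticeable difference (JND) divided by the reference stimulus. From discrimination data it can be estimated by reading the stiffness increment at which the psychometric function crosses a target proportion correct (commonly 75% in a two-alternative task). The sketch below uses assumed linear interpolation between measured levels; the function name, the 75% criterion and the example numbers are illustrative, not results from this study.

```python
def weber_fraction(reference, increments, p_correct, target=0.75):
    """Estimate the Weber fraction c = JND / reference from a measured
    psychometric function.

    increments: tested stiffness increments (ascending), p_correct: the
    proportion of correct discriminations at each increment. The JND is
    the increment where p_correct crosses `target`, found by linear
    interpolation between the two bracketing measurements.
    """
    for k in range(len(increments) - 1):
        p0, p1 = p_correct[k], p_correct[k + 1]
        if p0 <= target <= p1:
            i0, i1 = increments[k], increments[k + 1]
            jnd = i0 + (target - p0) / (p1 - p0) * (i1 - i0)
            return jnd / reference
    raise ValueError("target level not bracketed by the measurements")
```

For example, with a 400 N/m reference and a 75% crossing at a 50 N/m increment, the Weber fraction would be 0.125, i.e. stiffness changes below about 12.5% would go unnoticed.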
This video shows the idea of using a surface geometry-based prediction approach to reduce the haptic data rate. In our prediction scheme, the local object surface features are approximated with simple geometric models (plane or sphere), which are calculated from the position and force signals of the haptic interaction point. The haptic force feedback is then rendered locally based on the predicted surfaces, which reduces the packet rate of haptic data from the slave to the master. Whenever the difference between the predicted force and the real force exceeds a threshold, the prediction geometry model is recalculated and updated.
X. Xu, J. Kammerl, R. Chaudhari, E. Steinbach,
Hybrid Signal-based and Geometry-based Prediction for Haptic Data Reduction,
IEEE International Symposium on Haptic Audio Visual Environments and Games, Qinhuangdao, Hebei, China, October 2011.
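For the plane case, the prediction and the update test described above can be sketched as follows: the predicted force is proportional to the HIP's penetration below the fitted plane, and the model is recomputed only when the prediction error exceeds the threshold. This is a minimal sketch under assumed conventions (penalty rendering, Euclidean force error); the function names and the example parameters are assumptions.

```python
import numpy as np

def predicted_force(hip, plane_point, plane_normal, stiffness):
    """Force rendered from the predicted plane model: proportional to the
    HIP's penetration depth below the plane, along the plane normal."""
    hip = np.asarray(hip, dtype=float)
    n = np.asarray(plane_normal, dtype=float)
    depth = float(np.dot(np.asarray(plane_point, dtype=float) - hip, n))
    return stiffness * max(depth, 0.0) * n     # zero force when not in contact

def needs_update(real_force, pred_force, threshold):
    """Trigger recalculation of the geometry model when the prediction
    error (Euclidean distance between real and predicted force) exceeds
    the threshold; below it, no packet needs to be sent."""
    err = np.linalg.norm(np.asarray(real_force, float) - np.asarray(pred_force, float))
    return float(err) > threshold
```

Between updates the master renders from the plane alone, so the slave-to-master channel only carries packets when `needs_update` fires.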
This video shows how the model-displaced teleoperation (MDT) approach compensates for the visual-haptic asynchrony in model-mediated teleoperation (MMT), which occurs between the non-delayed locally rendered force and the delayed video feedback. In MDT, we adaptively shift the position of the local surface model to delay the haptic contact with the environment, thereby compensating for the visual-haptic asynchrony and avoiding the model-jump effect.
X. Xu, G. Paggetti, E. Steinbach,
Dynamic Model Displacement for Model-mediated Teleoperation,
IEEE World Haptics Conference 2013, Daejeon, Korea, April 2013.
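In one dimension, the displacement idea reduces to shifting the local surface model along the approach direction by the distance the tool covers during the delay, so the locally rendered contact coincides with the delayed video of the contact. The sketch below is an assumed 1-D illustration of this timing argument, not the adaptive scheme from the paper.

```python
def displaced_surface(surface_pos, approach_velocity, delay_s):
    """Shift the local surface model along the approach direction.

    Approaching at `approach_velocity` (m/s), the tool covers
    approach_velocity * delay_s during the communication delay; placing
    the model that much deeper postpones the locally rendered contact by
    exactly the delay, aligning it with the delayed video feedback.
    """
    return surface_pos + approach_velocity * delay_s
```

An adaptive version, as in MDT, would update this shift continuously from the measured velocity and delay so the model does not jump when either changes.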