Abstract
Exploration of volumetric data with reduced exploratory behavior, based on the virtual sectioning concept, was compared with free scanning using the StickGrip linkage-free haptic device. Profiles of the virtual surface were simulated through displacements of the penholder in relation to the pen tip of the stylus. One or two geometric shapes (cylinder, trapezoidal prism, ball, and torus) or their halves and a ripple surface were explored in the absence of visual feedback. In free scanning, the person physically moved the stylus. In parallel scanning, cross-sectional profiles were generated automatically, starting from the location indicated by the stylus. Analysis of the performance of 18 subjects demonstrated that the new haptic visualization and exploration technique allowed them to create accurate mental images and to recognize and identify virtual shapes. The mean number of errors was about 2.5% in the free scanning mode and 1.9% and 1.5% in the parallel scanning mode at playback velocities of 28 mm/s and 42 mm/s, respectively. All participants agreed that the haptic visualization of the 3D virtual surface, presented as cross-sectional slices of the workspace, was robust and easy to use. The method was developed for visualization of spatially distributed data collected by sensors.
1. Introduction
Even in the absence of direct contact and visual feedback, people have to explore physical properties such as friction and roughness, and the compliance and stiffness of the environment (in geophysics and monitoring) and of materials (in nondestructive testing). Complementing visual information, existing haptic shape-rendering algorithms focus on rendering the interaction between the tip of a haptic probe and the virtual surface. Using a haptic interface and analyzing the effects of different types of force feedback, the operator of a hand-held detector can feel changes in roughness, rigidity, and other physical properties of a contact. However, human perception of spatially distributed data, for example, surface topography with varying stiffness, relying on single-point-based exploration techniques often fails to provide a realistic feeling of complex topological 3D surfaces and intersections [1–7]. Although manual palpation can be very effective, free scanning with a single-point probe is unnatural and significantly increases the cognitive load required to establish the right relations between successive samples of sensory information separated in space and time [8]. Haptic recognition and identification of spatial objects, their unique shape, and location do not always lead to the correct decision [9, 10]. Therefore, developing new techniques for tangible exploration of and interaction with virtual objects [11] remains a great challenge, in order to facilitate interpretation of spatially distributed data obtained, for example, by a hand-held detector [12].
The main difference between interacting with physical objects using the fingers and interacting with virtual objects using a rigid probe is that natural manipulation involves multiple areas of the objects and fingertips and relies on multiple sources of haptic information. This being so, important components of surface exploration are the kinesthetic sense of distance to the surface of interaction [11] and self-perception of the finger joint-angle positions [13]. Competing afferent flows allow the person to immediately sense relative differences between adjacent locations by sharpening the curvature gradient due to the lateral inhibition phenomenon [14, 15].
The question of how to efficiently explore complex volumetric surfaces by relying on the haptic sense remains open. The key issue to be solved is an accessible 3D frame of reference and a means of displaying specific exploratory patterns as a sequence of haptic probes [16]. What happens when haptic visualization is more complicated than just point-based local characteristics (a physical parameter or perceptual quality) of the region under the cursor? Could the temporal structuring of sequentially presented spatial data facilitate their mental integration? Could exploratory movements across the volumetric surface, kept in sync, help in integrating haptic signals?
This paper begins with a discussion of related work. Then we present the method and design principles of the experimental setup and the results of a comparative study of two approaches for the presentation and exploration of volumetric shapes in the absence of visual feedback. Finally, we summarize our results and draw conclusions. Some of the cited references relate to studies carried out with blind people. However, we note that the technique in question was developed for engineers and technicians (sighted people) as an alternative visualization of surface topography and spatially distributed data collected by sensors [11, 17–20]. Thus, it is not targeted at blind and visually impaired people.
2. Background
Depending on tactile experience and topological imagination, raised line drawings, even when explored with one or two hands, may make it difficult for blindfolded sighted observers to integrate nonvisual information and identify the overall pattern [8]. In some cases, exploration of swell paper images placed on a graphics tablet and augmented with audio feedback has facilitated the representation of the topology of a graphical environment [21], although the sonification of graphs does not work flawlessly [22, 23].
In the experiments with elementary and composite planar geometric shapes [24], haptic and auditory modalities were combined to improve the mental representation of topological relations in the absence of visual feedback. However, the authors concluded that the constraints of single-point exploration techniques do not permit the presentation of certain basic concepts such as parallelism (of lines and planes) and intersection of 3D shapes in a simple and intuitive way.
Usually, when asked to identify a virtual object, subjects rely on sensory cues and previous experience gained through observation and manipulation involving physical contact with an object embedded in specific contextual settings [25, 26]. To create a true mental image of a geometric shape, an observer has to collect any accessible information about the features of the object, such as the local irregularities of the surface (edges, vertices, convex and concave features, and flatness), and then integrate tactile, proprioceptive, and kinesthetic cues in a specific way [17, 27–32]. However, in the absence of visual feedback, identification of objects having different levels of complexity (number and shape of elements, their symmetry and periodicity) is greatly affected by the conditions of presentation and the exploration techniques [33, 34]. In particular, objects having smooth curved boundaries are more difficult to distinguish than polygons, which may lead to misinterpretation of rounded 3D shapes [21, 35].
A systematic exploration of successive locations creates a sequence of sensations from which the person forms hypotheses and the imagination retrieves a virtual profile. This helps to identify the surface, that is, to recognize and classify the contact area as, for example, curved outward (convex), curved inward (concave), or flat [36]. Many attempts have been made to specify generic types of surface discontinuity [37–40]. However, to recognize a surface discontinuity, the person should analyze not the absolute parameters of the contacts at different locations but their relative positions, that is, local irregularities such as shifts and displacements with respect to a common reference point or reference surface, or relative finger displacements [11, 13, 36].
Some textural features of a virtual surface can be simulated using pseudographic tactile cells to display a small area of the surface around the pointer, where visible irregularities are transformed into a pattern of raised pins. With the appearance of refreshable braille cells on the market, for example, from Metec AG [41], this module functionality was extensively tested by physically connecting it to different input devices such as a stylus, mouse, and joystick [42]. Interaction with geometric shapes was also used to evaluate the functionality of such a reduced display area. Interestingly, subjects preferred visualization techniques that avoided redundant information about local details and yielded a better presentation of the overall indicative features and trends [43–45]. Another approach to exploring virtual images consisted of creating a kind of haptic profilometer (surface profiler), for instance, a two-axis H-frame (absolute) positioning system with braille cells mounted on a carriage able to move along guiding rails [46]. With such a haptic display, in the absence of visual and auditory feedback, blindfolded (sighted) persons were able to recognize the features of polygonal tactile shapes and to identify them from a list of given objects.
It is important to note that the flat surface of interaction determines not only the sensory-motor coordination and strategy of exploration behavior adequate for a given task but also the way of mental processing (componential analysis, feature extraction, and classification) and the reconstruction of the entire image, or pieces of it, from the perceptual data collected. Moreover, an exploration strategy acts as a perceptual filter and as a mechanism for compressing sensory information. Depending on the velocity of scanning and the perceptual threshold, the variation of probe positions is perceived sparsely but effectively, allowing the user to differentiate the gradient of the surface and its global and local irregularities. It is noteworthy that identifying small virtual objects, even when augmented with static and dynamic friction, is a more difficult task than recognizing physical models examined in a natural way by palpation.
To improve haptic simulation techniques, researchers have compared the accuracy of identifying virtual 3D objects and their physical models in the absence of visual feedback. Without vision, it is hard to imagine a proper frame of reference that would be accessible at any point of interaction with different components of a virtual 3D object. Therefore, 3D shape recognition and identification demand many more cognitive resources than the perceptual analysis of flat graphs.
In earlier studies on the exploration of virtual objects, experimenters used the PHANToM haptic device and, later on, the PHANTOM Omni, Omega.3, Novint Falcon, and other linkage-based force-feedback devices. However, when making an inspection with a rigid probe, the person could still contact only a single point of the objects examined [9, 47]. Consequently, to analyze the profile of the shapes through displacements of the tip of the rigid probe and to correlate all the information collected, an observer has to choose a frame of reference and an optimal scanning strategy to discover the features of curved surfaces [46]. When no common frame of reference is available, the person can explore a virtual 3D object piece by piece using occasional sources of reference such as easily detected landmarks (edges, vertices, and faces) or even the skin surface of the subdominant (opposite) hand.
The results reported by Stamm and coworkers [47] demonstrated the limits and problems of interpreting shapes and their components during haptic exploration of corners and edges and of shape orientation and posture. Kyung and coworkers [48] showed that human performance during haptic inspection of geometric polygons using a grabbing force-feedback mouse was significantly better than with a point-based force-feedback interaction technique such as the PHANToM haptic device. Jansson and Larsson [49] investigated prominent features of synthetic human heads with the PHANToM device. This research showed that increasing the amount of haptic information needed to recognize and identify virtual 3D objects soon overloads the perceptual system. The authors concluded that there are three possible solutions for displaying complex virtual objects and scenes in the absence of visual feedback: training the users, simplifying the information being communicated to the user, and developing more efficient haptic interaction techniques and devices.
However, haptic information for object recognition consists not only of perceptual components but also of highly coordinated voluntary behaviors (navigation and exploration) and cognitive resources (mental representations of physical and conceptual attributes) [50, 51]. Therefore, the key question is how to efficiently display and coordinate these components, making complex haptic information easy to perceive and understand.
In this paper, we report the results of a comparison of two approaches for the haptic visualization and exploration of volumetric shapes in the absence of visual feedback: free scanning and parallel scanning with reduced exploratory behavior. The research aimed to evaluate the applicability and effectiveness of these two approaches.
3. Materials and Method
3.1. The Participants
Eighteen volunteers (ten males and eight females) from the local university participated in this study. The age of the participants ranged from 23 to 29 years, with a mean of 24.5 years. None of them had participated in similar experiments before, and none reported any hearing or sensory-motor problems. All participants were regular computer users and reported being right-hand dominant or using their right hand most frequently. As stated in the introduction, only sighted people were recruited, to enable evaluation of the benefits and shortcomings of the technique introduced.
3.2. Method
The human ability to integrate perceptual information over time and space provides the basis of mental imagery [32, 52]. In our study, we relied on the fact that the sighted participants already possessed mental templates (visual-haptic models) of different volumetric objects. To detect specific points, object features, and spatial relations in the absence of visual feedback, an observer should be able to integrate the multiple-touch probes collected in the haptic space. The exploratory strategy is also an important factor and depends on the personal cognitive style of thinking (analytic, holistic, or detail-oriented) and individual haptic experience. Therefore, to facilitate mental processing, haptic information obtained from exploratory patterns should be structured and firmly synchronized.
The “dimensions” of the haptic space (mapping) may nevertheless be different from the dimensions of the visual space adopted for linkage-based force-feedback devices [11]. To collect a sequence of haptic probes specifying the virtual surface, the person can explore the haptic space through self-directed behavior (free exploration) using suitable hardware and software providing the corresponding haptic signals. Alternatively, a sequence of haptic signals can be generated and presented to the person during a specified time interval, as if the study area were actually being scanned in some direction. We call this technique “reduced exploratory behavior”; it can also be interpreted as motionless exploratory patterns.
For example, as can be seen from Figure 1, multiple-touch probes of the virtual object (the top view projection of the upper half of a ring torus) are displayed and perceived exclusively along the z-axis, as a gradient of brightness and the corresponding displacements of the point of grasp of the penholder in relation to the pen tip. These multiple-touch probes can be displayed on the time axis and belong to imaginary section planes [32]. To collect information about the virtual surface, the person only has to mark the initial position of the section plane on the y-axis of the tablet within the slit of the stencil frame and start the scanning process by clicking the left button of the tablet. The displacements of the StickGrip along the z-axis could be proportional, for example, to the grayscale (brightness) level of the invisible image (ring torus). This produces exploratory patterns perceived as virtual cross-sectional slices without the need to physically scan the profile of each trajectory along the x-axis with the Wacom pen.
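To make the mapping concrete, the sketch below (Python/NumPy) converts one row of an 8-bit grayscale height map into a displacement profile for a section plane. The function and variable names (brightness_to_displacement, cross_section, y_start) and the strictly linear mapping are our own assumptions; the paper only states that the grip displacement could be proportional to brightness within the ±20 mm travel described in Section 3.3.

```python
import numpy as np

# Assumed displacement range of the point of grasp (see Section 3.3).
Z_RANGE_MM = (-20.0, 20.0)   # penholder travel relative to the pen tip

def brightness_to_displacement(gray_row: np.ndarray) -> np.ndarray:
    """Map one image row (0..255 brightness) to penholder displacements in mm."""
    z_min, z_max = Z_RANGE_MM
    return z_min + (gray_row.astype(float) / 255.0) * (z_max - z_min)

def cross_section(image: np.ndarray, y_start: int) -> np.ndarray:
    """Displacement profile of the section plane chosen at row y_start.

    The user only marks y_start along the left border; the profile is then
    played back automatically along the x-axis (reduced exploratory behavior).
    """
    return brightness_to_displacement(image[y_start, :])

# Example: a synthetic 8-bit ripple surface rasterized over a 60 x 85 workspace,
# sectioned at y = 30.
row = (127.5 * (1 + np.sin(np.linspace(0, 6 * np.pi, 85)))).astype(np.uint8)
image = np.tile(row, (60, 1))
profile = cross_section(image, y_start=30)   # one scan-line of grip displacements
```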
However, consider a ripple surface: the free scanning technique is independent of the orientation of the surface irregularities, whereas the parallel scanning technique is not. Nevertheless, when exploring virtual shapes in the absence of visual feedback, we do not consider this interaction to occur without computer support. That is, the system/application can analyze spatially distributed (e.g., geophysical) data and manipulate the appearance of the image in an appropriate way so that specific features become more prominent and distinguishable in a cross-sectional analysis.
In contrast to mimicking the visual space, haptic exploration of objects with the virtual sectioning concept and the parallel scanning technique with reduced exploratory behavior has the following benefits:
(i) there is a fixed reference point within each section plane;
(ii) all reference points belong to the same axis (the y-axis), allowing the user to easily correlate exploratory patterns located in parallel section planes and to ascertain relationships, dependencies, and tendencies between corresponding segments located in parallel planes and presented at particular moments of the timeline;
(iii) an exploratory pattern can be repeated within a certain timeframe (e.g., in less than 3 s, Figure 1) as many times as needed in order to form a correct mental image of each cross-sectional slice profile;
(iv) the virtual sectioning method can be applied to the entire haptic space, or to a part of it, to explore one or several objects at a time.
Finally, the new scanning technique with the reduced exploratory behavior would contribute to the research on data visualization in haptic space.
3.3. Apparatus
In spite of advances in existing force-feedback techniques, the work reported here was performed using the StickGrip linkage-free haptic device, a motorized pen grip for the Wacom pen input device, as shown in Figure 2 [53].
The point at which the penholder is held slides up and down the shaft of the Wacom pen, so that, as the user explores the virtual surface with the pen, s/he feels the hand being displaced towards and away from the physical surface of the pen tablet (Wacom Graphire-4).
The Portescap linear stepper motor (20DAM40D2B-L) required no additional gears, produced low noise, and delivered equal torque in both directions of shaft displacement, so there were no directional differences that might confuse the user. The StickGrip has a grip displacement range of 40 mm (±20 mm) with an accuracy of ±0.8 mm, for a Wacom pen with a length of 140 mm. Grip displacements of the point of grasp within this range, at an average speed of about 25 mm/s, give accurate feedback about the distance and direction (closer or further) with respect to the surface of the pen tablet; consequently, such feedback forms a part of the afferent information regarding the heterogeneity of the virtual surface. During preliminary tests of the setup, the two values of 42 and 28 mm/s were adopted for presenting virtual cross-sectional planes in the parallel scanning mode, with a displacement accuracy better than 4%. However, the grip displacements (even at a velocity of 28 mm/s) still constrained the exploration and presentation of the virtual scan-lines when the gradient of deformation of the virtual surface was too high. The distance and direction of the grip displacements were coordinated with the structure of the virtual surface.
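As a quick consistency check on the figures quoted above, the number of resolvable grip positions and the relative accuracy can be worked out as follows. This is illustrative arithmetic only, not code from the StickGrip driver, and it assumes that the 4% figure refers to the ±0.8 mm accuracy relative to the ±20 mm travel.

```python
# Illustrative arithmetic based on the figures quoted above, not driver code.
TRAVEL_MM = 40.0          # total grip travel (+/-20 mm)
ACCURACY_MM = 0.8         # positioning accuracy of the point of grasp

distinct_levels = TRAVEL_MM / ACCURACY_MM          # ~50 resolvable grip positions
relative_accuracy = ACCURACY_MM / (TRAVEL_MM / 2)  # 0.8 mm of +/-20 mm = 4%

print(distinct_levels, relative_accuracy)          # 50.0 0.04
```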
The workspace of exploration was bordered with a frame of 60 × 85 mm in the plane of the tablet (along the x- and y-axes). The frame was used to limit unnecessary exploratory movements and redundant haptic information, so that the virtual surface could easily be scanned with long strokes between opposite borders in any direction. The virtual surfaces were visualized as 8-bit grayscale images (Figures 4 and 5), so the experimenter could monitor the activity of the subjects as indicated in Figure 2 (at the bottom right).
To facilitate spatiotemporal coordination between the StickGrip displacements along the z-axis and the timeline corresponding to the x-axis of the virtual cross-sectional planes, the users had to rely on auxiliary sound signals. During virtual scanning, the auxiliary signals were a sequence of short beeps (sine-wave tone pulses of 800 Hz with a duration of 65 ms) at intervals of 360 or 240 ms, as illustrated by the white dots in Figure 3. The start and end points of the virtual trajectory were marked with tone pulses of 2.8 kHz with a duration of 46 ms. The end-point signal appeared immediately at the end of the playback of each scan-line.
However, the trajectory had a fixed length, and the tone pulses were synchronized with the recorded points (of the StickGrip displacements) along the timeline (Figure 1). Therefore, the last interval was shorter, as indicated in Figure 3.
The sequence of sound beeps was the same and was independent of the type of shape. Therefore, participants could not use these sounds to identify the shapes or their features in the absence of haptic feedback. Sound signals were not used during the free scanning mode because, being presented asynchronously with haptic information, they could distract and confound the subjects.
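For illustration, the auxiliary audio track for one scan-line could be generated as in the sketch below. Only the tone frequencies, durations, and beep intervals come from the description above; the sample rate, amplitude handling, and function names are assumptions.

```python
import numpy as np

FS = 44100  # sample rate (Hz); an assumption, the paper does not specify one

def tone(freq_hz: float, dur_s: float) -> np.ndarray:
    """Sine-wave tone pulse."""
    t = np.arange(int(FS * dur_s)) / FS
    return np.sin(2 * np.pi * freq_hz * t)

def scanline_audio(duration_s: float, beep_interval_s: float) -> np.ndarray:
    """Start marker, periodic progress beeps, and end marker for one scan-line."""
    track = np.zeros(int(FS * duration_s))

    def put(sig, t_start):
        i = int(FS * t_start)
        track[i:i + len(sig)] += sig[:len(track) - i]

    put(tone(2800, 0.046), 0.0)                   # start-point marker (2.8 kHz, 46 ms)
    t = beep_interval_s
    while t < duration_s:
        put(tone(800, 0.065), t)                  # progress beep (800 Hz, 65 ms)
        t += beep_interval_s
    put(tone(2800, 0.046), duration_s - 0.046)    # end-point marker; last interval is shorter
    return track

# e.g. a 2 s playback with beeps every 240 ms
audio = scanline_audio(duration_s=2.0, beep_interval_s=0.240)
```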
A microphone was used (Figure 2, on the left) to record the subjects’ decisions as well as any comments given after the test. Short wav-files were used to deliver voice prompts to the subjects about the application status (“test on,” “task was completed successfully”).
3.4. Procedure
The same set of ten volumetric images was presented to the subjects in each experimental block; however, the subjects were not aware of which specific shapes might be presented to them. One or two geometric shapes (cylinder, trapezoidal prism, ball, and torus) or their halves and the ripple surface (10 volumetric images) were explored with the StickGrip haptic device in the absence of visual feedback and identified under three conditions (experimental blocks).
The three conditions were as follows:
(i) the baseline condition, named free scanning, was the self-directed free exploration of the virtual space;
(ii) successive haptic exploration, with reduced exploratory behavior, of cross-sectional profiles lying on parallel planes, named parallel scanning, played back along the x-axis of the Wacom tablet at a scanning velocity of 28 mm/s;
(iii) the same parallel scanning mode (with reduced exploratory behavior) at a velocity of presenting the virtual cross-sectional planes of 42 mm/s.
Audio markers accompanied the two conditions of parallel scanning.
To reduce the effects of perceptual learning and knowledge transfer, the participants performed the experimental session in one sitting (three blocks). Both the blocks and the volumetric images were presented in a randomly assigned, counterbalanced order.
Detailed verbal instructions were given to the participants regarding the testing procedure. The subjects started and finished each trial by clicking the right button on the tablet; when they were ready to continue the test, they pressed this button again. During the free scanning mode (baseline condition), the subjects were asked to explore the virtual profile of the surface within the workspace (the frame), to recognize and imagine the virtual shape(s), and to identify it (or them). When the participants finished scanning the surface, they clicked the right button on the Wacom tablet. Immediately after that, they had to state their decision aloud (into the microphone) by giving a verbal description or title of the virtual image.
In the other two blocks, the same virtual shape(s) were explored through successive playback of cross-sectional profiles of the virtual surface. To initiate the scanning of each cross-section, the subjects had to click the left button of the tablet with the left hand (Figure 3) as many times as needed. The virtual trajectories were played back at the given speed (28 or 42 mm/s), starting from the points indicated by the subject.
The subjects held the StickGrip like an ordinary pen. Since fast displacements of the StickGrip could slightly deflect the stylus from the intended direction (e.g., see the upper tracks across the ball and ripples in Figure 4), the subjects were asked to hold the StickGrip in a vertical position.
In general, the starting point could be any location pointed out within the workspace. However, to choose the starting point, the subjects were asked to move the StickGrip only along the left border of the frame. The right border of the frame was always the endpoint of the virtual trajectory. The subjects had to detect and memorize the features of the entire profile of each cross-section, to further integrate them and mentally retrieve the entire surface of the virtual shape(s). At any time when the subjects had a problem recalling the features of the virtual cross-section, they could examine such a region again.
Once the subjects had been instructed, they were briefly allowed to practice the sequence of required actions by exploring a virtual pyramid in both the free and parallel scanning modes. The results of these practice trials were excluded from further analysis.
The experimental session (three blocks) took place in the usability laboratory as shown in Figure 2 (on the left) and lasted less than 60 minutes. The subjects were blindfolded and perceived the virtual space relying on kinesthetic and proprioceptive senses. To accomplish the test, the participants had to complete ten trials in each block of set tasks with no time limit. At the end of the test, they were given sound feedback (“task was completed successfully”). Between trials and blocks, the participants had a short (self-paced) break and could ask any questions. After the test the participants were interviewed about their experiences and problems.
The test was performed according to the ethical standards. Informed consent was obtained in accordance with the guidelines for the protection of human subjects. No private or confidential information was collected or stored.
3.5. Design
In order to reduce variance due to individual differences, the experiment was conducted as a within-subjects design in which each participant explored all volumetric images under all three conditions. There were four dependent variables: the task completion time of recognizing (marked by clicking the right button on the Wacom tablet) and identifying the virtual shape (by giving a verbal description or title of the virtual image), the number of virtual cross-sectional profiles (scan-lines) inspected in order to recognize and identify each shape, the number of repeated inspections of the same scan-line, and the number of volumetric images correctly identified. The top view projections of the virtual shapes (10 images) and the three conditions of their exploration were the independent variables.
The reduced exploratory behavior was expected to improve human performance in recognizing and identifying volumetric images in the absence of visual feedback. Both conditions of exploration (reduced behavior versus free exploration and velocity of virtual scanning) and different levels of complexity of the virtual images (number of objects and elements/attributes, their symmetry and periodicity, and the gradient of the surface discontinuity) could have an impact on human performance.
Human performance was evaluated in terms of the task completion time, the number of virtual cross-sectional profiles (scan-lines) inspected, the number of repeated inspections of the same scan-line, false recognition and/or identification (confusion matrices of the shapes presented), and the exploratory strategies used. The variable number of components in the virtual images (one object, two objects, or many ripples) allowed us to differentiate the results of image interpretation: we refer to a recognition error when the number of objects was specified incorrectly, and to an identification error when the number of objects was correct but the description or title of the image was inappropriate.
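This scoring rule can be stated compactly as in the sketch below. The Trial record and its field names are hypothetical, introduced only to make the distinction between the two error types explicit; they do not come from the experimental software.

```python
from dataclasses import dataclass

@dataclass
class Trial:
    """Hypothetical record of one trial (field names are illustrative only)."""
    true_n_objects: int      # e.g. 2 for "two hemispheres"
    true_label: str          # e.g. "two hemispheres"
    reported_n_objects: int
    reported_label: str

def score(trial: Trial) -> str:
    """Scoring rule described in Section 3.5."""
    if trial.reported_n_objects != trial.true_n_objects:
        return "recognition error"        # number of objects specified incorrectly
    if trial.reported_label != trial.true_label:
        return "identification error"     # count correct, description/title wrong
    return "correct"

print(score(Trial(2, "two hemispheres", 1, "torus")))   # recognition error
print(score(Trial(1, "ball", 1, "half ball")))          # identification error
```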
4. Results and Discussion
In total, results were collected from 540 trials of haptic exploration of 10 virtual shapes (images) under three conditions (blocks) by 18 subjects. The statistical analysis was performed using SPSS 18 for Windows (Chicago, IL, USA), and OriginPro 8.6 was used for the 3D visualization of exploratory behavior.
4.1. Analysis of Exploratory Strategies
The typical tracks recorded during haptic exploration and identification of virtual shapes in the free scanning mode are presented in Figure 4. Here, we can only demonstrate that during inspection of virtual shapes in the free scanning mode, our blindfolded subjects did not use any specific strategy. By making continuous circular and linear movements (Figure 4), they merely repeatedly scanned the workspace to detect at least the more prominent and global features of the test objects (borders, vertices, convexity, concavity, and flat areas), which probably would better correspond to their own mental representations.
Nevertheless, exploration strategies were influenced by the method, the technique, and shape-related factors: the relative position and size of the virtual shape(s) (two cylinders, two hemispheres, and two trapezoidal prisms), their inherent symmetry (ball and torus), and a specific relief having periodicity of the surface gradient (torus, two balls, and ripples) or not (the half ball). Most of the subjects reported that the free scanning mode demanded more cognitive effort to mentally match different pieces of trajectories separated in space and time. In particular, to determine spatial relationships among pattern components (adjacent edges, their slope, and the direction of slope), each of these components had to be fully analyzed in a separate location.
The typical tracks recorded during haptic exploration and identification of virtual shapes in the parallel scanning mode are presented in Figure 5. At the beginning of the exploration, the subjects tried to define the number of shapes within the frame relying on a sense of the roundedness, straightness, or flatness of exploratory trajectories and spatial intervals between them. They performed a rough inspection with a greater step between virtual cross-sections (ripples, trapezoidal prism, torus, and cylinder). Then, the subjects actually began their exploration of the workspace in more detail (ball, two balls, and two hemispheres) or just a detailed scanning of the key areas (two cylinders, two balls, and two trapezoidal prisms), which could help to identify the object in question.
Although identification of the ball and the half ball was often unsuccessful (Tables 1, 2, and 3), in the brief interview after the test, 14 subjects (78%) out of 18 reported that the ball, cylinder, half ball, and grooved surface (ripples) were the easiest haptic shapes to identify.
Ten out of 18 subjects (55.6%) reported that they actively used sound beeps to “measure” the length of edges and to build the mental model of the virtual shape (e.g., “3 beeps up, 4 beeps straight, and 3 beeps down”). Three out of these ten subjects preferred the low playback velocity of the virtual trajectory of 28 mm/s.
Immediately after the explanation of the test procedure and a short practice, three out of 18 subjects (16.7%) asked for the volume of the sound beeps to be lowered, as they believed that these signals would distract them. For these subjects, the sound volume was lowered by about 20%. At the end of the test, they reported that the sound beeps had not distracted them, but that only the start and stop sounds were useful from their point of view. It is likely that these subjects relied on a holistic encoding strategy, capturing each of the cross-sectional trajectories as a whole by making “in-air hand gestures.” These three subjects outperformed the others, approaching the minimum completion time, but with a rather high rate of false identification. However, we need more observations to validate these inferences.
4.2. Evaluation of Human Performance
The goal was to analyze the differences between the two kinds of haptic visualization and exploration of virtual volumetric shapes, assuming that the mental representations of sighted people are quite similar.
4.2.1. Task Completion Time
By relying on the free scanning technique (a baseline condition), the mean task completion time of recognition and identification of the virtual shape was about 59 s with a standard deviation (SD) of about 19 s, varying from a minimum of 13 s (SD = 11 s) to a maximum of 109 s (SD = 20 s) averaged over all participants. The box plots in Figure 6 show the typical pattern of differences in the individual performance under different conditions of exploration of the virtual geometric shapes.
Figure 7 illustrates the mean time of recognition and identification of the virtual shapes for each of the three exploration conditions averaged over all participants. During the parallel scanning mode at the playback velocity of the virtual trajectory of 28 mm/s, the mean task completion time was about 58 s (SD = 14 s) varying from a minimum of 16 s (SD = 12 s) to a maximum of 77 s (SD = 14 s) averaged over all participants. The number of virtual cross-sectional profiles (scan-lines) inspected varied from a minimum of 5 (SD = 1.9) to a maximum of 9.8 (SD = 1.2) with a mean of about 8.5 (SD = 2.6) averaged over all participants. The number of scan-lines of the same shape (Figure 8) varied from a minimum of 2.9 (SD = 1.7) to a maximum of 13 (SD = 1.2) with a mean of about 8.7 (SD = 2.7). The average number of repeated inspections of the same cross-section profile (scan-line) was about 1 (SD = 0.03) varying from a minimum of 1 to a maximum of 1.1 averaged over all participants.
During the parallel scanning mode at the playback velocity of the virtual trajectory of 42 mm/s, the mean of the task completion time (Figure 7) was about 46 s (SD = 19 s) varying from a minimum of 14 s (SD = 14 s) to a maximum of 89 s (SD = 8 s) averaged over all participants. The number of scan-lines of the same shape varied from a minimum of 2.6 (SD = 2) to a maximum of 12 (SD = 3) with a mean of about 8.5 (SD = 3) averaged over all participants. The average number of repeated inspections of the same scan-line was about 1 (SD = 0.04) varying from a minimum of 0.9 to a maximum of 1.1 averaged over all participants.
The grooved surface (ripples) was the only image that was successfully recognized by all participants with both scanning techniques and with minimum effort. To identify the virtual grooved surface (ripples), the subjects spent on average about 35 s (SD = 21 s) using the free scanning mode. During the parallel scanning mode at the playback velocity of the virtual trajectory of 42 mm/s, they needed significantly less time, only about 16 s (SD = 7 s) on average. The mean number of inspections was about 3 (SD = 1.6), which approximately doubled (to 5.6, SD = 2.8) when the playback velocity of the virtual trajectory was lowered.
The shapes having smooth rounded surfaces required more time to inspect (Figure 7). In particular, using the free scanning technique, the ball, the two hemispheres, and the half ball required 71 s (SD = 19 s), 69 s (SD = 16 s), and 66 s (SD = 31 s) on average. When making the inspection in the parallel scanning mode at the playback velocity of the virtual trajectory of 28 mm/s, the times needed to recognize and identify these shapes were 67 s (SD = 14 s), 68 s (SD = 12 s), and 57 s (SD = 17 s), respectively. At the playback velocity of 42 mm/s, the task completion time diminished to 63 s (SD = 11 s) for the ball, 53 s (SD = 8 s) for the two hemispheres, and 49 s (SD = 11 s) for the half ball.
As regards task completion time, the paired samples t-test revealed a statistically significant difference between exploring the virtual surfaces with the free scanning technique and with the parallel scanning of frontal cross-sections at the playback velocity of 42 mm/s; the correlation index was high and statistically significant (0.805). The difference between exploration of the virtual surfaces in the parallel scanning mode at the two velocities (28 and 42 mm/s) was also statistically significant, while the correlation index for this parameter was high and statistically significant (0.902).
However, the paired samples t-test revealed no difference between the free scanning technique and the parallel scanning of frontal cross-sections at the playback velocity of 28 mm/s, while the correlation index was high and statistically significant (0.853).
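The analysis reported here was run in SPSS; an equivalent paired comparison could be sketched in Python as below. The arrays are randomly generated stand-ins whose means and SDs are borrowed from the completion-time figures above, not the experimental data, and the variable names are our own.

```python
import numpy as np
from scipy import stats

# Hypothetical per-participant mean completion times (s) for 18 subjects.
rng = np.random.default_rng(0)
t_free = rng.normal(59, 19, size=18)     # free scanning
t_par42 = rng.normal(46, 19, size=18)    # parallel scanning, 42 mm/s

t_stat, p_value = stats.ttest_rel(t_free, t_par42)   # paired samples t-test
r, p_r = stats.pearsonr(t_free, t_par42)             # correlation between conditions

print(f"t(17) = {t_stat:.2f}, p = {p_value:.3f}; r = {r:.2f}, p = {p_r:.3f}")
```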
4.2.2. Number of Inspections (Scanlines)
Regarding the virtual cross-sectional profiles (scan-lines), the paired samples t-test revealed no difference for the number of scan-lines, for the number of repeated inspections of the same scan-line, or for the number of repeated inspections of the same cross-section. The correlation between the numbers of scan-lines at the two velocities (28 and 42 mm/s) was as high as 0.947 and statistically significant. The correlation between the numbers of repeated inspections of the same scan-line was also high, at 0.953, and statistically significant. The numbers of repeated inspections of the same cross-section showed a weak correlation of 0.575, which did not reach statistical significance.
4.2.3. Analysis of Errors
The analysis of errors made (false recognition and identification) for each of the three exploration modes showed that the mean number of errors was less than 2.5% for 180 trials (10 virtual volumetric images being explored, recognized and identified by 18 subjects). In particular, the mean number of errors was 2.5% (SD = 1.6%) in free scanning mode and 1.9% (SD = 1.2%) and 1.5% (SD = 1.3%) in parallel scanning mode, at the playback velocity of the virtual trajectory of 28 mm/s and 42 mm/s, respectively (Figure 9).
The paired samples t-test revealed no difference in human performance in terms of false recognition and identification of the virtual shapes at the playback velocities of the virtual scan-lines of 28 mm/s and 42 mm/s; the correlation index was low and not statistically significant (0.585). The paired differences between the errors made during the free scanning mode and the parallel scanning mode at the playback velocities of 42 mm/s and 28 mm/s were statistically significant; the indices of correlation were also significant: 0.691 and 0.926, respectively.
A further analysis of the confusion matrices of the virtual shapes recognition and identification (Tables 1–3) showed that shapes with different levels of complexity (number of the shapes’ elements, their symmetry and periodicity, and gradient of the surface discontinuity) required different perceptual and cognitive efforts to recognize and distinguish their specific features to integrate them into a coherent mental image.
As can be seen from the tables, false recognition and identification were strongly affected by the scanning mode and the perceptual heterogeneity of the shape boundaries. In particular, careless inspection of the virtual profile of the surface within the workspace could be the reason for recognition errors, in which the number of objects was specified incorrectly. Another reason could be the growing redundancy of sensory information, which can quickly overload the subjects' ability to establish the right relations between successive samples. However, this kind of error was made more often in the free scanning mode (2.01%) than with the parallel scanning technique (1.12%). In Tables 1–3, thick lines border the recognition error values.
The shapes having smooth rounded surfaces (the ball, the two hemispheres, and the half ball) were more difficult to distinguish than the cylinder, torus, or ripple surface, which could be a reason for their misinterpretation. These poorly identified objects are bordered in the confusion matrices. The contribution of poorly identified objects was about 4.5% of the total errors made in the free scanning mode, 2.9% in the parallel scanning mode at the playback velocity of the virtual trajectory of 28 mm/s, and 2.5% in the parallel scanning mode at the playback velocity of 42 mm/s.
5. Conclusion
The imaginary surfaces of virtual shapes can be perceived from the virtual trajectories simulated with displacements of the point of grasp of the penholder. During this study, virtual volumetric shapes with different levels of complexity were presented to blindfolded sighted participants using the StickGrip linkage-free haptic device, the virtual sectioning concept, and the parallel scanning technique with reduced exploratory behavior.
The virtual shapes with smooth rounded surfaces (the ball, the two hemispheres, and the half ball) were more difficult to distinguish, and completing their identification required about 70 seconds. These results corroborate observations noted in previous studies [21, 35]. The torus and the grooved surface (ripples) were easily identified, and their exploration required much less cognitive effort and time (15–40 seconds). However, the case of the ripple surface demonstrated the need for adaptive adjustment of visualization parameters for the presentation of specific features, with respect to the robustness and sensitivity of the technique.
The number of scan-lines inspected in order to recognize and identify the shape and the average number of repeated inspections of the same scan-line revealed no statistically significant difference between the two exploration conditions. The average number of repeated inspections of the same scan-line was about one. The scanning velocity of the virtual trajectories presenting cross-sectional profiles proved to be a crucial parameter of the parallel scanning technique: at a penholder displacement speed of 42 mm/s, the subjects achieved significantly better results than at 28 mm/s. Nevertheless, these parameters could be customized or adjusted depending on the information presented (e.g., the density of the virtual surface irregularities). The speed of displacements of the penholder should be increased and adapted for visualization of volumetric data with a high gradient of spatial discontinuity.
All participants agreed that visualization of exploratory patterns presented as the virtual cross-sectional slices of the workspace was robust and extremely easy to use, which enabled them to create accurate mental images.
In further research, we plan to confirm the universality of the cross-sectional virtual scanning concept and the reduced exploratory behavior using the data sonification.
Acknowledgments
The authors gratefully acknowledge the support of Finnish Academy Grant 127774. The authors would like to thank the reviewers for their valuable comments and suggestions to improve the quality of the paper.