![Teaser figure: SpatialRugs and MotionRugs of a fish school](https://cdn.awesomepapers.org/papers/5c14ef7b-ad15-40bd-aae3-59ac366379a5/teaser.png)
SpatialRugs (A+B) and MotionRugs (C), all based on the same underlying dataset of 151 fish moving in a tank for about 90 seconds. Excerpts 1-4 show static snippets of the fish turning from the upper right over the lower right to the lower left. Part A shows unmodified SpatialRugs, where colors can be related to spatial positions (compare the colors to excerpts 1-4). Part B shows color-smoothed SpatialRugs that mitigate distorted patterns (outlined in red boxes). Part C encodes mover speed in the colors instead of position. In conjunction, SpatialRugs and MotionRugs can be used to relate space to features (e.g., in which areas of A movers are fast or slow, as indicated in C).
SpatialRugs: Enhancing Spatial Awareness of Movement in Dense Pixel Visualizations
Abstract
Compact visual summaries of spatio-temporal movement data often strive to express the accurate positions of movers. We present SpatialRugs, a technique to enhance the spatial awareness of movements in dense pixel visualizations. SpatialRugs apply 2D colormaps to visualize locations in a juxtaposed display. We explore the effect of various colormaps, discuss perceptual limitations, and introduce a custom color-smoothing method to mitigate distorted patterns of collective movement behavior.
1 Introduction

Visualizations of movement data face scalability challenges with respect to both time and the number of displayed movers. Uncovering spatio-temporal patterns in collective movement behavior is especially challenging because large numbers of entities move similarly over long periods. To overcome such scalability issues, the MotionRugs technique displays movers in a static, compact fashion [BJC∗18]. In MotionRugs (teaser figure, C), each pixel represents one mover, the X-axis denotes time, and the Y-axis represents a 1D spatial aggregation of all movers. Color can encode any numeric feature of interest, e.g., the speed of the entities. To illustrate, the teaser figure (C) shows the speed of the movers; several trends of slowing down (red) and speeding up (blue) are visible at a glance, while the curvature reveals the spatial dynamics of the collective behavior (e.g., changes in orientation and position of the group). Despite the space-efficiency benefits of MotionRugs, users cannot relate movers to their original locations due to the spatial linearization, as is possible with many other techniques, including simple static plotting or animation of trajectories [AA13]. This is a major drawback, as spatial context can be important when analyzing movements. For example, the locations of food sources for animals or points of interest for human movers can reveal contextual information critical to explaining the movers' behavior.
In this paper, we combine the space-efficiency benefit of MotionRugs with the spatial-awareness advantages of other advanced techniques for trajectory visualization [AAB∗13, HTC09, TSAA12]. We propose SpatialRugs (teaser figure, A and B), a technique that applies 2D color maps to dense pixel visualizations, transforming colors to express the spatial positions of movers. We further refine SpatialRugs with a time-aware color correction that mitigates perceptual issues arising from color space transformations (teaser figure, B). We compare the results to a naive Gaussian color-smoothing approach and discuss suitable color spaces.
2 MotionRugs & Collective Movement Visualization
Visual analysis of movement capitalizes on human perception to process summaries of complex movement data and uncover patterns over time and space [AA13]. Andrienko et al. [AAB∗13] provide an example of spatial abstraction for collective movement, transforming physical mover positions into positions relative to a group-centered reference point. MotionRugs [BJC∗18] further reduce the space of the moving entities from a 2D to a 1D representation, ideally still reflecting the physical distances between the movers as accurately as possible. To create the 1D order from a set of 2D positions at a given moment in time, spatial linearization strategies such as space-filling curves or the traversal of spatial index structures are used [LO93] (see the sketch below). In MotionRugs [BJC∗18], every mover in one frame is represented by a single pixel and colored according to the feature a user is interested in (e.g., speed in the teaser figure, C). The process is repeated for each time frame, and the resulting slices are ordered along the x-axis by time. The result is a static, dense pixel display [Kei01] showing the feature development of the movers over time. Alternative dense representations of geometrical relations exist, such as ParaGlide [BSM∗13] for parameter exploration of animal behavior models, or the work of Cui et al. [CWL∗14], which shows remarkable results for dynamic graphs.
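To make the linearization step concrete, the following minimal sketch orders the movers of one frame by their index on a Hilbert curve, one of the space-filling strategies mentioned above. The function names and the grid resolution are our own illustrative choices, not part of the original MotionRugs implementation.

```python
import numpy as np

def hilbert_index(n, x, y):
    """Distance of grid cell (x, y) along a Hilbert curve covering an
    n x n grid (n must be a power of two); standard xy-to-d conversion."""
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if (x & s) > 0 else 0
        ry = 1 if (y & s) > 0 else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:  # rotate the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        s //= 2
    return d

def linearize_frame(positions, extent, n=64):
    """Return mover indices ordered along the Hilbert curve.
    positions: (movers, 2) array; extent: (x_min, x_max, y_min, y_max)."""
    x_min, x_max, y_min, y_max = extent
    gx = np.clip(((positions[:, 0] - x_min) / (x_max - x_min) * n).astype(int), 0, n - 1)
    gy = np.clip(((positions[:, 1] - y_min) / (y_max - y_min) * n).astype(int), 0, n - 1)
    keys = [hilbert_index(n, int(x), int(y)) for x, y in zip(gx, gy)]
    return np.argsort(keys)  # one rug column = movers in this order
```

Applying `linearize_frame` to every time frame and stacking the resulting columns yields the dense pixel display described above.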
In contrast, MotionRugs [BJC∗18] are primarily used to obtain a quick overview of the spatial dynamics, reflected in inherent curved patterns, which can also be employed to detect trends in features and feature distributions. However, unlike other sophisticated techniques for trajectory visualization [AAB∗13, HTC09, TSAA12], MotionRugs lack spatial awareness, as they do not depict the accurate spatial locations of the movers. While MotionRugs capture changes in space and mover orientation over time, they are unable to show where exactly entities are moving to. This limitation is critical for many use cases in which analysts need to be aware of the region the entities are moving in. To enhance spatial awareness while preserving the space efficiency of MotionRugs, we propose an alternative approach. Below, we introduce SpatialRugs, a technique that reintroduces spatial positions into MotionRugs, eliminating the need for tedious analyses (e.g., clutter-prone static trajectory plots or time-consuming animations).
3 Retaining spatial readability with SpatialRugs

SpatialRugs is a compact visualization technique for collective movement data that enhances spatial awareness by projecting a 2D color space into the 1D linearization of MotionRugs. SpatialRugs apply a color mapping to the original movement space. Fig. 1 (upper left corner) demonstrates our approach: (i) we transform the color space into a 2D cubic representation to serve as a base for the second step; (ii) we transform the 2D color space to cover the maximum extent of the spatial dimensions of the mover dataset; (iii) we assign each mover's 2D position the corresponding color of the transformed color map (see the sketch below). Spatial positions are now represented by color, which can be used in conjunction with pixel-based visualizations of movement, such as MotionRugs [BJC∗18], to encode mover locations. With the colormap as a reference, users can identify the spatial distribution of entities at a given time. The teaser figure shows that the movers come from the upper right corner (green, first excerpt), take a right turn towards the lower right (blue, second excerpt), move through the lower middle of the represented space (purple) to the lower left (red, third excerpt), and finally to the middle left (orange color tones, fourth excerpt). The resulting patterns allow viewers to perceive the movers' spatial distribution and to estimate how the movers progress within the color zones. For example, between excerpts 1 and 2, just a few movers start to move towards the blue area until everyone follows; this behavior shows as a cone-shaped transition from green to blue. Consequently, the color mapping compactly reveals patterns over long periods of time and relates the spatial development to the feature development (e.g., by comparing the teaser figure, A and C).
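As a minimal sketch of step (iii), the following Python snippet assigns each mover a color by bilinearly interpolating four corner colors over the bounding box of the movement space. The corner colors here are illustrative placeholders, loosely echoing a four-corner scheme such as [ZNK07] (discussed in Section 4), not the exact colormaps evaluated in this paper.

```python
import numpy as np

# Illustrative corner colors (RGB in [0, 1]); a real application would use
# one of the evaluated 2D colormaps, e.g., Four Corners R-B-G-Y [ZNK07].
C_TL, C_TR = np.array([1.0, 0.8, 0.0]), np.array([0.0, 0.8, 0.2])  # top-left, top-right
C_BL, C_BR = np.array([0.9, 0.1, 0.1]), np.array([0.1, 0.2, 0.9])  # bottom-left, bottom-right

def position_to_color(pos, extent):
    """Bilinearly interpolate a color for a 2D mover position.
    pos: (x, y); extent: (x_min, x_max, y_min, y_max) of the movement space."""
    x_min, x_max, y_min, y_max = extent
    u = (pos[0] - x_min) / (x_max - x_min)  # horizontal fraction in [0, 1]
    v = (pos[1] - y_min) / (y_max - y_min)  # vertical fraction in [0, 1]
    top = (1 - u) * C_TL + u * C_TR
    bottom = (1 - u) * C_BL + u * C_BR
    return (1 - v) * top + v * bottom  # v = 0 at the y_min edge
```

Coloring each pixel of a rug with `position_to_color` applied to the corresponding mover's position turns a MotionRug into a SpatialRug.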
4 Color Space Considerations
Color space mappings have been applied to represent spatial or temporal relations before. Northern Lights Maps [JMBK09] map spatio-temporal properties of movers to an RGB color scale. PhenoVis [LSA∗16] presents color-coded normalized stacked bar charts to allow comparative analysis over longer time spans. MotionExplorer [BDV∗17] employs a projection-based view displaying human motions in a 2D color coding to highlight temporal patterns. Similarly, SpatialRugs apply a 2D color space mapping that allows users to relate the colors of data points in an abstract visualization back to their real spatial positions. Following the 2D colormap task assessment ER1-3 by Bernard et al. [BSM∗15], a viewer should be able to accurately distinguish different locations by comparing their color representations (I); the colors should be distributed as evenly as possible over the available space (II); and the approach should allow two or more locations to be compared with each other (III).
Yet, the visible spectrum and derived standard color spaces (e.g., CIELAB, HSV, or sRGB) are mostly organized in three dimensions and are not necessarily of a symmetrical shape, so the transformation to a regular 2D form, as required by SpatialRugs, is challenging. Moreover, even disregarding errors introduced by the color space transformation, color perception differs between viewers [DPR∗18], resulting in different abilities to identify fine-grained colormaps. Thus, a sensible choice of color space is critical for the effectiveness of SpatialRugs.
Many related approaches have employed 2D colormaps in specific and generic use cases. In an extensive survey, Bernard et al. [BSM∗15] investigate the capabilities of 22 different 2D colormaps with respect to analytical tasks and perceptual properties.
Task assessment: Fig. 1 compares the colormaps taken from Bernard et al. [BSM∗15], generated with the data described in the teaser figure.
According to the task assessment table of Bernard et al., the colormaps provided by Bremm et al. [BvLBS11], Ramirez et al. [RAGG12], Steiger et al. [SBM∗14], and Teuling et al. [TSS11] would be best suited for our defined tasks I-III. Yet, the task-based recommendations [BSM∗15] do not consider the perceptibility of visual structures within the visualization space. As retaining these structures is important to our approach, we turn to the quality assessment measures [BSM∗15].
Quality assessment: The JND measure counts the "Just Noticeably Different Colors" [BSM∗15], indicating how well a colormap exploits a color space. Here, the colormaps by Simula and Alhoniemi [SA99] and Guo et al. [GGMZ05] perform well but iterate over black or white in the center of the colormap. Such colormaps with a low black- or white-distance score work well only against backgrounds of the opposite color [BSM∗15]. As SpatialRugs pack pixels densely and leave no intermediate space between data points, colormaps with black or white color ranges could interfere with the perceived brightness and saturation of the surrounding colors, making them difficult for our case. The next best colormaps according to the JND measure are Cube Diagonal Cut B-C-Y-R [BvLBS11] and the Four Corners R-B-G-Y colormap [ZNK07].
Transformation assessment: Besides the choice of colormap, the visual outcome of SpatialRugs is also determined by how strongly the color space is transformed. Changing the ratio of the original color space along one axis affects color discriminability along that axis; this also holds if the ratio is changed along both axes. In both directions (shrinking or enlarging the color space), color discriminability suffers: either there is less space to represent all the colors the color space provides, or the same colors are stretched over a larger space than the color space can cover. Since color perception is not necessarily linear, these effects can only be measured in perceptual studies. While we acknowledge these effects, we expect our technique to remain applicable up to aspect ratios of 16:9.
Distribution assessment: Movement distributions play a critical role in the visual outcome of SpatialRugs. If the movements occupy only a narrow color range, visual discriminability within that range is reduced, although outliers remain clearly visible. An adaptive coloring approach based on the movement distribution would be a conceivable solution in future work.
5 Pooling-based Time Aware Color Smoothing
Transition areas in the color space frequently appear as perceptual distortions (outliers). For example, the teaser figure (excerpt 1, outlined in red) shows most movers in the green quadrant and only a few in the transition area to the blue quadrant. This results in a salient blue line (outlined in the red box), where the perceived color distances appear larger than the actual distances of the blueish movers to the rest of the green group. Such artifacts can mislead viewers into thinking that the few entities are further into the blue space than they actually are. To mitigate such perceptual distortions, we propose a time-aware color smoothing technique. Our method includes the mover distribution of the current and subsequent time steps to determine the color correction: if entities close to each other are located in different color areas, their respective colors are corrected towards the majority.

Our method (Fig. 2) consists of three steps (color collection, pooling, and adaptation) repeated for every pixel; a code sketch follows below. During the initialization phase (Fig. 2, Step 0), users adjust the pooling matrix by selecting three parameters: neighborhood size, time frames ahead, and step size. The neighborhood size describes the spatial region around the focused pixel along the vertical axis. The time frames ahead incorporate future spatial movement to smooth in the horizontal direction. Lastly, the step size shrinks the neighborhood towards future frames to steer the importance of the developing spatial region. Please refer to our supplementary material for alternative parameter variations; the code for the color smoothing is publicly available as a Python notebook [Sch20]. In Step 1, we apply the user-defined pooling matrix around the target pixel and collect the colors of the included pixels. In Step 2, the collected pixels are ordered with a stable sorting algorithm (e.g., mergesort) on their RGB values; outlier pixel colors are sorted to the ends of the list, while more similar colors move to the middle. In Step 3, the median of the sorted array yields the color average of the collected pixels, and the target pixel is corrected to this median.
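The following is a minimal Python sketch of this pooling step, under our reading of the description above. Parameter names are illustrative, and the shrinking of the neighborhood per future frame is our assumption about how the step size acts; the published notebook [Sch20] remains the authoritative implementation.

```python
import numpy as np

def time_aware_color_smoothing(rug, neighborhood=2, frames_ahead=3, step=1):
    """Smooth a rug image (rows = linearized movers, columns = time frames).

    rug: (H, W, 3) uint8 array. For each pixel, colors are pooled from a
    vertical neighborhood in the current frame and in `frames_ahead` future
    frames, where the neighborhood shrinks by `step` per future frame
    (our assumption). The pooled colors are sorted lexicographically on
    their RGB values (Python's sort is stable), and the median element
    replaces the pixel's color.
    """
    h, w, _ = rug.shape
    out = rug.copy()
    for t in range(w):
        for i in range(h):
            pooled = []
            for dt in range(frames_ahead + 1):
                if t + dt >= w:
                    break  # no frames beyond the end of the rug
                k = max(neighborhood - dt * step, 0)  # shrinking window
                lo, hi = max(i - k, 0), min(i + k + 1, h)
                pooled.extend(map(tuple, rug[lo:hi, t + dt]))
            pooled.sort()  # stable lexicographic sort on (R, G, B)
            out[i, t] = pooled[len(pooled) // 2]  # median element
    return out
```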
6 Results: assessing visual outcomes
We next present preliminary results on the effectiveness of SpatialRugs and discuss the choice of color scale and the smoothing method.
Color scale: In Section 4, we used the work of Bernard et al. [BSM∗15] to propose an initial set of colormaps to explore and defined the requirements I-III. We further narrow down the selection of well-applicable colormaps by visually investigating their color space properties (see Figure 1).
First, derived colors should be well distinguishable so that they can be related to an accurate spatial location, satisfying requirement I. The colormaps provided by Bremm et al. (variant 2) [BvLBS11], Steiger et al. [SBM∗14], and Teuling et al. [TSS11] are clearly inferior to their competitors in this respect. Second, requirement II states that colors should be distributed as evenly as possible. Here, the colormap by Simula and Alhoniemi [SA99] introduces a black/dark area between neighboring colors in the corners, impairing perceptual continuity. The color regions of Ramirez et al. [RAGG12] and Bremm et al. (variant 1) [BvLBS11] are also not linearly distributed.
This leaves the colormaps by Ziegler et al. [ZNK07] and Guo et al. [GGMZ05] as candidates. Ziegler et al. anchor four distinctive colors, among them three primary colors, to the corners of the color space, creating a semantic notion of spatial orientation that resembles the intuitive division into four cardinal directions. Guo et al. extend the color space radially around a white center. Both approaches scale well to different aspect ratios, satisfying requirement III. With the approach of Guo et al., an additional center area can be encoded in white. Yet, this could interfere perceptually if an additional feature were to be encoded by modifying color brightness, which only works if no black or white components are present. To leave this possibility open, Ziegler et al.'s approach is more suitable. In conclusion, we expect the colormaps by Ziegler et al. and Guo et al. to fulfill our requirements; the choice between the two is use-case-dependent.
Color smoothing:
The time-aware smoothing aims to mitigate the effects of neighboring colors (outlined in red in Fig. 3 A) by including the temporal color distribution.
Fig. 3 A and B show that the method reduces visible outliers while retaining the temporal structures.
The difference image between (A) and (B) (see Fig. 3 C) provides preliminary evidence for the value of the applied smoothing method, as it only affects the color transition areas, leaving the visual patterns crisp and visible.
In contrast, the Gaussian blur (D) creates a fuzzy impression, hampering the accurate interpretation of colors at a given point by blurring visual structures.
A quantitative assessment of our color smoothing (table in Fig. 3) applies quality measures that capture the distance of the smoothed images to the original, unsmoothed image. These measures include the root mean squared error (RMSE) [WMP14], the mean squared error (MSE) [WMP14], and the structural similarity index (SSIM) [Bov13]. We compare our time-aware color smoothing (TACS) to a standard Gaussian smoothing (Gauss).
Similar reference-area parameters are chosen for both methods to enable a fair comparison. Lower RMSE and MSE values indicate better results, whereas a higher SSIM value indicates greater similarity between the original and the smoothed image.
The results indicate that our pooling method outperforms the Gaussian blur even for small sigmas and large window sizes.
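For reproducibility, a minimal sketch of how such scores can be computed with NumPy and scikit-image is given below, assuming 8-bit RGB images of equal size (`structural_similarity` with the `channel_axis` argument requires scikit-image ≥ 0.19); the function name is our own.

```python
import numpy as np
from skimage.metrics import structural_similarity

def smoothing_scores(original, smoothed):
    """Compare a smoothed rug image against the unsmoothed original.
    Both images: (H, W, 3) uint8 arrays of identical shape."""
    diff = original.astype(np.float64) - smoothed.astype(np.float64)
    mse = np.mean(diff ** 2)          # lower is better
    rmse = np.sqrt(mse)               # lower is better
    ssim = structural_similarity(     # higher means more similar
        original, smoothed, channel_axis=-1, data_range=255)
    return {"MSE": mse, "RMSE": rmse, "SSIM": ssim}
```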
7 Conclusion and Future Work
We demonstrated an approach to encode collective movement behavior for spatial awareness within a static, compact visualization. SpatialRugs use color mapping to let users perceive spatial relations through space-efficient designs. SpatialRugs are intended to be used in conjunction with other pixel-based visualizations of movement datasets that show further features of interest, enabling users to relate spatial and feature developments (compare the SpatialRug and MotionRug in the teaser figure). We compared several color spaces, identifying advantages and disadvantages; further comparisons of color spaces and results of the color smoothing on several colormaps can be found in the supplemental material. We further discussed perceptual challenges introduced by color scales, where movements appear more distant than they are in physical space. To mitigate such distortion effects, we proposed a time-aware color smoothing approach, which we illustrated with examples and preliminary quality metrics. We expect that our approach can also be applied to non-spatial 2D point distributions, for example, to projections of dynamic datasets.
Despite its promising potential, SpatialRugs come with several shortcomings. Spatial distance may introduce errors when users try to relate a color to its precise position, and individual differences in color perception might affect the clarity of the derived patterns. These aspects need to be evaluated, and guidelines for the correct parameterization of our technique have to be explored. In future work, we intend to quantify viewers' perception of our technique and the choice of color spaces. The perceptual implications of our color correction process also have to be tested thoroughly. Instead of using a single colormap, we anticipate that SpatialRugs would benefit from an adaptive colormap approach adjusted to the specific movement distribution and user task. Finally, we would like to support better detection of movements in semantically interesting regions by allowing users to place color anchors interactively according to semantic objects or areas.
References
- [AA13] Andrienko N., Andrienko G.: Visual analytics of movement: An overview of methods, tools and procedures. Information Visualization 12, 1 (Jan. 2013), 3–24.
- [AAB∗13] Andrienko N., Andrienko G., Barrett L., Dostie M., Henzi P.: Space transformation for understanding group movement. IEEE transactions on visualization and computer graphics 19, 12 (2013), 2169–2178.
- [BDV∗17] Bernard J., Dobermann E., Vögele A., Krüger B., Kohlhammer J., Fellner D.: Visual-interactive semi-supervised labeling of human motion capture data. Electronic Imaging 2017, 1 (2017), 34–45.
- [BJC∗18] Buchmüller J., Jäckle D., Cakmak E., Brandes U., Keim D. A.: MotionRugs: Visualizing collective trends in space and time. IEEE Transactions on Visualization and Computer Graphics 25, 1 (2018), 76–86.
- [Bov13] Bovik A. C.: Automatic prediction of perceptual image and video quality. Proceedings of the IEEE 101, 9 (Sep. 2013), 2008–2024.
- [BSM∗13] Bergner S., Sedlmair M., Moller T., Abdolyousefi S. N., Saad A.: ParaGlide: Interactive parameter space partitioning for computer simulations. IEEE Transactions on Visualization and Computer Graphics 19, 9 (2013), 1499–1512.
- [BSM∗15] Bernard J., Steiger M., Mittelstädt S., Thum S., Keim D., Kohlhammer J.: A survey and task-based quality assessment of static 2d colormaps. In Visualization and Data Analysis 2015 (2015), vol. 9397, International Society for Optics and Photonics, p. 93970M.
- [BvLBS11] Bremm S., von Landesberger T., Bernard J., Schreck T.: Assisted descriptor selection based on visual comparative data analysis. In Computer Graphics Forum (2011), vol. 30, Wiley Online Library, pp. 891–900.
- [CWL∗14] Cui W., Wang X., Liu S., Riche N. H., Madhyastha T. M., Ma K. L., Guo B.: Let it flow: a static method for exploring dynamic graphs. In 2014 IEEE Pacific Visualization Symposium (2014), IEEE, pp. 121–128.
- [DPR∗18] Dasgupta A., Poco J., Rogowitz B., Han K., Bertini E., Silva C. T.: The effect of color scales on climate scientists’ objective and subjective performance in spatial data analysis tasks. IEEE transactions on visualization and computer graphics (2018).
- [GGMZ05] Guo D., Gahegan M., MacEachren A. M., Zhou B.: Multivariate analysis and geovisualization with an integrated geographic knowledge discovery approach. Cartography and Geographic Information Science 32, 2 (2005), 113–132.
- [HTC09] Hurter C., Tissoires B., Conversy S.: FromDaDy: Spreading aircraft trajectories across views to support iterative queries. IEEE Transactions on Visualization and Computer Graphics 15, 6 (2009), 1017–1024.
- [JMBK09] Janetzko H., Mansmann F., Bak P., Keim D. A.: Northern lights maps: Spatiotemporal exploration of mice movement. In EuroVis (2009).
- [Kei01] Keim D. A.: Visual exploration of large data sets. Communications of the ACM 44, 8 (2001), 38–44.
- [LO93] Lu H., Ooi B. C.: Spatial indexing: Past and future. IEEE Data Eng. Bull. 16, 3 (1993), 16–21.
- [LSA∗16] Leite R. A., Schnorr L. M., Almeida J., Alberton B., Morellato L. P. C., Torres R. d. S., Comba J. L.: PhenoVis: A tool for visual phenological analysis of digital camera images using chronological percentage maps. Information Sciences (2016), 181–195.
- [RAGG12] Ramirez C., Argaez M., Guillen P., Gonzalez G.: Self-organizing maps in seismic image segmentation. Computer Technology and Application 3, 9 (2012).
- [SA99] Simula O., Alhoniemi E.: SOM-based analysis of pulping process data. In International Work-Conference on Artificial Neural Networks (1999), Springer, pp. 567–577.
- [SBM∗14] Steiger M., Bernard J., Mittelstädt S., Lücke-Tieke H., Keim D., May T., Kohlhammer J.: Visual analysis of time-series similarities for anomaly detection in sensor networks. In Computer graphics forum (2014), vol. 33, Wiley Online Library, pp. 401–410.
- [Sch20] Schlegel U.: time-aware-color-smoothing. https://github.com/dbvis-ukon/time-aware-color-smoothing, 2020.
- [TSAA12] Tominski C., Schumann H., Andrienko G., Andrienko N.: Stacking-based visualization of trajectory attribute data. IEEE Transactions on visualization and Computer Graphics 18, 12 (2012), 2565–2574.
- [TSS11] Teuling A., Stöckli R., Seneviratne S. I.: Bivariate colour maps for visualizing climate data. International journal of climatology 31, 9 (2011), 1408–1412.
- [WMP14] Wajid R., Mansoor A. B., Pedersen M.: A human perception based performance evaluation of image quality metrics. In Advances in Visual Computing (2014), Springer International Publishing, pp. 303–312.
- [ZNK07] Ziegler H., Nietzschmann T., Keim D. A.: Visual exploration and discovery of atypical behavior in financial time series data using two-dimensional colormaps. In 2007 11th International Conference Information Visualization (IV’07) (2007), IEEE, pp. 308–315.