“They Aren’t Built For Me”: A Replication Study of Visual Graphical Perception with Tactile Representations of Data for Visually Impaired Users
Abstract.
New tactile interfaces such as swell form printing or refreshable tactile displays promise to allow visually impaired people to analyze data. However, it is possible that design guidelines and familiar encodings derived from experiments on the visual perception system may not be optimal for the tactile perception system. We replicate the Cleveland and McGill study on graphical perception using swell form printing with eleven visually impaired subjects. We find that the visually impaired subjects read charts more quickly, and with similar and sometimes superior accuracy, compared with sighted subjects in prior replications of that study. Based on a group interview with a subset of participants, we describe the strategies used by our subjects to read four chart types. While our results suggest that familiar encodings based on visual perception studies can be useful in tactile graphics, our subjects also expressed a desire to use encodings designed explicitly for visually impaired people.
1. Introduction
Data-driven decision-making is becoming increasingly ubiquitous in human experience. Reading and interpreting data is a critical job skill, and is a fundamental method used to communicate phenomena to the broader public, most recently evidenced by the popular Flattening the Curve campaign to slow the spread of COVID-19 in 2020 (Thunström et al., 2020). However, the ubiquity of visual experiences may not reach blind or visually impaired (BOVI) individuals. For example, the concept of Flattening the Curve was primarily a visual concept, and wasn’t accessible to BOVI individuals until it was translated into a tactile representation and explanation in 2021 (Rosenberg, 2021). Lack of access to visual representations of data can also result in a lack of access to employment. Programs like the NSF’s Data Science Corps (https://new.nsf.gov/funding/opportunities/data-science-corps-dsc) and DARPA’s data science job training initiatives (https://tools-competition.org/workforce/) illustrate the importance of interacting with data in the modern job market. The unemployment rate for individuals with visual impairments is approximately 2-3 times the national average (McDonnall and Sui, 2019), underscoring a significant gap in accessibility and employment opportunities. Concurrently, emerging consumer devices aim to bridge this gap by providing tactile representations of data originally designed for visual perception. While tactile and visual perception are interconnected, particularly for individuals with residual visual memory (Tabrik, 2022), it is unclear which findings from the visual data visualization literature translate effectively into tactile formats (Ault et al., 2002; Yu et al., 2001).
This study revisits the seminal 1984 experiment by Cleveland and McGill (Cleveland and McGill, 1984), adapting it to examine how established visual design principles translate when applied to tactile graphics for visually impaired users. While the original work assessed error in perceptual judgments of visual graphs, our replication uses tactile graphics, which is a category of graphics utilizing technologies like braille embossers and dynamically refreshing braille output devices. These technologies convert digital images into tactile formats, enhancing accessibility but prompting questions about the efficacy of traditional visual design principles in a tactile context (Fritz and Barner, 1999; Ladner et al., 2005). Our research evaluates these principles’ effectiveness across both visual and tactile media. By understanding perceptual differences between these varying types of media, we additionally hope to unearth universal design opportunities of tactile interfaces with data for broader sets of users (Kildal and Brewster, 2006; Cohen et al., 2006).
In our research, we explore the following research questions:
• Q1: How does the accuracy of blind users interpreting bar charts, pie charts, bubble charts, and stacked bar charts in tactile formats compare to that of sighted users using visual formats?
• Q2: How do inference times for blind users interpreting tactile graphics (bar charts, pie charts, bubble charts, and stacked bar charts) compare to those of sighted users with visual graphics?
Based on preliminary literature reviews and our understanding of tactile perception, we have the following hypotheses.
• H1: Visually impaired readers of tactile graphics will show the same ranking of the four chart types found in sighted participants, but their accuracy will be lower due to using a different perceptual system.
• H2: Visually impaired readers of tactile graphics will take longer to make inferences than sighted readers of visualizations, because more processing is involved, both physically through the manipulation of two hands and mentally via comparative calculations.
Our research is a controlled experiment that directly compares the effectiveness of various chart types—bar charts, pie charts, bubble charts, and stacked bar charts—in both tactile and visual formats. Notably, it is one of the first studies to systematically compare the performance of blind and visually impaired users with that of sighted participants within the same experimental framework. By doing so, it provides empirical insights into the perceptual differences and challenges faced by visually impaired users when interacting with tactile graphics.
Through a study of eleven BOVI subjects, we find that tactile graphics can be read by BOVI subjects as accurately and as quickly as previous studies have reported for sighted subjects reading visual graphics. We present a statistical analysis of our data showing no significant difference between sighted and BOVI subjects when compared with results from previous studies. Through qualitative data gathered in interviews, we identify different strategies used to read tactile graphics. We also discuss difficulties that imply both gaps in equal access to insights from data for BOVI individuals and research opportunities for the human-computer interaction research community to fill these gaps. Our work shows that tactile graphics based on visual designs can be read accurately, but that there is also an opportunity to design better encodings through human-centered design studies with BOVI participants in the future.
2. Related Work
2.1. Visual Perception Experiments
Visual perception research in graphical interfaces began fundamentally with Cleveland and McGill’s study (Cleveland and McGill, 1984). Their foundational work laid the groundwork for understanding how people interpret graphical information. They introduced a systematic way to assess the effectiveness of different graphical representations, which has been extensively replicated and expanded upon in subsequent research. Researchers have adapted the original experiment to explore how specific audiences, such as individuals with disabilities, the elderly, and those with varying levels of visual literacy, interact with graphical data. Notable replications include Heer and Bostock’s work, which applied these principles to crowdsourced environments, showcasing how perceptions can vary widely in less controlled settings (Heer and Bostock, 2010). Additionally, Talbot et al.’s focused exploration of bar chart perception provides further granularity on graphical interpretation (Talbot et al., 2014).
Recent research in visual perception experiments has increasingly focused on creating more accessible data visualizations through the integration of tactile, auditory, and multimodal feedback, addressing the needs of diverse audiences, including the visually impaired and elderly. Studies have explored the efficacy of audio narratives and haptic feedback in enhancing the accessibility of data visualizations (Siu et al., 2022; Fan et al., 2022; Wang et al., 2022), emphasizing the necessity for designs that accommodate non-visual modalities. Significant work has been done on developing haptic interfaces that enable blind users to interact with statistical graphics on web platforms (Kim and Lim, 2011a; Guinness et al., 2019). Additionally, methodological advancements have been proposed to ensure that socio-technical considerations are integrated into visualization design (Lundgard et al., 2019; Marriott et al., 2021), fostering a more inclusive approach. These efforts underscore a pivotal shift towards visualizations that are not only technically proficient but also universally accessible, highlighting the critical role of inclusive design principles in the advancement of visual perception research (Cherukuru et al., 2022; Sharif et al., 2021; Mishra et al., 2022).
2.2. Accessible Computing Interfaces
Research in accessible computing interfaces has significantly evolved, particularly focusing on data sonification and tactile feedback to enhance data interaction for users with disabilities. Studies have explored auditory feedback mechanisms that help visually impaired users understand complex data sets (Wall and Brewster, 2006; Zhao et al., 2008; Wang et al., 2022; Thompson et al., 2023), effectively substituting visual cues with sound. A significant focus has been on the development of sonification systems, such as the importance-driven sonification techniques of the Line Harp (Bru et al., 2023), which enhances line chart accessibility. Similarly, the Erie declarative grammar (Kim et al., 2024) facilitates data sonification, allowing users to perceive complex data through auditory means. Screen reader technologies have also seen innovative applications, notably through plugins like VoxLens (Sharif et al., 2022), which enables interactive data visualization accessibility on the web. Advancements in screen reader technologies have facilitated accessible navigation through data visualizations, converting graphical elements into comprehensible auditory formats (Demir et al., 2010). Furthermore, the integration of multimodal feedback systems combines tactile, auditory, and sometimes olfactory cues to provide a richer interaction experience (Kim and Lim, 2011b). These efforts are not merely supplementary but are crucial for users who rely on alternative sensory channels to access information, underscoring the need for technologies that recontextualize rather than replicate data presentation across various sensory modalities (Choi et al., 2019). This research emphasizes the importance of inclusivity in technological development and aligns with broader goals of universal design, highlighting how accessible interfaces can expand digital engagement for all users (Marriott et al., 2021). Furthermore, the Data Navigator (Elavsky et al., 2023) serves as a toolkit focusing on accessibility-centered data navigation, proving essential for users requiring non-visual data interaction. These tools not only ensure compliance with accessibility standards but also promote an inclusive approach to data engagement, highlighting the shift from traditional visual data representation to multimodal data interaction that accommodates a wider range of sensory preferences and capabilities.
2.3. Tactile Graphics
Tactile graphics enable BOVI individuals to interact with graphical data through embossed or raised diagrams, offering a non-visual approach to data representation. Experiments conducted by Goncu et al. (Goncu et al., 2010) highlighted preferences for tactile diagrams with gridlines and Braille values over direct transcriptions, and tactile tables over tactile charts. Engel and Weber (Engel and Weber, 2017a) further emphasized the importance of tick marks, grid lines, texture, and the positioning of legends for the readability of tactile charts, documenting challenges like information overload and orientation issues in their survey with 71 participants (Engel and Weber, 2017b). Additionally, they explored differences in affordances among various chart types including bar, line, scatter, and pie charts (Engel and Weber, 2018). Watanabe and Inaba (Watanabe and Inaba, 2018) investigated the suitable texture granularity for tactile bar charts on capsule paper through experiments where participants were asked to count the number of bars under different texture conditions. Watanabe and Mizukami (Watanabe and Mizukami, 2018) conducted an experiment comparing tactile scatter plots to tactile and electronic tables, where participants were asked to identify the relationship between two variables. The results showed that tactile graphs outperformed the other two conditions. In a related study with 8 blind and low-vision participants, Yang et al. (Yang et al., 2020) compared tactile node-link and matrix diagrams for tasks such as pathfinding and cluster identification, noting a preference for node-link diagrams, which performed best except in adjacency tasks.
3. Methodology
Creation of Tactile Graphics: The primary techniques for producing tactile graphics include embossing with a braille embosser, printing on swell paper, and thermoforming. Swell paper, utilized in our experiment, features microcapsules that expand when heated, creating raised features up to 0.5mm high (Rowell and Ungar, 2003). This method was chosen for its precision, quick production, and cost-effectiveness. The cost per tactile graphic was approximately $2.75, making this approach economically viable for large-scale studies. An international survey of thirty blind and partially sighted people demonstrated a strong preference for swell paper, citing its enhanced tactile feedback and durability, making it ideal for users with vision impairments (Rowell and Ungar, 2005).

Generating Stimuli: We adapted the stimuli generation process from the original Cleveland and McGill study to accommodate tactile perception. This involved constraining the ratios of the smaller to the larger graphical elements, which we refer to as the True Proportional Difference, uniformly in 5% increments from 50% to 95%, while allowing the larger elements to vary from 40% to 100% of the viewport. This variance was intended to provide a broad spectrum of graphical sizes, ensuring the tactile graphics were legible and effectively represented the data. Distractor stimuli were also included, ranging from 20% to 100% of the viewport, to mimic real-world scenarios where multiple data points are presented together. In total, 80 different charts were produced, with two variations for each of the ten values of True Proportional Difference (50%, 55%, …, 90%, 95%) and four chart types. We recognize that previous versions of this experiment used different values of True Proportional Difference, notably lower than 50%. Out of concern for cost, we were limited to visualizing data that could easily be represented on swell form paper. For this purpose, we compare not only overall error but also midmeans stratified by True Proportional Difference.
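As a concrete illustration, the following sketch enumerates chart specifications under the constraints above. The number of distractors per chart and the random seed are our own assumptions for illustration, not parameters of the study design.

```python
import random

CHART_TYPES = ["bar", "stacked_bar", "pie", "bubble"]
TRUE_DIFFS = list(range(50, 100, 5))  # 50%, 55%, ..., 95%

def make_stimulus(chart_type, true_diff, rng):
    """One chart spec: the larger element spans 40-100% of the viewport,
    the smaller element is true_diff% of the larger, and distractors
    span 20-100% of the viewport."""
    larger = rng.uniform(40, 100)
    smaller = larger * true_diff / 100.0
    distractors = [rng.uniform(20, 100) for _ in range(3)]  # count assumed
    return {"type": chart_type, "true_diff": true_diff,
            "larger": larger, "smaller": smaller,
            "distractors": distractors}

rng = random.Random(0)  # seed assumed
stimuli = [make_stimulus(c, d, rng)
           for c in CHART_TYPES   # 4 chart types
           for d in TRUE_DIFFS    # 10 ratio values
           for _ in range(2)]     # 2 variations each
assert len(stimuli) == 80
```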
4. Experimental Design
Participant Recruitment and Setup: In the first round of interviews, eleven participants in total were recruited from a local school for the blind and the experiments were conducted over two days in a controlled environment. The participants ranged in age from 18 to over 65, with an average age of approximately 48.3 years. The gender distribution included 6 males and 5 females. Education levels varied, with 5 participants having completed an undergraduate degree and 5 holding a master’s degree, while 1 had some graduate education. Each session lasted approximately 75 minutes and was conducted in person to enable effective interaction with the tactile graphics. Two months later, a group interview was held with four of the participants of the original study to discuss strategies and to photograph interactions with the tactile graphics.
Procedure: The procedure began with an informed consent briefing read aloud to participants, followed by a practice session with eight stimuli (two of each chart type) to familiarize them with the tactile graphics. Participants were then presented with the remaining 72 graphics in a randomized order and asked to interpret them at their natural pace, verbalizing their thoughts and answers for audio recording. Participants were told the true answer (the percent size of the smaller stimulus relative to the larger) for their 8 training stimuli, but were not told during the experimental phase. Answers were recorded into a spreadsheet during test time and later adjudicated against the recordings, with any disagreements removed from the dataset. This phase of the experiment was limited to 30 minutes, and participants were notified when half of the time was left.
Data Collection: Data were collected through both quantitative and qualitative methods. Quantitatively, we measured the accuracy of each response and the time taken to complete each task. Qualitatively, participants were asked to provide feedback about their experience, including any difficulties they encountered and their subjective assessment of the tactile graphs’ clarity and usability. After the tactile interpretation tasks, participants completed a demographics survey and engaged in a 10-15 minute semi-structured interview. These interviews aimed to gather qualitative feedback on their experience and any challenges they faced during the experiment. (Study documents, including interview questions, demographic questions, and graphics files, are available at https://osf.io/3nsfp/?view_only=7b7b8dcbae1d4c9a8bb4325053d13d9f.)
Followup Group Interview: Two months after the experiment was conducted, participants were contacted via email to participate in a followup group interview in order to compare strategies for reading the tactile graphics. Four participants joined the experimenters in a conference room for a 90-minute group interview. For each of the four chart types, the participants were first each given two examples of the chart type (one close to a ground truth of 50%, one close to a ground truth of 95%) in order to refresh their memory about the chart type. Then, they were each asked about their strategies for reading the chart, what they found difficult about reading the chart, and any improvements they would suggest. The group interview ended by asking the four participants to respond to any of the strategies of the other participants for any of the chart types. The meeting was photographed and its audio was recorded and transcribed for analysis of common strategies.
5. Results
In accordance with the original Cleveland and McGill study, we calculated both midmeans of absolute accuracy by ground truth per chart type, and means and 95% confidence intervals of error scores for each of the four chart types. Confidence intervals are calculated through bootstrapping. Error is scored as $\log_2(|\text{judged percent} - \text{true percent}| + 1/8)$, where the $1/8$ is added for numerical stability.
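A minimal sketch of this scoring and of the bootstrap procedure follows; the number of resamples and the seed are our own choices for illustration.

```python
import numpy as np

def error_score(judged, true):
    """Cleveland-McGill error: log2 of the absolute error plus 1/8,
    which keeps the score finite when a judgment is exactly right."""
    judged, true = np.asarray(judged, float), np.asarray(true, float)
    return np.log2(np.abs(judged - true) + 1 / 8)

def bootstrap_mean_ci(scores, n_boot=10_000, level=0.95, seed=0):
    """Percentile-bootstrap confidence interval for the mean score."""
    scores = np.asarray(scores, float)
    rng = np.random.default_rng(seed)
    resamples = rng.choice(scores, size=(n_boot, len(scores)))
    means = resamples.mean(axis=1)
    lo, hi = np.percentile(means, [100 * (1 - level) / 2,
                                   100 * (1 + level) / 2])
    return scores.mean(), (lo, hi)
```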
The error scores, seen in Fig 4, match up closely with previous results on sighted individuals both in laboratory settings and through a crowdsourced replication on Amazon Mechanical Turk (Cleveland and McGill, 1984; Heer and Bostock, 2010). Our results have larger confidence intervals, which is partly explained by our study having a smaller number of trials than both the crowdsourced and the laboratory experiments. However, there are other possible explanations; we analyze the between-subject variance as a source of uncertainty in the subsequent section.
The midmeans chart (Fig 3) displays the middle quartiles of error for each group of ground truth, and was used in previous studies to communicate performance without being influenced by outliers. It indicates that the bar chart is the most accurate chart in general. Across all chart types, the error decreases as the true proportional difference gets close to 100%, with a likely peak in error near 50%. This matches previous findings reported in a crowdsourced study with sighted participants (Heer and Bostock, 2010). However, while the ranking of different charts is similar, there are some differences: the performance on the stacked bar and bar charts had higher midmean error in our study. This conflicts with the mean error in aggregate seen in Fig 4 - it appears that the removal of outliers from the midmean calculations resulted in our study’s error being higher than the crowdsourced study.
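For reference, the midmean is the mean of the observations falling between the 25th and 75th percentiles; a minimal sketch:

```python
import numpy as np

def midmean(values):
    """Mean of the middle two quartiles, which discounts outliers."""
    v = np.sort(np.asarray(values, float))
    q1, q3 = np.percentile(v, [25, 75])
    return v[(v >= q1) & (v <= q3)].mean()
```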
5.1. Analysis of Hypotheses
Our first hypothesis, H1, was that BOVI readers of tactile graphics would show the same ranking of the four chart types found in sighted participants, but that their accuracy would be lower. Our data provided only partial evidence for H1. Both the midmeans and the error confidence intervals demonstrated similar rankings of the chart types, but in aggregate the accuracy of BOVI readers was similar or even higher in some cases, as with the bar chart. One-sided t-tests could not confirm the hypothesis that the error of BOVI readers was higher than in previous studies, in comparisons with both Cleveland and McGill (Cleveland and McGill, 1984) and Heer and Bostock (Heer and Bostock, 2010).
Our second hypothesis, H2, was that BOVI readers of tactile graphics would take longer to make inferences than sighted readers of visualizations due to the physical and mental processing required. We compare our results with those reported in Heer and Bostock. Our participants, on average, viewed 59 charts, making the total number of readings approximately 649. The average completion time per chart judgment was 26.74 seconds for BOVI users interacting with tactile graphics. In contrast, Heer and Bostock’s MTurk study reported a higher average time of 54 seconds per trial, with a median response time of 42 seconds and a standard deviation of 41 seconds. A one-sided t-test finds that this data does not support H2. This comparison suggests that our tactile graphics, while designed for BOVI users, enabled relatively faster responses compared to the MTurk study. We note that Heer and Bostock suggest that in a laboratory setting, compared with MTurk, they expect that participants would be faster.
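A sketch of this comparison using a one-sided Welch t-test from summary statistics is below. Only the two means, our trial count, and Heer and Bostock’s standard deviation come from the numbers above; our study’s standard deviation and the MTurk trial count are placeholder assumptions.

```python
from scipy.stats import ttest_ind_from_stats

# H2 predicts longer tactile times, so the alternative hypothesis is
# mean(tactile) > mean(MTurk).
result = ttest_ind_from_stats(
    mean1=26.74, std1=20.0, nobs1=649,   # tactile; std1 is an assumption
    mean2=54.0, std2=41.0, nobs2=1000,   # MTurk; nobs2 is an assumption
    equal_var=False,                     # Welch's t-test
    alternative="greater")
print(result.statistic, result.pvalue)   # large p-value: no support for H2
```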
5.2. Hierarchical Analysis
Our results indicated some nuance in the data that was not captured by the first two hypotheses. In particular, the midmeans chart and the aggregate error disagreed on whether the performance of our subjects was better, equivalent, or worse, than results from previous replications of this experiment with sighted participants. A recent work by Davis et al. offers a different type of analysis of graphical perception studies that enables statistical statements about the variance between subjects using hierarchical modeling (Davis et al., 2022). We repeat that analysis with our results to better understand the outcome of our experiment.
To fit our data to a hierarchical model, we need to declare the assumptions of the model. Our assumptions model how the data was generated, parameterizing the various effects that we believe led to the values offered by our subjects. We assume that these effects are stochastic, and so we model them as random variables, resulting in a Bayesian model. We then use numerical methods to determine the likely posterior distributions for these variables when conditioned on the data generated by our experiment. For a broader overview of this technique, see Gelman et al. (Gelman et al., 2020).
Davis et al. build a hierarchical model by starting with a simple linear model, which assumes that the average performance of each participant is normally distributed about the true mean for the population - essentially that any differences between participants are the result of random chance. They then expand their model by changing different assumptions. These assumptions include restrictions on the output of the model (replacing the normal distribution with the zero-inflated beta distribution), an additional random-effects term that captures individual differences between participant and visualization ($\gamma_{s,v}$), and additionally learning submodels for the precision ($\phi$) and the probability of zeros ($\alpha$).
likelihood: $y_i \sim \mathrm{ZIBeta}(\mu_i, \phi_i, \alpha_i)$
mean submodel: $\operatorname{logit}(\mu_i) = \beta_0 + \beta_{v[i]} + \gamma_{s[i],v[i]}$
precision submodel: $\log(\phi_i) = \beta_0^{(\phi)} + \beta_{v[i]}^{(\phi)} + \gamma_{s[i],v[i]}^{(\phi)}$
zeros submodel: $\operatorname{logit}(\alpha_i) = \beta_0^{(\alpha)} + \beta_{v[i]}^{(\alpha)} + \gamma_{s[i],v[i]}^{(\alpha)}$
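To make these generative assumptions concrete, the following is a minimal forward simulation of this kind of model; all hyperparameter values are illustrative assumptions, not the priors or fitted posteriors from Davis et al.

```python
import numpy as np

rng = np.random.default_rng(7)
n_subj, n_vis, n_trials = 11, 4, 15   # roughly the scale of our study

def inv_logit(x):
    return 1 / (1 + np.exp(-x))

# Fixed and random effects on the link scale (all values assumed)
beta_vis = rng.normal(0.0, 0.5, n_vis)             # per-visualization effect
gamma_sv = rng.normal(0.0, 0.3, (n_subj, n_vis))   # subject-by-vis random effect
phi = np.exp(rng.normal(2.0, 0.3, (n_subj, n_vis)))        # precision submodel
alpha = inv_logit(rng.normal(-2.0, 0.5, (n_subj, n_vis)))  # P(exactly correct)

# Mean submodel, then Beta-distributed absolute error scaled to (0, 1)
mu = inv_logit(-1.5 + beta_vis[None, :] + gamma_sv)
a, b = mu * phi, (1 - mu) * phi
errors = rng.beta(a[..., None], b[..., None], (n_subj, n_vis, n_trials))

# Zero inflation: with probability alpha, the response is exactly correct
zeros = rng.random((n_subj, n_vis, n_trials)) < alpha[..., None]
errors[zeros] = 0.0
```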
By adding these additional terms and submodels, the hierarchical model is able to capture a participant having different types of error or bias for each visualization type. We replicate the same model used in that work, including the same weakly informative priors they reported, and then compare our analysis to their study. We note that the number of participants (eleven vs. more than a hundred) and the number of trials per participant (approximately sixty vs. more than a thousand) results in experimental datasets of different scales, and so some of the analysis of the resulting hierarchical models still had high levels of uncertainty. But we highlight two charts that can tell us more about the difference between the BOVI subjects of our study and the sighted participants in the previous study.

First, in Fig 5, we compare the Cumulative Distribution Functions of the hierarchical models of the absolute response error of the participants’ answers. We are able to produce this chart with the hierarchical model from Davis et al. because it calculates a posterior distribution rather than point estimates, letting us look at the shape of the types of error we found in our study. It is notable that both studies show the same ranking of chart types consistently across different levels of error, both low (less than 5 absolute error) and high (more than 20 absolute error). At the same time, our study appears to have fewer responses with very small error (less than 5 absolute error), indicated by the flat region at the very left of the plot. This is consistent with repeated think-aloud statements by our subjects that it was very hard to decide between small differences in estimated answers, i.e. 70% vs. 75%.
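A sketch of how such a comparison can be drawn from samples, here as empirical CDFs over draws of absolute error (the plotting details are our own):

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_error_cdfs(errors_by_chart):
    """Step-plot an empirical CDF of absolute error for each chart type."""
    for chart, errs in sorted(errors_by_chart.items()):
        x = np.sort(np.asarray(errs, float))
        y = np.arange(1, len(x) + 1) / len(x)
        plt.step(x, y, where="post", label=chart)
    plt.xlabel("absolute error (percentage points)")
    plt.ylabel("P(error <= x)")
    plt.legend()
    plt.show()
```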

Next, in Fig 6, we compare the mean absolute error across our eleven participants on each chart type with errors from the Davis et al. experiment. In the parallel coordinates chart, the mean performance of each participant across all 10-20 trials from each of the four chart types is calculated. This chart was used in Davis et al. to show that some participants have different rankings or relative performances across visualization types - one pink line, for example, showed a bar chart being less effective than a stacked bar chart. In our data, on the left, we do see similar variance in the rankings of the four chart types within our eleven participants. It is also notable that the performance appeared to vary broadly in our sample of eleven participants. However, it isn’t clear that the variance noticed in our study was markedly different from the variance evident in Davis et al.
Our hierarchical analysis revealed both similarities and differences with previously reported results from a wider study run on sighted participants in a crowdsourced setting. By fitting the data to a Bayesian model, we are able to analyze the CDFs and note that the performance of BOVI participants is similarly distributed across true error types, although there is some difference in the distribution of guesses with small error. In addition, the comparison of mean performances per participant per chart type showed that while our participants had noticeable variance in performance and preference of charts, it wasn’t clear whether this differed from previously reported results. Both findings suggest that a larger study would be needed to analyze the types of between-subject and within-subject variance that appear in the data, and to compare them with statistical significance to previous studies.
5.3. Qualitative Analysis of Subject Strategies
Through think-aloud statements during the experiment, ad-hoc interviews after the experiment with all eleven participants, and the followup group interview with four participants, we gathered qualitative data about the difficulties encountered by our subjects in reading our tactile graphics, as well as the strategies they employed. We conduct a thematic analysis of this data, clustering statements into themes and highlighting quotes from our subjects.
5.3.1. Strategies
While the strategies varied slightly between chart types and between participants, there were two broad strategies used to measure distance-based encodings, such as the length of a bar or the arc length of a pie chart wedge (calipers and rulers), and one strategy for measuring areas (splaying fingers).
In the first strategy, the thumb and forefinger are used to measure a length between two raised bumps in the tactile graphics (see Fig 8). In a comparison task like the one in our experiment, the hand that makes this first measurement locks, similar to a caliper, and is then slid over to the second measurement (i.e. moving from the smaller bar to the larger bar). The second measurement was typically made using the second hand, also as a caliper. The two calipers were then described as being compared, or even visualized, to estimate the relative measurement between them.
In the second strategy, the fingers of one hand were used as measurements of length, i.e. one bar was a pinkie, ring finger, and middle finger, while the other bar was just a pinkie and a ring finger (Fig 8). In this strategy, participants often pointed out that it was difficult when the measurement didn’t match perfectly with one of their fingers (i.e. the length was two and a half fingers), and that it was also difficult translating from finger widths to relative percentages, since fingers are of different lengths.


While all four chart types can be read via measuring the length of a feature of the graphical primitive, our participants more frequently measured the full area of bubble charts, and sometimes measured the full area of the wedges of the pie chart. The general strategy for measuring area involved locating the center of the area, and then splaying the fingers outward to sense its size (Fig 9). Some participants would use both hands to measure areas, while others would use the same hand to measure both stimuli to make their relative measurements. The latter was less effective because it does not rely on a static measurement, like the calipers or ruler method, that can be shifted from one stimulus to the next. This may explain why area marks like those in the bubble chart result in greater error in our experimental data.
For stacked bar charts and pie charts, participants frequently rotated the chart to make the physical measurements more comfortable from their seating position. An example can be seen in the participant’s reading of the stacked bar chart in Fig 8, where the fingers pointing away from the body should be perpendicular to the axis being measured. The orientation of the graphical primitives appears to be important for tactile perception in a unique way compared to visual graphical primitives, and should be analyzed in a future study.
Lastly, participants frequently made multiple readings of the same chart to check their work. This was described by a participant as necessary since they were making only estimations of the encoded values, and so by estimating twice, they have less of a chance of making a mistaken estimate. It is not known whether visual perception involves multiple redundant measurements of the same visual channels because the measurement can be implicit, but it should be recognized that any primitives designed for tactile graphics should account for this need to redundantly measure to mitigate uncertainty in estimates.

5.3.2. Difficulties
Participants described their frustrations throughout think-aloud, and were explicitly asked about what made it difficult to read the charts. We broadly categorize these difficulties into challenges with primitive scale, distance traveled, tactile noise, local vs. global sense of the data, and visual memory. For scale, we note that both the calipers strategy and the ruler strategy have limitations in the scale of graphics they can measure. While visualizations for sighted viewers have limitations of scale as well, reasonable scales may be much larger (i.e. large displays) and much smaller (i.e. Apple Watch) than the reasonable scales for tactile graphics. Participants mentioned that having smaller hands felt like a disadvantage for some of the larger graphical primitives being read. The distance between the marks being measured, and its orientation, also added difficulties, particularly with the caliper strategy, which often required the measuring hand to move from one mark to the other to make a comparison. The longer this distance was, the greater the chance of accidentally shifting the measurement of the caliper. This was sometimes exacerbated by marks with orientations that were not axis-aligned, as in the bubble chart and pie chart.
Starting from Tufte’s rules about maximizing the data-ink ratio (Tufte and Graves-Morris, 1983), it is accepted that additional noise in the visual encoding can distract from the underlying perceptual task. Our participants mentioned that the design of our graphics contained noise that made it difficult to focus on or even locate the particular marks being measured. However, the types of noise that were identified were not anticipated. First, participants noted that the width of the lines in our swell form graphics was disruptive, and that they would prefer thinner lines. It was also noted that the white space between marks was possibly misleading, because the positive space inside the marks and the negative space between marks may be harder to discriminate in tactile perception. The bar chart was particularly challenged by white space, since the space between marks is fully redundant. Lastly, because participants can only use part of their hand at once and do not interact with the entire mark, the marks themselves may not have needed their full tactile encoding. This was most prevalent in the bubble chart, where the areas where the circles touch are the noisiest, but those areas were not typically used by participants. It may be that space-filling visualizations like bubble charts are not effective for tactile visualization because of the noise resulting from the cramped and unaligned layout of marks. In general, the heuristics and gestalt rules (Wong, 2010) taken for granted in visual design should be interrogated for tactile design.
A broader difficulty experienced by our participants was the relationship between a global view of the data and a local view of the data. Shneiderman’s mantra (overview first, zoom and filter, details on demand), a motivating heuristic for visualization design, states that a global view of the data is usually the first impression that a user wants (Shneiderman, 2003). However, one participant brought up independently that they did not typically generate a global perception of the data: “When you’re using your fingers to do things, your hand is covering things you can’t be viewing. We can’t look at these things globally. We have to look at them in a micro- rather than a macro- kind of way.” This suggests that elementary perceptual tasks like those used in this study and in Cleveland and McGill, which are typically local, might be expected to be comparable between sighted and visually impaired participants, but more global tasks could result in a wider gap and a greater need for new encodings for tactile graphics.
Lastly, participants frequently brought up the significance of having visual memory for perceiving graphical primitives. They described that it is generally well known within the BOVI community that those who become blind during their lives often have visual memories that can improve their ability to interpret visual concepts. One participant of our group interview was born blind, rather than becoming blind during their life, and remarked on how they found the process of reading tactile graphics very frustrating: “I’ve never seen a graph, I’ve never seen anything visually. I may be frank – graphics mean nothing to me. I have no context, so I would dismiss them out of hand if they aren’t built for me… If they’re built from a visual place, at least for me, they mean nothing, they have no resonance with me whatsoever.” This suggests that additional studies are necessary to understand whether the presence of visual memory impacts the perceptual accuracy of BOVI subjects. It also suggests that a user-driven design of tactile graphical primitives, designed primarily for those born blind, could result in more effective encodings for tactile graphics.
6. Discussion and Future Work
6.1. Future of Tactile Graphics
Our findings suggest that while tactile graphics adapted from visual design principles can be beneficial, they may not fully meet the accessibility needs of those born without sight. This aligns with recent studies emphasizing the importance of designing with and for BOVI users to create truly effective and accessible tactile graphics (Zong et al., 2022; Lundgard and Satyanarayan, 2021). As Reinders et al. (2024) suggest, incorporating conversational agents and advanced tactile displays could further enhance accessibility, pointing towards a multimodal approach as a promising future direction. Our results support this direction, indicating that tactile graphics should be part of a broader, inclusive design strategy that prioritizes user-centric development.
6.2. Limitations
While it produced useful insights, our study had some limitations. The primary constraint is the reliance on existing visual encodings which might not optimally translate into tactile formats. This limitation is evident in our mixed results where some tactile graphics performed well while others did not, indicating a possible mismatch in the tactile adaptation of visual data (Kim et al., 2023). Additionally, our sample size and the diversity of our participant group, though adequate for preliminary insights, may not fully represent the wider BOVI population, possibly limiting the generalizability of our findings.
The design of tactile graphics itself, produced using swell paper technology, introduces another limitation due to the restricted resolution and detail this method can provide. This potentially affects the users’ ability to discern fine details in complex graphs, potentially skewing the accuracy and speed of data interpretation when compared to visual graphs as noted in foundational studies (Cleveland and McGill, 1984). The accessibility and cost of producing high-quality tactile graphics also pose significant limitations. While swell form technology is cost-effective, it is not universally accessible, and the financial burden may be prohibitive for some institutions or individuals, limiting widespread adoption.
Moreover, the diversity of techniques used by participants to interpret the data underscores a limitation in the standardization of tactile graphics. For example, one participant employed a method of using fingers as calipers to measure the length of the bars, while another estimated spatial divisions through mental segmentations. Such personalized techniques, although creative, highlight the lack of a one-size-fits-all approach in the current tactile graphic designs.
Participants also noted difficulties with the texture and construction of tactile elements. Many found the uniformity in the texture of bars confusing and suggested that varying the texture could aid in differentiation. Double encoding, which combines tactile information with other sensory cues like varied textures, was recommended to improve usability and accuracy.
6.3. Future Studies
Looking forward, we propose a series of large-scale studies to further validate and refine the tactile graphic designs. These studies should explore a wider array of tactile and multimodal presentation techniques, involving more diverse participant groups to enhance the reliability and applicability of the findings. Furthermore, collaboration with BOVI participants should be emphasized to tailor designs more closely to user needs, potentially involving technologies that integrate tactile feedback with auditory and possibly olfactory cues for a richer user experience (Zong et al., 2022). We anticipate that such inclusive research efforts will be crucial in addressing the broader accessibility challenges faced by the aging population with vision loss (Alliance, 2022).
7. Conclusion
In this research, we replicated the Cleveland and McGill study on graphical perception using tactile graphics to explore their efficacy for blind or visually impaired individuals. Our findings suggest that while tactile representations derived from visual design principles hold promise, their efficacy is not universally optimal, particularly for those born without sight (e.g., Reinders et al., 2024). This highlights the need for designs that explicitly consider the unique perceptual requirements of blind or visually impaired users.
The exploration of tactile graphics in our study suggests that they can serve as effective tools for data representation for the visually impaired when adapted with consideration for tactile perception. However, the mixed results across different chart types indicate that further refinement is necessary to fully leverage these tools. This aligns with recent advances in multimodal data representation which suggest integrating tactile feedback with auditory and possibly olfactory cues to enrich the data interaction experience (Lundgard and Satyanarayan, 2021; Zong et al., 2022).
Future research should continue to innovate in the design of accessible data visualizations by engaging with blind or visually impaired users throughout the design process. This user-centric approach is essential for developing effective tactile graphics that are truly accessible and useful in practical scenarios. Further large-scale studies involving a broader spectrum of tactile and multimodal presentation techniques are recommended to enhance the reliability and applicability of these technologies.
As we look towards the future, it is clear that tactile graphics will continue to evolve, reflecting the advancements in technology and a deeper understanding of accessible design. It is our hope that these efforts will significantly reduce the barriers faced by the visually impaired community, granting them greater access to the burgeoning field of data science and visual analytics.
References
- Alliance (2022) VisionServe Alliance. 2022. United States’ Older Population and Vision Loss: A Briefing. https://visionservealliance.org. https://drive.google.com/file/d/1FnyenjMMa4LZNX1gbiaY8klWT-joyZ6D/view Prepared by: The Ohio State University, College of Optometry.
- Ault et al. (2002) H. K. Ault, J. W. Deloge, R. W. Lapp, M. J. Morgan, and J. R. Barnett. 2002. Evaluation of Long Descriptions of Statistical Graphics for Blind and Low Vision Web Users. In Computers Helping People with Special Needs, Klaus Miesenberger, Joachim Klaus, and Wolfgang Zagler (Eds.). Springer Berlin Heidelberg, Berlin, Heidelberg, 517–526.
- Bru et al. (2023) Egil Bru, Thomas Trautner, and Stefan Bruckner. 2023. Line Harp: Importance-Driven Sonification for Dense Line Charts. arXiv:2307.16589 [cs.GR] https://arxiv.org/abs/2307.16589
- Cherukuru et al. (2022) Nihanth W Cherukuru, David A Bailey, Tiffany Fourment, Becca Hatheway, Marika M Holland, and Matt Rehme. 2022. Beyond Visuals : Examining the Experiences of Geoscience Professionals With Vision Disabilities in Accessing Data Visualizations. arXiv:2207.13220 [cs.CY] https://arxiv.org/abs/2207.13220
- Choi et al. (2019) Jinho Choi, Sanghun Jung, Deok Gun Park, Jaegul Choo, and Niklas Elmqvist. 2019. Visualizing for the Non-Visual: Enabling the Visually Impaired to Use Visualization. Computer Graphics Forum 38, 3 (2019).
- Cleveland and McGill (1984) William S Cleveland and Robert McGill. 1984. Graphical perception: Theory, experimentation, and application to the development of graphical methods. Journal of the American statistical association 79, 387 (1984), 531–554.
- Cohen et al. (2006) Robert F. Cohen, Arthur Meacham, and Joelle Skaff. 2006. Teaching graphs to visually impaired students using an active auditory interface. In Proceedings of the 37th SIGCSE Technical Symposium on Computer Science Education (Houston, Texas, USA) (SIGCSE ’06). Association for Computing Machinery, New York, NY, USA, 279–282. https://doi.org/10.1145/1121341.1121428
- Davis et al. (2022) Russell Davis, Xiaoying Pu, Yiren Ding, Brian D Hall, Karen Bonilla, Mi Feng, Matthew Kay, and Lane Harrison. 2022. The risks of ranking: Revisiting graphical perception to model individual differences in visualization performance. IEEE Transactions on Visualization and Computer Graphics 30, 3 (2022), 1756–1771.
- Demir et al. (2010) Seniz Demir, David Oliver, Edward Schwartz, Stephanie Elzer, Sandra Carberry, Kathleen F. Mccoy, and Daniel Chester. 2010. Interactive SIGHT: textual access to simple bar charts. New Review of Hypermedia and Multimedia 16, 3 (2010), 245–279. https://doi.org/10.1080/13614568.2010.534186
- Elavsky et al. (2023) Frank Elavsky, Lucas Nadolskis, and Dominik Moritz. 2023. Data Navigator: An accessibility-centered data navigation toolkit. arXiv:2308.08475 [cs.HC] https://arxiv.org/abs/2308.08475
- Engel and Weber (2017a) Christin Engel and Gerhard Weber. 2017a. Analysis of Tactile Chart Design. In Proceedings of the 10th International Conference on PErvasive Technologies Related to Assistive Environments (Island of Rhodes, Greece) (PETRA ’17). Association for Computing Machinery, New York, NY, USA, 197–200. https://doi.org/10.1145/3056540.3064955
- Engel and Weber (2017b) Christin Engel and Gerhard Weber. 2017b. Improve the Accessibility of Tactile Charts. In Human-Computer Interaction - INTERACT 2017, Regina Bernhaupt, Girish Dalvi, Anirudha Joshi, Devanuj K. Balkrishan, Jacki O’Neill, and Marco Winckler (Eds.). Springer International Publishing, Cham, 187–195.
- Engel and Weber (2018) Christin Engel and Gerhard Weber. 2018. A User Study to Evaluate Tactile Charts with Blind and Visually Impaired People. In Computers Helping People with Special Needs, Klaus Miesenberger and Georgios Kouroupetroglou (Eds.). Springer International Publishing, Cham, 177–184.
- Fan et al. (2022) Danyang Fan, Alexa Fay Siu, Wing-Sum Adrienne Law, Raymond Ruihong Zhen, Sile O’Modhrain, and Sean Follmer. 2022. Slide-Tone and Tilt-Tone: 1-DOF Haptic Techniques for Conveying Shape Characteristics of Graphs to Blind Users. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (New Orleans, LA, USA) (CHI ’22). Association for Computing Machinery, New York, NY, USA, Article 477, 19 pages. https://doi.org/10.1145/3491102.3517790
- Fritz and Barner (1999) J.P. Fritz and K.E. Barner. 1999. Design of a haptic data visualization system for people with visual impairments. IEEE Transactions on Rehabilitation Engineering 7, 3 (1999), 372–384. https://doi.org/10.1109/86.788473
- Gelman et al. (2020) Andrew Gelman, Aki Vehtari, Daniel Simpson, Charles C Margossian, Bob Carpenter, Yuling Yao, Lauren Kennedy, Jonah Gabry, Paul-Christian Bürkner, and Martin Modrák. 2020. Bayesian workflow. arXiv preprint arXiv:2011.01808 (2020).
- Goncu et al. (2010) Cagatay Goncu, Kim Marriott, and John Hurst. 2010. Usability of Accessible Bar Charts. In Diagrammatic Representation and Inference, Ashok K. Goel, Mateja Jamnik, and N. Hari Narayanan (Eds.). Springer Berlin Heidelberg, Berlin, Heidelberg, 167–181.
- Guinness et al. (2019) Darren Guinness, Annika Muehlbradt, Daniel Szafir, and Shaun K. Kane. 2019. RoboGraphics: Dynamic Tactile Graphics Powered by Mobile Robots. In Proceedings of the 21st International ACM SIGACCESS Conference on Computers and Accessibility (Pittsburgh, PA, USA) (ASSETS ’19). Association for Computing Machinery, New York, NY, USA, 318–328. https://doi.org/10.1145/3308561.3353804
- Heer and Bostock (2010) Jeffrey Heer and Michael Bostock. 2010. Crowdsourcing graphical perception: using mechanical turk to assess visualization design. In Proceedings of the SIGCHI conference on human factors in computing systems. 203–212.
- Kildal and Brewster (2006) Johan Kildal and Stephen A. Brewster. 2006. Non-visual overviews of complex data sets. In CHI ’06 Extended Abstracts on Human Factors in Computing Systems (Montréal, Québec, Canada) (CHI EA ’06). Association for Computing Machinery, New York, NY, USA, 947–952. https://doi.org/10.1145/1125451.1125634
- Kim and Lim (2011a) Da-jung Kim and Youn-kyung Lim. 2011a. Handscope: enabling blind people to experience statistical graphics on websites through haptics. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Vancouver, BC, Canada) (CHI ’11). Association for Computing Machinery, New York, NY, USA, 2039–2042. https://doi.org/10.1145/1978942.1979237
- Kim and Lim (2011b) Da-jung Kim and Youn-kyung Lim. 2011b. Handscope: enabling blind people to experience statistical graphics on websites through haptics. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Vancouver, BC, Canada) (CHI ’11). Association for Computing Machinery, New York, NY, USA, 2039–2042. https://doi.org/10.1145/1978942.1979237
- Kim et al. (2024) Hyeok Kim, Yea-Seul Kim, and Jessica Hullman. 2024. Erie: A Declarative Grammar for Data Sonification. In Proceedings of the CHI Conference on Human Factors in Computing Systems (Honolulu, HI, USA) (CHI ’24). Association for Computing Machinery, New York, NY, USA, Article 986, 19 pages. https://doi.org/10.1145/3613904.3642442
- Kim et al. (2023) Jiho Kim, Arjun Srinivasan, Nam Wook Kim, and Yea-Seul Kim. 2023. Exploring chart question answering for blind and low vision users. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems. 1–15.
- Ladner et al. (2005) Richard E. Ladner, Melody Y. Ivory, Rajesh Rao, Sheryl Burgstahler, Dan Comden, Sangyun Hahn, Matthew Renzelmann, Satria Krisnandi, Mahalakshmi Ramasamy, Beverly Slabosky, Andrew Martin, Amelia Lacenski, Stuart Olsen, and Dmitri Groce. 2005. Automating tactile graphics translation. In Proceedings of the 7th International ACM SIGACCESS Conference on Computers and Accessibility (Baltimore, MD, USA) (Assets ’05). Association for Computing Machinery, New York, NY, USA, 150–157. https://doi.org/10.1145/1090785.1090814
- Lundgard et al. (2019) Alan Lundgard, Crystal Lee, and Arvind Satyanarayan. 2019. Sociotechnical Considerations for Accessible Visualization Design. In 2019 IEEE Visualization Conference (VIS). 16–20. https://doi.org/10.1109/VISUAL.2019.8933762
- Lundgard and Satyanarayan (2021) Alan Lundgard and Arvind Satyanarayan. 2021. Accessible visualization via natural language descriptions: A four-level model of semantic content. IEEE transactions on visualization and computer graphics 28, 1 (2021), 1073–1083.
- Marriott et al. (2021) Kim Marriott, Bongshin Lee, Matthew Butler, Ed Cutrell, Kirsten Ellis, Cagatay Goncu, Marti Hearst, Kathleen McCoy, and Danielle Albers Szafir. 2021. Inclusive data visualization for people with disabilities: a call to action. Interactions 28, 3 (apr 2021), 47–51. https://doi.org/10.1145/3457875
- McDonnall and Sui (2019) Michele C. McDonnall and Zhen Sui. 2019. Employment and Unemployment Rates of People who are Blind or Visually Impaired. Journal of Visual Impairment & Blindness 113, 3 (2019), 245–254.
- Mishra et al. (2022) Prerna Mishra, Santosh Kumar, Mithilesh Kumar Chaube, and Urmila Shrawankar. 2022. ChartVi: Charts summarizer for visually impaired. Journal of Computer Languages 69 (2022), 101107. https://doi.org/10.1016/j.cola.2022.101107
- Reinders et al. (2024) Samuel Reinders, Matthew Butler, Ingrid Zukerman, Bongshin Lee, Lizhen Qu, and Kim Marriott. 2024. When Refreshable Tactile Displays Meet Conversational Agents: Investigating Accessible Data Presentation and Analysis with Touch and Speech. arXiv preprint arXiv:2408.04806 (2024).
- Rosenberg (2021) Naomi Rosenberg. 2021. What does Flattening the Curve look like? https://lighthouse-sf.org/2021/04/30/flattening-the-curve/
- Rowell and Ungar (2003) Jonathan Rowell and Simon Ungar. 2003. The world of touch: an international survey of tactile maps. Part 1: production. British Journal of Visual Impairment 21, 3 (2003), 98–104. https://doi.org/10.1177/026461960302100303
- Rowell and Ungar (2005) Jonathan Rowell and Simon Ungar. 2005. Feeling our way: tactile map user requirements-a survey. In International Cartographic Conference, La Coruna, Vol. 152.
- Sharif et al. (2021) Ather Sharif, Sanjana Shivani Chintalapati, Jacob O. Wobbrock, and Katharina Reinecke. 2021. Understanding Screen-Reader Users’ Experiences with Online Data Visualizations. In Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility (Virtual Event, USA) (ASSETS ’21). Association for Computing Machinery, New York, NY, USA, Article 14, 16 pages. https://doi.org/10.1145/3441852.3471202
- Sharif et al. (2022) Ather Sharif, Olivia H. Wang, Alida T. Muongchan, Katharina Reinecke, and Jacob O. Wobbrock. 2022. VoxLens: Making Online Data Visualizations Accessible with an Interactive JavaScript Plug-In. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (New Orleans, LA, USA) (CHI ’22). Association for Computing Machinery, New York, NY, USA, Article 478, 19 pages. https://doi.org/10.1145/3491102.3517431
- Shneiderman (2003) Ben Shneiderman. 2003. The eyes have it: A task by data type taxonomy for information visualizations. In The craft of information visualization. Elsevier, 364–371.
- Siu et al. (2022) Alexa Siu, Gene S-H Kim, Sile O’Modhrain, and Sean Follmer. 2022. Supporting Accessible Data Visualization Through Audio Data Narratives. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (New Orleans, LA, USA) (CHI ’22). Association for Computing Machinery, New York, NY, USA, Article 476, 19 pages. https://doi.org/10.1145/3491102.3517678
- Tabrik (2022) S. Tabrik. 2022. Neural Mechanisms Underlying Cross-Modal Object Categorization: Visual and Tactile Senses. (2022). https://scholar.archive.org/work/i6f56afzgba6xbxadt6rztwita/access/wayback/https://hss-opus.ub.ruhr-uni-bochum.de/opus4/frontdoor/deliver/index/docId/9089/file/diss.pdf
- Talbot et al. (2014) Justin Talbot, Vidya Setlur, and Anushka Anand. 2014. Four Experiments on the Perception of Bar Charts. IEEE Transactions on Visualization and Computer Graphics 20, 12 (2014), 2152–2160. https://doi.org/10.1109/TVCG.2014.2346320
- Thompson et al. (2023) John R Thompson, Jesse J Martinez, Alper Sarikaya, Edward Cutrell, and Bongshin Lee. 2023. Chart Reader: Accessible Visualization Experiences Designed with Screen Reader Users. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (Hamburg, Germany) (CHI ’23). Association for Computing Machinery, New York, NY, USA, Article 802, 18 pages. https://doi.org/10.1145/3544548.3581186
- Thunström et al. (2020) Linda Thunström, Stephen C Newbold, David Finnoff, Madison Ashworth, and Jason F Shogren. 2020. The benefits and costs of using social distancing to flatten the curve for COVID-19. Journal of Benefit-Cost Analysis 11, 2 (2020), 179–195.
- Tufte and Graves-Morris (1983) Edward R Tufte and Peter R Graves-Morris. 1983. The visual display of quantitative information. Vol. 2. Graphics press Cheshire, CT.
- Wall and Brewster (2006) Steven Wall and Stephen Brewster. 2006. Feeling what you hear: tactile feedback for navigation of audio graphs. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Montréal, Québec, Canada) (CHI ’06). Association for Computing Machinery, New York, NY, USA, 1123–1132. https://doi.org/10.1145/1124772.1124941
- Wang et al. (2022) R Wang, C Jung, and Y Kim. 2022. Seeing Through Sounds: Mapping Auditory Dimensions to Data and Charts for People with Visual Impairments. Computer Graphics Forum 41, 3 (2022).
- Watanabe and Inaba (2018) Tetsuya Watanabe and Naoki Inaba. 2018. Textures Suitable for Tactile Bar Charts on Capsule Paper. Transactions of the Virtual Reality Society of Japan 23, 1 (2018), 13–20. https://doi.org/10.18974/tvrsj.23.1_13
- Watanabe and Mizukami (2018) Tetsuya Watanabe and Hikaru Mizukami. 2018. Effectiveness of Tactile Scatter Plots: Comparison of Non-visual Data Representations. In Computers Helping People with Special Needs, Klaus Miesenberger and Georgios Kouroupetroglou (Eds.). Springer International Publishing, Cham, 628–635.
- Wong (2010) Bang Wong. 2010. Points of view: Gestalt principles (Part 1). nature methods 7, 11 (2010), 863.
- Yang et al. (2020) Yalong Yang, Kim Marriott, Matthew Butler, Cagatay Goncu, and Leona Holloway. 2020. Tactile Presentation of Network Data: Text, Matrix or Diagram?. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (Honolulu, HI, USA) (CHI ’20). Association for Computing Machinery, New York, NY, USA, 1–12. https://doi.org/10.1145/3313831.3376367
- Yu et al. (2001) Wai Yu, Ramesh Ramloll, and Stephen Brewster. 2001. Haptic graphs for blind computer users. In Haptic Human-Computer Interaction, Stephen Brewster and Roderick Murray-Smith (Eds.). Springer Berlin Heidelberg, Berlin, Heidelberg, 41–51.
- Zhao et al. (2008) Haixia Zhao, Catherine Plaisant, Ben Shneiderman, and Jonathan Lazar. 2008. Data Sonification for Users with Visual Impairment: A Case Study with Georeferenced Data. ACM Trans. Comput.-Hum. Interact. 15, 1, Article 4 (may 2008), 28 pages. https://doi.org/10.1145/1352782.1352786
- Zong et al. (2022) Jonathan Zong, Crystal Lee, Alan Lundgard, JiWoong Jang, Daniel Hajas, and Arvind Satyanarayan. 2022. Rich screen reader experiences for accessible data visualization. In Computer Graphics Forum, Vol. 41. Wiley Online Library, 15–27.