Data Quality, Mismatched Expectations, and Moving Requirements: The Challenges of User-Centred Dashboard Design
Abstract.
Interactive information dashboards can help both specialists and the general public understand complex datasets, but interacting with them often presents challenges such as understanding and verifying the presented information. To overcome these challenges, developers first need a thorough understanding of user perspectives, including the strategies users adopt when presented with problematic dashboards. We interviewed seventeen dashboard developers to establish (i) their understanding of user problems, (ii) the adaptations introduced as a result, and (iii) whether user-tailored dashboards can cater for users’ individual differences. We find that users’ literacy does not typically align with that required to use dashboards, and that dashboard developers struggle to keep up with changing requirements. We also find that developers are able to propose solutions to most, but not all, user problems. Encouragingly, our findings also highlight that tailoring dashboards to individual user needs is not only desirable but also feasible. These findings inform future dashboard design recommendations that can mitigate the identified challenges, including recommendations for data presentation and visual literacy.
1. Introduction
Interactive information dashboards enable interaction with complex data sets using visualisations, tables or maps, often presenting multiple such widgets together on a single display. Dashboards are used in many domains such as healthcare (Buttigieg et al., 2017; Dowding et al., 2015; Koopman et al., 2011), education (Roberts et al., 2017; Schwendimann et al., 2016; Jivet et al., 2018) and urban development (Kitchin et al., 2015; Pathak et al., 2015; Lee et al., 2015). Despite dashboards’ increased adoption, users face interaction and information comprehension challenges caused mainly by information overload (Sarikaya et al., 2018; Buttigieg et al., 2017) and visual literacy gaps (Wakeling et al., 2015; Srinivasan et al., 2018; Dowding et al., 2018). These challenges are complex and multifaceted, but evidence suggests that they partially emerge from dashboard developers prioritising visual appeal over functional effectiveness (Few, 2006; Eckerson, 2010).
Addressing the problems users face is key as dashboards continue to be crucial artefacts for decision making in important areas such as epidemiology and finance. Recently, in response to the COVID-19 pandemic, many interactive dashboards were developed by health agencies worldwide, non-governmental organisations and individuals (WHO, 2020; CDC, 2020; NHS, 2020; MOH, 2020). These dashboards visualise epidemiological data to track disease outbreaks as they unfold and to monitor and report on future outbreaks (Dong et al., 2020). In response to existing problems, public health officials have been creating tools and tutorials to encourage users to become dashboard co-creators, to think about data modelling and manipulation, and to critically assess the making of COVID-19 visualisations. The We Rate Covid Dashboards project (Forman et al., 2021) reflects the magnitude of the problem: it rates COVID-19 dashboards on visual presentation, navigation and data detail. No more than 25% of 369 reviewed dashboards received a high rating, and only 1% received the top rating.
Although a body of research exists on user-centred dashboards (Dowding et al., 2018), users still encounter problems. Since a lack of user involvement during development can result in dashboards that are difficult to use (Roberts et al., 2017; Lee et al., 2015), dashboard developers should consider the different levels of visual literacy of the intended users, the constraints of the layouts and the information presented in order to promote engagement and trust (Srinivasan et al., 2018; Herder and van Maaren, 2020; Wakeling et al., 2015; Kia et al., 2020). Along similar lines, previous research has examined software developers’ awareness of accessibility and how they can achieve success in mobile application and game development (Antonelli et al., 2018; Srisopha et al., 2021; Kultima, 2015). However, aside from scarce work demonstrating the challenges of user-centred dashboard development (DiMicco and Mann, 2016), the role of developers in these challenges and their solutions is not yet well understood. Hence, it is imperative to capture dashboard developers’ perspectives in order to determine whether dashboard development is inherently difficult and whether developers are aware of the problems users encounter. Thus, our work aligns developers’ understanding of dashboard problems reported in the literature with appropriate solutions for addressing them.
Rather than pushing users into developing workarounds to overcome the problems they encounter, recent research has suggested that dashboards should cater for individual users’ characteristics (Peischl and Lang, 2013; Sarikaya et al., 2018; Weggelaar-Jansen et al., 2018; Dabbebi et al., 2017; Vázquez-Ingelmo et al., 2020; Eckerson, 2010). Three potential mechanisms for doing this are customisation, personalisation and automatic adaptation (Vázquez-Ingelmo et al., 2019a). We define an ‘adaptation’ as any technical intervention introduced by developers to mitigate a problem, which may be delivered through the following tailoring mechanisms: ‘customisation’ is performed by the user to change any aspect of the dashboard to their needs, while ‘personalisation’ is done by the system at dashboard creation or loading time. ‘Automatic adaptations’ (not to be confused with the broader ‘adaptation’ above) are real-time updates to the dashboard based on user models. We refer to the use of one or more of these three mechanisms on dashboards as tailoring.
In this paper, we present the findings from interviews with seventeen dashboard development experts. Our interviews explore their awareness of user problems that have been reported in the literature, the development practices contributing to these problems and the possible solutions to them. Our specific research questions are:
• RQ1. What are the dashboard users’ challenges from the developers’ point of view? Are developers aware of the existing interaction problems?
• RQ2. What are the possible adaptation techniques to the challenges? Can tailoring, especially automatic adaptations, solve these challenges?
Our contributions are the following:
• We learned that developers are well aware of most of the literature-reported problems: there is considerable overlap between what developers report and the literature on dashboard problems, although the overlap is not complete.
• We identified new user problems with dashboards, as reported by developers, that were not in the literature, such as users’ tendency to verify the displayed data.
• We isolated the causes of known problems: making sense of the data is one of the most frequent problems users have, largely caused by a gap between the visual literacy users possess and the literacy developers expect. We also learned about key development practices that exacerbate existing problems, including the pressure to deviate from design guidelines, which leads to ineffective data presentation.
• Finally, our findings suggest the feasibility of automatic adaptations to dashboards to address users’ interaction problems, as demonstrated by the adaptations suggested and employed by the developers.
2. Related Work
In this section we first review the literature on the barriers encountered by dashboard users. We then discuss the mechanisms for adaptive dashboards reported in the literature, which do not necessarily correspond to the mentioned barriers.
While information dashboards are ubiquitous nowadays, understanding and using dashboards has always been problematic. Existing difficulties are sometimes attributed to a gap between the visual literacy developers expect and users’ actual literacy (Peer, 2019; Vornhagen et al., 2018; Sarikaya et al., 2018; Weggelaar-Jansen et al., 2018). This gap widens in inherently complex domains such as city governance (Vornhagen et al., 2018) or with users with low visual and analytic literacy (Weggelaar-Jansen et al., 2018; Barnett et al., 2019). Difficulties in understanding data on dashboards can also be caused by poor information presentation (Sarikaya et al., 2018). Data representation using graphs can come across as dry, leaving some users uninterested (Hagood et al., 2016). For self-tracking data, presenting correlations without overwhelming users with data is a known challenge (Jones, 2015). For users who need to know which information is relevant for decision making, some authors call for explicit instructions on operating dashboards (Colley et al., 2016). Another example of misalignment is the lack of correspondence between the data and users’ needs in learning analytics dashboards (Echeverria et al., 2018).
Data sharing, privacy and security have also been common issues in healthcare (Tendedez et al., 2018; Cohen, 2017) and education-related dashboards (Haupt et al., 2015; Yoo and De Choudhury, 2019). Both academic advisors and students have concerns about sharing student data voluntarily, even when it is intended to support student mental health or academic achievement (Yoo and De Choudhury, 2019; Sun et al., 2019). Handling fragmented or incomplete data sources is another frequently reported problem, where siloed data (data fragmented across different information systems) is detrimental to decision making (Haupt et al., 2015). In healthcare settings, clinicians are reluctant to act on dashboards if they do not have access to the source of the data and its metadata, indicating a lack of trust in dashboards (Tendedez et al., 2018).
Another challenge comes from inadequate support for user-requested features such as adjustments to the granularity of data aggregation and functionalities enabling comparison, customisation and annotation tasks (Sarikaya et al., 2018; Elias and Bezerianos, 2012). User training has also been described as a huge challenge, especially for clinicians, who are usually time-constrained (Tendedez et al., 2018). Other works investigate challenges in implementing dashboards (Cohen, 2017; Haupt et al., 2015; Sarikaya et al., 2018), including data quality and tool cost. Data quality problems in dashboards have negative consequences for data completeness, consistency and accuracy (Cohen, 2017). Other authors report that the business intelligence tools used to create dashboards are complex and need constant IT support (Haupt et al., 2015).
Dashboard design has also been an area of interest with works leading to design goals (Hagood et al., 2016; Charleer et al., 2018), design proposals (Echeverria et al., 2018; Jones, 2015) and design implications (Tendedez et al., 2018). Design goals include adaptability, transparency, intelligence and glanceability (Charleer et al., 2018), while design suggestions include using a data storytelling approach as a learning design (Echeverria et al., 2018). Jones (2015) suggests comparing self-tracking data based on the user’s needs or using feedback from other users. Involving all stakeholders in the dashboard design and development process has also been explored in education and operational monitoring with encouraging results and positive impact (Gilliot et al., 2018; Martins et al., 2017).
Recent research recognised that the full potential of dashboards can only be realised if individual users’ characteristics are factored into design (Peischl and Lang, 2013; Sarikaya et al., 2018; Weggelaar-Jansen et al., 2018; Dabbebi et al., 2017; Vázquez-Ingelmo et al., 2020), which is an explicit call for user-centred design of dashboards. Some authors suggest this could be achieved by providing customisation support to adapt dashboards to users’ needs (Sarikaya et al., 2018), though automatic adaptation to different users and displays remains an open research problem. This idea has been supported in the context of increasing dashboard understanding through adaptive functionalities (Weggelaar-Jansen et al., 2018). Model-driven approaches to automatic adaptation include adaptive learning analytics dashboards based on users’ activities, preferences and objectives (Dabbebi et al., 2017) and a visualisation recommender system based on use context, user characteristics, domain and tasks (Vázquez-Ingelmo et al., 2020).
Our approach goes beyond the identification of dashboard problems: we first explore developer awareness of the aforementioned user problems and elicit developers’ views on usable adaptation techniques to lessen them. We then look into the feasibility of tailoring dashboards as a way to inform future interventions. Thus, our paper contributes to (1) understanding developers’ awareness of users’ problems with dashboards and contrasting them with those reported in the literature, (2) isolating new user interaction problems, (3) identifying the reasons behind the challenges from a developer perspective and (4) assessing the feasibility of using automatic adaptations to address users’ interaction problems. While we find developers to be aware of these issues, users still encounter problems, suggesting that developer awareness alone is not sufficient and that further interventions are required in the dashboard development process.
3. Method
Dashboard development experts were recruited from specialised online forums, including the Tableau, Microsoft Power BI and Sisense communities, using purposeful and snowball sampling techniques. Participants were eligible if they had been working on analysing, designing or developing dashboards, or visualisations for dashboards, for at least two years, and if the dashboards had been created for someone else to use. Semi-structured interviews were conducted remotely using a video conferencing tool, and participants were incentivised with a £15 Amazon voucher. Ethical approval was obtained from The University of Manchester Research Ethics Committee.
3.1. Interview Topics and Materials
Our interviews explored the following topics: (i) experience and training undertaken to understand and work on data and dashboards; (ii) problems reported by users; (iii) techniques used and suggested to address reported problems; and (iv) tailoring dashboards to users (customisation, personalisation and automatic adaptation). To illustrate exactly what we meant by adaptations, we shared with developers six pairs of challenges and adaptations from the literature in textual format via a website (available in the supplementary material). Examples include supporting customisation by incorporating functionalities such as drag-and-drop, and recommending the most appropriate visualisations for the user’s goal, inferred by monitoring user behaviour and building user models. Similarly, we also showed developers adaptations applied to web pages (Brusilovski et al., 2007) to get their views on whether similar adaptations could be applied to dashboards (available in the supplementary material). Examples were: (1) page adaptation, selecting the most suitable version of a web page according to an interaction context model, and (2) adaptive navigation, generating, disabling or altering the appearance of hyperlinks based on the goals, preferences and knowledge of users.
3.2. Participants
We recruited seventeen participants (13 male, 4 female) from the UK (n=), USA (n=), India (n=), Australia (n=), Portugal (n=) and Malaysia (n=). The average age of participants was 39 years (SD =). Participants had on average years of experience developing dashboards (SD=), and the majority worked in the private sector (n=), while others were in academia (n=) and the public sector (n=). Participants were highly educated (one held an associate degree, eight held bachelor’s degrees, and eight held higher degrees) and came from many backgrounds, such as finance, business management, psychology, history, mathematics, computer science, information systems and engineering. Six participants were independent contractors, while eleven worked on dashboards for their organisations.
Participants had been involved in the development of operational (13), strategic (10) and analytical (7) dashboards (Few, 2006), along with other types that participants categorised as quality, clinical, reporting, tactical, prediction, dynamic, efficiency, process-oriented, informative and executive dashboards. These dashboards were used in diverse settings such as finance (9), healthcare (8), telecommunication (4), sales (3), business (2) and IT (2), among others. Participants used a plethora of tools to create dashboards, with Tableau (15) being the most popular, followed by Power BI (5), Alteryx (2), Oracle OBIEE (2), Qlik Sense (2), Cognos, Microstrategy, Shinyapps, Spotfire, Qlikview, Salesforce and SAP BusinessObjects. Tableau and Power BI constituted the largest share of tool mentions (45%), confirming existing evidence on dashboard development platform uptake (Richardson et al., 2021). Most developers had experience with a combination of these tools.
3.3. Analysis
Interviews took an average of 68 minutes to conduct (SD = 13 minutes) and were audio recorded and then transcribed. Dedoose 8.3 was used to perform inductive thematic analysis following Braun and Clarke’s methodology (Braun and Clarke, 2006): data familiarisation, code generation, theme search and revision, theme definition and naming, and report production. We first analysed dashboard problems in an emergent fashion to answer RQ1. Then, to answer RQ2, we followed a-priori coding to assign adaptations to tailoring mechanisms from the literature (customisation, personalisation and automatic adaptation). A co-author who was not involved in the project was given the transcripts and codebook to test the reliability of the coding. The inter-rater reliability results showed a moderate agreement, Cohen’s κ = .
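For reference, Cohen’s kappa compares observed coder agreement against agreement expected by chance; the following is the standard definition, not specific to our data:

\[ \kappa = \frac{p_o - p_e}{1 - p_e} \]

where \(p_o\) is the observed proportion of agreement between the two coders and \(p_e\) the proportion of agreement expected by chance; values between 0.41 and 0.60 are conventionally interpreted as moderate agreement.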
4. Results
We report the results of our analysis in two subsections. In Section 4.1, we thematically analysed the responses, assigning a total of 61 codes that yielded five main themes; the challenges within these themes are written in bold. Data in Section 4.2 was analysed following a-priori coding, where instances were classified into categories extracted from the literature (customisation, personalisation and automatic adaptation). Tables 1–4 in the Appendix provide a mapping between the challenges, the adaptations proposed by the developers and the implementation strategies suggested by the authors.
4.1. Dashboard Challenges
4.1.1. Involving Users in the Development
Most developers (n=12) involve users throughout the dashboard development process, while the rest involve them intermittently. In the latter case, users participate at the outset for requirements elicitation, at the end to summatively evaluate the dashboards, or at both stages. Developers agree that collecting requirements from end users is more effective than involving other stakeholders such as IT staff. While this may seem obvious, end users are more knowledgeable about their abilities and limitations and have a clearer understanding of how the dashboard can, or cannot, serve their needs. Developers seek consensus on the requirements, although the number of users and their availability often prevents this. As the number of users involved grows, more diverse (often contradictory) opinions have to be considered. Most users require training, but individual differences make training many users infeasible. These differences include users’ visual literacy, their willingness to adopt a new technology and the perceived benefit of such adoption (Yera et al., 2019; Yigitbasioglu and Velcu, 2012; Weggelaar-Jansen et al., 2018). Finally, arranging meetings with users in senior positions is a challenge in itself.
P16: “the ones that tend to be less agile are the ones where there are more departments involved because the more people you get involved, particularly senior staff, the harder it is to pin them down into meetings and to have consensus because we try and only have one dashboard that they all agree with.”
Prior to starting development, developers agree on the need to answer questions such as ‘why is this dashboard being developed?’, ‘what will be conveyed?’ and ‘who is the user that will be told the story?’ This information helps developers design the dashboard so it conveys the intended story. Developers often misunderstand user needs as defined in the requirements, which leads to a misalignment between what users need and what the developer implements (implementation misalignment). When collecting requirements, developers notice that users struggle to articulate their needs or do not have a clear idea of what their needs are. To overcome this problem, users are given a list of KPIs to prompt thinking about their needs.
Most developers (n=15) use dedicated software tools to build their dashboards. The majority of the tools are proprietary and very few developers use open source tools, which are typically more portable and extensible. Prior to development, some developers conduct workshops with users to understand what tools users have or can afford. If requirements cannot be supported with these tools, feature requests are submitted or an investment in another tool is made. As a result, some design decisions are dependent on the tools used. For example, the collected usage data is limited to what the platform shares with the developers, while customisation support is entirely left to the tool.
P15: “Not on Shinyapps at the moment, but certainly on Tableau, it comes with the service. We actively monitor who uses the Tableau server and who returns every month to show that it’s got value.”
The majority of interviewed developers (n=14) collect dashboard usage data, mostly provided by the tools, although collection is sometimes done offline by speaking to users directly. This data is used to find interaction patterns and monitor performance. The collection frequency varies from monthly to every six months, or data is collected passively and analysed only if an issue is reported. Some developers also collect data out of personal interest rather than for any specific purpose.
P16: “I get nosy sometimes and I want to see who is viewing the stuff that I have built because I want to know that it is still relevant to the business, sometimes it is and sometimes it is not.”
Most developers (n=10) say that users report the problems they face with dashboards. An absence of reports is understood as an absence of problems, attributed mainly to adjustments made responsively through existing feedback loops, or to users being taken along in the development process so that all their needs are catered for.
P1: “I think if they are designed properly and they are co-produced with users and you take into account their needs and you listen to what they are saying then actually you make them usable, so then they do not have any issues.”
4.1.2. All about Data: Performance, Access, Provenance, Currency and Metadata
Data is not always available, or requires further processing before it is usable (data quality issues). Data unavailability is exacerbated when users are responsible for storing their own data and when data is fragmented and siloed across different information systems. Moreover, developers report that when, how and where data was entered is important for decision making; this provenance information can also be problematic when users are in charge of collecting and sharing their own data (it is mostly not captured). In addition to provenance, the currency of the data is another user concern: users usually complain if the data shown on a dashboard does not refresh at their required rate. Since developers are aware that the amount of data they pull in determines how quickly visualisations render, requests for more granular data put the dashboard’s performance at risk.
P4: “Performance is very key to any kind of reporting. No one wants to be stuck on a graph or an object in the report rendering for more than five or ten seconds. If you are going to a granular level of data that is a different thing, it might take a little time because lots of rows need to be pulled in.”
Developers employ data caching techniques to improve dashboard performance. Another technique is to load closely related data in the background so it can be displayed on demand. When displaying large data sets that the dashboard cannot handle, a data extract (a compressed version) or a periodic refresh is used instead of live data. For data from heterogeneous sources with different formats, developers use dedicated tools.
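A minimal sketch of these performance strategies, assuming a generic Python back end rather than any specific dashboarding product; run_query and the row limit are illustrative stand-ins:

```python
import time
from functools import lru_cache

def run_query(query: str) -> list[int]:
    """Stand-in for a slow live data source."""
    time.sleep(0.1)  # simulate database latency
    return list(range(100_000))

@lru_cache(maxsize=32)
def cached_query(query: str) -> tuple[int, ...]:
    # Repeated dashboard loads reuse the cached result instead of the DB.
    return tuple(run_query(query))

def load_widget_data(query: str, row_limit: int = 10_000):
    rows = cached_query(query)
    if len(rows) > row_limit:
        # Too much data to render quickly: serve a compressed "extract"
        # (here, a uniform sample) rather than the full live result.
        return rows[:: len(rows) // row_limit]
    return rows

print(len(load_widget_data("SELECT amount FROM sales")))  # 10000
```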
Developers receive a high number of reports from users when dashboards do not show the expected information (data verification), which is often due to the display of data users were unaware of. When users are unsure of the accuracy of the information presented, they usually question the origin of the data. If the dashboard does not capture the depth of data that satisfies users’ needs, they frequently check the raw data sets. Developers try to prevent this workaround as they believe it undermines the dashboard’s value of providing data overviews.
P14: “a major challenge I’ve faced across multiple clients is the data mismatch. An executive, for example, sees everything that we have on Tableau. If they look at different views that we have created specifically for marketing they will say this data does not look right to me because I see my numbers on a daily basis, my number is 500 whereas on the marketing dashboard I see the number is 400. What they don’t realise is that the approaches that we had for two different dashboards in the back end can be different or there can be filters on the default view to cater for the need of that specific function.”
Building users’ trust in analytics is problematic when the presented information clashes with users’ biases (lack of trust). Developers agree that showing more information about the source of the data and its metadata can help users trust the data more. Allowing users to zoom into the data set can also build trust in the analytics because, for example, they gain a better view of how data was aggregated. It was also suggested that building trust in the developer and the tools used increases the overall trust in the dashboard.
P12: “trust in the analytics only comes from allowing quite a deep dive, especially with the clash between the gut instinct and what somebody sees within the data set […] Quite often, it’s a much longer term journey than just one piece of analytics or one dashboard to build that trust.”
Developers’ efforts to create barrier-free dashboards are challenged by issues such as interoperability and the authentication processes needed to protect data access and personalise the user experience. To make sure users have access privileges to the data requested, developers use workflows connecting data sources to user roles. Interoperability problems, at the data or platform level, within or across organisations, make the integration of dashboards into users’ daily lives difficult.
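A small sketch of such a role-to-data-source workflow; the roles, source names and mapping are all illustrative assumptions:

```python
# Map each user role to the data sources it is privileged to query.
ROLE_SOURCES: dict[str, set[str]] = {
    "executive": {"sales_summary", "finance_summary"},
    "analyst": {"sales_summary", "sales_raw"},
}

def authorised_sources(role: str, requested: set[str]) -> set[str]:
    """Keep only the data sources this role may query."""
    return requested & ROLE_SOURCES.get(role, set())

# An executive requesting raw rows is limited to the summary source.
print(authorised_sources("executive", {"sales_raw", "sales_summary"}))
```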
4.1.3. The Tensions of Addressing User Needs
Dashboard users frequently ask for new data, so developers find it difficult to fit all the required information on a single screen that users can grasp instantly. Putting too much information on a single display leads to information overload and prevents effective decision making; as a result, developers typically push back on such requests. To balance users’ frequent requests for more data, developers suggest combining data into smaller subsets and then enabling drilling into more granular data on demand (Shneiderman, 2003). To reduce the number of charts in dashboards, users are encouraged to keep charts that are still relevant and to remove those that are no longer needed.
P5: “users ask us [to] add this also, add that also in that particular chart, so many times we need to educate them, yes we can have this but on a different dashboard, let us not put everything in one dashboard otherwise it will be more and more complex and difficult to understand.”
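The overview-plus-drill-down strategy above can be sketched in plain Python; the record fields and aggregation are illustrative assumptions:

```python
from collections import defaultdict

SALES = [
    {"region": "EMEA", "product": "A", "amount": 120},
    {"region": "EMEA", "product": "B", "amount": 80},
    {"region": "APAC", "product": "A", "amount": 200},
]

def overview(rows):
    """Default view: one aggregate per region, small enough for one screen."""
    totals = defaultdict(float)
    for row in rows:
        totals[row["region"]] += row["amount"]
    return dict(totals)

def drill_down(rows, region):
    # Granular rows are only fetched when the user explicitly asks.
    return [row for row in rows if row["region"] == region]

print(overview(SALES))            # {'EMEA': 200.0, 'APAC': 200.0}
print(drill_down(SALES, "EMEA"))  # the two detailed EMEA records
```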
Sometimes, by the time developers create the dashboard, users realise that the original problem has changed and the created dashboard no longer answers their question. Developers understand the importance of capturing evolving realities in the domains of interest, but find it difficult to keep up with constant change. This can lead to dashboards that do not fully address all users’ needs.
P12: “Even the traditional agile two-week sprints don’t work with modern dashboard building mostly due to the fact that you are highlighting and showing information visually, that very quickly people can go and pick apart, learn something new and ask another question that is not in the original scope.”
Most participants (n=12) say they follow data visualisation guidelines when making dashboards, but some do not (n=5). Sources vary between the literature, dashboarding tools’ best practices (e.g., Tableau’s (Bausili and Hughes, 2016)) and their own guidelines accumulated from experience. Some also find inspiration on the Web (e.g., Google Image Search). Following guidelines is not always possible, since users’ preferences can contradict best practices, so there must be some “flexibility.”
P14: “Initially we tried following guidelines but users were not aware of whether this is the right approach or not and they were used to Excel so they wanted those Excel tables in Tableau, that’s it!”
Developers are sometimes asked to stick to specific chart types or colours because of the arbitrary preferences of managers, even if these do not conform to best practices (ineffective data presentation). Also, while users’ visual literacy is taken into account, it can be challenging to accommodate all users, as not everything can be represented with simple charts (Sarikaya et al., 2018). Moreover, to present data effectively, developers need to understand the KPIs of a variety of industries and domains.
P12: “I absolutely take into account what they are used to using and what they are capable of using based on their data literacy and then from there it is using the right charting for the job as well. So that depends on how hard I have to fight them on that or how much I need to train them out to be able to use the tool effectively.”
Developers agree that choosing visualisations should be informed by the questions the dashboard is trying to answer. When a user asks for a specific chart that goes against best practice, developers try to educate the user on the best way the data should be presented. If the user is not convinced, some developers implement two charts on the dashboard and enable swapping between them. To support users in locating specific data, developers apply filters on a dashboard (at their end) and then share that view with the users. Developers also create responsive dashboards so they can be used on multiple devices. When users need to use multiple dashboards, they spend more time locating the needed information if the design is not consistent across dashboards and if they need to navigate between them. They can also face visual exhaustion if dashboards do not use colours properly, use excessive ink to present the data or have an uncomfortable layout (e.g., not supporting access from different devices) (Few, 2006).
P13: “colours not being used properly, too much ink being used, not structuring the dashboard, and KPIs should be defined usually around the top of any dashboard you create… and standardisation is also another thing so it’s more on using the right visuals…”
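The “implement two charts and enable swapping” compromise might look like the following sketch, assuming matplotlib as the plotting back end; in practice the chart argument would be wired to a dashboard control:

```python
import matplotlib.pyplot as plt

def render(labels, values, chart: str = "bar"):
    """Render the same data as the recommended bar chart or as the
    user-preferred line chart."""
    fig, ax = plt.subplots()
    if chart == "bar":
        ax.bar(labels, values)
    else:
        # User preference that deviates from best practice, kept available
        # so the user can still swap back to the recommended view.
        ax.plot(labels, values, marker="o")
    ax.set_title("Quarterly sales")
    return fig

fig = render(["Q1", "Q2", "Q3"], [3, 7, 5], chart="line")
```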
4.1.4. Adoption, Onboarding and Facilitating the Learning Journey
Although users can be excited to use dashboards, conveying information with visualisations may be difficult because some users are accustomed to traditional tabular reports. Even if the dashboard is made very simple, with minimal interactive features, users may not understand what the dashboard shows or why certain charts are displayed. Developers also report that some users show willingness to learn, but there are not enough learning resources or users do not know where to find them.
If a dashboard does not match users’ visual literacy, it hinders the sensemaking process and understanding the dashboard’s purpose becomes difficult (Sarikaya et al., 2018). Developers believe that users need to learn how to interact with dashboards, for example, how to hover over objects to find tooltips and how to use menus. To make matters worse, developers’ creativity often hinders comprehension further (Galesic and Garcia-Retamero, 2011).
P1: “I was quite concerned that in the visualisation world, people can produce quite sophisticated visualisations of data but in the clinical world the people that I work with wouldn’t necessarily understand that at all. People’s understanding of data especially in front line clinical practice is not that sophisticated, so you need to make sure the data you are displaying they understand what it means.”
The more interactivity and advanced features a dashboard has, the more complex it becomes, so training users becomes challenging, especially when they have different levels of visual literacy.
P9: “Training is a big challenge because you have different people with different levels of knowledge of data, so someone can see a bar chart and understand exactly what it means, but a different person for the same use case sees a bar chart and takes hard time to understand what is the y-axis and what is the x-axis.”
Developers believe that many users rely on assumptions rather than actual data, and it is hard to shift their mindset towards data-driven decision making. Moreover, if the dashboard is a top-down initiative (enforced by management rather than demanded by users), users may not be inclined to use it and will prefer more traditional ways of exploring data, such as spreadsheets; if it is driven by the users themselves, they become more interested and engaged. Additionally, dashboards that compare users’ performance to that of other users leave them demotivated and less likely to continue using the dashboards, unless advice on improving their performance is included.
P11: “auditing feedback dashboards instantly will say…okay 70% of your peers are better than you at this and that’s great, but unless I actually know what to do to improve that’s just going to make me feel not very great.”
To mitigate visual literacy problems, developers typically train users through a “hand holding” period that slowly guides them in the early stages, going through the interpretation of visualisations several times. If there are different dashboards, users are put into groups and introduced to the dashboards relevant to them. When training users, developers emphasise both the capabilities and the limitations of the dashboards. To enhance training within the dashboard, developers show users suggested actions and then ask for feedback on the suggestions.
P11: “When we give people suggested actions we say we think you should do this because of X, Y and Z, then they have a mechanism to either agree with it and then it gets put on their personalised action plan or they can disagree and give a reason why, so we get feedback from the users and then it can iterate and incorporate into future designs.”
Developers insist on including instructions, typically shown as pop-up messages, videos, tutorials, tips and annotations, focusing mainly on functionalities (e.g., filters) and dashboard navigation. Often, tutorials are later hidden but remain available whenever needed, or are shown to first-time users and to those who have not logged in for a while. Developers aim to make dashboards easy enough for beginners to learn, yet robust enough to serve seasoned users’ analytic purposes (functionality use issues).
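The rule of surfacing tutorials to first-time or long-inactive users can be sketched as below; the 30-day inactivity window is an assumption for illustration:

```python
from datetime import datetime, timedelta

def should_show_tutorial(last_login: datetime | None,
                         inactivity: timedelta = timedelta(days=30)) -> bool:
    if last_login is None:  # first-time user
        return True
    # Returning user: show the tutorial again only after a long absence.
    return datetime.now() - last_login > inactivity

assert should_show_tutorial(None)
assert should_show_tutorial(datetime.now() - timedelta(days=90))
assert not should_show_tutorial(datetime.now() - timedelta(days=2))
```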
To aid users in understanding data on dashboards, developers add extra helping information such as axis labels, legends or contextual information, including data definitions, data reading strategies, descriptions of visualisations and the purpose of the functionalities and of the dashboard as a whole. Developers also suggest data points to provide enough context on maps and terms, so that they do not depend on users’ prior knowledge.
P15: “having a title that clearly states what the dashboard is showing so that users don’t have to think about it is important, a description (from our point of view), an idea of the provenance of the data, what should we be looking at and what the point of the dashboard is…”
It was also advised to use the fewest visualisation types possible (one or two) so that users can understand all the charts. Developers use the same colours for the same kind of data across dashboards so users can distinguish data quickly (e.g., green for sales data, blue for financials). The smoothness of the flow and the arrangement of visualisations on the dashboard are important for users to grasp what the developer is trying to convey; choosing where to place visualisations and filters has a direct effect on the usability of the dashboard (inappropriate placement). Using white space to emphasise which charts are grouped together was an adaptation employed by several developers, in line with Gestalt principles of visual perception (Yigitbasioglu and Velcu, 2012). Also, keeping common functionalities (e.g., filters) in one particular area across dashboards makes it easier for users to quickly know where they are and what they are for (Shneiderman, 1997).
P11: “when we had information on the right hand side of the page it was often missed by people so it was almost like they were reading the screen left to right so they’d see the main thing on the left and just work with that bit and wouldn’t interact very often with the actions, whereas when we moved that box to the left-hand side …people interacted with it a lot more.”
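The shared colour convention described above amounts to a single category-to-colour mapping reused by every dashboard; a minimal sketch, with illustrative hex values:

```python
# One mapping from data category to colour, shared by all dashboards so
# users recognise kinds of data at a glance.
CATEGORY_COLOURS = {
    "sales": "#2e7d32",       # green for sales data
    "financials": "#1565c0",  # blue for financials
}

def colour_for(category: str) -> str:
    # Neutral grey for categories without an agreed convention.
    return CATEGORY_COLOURS.get(category, "#757575")

print(colour_for("sales"))      # '#2e7d32' on every dashboard
print(colour_for("logistics"))  # '#757575'
```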
4.2. User-Tailored Dashboards to Improve Experience
This subsection describes how developers implement customisation, personalisation and automatic adaptation on their dashboards. Users perform customisation themselves, while developers perform personalisation and automatic adaptation. Developers indicate that tailoring targets data, visualisation types, functionalities, layout and interaction.
Customisation
More often than not, developers enable users to customise their dashboards (n=9). Developers who do not provide customisation features express that allowing users to change some dashboard aspects can have negative consequences, such as inaccurately representing data or affecting other users who did not ask for such features. As an intermediate solution, customisation features are often made available to advanced users only. Examples of implemented customisations include:
• Enabling drilling down into more granular levels of data and sorting contents so that users can look at different outcomes and measures.
• Changing visualisation types (e.g., line graph or bar graph) and selecting the information they want to see.
• Enabling drag and drop so users can add their own tables or visualisations.
• Building dashboards through an interactive dialogue.
• Enabling users to select from a range of KPIs to show, organised by category, with the ability to go back to the KPI list and select others.
P8: “I’m favourable to customisation but it depends on users’ knowledge in terms of the data because if I give the user a bar chart and the option to change it to a gauge chart, some users will lose focus on the information because they would be putting information in a visualisation that is not suitable. So it’s a plus if they are able to make those changes because in the end we are building dashboards for people to use in their daily work so they need to customise it to their needs.”
Personalisation
Personalisation is the most used form of tailoring (n=13). Developers who do not implement personalisation features report that it is not needed, since all users’ needs are already taken care of during development. Others do not implement personalisation because they want full control of visualisations and interaction in order to evaluate dashboard usability. Mostly, dashboards are personalised based on user roles and profiles. Examples of role-based personalisation include changing filters to fit the role’s needs, changing displayed information (a summary for executive-level users, then going deeper as more analysis is needed), and highlighting and restricting information relevant to that role. Another approach is loading all the data in the background and enabling the user to switch to their relevant view (information and functionality). A final approach is to store the state of the dashboard view after it is customised by the user for later use.
P14: “an executive has 20 people in their team, including five managers and people reporting to them, the person at the bottom of the hierarchy only needs to view their own data, they cannot view data for their peers. Each manager can collectively view the data for who reports up to them, and for all of the managers combined the executive can view all data, so whenever a specific user logs in you define their hierarchy and the dashboard automatically restricts the data to that belonging to that user.”
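The hierarchy-based restriction P14 describes can be sketched as row-level filtering over a reporting structure; the organisation and record shape below are illustrative assumptions:

```python
# child -> manager; None marks the top of the hierarchy.
REPORTS_TO = {"alice": None, "bob": "alice", "carol": "bob"}

def subordinates(user: str) -> set[str]:
    """All users whose reporting chain passes through `user` (inclusive)."""
    visible = {user}
    changed = True
    while changed:
        changed = False
        for child, manager in REPORTS_TO.items():
            if manager in visible and child not in visible:
                visible.add(child)
                changed = True
    return visible

def visible_rows(rows, user):
    # Restrict the dashboard's data to rows owned by the user's subtree.
    allowed = subordinates(user)
    return [row for row in rows if row["owner"] in allowed]

rows = [{"owner": u, "kpi": i} for i, u in enumerate(REPORTS_TO)]
print(visible_rows(rows, "bob"))    # bob's and carol's rows
print(visible_rows(rows, "alice"))  # everyone's rows
```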
Automatic Adaptation
Automatic adaptation is the most advanced form of tailoring since it alters the dashboard in real time based on a user model; only four developers reported using automatic adaptations. Those not employing adaptation techniques indicate that these features are too sophisticated, that there are no tools to implement them, and that, ultimately, users do not demand them. Examples of implemented automatic adaptations include the following (a sketch follows the list):
• If it is detected that the user is facing problems, a tutorial is shown to them.
• If there are erroneous elements in the data set, they are removed or dealt with before data is shown to the user.
• Presenting information based on what motivates the user. For example, clinicians motivated by competition are shown the percentage of patients at risk compared to the average across their group of users, while clinicians motivated by the difference they perceive their work to make are shown a chart of their performance over time.
• When a user requests a number of variables, the developer adds more variables than needed to accommodate probable future data needs.
• Recommending charts that are appropriate for the KPI that the user is trying to represent.
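A minimal sketch of rule-based automatic adaptation driven by a user model, in the spirit of the examples above; the user-model fields and adaptation actions are illustrative assumptions, not a description of any participant’s system:

```python
def adapt(dashboard: dict, user_model: dict) -> dict:
    """Apply real-time adaptation rules based on a simple user model."""
    if user_model.get("struggling"):  # e.g., repeated failed interactions
        dashboard["show_tutorial"] = True
    if user_model.get("motivation") == "competition":
        dashboard["kpi_view"] = "peer_comparison"
    else:
        dashboard["kpi_view"] = "trend_over_time"
    return dashboard

state = adapt({}, {"struggling": True, "motivation": "competition"})
print(state)  # {'show_tutorial': True, 'kpi_view': 'peer_comparison'}
```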
Developers generally agree that adapting dashboards to users is feasible and sometimes strongly advisable. They cited several benefits of automatically adapting dashboards, such as increasing interest by removing superfluous interactions, mitigating visual literacy problems by addressing user differences, and building trust by showing more relevant information.
P3: “I love the idea, I believe it’s delivering the information that’s relevant to the user, […] I think it saves a step for the user of having to click on that, so I think that should be a focus and a goal of creating dashboards.”
All of the developers agreed that adaptations such as the web-based ones shown to them can be used on dashboards. Some acknowledged that they had already been using page adaptation techniques on their dashboards, and shared implementation techniques such as adjusting how visualisations are displayed and operated depending on the screen size, and changing the order of visualisations based on the user’s role and thus priority. Yet, they were also cautious and expressed some concerns: the complexity of inferring user characteristics from behaviour, and the importance of keeping data coherent so that different views lead users to the same understanding.
P11: “Definitely, this is all possible, […] the issue is when you don’t know much about the user then you’ve got to try and infer it from their behaviour …you can ask them to tell you what their preferences are or what their role is, or you have to infer it from their usage …[which is] quite tricky.”
5. Discussion
5.1. Discussion of Findings
We revisit the research questions we formulated at the outset and discuss the implications for dashboard design:
RQ1. What are the dashboard users’ challenges from the developers’ point of view? Are developers aware of existing interaction problems?
Our findings indicate that by involving users directly, developers become aware of many of the problems users encounter with dashboards. However, when users do not report issues to developers, the perception is that there are no problems. Since the absence of evidence is not evidence of absence, further research is needed to foster effective user-developer communication. Our results also suggest that barriers prevent effective communication between developers and users before, during and after development. Before development, developers struggle to elicit user needs and establish users’ abilities. During development, developers struggle to accommodate frequent changes and requests for more data. After deployment, the challenge lies in reporting user issues and mitigating them, which is exacerbated when developers do not collect and analyse usage data, or collect it only passively.
The most frequent problem is that users’ visual literacy does not align with that demanded by the artefacts and visualisations in dashboards, which are typically complex and sophisticated. Other problems include ineffective data presentation, data quality issues, functionality use issues, lack of trust and finding the motivation to continue using dashboards. While many of these problems are known from the literature (Sarikaya et al., 2018), others, such as interoperability, data verification and implementation misalignment, have not been reported. Moreover, developers do not seem to recognise some problems reported in the literature, such as user adaptability (Tokola et al., 2016; Mazumdar et al., 2014) and the lack of support for user-requested features (e.g., supporting comparison) (Yigitbasioglu and Velcu, 2012). We also provide a new perspective on known problems: the fact that users demand more data makes it difficult for developers to control information overload. Our findings thus inform not only the problems and needs of dashboard users, but also those of developers trying to address user needs. When involving users in development, the most salient problem developers face is the frequent change in requirements and user demands.
Some users do not meet with developers directly but instead send intermediaries (e.g., a personal assistant to a CEO) to negotiate their needs. This was not an anecdotal but a generalised practice, and it led to unmet user expectations. Even when users are involved, developers do not always include customisation functionalities if end users have low visual literacy, claiming that customisation will lead to inappropriate visualisations. When users are unable to articulate their needs, there is little room for discussing the content users would like to have. Developers are therefore selective about user demands, since some are perceived to be detrimental to the user experience. This tension in addressing some user demands results in suboptimal dashboards.
While the above-mentioned problems exist on their own, they do not always occur in isolation, as one problem can trigger others. The framework in Figure 1 illustrates the relationships between challenges: the existence of some challenges exacerbates the presence of others (the ‘may lead to’ relationship), and fixing some challenges may generate others (the ‘may cause’ relationship). For example, users with low visual literacy (who face data understanding issues) encourage developers to simplify dashboards (sometimes oversimplifying them), making advanced users explore raw data to answer their questions and inhibiting dashboards’ potential for viewing data at a glance (Few, 2006). Alternatively, developers resort to training users, either in dedicated training sessions or within the dashboard, as described in Section 4.1.4. Often, this training is needed not only by users with visual literacy problems, but also by those who, despite having the skills, find it difficult to parse the intentions of developers. While the results in Section 4.1.4 portray this training as inevitable, standardising domain-specific design guidelines and rigorously following them could reduce the need for it. At other times, addressing one problem can create new ones: the findings in Section 4.1.2 indicate that trust in the dashboard can be built by providing up-to-date, low-granularity data, which is detrimental to dashboard performance and purpose. Developers are well aware of these effects, and users develop workarounds when this information is not available.

Figure 1. The framework shows how problems interact (e.g., a data understanding issue may lead to a data verification issue, which then requires user training; fixing data understanding problems may in turn cause ineffective data presentation).
We have identified several development practices that can contribute to the aforementioned problems, such as creativity in creating visualisations. However, developers indicated that data visualisation innovations, although prevalent, are inadvisable, as explained in Section 4.1.4: the introduction of new visualisation artefacts may require brief tutorials or even training. The major challenges reported by developers included data verification and data quality issues. A recurring source of conflict was shifting users’ interest from legacy reporting applications, such as spreadsheets, to dashboards (Tory et al., 2021). The above problems are just a sample of the most serious ones; Section 4 encompasses a plethora of diverse problems, some of which are already known in the information visualisation literature.
While most developers follow data visualisation guidelines when creating dashboards, the guidelines they use vary considerably. Many guideline sets are disjoint, and some vendor-specific guidelines (e.g., Tableau’s (Bausili and Hughes, 2016)) take into account the vendor’s own performance aspects rather than visual features alone. Our results indicate a tension between generalist user-interface guidelines and vendor-specific ones, in that developers believe that by using one guideline set they address all the problems. Moreover, even developers who use a single dashboarding tool exclusively do not stick to the guidelines published by the tool vendor, as mentioned in Section 4.1.3. Some developers do not follow guidelines because they need to cater for user preferences that often go against them (e.g., choosing inappropriate visualisations). Also, addressing specific user needs in a particular dashboard may not be appropriate for another user, which suggests that tailoring dashboards is beneficial.
Developers’ heavy reliance on dashboarding tools constrains design decisions, in that only functionalities within the tools’ repertoire can be included, and even then there is little or no room for extending them. This was evident for tailoring capabilities and the collection of user interaction data. While these tools enable developers to build dashboards responsively, they sacrifice the inclusion of features that might address users’ needs.
Dashboards are data-driven decision-making tools, but they can also be used in an analytical, operational or strategic manner, as opposed to other similar tools (e.g., self-reflection tools are more analytical (Choe et al., 2017)). However, developers did not focus on analytical features such as advanced data blending capabilities or sharing and exporting. In fact, developers were generally against exporting dashboard views, as they believe dashboards are reporting tools. As prior research points out (DiMicco and Mann, 2016), following a user-centred approach to dashboard development can be challenging, especially with opposing views on usability between users and developers (Hertzum et al., 2011). The diversity of dashboard experts’ backgrounds, development methodologies, subscribed visualisation guidelines and data familiarisation habits, their dependence on proprietary dashboarding tools, and changing user requirements are all factors that lead to problematic dashboards.
RQ2. What are the possible adaptation techniques to the challenges? Can tailoring, especially automatic adaptations, solve these challenges?
Developers reported many adaptations to user challenges, which have been described in Section 4 and are summarised in Tables 1 to 4 in the Appendix. Many of the reported solutions to accommodate user needs and address barriers had already been employed in the participants’ dashboards. However, others were broad suggestions or currently non-actionable desires, such as “simplifying the views as much as possible” for ineffective data presentation. Furthermore, some participants could not give any suggestions for raised challenges such as data verification and interoperability issues (as we asked for solutions to problems they had not reported). This highlights the need for further research to explore the feasibility of the adaptations. The developers’ domains of expertise, although diverse, did not have a noticeable effect on their development practices or suggested adaptations. However, developers emphasised the unique characteristics of their users and advised against generalising their views to other domains. Sarikaya et al. (2018) found that a dashboard’s visual and functional aspects can reflect its application domain. However, our results suggest that relying solely on domain-specific features (e.g., common visualisation methods) to build dashboards may not be effective. This challenges previous research that places heavy focus on domain features to design dashboards (Vázquez-Ingelmo et al., 2019b; Rojas et al., 2020).
Can tailoring, especially automatic adaptations, solve these challenges?
Our findings confirm previous research showing that adaptive dashboards are the least used kind of tailored dashboard: eleven instances of dashboards incorporate customisation and/or personalisation functionalities, while only four use automatic adaptation (Vázquez-Ingelmo et al., 2019a). They also confirm that tailoring dashboards to users’ needs is advantageous and can increase adoption and use (Peischl and Lang, 2013; Sarikaya et al., 2018; Weggelaar-Jansen et al., 2018; Dabbebi et al., 2017; Vázquez-Ingelmo et al., 2020; Eckerson, 2010). However, our findings also indicate that personalisation is the most used type of tailoring on dashboards, contradicting the work of Vázquez-Ingelmo et al. (2019a), which found customisable dashboards to be the most common. Although developers are wary of including customisation by default and prefer to limit it to advanced users, we found customised dashboards to be frequently used nonetheless. Whereas customisation increases functional flexibility, it requires users to be aware of their needs; as a result, adaptive solutions can be more effective when users are not certain about their needs (Vázquez-Ingelmo et al., 2019a). This is supported by the agreement on the applicability of the automatic adaptations presented to developers, as described in Section 4.2. Although developers were overwhelmingly in favour of automatic adaptations, most do not use them on their dashboards, perhaps because they perceive obstacles in their technical implementation or are cautious about the effects of adaptive views on users. The interviewees all agreed that the web-based adaptations shared with them were feasible, but these adaptive content presentations are only a subset of the Web adaptations available (Brusilovski et al., 2007). Other adaptations, such as those based on social mechanisms (perceived community interests) or textual mechanisms (what to say and how to say it), are more applicable to other systems such as learning management or healthcare systems.
Tables 1 to 4 in the Appendix enumerate user challenges, corresponding adaptations (Section 4) and the type of tailoring that can be used to implement them. The distinction between the categories of tailoring highlights who the stakeholders are, and how, with each of these tailoring mechanisms, the degree of control over the system shifts between the end-user (customisation), the developer (personalisation) and the algorithm that models users (automatic adaptation). Customisable adaptations enable the user to change an aspect of the dashboard: for example, allowing the user to drill down into raw data sets to mitigate a lack of trust in the visualisation. Personalisable adaptations are any solutions or features that can be given to a user at dashboard loading time, based on their role, goal, preferences or abilities, to alleviate a challenge; for example, combining data into smaller subsets for a CEO to give a general overview while enabling drilling on demand. Finally, automatic adaptations are any adaptations that can be performed in real time to change a dashboard aspect based on the same factors as personalisable adaptations, in addition to data structure or sources (Vázquez-Ingelmo et al., 2019a). Some adaptations can be done in any of the three forms, such as enabling swapping between several charts. Others fit one type more naturally, such as the automatic adaptation of caching data seen by another similar user. Generally, adaptations driven by the user are customisations, whereas dashboard interface alterations performed by the system are either personalisations or automatic adaptations, or both.
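To make the shift in control concrete, the following minimal sketch implements one adaptation — choosing a chart type — in each of the three tailoring forms. All names, roles and rules here are illustrative assumptions, not taken from our study or the cited systems.

```typescript
// Sketch only: the same adaptation under the three tailoring forms.

type ChartType = "bar" | "line" | "map" | "table";

interface UserModel {
  role: string;                              // e.g. "CEO", "analyst" (assumed roles)
  goal?: string;                             // current task, if known
  visualLiteracy: "low" | "medium" | "high"; // simplistic placeholder model
}

// Customisation: the end-user holds control and swaps the chart explicitly.
function customise(requested: ChartType): ChartType {
  return requested;
}

// Personalisation: the developer decides at loading time from the user's profile.
function personalise(user: UserModel): ChartType {
  return user.role === "CEO" ? "bar" : "table"; // general overview for executives
}

// Automatic adaptation: the system re-decides at run time as the user model
// and the data change (e.g. simpler charts for lower visual literacy).
function adapt(user: UserModel, dataPoints: number): ChartType {
  if (user.visualLiteracy === "low") return "bar";
  return dataPoints > 10_000 ? "line" : "table";
}
```

The point of the sketch is who supplies the deciding input: the user in `customise`, a load-time profile in `personalise`, and run-time signals in `adapt`.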
5.2. Implications for Design
Experiment with multiple forms of tailoring
Tables 1 to 4 in the Appendix summarise actionable adaptations reported by developers. In addition, we propose the type of tailoring that can be used to implement each adaptation (e.g., allowing the user to zoom into the data set can be done automatically by the system or left to the user). Many of these adaptations can be performed automatically, while others, such as 'creating visualisations through an interactive dialogue', require user involvement. Tailoring via automatic adaptations could be convenient to users, but it entails building user models containing the features of users that are relevant for these purposes (e.g., the user's goal). For instance, we learned that users employ workarounds when encountering problems in dashboards (e.g., looking at raw data when the functionalities do not allow them to narrow down the data set). These strategies are indicators of problems, so automatic adaptations could be triggered when such workarounds are detected, as sketched below. Since developers are not very keen on customisation, starting with system-based tailoring (personalisation and automatic adaptation) may be a better approach. Customisation functionalities can then be introduced gradually once the user has acquired the needed competency. It is also important to investigate how users respond to each kind of tailoring, ideally through unobtrusive usage data collection.
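As a sketch of this idea — with hypothetical event names and a threshold we invented purely for illustration — a dashboard could log interaction events and proactively surface a filtering control when the raw-data workaround is detected:

```typescript
// Hypothetical sketch: treating a user workaround as a trigger for
// automatic adaptation. Event names and thresholds are assumptions.

interface InteractionEvent {
  type: "open_raw_data" | "apply_filter" | "change_chart";
  timestamp: number;
}

const RAW_DATA_THRESHOLD = 3; // repeated raw-data views within one session

function detectFilteringWorkaround(events: InteractionEvent[]): boolean {
  const rawViews = events.filter((e) => e.type === "open_raw_data").length;
  const filterUses = events.filter((e) => e.type === "apply_filter").length;
  // Users who keep falling back to raw data without ever filtering may
  // lack the narrowing functionality they need.
  return rawViews >= RAW_DATA_THRESHOLD && filterUses === 0;
}

function maybeAdapt(events: InteractionEvent[]): string | null {
  return detectFilteringWorkaround(events)
    ? "offer-filter-panel" // surface a filtering control proactively
    : null;
}
```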
Tailor dashboards to help developers, too
There were several problems of major concern to developers. For example, training dashboard users, a problem that earlier work described as substantial (Tendedez et al., 2018), was reported often by the participants. Training can be built into the dashboard as a course tailored to the user's skills, or as expandable tips and tutorials that pop up when users encounter a problem. Developers' efforts to make dashboards that answer all users' questions also fall short. These overheads could be saved if a single dashboard were made and then tailored to each user's needs. Tailoring can target dashboard information, visualisation types, layout, functionalities and interaction, and can be based on users' roles, goals, preferences and skills (Vázquez-Ingelmo et al., 2019a). Tailoring can also ease developers' efforts to move users from traditional reporting systems to dashboards. A dashboard's complexity can evolve along with the user's visual literacy, progressing from simple charts to more complex ones (Froese and Tory, 2016). During this process, and to increase users' motivation, the dashboard could highlight its strengths and advantages compared to spreadsheet software (e.g., by showing analytics such as time saved).
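One way to realise this progression is to gate the available chart types on the user's demonstrated literacy. The following is a minimal sketch under assumed mechanics — a three-level literacy ladder advanced by completed tutorials — rather than a mechanism proposed by our participants:

```typescript
// Sketch: dashboard complexity grows with the user's visual literacy.
// The ladder, levels and "3 tutorials per level" rule are assumptions.

type Literacy = 0 | 1 | 2; // novice, intermediate, advanced

const CHART_LADDER: Record<Literacy, string[]> = {
  0: ["bar", "line"],
  1: ["bar", "line", "scatter", "stacked-bar"],
  2: ["bar", "line", "scatter", "stacked-bar", "heatmap", "treemap"],
};

function availableCharts(completedTutorials: number): string[] {
  // Advance one literacy level for every three completed tutorials, capped.
  const level = Math.min(2, Math.floor(completedTutorials / 3)) as Literacy;
  return CHART_LADDER[level];
}
```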
Put users in control of their own dashboards
Although developers were generally wary of giving users full customisation control over their dashboards, enabling customisation has the potential to address several issues. Even when users are unable to describe what they need, they may still be able to envisage it. Customisation would give users the chance to create their own charts on demand. Feedback on the created charts can then be supplied within the dashboard when mistakes are made, or when the charts, or the dashboard as a whole, could be improved. This also has the potential to increase users' visual literacy, as they learn from feedback on their own mistakes. Moreover, the dashboard system can learn from these customisations by recommending visualisations when similar users request similar data. Another area of improvement that can be addressed with customisation is visual design. For example, developers reported that some users struggled with conventions as widely agreed upon as traffic-light colour coding. Customisation could easily solve this problem for users, and the dashboard system could derive automatic adaptations informed by user choices.
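As an illustration of learning from customisations — a hedged sketch with assumed data structures and a deliberately naive notion of "similar users" (same role, same fields), not a description of any participant's system — a dashboard could recommend the chart most often built by comparable users over the same data:

```typescript
// Sketch: recommend the chart similar users most often chose
// for the same data fields. All structures are assumptions.

interface Customisation {
  userRole: string;
  fields: string[]; // data fields the user plotted
  chart: string;    // chart type the user built
}

function recommendChart(
  log: Customisation[],
  role: string,
  fields: string[]
): string | null {
  const matches = log.filter(
    (c) => c.userRole === role && fields.every((f) => c.fields.includes(f))
  );
  if (matches.length === 0) return null;

  // Tally chart choices among similar users and return the most frequent.
  const counts = new Map<string, number>();
  for (const m of matches) {
    counts.set(m.chart, (counts.get(m.chart) ?? 0) + 1);
  }
  return [...counts.entries()].sort((a, b) => b[1] - a[1])[0][0];
}
```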
Leverage asynchronous communication to accelerate development
Agile software development methodologies are typically used to address evolving user needs and constant change (Alvertis et al., 2016). However, as made clear by the participants, agile sprints are not fast enough to keep up with the demands of dashboard development. Data is now released at an unprecedented rate, which can quickly change requirements. Moreover, dashboards are a relatively new information presentation tool, so users are learning along the way and their understanding of their own needs evolves accordingly. By the time the next sprint begins, users have acquired new knowledge, rendering their previous requirements obsolete. One solution may be a shared asynchronous communication channel within a participatory design approach (Gilliot et al., 2018; Martins et al., 2017): developers could share instantaneous updates of the dashboards, and users could see the product before it is completely implemented. At their preferred time, users could provide early feedback to change a required visualisation if they think it no longer satisfies their needs. Developers could then attend to users' feedback earlier than they would with periodic agile meetings. Users could also share rough sketches of their needs, or screenshots of similar visualisations or dashboards they would like to have. This could also be a better way of eliciting all users' needs when gathering users together and agreeing on universal requirements proves difficult. Such a solution can enable shifting the course of development rapidly at a lower cost to developers.
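The message formats below are one possible shape for such a channel — a type-only sketch whose fields are entirely our assumptions — pairing a build announcement from developers with lightweight, widget-level feedback from users:

```typescript
// Sketch of an asynchronous participatory-design channel. Every field
// name here is an assumption for illustration, not a proposed standard.

interface DashboardSnapshot {
  buildId: string;
  url: string;              // link to the current in-progress build
  changedWidgets: string[]; // what changed since the last snapshot
  publishedAt: string;      // ISO timestamp
}

interface UserFeedback {
  buildId: string;
  widgetId: string;
  verdict: "keep" | "change" | "remove";
  note?: string;      // e.g. "this chart no longer matches our KPI"
  sketchUrl?: string; // optional rough sketch or screenshot from the user
}
```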
Create a channel for remote support
The COVID-19 pandemic has driven rapid adoption of online environments, and dashboard development can make use of this change. As dashboard users were commonly found to request service (help completing a task) and assistance (Tory et al., 2021), developers could provide technical support within the dashboard through screen sharing (access to users' screens) or through configuration sharing. One participant gave an example of this technique: data filters are applied on the developer's side, and the customised view is then shared with the user. Configurations can also be shared between users, by extending the customisation feature, so that they can learn from each other. This way, developers gain a better idea of the specific problems users encounter, instead of reacting only when problems are reported or when users are visited later.
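Configuration sharing can be as simple as serialising the filter state into a link. The sketch below assumes a JSON-serialisable view configuration and an illustrative URL; it is one possible implementation, not the participant's:

```typescript
// Sketch: the developer applies filters, encodes them into a shareable
// URL, and the user restores the same view from that URL.

interface ViewConfig {
  filters: Record<string, string[]>; // e.g. { region: ["North"], year: ["2021"] }
  chart: string;
}

function encodeConfig(config: ViewConfig): string {
  // btoa is available in browsers and recent Node versions; this naive
  // encoding assumes the JSON stays within Latin-1 characters.
  const encoded = encodeURIComponent(btoa(JSON.stringify(config)));
  return `https://dashboard.example.org/view?cfg=${encoded}`; // illustrative URL
}

function restoreConfig(url: string): ViewConfig {
  const cfg = new URL(url).searchParams.get("cfg"); // already percent-decoded
  if (!cfg) throw new Error("No configuration found in URL");
  return JSON.parse(atob(cfg)) as ViewConfig;
}
```

A developer would call `encodeConfig` after configuring the view and send the resulting link; opening it restores the exact same view on the user's side, and the same mechanism would let users share configurations with each other.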
6. Conclusion
We explored dashboard developers' perspectives on users and their interactions with dashboards. Our findings show that developers are aware of many problems users encounter, such as ineffective data presentation and problems caused by gaps in visual literacy. We also learned about some developer practices that contribute to existing problems. Encouragingly, our findings indicate that developers implement mechanisms to overcome these challenges with user-specific adaptations. Such adaptations constitute an avenue for future research and practice.
References
- Alvertis et al. (2016) Iosif Alvertis, Sotiris Koussouris, Dimitris Papaspyros, Evangelos Arvanitakis, Spiros Mouzakitis, Sebastian Franken, Sabine Kolvenbach, and Wolfgang Prinz. 2016. User involvement in software development processes. Procedia Computer Science 97 (2016), 73–83.
- Antonelli et al. (2018) Humberto Lidio Antonelli, Sandra Souza Rodrigues, Willian Massami Watanabe, and Renata Pontin de Mattos Fortes. 2018. A survey on accessibility awareness of Brazilian web developers. In Proceedings of the 8th International Conference on Software Development and Technologies for Enhancing Accessibility and Fighting Info-exclusion. 71–79.
- Barnett et al. (2019) Amy Barnett, Michelle Winning, Stephen Canaris, Michael Cleary, Andrew Staib, and Clair Sullivan. 2019. Digital transformation of hospital quality and safety: real-time data for real-time action. Australian Health Review 43, 6 (2019), 656–661.
- Bausili and Hughes (2016) Ben Bausili and Mat Hughes. 2016. Designing Efficient Production Dashboards. Technical Report. Tableau. Retrieved February 10, 2022 from https://www.tableau.com/sites/default/files/2021-10/Designing-Efficient-Workbooks-2021-Interworks_0.pdf
- Braun and Clarke (2006) Virginia Braun and Victoria Clarke. 2006. Using thematic analysis in psychology. Qualitative research in psychology 3, 2 (2006), 77–101.
- Brusilovski et al. (2007) Peter Brusilovski, Alfred Kobsa, and Wolfgang Nejdl. 2007. The adaptive web: methods and strategies of web personalization. Vol. 4321. Springer Science & Business Media.
- Buttigieg et al. (2017) Sandra C Buttigieg, Adriana Pace, and Cheryl Rathert. 2017. Hospital performance dashboards: a literature review. Journal of health organization and management (2017).
- CDC (2020) CDC. 2020. COVIDView. Retrieved April 10, 2020 from cdc.gov/coronavirus/2019-ncov/covid-data/covidview/
- Charleer et al. (2018) Sven Charleer, Kathrin Gerling, Francisco Gutiérrez, Hans Cauwenbergh, Bram Luycx, and Katrien Verbert. 2018. Real-time dashboards to support esports spectating. In Proceedings of the 2018 Annual Symposium on Computer-Human Interaction in Play. 59–71.
- Choe et al. (2017) Eun Kyoung Choe, Bongshin Lee, Haining Zhu, Nathalie Henry Riche, and Dominikus Baur. 2017. Understanding self-reflection: how people reflect on personal data through visual data exploration. In Proceedings of the 11th EAI International Conference on Pervasive Computing Technologies for Healthcare. 173–182.
- Cohen (2017) L Cohen. 2017. Impacts of business intelligence on population health: a systematic literature review. In Proceedings of the South African Institute of Computer Scientists and Information Technologists. 1–9.
- Colley et al. (2016) Ashley Colley, Kirsi Halttu, Marja Harjumaa, and Harri Oinas-Kukkonen. 2016. Insights from the design and evaluation of a personal health dashboard. In 2016 49th Hawaii International Conference on System Sciences (HICSS). IEEE, 3483–3492.
- Dabbebi et al. (2017) Inès Dabbebi, Sébastien Iksal, Jean-Marie Gilliot, Madeth May, and Serge Garlatti. 2017. Towards adaptive dashboards for learning analytic: An approach for conceptual design and implementation.
- DiMicco and Mann (2016) Joan Morris DiMicco and Nancy Mann. 2016. User research to inform product design: turning failure into small successes. In Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems. 872–879.
- Dong et al. (2020) Ensheng Dong, Hongru Du, and Lauren Gardner. 2020. An interactive web-based dashboard to track COVID-19 in real time. The Lancet infectious diseases 20, 5 (2020), 533–534.
- Dowding et al. (2018) Dawn Dowding, Jacqueline A Merrill, Nicole Onorato, Yolanda Barrón, Robert J Rosati, and David Russell. 2018. The impact of home care nurses’ numeracy and graph literacy on comprehension of visual display information: implications for dashboard design. Journal of the American Medical Informatics Association 25, 2 (2018), 175–182.
- Dowding et al. (2015) Dawn Dowding, Rebecca Randell, Peter Gardner, Geraldine Fitzpatrick, Patricia Dykes, Jesus Favela, Susan Hamer, Zac Whitewood-Moores, Nicholas Hardiker, Elizabeth Borycki, et al. 2015. Dashboards for improving patient care: review of the literature. International journal of medical informatics 84, 2 (2015), 87–100.
- Echeverria et al. (2018) Vanessa Echeverria, Roberto Martinez-Maldonado, Roger Granda, Katherine Chiluiza, Cristina Conati, and Simon Buckingham Shum. 2018. Driving data storytelling from learning design. In Proceedings of the 8th international conference on learning analytics and knowledge. 131–140.
- Eckerson (2010) Wayne W Eckerson. 2010. Performance dashboards: measuring, monitoring, and managing your business. John Wiley & Sons.
- Elias and Bezerianos (2012) Micheline Elias and Anastasia Bezerianos. 2012. Annotating BI visualization dashboards: Needs & challenges. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. 1641–1650.
- Few (2006) Stephen Few. 2006. Information dashboard design: The effective visual communication of data. O’Reilly Media, Inc.
- Forman et al. (2021) Howard Forman, Cary Gross, Ayotomiwa Ojo, Sarah Pitafi, Hannah Todd, Eric Mayberry, Shivaji Kumar, and Patrick Hansen. 2021. We Rate Covid Dashboards. https://www.ratecoviddashboard.com/
- Froese and Tory (2016) Maria-Elena Froese and Melanie Tory. 2016. Lessons learned from designing visualization dashboards. IEEE computer graphics and applications 36, 2 (2016), 83–89.
- Galesic and Garcia-Retamero (2011) Mirta Galesic and Rocio Garcia-Retamero. 2011. Graph literacy: A cross-cultural comparison. Medical Decision Making 31, 3 (2011), 444–457.
- Gilliot et al. (2018) Jean-Marie Gilliot, Sébastien Iksal, Daniel Magloire Medou, and Inès Dabbebi. 2018. Participatory design of learning analytics dashboards. In Proceedings of the 30th Conference on l’Interaction Homme-Machine. 119–127.
- Hagood et al. (2016) Danielle Hagood, Cynthia Carter Ching, and Sara Schaefer. 2016. Integrating physical activity data in videogames with user-centered dashboards. In Proceedings of the Sixth International Conference on Learning Analytics & Knowledge. 530–531.
- Haupt et al. (2015) Ross Haupt, Brenda Scholtz, and Andre Calitz. 2015. Using business intelligence to support strategic sustainability information management. In Proceedings of the 2015 Annual Research Conference on South African Institute of Computer Scientists and Information Technologists. 1–11.
- Herder and van Maaren (2020) Eelco Herder and Olaf van Maaren. 2020. Privacy dashboards: the impact of the type of personal data and user control on trust and perceived risk. In Adjunct publication of the 28th ACM conference on user modeling, adaptation and personalization. 169–174.
- Hertzum et al. (2011) Morten Hertzum, Torkil Clemmensen, Kasper Hornbæk, Jyoti Kumar, Qingxin Shi, and Pradeep Yammiyavar. 2011. Personal usability constructs: How people construe usability across nationalities and stakeholder groups. International Journal of Human-Computer Interaction 27, 8 (2011), 729–761.
- Jivet et al. (2018) Ioana Jivet, Maren Scheffel, Marcus Specht, and Hendrik Drachsler. 2018. License to evaluate: Preparing learning analytics dashboards for educational practice. In Proceedings of the 8th international conference on learning analytics and knowledge. 31–40.
- Jones (2015) Simon L Jones. 2015. Exploring correlational information in aggregated quantified self data dashboards. In Adjunct Proceedings of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2015 ACM International Symposium on Wearable Computers. 1075–1080.
- Kia et al. (2020) Fatemeh Salehian Kia, Stephanie D Teasley, Marek Hatala, Stuart A Karabenick, and Matthew Kay. 2020. How patterns of students dashboard use are related to their achievement and self-regulatory engagement. In Proceedings of the tenth international conference on learning analytics & knowledge. 340–349.
- Kitchin et al. (2015) Rob Kitchin, Tracey P Lauriault, and Gavin McArdle. 2015. Knowing and governing cities through urban indicators, city benchmarking and real-time dashboards. Regional Studies, Regional Science 2, 1 (2015), 6–28.
- Koopman et al. (2011) Richelle J Koopman, Karl M Kochendorfer, Joi L Moore, David R Mehr, Douglas S Wakefield, Borchuluun Yadamsuren, Jared S Coberly, Robin L Kruse, Bonnie J Wakefield, and Jeffery L Belden. 2011. A diabetes dashboard and physician efficiency and accuracy in accessing data needed for high-quality diabetes care. The Annals of Family Medicine 9, 5 (2011), 398–405.
- Kultima (2015) Annakaisa Kultima. 2015. Developers’ perspectives on iteration in game development. In Proceedings of the 19th International academic Mindtrek conference. 26–32.
- Lee et al. (2015) David Lee, Jesus Ricardo Alvarez Felix, Shan He, Dietmar Offenhuber, and Carlo Ratti. 2015. CityEye: Real-time visual dashboard for managing urban services and citizen feedback loops. In Proceedings of the 14th International Conference on Computing in Urban Planning and Urban Management (CUPUM), Cambridge, MA, USA. 7–10.
- Martins et al. (2017) Andreia Filipa Martins, Anabela C Alves, and Celina P Leão. 2017. Development and implementation of dashboards for operational monitoring using participatory design in a lean context. In International Symposium on Qualitative Research. Springer, 237–249.
- Mazumdar et al. (2014) Suvodeep Mazumdar, Daniela Petrelli, and Fabio Ciravegna. 2014. Exploring user and system requirements of linked data visualization through a visual dashboard approach. Semantic Web 5, 3 (2014), 203–220.
- MOH (2020) Saudi MOH. 2020. Saudi COVID-19 Dashboard. Retrieved May 21, 2020 from covid19.moh.gov.sa/
- NHS (2020) NHS. 2020. UK COVID-19. Retrieved April 10, 2020 from www.gov.uk/government/publications/covid-19-track-coronavirus-cases
- Pathak et al. (2015) Apurva Pathak, Bidyut Kr Patra, Arnab Chakraborty, and Abhishek Agarwal. 2015. A city traffic dashboard using social network data. In Proceedings of the 2nd IKDD Conference on Data Sciences. 1–4.
- Peer (2019) Firaz Peer. 2019. Community Indicator Data Dashboards as Infrastructures for Data Literacy. In Companion Publication of the 2019 on Designing Interactive Systems Conference 2019 Companion. 109–112.
- Peischl and Lang (2013) Bernhard Peischl and Sandra Lang. 2013. What Can We Learn from In-process Metrics on Issue Management?–Insights from an Industrial Case Study. In 2013 IEEE Sixth International Conference on Software Testing, Verification and Validation Workshops. IEEE, 124–125.
- Richardson et al. (2021) James Richardson, Kurt Schlegel, Rita Sallam, Austin Kronz, and Julian Sun. 2021. Magic Quadrant for Analytics and Business Intelligence Platforms. Technical Report. Gartner.
- Roberts et al. (2017) Lynne D Roberts, Joel A Howell, and Kristen Seaman. 2017. Give me a customizable dashboard: Personalized learning analytics dashboards in higher education. Technology, Knowledge and Learning 22, 3 (2017), 317–333.
- Rojas et al. (2020) Elizabeth Rojas, Viviana Bastidas, and Christian Cabrera. 2020. Cities-Board: A Framework to Automate the Development of Smart Cities Dashboards. IEEE Internet of Things Journal 7, 10 (2020), 10128–10136.
- Sarikaya et al. (2018) Alper Sarikaya, Michael Correll, Lyn Bartram, Melanie Tory, and Danyel Fisher. 2018. What do we talk about when we talk about dashboards? IEEE transactions on visualization and computer graphics 25, 1 (2018), 682–692.
- Schwendimann et al. (2016) Beat A Schwendimann, Maria Jesus Rodriguez-Triana, Andrii Vozniuk, Luis P Prieto, Mina Shirvani Boroujeni, Adrian Holzer, Denis Gillet, and Pierre Dillenbourg. 2016. Perceiving learning at a glance: A systematic literature review of learning dashboard research. IEEE Transactions on Learning Technologies 10, 1 (2016), 30–41.
- Shneiderman (1997) Ben Shneiderman. 1997. Designing the User Interface: Strategies for Effective Human-Computer Interaction (3rd ed.). Addison-Wesley Longman Publishing Co., Inc., USA.
- Shneiderman (2003) Ben Shneiderman. 2003. The eyes have it: A task by data type taxonomy for information visualizations. In The craft of information visualization. Elsevier, 364–371.
- Srinivasan et al. (2018) Arjun Srinivasan, Matthew Brehmer, Bongshin Lee, and Steven M Drucker. 2018. What’s the Difference? Evaluating Variations of Multi-Series Bar Charts for Visual Comparison Tasks. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. 1–12.
- Srisopha et al. (2021) Kamonphop Srisopha, Daniel Link, and Barry Boehm. 2021. How should developers respond to app reviews? features predicting the success of developer responses. In Evaluation and Assessment in Software Engineering. 119–128.
- Sun et al. (2019) Kaiwen Sun, Abraham H Mhaidli, Sonakshi Watel, Christopher A Brooks, and Florian Schaub. 2019. It’s my data! Tensions among stakeholders of a learning analytics dashboard. In Proceedings of the 2019 chi conference on human factors in computing systems. 1–14.
- Tendedez et al. (2018) Helena Tendedez, Roisin McNaney, Maria-Angela Ferrario, and Jon Whittle. 2018. Scoping the Design Space for Data Supported Decision-Making Tools in Respiratory Care: Needs, Barriers and Future Aspirations. In Proceedings of the 12th EAI International Conference on Pervasive Computing Technologies for Healthcare. 217–226.
- Tokola et al. (2016) Henri Tokola, Christoph Gröger, Eeva Järvenpää, and Esko Niemi. 2016. Designing manufacturing dashboards on the basis of a Key Performance Indicator survey. Procedia CIRP 57 (2016), 619–624.
- Tory et al. (2021) Melanie Tory, Lyn Bartram, Brittany Fiore-Gartland, and Anamaria Crisan. 2021. Finding Their Data Voice: Practices and Challenges of Dashboard Users. IEEE Computer Graphics and Applications (2021).
- Vázquez-Ingelmo et al. (2019a) Andrea Vázquez-Ingelmo, Francisco José García-Peñalvo, and Roberto Therón. 2019a. Tailored information dashboards: A systematic mapping of the literature. In Proceedings of the XX International Conference on Human Computer Interaction. 1–8.
- Vázquez-Ingelmo et al. (2020) Andrea Vázquez-Ingelmo, Francisco José García-Peñalvo, Roberto Therón, and Miguel Ángel Conde. 2020. Representing Data Visualization Goals and Tasks Through Meta-Modeling to Tailor Information Dashboards. Applied Sciences 10, 7 (2020), 2306.
- Vázquez-Ingelmo et al. (2019b) Andrea Vázquez-Ingelmo, Francisco José García-Peñalvo, Roberto Theron, Daniel Amo Filvà, and David Fonseca Escudero. 2019b. Connecting domain-specific features to source code: Towards the automatization of dashboard generation. Cluster Computing (2019), 1–14.
- Vornhagen et al. (2018) Heike Vornhagen, Brian Davis, and Manel Zarrouk. 2018. Sensemaking of complex sociotechnical systems: the case of governance dashboards. In Proceedings of the 19th Annual International Conference on Digital Government Research: Governance in the Data Age. 1–2.
- Wakeling et al. (2015) Simon Wakeling, Paul Clough, James Wyper, and Amy Balmain. 2015. Graph literacy and business intelligence: Investigating user understanding of dashboard data visualizations. Business Intelligence Journal 20, 4 (2015), 8–19.
- Weggelaar-Jansen et al. (2018) Anne Marie JWM Weggelaar-Jansen, Damien SE Broekharst, and Martine De Bruijne. 2018. Developing a hospital-wide quality and safety dashboard: a qualitative research study. BMJ quality & safety 27, 12 (2018), 1000–1007.
- WHO (2020) WHO. 2020. WHO Health Emergency Dashboard. Retrieved April 10, 2020 from covid19.who.int/
- Yera et al. (2019) Ainhoa Yera, Javier Muguerza, Olatz Arbelaitz, Iñigo Perona, Richard N Keers, Darren M Ashcroft, Richard Williams, Niels Peek, Caroline Jay, and Markel Vigo. 2019. Modelling the interactive behaviour of users with a medication safety dashboard in a primary care setting. International journal of medical informatics 129 (2019), 395–403.
- Yigitbasioglu and Velcu (2012) Ogan M Yigitbasioglu and Oana Velcu. 2012. A review of dashboards in performance management: Implications for design and research. International Journal of Accounting Information Systems 13, 1 (2012), 41–59.
- Yoo and De Choudhury (2019) Dong Whi Yoo and Munmun De Choudhury. 2019. Designing dashboard for campus stakeholders to support college student mental health. In Proceedings of the 13th EAI International Conference on Pervasive Computing Technologies for Healthcare. 61–70.
Appendix A. Challenges, Adaptations, and Tailoring Forms
In the tables below, I/S indicates whether an adaptation was already implemented (I) in participants' dashboards or only suggested (S); C, P and AA mark whether the adaptation can be realised through customisation, personalisation or automatic adaptation, respectively.
Table 1. The implementation misalignment problem can be addressed by giving users a starting point or by conducting a workshop to understand their needs. Both adaptations are implemented and can be realised using personalisation or automatic adaptation.

| Challenge | Adaptation | I/S | C | P | AA |
| --- | --- | --- | --- | --- | --- |
| Implementation Misalignment | Giving users a starting point to facilitate requirement elicitation. | I | | x | x |
| | Conducting a workshop to understand users' available tools. | I | | x | x |
| Sum of adaptations | | | 0 | 2 | 2 |
Table 2. Example of issues and adaptations in the second theme: the data quality issue is addressed by removing erroneous or incomplete data (implemented) or by using dedicated tools to handle heterogeneous data (suggested); both can be implemented using automatic adaptation.

| Challenge | Adaptation | I/S | C | P | AA |
| --- | --- | --- | --- | --- | --- |
| Data Quality Issues | Removing erroneous or incomplete data elements before showing data. | I | | | x |
| | Using dedicated tools to handle heterogeneous data. | S | | | x |
| Dashboard Performance | Employing data caching techniques. | I | | | x |
| | Loading closely related data in the background and displaying it when needed. | S | | | x |
| | Showing a data extract (a compressed version). | I | | | x |
| | Refreshing displayed data periodically instead of showing live data. | I | | | x |
| Lack of Trust | Showing data source and metadata. | I | x | x | x |
| | Allowing the user to zoom into the data set. | S | x | x | x |
| Protecting Data Access | Using workflows connecting data sources to user roles. | I | | x | x |
| Sum of adaptations | | | 2 | 3 | 9 |
Table 3. Example of issues and adaptations in the third theme: information overload can be addressed by combining data into smaller subsets and enabling drill-down on demand. This is a suggested adaptation that can be implemented with any of the three tailoring techniques.

| Challenge | Adaptation | I/S | C | P | AA |
| --- | --- | --- | --- | --- | --- |
| Information Overload | Combining data into smaller subsets and enabling drill-down on demand. | S | x | x | x |
| | When several users use the same dashboard, involving users to keep relevant charts and remove the rest. | I | x | x | x |
| | Educating users on the risks of displaying too much information. | I | | x | x |
| Ineffective Data Presentation | Enabling users to change the visualisation type on demand. | I | x | | |
| | Using data visualisation guidelines. | I | | x | x |
| | Creating visualisations informed by the questions that the dashboard is trying to answer. | I | | x | x |
| | Creating visualisations through an interactive dialogue. | I | x | | |
| | Creating visualisations based on the KPIs to be represented. | I | x | | |
| | Using the same colour for the same kind of data across dashboard(s). | I | x | x | x |
| Sum of adaptations | | | 6 | 6 | 6 |
Table 4. Example of issues and adaptations in the fourth theme: demotivation can be addressed by showing users suggestions to improve their performance (implemented) or by presenting information based on what motivates them (suggested); both can be done with personalisation or automatic adaptation.

| Challenge | Adaptation | I/S | C | P | AA |
| --- | --- | --- | --- | --- | --- |
| Demotivation | Showing users suggestions to improve their performance. | I | | x | x |
| | Presenting information based on what motivates the user. | S | | x | x |
| User Training Issues | Training users in a hand-holding period or in formal training sessions. | I | | x | x |
| | Putting users into groups to introduce them to their dashboards separately. | I | | x | |
| | Showing instructions on using the dashboard via pop-up messages, annotations, videos and tutorials. | I | | x | x |
| | Emphasising the capabilities and limitations of the dashboard to eliminate confusion. | I | | x | x |
| | Showing suggestions and asking for feedback to enhance training. | I | | x | x |
| | Showing tutorials to first-time users or those who have not logged in for a while. | I | | x | x |
| | Educating users when they ask for inappropriate visualisations. | I | | x | x |
| Functionality Use Issues | Having a mechanism to apply filters on dashboards and then sharing them with users. | I | | x | x |
| | Changing filters to fit the user's role needs. | S | | x | |
| | Displaying a video of how to use functionalities and navigate the dashboard. | I | | x | x |
| | Highlighting interesting areas or where a user should click. | I | | x | x |
| Data Understanding Issues | Going through the interpretation of the visualisations explicitly. | I | | x | x |
| | Adding extra helping information (e.g., axis labels). | I | | x | x |
| | Adding data points to give context to maps and terms. | I | | x | x |
| | Adding legends even for familiar colour palettes. | I | | x | x |
| | Using the least number of charts (maximum two by default). | S | x | x | x |
| | Utilising rules associated with charts to highlight changes automatically. | I | | x | x |
| | Including definitions of KPIs for a variety of industries/domains. | I | x | x | x |
| Inappropriate Information Placement | Using white space to emphasise groups of charts. | I | | | x |
| | Keeping common functionalities in a particular area. | I | | x | x |
| | Enabling sorting of dashboard content to explore different outcomes. | S | x | | |
| | Creating responsive layouts to be used on multiple devices. | I | | | x |
| | Organising charts in the order they should be read. | I | x | x | x |
| | Grouping dashboards by type to ease navigation. | I | x | x | x |
| Sum of adaptations | | | 5 | 23 | 23 |