Security for People with Mental Illness in Telehealth Systems: A Proposal
Abstract
A mental health crisis looms large and needs to be addressed. In the United States alone, across age groups, more than 50% of people with any mental illness (AMI) did not seek or receive any service or treatment[49]. The proliferation of telehealth and telepsychiatry tools and systems[12, 8] can help address this crisis, but beyond traditional privacy regulation, e.g. the Health Insurance Portability and Accountability Act (HIPAA), there does not seem to be enough attention on the security needs, concerns, or user experience of people with AMI using those telehealth systems.
In this text, I explore some priority security properties for telehealth systems used by people with AMI for mental health services (MHS). I also suggest some key steps in a proposed process for designing and building security mechanisms into such systems, so that security is accessible and usable to patients with AMI, and these systems can achieve their goal of ameliorating this mental health crisis.
1 Introduction
Mental health issues are prevalent all around us, and their scale is staggering. Within the United States alone, in 2017, 46.6 million adults had a mental illness, 49.5% of adolescents had any mental disorder, and 10.6 million adults seriously considered suicide[49, 1]. An estimated 50% of mental illness begins by age 14, and 75% by age 24, while suicide is the third leading cause of death in the 10–24 age group, among whom 90% had an underlying mental illness[9, 50]. Telehealth and telepsychiatry tools and systems have been developed with the hope of helping to address this crisis, and while they all must comply with HIPAA, one dimension is missing: security that is psychologically acceptable to people with AMI.
The “psychological acceptability” principle was identified as a design principle as early as 1975[59], but it was not until the 1990s that “usable security” started to get its due attention[74, 52]. Moreover, the audience of “psychological acceptability” is left open and wide: to whom, or to what audience, are the security measures and mechanisms psychologically acceptable? What if the psychological or mental state of the audience is impaired, or the audience has mental disorders?
This question is open, wide, and, more importantly, tricky. While it is relatively easy to notice and diagnose cognitive impairment and neurocognitive disorders as they manifest in domains such as attention, recognition, and language[28, 46, 57], the vast majority of people with mental illness continue to function in daily life[10]. What is even trickier is that a mental disorder may eventually come to affect cognition and behavior, as the Diagnostic and Statistical Manual of Mental Disorders (DSM, latest edition DSM-5) defines a mental disorder as “…a syndrome characterized by clinically significant disturbance in an individual’s cognition, emotion regulation, or behaviors…”[11]. How might we build security into telehealth systems that will be relied upon by many people with mental illness, who are a diverse and complex, but under-served and usually invisible, user base? This is a question worth asking and solving. In this work, I propose some priority properties of security in telehealth systems used for MHS, and suggest some key steps in a process for building usable and secure telehealth systems for people with AMI. Here I adapt [56]’s definition of telehealth to better suit the MHS context. While telehealth remains fundamentally “the use of electronic information and telecommunication technologies to support long-distance clinical health care etc.,” providers of MHS via telehealth need not be human: they can be automated, interactive agents such as social bots, e.g. conversational agents (colloquially “chatbots”).
2 Related Work
For security and usable security, much has been written and researched. However, even though “psychological acceptability” was proposed as a key principle for security, it only started receiving attention much later, and security measures keep confusing users[74, 2, 70, 60, 19]. Moreover, the “psychological acceptability” principle is often doubted as being incompatible with the goal of “security”[74, 22, 61, 64, 54, 13, 73, 51], and usable security remains a small community compared to other areas of security research. Also, as [69] points out, usable security is designed with the general population in mind, and may leave out specific vulnerable groups that are under-served. This leaves us without a deep foundation to work with when we consider building psychologically acceptable security for those whose mental state may suffer from disorders or illness, into systems that many of them may rely on for treatment and services.
More recently, however, usable security for vulnerable groups has been getting more attention, especially for older adults and the visually impaired, thanks to works such as [44, 48, 18, 4, 23, 33, 55, 66, 72, 3] from both the usable security and human-computer interaction (HCI) communities. On cognitively impaired users, [43] conducted an excellent user study of older adults with mild cognitive impairments and their online security behaviors around sharing passwords and identity information; while it discussed risks and gave examples on access and control, it stopped short of providing security-specific suggestions, properties, principles, or processes. [42] investigated the behaviors of certain cognitively impaired users with authentication methods, but only in a simulated e-commerce setting, not in telehealth systems used for MHS: such systems hold far more sensitive information and interactions from and about their users.
On the other hand, the rise of telehealth systems for mental health (e.g. social bots and online therapy) has prompted active research from the HCI community and health researchers[63, 15, 6, 7, 14, 17, 16, 25, 26, 30, 41, 71, 37, 53, 20], but their primary focus is on users’ experience of treatment, their system interaction experiences, and the effects of therapy, with little consideration given to the psychologically acceptable kind of security for users with AMI. [71] uncovered general “tensions with technology” among members of several peer-support therapy groups, and while it identified “anonymity, identity, access” as parts of those tensions, it focused on user experiences and participation and did not address inclusive security.
For telehealth systems, at least in the United States, federal and state guidelines on security mechanisms are inconsistent[65], and while medical professionals have proposed measures[21, 27, 39, 40, 67] to improve security in telehealth systems, those measures largely target the security concerns of medical institutions[38] and tend to be more policy- and administration-oriented than patient-focused.
3 A Proposal on Properties & Process
These properties are by no means exhaustive or authoritative. They are my early explorations into making security methods, mechanisms, and designs easily accessible and usable by people with AMI. Most, if not all, of these properties have been discussed in the general computer science and security literature before. However, putting them into the context of providing security to people with AMI accessing MHS via telehealth systems places them in high-priority positions for the practical design and implementation of those systems.
3.1 Some Priority Properties
- Trust-inducing
As [71] illustrates, distrust of technology (e.g. video chats, social networks, cloud data storage) forms the basis of tension with technology in several peer-support therapy groups, and older adults with cognitive impairment are concerned about their privacy when doing art therapy online as well[20]. With this distrust of general technology already in the minds of users who seek MHS, it is extremely important for telehealth systems (which handle more sensitive data, content, and interactions than general technology platforms and services) to earn and induce trust from users, to ensure and encourage adoption and usage. But security measures, even good ones, can easily confuse or mislead users[74, 2, 70, 60, 19, 58], and when such confusion and misdirection in telehealth systems happen to users with AMI seeking MHS, these users may withdraw from telehealth systems altogether, and their withdrawal may have larger and more detrimental effects on their well-being than a similar withdrawal would have on the general population.
- Robust
By definition[11], mental disorders can disrupt a person’s behaviors and cognition, and such disturbances may fall outside the user models and behavioral expectations of general usable security, which has the general population in mind. It would therefore be wise to expect more errors, faults, and unexpected inputs or behaviors from users with AMI, and to let this expectation lead to building more robustness into the security measures of telehealth systems used for MHS. Robustness is especially important for telehealth systems whose MHS providers are automated interactive agents, which are more affordable and accessible than human providers: they are likely to be relied upon most by people with AMI whose socio-economic status and life circumstances do not give them access to human MHS providers, and without robustness, these automated agents may deny care to the users who need it most.
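As a concrete illustration, consider what this expectation could look like in an automated agent’s input loop. This is a minimal Python sketch; the names (handle_turn, SAFE_RETRY_LIMIT) and the specific tolerances are hypothetical assumptions, not a prescribed implementation.

```python
# Hypothetical sketch: tolerate unexpected input instead of rejecting it,
# and never hard-lock a user out of care mid-session.

SAFE_RETRY_LIMIT = 5  # deliberately generous; an assumed value, not a standard


def normalize(raw: str) -> str:
    """Collapse whitespace and stray formatting rather than treating it as error."""
    return " ".join(raw.split())


def handle_turn(raw_input: str, retries: int) -> tuple[str, int]:
    """Return (agent response, updated retry count); degrade gracefully."""
    text = normalize(raw_input)
    if retries >= SAFE_RETRY_LIMIT:
        # Instead of locking the session, offer a fallback channel to care.
        return ("It seems we're stuck. Would you like me to connect you "
                "with a human counselor instead?", retries)
    if not text:
        # Empty or whitespace-only input: re-prompt gently, don't error out.
        return ("I didn't catch that. Take your time, I'm still here.", retries + 1)
    return (f"Thanks for sharing. You said: {text}", 0)
```

The design choice to sketch here is that unexpected input raises the retry counter rather than triggering denial, and exhausting retries routes toward care rather than away from it.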
- Cooperative
MHS is naturally a cooperative process, with interactions among patients, providers, other caregivers, and peers active and evolving throughout. People with AMI may have delegated certain powers to caregivers other than their providers, and may also seek MHS together with others, in scenarios such as family therapy and peer-support group therapy. In light of this, it seems sensible to consider making certain security mechanisms and features cooperative. This cooperative nature of MHS may pose legal challenges for security, however: in the U.S., for example, a shared password, even a voluntarily shared one, counts as unauthorized access under the Computer Fraud and Abuse Act, despite technology policy organizations’ persistent activism[20]. In spite of this challenge, there are times in a person’s life when such sharing and other cooperative procedures are necessary. Telehealth systems used for MHS should proactively consider building security measures that address the roles and functions of caregivers, peers, partners, etc., and should actively communicate the cooperative aspects of their security to users in the process of delivering MHS.
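To make the cooperative idea concrete, below is a hypothetical Python sketch of explicit, scoped, time-limited delegation as an alternative to password sharing. The Delegation and Scope names and the specific scope values are invented for illustration.

```python
# Hypothetical sketch: a patient grants a caregiver narrow, expiring access
# instead of sharing credentials.
from dataclasses import dataclass
from datetime import datetime, timedelta
from enum import Enum, auto


class Scope(Enum):
    VIEW_APPOINTMENTS = auto()
    JOIN_FAMILY_SESSION = auto()
    MANAGE_BILLING = auto()
    READ_CLINICAL_NOTES = auto()  # deliberately never bundled with the others


@dataclass
class Delegation:
    patient_id: str
    caregiver_id: str
    scopes: set[Scope]
    expires_at: datetime

    def permits(self, caregiver_id: str, scope: Scope) -> bool:
        """Allow only the named caregiver, only granted scopes, only until expiry."""
        return (caregiver_id == self.caregiver_id
                and scope in self.scopes
                and datetime.now() < self.expires_at)


# Example: a family member may join joint sessions for 90 days,
# without credentials being shared and without seeing clinical notes.
grant = Delegation("patient-1", "caregiver-7",
                   {Scope.VIEW_APPOINTMENTS, Scope.JOIN_FAMILY_SESSION},
                   datetime.now() + timedelta(days=90))
assert grant.permits("caregiver-7", Scope.JOIN_FAMILY_SESSION)
assert not grant.permits("caregiver-7", Scope.READ_CLINICAL_NOTES)
```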
- Functional
This may sound silly and obvious, but given the infamous security-functionality trade-off[74, 36, 5, 32, 47] even for the general population, it is worth emphasizing functionality when designing security to include people with AMI, who need the MHS provided via telehealth systems more than the general population does. Usable security measures should not become obstacles for people with AMI when they access telehealth systems’ care-providing capabilities, and ideally should not degrade their general user experience either.
- Fail gracefully
Security methods, however carefully designed and built, may still fail to guide users toward the right actions, but a graceful failure can help lead users to the right path next time. When a telehealth system’s security mechanism fails to elicit the right actions from users with AMI, how should the system respond so that users can do the right thing next time? Given that mental disorders can affect cognition and behavior, will users with AMI react differently to security warnings and failure messages than the general population does? If so, how differently, and in what respects? What post-failure content or actions, educational or otherwise, can the telehealth system provide to help users do the right thing next time? These are important questions to consider when designing and building security mechanisms into telehealth systems that provide MHS to users with AMI.
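As one way to ground these questions, the Python sketch below pairs each failure type with a calm explanation and a concrete next step, instead of a bare error. The failure categories and all message text are illustrative assumptions; real wording would need clinical input, as discussed in section 3.2.

```python
# Hypothetical sketch: each security failure maps to what happened plus a
# concrete, non-punitive next step, rather than an error code.

FAILURE_RESPONSES = {
    "auth_failed": (
        "That sign-in didn't work, and that's okay.",
        "Would you like a sign-in link sent to your email instead?",
    ),
    "session_expired": (
        "Your session ended to keep your information safe.",
        "Everything you shared is saved. Sign in again to pick up where you left off.",
    ),
    "suspicious_activity": (
        "We paused your account because something looked unusual.",
        "This protects your records. Here's a short guide to restoring access.",
    ),
}


def respond_to_failure(kind: str) -> str:
    """Return a reassuring explanation plus an actionable next step."""
    what_happened, next_step = FAILURE_RESPONSES.get(
        kind,
        ("Something went wrong on our side.", "Please try again in a moment."),
    )
    return f"{what_happened} {next_step}"


print(respond_to_failure("auth_failed"))
```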
3.2 Process: Some Key Steps
Based on the preceding discussion, it is now helpful to start contemplating the practical aspects of building these priority security properties into telehealth systems used for MHS. I propose some key steps for making security inclusive of users with AMI, and invite the wider usable security community to further contribute to, discuss, debate, utilize, improve, and architect processes that would engineer more inclusive security mechanisms in telehealth systems for people with AMI.
- Build inclusive mental models
[34] advocates formal methods for building the mental models behind inclusive security’s user models; in practice, it might be realistic to first build mental models of the security concerns and potential ranges of behaviors for the sub-population of people with AMI whom the telehealth system targets. These users may differ and diverge from the general population in how they perceive, expect, and manage security and risk in telehealth systems, and their behaviors can range wider than the general population’s when it comes to using security methods. Including these considerations about the concerns and behaviors of people with AMI would be the first step toward inclusive security in telehealth systems providing MHS.
An example: young adults with attention-deficit/hyperactivity disorder (ADHD) who seek conversational therapy from an automated agent (“chatbot”) online. While CAPTCHAs could help protect the chatbot against spam, users with ADHD (who generally have shorter attention spans than the general population[45]) may be more likely to abandon the effort, especially when multiple CAPTCHAs come one after another in various forms (e.g. text, sound, or image recognition). They may even leave with the belief that such a security mechanism was set up to trick them or deny them access. Here, an inclusive mental model of user behaviors not only helps users access MHS, but also encourages creative solutions. For instance, instead of human-recognition-based CAPTCHAs, which may require sustained non-interactive attention, might a 30-second “trial conversation” with the chatbot to decide “human-or-not” be useful?
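A speculative sketch of that “trial conversation” idea follows: the agent scores a short exchange for signals that are cheap for humans to produce but awkward for bulk spam bots. The signals and thresholds below are invented for illustration, not validated classifiers.

```python
# Hypothetical sketch: score a short opening exchange instead of a CAPTCHA.
import time


def trial_conversation_score(turns: list[tuple[float, str]]) -> float:
    """Score (timestamp, message) pairs; higher suggests a human interlocutor."""
    if len(turns) < 2:
        return 0.0
    score = 0.0
    gaps = [b[0] - a[0] for a, b in zip(turns, turns[1:])]
    # Humans pause to read and type; near-instant replies are suspect.
    if all(0.5 < g < 60 for g in gaps):
        score += 0.5
    # Humans vary their messages; bulk bots often repeat identical payloads.
    if len({msg for _, msg in turns}) == len(turns):
        score += 0.5
    return score


# Example: two replies with plausible human pacing and varied content.
now = time.time()
turns = [(now, "hi"), (now + 4.2, "i've been feeling anxious lately")]
is_probably_human = trial_conversation_score(turns) >= 0.75
```

Crucially, in the spirit of the robustness and fail-gracefully properties, a low score should route to another gentle check or a human reviewer, never to a silent denial of care.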
- Incorporate clinical providers’ inputs
Continuing from the previous point, understanding and modeling the behaviors and concerns of people with AMI cannot be done in a vacuum, or from an armchair checking off criteria in DSM-5 (DSM provides a common framework for describing psychopathology, and its greatest strength is its reliability; meanwhile, there have always been controversies around DSM’s criteria, categorization, characterization, and clinical validity[68, 35]). To build realistic user models and their mental models of security, it would be helpful to consult clinical practitioners on what ranges of behaviors to expect from patients with AMI seeking MHS. Practically speaking, incorporating clinicians’ views is the next best available approach, short of actual target-user research and interviews: in environments where AMI is stigmatized, people with AMI may not be willing to disclose their conditions to researchers, let alone be interviewed, observed, and studied in their use of telehealth systems. Whenever pragmatic and viable, user research is still the preferred and best method of research and user modeling, but in its absence, a good alternative is to seek clinicians’ input on the online behavioral patterns they have observed in their patients while providing MHS, and from these to extract possible mental models of security of people with AMI.
- Consider cooperative situations in security
As described in section 3.1, MHS is cooperative by nature, and in the context of telehealth systems for MHS, “cooperation” can happen between human beings and automated interactive agents. Building technical security mechanisms, and writing user-facing security communications (e.g. user agreements), that allow security to be cooperative in certain MHS contexts would therefore be one important way for telehealth systems to make security more inclusive for people with AMI seeking MHS.
- Define boundaries of cooperation
While cooperation is important in MHS, it is also crucial not to idealize it in security settings, especially for security in MHS. Boundary violations between providers and patients seeking MHS, and exploitation of those patients by their caregivers and peers[31, 62, 24, 29], can and do happen in offline settings. When MHS is provided online via telehealth systems, the boundary and exploitation problems only become more complex, given all the security methods and mechanisms in place to let people with AMI access MHS. When designing and building the cooperative aspects of security in telehealth systems used for MHS, we should also draw boundaries around the extent to which providers, caregivers, and peers can influence and change the security decisions, settings, and behaviors of the principal users with AMI who seek MHS.
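One hypothetical way to encode such boundaries, continuing the delegation sketch from section 3.1, is to hard-partition the settings a delegate may touch from those reserved to the principal user, no matter how broad a delegation’s scopes are. The setting names below are illustrative assumptions.

```python
# Hypothetical sketch: delegation can never reach principal-only settings.

PRINCIPAL_ONLY_SETTINGS = {
    "delegations",           # delegates may never extend their own access
    "password",
    "recovery_contacts",
    "data_sharing_consent",
}

DELEGATE_SETTINGS = {"appointment_reminders", "display_language"}


def may_change(setting: str, actor_is_principal: bool) -> bool:
    """Boundary rule: only the principal changes principal-only settings."""
    if setting in PRINCIPAL_ONLY_SETTINGS:
        return actor_is_principal
    return setting in DELEGATE_SETTINGS or actor_is_principal


assert not may_change("delegations", actor_is_principal=False)
assert may_change("appointment_reminders", actor_is_principal=False)
```

The point of the partition is that a caregiver granted broad day-to-day powers still cannot escalate their own access or alter the patient’s consent and recovery settings.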
- Tailor communications
This is a logical conclusion of all the preceding points. What works well for the general population in general systems to enhance security (e.g. pop-up warnings, color-coded buttons, push notifications, conventional user interfaces) may or may not work well for people with AMI seeking MHS via telehealth systems. What is more, some users with AMI may also be cognitively impaired, posing even more challenges. Having built inclusive and diverse user models, it makes sense to implement an inclusive and diverse set of communication paradigms and tools, ranging from rewritten warning messages to customized user interfaces, so that each user model is accounted for when users encounter security mechanisms in telehealth systems.
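As an illustration, the sketch below tailors a single warning to different hypothetical user models. The model names and presentation parameters are invented for this example; real values would have to come from the modeling and clinician-input steps above.

```python
# Hypothetical sketch: per-user-model presentation parameters for one warning.
from dataclasses import dataclass


@dataclass
class WarningStyle:
    max_words: int             # shorter messages for shorter attention spans
    use_color_coding: bool
    offer_read_aloud: bool
    allow_dismiss_later: bool  # avoid forcing an immediate decision


STYLES = {
    "default":        WarningStyle(60, True,  False, False),
    "adhd":           WarningStyle(20, True,  False, True),
    "anxiety":        WarningStyle(40, False, False, True),  # softer framing
    "low_vision_ami": WarningStyle(40, False, True,  False),
}


def render_warning(user_model: str, short: str, long: str) -> str:
    """Pick the message variant that fits the user model's word budget."""
    style = STYLES.get(user_model, STYLES["default"])
    text = short if len(long.split()) > style.max_words else long
    suffix = " [Remind me later]" if style.allow_dismiss_later else ""
    return text + suffix


long_msg = ("We noticed a new sign-in to your account from an unrecognized "
            "device. If this was you, no action is needed. If not, please "
            "review your recent activity and update your password.")
print(render_warning("adhd", "New sign-in. Was this you? [Review]", long_msg))
```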
- Evaluate failures
During early-stage development and small-scale user trials, it is important to document and evaluate failures: cases where security mechanisms and methods fail to lead people with AMI down the right action paths or to accomplish their purposes. Those failures may stem from incorrect assumptions about user behaviors, insufficient robustness, unclear warnings and failure messages, incorrect implementations, or a variety of other causes. Regardless of the cause, learning from these failures, how they became hurdles to actual patients using telehealth systems for MHS, or in fact motivated insecure behaviors, would yield valuable lessons for future inclusive security designs and implementations for people with AMI, not only in telehealth systems but also in general technology services and products.
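A minimal sketch of what such documentation could look like follows: each failure becomes a structured, de-identified event tagged with a suspected cause mirroring the reasons listed above. All field names are hypothetical.

```python
# Hypothetical sketch: structured, de-identified security-failure events.
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

CAUSES = {"wrong_behavior_assumption", "insufficient_robustness",
          "unclear_warning", "implementation_bug", "other"}


@dataclass
class SecurityFailureEvent:
    mechanism: str             # e.g. "login", "human_verification", "delegation"
    suspected_cause: str       # one of CAUSES, refined during review
    user_abandoned: bool       # did the user give up on accessing MHS?
    insecure_workaround: bool  # did the failure motivate insecure behavior?
    notes: str                 # free text, with identifying details removed
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

    def to_log_line(self) -> str:
        assert self.suspected_cause in CAUSES
        return json.dumps(asdict(self))


event = SecurityFailureEvent("human_verification", "insufficient_robustness",
                             user_abandoned=True, insecure_workaround=False,
                             notes="user left after third consecutive CAPTCHA")
print(event.to_log_line())
```

The two boolean fields deliberately capture the outcomes this section worries about most: abandonment of care and failure-motivated insecure behavior.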
4 Future Work
This is still early-stage work, and building inclusive security for people with AMI is an open field with many open problems. These properties and steps are a suggestion, an invitation, and an encouragement for the usable security and inclusive security communities to examine, understand, and build toward the security needs of people with AMI. Telehealth systems used for MHS are the most obvious first target, and the lessons learned there could inspire, expand into, transfer to, and be adapted for the security designs and mechanisms of other technology services and products, to include more under-served groups.
One direction for carrying this research forward is to observe, interview, and evaluate the security behaviors of a specific sub-population of people with AMI, for example adults with ADHD who use telehealth systems to access MHS provided by automated, interactive agents. Another possible direction is to evaluate the current security solutions in a popular telehealth system used for MHS, and to examine how inclusive or exclusive these solutions are toward certain people with AMI.
5 Conclusion
With mental health issues prevalent in our societies and telehealth systems proliferating, more people with mental illness are seeking, or may start to seek, mental health services via telehealth systems. However, there does not seem to be enough discussion of how, and whether, security considerations in telehealth systems include people with mental illness, who may deviate substantially, cognitively and behaviorally, from the general population for whom most security mechanisms are designed and built.
In this text, I shared some security properties that should be prioritized when building telehealth systems for people with mental illness to access mental health services. I also suggested some key steps for designing and building security mechanisms and experiences into those systems. I hope readers take away not only an awareness of the security needs of people with mental illness, but also insights into how the usable security community can start contributing to this important but under-served population, which deserves our attention as we build usable and inclusive security.
References
- [1] Substance Abuse and Mental Health Services Administration. Key substance use and mental health indicators in the united states: Results from the 2017 national survey on drug use and health, 2018.
- [2] Anne Adams and Martina Angela Sasse. Users are not the enemy. Communications of the ACM, 42(12), 1999.
- [3] Tousif Ahmed, Roberto Hoyle, Kay Connelly, David Crandall, and Apu Kapadia. Privacy concerns and behaviors of people with visual impairments. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, pages 3523–3532, 2015.
- [4] Tousif Ahmed, Patrick Shaffer, Kay Connelly, David Crandall, and Apu Kapadia. Addressing physical safety, security, and privacy for people with visual impairments. In Twelfth Symposium on Usable Privacy and Security (SOUPS 2016), pages 341–354, 2016.
- [5] Eirik Albrechtsen and Jan Hovden. Improving information security awareness and behaviour through dialogue, participation and collective reflection. an intervention study. Computers & Security, 29(4), 2010.
- [6] Gerhard Andersson and Pim Cuijpers. Internet-based and other computerized psychological treatments for adult depression: a meta-analysis. Cognitive behaviour therapy, 38(4):196–205, 2009.
- [7] Gerhard Andersson, Alexander Rozental, Roz Shafran, and Per Carlbring. Long-term effects of internet-supported cognitive behaviour therapy. Expert review of neurotherapeutics, 18(1):21–28, 2018.
- [8] American Psychiatric Association. Telepsychiatry.
- [9] American Psychiatric Association. Warning signs of mental illness.
- [10] American Psychiatric Association. What is mental illness?
- [11] American Psychiatric Association. Diagnostic and Statistical Manual of Mental Disorders, fifth edition. 2013.
- [12] American Psychological Association. Psychologists embrace telehealth to prevent the spread of covid-19, 2020.
- [13] Dirk Balfanz, Glenn Durfee, Diana K Smetters, and Rebecca E Grinter. In search of usable security: Five lessons from the field. IEEE Security & Privacy, 2(5), 2004.
- [14] Angela Beattie, Alison Shaw, Surinder Kaur, and David Kessler. Primary-care patients’ expectations and experiences of online cognitive behavioural therapy for depression: a qualitative study. Health Expectations, 12(1):45–59, 2009.
- [15] Nina Bendelin, Hugo Hesser, Johan Dahl, Per Carlbring, Karin Zetterqvist Nelson, and Gerhard Andersson. Experiences of guided internet-based cognitive-behavioural treatment for depression: a qualitative study. BMC psychiatry, 11(1):107, 2011.
- [16] Timothy W Bickmore, Suzanne E Mitchell, Brian W Jack, Michael K Paasche-Orlow, Laura M Pfeifer, and Julie O’Donnell. Response to a relational agent by hospital patients with depressive symptoms. Interacting with computers, 22(4), 2010.
- [17] Timothy W Bickmore, Kathryn Puskar, Elizabeth A Schlenk, Laura M Pfeifer, and Susan M Sereika. Maintaining reality: Relational agents for antipsychotic medication adherence. Interacting with Computers, 22(4), 2010.
- [18] Kelly E Caine, Celine Y Zimmerman, Zachary Schall-Zimmerman, William R Hazlewood, L Jean Camp, Katherine H Connelly, Lesa L Huber, and Kalpana Shankar. Digiswitch: A device to allow older adults to monitor and direct the collection and transmission of health information collected at home. Journal of medical systems, 35(5):1181–1195, 2011.
- [19] Sonia Chiasson, PC van Oorschot, and Robert Biddle. Even experts deserve usable security: Design guidelines for security management systems. In SOUPS Workshop on Usable IT Security Management (USM). Citeseer, 2007.
- [20] Raymundo Cornejo, Robin Brewer, Caroline Edasis, and Anne Marie Piper. Vulnerability, sharing, and privacy: Analyzing art therapy for older adults with dementia. In Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing, pages 1572–1583, 2016.
- [21] Hilary Daniel and Lois Snyder Sulmasy. Policy recommendations to guide the use of telemedicine in primary care settings: an american college of physicians position paper. Annals of internal medicine, 163(10):787–789, 2015.
- [22] Alexander J DeWitt and Jasna Kuljis. Is usable security an oxymoron? interactions, 13(3):41–44, 2006.
- [23] Bryan Dosono, Jordan Hayes, and Yang Wang. “i’m stuck!”: A contextual inquiry of people with visual impairments in authentication. In Eleventh Symposium On Usable Privacy and Security (SOUPS 2015), pages 151–168, 2015.
- [24] Richard S Epstein and Robert I Simon. The exploitation index: An early warning indicator of boundary violations in psychotherapy. Bulletin of the Menninger Clinic, 54(4):450, 1990.
- [25] Emilio Ferrara, Onur Varol, Clayton Davis, Filippo Menczer, and Alessandro Flammini. The rise of social bots. Communications of the ACM, 59(7):96–104, 2016.
- [26] Kathleen Kara Fitzpatrick, Alison Darcy, and Molly Vierhile. Delivering cognitive behavior therapy to young adults with symptoms of depression and anxiety using a fully automated conversational agent (woebot): a randomized controlled trial. JMIR mental health, 4(2), 2017.
- [27] Alexander L Fogel and Kavita Y Sarin. A survey of direct-to-consumer teledermatology services available to us patients: explosive growth, opportunities and controversy. Journal of telemedicine and telecare, 23(1):19–25, 2017.
- [28] Centers for Disease Control and Prevention. Cognitive impairment: a call for action, now!, 2011.
- [29] Glen O Gabbard. Lessons to be learned from the study of sexual boundary violations. American Journal of Psychotherapy, 50(3):311–322, 1996.
- [30] Christian Grimme, Mike Preuss, Lena Adam, and Heike Trautmann. Social bots: Human-like by means of human control? Big data, 5(4), 2017.
- [31] Thomas G Gutheil and Glen O Gabbard. The concept of boundaries in clinical practice: Theoretical and risk-management dimensions. The American journal of psychiatry, 1993.
- [32] Janne Merete Hagen and Eirik Albrechtsen. Effects on employees’ information security abilities by e-learning. Information Management & Computer Security, 2009.
- [33] Md Munirul Haque, Shams Zawoad, and Ragib Hasan. Secure techniques and methods for authenticating visually impaired mobile phone users. In 2013 IEEE International Conference on Technologies for Homeland Security (HST), pages 735–740. IEEE, 2013.
- [34] Adam Houser and Matthew L Bolton. Formal mental models for inclusive privacy and security. In SOUPS, 2017.
- [35] Thomas Insel. Post by former nimh director thomas insel: Transforming diagnosis, 2013.
- [36] Ronald Kainda, Ivan Flechais, and AW Roscoe. Security and usability: Analysis and evaluation. In 2010 International Conference on Availability, Reliability and Security. IEEE, 2010.
- [37] Taewan Kim, Mintra Ruensuk, and Hwajung Hong. In helping a vulnerable bot, you help yourself: Designing a social bot as a care-receiver to promote mental health and reduce stigma. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, page 1–13, 2020.
- [38] Steve G Langer. Cyber-security issues in healthcare information technology. Journal of digital imaging, 30(1):117–125, 2017.
- [39] Cynthia LeRouge and Monica J Garfield. Crossing the telemedicine chasm: have the us barriers to widespread adoption of telemedicine been significantly reduced? International journal of environmental research and public health, 10(12):6472–6484, 2013.
- [40] David D Luxton, Robert A Kayl, and Matthew C Mishkind. mhealth data security: The need for hipaa-compliant standardization. Telemedicine and e-Health, 18(4):284–288, 2012.
- [41] Kien Hoa Ly, Ann-Marie Ly, and Gerhard Andersson. A fully automated conversational agent for promoting mental well-being: a pilot rct using mixed methods. Internet interventions, 10:39–46, 2017.
- [42] Yao Ma, Jinjuan Feng, Libby Kumin, and Jonathan Lazar. Investigating user behavior for authentication methods: A comparison between individuals with down syndrome and neurotypical users. ACM Transactions on Accessible Computing (TACCESS), 4(4):1–27, 2013.
- [43] Helena M Mentis, Galina Madjaroff, and Aaron K Massey. Upside and downside risk in online security for older adults with mild cognitive impairment. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, pages 1–13, 2019.
- [44] Cosmin Munteanu, Calvin Tennakoon, Jillian Garner, Alex Goel, Mabel Ho, Clare Shen, and Richard Windeyer. Improving older adults’ online security: An exercise in participatory design. In Symposium on Usable Privacy and Security (SOUPS), 2015.
- [45] NIH National Institute of Mental Health. Attention-deficit/hyperactivity disorder, 2019.
- [46] NIH National Institute on Aging. What is mild cognitive impairment?, 2017.
- [47] Guillermo Navarro and Simon N Foley. Approximating saml using similarity based imprecision. In International Conference on Intelligence in Communication Systems, pages 191–200. Springer, 2005.
- [48] James Nicholson, Lynne Coventry, and Pamela Briggs. “If it’s important it will be a headline”: Cybersecurity information seeking in older adults. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, pages 1–11, 2019.
- [49] National Institute of Mental Health. Nih: Mental illness, February 2019.
- [50] National Association on Mental Illness. Warning signs of mental illness, July 2018.
- [51] Andrew S Patrick, A Chris Long, and Scott Flinn. Hci and security systems. In CHI’03 Extended Abstracts on Human Factors in Computing Systems, 2003.
- [52] Bryan D Payne and W Keith Edwards. A brief introduction to usable security. IEEE Internet Computing, 12(3):13–21, 2008.
- [53] Sachin R. Pendse, Faisal M. Lalani, Munmun De Choudhury, Amit Sharma, and Neha Kumar. “like shock absorbers”: Understanding the human infrastructures of technology-mediated mental health support. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, page 1–14, 2020.
- [54] Lucy Qin, Andrei Lapets, Frederick Jansen, Peter Flockhart, Kinan Dak Albab, Ira Globus-Harris, Shannon Roberts, and Mayank Varia. From usability to secure computing and back again. In Fifteenth Symposium on Usable Privacy and Security (SOUPS 2019), 2019.
- [55] Kyle Rector, Lauren Milne, Richard E Ladner, Batya Friedman, and Julie A Kientz. Exploring the opportunities and challenges with exercise technologies for people who are blind or low-vision. In Proceedings of the 17th International ACM SIGACCESS Conference on Computers & Accessibility, pages 203–214, 2015.
- [56] Health Resources and Services Administration. Telehealth programs, 2019.
- [57] Allyson Rosen, Maya Yutsis, and Brian Yochim. Neurocognitive disorders of the dsm-5, 2017.
- [58] Scott Ruoti, Nathan Kim, Ben Burgon, Timothy Van Der Horst, and Kent Seamons. Confused johnny: when automatic encryption leads to confusion and mistakes. In Proceedings of the Ninth Symposium on Usable Privacy and Security, pages 1–12, 2013.
- [59] Jerome H Saltzer and Michael D Schroeder. The protection of information in computer systems. Proceedings of the IEEE, 1975.
- [60] Steve Sheng, Levi Broderick, Colleen Alison Koranda, and Jeremy J Hyland. Why johnny still can’t encrypt: evaluating the usability of email encryption software. In Symposium On Usable Privacy and Security. ACM, 2006.
- [61] D Smetters. Usable security: Oxymoron or challenge, 2007.
- [62] David Smith and Marilyn Fitzpatrick. Patient-therapist boundary issues: An integrative review of theory and research. Professional psychology: research and practice, 26(5):499, 1995.
- [63] Katarzyna Stawarz, Chris Preist, Deborah Tallon, Laura Thomas, Katrina Turner, Nicola Wiles, David Kessler, Roz Shafran, and David Coyle. Integrating the digital and the traditional to deliver therapy for depression: Lessons from a pragmatic study. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, CHI ’20, 2020.
- [64] Mary Theofanos. Is usable security an oxymoron? Computer, 53(2), 2020.
- [65] Reed V Tuckson, Margo Edmunds, and Michael L Hodgkins. Telehealth. New England Journal of Medicine, 377(16):1585–1592, 2017.
- [66] Radu-Daniel Vatavu. Improving gesture recognition accuracy on touch screens for users with low vision. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, pages 4667–4679, 2017.
- [67] John Vines, Stephen Lindsay, Gary W Pritchard, Mabel Lie, David Greathead, Patrick Olivier, and Katie Brittain. Making family care work: dependence, privacy and remote home monitoring telecare systems. In Proceedings of the 2013 ACM international joint conference on Pervasive and ubiquitous computing, pages 607–616, 2013.
- [68] Jerome C Wakefield. Diagnostic issues and controversies in dsm-5: return of the false positives problem. Annual review of clinical psychology, 12:105–132, 2016.
- [69] Yang Wang. The third wave? inclusive privacy and security. In Proceedings of the 2017 New Security Paradigms Workshop, pages 122–130, 2017.
- [70] Alma Whitten and J Doug Tygar. Why johnny can’t encrypt: A usability evaluation of pgp 5.0. In Proceedings of the 8th USENIX Security Symposium, 1999.
- [71] Svetlana Yarosh. Shifting dynamics or breaking sacred traditions? the role of technology in twelve-step fellowships. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pages 3413–3422, 2013.
- [72] Hanlu Ye, Meethu Malu, Uran Oh, and Leah Findlater. Current and future mobile and wearable device use by people with visual impairments. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pages 3123–3132, 2014.
- [73] Ka-Ping Yee. Aligning security and usability. IEEE Security & Privacy, 2(5), 2004.
- [74] Mary Ellen Zurko and Richard T. Simon. User-centered security. In Proceedings of the 1996 Workshop on New Security Paradigms, NSPW ’96. Association for Computing Machinery, 1996.