Abstract

The rapid advancement and widespread adoption of voice assistance technology have shown promise in benefiting individuals with disabilities, offering increased social participation, independence, and leisure activities. However, barriers to their full utilization have been identified, leading to potential abandonment by users with disabilities. This rapid review is aimed at filling the gap in the literature by investigating the utilization of voice assistants among people with disabilities for independent living and community participation. A comprehensive search was conducted in academic literature databases, including PubMed, Embase, and Web of Science, and gray data was sourced from public social media domains through Infegy. The analysis included 48 articles and 281 social media posts that met the inclusion criteria. Neurodiversity, disabilities affecting vision, and general disabilities were the most frequently discussed categories in both sources. The most common tasks performed using voice assistants were interface control, reminders, and environmental control, with a focus on enabling independence. Barriers to use mentioned in the literature included cognitive load during use, speech interpretation, lack of nonverbal control, and privacy concerns, while gray data reported limited functionality and speech interpretation as the primary barriers. Amazon Alexa was the most discussed brand in both sources. The findings highlight the need for further research and innovation to fully harness the potential benefits of voice assistants for individuals with disabilities. By addressing the identified barriers and tailoring voice assistance technology to the specific needs of different disability types, this technology can become a powerful tool for enhancing the lives of individuals with disabilities and promoting greater independence and community participation.

1. Introduction

The introduction and widespread usage of smartphones and smart technology over the past decade have driven growth in the use of voice assistants such as Amazon Alexa, Google Assistant, Apple Siri, and Microsoft Cortana [1]. Voice assistants rely heavily on voice recognition and artificial intelligence to provide a service to users [2]. In addition to smartphones, voice assistants are also found in microphone-equipped speakers known as smart speakers [3].

The increasing adoption of voice assistant technology among US adults, projected to reach 48.2% by 2025 and potentially encompassing 157.1 million users by 2026, underscores its expanding role in society [4]. Amazon Alexa and Google Assistant are the most popular voice assistants among users [2]. This rapid growth presents an opportunity to leverage voice assistants as a promising tool for maintaining and boosting social participation, independence, and leisure activities for individuals with disabilities [5, 6].

Schlomann [6] identified six benefits of voice assistants for people with disabilities. First, voice assistants might facilitate access to technology for people with severe cognitive, sensory, or physical disabilities who cannot use computers or cell phones in a typical manner. Voice assistants can help to solve usability issues such as small buttons or text [7]. Second, voice assistants might alleviate social isolation. People with disabilities may come to feel less lonely while talking with the voice assistant and even “personify” it (e.g., attributing the pronoun “she” to Alexa), with the device becoming a form of social companionship [8, 9]. Third, in the health area, voice assistants might assist with health monitoring and medication management [10, 11]. Fourth, voice assistants might be a platform for leisure activities, as they can play music, tell jokes, and even sing [9, 12]. Fifth, voice assistants might support independent living by automating activities of daily living, such as calling someone, ordering food, shopping online, and controlling lights [13, 14]. Furthermore, this technology might lessen caregiver demand by reading books, shopping, or answering simple questions [13]. Sixth, voice assistants might positively affect a person’s sense of agency. These devices allow people with disabilities to perform activities on their own and to feel capable of accomplishing them [9].

However, there are still barriers that may prevent the full use of voice assistance technology to support functionality for people with disabilities [1]. Among the most frequently mentioned barriers are technological issues, such as instability of the Internet connection and the technology not responding as requested [8, 9, 12–14]. Data privacy and surveillance are other issues raised in the literature [2, 8, 14]. There is also a lack of training and programs to teach people with disabilities to use these devices [9, 15]. Moreover, there are personal factors limiting use, such as a lack of knowledge of the capabilities and functions of voice assistants, difficulty discovering additional features, and difficulty remembering commands to interact with the technology [8, 9, 13].

All these issues can lead to a high rate of abandonment of this technology by users with disabilities. For instance, they may feel that this technology is not easy to use or that the voice assistant does not satisfy their demands, which can lead to frustration [9]. If individuals with disabilities do not feel satisfied using these devices and abandon them, this could escalate to a new form of social exclusion [14]. As more and more people continue to interact with voice assistants, people with disabilities could be left behind if the barriers mentioned above are not adequately addressed.

Two recent scoping reviews, by Arnold et al. [16] and Tavares et al. [17], examined how older adults use personal voice assistants and how people with disabilities use smart speakers, respectively. However, neither review adequately delves into the needs and challenges faced by subpopulations within the disability community. Moreover, both were published over two years ago, and given the rapid pace of expansion and upgrades in voice assistance technology, it is crucial to continuously explore the use of these devices by people with disabilities to ensure accessibility and usability [12].

Hence, there exists a twofold gap in the literature: the lack of comprehensive exploration of subpopulation needs and the need for up-to-date research that keeps pace with technological advancements. To address these gaps, further research is warranted to explore the potential benefits and challenges faced by individuals with disabilities using voice assistants [8, 14]. Understanding the needs of this underresearched group is crucial to ascertaining whether voice assistants can truly serve as effective assistive technology for them [6, 9].

Given the rapid global spread of voice assistant usage, particularly amid the COVID-19 pandemic, conducting a rapid review of evidence from 2021 to the present is essential to capture the latest state-of-the-art literature and usage trends [18–20]. Hence, this review is aimed at identifying what recent gray data and peer-reviewed literature have reported on the utilization of voice assistance technology for independent living and community participation among people with disabilities. Understanding the current utilization of this technology might provide valuable insights for enhancing accessibility and usefulness, ultimately supporting this population in achieving greater independence and participation in their communities.

2. Methods

A PICO format was used to guide the search process, focusing on population (P), intervention (I), comparison (C), and outcome (O). The PICO framework is commonly utilized in evidence-based practice and literature reviews to ensure a structured approach to formulating research questions and search strategies [21]. By utilizing the PICO framework, the study ensures clarity and consistency in defining the parameters for inclusion and exclusion criteria. These criteria can be seen in Table 1.

PubMed, Embase, and Web of Science were searched using a strategy that scans titles, abstracts, and text words for terms related to voice assistants and the disability community. Disability-related subject headings were also used via the PubMed and Embase thesauri (MeSH and Emtree). No subject headings were used for smart technology, as this concept has yet to be integrated into MeSH and Emtree. The scholarly literature search was run on February 14, 2023, and exported to the review software PICO Portal (version 3.0.2023.0116) (PICO Portal, New York, NY, United States. Available at http://www.picoportal.org) shortly thereafter for deduplication.

In addition to searching academic literature sources, gray data, that is, data sourced from public social media domains, was searched via Infegy (Infegy Atlas, 210W 19 Terrace #200, Kansas City, MO 64108. Available at http://www.infegy.com) on April 9, 2023. A search strategy similar to the one above was developed for Infegy, with some terms modified to adapt the search to social media. It should be noted that search terms related to the deaf and hard of hearing community were removed from the Infegy strategy due to a high number of misnomer phrases such as “Siri is deaf,” which imply that the device did not recognize speech rather than referring to the deaf community. This did not exclude results related to the deaf community, but it also did not actively search for them. Complete search strategies and term categories can be seen in Tables 2–5.

All database records retrieved were uploaded to PICO Portal and automatically deduplicated, retaining the most complete and/or most recent version. Abstract screening, full-text screening, and data extraction were completed using conventional double screening by two independent reviewers, followed by consolidation of any conflicts. If no consensus was reached, a third reviewer served as an adjudicator. Full-text versions of all titles that passed abstract screening or needed further review were obtained for full-text screening. A PRISMA flow diagram of the screening results can be seen in Figures 1 and 2 [22]. Table 6 summarizes all included articles.

Data extraction was performed utilizing PICO Portal’s customizable data extraction forms. The final extracted data was then exported to Excel for further analysis according to the domain of interest. Certain extraction categories pertained only to the peer-reviewed literature or only to the gray data; these distinctions, along with the full set of categories, can be seen in Table 7.

To summarize and explain the characteristics and findings of the included literature, a synthesis was produced with information supplied in the text and tables. The synthesis investigates the relationships and findings both within and between the included evidence sources. Subgroup analysis was performed based on the type of disability of participants. Disability categories included general, mobility/dexterity, hearing, vision, chronic health, and neurodiverse-based disabilities.
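As an illustration of the kind of subgroup tally behind the percentages reported in the Results, the short Python sketch below shows one way the exported extraction data could be summarized by data source and disability category. It is not part of the review’s actual workflow; the file name and column names (source, disability_category, tasks) are hypothetical stand-ins for the PICO Portal/Excel export fields.

# Minimal sketch (illustrative only): tallying task frequencies by disability
# category from a hypothetical extraction export.
import pandas as pd

# Load the consolidated extraction export (file name is a placeholder).
df = pd.read_csv("extraction_export.csv")

def category_percentages(frame: pd.DataFrame, column: str) -> pd.Series:
    """Percentage of records mentioning each value in a semicolon-delimited column."""
    mentions = (
        frame[column]
        .dropna()
        .str.split(";")   # one record may list several tasks
        .explode()
        .str.strip()
    )
    return (mentions.value_counts() / len(frame) * 100).round(1)

# Subgroup analysis: task frequencies within each disability category,
# reported separately for peer-reviewed articles and gray (social media) data.
for source, by_source in df.groupby("source"):          # e.g., "literature" vs. "gray"
    for disability, subgroup in by_source.groupby("disability_category"):
        print(source, disability)
        print(category_percentages(subgroup, "tasks"))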

3. Results

3.1. Sources

The analysis included 48 articles from the academic literature and 281 media posts that met our inclusion criteria. Publishing journals and conferences, along with the primary domain of each article, were recorded, identifying 39 unique publishing sources across 7 domains. The majority of articles (58.3%) came from the combined fields of technology and engineering, with significantly fewer articles sourced from the remaining fields. Of gray posts, 75.1% came from Twitter, and 36.3% of the reporters/content creators indicated having a disability. Following disabled people, news outlets (21.4%), service/medical providers (14.9%), and family members/caregivers of disabled people (12.8%) were the most common reporter types (see Figure 3 for full reporter information).

3.2. Types of Disabilities

Gray data was further analyzed to determine the types of disabilities mentioned along with device brand, type, primary utilization, tasks performed, and any barriers reported. Neurodiversity was the most frequently discussed disability category, reported in 29.2% of posts, followed by disabilities affecting vision (26.3%), general disabilities (17.4%), speech (16.4%), and mobility/dexterity (12.5%). Chronic health conditions and disabilities affecting hearing were the least discussed at 1.4% and 2.1%, respectively. Due to its high frequency, the neurodiverse category was further broken down into subcategories: ADHD was mentioned in 47.6% of neurodiverse posts, followed by dyslexia (28.0%) and autism (17.1%). Literature data was similarly analyzed; the most common disability category found in articles was speech (50%), followed by mobility/dexterity (39.6%), neurodiverse (31.3%), vision (22.9%), hearing (20.8%), and chronic health (16.7%). Neurodiversity in the literature primarily focused on dementia, mentioned in 60% of neurodiverse articles, and cognitive impairment more broadly in 46.7%. This was followed by intellectual disability (33.3%) and autism (13.3%). For a full report on the disabilities mentioned, see Figure 4.

3.3. Tasks Performed with Voice Assistants

The focus of this review was to determine how people with disabilities utilize voice assistants. Across gray data, 63.7% of posts identified tasks that enable independence, defined for this project as being able to perform a task without the assistance of another person. Only 12.6% of posts focused on enabling participation, defined as engaging with others. The tasks most reported in posts involved interface control (28.8%), reminders (16.4%), environmental control (15.7%), communication (11.4%), and reading (10.7%). All articles in the literature discussed tasks that enable independence, while only 52.1% discussed participation tasks. The most common tasks were similar to those of the gray data: interface control (85.4%), environmental control (50%), communication (45.8%), media management (39.6%), and information retrieval (37.5%). An individual breakdown of task utilization mentioned by disability can be seen in Table 8.

3.4. Barriers

Along with tasks involved in utilization, barriers to use were also assessed. Gray data focused on a much smaller pool of barriers, as seen in Figure 5, and only 26.7% of posts mentioned a barrier at all, in comparison to 50% of articles. The most common barriers reported in the gray data were speech interpretation (11.4%), limited functionality (8.9%), lack of voice control in existing devices and apps (4.3%), privacy (3.2%), and assistive features removed (2.8%). The literature identified far more barriers, the most frequent being cognitive load during use (41.7%), speech interpretation (39.6%), lack of nonverbal control (33.3%), cognitive load during setup (25%), privacy (25%), limited functionality (20.8%), and maintenance (20.8%). A full breakdown of barriers by disability type can be seen in Table 9.

3.5. Voice Assistant Brands and Environments in Which They Are Used

Device type and brand utilized were also collected. Amazon, Google, and Apple were the only widespread brands mentioned throughout all posts and articles. Reliability and functionality varied greatly by both post and article, often contradicting one another, so no conclusion can be made in that regard without further investigation. Alexa was the most discussed product, appearing in 43.1% of gray posts; Google and Siri had similar frequencies at 19.9% and 19.2%, respectively. Most posts did not specify the exact device type (e.g., smartphone or smart speaker) or the environment in which it was used, but of those that did, 21.7% mentioned home-specific use, while only 3.9% mentioned community use. Among the literature, Alexa and Google products were equally represented at 41.7%, prototype products were involved in 37.5% of articles, and Apple products appeared in 16.7%. Academic articles reported the setting of use more often, with 62.5% reporting use in the home and only 14.6% reporting community use.

4. Discussion

This rapid review provided a comprehensive overview of the current landscape of voice assistant utilization among the disability community. It also provided a unique opportunity to compare the focus of academic, peer-reviewed literature with unfiltered community discussion. Overall, both groups touched on similar usage patterns and pain points that need improvement, but the frequency of topics discussed varied considerably among the subpopulations of the disability community. While the data collected provides a wide breadth of new information, caution must be taken in its interpretation, as the nature of this review does not lend itself to establishing causation or drawing definitive conclusions. Instead, it is meant to allow for broader themes and connections to be found to help guide future research.

4.1. Use of Voice Assistants for Independence

The analysis revealed that the majority of posts and articles focused on tasks that enable independence, such as interface control, reminders, communication, and environmental control. This aligns with the potential benefits of voice assistants for individuals with disabilities, including facilitating access to technology, positively affecting a person’s sense of agency, and supporting independent living [6]. Furthermore, the literature also expounded upon the relevance of these devices in facilitating participation tasks, which involve engagement with others, underscoring their capacity to augment the overall quality of life of individuals with disabilities.

Voice assistants serve as accessible interfaces that enable individuals with disabilities to interact with various digital devices and services using voice commands, thereby reducing barriers to technology access and enhancing digital inclusion. By providing hands-free control and assistance with daily tasks such as setting reminders, managing schedules, and accessing information, voice assistants empower individuals with disabilities to perform activities independently, fostering a sense of autonomy and self-efficacy [9]. Additionally, voice assistants can assist individuals with disabilities in managing their home environment, controlling smart home devices, and accessing essential services, such as ordering groceries or scheduling transportation, thereby streamlining daily living activities and improving overall quality of life [12]. Finally, these devices facilitate communication and social interaction by enabling individuals with disabilities to make phone calls, send messages, and access social media platforms using voice commands, thereby promoting greater connectivity and community participation [16].

4.2. Where to Use Voice Assistants

Teasing out the setting in which these devices were used proved to be difficult, with many gray and literature-based sources reporting nonspecific environments. Neither the gray reports nor the literature articles focused heavily on community use, suggesting a lack of community-based applications being explored in depth. This might be explained by smart speakers being restricted to Wi-Fi-enabled locations, but that would not explain the lack of information on voice assistant use on smartphones. Similarly, workplace-specific applications of voice assistants were absent from the academic literature. In the media posts, however, there was a small (2.1% of posts) but present interest in potential workplace use, suggesting that research could be expanded to a higher level of instrumental activities of daily living. Most of the posts and articles also focused more on primary devices (speakers and smartphones) rather than secondary applications (such as apps and skills). This could be due to a lack of apps/skills available, the fact that some assistants such as Siri do not have add-on apps, or even that participants may view them collectively and not distinguish a built-in feature from an add-on application.

4.3. Barriers

Despite the promising benefits, the study also shed light on the barriers that may hinder the full utilization of voice assistants among individuals with disabilities. These barriers include limited functionality, data privacy concerns, a lack of training programs, and personal factors such as a limited understanding of the device’s capabilities and difficulty remembering commands. Among these challenges, speech interpretation and the removal of assistive features were noteworthy in the data.

4.3.1. Speech Interpretation

The barrier of speech interpretation is most evident in the literature, where it became clear that the deaf and hard of hearing communities do have an interest in using smart assistants similar to voice assistants. While all three major voice assistants (Google Assistant, Siri, and Alexa) offer keyboard alternatives to voice, this does not appear to meet the needs of deaf individuals. Many articles noted a desire within the deaf community for such systems to recognize sign language, or at the very least to offer gesture-based interactions. This topic was the focus of multiple development-based research articles, all of which succeeded in creating functional early-stage models. Despite the success of these development models, there appears to be little evidence of translation into commercial products to spur any discussion on social media.

The same issue existed within the speech-related category, where brain-computer interfaces (BCIs) were used as a solution for the lack of nonverbal control options in 14.3% of speech-related articles; 12.3% of all gray data mentioned the barrier of lack of nonverbal control, yet discussion of BCIs is completely absent on social media. While there has been limited discussion of nonverbal control across the media data, efforts have been made to improve commercial devices’ speech interpretation for users with speech impairments. Initially, multiple posts advertised and discussed new add-on skills for Alexa devices that improved speech interpretation. Not long after, user reactions flowed in asking Amazon to integrate these technologies permanently, such as this post:

[Specific skill available for download] seems like a great app for people with non-standard speech to control an @amazon [Alexa device]. But the user has to talk to the app, not the device. Why has not Amazon put something like this into Alexa so disabled people can just use their voices like everyone else? (@jennyhunterdc, 2021)

Similarly, multiple posts and duplicate posts from user shares were found advertising Google and Apple’s recruitment efforts to improve speech interpretation in this population.

Meet Relate: Google’s new beta app that is serving as a voice assistant to people with speech impairments, and helping make tech more inclusive to users with neurological conditions. (@talgroupusa, 2021)

Speech interpretation can be improved by creating more inclusive speech recognition training models and providing software updates to existing devices. Sign language and gesture-based control would most likely require developing new hardware, a much more time- and resource-costly solution, which may explain the slow turnaround despite research-reported user demand.

4.3.2. Assistive Features Removed

While device improvements and innovative ideas were seen across the gray data and the literature, degradation of existing devices and services was also, unfortunately, reported. This usually involved software updates deleting previously included assistive features after a company abandoned support for those features. The most frequently reported case occurred with Apple in 2021, affecting Siri’s ability to provide blind users access to voicemails, emails, and phone calls.

Hello. Looking for some help from fellow Redditors. I have a (visually impaired) family member that uses Siri to dictate unread emails throughout the day. After updating to iOS 15, Siri is unable to do this anymore. Prior to iOS 15, saying “Hey Siri, check unread emails” would result in Siri reading the unread emails. I checked the Accessibility settings, but I do not see any possible solutions. Any recommendations? If not, is it possible to downgrade to iOS 14? Thank you. (u/brian_jkwo, 2021)

This exposes a potential risk and trade-off of using mainstream devices as assistive technology in place of dedicated assistive devices, a practice that appears to be becoming more widespread. As stated by the Institute of Medicine Committee on Disability in America [23], companies may lack sufficient motivation to include specific accessibility features in their products unless there is a promising market and significant additional revenue at stake. Even when accessibility features have the potential to increase revenue, they must compete with other product features for limited engineering and marketing resources; therefore, accessibility features might be relegated to a lower priority.

Overall, the barriers identified in this rapid review highlight the importance of adapting voice assistance technology to meet the diverse needs of people with disabilities. For instance, the speech interpretation challenges currently presented by voice assistants can be addressed by developing more inclusive voice recognition training models. Additionally, integrating sign language or gesture-based interactions can improve accessibility for people who are deaf or hard of hearing.

Furthermore, efforts to avoid the removal of assistive features through software updates are crucial to maintaining the functionality of these devices for users with disabilities. However, achieving this level of personalization may require significant resources and investments in research and development. Nonetheless, it is imperative that technology companies prioritize the inclusion of accessibility features to ensure equitable access to voice assistance technology for people with disabilities.

4.4. Alignment between Academic Articles and Gray Data

Comparing the academic research with community discussion provided an interesting opportunity to assess whether the research lines up with actual use. Overall, the frequency of tasks discussed by the two groups aligned much more closely than that of barriers. This, however, could be due to the limited number of media posts about barriers to use. That said, alignment was not perfect for tasks and was closest for articles and posts discussing speech-related disabilities. Four out of the top five most discussed tasks matched between the groups. The disability focus with the lowest task alignment was the neurodiverse community. Initially, it appeared that the literature and gray data had a similar distribution of neurodiverse-focused articles and posts, at 31.3% and 29.2%, respectively.

However, a closer analysis breaking down neurodiversity by subtype revealed a nearly inverse relationship between the subtypes discussed, which may explain the misalignment of focus. Within the literature, the majority of articles (60%) focused on device use among people with dementia, followed by cognitive impairment (46.7%), which greatly overlapped with dementia across articles, and intellectual disability (33.3%).

Conversely, the gray data posts contained no discussion of dementia and very little on cognitive and intellectual disabilities (1.2% and 4.9%, respectively). This may be because these groups are less prevalent on social media due to age and Internet access, though more information is needed. Individuals with autism were similarly represented in both groups, at 17.1% for gray data and 13.3% for literature articles. When looking at task alignment for individuals with intellectual disabilities, a group discussed in both sources, 3 out of 5 topics aligned, a much closer result than the original 1 of 5. The two autism groups did not align; however, the literature focused more on autism presenting with intellectual effects, whereas the posts appeared to be more focused on autism without intellectual effects, which could explain this difference in use.

What is overwhelmingly apparent is that the literature greatly lacks information on how individuals with attention-deficit/hyperactivity disorder (ADHD) utilize voice assistants, despite ADHD making up 47.6% of neurodiverse posts. Similarly, dyslexia also appears to be a neglected area of focus, covered by only 6.7% of neurodiverse articles compared to 28% of related media posts. While the lack of media posts from individuals with cognitive and intellectual disabilities may be explained by a lack of access to or participation in social media, the reason for the lack of research on ADHD and dyslexia is unclear. More research and attention to these populations are needed to determine how these tools can be utilized to improve independence among individuals with disabilities affecting executive function and learning.

Across both groups, areas such as mental health and learning disabilities received little discussion, and while hearing-related articles made up 20.8% of the literature, hearing was discussed in only 2.1% of social media posts. The omission of the term “deaf” from the social media search might have contributed to the limited representation of relevant posts within this domain.

Overall, though many of the uses of voice assistance technology among the disability community seem to mirror what would be expected among nondisabled individuals, more comparative research is needed to say so definitively. It does appear, however, that the disability community has found unique ways to utilize voice assistants to perform ordinary activities of daily life.

4.5. Use of Voice Assistants in Research

One notable finding is the disparity in the distribution of academic literature across different journals. Surprisingly, only a few articles were published in health and rehabilitation journals, with the majority of cited journals focusing on technology and engineering. Many of the technology- and engineering-related articles focused more heavily on problem-solving and innovative approaches to existing technology limitations rather than on the current use of these devices. This suggests a need for more research from a rehabilitation perspective to better understand how individuals are actively using these devices.

While no formal quality assessment of the available research was done, the overall type of research (qualitative, quantitative, or mixed methods) and the publication venue (journal vs. conference) were recorded. Of the academic articles, 29.8% were strictly qualitative in nature, suggesting a comprehensive exploration of thematic data to capture the nuance and complexity of voice assistant utilization within this population; this is complemented by a sizable amount of quantitative data. Notably, 39.6% of articles were published only at the conference level, many of which have yet to lead to peer-reviewed journal publications. This, however, could be due to several factors, including this review’s focus on recent literature, which may outpace the publication process.

4.6. Limitations

Limitations of the study include the rapidly evolving nature of voice assistance technology and the dynamic needs of individuals with disabilities, which might have changed since the data collection period. Additionally, the use of gray data from social media may not be fully representative of the entire disability community, as access and usage of such platforms may vary among individuals.

5. Conclusion

This rapid review focused on how people with disabilities are utilizing voice assistance technology for independence, aiming to fill a critical research gap in this rapidly evolving field. The findings provide valuable insights into how disabled people utilize these devices to support their independence and community participation. The results demonstrated that voice assistants have a wide range of potential uses for individuals with disabilities. These devices play a significant role in facilitating independent living by enabling environmental control and reminders, supporting leisure activities through media management and information retrieval, and alleviating social isolation by serving as a means of communication. By providing voice-controlled interfaces and automating various tasks, these devices empower people with disabilities to perform activities independently.

However, the review also identified several barriers that may hinder the full utilization of voice assistants by individuals with disabilities. These include limited functionality, speech interpretation difficulties, lack of nonverbal control, data privacy concerns, cognitive load during setup and use, and removal of assistive features. Some of these barriers, for example, cognitive load and privacy concerns, could be alleviated through provider and user training. Training resources that target common pain points by curating intuitive cognitive-load reduction strategies, including tutorials for custom command creation, and providing straightforward routine maintenance guides tailored to specific audiences (user vs. caregiver vs. provider) could lead to more successful implementation and fewer instances of abandonment.

Moreover, the research highlighted some disparities between the focus of academic literature and community discussions. While both sources mentioned similar usage patterns and benefits of voice assistants, there were differences in the emphasis on specific disability subtypes. Notably, there was a lack of research on how individuals with ADHD and dyslexia utilize these devices, despite being highly discussed topics within the community.

In conclusion, voice assistants have the potential to be powerful tools for enhancing the lives of individuals with disabilities and enabling independence. However, more research and innovation are necessary to fully harness their benefits and overcome the barriers that may limit their use, especially for community engagement. By adopting an inclusive approach and actively involving individuals with disabilities in the development and design process, we can ensure that these technologies truly serve and empower the disability community.

Data Availability

The data that support the findings of this study are available from the corresponding author, MG, upon reasonable request.

Conflicts of Interest

The authors have no conflicts of interest to declare.

Acknowledgments

This work was supported by the National Institute on Disability, Independent Living, and Rehabilitation Research (NIDILRR) (HHS-2021-ACL-NIDILRR-REGE-0029).