CripTech Metaverse Lab


Coined by Neal Stephenson in his 1992 novel Snow Crash, the metaverse is a contested term that is still being defined. Corporate jockeying over ownership of the term highlights the political stakes of virtual tools generating new worlds. The CripTech Metaverse Lab conceives of the metaverse broadly as the ecosystem of extended reality technologies that reflect, project and interact with the world we know. Stephenson captured the sensory aspirations of the metaverse when he described avatars as “audiovisual bodies that people use to communicate with each other.” Yet Virtual Reality (VR), Augmented Reality (AR) and spatial audio present significant access frictions and communication barriers for disabled users and creators.

The CripTech Metaverse Lab gathered a national cohort of disabled creatives in San Francisco to experience immersive media works. This lab sought to generate collective, participatory access through convenings that invited ten artists working in different modalities—from sound design and dance to virtual reality—to encounter and collectively imagine or “crip” new creative pathways for experiencing metaverse artworks. The lab draws this methodology from what Aimi Hamraie and Kelly Fritsch call crip technoscience: “politicized practices of non-compliant knowing-making: world-building and world-dismantling practice by and with disabled people and communities that respond to intersectional systems of power, privilege, and oppression by working within and around them.”

Nat, a white non-binary person with long dark brown hair and step bangs. They are wearing a leather jacket with a black and white checkered shirt underneath and a matching checkered headband. They are adjusting the VR headset they are wearing.
Artist Nat Decker wears an HTC VIVE XR Elite headset. 

Co-produced by Leonardo/ISAST and Gray Area, this experiential research lab unfolded from January 27 to February 24, 2023 over a series of five remote and in-person convenings. The in-person convening, which took place over three days across two Bay Area sites, Meyer Sound in Berkeley and Gray Area in San Francisco, put cohort participants in contact with several immersive technologies. Meyer Sound showcased artist Andy Slater’s work-in-progress “Re: Your prescription,” a short sound piece that utilized spatial audio across 43 individual suspended speakers. The following day offered a playspace where participants could navigate VR worlds, demo new HTC VIVE XR Elite headsets, explore WebVR applications from Viverse, and experience media environments from New Art City. The convening on the third day invited participants to use their smartphones to encounter the AR work “Kai 海 Hai,” developed by artists Tiare Ribeaux and Qianqian Ye. Following these demos, the in-person convening concluded with a speculative design and worldbuilding exercise that invited the cohort to write micro-fictions about accessible metaverse worlds and communities that prioritized disabled joy, agency and belonging. The final gathering took place remotely, allowing participants to more fully digest their imagined metaverse futures and to reflect on the Lab’s outcomes.

These convenings were facilitated by Frank Mondelli and Jennifer Justice, access doula Claudia Alick, as well as our teams at Leonardo and Gray Area. CART captioning and ASL interpreters were provided along with quiet spaces and rest areas; we provided a COVID-safe space with masking and daily testing as the ongoing pandemic continues to disproportionately impact the disabled community.

This archive is designed to document and highlight some of the core themes, frictions, and criphacks for collective access that emerged out of these encounters. We frame inaccessibility not as failure but as friction: moments to plan for and to learn from that can be generative if approached from a spirit of learning and a commitment to community. Born out of access frictions, criphacks refer to anticipatory or responsive creative access solutions that disabled folks enact individually or together.

During this research, we identified five core principles of access friction in immersive media:

  1. hardware design user misfit
  2. software design user misfit
  3. navigation and wayfinding user frictions
  4. user interaction frictions
  5. cultural frictions

We also highlight a suite of criphacks that embody values of interdependence, cross-disability adaptation, co-creation, fun, and access intimacy. Finally, we discuss the lab's research, publication, recommendations and commission outcomes.

Principles of Access Friction

Hardware design user misfit

How hardware is designed and placed in a space can reveal hidden assumptions about the world by hardware designers. If designers assume that all users have color vision and place two buttons of the same shape but different colors next to each other, a user with color blindness may not be able to distinguish between them. This is an example of what disability studies scholar Rosemarie Garland-Thomson might call a “misfit.” She writes that “the built and arranged space through which we navigate our lives tends to offer fits to majority bodies” while they “create misfits with minority forms of embodiment.”

Antoine Hunter, a Black and Indigenous person with dark chocolate skin from his mother, is wearing a black t-shirt, long orange pants and a VR headset. He points one controller upwards in one hand.
Mr. Antoine Hunter, Deaf advocate, dancer and choreographer, plays Maestro.

Throughout the weekend, we saw many kinds of hardware misfits. Some VR headsets could be worn comfortably with implants and hearing devices, while others caused vertigo. Controllers were too heavy for extended use and could only accommodate five-fingered, dexterous hands. Spatial audio speakers were hoisted high up from ceilings where Deaf hands couldn’t reach to feel the vibrations.

Even so, we found moments of crip solidarity and a form of resisting hardware misfits that fosters collective community. One cohort member, dancer Antoine Hunter, began an improvisational dance performance that helped other cohort members find each other in a virtual space. His “embodied navigational guidance” was both a form of creative expression and community-minded inclusivity.

How can disabled artists and creatives, like in the case of Hunter’s choreographic movements, use technological misfitting to create artworks of what the research team calls aesthetic in-access? In exploring this question together, we noticed discrete ways certain users respond to digital creative tools that exclude them. Aesthetic in-access signifies the generative, noncompliant and in-process nature of these artistic negotiations. We found inspiring, joyful, and spontaneous forms of expression everywhere throughout the weekend.

Software design user misfit

A Caucasian fat woman with long auburn hair wearing a black and white striped shirt. She is wearing a VR headset and a mask and holding two controllers pointed upwards.
Artist Maia Scott uses a VIVE headset and controllers.

Using HTC VIVE headsets, cohort members could explore several worlds and applications. One of them was the VR game Maestro, which has the player take the role, and point of view, of an orchestra conductor. Upon boot-up, the game presents a tutorial section meant to teach players the basic mechanics of how to play. The concert hall and orchestra stage were mostly empty; there was no epic concert just yet. Here we encountered numerous kinds of software misfits: no subtitles for greater audio accessibility; no recognition of gestures from a seated position, making it harder for wheelchair users to play; and a core design that made it difficult for blind and vision impaired cohort members to participate. The game assumes players can see a scrolling timeline with only visual (not audio) cues as to when, where, and how to gesture.

As a result, blind and vision impaired cohort members could not progress past Maestro’s tutorial section, and there was no option to skip it. But responding to in-access can yield powerful moments of crip creative expression. One cohort member, performance artist and accessible art educator Maia Scott, found herself in such a loop of in-access. Confronted with this virtual barrier, Scott began experimenting with the other limits of the software. As the conductor’s wand and visual guides were too hard to see, she explored how a broader body movement would register in the headset’s visual field. She started by melodramatically performing the part of a conductor, moving her avatar’s hand to bang through the conductor’s lectern. This made her “head” swing around from one direction to the other, and soon she was creating her own performance inside the game, while outside the expected bounds of user experience. She forwent the prescribed music, and soon, other cohort members gathered near her and began having fun, software inaccessibility and all. Maia later commented, “When I get lost, I play.”

The important work of making all software more accessible must continue, but in the meantime, barriers can also provide springboards for new kinds of expression. If, as Jennifer Justice says, “access should be fun,” then so can in-access and the process itself of striving for access.

Navigation and wayfinding user frictions

Navigation and wayfinding are essential components of interactive gaming and VR technology. According to the Society for Experiential Graphic Design, “Wayfinding refers to information systems that guide people through a physical environment and enhance their understanding and experience of the space.” Wayfinding methods range in technical sophistication from smartphone apps that provide turn-by-turn audio feedback to help users reach their destination, to long canes that alert users to potential obstacles by providing sensory feedback about the nature of the terrain and space. Closely linked to wayfinding, navigation is the process of planning and executing a route through space. Ease of navigation is a hallmark of accessible design.

VR technology, with built-in and real-time feedback software and accessories, has the potential to give players considerable creative license as it pertains to navigating the metaverse. Such potentialities are only beginning to be explored. Considered through the frame of aesthetic in-access, how might disabled artists hack or re-invent wayfinding and navigational tools ostensibly designed for able-bodied users? 

A white woman with short brown hair and a green leather jacket and an Asian-white woman in a silver jacket stand on each side of a white man at a desktop terminal.
Sound artist Andy Slater tries to navigate WebVR while Leonardo Director of Programs Vanessa Chang and VIVE Arts Head of Global Partnerships Leigh Tanner offer live description.

Blind sound artist Andy Slater was unable to access VIVERSE’s WebVR desktop platform at all, as no screen reader-compatible software was available. VIVE Arts Head of Global Partnerships Leigh Tanner and Leonardo/ISAST Director of Programs Vanessa Chang provided live description of the world to help Andy orient himself. During a later brainstorming session, lab members envisioned responsive spatial audio and haptic fields as nonvisual wayfinding infrastructure in the metaverse.

To return to a previous example, Antoine Hunter’s improvisational dance helped fellow cohort member Nat Decker visually locate his body from a distance in their shared space. By exaggerating his movements and utilizing the space through the dynamism of his body, he counterbalanced Decker’s avatar’s navigational limitations and stasis, as the program could not recognize their seated position. 

The lab cohort partnered with AR artists Qianqian Ye and Tiare Ribeaux to map spatialized environments that straddle both the physical and virtual worlds. The hybrid nature of AR's virtual-physical social profile makes it well-suited for real world applications that support a crip ethos of interdependence and co-creation. 

A white person in a blush pink top with long brown hair and round-rimmed glasses holds up their phone while an Asian woman in a black turtleneck and short black hair points at it.
Artist Qianqian Ye and facilitator Laura Cechanowicz view Ye’s AR work, Kai 海 Hai.

By leveraging social coding platforms like GitHub, AR-tailored image and spatial descriptions and wayfinding cues could be organized into discrete genres or categories such as site-specific, personal narratives/reflections, spoken word, game-focused, sensory-rich, educational, crowd-sourced, and social interactive.

User interaction frictions

A half-Latino, half-white person in a yellow shirt kneels against a large black speaker, hands resting on its surface.
Lab researcher Frank Mondelli rests his hands on large subwoofers at Meyer Sound to feel the sonic vibrations.

The objective of accessible design is to remove physical or communication barriers affecting specific disabled users. Universal Design builds upon accessible design by emphasizing solutions that enhance the user experience for a broad sector of the public. The best UI design employs both strategies to maximize overall ease of use. However, an emerging line of design thinking, gathered under the banner of “post-Universal Design,” suggests that a more effective and equitable approach lies in user-specific and DIY designs tailored to individual needs. The following examples demonstrate how these design strategies overlap, creating frictions and synergistic opportunities for expanding access capabilities.

Immersive soundscapes present interesting accessibility challenges for some users, as our encounters at Meyer Sound (Berkeley, CA) illustrate. At the same time, they offer fertile ground for imagining and rehearsing audio and haptic access strategies in metaverse design and worldbuilding. 

The CripTech Metaverse Lab reimagined Andy Slater’s dense layering of text-to-speech audio sound art as spatialized audio across 43 speakers positioned at varying horizontal and vertical axes. Ideally, speakers would be positioned so that Deaf/hearing impaired users can touch and interact with them. Our artist cohort also proposed creating visually-dynamic captions or animated sonic avatars to accompany the hive-like text-to-speech chorus. Another criphack solution might enlist performers to inhabit and move around the physical environment to personify spatialized audio components, thus creating access for Deaf/hearing impaired audiences. Such artistic considerations align with both universal and accessible design principles, as many non-disabled audiences enjoy the theatrical visual elements that accompany live music performances.

On Day 2 of the lab, the cohort tested HTC’s VIVE XR Elite headsets. The new release includes a visual “pass-through” wherein wearers can view a hybrid VR space combined with their physical environment. Users can also toggle between virtual and real worlds instead of having to reposition the headset device.

A white woman with shoulder-length blonde hair wearing a VR headset and signing with her interpreter, a woman with long dark hair.
Melissa Malzkuhn uses the pass-through feature on the VIVE headset to sign with her interpreter Santana Chavez.

Melissa Malzkuhn shared this note on the universal and accessible applications of the design feature: “I think the pass-through feature is extremely useful for everyone in any situation. The ability to toggle back and forth between the virtual and real world is a great idea than having to peek underneath the headset and then fix it back in place. From an accessibility standpoint for sign language users, you can say it was useful that we were able to toggle off VR and see our interpreters and then get back in VR. So it does bring forth more advantages.” 

At the same time, the pass-through feature has its limitations: it is only helpful if the user is looking in the appropriate direction.

Video example of pass-through feature limitations

The following media is presented as a YouTube player embedded in Able Player.

While the pass-through might not be useful for some blind and low vision users, we believe this feature will prove a desirable addition for most. 

Cultural frictions

Cultural frictions abound in the world of metaverse technologies. In one VIVERSE virtual world where people are meant to gather for general-purpose meetings or impromptu hang-outs, our cohort members of color reported a feeling of immediate alienation when they arrived and found the avatar set to a white man, with no option to change it. This lack of representation can send a message that they are afterthoughts, considered only once “default” design has been completed. Similarly, there were no clear options for wheelchair users, cane users, or other bodily orientations. Some virtual worlds do have these options, but they typically come after the world has already been built. This encounter reflects how entrenched a largely white, cishet, able-bodied point of view is in the repertoire of designers of these worlds. These devices are also cost-prohibitive once they enter the market, ranging from $300 to over $1,000 before necessary accessories, which creates additional barriers as many in the disabled community are low-income and/or rely on Supplemental Security Income (SSI).

An Afro-Latina woman with caramel brown skin wearing a multi-colored hoodie and a VR headset. Her left hand is raised in a fist. Behind her is a pale white figure with white rolls for hair, representing a composer.
Artist and activist Jen White-Johnson plays Maestro in VR.

We witnessed another cultural friction when cohort members and their interpreters were unable to communicate in sign language, finding that neither hardware nor software could support rich Deaf linguistic and cultural expression. Throughout the weekend, we thought about what it might mean to design metaverse realities, both real and virtual, from the ground up with more diverse teams. Scholar, artist and founder/creative lead of the Motion Light Lab at Gallaudet University Melissa Malzkuhn, for example, proposed the idea of engineering spatialized language recognition for ASL via virtual haptic and interactive 3D environments in both VR and AR contexts. This would mean designing physical and virtual interfaces ready to engage with the rich hand and facial movements that make up Deaf performance and expression, instead of today’s gloves, which offer only rudimentary finger tracking.

Criphacks and collective access

Disabled people have been adapting the inaccessible tools and environments that surround them for millennia. For example, Deaf people have repurposed inflated balloons to feel the beat at concerts, and were some of the earliest adopters and innovators in video calling before there was FaceTime or video conferencing platforms. In the contemporary parlance of hacking, criphacks may be macro or micro in scale, disability-specific, non-assimilationist, or exemplary of accessible design, but it is the spirit of a shared collective struggle that ultimately defines them as a generous, liberatory praxis.

Responding to Aimi Hamraie and Kelly Fritsch’s assertion that disability is a “driver of technological change” rather than a service provided to passive disabled consumers by tech experts, we share criphacks of crucial value to current and future metaverse creators.

The CripTech Metaverse Lab identified examples of interdependence, cross-disability adaptation, co-creation, fun, and access intimacy as core values of CripTech design. Criphacks are designed to bend or break existing ableist technologies to forge far more expansive concepts of what creative access can become. Embodied performances across the cohort demonstrate the expressive potential of immersive media: at Meyer Sound, Deaf choreographer Antoine Hunter gave movement life to the deep vibrations from massive subwoofers while ASL interpreters visually translated the spatialized sound of a helicopter circling the room into finger flutters. Collective efforts to describe dynamic 3D audiovisual objects in AR emphasized multimodal approaches that creatively gather audio, visual and spatial elements. Cohort members devised access hacks on the fly, supporting each other through improvisational performances of description, wayfinding and care. Creative play and “breaking” of inaccessible virtual objects, such as Maia Scott’s headbanging of a virtual conductor’s lectern, are revelations of joy.

CripTech Metaverse Futures: Worldbuilding and Speculative Design

CripTech artists and designers frequently find themselves thrust into inventive and visionary roles due to ableist attitudes and inaccessible situations. For this reason it is pertinent to explore the relationship between an emergent culture of criphacking and the goals of speculative design. 

Speculative design can be thought of as a liberatory practice that imagines what the future might be in order to develop systemic solutions to intractable design, social, or ecological problems. Similarly, worldbuilding is a creative process by which designers propose comprehensive solutions to future-influencing scenarios. Artists and designers can benefit from crafting holistic microfiction narratives to add texture and substance to new projects or to formalize creative collaborations and shared goals.

New media artist Niki Selken and scholar Laura Cechanowicz led the CripTech Metaverse Lab Speculative Design and Worldbuilding Workshop. At the start of the Lab, Niki introduced the speculative design prompt Rose, Bud, Thorn, encouraging artists to identify and document aesthetic access successes (Roses), moments of aesthetic access potential (Buds), and instances of in-access or friction (Thorns). Drawing on the responses collected throughout the weekend, Laura and CripTech Metaverse Lab co-researcher Jennifer Justice wrote a series of future-focused prompts set 15 to 30 years in the future, to which artists developed ambitious “microfiction” design solutions.

One lab artist responded as follows to the prompt, “In your future experience of the metaverse, how do you define community? Please tell us a story about your experience with the disabled community in the metaverse”:

I can’t believe we just finished another year of magic making with the CripTech Community! It was so cool to collaborate on an immersive environment with Maia [Scott], Antoine [Hunter], and Andy [Slater]! We have soooo many transducers in the floor and seating…and, wow, short throw laser projectors with spatial audio! Talk about a dream come true! Every surface can be touched and engaged with and there are lots of comfy spots to chill… You can even choose to turn off the transducers in the seating if you need a breather from the stimulation. It’s been so inspiring to co-create such a playful immersive experience with this group! 

Cohort narratives repeatedly centered access intimacy, cross-disability customization, cross-movement solidarity that recognizes disability as an intersectional identity, co-creation, a commitment to supporting disabled communities, disability justice, comfort, belonging, agency, and joy. Given the social, interactive nature of the metaverse, it is easy to imagine how CripTech goals could benefit accessible design futures for all.

Commissions

The 2023 Gray Area Festival: Plural Prototypes featured the world premiere of three new artistic prototypes by artists Melissa Malzkuhn, Indira Allegra and Nat Decker, selected from the CripTech Metaverse Lab. Their speculative VR artworks delve into the realm of accessibility in the metaverse, adding depth to our perception and experience of these emerging realities.

Two of the works are being developed on HTC’s VIVERSE in collaboration with VIVE Arts. Malzkuhn’s project reimagines the concept of the Deaf club, a vital part of Deaf communities globally, within the digital realm of the metaverse. A living weaving that sings, Allegra’s TEXERE: A Tapestry is a Forest is an immersive platform for collective grief and memorialization. Decker’s TOUCH, produced in collaboration with virtual art space New Art City, is an interactive poem exploring the boundaries of physical intimacy and infraction from the perspective of a queer disabled person.

Against a bright yellow background, the words “Viverse deaf club,” in a font evocative of ’70s neon, repeat and increase in size, each line changing to a bright color.

DEAF CLUB by Melissa Malzkuhn 

Malzkuhn’s project is an experiment in creating a VR experience that primarily caters to sign language users—Deaf people. As physical Deaf clubs are dwindling, this project serves as a living digital archive for younger generations, a familiar space for older generations to understand the metaverse, and ultimately a place where anyone can visit, learn, gather, share, and connect.

Thin tree trunks form rows in a forest, with foggy greens suggesting the undergrowth and leaves.

TEXERE: A Tapestry is a Forest by Indira Allegra

TEXERE is an art-based mental health app at texere.space that weaves digital memorial tapestries from words and images about people’s losses. “Texere” is a Latin verb meaning “to weave,” and is where the words “text” and “textile” originate. This virtual world offers a contemplative space—like a cathedral of redwoods—for people to reflect on their shared losses together and to attend to their grief hygiene on a daily basis.

Abstract 3D shapes, round pink and white bodies with legs, with undulating silver piping and clusters of silver rings.

TOUCH by Nat Decker

A virtual 3D world as interactive poem, threading narratives about the symbolic and practical boundaries of physical intimacy and infraction as experienced by a queer disabled person. Segments of writing are woven together, touching on themes such as the mobility device as an extension of the body; the different ways touch is felt, permitted, and violated; corporeal sensations of empathy; and more. The non-linearity of the experience promotes values of flexibility, slowness, and agency.