
LuminAI

Think about ChatGPT, but for dance! LuminAI is an NSF-funded co-creative AI tool that reads a contemporary improv dancer's movements and responds with similar, transformed movements of its own. I joined this project in August 2023.

Role: User Researcher and Designer

Co-creation in embodied contexts is central to the human experience but is often lacking in our interactions with computers. With LuminAI, we seek to develop a better understanding of embodied human co-creativity to inform the human-centered design of machines that can co-create with us.

Our contributions through this project can inform the design of AI in embodied co-creative domains. 

What are we trying to do?

An Azure Kinect tracks the dancer's body movements, and the agent, projected on a screen or a mesh-like scrim, transforms those movements to respond with similar movements of its own.
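
To make the loop concrete, here is a minimal sketch of that capture-transform-respond cycle. Every name in it (kinect, agent, display, and their methods) is a hypothetical placeholder standing in for the Kinect body-tracking feed, the movement-transformation model, and the projected agent; this is not the project's actual Unity or ML code.

```python
# Minimal sketch of the LuminAI interaction loop. Every object and method here is a
# hypothetical placeholder, not the project's actual Unity/ML implementation.

def run_session(kinect, agent, display):
    """Continuously read the dancer's skeleton, transform it, and project the agent's response."""
    while display.is_open():
        skeleton = kinect.read_skeleton()   # joint positions from Azure Kinect body tracking
        if skeleton is None:                # no dancer in frame this tick
            continue
        response = agent.transform_movement(skeleton)  # a similar, transformed movement of its own
        display.render_agent(response)      # drawn on the screen or mesh-like scrim
```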

This interactive installation piece borrows from contemporary movement theory so that our two performers, human and non-human, can co-exist and collaborate to build choreographies together.

The same concept is being used in two ways. The first, where I am involved, is a pedagogical study to open up avenues for AI and technology in dance education. Improv dancers take inspiration from their environment, and the project aims to find out whether technology can add to their creativity and self-expression. The second is a museum installation to increase AI literacy among children.

What am I working on?

In this first approach, I joined when the core technology already existed; my work is to make it consumable for dancers in classroom and performance settings. I work in a cross-functional team with Unity developers, ML engineers, educators, and dancers.

 

I started by making a minimal interface for the tool and collaborated with dance students at Kennesaw State University to test it. I then analyzed that data and re-designed the interface to introduce new features. I am currently conducting a longitudinal study in an improv course, building towards the world's first improv dance performance with an AI tool. In this process, I have primarily used Figma, ATLAS.ti, and Otter.ai.

Step I

There are three components to the interface. Starting on the right, we have the user agent, which mimics the dancer's movements; this helps the dancer build awareness and acts like a mirror in a dance studio. In the middle is our AI agent, which transforms the user's movements to produce movements of its own, learned from a database of contemporary dance moves. Lastly, the panel on the left tells us whether the user is detected, what state the AI agent is in, how many movements it has tracked and is going to respond to, what dance mode is active, and what transformation is happening.
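
As a rough illustration of the information the left panel surfaces, here is a small sketch of the state it could be driven by. The field names and values are my own placeholders, not the actual Unity implementation.

```python
from dataclasses import dataclass

# Hypothetical sketch of the state behind the left-hand panel; field names and example
# values are placeholders, not the project's actual Unity implementation.
@dataclass
class PanelState:
    user_detected: bool     # is a dancer currently visible to the Kinect?
    agent_state: str        # e.g. "observing" or "responding"
    movements_tracked: int  # how many movements the agent has captured to respond to
    dance_mode: str         # e.g. "similar" or "different"
    transformation: str     # which transformation is currently applied

def panel_text(state: PanelState) -> str:
    """Render the panel fields as readable lines, mirroring what the dancer sees."""
    return "\n".join([
        f"Dancer detected: {'yes' if state.user_detected else 'no'}",
        f"Agent state: {state.agent_state}",
        f"Movements tracked: {state.movements_tracked}",
        f"Mode: {state.dance_mode}",
        f"Transformation: {state.transformation}",
    ])
```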

Basis of movements

At this point, I would like to digress a little and talk about the transformation of movements. Unlike ballet, which has a fixed set of rules, contemporary dance is more open-ended. To make a machine understand it, our team built the Body Action Taxonomy, or BAT. It learns from the location of the spine, space, limb expression (which limbs are being used), and floor support (how the body works with the floor).

 

When the user moves, LuminAI assigns labels to the movement according to the categories it has learned, and finds something similar or different, based on the mode selected.
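
As a rough sketch of that label-then-retrieve step: assume each movement gets one label per BAT category, and that "similar" versus "different" simply counts matching categories. The category names, labels, and matching rule below are illustrative assumptions, not the project's actual model.

```python
# Illustrative sketch of BAT-style labelling and retrieval. Category names, labels, and
# the matching rule are assumptions for illustration, not the project's actual model.

BAT_CATEGORIES = ["spine_location", "space", "limb_expression", "floor_support"]

def similarity(labels_a: dict, labels_b: dict) -> int:
    """Count how many BAT categories two movements share."""
    return sum(labels_a[c] == labels_b[c] for c in BAT_CATEGORIES)

def choose_response(user_labels: dict, database: list[dict], mode: str) -> dict:
    """Pick a movement from the contemporary-dance database.

    mode="similar" favours the movement sharing the most BAT labels with the user;
    mode="different" favours the one sharing the fewest.
    """
    scored = sorted(database, key=lambda move: similarity(user_labels, move["labels"]))
    return scored[-1] if mode == "similar" else scored[0]

# Example: a user movement labelled by BAT, matched against a tiny two-entry database.
user = {"spine_location": "high", "space": "direct",
        "limb_expression": "arms", "floor_support": "standing"}
db = [
    {"name": "reach and spiral", "labels": {"spine_location": "high", "space": "direct",
                                            "limb_expression": "arms", "floor_support": "standing"}},
    {"name": "floor roll",       "labels": {"spine_location": "low", "space": "indirect",
                                            "limb_expression": "full body", "floor_support": "grounded"}},
]
print(choose_response(user, db, mode="similar")["name"])    # -> "reach and spiral"
print(choose_response(user, db, mode="different")["name"])  # -> "floor roll"
```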

Step II

In the usability studies that followed, I interviewed 4 dance students and 1 instructor from KSU after 15 minutes of interaction with LuminAI. We discussed what happens in an improv class, what interactions with peers and teachers look like, their general impressions of LuminAI, the challenges they faced, and where, if anywhere, they saw this tool being used.

Step III

Using ATLAS.ti, I did thematic open coding of the qualitative interview data. The themes ranged from general feedback to potential applications, interface expectations, and using the tool as a muse to generate new ideas. The codes were further categorized into learning improv, interface design, applications of LuminAI, and building a relationship with the tool, owing to its collaborative nature.

Design Ideation

I referred to quotes from the users to suggest new features. One participant wanted to see their movements embodied in the negative space as well; for example, some dancers have a preferred side they tend to move, and seeing that visualized would encourage them to explore other limb expressions too. For that, I recommended energy visualization through flares or trailing motion in the background. Some participants wanted insights into the movements they had done, and for that, the solution pointed towards profile creation and personalized databases. Avatar design could inspire movement qualities, fast or slow, and offering options could help with self-expression and increase the fluidity of improv ideas. Along with all of this, we also want our dancers to understand what they're working with, which pushed me towards designing for transparency.
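
To illustrate what trailing motion in the background could mean technically, here is a tiny sketch of the underlying idea: keep a short history of recent joint positions and draw older frames with fading opacity. The buffer length and fade rule are my own assumptions for illustration, not the installation's rendering code.

```python
from collections import deque

# Illustrative sketch of the trailing-motion "energy" idea: keep a short history of the
# dancer's joint positions and fade older frames out. TRAIL_LENGTH and the fade rule are
# assumptions, not the project's actual rendering code.
TRAIL_LENGTH = 20
trail = deque(maxlen=TRAIL_LENGTH)  # each entry is one frame of (x, y) joint positions

def add_frame(joint_positions):
    """Store the newest frame; the deque drops the oldest once TRAIL_LENGTH is reached."""
    trail.append(joint_positions)

def frames_with_opacity():
    """Pair each stored frame with an opacity that fades towards the oldest frame."""
    n = len(trail)
    return [(frame, (i + 1) / n) for i, frame in enumerate(trail)]
```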

Feature Planning & Prioritisation

Summarising the design changes in descending order of priority: cosmetic changes came first, followed by visualization of the movements captured, and then profile creation with custom feedback.

Step IV

I started doodling body maps of what the cosmetic changes would look like. I introduced a ground plane so that the agent doesn't look like it's floating in the air, designed for energy visualization, explored mapping transformations onto the body, and proposed new skin designs, such as a mesh-based look, along with toggle buttons so dancers can switch between these options easily.

 

The development team got to work implementing these changes, while I planned -

Step V

- the ongoing longitudinal study, observing how the interaction between dancers and this improvisational AI agent evolves in terms of their collaboration, creativity, expressiveness, and idea generation, to gauge the role of this technology in dance education. On the right is what the interface looks like currently.

About the study

The methods I'm adopting are participatory observation and a diary study with the students, instructor, and musician in an improv class at KSU. This spans 15 sessions, including class, rehearsal time, and the performance, during which I record their emotional responses, successes, challenges, social dynamics, self-expression, habits, and motivation.

To break it down: the participants fill out a pre-questionnaire about their perceptions of technology, journal after every interaction, and fill out a post-study questionnaire, which we'll use to compare any change in perceptions. The study ends with a focus group on stage in front of a live audience, where they talk about their experience.

In early May this year, we'll be hosting the world's first live improv dance performance with an AI tool.

Learnings

Encoding natural body movements into a machine-readable framework through the Body Action Taxonomy is a significant takeaway. This has been a great offline learning experience for me, a former pandemic-era researcher. I am getting to conduct a diary study for the first time, in a non-mainstream domain, while designing for embodied interactions, which are an exciting challenge to make usable and useful.
