Autumn AI is a mental health tech startup that uses AI to measure mental well-being in real time and prevent workplace burnout before it becomes an issue. By passively analyzing workplace messages, the product compares them against a psychological framework to identify behavioral traits contributing to stress, agency, social connection, and more.
All content in this case study is either publicly available information or reproduced in reduced form, and does not contain any intellectual property not currently available to the public or any content that would violate any signed agreements. For more information, please contact the designer directly.
Autumn AI's goals sound like science fiction: using AI to analyze how people feel. The challenge was great, but rewarding, over the 7 months I worked with the company. Balancing product design and product management, I contributed directly to fleshing out a rich, ready-for-market responsive web app that gives teams insights into their mental health without any "Big Brother"-esque window into their thoughts. So what did I contribute?
Introducing process and rules to an actively growing startup
When I started at Autumn AI, there was a lot in flight and little process in place. Feature development happened ad hoc, R&D overlapped with live production, and a bright and exciting value proposition lacked the foundation to support it. Here's what I did to set the company on a strong course moving forward.
Injecting flexible Agile into a dynamic team
Agile methodology had always been the organization's goal, but I knew that with a fast-moving team, a rigid process would only hold us back. I observed how existing feature development was actually happening, extracted the biggest pain points (e.g., missing requirements, no QA, the need for responsive designs), and invested time and effort there while reinforcing the habits that already worked well. Iterating on the workflow with the team, I implemented an adapted Agile methodology that introduced standards like effort estimates on tickets, dedicated deployment windows, and templates for feature documentation, while keeping the format flexible enough for our product, development, and R&D teams to use simultaneously.
Standardizing a positive emotional experience within the UI
One big thing that needed work was the user interface itself. Paying close attention to user interviews and doing a deep dive into color theory in clinical and therapeutic settings, I took on redesigning our user interface with positive psychology in mind: warm tones, WCAG-accessible components, and a focus on the immediate insights a user is looking for. By implementing tooltips and highlights for psychological terms, I bridged the gap between terminology and understanding. Another example was a responsive layout that prioritized summaries by device: the smaller the screen, the briefer the insights, on the assumption that mobile users were only doing a quick check-in.
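As an illustration of that device-based summary logic, here is a minimal React sketch; the breakpoints, component, and insight fields are hypothetical stand-ins, not the production code.

```tsx
import { useEffect, useState } from "react";

// Hypothetical breakpoints; the real values lived in our design tokens.
type Device = "mobile" | "tablet" | "desktop";

function useDevice(): Device {
  const classify = (w: number): Device =>
    w < 600 ? "mobile" : w < 1024 ? "tablet" : "desktop";
  const [device, setDevice] = useState<Device>(() => classify(window.innerWidth));

  useEffect(() => {
    const onResize = () => setDevice(classify(window.innerWidth));
    window.addEventListener("resize", onResize);
    return () => window.removeEventListener("resize", onResize);
  }, []);

  return device;
}

// The smaller the device, the briefer the insight: a quick check-in on
// mobile, the full breakdown on desktop.
type Insight = { headline: string; trend: string; detail: string };

function InsightSummary({ insight }: { insight: Insight }) {
  const device = useDevice();
  if (device === "mobile") return <p>{insight.headline}</p>;
  if (device === "tablet")
    return (
      <p>
        {insight.headline} ({insight.trend})
      </p>
    );
  return (
    <div>
      <h3>{insight.headline}</h3>
      <p>{insight.trend}</p>
      <p>{insight.detail}</p>
    </div>
  );
}
```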
Development and Design as one
Something different in this role was the desire to make a tangible impact on the development side as well. With some help, I picked up React and some Node.js and contributed directly to the GitHub repository, fixing 7 different front-end bugs and contributing to an optimized front-end codebase built on React components and shared stylesheets. This allowed me not only to understand how the application worked under the hood, but also to prepare future features and designs with enough clarity that they flowed smoothly through the pipeline.
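To give a flavor of that component-and-stylesheet approach, here is a sketch of the kind of shared component extracted to replace duplicated markup; the names, props, and stylesheet are illustrative, not taken from the actual repository.

```tsx
import React from "react";
import "./MetricCard.css"; // one shared stylesheet instead of per-view styles

// Hypothetical example of a reusable card for dashboard metrics like
// "Agency" or "Social Connection".
type MetricCardProps = {
  label: string; // metric name shown to the user
  value: number; // normalized 0-100 score
  onSelect?: () => void; // optional drill-down into the full insight
};

export function MetricCard({ label, value, onSelect }: MetricCardProps) {
  return (
    <button className="metric-card" onClick={onSelect}>
      <span className="metric-card__label">{label}</span>
      <span className="metric-card__value">{value}</span>
    </button>
  );
}
```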
Some difficult experiential questions I got to work on...
How do we build trust in a process users can't fully see?
Something interesting about the Autumn AI product was that much of our data was passively collected and analyzed from workplace messages. That made our work look a bit like "magic", a recurring problem when trying to sell the product to organizations. The biggest difference came from an onboarding and language review: refining the information presented so that users could not only understand, but also articulate, how their everyday messages fit into the broader picture of their mental health.
How do we create a safe space where data feels trusted and real?
Understandably, a lot of what we were proposing was experimental. Crafting language and an interface that helped users understand the purpose of our app was simple enough; the real challenge was making the individuals using our tool feel safe doing so.
By scrubbing all personally identifying information between collection and the dashboard, we removed any possibility of someone's mental health affecting their job. Managers never saw names, only aggregates. Users' own names were visible only to themselves, for comfort. And the larger a data set grew, the more abstractly we displayed it, both to present an unbiased picture and to protect individual users.
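A minimal sketch of what that aggregate-only rule can look like, assuming a k-anonymity-style minimum group size; the threshold, types, and function names here are hypothetical, not Autumn AI's actual pipeline.

```ts
// Hypothetical aggregate-only manager view: author identities are scrubbed,
// and nothing is shown until the group is large enough to hide any individual.
type ScoredMessage = { authorId: string; stressScore: number };

const MIN_GROUP_SIZE = 5; // assumed k-anonymity-style floor, not the real value

function teamAggregate(
  messages: ScoredMessage[]
): { avgStress: number; sampleSize: number } | null {
  const uniqueAuthors = new Set(messages.map((m) => m.authorId));
  if (uniqueAuthors.size < MIN_GROUP_SIZE) return null; // too identifiable to show

  const avg =
    messages.reduce((sum, m) => sum + m.stressScore, 0) / messages.length;
  // Managers receive only this aggregate, never the authorIds behind it.
  return { avgStress: Math.round(avg), sampleSize: uniqueAuthors.size };
}
```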