Through the READI (Rapid Employment & Development Initiative) program, Heartland Alliance connects individuals who are most at risk of gun violence with rapid employment in paid transitional jobs, cognitive behavioral therapy, and support services in order to decrease violence and create viable opportunities for a better future.
My team was engaged by the READI Chicago program to design and develop a better way to collect data from coaches and participants, and share relevant information with the program leadership and Heartland Alliance staff.
In order to get a better understanding of the complexity of the work at READI, we conducted interviews and shadowed staff onsite as they carried out a variety of tasks. We discussed their current processes for capturing attendance and participation information, as well as the technology environment to better understand the context of use for our tool.
We began to identify areas where we could improve their existing process and enable the staff to do more with their data. I sketched solutions, turning them into prototypes to test with our users.
Crew chiefs and coaches are required to fill in daily feedback for around 10 participants at the end of each of their sessions.
While pen and paper is a fast and simple way to collect feedback, the READI staff is still required to enter the data into a digital system. Transferring this data from physical to digital often results in errors or missing information.
With this in mind, the designs focused on large tappable interactions with minimal information on each screen to keep the user focused on the core task. It was delightfully simple, and even those with lower technical literacy were able to use it. The information is stored in the system automatically, removing a time-consuming step in the process.
It was vital that this tool did not become a replacement for a conversation between a staff member and a participant. Feedback is designed to be completed side-by-side, giving the participant full transparency into their scores.
The design features extra-large, colored buttons to account for this experience. The size of the buttons allows the scores to be seen from far away on a small device, and the color coding provides a clear visual indicator of the score a participant receives.
We heard that transparency was very important when it came to measuring participants’ performance in the program. Coaches and crew chiefs looked for ways to provide visualizations and a sense of proof.
Transparency was also seen as a way to grow the participants’ understanding of how they are measured. This went beyond just knowing when they received a low score. We incorporated the reasoning behind the score to enable participants to make the connection between their behavior and their performance.
We heard that coaches and crew chiefs needed an easy way to share information about participants. Coaches kept participant spreadsheets on their computers, but there wasn't a way for others to review them. Crew chiefs also wanted the ability to see case notes from coaches in real time so that they would know which behaviors to keep an eye on onsite.
By digitizing the participant profile and creating a single source of truth for participant feedback, staff members have transparency into a system that was previously a black box. They can create comments and review feedback from participants’ other sessions, as well as have an aggregate view of the data from their site.
The READI program has four core phases which the participants progress through as they meet their goals for each phase. For the most part, the coaches felt that participants knew what they needed to reach the next phase in the program. Yet, the participants often struggled to understand how they were currently performing. Whether it was asking for their attendance rate or CBT hours, the participants were reliant on others to get the information they were seeking.
The tool can print a daily or weekly summary of a participant's profile at any point in time. Not only does this show the accumulated metrics of their performance, but it also visualizes where their performance lies in relation to the goals of the program.
With a timeline of only eight weeks for research, design, and product development, it was important to define some constraints for the tool.
We wrote stories and conducted a requirements workshop with the Heartland team. We printed each screen to visually connect the stories to the designs and to make the process as easy as possible for a non-technical cohort. This allowed us to collaboratively define what a minimum lovable product looked like, and what should come in a future phase. From there, I refined the designs to account for feedback from both users and the team, and defined a visual style for the development team to apply to the wireframes going forward.