Engineering Gender-Inclusivity into Software: Ten Teams’ Tales from the Trenches

CCS concepts: Human-centered computing → Human-computer interaction (HCI) → HCI design and evaluation methods; Software and its engineering

Authors: Claudia Hilderbrand, Christopher Perdriau, Lara Letaw, Jillian Emard, Zoe Steine-Hanson, Margaret Burnett, Anita Sarma

Year: 2020

Published in: IEEE/ACM 42nd International Conference on Software Engineering (ICSE).

DOI: 10.1145/3377811.3380371

Abstract: Although the need for gender-inclusivity in software is gaining attention among SE researchers and SE practitioners, and at least one method (GenderMag) has been published to help, little has been reported on how to make such methods work in real-world settings. Real-world teams are ever-mindful of the practicalities of adding new methods on top of their existing processes. For example, how can they keep the time costs viable? How can they maximize impacts of using it? What about controversies that can arise in talking about gender? To find out how software teams "in the trenches" handle these and similar questions, we collected the GenderMag-based processes of 10 real-world software teams---more than 50 people---for periods ranging from 5 months to 3.5 years. We present these teams' insights and experiences in the form of 9 practices, 2 potential pitfalls, and 2 open issues, so as to provide their insights to other real-world software teams trying to engineer gender-inclusivity into their software products.

Bibtex (copy):
@inproceedings{hilderbrand2020engineering,
  title={Engineering gender-inclusivity into software: ten teams' tales from the trenches},
  author={Hilderbrand, Claudia and Perdriau, Christopher and Letaw, Lara and Emard, Jillian and Steine-Hanson, Zoe and Burnett, Margaret and Sarma, Anita},
  booktitle={Proceedings of the ACM/IEEE 42nd International Conference on Software Engineering},
  year={2020}
}

Annotation

By Louise Leibbrandt, Nienke Nijkamp, Gaspar Rocha, George Vegelien. 🪧Slides.

The following summary was written by the students.

Software has diversity problems: they limit some populations' ability to be productive with, or even to use, software, and many of these issues relate to gender inclusivity. Although methods exist for engineering gender inclusivity into software, there are few studies of their real-world application. This paper evaluates GenderMag, a software inspection method for finding gender-inclusivity issues in software. GenderMag evaluates user stories using three personas: Abi, Pat, and Tim. Each persona is walked through the steps of a user story using a Cognitive Walkthrough, which asks subquestions designed to surface the persona's possible issues with each step. The researchers investigate the integration of GenderMag into real-world teams' practices through an Action Research study, following 10 software teams: 4 university-based and 6 from companies.
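The walkthrough procedure described above can be sketched programmatically. This is a minimal illustration, not the paper's tooling: the persona facet names and the two subquestions are paraphrased assumptions for the sketch, and `walkthrough` is a hypothetical helper.

```python
from dataclasses import dataclass

@dataclass
class Persona:
    """A GenderMag-style persona; facet values here are illustrative."""
    name: str
    self_efficacy: str       # e.g., "low" or "high"
    info_processing: str     # e.g., "comprehensive" or "selective"

# Paraphrased Cognitive-Walkthrough-style subquestions asked per step.
SUBQUESTIONS = [
    "Will {name} know what to do at this step?",
    "If {name} does the right thing, will they know they made progress?",
]

def walkthrough(persona, steps):
    """Yield the questions a team would discuss for each user-story step."""
    for step in steps:
        for q in SUBQUESTIONS:
            yield f"Step '{step}': " + q.format(name=persona.name)

abi = Persona("Abi", self_efficacy="low", info_processing="comprehensive")
steps = ["open the sharing dialog", "choose a collaborator"]
for question in walkthrough(abi, steps):
    print(question)
```

In an actual evaluation session, the team answers each generated question from the persona's perspective and records any step where the answer is "no" as a candidate inclusivity bug.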

Through their investigation, the researchers identify several key takeaways and pitfalls for integrating GenderMag into existing software projects. The Abi persona was the most helpful in identifying possible pitfalls in the software under evaluation: it helped developers think about underlying problems they might otherwise have missed. Beyond evaluation, Abi also turned out to be a useful communication tool. Teams found it uncomfortable to talk about gender-based design issues; by reasoning about what Abi could or could not do, developers could discuss the shortcomings in their software without their own egos getting in the way. Another interesting takeaway was that Abi represented the most inclusive persona: by focusing on Abi first, teams were able to create a more accessible product for a diverse user base. Including many team members in learning sessions was useful, since more people became familiar with the method. In evaluation sessions, larger teams brought up more perspectives, which increased the completeness of the evaluation, but this also slowed the process down enough to be impractical. These findings indicate that learning and doing have different goals, and both gain value from adapting team size accordingly. The presence of decision-makers in learning sessions also showed good results: team members without decision-making power were sometimes unable to convey to the decision-makers the need to fix the identified problems, reducing productivity and efficacy.

Some teams were reluctant to frame the biases in their software as gender biases, which is in line with previous reports of teams wanting to “talk about gender without talking about gender”. The paper suggests that a further abstraction beyond gender, to computer self-efficacy, may resolve the issues some software teams had with the gendering of the personas.

– 📖 –