The ethics of good data research

Data science techniques such as machine learning have been enthusiastically embraced by many for their potential to solve some of society’s biggest problems. While much of this enthusiasm is deserved, new data technologies can create new ethical issues, even when their developers and users are well-intentioned. Not everyone will agree on what a good result looks like.

The picture is further complicated by the structural injustices of racism, sexism, ableism and imperialism, all of which could be perpetuated or exacerbated by certain types of research if not properly addressed. When it comes to high-stakes areas such as social welfare, health or education, it is essential that we approach the development and use of these technologies with a critical and cautious eye.

Against this backdrop, we have had to navigate difficult questions about the goals and impact of Nesta's work.

Over the past two years, we have experimented with cutting-edge ideas in innovation ethics in general and data science ethics in particular. We tested tools and techniques ranging from generative critique (an approach that embeds critique into project processes, helping teams interrogate theories of change and explore alternative courses of action) to datasheets for datasets (a tool for documenting the backstory of a dataset, including its potential biases and limitations).
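To make the datasheet idea concrete, here is a minimal sketch of what a datasheet-style record might capture for a dataset. The field names and the example dataset are hypothetical illustrations, not a prescribed schema:

```python
from dataclasses import dataclass, field

@dataclass
class DatasetDatasheet:
    """Hypothetical minimal record documenting a dataset's backstory."""
    name: str
    motivation: str              # why the dataset was created
    collection_process: str      # how the data was gathered
    known_biases: list[str] = field(default_factory=list)
    limitations: list[str] = field(default_factory=list)

    def caveat_count(self) -> int:
        """Count documented caveats that reviewers should weigh."""
        return len(self.known_biases) + len(self.limitations)

# Illustrative (made-up) example entry
sheet = DatasetDatasheet(
    name="regional-health-outcomes",
    motivation="Explore links between deprivation and access to healthcare.",
    collection_process="Aggregated from public administrative records.",
    known_biases=["Under-represents people not registered with a GP."],
    limitations=["No individual-level data; risk of ecological inference."],
)
print(sheet.caveat_count())  # 2
```

Even a lightweight structure like this forces the questions (who collected the data, for what purpose, with what gaps?) to be asked and recorded before the dataset is reused.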

Over the past year, our work has focused on questions such as how we can go beyond defining ethics as a constraint on creativity. We also thought about what constitutes the ideal version of one's professional role (for example, what does it mean to be a good data scientist?). The answers to these questions affect the collective ability of the data science community to act on large, seemingly intractable societal problems.

One of our biggest challenges has been finding a concrete approach to addressing structural injustice in research and innovation projects. This gap between intention and action exists for several reasons. Structural issues can seem large, overwhelming and complicated, making it difficult to know where to start or how one's actions could have an impact. Furthermore, there is a long-standing belief in research and innovation that their processes should address technical issues and avoid value-laden goals such as social justice.

Drawing on theoretical and applied work in moral philosophy, critical data studies, and science and technology studies (STS), we have developed an approach that we believe can help research and innovation teams make structural injustice tractable.

The RI-MI (Role-Ideal Moral Imagination) framework is built around two main components. The first, the role-ideal model, helps individuals develop ideal versions of their professional roles and visions of what they might accomplish, and identify ways to act that push the boundaries of what is expected of them (for example, what does it mean to be a "good" data scientist?). The second is the moral imagination framework: a tool for creative, values-based decision-making at the project level. We developed the RI-MI framework in collaboration with Nesta's data analytics practice, but we believe it has the potential to be applied in other disciplines as well.

The RI-MI framework has six main steps:

  1. Think critically (and read) about the relationship between data science and structural injustice; explore the range of theories and views on how to address the problem(s) you are working on; and reflect on how your organizational context shapes your work.
  2. (Re)define role ideals by developing the "ideal" version of your role, building on the knowledge acquired in the previous step. Ask: what resources and powers do I have in my role? What would it mean to fulfill my role well? How can my role be used to challenge structural injustice rather than reinforce it?
  3. Identify opportunities to push boundaries in the form of specific actions and behaviors that will help you move towards your role ideal, drawing on your resources and powers as well as on positive examples (others who have successfully pushed the boundaries of their role).
  4. Exercise moral imagination at the project level, implementing the direction set by your role ideal. Apply the following scaffolding to project decisions: identify the goal or task to be accomplished; examine the underlying values operationalized in the project approach; explore the landscape of legitimate stakeholders and how they should be involved; and identify the range of alternative approaches that could better serve the values or interests of different stakeholder groups.
  5. Practice moral critique by defining an accountability structure around your role ideal and developing routines and processes for providing encouragement, (appropriate and constructive) pressure and team-level feedback.
  6. Share and debate openly, practicing RI-MI in the open by documenting and sharing the processes and outcomes of role ideals and moral imagination, to allow for feedback, improvement and challenge.

Many of the challenges that Nesta's missions aim to address are shaped by structural conditions – factors such as race and socio-economic background affect access to good health, education and other outcomes. The RI-MI framework can help research and innovation teams see this structure and their role within it, and use this understanding to explore, evaluate and debate different approaches at the project level. We look forward to reporting on our progress as we apply the RI-MI framework in our data analytics work and beyond.

We are presenting a paper, From ideals to action in AI for social good: introducing the RI-MI framework, at the Conference on the Ethics of Socially Disruptive Technologies in October, and will aim to publish a version of the paper in due course. If you would like to discuss the RI-MI framework in more detail, please contact us: [email protected], [email protected], [email protected]

Sean N. Ayres