Accountability

AI designers and developers are responsible for considering AI design, development, decision processes, and outcomes.

Human judgment plays a role throughout a seemingly objective system of logical decisions. It is humans who write the algorithms, who define success or failure, who make decisions about the uses of systems, and who may be affected by a system’s outcomes.

Every person involved in the creation of AI at any step is accountable for considering the system’s impact in the world, as are the companies invested in its development.

Accountability pictogram

01
Make company policies clear and accessible to design and development teams from day one so that no one is confused about issues of responsibility or accountability. As an AI designer or developer, it is your responsibility to know.

02
Understand where the responsibility of the company/software ends. You may not have control over how data or a tool will be used by a user, client, or other external source.

“Nearly 50% of the surveyed developers believe that the humans creating AI should be responsible for considering the ramifications of the technology. Not the bosses. Not the middle managers. The coders.”

To consider

  • Understand the workings of your AI even if you’re not personally developing and monitoring its algorithms.
  • Refer to secondary research by sociologists, linguists, behaviorists, and other professionals to understand ethical issues in a holistic context.

Questions for your team

  • How does accountability change according to the levels of user influence over an AI system?
  • Is the AI to be embedded in a human decision-making process, is it making decisions on its own, or is it a hybrid?
  • How will our team keep records of our process?
  • How do we keep track of ethical design choices and considerations after the launch of the AI?
  • Will others who are new to our effort be able to understand our records?
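The record-keeping questions above can be made concrete with a lightweight decision log. The sketch below is a hypothetical schema, not part of any prescribed tooling: each ethical design choice is appended as one JSON line with a stated rationale and an accountable owner, so that teammates joining after launch can trace why a decision was made.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class DecisionRecord:
    """One entry in a team's ethical-design decision log (hypothetical schema)."""
    decision: str   # what was decided
    rationale: str  # why, including the ethical considerations weighed
    owner: str      # who is accountable for this choice
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def append_record(log_path: str, record: DecisionRecord) -> None:
    """Append one record as a JSON line so the log stays append-only and auditable."""
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")
```

The append-only, one-record-per-line format is deliberate: it keeps the history of choices intact after launch rather than overwriting earlier reasoning.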

Accountability example

  • The team employs design researchers to conduct face-to-face interviews with real hotel guests, in order to understand their wants and needs.
  • The team considers its own responsibility when the hotel assistant’s feedback does not meet guests’ needs or expectations. It has implemented a feedback learning loop to better understand preferences, and has made clear that a guest can turn the AI off at any point during their stay.
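The two mechanisms named in the example can be sketched together. The class below is purely illustrative (the names `HotelAssistant`, `record_feedback`, and `set_enabled` are assumptions, not a real product API): a simple feedback loop nudges learned preference scores up or down, and an opt-out switch that only the guest controls silences the assistant entirely.

```python
class HotelAssistant:
    """Illustrative sketch of a guest-facing assistant with a feedback
    learning loop and a guest-controlled opt-out (hypothetical API)."""

    def __init__(self) -> None:
        self.enabled = True      # the guest, not the operator, flips this
        self.preferences: dict[str, float] = {}  # scores learned from feedback

    def set_enabled(self, enabled: bool) -> None:
        """Honor the guest's choice to turn the assistant on or off."""
        self.enabled = enabled

    def record_feedback(self, topic: str, satisfied: bool) -> None:
        """Feedback loop: nudge the topic's score toward the guest's reaction."""
        score = self.preferences.get(topic, 0.0)
        self.preferences[topic] = score + (0.1 if satisfied else -0.1)

    def suggest(self, topic: str):
        """Suggest on a topic only if enabled and not disliked by the guest."""
        if not self.enabled:
            return None  # respect the opt-out: no suggestions at all
        if self.preferences.get(topic, 0.0) < 0:
            return None  # the guest has signaled dissatisfaction here
        return topic
```

The key accountability point the sketch encodes is that the opt-out short-circuits everything else: once the guest disables the assistant, no learned preference can override that choice.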