Human judgment shapes every stage of what may appear to be an objective system of logical decisions. It is humans who write the algorithms, who define success and failure, who decide how systems are used, and who are affected by a system’s outcomes.
Every person involved in the creation of AI at any step is accountable for considering the system’s impact in the world, as are the companies invested in its development.
Make company policies clear and accessible to design and development teams from day one so that no one is confused about issues of responsibility or accountability. As an AI designer or developer, it is your responsibility to know.
Understand where the responsibility of the company/software ends. You may not have control over how data or a tool will be used by a user, client, or other third party.
Keep detailed records of your design processes and decision making. Determine a strategy for keeping records during the design and development process that encourages both best practices and iteration.
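One way to make such record keeping routine is to log each design decision in a structured, append-only file. The sketch below is purely illustrative — the field names, file format, and `DecisionRecord`/`log_decision` helpers are assumptions, not a prescribed standard — but it shows the kind of auditable trail the guidance above calls for.

```python
import json
from dataclasses import dataclass, asdict, field
from datetime import date

@dataclass
class DecisionRecord:
    """One entry in a design-decision log (fields are illustrative)."""
    date: str
    decision: str
    rationale: str
    owner: str
    alternatives_considered: list = field(default_factory=list)

def log_decision(record: DecisionRecord, path: str) -> None:
    # Append each record as one JSON line so the log is easy to
    # diff, review, and audit later.
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

# Example entry (hypothetical project and decision):
log_decision(
    DecisionRecord(
        date=str(date.today()),
        decision="Exclude ZIP code from the scoring features",
        rationale="Potential proxy for protected attributes",
        owner="model-team",
        alternatives_considered=["keep with monitoring", "coarsen to region"],
    ),
    "decision_log.jsonl",
)
```

A plain-text log like this costs little to maintain and gives reviewers a concrete artifact when questions of responsibility or accountability arise later.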
Adhere to your company’s business conduct guidelines. Also understand the national and international laws, regulations, and guidelines within which your AI may have to operate. You can find other related resources in the IEEE Ethically Aligned Design document.
“Nearly 50% of the surveyed developers believe that the humans creating AI should be responsible for considering the ramifications of the technology. Not the bosses. Not the middle managers. The coders.”