Why Responsible AI?
Responsible AI grows more important as AI technology is embedded in ever more aspects of our lives. AI systems can affect people profoundly, and it is essential that these effects be positive and beneficial. AI must be developed and deployed in a way that preserves the safety of users, respects their privacy, and ensures that results are fair and unbiased. Responsible AI practices require that the technology be developed with transparency and with accountability for its outcomes, helping to ensure that AI is used ethically and that its benefits reach everyone.
What is Wrong with Current Approaches?
The problem with current approaches to responsible AI is that they are typically applied at the end of the process of making an AI product, when it is often too late to address ethical issues effectively. The potential harms of an AI product are considered only after it has been developed, instead of being taken into account from the outset. As a result, it is not uncommon for AI products to be released with ethical issues that could have been avoided had responsible AI been integrated into development from the beginning.
Alternatively, costly re-training of the AI models may be required to address the issues that are identified. Re-training demands significant time and effort to ensure that the model actually resolves the problems found, and it may also require new technology, such as updated algorithms, so that the model meets current standards of accuracy while doing so.
The issues above stem from treating Responsible AI purely as a “guard rails” exercise, one focused not on preventing unethical AI but on detecting and rectifying problems after the fact. This is a reactive approach: issues are addressed only once they have already occurred. Unfortunately, that is often too late, because the harms caused by irresponsible AI, including reputational harm, can be severe and long-lasting, especially if the product has already been released.
What can We Do?
There is another way, but it requires comprehensive process design and deliberate practice to succeed. The concept is not new: the manufacturing industry around the world has long employed process-based approaches to ensure the quality of its products. The strategy rests on understanding the processes involved and how they interact to produce the desired outcome. By analyzing and optimizing those processes and by establishing quality control checkpoints, production teams reduce waste, improve efficiency, and ensure that their products meet requirements and are of the highest quality. The checkpoints also catch issues early, lowering the overall cost of rectifying them.
How does Quality Control Work?
In manufacturing, quality control begins by defining the desired quality of the product in terms of form, fit, and function: the required physical, functional, and performance characteristics. These requirements are then distilled into measurable criteria that the product must meet at each step of the manufacturing process, from the receiving, inspection, and testing of raw materials through final performance and acceptance testing of the finished product. Quality control also includes regular monitoring and review of products and processes to keep them within established statistical parameters; Statistical Process Control (SPC) is the technique typically used to monitor and control the quality of a production process. This ensures that production quality consistently meets requirements and that problems are identified and addressed early.
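As a concrete illustration, the sketch below implements the simplest form of SPC in Python: control limits computed as the mean plus or minus three standard deviations of a baseline sample, with new measurements flagged when they fall outside the limits. The part dimensions and values here are hypothetical, and real SPC charts (X-bar/R, individuals/moving-range) involve more machinery than this.

```python
import statistics

def control_limits(baseline, sigma_multiplier=3.0):
    """Derive lower/upper control limits from a baseline sample,
    using the common mean +/- 3-sigma rule."""
    mean = statistics.mean(baseline)
    sd = statistics.stdev(baseline)
    return mean - sigma_multiplier * sd, mean + sigma_multiplier * sd

def in_control(measurement, limits):
    """Check whether a new measurement stays within the control limits."""
    lower, upper = limits
    return lower <= measurement <= upper

# Hypothetical baseline: shaft diameters (mm) from a process known to be stable.
baseline = [10.01, 9.98, 10.02, 10.00, 9.99, 10.01, 10.03, 9.97]
limits = control_limits(baseline)

# Each newly machined part is measured and checked before moving on.
for diameter in [10.00, 10.02, 10.11]:
    verdict = "OK" if in_control(diameter, limits) else "OUT OF CONTROL: stop and review"
    print(f"{diameter:.2f} mm -> {verdict}")
```

The last part in this example triggers the out-of-control verdict, which is exactly the early signal quality control is designed to produce.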
Close examination of the quality control approach reveals that it consists of more than checkpoints at which a quality engineer evaluates the product against criteria; it reaches into every step of the manufacturing process and every instruction given to the line worker. For instance, workers measure the dimensions of parts as they are machined and document the actual values, and process engineers continuously monitor process variables (calibration, tolerances, temperature, pressure, gas mix, etc.) to ensure that the desired product quality will meet product requirements. Strict procedures guarantee that these actions are carried out for every part produced, and the procedures themselves are documented, reviewed, and audited against international quality management system certifications such as ISO 9001 and AS9100.
How can we Implement Responsible AI using the Quality Control Approach?
Unfortunately, there is no straightforward way to replicate the manufacturing quality control process for AI products. Even in manufacturing, the approach must be tailored to each product or production line: quality control experts collaborate with engineering specialists to create a comprehensive production plan, including all the steps and measurement requirements necessary to guarantee quality.
Following this general methodology, action must be taken before launching into making an AI product. A well-designed responsible AI approach that follows it (call it “AI Quality Control”) includes Responsible AI experts who are well-versed in quality control methods. Before a line of code is written or an AI model is trained on the first record of data, these experts, working with AI engineering leads, carefully draw up the development process, including, among other things, waypoints at which rigorous testing will confirm that the product is compliant. To bring line-worker discipline into software development, it may be necessary to augment development environments so that programmers make and document prescribed measurements; deviations from established criteria should stop the work until it has been reviewed.
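A minimal sketch of what such an instrumented waypoint might look like follows. The checkpoint names, metrics, and thresholds are hypothetical illustrations, not a prescribed standard; the point is that each waypoint carries a measurable criterion and a failed check stops the line.

```python
from dataclasses import dataclass

@dataclass
class Checkpoint:
    """A waypoint in the development process with a measurable pass/fail criterion."""
    name: str
    metric: str
    threshold: float
    higher_is_better: bool = True

    def passes(self, value: float) -> bool:
        return value >= self.threshold if self.higher_is_better else value <= self.threshold

# Illustrative waypoints; real criteria come from the Responsible AI experts.
checkpoints = [
    Checkpoint("data-intake", "pii_leak_rate", 0.0, higher_is_better=False),
    Checkpoint("model-eval", "subgroup_accuracy_gap", 0.02, higher_is_better=False),
    Checkpoint("pre-release", "explanation_coverage", 0.95),
]

# Values the programmers measured and documented at each waypoint (hypothetical).
measured = {"pii_leak_rate": 0.0, "subgroup_accuracy_gap": 0.05, "explanation_coverage": 0.97}

for cp in checkpoints:
    value = measured[cp.metric]
    if not cp.passes(value):
        # A deviation stops the line: work halts until the issue is reviewed.
        raise SystemExit(f"STOP: checkpoint '{cp.name}' failed ({cp.metric}={value})")
    print(f"PASS: {cp.name} ({cp.metric}={value})")
```

Tying each criterion to a named waypoint mirrors the manufacturing checkpoint: the build cannot proceed past a failed gate, so issues surface while they are still cheap to fix.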
A tailored approach is necessary for each AI product, as the applicable ethical principles vary and product development processes differ significantly. The table below gives examples of AI ethics principles that apply to different classes of AI products on a case-by-case basis. It is not possible to list all product development processes here, as they are unique to each product and development environment.
| AI principle    | Healthcare | Financial | Authoring | Coding | Manufacturing |
|-----------------|------------|-----------|-----------|--------|---------------|
| Explainability  | X          | X         | X         | X      |               |
| Fairness        | X          | X         |           |        |               |
| Data Protection | X          | X         | X         |        |               |
| Human Agency    | X          | X         | X         | X      | X             |
| Transparency    | X          | X         | X         |        |               |
| Accountability  | X          | X         |           |        |               |
| Efficacy        | X          | X         | X         |        |               |
| Integrity       | X          | X         |           |        |               |
Customizing the approach requires close collaboration between the ethics team and engineering. It is not enough for the developer alone to take action; the teams deploying the technology and those responsible for scale-up must also be involved. Because these teams often come from different organizations, the necessary actions span multiple organizations (for an example, see the Chatbots RESET framework in the Reference).
What are Next Steps?
- Choose the ethics and engineering teams that will collaborate on this project. Empower these teams.
- Identify the relevant principles for the AI product being developed.
- Interpret the principles in the context of the application and the user of the AI product; document them in detail.
- For each interpreted principle, create a set of actions to operationalize it. These actions will be carried out by the various actors involved in the development, deployment, and scale-up of the AI product (a sketch of how such a mapping might be tracked appears after this list).
- Integrate the operationalizing actions into the workflows of the various actors, including programmers, managers, ethics team members, marketing, sales, users, and regulatory agencies.
- Identify milestones within the development cycle to check for correct implementation; this will be the responsibility of the ethics and engineering teams.
- Train all actors on how to implement and track the operationalizing actions.
- Identify a person or a team responsible for the success of the implementation and tracking.
- Periodically audit the product development process to ensure that all actions and tracking are being followed.
- Establish rules/incentives for the actors to adhere to the process.
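To make the principle-to-action mapping concrete, here is a minimal sketch of how interpreted principles, their operationalizing actions, and the responsible actors might be tracked. The Principle and Action schema, the healthcare-chatbot example, and every field name are hypothetical illustrations, not a prescribed implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Action:
    """One operationalizing action, assigned to an actor and a milestone."""
    description: str
    actor: str       # e.g. programmer, ethics team, marketing, regulator
    milestone: str   # the development-cycle waypoint where it is checked
    done: bool = False

@dataclass
class Principle:
    name: str
    interpretation: str  # the principle, interpreted for this product and its users
    actions: list[Action] = field(default_factory=list)

    def open_actions(self, milestone: str) -> list[Action]:
        """Actions still blocking a given milestone."""
        return [a for a in self.actions if a.milestone == milestone and not a.done]

# Hypothetical example for a healthcare chatbot.
fairness = Principle(
    name="Fairness",
    interpretation="Triage advice quality must not differ across patient demographics.",
    actions=[
        Action("Audit training data for demographic coverage", "ethics team", "data-intake"),
        Action("Report per-subgroup accuracy with each model build", "programmer", "model-eval"),
    ],
)

# At each milestone, auditors list what still blocks sign-off.
for a in fairness.open_actions("model-eval"):
    print(f"Blocking 'model-eval': {a.description} (owner: {a.actor})")
```

Keeping interpretations and actions in one auditable structure supports the milestone checks, training, and periodic audits listed above.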
Reference
“Chatbots RESET: A Framework for Governing Responsible Use of Conversational AI in Healthcare,” World Economic Forum Report, December 2020.