Key Takeaways
❐ Trustworthy AI (TAI) is an evolving concept.
❐ There is no ‘one-size-fits-all’ solution for TAI.
❐ AI will impact human civilization at scales not yet fully understood.
❐ Meanwhile, there is no need either to panic about or to underestimate the global impact of AI.
❐ A viable path towards TAI involves collaboration among communities, regulators, the private
sector, open-source communities, academia, and legal scholars (to name a few).
❐ The open-source software movement has been fueling innovation for decades. Let’s encourage it rather
than impose restrictions, so that it can lead the advancement of TAI tools.
❐ Experts across various disciplines can play a key role in “translating” principles of TAI into “attributes” or “properties” such as safety, reliability, fairness, and explainability.
❐ There is no single universal framework that can deliver TAI in an organization. Instead, we suggest
communities focus on defining and measuring relevant metrics for each desired TAI attribute.
❐ Several regulatory bodies, such as the European Union, have approached TAI from a risk-management
perspective.
❐ A clear understanding of the uncertainties in an AI model’s life cycle should be mapped to risk-management
frameworks such as the Rumsfeld Risk Matrix (RRM). This equips decision-makers with tools to
face and plan for uncertainty.
❐ Terms such as ‘fairness’, ‘bias’, ‘accountability’, and ‘ethical’ are loaded concepts with roots deeply
ingrained in every community’s culture, history, societal values, and governance.
❐ Associating these terms with the ‘principles’ of TAI is ultimately context-dependent and, therefore,
requires careful ‘infusion’ into any regulatory or engineering system.
❐ Mathematically speaking, it has been demonstrated that it is impossible to satisfy every formalization of AI fairness concurrently.
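The last point can be illustrated with a minimal sketch. The numbers below are hypothetical, and the two metrics shown (selection-rate parity and equal true-positive rates) are just one pair of the many fairness formalizations discussed in the literature: when two groups have different base rates, even a perfect classifier that satisfies equality of opportunity necessarily violates demographic parity.

```python
# Hypothetical toy data: two groups with different base rates of the
# positive outcome (Group A: 2/4 = 0.50, Group B: 1/4 = 0.25).
def rates(y_true, y_pred):
    """Return (selection rate, true-positive rate) for one group."""
    selection = sum(y_pred) / len(y_pred)
    pred_for_positives = [p for t, p in zip(y_true, y_pred) if t == 1]
    tpr = sum(pred_for_positives) / len(pred_for_positives)
    return selection, tpr

a_true, a_pred = [1, 1, 0, 0], [1, 1, 0, 0]  # perfect predictor on Group A
b_true, b_pred = [1, 0, 0, 0], [1, 0, 0, 0]  # perfect predictor on Group B

sel_a, tpr_a = rates(a_true, a_pred)
sel_b, tpr_b = rates(b_true, b_pred)

# Equality of opportunity holds: both groups have TPR = 1.0.
print(f"TPR gap: {abs(tpr_a - tpr_b):.2f}")             # prints 0.00
# Demographic parity fails: selection rates track the unequal base rates.
print(f"Selection-rate gap: {abs(sel_a - sel_b):.2f}")  # prints 0.25
```

The only way to close the selection-rate gap here would be to select some negatives in Group B (or reject positives in Group A), which would break the equal-TPR property instead; this trade-off is the essence of the impossibility results.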