Accountability | The requirement of accountability is closely linked to the principle of fairness. It necessitates that mechanisms be put in place to ensure responsibility and accountability for AI systems and their outcomes, both before and after their development, deployment and use. |
AI Component | A standalone artifact that depends on given data and a target. |
AI Efficiency | Optimizing various aspects of AI systems to ensure that they operate at peak efficiency while minimizing resource or energy consumption. |
AI System | A fully functioning pipeline that uses data to find patterns of interest. An AI system often uses a model to identify addressable patterns. |
AI Trustworthiness | According to the EU Ethics Guidelines (link), Trustworthy AI systems should be lawful, ethical, and robust throughout their entire life cycle. Trustworthy AI rests upon four ethical principles: (1) Respect for human autonomy, (2) Prevention of harm, (3) Fairness, and (4) Explicability. |
Bias | It refers to AI systems that produce biased results that reflect and perpetuate human biases within a society, including historical and current social inequality. Bias can be found in the initial training data, the algorithm, or the predictions the algorithm produces. |
Cloud-Edge Continuum | The Cloud-Edge Continuum is “an integrated environment that incorporates and blends together sensors, automated devices, edge computing, and centralised cloud computing in a way that is tailored to the specific needs of a use case and organisation.” |
Code of Conduct | A set of suggestions for breaking down TAI requirements across an ML pipeline, aimed at improving its trustworthiness. |
Commercial-off-the-shelf (COTS) | A software and/or hardware product that is commercially ready-made and available for sale, lease, or license to the general public. |
Data Provenance | Data provenance is the record of metadata from the data's source, providing historical context and authenticity. While data lineage helps optimize and troubleshoot data pipelines, data provenance helps to validate and audit data. |
Deep Learning (DL) | Deep learning (DL) is a subset of Machine Learning (ML) that uses multilayered neural networks (NNs), called Deep Neural Networks (DNNs), to simulate the complex decision-making power of the human brain. Some form of deep learning powers most of the AI applications in our lives today. |
Digital Artefact | A digital artefact is often considered to be any material or immaterial object, based on a digital technology, that allows data collection, processing and/or transmission. |
Diversity, Non-Discrimination, and Fairness | In order to achieve Trustworthy AI, we must enable inclusion and diversity throughout the entire AI system’s life cycle. Besides the consideration and involvement of all affected stakeholders throughout the process, this also entails ensuring equal access through inclusive design processes as well as equal treatment. This requirement is closely linked with the principle of fairness. |
Hardware Aware Training (HAT) | HW-Aware Training combines Quantization-Aware Training (QAT) and Fault-Aware Training (FAT). QAT helps to produce smaller, more efficient NN models, while FAT helps to produce NN models that are robust against specified faults. |
Human Agency and Oversight | AI systems should support human autonomy and decision-making, as prescribed by the principle of respect for human autonomy. This requires that AI systems should both act as enablers to a democratic, flourishing and equitable society by supporting the user’s agency and fostering fundamental rights, and allow for human oversight. |
Human-in-the-loop (HITL) | It refers to the need for human interaction, intervention, and judgment to control or change the outcome of a process. Hence, the human has full control over the final decision of an AI component or system. |
Human-on-the-loop (HOTL) | It refers to the need for human awareness and authority to override the outcome of a process without being in control of the outcome. Hence, the human may not have control over the actual decision that the AI component or system chooses but has a very high degree of control over the implementation of the decision. |
Machine Learning (ML) | Machine learning (ML) is a branch of AI and computer science that focuses on using data and algorithms to enable AI to imitate the way that humans learn, gradually improving its accuracy. |
Neural Architecture Search (NAS) | Neural Architecture Search is an optimization algorithm that finds the best NN model architecture with respect to a set of objectives (accuracy, MACs, size, power consumption, latency, etc.). |
Neural Network (NN) | A neural network is a machine learning program, or model, that makes decisions in a manner similar to the human brain, by using processes that mimic the way biological neurons work together to identify phenomena, weigh options and arrive at conclusions. |
Privacy and Data Governance | Closely linked to the principle of prevention of harm is privacy, a fundamental right particularly affected by AI systems. Prevention of harm to privacy also necessitates adequate data governance that covers the quality and integrity of the data used, its relevance in light of the domain in which the AI systems will be deployed, its access protocols and the capability to process data in a manner that protects privacy. |
Pruning | Pruning AI models is the process of removing unnecessary or redundant parts of a neural network to reduce its size and complexity, and improve its efficiency and performance. |
Quantisation | Quantisation is a model size reduction technique that converts model weights from a high-precision floating-point representation to low-precision floating-point or integer representations, such as 16-bit or 8-bit. As a result, the model size and inference speed can improve by a significant factor without sacrificing too much accuracy. Additionally, quantisation can improve the performance of a model by reducing memory bandwidth requirements and increasing cache utilisation. |
Societal and Environmental well-being | In line with the principles of fairness and prevention of harm, the broader society, other sentient beings and the environment should be also considered as stakeholders throughout the AI system’s life cycle. Sustainability and ecological responsibility of AI systems should be encouraged, and research should be fostered into AI solutions addressing areas of global concern, such as for instance the Sustainable Development Goals. Ideally, AI systems should be used to benefit all human beings, including future generations. |
Technical Robustness and Safety | A crucial component of achieving Trustworthy AI is technical robustness, which is closely linked to the principle of prevention of harm. Technical robustness requires that AI systems be developed with a preventative approach to risks and in a manner such that they reliably behave as intended while minimising unintentional and unexpected harm, and preventing unacceptable harm. This should also apply to potential changes in their operating environment or the presence of other agents (human and artificial) that may interact with the system in an adversarial manner. In addition, the physical and mental integrity of humans should be ensured. |
Transparency | This requirement is closely linked with the principle of explicability and encompasses transparency of elements relevant to an AI system: the data, the system, and the business models. |
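To make the Neural Network (NN) entry above concrete, the following is a minimal NumPy sketch of a two-layer feed-forward network: each layer computes a weighted sum of its inputs followed by a ReLU activation. All names, shapes, and values are illustrative, not taken from any specific system in this glossary.

```python
import numpy as np

def relu(x):
    # ReLU activation: pass positive signals through, suppress negative ones
    return np.maximum(0.0, x)

def forward(x, layers):
    """Forward pass through a small fully connected network.

    `layers` is a list of (weight, bias) pairs; each layer computes a
    weighted sum of its inputs followed by a ReLU activation, loosely
    mimicking how artificial neurons aggregate and fire on signals.
    """
    for w, b in layers:
        x = relu(x @ w + b)
    return x

# Illustrative network: 3 inputs -> 4 hidden units -> 2 outputs
rng = np.random.default_rng(0)
layers = [(rng.standard_normal((3, 4)), np.zeros(4)),
          (rng.standard_normal((4, 2)), np.zeros(2))]
y = forward(np.ones(3), layers)
```

A trained network would obtain the weights via an optimization procedure (e.g. gradient descent) rather than random initialization; the sketch only shows the inference-time structure.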
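The Pruning entry describes removing redundant parts of a network. A common unstructured variant is magnitude pruning, which zeroes out the smallest-magnitude weights. Below is an illustrative NumPy sketch under that assumption; the function name and sparsity level are hypothetical, not from a specific library.

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.5):
    """Zero out the `sparsity` fraction of weights with smallest magnitude.

    Unstructured magnitude pruning sketch: the surviving weights keep
    their original values, so the network's structure is unchanged but
    many connections are removed.
    """
    k = int(weights.size * sparsity)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

rng = np.random.default_rng(1)
w = rng.standard_normal((8, 8))
pruned = magnitude_prune(w, sparsity=0.5)
```

In practice, pruning is usually followed by fine-tuning so the remaining weights can compensate for the removed connections.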
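The Quantisation entry can likewise be illustrated with a small sketch. This shows symmetric post-training quantisation of float32 weights to int8: the largest weight magnitude is mapped to the int8 range and all values are rounded to that grid. It is a simplified illustration, not a production quantizer (real toolchains also handle activations, per-channel scales, and zero points).

```python
import numpy as np

def quantize_int8(weights):
    # Symmetric quantisation: map the largest magnitude to 127
    scale = np.max(np.abs(weights)) / 127.0
    if scale == 0.0:
        scale = 1.0  # all-zero weights: any scale works
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # Recover approximate float weights from int8 values
    return q.astype(np.float32) * scale

rng = np.random.default_rng(2)
w = rng.standard_normal((4, 4)).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
```

The reconstruction error per weight is bounded by half the quantisation step, which is why accuracy typically degrades only slightly while storage drops by 4x versus float32.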