News

27th European Conference on Artificial Intelligence (ECAI 2024) 

With the rise of AI, ensuring that AI systems adhere to legal and ethical principles has become essential. Concerns over unintended effects and data use have driven strong demand for trustworthy AI, now a major focus for both the public and policymakers. The EU High-Level Expert Group on AI, formed by the European Commission in 2018, released “Ethics Guidelines for Trustworthy AI,” emphasizing that AI should be:

  • Lawful, adhering to laws and regulations
  • Ethical, aligning with core principles and values

The VALE track at the VECOMP 2024 workshop focuses on the ethical dimension—ensuring AI respects human values. Achieving this requires developing systems that reason about human values and norms, incorporate these values, and align behaviors accordingly. Just as human values guide morality, they can guide the morality of AI, resulting in value-aware systems capable of value-aligned decisions and enhancing human value-awareness.

With growing research on value-aligned AI, the VALE Track aims to bring together work on value engineering and stimulate deep discussion. This track continues the successful VALE workshop held at ECAI 2023.

In the context of the VECOMP 2024 workshop – VALE Track on Value Engineering in AI, held at the 27th European Conference on Artificial Intelligence (ECAI 2024) on 19 October 2024 in Santiago de Compostela, Spain, Maria Dagioglou presented the following paper, related to MANOLO, to an audience of about 50 people.

More specifically: Alexandros Nousias, Maria Dagioglou, and Georgios Petasis, “Values-aligned, responsible sharing (VaRS): A methodology and a blue-print”. The pre-proceedings are available at https://vale2024.iiia.csic.es/pre-proceedings.

Relevance to MANOLO:

The paper proposes a methodology that allows licensors to reflect on their value set with respect to the licensed AI asset, based on Schwartz’s vocabulary of personal values, and to convey licensors’ permissions and restrictions accordingly through licensing. The proposed value-aligned responsible sharing license blueprint integrates licensors’ values with typical open licensing elements.

Furthermore, Christos Spatharis gave an oral presentation of the following paper, related to MANOLO, to an audience of about 30 people at the Workshop on Embracing Human-Aware AI in Industry 5.0 (HAII5.0). The workshop proceedings are openly accessible at CEUR-WS: https://ceur-ws.org/Vol-3765/.

More specifically: Dimitrios Koutrintzes, Christos Spatharis, and Maria Dagioglou, “Human-Aware design for transferring knowledge during human-AI co-learning”.

Relevance to MANOLO:

This paper presents an experimentation pipeline that can be followed during human-aware AI design and development in the case of transfer learning from expert to novice human-AI teams. The two intricate research questions of “when to stop training” and “what expert knowledge to transfer” are tackled through a study with two expert human participants. The results demonstrate the complexities of the process and offer guidelines for future research.