National and Kapodistrian University of Athens
Computing counterfactuals with feasibility and compactness guarantees
Abstract Fairness and explainability in AI and machine learning have been the subject of much recent discussion. For example, we know that there are a variety of biases in commonly used training data which may result in the error being spread very unevenly across individuals and subgroups. The problem is exacerbated since most AI pipelines are completely opaque. Explainability methods, including the use of counterfactual explanations, can be used to understand the fairness implications of different AI and machine learning techniques. Simply stated, a counterfactual explanation encodes how much a given object needs to change for its label to flip. Thus, it provides a mechanism to compare between objects and evaluate fairness outcomes. With this goal in mind, we present novel techniques for deriving counterfactual explanations that consider several optimisation factors including feasibility, simplicity, and compactness of the explanation.
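To make the notion concrete, the following is a minimal, hypothetical sketch of a counterfactual search, not the techniques presented in the talk: the toy linear model, the feature names, and the uniform step size are all assumptions for illustration. A breadth-first search over perturbations explores smaller changes first (compactness), and restricting edits to a declared set of mutable features is a simple stand-in for feasibility constraints.

```python
# Hypothetical sketch of counterfactual search with feasibility and
# compactness constraints. Model, features, and steps are illustrative only.

def predict(x):
    # Toy linear credit model (assumed): approve (1) if score >= 10.
    return 1 if 2 * x["income"] + x["savings"] >= 10 else 0

def find_counterfactual(x, model, mutable, step=1.0, max_levels=50):
    """Breadth-first search over perturbations of the mutable features.
    Exploring smaller total changes first yields a compact explanation;
    restricting edits to `mutable` enforces a simple feasibility rule
    (e.g. an immutable attribute such as age is never perturbed)."""
    base = model(x)
    frontier = [dict(x)]
    seen = {tuple(sorted(x.items()))}
    for _ in range(max_levels):
        nxt = []
        for cand in frontier:
            for feat in mutable:
                for delta in (step, -step):
                    c = dict(cand)
                    c[feat] += delta
                    key = tuple(sorted(c.items()))
                    if key in seen:
                        continue
                    seen.add(key)
                    if model(c) != base:
                        return c  # smallest perturbation found that flips the label
                    nxt.append(c)
        frontier = nxt
    return None

applicant = {"income": 2, "savings": 3}   # score 7 -> denied
cf = find_counterfactual(applicant, predict, mutable=["income", "savings"])
print(cf)  # {'income': 4.0, 'savings': 3} -> score 11 -> approved
```

The returned dictionary is the counterfactual explanation: it says how little this applicant would need to change (here, income from 2 to 4) for the model's decision to flip.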
Dimitrios Gunopulos got his PhD from Princeton University in 1995. He is currently a Professor and the Department Chair in the Department of Informatics and Telecommunications, University of Athens. He has held permanent and temporary positions as Postdoctoral Fellow at the Max Planck Institute for Informatics, Research Associate at the IBM Almaden Research Center, Visiting Researcher at the University of Helsinki, Assistant, Associate, and Full Professor at the Department of Computer Science and Engineering in the University of California Riverside, and Visiting Researcher at Microsoft Research, Silicon Valley Lab. His research is in the areas of Data Mining, Knowledge Discovery in Databases, Databases, Machine Learning, Sensor Networks, Peer-to-Peer Systems, and Algorithms. He has co-authored over a hundred journal and conference papers that have been widely cited (Google Scholar reports an h-index of 77), seven patents, and a book. He was the recipient of the 2020 IEEE ICDM Outstanding Service Award. His Erdős number is 2. His research has been supported by NSF (including an NSF CAREER award), the DoD, the Institute of Museum and Library Services, the Tobacco-Related Disease Research Program, the European Commission, the Greek General Secretariat of Research and Technology, AT&T, Nokia, a 2015 Yahoo Faculty Award, and a 2017 Google Faculty Award. He has served as a General co-Chair of SDM SIAM 2018, SDM SIAM 2017, HDMS 2011, and IEEE ICDM 2010, and as a PC co-Chair of ACM SIGKDD 2023, IEEE ICDE 2020, ECML/PKDD 2011, IEEE ICDM 2008, ACM SIGKDD 2006, SSDBM 2003, and DMKD 2000.
University of Pisa
Social Artificial Intelligence: Challenges of the Human-AI Ecosystem
Abstract The rise of large-scale socio-technical systems in which humans interact with AI systems (including assistants and recommenders) multiplies the opportunity for the emergence of collective phenomena and tipping points, with unexpected, possibly unintended, consequences. This is apparent even in such simple everyday applications as navigation systems, where suggestions may create chaos if too many drivers are directed onto the same route. Similarly, personalised recommendations on social media may amplify polarisation, filter bubbles, and radicalisation. On the other hand, we may learn how to foster “wisdom of crowds” and collective action effects to face social and environmental challenges. In order to understand the impact of AI on socio-technical systems and design next-generation AIs that team with humans to help overcome societal problems rather than exacerbate them, we propose to build the foundations of Social AI at the intersection of Complex Systems, Network Science and AI, and discuss the main open questions in Social AI, outlining possible technical and scientific challenges and suggesting research avenues.
Dino Pedreschi is a professor of computer science at the University of Pisa, and a pioneering scientist in data science and artificial intelligence. He co-leads the Pisa KDD Lab – Knowledge Discovery and Data Mining Laboratory, a joint research initiative of the University of Pisa, Scuola Normale Superiore and the Italian National Research Council – CNR. He is currently shaping the research frontier of Human-centered Artificial Intelligence, as a leading figure in the European network of research labs Humane-AI-Net (scientific director of the line “Social AI”) and as the coordinator of the Spoke project “Human-centered AI” of the Next Generation EU national program FAIR – Future AI Research. He is a founder of SoBigData.eu, the European H2020 Research Infrastructure “Big Data Analytics and Social Mining Ecosystem”. He is the coordinator of the Italian National PhD program in Artificial Intelligence. He has been a designated expert of GPAI, the Global Partnership on AI – Responsible AI Working Group, since 2020. He obtained his PhD in computer science from the University of Pisa in 1987. His research focus is on big data analytics and mining, machine learning and AI, and their impact on society: human mobility and sustainable cities, social network analysis, complex social and economic systems, data and AI ethics, discrimination-preventing and privacy-preserving data analytics, explainable AI, synergistic human-AI collaboration and co-evolution.
KTH Royal Institute of Technology
Representation learning and foundation models in robotics
Abstract All day long, our fingers touch, grasp and move objects in various media such as air, water, oil. We do this almost effortlessly – it feels like we do not spend time planning and reflecting over what our hands and fingers do, or how the continuous integration of various sensory modalities such as vision, touch, proprioception, and hearing helps us to outperform any other biological system in the variety of interaction tasks that we can execute. Largely overlooked, and perhaps most fascinating, is the ease with which we perform these interactions, resulting in a belief that these are also easy to accomplish in artificial systems such as robots. When interacting with objects, a robot needs to consider various object properties. Our work focuses on physical interaction with deformable objects using multimodal feedback and generative models, and addresses stability in contact-rich tasks. In this talk, we will focus on how to create new informative and compact representations of deformable objects that incorporate both analytical and learning-based approaches.
Danica Kragic is a Professor at the School of Computer Science and Communication at the Royal Institute of Technology, KTH. She received an MSc in Mechanical Engineering from the Technical University of Rijeka, Croatia, in 1995 and a PhD in Computer Science from KTH in 2001. She has been a visiting researcher at Columbia University, Johns Hopkins University and INRIA Rennes. She is the Director of the Centre for Autonomous Systems. Danica received the 2007 IEEE Robotics and Automation Society Early Academic Career Award. She is a member of the Royal Swedish Academy of Sciences, the Royal Swedish Academy of Engineering Sciences and the Young Academy of Sweden. She holds an Honorary Doctorate from the Lappeenranta University of Technology. Her research is in the areas of robotics, computer vision and machine learning. She has received ERC Starting and Advanced Grants. Her research is supported by the EU, the Knut and Alice Wallenberg Foundation, the Swedish Foundation for Strategic Research and the Swedish Research Council. She is an IEEE Fellow.