July/August 2024 edition of Interactions magazine is out
The latest issue of Interactions magazine, published by ACM, contains five feature stories, four of which are about AI and interaction design.
UX matters: the critical role of UX in responsible AI
Authors: Vera Liao (Microsoft Research), Mihaela Vorvoreanu (Microsoft), Hari Subramonyam (Stanford University), Lauren Wilcox (eBay and Georgia Institute of Technology)
To mitigate the harms of AI to people (individuals, communities, and society), responsible AI (RAI) uses a set of principles and best practices to guide the development and deployment of AI systems: fairness and nondiscrimination, transparency and explainability, accountability, privacy, safety and security, human control, professional responsibility, and promotion of human values. Understanding and positively augmenting the interplay between AI systems and the people they affect requires the combined expertise of people who specialize in technology and people who specialize in understanding people, such as those working in UX.
Designing for data sensemaking practices: a complex challenge
Authors: Armağan Karahanoğlu (Twente University), Aykut Coşkun (Koç University)
While the initial wave of health trackers focused on quantifying bodily metrics, the tide is turning toward more qualitative, subjective, and social-oriented tracking practices that capture lived experiences. But are we truly harnessing the power of this data or merely drowning in an ocean of numbers? This article explores this question from the data sensemaking perspective.
Inclusive computational thinking in public schools: a case study from Lisbon
Authors: Ana Cristina Pires, Filipa Rocha, Tiago Guerreiro, Hugo Nicolau (Lisbon University)
Integrating inclusive computational thinking (CT) education into children’s school curricula is essential for developing problem-solving, logical reasoning, and creativity skills. Implementing inclusive, multisensory robotic environments in schools can enhance learning, engagement, and collaboration among children with diverse abilities, promoting educational equity and reducing the risk of social exclusion.
Unmasking AI: informing authenticity decisions by labeling AI-generated content
Authors: Olivia Burrus (Adobe), Amanda Curtis (Adobe and Oxford University), Laura Herman (Adobe and Oxford University)
Providing transparency means presenting viewers with clear and understandable information about how content was created, and disclosing the role of AI in that process, if any. Transparency is essential for fostering viewer trust and understanding, as well as for ensuring accountability and support for responsible AI development and deployment.
Large Language Objects: the design of physical AI and generative experiences
Authors: Marcelo Coelho (MIT and Formlabs), Jean-Baptiste Labrune (MIT)
Large language objects (LLOs) are physical interfaces that extend the capabilities of large language models into the physical world. Due to their generative nature, LLOs present more fluid and adaptable functionalities; their behavior can be created and tailored for individual people and use cases; and interactions can progressively develop from simple to complex, better supporting both beginners and advanced users.