Proceedings of the 15th Biannual Conference of the Italian SIGCHI Chapter
"UX professionals must seize the AI career imperative or become irrelevant," writes Jakob Nielsen on his blog UX Tigers, particularly given that current AI-driven tools remain "far from user-friendly with their clunky, prompt-driven interfaces", and given the current state of adult (digital) literacy.
If you're a non-native English writer, you should know that GPT detectors are biased against you.
Ten conversations by Urban AI, a Paris-based think tank, with experts worldwide, exploring the future of urban artificial intelligence.
The book sheds new light on some of the most important themes in AI ethics, from the differences between Chinese and American visions of AI, to digital neo-colonialism. It is an essential work for anyone wishing to understand how different cultural contexts interplay with the most significant technology of our time.
A new area called "human-centered machine learning" (HCML) promises to balance technological possibilities with human needs and values. However, there are no unifying guidelines on what "human-centered" means, nor how HCML research and practice should be conducted. This article by Stevie Chancellor in Communications of the ACM draws on the interdisciplinary history of human-centered thinking, HCI, AI, and science and technology studies to propose best practices for HCML.
This report by the IndustryLab of Ericsson, the Swedish multinational, aims to introduce the ethics of AI and explore how this fast-growing technology needs to align with humans’ moral and ethical principles if it is to be embraced by society at large.
Hal Wuertz, Adam Cutler and Milena Pribic of IBM Design defined a unique set of five skills for “AI Design.”
At the "AI in the Loop: Humans in Charge" conference, which took place Nov. 15 at Stanford University, panelists proposed a new definition of human-centered AI – one that emphasizes the need for systems that improve human life and challenges problematic incentives that currently drive the creation of AI tools.
Bringing together a motley crew of social scientists and data scientists, the aim of this special theme issue of Big Data & Society is to explore what an integration or even fusion between anthropology and data science might look like.
"All research is qualitative; some is also quantitative," says Harvard social scientist and statistician Gary King.
From transforming the ways we do business and reimagining health care, to creating planet-restoring housing and humanizing our digital lives in an age of AI, Expand explores how expansive thinking across six key areas—time, proximity, value, life, dimensions, and sectors—can provide radical, useful solutions to a whole host of current problems around the globe.
Best practices for addressing the bias and inequality that may result from the automated collection, analysis, and distribution of large datasets.
The Response-ability Summit, formerly known as the Anthropology + Technology Conference, is a unique two-day event that brings social scientists and technologists together to foster interdisciplinary conversations on the important topic of socially responsible tech.
In Human-Centered AI, Professor Ben Shneiderman offers an optimistic realist's guide to how artificial intelligence can be used to augment and enhance humans' lives.
Curated by Experientia partner Jan-Christoph Zoels and Sara Fortunati, director of the Torino Circle of Design, the conference presented the best international practices in humanizing technology. It was structured into six thematic sessions: ethics, public services, healthcare, AI, mobility and learning. All videos are now available, with English subtitles.
But scientists are getting better at measuring where each system fails.
Rather than trying to fix the biases of AI systems and the human errors behind them, we need to find ways to coexist with them. Anthropology can help us a lot here.
This special issue collects six articles tackling artificial intelligence (AI) from a social science perspective.
The gulf between the technical brilliance claimed for Google's deep learning model and its real-world application points to a common problem that has hindered the use of AI in medical settings.