At the "AI in the Loop: Humans in Charge" conference, which took place Nov. 15 at Stanford University, panelists proposed a new definition of human-centered AI – one that emphasizes the need for systems that improve human life and challenges problematic incentives that currently drive the creation of AI tools.
Bringing together a motley crew of social scientists and data scientists, this special theme issue of Big Data & Society explores what an integration, or even fusion, between anthropology and data science might look like.
“All research is qualitative; some is also quantitative”
– Harvard social scientist and statistician Gary King
From transforming the ways we do business and reimagining health care, to creating planet-restoring housing and humanizing our digital lives in an age of AI, Expand explores how expansive thinking across six key areas—time, proximity, value, life, dimensions, and sectors—can provide radical, useful solutions to a whole host of current problems around the globe.
Best practices for addressing the bias and inequality that may result from the automated collection, analysis, and distribution of large datasets.
The Response-ability Summit, formerly known as the Anthropology + Technology Conference, is a unique two-day event that brings social scientists and technologists together to foster interdisciplinary conversations on the important topic of socially responsible tech.
In Human-Centered AI, Professor Ben Shneiderman offers an optimistic realist's guide to how artificial intelligence can be used to augment and enhance humans' lives.
Curated by Experientia partner Jan-Christoph Zoels and Sara Fortunati, director of the Torino Circle of Design, the conference showcased international best practices in humanizing technology. It was structured into six thematic sessions: ethics, public services, healthcare, AI, mobility and learning. All videos are now available, with English subtitles.
But scientists are getting better at measuring where each system fails.
Rather than trying to fix the biases of AI systems and the human errors behind them, we need to find ways to coexist with them. Anthropology can help us a lot here.
This special issue collects six articles tackling artificial intelligence (AI) from a social science perspective.
The gulf between the technical brilliance claimed for Google's deep learning model and its real-world application points to a common problem that has hindered the use of AI in medical settings.
It wasn't just technical work but also significant social and emotional labor that turned Sepsis Watch, a Duke University deep-learning model, into a success story.
This essay by AI specialist Jessy Lin explores some of the possibilities to rethink how humans and "intelligent" machines interact today.
The Anthropology + Technology conference brings together pioneering technologists and social scientists from across the globe. Its aim is to facilitate dialogue on emerging technology projects to help businesses benefit from more socially responsible AI.
Artificial Intelligence is permeating a wide range of areas and it is bound to transform work and society. This dossier asks what needs to be done politically in order to shape this transformation for the sake of the common good.
New worlds need new language. One of the things we need to name is what is happening to ourselves and our data proxies. Expanding our language from privacy to personhood lets us have conversations in which we can see that our data is us, our data is valuable, and our data is being collected automatically.
The book explores the future of artificial intelligence (AI) through interviews with AI experts, examines AI history, product examples and failures, and proposes a UX framework to help make AI successful.
This article argues that the well-publicized social ills of computing will not go away simply by integrating ethics instruction or codes of conduct into computing curricula. The remedy to these ills instead lies less in philosophy and more in fields…
Angèle Christin argues that we can explicitly enroll algorithms in ethnographic research, which can shed light on unexpected aspects of algorithmic systems – including their opacity. She delineates three meso-level strategies for algorithmic ethnography.