The gulf between the technical brilliance claimed for Google's deep learning model and its real-world application points to a common problem that has hindered the use of AI in medical settings.
It wasn't just technical work but also significant social and emotional labor that turned Sepsis Watch, a Duke University deep-learning model, into a success story.
The Anthropology + Technology conference brings together pioneering technologists and social scientists from across the globe. Its aim is to facilitate dialogue on emerging technology projects in order to help businesses benefit from more socially responsible AI.
Artificial intelligence is permeating a wide range of areas and is bound to transform work and society. This dossier asks what needs to be done politically to shape this transformation for the sake of the common good.
New worlds need new language. One of the things we need to name is what is happening to ourselves and our data proxies. Expanding our language from privacy to personhood opens up conversations in which we can see that our data is us, our data is valuable, and our data is being collected automatically.
The book explores the future of artificial intelligence (AI) through interviews with AI experts, surveys AI history, product examples, and failures, and proposes a UX framework to help make AI successful.
This article argues that the well-publicized social ills of computing will not go away simply by integrating ethics instruction or codes of conduct into computing curricula. The remedy to these ills instead lies less in philosophy and more in fields…
Angèle Christin argues that we can explicitly enroll algorithms in ethnographic research, which can shed light on unexpected aspects of algorithmic systems, including their opacity. She delineates three meso-level strategies for algorithmic ethnography.