Statements of AI ethics principles and guidelines are proliferating. As might be expected given their varied origins in academic, governmental, non-governmental, and corporate settings, there are notable differences among these statements and guidelines. Yet all agree that one of the cornerstones of AI policy must be a commitment to developing AI and machine learning systems and big data applications that are aligned with human values.
Privacy is one of the most frequently cited human values in the many dozens of AI ethics principles statements and guidelines that have been developed worldwide to date. But while concerns about data privacy are apparently as common in Asia and Africa as they are in the Americas and Europe, the location of boundaries between the private and public domains is remarkably varied, reflecting the very different cultural and political conditions within which global infrastructures for internet and wireless connectivity are turned to locally relevant uses. Thus, while serious moral misgivings have been expressed in the U.S., for example, about police and security uses of facial recognition systems, these uses have been widely embraced in China as part of “smart governance” in the public interest. Privacy concerns also differ markedly across generations. As “digital natives,” global youth have in common a significantly greater comfort with—and at times powerful social yearnings for—digital exposure that are not shared by even those “naturalized” netizens who are their nearest generational kin. Privacy is not a singular, universal value; it is a topographically complex field of ethical concerns.
A core value much less frequently and explicitly mentioned in connection with developing “human-centered” AI is social cohesion. Yet concerns about social cohesion are arguably intrinsic to the privacy concerns raised by ubiquitous 24/7 digital connectivity. Social media, e-commerce, and search platforms are shaping the dynamics of human sociality in ways that are complex, recursive, and inherently qualitative. While increasing amounts of social energy are being redirected from physical to digital spaces, opening prospects for entirely new kinds of social cohesion, recent research in the neuroscience of communication suggests that asynchronous, digitally mediated connectivity may compromise social learning, empathy, and emotional fluency. Moreover, just as concerns about privacy vary across national, cultural, and generational boundaries, so do concerns regarding the interface of the personal and social that emerge with the normalization—and normativity—of what have been called the expository society and surveillance capitalism. This variation reflects differences in the degrees to which persons are conceived fundamentally as individual beings or as relational becomings. But it also reflects different depths of concern about how the epistemic power of digital connectivity is connected with its ontological power—the ways in which increasingly high volumes, varieties, and velocities of data about who each of us is can be (and are being) used to shape who we become, as citizens as well as consumers.
With a nod toward the original Greek meaning of the term “symposium”—a gathering of drinking partners—our aim is to open a convivial space for thought leaders, senior academics, postdocs, graduate students, and practitioners to table cultural, national, and generational differences and to generate shared understandings of privacy and social cohesion in pursuit of more equitable and humane AI.