Among those putting forth predictions of the AI future is the Pew Research Center, which published a June 2023 report summarizing the results of its “Future of the Internet” project, in partnership with Elon University’s Imagining the Internet Center. More than 300 experts responded to a survey distributed by the project, which asked participants to predict how AI will impact society between the present day and the year 2035.
Unsurprisingly, the answers covered a broad range of opinions and perceived possibilities. Some were utopian, some dystopian; some were purely speculative, some grounded in practical extrapolation; and all of them were thoughtful, often deeply insightful.
The general sentiments collated by the report were as follows: 37% of respondents said they are more concerned than excited about the impact of AI over the next dozen years; 42% said they are equally concerned and excited. Only 18% said they are more excited than concerned.
The report served up the negatives before presenting the positives. The question prompting the former was:
“As you look ahead to the year 2035, what are the most harmful or menacing changes in digital life that are likely to occur in digital technology and humans’ use of digital systems?”
The emergent themes in the replies to this question included:
Human-centered development of digital tools and systems. There are fears that digital systems driven by profit and power incentives will lead to increased surveillance abuses and unregulated data collection, all for the purpose of “controlling people, rather than empowering them to act freely, share ideas and protest injuries and injustices.” Ethical design in AI, they fear, “will be an afterthought,” and profit-driven AI “is likely to increase inequality and compromise democratic systems.”
Human rights. AI, they fear, will intensify the current assault on privacy, creating new threats to rights. Disinformation and deepfakes will spike, widening social divides; crime and harassment will proliferate. And the already-looming threat of the massive elimination of jobs will result in “a rise in poverty and the diminishment of human dignity.”
Human knowledge. The best of what we know, they fear, will be “lost or neglected in a sea of mis- and disinformation... facts will be increasingly hard to find amidst a sea of entertaining distractions, bald-faced lies and targeted manipulation.” They also fear that cognitive skills will continue to decline and that “reality itself is under siege” as digital tools grow increasingly adept at accommodating “deceptive or alternate realities.”
Human health and well-being. Some who addressed this theme believe that the growth of digital tech has “already spurred high levels of anxiety and depression,” and predict that “things could worsen as technology embeds itself further in people’s lives and social arrangements.” Some of this worsening, they speculate, will be the result of tech-related “loneliness and social isolation,” while some will be the result of digital fantasies replacing real-world experiences. The strife of joblessness was also cited as a likely problem.
Human connections, governance, and institutions. The evolution of AI technology will proceed much faster than the development of norms, standards and regulation surrounding it, the respondents fear. Two emergent concerns were a trend toward “autonomous weapons and cyberwarfare” and “runaway digital systems.” These trends will contribute to the general deterioration of trust and faith in long-standing institutions, resulting in “polarization, cognitive dissonance and public withdrawal from vital discourse.” Digital systems could grow “too big and important to avoid, and all users will be captives.”
Wow. Lots to unpack there.
Each of these themes is worthy of serious contemplation. All address areas of our social landscape that are tremendously important, and all the fears and concerns cited are soberingly justified. There’s no Skynet Fever at work here; this is all a chillingly possible portrait of our near-term future.
Here are a couple of specific responses.
Frank Bajak, Cybersecurity Investigations Chief for the Associated Press
“The powerful technologies maturing over the next decade will be badly abused in much of the world unless the trend toward illiberal, autocratic rule is reversed. Surveillance technology has few guardrails now, though the Biden administration has shown some will for limiting it. Yet far too many governments have no qualms about violating their citizens’ rights with spyware and other intrusive technologies. Digital dossiers will be amassed widely by repressive regimes. Unless the United States suppresses the fascist tendencies of opportunist demagogues, the U.S. could become a major surveillance state. Much depends also on the European Union being able to maintain democracy and prosperity and contain xenophobia. We seem destined at present to see biometrics combined with databases – anchored in facial, iris and fingerprint collection – used to control human migration, prejudicing the Black and Brown of the Global South.
“I am also concerned about junk AI, bioweapons and killer robots. It will probably take at least a decade to sort out hurtful from helpful AI. Full autonomous offensive lethal weapons will be operative long before 2035, including drone swarms in the air and sea. It will be incumbent on us to forge treaties restricting the use of killer robots.
“Technology is not and never was the problem. Humans are. Technology will continue to imbue humans with god-like powers. I wish I had more faith in our better angels.”
Kat Schrier, associate professor and founding director of the Games & Emerging Media program at Marist College
“Systemic inequities are transmogrified by digital technologies (though these problems have always existed, we may be further harming others through the advent of these systems). For instance, problems might include biased representation of racial, gender, ethnic and sexual identities in games or other media. It also might include how a game or virtual community is designed and the cultural tone that is established. Who is included or excluded, by design?
“Other ethical considerations, such as privacy of data or how interactions will be used, stored and sold.
“Governance issues, such as how people report and gain justice for harms, how we prevent problems and encourage pro-social behavior, or how we moderate a virtual system ethically. The law has not evolved to fully adjudicate these types of interactions, which may also be happening across national boundaries.
“Social and emotional issues, such as how people are allowed to connect or disconnect, how they are allowed to express emotion.”
As quantified above, of course, it wasn’t all gloom and doom. Some optimism shines through in the report, in respondents’ answers to the question,
“As you look ahead to the year 2035, what are the best and most beneficial changes in digital life that are likely to occur in digital technology and humans’ use of digital systems?”
The same five emergent themes were touched on in these replies, but this time, the visions were positive:
Human-centered development of digital tools and systems. Respondents here foresee “a wide range of likely digital enhancements in medicine, health, fitness and nutrition; access to information and expert recommendations; education in both formal and informal settings; entertainment; transportation and energy; and other spaces.” They also foresee deep integration between the digital and the physical, resulting in “smartness” in objects and organizations, and that personal digital assistants will make everyone’s lives better.
Human rights. AI tools “can be shaped in ways that allow people to freely speak up for their rights and join others to mobilize for the change they seek,” they feel, forecasting “ongoing advances in digital tools and systems will improve people’s access to resources, help them communicate and learn more effectively, and give them access to data in ways that will help them live better, safer lives.”
Human knowledge. These respondents are looking for innovations in “business models; in local, national and global standards and regulation; and in societal norms and digital literacy that will lead to the revival of and elevation of trusted news and information sources in ways that attract attention and gain the public’s interest.” Digital tools can be leveraged to assure the verification of factual information, rather than the spread of disinformation.
Human health and well-being. Some believe that “the many positives of digital evolution will bring a health care revolution that enhances every aspect of human health and well-being... full health equality in the future should direct equal attention to the needs of all people while also prioritizing their individual agency, safety, mental health and privacy and data rights.”
Human connections, governance, and institutions. Among these respondents, the belief is that “society is capable of adopting new digital standards and regulations that will promote pro-social digital activities and minimize antisocial activities... people will develop new norms for digital life and foresee them becoming more digitally literate in social and political interactions.” In their best-case scenario, such changes will nudge digital systems to promote “human agency, security, privacy and data protection.”
Again, some specific responses:
Daniel S. Schiff, assistant professor and co-director of the Governance and Responsible AI Lab at Purdue University
“Some of the most beneficial changes in digital technology and human use of digital systems may surface through impacts on health and well-being, education and the knowledge economy, and consumer technology and recreation. I anticipate more moderate positive impacts in areas like energy and environment, transportation, manufacturing and finance, and I have only modest optimism around areas like democratic governance, human rights and social and political cohesion.
“In the next decade, the prospects for advancing human well-being, inclusive of physical health, mental health and other associated aspects of life satisfaction and flourishing, seem substantial. The potential of techniques like deep learning to predict the structure of proteins, identify candidates for vaccine development and diagnose diseases based on imaging data has already been demonstrated. The upsides for humans of maturing these processes and enacting them robustly in our health infrastructure are profound.
“It has become clear that tools like large language models are likely to substantially reform how individuals search for, access, synthesize and even produce information. Thanks to improved user interfaces and user-centered design along with AI, increased computing power, and increased internet access, we may see widespread benefits in terms of convenience, time saved and the informal spread of useful practices. A more convenient and accessible knowledge ecosystem powered by virtual assistants, large language models and mobile technology could, for example, lead to easy spreading of best practices in agriculture, personal finance, cooking, interpersonal relationships and countless other areas.
“Perhaps on a more cautionary note, I find it less likely that these advances will be driven through changes in human behavior, institutional practices and other norms per se. For example, the use of digital tools to enhance democratic governance is exciting and certain countries are leading here, but these practices require under-resourced and brittle human institutions to enact, as well as the broader public (not always digitally literate) to adapt... Reaching a new paradigm of human culture, so to speak, may take more than a decade or two. Even so, relatively modest improvements driven by humans in data and privacy culture, social media hygiene and management of misinformation and toxic content can go a long way.
“Instead then, I feel that many of these positive benefits will arrive due to ‘the technologies themselves’ (crassly speaking, since the process of innovation is deeply socio-technical) rather than because of human-first changes in how we approach digital life... Bringing hundreds of millions or billions of people into deeper engagement with the plethora of digital tools may be the single most important change in digital life in the next decades.”
Jeff Johnson, principal consultant at UI Wizards, Inc., former chair of Computer Professionals for Social Responsibility
“Cars, trucks and buses will be improved in several ways. They will have more and better safety features, such as collision-avoidance and accident-triggered safety cocoons. They will be mostly powered by electric motors, have longer ranges than today’s electric cars, and benefit from improved recharging infrastructure. In addition:
“A significant proportion of AI applications will be designed in a human-centered way, improving human control and understanding.
“Digital technology will improve humankind’s ability to understand, sequence and edit genetic material, fostering advances in medicine, including faster creation of more effective vaccines.
“Direct brain-computer interfaces and digital body implants will, by 2035, begin to be beneficial and commercially viable.
“Auto-completion in typing will be smarter, avoiding the sorts of annoying errors common with auto-complete today. Voice control and biometric control, now emerging, may replace keyboards, pointers and touch screens.
“Government oversight and regulation of digital technology will be more current and more accepted.
“Mobile digital devices will consume less power and will have longer-lasting batteries.
“Robots – humanoid and non-humanoid, cuddly and utilitarian – will be more common, and they will communicate with people more naturally.”
A well-known professor of computational linguistics based at a major U.S. university
“There are many opportunities for conventional digital technologies to make vast improvements in human life and society. Advances in computing alongside advances in the biosciences and health sciences are promising. A better understanding of the human mind is likely to arise over the next 15 years, and this could have major positive impacts, especially as it relates to problems of the mind such as addiction (to drugs, gambling, etc.) as well as depression and other disorders. Changes in social and political forces have given hope to combating issues surrounding climate change, clean energy, disappearing life and reduction of toxins in the environment.
“Solutions will be found to make cutting-edge machine learning computation less expensive in terms of processors and the energy to drive them. The rapid advances in machine learning and robotics will continue, and they will be used both for social good and ill. The good includes better methods of combating disease and climate, and robots that can do more tasks that people don’t want to or that are unsafe. Food production should also be more efficient via a combination of algorithms and robotics.
“3D printing is still just getting started; by 2035 it will be much more widely used in a much wider range of applications. There will be a better understanding of how to integrate 3D printing with conventional building construction.
“Tools to aid human creativity will continue to advance apace; how people create content is going to radically change, and in fact that process has already begun. We’ll see more technology implanted into humans that aid them in various ways, led by research in human-brain interfaces. By 2035 there is a chance that many changes will have been wrought by quantum computing. … If progress is made there, it could perhaps lead to better modeling of real-world systems like weather and climate change, and perhaps applications in physics.”
Zizi Papacharissi, professor and head of the communication department and professor of political science at the University of Illinois-Chicago
“I see technologies improving communication among friends, family and colleagues. Personally mediated communication will be supported by technology that is more custom-made, easier to use, conversational agent-supported and social-robot enabled. I see technology advancing in making communication more immediate, more warm, more direct, more nuanced, more clear and more high fidelity. I see us moving away from social media platforms, due to growing cynicism about how they are managed, and this is a good thing.
“The tools we use will be more precise, glossy and crash-proof – but they will not bring about social justice, heightened connection or healthier relationships. Just because you get a better stove, does not mean you become a better cook. Getting a better car does not immediately make you a better driver.
“The lead motivating factor in technology design is profit. Unless the mentality of innovation is radically reconfigured so as to consider innovative something that promotes social justice and not something that makes things happen at a faster pace (and thus is better for profit goals), tech will not do much for social justice. We will be making better cars, but those cars will not have features that motivate us to become more responsible drivers; they will not be accessible in meaningful ways; they will not be friendly to the environment; they will not improve our lives in ways that push us forward (instead of giving us different ways to do what we have already been able to do in the past).”
We can lament that these predictions are, by the numbers, more negative than positive; but Pew’s 300+ experts have given us much to think about in both the debit and credit columns. And it should be said that this is not just the province of experts: the welfare of every member of society is at stake, so everyone should learn all they can about these issues, develop informed opinions, and act where they have the opportunity to push toward the best AI future we can manage as we barrel toward 2035.