I have been reading sociologist Thierry Bardini’s Bootstrapping (2000), a fascinating account of the origins of personal computing centered on the groundbreaking work of Douglas Engelbart (1925-2013). Engelbart is a once forgotten, later rediscovered and celebrated innovator. In the 1960s, he and his team created the mouse, hypertext, the graphical user interface, networked computing, structured data, and other tools we take for granted today. As I write this, I am using a mouse, relying on cut-and-paste, switching across windows with different functions, and searching a hyperlinked network of knowledge. The implements and media that let us act as manipulators of and agents in digital virtual space, and to experience that space as an extension of our mental space, were first envisioned by Engelbart in the 1960s. He was largely forgotten until the 1990s, when he received an honorary doctorate from Yale, the Lemelson-MIT Prize, and the National Medal of Technology and Innovation.
A former Navy radar technician inspired by Vannevar Bush’s Memex concept and by his wartime deployment in the Philippines staring at a radar screen, Engelbart later studied electrical engineering at Berkeley. In the 1960s, he established the Augmentation Research Center (ARC) at the Stanford Research Institute (SRI), a Stanford University-affiliated but privately operated organization.
Benefiting from the largesse of federal military funding, Engelbart’s quirky operation received support for experimental research organized around his concept of Human Augmentation. The interesting part is that the idea behind Human Augmentation wasn’t just technical. While the lab was staffed primarily by engineers, ARC focused on human-computer interaction and on enhancing organizational capabilities in knowledge generation and complex decision-making. As Bardini shows, this involved reimagining the computer and human-computer interaction altogether: the computer would serve not just as a tool for conducting traditional tasks but as a cognitive prosthesis that enhances human intelligence. That idea was central to the human augmentation paradigm.
According to Bardini, Engelbart’s lab coined the term “knowledge worker.” ARC understood the importance of computers as symbolically-oriented systems. Unlike traditional machines, where the human operator conducts limited operations (think of an industrial machine that performs repetitive functions), computers extend the cognitive, conceptual, and social domains of human activity. Instead of the traditional operator learning a small set of functions permitted by the machine, the computer required a user who interacted symbolically with it, treating the machine as a medium: a physical extension and virtual expansion of their bodily and mental space. The challenge was imagining ways to integrate bodily and mental actions with the logical operations computers could carry out quickly and efficiently. In doing so, Engelbart and his team had to invent both the personal computer and its user.
Bardini documents how the “bootstrapping” approach served the research and development of computer technology and human-machine interactions in Engelbart’s lab. Symbolically-oriented machines would co-evolve with the needs of knowledge workers working collaboratively. Under the notion of bootstrapping, innovation would come out of continuous experimentation. ARC’s experimental object was human-computer interaction, and its experimental subjects were its own staff. Put differently, Engelbart’s lab was not only about designing better computers or computer peripherals and accessories, but engaged in an earnest reimagining of human-computer systems and their cognitive and organizational integration toward the enhancement of what Engelbart called Collective Intelligence.
Engelbart’s vision was ahead of its time: he foresaw computers and humans collaborating to amplify intellectual capabilities, and his lab’s emphasis on the symbiotic, co-evolving relationship between humans and machines laid groundwork for modern human-computer interaction and marked a shift in the paradigm of computing and knowledge management. However, by the mid-1970s, Engelbart’s lab faced funding cuts. Many of its engineers left for Xerox PARC and other emerging technology hubs in Silicon Valley and elsewhere. Engelbart’s work, as noted, was seen as too radical and quirky and remained underappreciated until much later, even as ARC’s inventions were appropriated and turned into commercially successful products (think of the later association between the mouse, the GUI, and the Apple Macintosh).
Bardini makes a critical observation about computing’s early era, in which Engelbart was a key figure. He argues that in the 1950s and 60s, when computing technology represented a new space of opportunities, two frameworks competed: Engelbart’s Human Augmentation (HA) and the longer-standing framework of Artificial Intelligence (AI). Whereas HA saw computing as enhancing cognition, knowledge, and organization, AI aimed to use computing technology to recreate human intelligence. AI envisioned computers replicating the human mind: a synthetic tool that could outperform the human brain and potentially replace humans.
Bardini identifies Norbert Wiener’s cybernetics as among the progenitors of this vision. He sees a continuity in the work of the computer scientist and psychologist J. C. R. Licklider, who, as head of ARPA’s Information Processing Techniques Office, funded Engelbart’s research. Licklider famously developed the concept of “man-computer symbiosis” and argued for computers as “intelligence amplification,” much like Engelbart’s Human Augmentation. However, according to Bardini, Licklider saw the symbiosis as transitional. He foresaw computing surpassing human brain capacities in a process he called “cerebration.” Bardini quotes Licklider suggesting that, “it seems worthwhile to avoid arguments with (other) enthusiasts for artificial intelligence by conceding dominance in the distant future of cerebration to machines alone.” Until then, “the main intellectual advances will be made by men and computers working in intimate association” (quoted in Bardini 2000, 20-21). In Licklider’s grand vision of the future of computing, Engelbart’s work was mainly useful for this “interim” era during which humans would need to interact cognitively and symbolically with computers. Over time, as machines grew in intelligence, cerebration would render humans increasingly unnecessary.
Bardini calls this a clash of two frameworks — AI and HA. What strikes me looking at present-day developments with artificial intelligence is that these visions, rather than two competing frameworks, actually represent two emergent but competing political ethics of the computer age. On one hand, computers, as virtual and symbolic machines, enhance human capabilities — cognitive, knowledge producing, and social-organizational. On the other hand, computers are seen as means of replacing humans in a wide array of knowledge and complex decision-making operations — and ultimately concentrating power with those who control the machine systems.
The emergence of Large Language Model (LLM) AI systems such as ChatGPT has brought the old dilemma forcefully back to the fore. The clash between the AI and HA frameworks arguably lay dormant from the 1980s to the 2000s, as the IT revolution and the rise of personal computing appeared to align with Engelbart’s vision of empowering “knowledge workers.” The popularization of this term, along with theorists such as Daniel Bell heralding the arrival of post-industrial society, suggested an expansion of computing’s role in augmenting human capabilities (and thus efficiency and profit-making) in business and industry. The PC and the Internet heralded the final displacement of the industrial worker and their replacement by the knowledge worker. However, this realization of Engelbart’s vision occurred through a fundamentally different, market-based approach rather than the original scientific model he had envisioned.
Today, the dilemma reasserts itself in the latest era of AI. Do machines replace human cognitive work — do they write our novels, make our movies, and generate our research for us? Or do they serve as tools to enhance human capabilities? Do they create new, historically unprecedented opportunities for human augmentation? The two ethics, of course, do not play out in a vacuum; they are not mere intellectual differences. Instead, they are battled out in the political economy of the distribution and integration of computing across multiple realms of human experience — work, social life, politics, creativity, health, leisure time, and their embeddedness in existing political, market, and knowledge structures.
As we navigate this new era of rapidly advancing artificial intelligence, I find it important to critically examine the profound ethical and political implications of these technologies through an understanding of these histories. It seems to me that the competing visions of AI and HA that Bardini identifies in the early computing era have renewed urgency. We must grapple with the fundamental question of whether we will allow AI to replace and displace human cognition and labor, exacerbating inequalities and concentrating power in the hands of a few, or whether we will harness the potential of AI to augment and empower human capabilities, democratizing access to knowledge and decision-making.
We are at an important crossroads, and the path taken today will have far-reaching consequences for the future of work, creativity, and social organization. There is an urgent need to articulate a political ethics of computing (and not just of AI). For me, that ethics should prioritize human flourishing, social equity, and the preservation of human agency.
The history of computing drives home the point that we need to resist the temptation of viewing AI as an inevitable force beyond our control. It seems to me that there are opportunities to organize in order to shape the trajectory of these powerful technologies. We should ground our efforts in a meaningful understanding of their historical and philosophical roots, as well as their potential social, economic, and political ramifications. I would venture to modestly suggest that this is one of the few hopes we have to build a future in which artificial intelligence enhances rather than diminishes our shared humanity, and in which the benefits of these technologies are distributed equitably.