“What a relief,” said Susannah Glickman. A PhD student at Columbia University in New York, she had just successfully defended her dissertation on the history of quantum computing, and had morphed into Dr Susannah Glickman. Her thesis, as she wrote in the introduction, explored “how quantum computing went from the theoretical fringes to a brick and mortar set of institutions”.
I’m guessing, but I imagine hers is the first PhD in the history of quantum computing. Historians of technology generally investigate the origin and development of something in the past, be it telephones, engines or medical devices. But Glickman’s thesis – entitled “Histories, Tech, and a New Central Planning” – explores how and why the US invested tens of billions of dollars, launched various federal policies and programmes, and indeed created entire industries devoted to a technology whose applications lay entirely in the future.
As an undergraduate at Reed College in Portland, Oregon, Glickman did a joint degree in mathematics and anthropology. For her undergraduate project in 2015 she worked with a maths professor writing algorithms for quantum computers. But she was baffled that anyone would want to write algorithms for devices that did not yet exist – and may never even do so. When she posed that question to her supervisor, he couldn’t give her a satisfactory answer.
Turning speculation into solutions
Glickman began her PhD at Columbia in 2016, still brooding about the fact that an entirely speculative technology could inspire such a huge industrial infrastructure and such far-reaching federal programmes and initiatives – the “central planning” of her title. The answer, she discovered during the course of her thesis, comes in two parts.
The first is the notion – strongly promoted by scientists and politicians alike – that the development of all technology follows a “natural course” and any nation that ignores this fact will endanger its global power and security. As Glickman describes in her dissertation, this narrative was used to push America’s development of semiconductors, which in the 1970s were said to be vital to keep the US ahead in the Cold War. Later, in the 1980s, they were needed to staunch the country’s decline relative to Japan, and in the 1990s they were needed to support encryption devices.
The narrative that investing in technology is good for the national interest had, in other words, a long track record of success. Hardly surprising, then, that those with a vested interest in quantum computers used the familiarity and persuasiveness of this narrative to give these devices the hard sell too. In doing so, they had to challenge the more neo-liberal idea that any technology should be left to develop at its own pace.
Glickman’s template for this process is the semiconductor industry’s use of Moore’s law, which is named in honour of Gordon Moore. As the co-founder of tech giant Intel, he famously predicted in 1965 that the number of transistors on an integrated circuit would double every year (a figure he later revised to every two years). Clever public relations extrapolated what was essentially a rule of thumb into an iron “law” demonstrating the inevitability of smaller and smaller computer chips. Woe to the US, it seemed, if it did not invest heavily in the technology. The resulting massive investments, Glickman showed, made that extrapolation self-fulfilling. Her dissertation treated the development of quantum computing as the same basic process, but on steroids.
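Moore’s rule of thumb is, mathematically, just compound doubling. A minimal sketch of the extrapolation (using hypothetical baseline figures for illustration, not data from Glickman’s dissertation):

```python
# Moore's law as simple exponential growth: a count doubles once
# every fixed period. The figures below are illustrative, not sourced.

def moores_law(count0: float, years: float, doubling_period: float = 2.0) -> float:
    """Project a transistor count forward, assuming one doubling
    every `doubling_period` years."""
    return count0 * 2 ** (years / doubling_period)

# A hypothetical chip with 2,300 transistors (roughly the scale of an
# early-1970s microprocessor), projected 30 years ahead at a doubling
# every two years -- 15 doublings in all:
projected = moores_law(2_300, years=30)
print(f"{projected:,.0f}")  # prints "75,366,400"
```

The point of the sketch is how steep the curve is: fifteen doublings turn a few thousand transistors into tens of millions, which is why treating the rule of thumb as an iron law made such a persuasive case for sustained investment.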
The second part of Glickman’s answer has to do with the extraordinary claims of what quantum computers would do in the future. Such devices, their backers said, would solve hitherto insoluble problems like protein folding and optimizing nitrogen fixation. They would crack most encryption methods, and would develop uncrackable ones. Richard Feynman, John Wheeler and other prominent physicists who spoke about quantum computing’s potential seemed to ratify the promises, helping to convince federal administrators to take them seriously. Quantum computing, writes Glickman, was held up as “revolutionary, era-defining”.
Glickman’s dissertation does not take a stand on whether these narratives or promises are true or false. Rather, her aim is to describe their role in creating the political, economic and industrial environment in which a massive infrastructure, political programmes and planning, and plentiful funding sprang up around a still-speculative technology.
In her research, Glickman was startled by how obsessed quantum computing advocates are with the field’s history, often saving boxes of photographs, stacks of notes, and caches of e-mails. “I knew the historians would come knocking,” said one of her interview subjects. From my experience, that’s in sharp contrast to other kinds of scientists, who keep only reprints, trash all their e-mails, and view history as last year’s journals. But quantum computing practitioners were, Glickman writes, if anything “too excited about documenting their own histories,” for such documentation can distort and disguise history, obscure ambiguities, erase dead ends, and encourage over-the-top claims.
One promoter of quantum computers told Glickman a holy-grail-like story about how the history of technology began with fire and has culminated with quantum computing. Another related a metaphysical tale of how just as the quantum world is the ultimate reality and the classical world derivative, so quantum computing is the natural way and classical computing its imperfect predecessor.
The critical point
Glickman’s PhD was also unusual in that the committee before whom she had to defend it consisted of an anthropologist and three historians who specialized in US history, political theory and business. This diversity was due to the fact that her dissertation was not a straightforward story about technology developing in its own special ether, but was as much about US history, political economy, industrial competition, philosophies and myths. “I had to adapt to the subject matter,” she told me. “Studying quantum computing made me a different kind of historian.”
Glickman is now an assistant professor in the history department at Stony Brook University, where she is working on an interdisciplinary project lying at the intersection of liberal arts and quantum and AI technologies. But the thoroughness of Glickman’s PhD made me wonder whether most historians haven’t been overlooking these features in past technologies. Her dissertation on the history of a future technology suggests that historians of past technologies will have to become different too.
The post Why was so much spent on quantum computers before they even existed? appeared first on Physics World.