Yuval Noah Harari wrote recently about how AI/ML is hacking the operating system of civilization. His is a more subtle warning about the potential impact of Large Language Models like ChatGPT: Skynet is unlikely to go to war against humanity anytime soon, but he is right that this technology has incredibly broad implications. Human consciousness is an emergent property, and as I wrote in Beyond The Knee of the Curve, this change is going to proceed exponentially. If we are already being surprised by emergent behaviors in ChatGPT, the number of surprises will likely accelerate. I agree with Stephen Wolfram that general AI is still a long way off, so creativity and consciousness are not things we have to worry about yet, but in all likelihood their emergence will be unplanned.
That said, the big impacts on knowledge work and thus modern society are already upon us, so thinking about what that will mean does carry some urgency. In software development there is a term “journeyman coder” (yes, this term reflects the gender imbalance in software engineering) — the idea that some parts of software engineering just need competent, diligent work and will take a lot of time, but not necessarily brilliance. It can be used disparagingly (“donkey work”), but throughout my career, particularly when my employer was in the mood for cost-cutting or rank-and-yank style management-by-Bell-Curve, I have made the point that having a diverse mix of levels and styles is an important part of getting complex systems built. A team of prima donna superstars can be highly dysfunctional and exceptionally hard to manage. Furthermore, early in our careers all of us were journeyman coders: software development has for many years been an apprenticeship profession.
What happens when ChatGPT automates all of that, leaving just the “creative” work? Wolfram is right: we face a fourth transition in the nature of work, from agriculture to industrial manufacturing to knowledge work to pure creative work — at least until creativity itself emerges from LLMs or their successors. And here’s the problem: creativity starts with craft. This applies as much to coders as to painters, perhaps more so when the role of the professional is to formulate a creative prompt for the LLM and then evaluate and refine its output. The whole “co-pilot” model will have a profound impact on service sector productivity, but it will favor those who can work effectively in partnership with an AI: that prima donna no longer needs her ungrateful junior colleagues to be effective, which favors the brilliant and experienced. Putting aside questions of inequality, where exactly will that experience come from if those bright sparks no longer have a chance to learn on the job?
This suggests that the nature of education will also need to change. If you cannot learn on the job, you must learn in simulation, with a feedback loop. In all likelihood, the teacher will also be an AI, so the journeyman coder or lawyer or translator or any other professional working with language will have to learn from a dedicated teaching AI both how to get the most out of the AI and how to recognize its errors. At first glance this may seem like a truly dystopian outcome: is our future really being taught by one AI to work more effectively with another? But apprenticeship has been a human practice for centuries, and scaling a solution across society will be difficult absent automation. While my experience of being apprenticed to a very grumpy ex-SAS Scotsman at Goldman Sachs is unlikely to be the norm going forward, the model is a good one for gaining experience. What will change is that “on the job” may no longer mean participating in the creation of commercial-grade work, at least not at first.
Similarly, I think Harari’s concerns about machines producing culture in the hands of those who wish to shape perceptions to their own ends go a bit too far. I think cybersecurity’s evolution is the better example. There will be cultural white hat hackers operating in opposition to the black hats, each deploying their own arsenals of AI to produce deceptive cultural artifacts and to debunk them. Just as in cybersecurity, at different times innovations will give one side in that back-and-forth battle the upper hand, but it is unlikely either will sustain its advantage for very long. The black hats will generate disinformation and deepfakes; the white hats will detect them, or deploy immutable records and watermarks to try to secure observed truth, or establish standards of attestation whose absence will be the liar’s shibboleth. As Darshan Vaidya has pointed out before, in a world where transparency via an immutable record is the norm, we have a Reverse Lemon scenario: those who disclose less mark themselves out as potential frauds. Advantage: white hats. And maybe the record gets hacked by a quantum algorithm and rewritten for a while. Advantage: black hats. And so it goes. It is not the end of history, but the continuance of the contest of ideas that has sustained human civilization for a very long time.
Given this growing importance of trust, immutable records, and attestation, I think Web3 technologies and the underlying cryptographic primitives will have a significant role to play in securing the cultural operating system during what is likely to be a challenging couple of decades ahead. The prospect of dystopia, in my mind, comes only if we stick with the current Web2 approach to data, privacy, and trust by reference to well-known central authorities: AI, but no crypto. That said, I think pointing to cryptocurrency as a way to provide for UBI so we can cope with widespread unemployment is a weak and unimaginative connection between these two trends, one that fails to credit the importance of meaningful work. We don’t just need to fix inequality; we need to do it in the context of everyone’s professional lives and contributions to society and the greater human project, which takes more than money and bare sustenance. My hope is that these changes lead to a participatory capitalism organized around creative work. That needs both innovations operating in concert.
ChatGPT: tell me what’s wrong with my essay in the style of a grumpy Scotsman.