So over the weekend we learned that Sam Altman is apparently nervous about the impact of AI on democratic elections:
i am nervous about the impact AI is going to have on future elections (at least until everyone gets used to it). personalized 1:1 persuasion, combined with high-quality generated media, is going to be a powerful force. (@sama on x, 04-08-2023)1
And that even though he is the CEO of the company that has arguably done more than anyone else to get generative ML tools into the hands of the public, he cannot come up with anything better than “raising awareness”:
although not a complete solution, raising awareness of it is better than nothing. we are curious to hear ideas, and will have some events soon to discuss more. (@sama on x, 04-08-2023)
It seems that Altman's "concern" has already had its impact on OpenAI's public policy narrative. A few hours after Altman's tweet, OpenAI's new European head of policy and partnerships had translated this into a call to action:
Who are the best thinkers/builders at the intersection of generative AI and elections in Europe? Ideas welcome! (@sGianella on x, 04-08-2023)
Now, there are plenty of reasons to suspect that Altman’s “nervousness” is nothing more than self-serving criti-hype (see here for such an interpretation), but in this case it is worth dwelling a bit on the particular connection being made between AI and elections (especially since both the US and Europe have upcoming elections).
There are indeed reasons to be concerned about the impact of generative ML models on elections and other democratic processes2, but it is beyond absurd that such concerns are being articulated by OpenAI and its representatives:
The way in which OpenAI introduces its models into the public sphere stands in stark contrast to the very norms of openness, transparency, and equality on which democratic elections are based. Instead of these values, OpenAI deliberately hides how its models are trained, making it impossible for researchers, policymakers, and the general public to understand "the impact AI is going to have on future elections."
If Sam Altman and his surrogates are truly interested in limiting any undue interference with upcoming elections by the technology they are peddling to the public, then they should apply the same level of transparency to their publicly available models. Until that happens, we should keep ML systems as far away from the electoral process as possible.
Although the impact is probably overstated, the underlying problem is not so much "AI" but rather disinformation that may or may not be aided by "AI" systems. As Sayash Kapoor & Arvind Narayanan have convincingly argued, the real bottleneck for disinformation campaigns is not generation but distribution, and it is therefore at the distribution level that this problem should be addressed. ↩︎