AI Was Born in a Room Full of People; Its Future Is Plural

We’ve become too used to the myth of the lone genius.

The college dropout working in his father’s garage, whose genius idea changes everything.

The myth of the lone genius has always been comforting because it's dramatic, simple, and flatters the belief that world-changing innovation only happens through isolated brilliance. The origin of Artificial Intelligence (AI), however, tells a less dramatic but far more important truth.

On the 31st of August, 1955, John McCarthy (Dartmouth College), Marvin Minsky (Harvard University), Nathaniel Rochester (IBM), and Claude Shannon (Bell Telephone Laboratories) submitted a proposal for a two-month, ten-man study of how machines could solve the kinds of problems reserved for humans and improve their own performance by developing abstractions and concepts and using language. The workshop took place between June and August 1956, and that summer the term "Artificial Intelligence" was coined and the culture of AI was born.

At the 1956 Dartmouth AI workshop, the organizers and a few other participants gathered in front of Dartmouth Hall. (Photo: https://spectrum.ieee.org/dartmouth-ai-workshop)

The development of Artificial Intelligence did not emerge from the mind of one individual working in secrecy. While John McCarthy coined the term, he wasn't declaring the result of one perfect idea fully formed in his head; he was inviting others into a critical shared question, one explored collectively by computer scientists, mathematicians, linguists, and philosophers, each contributing fragments of insight. That origin explains why, decades later, today's transformative breakthroughs live on GitHub and are stress-tested in hacker forums, not in proprietary vaults.

AI is not the product of a breakthrough model from a Fortune 500 company. It started with thinking individuals in the same room, arguing and challenging assumptions, in the recognition that the philosopher needs the engineer and the linguist needs the mathematician. That is precisely why the community will determine where it goes next.

The Community Stack: How AI Actually Gets Built

Since AI started with a debate among industry professionals, its entire history revolves around building infrastructure that scales the debate. Underneath the algorithms, models, and complex code lies the human infrastructure that determines what gets built, what gets scaled, and what needs to go. This is what I call the community stack, a three-layered foundation.

The Trust Network: AI conferences aren't just places for presenting impressive code; they are places to build trust through quick follow-ups after presentations, hallway conversations, and informal dinners. In a digital world, that trust is quantified: a project with 50,000 GitHub stars carries the signal of a global peer review, a sign that this work can be trusted alongside yours, while a project with far fewer stars faces much more scrutiny before anyone depends on it.
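To make that signal concrete, here is a minimal sketch in Python, using GitHub's public REST API, of the check a maintainer implicitly runs before adopting a dependency. The repository names below are arbitrary illustrations, not part of the story above.

import json
import urllib.request

def star_count(repo: str) -> int:
    """Fetch a repository's stargazer count from GitHub's public REST API.

    Uses the unauthenticated endpoint GET /repos/{owner}/{repo}, which is
    rate-limited to roughly 60 requests per hour per IP.
    """
    url = f"https://api.github.com/repos/{repo}"
    # GitHub requires a non-empty User-Agent header on API requests.
    req = urllib.request.Request(
        url,
        headers={"Accept": "application/vnd.github+json", "User-Agent": "trust-signal-sketch"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["stargazers_count"]

# Example repositories; any public owner/name pair works here.
for repo in ["pytorch/pytorch", "huggingface/transformers"]:
    print(f"{repo}: {star_count(repo):,} stars")

Of course, stars are a rough proxy rather than a guarantee; the point is that the signal is public, countable, and checked by thousands of strangers rather than one gatekeeper.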

The Mentorship Engine: AI builds through mentorship. A complete project shows what works, but mentorship provides insight into what does not work and which dead ends not to revisit. For example, reading "Attention Is All You Need" shows how the Transformer architecture works, but understanding the team's dead ends comes only through mentorship, in lab meetings or over coffee. When a junior engineer hits a wall, the senior engineer may not know the answer but might say, "I know someone at ABC who faced this issue last month. Let me introduce you."

The Serendipity Machine: It is an unwritten law that the most important AI ideas do not arrive through structured meetings but at 10 PM over cold pizza and Coke, when someone from a different team strolls over. An example of such an informal cross-pollinating chat, where a computer vision problem gets picked up by someone who works on audio, is the story of AlexNet, involving Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton. The success of AlexNet was not just the architectural brilliance of the model; it was the collision of Hinton's theoretical persistence, Krizhevsky's coding prowess, and the sudden, community-provided availability of powerful GPUs. These elements converged not in isolation but within the fertile, collaborative ecosystem of Hinton's lab, exactly the kind of serendipity that today's remote work makes rarer.

The Future of AI is Plural

As AI systems grow even more powerful, the key question is not whether AI can shape the future, but who will get to see the future being shaped. This is the problem of the modern world: with many companies going remote, we risk optimising away serendipity, hence the need to intentionally create more "collision spaces", the digital equivalent of the hallway track and the cold pizza-fueled night.

When a closed lab discovers a critical flaw in its model, its first instinct is to contain it and quietly patch it, until it surfaces as a public failure. Now imagine that same flaw discovered in an open-source model like Llama 3 or Mistral. What happens? A GitHub issue gets filed, and a tweet explains the problem. A Reddit debate erupts over its ethical implications, a researcher blogs a proposed fix, and a YouTuber breaks it down in simple terms for a general audience. When 10,000 people debate AI safety on Twitter, the risks become visible, and that visibility is a safety feature, not a weakness.

This does not mean community-built AI is perfect. It means its imperfections are visible, and visibility is what allows improvement. Think of the community's role in shaping AI as akin to Wikipedia's role in curating knowledge: Wikipedia is messy, constantly disputed and edited, yet its transparency is its reliability. It is self-correcting, impossibly current, and aggregates the knowledge of millions, with its talk pages exposing the citations, arguments, and real-time consensus behind every entry. Community is AI's version of Wikipedia.
