From Black Box to Glass House: The Imperative For Transparent AI Development

The world is abuzz with talk of AI, a technology that has become an integral part of life for many. However, as AI approaches at breakneck speed, it’s natural that people are scared. This fear is understandable, but it’s also causing a reaction based on emotion rather than reason. It’s time to take a step back, breathe, and think logically about these systems and their place in society.

The popular reaction of knee-jerk regulations may soothe some of our fears, but it could also hinder untold worlds of progress. We risk the continuation of real horrors while trying to avoid those imagined. Today’s fear-based regulations may prevent tomorrow’s cure for cancer.

At this juncture, it’s essential to present some ideas that are not considered enough: the ethical case for open-source artificial intelligence. Many people imagine tight control of AI software is the answer, but information wants to be free. Empirically, it always finds a way out. The popular cries for AI safety through tight controls often ignore certain lessons of history and confuse the reality of the present.

We’re not here to indict closed-source companies; they have their place in the drive for human progress. They frequently push the boundaries of innovation, and we’re grateful for their contributions. Closed-source allows groups to keep their competitive edge, and it’s often a necessity. Closed-source companies, projects, and tools have the right to exist.

However, some people would not grant the same consideration to open-source. They paint open-source AI as a dangerous force, a threat to be contained and controlled. Some want it outlawed. Many of these voices benefit personally when the freedom of others is constrained. They may not even realize that their rationale emerges from this bias.

Some argue that open-source AI is too dangerous to exist and that we should collectively and immediately hand over the immense power of this technology to a select and powerful few. It’s alarming to see how far we’ve progressed on the road to subservience. The portrayal of open-source software as inherently dangerous couldn’t be further from the truth.

Open-source AI is the key to unlocking the widespread potential of artificial intelligence, democratizing the technology, and putting it in the hands of the many rather than the privileged few. Equal access and education around technologies result in much safer development and awareness. If in 2012 only Adobe and a handful of highly motivated actors had access to Photoshop in its pro form, citing worries about deepfakes, many more people would be fooled by altered images. Anyone could be blamed for a crime with a doctored image. This is the consequence of a lack of public access to technology.

It’s only widespread awareness that prepares people for societally shifting technologies. At Nous, our goal is to push this technology forward while upholding the responsibility of public awareness and access. We understand the weight of duty regarding these societally shifting technologies – and we ask others to heed this call to duty.

But why, you may ask? The reason is simple. Open-source AI serves as a counterbalance to the otherwise unchecked power of private AI and the potential mass digital surveillance by corrupt corporations and governments. It ensures that no single entity holds the reins of great power and that no entity can hijack or veto technological progress.

You may think you know what Open Source AI is, but you don’t. For technical people, open-source AI is often colloquially defined in simplistic terms. For most others, open-source AI seems like scary, uncontrolled power. The Open Source Initiative, which has been defining standards for Open Source Software since the late 90s, has been working on a formal definition of Open Source AI for over two years. After numerous workshops with hundreds of leaders of AI projects, these definitions are still being drafted. So, we say: if it’s too early for the technologists to even define it, then perhaps it’s too early for bureaucrats to regulate it.

Welcome to the frontier. It might be chaotic, but it’s a great place to be. 

Foundationally, open-source AI projects make software and tools available under terms that grant developers the freedom to use, inspect, modify, and share knowledge about the system and its parts. Ultimately, open source means transparency. Open source allows a global community of developers, researchers, and users to scrutinize, modify, and improve AI systems. Open source allows for visibility around the proliferation of this technology instead of granting exclusive power to private corporations and governments. Open source is the great equalizer. It balances man and machine, and most importantly, it provides the transparency needed to keep otherwise entirely opaque AI systems in check.

The uncomfortable truth is that there is no such thing as closed source, only degrees of openness. To place so much trust in locks when doors are everywhere is naive and imprudent.Closed-source is a myth. Every single project is subject to leaked technical secrets. Employees take sensitive information from one job to the next, and all people and systems are vulnerable to hacking. Over 90% of Fortune 100 companies have had sensitive data breaches, with 60% experiencing a significant breach in the past 2 years alone. Companies that are closed-source and heavily regulated are breached every day. Rules are regulations that work for those who obey them, but bad actors don’t follow the rules. The entire premise of being a criminal is that you don’t act within the law. Nefarious people and unaligned AI don’t care about rules. Remember, Skynet was closed-source.

Only law-abiding citizens, by definition, abide by laws. Regardless of the regulations in the US, the same rules and constraints do not apply to foreign research from our friends or foes. Restricting domestic technological progress in the face of potential foreign adversaries accelerating past us is a national security threat. And how about those highly secure institutions that uphold our national security? How many security breaches do US Federal agencies collectively report a year? Take a guess. In 2023, US federal agencies reported 32,000 cybersecurity breaches of their systems, up 10% from 2022. Breaches of digital systems are the norm, even among highly regulated, highly competent entities.

AI companies, too, are already being hacked. We’ve only seen what’s been leaked to the public. In July 2024, it was reported that OpenAI’s internal messaging systems had been hacked the year prior. This information was only made public by employee whistleblowers. Anthropic confirmed it suffered a data leak in 2024. This, too, is only known because they were forced to notify affected customers. Mistral’s closed-source model Miqu 70B was leaked in 2023. These aren’t criticisms of OpenAI, Anthropic or Mistral. We love and appreciate their work, but leaks happen, and many we presume we don’t even know about. We’re merely trying to dispel the popular myth that AI can be controlled through closed-source software.  We are arguing in favor of safety, which starts by first recognising the reality of software and human vulnerabilities.

The idea that we achieve safety through 100% opaque closed-source protectionism is security theatre. Closed-source AI companies will suffer security breaches just as in all other industries and among all international governments. Leaks are the norm. Every human and every system is an attack vector. Every hire, as is every vendor, is a potential security threat across the technology stack. Information gets leaked because most people have a price, and only one mistake or malicious act is required. We can accept this and embrace the transparency of openness, or we can deny the actual history of IT security. 

At Nous, we build open-source AI infrastructure for the world to develop upon. We believe that frontier technology should be democratized so that a wide breadth of human society would have access, rather than a handful of extremely wealthy corporations. We believe that most robust and dynamic cybersecurity measures for AI will emerge from a foundation of transparent, open-source software. Nous is researching and building an intelligent infrastructure to further these goals. 

If it sounds counterintuitive, take a look at some data. The open-source Linux community outperforms private companies in both finding and fixing vulnerabilities in software, averaging more than 50% faster than its closed-source peers. Open-source code allows for public scrutiny, which means vulnerabilities can be more easily identified and addressed by a broader community with a more comprehensive range of skills. It attracts developers working on different projects across all parts of the technology stack. Open-source also allows for continual auditing by the entire world.

If large language models are closed-source, then this stifles safety research. Without access to weights and the ability to freely test model robustness against dangerous prompts, any progress on making AI safer for everyone is at the discretion of two or three corporations rather than the larger research community. With closed-source, private companies and politicians are asking you to trust them. Open source doesn’t ask for your trust; it allows people to inspect and verify. Is it safe to depend solely on flimsy sources of reliability like “human trust” in the face of algorithmic threats? We place our faith in far more robust actors, like advanced cybersecurity systems that can dynamically modify and reinforce themselves. The contemporary alignment landscape is strangely barren of cryptographic security research; instead, it is focused on proselytizing governance and regulation.

And perhaps we shouldn’t be putting so much trust in corporations and politicians. Ask yourself a question: if you lean left, how would you feel about Trump winning the election and being in control of powerful AI technologies? Should he control the licensing of the regime of machine intelligence? If you lean right, how would you feel about Kamala determining what constitutes truth, bias, and misinformation? Is anyone in these organizations trustworthy enough to hand them the exclusive keys to superintelligence? Does that sound safe?

War-mongering politicians who benefit from their citizens being surveilled and obedient may be the worst option to hand the power over to. The most extensive atrocities in the world have been at the hands of them. Regardless of your politics, power changes every four years. At best, you might get two terms of ideological safety. You might trust your leaders now, but can you trust it when the next guy assumes power? Should we worry more about the AI than about the people controlling it? Which human or institution do you trust with controlling technological progress?

Centralized, closed-source AI concentrates power in the hands of a few individuals or organizations. This creates an environment that is ripe for exploitation. One might consider that as a society, we need decentralized, open-source AI and distributed training as a prudent counterbalance to protect ourselves from bad actors and the inevitable abuse of power.

You may think otherwise because giant corporations have taught you otherwise. Open-source projects can be a threat to private companies’ competitive advantage. Distributed training disrupts the structure of modern data centers and the nature of their partnerships with centralised AI monopolies–now that groups can aggregate compute, new markets are opened by this act of democracy. Regulatory capture is a tendency, intentional or not, for regulations to be heavily influenced by industry leaders, resulting in protectionism and exclusion of competitors. We believe some of the industry’s current AI safetyism is forged from these tenets, deliberately conflating hypothetical catastrophic risks with whether or not an LLM can say controversial things.

Those who appeal to political leaders to regulate large corporate interests forget that these companies already work closely with the government. The larger the company, the more entwined with politicians it becomes. They are already being controlled, the control is just opaque to you.

Now, regulators are beginning to mandate licenses to build AI technologies, often down to what kind of mathematics may be used. What does the future world look like if we as people need licenses to do math or to use the internet? Is it safe to permit that degree of centralized power?

Tools are neutral. You can use a hammer to build a house or to kill a person. You can use the internet to watch puppy videos or to gather information on how to make a bomb. Today’s AI and common search engines are already capable of providing information on biological and weapons manufacturing, organized terror, and other serious dangers. The information is already there, and ill-intentioned people can already find it. You don’t even need the internet; you can find this knowledge in books.

Language models won’t create bioterrorists any more readily than celebrity cookbooks have created Michelin-star chefs. 

Knowledge isn’t the enemy. Knowledge is a virtue, and a society that licenses or criminalizes knowledge is destined for decline.

THE AI ACCELERATOR COMPANY

NODES

THE AI ACCELERATOR COMPANY

NODES