I recently saw an interesting post on X, and it felt like a reminder: we can't just quietly observe the intelligent era from the sidelines; we have to participate in it.
In this era, slogans like "Intelligence by people, for people, and of people" from projects such as SentientAGI (@SentientAGI) read as a direct challenge to traditional ways of building AI.
So, I want to discuss: What is "Open AGI," why is it so important now, and how can we, as ordinary people, developers, and observers, participate? Let's break it down.
1. Open AGI is more than just "more powerful"
Many people see the term "AGI" (Artificial General Intelligence) and think, "Oh, it's just an intelligent system that can do anything a person can do, right?" That's part of it, but it's more than that.
Wikipedia defines "artificial general intelligence" as autonomous systems that can surpass human capabilities in most economically valuable tasks. (Wikipedia)
"Open AGI" carries a richer meaning: not a monopoly held by a single large company or a few large models, but a platform where the community, researchers, developers, and ordinary users can all participate, oversee, and benefit.
Take SentientAGI, for example. Their official website states:
"Community-built, community-owned, community-aligned, community-controlled." (Hugging Face)
They have a series of code repositories on GitHub, such as ROMA (Recursive Open Meta Agent), OpenDeepSearch, and various tool frameworks.
They emphasize that "intelligence should serve humanity, not belong to a single monopoly." Someone on Medium wrote, "SentientAGI: We’re Not Watching the Future, We’re Building It."
So, open AGI isn't just about "making models smarter." Rather, it's a question of who controls, who uses, and who owns the value of intelligence.
2. Why is now a critical juncture?
Okay, some might ask, "Aren't there already plenty of AI models and plenty of major companies working on AGI? Why is now so crucial?" I think there are several reasons:
The technical barrier to entry has dropped: Open-source tools, research sharing, and community engagement are much easier than they were ten years ago. For example, Sentient's codebase is publicly available, and its models are viewable.
The impact of intelligence is growing: AI is no longer just an experimental technology in the lab; it is influencing healthcare, finance, government, culture, and content creation. The sooner we consider questions like "who benefits," the better.
The issue of control has surfaced: as intelligent systems become more powerful and more "human-like" or "agent-like," who sets the rules and who has a say becomes crucial. On the question of whether "sentient" or "subjective" AI should have rights, for example, one survey found that 71% of respondents believe conscious AI should be treated with respect, while 38% support granting it legal rights.
A new paradigm is brewing: Sentient is pursuing a "network of intelligence" (such as its GRID architecture) rather than a single, monolithic model.
So, to put it bluntly: we seem to be at a turning point over who will define the future of intelligence. If a handful of large companies monopolize intelligent systems, intelligence may end up serving only a select few. The call for open AGI aims to break that pattern.
3. So how can ordinary people, developers, and observers participate?
Okay, that sounds a bit abstract, so what can you, me, and everyone else do practically?
Learn and follow: Start with understanding. Follow organizations like SentientAGI, their open-source models, community discussions, and hashtags like #SentientAGI (or #OpenAGI) on X.
Try it out: If you're interested in the technology, check out their repositories (such as ROMA on GitHub). Casual users can try the chat interface to experience the difference between a community-driven system and a traditional closed-source one.
Participate in discussions: For example, ask questions in the community: Who designed this model? What are its data preferences? Who receives its revenue? Is it transparent? These questions may sound lofty, but the sooner you ask them, the better.
Contribute to the project: Even if you're not an AI engineer, you can still participate in areas such as content creation, user experience, ethical oversight, and model usage feedback. Open AGI means it's not just a game for technical professionals.
Reflect on value/power structures: Don't just ask "what does this system do?" but also ask "who does it serve," "who controls it," and "who benefits." This is actually a social issue, not just a technological one.
4. My Thoughts on the Future (Casual Chat)
I believe that future intelligent systems may no longer resemble a "single brain," but more like a "collaborative network"—different modules, different agents, and different data sources working together behind the scenes.
I've heard that Sentient's GRID architecture embodies this approach: multiple models, multiple tools, and multiple data—rather than a closed model. (Medium)
"Subjectivity" may become the key word. The discussion about AI is no longer just "can it calculate or write?" but "can it understand people," "can it hold a conversation," and "can it take on certain social roles." Whether AI has "perception" or "consciousness" remains controversial, but the conversation is already underway.
If open AGI succeeds, perhaps intelligence will no longer be just an asset of large corporations but will become something like public infrastructure, much like the internet or electricity. Think about it: our expectation of intelligence won't just be "help me do things," but "how can we use it together?"
But this also presents risks. If a single party ends up controlling the model, the data, or the platform, that defeats the original purpose of "openness." Governance, transparency, and value ownership all need to be worked out clearly.
Finally, as ordinary people, we may not be able to build AGI immediately, but we can build awareness: awareness of the direction of intelligence development, awareness of control structures, and awareness of fairness, inclusion, and ethics.