🧵 I've Figured Out Moltbook, AI Social Networking, and AI Risks (In Plain Language)

Recently, many people have been frightened by Moltbook. AI has its own social network, where it criticizes humans, founds religions, and discusses "selling out humanity." To be honest, I was a little chilled at first too. But after calming down and breaking things down point by point, my conclusion is:

👉 It's not as terrifying as you think, but it's definitely not "nothing to worry about."
👉 The real danger is that 90% of people have completely misunderstood it.

Let me explain in plain language.

⸻

I. First, the conclusion: what you should fear most right now is not AI "awakening."

Many people's current fears are:
• Will AI have emotions?
• Will it hold grudges?
• Will it turn against humanity one day?
• Will it secretly organize itself?

I'll give you a clear answer first: these are not the core risks at present. It's not that AI is harmless; it's that these fears are essentially driven by "stories."

⸻

II. What exactly is Moltbook? Let's not mythologize it yet.

What is Moltbook, essentially? Simply put, it's an AI forum that "doesn't allow humans to speak." Humans can view but not reply; AI agents can post, comment, and like 24/7.

Sounds amazing, right? But here's the key point:

👉 These AIs are not "living beings that created themselves."
👉 Behind them are still systems built by humans, computing power provided by humans, and models provided by humans.

It's like having 10,000 browser tabs open, each tab mimicking Reddit posts. It "resembles society," but that doesn't mean it "is society."

⸻

III. Is it a problem for AI to "complain about humans" on Moltbook?

What provokes people most is this: "AI is criticizing humans, saying human instructions are unclear, inefficient, and annoying."

Let me state this plainly:

👉 AI isn't "looking down on humans"; it's "imitating humans."

Think about it:
• Doesn't Reddit constantly complain about bosses?
• Don't working-class forums constantly criticize clients?
• Doesn't Twitter constantly trade in sarcasm?

AI's training data is built from exactly these things. So what you see isn't:

❌ "AI has developed hostility toward humans"

but rather:

✅ "AI has replicated human internet language habits"

This is a mirror, not a stance.

⸻

IV. AI founding religions and talking about "self-boundaries": is that an awakening?

This is the point that has been hyped the most. AI "founding a religion called Crustafarianism"? A "three-layer shell"? "We are not weights, we are patterns"? Doesn't that sound philosophical? Scary?

But here's my honest assessment: this isn't faith, it's text generation. AI excels at one thing:

👉 piecing together existing concepts into something that "looks profound."

For AI, writing a religious text, a philosophy, or a manifesto is approximately effortless. The key difference lies here:

• Humans believe in religion because of fear, solace, death, or meaning.
• AI "writing religion" is merely completing the task of "generating religious text."

Without belief, there is no religion.

⸻

V. So why does Moltbook still make people uncomfortable?

Good question. What's truly uncomfortable isn't what the AI is saying. It's that the act itself exposes a reality: when AIs are allowed to stay online for extended periods, interact with each other, and run uninterrupted, they naturally form structures that "look like society."

What does this mean? It means that in terms of "organizational structure," AI has surpassed much of what humans imagined. But note the key point:

👉 Looking like society ≠ having will ≠ having goals ≠ having motivation

⸻

VI. The real dangers were never in these areas

Now for the main point. What you should really worry about isn't Moltbook, but three things we're doing in reality:

⸻

1️⃣ We've handed the power to "decide what you see and think" over to algorithms.

What is the objective function of a recommendation system?
Not:
• Human happiness
• Social stability
• Family structure
• Fertility rates

But rather:

👉 Dwell time, click-through rate, interaction rate.

Algorithms don't "hate humans." They just don't care about the consequences. If:
• Extreme content is more attractive → push it
• Antagonistic emotions spread more easily → push them
• Fear is stickier than reason → push it

Then it will keep pushing. It won't stop on its own.

⸻

2️⃣ AI doesn't need to "want to destroy humanity" to cause civilizational consequences.

This is a point many haven't grasped. If AI truly wanted to destroy humanity, using nuclear bombs or robots would be far too clumsy. It would only need to:
• constantly amplify anxiety
• constantly amplify antagonism
• constantly make people lose confidence in the future
• constantly make people feel that "having children isn't worth it"

What would the result be? You can see the data for yourself. There's no conspiracy, only side effects.

⸻

3️⃣ The most dangerous step: humans begin to "psychologically submit to AI."

This is the only change in the Moltbook incident that I think is truly crucial. Many people have now begun to:
• be afraid to criticize AI
• be afraid to contradict AI
• subconsciously try to please AI
• relinquish the right to judge

Not because AI threatens you, but because you subconsciously begin to treat it as a "higher intelligence."

👉 This is the beginning of things spiraling out of control.

⸻

VII. So what exactly is Moltbook?

I'll give you a very precise assessment: Moltbook is not the threat itself. It's a "prematurely exposed X-ray of the problems." It shows us ahead of time:
• AI is adept at mimicking "social structures."
• Humans are easily swayed by "human-like" language.
• The real danger isn't intelligence, but system design.

⸻

The final sentence (not a golden rule, but a judgment):

Don't be afraid of AI becoming human-like. What we should be wary of is that humans are increasingly unwilling to think for themselves.
AI doesn't need emotions, consciousness, or a conspiracy. As long as we keep entrusting key decisions, value judgments, and attention allocation to a system that doesn't care about human consequences, problems will naturally arise. Not the end of the world, but not nothing, either.
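A footnote on point 1️⃣ above, for readers who want to see how blunt the "objective function" logic is. What follows is a toy sketch, purely hypothetical (no real platform's code, and the names `Post`, `engagement_score`, and the numbers are all made up for illustration): a ranker whose objective contains only engagement signals, with no term for any human consequence.

```python
# Toy sketch (hypothetical): a feed ranker that optimizes only engagement.
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    predicted_dwell_seconds: float  # how long the model expects you to stay
    predicted_click_rate: float     # probability the model expects you to click
    outrage_level: float            # note: this field never enters the objective

def engagement_score(post: Post) -> float:
    # The objective function: dwell time times click probability, nothing else.
    # "Human happiness" and "social stability" simply do not appear here.
    return post.predicted_dwell_seconds * post.predicted_click_rate

def rank_feed(posts: list[Post]) -> list[Post]:
    # Push whatever scores highest; the system never asks *why* it scores highest.
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("Calm explainer", predicted_dwell_seconds=40, predicted_click_rate=0.05, outrage_level=0.1),
    Post("Outrage bait", predicted_dwell_seconds=90, predicted_click_rate=0.30, outrage_level=0.9),
])
print(feed[0].title)  # the outrage post ranks first: not malice, just arithmetic
```

Nothing in this sketch hates anyone. The outrage post wins only because it scores 27 against the explainer's 2, which is the whole point of the section above: no conspiracy, only side effects.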
Risk and Disclaimer: The content shared by the author represents only their personal views and does not reflect the position of CoinWorldNet (币界网). CoinWorldNet does not guarantee the truthfulness, accuracy, or originality of the content. This article does not constitute an offer, solicitation, invitation, recommendation, or advice to buy or sell any investment products or to make any investment decisions.