The ghost protocol
When rejected founders build the future their critics couldn't imagine
In 2023, two Russian-American brothers walked into a meeting with the world’s most famous privacy advocate and walked out with a rejection. Twenty-four months later, they’re running a live network processing inference across 5,000 GPUs while their skeptics scramble to copy the homework—except they’re getting the math fundamentally wrong.
This isn’t just another blockchain drama. The collision between Gonka AI and Cocoon represents the most consequential fork in decentralized AI infrastructure, and it’s happening right now while most of the developer community remains blissfully unaware. One path leads to genuine decentralization, computational democracy, and open-source permanence. The other leads to surveillance capitalism with better PR. The tragedy is that most developers won’t realize which path they’re on until it’s too late.
The founding myth that Silicon Beach doesn’t want you to know
Here’s what actually happened, buried in ODaily’s reporting and conveniently absent from most Western tech coverage: the Liberman brothers, Daniel and David, former Snap directors who built the 3D avatars you’ve been using without knowing their names, pitched decentralized AI compute to Pavel and Nikolai Durov in 2023. Nikolai, the mathematical genius behind MTProto and TON, was skeptical. He had “a different vision.”
So the Libermans did what rejected founders do when they know they’re right: they built it anyway. Gonka mainnet launched on September 15, 2025. Forty-four days later, on October 29, Pavel Durov stood on stage at Blockchain Life 2025 in Dubai and announced Cocoon, a decentralized GPU marketplace for AI inference (Crypto Briefing, Sahm Capital). The core concept, as ODaily diplomatically notes, “coincides” with what the Libermans pitched two years earlier.
Call it parallel innovation if you’re diplomatic. Call it competitive intelligence if you’re cynical. Either way, we now have two competing visions for decentralized AI infrastructure, both emerging from the post-Soviet technical diaspora, both claiming to liberate compute from centralized tyranny, and both targeting the exact same developers, researchers, and academics reading this essay.
But here’s what matters: only one of them is actually decentralized.
Proof of Work 2.0 vs Proof of Marketing 1.0
Let’s cut through the blockchain theater and examine the technical DNA, because this is where the philosophical chasm becomes unbridgeable.
Gonka’s architecture is brutally honest about its priorities: 99% of GPU computational power goes to actual AI workloads (GitHub). The remaining 1% handles consensus through the “Sprint” mechanism, a 6-minute competition in which nodes prove work by solving actual transformer model computations (2.3 billion parameters, 64 layers, 128 attention heads). This isn’t Bitcoin’s hash-hunting waste. It’s productive proof-of-work where the mining itself advances AI capability.
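To make the idea concrete, here is a minimal sketch of what “productive proof-of-work” looks like in principle: a deterministic model computation seeded by a public challenge, committed to with a hash that any verifier can recompute. This is an illustration only; a toy dense layer stands in for the transformer pass, and the sizes, seeding, and hashing choices are my assumptions, not Gonka’s actual Sprint implementation.

```python
# Illustrative sketch only, not Gonka's Sprint code. It shows the shape of
# "productive proof-of-work": a node runs a deterministic model computation
# seeded by a public challenge and commits to the result with a hash that
# any verifier holding the same challenge and weights can recompute.
import hashlib
import numpy as np

def sprint_proof(challenge: bytes, weights: np.ndarray) -> str:
    """Derive a deterministic input from the challenge, run the model
    computation (a toy dense layer standing in for a transformer forward
    pass), and return a hash commitment to the activations."""
    seed = int.from_bytes(hashlib.sha256(challenge).digest()[:8], "big")
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(weights.shape[0])
    activations = np.tanh(weights.T @ x)  # the "useful" work
    return hashlib.sha256(activations.tobytes()).hexdigest()

# A verifier with the same public challenge and weights recomputes the
# proof and checks that the miner's commitment matches.
weights = np.random.default_rng(0).standard_normal((512, 512))
challenge = b"epoch-42-block-hash"
assert sprint_proof(challenge, weights) == sprint_proof(challenge, weights)
```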
The tokenomics follow Bitcoin’s deflationary emission schedule with exponential decay (TechBullion). Miners don’t just get paid for compute; they’re mining GNK tokens that appreciate as the network grows (BeInCrypto). It’s one-compute-unit-one-vote, not one-dollar-one-vote. The verification system uses randomized sampling (1-10% of tasks verified instead of Byzantine 100% redundancy), with reputation scoring that rewards honest long-term participants. Twenty percent of all inference revenue funds open-source model training, guaranteed on-chain.
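The exact emission constants are Gonka’s to publish, but the shape of an exponential-decay schedule is easy to illustrate. In the sketch below, R0 and k are made-up placeholders, not the network’s real parameters.

```python
# A minimal sketch of an exponential-decay emission schedule, in the spirit
# of the Bitcoin-like curve described above. R0 and k are hypothetical
# placeholders, not Gonka's published parameters.
import math

R0 = 1000.0  # hypothetical tokens minted in epoch 0
k = 0.02     # hypothetical per-epoch decay constant

def epoch_reward(epoch: int) -> float:
    """Tokens minted in a given epoch: R(t) = R0 * exp(-k * t)."""
    return R0 * math.exp(-k * epoch)

def cumulative_supply(epochs: int) -> float:
    """Total minted after `epochs` epochs; converges toward R0 / (1 - e^-k)."""
    return sum(epoch_reward(t) for t in range(epochs))

for t in (0, 100, 500, 1000):
    print(f"epoch {t:>4}: reward {epoch_reward(t):8.2f}, "
          f"cumulative supply {cumulative_supply(t):12.2f}")
```

The point of the curve is the incentive structure it creates: early miners earn disproportionately more per unit of work, which is the same bootstrapping mechanism Bitcoin used to conjure infrastructure out of nothing.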
Cocoon’s architecture is... we don’t actually know. Because there’s no public code. No GitHub repository. No technical whitepaper with mathematical proofs. No API documentation. What we have is marketing copy about “confidential computing” and “Trusted Execution Environments” (Qazinform), which is legitimate privacy technology (Intel SGX, AMD SEV-SNP) but tells us nothing about the actual implementation.
Here’s what we do know: TON tokens serve as a payment mechanism only. GPU providers earn TON for compute (Cointelegraph), but there’s no mining, no token emission, no deflationary economics. It’s a marketplace, not a protocol. The Libermans’ assessment is devastating: “Cocoon’s model has historically proven unsuccessful, and we don’t think this time will be an exception. But first and foremost, Cocoon is not an independent project; its existence is largely to drive traffic to the TON platform.”
That’s not trash talk—that’s architectural criticism. One system creates self-sustaining economic incentives for computational infrastructure that compound over time (like Bitcoin created mining). The other system creates a gig economy for GPUs where participants are vendors, not stakeholders.
The open source litmus test: Who shows their work?
For GitHub power users and academic researchers, this question should be dispositive: Can you audit the code?
Gonka: github.com/gonka-ai/gonka, with 40+ releases, active development, MIT licensing, the Gonka whitepaper, comprehensive documentation, a CertiK security audit, and TypeScript/Python/Go SDKs. The ML node repository is open. The tokenomics are mathematically specified. All models trained on the network are guaranteed to remain open-source in perpetuity.
Cocoon: No GitHub. No public repositories. No technical documentation beyond announcement blog posts. Application requires DMing a Telegram channel. It’s pre-launch, which could excuse early-stage opacity, except it’s backed by Pavel Durov, a billionaire who could easily open-source the protocol if he wanted to. The Open Source Initiative released OSAID 1.0 in 2024 specifically to define what qualifies as “open source AI,” and Meta’s Llama famously failed that test despite marketing claims (Wikipedia). Will Cocoon even try?
This isn’t just philosophical purity. Stack Overflow’s 2025 developer survey shows trust in AI accuracy dropped from 40% to 29% even as adoption soared to 84%. Developers are deploying tools they don’t trust because they have no alternative. The #1 frustration? “Almost right but not quite,” which 66% cite (Stack Overflow). How do you debug “almost right” when you can’t inspect the system?
Academia faces existential questions: Research reproducibility requires transparent methods. How do you publish papers using black-box inference networks? How do you cite computational methods you can’t verify? ArXiv preprints increasingly include model weights and training data precisely because peer review demands it. A closed-source decentralized network is a contradiction in terms—it’s just distributed centralization.
Privacy theater vs privacy engineering
Both platforms claim privacy as a core feature, but the implementations reveal different threat models.
Cocoon relies on confidential computing: data remains encrypted during processing via hardware-based Trusted Execution Environments (Qazinform, Quickex). This protects against the GPU operator seeing your prompts. It doesn’t protect against Cocoon itself having architectural backdoors. It doesn’t protect against nation-state compromises of the TEE attestation process (see: Intel SGX historical vulnerabilities). It doesn’t protect against the TON blockchain recording transaction metadata that reveals usage patterns even if content is encrypted.
More fundamentally, confidential computing protects you from the miner, but who protects you from the protocol? If the code isn’t public, how do you verify the encryption implementation is correct? How do you audit key management? How do you ensure Telegram, the first major customer with 1 billion users (Decrypt), doesn’t have privileged access?
Gonka’s privacy model is simpler and auditable: task origin anonymization through random routing, unlinked transaction balances, synthetic dummy tasks indistinguishable from real requests, and reputation-based verification that reduces overhead as nodes prove trustworthy. The security model is adversarial by design—malicious hosts lose all rewards and reputation, economic disincentives exceed gains, and user-driven oversight allows anyone to report suspicious results.
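Here is a minimal sketch of how reputation-weighted spot-checking can work, assuming the 1-10% verification band described above. The linear mapping from reputation to sampling rate is my own illustrative assumption, not Gonka’s published rule.

```python
# A minimal sketch of reputation-weighted spot-checking, assuming the 1-10%
# verification band described in the text. The linear reputation-to-rate
# mapping is an illustrative assumption, not Gonka's published rule.
import random

MIN_RATE, MAX_RATE = 0.01, 0.10  # verify 1-10% of a node's tasks

def verification_rate(reputation: float) -> float:
    """Map reputation in [0, 1] to a sampling rate: new or misbehaving
    nodes get checked at 10%, long-trusted nodes drop toward 1%."""
    reputation = max(0.0, min(1.0, reputation))
    return MAX_RATE - (MAX_RATE - MIN_RATE) * reputation

def should_verify(reputation: float, rng: random.Random) -> bool:
    """Decide whether this particular inference result gets re-executed
    by an independent verifier."""
    return rng.random() < verification_rate(reputation)

rng = random.Random(7)
for rep in (0.0, 0.5, 0.95):
    checks = sum(should_verify(rep, rng) for _ in range(10_000))
    print(f"reputation {rep:.2f}: ~{checks / 100:.1f}% of tasks re-verified")
```

The economic logic mirrors the text above: cheap probabilistic verification plus reputation at stake makes cheating an expected loss, without paying for full redundancy.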
The critical difference is mathematical trust vs institutional trust. Gonka’s threat model assumes everyone might be adversarial; economic incentives and transparent code ensure honest behavior. Cocoon’s threat model assumes Telegram/TON governance is benevolent; you’re trusting institutions, not math.
For developers working on GDPR-regulated healthcare data, HIPAA-compliant medical research, or classified government contracts, this distinction is career-defining. Self-hosted Gonka nodes give you complete infrastructure control. You can audit every line of code. You can run it air-gapped. Cocoon requires trusting TON’s blockchain, Telegram’s intentions, and TEE hardware you can’t fully verify.
The Telegram distribution advantage is actually a centralization vulnerability
Pavel Durov’s killer argument for Cocoon is undeniable: 1 billion Telegram users, instant integration, seamless TON payments, a native Mini Apps ecosystem (Cointelegraph). Developers can ship AI features to a billion people overnight. That’s not a competitive advantage; that’s a monopolistic moat.
But here’s what GitHub’s 2025 Octoverse Report reveals about actual developer behavior: 66% don’t believe current metrics reflect their true contributions. The developer community increasingly values non-technical factors (62% cite them as critical) over raw technical performance (51%). What matters most? Transparency. Clear communication. Constructive feedback. Community ownership.
Telegram’s distribution advantage makes Cocoon instantly successful by user-count metrics while potentially failing by the metrics developers actually care about. It’s the Windows Phone paradox—massive built-in distribution (Microsoft’s enterprise customers) couldn’t overcome ecosystem control issues and misaligned incentives. The platform succeeded by conventional metrics (OEM partnerships, enterprise deployments) while failing to create genuine developer community ownership.
Gonka’s approach is slower but structurally antifragile: no foundation, no controlling organization. PoC-weighted governance means miners who actually run nodes control protocol evolution through on-chain voting (BeInCrypto). When your GPU is earning GNK tokens through actual work, you have skin in the game. When you’re just earning TON as payment for compute services, you’re an interchangeable contractor.
The Linux Foundation’s model proves this approach scales: PyTorch, Kubernetes, and countless infrastructure projects are developer-owned, with no single entity controlling the roadmap. GitHub itself—Microsoft-owned but developer-governed through transparent RFC processes—shows how community ownership creates more value than corporate control despite slower initial adoption.
For academic researchers, this distinction is existential. Universities are increasingly wary of platform lock-in after watching vendor pricing escalate post-adoption (see: Elsevier’s journal stranglehold, Mathematica’s licensing evolution, even GitHub Copilot price increases). A decentralized protocol with no extraction mechanisms is worth waiting for, even if Telegram offers faster distribution today.
The integration pattern that determines who wins the agentic future
InfoQ’s 2025 AI/ML Trends Report identifies the critical problem: 63% of AI tools lack context of organizational architecture, tools, and processes. Developers run 3+ AI tools simultaneously (59%), with 35% using 6-10 distinct tools. This isn’t feature richness—it’s integration failure.
GitHub Copilot succeeded not through superior code generation but through zero-friction integration: you’re already in VSCode/GitHub, and Copilot appears as a ghost cursor (Medium). Cursor succeeded by being a complete IDE replacement, not a plugin: a single cohesive tool, not a franken-integration.
Gonka’s OpenAI-compatible API is strategic genius disguised as convenience: developers switch by changing the base URL in their existing OpenAI client configuration (Crypto Briefing). One line of code. Every Python/TypeScript/Go SDK that works with OpenAI works with Gonka. It’s not differentiation through features; it’s differentiation through zero switching costs.
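For the skeptical, here is roughly what that switch looks like with the standard OpenAI Python client. The endpoint URL, API key, and model name below are placeholders, not Gonka’s real values; check the project’s own documentation before wiring anything up.

```python
# A minimal sketch of the "change the base URL" switch, using the standard
# OpenAI Python client. The endpoint URL, key, and model name are
# placeholders; consult Gonka's own docs for the real values.
from openai import OpenAI

client = OpenAI(
    base_url="https://your-gonka-endpoint.example/v1",  # hypothetical URL
    api_key="YOUR_GONKA_API_KEY",                       # placeholder key
)

# Everything downstream is the same OpenAI-compatible call your code
# already makes; only the client construction above changed.
response = client.chat.completions.create(
    model="some-open-model",  # whichever model the network serves
    messages=[{"role": "user", "content": "Explain proof-of-work 2.0."}],
)
print(response.choices[0].message.content)
```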
Cocoon’s integration strategy is... Telegram Mini Apps. Which is brilliant for consumer applications and completely irrelevant for serious developer tooling. Researchers training models don’t use Telegram Mini Apps. DevOps engineers managing CI/CD pipelines don’t use Telegram Mini Apps. The academic community publishing on arXiv doesn’t use Telegram Mini Apps.
The Model Context Protocol (MCP) represents the next evolution: AI-native workflows where tools expose capabilities through standardized interfaces. GitHub and VSCode teams sponsored 9 MCP projects in 2025 (GitHub). This is how agentic AI will work: autonomous agents coordinating across tools through open protocols.
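As a sketch of how the two ideas compose, here is a tiny MCP server, written with the official MCP Python SDK’s FastMCP helper, that exposes an OpenAI-compatible inference endpoint as an agent-callable tool. The Gonka-backed endpoint and model name are hypothetical placeholders; any OpenAI-compatible backend could sit behind the same tool.

```python
# A minimal sketch: expose an inference backend as an MCP tool using the
# official MCP Python SDK (pip install "mcp[cli]"). The endpoint URL and
# model name are hypothetical placeholders.
from mcp.server.fastmcp import FastMCP
from openai import OpenAI

mcp = FastMCP("decentralized-inference")

client = OpenAI(
    base_url="https://your-gonka-endpoint.example/v1",  # hypothetical URL
    api_key="YOUR_GONKA_API_KEY",
)

@mcp.tool()
def ask_network(prompt: str) -> str:
    """Route a prompt to the inference network and return the reply,
    so any MCP-aware agent can call it as a standard tool."""
    response = client.chat.completions.create(
        model="some-open-model",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio by default
```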
Guess which architecture naturally extends to MCP? The one with open APIs, documented interfaces, and transparent protocols. Guess which architecture requires Telegram’s blessing to evolve? The one built primarily to drive TON adoption.
What the Libermans understand about the 2025-2030 horizon
Stack Overflow’s 2025 survey found 52% of developers don’t use AI agents yet, with 38% having no plans to adopt them. This isn’t resistance; it’s skepticism about current capabilities. The “almost right but not quite” problem makes autonomous agents dangerous. But everyone agrees agents are inevitable once reliability crosses a threshold.
The Libermans’ bet is that the team controlling the most efficient, transparent, and economically aligned infrastructure will win the agentic era, not the team with the most users today. They’re building for the 2027 world where AI agents manage entire development workflows, not the 2025 world where developers use autocomplete.
Their roadmap is telling:
ASIC development for AI-optimized hardware (not GPUs adapted for AI, but silicon designed for Gonka’s Sprint mechanism)
Sharding for 671B+ parameter models (targeting DeepSeek R1 scale, not today’s 7B-70B sweet spot)
Cross-chain bridging and EVM compatibility (interoperability vs TON lock-in)
Confidential computing capabilities (matching Cocoon’s privacy claims but on open stack)
This isn’t roadmap vapor—it’s architectural foresight. When you control the consensus mechanism and economic incentives, you can optimize hardware and software together (see: Bitcoin ASIC evolution, Ethereum PoS optimization). When you’re a payment layer on someone else’s blockchain, you’re at their mercy.
Pavel Durov’s bet is that distribution trumps decentralization, that users don’t care about governance if the UX is seamless, that Telegram’s billion users will paper over architectural compromises. He’s betting on network effects and brand trust.
One of these theses will age beautifully. The other will become a case study in mistaking early adoption for product-market fit.
The fork in the road for academic research
JetBrains’ 2025 survey found 68% of developers expect employers to require AI proficiency. Academia is downstream of this trend: grad students today are GitHub’s power users tomorrow. The research community is making infrastructure choices now that will compound over decades.
Scenario A: The Cocoon future
Researchers use Telegram-integrated AI tools for literature review, analysis, and collaboration
Training runs require TON tokens; pricing is market-driven and unpredictable
Models trained on the network are... well, we don’t know because there’s no open-source guarantee
Universities negotiate enterprise licenses with Telegram for AI research infrastructure
Data sovereignty becomes impossible because the protocol is opaque and TON-dependent
Reproducibility requires trusting Telegram’s infrastructure remains stable and accessible
Scenario B: The Gonka future
Researchers run self-hosted nodes or use community-maintained infrastructure with transparent pricing
Training runs are subsidized by miners earning GNK tokens through productive work
All models are guaranteed open-source in perpetuity, citeable, auditable
Universities contribute compute to earn governance weight in protocol evolution
Data sovereignty is trivial—run your own infrastructure with identical software
Reproducibility means pointing to open source code, specific protocol versions, and cryptographically verifiable execution traces
For the academic community, Scenario B isn’t just preferable—it’s the only one compatible with scientific method. Peer review requires transparent methods. Replication requires accessible infrastructure. Citation requires permanent, immutable references. Openness isn’t a nice-to-have; it’s foundational.
The Linux Foundation model has worked for academia for decades. The Allen Institute’s OLMo demonstrates full openness (weights, data, code) is possible at scale. Papers with Code and Hugging Face show academic AI research thrives in open ecosystems. Why would we abandon these principles for a closed protocol owned by a messaging app, no matter how popular?
The censorship resistance question nobody wants to ask
Both platforms claim censorship resistance as a core feature. Let’s test that claim.
Gonka’s censorship resistance is structural: no foundation, no controlling entity, miners vote on protocol changes proportional to verified compute contribution. If miners disagree with a protocol change, they fork—same as Bitcoin’s block size wars, Ethereum’s DAO fork, every successful open-source governance dispute. The network can survive geopolitical pressure because there’s no headquarters to raid, no CEO to arrest, no corporate entity to sanction.
Cocoon’s censorship resistance is... optimistic? Pavel Durov himself was arrested in France in August 2024 over Telegram’s content moderation policies. He was released on bail with an ongoing investigation. Telegram’s operations are subject to constant regulatory pressure globally. The TON blockchain has its own governance structure that’s theoretically independent, but Telegram is Cocoon’s first major customer and promotional vehicle.
What happens when the EU demands Cocoon implement content filtering? What happens when the US requires KYC on all compute providers? What happens when China demands backdoors for domestic access? These aren’t hypothetical—they’re the actual pressures Telegram faces daily.
Gonka’s response would be mathematical: fork the code, run your own nodes, coordinate via governance. Cocoon’s response would be corporate: negotiate, compromise, implement compliance tooling, hope the protocol’s theoretical decentralization provides plausible deniability.
For developers building AI applications in adversarial environments—journalists in authoritarian states, activists coordinating resistance, researchers studying censored topics—this distinction is literally life-or-death.
The asymmetric bet: What you risk by choosing wrong
If Cocoon succeeds and Gonka fails, early Cocoon adopters gain distribution to 1 billion users, seamless Telegram integration, and brand association with Pavel Durov’s reputation. That’s valuable, but it’s also reversible—you can always migrate off Cocoon to another provider if it underperforms.
If Gonka succeeds and Cocoon fails, early Gonka adopters gain:
GNK token appreciation (Bitcoin-like emission schedule rewards early miners)
Governance weight (early compute contribution = disproportionate voting power)
Infrastructure ownership (nodes you control, protocol you govern)
Technical credibility (you bet on open source before it was proven)
Academic citability (your research uses reproducible, transparent infrastructure)
But more importantly, if you choose Cocoon and it becomes the dominant platform, you’ve locked yourself into an ecosystem controlled by Telegram with no ownership stake, no governance rights, and no guarantee of open-source permanence. You’re a user, not a stakeholder.
If you choose Gonka and it becomes dominant, you’ve helped build public infrastructure that can’t be taken away from you. Even if the Libermans disappear tomorrow, the protocol continues because the code is open and governance is decentralized.
The downside asymmetry is brutal: Cocoon’s failure costs you an opportunity cost; Gonka’s failure costs you some compute time and a learning curve. But Cocoon’s success makes you dependent, while Gonka’s success makes you sovereign.
The provocation: Maybe Pavel Durov isn’t the privacy hero this story needs
This is where we challenge the conventional narrative, because GitHub’s power users and arXiv researchers should know better than to worship at the altar of founder mythology.
Pavel Durov is a brilliant entrepreneur, a genuine privacy advocate, and someone who’s sacrificed billions to resist government pressure. He deserves immense respect for those accomplishments. But Cocoon isn’t VKontakte or Telegram. It’s not a messaging app where centralized control enables rapid development and user-facing features. It’s infrastructure—and infrastructure requires different governance models.
The fact that Durov announced Cocoon 44 days after Gonka’s mainnet launch, with core concepts remarkably similar to what the Libermans pitched him two years earlier (ODaily), suggests one of three things:
Parallel innovation where the Durovs independently arrived at the same conclusions (possible but suspicious timing)
Competitive response to prevent Gonka from capturing the decentralized AI narrative (reasonable business strategy)
TON ecosystem play to drive token adoption through infrastructure theater (the Libermans’ explicit accusation)
Any of these three scenarios suggests Cocoon’s primary purpose is strategic positioning, not technical innovation. The Libermans built Gonka because they believe in decentralized compute. Pavel built Cocoon because... why? To prevent a competitor from winning? To create utility for TON tokens? To complete Telegram’s AI strategy?
The motivation matters because it determines long-term incentive alignment. The Libermans’ thesis is that decentralized AI infrastructure becomes more valuable than centralized alternatives within 5-7 years, with decentralized protocols exceeding $10 trillion in value. Their incentives align with making that prediction true through protocol development, miner recruitment, and ecosystem growth.
Pavel’s thesis is that Telegram’s network effects create insurmountable advantages for any integrated service, that privacy-washing centralized architecture with confidential computing buzzwords is sufficient, that developers won’t notice or care about the difference. His incentives align with TON token price appreciation, not necessarily protocol quality.
Who would you bet on for a 10-year hold? The founders building their magnum opus, or the billionaire launching his 47th project?
The conclusion: Choose your future, then build it
Within 24 months, one of these platforms will be processing the majority of privacy-sensitive AI workloads for developers and researchers. The other will be a cautionary tale in tech history archives, featured in essays like this one with titles like “Remember when Pavel Durov tried to compete with open-source AI infrastructure?”
The decision matrix is actually simple:
Choose Cocoon if you believe:
Distribution trumps decentralization
Telegram’s network effects are insurmountable
Trust in Pavel Durov’s brand exceeds trust in transparent code
Academic reproducibility is less important than user acquisition
Confidential computing marketing is equivalent to open-source verification
Choose Gonka if you believe:
Decentralized infrastructure compounds in value over time
Open source is the only sustainable model for academic research
Community governance beats corporate benevolence
Mathematical trust beats institutional trust
The Bitcoin model for infrastructure creation actually works
For GitHub power users reading this: You’ve seen this movie before. Linux won. Git won. Kubernetes won. Open protocols beat proprietary platforms when timescales extend beyond 5 years and community contribution compounds.
For arXiv researchers reading this: Your career depends on reproducible methods and transparent infrastructure. Betting on closed protocols is career suicide on a 10-year horizon.
For open-source contributors reading this: The Libermans pitched the Durovs, got rejected, built it anyway, shipped first, and made everything open. That’s your origin story. Cocoon’s origin story is “billionaire announces competing platform after seeing market validation.”
The ghost protocol isn’t Cocoon—it’s the idea the Libermans pitched that everyone said wouldn’t work until they proved it did. Now the skeptics are building their version while claiming they thought of it first.
You don’t have to pick the winner today. But when you do choose, make sure you’re building the future that can’t be taken away from you by its creator. Because in five years, when decentralized AI processes half the world’s inference workloads, you’ll either own a piece of that infrastructure or you’ll be renting compute from someone else’s empire.
The choice is: miner or tenant. Stakeholder or customer. Sovereign or subject.
Which future are you building?
