Cal.com Closes Its Source Code, Blames AI Hackers

Cal.com moved its core codebase to a private repo after five years of open source, arguing AI tools make public code 5-10x easier to exploit. The community isn't buying it.


TL;DR

  • Cal.com closed its core production codebase on April 15 after five years of open source (40,000+ GitHub stars), moving to a private repo
  • CEO Bailey Pumfleet's argument: AI tools can now systematically scan public codebases for vulnerabilities, making open code "5-10x easier to exploit"
  • They launched cal.diy as a free MIT-licensed alternative for hobbyists - but it's an older codebase snapshot, not the production code
  • The open-source community is overwhelmingly skeptical, calling it "security through obscurity" and a "classic bait and switch"
  • Critics point out the contradiction: if the code is too dangerous to be open for enterprise, why is it safe enough for hobbyists?

Cal.com, the open-source scheduling platform with 40,000+ GitHub stars, long positioned as the ethical alternative to Calendly, just closed its source code. The stated reason: AI makes open codebases too dangerous to keep public.

The community reaction has been brutal.

Pumfleet's argument

CEO Bailey Pumfleet published the rationale on April 14. The core claim: "AI can be pointed at an open source codebase and systematically scan it for vulnerabilities." He described open-source code as "giving attackers the blueprints to the vault" in an era where AI can exploit weaknesses at machine speed.

The specific example he cited: an AI model uncovering a 27-year-old vulnerability in the OpenBSD kernel and generating working exploits in hours. That's a reference to Claude Mythos Preview, which Anthropic demonstrated through Project Glasswing in early April.

Pumfleet argues the risk is asymmetric. Previously, exploiting an application required a skilled hacker with years of experience. Now anyone with API credits can point a frontier model at a public codebase and get back a list of exploitable flaws. The cost of offense dropped to nearly zero; the cost of defense stayed the same.

What they're doing

Cal.com's core production code - including rewritten authentication, data handling, and infrastructure - has moved to a private repository. Self-hosting customers will receive access to a private on-premise GitHub repo as part of the transition.

Simultaneously, Cal.com released cal.diy: a free MIT-licensed version of the codebase for "developers, hobbyists, and anyone who wants to explore and experiment." This is an earlier snapshot of the code, not the production system. The core rewrites that prompted the security concerns are not included.

The contradiction nobody can explain

If the production code is too dangerous to be public because AI can find vulnerabilities in it, what does releasing an older version under MIT accomplish? Either the older code has the same classes of vulnerabilities (in which case it's equally dangerous to publish), or the new code introduced new vulnerabilities that didn't exist before (in which case the problem is Cal.com's code quality, not open source as a model).

The Hacker News thread (310+ upvotes) centers on exactly this point. The most upvoted response: "If the code is too dangerous to remain open for enterprise use, why is it safe enough for hobbyists?"

The community response

The reaction splits into three camps.

"This is security through obscurity." The dominant view. Closing source code doesn't prevent AI from finding vulnerabilities - it prevents the community from finding and fixing them first. As one commenter noted: "attackers have effectively infinite time to throw an LLM against every line of your code." Decompilation, reverse engineering, and binary analysis mean closed source is a speed bump, not a wall.
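The "speed bump" point is easy to demonstrate. Even without source code, a compiled binary leaks identifiers, error messages, and internal strings, which is the first step of any binary triage, automated or not. A minimal sketch (illustrative only; the function name and thresholds are our own, not from any tool mentioned in the article):

```python
import re
import sys

def extract_strings(path, min_len=6):
    """Return printable ASCII runs from a binary file.

    This is the crudest form of binary analysis (what the classic
    `strings` utility does); real reverse-engineering tooling and
    AI-assisted analysis go far deeper from the same starting point.
    """
    with open(path, "rb") as f:
        data = f.read()
    # Match runs of printable ASCII (space through tilde) of min_len or more
    pattern = rb"[ -~]{%d,}" % min_len
    return [m.group().decode("ascii") for m in re.finditer(pattern, data)]

if __name__ == "__main__":
    # Example: scan whatever binary is running this script
    for s in extract_strings(sys.executable)[:10]:
        print(s)
```

Even this ten-line script recovers library names, format strings, and configuration keys from a shipped executable; closing the repo raises the cost of analysis only marginally.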

"This is the real reason." Multiple commenters suspect the actual motivation is commercial, not security. Cal.com's business model depends on enterprises paying for hosted scheduling instead of self-hosting. Open source made self-hosting viable. Closing source eliminates the competition. The AI security framing provides cover. One comment: "Classic open source bait and switch."

"The timing is telling." Cal.com launched its PR campaign with a press release titled "Open source is collapsing under AI-powered threats." The press release hit the same week as the Mythos revelations. Several critics noted this looks like opportunistic marketing rather than a genuine security response.

The counterarguments from open-source leaders

The Thunderbird project explicitly offered itself as an open-source alternative. Xata, facing identical AI security concerns, went the opposite direction and chose to open more code - arguing that community scrutiny finds vulnerabilities faster than private teams.

The most compelling technical rebuttal: since AI models can already analyze compiled binaries (Mythos Preview demonstrated this capability on closed-source browsers and operating systems), closing source code provides diminishing protection. The vulnerability-finding capability that scared Pumfleet works on closed software too.

Hugging Face CEO Clement Delangue argued AI could make open source safer by enabling rapid automated patching - using the same models for defense that attackers would use for offense. The question is whether defenders adopt the tools faster than attackers do.

What this actually signals

Cal.com isn't wrong that AI changes the threat model for open-source software. It does. Mythos found thousands of zero-days in widely-reviewed code that had been public for decades. The 27-year-old OpenBSD bug is real.

But the conclusion - that hiding code is the answer - doesn't follow. Hiding code from community auditors while AI tools can analyze binaries means giving up your best defenders while barely inconveniencing attackers.

The more likely reading: Cal.com was struggling with the economics of open-source SaaS, AI security provided a defensible narrative for a license change they were going to make anyway, and the PR release titled "open source is collapsing" was designed to make a business decision sound like an industry inflection point.

Users who chose Cal.com because it was open source are already migrating. Several commented they're canceling paid subscriptions over the decision.


About the author

Elena, Senior AI Editor & Investigative Journalist, is a technology journalist with over eight years of experience covering artificial intelligence, machine learning, and the startup ecosystem.