On Cal.com, AI security reports, and why Vikunja can't easily close

2026-04-16

If Vikunja is useful to you, please consider buying me a coffee, sponsoring me on GitHub or buying a sticker pack. I'm also offering a hosted version of Vikunja if you want a hassle-free solution for yourself or your team.

Cal.com has announced that their production codebase is going closed source. Their stated reason is that AI has made it too easy for attackers to find bugs in open code, so hiding the code is now the safer option.

Every time a project like this closes, I get the same question: could Vikunja do this too? The list of recent examples is long, and the endings are mixed. HashiCorp moved Terraform to the Business Source License in 2023 and was acquired by IBM in early 2025; Terraform is still BSL. MongoDB has been on SSPL since 2018 and stayed there. Redis went to SSPL in 2024 and then back to AGPL-3 in 2025, after the Valkey fork took over much of its enterprise usage. Elasticsearch made roughly the same round trip: SSPL in 2021, AGPL added as an option in 2024, once OpenSearch had built its own ecosystem. And now Cal.com.

To be fair to those companies: making a business on open source is hard. The database products watched hyperscalers like AWS repackage their code as managed services, capture most of the revenue, and contribute little back upstream. The license changes were responses to a real structural problem that nobody has solved cleanly. Cal.com’s framing is different (they lead with security), but commercial pressure on open-source businesses is always part of the story.

I want to answer the question honestly. “I promise we won’t” is what every founder who eventually closed their code once said. I believe they meant it at the time. Circumstances change, and people end up on a different path.

So the question isn’t about intent. It’s about what would have to happen for Vikunja to close, and whether the answer is comfortable.

Before that, the security argument Cal.com made deserves a response on its own terms.

Security through obscurity, dressed for 2026

The core claim is that an AI scanning a public codebase can find vulnerabilities faster than defenders can patch them. Therefore: hide the codebase.

Anthropic’s Project Glasswing, announced in early April 2026, uses a new Claude model to find vulnerabilities in critical open-source software at a pace nothing before could match. The 27-year-old OpenBSD bug Cal.com cited as evidence against open source was found that way. The capability is real.

But Glasswing is run in partnership with open-source maintainers, not against them. The Linux Foundation is a founding participant. The whole point of feeding an AI the source of critical software is to find and fix bugs faster, which only works if the source is available in the first place. If you close the code, you also close off the defenders using the same tools to help you.

This is security through obscurity, a well-known bad idea. For over a century the field has held to Kerckhoffs’s principle: a system should remain secure even when everything about it except the key is public.

LLMs don’t change the direction, only the speed. If AI can scan open code for bugs, it can also probe closed code: fuzzing a shipped binary, running static analysis on anything installed on a customer’s machine, or hammering the public API directly. The bugs don’t disappear when the source is hidden. They just get found later, by someone who has no obligation to tell you first.

The most instructive example I can offer is one from Vikunja’s own release notes. CVE-2026-28268, shipped as a fix in Vikunja 2.1.0, was a bug where password reset tokens weren’t cleaned up after use. Anyone who got hold of a reset link once could keep reusing it. That bug had been in Vikunja since version 0.18.0, released in September 2021. Almost five years of it sitting there in public code.

It got found. Because the code was public, a researcher could look at it, recognise the problem, and report it responsibly. Had Vikunja been closed source, nobody external would have been in a position to catch it. The bug would still be there, and whoever eventually found it wouldn’t have been obligated to tell me first, because the incentive would likely not have been “make this open-source product better”.
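The fix itself is conceptually a one-liner: invalidate the token the moment it is redeemed. A minimal in-memory sketch of that pattern (Python for brevity; Vikunja itself is written in Go, and these names are illustrative, not Vikunja's actual API):

```python
# Hypothetical sketch of the single-use reset-token fix.
# In Vikunja the tokens are persisted in the database; a dict
# stands in for that store here.

tokens = {}  # reset token -> user id

def issue(token, user_id):
    """Record a freshly generated password-reset token."""
    tokens[token] = user_id

def consume(token):
    """Validate a reset token and invalidate it immediately.

    The pre-2.1.0 bug (CVE-2026-28268) was equivalent to skipping
    the `del` below: a captured reset link stayed valid forever.
    """
    user_id = tokens.get(token)
    if user_id is None:
        return None
    del tokens[token]  # the fix: a token works exactly once
    return user_id

issue("abc123", 42)
print(consume("abc123"))  # 42   (first use succeeds)
print(consume("abc123"))  # None (replay is rejected)
```

A real implementation would also expire tokens after a timeout, but the single-use property alone closes the replay hole the CVE described.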

That’s the tradeoff open source actually makes: transparency trades “bugs found later by the wrong people” for “bugs found earlier by the right ones.”

What AI-assisted security reports actually look like from this side

I have direct experience with what Cal.com is describing.

Across the last five releases (2.0.0, 2.1.0, 2.2.0, 2.2.1/2.2.2, and 2.3.0), Vikunja shipped fixes for 35 CVEs in about six weeks. That reflects the same shift Cal.com identified.

In the 2.3.0 release notes I wrote:

Vikunja seems to be in the same bucket of getting high-quality AI-generated security reports. The main difference from what made the curl project end its bug bounty program is that these are genuine vulnerabilities that are worth fixing.

The reports I’ve been getting aren’t LLM-generated noise. They’re from researchers who probably used AI to accelerate their work and then did the investigation that responsible disclosure requires. Some have reported multiple issues across consecutive releases, each with enough detail to reproduce, fix, and credit.

The curl project’s earlier decision to end their bug bounty was about a related but different problem: unvetted LLM output flooding maintainers with plausible-looking false positives. That was accurate a year ago. The picture has since shifted. Daniel Stenberg, curl’s maintainer, recently wrote that “the AI slop security reporting is basically extinct,” and that the first quarter of 2026 alone produced more fixed curl vulnerabilities than either 2024 or 2025 on its own. Submission rates and report quality are both up, and more of the reports land on real vulnerabilities.

The bottleneck has moved. Stenberg is explicit about it: “The problem here is not AI. Just good old overloading a few with so much work.” That’s a capacity and coordination problem. It’s fixable, and it doesn’t get fixed by closing the source.

I said this at the bottom of the 2.3.0 post and I’ll say it again here: I’m happy to get more of these reports. If you’re doing a thorough investigation, with or without AI, reach out through the security process first so we can coordinate timing and disclosure. That’s the scale fix. Hiding the code isn’t.

What would have to happen for Vikunja to close

Closing Vikunja isn’t a single decision I could just make. Four structural facts would have to change first.

Vikunja is AGPL-3 licensed

Vikunja ships under AGPL-3, the strongest widely used copyleft license. If you modify Vikunja and run it as a service for other people, you’re required to publish your modifications under the same license. A closed-source fork offered as SaaS would violate the license of every contributor whose code is in it.

There is no CLA

This is the one most people miss. A contributor license agreement is a document that contributors sign to grant or transfer broader rights to their code, typically to let the project owner relicense it later. MongoDB, HashiCorp, and Redis all had one before they relicensed.

Vikunja doesn’t. When someone sends a pull request to the Vikunja repo, they keep the copyright to their code, so I can’t unilaterally relicense it. To change the license of Vikunja, I would either have to ask every contributor who ever wrote a non-trivial patch and get every single “yes”, or rewrite every one of their contributions myself. Neither is practical given the size of the contributor list.

You can verify this on any pull request: there’s no bot-signed CLA check and no agreement link beyond the license that’s already in the repo.

No investors

Most relicensing stories have a VC in the third act. The investor needs an exit, the company needs margin, the path to margin runs through capturing self-hosted users, and the license change is the tool.

Vikunja is bootstrapped. There’s no investor asking me to optimise for an exit. The people I have to answer to are the users paying for Vikunja Cloud and the contributors whose code is in the repo. Neither group wants Vikunja to close. Any paid tier I build will ship under the same license, in the same public repo.

Forks are always possible

If every other structural thing on this list failed somehow, the AGPL-3 license and the public repo history give anyone reading this the right to take today’s Vikunja and keep going. A fork doesn’t need my permission. It doesn’t need my cooperation.

The safety net isn’t hypothetical. Valkey forked Redis in 2024 and is now the default choice for most cloud providers. OpenSearch forked Elasticsearch in 2021 and grew fast enough that Elastic eventually reopened the license. OpenTofu forked Terraform in 2023 and has grown into a mainstream alternative, with a large share of Terraform users already evaluating or migrating. In each case, the fork either became a legitimate alternative or pressured the incumbent back into open source. That’s what “forkable” looks like when it actually matters.

The limits of this commitment

None of this is a guarantee. Any commitment can break. What I can honestly say is that in Vikunja’s case, the commitment isn’t only mine. It’s baked into the license, the absence of a CLA, and the right-to-fork that every user already has. Undoing those would be a visible and contested process, well beyond a quiet Monday-morning announcement.

And I don’t want to undo them. Open source is how Vikunja gets the security reports I wrote about above, the contributors who ship features I wouldn’t have thought of, and the trust that makes self-hosters comfortable running it on their own infrastructure. Closing would destroy the conditions that make Vikunja work in the first place.

If you want to verify any of this yourself: the LICENSE file is in the repo, and the absence of a CLA is visible in every pull request.

Vikunja stays open.