Non-overclockable: A radically simpler open-core?

Here’s one solution to the problem of funding open source software I’m kicking around. It’s a good fit for smaller projects which aren’t ready for something like Fast Path/open-core.

The solution in brief: a developer withholds new performance improvements and non-critical bugfixes from the public, making these available to supporters only. Blue Oak License or similar is used. Supporters download the supporters-only version via a password-protected repository (easy for NPM, golang, PyPI, etc.). (Fixes are eventually “unlocked” for everyone after some time.)
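To make the distribution mechanism concrete, here is one way supporters might configure access to such a password-protected package index. All registry hosts, package names, and tokens below are hypothetical placeholders; the exact commands depend on which registry-hosting service the developer chooses.

```shell
# Hypothetical setup: registry hosts, package names, and tokens are placeholders.

# npm: route a scoped package through a private registry and authenticate
npm config set @myproject:registry https://registry.example.com/
npm config set //registry.example.com/:_authToken "SUPPORTER_TOKEN"
npm install @myproject/core

# Python: install from a password-protected PyPI-style index
pip install myproject --index-url https://supporter:SUPPORTER_TOKEN@pypi.example.com/simple/

# Go: fetch modules through a private proxy, skipping the public checksum
# database for the private module (proxy credentials typically go in ~/.netrc)
go env -w GOPROXY=https://proxy.example.com,https://proxy.golang.org,direct
go env -w GONOSUMDB=example.com/myproject
go get example.com/myproject@latest
```

Revoking a lapsed supporter’s token is then just registry configuration: their existing copies keep working, but they stop receiving updates, which matches the model described here.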

When a supporter signs up and pays the monthly or yearly subscription fee, they agree not to redistribute the source code to non-subscribers. (Not unlike how Fast Path users agree not to redistribute documents when they access them.)

Might this work? Is anyone already doing this?

There are two sources of inspiration here. First is Patreon/Substack as used by many podcasters, YouTubers, and writers addressing a large audience. These creatives produce a steady stream of new content and withhold roughly 5% of it for supporters (people paying at least 5 USD a month). What’s being sold is easy access to this new material.

The second inspiration is the practice of withholding “performance fixes” by hardware companies. Hardware manufacturers frequently and selectively disable functionality in otherwise perfectly good CPUs and GPUs. For example, certain CPUs which could safely be overclocked are configured so that consumers cannot overclock them; such CPUs are called “non-overclockable SKUs”. The company does this to price-discriminate and increase revenue: hardware enthusiasts will pay extra for the overclockable version of the chip, while most other users won’t care. The strategy described here is similar in spirit.


  • How does this improve on open core?

    It’s a featherweight version of open core. There are two key benefits:

    1. The developer doesn’t need to prepare or sign any contracts. Open core typically does require preparing a license for each end user.
    2. The developer is relieved of the cognitive burden of deciding which features should go in the core project and which features should be licensed to paying users.
  • What if some jerk mirrors the supporters-only version?

    I’m encouraged by the observation that this does not seem to be a big problem for podcasters/youtubers/writers using Patreon and Substack. I’m also encouraged that many Data APIs – services which sell access to a data stream of some kind – seem to do well even though they also potentially have this problem.

  • What are the limitations?

    There’s some initial setup required, though it should be about as easy as setting up an IRC channel for Patreon subscribers. Implementing this is easier in certain languages/ecosystems: JavaScript (NPM), Python (PyPI), and Go are examples.

    It clearly works better for a piece of software which is being actively developed. The value proposition is clearer when performance fixes arrive regularly.

  • What if the supporter wants to redistribute a modified version of the supporters-only version?

    An additional tier would allow for this.

    Having a special tier for this really does solve the problem, I think. After all, only the supporter has the API key to access updates at the private repository. Downstream users (of the supporter’s software) may have a copy of the supporters-only version but they will not get updates.

    If this got out of hand, the supporter in question would be asked to sign a more formal (Fast Path-like) agreement, the cost of which would include the expected value of lost subscriptions.

1 Like

Try it!

I believe the developer of the Hapi web framework did something like the opposite. He continued publishing his bleeding edge versions, but charged companies stuck back on old versions for work backporting fixes.

1 Like

Did not know about hapi. Thanks!

So there are no conspicuous legal concerns here? No problems with the clickwrap agreement not to share open source code?

I’ve taken to calling these models “delayed release”. We see them in at least a couple different variations.

Under one approach, old versions of the software are published under permissive terms, but the latest and greatest is closed and proprietary. You can only download and use the latest and greatest if you pay.

Under the other approach, old versions of the software are still published and permissive, but the latest and greatest is copyleft, with proprietary licenses available for sale.


The canonical historical example of delayed release is probably Ghostscript, the GPL (now AGPL) PostScript/PDF processing library. Since printer manufacturers get stuck supporting the versions of that software they ship on their devices, and they often need or want to make proprietary changes and extensions, they’re often willing to pay for a commercial license for the latest and greatest Ghostscript.

You should also read up on MariaDB and other users of the Business Source License.

Withholding Security

There are more concerns with holding back security fixes than performance improvements.

Nearly every common permissive license, including the Blue Oak Model, tries to disclaim all warranties—legally enforceable guarantees—about the software. That would include any guarantee about it working correctly or functioning securely. This is what the license terms themselves say.

But there are states and countries where the laws can stop license terms from completely disclaiming all warranties. The ones I hear about most often are Virginia and Maryland, which adopted the aspirationally mis-named Uniform Computer Information Transactions Act (UCITA), and Germany. I’m not a German lawyer. I don’t speak German. But I’ve heard this over and over again.

In those jurisdictions, it’s at least theoretically possible for a user of a program with a security vulnerability to sue the developer for damages caused by the vuln and then argue that local law does not allow the developer to disclaim all responsibility for poor quality of the software.

1 Like

I am not a lawyer, but I would very much like to withhold security updates, because I think that is where the greatest financial incentive is. I also don’t think it is harmful if the delay is only a month or so.

I see that UCITA is aimed solely at software, but the whole idea of withholding security updates doesn’t seem that different to me from a car manufacturer not giving people free replacement brake pads. I don’t own a car, but I do own a bicycle, and its user manual specifically informs me that if I do not service the brakes regularly (once every 1,000 km), the bike is unsafe to ride. And servicing the brakes costs me money. How is that any different? The bike company knows its bikes are unsafe if the user doesn’t service the brakes, it warns the user, and the user has to pay to get the brakes serviced. Similarly, when I release software, I know it is insecure if it never receives security patches; I inform the user of this fact and give them the opportunity to pay for those patches. Seems like the same thing to me.

1 Like

Thanks for these notes. Glad this has some precedents.

Your comment about the different licenses suggests an improvement to what I proposed. In addition to the non-overclockable version missing bugfixes, it should also have a more restrictive license. So the non-overclockable version could be AGPL whereas the supporters-only version would be Blue Oak. At the margins, this should help fund the development.

I agree that withholding security updates would make this compelling and would make this distinctly different from the “delayed release” approach. In fact, I think saying that you may withhold security updates would be enough.

Also, another way to make this distinct from traditional delayed release would be to offer no guarantee of eventually releasing the software. Make the delayed release contingent on reaching, for instance, a $200/year fundraising target. Or just say that the developer may, on a whim, release older versions if they feel like it.

Here’s the proposed edit:

The solution in brief: a developer withholds new performance improvements and non-critical bugfixes from the public, making these available to supporters only. Blue Oak License or similar is used. Supporters download the supporters-only version via a password-protected repository (easy for NPM, golang, PyPI, etc.). ~~(Fixes are eventually “unlocked” for everyone after some time.)~~ (Bugfixes and performance enhancements could be released to the public eventually, at the developer’s option.)

This concerns me greatly.

While you may get paid for your work, this goes against the spirit of making your software freely available and, worst of all, it goes against codes of conduct, e.g.:

Community-focus - Members’ responsibility for the welfare and rights of the community shall come before their responsibility to their profession, sectional or private interests or to other members;

If you are bound by a professional code of conduct, such an approach could get you into a lot of trouble for withholding known security fixes from people. This doesn’t just affect you today; it limits your future employment prospects.

1 Like

I don’t think you are correct that “withholding known security fixes” is wrong. After all, this is already widely practiced; it is called “coordinated release”. Here is an example: Massive, coordinated DNS patch released - CNET

My plan is to make a bug bounty pool, anyone can pay in and receive access to information about security problems before the coordinated/public release date. These members would be bound by a code of ethics which would forbid them from using that early info for hacking. They might only get a few days extra notice before the patch was made publicly available, but for large institutions even hours extra notice can be worth money.

1 Like

You’re telling people they may not get security fixes promptly. You could even put the normal FOSS release behind a clickwrap screen so people would have to affirmatively consent to not getting security updates.

I’m also a little perplexed by this idea that there’s an obligation to provide security updates. Google doesn’t provide security fixes for phones which were released more than 4 years ago. Are its engineers violating a code of ethics even if they are open about the fact that they are not making security updates available?

1 Like

There are a few things to unpack here.

If you were going to abandon a set of versions (say, a major version), then of course you are not obligated to develop or give out security fixes. That is end of life.

For your example with Google, there are two issues: planned obsolescence and updates. Planned obsolescence means Google can wash its hands of a device before a large percentage of users would otherwise need to replace it, and hence creates far more waste than it should. This is absolutely immoral because of the environmental problems it causes.

As for the updates: as long as the last major version is still receiving work, it should keep receiving updates. Once that work stops, then no more updates; that is perfectly ethical. Requirements change, and you can’t keep the same major version going forever. But if you have a new version in the repo and all you need to do is deploy a binary the CI produced, well, yes, it would be unethical not to deploy it.

A good comparison is Microsoft, which offers a service that keeps end-of-life Windows versions going with security patches, at a cost to those that need them. Allowing companies to continue using end-of-life software is ethical in the sense that it buys them time to get off it, but you really don’t want the average consumer to think they can keep using it without it causing them problems. So whether that is ethical is very gray, I think.

However, if you were going to do the work anyway, have the fix in hand, and could release it immediately but don’t, that is unethical, because you are giving bad actors time to exploit a vulnerability you could otherwise entirely prevent. After all, patches are not guaranteed to be applied immediately, and by the time you have a fix, the bug could already have been exploited.

An alternative to withholding updates is withholding your time: unless your funding reaches some threshold X, you cannot guarantee you’ll have time to spend contributing to the project. That, to me, is perfectly fine. Nobody can expect you to work on something for free.

1 Like

This discussion has cleared up my thinking about security-related maintenance a bit. Not that clear thinking has led to any clear answers!

Sampling my intuition a bit, and setting the issue of legal systems that don’t allow full disclaimers aside, I think the troubling case is where:

  1. the developer continues holding a project out as “active” and
  2. the developer knows about a vuln in that project and
  3. the developer has a patch for that vuln and
  4. the developer holds the patch back from the project for a commercial reason, as opposed to an issue of practicality, process, or security

There’s a twinge of the same intuitive feeling when companies abandon products they still know to be in use, such as old Android phones. I think part of the problem there is that my mind sees a commercial reason: nudging people to buy newer phones.

1 Like