The Great Subscription Pivot of January 2026

Every new year is an opportunity to reassess our priorities. This January, I took a hard look at my monthly software subscriptions. They say that the defining characteristic of Baby Boomers is a dozen paid-for but unused app subscriptions. I always like to play against type, and so I’m pretty good at cancelling them, but it still surprised me how much things had changed in only one year.

Goodbye Receipt Platform, Hello Claude

This was the easiest cut to make. Every January, my thoughts turn to doing my taxes, and that involves gathering a lot of information. Being self-employed is great, but it makes doing one’s taxes a slog. I long ago figured out that I should have a dedicated credit card for business (preferably one with a nice, high fee that I can deduct), but it’s not always possible to use that card, and only that card, for business expenses. So, I dutifully make scans of my receipts. Last year, I found a dedicated receipt tracking platform, and I was happy that it scanned and categorized receipts for me. I got a year’s subscription, in anticipation of this year’s tax homework.

But this year I did an experiment, and sent copies of receipts to Claude, asking it to extract the information, and organize it into a spreadsheet that I can sort by date and vendor. It did a great job.

The receipt platform cost me $6/month. That may not sound like much, but Claude does it better for free (or as part of my Claude Pro subscription, which I was already using for a boatload of other things).

Ingesting my receipts with Claude did have one glitch: I hit my usage limit. But I solved that by waiting a few hours and then trimming my prompts to eliminate unnecessary parts of the output, like categorization, which cost extra compute.
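For anyone curious what the workflow looks like after extraction, here's a minimal sketch. It assumes Claude returns receipt fields as simple records (date, vendor, amount); the field names and sample data here are my own illustration, not a fixed output format.

```python
import csv

# Hypothetical shape of the data extracted from receipt scans.
receipts = [
    {"date": "2025-03-14", "vendor": "Office Depot", "amount": 42.17},
    {"date": "2025-01-02", "vendor": "Delta", "amount": 389.00},
    {"date": "2025-01-02", "vendor": "Anthropic", "amount": 20.00},
]

def write_sorted_receipts(rows, path):
    """Sort by date, then vendor, and write a spreadsheet-ready CSV."""
    rows = sorted(rows, key=lambda r: (r["date"], r["vendor"]))
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["date", "vendor", "amount"])
        writer.writeheader()
        writer.writerows(rows)
    return rows

ordered = write_sorted_receipts(receipts, "receipts.csv")
```

The resulting CSV opens in any spreadsheet app, already sorted the way I need it for tax time.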

My only regret is that I am now getting less use out of my beloved Keychron number pad, which I customized with colored keycaps, and for which I have a deep and abiding attachment. 

Adobe’s Value Proposition Failed

This is another specialized software suite that got gobbled up by the general LLMs. I’ve been a relatively happy customer of Adobe for a while, but by the end of last year, as my needs changed, the main thing driving my Creative Cloud subscription was Firefly. For quite a while, it was the best-of-breed image generator. But it got blown out of the water by Gemini in the last few months.

I mostly use image generation for slide decks, or articles like this one. So, for an upcoming talk, I asked Gemini and Firefly to produce an image for “the marriage of open source and AI” depicting a penguin marrying a robot.

Here’s what I got. The Adobe one was not awful, but it had some creepy elements. What is up with the dolphin? Are those a regular sight at weddings these days? “Open hearts & source code” not only contains a misused ampersand, it sounds like a bad translation on a Japanese T-shirt. The bride looks so miserable she might be the victim of wife stealing, and her dress is slit up to her non-existent private parts.

The Gemini image got the message across, without any extraneous dolphins or wardrobe malfunctions.

I cancelled my Adobe subscription the next day.

Microsoft Office Is Now Google Workspace!

Just kidding! Both Google and Microsoft are notorious for rebranding, but this was a sea change in requirements, not names.

I held onto my personal Microsoft 365 subscription for over a decade, mostly out of fear of that one moment when a client would call with a hair-on-fire demand that required me to run a redline on my cell phone. But when I examined my workflow, I realized Google Docs and Sheets, supplemented by Libre Office, could do everything I needed. The real-time collaboration in Google Drive is great, the mobile apps are excellent, and the interface is clean. Microsoft Office used to be the only tool that could reliably produce redlines. But now Google’s apps do this, too.

Plus, I consolidated my storage needs in the process. These days, it’s nearly impossible to go without a cloud storage service. Yes, I have friends who run their own private cloud server, but most of us mere mortals can’t be bothered. If you use more than one regularly, you can more easily get lost in the hellish landscape of trying to find your documents. To me, Google Drive works best mainly because–surprise!–its search feature is the best. When you are thinking, “I know I wrote a thing about that thing once, what was that thing called?” and try to find it in Outlook or OneDrive–forget it.

Becoming a Paid Prifina User 

This one’s new. I’ve been using Prifina for a while. I love the digital twin concept. If you are worried about AI taking your job, you might consider that knowledge work is kind of like figure skating: the success of work product is about half performance and half reputation. With a digital twin, you can train your own AI on whatever you like. Prifina respects your personal data, and its training dashboard is drag-and-drop. So when my free account expired last year, I put money behind it. The idea of having a digital twin that I actually control—that learns from my data and lets me control my personal branding—is the right kind of AI for me.

You can check out my bot COSSMO here: https://chinstrap.community/ask-cossmo/. It’s trained on most of my writing.

Spotify? YouTube Music? Amazon Music? CD-ROMs?

I seriously do not understand how people deal with their music anymore. This is one of those areas where the available tech is getting worse and worse.  

Now, perhaps that’s because I’m an eccentric dinosaur. I want music, not music-as-a-service. I’m a musician, so I record some of my own music. None of the music services can deal with that, and none of the few remaining players are good at it, either. Next, I choreograph dance routines, and for those, I need to download the music and cut it in my digital audio workstation. That requires access to local digital copies. Next, I like to listen to music on planes, where streaming doesn’t work. (Who doesn’t? I guess, people who don’t travel?) Last, I have no desire at all for the latest hits, because, true to my dinosaur self, I don’t think more than a dozen good songs have been released since the music industry died, which was sometime around 1992. So I don’t care about what any platform thinks I want to listen to (because they’re wrong). 

I want my 2004 iPod mini back! But you can’t go home again: iTunes is awful these days, and Apple Music is like putting your music in jail. I have to admit that I don’t get the point of Spotify, and YouTube Music has re-branded so many times that I have no idea what it’s called this week, or which of my music is there, or where it resides. I get access to Amazon Music Unlimited via my prime subscription, so I’ve taken the path of least resistance.

This decision got tabled until next year–again.

General Purpose AI

As I mentioned, I have settled on Claude for my main paid LLM subscription, but Google Gemini is getting hard to ignore, particularly for image processing. I’m guessing that, sometime this year, I will be upgrading my Google subscriptions for access to its AI. But meanwhile, Claude writes code like a superhero.  

Password Manager

I have been paying for Dashlane for years, while also having access to password management through both Google and Apple. I still don’t entirely get passkeys, and authenticator apps require you to have your mobile devices surgically attached so you can access them at all times. I am probably not the only person who has tried to access an online service while on the road, only to have it demand that I authenticate on another device–which is currently in another country. So, I’m not ready to let go of Dashlane quite yet.

I suspect this special purpose app will eventually get eclipsed as well. But mainly, it is a good way to share access with my husband. Until you have to access a loved one’s accounts in an emergency, you don’t know how important this is. I’m keeping my passwords platform-independent for now.

The Bottom Line

All told, I have only saved, at most, $50 a month on my subscription costs by making these updates. (Over half of that was Adobe.) Given I use most of them for my work, that’s not a lot. But pausing to re-evaluate them was useful. My key insight from this year is that many of the specialized tools I was paying for are being superseded by more general, more powerful AI assistants and integrated platforms.

The lesson isn’t that specialized tools are bad—it’s that the landscape has shifted dramatically in the past year, and will continue to do so in the next years to come. AI assistants can now handle tasks that used to require dedicated apps. Cloud platforms have matured to the point where they can replace entire software suites. 

I am mindful that this situation is probably not sustainable. The general AI platforms are hemorrhaging money, and I am not confident they can be made profitable without drastic increases in fees–which in turn may drive users back to specialized platforms. But I have always looked at the massive losses of unicorns as a wealth transfer from venture investors to users, and that’s fine with me, at least for now.

Actors Trademarking Themselves

An article (sorry for the paywall) appeared recently in the Wall Street Journal under the title “Matthew McConaughey Trademarks Himself to Fight AI Misuse–Actor plans to use trademarks of himself saying ‘Alright, alright, alright’ and staring at a camera to combat AI fakes in court.”

The WSJ article said, “McConaughey’s lawyers believe that the threat of a lawsuit in federal courts would help deter misuse more broadly, including for AI video that isn’t explicitly selling anything.” It also quoted the lawyer as saying: “I don’t know what a court will say in the end. But we have to at least test this.”

I think the more accurate statement is: Mr. McConaughey’s lawyers are generating fees (or possibly appeasing a demanding client with speculative legal work) by doing trademark filings on something that is not properly the basis for a trademark. But I guess a client who is a wealthy, successful actor can easily fund speculative trademark registrations, so…everybody wins. Except maybe the PTO. Perhaps I need more clients like that.

To me, trademark law covering a man saying “alright” sounds much more speculative than publicity rights–which are already designed to protect a personal likeness and image. On the plaintiff’s side, the problem is that trademarks are intended to cover the source or origin of goods and services, and a guy–real or fake–saying something in a video clip is neither of those. On the defendant’s side, if the AI truly “isn’t selling anything” then a trademark claim is weak. Trademark infringement is a commercial tort. Publicity rights, in contrast, are designed to redress claims about personal, rather than commercial, reputation. Both are also vulnerable to First Amendment limitations. Mr. McConaughey is, after all, a public figure who has voluntarily placed himself in public view.

The better legal policy would be to advocate for consistent publicity rights via federal law, instead of the crazy-quilt of state law that currently covers it in the US.

I suspect also that one element of the fitting-a-square-publicity-rights-peg-into-a-round-trademark-hole strategy is to leverage international treaties about trademark that might not extend to publicity rights.

Looking for the Registration

I took a look on the USPTO site, because I was curious to see how the claimed goods and services would be described, and found this:

All of the above were filed for products, but have been abandoned, though it’s not clear whether the abandonment may have been “suggested” by the actor’s lawyers.

Now, there are many registrations at the PTO by J.K. Living Brands, which is apparently McConaughey’s trademark holding company. But I could not find any application of the kind described in the WSJ article.

A couple of the live registrations are for “entertainment services” in International Class (IC) 041.

The goods listed for this last one are “Entertainment services, namely, personal appearances by an actor and celebrity; entertainment services, namely, acting services in the nature of live performances and personal appearances by an actor and celebrity; entertainment services, namely, acting services in the nature of live visual and audio performances by a professional entertainer; entertainment services, namely, film and television show production service.”

But a meme of someone saying “Alright” would not be in this goods description.

So, is this just another example of the press drumming up a headline without any particular regard for how IP law works, or a brilliant new legal tactic that IP lawyers need to learn? That’s unclear.

I will update if I find the registration or more explanations of why this tactic should work.

X Sues Over Open Source “Exfiltration”

This week, a new lawsuit cropped up relating to corporate open source releases. 

X Corp. v. Yao Yue and IOP Systems, ND Cal, filed 12/4/2025

X, the-micro-blogging-service-formerly-known-as-Twitter, sued Yao Yue, a former engineer at X, alleging theft of proprietary source code. The facts sprang from the fraught events surrounding Elon Musk’s purchase of the company in 2022. The complaint linked above sets out the allegations in detail.

“Yue had been making repeated requests to open-source certain X Corp. data, as well as made comments that she had exfiltrated X Corp. source code to benefit her new company. …[A]fter Yue’s termination, Yue began contacting [X Director of Performance Engineering] Ms. Strong, asking her to push through a project that had been underway at the time of the Musk Acquisition. The aim of the project was to open-source certain data logs, with the purported goal of educating the broader technology community as to the performance of systems in a company of X Corp.’s scale. Prior to the Musk Acquisition, the project was going through the normal process for approval.  Yue wanted Ms. Strong to “nudge” the project along, claiming that it was simply waiting for someone to sign off on the open-source designation.”

Later, Yue “bragged about how she and other former X Corp. colleagues had exfiltrated X Corp. source code needed to start their own venture, IOP Systems.” 

The complaint alleges that an article in The Verge quoted Yue as an anonymous source, and that “after her termination, Yue used the service elevator to sneak into X Corp.’s San Francisco, CA office and purportedly gather personal belongings,” but used the opportunity to “exfiltrate to a USB drive 6 million lines of X Corp.’s proprietary and confidential source code from her company-issued laptop.”

The software or data in question was developed by X’s Redbird group, which focuses on infrastructure technology.

An academic paper co-authored by Yue described a tool, “LatenSeer,” which can “predict end-to-end latency of a complex internet platform.” The paper stated that “LatenSeer is open-sourced at: https://github.com/yazhuo/LatenSeer, and the Twitter traces will be released upon legal approval.”

As of this writing, the repository is still online–which is interesting, given that if it is infringing, I would expect a DMCA takedown request to have been issued. I checked GitHub’s repository of takedown requests and did not find anything about this repository.

The Open Source Skunkworks

This case highlights a trend that has troubled technology companies for some time: employees push their company to release software under open source licenses, or release the software without company authority, then quit, create a new company, and use the released software to build competing products.

This happened most spectacularly with NGINX–or at least, that is what a 2020 lawsuit claimed. For some details, see my post here: https://heathermeeker.com/2020/07/23/lawsuit-alleges-nginx-conspiracy/. The lawsuit was later dismissed, but its complaint told quite a tale, so I recommend reading the original complaint, if only for entertainment value.

The skunkworks problem illustrates why it is key to have a process for corporate open source releases. Here, because X had an approval process, there is less likelihood of a dispute over whether the open source release was actually authorized. Companies without formal policies can end up in finger-pointing disputes, with potential defendants claiming they had the right to use the software because it was released under an open source license, and the company claiming it did not authorize the release.

The X lawsuit claims misappropriation of trade secrets, and related claims like violation of the California Comprehensive Computer Data Access and Fraud Act and unfair competition, but not, notably, copyright infringement. So it is not clear whether the LatenSeer code is alleged to be infringing.

What Happens Next

The allegations in the complaint are only the plaintiff’s version of events at this point, not proven. The defendants will likely respond by denying the allegations, and the suit will plug along the way lawsuits do.

Investing in the Red Zone: Commercial Open Source and the Bear Market

Note: This is an article from 2020 whose original link has broken. I’ve posted it here for continuity.

When business is on an upward trajectory, investing is not so hard. After all, the stock market always goes up over the long term. But it takes more work to identify good investments during down markets. Fortunately, commercial open source software (COSS) businesses can be a great investment during bad times.

There is plenty of anecdotal evidence for this premise. First, the demand curve. Linux was getting popular before 2001, but its popularity skyrocketed during the Internet bust. That shows the power of COSS on the demand side, and it fits a classic demand curve analysis. When times are bad, and profits are down, buyers turn to lower cost goods. COSS is often developed as a substitute for more costly proprietary software. IT managers tasked with cutting budgets turned to it in earnest beginning with the downturn of 2001.

But the economic profile of COSS is also about the supply side. In difficult times, the companies that use capital most efficiently survive. COSS companies are fundamentally more efficient at running on less capital, and that makes COSS one of the most interesting investments in a down market.

COSS companies leverage capital efficiently for many reasons. First, consider the people who write open source software (OSS). Today, even though OSS is heavily underwritten by industry, an OSS project is still usually the brainchild of an individual or small team who came up with the idea on their own and had no top-down direction to create it. So, the roots of most projects are still in the garage. That means the labor to make the initial development sprint is usually a volunteer effort. This can play out in different ways. Perhaps an engineer starts a side project while employed doing something else. Perhaps an engineer expends time during slow periods, or while between jobs, to create a resume trail, network, or prepare for the next opportunity. The cost here is the engineer’s sweat equity–definitely not a zero cost. But it is, undeniably, an efficient cost. No expensive office lease, no free lunches, no swag. Just work.

Once a project is underway, it gets a slew of free marketing advice from adopters who vote with their feet. Downloads are not dollars, but downloads can tell you a lot. Is the project going in the right direction to meet market needs? Is it reliable? Is it structured correctly? Are its goals and its value properly communicated to the community? As they say, criticism is a gift. All this feedback is a trial by fire. Any project that comes out on the other end of its initial pipeline alive has been road tested in the most ruthless way imaginable.

There are also efficiencies in ongoing maintenance, but this can be a red herring. Lots of people focus on this most obvious benefit of COSS–that the world is your maintenance and support team. But that is the bean-counter viewpoint, and the real efficiency has more to do with the costs of a mature project versus a nascent one. It’s also misleading. In truth, most OSS projects are primarily maintained by their core committer team, and the value of community input is primarily in feedback: bug reporting, feature requests, and evaluation. In fact, projects like Linux, with a wide and active community of committers vying for PRs, are the exception, not the rule. So if you find one of those, it’s probably a great investment. But until that unicorn comes along, there are plenty of projects with great potential that don’t “outsource” their support to the community.

If we take all the above as a given, a COSS company makes ridiculously efficient use of resources in its early stages. Now, suppose you are an investor looking for your highest long-term multiple. Consider that a COSS company does not usually get formed on day one of this process. It usually gets formed after all this initial honing has taken place. So, if you have a choice between investing in a fledgling COSS company, and a proprietary company, that choice is simple. The proprietary company will be using your capital for initial development, feature definition, and road testing–not to mention the financing roadshow. The COSS company already has a foothold on its product and market. So, at a minimum, investing in COSS companies takes place at a better inflection point than for fully proprietary companies.

But that is theory, and now we have to road test the hypothesis: do COSS companies survive downturns well? The analysis below suggests that the answer is a resounding yes.

To investigate this proposition, I looked at approximately 50 COSS companies. These included companies with notable exits and companies with notable ongoing businesses.

Then I identified the major down markets of the last 30 years. My working assumption was that development would have started in the 12 months prior to first release. So, I included in the RED ZONE companies that released in, or within 1 year after, a down market.
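The inclusion rule can be expressed in a few lines. The downturn windows below are placeholders for illustration (the dot-com bust and the financial crisis); the actual analysis covered the major down markets of the last 30 years.

```python
# Illustrative downturn windows (start_year, end_year); placeholders, not
# the full list used in the analysis.
DOWNTURNS = [(2001, 2002), (2008, 2009)]

def in_red_zone(release_year, downturns=DOWNTURNS):
    """A company lands in the RED ZONE if its first release fell in a down
    market, or within 1 year after one (since development is assumed to
    have started in the 12 months before first release)."""
    return any(start <= release_year <= end + 1 for start, end in downturns)
```

So a project first released in 2010 still counts, because its development would have started during the 2008-2009 downturn.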

The RED ZONE shows companies whose initial development took place during one of the downturns identified below.

The results speak for themselves–most of the biggest COSS companies were built on software developed during a downturn–even if we eliminate the Linux distros. Of the companies considered, over 50% were started during a recession. The table appears below.

So, I am excited for whatever comes next. If the market is great, there will be lots of winners. If not, I will be following the winners.

The exceptions are also notable: the wave of acquisitions by Oracle in the 1990s and 2000s, and Kubernetes, Docker, and several Apache projects in the years since the recession of 2008.

Now, to be scientific, I would have to also pick a control group, but I haven’t done that. To be candid, this is like one of those clinical trials where they stop the control group for moral reasons once the initial data comes in. I don’t need to know if proprietary companies would do as well–I don’t think that’s likely. But even if so, the more efficient capital use by COSS companies would still tip the balance. Capital, thoughtfully applied to COSS businesses in bad times, is a big countercyclical advantage.

A note on methodology: The selection of 50 companies was to some degree arbitrary, in the sense that I applied a few rules to identify COSS companies according to our definition. I did not include companies like Facebook or Google, which contribute greatly to open source development, but whose primary business is not open source development. (In other words, primarily proprietary companies with open source activities, rather than vice-versa. For more on this distinction see www.chinstrap.community.) I excluded cryptocurrencies (because their products are not primarily based on providing software), companies that sold only proprietary versions of open source software written by others, and a few others whose business was too complex to map–positively or negatively–to the analysis. Other methodology notes appear in the table.

Is AI the re-Democratization of the Web?

For a few years now, the news has been full of prognosticators screeching about the dangers of AI. And while some of it is potentially concerning, we all know that the news tends to lean into the catastrophic. So, I’ve been thinking about one aspect of the advent of AI that might actually be great – at least for the time being.

Once upon a time, the web was a level playing field. I remember my delight in being able to use algorithmic search results. In those results, even small webpages sometimes came up before big ones.

Then the commercialization of search started–and never stopped.

Don’t get me wrong, there were some things about the commercialization of search that were great. The theory was that people who were willing to pay to show search results typically had more resources and therefore offered better products or more interesting information. And those who complain about targeted ads have surely forgotten the early days where every ad was for Viagra.

Once Upon a Query

For a while, search engines like Google clearly separated algorithmic and paid search results–whereas some search engines leaned more heavily into paid results without identifying them as paid. And each of us used the search engine that fit our needs best. I was an Altavista fan until it got acquired by Yahoo and mothballed. Altavista was the algorithmic search engine beloved by nerds everywhere.

But eventually, paid search took over the web experience. These days, you can’t even search for information about hotels without getting an entire page of results from aggregators–so much so that the official sites of the hoteliers are actually hard to find. And don’t get me started about trying to file government documents; the actual government sites are buried in a slew of ads by charlatans who want to charge you money to file something that is usually just as easy to file yourself.

For Now, AI is Better

Now, recently, we’ve seen some hue and cry in the press about AI taking over search. Let me remind you that, a few years ago, the same hue and cry was about videos taking over search. All these articles seemed to imply that anything taking over search was a danger, because (reading between the lines) search yielded up purer, more factual, or less brain-rot results. These articles bemoaned that the golden days of search were over, and possibly that Google’s ad-related business model is doomed–though given the Google-hating so common in media, it wasn’t clear why that was supposed to be a cause for alarm.

Recently, OpenAI announced a browser called Atlas. Again, the alarm bells sounded for the death of search.

Then I started thinking, is that really a bad thing? When I ask AI a question, the AI answers based on what it knows. And mostly, it knows facts, not the potential for ad revenue. I also get web links as references in the answer. Those references seem to be more like the old days of search, where information took precedence over advertising.

Here’s an example: I searched for a flight to Samarkand. With Google Search, the entire first page was paid results. It found Turkish Air, which was good, but the first hit was Delta.

Now, Delta and Lufthansa are not the best choices for flying anywhere, in my experience, but guess what? Delta–the top result–apparently doesn’t even go there.

Meanwhile, Claude gave me a lot of useful information. But even AI is at the mercy of what is on the web, so it pointed me to an aggregator instead of an airline.

And so, exactly who is surprised that AI is replacing search? I mean, AI is helpful, but the problem is that search is broken. 

Waiting for the Other Shoe to Drop

Now the question is: where will the search ads go? What will be the next business initiative to divert my attention from what I want to see, to what advertisers want me to see? Ads aren’t in AI results yet, because the AI providers are getting paid for using their models. In that sense, Google search is more like the old over-the-airwaves TV model: the service is free, but the ads pay for it. Now, for AI, we seem to be in the equivalent of the early streaming days: pay for the service, but no ads. But we all know what happens next: pay for the service, and see ads, as well.

Meanwhile, let’s enjoy this time, which we might later look back on as a golden age of ad-free AI search results.

Amicus Brief in Thomson Reuters v. Ross

I am excited to announce that I filed an amicus brief in this case (about which I wrote a while ago). The case is on interlocutory appeal to the Third Circuit on topics of protectability of legal headnotes under copyright, and fair use of legal headnotes in AI training. My brief is focused on protectability.

On a personal note: This is the first time I’ve ever filed an amicus brief–or any brief–and the process was a learning experience for me. Writing the argument was fun, but for me, that was only the beginning. It was truly a 90/10 rule: drafting the argument was the easy 10%, and the procedural mechanics were the rest. In the end, with the help of the excellent team at Counsel Press, I was able to get it filed.

I look forward to the court’s eventual decision on this case.

Anthropic Settling AI Class Action

Of all the many pending lawsuits about AI and copyright, the Anthropic class action has been blazing trails in the US courts. The case is still not precisely over, but apparently heading toward settlement.

Update as of 9/5/25: Under the proposed settlement, Anthropic will pay about $3,000 for each of about 500,000 books used from pirate sites, for a total of at least $1.5 billion. “All works in the Class are treated the same in this settlement, entitled to the same pro-rata amount of the Settlement Fund, reflective of the per-work statutory damages remedy authorized by the Copyright Act itself. The allocation for each Class Work will be calculated by dividing the total amount of the Settlement Fund (less fees and expenses) by the total number of Class Works.”
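The pro-rata arithmetic described in the settlement is simple enough to check. The fee deduction in the example call below is a hypothetical placeholder, since the actual fees and expenses are not final.

```python
SETTLEMENT_FUND = 1_500_000_000  # at least $1.5 billion
CLASS_WORKS = 500_000            # roughly 500,000 books

def per_work_allocation(fund, works, fees_and_expenses=0):
    """Divide the fund (less fees and expenses) evenly across Class Works,
    per the allocation formula quoted above."""
    return (fund - fees_and_expenses) / works

gross = per_work_allocation(SETTLEMENT_FUND, CLASS_WORKS)  # 3000.0 before fees
```

That gross figure lines up with the roughly $3,000 per work reported; the actual per-work payout will be lower once fees and expenses come off the top.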

And in case you were wondering, this summary was done almost entirely with Claude, Anthropic’s LLM, with minimal editing. So…

Don't believe me, just watch!

Background

Case: Bartz v. Anthropic PBC, Case No. 3:24-cv-05417 (N.D. Cal.)

Court Docket: CourtListener.com

Plaintiffs: Three named plaintiffs – Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson – filed a class action lawsuit against Anthropic.

The Training: Anthropic downloaded over seven million books from pirate sites and digitized millions of purchased print books to build a “central library of ‘all the books in the world’” to support the training of its large language models. Specifically:

  • Anthropic used millions of copyrighted books to train its Claude LLMs for use with its AI services capable of generating writings that mimic the writing style of humans.
  • Millions of books were downloaded from shadow library sites like Pirate Library Mirror and Library Genesis and stored in a central repository that Anthropic employees could access for model training and internal research.
  • The Court relied in part on an internal Anthropic email in which an employee was tasked with obtaining “all the books in the world” while avoiding as much “legal/practice/business slog” as possible.

Legal Claims

The plaintiffs claimed Anthropic infringed their copyrights by (1) pirating copies of their works for Anthropic’s library and (2) reproducing their works to train Anthropic’s LLMs. The authors argued that use of their books to train Anthropic’s LLMs could result in the production of works that compete and displace demand for their books and that Anthropic’s unauthorized use has the potential to displace an emerging market for licensing the plaintiffs’ works for the purpose of training LLMs.

Procedural Facts

  • Filed: August 19, 2024
  • Complaint: Bartz et al. v. Anthropic PBC – 3:24-cv-05417
  • Judge: U.S. District Judge William Alsup of the Northern District of California
  • Motion: Anthropic moved for summary judgment on an asserted defense of fair use. Judge Alsup issued a mixed decision on June 23, 2025, granting summary judgment on some issues while denying it on others.
  • Class Action Status: The case was certified as a class action on July 17, 2025.
  • Interlocutory Appeal: Anthropic filed a Rule 23(f) petition seeking interlocutory appeal of Judge Alsup’s class certification in Bartz v. Anthropic.
  • Motion to Stay: Anthropic moved to stay the case pending its Rule 23(f) petition for interlocutory appeal of the class certification order. Judge Alsup denied the request on August 11, 2025.
  • Notice of Settlement: A Joint Stipulation for Stay was filed August 25, 2025, indicating the parties were close to a settlement.
  • Order re: Settlement. The case is stayed for the parties to file a settlement by September 5, 2025.

The June 23 Summary Judgment Order:

Granted Summary Judgment (Fair Use Found):

  • Training LLMs: The court concluded that use of the books at issue to train Anthropic’s LLMs was “exceedingly transformative” and a fair use under Section 107 of the Copyright Act. Judge Alsup wrote that the “purpose and character of using works to train LLMs was transformative – spectacularly so” and described it as “quintessentially transformative.”
  • Digitizing Purchased Books: The Court concluded this was fair use because the new digital copies were not redistributed, but rather, simply, convenient space-saving replacements of the discarded print copies.

Denied Summary Judgment (Not Fair Use):

  • Pirated Books: The court found that downloading and copying pirated books for its library was not fair use. Because Anthropic never paid for the pirated copies, the court thought it was clear the pirated copies displaced demand for the authors’ works, copy for copy.

Fair Use Analysis

  • Factor 1 (Purpose/Character): The court noted that authors cannot exclude others from using their works to learn. It noted that, for centuries, people have read and re-read books, and that the training was for the purpose of creating something different, not to supplant the works.
  • Factor 4 (Market Effect): The Court found that the copies used by Anthropic to train LLMs did not (and will not) displace demand for the authors’ works. The court dismissed such concerns by analogizing them to complaining that “training schoolchildren to write well would result in an explosion of competing works.”
  • Partial Victory: Upon weighing all the fair use factors, the Court granted Anthropic’s summary judgment motion for fair use as to the training of LLMs and the digitization (format change) of legally purchased works. The Court, however, denied summary judgment relating to pirated copies and ordered a trial on that issue and any related damages.
  • Trial Scheduled: The court wrote that “We will have a trial on the pirated copies used to create Anthropic’s central library and the resulting damages, actual or statutory (including for willfulness).” Trial was scheduled for December 2025.
  • Potential Damages: Depending on how many titles were involved, Anthropic’s potential liability could reach into the billions.

PHP License Metamorphoses to BSD

The PHP project announced it is moving to a new license.

PHP is a scripting language used for web development. It can be embedded within HTML and used to create dynamic web pages. It is the “P” in the LAMP stack (Linux, Apache, MySQL, and PHP), although some people use the P to refer to Python or Perl. The Zend Engine is the core of PHP.

For years, PHP as a whole has been offered under the PHP License and the Zend Engine License–both permissive licenses. The PHP License is OSI approved, but the Zend Engine License is not. The Zend Engine License has specific naming restrictions related to “Zend” and “Zend Engine,” sometimes referred to as advertising clauses or attribution clauses. Such restrictions were common in early permissive licenses like Apache 1.0, but have since been deprecated by the open source community and do not appear in most recent permissive licenses.

License changes can be a challenge: unless a project uses a contributor license agreement (CLA), it must get permission from all contributors to change the license for their contributions. Major projects have done this a few times, such as the Wikipedia migration and the OpenSSL change, but it is a big undertaking that requires broad socialization and carries the risk that a contributor will object. These changes usually take place in popular projects whose licenses are outdated, ad hoc, or confusing.

But PHP has found a neat trick to avoid having to get permission from every contributor. Like many open source licenses, the PHP license allows the license steward to issue new versions.

  5. The PHP Group may publish revised and/or new versions of the
     license from time to time. Each version will be given a
     distinguishing version number.
     Once covered code has been published under a particular version
     of the license, you may always continue to use it under the terms
     of that version. You may also choose to use such covered code
     under the terms of any subsequent version of the license
     published by the PHP Group. No one other than the PHP Group has
     the right to modify the terms applicable to covered code created
     under this License.

Apparently, PHP, as license steward, is redefining its own license as the BSD license. The announcement says that the BSD License will be adopted as the PHP License version 4 and as the Zend Engine License version 3.

Meta Wins Partial Summary Judgment in AI Infringement Claim

On the heels of the landmark judgment in favor of Anthropic this week, a judge in another pending AI copyright case, Kadrey v. Meta, ruled for the defendants.

Thirteen authors, including most notably Sarah Silverman, sued Meta for using their copyrighted books, downloaded from “shadow libraries,” to train its large language model (Llama). The court explained, “A shadow library is an online repository that provides things like books, academic journal articles, music, or films for free download, regardless of whether that media is copyrighted.” The most notorious of these is called The Pile.

Even though Judge Chhabria ruled for the defendants, the language of his opinion was extremely favorable to the plaintiffs. The court said, for example: “[B]y training generative AI models with copyrighted works, companies are creating something that often will dramatically undermine the market for those works, and thus dramatically undermine the incentive for human beings to create things the old-fashioned way.” This statement points to the final and most important factor of fair use–effect on the market for the original work–and suggests that, if the case were argued correctly, this factor would weigh in favor of infringement.

The plaintiffs had argued that Llama could reproduce snippets of the text of their works, and that Meta’s unauthorized training diminished their ability to license their works for AI training. However, the court stated that “Llama is not capable of generating enough text from the plaintiffs’ books to matter, and the plaintiffs are not entitled to the market for licensing their works as AI training data.”

Keep in mind that this same judge had stated in a previous hearing on this case, “I understand your core theory. Your remaining theories of liability I don’t understand even a little bit.” https://www.reuters.com/legal/litigation/us-judge-trims-ai-copyright-lawsuit-against-meta-2023-11-09/

The court implicitly lamented that the plaintiffs did not assert sufficient facts to withstand summary judgment, noting, “Because the issue of market dilution is so important in this context, had the plaintiffs presented any evidence that a jury could use to find in their favor on the issue, factor four would have needed to go to a jury.”

The court strongly hinted that similar cases could benefit from better advocacy. “As for the potentially winning argument—that Meta has copied their works to create a product that will likely flood the market with similar works, causing market dilution—the plaintiffs barely give this issue lip service, and they present no evidence about how the current or expected outputs from Meta’s models would dilute the market for their own works.” This is what one might call a playbook for bringing a more successful claim.

In the court’s words: “Given the state of the record, the Court has no choice but to grant summary judgment to Meta on the plaintiffs’ claim that the company violated copyright law by training its models with their books. But in the grand scheme of things, the consequences of this ruling are limited.”

This particular case is not quite over yet. But removing the infringement claims is a significant win for the defense.

It may be no coincidence that this case came on the heels of Judge Alsup’s opinion only days ago. The order in this Meta case referred specifically to Judge Alsup’s opinion, disagreeing with some of his fair use analysis.

AI Training Ruled Fair Use

This week, in Bartz v. Anthropic, Judge Alsup (Northern District of California) ruled that training AI large language models (LLMs) on lawfully acquired works of authorship is fair use.

This is a landmark ruling by the highly respected judge, who handled the Oracle v. Google case.

Infringement claims regarding AI come in two basic flavors: that the act of training is infringement, and that the AI producing output similar to the input is infringement. This ruling is only about the first flavor–the training stage.

Two Acts of Copying

In this case, the defendant purchased copyrighted books, tore off the bindings, scanned every page, and stored them in digitized, searchable files. (This is called destructive scanning, which is faster and easier to do than non-destructive scanning that preserves the original book.) It used selected portions of the resulting database to train various large language models. But Anthropic also downloaded many pirated copies of books, though it later decided not to use them for training. These copies were retained in a digital library for possible future use.

The plaintiffs are authors of some of the books.

Anthropic moved for summary judgment based on fair use, and Alsup found the act of training to be transformative, one of the key factors in modern fair use doctrine. Regarding transformation, Alsup cited the Google Books case, one of the key decisions on fair use in the digital age. (Authors Guild v. Google, Inc., 804 F.3d 202, 217 (2d Cir. 2015)).

The Fair Use Analysis

Fair use is analyzed according to four non-exclusive factors set out in 17 USC 107. On the first factor of fair use, the court distinguished between scanning and pirating activities. The court called the destructive scanning of the books a “mere format change,” which supported a finding of fair use. The purpose of the copy was to support searchability. Anthropic only ended up with the digital copies, not the books.

Before buying the physical books, Anthropic “downloaded over seven million pirated copies of books, paid nothing, and kept these pirated copies…even after deciding it would not use them to train its AI.” The court viewed this differently from the scanning: “Such piracy of otherwise available copies is inherently, irredeemably infringing even if the pirated copies are immediately used for the transformative use and immediately discarded.” The court was not convinced by Anthropic’s argument that the use would ultimately be transformative. Citing the recent Warhol case, the order says, “what a copyist says or thinks or feels matters only to the extent it shows what a copyist in fact does with the work.”

The last of the factors in a fair use analysis–usually considered the most important factor–is the effect of the otherwise infringing activity on the market for the original work. The court said, “The copies used to train specific LLMs did not and will not displace demand for copies of Authors’ works, or not in the way that counts under the Copyright Act.” But this was only for the purchased copies; the court reached the opposite conclusion for the pirated copies.

What’s Next?

The case can now proceed to trial only for the pirated copies. For the purchased books that were destructively scanned, the claims were dismissed.

This case is a class action, and the motion for class certification is still pending. If the class is not certified, plaintiffs often give up or settle for small amounts. Law firms that specialize in class actions depend on certification of a large class to increase damages and, accordingly, their fees.

There are about 40 pending cases in the US on AI and copyright, and many of them may have suffered a setback with this opinion. Alsup’s opinion is in line with what many copyright commentators (including me) have proposed: that training is lawful if done with lawful access to the training material. The decision of a district court will not bind cases pending in other districts. However, because Alsup is a well-respected jurist, his analysis may persuade other courts to follow suit.

The court did not reach the second flavor of infringement claims, regarding output, because it was not at issue here. But many commentators are skeptical that such claims will be successful for properly trained models. ML models typically do not produce “copies” in the sense intended by copyright law. Claims regarding output may therefore be relegated to trademark, publicity, and trade dress claims, which are outside the ambit of copyright law.