Scott Peterson’s recent article, Making Compliance Scalable in a Container World, is a good explanation of some of the challenges and opportunities in open source compliance for software that is delivered via containers. The article advocates the use of container registries to store and deliver portable compliance information. It also reminds us that containers continue to be a challenge for open source compliance.
Today, much software is delivered via containers. A container is a standalone, executable package of software that includes everything needed to run an application: the application software, libraries for the operating system or language platforms, system tools, and configuration settings. Containerization is the “plug-and-play” way of running software, without complicated or time-consuming installation.
Most computer users are familiar with installing applications on a desktop or laptop computer. It takes a while to install them, and for an application to work, the computer must already have a variety of standard software that the application needs in order to run on the computer platform. If installing an application is like cooking from a recipe, running a container is like opening a takeout box.
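As a sketch of what this packaging looks like in practice, here is a hypothetical container build file. The image name, file names, and commands are all invented for illustration:

```dockerfile
# Hypothetical Dockerfile: the resulting image bundles the operating-system
# userland, the language runtime, third-party libraries, the application
# itself, and its configuration into one deliverable package.

# Base layer: a slim Linux userland with a Python runtime already installed.
FROM python:3.11-slim

WORKDIR /app

# Third-party libraries are baked into the image at build time.
COPY requirements.txt .
RUN pip install -r requirements.txt

# The application software itself.
COPY . .
CMD ["python", "main.py"]
```

Everything the application needs ships inside the image, which is what makes running it “plug-and-play” for the recipient.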
A container registry is a service that facilitates delivery of container images. It helps developers build and manage containers, and decide who can access them. So, a registry can be used to deliver containers and track information about them.
Show Me the Code
A lot of open source compliance is about managing information. Complying with open source licenses requires you to know what open source software is in your product (its bill of materials), and what licenses cover all of it.
The average software application installation on your desktop or laptop copies software into local directories, and once installed, those files are relatively easy to examine. In these old-style installations, it is not too difficult to figure out which binaries correspond to the source code that generated them. So, if you needed to know what open source software was included, you might be able to figure that out from an installed software directory.
But containers are opaque; once they are created, it is nearly impossible to “unpack” them to determine their bill of materials. And it turns out that associating binaries with source code is one of the keys to open source compliance.
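That said, tooling can recover some of this information. As a crude first pass at a bill of materials, an image filesystem that has been extracted to disk can be scanned for conventionally named license files. The sketch below is a minimal illustration only; the file-name list is incomplete, and purpose-built license scanners do far more:

```python
import os

# Common license-file names; a real scanner would match many more patterns
# and also inspect package-manager metadata (dpkg, rpm, pip, npm, and so on).
LICENSE_NAMES = {"LICENSE", "LICENSE.txt", "LICENSE.md", "COPYING", "NOTICE"}

def find_license_files(root):
    """Walk an unpacked container filesystem rooted at `root` and return
    the relative paths of files that look like license notices."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name in LICENSE_NAMES:
                hits.append(os.path.relpath(os.path.join(dirpath, name), root))
    return sorted(hits)
```

Running this over an image’s extracted layers (for example, after `docker save` and untarring) yields a starting inventory, though it says nothing about binaries whose license files were never installed in the first place.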
Delivering source code upfront, where feasible, is always the best way to comply with open source license notice requirements. That is because the license notices are “baked in” to the source code. In contrast, delivering object code only, without the source code, is more work – requiring distributors to pick out the license notice files and pass them along.
Many companies struggle with providing notices for containerized software, in part because the notice requirements of the major licenses are outdated. For example:
GPL2 requires that redistributors “give any other recipients of the Program a copy of this License along with the Program.”
BSD requires notices to be “in the documentation and/or other materials provided with the distribution”
MIT requires a copy of the license “in all copies or substantial portions of the Software”
These notice requirements were all written before the web came into common use, much less containers, so they presume a means of delivering software that is about 25 years out of date.
The question of how to deliver the notices “along with” or “provided with” or “in all copies” of a container can be complicated. Every year, notice delivery gets harder to accomplish, given the increasing number of open source components, and the way software is deployed in the real world – often with no user interface and with containers being spun up and down on demand. There seems little appetite for revising the licenses to include more workable notice requirements, so many users are left in a quandary about how to do the right thing.
Using Containers as a Force for Good
“We should build a container ecosystem with compliance that is portable across registries.”
Scott K Peterson (Red Hat)
Mr. Peterson’s article goes into some detailed suggestions about how to use container registries and automated tooling to help with compliance by preserving the information that associates containerized code with its source code. However, any solution using container registries needs to ensure that the means of delivery will actually fulfill the notice requirements for the licenses, and that may be a difficult problem to solve with tooling.
Fortunately, most community open source enforcers do not pursue foot-fault violations when the source code has been made freely available in an effective manner, such as posting it on a publicly available web page – or, presumably, delivering information via a container registry. Such notices may or may not be in the copy or along with the software, exactly, but they help users get access to the source code nonetheless. That fulfills the spirit of the open source license, if not necessarily its exact conditions.
Is there a way to bring best practices in line with literal license requirements by leveraging container registries? Not exactly, or at least not yet. But more information is always better than less, and more automation is the only realistic way to pursue open source compliance properly. Tooling developments in this area are worth tracking.
On June 8, 2020, Lynwood Investments CY Limited brought a lawsuit in the Northern District of California against various NGINX entities, various individuals, Runa Capital, E.venture Capital Partners II, LLC and F5 Networks, Inc., alleging the improper release and subsequent use of the popular open source software NGINX (pronounced “EngineX”), as part of a conspiracy to misappropriate corporate assets. All of the following is according to the complaint:
The NGINX software was developed primarily by Igor Sysoev, who is named as a defendant in the complaint, while he was employed at Rambler Internet Holdings LLC (“Rambler”), a Russian entity. (The plaintiff in the lawsuit, Lynwood, is an assignee of Rambler.) Sysoev also developed a commercial extension called NGINX Plus. This development was done in the course of employment and using Rambler resources. Rambler alleges that it owned the software by virtue of the work-made-for-hire doctrine (albeit its Russian equivalent).
Sysoev was employed by Rambler from 2000 until 2011. NGINX software was first developed in 2001, and released in 2004 under the BSD license. For the next seven years, Sysoev continued to be the primary author of NGINX code, making commits during what the complaint describes as “business hours.”
Sysoev’s supervisor was Rambler’s CTO, Maxim Konovalov. The complaint states, “Konovalov’s key senior management position at Rambler enabled him to provide Sysoev beginning in 2008 with an ecosystem within Rambler that was free from oversight or accountability.”
The complaint goes on to say, “Even though Konovalov and the rest of the Disloyal Employees were fixated on misappropriating the NGINX Enterprise, which they viewed as a highly valuable business, Konovalov uniformly gave the NGINX Software a rating of “1” on a scale of “1-5” with “1” being deemed “worthless” or “no value.” Konovalov’s designations were designed to … lull Rambler into complacency with respect to the value of the NGINX Software” and to avoid “any serious oversight by Rambler’s senior management or board of directors.”
While still employed at Rambler, Sysoev, with Konovalov and several other colleagues, formed a new company called NGINX, Inc., and obtained financing from defendants Runa Capital and E.Venture Capital (then, BV Capital). The complaint alleges that Runa Capital and E.Venture “knew … that Rambler maintained the ownership rights to the NGINX Software” but nevertheless “assisted and encouraged Sysoev and Konovalov while they were still employees at Rambler, to breach their duties to Rambler … for the benefit of the fledgling business that Sysoev and Konovalov were forming.” The complaint makes much of a trademark application filed by NGINX, Inc. on its first day of incorporation claiming first use of NGINX in commerce on March 1, 2011, a date on which the team were still employed at Rambler.
The funding of the new entity was not without its challenges. “Greycroft pulled out of the closing because of its concerns over Rambler’s ownership of the NGINX Software…. In contrast, Runa Capital and BV Capital went forward and closed on the Series A financing on or about October 23, 2011 after conducting their own due diligence, with full knowledge that Rambler was the legal owner of the entire NGINX Enterprise ….”
NGINX, Inc. was sold in 2019 for $670 million to F5 Networks, Inc., also named as a defendant. The complaint alleges that as a result of due diligence, “F5 was aware prior to the consummation of the merger that the conspirators had stolen the NGINX Enterprise from Rambler…”
Rambler and Lynwood learned of the defendants’ alleged conspiracy when a whistleblower provided evidence to them.
This complaint, which alleges many claims including civil conspiracy, is long, complex, and makes for dramatic reading. It promises to be just the beginning of activity in this case. It was previously reported that Rambler was pursuing a criminal case in Russia based on the facts of this lawsuit, until Russian state lender Sberbank, which owns 46.5% of Rambler, exhorted Rambler’s board of directors to stop the pursuit. Rambler apparently dropped the criminal case in late 2019, instead pursuing “negotiations” with F5.
This complaint describes a situation that is, unfortunately, becoming more common – developers have been known to write code on company time, release it under an open source license without proper authorization, and proceed to form a competing business leveraging the code, claiming the right to use the code under the open source license. In such cases, the question of authorization of the release is often key, and points up the need for companies to have a formal open source release policy. But the level of misdoing alleged in this complaint is unusual, reaching up to the CTO level. The complaint is also interesting in that it names the venture investors and buyers of NGINX, Inc. – an unusual move that seeks to circumvent the corporate veil. That is a troubling development for companies contemplating due diligence on potential investments and acquisitions.
The complaint, which was not published with the announcement but was quoted liberally in Jelruida’s press release, alleged that Apollo’s software was “cloned from Nxt.” It further states, “Apollo copied source code files from the Nxt Software one-on-one into the Apollo Software. In addition, Apollo modified source code files of the Nxt Software and included these in the source code for the Apollo Software.”
The press release goes on to say, “In 2018, Apollo launched as an Nxt clone but failed to adhere to the JPL license terms. After ignoring four cease and desist letters, Apollo belatedly complied in August 2018. The compliance proved short-lived, however, and in October 2019 Apollo replaced the JPL in its software with a proprietary license. This constitutes a violation of the JPL, which … requires that any derived work continue to be distributed only under exactly the same license. This viral “copyleft” requirement is a cornerstone of many open source licenses, most notably the GPL.”
The press release further stated that “A writ of summons has been filed in the Netherlands and the case will be heard on August 25 in Amsterdam. The court case has implications for the widespread illicit practice of cloning blockchain code, and open source software in general….”
The JPL has a “10% airdrop requirement.” The JPL FAQ says, “as a consequence of the 10% Airdrop requirement, internal use (a private or permissioned clone) is not allowed except for an evaluation purposes no longer than three months. This is because a private/permissioned blockchain clone cannot fulfill the Airdrop requirement.” The relevant language in Section 3.4 of the JPL is fairly impenetrable to the lay reader and specific to the blockchain context:
3.4 If the Covered Work is a DLT Software [i.e. a distributed ledger], after your modifications it must continue to work with the original DLT Instance without violating the consensus algorithm or resulting in a permanent fork…. If your modifications result in a different DLT Instance you must satisfy the following airdrop requirement:
3.4.1 The token holders from the original distributed ledger instance shall be allocated a portion (an “airdrop”) of the tokens in that new DLT Instance proportional to their token balances.
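As a purely illustrative reading of Section 3.4.1 (the function, numbers, and simplifications below are my own assumptions, not anything taken from the license), the pro rata arithmetic might look like this:

```python
def airdrop_allocation(original_balances, new_supply, airdrop_fraction=0.10):
    """Split `airdrop_fraction` of `new_supply` among the original instance's
    token holders in proportion to their balances. The 10% default reflects
    the JPL FAQ's "10% airdrop requirement"; rounding, dust, and other edge
    cases are ignored for clarity."""
    total = sum(original_balances.values())
    pool = new_supply * airdrop_fraction
    return {holder: pool * balance / total
            for holder, balance in original_balances.items()}
```

So if a new DLT instance mints 1,000 tokens, 100 of them would be distributed to the original instance’s holders in proportion to their existing balances.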
These terms violate the open source definition on its face, given that they require a fee, albeit in crypto-coins (violating plank 1 of the OSD), and discriminate against DLTs (violating planks 6 and 10). Moreover, most open source licensors do not describe their own licenses as “viral.” So, Jelruida’s efforts to characterize the prosecution of this dispute as advancing the spirit of open source may be difficult to credit.
In response, Apollo brought an action for declaratory judgment (Case 7:20-cv-00186 Document 1 Filed on 07/08/20 in TXSD) asking a US federal court to find:
that Apollo is the owner of the copyrights in its computer software and that no ownership and/or co-ownership rights therein extend to Jelruida;
that Jelruida must cease and desist from asserting that it owns and/or co-owns any rights therein.
The complaint alleges that the Nxt software claimed as original by Jelruida was “in actuality, a public domain work which was anonymously deposited into public domain for the benefit of the entire open source community and not as something that Defendants or any of them could assert exclusive ownership or exclusive control rights over.”
It is unclear whether the reference to “public domain” refers to the absence of a copyright interest or other intellectual property interests. Outside the US, it is notoriously difficult to confidently conclude that any copyright has fallen into the public domain before its expiration.
The DJ complaint contains a number of statements that contradict the Jelruida press release. According to the complaint, the Nxt blockchain was “licensed and license fees valued at over $5 million (U.S.) dollar were fully paid” to Jelruida, who “now claim that no license or other rights exist in any aspect of the previously fully paid open source use.”
Of course, a truly “open source” use would not require a license fee, but the JPL is not exactly open source, which might explain the seeming inconsistency of this statement.
The complaint also alleges, “Since 2017 and on a continuing basis and with over 200 person-years of software engineering, Plaintiff continually wrote, re-wrote, improved and extended …its APL software. Apollo’s source code and its overall functionality have significantly changed….The Apollo team wrote and re-wrote significant replacement and significant additional computer software. As a result, Apollo is radically different from anything owned or claimed to be owned by [Jelruida]. All copyrights of [Apollo] are owned solely by [Apollo].”
An allegation of violation of an open source license is not exactly germane to ownership of code, and does not imply “co-ownership.” Contrary to the persistent and pernicious “virality” meme still alive in open source discussions, violating an open source license does not alter ownership of derivative works of the licensed code. However, Apollo seems to be alleging that the JPL was irrelevant, because it rewrote the code to the extent necessary to avoid any copying, and thus avoid the need for a copyright license — a difficult thing to do even in a “clean room” process.
Oddly, Apollo is asking the court to declare that it owns all of the code in its software. DJ actions normally only ask a court to declare that an allegation of infringement is not true. There is a wide gulf between owning every line of one’s software and merely having the right to use it — the former being a fairly rare subset of the latter.
The complaint goes on to state, “Put simply, it is against the custom and usage of the open source community … for [Jelruida] to assert that Plaintiffs may not continue to use and continue deploy the software of Plaintiffs for the benefit of its users in the United States. No trade secrets and no patents exist in any of the software of Defendants and all ideas, concepts, processes, algorithms, systems, and methods disclosed in the published software of Defendants have been and now are in the public domain.”
Of course, ideas, concepts, processes, and so forth are elements expressly not governed by copyright law under 17 U.S.C. § 102(b), so the statement appears to be saying that Apollo is not infringing any copyrightable element in the Jelruida code.
What is this Case About?
We won’t know until the facts become clearer, but the crux of the dispute seems to be whether Apollo re-wrote the Nxt code or copied it. This set of lawsuits may develop into a dispute over what elements are protectable under copyright, which would be interesting given the pendency before the US Supreme Court of Oracle America v. Google. But given that the JPL is not a true open source license, this case promises to further muddy the waters as to the meaning of “open source.” It may also provide a view into the highly competitive world of crypto-currency development. Most blockchain systems are in fact heavily based on true open source software, so allegations of non-compliance are likely to arise in the field, even after this case is over and done.
For transactions lawyers, negotiating intellectual property infringement indemnities is an unfortunate and often painful fact of life. Allocation-of-risk terms are notoriously difficult to resolve, and often are the last issues in a deal to be agreed. Business people consider them abstruse “lawyer work,” and lawyers consider them business issues that get short shrift in deal memos. But for transactional lawyers, negotiating indemnities is part of the life we have chosen.
As open source software has become integral to technology development, negotiating allocation of risk terms for IP infringement has become even more challenging. Open source software does not fit well into the traditional paradigm for allocating IP infringement risk, so a difficult negotiation topic has become even more difficult.
Clients, often frustrated with this process, always ask two questions: What is “market”? And what is reasonable? The answers are anecdotal at best, and usually not the same for both questions.
This analysis is intended to help lawyers and business people understand how to analyze allocation of risk terms for third party open source software in commercial transactions. I refer to this issue as the “IP Question.” This analysis posits a negotiation between a vendor of a technology product that contains third party open source software, and a potential customer.
But the principles of the IP Question can also be applied to indemnities in other kinds of transactions, such as investments, acquisitions or joint development deals. My thesis is that vendors today are regularly asked to bear an unreasonable amount of liability due to a misunderstanding of the IP Question. While that may seem on its face like a windfall to customers, it leads to unsustainable business agreements for vendors and customers alike.
Beyond Good and Evil
One of the hurdles in negotiating the IP Question is that negotiating parties tend to view it as a moral issue. In the moral conflict, the vendor views an indemnity against third party open source infringement as an unfair cost. The customer wants the vendor to be morally responsible for any harm that may befall the customer as a result of using the product. Regardless of which view you take, it is clear that reducing the IP Question to a moral question makes it a zero-sum game, and those are always difficult to resolve. While the moral view may be tempting, it is not very useful for getting deals done.
The IP Question manifests when, at some point in the negotiation, the customer utters the phrase, “You need to stand up for your product!” And once this gauntlet has been thrown, and the IP Question has thus been reduced to a moral dilemma, the vendor and customer have no choice but to ham-handedly exercise their relative bargaining power to resolve it, without either side engaging in the burden of critical thought. If the vendor is small and the customer is big, the customer wins. If the vendor is big and the customer is small, the vendor wins. But if you are an intrepid soul willing to engage in a more thoughtful approach, this analysis will help you “think outside the box” about the IP Question.
Indemnities are Costly
In fact, indemnities are not moral choices, but economic mechanisms to share risk. An infringement indemnity reduced to its purest form is an insurance contract. If I get hurt, you pay me. No one makes the mistake of thinking that insurance is free of charge, or a moral dilemma. But customers often expect vendors to bear broad indemnities for their products at no additional price.
Suppose that a third party insurer were willing to write a policy to indemnify the customer against third party open source intellectual property infringement. What would happen?
There would be a money premium for the insurance
The policy would contain limitations on coverage, and the premium would be priced accordingly
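To make the insurance analogy concrete, here is a toy pricing sketch. The function name, loading factor, and every input below are invented for illustration; real underwriting is far more involved:

```python
def indemnity_premium(claim_probability, expected_loss, coverage_cap, loading=0.3):
    """Toy annual premium for an infringement indemnity modeled as insurance:
    the expected payout (capped by the coverage limit) plus a loading factor
    covering the insurer's costs and profit. All inputs are hypothetical."""
    expected_payout = claim_probability * min(expected_loss, coverage_cap)
    return expected_payout * (1 + loading)
```

Note how the limitations on coverage drive the price: capping coverage at $1M against a $5M exposure shrinks the expected payout, and hence the premium, which is exactly the trade-off a real insurer would put on the table.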
Unfortunately, in negotiations between vendors and customers on the IP Question, infringement indemnities are not negotiated in this way. If they were, the result would be simple:
The customer would choose whether or not to pay extra to get the indemnity.
The price of the indemnity would be calculated based on the type and amount of coverage the customer chose.
The vendor and the customer could, if they chose, share the cost of the premium by negotiating a discount.
Indemnities are difficult to negotiate because they are never reduced to a priced deal point in this way. But why not, when doing this is so obviously sensible? Mainly because third party insurance for such risks is generally not available, making most vendors self-insured. Also, when parties negotiate the terms of contracts, they treat indemnities as undisclosed “legal” terms rather than essential deal terms — meaning they have merely kicked the can down the road for the lawyers to argue over.
But it does no good to lament this phenomenon. If the vendor and customer will not view the IP Question as a pure economic decision, then how do they actually come to agreement?
What is Not Relevant
First, let’s understand what the IP Question is not. There are two similar issues that arise when negotiating IT procurement deals that should not be conflated with the IP Question.
Vendor Compliance. The first of these is the vendor’s open source license compliance. Because open source compliance claims are usually cast as copyright infringement claims, non-compliance is potentially an IP risk, but not a risk arising from third party actions. A vendor who supplies a customer with third party open source software must follow the license terms that facially apply to that software. That duty does not arise from the customer contract; it arises from the open source licenses. The customer contract may require the vendor to pay legal fees to defend a compliance claim against a customer, if that claim arises from a failure of the vendor to comply with the open source license terms when it delivers the product to the customer. But few vendors attempt to avoid responsibility for their own compliance. The IP Question, instead, is about third party open source software that infringes third party IP rights, even when the vendor has complied with the facially applicable license. In other words, it is a risk the vendor cannot control.
For example, suppose a project is on GitHub and bears an Apache 2.0 license, but the project contains code that was improperly contributed by a person without the right to make the contribution, or a cut-and-paste of third party code under GPL. That could result in copyright or trade secret claims against re-distributors or users of the project. It is one cause of the IP Question. Alternatively, a project may, unbeknownst to the project’s maintainers, infringe third party patent rights. That can result in claims of patent infringement against users of the code. But neither of these risks arises from malfeasance by the vendor, or by the customer.
Performance Warranties. The second is warranties or indemnities arising from the performance, or non-performance, of software. These are commercial warranties, and vendors often undertake them for third party open source elements, because the vendors are engaging in quality control, maintenance and support for their products that happen to include the third party open source elements. These are not IP claims at all, and they are not part of the IP Question.
Two Theories: Control and Internal Pricing
Now that we know the boundaries of the IP Question, and accepting the dismaying premise that we cannot simply price it out as insurance, we consider other ways to rationally allocate the risk it represents. In contracts, there are a couple of common theories as to why one party or another should bear risk: control and economic efficiency.
Control. Most lawyers focus on the control theory, under which the party who is best able to control the risk bears the liability. The cost of bearing that risk will tend to change the bearer’s behavior toward reducing the risk. This approach works well for risks like products liability. If a vendor manufactures a light switch, the vendor can make sure the light switch is properly built, will not short circuit and injure the user, and is properly tested for compatibility with local electrical standards. Moreover, the vendor can easily insure against products liability risk. So, it makes sense for the vendor to undertake that risk, because of its relatively high level of control.
The difference with the IP Question, of course, is that the vendor has almost no control over whether third party open source software infringes IP rights. So, if a vendor sells a smart light switch, it may include open source software that interfaces with a mobile app to set automatic on and off cycles via Bluetooth. The vendor probably gets that software from a third party open source project.
Customers will argue that the vendor can make a build-or-buy decision to address this risk. For example, instead of getting the Bluetooth control software from an open source project, the vendor could write its own software. Obviously, that would raise the development cost for the light switch, perhaps substantially, and the vendor would pass that cost on to the customer. And a sophisticated vendor will also point out that home-grown software is unlikely to be as reliable or secure as existing open source software. Moreover, if the light switch is intended to conform to a larger specification for IoT, like a Google Home or Apple Home system, writing home-grown routines will tend to make the device incompatible, or require the vendor to reinvent the wheel to figure out how to make it compatible. In such cases, if the vendor exercises control over developing the software, it will actually make the product worse. For these reasons, control is not a very useful theory to resolve the IP Question.
The flip side of the control argument is that risks better addressed by the customer should be borne by the customer. For example, if a customer elects to use the vendor’s product in high risk activities or in ways that violate the agreement between them, the customer would usually bear liability for those uses, because the customer can best control those decisions. While indemnities from customers to vendors are not as common as vice-versa, these allocations of risk sometimes take the form of express terms limiting the vendor’s liability in the contract, rather than allocating them to the customer. Risks that are not expressly allocated in contracts will fall to the parties in accordance with background law.
The control theory is also significantly out of alignment with the way open source infringement problems are solved in practice. If there is an IP problem with an open source project, particularly a project that is widely used, that problem is not solved by one vendor. If there were an IP problem with Linux, or Hadoop, or Firefox, that problem would be solved by the maintainers of the project — probably with plenty of help from the community. For a single vendor using that project to try to resolve it would be inefficient and counterproductive. At a minimum, it would cause the vendor to have to fork the code to engineer around the problem, defeating many of the benefits of including open source software in the product. In fact, licenses like GPL3 actually limit the possibility of doing so, by requiring licensees to clear patent rights for everyone if they clear them for themselves via a license. So, in sum, the vendor has no reasonable way either to prevent IP problems or to resolve them.
Allocation of Internal Risk Premium. The other theory of risk allocation is based on economic efficiency, and under this theory the vendor essentially loses the argument. In any deal, one party is usually paying and the other is getting paid. The party making the profit from the transaction can therefore more easily bear the cost of the indemnity, and take a reasonable reserve against its profits as self-insurance. The problem with this approach, of course, is that the vendor is likely to build this reserve into the product cost, thereby raising the product price. So, while a customer may always win this argument, it may be a Pyrrhic victory, and it places a high burden on the vendor to amortize an unknown risk.
These two approaches once worked fairly well for products developed by a single vendor — the cathedral rather than the bazaar. But in today’s landscape of heavy use of open source, this analysis is broken.
What is the Product?
Now, finally, we can find a path through the IP Question, but only if we leave the old ways behind. Contemporary IT systems are increasingly vertically dis-integrated. Once upon a time, you may have bought a computer and all of its software in one transaction from one vendor, but those days are ancient history. Yet vendors and customers are still negotiating the IP Question as if IT systems are monolithic technology of the 1980s. When the customer utters the battle cry, “Stand up for your product!”, the next question is: what exactly is the product?
Vendors in today’s computing world are, more than ever, systems integrators of layered technology solutions that largely consist of third party open source software and IP. Taking an extreme example, a company like Red Hat, which sells subscriptions to Linux distributions, is mostly selling quality control. The software is free, but the QC has a price. Most IT products today, of course, are not so starkly reliant on third party open source software, but even the most “proprietary” products today are not developed in a vertically integrated manner. That’s a good thing; it means that because of the wealth of open source software in existence today, vendors no longer need to reinvent the wheel. The job of vendors today is to select and integrate open source components with their own technology, add their own value in the form of unique product functionality, and provide quality control.
So, it makes sense to develop a more nuanced notion of what constitutes a vendor product. In the old, monolithic model, it is everything that the vendor delivers to the customer. But in the more nuanced model, the vendor delivers a substantial amount of third party software as a courtesy to the customer — much of which is not reasonably considered part of the vendor’s product.
Consider, for example, a software vendor contract in which the vendor, FOOBAR, Inc., delivers its application, FOOBAR for Linux. Once upon a time, that product would have been delivered on a diskette for the customer to install on its own Linux system. In those circumstances, the vendor would be asked to indemnify for IP infringement arising from the FOOBAR application, but not the Linux operating system. This business expectation is by no means unique to open source; if the product were FOOBAR for the Windows operating system, no customer would expect the vendor to indemnify for IP infringement arising from Windows.
The Product of Today
Today, FOOBAR will just as likely be delivered as part of a virtualized image or container that includes both FOOBAR and Linux. It’s the same product, packaged differently. It would, of course, be possible for the customer to get Linux on its own, free of charge, but that would only invite technical problems; the vendor is better placed to ensure that the versions of Linux and FOOBAR it delivers are compatible. The delivery of the operating system is a convenience for both parties, but it doesn’t mean the operating system is part of the vendor’s product. This approach to delivery is possible only because the operating system is open source, so the vendor has the right to distribute it free of charge.
Why, then, does this result in a customer demanding infringement indemnities for the entire package? It is because the parties have failed to correctly define the vendor’s product in light of contemporary delivery practices. To correct this, those negotiating IT deals need to understand the difference between software and the environment in which it runs.
Picture the simplest “birds and bees” version of the modern software landscape: an abstraction with only two computing layers, the application and the operating system. It is crude, but it gets the point across. It’s no wonder the parties have trouble understanding the scope of the product. It is quite possible that the vendor will make a warranty about the performance of the entire container, including both the FOOBAR application and the operating system, because it is the vendor’s job to deliver a quality integration solution. But this should not drive the definition of the product for the purpose of the IP Question.
The problem of defining the product is more stark when one considers a more realistic version of what vendors today actually deliver.
Any application sold today rests on a formidable stack of open source software that makes it faster, more fault-tolerant, flexible, and powerful. These are the breakthroughs that allow companies to run thousands or millions of applications in parallel and maintain consistent databases across them. That stack of software is sometimes referred to as the “LAMP stack” (Linux, Apache, MySQL, and PHP, Perl, or Python), but today even LAMP is an oversimplified view, so let’s just call it the computing landscape.
If we want to think rationally about the IP Question, we need this more nuanced view of the technology landscape. And it isn’t hard to understand. Suppose you want to buy a boat. A boat dealer sells you the boat. You cannot sail the boat without an ocean. Is the boat dealer responsible for the ocean? Of course not. Today, the open source stack is like the ocean. It is the basis on which software products are developed. But it is not the vendor’s product.
That open source landscape now represents the backbone of the world’s information technology. So, when a customer demands that the vendor indemnify against IP infringement for this “product,” the customer is essentially making one vendor take responsibility for the entire technology landscape of the world.
Moreover, the customer probably already has all this software within its organization. It gets the open source stack from each of its vendors, as well as its own internal IT activity. The use of a product of one vendor rarely contributes more than a fraction of the marginal risk arising from the use of that software.
The UCC Gets this Right
There is a long-standing precedent for this view in the Uniform Commercial Code.
Unless otherwise agreed a seller who is a merchant regularly dealing in goods of the kind warrants that the goods shall be delivered free of the rightful claim of any third person by way of infringement or the like but a buyer who furnishes specifications to the seller must hold the seller harmless against any such claim which arises out of compliance with the specifications.
UCC Section 2-312(3)
Moreover, although the UCC does not expressly provide for this, it has long been market practice in technology contracts to absolve the vendor of liability for products that infringe only because they meet enunciated industry standards that both parties elect to use. This makes sense in light of the UCC. A customer would usually specify that it wanted to buy products that meet industry standards, and vendors will conform to industry standards because their customers demand it.
If we view the open source landscape as part of the customer’s specifications, and not the vendor’s build-or-buy decision, then it makes more sense for the vendor to avoid liability for the landscape. Alternatively, we can view the software landscape as an industry standard, for which neither the vendor nor the customer should undertake liability on its own.
But even if we can agree that the vendor should not be responsible for the software landscape, that doesn’t release us from the IP Question entirely, because we still need to understand where the vendor product ends and the landscape begins.
The Build or Buy Decision
For negotiating parties to solve the IP Question, they need to separate the definition of the vendor product from the software landscape stack. Of course, if the parties have settled on the exact stack to be delivered, the product and the stack can be listed ad hoc in their agreement. But for those seeking a more general approach, one useful point of reference might be the definition of “Linux” promulgated by the Open Invention Network (OIN). OIN is a patent pool covering Linux, but its definition of Linux is broader than merely the Linux kernel, and includes many of the major components in the landscape stack. https://www.openinventionnetwork.com/joining-oin/linux-system/
Even after the parties make that distinction, there will be some third party open source software embedded in the vendor’s product that is not part of the landscape. Examples might include small routines to do generic calculations, or libraries that are included in the vendor product executable. The vendor should be far more likely to accept liability for this software, given that it has made more granular decisions to use these elements in its products.
Some Provisions for Your Toolkit
Below are some suggested contract provisions to help differentiate the vendor product from its open source landscape.
“Open Source Computing Stack” means any open source software created by third parties that is so referenced in the specifications for the computing environment of the Product in the applicable purchase order, which software may include operating systems such as Linux, web server software such as the Apache web server, language engines such as Java, PHP, Python or PERL, and database software such as MySQL. The Open Source Computing Stack includes without limitation all software included in the definition of a Linux System promulgated by the Open Invention Network.
Vendor will have no liability under [reference indemnity provision] for infringement of third party intellectual property rights by the use of the Open Source Computing Stack; provided, however, that the foregoing sentence will not limit Vendor’s liability for compliance by Vendor with the terms and conditions of the open source licenses applicable to the Open Source Computing Stack.
Alternatively, focusing on open source software already in use by the customer — which is likely to include much of the open source computing stack:
Vendor will have no liability under [reference indemnity provision] for infringement of third party intellectual property rights by the use of any open source software made generally available by third parties that is in use by Customer prior to the delivery of the Vendor Product hereunder; provided, however, that the foregoing sentence will not limit Vendor’s liability for compliance by Vendor with the terms and conditions of the open source licenses applicable to such open source software.
Non-disclosure agreements (NDAs) are some of the most “plain vanilla” technology agreements around. They are usually short, and don’t vary dramatically in content from one set of boilerplate to another. Technology companies sign NDAs all the time with little or no negotiation.
In fact, despite their brevity and simplicity, NDAs impose significant obligations, and recipients of information should avoid taking them on when they can. But they are also a fact of life. Think of them as a chronic disease you can’t get rid of, but have to manage.
The name of an NDA can be misleading. NDAs usually contain both non-disclosure and non-use provisions. It may be workable to avoid disclosing documents given to you, but it is harder to avoid disclosure of information given to you, whether that information was communicated in documents or in oral discussions. And it is a tricky task not to use information given to you. You can’t “unlearn” information. So while the agreement is called a non-disclosure agreement, complying with the non-use requirements is the harder task. This problem is sometimes referred to as taint — being exposed to information you can’t forget but can’t use, even if you might have come up with it independently.
To make it worse, NDAs are intrinsically expensive contracts to breach. Whereas most commercial agreements contain limitations on liability, the point of an NDA is to put the recipient on the hook for legal liability. So, violating an NDA can expose you to high damages.
Most NDAs specify a limited purpose for use of information. Most often, that purpose is to negotiate a more detailed agreement. But sometimes the purpose is more specific, such as evaluating technical or business information. Receiving technical information under NDA is riskier than receiving general business information. So while you may sign NDAs routinely to negotiate commercial deals, think carefully about your risks under NDA if you intend to evaluate a product, particularly if you will be exposed to software source code or detailed technical specifications for technology you may plan to develop independently. That can place you in the difficult position of “proving a negative” — that you did not use the information in breach of the NDA.
To be safe, you should talk to a lawyer before signing an NDA — but that’s easy for a lawyer to say. In the real world, legal review costs money and time. If you are presented with an NDA to sign, particularly if you are a startup, you may not have the resources to have a lawyer review the agreement. Even if you could engage a lawyer, you might not have any bargaining power to negotiate the NDA terms. That’s particularly true when you are using the NDA to negotiate your first big customer deal.
Here are some tips for managing the chronic disease that is NDAs.
Ask for a 2-way NDA. Some companies have 1-way and 2-way forms, and as you might imagine, the 1-way forms are more aggressive in favor of the company presenting the NDA to you. Reciprocal terms are not always fairer, of course. In any NDA, one party will act more in the role of discloser and one will be more in the role of recipient, so equal terms won’t have an equal effect. Even most 2-way NDAs are written somewhat in favor of the discloser or recipient, and clever companies will have two different 2-way forms to present to you, depending on which side they expect to be on. But 2-way obligations tend to “keep people honest” and avoid some of the most draconian terms that appear in 1-way forms.
Segregate the information. When you receive information that will be subject to the NDA, store it in a special-purpose, password-protected location that is accessible only to those who need to see it. Do not make copies. This can be more challenging than it sounds — remember that email cc’s and routine backups can result in lots of copies. If you make paper copies, shred them after use. Or, refuse to accept electronic copies. If you do get electronic copies, avoid forwarding them to personal email accounts where they might persist. Delete them when you no longer need them (including from desktop trash cans and email deleted-items folders). Give similar treatment to the notes you take transcribing orally disclosed information. When you delete the copies, keep a record that you did so, such as a note to file or a note to the other side saying you have done so.
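For teams keeping NDA materials on a shared Unix-style system, the segregation step can be as simple as a dedicated directory locked down to a single owner. This is a minimal sketch under assumed paths (the directory name and location are hypothetical); it illustrates the principle and is not a substitute for your organization’s access-control tooling.

```shell
# Hypothetical "NDA materials" directory, locked down so that only the
# file owner (the person with a need to know) can list, read, or write it.
NDA_DIR="${NDA_DIR:-./nda-materials}"   # assumed location; adjust as needed
mkdir -p "$NDA_DIR"
chmod 700 "$NDA_DIR"                    # owner-only; no group or world access
```

The same idea extends to group-based access (a dedicated group plus mode 770) when more than one person has a need to know.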
Limit what you receive. Avoid receiving information that might overlap with your product roadmap. If you unexpectedly get information that you are concerned will “taint” you, return or destroy it and tell the other side in writing that you have done so. Better yet, ask first what information the other side plans to send, and if you think it will taint you too much, decline to receive it.
Implement a Document Retention Policy. Keeping all documents forever is not a good idea, and a systematic plan to routinely delete unused documents is an important shield against trade secret claims. But deleting documents when you know a legal claim is looming is usually unlawful, so you should have a policy for deletion of documents that is content-neutral. That way, confidential information of others will be less likely to persist for too long, even if you fail to delete it when the NDA requires you to.
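As an illustration of what “content-neutral” means in practice, a retention rule can be a scheduled job that deletes files based purely on age, never on what they contain. The directory and retention period below are assumptions made for the sketch; an actual policy should be set with counsel and must account for litigation holds.

```shell
# Hypothetical content-neutral retention job: remove any file in the
# archive that is older than the retention period, judged by age alone.
ARCHIVE_DIR="${ARCHIVE_DIR:-./archive-demo}"   # assumed location
RETENTION_DAYS="${RETENTION_DAYS:-1095}"       # ~3 years; a policy choice

mkdir -p "$ARCHIVE_DIR"
find "$ARCHIVE_DIR" -type f -mtime +"$RETENTION_DAYS" -delete
```

Run from cron or another scheduler; the key property is that the deletion trigger is age, not content, which supports the content-neutral character of the policy.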
Use special-purpose consultants for risky reviews. If you have to review high-risk information, instead of receiving it under NDA, you might agree with the discloser to engage a third party consultant to do the review. There, the consultant, and not you, would be subject to the most significant obligations of the NDA, and would only communicate to you the results of the review.
You, Too, Can Learn to be a Lawyer
If you want to learn more about how to review and negotiate NDAs, you can learn to do it the same way lawyers learn. Any smart and diligent person can learn to review NDAs, and in fact, reviewing NDAs is a common task for junior lawyers as they cut their teeth on technology transactions practice. Below is a quick summary of the most common issues in NDAs. If you have the opportunity to negotiate some of these points, give it a try. But you may want to tread lightly: a fierce negotiation over an NDA can sour follow-on negotiations. Your potential business partner may — rightly or wrongly — consider them “standard” agreements to which no one should object. (If you want to see an example of a standardized NDA, take a look at the Waypoint NDA.)
Definition of Confidential Information. The broader the definition of Confidential Information, the more favorable the NDA is to the discloser. Most NDAs define Confidential Information with a long laundry list of items that is meant to be broad. But a few NDAs are limited to cover specific types of information for the particular deal, for example, source code, product designs, or customer lists.
Writing requirements. One of the biggest variations in NDAs is called a writing requirement. Writing requirements are very favorable to recipients. They mean that the NDA does not cover any information that is disclosed orally, such as at meetings, unless it is embodied in a document or summarized in writing promptly after the meeting. Disclosers will be concerned that failing to write down all confidential information is a “foot foul” that will cause valuable information to escape coverage. Examples of clauses implementing a writing requirement are:
Confidential Information must be communicated in writing.
Oral disclosures must be reduced to writing within 30 days after disclosure.
Exceptions. All NDAs make exceptions to confidentiality. These are sometimes styled as exceptions to the definition of Confidential Information, and sometimes as exceptions to the confidentiality obligation. These exceptions roughly track the limits of misappropriation in trade secret law. They exclude from coverage information that:
was publicly known prior to disclosure
became publicly known after disclosure other than due to the fault of the recipient
was already in the possession of recipient at the time of disclosure
was disclosed to the recipient by a third party without a duty of confidentiality
is independently developed by the recipient — note here that deleting the information in a timely way will help you prove that you have engaged in independent development
Screened Disclosure. As noted in the “chronic care” points above, some NDAs specifically say that any disclosure can only take place after a written request describing the information, and the written consent of Recipient.
Exceptions to Disclosure. NDAs often expressly allow certain kinds of disclosure:
Upon court order or subpoena, but recipient must cooperate to give the discloser an opportunity to challenge the order or seek confidential treatment
As required by law (such as SEC filings), but recipient must cooperate to seek confidential treatment or redaction of the information in public filings
To accountants or attorneys operating under their own NDA or an equivalent duty of confidentiality, in connection with due diligence or audits (note that accountants and financial auditors often have a higher duty under law than would be imposed by an NDA)
To affiliates, but may require recipient to have the authority to bind them to the NDA terms
Disclosure to potential acquirors and investors, under their own NDA
Degree of Care to Keep Confidential. These terms usually track the requirements for treatment of information to qualify for protection under trade secret law.
No less than reasonable measures to protect against disclosure
At least those measures that the recipient takes to protect its own similar information
Prompt notice of any unauthorized use or disclosure and assistance in stopping it
Residuals. This is the single most significant variation in NDAs (short of omitting the non-use provision entirely, which is rare, but always worth checking). A residuals clause is extremely favorable to the recipient. It says that the recipient may use ideas, information and understandings retained in the memory of the recipient’s personnel. It is usually an exception to the non-use requirement, but not the non-disclosure requirement. Residuals clauses are written in many different ways and need to be reviewed on a case-by-case basis.
Parties. Pay attention to how the parties to the contract are defined. If the parties include affiliates or other parties, the sphere of disclosure might be broader. (For example, “Recipient means Company XYZ and all its affiliates.”) If you are disclosing, consider limiting disclosure to a single recipient entity. Also, NDAs normally allow disclosure only to certain categories of persons:
Those with a need to know for the defined purpose
Employees who are bound to confidentiality agreements or equivalent obligations
Contractors who sign confidentiality agreements (often subject to approval of the agreement by discloser)
Duration. In a sense, all NDAs have two durations. One is the period during which information will be exchanged. This is sometimes called a capture period and is often the same as the term of the agreement. Although some NDAs continue indefinitely, many are limited to a capture period of one year. The other duration is the period during which information, once disclosed, must be kept confidential. These range from indefinite to short, typically 2-5 years. Keep in mind that, as a discloser, you may not be able to protect your information from use by other parties once it is free for unrestricted use by any one party. Limits of 2-5 years work for information that loses its value quickly; business plans and customer information may be stale by then. However, technical information can often retain value for a much longer period.
Warranty Disclaimer. Disclosure of information is usually made as-is, with no warranties as to quality or accuracy.
Return of Materials. NDAs usually require return or destruction of the information upon termination of the disclosure period, or earlier upon discloser’s request. Disclosure of information under NDAs is usually voluntary, which means that a sudden termination of the disclosure period is usually not considered an issue.
Neo4J brought a trademark infringement suit against Purethink, LLC, an erstwhile reseller of Neo4J’s enterprise products, and its related entity iGov, after the reseller agreement between the parties terminated. The defendants counterclaimed that the trademark had been abandoned.
Neo4J offers both a community edition under GPL/AGPL and a commercial edition with additional features provided only under commercial terms. The defendant argued that Neo4J’s trademark was unenforceable because Neo4J used the mark on its open source software as well as its enterprise product. The defendant characterized licensing under GPL and AGPL as “naked licensing” (i.e., licensing of a trademark without exercise of sufficient quality control), which can lead to a loss of rights in the trademark.
The court rejected the argument, saying, “Defendants do not raise any allegations indicating the Plaintiff has failed to exercise actual control over licensees’ use of the trademark….[T]he fact the Plaintiff distributed Neo4J software on an open source basis pursuant to the GPL and AGPL is not, without more, sufficient to establish a naked license or demonstrate abandonment.”
This result is not unexpected, but it is a useful precedent. Open source licenses like GPL are not trademark licenses, and therefore cannot be “naked” trademark licenses. When it comes to stewarding brands, it is the actual work of maintaining quality control, and not the software copyright license terms, that matters. There are many companies that implement open core business models with community and enterprise editions. While those companies, like any company, are wise to properly manage their brands, that management is by no means antithetical to an open source licensing model.
PolyForm Project has launched the last of its first tranche of licenses: Perimeter and Defensive. Each of these licenses is used to make source code available while placing certain limitations on its use.
The Perimeter license prohibits use of the software in a manner competitive with the software. The Defensive license prohibits use in a manner competitive with the products and services of the company licensing the software.
This adds to the PolyForm suite of source-available licenses released last year. Here is a handy summary of the differences among them:
PolyForm Noncommercial: use to enable R&D and non-profit use
PolyForm Free Trial: use to enable a limited evaluation
PolyForm Internal Use: an end user license with source code
PolyForm Small Business: use to enable commercial use by small organizations
PolyForm Perimeter (no competition with the software): use to limit competition with the licensed software
PolyForm Defensive (no competition with the licensor): use to limit competition with all of the licensor’s related products
More information about the PolyForm Project is here.
In 1996, a salvage company called Intersal discovered the wreck of a pirated slave ship, The Queen Anne’s Revenge, that ran aground off the coast of North Carolina in 1718. Intersal was under a salvage contract from the legal owner of the wreck, which was the State of North Carolina.* Intersal contracted with videographer Frederick Allen to document its efforts. Allen took photos and videos of the recovery for more than a decade and registered copyrights in his works. (Presumably their contract did not provide for an assignment of copyrights that would be typical in such a contract, but those facts were not outlined in the Supreme Court decision.) When North Carolina published some of Allen’s videos and photos online, Allen sued for copyright infringement. The state asserted sovereign immunity as a defense.
US states are immune from most kinds of legal liability in civil lawsuits. However, there are various exceptions. That is why, when one sees lawsuits against state agencies for negligence or other civil claims, they are usually styled with an individual state governor or other official as a defendant. In the US, sovereign immunity is a complex doctrine: our federal system includes states, which exercise basic sovereign powers, and a federal government, which has only limited powers; the two often overlap, and each kind of authority enjoys some level of sovereign immunity under doctrine or statute.
The question in this lawsuit was which doctrine trumped: sovereign immunity, or copyright law, as reflected in a federal law called the Copyright Remedy Clarification Act (CRCA). The US Constitution, Article I, §8, gives Congress authority to grant copyrights. The CRCA, in turn, relies on that power, saying that for claims of copyright infringement, a state “shall not be immune, under the Eleventh Amendment [or] any other doctrine of sovereign immunity, from suit in Federal court.” 17 U.S.C. §511(a). The Supreme Court unanimously decided that the CRCA was unconstitutional to the extent it authorized the claim in Allen, because Congress lacked constitutional authority to take away the state’s immunity. The Court’s opinion left the door open for Congress to amend the CRCA to make it constitutional.
In this opinion, the Court relied on Florida Prepaid Postsecondary Ed. Expense Bd. v. College Savings Bank, 527 U. S. 627, which invalidated provisions of the Patent Remedy Act, a law allowing patent infringement claims against states, similar to the CRCA for copyright. When weighing congressional authority against sovereign immunity, “There must be a congruence and proportionality between the injury to be prevented or remedied and the means adopted to that end.” City of Boerne v. Flores, 521 U. S. 507, 520. In Florida Prepaid, the Court defined the scope of unconstitutional patent infringement as “intentional conduct for which there is no adequate state remedy.” In contrast, most copyright infringement claims have no requirement of intent, though some kinds of damages can be enhanced if infringement is willful.
* If you want to read about ownership fights under maritime law, and the technological challenges of shipwreck diving, try Shadow Divers.
In the last few weeks, it seems every organization I have ever had contact with is telling me their COVID-19 plans. While I am impressed that my mortgage company has plans to keep me safe and healthy — even though I have never spoken to nor interacted with any actual human representative of this company since my loan was sold to them nearly 10 years ago — the pandemic now seems to be developing into yet another reason for disingenuous customer outreach, rivaling even the California Consumer Privacy Act in its ability to produce unwanted email in the first month of 2020.
We technology transactions lawyers barely require human contact in the first place, so the most serious long term effect for us may be that we finally have to understand force majeure clauses. For those of you non-lawyers intrepid enough to read this post to this point, that phrase is a legal term of art roughly equivalent to an “Act of God,” and it sets rules about when parties to a contract, particularly suppliers, can breach contracts but not be held legally liable, a legal doctrine sometimes referred to as excusing performance.
Like many of the clauses in the miscellaneous section of a contract, force majeure clauses tend to go unread — or worse, become like the socks collected in the discontinuous time-space continuum of our clothes dryers — an ever-growing laundry list of items no lawyers are brave enough to remove, in case they “miss” something and are later blamed for the omission. But, like all contract clauses, force majeure clauses should be written thoughtfully, or they have the potential to backfire.
What if you say nothing?
First, chances are high that the state statute governing your contract already contains some useful rules about force majeure. That statute may not use the words force majeure at all, so it might be easy to miss. The common terms of art are frustration of purpose, impracticability and impossibility, but modern rules favor impracticability over the older impossibility doctrine. The UCC, for example, says:
§ 2-615. Excuse by Failure of Presupposed Conditions. Except so far as a seller may have assumed a greater obligation and subject to the preceding section on substituted performance:
(a) Delay in delivery or non-delivery in whole or in part by a seller who complies with paragraphs (b) and (c) is not a breach of his duty under a contract for sale if performance as agreed has been made impracticable by the occurrence of a contingency the non-occurrence of which was a basic assumption on which the contract was made or by compliance in good faith with any applicable foreign or domestic governmental regulation or order whether or not it later proves to be invalid.
(b) Where the causes mentioned in paragraph (a) affect only a part of the seller’s capacity to perform, he must allocate production and deliveries among his customers but may at his option include regular customers not then under contract as well as his own requirements for further manufacture. He may so allocate in any manner which is fair and reasonable.
(c) The seller must notify the buyer seasonably that there will be delay or non-delivery and, when allocation is required under paragraph (b), of the estimated quota thus made available for the buyer.
UCC 2-614, Substituted Performance, addresses more specifically unexpected disruptions in availability of carriers and means of payment. UCC 2-616, Procedure on Notice Claiming Excuse, describes the process of notice of the application of Section 2-615.
Impracticability, Impossibility, and Frustration of Purpose
Courts do not require that performance actually be impossible to apply the doctrine embodied in the UCC, merely that it be commercially impracticable, such as due to excessive cost. But the doctrine has its limits. For example, in Watson Labs. v. Rhone-Poulenc, Inc., 178 F. Supp. 2d 1099 (C.D. Cal. 2001), plaintiff Watson sought relief for breach of a pharmaceutical supply agreement. The supplier in the contract, an RPR affiliate, operated a manufacturing plant for the pharmaceutical product in question. At the time the agreement was signed, the plant was already operating under an FDA consent decree, resulting from “violation of numerous… Good Manufacturing Practices” established by FDA regulations, and providing that the FDA could shut down manufacturing in the event of future violations. After the plant was actually shut down and the supply disrupted, the buyer sued for breach of contract and seller invoked the force majeure clause, but the court did not excuse defendants’ failure to perform because the shutdown was foreseeable, and within the defendant’s reasonable control. The court probably gave weight to the fact that the contract was expressly intended to meet all of the plaintiff’s requirements for the drug, and that both parties knew there was no other approved supplier.
Frustration of purpose happens when the supplier is willing to perform, but one of the contract’s basic premises fails. This is sometimes referred to as creating an implied condition to performance. If you are a lawyer, you probably remember from law school the old coronation cases, such as Krell v. Henry, 2 K.B. 740 (1903), in which a man rented a room temporarily to watch the coronation parade of King Edward VII. The coronation was rescheduled due to the King’s appendicitis, so the purpose of the contract was frustrated, and the renter was excused from renting the room. Notably, the application of this doctrine resulted in excuse for the buyer, not the seller.
In sum, these doctrines are meant to handle the unexpected — facts that the parties could not have reasonably foreseen when they entered into the contract. They are intended to be general in nature, so they are flexible enough to handle circumstances that are difficult for parties to predict.
Drafting Specific Force Majeure Clauses
One might be tempted this month to change all the contract forms in existence to include “pandemic,” and consider the matter handled, but that’s probably not the right long-term approach. The endlessly growing laundry list is doomed to failure, because it pits the lawyer’s imagination for catastrophe against that of reality, and in that respect, reality always wins. Despite the famously pessimistic imagination of most lawyers, none of us knows what the next crisis will be. So, think hard about what you write, particularly if you are a seller, because you may be foregoing some of the automatic relief from performance the statute would otherwise provide.
But force majeure clauses don’t merely define what events make performance excusable; they can be used to set the details of what happens when performance is excused. For example, they can outline a specific process for notice of shipment delays, set preferences for allocation of orders among customers in the case of shortage, or set the process to cancel an order or contract if the event persists. These specific remedies and procedures need to be based on the facts of the deal.
Force majeure clauses can also seek to expand the application of the doctrine to specific contingencies, such as unexpected changes in the cost of the inputs of goods, which may not be captured by background doctrines of impracticability. For example, in the early 2000s, “worldwide semiconductor shortages” were a popular addition to the laundry list, due to a phenomenon that Wikipedia charmingly calls “chip famine.” Otherwise, “[e]conomic events, such as failures of markets, are very difficult to assert as events of force majeure…” (J. Kelley, “So What’s Your Excuse? An Analysis of Force Majeure Claims,” 2 Texas Journal of Oil, Gas, and Energy Law 91, 110 (2006).)
If you want to guide a court’s finding of frustration of purpose, you can draft wisely to that effect as well. The purpose of a contract is often set out in its recitals — yet another reason to write them correctly and specifically for the deal.
Does Force Majeure Cover COVID-19 Disruptions?
Of course, that’s a trick question because it can’t be answered generally, only with reference to specific facts. The existence of a virus standing alone would not trigger a force majeure clause, but resulting actions or developments could be considered force majeure. For example:
Travel restrictions imposed by government or suggested by health authorities
Embargoes, export or import restrictions
Broad failure of supply chains
Closure of public buildings or cancellation of events
Shortages of products due to hoarding
Shortages of medical services or supplies due to pandemic conditions
Courts generally tend to interpret express force majeure clauses narrowly, and will not excuse performance merely because of the potential existence of a performance problem, or a performance problem with simultaneous causes other than force majeure. General economic downturns that make performance unprofitable do not usually qualify — that’s a risk of doing business. The court will look for a specific external cause that could not be reasonably avoided. For an example of a detailed test used by one federal court, see Transatlantic Fin. Corp. v. United States, 363 F.2d 312, 315 (D.C. Cir. 1966), a case involving the 1956 closure of the Suez Canal.
As with everything in life, the practical steps to addressing force majeure due to the COVID-19 pandemic in Q1 2020 are less exciting than reading overwrought news headlines about it. If contracting parties today have concerns about invoking force majeure clauses, those concerns need to be analyzed on a case-by-case basis. The relevant law is state law, so one can’t merely rely on the UCC, even though most state statutes roughly follow it; one must check relevant state cases for more detailed rules. Here are a few citations to relevant statutes for the most common jurisdictions, as a starting point. To find the relevant case law, it’s helpful to turn to an annotated version of the statute, or look for cases that cite the statute.
As for the rest of it, now is the time to be grateful for whatever free time you have recaptured from cancellation of your doubtless excessive professional commitments, and to do your taxes, plant a victory garden, use up those groceries in your freezer, and watch the new episodes of Better Call Saul. And wash those hands.