What are Data Cartels and how does BeefLedger solve for them?

Cartels have been recognised as having deleterious impacts on consumers at least since Stigler’s 1964 classic “A Theory of Oligopoly”. This harm has typically taken the form of collusion to control production volumes and manipulate prices.

In our work on supply chain optimisation in an era of digitalisation, we have identified a specific category of cartel risk: data cartels. Here, we extend the foundational thinking on collusive activities in the pricing and production of goods and services to the production, storage and dissemination of data (the data function) itself.

How does BeefLedger solve for them? Project Kratos

To this end, BeefLedger has stood up Project Kratos, enlisting the community at large (the “buy-side”, so to speak) to become a fundamental part of whole-of-ecosystem integrity. In other words, the community can take part in attesting that the data on the network is credible and reliable.

Project Kratos addresses this set of interrelated common or mutual knowledge challenges by embedding the following design principles into the common data ecosystem, underpinned by decentralised consensus protocols:

  1. Data requirement priorities are user-determined.

  2. Consumers have an inalienable interest in the integrity of supply chains, and supply chain data.

  3. All data proposals require a multi-party governance structure and protocol to be valid. No-one can act alone.

  4. The data commons governance – as enabled by the underlying blockchain architecture – should tend towards empowering self-governance and organic adaptation, with decision-making capacity vested in network members.

Multisig & Schelling Points

At the heart of the BeefLedger data community is a two-part process by which data is proposed, validated and published to the blockchain:

  1. A multi-sig procedure, so that any data proposal requires a number of actors to share responsibility for proposing and witnessing the information. We apply an organic philosophy here so that self-governing “thresholds of trust” can be determined over time by the community at large. In other words, no one size fits all (see the multi-sig sketch below); and

  2. A whole-of-community attestation procedure (optional) underpinned by the economics of “Schelling” points (see diagram below). Data validation is a common utility resource and, as a service, is something proposers and the community at large need to pay for. By rendering the value of data validation explicit, we create novel, transparent mechanisms that incentivise “truthful convergence” (see the Schelling-point sketch below). Incidentally, these mechanisms also effectively valorise reputations, which over time contribute to a rich store of trust for actors that demonstrate informational virtue within the ecosystem.
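
As an illustration of the first process, the following Python sketch shows how a community-governed “threshold of trust” check over a data proposal might look. The names (DataProposal, authorised_signers, required_threshold) and the structure are illustrative assumptions only, not BeefLedger’s actual implementation.

```python
# Hypothetical sketch only: types and parameter names are illustrative
# assumptions, not BeefLedger's production code.
from dataclasses import dataclass, field

@dataclass
class DataProposal:
    payload: dict               # the supply chain data being proposed
    proposer: str               # account that originated the proposal
    signatures: set = field(default_factory=set)  # accounts that have co-signed

def is_valid_proposal(proposal: DataProposal,
                      authorised_signers: set,
                      required_threshold: int) -> bool:
    """A proposal is valid only once enough distinct, authorised actors
    (excluding the proposer) have witnessed it. The threshold itself is a
    community-governed "threshold of trust", not a fixed constant."""
    witnesses = (proposal.signatures & authorised_signers) - {proposal.proposer}
    # No one can act alone: at least one independent witness is always required.
    return required_threshold >= 1 and len(witnesses) >= required_threshold
```

Because the threshold is a parameter rather than a constant, different communities – or different classes of data – can converge on different thresholds over time, consistent with the “no one size fits all” philosophy above.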
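
For the second process, here is a minimal sketch of a Schelling-point settlement round: attesters stake against their answers, the majority answer is taken as the converged result, and attesters who matched it recover their stake and share the validation fee plus the stakes forfeited by the minority. Again, the function and parameter names are assumptions for illustration, not the actual protocol.

```python
from collections import Counter

def settle_attestation_round(attestations: dict,    # attester -> reported answer
                             stakes: dict,          # attester -> amount staked
                             validation_fee: float) -> dict:
    """Illustrative Schelling-point settlement (a sketch, not BeefLedger's
    protocol). Because each attester expects others to report what they
    independently observe, the truthful answer is the natural focal point."""
    votes = Counter(attestations.values())
    majority_answer, _ = votes.most_common(1)[0]

    winners = [a for a, answer in attestations.items() if answer == majority_answer]
    losers = [a for a in attestations if a not in winners]

    # Winners recover their stake and split the fee plus forfeited stakes.
    pool = validation_fee + sum(stakes[a] for a in losers)
    payouts = {a: 0.0 for a in attestations}
    for a in winners:
        payouts[a] = stakes[a] + pool / len(winners)
    return payouts
```

Repeated rounds of this kind are also what allows reputations to be valorised: attesters who consistently land on the converged answer accumulate a track record that the ecosystem can recognise and reward.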
