The Larger Vision of RChain
RChain released a nice video today describing the higher-level goals of their project. While increasing the throughput of decentralized systems is RChain's primary goal, the rho calculus represents a foundational and long overdue missing link in how we build and manage software at scale.
While the cryptocurrency crowd is convinced that many blockchain projects are just elaborate funding schemes, many blockchain projects regard crypto the same way. The larger story here is that advanced blockchain projects like RChain are tackling fundamental limitations of lambda computing, limitations that have allowed Big Tech to accrete so much power. Amazon AWS is essentially now just a giant mainframe provider… which makes me wonder when AWS customers will suffer the same fate as former IBM customers.
AWS Lambda is *not* Lambda
Actually, the code you shove inside AWS Lambda is the lambda part. The surrounding AWS, Google or Azure cloud ‘magic’ is a higher order calculus, which Big Tech is happy to control for you. The question is whether your IT organization understands what is really afoot or just blindly hands the future of your company over to them.
Lambda Calculus Refresher
Lambda is a method for developers to neatly abstract away millions of complex instructions into tidy function calls. Lambda is Turing-centric by design: imperative code is managed in a massive call tree and scanned depth first to generate a linear stream of instructions.
The hierarchical lambda tree below (yellow/amber) is a way to arrange code; walking the tree (purple) converts the execution order into a linear stream of instructions (green):
While pure lambda fans claim that lambda is not doing anything with state, in reality it is performing all sorts of stateful modifications to an undo/rollback log posing as the call stack as it walks the tree. As such, there can be only one process per tree or things get corrupted.
So let’s ignore the function tree and just focus on lambda’s output — a sequence of instructions e.g. our Turing tape:
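The tree-to-tape conversion is easy to sketch. This is a toy illustration only (the node encoding and instruction names are made up for this post, not any particular VM's format):

```python
# A minimal sketch: walk a call tree depth-first and emit a flat
# instruction stream, the way a lambda-style evaluator linearizes code.
def linearize(node, out=None):
    """node is either a literal or a tuple (op, *args)."""
    if out is None:
        out = []
    if isinstance(node, tuple):
        op, *args = node
        for arg in args:                 # children first: depth-first walk
            linearize(arg, out)
        out.append(("CALL", op, len(args)))
    else:
        out.append(("PUSH", node))
    return out

# f(g(1, 2), 3) flattens into a linear "tape" of stack instructions
tape = linearize(("f", ("g", 1, 2), 3))
```

Note how the nested structure disappears entirely: the tape is just a sequence, which is exactly why a single shared call stack can replay it.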
Then along come things like mixed-language environments and asynchronous programming, which basically require us to split the codebase into separate runtime sections:
The problem of course is that something needs to “wire” these chunks up again. This is where things like flow-based programming come in. Most VMs will hide this magic behind the scenes, but devs are vaguely aware of some sort of housekeeping/coordination going on like this:
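As a rough illustration of that wiring, here is a hand-rolled version of the coordination a VM would normally hide: a plain queue acting as the "red line" between two runtime chunks (the producer/consumer names are just for this sketch):

```python
import queue
import threading

# Two code "chunks" wired together by hand with a queue -- the kind of
# housekeeping coordination a VM usually hides behind the scenes.
wire = queue.Queue()

def producer():                 # chunk A: emits results onto the wire
    for n in (1, 2, 3):
        wire.put(n * n)
    wire.put(None)              # sentinel: no more data coming

def consumer(results):          # chunk B: consumes from the wire
    while (item := wire.get()) is not None:
        results.append(item)

results = []
t = threading.Thread(target=producer)
t.start()
consumer(results)
t.join()
```

The queue, the sentinel, and the thread join are all infrastructure that has nothing to do with the business logic in either chunk, which is precisely the problem.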
So what are those red lines? And what if we had a way to see or even directly code all that? The first problem is that lambda developers normally don’t have much in the way of access to the runtime except maybe through a debugger.
Serverless: Where Dev Meets DevOps
The problem here is that the software industry has been so focused on lambda that it forgot about everything else, leaving the whole runtime consideration basically a free-for-all where you gotta roll your own (hence the mad need for AWS or Kubernetes DevOps experts these days). In fact, the Docker folks (especially FaaS) are facing a suspiciously similar integration/orchestration diagram:
This all seems a bit nebulous so CIOs are faced with either (1) training developers to do more operations or (2) training operations experts to do more development. In other words, an IT department full of expensive purple unicorns.
New York State of Mind
From a (lambda) coding standpoint, the above problem seems rather intractable… and it is. However, NYC finance moved beyond the world of lambda decades ago with the advent of the spreadsheet and so money managers immediately recognized the problem:
Of course, there's a massive difference between pushing data around between cells and von Neumann machines. Or is there? Spreadsheets reflect a profound duality between code and data, one best described by Robin Milner's research on the pi calculus, and this is a bit of a mind-bender for traditional software developers. In fact, while the Docker folks are studying "Function as a Service", Milner's seminal paper is called "Functions as Processes". This suggests functions can take on a life, or 'state', of their own instead of only coming alive when called by lambda. The Kubernetes effort is steaming into the same waters with "reactive operators" or "Operator as a Service".

Sure, Excel cells can declaratively communicate state as numbers and such, but those outside of finance (or memoization caching) normally don't equate lines of software code with a 'state', by which I mean a return value or its associated side effects (most of which happen in heap memory, which has rather primitive data management capabilities).
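A toy spreadsheet-style cell captures the duality: the formula is code, the value is state, and the two live in one structure that recomputes reactively. This is a minimal sketch of the idea, not how Excel or any reactive library actually implements it:

```python
# A toy "spreadsheet cell": holds a value, and if built from a formula,
# re-runs that formula whenever an input cell changes.
class Cell:
    def __init__(self, value=None, formula=None, inputs=()):
        self.formula, self.inputs = formula, list(inputs)
        self.dependents = []
        for cell in self.inputs:
            cell.dependents.append(self)       # register for updates
        self.value = (value if formula is None
                      else formula(*[c.value for c in self.inputs]))

    def set(self, value):
        self.value = value
        for dep in self.dependents:
            dep.recompute()                    # push the change downstream

    def recompute(self):
        self.value = self.formula(*[c.value for c in self.inputs])
        for dep in self.dependents:
            dep.recompute()

a = Cell(2)
b = Cell(3)
total = Cell(formula=lambda x, y: x + y, inputs=(a, b))  # like "=A1+B1"
a.set(10)   # total updates reactively, no one "calls" it
```

Notice that nothing ever invokes `total` the lambda way; the state change itself drives the code.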
Code to Data
Coding software is about applying a series of state transitions and ultimately this state must be communicated somewhere. Lemme ask this: which is more important, your code or the resulting state change(s)? While many developers would argue "of course" the former, database administrators might pick the latter. In other words, if I already have the data, why do I need the SQL anymore? An awful lot of SQL can be reverse engineered from an existing database (this also gets into automation, a slightly different topic). So let's pretend that our lambda code is writing state to some sort of persistent store (e.g. database or memory or message queue or screen):
Saving data seems doable, but then how does lambda code receive it?
The short answer is that lambda doesn’t. Which means that going from code to data back to code again does not compose. Of course you can poll state at regular clock intervals but this spins up a processor and introduces unnecessary lag. You could also sniff an event stream of data changes until you find something you are interested in, but what if you missed an event? What about the sender clobbering prior events? What about when events depend on other events? There are lots of ways this could all go wrong.
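The clobbering problem is easy to demonstrate: if the consumer only polls, intermediate state transitions vanish before anyone observes them. A contrived sketch:

```python
# Why polling loses information: the consumer samples state at intervals,
# so intermediate writes are clobbered before it ever looks.
state = {"balance": 0}
samples = []

def writer(values):
    for v in values:
        state["balance"] = v     # each write overwrites the last

def poll():
    samples.append(state["balance"])

writer([10, 20])   # two state transitions happen...
poll()             # ...but the poller only ever sees the final one
writer([30])
poll()
```

After this runs, `samples` holds only two of the three transitions; the write of 10 was never observed by anyone.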
Rho Calculus to the Rescue
Actually there are countless ways this can go wrong, and an entire database industry has been fighting these battles for decades. NYC finance also spent billions trying to retrofit lambda languages like Scala and Python (and a good bit of OCaml and Haskell) with various infrastructure to solve this dilemma. It is logical to assume that blockchain finance will enter this same morass.
So let’s get back to our earlier question: what are the red lines?
This brings us to the rho calculus. Fortunately for RChain, Lucius (Greg) Meredith formalized the calculus behind what is now commonly referred to as reactive software: essentially the idea of attaching database-like ‘triggers’ to data through dual code/data structures called ‘channels’. For blockchain, this makes it possible to implement some notion of transaction isolation for higher performance without going full database. There are also opportunities to minimize recomputations through dependency management and often this becomes the heart of the system. But this all requires some notion of topology (you have to be able to describe in advance where those triggers are supposed to live) and this is where the work of Mike Stay comes in. Other projects like Cardano meanwhile have focused on the various ways to operate the communication channels themselves (chi calculus) but exposing this sort of sausage to the developer implies complexity that you probably want to remain hidden.
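A drastically simplified sketch of the channel idea (my own toy model, not RChain's actual semantics): data sent on a channel fires any trigger waiting there, and data that arrives early simply parks on the channel, so code-to-data-to-code composes in either order:

```python
# A toy dual code/data "channel": triggers (code) and values (data)
# rendezvous here, whichever side arrives first.
class Channel:
    def __init__(self):
        self.data, self.triggers = [], []

    def send(self, value):
        if self.triggers:                    # a trigger is waiting: fire it
            self.triggers.pop(0)(value)
        else:
            self.data.append(value)          # park the data on the channel

    def on_receive(self, trigger):
        if self.data:                        # data already waiting: fire now
            trigger(self.data.pop(0))
        else:
            self.triggers.append(trigger)    # park the code on the channel

log = []
payments = Channel()
payments.send(100)                               # data arrives before the code...
payments.on_receive(lambda amt: log.append(amt))     # ...and still composes
payments.on_receive(lambda amt: log.append(amt * 2)) # code arrives before the data
payments.send(5)
```

Either side can show up first and nothing is lost, which is exactly what the poll-or-sniff approaches above could not guarantee.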
Costing and Serializability
Rho orchestration is really another way to think about database transaction serializability. Serializability matters because it directly drives the cost and execution speed of a transaction. The double-spend problem is just another way of saying we cannot 'sort' the transaction dependency graph, and therefore it is not conflict-serializable. But detecting these conflicts involves building agreements or 'schedules' that are computationally expensive. Participants who want cheap, fast transactions between smaller parties may be willing to accept higher risks (e.g. somewhat incomplete schedules), while others will accept higher cost and latency to gain a more comprehensive schedule (agreement) across more players when moving larger sums of money. Unfortunately this suggests hopping around different blockchains (actually transaction models) depending on the type of transaction. It also means the schedule window is tied to the overall block production rate. Reducing inter-node chatter while coordinating schedules has been a major issue for commercial distributed DBMSs.
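The serializability test itself is classic database theory: build the conflict dependency graph between transactions and check whether it can be topologically sorted. A minimal sketch using Kahn's algorithm:

```python
# A schedule is conflict-serializable iff its transaction dependency
# graph is acyclic -- a double spend shows up as a cycle we cannot sort.
def is_conflict_serializable(edges, nodes):
    """edges: set of (t1, t2) meaning t1 must commit before t2."""
    indegree = {n: 0 for n in nodes}
    for _, dst in edges:
        indegree[dst] += 1
    ready = [n for n in nodes if indegree[n] == 0]
    seen = 0
    while ready:                         # Kahn's topological sort
        n = ready.pop()
        seen += 1
        for src, dst in edges:
            if src == n:
                indegree[dst] -= 1
                if indegree[dst] == 0:
                    ready.append(dst)
    return seen == len(nodes)            # everything sorted => no cycle

ok = is_conflict_serializable({("T1", "T2")}, {"T1", "T2"})
bad = is_conflict_serializable({("T1", "T2"), ("T2", "T1")}, {"T1", "T2"})
```

The expensive part in a distributed setting is not this check; it is getting every participant to agree on what the edges are in the first place.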
RChain’s dynamic sharding approach directly reflects this tradeoff and why they seem to be solving the general case. For example, suppose we have a merchant/customer transaction (essentially a p2p arrangement) and we want to be able to detect possible double-spend elsewhere. Because RChain is essentially “aware” of the account names (technically names of channels), it will automatically lump those transactions together in the same schedule handling those names and detect the problem. More specifically, those transactions can be routed to processing (scheduling) nodes in the RChain network that are responsible for handling both of those accounts:
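A hypothetical sketch of that routing idea (the hashing scheme here is my own assumption, not RChain's actual mechanism): deterministically map each account/channel name to a scheduling node, so any two transactions touching the same name are guaranteed to meet at that name's node:

```python
import hashlib

def node_for(name, num_nodes):
    # stable hash so every participant computes the same routing,
    # unlike Python's salted built-in hash()
    digest = hashlib.sha256(name.encode()).digest()
    return int.from_bytes(digest[:8], "big") % num_nodes

def nodes_for_tx(names, num_nodes):
    # a transaction is scheduled at every node owning one of its names,
    # so two spends from the same account always meet at that node
    return {node_for(n, num_nodes) for n in names}

# two transactions spending from "alice" to different merchants
spend1 = nodes_for_tx({"alice", "merchant1"}, 8)
spend2 = nodes_for_tx({"alice", "merchant2"}, 8)
```

Because both spends include "alice", both sets contain the node responsible for "alice", and that node sees enough of the schedule to detect the conflict locally.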
Moving up the dependency tree does not have to be more expensive but this introduces a serious risk of centralization. That is, if aggregations of smaller accounts are already handled by low-cost processing centers like what Visa has today, it would be fairly simple for them to offer lower-cost settlements of more complex transactions. RChain is tackling this challenge at an engineering level while other projects seem to be falling into this trap as a natural consequence of Proof of Stake.
Web developers have been trying to marry lambda with async programming for a long time (with mixed success), and a lot of reactive concepts go back to Windows event streams, Ruby observables, Rx, Meteor async variable references and later MobX. It is strange that companies like to throw "junior" coders into a front-end world full of thorny computer science problems that the rest of the industry has only recently started to tackle, and hardware is even further behind.
Rho shows up in a number of areas but is the biggest cost driver in finance, enterprise-scale and blockchain systems where extensive reactive/distributed coordination is required.
How expensive? Top-secret NYC finance projects used to run around $1B each (mind you they were early in the game). The Kubernetes effort is similarly massive in scale. Although I often bash Silicon Valley for an overall lack of innovation, the one thing Big Tech knows how to do is grow bigger. In a post-lambda world, you either plan for it up front, eat the costs later, or be at the utter mercy of the west coast.
This is why the story of rho needs to get out there. I would encourage the reader to check out RChain and the rho calculus, as their backstory is far more than just another blockchain project.