What Bitcoin Did


I’ve been engaged in the crypto space for a little over three years. I’m a developer both in my day job and as a hobby, and have some limited experience integrating apps with the Bitcoin base layer and Layer 2 networks – namely the Lightning Network. I’ve also been on Twitter for well over ten years, and for all that time have tried my utmost not to engage in tribalism on any subject. For the same reason, I try especially hard to avoid Reddit!


From early 2016 until about mid-April 2019, I was just another “Bitcoiner”. Firmly in the BTC camp. Firmly of the position that Bitcoin forks must of course be jokes, and that regardless of market cap, each was destined for obscurity. I imagined the Lightning Network becoming the dominant Layer 2 scaling solution, and assumed that, these being early days, problems in its design and implementation would be seen off by further improvement over time, just like any other agile project that seeks feedback in order to iterate.

Then I listened to the Peter Rizun interview on the What Bitcoin Did podcast with Peter McCormack. I don’t know what I had imagined prior to the interview. That perhaps McCormack would wipe the floor with him, or better, that I’d find his arguments about Lightning in particular to be flawed in some way, leading me to dismiss Rizun as a lamentable idiot. But it wasn’t to be. Rizun was intelligent, well-reasoned and calm, impartial to a degree I wasn’t prepared for. His view of Bitcoin Cash and Bitcoin both being “just bitcoin” was one I hadn’t really entertained before, despite my being a practitioner of impartiality and general sitting-on-the-fence behaviour in daily life.


The interview put the cat among the pigeons for me, and while the pigeons weren’t precisely sent packing, their distribution was altered just enough to permit me a view of the cobbled street beneath that I hadn’t imagined existed before. I immediately felt more like “me” and less like a member of a tribe.

Then I listened to McCormack’s episode with Tadge Dryja, thinking, but not necessarily hoping, that here Rizun – or at least the issues he had raised with regard to Lightning’s perceived design flaws – would be cast in their proper light. But to McCormack’s surprise and my own, Dryja appeared to distance himself a little from what Lightning was becoming. He wasn’t exactly a fan of the work being done by various parties on the BOLT specification, and found it weird that people would feature Lightning invoices in their Twitter bios.

The perspective I gained from listening to Dryja helped me firm up some thoughts I had been having about Lightning myself: that it is indeed amazing technology, and will probably be deployed in a selection of commercial contexts, but at the moment it is a live experiment, and a very complex one at that. The real issue I think I’ve always had with Lightning is that it just seems way too complicated – even more complicated than I recall Bitcoin itself being when I was a newbie working through Antonopoulos’ book Mastering Bitcoin, and more complicated, perhaps, than a Layer 2 solution really ought to be. I’ve never really been a “bigger blocks” proponent, but only because it seems a rather arbitrary conclusion to arrive at: that exactly 2, 4 or N megabytes will just work for an unknown and (as far as I know) incalculable variation of transactees, transaction patterns and address formats.

Of course, improvements are being made in wallets all the time, which may yet yield a vision I first heard from Antonopoulos in one of his talks some years ago: multi-chain payments happening seamlessly, where customers purchase in Bitcoin even if items for sale are priced in Litecoin, and need not be aware their Bitcoin is being swapped for Litecoin to complete the purchase. By extension, there’s every reason to project forwards and imagine a Lightning wallet smart enough to automatically balance out channels without anyone being aware of liquidity problems in any part of the route. But right now, I’m not hearing that this ability is just around the corner. The need to stake (sorry!) your Bitcoin and ensure that every part of a potentially limited route to a payee is liquid enough seems kind of weird to me, clunky even. It feels far from peer-to-peer electronic cash, and even further from the kinds of transactions we humans are used to performing each day in person. “Adoption” needs to look at least a little bit like the incumbent.

When motor cars replaced horses, they still moved in a straight-ish line, could take passengers or limited cargo, and required fuel (petrol for the car, hay for the horse). But Lightning doesn’t yet give me the feeling of “saving money”, “spending money” or “receiving money”. Which is not to say it won’t be able to, but if it is to, then wallet devs have a hell of a technical and UX challenge ahead of them – or maybe I just need to buy a Nodl, set up a Casa Node and integrate an app with BTCPayServer.

Coding with glue

I’d wager that almost no one writes web-based software these days without using a framework. It doesn’t matter whether you’re doing frontend or backend engineering; you’re likely making use of one somewhere.

Frameworks are great; they reduce to a bare minimum the amount of boilerplate code needed to achieve tasks we all encounter in our daily work. Good ones even give us readable documentation – with examples – of how to implement some aspect of their functionality.

I see a problem, though, and it’s not as if it’s just over there on the horizon. No, it’s here. Now. Shoot, the personification of it is probably sitting in the same room as you; it may even be you.

The problem I’m referring to is the inexorable reduction of our jobs to that of Glue Coders: we Google for a solution, install some package or other, call some aspect of its public API from our own logic, and call our feature done.

Now, unless our own project contains decent test coverage, very often we have absolutely no idea how the part of the package we’re using really works, and no idea under what circumstances parts of its API will fail – that is, until a client reports an outage or bug on production that is ultimately traced back to this package and our peculiar use of it. In using others’ APIs this way, are we not just glueing things together? What are we, Airfix modellers?
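One cheap way I’ve found to guard against that failure mode is to route a dependency’s calls through a thin wrapper of our own, and pin the behaviour we rely on down with tests. A minimal sketch in Python – the client, its `get_rate` method, and its habit of returning `None` for unknown currencies are all hypothetical stand-ins for whatever package you’ve just installed:

```python
def fetch_rate(client, currency: str) -> float:
    """Wrap a third-party lookup, normalising the package's failure
    modes into exceptions our own code actually handles."""
    try:
        response = client.get_rate(currency)  # the third-party call
    except Exception as exc:  # its docs rarely list what it can raise
        raise RuntimeError(f"rate lookup failed for {currency}") from exc
    if response is None:  # hypothetical quirk: None for unknown currency
        raise ValueError(f"unknown currency: {currency}")
    return float(response)


class StubClient:
    """A stub standing in for the real package, so the wrapper's
    behaviour can be asserted without touching the network."""

    def get_rate(self, currency):
        return {"GBP": 1.27}.get(currency)
```

The stub isn’t there to prove the package works; it’s there to document, in executable form, what we believe its failure modes to be – so that when they change under us, a test fails before a client does.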

My old man used to have a saying (he probably still does), something along the lines of the following:

..if something is too easy, there's usually a reason for it..

Regardless of his formidable Fortran and Pascal skills, my Dad wasn’t referring to software development. Rather, I think this was a reference to one’s life choices. In his experience, behaviour driven by simple solutions often led to worse problems later in life. (There might have been some quasi-religious basis for part of this; I forget.)

Being the developers that we are, often under the pressure that we are, we welcome quick solutions, and actively seek them out. What my Pa was saying was a hat-tip to the antithesis (try stating that as BNF): that something being difficult shouldn’t immediately signal avoidance. When something is perceived to be difficult, and that path is chosen anyway, one necessarily has to take it one step at a time to figure it out. Poke it, see where it squeals, and in doing so learn about it. In software, the parallel is learning some aspect of a package’s internals, which can only be beneficial when encountering last-minute bugs on UAT and production.

By all means, install the latest package directly from GitHub, via aptitude, npm or wherever, but take some time to understand what it’s doing and how the part you want to use actually works. If (when) you’re in a hurry, usually just one or two core system calls or top-level functions in that package are all you need to look up to get the gist of what the thing is doing and how it’s doing it.
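In some ecosystems you don’t even need to leave the REPL to do that looking-up. Python, for one, ships an `inspect` module in its standard library that will hand you a function’s actual source; here `json.dumps` is merely a stand-in for “some function in a package you depend on”:

```python
import inspect
import json

# json.dumps is implemented in pure Python, so inspect can retrieve
# its source; the same trick works for most pure-Python dependencies.
source = inspect.getsource(json.dumps)

print(source.splitlines()[0])  # the function's own `def` line
```

Thirty seconds spent reading that output tells you more about the package’s real behaviour than an hour of Googling its README.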

This way you’re better prepared when the brown stuff takes flight.