Blockchain Architecture, a closer look at CSC’s TechCom24



By now we’ve all heard about Bitcoin as this magical decentralized digital currency. You may even have bought and spent some bitcoin, but most of us are not quite sure how or even why it works. Perhaps more puzzling is that we keep hearing that Bitcoin is just the beginning and that we’ll start seeing exciting new blockchain applications that will revolutionize whole industries.

Is this mostly hype or is there an element of reality to it?

Research, experimentation and unfolding events over the past few years have been pretty convincing that we’re on the cusp of something big. There will be ups and downs, but we’ll all benefit from keeping up.

I recently blogged about participating in the Initial Coin Offering of Inchain, an ambitious blockchain-native insurance startup. Since then, the Inchain team has notified investors that the ICO didn’t raise sufficient funds to launch.

In other news, #TeamCSC had a crash course in blockchain tech during our recent adventure at the GE Minds + Machines Hackathon.

Despite all the activity and buzz, it’s still quite new to a lot of us. So, to further the conversation, I’m doing a talk on Blockchain Architecture and Apps during TechCom24, CSC’s global, virtual technology conference for employees. I’ll break down the tech into its underlying components. During the session, we’ll look at how key characteristics of blockchain architecture materialize into solutions. We’ll also take a pragmatic look at how traditional players may be disintermediated by decentralized, autonomous blockchain applications that increase efficiencies and reduce costs. And, so we can all hit the ground running, I’ll provide practical advice for blockchain enthusiasts and aspiring coders on getting started learning and contributing to the relevant communities. I’m really excited about exploring blockchain further with so many of my CSC colleagues around the globe.

Follow the conversation at the CSC Hyperthink Blog.


Insurance and Initial Coin Offerings


Last month, I started hearing some chatter in blockchain tech circles (some of it skeptical) about Inchain, an autonomous insurance service on the Ethereum blockchain. Its initial stated objective is to manage the risk of loss of virtual assets stored on blockchains, rather than physical assets in the “real world”.

Insurance

Inchain is a decentralised insurance platform that mitigates risks associated with total or partial losses of crypto assets due to cyber attacks and hacks. We have placed Ethereum smart contracts at the core of the platform so it requires minimal human involvement.

The general idea is to use a crowdsale to attract investment into a DAO-style smart contract. The funds would then be used to sell insurance for blockchain assets and eventually for off-chain assets (real stuff).
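For the technically curious, participating in an Ethereum crowdsale usually boils down to sending Ether to the sale contract, which records the contribution (or mints tokens for the sender). Here’s a minimal sketch using the web3.js 0.x API; the contract address and amounts are placeholders of mine, and Inchain’s actual contract may behave differently.

```javascript
// Hedged sketch only: the crowdsale address is a placeholder and the gas value
// is a guess; always follow the ICO's actual instructions before sending funds.
var Web3 = require('web3');
var web3 = new Web3(new Web3.providers.HttpProvider('http://localhost:8545'));

var crowdsaleAddress = '0x0000000000000000000000000000000000000000'; // hypothetical

web3.eth.sendTransaction({
  from: web3.eth.accounts[0],      // the investor's unlocked account
  to: crowdsaleAddress,            // the sale contract's fallback function does the rest
  value: web3.toWei(1, 'ether'),   // the contribution
  gas: 200000                      // headroom for the contract's token-issuing logic
}, function (err, txHash) {
  if (err) { return console.error(err); }
  console.log('Contribution sent in transaction', txHash);
});
```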

This is interesting from an insurance perspective because it taps into the community involvement inherent to blockchain, and especially Ethereum, circles. Taking it further, it seems like quite a natural fit for the peer-to-peer insurance model popularized by Lemonade.

Decentralized Autonomous Organizations

Inchain’s use of smart contracts to run its core business is particularly striking, since this allows its activities to be run autonomously based on “votes” made by investors. For those of us who followed the rise and fall of The DAO, this is both welcome and ironic. But more about that in a later post.

Crowdsales and ICOs

Like many blockchain startups, Inchain announced an Initial Coin Offering (ICO) as a crowdsale rather than pursuing traditional funding sources.  Investors get tokens of ownership and get to vote on investments made by the fund.

I’ve invested some Ether in the ICO, but am not holding my breath for a quick return. I’d say that my modest investment is driven more by technical and professional curiosity than by a deep commitment to the success of autonomous insurance services.

Win or lose, it should be interesting 🙂


Birthdays and Blockchains


I have a birthday coming up.  Each year, my family tries to surprise me in some way, but it doesn’t always work.  The more people coordinating the plan, the harder it becomes to maintain the surprise – especially if one or more of the schemers aren’t the most reliable at keeping it quiet.    

Birthday surprises remind me of a thought experiment called the Byzantine Generals problem.  It describes a situation in which multiple armies (presumably in the Byzantine Empire) are planning a coordinated surprise attack.  Success depends on the generals of the armies being able to secretly coordinate the attack at a particular time.  The problem is that the generals can’t trust messengers from coordinating armies since they could have been captured or corrupted by an opposing party.  This is a lot like coordinating nodes in the peer-to-peer (P2P) networks I mentioned in a previous post where nodes (generals) communicate using unreliable channels (messengers).


Byzantine Generals

Variants of this problem have formally been proven unsolvable: agreement can never be guaranteed over an unreliable channel, and with simple (unsigned) messages it cannot be reached if a third or more of the generals are traitors. Still, several clever techniques have been implemented in decentralized architectures to solve the problem in practice using consensus algorithms. Consensus is fundamental to making decentralized systems work, and I’ll describe it further in this post.

Centralized vs Decentralized Architectures

To understand consensus, let’s first take a closer look at Centralized and Decentralized Architectures. 


Centralized Architectures => Single Point of Failure

Traditional centralized applications for file sharing, data storage, communication and even virtual currencies have all existed for a long time. It’s just that centralized architectures often place shared resources (files, databases or code) on a single server, or possibly multiple load-balanced clones. Applications are therefore only as resilient as the central server(s).

Decentralized P2P architectures have proven useful to address resiliency problems associated with centralized applications. Although this reduces or eliminates reliance on centralized servers, it comes at a cost.


Transitioning from Centralized to Decentralized Architectures Adds Resiliency

 P2P nodes in decentralized applications such as BitTorrent, Gnutella, Skype and Bitcoin must assume responsibility for both managing and consuming resources among all peers.


P2P Node Responsibilities

These resources may include data, documents and code. A file-sharing node in BitTorrent or Gnutella would share an index (data) of available files across the P2P network, serve the files that it has, and access those resources from other nodes.

Similarly, a blockchain node on Bitcoin is responsible for executing cryptographic code on transactions as well as maintaining a copy of the distributed ledger (data).

Decentralized Consensus

This brings us to the topic of consensus among participants in a P2P network.


Since application resources are distributed among peers in a decentralized system, it becomes important to put all the pieces together without losing anything. P2P nodes of a distributed system must be able to “agree” on the validity of their resources. This state of agreement is called consensus. It’s worth noting that the mechanics of how participating nodes reach agreement, and the consequences of failing to reach agreement, depend on the purpose of the application. For example, a microblogging service running a distributed database may be satisfied with the occasional dropped or eventually consistent post shared among nodes. A transaction between nodes on a cryptocurrency network will likely have more stringent consensus requirements.

Consensus is a core requirement in decentralized systems and is implemented using one of several consensus algorithms running on each node. These algorithms reach consensus when a majority of participating peers agree, so the failure or unreliability of some nodes can be compensated for by the rest.
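As a toy illustration of the idea (and only that; real consensus protocols involve multiple rounds of messaging, fault detection and more), here’s a sketch of a simple majority tally over values reported by peers:

```javascript
// Toy majority tally: returns the value reported by more than half of the
// peers, or null if no strict majority exists (i.e. no consensus).
function majorityValue(votes) {
  var counts = {};
  votes.forEach(function (v) { counts[v] = (counts[v] || 0) + 1; });

  var winner = null;
  Object.keys(counts).forEach(function (v) {
    if (counts[v] > votes.length / 2) { winner = v; }
  });
  return winner;
}

// Three of four peers report the same block hash, so consensus is reached on it.
console.log(majorityValue(['abc123', 'abc123', 'abc123', 'ffff00'])); // 'abc123'
```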

Blockchain Consensus Algorithms

Bitcoin and other blockchain-based cryptocurrencies achieve consensus by requiring that nodes complete some computation that’s hard to do, but easy to verify.  This is called Proof of Work and is part of the role of Bitcoin miners.

Proof of Work consensus has its roots in an earlier proposal called Hashcash that was intended to reduce email spam. Briefly, senders of email would be required to compute a cryptographic hash stamp for the message, which would be verified before the email was delivered. Successful verification would be considered proof of the (cryptographic, computational) work done in creating the stamp. The work would be trivial for the average email user, but would be a disincentive for the malicious bulk mailer.
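Here’s a minimal sketch of that “hard to do, easy to verify” idea in Node.js. It’s a simplification on my part: real Hashcash counts leading zero bits over a structured header, and Bitcoin compares the hash of a block header against a numeric difficulty target, but the shape of the work is the same.

```javascript
var crypto = require('crypto');

function sha256(data) {
  return crypto.createHash('sha256').update(data).digest('hex');
}

// Hard to produce: keep trying nonces until the hash has enough leading zeros.
function mine(message, difficulty) {
  var target = '0'.repeat(difficulty);
  for (var nonce = 0; ; nonce++) {
    if (sha256(message + ':' + nonce).slice(0, difficulty) === target) {
      return nonce;
    }
  }
}

// Easy to verify: a single hash proves the work was done.
function verify(message, nonce, difficulty) {
  return sha256(message + ':' + nonce).slice(0, difficulty) === '0'.repeat(difficulty);
}

var stamp = mine('to:bob@example.com date:2016-09-03', 4);
console.log(stamp, verify('to:bob@example.com date:2016-09-03', stamp, 4)); // <nonce> true
```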

Cryptocurrency miners use Hashcash-style proof of work to validate and order transactions so that participating nodes on the blockchain can reach consensus. Miners (nodes) are incentivized to validate transactions by the allocation of a “block reward” paid in the cryptocurrency being mined. But mining is designed to become more computationally intensive while the block reward shrinks over time. The Bitcoin block reward was halved from 25 BTC to 12.5 BTC on July 9th this year and will be cut in half again around 2020. This results in an arms race for a combination of faster computation (hash power) and cheaper electricity to run the equipment, which can be wasteful and environmentally unfriendly.
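The halving schedule itself is simple arithmetic: the reward started at 50 BTC and is cut in half every 210,000 blocks, which works out to roughly every four years. A quick sketch:

```javascript
// Bitcoin's block subsidy: 50 BTC at launch, halved every 210,000 blocks.
function blockReward(height) {
  var halvings = Math.floor(height / 210000);
  return 50 / Math.pow(2, halvings);
}

console.log(blockReward(0));       // 50    (2009)
console.log(blockReward(420000));  // 12.5  (the July 2016 halving)
console.log(blockReward(630000));  // 6.25  (the next halving, expected around 2020)
```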

Some blockchains have responded by implementing consensus algorithms that don’t rely on Proof of Work, such as Proof of Stake or Practical Byzantine Fault Tolerance (PBFT). The Ethereum blockchain is planning a move to Proof of Stake to address some of the problems with Proof of Work, including centralization trends and long(ish) block times. The Hyperledger project uses PBFT to reach consensus among blockchain nodes without mining.

In my next post, I’ll take a closer look at the architecture of Bitcoin and Ethereum blockchains and blockchain applications.


What’s the big deal with blockchains?


In early 2000, members of the band Metallica heard a demo of their song “I Disappear” on the radio. The problem was that the song wasn’t scheduled to be officially released until later that year, to coincide with the “Mission: Impossible II” movie soundtrack. The source of the leaked track was traced back to the then-popular Napster peer-to-peer (P2P) file-sharing network.

In federal district court and in a Senate Judiciary Committee hearing, Metallica argued that Napster was illegally enabling users to exchange copyrighted MP3 files.  Napster was forced to search through its system and remove all copyrighted songs by Metallica.

Shutting down access to infringing files, and eventually Napster itself, was possible because of Napster’s underlying architecture. Although it enabled P2P file sharing over a distributed network, there was a central index server that Napster clients used to search for files and locate the peers that had them.


Napster Architecture

Lesson learned. It didn’t take long for other P2P file-sharing networks to emerge without the vulnerability of a centralized server. For example, the BitTorrent protocol distributes files and indexes among peers and trackers, making it effectively impossible to shut down. Legal debates aside, BitTorrent-style P2P networks were well suited to distributing large files. Whereas very popular resources tend to slow down centralized distribution services and even content delivery networks (CDNs), with BitTorrent the more popular the file is, the faster it downloads, because more people are pitching in. Large files such as Linux distributions and World of Warcraft content updates are routinely distributed to huge numbers of people with ease using P2P protocols.


BitTorrent Architecture

Enter Bitcoin

“Governments are good at cutting off the heads of a centrally controlled network like Napster, but pure P2P networks like Gnutella and Tor seem to be holding their own.” — Satoshi Nakamoto, November 2008 (The Cryptography Mailing List)

If decentralization works so well for distributing large files, what else could we decentralize?  How about money?

Before Bitcoin, conducting monetary transactions between unknown participants relied on trusted intermediaries: banks, or financial service providers such as PayPal and Western Union, for transacting “real” (fiat) currencies. Of course, the maintainers of World of Warcraft and Second Life issued “virtual” in-game currencies, but you couldn’t use them to buy a cup of coffee in the real world.

Since its modest beginnings in 2009, Bitcoin has demonstrated the ability to conduct transactions on a distributed network without relying on such trusted intermediaries.  In my opinion, what makes decentralization a Good Thing is reduction in friction and cost, resulting in increased transactional efficiencies.

Blockchain

These days it’s hard to get through a day without hearing or talking about the promise of blockchain applications. The technologies behind blockchains (distributed databases, strong asymmetric encryption, P2P networks, plus some clever game theory) have been around for quite some time. However, it’s only in the last several years that they’ve all come together in Bitcoin, the first and arguably largest blockchain application. Startups and established enterprise players across a variety of industries have enthusiastically rushed to stake a claim.

In addition to the elegant technology behind distributed blockchain applications, there is a solid business proposition to be made. With Bitcoin having successfully demonstrated the decentralization of money, it becomes feasible to consider that all kinds of other transactions can also be decentralized on blockchains, with similar benefits. Decentralized applications are being developed on blockchains for tracking the provenance of diamonds, simplifying interoperability of electronic health records, adding IoT smarts to the power grid, and disrupting a range of other industries with similarly fascinating use cases.

Businesses, particularly financial institutions, treat such applications as a combination of threat and potential growth opportunity. Technologists and business decision makers alike are responding with equal doses of sound planning and breathless optimism around this still-nascent technology. Implementations appear in different forms and at different levels of maturity.

As a tech optimist, I see a future in which traditional fee-charging intermediaries and service providers may be threatened, but immense new opportunities present themselves. We’re likely to see far more low-fee (or no-fee) P2P transactions conducted without artificial intermediaries and gatekeepers.

Stay tuned…

OSSRank: a project for ranking and categorizing open source projects

There are millions of open-source software projects and libraries available on the internet and many more get added every day. As a prospective user of open source software, you probably find yourself going through a process something like this:

  • Search for candidate projects you could use to solve a particular problem
  • Evaluate various projects that meet your search criteria
  • Visit each of their project sites and source code repositories to try to determine things like the maturity of the project, the size and activity level of the community, and its responsiveness to issues
  • Look around on Stack Overflow, Twitter and various discussion forums to see what people are saying about them

As the developers of OSSRank, we saw this repeated pattern and set out to automate the whole process by:

  • Discovering open-source projects by collecting their metadata from GitHub
  • Classifying them into a growing list of specified categories
  • Collecting data about them from Stack Exchange and finding their social footprint on Twitter
  • Continuously evaluating them and tracking their growth over time
  • Ultimately ranking each project within its categories
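
To give a flavor of the ranking step, here’s a rough sketch of a weighted score over a few of the collected signals. The metric names and weights are illustrative guesses on my part, not OSSRank’s actual formula; see the repository for the real implementation.

```javascript
// Illustrative only: combine a few signals into a single score, damping each
// with log1p so that one huge metric doesn't dominate the ranking.
function projectScore(p) {
  return (
    0.4 * Math.log1p(p.githubStars) +
    0.3 * Math.log1p(p.recentCommits) +
    0.2 * Math.log1p(p.stackExchangeQuestions) +
    0.1 * Math.log1p(p.twitterMentions)
  );
}

// Rank the projects in a category by descending score.
function rankCategory(projects) {
  return projects.slice().sort(function (a, b) {
    return projectScore(b) - projectScore(a);
  });
}

var ranked = rankCategory([
  { name: 'project-a', githubStars: 1200, recentCommits: 40, stackExchangeQuestions: 300, twitterMentions: 25 },
  { name: 'project-b', githubStars: 90, recentCommits: 200, stackExchangeQuestions: 15, twitterMentions: 5 }
]);
console.log(ranked.map(function (p) { return p.name; }));
```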

The result is a new open source project – check it out on GitHub here.

There’s still lots of room for improvement, but you can try a running copy of OSSRank at http://ossrank.org


Have an idea for improving OSSRank? Add it to the issues list at https://github.com/csc/OSSRank/issues or just send us a pull request.

Chatty REST API Calls in Angular Forms

AngularJS single-page apps are great at interacting with REST APIs. A common pattern we’ve come to rely on is invoking PATCH operations from an Angular form that uses two-way data binding with an API resource.

The question is when and how often we should update the API resource.

  • Once per form submission?
  • Once per form section?
  • Or every time a field is updated?

We realized that there are very good reasons to keep the resource updated every time a field changes. In a multi-user environment, having a client hold on to changes can be risky, so frequent updates are useful. Furthermore, even within a single resource representation, there can be dependencies in which the value of one attribute affects the value of another.

It was trickier than expected to find the optimal AngularJS technique for keeping resources updated.

The obvious choice is to call PATCH on ngBlur, but that means we’re hitting the API even if the user simply tabs or clicks in and out of a form field without changing anything.

What we really want is to identify when the value of a field has actually been changed by the user. So ngChange? It turns out that ngChange fires on every keypress (with the default model options), so we certainly don’t want to PATCH that frequently.

The answer turned out to be a combination of the two: ngChange paired with the updateOn attribute of ngModelOptions. With ng-model-options="{ updateOn: 'blur' }", the model only updates when the field loses focus, so ngChange fires at most once per edit rather than on every keypress.
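Here’s a minimal sketch of the pattern. The resource, field and endpoint names are placeholders of mine rather than anything from our app; see the Codepen linked below.

```javascript
// A minimal sketch: resource, field and endpoint names are hypothetical.
angular.module('chattyFormDemo', []).component('profileForm', {
  template:
    '<form name="profileForm">' +
    // Defer model updates until blur, so ng-change fires at most once per edit.
    '  <input name="email" type="email"' +
    '         ng-model="$ctrl.profile.email"' +
    '         ng-model-options="{ updateOn: \'blur\' }"' +
    '         ng-change="$ctrl.patchField(\'email\')">' +
    '</form>',
  controller: ['$http', function ($http) {
    var ctrl = this;
    ctrl.profile = { id: 42, email: 'user@example.com' }; // hypothetical resource

    // PATCH only the attribute that changed.
    ctrl.patchField = function (field) {
      var payload = {};
      payload[field] = ctrl.profile[field];
      $http.patch('/api/profiles/' + ctrl.profile.id, payload);
    };
  }]
});
```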

Codepen link