Blockchain Architecture, a closer look at CSC’s TechCom24



By now we’ve all heard about Bitcoin as this magical decentralized digital currency. You may even have bought and spent some bitcoin, but most of us are not quite sure how or even why it works. Perhaps more puzzling is that we keep hearing that Bitcoin is just the beginning and that we’ll start seeing exciting new blockchain applications that will revolutionize whole industries.

Is this mostly hype or is there an element of reality to it?

Research, experimentation and unfolding events over the past few years have been pretty convincing that we’re on the cusp of something big. There will be ups and downs, but we’ll all benefit from keeping up.

I recently blogged about participating in the Initial Coin Offering of Inchain, an ambitious blockchain-native insurance startup. Since then, the Inchain team has notified investors that the ICO didn’t raise sufficient funds to launch.

In other news, #TeamCSC had a crash course in blockchain tech during our recent adventure  at the GE Minds + Machines Hackathon.

Despite all the activity and buzz, it’s still quite new to a lot of us. So, to further the conversation, I’m doing a talk on Blockchain Architecture and Apps during TechCom24, CSC’s global, virtual technology conference for employees. I’ll break down the tech into its underlying components. During the session, we’ll look at how key characteristics of blockchain architecture materialize into solutions. We’ll also take a pragmatic look at how traditional players may be disintermediated by decentralized, autonomous blockchain applications that increase efficiency and reduce costs. And, so we can all hit the ground running, I’ll provide practical advice for blockchain enthusiasts and aspiring coders on getting started learning and contributing to blockchain communities. I’m really excited about exploring blockchain further with so many of my CSC colleagues around the globe.

Follow the conversation at the CSC Hyperthink Blog.



Insurance and Initial Coin Offerings


Last month, I started hearing some chatter in blockchain tech circles (some skeptical) about Inchain, an autonomous insurance service on the Ethereum blockchain.  Its initial stated objective is to manage the risk of loss of virtual assets stored on blockchains – rather than physical assets in the “real world”.

Insurance

Inchain is a decentralised insurance platform that mitigates risks associated with total or partial losses of crypto assets due to cyber attacks and hacks. We have placed Ethereum smart contracts at the core of the platform so it requires minimal human involvement.

The general idea is to use a crowdsale to get investment into a DAO-style smart contract. The funds will then be used to sell insurance for blockchain assets and, eventually, for off-chain assets (real stuff).

This is interesting from an insurance perspective because it taps into the community involvement inherent to blockchain and especially Ethereum circles. Taking it further, it seems like quite a natural fit for the peer-to-peer insurance model popularized by Lemonade.

Decentralized Autonomous Organizations

Inchain’s use of smart contracts to run its core business is particularly striking, since it allows the fund’s activities to be run autonomously based on “votes” made by investors. For those of us who followed the rise and fall of The DAO, this is both welcome and ironic. But more about that in a later post.

Crowdsales and ICOs

Like many blockchain startups, Inchain announced an Initial Coin Offering (ICO) as a crowdsale rather than pursuing traditional funding sources.  Investors get tokens of ownership and get to vote on investments made by the fund.

I’ve invested some Ether in the ICO, but am not holding my breath for a quick return. My modest investment stems more from technical and professional curiosity than from a deep commitment to the success of autonomous insurance services.

Win or lose, it should be interesting 🙂


OSSRank: a project for ranking and categorizing open source projects

There are millions of open-source software projects and libraries available on the internet and many more get added every day. As a prospective user of open source software, you probably find yourself going through a process something like this:

  • Search for candidate projects you could use to solve a particular problem
  • Evaluate various projects that meet your search criteria
  • Visit each of their project sites and source code repositories to try to determine things like the maturity of the project, the size and activity level of the community, and its responsiveness to issues
  • Look around on StackOverflow, Twitter and various discussion forums to see what people are saying about it

As the developers of OSSRank, we saw this repeated pattern and set out to automate the whole process by:

  • Discovering open-source projects by collecting their metadata from GitHub
  • Classifying them into a growing list of specified categories
  • Collecting data about them from Stack Exchange and finding their social footprint on Twitter
  • Continuously evaluating them and tracking their growth over a timeline
  • Ultimately ranking each project within its categories (a simplified sketch of the scoring idea follows this list)
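The real scoring logic lives in the OSSRank repo itself; purely to illustrate the idea, a category-wise ranking pass might look roughly like the following. The weights and field names here are invented for this sketch, not OSSRank’s actual formula.

// Hypothetical ranking sketch, not OSSRank's real scoring code
var projects = [
  { name: 'projA', category: 'web-frameworks', stars: 4200, forks: 310, soQuestions: 950, tweets: 120 },
  { name: 'projB', category: 'web-frameworks', stars: 1800, forks: 95,  soQuestions: 400, tweets: 60 }
];

// Weighted score combining repository activity and social footprint
function score(p) {
  return 0.5 * Math.log1p(p.stars) +
         0.2 * Math.log1p(p.forks) +
         0.2 * Math.log1p(p.soQuestions) +
         0.1 * Math.log1p(p.tweets);
}

// Rank projects within a category, highest score first
projects
  .sort(function (a, b) { return score(b) - score(a); })
  .forEach(function (p, i) {
    console.log((i + 1) + '. ' + p.name + ' (' + p.category + '): ' + score(p).toFixed(2));
  });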

The result is a new open source project – check it out on GitHub at https://github.com/csc/OSSRank.

Still lots of room for improvement, but you can try a running copy of OSSRank at http://ossrank.org


Have an idea for improving OSSRank?  Add to the issues list https://github.com/csc/OSSRank/issues or just send us a pull request.

Chatty REST API Calls in Angular Forms

AngularJS Single Page Apps are great at interacting with REST APIs. A common capability we’ve come to rely on is invoking PATCH operations from an Angular form using two-way data binding with an API resource.

The question is when and how often we should update the API resource.

  • Once per form submission?
  • Once per form section?
  • Or every time a field is updated?

We realized that there are very good reasons to keep the resource updated every time a field changes. In a multi-user environment, having a client hold on to changes can be risky, so frequent updates are useful. Furthermore, even within a single resource representation, there can be dependencies in which the value of one attribute affects the value of another.

It was trickier than expected to find the optimal AngularJS technique for keeping resources updated.

The obvious choice is to call PATCH on ngBlur, but that means that we’re hitting the API even if the user simply tabs/clicks in and out of a form field.

What we really want is to identify when the value of a field has actually been changed by the user. So ngChange? It turns out that, with the default model options, ngChange fires on every keystroke, so we certainly don’t want to PATCH that frequently.

The answer turned out to be a combination of the two: keep ngChange, but defer model updates to the blur event using the updateOn option of ngModelOptions!
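Roughly, the pattern looks like this (the field name, the member resource and the /api/members endpoint are made up for illustration; the working form is in the Codepen below):

<input type="email" name="email"
       ng-model="member.email"
       ng-model-options="{ updateOn: 'blur' }"
       ng-change="patchField('email')">

// In the controller: PATCH only the field whose value actually changed
$scope.patchField = function (field) {
  var payload = {};
  payload[field] = $scope.member[field];
  $http.patch('/api/members/' + $scope.member.id, payload);
};

With updateOn: 'blur', the model (and therefore ngChange) only updates when the user leaves the field, and ngChange still fires only if the value actually changed, so tabbing through untouched fields never hits the API.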

Codepen link

My Understanding of REST Architecture

REST is such a misunderstood and yet overused term that I thought I’d share my understanding of the term and usage patterns.

REpresentational State Transfer (abbreviated as REST) is a simple way to organize interactions between independent systems.   This means that:

  • Everything is Represented as a Resource
  • Resources have State
  • Resource State is Transferred over the wire (HTTP)

The design rationale for REST is based on the fact that the web is a massively scalable distributed software system that works really well. We should therefore be able to build on the success of its underlying architecture to create integrated systems more easily. Due to their relative simplicity, clients usually find REST APIs easy to interact with.

To understand why REST is usually referred to as an “Architectural Style”, it’s useful to first establish what it is not:

  • REST is not a Standard such as HTML, CSS, XML or WSDL. The W3C will not ratify REST
  • REST is not a Protocol such as HTTP or SOAP

REST Constraints

REST was inspired by the simplicity and robustness of the Web itself and was first described by Roy Fielding in his PhD dissertation. It is defined as a set of architectural constraints, most commonly applied on top of HTTP, which are detailed below.

1. Client-Server

The principle of Separation of Concerns encourages us to isolate the user interface from data storage and business processing concerns.

The result is:

  • Improved portability of UIs across OS and device platforms
  • Improved scalability by simplifying server components
  • Client and Server components are allowed to evolve independently

2. Statelessness
The next constraint on the client-server interaction is that communication must be stateless. The server should not be relied upon to maintain application state by storing session objects. Each request from client to server must contain all of the information necessary to understand the request, and cannot take advantage of any stored context on the server. Session state is therefore kept entirely on the client.

Note that imposing statelessness is a tradeoff in which simplicity and scalability may come at the expense of increased network traffic, since the server “forgets” application state between requests. This tradeoff is normally addressed by adjusting the grain of the exchanges between client and server to reduce chattiness. In general, REST is most efficient for large-grain resources.
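For example (the endpoint and token below are hypothetical), each request carries its own credentials and whatever context the server needs, rather than relying on a server-side session:

PATCH http://api.example.com/founder/ecartman
Authorization: Bearer <token repeated on every request; no server-side session>
Content-Type: application/json

{ "role": "Founder" }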

3. Cache
As with the web, the state of HTTP-accessible REST resources can be persisted in a client-side cache so we don’t have to hit the server each time. The Cache constraint requires that responses be labeled as cacheable or non-cacheable where appropriate. If a response is cacheable, then a client can choose to cache and reuse that response for subsequent requests for a configurable period of time.
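In HTTP terms this is typically signaled with standard caching headers; for example, a response that caches may reuse for up to an hour:

HTTP/1.1 200 OK
Content-Type: application/json
Cache-Control: max-age=3600

{ "id": "123456", "name": "Eric Cartman", "role": "Founder" }

A client or intermediary cache seeing this header can serve the stored representation again without contacting the origin server until it expires.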

4. Uniform Interface
The uniform interface constraint ensures that our API looks and behaves consistently across requests and time, thus allowing each part of the API to evolve independently.

The four parts of this interface that must remain uniform are:

  1. Resource Identification

Resources are uniquely identifiable using URIs such as:

http://api.csc.com/employee/ecartman

 

  2. Resource Representation

Each resource must have a representation which is produced as a response when it is requested using its resource identifier.  For example, a URI GET request for the resource

GET http://api.example.com/founder/ecartman

may return the following state representation:

{
  "id": "123456",
  "name": "Eric Cartman",
  "role": "Founder"
}

Furthermore, this representation must contain enough information for the client to subsequently modify or delete the resource on the server, should it have permission to do so.

  3. Self-Descriptive Messages

Each client request and server response is a standard HTTP message consisting of a body and headers. Together, the body and headers should contain all the information necessary to complete the requested task on both ends of the exchange. For example, response messages should explicitly indicate their cacheability.

This type of message exchange is also referred to as stateless and context-free.

  4. Hypermedia as the engine of application state (A.K.A. HATEOAS)

Clients make state transitions only through actions that are dynamically identified within hypermedia by the server (e.g., by hyperlinks within hypertext). Except for simple fixed entry points to the application, a client does not assume that any particular action is available for any particular resources beyond those described in representations previously received from the server.
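One common convention (there are several, and the link structure below is illustrative rather than mandated by REST) is to embed the available actions as links within the representation itself:

{
  "id": "123456",
  "name": "Eric Cartman",
  "role": "Founder",
  "links": [
    { "rel": "self",   "href": "http://api.example.com/founder/ecartman" },
    { "rel": "update", "href": "http://api.example.com/founder/ecartman", "method": "PATCH" },
    { "rel": "delete", "href": "http://api.example.com/founder/ecartman", "method": "DELETE" }
  ]
}

The client only follows links it has been given, so the server can change URIs or withdraw actions without breaking well-behaved clients.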

5. Layered System
The Layered System constraint dictates that a client need have no knowledge of whether it is connected directly to the end server, or to an intermediary or proxy on the way to the server. Use of intermediary servers may improve system scalability by enabling load-balancing and by providing shared caches. They may also enforce security policies.

6. Code on Demand (Optional)

The Code on Demand constraint allows servers to extend the behavior of a client by sending code to be executed on the client. This code usually takes the form of JavaScript. We’re used to seeing this on web pages, but the very same capability can optionally be used in a REST API.
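A contrived illustration (the endpoint below is hypothetical): the API returns a small script that the client evaluates locally, just as a browser executes JavaScript delivered with a page:

GET http://api.example.com/founder/ecartman/validator

HTTP/1.1 200 OK
Content-Type: application/javascript

// Executed on the client to extend its behavior
function validateFounder(founder) {
  return Boolean(founder.name) && founder.role === "Founder";
}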


Why do I need an API?

All commercial software solutions have inherent value in their business functionality.  However, relatively few make this functionality easily available to external systems.  Application Programming Interfaces (APIs) provide the means to expose such functionality to authorized users and software systems for use as needed.

It is impossible to include all features for all customers in our products, so we usually focus on a narrow set of high value features.  Doing so initially makes great sense for the bottom line but effectively ignores the opportunity afforded by the “long tail” of less popular functionality.  APIs offer a way to harness and potentially monetize this long tail of functionality locked into enterprise systems.


APIs vs SOA

Discussions about API development often include comparisons to SOA (Service Oriented Architecture) and Web Services. Both APIs and SOA address the need to allow software systems to talk to each other. APIs are even considered part of SOA, but the two approach the problem from different perspectives. Whereas SOA has traditionally been associated with formality, tight governance and limited access, the value of APIs becomes more apparent when some of this rigidity is loosened.

SOA                                    | APIs
Strict integration contracts           | Loose integration contracts
Precisely defined implementation stack | Very minimal implementation stack
Used internally or by trusted partners | Appropriate for external use by potentially unknown (but authorized) developers/users
Tightly controlled                     | Self service
Harder to code                         | Easier to code

The Case for Openness

Enterprise apps are expected to have deep knowledge of the web services they consume. This results in tight coupling of application logic with services even when they are isolated behind a SOA layer, effectively creating a monolith, as seen below.

[Figure: applications tightly coupled to SOA services, forming a monolith]

Making the same services available as loosely coupled APIs can lead to increased openness and therefore increased usage by both internal as well as external applications.

[Figure: loosely coupled APIs consumed by both internal and external applications]

This typically involves changes to implementation technologies.

Typical SOA | Typical APIs
SOAP        | REST
WS-Security | OAuth, others
WSDL        | Self-describing hypermedia
XML         | JSON or XML

Recommendation

An API application layer can be used as a facade for enterprise SOA interfaces, handling the complexities of SOAP, service orchestration, XSL transformation, data thinning, access control, and so on.

APIs should be designed based on RESTful principles and where appropriate, JSON should be used to exchange service payloads.
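A minimal sketch of this facade idea in Node/Express (the endpoint, the response fields and the stubbed SOAP call are hypothetical, not a real enterprise service):

var express = require('express');
var app = express();

// Placeholder for the enterprise SOAP call: in reality this would build a SOAP
// envelope, invoke the web service, and parse the XML response.
function callPolicySoapService(policyId, callback) {
  callback(null, { policyId: policyId, policyHolderName: 'Eric Cartman', status: 'ACTIVE' });
}

// Facade endpoint: RESTful JSON on the outside, SOAP/XML on the inside
app.get('/api/policies/:id', function (req, res) {
  callPolicySoapService(req.params.id, function (err, soaResult) {
    if (err) {
      return res.status(502).json({ error: 'Upstream SOA service failed' });
    }
    // Data thinning: expose only the fields API consumers actually need, as JSON
    res.json({
      id: soaResult.policyId,
      holder: soaResult.policyHolderName,
      status: soaResult.status
    });
  });
});

app.listen(3000);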
