2017-08-31

Beauty of #blockchain – separating the wheat from the tares

As we know, blockchain technology is actually a multi-user, centralised (logically) and distributed (physically) archive with excellent availability and integrity characteristics. Such an archive collects various records and packs them into chained and (practically) immutable blocks.

Why is it logically centralised? Because of its single uniform code base and its consensus process, i.e. a combination of “administrative” means and technology.

Numerous applications (e.g. bitcoin) use the blockchain technology to resolve the problem of “double spending”: if a record (which is a transaction in this case) spends the same “piece” of cryptocurrency more than once, then that duplicated “piece” of cryptocurrency will be detected as “untrusted” and, finally, such a record (i.e. transaction) will be rejected by the blockchain-as-an-archive of records (or a ledger).
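A minimal sketch of this detection, assuming a hypothetical ledger that simply remembers which “pieces” have already been spent (illustrative Python, not the actual bitcoin implementation):

    # The ledger keeps the ids of "pieces" already consumed by accepted
    # transactions and rejects any transaction that tries to reuse one.
    spent_pieces = set()

    def accept_transaction(tx_id, input_piece_ids):
        """Reject the transaction if any input "piece" was spent before."""
        if any(piece in spent_pieces for piece in input_piece_ids):
            return False  # double spending detected; the record is rejected
        spent_pieces.update(input_piece_ids)
        return True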

Thus, in such applications, blockchain plays two roles (see also http://improving-bpm-systems.blogspot.ch/2016/06/disassembling-blockchain-concept.html):
  1. validating that a used “piece” of cryptocurrency is “trusted” (logical integrity, i.e. the counterparty risk is acceptable) and 
  2. guaranteeing that a record is safely stored (physical integrity).
Because these roles are not explicitly separated, the average time to store a transaction in the bitcoin application is about 10 mins, because each transaction must be packed into a block and the “block time” is 10 mins. Moreover, the “wait time” for a proof-of-work (PoW) blockchain is approximately 60 mins (i.e. about 6 confirmations) to reduce to a minimum the risk that a transaction is rejected. Obviously, this is not practical at the point of sale to buy a cup of coffee.

Actually, at the point of sale, a buyer and a seller need only the logical integrity, i.e. validating that a “piece” of cryptocurrency to be used in the transaction is “trusted”. The physical integrity is an “internal business” of the blockchain-as-an-archive.

So, the blockchain-as-an-archive has to have a validating function that confirms that a particular “piece” of cryptocurrency is “trusted” by checking three conditions:
  1. it is based on existing transactions which are stored in the blockchain-as-an-archive, 
  2. those transactions used only “trusted” “pieces” of cryptocurrency and 
  3. those transactions are “old” enough (e.g. they were included in the blockchain-as-an-archive at least 60 mins ago). 
There can be several simple algorithms for implementing such a validating function, e.g. by asking a random collection of miners to vote.
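A minimal sketch of such a validating function, assuming a hypothetical ledger API (find_transactions_producing, inclusion_time and input_pieces are illustrative names, not an existing interface):

    from datetime import datetime, timedelta

    MIN_AGE = timedelta(minutes=60)  # condition 3: "old" enough

    def is_trusted(piece, ledger, min_age=MIN_AGE):
        """Check the three trust conditions for a "piece" of cryptocurrency."""
        # Condition 1: the piece is based on transactions already stored
        # in the blockchain-as-an-archive.
        origin_txs = ledger.find_transactions_producing(piece)
        if not origin_txs:
            return False
        for tx in origin_txs:
            # Condition 3: those transactions are "old" enough.
            if datetime.utcnow() - ledger.inclusion_time(tx) < min_age:
                return False
            # Condition 2: those transactions themselves used only "trusted"
            # pieces (a real implementation would memoise this recursion
            # instead of walking the whole history every time).
            if not all(is_trusted(p, ledger, min_age) for p in tx.input_pieces):
                return False
        return True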

Of course, not all “pieces” of cryptocurrency are always “trusted”. Their normal life cycle is “under validation” and then “trusted” or “untrusted”. This means that an owner of some “pieces” of cryptocurrency will have to use only “trusted” “pieces” of cryptocurrency in his/her transactions.
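This life cycle can be made explicit, e.g. as a simple state type (an illustrative sketch):

    from enum import Enum, auto

    class TrustState(Enum):
        UNDER_VALIDATION = auto()  # initial state: just produced by a transaction
        TRUSTED = auto()           # passed validation; safe to spend
        UNTRUSTED = auto()         # failed validation; must not be spent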

Both sides of any transaction (the seller and the buyer) may independently check the level of trust of the involved “pieces” of cryptocurrency. For example, the seller may define his/her own level of “trust”, thus rejecting some “untrusted” (from his/her point of view) “pieces” of cryptocurrency. This is very similar to the good old practice of a cashier (or booking clerk) checking new banknotes.
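Continuing the sketch above, a more cautious seller could simply require a larger “age” than the default 60 mins (piece and ledger are the hypothetical objects from the earlier sketch):

    from datetime import timedelta

    seller_min_age = timedelta(minutes=120)  # this seller is more cautious

    if is_trusted(piece, ledger, min_age=seller_min_age):
        print("payment accepted")
    else:
        print("payment rejected: this piece is 'untrusted' for this seller")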

Also, the blockchain-as-an-archive has to have an adding function that submits a transaction to the blockchain-as-an-archive.
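Together, the two functions form a minimal interface of the blockchain-as-an-archive with the two roles explicitly separated (the names below are illustrative, not an existing API):

    class BlockchainArchive:
        def validate(self, piece, min_age):
            """Logical integrity: is this piece trusted? Fast and read-only,
            suitable for the point of sale."""
            ...

        def add(self, transaction):
            """Physical integrity: store the transaction safely; slow
            (block time plus confirmations), but nobody has to wait for it
            at the point of sale."""
            ...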



Thus, by separating the logical integrity and the physical integrity, it would be possible to improve the performance of some applications which are based on the blockchain-as-an-archive.


Thanks,
AS

And thanks to Charles Moore for reviewing this blogpost.

2017-08-23

Towards Software-Defined Organisations

My presentation for BrightTalk


And some feedback from the organisers.

You had 362 pre-registered users and 122 views so far. 36 people downloaded your slides and you got 4.2/5 rating for your webinar.

All users said it was a very useful presentation, but I would like to highlight one comment in particular, about acronyms:
"I missed the 1st minute or so of the presentation. There may have been a table of acronyms presented then because I didn't see any explanation later during the presentation. It would be very helpful if acronym definitions were provided. otherwise it was a very broad presentation encompassing a host of smaller topics worthy of discussion in themselves. It was well done and the presenter was fully competent concerning the subject matter. Thank you"

Thanks,
AS

2017-08-14

Beauty of #microservices - from #DevOps to #BizDevOps via #microservices first

As we all know, usage of the MicroService Architecture (MSA) requires very comprehensive operational practices and infrastructure. A microservice is a unit-of-functionality (or “class” in the informal IT terminology) within its own unit-of-deployment (or “component” in the informal IT terminology) acting as a unit-of-execution (or “computing process” in the informal IT terminology). Some applications may comprise a few hundred microservices. This is certainly a serious barrier to exploiting MSA benefits such as being easy to update and easy to scale to absorb heavy workloads.

Fortunately, as we know, various characteristics (e.g. easy to update, easy to scale) are not spread uniformly within applications. For example, typically 95% of CPU consumption is located in 5% of the program code. Thus, it is not necessary to implement the whole application via microservices.

Let us ask a simple question: if a microservice is, actually, a service, then can we use microservices and services together? Yes, and some functionality from platforms or monoliths may be used (via APIs) as well.

Now, let us reformulate the problem. Let us consider that any application is built from many units-of-functionality which must be deployed and then executed. What is the optimal arrangement of units-of-functionality into units-of-deployment and then units-of-execution? In other words,
  • which units-of-functionality have to be implemented as microservices (microservices are agile and easy to update, but have some execution and management overhead);
  • which units-of-functionality have to be implemented as monoliths (monoliths are not agile and not easy to update, but have no execution and management overhead);
  • which units-of-functionality have to be implemented as services (classic services are something in between microservices and monoliths).
Thus, a few recommendations may be formulated (a small code sketch after this list illustrates them).
  • Units-of-functionality which are “often” updated must be implemented as microservices (so BizDevOps will be happy).
  • Units-of-functionality which have to absorb heavy workloads must be implemented as microservices (so DevOps will be happy).
  • Units-of-functionality which are “rarely” updated may be packed into a few units-of-deployment (different “packing” criteria may be used), and each unit-of-deployment has its own computing process (so DevOps will be happy). Another option is dynamic loading of those units-of-functionality.
  • Units-of-functionality which are “never” updated may be packed as a monolith or platform, i.e. one unit-of-deployment and one unit-of-execution (so DevOps will be extremely happy).
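The recommendations above can be summarised as a simple decision function (an illustrative sketch; the attribute names and the “often”/“rarely”/“never” encoding are assumptions, not part of the recommendations themselves):

    from dataclasses import dataclass
    from enum import Enum, auto

    class Implementation(Enum):
        MICROSERVICE = auto()
        SERVICE = auto()
        MONOLITH_OR_PLATFORM = auto()

    @dataclass
    class UnitOfFunctionality:
        name: str
        update_frequency: str  # "often", "rarely" or "never"
        heavy_workload: bool   # must it absorb heavy workloads?

    def choose_implementation(unit: UnitOfFunctionality) -> Implementation:
        # "Often" updated or heavily loaded -> microservice
        if unit.update_frequency == "often" or unit.heavy_workload:
            return Implementation.MICROSERVICE
        # "Rarely" updated -> packed with others into a unit-of-deployment
        if unit.update_frequency == "rarely":
            return Implementation.SERVICE
        # "Never" updated -> monolith or platform
        return Implementation.MONOLITH_OR_PLATFORM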
Applying these recommendations to the phases of the whole application life cycle (conception, development, deployment, production, support, retirement and destruction), the following may be formulated:
  • At the beginning of the application life cycle (conception, i.e. prototyping, and initial development), the majority of the units-of-functionality must be implemented as microservices, because the easy-to-update characteristic is very important (especially for the business people) and, fortunately, performance characteristics are not yet an issue. 
  • Closer to the end of the development phase, it becomes clear which units-of-functionality have to be changed more often than others; the others may then be considered as services, or even monoliths or platforms.
  • Also, the load tests (during the development and deployment phases) must show which units-of-functionality will have to absorb heavy workloads and thus should be implemented as microservices.
  • Other criteria, such as risk, security, etc., may also be considered. 

Obviously, “moving” a unit-of-functionality from a microservice-like implementation to a service-like implementation and then to a platform-like implementation is much easier than “moving” a unit-of-functionality from a monolith-like implementation to a service-like implementation and then to a microservice-like implementation.

This confirms the primacy of the “microservices first” approach. This approach, actually, provides support for BizDevOps practices (see http://improving-bpm-systems.blogspot.ch/2017/05/beauty-of-microservices-ebanliing.html). Additionally, this approach enables interesting transformations such as automatic reconfiguration of applications to absorb heavy workloads by temporarily moving some units-of-functionality from a service-like implementation to a microservice-like implementation.
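Such a reconfiguration could be driven by a simple rule (an illustrative sketch; measured_load, is_microservice, redeploy_as_microservice and repack_into_service are hypothetical platform operations):

    LOAD_THRESHOLD = 0.8  # assumed fraction of capacity

    def reconfigure(unit, platform):
        load = platform.measured_load(unit)
        if load > LOAD_THRESHOLD and not platform.is_microservice(unit):
            # Promote: a microservice is easy to scale out under heavy load.
            platform.redeploy_as_microservice(unit)
        elif load < LOAD_THRESHOLD / 2 and platform.is_microservice(unit):
            # Demote: repack to reduce execution and management overhead.
            platform.repack_into_service(unit)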

Remember Prof. Knuth’s warning: “premature optimisation is the root of all evil”.

Thanks,
AS

The collection of posts about microservices - http://improving-bpm-systems.blogspot.ch/search/label/%23microservice