2017-09-28

Beauty of #blockchain - game of intermediaries

Any blockchain-based solution is a virtual (invisible but real) intermediary (with its data, computing resources, workers, miners and decision makers) between the people using this solution (the users). Actually, the users agree to trust the goodwill of the networked miners (not the technology, as claimed by many blockchain enthusiasts) in order to work among themselves (e.g. carry out some transactions) without trusting each other. Some blockchain-based financial solutions have already formed real infrastructures for their world-wide operations, but they still lack legal endorsements from financial and governmental authorities.



Obviously, blockchain or no blockchain, following the financial and governmental regulations and the usual good business practices is even more important in the digital era than before, because the damage can be huge and done quickly (see http://taxilife.ru/nationalnews/7124/ and http://www.independent.co.uk/voices/uber-tfl-london-taxi-black-cabs-regulation-a7964066.html ).

It seems that the blockchain technology and cryptocurrencies are a way of replacing the existing intermediaries by new ones. The majority of blockchain enthusiasts claim that their technology removes intermediaries, although this is not true.

However, it is possible to carry out such replacements differently.

In any such replacement, it is mandatory to:
  • avoid the sudden creation of a powerful intermediary like Uber, Airbnb, Facebook, Amazon, Alibaba, etc.;
  • understand which services are provided by the new intermediaries;
  • know the contractual agreements (including SLAs) for these services;
  • impose transparency on the new intermediaries, and
  • exercise the necessary control via explicit ownership (in different forms) or external testability.
See also https://www.linkedin.com/feed/update/activity:6320376284153282561/ 

Thanks,
AS

See the whole collection of blogposts about blockchain - http://improving-bpm-systems.blogspot.ch/search/label/%23blockchain

2017-09-15

Relationships between AS-IS, TO-BE and transition architectures

Just an illustration



BTW, the idea is "stolen" from the agile development methodology.

Thanks,
AS

2017-08-31

Beauty of #blockchain – separating the wheat from the tares

As we know, the blockchain technology is actually a multi-user, centralised (logically) and distributed (physically) archive with excellent availability and integrity characteristics. Such an archive collects various records and packs them into chained and (practically) immutable blocks.

Why is it centralised? Because of the single uniform code base and the consensus process, i.e. a combination of “administrative” means and technology.

Numerous applications (e.g. bitcoin) use the blockchain technology to resolve the problem of “double spending”: if a record (a transaction in this case) spends the same “piece” of cryptocurrency more than once, then that “piece” of cryptocurrency will be detected as “untrusted” and, finally, such a record (i.e. transaction) will be rejected by the blockchain-as-an-archive of records (or a ledger).

Thus, in such applications, blockchain plays two roles (see also http://improving-bpm-systems.blogspot.ch/2016/06/disassembling-blockchain-concept.html ):
  1. validating that a used “piece” of cryptocurrency is “trusted” (logical integrity or counterparty risk is acceptable) and 
  2. guarantying that a record is safely stored (physical integrity).
Because these roles are not explicitly separated, the average time to store a transaction in the bitcoin application is about 10 minutes: each transaction must be packed into a block, and the “block time” is 10 minutes. Moreover, the “wait time” for a POW blockchain is approximately 60 minutes (about six confirmations) to minimise the risk that a transaction will be rejected. Obviously, this is not practical at the point-of-sale to buy a cup of coffee.

Actually, at the point-of-sale, a buyer and a seller need only the logical integrity, i.e. validating that a “piece” of cryptocurrency to be used in the transaction is “trusted”. The physical integrity is an “internal business” of the blockchain-as-an-archive.

So, the blockchain-as-an-archive has to have a validating function that confirms that a particular “piece” of cryptocurrency is “trusted” by checking three conditions:
  1. it is based on existing transactions which are stored in the blockchain-as-an-archive, 
  2. those transactions used “trusted” “pieces” of cryptocurrency and 
  3. those transactions are “old” enough (e.g. they were included into the blockchain-as-an-archive at least 60 mins ago). 
There can be several simple algorithms for implementing such a validating function, e.g. asking a random collection of miners to vote.
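
As an illustration only, here is a minimal sketch (in Python) of such a validating function; the Piece/Transaction structures and the ledger interface are hypothetical, not taken from any real blockchain implementation:

```python
# A sketch of the validating function; the ledger object is assumed to
# answer two questions: is a transaction stored, and when was its block added.
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone
from typing import Optional

MIN_AGE = timedelta(minutes=60)  # condition 3: the ~60 min "wait time" above

@dataclass
class Transaction:
    input_pieces: list = field(default_factory=list)   # "pieces" it spends

@dataclass
class Piece:
    source_transactions: list = field(default_factory=list)  # where it comes from

def is_trusted(piece: Piece, ledger, now: Optional[datetime] = None) -> bool:
    """True only if all three conditions above hold for this "piece"."""
    now = now or datetime.now(timezone.utc)
    for tx in piece.source_transactions:
        if not ledger.contains(tx):                  # condition 1: stored
            return False
        if now - ledger.block_time(tx) < MIN_AGE:    # condition 3: old enough
            return False
        if not all(is_trusted(p, ledger, now)        # condition 2: inputs trusted
                   for p in tx.input_pieces):
            return False
    return True
```

A counterparty who wants a stricter (or looser) level of “trust” would simply use its own MIN_AGE.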

Of course, not all “pieces” of cryptocurrency are “trusted” at all times. Their normal life cycle is “under validation” and then “trusted” or “untrusted”. This means that an owner of some “pieces” of cryptocurrency will have to use only “trusted” “pieces” of cryptocurrency in his/her transactions.

Both sides of any transaction (the seller and the buyer) may independently check the level of trust of the involved “pieces” of cryptocurrency. For example, the seller may define his/her own level of “trust”, thus rejecting some “untrusted” (from his/her point of view) “pieces” of cryptocurrency. This is very similar to the good old practice of a cashier (or booking-clerk) checking some new banknotes.

Also, the blockchain-as-an-archive has to have an adding function that sends a transaction to the blockchain-as-an-archive.



Thus, by separating the logical integrity from the physical integrity, it would be possible to improve the performance of some applications which are based on the blockchain-as-an-archive.


Thanks,
AS

And thanks to Charles Moore for reviewing this blogpost.

2017-08-23

Towards Software-Defined Organisations

My presentation for BrightTalk


And some feedback from the organisers.

You had 362 pre-registered users and 122 views so far. 36 people downloaded your slides and you got 4.2/5 rating for your webinar.

All users said it was a very useful presentation, but I would like to highlight one in particular about acronyms:
"I missed the 1st minute or so of the presentation. There may have been a table of acronyms presented then because I didn't see any explanation later during the presentation. It would be very helpful if acronym definitions were provided. otherwise it was a very broad presentation encompassing a host of smaller topics worthy of discussion in themselves. It was well done and the presenter was fully competent concerning the subject matter. Thank you"

Thanks,
AS

2017-08-14

Beauty of #microservices - from #DevOps to #BizDevOps via #microservices first

As we all know, the use of the MicroService Architecture (MSA) requires very comprehensive operational practices and infrastructure. A microservice is a unit-of-functionality (or “class” in informal IT terminology) within its own unit-of-deployment (or “component” in informal IT terminology) acting as a unit-of-execution (or “computing process” in informal IT terminology). Some applications may comprise a few hundred microservices. This is certainly a serious barrier to exploiting MSA benefits such as being easy to update and easy to scale to absorb heavy workloads.

Fortunately, as we know, various performance characteristics (e.g. ease of update, ease of scaling) are not spread uniformly within applications. For example, 95% of CPU consumption is located in 5% of the program code. Thus, it is not necessary to implement the whole application via microservices.

Let us ask a simple question: if a microservice is actually a service, then can we use microservices and services together? Yes, and some functionality from platforms or monoliths may be used (via APIs) as well.

Now, let us reformulate the problem. Let us consider that any application is built from many units-of-functionality which must be deployed and then executed. What is the optimal arrangement of units-of-functionality into units-of-deployment and then units-of-execution? In other words:
  • which units-of-functionality have to be implemented as microservices (microservices are agile and easy to update, but have some execution and management overhead);
  • which units-of-functionality have to be implemented as monoliths (monoliths are not agile and not easy to update, but have no execution and management overhead);
  • which units-of-functionality have to be implemented as services (classic services are something in between microservices and monoliths).
Thus, a few recommendations may be formulated (they are summarised in the sketch after this list).
  • Units-of-functionality which are “often” updated must be implemented as microservices (so BizDevOps will be happy).
  • Units-of-functionality which have to absorb heavy workloads must be implemented as microservices (so DevOps will be happy).
  • Units-of-functionality which are “rarely” updated may be packed into a few units-of-deployment (different “packing” criteria may be used), each unit-of-deployment having its own computing process (so DevOps will be happy). Another option is dynamic loading of those units-of-functionality.
  • Units-of-functionality which are “never” updated may be packed as a monolith or platform, i.e. one unit-of-deployment and one unit-of-execution (so DevOps will be extremely happy).
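
A minimal sketch (Python; the thresholds and names are invented for illustration, not prescriptive) of these recommendations as a decision function:

```python
# Choose an implementation style per unit-of-functionality from its expected
# change rate and workload; the thresholds below are purely illustrative.
from enum import Enum

class Style(Enum):
    MICROSERVICE = "microservice"    # agile, easy to update and to scale
    SERVICE = "service"              # in between
    MONOLITH = "monolith/platform"   # no execution/management overhead

def choose_style(updates_per_year: int, heavy_workload: bool) -> Style:
    if heavy_workload:               # must absorb heavy workloads
        return Style.MICROSERVICE
    if updates_per_year >= 12:       # "often" updated
        return Style.MICROSERVICE
    if updates_per_year >= 1:        # "rarely" updated
        return Style.SERVICE
    return Style.MONOLITH            # "never" updated
```
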
Applying these recommendations to the phases of the whole application life cycle (conception, development, deployment, production, support, retirement and destruction), the following can be added:
  • At the beginning of the application life cycle (conception, i.e. prototyping, and initial development), the majority of the units-of-functionality must be implemented as microservices, because the easy-to-update characteristic is very important (especially for the business people) and, fortunately, performance characteristics are not yet an issue.
  • Closer to the end of the development phase, it becomes clear which units-of-functionality have to be changed more often than others; the others may be considered as services and even monoliths or platforms.
  • Also, the load tests (during the development and deployment phases) must show which units-of-functionality will have to absorb heavy workloads and thus be implemented as microservices.
  • Other criteria, such as risk, security, etc., may also be considered.

Obviously, “moving” a unit-of-functionality from a microservice-like implementation to a service-like implementation and then to a platform-like implementation is much easier than “moving” a unit-of-functionality from a monolith-like implementation to a service-like implementation and then to a microservice-like implementation.

This confirms the primacy of the “microservices first” approach. This approach actually provides support for BizDevOps practices (see http://improving-bpm-systems.blogspot.ch/2017/05/beauty-of-microservices-ebanliing.html ). Additionally, this approach enables interesting transformations, such as the automatic reconfiguration of applications to absorb heavy workloads by temporarily moving some units-of-functionality from a service-like implementation to a microservice-like implementation.

Remember Prof. Knuth's warning: "Premature optimisation is the root of all evil".

Thanks,
AS

The collection of posts about microservices - http://improving-bpm-systems.blogspot.ch/search/label/%23microservice 

2017-07-27

Better Architecting With – systems approach

All blogposts on this topic are at the URL http://improving-bpm-systems.blogspot.ch/search/label/%23BAW 


1 The systems approach basics


The systems approach is a holistic approach to understanding a system and its elements in the context of their behaviour and their relationships to one another and to their environment. Use of the systems approach makes explicit the structure of a system and the rules governing the behaviour of the system.

The systems approach is based on the consideration that functional and structural engineering, system-wide interfaces and compositional system properties become more and more important due to the increasing complexity, convergence and interrelationship of technologies.

The goal of the systems approach is to walk people and organisations working on complex systems through various stages and steps of analysis and synthesis in order to build a comprehensive understanding of the system-of-interest and, ultimately, be able to architect and engineer that system to any desired depth of detail.

The systems approach helps to produce the following digital work products:
  • artefacts (entities made by creative human work) which are used to implement the system-of-interest;
  • system-of-interest terminology to explain various system-of-interest concepts and the relationships between them;
  • nomenclatures (or classifications) of artefacts of the same type;
  • models to formally codify some relationships between some artefacts;
  • views (collections of models) to address some concerns of some stakeholders, and
  • architecture descriptions which consist of several views.

To facilitate the production of those digital work products, the systems approach provides:
  • systems approach terminology to explain various concepts of the systems approach and the relationships between them;
  • several templates to define various artefacts;
  • several nomenclatures with artefacts related to the systems approach;
  • several model kinds which formally define views;
  • several architecture viewpoints, i.e. conventions which can include languages, notations, model kinds, design rules, and/or modelling methods, analysis techniques and other operations on architecture views (architecture views are system-of-interest dependent while architecture viewpoints are system-of-interest independent), and
  • several patterns with techniques for transforming (not necessarily fully automatically) some model kinds into other model kinds.

Many viewpoints and views are possible.



Different stakeholders see the same system differently and recognise different artefacts. 


2 Four levels of architecting


If the system-of-interest is rather complex, then it is recommended to use the following four levels of architecting:
  1. reference model is an abstract framework for understanding concepts and relationships between them in a particular problem space (actually, this is terminology)
  2. reference architecture is a template for solution architectures which realizes a predefined set of requirements
    Note: A reference architecture uses its subject field reference model (as the next higher level of abstraction) and provides a common (architectural) vision, a modularization and the logic behind the architectural decisions taken 
  3. solution architecture is an architecture of the system-of-interest
    Note: A solution architecture (also known as a blueprint) can be a tailored version of a particular reference architecture (which is the next higher level of abstraction)
  4. implementation is a realisation of a system-of-interest

The dependencies between these 4 levels are shown in illustration below.
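
The same dependencies can also be written down as a minimal data model (a Python sketch; the class and field names are mine, not part of any standard):

```python
# Each level refers to the next higher level of abstraction.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ReferenceModel:            # level 1: terminology of the problem space
    concepts: List[str]

@dataclass
class ReferenceArchitecture:     # level 2: template for solution architectures
    based_on: ReferenceModel

@dataclass
class SolutionArchitecture:      # level 3: architecture of the system-of-interest
    tailored_from: Optional[ReferenceArchitecture]

@dataclass
class Implementation:            # level 4: realisation of the system-of-interest
    realises: SolutionArchitecture
```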


The purpose of the reference architecture is the following:
  • Explain to any stakeholder how future implementations (which are based on the reference architecture) can address his/her requirements and change his/her personal, professional and social life for the better; for example, via an explicit link between the stakeholders’ high-level requirements and the principles of the reference architecture.
  • Provide a common methodology for architecting the system-of-interest in the particular problem space, so that different people in similar situations find similar solutions or propose innovations.

In the case of a very complex system, which is to be implemented in several projects with the necessity to collaborate and coordinate between those projects, it is recommended to develop a reference solution architecture and, if required, a reference implementation (see the illustration below). This helps to identify smaller system elements (e.g. services, data, etc.) and the relationships between them (e.g. interfaces) so that they can be shared between projects.


The reference solution architecture and the reference implementation are often experimental prototypes which are not of production quality.

3 An example of digital work products


The digital work products below are listed in an approximate order because some modifications of a digital work product may necessitate some modifications in some other digital work products. The patterns to transform some digital work products into some other digital work products are not mentioned below.

3.1 Value viewpoint

The value viewpoint comprises several digital work products which describe the problem space and provide some ideas about the future solution and its expected value for the stakeholders. The digital work products of this viewpoint are:
  • problem space description;
  • system-of-interest terminology (as an initial version of the system-of-interest ontology);
  • business drivers;
  • problem space high-level requirements (or some kind of guiding principles);
  • dependencies between viewpoints, stakeholders and stakeholders’ roles;
  • dependencies between viewpoints, stakeholders, stakeholders’ roles, stakeholders’ concerns and categories of concerns;
  • beneficiaries, i.e. stakeholders who/which benefit from the system-of-interest;
  • beneficiaries’ high-level requirements;
  • scope of the future solution space;
  • mission statement and vision statement, and
  • goals (if the vision statement must be further detailed).

3.2 Big picture viewpoint

The big picture viewpoint comprises several digital work products which describe the future solution as a whole:
  • system-of-interest ontology as a reference model;
  • some classifications which are specific for this solution space;
  • illustrative model;
  • essential characteristics of the future solution;
  • dependency matrix: high-level requirements vs. essential characteristics;
  • architecture principles model kind, and
  • dependency matrix: essential characteristics vs. architecture principles.

3.3 Capability viewpoint

The capability viewpoint comprises several digital work products which describe the future solution as a set of capabilities:
  • level 1 capability map;
  • level 2 capability map;
  • level 3 capability map (if necessary), and
  • heat maps (if necessary).

3.4 TOM engineering viewpoint

The engineering viewpoint comprises several digital work products which describe the future solution as sets of some artefacts:
  • data model
  • process map
  • function map
  • service map
  • information flow map
  • document/content classification
  • etc.

3.5 Some other viewpoints

  • Organisational viewpoint
  • Operational viewpoint
  • Implementation viewpoint
  • Compliance framework
  • Regulations framework
  • Security, safety, privacy, reliability and resilience framework
  • Evolution viewpoint
  • etc.

4 Some definitions


1. reference model

abstract framework for understanding concepts and relationships between them in a particular problem space or subject field
  • Note 1 to entry: A reference model is independent of the technologies, protocols and products, and other concrete implementation details.
  • Note 2 to entry: A reference model uses a concept system for a particular problem space or subject field.
  • Note 3 to entry: A reference model is often used for the comparison of different approaches in a particular problem space or subject field.
  • Note 4 to entry: A reference model is usually a commonly agreed document, such as an International Standard or industry standard.

2. reference architecture
template for solution architectures which realize a predefined set of high-level requirements (or needs)
  • Note 1 to entry: A reference model is the next higher level of abstraction to the reference architecture.
  • Note 2 to entry: A reference architecture uses its subject field reference model and provides a common (architectural) vision, a modularization and the logic behind the architectural decisions taken. 
  • Note 3 to entry: There may be several reference architectures for a single reference model.
  • Note 4 to entry: A reference architecture is universally valid within a particular problem space (or subject field).
  • Note 5 to entry: An important driving factor for the creation of a reference architecture is to improve the effectiveness of creating products, product lines and product portfolios by
    • managing synergy,
    • providing guidance, e.g. architecture principles and good practices,
    • providing an architecture baseline and an architecture blueprint, and
    • capturing and sharing (architectural) patterns.

3. solution architecture
system architecture (or solution blueprint)
architecture of the system-of-interest
  • Note 1: A solution architecture can be a tailored version of a particular reference architecture which is the next higher level of abstraction.
  • Note 2: For experimentation and validation purposes, a reference solution architecture may be created. It helps in the creation of other solution architectures and implementations.

4. implementation
realisation of the system-of-interest in accordance with its solution architecture
  • Note 1: A reference implementation is a realisation of the system-of-interest in accordance with its reference solution architecture. It can be production quality or not.

Thanks,
AS

2017-06-20

Smart Cities from the systems point of view

Thanks,
AS

2017-06-17

Better Architecting With – big picture

This blogpost continues the blogpost “#entarch frameworks are typical monoliths which have to be disassembled for better architecting” ( see http://improving-bpm-systems.blogspot.bg/2017/06/entarch-frameworks-are-typical.html ) and uses some feedback from a LinkedIn discussion https://www.linkedin.com/feed/update/urn:li:activity:6278239461654437888/

This blogpost outlines a “big picture” including the components and operating model for Better Architecting With (BAW).

Again, the goals of BAW are:
  • to standardise a good set of #entarch common components (viewpoints, artefacts, models, etc.)
  • to enable the users to add their own components, if necessary
  • to provide formal and repeatable guidance on how to achieve the user’s unique needs with the available components.
BAW follows the “Platform-Enabled Agile Solutions” (PEAS) pattern ( see http://improving-bpm-systems.blogspot.bg/2011/04/enterprise-patterns-peas.html ). BAW comprises the following:
  1. BAW platform;
  2. a set of ready-to-use popular BAW solutions (consider them as recipes) which you may try as-is and gradually adapt for your unique needs, and
  3. BAW guidance (including some obvious documentation).
The BAW platform comprises the following components:
  • BAW ontology – a set of about 200 concepts (by my estimation) which are already defined in many sources and need to be aligned.
  • BAW artefacts – a set of about 50 (by my estimation) well-known artefacts to be aligned.
  • BAW viewpoints – a user-extendable set of about 20-30 (for now) viewpoints.
  • BAW model kinds – a user-extendable set of about 50-70 (for now) model kinds.
  • BAW patterns – a user-extendable set of techniques for transforming (not necessarily fully automatically) some model kinds into other model kinds.
The most innovative part of the BAW platform is the BAW patterns, because they capture architecting knowledge in a formal and reproducible way. BAW patterns are formalised as small processes with human and automated activities. Some examples of such patterns are at http://improving-bpm-systems.blogspot.bg/search/label/enterprise%20patterns

The BAW solutions comprise the following:
  • BAW scenarios – a set of popular architectural works such as designing data-entry applications or process-based applications, defining a business architecture, formulating an IT strategy, etc.
  • BAW skeletons – a set of existing #entarch frameworks
The BAW guidance is the most important part of BAW. In accordance with the selected scenario, the user is guided as to which views and models must be developed and how to develop them. The order of development can be almost arbitrary because the user must be able to adjust his/her models in a “pinball” way.


Again, the whole BAW must be organised in a way that anyone can add new viewpoints, model kinds, patterns and related documentation to enrich BAW with formalised and repeatable knowledge.

Thanks,
AS

2017-06-07

#entarch frameworks are typical monoliths which have to be disassembled for better architecting

This blogpost starts the "Better Architecting With" series http://improving-bpm-systems.blogspot.bg/search/label/%23BAW

#entarch frameworks are considered a must for any serious #entarch work. There are about 1 000 #entarch frameworks on this planet. The most popular of them are typical monoliths – huge in size, full of overlaps, slow to evolve, difficult to adapt to particular needs, expensive to learn, tricky to explain, etc.

It is not surprising that some organisations have to use a mixture of #entarch frameworks, although some #entarch frameworks allow some tailoring. For example, an organisation may have to use FEA because it works with the local government, TOGAF for solutions and ZF as a foundation.

Considering that organisations are demolishing/modernizing/transforming their application monoliths, let us, enterprise architects, apply the same tendency to #entarch frameworks. Such a transformation must:
  1. preserve and externalise (from the monolith frameworks) the knowledge which is accumulated by those #entarch frameworks, and
  2. provide guidance on how to build and operationalise unique #entarch practices from a coherent set of repeatable (proven or innovative) #entarch techniques and methodologies.
De facto, the “erosion” of the monolithic nature of #entarch frameworks is already ongoing (for example, in Tom Graves’ work).

Let us outline the target way of architecting:

The process of architecting will be as follows:
  • use the configurator to describe the problem space and generate a set of viewpoints for the solution space;
  • use techniques and methods to specify an initial set of models;
  • obtain an OK from all the stakeholders;
  • use techniques and methods to specify all the remaining models.
Again, the key point is a set of techniques and methodologies to link models. Repeatable techniques and methodologies will lead to better #entarch tools and a high level of automation. The whole architecting process will be faster, better and cheaper.


Thanks,
AS

2017-06-02

#GDPR as a #BPM application

This blogpost explains how to implement the EU General Data Protection Regulation (GDPR) by design and by default via Business Process Management (BPM). It describes only a reference solution architecture, without many implementation details, and focuses primarily on artefacts such as capabilities, rules, roles, data structures, documents, explicit coordination and audit trails.



1 Terminology in the GDPR


Although the information security domain is well developed, the GDPR document (see Article 4, “Definitions”) uses rather exotic terminology (sources are not provided). For example, many concepts available from the standard privacy framework (ISO/IEC 29100) have different designations (terms). Another example is that the concepts “data” and “information” do not follow the DIKW “pyramid”.

Although a mapping between the 16 terms of the GDPR and the existing terminology is not difficult, it would have been better to avoid the need for such a mapping.

2 The main element


The main element of the GDPR is a data-structure object: “Personally Identifiable Information” (PII) in the ISO/IEC 29100 terminology or “personal data” in the GDPR terminology. It must be explicitly and carefully protected:
  • for its confidentiality, integrity and availability;
  • at rest, in transit and in use;
  • throughout its life cycle.
Usually, the life cycle of PII objects is very simple and is covered by the 4 actions known as the CRUD pattern (although each update may create a new version of the PII object).

3 The core processes


Those actions, namely Create, Read, Update and Delete, must be represented as small processes (or workflows) to provide design and execution traceability. Considering that any PII is owned by the “PII principal” (the natural person to whom a set of personally identifiable information relates), he/she must approve some actions on his/her PII object.

For example, a PII principal must provide his/her consent to process his/her PII object, and such a consent must be kept as a record by the “PII processor” (the privacy stakeholder that processes personally identifiable information on behalf of and in accordance with the instructions of a “PII controller”).
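
As a sketch only (Python; the names, data structures and in-memory storage are hypothetical), a consent-gated “Create” step that keeps the consent and an audit trail as records:

```python
# The "Create" action of the PII life cycle may proceed only if the PII
# principal's consent for this processing purpose is on record.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    principal_id: str        # the PII principal ("data subject")
    purpose: str             # the processing purpose the consent covers
    granted_at: datetime

@dataclass
class PiiProcessor:
    consents: list = field(default_factory=list)
    pii_objects: dict = field(default_factory=dict)
    audit_trail: list = field(default_factory=list)   # who did what and why

    def record_consent(self, principal_id: str, purpose: str) -> None:
        self.consents.append(
            ConsentRecord(principal_id, purpose, datetime.now(timezone.utc)))
        self.audit_trail.append(("consent", principal_id, purpose))

    def create_pii(self, principal_id: str, purpose: str, data: dict) -> None:
        if not any(c.principal_id == principal_id and c.purpose == purpose
                   for c in self.consents):
            self.audit_trail.append(("create-rejected", principal_id, purpose))
            raise PermissionError("no recorded consent for this purpose")
        self.pii_objects[principal_id] = data
        self.audit_trail.append(("create", principal_id, purpose))
```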

4 Some supporting capabilities


The core (or life cycle) processes use several capabilities (services or processes) such as:
  • identity management
  • access management
  • anonymization 
  • encryption
  • etc.
Also, some error-handling and exception-handling processes are necessary to properly handle privacy incidents.

5 Related roles


The essential roles are the following:
  • “PII principal” in the ISO/IEC 29100 terminology or “data subject” in the GDPR terminology – the owner of the PII, 
  • “PII processor” – persons or organisations who/which execute the GDPR processes,
  • “PII controller” – authority which, alone or jointly with others, determines the purposes and means of the processing of personal data, and
  • “Data Protection Officer (DPO)” – person who is the owner of the GDPR processes.

6 Rules


The execution of the GDPR processes is guided by numerous rules which are, actually, the majority of the GDPR document. For example, if a PII principal is a citizen of an EU country, then the “PII processes” must follow the GDPR.

Unfortunately, it is unknown whether these rules comply with the MECE principle (no overlaps and no holes).

7 Complex scenarios


There are a few scenarios which involve more than one PII object, for example: split, merge, export, transportation, correlation, etc.

8 Conclusion


The use of BPM to implement the GDPR addresses all the GDPR concerns. Explicit and machine-executable processes are mandatory to achieve, by design and by default, the key points listed below.
  • Compliance – all privacy-related activities and the coordination between them can be easily analysed.
  • Accountability – the generated audit trails provide factual and objective information about who did what and why.
  • Data Protection Officers (DPOs) – a role which owns all the GDPR processes.
  • Consent – achieved by the design of the GDPR processes and records management.
  • Enhanced rights for individuals – achieved by the design of the GDPR processes.
  • Privacy policies – all PII controllers and PII processors must analyse their privacy policies via the logic of explicit processes.
  • International transfers – also become processes.
  • Breach notification – an integral part of the privacy-incident GDPR processes.


Thanks,
AS

2017-05-31

Beauty of #microservices - making them practical

The classic definition of the microservice architectural style as “an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms” creates a lot of fears and misunderstandings:
  • Application monoliths are evil, but having too many microservices sounds like creating an (unknown) evil as well.
  • Everything has to be re-developed.
  • Microservices will create a huge backlog for our agile team.
  • Microservices? They are neither architecture nor architectural style – just a technical stack.
As usual in IT, any new technology or methodology (which pretends to revolutionise everything) must be used together with many existing ones. Let us “intermix” MSA with some existing and proven technologies and methodologies.

MicroService Architecture (MSA) brings two major concepts:
  1. a microservice as a unit-of-functionality, unit-of-deployment and unit-of-execution with the same boundaries, and
  2. assembling a whole application from microservices of different origins: off-the-shelf (commercial and FOSS), bought, rented, built, provided from SaaS, PaaS, APaaS, etc.
Using these two concepts, let us try to find a practical balance between monolith architecture and MSA.

Firstly, it is necessary to think about any application as a set of the following artefacts:
  • Events
  • Roles (actually, access rights management)
  • Rules (or decisions)
  • Business objects – data structures
  • Business objects – documents
  • Human activities (or screens or interactive services)
  • Automation activities (or scripting fragments or automation services)
  • Coordination
  • Audit trails
  • KPIs
  • Reports
Secondly, consider that each artefact must, ideally, be handled:
  • Explicitly
  • As a set of microservices
  • Via APIs
  • With versioning 
  • By a specialized OTS tool, e.g. data structures are handled by a database, processes are handled by a BPM-suite tool
  • In a Domain Specific Language (DSL), e.g. BPMN for processes, DMN for rules
  • Over its whole life cycle
Thirdly, identify the specialised tools for each artefact:
  • Coordination as explicit and machine-executable processes via a BPM-suite tool
  • Roles via an access management tool
  • Documents via an ECM product
  • Automation fragments as scripts in an interpretive language and execution robots
  • Audit trail and reports via BI tools
  • etc.
Fourthly, prepare two common “pools” for future tools, services and microservices:
  • technological pool for generic off-the-shelf products; their functionality is available via APIs;
  • enabling pool for services, microservices and tools which are a) specific for the particular organisation and b) potentially reusable within the organisation; their functionality is available via APIs.
For each monolith application, sort its functionality into the 2 common pools and an individual pool (a small sketch follows).
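
A minimal sketch (Python; the classification flags are hypothetical) of such sorting:

```python
# Sort a monolith's units-of-functionality into the two common pools
# (technological and enabling) and an individual pool.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Unit:
    name: str
    generic_off_the_shelf: bool = False     # generic OTS functionality
    reusable_in_organisation: bool = False  # organisation-specific but reusable

def sort_into_pools(units: List[Unit]) -> Tuple[List[Unit], List[Unit], List[Unit]]:
    technological, enabling, individual = [], [], []
    for unit in units:
        if unit.generic_off_the_shelf:
            technological.append(unit)      # e.g. document storage, databases
        elif unit.reusable_in_organisation:
            enabling.append(unit)           # e.g. a corporate decision rule
        else:
            individual.append(unit)         # specific to this one application
    return technological, enabling, individual
```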


As a result, we get a corporate unified business execution platform which standardises and simplifies the core elements of the corporate-wide computing system. For any elements outside the platform, new opportunities should be explored using agile principles. These twin approaches should be mutually reinforcing:
  • The platform frees up resources to focus on new opportunities, while successful agile innovations are rapidly scaled up when incorporated into the platform.
  • An agile approach requires coordination at a system level.
  • To minimise duplication of effort in solving the same problems, there needs to be system-wide transparency of agile initiatives.
  • Existing elements of the platform also need periodic challenge. Transparency, publishing feedback and the results of experiments openly, will help to keep the pressure on the platform for continual improvement as well as short-term cost savings.
Obviously, do not forget a good application architecture - http://improving-bpm-systems.blogspot.ch/2017/05/beauty-of-microservices-ebanliing.html and http://improving-bpm-systems.blogspot.ch/2016/08/better-application-architecture-apparch.html


Thanks,
AS

Other blogposts about microservices - http://improving-bpm-systems.blogspot.ch/search/label/%23microservices

2017-05-22

Beauty of #microservices - enabling #BizDevOps culture

Everyone has heard about the DevOps culture, which refers to a set of practices that emphasise the collaboration and communication of both software developers and IT professionals while automating the process of software delivery and infrastructure changes.

Certainly, DevOps improves the time-to-market for digital solutions, but it spans only the down-stream part of the idea-to-solution value stream. To cover the whole value stream, any up-stream stumbling blocks must be removed.

An application architecture which is built on microservices and their machine-executable coordination (e.g. by processes) enables a new BizDevOps culture through the quick implementation of business ideas. (I think that ING introduced the BizDevOps concept.)

A microservice is a service with the same boundaries as:
  • a unit-of-functionality (for Biz)
  • a unit-of-deployment (for Dev)
  • a unit-of-execution (for Ops)
Thus, an implementation of a business idea as a group of microservices will have no unnatural complexity and, therefore, its time-to-market will be short.
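
As an illustration only, a minimal sketch (Python standard library; the business rule is invented) of one unit-of-functionality that is also its own unit-of-deployment and unit-of-execution:

```python
# One business function (the unit-of-functionality) in one file (the
# unit-of-deployment) run as one OS process (the unit-of-execution).
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs, urlparse

def approve_loan(amount: float) -> dict:
    return {"approved": amount <= 10_000}   # the Biz part: one business rule

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):                       # e.g. GET /?amount=5000
        query = parse_qs(urlparse(self.path).query)
        amount = float(query.get("amount", ["0"])[0])
        body = json.dumps(approve_loan(amount)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8080), Handler).serve_forever()
```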


Thanks,

AS

Other blogposts about microservices - http://improving-bpm-systems.blogspot.ch/search/label/%23microservices  

Beauty of #microservices - adding ASSEMBLE to BUILD or BUY or RENT

MicroService Architecture (MSA) is changing the classic Buy or Build or Rent question.

With MSA, organisations may use an extra option - Assemble. This allows an organisation to select the best option for each service or microservice. For example:
  • Buy services and microservices from organisation's business partners
  • Build microservices for organisation's unique capabilities
  • Rent services and microservices from the commodity markets
  • Assemble all services and microservices into organisation's unique solutions

Thanks,
AS

Other blogposts about microservices - http://improving-bpm-systems.blogspot.ch/search/label/%23microservices

2017-05-08

Beauty of #microservices - #microservice architecture maturity model

This blogpost is inspired by the discussion https://www.linkedin.com/feed/update/urn:li:activity:6266622261210411008/

I can imagine an MSA maturity matrix:
  1. atomic microservices 
  2. compound microservices (a microservice with wide single responsibility is built from microservices with narrow single responsibility) 
  3. data-entry applications in MSA (interactive microservices, validation microservices, persistence microservices) 
  4. process-driven applications in MSA (coordination microservices, management microservices, etc.) 
  5. all former applications in a value-chain are in MSA (actually, no boundaries between applications)
  6. all former applications within an organization are in MSA
  7. all former applications in a supply-chain are in MSA
  8. all former applications in an ecosystem are in MSA

Please see my blogposts on #microservices - http://improving-bpm-systems.blogspot.it/search/label/%23microservices 

Thanks,
AS

2017-03-04

Enterprise patterns: SRAM

This enterprise architecture pattern, Security, Risk and Architecture Mixture (SRAM), is just a picture.



Thanks,
AS

2017-02-20

Systems-level standardisation (example of smart cities)

Overall common city goals include, for example, sustainable development, efficiency, resilience, safety, and support for citizens’ engagement and participation. However, an individual city will follow its own approach in its smart cities programmes and projects.

The current implementation practices of smart cities are rather disjoint, namely:
  • smart cities projects are, primarily, local initiatives
  • smart cities projects are considered as technology projects
  • numerous smart cities interest groups are, primarily, clubs
  • efforts for development of common vision are insufficient
  • typical financing patterns are: the government budgets (to some extent) some cities, which then engage technological companies; or the government budgets some technological companies (China’s approach), which then engage cities.
As a result, there is no agreed basis for efficient and effective cooperation and coordination between different smart cities programmes and projects. They duplicate a lot of work, the developed solutions are not reusable, and the same mistakes are repeated.

To address such negative phenomena, the IEC came up with a new approach to standardisation – the systems-level standardisation, which provides the context for the traditional product-level standardisation. The systems-level standardisation aims to achieve synergy between uniformity (availability of standard products) and diversity (the ability to combine standard and proprietary products).

The systems-level standardisation (which is carried out by the IEC Systems Committee “Smart Cities”) will offer smart cities programmes and projects commonly agreed and fully traceable deliverables, namely:
  • reference model (ideally, as an ontology), 
  • reference architecture of a smart city as a system,
  • typical use cases (how various actors interact with the smart city as a system); and 
  • set of existing and new standards for implementation of various capabilities of smart cities. 
The openness of those deliverables allows the reference architecture to be easily adjusted to an individual city and already available standard elements to be re-used.

The smart cities vision can be illustrated by the following figure.
(CUBE means Common Urban Business Execution)


Being equipped with those deliverables, various smart cities programmes and projects can carry out efficient and effective cooperation and coordination among themselves, thus:
  • decreasing the total cost of smart cities programmes and projects,
  • reducing the lead time, and
  • increasing the quality of implementations.
At present, there are 4 Systems Committees (SyC):
  1. Smart Energy
  2. Active Assisted Living
  3. Low Voltage Direct Current
  4. Electrotechnical aspects of Smart Cities
And one System Evaluation Group (SEG):
  1. Smart Manufacturing

The IEC systems-level standardisation is based on the IEC Systems Approach.

Thanks,
AS