The reusability fallacy – Part 3, Hacker News

In the second part of this blog series about reusability I discussed the costs of making a software asset reusable. It turned out that creating an actually reusable asset takes several times the cost and effort of creating the same asset for just a single purpose.

Additionally, we looked at asset properties that act as reusability promoters or inhibitors, to understand which functionalities are worth making reusable and which are not. It turned out that almost everything worth making reusable already exists – typically as part of a programming language ecosystem or an OSS solution. On the other hand, the architectural paradigms that are usually sold with reusability as a huge cost saver typically target the functional domains that offer little reuse potential.

Distributed systems as “reusability enablers”

In this post I will start with a discussion of why reusability in distributed systems is a false friend. The problem is that most of the architectural paradigms sold with “reusability” result in distributed systems, i.e., systems where the different parts – especially the “reusable” ones – live in different process contexts and use remote communication for interaction.

It started with the Distributed Computing Environment (DCE) in the very early 1990s. A bit later in the 1990s, the next big hype was the Common Object Request Broker Architecture, better known as “CORBA”. When that hype faded, we had language- or platform-specific approaches like Enterprise JavaBeans (EJB) or Microsoft’s Distributed Component Object Model (DCOM), before the Service-Oriented Architecture (SOA) caused the next big hype in the early 2000s. The latest paradigm, “Microservices”, became popular in the mid-2010s.

All these architectural paradigms (and probably some more I forgot to list) were – and still are – sold with reusability as one of the big productivity drivers and cost savers (see the fallacy in the first post of this series ). All of them also result in distributed system architectures.

Distributed systems are hard

Therefore, it makes sense to have a quick peek at the characteristics of distributed systems. The shortest “definition” I know about distributed systems is:

Everything fails, all the time. – Werner Vogels (CTO Amazon)

This statement sounds rather pessimistic. Why would the CTO of a company that has proven it knows how to build distributed systems say such a thing? The reason probably lies in the failure modes of distributed systems. Remote communication exhibits failure modes that simply do not exist inside a process boundary:

Crash failure – a remote peer responds as expected until it stops working, neither accepting requests nor responding anymore.

Omission failure – sometimes a remote peer responds, sometimes it does not. From the sender’s perspective the connection feels sort of “flaky”.

Timing failure – the remote peer responds, but it takes too long (with respect to an agreed-upon maximum response time). In practice, this tends to be the hardest failure mode, as latency usually spreads very fast in distributed systems.
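To make these failure modes concrete from the caller’s side, here is a minimal sketch (a hypothetical illustration, not from the original post) that simulates a remote peer exhibiting each mode, plus a naive retry-based client. The class and function names are invented for the example:

```python
import random

class CrashFailure(Exception):
    """Peer stopped working entirely (crash failure)."""

class RemotePeer:
    """Simulated remote peer; `mode` selects a failure mode."""
    def __init__(self, mode):
        self.mode = mode  # "ok", "crash", "omission", or "timing"

    def call(self, timeout=0.1):
        if self.mode == "crash":
            # Crash failure: peer no longer accepts or answers requests.
            raise CrashFailure("connection refused")
        if self.mode == "omission" and random.random() < 0.5:
            # Omission failure: some responses are simply lost.
            raise TimeoutError("no response within %.2fs" % timeout)
        if self.mode == "timing":
            # Timing failure: a response arrives, but too late to count.
            raise TimeoutError("response exceeded %.2fs budget" % timeout)
        return "pong"

def call_with_retries(peer, attempts=3):
    """Naive client-side mitigation: retry on timeout.

    Retries help against omission failures, do nothing against crash
    failures, and tend to make timing failures worse by adding load.
    """
    for _ in range(attempts):
        try:
            return peer.call()
        except TimeoutError:
            continue  # maybe the next attempt gets through
    raise TimeoutError("gave up after %d attempts" % attempts)

print(call_with_retries(RemotePeer("ok")))  # healthy peer answers "pong"
```

Note that from the caller’s perspective an omission failure and a timing failure look identical (a timeout); only the crash failure is distinguishable, and even that only if the transport reports it.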
