Admittedly, I was mildly shocked and a bit scared when I stumbled over @ZackMaril’s recent tweet claiming Clojure was a dying language. A huge discussion was sparked and a lot has been written since. A summary can be found in Arne Brasseur’s post, which also offers some good insight into the community behind Clojure. Eric Normand added another layer of abstraction today and collected some more links in this week’s Drama edition of the purelyfunctional.tv newsletter.

I share Dan Lebrero’s sentiment on Clojure. Clojure taught me lessons and introduced me to concepts that I do not want to miss any more. First and foremost, I love the built-in immutable collections. Immutability as the default is a milestone in my life as a developer. I compare it to git, pair programming, continuous integration and IntelliJ IDEA (with Cursive now, of course). There was a time when I did not even know about any of these. Now I cannot imagine ever working without any of them.
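To make that point concrete, here is a tiny Clojure sketch (the names are made up for illustration): “changing” a collection never touches the original, it simply yields a new value that shares structure with the old one.

```clojure
;; Immutable collections in Clojure: conj and assoc return new values,
;; the originals stay exactly as they were.
(def prices [19 29 39])
(def with-sale-item (conj prices 9))

prices          ;; => [19 29 39]     -- unchanged
with-sale-item  ;; => [19 29 39 9]   -- a new, structurally shared vector

(def product {:name "sneaker" :price 59})
(assoc product :price 49)  ;; => {:name "sneaker", :price 49}
product                    ;; => {:name "sneaker", :price 59} -- still the old value
```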

In this post I want to emphasize another, no less important aspect: productivity. With Clojure we get a whole lot of stuff done. Seriously.

In a cross-departmental project, OTTO wanted to generate insights into the customer journey and use them as a basis for concrete measures (e.g. improving the customer experience). The goal was to build an analytical foundation that makes potential visible and provides added value for OTTO.

The Business Intelligence (BI) and Customer Service and Logistics (KL) departments were involved in the project.

BI is a central driver of OTTO’s development towards a “data-driven company”.

When exporting shop data to price comparison engines, the operators of those engines make precise specifications about the format in which the product data has to be delivered. One operator wants the item price as a number in cents, for example, another as a string with a comma and a euro sign. Very different requirements also apply to stating availability, shipping costs or images.
The software that handles this transformation and data delivery at OTTO is called ProPHET (Produktdaten Partnerfeed Handling & Export Tool). This article shows the benefits of switching to a domain-specific language (DSL) for describing the transformation of this data.
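The actual ProPHET DSL is not shown in this excerpt, but a minimal, purely hypothetical Clojure sketch illustrates the idea of describing per-partner transformations as data; all names and specs below are assumptions for illustration only.

```clojure
;; Illustrative sketch only -- not the actual ProPHET DSL.
(defn cents->euro-string
  "Formats a price in cents as a string with comma and euro sign, e.g. 1999 -> \"19,99 €\"."
  [cents]
  (str (quot cents 100) "," (format "%02d" (rem cents 100)) " €"))

;; One partner wants the price as a plain number in cents ...
(def partner-a-spec
  {:price :price-cents
   :image :image-url})

;; ... another wants it as a formatted string with comma and euro sign.
(def partner-b-spec
  {:price (comp cents->euro-string :price-cents)
   :image :image-url})

(defn transform
  "Applies a partner-specific spec to a single product map."
  [spec product]
  (into {} (for [[field f] spec] [field (f product)])))

(transform partner-b-spec {:price-cents 1999 :image-url "https://example.com/shoe.jpg"})
;; => {:price "19,99 €", :image "https://example.com/shoe.jpg"}
```

Because such a spec is plain data, adding a new partner format amounts to writing a new map rather than new control flow, which is the kind of benefit a transformation DSL aims for.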

Modern e-commerce systems often consist of several servers that communicate with the users’ browsers over SSL-secured connections. However, the customers’ machines are not the only ones a shop has to exchange data with: it is almost always necessary for the individual servers of the system to communicate with each other as well. Whether the architecture is based on microservices or verticals makes no difference here. Even aging monoliths happily use several servers in a multi-tier setup.

What these backend requests have in common (in contrast to the network traffic between the customer’s browser and otto.de) is that, in general, no confidential data goes over the wire, but rather things like product availability updates or configuration requests. This content does not have to be kept secret, which spares us the effort and complexity of encrypting it.

Nevertheless, not just anyone may change the texts in the shop or modify product information, so we need a method for authenticating and authorizing these requests.
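The excerpt does not spell out which mechanism is used. As one common way to authenticate unencrypted backend requests, the request body can be signed with a shared secret; the following Clojure sketch (all names and the key handling are assumptions, not OTTO’s actual implementation) shows the idea using HMAC-SHA256.

```clojure
(ns request-signing.sketch
  (:import (javax.crypto Mac)
           (javax.crypto.spec SecretKeySpec)
           (java.util Base64)))

;; Hypothetical shared secret; in practice this would come from configuration,
;; not from source code.
(def ^:private shared-secret "not-a-real-secret")

(defn sign
  "Computes a Base64-encoded HMAC-SHA256 signature over the request body."
  [^String body]
  (let [mac (Mac/getInstance "HmacSHA256")
        key (SecretKeySpec. (.getBytes shared-secret "UTF-8") "HmacSHA256")]
    (.init mac key)
    (.encodeToString (Base64/getEncoder)
                     (.doFinal mac (.getBytes body "UTF-8")))))

(defn authentic?
  "Checks whether the signature sent with a request matches the body.
   A production version would also use a constant-time comparison and
   a timestamp or nonce to prevent replay attacks."
  [^String body ^String signature]
  (= signature (sign body)))
```

The point of such a scheme is that the content stays readable on the wire, but only a sender who knows the shared secret can produce requests that will be accepted.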

A Completely Biased Summary of the Assets of Microservice Architectures

by @GuidoSteinacker

When we started ‘Lhotse’, the project to replace the old, monolithic e-commerce platform of otto.de, a few years ago, we chose self-contained systems (SCS) to implement the new shop: instead of developing a single big monolithic application, we decided to vertically decompose the system by business domains (‘search’, ‘navigation’, ‘order’, …) into several mostly loosely coupled applications, each having its own UI, its own database, redundant data and so on. If you are more interested in the ‘how’ instead of the ‘why’, please have a look at an earlier article on monoliths and microservices.

Lately, some of these SCS turned out to still be too large, so we decomposed them further by extracting several microservices. Because we are already running a distributed system, cutting applications into smaller pieces is now a rather easy exercise. This is one of the reasons why I agree with Stefan Tilkov that you should not start with a monolith when your goal is a microservices architecture.

This article is not about the pros and cons of microservice architectures. This article is mostly about the pros. Not because they do not have downsides, but because I’m biased and completely convinced that microservices are a great idea.

When we began the development of our new online shop otto.de, we chose a distributed, vertical-style architecture at an early stage of the process. Our experience with the previous system had shown us that a monolithic architecture cannot keep up with constantly emerging requirements. Growing volumes of data, increasing loads and the need to scale the organization all forced us to rethink our approach.

Therefore, this article will describe the solution that we chose, and explain the reasons behind our choice.

Abstract

In the last two months, we started our journey towards a new microservices architecture. Among other things, we found that our existing CD tools were not ready to scale with the new requirements. So we tried a new approach and defined our pipelines in code using LambdaCD. In combination with a Mesos cluster, we can deploy new applications within a few minutes to see how they fit into our architecture by running tests against existing services.

Part 1: The underlying infrastructure
Part 2: Microservices and continuous integration
Part 3: Current architecture and vision for the future

In this part of my article I want to explain how we define microservices and why we think they are the best choice for our applications. Furthermore, I will give a brief introduction to the problems we have with common CI tools and how we want to solve them with LambdaCD.
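To give an impression of what “pipelines in code” means, here is a minimal LambdaCD sketch. The step names, paths and commands are assumptions for illustration; the namespaces follow LambdaCD’s documented structure, but this is not our actual pipeline.

```clojure
(ns pipeline.core
  (:require [lambdacd.steps.shell :as shell]
            [lambdacd.steps.manualtrigger :as manualtrigger]
            [lambdacd.steps.control-flow :refer [in-parallel]]))

;; Each step is an ordinary Clojure function taking the previous step's
;; output (args) and the pipeline context (ctx).
(defn run-tests [args ctx]
  ;; Working directory and commands are placeholders.
  (shell/bash ctx "/var/build/checkout" "lein test"))

(defn build-artifact [args ctx]
  (shell/bash ctx "/var/build/checkout" "lein uberjar"))

(defn deploy [args ctx]
  ;; Hypothetical deployment script targeting the cluster.
  (shell/bash ctx "/var/build/checkout" "./deploy.sh"))

;; The pipeline itself is just data: a quoted list of steps.
(def pipeline-def
  `(manualtrigger/wait-for-manual-trigger
    (in-parallel
     run-tests
     build-artifact)
    deploy))
```

Because the whole pipeline is a regular Clojure value, it can be versioned, reviewed and refactored like any other code, which is exactly what we were missing in our existing CD tools.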

A little more than a year ago, things at work were completely different for me than they are now. I was a Java programmer working on a big, monolithic piece of software. There was also some JavaScript for the frontend and for MongoDB, some Groovy for scripting Gradle and the occasional Bash script, but if you had called me a 100% Java programmer, I would not have objected. Don’t get me wrong, I enjoyed my job and I always loved working with Java, but, as you may relate to, after years of hacking Java I was a little tired of it. As you may also relate to, I had little hope that the situation would change significantly anytime soon. As it turned out, I was wrong about that. Hugely wrong.