Whenever we present how we release features and deploy our code in one of OTTO's core functional teams, we are met with a certain set of questions, e.g.: “Why do you want to deploy more than once a week?”, “If you automate release and test management, what are the release and test managers doing?”, “How can we prevent major bugs from entering the shop?”, “Where is the final control instance that decides whether something goes live?”, the typical “Who is responsible if something breaks?”, or simply “Why the heck would someone want to do this?”

Let us answer those questions. Let us guide you through our way of working. Let us show you which processes we have (and which ones we do not have) and give you a hint on how to increase productivity and quality at the same time (without firing the test manager). All you have to do is sit back, relax and let go of your fear of losing control. Don’t worry, you won’t lose it.

Abstract

In the last two months, we started our journey towards a new microservices architecture. Among other things, we found that our existing CD tools were not ready to scale with those new requirements. So we tried a new approach, defining our pipelines in code using LambdaCD. In combination with a Mesos cluster, we can deploy new applications within a few minutes to see how they fit into our architecture by running tests against existing services.

Part 1: The underlying infrastructure
Part 2: Microservices and continuous integration
Part 3: Current architecture and vision for the future
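
LambdaCD pipelines are defined in Clojure, so the snippet below is not its actual DSL. It is only a rough, hypothetical sketch (here in Python) of the pipeline-as-code idea mentioned in the abstract: every build step is an ordinary function living next to the application code, and the pipeline itself is nothing more than an ordered list of those functions. The commands and the deploy script are made-up placeholders.

```python
# Hypothetical pipeline-as-code sketch (not LambdaCD's Clojure DSL).
# Each step is a plain function; the pipeline is an ordered list of steps.
import subprocess

def sh(cmd):
    """Run a shell command; a non-zero exit code fails the step."""
    subprocess.run(cmd, shell=True, check=True)

def run_tests():
    sh("./gradlew test")        # placeholder test command

def build_artifact():
    sh("./gradlew assemble")    # placeholder build command

def deploy_to_cluster():
    sh("./deploy.sh staging")   # hypothetical script that hands the artifact to the cluster

PIPELINE = [run_tests, build_artifact, deploy_to_cluster]

if __name__ == "__main__":
    for step in PIPELINE:
        step()
```

Because the pipeline definition is just code, it is versioned and reviewed together with the application, which is part of what makes it quick to set up a pipeline for a brand-new service.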

On March 6th, we hosted the Varnish Developer Day (VDD) at our Loft 6. We are proud to give the community a place to meet and to support the open source project Varnish.

Why do we do this?
Varnish is one of the main software products we have integrated into otto.de to make our shop fast and to achieve our business goals. Keeping in mind that we have a ‘shared nothing’ software architecture, we still need at least one central system where everything comes together: how should each of our independent applications receive HTTP requests? A reverse proxy fulfills this role naturally.
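
In production this routing is done by Varnish and configured in VCL; purely to illustrate the idea that a single central reverse proxy receives every request and forwards it to the independent application responsible for it, here is a minimal, hypothetical sketch in Python. The path prefixes and backend hostnames are invented for this example.

```python
# Minimal reverse-proxy illustration (not Varnish/VCL): one central entry
# point forwards each request to the independent backend application
# that owns the requested path. Hostnames and prefixes are made up.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# hypothetical routing table: URL prefix -> backend application
BACKENDS = {
    "/search": "http://search-service:8080",
    "/order":  "http://order-service:8080",
}

class Proxy(BaseHTTPRequestHandler):
    def do_GET(self):
        for prefix, backend in BACKENDS.items():
            if self.path.startswith(prefix):
                with urlopen(backend + self.path) as upstream:
                    status = upstream.status
                    body = upstream.read()
                self.send_response(status)
                self.end_headers()
                self.wfile.write(body)
                return
        self.send_error(404, "no backend for this path")

if __name__ == "__main__":
    HTTPServer(("", 8000), Proxy).serve_forever()
```

Varnish adds what such a toy cannot: caching, backend health checks and the performance needed to keep the shop fast.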

On April 3rd and 4th, 2014, I attended BED-CON in Berlin, giving a talk about our experiences in the LHOTSE project from an operations perspective. The conference took place on the campus of the Freie Universität Berlin, which provided a very nice campus atmosphere. The talks were Java- and development-heavy and therefore fit OTTO and our shop very well. Here is my personal conference report.

One of the goals we have set for the Lhotse platform is continuous delivery: changes should not only be integrated continuously, but also go into production quickly once they have been accepted. How we plan to achieve this in practice is a topic of its own. But to be able, in principle, to put a new version live at any time, the system must be able to switch to a new release without interruption.
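
How the switch is implemented on the platform is, as stated above, a topic of its own; the sketch below only illustrates the underlying idea of switching without interruption (a blue/green-style cutover, which is an assumption here, not a description of the Lhotse setup): the new release is started next to the old one, health-checked, and only then does it take over the traffic. The URLs and the health endpoint are invented for this illustration.

```python
# Hypothetical blue/green-style switch: the old release keeps serving
# until the new one has proven it is healthy, so there is no downtime.
# All URLs and the health endpoint are made up for this sketch.
import urllib.request

live = {"backend": "http://shop-blue:8080"}   # release currently serving traffic
CANDIDATE = "http://shop-green:8080"          # freshly deployed release

def healthy(url):
    """Return True if the candidate release answers its health check."""
    try:
        with urllib.request.urlopen(url + "/internal/health", timeout=2) as resp:
            return resp.status == 200
    except OSError:
        return False

def switch_release():
    if healthy(CANDIDATE):
        live["backend"] = CANDIDATE           # routing now points at the new release
    else:
        raise RuntimeError("new release is unhealthy, keeping the old one live")
```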

Thursday, February 28th, 2013: alpha.otto.de goes live

At 12:00, the first few thousand invitations were sent to customers, and the first “real” customers are trickling in: alpha.otto.de is live! A completely newly developed shop platform that will form the basis of otto.de starting this autumn.

With the alpha version, we want to gather our first experiences in the wild, practice operating the platform and, above all, get feedback from customers for its further development.