Pipem: Pipeline Monitoring made easy

Pipem · 4 min read · Dec 14, 2022

When we decided to migrate our monolithic server to Kubernetes, we naturally had to think about how to split the whole infrastructure into microservices.

After the analysis, we started building repositories and pipelines for CI/CD.
We noticed that the number of pipelines grew quite fast, and after only 15 of them it became difficult to keep track of which ones had gone well, whether there had been particularly slow executions, and so on.

The Google Cloud dashboard for monitoring your pipelines (Cloud Build) looks like this:

Cloud Build Dashboard

We wanted something that would give us the status of all pipelines at a glance, with last run date, status, duration, etc.

After searching the market and not finding anything that was right for us, we decided to build our own ad hoc software.

The idea

So the first thing we did was create a parser that downloaded the execution events and presented them on a dashboard:

Dashboard
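For the curious, the core of that parser is nothing more than paging through past executions from the provider's API. Here is a minimal sketch of the fetch step, assuming the official Cloud Build Go client (cloud.google.com/go/cloudbuild/apiv1) and a made-up project ID; it is an illustration, not Pipem's actual code:

```go
package main

import (
	"context"
	"fmt"
	"log"

	cloudbuild "cloud.google.com/go/cloudbuild/apiv1"
	"cloud.google.com/go/cloudbuild/apiv1/cloudbuildpb"
	"google.golang.org/api/iterator"
)

func main() {
	ctx := context.Background()

	// Authenticates via Application Default Credentials.
	client, err := cloudbuild.NewClient(ctx)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Page through past build executions of a (made-up) project.
	it := client.ListBuilds(ctx, &cloudbuildpb.ListBuildsRequest{ProjectId: "my-gcp-project"})
	for {
		build, err := it.Next()
		if err == iterator.Done {
			break
		}
		if err != nil {
			log.Fatal(err)
		}
		// Id, status and start time: exactly what a dashboard row needs.
		fmt.Println(build.GetId(), build.GetStatus(), build.GetStartTime().AsTime())
	}
}
```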

Once done, we thought it would be interesting to see some statistics on the execution results, perhaps with a nice graph, so we developed the insights section:

Insights

Here we group the events by day and assign each day a color based on the execution results.
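The grouping behind that chart is straightforward once every execution is reduced to a finish time and a result. A minimal sketch in Go, with a hypothetical Event type:

```go
package insights

import "time"

// Event is a hypothetical normalized execution event.
type Event struct {
	Finished time.Time
	Status   string // e.g. "success", "failure"
}

// CountByDayAndStatus buckets events by calendar day and execution result;
// the per-day counts are all the chart needs to pick its colors.
func CountByDayAndStatus(events []Event) map[string]map[string]int {
	buckets := make(map[string]map[string]int)
	for _, e := range events {
		day := e.Finished.Format("2006-01-02")
		if buckets[day] == nil {
			buckets[day] = make(map[string]int)
		}
		buckets[day][e.Status]++
	}
	return buckets
}
```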

Then one day we came across the DORA metrics, which are used to evaluate the performance of a team. Awesome!

We already had the data; we just needed to analyze it, and so here is the metrics section:

Metrics
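As an illustration of what "just analyzing" the existing data means, two of the four DORA metrics fall straight out of the same events: deployment frequency and change failure rate. A rough sketch, reusing the same hypothetical Event shape as above (the real implementation may differ):

```go
package metrics

import "time"

// Event is the same hypothetical normalized execution event as in the previous sketch.
type Event struct {
	Finished time.Time
	Status   string
}

// DeploymentFrequency returns successful executions per day over a time window.
func DeploymentFrequency(events []Event, from, to time.Time) float64 {
	days := to.Sub(from).Hours() / 24
	if days <= 0 {
		return 0
	}
	deploys := 0
	for _, e := range events {
		if e.Status == "success" && !e.Finished.Before(from) && e.Finished.Before(to) {
			deploys++
		}
	}
	return float64(deploys) / days
}

// ChangeFailureRate returns the share of executions that failed.
func ChangeFailureRate(events []Event) float64 {
	if len(events) == 0 {
		return 0
	}
	failures := 0
	for _, e := range events {
		if e.Status == "failure" {
			failures++
		}
	}
	return float64(failures) / float64(len(events))
}
```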

A month later, during a startup event, we heard the words "carbon footprint" and "efficiency".

Apart from the environmental impact, it is certainly interesting to be able to analyze the efficiency of a team: how many minutes of execution are wasted and which pipelines have the lowest rate of successful executions (and if you use a cloud provider, avoiding wasted resources also saves you money).

Once again, since we already had the data, we just needed to find a way to present it. After a round of testing, we released the efficiency section:

Efficiency
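To make "wasted minutes" concrete, one reasonable definition (the one sketched here, not necessarily the exact formula Pipem uses) is the total duration of failed executions, aggregated per pipeline:

```go
package efficiency

import "time"

// Run is a hypothetical execution record with its pipeline and duration.
type Run struct {
	Pipeline string
	Status   string
	Duration time.Duration
}

// WastedMinutes sums the duration of failed runs per pipeline, a simple
// proxy for build minutes (and cloud money) burned without a useful result.
func WastedMinutes(runs []Run) map[string]float64 {
	wasted := make(map[string]float64)
	for _, r := range runs {
		if r.Status != "success" {
			wasted[r.Pipeline] += r.Duration.Minutes()
		}
	}
	return wasted
}
```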

Pipem was taking shape.

After a few more hours of development, we added sections for history, logs, settings, and user management.

User

Once we finished the code to handle Google Cloud Build, we started thinking about integrating other providers as well.

GitHub, GitLab, Azure, AWS, Bitbucket, Jenkins, Buddy, CircleCI, Bitrise, Travis CI, and Heroku are just some of the providers we have already released or are currently developing.

Regardless of the tool you use for CI/CD, Pipem takes care of standardizing the data, so that you get a uniform dashboard and can analyze execution events consistently.
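Standardizing mostly means mapping each provider's payload onto one common shape. As a sketch, here is a hypothetical provider-agnostic record and a mapper for a GitHub Actions workflow run (the JSON field names come from GitHub's REST API; everything else is made up for the example):

```go
package provider

import "time"

// Execution is a hypothetical provider-agnostic record.
type Execution struct {
	Provider string
	Pipeline string
	Status   string
	Started  time.Time
	Finished time.Time
}

// workflowRun mirrors a subset of GitHub's workflow run JSON.
type workflowRun struct {
	Name       string    `json:"name"`
	Conclusion string    `json:"conclusion"` // "success", "failure", ...
	StartedAt  time.Time `json:"run_started_at"`
	UpdatedAt  time.Time `json:"updated_at"`
}

// fromGitHub maps a GitHub workflow run onto the common shape.
// Using updated_at as the finish time is an approximation.
func fromGitHub(r workflowRun) Execution {
	return Execution{
		Provider: "github",
		Pipeline: r.Name,
		Status:   r.Conclusion,
		Started:  r.StartedAt,
		Finished: r.UpdatedAt,
	}
}
```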

How Pipem works

How it works

Pipem is modular: you can choose which of four component types to install:

  • app (the frontend)
  • api (the backend)
  • providers (download data from the CI/CD tools)
  • nats-producer (publishes a message on NATS that triggers the download of new events; see the sketch after this list)
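To show how that trigger flow works, here is a minimal sketch using the official Go NATS client (github.com/nats-io/nats.go); the subject name and payload are made up for the example:

```go
package main

import (
	"log"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()

	// A provider subscribes and downloads new events whenever a message arrives.
	// The subject "pipem.sync" is hypothetical.
	if _, err := nc.Subscribe("pipem.sync", func(m *nats.Msg) {
		log.Printf("sync requested for provider %q", string(m.Data))
		// ...call the provider's API and store the new executions...
	}); err != nil {
		log.Fatal(err)
	}

	// The nats-producer periodically publishes a message like this one.
	if err := nc.Publish("pipem.sync", []byte("github")); err != nil {
		log.Fatal(err)
	}

	select {} // keep the demo process alive
}
```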

The minimum installation we did for one of our customers, who only uses GitHub Actions, is this:

  • app
  • api
  • nats-producer
  • provider-github

We decided not to install MongoDB on Kubernetes, opting instead for MongoDB Atlas.

If you want to know more about us, the documentation is at docs.pipem.io.

Do you want to try Pipem?

If you are interested in trying Pipem, go to app.pipem.io and log in with these credentials:

  • username: demo
  • password: pipem

We are always looking for feedback from users, so if you have something to tell us, you can write us at support@pipem.io.

We also have a GitHub organization (github.com/pipem-io), where we store our helm charts.

Roadmap

On GitHub you can also check our development roadmap: https://github.com/pipem-io/roadmap.

Let’s talk!

Do you want to chat with us? You can join our Discord server.
