What is Platform Engineering

IT teams, developers, department heads, and CTOs must ensure that applications and digital products are launched quickly, efficiently, and securely, and that they are always available. Often, however, the conditions for this are not in place: compliance and security policies, as well as long and complicated processes, make it difficult for IT teams to achieve these goals. This doesn't have to be the case, and it can be solved with the help of a developer self-service or Internal Developer Platform.

Simplified comparison of Platform Engineering vs Internal Developer Platform vs Developer Self-Service.

Platform Engineering vs. Internal Developer Platform vs. Developer Self-Service

What is Platform Engineering?

Platform Engineering is a new trend that aims to modernize enterprise software delivery. Platform engineering implements reusable tools and self-service capabilities with automated infrastructure workflows that improve developer experience and productivity. Initial platform engineering efforts often start with internal developer platforms (IDPs).

Platform Engineering helps make software creation and delivery faster and easier by providing unified tools, workflows, and technical foundations. It's like a well-organized toolkit and workshop for software developers to get their work done more efficiently and without unnecessary obstacles.

Webinar - Platform Engineering: AWS Account Creation with Developer Self-Service (Jira Service Management)

What is Platform Engineering used for?

The ideal development platform for one company may be completely unusable for another. Even within the same company, different development teams may have very different requirements.

The main goal of a technology platform is to increase developer productivity. At the enterprise level, such platforms promote consistency and efficiency. For developers, they provide significant relief in dealing with delivery pipelines and low-level infrastructure.

What is an Internal Developer Platform (IDP)?

Internal Developer Platforms (IDPs), also known as Developer Self-Service Platforms, are systems set up within organizations to accelerate and simplify the software development process. They provide developers with a centralized, standardized, and automated environment in which to write, test, deploy, and manage code.

IDPs provide a set of tools, features, and processes. The goal is to provide developers with a smooth self-service experience that offers the right features to help developers and others produce valuable software with as little effort as possible.

How is Platform Engineering different from Internal Developer Platform?

Platform Engineering is the overarching discipline that deals with the creation and management of software platforms. Within Platform Engineering, Internal Developer Platforms (IDPs) are developed as specific tools or platforms. These offer developers self-service and automation functions.

What is Developer Self-Service?

Developer Self-Service is a concept that enables developers to create and manage the resources and environments they need themselves, without having to wait for support from operations teams or other departments. This reduces wait times and increases efficiency and productivity: developers no longer depend on others to get what they need and can complete their work faster.

How do IDPs help with this?

Think of Internal Developer Platforms (IDPs) as a well-organized supermarket where everything is easy to find. IDPs provide all the tools and services necessary for developers to get their jobs done without much hassle. They are, so to speak, the place where self-service takes place.

The transition to platform engineering

When a company moves from standalone IDPs to full Platform Engineering, it's like making the leap from a small local store to a large shopping center. Platform Engineering offers a broader range of services and greater automation. It helps companies further streamline and scale their development processes.

By moving to Platform Engineering, companies can make their development processes more efficient, improve collaboration, and ultimately bring better products to market faster. The first step with IDPs and Developer Self-Service lays the foundation to achieve this higher level of efficiency and automation.

Challenges that can be solved with platform engineering

Scalability & Standardization

In growing companies, as well as in large and established ones, the number of IT projects and teams can grow rapidly. Traditional development practices make it difficult to scale the development environment and keep setups consistent across teams. As IT projects and applications continue to grow, differences emerge in setup and configuration, in security and compliance standards, and in the overview of which user has access to what.

Platform Engineering enables greater scalability by introducing automation and standardized processes that make it easier to handle a growing number of projects and application developments.

Efficiency and productivity

Delays in developing and building infrastructure can be caused by manual processes and dependencies between teams, increasing the time to market for applications. Platform Engineering helps overcome these challenges by providing self-service capabilities and automation that enable teams to work faster and more independently.

Security & Compliance

Security concerns are central to any development process. Platform Engineering standardizes and integrates security and compliance standards into the development process and IT infrastructure up front, enabling consistent security auditing and management.

Consistency and standardization

Different teams and projects might use different tools and practices, which can lead to inconsistencies. Platform engineering promotes standardization by providing a common platform with consistent tools and processes that can be used by everyone.

Innovation and experimentation

The ability to quickly test and iterate on new ideas is critical to a company's ability to innovate. Platform Engineering provides an environment that encourages experimentation and rapid iteration by efficiently providing the necessary infrastructure and tools.

Cost control

Optimizing and automating development processes can reduce operating costs. Platform Engineering provides the tools and practices to use resources efficiently and thus reduce the total cost of development.

Real-world example: IDP and Developer Self-Service with Jira Service Management and AWS

One way to get started with platform engineering is, for example, to use Jira Service Management as a developer self-service for setting up AWS cloud infrastructure in an automated and secure way, and to provide templates for developers and cloud engineers in a wiki.

How does it work?

Developer self-service for automatic AWS account creation with Jira Service Management

Jira Service Management Developer Self-Service

Using Jira Service Management, one of our customers provides a self-service that allows developers to set up an AWS organization account automatically and securely. This works via a simple portal with a service request form where the user provides information such as name, function, account type, security and technical contacts, and the approving manager.

The account is then created on AWS in the backend using Python scripts in a build pipeline. During setup, all security- and compliance-relevant standards are already integrated, and the JSM self-service is linked to the company's Active Directory. Thanks to the deep integration with all relevant company systems, it is possible to track exactly who has access to what, which also makes it easier to audit access and existing accounts after the fact.
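
To give a sense of what such backend automation might look like, here is a minimal, hypothetical Python sketch of the validation and mapping step: a service-request form is checked for completeness and translated into parameters for AWS Organizations' account-creation API. All field names, allowed account types, and the naming scheme are illustrative assumptions, not the customer's actual implementation.

```python
# Hypothetical sketch: validate a JSM service-request payload and build
# the parameters for an AWS Organizations CreateAccount call.
# Field names and naming conventions are invented for illustration.

REQUIRED_FIELDS = {"name", "function", "account_type", "security_contact",
                   "technical_contact", "approving_manager"}

ALLOWED_ACCOUNT_TYPES = {"sandbox", "development", "production"}

def validate_request(form: dict) -> list[str]:
    """Return a list of validation errors for a service-request form."""
    errors = [f"missing field: {f}" for f in sorted(REQUIRED_FIELDS - form.keys())]
    if form.get("account_type") not in ALLOWED_ACCOUNT_TYPES:
        errors.append("invalid account_type")
    return errors

def build_create_account_params(form: dict) -> dict:
    """Map a validated form to CreateAccount parameters (boto3-style keys)."""
    return {
        "AccountName": f"{form['account_type']}-{form['name']}",
        "Email": f"aws-{form['account_type']}-{form['name']}@example.com",
        "Tags": [
            {"Key": "ApprovedBy", "Value": form["approving_manager"]},
            {"Key": "TechnicalContact", "Value": form["technical_contact"]},
        ],
    }
```

In a real pipeline, these parameters would be passed to boto3's `organizations` client (`create_account`), and the new account would then be enrolled in the standard security and compliance baseline.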

The result: The time required to create AWS organization accounts is reduced to less than an hour (from several weeks) with the help of JSM, enabling IT teams to publish, test and update their products faster. It also provides visibility into which and how many accounts already exist and for which product, making it easier to control the cost of cloud infrastructure on AWS.

Confluence Cloud as a knowledge base for IT teams

Of course, developer self-service is only a small part of platform engineering. IT teams need concrete tools and apps tailored to their needs.

One of these tools is a knowledge base where IT teams, from developers to cloud engineers, can find relevant information, such as templates that make their work easier and faster.

Together with our customer, we built a knowledge base in Confluence that provides a wide variety of templates, courses, best practices, and important information about processes. This knowledge base enables all relevant stakeholders to obtain important information and further training at any time.

Webinar - The First Step in Platform Engineering with a Developer Self-Service and JSM

After discussing the challenges and solutions that Platform Engineering brings, it is important to put these concepts into practice and explore them further. A great opportunity to learn more about the practical application of Platform Engineering is an upcoming webinar, which will put a special focus on automating AWS infrastructure creation using Jira Service Management and Developer Self-Service. In addition, it will feature a live demo with our DevOps experts.

The journey from Internal Developer Platforms to Platform Engineering is a progressive step that helps organizations optimize their development processes. By leveraging a Developer Self-Service and overcoming software development challenges, Platform Engineering paves the way for more efficient and innovative development practices. With practical resources like the featured webinar, interested parties can dive deeper into this topic and gain valuable insights into how to implement Platform Engineering effectively.

A comparison of popular container orchestration tools: Kubernetes vs Amazon ECS vs Azure Container Apps

With the increasing adoption of new technologies and the shift to cloud-native environments, container orchestration has become an indispensable tool for deploying, scaling and managing containerized applications. Kubernetes, Amazon ECS and Azure Container Apps have emerged as leaders among the many options available. But with so many options, how can you figure out which one is best for your business?

In this article, we'll take an in-depth look at the features and benefits of Kubernetes, Amazon ECS, and Azure Container Apps and compare them side-by-side so you can make an informed decision. We'll address real-world use cases and explore the pros and cons of each option so you can choose the tool that best meets your organization's needs. By the end of this article, you'll have a clear understanding of the benefits and limitations of each tool and be able to make a decision that aligns with your business goals.

Let's get started!

Overview: Container Orchestration Tools

Explanation of the common tools

While Kubernetes is the most widely used container orchestration tool, it is not the only option worth considering. The most popular choices are:

  • Amazon ECS is a fully managed container orchestration service that simplifies the deployment, management, and scaling of Docker containers.
  • Azure Container Apps is a fully managed environment that allows you to run microservices and containerized apps on a serverless platform.
  • Kubernetes is an open source platform that automates the deployment, scaling and management of containerized applications.


Let's start with an overview of Kubernetes. Kubernetes was developed by Google and is now maintained by the Cloud Native Computing Foundation. Kubernetes is an open source platform that automates the deployment, scaling, and management of container applications. Its flexibility and scalability make it a popular choice for organizations of all sizes, from small startups to large enterprises.

Why is Kubernetes so popular?

Kubernetes is widely considered the industry standard for container orchestration, and for good reason. It offers a wide range of features that make it ideal for large-scale production deployments.

  • Automatic scaling: Kubernetes can automatically increase or decrease the number of replicas of a containerized application based on resource utilization.
  • Self-healing: Kubernetes can automatically replace or reschedule containers that fail.
  • Service discovery and load balancing: Kubernetes can automatically discover services and balance traffic between them.
  • Rollbacks and rollouts: With Kubernetes, you can easily revert to a previous version of your application or do a gradual rollout of updates.
  • High availability: Kubernetes can automatically schedule and manage application replica availability.
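
To make the automatic-scaling bullet concrete, here is a simplified Python sketch of the rule Kubernetes' Horizontal Pod Autoscaler applies; the real controller adds tolerances, stabilization windows, and per-metric logic on top of this.

```python
import math

def desired_replicas(current_replicas: int, current_utilization: float,
                     target_utilization: float,
                     min_replicas: int = 1, max_replicas: int = 10) -> int:
    """Scale the replica count in proportion to how far the observed
    utilization is from the target, clamped to the configured bounds."""
    desired = math.ceil(current_replicas * current_utilization / target_utilization)
    return max(min_replicas, min(max_replicas, desired))
```

For example, 4 replicas at 90% CPU with a 60% target yields 6 replicas, while the same workload at 30% utilization scales down to 2.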

The Kubernetes ecosystem also extends to Internet-of-Things (IoT) deployments. There are lightweight Kubernetes distributions (e.g. K3s, KubeEdge, MicroK8s) that allow Kubernetes to be installed on telecom devices, satellites, or even a Boston Dynamics robot dog.

The main advantages of Kubernetes

One of the key benefits of Kubernetes is its ability to manage many nodes and containers, making it particularly suitable for organizations with high scaling requirements. Many of the largest and most complex applications in production today, such as those from Google, Uber, and Shopify, are powered by Kubernetes.

Another great advantage of Kubernetes is its wide ecosystem of third-party extensions and tools. These integrate easily with other services such as monitoring and logging platforms, CI/CD pipelines, and more. This flexibility allows organizations to develop and manage their applications in the way that best suits their needs.

Disadvantages of Kubernetes

But Kubernetes is not without its drawbacks. One of the biggest criticisms of Kubernetes is that it can be complex to set up and manage, especially for smaller companies without dedicated DevOps teams. In addition, some users report that Kubernetes can be resource intensive, which can be a problem for organizations with limited resources.

So is Kubernetes the right choice for your business?

If you're looking for a highly scalable, flexible, and feature-rich platform with a large ecosystem of third-party extensions, Kubernetes may be the perfect choice. However, if you are a smaller organization with limited resources and little experience with container orchestration, you should consider other options.

Managed Kubernetes Services

Want to take advantage of the scalability and flexibility of Kubernetes, but don't have the resources or experience to handle the complexity? There are managed Kubernetes services like GKE, EKS and AKS that can help you overcome that.

Kubernetes offerings in the cloud significantly lower the barrier to entry for Kubernetes adoption because of lower installation and maintenance costs. However, this does not mean that there are no costs at all, as most offerings have a shared responsibility model. For example, upgrades to Kubernetes clusters are typically performed by the owner of a Kubernetes cluster, not the cloud provider. Version upgrades require planning and an appropriate testing framework for your applications to ensure a smooth transition.

Use cases

Kubernetes is used by many of the world's largest companies, including Google, Facebook and Uber. It is well suited for large-scale, production-ready deployments.

  • Google: Google uses Kubernetes to manage the delivery of its search and advertising services.
  • Netflix: Netflix uses Kubernetes to deploy and manage its microservices.
  • IBM: IBM uses Kubernetes to manage its cloud services.

Comparison with other orchestration tools

While Kubernetes is widely considered the industry standard for container orchestration, it may not be the best solution for every organization. For example, if you have a small deployment or a limited budget, you may be better off with a simpler tool like Amazon ECS or even a simple container engine installation. For large, production-ready deployments, however, Kubernetes is hard to beat.

Advantages and disadvantages of Kubernetes as a container orchestration tool

Advantages:

  • Highly scalable and flexible
  • Large ecosystem of third-party extensions
  • Widespread use in production by large companies
  • Managed Kubernetes services available to manage complexity
  • Can be installed on IoT devices

Disadvantages:

  • Can be complex to set up and manage
  • Resource-intensive
  • Steep learning curve for smaller organizations without their own DevOps teams

Amazon ECS: A powerful and scalable container management service

Amazon Elastic Container Service (ECS) is a highly scalable, high-performance container management service provided by Amazon Web Services (AWS). It allows you to run and manage Docker applications on a cluster of Amazon EC2 instances and provides a variety of features to help you optimize your container-based applications.

Features and benefits

Amazon ECS is characterized by a rich set of features and tight integration with other AWS services. It works hand-in-hand with the AWS CLI and Management Console, making it easy to launch, scale, and monitor your containerized applications.

ECS is fully managed by AWS, so you don't have to worry about managing the underlying infrastructure. It builds on the robustness of AWS and is compatible with a wide range of AWS tools and services.

Why is Amazon ECS so popular?

Amazon ECS is popular for a number of reasons, making it suitable for a variety of deployment scenarios:

  • Powerful and easy to use: Amazon ECS integrates well with the AWS CLI and AWS Management Console and provides a seamless experience for developers already using AWS.
  • Scalability: ECS is designed to easily handle large, enterprise-wide deployments and automatically scales to meet the needs of your application.
  • High availability: ECS ensures high availability by enabling deployment in multiple regions, providing redundancy, and maintaining application availability.
  • Cost-effective: With ECS, you only pay for the AWS resources you use (e.g. EC2 instances, EBS volumes) and there are no additional upfront or licensing costs.

Use cases

Amazon ECS is suitable for large deployments and for enterprises looking for a fully managed container orchestration service.

  • Large-scale deployment: Due to its high scalability, ECS is an excellent choice for large-scale deployment of containerized applications.
  • Fully managed service: For organizations that do not want to manage their infrastructure themselves, ECS offers a fully managed service where the underlying servers and their configuration are managed by AWS.

Azure Container Apps: A managed and serverless container service

Azure Container Apps is a serverless container service provided by Microsoft Azure. It allows you to easily build, deploy, and scale containerized apps without having to worry about the underlying infrastructure.

Features and benefits

Azure Container Apps offers simplicity and integration with Azure services. The intuitive user interface and good integration with the Azure CLI simplify the management of your containerized apps.

With Azure Container Apps, the infrastructure is fully managed by Microsoft Azure. It is also based on Azure's robust architecture, which ensures seamless interoperability with other Azure services.

Why is Azure Container Apps so popular?

Azure Container Apps offers a number of benefits that are suitable for a wide range of deployments:

  • Ease of use: Azure Container Apps is integrated with the Azure CLI and Azure Portal, providing a familiar interface for developers already using Azure.
  • Serverless: Azure Container Apps abstracts the underlying infrastructure, giving developers more freedom to focus on programming and less on operations.
  • Highly scalable: Azure Container Apps can scale automatically to meet the needs of your application, making it well suited for applications with fluctuating demand.
  • Cost-effective: With Azure Container Apps, you are only charged for the resources you use, and there are no additional infrastructure or licensing costs.

Use cases

Azure Container Apps is great for applications that require scalability and a serverless deployment model.

  • Scalable applications: Because Azure Container Apps automatically scales, it is ideal for applications that need to handle variable workloads.
  • Serverless model: Azure Container Apps offers a serverless deployment model for organizations that prefer not to manage servers and want to focus more on application development.

Amazon ECS vs. Azure Container Apps vs. Kubernetes

Both Amazon ECS and Azure Container Apps are strong contenders in the container orchestration tool space. They offer robust, fully managed services that abstract the underlying infrastructure so developers can focus on their application code. However, they also cater to specific needs and ecosystems.

Amazon ECS is deeply integrated into the AWS ecosystem and is designed to easily handle large, enterprise-scale deployments. Azure Container Apps, on the other hand, operates on a serverless model and offers excellent scalability features, making it well suited for applications with fluctuating demand.

Here is a table for comparison to illustrate these points:

|                         | Amazon ECS                                      | Azure Container Apps                             | Kubernetes                                                      |
|-------------------------|-------------------------------------------------|--------------------------------------------------|-----------------------------------------------------------------|
| Ecosystem compatibility | Deep integration with AWS services              | Deep integration with Azure services             | Widely compatible with many cloud providers                     |
| Deployment model        | Managed service on EC2 instances                | Serverless                                       | Self-managed and hosted options available                       |
| Scalability             | Designed for large-scale implementations        | Excellent for variable demand (automatic scaling)| Highly scalable with manual configuration                       |
| Management              | Fully managed by AWS                            | Fully managed by Microsoft Azure                 | Manual, with complexity                                         |
| Costs                   | Pay for the AWS resources used                  | Pay for resources used, serverless model         | Depends on hosting environment; can be cost-effective if self-managed |
| High availability       | Cross-regional deployments for high availability| Managed high availability                        | Manual setup required for high availability                     |

When choosing the right container orchestration tool for your organization, it's important to carefully evaluate your specific needs and compare them to the features and benefits of each tool.

Are you looking for a tool that can handle different workloads? Or are you looking for a simple and flexible tool that is easy to manage? Or are you looking for a tool that focuses on multi-cluster management and security?

Check out these options and see which one best fits your needs.


In this article, we've explored the features and benefits of Kubernetes, Amazon ECS, Azure Container Apps, and other popular container orchestration tools and compared them side-by-side to help you make an informed decision. We also examined real-world use cases and reviewed the pros and cons of each option, and found that Kubernetes is widely considered the industry standard for container orchestration and is well suited for large-scale, production-ready deployments. In the end, each container orchestration tool has its own strengths and weaknesses.

What is Infrastructure as Code (IaC)?

Infrastructure as Code describes the managing and provisioning of computer data centers through machine-readable definition files (e.g. YAML config files) instead of physical hardware configuration or interactive configuration tools.

The term "Infrastructure as Code" was popularized around 2009 by practitioners such as Andrew Clay Shafer and Patrick Debois, as part of the emerging DevOps movement. Since then, many companies have adopted the concept, and today it is a best practice for infrastructure management.

Infrastructure as code (IaC) compared to traditional infrastructure provisioning

Provisioning and managing data centers has traditionally been time-consuming and error-prone, often relying on manual configuration of servers and networking devices. This can lead to configuration drift, where the actual state of the infrastructure diverges from the intended one. IaC helps to avoid these problems by providing a repeatable and consistent way to provision and manage infrastructure. It also makes it easier to audit and track changes, and to roll back changes if necessary.
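
The notion of configuration drift can be illustrated with a small, tool-agnostic Python sketch: the desired state is declared as data, and a diff against the observed state reveals what has drifted. The resource and attribute names here are invented for illustration.

```python
# Illustrative sketch, not tied to any specific IaC tool:
# represent infrastructure declaratively and detect configuration drift
# by comparing the desired state with the observed state.

DESIRED_STATE = {
    "web-server": {"instance_type": "t3.medium", "port": 443},
    "database":   {"instance_type": "t3.large",  "port": 5432},
}

def detect_drift(desired: dict, actual: dict) -> dict:
    """Return, per resource, the attributes whose observed value
    diverges from the declared (desired) value."""
    drift = {}
    for name, spec in desired.items():
        observed = actual.get(name, {})
        changed = {k: (v, observed.get(k)) for k, v in spec.items()
                   if observed.get(k) != v}
        if changed:
            drift[name] = changed
    return drift
```

IaC tools such as Terraform perform essentially this desired-versus-actual comparison during their plan step, before applying any changes.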

When should you consider using IaC to provision infrastructure?

IaC is especially well suited to cloud environments, where infrastructure is provisioned and changed frequently through automation. You can also use it in on-premises data centers, although there it may take more effort to set up and maintain.

Infrastructure as code can be beneficial if you

  • use dynamic or complex environments,
  • change your infrastructure repeatedly, and
  • have a hard time tracking and managing the changes.

What are the benefits of using IaC?

Reduced time and cost

IaC can help to reduce the time and cost associated with provisioning and managing infrastructure.

Improved consistency and repeatability

IaC can improve the consistency and repeatability of infrastructure provisioning and management processes.

Increased agility

IaC can increase the agility of an organization by making it easier to provision and manage infrastructure in response to changing requirements.

Improved auditability and traceability

IaC can help to improve the auditability and traceability of changes to infrastructure.

Reduced risk

By providing a more consistent and repeatable way to provision and manage infrastructure, IaC can help to reduce the risk of errors and configuration drift.

What are the challenges in using IaC?

You need to consider a few challenges when using IaC, including:

  • Complexity: IaC can add another layer to an organization's infrastructure, which can make problems harder to understand and troubleshoot.
  • Security: IaC can introduce new security risks, for example when credentials or sensitive configuration end up in definition files or version control.
  • Tooling and processes: IaC requires you to adopt new or unfamiliar tooling and processes.

How do you get started with IaC?

If you're interested in using IaC, there are a few things you need to do to get started:

  • Choose an IaC tool. There are many, each with its own strengths and weaknesses; pick one that is well suited to your organization's needs.
  • Define your infrastructure using a declarative or imperative approach.
  • Provision your infrastructure using your chosen IaC tool.
  • Manage your infrastructure using your chosen IaC tool.
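
The declarative and imperative approaches mentioned in step two can be contrasted in a short, hypothetical Python sketch (the resource names and pseudo-CLI commands are invented for illustration): declaratively you state the target state, imperatively you list the commands, and a declarative tool derives the commands by diffing the target against what already exists.

```python
# Hypothetical sketch of declarative vs. imperative infrastructure definitions.
# Declarative: describe the target state; the tool works out the steps.
# Imperative: list the steps to execute, in order.

declarative_spec = {
    "resource": "vm",
    "name": "build-agent",
    "count": 2,                 # "I want two of these to exist"
    "image": "ubuntu-22.04",
}

imperative_steps = [
    "create_vm build-agent-1 --image ubuntu-22.04",
    "create_vm build-agent-2 --image ubuntu-22.04",
]

def plan(spec: dict, existing: list[str]) -> list[str]:
    """A declarative tool derives imperative steps by diffing the
    desired count against the resources that already exist."""
    wanted = [f"{spec['name']}-{i}" for i in range(1, spec["count"] + 1)]
    return [f"create_vm {n} --image {spec['image']}"
            for n in wanted if n not in existing]
```

Because the plan only contains what is missing, re-running it against existing infrastructure is safe: this idempotence is a key reason declarative IaC has become the dominant style.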

To get started with DevOps (or to improve your DevOps maturity), read this: DevOps: How to get started successfully

Tools you can use for Infrastructure as code (IaC tools)

  • Configuration management tools: Use Puppet, Chef, and Ansible to manage the configuration of servers and other infrastructure components.
  • Infrastructure provisioning tools: Use Terraform and CloudFormation to provision and manage infrastructure resources.
  • Continuous integration and delivery tools: Use Jenkins and Travis CI to automate the build, testing, and deployment of infrastructure.
  • Container orchestration tools: Use Kubernetes and Docker Swarm to manage and orchestrate containers.

IaC is part of the bigger picture: CALMS and DevSecOps

Infrastructure as code is one piece of automation within the DevOps cycle. Beyond provisioning infrastructure by code, a core focus of DevOps is to increase efficiency and effectiveness by automating key processes in the software development life cycle (SDLC); in the CALMS model, this corresponds to the Automation pillar. This allows for faster feedback, shorter lead times, and more frequent deployments.

So, to leverage IaC, a fundamental level of DevOps maturity is essential.

Learn more about CALMS in our guide: CALMS Framework


Infrastructure as code (IaC) is a term used to describe managing and provisioning computer data centers through machine-readable definition files, rather than physical hardware configuration or interactive configuration tools. Many companies have adopted the approach, and today it is a best practice for managing infrastructure.

IaC helps to reduce the time and cost associated with provisioning and managing infrastructure. Additionally, it improves the consistency and repeatability of infrastructure provisioning and management processes, as well as increases the agility of an organization.

Atlassian Cloud vs Data Center

Atlassian Data Center vs Cloud - What to choose and when

Atlassian Cloud or Data Center? Both deployment methods have their advantages and disadvantages. What you choose mainly depends on your requirements, the number of users you need on your systems, the security specifications of your IT department, and the long-term strategy of your organization.

Before we further discuss when to choose Data Center or Cloud, let’s first tackle the various pros and cons of each system.

Atlassian Data Center

Atlassian Data Center is a self-hosted way to run your Jira or Confluence systems. Data Center can route requests across several self-hosted nodes; if one node fails, the others handle the load. A clustered setup is usually worth the effort once an instance has 500 or more users, though occasionally it pays off with as few as 250. Data Center can save you money in the long run, especially if your company is growing quickly or plans to grow.

Using Atlassian Data Center, you take full control over your IT infrastructure. This way you can:

  • Reduce downtime to a self-controlled minimum
  • Scale the required infrastructure to your requirements (on demand)
  • Retain full control over data protection and data security
  • Control the updates for your system
  • Support an unlimited number of users

Data Center is independent of the number of users your organization needs. Atlassian Cloud, on the other hand, is currently limited to 20,000 users. (Atlassian is currently testing support for up to 50,000 users.)

Atlassian Cloud

The Cloud is here, and it won't go away. Purely on-premises solutions are no longer an option, at least for most organizations. The question remains, though: do you want to follow the trend and be an early adopter, or will you miss the opportunity to adopt future-proof technologies?

Cloud infrastructure offers many benefits to organizations throughout the world. Additionally, Atlassian's constant work on optimizing its products is paying off: concerns regarding security and compliance are being taken seriously, and appropriate steps are being taken to close the gaps. Let's take a look at exactly why you should consider the cloud option for your business:

  • High availability: The Atlassian Cloud comes with a guaranteed availability of 99.95%.
  • Faster configuration: Set up and configure new instances within just a few minutes.
  • Automatic updates: The Atlassian Cloud receives updates as soon as they're available. No more manual updates.
  • Increased productivity: Leverage modern tools, approaches, and features to your advantage and save valuable resources (time).
  • Reduced management costs: The days of physical hardware, manual maintenance, sunk costs, and upgrades are over. Moving to the cloud saves money by removing infrastructure costs from the equation.
  • Cost-effectiveness: Pay only for what you use, when you use it.

Learn more about hosting Atlassian apps via Data Center or Cloud here.

Atlassian Cloud or Data Center - What to choose and when

Companies choose Data Center when their Atlassian applications have become "mission critical." Ask yourself what the cost of a system outage would be and how valuable the Atlassian applications are to your business. If your entire development team could not work due to a system outage, it would be especially detrimental to your business: you would be paying for work that cannot get done.

In addition, certain organizations have an enhanced need for data control and privacy. Hosting your own Data Center on your own servers means you have complete control over the upkeep and maintenance of the servers, but also have full control over the data at your disposal. Companies with extensive security needs, like banks or health insurers, may find this to be a crucial factor.

Cloud is a long-term investment that allows companies to scale, improve employee productivity, increase speed, and increase innovation.

With both models - cloud and Data Center - you gain reliability, increased productivity, and cost savings. Server users who wish to continue to maintain their own IT infrastructure should consider switching to Data Center.

As you can see, both models have their advantages. In the end, choosing between Cloud and Data Center comes down to your requirements.

Need help evaluating the different possibilities? Our Atlassian expert will help you choose the right solution for your requirements and needs.

Get in touch with us today.

Container8 DevOps as a Service Platform

Keynote Container8 - The all-in-one DevOps as a Service Platform

At the end of last year, we launched Container8 - a DevOps as a Service platform. DevOps can affect an entire organization, introducing a completely new culture and form of collaboration between teams. It also standardizes tools, processes and approaches across teams and changes the way IT teams (and even business teams) tackle digital projects. Container8 takes your DevOps culture to the next level by enabling software teams to release faster, more often, and autonomously, while greatly reducing the complexity of a self-managed DevOps platform. This allows an entire organization to bring digital products to market faster and in a more streamlined way.

Serview Festival - An event for IT champions

In November 2021, we not only launched Container8 but also had the opportunity to directly present it to a large audience of IT and DevOps professionals as part of a Keynote at Serview Festival.

After two talks by Dr. Dominic Lindner and Mario Willecke on Wednesday, it was time for the keynote: DevOps-as-a-Service-Platform by Benjamin Nothdurft from codecentric AG.

In these 45 minutes, he walked through an incident that occurred at one of his global clients and the steps taken to fix it. He talked about three different approaches to fixing an incident, and why he chose to proceed with the DevOps as a Service platform (Container8).

To learn more about the keynote and how to approach fixing an incident in IT, have a look at the keynote itself down below.

The evolution of DevOps to an All-in-one DevOps as a Service platform

DevOps goes back to the year 2000, when the first agile methods of working on digital projects were introduced. The term DevOps itself was coined around 2007 by the Belgian IT professional Patrick Debois, who had become frustrated by the friction between development and operations teams. By 2008 the first Velocity event was held, and Debois raised awareness of DevOps among a worldwide audience. DevOps turned out to be more than a one-time thing: it became a guideline and method everyone wanted to follow.

Yet as humans tend to hold on to old and traditional approaches, DevOps still hasn't gained a foothold in all organizations. Doing things as they've always been done is comfortable, and breaking habits is hard. Teams have always had a hard time talking to others outside of their scope (this isn't just an IT thing, but common in business in general).

But DevOps is more than just breaking down silos and bringing every stakeholder on board. It's about providing the right toolset for everybody to use. It's about implementing a culture of trust; automation that simplifies work and speeds up recurring tasks; learning from past mistakes; collecting insightful data by measuring every aspect of the workflow (from development to provisioning, release and customer experience); and sharing information. This is how the CALMS framework was born.

Container8 takes the DevOps approach to the next level.

What is Container8?

Container8 is an unblocking, low dependency, and highly automated DevOps as a Service Platform, integrating your existing tools or providing a managed industry-standard toolset to make DevOps easy.

It provides real value through best practices, a great onboarding experience, and usability for automation, security, transparency, and collaboration in a psychologically safe environment. In such an environment, people are not afraid to bring their full selves to work, and bad news is a learning opportunity rather than a reason for scolding or blaming people.

Container8 enables you to develop, test, and deploy with ease by using a high-performance toolchain that's always available, up-to-date, and easy to use.

Release more often

Save valuable time at your next release by using a fully automated product pipeline customized to your needs. A low MTTR and small, low-complexity releases reduce the risk in your product pipeline.

Keep a complex microservice environment secure

Reduce costs of your managed toolchains with the included, proactive tool maintenance and on-demand support.

Bring different departments together

Easily bring teams of different departments together and collaborate on software releases by automating recurring tasks and workflows.

Enable real agility by delivering constant results in large, complex environments.

The Service Platform helps you automate repetitive tasks and processes and creates reliable systems to your advantage. It integrates frictionlessly into your existing DevOps culture and the CALMS model.

Want to bring DevOps to the next level? Get a deep understanding of Container8 here.

AWS Consulting Partner

XALT becomes AWS Consulting Partner

Since 2016, we have been helping and accompanying IT teams on their way to implementing DevOps methods, cloud concepts and agile workflows. At the beginning of the year, XALT received the Atlassian Gold Partner Award. And at the end of July 2021, the time had finally come - we were named an AWS Consulting Partner.

What do we at XALT intend to do with this certification? And how will we use this new capability to help companies successfully implement DevOps or migrate their software landscape to the cloud in the future? You can read about it in this article.

Future-proof companies need a fail-safe IT infrastructure

The cloud migration of an existing software landscape has never been more relevant than it is today. Performance, downtime and availability are important factors that have a direct impact on the business viability of a company and on customer satisfaction.

The demands of today's customers and users are far more diverse than they were just a few years ago. With the ubiquitous availability of the Internet and its direct impact on the workplace, it has become a necessity that productive systems are permanently available from everywhere.

By leveraging AWS cloud computing solutions, we have been able to keep outages to a minimum for our customers and ensure availability of 99.95% and above. With solutions like blue-green deployments, the use of Docker, Kubernetes and Atlassian apps, and the implementation of a DevOps culture, we also ensure that teams from different departments can work together more easily and efficiently - with a direct and, above all, positive impact on the company's goals.

XALT's plans for 2021 and 2022 at a glance

Along with the new AWS partnership, XALT is tackling several issues. Here's what's next for XALT.

  • Migrating existing solutions, Atlassian plugins and apps to the cloud to make them fit for the future and actively support our existing customers and partners
  • Further development of our existing apps, development of new approaches, and provision of Jira Sync as a stand-alone 3rd-party integration for Jira Software.
  • Growth in the DACH region and the USA.
  • Expansion of the teams in the areas of infrastructure, DevOps, development, sales and marketing, distributed across our locations in Munich and Leipzig. 

Cloud computing provides more flexibility

For many companies, 2020 and 2021 will go down in corporate history as years of change and reorganization - the years when the potential of cloud computing became clear. With home offices and remote working, a smoothly running software environment with digital processes and tools has become key to a company's agility and long-term success.

In this sense, cloud computing based on AWS plays a crucial role. 

"We have to internalize that a company depends not only on its employees but also on the permanent availability of software. This includes all the tools a company uses on a daily basis. Only when companies guarantee this can all users concentrate fully on their work and be truly successful." - Philipp G.

The migration of software to the cloud - such as Jira and Confluence from Atlassian - is currently still in its infancy at many German companies. The great advantage of cloud migrations of Atlassian tools is that business processes can also be digitized and automated. This results in further savings and positive effects on the company's goals.

Become part of the team!

Work with us on exciting projects and solutions for current, important IT questions in the areas of Atlassian, DevOps and Cloud technologies.

Digital transformation and cloud computing require further thought

The pandemic has changed many things. Remote work and home office are now inevitably linked to the modern working world. Every company must decide for itself how it lives and shapes this new reality.

In terms of digital transformation and cloud computing, this means implementing new methods and building state-of-the-art IT infrastructure. It is time to look ahead and take charge of what already defines the 21st century. It is time to prepare for further change and to provide all employees - everywhere and at all times - with the tools they need every day for their value-adding work.

→ Success Story Cloud: Find out how Weltbild improved its service and infrastructure stability by migrating its e-commerce shop to the cloud.

Atlassian Hosting

Atlassian Hosting: Technical Approach

There are several options for Atlassian hosting: Server, Data Center, or directly in the Atlassian Cloud. One of these options is to host the Atlassian software on your own server or with a cloud provider such as AWS or Azure.

In this short article you will find all the information and requirements for hosting Atlassian software.

Atlassian Hosting: Requirements

  • An official domain with a DNS server where records can be added and resolved.
  • DNS records pointing to the external IP of the server on which the Atlassian application is to run.
  • If Route 53 is used, we can use certbot and Let's Encrypt to generate wildcard certificates for the entire domain.
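
As a sketch, such a wildcard certificate could be requested with certbot's Route 53 DNS plugin (assuming the certbot-dns-route53 plugin is installed and AWS credentials with write access to the hosted zone are configured; the domain and e-mail address are placeholders):

```shell
# Request a wildcard certificate via a DNS-01 challenge against Route 53.
# Assumes certbot and the certbot-dns-route53 plugin are installed and
# AWS credentials for the hosted zone are available in the environment.
certbot certonly \
  --dns-route53 \
  -d 'example.com' \
  -d '*.example.com' \
  --non-interactive --agree-tos -m admin@example.com
```

Certbot then stores the certificate and key below /etc/letsencrypt/live/example.com/.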

Application server

Server provisioning

As the application server we usually use Ubuntu, configured via Ansible. Ansible performs some basic configuration and installs the needed software, such as Docker, on the host. It also rolls out some Docker applications on the host, like the reverse proxy.
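
A minimal Ansible sketch of this provisioning step might look as follows (host group, package and container names are illustrative, not our actual playbook):

```yaml
# playbook.yml - illustrative provisioning of an Ubuntu application server
- hosts: app_servers
  become: true
  tasks:
    - name: Install Docker from the Ubuntu repositories
      ansible.builtin.apt:
        name: docker.io
        state: present
        update_cache: true

    - name: Ensure the Docker daemon is running and enabled
      ansible.builtin.service:
        name: docker
        state: started
        enabled: true

    - name: Roll out the reverse proxy container
      community.docker.docker_container:
        name: reverse-proxy
        image: xalt/nginx
        ports: ["80:80", "443:443"]
        volumes:
          - /var/run/docker.sock:/tmp/docker.sock:ro
        restart_policy: always
```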

Reverse proxy

We use the Docker image xalt/nginx as a reverse proxy. It attaches to the Docker socket and is thus able to read which Docker containers are running on the host. If an application has certain parameters set, the Nginx configuration is automatically changed and reloaded: upstream and server configurations are created for the specified virtual hostname and ports. If the hostname matches an available SSL certificate, an SSL listener is also configured for that application. The reverse proxy serves content over ports 80 (HTTP) and 443 (HTTPS), which are mounted on host ports 80 and 443.

Managed Atlassian Hosting

Reduce operational complexity and simplify the operation of your Atlassian products such as Confluence, Bitbucket and Jira

License management

A side-car container, the Let's Encrypt helper, is provided. This container also has read access to the Docker socket; when a container provides Let's Encrypt parameters, it initiates a Let's Encrypt certificate challenge and stores the certificates so that the reverse proxy container can use them for HTTPS connections.

Docker application

All Docker applications are described in a docker-compose.yml file provided by Ansible. JIRA/Confluence and the PostgreSQL database have their home directories in separate folders at the same level as the docker-compose.yml. This way, the application configuration and the application data live in the same place and can be easily maintained.

JIRA/Confluence

This Docker container runs Tomcat with the JIRA application. The Tomcat connector must be configured correctly via the container's environment variables; the same applies to application monitoring via New Relic. The heap parameters can be configured in the same way, and of course the reverse proxy parameters have to be set, as well as the Let's Encrypt parameters if required.

For test systems we have implemented a few more features that allow the recovery of persistent home data via SSH rsync and the modification of the database with Liquibase. For example, the base URL or the application links can be changed.

PostgreSQL database

We typically use a PostgreSQL container and spawn it in a separate, application-specific Docker network under the DNS name "db", which is only accessible to the containers specified in this docker-compose.yml file. Username, password and database are specified as environment parameters of the container. The database for JIRA/ Confluence can also be MySQL or OracleSQL, but we chose PostgreSQL for compatibility reasons.
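
Put together, the layout described above might look like the following docker-compose.yml sketch (image tags, environment variable names and the proxy parameters are illustrative and may differ from our actual template):

```yaml
# docker-compose.yml - illustrative layout, not the actual Ansible template
services:
  confluence:
    image: atlassian/confluence-server
    environment:
      VIRTUAL_HOST: confluence.example.com  # assumed reverse proxy parameter
      JVM_MINIMUM_MEMORY: 2g                # heap settings via environment
      JVM_MAXIMUM_MEMORY: 4g
    volumes:
      - ./confluence-home:/var/atlassian/application-data/confluence
    networks: [app]

  db:
    image: postgres:13
    environment:
      POSTGRES_USER: confluence
      POSTGRES_PASSWORD: change-me
      POSTGRES_DB: confluence
    volumes:
      - ./postgres-data:/var/lib/postgresql/data
    networks: [app]  # only reachable as "db" inside this network

networks:
  app:
```

The home directories (./confluence-home, ./postgres-data) sit next to the docker-compose.yml, matching the directory layout described above.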

Backup container

This container essentially runs a cron job that shuts down the JIRA/Confluence and PostgreSQL containers and transfers the persistent home folders to the backup server via rsync. It is also configured by several environment parameters, such as the cron job schedule and the backup server name, and needs some mount points, such as the Docker socket, to shut down and start Docker containers from inside a Docker container. Other mounts are needed to locate the data to be backed up.

Web request

When a Docker application is accessed, the user usually enters a DNS name in the browser. The browser attempts to resolve the IP with a DNS query to Route 53; the response is usually an A record pointing to the server running the application. The browser establishes a connection to port 80 of the application server. There, the Docker proxy forwards the request to the Nginx reverse proxy, which redirects to HTTPS if a valid certificate exists for the application's virtual hostname. When the browser follows this redirect, the request is accepted on port 443 of the application server. It goes to the reverse proxy, which forwards it to the configured upstream, the Atlassian application, and delivers the response to the requesting browser, which loads further resources and finally renders the web page.

Atlassian Hosting: Backup Server

This host is typically provisioned with large storage to hold the application data of multiple Docker applications. It is also provisioned via Ansible.
Docker is not installed here by default; instead, rsync and rsnapshot are installed, the most important components of our backup concept. Rsync is used for data transport from host to host, and also by rsnapshot itself.
The usual retention configuration allows the storage of 7 daily, 4 weekly and 12 monthly backups. Rsnapshot uses hard links to save space whenever a file has not changed since the previous day.
This way, the backups consume about 2.5 times the original application size (the rsync copy of the data from the application server to the backup server takes 1x, and the rsnapshot copies with their deltas take another 1.5x), while restores are possible going back up to 12 months.
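
With a standard rsnapshot setup, the retention scheme described above corresponds roughly to the following rsnapshot.conf excerpt (fields in this file must be separated by tabs; the snapshot path is a placeholder):

```
# /etc/rsnapshot.conf (excerpt) - illustrative retention configuration
snapshot_root	/backup/snapshots/
retain	daily	7
retain	weekly	4
retain	monthly	12
```

rsnapshot hard-links unchanged files between snapshots, which is where the space savings described above come from.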

More about Managed Atlassian Hosting

The whole concept requires generating an SSH key on the application server and storing the public key in the authorized_keys file of the backup server's root user:

  1. Synchronize the data (rsync) from the backup container (application server) to the backup server.
  2. Rsnapshot is triggered by cronjob logic to run the various retention configurations once per day, week, and month.
  3. A Docker application is started with the correctly specified backup parameters and will rsync this data from the backup server before the application starts.
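
As a sketch, the key setup and the two rsync directions from the steps above could look like this (hostnames and paths are placeholders, and the exact flags used by our backup container may differ):

```shell
# On the application server: generate a key pair and authorize it
# for the backup server's root user.
ssh-keygen -t ed25519 -f ~/.ssh/id_backup -N ''
ssh-copy-id -i ~/.ssh/id_backup.pub root@backup.example.com

# Step 1: push the application data to the backup server.
rsync -az --delete -e 'ssh -i ~/.ssh/id_backup' \
  /opt/confluence/confluence-home/ \
  root@backup.example.com:/backup/confluence-home/

# Step 3: before the application starts, pull the most recent
# rsnapshot copy (daily.0) back from the backup server.
rsync -az -e 'ssh -i ~/.ssh/id_backup' \
  root@backup.example.com:/backup/snapshots/daily.0/confluence-home/ \
  /opt/confluence/confluence-home/
```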

The data to be restored can be configured in the Docker container of JIRA/Confluence and PostgreSQL. If the most recent backup is needed, the destination folders of the rsync backup can be specified. However, if data from an older backup is to be restored, we need the correct rsnapshot path here.
