
Webinar Platform Engineering: AWS account setup with JSM

In our webinar "Platform Engineering - Build AWS Accounts in Just One Hour with JSM Cloud", our DevOps ambassadors Chris and Ivan, together with Atlassian platform expert Marcin from BSH, introduced platform engineering and how it is transforming cloud infrastructure management for development teams. We discussed what platform engineering is, how to get started with it, which obstacles organizations may encounter, and how to overcome them with developer self-service. We also showed how Jira Service Management can serve as a self-service for developers to create AWS accounts in just one hour.

Understanding platform engineering

"Platform Engineering is a foundation of self-service APIs, tools, services, knowledge and support designed as a compelling internal product," said Ivan Ermilov during the webinar. This concept is at the heart of internal developer platforms (IDPs), which aim to streamline operations and support development teams. By simplifying access to cloud resources, platform engineering promotes a more efficient and autonomous working environment.

Find out more about platform engineering in our article "What is Platform Engineering".

The decisive advantages

One of the key takeaways from the webinar was the numerous benefits that platform engineering brings. Not only does it speed up the delivery of features, but it also significantly reduces manual tasks for developers. The discussion highlighted how teams gain independence, leading to a more agile and responsive IT infrastructure.

Overcoming traditional challenges

Traditional methods of managing cloud infrastructure often lead to project delays and security compliance issues. Ivan pointed out that "a common scenario I've personally encountered in my career is that deploying infrastructure requires a cascade of approvals. The whole process can take weeks. One specific example we encounter in our customer environment is that AWS account provisioning can take weeks to complete. One reason for this is usually that the infrastructure landscape is simply inefficient and not standardized." By using platform engineering, companies can overcome these hurdles and pave the way for a more streamlined and secure process.

Success story from the field: BSH's journey

Marcin Guz from BSH told the story of the company's transformation and illustrated the transition to automated cloud infrastructure management. The practical aspects of implementing platform engineering principles were highlighted, emphasizing how operational efficiency could be improved.

Technical insights: The self-service model

Ivan and Chris Becker discussed the implementation of a self-service model using Jira Service Management (JSM) and automation pipelines. This approach allows developers to manage cloud resources, including the creation of AWS accounts, in as little as an hour - a marked difference from the days or weeks it used to take.

Live demo: Quick AWS account creation

A highlight was the live demonstration by Chris Becker, who presented the optimized process for setting up AWS accounts. This real-time presentation served as a practical guide for the audience, illustrating the simplicity and efficiency of the self-service model.

A look into the future: The future of platform engineering

The webinar concluded with a look to the future. Ivan spoke about exciting future developments such as multi-cloud strategies and the integration of DevSecOps approaches, giving an indication of the ever-evolving landscape of platform engineering.

Watch our webinar on-demand

Want to learn about the possibilities of platform engineering and developer self-service? Watch our on-demand webinar to learn more about platform engineering, IDPs and developer self-service. In this informative session, you'll gain insights that will help you transform your cloud infrastructure management.

What is Platform Engineering


IT teams, developers, department heads and CTOs must ensure that applications and digital products are launched quickly, efficiently and securely and are always available. But often the conditions for this are not given. Compliance and security policies, as well as long and complicated processes, make it difficult for IT teams to achieve these goals. But this doesn't have to be the case and can be solved with the help of a developer self-service or Internal Developer Platform.

Simplified comparison of Platform Engineering vs Internal Developer Platform vs Developer Self-Service.

Platform Engineering vs. Internal Developer Platform vs. Developer Self-Service

What is Platform Engineering?

Platform Engineering is a new trend that aims to modernize enterprise software delivery. Platform engineering implements reusable tools and self-service capabilities with automated infrastructure workflows that improve developer experience and productivity. Initial platform engineering efforts often start with internal developer platforms (IDPs).

Platform Engineering helps make software creation and delivery faster and easier by providing unified tools, workflows, and technical foundations. It's like a well-organized toolkit and workshop for software developers to get their work done more efficiently and without unnecessary obstacles.

Webinar - Platform Engineering: AWS Account Creation with Developer Self-Service (Jira Service Management)

What is Platform Engineering used for?

The ideal development platform for one company may be completely unusable for another. Even within the same company, different development teams may have very different requirements.

The main goal of a technology platform is to increase developer productivity. At the enterprise level, such platforms promote consistency and efficiency. For developers, they provide significant relief in dealing with delivery pipelines and low-level infrastructure.

What is an Internal Developer Platform (IDP)?

Internal Developer Platforms (IDPs), also known as Developer Self-Service Platforms, are systems set up within organizations to accelerate and simplify the software development process. They provide developers with a centralized, standardized, and automated environment in which to write, test, deploy, and manage code.

IDPs provide a set of tools, features, and processes. The goal is to provide developers with a smooth self-service experience that offers the right features to help developers and others produce valuable software with as little effort as possible.

How is Platform Engineering different from Internal Developer Platform?

Platform Engineering is the overarching area that deals with the creation and management of software platforms. Within Platform Engineering, Internal Developer Platforms (IDPs) are developed as specific tools or platforms. These offer developers self-service and automation functions.

What is Developer Self-Service?

Developer Self-Service is a concept that enables developers to create and manage the resources and environments they need themselves, without having to wait for support from operations teams or other departments. This increases efficiency, reduces wait times, and increases productivity through self-service and faster access to resources. This means developers don't have to wait for others to get what they need and can get their work done faster.

How do IDPs help with this?

Think of Internal Developer Platforms (IDPs) as a well-organized supermarket where everything is easy to find. IDPs provide all the tools and services necessary for developers to get their jobs done without much hassle. They are, so to speak, the place where self-service takes place.

The transition to platform engineering

When a company moves from IDPs to Platform Engineering, it's like making the leap from a small local store to a large purchasing center. Platform Engineering offers a broader range of services and greater automation. It helps companies further streamline and scale their development processes.

By moving to Platform Engineering, companies can make their development processes more efficient, improve collaboration, and ultimately bring better products to market faster. The first step with IDPs and Developer Self-Service lays the foundation to achieve this higher level of efficiency and automation.

Challenges that can be solved with platform engineering

Scalability & Standardization

In growing companies, as well as large and established ones, the number of IT projects and teams can grow rapidly. Traditional development practices can make it difficult to scale the development environment and keep everyone homogeneous. As IT projects or applications continue to grow, there are differences in setup and configuration, security and compliance standards, and an overview of which user has access to what.

Platform Engineering enables greater scalability by introducing automation and standardized processes that make it easier to handle a growing number of projects and application developments.

Efficiency and productivity

Delays in developing and building infrastructure can be caused by manual processes and dependencies between teams, increasing the time to market for applications. Platform Engineering helps overcome these challenges by providing self-service capabilities and automation that enable teams to work faster and more independently.

Security & Compliance

Security concerns are central to any development process. Through platform engineering, we standardize and integrate security and compliance standards into the development process and IT infrastructure in advance, enabling consistent security auditing and management.

Consistency and standardization

Different teams and projects might use different tools and practices, which can lead to inconsistencies. Platform engineering promotes standardization by providing a common platform with consistent tools and processes that can be used by everyone.

Innovation and experimentation

The ability to quickly test and iterate on new ideas is critical to a company's ability to innovate. Platform Engineering provides an environment that encourages experimentation and rapid iteration by efficiently providing the necessary infrastructure and tools.

Cost control

Optimizing and automating development processes can reduce operating costs. Platform Engineering provides the tools and practices to use resources efficiently and thus reduce the total cost of development.

Real-world example: IDP and Developer Self-Service with Jira Service Management and AWS

One way to start with platform engineering is, for example, to use Jira Service Management as a developer self-service to set up AWS cloud infrastructure in an automated and secure way, and to provide templates for developers and cloud engineers in a wiki.

How does it work?

Developer self-service for automatic AWS account creation with Jira Service Management


Using Jira Service Management, one of our customers provides a self-service that allows developers to set up an AWS organization account automatically and securely. This works via a simple portal and a service request form where the user provides information such as name, function, account type, the security and technical owners, and the approving manager.

The account is then created on AWS in the backend using Python scripts in a build pipeline. All security- and compliance-relevant standards are integrated during setup, and the JSM self-service is linked to the company's Active Directory. Thanks to this deep integration with all relevant company systems, it is possible to track exactly who has access to what, which also makes it easier to audit access rights and existing accounts later on.
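To make the flow concrete, a backend step in such a pipeline might look roughly like the sketch below. The form field names, tag keys, and email convention are hypothetical examples for illustration, not the customer's actual pipeline code; only the final boto3 call mentioned in the comment would touch AWS.

```python
# Sketch of a pipeline step that turns a JSM request into parameters for
# AWS account creation. Field and tag names are hypothetical examples.

REQUIRED_FIELDS = ("name", "function", "account_type",
                   "security_responsible", "technical_responsible",
                   "approving_manager")

def build_account_request(form: dict) -> dict:
    """Validate the JSM form and build parameters for account creation."""
    missing = [f for f in REQUIRED_FIELDS if not form.get(f)]
    if missing:
        raise ValueError(f"missing fields: {missing}")
    return {
        "AccountName": form["name"],
        # One mailbox per account is a common AWS Organizations pattern;
        # the domain here is a placeholder.
        "Email": f"aws-{form['name'].lower()}@example.com",
        "Tags": [
            {"Key": "AccountType", "Value": form["account_type"]},
            {"Key": "ApprovingManager", "Value": form["approving_manager"]},
        ],
    }

# In the real pipeline, a dict like this would be passed to something like
# boto3.client("organizations").create_account(**params).
params = build_account_request({
    "name": "DataPlatform", "function": "analytics",
    "account_type": "sandbox", "security_responsible": "j.doe",
    "technical_responsible": "a.smith", "approving_manager": "m.mueller",
})
print(params["Email"])  # aws-dataplatform@example.com
```

Validating the form before anything reaches AWS is what keeps the self-service safe: incomplete requests fail fast in the pipeline instead of producing half-configured accounts.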

The result: The time required to create AWS organization accounts is reduced to less than an hour (from several weeks) with the help of JSM, enabling IT teams to publish, test and update their products faster. It also provides visibility into which and how many accounts already exist and for which product, making it easier to control the cost of cloud infrastructure on AWS.

Confluence Cloud as a knowledge database for IT teams

Of course, developer self-service is only a small part of platform engineering. IT teams need concrete tools and apps tailored to their needs.

One of these tools is a knowledgebase where IT teams, from developers to cloud engineers, can find relevant information such as templates that make their work easier and faster.

We have built a knowledge database with Confluence at our customer that provides a wide variety of templates, courses, best practices, and important information about processes. This knowledge database enables all relevant stakeholders to obtain important information and further training at any time.

Webinar - The First Step in Platform Engineering with a Developer Self-Service and JSM

After discussing the challenges and solutions that Platform Engineering brings, it is important to put these concepts into practice and explore them further. A great opportunity to learn more about the practical application of Platform Engineering is our upcoming webinar. It will put a special focus on automating AWS infrastructure creation using Jira Service Management and Developer Self-Service, and it will feature a live demo with our DevOps experts.

Webinar - Platform Engineering: AWS Account Creation with Developer Self-Service (Jira Service Management)


The journey from Internal Developer Platforms to Platform Engineering is a progressive step that helps organizations optimize their development processes. By leveraging developer self-service and overcoming software development challenges, Platform Engineering paves the way for more efficient and innovative development practices. With practical resources like the featured webinar, interested parties can dive deeper into this topic and gain valuable insights into how to implement Platform Engineering effectively.

Price adjustments Atlassian

Atlassian Cloud: Price changes in October 2023 and product changes in November 2023

Atlassian Cloud is at the center of many teams' minds when it comes to effective collaboration. Some changes are now coming to the pricing structure and individual products in October 2023. These pricing adjustments affect Atlassian Cloud products including Jira, Jira Service Management, Confluence and Access.

The adjustments come into force for the Atlassian Cloud on October 18, 2023, and for the Jira Cloud products on November 1, 2023.

In this article we will give you an overview of the price and product adjustments.

The most important at a glance

  1. The increase in list prices concerns:
    • Jira Software and Confluence (5 %)
    • Jira Service Management (5-30 %)
    • Access (10 % for more than 1,000 users)
  2. Cloud prices also increase for renewals of existing subscriptions to Jira Software Premium, Jira Service Management Standard, Jira Service Management Premium, and Atlassian Access.
  3. New automation limits for Jira Cloud products take effect on November 1, 2023.

Why do the prices change?

These adjustments underscore Atlassian's commitment to developing innovative products that better connect teams and increase their efficiency. Over the past year, Atlassian has introduced various security updates and new product features, including guest access for Confluence, progressive deployment and improved security features in Jira Software, and advanced incident and change management in Jira Service Management.

Detailed information on pricing adjustments for the Atlassian Cloud

Below you will find all the information about the price adjustments and how the changes will affect your existing licenses.

Jira Software Cloud (Standard, Premium, Enterprise): 5 %

Jira Service Management Cloud (Standard, Premium, Enterprise):
  • 0-250 agent tier: 5 %
  • 251-500 agent tier: 30 %
  • 501-1,000 agents: 25 % (Standard); over 1,000 agents: 20 % (Standard)
  • 501-2,500 agents: 25 % (Premium); 2,501+ agents: 20 % (Premium)
  • All other agent tiers: 20 %

Confluence Cloud (Standard, Premium, Enterprise): 5 %

Access:
  • Fewer than 1,000 users: 0 %
  • 1,000+ user tier: 10 %

In addition, Atlassian is raising the preferential pricing on renewal of existing subscriptions to its cloud products. This applies to Jira Software, Jira Service Management and Atlassian Access.

You can find more information about the price adjustments on the Atlassian website.
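As a quick sanity check, the effect of a percentage increase on an existing subscription can be computed directly. The prices below are made-up examples, not Atlassian list prices:

```python
def renewal_price(current_price: float, increase_pct: float) -> float:
    """Price after applying the announced percentage increase."""
    return round(current_price * (1 + increase_pct / 100), 2)

# Hypothetical examples: a $1,000/year plan with the 5 % increase,
# and a $2,000/year JSM agent tier hit by the 30 % increase.
print(renewal_price(1000, 5))   # 1050.0
print(renewal_price(2000, 30))  # 2600.0
```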

Product customizations for automations in Jira Cloud products

In addition, Atlassian announced changes to how automations are executed in Jira Software, Jira Service Management, Jira Work Management, and Jira Product Discovery. These come into effect on November 1, 2023.

In the previous model, customers received a single, shared limit across all Jira Cloud products. For example, if a customer had Jira Software Free and Jira Service Management Standard, they received a total of 600 executions of automation rules per month (100 from Jira Software Free and 500 from Jira Service Management Standard) that could be used in both products.

In the new model starting November 2023, each Jira Cloud product has its own usage limit. Each automation rule will use the limit of a specific product when it is run. The limits for the Atlassian Free and Standard plans will increase to reflect this. The automation limits in the new model are as follows:

New automation limits per month, by product and plan:
  • Jira Software Free: 100
  • Jira Software Premium: 1,000 per user/month
  • Jira Service Management Free: 500
  • Jira Service Management Premium: 1,000 per user/month
  • Jira Work Management Free: 100
  • Jira Work Management Premium: 100 per user/month
  • Jira Product Discovery Free: 200
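The difference between the two models can be sketched in a few lines. The plan names and numbers mirror the shared-pool example above and are illustrative only:

```python
# Old model: all Jira Cloud products draw on one shared monthly pool.
# New model (from November 2023): each product has its own limit.

limits = {"jira_software_free": 100, "jsm_standard": 500}

def shared_pool(limits: dict) -> int:
    """Old model: limits are pooled across all Jira Cloud products."""
    return sum(limits.values())

def can_run(product: str, usage: dict, limits: dict) -> bool:
    """New model: a rule run only draws on its own product's limit."""
    return usage.get(product, 0) < limits.get(product, 0)

print(shared_pool(limits))  # 600 shared executions, as in the example
# In the new model, Jira Software Free is exhausted after its own 100
# runs, even if the JSM limit is untouched:
print(can_run("jira_software_free", {"jira_software_free": 100}, limits))  # False
```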

Here you can find more information about limits for automations.

More updates from Atlassian: Improved support for cloud migration

The pricing adjustments are just a few of Atlassian's recent changes. The company has also revised its support and testing options for migration projects to make it easier to move to the cloud.

Server customers who have not yet migrated to the cloud can now test the cloud for six months. The test phase now also includes support for selected Marketplace applications.

Dual licensing for large server customers has been extended through February 15, 2024, and for enterprise customers through February 15, 2025.

Why is it worth considering a migration to the Atlassian Cloud now?

For Enterprise customers who want to move to the cloud but cannot complete the migration in time for the end of Server Support in February 2024, Atlassian offers an extension of Server Support in the form of Dual Licensing*. (*For all customers who purchase an annual cloud subscription of 1,001 or more users on or after September 12, 2023).

Wondering how the price adjustments will affect you, or already thinking about migrating to the cloud?

Contact us - our experts will check which options are worthwhile for you. We offer you a free cloud assessment: Within a very short time, you will receive a detailed cost calculation for your migration.

Together with you, we will conduct a cloud assessment and tell you what your options are and how to get to the Atlassian Cloud the fastest.

Click here for your personal cloud assessment and for the path to the cloud.

A comparison of popular container orchestration tools: Kubernetes vs Amazon ECS vs Azure Container Apps

A comparison of popular container orchestration tools

With the increasing adoption of new technologies and the shift to cloud-native environments, container orchestration has become an indispensable tool for deploying, scaling and managing containerized applications. Kubernetes, Amazon ECS and Azure Container Apps have emerged as leaders among the many options available. But with so many options, how can you figure out which one is best for your business?

In this article, we'll take an in-depth look at the features and benefits of Kubernetes, Amazon ECS, and Azure Container Apps and compare them side-by-side so you can make an informed decision. We'll address real-world use cases and explore the pros and cons of each option so you can choose the tool that best meets your organization's needs. By the end of this article, you'll have a clear understanding of the benefits and limitations of each tool and be able to make a decision that aligns with your business goals.

Let's get started!

Overview: Container Orchestration Tools

Explanation of the common tools

While Kubernetes is the most widely used container orchestration tool, there are other options that should be considered. Some of the other popular options are:

  • Amazon ECS is a fully managed container orchestration service that simplifies the deployment, management, and scaling of Docker containers.
  • Azure Container Apps is a fully managed environment that allows you to run microservices and containerized apps on a serverless platform.
  • Kubernetes is an open source platform that automates the deployment, scaling and management of containerized applications.


Let's start with an overview of Kubernetes. Kubernetes was developed by Google and is now maintained by the Cloud Native Computing Foundation. Kubernetes is an open source platform that automates the deployment, scaling, and management of container applications. Its flexibility and scalability make it a popular choice for organizations of all sizes, from small startups to large enterprises.

Why is Kubernetes so popular?

Kubernetes is widely considered the industry standard for container orchestration, and for good reason. It offers a wide range of features that make it ideal for large-scale production deployments:

  • Automatic scaling: Kubernetes can automatically increase or decrease the number of replicas of a containerized application based on resource utilization.
  • Self-healing: Kubernetes can automatically replace or reschedule containers that fail.
  • Service discovery and load balancing: Kubernetes can automatically discover services and balance traffic between them.
  • Rollbacks and rollouts: With Kubernetes, you can easily revert to a previous version of your application or do a gradual rollout of updates.
  • High availability: Kubernetes can automatically schedule and manage application replica availability.
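The automatic scaling in the first bullet boils down to a simple rule: the Horizontal Pod Autoscaler's core formula is desiredReplicas = ceil(currentReplicas * currentMetricValue / targetMetricValue). A minimal sketch:

```python
import math

def desired_replicas(current: int, current_util: float, target_util: float) -> int:
    """Core scaling rule used by the Kubernetes Horizontal Pod Autoscaler."""
    return math.ceil(current * current_util / target_util)

print(desired_replicas(4, 90, 60))  # 6: CPU above target, scale up
print(desired_replicas(6, 30, 60))  # 3: CPU below target, scale down
```

In a real cluster this calculation also involves tolerances and stabilization windows, but the proportional rule above is the heart of it.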

The Kubernetes ecosystem also extends to Internet-of-Things (IoT) deployments. There are lightweight Kubernetes distributions (e.g. k3s, KubeEdge, MicroK8s) that allow Kubernetes to be installed on telecom devices, satellites, or even a Boston Dynamics robot dog.

The main advantages of Kubernetes

One of the key benefits of Kubernetes is its ability to manage many nodes and containers, making it particularly suitable for organizations with high scaling requirements. Many of the largest and most complex applications in production today, such as those from Google, Uber, and Shopify, are powered by Kubernetes.

Another great advantage of Kubernetes is its wide ecosystem of third-party extensions and tools. They easily integrate with other services such as monitoring and logging platforms, CI/CD pipelines, and others. This flexibility allows organizations to develop and manage their applications in the way that best suits their needs.

Disadvantages of Kubernetes

But Kubernetes is not without its drawbacks. One of the biggest criticisms of Kubernetes is that it can be complex to set up and manage, especially for smaller companies without dedicated DevOps teams. In addition, some users report that Kubernetes can be resource intensive, which can be a problem for organizations with limited resources.

So is Kubernetes the right choice for your business?

If you're looking for a highly scalable, flexible, and feature-rich platform with a large ecosystem of third-party extensions, Kubernetes may be the perfect choice. However, if you are a smaller organization with limited resources and little experience with container orchestration, you should consider other options.

Managed Kubernetes Services

Want to take advantage of the scalability and flexibility of Kubernetes, but don't have the resources or experience to handle the complexity? There are managed Kubernetes services like GKE, EKS and AKS that can help you overcome that.

Kubernetes offerings in the cloud significantly lower the barrier to entry for Kubernetes adoption because of lower installation and maintenance costs. However, this does not mean that there are no costs at all, as most offerings have a shared responsibility model. For example, upgrades to Kubernetes clusters are typically performed by the owner of a Kubernetes cluster, not the cloud provider. Version upgrades require planning and an appropriate testing framework for your applications to ensure a smooth transition.

Use cases

Kubernetes is used by many of the world's largest companies and is well suited for large-scale, production-ready deployments. For example:

  • Google: Google uses Kubernetes to manage the delivery of its search and advertising services.
  • Netflix: Netflix uses Kubernetes to deploy and manage its microservices.
  • IBM: IBM uses Kubernetes to manage its cloud services.

Comparison with other orchestration tools

While Kubernetes is widely considered the industry standard for container orchestration, it may not be the best solution for every organization. For example, if you have a small deployment or a limited budget, you may be better off with a simpler tool like Amazon ECS or even a simple container engine installation. For large, production-ready deployments, however, Kubernetes is hard to beat.

Advantages and disadvantages of Kubernetes as a container orchestration tool

Advantages:
  • Highly scalable and flexible
  • Large ecosystem of third-party extensions
  • Widespread use in production by large companies
  • Managed Kubernetes services available to manage complexity
  • Can be installed on IoT devices

Disadvantages:
  • Can be complex to set up and manage
  • Resource-intensive
  • Steep learning curve for smaller organizations without their own DevOps teams

Amazon ECS: A powerful and scalable container management service

Amazon Elastic Container Service (ECS) is a highly scalable, high-performance container management service provided by Amazon Web Services (AWS). It allows you to run and manage Docker applications on a cluster of Amazon EC2 instances and provides a variety of features to help you optimize your container-based applications.

Features and benefits

Amazon ECS is characterized by a rich set of features and tight integration with other AWS services. It works hand-in-hand with the AWS CLI and Management Console, making it easy to launch, scale, and monitor your containerized applications.

ECS is fully managed by AWS, so you don't have to worry about managing the underlying infrastructure. It builds on the robustness of AWS and is compatible with a wide range of AWS tools and services.

Why is Amazon ECS so popular?

Amazon ECS is popular for a number of reasons, making it suitable for a variety of deployment scenarios:

  • Powerful and easy to use: Amazon ECS integrates well with the AWS CLI and AWS Management Console and provides a seamless experience for developers already using AWS.
  • Scalability: ECS is designed to easily handle large, enterprise-wide deployments and automatically scales to meet the needs of your application.
  • High availability: ECS ensures high availability by enabling deployment in multiple regions, providing redundancy, and maintaining application availability.
  • Cost-effective: With ECS, you only pay for the AWS resources you use (e.g. EC2 instances, EBS volumes) and there are no additional upfront or licensing costs.

Use cases

Amazon ECS is suitable for large deployments and for enterprises looking for a fully managed container orchestration service.

  • Large-scale deployment: Due to its high scalability, ECS is an excellent choice for large-scale deployment of containerized applications.
  • Fully managed service: For organizations that do not want to manage their infrastructure themselves, ECS offers a fully managed service where the underlying servers and their configuration are managed by AWS.

Azure Container Apps: A managed and serverless container service

Azure Container Apps is a serverless container service provided by Microsoft Azure. It allows you to easily build, deploy, and scale containerized apps without having to worry about the underlying infrastructure.

Features and benefits

Azure Container Apps offers simplicity and integration with Azure services. The intuitive user interface and good integration with the Azure CLI simplify the management of your containerized apps.

With Azure Container Apps, the infrastructure is fully managed by Microsoft Azure. It is also based on Azure's robust architecture, which ensures seamless interoperability with other Azure services.

Why is Azure Container Apps so popular?

Azure Container Apps offers a number of benefits that are suitable for a wide range of deployments:

  • Ease of use: Azure Container Apps is integrated with the Azure CLI and Azure Portal, providing a familiar interface for developers already using Azure.
  • Serverless: Azure Container Apps abstracts the underlying infrastructure, giving developers more freedom to focus on programming and less on operations.
  • Highly scalable: Azure Container Apps can scale automatically to meet the needs of your application, making it well suited for applications with fluctuating demand.
  • Cost-effective: Azure Container Apps is only charged for the resources you use, and there are no additional infrastructure or licensing costs.

Use cases

Azure Container Apps is great for applications that require scalability and a serverless deployment model.

  • Scalable applications: Because Azure Container Apps automatically scales, it is ideal for applications that need to handle variable workloads.
  • Serverless model: Azure Container Apps offers a serverless deployment model for organizations that prefer not to manage servers and want to focus more on application development.

Amazon ECS vs. Azure CA vs. Kubernetes

Both Amazon ECS and Azure Container Apps are strong contenders in the container orchestration tool space. They offer robust, fully managed services that abstract the underlying infrastructure so developers can focus on their application code. However, they also cater to specific needs and ecosystems.

Amazon ECS is deeply integrated into the AWS ecosystem and is designed to easily handle large, enterprise-scale deployments. Azure Container Apps, on the other hand, operates on a serverless model and offers excellent scalability features, making it well suited for applications with fluctuating demand.

Here is a table for comparison to illustrate these points:

  • Ecosystem compatibility: Amazon ECS - deep integration with AWS services; Azure Container Apps - deep integration with Azure services; Kubernetes - widely compatible with many cloud providers
  • Deployment model: Amazon ECS - managed service on EC2 instances; Azure Container Apps - serverless; Kubernetes - self-managed and hosted options available
  • Scalability: Amazon ECS - designed for large-scale deployments; Azure Container Apps - excellent for variable demand (automatic scaling); Kubernetes - highly scalable with manual configuration
  • Management: Amazon ECS - fully managed by AWS; Azure Container Apps - fully managed by Microsoft Azure; Kubernetes - manual, with complexity
  • Costs: Amazon ECS - pay for the AWS resources used; Azure Container Apps - pay for resources used, serverless model; Kubernetes - depends on hosting environment, can be cost-effective if self-managed
  • High availability: Amazon ECS - cross-regional deployments for high availability; Azure Container Apps - managed high availability; Kubernetes - manual setup required for high availability

When choosing the right container orchestration tool for your organization, it's important to carefully evaluate your specific needs and compare them to the features and benefits of each tool.

Are you looking for a tool that can handle diverse workloads at enterprise scale, for something simple and flexible that is easy to manage, or for one that focuses on multi-cluster management and security?

Check out these options and see which one best fits your needs.


In this article, we've explored the features and benefits of Kubernetes, Amazon ECS, Azure Container Apps, and other popular container orchestration tools and compared them side by side to help you make an informed decision. We also examined real-world use cases and weighed the pros and cons of each option, finding that Kubernetes is widely considered the industry standard for container orchestration and is well suited for large-scale, production-ready deployments.

10 Best Practices for Deploying and Managing Microservices in a Production Environment

Microservices are a hot topic in software development, and for good reason. By breaking down a monolithic application into smaller, independently deployable services, teams can increase the speed and flexibility of their development process. However, deploying and managing microservices in a production environment is challenging. That's why it's important to follow best practices to ensure the stability and reliability of your microservices-based system.

Leading companies have tested these practices and significantly improved the performance and reliability of microservices-based systems. So read on if you want to get the most out of your microservices!

Why Microservices

Microservices suit enterprises moving from traditional licensing to subscription models, such as SaaS solutions. This shift is often necessary when moving from an on-premise deployment to a global, public cloud deployment with elastic capabilities. Companies like Atlassian have re-architected their products as microservices and deployed them in the cloud to make their applications available globally. However, microservices add complexity and are not suitable for every business, especially early-stage startups.

For enterprises, the transition from a traditional licensing model to a subscription-based model is critical to surviving in today's digital landscape. The benefits of this shift can be seen in the success of SaaS solutions such as Gmail, where customers pay only a small monthly fee for access to a wide range of features.

This concept can also be applied to microservices, making them an indispensable tool for companies that want to make this change. Take Atlassian and its Jira product, for example. Previously, Jira was deployed in on-premise environments, but to move to a subscription-based model, the company needed a global reach that could only be achieved in the public cloud. This move enabled elasticity so that the application could scale horizontally as needed and adapt to load changes without restrictions.

Best Practice #1: Define clear responsibilities and accountabilities for each microservice.

One of the main benefits of microservices is that they allow teams to work more independently and move faster. However, this independence can also lead to confusion about who is responsible for each service.

Therefore, it is important to define the responsibility for each microservice. This means that responsibility is assigned to a specific team or person and that this team or person is responsible for the development, maintenance and support of the service.

By establishing clear lines of authority and responsibility, you can ensure that each microservice is supported by a dedicated team focused on its success. In addition, it helps avoid issues such as delays in fixing bugs or implementing new features, because it's clear who is responsible for resolving them.

But how do you define responsibilities for your microservices?

Option 1: Individual responsibility for a range of services

One approach is a service ownership model, where each team or individual is responsible for a specific set of services: each SCRUM team delivers one solution, i.e. one set of components (microservices). This ensures that every service has a dedicated owner accountable for its success.

Option 2: Individual responsibility for a set of features

Another option is a feature ownership model, where each team or individual develops and maintains a specific set of features across multiple services. This can be a good fit if you have only a small number of services or if the features you are developing span multiple services.

Regardless of which approach you take, you need to ensure that responsibilities and accountabilities are clearly defined and communicated to all team members. For example, each developer should be responsible for a feature, deployment, and hypercare support. This ensures that everyone knows who is responsible for each microservice, and can avoid confusion and delays in the development process.

Best Practice #2: Use versioning and semantic versioning for all microservices

When working with microservices, it is important to keep track of the different versions of each service. This way, if you run into problems with a new version, you can revert to a previous version and ensure that the correct version of each service is used throughout the system.

One way to accomplish this is to use versioning for your microservices. Versioning assigns a version number to each version of a microservice, for example, 1.0, 1.1, and so on. This way, you can easily track the different versions of your microservices.

However, it is also a good idea to use semantic versioning for your microservices. Semantic versioning uses a three-part version number (e.g., 1.2.3), with the parts representing the major version, minor version, and patch number, respectively. The major version is incremented for significant changes, the minor version is incremented for new backward compatible features, and the patch number is incremented for bug fixes and other minor changes.

Semantic versioning can make it easier to understand the impact of a new version, and it can also help ensure that the correct version of each service is used throughout the system. Therefore, it is a good idea to use both versioning and semantic versioning for your microservices to ensure that you have a clear and comprehensive understanding of the different versions of each service.

For example, the entire mono repository with a set of microservices for a business domain should be versioned with a semver2 tag. The tag could be in the form of a git annotated tag, which is an object in Git.
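To make the versioning rules concrete, here is a minimal sketch of parsing, comparing, and bumping semver tags. The tag format ("v" prefix plus MAJOR.MINOR.PATCH) is an assumption for illustration; it does not cover semver pre-release or build-metadata suffixes.

```python
# Minimal semantic-versioning helpers (assumed tag format "vMAJOR.MINOR.PATCH").
from typing import Tuple

def parse_semver(tag: str) -> Tuple[int, int, int]:
    """Parse a tag like 'v1.2.3' (or '1.2.3') into (major, minor, patch)."""
    major, minor, patch = tag.lstrip("v").split(".")
    return int(major), int(minor), int(patch)

def bump(version: Tuple[int, int, int], part: str) -> Tuple[int, int, int]:
    """Return the next version after a change of the given kind."""
    major, minor, patch = version
    if part == "major":   # breaking change
        return major + 1, 0, 0
    if part == "minor":   # new backward-compatible feature
        return major, minor + 1, 0
    if part == "patch":   # bug fix or other minor change
        return major, minor, patch + 1
    raise ValueError(part)

# Tuples compare element-wise, so version ordering comes for free.
assert parse_semver("v1.10.0") > parse_semver("v1.2.3")
assert bump((1, 2, 3), "minor") == (1, 3, 0)
```

Note that the tuple comparison gets "1.10.0 > 1.2.3" right, which naive string comparison of tags would not.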

Best Practice #3: Use a CI/CD Pipeline for Automated Testing and Deployment

Are you tired of manually testing and deploying your microservices? Then it's time to consider using a Continuous Integration and Delivery (CI/CD) pipeline.

A CI/CD pipeline is a set of automated processes that take care of testing, deploying, and releasing your microservices. It allows you to automate many tasks in the development and deployment process, such as building, testing, and deploying your code. This way, you can speed up the development process and improve the reliability of your microservices-based system.

There are several tools and platforms available for setting up a CI/CD pipeline, including Jenkins, CircleCI, and AWS CodePipeline. Each application has specific features and capabilities, so it's important to choose the one that best meets your needs.

The deployment logic should be prepared from day one alongside the mono repository. The workflow: when a developer commits, the CI starts the build of the project (compiling the artifact and publishing the image to Docker Hub). Finally, the project is deployed to the hosting platform, e.g. EKS, so that you can verify the code is deployable, get a tight REPL-like feedback loop, and show the result to the product owners.

By using a CI/CD pipeline, you can automate the testing and deployment of your microservices and free up your teams to focus on developing and improving your services.
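The commit-to-deploy workflow described above boils down to ordered stages that abort on the first failure. Here is a toy sketch of that control flow; the stage names and no-op bodies are placeholders, not tied to any real CI product.

```python
# A toy CI/CD pipeline runner: ordered stages, stop at the first failure.
# Stage names and bodies are illustrative placeholders.
from typing import Callable, List, Tuple

def run_pipeline(stages: List[Tuple[str, Callable[[], bool]]]) -> List[str]:
    """Run stages in order; return the names of stages that completed."""
    completed = []
    for name, stage in stages:
        if not stage():        # a failing stage aborts everything after it
            break
        completed.append(name)
    return completed

stages = [
    ("build",  lambda: True),  # e.g. compile artifact, publish image to Docker Hub
    ("test",   lambda: True),  # e.g. unit and integration tests
    ("deploy", lambda: True),  # e.g. roll out to the hosting platform (EKS)
]
print(run_pipeline(stages))  # ['build', 'test', 'deploy']
```

The important property is the early exit: a broken build never reaches the deploy stage, which is exactly the guarantee a real pipeline gives you.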

Best Practice #4: Use containers and container orchestration tools

Tired of deploying and scaling your microservices manually? Then it's time to consider using containers and container orchestration tools.

Containers allow you to package your microservices and their dependencies into a single unit, making it easier to deploy and run them in different environments. This reduces the time and effort required to deploy and scale your microservices and improves their reliability.

In addition to using containers, it is also recommended that you use a container orchestration tool to manage the deployment and scaling of your microservices. With these tools, you can automate the deployment, scaling, and management of your containers, simplifying the execution and maintenance of your microservices-based system.

Also, each microservice should be containerized and its image published to Docker Hub with an appropriate semver2 tag.

Some popular container orchestration tools include Kubernetes, Docker Swarm and Mesos.

By using containers and container orchestration tools, you can streamline the deployment and management of your microservices and free up your teams to focus on developing and improving your services.

Best Practice #5: Use an API Gateway to Manage External Access to Microservices

When external clients, such as mobile apps or web clients, access your microservices-based system, use an API gateway to manage access to your microservices. If the models your microservices expose (mostly via REST APIs) are not sufficient for a client to build its view, consider presenting a GraphQL API as a facade, a pattern known as Backend for Frontend (BFF).

What is an API gateway?

An API gateway is a layer that sits between your clients and your microservices and is responsible for forwarding requests from clients to the appropriate microservice and returning the response to the client. It can also perform authentication, rate limiting, and caching tasks.

By using an API gateway, you can improve the security and performance of your system. It acts as a central entry point for external traffic and can take certain tasks off your microservices. It also makes it easier to manage and monitor external access to your microservices because you can track and log all requests and responses through the gateway.

Several options for implementing an API gateway include using a third-party service or creating your own gateway with tools like Kong or Tyk.

In addition, if you need to address security-related concerns, such as integrating Keycloak or an IDS, you should especially consider API gateway components such as Kong.

By using an API gateway, you can improve the security and performance of your microservices-based system and make it easier to manage external access to your services.
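The two gateway responsibilities mentioned above, routing and rate limiting, can be sketched in a few lines. The route table, service names, and fixed-window limit below are invented for illustration; real gateways like Kong use sliding windows and per-route policies.

```python
# A minimal API-gateway sketch: route by path prefix, rate-limit per client.
# Routes, service names, and the limit are illustrative assumptions.
from collections import defaultdict

ROUTES = {"/orders": "orders-service", "/users": "users-service"}
RATE_LIMIT = 3  # max requests per client in this toy (window-less) example
request_counts = defaultdict(int)

def handle(client_id: str, path: str) -> str:
    """Forward a request to the matching backend, enforcing the rate limit."""
    request_counts[client_id] += 1
    if request_counts[client_id] > RATE_LIMIT:
        return "429 Too Many Requests"   # fail fast at the gateway
    for prefix, service in ROUTES.items():
        if path.startswith(prefix):
            return f"forwarded to {service}"
    return "404 Not Found"

print(handle("alice", "/orders/42"))  # forwarded to orders-service
```

Because the gateway sees every request, this is also the natural place to hang the logging, authentication, and caching tasks mentioned above.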

Best Practice #6: Monitor the health and performance of microservices

When working with microservices, it is critical to monitor the health and performance of each service to ensure they are running smoothly and meeting the needs of your system.

There are several tools and techniques you can use to monitor the health and performance of your microservices, including:

  • Application performance monitoring (APM) tools: These tools track the performance of your microservices and provide insight into potential issues or bottlenecks.
  • Log analysis tools: These tools allow you to analyze the logs generated by your microservices to identify errors, performance issues, and other important information.
  • Load testing tools: These tools allow you to simulate the load on your microservices to test their performance and identify potential issues.

With these and other tools and techniques, you can monitor the health and performance of your microservices and identify and fix any problems that arise. This ensures that your microservices run smoothly and meet the requirements of your system.
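As a minimal illustration of the monitoring idea, the sketch below aggregates per-service health checks into one system status; service names are invented, and a real setup would pull these booleans from HTTP health endpoints or an APM tool.

```python
# Toy health aggregation: per-service checks rolled up into an overall status.
# Service names are illustrative; real checks would probe /health endpoints.
def aggregate_health(checks: dict) -> str:
    """Return 'healthy' only if every service check passed."""
    failing = [name for name, ok in checks.items() if not ok]
    return "healthy" if not failing else f"degraded: {', '.join(sorted(failing))}"

checks = {"orders": True, "users": True, "payments": False}
print(aggregate_health(checks))  # degraded: payments
```

The value of the roll-up is that one failing dependency is surfaced by name instead of being buried in per-service dashboards.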

Best Practice #7: Implement a rolling deployment strategy

When deploying updates to your microservices, remember that it is important to minimize downtime and disruption to your system. One way to do this is to implement a rolling deployment strategy.

With a rolling deployment strategy, you deploy an update to a small part of the system first and then gradually roll it out to the rest. This allows you to test the update on a small scale before it reaches the entire system, minimizing the risk of disruption or problems.

The following approaches exist for implementing such a gradual deployment strategy:

  • Blue-green deployment: This involves deploying the update in a separate "green" environment and switching traffic over from the "blue" environment once the update has been tested and is ready to go live.
  • Canary deployment: This involves deploying updates to a small percentage of users and gradually increasing the percentage over time as you watch for problems.

By implementing a rolling deployment strategy, you can minimize downtime and disruption during updates and ensure a smooth and reliable deployment process.
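The canary approach above needs one mechanism: a stable way to decide which users see the new version. A common technique (assumed here, not prescribed by the text) is hashing the user id into a bucket, sketched below.

```python
# Canary routing sketch: deterministically send a percentage of users to the
# new version by hashing the user id. Percentages and ids are illustrative.
import hashlib

def bucket(user_id: str) -> int:
    """Map a user id to a stable bucket in [0, 100)."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return int(digest, 16) % 100

def route(user_id: str, canary_percent: int) -> str:
    return "canary" if bucket(user_id) < canary_percent else "stable"

# The same user always lands in the same bucket, so their experience stays
# consistent while the canary percentage is gradually increased.
assert route("user-42", 0) == "stable"
assert route("user-42", 100) == "canary"
```

Raising `canary_percent` from, say, 1 to 10 to 100 widens the audience without ever flipping an individual user back and forth between versions.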

Best Practice #8: Use a central logging and monitoring system

When working with microservices, it is important to have a way to monitor the overall health and performance of your system. One way to do this is to use a centralized logging and monitoring system.

With a centralized logging and monitoring system, you can collect and analyze logs and other data from your microservices in a single place, making it easier to track the overall health and performance of your system. This way, you can identify and fix problems faster because you can see all relevant data in one place.

There are several options for implementing a centralized logging and monitoring system, including using a third-party service like Splunk or creating your own system with tools like Elasticsearch and Logstash. It's important to choose the option that best fits your needs and budget.

With a centralized logging and monitoring system, you can track the overall health and performance of your microservices-based system and identify and fix any issues that arise. So why not try it out?

Best Practice #9: Use circuit breakers and bulkheads to prevent cascading failures

When working with microservices, it is important to prevent problems in one service from affecting the entire system. One way to achieve this is to use circuit breakers and bulkheads.

A circuit breaker is a pattern that lets a service fail fast and stop processing requests when a problem occurs, rather than continuing and potentially causing further damage. This prevents cascading failures and protects the overall stability of the system.

A bulkhead is a pattern that allows you to isolate different parts of your system so that problems in one part do not affect the rest of the system. In this way, you prevent cascading failures and increase the stability of the overall system.

By using circuit breakers and bulkheads, you can prevent problems in one service from affecting the entire system and ensure the stability and reliability of your microservices-based system.
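A minimal circuit breaker can be sketched as follows; the failure threshold is an invented parameter, and production implementations add a timeout after which the breaker "half-opens" to probe whether the downstream service has recovered.

```python
# A minimal circuit-breaker sketch: after N consecutive failures the breaker
# "opens" and fails fast instead of calling the downstream service again.
class CircuitBreaker:
    def __init__(self, failure_threshold: int = 3):
        self.failure_threshold = failure_threshold
        self.failures = 0
        self.open = False

    def call(self, fn):
        if self.open:
            raise RuntimeError("circuit open: failing fast")
        try:
            result = fn()
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.open = True   # stop hammering the failing dependency
            raise
        self.failures = 0          # a success resets the failure count
        return result

breaker = CircuitBreaker(failure_threshold=2)

def flaky():
    raise ConnectionError("downstream unavailable")

for _ in range(2):
    try:
        breaker.call(flaky)
    except ConnectionError:
        pass
print(breaker.open)  # True: further calls fail fast, protecting the caller
```

Once open, callers get an immediate error they can handle (fallback, cached data) instead of waiting on timeouts that pile up into a cascading failure.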

Best Practice #10: Implement an appropriate testing strategy

When working with microservices, you should ensure the stability and reliability of your system by properly testing your microservices. There are several types of tests you should consider, including:

  • Unit testing: This involves testing individual units of code to ensure that they function correctly.
  • Integration testing: This tests the integration between different microservices to ensure that they work together correctly.
  • Performance testing: This involves testing the performance of your microservices under various loads and conditions to ensure that they meet the requirements of your system.
  • Chaos testing: This involves deliberately introducing failures or other disruptions into your system to test its resilience and ensure that it can recover from outages.
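To ground the unit-testing level, here is a tiny example of pure business logic tested in isolation; the pricing function and discount rates are invented for illustration, and a real suite would use pytest or unittest rather than bare asserts.

```python
# Unit-test illustration: pure business logic with no service dependencies.
# The pricing function and discount rates are invented for this sketch.
def apply_discount(total: float, customer_tier: str) -> float:
    rates = {"gold": 0.10, "silver": 0.05}
    return round(total * (1 - rates.get(customer_tier, 0.0)), 2)

# Unit tests: each assertion checks one behavior of one unit of code.
assert apply_discount(100.0, "gold") == 90.0
assert apply_discount(100.0, "silver") == 95.0
assert apply_discount(100.0, "bronze") == 100.0  # unknown tier: no discount
```

Integration tests would then exercise this logic through the service's API together with its neighbors, and chaos tests would do so while dependencies are deliberately failing.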


By implementing an appropriate testing strategy and regularly testing your microservices, you can ensure the stability and reliability of your microservices-based system.

Companies that are able to rapidly and reliably deliver new features and functionality in today's fast-paced business environment have a significant competitive advantage. Microservices can be a powerful tool for rapid development and deployment, but they also bring new deployment and management challenges.

The 10 best practices described in this article provide a roadmap for successfully deploying and managing microservices in a production environment. By following these best practices, organizations can ensure that their microservices are stable, reliable, and perform at their best. This way, they can deploy new features and functionality faster and stay ahead of the competition.

In addition to improving microservices performance, these best practices also help reduce the risk of costly downtime and outages. You'll also improve overall system reliability and stability. You protect your company's reputation and customer satisfaction, which ultimately contributes to business success.

Therefore, these best practices are a must for any organization that wants to get the most out of their microservices-based systems. Take the first step towards implementing these best practices and see the benefits for yourself.


How cloud security can drive business success

Reading time: 12 minutes

The cloud has become an invaluable resource for businesses of all sizes, offering access to data and applications from anywhere and increasing efficiency. However, it is essential to remember that the cloud is vulnerable to security threats and breaches. Fortunately, you can implement security measures to ensure a secure cloud and protect business data. By implementing these measures, businesses can maximize the benefits of the cloud and enjoy increased security, reliability, and, ultimately, business success. This article will discuss the importance of cloud security and how a secure cloud can lead to business success.

The importance of cloud security

We cannot stress the importance of cloud security enough. Businesses are vulnerable to malware, ransomware, and data breaches without proper security measures. These can cause significant damage, resulting in lost data and compromised systems. Further, your business is liable for data losses or breaches, resulting in fines and penalties.

A study from 2019 by Oracle and KPMG revealed that organizations are losing an average of $5 million per cloud security incident. Additionally, according to Accenture, organizations worldwide will lose an estimated $5 trillion in revenue due to cloud security breaches over the next five years.

A secure cloud is essential for protecting business data, as it can provide an extra layer of security to protect against potential threats. Businesses can protect their data by implementing security measures such as authentication, access control, and encryption. This can reduce the risk of data breaches and mitigate the potential impacts of any attack. Additionally, a secure cloud can provide increased reliability, as data is less likely to be corrupted or lost. This helps maximize ROI, as businesses can access their data quickly and reliably.

Potential risks of an unsecured system

The potential risks of an unsecured system are significant, and businesses must take steps to protect themselves against potential threats. Without proper security measures, companies are vulnerable to various attacks, including malware, ransomware, and data breaches. Malware, such as viruses and worms, can infect systems and cause damage to data. Ransomware is malicious software that can encrypt data and hold it hostage until businesses pay a ransom. Data breaches can result in the unauthorized access and disclosure of sensitive information, such as customer data or trade secrets.

These attacks can cause significant damage, resulting in lost data and compromised systems. As such, businesses must take steps to protect themselves by implementing security measures to ensure a secure cloud.

Common risks at a glance:

  • Data Breaches: Unsecured cloud systems can be vulnerable to malicious actors that could gain access to sensitive data and use it for malicious intent.
  • Denial of Service (DoS) Attacks: DoS attacks involve flooding a network or service with traffic, resulting in the system being unable to respond to legitimate requests. This can lead to outages and disruption of service for cloud users.
  • Malware Infection: Cloud systems can be vulnerable to malware infections, allowing attackers to access confidential data and disrupt operations.
  • Insufficient Access Controls: If access control measures are not implemented correctly, unauthorized individuals may gain access to sensitive data or resources stored in the cloud system without permission.
  • Poor Configuration Management: Inadequate configuration management practices, such as a lack of patching or outdated software versions, can make a cloud system vulnerable to attack from malicious actors, resulting in data breaches or unauthorized access to resources by attackers.

Advantages of a secure cloud

A secure cloud can provide numerous benefits to businesses, including increased security and improved reliability. Companies can protect their data and reduce the risk of data breaches by implementing security measures such as authentication, access control, and encryption. Additionally, a secure cloud can provide increased reliability, as data is less likely to be corrupted or lost. This helps maximize ROI, as businesses can access their data quickly and reliably.

Furthermore, a secure cloud can provide additional benefits, such as improved customer satisfaction. Businesses can demonstrate that they value their customers and data by protecting customer data and ensuring privacy. This can result in increased customer loyalty and a better customer experience overall.

Finally, a secure cloud can help businesses to comply with data regulations, such as the European Union’s General Data Protection Regulation (GDPR). Companies can avoid costly fines and penalties by complying with these regulations and ensuring their data is secure and protected.

Types of cloud security measures

Companies must take security measures to ensure a secure cloud. Authentication verifies a user's identity to allow access to data or services through passwords, biometrics, or two-factor authentication. Encryption converts data into an unreadable format to protect it from unauthorized access and disclosure. Finally, access control restricts the actions of specific users or services based on predefined rules and criteria.

These measures can provide organizations with additional security and help them protect their data from potential threats. Enterprises can also implement monitoring and alerting tools to detect potential breaches or suspicious activity and alert administrators accordingly. By implementing these measures, organizations can ensure that their cloud systems are secure and protected from potential threats.


Authentication

Authentication is an important security measure that can help organizations protect their data and ensure that only authorized users have access (using strong passwords, biometrics, or two-factor authentication). Passwords are the most common form of authentication because they are easy to implement and use. However, passwords can be easily guessed or cracked by brute force attacks. Therefore, it is important to use strong passwords that are difficult to guess and to change them regularly.

Biometrics refers to a user's unique physical characteristics, such as fingerprints or facial recognition. This provides an additional layer of security and ensures that only authorized users have access to data or services. Two-factor authentication (2FA) combines two different authentication methods for added security, such as a password combined with a code sent via text message or email. By implementing these measures, organizations can ensure that their cloud systems are secure and protected from potential threats.
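For illustration, here is a minimal sketch of HOTP (RFC 4226), the counter-based algorithm behind many 2FA code generators; TOTP is the same construction with a time-derived counter. The secret shown is the RFC's published test secret, not a real credential.

```python
# Sketch of HOTP (RFC 4226), the basis of many 2FA one-time codes.
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """Derive a one-time code from a shared secret and a counter."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                  # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 4226 test vector: ASCII secret "12345678901234567890", counter 0.
print(hotp(b"12345678901234567890", 0))  # 755224
```

The server runs the same computation, so the code proves possession of the shared secret without ever sending the secret itself.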


Encryption

Encryption is an important security measure that can help companies protect their data from unauthorized access and disclosure. It involves converting data into an unreadable format, such as a code or cipher, to prevent it from being read or understood by anyone other than the intended recipient. Encrypted data is therefore more secure because it cannot be read, even if it falls into the wrong hands. Encryption can also help ensure data integrity by detecting and preventing changes to encrypted data.

There are various encryption algorithms, each of which has its strengths and weaknesses. Therefore, it is important to choose an algorithm that provides high security but requires little computing power or storage space. Organizations must also keep their encryption keys secure to prevent unauthorized access to encrypted data. By implementing these measures, companies can ensure that their cloud systems are secure and protected from potential threats.
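The integrity property mentioned above can be shown in isolation with Python's standard library: an HMAC tag detects any change to the protected bytes. The key and payload below are invented for illustration; real systems pair this with an authenticated cipher such as AES-GCM from a dedicated cryptography library.

```python
# Tamper detection via an HMAC tag: the integrity half of data protection.
# The key and payload are illustrative; never hard-code real keys.
import hashlib
import hmac

KEY = b"demo-key-not-for-production"

def protect(data: bytes) -> bytes:
    """Append a 32-byte SHA-256 authentication tag to the data."""
    return data + hmac.new(KEY, data, hashlib.sha256).digest()

def verify(blob: bytes) -> bool:
    """True only if the tag still matches the data."""
    data, tag = blob[:-32], blob[-32:]
    expected = hmac.new(KEY, data, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)  # constant-time comparison

blob = protect(b"invoice: 100 EUR")
assert verify(blob)
assert not verify(blob.replace(b"100", b"999"))  # any change is detected
```

`hmac.compare_digest` is used instead of `==` so that the comparison does not leak timing information about how many tag bytes matched.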

Access control

Access control is an important security measure that can help organizations protect their data by restricting access to specific users or services. This includes setting rules and criteria for who can access what data and when. For example, a company can establish rules that allow only certain employees to access sensitive customer data or limit access to certain times of the day or week. This ensures that only authorized users can access the data they need, while preventing unauthorized users from accessing sensitive information.

For added security, organizations should also consider implementing multi-factor authentication (MFA). MFA combines two or more authentication methods, such as a password combined with biometric data or a code sent via SMS. By implementing these measures, organizations can ensure that their cloud systems are secure and protected from potential threats.
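The rule-based access control described above, who can do what on which resource, can be sketched as a small lookup; the roles, resources, and actions are invented for illustration, and real systems add conditions such as time-of-day restrictions.

```python
# A small role-based access-control sketch: rules map (role, resource) pairs
# to allowed actions. Roles, resources, and actions are illustrative.
RULES = {
    ("support", "customer-data"): {"read"},
    ("admin", "customer-data"): {"read", "write"},
}

def is_allowed(role: str, resource: str, action: str) -> bool:
    """Deny by default: only explicitly granted actions are permitted."""
    return action in RULES.get((role, resource), set())

assert is_allowed("admin", "customer-data", "write")
assert not is_allowed("support", "customer-data", "write")
assert not is_allowed("guest", "customer-data", "read")  # unknown role: denied
```

The deny-by-default lookup is the key design choice: an unknown role or resource grants nothing, so forgetting a rule fails closed rather than open.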

How companies benefit from a secure cloud

Enterprises can benefit from a secure cloud in many ways, including increased security, improved reliability, and maximized ROI. Implementing security measures such as authentication, encryption, and access control reduces the risk of data breaches and mitigates the potential impact of an attack. In addition, a secure cloud can increase reliability by making it less likely that data will be corrupted or lost. This helps maximize ROI, as companies can access their data quickly and reliably.

In addition, companies can also benefit from improved customer satisfaction. By protecting customer data and ensuring privacy, companies can show that they value their customers and their data. This can lead to stronger customer loyalty and an overall better customer experience. Finally, companies can also benefit from complying with data regulations such as the General Data Protection Regulation (GDPR) by avoiding costly fines and penalties while ensuring their data is safe and secure.

Increased security

One of the main benefits of a secure cloud is increased security. By implementing security measures such as authentication, encryption and access control, organizations can ensure that their data is protected from potential threats. This can reduce the risk of data breaches and mitigate the potential impact of an attack. By protecting customer data and ensuring privacy, companies can also benefit from higher customer satisfaction.

These benefits can help companies maximize their ROI by providing fast and reliable access to their data without worrying about security threats or compliance issues. A secure cloud offers companies numerous benefits that can boost business success.

Learn more about cloud security in our Whitepaper: Zero Trust

Improved reliability

Another benefit of a secure cloud is improved reliability. By implementing security measures such as encryption, companies can ensure that their data is protected from unauthorized access and disclosure. This can ensure data integrity by detecting and preventing any changes to encrypted data. A secure cloud can also provide greater reliability, as data is less likely to be corrupted or lost. This helps maximize ROI, as companies can access their data quickly and reliably without worrying about potential threats or breaches.

Finally, by protecting customer data and ensuring privacy, companies can benefit from higher customer satisfaction. This can lead to higher customer loyalty and an overall better customer experience. A secure cloud offers companies numerous benefits that can increase business success.

Maximized ROI

One of the key benefits of a secure cloud is maximizing return on investment. By implementing security measures such as authentication, encryption and access control, organizations can ensure that their data is protected from potential threats. This can reduce the risk of data breaches and mitigate the potential impact of an attack. In addition, a secure cloud can increase reliability, as data is less likely to be corrupted or lost. This helps maximize ROI, as businesses can access their data quickly and reliably without worrying about security threats or breaches.

In addition, by protecting customer data and ensuring privacy, companies can benefit from higher customer satisfaction. This can lead to higher customer loyalty and an overall better customer experience. Finally, businesses also benefit from complying with data regulations such as the General Data Protection Regulation (GDPR) by avoiding costly fines and penalties while ensuring their data is safe and secure. A secure cloud offers companies numerous benefits that can improve business success.

Actively reduce costs

By implementing security measures such as authentication, encryption and access control, organizations can reduce the potential cost of a cloud security incident by up to 50%. With a secure cloud, organizations can also increase customer satisfaction by up to 20%, which translates into higher customer retention and customer lifetime value. Companies that comply with data regulations such as the GDPR can save up to 25% in fines and penalties. Finally, a secure cloud can increase ROI by up to 30% by improving reliability and access times.


In summary, cloud security is an essential component for business success. By implementing security measures such as authentication, access control and encryption, organizations can ensure that their data is protected from potential threats. In addition, a secure cloud can increase reliability, improve customer satisfaction and ensure compliance with data regulations such as the GDPR. Ultimately, a secure cloud can improve business success by maximizing ROI and protecting against potential threats.

How to deploy to production 100 times a day (CI/CD)

A software company's success depends on its ability to ship new features, fix bugs, and improve code and infrastructure.

A tight feedback loop is essential, as it permits constant and speedy iteration. This necessitates that the codebase should always be in a deployable state so that new features can be rapidly shipped to production.

Achieving this can be difficult, as there are many working parts and it can be easy to introduce new bugs when shipping code changes.

Small changes may not seem to affect the state of the software in the short term, but over the long term they can have a big effect.

If small software companies want to be successful, they need to move fast. As they grow, they become slow, and that's when things get tricky.

Now, they

  • have to coordinate their work more,
  • need to communicate more,
  • and have more people working on the same codebase.

This makes it more difficult to keep track of what is happening.

Thus, it is essential to have a team that handles shipping code changes. This team should be as small and efficient as possible so that it can iterate rapidly on code changes.

Furthermore, use feature flags to toggle new features on and off in production. This allows for prompt and easy experimentation, as well as the ability to roll back changes if needed. Set up alerts to notify the team when new code is deployed. This way, they can monitor the effects of the changes and take action if necessary.

There are a few things that can make this process easier:

  • Automate as much of the development process as possible.
  • Make a dedicated team responsible for releasing code changes.
  • Use feature flags to turn new features on and off in production.
  • Set up alerts to notify the team when new code is deployed.
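The feature-flag idea above can be sketched in a few lines of Python. The FeatureFlags class and the flag name are hypothetical; a production setup would typically use a dedicated flag service rather than an in-memory store:

```python
# Minimal in-memory feature-flag store (illustrative sketch only).
class FeatureFlags:
    def __init__(self):
        self._flags = {}

    def enable(self, name):
        self._flags[name] = True

    def disable(self, name):
        # Turning a flag off acts as an instant rollback: the new code
        # path stays deployed but is no longer executed.
        self._flags[name] = False

    def is_enabled(self, name):
        # Unknown flags default to off, so unfinished features stay hidden.
        return self._flags.get(name, False)


flags = FeatureFlags()
flags.enable("new-checkout")

def checkout(cart):
    if flags.is_enabled("new-checkout"):
        return f"new checkout flow for {len(cart)} items"
    return f"legacy checkout flow for {len(cart)} items"
```

Because the flag is checked at runtime, a bad release can be switched off without a redeploy.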

If you follow these tips, you can deploy code to the production environment 100 times a day, with minimal disruption.

Continuous deployment of small changes

This insight, though not new, is a core element of the DevSecOps movement. Another way to reduce risk, alongside optimizing the developer workflow for rapid delivery, is to grow the engineering team. As headcount increases, not only does the total number of deployments rise, but so does the number of deployments per engineer.

What's even more remarkable: this reduces the number of incidents, while the average number of rollbacks remains the same.

But be careful with these metrics. They look great on paper, yet they do not correlate perfectly with customer satisfaction or the absence of negative customer impact.

Your goal should be to deploy many small changes. They are quicker to implement, quicker to validate, and, of course, quicker to roll back.

Further, small changes tend to have only a minor impact on your system compared to big changes.

Generally speaking, the process from development to deployment needs to be as smooth as possible. Any friction will lead developers to batch up changes and release them all at once.

To mitigate the friction within your process, do this:

  • Allow engineers to deploy a change without communicating it to a manager.
  • Automate testing and deployment at every stage.
  • Allow different developers to test simultaneously and multiple times.
  • Offer numerous development and test systems.

Next to a frictionless development and deployment process, concentrate on a sophisticated, open-minded, and blameless engineering culture. Only then can you deploy to production 100 times per day (or even more).

Our engineering (& company) culture

At XALT, we have a specific image in mind when we talk about our development culture.

For us, a modern development culture is one that

  • is based on trust,
  • puts the customer at the center,
  • uses data as a basis for decision-making,
  • focuses on learning,
  • is result- and team-oriented, and
  • promotes continuous improvement.

This type of development culture enables our development team to work quickly, deliver high-quality code, and learn from mistakes.

This approach goes hand in hand with our entire corporate culture, regardless of department, team, or position. We also tend to challenge the status quo.

I know, this sounds a bit cheesy. But it's true. Allowing our team to focus on the problem at hand without friction or unnecessary regulations has made us more productive and faster.

For example, our development, testing and deployment process looks like this.

It's pretty simple. Once one of our developers has created and tested a new code branch, all it takes is one more person to review the code and it is integrated into the production environment.

But the most important core element at XALT is trust! Let me explain that in more detail.

We trust our team

We trust our team in what they do and in the tools they use to accomplish a task. If things go wrong or something doesn't work out, that's okay. We start our post-mortem process, find the root cause of the incident, fix it, and learn from our mistakes.

I know it's not just about development; testing and other parts are just as important.

Monitoring and testing

In order to get better, faster and ultimately make our users (or customers) happy, we constantly monitor and review our development processes.

In the event of an incident, it's not just a matter of getting the system up and running again, but also of making sure that something like this doesn't happen again.

That is why we have invested heavily in monitoring and auditing.

So we can

  • get real-time insights into what's going on,
  • identify problems and possible improvements,
  • take corrective action when necessary, and
  • recover more quickly from incidents.

We have also implemented an automatic daily backup solution for our core applications and infrastructure. So if something breaks, we can revert to a previous version, further reducing the risk.
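To illustrate the daily backup idea, here is a minimal Python sketch of a retention policy that keeps only the most recent snapshots. The ISO-date snapshot naming and the seven-day retention are assumptions for this sketch, not a description of our actual setup:

```python
from datetime import date, timedelta

# Illustrative retention policy for daily backups: keep the most recent
# `keep` snapshots and mark older ones for deletion.
def prune_backups(snapshots, keep=7):
    """snapshots: list of ISO date strings, one per daily backup."""
    ordered = sorted(snapshots, reverse=True)  # newest first
    return ordered[:keep], ordered[keep:]      # (retain, delete)

# Ten consecutive daily snapshots, Jan 1 through Jan 10.
snaps = [(date(2024, 1, 1) + timedelta(days=i)).isoformat() for i in range(10)]
retain, delete = prune_backups(snaps, keep=7)
```

A real setup would hand the `delete` list to the backup tool's cleanup job; the split into retain/delete keeps that decision testable on its own.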

Minimizing risk in a DevOps culture

To mitigate risk in day-to-day development, we employ the following tactics:

  • Trunk-based development: This is a very simple branching model where all developers work on the main development branch or trunk. This is the default branch in Git. All developers commit their changes to this branch and push their changes regularly. The main advantage of this branching model is that it reduces the risk of merge conflicts because there is only one main development branch.
  • Pull requests: With a pull request, you ask another person to review your code before it is merged into the target branch. This is usually used when you want to contribute to another project or when you want someone else to review your code.
  • Code review: Code review involves manually checking the code for errors, usually done by a colleague or supervisor. Parts of the process can be supported by tools that automate routine checks.
  • Continuous Integration (CI): This is the process of automatically building and testing code changes, usually with a CI server such as Jenkins. CI helps to find errors early and prevents them from flowing into the main code base.
  • Continuous Deployment (CD): This is the process of automated deployment of code changes in a production environment.
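The CI/CD stages above can be made concrete with a minimal Python sketch of a pipeline gate that stops at the first failing stage, so broken changes never reach deployment. The stage names and callables are illustrative, not a real CI server:

```python
# Minimal sketch of a CI/CD gate: each stage is a callable returning
# True on success; the pipeline stops at the first failure.
def run_pipeline(change, stages):
    for name, stage in stages:
        if not stage(change):
            return f"failed at {name}"
    return "deployed"

stages = [
    ("build", lambda c: True),                        # compile/package always passes here
    ("test", lambda c: c.get("tests_pass", False)),   # automated test suite
    ("deploy", lambda c: True),                       # push to production
]
```

A change with failing tests stops at the test stage; only a fully green run reaches deploy.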

It is also important that we establish clear guidelines to guide our development team.

The guidelines at XALT:

  • At least one other developer reviews all code changes before we add them to the main code base.
  • We set up a continuous integration server to build and test code changes before they are committed to the main code base.
  • We use tools such as SonarQube to ensure code quality and provide feedback on potential improvements.
  • We maintain a comprehensive automated test suite to find defects before they reach production.


The success of a software company depends on its ability to regularly deliver new features, fix bugs, and improve code and infrastructure. This can be difficult because there are numerous components being worked on, and as code changes are released, new bugs can easily appear. There are a few things that can make this process easier: Automate the process as much as possible, create a dedicated team responsible for releasing code changes, use feature flags to turn new features on and off in production, and set up alerts to notify the team when new code is deployed.

If you follow these tips, you should be able to go to production 100 times a day with minimal interruptions.

DevOps Automation

How to get started with DevOps automation and why it's important

DevOps automation allows for faster and more consistent deployments, better tracking of deployments, and more control over the release process. Additionally, DevOps automation can help reduce the need for manual intervention, saving time and money.

Automation, in general, should simplify how software is developed, delivered, and managed. The main goal of DevOps Automation is to reach faster delivery of reliable software and to reduce risk to the business. Further, automation helps to increase the speed and quality of software development while also reducing the risk of errors within your development and operations departments.

IT departments usually feel the need to automate or digitize their processes and workflows during times of unease. Especially during these times, the typical DevOps automation challenges take center stage.

Why automate anyway?

Automation is a way of identifying recurring patterns in computation and reducing them to a constant complexity, O(1) in Big O notation.

For efficiency reasons, we want to share resources (as with ride-sharing services like Uber) and avoid boilerplate (less verbosity keeps the code clear and simple). We deliver only a delta of changes on top of a generic state, treating the generic parts as utils/helpers/commons.

In the context of cloud automation, we say that if provisioning is not automated, it doesn't work at all.

In the context of DevOps automation and software integration, it's about building facades. In the industry, we call this "Agile Integration". The facade pattern is also very common in software projects that are not greenfield.

Most software solutions out there are facades on top of other facades (Kubernetes → Docker → Linux kernel) or a superset of a parent implementation (compare the verbosity of Kotlin syntax with that of Java).
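The facade pattern mentioned above can be sketched in Python: one simple entry point hides several lower-level subsystems, much as Kubernetes hides container runtime and kernel details. The subsystem classes here are invented for illustration:

```python
# Lower-level subsystems a caller should not need to know about.
class ImageRegistry:
    def pull(self, image):
        return f"pulled {image}"

class Scheduler:
    def place(self, image):
        return f"scheduled {image} on node-1"

class DeploymentFacade:
    """Single, simple interface over the underlying subsystems."""
    def __init__(self):
        self._registry = ImageRegistry()
        self._scheduler = Scheduler()

    def deploy(self, image):
        # One call orchestrates both subsystems behind the facade.
        self._registry.pull(image)
        return self._scheduler.place(image)

facade = DeploymentFacade()
result = facade.deploy("app:1.0")
```

The caller only ever sees `deploy()`; the registry and scheduler can be swapped out without changing any calling code.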

DevOps automation of a single deployment release

An example of Agile Integration within an arbitrary domain-driven design (DDD) context for microservices deployment.

What are typical DevOps Automation challenges?

Lack of integration and communication between development and operations

This can be solved by using a DevOps platform that enables communication and collaboration between the two departments. The platform should also provide a single source of truth for the environment and allow for the automation of workflows.

Inefficient workflows and missing tools

Efficient workflows can be built in DevOps by automating them. Automation helps to standardize processes, save time, and reduce errors.

Security vulnerabilities

These can be solved by integrating a standardized set of security best practices and compliance requirements into your DevOps platform. Further, make sure that this platform is the single source of truth for your DevOps environment.

Environment inconsistencies

Environment inconsistencies can lead to different versions of code in different environments, which can cause errors. Most of the time, environment inconsistencies occur when there is a lack of communication and collaboration between the development and operations teams.

How to get started with DevOps automation

One way is to start with a tool that automates a specific process or workflow, combined with a DevOps platform that enables communication and collaboration between the development and operations teams. In addition, the platform should provide a single source of truth for the environment and enable workflow automation.

Start by automating a core process that benefits your teams or business the most:

  1. Understand what the workflow looks like and break down the steps that are involved. This can be done by manually going through the workflow or by using a tool that allows you to visualize the workflow.
  2. Identify which parts of the workflow can be automated. This can be done by looking at the workflow and determining which steps are repetitive, take a long time, or are prone to errors.
  3. Choose a tool or platform that will enable you to automate the workflow. There are many different options available, so it is important to choose one that fits your specific needs.
  4. Implement the automation. This can be done by following the instructions provided by the tool or by working with a developer or external partner who is familiar with the tool.
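Step 2 above, identifying which parts of a workflow are worth automating, can be sketched as a small Python function. The step attributes and the 30-minute threshold are assumptions for illustration:

```python
# Flag workflow steps that are repetitive, slow, or error-prone —
# the automation candidates described in step 2 above.
def automation_candidates(steps):
    return [
        s["name"] for s in steps
        if s["repetitive"] or s["minutes"] > 30 or s["error_prone"]
    ]

# Hypothetical workflow inventory from walking through the process manually.
workflow = [
    {"name": "triage request", "repetitive": True,  "minutes": 5,  "error_prone": False},
    {"name": "write summary",  "repetitive": False, "minutes": 20, "error_prone": False},
    {"name": "provision env",  "repetitive": False, "minutes": 90, "error_prone": True},
]
```

Running this over the inventory surfaces "triage request" and "provision env" as candidates, while the short, one-off "write summary" step stays manual.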

Pro Tip:

  1. Use a tool like Puppet or Chef to automate the provisioning and configuration of your infrastructure.
  2. Use a tool like Jenkins to automate the build, deployment, and testing of your applications.
  3. Use a tool like Selenium to automate the testing of your web applications.
  4. Use a tool like Nagios to monitor your infrastructure and applications.

Summary: DevOps Automation

DevOps automation is important because it can help reduce the need for manual intervention, saving time and money. Automation, in general, should simplify how software is developed, delivered, and managed.

Lack of integration and communication between development and operations, inefficient workflows and missing tools, security vulnerabilities, and environment inconsistencies are some of the typical DevOps Automation challenges.

Get started with DevOps automation by integrating a tool that automates a specific process or workflow. Further, use a DevOps platform that fosters communication and collaboration, and that provides a single source of truth (e.g. Container8.io).

DevOps Assessment

Evaluate your DevOps maturity with our free DevOps assessment checklist.

Process digitization and automation with Jira

Are you currently considering digitizing and automating the processes of your sales teams? But you're not sure why or how to go about it? Then ask yourself if this scenario sounds familiar to you:

Someone finds an interesting product online, has a question about getting started or about pricing, and sends the company an inquiry. At this point, it happens all too often that the question is not followed up by an answer.

It seems relatively easy to send a customer or prospect a response to their inquiry, but it often happens, especially at SMEs, that inquiries get lost.

This article explains how we have automated and digitized our sales process with Jira.

Status quo before the automation

We also had to deal with exactly this challenge in our team.

Inquiries about our services and products accumulate through many different channels. These include contact inquiries via our homepage as well as via social media or by telephone.

After an extensive rebranding and search engine optimization of our homepage at the beginning of 2021, these inquiries, as well as interest in our services, increased rapidly. We had established internal responsibilities to create and maintain an overview of incoming inquiries. However, due to the number of requests, our channels were overloaded, which meant that individual customer inquiries were lost and often answered weeks later.

Find out why you should digitize your business processes and what benefits you can expect here: Read the article.

Define goals together

As a team, we believe in offering the best possible service to interested prospects and potential customers. That's why we decided to fundamentally restructure and improve our process and implement it with Jira.

To implement our project, we defined 3 basic goals in advance, which we wanted to achieve with the Jira project:

  1. Collect all requests from the different channels in one place (a Kanban board) so that several team members can process them jointly.
  2. Set a deadline of 24 hours until the first contact with a new contact or request. For this, we set an SLA of < 24h in the Jira project.
  3. Reduce the manual workload with automations in Jira and automate certain intermediate steps. Since we currently don't have a dedicated sales department, this is particularly important for us.

Using Jira to digitize a sales process

To achieve these goals, we needed an established, digitized sales process that could be easily automated. Jira offers numerous advantages for the implementation of sales processes. Jira makes it possible to integrate all content into one platform and one board. Teams can continue to work efficiently because all necessary customer information as well as information about the communication that took place is stored in the system and can be easily accessed.

In this article, we would like to show you what the concrete implementation with Jira as a basis for sales processes looks like and how we use Jira and workflows for our own sales process.

Jira is a versatile project management tool for departments such as finance, marketing, human resources, and sales. Sales managers can use Jira to channel incoming leads, send automated responses, track processes, or manage quotes.

In Jira, tasks and processes are managed via workflows. A workflow represents the steps of your process and the status that a task (here the request) goes through.

Managing existing and new customer inquiries becomes noticeably easier when you visualize your sales workflow and respond directly to questions about your products or services.

Learn more about Jira and our consulting services

Design of our sales process

At the beginning of our project, we created a conceptual workflow and defined the individual steps in the process.

The first steps to set up the workflow were:

  1. Brainstorming about the different types of requests.
  2. Evaluating the channels through which requests come in.
  3. Detailed discussion, planning, and description of each step in our workflow for new requests.

To create a workflow that meets our requirements, our company's IT and business development experts pooled their content and knowledge. The result is a flexible workflow with individual processes, intermediate steps, and partially automated processes.

Even with the most precise planning, it will happen that further adjustments have to be made at a later point in time or additional optimization potential is discovered. This is where a major advantage of Jira comes into play: Changing workflows and automations is uncomplicated and quick, without having to adjust the entire project.

Creation of a Jira project

After defining your own workflow, it is time to integrate it into Jira. To do so, a new project must be created first.

  1. To do this, click on Projects > Show all projects
  2. Then click Create project in the upper right corner
  3. Select base project > Enter the name and the project key.

We recommend using the base project template as it provides the best way to track, prioritize, and resolve requests.

Creating the Kanban board in Jira

  1. Next, click on Boards > Show all boards
  2. Then click Create board in the upper right corner
  3. Select Kanban board > Select board from an existing project.
  4. Enter the name and select the project that was just created

After the board and project are created, click the Project Settings button to configure preferences such as automation, workflows, SLAs, and users and roles.

Kanban board of our sales process in Jira

Digitization and automation in 6 steps

The following tasks and settings must be adjusted after the project and Kanban board have been created:

  1. Automatic conversion of requests into tickets by linking the various channels
  2. Creation of a workflow and the individual statuses
  3. Defining the SLAs
  4. Creation of automations for different process steps
  5. Kanban board configuration including columns and swimlanes
  6. Definition of responsibilities and notifications

This section provides a rough overview of how we implemented our sales process concept into Jira using workflows and automations. Due to the scope, the following points are only a small insight into the project. If you are interested, we would be happy to discuss the project in detail in a separate meeting.

1. Automatic conversion of requests into tickets

To begin with, it must be ensured that all incoming e-mails containing inquiries are automatically processed by the system. To achieve this, the email system needs to be integrated into Jira. This setting can be changed under Project Settings > Email Request. Make sure that…

  • you have permission to manage the project,
  • public signup or adding customers is enabled for your project, so that you can receive new requests,
  • your email channel is enabled, so that your sales email address can be used to create new requests, and
  • an appropriate request type is set up and selected, so that requests created from emails are assigned this request type.

After the setup, the incoming requests are transferred and automatically converted into tickets. All requests are thus on one board, which enables the team to work together on these requests without having to keep an eye on the various inboxes.

2. Creation of a workflow as well as the individual statuses

Furthermore, a workflow must be created. For a flexible workflow, it is important to select a Software Simplified Workflow Scheme and to create a link between the individual statuses so that the status of each ticket in this workflow can be transferred to any other status. This creates high flexibility and ease of use. Using the Simplified Workflow, the contents in the Kanban board (including columns and statuses) can be changed at any time.

The workflow for the sales process can then be adapted to an individual, personalized workflow based on the requirements collected during the conceptual design phase. A sales workflow structure can look like the following:

3. Defining the SLAs

Good customer service ensures that customers remain loyal to you. An important part of good customer service is responsiveness. With Jira, you can achieve good responsiveness and keep your sales team on track by setting SLAs on how quickly requests should be handled. We set the SLA in our workflow to 24h. If the SLA time remaining to review a request is <60 minutes, our assigned team members will be notified.

SLAs can track the following properties:

  • Respond to all inquiries within X hours.
  • Completion of high-priority requests within X hours.
  • Warning about the expiration of an SLA at X minutes before expiration.
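The SLA logic described above can be sketched in Python. The 24-hour SLA and the 60-minute warning threshold mirror the values in the text, while the function itself is a simplified illustration, not Jira's implementation:

```python
from datetime import datetime, timedelta

# First-response SLA check: 24h to first contact, with a notification
# once less than 60 minutes remain (thresholds as described in the text).
def sla_status(created_at, now, sla_hours=24, warn_minutes=60):
    deadline = created_at + timedelta(hours=sla_hours)
    remaining = deadline - now
    if remaining <= timedelta(0):
        return "breached"
    if remaining < timedelta(minutes=warn_minutes):
        return "notify team"
    return "on track"
```

Jira Service Management evaluates this continuously against its own SLA calendars; the sketch only shows the decision at a single point in time.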

4. Creation of automations for different process steps

In addition to SLAs, further automations can be created. You can add "When" triggers, "If" conditions, and "Then" actions. These parameters define the process and create the automation. For example, we implemented the first step of our sales process, the initial contact with a new prospect, with automations. A new contact automatically receives an email confirming that we have received their inquiry and are taking care of their request. The status of the request then changes from New to Confirmed.

Free Resource

Case Study - Digitized Sales Process

Fill out this form to download our case study for a digitized sales process.

Additional options for automations

As a next step, we want to schedule a first meeting with the interested party to talk about their request and to present our services and way of working. For this, a specific assignee from the team must be associated with the ticket. Furthermore, the ticket must be given either the label lead-de or lead-en.

If, for example, the user XY is set as assignee and the label lead-en is set, the contact receives an automatic email from user XY with their meeting link in English. To trigger this automation, the ticket on the Kanban board only needs to be dragged to the Schedule appointment status field.

Example: Automation Rule

WHEN: status changed > IF: issue matches status = Schedule appointment AND assignee = User XY AND label = lead-en > THEN: send email = Template "Answering contact" AND transition issue = Waiting for appointment AND add comment = 'XALT Bot: Invitation to phone conversation sent'.
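The WHEN/IF/THEN rule above can be expressed as a small Python function to make the branching explicit. This is a simplified sketch of the rule's logic, not Jira's actual automation engine or API:

```python
# Evaluate the example automation rule against a ticket and return the
# list of actions to perform (field names follow the rule in the text).
def apply_rule(ticket):
    actions = []
    if (ticket["status"] == "Schedule appointment"
            and ticket["assignee"] == "User XY"
            and "lead-en" in ticket["labels"]):
        actions.append(("send_email", 'Template "Answering contact"'))
        actions.append(("transition", "Waiting for appointment"))
        actions.append(("comment", "XALT Bot: Invitation to phone conversation sent"))
    return actions

ticket = {"status": "Schedule appointment", "assignee": "User XY", "labels": ["lead-en"]}
```

A ticket in any other status, with another assignee, or without the lead-en label simply produces no actions, mirroring how the IF conditions gate the THEN actions.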

Other automations include simple acknowledgement of receipt, a reminder to the customer if there is no response within 72 hours, and an update of the ticket status to Appointment if the customer has signed up for a time slot via the meeting link.

Conclusion - Jira, automation and reporting

In our automated Jira project, we can now answer and process requests easily and flexibly. Due to our SLAs and the bundling of all requests on a project and Kanban board, no requests are lost anymore and we can provide a good service to all contacts. Through tracking and document management in Jira, it is still possible to easily provide all stakeholders with information about the customer, their interests, and relevant documents such as quotes or contracts.

An additional benefit of the implementation is Jira's extensive reporting feature, which gives us insight into how many requests have arrived on the board and how they have moved through the process.

"Anything that can be digitized will be digitized." - Carly Fiorina, former CEO of Hewlett-Packard

Our Best Practices

Our sales process works without an extensive sales team. Two project owners are responsible for processing the existing requests and ensuring that none are lost. The workflow is designed to be flexible and open, so that consultants who want to process a request themselves can do so according to the workflow. For the various steps in the workflow, from initial contact to preparing the offer, internal contacts with the necessary knowledge are also stored and can be reached in the event of a blocker.

In our internal projects, we always follow the philosophy of creating a holistic system consisting of the input of different stakeholders, which in turn creates output and value in different directions, such as customer service or marketing. For this very reason, we have kept our workflow flexible and merged different departments, such as Marketing and Sales, for the project. This ensures that different teams and groups in your company work together, creating synergy effects and team spirit.

Our marketing managers therefore also have access to the project and the Kanban board, which allows them to view the progress of a contact in our sales funnel. The marketing team can accordingly plan marketing activities and provide contacts with further, informative and relevant information about their request.

Vision and outlook

If you think one step further, you could also include Operations: when the status reaches the Offer column, an email is automatically sent to Operations with all the necessary information.

The created quote is sent, and the status on the Kanban board is moved to the Closed Won or Closed Lost column, depending on whether the quote is accepted or rejected. This again activates an SLA that, for example, sends an internal reminder to Operations after 14 days to check payment receipts.


Digitized business processes for an improved customer and user experience

Customers buy from companies that offer them the best, most seamless, and easiest digital experience.

In today's world, speed is everything. Being first to market and fastest at delivering new features, fixing bugs, or shipping products within a day improves user experience and customer satisfaction. Digital business processes are an integral part of realizing this and are a must-have in a globalized world to meet the expectations of your users and customers.

Automating and digitizing processes for a consistently positive customer experience?

As consumers, customers, and employees, we are spoiled. Big eCommerce companies like Amazon have automated almost 100% of their direct-to-consumer processes. This has enabled them to offer a unique and seamless user experience at every touchpoint: from the initial contact to receiving the package on our front porch, and even returning it to their warehouse.

We log in to our bank account, and every transaction we have ever made can be checked easily. Transactions are automatically processed by the bank's ERP system and transmitted to a web interface. All of that happens in an instant and without any human interaction.

Remember when you had to call an airline to book a flight for your next business trip or holiday? Remember all the hassle you had to go through? You had to talk to customer service and transmit your payment information, only to get the confirmation and tickets by mail. Today, you just search for flights on the internet, pay, and you're good to go.

Customer experience: the key to success?

How about B2B sales processes? Ask yourself: how long does it take your enterprise to answer contact form submissions? One hour? One or two days? Even longer? Do you have an automated delivery system to notify the people in charge? Do you automatically follow up after a certain period? Do you send out an automated thank-you message after form submission?

Customers want to be engaged quickly. If they don't hear from you soon, they will most likely move on to one of your competitors.

So, if you take a closer look, digitized and automated business processes are everywhere, hard to miss, and part of our daily lives.

Wondering what benefits you can gain from digitizing and automating business processes? Find out more here.

Digitalization is on the rise and more important than ever before

Business processes are being fundamentally revamped in many industries to meet customers' and users' expectations. Companies that get it right can offer lower prices due to lower costs, better operational controls, and fewer risks.

But just digitizing an existing process is not enough. Such processes tend to be cumbersome, hard to go through, and not state of the art.

Digital processes should be easy to follow and built to simplify individual steps for your users or customers. You definitely should not just copy and paste an analog process into a digital environment but rather redesign and rebuild it from scratch and merge best practices with digital capabilities.

This means,

  • cut project steps that don’t add value to reduce complexity,
  • reduce the number of documents needed (often one is more than enough),
  • automate decision making steps and notifications,
  • and reduce the number of approvers to a minimum.

Digitized business processes and data collection

To meet today's standards and to outpace your competitors, acquiring and analyzing data is key to business success. Digitized corporate processes enable you to collect key data more easily and subsequently make better decisions. For example, you can collect customer support data on SLAs to improve your ITSM process, or customer behavior data in digital marketing to improve website content, ads, and KPIs.

Yet, digitizing corporate processes is just the beginning. To match the reimagined processes, operating models, skills, organizational structures, and roles often need to be redesigned.

Business team collaboration

Marketing and sales teams need to work closely together and include data in their decision-making process to delight customers along their entire journey. Marketing, for example, may learn how long and how often a specific visitor spent on specific pages before contacting sales. Sales can, in turn, use this data in future customer meetings.

Customer support and success teams can use digital tools such as Jira Service Management and well-designed support portals in conjunction with a help center to build a self-service solution. This allows them to focus exclusively on high-priority support tickets that require personal customer interaction. According to ServiceNow and Gartner, using modern ITSM solutions reduces face-to-face contact by 40% and 72%, respectively, compared to contact by phone and email.

Redesigning and digitizing a process is the first step. To fully leverage digital possibilities, new roles have to be created in your teams. Roles like Data Analyst / Scientist and User Experience Designer are two of the most important.


Digitizing business processes can be driven for countless reasons or business teams. But before you start, ask yourself, "What purpose and goal do I want to achieve?"

Getting this done beforehand simplifies the next steps and the questions you need to answer: What tools or software do we need? Do we need to hire new staff to manage the digitization and be responsible for reaching the desired goals? How do I make sure existing staff are able to use the new tools? Do I need to onboard them or hire an expert?

Going digital enables you not only to increase performance or to save money but ultimately will delight your customers (directly and indirectly) and thereby improve customer experience and happiness.

This might also interest you

Advantages of digital business processes

By digitizing business processes, companies benefit from a whole range of positive effects. Learn more in this article.