Deploying Magento2 – Future Prospects [4/4]

Recap

In the previous posts we dived into our Deployment Pipeline and the Release to the staging or production environments. You should check those posts first before reading this one.

In this post we will share our thoughts on where we want to go with our deployment setup and what we have planned.

To recall, this is our current Deployment Setup in a simplified overview:

I have marked the phases the deployment goes through and the one important point in our deployment: the moment when all artifacts have been created and are available in the filesystem.
This will be the key point in our deployment setups, because we can have a clean cut here and switch or adjust the following phase based on our customers' needs or the requirements of the server infrastructure.

Our goal is to have a standard setup as far as possible and then be able to deploy to physical servers, cloud setups or even use a completely different deployment approach.

Preface

The next paragraphs will be about the different setups we plan to serve with this deployment. Note that the following deployment setups are still under evaluation and just reflect my current thoughts on their specific area. Furthermore, the diagrams shown below are superficial abstractions of the matter, so don't expect too many details here.

Optimising Artifact Generation

Before we can continue to attach our deployment to different setups, there is one optimization I want to tackle in advance.
At the moment we are generating multiple artifacts. As a short reminder, these are the artifacts we are creating:

config.tar.gz
var_di.tar.gz
var_generation.tar.gz
pub_static.tar.gz
shop.tar.gz

To be more flexible in the future and to have a clean integration point (think of it like an interface), I want to reduce the artifacts we create to exactly one.
This should be possible but has not been implemented yet. It will be easier to extend and easier to understand if we have a single artifact to continue with from there.
Furthermore, some setups might even require exactly one artifact, so we would need it anyway.
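
To illustrate the idea, here is a minimal sketch of such a merge step. The file names match the list above; the script itself, the paths and the `release.tar.gz` target are assumptions for illustration, not our final implementation.

```python
#!/usr/bin/env python3
"""Sketch: bundle the individual build artifacts into one release artifact."""
import tarfile
from pathlib import Path

# The artifacts listed above, as they come out of the individual build steps.
PARTIAL_ARTIFACTS = [
    "config.tar.gz",
    "var_di.tar.gz",
    "var_generation.tar.gz",
    "pub_static.tar.gz",
    "shop.tar.gz",
]

def build_release_artifact(build_dir: str, target: str = "release.tar.gz") -> Path:
    """Bundle all partial artifacts into the single archive later phases consume."""
    build_path = Path(build_dir)
    target_path = build_path / target
    with tarfile.open(target_path, "w:gz") as release:
        for name in PARTIAL_ARTIFACTS:
            artifact = build_path / name
            if not artifact.exists():
                raise FileNotFoundError(f"expected build artifact missing: {artifact}")
            release.add(artifact, arcname=name)
    return target_path

if __name__ == "__main__":
    print(build_release_artifact("build/artifacts"))
```

Whether the single artifact wraps the existing archives (as above) or contains a fully unpacked release tree is exactly the kind of detail that is still open.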

Deploying to platform.sh

At the moment we have some Magento2 projects delivered through platform.sh. The deployment process and setup itself currently differs heavily from the previously described setup, mainly for historical reasons. At the time we had to create it, we still had our more or less PULL & PUSH setup described in the first post, Deploying Magento2 – History & Overview [1/4]. With our current platform.sh deployment we still use Jenkins, but mainly to trigger the build and deploy processes on the platform.sh side.
That means that all processes run on the platform.sh setup and thus pull directly from our GitLab or the Magento Composer repository.

This is not ideal due to the speed issues we experience when compiling the assets in our platform.sh setup. Additionally, we need to configure access to the netz98 GitLab and Composer repository and, of course, the Magento Composer repository, as the composer install is run on the platform.sh setup.
To ease this situation we intend to create a setup like this:

As you can see, we generate the assets and the artifact on our build server, which is way faster than doing this in our platform.sh setup. Once the artifact is available, we push it to the Git repository offered by platform.sh, thus triggering the actual deployment to the production environment.
The final steps are to upgrade the production database, import the config, control the release, clean up, etc.
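
A rough sketch of how that push step could look, assuming a small script run by Jenkins after the artifact has been built. The remote URL, branch and paths are placeholders; platform.sh's own build and deploy hooks would then take over on their side.

```python
#!/usr/bin/env python3
"""Sketch: push a pre-built release artifact to the platform.sh Git repository."""
import subprocess
import tarfile
from pathlib import Path

PLATFORM_REMOTE = "abc123@git.eu.platform.sh:abc123.git"  # placeholder project remote
WORKDIR = Path("platform-checkout")

def run(*cmd: str) -> None:
    subprocess.run(cmd, cwd=WORKDIR, check=True)

def push_release(artifact: str, release_id: str) -> None:
    # Fresh clone of the platform.sh repository (master maps to the production environment).
    subprocess.run(
        ["git", "clone", "--branch", "master", PLATFORM_REMOTE, str(WORKDIR)],
        check=True,
    )
    # Replace the tracked files with the content of the pre-built artifact.
    run("git", "rm", "-r", "--quiet", ".")
    with tarfile.open(artifact, "r:gz") as tar:
        tar.extractall(WORKDIR)
    run("git", "add", "--all")
    run("git", "commit", "-m", f"Release {release_id}")  # assumes git identity is configured on the build server
    # Pushing triggers platform.sh's build and deploy hooks, where the database
    # upgrade, config import and cleanup would run.
    run("git", "push", "origin", "master")

if __name__ == "__main__":
    push_release("build/artifacts/release.tar.gz", "2017-06-01-1")
```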

In theory this should work, because we are just pushing code to platform.sh which is then used to run our application. We are planning to try this approach with the next platform.sh setup, probably in a month's time. You can expect a post about our experience with this.

Deploying to AWS using CodeDeploy

We are working on AWS cloud deployments as well. With the approach we are following now, we should be able to deploy to an AWS cloud setup too. We are evaluating different approaches to meet our customers' requirements while still being cost-effective.

In this version we would deploy our code using AWS CodeDeploy, which takes care of updating the EC2 instances. The database upgrade would then be triggered on an admin EC2 instance which is not part of the auto-scaling group.

This is an example of how the deployment of the source code / the application might look. I know this is a rather simple setup; depending on the customer's needs and budget, this is one way to go.
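
To make the idea a bit more concrete, here is a hedged sketch using boto3. The application name, deployment group, S3 bucket and the SSM-based trigger for the database upgrade are assumptions on my side; the actual mechanism is still under evaluation.

```python
#!/usr/bin/env python3
"""Sketch: roll out a revision via CodeDeploy and upgrade the DB on the admin instance."""
import boto3

codedeploy = boto3.client("codedeploy")
ssm = boto3.client("ssm")

def deploy(revision_key: str) -> str:
    """Create a CodeDeploy deployment from the artifact previously uploaded to S3."""
    response = codedeploy.create_deployment(
        applicationName="magento2-shop",             # assumed application name
        deploymentGroupName="production-frontends",  # assumed deployment group
        revision={
            "revisionType": "S3",
            "s3Location": {
                "bucket": "example-deploy-artifacts",  # assumed artifact bucket
                "key": revision_key,
                "bundleType": "tgz",
            },
        },
    )
    return response["deploymentId"]

def upgrade_database(admin_instance_id: str) -> None:
    """Run setup:upgrade once, on the admin instance outside the auto-scaling group."""
    ssm.send_command(
        InstanceIds=[admin_instance_id],
        DocumentName="AWS-RunShellScript",
        Parameters={
            "commands": [
                "cd /var/www/magento && php bin/magento setup:upgrade --keep-generated"
            ]
        },
    )
```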

Deploying to AWS using ECS

Deploying the source code to the EC2 instances is one way to go. You can also use the Amazon EC2 Container Service (in short, Amazon ECS) to create containers and deploy them to your EC2 instances. In short, you run one or more containers on your EC2 instances and control those containers through Amazon ECS, with the container images stored in the Amazon EC2 Container Registry (ECR).

What we plan on doing here is creating the container image based on the artifacts we created using the standard deployment mechanism. This pre-built container image is then pushed to the registry. From there the deployment to the EC2 instances is controlled. The container definitions and the images to use for them are defined using Task Definitions; you can define multiple containers and the EC2 instances they shall be running on. The above overview is limited to the application deployment as this is the main target of this blog series. We might go into more detail on our plans for different AWS deployment setups with a more complete view.
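
Again a hedged sketch of what the roll-out step could look like with boto3: register a new Task Definition revision pointing at the freshly pushed image and let the service replace the running containers. Cluster, service, family and image names are assumptions for illustration.

```python
#!/usr/bin/env python3
"""Sketch: roll out a new container image through ECS Task Definitions."""
import boto3

ecs = boto3.client("ecs")

def roll_out(image_tag: str) -> None:
    # Register a new revision of the task definition that references the
    # pre-built image pushed to the registry by the build server.
    task_def = ecs.register_task_definition(
        family="magento2-app",  # assumed task definition family
        containerDefinitions=[
            {
                "name": "magento2",
                "image": f"123456789012.dkr.ecr.eu-central-1.amazonaws.com/magento2:{image_tag}",
                "memory": 2048,
                "portMappings": [{"containerPort": 80, "hostPort": 0}],  # dynamic host port behind the load balancer
                "essential": True,
            }
        ],
    )
    # Point the running service at the new revision; ECS then replaces the
    # containers on the EC2 instances according to its deployment configuration.
    ecs.update_service(
        cluster="magento2-production",  # assumed cluster name
        service="magento2-web",         # assumed service name
        taskDefinition=task_def["taskDefinition"]["taskDefinitionArn"],
    )

if __name__ == "__main__":
    roll_out("2017-06-01-1")
```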

Deploying to …

Thinking ahead, we might run into unexpected or complicated server environments. Following this push-only approach, we have a way that should be re-usable in most cases, be it deploying over a restrictive VPN connection or to a highly secured server which does not allow a PULL.

Summary

This series was all about introducing our way of automatically deploying to our environments and how we got there. I hope you got a good understanding of the advantages of a PUSH deployment and how you might achieve it yourself.

As always, leave a comment if you got anything to add or to give us some feedback.

Oh and …

P.S.

As I mentioned in my last post, I am working on a default setup for Magento2 deployments. It is meant to be used as a starting point for custom deployments and helps you get your automatic deployment pipeline up and running in a short amount of time. Furthermore, I want to create a central point where issues or special cases regarding the asset generation are handled.
It will be configurable and highly customizable, and it will contain some basic tasks that can be re-used.
The project will be completely open source and available via GitHub.
My next post will be an introduction to that deployment, so stay tuned and leave a message here or ping me on Twitter if you feel like it.

One thought on “Deploying Magento2 – Future Prospects [4/4]”

  1. Really nice set of articles. I came across them because I felt our deployments were somewhat broken. We had contacted a Magento solutions company to set it up. It worked initially but would fail from time to time, especially after moving to Magento 2.1. Firstly, in your article I like the fact that the artifact generation phase happens on the build server. For our setup it currently happens in production, and that's bad. In the case of failure, rollback is not automatic. I really liked when I saw the use of CodeDeploy. We use Bamboo for our build server and use CodeDeploy to deploy to the front-end instances on AWS.
    How do you deal with setup:upgrade for multi-server deployments, and do you deploy one after the other or all at once using CodeDeploy? We currently deploy one after the other for the front-end instances and there's considerable downtime when our deployments include an upgrade to schema/data versions in Magento.
