
Deploying Magento2 – Future Prospects [4/4]


This post is part of a series:

Recap

In the previous posts we dived into our Deployment Pipeline and the Release to the staging or production environments. You should check those posts first before reading this one.

In this post we will share our thoughts on where we want to go with our deployment setup and what we have planned.

To recall, this is our current Deployment Setup in a simplified overview:

I have marked the phases the deployment goes through, as well as the one important point in our deployment: the moment when all artifacts have been created and are available in the filesystem.
This is the key point in our deployment setups, because we can make a clean cut here and switch or adjust the following phase based on our customers' needs or the server infrastructure requirements.

Our goal is to have a standard setup as far as possible and then be able to deploy to physical servers, cloud setups, or even use a completely different deployment approach.

Preface

The next paragraphs cover the different setups we plan to serve with this deployment. Note that the following deployment setups are still under evaluation and just state my current thoughts on their specific area. Furthermore, the diagrams shown below are superficial abstractions of the matter, so don't expect too many details here.

Optimising Artifact Generation

Before we can continue to attach our deployment to different setups, there is one optimization I want to tackle in advance.
At the moment we are generating multiple artifacts. As a short reminder, these are the artifacts we are creating:

To be more flexible in the future and to have a clean integration point (think of it like an interface), I want to reduce the artifacts we create to exactly one.
This should be possible but has not been implemented yet. With a single artifact to continue from, the setup will be easier to extend and easier to understand.
Furthermore, some setups might even require exactly one artifact, so we would need it anyway.

Deploying to platform.sh

At the moment we have some Magento2 projects delivered through platform.sh. The deployment process and setup itself currently differs heavily from the previously described setup, mainly for historical reasons. At the time we had to create it, we still had our more or less PULL & PUSH setup described in the first post, Deploying Magento2 – History & Overview [1/4]. With our current platform.sh deployment we still use Jenkins, but mainly to trigger the build and deploy processes on the platform.sh side.
That means that all processes run on the platform.sh setup and thus directly pull from our GitLab or the Magento Composer repository.

This is not ideal due to the speed issues we experience when compiling the assets in our platform.sh setup. Additionally, we need to configure access to the netz98 GitLab and Composer repository, and of course the Magento Composer repository, as the composer install is run on the platform.sh setup.
To ease this situation we are planning to create a setup like this:

As you can see, we generate the assets and the artifact on our build server, which is way faster than doing this in our platform.sh setup. Once the artifact is available, we push it to the git repository offered by platform.sh, thus triggering the actual deployment to the production environment.
The final steps are to upgrade the production database, import the configuration, control the release, clean up, etc.
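A rough sketch of what the push step could look like, assuming platform.sh exposes the project's git remote in its usual form (project ID, region, branch, and build number are placeholders, not our actual values):

```shell
# on the build server, inside the directory containing the finished artifact
git init && git add -A
git commit -m "Release <build-number>"

# remote URL format offered by platform.sh projects (placeholders)
git remote add platform "<project-id>@git.<region>.platform.sh:<project-id>.git"

# pushing to the environment branch triggers platform.sh's build & deploy
git push platform master
```

The key property is that only finished code is pushed; no asset compilation has to happen on the platform.sh side anymore.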

In theory this should work, because we are just pushing code to platform.sh, which is then used to run our application. We plan to try this approach with the next platform.sh setup, probably in a month's time. You can expect a post about our experience with this.

Deploying to AWS using CodeDeploy

We are working on AWS cloud deployments as well. With the approach we are following now, we should be able to deploy to an AWS cloud setup too. We are evaluating different approaches to meet our customers' requirements while still being cost-effective.

In this version we would deploy our code using AWS CodeDeploy, which takes care of updating the EC2 instances. The database upgrade would then be triggered on an admin EC2 instance which is not part of the auto-scaling group.

This is an example of how the deployment of the source code / the application might look. I know this is a rather simple setup; depending on the customer's needs and budget, it is one way to go.
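For reference, CodeDeploy reads an appspec.yml file from the root of the deployed revision to learn where files go and which lifecycle hooks to run. A minimal sketch, assuming the artifact should land in /var/www/magento and that a hypothetical scripts/after_install.sh runs the Magento upgrade commands:

```yaml
version: 0.0
os: linux
files:
  - source: /
    destination: /var/www/magento
hooks:
  AfterInstall:
    # hypothetical helper that runs setup:upgrade, cache flush, etc.
    - location: scripts/after_install.sh
      timeout: 300
      runas: www-data
```

On the admin instance, the same hook script could additionally trigger the database upgrade, while the auto-scaled instances skip that step.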

Deploying to AWS using ECS

Deploying the source code to the EC2 instances is one way to go. You can also use Amazon EC2 Container Service (Amazon ECS for short) to create containers and deploy them to your EC2 instances. In short, you run one or more containers on your EC2 instances and control those containers through Amazon ECS.

What we plan on doing here is creating the container image based on the artifact we created using the standard deployment mechanism. This pre-built container image is then pushed to the Amazon EC2 Container Registry (ECR). From there, the deployment to the EC2 instances is controlled. The containers and the images to use for them are defined using task definitions; you can define multiple containers and the EC2 instances they shall run on. The above overview is limited to the application deployment, as this is the main target of this blog series. We might go into more detail on our plans for the different AWS deployment setups in a more complete view.
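As an illustration of what such a task definition might contain (image URI, names, and resource sizes are placeholders, not our actual setup):

```json
{
  "family": "magento-web",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "<account-id>.dkr.ecr.<region>.amazonaws.com/magento:<build-number>",
      "memory": 2048,
      "essential": true,
      "portMappings": [
        { "containerPort": 80, "hostPort": 80 }
      ]
    }
  ]
}
```

Rolling out a new release then boils down to pushing a new image tag and registering a new revision of the task definition.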

Deploying to …

Thinking ahead, we might run into unexpected or complicated server environments. Following this push-only approach, we have a way that should be reusable in most cases, be it deploying over a restrictive VPN connection or to a highly secured server which does not allow a PULL.

Summary

This series was all about introducing our way of automatically deploying to our environments and how we got there. I hope you got a good understanding of the advantages of a PULL deployment and of how you might achieve it yourself.

As always, leave a comment if you have anything to add or want to give us some feedback.

Oh and …

P.S.

As I mentioned in my last post, I am working on a default setup for Magento2 deployments. It is meant to be used as a starting point for custom deployments and helps you get your automatic deployment pipeline up and running in a short amount of time. Furthermore, I want to create a central point where issues or special constellations regarding the asset generation are handled.
It will be configurable and highly customizable, and it will contain some basic tasks that can be re-used.
The project will be completely open-source and available via GitHub.
My next post will be an introduction to that deployment, so stay tuned and leave a message here or ping me on Twitter if you feel like it.

Magento1 since 2008 / Magento2 since 2015
Passionate Road Bike Rider (~3.500km/yr)
Loves building software with an elaborate architecture and design
3x Magento Certified
Software Developer >10 years
Head of Magento Development @ netz98
Cronjob performance optimization: Magento 1 vs. Magento 2


Introduction

This article is about problems that can occur with Magento cronjobs. The standard way to configure the crontab for Magento 1 has its limits. The more custom cronjobs a Magento system has, the more likely the system will face cronjob problems. The most common issues are:

  • The indexer cronjob (Magento Enterprise, malways mode) takes longer than usual, so that other cronjobs (mdefault mode) are skipped (not executed) while the indexer runs
  • Some of the cronjobs in the mdefault scope take a long time to run and block others

The second issue can be avoided if this rule is followed: create a shell script and add a separate crontab entry on the server for long-running jobs, e.g. imports or exports.

Magento 1

Plain Magento 1

If we have a plain Magento 1, we can split the malways and mdefault cronjob modes:
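A minimal sketch of the split, assuming Magento is installed at /var/www/magento (path and interval are placeholders):

```shell
# run "always" jobs (e.g. the Enterprise indexer) in their own crontab entry
*/5 * * * * /bin/sh /var/www/magento/cron.sh cron.php -malways

# run all "default" jobs in a separate entry, so neither mode can block the other
*/5 * * * * /bin/sh /var/www/magento/cron.sh cron.php -mdefault
```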

This will prevent the indexer from blocking other mdefault jobs, and an mdefault job from blocking the indexer.

But there are many more parallelization options if you use the Magento 1 extension AOE Scheduler.

Magento 1 with AOE Scheduler

The AOE Scheduler has multiple benefits for managing Magento Cronjobs. In this article I want to focus on the “cron groups” feature.

Instructions on how to use cron groups can be found here.

The main idea is to split Magento cronjobs into groups. The execution of those groups can be triggered separately via the server crontab.

I recently introduced this feature in a project. These are the steps I needed to take:

  1. Create a new module, e.g. Namespace_AoeSchedulerCronGroups
    This module contains only an empty helper and a config.xml.
  2. In the config.xml, define the group for each cronjob in the system like this:

    To get a full list of cronjobs you can either use the backend grid of AOE Scheduler or use the following Magerun command:

    How cronjobs are split into groups should be based on project knowledge and experience. In my case the groups were something like this:

    • magento_core_general 
    • general
    • important_fast
    • important_long_running
    • projectspecific_general
    • projectspecific_important
    • erp
    • erp_long_running
  3. After deploying the new code base with the new module to the server, edit the crontab, remove the standard cron.sh / cron.php call, and add something like this (matching my example groups):

    The last entry is pretty important: it executes jobs which are not assigned to any group, e.g. newly developed cronjobs that didn't get a group assignment yet.
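As a hedged sketch of the pieces described in the steps above (the job code is invented for illustration; the <groups> node and the scheduler_cron.sh options reflect my understanding of AOE Scheduler and should be checked against its documentation), the group assignment in config.xml could look like this:

```xml
<!-- app/code/local/Namespace/AoeSchedulerCronGroups/etc/config.xml (sketch) -->
<config>
    <crontab>
        <jobs>
            <!-- assign an existing job code to one of the groups -->
            <some_erp_import_job>
                <groups>erp</groups>
            </some_erp_import_job>
        </jobs>
    </crontab>
</config>
```

The full job list can be printed with n98-magerun's sys:cron:list command. The crontab entries might then look roughly like this (paths are placeholders, and only a few of the example groups are shown):

```shell
* * * * * /bin/sh /var/www/magento/scheduler_cron.sh --mode always
* * * * * /bin/sh /var/www/magento/scheduler_cron.sh --mode default --includeGroups erp,erp_long_running
# catch-all entry for jobs without a group assignment: exclude all defined groups
* * * * * /bin/sh /var/www/magento/scheduler_cron.sh --mode default --excludeGroups erp,erp_long_running
```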

Magento 2

Magento 2 comes with the cron groups feature out of the box. The feature and how to configure multiple groups are explained in the Magento DevDocs:

In Magento 2 there are more explicit options for cron groups than in Magento 1, even with the AOE Scheduler module installed:

Groups are defined in a cron_groups.xml file and each group may get its own configuration values:
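A minimal sketch of such a file (the group id and the values are illustrative, not a recommendation; the schema reference follows the Magento_Cron module):

```xml
<!-- app/code/Vendor/Module/etc/cron_groups.xml (sketch) -->
<config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="urn:magento:module:Magento_Cron:etc/cron_groups.xsd">
    <group id="erp">
        <schedule_generate_every>15</schedule_generate_every>
        <schedule_ahead_for>20</schedule_ahead_for>
        <schedule_lifetime>15</schedule_lifetime>
        <history_cleanup_every>10</history_cleanup_every>
        <history_success_lifetime>60</history_success_lifetime>
        <history_failure_lifetime>600</history_failure_lifetime>
        <use_separate_process>1</use_separate_process>
    </group>
</config>
```

With use_separate_process enabled, each group runs in its own process, which is exactly the isolation we had to build by hand in Magento 1.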

Conclusion

In this article we looked at the evolution of cronjob performance optimization, beginning with plain Magento 1, over Magento 1 with the AOE Scheduler extension installed, up to Magento 2. This is a good example of how community modules with nice features can be a benefit for Magento, and of how Magento can implement those features in future releases.

Feel free to leave a comment.

Magento Certified Developer Plus