
Use Swagger to generate a fully functional Magento API client


Magento 2 comes with a nice swagger schema which describes the Webapi. The Magento team was very clever to choose swagger: you not only get a schema, you get a complete interactive API client as well.

A swagger schema is a JSON document that formalizes a REST API. Formalized documents have the big advantage that you can process the data with a machine. One idea I had was to create a PHP API for the Magento 2 API. Fortunately, the swagger team created a code generator tool. I really like the idea of generating code from the schema. The swagger-codegen tool supports multiple languages, and PHP is one of the standard ones.
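To illustrate how machine-readable the schema is, here is a tiny standalone sketch that walks the paths section of a much-reduced, made-up swagger document and lists every endpoint, which is exactly the information the generator works from:

```php
<?php
// Toy excerpt of a swagger schema; the real Magento schema has hundreds of paths.
$json = <<<'JSON'
{
  "swagger": "2.0",
  "paths": {
    "/V1/modules": {"get": {"operationId": "modulesGet"}},
    "/V1/products/{sku}": {"get": {"operationId": "productGet"}}
  }
}
JSON;

$schema = json_decode($json, true);

// Every path/method pair is formally described, so tooling can enumerate them.
foreach ($schema['paths'] as $path => $methods) {
    foreach ($methods as $method => $operation) {
        echo strtoupper($method), ' ', $path, "\n";
    }
}
// Output:
// GET /V1/modules
// GET /V1/products/{sku}
```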

The code generator tool can be found here:

If you are on a Mac, it's possible to install the code generator with Homebrew.

After the installation you can test the tool with the help command.
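The exact commands did not survive in this archived copy; with Homebrew the install and smoke test presumably look like this:

```shell
brew install swagger-codegen
swagger-codegen help
```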

On my machine I can see this help output. Works!

Run the generator

Now we are able to run the code generator. I used the schema from the public developer documentation. You can also use your own schema from an existing installation.
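The original command listing is missing here. A hedged sketch of the generate call, assuming the schema was saved locally (for an existing installation it can be fetched from the shop's /rest/all/schema?services=all endpoint):

```shell
swagger-codegen generate -i schema.json -l php -o ./magento2-api-client
```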

You should see a long list of generated classes like this:

Run the unit tests

After the code generation is done, we should run the generated unit tests. You can run them by typing vendor/bin/phpunit in the project folder.

Test the new generated client

After that we can try our freshly generated API client library.

As an example we will fetch all the installed Magento modules of our shop instance.

Save the script as installed_modules.php and replace {{YOUR_SHOP_URL}} with a local or remote shop URL and {{YOUR_API_TOKEN}} with an API bearer token of your user. A brief description of how to generate API tokens can be found in the developer documentation topic “Token-based authentication“.
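The original script listing did not survive in this copy. As a rough stand-in, here is a sketch that performs the same /V1/modules call with plain curl instead of the generated client classes; the helper function and the env-var handling are my own additions, not part of the original post:

```php
<?php
// installed_modules.php (sketch) - fetches the installed modules via the Webapi.
function build_modules_request(string $shopUrl, string $token): array
{
    return [
        'url' => rtrim($shopUrl, '/') . '/rest/V1/modules',
        'headers' => [
            'Authorization: Bearer ' . $token,
            'Accept: application/json',
        ],
    ];
}

$request = build_modules_request(
    getenv('SHOP_URL') ?: 'http://{{YOUR_SHOP_URL}}',
    getenv('API_TOKEN') ?: '{{YOUR_API_TOKEN}}'
);

if (getenv('SHOP_URL') !== false) { // only fire the request against a real shop
    $ch = curl_init($request['url']);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_HTTPHEADER, $request['headers']);
    $modules = json_decode(curl_exec($ch), true);
    curl_close($ch);
    print_r($modules); // array of installed module names
}
```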

Now run the script with php installed_modules.php.

On my local machine I am getting this output:


That’s it. We have a fully functional REST API client in PHP to call Magento 2 instances. The generated code is not perfect, but very usable.

You can try it yourself. For all lazy developers we pushed the code to a public GitHub repository.

Have fun!

– Creator of n98-magerun
– Fan of football club @wormatia
– Magento user since version 0.8 beta
– 3x certified Magento developer
– PHP Top 1.000 developer (yes, I’m PHP4 certified and sooooo old)
– Chief development officer at netz98
Nice to know: Install N98-Magerun via Composer


There is a so far undocumented installation procedure for Magerun that is extremely handy in project configurations.

You just require Magerun within the Magento project and can then execute it from the vendor bin folder:
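The command block is missing from this copy; the require-and-run steps are simply:

```shell
composer require --dev n98/magerun2
vendor/bin/n98-magerun2 sys:info
```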

Afterwards, if you commit the composer.json and composer.lock files, the setup is a take-away for the whole team.

It does not matter whether you're running it locally, inside a Docker container, or on a completely different system: after composer install, n98-magerun2 is available on all target systems.

Just Another Install Example

Here is another example I just did with one of our systems that runs via Docker on my end, while I'm installing on my local system (the folder is mounted inside the Docker container):
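The session transcript is missing here; the install on the host, against a project whose PHP requirements only the container satisfies, boils down to:

```shell
composer require --dev n98/magerun2 --ignore-platform-reqs
```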

The --ignore-platform-reqs switch makes Composer install it even though my local system does not meet all Magento 2 requirements.

Introducing MageDeploy2


In our recent post series about deploying Magento2 using Jenkins and Deployer, I showed you how our deployments are set up.

In case you haven’t read them and are interested in the details here are the links:

While writing those articles I identified quite a few improvements and generalizations that could be made to render this deployment more maintainable, extensible and customizable. I wanted a deployment setup that allows local execution with colored output, execution on a build server without interaction, and usage in a build pipeline.
Furthermore, I wanted the deployment setup to be usable not only within netz98 but by the whole Magento community.

What I came up with I called MageDeploy2 which I will introduce with this post.

If you read the previous posts you will probably remember the diagrams showing the actions executed on the particular servers. I used one of them to mark the areas which are provided by the MageDeploy2 setup.

Now let’s go into details on how those phases and steps are implemented and what you need to get started with a PUSH deployment for Magento2 yourself.

About MageDeploy2

MageDeploy2 combines multiple technologies and open-source projects to provide the deployment setup.
It basically is a set of tools, configuration files and some custom tasks for Robo and Deployer, all tailored to fit the needs of deploying a Magento2 project.

For those new to Robo and Deployer:

  • Robo is a task runner that allows you to write fully customizable tasks in common OOP PHP style
  • Deployer is a deployment tool for PHP which follows an approach similar to Capistrano

I will not go into too much detail on how those tools work; you can get that from their designated websites and documentation.

MageDeploy2 can be divided into 3 phases that can each be triggered separately.

  • magento-setup (preparing a local magento-setup)
  • artifacts-generate (generating the assets and packaging them)
  • deploy (release to production environment)

Those phases are implemented as commands in the RoboFile.
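The command listing is missing in this copy; based on the phase names above and the command overview later in the post, the RoboFile commands are presumably invoked like this:

```shell
vendor/bin/robo deploy:magento-setup        # prepare the local Magento setup
vendor/bin/robo deploy:artifacts-generate   # generate and package the assets
vendor/bin/robo deploy:deploy               # release to the target environment
vendor/bin/robo deploy                      # run all three phases in sequence
```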

MageDeploy2 is divided into different packages that are installed when installing through composer.

  • mwltr/robo-deployer : contains Robo-Tasks for deployer
  • mwltr/robo-magento2 : contains Magento2 specific Robo-Tasks

Those Robo-Tasks are not a full set of all possible commands and options but currently offer the commands and modifiers needed in a deployment scenario. They are decoupled and can be re-used in other projects.

As far as the deployer setup is concerned, MageDeploy2 uses n98/n98-deployer to include deployer configurations and tasks, namely:

  • set of Magento2 specific tasks
  • Magento2 Default Recipe
  • RoleManager for servers
  • optimized deployer standard tasks


As I mentioned earlier, the MageDeploy2 setup uses Robo to control the local setup and the overall deployment process. To achieve the actual deployment to the target environment, it comes with a pre-configured Deployer setup. Please note that using Deployer is not mandatory; you can use whatever tool you like.

It also expects that you have a git repository available in which you have committed your Magento2 composer.json file, either in the root or in a sub-directory. Right now we only support git, but it should not be that big of a problem to connect another VCS.
Finally, you need to have configured the access to the Magento composer repository for your current user.

Create a new Deployment

To create a new deployment setup, just run the following command.
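The command itself is missing from this copy. Judging from the composer-based setup described below, it is presumably a composer create-project call along these lines (the exact package name may differ):

```shell
composer create-project mwltr/magedeploy2-project my-deployment
```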

Note: Robo needs to be installed using composer, otherwise the usage of custom tasks is not available. See the Robo documentation, “Including Additional Tasks”.


After the installation you have to edit the magedeploy2.php and the deploy.php file to suit your needs. MageDeploy2 assumes you have a git repository containing the Magento composer.json, and that your local build environment can clone said repository and download the Magento packages using composer.

MageDeploy2 Configuration

To configure MageDeploy2, use the following command:

It will guide you through the most important configuration options. Don't worry, you can edit the magedeploy2.php later on.

Next, run

to validate that your build environment is set up correctly.

Setup local build environment

If you are done with the configuration in magedeploy2.php, you can check whether your build environment can be set up. To do so, run this command:

You can use a different branch or tag depending on your git repository setup.

After the magento-setup has run successfully, you can now generate the assets by running the command:

After this command is complete you should see the packages beneath the shop directory.

At this point we are sure that the local build setup is working and we can now continue with releasing our project.

Deployer Configuration

To evaluate, we will create a local deployment target. To do so, copy the local.php.dist by running

and set the config values according to your local deploy target.
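The copy command is missing here; it is presumably just:

```shell
cp config/local.php.dist config/local.php
```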

Check the configuration in deploy.php and adjust it to your requirements. The default configurations and tasks are defined in \N98\Deployer\Recipe\Magento2Recipe. You can also have a look at all available configuration options in the Deployer documentation.

Setting up deploy directory tree

After you are done with the configuration, you can initialize the directory tree of the deploy target by running

This will create the required directories on your local deploy target.

Setting up deploy target (optional)

If you want to set up your deploy target as well you can use the command

It will make an initial deployment to push your code to the deploy target.

When this is done, navigate to your local deploy_path and run the Magento install command to set up the database. This might look something like this:
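The install command is missing from this copy; a typical bin/magento setup:install call (all values are placeholders for your local target) looks like this:

```shell
bin/magento setup:install \
  --base-url=http://magedeploy2.local/ \
  --db-host=localhost --db-name=magedeploy2 \
  --db-user=root --db-password=secret \
  --admin-firstname=Admin --admin-lastname=User \
  --admin-email=admin@example.com \
  --admin-user=admin --admin-password='Admin123!' \
  --language=en_US --currency=EUR --timezone=Europe/Berlin \
  --use-rewrites=1
```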

Now we have a Magento database and configuration on our deploy target and are ready to continue with the final step.

Deploying the project

At this point you have set up the build environment and the target environment and can finally start the actual deployment. You can do so by running:

Congrats, you have successfully set up your deployment pipeline and run your first deployment!


If you went through the tutorial above, you may have already used most of the available commands.
A full list of commands is available in the GitHub repository here:
The following diagram shows each command's responsibility within the deployment pipeline.


Runs all tasks in the stage magento-setup. It will set up or update a local Magento instance by pulling the source code from git, installing composer dependencies, and installing or updating a local database.


Runs the Magento setup:di:compile and setup:static-content:deploy commands to generate the assets, using your configuration from the magedeploy2.php.

After generating those assets it will create packages, again according to your configuration.
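Under the hood, this stage boils down to the two well-known Magento commands (the locales are examples and come from the magedeploy2.php configuration):

```shell
bin/magento setup:di:compile
bin/magento setup:static-content:deploy en_US de_DE
```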


This command will invoke deployer to release your project and push the prepared artifacts to the server.


Triggers the deployment with all its stages and can be considered to run deploy:magento-setup, deploy:artifacts-generate and deploy:deploy internally.


MageDeploy2 was designed to be highly customizable to suit your needs. Here are some areas that are easy to adjust:

  • Add or overwrite Robo-Tasks
  • Add or overwrite existing or additional configuration to MageDeploy2
  • Customize Deployer but still have the basic set of tasks available
  • Exchange deployer with a different tool

Going into detail here would exceed the purpose of this introduction. We may cover this area in a later post though.

Final Words

This is it. I hope you like the tool and that it will be helpful in setting up a PUSH deployment of your own.
And as always let me know your thoughts and feedback in the comments below or contact me directly.


Magento1 since 2008 / Magento2 since 2015
Passionate Road Bike Rider (~3.500km/yr)
Loves building software with an elaborate architecture and design
3x Magento Certified
Software Developer >10 years
Head of Magento Development @ netz98
Deploying Magento2 – Future Prospects [4/4]


This post is part of series:


In the previous posts we dived into our deployment pipeline and the release to the staging and production environments. You should check those posts first before reading this one.

In this post we will share our thoughts on where we want to go with our deployment setup and what we have planned.

To recall, this is our current Deployment Setup in a simplified overview:

I have marked the phases the deployment goes through and the one important point in our deployment, which is when all artifacts have been created and are available in the filesystem.
This is the key point in our deployment setups, because we can have a clean cut here and switch or adjust the following phase based on our customers' needs or the server infrastructure's requirements.

Our goal is to have a standard setup as far as possible and then be able to deploy to physical servers, cloud setups, or even use a completely different deployment approach.


The next paragraphs describe the different setups we plan to serve with this deployment. Note that the following deployment setups are still under evaluation and just state my current thoughts on their specific area. Furthermore, the diagrams shown below are superficial abstractions of the matter, so don't expect too many details here.

Optimising Artifact Generation

Before we continue to attach our deployment to different setups, there is one optimization I want to make in advance.
At the moment we are generating multiple artifacts. As a short reminder, these are the artifacts we are creating:

To be more flexible in the future and to have a clean integration point (think of it like an Interface), I want to reduce the artifacts we create to exactly one.
This should be possible but has not been implemented yet. It will be easier to extend and easier to understand if we have one artifact to continue with from there.
Furthermore, some setups might even require exactly one artifact, so we would need it anyway.

Deploying to

At the moment we have some Magento2 projects delivered through a hosting platform. The deployment process and setup there currently differ heavily from the previously described setup, mainly for historical reasons: at the time we had to create it, we still had the more or less PULL & PUSH setup described in the first post, Deploying Magento2 – History & Overview [1/4]. With our current deployment we still use Jenkins, but mainly to trigger the build and deploy processes on the platform side.
That means that all processes run on that platform and thus pull directly from our GitLab or the Magento composer repository.

This is not ideal due to the speed issues we experience when compiling the assets there. Additionally, we need to configure access to the netz98 GitLab and composer repository, and of course the Magento composer repository, since composer install runs on the platform.
To ease these situations we tend toward a setup like this:

As you can see, we generate the assets and the artifact on our build server, which is way faster than doing this on the platform. Once the artifact is available we push it to the git repository offered by the platform, thus triggering the actual deployment to the production environment.
The final steps are to upgrade the production database, import config, control the release, clean up, etc.

In theory this should work, because we are just pushing code which is then used to run our application. We plan to try this approach with the next setup, probably in a month's time. You can expect a post about our experience with it.

Deploying to AWS using CodeDeploy

We are working on AWS cloud deployments as well. With the approach we are following now, we should be able to deploy to an AWS cloud setup too. We are evaluating different approaches to meet our customers' requirements while still being cost-effective.

In this version we would deploy our code using AWS CodeDeploy, which takes care of updating the EC2 instances. The database upgrade would then be triggered on an admin EC2 instance which is not in the auto-scaling group.

This is an example of how the deployment of the source code / the application might look. I know this is a rather simple setup, but depending on the customer's needs and budget it is one way to go.

Deploy to AWS using ECS

Deploying the source code to the EC2 instances is one way to go. You can also use Amazon EC2 Container Service (Amazon ECS for short) to create containers and deploy them to your EC2 instances. In short, you run one or more containers on your EC2 instances and control those containers through Amazon ECS.

What we plan on doing here is creating the container image based on the artifacts we created using the standard deployment mechanism. This pre-built container image is then pushed to the Amazon ECS registry, from where the deployment to the EC2 instances is controlled. The container definitions and the images to use for them are defined using task definitions; you can define multiple containers and the EC2 instances they shall be running on. The above overview is limited to the application deployment, as this is the main target of this blog series. We might go into more detail on our plans for different AWS deployment setups in a more complete view.

Deploying to …

Thinking ahead, we might run into unexpected or complicated server environments. Following this push-only approach, we have a way that should be re-usable in most cases, be it deploying over a restrictive VPN connection or to a highly secured server which does not allow a PULL.


This series was all about introducing our way of automatically deploying to our environments and how we got there. I hope you got a good understanding of the advantages of a PUSH deployment and can achieve it yourself.

As always, leave a comment if you got anything to add or to give us some feedback.

Oh and …


As I mentioned in my last post, I am working on a default setup for Magento2 deployments. It is meant to be used as a starting point for custom deployments and helps you get your automatic deployment pipeline up and running in a short amount of time. Furthermore, I want to create a central point where issues or special constellations regarding the asset generation are handled.
It will be configurable and highly customizable, and it will contain some basic tasks that can be re-used.
The project will be completely open-source and available via GitHub.
My next post will be an introduction to that deployment, so stay tuned and leave a message here or ping me on Twitter if you feel like it.

Cronjob performance optimization: Magento 1 vs. Magento 2



This article is about problems that can occur with Magento cronjobs. The standard way to configure crontab for Magento 1 has its limits: the more custom cronjobs a Magento system has, the more likely it is to face cron-related problems. The most common issues are:

  • The indexer cronjob (Magento Enterprise, -malways mode) takes longer than usual, so other cronjobs (-mdefault mode) are skipped (not executed) while the indexer runs
  • Some of the cronjobs in the -mdefault scope take a long time to run and block others

The second issue can be avoided by following this rule: create a shell script and a separate crontab entry on the server for long-running jobs, e.g. imports or exports.

Magento 1

Plain Magento 1

If we have a plain Magento 1 installation, we can split the -malways and -mdefault cronjob modes:
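The crontab listing is missing in this archived copy; the split presumably looks like this (paths are placeholders):

```
*/5 * * * * /bin/sh /var/www/magento/cron.sh cron.php -mdefault
*/5 * * * * /bin/sh /var/www/magento/cron.sh cron.php -malways
```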

This prevents the indexer from blocking other -mdefault jobs, and a -mdefault job from blocking the indexer.

But there are many more options for parallelization if you use the Magento 1 extension AOE Scheduler.

Magento 1 with AOE Scheduler

The AOE Scheduler has multiple benefits for managing Magento Cronjobs. In this article I want to focus on the “cron groups” feature.

Instructions on how to use cron groups can be found here.

The main idea is to split Magento cronjobs into groups. The execution of those groups can be triggered separately via the server crontab.

I recently introduced this feature in a project. These are the steps I needed to take:

  1. Create a new module, e.g. Namespace_AoeSchedulerCronGroups
    This module contains only an empty helper and config.xml.
  2. In the config.xml define the groups for each cronjob in the system like this:

    To get a full list of cronjobs you can either use the backend grid of AOE Scheduler or use the following Magerun command:

    The splitting of cronjobs in groups should be based on project knowledge and experience. In my case the groups were something like this:

    • magento_core_general 
    • general
    • important_fast
    • important_long_running
    • projectspecific_general
    • projectspecific_important
    • erp
    • erp_long_running
  3. After deploying the new code base with the new module to the server, edit the crontab, remove the standard cron.php call and add something like this (matching my example groups):

    The last entry is pretty important: it executes jobs which are not assigned to any group, e.g. newly developed cronjobs that didn't get a group assignment yet.
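The config.xml snippet from step 2 is missing in this archived copy. Here is a sketch of what the group assignment could look like (job codes are examples; the exact node layout should be taken from the AOE Scheduler documentation):

```xml
<config>
    <crontab>
        <jobs>
            <!-- assign an existing core job to a group -->
            <cataloginventory_lowstock_send_email>
                <groups>general</groups>
            </cataloginventory_lowstock_send_email>
            <!-- assign a project-specific job to a group -->
            <my_erp_order_export>
                <groups>erp_long_running</groups>
            </my_erp_order_export>
        </jobs>
    </crontab>
</config>
```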

Magento 2

Magento 2 comes with the cron groups feature out of the box. The feature and how to configure multiple groups are explained in the Magento devdocs:

In Magento 2 there are more explicit options for cron groups than in Magento 1, even with the AOE Scheduler module installed:

Groups are defined in a cron_groups.xml file and each group may get its own configuration values:
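The original snippet is missing here; a cron_groups.xml defining a custom group might look like this (the values shown are typical defaults, adjust them per group):

```xml
<config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="urn:magento:module:Magento_Cron:etc/cron_groups.xsd">
    <group id="index">
        <schedule_generate_every>1</schedule_generate_every>
        <schedule_ahead_for>4</schedule_ahead_for>
        <schedule_lifetime>2</schedule_lifetime>
        <history_cleanup_every>10</history_cleanup_every>
        <history_success_lifetime>60</history_success_lifetime>
        <history_failure_lifetime>600</history_failure_lifetime>
        <use_separate_process>1</use_separate_process>
    </group>
</config>
```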


In this article we looked at the evolution of cronjob performance optimization, from plain Magento 1, through Magento 1 with the AOE Scheduler extension, to Magento 2. It is a good example of how community modules with nice features can benefit Magento, and of how Magento can adopt those features in future releases.

Feel free to leave a comment.

Magento Certified Developer Plus
Deploying Magento2 – Releasing to Production [3/4]


This post is part of series:


In the last post, Jenkins Build-Pipeline Setup, we had a look at our Jenkins build pipeline and how its setup and configuration are done. If you haven't read it yet, you should probably do so before reading this post.
The last step in our build pipeline was the actual deployment, which can be defined like this:
The last step in our Build-Pipeline was the actual Deployment which can be defined like this:

You may notice the missing sshagent call compared to the previous post. This sshagent call stems from one of our older deployment setups where we were still pulling code from the server. After writing the post about our build-pipeline setup I questioned that, and as it turns out we don't need it anymore and can simplify our deployments. This part was actually not trivial to set up if you don't know exactly what to do and what to look for, so I am happy to scratch that complexity.

In this post we will dive into the actual Deployment and Rollout of your Magento2 application.

Remembering the visualization of our deployment process, we now enter the last action block. I have marked the part we are going to elaborate on accordingly.


In the stage ‘Asset Generation’ we built all the necessary assets and created tar.gz files for them.
Thus, before starting the stage ‘Deployment’, we have the following files available in the workspace of our Jenkins build.

Next up, those files will be used to build the release on the remote server.

Starting the Deploy

As mentioned in the last post we are using Deployer here to handle the release to staging or production environments.

The TAG and STAGE environment variables are set by Jenkins and defined for each build before the actual build starts.
A possible command might look like this:
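The command itself is missing in this copy; given the TAG and STAGE variables mentioned above, it presumably looks like this:

```shell
php vendor/bin/dep deploy "$STAGE" --tag="$TAG"
```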

This call will roll out the release with the given tag to the production environment.
Though our deployer setup no longer makes a git connection, we provide the tag here to identify the release later on.

Deployer Setup

So this is how our deploy repository is set up:

Here you can also see the Jenkinsfile defining the build pipeline. We have a config directory containing the configurations for our environments, including a possible local setup. The local setup is really helpful when improving or upgrading the deployment.

In our deploy repository we have a composer.json to manage the necessary dependencies for the deployment, namely Deployer itself and our own set of tasks. Having our tasks in a dedicated repository gives us the possibility to share them throughout all deployments. That's one thing I didn't like about the default deployer approach.


Let’s take a look at the deploy.php file that defines the configuration and tasks necessary for our deployment. We will go into more detail afterwards.

As you can see, this file does not look like the default deploy.php files using lambda functions. We have moved the task definitions into a class N98\Deployer\Registry that is provided by n98/lib.n98.framework.deployer. Furthermore, we have moved our tasks and their identifiers into separate classes to make them reusable and shareable via a composer package.
Now let’s have a look at each section.

deploy.php – configuration

We have added the default shared files and directories to the deployer default parameters shared_files and shared_dirs.
ssh_type is set to native, so we use the ssh client provided by the operating system.
webserver-user and webserver-group are used to apply the correct directory permissions.
phpfpm_service and nginx_service are used to restart those services automatically during the deployment (using a custom task).
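The configuration listing did not survive in this copy; pieced together from the parameters described above, the section presumably resembles this (all values are placeholders):

```php
<?php
// deploy.php - configuration section (sketch)
use function Deployer\set;

set('shared_files', ['app/etc/env.php']);
set('shared_dirs', ['var/log', 'pub/media']);
set('ssh_type', 'native');           // use the OS ssh client
set('webserver-user', 'www-data');   // for directory permissions
set('webserver-group', 'www-data');
set('phpfpm_service', 'php7.0-fpm'); // restarted by a custom task
set('nginx_service', 'nginx');
```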

deploy.php – servers

We have put the server-specific configurations into separate files in the config directory. This way we can have a local.php.dist to set up a config for a local dev environment.
We could extend this to simply include the environment provided as a parameter to deployer.

A server config might look like this:
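The example config is missing in this copy; in deployer 4 style, such a server file might look like this (host, user and paths are placeholders):

```php
<?php
// config/production.php - server definition (sketch)
use function Deployer\server;

server('production', 'shop.example.com')
    ->user('deploy')
    ->identityFile('.ssh/config')
    ->stage('production')
    ->env('deploy_path', '/var/www/shop');
```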

We are using the ssh config file .ssh/config provided within the deploy repository. At first I assumed that deployer would use this file when running the native ssh commands and pass it as a parameter like ssh -i .ssh/config. As it turns out it does not do that; instead it parses the ssh config file and just extracts the Hostname, User and IdentityFile directives.
I will be creating a pull request to make direct usage of the config file possible. I have tested it, and it works well, because why shouldn't it.

Furthermore, we have created a class called RoleManager, which we use to define roles for servers and assign tasks to those roles. This functionality is needed to easily trigger specific tasks only on specific servers; it is translated to a $task->onlyOn() call later in the deployment. The main advantage and purpose is ease of use and portability throughout multiple deployment projects.

deploy.php – adding the tasks

To register our default Tasks we have created a Registry class that takes care of this process. This class also takes the roles mentioned above into account.

With deployer you can define as many tasks as you like. It all comes together in the deploy pipeline that you define in your deploy.php.

deploy.php – task classes

We have split up all of our tasks to the following classes:

  • BuildTasks – tasks for basic initialization and an overwrite for the rollback
  • CleanupTasks – improved cleanup task
  • DeployTasks – improved rollback task
  • MagentoTasks – our Magento specific tasks
  • SystemTasks – tasks to restart nginx and php-fpm

Those classes have class constants that are used to register the tasks and to define the build pipeline.

I won’t go into too much detail regarding all the tasks, because some of them just trigger Magento commands, and it would go beyond the scope of this post.
If you are interested in more details about the tasks, just let me know; we might add another post highlighting and explaining them.

Here is an excerpt from MagentoTasks:

This is what the task action and its definition inside Registry::register() look like:

With the Registry::registerTask being defined like this:

Using this method we are adding the default tasks to the deployer project and are applying the roles mentioned above.
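The excerpts themselves are missing from this archived copy. To give an idea, here is a hypothetical sketch (explicitly not the actual n98 code; names and signatures are assumptions) of a task class and how its registration could map onto deployer:

```php
<?php
// Hypothetical sketch only - not the real N98\Deployer implementation.
namespace N98\Deployer\Task;

use function Deployer\run;
use function Deployer\task;

class MagentoTasks
{
    const TASK_CACHE_FLUSH = 'magento:cache:flush';

    public static function cacheFlush()
    {
        // trigger the plain Magento command inside the current release
        run('cd {{release_path}} && bin/magento cache:flush');
    }
}

// Registration: map the task identifier constant to its action
task(MagentoTasks::TASK_CACHE_FLUSH, function () {
    MagentoTasks::cacheFlush();
})->desc('Flush the Magento cache');
```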

deploy.php – deploy pipeline

Having defined all of our tasks, we can now take care of the deploy pipeline. This is how our default deploy pipeline for deployer is defined.

We have added the deploy:initialize task, which will detect the stable release and save it with \Deployer\set('release_path_stable', $releasePathStable);

The BuildTasks::TASK_SHARED_DIRS_GENERATE task will ensure the necessary shared directories are available.

The last thing I want to point out regarding the pipeline, is the rollback after an error during the deployment.

By default, deployer does not roll back in case something goes sideways. Deployer has a default rollback task, but it is not used automatically; you would have to call it manually.


While setting up this deployment pipeline we ran into various troubles with deployer. The rollback task and the detection of the current stable release are a bit buggy, which is why we implemented an improved version ourselves. This improved version does not use an integer as the release directory name but instead uses the tag or branch provided to deployer. A branch is postfixed with the current date, and for tags there is also a check to not deploy to the same directory twice.

During development the releases folder might look something like this:

Furthermore, the standard cleanup task was also not quite stable and reliable, so we had to overwrite that too. We had situations where the former current release was deleted due to an issue with how deployer builds its internal release_list. That error only occurred when multiple deploys went sideways.

I am evaluating how much of our adjustments can be provided as a pull-request to the deployer project itself.


This is it. I hope you gathered some insights into how our deployment setup works and how you could set up your own.

In the next blog post we will share some thoughts on where we want to go with this deployment in the future and how it is re-used in different environments and server setups.

If you want to know or see more details, feel free to leave a comment or contact me directly on twitter, see the authors box below.

See you next time.


I am working on a default setup for a Magento2 deployment that can be used as a starting point for your own deployment. It contains the most important tasks, the possibility to use it for pipeline builds, a default deployer setup, etc.

So stay tuned 🙂

Workaround for Magento 2 Issue #5418 – Product Grid does not work after Import


The Magento 2 importer is a simple way to import and update product data and much more. Since July 2016, however, an import throws an exception on the product grid. Today I added a small script as a workaround, which I want to share.

It is actually simple and based on Yonn-Trimoreau's SQL query. I set up a bash script which enters the working directory and executes the query via n98-magerun2. After that, I added a cronjob to call the script every minute (in case someone starts the import manually).

This is the Bash Script:
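The script body did not survive in this copy; it boils down to a thin wrapper like the following, where the working directory is a placeholder and the SQL is the one from the linked workaround:

```shell
#!/bin/bash
# fix-product-grid.sh - run the grid-fix SQL via n98-magerun2
cd /var/www/html/shop || exit 1
vendor/bin/n98-magerun2 db:query "<SQL from Yonn-Trimoreau's workaround>"
```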

My CronJob Configuration looks like this:
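A matching entry in /etc/crontab might look like this (user and path assumed):

```
* * * * * www-data /bin/bash /var/www/html/fix-product-grid.sh >/dev/null 2>&1
```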

It is pretty dirty, but it'll work until Magento applies a fix.