Browsed by Category: General

Use EavSetup to Import Attributes

I recently had the issue that I needed to use \Magento\Eav\Setup\EavSetup outside of the setup context.
To be a bit more concrete, I wanted to import attribute sets, attributes and attribute options without using an install script.
My first idea was:

In Magento 2 you can easily inject the EavSetup via constructor injection and then use it in your own class.

First try

So I injected the EavSetup into my class, which worked out well in developer mode.
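A minimal sketch of that first attempt (class name and namespace are illustrative):

    <?php
    namespace Acme\AttributeImport\Model;

    use Magento\Eav\Setup\EavSetup;

    class Importer
    {
        /**
         * @var EavSetup
         */
        private $eavSetup;

        public function __construct(EavSetup $eavSetup)
        {
            // works in developer mode, but breaks once setup:di:compile has run
            $this->eavSetup = $eavSetup;
        }
    }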

But running the same code in production mode caused a fatal error.

After a short research and some debugging, it turned out that production mode itself was not the issue. The root of the problem was the output the setup:di:compile command creates, and moreover how the EavSetup is instantiated in general.

A brief digression

For better testing I wrote a short PHP file which tries to instantiate my class with the help of the object manager. It is based on index.php, but without the $bootstrap->run($app); line; instead of calling the run method I call getObjectManager() and work with the object manager for my tests.
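Roughly sketched (assuming the file lives in the Magento root, next to index.php):

    <?php
    // test.php - based on index.php, but without $bootstrap->run($app);
    require __DIR__ . '/app/bootstrap.php';

    $bootstrap = \Magento\Framework\App\Bootstrap::create(BP, $_SERVER);
    $objectManager = $bootstrap->getObjectManager();

    // try to instantiate the class under test via the object manager
    $importer = $objectManager->create(\Acme\AttributeImport\Model\Importer::class);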

With this information I started to analyze how Magento does it and how I could use the EavSetup for my purposes.
Here you can see some classes that are relevant in instantiating an EavSetup:

  • \Magento\Setup\Model\ObjectManagerProvider
  • \Magento\Setup\Module\DataSetupFactory
  • \Zend\ServiceManager\ServiceManager
  • \Magento\Framework\ObjectManager\Factory\Compiled::create
  • \Magento\Framework\ObjectManager\Config\Compiled::getArguments

The main issue was that \Magento\Framework\ObjectManager\Config\Compiled::getArguments did not return the correct arguments.
Since I was not able to pass the arguments as needed, I had to figure out another way to use the EavSetup in my class.

After debugging how Magento instantiates the EavSetup, I started to replicate that process in my own constructor.

The Solution

The resulting solution looks something like the sketch below.
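Based on the classes listed above, the manual wiring goes roughly like this; argument names and the exact constructor parameters are assumptions and need to be verified against your Magento version:

    <?php
    // inside the constructor of the importing class;
    // $objectManager is an injected \Magento\Framework\ObjectManagerInterface

    // the ServiceManager dependency the compiled DI config cannot provide
    $serviceManager = new \Zend\ServiceManager\ServiceManager();

    $objectManagerProvider = $objectManager->create(
        \Magento\Setup\Model\ObjectManagerProvider::class,
        ['serviceLocator' => $serviceManager]
    );

    $dataSetupFactory = $objectManager->create(
        \Magento\Setup\Module\DataSetupFactory::class,
        ['objectManagerProvider' => $objectManagerProvider]
    );

    /** @var \Magento\Eav\Setup\EavSetupFactory $eavSetupFactory */
    $eavSetupFactory = $objectManager->create(\Magento\Eav\Setup\EavSetupFactory::class);

    // the goal: an EavSetup instance usable outside the setup context
    $this->eavSetup = $eavSetupFactory->create(['setup' => $dataSetupFactory->create()]);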

Let me just explain in a few words what I had to do.

For a better understanding let us move from bottom to top: my goal was to create an EavSetup instance, and to reach this goal I had to use the EavSetupFactory->create() method.
The EavSetupFactory has a dependency on the DataSetupFactory, which itself needs an ObjectManagerProvider instance injected.
This is the interesting point in this manual dependency injection flow:
the ObjectManagerProvider needs a ServiceManager to create the instance.
That's the main cause of our issue with di:compile and production mode. This dependency cannot be injected by the default ObjectManager, and thus the above mentioned error occurred.

So after manually building the dependencies and creating the objects, we could work around this issue.

In case you haven't seen how the setup/index.php "application" works, you should have a closer look at it.

Magento Certified Developer Plus

Nice to know: Install N98-Magerun via Composer

There is a so far largely undocumented installation procedure for Magerun that is extremely handy in project setups.

You just require Magerun within the Magento project and you can then execute it from the vendor’s bin folder:
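For example (add --dev if you prefer to keep it as a development-only dependency):

    composer require n98/n98-magerun2
    vendor/bin/n98-magerun2 --version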

Afterwards, if you commit the composer.json and composer.lock files, the whole team gets it automatically.

So it does not matter whether you're running it locally, inside a Docker container or on a completely different system: after composer install, n98-magerun2 is available on all target systems.

Just Another Install Example

Here is another example I just did with one of our systems that runs via Docker on my end, but where I'm installing on my local system (the folder is mounted inside the Docker container).
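The invocation was essentially the same composer require, just with the platform requirements ignored:

    composer require --ignore-platform-reqs n98/n98-magerun2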

The --ignore-platform-reqs switch makes Composer install it even though my local system does not meet all Magento 2 platform requirements.

Introducing MageDeploy2

In our recent post series about deploying Magento 2 using Jenkins and Deployer I showed you how our deployments are set up.

In case you haven’t read them and are interested in the details here are the links:

While writing those articles I realized that quite a few improvements and generalizations could be made to make this deployment more maintainable, extensible and customizable. I wanted a deployment setup that allows local execution with colored output, execution on a build server without interaction, and usage in a build pipeline.
Furthermore, I wanted the deployment setup to be usable not only within netz98 but also by the whole Magento community.

What I came up with is called MageDeploy2, and I will introduce it in this post.

If you read the previous posts you will probably remember the diagrams showing the actions executed on the particular servers. I used one of those to mark the areas that are covered by the MageDeploy2 setup.

Now let’s go into details on how those phases and steps are implemented and what you need to get started with a PUSH deployment for Magento2 yourself.

About MageDeploy2

MageDeploy2 combines multiple technologies and open-source projects to provide the deployment setup.
It basically is a set of tools, configuration files and some custom tasks for Robo and Deployer, all tailored to fit the needs of deploying a Magento 2 project.

For those new to Robo and Deployer:

  • Robo is a task runner that allows you to write fully customizable tasks in common OOP PHP style: http://robo.li/
  • Deployer is a deployment tool for PHP which follows an approach similar to Capistrano: https://deployer.org/

I will not go into too much detail on how those tools work; you can get that from their respective websites and documentation.

MageDeploy2 can be divided into 3 phases that can each be triggered separately.

  • magento-setup (preparing a local magento-setup)
  • artifacts-generate (generating the assets and packaging them)
  • deploy (release to production environment)

Those phases are implemented as commands in the RoboFile.

MageDeploy2 is divided into different packages that are installed when installing through composer.

  • mwltr/robo-deployer : contains Robo tasks for Deployer
  • mwltr/robo-magento2 : contains Magento 2 specific Robo tasks

Those Robo tasks are not a full set of all possible commands and options, but currently offer the commands and modifiers needed in a deployment scenario. They are decoupled and can be re-used in other projects.

As far as the Deployer setup is concerned, MageDeploy2 uses n98/n98-deployer to include Deployer configurations and tasks, namely:

  • set of Magento2 specific tasks
  • Magento2 Default Recipe
  • RoleManager for servers
  • optimized deployer standard tasks

Requirements

As I mentioned earlier, the MageDeploy2 setup uses Robo to control the local setup and the overall deployment process. To achieve the actual deployment to the distinct environments it comes with a pre-configured Deployer setup. Please note that using Deployer is not mandatory; you can use whatever tool you like.

It also expects that you have a git repository available in which you have committed your Magento 2 composer.json file, either in the root or in a sub-directory. Right now only git is supported, but it should not be a big problem to connect another VCS.
Finally, you need to have configured the access to the Magento composer repository for your current user.

Create a new Deployment

To create a new deployment setup you just need to run one command.
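A typical way is a composer create-project call; the package name below is an assumption, see the magedeploy2-base repository linked further down for the authoritative instructions:

    composer create-project mwltr/magedeploy2-base my-deployment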

Note: Robo needs to be installed using composer, otherwise custom tasks cannot be used. See the Robo documentation on including additional tasks.

Configuration

After the installation you have to edit the magedeploy2.php and the deploy.php files to suit your needs. MageDeploy2 assumes you have a git repository containing the Magento composer.json. Furthermore, it assumes your local build environment can clone said repository and download the Magento packages using composer.

MageDeploy2 Configuration

To configure MageDeploy2, run the interactive configuration command.
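Via the Robo entry point of the generated project this could look like the following; the command name is an assumption, check the repository's command list:

    vendor/bin/robo config:init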

It will guide you through the most important configuration options. Don't worry, you can edit the magedeploy2.php later on.

Next, validate that your build environment is set up correctly.
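The generated project exposes a Robo command for this; the exact command name is an assumption:

    vendor/bin/robo validate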

Setup local build environment

If you are done with the configuration in magedeploy2.php, you can check whether your build environment can be set up.
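This is done with the deploy:magento-setup command described later in this post; the branch name is just an example:

    vendor/bin/robo deploy:magento-setup develop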

You can use a different branch or tag depending on your git repository setup.

After the magento-setup stage has run successfully, you can generate the assets.
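This is done with the deploy:artifacts-generate command described below; the invocation via the Robo entry point is an assumption:

    vendor/bin/robo deploy:artifacts-generate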

After this command has completed you should see the generated packages beneath the shop directory.

At this point we are sure that the local build setup is working and we can now continue with releasing our project.

Deployer Configuration

To evaluate the setup we will create a local deployment target. To do so, copy local.php.dist and set the config values according to your local deploy target.
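For example (the location of the .dist file may differ in your project):

    cp local.php.dist local.php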

Check the configuration in deploy.php and adjust it to your requirements. The default configurations and tasks are defined in \N98\Deployer\Recipe\Magento2Recipe. You can also have a look at all the available configuration options in the Deployer documentation.

Setting up deploy directory tree

After you are done with the configuration, you can initialize the directory tree on the deploy target.
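With the bundled Deployer setup, Deployer's standard prepare task can create that structure; the invocation below is illustrative, and the stage name local refers to the target configured above:

    vendor/bin/dep deploy:prepare local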

This will create the required directories on your local deploy target.

Setting up deploy target (optional)

If you want to set up your deploy target as well, you can use the following command.
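The deploy:deploy command described below can be used for this first push (invocation is illustrative):

    vendor/bin/robo deploy:deploy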

It will make an initial deployment to push your code to the deploy target.

When this is done, navigate to your local deploy_path and run the Magento install command to set up the database.
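A typical bin/magento setup:install call, run inside the current release directory of the deploy target, might look something like this (all values are placeholders for your local environment):

    cd <deploy_path>/current
    bin/magento setup:install \
        --base-url=http://magento2.local/ \
        --db-host=localhost \
        --db-name=magento2 \
        --db-user=magento2 \
        --db-password=secret \
        --admin-firstname=Admin \
        --admin-lastname=User \
        --admin-email=admin@example.com \
        --admin-user=admin \
        --admin-password=Admin123! \
        --language=en_US \
        --currency=EUR \
        --timezone=Europe/Berlin \
        --use-rewrites=1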

Now we have the Magento database and configuration on our deploy target and are ready to continue with the final step.

Deploying the project

At this point you have set up the build environment and the target environment, and can finally start with the actual deployment.
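The top-level deploy command described below runs all three stages in one go; the branch argument is an assumption:

    vendor/bin/robo deploy develop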

Congrats, you have successfully set up your deployment pipeline and run your first deployment!

Commands

If you went through the tutorial above, you may have already used most of them.
A full list of commands is available in the github repository here:
https://github.com/mwr/magedeploy2-base#commands
The following diagram shows the commands responsibility within the deployment pipeline.

deploy:magento-setup

Runs all tasks in the magento-setup stage. It will set up or update a local Magento instance by pulling the source code from git, installing composer dependencies and installing or updating a local database.

deploy:artifacts-generate

Runs the Magento setup:di:compile and setup:static-content:deploy commands to generate the assets. It uses your configuration from the magedeploy2.php.

After generating those assets it will create packages, again according to your configuration.

deploy:deploy

This command will invoke deployer to release your project and push the prepared artifacts to the server.

deploy

Triggers the deployment with all its stages and can be considered to run deploy:magento-setup, deploy:artifacts-generate and deploy:deploy internally.

Customization

MageDeploy2 was designed to be highly customizable to suit your needs. Here are some areas that are easy to adjust:

  • Add or overwrite Robo tasks
  • Add to or overwrite the existing MageDeploy2 configuration
  • Customize Deployer while still having the basic set of tasks available
  • Exchange Deployer for a different tool

Going into detail here would exceed the scope of this introduction. We may cover this area in a later post though.

Final Words

This is it. I hope you like the tool and that it helps you set up a PUSH deployment of your own.
And as always let me know your thoughts and feedback in the comments below or contact me directly.

 

Magento1 since 2008 / Magento2 since 2015
Passionate Road Bike Rider (~3.500km/yr)
Loves building software with an elaborate architecture and design
3x Magento Certified
Software Developer >10 years
Head of Magento Development @ netz98

A framework to prevent invalid stuff in your GIT repository

The following blog post describes a framework for managing and maintaining multi-language pre-commit hooks. The described methods add a comprehensive quality gate to your publishing workflow. If you are using SVN instead of GIT you can skip this blog post 😛

The framework was designed by Yelp three years ago. It brings many predefined checks designed for a generated GIT pre-commit hook. Most of the checks are made to run against Python files, but this is not a blocker for PHP developers: fortunately the framework can be extended with your own scripts. It's also possible to share the checks in separate remote repositories, so you can build a pre-commit kit for your own purposes. The standard repository comes with some nice checks for e.g. XML or YAML files, plus other things like checking for broken symlinks or leftover merge conflict markers. A complete list and documentation can be found on the project website.

Installation

The installation is simple. It can be done by brew or the python installer pip. Most Linux distributions come with pip already installed. Mac users can install python with pip or use brew.

On Mac:
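Homebrew ships a formula for the tool:

    brew install pre-commit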

or with Python PIP:
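pip installs the same binary (add --user for a per-user installation):

    pip install pre-commit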

After the installation we should have a binary “pre-commit”.

Config

For configuration a YAML format is used. All configs are validated by pre-commit, which is a good thing: if you have a mistake in your config file it will print out a long list of syntax rules. Config entries start with a "repo" entry which must be a git repository URL. The example below shows the external repository provided by Hootsuite.
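A minimal entry pointing at that repository could look like this; the pinned version and the hook id are illustrative and must match the repository's hooks.yaml (newer pre-commit versions use rev: instead of sha:):

    -   repo: https://github.com/hootsuite/pre-commit-php
        sha: 1.2.1
        hooks:
        -   id: php-lint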

Hooks can also be defined locally. Add the pseudo repository name „local“:
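A local hook carries its full definition in the project config itself; id, name and entry below are illustrative:

    -   repo: local
        hooks:
        -   id: php-syntax-check
            name: PHP syntax check
            entry: php -l
            language: system
            files: \.php$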

Every hook must have an id. That's important if you share a hook via your own repository. If the hook is provided by an external repository, it must be defined in a hooks.yaml file there. To use the hooks in your local project, a .pre-commit-config.yaml file must be created.

Install the hooks

The installation of the hooks in your config can be done by running pre-commit install. That's all we need to do. After that, all our commits are checked by the installed hooks.
It's also possible to update the hook versions in the YAML file, similar to "composer update", with pre-commit autoupdate. This fetches the newest versions of the hooks from the remote repositories.

Test the hooks

Simply run pre-commit run --all-files to test all hooks against the whole local working copy.

Commit your code

Congratulations! You now have a QA step between you and your CI server. If you commit some code, the automatic checks will run and prevent bigger issues. To secure the complete project it's necessary to set up the same checks on your continuous integration server. If you don't have a CI server like Jenkins, GitLab etc. and are working on your own, this setup is good enough.

Example config for a PHP library

This config provides us the following checks:

  • Validate composer.json file with composer
  • Prevent large files in commits like a database dump
  • Check for valid JSON and XML files
  • Check if merge conflict entries are not resolved
  • Check if a file has a wrong BOM
  • Run php-cs-fixer and fix code against a .php_cs file.

Example .pre-commit-config.yaml:
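A sketch of such a configuration, assembled from the standard pre-commit-hooks repository plus two local hooks; the pinned version and the php-cs-fixer entry are illustrative and need to match your project:

    -   repo: https://github.com/pre-commit/pre-commit-hooks
        sha: v0.9.1
        hooks:
        -   id: check-added-large-files
        -   id: check-json
        -   id: check-xml
        -   id: check-merge-conflict
        -   id: check-byte-order-marker
    -   repo: local
        hooks:
        -   id: composer-validate
            name: composer validate
            entry: composer validate
            language: system
            files: composer\.json$
        -   id: php-cs-fixer
            name: php-cs-fixer
            entry: php-cs-fixer fix --config=.php_cs
            language: system
            files: \.php$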

When you commit, pre-commit prints a Passed or Failed line for each installed hook.

If you like the concept, we would be happy if you leave a comment.

Have fun!

 

PS: Thanks to David Lambauer for discovering the framework at netz98.

– Creator of n98-magerun
– Fan of football club @wormatia
– Magento user since version 0.8 beta
– 3x certified Magento developer
– PHP Top 1.000 developer (yes, I’m PHP4 certified and sooooo old)
– Chief development officer at netz98

PSR-7 Standard – Part 1 – Overview

This is the first post of my new PSR-7 series. If you already use PSR-7 in your daily life as a programmer you can skip the first part of this post.

What is PSR-7?

PSR-7 is a standard defined by the PHP-FIG. I don't like to repeat the standard documents in my blog posts; the idea is to give you some real-world examples of how you can use PSR-7 in your PHP projects. If you investigate the standard you will notice that it doesn't contain any implementation.

Like the other FIG standards it only defines PHP interfaces as contracts. The concrete title of the standard is "HTTP message interfaces", and that's all it defines: a convenient way to create and consume HTTP messages. A client sends a request and a server processes it. After processing it, the server sends a response back to the client.

Nothing new? True, that is how any PHP server application works. But without PSR-7 every big framework or application implements its own way to handle requests and responses. Our dream is that we can share HTTP-related source code between applications. The main goal is: interoperability.

History

Before any PSR standard we had standalone PHP applications. There were some basic PHP libraries to use. Most of the code was incompatible. With PSR-0 we got an autoload standard to connect all the PHP libraries.

PSR-7 is a standard to connect applications on the HTTP level. The first draft for PSR-7 was submitted by Michael Dowling in 2014. Michael is the creator of Guzzle, a famous PHP HTTP client library. He submitted his idea, and after that the group discussed the idea behind a standardized way to communicate with messages. Matthew Weier O'Phinney (the man behind Zend Framework) took over the work from Michael.

In May 2015 we had an officially accepted PSR-7. After that, most of the big frameworks adopted the standard or created bridge/adapter code to utilize it.

Overview

Thanks to Beau Simensen

 

The image gives us an overview of the PSR-7 interfaces. The blue color represents inheritance. The message interface is the main interface of the standard. The request of a client and the response of the server inherit from the message interface. That's not surprising, because both are HTTP messages themselves. The red dotted lines indicate the usage of other parts.

Request-flow with PSR-7

The main flow with PSR-7 is:

  1. Client creates a URI
  2. Client creates a request
  3. Client sends the request to server
  4. Server parses incoming request
  5. Server creates a response
  6. Server sends response to client
  7. Client receives the response
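A minimal sketch of the server side of this flow, written only against the PSR-7 interfaces from psr/http-message; any implementation (Guzzle PSR-7, zend-diactoros, ...) can supply the concrete objects:

    <?php
    use Psr\Http\Message\ResponseInterface;
    use Psr\Http\Message\ServerRequestInterface;

    function handle(ServerRequestInterface $request, ResponseInterface $response): ResponseInterface
    {
        // step 4: the server inspects the parsed request
        $name = $request->getQueryParams()['name'] ?? 'world';

        // step 5: build the response; messages are immutable, with*() returns a new instance
        $response->getBody()->write('Hello ' . $name);

        return $response->withHeader('Content-Type', 'text/plain');
    }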

This was the first part of the blog series. The next part will look more closely at the request and the URI.

– Creator of n98-magerun
– Fan of football club @wormatia
– Magento user since version 0.8 beta
– 3x certified Magento developer
– PHP Top 1.000 developer (yes, I’m PHP4 certified and sooooo old)
– Chief development officer at netz98

Solving a 2006 MySQL error connection timeout in Magento1

In a recent task I was testing a web crawler script which uses Magento database information for its crawling requests. I encountered the following error:

Fatal error: Uncaught exception ‘PDOException’ with message ‘SQLSTATE[HY000]: General error: 2006 MySQL server has gone away’ in lib/Zend/Db/Statement/Pdo.php:228

The Problem

This error occurred after around 45 minutes of script runtime.
The script was written in a way that allowed longer periods without any database interaction.
As a consequence, when the script reached a point where it tried to save or fetch something from the database, the MySQL connection had already run into a timeout, unnoticed by the script,
thus resulting in the above-mentioned MySQL error.

mysql wait_timeout

The MySQL system variable controlling this timeout is wait_timeout.

Definition from the MySQL Reference Manual:

“The number of seconds the server waits for activity on a non-interactive connection before closing it.”
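You can check the value your server uses, for example from the shell:

    mysql -e "SHOW VARIABLES LIKE 'wait_timeout';"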

As it turns out we have already had this situation in another project – thanks to the netz98 developers for the hint.

The Solution

The solution is to close the connection before using it again, in case of a long-running and uninterrupted part of the code that does no database communication.

We have added the following code snippet to our ResourceModel:
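A hypothetical reconstruction for a Magento 1 resource model (method and parameter names are assumed):

    /**
     * Returns the write adapter; optionally closes the current connection
     * first, so the next query opens a fresh one instead of reusing a
     * connection that may already have hit wait_timeout.
     *
     * @param bool $useNewConnection
     * @return Varien_Db_Adapter_Interface
     */
    public function getConnection($useNewConnection = false)
    {
        $adapter = $this->_getWriteAdapter();
        if ($useNewConnection) {
            // Zend_Db re-connects lazily with the next query
            $adapter->closeConnection();
        }
        return $adapter;
    }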

With this method the connection within the adapter can be closed. When it is closed, the connection will be re-initialized automatically with the next database interaction triggered by our code. To control this we introduced the parameter $useNewConnection, which enforces this behaviour.

Each time we reach a point in our script where the connection could have hit the wait_timeout, we just call this method with $useNewConnection set to true.

I hope this article is helpful for you in case you face the same situation. Feel free to comment if you have faced this issue too or if you have any additions.

Update: Keep alive implementation by Ivan Chepurnyi (@IvanChepurnyi)

 

Magento Certified Developer Plus