The requirements of the example we'll look at are simple: Magento's inventory, or more accurately, Magento MSI sources, is synced with warehouse inventory levels that come from an ERP. Whenever stock levels change in the ERP, Magento's sources are updated via the API: a one-way sync from the ERP to Magento, keeping inventory levels accurate at all times.
When an order is placed on Magento, the ERP deducts stock within itself (the order itself is synced via another mechanism that we won't get into). That stock-level change then triggers the ERP to notify Magento.
As stock deduction is done on the ERP side (the ERP being the source of truth for inventory levels), we need to disable Magento's stock deduction on shipment. Otherwise, we would end up with a double stock deduction.
To disable stock deduction on the Magento side, we can create a preference for Magento\InventorySourceDeductionApi\Model\SourceDeductionServiceInterface and override the execute method, making it do nothing.
In your di.xml:
<preference
    for="Magento\InventorySourceDeductionApi\Model\SourceDeductionServiceInterface"
    type="Company\ErpIntegration\Model\SourceDeductionService"/>
And the preference class:
<?php

declare(strict_types=1);

namespace Company\ErpIntegration\Model;

use Magento\InventorySourceDeductionApi\Model\SourceDeductionRequestInterface;
use Magento\InventorySourceDeductionApi\Model\SourceDeductionServiceInterface;

class SourceDeductionService implements SourceDeductionServiceInterface
{
    /**
     * Dummy method as source deduction is done by ERP, and is not Magento's responsibility
     * at this point.
     *
     * @phpcs:disable Magento2.CodeAnalysis.EmptyBlock
     * @param SourceDeductionRequestInterface $sourceDeductionRequest
     */
    public function execute(SourceDeductionRequestInterface $sourceDeductionRequest): void
    {
        // Do nothing
    }
}
(If you have a better implementation for disabling stock deduction on shipment, please let me know in the comments!)
So far so good. Our Magento stock levels should match the ERP levels fairly accurately.
Let's take a look at an example of a product sale, including Magento MSI's stock reservation system, which takes care of the saleable quantity before the actual source deduction takes place.
| Event | Qty calculation | Saleable qty |
| --- | --- | --- |
| Initial stock levels | 5 | 5 |
| Customer places an order | 5 - 1 (reservation) | 4 |
| Order is processed in ERP and API call made | 4 - 1 (reservation) | 3 |
| Shipment is created in Magento | 4 - 1 + 1 (compensation reservation) | 4 |
| Reservations cleanup | 4 | 4 |
Through this example we can see that there is an instance where the saleable quantity drops to 3, even though we've only sold one product. However, the saleable quantity jumps back to 4 as soon as the order is shipped and Magento creates the compensation reservation.
This might be OK as long as the shipment is created at the exact time that the order is processed by the ERP.
If there is a delay between the ERP's stock deduction and the shipment creation, the business has to decide whether it prefers underselling or overselling.
If you prefer to undersell (saying something is out of stock when in reality...
To run different scopes per subdirectory (or URL slug), the situation is slightly more complex. If you try to run different scopes for example.com/en and example.com/au, it can become hard to manage this through the webserver; Nginx doesn't let you set variables in a location block and managing a map becomes messy (see footnote for updated info regarding using an Nginx map). Apache also has its own issues with setting variables on a directory block (or rather, it's not really meant to be used that way).
To work around this issue, most solutions suggest copying the index.php and .htaccess files into a subdirectory of your project. (A variation of this is to symlink the app, lib, var and vendor directories.) Along with some modifications to the two files, this solution works fine, and is well documented:
You can also use the "Add Store Code to Urls" configuration that comes with Magento out of the box. However, this setting is extremely limited; it only works with store view scopes. It also uses the store view code as the URL slug, which doesn't work if you want URLs such as shop.example.com/en, but your store view code is b2c_en. (In such cases, the URL would need to be shop.example.com/b2c_en.)
Going back to the subdirectory/symlink workaround, the suggested solutions become messier if you have "multiple dimensions" of scopes. Say I'm managing a business that does both B2C and B2B in multiple countries; I could end up with a URL structure that looks like this:
What if, without touching the web server, the PHP application could determine the correct run code from the URL, internally rewrite the request to remove the language code, and then forward the request to Magento and its front-controller as if the request came through directly? This would give us the flexibility and scale to manage complex multi-store set-ups in an elegant way.
It turns out that this is quite straightforward to do!
First we need to intercept our request and rewrite it before Magento bootstraps itself. This can be done in two ways: through PHP's auto_prepend_file setting, or through the Composer autoloader. I suggest going with the Composer autoloader method, as it's easier to manage within most projects.
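To make the idea concrete, here is a sketch of the Composer autoloader approach. You register a plain PHP file in the "files" section of composer.json's autoload config, so it runs on every request before Magento's front controller. All names below (the file path, the URL segments, the store view codes) are hypothetical and would need adapting to your set-up; MAGE_RUN_CODE and MAGE_RUN_TYPE are Magento's standard run parameters.

```php
<?php
// File: app/etc/scope-resolver.php (hypothetical name), registered via:
//
//   "autoload": {
//       "files": ["app/etc/scope-resolver.php"]
//   }
//
// Maps the first URL segment to a store view, then strips it from the
// request so Magento's front controller never sees it.
declare(strict_types=1);

$scopes = [
    'en' => 'b2c_en', // hypothetical store view codes
    'au' => 'b2c_au',
];

$uri  = $_SERVER['REQUEST_URI'] ?? '/';
$path = parse_url($uri, PHP_URL_PATH) ?: '/';
$firstSegment = explode('/', ltrim($path, '/'))[0];

if (isset($scopes[$firstSegment])) {
    // Tell Magento which store view to run.
    $_SERVER['MAGE_RUN_CODE'] = $scopes[$firstSegment];
    $_SERVER['MAGE_RUN_TYPE'] = 'store';

    // Internally rewrite the URL so the language slug disappears
    // before Magento bootstraps.
    $_SERVER['REQUEST_URI'] = substr($uri, strlen('/' . $firstSegment)) ?: '/';
}
```

With this in place, a request to shop.example.com/en/some-page would run the b2c_en store view against the path /some-page, without any webserver configuration.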
> You might be the first person to mention simple and async in one sentence.
>
> — Fooman (@foomanNZ) July 8, 2019
That said, today I’m going to write about how to create a queue that does additional order processing using Magento 2 and MySQL only — without RabbitMQ.
One use case is when you want to synchronize orders to an external platform, do some post-processing, or simply make some API requests related to a new order. There are plenty of ways to hook into the order placement flow. If you can, however, this should be done asynchronously.
Asynchronous processing of orders is preferred because it doesn't delay the order flow from the customer's viewpoint. Any latency or errors that happen during the order handling will not interfere with the customer's experience; the task happens in a separate PHP process in the background, usually triggered by a cron job.
Magento 2 supports RabbitMQ, and it's officially recommended because it scales better. However, I think MySQL has plenty of "scale" for the vast majority of use cases; running a simple queue can hardly be considered a bottleneck, especially if all it's doing is keeping track of new order IDs. If anything, a separate node can be used to consume the queue.
The reality is, it's harder to set up RabbitMQ, it's another service to maintain, and it so happened that my Docker development environment didn't come with RabbitMQ.
Edit: I now use Warden as my development environment which comes with RabbitMQ.
It's probably for the above reasons that Magento added a MySQL driver to their queueing functionality.
As a side note, Magento 1 didn't have queue functionality to process events asynchronously. For M1, I've always used a ProxiBlue module.
Edit #2: Magento actually uses MySQL for 14 out of its 15 core queues. More on that below (see "Some further comments on the two connection types").
magento/framework-message-queue: Magento’s main message queue functionality.
magento/framework-amqp: Adds AMQP functionality to the above package; handles connections to external queueing systems.
magento/module-message-queue: Ties framework-message-queue into the Magento application, and adds some application functionality like CLI commands and crontabs.
magento/module-mysql-mq: Adds MySQL driver so you don’t need an external queueing system like RabbitMQ. We’ll talk more about this one.
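To give an idea of what wiring up a MySQL-backed queue looks like, here is a sketch of an etc/queue_consumer.xml. The topic, queue and handler names are hypothetical; you would also declare the topic in communication.xml and add matching entries in queue_topology.xml and queue_publisher.xml.

```xml
<!-- File: etc/queue_consumer.xml (all names hypothetical) -->
<config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="urn:magento:framework-message-queue:etc/consumer.xsd">
    <!-- connection="db" tells Magento to use the MySQL driver from
         magento/module-mysql-mq instead of AMQP/RabbitMQ -->
    <consumer name="company.order.sync"
              queue="company.order.sync"
              connection="db"
              handler="Company\OrderSync\Model\Queue\Consumer::process"/>
</config>
```

The consumer is then run via `bin/magento queue:consumers:start company.order.sync`, typically kept alive by cron or a process manager.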
In an ideal world framework packages can be potentially used as a standalone package,...
For Magento 1, check out my post here: Four ways to edit Magento's Javascript.
A while ago I wrote a reasonably popular article that simply listed out the different ways you can modify Magento's Javascript. Here we go again, but for Magento 2.
Of course, the easiest way to modify Javascript is to copy the file over in your own theme, and make your changes there.
For example, copying the following file:
vendor/magento/module-swatches/view/frontend/web/js/swatch-renderer.js
to
app/design/frontend/{Namespace}/{Theme}/Magento_Swatches/web/js/swatch-renderer.js
If going down this path, I encourage (read: force) my team to always commit the original file to Git before making any changes. It makes code reviews much easier: you can see what was part of the original Magento codebase, and what was edited. I think it's good practice to do this with any overrides (especially template files).
Alan Storm has already written thoroughly on this subject, so I won't explain it here. In short, the identifier used when requesting a dependency in your module definition, requirejs call, x-magento-init script, or data-mage-init attribute can be pointed to a different asset, via a file named requirejs-config.js.
Example:
// File: app/code/{Namespace}/{Module}/view/frontend/requirejs-config.js
var config = {
    map: {
        '*': {
            'Magento_Swatches/js/swatch-renderer': '{Namespace}_{Module}/js/(unknown)'
        }
    }
};
The above forces the dependency to be loaded from your module instead.
This approach might make it hard to debug as developers don't expect an explicit path to be rewritten like that. However, if you're working within an extension as opposed to a theme (if you're an extension developer for example), this seems to be the only approach. If you're customising a theme, it's better to take the first approach, and override the JS file in your theme.
Arguably the best way to modify JS is through a concept that Magento calls "Mixins".
It's a system that works with RequireJS, but is unique to Magento 2. It allows you to modify any dependency when it is requested, and doesn't require you to overwrite the file, allowing for easier Magento upgrades.
Let's continue with the swatch-renderer.js example. Add the following contents to your requirejs-config.js file:
// File: app/code/{Namespace}/{Module}/view/frontend/requirejs-config.js
var config = {
    config: {
        mixins: {
            'Magento_Swatches/js/swatch-renderer': {
                '{Namespace}_{Module}/js/(unknown)': true
            }
        }
    }
};
And in your "hook" file:
// File: app/code/{Namespace}/{Module}/view/frontend/web/js/(unknown).js
define([], function () {
    'use strict';

    return function (swatchRenderer) {
        // Do something with swatchRenderer
        return swatchRenderer;
    };
});
Now, when the swatch-renderer dependency is requested, it goes through your custom file first. This allows you to modify behaviour through plugin-like calls. It does require some basic knowledge of JS closures and objects, and how they work; hooking into a jQuery widget is different from hooking into a Magento uiComponent.
Again, Alan Storm has a great write-up of mixins, so go read his article on Magento 2 Mixins, and of course this article on modifying jQuery widgets through mixins.
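To make the mechanics less abstract, here is a framework-free sketch (all names hypothetical) of what a mixin module essentially does: it exports a function that receives the original component and returns a modified version of it.

```javascript
// A stand-in for the original module (e.g. the swatch renderer).
var swatchRenderer = {
    render: function (label) {
        return 'swatch:' + label;
    }
};

// The "mixin": wrap the original method, add behaviour, return the target.
function swatchRendererMixin(target) {
    var originalRender = target.render;
    target.render = function (label) {
        // Call the original implementation, then adjust its result.
        return originalRender.call(this, label).toUpperCase();
    };
    return target;
}

// RequireJS would do this wiring for you when the dependency is requested.
var mixed = swatchRendererMixin(swatchRenderer);
console.log(mixed.render('blue')); // SWATCH:BLUE
```

Magento's mixin machinery performs that wrapping step automatically based on the requirejs-config.js declaration, which is why the original file never needs to be copied.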
I'm not too sure how I feel about mixins - in theory it's a great way to modify JS behaviour. However, I've noticed it sometimes doesn't load in Firefox or...
Some clients have expressed interest in upgrading to Magento 2, the latest and greatest version of the popular eCommerce platform. It seems tempting, but considering the costs, and the mostly intangible benefits from an end-user point of view, it's hard to find a business case for it at this moment.
But don't you have to upgrade anyway in the future?
True - that is, if you're staying on Magento. However, if you take the state of the Magento 2 ecosystem into account, it might still be worth it to maintain your Magento 1 codebase, and migrate later, before M1 becomes end-of-life in late 2018.
Why? Wouldn't you save costs on double development?
Yes and no. While you would prevent double development, it is significantly more time-consuming to develop for Magento 2 than for Magento 1. It's still riddled with (sometimes major) bugs and inconsistencies that drag out development time. It's also heavy and slow on most development environments, with a messy front-end. On top of that, the deployment process is more complicated (which is not necessarily a bad thing), and many hosting providers don't offer Varnish. Agencies are reporting considerably more development time for their M2 builds (source, and source).
Hold on - does that mean Magento 2 is more upmarket/Enterprise?
As ImagineMedia points out, when Magento 1 launched, it was in the same boat. However, eventually an ecosystem formed around the platform: an active and dedicated community, extensions, themes, more experienced developers, specialised hosting providers, developer tools and environments, and so on. Also important is that hardware became faster and less expensive (both for developers customizing a Magento installation locally, and in hosting fees). All of this significantly lowered the barriers to entry for M1, and no doubt the same thing will happen with Magento 2.
Back to the case to upgrade to Magento 2
Now that we've determined that development costs of M2 will go down in the near future, and that there's plenty of time left for M1, it's very likely that smaller businesses are better off spending their money on marketing and product, rather than on an M1-to-M2 upgrade.
Any thoughts, please let me know in the comments!
So I finished this Google Apps integration with Magento. It was an interesting and challenging job (interesting and challenging usually go hand in hand), and my expectations of using the Apps Script API turned out different from reality, for better or worse.
A bit of background: the store I worked on sells heavily customisable products, predominantly in the B2B space. Rather than forcing customers to enter their specifications through the web store, a Google Spreadsheet is shared with customers, which they can fill out to communicate their requirements.
Google provides various APIs to create Google Drive documents, and the main one is the Google Apps Script API. I also considered the Spreadsheet API, but decided it's not as powerful (and actually I don't think there's any point in ever using it; correct me if I'm wrong).
Apps Script is basically Javascript that runs in Google's environment, executed as an authenticated and authorized user.
When programming with Apps Script, you have access to a bunch of global variables to interact with Google services. For example, the code below will get the Gmail drafts of the user that runs it:
var drafts = GmailApp.getDraftMessages();
Apps Script can run in two contexts: as a standalone application, or bound to a container. When run standalone, an Apps Script project is just a file in someone's Google Drive that's either triggered manually, or by a service called the "Google Apps Script Execution API".
A diagram of the system looks something like this:
A Magento admin user authenticates with Google using OAuth2 and authorizes some permissions. The refresh and access tokens get saved in Magento’s system config, since there’s only one authenticated user per website.
Then, after order placement, the Apps Script project gets executed, which eventually creates the document and shares it with the appropriate parties.
So far so good.
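To give an idea of what the executed project can look like, here is a hypothetical Apps Script function (the function name, parameters and sheet layout are all invented for illustration) that creates the spreadsheet and shares it:

```javascript
// Hypothetical Apps Script function, invoked from Magento through the
// Apps Script Execution API after an order is placed.
function createOrderSpecSheet(orderNumber, customerEmail) {
  // Create the spreadsheet the customer will fill out.
  var spreadsheet = SpreadsheetApp.create('Specifications for order #' + orderNumber);

  // Give it a header row for the customer's requirements.
  spreadsheet.getActiveSheet().appendRow(['Item', 'Specification', 'Notes']);

  // Share the document with the customer.
  spreadsheet.addEditor(customerEmail);

  // Return the URL so Magento can store it against the order.
  return spreadsheet.getUrl();
}
```

This only runs inside Google's environment, where globals like SpreadsheetApp are available.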
I also tried hooking up the deployment of the Apps Script project with normal code deployments (I use Envoyer.io), but this failed miserably and I sort of gave up on the idea. Apps Script projects being files in somebody's Google Drive makes them really hard to update.
Even if the deployments work, someone still has to manually go into the Apps Script project and deploy it as an API executable with a new version number.
Also, you can only deploy and execute Apps Script projects that are owned by you. This means that if person x maintains the application, but person y uses it, the Apps Script project has to be owned by y, and manually deployed by y. This can lead to somewhat awkward situations, because sometimes the deployment of the Magento integration and the Apps Script project have to be in sync.
There were some other annoying bits: the official code examples not saving the refresh token, some scopes not working well with each other, deprecated scopes still being referenced by official documentation, and vagueness on how to deal with development, testing, staging and production...
By default, Magento won't show the root category, for reasons I still haven't found out.
There are a lot of modules that have that functionality, like Amasty’s Improved Navigation. However, it turns out it’s also pretty easy to do yourself.
In Mage_Catalog_CategoryController::_initCatagory (yes, it's spelled wrong) there is the following check:
if (!Mage::helper('catalog/category')->canShow($category)) {
    return false;
}
The canShow method basically checks that the category exists, is active, and is not a root category. If it's a root category, it won't allow you to show it.
All we have to do is create a new route to a new controller that extends Mage_Catalog_CategoryController (no need to override the existing route; we're just adding a new one).
Your config.xml should contain:
<config>
    ...
    <frontend>
        <routers>
            <shop_root_category>
                <use>standard</use>
                <args>
                    <module>Module_Name</module>
                    <frontName>shop</frontName>
                </args>
            </shop_root_category>
        </routers>
    </frontend>
</config>
And your IndexController.php:
require_once Mage::getModuleDir('controllers', 'Mage_Catalog') . DS . 'CategoryController.php';

class Vendor_MyModule_IndexController extends Mage_Catalog_CategoryController
{
    protected function _initCatagory()
    {
        // ....
    }

    public function indexAction()
    {
        return $this->_forward('view');
    }
}
As for the _initCatagory method, copy the original parent method and comment out the canShow check.
In addition to that, replace the following line:
$categoryId = (int) $this->getRequest()->getParam('id', false);
... with:
$categoryId = Mage::app()->getStore()->getRootCategoryId();
That should be it to show the root category products as a normal category page.
The cool thing about this is the abstraction between layout handles and controllers: everything will still work even though we're using a different controller. Even Lesti_Fpc will cache your custom controller out of the box, because it matches the catalog_category_view full action name handle.
I'm pretty late to the party, but decided to investigate anyway, since I have to deal with many stores whose indexers get stuck because of various SQL errors related to the URL rewrite index, and whose tables are way out of proportion for the number of categories and products they have.
This bug will make your table grow into the millions of rows and hundreds of megabytes. Sure, you can eliminate all duplicate keys, but what about apparel or fashion stores that rotate products quickly? They often have duplicate URL keys, where some of the products were disabled a long time ago. The fact that duplicate URL keys exist isn't bad per se.
There are many suggestions on the internet saying to truncate the core_url_rewrite table, but that's only a temporary fix, so I propose a couple of more long-term approaches.
I guess the best way to fix it is to never have duplicate URL keys in the first place. You can create a module that prevents you from saving a product if another product exists with the same URL key.
I assume it's pretty easy to do: change the backend model for the url_key attribute (Mage_Catalog_Model_Product_Attribute_Backend_Urlkey) and add a check in the beforeSave method, before calling return parent::beforeSave();
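A sketch of what that could look like follows. The class name is hypothetical, and the rewrite would need to be registered in your module's config.xml; this is an untested illustration of the duplicate check, not a drop-in module.

```php
<?php
// Hypothetical rewrite of the url_key backend model that refuses to save
// a product whose URL key is already used by another product.
class Vendor_UrlKeyGuard_Model_Urlkey
    extends Mage_Catalog_Model_Product_Attribute_Backend_Urlkey
{
    public function beforeSave($object)
    {
        $attributeName = $this->getAttribute()->getName();
        $urlKey = $object->getData($attributeName);

        if ($urlKey) {
            // Look for any other product carrying the same URL key.
            $duplicates = Mage::getModel('catalog/product')->getCollection()
                ->addAttributeToFilter($attributeName, $urlKey)
                ->addFieldToFilter('entity_id', array('neq' => (int) $object->getId()));

            if ($duplicates->getSize()) {
                Mage::throwException(
                    sprintf('URL key "%s" is already used by another product.', $urlKey)
                );
            }
        }

        return parent::beforeSave($object);
    }
}
```

Throwing from beforeSave aborts the product save, so the admin is forced to pick a unique key.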
The second option is to patch the issue, but it'll require a class rewrite of the indexer model (if you want to keep your store upgradable): https://gist.github.com/edannenberg/5310008
There is also this module which fixes all three issues, but I haven't tried it: http://www.magentocommerce.com/magento-connect/dn-d-patch-index-url-1.html
If your table is already too big, see if it's actually due to duplicate keys.
You can find out how many duplicate URL keys you have with the following query:
SELECT COUNT(DISTINCT entity_id) AS amount, `value`
FROM catalog_product_entity_varchar v
WHERE EXISTS (
    SELECT *
    FROM eav_attribute a
    WHERE attribute_code = "url_key"
    AND v.attribute_id = a.attribute_id
    AND EXISTS (
        SELECT *
        FROM eav_entity_type e
        WHERE entity_type_code = "catalog_product"
        AND a.entity_type_id = e.entity_type_id
    )
)
GROUP BY v.`value`
HAVING amount > 1
ORDER BY amount DESC;
Also, instead of truncating the whole table, you can use the following query to clear out only the unnecessary rewrites (and make sure to create a backup first):
DELETE
FROM core_url_rewrite
WHERE is_system <> 1
AND id_path REGEXP "^[0-9]+_[0-9]+$";
And before doing this, make sure you have a grip on your SEO situation since it might destroy existing links on the web.
Let's say a merchant wants a new Magento online store. Now, there are a couple of scenarios that are more or less likely:
Let's discuss #2 to #5.
This one is easy. You can create a new customer object and set the password on it; the password will automatically be converted into a hash when you save. Check out the beforeSave method in Mage_Customer_Model_Customer_Attribute_Backend_Password.
A simple use case:
$customer = Mage::getModel('customer/customer')
    ->setFirstname($firstname)
    ->setLastname($lastname)
    ->setEmail($email)
    ->setWebsite($website)
    ->setPassword($password)
    ->setImportMode(true)
    ->save();
I like to create my own advanced dataflow adapters. Their logic is simple, they don't time out since they use multiple (AJAX) calls to import a batch, and they use the main customer model to access the persistent storage, so I know all required business logic will be executed.
It’s also easy to map the CSV the client gave you - which always has a different format ;) - to what Magento expects.
Creating an adapter that processes a CSV export from Zencart's customer table can look like this:
class EI_ZencartCustomerImport_Model_Convert_Adapter_Customer
    extends Mage_Customer_Model_Convert_Adapter_Customer
{
    public function saveRow($data)
    {
        // I like working with objects better than arrays
        $zenCustomer = new Varien_Object($data);

        $customerData = array(
            'website'       => 'main',
            'email'         => trim($zenCustomer->getCustomersEmailAddress()),
            'group'         => 'General',
            'firstname'     => trim($zenCustomer->getCustomersFirstname()),
            'lastname'      => trim($zenCustomer->getCustomersLastname()),
            'password_hash' => $zenCustomer->getCustomersPassword()
        );

        // Let's temporarily disable all email communication for this request
        // just to be sure it won't send out any emails
        Mage::app()->getStore()->setConfig('system/smtp/disable', 1);

        parent::saveRow($customerData);

        return $this;
    }
}
One thing you want to take note of is passwords with fewer than 6 characters. You can prompt those customers to reset their password because of security concerns.
Basically, it involves creating an attribute (essentially a flag) called "Password needs reset" on the customer. You turn the flag on when importing a customer whose password is shorter than 6 characters. When the customer tries to log in and the flag is on, you redirect them to the "forgot your password" page (renamed to a better fit, "reset password"), telling them their password is insecure and they need to update it before continuing.
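A rough sketch of the login-time check could look like the observer below. The class name, attribute code and event wiring are all hypothetical (the observer would be registered against the customer_customer_authenticated event in config.xml), and the redirect handling is simplified for illustration.

```php
<?php
// Hypothetical observer: after a customer authenticates, check the
// "password needs reset" flag set during the import, and force the
// customer through the password-reset flow if it is on.
class Vendor_PasswordReset_Model_Observer
{
    public function checkPasswordResetFlag(Varien_Event_Observer $observer)
    {
        // customer_customer_authenticated passes the customer as "model".
        $customer = $observer->getEvent()->getModel();

        if ($customer->getData('password_needs_reset')) {
            Mage::getSingleton('customer/session')->addNotice(
                'Your password is insecure and must be updated before continuing.'
            );

            // Simplified: send the customer to the (renamed) reset page.
            Mage::app()->getResponse()
                ->setRedirect(Mage::getUrl('customer/account/forgotpassword'))
                ->sendResponse();
            exit;
        }
    }
}
```

Once the customer completes the reset, you would clear the flag so the redirect stops firing.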
Be aware though that you need to remove the 6 character password-validation for...
Basically, I was working on a website that now wanted to expand into a different country. The new store looked similar, but would have slightly different templates. Usually, in a similar situation, I would use a different theme under the same package (to make use of the "natural" theme fallback to "default").
I couldn't use a different theme in this case because I was already using themes for certain pages that looked slightly different as a way to easily override template files without having to write any XML. (I was changing the theme in an observer event for specific pages.)
Then I tried using a new package, but that didn't work out because I didn't want to override local.xml. Overriding local.xml means the whole thing needs to be copied, which results in duplication.
Eventually I added a layout update to the theme.xml in the new package. That didn't work out either, because I couldn't reference anything defined in local.xml.
$updateFiles = array();
foreach ($updates as $updateNode) {
    if (!empty($updateNode['file'])) {
        $module = isset($updateNode['@']['module']) ? $updateNode['@']['module'] : false;
        if ($module && Mage::getStoreConfigFlag('advanced/modules_disable_output/' . $module, $storeId)) {
            continue;
        }
        $updateFiles[] = $updateNode['file'];
    }
}
// custom local layout updates file - load always last
$updateFiles[] = 'local.xml';
As shown in this code snippet (line 426 in the Mage_Core_Model_Layout_Update class), local.xml is always loaded last.
I remembered that since Magento 1.9.0.1 you can add your own layout updates in the theme.xml. All I did was copy everything that was in local.xml into my newly defined layout update file (in the package of the existing store), and everything kept working. Then I created a theme.xml for the new package, to which I added my changes for the new store. Tada!
(The new store's theme.xml had to define both layout updates, even though it was inheriting from the existing store. This is either a feature or a bug. See https://ericwie.se/blog/magento-infinite-theme-fallback-fix and http://alanstorm.com/magento_infinite_fallback_theme_xml.)
In short, you can add a layout update in app/design/frontend/yourpackage/etc/theme.xml. The advantage is that you have control over the load order of your layout updates. Before Magento 1.9, local.xml was always loaded last, and it was impossible to have layout directives executed after it.
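As a sketch of the declaration (the "project" handle and project.xml file name are hypothetical; the referenced file lives in the theme's layout directory, next to where local.xml would sit):

```xml
<?xml version="1.0"?>
<!-- File: app/design/frontend/yourpackage/etc/theme.xml -->
<theme>
    <layout>
        <updates>
            <project>
                <file>project.xml</file>
            </project>
        </updates>
    </layout>
</theme>
```

This matches the loop shown earlier: each node with a `<file>` child is appended to the list of layout update files, before local.xml is added last.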
I'm not using local.xml any more for project-specific theme modifications. I think local.xml is only kept for backward-compatibility reasons (in 1.9 and above), as there is no point in using it any more. local.xml is also eliminated in Magento 2 (https://github.com/magento/magento2/issues/1037).