File naming in JavaScript world

One thing that I find confusing in the JS/TS culture is file naming: things like “something.xyz.ts”.

Let’s take a look at Nest.js.

These are some examples from the documentation:

  • cats.controller.ts: export class CatsController
  • app.module.ts: export class AppModule
  • cats.service.ts: export class CatsService
  • create-cat.dto.ts: export class CreateCatDto
  • cat.interface.ts: export interface Cat

And here are some examples from real projects:

  • virtual-product.mapper.ts: export class VirtualProductMapper
  • virtual-product.repository.ts: export class VirtualProductRepository
  • image-gallery.ts: export class ImageGallery

Some things, like repositories, controllers or services, get a special place in file names for some reason. Yet a “Product Repository” does not differ from an “Image Gallery”: those are just “a repository of products” and “a gallery of images”. The same goes for a “product mapper” or an “application module”. There is no sacred, exclusive meaning in those.
They should all be just “product-repository.ts”, “product-mapper.ts”, “app-module.ts” and so on.

Another case is “cat.interface.ts”. Contrary to PHP standards, interface names in TypeScript usually don’t include the “Interface” suffix. So why do file names? Why not just “cat.ts”?

The same applies to enums, e.g. “order-status.enum.ts”: export enum OrderStatus. Why put this “.enum” part in the file name?

Another case from the swagger module:

  • type-helpers/omit-type.helper.ts: export function OmitType

It’s just chaotic: the folder says “type-helpers”, the file name argues “type.helper”, and the exported function is just OmitType. So what does “.helper” stand for in the file name? And what is a helper anyway? It’s not a real thing.

The PSR autoloading standards in PHP, besides their main value, brought consistency here:

  • there’s just one thing in a file: a class, an interface or a trait.
  • the file name corresponds 100% to the name of that thing.

This brings peace.

In code, both PHP and JavaScript use PascalCase for type names. JavaScript tends to use kebab-case or camelCase in file names, which is totally fine, but please be consistent: drop those dots.

Secure by default set-cookie functions in PHP

Recently I studied the upcoming changes to the treatment of the SameSite cookie attribute.
And when I got to the respective RFC, which proposed a new parameter for the setcookie function, I was disappointed twice.

The reasons were the decision taken and the cause of that decision. While I completely understand the historical and cultural grounds for it, I still discourage you from taking it as an example solution for your own coding problems.

So the proposal was: add another parameter to the setcookie function for the SameSite flag that was introduced recently. It would be the eighth parameter of that function, which already sounds horrifying, but at least it was in the “spirit” of this function.

However, the proposal was declined by the voters in favor of another solution, which I would describe as follows:

Let us stop adding parameters to this function, as there are already plenty of them and we don't know how many will appear in the future. We need a more flexible interface for this.

Of course, when it comes to flexibility in PHP, the array comes into play. So the voted alternative signature became:

    setcookie ( string $name [, string $value = "" [, array $options = [] ]] ) : bool

Hurray, now we can pass everything we want! But wait, what can we pass? What are the names of the keys in the $options array?
Is it ‘expire’ or ‘expires’? Is it ‘httponly’, ‘HttpOnly’ or maybe ‘http_only’? ‘SameSite’ or ‘samesite’? Can I pass ‘maxage’?
You will always need to go to php.net to answer these questions. The interface became absolutely unclear.
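For reference, here is a minimal sketch of the array-based call as it ended up in PHP 7.3+. The key names below are the ones the manual actually defines, and the point stands: you cannot guess them from the old signature.

```php
<?php
// PHP 7.3+ $options variant of setcookie().
// The key names have to be looked up: it is 'expires' (not 'expire'),
// 'httponly' (not 'HttpOnly'), 'samesite' (not 'SameSite'),
// and there is no 'maxage' key at all.
setcookie('sause', 'bbq', [
    'expires'  => time() + 604800, // one week
    'path'     => '/',
    'domain'   => 'example.com',
    'secure'   => true,
    'httponly' => true,
    'samesite' => 'Strict',
]);
```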

Okay, it’s not an easy thing to support and update a programming language (especially PHP), so let us not throw stones at the voters.
Let us instead learn what the problem is and see how we can do better.

Let’s take a look at the setcookie function arguments:

    setcookie (
        string $name,
        string $value = "",
        int $expires = 0,
        string $path = "",
        string $domain = "",
        bool $secure = false,
        bool $httponly = false,
        string $samesite = ""
    )

The first two, name and value, are attributes of the cookie itself. The other six are instructions to the browser on how this cookie has to be managed.

First step would be to introduce a Cookie value object with $name and $value constructor arguments:

    setcookie (
        Cookie $cookie,
        int $expires = 0,
        string $path = "",
        string $domain = "",
        bool $secure = false,
        bool $httponly = false,
        string $samesite = ""
    )

The $expires parameter indicates the maximum lifetime of the cookie, represented as the timestamp of the date and time at which the cookie expires. The default value, 0, means that no expiration date is set, so the browser keeps the cookie for the session lifetime.

Most of the time you will find yourself writing something like now() + 604800 /* one week */ for this parameter. Of course, we want to use a DateTime value object here as well:

    setcookie (
        Cookie $cookie,
        DateTime $expirationDate,
        string $path = "",
        string $domain = "",
        bool $secure = false,
        bool $httponly = false,
        string $samesite = ""
    )

As for the path and domain, we could also introduce value objects, but there is no necessity for that. Let’s keep it simple and stick with strings. We also leave the default values out of scope for now; otherwise we would need to dig deep into the HTTP State Management Mechanism RFC.

We will, however, change the order of the arguments to a more natural one:

    setcookie (
        Cookie $cookie,
        DateTime $expires,
        string $domain = "",
        string $path = "",
        bool $secure = false,
        bool $httponly = false,
        string $samesite = ""
    )

Now let’s handle the rest of the arguments.
First, you can see that the function is not secure by default. This applies to all three remaining arguments, because they are all about security.
Second, these attributes are all just flags. Flag arguments reduce readability and hide the intention of the function. Take a look at the example below.

    setcookie(new Cookie('sause', 'bbq'), new DateTime('+1 week'), 'example.com', '/', true, false, 'Strict');

What are these true, false? Do you remember the order of the arguments?
What are the other options for the samesite argument? What if I accidentally pass 'Steict' instead of 'Strict'?
What does the default value "" mean for the $samesite argument? Questions, questions. Is there a way to answer them all at once?

There is: remove the flag arguments and create a separate function for each meaningful state. You will end up with 10 functions:

    setSameSiteCookie(Cookie $cookie, DateTime $expires, string $domain = "", string $path = "");
    setLaxSameSiteCookie(Cookie $cookie, DateTime $expires, string $domain = "", string $path = "");
    setNotSameSiteCookie(Cookie $cookie, DateTime $expires, string $domain = "", string $path = "");
    setSameSiteNotHttpOnlyCookie(Cookie $cookie, DateTime $expires, string $domain = "", string $path = "");
    setLaxSameSiteNotHttpOnlyCookie(Cookie $cookie, DateTime $expires, string $domain = "", string $path = "");
    setNotSameSiteNotHttpOnlyCookie(Cookie $cookie, DateTime $expires, string $domain = "", string $path = "");
    setSameSiteNotSecureCookie(Cookie $cookie, DateTime $expires, string $domain = "", string $path = "");
    setLaxSameSiteNotSecureCookie(Cookie $cookie, DateTime $expires, string $domain = "", string $path = "");
    setSameSiteNotSecureNotHttpOnlyCookie(Cookie $cookie, DateTime $expires, string $domain = "", string $path = "");
    setLaxSameSiteNotSecureNotHttpOnlyCookie(Cookie $cookie, DateTime $expires, string $domain = "", string $path = "");

Since, per the mentioned changes, cookies with SameSite=None must also carry the Secure flag (otherwise they are ignored), we can skip the “not-same-site, not-secure” combinations.
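To make the idea concrete, here is a minimal sketch of one of these wrappers. The Cookie class is the hypothetical value object from above, and the wrapper simply delegates to the native setcookie() (using its PHP 7.3+ options array) with the security flags fixed by the function name:

```php
<?php
// Hypothetical Cookie value object from the article.
final class Cookie
{
    private $name;
    private $value;

    public function __construct(string $name, string $value)
    {
        $this->name = $name;
        $this->value = $value;
    }

    public function name(): string  { return $this->name; }
    public function value(): string { return $this->value; }
}

// One of the ten proposed functions: every flag is fixed, so the name
// tells the whole story and nothing can be misspelled or misordered.
function setLaxSameSiteCookie(
    Cookie $cookie,
    DateTime $expires,
    string $domain = "",
    string $path = ""
): bool {
    return setcookie($cookie->name(), $cookie->value(), [
        'expires'  => $expires->getTimestamp(),
        'path'     => $path,
        'domain'   => $domain,
        'secure'   => true,   // secure by default
        'httponly' => true,   // secure by default
        'samesite' => 'Lax',  // fixed by the function name
    ]);
}
```

The other nine wrappers differ only in the hard-coded flag values.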

Now I will just give you some examples, so you can compare their readability for yourselves:

    setcookie('sause', 'bbq', now() + 60 * 60 * 24 * 7, '/', 'example.com', true, true, 'Lax');
    // vs
    setLaxSameSiteCookie(new Cookie('sause', 'bbq'), new DateTime('+1 week'), 'example.com', '/');

    setcookie('sause', 'bbq', strtotime('+1 month'), '/', 'example.com', true, false, 'Strict');
    // vs
    setSameSiteNotHttpOnlyCookie(new Cookie('sause', 'bbq'), new DateTime('+1 month'), 'example.com', '/');

And this one is my favorite:

    setcookie('sause', 'bbq', now() + 1209600, '/', 'example.com');
    // vs
    setLaxSameSiteNotSecureNotHttpOnlyCookie(new Cookie('sause', 'bbq'), new DateTime('+2 weeks'), 'example.com', '/');

The first variant seems short and easy, yet it hides all the insecurity and uncertainty. It is easy to make a mistake there, whereas it is really difficult to do so in the second variant.

The approach to building interfaces described here keeps the surprise level at a minimum, which reduces the number of bugs and the time your colleagues spend reading the code.

PHP Namespaces the right way

Namespaces in PHP are one of the things left without proper attention from the standard creators. While this gives a lot of space for creativity, it also results in a messy, unpredictable package structure, sometimes even when just one developer works on a package, let alone when you work in a team.

The rules below reflect my experience of working on large projects in two-pizza teams.
All of them are based on a single principle: predictability.

Standard

Stick to the PSR-4 Autoloader standard. A standard is good for predictability, and this one is a good one. If you are forced to use PSR-0 (or whatever) by the framework, that’s not a big deal. Just follow the rest of the rules; they do not depend on the PSR.

Public/internal

Put the classes that form the public API of your package as close to the package root as possible.
Use deeper nesting levels (sub-namespaces) for the internal API, i.e. the components of the public API classes. Internal API classes should be treated as private.

Sub-namespaces

Sub-namespaces may be of three types:

  • components,
  • group of classes,
  • subpackage.

Different rules apply to each type.

Components

Assuming you use composition and the SRP, your public API classes will be built from smaller components represented by other classes
(a DDD Aggregate is a good example). Normally these components are only used by their container class and should be treated as private/protected.

Components are put into a sub-namespace named by the container class.

Example:

    \Acme\ConfigReader           // Public API, uses Merger and Parser as components.
    \Acme\ConfigReader\Merger    // Internal API
    \Acme\ConfigReader\Parser    // Internal API

Here the \Acme\ConfigReader class is a public API, and no one except it should work directly with \Acme\ConfigReader\Merger or \Acme\ConfigReader\Parser.

Of course, a component can act as a container for its own sub-components, so this rule applies recursively.

Group of classes

Some design patterns, for example Strategy (which I use often), require several classes that do the same sort of thing but somehow differently. These classes often represent different implementations of some Type (or Interface, if you like) and are used in a polymorphic manner.

Such classes may be put into a sub-namespace named after their Type in plural form.

Example:

    \Acme\ConfigReaders\Filesystem
    \Acme\ConfigReaders\Database

The Type should not be duplicated in the class names.

    // Incorrect
    \Acme\ConfigReaders\FilesystemConfigReader

Quite often you will also need to put the Type interface somewhere, along with a Factory that creates a concrete implementation based on some input.
Here is a common structure:

    \Acme\ConfigReaders\Filesystem   // Internal API (concrete implementation), implements \Acme\ConfigReaderInterface
    \Acme\ConfigReaders\Database     // Internal API (concrete implementation), implements \Acme\ConfigReaderInterface
    \Acme\ConfigReaderFactory        // Public API, creates instances of \Acme\ConfigReaderInterface
    \Acme\ConfigReaderInterface      // Public API

Another use case for this technique is domain-level exception classes:

    \Acme\Exceptions\SomethingWeirdHappened
    \Acme\Exceptions\SomethingWentDrasticallyWrong

However, in this case the exceptions are a public API of the package.

Subpackages

Sometimes you might want to organize your package into subpackages. This can be valid, for example, when your package is a plug-in that affects the behavior of different sub-systems, e.g. Catalog, Cart and Checkout.

The respective classes are then put into a sub-namespace named after the subpackage. This approach is similar to components, but there is no container class in this structure.

Example:

    \Acme\Catalog\...  (classes related to Catalog)
    \Acme\Cart\...     (classes related to Cart)
    \Acme\Checkout\... (classes related to Checkout)

Grouping by subpackages only makes sense if you have many classes to put in each of them.
Otherwise you can just mention the respective sub-system in the class name:

    \Acme\CatalogMessageRenderer
    \Acme\CartTotalRow
    \Acme\CheckoutTotalRow

In any case, I suggest keeping your packages small. Before introducing a subpackage, consider creating a new package first.

Shoot yourself in the foot with Exception

Zend:

    class Zend_Controller_Request_Http
    {
        /**
         * @throws Zend_Controller_Request_Exception
         */
        public function getHeader($header)
        {
            if (empty($header)) {
                throw new Zend_Controller_Request_Exception('An HTTP header name is required');
            }

            ...
        }

        ...
    }

Developer:

    $request->getHeader('Origin');

IDE:

    (!) Unhandled Zend_Controller_Request_Exception

Why should I handle the exception if I KNOW that it will never happen?

Why should I suffer?
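One way to suffer less is the convention of throwing a LogicException subclass for programmer errors. A sketch (the Request class below is an illustrative stand-in, not Zend’s actual API):

```php
<?php
// Illustrative stand-in for a request class; not Zend's actual API.
final class Request
{
    private $headers;

    public function __construct(array $headers = [])
    {
        $this->headers = $headers;
    }

    public function getHeader(string $header): ?string
    {
        if ($header === '') {
            // An empty name is a bug in the caller, not a runtime condition
            // the caller should be forced to handle. InvalidArgumentException
            // (a LogicException subclass) communicates "fix the call site",
            // so it is conventionally neither declared via @throws nor caught.
            throw new InvalidArgumentException('An HTTP header name is required');
        }

        return $this->headers[$header] ?? null;
    }
}
```

Whether an IDE nags about a given exception type is configurable, but the convention itself already draws the line between “handle me” and “fix your code”.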

Using Xdebug with Docker Compose and PhpStorm

1. Add Xdebug to your PHP application container

Add the following lines to your php Dockerfile:

    # Install Xdebug
    RUN yes | pecl install xdebug \
        && echo "zend_extension=$(find $(php-config --extension-dir) -name xdebug.so)" \
        > /usr/local/etc/php/conf.d/xdebug.ini

2. Add necessary environment variables

docker-compose.override.yaml

    services:
      your_php_app_service:
        environment:
          XDEBUG_ENABLED: 1
          XDEBUG_REMOTE_AUTOSTART: 1
          XDEBUG_MAXNESTING_LEVEL: 1000
          XDEBUG_REMOTE_CONNECT_BACK: 1
          XDEBUG_REMOTE_HOST: host.docker.internal
          PHP_IDE_CONFIG: serverName=localhost

Here we do the following things:

  1. Enable the Xdebug extension.
  2. Enable automatic start on every request (see the note on this below).
  3. Increase the default maximum function nesting level, because it is often not enough.
  4. Instruct Xdebug to connect back to the IP the web request came from.
  5. Instruct Xdebug to connect to host.docker.internal for command-line execution, or whenever “connect back” is not possible.
  6. Set the PHP_IDE_CONFIG env variable to serverName=localhost. This tells PhpStorm which server configuration to use; see the next step for details.

3. Configure server in PhpStorm

In your PhpStorm Settings go to Languages and Frameworks > PHP > Servers and add a new server:

  • Name: localhost
  • Host/Port: whatever host and port you use to open your local website, for example: ‘magento.localhost’ and ‘8080’.
  • Debugger: Xdebug
  • Use path mappings: yes

Configure the path mapping according to your source code volume mount in docker-compose.yaml.

I have the following mount: ./magento:/var/www/html, therefore my local ./magento directory is mapped to the /var/www/html path on the server.

Debugging

One important thing: you need to start listening for PHP debug connections with the small phone icon in PhpStorm.

Autostart

Normally you would need a browser extension that adds the debug session start flag to your requests when you need it.
However, I find it convenient to enable autostart and control Xdebug only via PhpStorm.
The thing is, when Xdebug tries to start a debugging session and the remote host is not listening, it simply does not start one.
So when I don’t need to debug, I just switch listening off. I believe this still adds a small time overhead to every request, but for me it is unnoticeable.

If you don’t like this approach, just disable autostart and start sessions your own way (see: Activate debugger).

Creating Run/Debug configurations in PhpStorm

Sometimes it is useful to create and store a specific configuration so you can run it over and over.
I will not cover the whole Run/Debug configurations topic here, only one Docker-specific aspect: you need to teach PhpStorm to run the PHP interpreter inside your container.

For this you need to create a new PHP CLI interpreter configuration:

  • in your PhpStorm Settings, go to Languages and Frameworks > PHP and click the ‘…’ button near the “CLI Interpreter” field,
  • in the new window, add a new interpreter “From Docker, Vagrant, VM, Remote…”,
  • choose the “Docker Compose” radio button,
  • select or create a new Server (use the Unix socket to connect to the Docker daemon),
  • choose the Docker Compose configuration files; in my case I choose two, in the following order:
    • docker-compose.yaml
    • docker-compose.override.yaml
  • select your PHP app service,
  • choose “Connect to existing container” instead of starting a new one.

That should be enough. Save everything and create your Run/Debug configuration using this CLI interpreter.

Integrating Blackfire.io with Docker Compose

Blackfire integration with Docker is fully covered in the official documentation: https://blackfire.io/docs/integrations/docker
But here is a short summary of how to add Blackfire support to your project’s Docker Compose setup.

Pre-requisites

1. You’ll need to have a Blackfire account

Blackfire stores all profiles on their servers, so you need access to them.
You will be given a Server ID, Server Token, Client ID and Client Token, which you will need later for configuration. They can be found here: https://blackfire.io/my/settings/credentials.

2. You’ll (most probably) need a Blackfire Companion in your browser

It connects to your account and lets you start profiling directly from your browser. Install and configure it as described in the official documentation.

Integrate Blackfire with your Docker Compose

1. Add Blackfire Agent to your network

Blackfire Agent sends your local profiling results to the Blackfire servers. The easiest way is to add it as a separate container, using the official blackfire/blackfire image:

docker-compose.yaml

    services:
      ...
      blackfire_agent:
        image: blackfire/blackfire
        restart: always

The Agent needs the Server ID and Server Token to communicate with the Blackfire API. You can configure them using environment variables.

docker-compose.override.yaml

    services:
      ...
      blackfire_agent:
        environment:
          ...
          BLACKFIRE_SERVER_ID: <Your Server ID>
          BLACKFIRE_SERVER_TOKEN: <Your Server Token>

Find out more about docker-compose.override.yaml file here: Using override file.

2. Add Blackfire PHP Probe and CLI tool to your application container

Blackfire Probe collects execution stats and sends them to the Agent.

Assuming your image is based on the official PHP-FPM image, add the following lines to your php Dockerfile:

Dockerfile

    FROM php:7.2-fpm

    ...

    # You will need these to run the commands below
    RUN apt-get update && apt-get install -y \
        wget \
        gnupg2

    ...

    # Install Blackfire CLI tool and PHP Probe
    RUN wget -q -O - https://packages.blackfire.io/gpg.key | apt-key add -
    RUN echo "deb http://packages.blackfire.io/debian any main" | tee /etc/apt/sources.list.d/blackfire.list
    RUN apt-get update && apt-get install -y blackfire-agent blackfire-php

    ...

The CLI tool needs the Client ID and Client Token to communicate with the Blackfire API. You can configure them using environment variables.

docker-compose.override.yaml

    services:
      ...
      your_php_app_service:
        environment:
          ...
          BLACKFIRE_CLIENT_ID: <Your Client ID>
          BLACKFIRE_CLIENT_TOKEN: <Your Client Token>

3. Point your PHP Probe to your Agent

Add a zz-blackfire.ini file to your php-related configuration files in the repo. In my case it’s docker/local/php/conf/zz-blackfire.ini.
Put the following contents into it:

zz-blackfire.ini

    ; priority=90
    [blackfire]
    extension=blackfire.so
    ; Default port is 8707.
    ; You can check the actual configuration by running (see the "socket" setting):
    ; docker-compose exec blackfire_agent blackfire-agent -d
    blackfire.agent_socket = tcp://blackfire_agent:8707
    blackfire.agent_timeout = 0.25

    ;Sets fine-grained configuration for Probe.
    ;This should be left blank in most cases. For most installs,
    ;the server credentials should only be set in the agent.
    ;blackfire.server_id =

    ;Sets fine-grained configuration for Probe.
    ;This should be left blank in most cases. For most installs,
    ;the server credentials should only be set in the agent.
    ;blackfire.server_token =
    ;blackfire.log_level = 3
    ;blackfire.log_file = /tmp/blackfire.log

Here we set agent_socket to tcp://blackfire_agent:8707, where “blackfire_agent” is the name of the Agent service added above and 8707 is the default port the Agent listens on.

Then mount this file into your application container:

docker-compose.yaml

    services:
      your_php_app_service:
        ...
        volumes:
          ...
          - ./docker/local/php/conf/zz-blackfire.ini:/usr/local/etc/php/conf.d/zz-blackfire.ini

That’s it

Re-build your containers and you are good to go.

Profiling from CLI

The Blackfire CLI tool can be used to profile your PHP CLI commands or simple scripts. We installed it in the application container, so here is how you run it:

    docker-compose exec your_php_app_service blackfire run bin/magento list

If you need this often, I’d suggest creating a bin helper.

Profiling with PHP SDK

With the Blackfire PHP SDK you can trigger profiling from PHP code and profile a specific piece of code in your app. You also get programmatic access to the profiling results, which gives you more flexibility in analyzing them.

The PHP SDK, like the CLI tool, takes the Client ID and Client Token configuration from environment variables.

Add the Blackfire PHP SDK as a dev dependency to your project:

    composer require blackfire/php-sdk --dev

Or add the latest package version manually to your composer.json:

composer.json

    "require-dev": {
        "blackfire/php-sdk": "^1.18"
    }

That’s it.

The simplest way to profile a piece of code is:

    $blackfire = new \Blackfire\Client();
    $probe = $blackfire->createProbe();

    // some PHP code you want to profile

    $profile = $blackfire->endProbe($probe);

After adding this, just open the page in a browser or run the CLI command; there is no need to start profiling with the Companion.

Docker Compose setup for PHP project

Below I describe how I use Docker Compose for my PHP projects, although most of the concepts are helpful for any technology you use. I will use Magento as an example, but there is near-zero Magento-specific stuff here and this will work with any framework.

I don’t pretend to be the author of all the techniques described here; all thanks and credits go to the people who actually invented them.

Folder structure

This is how I organize files in the repo:

    .
    ├── bin
    │   └── magento
    ├── docker
    │   └── local
    │       ├── nginx
    │       │   ├── conf
    │       │   │   └── default.conf
    │       │   └── Dockerfile
    │       └── php
    │           ├── conf
    │           │   └── custom-config.ini
    │           └── Dockerfile
    ├── magento
    │   ├── bin
    │   ├── ...
    │   ├── composer.json
    │   └── composer.lock
    ├── .gitignore
    ├── docker-compose.override.example.yml
    ├── docker-compose.override.yml
    ├── docker-compose.yaml
    └── README.md

The docker folder contains all Docker-related files except for the Docker Compose YAMLs.

I put all files for the local (development) environment into a local sub-folder. This is because you might want to keep different Dockerfiles and configs for different environments in your repository as well.

Say, you have a “Testing” environment. Then:

  • put the Dockerfiles and configs into a docker/testing folder,
  • create docker-compose.testing.yaml in the root folder,
  • bring your containers up on the testing environment by running:
    docker-compose -f docker-compose.testing.yaml up --build -d

As the local environment is used most often, I prefer the local YAML files not to have a “.local” suffix, so that I don’t need to specify the list of files for every docker-compose command.

docker-compose.yaml

The contents should be self-explanatory. The only trick here is using a YAML alias (codebase) to avoid repeating the application volume mount for both the nginx and php-fpm containers.

    version: '3'

    services:

      mage_web:
        build:
          context: ./docker/local/nginx
          dockerfile: Dockerfile
        restart: always
        depends_on:
          - mage_app
        volumes:
          - &codebase ./magento:/var/www/html
          - ./docker/local/nginx/conf/default.conf:/etc/nginx/conf.d/default.conf

      mage_app:
        build:
          context: ./docker/local/php
          dockerfile: Dockerfile
        restart: always
        depends_on:
          - mage_db
        volumes:
          - *codebase
          - ./docker/local/php/conf/custom-config.ini:/usr/local/etc/php/conf.d/custom-config.ini

      mage_db:
        image: mariadb:10
        restart: always
        volumes:
          - magento-dbdata:/var/lib/mysql
        environment:
          - MYSQL_DATABASE=magento
          - MYSQL_ALLOW_EMPTY_PASSWORD=No
          - MYSQL_ROOT_PASSWORD=demo
        command: ['--character-set-server=utf8', '--collation-server=utf8_unicode_ci']

    volumes:
      magento-dbdata:

As you can see, I also prefer to name my services in a more abstract way than just “php” or “nginx”. I like it more because I can change the underlying technology later without much pain (e.g. mariadb to mysql).

I also include the app name in the service name, so that it is possible to run different, let’s say, php containers in the same network.

Using override file

Note that I don’t expose or map any ports in the docker-compose.yaml above. I find this important, because the ports I use may already be taken on another developer’s machine. To manage this I use docker-compose.override.yaml, which is loaded by default if present. Find out more about it at https://docs.docker.com/compose/extends/.

I keep this file ignored by Git and version a docker-compose.override.example.yaml file instead, so that it is easy for everyone to start.

docker-compose.override.yaml

    version: '3'

    services:

      mage_web:
        ports:
          - "127.0.0.1:8080:80"

      mage_db:
        ports:
          - "3326:3306"

Here I map nginx container port 80 to local port 8080, so I can open my website in a browser: http://magento.localhost:8080.

I also map mariadb container port 3306 to local port 3326, so that I can connect to the database from my local machine: mysql -h 127.0.0.1 -P 3326 -u root -p magento.

User permissions trick

By default, Docker containers run as the root user. Sometimes they use a custom user, like “nginx” for example. So files created inside containers may appear in the mounted folders on your local machine as owned by root or even by an unknown user.

To overcome this inconvenience, I use the following trick and change the user IDs inside the containers to match my local user.

  1. In your Dockerfile, define the USER_ID and GROUP_ID args and modify the user the container runs as:

For example, for nginx we’ll have:

    FROM nginx:1.15.2

    ARG USER_ID=1000
    ARG GROUP_ID=1000

    RUN usermod -u ${USER_ID} nginx \
        && groupmod -g ${GROUP_ID} nginx

    WORKDIR /var/www/html

For php-fpm:

    FROM php:7.2-fpm

    ...

    ARG USER_ID=1000
    ARG GROUP_ID=1000

    RUN usermod -u ${USER_ID} www-data \
        && groupmod -g ${GROUP_ID} www-data

    ...
  2. In your docker-compose.override.yaml, define the actual user and group IDs that you want to use:
    version: '3'

    services:

      mage_web:
        build:
          args:
            USER_ID: 1000
            GROUP_ID: 1000
        ...

      mage_app:
        build:
          args:
            USER_ID: 1000
            GROUP_ID: 1000
        ...

You can find your current local user and group IDs with id -u and id -g respectively. Normally in Ubuntu these are both 1000.

Working with Composer

I only work with Composer locally and never run it inside the container. This way I don’t need to care about copying SSH keys, Composer repository credentials and other stuff.

The disadvantage is that this requires having PHP and Composer installed locally, but I keep them installed and up to date anyway. However, it is important to tell Composer which PHP version the project uses, so that it installs the correct packages. You can do it like this in composer.json:

    {
        ...
        "config": {
            "platform": {
                "php": "7.2"
            }
        }
    }

You can also specify PHP extension versions here; see the platform section documentation for more capabilities.

With a properly configured platform in composer.json you can avoid adding --ignore-platform-reqs to your composer install or update commands.

Bin helper

All modern frameworks have a CLI interface for development purposes, and it is used quite often. Normally you must run it inside the container for it to work properly, but that is not very convenient. An elegant way to improve this is to create what I call a “bin helper”, which passes the command you want to the container you need.

Below is an example for Magento; you can modify it for the framework of your choice:

bin/magento

    #!/usr/bin/env bash
    BIN_DIR=$(dirname "$0")
    BASE_DIR=$(dirname "$BIN_DIR")

    # Run from the project root so docker-compose finds its YAML files.
    cd "$BASE_DIR"

    docker-compose exec --user=www-data mage_app /var/www/html/bin/magento "$@"

Now I just run it as if it were the actual bin/magento file:

    bin/magento cache:flush

That’s it for now.



Another naming problem reasoning

So this time I was thinking about names for boolean methods.

It started with this example that I saw in our project:

    if ($this->simpleProductWasSaved($product)) {
        // some actions
    }

I assumed that it checks whether the given simple product was saved.
But I was wrong. Here’s the method:

    private function simpleProductWasSaved(ProductInterface $product): bool
    {
        return 'simple' === $product->getTypeId();
    }

While it looks like a good, descriptive method name, it misled the developer (me) about what the program does.

The author of this code was asking:

Is it a simple product that was saved?

Instead, the function answered a different question:

Is this product simple?

Considering that the method doesn’t have a clue about saving, it should be called simply productIsSimple.

You may notice that I don’t promote naming boolean functions with the “isSomething” or “hasSomething” pattern. It might seem reasonable at first glance, since such names answer some question. But I don’t find it so.

The most frequent use of such methods is inside an if construct:

    if (isProductSaleable($product)) {
        // sale it
    }

This is simply uncomfortable to read; it doesn’t sound like a sentence.
And it conflicts with statement-like expressions with comparison operators, which appear in ifs more often than not.

    //  |        question         |    |       statement       |
    if (isProductSaleable($product) && $product->getPrice() > 50) {
        // sale it
    }

You can extract the value into a variable. But I bet you will name it in a statement-like form:

    $productIsSaleable = isProductSaleable($product);
    if ($productIsSaleable) {
        // sale it
    }

Which makes sense, because it is a statement that is either true or false. So what is the reason for having a function name that is not always comfortable to use?

Some say that a name starting with “is” or “has” tells you about the boolean nature of the function. Mmm. Exactly the same is true for a name in the form of a statement.

(I don’t consider naming a variable “$isProductSaleable”.)

So there is just no reason to name a function in the form of a question.

Ok, summing up:

  1. Name boolean functions in the form of a statement, not a question.
  2. Don’t put irrelevant information into the names you give.

P. S.

Ah, yes, but it is OK for class methods to start with “is” or “has” when they say something about their subject:

    $product->isSimple();

This is completely OK, because together with the subject’s name it forms a statement.

So these two can live together happily:

    $product->isSimple();
    $product->childrenAreInStock();

Worst interface ever: PHP's version_compare()

The “Worst interface ever” title goes to version_compare().

If you pass two versions to version_compare(), it acts as the <=> operator: it returns -1, 0 or 1, which is totally understandable and reasonable, even if machine-oriented, because this function is designed for callback-based sorting.

However, when you pass the third optional $operator parameter, it changes the function’s behavior: it then returns TRUE or FALSE depending on the comparison result.

Can you guess whether this call returns TRUE or FALSE?

    version_compare('2.0.1', '1.5.4', '<')

Does it correspond to 2.0.1 < 1.5.4 or to 1.5.4 < 2.0.1?

Conclusions:

  1. Don’t add optional parameters that change the function’s behavior. Create a new interface instead.

  2. Make interfaces for humans. This leaves no doubts:

     version_compare('2.0.1', '<', '1.5.4')
  3. Use value objects if possible, overloading the comparison operator for the specific types:

     new Version('2.0.1') < new Version('1.5.4')
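PHP has no user-defined operator overloading, so as an approximation of the third option, here is a minimal sketch of such a Version value object with an explicit, readable comparison method (the class name and API are made up for illustration):

```php
<?php
// Hypothetical Version value object: wraps version_compare() behind an
// interface that reads left to right, like the expression itself would.
final class Version
{
    private $version;

    public function __construct(string $version)
    {
        $this->version = $version;
    }

    public function isLowerThan(Version $other): bool
    {
        // With two arguments version_compare() acts as <=>: -1, 0 or 1.
        return version_compare($this->version, $other->version) < 0;
    }
}

// Reads unambiguously: "is 2.0.1 lower than 1.5.4?"
$result = (new Version('2.0.1'))->isLowerThan(new Version('1.5.4')); // false
```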