I am looking to run a Symfony application on Docker using Docker Compose. I will have at least the following containers:
- Nginx
- RabbitMQ server
- PHP-FPM
- MySQL
- Solr
We currently have a development environment using this same setup.
The Symfony application is stored on the host and mounted as a volume into the PHP-FPM container so that it can read the code - this works well. We bash into the PHP-FPM container to run Composer and app/console commands.
We also manually run the consumers (Symfony commands) that consume messages from the RabbitMQ server.
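For context, here is a stripped-down sketch of what the development docker-compose.yml looks like (the image tags, paths and consumer name are illustrative, not our exact values):

```yaml
# docker-compose.yml (development) - simplified sketch; nginx, mysql and solr omitted
version: "2"

services:
  php-fpm:
    image: php:7-fpm               # in reality a custom image with our PHP extensions
    volumes:
      - ./:/var/www/app            # application code mounted from the host
  rabbitmq:
    image: rabbitmq:3-management
```

Consumers are then started by hand, e.g. `docker-compose exec php-fpm php app/console rabbitmq:consumer some_queue` (assuming the RabbitMqBundle consumer command; `some_queue` is just a placeholder).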
What are my options in production?
1) Can I create a single container holding the application code and let the other containers use it? The PHP-FPM container needs access to the application code, but I would also like a container that runs a consumer, passing in the name of the consumer to run when the container is launched - meaning I can have a single image that can be launched flexibly to process messages from any queue (there is a sketch of this after the list). What happens to logs and cache in this option?
2) Store the application inside each image that needs it? This is my least favourite option, as updating the application then means rebuilding every image.
3) Something I haven't yet explored?
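To make option 1 concrete, this is roughly what I imagine: the application code is baked into one image that both the PHP-FPM and consumer containers reuse, and the consumer name is passed in at launch time. All image names, paths and the `CONSUMER_NAME` variable below are placeholders, not a tested setup:

```yaml
# docker-compose.prod.yml - sketch of option 1
version: "2"

services:
  php-fpm:
    image: registry.example.com/myapp:latest        # PHP-FPM + application code in one image
    volumes:
      - app-logs:/var/www/app/app/logs              # one idea for logs: write them to a named volume
  nginx:
    image: registry.example.com/myapp-nginx:latest  # nginx plus the public assets from the same build
    ports:
      - "80:80"
  consumer:
    image: registry.example.com/myapp:latest        # same image as php-fpm, different command
    command: ["php", "app/console", "rabbitmq:consumer", "${CONSUMER_NAME}"]

volumes:
  app-logs:
```

A worker would then be launched with something like `CONSUMER_NAME=some_queue docker-compose -f docker-compose.prod.yml up -d consumer`, so one image serves every queue. Whether the cache should be warmed at build time or written to a volume at runtime is exactly the part I am unsure about.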
I would like to allow easy updates to the application - something scripted perhaps - and I would also like to minimise downtime; I could do that using HAProxy or something similar. Has anyone else got any experience of running a multi-container Symfony application in production?
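By scripted I mean something as simple as bumping an image tag and recreating the services, e.g. keeping the tag in a variable (again, the names here are made up):

```yaml
# fragment of docker-compose.prod.yml - a deploy script would export APP_VERSION
# and run `docker-compose up -d`, which recreates only the services whose image changed
services:
  php-fpm:
    image: registry.example.com/myapp:${APP_VERSION}
  consumer:
    image: registry.example.com/myapp:${APP_VERSION}
```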