Trying to be cheap, I only allocated 16 GB to my AWS EC2 instance. But I eventually wanted to migrate my WordPress sites to a second “test” environment, to try different things while upgrading the site, and 16 GB doesn’t leave a lot of spare space. However, AWS has an NFS sharing product called EFS, where you basically define a Unix-type NFS share and get charged by how much data is stored in it and how much transfers in and out. So I thought it would be a perfect opportunity to see how this product would be of use. It is about 4x more expensive per GB than an EC2 hard drive (that product is known as EBS), but you pay for what you use, not for what you reserve for future use.
The instructions for mounting the EFS file share on Amazon Linux are pretty straightforward, in the EFS part of the console; there is a link or button that says something like “mount instructions”. I’ve done it through the “mount” command, and it is straightforward because every EFS NFS share has an address. The DNS name is random, so you can’t tell what you named the share from it, but other than that, each share has its own address, and this makes it straightforward.
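For reference, the command ends up looking roughly like this (the file system ID, region, and mount point below are placeholders, not my actual share):

```bash
# Mount an EFS share over NFSv4.1; the file system ID, region, and mount point are placeholders.
sudo mkdir -p /mnt/efs
sudo mount -t nfs4 \
  -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport \
  fs-0123456789abcdef0.efs.us-east-1.amazonaws.com:/ /mnt/efs
```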
I tried to copy data from a local volume to an NFS volume, and then restart WordPress on another host that mounts that same NFS volume. I expected this to work out of the box… actually I did not, but I didn’t know better, so I had to try.
However, using it as an NFS volume in Docker has some things you probably need to know.
1. For the WordPress and MySQL containers needed to run WordPress in containers, I don’t think FTP will suffice to transfer the contents of the volumes into the NFS share. I tried this, and MySQL would not start; the message was that InnoDB would not initialize. I think this is because FTP does not preserve the individual file permissions.
However, when I tarred the contents of the volumes into a PHP tar and a MySQL tar, transferred the two tar files, and untarred them into the EFS NFS share (roughly as in the sketch below), the permissions were presumably preserved, and MySQL then started successfully.
I used a vsftpd FTPS container to transfer the tar files. The vsftpd container has issues of its own keeping an open listening connection alive for transfers longer than a few gigabytes.
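For what it’s worth, here is a sketch of the tar approach; the volume names and paths are assumptions, not my exact setup:

```bash
# On the source host: archive each Docker volume, preserving ownership and permissions (-p).
sudo tar -cpzf wp_html.tar.gz -C /var/lib/docker/volumes/wp_html/_data .
sudo tar -cpzf wp_db.tar.gz   -C /var/lib/docker/volumes/wp_db/_data .

# Transfer the two tar files (FTP is fine for the tarballs themselves), then on the
# destination host unpack them into the mounted EFS share, again preserving permissions.
sudo mkdir -p /mnt/efs/wp_html /mnt/efs/wp_db
sudo tar -xpzf wp_html.tar.gz -C /mnt/efs/wp_html
sudo tar -xpzf wp_db.tar.gz   -C /mnt/efs/wp_db
```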
2. I read on a website that when Docker uses an NFS volume, it actually instructs the OS to mount the NFS share on the directory where it would otherwise create a local volume. So this website basically claims that Docker doesn’t have its own code to independently access NFS shares, and relies on OS services and file access.
HOWEVER, when Docker tries to mount an NFS share (any NFS share, not just EFS), it attempts a “root squash”. So if you create your own NFS share on your own Linux OS, by defining it in /etc/exports and turning on the NFS server service, it has to be configured to allow “root squash”, or Docker will throw an error starting any container with an NFS volume pointing to that share.
AWS’s EFS product’s NFS implementation automatically allows root squash, BUT instead of allowing access as root, it allows limited access without telling the NFS client that. I have not encountered an error myself, however, using a Docker container with an NFS volume connected to an EFS NFS share, so I think it works fine.
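For reference, this is roughly how a named volume backed by the EFS share looks in a docker-compose file; the file system ID, region, and export paths are placeholders:

```yaml
# Sketch of NFS-backed named volumes using Docker's local volume driver.
volumes:
  wp_html:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=fs-0123456789abcdef0.efs.us-east-1.amazonaws.com,nfsvers=4.1,rw"
      device: ":/wp_html"
  wp_db:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=fs-0123456789abcdef0.efs.us-east-1.amazonaws.com,nfsvers=4.1,rw"
      device: ":/wp_db"
```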
3. This has to do with the NFS mount client on Linux. It only seems to use a source IP of either the primary eth0 interface on the default route (I’m speculating) or a random interface’s IP address. I say this because in /etc/exports, I tried restricting the NFS shares on my Linux host (the one I wanted to serve data to the containers from) to only the docker0 interface’s subnet. On every machine where I have installed Docker, docker0 is 172.17.0.1; this is the host’s IP on the Docker subnet, and all the containers will be on this subnet with different addresses. So I added a line in /etc/exports to restrict connections to only 172.17.0.0/16. But when I do this, not only can I not mount from a container to the NFS server on the host at 172.17.0.1, I can’t even mount from the host to itself.
I can run “showmount -e 172.17.0.1” and it will show me the right shares.
BUT if I try to mount from 172.17.0.1 onto another mount point on the host, it fails. It works if there is no source IP restriction (the export shows *) in /etc/exports. In /var/log/syslog, there is a message that a connection failed from an IP address, and that IP address is not on the right interface to route to 172.17.0.1: it reads 192.168.1.100 as the source IP, which is the address of eth0, the default route interface (it obtained that address from DHCP along with a gateway address). If routing rules were applied correctly, the source address in this case should be the same as the destination address, 172.17.0.1, because they are on the same machine; and in general the source address should be 172.17.0.1 for any destination on the 172.17.0.0/16 subnet.
Since this was not implemented correctly by the NFS client on Linux, AND Docker uses the OS’s NFS mounting services, neither can limit the NFS share to only the Docker subnet.
What you can do, instead of restricting the NFS share to Docker’s subnet, is restrict access to the share to the host itself in /etc/exports. But since every machine has a different IP address, you need to find out what address the NFS client uses as the source (which I think is the default route interface’s IP) and enter that into /etc/exports (e.g. 192.168.1.100/32 will require all connections to the NFS server to come from a source IP of 192.168.1.100, which should be the IP of the co-located NFS server and Docker host). A sketch of what I mean is below.
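Here is the kind of /etc/exports entry I mean; the export path is hypothetical, and 192.168.1.100 is just the example address from above:

```
# /etc/exports
# Restricting to the Docker subnet did NOT work for me:
#   /srv/nfs/wordpress  172.17.0.0/16(rw,sync,no_subtree_check)
# Restricting to the host's own default-route IP did:
/srv/nfs/wordpress  192.168.1.100/32(rw,sync,no_subtree_check)
```

After editing the file, reload the export table with “sudo exportfs -ra”.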
Good luck playing with NFS mounts. They let you have a central network source for information shared between several containers, but there may be a performance penalty.
4. The WordPress database connection details and database password are in the docker-compose file. So you need to remember to start the containers where you copied the data to, in the NFS share, with the same docker-compose file you used where you copied the data from (something like the sketch below).
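A minimal sketch of what I mean; the passwords and database names here are placeholders, and the point is only that the values have to match the ones already baked into the copied MySQL data:

```yaml
services:
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: changeme      # placeholder; must match the original compose file
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: changeme           # placeholder; must match the original compose file
    volumes:
      - wp_db:/var/lib/mysql

  wordpress:
    image: wordpress
    environment:
      WORDPRESS_DB_HOST: "db:3306"
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: changeme    # same value as MYSQL_PASSWORD above
      WORDPRESS_DB_NAME: wordpress
    volumes:
      - wp_html:/var/www/html

volumes:
  wp_html:   # or the NFS-backed volume definitions from the sketch in point 2
  wp_db:
```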
5. WordPress users will obviously be copied over as-is using this method: same WordPress admin username, same password. A new one is not generated with a file-copy method like this. (As opposed to an export/import file process, which should work because of backward-compatibility features even if the PHP and MySQL versions change, but which will create new users.)
6. Using AWS EFS NFS shares, you have to make sure the security groups in the “Network” settings of the EFS share you created allow the security group assigned to the EC2 instance.
https://stackoverflow.com/questions/49762840/unable-to-mount-efs-on-ec2-instance-connection-timed-out-error
The EFS share must have a security group that allows the EC2 instance’s security group on the NFS port. It is a little convoluted, but it is also clear if you accept that firewall settings can constitute a type of logical group.
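As a sketch with hypothetical security group IDs, the rule amounts to this (it can also be added in the console):

```bash
# Allow the EC2 instance's security group to reach the security group attached to the
# EFS mount targets on the NFS port, 2049/tcp. Both group IDs below are placeholders.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0aaaaaaaaaaaaaaaa \
  --protocol tcp \
  --port 2049 \
  --source-group sg-0bbbbbbbbbbbbbbbb
```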