A mailer, member database, and so much more, for digital activism.
The Vagrantfile and related provisioning scripts set up a full Identity development environment within an Ubuntu VM.
If you are unfamiliar with the concepts of virtualisation, containerisation, virtual machines, etc., you may want to do some background reading first to understand what's going on in the instructions on this page! If so, there are some links below.
To provision your Identity Development VM, first clone the repository and move into your checkout:

```
git clone git@github.com:the-open/identity.git <MY_FOLDER_NAME>
cd <MY_FOLDER_NAME>
```

Then copy the sample env files:

```
cp .env.development.sample .env.development
cp .env.test.sample .env.test
cp gems/mailer/spec/dummy/.env.test.sample gems/mailer/spec/dummy/.env.test
```

In each of these files, make sure the Linux/Vagrant lines for env vars such as `DATABASE_URL` are uncommented, and all other versions of these env vars (such as those for OSX) are commented out - the comments in the files make clear which lines are for Linux/Vagrant and which are for OSX.

Finally, run `vagrant up` to create and provision the VM.
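For illustration only, the relevant portion of `.env.development` ends up looking something like this - the URLs below are made-up placeholders, so copy the real values from the comments in the sample file:

```
# Linux/Vagrant database URL - leave this line uncommented when using Vagrant
DATABASE_URL=postgres://identity:password@localhost:5432/identity_development

# OSX database URL - keep this one commented out when using Vagrant
# DATABASE_URL=postgres://localhost:5432/identity_development
```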
TODO - If Docker/Vagrant becomes the default way to run, then the different `.env.sample` files should default to using the Linux versions of the database URLs rather than the OSX ones. This will reduce the manual changes required when running with either Vagrant or Docker.
Once the VM has been provisioned (via `vagrant up`), login to it: `vagrant ssh`
Your project folder is mounted at `/vagrant` in the VM. When you ssh onto the box you should automatically be switched into this directory, but you can double check with `pwd` if necessary. Run `exit` to leave the VM and return your shell to your local machine.
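Putting the pieces above together, a typical session from the project folder on your local machine looks roughly like this (the commands are the ones described above; the comments are just a sketch):

```
vagrant up    # provisions the VM on the first run, simply boots it on later runs
vagrant ssh   # log in to the VM; you land in /vagrant
pwd           # double check - should print /vagrant
exit          # leave the VM and return to your local shell
```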
To run Identity, you must already be logged onto a provisioned Identity Development VM (see above). Then run `./start.sh` - this runs the Rails webapp, Sidekiq (for background jobs), and Clock (to run periodic jobs). Identity will then be available at http://192.168.33.10:3000. You can skip authentication by setting `SKIP_AUTH` in your `.env` file.
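For example, something like the following line - the exact value the variable expects is an assumption here, so check the sample env file for the form Identity actually reads:

```
# hypothetical example - confirm the expected value in .env.development.sample
SKIP_AUTH=true
```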
To run the linter or the tests, you must also already be logged onto a provisioned Identity Development VM (see above).
To run the linter, change to the project root (`/vagrant`) and then run `bundle exec rubocop`.

To run the core tests, change to the project root (`/vagrant`) and then run `bundle exec rspec`.

To run the mailer tests, change to the mailer gem's directory (`/vagrant/gems/mailer`) and then run `bundle exec rspec`.
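RSpec also accepts a path to an individual spec file, which is handy when you are iterating on one area - the spec path below is just an illustrative placeholder:

```
cd /vagrant
bundle exec rspec spec/models/member_spec.rb
```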
TODO - Some Selenium tests for both Core & Mailer fail with `unable to connect to chromedriver 127.0.0.1:9515`.
These commands are to be run on your local machine, not from within your Vagrant VM.

`vagrant halt` can be used to shut down your VM. You don't need to reprovision it the next time you want to run it; simply `vagrant up` and it will restart, ready to use again.

`vagrant destroy` will delete the VM entirely. This means that the next time you run `vagrant up`, the machine will be provisioned from scratch, installing all dependencies again, etc.

We use `rvm` rather than `chruby` because they kindly documented the process to make it simpler :)

TODO - Is Vagrant preferred over Docker, or is Docker preferred? The local Docker setup doesn't currently work for Identity Modular, so for now Vagrant is probably the preference!
If you pull code which has changed the ruby version used, you will see errors about the ruby version when running various commands. To fix this, run:

```
./vagrant-provision-scripts/update-ruby.sh
```

This will install the required ruby version via rvm and reinstall all gems.
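Assuming the script is intended to be run from inside the VM (where rvm and the gems live) rather than on your host - an assumption, so check the script if unsure - the full sequence would be roughly:

```
vagrant ssh
cd /vagrant   # you should land here automatically anyway
./vagrant-provision-scripts/update-ruby.sh
```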
`vagrant up` errors with the following:
```
==> default: Mounting NFS shared folders...
The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!

mount -o vers=3,udp 192.168.50.1:/home/andrews/Work/Projects/speakout /vagrant

Stdout from the command:

Stderr from the command:

mount.nfs: requested NFS version or transport protocol is not supported
```
This happened for me after upgrading to a new version of Ubuntu on my laptop. After much googling, I found the nfs service on my local Ubuntu was failing to start up. It’s unclear what caused this after the upgrade. You can check the status using:
```
systemctl status nfs-server.service
```
For me, this gave the following output:
```
● nfs-server.service - NFS server and services
   Loaded: loaded (/lib/systemd/system/nfs-server.service; enabled; vendor preset: enabled)
   Active: failed (Result: exit-code) since Tue 2020-05-12 18:49:50 BST; 15min ago
  Process: 44176 ExecStartPre=/usr/sbin/exportfs -r (code=exited, status=1/FAILURE)
  Process: 44177 ExecStopPost=/usr/sbin/exportfs -au (code=exited, status=0/SUCCESS)
  Process: 44178 ExecStopPost=/usr/sbin/exportfs -f (code=exited, status=0/SUCCESS)

May 12 18:49:50 andrews-ThinkPad systemd[1]: Starting NFS server and services...
May 12 18:49:50 andrews-ThinkPad exportfs[44176]: exportfs: duplicated export entries:
May 12 18:49:50 andrews-ThinkPad exportfs[44176]: exportfs: 192.168.33.10:/home/andrews/Work/Projects/identity_modular
May 12 18:49:50 andrews-ThinkPad exportfs[44176]: exportfs: 192.168.33.10:/home/andrews/Work/Projects/identity_modular
May 12 18:49:50 andrews-ThinkPad exportfs[44176]: exportfs: duplicated export entries:
May 12 18:49:50 andrews-ThinkPad exportfs[44176]: exportfs: 192.168.50.50:/home/andrews/Work/Projects/speakout
May 12 18:49:50 andrews-ThinkPad exportfs[44176]: exportfs: 192.168.50.50:/home/andrews/Work/Projects/speakout
May 12 18:49:50 andrews-ThinkPad systemd[1]: nfs-server.service: Control process exited, code=exited, status=1/FAILURE
May 12 18:49:50 andrews-ThinkPad systemd[1]: nfs-server.service: Failed with result 'exit-code'.
May 12 18:49:50 andrews-ThinkPad systemd[1]: Stopped NFS server and services.
```
The solution was to go into the directories mentioned in the duplicated entries and run `vagrant destroy`. When destroying these virtual machines, the output included the following, which implies some invalid data had gotten into the NFS exports somehow, but destroying the VMs should clean this up:

```
==> default: Pruning invalid NFS exports. Administrator privileges will be required...
```

Once you've done this for all directories showing in the error, run `sudo systemctl restart nfs-server.service` to restart the NFS service, and then run `systemctl status nfs-server.service` again to check there are no errors present anymore. Then try destroying and creating your Vagrant VM from scratch, and hopefully it should work!
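Putting those steps together, the recovery sequence looks roughly like this (the project paths come from the error output above - substitute whichever directories your own `exportfs` errors list):

```
# in each project directory named in the "duplicated export entries" lines
cd ~/Work/Projects/identity_modular && vagrant destroy
cd ~/Work/Projects/speakout && vagrant destroy

# then restart NFS on the host and confirm it starts cleanly
sudo systemctl restart nfs-server.service
systemctl status nfs-server.service
```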
You can also check all the mounts you have set up using `showmount --all`. This should be empty if no Vagrant VMs are running (and you don't have anything else mounted outside of Vagrant!). If no Vagrant VMs are running, but you can see a Vagrant project folder with an active mount, you need to remove the mount. Unfortunately the only way I found to do this was from the Vagrant VM itself: spin up a Vagrant VM in the offending folder, with the same IP address listed in `showmount --all`, login to the VM, then run `umount -f -l /vagrant`, which should remove the mounted drive from the VM, and all mounts for that IP/folder combo from your local machine.
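Sketching that out (assuming the VM's default user needs `sudo` to unmount, which may not be required on every setup):

```
# on your local machine, in the offending project folder
vagrant up
vagrant ssh

# inside the VM
sudo umount -f -l /vagrant
exit

# back on your local machine - the stale entry should now be gone
showmount --all
```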
If the above solutions do not work for you, there’s a long thread of other potential solutions to try here:
https://github.com/hashicorp/vagrant/issues/9666#issuecomment-435765957