When to set up a remote development environment over SFTP (working copy)
When a programmer has a local copy of the code and a fully functional environment where the web app can be tested before going live, we usually call it a working copy.
The action of moving/copying this work to the final live server (a.k.a. the production server/environment) is called deploying.
There are many ways of having a working copy up and running. We, the keystroke maniacs, tend to work with one of the following working-copy configurations:
- No working copy. The one who logs in via SSH and edits with vi. Yeah!
- Edit via FTP/SFTP with an editor installed on the local machine. No working copy, the return.
- Local web server. Usually a pre-built LAMP solution, such as MAMP or XAMPP. Valid until you need to add two or three more services like Memcached, Sphinx, Solr, MongoDB or Redis, or the virtual host configuration starts to change over and over.
- A web server in your LAN. Several users share a machine with all the services configured, each having their own running copy, with a different DocumentRoot per user.
- The virtual machine lover, who runs VirtualBox or VMware with a full Linux install and exports and shares the appliance with colleagues if necessary.
- The Vagrant chef, who has an automated system to deliver pre-configured environments to any number of developers, using VirtualBox underneath.
In this post I am proposing something different. It has the smell and aroma of early-90s development, but it is still a good solution when you have a mid-sized team (2-20) where people don't want to be bothered with constant database changes and service tuning. It is the private remote working copy over SFTP. Keep reading...
The concept of private remote working copy
The remote working copy, or remote development environment, is what I call a development server available on the Internet. It is very useful if you have no offices and people in different parts of the world need to work against a running server, meaning that end developers need no local configuration or installation at all.
Of course there are other ways to address this problem, but this is one I like very much because it doesn't involve Vagrant, virtual machines or similar solutions (I have a 4-year-old machine that works like a charm but is short on RAM, and VMs eat RAM for breakfast).
I call it private not because it holds sensitive data, but because it should be invisible to Google (mainly), so it doesn't get indexed and doesn't duplicate or compete with your real live web app. You can accomplish this with a simple firewall configuration allowing only certain IPs, for instance:

```shell
# Accept connections only from 80.24.22.70 to 80.24.22.80
# (combine this with a DROP rule or policy for other sources)
iptables -A INPUT -i eth1 -m iprange --src-range 80.24.22.70-80.24.22.80 -j ACCEPT
```
Another option is to require HTTP Basic authentication from the application itself:

```php
header('WWW-Authenticate: Basic realm="Protected page"');
header('HTTP/1.0 401 Unauthorized');
```
And for people on shared hosting, the easiest option is to place an .htaccess file.
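A minimal sketch of such an .htaccess, protecting the whole working copy with HTTP Basic authentication (the AuthUserFile path is a made-up example; create the password file with the htpasswd utility):

```apache
# Hypothetical .htaccess for a shared-hosting working copy
AuthType Basic
AuthName "Protected page"
# Example path only; keep the password file outside the DocumentRoot
AuthUserFile /home/myuser/.htpasswd
Require valid-user
```

If mod_headers is available, you can also add `Header set X-Robots-Tag "noindex, nofollow"` as a second line of defense against crawlers.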
This server is meant to be hacked, destroyed and broken several times a day if necessary. Yes, because it is a working copy for development purposes only, and that's what we programmers do with webs: destroy them all the time. And we like it.
Developer Workflow
The workflow is quite simple, but don't judge it too fast:
- You have a copy of the code on your local machine (git, svn, hg, whatever you use). That's it; no web services or anything related are installed.
- You work with a capable editor that sends all your changes to the remote server over SFTP (I strongly recommend PHPStorm; it's worth every penny).
- You test your work in a real URL, which should be unrelated to the real project. Buying a dummy domain might be a very good option.
Ingredients
You'll need:
- A dedicated server
- A different domain, but public
- PHPStorm. I don't recommend other solutions; they fail to work for this.
- A clone of the production server's setup on the development machine. All services.
- A folder structure that holds a working copy per developer
- Virtual hosts for every user
- Subdomains (a catch-all should be configured) for every developer, each mapped to its own directory:
  - alo.my-devel-project.com → /var/www/wc/alo/htdocs
  - sam.my-devel-project.com → /var/www/wc/sam/htdocs
  - api.my-devel-project.com → /var/www/wc/api/htdocs
  - aga.my-devel-project.com → /var/www/wc/aga/htdocs
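The per-developer mapping above can be handled with a single catch-all virtual host instead of one block per user. A minimal Apache sketch using mod_vhost_alias (it assumes the module is enabled and that a wildcard DNS record for *.my-devel-project.com points at this server):

```apache
<VirtualHost *:80>
    ServerName my-devel-project.com
    ServerAlias *.my-devel-project.com
    # %1 is the first hostname label: alo.my-devel-project.com -> alo
    VirtualDocumentRoot /var/www/wc/%1/htdocs
</VirtualHost>
```

With this in place, adding a developer is just creating /var/www/wc/newdev/htdocs; no Apache changes per user.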
Pros
- You work with several other people
- You want to keep all your colleagues in sync with the architecture. You like doing ALTER TABLEs and applying changes to services constantly, and you don't want to tell the others "hey! run this ALTER in your environment" or "I just changed the search index configuration, here's the new one".
- There are people on the team who are skilled in everything except systems engineering or DevOps
- You use different machines, or you set up environments often
Cons
- It might be slow if you have a rural Internet connection under 512 kbps. But connections nowadays are very fast: uploading 100 KB of files (that's a lot of code) is done while you switch from the editor to the browser.
- One-way by default. Although clients like PHPStorm will warn you when a file has changed on the remote server, with a link to download or merge it, this system is not very good if you need to generate a lot of files on the web server, because you have to remember to trigger a synchronization to download them to your machine and then add them to the version control system.
Hands-on
In the next post I will explain how to set it up step by step.