RDI: The Future of Remote Development
When we founded Border, we set out to create a company that embodied the future of work. If you've read our ethos, you know we feel strongly that remote work is that future. Employees are happiest when they have the flexibility to balance work with life's needs, and when they feel their employer trusts them to get the job done. That combination of trust and autonomy does more to ensure employee happiness and retention than a higher paycheck ever could.
For remote software development, though, there are a number of hurdles to overcome before a remote developer can be truly effective. Most remote development suffers from two major problems:
Performance. The typical remote developer is issued a remote desktop environment on which to perform their work. This is usually a remote Windows machine that is granted access to secure areas of the client's network and provides the necessary development tools. If you have ever used one of these atrocities, you know how slow working with a remote desktop can be: you move your mouse and watch the pointer crawl across the screen a second later. They are also typically shared resources with low-end specs, which makes for an extremely unproductive worker. Finally, this approach depends heavily on the client's IT team to configure and maintain the machines, an unnecessary burden on the client.
Security. As most developers will tell you, the ideal development environment is a local, high-spec machine that builds and runs the necessary project software. As any enterprise security manager will tell you, this is next to impossible for external parties. Granting access to disparate computers outside of the corporate network with different OS setups, anti-virus policies, and unknown software installed is infeasible. Also, nobody wants to have valuable source code just sitting on different laptops around the world that may be stolen or lost at any time.
The ideal solution is a high-performance development environment where developers don't notice any slowdown, yet have access to all the corporate resources and software they need to get the job done. Oh ya, and it needs to be ready on a developer's first day, and it can be shut down the moment they leave.
What did we set out to solve?
When designing our ultimate solution, we focused on the following features:
Hardware Agnostic. We are not in the business of managing hardware; we are in the business of building software. Dealing with the logistics of shipping and maintaining different computers around the world is too costly and cumbersome. We also didn't want to dictate a hardware setup to our developers. Mac fanboy, Linux guru, we just don't care. We wanted our devs to use whatever made them happy and got the job done, and to manage it themselves. In our experience, a burdensome IT process that requires approval to install a development tool is one of the biggest headaches for any software developer. The ideal solution lets developers do what they want without sacrificing security.
Location Agnostic. As we mentioned before, we are all about remote work. If our devs want to work on a beach in Thailand, go for it. As long as they get their work done, we couldn't care less. This means we can't depend on the quality of the network connection between the developer's machine and the client's network.
High Performance. Remote desktops are out. They simply can't provide the low-latency environment our developers need. We want a highly secure development environment without the performance problems that come with remote desktops.
High Security. Sensitive code should be replicated as little as possible, and should definitely not reside on a machine we don't control. Shipping dedicated hardware to all our developers is incredibly hard to manage, let alone keeping it secure for the lifetime of a project. We also don't want our clients' IT teams configuring VPN backdoors on all our developers' machines every time we add a new developer or computer to the project.
Easy Deployment. We want to configure the environment to connect easily to our client's network, with as little IT back-and-forth as possible. Ideally, once we configure a setup, we can copy that environment as developers are added and it just works with minimal re-configuration.
How did we solve it?
The solution we settled on was providing dedicated development server instances hosted in cloud infrastructure for each developer on a given project. We call these servers RDI, which stands for "Remote Developer Instance." This was made possible by recent developments in Microsoft's Visual Studio Code Remote Development tools. If you haven't been paying attention, Microsoft has been investing heavily in this area because they also believe this is the future of software development. Most of our team was already using Visual Studio Code because it combines the flexibility of a lightweight code editor with extensibility through plugins. Although this solution does require that developers use Visual Studio Code as their IDE, we felt the loss of IDE flexibility was a fair trade-off for the power of the final solution.
The diagram below shows the final solution and the areas that were key to ensuring the solution met our needs.
- For each project, we create a base Linux installation script that contains the necessary tools for the project. This makes it easy to create new servers and add developers, since the script contains everything we need for that project. The base image installs the necessary tools and ensures all other services and ports are shut down to harden security.
- A development server instance is deployed to your favorite cloud infrastructure using the base installation script. You will also need to attach a read/write disk for storing the software during development. We used SSDs, since front-end development tends to be disk intensive, with lots of small files to read during compilation. (Note: do not choose high-performance SSDs; we saw costs shoot up by over $50/month on Azure with that option.) The server should use a fixed IP address, which allows it to connect to the client's network via a known IP whitelist.
- Developers provide their public SSH key when they first join a project. A new instance is created for the developer and the key installed, allowing the developer to log in with only that recognized SSH key. This is the only way to log in to the development instance.
- We provide the IP address of the new developer instance to our client to grant it access to their network and resources. If any VPN software is needed to connect to the client's network, it is installed and configured as part of the base Docker image, so the developer does not need to configure it. Since we use fixed IP addresses, the client can use an IP whitelist to ensure only these specific machines can access their network, something we could never hope to achieve with a bunch of disparate development machines in each of our developers' homes.
- Using the Visual Studio Code Remote Development tools, the developer connects to the development server over an SSH tunnel. Editing and viewing files is as fast as on a local machine, because only text files are transferred between host and client, instead of mouse movements and entire screen images. In all our testing, developers never noticed any lag or difference between working remotely and working locally. Visual Studio Code also provides port forwarding through the SSH tunnel, which lets us map a port on the remote machine to a port on the local machine. For instance, when a Webpack dev server is running on the remote machine, we can map its port to the local machine and open localhost:3000 in the local machine's web browser, just as we would locally.
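To make the first step concrete, a base installation script along these lines could do the job. This is a minimal sketch, not our actual script: the package list (a Node.js front-end toolchain), the `dev` username, and the ufw firewall rules are all hypothetical. The script is written to a file here so it can be reviewed and syntax-checked before being run as root on a fresh instance.

```shell
# Write a hypothetical base installation script to disk for review.
# Package names, the "dev" user, and the firewall rules are illustrative only.
cat > provision-rdi.sh <<'EOF'
#!/usr/bin/env bash
set -euo pipefail

# 1. Project toolchain (example: a Node.js front-end project)
apt-get update
apt-get install -y git build-essential nodejs npm

# 2. Create the developer account and install their public SSH key
useradd --create-home --shell /bin/bash dev
mkdir -p /home/dev/.ssh
cp /tmp/dev_authorized_keys /home/dev/.ssh/authorized_keys
chown -R dev:dev /home/dev/.ssh
chmod 700 /home/dev/.ssh
chmod 600 /home/dev/.ssh/authorized_keys

# 3. Harden SSH: key-only login, no root access
sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config
systemctl restart ssh

# 4. Close every inbound port except SSH
ufw default deny incoming
ufw allow OpenSSH
ufw --force enable
EOF

# Sanity-check the script's syntax without executing it
bash -n provision-rdi.sh
```

Once the instance is provisioned, the developer connects with the VS Code Remote - SSH extension or plain `ssh dev@<fixed-ip>`, and a dev-server port can be forwarded by hand with something like `ssh -L 3000:localhost:3000 dev@<fixed-ip>`.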
Using the solution in the real world
The biggest benefit we found was in setup and configuration with our clients' networks. Our clients all have different requirements and IT rules for keeping their networks secure. By building those tools and rules into the base image, we can very easily add new developers who are instantly productive, instead of going back and forth with the client's IT team, which could add days and sometimes weeks. Now we simply provide the IT department with the new server's fixed IP address and we are up and running in minutes.
Secondly, if we have any concern that a developer's local machine has been compromised, we can quickly shut down their developer instance and lock access to it. Since the project source code is never installed on the developer's local machine, we ensure the source code is never leaked.
Finally, our developers can use whatever they like as their development machine. They can install whatever software they want and configure it to their liking, since we don't care what's on the machine. We also get a homogeneous specification for development, because we can size the computing power of each instance to our needs, and that power is dedicated to the server rather than shared with the developer's local machine. Developers love this: compiles are fast, consume none of their local CPU cycles, and feel no different from running locally. And since the development servers are bare-bones Linux machines dedicated to compilation, they don't need much CPU or RAM to outperform a local development machine.
In our testing, we tried instances on Google Cloud Compute, Microsoft Azure, and Amazon AWS. It's important to note that you cannot use the lowest-power instances available, in case you were hoping for the lowest-cost (or free) solution. With low-power instances, the SSH tunnel would disconnect during compilation because the compiler consumed all the CPU cycles. You need a minimum of a 2-core machine with 1-2GB of RAM. As mentioned before, we chose SSD drives to ensure the fastest compile times for our projects; they cost more than standard disks, but were worth it in time savings. Our findings were the following:
Costs per month (single instance with a 30GB SSD attached disk) at the time of writing:
- Amazon AWS: $21.60
- Google Cloud Compute: $20.40 (tested in the free tier, so we are unsure of the final cost)
- Microsoft Azure: $30.75
Why is this better?
We've already mentioned many benefits of this solution throughout the article. To highlight some of the not-so-obvious ones:
- Save money by getting out of managing your remote developers' hardware. Developers love this because they can use their preferred machines, and you can provide them high-security remote development environments at a minimal, fixed cost.
- Spinning developers up and down on a given project is immediate. Just add a new development server pre-configured for that project and very easily connect it to the client's network.
- Clients love this because it has minimal impact on IT teams and infrastructure. You can also reassure them that their source code is protected because it's centrally managed and access is tightly restricted.
- Use your favorite cloud infrastructure. All you need is basic cloud Linux servers with minimal resource configurations. Since there is so much competition in this area, you are all but guaranteed that costs will go down as time goes on.
- Developers love this because they see no difference from developing on their local machine, with the additional benefit that it doesn't consume their local machine's resources, so compiles won't slow everything else down. Oh ya, it also means they can work from anywhere with a decent Internet connection, even a beach in Thailand. (We can't guarantee how productive they will be on that beach, but in theory, it's possible.)
For a small, fixed cost of approximately $30/month, we now have highly secure, project-specific RDI machines for all our remote engineers. With minimal training and no impact on their local machines, our devs can use any hardware they like, with no restrictions on what they have installed. Onboarding new developers is almost instantaneous, and we can ensure no source code lives on our developers' machines. If a laptop is lost or a developer has to be let go, we can immediately block access. Our clients love how it minimizes the impact on their IT teams while ensuring security.
Of course, this type of development is not for every project. Here at Border, we specialize in front-end web application development, which lends itself very well to this type of environment. That said, extending this to other types of application development should be relatively easy, as long as the necessary toolchains are installed in the remote instance's Docker image and port mapping can be achieved through the SSH tunnel.
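To illustrate that port mapping for a different stack, here is a sketch of an entry in the developer's ~/.ssh/config. Everything in it is hypothetical (the host alias, the IP, the key filename, and the choice of port 8080 for a back-end API); the point is that both plain ssh and the VS Code Remote - SSH extension can pick up such an entry, forwarding the remote service the same way we forwarded the Webpack dev server.

```
# ~/.ssh/config -- every value below is hypothetical
Host rdi-projectx
    # The instance's fixed, whitelisted IP address
    HostName 203.0.113.10
    User dev
    IdentityFile ~/.ssh/rdi_projectx_key
    # Make remote port 8080 reachable at localhost:8080
    LocalForward 8080 localhost:8080
```

With an entry like this in place, `ssh rdi-projectx` opens the tunnel, and whatever toolchain is baked into the image works the same way the front-end setup does.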
We've been extremely happy with this setup and look forward to the new ways it allows us to work. Even more exciting are the latest updates to Visual Studio Online, which bundle all of these features together and include some really interesting environment- and code-sharing capabilities. This could make it even easier to spin up development environments, although we have some concern about the final pricing, since Azure was the most expensive environment in our testing.
Additional Information / Resources
- Visual Studio Code Remote Development information (including more on Visual Studio Online)
- Visual Studio Online information
- Facebook and Microsoft team up on remote development using Visual Studio Code. This will be their primary work style/environment at Facebook moving forward.
- Stripe adding a dedicated engineering hub: remote
- Using a Chromebook for cheap/fast machine using a remote development environment
- 11 Data-backed Reasons to Work With Remote Developers