As was stated earlier, Nebula is composed of a number of Gradle plugins, each serving a unique purpose. There are, however, logical groupings to some of the plugins; the categories are described below.
Gradle provides a substantial dependency management facility, but we have found these capabilities insufficient for the needs of Netflix engineers. As a result, we have built a few plugins that extend Gradle's dependency management capabilities.
The goal of the gradle-dependency-lock-plugin is to ensure builds are repeatable over time, by locking the complete transitive dependency graph into a single source file. This particular plugin is useful for:
Check out the gradle-dependency-lock-plugin GitHub page for details on how to use it.
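As a sketch of how the lock plugin is typically wired in — the plugin id and task names follow its README, while the dependency and version shown are illustrative:

```groovy
// build.gradle — minimal sketch of the dependency-lock workflow
plugins {
    id 'nebula.dependency-lock'
}

apply plugin: 'java'

dependencies {
    // a dynamic version that the lock file will pin transitively
    implementation 'com.google.guava:guava:latest.release'
}
```

Running `./gradlew generateLock saveLock` resolves the dependency graph and writes it to a `dependencies.lock` file; subsequent builds resolve against the locked versions instead of re-resolving dynamic ranges.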
The goal of the nebula-dependency-recommender-plugin is to make it easier to produce and consume a Maven BOM file. This plugin allows library producers to publish a single BOM file that defines a graph of dependency versions that all work together. This plugin also allows library consumers to defer the specification of specific dependency versions to an authoritative BOM producer.
Check out the nebula-dependency-recommender-plugin GitHub page for details on how to use it.
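A hedged sketch of the consumer side of the workflow — the BOM coordinates below are made up for illustration:

```groovy
// build.gradle — consuming version recommendations from a BOM
plugins {
    id 'nebula.dependency-recommender'
}

apply plugin: 'java'

dependencyRecommendations {
    // defer version choices to a published BOM (illustrative coordinates)
    mavenBom module: 'example.org:example-platform:1.0.0'
}

dependencies {
    // no version given: the recommender supplies it from the BOM
    implementation 'com.google.guava:guava'
}
```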
The goal of the gradle-resolution-rules-plugin is to provide a mechanism to share dependency rules across all Gradle builds within an entire organization. The rules can be packaged and published using the nebula.resolution-rules-producer sub-plugin, and consumed and applied using the nebula.resolution-rules sub-plugin.
Check out the gradle-resolution-rules-plugin GitHub page for details on how to use it.
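On the consumer side, a minimal sketch looks like the following; the local rules file name is illustrative (in practice, rules are usually consumed from a published artifact):

```groovy
// build.gradle — applying shared resolution rules
plugins {
    id 'nebula.resolution-rules'
}

dependencies {
    // rules can come from a local JSON file or a published rules artifact
    resolutionRules files('local-rules.json')
}
```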
All deployments at Netflix are conducted via what we call a “bake”. A bake is the process of taking a base Amazon Machine Image (BaseAMI) and installing the application via its Debian package. This enables us to easily conform to the Immutable Server pattern. Nebula provides a few plugins to help with this process.
The goal of this plugin is to produce a system package, typically an RPM or Debian package.
Check out the gradle-ospackage-plugin GitHub page for details on how to use it.
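A minimal sketch of a system-package build, assuming the standard ospackage DSL; the package name and paths are illustrative:

```groovy
// build.gradle — packaging build output as a Debian or RPM package
plugins {
    id 'nebula.ospackage'
}

ospackage {
    packageName = 'my-service'    // illustrative
    version = '1.0.0'
    from('build/install/my-service') {
        into '/opt/my-service'
    }
}
// the plugin adds buildDeb and buildRpm tasks that honor the block above
```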
There are a number of Nebula plugins that assist in the publishing and releasing of software, based on the needs of Netflix services.
The goal of this plugin is to make it dirt simple to publish your Java library to a Maven or Ivy repository without all of the boilerplate Gradle DSL.
Check out the nebula-publishing-plugin GitHub page for details on how to use it.
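A minimal sketch, assuming the Maven flavor of the plugin; the repository URL is illustrative:

```groovy
// build.gradle — publishing without the usual publication boilerplate
plugins {
    id 'nebula.maven-publish'   // an ivy-publish variant also exists
}

apply plugin: 'java'

publishing {
    repositories {
        maven {
            url = 'https://repo.example.org/releases'  // illustrative
        }
    }
}
```

The plugin configures the publication itself (artifact, POM metadata, and so on), so the build script only has to say where to publish.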
The goal of this plugin is to simplify a Semantic Versioning approach to releasing Gradle based builds.
Check out the nebula-release-plugin GitHub page for details on how to use it.
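The release workflow is driven by tasks rather than DSL. The task names below are the ones the plugin documents; the example version numbers are illustrative:

```shell
# semantic-versioning release tasks provided by nebula.release
./gradlew devSnapshot   # e.g. 0.1.0-dev.3+abc1234
./gradlew candidate     # e.g. 0.1.0-rc.1
./gradlew final         # e.g. 0.1.0
```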
The goal of this plugin is to remove much of the boilerplate required in using the existing gradle-bintray-plugin. The goal is to allow users to apply the plugin and publish to Bintray with very little ceremony.
Check out the nebula-bintray-plugin page for details on how to use it.
The goal of this plugin is to allow for the execution of Git commands from a Gradle build.
Check out the gradle-git-scm-plugin page for details on how to use it.
The goal of this plugin is to provide tasks that allow the management of a Git repository in a Bitbucket server (formerly Stash).
Check out the gradle-stash-plugin GitHub page for details on how to use it.
The Nebula team has built a variety of additional miscellaneous plugins over time, each with a unique purpose.
The Gradle Lint plugin is a pluggable and configurable linter tool for identifying and reporting on patterns of misuse or deprecations in Gradle scripts and related files. It is inspired by the excellent ESLint tool for Javascript and by the formatting in NPM’s eslint-friendly-formatter package.
It assists a centralized build tools team in gently introducing and maintaining a standard build script style across their organization.
The gradle-java-cross-compile plugin automatically configures the bootstrap classpath when the requested targetCompatibility is lower than the current Java version, avoiding the cross-compilation warnings and accidental use of newer APIs that would otherwise result. The plugin supports Java, Groovy joint compilation, and Kotlin, and locates the JDKs it needs automatically.
The goal of this project is to make it easy to set up a Java project the Netflix way. While it is tailored to Netflix's view of project setup, the defaults are sane enough for most projects. Among other conventions, applying this plugin produces a *-javadoc.jar and a *-sources.jar as build outputs.
Check out the nebula-project-plugin GitHub page for details on how to use it.
A feature provided by Maven that is missing from Gradle is the <developers/> section, which denotes the contact information for the owners of the project. The purpose of the gradle-contacts-plugin is to provide comparable features to Gradle.
Check out the gradle-contacts-plugin GitHub page for details on how to use it.
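A short sketch of the contacts DSL, with illustrative contact details:

```groovy
// build.gradle — declaring project owners, Maven <developers/>-style
plugins {
    id 'nebula.contacts'
}

contacts {
    'owner@example.org' {       // illustrative address
        moniker 'Example Owner'
        role 'owner'
    }
}
```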
The goal of this plugin is to make it easier to add either an optional or provided configuration to an existing Gradle project.
Check out the gradle-extra-configurations-plugin page for details on how to use it.
This plugin allows you to override arbitrary Gradle properties via command line arguments. This is convenient when you want to quickly change values that are normally static for one-off builds.
Check out the gradle-override-plugin page for details on how to use it.
An opinionated plugin that wraps the clojuresque Gradle plugin, removing the Clojars logic.
Check out the nebula-clojure-plugin page for details on how to use it.
Gradle plugin to set up common conventions for NetflixOSS projects.
This plugin supports projects in the NetflixOSS org (and it isn't meant to be used elsewhere). It is, at its essence, just a combination of other plugins that are common to all NetflixOSS projects, with some additional configuration.
This project could be used as an example of how a “project plugin” could work. A “project plugin” is a Gradle plugin that provides consistency across many projects, e.g. in a GitHub org or an enterprise.
These plugins don’t provide significant value by themselves, but are generally used with some other plugin or infrastructure component.
The goal of this plugin is to collect metadata about the environment where the Gradle build is being executed.
Check out the gradle-info-plugin GitHub page for details on how to use it.
This plugin is the foundation of the gradle-git-scm-plugin and can be used to build other scm related Gradle plugins.
Check out the gradle-scm-plugin GitHub page for details on how to use it.
Kotlin library providing extensions to assist with Gradle interop and backwards compatibility.
Check out the nebula-gradle-interop GitHub page.
The goal of this plugin was to capture and publish metadata and build performance metrics to a centralized location so the Netflix Build Tools team could analyze build trends. By default, the gradle-metrics-plugin publishes a JSON document to an Elasticsearch cluster.
Check out the gradle-metrics-plugin GitHub page for details on how to use it.
The nebula-test plugin was extremely useful in ensuring we can easily test our plugins. However, Gradle has begun to integrate these concepts into Gradle core. As a result, we recommend using Gradle TestKit instead of nebula-test.
The nebula-kotlin plugin was extremely useful in providing the Kotlin plugin via the Gradle plugin portal, and it added ergonomic improvements over the default plugin: it could target a specific targetJdk if desired, and it bundled the kotlin-allopen and kotlin-noarg plugins so they could be applied without adding them manually to the classpath. However, this plugin is now in maintenance mode; it will continue to receive 1.2 and 1.3 Kotlin releases. JetBrains has deprecated the existing jvm plugin and replaced it with the multiplatform plugin.
The multiplatform plugin is a complete migration from the legacy plugin and provides many of the ergonomic features this plugin offered, such as JVM target configuration and Kotlin library version management. If you have a project that will move to 1.4 once it's released, you should migrate to multiplatform.
Common classes shared by Nebula plugins. Adds useful Gradle tasks such as Download, Unzip and Untar.
Last week, we covered the launch of Slack Engineering's open source mesh VPN system, Nebula. Today, we're going to dive a little deeper into how you can set up your own Nebula private mesh network—along with a little more detail about why you might (or might not) want to.

The biggest selling point of Nebula is that it's not 'just' a VPN, it's a distributed VPN mesh. A conventional VPN is much simpler than a mesh and uses a simple star topology: all clients connect to a server, and any additional routing is done manually on top of that. All VPN traffic has to flow through that central server, whether it makes sense in the grander scheme of things or not.
In sharp contrast, a mesh network understands the layout of all its member nodes and routes packets between them intelligently. If node A is right next to node Z, the mesh won't arbitrarily route all of its traffic through node M in the middle—it'll just send them from A to Z directly, without middlemen or unnecessary overhead. We can examine the differences with a network flow diagram demonstrating patterns in a small virtual private network.
All VPNs work in part by exploiting the bi-directional nature of network tunnels. Once a tunnel has been established—even through Network Address Translation (NAT)—it's bidirectional, regardless of which side initially reached out. This is true for both mesh and conventional VPNs—if two machines on different networks punch tunnels outbound to a cloud server, the cloud server can then tie those two tunnels together, providing a link with two hops. As long as you've got that one public IP answering to VPN connection requests, you can get files from one network to another—even if both endpoints are behind NAT with no port forwarding configured.
Where Nebula becomes more efficient is when two Nebula-connected machines are closer to each other than they are to the central cloud server. When a Nebula node wants to connect to another Nebula node, it'll query a central server—what Nebula calls a lighthouse—to ask where that node can be found. Once the location has been obtained from the lighthouse, the two nodes can work out between themselves the best route to one another. Typically, they'll be able to communicate with one another directly rather than routing through the lighthouse—even if they're behind NAT on two different networks, neither of which has port forwarding enabled.
By contrast, connections between any two PCs on a traditional VPN must pass through its central server—consuming that server's monthly bandwidth allotment and potentially degrading both throughput and latency from peer to peer.
Nebula can—in most cases—establish a tunnel directly between two different NATted networks, without the need to configure port forwarding on either side. This is a little brain-breaking—normally, you wouldn't expect two machines behind NAT to be able to contact each other without an intermediary. But Nebula is a UDP-only protocol, and it's willing to cheat to achieve its goals.
If both machines reach the lighthouse, the lighthouse knows the source UDP port for each side's outbound connection. The lighthouse can then inform one node of the other's source UDP port, and vice versa. By itself, this isn't enough to make it back through the NAT pinhole—but if each side targets the other's NAT pinhole and spoofs the lighthouse's public IP address as being the source, their packets will make it through.
UDP is a stateless connection, and very few networks bother to check for and enforce boundary validation on UDP packets—so this source-address spoofing works, more often than not. However, some more advanced firewalls may check the headers on outbound packets and drop them if they have impossible source addresses.
If only one side has a boundary-validating firewall that drops spoofed outbound packets, you're fine. But if both ends have boundary validation available, configured, and enabled, Nebula will either fail or be forced to fall back to routing through the lighthouse.
We specifically tested this and can confirm that a direct tunnel from one LAN to another across the Internet worked, with no port forwarding and no traffic routed through the lighthouse. We tested with one node behind an Ubuntu homebrew router, another behind a Netgear Nighthawk on the other side of town, and a lighthouse running on a Linode instance. Running iftop on the lighthouse showed no perceptible traffic, even though a 20Mbps iperf3 stream was cheerfully running between the two networks. So right now, in most cases, direct point-to-point connections using forged source IP addresses should work.
To set up a Nebula mesh, you'll need at least two nodes, one of which should be a lighthouse. Lighthouse nodes must have a public IP address—preferably, a static one. If you use a lighthouse behind a dynamic IP address, you'll likely end up with some unavoidable frustration if and when that dynamic address updates.
The best lighthouse option is a cheap VM at the cloud provider of your choice. The $5/mo offerings at Linode or Digital Ocean are more than enough to handle the traffic and CPU levels you should expect, and it's quick and easy to open an account and get one set up. We recommend the latest Ubuntu LTS release for your new lighthouse's operating system; at press time that's 18.04.
Nebula doesn't actually have an installer; it's just two bare command line tools in a tarball, regardless of your operating system. For that reason, we're not going to give operating system specific instructions here: the commands and arguments are the same on Linux, MacOS, or Windows. Just download the appropriate tarball from the Nebula release page, open it up (Windows users will need 7zip for this), and dump the commands inside wherever you'd like them to be.
On Linux or MacOS systems, we recommend creating an /opt/nebula folder for your Nebula commands, keys, and configs—if you don't have an /opt yet, that's okay, just create it, too. On Windows, C:\Program Files\Nebula is probably a more sensible location.
The first thing you'll need to do is create a Certificate Authority using the nebula-cert program. Nebula, thankfully, makes this a mind-bogglingly simple process:
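The command looks something like the following—the CA name is whatever you want it to be, and you run it from wherever you dumped the tools:

```shell
# create the certificate authority for the whole mesh
./nebula-cert ca -name "My Nebula Network"
# this writes ca.crt and ca.key to the current directory
```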
What you've actually done is create a certificate and key for the entire network. Using that key, you can sign keys for each node itself. Unlike the CA certificate, node certificates need to have the Nebula IP address for each node baked into them when they're created. So stop for a minute and think about what subnet you'd like to use for your Nebula mesh. It should be a private subnet—so it doesn't conflict with any Internet resources you might need to use—and it should be an oddball one so that it won't conflict with any LANs you happen to be on.
Nice, round numbers like 192.168.0.x, 192.168.1.x, 192.168.254.x, and 10.0.0.x should be right out, as the odds are extremely good you'll stay at a hotel, friend's house, etc that uses one of those subnets. We went with 192.168.98.x—but feel free to get more random than that. Your lighthouse will occupy .1 on whatever subnet you choose, and you will allocate new addresses for nodes as you create their keys. Let's go ahead and set up keys for our lighthouse and nodes now:
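Assuming the 192.168.98.x subnet chosen above, the signing commands look something like this—node names and addresses are yours to choose:

```shell
# the lighthouse gets .1; each node gets its own address on the /24
./nebula-cert sign -name "lighthouse" -ip "192.168.98.1/24"
./nebula-cert sign -name "node1" -ip "192.168.98.2/24"
./nebula-cert sign -name "node2" -ip "192.168.98.3/24"
# each invocation produces a matching <name>.crt and <name>.key
```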
Now that you've generated all your keys, consider getting them the heck out of your lighthouse, for security. You need the ca.key file only when actually signing new keys, not to run Nebula itself. Ideally, you should move ca.key out of your working directory entirely to a safe place—maybe even a safe place that isn't connected to Nebula at all—and only restore it temporarily if and as you need it. Also note that the lighthouse itself doesn't need to be the machine that runs nebula-cert—if you're feeling paranoid, it's even better practice to do CA stuff from a completely separate box and just copy the keys and certs out as you create them.
Each Nebula node does need a copy of ca.crt, the CA certificate. It also needs its own .key and .crt, matching the name you gave it above. You don't need any other node's key or certificate, though—the nodes can exchange them dynamically as needed—and for security best practice, you really shouldn't keep all the .key and .crt files in one place. (If you lose one, you can always just generate another that uses the same name and Nebula IP address from your CA later.)
Nebula's GitHub repo offers a sample config.yml with pretty much every option under the sun and lots of comments wrapped around them, and we absolutely recommend anyone interested poke through it to see all the things that can be done. However, if you just want to get things moving, it may be easier to start with a drastically simplified config that has nothing but what you need.
Lines that begin with a hashtag are commented out and not interpreted.
Warning: our CMS is mangling some of the whitespace in this code, so don't try to copy and paste it directly. Instead, get working, guaranteed-whitespace-proper copies from Github: config.lighthouse.yaml and config.node.yaml.
There isn't much different between lighthouse and normal node configs. If the node is not to be a lighthouse, just set am_lighthouse to false, and uncomment (remove the leading hashtag from) the line # - '192.168.98.1', which points the node to the lighthouse it should report to.
Note that the lighthouse:hosts list uses the nebula IP of the lighthouse node, not its real-world public IP! The only place real-world IP addresses should show up is in the static_host_map section.
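Pulling those pieces together, a drastically simplified lighthouse config might look like the sketch below. The file paths, hostname, port, and wide-open firewall rules are illustrative assumptions—the sample config on GitHub remains the authoritative reference:

```yaml
# minimal lighthouse config sketch (values are illustrative)
pki:
  ca: /opt/nebula/ca.crt
  cert: /opt/nebula/lighthouse.crt
  key: /opt/nebula/lighthouse.key

static_host_map:
  # nebula IP of the lighthouse -> its real-world address
  "192.168.98.1": ["lighthouse.example.org:4242"]

lighthouse:
  am_lighthouse: true
  hosts: []
  # on a normal node, instead use:
  #   am_lighthouse: false
  #   hosts:
  #     - '192.168.98.1'

listen:
  host: 0.0.0.0
  port: 4242

punchy:
  punch: true    # helps keep NAT pinholes open

tun:
  dev: nebula1

firewall:
  outbound:
    - port: any
      proto: any
      host: any
  inbound:
    - port: any
      proto: any
      host: any
```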
I hope you Windows and Mac types weren't expecting some sort of GUI—or an applet in the dock or system tray, or a preconfigured service or daemon—because you're not getting one. Grab a terminal—a command prompt run as Administrator, for you Windows folks—and run nebula against its config file. Minimize the terminal/command prompt window after you run it.
root@lighthouse:/opt/nebula# ./nebula -config ./config.yml
That's all you get. If you left the logging set at info the way we have it in our sample config files, you'll see a bit of informational stuff scroll up as your nodes come online and begin figuring out how to contact one another.
If you're a Linux or Mac user, you might also consider using the screen utility to hide nebula away from your normal console or terminal (and keep it from closing when that session terminates).
Figuring out how to get Nebula to start automatically is, unfortunately, an exercise we'll need to leave for the user—it's different from distro to distro on Linux (mostly depending on whether you're using systemd or init). Advanced Windows users should look into running Nebula as a custom service, and Mac folks should call Senior Technology Editor Lee Hutchinson on his home phone and ask him for help directly.
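That said, on a systemd-based distro, a minimal unit file along these lines is one common approach—the paths assume the /opt/nebula layout used earlier and are illustrative:

```ini
# /etc/systemd/system/nebula.service — hedged sketch for systemd distros
[Unit]
Description=Nebula mesh VPN
After=network-online.target

[Service]
ExecStart=/opt/nebula/nebula -config /opt/nebula/config.yml
Restart=always

[Install]
WantedBy=multi-user.target
```

After placing the file, `systemctl enable --now nebula` starts it and keeps it starting at boot.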
Nebula is a pretty cool project. We love that it's open source, that it uses the Noise platform for crypto, that it's available on all three major desktop platforms, and that it's easy...ish to set up and use.
With that said, Nebula in its current form is really not for people afraid to get their hands dirty on the command line—not just once, but always. We have a feeling that some real UI and service scaffolding will show up eventually—but until it does, as compelling as it is, it's not ready for 'normal users.'
Right now, Nebula's probably best used by sysadmins and hobbyists who are determined to take advantage of its dynamic routing and don't mind the extremely visible nuts and bolts and lack of anything even faintly like a friendly interface. We definitely don't recommend it in its current form to 'normal users'—whether that means yourself or somebody you need to support.
Unless you really, really need that dynamic point-to-point routing, a more conventional VPN like WireGuard is almost certainly a better bet for the moment.