FreeBSD’s default filesystem is ZFS, which means ZFS support is baked right in and well documented. ZFS has been the best thing to happen to system administrators since containers. ZFS isn’t just a filesystem: it’s also a volume manager, eliminating the need for tools like LVM (and, as you’ll soon see, for separate software RAID).
Jails on FreeBSD predate Docker containers, and jail management is built right into the kernel. Even though you can manage jails without any extra software, we’re going to use a dedicated jail manager tool to make life simpler. Jails give you the flexibility of Docker/Podman containers with less overhead.
Jails can also run FreeBSD versions older than the host’s. For example, your host can run FreeBSD 14.0 while some jails stay on the older FreeBSD 13.
In FreeBSD land, the “base” system is managed separately from user-installed packages (freebsd-update fetch install and pkg update && pkg upgrade, respectively). I think this makes for an easier-to-manage server system where you have more control over software updates.
You can also downgrade: if you upgrade to a new FreeBSD major or minor release and find something doesn’t work correctly, you can roll back to the previous version.
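To make that concrete, a typical cycle looks something like the following sketch (the release number shown is illustrative, not a recommendation):

```shell
# Patch the currently installed release, then update packages
freebsd-update fetch install
pkg update && pkg upgrade

# Jump to a newer release (version shown is hypothetical)
freebsd-update -r 14.1-RELEASE upgrade
freebsd-update install

# If the new release misbehaves, revert the last base-system update
freebsd-update rollback
```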
The reason I started building this server in the first place was to have a semi-dedicated place to store and stream my digital media, so most of the design choices revolved around running Plex. Some people have had a falling-out with Plex as the company moves toward a Netflix-style platform that mixes its own streaming content in with self-hosted media.
Plex is the only self-hosted media solution that officially supports FreeBSD. Jellyfin exists, but there is no official Jellyfin support, and at the time I built this server Jellyfin had mixed reviews on reliability.
Nextcloud has been around for ages and is very mature as a self-hosted content platform. It was an obvious choice.
I wanted to have my local Git repo be my main repo for private projects and Yadm dotfiles storage. Gogs offers a nice and simple web UI that’s easy to install and maintain.
This is actually a debated issue, for the simple reason that ECC RAM is considered especially useful for ZFS pools. Consumer Intel chips, however, have essentially no motherboard support for ECC RAM; you’d have to go with a server or workstation chassis and motherboard, which adds several hundred dollars to the cost. AMD, meanwhile, has consumer-grade motherboards that support ECC.
When I designed my system, Plex supported Intel hardware transcoding on FreeBSD. However, as of May 2023, Plex has removed Intel hardware transcoding. This means you will have to set up a Docker container running Linux, or wait for the two pull requests to be merged to bring back native FreeBSD support. With that said, had I known about this before designing my system, I might have gone with AMD to run ECC RAM.
There’s nothing special about ZFS that requires or encourages the use of ECC RAM any more than any other filesystem. Since ECC RAM is not a hard requirement, I went with:
Room to upgrade to 64GB in the future
I wanted something with enough M.2 slots to run two drives in a mirrored vdev without giving up lanes to a SATA port. This board has 3x M.2 slots, but using the second (middle) slot disables one of the onboard SATA ports. This is fine for me, as I don’t intend to use all three.
6x SATA ports is fine to start; you can add a PCIe 3.0 HBA later to attach more drives.
Every other feature is standard and not something you have to go out of your way to look for.
The bulk of my money was spent on HDDs and SSDs.
Running in a mirrored vdev so the total usable space will be ~1TB. Enough to run the OS and user-installed packages without worrying about disk space.
I wanted all of the jails running on separate pools and separate physical disks from the OS.
Mirrored vdev pool, so total usable space will be 1TB. Though I don’t expect jails to need much actual storage, in hindsight I probably should have opted for double the space just to be on the safe side.
Most of the storage is packed into large HDDs to hold the actual video media files. I wanted to maximize physical bay space by picking relatively large drives (14TB+) with a cheap price per TB. I researched a popular HDD seller that sells factory-recertified disk drives. I felt confident given their track record, the fact that the majority of the video files can be recovered from remote sources, and that the ZFS setup adds redundancy and error checking.
Two separate mirrored vdevs gives 32TB of total usable space.
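As a sketch of the pool layout described above (pool and device names here are hypothetical; check your actual devices with camcontrol devlist):

```shell
# Jail pool: one mirrored vdev across the two 1TB SSDs
zpool create jails mirror /dev/ada0 /dev/ada1

# Media pool: two mirrored vdevs striped together
# (e.g. four 16TB drives yielding ~32TB usable)
zpool create media mirror /dev/da0 /dev/da1 mirror /dev/da2 /dev/da3
```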
The FreeBSD Handbook has details on installing in Chapter 2. Complete with screenshots and important notes. However, if you’ve installed Linux before, it’s not much different.
I use a simple USB drive to install operating systems these days, so I downloaded FreeBSD-14.0-RELEASE-amd64-memstick.img.xz, which can be found on the ISO Images download site.
Using Etcher, I flashed it to the USB drive, but of course you can use any other method.
Before installing, I like to disable Secure Boot, since this system will not have Windows installed and FreeBSD does not support Secure Boot.
The key point is that the installer defaults to selecting debugging packages, which you likely won’t need for a system like this. These are the components I installed, but remember, these can always be installed or removed later, so it’s not a big deal.
Using guided root-on-ZFS is the simplest route. For the install target, select the two boot drives: in my case, the two NVMe devices.
The NIC on my motherboard is a Dragon RTL8125BG, which is just the Realtek RTL8125BG chipset. The installer did not recognize my network card, so I had to skip the network setup portion of the install. This means that out of the box the machine won’t have LAN/WAN connectivity. I will show you how to install the drivers and configure networking at the end of this article. For now, just skip the network setup, and make sure to keep your keyboard and monitor handy.
I selected:
I skipped this section as I will configure this manually later, after the install.
By default there is just a root account. I like to use sudo with a regular account. So you can create that account here now, or later.
The rest of the install should be self-explanatory.
Since the installer did not recognize my NIC, I had to manually install the drivers. This takes a bit of work, as you have to get the driver onto the system in the first place.
You can use any USB drive formatted in FAT32.
You will need the following two packages:
Pkg is the package management tool FreeBSD uses to install binary packages. In my case it wasn’t installed during the OS installation, and we’re going to use it to get the Realtek kernel driver installed.
Copy over the pkg files to a USB drive formatted in FAT32 and plug it into our FreeBSD machine.
When you insert the USB drive it won’t mount automatically, but you should see it in the /dev folder.
ls /dev/da*
should show /dev/da0s1 listed after you insert the USB drive, which you can then mount.
[~/]$ mkdir /media/usb
[~/]$ mount_msdosfs /dev/da0s1 /media/usb
Move the two .pkg files you placed on the drive to any folder on the host: cp /media/usb/* /tmp/
The files are really XZ-compressed tar (.txz) archives. You’ll need to extract the pkg package in order to bootstrap the package manager so you can install the drivers. The pkg-VERSION.pkg archive contains a binary at usr/local/sbin/pkg-static, which you will use to install pkg itself. Then install the Realtek driver.
[/tmp]$ tar xf pkg-VERSION.pkg
[/tmp]$ usr/local/sbin/pkg-static add pkg-VERSION.pkg
[/tmp]$ pkg add realtek-re-kmod-VERSION.pkg
After the driver is installed, we simply have to add a line to the /etc/rc.conf file:
ifconfig_re0="DHCP"
After that, you can restart networking, manually bring up the interface, or restart the server.
[/tmp]$ ifconfig re0 up
This configures a simple dynamic IP address to get started; it can be switched to a static address later.
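For reference, the whole sequence can be done with sysrc so the settings persist in /etc/rc.conf. The kld_list line loads the out-of-tree driver at boot; check the package’s install message for the exact module name, as if_re is my assumption here:

```shell
sysrc kld_list+="if_re"        # load the Realtek driver module on boot
sysrc ifconfig_re0="DHCP"      # configure re0 via DHCP
kldload if_re                  # load the module now, without rebooting
service netif restart re0      # bring the interface up
```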
With the system setup, we can start adding ZFS pools for both Jail and Media storage. I’ll go over this in Part 2 of this series.
If you want a mobile and more secure way to manage your SSH and GPG keys, one route you can take is leveraging hardware keys. The Nitrokey 3C acts as an MFA device, but it can also store keys commonly used for SSH authentication. One of the main benefits of storing your private keys on a secure hardware device is that they’re never loaded into your computer’s RAM. The cryptographic operations are performed on the device itself, meaning you’re protected from malware attacks that attempt to steal your private keys.
All of Nitrokey’s products are open source, and the Nitrokey 3C has firmware written in Rust. I have the FIDO2, which I use for MFA on website logins, and the 3C, their newer product line, which I received not long ago. The 3C is what I will be using for this blog post, though this guide should work for most other OpenPGP-compatible hardware devices, since we’re just using OpenPGP.
The latest firmware at the time of this writing is v1.5.0 and it supports everything you need to use the keys stored on the device to SSH into Linux servers.
If you already have keys, you can skip this section.
Run gpg2 --card-status
to make sure your Nitrokey shows up. If it doesn’t, you may need to add the udev rules found in the Troubleshooting section of this post.
Run gpg2 --card-edit
which will drop you into the gpg prompt.
Enter admin, then enter generate
It will ask you for the Admin PIN (default: 12345678), then ask for the PIN (default: 123456).
When it asks you to make an off-card backup of the encryption key, note that it will only back up the encryption key, not the whole key set. So the best option is to select No. If you want to make a full backup, follow these instructions.
Afterward, it will take some time, but it will generate the whole key set (Signature Key, Encryption Key, Authentication Key).
Note the “Authentication key”, as this is the key that will be used for the Linux server.
Export the public key in SSH format. I like to place it in the .ssh folder so I can copy it to servers later.
gpg2 --export-ssh-key AUTH_KEY_ID > ~/.ssh/nitrossh.pub
You can use ssh-copy-id to send it to the server under the user you want to access.
ssh-copy-id -f -i ~/.ssh/nitrossh.pub youruser@example.net
If you need to use a server key that isn’t in a keyring, as is the case with AWS EC2 nodes by default, use the -o flag and set “IdentityFile” to the key you currently use to log in:
ssh-copy-id -f -i ~/.ssh/nitrossh.pub -o 'IdentityFile test123.pem' ec2-user@ec2-34-333-22-11.compute-1.amazonaws.com
Next you need to setup the GPG agent to interact with your SSH agent. There are a few configuration files that need to be in play, so I’ve created a GitHub repository with them.
Make sure you have an instance of ssh-agent running in your current terminal/shell.
eval $(ssh-agent)
You can source the bash file that sets the gpg-agent environment variables and configuration required.
source nitrokey.sh
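I won’t reproduce the repository here, but the core of what such a script needs to do is point SSH at gpg-agent’s SSH-compatible socket. Roughly (assuming gpg-agent.conf already contains enable-ssh-support):

```shell
# Tell gpg-agent which TTY to use for PIN prompts
export GPG_TTY="$(tty)"

# Point the SSH client at gpg-agent's SSH-compatible socket
export SSH_AUTH_SOCK="$(gpgconf --list-dirs agent-ssh-socket)"

# Make sure the agent is actually running
gpgconf --launch gpg-agent
```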
Afterward, you should see the Authentication key stored on the Nitrokey loaded into the SSH agent.
ssh-add -L
You should see something like the following:
ssh-rsa LONGKEYHERE cardno:000F D51E28ED
You should now be able to log in to the server.
If executing gpg2 --card-edit or gpg2 --card-status as a non-privileged user gives an error that the card cannot be found or opened, it is likely that the daemon responsible for handling smart cards is not running.
If you are using the PC/SC Smart Card Daemon, make sure it’s running:
systemctl status pcscd.service
The Arch documentation has good information on this issue.
The following sections will briefly go over the code and are meant to be read while viewing the code on GitHub.
Just like the code from the previous article, we start with the main point of entry: the app.py file.
The props dictionary holds some of the common values we’ll make use of in the code that does the heavy lifting. Something new here is the introduction of runtime contexts to pass two account-specific values, noted by the -c VAR flag. You could use dotenvs, but I wanted to show more CDK concepts this time.
The DynamoDB stack is set up first, but the two stacks are not dependent on each other and could swap places.
The prepare_import_data function highlights one of the main benefits of using CDK over CloudFormation, or even Terraform. It’s a custom function I created to handle importing data from CSV files (located in ./data) and loading it into the DynamoDB tables that will be created later. You can obviously create functions or classes to handle anything you can imagine.
Digging deeper into this function, we’re formatting the data for PutRequest, a DynamoDB data type that will be used in an API call in this stack.
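The transformation itself is simple. A minimal sketch of turning CSV rows into PutRequest entries (treating every attribute as a DynamoDB string type for brevity; the actual function may differ) might look like:

```python
import csv
import io

def rows_to_put_requests(csv_text: str) -> list:
    """Turn CSV rows into BatchWriteItem-style PutRequest entries."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [
        {"PutRequest": {"Item": {key: {"S": value} for key, value in row.items()}}}
        for row in reader
    ]

sample = "Id,Title\n1,FreeBSD Handbook\n2,ZFS Docs\n"
requests = rows_to_put_requests(sample)
print(requests[0])
# {'PutRequest': {'Item': {'Id': {'S': '1'}, 'Title': {'S': 'FreeBSD Handbook'}}}}
```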
There are three tables being created: “Resource”, “Category”, and “Bookmark”. The Resource table has an added Global Secondary Index.
A cool feature of CDK is the ability to create custom resources, which are AWS Lambda-backed. This one hands our BatchWriteItem API call to a short-lived Lambda function that is responsible for actually making the API request against the DynamoDB table. We don’t have to worry about creating and managing the Lambda function ourselves.
The props.copy() call copies the props dictionary passed in from app.py, merging in any additions so it can be handed to another stack. In this case, looking at app.py, we pass it to the ECSStack.
While this stack is the bulk of our infrastructure, ironically it has fewer lines of code than the DynamoDB stack.
The VPC is created, and that’s all we really need to do in terms of our VPC configuration. The Fargate construct will create nearly everything else for us.
The construct for creating the ECS Cluster object is straightforward.
We have to create a task execution role, so we use the Role construct and give it the correct service principal. This lets the AmazonEC2ContainerRegistryReadOnly managed policy be attached to it, allowing the task to pull from an ECR registry on launch if needed.
The ecs_patterns module is a high-level construct library that requires less code to accomplish what we need. This particular code creates the service that will sit underneath the cluster. The service gets an Application Load Balancer, with the target group pointing at the service and its tasks. The image in this example comes from Docker Hub, but it can come from any Docker registry, including ECR. There are also environment variables and Docker labels set, for the sake of example.
Since the Fargate construct created the security group, we use the object holding it to modify the ingress rule for the Security Group associated with the ECS Service. The SG for the ECS Service will allow port 8080 from the ELB and the CIDR 10.0.0.0/16. The ELB will use a default rule on port 80 from 0.0.0.0/0.
The db_stack
object was passed from the DynamoDB stack through the main app.py
to this ECSStack so that we can modify the Fargate task role to allow access to the DynamoDB Tables created in the previous stack.
The last part simply prints the load balancer DNS to the terminal.
cdk deploy --all -c account_id="111111111111" -c preferred_region="us-east-2"
The CDK documentation recommends that production stacks explicitly specify the environment (account ID and region) for each stack in your app using the env property.
Instead of hardcoding the values, I opted to use runtime contexts as mentioned earlier.
Found in app.py
:
preferred_region = app.node.try_get_context("preferred_region")
account_id = app.node.try_get_context("account_id")
env = core.Environment(region=preferred_region, account=account_id)
Defined just below:
db_stack = DynamoDBStack(app, f"{props['namespace']}-Dynamo",props,env=env)
ecs_stack = ECSStack(app, f"{props['namespace']}-ECS",db_stack.output_props, db_stack=db_stack, env=env)
After this, you will have an ECS Fargate service set up behind an Application Load Balancer. The ALB will be set up as expected, forwarding traffic on the ports you specified and performing health checks on the traffic port (8080).
The ECR repository can easily be replaced by a Docker Hub registry, or any supported image repository. There’s obviously a lot more you can do here, but I think this is a good start.
When developing an application that you want deployed to the cloud quickly, one option is Elastic Beanstalk. The Beanstalk platform handles your deployment’s provisioning, load balancing, scaling, and application health monitoring.
There may be other services you’re interested in stacking, such as RDS or an Elastic Load Balancer. Since you’re developing this stack on AWS, and will likely transition away from Beanstalk in the future for production, it would be good practice to get started with the Cloud Development Kit (CDK). This way you will have a good foundation for developing and deploying a wide range of services on AWS.
CDK is an open-source software development framework for defining your cloud application resources in popular programming languages, such as Python, Go, and Java. You define your architecture in your favorite language, and the CDK engine outputs, or synthesizes, your code to CloudFormation templates. You get the best of both worlds.
You get to benefit from the programming language’s expressive nature and functionality, such as objects, loops, and conditions. If your application were written in Python, it would make sense to write your CDK constructs in Python. This way you could integrate Python classes created for your application into your CDK application code to help bootstrap the application for deployment.
You also have the benefit of being able to version control your CDK code, just as you would if writing Cloudformation templates directly.
There are a few command-line tools and libraries you will need to install on your local system to get up and running. Obviously, you could install these in a Docker container, but that’s out of scope for this article.
Normally, you initialize a working directory by running cdk init sample-app --language python in the directory, as this will generate some scaffolding to help develop your CDK code.
But if you’re using my GitHub app as a starting point, you can just clone my repo.
CDK in Python is similar to writing an application in Python, meaning it’s best practice to use something like virtualenv to manage Python libraries with pip. Most Python CDK projects should make installing the required CDK libraries and constructs easy with pip.
For my sample CDK project that deploys Elastic Beanstalk, MySQL RDS, and an Application Load Balancer, we have three main files. In order of execution:
This is the main file that gets executed when you run any of the CDK commands. Therefore, it should import and include all of the CDK constructs, Python libraries, and Stacks to run your code. It’s the file that connects everything together. I’ve also added some variables that will be shared between stacks.
This stack takes the tedious, but important task of configuring your VPC, including subnets, route tables, and security groups, and expresses it into easy-to-read code that you can replicate and reuse. Since some of this information will be used in the next stack that’s in a separate file, I made sure to pass references to objects when needed.
For example, for a few subnets and the VPC itself, I pass a reference to a variable to be used with certain Beanstalk construct options that expect them. Until CDK creates the VPC you won’t know the VPC ID, but by passing a reference to the object you won’t need to.
This stack starts off with some basic settings for launching the RDS service, along with generating a secure password and storing it in Secrets Manager.
Behind the scenes, when you bootstrap the CDK app into your region it creates and manages an S3 bucket used for storing files it uses. We can leverage this when uploading our application in a zip file to load into Beanstalk. The code I have expects a properly structured zip file to be one level up from the main app.py.
Most of this code uses Level 1 (low-level) CDK constructs, because they allow for more flexibility where needed.
As you’ll soon find out, Elastic Beanstalk makes a lot of assumptions about your application environment, which is fine because its job is to quickly deploy your application. But if you’re sitting down to write CDK code for your deployment, you might as well add a few lines to fine-tune it to fit your needs. There is a long list of options you can customize with Beanstalk.
The last few lines are important and depend on the application you want to deploy, so be sure to set the appropriate Beanstalk platform.
This should set you off to a nice start with your first CDK app. The one thing the code doesn’t handle is making use of the database password stored in Secrets Manager. Beanstalk doesn’t directly support pulling from Secrets Manager at the time of this writing, however, there are a handful of ways to go about it.
One way, though not one I’d recommend for production, is to use environment variables; however, the cleartext password will appear in console logs. A better way is to use ebextensions and the AWS CLI (get-secret-value). Depending on the actual application you’re deploying, you can also use the AWS SDK and pull the secret that way.
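For example, an ebextensions hook or startup script could fetch the password at deploy time with the AWS CLI (the secret name here is hypothetical):

```shell
# Retrieve the generated RDS password from Secrets Manager at startup
DB_PASS="$(aws secretsmanager get-secret-value \
  --secret-id my-beanstalk-db-secret \
  --query SecretString --output text)"
```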
Elastic Beanstalk is a good way to get your application deployed, but for a more scalable and flexible solution you may want to consider Elastic Container Service. So in my next article I’m going to discuss deploying an ECS/Fargate, DynamoDB stack for your application.
When dealing with the AWS Command Line Interface, you’ll quickly find yourself juggling multiple AWS Named Profiles. After all, they make sense when you have multiple user credentials you use frequently, especially when separating permissions between different users. As outlined in the AWS documentation, you specify the profile to run a command as by adding the --profile myprof option. If you don’t set this flag, the command looks for the environment variable AWS_PROFILE=myprof. If neither is set, the default profile is used.
However, neither of these options really leverages the benefits of the command line. They require you to remember to add the --profile option to each command, or to remember which environment variable you set when executing commands.
A while ago I discussed why you should switch to ZSH, and in this article I’m going to give some tangible use cases to support that statement. Working at the shell can and should be a more streamlined experience than dealing with GUIs or web consoles. The AWS web console may be easier to get up and running with, but once you start repeating tasks it becomes obvious that browsing the web interface is slow for configuring services.
The CLI allows a much faster workflow and the ability to reuse code to cut down on redundant tasks. You also get access to certain configuration and troubleshooting options not available in the web console.
There’s a much better way, and it involves ZSH and the AWS plugin. You simply set up your local $HOME/.aws/ profiles as you normally would. But instead of messing with environment variables directly or adding the --profile option to each command, you use the functions added by the plugin.
The ZSH AWS plugin includes the AWS Switch Profile (asp) function that we will use later. If you use Oh My ZSH, you can add the following to your .zshrc
or .zshrc.local
:
plugins=( ... aws ... )
The asp function, run as a command, switches to the specified AWS profile (as it exists in your $HOME/.aws/config file). Depending on your ZSH theme, the active profile will either be displayed at all times in your prompt or, as in my case, displayed in the corner once you type aws on the command line.
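Typical usage looks something like this (profile names are examples; if I recall the plugin’s behavior correctly, asp with no argument clears the active profile):

```shell
asp work       # switch this shell to the "work" profile
aws s3 ls      # now runs against the "work" profile
asp            # with no argument, clears the active profile
```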
This plugin provides two important features that will make your workflow at the command line much easier.
My setup uses Powerlevel10k, the default Manjaro Linux ZSH theme. Out of the box, it only shows the active AWS profile once you finish typing aws on the command line. Of course, if you want to always display the active profile, you can do so.
There’s a line in the p10k.zsh configuration file that you can comment out to always show the active, non-default profile.
Another popular ZSH theme is Agnoster, where the default behavior is to always show the AWS profile if it’s non-default.
The AWS CLI command-line completion gives you the ability to view CLI options when typing commands. Depending on your system and how you installed the AWS CLI, it may or may not be configured.
ZSH and this plugin should certainly work on OS X, though I have not looked into it much since I haven’t owned a Mac in years.
You can spice up your AWS CLI use with a command-line fuzzy finder.
One of the major benefits of Unix-like desktop systems is the ability to configure and fine-tune your desktop environment. The majority of these files are dotfiles, so called because traditionally either they reside in a folder that starts with a dot, e.g. .ssh, or the file itself starts with a dot, e.g. .zshrc. Obviously this isn’t always the case; what’s important is that these are files and directories that contain configuration variables, functions, and settings.
Some of the dotfiles on my system include:
This is considered a small list, but it handles the core of my desktop environment’s configuration. What becomes a hassle is when you run multiple desktops and include laptops in your arsenal.
The answer to these questions is why we’re here.
Yadm (Yet Another Dotfiles Manager), solves the problems that we mentioned above. It’s a wrapper for Git, includes encryption support, and it’s extremely portable and lightweight since it’s just a Bash script. Git revision control allows you to create a centralized repository for you to push and pull configuration files where you get all the benefits of a typical git repository.
Installing Yadm is easy; most Linux distributions have yadm in their official repositories. There is also a Homebrew package for OS X. At the time of this writing, Fedora 35 is the latest stable release but does not have yadm in its official repo; however, you can grab the latest Fedora Rawhide RPM package from the OpenSUSE Build Service.
Check the full installation list here.
Outside of installing yadm and git on your local systems, you will probably want to set up your own private Git server. You could obviously use a private git repo from GitHub, Bitbucket, GitLab, or any one of the other Git services instead of managing your own server. If you elect not to manage sensitive configuration files, you could also just use a public Git repo.
That said, I wouldn’t recommend using a hosted Git service to store your configuration files. When dealing with a set of files that may one day contain sensitive data or credentials, you won’t want to risk pushing them to a public Git repository. In my opinion, when taking control of and adding privacy to your digital life, you should avoid third-party services and instead store your dotfiles on your own private remote server. Luckily, there really isn’t much to install or manage to host your Yadm repo.
There really is no server to install; you simply install Git (if it isn’t already installed) and initialize your repo. Git isn’t a daemon, so once you have the CLI tool installed, you either execute it on the server, where it manages the git repo directories, or run the CLI on your local computer to manage the Git working directories.
On your remote node that will act as the centralized host you want to add a dedicated Git user.
adduser git -m
You want to make sure this user has a home directory, as this will be where the Git repositories reside. You can store the repositories in any directory, but since this user will already own the home directory, we won’t need to set any permissions.
Next, you switch to the newly created git user, and initialize the bare repo in your home directory.
[git@mygithost ~]$ git init --bare dotfiles.git
You should see a directory named dotfiles.git. Conventionally, repositories initialized with the --bare flag end in .git.
So why do we create a “bare” repo instead of a regular repo? If you’re familiar with using a Git service like GitHub you can create a Git repo through github.com before or after you git init
your local project so you can push it to the remote git repo.
When you run git init, it creates a working directory, intended to be the place where your file changes happen. That makes sense on your local dev machine. However, on a remote server that exists only to store your repo, where no person or service will be changing files, there is no need to take up space storing a working directory.
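The difference is easy to see side by side:

```shell
cd "$(mktemp -d)"

git init --bare dotfiles.git   # server side: repo internals at the top level, no working tree
git init myproject             # local side: a working tree plus a hidden .git directory

ls dotfiles.git                # HEAD, config, objects/, refs/, ...
ls -A myproject                # just .git
```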
You’d want to use ssh keys to log in to the remote git server as the git user. The way to do this varies depending on the VPS or cloud service provider you use. Some general-purpose guides:
The goal is to be able to git remote add origin mygithost:dotfiles.git and push/pull without being bothered for passwords. Again, you have multiple options; one is to use ssh_config:
Host mygithost
HostName 2.34.33.4
User git
Port 22
IdentityFile ~/.ssh/mygitkey.pem
As mentioned, Yadm is largely a wrapper for Git, so most of the commands are the same, just prepended with yadm. So to initialize the working directory of yadm, you would execute:
[localuser@dev-computer ~]$ yadm init
Notice that we initialize in our home directory, because this is the root of all our config files. After that, you add the files and directories you want to version control.
For example:
yadm add .spacemacs
yadm add .aws
yadm add myReadMe.org # I like to keep a file of config specific notes
After you’ve added the files you want to manage, you commit and push similarly to using pure Git.
yadm commit -m "added files"
yadm push -u origin master
Some configuration files are sensitive: they either contain credentials or are themselves private keys. Yadm is nice enough to offer a simple way to manage encrypted files without much manual work. You simply need either OpenSSL or GPG installed, then edit the yadm encrypt file located in your local .config/ directory.
Edit a file at $HOME/.config/yadm/encrypt
(create if it doesn’t exist) to include the files and/or directories you want to encrypt. My file consists of:
.ssh/*
.aws/credentials
I have every file inside my .ssh/ directory encrypted, along with the one .aws/credentials file containing sensitive credentials. After the file is saved, you tell Yadm to encrypt the files, then add the list you just created, along with the generated encrypted archive, to version control.
yadm encrypt
<ENTER PASSWORD>
yadm add .config/yadm/encrypt
yadm add .local/share/yadm/archive
Now we can commit and push this to our remote repo. On another system that you want to sync your config files with, you run yadm pull and yadm decrypt.
We now have a system that accomplishes our two main goals: our important configuration files have versioning, and we only need to maintain one central repository of configuration files. But what if we have different systems? Say my desktop runs Linux, but I have a MacBook running OS X, which requires slight configuration differences. One answer is to make use of yadm’s alternate files feature, which lets us provide different versions of a file depending on specific conditions.
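As an illustration (filenames here are examples): yadm picks the variant whose suffix matches the current system’s attributes and links it into place under the plain filename.

```shell
# Two OS-specific variants of the same file, tracked side by side:
#   ~/.zshrc##os.Linux
#   ~/.zshrc##os.Darwin
yadm add '.zshrc##os.Linux' '.zshrc##os.Darwin'

# On a Linux box, yadm links ~/.zshrc to ~/.zshrc##os.Linux
yadm alt    # (re)create the links for the current system
```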
Yadm also supports bootstrapping: the ability to create and run scripts that do any extra configuration beyond the files you’re syncing. As an ideal use case, consider the extra libraries and software I have installed to work with Spacemacs, such as the yaml-language-server, which I installed using NPM, and certain Python tools used by flycheck. I would rather not install all of these manually on each new system. You can create an executable script that gets run when you execute yadm bootstrap to install and configure any extra software.
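A minimal sketch of such a script, assuming the package names here (yaml-language-server via NPM, flake8 as a Python checker for flycheck) match your own setup:

```shell
#!/bin/sh
# Hypothetical ~/.config/yadm/bootstrap sketch.
# Make it executable with: chmod +x ~/.config/yadm/bootstrap
set -e

# Install the YAML language server for Spacemacs, if npm is present
if command -v npm >/dev/null 2>&1; then
    npm install -g yaml-language-server
fi

# Install a Python checker that flycheck can use, if pip3 is present
if command -v pip3 >/dev/null 2>&1; then
    pip3 install --user flake8
fi
```

The `command -v` guards let the same script run on systems where some tooling isn't installed yet.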
Ansible is an Infrastructure as Code (IaC) tool used to provision and manage resources. The benefit of IaC becomes evident when you realize it allows you to write a single source of truth in the form of configuration files that you can reuse. Configuration files you can put under version control and allow for collaboration across teams.
Back in the old days, system administrators usually kept a collection of bash scripts to repeat typical system configuration changes when deploying new nodes. This wasn’t easy to maintain, and bash can be difficult to read and comes with weird syntax quirks.
Ansible configuration files are largely expressed in YAML format, which is a common syntax when dealing with configuration for modern admin tools and frameworks.
I’m going to start with the concept of a Playbook. When you’re working with Ansible, you start with a Playbook, which contains plays that execute in order, from top to bottom, to accomplish the Playbook’s goals. Each play includes tasks that likewise run in order from top to bottom.
To better explain these core concepts, I’ll walk through a playbook I am developing to bootstrap a Linux virtual machine.
# Initial server setup
---
- hosts: testhost            # Host which is defined in the hosts.yaml file
  vars:                      # Used to define variables
    tmzone: America/Chicago  # Timezone to be used in the bootstrap role
    sudo_timeout: 20         # Used in the bootstrap role
    # Variable using a literal block scalar
    vimrc: |
      set mouse-=a
  roles:
    - bootstrap
    - {role: jnv.unattended-upgrades, ansible_become: yes}
After the variables are defined, we get to the meat of the Playbook. We mentioned that Playbooks include plays that perform tasks. For our particular Playbook, we’re utilizing Roles, which package up tasks along with contained variables, files, and handlers. This makes tasks, and consequently plays, portable; they function almost as modules. The first role is called bootstrap, and it’s the core of the project. It follows a basic Role directory structure and is the code that does the heavy lifting. This code is in my ansible-bootstrap GitHub repo that I will be hacking away at over the next few weeks, or months…
The next role is a little different because it’s a third-party package that handles the specific function of configuring unattended-upgrades on your remote host. There are different ways to install this external role, but one method is to use Ansible Galaxy. Another important note: the role sits in brackets, redefining a role that is already listed as a child of the roles dictionary. This is because I like to run my playbook as an unprivileged user and become root only when needed. In this case, all of the tasks within the unattended-upgrades role require root permissions, so it’s encapsulated in brackets to apply ansible_become only to that specific role.
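Installing that external role via Ansible Galaxy is a one-liner (assuming the role is published under this name on Galaxy):

```shell
# Fetch the third-party role into your local roles path
ansible-galaxy install jnv.unattended-upgrades
```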
The next file, host.yml, is an important configuration file that declares the systems we intend to run our playbook against.
all:
  vars:
    ansible_python_interpreter: /usr/bin/python3
    ansible_become_method: sudo
  children:
    aws_ec2:
      hosts:
        testhost:
          user: user
          cfg_static_network: false
This is a fairly straightforward file that defines a few variables relating to host connections: setting Python 3 as the interpreter on the hosts and specifying the use of sudo for privilege escalation.
testhost is not only the name of our remote host in Ansible; it’s also, inconspicuously, the name of the host in our ~/.ssh/config file for OpenSSH client configuration. For example, this section is in my config file:
Host testhost
    HostName myhost.net
    User user
    Port 22
    IdentityFile ~/.ssh/myHost.pem
Note that the Host attribute is exactly the same name as the YAML key directly under the hosts dictionary. This is the simplest method for maintaining connection details for our remote hosts: just use the built-in functionality of OpenSSH, since we’re already using its client for remote connections.
Next, we just need to run the playbook, assuming both roles, bootstrap and jnv.unattended-upgrades, are installed.
[~/ansible-bootstrap]$ ansible-playbook debian.yml
If the connection is established, you should see Ansible going through our plays and making changes on your host(s).
This was a short dive into some of Ansible’s basic functionality. The GitHub repo will have the bulk of the code, and over time I’ll go over specific segments.
Of course, for future reading:
AWS S3 is one of Amazon’s oldest web services. It’s pretty straightforward, hence the name: simple object storage. It allows you to store arbitrary files, including all of the static files used in our website.
CloudFront is Amazon’s global content delivery network (CDN). The main reason we’re using CloudFront in conjunction with S3 is that it allows us to use the free TLS certificate provided by Amazon for HTTPS support.
This isn’t exactly the same as AWS Lambda; instead, it’s a feature of CloudFront that leverages Lambda. We’re going to use it to extend the standard functionality of CloudFront to mimic a traditional web server for static site hosting. The feature we need to mimic from traditional web servers like Nginx is the ability to set a default index file, so that when we view example.com/folder1/ it requests example.com/folder1/index.html.
Amazon’s DNS will be used to route traffic to our CloudFront distribution.
There are several ways to set up S3 with CloudFront for static site hosting. Using S3 by itself for static file hosting can be a bit peculiar. Using four separate services to run a simple static site may seem like overkill, but as you’ll see it’s not much configuration and all four services are designed to be used together.
Create a bucket with the same name as your site’s domain. If your domain name is example.net, then the bucket name should be example.net; if it’s blog.example.net, then the bucket should be blog.example.net.
Use the default settings, including blocking all public access. Our S3 bucket will NOT be public; it will only allow access via CloudFront. We will enable static website hosting and configure our index document.
That’s all we need to do for S3, for now.
Create a public hosted zone for your domain name, if you haven’t already.
We’ll come back to Route 53 when it’s time to point our domain to our CloudFront distribution.
CloudFront will be our public-facing service. You need to create a distribution and an Origin Access Identity (OAI) for accessing S3. Most of the settings can be left at their defaults.
Pick the S3 bucket we created. Once you select the S3 bucket, new options will appear regarding the OAI.
Since our S3 bucket blocks all public access, we need to tell S3 to allow our CloudFront distribution to access the bucket. Create a new OAI with a name that lets you easily identify what it does, and tell CloudFront to update the S3 bucket policy for us so we don’t have to do it manually.
Enter the domain(s) you plan to use based on the bucket name. If the bucket name is example.net:
Click on Request Certificate; this will open a new tab in your browser where you will add domain names to the certificate. Use the CNAMEs you entered previously.
Choose DNS Validation since Route 53 is hosting our domain. Create the request.
Since Route 53 is hosting our domain, we can simply hit Create record in Route 53 for both domains, and it will create the appropriate records to validate our domain so Amazon can issue us a certificate.
Head back to the page you were on when creating the CloudFront distribution. Refresh the certificate options, and select our newly created certificate.
Set Default root object to index.html
.
Click on Create distribution, it may take a few minutes to propagate. Make note of the random Domain name that CloudFront generates for the distribution. We will need to know this to set up the DNS records.
Create (or edit) the root A name with a Simple routing policy. Select Alias to CloudFront distribution and pick the CloudFront distribution we just created.
You may or may not need to create a CNAME record so the site can also be accessed via www.example.net.
Lastly, we should set up Lambda@Edge to allow us to use our index.html files. Not all Regions support Lambda@Edge triggers for CloudFront origins, so keep that in mind if you can’t add a trigger in the next step; us-east-1 is an accepted Region.
Create a Function from scratch using the Node.js 12.x runtime (it may work with newer versions, but I haven’t tested them). Make sure Create a new role with basic Lambda permissions is selected for the Execution role.
Hit Create Function and move to the next page.
This is the main Lambda page. Hit Add Trigger. Configure the trigger by selecting CloudFront.
Click on Deploy to Lambda@Edge and a popup will appear.
Select the Distribution ID of the CloudFront distribution we created for our site. Leave Cache behavior as the wildcard, and CloudFront event as Origin request. Origin request has our Lambda function execute before CloudFront forwards the request to the origin, which is S3 in this particular case. Click Deploy.
The following code rewrites requests for example.net/folder1/ to example.net/folder1/index.html. It should be copied in to replace any default source code for the function.
The permissions should have been created for us by default. If not, the policy is as follows:
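For reference, the basic Lambda execution policy looks like this (wildcards shown for brevity; in practice you would scope the Resource ARN to your own log group):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:*:*:*"
    }
  ]
}
```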
This gives the function the ability to write logs to a CloudWatch log group.
With everything set up, you should be able to push your Jekyll blog to S3 and access it as you would on a traditional web server.
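For example, with the AWS CLI installed (bucket name and build directory here are placeholders for your own):

```shell
# Build the site, then mirror the output to the S3 bucket,
# deleting remote files that no longer exist locally
jekyll build
aws s3 sync _site/ s3://example.net --delete
```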
You may want to tell CloudFront where your custom error pages are to avoid showing the standard CloudFront error pages when visitors hit an incorrect URL.
If you want a modern shell that integrates nicely with your workflow then you should consider Z Shell (Zsh). More specifically, the trifecta combination of Zsh, Oh My Zsh, and fzf.
What makes Oh My Zsh worth your time to install? The short answer is the plugins. The first plugin you should install is fzf, which is why I included it in my trifecta. It’s a good starting point to highlight the power of Zsh. It can be hard for Linux veterans who haven’t experienced Oh My Zsh to comprehend how useful their traditionally boring shell can become.
Zsh is an improved Bourne-style shell that includes some of the best features of Bash as well as KornShell and tcsh, two shells that might not get a lot of love but are remarkable in their own right. Zsh can be used as a drop-in replacement for Bash, so there’s no need to modify existing Bash scripts when switching to Zsh. Even bash -c works while using Zsh as your shell.
The Oh My Zsh framework is the star of the show, which becomes evident when you search for z shell and the Oh My Zsh website outranks Zsh’s own. It’s a free and open-source framework that helps you manage your Zsh configuration, so you won’t have to spend time tinkering with files (though you have that option) to get the full benefit of switching to Zsh.
Fzf might be one of the most beneficial CLI tools for productivity in a while. Remembering specific file names, locations, or even file types can be a pain, but this small piece of software will change your entire way of searching and opening arbitrary files. It’s the closest thing to a futuristic CLI “assistant” that exists.
With fuzzy search alone, you’ll begin to prefer the command line over messing with a GUI in many instances. The speed and precision afforded here is similar to why Vim and Emacs have cult followings. Removing the mouse from most input can drastically increase your speed. Your hands are already at the keyboard, so why move one hand to fiddle with a mouse when you don’t have to?
Searching for a file when you don’t know the exact filename or location can be a hassle. But by executing find * -type f | fzf > selected (which you can wrap in an alias), you get an interactive, near real-time file search that narrows as you type.
Moving into deeply nested directories is tedious. You either have to navigate there in a file explorer or type out the full path, which is case-sensitive to boot.
Instead, if you know the first character of each directory, which can sometimes be easier:
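For instance (the first form is standard Zsh path completion; the second is fzf's fuzzy completion trigger, whose default is `**`):

```shell
# Type an abbreviated path and hit Tab; Zsh expands each component:
#   cd /u/lo/b<Tab>   ->   cd /usr/local/bin

# Or trigger fzf's fuzzy directory picker:
#   cd **<Tab>
```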
Voilà, you’re in the directory, thanks to fzf predicting where you want to go. Instead of your CLI needing explicit directions, it now aids you by making educated guesses based on several factors, including previously used locations (or commands).
These same general principles can be leveraged throughout your CLI endeavors: searching for files, viewing logs, and more.
Retyping commands you use regularly is tedious and time-consuming. Of course, you can traverse your shell history by hitting the “Up” arrow repeatedly until you come across the command you’re looking for (and may even need to modify). You could also use the relatively unknown “Ctrl+R” keyboard shortcut.
There’s a much better way; simply start typing the command you want, and when you get to a good point:
Then hit the “Up” arrow: this time you’re greeted with the most likely command based on what you’ve previously entered. Again, we have a much smarter and more intuitive system. I can’t do the full feature set justice in one post, so you should check out the zsh-completions project page.
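Enabling this kind of behavior in Oh My Zsh is mostly a matter of listing plugins in your ~/.zshrc (the plugin names here are illustrative; check which ones you actually have installed):

```shell
# In ~/.zshrc: plugins Oh My Zsh should load
plugins=(git fzf history-substring-search)

# Load the framework after declaring plugins
source $ZSH/oh-my-zsh.sh
```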
Wrappers are plugins that are essentially pre-configured aliases and functions for different CLI tools/scripts, but with the benefit that you don’t have to manually manage and update them when things change.
Since I work with Docker, I will typically use a command like:
With the docker plugin, I can type:
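Since the plugin’s alias names vary between versions, take this comparison as illustrative rather than exact (the plugin’s README lists the real names):

```shell
# Without aliases: spell the whole command out
docker ps --all

# With plugin-provided aliases, the same intent is a few keystrokes
# (alias name shown is a typical example, not guaranteed)
dpsa
```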
I’ve just scratched the surface on what you can accomplish with Oh My Zsh. With that said, there are over 200 plugins you can look through. No doubt there is something for everyone.
Bitwarden is my preferred password manager and for good reason.
Bitwarden uses industry standard encryption, rather than rolling their own encryption.
Bitwarden uses AES-CBC 256-bit encryption for your Vault data, and PBKDF2 SHA-256 to derive your encryption key.
Data is always encrypted on whatever local client you’re using.
Bitwarden always encrypts and/or hashes your data on your local device before anything is sent to cloud servers for storage. Bitwarden servers are only used for storing encrypted data.
Thanks to its zero-knowledge design:
“…Bitwarden as a company cannot see your passwords, they remain encrypted end-to-end with your individual email and Master Password. We never store and cannot access your Master Password.”
This is akin to zero-knowledge proofs in some blockchains and is an important security feature when dealing with credentials stored on hardware that isn’t under your control. The entity that administers the platform (in this case Bitwarden) cannot see your secrets. This removes important attack vectors: malicious actors at Bitwarden, security breaches at Bitwarden, and Bitwarden being compelled by a third party to hand over your secrets.
This is possibly the most vital feature of Bitwarden. When working in the cloud, it is important to minimize the damage that can be caused by third parties.
Bitwarden puts its source code under a dual license: AGPL v3.0 and Bitwarden’s own license.
There’s a standalone application that runs on Windows, OS X, and Linux (via AppImage). The bulk of your time will likely be with the Android and iOS apps. There are also command-line tools in addition to a web vault.
I would say the majority of users will be just fine with the free account, only really needing to upgrade if you frequently work with teams, especially for the API and hardware key features.
The command-line tool allows you to manage your entire Vault and integrate it into your workflow and applications.
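A typical session with the bw CLI looks like this (the item name is an example):

```shell
# Log in once, then unlock the vault to obtain a session key
bw login user@example.com
export BW_SESSION="$(bw unlock --raw)"

# Fetch a credential from the vault by search term
bw get password github.com

# Pull the latest vault changes from the server
bw sync
```

Exporting BW_SESSION lets subsequent bw commands in the same shell operate on the unlocked vault without re-prompting.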
Sharing credentials will sometimes be required when working in teams. Luckily, Bitwarden has the concept of “Organizations”. Not only does this allow you to share credentials with people you add to your Organization, but you can also securely share notes and credit card details.
The API and CLI software provide you with all the tools you need to tightly integrate Bitwarden into your specific use case.
Because Bitwarden is a fully featured open-source solution, it supports on-premise installation and deployment, giving you even more control over your credential infrastructure.