<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[bencuan's devlog]]></title><description><![CDATA[Hi! I've been making some cool stuff lately, and wanted a place to write about my experiences making them in case you wanted to do something similar. For non-te]]></description><link>https://devlog.bencuan.me</link><generator>RSS for Node</generator><lastBuildDate>Tue, 14 Apr 2026 02:58:39 GMT</lastBuildDate><atom:link href="https://devlog.bencuan.me/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[TurtleNet: End of Season 1]]></title><description><![CDATA[Hi there! Hope you're doing well :)
I've spent the last few months documenting my experiences with running a server at home. I hope that I've been able to help others get started with doing the same, or at least put some inspiration out for exploring...]]></description><link>https://devlog.bencuan.me/turtlenet-end-of-season-1</link><guid isPermaLink="true">https://devlog.bencuan.me/turtlenet-end-of-season-1</guid><dc:creator><![CDATA[Ben Cuan]]></dc:creator><pubDate>Thu, 15 Jun 2023 08:38:03 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1686818166131/923fb5ef-8093-462f-a9d4-862605b4a9a6.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hi there! Hope you're doing well :)</p>
<p>I've spent the last few months documenting my experiences with running a server at home. I hope that I've been able to help others get started with doing the same, or at least put some inspiration out for exploring the topic further!</p>
<p>At this point, I think I've covered everything you'd need to get a server up and running with acceptable availability and security practices for personal use. As such, this seems like a good time to wrap things up for the season and take some time away from keyboard mashing about server configs.</p>
<p><strong>This hopefully won't be the end of the series though!</strong> There are still some topics left to be explored in Season 2:</p>
<ul>
<li><p>Home Networking: setting up routers, switches, access points, and so on</p>
</li>
<li><p>Setting up a UPS (uninterruptible power supply)</p>
</li>
<li><p>Upgrading to a rack-mounted setup</p>
</li>
<li><p>Creating a media server</p>
</li>
<li><p>Minimizing power consumption</p>
</li>
<li><p>ARM and alternative architectures</p>
</li>
</ul>
<p>If you have any suggestions for any more topics, <a target="_blank" href="https://bencuan.me/contact">let me know!</a> Also feel free to contact me if there are any issues with the current content- as mentioned before, I intend for this series to be living documentation that will get gradually upgraded over time.</p>
<p>Getting back to this will probably take a while; I'll need to spend some more time upgrading TurtleNet itself before I will have the ability and expertise to actually document the things I want to do with it. Besides, there are plenty of other fun things I want to work on now that the TurtleNet series is (mostly) in working order!</p>
<p><strong>If you want to be emailed when new posts come out, please consider subscribing!</strong> You can also subscribe to my regular blog (<a target="_blank" href="http://blog.bencuan.me">blog.bencuan.me</a>) for non-technical content which will be coming out very soon.</p>
<p>Happy homelabbing!</p>
]]></content:encoded></item><item><title><![CDATA[TurtleNet 7: Backups]]></title><description><![CDATA[Introduction
You should always keep backups of your important data in the case of catastrophic hardware loss or corruption! Think about this like a good insurance plan: when accidents inevitably happen, you want to be able to recover what you lost wi...]]></description><link>https://devlog.bencuan.me/turtlenet-7-backups</link><guid isPermaLink="true">https://devlog.bencuan.me/turtlenet-7-backups</guid><category><![CDATA[Homelab]]></category><category><![CDATA[server]]></category><category><![CDATA[self-hosted]]></category><category><![CDATA[Backup]]></category><dc:creator><![CDATA[Ben Cuan]]></dc:creator><pubDate>Thu, 15 Jun 2023 08:12:59 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1686816585402/77c4cbb0-c324-4163-9f42-ca89aa9472ea.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-introduction">Introduction</h2>
<p>You should always keep backups of your important data in the case of catastrophic hardware loss or corruption! Think about this like a good insurance plan: when accidents inevitably happen, you want to be able to recover what you lost with as little hassle as possible.</p>
<h3 id="heading-the-3-2-1-rule">The 3-2-1 Rule</h3>
<p>There's a well-known rule of thumb that sysadmins usually try to follow:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1686816608069/b0213ddc-0a38-4594-a7c1-31530c44c7bd.png" alt="Source: MSP360" class="image--center mx-auto" /></p>
<p>(Source: MSP360)</p>
<p>As it stands now, let's evaluate how well our homelab stacks up to this rule:</p>
<ul>
<li><p><strong>3 copies:</strong> This is probably somewhat satisfied if you have a NAS configured with anything more than RAID 0. However, there is only 1 copy of your VM boot data!</p>
</li>
<li><p><strong>2 locations:</strong> This needs some work.</p>
</li>
<li><p><strong>1 off-site location:</strong> Hmm... So, not great. In this section, we'll explore some ways to approach an ideal backup setup, and compare the options to see which ones will work for your use case.</p>
</li>
</ul>
<h2 id="heading-git-backups">Git Backups</h2>
<p>For small, important, non-sensitive files like configurations, documentation, and custom scripts, using a standard source control solution like Git is a great option to easily back things up to the cloud.</p>
<p>As an example, I keep two monolithic GitHub repositories: one for my <a target="_blank" href="https://github.com/64bitpandas/TurtleNetPublic">TurtleNet configs</a>, and one for my Obsidian vault (basically anything I have ever written in Markdown). Keeping configs on GitHub is especially convenient, since they can be easily pulled onto all VMs.</p>
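<p>As a minimal sketch (the paths and repo name below are hypothetical stand-ins), backing up a config folder is just ordinary Git:</p>

```shell
set -e
# Put a config directory under version control (paths are examples only).
rm -rf /tmp/turtlenet-configs
mkdir -p /tmp/turtlenet-configs && cd /tmp/turtlenet-configs
git init -q
echo "example: config" > docker-compose.yml   # stand-in for a real config file
git add .
git -c user.name="you" -c user.email="you@example.com" commit -qm "back up configs"
# Then add a private remote and push, e.g.:
#   git remote add origin git@github.com:you/turtlenet-configs.git
#   git push -u origin main
git rev-list --count HEAD   # → 1
```

<p>On each VM, a plain <code>git pull</code> then brings the configs up to date.</p>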
<p>The main drawbacks of using GitHub or another cloud provider for Git are twofold:</p>
<ol>
<li><p>Storage limitations: Git is not intended for use with large files (or a large quantity of files). Although Git LFS exists, providers like GitHub often charge a decent amount for it. Additionally, storing binaries in Git is not ideal.</p>
</li>
<li><p>Security: You should never store secrets and passwords on any Git repo, even if it's private! This means that you have to be careful with what data you plan on storing in a repo.</p>
</li>
</ol>
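<p>One way to enforce the second point is a <code>.gitignore</code> that keeps secret files out of the repo entirely (the filenames below are hypothetical examples):</p>

```shell
set -e
# Demo: ignored files can never be staged by accident.
rm -rf /tmp/secrets-demo && mkdir -p /tmp/secrets-demo && cd /tmp/secrets-demo
git init -q
printf '%s\n' '.env' '*.key' 'secrets/' > .gitignore
echo "API_KEY=dont-commit-me" > .env   # a secret that must stay local
git add -A
git status --porcelain   # only .gitignore is staged; .env never appears
```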
<h2 id="heading-software-solutions">Software Solutions</h2>
<h3 id="heading-syncthing">Syncthing</h3>
<p>Syncthing is a peer-to-peer file sync application that allows you to share folders between multiple devices. I personally use Syncthing to ensure all my documents are available and up to date on my laptop, phone, and NAS.</p>
<p>Setting up Syncthing on a device with a GUI is very easy- simply download the latest version <a target="_blank" href="https://syncthing.net/downloads/">here</a> and follow the <a target="_blank" href="https://docs.syncthing.net/intro/getting-started.html">setup instructions</a> on each device!</p>
<p>Setting up Syncthing on a VM can be done in the same way as any Docker service. Here's a sample <code>docker-compose.yml</code>:</p>
<pre><code class="lang-yml"><span class="hljs-attr">version:</span> <span class="hljs-string">'3'</span>

<span class="hljs-attr">services:</span>
  <span class="hljs-attr">syncthing:</span>
    <span class="hljs-attr">image:</span> <span class="hljs-string">linuxserver/syncthing</span>
    <span class="hljs-attr">container_name:</span> <span class="hljs-string">syncthing</span>
    <span class="hljs-attr">environment:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">PUID=1000</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">PGID=1000</span>
    <span class="hljs-attr">volumes:</span>
      <span class="hljs-comment"># - /syncthing_share_folder1:/data1</span>
      <span class="hljs-comment"># - /syncthing_share_folder2:/data2</span>
    <span class="hljs-attr">ports:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-number">8384</span><span class="hljs-string">:8384</span>
      <span class="hljs-bullet">-</span> <span class="hljs-number">22000</span><span class="hljs-string">:22000</span>
      <span class="hljs-bullet">-</span> <span class="hljs-number">21027</span><span class="hljs-string">:21027/udp</span>
    <span class="hljs-attr">restart:</span> <span class="hljs-string">unless-stopped</span>
    <span class="hljs-attr">networks:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">proxy</span>
    <span class="hljs-attr">labels:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"traefik.enable=true"</span>
      <span class="hljs-comment">## HTTP Routers</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"traefik.http.routers.syncthing-rtr.entrypoints=https"</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"traefik.http.routers.syncthing-rtr.rule=Host(`sync.t.bencuan.me`)"</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"traefik.http.routers.syncthing-rtr.tls=true"</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"traefik.http.routers.syncthing-rtr.service=syncthing-svc"</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"traefik.http.services.syncthing-svc.loadbalancer.server.port=8384"</span>

<span class="hljs-attr">networks:</span>
  <span class="hljs-attr">proxy:</span>
    <span class="hljs-attr">external:</span> <span class="hljs-literal">true</span>
</code></pre>
<p>In order to get it to work properly, you'll need to add the folders you want to share as volumes. These can be named anything on both the host and container ends- just remember that when adding a folder in the Syncthing UI, use its path inside the container, not on the host.</p>
<p>The web UI is accessible via <code>server_ip:8384</code>.</p>
<h3 id="heading-duplicati">Duplicati</h3>
<p>Duplicati is an open source, self-hosted backup service that can be configured to back up files to another device, your NAS, or even Google Drive.</p>
<p>Here's a sample docker-compose:</p>
<pre><code class="lang-yml"><span class="hljs-attr">version:</span> <span class="hljs-string">"3"</span>
<span class="hljs-attr">services:</span>
  <span class="hljs-attr">duplicati:</span>
    <span class="hljs-attr">image:</span> <span class="hljs-string">lscr.io/linuxserver/duplicati:latest</span>
    <span class="hljs-attr">container_name:</span> <span class="hljs-string">duplicati</span>
    <span class="hljs-attr">environment:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">PUID=1000</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">PGID=1000</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">TZ=America/Los_Angeles</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">CLI_ARGS=</span> <span class="hljs-comment">#optional</span>
    <span class="hljs-attr">volumes:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">./config:/config</span>
      <span class="hljs-comment"># More volumes here</span>
    <span class="hljs-attr">ports:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-number">8200</span><span class="hljs-string">:8200</span>
    <span class="hljs-attr">restart:</span> <span class="hljs-string">unless-stopped</span>
    <span class="hljs-attr">networks:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">proxy</span>
    <span class="hljs-attr">labels:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"traefik.enable=true"</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"traefik.http.routers.duplicati.entrypoints=http"</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"traefik.http.routers.duplicati.rule=Host(`duplicati.t.bencuan.me`)"</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"traefik.http.middlewares.duplicati-https-redirect.redirectscheme.scheme=https"</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"traefik.http.routers.duplicati.middlewares=duplicati-https-redirect"</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"traefik.http.routers.duplicati-secure.entrypoints=https"</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"traefik.http.routers.duplicati-secure.rule=Host(`duplicati.t.bencuan.me`)"</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"traefik.http.routers.duplicati-secure.tls=true"</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"traefik.http.routers.duplicati-secure.service=duplicati"</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"traefik.http.services.duplicati.loadbalancer.server.port=8200"</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"traefik.docker.network=proxy"</span>

<span class="hljs-attr">networks:</span>
  <span class="hljs-attr">proxy:</span>
    <span class="hljs-attr">external:</span> <span class="hljs-literal">true</span>
</code></pre>
<p>Like Syncthing, you'll need to add external folders from the host as volumes (remember, <code>host_folder:container_folder</code>). These can be named whatever you want.</p>
<h3 id="heading-rsyncrclone">rsync/rclone</h3>
<p>If you prefer to keep it CLI-only, rsync and rclone are simple and work great. I don't personally use them, but I'm sure you can find resources online like <a target="_blank" href="https://www.digitalocean.com/community/tutorials/how-to-use-rsync-to-sync-local-and-remote-directories">this one</a> for your use case.</p>
<h2 id="heading-cloud-storage-providers">Cloud Storage Providers</h2>
<p>Unless you physically own multiple servers in multiple locations, cloud storage providers are probably the most convenient way to get off-premise backups of your data. There are plenty of providers, so choose the one that works best for you and your wallet! Here's some information about some alternatives I considered:</p>
<h3 id="heading-google-drive">Google Drive</h3>
<p>Google Drive actually has <a target="_blank" href="https://one.google.com/about/plans">extremely reasonable pricing options</a> which start around $5/TB/month. Plus, it's fairly straightforward to send your backups to your Drive via Duplicati or another software solution.</p>
<p>However, there are some drawbacks to consider:</p>
<ul>
<li><p>Google Drive has a hidden 750GB/day upload limit, so an initial backup could take a long time to fully complete.</p>
</li>
<li><p>Upload/download speeds can be somewhat inconsistent- Drive is generally not intended for such heavy usage by a single user.</p>
</li>
<li><p>Say what you want about Google, but I personally wouldn't trust them with my sensitive data. As long as everything's encrypted before it's uploaded, though, it shouldn't be too much of a problem.</p>
</li>
</ul>
<h3 id="heading-rsyncnethttprsyncnet"><a target="_blank" href="http://rsync.net">rsync.net</a></h3>
<p><a target="_blank" href="http://rsync.net">rsync.net</a> offers cost-effective, simple access to cloud storage. At $15/TB/month with no usage costs, it's definitely pricier than Google Drive but is faster, more secure, and more convenient (you can mount your network drive in the same way you can mount any other NAS).</p>
<p>They also offer a significant education discount upon request, so if you're a student this could be a good option.</p>
<h3 id="heading-aws-glacier">AWS Glacier</h3>
<p>At around $1/TB/month, AWS Glacier Deep Archive is probably the cheapest cloud storage around-- that is, until you need to retrieve your data.</p>
<p>According to the <a target="_blank" href="https://aws.amazon.com/s3/pricing">pricing chart</a>, transferring data out of AWS from us-east-1 costs $0.09 per GB- which is a staggering $90/TB! But if you just need to back up a few TB of data and are willing to pay a (pretty reasonable) couple hundred bucks to recover your data in an absolute-emergency scenario, this could be a good solution to have extremely cheap off-premise storage.</p>
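<p>To make that math concrete, here's the back-of-envelope calculation (using the $0.09/GB egress rate quoted above and 1TB ≈ 1000GB; actual AWS pricing varies by region and retrieval tier):</p>

```shell
# Estimated egress cost to restore a backup from AWS at $0.09 per GB.
TB=3
awk -v tb="$TB" 'BEGIN { printf "restoring %d TB costs about $%.0f\n", tb, tb * 1000 * 0.09 }'
# → restoring 3 TB costs about $270
```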
<h3 id="heading-friends-and-family">Friends and Family</h3>
<p>If you know someone with a NAS, consider asking them to host your backups! You can even build and gift them a NAS in order to gain access to another server offsite. Make sure that you trust them with your personal data, though, since they'll have full hardware access to the drives.</p>
]]></content:encoded></item><item><title><![CDATA[TurtleNet 6: Network Attached Storage (NAS)]]></title><description><![CDATA[What is a NAS and why should I care?
Whether it be photos, videos, music, or important documents, you most likely have a bunch of files scattered around your computer. Maybe you have an external hard drive if you ran out of space, and maybe you also ...]]></description><link>https://devlog.bencuan.me/6-nas</link><guid isPermaLink="true">https://devlog.bencuan.me/6-nas</guid><category><![CDATA[Homelab]]></category><category><![CDATA[server]]></category><category><![CDATA[nas]]></category><category><![CDATA[self-hosted]]></category><dc:creator><![CDATA[Ben Cuan]]></dc:creator><pubDate>Thu, 15 Jun 2023 04:29:42 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1686802962686/5869962b-26f0-4bbc-886b-cff92239b87c.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-what-is-a-nas-and-why-should-i-care">What is a NAS and why should I care?</h2>
<p>Whether it be photos, videos, music, or important documents, you most likely have a bunch of files scattered around your computer. Maybe you have an external hard drive if you ran out of space, and maybe you also back up your most important files on a cloud storage provider like Google Drive.</p>
<p>In an ideal world, we wouldn't have to worry about backups or capacity: we'd just have an infinite amount of indestructible cloud storage at our disposal! But realistically, this would cost a fortune (in fact, this exact need funnels billions of dollars into Google, Amazon, Microsoft, etc. every day)...</p>
<p>Luckily, we have a server now so we can just become our own cloud storage provider at a fraction of the price! As long as you can buy some hard drives and pay the electricity cost, you'll have as much storage as you'd like, available anywhere you can get an internet connection. Hosting this service is called <strong>Network Attached Storage</strong>, often referred to as NAS (or "a NAS", if talking about the hardware itself). "NAS" is pronounced in the same manner as the famous rapper.</p>
<p>Hosting a NAS has many benefits over cloud storage or shoving an old hard drive enclosure into the back of your closet:</p>
<ul>
<li><p><strong>Price:</strong> If you need anything more than a few hundred GB, cloud storage can get prohibitively expensive: for example, 5TB of Google Drive storage costs $250 per year- enough to buy a 14TB hard drive!</p>
</li>
<li><p><strong>Speed:</strong> Since you'll be hosting your storage inside your home, network speed and latency won't be a concern. NAS performance is almost always bottlenecked by your drives' read/write speeds.</p>
</li>
<li><p><strong>Security:</strong> Another benefit of self-hosting your storage is that you don't have to worry about uploading your sensitive personal data onto someone else's server (especially if that someone is notorious for selling your data... cough cough Google)</p>
</li>
<li><p><strong>Availability:</strong> You can mount your NAS and access it just as if it were physically attached to your computer at all times. This means you effectively have server-grade storage capacity on your phone, Raspberry Pi, or thin ultrabook!</p>
</li>
<li><p><strong>Sharing:</strong> If any of your friends or family want to join in, you can easily share your files with them, or give them their own private storage pool.</p>
</li>
</ul>
<h2 id="heading-an-intro-to-raid-and-zfs">An Intro to RAID and ZFS</h2>
<p>Unfortunately, creating a NAS is not quite as easy as plugging a whole bunch of disks into your server. Let's say you have five 4TB drives: how would you combine all of them to get 20TB of storage space? And what happens if one of those drives fails?</p>
<p>These are huge problems for datacenters, which often have thousands of drives that need to be continuously monitored and replaced before failures create unrecoverable data loss. There have been lots of solutions proposed, which mostly fall under the RAID (Redundant Array of Inexpensive Disks) umbrella.</p>
<p>Basically, the idea is that the chance of failure increases proportionally with the number of disks we have in our NAS- but since we have multiple drives, we can store multiple copies of our data such that if any one hard drive fails, we can look up the redundant copies stored in other drives to recover that data.</p>
<p>Different RAID configurations are designed to provide options for the tradeoff between storage and redundancy: the more backups we store, the less likely we are to lose our data but at the cost of taking away storage space from the data pool itself.</p>
<h3 id="heading-raid-basics">RAID Basics</h3>
<p>RAID configurations are specified with a number. Check out the <a target="_blank" href="https://en.wikipedia.org/wiki/Standard_RAID_levels">Wikipedia page</a> for a full list, but the most commonly used configurations are:</p>
<ul>
<li><p>RAID 0: Your files are spread across all drives ("striping"), and no backups are created. This means that the capacity and throughput of your drives are maximized, but if any one drive fails you will lose all of your data. This is <strong>very dangerous</strong> and should not be considered for any serious NAS setup (unless you know exactly what you're doing and have backups)!</p>
</li>
<li><p>RAID 1: Your files are "mirrored" to all drives, such that each drive holds exactly the same data. This means you can lose all but one drive, but the capacity of your entire pool is limited by your single-drive capacity.</p>
</li>
<li><p>RAID 10: A stripe of mirrors. Drives are grouped into RAID 1 mirrors, and data is striped across those mirrors, combining RAID 1's redundancy with RAID 0's speed. Requires at least 4 drives.</p>
</li>
<li><p>RAID 5: Data is striped across at least 3 disks along with parity information, so any one disk can be lost without causing data loss. You give up one drive's worth of capacity to parity.</p>
</li>
</ul>
<h3 id="heading-raid-vs-zfs">RAID vs ZFS</h3>
<p>ZFS is a filesystem (like EXT4 or NTFS) that is popular for use with larger storage pools (like your NAS!) due to its support for RAID. In fact, ZFS comes with its own implementation known as RAID-Z. You'll see ZFS configurations used interchangeably with standard RAID configurations in online forums, so it's good to know how they correspond:</p>
<ul>
<li><p>Striped (RAID-Z0) is functionally equivalent to RAID 0 (no redundancy).</p>
</li>
<li><p>A single mirrored vdev is functionally equivalent to RAID 1: every drive in the vdev holds a full copy of your data. Striping multiple mirrored vdevs together gives you the equivalent of RAID 10.</p>
</li>
<li><p>RAID-Z or RAID-Z1 is functionally equivalent to RAID 5.</p>
</li>
<li><p>RAID-Z2 is functionally equivalent to RAID 6 and can survive any two disk failures.</p>
</li>
</ul>
<p>Mirrored pools are most popular for small (2-3 drive) setups, and RAID-Z2 is most popular for larger pools.</p>
<h3 id="heading-raid-is-not-backup">RAID Is Not Backup</h3>
<p>You'll hear these four words if you talk to literally any sysadmin around, and for good reason. Although a proper RAID setup will protect against hard drive failure, you're still storing all of your data in one physical location! If your server ever gets stolen/destroyed or goes offline for some other reason, you won't be able to access any of your data.</p>
<p>We'll cover proper backup solutions in the next part.</p>
<h2 id="heading-so-what-disks-should-i-get">So what disks should I get???</h2>
<h4 id="heading-hdd-vs-ssd">HDD vs SSD</h4>
<p>This is largely a matter of cost: although SSDs are better than HDDs in most aspects (speed, reliability/lifespan, power consumption...), they get prohibitively expensive once you reach the 10+ TB range. As such, almost all homelabbers primarily use hard drives for their NAS setups.</p>
<p>However, it's also possible to run multiple pools- a SSD-based "fast" pool for frequently used storage, and a larger HDD-based "slow" pool for archival storage.</p>
<p>While it is theoretically possible to create some kind of caching setup to have one accelerated pool of both HDDs and SSDs, this is very uncommon in practice and will take some significant messing around to get working. As such, I would not recommend this approach.</p>
<h4 id="heading-number-of-drives">Number of Drives</h4>
<p>Since using anything other than RAID 0 will eat away at your raw disk capacity, you'll need to buy more raw storage than the usable capacity you're aiming for.</p>
<p>Calculating this manually gets really tricky, but luckily there are plenty of resources online to figure out the optimal configuration for your desired pool capacity and level of redundancy.</p>
<p>Here are a couple of calculators to play around with and bookmark for future use. They'll ask for some parameters we have yet to discuss, and you probably don't know exactly what software you're using just yet- so keep them handy on the side for now!</p>
<ul>
<li><p><a target="_blank" href="https://jro.io/capacity/">ZFS calculator</a></p>
</li>
<li><p><a target="_blank" href="https://www.raid-calculator.com/">RAID calculator</a></p>
</li>
</ul>
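<p>As a very rough sanity check before reaching for the calculators (this ignores ZFS metadata and padding overhead, which the calculators model properly): usable capacity is approximately (number of drives minus parity drives) times drive size.</p>

```shell
# Rough usable-capacity estimate: RAID-Z1 gives up 1 drive to parity, RAID-Z2 gives up 2.
DRIVES=6; SIZE_TB=4; PARITY=2   # e.g. six 4TB drives in RAID-Z2
awk -v n="$DRIVES" -v s="$SIZE_TB" -v p="$PARITY" \
  'BEGIN { printf "~%d TB usable out of %d TB raw\n", (n - p) * s, n * s }'
# → ~16 TB usable out of 24 TB raw
```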
<h4 id="heading-smr-vs-cmr">SMR vs CMR</h4>
<p>If you dig around the spec sheets for the hard drive you're thinking of buying, you'll come across either the term "CMR" or "SMR".</p>
<p>These stand for "Conventional Magnetic Recording" and "Shingled Magnetic Recording" respectively. Like the name suggests, the differentiating factor is how data is stored on the disk: data bands on an SMR disk overlap like shingles on a roof, while CMR data does not overlap.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1686803028588/50629748-3776-4a73-997f-564797bb7a65.png" alt class="image--center mx-auto" /></p>
<p>For example, here's the Amazon listing for the <a target="_blank" href="https://www.amazon.com/Western-Digital-Plus-Internal-Drive/dp/B0BDXSK2K7/">WD Red Plus</a>, which specifies that it is a CMR drive in the title:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1686803039866/1621125f-62ab-46ab-9168-6f1dab54985d.png" alt class="image--center mx-auto" /></p>
<p>Due to their physical nature, CMR drives tend to be faster and more reliable than SMR drives, but are also more expensive (not by a whole lot though). Unless you're really strapped for cash, it's almost always worth it to spend the extra $10 or so to buy a CMR drive- so double check the listing before you buy!</p>
<p>Some recommendations for industry-standard CMR drives include:</p>
<ul>
<li><p>WD Red Plus (Not the standard WD Red- those are SMR)</p>
</li>
<li><p>Seagate IronWolf Pro</p>
</li>
<li><p>Toshiba N300</p>
</li>
</ul>
<h4 id="heading-rotation-speed">Rotation Speed</h4>
<p>Nearly all hard drives these days are either 5400RPM or 7200RPM. The main tradeoff is speed versus power consumption, noise, and longevity: 7200RPM drives read and write faster, but run hotter and louder. Additionally, 7200RPM drives take longer to spin up, which may make them less optimal if you're planning on only grabbing a few files from your NAS every now and then.</p>
<p>There are plenty of debates on the Internet about which one is better, but both are perfectly acceptable. If you don't care too much about a little bit of extra read/write performance, 5400RPM is a safe default choice.</p>
<h4 id="heading-new-vs-refurbished">New vs Refurbished</h4>
<p>Buying refurbished drives can save you a <em>lot</em> of money- but given the limited lifespans of hard drives, it's good to be wary of used drives that might be close to dying.</p>
<p>If you're not concerned about price at all, definitely just buy a new hard drive- you'll get several years of manufacturer warranty and can easily RMA the drive if it's defective or fails sooner than expected.</p>
<p>Otherwise, you can find some really nice deals on sites like ServerPartDeals- for example, a <a target="_blank" href="https://serverpartdeals.com/products/seagate-exos-x18-st18000nm000j-18tb-7-2k-rpm-sata-6gb-s-512e-4kn-256mb-3-5-hdd">manufacturer refurbished 18TB Exos</a> is going for $175 at the time of writing- over $100 off its new price, and even cheaper than a new 10TB CMR drive. Note that you'll have to do some more extensive research about the warranties of these drives, which are usually much shorter (several months to a year at most) and come with additional terms that make them much more difficult to RMA compared to new drives.</p>
<h4 id="heading-a-note-on-shucking">A note on shucking</h4>
<p>You might have heard the term "shucking" before- it refers to taking apart external hard drive enclosures to get at the drive inside, since enclosures are often sold for far cheaper than the equivalent standalone drive.</p>
<p>If it's holiday season and you spot some WD EasyBooks being sold for $100 off, go for it! Shucking is easier than you might think and very commonly practiced. It's even possible to maintain your warranty if you are extra careful about it (though this is a legal gray area and definitely not guaranteed).</p>
<p><a target="_blank" href="https://www.ifixit.com/Guide/How+to+Shuck+a+WD+Elements+External+Hard+Drive/137646">Here's a guide from iFixIt</a> in case you don't already have a guide handy from Reddit or another forum.</p>
<h2 id="heading-choosing-nas-software">Choosing NAS Software</h2>
<p>Now that you have some idea of the hardware that you'll be using, it's time to pick out NAS software!</p>
<h3 id="heading-truenas">TrueNAS</h3>
<p>TrueNAS is fully free and open source, and my personal choice for NAS software. It requires some additional configuration compared to the other choices, but has pretty much everything you need. I'll be demonstrating how to set up TrueNAS for the remainder of this guide.</p>
<p>TrueNAS is based on FreeBSD and uses ZFS- so while it's still UNIX-based, it won't work exactly like you might expect for a standard Linux distro.</p>
<h3 id="heading-unraid">UnRAID</h3>
<p>UnRAID is probably the most popular proprietary NAS software solution out there. You're mostly paying for a more polished experience compared to the other choices, as well as some cool features like great hotswap support, Docker container management, and caching. It's also not horribly expensive (starting at $59, or $129 if you want unlimited drives). I'd personally recommend trying out TrueNAS first, and switching over to UnRAID if you find that it has features you are willing to pay more for.</p>
<p>UnRAID uses its own filesystem, which is generally regarded as less robust compared to ZFS. However, it's perfectly acceptable (and probably still slightly overkill) for our homelab use case.</p>
<h3 id="heading-synologyxpenology">Synology/Xpenology</h3>
<p>Synology is one of the most popular NAS enclosure companies. If you are looking for a plug-and-play solution with minimal configuration, consider getting one of their enclosures. The main drawback is their price- expect to pay $400 or more for a respectable configuration with more than 2 drive bays.</p>
<p>If you're down for some mild violation of terms of service, the <a target="_blank" href="https://xpenology.org/">Xpenology project</a> allows you to self-host Synology NAS software on your own hardware. I don't have any experience doing this, but it might be worth a try if you're looking for the polish of a Synology device but without the cost of one.</p>
<h2 id="heading-setting-up-your-nas-in-a-vm">Setting up your NAS in a VM</h2>
<p>Generally, the recommended configuration for a NAS is to host it on its own dedicated machine, whether that be an integrated solution like Synology's offerings or one that you build yourself.</p>
<p>However, if you do it properly, hosting your NAS software within a VM in your server can work just as well, and save you the cost and hassle of needing multiple physical servers. This is a great starting point, and one that lends itself to an easy upgrade if/when you decide to expand your setup.</p>
<p>I'll go over how you can host your NAS in a Proxmox VM here. If you choose to run it on dedicated hardware instead, install your chosen software plus ZeroTier, then skip this section and the HDD Passthrough section.</p>
<h3 id="heading-setting-up-a-truenas-vm">Setting up a TrueNAS VM</h3>
<p>First, get the TrueNAS image at <a target="_blank" href="https://download.freenas.org/13.0/STABLE/U5.1/x64/TrueNAS-13.0-U5.1.iso">https://download.freenas.org/13.0/STABLE/U5.1/x64/TrueNAS-13.0-U5.1.iso</a> (the exact link may differ if later versions have been released). Remember to download it into the <code>/var/lib/vz/template/iso</code> folder in Proxmox via <code>curl</code> or <code>wget</code> so it shows up in the console!</p>
<p>Next, create a new VM in Proxmox (see <a target="_blank" href="https://devlog.bencuan.me/2-proxmox#heading-creating-your-first-vm">here</a> for a refresher), using the image you downloaded. TrueNAS uses RAM as a cache, so make sure to allocate enough memory. Although some forum posts suggest a "1GB of RAM per 1TB storage rule", the accuracy of such a rule is rather debatable. Going off of official specs, 8GB is the absolute minimum with 16GB recommended. If you have a large amount of storage or plan on using deduplication, you should allocate more RAM accordingly (perhaps 32GB for anything more than 50ish TB).</p>
<p>One consideration to make when allocating resources for your VM is whether you will be running media server applications like Plex or Jellyfin. If you are, you should allocate additional CPUs, memory, and storage to be able to run these applications. Otherwise, a minimal configuration like 1-2 CPU cores and 16GB boot disk storage should be enough to get you started.</p>
<p>For the juicy hardware spec details, refer to the <a target="_blank" href="https://www.truenas.com/docs/core/gettingstarted/corehardwareguide/">official documentation</a>.</p>
<h2 id="heading-hdd-passthrough">HDD Passthrough</h2>
<p>In order to work properly, your NAS software must have full control over the hard drives that will be used to create a pool-- using virtual drives as you do for any other VM will not work! There are several methods for hard drive passthrough, which you are welcome to compare and choose between for your specific use case.</p>
<h3 id="heading-method-1-hba-card">Method 1: HBA Card</h3>
<p>If your chassis has enough space to hold multiple hard drives, you can adapt them to PCIe and pass through the card directly to your NAS. The PCIe card that allows you to do this is an HBA (host bus adapter).</p>
<p>For this method, you will need to purchase the following:</p>
<ul>
<li><p>LSI SAS HBA card in IT Mode (example models include 9211-8i, 9300-16i). Each SAS port can drive up to 4 hard drives, so getting a 2-port/8i card is probably more than enough.</p>
</li>
<li><p>SAS to SATA adapter cables, if your card does not come with them</p>
</li>
</ul>
<p>You can acquire the above used via eBay starting from around $40 total at the time of writing.</p>
<p>Do <strong>not</strong> buy a direct PCIe-SATA expansion card- these are not designed for the workloads our NAS will require (i.e. prolonged read/write over all disks simultaneously).</p>
<p>RAID cards are also not necessary, since your software can perform all of the RAID calculations without additional hardware.</p>
<p>Once you've acquired your HBA card, plugged it into any available PCIe slot, and hooked up your drives to it, you can now pass it into Proxmox! You can do this by selecting your VM, going to the Hardware tab, then clicking Add -&gt; PCI Device and selecting your HBA card from the list.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1686803121088/8137328a-b472-4cfa-b558-15772873d73d.png" alt class="image--center mx-auto" /></p>
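<p>If you prefer the command line, the same passthrough can be sketched with Proxmox's <code>qm</code> tool- note that the PCI address and VM ID below are placeholders, so substitute the values from your own system:</p>
<pre><code class="lang-bash"># Find the PCI address of the HBA card (look for a SAS/SCSI controller entry)
lspci | grep -i sas

# Pass the card at (for example) address 01:00.0 through to VM 100
qm set 100 -hostpci0 01:00.0
</code></pre>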
<h3 id="heading-method-2-external-enclosure">Method 2: External Enclosure</h3>
<p>If your server can't hold enough drives in it (or you're using something like a Raspberry Pi), getting an external enclosure might be your best bet. Companies like <a target="_blank" href="https://www.u-nas.com/">U-NAS</a> offer hard drive enclosures that you can plug into your server.</p>
<p>Passing an enclosure into your VM is the same process as passing through an HBA card, except that the enclosure may be connected over USB rather than PCIe. In that case, click on Add -&gt; USB Device and select the correct option instead.</p>
<h3 id="heading-method-3-proxmox-passthrough">Method 3: Proxmox Passthrough</h3>
<p>If you really can't afford an HBA card or external enclosure, you can connect your hard drives as you normally would (onto your motherboard's SATA ports) and pass them through individually. This method is <em>not</em> recommended because it adds an additional layer between your NAS and hard drives, which makes data loss more likely to occur.</p>
<p>If you understand the implications and would still like to proceed, open up your Proxmox shell and do the following:</p>
<ol>
<li><p>Run <code>lsblk -o MODEL,SERIAL</code>. This should output a list of the model and serial numbers for all detected drives.</p>
</li>
<li><p>Run <code>ls /dev/disk/by-id</code> and cross-reference the <code>lsblk</code> output from above to identify the disks we want to pass through. For example, the serial number <code>WD-WX42AD0WV0L0</code> could correspond to the disk ID <code>ata-WDC_WD40EFAX-68JH4N1_WD-WX42AD0WV0L0</code>.</p>
</li>
<li><p>Run <code>qm set &lt;VM_ID&gt; -scsi&lt;N&gt; /dev/disk/by-id/&lt;DISK_ID&gt;</code>, where <code>VM_ID</code> is the ID of the TrueNAS VM, <code>N</code> is an integer that hasn't been assigned a disk yet (e.g. if <code>scsi1</code> exists, use <code>-scsi2</code> to add a new disk), and <code>DISK_ID</code> is taken from the output of <code>ls /dev/disk/by-id</code>.</p>
</li>
<li><p>Edit the file <code>/etc/pve/qemu-server/&lt;VM_ID&gt;.conf</code> and append <code>,serial=&lt;SERIAL_NUMBER&gt;</code> to the end of each of the new <code>scsi&lt;N&gt;</code> lines (options on these lines are comma-separated).</p>
</li>
<li><p>Repeat the above steps for any other drives you would like to add.</p>
</li>
</ol>
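<p>Put together, a passthrough session on the Proxmox shell might look roughly like this (the VM ID <code>100</code> is a placeholder, and the disk ID is the example from above):</p>
<pre><code class="lang-bash"># Identify the drives and their stable device IDs
lsblk -o NAME,MODEL,SERIAL
ls /dev/disk/by-id

# Attach one drive to the TrueNAS VM as a new SCSI device
qm set 100 -scsi2 /dev/disk/by-id/ata-WDC_WD40EFAX-68JH4N1_WD-WX42AD0WV0L0
</code></pre>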
<h2 id="heading-set-up-a-pool">Set up a Pool</h2>
<p>Now that your hard drives are detectable by TrueNAS, it's time to set up your pool!</p>
<p>First, let's make sure we can access the TrueNAS web UI. Open the Proxmox console for the VM and you should be greeted by the following prompt:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1686803130153/c4a71d69-937a-469e-b8c7-1d63db322e9e.png" alt class="image--center mx-auto" /></p>
<p>The web interface IP listed is probably your LAN address. Let's install ZeroTier to make sure you can access it anywhere- <a target="_blank" href="https://alan.norbauer.com/articles/zerotier-on-truenas/">here</a> is a good guide on how you can do this (it's a little different due to TrueNAS being BSD-based rather than Linux-based).</p>
<p>You can proceed to set up your DNS records and reverse proxy for more human-friendly access if you would like- but remember that this console should only be accessible via your internal network.</p>
<p>Once you can access the web console, navigate to Storage -&gt; Pools -&gt; Add. You should be greeted with the following screen:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1686803069278/4135a060-7a4c-4014-91bb-d41e1d0cce2c.png" alt class="image--center mx-auto" /></p>
<p>There's a lot of terminology here (what's a VDev?? How is this different from pools???). You should consult the <a target="_blank" href="https://www.truenas.com/community/threads/slideshow-explaining-vdev-zpool-zil-and-l2arc-for-noobs.7775/">FreeNAS Guide</a> for a comprehensive introduction if you would like. Here's the summary:</p>
<ul>
<li><p>A VDev is a "virtual device", or a collection of physical drives organized via software RAID. Once a VDev is created, you <em>cannot</em> add or remove drives from it!</p>
</li>
<li><p>A pool (or ZPool) is a collection of VDevs. This is what will be available when your other VMs/devices connect to the NAS to access data.</p>
</li>
<li><p>If any one VDev in a pool fails, then the entire pool fails. So make sure you configure your VDevs in a robust manner, using ZFS RAID configurations, such that you can handle drive failure.</p>
</li>
</ul>
<p>If you're unsure of what to do here, just make one VDev with all of your available drives using the suggested layout (mirror for 2 drives, RAID-Z1/Z2 for 3+ drives).</p>
<p>Once the pool has been created, TrueNAS will automatically prepare your drives and make the pool available! You should be able to see this pool within your TrueNAS instance at <code>/mnt/POOLNAME/</code>.</p>
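<p>If you'd like to double-check the result from the TrueNAS Shell, ZFS's own tooling can report the pool's layout and health (<code>POOLNAME</code> is a placeholder for whatever you named your pool):</p>
<pre><code class="lang-bash"># Show the VDev layout and any drive errors
zpool status POOLNAME

# Confirm the pool is mounted under /mnt
zfs list
</code></pre>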
<h2 id="heading-sharing">Sharing</h2>
<p>There are a variety of ways you can access your new pool from other devices. You can also restrict certain users or devices to a subfolder in your pool (known as a dataset).</p>
<h3 id="heading-permissions">Permissions</h3>
<p>First, let's set up some basic permissions to allow yourself and others to access the pool.</p>
<p>If you want to restrict a user to a subfolder, first create a new dataset. You can do this from the terminal (click on Shell from the TrueNAS sidebar):</p>
<ul>
<li><p>Make sure the folder exists: <code>mkdir -p /mnt/POOLNAME/FOLDERNAME</code></p>
</li>
<li><p>Create the dataset: <code>zfs create POOLNAME/FOLDERNAME</code></p>
</li>
</ul>
<p>Then, let's create the user from the Accounts -&gt; Users screen. You can assign whatever username and password you want- this is what the user will type in when attempting to connect. There should be no need to adjust any other settings besides reassigning the home directory to your dataset if needed.</p>
<h3 id="heading-nfs-linux">NFS (Linux)</h3>
<p>To connect from a Linux device (or another one of your VMs), you will need to set up an NFS share.</p>
<p>Go to Sharing -&gt; Unix Shares (NFS), and select the dataset you wish to share. If you want to share the entire pool, additionally go to the Advanced Options and configure the Access section as follows. This will ensure you will connect as the root user and be able to read/write.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1686803076901/93f60e1b-2a12-47ad-adb3-f8f35649dc15.png" alt class="image--center mx-auto" /></p>
<p>You can also configure the authorized networks (also in the advanced settings) to restrict all connections to come from your ZeroTier subnet if desired. If you're unsure of what to enter, refer back to the ZeroTier web console and copy over the subnet and subnet mask fields.</p>
<p>Now, on your Linux machine, mount your pool by doing the following:</p>
<ol>
<li><p>Install the NFS package: <code>sudo apt install nfs-common</code></p>
</li>
<li><p>Create a local mount point: <code>sudo mkdir -p /mnt/DIRECTORYNAMEHERE</code></p>
</li>
<li><p>Run the mount command: <code>sudo mount -t nfs ZEROTIER_IP_OF_NAS:/mnt/POOLNAME/FOLDERNAME /mnt/DIRECTORYNAMEHERE</code></p>
</li>
<li><p>Allow your regular user access to the mounted folder: <code>sudo chown USERNAME /mnt/DIRECTORYNAMEHERE</code></p>
</li>
<li><p>If you want the mount to persist on reboot, add a line to <code>/etc/fstab</code>: <code>ZEROTIER_IP_OF_NAS:/mnt/POOLNAME/FOLDERNAME /mnt/DIRECTORYNAMEHERE nfs defaults 0 0</code></p>
</li>
</ol>
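<p>If the mount fails, it can help to confirm that the NAS is actually exporting the share before debugging anything else- <code>showmount</code> (included with <code>nfs-common</code>) queries the server's export list:</p>
<pre><code class="lang-bash"># List the NFS exports the NAS offers
showmount -e ZEROTIER_IP_OF_NAS

# After a successful mount, the share should appear here
df -h /mnt/DIRECTORYNAMEHERE
</code></pre>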
<h3 id="heading-smb-mac-and-windows">SMB (Mac and Windows)</h3>
<p>To connect from a Mac or Windows machine, we can use the SMB protocol for more convenient access.</p>
<p>This should work out of the box on TrueNAS: go to Sharing -&gt; Windows Shares (SMB), then follow the steps to add a new share with the default parameters.</p>
<p>On your Windows machine, open Windows Explorer. In the address bar, type in <code>\\ZEROTIER_IP_OF_NAS\POOLNAME\FOLDERNAME</code> and you should be prompted to log in. You should use the credentials of the user you made in TrueNAS, and <em>not</em> your Windows login. Once the connection is successful, you can pin the location to Quick Access to save your NAS folder for later.</p>
<p>On your Mac machine, open Finder. Then, navigate to Go -&gt; Connect to Server... For the server address, type in <code>smb://ZEROTIER_IP_OF_NAS/POOLNAME/FOLDERNAME</code>. Use the login you created in TrueNAS. Once you're connected, your NAS should show up as a network drive on the Finder sidebar.</p>
]]></content:encoded></item><item><title><![CDATA[TurtleNet 5: Public Networking]]></title><description><![CDATA[Introduction
At this point in the series, you should have a fully functioning set of services available at your command and some knowledge on how to extend this framework to host whatever you want! That's pretty much all you need for the most basic h...]]></description><link>https://devlog.bencuan.me/5-public-networking</link><guid isPermaLink="true">https://devlog.bencuan.me/5-public-networking</guid><category><![CDATA[Homelab]]></category><category><![CDATA[server]]></category><category><![CDATA[Cloud]]></category><category><![CDATA[self-hosted]]></category><dc:creator><![CDATA[Ben Cuan]]></dc:creator><pubDate>Sat, 10 Jun 2023 17:59:31 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1686419793342/d8a78229-02ac-4caa-bb88-072584266965.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-introduction">Introduction</h2>
<p>At this point in the series, you should have a fully functioning set of services available at your command and some knowledge on how to extend this framework to host whatever you want! That's pretty much all you need for the most basic homelab setup.</p>
<p>For the rest of this series, I'll discuss a few <em>almost-but-not-quite-mandatory</em> steps to really build out your system and ensure its durability. Unlike the previous parts which were incremental, each of the following parts may be done independently, in any order, or not at all. For example, if you don't want to expose your setup to the public whatsoever but still want to set up a NAS and backup system, feel free to skip to the next part.</p>
<p>With that said, let's suppose you <em>do</em> want to host something and make it available to others, whether that's a game server, a chat service like Matrix, or some personal projects. Simply exposing your home network to the Internet via port forwarding is definitely an option, but is ill-advised from a security standpoint (everyone will now know your IP address and can send attacks directly to you!).</p>
<h2 id="heading-setting-up-an-ingress">Setting up an Ingress</h2>
<p>As an alternative to port forwarding, let's take advantage of ZeroTier, alongside the millions of engineering hours that cloud computing companies pour into hardening their security, to set up an off-premise ingress. This has many benefits:</p>
<ul>
<li><p>Public Internet users will never directly connect to your homelab: all requests will be handled via the ingress.</p>
</li>
<li><p>Cloud providers likely have much more robust network security and monitoring compared to your home network, so you can ensure nothing nefarious is happening without advanced security knowledge of your own.</p>
</li>
<li><p>You can monitor your entire server from another device that's also always on- for instance, I run Uptime Kuma on my ingress to send an email whenever it detects that my server is down.</p>
</li>
<li><p>Most cloud providers have a free tier that's more than enough to run a simple webserver/reverse proxy, so all of this can be done at no cost!</p>
</li>
</ul>
<p>Of course, you should be the one to decide how you want to set up public access- there's nothing stopping you from doing something else, like hosting a VPN to share with friends, or just going ahead with port forwarding.</p>
<h3 id="heading-provider-options">Provider Options</h3>
<p>If you want a free server, here's a list of some providers and what they offer:</p>
<ul>
<li><p><a target="_blank" href="https://cloud.google.com/free/docs/free-cloud-features#free-tier-usage-limits">Google Cloud E2-micro</a>: 2 vCPUs, 1GB memory, 30GB storage</p>
</li>
<li><p><a target="_blank" href="https://docs.oracle.com/en-us/iaas/Content/FreeTier/freetier_topic-Always_Free_Resources.htm">Oracle Cloud</a>: Choice between E2.Micro (1 vCPU, 1GB memory, 200GB storage) or ARM Ampere (4 vCPU, 24GB memory, 200GB storage) - more details <a target="_blank" href="https://levelup.gitconnected.com/a-powerful-server-from-oracle-cloud-always-free-cbc73d9fbfee">here</a></p>
</li>
<li><p>Some more providers can be found in <a target="_blank" href="https://github.com/cloudcommunity/Cloud-Free-Tier-Comparison">this list</a>, which may not be always free</p>
</li>
</ul>
<p>I personally use an Oracle Cloud E2.micro instance. If you also choose to do so, <a target="_blank" href="https://stackoverflow.com/questions/54794217/opening-port-80-on-oracle-cloud-infrastructure-compute-node">here's a guide on how to expose port 80</a> (repeat for 443 as well).</p>
<h3 id="heading-software-setup">Software Setup</h3>
<p>Regardless of the provider you choose, you're ultimately just getting another VM to play with, so your usual setup procedure will apply: install packages, join your ZeroTier network, and get stuff running. <a target="_blank" href="https://github.com/64bitpandas/TurtleNetPublic/blob/main/setup/ubuntu-setup.sh">Here's my setup script</a> if you need some inspiration and are getting tired of copying the same commands over and over again for each VM!</p>
<p>Mainly, you'll want to have a reverse proxy up and running so you can redirect traffic directed towards your ingress into the rest of your network. Here's how it'll go:</p>
<ol>
<li><p>Get a reverse proxy. While I use Traefik for my internal services, I went with Caddy for my external services since I only have a few proxied sites, and all of them are hosted on a server other than my ingress.</p>
</li>
<li><p>For each domain you're hosting, create a DNS record pointing to the <strong>public IP</strong> of your ingress (not your ZeroTier IP)! You should be able to find this on the web dashboard for your provider.</p>
</li>
<li><p>Reverse proxy each domain to its desired route on ZeroTier. For example, <a target="_blank" href="https://github.com/64bitpandas/TurtleNetPublic/blob/main/docker/caddy/data/Caddyfile">here's my Caddyfile</a> that maps domains to ports on my other server.</p>
</li>
</ol>
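<p>As a rough sketch, a single entry in a Caddyfile for this setup might look like the following- the domain, ZeroTier IP, and port are all placeholders:</p>
<pre><code class="lang-plaintext">myapp.domain.tld {
    # Forward all requests over ZeroTier to the server actually hosting the service
    reverse_proxy 10.147.17.20:8080
}
</code></pre>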
<p>And that's pretty much it!</p>
<h2 id="heading-some-extra-stuff">Some Extra Stuff</h2>
<h3 id="heading-load-balancing">Load Balancing</h3>
<p>If you're expecting a lot of traffic to your services, you can set up load balancing to serve more people at once! <a target="_blank" href="https://caddy.community/t/v2-load-balancer-example-with-caddyfile/6903">Here's an example for Caddy.</a></p>
<p>Load balancing takes requests and forwards them to <em>multiple</em> destinations, which are all probably hosting the same service! For example, if you have two servers each running a copy of your website under Round Robin balancing, your reverse proxy will alternate between forwarding requests to each of those two servers.</p>
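<p>In Caddy, this amounts to listing several upstreams in one <code>reverse_proxy</code> directive- the addresses and policy here are illustrative:</p>
<pre><code class="lang-plaintext">myapp.domain.tld {
    # Alternate requests between two identical copies of the service
    reverse_proxy 10.147.17.20:8080 10.147.17.21:8080 {
        lb_policy round_robin
    }
}
</code></pre>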
<h3 id="heading-a-note-on-vpns">A note on VPNs</h3>
<p>Since you have ZeroTier, hosting a VPN is usually not necessary- you have full access to everything on your home network at all times already.</p>
<p>However, if friends or family need access to your internal network and you don't want to go through the hassle of setting up ZeroTier for them or proxying a public domain, it could be a good choice to set one up on your ingress.</p>
<p>Any self-hosted VPN solution should do; <a target="_blank" href="https://openvpn.net/">OpenVPN</a> is the industry standard if you need some place to get started.</p>
<h3 id="heading-a-note-on-https">A note on HTTPS</h3>
<p>Both Caddy and Traefik automatically provision TLS certificates via LetsEncrypt as long as you follow the setup instructions accordingly (<a target="_blank" href="https://caddyserver.com/docs/automatic-https">Caddy</a>, <a target="_blank" href="https://doc.traefik.io/traefik/https/acme/">Traefik</a>). Getting this set up is especially important for public services, both for usability (so your users don't get big red errors in their browsers) and security (so you aren't communicating everything through an insecure protocol) so don't skip out on it!</p>
<p>HTTPS should work even for internal domains not exposed to the public internet if you use the <a target="_blank" href="https://letsencrypt.org/docs/challenge-types/#dns-01-challenge">DNS-01 ACME challenge</a>, which the Caddy/Traefik setup guides walk you through. This works because the challenge only involves putting a TXT record in your DNS records to prove you own the domain, without needing to ping your main server at all.</p>
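<p>If you want to sanity-check a DNS-01 setup, you can look up the challenge record yourself while a certificate request is in progress (the domain here is a placeholder, and the record may be cleaned up once validation completes):</p>
<pre><code class="lang-bash"># The ACME server looks for a TXT record at this name during the challenge
dig TXT _acme-challenge.internal.domain.tld +short
</code></pre>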
]]></content:encoded></item><item><title><![CDATA[TurtleNet 4: Reverse Proxying Your First Service]]></title><description><![CDATA[Introduction
It's taken a little while, but we're finally ready to host our first service in a VM!
This section is very choose-your-own-adventure: I'll give an example of how to set up a service I run (Portainer), as well as the general framework I u...]]></description><link>https://devlog.bencuan.me/turtlenet-4-reverse-proxying-your-first-service</link><guid isPermaLink="true">https://devlog.bencuan.me/turtlenet-4-reverse-proxying-your-first-service</guid><category><![CDATA[Homelab]]></category><category><![CDATA[server]]></category><category><![CDATA[Docker]]></category><category><![CDATA[Traefik]]></category><category><![CDATA[self-hosted]]></category><dc:creator><![CDATA[Ben Cuan]]></dc:creator><pubDate>Fri, 09 Jun 2023 22:51:03 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1686350844179/986d9d73-d393-4195-b7e7-186a1672a285.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-introduction">Introduction</h2>
<p>It's taken a little while, but we're finally ready to host our first service in a VM!</p>
<p>This section is very choose-your-own-adventure: I'll give an example of how to set up a service I run (Portainer), as well as the general framework I use to spin up new services. You should then be able to apply this framework to installing any other service of your choice!</p>
<p>If you're planning on running a lot of services on bare VMs, you basically have two options:</p>
<ol>
<li><p>Make a VM for each service you're offering: this helps keep each service separated in the event of crashes or resource conflicts, but takes up a lot of additional compute resources. Managing a huge number of VMs is also somewhat time-consuming.</p>
</li>
<li><p>Run most/all of your services on a single VM: this saves lots of compute power, but you will run into conflicts rather frequently. For instance, if two of your services both use MySQL, they might overwrite each others' database entries if configured improperly!</p>
</li>
</ol>
<p>It's evident that neither of these options is quite ideal. Luckily, there is a solution to this:</p>
<h3 id="heading-containerization">Containerization!!</h3>
<p>Essentially, containerization is the process of packaging applications into standardized, lightweight images that all run within one operating system. These images behave almost like VMs, but don't need dedicated disk space, memory, or processor cores like a VM does. In addition, because the image format is standardized, you can run the images on practically any device and they will still work in exactly the same way.</p>
<p>One of the most popular containerization management tools used in the software industry is Docker. Besides providing the containers, Docker also provides lots of other goodies like:</p>
<ul>
<li><p>A standard way of defining and sharing container images through Dockerfiles;</p>
</li>
<li><p>Rudimentary virtual networking that allows each service to either be isolated or to communicate with one another;</p>
</li>
<li><p>Reliability and crash recovery (containers can auto-restart on crash).</p>
</li>
</ul>
<h4 id="heading-aside-but-what-about-kubernetes-k8s">Aside: But what about Kubernetes??</h4>
<p>If you have heard of the mystical framework that is Kubernetes and want to use it to power your own server, go for it! I will warn you that it gets fairly involved, and is probably extremely overpowered for any hobbyist system- but that being said, part of the fun of homelabbing is playing around with things and learning how to use them!</p>
<p>I wrote an <a target="_blank" href="https://decal.ocf.berkeley.edu/labs/10/">interactive lab</a> for getting started with Kubernetes if you'd like an intro and some additional resources.</p>
<h2 id="heading-docker-setup">Docker Setup</h2>
<h3 id="heading-installation">Installation</h3>
<p>Docker can be installed by following the <a target="_blank" href="https://docs.docker.com/engine/install/">official documentation</a>. Note that we want to install <em>Docker Engine</em> and not <em>Docker Desktop</em> since we are only interacting with the command line. For example, <a target="_blank" href="https://docs.docker.com/engine/install/ubuntu/">here are the Ubuntu installation instructions</a>.</p>
<p>You may also need to install <a target="_blank" href="https://docs.docker.com/compose/install/linux/">Docker Compose</a>.</p>
<p>To verify that you have both successfully installed, run <code>docker --version</code> and <code>docker-compose --version</code>.</p>
<h3 id="heading-some-notes-on-config-management">Some notes on config management</h3>
<p>There are multiple ways of managing and configuring services using Docker Compose. These include:</p>
<ol>
<li><p>Making one <code>docker-compose.yml</code> and listing all of your services in it</p>
</li>
<li><p>Making one <code>docker-compose.yml</code> for each of your services</p>
</li>
<li><p>Creating and managing all configs via Portainer</p>
</li>
</ol>
<p>Each of these options has its own benefits and drawbacks:</p>
<ol>
<li><p>Starting/stopping your entire service deployment can be done with a single command, but having such a large config file can get unwieldy.</p>
</li>
<li><p>Separating each service means some redundant configuration and less convenient management, but is modular and it's easy to work on one service without affecting others.</p>
</li>
<li><p>Using Portainer is the most convenient and powerful method, but it's more difficult to share and back up configs.</p>
</li>
</ol>
<p>For my own purposes, I chose Option 2 since I like the organizational aspect of having a folder for each service, and can have a Git repo with all my configs in it. You can see this in action by viewing some of my sample configs <a target="_blank" href="https://github.com/64bitpandas/TurtleNetPublic/tree/main/docker">here</a>.</p>
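<p>For reference, with Option 2 the layout ends up being roughly one folder per service, each holding its own Compose file and data directory (the service names here are just examples):</p>
<pre><code class="lang-plaintext">docker/
  portainer/
    docker-compose.yml
    data/
  traefik/
    docker-compose.yml
    data/
</code></pre>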
<h3 id="heading-some-more-setup">Some more setup</h3>
<p>Before you begin, it's recommended to give your user access to Docker commands so you don't have to prepend <code>sudo</code> before everything (replace <code>YOUR_USER</code> with your username):</p>
<pre><code class="lang-bash">sudo usermod -aG docker YOUR_USER &amp;&amp; newgrp docker
</code></pre>
<p>Now, you should be able to run <code>docker ps</code> to list all running containers. If it's successful, you should see an empty list at the moment.</p>
<p>If you instead see something like <code>Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?</code> then you'll need to start the Docker service:</p>
<pre><code class="lang-plaintext">sudo systemctl enable --now docker
</code></pre>
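<p>A quick way to sanity-check the whole installation is Docker's own test image, which pulls a tiny container and prints a confirmation message if the daemon is working:</p>
<pre><code class="lang-bash">docker run --rm hello-world
</code></pre>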
<h2 id="heading-your-first-docker-compose-file">Your First Docker Compose File</h2>
<p>Using Docker Compose, all services can be defined in a standard format: the <a target="_blank" href="https://docs.docker.com/compose/compose-file/03-compose-file/">Compose file</a>. To create one, simply make a file named <code>docker-compose.yml</code>.</p>
<p>Within this file, we'll mostly be working with the <code>services</code> element. For example, here is a simple config for getting Portainer up:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">version:</span> <span class="hljs-string">'3'</span>

<span class="hljs-attr">services:</span>
  <span class="hljs-attr">portainer:</span>
    <span class="hljs-attr">image:</span> <span class="hljs-string">portainer/portainer-ce</span>
    <span class="hljs-attr">container_name:</span> <span class="hljs-string">portainer</span>
    <span class="hljs-attr">restart:</span> <span class="hljs-string">unless-stopped</span>
    <span class="hljs-attr">ports:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-string">"9000:9000"</span>
    <span class="hljs-attr">volumes:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">/etc/localtime:/etc/localtime:ro</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">/var/run/docker.sock:/var/run/docker.sock:ro</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">./data:/data</span>
</code></pre>
<p>Let's break this down:</p>
<ul>
<li><p>The first line after <code>services</code> is the ID of your service- you can name this whatever you want. You can list multiple services under <code>services</code> in the same file, but as discussed above I typically don't do this unless the services rely on each other.</p>
</li>
<li><p>The <code>image</code> is the name of the container image that will be installed. You can look through a repository at <a target="_blank" href="https://hub.docker.com/">Docker Hub</a>, but this could also be the name of a custom image you have compiled locally (more on that later).</p>
</li>
<li><p>The <code>restart</code> option specifies the behavior when the container or server goes down. <code>unless-stopped</code> is my personal default: the container will automatically restart itself unless it was manually brought down by a user.</p>
</li>
<li><p><code>ports</code> exposes ports from the container (right) to the system (left). <strong>Remember the order host:container</strong> (I always get it mixed up)- for example, <code>8080:80</code> will expose a service running in the container's port 80 to <a target="_blank" href="http://localhost:8080"><code>localhost:8080</code></a> on the server it's running on.</p>
</li>
<li><p><code>volumes</code> exposes files and folders in the container to the host. Again, the order is host:container. The <code>:ro</code> at the end specifies that those specific files are read-only.</p>
<ul>
<li><p>Since I keep a folder handy for each service, I like to expose the service's data to the working directory using <code>./data:/data</code>. However, this is only one of many methods of using volumes: see the <a target="_blank" href="https://docs.docker.com/storage/volumes/">official documentation on Volumes</a> for more info.</p>
</li>
<li><p>Every service will have a different set of directories/files to expose so make sure to check the documentation to see what you will need.</p>
</li>
</ul>
</li>
</ul>
<p>Once you've saved your Compose file, you can run the command <code>docker-compose up -d --force-recreate</code> to get it running in the background. It might take a minute to pull the image on the first run, but once it's done you should be able to run <code>docker ps</code> and see something like this:</p>
<pre><code class="lang-plaintext">CONTAINER ID   IMAGE                    COMMAND        CREATED         STATUS                 PORTS
79ad464d6e7e   portainer/portainer-ce  "/portainer"    6 months ago    Up 12 days             8000/tcp, :::9000-&gt;9000/tcp, 9443/tcp
</code></pre>
<p>Congrats, you now have a running service! If you set up networking from the <a target="_blank" href="https://devlog.bencuan.me/3-zerotier">previous section</a>, you should now be able to navigate to <code>yourserverdomain.tld:9000</code> (replacing with your server domain, of course) to access the Portainer dashboard.</p>
<h2 id="heading-traefik">Traefik</h2>
<p>We are left with one big problem: accessing Portainer via <code>domain.tld:9000</code> is fine, but imagine if you had dozens of services. Having to remember the port number for each one gets annoying very quickly. Wouldn't it be so much better if we could map it to something like <code>portainer.domain.tld</code>?</p>
<p>To solve this problem, we shall invoke the power of a <strong>reverse proxy</strong>!</p>
<p>Essentially, a reverse proxy creates a layer in between your services and the rest of the internet, translating user requests (<code>portainer.domain.tld</code>) into something your services can understand (<a target="_blank" href="http://localhost:9000"><code>localhost:9000</code></a>).</p>
<blockquote>
<p>Side note: The name "reverse proxy" raises the question: what makes it the "reverse" of a regular proxy? This stems from the fact that reverse proxies are generally hosted close to the services (in our case, as you'll soon see, on the very same server as the services) and manage incoming traffic. Regular (forward) proxies, on the other hand, are hosted with the users and manage outgoing traffic from a network.</p>
</blockquote>
<p>There are lots of reverse proxy implementations:</p>
<ul>
<li><p>Nginx is industry standard and includes lots of additional features like a load balancer and integrated web server. Its power and flexibility also make it more difficult to configure and maintain, however.</p>
</li>
<li><p>Apache 2 is another standard reverse proxy and web server implementation; together with Nginx, the two have been estimated to serve over half the internet. Choosing Apache over Nginx is mostly a personal/design/legacy decision, and for our purposes Apache has many of the same benefits and drawbacks as Nginx.</p>
</li>
<li><p>Caddy is a more recent addition to the list, and has the simplest configuration I've seen so far. For example, the snippet <code>portainer.domain.tld { reverse_proxy localhost:9000 }</code> in a Caddyfile will do exactly what we want it to! If you just want something that works, I highly recommend Caddy.</p>
</li>
<li><p>Traefik is the implementation I will go over now. While more complex to set up compared to Caddy, it has a wider range of features and can automatically route Docker containers!</p>
</li>
</ul>
<p>To get started, see <a target="_blank" href="https://doc.traefik.io/traefik/getting-started/quick-start/">https://doc.traefik.io/traefik/getting-started/quick-start/</a>. You can also just copy the Compose file below:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">version:</span> <span class="hljs-string">'3'</span>

<span class="hljs-attr">services:</span>
  <span class="hljs-attr">traefik:</span>
    <span class="hljs-comment"># The official v2 Traefik docker image</span>
    <span class="hljs-attr">image:</span> <span class="hljs-string">traefik:v2.7</span>
    <span class="hljs-comment"># Enables the web UI and tells Traefik to listen to docker</span>
    <span class="hljs-attr">command:</span> <span class="hljs-string">--api.insecure=true</span> <span class="hljs-string">--providers.docker</span>
    <span class="hljs-attr">ports:</span>
      <span class="hljs-comment"># The HTTP port</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"80:80"</span>
      <span class="hljs-comment"># The HTTPS port</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"443:443"</span>
      <span class="hljs-comment"># The Web UI (enabled by --api.insecure=true)</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"8080:8080"</span>
    <span class="hljs-attr">volumes:</span>
      <span class="hljs-comment"># So that Traefik can listen to the Docker events</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">/var/run/docker.sock:/var/run/docker.sock</span>
      <span class="hljs-comment"># Config</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">/home/turtle/traefik/data/config.yml:/config.yml:ro</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">/home/turtle/traefik/data/traefik.yml:/traefik.yml:ro</span>
      <span class="hljs-comment"># SSL</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">/home/turtle/traefik/data/acme.json:/acme.json</span>
    <span class="hljs-attr">labels:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"traefik.enable=true"</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"traefik.http.routers.traefik.entrypoints=http"</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"traefik.http.routers.traefik.rule=Host(`traefik.t.bencuan.me`)"</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"traefik.http.middlewares.traefik-https-redirect.redirectscheme.scheme=https"</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"traefik.http.middlewares.sslheader.headers.customrequestheaders.X-Forwarded-Proto=https"</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"traefik.http.routers.traefik.middlewares=traefik-https-redirect"</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"traefik.http.routers.traefik-secure.entrypoints=https"</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"traefik.http.routers.traefik-secure.rule=Host(`traefik.t.bencuan.me`)"</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"traefik.http.routers.traefik-secure.tls=true"</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"traefik.http.routers.traefik-secure.tls.certresolver=cloudflare"</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"traefik.http.routers.traefik-secure.tls.domains[0].main=t.bencuan.me"</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"traefik.http.routers.traefik-secure.tls.domains[0].sans=*.t.bencuan.me"</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"traefik.http.routers.traefik-secure.service=api@internal"</span>

    <span class="hljs-attr">restart:</span> <span class="hljs-string">unless-stopped</span>
    <span class="hljs-attr">environment:</span>
      <span class="hljs-comment">### TODO ###</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">CF_API_EMAIL=REDACTED</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">CF_DNS_API_TOKEN=REDACTED</span>
    <span class="hljs-attr">networks:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">proxy</span>
  <span class="hljs-attr">whoami:</span>
    <span class="hljs-comment"># A container that exposes an API to show its IP address</span>
    <span class="hljs-attr">image:</span> <span class="hljs-string">traefik/whoami</span>
    <span class="hljs-attr">labels:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"traefik.http.routers.whoami.rule=Host(`whoami.t.bencuan.me`)"</span>

<span class="hljs-attr">networks:</span>
  <span class="hljs-attr">proxy:</span>
    <span class="hljs-attr">external:</span> <span class="hljs-literal">true</span>
</code></pre>
<p>You should replace the following:</p>
<ul>
<li><p>Right now, I'm mapping all my services to various subdomains of <a target="_blank" href="http://t.bencuan.me"><code>t.bencuan.me</code></a>. You have a different domain, so change all instances of this to your own. Using a dedicated subdomain is preferred for internal services: you can point its DNS record at your ZeroTier IP and have all of your services automatically route to your server.</p>
</li>
<li><p>If you're using Cloudflare, generate an API token <a target="_blank" href="https://developers.cloudflare.com/fundamentals/api/get-started/create-token/">here</a> and replace the <code>environment</code> section with the correct credentials.</p>
</li>
</ul>
<p>Now, go back to your DNS provider and create a new record for your subdomain using the ZeroTier IP for your server. For example, here's mine:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1686350922277/d51caac2-0e3e-48b2-a433-c970c0377f2c.png" alt class="image--center mx-auto" /></p>
<p>Next, create a <code>data/</code> folder. Inside, create two files: <code>traefik.yml</code> and <code>config.yml</code>.</p>
<p>Inside <code>traefik.yml</code>, paste the following:</p>
<pre><code class="lang-yml"><span class="hljs-attr">api:</span>
  <span class="hljs-attr">dashboard:</span> <span class="hljs-literal">true</span>
  <span class="hljs-attr">debug:</span> <span class="hljs-literal">true</span>
<span class="hljs-attr">entryPoints:</span>
  <span class="hljs-attr">http:</span>
    <span class="hljs-attr">address:</span> <span class="hljs-string">":80"</span>
  <span class="hljs-attr">https:</span>
    <span class="hljs-attr">address:</span> <span class="hljs-string">":443"</span>
<span class="hljs-attr">serversTransport:</span>
  <span class="hljs-attr">insecureSkipVerify:</span> <span class="hljs-literal">true</span>
<span class="hljs-attr">providers:</span>
  <span class="hljs-attr">docker:</span>
    <span class="hljs-attr">endpoint:</span> <span class="hljs-string">"unix:///var/run/docker.sock"</span>
    <span class="hljs-attr">exposedByDefault:</span> <span class="hljs-literal">false</span>
  <span class="hljs-attr">file:</span>
    <span class="hljs-attr">filename:</span> <span class="hljs-string">/config.yml</span>
<span class="hljs-attr">certificatesResolvers:</span>
  <span class="hljs-attr">cloudflare:</span>
    <span class="hljs-attr">acme:</span>
      <span class="hljs-attr">email:</span> <span class="hljs-string">YOUR_CLOUDFLARE_EMAIL</span>
      <span class="hljs-attr">storage:</span> <span class="hljs-string">acme.json</span>
      <span class="hljs-attr">dnsChallenge:</span>
        <span class="hljs-attr">provider:</span> <span class="hljs-string">cloudflare</span>
        <span class="hljs-attr">resolvers:</span>
          <span class="hljs-bullet">-</span> <span class="hljs-string">"1.1.1.1:53"</span>
          <span class="hljs-bullet">-</span> <span class="hljs-string">"1.0.0.1:53"</span>
</code></pre>
<p>See the <a target="_blank" href="https://doc.traefik.io/traefik/https/acme/">official documentation</a> for more details on <code>certificatesResolvers</code> if you don't use Cloudflare. This is necessary for automatically ensuring all of your sites are on HTTPS (otherwise your browser will yell at you a lot).</p>
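<p>One gotcha worth flagging: the <code>acme.json</code> file mounted in the Compose file above must exist before the first run, and Traefik will refuse to store certificates in it unless its permissions are restricted to the owner. A quick way to set this up (assuming the <code>data/</code> layout used in the volumes above):</p>
```shell
# Create the data folder and an empty certificate store,
# then lock it down so only the owner can read/write it
# (Traefik errors out if acme.json permissions are too open)
mkdir -p data
touch data/acme.json
chmod 600 data/acme.json
```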
<p>You can leave <code>config.yml</code> empty for now, but it'll be useful for routing to services not hosted on the same server. You can see mine <a target="_blank" href="https://github.com/64bitpandas/TurtleNetPublic/blob/main/docker/traefik/data/config.yml">here</a> for an example.</p>
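<p>As a taste of what goes in <code>config.yml</code>, here's a sketch using Traefik's file-provider format for routing to a service running on another machine. The router and service names and the target URL are made up for illustration; substitute your own.</p>
```yaml
http:
  routers:
    # Route example.t.bencuan.me to the service defined below
    example:
      entryPoints:
        - https
      rule: "Host(`example.t.bencuan.me`)"
      tls: {}
      service: example-svc
  services:
    example-svc:
      loadBalancer:
        servers:
          # The target can be any host reachable from the Traefik
          # container, e.g. another machine on your ZeroTier network
          - url: "http://172.24.0.10:8080"
```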
<p>Finally, you're ready to get Traefik up! Run <code>docker-compose up -d --force-recreate</code> once again, making sure that you're in the same folder as your new <code>docker-compose.yml</code>. You should now be able to navigate to the location you pointed the Traefik console to (<a target="_blank" href="http://traefik.t.bencuan.me"><code>traefik.t.bencuan.me</code></a> in my case).</p>
<p>If anything went wrong, you can run <code>docker-compose logs</code> to see what happened.</p>
<h2 id="heading-more-services">More Services</h2>
<p>Here's a handy Compose template for getting started with hosting future services:</p>
<pre><code class="lang-yml"><span class="hljs-attr">version:</span> <span class="hljs-string">"3"</span>

<span class="hljs-attr">services:</span>
  <span class="hljs-attr">SERVICE_NAME:</span>
    <span class="hljs-attr">image:</span> <span class="hljs-string">IMG</span>
    <span class="hljs-attr">container_name:</span> <span class="hljs-string">NAME</span>
    <span class="hljs-attr">environment:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">variables</span> <span class="hljs-string">here</span>
    <span class="hljs-attr">volumes:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">volume</span> <span class="hljs-string">info</span> <span class="hljs-string">here</span>
    <span class="hljs-attr">ports:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">ports</span> <span class="hljs-string">here</span>
    <span class="hljs-attr">restart:</span> <span class="hljs-string">unless-stopped</span>
    <span class="hljs-attr">labels:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"traefik.enable=true"</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"traefik.http.routers.NAME.entrypoints=https"</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"traefik.http.routers.NAME.rule=Host(`NAME.t.bencuan.me`)"</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"traefik.http.routers.NAME.tls=true"</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"traefik.http.routers.NAME.service=NAME-svc"</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"traefik.http.services.NAME-svc.loadbalancer.server.port=PORT"</span>
    <span class="hljs-attr">networks:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">proxy</span>

<span class="hljs-attr">networks:</span>
  <span class="hljs-attr">proxy:</span>
    <span class="hljs-attr">external:</span> <span class="hljs-literal">true</span>
</code></pre>
<p>You'll probably need to create the <code>proxy</code> network (<code>docker network create proxy</code>) if you haven't already. Also note that the loadbalancer port is the <em>container</em> port, not the host port it's mapped to.</p>
<p>For example, I can now modify our Portainer config to the following to get it running on <a target="_blank" href="http://portainer.t.bencuan.me"><code>portainer.t.bencuan.me</code></a>:</p>
<pre><code class="lang-yml"><span class="hljs-attr">version:</span> <span class="hljs-string">'3'</span>

<span class="hljs-attr">services:</span>
  <span class="hljs-attr">portainer:</span>
    <span class="hljs-attr">image:</span> <span class="hljs-string">portainer/portainer-ce</span>
    <span class="hljs-attr">container_name:</span> <span class="hljs-string">portainer</span>
    <span class="hljs-attr">restart:</span> <span class="hljs-string">unless-stopped</span>
    <span class="hljs-attr">volumes:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">/etc/localtime:/etc/localtime:ro</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">/var/run/docker.sock:/var/run/docker.sock:ro</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">/home/turtle/portainer/data:/data</span>
    <span class="hljs-attr">labels:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"traefik.enable=true"</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"traefik.http.routers.portainer.entrypoints=http"</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"traefik.http.routers.portainer.rule=Host(`portainer.t.bencuan.me`)"</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"traefik.http.middlewares.portainer-https-redirect.redirectscheme.scheme=https"</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"traefik.http.routers.portainer.middlewares=portainer-https-redirect"</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"traefik.http.routers.portainer-secure.entrypoints=https"</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"traefik.http.routers.portainer-secure.rule=Host(`portainer.t.bencuan.me`)"</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"traefik.http.routers.portainer-secure.tls=true"</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"traefik.http.routers.portainer-secure.service=portainer"</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"traefik.http.services.portainer.loadbalancer.server.port=9000"</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"traefik.docker.network=proxy"</span>
    <span class="hljs-attr">networks:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">proxy</span>

<span class="hljs-attr">networks:</span>
  <span class="hljs-attr">proxy:</span>
    <span class="hljs-attr">external:</span> <span class="hljs-literal">true</span>
</code></pre>
<p>Most services you look up online will come with a provided sample Compose file; you can copy those over and add the necessary labels to get it hooked up to Traefik. You can also reference <a target="_blank" href="https://github.com/64bitpandas/TurtleNetPublic/tree/main/docker">my Compose files</a> if you're thinking of running the same services. There are lots of other resources online as well like <a target="_blank" href="https://github.com/docker/awesome-compose">this list</a>.</p>
<h3 id="heading-happy-hosting">Happy hosting!!</h3>
<p><img src="https://4.bp.blogspot.com/-7kbrqnXfuLk/WqR6bZg882I/AAAAAAAAAkM/0vvnQrIZAwk9ijiTvfF8m5pWpBSJsKuFQCLcBGAs/s1600/Screen%2BShot%2B2018-03-10%2Bat%2B7.37.12%2BPM.png" alt="That cute Docker whale" /></p>
]]></content:encoded></item><item><title><![CDATA[TurtleNet 3: ZeroTier and Private Networking]]></title><description><![CDATA[Before we can properly offer services on our VMs, we'll need to make sure they are accessible over the Internet!
However, simply exposing the entire server to the Internet (via portforwarding or otherwise) is extremely dangerous, since it allows anyo...]]></description><link>https://devlog.bencuan.me/3-zerotier</link><guid isPermaLink="true">https://devlog.bencuan.me/3-zerotier</guid><category><![CDATA[Homelab]]></category><category><![CDATA[self-hosted]]></category><category><![CDATA[ZeroTier]]></category><category><![CDATA[server]]></category><dc:creator><![CDATA[Ben Cuan]]></dc:creator><pubDate>Sun, 09 Apr 2023 22:44:03 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1681080174083/cbf258f7-613b-4403-8de7-91d5ccbbe642.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Before we can properly offer services on our VMs, we'll need to make sure they are accessible over the Internet!</p>
<p>However, simply exposing the entire server to the Internet (via port forwarding or otherwise) is extremely dangerous, since it allows anyone from anywhere in the world to connect to your private services, like the Proxmox console. Even with proper security measures and strong passwords, it's impossible to guarantee that malicious actors won't find an exploit and steal your private information!</p>
<p>As a solution, let's be careful to separate network access to <strong>public services,</strong> which anyone can interact with, and <strong>private services</strong>, which only you and authorized users can access.</p>
<p>In this section, we'll set up our private network.</p>
<h2 id="heading-introducing-zerotier">Introducing ZeroTier</h2>
<p>Without any further configuration, everything is private by default: in order to access the Proxmox web console, you'll need to be on the same network as your server at all times. It's also very difficult to access VMs from other devices, since they're networked internally within Proxmox itself.</p>
<p>Let's first build up our private access, such that you and others can access server resources even when you're not physically next to the server.</p>
<p>As I explained in Part 1, there are many alternatives to ZeroTier that achieve similar things through slightly different protocols, like WireGuard or Teleport. You are welcome to choose what works best for your use case, but for now I'll use ZeroTier as an example of how a private networking setup might look like.</p>
<h3 id="heading-how-does-zerotier-actually-work">How does ZeroTier actually work?</h3>
<p>ZeroTier essentially creates a virtual, software-defined network that you can add devices to. Regardless of what physical networks or locations those devices are in, they'll all be connected to the same ZeroTier network, allowing them to access one another as if they were in a LAN setup.</p>
<p>The technical details of how this is possible without port forwarding are beyond the scope of this guide, but you're welcome to read their whitepapers and explore UDP hole punching to learn more and confirm that it's secure.</p>
<h2 id="heading-zerotier-setup">ZeroTier Setup</h2>
<p>First, you'll need a ZeroTier account, which you can create on the ZeroTier website.</p>
<p>Once you have an account, you can create a network. You'll want this network to be private, such that you'll have to manually validate each device that requests to join.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1681079998764/be06d4de-5b98-4731-8111-2a86ef1e60a6.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-joining-the-zerotier-network-from-an-external-device">Joining the ZeroTier Network from an external device</h3>
<p>As a warm-up to VM configuration, let's see what the joining process looks like on an external computer or phone.</p>
<p>First, download the correct ZeroTier distribution for your device <a target="_blank" href="https://www.zerotier.com/download/">here</a>. Then, you should be able to join the network by entering in the network ID into the GUI like so:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1681079926415/03f2847a-bec2-464c-9156-973d98135390.png" alt class="image--center mx-auto" /></p>
<p>You'll notice that the device doesn't immediately join. You will need to go to the ZeroTier website and check the box next to the new device that pops up. You should give this device a name and description so you can remember it later.</p>
<p>You'll also see that the device is assigned an IP address in the ZeroTier website. This will be important to note down when we do this process for the VM.</p>
<h3 id="heading-joining-the-zerotier-network-from-your-vm">Joining the ZeroTier Network from your VM</h3>
<p>Now, let's do the same thing from within your VM!</p>
<p>The process is pretty similar, only we don't have a GUI anymore. Instead, you can do the following:</p>
<ol>
<li><p>Open the Proxmox web console, and navigate to the live console instance for your VM.</p>
</li>
<li><p>Run the command <code>curl -s https://install.zerotier.com | sudo bash</code> to install the ZeroTier command line interface.</p>
</li>
<li><p>Run the command <code>sudo zerotier-cli join &lt;NETWORKID&gt;</code>, replacing <code>&lt;NETWORKID&gt;</code> with your actual network ID.</p>
</li>
<li><p>The terminal should now signal that the join was successful. Don't forget to check the box next to the new device in the ZeroTier website, and note down the new IP address!</p>
</li>
</ol>
<h3 id="heading-joining-the-zerotier-network-from-proxmox">Joining the ZeroTier Network from Proxmox</h3>
<p>Doing this is exactly the same process as joining ZeroTier from your VM, except now you should go through the steps in the <em>Proxmox shell</em> rather than your VM's console. After you assign your Proxmox instance to a domain name, you should now be able to access your Proxmox console using <a target="_blank" href="https://node.domain.tld:8006"><code>https://node.domain.tld:8006</code></a> in your web browser. For example, my server is named <code>turtle</code> and my domain is <code>bencuan.me</code>, so I could type <a target="_blank" href="https://turtle.bencuan.me:8006"><code>https://turtle.bencuan.me:8006</code></a>.</p>
<h2 id="heading-domain-configuration">Domain Configuration</h2>
<p>Now that ZeroTier has been configured on both your VM and your regular device, let's make it easier to access!</p>
<p>In Part 2, you should have acquired a domain. If you did not do this and opted to have a local domain instead, skip to "Local Resolution".</p>
<p>Your domain provider should have an option to set DNS records in their web console. If they don't, or you don't trust your provider, you can also link an external provider like Cloudflare, then continue with this process.</p>
<p>In your DNS configuration, let's add a new record corresponding to your VM.</p>
<ol>
<li><p>Create a new A record. (You can also create an AAAA record if you prefer to use IPv6).</p>
</li>
<li><p>For the name, use your VM's hostname (ex. <code>arabia</code>).</p>
</li>
<li><p>For the IP, enter the ZeroTier IP corresponding to your VM (found in the ZeroTier web console).</p>
</li>
<li><p>If using Cloudflare or a similar service, disable the option to proxy the record.</p>
</li>
<li><p>Save the new record, and wait a couple minutes for it to propagate.</p>
</li>
</ol>
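<p>In zone-file terms, the record you just created looks something like this (the hostname and IP are examples; use your own VM's name and its ZeroTier IP):</p>
```plaintext
; A record pointing the VM's subdomain at its ZeroTier IP
arabia.bencuan.me.    300    IN    A    172.24.220.210
```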
<p>Now, you should be able to reach your domain using your new record! As an example, I have a VM named <code>arabia</code> and my domain is <a target="_blank" href="http://bencuan.me"><code>bencuan.me</code></a>. Thus, if I type in the command <code>ping</code> <a target="_blank" href="http://arabia.bencuan.me"><code>arabia.bencuan.me</code></a> on my laptop, I should be able to reach it and get the following output:</p>
<pre><code class="lang-bash">❯ ping arabia.bencuan.me

Pinging arabia.bencuan.me [172.24.220.210] with 32 bytes of data:
Reply from 172.24.220.210: bytes=32 time=26ms TTL=64
Reply from 172.24.220.210: bytes=32 time=29ms TTL=64
Reply from 172.24.220.210: bytes=32 time=32ms TTL=64
Reply from 172.24.220.210: bytes=32 time=23ms TTL=64

Ping statistics <span class="hljs-keyword">for</span> 172.24.220.210:
    Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip <span class="hljs-built_in">times</span> <span class="hljs-keyword">in</span> milli-seconds:
    Minimum = 23ms, Maximum = 32ms, Average = 27ms
</code></pre>
<p>If you tried to ping <a target="_blank" href="http://arabia.bencuan.me"><code>arabia.bencuan.me</code></a> right now though, it will most likely result in a timeout since you haven't been added to my ZeroTier network!</p>
<h3 id="heading-local-resolution">Local Resolution</h3>
<p>If you didn't acquire a public domain, you'll still have to use IP addresses to access your VMs. We'll see how to set up a custom DNS server to get around this in a future step.</p>
<h3 id="heading-ssh">SSH</h3>
<p>Depending on your distribution, SSH may or may not be enabled by default in your VM. If it is, you should now be able to run <code>ssh vmname.domain.tld</code> (replacing with your actual VM name and domain, of course) and connect to your VM from your other devices.</p>
<p>If this is not the case, you may need to install it and/or enable it manually. This is how to do so for Ubuntu/Debian-based systems (look up how to do it on your OS of choice if this does not apply):</p>
<ol>
<li><p>Run <code>sudo apt install ssh</code></p>
</li>
<li><p>Run <code>sudo systemctl enable ssh.service --now</code></p>
</li>
</ol>
<p>To save time on future logins, you can set up public key authentication so that you don't have to type in your password every time you SSH into your VM. (This is only available for Linux-based VMs.)</p>
<ol>
<li><p>On your personal device, run <code>ssh-keygen</code> if you haven't done so before.</p>
</li>
<li><p>Run <code>ssh-copy-id &lt;username@vm.domain.tld&gt;</code> , replacing the part in brackets with your actual VM user and address.</p>
</li>
<li><p>Type in your VM's user password once.</p>
</li>
</ol>
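<p>If you find yourself connecting often, you can also add an entry to <code>~/.ssh/config</code> on your personal device so that a short alias works from anywhere. The alias and username below are examples; substitute your own:</p>
```plaintext
# ~/.ssh/config
Host arabia
    HostName arabia.bencuan.me
    User turtle
```
<p>After this, <code>ssh arabia</code> will connect using the full address and your key automatically.</p>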
<h3 id="heading-remote-desktop">Remote Desktop</h3>
<p>If you're using Windows or another OS with a graphical interface, you can access the GUI through Proxmox's default console view. However, you might have noticed that this console can be extremely laggy or unresponsive at times.</p>
<p>If you will need to access your VM graphically for things like gaming or video editing, you should set up some sort of remote desktop service.</p>
<ul>
<li><p>For Windows and Mac VMs, I highly recommend <a target="_blank" href="https://parsec.app/">Parsec</a>: it's given the best latency and responsiveness out of all the remote desktop applications I've tried so far.</p>
</li>
<li><p>For easiest access to Windows VMs, you can alternatively use the built-in <a target="_blank" href="https://support.microsoft.com/en-us/windows/how-to-use-remote-desktop-5fe128d5-8fb1-7a23-3b8a-41e636865e8c">Windows Remote Desktop</a>. A client is automatically installed on all Windows devices, so you don't need any additional software to access it. (Note that a Windows Pro VM is required.)</p>
</li>
<li><p>For Linux VMs, you can use the <a target="_blank" href="https://en.wikipedia.org/wiki/Virtual_Network_Computing">VNC</a> protocol. There are many clients for this: see <a target="_blank" href="https://www.google.com/search?client=firefox-b-1-d&amp;q=vnc+setup+linux">here</a> for guides on how to configure VNC.</p>
</li>
</ul>
<h2 id="heading-summary">Summary</h2>
<p>If you've gotten this far, you should now be able to access your VM from anywhere, but only on your personal devices! This will allow you to access server resources such as the Proxmox console even if you're not connected to the same network as your server.</p>
<p>Next, we'll take advantage of our private ZeroTier network to set up a reverse proxy to access internal services in a convenient manner.</p>
]]></content:encoded></item><item><title><![CDATA[TurtleNet 2.5: GPU Passthrough with Proxmox]]></title><description><![CDATA[Introduction
This section is optional, and provides additional context in the case that you would like to attach a GPU to your VM. This can be useful for a variety of tasks, such as setting up a cloud gaming server (which we'll do now) or performing ...]]></description><link>https://devlog.bencuan.me/25-gpu-passthrough</link><guid isPermaLink="true">https://devlog.bencuan.me/25-gpu-passthrough</guid><category><![CDATA[self-hosted]]></category><category><![CDATA[Homelab]]></category><category><![CDATA[GPU]]></category><category><![CDATA[proxmox]]></category><dc:creator><![CDATA[Ben Cuan]]></dc:creator><pubDate>Wed, 29 Mar 2023 06:46:23 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1680071777270/981ec289-be46-476b-9d75-8db880ee6db3.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-introduction">Introduction</h2>
<p>This section is optional, and provides additional context in case you would like to attach a GPU to your VM. This can be useful for a variety of tasks, such as setting up a cloud gaming server (which we'll do now) or performing machine learning or graphics-related tasks.</p>
<p>Full disclosure: you can find lots of guides on how to do this online: <a target="_blank" href="https://3os.org/infrastructure/proxmox/gpu-passthrough/gpu-passthrough-to-vm/">here's one</a>, and <a target="_blank" href="https://www.reddit.com/r/homelab/comments/b5xpua/the_ultimate_beginners_guide_to_gpu_passthrough/">here's another</a>. The information here summarizes the process I took to get my setup working, but yours may differ, so be patient and try more than one guide if it's not doing what you expect!</p>
<h3 id="heading-disclaimer-multiple-gpus-make-life-easier">Disclaimer: Multiple GPUs make life easier!</h3>
<p>Essentially, GPU passthrough hands the entire graphics card over to the VM. That means that Proxmox, and by extension every other VM, will no longer be able to access the card you pass through!</p>
<p>For most cases, this should be perfectly fine (as long as you're not running anything that requires a GUI, like Windows, on other VMs). However, you'll probably find it much easier to have two GPUs: one cheap, low-powered primary GPU to drive Proxmox, and a higher-powered GPU to pass through to the VM that needs it.</p>
<h2 id="heading-preliminary-checks">Preliminary Checks</h2>
<ol>
<li><p><strong>Make sure that the GPU is plugged in and detected by Proxmox:</strong> If the hardware is faulty, then none of this will work of course! You can ensure that the GPU is detected by entering the command <code>lspci</code>. Your GPU should show up in the output, along with an ID (such as <code>06:00</code>).</p>
</li>
<li><p><strong>Make sure that the VM you want to pass the GPU through to is working:</strong> Before doing the passthrough, the VM should be accessible via the Proxmox console. You should also set up SSH, Remote Desktop, Parsec, or another service that will allow you to access it if it's working, since the Proxmox console will be disabled after passthrough!</p>
</li>
<li><p><strong>Make sure that no other VMs are using the GPU at the same time:</strong> Only one VM can use a passed-through GPU at a time. If another VM needs access to the GPU, either ensure that only one of them is running at a time, or get a second GPU to pass through.</p>
</li>
</ol>
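<p>As a small sketch of check 1, you can filter the <code>lspci</code> output down to graphics devices so the GPU's ID is easy to spot. (The <code>find_gpus</code> helper is my own naming, purely for illustration.)</p>

```shell
# Sketch: narrow a PCI device listing down to graphics devices, so the
# GPU's ID (e.g. 06:00.0) is easy to spot. The function just filters
# whatever it is piped.
find_gpus() {
  grep -Ei 'vga|3d controller'
}

# On the Proxmox host:
#   lspci | find_gpus
```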
<h2 id="heading-follow-the-guide">Follow the guide</h2>
<p>In the interest of conciseness, I will defer to the large collection of pre-existing guides. Try your favorite one, and come back when you're done, even if things aren't working!</p>
<h3 id="heading-if-things-are-working">If things are working</h3>
<p>If GPU passthrough is working, you'll notice the following:</p>
<ul>
<li><p>Attempting to connect via console will give an error.</p>
</li>
<li><p>Plugging a monitor via HDMI/DP directly into the passed-through GPU will show the VM's output, not the Proxmox console.</p>
</li>
<li><p>Even if ballooning is enabled, the VM will use 100% of its allocated RAM, since passthrough requires pinning all of the guest's memory.</p>
</li>
<li><p>If you set up SSH or another form of access, you should be able to access the VM as usual from other devices.</p>
</li>
</ul>
<p>Should the above be true, congratulations! You have successfully implemented GPU passthrough.</p>
<h3 id="heading-when-things-are-not-working">When things are not working</h3>
<p>Chances are that passthrough didn't work the first time around. (It didn't for me either!) Here are a few debugging steps I followed that worked for me:</p>
<ol>
<li><p>If you're passing through to Windows, your PCI device should look like this:</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1680071798824/6cd72187-4e04-484e-9b92-d1fee8d960ac.png" alt class="image--center mx-auto" /></p>
<p> If you're passing through to Linux, your PCI device should look the same as above, except "Primary GPU" should be unchecked.</p>
</li>
<li><p>Make sure that the "Machine Type" of the VM is <code>q35</code>.</p>
</li>
<li><p>If you're using an NVIDIA 10-series GPU (like a GTX 1070), <a target="_blank" href="https://github.com/Matoking/NVIDIA-vBIOS-VFIO-Patcher">this ROM patcher</a> might be necessary.</p>
</li>
<li><p>It's possible that the GPU is being used by another process (Simple Framebuffer seems to be a common culprit). You can use these steps to fix this:</p>
<ol>
<li><p>Run <code>cat /proc/iomem</code> in the Proxmox shell to view a list of PCI devices. You should be able to identify your GPU in this list, and it should not be used by any processes (assuming the VM is shut off). For example, my GPU is device <code>06:00</code> and my output looks like this:</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1680072109066/cd5ea87e-17bd-4295-94ec-4670cf312f71.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>If your output does not look like the above (for example, there are extra entries under "PCI Bus 0000:06"), run the following commands, replacing <code>XX</code> with the desired ID:</p>
<pre><code class="lang-bash"><span class="hljs-built_in">echo</span> 1 &gt; /sys/bus/pci/devices/0000:XX:00.0/remove
<span class="hljs-built_in">echo</span> 1 &gt; /sys/bus/pci/rescan
<span class="hljs-built_in">echo</span> simple-framebuffer.0 &gt; /sys/bus/platform/drivers/simple-framebuffer/unbind
</code></pre>
<p> You may need to run this every time the server reboots.</p>
</li>
</ol>
</li>
<li><p>If none of the above works, view the error log using <code>tail -100 /var/log/syslog</code>. If the system is having trouble with the GPU, you'll probably see a huge amount of error spam in this output that you can paste into Google to get more useful results.</p>
</li>
</ol>
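<p>One more check that often helps here: confirm that IOMMU is actually active and see which group your GPU landed in. A sketch along these lines (the helper name and the optional path argument are mine; the sysfs location is the standard one):</p>

```shell
# List each IOMMU group and the PCI devices it contains. An empty listing
# usually means IOMMU/VT-d is disabled in the BIOS or missing from the
# kernel command line. The path argument defaults to the real sysfs tree.
list_iommu_groups() {
  root="${1:-/sys/kernel/iommu_groups}"
  for group in "$root"/*; do
    [ -d "$group" ] || continue
    echo "group ${group##*/}: $(ls "$group/devices")"
  done
}

list_iommu_groups
```

<p>Ideally the GPU (and its audio function) sits in a group by itself; if unrelated devices share its group, they would all get pulled into the VM together.</p>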
]]></content:encoded></item><item><title><![CDATA[TurtleNet 2: Getting Started with Proxmox]]></title><description><![CDATA[Prerequisites
If you haven't read Part 1 yet, go do that now!
At this point in the series, my hope is that you now have a very high-level sense of what you might want to achieve with your homelab, and are generally aware of the steps you'll be taking...]]></description><link>https://devlog.bencuan.me/2-proxmox</link><guid isPermaLink="true">https://devlog.bencuan.me/2-proxmox</guid><category><![CDATA[Homelab]]></category><category><![CDATA[self-hosted]]></category><category><![CDATA[server hosting]]></category><category><![CDATA[proxmox]]></category><dc:creator><![CDATA[Ben Cuan]]></dc:creator><pubDate>Wed, 29 Mar 2023 06:13:50 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1680069730735/138c2f40-cc78-4855-9eb6-02b238e40a09.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1680069770301/e9440b44-0a97-45ad-bcee-eae42dfca6c9.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-prerequisites">Prerequisites</h2>
<p>If you haven't read <a target="_blank" href="https://devlog.bencuan.me/1-setup">Part 1</a> yet, go do that now!</p>
<p>At this point in the series, my hope is that you now have a very high-level sense of what you might want to achieve with your homelab, and are generally aware of the steps you'll be taking to get it up and running.</p>
<p>Additionally, you should either have your desired hardware or have acquired access to a VPS or virtual machine of your liking. I'll assume that you're using your own hardware from now on (true to the spirit of homelabbing); if you're not, use your best judgement on which parts to tweak or skip for your particular case! To reiterate, my intention is to give you a framework for building your own intuition of how all of this works, rather than a step-by-step guide.</p>
<h2 id="heading-the-hypervisor">The Hypervisor</h2>
<p>Great, so now you have some hardware that's ready to do things. Let's install the software that it needs!</p>
<p>Since we'll be installing multiple virtual machines, we need a main operating system to manage all of them. Proxmox is my tool of choice, due to its wide support, included web interface, and the fact that it's free and open source. Under the hood, though, Proxmox is basically a souped-up version of Debian, so if you really wanted to get your hands dirty, you could replicate its functionality by installing the right packages on a basic Linux instance.</p>
<h2 id="heading-installation">Installation</h2>
<p>Installing Proxmox is extremely similar to the way you'd install any other operating system. If you haven't installed Linux before, feel free to take a quick intermission and refresh yourself on the <a target="_blank" href="https://ubuntu.com/tutorials/create-a-usb-stick-on-windows#1-overview">installation process</a>.</p>
<p>Proxmox can be installed via live USB like any standard Linux distro. Just make sure to flash it in DD mode, not ISO mode! If you're using a GUI application like Rufus, there should be a visible toggle to change this.</p>
<p>Once you boot up your server with the Proxmox live USB, the GUI will walk you through the process for the most part. Here are a few important notes to keep in mind as you click through the prompts.</p>
<p><strong>Make sure that you record the root password!</strong> If you forget the password, you'll need to start over. When logging in, the default username is <code>root</code> and the password is the one you set.</p>
<h3 id="heading-domains">Domains</h3>
<p>Proxmox is best used when you have a publicly addressable domain name (like <a target="_blank" href="http://bencuan.me">bencuan.me</a>). I would strongly encourage you to purchase one using your favorite provider (Namecheap, Porkbun, and Cloudflare are a few I have used before and have had good experiences with); it's only around $10 per year. You can even <a target="_blank" href="https://nc.me">get a free .me domain if you're a student</a>! Of course, you can still use Proxmox with a local domain only (conventionally ending in <code>.local</code>)- you'll just need to manually configure your DNS to resolve this in a later step if you wish to go this route.</p>
<p>It's fine if you're not familiar with how domains work right now- just acquire one, and we'll do lots of fun stuff with it later.</p>
<p>When the installer prompts for the domain, be careful since the subdomain will automatically become the hostname of the machine! For example, using the domain <a target="_blank" href="http://turtle.bencuan.me"><code>turtle.bencuan.me</code></a> will set the hostname of the machine to <code>turtle</code>. Changing the hostname afterwards is possible but it's best to avoid it since it can be a hassle.</p>
<h3 id="heading-post-installation">Post-installation</h3>
<p>After installation completes, a message in the console should prompt you to connect to the web client, and provide some instructions on how to do so:</p>
<p><img src="https://phoenixnap.com/kb/wp-content/uploads/2022/01/proxmox-welcome-output.png" alt="Install Proxmox VE {Step-by-Step Guide}" /></p>
<p>If this message does not appear, check to make sure that the <code>pveproxy</code> service started correctly (using <code>systemctl status pveproxy</code> and/or <code>journalctl -xe</code>).</p>
<p>You may also need to enable virtualization in your BIOS if you have not done so already. For AMD CPUs, this setting is usually called "SVM Mode"; for Intel CPUs, it might be called "VT-x" or "Intel Virtualization Technology". If you can't find it, refer to your motherboard's manual, since each manufacturer names it differently.</p>
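<p>You can check whether virtualization is already enabled from the Proxmox shell (or any Linux live environment) by looking at the CPU flags. A small sketch- the helper name and file parameter are mine, for illustration:</p>

```shell
# Count CPU threads that advertise hardware virtualization: vmx is Intel
# VT-x, svm is AMD-V. A count of 0 means it's disabled in the BIOS (or
# the CPU doesn't support it at all).
count_virt_flags() {
  grep -Ec 'vmx|svm' "${1:-/proc/cpuinfo}"
}

# On the host:
#   count_virt_flags
```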
<h2 id="heading-navigating-the-web-client">Navigating the Web Client</h2>
<p>If all went well, you should be able to access the web client from your web browser on another computer! It should now look something like this:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1680069955250/af1d2c47-3cd9-4df1-81b6-eeef9fb3d1c2.png" alt class="image--center mx-auto" /></p>
<p>There's a lot of settings, and we'll go over them in due time. For now, here's a list of the most important tools and metrics:</p>
<ul>
<li><p>On the top left, you should see a "Datacenter" tab, followed by your node. If you click on your node, you should then be able to see some basic information about it in the "Summary" tab:</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1680069977889/f974aaa5-a280-49b7-b047-ed2cf43e6e14.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>If you need to access the shell to enter commands, you can do so by clicking on your node, then selecting the "Shell" option in the sidebar (this is functionally equivalent to SSHing into your Proxmox machine from another terminal):</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1680070024474/ea50fae2-0355-475b-926b-87c5f9359dcd.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>The reboot/shutdown options are available in the "Search" tab:</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1680070053535/08d14f41-b0c7-4a92-8f79-20d9f9cb3fd8.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
<p>The first thing you will likely need to do is disable the paid Proxmox repositories by going to your node -&gt; Updates -&gt; Repositories. You will still be able to get the latest updates, but Proxmox has extended features and security updates that are exclusive to businesses and other paying customers. As hobbyists, using the free repositories is perfectly fine.</p>
<p>You'll also get a popup every time you log in notifying you that you don't have a subscription. This is also a side effect of the above, and <a target="_blank" href="https://johnscs.com/remove-proxmox51-subscription-notice/">there are ways to disable this popup</a> if you find it annoying.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1680070084253/41ade46e-d085-459c-852d-ec81f979bc91.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-creating-your-first-vm">Creating your First VM</h2>
<p>Now, we're finally ready to make some virtual machines!</p>
<p>You can choose to run practically any operating system on a virtual machine. Most VMs run some form of Linux, but it's fairly common to run Windows or even macOS VMs depending on your use case.</p>
<p>Regardless of which OS you desire, you'll need to acquire the ISO and download it to the <code>/var/lib/vz/template/iso</code> folder. Here are the steps:</p>
<ol>
<li><p>Find the ISO online. For example, the Ubuntu Desktop ISO can be downloaded from <a target="_blank" href="https://releases.ubuntu.com/22.04.2/ubuntu-22.04.2-desktop-amd64.iso">this link</a>.</p>
</li>
<li><p>Open the Proxmox shell and navigate to the correct folder: <code>cd /var/lib/vz/template/iso</code></p>
</li>
<li><p>Download the ISO using <code>wget</code>, <code>curl</code>, or a similar command. For Windows ISOs, make sure you surround the link in quotes, since it may contain spaces or other characters the shell would otherwise interpret.</p>
</li>
</ol>
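<p>Put together, steps 2-3 might look like this in the Proxmox shell, using the Ubuntu 22.04 ISO linked above as an example:</p>

```shell
# Download an installer ISO into the folder Proxmox scans for ISO images.
cd /var/lib/vz/template/iso

# Quotes around the URL keep the shell from splitting on spaces or
# interpreting special characters (important for some Windows ISO links).
wget "https://releases.ubuntu.com/22.04.2/ubuntu-22.04.2-desktop-amd64.iso"
```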
<h3 id="heading-namingnumbering-schemes">Naming/Numbering Schemes</h3>
<p>Although this is completely optional, it's fun to come up with a cohesive naming scheme for your VMs so that it's easy to keep track of them (and to name new ones in the future)! As an example, I name my VMs after popular civilizations from the game Civilization V (babylon, arabia, persia, and so on). A few other ideas:</p>
<ul>
<li><p>names of famous scientists/people from a particular field</p>
</li>
<li><p>names of elements (if sufficiently heavy, their atomic numbers could even correspond to your VM IDs)</p>
</li>
<li><p>any category (animals, cities, cars) but in alphabetical order</p>
</li>
</ul>
<p>Less optional and more important are the numerical IDs you will need to assign to your VMs. Each VM has a unique ID number, which cannot be changed after creation. You can assign these IDs any way you wish, but it's helpful to group them somehow. As an example, my VMs starting with 1 host critical services (like DNS and NAS), those starting with 2 host external services, and those starting with 3 host internal services.</p>
<h3 id="heading-vm-options">VM Options</h3>
<p>Now, let's begin the creation process by hitting the "Create VM" button:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1680070159269/e38bb334-47e4-479e-8c8b-12ac18b274ff.png" alt class="image--center mx-auto" /></p>
<p>You'll first need to assign the VM name and ID. Remember that the ID can't be changed, so make sure it's correct!</p>
<p>You might also want to select the "Start at Boot" option if you want the VM to automatically start when the server starts.</p>
<p>Next, go to the "OS" tab and select the ISO you just downloaded. Make sure that the "Guest OS" settings match the type of operating system that you're installing.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1680070187763/345569e3-6f99-412a-9cf0-61be363601bd.png" alt class="image--center mx-auto" /></p>
<p>For the "System" tab, let's leave everything at default for now. We can change this later if we want to do something fancy like GPU passthrough.</p>
<p>For the "Disks" tab, specify how much space you want this VM to take up. From personal experience, 32GB is enough for a small number of basic server tasks, and 64-128GB is a safe bet if you're planning on doing a lot on this VM. You don't need to change any of the other options if you're not sure what they do.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1680070227310/177d8025-6819-4543-ac34-5cd28f4a6202.png" alt class="image--center mx-auto" /></p>
<p>For the "CPU" tab, specify how many cores you want to dedicate to this VM. If your CPU supports hyperthreading (nearly all modern CPUs do), a "core" in Proxmox is equivalent to one "CPU thread". So, a 4-core, 8-thread CPU would have 8 "cores" available to assign to VMs.</p>
<p>For the "Memory" tab, specify how much RAM you want to dedicate to this VM. You can select the "Ballooning Device" option for most applications, which will reserve memory when needed rather than eating up the whole block when the VM boots up.</p>
<p>That's pretty much all you need to do for now! Go to the "Confirm" tab and ensure all of the options are what you want. Then, click "Finish", and after a few seconds your new VM should appear in your server node!</p>
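<p>As an aside, everything the wizard does can also be scripted with Proxmox's <code>qm</code> CLI, which is handy once you're creating VMs regularly. A rough sketch mirroring the choices above- the VM ID (301), name, and sizes are example values of mine, not recommendations:</p>

```shell
# Hypothetical CLI equivalent of the "Create VM" wizard (run on the
# Proxmox host). Adjust the ID, name, ISO, and sizes to taste.
qm create 301 --name babylon --ostype l26 \
  --cdrom local:iso/ubuntu-22.04.2-desktop-amd64.iso \
  --cores 4 --memory 8192 --balloon 2048 \
  --scsi0 local-lvm:32 --net0 virtio,bridge=vmbr0 \
  --onboot 1

# Then boot it, same as right-click -> Start in the web UI:
qm start 301
```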
<p>You can start the new VM by right clicking it in the sidebar and selecting the "Start" option. You'll probably need to go through initial setup similar to what you did for Proxmox itself, which can be accessed in the "Console" tab:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1680070286090/710feb18-04ac-44fc-a3fa-194325407270.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-summary">Summary</h2>
<p>Congrats, you now have a working Proxmox instance that's hosting a virtual machine! While it doesn't really do much of anything yet, we'll get started on installing services on it right away.</p>
<p>If you're interested in adding a GPU or other devices to the VM, you can proceed to the next mini-section. Otherwise, move on to Part 3!</p>
]]></content:encoded></item><item><title><![CDATA[TurtleNet 1.5: PC Part Picking for small homelabs]]></title><description><![CDATA[This article is optional, and only concerns those who want to build a new server using consumer hardware. If you already have an old computer/Raspberry Pi, or are using a VPS, feel free to skip this article.

Introduction
This section is intended for...]]></description><link>https://devlog.bencuan.me/turtlenet-15-pc-part-picking-for-small-homelabs</link><guid isPermaLink="true">https://devlog.bencuan.me/turtlenet-15-pc-part-picking-for-small-homelabs</guid><category><![CDATA[hardware]]></category><category><![CDATA[Homelab]]></category><category><![CDATA[server]]></category><dc:creator><![CDATA[Ben Cuan]]></dc:creator><pubDate>Mon, 23 Jan 2023 07:41:36 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1673742666933/7224da05-c779-441f-8721-5af26a4a2909.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<blockquote>
<p><strong>This article is optional, and only concerns those who want to build a new server using consumer hardware.</strong> If you already have an old computer/Raspberry Pi, or are using a VPS, feel free to skip this article.</p>
</blockquote>
<h2 id="heading-introduction">Introduction</h2>
<p>This section is intended for those who already have some familiarity with consumer PC hardware. If this is not you, watch the video below for some more context about the parts that go into a computer, and how to build one:</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://www.youtube.com/watch?v=BL4DCEp7blY">https://www.youtube.com/watch?v=BL4DCEp7blY</a></div>
<p> </p>
<h2 id="heading-parts">Parts</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1673744477254/ff9e332c-047e-4412-9846-a6ab741577b6.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-cpu">CPU</h3>
<p>I chose the Ryzen 7 3700X (8 cores, 16 threads). A CPU with a decent core count (preferably at least 6c/12t) is a nice-to-have for running multiple VMs, since each VM will need at least 1 thread.</p>
<p>Going with AMD or Intel is a personal choice and there isn't any clear winner at the moment, but I ultimately decided on an AMD CPU for the socket compatibility (should I ever decide to upgrade).</p>
<h3 id="heading-ram">RAM</h3>
<p>I got 64GB of RAM. Like core count, this is another thing you'd ideally want a lot of to support running multiple VMs.</p>
<p>AMD Ryzen CPUs support <strong>unregistered ECC memory</strong>, which provides extra error correction (useful when your server will be on for many months without a reboot). However, unregistered ECC is pricey and hard to obtain, so I decided that it wasn't worth it for me. Maybe it is for you though! (Don't get <em>registered</em> ECC- that type of memory only works on server platforms like Xeon.)</p>
<h3 id="heading-gpu">GPU</h3>
<p>Unless you're also planning on running a cloud gaming setup with GPU passthrough like me, you really don't need a good GPU for a server since nearly everything will be headless (command line only). That being said, you still need <em>a</em> GPU to get output for initial setup and debugging if your CPU doesn't include integrated graphics.</p>
<p>An ideal homelab setup with GPU passthrough would have two GPUs: a low-powered cheap one for output from Proxmox, and a higher-powered GPU to pass through to one VM. (Once the second GPU is passed through, it will no longer be accessible by any other VM!) For my setup, I got a used GTX 1080 for ~$250 on eBay (the going price is likely lower by the time you read this- sort by price from low to high).</p>
<p>You can get a cheap used GPU like the Radeon HD 8350 if you only need basic video output for debugging purposes.</p>
<h3 id="heading-case">Case</h3>
<p>In general, any standard size case will do. The main feature you're looking for is the number of drive bays, assuming you're building in NAS support. The <a target="_blank" href="https://www.fractal-design.com/products/cases/define/define-r5/black/">Fractal Design R5</a> has 10 drive bays, making it an excellent choice for building a homelab. (I reused an old case from a previous non-server build, the <a target="_blank" href="https://pcpartpicker.com/product/CbqhP6/cooler-master-masterbox-nr600-wo-odd-atx-mid-tower-case-mcb-nr600-kgnn-s00">CoolerMaster NR600</a>.)</p>
<h3 id="heading-ssd">SSD</h3>
<p>You'll probably want some fast persistent storage to run Proxmox and the VMs themselves. You usually don't need a lot of capacity- most VMs will only require 32-64GB, and you'll only be running a handful.</p>
<p>I got a 1TB SSD; half of it is used by the Windows gaming VM though, so in practice it's just a 500GB SSD.</p>
<h3 id="heading-hard-drives">Hard Drives</h3>
<p>Typically, a storage server works by using software to combine many individual hard drives into a large pool.</p>
<p>If you're planning on using ZFS or RAID (i.e. having some redundancy in case of disk failure which <em>will</em> happen if you're running them 24/7), getting 3 or more identical disks will make configuration easier than mixing and matching.</p>
<p>Buy more disks than the actual capacity you want! You can find calculators online to see how much raw storage you need to achieve a certain usable capacity (<a target="_blank" href="https://jro.io/capacity/">ZFS here</a>, <a target="_blank" href="https://www.raid-calculator.com/">RAID here</a>). Generally, buying a little less than double your desired capacity is a safe bet (for example, if you want 10TB, 4x4TB (16TB raw) in 4-wide RAIDZ1 yields roughly 12TB usable).</p>
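<p>The back-of-the-envelope version of that RAIDZ1 calculation, ignoring ZFS metadata overhead (real usable space will be a bit lower):</p>

```shell
# Approximate usable capacity: RAIDZ1 loses one disk's worth of space to
# parity, so n identical disks of s TB give roughly (n - 1) * s TB.
raidz1_usable_tb() {
  echo $(( ($1 - 1) * $2 ))
}

raidz1_usable_tb 4 4   # 4x 4TB disks: prints 12
```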
<h3 id="heading-ups">UPS</h3>
<p>An external UPS (Uninterruptible Power Supply) is highly recommended, since it will protect against power surges and outages by allowing your server to gracefully shut down whenever main power is cut.</p>
<p>When purchasing a UPS, look for Pure Sine Wave output (not "simulated" or "stepped approximation" sine wave)- although more costly, it reduces the risk of damage to your components.</p>
<p>The most economical method of purchasing a UPS is to get it refurbished from a reseller such as <a target="_blank" href="https://excessups.com/">ExcessUPS</a>. You may also need to purchase a new battery for it (these are fairly standardized and found in many stores, both online and retail).</p>
]]></content:encoded></item><item><title><![CDATA[TurtleNet 1: Setup, and The Big Picture]]></title><description><![CDATA[Enough with the intro fluff- let's jump right into it!
Here's the architecture diagram from the last article. I'll walk through what each part means, and how it'll correspond to stuff we have to set up. (Click here for a bigger version!)


The purpos...]]></description><link>https://devlog.bencuan.me/1-setup</link><guid isPermaLink="true">https://devlog.bencuan.me/1-setup</guid><category><![CDATA[Homelab]]></category><category><![CDATA[server]]></category><dc:creator><![CDATA[Ben Cuan]]></dc:creator><pubDate>Mon, 23 Jan 2023 07:40:52 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1673742383963/d39cb10b-1b62-4dc6-b8c9-c76fc4c16ac6.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Enough with the intro fluff- let's jump right into it!</p>
<p>Here's the architecture diagram from the last article. I'll walk through what each part means, and how it'll correspond to stuff we have to set up. (<a target="_blank" href="https://www.figma.com/file/Ndyn8rMc8frfnPrywwN44f/TurtleNet?node-id=8%3A9&amp;t=S5GSEXGUlXGCWUbB-1">Click here</a> for a bigger version!)</p>
<iframe style="border:1px solid rgba(0, 0, 0, 0.1)" src="https://www.figma.com/embed?embed_host=share&amp;url=https%3A%2F%2Fwww.figma.com%2Ffile%2FNdyn8rMc8frfnPrywwN44f%2FTurtleNet%3Fnode-id%3D8%253A9%26t%3DS5GSEXGUlXGCWUbB-1" width="800" height="450"></iframe>

<p>The purpose of this particular article is to give a very high-level overview of the components in my setup, such that someone already familiar with how and why we need each part can see which solutions I chose.</p>
<p>If you're not sure what most of these things are, that's alright! I'll break down each step in future parts.</p>
<h2 id="heading-a-summary-of-the-summary">A Summary of the Summary</h2>
<p>Here's the super short 30 second version of this already heavily condensed article for all the busy people out there!</p>
<p>A <strong>homelab</strong> refers to a server whose hardware is fully controlled by the person hosting it, and lives in a non-commercial environment like a home or school.</p>
<p>Using server hardware (which is basically just another computer), we can run software like Proxmox to manage <strong>virtual machines (VMs),</strong> which are full operating systems run within the server.</p>
<p>VMs allow us to host a variety of <strong>services</strong> using the same hardware, even if they have different requirements.</p>
<p>To allow my VMs to talk to each other (and to host private services that only I and trusted users can connect to), I use ZeroTier, which is a software-defined networking solution that simulates a network switch online.</p>
<p>Then, to allow others to access my public services, I run a <strong>reverse proxy</strong> on an externally-hosted VM with a public IP address to redirect all requests to their internal locations without exposing where they really are. I then create DNS records to map friendly domain names (like blog.bencuan.me) to that public IP address.</p>
<h2 id="heading-the-hardware">The Hardware</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1673742107759/199c3f67-8ced-4230-8631-01b637ea91de.png" alt class="image--center mx-auto" /></p>
<p>Here's my physical server! As you can see, it's basically just a modern mid-range consumer desktop.</p>
<p><strong>Interested in building your own computer for the specific purpose of homelabbing?</strong> Jump to Part 1.5 for a guide explaining why I chose those particular parts.</p>
<h3 id="heading-why-did-i-use-consumer-parts">Why did I use consumer parts?</h3>
<p>It's generally more popular for established homelabbers to build a full rack-mount setup using enterprise hardware, since that's the kind of stuff that's designed to be running as a server.</p>
<p>However, for newer homelabbers and those who don't need as much performance, I believe that using consumer hardware is generally a better choice for the following reasons:</p>
<ul>
<li><p><strong>Price and availability:</strong> Unless you're good at finding when companies are throwing out their old hardware and nabbing it, consumer parts are way easier to obtain. You can buy consumer parts for a reasonable price pretty much anywhere (Amazon, Best Buy, Newegg, etc). My setup cost slightly over $1000, which is about what you'd spend on a desktop anyways.</p>
</li>
<li><p><strong>Noise:</strong> Consumer hardware is way quieter, which is a huge plus if you're running it in your house and don't want to annoy everyone in a 500-foot radius of your server.</p>
</li>
<li><p><strong>Electricity usage:</strong> Unless you're running some crazy setup, consumer hardware shouldn't bring up your electricity bill by that much (after all, you're just running a second computer in the house). My particular setup runs at 80W for most of the time (equivalent to a handful of bright LED lightbulbs). On the other hand, rack setups can easily draw hundreds to thousands of watts, since they're optimized for performance rather than electricity usage.</p>
</li>
</ul>
<h3 id="heading-alternatives">Alternatives</h3>
<p>As I've mentioned a couple times previously, my setup of a purpose-built consumer PC is only one possible way to start a homelab! Here are a few others, and some notes about them:</p>
<ul>
<li><p><strong>Using an old desktop/laptop or Raspberry Pi:</strong> This is the most economical choice, and gives you more than enough power to get started! If you're only running simple web servers or a handful of Docker containers, this might be all you need. However, if you expect to outgrow it soon, it might be better just to go for it and purchase the parts for a new one.</p>
</li>
<li><p><strong>Using a cloud provider:</strong> If you just want to get the hang of configuring server software or run a simple service purely for its utility, homelabbing may not be for you just yet! You can purchase a VM from a provider like DigitalOcean, AWS EC2, Oracle Cloud, or Linode, which serves the same purpose as a self-hosted VM but without control over the hardware that runs it. Typically, these VMs are billed monthly, but for light usage many providers offer a free tier.</p>
</li>
<li><p><strong>Using enterprise hardware:</strong> If you have a serious need for powerful hardware or want to take the hobby to the next level, building your own rack is the ultimate homelab setup. If you're curious, you can search <a target="_blank" href="https://www.youtube.com/results?search_query=homelab+rack">YouTube for "homelab rack"</a> to get a variety of examples of all shapes and sizes.</p>
</li>
</ul>
<h2 id="heading-networking">Networking</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1674027006170/c69da7c8-048e-4748-813a-cbcd0bf1acde.png" alt class="image--center mx-auto" /></p>
<p>When you imagine a server in a datacenter, one of the first images that comes to mind is probably the rows upon rows of networking cables:</p>
<p><img src="https://images.pexels.com/photos/2881232/pexels-photo-2881232.jpeg?auto=compress&amp;cs=tinysrgb&amp;w=1260&amp;h=750&amp;dpr=1" alt="Free Cables Connected to Ethernet Ports Stock Photo" /></p>
<p>Luckily, that isn't necessary when I only have one physical machine! Once that one machine is hooked up to the router, that's really all I need for now. In the future, I might <a target="_blank" href="https://www.pfsense.org/">run my own router software</a> and need a switch, but that isn't happening anytime soon.</p>
<p>There's been a recent wave of <a target="_blank" href="https://en.wikipedia.org/wiki/Software-defined_networking">software-defined networking (SDN)</a> solutions, which create virtual networks and use software, rather than hardware switches, to manage connections between machines in a network.</p>
<p>The particular solution I went with was ZeroTier, which assigns each machine on the virtual network an IP address that can only be resolved by other machines on that same network. (Don't worry about the fact that my diagram is full of IP addresses- they're all completely inaccessible to the public!)</p>
<p>Since I still have public services, I need some way to let outside traffic in without having every stranger connect to my ZeroTier network. So, I have one (free) Oracle Cloud VM that's assigned a public IPv4 address while connected to ZeroTier. Its only job is to run a reverse proxy, which exposes only the particular resources I want public- more on that later.</p>
<p>Another software-based alternative to ZeroTier is to run a VPN, such as OpenVPN or <a target="_blank" href="https://www.wireguard.com/">WireGuard</a>. From a security standpoint all of these solutions are relatively similar, but running a VPN requires forwarding a port, and you'll be unable to connect to the network if the host machine is down.</p>
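<p>For a sense of what the VPN route involves, here's a minimal sketch of a WireGuard client config. The addresses, endpoint, and key placeholders below are purely illustrative, not values from my actual setup:</p>

```ini
# /etc/wireguard/wg0.conf (hypothetical example)
[Interface]
# This machine's private key, and its address inside the VPN subnet
PrivateKey = CLIENT_PRIVATE_KEY
Address = 10.0.0.2/24

[Peer]
# The VPN host's public key, and the port you'd need to forward to reach it
PublicKey = SERVER_PUBLIC_KEY
Endpoint = vpn.example.com:51820
# Only route traffic destined for the VPN subnet through the tunnel
AllowedIPs = 10.0.0.0/24
```

<p>Note the <code>Endpoint</code> line: that's the forwarded port mentioned above, and it's also the single point of failure- if that host is down, so is the whole tunnel.</p>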
<h2 id="heading-virtual-machines">Virtual Machines</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1674027721352/859beb93-9289-43e2-8f6c-b7510a34333c.png" alt class="image--center mx-auto" /></p>
<p>Besides the Oracle Cloud ingress, I currently host six VMs on my main server, using Proxmox as a hypervisor:</p>
<ul>
<li><p>Korea hosts all of my internal services (which are only accessible to machines within my ZeroTier network).</p>
</li>
<li><p>Babylon is my development server, where I write most of my code and can quickly spin up live web servers if needed.</p>
</li>
<li><p>Arabia is my gaming VM. It has 6 cores, 20GB of RAM, 500GB of SSD storage, and a GTX 1080 assigned to it, so it's quite capable! I connect to it via <a target="_blank" href="https://parsec.app/">Parsec</a>, which has unbelievably low latency.</p>
</li>
<li><p>Venice is where I host my DNS server (Pihole) and monitoring suite (Prometheus and Grafana). This could have easily been absorbed into Korea, but I decided to keep it separate for future-proofing purposes (such as if I were to duplicate Pihole and load-balance it for redundancy, or add a VPN server to it).</p>
</li>
<li><p>Persia is my TrueNAS instance, which manages the hard drives passed through to it. All of the other VMs and physical machines in the network can access its pool like any other network drive (using NFS, Windows SMB, etc.).</p>
</li>
<li><p>Zulu hosts all of my external services, which are publicly accessible via some subdomain of bencuan.me.</p>
</li>
</ul>
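<p>As an example of how the other machines see Persia's storage: an NFS export from TrueNAS mounts like any other filesystem. The hostname and paths here are made up for illustration:</p>

```
# /etc/fstab entry (hypothetical hostname and share path)
# NFS export on the NAS          local mount point
persia.lan:/mnt/tank/share       /mnt/nas   nfs   defaults,_netdev   0   0
```

<p>The <code>_netdev</code> option tells the client to wait for the network before attempting the mount- which matters when the NAS itself is a VM that may come up after its clients.</p>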
<p>With the exception of Arabia and Persia (which run Windows and TrueNAS Core respectively), all of the VMs above run Ubuntu 22.04. I chose Ubuntu since it works right out of the box-- also, I learned pretty quickly that while building servers, you want your core infrastructure to be as boring as possible. There's no single correct distro choice, as long as you pick one that's reliable and familiar to you.</p>
<h2 id="heading-applications">Applications</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1674028118654/306e574a-01e3-484c-8796-7847c313a07b.png" alt class="image--center mx-auto" /></p>
<p>Now for the fun part! Here's all the stuff that actually lives on my setup. Future parts will go into far more detail about how you can set these things up for yourself. There's plenty more you can do with your own that I don't, but hopefully this gives you a taste of the power of homelabbing:</p>
<ul>
<li><p><strong>Reverse proxies:</strong> Caddy and Traefik. I use Caddy to reverse proxy external services, and Traefik to reverse proxy internal services. (Reverse proxying is basically mapping domains to local IPs and ports-- again, more on what this means exactly in a future part.) I have two because I can point the wildcard domain <code>*.t.bencuan.me</code> to my internal server (where Traefik is), and point all my public subdomains to the public IP of my Oracle VM (where Caddy is), making it impossible for general users to have any access to internal services whatsoever.</p>
</li>
<li><p><strong>Docker:</strong> I use docker-compose and Portainer to manage Docker, which is how I containerize and persist services such that multiple services can be easily run within the same VM.</p>
</li>
<li><p><strong>Backup:</strong> Duplicati and Syncthing allow duplication of my most important data onto other devices like my laptop and a friend's server, so it can be restored in case my server blows up.</p>
</li>
<li><p><strong>Monitoring:</strong> Prometheus and Grafana collect and aggregate data about metrics like disk/CPU/RAM usage. Uptime Kuma provides a nice-looking status page to let users (and myself) know if something isn't working.</p>
</li>
<li><p><strong>Documentation:</strong> Focalboard and Outline provide some platforms to host private documentation.</p>
</li>
<li><p><strong>Blogging:</strong> Ghost provides a self-hosted Medium alternative that powers my blog (<a target="_blank" href="https://blog.bencuan.me">blog.bencuan.me</a>).</p>
</li>
<li><p><strong>Analytics:</strong> Matomo is a self-hosted version of Google Analytics. Self hosting allows me to fully own the data I collect, and ensure that I'm respecting the privacy of users while still gaining helpful insights about what people are looking at.</p>
</li>
<li><p><strong>API:</strong> I have a custom API written in Go, which allows me to host custom endpoints to serve things to my various websites when needed. The main feature I use this for is to enable the <a target="_blank" href="https://blog.bencuan.me/applause-test/">applause button</a> on my blog.</p>
</li>
<li><p><strong>Shorturls:</strong> Shlink is a self-hosted version of TinyURL that allows me to create aliases starting with <code>s.bencuan.me</code>- this is pretty helpful for sharing links.</p>
</li>
<li><p><strong>Content delivery:</strong> Projectsend provides a way for me to host and share files with others. I mainly use this to deliver proprietary fonts to Netlify during the build process.</p>
</li>
<li><p><strong>Archival:</strong> Paperless allows me to archive scans of physical documents, and ArchiveBox is like a self-hosted Internet Archive that allows me to save webpages locally in case they go down in the future.</p>
</li>
<li><p><strong>Dashboard:</strong> Heimdall provides me with a cool new tab page to hold links to all of the above services and some frequently used external services.</p>
</li>
</ul>
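<p>To give a flavor of what the reverse proxy config looks like, here's a minimal Caddyfile sketch. The domain, upstream IP, and port are placeholders (2368 happens to be Ghost's default port), not my real values:</p>

```
# Caddyfile (hypothetical domain and upstream address)
blog.example.com {
    # Forward requests for this subdomain to an internal machine;
    # Caddy obtains and renews the TLS certificate automatically.
    reverse_proxy 10.147.17.5:2368
}
```

<p>A nice property of Caddy is that HTTPS is on by default- simply naming the site is enough for it to provision a certificate.</p>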
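<p>And here's a minimal docker-compose sketch of the kind of service definition that runs on these VMs- Uptime Kuma in this case. The port and volume mappings follow the image's documented defaults, but treat this as a starting point rather than my exact config:</p>

```yaml
# docker-compose.yml (illustrative)
services:
  uptime-kuma:
    image: louislam/uptime-kuma:1
    ports:
      - "3001:3001"        # web UI
    volumes:
      - ./data:/app/data   # persist state across container restarts
    restart: unless-stopped
```

<p>The <code>restart: unless-stopped</code> policy is what keeps services running through reboots without any extra supervision.</p>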
<h2 id="heading-subdomains">Subdomains</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1674457872351/32fe8008-2592-4484-b9ae-8e90542b6ab5.png" alt class="image--center mx-auto" /></p>
<p>Once my external services are up, users need a way to reach them. I happen to use Cloudflare as a DNS provider, but it really doesn't matter which one you use.</p>
<p>For all of the domains listed in the "Self Hosted" box, I have an A record mapping the domain name to my ingress server's public IP address. The Caddy instance hosted there then routes users to the desired resource.</p>
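<p>In DNS terms, each of those mappings is a single record. In zone-file notation (with a placeholder IP from the documentation range, not my real ingress address), the whole trick looks like this:</p>

```
; Every self-hosted subdomain points at the same ingress IP
blog    IN  A  203.0.113.10
s       IN  A  203.0.113.10
status  IN  A  203.0.113.10
```

<p>The ingress server's reverse proxy then inspects the Host header of each incoming request to decide which internal service it's for.</p>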
<p>My static sites are mostly hosted on Netlify, since it gives me more power than alternatives like GitHub Pages to configure builds and custom environments. If you're really into self-hosting everything, you could look for a self-hostable CI/CD solution for static site hosting, but I didn't find it worth the effort, since the result would likely offer a poorer user experience than Netlify.</p>
]]></content:encoded></item><item><title><![CDATA[Welcome to TurtleNet!]]></title><description><![CDATA[Hello!
Over the past year, I've been building up my personal server infrastructure. I wanted to share my experiences here, in the hopes of filling in one of the biggest gaps in the traditional computer science curriculum: how to host and share the co...]]></description><link>https://devlog.bencuan.me/welcome-to-turtlenet</link><guid isPermaLink="true">https://devlog.bencuan.me/welcome-to-turtlenet</guid><category><![CDATA[Homelab]]></category><category><![CDATA[server hosting]]></category><category><![CDATA[self-hosted]]></category><dc:creator><![CDATA[Ben Cuan]]></dc:creator><pubDate>Thu, 12 Jan 2023 07:09:54 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1673641963709/d5fd4212-1617-4ab4-b7f2-7054fd1247ec.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hello!</p>
<p>Over the past year, I've been building up my personal server infrastructure. I wanted to share my experiences here, in the hopes of filling in one of the biggest gaps in the traditional computer science curriculum: <strong>how to host and share the cool things we've created, and what "production deployment" really means behind the scenes.</strong></p>
<p>I believe that running a server of some sort- whether it be a free cloud instance, a Raspberry Pi, or a <a target="_blank" href="https://www.ocf.berkeley.edu/docs/staff/backend/servers/">system serving thousands of students</a> (join the OCF!)- should be something all CS students (really anyone curious about computers) try out.</p>
<p>In this series, I want to not only show you an example of <em>what</em> a homelab setup could look like, but also <em>why</em> I decided on the architecture I did, so that you can make your own decisions about which parts to keep or change. <strong>This is not a step-by-step guide</strong> (there are lots of YouTube tutorials and blogs that help you achieve whatever you need to, some of which I'll link below)- I would rather help you build the intuition you need to debug, solve problems, and make decisions about your own setup.</p>
<h2 id="heading-some-prerequisites-and-assumptions">Some prerequisites and assumptions</h2>
<p>Since this resource is intended for those with little to no experience with self-hosting, I <em>won't</em> assume prior exposure to core concepts such as:</p>
<ul>
<li><p>Servers and Hypervisors</p>
</li>
<li><p>Virtual machines</p>
</li>
<li><p>Domains, HTTPS/TLS</p>
</li>
<li><p>(Reverse) proxies</p>
</li>
<li><p>Docker and containerization</p>
</li>
</ul>
<p>However, if I had to explain absolutely everything, we'd probably be here for a very long time. So I'll have to make some assumptions about what you know, and where to go if these assumptions do not hold.</p>
<ol>
<li><p>You've used the terminal before, and recognize basic Unix commands like <code>cd</code>, <code>ls</code>, <code>cat</code>, <code>ssh</code>, and <code>man</code>. You're also able to make simple edits to text files from the command-line interface (CLI) using <code>vim</code>, <code>nano</code>, etc.</p>
<ol>
<li>Go <a target="_blank" href="https://www.youtube.com/watch?v=TXNpIIlcHm4">here</a> for a quick shell demo, or go <a target="_blank" href="https://decal.ocf.berkeley.edu/archives/2022-spring/">here</a> for a full Linux course hosted by the OCF.</li>
</ol>
</li>
<li><p>You know how to find help if you get stuck on a bug, or want to understand more about what you're doing.</p>
<ol>
<li>Google, Stack Overflow, Reddit, ArchWiki, and other online forums should be your go-to!</li>
</ol>
</li>
<li><p>You're familiar with common computer terms like "operating system", "disk", "RAM", "CPU", "ethernet", and "IP address".</p>
<ol>
<li>Not sure what those are? Here's an exercise for the previous point- you should be able to figure them out using your online resource of choice.</li>
</ol>
</li>
</ol>
<h2 id="heading-so-what-is-a-server-and-why-should-i-run-my-own">So what is a server, and why should I run my own?</h2>
<p>As you may be aware, the internet is made up of millions of devices, all interconnected through a complex series of cables, fiber optics, and wireless endpoints.</p>
<p>All of these devices agree on some common <strong>protocols</strong>, so they can understand each other. Some examples include:</p>
<ul>
<li><p>IP (Internet Protocol), which assigns addresses to hosts so you can contact them</p>
</li>
<li><p>TCP (Transmission Control Protocol), which enables (mostly) reliable delivery of information between hosts</p>
</li>
<li><p>HTTP (Hypertext Transfer Protocol), which allows web applications to send and receive information</p>
</li>
</ul>
<p>If you type a website like <code>google.com</code> into your browser's search bar, all that's happening is that you're connecting to another device on the Internet. But since the Google computer is a) accessible from another computer and b) sends you data that you requested from it, it's known as a <strong>server</strong>.</p>
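<p>To see how little magic is involved, here's a tiny Python sketch that turns your own machine into a (very basic) HTTP server using only the standard library. To keep it self-contained, it also plays the client and fetches its own page- a real service would just call <code>serve_forever()</code> in the foreground and stay up:</p>

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class HelloHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Respond to every GET request with a small plain-text body
        body = b"Hello from my homelab!"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging for the demo

# Bind to port 0 so the OS picks any free port, and serve in the background
server = HTTPServer(("127.0.0.1", 0), HelloHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Now act as our own client- exactly what a browser does when you type a URL
url = f"http://127.0.0.1:{server.server_port}/"
response = urllib.request.urlopen(url).read().decode()
server.shutdown()
```

<p>That request/response round trip is the entire transaction; protocols like TCP and HTTP just standardize how the two sides talk so that any client can reach any server.</p>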
<p>So, if there's some sort of service you'd like to host and allow others (or maybe just yourself) to use, then running a server is the way to go. Common services include:</p>
<ul>
<li><p>Creating a <a target="_blank" href="https://nextcloud.com/">Google Drive-like storage server</a> to get terabytes of cheap cloud storage in a place you trust</p>
</li>
<li><p>Running a <a target="_blank" href="https://www.plex.tv/">media server</a> to share photos and videos with friends</p>
</li>
<li><p>Hosting an <a target="_blank" href="https://www.google.com/search?client=firefox-b-1-d&amp;q=pihole">ad blocker</a> for your entire home network</p>
</li>
<li><p>Hosting your own <a target="_blank" href="https://ghost.org/">blog</a></p>
</li>
<li><p>Running game servers, or even hosting a VM for cloud gaming with <a target="_blank" href="https://parsec.app/">Parsec</a></p>
</li>
</ul>
<p>The list goes on and on- if there's something you use online (search engine, document editor, internet archive, social media...) chances are there's a way to self-host it.</p>
<p>Self-hosting has a variety of benefits, mostly in exchange for the cost (monetary and time) of running the server to host them on:</p>
<ul>
<li><p><strong>Control your own data and privacy options:</strong> Instead of trusting a big company like Google or Facebook to keep your sensitive data, you can put it on your own machine on your own network, and have full control over who can access that data.</p>
</li>
<li><p><strong>Recycle old hardware:</strong> Servers don't need to be powerful datacenter beasts with a thousand ports and 100 petaflops of compute; you can get quite far with an old laptop or desktop that's collecting dust in the closet.</p>
</li>
<li><p><strong>Get valuable experience:</strong> As I mentioned earlier, the skills you pick up from self-hosting things aren't typically taught in school, and are critical for many software-related careers.</p>
</li>
<li><p><strong>Have a cool hobby:</strong> Running a server can be pretty fun and rewarding, just like other hobbies where you make stuff! Even if you somehow don't find something super useful to host, it's quite satisfying to tinker around and get things to work.</p>
</li>
</ul>
<h2 id="heading-homelabbing-is-not-cloud-computing">Homelabbing is Not Cloud Computing</h2>
<p>I've been throwing around the term "homelab" a bit. It's not exactly an official term, but it's been adopted to mean any server where the hardware exists in your home or another physical location you control. (<a target="_blank" href="https://linuxhandbook.com/homelab/">Here's an article</a> that reiterates this, with some extra info.)</p>
<p>Homelabbing is the form of self-hosting that I'll focus on, since it's what I do- but there are plenty of other methods of running your own services to choose from.</p>
<p>A popular choice is to buy a VPS (Virtual Private Server) or VM from a provider online, such as DigitalOcean or Linode. This allows you to install and run whatever software you want, without the hassle of managing the hardware itself. If you choose to do this, skip directly to Part 3.</p>
<p>Another alternative, which is more popular in corporate settings, is the <a target="_blank" href="https://aws.amazon.com/serverless/">serverless</a> approach, in which you only manage your application, and all of the server configuration is left to the provider. Common serverless providers include AWS, GCP, and Azure. I would <em>not</em> recommend this for personal setups, since it can get really expensive for personal use and doesn't provide the benefit of helping you understand how servers work.</p>
<h2 id="heading-introducing-turtlenet">Introducing TurtleNet</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1673740486045/08cdb443-459e-48b3-b031-defa48f93f55.png" alt class="image--center mx-auto" /></p>
<p>TurtleNet is my personal setup, and what I'll be modeling this guide after. Over the course of this series, I'll explain all the components in the diagram above, and the decisions that went into designing them!</p>
<p>To reiterate, I don't want you copying every bit of my architecture- there's a lot of stuff that makes sense for me, but probably won't for your use case. I'll try to mention common alternatives whenever they arise.</p>
<h2 id="heading-this-series-is-a-living-document">This series is a living document</h2>
<p>As I gain more experience and run into more bugs, I'll update this guide accordingly. This is for myself as much as it is for you-- what I write here serves as documentation in case my server blows up and I need to recreate everything!</p>
<h2 id="heading-resources">Resources</h2>
<p>Here are some resources that I used while setting up TurtleNet:</p>
<ul>
<li><p><a target="_blank" href="https://www.youtube.com/@TechnoTim">TechnoTim</a> and <a target="_blank" href="https://www.youtube.com/@CraftComputing">Craft Computing</a>: two homelabbing YouTube channels that have very well-made guides</p>
</li>
<li><p><a target="_blank" href="https://www.reddit.com/r/homelab/">r/homelab</a> and <a target="_blank" href="https://www.reddit.com/r/selfhosted">r/selfhosted</a>: a source for inspiration and to see what everyone else is doing with their setups</p>
</li>
</ul>
]]></content:encoded></item></channel></rss>