Nick Plunkett


tl;dr: Don’t sign up for a paid Proton account if you think you might ever cancel. Their cancellation flow is built around a dark pattern: they refuse to let you unsubscribe until you work through an arbitrary, weeks-long process, and even then you still might not be able to cancel!

I have been a long-time fan of Proton; I signed up for Proton Mail not long after it first became available to US users. I used it as a secondary email for a long time, with my main email routing through a legacy free G Suite account that I had kept from signing up for that service early on as well.

When Google announced it was sunsetting free G Suite accounts, I naturally gravitated toward Proton and made it my primary email provider - signing up for a paid subscription and everything. Over time I grew annoyed with how Proton handled filtering and email notifications, and Google announced it would keep supporting legacy free accounts for the foreseeable future, so I decided to migrate back. I didn’t cancel the auto-renewal of my subscription right away - a rookie mistake. When I later got an email saying my subscription had renewed for another year, I figured it was time to cancel. I would eat the cost of this year and be free next year - that was fine, as long as I could cancel.

I was met with the most user-hostile system I’ve encountered, one that seems built around trapping people in their Proton subscriptions. They do not let you downgrade your account if you are using more storage than the free tier allows, which meant I first had to delete data to get under that limit before I could even attempt to downgrade.

I’ve recently been looking for an easily deployable sFlow collector to receive and index sFlow data from various network hardware. I came across this post on sflow.com and am sharing it here because it was extremely helpful for me. I was able to get an sFlow collector up within 10 minutes using their Git repository and Docker, and once I had my network hardware configured, I could see traffic broken down by source and destination ASN, with implied country-level traffic as well. This is super powerful software that is all free and open source; I would highly recommend it if you’re just getting into sFlow data ingestion and querying.

https://blog.sflow.com/2023/07/deploy-real-time-network-dashboards.html

Internet archive link in case the previous link becomes unavailable: http://web.archive.org/web/20231004203143/https://blog.sflow.com/2023/07/deploy-real-time-network-dashboards.html
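
If you just want to kick the tires before following the full walkthrough, the standalone sFlow-RT collector can be started with a single Docker command. This is a minimal sketch based on how I understand the public sFlow-RT image and its default ports (UDP 6343 for incoming sFlow, TCP 8008 for the web UI and REST API) - double-check against the post above:

docker run -d --name sflow-rt -p 6343:6343/udp -p 8008:8008 sflow/sflow-rt

From there, point your network hardware’s sFlow agents at the collector’s IP on UDP 6343 and browse to port 8008 to start querying.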

The next speed step in pluggable optics is here, and pricing is reaching tolerable levels. While they still use significantly more power than QSFP optics, the actual unit cost of the pluggables is reaching sub-$500-per-piece levels, which makes them viable in many more use cases.

I’ve been researching these in order to get myself more familiar with what is available out there. These are my notes for what I’ve found so far. I’ll probably update these a few times as I learn and try more.

400G FR4

The 400G FR4 is a cost-effective pluggable optic that uses duplex LC SMF connectors and operates at a center wavelength of 1310nm. It has a maximum reach of 2km.

Features:

  • Duplex LC SMF connectors
  • 1310nm wavelength
  • 2km maximum reach
  • Cost-effective option - available for around $900 list price

Use Cases:

I recently downloaded and processed some large files on Windows Subsystem for Linux (WSL) on Windows 10. Once I was done with the files, I deleted them from my Ubuntu installation, but the VHD file was still taking up the same amount of space as it was before I deleted the files.

It turns out that Windows and WSL do not automatically shrink the VHD file when you delete files - WSL only automatically expands it as you use more space.

How to shrink the VHD file

Locate the VHD file in your user directory of Windows. For me, this was: “C:\Users\Nick Plunkett\AppData\Local\Packages\CanonicalGroupLimited.Ubuntu_79rhkp1fndgsc\LocalState\ext4.vhdx”

In order to reclaim the disk space that is no longer in use in WSL, open a PowerShell window and run the following commands:
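
A sketch of the usual diskpart-based sequence - shut down WSL first, run PowerShell as Administrator, and adjust the vdisk path to wherever your ext4.vhdx lives:

wsl --shutdown
diskpart

Then, inside the diskpart prompt:

select vdisk file="C:\Users\Nick Plunkett\AppData\Local\Packages\CanonicalGroupLimited.Ubuntu_79rhkp1fndgsc\LocalState\ext4.vhdx"
attach vdisk readonly
compact vdisk
detach vdisk
exit

After detaching, the ext4.vhdx should shrink down to roughly the space actually in use inside the Ubuntu installation.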

I was recently chatting with a coworker, talking through how we use MPO cables and various optics. It made me realize that I have a lot of “in my head” knowledge about fiber cables and optics from working in the networking space over the past 5 years or so, but I haven’t found a good online reference to point people to, and that information would be better written down somewhere than living only in my head. So here it is - everything that is floating around in my head about MPO cables. I’ve done my best to double-check what I’m writing here, so it should be mostly accurate.

What is MPO?

MPO (Multi-Fiber Push On) in common terms refers to both the connector and the cable itself. It is a cable that contains (currently, in 2023) either 8, 12, or 24 fibers terminated onto a single connector, male or female, on both sides of the cable. The cables are available in single-mode (OS2) and multimode (OM3, OM4, OM5) formats.

Where is MPO used?

For typical use cases, MPO cables are used in the following applications:

I have recently been integrating Peering Manager into my network deployment at my day job in order to help automate our BGP configuration and management. We run Arista switches acting as routers across our entire footprint.

Peering Manager has NAPALM integration built in, both for managing and deploying configuration and for polling device status. However, for Arista devices this requires the Arista eAPI to be enabled on the router, and it must be running in HTTPS mode. That means you need some sort of security certificate installed.

I hadn’t dealt with this before, and it wasn’t straightforward - I wasn’t able to find great documentation online for how to do it. Below is my process for generating a private key, using that key to generate a self-signed certificate, and then using that certificate to allow HTTPS connections to the router over the management interface for eAPI command and control.

  1. Generate a private key:
router# security pki key generate rsa 2048 self-signed.key
  2. Generate a self-signed certificate using that key:
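
As a rough sketch - the exact syntax varies a bit by EOS release (check security pki certificate generate ? on your router), and the certificate and profile names here are just examples - the certificate step looks something like this:

router# security pki certificate generate self-signed self-signed.crt key self-signed.key parameters common-name router1.example.com

Then eAPI needs to be told to use that certificate via an SSL profile, along these lines:

management security
   ssl profile eAPI
      certificate self-signed.crt key self-signed.key
!
management api http-commands
   protocol https ssl profile eAPI
   no shutdown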

Recently, as a part of network automation at $dayjob, I have been provisioning Salt across our network footprint. One particular problem I’ve run into is that we use a dedicated management VRF on all of our devices.

This was an issue because, by default, commands run from Bash on Arista EOS execute in the default VRF, and in that state we can’t communicate with our management IP networks. There just isn’t a route to the management networks in the default routing table.

Our Salt server only has a Management VRF IP address, and we did not want to configure a proxy to make the Salt master reachable outside the Management VRF.

I previously had no experience with management VRFs on Linux, and there were no articles that were particularly helpful for running commands specifically in the management VRF of an Arista switch from within the Bash/Linux shell.

If you find yourself in this same situation, you’ll want to do the following, assuming you were able to get Salt Minion installed on the switch already.
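
EOS exposes each VRF to Linux as a network namespace named ns-<VRF name>, so the trick is to run the minion (and any reachability tests) inside that namespace. A minimal sketch, assuming the management VRF is literally named MGMT and the Salt master sits at 192.0.2.10 - substitute your own VRF name, master address, and minion path. First confirm the namespace name that corresponds to your management VRF:

ip netns list

Then verify the Salt master is reachable from inside that namespace, and start the minion there:

sudo ip netns exec ns-MGMT ping 192.0.2.10
sudo ip netns exec ns-MGMT /usr/bin/salt-minion -d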

I recently have been working on a home network buildout in my new home. One of the features I was excited to implement was PoE - Power over Ethernet. My main use case for PoE in my home network was to power small desktop switches and routers near the wall-mounted ethernet ports, in order to eliminate unnecessary wall-wart-style AC-to-DC power adapters.

I purchased a NETGEAR 16-Port Gigabit Ethernet Unmanaged PoE Switch (GS116PP), one of the highest power budget consumer level fanless switches I was able to find on the current market. It has 16 1Gb ports with a total PoE budget of 183W - more than enough for my needs for now and into the future when it will eventually need to be replaced with a 2.5Gb version. It also supports up to PoE+ and can supply up to 30 watts of power on each PoE port - perfect for my intended use case.

I had gotten a little ahead of myself though, because the only gear I have to power on the other end of the PoE connection is a Mikrotik HeX router. One of the key features of this router is that it can be powered entirely by PoE on the first port - no wall adapter required.

One thing that the HeX documentation leaves vague is that it requires passive PoE - the device itself is not capable of negotiating PoE with the power source. The Netgear switch, however, doesn’t supply passive PoE - an older, non-standard approach at this point - it only supplies standards-based 802.3af/at power.

The HeX therefore needed the standard PoE converted to passive PoE, which only took a cheap $30 adapter from Ubiquiti - once that was in place, the device sensed the passive power, came online, and I could ditch the wall wart!

I’ve always loved using my Mac for network engineering tasks - with its UNIX-like underpinnings, it feels built from the ground up for the job. I’ve recently been trying a Windows laptop for network engineering work so I don’t pigeonhole myself into one specific platform. One seemingly small but high-friction annoyance in the terminal on Windows, by default, is that double-clicking an IPv4 address only highlights the first octet. This small annoyance builds up over time when working with tons of IPs and usually pushes me back to macOS.

However, I recently learned about word delimiters in the various OSes. From what I understand, macOS handles double-click highlighting at a system-wide level, while Windows lets each application handle it independently.

In the Windows Terminal settings, you can manually set your word delimiters - this is where the magic happens.
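
Concretely, this lives in the wordDelimiters setting in Windows Terminal’s settings.json. As a sketch (the exact default delimiter string varies by version, so start from whatever yours shows), remove the period from the list so a double-click no longer breaks on the dots in an IPv4 address:

"wordDelimiters": " /\\()\"'-,:;<>~!@#$%^&*|+=[]{}~?"

With the period removed, double-clicking 192.168.1.1 selects the whole address instead of just the first octet.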

In October of 2018, I gave a brief Lightning Talk presentation at NANOG 74. It covered how my colleagues at CENIC and I identified and deployed a crafty low-cost metro DWDM solution across CENIC’s network backbone. The video recording of this talk is available at the YouTube link above.
