Archive for the ‘Uncategorized’ Category

Bandwidth monitoring

Saturday, May 6th, 2017

I recently posted on Facebook and G+ about how the TWC buy-out by Spectrum has done a great deal of good for me – I’m actually getting all of the bandwidth that I pay for, and then some. I can see the improvement on my bandwidth graphs, which are generated by home-grown (i.e. messy and hackish) scripts. Some people asked for the sources of those scripts, and I promised I’d put something together, so here it is.

The first component is the speedtest-cli script, which is a command-line interface to the Speedtest bandwidth test. I’m using version 0.3.4, but I see no reason why newer versions wouldn’t work just as well. I don’t modify that script in any way, so I’m not going to post it here.
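For reference, the output of a run with --simple --share looks roughly like this (the numbers and result URL here are invented, but the line layout is what the sampler script below relies on):

```
Ping: 11.3 ms
Download: 287.45 Mbit/s
Upload: 19.87 Mbit/s
Share results:
```

The sampler splits each line on whitespace, which is why it grabs field [1] of the first three lines and field [2] of the last one.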

The next component is the sampler script. This is a purely home-grown script, which I never intended to be used anywhere other than my network, so I won’t guarantee it will work anywhere else. Here it is:


import sys
import os
import subprocess
import rrdtool
import datetime
import urllib
import tweepy
from token import *

curdate =
datestr = curdate.strftime("%Y%m%d%H")
resfile = 'speedtest-results-'+datestr+'.png'

readings = subprocess.check_output(["/usr/bin/python", "/root/", "--simple", "--share", "--secure"])

ping = readings.split('\n')[0].split()[1]
download = readings.split('\n')[1].split()[1]
upload = readings.split('\n')[2].split()[1]
image = readings.split('\n')[3].split()[2]

rrdtool.update('pingtime.rrd', 'N:'+ping)
rrdtool.update('downloadspeed.rrd', 'N:'+download)
rrdtool.update('uploadspeed.rrd', 'N:'+upload)
urllib.urlretrieve(image, resfile)

auth = tweepy.OAuthHandler(consumer_token, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
api = tweepy.API(auth)

msg = 'Download speed %s, upload speed %s, ping time %s' % (download, upload, ping)

#msg = "@jwbernin: speed problem: %s is only %s"

#if ( float(download) < 240.0 ):
# api.update_status("I'm paying for 300 Mbit down, why am I only getting %s Mbit?" % download)

#if ( float(upload) < 16.0 ):
# api.update_status("I'm paying for 20 Mbit up, why am I only getting %s Mbit?" % upload)

print ping
print download
print upload
print image

Some things to note about this script… First, it uses the tweepy module to post the results of each test to Twitter. The authentication information is in a separate file, “”, that I will not be posting here. That file contains only four string variables, and those strings are used only to authenticate the tweepy agent. Next, it also imports the rrdtool module, and uses RRDTool to record data. I’ll leave the creation of the RRDs as an exercise for the reader, since it’s a fairly simple process.
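Since I’m leaving the RRD creation as an exercise, here’s a sketch of what the create commands might look like. The step of 3600 matches the hourly sampling, and the data source names (“MBps” and “ms”) match the DEF lines in my graphing pages; everything else (heartbeat, RRA depth) is my assumption, not my actual commands:

```python
# Sketch: build the `rrdtool create` command for each of the three databases.
# A step of 3600 means one sample per hour, matching the cron schedule.
def create_cmd(fname, ds):
    return ['rrdtool', 'create', fname,
            '--step', '3600',                 # one sample per hour
            'DS:%s:GAUGE:7200:0:U' % ds,      # heartbeat = 2x step, no upper bound
            'RRA:AVERAGE:0.5:1:8784']         # roughly one year of hourly points

cmds = [create_cmd('downloadspeed.rrd', 'MBps'),
        create_cmd('uploadspeed.rrd', 'MBps'),
        create_cmd('pingtime.rrd', 'ms')]
for c in cmds:
    print(' '.join(c))
```

Run those once (or call rrdtool.create() from Python with the same arguments) and the sampler’s update calls will have somewhere to land.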

The script prints out the upload and download speed, the measured latency, and the URL for the results image, all of which get sent to email since I run this through cron. It also saves the image file in a directory on my firewall – which reminds me that I need to go clean things up. Excuse me a bit while I take care of that…

Okay, I’m back now. So I’ve sampled my bandwidth every hour and recorded it into RRDs. Now, how to display it? I do that with PHP. First, I have a basic page with the last 24 hours of data for upload, download, and latency.

This is a bit longer, so here we go:


<?php
$rrdDir = '/net/gateway/usr/local/stats/';
$imageDir = '/var/www/html/netspeedGraphs/';

$graphsAvailable = array (
    'downloadspeed' => array ('Download speed', 'MBps', 'MBps'),
    'uploadspeed'   => array ('Upload speed', 'MBps', 'MBps'),
    'pingtime'      => array ('Ping time', 'ms', 'ms')
);

function callError($errorString) {
    print ("Content-Type: text/plain");
    print ("\n\n");
    printf ("Error message: %s", $errorString);
    exit;
}

$basicOptions = array (
    '-w', '700',
    '-h', '150',
    '--start', '-86400',
    '--end', 'now'
);

foreach ( array_keys($graphsAvailable) as $graph ) {
    $options = $basicOptions;
    $options[] = "--title";
    $options[] = $graphsAvailable[$graph][0];
    $options[] = "--vertical-label";
    $options[] = $graphsAvailable[$graph][1];
    $options[] = sprintf ("DEF:%s=%s:%s:AVERAGE", $graphsAvailable[$graph][2], $rrdDir.$graph.".rrd", $graphsAvailable[$graph][2]);
    if ( $graphsAvailable[$graph][0] == "Download speed" ) {
        $options[] = sprintf ("HRULE:300#00FF00:Max");
        $options[] = sprintf ("HRULE:240#0000FF:80 pct of max");
        $options[] = sprintf ("HRULE:200#FF00FF:Min guaranteed");
    }
    if ( $graphsAvailable[$graph][0] == "Upload speed" ) {
        $options[] = sprintf ("HRULE:16#0000FF:80 pct of max");
        $options[] = sprintf ("HRULE:20#00FF00:Max");
    }
    $options[] = sprintf ("LINE1:%s#FF0000", $graphsAvailable[$graph][2]);
    $options[] = sprintf ("PRINT:%s:LAST:Cur\: %%5.2lf", $graphsAvailable[$graph][2]);

    $tmpname = tempnam("/tmp", "env");
    $ret = rrd_graph($tmpname, $options);
    if ( ! $ret ) {
        echo "<b>Graph error: </b>".rrd_error()."\n";
    }
    $destname = sprintf ("%s%s.png", $imageDir, $graph);
    rename ($tmpname, $destname);
}
?>
<title>Network Speeds - Main</title>
<meta http-equiv="refresh" content="300">
<font size="+2"><b>John's Home Network Speeds</b></font><br/>
<a href="specific.php?sensorname=downloadspeed"><img src="netspeedGraphs/downloadspeed.png" border=0 /></a><br/>
<a href="specific.php?sensorname=uploadspeed"><img src="netspeedGraphs/uploadspeed.png" border=0 /></a><br/>
<a href="specific.php?sensorname=pingtime"><img src="netspeedGraphs/pingtime.png" border=0 /></a><br/>

You’ll notice the references to another PHP file, “specific.php” – this is another homegrown script that displays the past day, week, month, quarter, half-year, and year graphs for the selected dataset (upload speed, download speed, latency). That file:


<?php
$rrdDir = '/net/gateway/usr/local/stats/';
$imageDir = '/var/www/html/netspeedGraphs/';

$graphsAvailable = array (
    'downloadspeed' => array ('Download speed', 'bps', 'MBps'),
    'uploadspeed'   => array ('Upload speed', 'bps', 'MBps'),
    'pingtime'      => array ('Ping time', 'ms', 'ms')
);

$graphPeriods = array (
    'day' => '-26hours',
    'week' => '-8days',
    'month' => '-32days',
    'quarter' => '-3months',
    'half-year' => '-6months',
    'year' => '-1year'
);

$theSensor = $_GET['sensorname'];

function callError($errorString) {
    print ("Content-Type: text/plain");
    print ("\n\n");
    printf ("Error message: %s", $errorString);
    exit;
}

if ( ! array_key_exists($theSensor, $graphsAvailable) ) {
    callError("Invalid sensor name specified.");
}

$basicOptions = array (
    '-w', '700',
    '-h', '150',
    '--end', 'now'
);

foreach ( array_keys($graphPeriods) as $graphWindow ) {
    $options = $basicOptions;
    $options[] = '--start';
    $options[] = $graphPeriods[$graphWindow];
    $options[] = "--title";
    $options[] = $graphsAvailable[$theSensor][0];
    $options[] = "--vertical-label";
    $options[] = $graphsAvailable[$theSensor][1];
    $options[] = sprintf ("DEF:%s=%s:%s:AVERAGE", $graphsAvailable[$theSensor][2], $rrdDir.$theSensor.".rrd", $graphsAvailable[$theSensor][2]);
    $options[] = sprintf ("LINE1:%s#FF0000", $graphsAvailable[$theSensor][2]);
    $options[] = sprintf ("PRINT:%s:LAST:Cur\: %%5.2lf", $graphsAvailable[$theSensor][2]);
    if ( $graphsAvailable[$theSensor][0] == "Download speed" ) {
        $options[] = sprintf ("HRULE:300#00FF00:Max");
        $options[] = sprintf ("HRULE:240#0000FF:80 pct of max");
        $options[] = sprintf ("HRULE:200#FF00FF:Min guaranteed");
    }
    if ( $graphsAvailable[$theSensor][0] == "Upload speed" ) {
        $options[] = sprintf ("HRULE:16#0000FF:80 pct of max");
        $options[] = sprintf ("HRULE:20#00FF00:Max");
    }

    $tmpname = tempnam("/tmp", "env");
    rrd_graph($tmpname, $options);
    $destname = sprintf ("%s%s-%s.png", $imageDir, $theSensor, $graphWindow);
    rename ($tmpname, $destname);
}
?>
<title>Environmental Sensor - <?php echo $graphsAvailable[$theSensor][0]; ?></title>
<meta http-equiv="refresh" content="300">
Sensor: <?php echo $graphsAvailable[$theSensor][0]; ?><br/>
<?php
foreach ( array_keys($graphPeriods) as $graphWindow ) {
    printf ("Previous %s<br/>\n", $graphWindow);
    printf ("<img src=\"netspeedGraphs/%s-%s.png\"><br/><hr/>\n", $theSensor, $graphWindow);
}
?>

That’s about it. The sampler is run on the very first device connected to the cable modem – in my case, the firewall – and running it anywhere behind that first device, or having other devices directly connected to the cable modem, will probably give you bad data. Feel free to use it, though there are no guarantees any of it will actually work for you.

An Old Hobby Resurfacing

Saturday, September 10th, 2016

For those few who have known me for a long time, you know one of my hobbies is model railroading. I haven’t had much chance to engage in it recently, though I did start an N-scale layout while I was in the townhome. Well, recently, I’ve joined a model railroad club (the Neuse River Valley Model Railroad Club), and I’ve started working on the club’s HO layout. Since my interest is primarily in the underpinnings – the physical wiring, control system, etc. – I’m working on setting up the JMRI server and related components for this layout. I’m also going to be doing some minor scenic work, but my main concentration will be on layout control.

The club has a laptop set up under the layout that’s intended for use with JMRI; today I got the drivers installed on the OS and managed to get decoder detection working on a temporary programming track. I’m starting to do some reading about JMRI and how it operates, and for the most part it looks like it all Just Works(r)(tm). Which, based on what it was like the last time I looked at it several years ago, is both a large relief and a not-so-small miracle.

The specs so far – the Windows 7 laptop is hooked up to a Digitrax PR3, which in turn is connected via LocoNet to a Digitrax DCS200. The WiFi router that we’ll be using for mobile phone control has to be replaced, but we will be replacing that soon. Once that’s done, I just have to scan all the locomotives on the programming track, and they should be controllable through JMRI.

We will not have actual switch control through JMRI – all the switches on this layout and the N-scale layout are manual, and the idea of making them remote controllable is a non-starter due to cost. I was told there are about 80 turnouts on the HO layout alone, and adding remote machines to them at $35 per is… well, cost-prohibitive is the nice way to put it.

Mind, when I start building my home layout, the turnouts will be remote powered, and ultimately JMRI will be controlling the turnouts. I’ve just got other things going on at the homestead that take priority over the railroad layout. Stay tuned for more updates on what I’m doing with the club layout – if I start making enough posts, I may spin up a new site dedicated to my model railroading endeavors. If I do, I’ll let you know here.

Hello again!

Thursday, August 25th, 2016

Well hello again. I realized this morning that it’s been over six months since I’ve posted anything, and I’ve got a few things I might want to post about, so I thought I’d check in with everyone. Life has been rather busy of late – I moved to a new house (detached, with about 0.65 acres), and I’m in the middle of sprucing up the townhome to put it on the market. That means, among other things, that I’ve had to redo my home network, and I’ve got a few things to say about that. That will probably be a new post all its own.

This time, I want to focus on something I’ve been doing for work. Among my many other responsibilities as a systems administrator, I’ve dealt with quite a number of configuration management schemes. Most of them were little more than “if it breaks, make sure the configuration is current; otherwise leave it alone” – which is to say, no configuration management at all. I’ve used CFEngine – way back in the past – and adapted Nagios to check on configuration items (a very ugly kludge – please don’t ever do that). Recently, I’ve started playing with Ansible, since that’s the tool my current boss wants me to use.

Ansible is, in a word, lacking. Why do I say that? Several reasons. First – it’s a bigger and more involved version of the Expect SSH script I wrote (adapted from someone I knew at NCSU at the time, who later went on to Red Hat and then elsewhere) over a decade ago. It doesn’t really do much that my decade-old script isn’t capable of, so there’s no major benefit to it. It requires a huge amount of setup prior to actually using it – getting authentication (SSH public keys) and escalation (sudo privileges) right – and it can’t handle slow connections or VPN tunnels very well.

The major downfalls of Ansible are in its language and its operation. The playbook language is rather difficult to wrap your head around – it’s neither simple nor intuitive, and bears little to no resemblance to any existing programming or scripting language. The problem with its operation is that it’s a one-shot deal – you have to actively manage errors and connection issues yourself, as opposed to having a tool that retries connections or deployments automatically. If I start a deploy to 100+ systems, get any errors at all, and then get called to a meeting about something else entirely, I can guarantee you that I won’t remember to go back and fix those errors for a day or more, and that is a rather bad thing. A good configuration management system needs to take a config update and keep attempting to apply it until it succeeds or hits an error that requires admin intervention (e.g. a package conflict, as opposed to a connection timeout, which it should be able to handle on its own). It’s especially difficult if, as with Ansible, the result status is simply logged to the screen as opposed to a file.

Perhaps some of the issues I have with Ansible are because I haven’t gotten into it deeply enough – but honestly, I shouldn’t have to dig deeper than I already have just to solve these issues. And that’s the final issue I have with Ansible: the documentation is, bluntly, atrocious. I could find almost no examples of how to write a playbook. The example playbooks I did find were in a git archive, where the commit messages told me what had been changed most recently but offered no clue as to what a given playbook file was supposed to do.

Overall, I have to say, Ansible is over-hyped and under-performant. It comes across as an attempt by a programmer of mediocre skills to semi-automate systems administration tasks that said programmer shouldn’t be exposed to or aware of in the first place. For me, Ansible doesn’t give me enough ease of use or automation to make it worth the trouble it took to set it up in the first place.

The next chapter

Sunday, January 17th, 2016

For those of you who don’t know (which shouldn’t be all that many of you, since I’ve announced it in several other places), I started a new job on Jan 4, 2016. The previous gig started out as a 9-month contract-to-perm, but there was a lot of what seemed like confusion and hesitancy on the company’s end to make me permanent. There were also several gratuitous insults offered to me – some of which I’m quite sure the company didn’t realize the extent of – so I started looking around quietly. The new gig found me during this phase, and after a coworker conducted an especially egregious attack against me in a public email, I stepped up the contacts a little bit, and less than two weeks later submitted my resignation effective Dec 31, 2015. Well, all that’s water under the bridge, and while I hope the old company has learned some things from my departure, I also hope they do well in the future.

My new company is a very small startup based in Chapel Hill – smaller than I thought at first, actually. I am employee number 7; a week after I started, they brought on employee number 8. We have three programmers, two customer service specialists fresh out of college, a training specialist, a business specialist, and me. We have no venture capital investments, which is actually a good thing in this case, as we’re also profitable and growing.

Enough with the tangent. On to the point of this update. The new gig is a remote one – I’ve been commuting about 35 – 45 minutes for the two weeks I’ve been working there, and that will continue next week, but then I start working from home full-time. I’ll go in to the office if I need to, or if I feel like doing so for some reason, but the majority of my work time will be from my office in the basement. Which means, of course, that I need an office that has enough compute and display real estate to do the job, which means I had to upgrade the desktop. I had been working just fine, for the limited bits I used my office desktop, on a 2004-vintage Mac Mini. Honestly, if it had been able to drive two monitors, I wouldn’t have upgraded, but it can only handle a single monitor. I picked up a cheap BRIX from Intrex – an Intel Core i3 chip in a form factor smaller than the Mac Mini – added 8 GB memory and a 250 GB SSD to it, and in all honesty I’m loving the new machine already. It doesn’t feel lightning fast, but it feels solidly capable. Best of all, the BRIX is designed to mount on the back of the monitor using the VESA mount. The only things sitting on the top of the desk are the two monitors and a slim DVD drive – and even the DVD drive will probably disappear soon.

Overall, despite the fact that I had to spend money (something that I really don’t like being necessary, though I’ll spend entirely too much money when I want to spend it), I’m pleased with the upgrade. The OS (Fedora 23) is installing / updating now, and I’ll probably finish setting up my environment tomorrow. Then I’ll wander in to the main room of the basement and do some more work on my model railroad layout. :)

Tangent: fitness

Friday, July 31st, 2015

About 4 months ago, I finally got my FitBit Charge HR and started using it to look at my fitness. I say “look at” because it has been just that – momentary looks, with no sort of history. I don’t like the way the FitBit web site presents the data – it confuses me and tries to make things too “candy-coated” – so I had to figure out a way to track trends myself. Oh, and before we go any further – this is not a review paid for by FitBit. This is just me telling you why I think having a FitBit and using it is a good idea – I’m getting zero benefit to writing this aside from the finger exercise involved in actually typing.

Fortunately, FitBit is really awesome about giving individuals access to their data through the API they have set up, and they’re also awesome about providing individuals access to the Partner API which allows access to intraday data. That was one of the major reasons I went with a FitBit instead of an Up3 from Jawbone – Jawbone says they allow access to your data, but in my testing of it, I couldn’t find a programmatic API and even the “data download” area of their web site only gave me data from a year ago, not current data.

So, I got myself access to the Partner API from FitBit, and started pulling down my personal data daily. I’ve only been doing this for about 7 days so far, so I don’t have very much in the way of trends yet, but it’s already started helping me understand some things about my habits. Since I’ve found it so useful, I figured I’d share what I’ve done in hopes that someone else will find it useful as well.

First things first – get yourself a FitBit. I chose the Charge HR because I wanted the intraday heart rate measurements, but I didn’t see the benefit to the location data the Surge provides. In hindsight, I probably could have made use of it, but it’s not something that I feel adds sufficient value to my analysis for the price differential. Once you have the FitBit – whatever model you end up getting – use it! No sense spending money on something that’s going to sit in your kitchen junk drawer.

Now that you have your FitBit, you need to open the door to downloading your data. This can get a bit confusing – it took me several tries to figure it all out – but stick with me here. Step one, register an application at the FitBit developer site ( I gave my app a name of “Personal” – the name doesn’t matter too much, it’s just something you have to put in. For this method, the OAuth 1.0 Application Type should be “Browser” and the OAuth 2.0 Application Type should be “client”. I used “http://localhost/callback” as the callback URL – this field has to be filled in, but for what we’re doing here, it doesn’t matter much what you put there. Once you’ve done that, send an email to “” and request access to the Partner API for intraday data. Be sure to include the app’s client ID as given to you after registering the app on the dev site. Please note – they are very supportive of personal use, but don’t try to slip in a commercial application that you’ll be selling while claiming that you want access for personal use. That’s just bad form. It may take them a while to get to your request depending on volume – it took about 3 weeks for me to get Partner API access after my initial email.

Now that you have access, you need to set up the authentication key. FitBit has decent documentation for doing this on their developer site at, but this is where it got confusing for me. I’m only going to cover the OAuth 2.0 authentication bits, since that’s what you need for heart rate measurements and it’s a superset of what OAuth 1.0 gets you. Please note that as of when I write this article, OAuth 2.0 at FitBit is in beta, so it might break without warning. Buyer beware, caveat emptor, and all that. We’ll be looking at the “Authorization Code Grant Flow” described on that page.

The instructions tell us to “redirect the user to FitBit’s authorization page”. This really confused me, since I hadn’t directed myself anywhere yet – ultimately, it means I have to poke a FitBit URL with a well-known set of URL parameters, which include the application’s client ID as given to you on the “Manage My Apps” page ( The easiest way to do this for now is to type the following into the location bar of your web browser:${ID_HERE}

Replace the ${ID_HERE} with your app client ID. This page will try to redirect you to your callback URL, which if you use the values above won’t exist, so you’ll end up seeing a URL in your location bar with a “code=” part to it. Save the long string after the “code=” – this is the part you need for the next step.
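If you’d rather not pick the code out of the location bar by eye, a few lines of Python will extract it from the redirect URL (the callback URL here is the placeholder localhost one from earlier, and the code value is invented):

```python
# Works on Python 2 and 3; the stdlib module moved between versions.
try:
    from urllib.parse import urlparse, parse_qs  # Python 3
except ImportError:
    from urlparse import urlparse, parse_qs      # Python 2

def extract_code(redirect_url):
    """Pull the OAuth 'code' parameter out of the callback URL."""
    qs = parse_qs(urlparse(redirect_url).query)
    return qs['code'][0]

print(extract_code('http://localhost/callback?code=abc123def'))  # prints abc123def
```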

Next, FitBit tells us the application needs to “exchange the authorization code for an access token”. This must be completed within 10 minutes, or the code we got expires and we have to start over. The response will be in JSON, so I used an interactive Python session. Here’s what I did:

$ python
>>> import requests
>>> import base64
>>> import urllib
>>> clientid='XXXXXX'
>>> secret='YYYYYYYYY'
>>> code='ZZZZZZ'
>>> authStr = "Basic "+base64.b64encode("%s:%s" % (clientid, secret))
>>> authHdr = {'Authorization' : authStr}
>>> body=urllib.quote("clientid=%s&grant_type=authorization_code&code=%s" % (clientid, code))
>>> req ='', headers=authHdr, data=body)

You’re probably asking, “So what does all this mess mean?” Well, it becomes a little more clear when you replace the XXX’s with the client ID from the FitBit API page and the YYY’s with the application secret from the same page. Then replace the ZZZ’s with the code you got from your browser above.

Once this is done, dump the result of the request with:

>>> req.json()

This will show you the JSON notation for the request response. The important parts are the “access_token” and the “refresh_token” strings, so we’ll want to save those in another variable:

>>> access = req.json()['access_token']
>>> refresh = req.json()['refresh_token']

Now we want to save those two items to a file locally, since we’ll need both pieces of information in the future. The easiest way to do so:

>>> import json
>>> tok = {}
>>> tok['access_token'] = access
>>> tok['refresh_token'] = refresh
>>> with open ('.fitbitAuthFile', 'w') as fh:
...   json.dump(tok, fh)

Exit the interactive Python interpreter and confirm the “.fitbitAuthFile” file contains the access_token and refresh_token we just wrote to it. If it doesn’t, you’ll probably need to start the process over by going back to the web page to get a new code. If it does, congratulations, you’ve finished the hard part!
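Here’s a quick sanity check of that file, in case you’d rather not eyeball the JSON yourself (the filename matches the one used above; adjust it if you saved elsewhere):

```python
import json

def check_auth_file(path='.fitbitAuthFile'):
    """Return True if the saved token file has both non-empty tokens."""
    with open(path) as fh:
        tok = json.load(fh)
    # Both tokens must be present for the download script to work later.
    return bool(tok.get('access_token')) and bool(tok.get('refresh_token'))
```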

The actual retrieval of the data is both much simpler and much more complex. Simpler because we only have to read in the token information, test whether it’s expired (refreshing it if so), then ask for the data we want. More complex because this is where processing the data comes into play. I’m saving data to spreadsheets through the openpyxl Python module. I haven’t finished developing the script or the spreadsheets, but you can download it in its current state from my web site. You’ll need to make some changes to insert the relevant values into places I’ve put generic all-caps strings, and please do keep in mind this was intended for a Linux (specifically, Fedora 21) system, not Windows. I don’t intend to make any changes to accommodate a Windows system either – I’m a Linux systems administrator by trade and I don’t get along with Windows. If there’s enough interest, I’ll update it in the future and/or upload the weight-tracking spreadsheet template I use.

Crunchtime Fun

Monday, December 15th, 2014

Welcome to the end of the year, when late projects suddenly get rushed to completion so boxes can be checked off and project managers can take credit for “having the initiative to push this project to completion”. It’s also the time of year when the systems admins realize they’re about to lose several days to several weeks of vacation time if they don’t take it, so you can see the conflict of interest there.

Well, I’m in the second category. I’ve scheduled my use-it-or-lose-it vacation days so they’re somewhat spread out, and I’m actually working the week between Christmas and New Years because I’m on call that week. In all honesty, things aren’t that bad this year – yes, there’s a mad rush to get projects out the door, but it’s not disrupting my schedule too much. So, I’m using the relative quiet and downtime to make plans for next year – mostly aimed at not putting myself in the position of having to burn two weeks of vacation time in December so I don’t lose it. This is my idea of crunchtime – and it’s quite a bit more fun than the typical crunchtime mess. :)

So, what are my plans so far? Well, I’m taking a page from Ingress, which has a new-to-me feature called “missions”. I’m making a list of places to visit / things to do over the course of the twelve months starting January 1, 2015. Are these New Year’s resolutions? You might consider some of them to be, but I don’t. They’re waypoints that I hope to get to during my journey through 2015. Let’s take a look at some of the “things to do”:

  • New kitchen countertops
  • Faux stone accent wall (on the wall with the fireplace)
  • New backsplash in kitchen
  • Tile floor in kitchen
  • Finish suspended railroad in living room
  • New flooring through main level
  • Sell townhome, upgrade to detached single family

Now if that isn’t one of the most discriminatory terms I’ve ever come across…  why is it called a “single family” home? Are unmarried childless people not allowed to live there? Given my situation, that term is about as welcome as a burning bag of shit on the front stoop. Call it a “detached home” – don’t associate it with the assumption of a family involving spouses and children.

Ok, gripe mode off. I’ll try to warn you next time I hit a pet peeve, but can’t promise I’ll succeed. Anyway, the whole point to most of these items, as you can probably tell, is to improve the value of the townhome so I can maximize my profit when I sell. This is mostly so I can invest a large chunk of the profits, but a small part will also help fund the (possibly multiple) road trip(s) I want to take throughout the year. Some of the cities already on the list:

  • Tampa FL
  • Miami FL
  • Washington DC
  • Williamsburg VA
  • Charleston SC

What do these cities have in common? Well, aside from the fact that they’ve made this list, I’m not telling. :) Seriously, though, if you know me, you probably have a good idea what the rationale is, even if you don’t know specifics. I was going to make this a list and modify it throughout the year – I might still do that, but right now it’s time for me to go get lunch (more specifically, visit the gym, then get lunch), so I’ll leave it at that.

Home networking done right

Friday, June 6th, 2014

This, ladies and gentlemen, and children of all ages, is how you do home networking correctly. First, you start with a central wiring panel:

Notice how there is a module for cable and telephone distribution on the left and three modules for network distribution on the right? Yes, start there. Hook up the cable and phone distribution first – incoming lines go behind the module, outgoing to the house go in front. Networking lines to the house go behind the modules.


Make sure your terminations are clean – you want a little bit of slack, a little bit of what would be called a “drip loop” for an aquarium setting, but not so much that the excess cabling gets in your way.

Then you connect active computers to one or two ports elsewhere in the house and start verifying your infrastructure bits work. See the green lights on the switch? Green lights are good:


Once you’ve got one or two good distribution connections, add your home server:


Make sure it has power, and make sure your other machines can get to it in every way you need to get to it – SSH, VPN, RDP, VNC, whatever.

Now, finish cabling the distribution panel to the switch:


If you have the ability, you want to make your own custom-length cables. Seriously, you don’t want 4-foot-long cables hanging down looking like an overturned bowl of spaghetti – that’s just amateurish.

Finally, add the LCD panel and keyboard for the home server, just in case you do something stupid and break network connectivity to it:


If you’re competent, you’ll use this monitor/keyboard maybe three times in your entire life, save for power loss events which are really the power company’s fault, not yours.

Now, young padawan, go enjoy the fruits of your labor – if you’ve managed to get everything accomplished properly, you deserve a beer. Which is where I am headed as soon as I hit the “Publish” button!

My Happy Place

Thursday, May 22nd, 2014

I have an awesome job right now. The group I work with is talented and open about sharing information. The management team is supportive of the workerbees like me, and even lets me vent when I need to. Which I unfortunately need to more often than I should.

Like today, when I ran into two head-scratchers within 10 minutes of each other. First was when I tried to find out why a Tomcat user was required for a new system when the old system didn’t have Tomcat anything. Nor does the new system. As it turns out, they tried to load the new system with Tomcat at first, since it made more sense to do so that way in our environment. Well, then the consultants on site told us that Tomcat isn’t really supported as well as the advert / marketing materials say it is, so we punted back to the “old” way of doing things – but didn’t bother to remove the Tomcat user bits. So yes, we have a Tomcat user running a suite of JBoss BPMS processes that have nothing to do with Tomcat… and I sometimes wonder why our user authorization scheme is so dorked up. Well, no longer.

The next one hurts a bit more, honestly, because it comes from people that I expect to know better. The aforesaid BPMS system is having performance issues, I’m told, so the consultants want me to replace OpenJDK with Oracle’s JDK, because “the font packages are supposed to come with the JDK”.

Wait, what? Since when do font packages come with a freaking JDK? Especially when the font packages you asked for were installed separately from any JDK anything via a “yum install” dealing with – wait for it – X font packages!?

Two nights ago, I met a current employee of Red Hat at the climbing gym, and mentioned my dismay at the support org’s response to a ticket one of the on-site consultants filed. I would like to apologize for that assumption, since I now realize it was an appropriate response given the consultant’s demonstrated skill level. The response which would have annoyed me was indeed completely appropriate for this consultant – an unfortunate fact which reflects badly not on RH Support, but on RH Consulting.

IT’s self-stratification fetish

Wednesday, March 6th, 2013

I’ve recently converted from a contractor to a permanent employee at Railinc, and I’m quite happy with the change. Does it affect my daily activities? No, not at all – but it does give me some peace of mind about vacations, income stability, benefits, etc. I’ve been here for 10 months so far, and yet I still keep getting contacted by recruiters, most often these days on LinkedIn. The frequency has dropped significantly, but it’s still higher than I would expect it to be after 10 months of “I’m happy where I am, not looking to change jobs” responses. I had another recruiter contact me on LinkedIn just today, actually.

What is surprising, and a little disturbing in some ways, is the demographics of my LinkedIn connections versus my career. This image is from the right-hand sidebar of the LinkedIn page that I was taken to after clicking the “Accept Invitation” button in the email I got.

What’s the first thing you notice about all these images? They’re all women. I’ve deliberately cropped out names and company references; but they are all IT recruiters. In fact, there are a total of three male recruiters in my LinkedIn contact list out of a total of 22 recruiters of all types.

Now contrast this with my present and past coworkers… I’ve had a total of 8 jobs. Somewhere in the neighborhood of 100+ present and previous coworkers – people reporting to the same manager I reported to. Going through my jobs one by one, I counted a total of 13 women out of that hundred plus. If I include managers, that total goes up to 15 out of 100+.

I’m not sure what I think of this situation. Pattern? Intrinsic to the field? Intrinsic to male vs female nature? Bias? Self-selection? Fetish? Obsession? I’m going to go ponder during lunch.

Some things corporate recruiters need to know…

Wednesday, August 29th, 2012

This morning, I got an email from an internal corporate recruiter by the name of Natalia Delape at Peoplefluent. It was sent to me through LinkedIn. This isn’t that unusual, I get anywhere from two to five inquiries about open positions a month, and I’m usually willing to simply reply with a polite “No, thanks, I’m happy where I am.” and move on. Truth is, I am quite happy in my current position – it’s a great company and I feel like I’m having a significant positive impact on the company.

This one, I wasn’t willing to be so polite. Why not, you ask? Well, the email sent to me was a form email, with absolutely no individual, specific, personal information in it – it might as well have been from a Nigerian 419 scam artist as from an actual company. Second, the job description had absolutely zero relevance to my skillset, as would have been self-evident had Mz Delape bothered to read the job description and the first line of the summary of my LinkedIn profile. So she basically wasted my rather valuable time by making me do research on the position that she should have done before contacting me.

I wonder how Peoplefluent can be such an excellent staffing firm if their own internal recruiters are so bad at their jobs? I mean, Peoplefluent claims right on their website that they were named a “Market Leader” for “Recruitment” – yet their own internal recruiters are the next best thing to small shell scripts?

Mz Delape was evidently a bit miffed at my response, telling me she did not appreciate my tone, and that she was “just doing [her] job.” Well, Mz Delape, I really don’t give a rat’s ass that you didn’t like my tone. Number one, you clearly weren’t doing your job, or you wouldn’t have contacted me about a position that I don’t even begin to have the skillset for. Number two, I don’t appreciate you wasting my valuable time because of your refusal to do even a basic skim of my profile before clicking your cut-n-paste message on its way to me.

Thankfully, not all recruiters are quite so bad at their jobs. I was placed in my current position by TekSystems, and before they even submitted me to any positions they took the time to get to know me, and more importantly, get to know my skillset and what I wanted out of my next position. It took about six months and eight on-site interviews before I got an offer, but of those eight interviews, there was only one company I was unsure about going to work for. Yes, it takes a bit longer to get a job, and it takes a bit more work on the recruiter’s side, but trust me, the karma you build up by actually doing your job well is worth it. Plus, the IT community is smaller than most recruiters might think, and we all talk to each other. It might not be a 1960s sorority house gossip line, but when a bad recruiter hits one area sysadmin, pretty soon the entire RTP community will know to avoid that person or company, and your emails will get killboxed, so you won’t be able to find any qualified staff.

Peoplefluent, if you waste my valuable time like this again, rest assured I will send you a bill for recruitment services, since your internal staff is totally unable to perform their duties adequately.