Home updates

November 21st, 2018

Over the course of the past year, I’ve made quite a few updates to my home. Most of them aren’t really visible – by design. I don’t want my updated speaker wiring or networking to be visible – it’s supposed to be hidden, so I’ve put it behind walls and in the crawl space. Some of this I’ve gone over before, like the networking, so we’ll ignore that. Some I haven’t really gone over in an organized fashion, like the new entertainment center infrastructure.

As part of a multi-year effort (due to budget restrictions), I put in in-wall wiring for HDMI, coax, and networking to above the fireplace, and put in a 7.1 speaker distribution panel on the side living room wall. My 7.2 surround sound system is now up and running, with all the speaker wires traveling through the walls, so the receiver plugs into the wall directly behind it with speaker wires no more than a foot long. I’ve not yet gotten a new wall-mount TV, but that’s going to happen within the next 6 to 12 months.

I also upgraded the receiver and DVD player to a newer model receiver and a 4K Blu-Ray player. Since the TV is still only 1080p, there’s no image difference, but everything is now HDMI instead of component. That’s a mixed bag, since it’s easier to plug in one cable than five, but it also means my old consoles (PS/2 and Wii) no longer work – the new receiver doesn’t upconvert video from component to HDMI, so I would need to buy an upconverter if I wanted to continue using those consoles.

Well, having upgraded those bits, I decided to turn back to my compute environment. I had been limping along for about a year and a half trying to figure out how to replace my “frankendisk” server. About two months ago I learned of a new home-priced NAS system from Synology, and started researching it. It looked to all intents and purposes like exactly what I needed, and the more I looked into it, the more it seemed like a perfect fit. I mentioned my thoughts to some coworkers, and was strongly urged to get one by several people who are former or current owners. I bit the bullet, bought a used DS1815+ from eBay, and started considering how to put it into use. I ended up gutting my existing virtualization environment to cannibalize the disks – which, since it’s a lab-type environment, was easy enough to rebuild – and started out with six 2 TB disks. With Synology Hybrid RAID, that gave me about 9 TB usable with one-disk fault tolerance. While the NAS itself takes a long time to perform administrative tasks, the user experience overall is just about the best I’ve ever had. It took me under 15 minutes to set it up and make it available on my network, and all 9 TB were available while the parity check and RAID synchronization were ongoing.

Next I integrated it with my IdM environment for automounting home directories, and it worked like a charm. No issues – it just mounted up with acceptable permissions and let me do what I needed to do. When the time came to stand my virtualization host back up, adding the NAS as a VM store was trivial – everything just worked with no fuss or bother.

Next it was time to figure out backups. With my previous server, I had been using rsnapshot to maintain backups at various intervals – daily for 8 days, weekly for 5 weeks, monthly for 12 months, annually for 2 years. As I started setting up the server that used to house my frankenstorage, I came across an interesting Synology application called “Active Backup for Business”. Installation onto the NAS was a snap, and when I opened it up I realized my life had just gotten even easier. I was able to set up the exact same backup schedules I had been using with rsnapshot directly on the NAS, and have it do the backups via rsync. The only two gripes I have are that I can’t configure one server to have different users for backing up different directories, and the labeling of some of the configuration items is decidedly sub-optimal. The “Physical Server” item refers specifically (and only) to Windows servers, and the “Virtual Machine” item only works with VMware instances. Since I’m using RHV and RHEL everywhere, I’m forced to configure all of my targets as “File Servers”. I suppose this makes sense for the average home user, since the vast majority of them will be using Windows machines, but it still irks me.

Then I started exploring the different applications available for the Synology, and I came across one called “Media Server”. This intrigued me, since my new receiver has something called “HEOS” available – which is, in the simplest terms I can come up with, a way to use a smartphone to remote control the receiver and what music it’s playing. The receiver has to be on the wireless network along with the smartphone for this to work, which annoyed me since I’d hooked it up on the wired network initially. Denon, please fix that – the firmware updates are large enough that I would really rather use my wired network. Well, I poked at Google for a bit and discovered that the Synology can serve music to HEOS devices – wait a second… okay, let’s move the iTunes library over to the NAS…

Well, after a little bit of horsing around, I figured it all out. I’m going to re-rip all my CDs to FLAC (they’re MP3 now) to get the lossless encoding, but… I can be sitting in my office, open the HEOS app on my phone, and tell my stereo to play whatever I feel like listening to from my personal collection! Without even having to go into the living room to pick up the stereo remote! Holy crap, this is spoiling me… well, I kept looking around the HEOS app in amazement that such magic actually worked for me pretty much out of the box. That’s when I found the “Sound Mode” screen. I had been playing a Lindsey Stirling album (don’t hate – I happen to like her music, and if you don’t, you can go suck eggs) in the default “Stereo” mode. Most of the settings aren’t applicable, because they’re aimed at playing movies/DVDs/Blu-Rays (e.g. DTS), but there was an option for “Multi-Channel Stereo”. I clicked that button… and holy crap, it was as if my stereo had just gone from monaural AM to full 7.2 surround sound, and I was still in the office / spare bedroom! The difference was as great as the difference between listening to “Olympic Fanfare and Theme” on a cheap one-speaker cassette player and listening to the same arrangement played by the Boston Pops live.

I am a very happy IT geek. 🙂

Useful scripts

May 1st, 2018

Having been back at Red Hat for about a year and a half now, I’m starting to get back into a mode of ‘fix that which hasn’t yet been fixed by others’. Which, as a consultant, really means ‘make stuff work for clients’. Since most (read: 90% plus) of my deployments have been Satellite-based, I’ve been getting a lot of questions along the lines of “Can I get a report on ${foo} from the Satellite?”

Well, most of the reports that clients are used to, in my experience at least, are no longer available or are very difficult to get in Satellite 6. While not required for operational reasons, those reports often are required for reporting or auditing purposes. Some of the reasons clients advance seem specious at best, but I try not to judge. I simply give them what they ask for, while telling them about better ways to get the results they’re looking for, even if the data isn’t arranged the way they think they want it.

One of my clients wanted to be able to look at a report of hosts/hostgroups that needed updates, and the types of updates (security, bugfix, enhancement), so I decided this was as good a time as any to start scripting some useful things. I don’t know if these things are available elsewhere, and I’m reasonably certain there are better ways to get to the data I wanted, but I’ve now gotten to the point where I have my first “report” available. I’ve tested it against my homelab environment, and it seems to work exactly the way I want it to.

This report is in the form of an OpenOffice / LibreOffice workbook (.ods format), and details the hosts in the Satellite that have applicable errata outstanding and how many errata are outstanding. It then creates a new sheet for each host, which lists all of the errata available for that host, the type of erratum (security, bugfix, enhancement), and how many packages are affected by that erratum. Then it creates individual sheets for each erratum outstanding across the entire environment, and lists the erratum ID, the name, the type, and the actual packages it affects / updates.
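Under the hood there’s no magic – it’s basically a couple of REST calls against the Satellite API plus some bookkeeping. Here’s a rough sketch of the idea. To be clear, this is not the actual errataByHost.py: the hostname and credentials are placeholders, and you should double-check the endpoint paths against the /apidoc page on your own Satellite before trusting them.

#!/usr/bin/python
# Rough sketch only - not the actual errataByHost.py.
# The hostname and credentials below are placeholders, and the endpoint
# paths should be verified against /apidoc on your own Satellite.
import requests

SATELLITE = "https://satellite.example.com"   # placeholder
AUTH = ("admin", "changeme")                  # the real script reads these from credentials.json

def api_get(path, params=None):
    # SSL verification is off here only because my homelab uses a self-signed cert
    resp = requests.get(SATELLITE + path, auth=AUTH, params=params, verify=False)
    resp.raise_for_status()
    return resp.json()

# Walk every host, then pull the applicable errata for each one.
hosts = api_get("/api/v2/hosts", params={"per_page": 1000})["results"]
for host in hosts:
    errata = api_get("/api/v2/hosts/%d/errata" % host["id"])["results"]
    by_type = {}
    for erratum in errata:
        by_type[erratum["type"]] = by_type.get(erratum["type"], 0) + 1
    print("%s: %d applicable errata %s" % (host["name"], len(errata), by_type))

The real script does essentially that walk and then writes everything out into the .ods workbook instead of printing it.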

If you want to take a look at it, head over to GitHub (https://github.com/jwbernin/sat-tools) and grab the sources, create an appropriate credentials.json file, then run the errataByHost.py script. If you find it useful, let me know – if you’d like to see something added to it, or if you’d like to see another report, also let me know and I’ll do what I can.

Bandwidth monitoring

May 6th, 2017

I recently posted on Facebook and G+ about how the TWC buy-out by Spectrum has done a great deal of good for me – I’m actually getting all of the bandwidth that I pay for, and then some. I can see the improvement on my bandwidth graphs, which are generated by home-grown (i.e. messy and hackish) scripts. Some people asked for the sources of those scripts, and I promised I’d put something together, so here it is.

The first component is the speedtest_cli.py script, which is a command-line interface version of speedtest.net. I’m using version 0.3.4, but I see no reason why newer versions wouldn’t work just as well. I don’t modify that script in any way, so I’m not going to post it here.

The next component is the sampler script. This is a purely home-grown script, which I never intended to be used anywhere other than my network, so I won’t guarantee it will work anywhere else. Here it is:


#!/usr/bin/python

import sys
import os
import subprocess
import rrdtool
import datetime
import urllib
import tweepy
from token import *

curdate = datetime.datetime.now()
datestr = curdate.strftime("%Y%m%d%H")
resfile = 'speedtest-results-'+datestr+'.png'

# Run the speed test; with --simple and --share the output is four lines:
# ping, download, upload, then the URL of the results image.
readings = subprocess.check_output(["/usr/bin/python", "/root/speedtest_cli.py", "--simple", "--share", "--secure"])

lines = readings.split('\n')
ping = lines[0].split()[1]
download = lines[1].split()[1]
upload = lines[2].split()[1]
image = lines[3].split()[2]

rrdtool.update('pingtime.rrd', 'N:'+ping)
rrdtool.update('downloadspeed.rrd', 'N:'+download)
rrdtool.update('uploadspeed.rrd', 'N:'+upload)
urllib.urlretrieve(image, resfile)

auth = tweepy.OAuthHandler(consumer_token, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
api = tweepy.API(auth)

msg = 'Download speed %s, upload speed %s, ping time %s' % (download, upload, ping)
api.update_status(msg)

#msg = "@jwbernin: speed problem: %s is only %s"

#if ( float(download) < 240.0 ):
# api.update_status("I'm paying for 300 Mbit down, why am I only getting %s Mbit?" % download)

#if ( float(upload) < 16.0 ):
# api.update_status("I'm paying for 20 Mbit up, why am I only getting %s Mbit?" % upload)

print ping
print download
print upload
print image

Some things to note about this script… First, it uses the tweepy module to post the results of each test to Twitter. The authentication information is in a separate file, “token.py”, that I will not be posting here. That file contains only four variable strings, and those variable strings are used only for authentication of the tweepy agent. Next, it also imports the rrdtool module, and uses RRDTool to record data. I’ll leave the creation of the RRDs as an exercise for the reader, since it’s a fairly simple process.
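That said, if you want a starting point for the RRDs, here’s roughly what mine look like. Treat this as a sketch rather than gospel: the step is hourly because that’s how often cron runs the sampler, the data source names match what the PHP below expects (‘MBps’ for the speed files, ‘ms’ for the ping file), and the heartbeat and RRA length are just reasonable guesses you can tune.

#!/usr/bin/python
# One-time creation of the RRDs the sampler updates.
# Step is 3600s to match the hourly cron run; the heartbeat and RRA size
# are guesses - adjust them to taste.
import rrdtool

# The DS names have to match what the graphing PHP references in its DEF lines:
# 'MBps' for the two speed files, 'ms' for the ping time file.
for fname, ds in [('downloadspeed.rrd', 'MBps'),
                  ('uploadspeed.rrd', 'MBps'),
                  ('pingtime.rrd', 'ms')]:
    rrdtool.create(fname,
                   '--step', '3600',
                   'DS:%s:GAUGE:7200:0:U' % ds,
                   'RRA:AVERAGE:0.5:1:8784')   # a year of hourly samples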

The script prints out the upload and download speed, the measured latency, and the URL for the results image, all of which get sent to email since I run this through cron. It also saves the image file in a directory on my firewall – which reminds me that I need to go clean things up. Excuse me a bit while I take care of that…

Okay, I’m back now. So I’ve sampled my bandwidth every hour and recorded it into RRDs. Now, how to display it? I do that with PHP. First, I have a basic page with the last 24 hours of data for upload, download, and latency.

This is a bit longer, so here we go:

<?php

$rrdDir = '/net/gateway/usr/local/stats/';
$imageDir = '/var/www/html/netspeedGraphs/';

$graphsAvailable = array (
    'downloadspeed' => array ('Download speed', 'MBps', 'MBps'),
    'uploadspeed'   => array ('Upload speed', 'MBps', 'MBps'),
    'pingtime'      => array ('Ping time', 'ms', 'ms')
);

function callError($errorString) {
    print ("Content-Type: text/plain");
    print ("\n\n");
    printf ("Error message: %s", $errorString);
    die();
}

$basicOptions = array (
    '-w', '700',
    '-h', '150',
    '--start', '-86400',
    '--end', 'now',
);

foreach ( array_keys($graphsAvailable) as $graph ) {
    $options = $basicOptions;
    $options[] = "--title";
    $options[] = $graphsAvailable[$graph][0];
    $options[] = "--vertical-label";
    $options[] = $graphsAvailable[$graph][1];
    $options[] = sprintf ("DEF:%s=%s:%s:AVERAGE", $graphsAvailable[$graph][2], $rrdDir.$graph.".rrd", $graphsAvailable[$graph][2]);
    if ( $graphsAvailable[$graph][0] == "Download speed" ) {
        $options[] = sprintf ("HRULE:300#00FF00:Max");
        $options[] = sprintf ("HRULE:240#0000FF:80 pct of max");
        $options[] = sprintf ("HRULE:200#FF00FF:Min guaranteed");
    }
    if ( $graphsAvailable[$graph][0] == "Upload speed" ) {
        $options[] = sprintf ("HRULE:16#0000FF:80 pct of max");
        $options[] = sprintf ("HRULE:20#00FF00:Max");
    }
    $options[] = sprintf ("LINE1:%s#FF0000", $graphsAvailable[$graph][2]);
    $options[] = sprintf ("PRINT:%s:LAST:Cur\: %%5.2lf", $graphsAvailable[$graph][2]);

    $tmpname = tempnam("/tmp", "env");
    $ret = rrd_graph($tmpname, $options);
    if ( ! $ret ) {
        echo "<b>Graph error: </b>".rrd_error()."\n";
    }
    $destname = sprintf ("%s%s.png", $imageDir, $graph);
    rename ($tmpname, $destname);
}

?>
<html>
<head>
<title>Network Speeds - Main</title>
<meta http-equiv="refresh" content="300">
</head>
<body>
<center>
<font size="+2"><b>John's Home Network Speeds</b></font><br/>
<a href="specific.php?sensorname=downloadspeed"><img src="netspeedGraphs/downloadspeed.png" border=0 /></a><br/>
<a href="specific.php?sensorname=uploadspeed"><img src="netspeedGraphs/uploadspeed.png" border=0 /></a><br/>
<a href="specific.php?sensorname=pingtime"><img src="netspeedGraphs/pingtime.png" border=0 /></a><br/>
<hr/>
</center>
</body>
</html>

You’ll notice the references to another PHP file, “specific.php” – this is another homegrown script that displays the past day, week, month, quarter, half-year, and year graphs for the selected dataset (upload speed, download speed, latency). That file:


<?php

$rrdDir = '/net/gateway/usr/local/stats/';
$imageDir = '/var/www/html/netspeedGraphs/';

$graphsAvailable = array (
    'downloadspeed' => array ('Download speed', 'bps', 'MBps'),
    'uploadspeed'   => array ('Upload speed', 'bps', 'MBps'),
    'pingtime'      => array ('Ping time', 'ms', 'ms')
);

$graphPeriods = array(
    'day'       => '-26hours',
    'week'      => '-8days',
    'month'     => '-32days',
    'quarter'   => '-3months',
    'half-year' => '-6months',
    'year'      => '-1year'
);

$theSensor = $_GET['sensorname'];

function callError($errorString) {
    print ("Content-Type: text/plain");
    print ("\n\n");
    printf ("Error message: %s", $errorString);
    die();
}

if ( ! array_key_exists($theSensor, $graphsAvailable) ) {
    callError("Invalid sensor name specified.");
    die(0);
}

$basicOptions = array (
    '-w', '700',
    '-h', '150',
    '--end', 'now',
);

foreach ( array_keys($graphPeriods) as $graphWindow ) {
    $options = $basicOptions;
    $options[] = '--start';
    $options[] = $graphPeriods[$graphWindow];
    $options[] = "--title";
    $options[] = $graphsAvailable[$theSensor][0];
    $options[] = "--vertical-label";
    $options[] = $graphsAvailable[$theSensor][1];
    $options[] = sprintf ("DEF:%s=%s:%s:AVERAGE", $graphsAvailable[$theSensor][2], $rrdDir.$theSensor.".rrd", $graphsAvailable[$theSensor][2]);
    $options[] = sprintf ("LINE1:%s#FF0000", $graphsAvailable[$theSensor][2]);
    $options[] = sprintf ("PRINT:%s:LAST:Cur\: %%5.2lf", $graphsAvailable[$theSensor][2]);
    if ( $graphsAvailable[$theSensor][0] == "Download speed" ) {
        $options[] = sprintf ("HRULE:300#00FF00:Max");
        $options[] = sprintf ("HRULE:240#0000FF:80 pct of max");
        $options[] = sprintf ("HRULE:200#FF00FF:Min guaranteed");
    }
    if ( $graphsAvailable[$theSensor][0] == "Upload speed" ) {
        $options[] = sprintf ("HRULE:16#0000FF:80 pct of max");
        $options[] = sprintf ("HRULE:20#00FF00:Max");
    }

    error_log(implode($options));
    $tmpname = tempnam("/tmp", "env");
    rrd_graph($tmpname, $options);
    $destname = sprintf ("%s%s-%s.png", $imageDir, $theSensor, $graphWindow);
    rename ($tmpname, $destname);
}

?>
<html>
<head>
<title>Environmental Sensor - <?php echo $graphsAvailable[$theSensor][0]; ?> </title>
<meta http-equiv="refresh" content="300">
</head>
<body>
<center>
Sensor: <?php echo $graphsAvailable[$theSensor][0]; ?><br/>
<?php
foreach ( array_keys($graphPeriods) as $graphWindow ) {
    printf ("Previous %s\n", $graphWindow);
    printf ("<img src=\"netspeedGraphs/%s-%s.png\"><br/><hr/>\n", $theSensor, $graphWindow);
}
?>
</center>
</body>
</html>

That’s about it. The sampler is run on the very first device connected to the cable modem – in my case, the firewall – and running it anywhere behind that first device, or having other devices directly connected to the cable modem, will probably give you bad data. Feel free to use it, though there are no guarantees any of it will actually work for you.

An Old Hobby Resurfacing

September 10th, 2016

For those few who have known me for a long time, you know one of my hobbies is model railroading. I haven’t had much chance to engage in it recently, though I did start an N-scale layout while I was in the townhome. Well, recently, I’ve joined a model railroad club (Neuse River Valley – nrvclub.net), and I’ve started working on the club’s HO layout. Since my interest is primarily in the underpinnings – the physical wiring, control system, etc – I’m working on setting up the JMRI server and related components for this layout. I’m also going to be doing some minor scenic work, but my main concentration will be on layout control.

The club has a laptop set up under the layout that’s intended for use with JMRI; today I got the drivers installed on the OS and managed to get decoder detection working on a temporary programming track. I’m starting to do some reading about JMRI and how it operates, and for the most part it looks like it all Just Works(r)(tm). Which, based on what it was like the last time I looked at it several years ago, is both a large relief and a not-so-small miracle.

The specs so far – the Windows 7 laptop is hooked up to a Digitrax PR3, which in turn is connected via LocoNet to a Digitrax DCS200. The WiFi router that we’ll be using for mobile phone control has to be replaced, but we will be replacing that soon. Once that’s done, I just have to scan all the locomotives on the programming track, and they should be controllable through JMRI.

We will not have actual switch control through JMRI – all the switches on this layout and the N-scale layout are manual, and the idea of making them remote controllable is a non-starter due to cost. I was told there are about 80 turnouts on the HO layout alone, and adding remote machines to them at $35 per is… well, cost-prohibitive is the nice way to put it.

Mind, when I start building my home layout, the turnouts will be remote powered, and ultimately JMRI will be controlling the turnouts. I’ve just got other things going on at the homestead that take priority over the railroad layout. Stay tuned for more updates on what I’m doing with the club layout – if I start making enough posts, I may spin up a new site dedicated to my model railroading endeavors. If I do, I’ll let you know here.

Hello again!

August 25th, 2016

Well hello again. I realized this morning that it’s been over six months since I’ve posted anything, and I’ve got a few things I might want to post about, so I thought I’d check in with everyone. Life has been rather busy of late – I moved to a new house (detached house with about 0.65 acres), and I’m in the middle of sprucing up the townhome to put it on the market. That means, among other things, that I’ve had to redo my home network, and I’ve got a few things to say about that. That will probably be a new post all its own.

This time, I want to focus on something I’ve been doing for work. Among my many other responsibilities as a systems administrator, I’ve dealt with quite a number of configuration management schemes. Most of them were little more than “if it breaks, make sure the configuration is current, otherwise leave it alone”. Which is to say, no configuration management at all. I’ve used CFEngine – way back in the past – and adapted Nagios to check on configuration items (a very ugly kludge – please don’t ever do that). Recently, I’ve started playing with Ansible, since that’s the tool that my current boss wants me to use.

Ansible is, in a word, lacking. Why do I say that? Several reasons. First – it’s a bigger and more involved version of the Expect SSH script I wrote (adapted from someone I knew at NCSU at the time, who later went on to Red Hat and then elsewhere) over a decade ago. It doesn’t really do much that my decade-old script isn’t capable of, so there’s no major benefit to it. It requires a huge amount of setup prior to actually using it, to get authentication (SSH public keys) and escalation (sudo privileges) right, and it can’t handle slow connections or VPN tunnels very well.

The major downfalls of Ansible are in its language and its operation – the playbook language is rather difficult to wrap your head around. It’s neither simple nor intuitive, and bears little to no resemblance to any already-existing programming or scripting languages. The problem with its operation is that it’s a one-shot deal – you have to actively manage errors or connection issues, as opposed to having a tool that retries connections or deploys automatically. If I start a deploy to 100+ systems, get any errors at all, and then get called to a meeting about something else entirely, I can guarantee you that I won’t remember to go back and fix those errors for a day or more, and that is a rather bad thing. A good configuration management system needs to take a config update and keep attempting to apply it until it succeeds or hits an error that requires admin intervention (e.g. a package conflict, as opposed to a connection timeout, which it should be able to handle on its own). It’s especially difficult if, as with Ansible, the result status is simply logged to the screen as opposed to a file.

Perhaps some of the issues I have with Ansible are because I haven’t gotten into it deeply enough – but if I’m perfectly honest, I shouldn’t have to dig into it any more deeply than I already have just to solve these issues. This is the final issue I have with Ansible – the documentation is, bluntly, atrocious. I could find almost no examples of how to write a playbook. The example playbooks I did find were from a git archive, where the commit messages told me what had been done most recently but offered no clue as to what a given playbook file was supposed to do.

Overall, I have to say, Ansible is over-hyped and under-performant. It comes across as an attempt by a programmer of mediocre skills to semi-automate systems administration tasks that said programmer shouldn’t be exposed to or aware of in the first place. For me, Ansible doesn’t give me enough ease of use or automation to make it worth the trouble it took to set it up in the first place.

The next chapter

January 17th, 2016

For those of you who don’t know (which shouldn’t be all that many of you, since I’ve announced it several other places), I started a new job on Jan 4 2016. The previous gig started out as a 9 month contract-to-perm, but there was a lot of what seemed like confusion and hesitancy on the company’s end to make me permanent. There were also several gratuitous insults offered to me – some of which I’m quite sure the company didn’t realize the extent of – so I started looking around quietly. The new gig found me during this phase, and after a coworker conducted an especially egregious attack against me in a public email, I stepped up the contacts a little bit, and less than two weeks later submitted my resignation effective Dec 31 2015. Well, all that’s water under the bridge, and while I hope the old company has learned some things from my departure, I also hope they do well in the future.

My new company is a very small startup based in Chapel Hill – smaller than I thought at first, actually. I am employee number 7; a week after I started, they brought on employee number 8. We have three programmers, two customer service specialists fresh out of college, a training specialist, a business specialist, and me. We have no venture capital investments, which is actually a good thing in this case as we’re also profitable and growing.

Enough with the tangent. On to the point of this update. The new gig is a remote one – I’ve been commuting about 35 to 45 minutes for the two weeks I’ve been working there, and that will continue next week, but then I start working from home full-time. I’ll go into the office if I need to, or if I feel like doing so for some reason, but the majority of my work time will be from my office in the basement. Which means, of course, that I need an office that has enough compute and display real estate to do the job, which means I had to upgrade the desktop. I had been working just fine, for the limited bits I used my office desktop, on a 2004-vintage Mac Mini. Honestly, if it had been able to drive two monitors, I wouldn’t have upgraded, but it can only handle a single monitor. I picked up a cheap BRIX from Intrex – an Intel Core i3 chip in a form factor smaller than the Mac Mini – added 8 GB of memory and a 250 GB SSD to it, and in all honesty I’m loving the new machine already. It doesn’t feel lightning fast, but it feels solidly capable. Best of all, the BRIX is designed to mount on the back of the monitor using the VESA mount. The only things sitting on the top of the desk are the two monitors and a slim DVD drive – and even the DVD drive will probably disappear soon.

Overall, despite the fact that I had to spend money (something that I really don’t like being necessary, though I’ll spend entirely too much money when I want to spend it), I’m pleased with the upgrade. The OS (Fedora 23) is installing / updating now, and I’ll probably finish setting up my environment tomorrow. Then I’ll wander in to the main room of the basement and do some more work on my model railroad layout. 🙂

Tangent: fitness

July 31st, 2015

About 4 months ago, I finally got my FitBit Charge HR and started using it to look at my fitness. I say “look at” because it has been just that – momentary looks, with no sort of history. I don’t like the way the FitBit web site presents the data – it confuses me and tries to make things too “candy-coated” – so I had to figure out a way to track trends myself. Oh, and before we go any further – this is not a review paid for by FitBit. This is just me telling you why I think having a FitBit and using it is a good idea – I’m getting zero benefit to writing this aside from the finger exercise involved in actually typing.

Fortunately, FitBit is really awesome about giving individuals access to their data through the API they have set up, and they’re also awesome about providing individuals access to the Partner API which allows access to intraday data. That was one of the major reasons I went with a FitBit instead of an Up3 from Jawbone – Jawbone says they allow access to your data, but in my testing of it, I couldn’t find a programmatic API and even the “data download” area of their web site only gave me data from a year ago, not current data.

So, I got myself access to the Partner API from FitBit, and started pulling down my personal data daily. I’ve only been doing this for about 7 days so far, so I don’t have very much in the way of trends yet, but it’s already started helping me understand some things about my habits. Since I’ve found it so useful, I figured I’d share what I’ve done in hopes that someone else will find it useful as well.

First things first – get yourself a FitBit. I chose the Charge HR because I wanted the intraday heart rate measurements, but I didn’t see the benefit to the location data the Surge provides. In hindsight, I probably could have made use of it, but it’s not something that I feel adds sufficient value to my analysis for the price differential. Once you have the FitBit – whatever model you end up getting – use it! No sense spending money on something that’s going to sit in your kitchen junk drawer.

Now that you have your FitBit, you need to open the door to downloading your data. This can get a bit confusing – it took me several tries to figure it all out – but stick with me here. Step one, register an application at https://dev.fitbit.com/apps/. I gave my app a name of “Personal” – the name doesn’t matter too much, it’s just something you have to put in. For this method, the OAuth 1.0 Application Type should be “Browser” and the OAuth 2.0 Application Type should be “client”. I used “http://localhost/callback” as the callback URL – this field has to be filled in, but for what we’re doing here, it doesn’t matter much what you put there. Once you’ve done that, send an email to “api@fitbit.com” and request access to the Partner API for intraday data. Be sure to include the app’s client ID as given to you after registering the app on the dev site. Please note – they are very supportive of personal use, but don’t try to slide in a commercial application that you’ll be selling by claiming that you want access for personal use. That’s just bad form. It may take them a while to get to your request depending on volume – it took about 3 weeks for me to get Partner API access after my initial email.

Now that you have access, you need to set up the authentication key. FitBit has decent documentation for doing this on their site at https://wiki.fitbit.com/display/API, but this is where it got confusing for me. I’m only going to cover the OAuth 2.0 authentication bits, since that’s what you need for heart rate measurements and it’s a superset of what OAuth 1.0 gets you. Please note that as of this writing, OAuth 2.0 at FitBit is in beta state, so it might break without warning. Buyer beware, caveat emptor, and all that. We’ll be looking at the “Authorization Code Grant Flow” at https://wiki.fitbit.com/display/API/OAuth+2.0.

The instructions tell us to “redirect the user to FitBit’s authorization page”. This really confused me, since I hadn’t directed myself anywhere yet – ultimately, it means I have to poke a FitBit URL with a well-known set of URL parameters, which include the application’s client ID as given to you by the “Manage My Apps” page (https://dev.fitbit.com/apps). The easiest way to do this for now is to type the following into the location bar of your web browser: https://www.fitbit.com/oauth2/authorize?scope=activity+heartrate+location+nutrition+profile+settings+sleep+social+weight&response_type=code&client_id=${ID_HERE}

Replace the ${ID_HERE} with your app client ID. This page will try to redirect you to your callback URL, which if you use the values above won’t exist, so you’ll end up seeing a URL in your location bar with a “code=” part to it. Save the long string after the “code=” – this is the part you need for the next step.
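If copying strings out of the location bar by hand annoys you, a little helper like this will pull the code out of the redirect URL for you. This is purely a hypothetical convenience script of mine, not part of the FitBit flow itself:

#!/usr/bin/python
# Convenience helper (not part of any FitBit-provided tooling):
# paste the URL you were redirected to and it prints the "code" parameter.
import urlparse

redirect_url = raw_input("Paste the redirect URL from your browser: ")
query = urlparse.urlparse(redirect_url).query
print urlparse.parse_qs(query)['code'][0]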

Next, FitBit tells us the application needs to “exchange the authorization code for an access token”. This must be completed within 10 minutes, or the code we got expires and we have to start over. For this, the response will be in JSON, so I used an interactive Python session. Here’s what I did:

$ python
>>> import requests
>>> import base64
>>> import json
>>> clientid = 'XXXXXX'
>>> secret = 'YYYYYYYYY'
>>> code = 'ZZZZZZZZZZZZZZZZZZZZZ'
>>> # The Authorization header is the client ID and secret, base64-encoded
>>> authStr = "Basic " + base64.b64encode("%s:%s" % (clientid, secret))
>>> authHdr = {'Authorization': authStr}
>>> # Pass the body as a dict so requests form-encodes it for us
>>> body = {'client_id': clientid, 'grant_type': 'authorization_code', 'code': code}
>>> req = requests.post('https://api.fitbit.com/oauth2/token', headers=authHdr, data=body)

You’re probably asking, “So what does all this mess mean?” Well, it becomes a little clearer when you replace the XXX’s with the client ID from the FitBit API page and the YYY’s with the application secret from the same page. Then replace the ZZZ’s with the code you got from your browser above.

Once this is done, dump the result of the request with:

>>> req.json()

This will show you the JSON notation for the request response. The important parts are the “access_token” and the “refresh_token” strings, so we’ll want to save those in another variable:

>>> access = req.json()['access_token']
>>> refresh = req.json()['refresh_token']

Now we want to save those two items to a file locally, since we’ll need both pieces of information in the future. The easiest way to do so:

>>> tok = {}
>>> tok['access_token'] = access
>>> tok['refresh_token'] = refresh
>>> with open ('.fitbitAuthFile', 'w') as fh:
...   json.dump(tok, fh)
...
>>>

Exit the interactive Python interpreter and confirm the “.fitbitAuthFile” file contains the access_token and refresh_token we just wrote to it. If it doesn’t, you’ll probably need to start the process over by going back to the web page to get a new code. If it does, congratulations, you’ve finished the hard part!

The actual retrieval of the data is both much simpler and much more complex. Simpler because we only have to read in the token information, test whether it’s expired and refresh it if so, then ask for the data we want. More complex because this is where processing the data comes into play. I’m saving data to spreadsheets through the openpyxl Python module. I haven’t finished developing the script or the spreadsheets, but you can download it in its current state from http://www.ncphotography.com/fitbitcollect.py. You’ll need to make some changes to insert the relevant values into places I’ve put generic all-caps strings, and please do keep in mind this was intended for a Linux (specifically, Fedora 21) system, not Windows. I don’t intend to make any changes to accommodate a Windows system either – I’m a Linux systems administrator by trade and I don’t get along with Windows. If there’s enough interest, I’ll update it in the future and/or upload the weight tracking spreadsheet template I use.
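To give you the flavor of the retrieval side without making you download the whole thing, here’s a trimmed-down sketch of the shape of it. This is not fitbitcollect.py itself – for one thing, it refreshes the tokens on every run instead of checking whether they’ve expired first – and you should confirm the endpoints against FitBit’s API documentation before trusting it:

#!/usr/bin/python
# Trimmed-down sketch of the retrieval step - not the actual fitbitcollect.py.
# It refreshes the tokens on every run (the real script checks expiry first)
# and just dumps one day of intraday heart rate data.
import base64
import json
import requests

clientid = 'XXXXXX'      # your app's client ID
secret = 'YYYYYYYYY'     # your app's client secret

# Read the tokens we saved during the authorization step.
with open('.fitbitAuthFile') as fh:
    tok = json.load(fh)

# Refresh the access token and save the new token pair back out.
authHdr = {'Authorization': 'Basic ' + base64.b64encode('%s:%s' % (clientid, secret))}
resp = requests.post('https://api.fitbit.com/oauth2/token',
                     headers=authHdr,
                     data={'grant_type': 'refresh_token',
                           'refresh_token': tok['refresh_token']})
tok = {'access_token': resp.json()['access_token'],
       'refresh_token': resp.json()['refresh_token']}
with open('.fitbitAuthFile', 'w') as fh:
    json.dump(tok, fh)

# Ask for today's heart rate at one-minute resolution (this is the part
# that needs the Partner API / intraday access).
bearer = {'Authorization': 'Bearer ' + tok['access_token']}
data = requests.get('https://api.fitbit.com/1/user/-/activities/heart/date/today/1d/1min.json',
                    headers=bearer).json()
print json.dumps(data, indent=2)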

Crunchtime Fun

December 15th, 2014

Welcome to the end of the year, when late projects suddenly get rushed to completion so boxes can be checked off and project managers can take credit for “having the initiative to push this project to completion”. It’s also the time of year when the systems admins realize they’re about to lose several days to several weeks of vacation time if they don’t take it, so you can see the conflict of interest there.

Well, I’m in the second category. I’ve scheduled my use-it-or-lose-it vacation days so they’re somewhat spread out, and I’m actually working the week between Christmas and New Year’s because I’m on call that week. In all honesty, things aren’t that bad this year – yes, there’s a mad rush to get projects out the door, but it’s not disrupting my schedule too much. So, I’m using the relative quiet and downtime to make plans for next year – mostly aimed at not putting myself in the position of having to burn two weeks of vacation time in December so I don’t lose it. This is my idea of crunchtime – and it’s quite a bit more fun than the typical crunchtime mess. 🙂

So, what are my plans so far? Well, I’m taking a page from Ingress, which has a new-to-me feature called “missions”. I’m making a list of places to visit / things to do over the course of the twelve months starting January 1 2015. Are these New Year’s resolutions? You might consider some of them to be, but I don’t. They’re waypoints that I hope to get to during my journey through 2015. Let’s take a look at some of the “things to do”:

  • New kitchen countertops
  • Faux stone accent wall (on the wall with the fireplace)
  • New backsplash in kitchen
  • Tile floor in kitchen
  • Finish suspended railroad in living room
  • New flooring through main level
  • Sell townhome, upgrade to detached single family

Now if that isn’t one of the most discriminatory terms I’ve ever come across…  why is it called a “single family” home? Are unmarried childless people not allowed to live there? Given my situation, that term is about as welcome as a burning bag of shit on the front stoop. Call it a “detached home” – don’t associate it with the assumption of a family involving spouses and children.

Ok, gripe mode off. I’ll try to warn you next time I hit a pet peeve, but can’t promise I’ll succeed. Anyway, the whole point to most of these items, as you can probably tell, is to improve the value of the townhome so I can maximize my profit when I sell. This is mostly so I can invest a large chunk of the profits, but a small part will also help fund the (possibly multiple) road trip(s) I want to take throughout the year. Some of the cities already on the list:

  • Tampa FL
  • Miami FL
  • Washington DC
  • Williamsburg VA
  • Charleston SC

What do these cities have in common? Well, aside from the fact that they’ve made this list, I’m not telling. 🙂 Seriously, though, if you know me, you probably have a good idea what the rationale is, even if you don’t know specifics. I was going to make this a list and modify it throughout the year – I might still do that, but right now it’s time for me to go get lunch (more specifically, visit the gym then get lunch), so I’ll leave it at that.

Home networking done right

June 6th, 2014

This, ladies and gentlemen, and children of all ages, is how you do home networking correctly. First, you start with a central wiring panel:

[photo: wiring-1]

Notice how there is a module for cable and telephone distribution on the left and three modules for network distribution on the right? Yes, start there. Hook up the cable and phone distribution first – incoming lines go behind the module, outgoing to the house go in front. Networking lines to the house go behind the modules.

[photo: wiring-2]

Make sure your terminations are clean – you want a little bit of slack, a little bit of what would be called a “drip loop” for an aquarium setting, but not so much that the excess cabling gets in your way.

Then you connect active computers to one or two ports elsewhere in the house and start verifying your infrastructure bits work. See the green lights on the switch? Green lights are good:

[photo: wiring-3]

Once you’ve got one or two good distribution connections, add your home server:

[photo: wiring-4]

Make sure it has power, and make sure your other machines can get to it in every way you need to get to it – SSH, VPN, RDP, VNC, whatever.

Now, finish cabling the distribution panel to the switch:

[photo: wiring-5]

If you have the ability, you want to make your own custom-length cables. Seriously, you don’t want 4-foot-long cables hanging down looking like an overturned bowl of spaghetti – that’s just amateurish.

Finally, add the LCD panel and keyboard for the home server, just in case you do something stupid and break network connectivity to it:

[photo: monitor2]

If you’re competent, you’ll use this monitor/keyboard maybe three times in your entire life, save for power loss events, which are really the power company’s fault, not yours.

Now, young padawan, go enjoy the fruits of your labor – if you’ve managed to get everything accomplished properly, you deserve a beer. Which is where I am headed as soon as I hit the “Publish” button!

My Happy Place

May 22nd, 2014

I have an awesome job right now. The group I work with is talented and open about sharing information. The management team is supportive of the workerbees like me, and even lets me vent when I need to. Which I unfortunately need to more often than I should.

Like today. I ran into two head-scratchers within 10 minutes of each other today. First was when I tried to find out why a Tomcat user was required for a new system when the old system didn’t have Tomcat anything. Nor does the new system. As it turns out, they tried to load the new system with Tomcat at first, since it made more sense to do so that way in our environment. Well, then the consultants on site told us that Tomcat isn’t really supported as well as the advert / marketing materials say it is, so we punted back to the “old” way of doing things – but didn’t bother to remove the Tomcat user bits. So yes, we have a Tomcat user running a suite of JBoss BPMS processes that have nothing to do with Tomcat… and I sometimes wonder why our user authorization scheme is so dorked up. Well, no longer.

The next one hurts a bit more, honestly, because it comes from people that I expect to know better. The aforesaid BPMS system is having performance issues, I’m told, so the consultants want me to replace OpenJDK with Oracle’s JDK, because “the font packages are supposed to come with the JDK”.

Wait, what? Since when do font packages come with a freaking JDK? Especially when the font packages you asked for were installed separately from any JDK anything via a “yum install” dealing with – wait for it – X font packages!?

Two nights ago, I met a current employee of Red Hat at the climbing gym, and mentioned my dismay at the support org’s response to a ticket one of the on-site consultants filed. I would like to apologize for that assumption, since I now realize it was an appropriate response given the consultant’s demonstrated skill level. The response which would have annoyed me was indeed completely appropriate for this consultant – an unfortunate fact which reflects badly not on RH Support, but on RH Consulting.