Giving Chrome Some Pep

I’ve been using Google Chrome on my MacBook Pro for a long while, and I’ve noticed that some websites take their time getting going. In some ways, it feels like the browser is panting, trying to catch its breath. Today, while trying to solve a work problem, I accidentally stumbled over a neat way to give my Chrome browser a bit of a performance boost. It seems to help most on interactive sites, like my work help desk or PNC online banking.

The trick is to create a small RAM drive on the system, copy the Chrome profile over, symlink to that profile so Chrome can find it, and then start using Chrome. As Chrome works, things like settings and cache data go to RAM instead of the HD on my MacBook Pro. Then I use rsync to copy the data into a backup folder, just in case my MacBook Pro suffers a kernel panic or anything else that would dump the RAM drive.

There are a few pieces to this, mostly from scripts I copied off the network.

I copied a script called mount-tmp.sh and made only a few small adjustments; specifically, I changed the maximum RAM drive size to 512MB.
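I won’t reproduce the script I borrowed, but here is a minimal sketch of what a mount_tmp.sh like mine does on macOS, assuming a 512MB RAM disk that ends up reachable at ~/tmp (the volume name and mount point are my choices, not anything official):

#!/bin/bash
# mount_tmp.sh (sketch) -- create or tear down a 512MB RAM disk at ~/tmp.
# macOS sizes ram:// devices in 512-byte sectors: 512MB = 512 * 2048 sectors.
if [ "$1" = "umount" ]; then
    # Detach whatever device is backing ~/tmp
    hdiutil detach "$(df ~/tmp/ | tail -1 | awk '{print $1}')"
else
    DEV=$(hdiutil attach -nomount ram://$((512 * 2048)))
    diskutil erasevolume HFS+ "tmp" $DEV
    # Link the new volume into place so ~/tmp points at the RAM disk
    ln -sfn /Volumes/tmp /Users/andy/tmp
fi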

Then I created two bash scripts: one to check the profile in to the RAM drive, and one to check it back out to the HD. Since I wrote them from scratch, here they are:

check-in.sh


#!/bin/bash
# Mount the RAM drive at ~/tmp
/Users/andy/mount_tmp.sh
# Move the Chrome profile onto the RAM drive and leave a symlink behind
mv /Users/andy/Library/Application\ Support/Google/Chrome/Default ~/tmp
ln -s /Users/andy/tmp/Default /Users/andy/Library/Application\ Support/Google/Chrome/Default
# Take an initial backup copy on the HD in case the RAM drive is lost
rsync -avp --delete /Users/andy/tmp/Default /Users/andy/Library/Application\ Support/Google/Chrome/Default_BACKUP
echo "Complete."

check-out.sh


#!/bin/bash
# Final sync of the RAM-drive profile to the HD backup
rsync -avp --delete /Users/andy/tmp/Default /Users/andy/Library/Application\ Support/Google/Chrome/Default_BACKUP
# Remove the symlink and move the profile back to the HD
rm /Users/andy/Library/Application\ Support/Google/Chrome/Default
mv /Users/andy/tmp/Default /Users/andy/Library/Application\ Support/Google/Chrome
# Unmount the RAM drive
/Users/andy/mount_tmp.sh umount
echo "Complete."

If you give this a shot as well, I would love to hear about your experience with this little speed hack! Hope you enjoy!

Moment of Geek: Raspberry Pi as Thermal Canary

A few days ago, I ran into a problem at work. The small Mitsubishi air conditioner decided to take a cooling nap in the middle of the day, so my office, which is also the machine room at work, was up around 85 degrees Fahrenheit. I was used to this sort of thing, summers bringing primary cooling systems to their knees, but this time I had a huge A/C unit in the ceiling that I had elected not to have removed, just in case. So I turned it on, set its thermal controller to 70 degrees, and the room temperature tumbled in about ten minutes. Right after the room temperature was back to normal, and I had service out to look at my little wall-mounted A/C unit, the damn thing started functioning normally again. The tables had turned on IT; this is exactly what happens to our users. They sit there and struggle, and then we arrive and the machines behave like nothing at all was wrong.

So I had the big A/C and its smaller wall-mounted sibling both running overnight, and I faced a problem: I wanted to know the temperature in my machine room without having to buy a TempPageR device. I had one long ago, and it was rather expensive. Then I looked at my desk and noticed my Raspberry Pi, just sitting there, doing nothing of consequence. I knew from a cursory Google search that the Raspberry Pi had a CPU temperature interface hidden somewhere, and I was happily surprised to find a website detailing how to use that exact feature from Python to write a temperature log and, optionally, graph it. It was mostly copypasta, adapting things I found online by copy and paste and hammering them here and there until they worked. I have programming skills, but they are dated and rusty, and I had never used Python specifically. Still, my first effort was a success: a 1-second temperature logger. I was satisfied, but I knew I would not be happy with Celsius, and I also knew the reading was skewed by the heat of the CPU itself, so the reported temperature was quite a bit higher than the room temperature.
I started to tinker. First I searched for the equation to convert C into F (F = C × 9/5 + 32), which put the CPU at 115 degrees F. When I turned on the big A/C unit, its thermal controller displayed the ambient room temperature: 74 degrees F. So I subtracted a constant 44 degrees from the CPU temperature, which “calibrated” it to a rough approximation of the room temperature. Eagle-eyed readers may notice that my math is off, but after I moved the Pi over to the server stack, I had to adjust for a higher CPU temperature, since it sat further away from the wall A/C unit. So now I had a 1-second temperature logger. Then I turned on graphing, and the entire program crashed and burned; I wasn’t running the application in an X Windows environment, so I tore out the graphing library and code, since I was never going to use that feature anyway.

That, of course, was not enough to replace the TempPageR device. I needed some alarm system to alert me to what was going on. I thought through the interfaces: email, SMS, iMessage, email-to-telephone-call cleverness, and each thought brought me up against a different version of the cliffs of insanity. I could probably have smashed and hacked my way to a solution involving some ghastly labyrinth of security settings, passwords hashed with special algorithms that are only available on ENIAC simulators that only run on virtualized Intel 8086 processors with the Slovenian language pack loaded and the Cyrillic character set; an arrangement that would have been an epic pain in the ass. But earlier in the day, I had tripped over an advertisement for a Slack app that could accept incoming data from the Pingometer website. I have a Pingometer account, a free one, because I’m a cheap bastard. Its single pinger externally checks my fiber optic connection at work, keeping AT&T on their toes when it comes to outages. Pingometer can send to Slack’s incoming webhooks. An incoming webhook call comes from some source that makes a really simple HTTP request: it wraps JSON in HTTP and sends the request to Slack’s servers. Slack then does everything needed to make sure the message is pretty and ends up on the right channel, on the right team. This was my alert mechanism.
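To give a sense of how simple this is, here is a minimal sketch of an incoming webhook call using curl; the URL token and message text are placeholders, not my real ones:

# One HTTP POST with a JSON body is the entire integration.
curl -X POST \
     -H 'Content-Type: application/json' \
     -d '{"text": "Machine room is at 91 degrees F -- check the A/C"}' \
     https://hooks.slack.com/services/T0000/B0000/XXXXXXXXXXXXXXXX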

So I did another Google search, found the intersection of Linux, Python, and Slack, and with some more copypasta and tinkering I had a Python app that displayed the room temperature in degrees F and made my Slack a noisy mess, as it sent an incoming webhook request every second. One more tweak, a super-simple IF-THEN block that set my high-temperature mark at 90 degrees F, and I let it go.

There is something satisfying about being able to hack something together, cobble it really, and have it work without blowing up the terminal, blowing up Slack, or otherwise failing. So now I have a $35 Raspberry Pi running as a rough temperature alarm that lets my System Admin and me know at the same time over Slack. I’m quite happy with how it all worked out. No obnoxious email settings, ports, security frameworks, or awkward and obtuse hashing routines; just a single JSON-formatted HTTP call and BAM, all set. An alarm with a date stamp, time stamp, and temperature, delivered right to my iPhone with automatic notifications from Slack, so it can wake me up if I need it to.

So anyways, without further ado, here is the code:


from gpiozero import CPUTemperature
from time import sleep, strftime
import json
import requests

# Set the webhook_url to the one provided by Slack when you create the webhook
# at https://my.slack.com/services/new/incoming-webhook/
webhook_url = 'https://hooks.slack.com/services/####/#####'

cpu = CPUTemperature()

def write_temp(temp):
    # Append the timestamped temperature to a CSV log
    with open("cpu_temp.csv", "a") as log:
        log.write("{0},{1}\n".format(strftime("%Y-%m-%d %H:%M:%S"), str(temp)))
    # Post an alert to Slack when the adjusted temperature passes 90 degrees F
    if temp > 90:
        slack_data = {'text': "{0},{1}\n".format(strftime("%Y-%m-%d %H:%M:%S"), str(temp))}
        response = requests.post(
            webhook_url, data=json.dumps(slack_data),
            headers={'Content-Type': 'application/json'}
        )
        if response.status_code != 200:
            raise ValueError(
                'Request to slack returned an error %s, the response is:\n%s'
                % (response.status_code, response.text)
            )

while True:
    temp = cpu.temperature
    # Convert C to F, then subtract the rough 44-degree CPU self-heating offset
    temp = (9.0 / 5.0 * temp + 32) - 44
    write_temp(temp)
    sleep(1)


It has been forever since I’ve needed to program anything. Once I was done, and I saw it work the way I wanted it to, I was quite happy with myself. I haven’t felt this particular sense of accomplishment since my college years. It was quite a welcome feeling.

What Roy Batty Saw

We hired a new coworker and learned that he needed a Cisco VOIP phone. I had one spare unit left, an older Cisco 7912. I went to plug it in, and Power over Ethernet (POE) simply wasn’t registering on the phone. I knew for a fact that the phone itself was fine and that the switch I was plugging it into was functioning well. I also believed my station cables were working fine, so I used my Fluke LinkRunner to test the cables and the port. Everything checked out; the Fluke indicated proper POE. However, when I plugged the phone in: nothing at all.

I knew this port had a history of being troublesome, but prior to this I’d had a Cisco 7940 phone working well in this spot, so it was a mystery why a 7912 wouldn’t also work. I tested the port a few times, each time seeing proper POE voltage and wattage. Even the switch noticed my Fluke tester and registered that a device was consuming POE on the port in question. I couldn’t understand why a phone that works well in one place wouldn’t work in another when everything was equal. Obviously, not everything was as equal as I thought. Something had to be wrong.

I looked at the Fluke LinkRunner; it listed POE coming in on pins 1 and 2 for the positive circuit and 3 and 6 for the negative circuit. So I took the Fluke to my testing lab and looked at POE coming from a Cisco Catalyst 3560 switch. There, the Fluke indicated that 3 and 6 were positive and 1 and 2 were negative. I immediately figured out the issue. Ethernet jacks can conform to T568A or T568B; the difference is subtle, just a swapped pair of conductors. I did a little desk diving and popped the cover off the jack in the wall. Everything I deal with is always T568B. Always. The jack in the wall? T568A. Armed with what I knew, I tugged the old keystone jack out and replaced it with the last good one I had, punched it down, and tested again. The Fluke indicated POE, 3-6-1-2. I plugged in the phone and pop! The phone came to life!
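For reference, the two pinouts differ only in where the orange and green pairs land; pins 4, 5, 7, and 8 are identical:

T568B: 1 white/orange, 2 orange, 3 white/green, 4 blue, 5 white/blue, 6 green, 7 white/brown, 8 brown
T568A: 1 white/green, 2 green, 3 white/orange, 4 blue, 5 white/blue, 6 orange, 7 white/brown, 8 brown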

So, just when you think you can get on with things, always check the standards. You always have to assume that nobody else is. What a mess, but at least it was an easy fix.

FreeBSD Crater

I started looking at FreeBSD after being drawn in by FreeNAS, which led me to ZFS, the primary file system that FreeNAS and FreeBSD use. At work, I am looking at regularly handling enormous archival files, and the further along I went, the more I realized I would also need storage that lasts a long time. There are a lot of ways to ensure that archival files remain viable: error-correcting codes, using the cloud, rotating media. All of this led me to learn more about ZFS.

I have to admit that at first, ZFS was very strange to me. I’m used to HFS, EXT3, and EXT4 file systems, with their usual vocabularies: you can mount one, unmount it, and check it with an option to repair. ZFS adds a whole new universe of vocabulary. There are two parts: the zpool command defines the devices and files you want your file system built from, and the zfs command lets you manipulate the file system itself, in terms of mounting and unmounting. Error checking and repair is a feature called scrub. The commands themselves aren’t difficult to grasp, but the nature of this file system is very different; it lets the administrator do things other file systems just can’t. You can create snapshots, manipulate them, and even draw older snapshots, even out of order, forward as clones. Say you have a file system and you’ve been making regular snapshots every 15 minutes. If you need something from that file system as of snapshot 5 out of 30, you don’t have to roll the whole file system back; you can just pluck out snapshot 5 and create a clone. The cloning procedure feels a lot like “mounting” a snapshot so you can access it directly. If you destroy a clone, the snapshot is undamaged; it just goes back into the pile whence it came. The big claim to fame for ZFS is that many regard it as the safest file system: if one of the parts in the zpool should fail, the file system can heal itself. You can tear out the bad part, put in a new one, and the file system will rebuild and recover. In a lot of ways, ZFS resembles RAID 1, 5, or 6. Apparently RAID 5 has a flaw once you get to big data volumes, and from what I can gather, ZFS is the answer to those problems.
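A quick sketch of the snapshot-and-clone dance, with hypothetical pool and dataset names:

# Take rolling snapshots of a dataset (names are made up for illustration).
zfs snapshot tank/docs@snap5
# Later: surface snapshot 5 as a browsable clone without rolling anything back.
zfs clone tank/docs@snap5 tank/docs-restore
cp /tank/docs-restore/lost-file.txt /tank/docs/
# Destroying the clone leaves the snapshot itself untouched.
zfs destroy tank/docs-restore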

So I have ZFS ported over to my MacBook Pro, and I’ve been playing around with it for a little while. It works as advertised, so I’ve been enjoying it. One of the biggest stumbling blocks was the concept of zfs mounting and unmounting and how they relate to zpool’s export and import commands. I started with a fully functional ZFS file system: created the zpool, then mounted it on the operating system. The next step was to unmount the file system and export the zpool, exploring how you can fully disconnect a ZFS file system from a host machine and then reverse the process. While doing this, I was reluctant to use actual physical devices, so I used blank files as members of my zpool instead. I was able to create, mount, and then unmount the entire production, and then export the zpool. But when I looked at how to reverse that and import the zpool, the system just told me there weren’t any pools in existence to import. This had me thinking that ZFS was a crock. What is the point of exporting a zpool if there is no hope of importing it afterwards? It turns out there is a switch, -d, which you have to use to point the import at the directory holding your file-backed devices; that’s the trick of it. Once I got that, I became much more comfortable using ZFS, or at least exploring it.
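Here is roughly what that experiment looks like; the file names and pool name are mine, and the sizes are arbitrary:

# Build a pool out of blank files instead of physical disks.
mkfile 128m /Users/andy/zfs/disk1 /Users/andy/zfs/disk2
zpool create testpool mirror /Users/andy/zfs/disk1 /Users/andy/zfs/disk2
zpool export testpool
# A bare "zpool import" only scans real device paths, so it reports no pools.
# Point it at the directory holding the file-backed members instead:
zpool import -d /Users/andy/zfs testpool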

So then today I thought I would explore the source of FreeNAS, which is FreeBSD. BSD is a kind of Unix operating system, so I thought I would download an installation image and try it out in VirtualBox on my MacBook Pro. I started with the image FreeBSD-10.2-RELEASE-amd64-dvd1.iso and got VirtualBox up and running. The installation was very familiar, and I didn’t run into any issues. With the FreeBSD OS up and running, I thought I should add the VirtualBox Guest Additions. I assumed I could just have VirtualBox attach the additions as an optical drive and the OS would notice and mount it for me in /mnt or /media. No. That was a no-go. I then looked online, searched for VirtualBox Guest Additions, and found references to procedures in the “ports” section of FreeBSD. I tried it, and it told me it couldn’t proceed without the kernel sources. So then I searched for that.

This turned into a fork/branch mess, and I knew that familiar sinking feeling all too well. You try to fix something and that leads to a failure, so you look for help on Google and follow a fix, which leads to another failure, and on you go. This branching and forking leads you on a day-wasting misadventure. The notion that you couldn’t get what you wanted from the start just sits there on your shoulder, reminding you that everything you do from this point forward is absurd. There is a lot of bullshit you are wading through, and the smart move would be to give up. But you can’t give up, because of the time investment; you want to fight it out, to justify the waste of time.

So the battle with FreeBSD began. At the start, we need the kernel sources; okay, use svn. Not there. Okay, how to fix that? Get svn. Sorry, can’t do it as a regular user. Try sudo; the command doesn’t exist. Look for su; nope, not that either. Try to fix that; can’t. Log in as root and try; nope. At that point I pretty much reached my limit with FreeBSD and gave up. I couldn’t get the VirtualBox Additions added, svn was impossible to load, sudo was impossible to load. Fine. So then I thought about just screwing around with ZFS on FreeBSD, to rescue some semblance of usefulness from this experience. No, you aren’t root, piss off. I even tried SSH, but you can’t get in as root, and without sudo there is no point in going forward.

So, that’s that for FreeBSD. We’re up to version 10 here, but it is still firmly bullshit. There are people who are massively invested in BSD, and they will no doubt be grumpy when I call out their OS for its obnoxiousness. Is it ready for prime-time use? Of course not. No kernel sources included, no svn, no sudo, no su, no X for that matter; but honestly, I wasn’t expecting X.

It points to the same issues that dog Linux: if you don’t accept the basic spot where you land post-install, then you are either trapped with Google for a long while or you just give up.

My next task will be to shut down the FreeBSD system and dump all the files. At least I only wasted two hours of my life screwing around with the bullshit crater of FreeBSD. What have I learned? Quite a lot. BSD, I’m sure, is good, but to use it and support it?

Thank god it’s free. I got exactly what I paid for. Hah.

Surprise! Scan-to-Folder is broken!

That’s what we faced earlier this week in our Grand Rapids office. It was a mystery why, all of a sudden, a Canon iR-3235 copier stopped working when it came to its “Scan to Folder” function. For Canon, “Scan to Folder” opens a CIFS connection to wherever you point it and deposits a scanned PDF at the destination. Everything up to Monday was working well for us.

After Monday, it was broken. Thanks to a Google Form linked to a Google Spreadsheet, I have a handy way to log the changes I make to the network: I open the form, enter my name and the change, and the spreadsheet catches the timestamp automatically. So what changed on Monday? I had been using Wireshark and found a flurry of broadcast traffic using two protocols, LLMNR and NBNS. The first, LLMNR, is only useful on small ad-hoc networks that lack a standard DNS infrastructure; since we have a fully fleshed-out DNS system running, LLMNR is noisy and superfluous. NBNS is an old protocol, and turning it off system-wide is accepted best practice. So I turned off NBNS on all the workstations and turned it off on the servers as well. It’s 2016; what could possibly need NBNS?

Then we discovered that our older Canon iR-3235 copiers suddenly couldn’t save data to CIFS folders. We verified all the settings, and there was no reason whatsoever the copiers couldn’t send data to the server, or so we thought. The error from the copier was #751, a vague error code; nothing we could find online pointed to #751 being a protocol problem.

I can’t recommend instituting a change-tracking system highly enough for any IT shop. Having a log, and being able to pin down exactly what happened and when, was invaluable in solving this problem. As it turns out, the Canon copiers don’t require NBNS itself; they require something that disabling NBNS takes away. When you turn off NetBIOS over TCP/IP on a server, that also closes TCP/139, the NetBIOS session port. Modern CIFS implementations carry their traffic over the other CIFS port, TCP/445, but these old Canon copiers only speak over TCP/139. So when I turned off NBNS to tamp down the broadcast traffic, I accidentally made the server deaf to the copiers. Turning NBNS back on re-opens TCP/139, and that fixes these old Canon copiers.
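A quick way to see which CIFS doors are open on a server is a simple port probe; here is a sketch with nc, using a made-up hostname:

# TCP/445 is modern CIFS; TCP/139 is the legacy NetBIOS session service
# that these older copiers depend on.
nc -z -w 2 fileserver.example.com 445 && echo "445 open"
nc -z -w 2 fileserver.example.com 139 && echo "139 open"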

Apple’s Activation Lock

I just spent the last hour bashing my head against Apple’s Activation Lock on a coworker’s iPad 2. They brought it to me because it had nearly every assistive-mode option turned on, and it was locked with an unknown iCloud account. I tried to get around the lock, even to return the device to factory specifications, to no avail. Even the factory reset ends up crashing into the Activation Lock.

It’s heartening to know that Activation Lock took the guts out of the stolen-device market for Apple mobile devices, but in this particular case it’s creating a huge headache. There is no way for me to move forward with fixing this issue because the iPad only hints at its owner with an obscured email address, b******@gmail.com. I don’t know whose that is, and there is no way for me to figure it out. So this device is pretty much bricked, and I have no choice but to send the user straight to an Apple Store with instructions to throw the device on their mercy.

If you are going to give away or sell your Apple device, make sure you TURN OFF ACTIVATION LOCK. Nothing, not even DFU mode or a factory reset, can defeat the lock. There were some hacks that used to work, but Apple catches on quickly and updates iOS to close each hole soon after it appears.

I won’t pick a fight with Apple over this; it was a clear and present requirement that they met. It just makes issues like this one impossible for people like me to resolve. The best way around it is to secure each and every device with an iCloud account and write the iCloud username and password down in a legible, memorable, safe place! Without the iCloud account details or a trip to the Apple Store, the device is just so much plastic, metal, and glass.

Vexatious Microsoft

Microsoft never ceases to bring the SMH. Today I attempted to update a driver for a Canon 6055 copier here at the office. The driver I had was a dead duck, so off I went to get the handy-dandy UFR II driver. I downloaded it, noted that it was for 64-bit Windows Server 2012 R2, and selected it. Then I went to save it, and this is the error that greeted me:

“Printer Properties – Printer settings could not be saved. This operation is not supported.”

So, what the hell does this mean? Suddenly the best and brightest Microsoft has to offer cannot save printer settings, and saving printer settings is “not supported.” Step back and think about that for a second: saving your settings is not supported.

The error is not wrong, but it is massively misleading. It doesn’t come from the print driver system but rather from the print sharing system, and there is no indication of that anywhere. What’s the fix? You have to unshare the printer on the server, then update the driver, and then re-share the printer. The path is quick: uncheck the share option on the neighboring tab, go back, set your new driver, then turn sharing back on. It’s an easy fix; however, because the error is so poorly written, you don’t know where to go to address it. A more elegant system would either tell you to disable sharing before changing drivers or, since you are already sharing and trying to install a new driver, programmatically unshare, save the driver, then re-share, hiding all of it from the administrator. That’s not what Microsoft does; they write awkward, poorly stated errors that lead you on a wild goose chase.

But now I know, so that’s half the battle right there. Dumb, Microsoft. So Dumb.

Network Monitoring

I’m in the middle of a rather protracted evaluation of network infrastructure monitoring software. I’ve started by looking at Paessler’s PRTG and SolarWinds’ Orion, and in January I’ll be looking at Ipswitch’s products.

I’ve also started looking at Nagios and Cacti. That’s where the fun-house mirrors start. The first big hurdle is cost versus no cost. The commercial products mentioned above are rather pricey, while Nagios and Cacti are GPL, open-source, and principally available at no cost.

PRTG made for an engaging evaluation; however, I ran into one of the first catch-22s of network monitoring software: Symantec Endpoint Protection considers network scanning provocative, so the uneducated SEP client blocks the poller because it believes it to be a network scanner. I also ran into a bit of a headache with PRTG when the web client didn’t register changes as I expected. One thing I have come to understand about the commercial network products is that each appears to have a custom approach to licensing. PRTG is priced per individual sensor, Orion is based on buckets, and I can’t readily recall Ipswitch’s design, but I think it was based on nodes.

Many of these vendors seem to throw darts at a wall, sometimes hitting and sometimes missing. PRTG was okay, though it created a bumper crop of useless alarms; SolarWinds Orion has an exceptionally annoying network discovery routine; and I haven’t uncorked Ipswitch’s product yet.

I don’t know if I want to pay for this sort of product. It also seems to be one of those arrangements where, if I bite on a particular product, I’ll be on a per-year budget treadmill for as long as I use it, unless I try the no-cost options.

This project may launch a new blog series, or not, depending on how things turn out. Looking online didn’t pan out very well; there is something of a religious holy war surrounding these products. Some people champion the GPL products; others push whatever solution they went with when they first decided on a product. It’s funny, but now that I care about the network, I’m coming to the party rather late. At least I don’t have to worry about the hot slag of “alpha revision software,” as much of the provider space seems quite mature.

If you work in the IT industry, please comment with your thoughts and feelings about this category, along with any recommendations or experiences. I’m keenly aware of what I call “show-stopper” issues.

Archiving and Learning New Things

As part of the computing overhaul at my company, each workstation we overhauled had its user profile extracted. This profile contains documents, downloaded files, anything on the Desktop, that sort of thing. There never really was any centralized storage until I brought it to life later on, so many of these profiles are rather heavy with user data, ranging all the way up to about 144 gigabytes each. This user data primarily serves as a backup, so while it isn’t essential to the operation of the company, I want to keep as much as I can in long-term storage, maximally compressed.

The process started with setting up an Ubuntu server on my new VMware host and giving it a lot of RAM. Once the Ubuntu server was established, which on its own took a whole five minutes, I found a version of the self-professed “best compression software around,” 7zip, and installed it on the virtual Ubuntu server. Then I did some light reading on 7zip; the general rule of thumb appears to be “throw as much as you can at it and it will compress better,” so I maxed out the application: word size, dictionary size, the works. Then I started compressing the folders containing all the profile data I had backed up earlier. Throwing 144 gigabytes of data at a maxed-out 7zip takes a really long time. Then I noticed the older VMware cluster, realized nothing was running on it, and for its swan song set up a second Ubuntu server with duplicated settings and pressed that into service as well.
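For the curious, this is roughly the shape of the invocation I mean; the archive name and source path are placeholders, and the exact numbers are just “maxed out within reason”:

# a = add to archive; -mx=9 = maximum compression level;
# -md=1024m = large dictionary; -mfb=273 = maximum word size (fast bytes).
7z a -t7z -mx=9 -md=1024m -mfb=273 profiles.7z /backups/profiles/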

I then thought about getting a notification on my phone when the compression routine finished, but by the time I thought of it, I had already started the 7zip compressor on both servers, and both were far enough along that I didn’t want to cancel either operation and lose the progress I had made compressing all these user profiles. I am not a Bash shell expert, so it took a little digging to find that there already was a way to temporarily freeze an application and queue more commands behind it, so that when the first application completes, the next goes immediately into operation: you press Control-Z, which suspends the application, and then run “bg %1 ; wait %1 ; extra command”. Then I thought about how I’d like to be notified and dug around for an email method. None of the servers I had put together had anything at all in the way of email servers, and I really wasn’t keen on screwing around with postfix or sendmail. I discovered a utility called ssmtp that did the trick. Once I configured it for use with my workplace Office365 account and did some testing, I had just what I was looking for. I suspended the compression on both servers and appended the email utility to run after the application finishes. When the compression is done, I will be emailed.
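Put together, the trick looks something like this; the recipient address is a placeholder, and the ssmtp.conf values are the sort of thing I set for Office365, not an exact copy of my config:

# /etc/ssmtp/ssmtp.conf (sketch):
#   mailhub=smtp.office365.com:587
#   AuthUser=me@example.com
#   AuthPass=********
#   UseSTARTTLS=YES
#
# 7z is already running in the foreground; press Ctrl-Z to suspend it, then:
bg %1 ; wait %1 ; printf "To: me@example.com\nSubject: compression done\n\nAll profiles compressed.\n" | ssmtp me@example.com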

All in all, quite nifty, and it only took a few minutes to set up. Once I’m done with this task, I can eliminate the “junky” Ubuntu server on the old VMware host altogether and trim back the Ubuntu server running on the new one. I quite love Ubuntu: it’s quick and easy, set up what you want, tear it down when you don’t need it anymore, or put the VMware guest on ice as an appliance until you need it again sometime later. Very handy. Not having to worry about paying for it or licensing it is about as refreshing as it gets. I just need something to work a temporary job, not a permanent solution. Although, considering how much malware is out there, the trade-off between Linux’s difficulty for end users and its remarkable computing safety may eventually tip in favor of Linux as a primary workstation operating system. There is still a long while before Linux is ready for end-user primetime. I sometimes wonder what it will take for the endless vulnerabilities of Windows to break Microsoft. Hope springs eternal!

Trials

A major Fortune 500 company has a world-renowned hiring trial for its new IT staff. There are all the usuals, the resumes and the interviews, but there is also a fully funded practical trial as part of the application process. The job itself is cherry: practically autonomous, with real challenges and true financial backing, so the winner can dig in and achieve serious results.

The trial is rather straightforward: given a property address, you must approach, perform an intake procedure to discover what is required, and then plan and execute whatever is needed to solve the IT need.

The property has one person: a newly hired young woman sitting at a central desk on the ground floor. She has a folder containing a script that she reads to each candidate:

“Welcome to your trial, this building has everything required to run a branch of our company. Every computer, networking component, and server component is placed and wired properly. Your task is to configure all the equipment throughout the branch properly. You will find all the resources you need to complete this task within the building. You have one week to complete this task. Good Luck.”

The young woman then folds her hands together and waits.

Several candidates engage with the trial, hoping to land the cherry job, and they have heard about the young lady at the reception desk. They pass all the requirements and eagerly arrive to try their hand at the trial. They impatiently sit through her canned speech and quickly head off to the basement to start in the server room.

Candidates come and go; some pass and some fail. The trial is to get the branch fully operational: on the last day of the week the branch is staffed, and the candidate must ensure that all the preparations are in place and that everyone can work without a technological failure. The trial is winnable but very arduous.

The young lady sitting at the central desk on the ground floor has a secret. She has a shoebox locked in a drawer of her desk, and around her neck, on a golden necklace, is the key. Her instructions are specific: if a candidate approaches and engages her pleasantly, showing sincere interest in her role in the branch, and not as the destination of some last-ditch effort, she is to pause the conversation, unlock the desk, and produce the shoebox for the candidate. Within the shoebox is the answer to the trial: every specific requirement written in clear, actionable text, along with a memory stick containing every proper configuration and a full procedure list that will bring the branch to full operation without a single hiccup. Everything from networking configurations to the copier codes for the janitorial staff is covered, and once executed it virtually guarantees a win.

How many people would simply ignore the receptionist and get cracking on the trial, and how many would take the time to get to know everyone and their roles in that particular branch? Either kind of candidate can win, whether through a sheer act of will or by simply being kind, careful, and honestly interested in the welfare of each of their coworkers. Nobody knows about the secret key, but sometimes the answer you need comes from a place you would never expect.