Windows 10, QNAP, and error 0x80004005

While setting up a new Windows 10 laptop we ran into a head-scratcher of a problem. We store a lot of our setup data on a QNAP network attached storage device. The laptop was connected to our local area network over Wi-Fi, and everything looked good connection-wise. We could ping both the QNAP's IP address and its DNS name, so we knew for a fact that the laptop could send and receive traffic to the QNAP. But when we tried to reach our data in Windows Explorer using a UNC path, like \\10.1.1.100, and pressed Enter, Windows 10 would pause for a few seconds and then throw back an error code:

Windows Cannot Access \\10.1.1.100 Error Code: 0x80004005 Unspecified Error.

We attempted a reboot, then escalated to a full system rebuild, and nothing worked. We fiddled with PowerShell commands to no effect, and flipped IPv6 off and on, which also had no effect. So our next step was to switch to wildcat debugging and just start taking potshots at the laptop, trying anything to make it work. And we found the solution, thanks to a user by the name of dimamed on Spiceworks, who posted exactly what we needed:

Adjust a registry value:
HKLM\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters\AllowInsecureGuestAuth, and set the DWORD to 1.

Then we closed the Registry Editor, opened Windows Explorer again, tried the QNAP as usual, and it worked! (Be aware that this value re-enables insecure guest logons to SMB shares, which Windows 10 disables by default, so there is a security trade-off.) We don't really need it to function for our end users, but it became a matter of professional pride to make sure that all our technology can work together properly. It can, with some coaxing.

We hope this solution works for other folks who run into the same issue. If it does, please leave a comment so we can see just how much of an impact something like this has.

On The Domain?

At work a funny question came up: should we put an important user and their super-slim executive-style laptop on the Windows Domain, or just use a local account? There is really only one user who fits this bill, so we'll leave that obvious bit out, since I don't include names in any of my blog writing.

The question comes down to reliability. Can we trust that the Windows Domain account will always work? Eh, that's the $64,000 question, now isn't it? The user cannot, under any circumstances, ever see the “This laptop cannot form a trust relationship with the selected domain” error that pops up rarely and irregularly around our Windows Domain.

Obviously the answer, since it's Microsoft, is to apply the KISS principle. We keep it simple, we keep it a local account, because we simply cannot trust Microsoft at all. Maybe the domain will work, maybe it won't. Maybe Kerberos will work, maybe it won't. Right up there with the worthlessness of Windows Domain GPOs: will they apply? They appear to, but they do nothing in practice. In my experience GPOs are a mixed bag at best. Sometimes they work, like home drives and printers, but sometimes they just bellyflop. We don't really do much with GPOs because Microsoft's technology is so hilariously poor. Roll out software through the Domain? Hah. Never works. Fiddle with settings through the Domain? Never works. Never ever works. GPOs are essentially a crock of shit at best, and a waste of time at worst.

So, if you have a mission-critical user on a computer, do you use a Windows Domain? Only if you like propping 2×4s against your legs and whacking your ankles with a sledgehammer. Yeah, that's the level of suffering and agony that is Windows. We'll skip it, thanks.

I will say, I did briefly consider calling Microsoft Technical Support once, long ago, when we were looking at GPOs for something. But that's not a serious option either; it creates far more work and suffering than just skipping the entire thing and declaring that whatever it is simply cannot be done. Not that any requests have actually come in that way; our interest in GPOs was purely in-department wondering. One foray into them, they don't work, spread gasoline on everything, light a match, and let it all burn.

It’s been a long time since I wrote this bit, but it still holds true and will for the rest of time. Microsoft is the worst company on Earth and I regret every experience coming into contact with them. I only use their “technology” because I have no other choice. Microsoft rules a kingdom of shit. May they all die in a fire.

So no, we won’t be using a Domain Account.

Cisco SmartInstall Vulnerability Mitigation

At work, I use Cisco gear everywhere. Recently the SmartInstall hack has become a security concern: a vulnerability in the SmartInstall system allows bad actors to send arbitrary commands to your network infrastructure.

I started out knowing how my network is shaped: I customarily keep the 10-net IP space organized by state, then by city, and finally by kind of equipment. Of the four octets, the first has to be 10, the second is the state, the third is the city in that state, and I prefer to keep all my infrastructure gear in the range 250 to 254 of the last octet.

I started with nmap because I wanted a memory refresher so that I wouldn’t miss a device.

nmap 10.1-10.1-10.250-254

This command gives me a handy report of everywhere on the inside of my network where ssh or telnet (depending on the age of the gear) answers. I print off the list, and it becomes an authoritative checklist for all my infrastructure gear.
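Since the addressing scheme is so regular, the same target list can also be generated programmatically. Here is a minimal Python sketch; the state and city ranges below are placeholders for illustration, not my real topology:

```python
# Enumerate candidate infrastructure-management addresses under the
# 10.<state>.<city>.<host> scheme, with hosts pinned to 250-254.
def infra_addresses(states, cities, hosts=range(250, 255)):
    """Yield every 10.s.c.h management address for the given ranges."""
    for s in states:
        for c in cities:
            for h in hosts:
                yield f"10.{s}.{c}.{h}"

# Illustrative ranges: states 1-10, cities 1-10.
targets = list(infra_addresses(states=range(1, 11), cities=range(1, 11)))
print(len(targets))   # 10 states x 10 cities x 5 hosts = 500 addresses
print(targets[0])     # 10.1.1.250
```

Feeding a list like this to nmap, or straight into an ssh loop, beats retyping ranges by hand if the scheme ever changes.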

Then, one at a time, ssh or telnet into each infrastructure device and paste in these commands in one go:

conf t
no vstack
end
wr mem

I don’t care if the no vstack command fails on gear that doesn’t support it; wr mem saves the running config to NVRAM either way, which suits me fine. Once I was sure I had hit all the equipment that could be affected, I knew that, at least for this vulnerability, we’re all done. There won’t be anything on this front, at least, for me to worry over at work.

Now, if you use vstack or SmartInstall your mileage may vary, but I certainly don’t use it. The default is to leave it on, so the smart money is on forcing it off. Why force it off even if you think there’s no chance of bad actors on your LAN? Because it’s one less thing to worry over.

Security Notes: OpenDNS Umbrella

In my workplace, I have deployed OpenDNS Umbrella across my company network to secure and manage my DNS system. I have found that Umbrella is remarkably good at preventing unwanted behavior and protecting my corporate network from threats both outside the firewall and inside it.

All traffic destined for domain resolution must pass through two Hyper-V VMs located in my headquarters branch. These two virtual machines handle all requests from my entire network, including the branches across the data WAN, carried over the Meraki site-to-site VPN mesh that the Meraki system maintains for me automatically. The two VMs then pass all their collected queries to OpenDNS itself, where my policies govern which Layer 7 categories are allowed and disallowed for resolution. Malware is the primary reason for Umbrella: everything from viruses to trojan horses relies on working DNS in order to do harm. Umbrella acts as a canary in a coal mine, messaging the admins about everything from command-and-control requests to malware requests and category violations throughout the company.

As I have been working with Umbrella, I noticed an immediate vulnerability in the way the system works. There is technically nothing stopping a user with a company device, or even their own, from defining their DNS servers manually and side-stepping Umbrella completely. I am thinking specifically of Google’s DNS servers at 8.8.8.8 and 8.8.4.4, although any public DNS server would work in this arrangement. It is important to mention that as an IT administrator I buck my own industry’s best practices: all our users are local admins of their machines. I don’t believe in “nailing down the workstations” at all. Instead, I keep my security surface deep, at the domain controller and file server, a much tighter arrangement that affords end users more liberty. With that liberty comes the risk that end users could perform some action that ruins their day. This keeps the users responsible, and it keeps what we have to say in IT more relevant than ever: we don’t keep you from ruining your day, we help you cope. I have found that users, for the most part, treat their computers like simple tools. They don’t go poking about where they shouldn’t, and the arrangement has served me very well. Except in situations like this one, where users or malware have the inherent rights to change the DNS resolver settings if they know where to go and how to do it.

That started me thinking about ways to address this risk, and naturally I thought of the switching layer that everyone is connected to. The best place to control this is within the Cisco Catalysts themselves, and it’s a matter of an ACL, an Access Control List. I poked about online and eventually came up with this solution. My two DNS resolvers are at 10.1.1.238 and 10.1.1.239 respectively:

ip access-list extended FIXDNS
!
permit udp any host 10.1.1.238 eq domain
permit udp host 10.1.1.238 any eq domain
permit udp any host 10.1.1.239 eq domain
permit udp host 10.1.1.239 any eq domain
permit tcp any host 10.1.1.238 eq domain
permit tcp host 10.1.1.238 any eq domain
permit tcp any host 10.1.1.239 eq domain
permit tcp host 10.1.1.239 any eq domain
deny tcp any any eq domain log
deny udp any any eq domain log
permit ip any any
!

This block creates an ACL named FIXDNS in the switch; then on individual ports, VLANs, or even the entire switch input flow, I can attach this command and put the rule into operation:

ip access-group FIXDNS in

Obviously, I would use this on individual cases across the system, applying the limits only to end-user-facing ports and skipping the trunks and support services like servers, copiers, and plotters. Being a single command, it is also a snap to tear out of a port, on the off chance that I want to relax my security posture for some specific reason. I like the granularity of control this solution provides, and since I spend every day in my switching systems, managing it is not any more work than usual.

I tested it in the lab as well, which is how this all got started. If the test laptop is configured to fetch its DNS settings from the DHCP pool, the user notices absolutely nothing unusual about the connection. DNS queries head off to OpenDNS Umbrella for resolution as normal, and everything works as it should: acceptable traffic is allowed, while malware or banned categories are blocked. If I instead set the laptop’s NIC to a specific DNS server outside my organization, like Google DNS, then DNS queries simply stop working. As a matter of record, I included log directives in the deny statements above, so if someone is breaking the rules we’ll see where they are attempting to get their DNS services from and head out to correct it. Although chances are they would call us first to find out why their Internet has stopped working.
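For desk-checking what FIXDNS will and won’t match before enabling it anywhere, the ACL’s first-match logic can be mimicked in a few lines of Python. This is only a sketch of the rule logic for reasoning purposes, not anything the switch itself runs:

```python
# First-match simulation of the FIXDNS ACL: DNS (port 53) traffic is
# permitted only to or from the sanctioned resolvers, all other DNS is
# denied, and everything else falls through to "permit ip any any".
RESOLVERS = {"10.1.1.238", "10.1.1.239"}

def fixdns_permits(proto, src, dst, dst_port):
    if proto in ("udp", "tcp") and dst_port == 53:
        if dst in RESOLVERS or src in RESOLVERS:
            return True   # the permit lines for the resolvers
        return False      # the deny ... eq domain log lines
    return True           # permit ip any any

print(fixdns_permits("udp", "10.1.5.20", "10.1.1.238", 53))      # True
print(fixdns_permits("udp", "10.1.5.20", "8.8.8.8", 53))         # False
print(fixdns_permits("tcp", "10.1.5.20", "93.184.216.34", 443))  # True
```

The example client and destination addresses are made up for illustration; only the two resolver addresses come from the ACL above.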

I have this FIXDNS package installed on all my switches company-wide, but I haven’t actually enabled it anywhere. I think I am going to roll out the blocks very slowly and make sure that there aren’t any alarms raised at my efforts. Not that I seriously think anyone has the interest or know-how to customize their DNS resolvers, but it is nice to know that they cannot even if they tried.

Going West With Facebook

Much like the elves in Tolkien’s tales, sometimes the time is right to board the boats and head west. In this particular case, the question is what to do with Facebook.

I’ve been using Facebook since July 2nd, 2008. In the beginning it was wonderful; sharing was easy and everyone seemed kinder, more conscientious. I suppose the world was better back then. Many people were looking for a new platform once LiveJournal collapsed, which, if we are really serious about it, came when SixApart sold LiveJournal to the Russians. Americans fled pretty much after that. And so, Facebook was a thing.

It was mostly friends; the service hadn’t taken off yet. Many of the later iterations that make Facebook what it is today hadn’t even been thought up back then, and in a lot of ways it was better in the past. But then everyone started to join, and we started to learn about the ramifications and consequences of using Facebook. I can remember the feeling of betrayal as Facebook posts were printed out and handed to my workplace management. That really was the first lesson in privacy, and the beginning of the end of my involvement with Facebook.

Facebook has been on-again-off-again for a while. In time I realized that I was addicted to the service and the sharing. With enough time I realized that Facebook fit the profile of a mental illness more than an addiction. I had to stop, because in a very big way it was the service or my mental health.

So fleeing Facebook is the name of the game. First I downloaded all my content from the service, then I started moving my saved links from Facebook to Pocket for safekeeping. Then I went through and started hacking away at groups, pages, and apps. All of these tasks will be long-tailed; they’ll take a while to polish off because Facebook’s tentacles run very deep, and just how deep they actually go is rather remarkable.

So now I’m looking at writing more and sharing more from my Blog. This post is kind of a waypoint to this end. I installed a new theme with some new images featured, and the next step is to figure out a “Members Only” area where I can separate out the public from my friends. There are some items that I intend to write about that use specific names and I don’t want to play the pronoun game with my readers. I also don’t want hurt feelings or C&D notices, both of which some of my writing has created in the past.

I will detail my journey with disposing of Facebook here on this blog. I have eliminated publicity to Twitter and Facebook, but I left G+ on, because G+ is a desert.

So, here we go!

Extracting Cisco Unity 10.5 Voicemails

In my work, I wear many hats. Among them is VOIP manager. It’s not really a job, or a position even, but it fits neatly under the heading of IT Manager, which is my position title. I oversee the company’s Cisco CallManager and Unity systems.

Occasionally, coworkers who leave employment leave behind voicemails in their Unity mailbox. I’ve been searching a long while for a convenient method to extract these voicemails out of Unity into some format that could be easily moved around, so that other people could listen to the recordings and get somewhere with them.

I’ve tried a lot of options, and endless Google searches. I eventually discovered a rather involved method to acquire these messages. This method is something that I would categorize as “bloody hell” because it involves a lot of questionable hacking in order to procure the audio files.

The hack begins with the Cisco Disaster Recovery System, known as DRS. If you have a Unity and CallManager system set up, like I do, you have probably already established DRS and pointed it somewhere your backups live. In my case, DRS points at a share on my primary file server. So that’s where you start: make sure DRS is running and that it generates good backups. This method essentially backdoors the backup system to get at the recordings that Unity takes.

In my Unity folder I have two days’ worth of backups, and the files you specifically need are 2018-02-19-20-00-07_CUXN01_drfComponent.xml and 2018-02-19-20-00-07_CUXN01_CONNECTION_MESSAGES_UNITYMBXDB1_MESSAGES.tar. Your filenames may differ slightly depending on what you named your Unity system. When I found these files I didn’t think anything of the XML file, but the tar file attracted my notice. I copied it to my MacBook and attempted to unpack it with bsdtar. It blew up. As it turns out, Cisco made a fundamental change to DRS after Unity 7: they started encrypting the tar files with a randomized key derived from the Cluster Security Password. My cluster is very simple, just Unity and CallManager, and I suppose also Jabber, but Jabber is worthless and I often forget it exists. Of course they wouldn’t use .tar.enc, no, just .tar, which confuses bystanders. That is pretty much the way of things at Cisco, I’ve grown to appreciate.

The next tool you need comes from a site called ADHD Tech. Look for their DRS Backup Decrypter. It’s a standalone Windows app, and you need it to scan the backup and extract the unencrypted tar data.

The next utility you will need is the DRS Message Fisher. Download that as well. I will say this app has some rough edges, and one of them is that you absolutely have to run it in Administrator mode, otherwise it won’t function properly.

Start the DRS Message Fisher, select the decrypted tar file that holds your message archive, and then you can sort by user alias. Click on the right one and it will present a list of all the voicemails that user has in the backup set. You would imagine that selecting all the messages would extract them all to individual files, but that is not how this application behaves. My experience is that you really should extract one message at a time, because the app dumps its saving folder after every request and cannot understand multiple selections, even though it lets you make them. It is also eight years old, so the fact that it functions at all is a miracle.

You start at the top, click the first message, then “Extract Message Locally”, which should open a window and show you the resulting WAV file. I learned that without Administrator mode you never get that folder; the app just opens your Documents folder and does nothing constructive. In case you need help finding it, look here:

C:\Program Files (x86)\Cisco Systems\DRS Message Fisher\TEMPMsgs

With the app in Administrator mode and a message selected, click the button mentioned above. This opens the TEMPMsgs folder and shows you the WAV file. Click and drag it anywhere else to actually save it. Then advance to the next message and extract, and so on and so forth until you have all the messages out. There won’t be any useful data in the filename, just a UUID, so I suppose we should be happy we are getting the audio at all and count our blessings.

Once you have all the WAV files you need, then you can dump the voicemail account and move on.

What a mess. I look at Cisco and marvel at the configurability of both Call Manager and Unity, but watch it trip embarrassingly hard on things like this. Apparently nobody ever cared that much to address voicemail survivability and extraction. As far as I know, this overwrought and wretched procedure is required to meet that particular need. It goes without saying, this is wholly and completely unsupported by Cisco TAC, so save yourself the headache of running full-speed into that bulkhead.

In many ways, this solution, if you can call it that, is getting voicemail survivability by dumpster diving. Ah well, it’s Cisco, complaining is very much like going down to the river and screaming at it to change course. You’d probably get further with the river.

Giving Chrome Some Pep

I’ve been using Google Chrome on my MacBook Pro for a long while, and I’ve noticed that some websites take their time getting moving. In some ways it feels like the browser is panting, trying to catch its breath. Today, while trying to solve a work problem, I accidentally stumbled over a neat way to give my Chrome browser a little boost in performance. It seems to help most on interactive sites, like my work help desk or PNC online banking.

The trick is to create a small RAM drive on the system, copy the Chrome profile over, symlink to that profile so Chrome can find it, and then start using Chrome. As Chrome works, things like settings and cache data go to RAM instead of the HD on my MacBook Pro. I then use rsync to copy the data into a backup folder, just in case the MacBook Pro suffers a kernel panic or something else that would dump the RAM drive.

There are a few pieces to this, mostly from scripts I copied off the network.

I copied the script called mount_tmp.sh and made only a few small adjustments, specifically changing the maximum RAM drive size to 512MB.

Then I created two different bash scripts to check-in the profile to the RAM drive and then to check-out the profile from the RAM drive back to the HD. Since I wrote them from scratch, here they are:

check-in.sh


#!/bin/bash
# Mount the RAM drive, move the Chrome profile onto it, and leave a
# symlink behind so Chrome can still find the profile.
/Users/andy/mount_tmp.sh
mv /Users/andy/Library/Application\ Support/Google/Chrome/Default /Users/andy/tmp
ln -s /Users/andy/tmp/Default /Users/andy/Library/Application\ Support/Google/Chrome/Default
# Seed the on-disk backup copy of the profile.
rsync -av --delete /Users/andy/tmp/Default /Users/andy/Library/Application\ Support/Google/Chrome/Default_BACKUP
echo "Complete."

check-out.sh


#!/bin/bash
# Sync the profile to the on-disk backup, move it back to the HD,
# and unmount the RAM drive.
rsync -av --delete /Users/andy/tmp/Default /Users/andy/Library/Application\ Support/Google/Chrome/Default_BACKUP
rm /Users/andy/Library/Application\ Support/Google/Chrome/Default
mv /Users/andy/tmp/Default /Users/andy/Library/Application\ Support/Google/Chrome
/Users/andy/mount_tmp.sh umount
echo "Complete."

If you give this a shot as well, I would love to hear from you about your experiences with this little speed improvement hack! Hope you enjoy!

Moment of Geek: Raspberry Pi as Thermal Canary

A few days ago I ran into a problem at work. The small Mitsubishi air conditioner decided to take a cooling nap in the middle of the day, so my office, which is also the machine room at work, was up around 85 degrees Fahrenheit. I was used to this sort of thing, summers bringing primary cooling systems to their knees, but this time I had a huge A/C unit in the ceiling that I had elected to leave in place, just in case. So I turned it on, set its thermal controller to 70 degrees, and the room temperature tumbled in about ten minutes. Right after the room temperature was back to normal, and I had service out to look at my little wall-mounted A/C unit, the damn thing started functioning normally again. The tables had turned on IT; this is exactly what happens to our users. They sit there and struggle, then we arrive, and the machines behave like nothing at all was wrong.

So I had the big A/C and its smaller wall-mounted sibling both running overnight, and faced a problem: I wanted to know the temperature in my machine room without having to buy a TempPageR device. I had one long ago, and it was rather expensive. I looked at my desk and noticed my Raspberry Pi just sitting there, doing nothing of consequence. I knew the Raspberry Pi had a CPU temperature interface hidden somewhere, and after a cursory Google search I was happily surprised to find a website detailing how to use that exact feature from Python to write a temperature log and, optionally, graph it. It was mostly copypasta, adapting things I found online by copy and paste and hammering them here and there until they worked. I have programming skills, but they are dated and rusty, and I had never used Python specifically. My first effort was a success: a 1-second temperature logger. I was rather happily satisfied, but I knew I wouldn’t stay happy with Celsius, and I knew the reading was colored by the CPU in the Raspberry Pi itself, so the reported temperature ran quite a bit higher than the room’s.

I started to tinker, first searching for the equation to convert C to F. That gave me 115 degrees, while the big A/C’s thermal controller displayed the ambient room temperature as 74 F. So I did some math and subtracted a constant 44 degrees from the CPU temperature, which “calibrated” it to a rough approximation of the room temperature. Eagle-eyed readers may notice the math is off; after I moved the Pi over to the server stack, I had to adjust for a higher CPU temperature, since the Pi sits further from the wall A/C unit there. So now I had a calibrated 1-second temperature logger. Then I turned on graphing and the entire program crashed and burned; I wasn’t running in an X Windows environment, so I tore the graphing library and code out, since I was never going to use that feature anyway.

That, of course, was not enough to replace the TempPageR device. I needed an alarm to tell me what was going on. I thought through the interfaces, email, SMS, iMessage, email-to-telephone-call cleverness, and each one brought me up against a different version of the cliffs of insanity. I could probably have smashed and hacked my way to a solution involving some ghastly labyrinth of security settings and passwords hashed with special algorithms that are only available on ENIAC simulators that only run on virtualized Intel 8086 processors with the Slovenian language pack loaded and the Cyrillic character set; an arrangement that would be an epic pain in the ass. But earlier in the day I had tripped over an advertisement for a Slack app that takes incoming data from the Pingometer website. I have a Pingometer account, a free one, because I’m a cheap bastard. The single pinger externally checks my fiber optic connection at work, keeping AT&T on their toes when it comes to outages. Pingometer can send to Slack through incoming webhooks. An incoming Slack webhook is a really simple HTTP call from some source: JSON wrapped in an HTTP POST to Slack’s servers. Slack then does everything needed to make sure the message is pretty and ends up on the right channel, on the right team. This was my alert mechanism.

So I did another Google search, found the intersection of Linux, Python, and Slack, and with some more copypasta and tinkering I had a Python app that displayed the room temperature in degrees F and made my Slack a noisy mess, since it sent an incoming webhook request every second. One more tweak, a super-simple IF-THEN block, set my high-temperature mark at 90 degrees F, and I let it go.

There is something satisfying about hacking something together, cobbling it really, and having it work without blowing up on the terminal, blowing up Slack, or otherwise failing. So now I have a $35 Raspberry Pi running as a rough temperature alarm; it sends alerts to Slack and lets me and my system admin know at the same time. I’m quite happy with how it all worked out. No obnoxious email settings, ports, security frameworks, or awkward and obtuse hashing routines, just a single JSON-formatted HTTP call and BAM, all set. An alarm with a date, a time stamp, and a temperature, delivered right onto my iPhone via Slack’s notifications, so it can wake me up if need be.

So anyways, without further ado, here is the code:


from gpiozero import CPUTemperature
from time import sleep, strftime
import json
import requests

# Set webhook_url to the one provided by Slack when you create the webhook
# at https://my.slack.com/services/new/incoming-webhook/
webhook_url = 'https://hooks.slack.com/services/####/#####'

cpu = CPUTemperature()

def write_temp(temp):
    # Append a timestamped reading to the log.
    with open("cpu_temp.csv", "a") as log:
        log.write("{0},{1}\n".format(strftime("%Y-%m-%d %H:%M:%S"), str(temp)))
    # Past 90 degrees F, raise the alarm in Slack.
    if temp > 90:
        slack_data = {'text': "{0},{1}\n".format(strftime("%Y-%m-%d %H:%M:%S"), str(temp))}
        response = requests.post(
            webhook_url, data=json.dumps(slack_data),
            headers={'Content-Type': 'application/json'}
        )
        if response.status_code != 200:
            raise ValueError(
                'Request to slack returned an error %s, the response is:\n%s'
                % (response.status_code, response.text)
            )

while True:
    temp = cpu.temperature
    # Convert C to F, then subtract the 44-degree "calibration" constant.
    temp = (9.0 / 5.0 * temp + 32) - 44
    write_temp(temp)
    sleep(1)


It has been forever since I’ve needed to program anything. Once I was done and saw it work the way I wanted, I was quite happy with myself. I haven’t felt this particular sense of accomplishment since my college years, and it was quite a welcome feeling.

Sample Malware

Today I received a sample email that some of my coworkers caught, and they asked me to look into it. The email led to a bit.ly link, which I was able to extract, and here a clever little trick applies: appending a + character to a bit.ly link doesn’t load the destination site, it tells you about the link instead. This link had been clicked about 7,000 times. Already I knew we were dealing with malware, so it was no longer a question of whether this was a rabbit hole, but rather how deep it went.

I pulled the bit.ly destination out and handed it to curl in the terminal on my MacBook Pro. I don’t expect curl to do anything but show me the text of where the link goes. It headed to a PHP file on a presumably hacked web server or blog. The PHP itself was an HTTP refresh-redirect to a Dropbox-hosted file. So I opened up my virus lab VM and followed where this led. The Dropbox content claimed to be a 1MB PDF file, but when I opened it, it led to a phishing attempt.

The phishing page had an obnoxious URL attached to it, so I pulled that out and discovered it was encoded in base64. I decoded the text chunk online, and it revealed a JavaScript script block formed by a single call to the document.write(unescape()) function.
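The unwrapping itself is easy to reproduce offline. Here is a Python sketch with a stand-in payload, not the actual sample: one base64 layer, then the JavaScript unescape(), which is just percent-decoding:

```python
import base64
from urllib.parse import unquote

# Stand-in for the attacker's blob: a script block wrapping
# document.write(unescape(...)), base64-encoded like the real one was.
blob = base64.b64encode(
    b"<script>document.write(unescape('%3Ch1%3Ephish%3C%2Fh1%3E'))</script>"
).decode()

decoded = base64.b64decode(blob).decode()      # peel off the base64 layer
print("document.write(unescape(" in decoded)   # True: spot the wrapper

# JavaScript's unescape() is percent-decoding; preview the payload:
inner = decoded.split("unescape('")[1].split("')")[0]
print(unquote(inner))                          # <h1>phish</h1>
```

Doing this in a scratch script instead of an online decoder keeps the suspicious blob off third-party sites.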

Whoever it was went to great lengths to obfuscate their malware. Ultimately it led nowhere, because we caught it. I find this sort of thing fascinating to pull apart, like an easy little puzzle to unravel. The phishing attempt was after email usernames and passwords, and if someone falls for it then, thanks to people usually being lazy with passwords, once you have one password, chances are you have all of them on every other site.

Just another reason to use a password manager and keep a unique password for each site. If one site is breached, the damage is limited to that site, not all of them.

What Roy Batty Saw

We hired a new coworker and learned that he needed a Cisco VOIP phone. I had one spare unit left, an older Cisco 7912. I went to plug it in, and the Power over Ethernet simply wasn’t registering on the phone. I knew for a fact that the phone itself was fine and that the switch I was plugging it into was functioning well. I also knew my station cables were good, so I used my Fluke LinkRunner to test the cables and the port. Everything checked out; the Fluke indicated proper POE. However, when I plugged the phone in, nothing at all.

I knew this port had a history of being troublesome, but prior to this I had a Cisco 7940 working well in the same spot, so it was a mystery why a 7912 wasn’t. I tested the port a few times, each time seeing proper POE voltage and wattage. Even the switch itself noticed my Fluke tester and registered a device consuming POE on the port in question. I couldn’t understand why a phone that works well in one place doesn’t work in another when everything is equal. Obviously, not everything was as equal as I thought. Something had to be wrong.

I looked at the Fluke LinkRunner; it listed POE coming in on pins 1 and 2 for the positive side of the circuit and 3 and 6 for the negative. So I took the Fluke to my testing lab and looked at POE coming from a Cisco Catalyst 3560 switch. There the Fluke indicated that 3 and 6 were positive, and 1 and 2 were negative. I immediately figured out the issue. Ethernet jacks can be wired to T568A or T568B; the difference is subtle, a flipped pair of conductors. I did a little desk diving and popped the cover off the jack in the wall. Everything I deal with is always T568B. Always. The jack in the wall? T568A. Armed with that, I tugged the old keystone jack out, replaced it with the last good one I had, punched it down, and tested again. The Fluke indicated POE on 3-6-1-2, I plugged in the phone, and pop! The phone came to life!
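For reference, the two pinouts differ only in where the orange and green pairs land, which is exactly the flip the Fluke was showing. A quick sketch of the standard conductor assignments makes it plain:

```python
# TIA/EIA-568 keystone pin assignments. T568A and T568B swap the green
# and orange pairs, so only pins 1, 2, 3, and 6 differ between them.
T568A = {1: "white/green",  2: "green",  3: "white/orange", 4: "blue",
         5: "white/blue",   6: "orange", 7: "white/brown",  8: "brown"}
T568B = {1: "white/orange", 2: "orange", 3: "white/green",  4: "blue",
         5: "white/blue",   6: "green",  7: "white/brown",  8: "brown"}

flipped = sorted(pin for pin in T568A if T568A[pin] != T568B[pin])
print(flipped)  # [1, 2, 3, 6] -- the very pins carrying the POE circuit
```

A straight-through cable needs the same standard on both ends; mix an A jack with B everywhere else and the POE polarity lands on the wrong conductors, which is what bit this phone.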

So, just when you think you can get on with things, always check the standards, because you have to assume that nobody else did. What a mess. But at least it was an easy fix.