How 7-Zip, Hyper-V, and DNS Paralyzed A VOIP Phone System

Today was a tour de force in unintended consequences. It started with an old coworker, a kind of boomerang: they came to work for us, moved on, and then came back. That return, the coworker boomerang, is where this story begins.

The task was really straightforward: decompress the previously compressed user files belonging to this coworker, so that when they log in, they see exactly what they left behind. It was modest, about 36GB worth of data, and the intended target had 365GB of open space, so plenty of room. I started with 7-Zip on Windows, opened the archive, and extracted it to the drive with all the space. Near the end of the extraction, 7-Zip threw an error: “Out of Disk Space.” I frowned and scratched my head. 365GB of open space, and… this? It turns out that 7-Zip on Windows, at least this copy of it, first unpacks the archive to a temporary folder on whatever temporary location Windows assigns, which by default lands on the C: drive. The process was filling an already low-on-capacity primary OS drive. I chased down the temporary folder and removed it, correcting the issue. Or so I had thought.
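
If I hit this again, the simple workaround is to skip the GUI staging step and extract from the command line with an explicit output path (or point 7-Zip’s working folder at a roomier drive in its options). A minimal sketch, assuming the 7z command-line tool is installed; the archive name and target path are placeholders:

7z x coworker-profile.7z -oD:\Restored\coworker

The -o switch tells 7-Zip exactly where to unpack, so nothing gets staged on C: along the way.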

An hour later, out of the blue, around 12:30pm, all the VOIP desk phones suddenly went “NO SERVICE”. I scrambled, naturally, feeling that rising panic: nothing had changed, there were no alarms, just sudden and total phone failure. I called the VOIP support line, and the official line from support was to reboot my network. A stack of eight fully packed Cisco Catalyst switches, three servers, and a gaggle of networking gear providing at least a dozen vital services, and they wanted me to reboot all of it. While talking with support, I opened a console to my Linux box running on Hyper-V on one of my servers, which is to say, plugged into the very network core I was being asked to reboot. I then found my out-of-service desk phone; its IP was fine and it was otherwise fully functional. I grabbed the SIP password, logged into the phone, found where it lists the VOIP endpoint for our phone carrier, and asked mtr to show me the packet flow across the network, from my humble little wooden box of an office to that endpoint. The utility was clear: no issues. 500 packets and counting, all arriving promptly, no loss, no errors, and still NO SERVICE.
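
For the record, the check was nothing fancier than pointing mtr at the carrier’s SIP endpoint and letting it run; the hostname here is a placeholder for the real endpoint:

mtr sip.example-carrier.net

Run with --report --report-cycles 500 it produces the same evidence as a single pasteable summary, which is handy when you want to hand a support tech proof that the path is clean.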

So I was growing more vexed with support, really unwilling to reboot the entirety of my network core when mtr was merrily popping packets directly to the correct VOIP endpoint deep inside the carrier’s network. My traffic could get where it needed to go, and the phones still said NO SERVICE. Support was flat-footed. I stopped myself, because I could feel the rage build, my old companion, the anger that comes when people aren’t listening to what I am trying to tell them. I stopped. The call was not going anywhere, and I promised myself that I would fight this anger, tooth and claw, to the best of my ability. So I calmly asked for the ticket number on their side, thanked them for their time, and hung up my cell phone. I obviously muttered some choice phrases in a small voice, but otherwise I was very proud of myself. I derailed what could have become a very ugly scene.

Everything worked. I was not going to reboot the core. The phones simply said NO SERVICE. Then other reports rolled in: network faults, adjacent but not the same, Wifi failures in Houston, Texas. Hmmm. What did Wifi out in Houston have to do with dead phones in Kalamazoo?

I had this sinking feeling; my gut screamed at me that the PDC, the Wifi, and the phones were all touching something common that had failed, but had failed silently. I chuckled to myself as the old IT chestnut occurred to me: “It’s always DNS.” So, in deference to that, I opened the Hyper-V management window on the PDC and looked for my twin OpenDNS resolvers, the VMs that run quietly and flawlessly, years on end without a peep, deep within Hyper-V. There it was, right in front of me. The two resolver VMs, and just to the right of their names, the quaint little status indicator from Hyper-V: “PAUSED.”

The moment I saw that, I yelled out “PAUSED” and “NO SERVICE” and just about screamed. A right-click on both VMs, a click on Resume, and Hyper-V gleefully, in a heartbeat, resumed both little VMs. Just like that, one more reboot of the VOIP phone and, bleep-bloop-blunk, the phone was functional and just fine.
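
In hindsight, I don’t even need to go hunting through the Hyper-V console for this; a one-liner in PowerShell on the host will surface any silently paused guests and wake them up. A minimal sketch, nothing specific to my VM names:

Get-VM | Where-Object { $_.State -eq 'Paused' }
Get-VM | Where-Object { $_.State -eq 'Paused' } | Resume-VM

The first line is the look-before-you-leap version; the second actually resumes whatever it finds.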

It is always DNS. I have three resolvers; two of them were on the same host, that host had a wee panic, and Hyper-V silently paused everything. After a short while of cooking, the phones and the Wifi, which also use those resolvers, all went kaput in one happy bunch.

Obviously the answer is to round-robin the resolvers: the primary on the PDC, then the resolver running in VMware nearby, and then the secondary on the PDC. A sandwich right down the middle. I both thanked and kicked my past self for having the wits to set up that third resolver, which, for a short while, was the only resolver there was, except for choice parts of my network.
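
Spreading that ordering around is mostly a matter of what the DHCP scopes hand out. On a Windows DHCP server it is one cmdlet per scope; the scope and resolver addresses below are placeholders, with the VMware-hosted resolver deliberately wedged in the middle:

Set-DhcpServerv4OptionValue -ScopeId 10.1.1.0 -DnsServer 10.1.1.238, 10.9.9.5, 10.1.1.239

Clients work the list from the top, so even if both Hyper-V resolvers pause again, the middle entry keeps name resolution limping along.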

So it ended happily; all’s well that ends well. The next step is to spread this round-robin resolver correction throughout my network, to help keep this from ever happening again. But then I laughed as I considered the gamut of what had transpired: 7-Zip, well-meaning, accidentally triggered a disk space crunch; Hyper-V silently and studiously paused its charges; the network rolled on over the speed bumps; and at the end it was proved again that “It’s always DNS.”

Meraki Z1 & Cisco 2801 Link Negotiation Gremlin

Today at work I ran into a really long-standing issue at one of our company branches. This branch uses an EOL/EOS Meraki Z1 Teleworker Gateway paired with a hilariously EOL/EOS Cisco 2801 Integrated Services Router.

The setup is very straightforward: on the Internet side of the Teleworker Gateway is a Comcast cable modem capable of, at most, 60Mbit down and 10Mbit up. We rebooted everything and re-tested, first from the cable modem and then from the desktop itself. The speed from the cable modem was just as we expected, 60/10, but the speed from the desktop was 4/6!

I had rebooted everything: the cable modem, the Meraki Z1 Gateway, the Cisco 2801 ISR, and the Cisco 3560 Catalyst switch. Even the Cisco IP Phone got a reset! The speed gremlin held out, 4/6. Then, while working with some staff in the branch, I happened to mouse over the device graphic on the Meraki Dashboard and spotted the gremlin. The mouse-over tip for LAN1, where the Cat5 cable runs from the Meraki Z1 to the Cisco 2801, showed 100Mbit/half-duplex. I got into the terminal on the 2801 and verified that its port was fixed at 100Mbit/full-duplex! So I opened the Meraki Z1 Ethernet configuration page, found LAN1, and changed it from “Auto” to 100Mbit/full-duplex.
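
On the 2801 side, the check and the hard-coded settings look roughly like this; the interface name is an assumption, since it depends on which port the Z1 is cabled to:

show interfaces FastEthernet0/0 | include duplex
!
interface FastEthernet0/0
 speed 100
 duplex full

A mismatch like this one, with one end at half duplex and the other at full, produces exactly this signature: the link comes up, but throughput craters under any real load.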

Forcing the speed and duplex settings resolved all the problems right out to the desktop! Hooray! What I learned from this is that a Meraki Z1 Teleworker Gateway cannot reliably auto-negotiate link speed and duplex with a Cisco 2801 Router. So if you have unexplained, crappy network performance, always make sure that link speed and duplex match what you think they should be. Sometimes “Auto” isn’t.

Photo Credit: Gremlin Grotesque, Winchelsea church
cc-by-sa/2.0 - © Julian P Guffogg - geograph.org.uk/p/3334405

Cisco AMP for Endpoints

Several months ago we bought into Cisco AMP for Endpoints. A pile of other work landed right after that, so we set up the management account and put it aside. Months later, feeling a little awkward about that, I decided to devote my April to Cisco AMP for Endpoints.

I just uncorked my AMP for Endpoints account. For this post and going forward, when I write AMP, I mean Cisco AMP for Endpoints, because the full name is a mouthful. AMP itself seemed forbidding and difficult, but once I started working with the site, configuration wasn’t that bad. I decided to test AMP in my environment by starting a “factory fresh” copy of Windows 7 32-bit in VirtualBox on my Mac, with 4GB of RAM assigned to it. A standard, humdrum little workstation.

I downloaded a bunch of starter packs, including the “Audit” model, the weakest of them all, and installed it on the workstation. The site responded well enough, noticing the install. As I worked with the system, I noticed that AMP complained that the definitions on the client were out of date, so I went hunting for a “definition update” function. There isn’t anything the user can trigger; you have to wait for it. Oh, that’s not good.

So then, with AMP on the test machine, I thought I would try to infect it. I found a copy of EICAR, the sample file that all of these technologies are supposed to detect and flag as hazardous. Symantec Endpoint Protection (SEP) sees EICAR well enough, and really gets upset by it, immediately stuffing it into Quarantine and sending an alert. AMP also detected EICAR and, because it was in Audit mode, just sat on its hands. Which I expected.
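
For reference, EICAR isn’t malware at all; it’s a standard 68-character ASCII test string that, saved into a plain file (commonly named eicar.com), every reputable AV engine is supposed to flag on sight:

X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*

Handy for confirming a product is awake without touching anything actually dangerous.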

So then I found a bunch of sample malware files on a testing website, because while EICAR is useful for basic testing, it’s about as revelatory as a knee-jerk reflex. It’s nice to know the reflex is there, but it’s not the same as an actual malware infection. I opened the ZIP file, typed in the password, and all these malware samples came spilling out into the Downloads directory. So: a workstation that is quickly becoming filthy. That’s my use case for AMP.

After “infecting” the computer with the files in the tamest way possible, which is simply to have them sitting in a folder, I went to AMP and told it to switch the model on the test machine from Audit to Triage. That took almost twenty minutes! Are you for real on this, Cisco? Twenty minutes!!!

I knew what I had on this workstation, but I pretended that I was the admin on the other side, with an unknown workstation connected, reclassified into Triage and waiting. I knew the computer was infected, and as the admin, “not knowing what is going on” with the endpoint, I sent a scan command. This is the worst-case scenario.

On the AMP side, I didn’t see anything at all. I hunted around in a mild panic for any hint that the AMP system had recognized my scan request, and then I sent five more scan requests. Obviously one scan request should have done it, but I wanted to work around even an imaginary Cisco screw-up with scanning. Nothing. The workstation just plodded along, infected files sitting right there in the Downloads folder, waiting for a double-clicking end user to turn a tame infection into a wild one.

Obviously this is the world’s worst scenario, one where SEP is somehow gone, not installed, or has lost its marbles, leaving AMP on its own to run defense. Scan! Scan! Scan! Nothing at all. AMP just sat there, merrily SITTING THERE. It felt very much like shaking a coma patient.

So then I started with the Help feature. Request help, okay, I knew how this would go: this would lead to TAC. God help me. Cisco’s system didn’t know what AMP was, hahahahaha, of course not. But there was a chat system hiding behind a teeny tiny little button, so I tried that. Someone! Hallelujah! They found my contract, linked it up, and started a case for me. When I went back to the test system, AMP had done its work. FINALLY. It only took twenty minutes! A lot can happen in twenty minutes. How many files could have been ransomware-encrypted in those twenty minutes?

So now I await a response from Cisco TAC. During the chat I declined the entire phone-call angle, since I cannot understand Cisco TAC agents over the phone, and told them I would only communicate over email. Let’s see what TAC has to say. We spent a lot of money on this, so I’ll likely deploy it anyway, but man, I am sorely disappointed in a system where every second counts. On reflection, Cisco AMP for Endpoints was probably a mistake.

Cisco SmartInstall Vulnerability Mitigation

At work, I use Cisco gear everywhere. Recently the SmartInstall Hack has become a security concern. There is a vulnerability in the SmartInstall system that allows bad actors to send arbitrary commands to your network infrastructure.

I started out knowing how my network is shaped: I customarily keep the 10-net IP space organized by state, then by city, and finally by kind of equipment. Of the four octets, the first is always 10, the second is the state, the third is the city within that state, and I keep all my infrastructure gear in the last octet, between .250 and .254.

I started with nmap because I wanted a memory refresher so that I wouldn’t miss a device.

nmap 10.1-10.1-10.250-254

This command provides me a handy report of all the places on the inside of my network where ssh or telnet (depending on the age of the gear) reside. I print off the list, and it becomes an authoritative checklist for all my infrastructure gear.

Then, one at a time, I ssh or telnet into each infrastructure device and issue these commands in a single paste:

conf t
no vstack
end
wr mem

I don’t care if the no vstack command fails on gear that never supported it; the wr mem saves the running config to NVRAM either way, which suits me fine. Once I was sure I had touched every piece of equipment that could be affected, I knew that, at least for this vulnerability, we were done and there was nothing left at work for me to worry over.
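
If you want positive confirmation rather than just a clean paste, there is a quick check you can run on each box afterward, assuming the IOS train on it knows about the feature at all:

show vstack config

On anything handled this way it should show the Smart Install feature as disabled.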

Now, if you actually use vstack or SmartInstall, your mileage may vary, but I certainly don’t use it. The default is to leave it on, so the smart money is on forcing it off. Even if you think there is no chance of bad actors on your LAN, why leave the vulnerability open? Turning it off is one less thing to worry over.

Security Notes: OpenDNS Umbrella

In my workplace, I have deployed OpenDNS Umbrella across my company network to secure and manage my DNS system. I have found that Umbrella is remarkably good at preventing unwanted behavior and protecting my corporate network from threats both outside the firewall and inside it.

All traffic destined for domain resolution must pass through two Hyper-V VMs located in my headquarters branch. These two virtual machines handle requests from my entire network, including the branches across the data WAN, which reach them over the Meraki site-to-site VPN mesh that the Meraki system maintains for me automatically. The two VMs then pass all their collected queries to OpenDNS itself, where my policies about which Layer 7 categories are allowed or disallowed for resolution get applied. Malware is the primary reason for Umbrella: everything from viruses to trojan horses relies on DNS working cleanly in order to do its harm. Umbrella acts as a canary in a coal mine, messaging the admins about everything from command-and-control requests to malware lookups and category violations throughout the company.

As I have been working with Umbrella, I noticed an immediate gap in the way the system works. There is technically nothing stopping a user on a company device, or on their own device, from defining their DNS servers manually and side-stepping Umbrella completely. I am thinking specifically of Google’s DNS servers at 8.8.8.8 and 8.8.4.4, although any public DNS server would work in this arrangement. It is important to mention that, as an IT administrator, I buck my own industry’s best practices: all users are local admins of their machines. I don’t believe in “nailing down the workstations” at all. Instead, I concentrate my security on the domain controller and file server, a much tighter arrangement that affords end users more liberty. With that liberty comes the risk that end users could do something that ruins their day. It keeps the users responsible, and it keeps what we have to say in IT more relevant than ever: we don’t keep you from ruining your day, we help you cope. I have found that users, for the most part, treat their computers like simple tools; they don’t go poking about where they shouldn’t, and it has served me very well. Except in situations like this one, where users or malware have the inherent rights to change the DNS resolver settings if they know where to go and how to do it.

That started me thinking about ways to address this risk, and naturally I thought of the switching layer that everyone is connected to. The best place to control this is within the Cisco Catalysts themselves, and it’s a matter of an ACL, an Access Control List. I poked about online and eventually came up with the solution below. My two DNS resolvers are at 10.1.1.238 and 10.1.1.239 respectively:

ip access-list extended FIXDNS
!
permit udp any host 10.1.1.238 eq domain
permit udp 10.1.1.238 0.0.0.0 any eq domain
permit udp any host 10.1.1.239 eq domain
permit udp 10.1.1.239 0.0.0.0 any eq domain
permit tcp any host 10.1.1.238 eq domain
permit tcp 10.1.1.238 0.0.0.0 any eq domain
permit tcp any host 10.1.1.239 eq domain
permit tcp 10.1.1.239 0.0.0.0 any eq domain
deny tcp any any eq domain log
deny udp any any eq domain log
permit ip any any
!

This block creates an ACL named FIXDNS on the switch. Then, on individual ports, on VLANs, or even on the entire switch input flow, I can attach this command to put the rule into operation:

ip access-group FIXDNS in

Obviously, I would use this selectively across the system, applying the limits only to end-user-facing ports and skipping the trunks and support services like servers, copiers, and plotters. Because it is only a single command, it is also a snap to tear back out of a port, on the off chance that I want to relax my security posture for some specific reason. I like the granularity of control this solution provides, and I spend every day in my switching systems, so managing this is not any more work for me than usual.
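
Applied to a block of access ports, it looks something like this; the interface numbering is just an example, not my actual layout:

interface range GigabitEthernet1/0/1 - 40
 ip access-group FIXDNS in
!
interface GigabitEthernet1/0/41
 no ip access-group FIXDNS in

The second stanza is the one-line removal for a port that needs the restriction relaxed.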

I tested it in the lab as well, which is how this all got started. If the test laptop is configured to fetch its DNS settings from the DHCP pool, the user notices absolutely nothing unusual about their connection. Their DNS queries head off to OpenDNS Umbrella for resolution as normal, and everything works as it should: acceptable traffic is allowed, while malware and banned categories are blocked. If I set the laptop’s NIC to a specific DNS server outside my organization, like Google DNS, then DNS queries simply stop working. As a matter of record, I have included log directives on the deny statements above, so if someone is breaking the rules, we’ll see where they are attempting to get their DNS service from and head out to correct it. Although the chances are they would just call us to find out why their Internet has stopped working.
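
The lab test itself is nothing more elaborate than two lookups from the laptop, one against whatever DHCP handed out and one hard-coded against an outside resolver:

nslookup example.com
nslookup example.com 8.8.8.8

With the ACL applied to the port, the first succeeds and the second times out, which is exactly the behavior I want anyone wandering off the reservation to hit.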

I have this FIXDNS package installed on all my switches company-wide, but I haven’t actually enabled it anywhere. I think I am going to roll out the blocks very slowly and make sure that there aren’t any alarms raised at my efforts. Not that I seriously think anyone has the interest or know-how to customize their DNS resolvers, but it is nice to know that they cannot even if they tried.

Extracting Cisco Unity 10.5 Voicemails

In my work, I wear many hats. Amongst these is VOIP Manager. It’s not really a job, or a position, but it fits neatly under the heading of IT Manager, which is my actual title. I oversee the company’s Cisco CallManager and Unity systems.

Occasionally, coworkers leave employment and sometimes leave voicemails behind in their Unity mailbox. I’ve been searching for a long while for a convenient way to extract those voicemails out of Unity into some format that could be easily moved around, so that other people could listen to the recordings and act on them.

I’ve tried a lot of options and run endless Google searches. I eventually discovered a rather involved method to acquire these messages, one I would categorize as “bloody hell,” because it involves a lot of questionable hacking to procure the audio files.

The hack begins with the Cisco Disaster Recovery System, known as DRS. If you have Unity and CallManager set up, like I do, you probably have already established DRS and pointed it somewhere your backups can live. In my case, DRS points to a share on my primary file server, and that’s where you start. You have to make sure that DRS is running and that it has generated good backups. This method essentially backdoors the backup system to get at the recordings Unity keeps.

In my Unity folder I have two days’ worth of backups, and the files you need specifically are 2018-02-19-20-00-07_CUXN01_drfComponent.xml and 2018-02-19-20-00-07_CUXN01_CONNECTION_MESSAGES_UNITYMBXDB1_MESSAGES.tar. Your filenames will differ slightly depending on what you named your Unity system. When I found these files, I didn’t think anything of the XML file, but the tar file caught my notice. I copied it to my MacBook and attempted to unpack it with bsdtar. It blew up. As it turns out, Cisco made a fundamental change to DRS after Unity 7: they started encrypting the tar files with a randomized key derived from the Cluster Security Password. My cluster is very simple, just Unity and CM, and I suppose also Jabber, but Jabber is worthless and I often forget it exists. And of course they wouldn’t use .tar.enc, no, just .tar, which confuses bystanders. That, I’ve grown to appreciate, is pretty much the way of things at Cisco.

The next tool you need comes from a site called ADHD Tech. Look for their DRS Backup Decrypter. It’s a standalone Windows app, and you need it to scan the backup and extract the unencrypted tar data.

The next utility you will need is the DRS Message Fisher. Download that as well. This app has some rough edges, and one of them is that you absolutely must run it in Administrator mode, otherwise it won’t function properly.

Start the DRS Message Fisher, select the decrypted tar file that holds your message archive, and you can then sort by user alias. Click the right one and it will present a list of all the voicemails that user has in that backup set. You would imagine that selecting all the messages would extract them as individual files, but that is not how this application behaves. My experience is that you really should extract one message at a time, because the app clears its saving folder after every request and cannot handle multiple selections, even though it lets you make them. It is also eight years old, so the fact that it functions at all is a miracle.

You start at the top, click the first message, and then “Extract Message Locally,” which should open a window showing the resulting WAV file you need. I learned that without Administrator mode you never get that folder; it just opens your Documents folder and does nothing constructive. In case you need help finding it, look here:

C:\Program Files (x86)\Cisco Systems\DRS Message Fisher\TEMPMsgs

With the app in Administrator mode and a message selected, click the button mentioned above. This will open the TEMPMsgs folder and show you the WAV file. Click and drag it anywhere else to actually save it. Then advance to the next message and extract, and so on, until you have all the messages out. There won’t be any useful data in the filename, it’s just a UUID, so I suppose we should be happy we are getting the audio at all and count our blessings.

Once you have all the WAV files you need, then you can dump the voicemail account and move on.

What a mess. I look at Cisco and marvel at the configurability of both Call Manager and Unity, but watch it trip embarrassingly hard on things like this. Apparently nobody ever cared that much to address voicemail survivability and extraction. As far as I know, this overwrought and wretched procedure is required to meet that particular need. It goes without saying, this is wholly and completely unsupported by Cisco TAC, so save yourself the headache of running full-speed into that bulkhead.

In many ways this solution, if you can call it that, is voicemail survivability by dumpster diving. Ah well, it’s Cisco; complaining is very much like going down to the river and screaming at it to change course. You’d probably get further with the river.

Cisco Phone Outage Solved!

Since early November of 2015, I’ve been contending with the strangest behavior from my Cisco infrastructure. A small number of Cisco IP Phones go out around 7:30 am and then pop back to life right at 8 am. The phones fail and fix themselves all on their own, a very strange turn of events.

My first thought was that it was power related, so I started to debug and play around with POE settings. That wasn’t the answer. I then moved some phones from the common switch to a different switch nearby and the problem went away for the moved phones. That right there proved to me that it wasn’t the phones, and it wasn’t the Cisco CallManager. It had something to do with the switch itself. So I purified the switch, moved everything that wasn’t a Cisco IP Phone off it and the problem continued.

I eventually got in touch with Cisco support, and they suggested a two-pronged effort: set up SPAN on the switch and run a packet capture there, and set up SPAN on a phone and run a packet capture there as well. The capture on the switch showed a switch in distress, many ugly lines where TCP was struggling. The phone capture was immense; I opened it in Wireshark and went to the start of that day’s phone failure event. The minute I got to the start, at 7:33:00.3 am, the first line appeared: an ICMPv6 “Multicast Listener Report” packet, one of many that filled the rest of the capture. Millions of packets, all the same.
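
For the curious, the switch-side capture is just a SPAN session mirroring the phone’s access port to a port with a capture laptop on it; the interface names here are placeholders for whatever ports you actually use (the phone-side capture came from the phone’s own span-to-PC-port option in CallManager):

monitor session 1 source interface GigabitEthernet0/5
monitor session 1 destination interface GigabitEthernet0/48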

The multicast packets explained why I saw the same traffic curves on every active interface: when a switch gets a multicast packet like this, it floods it out every active port, as if the packet had been sent to each one. Once I extracted the source addresses of all these offending packets, sorted the list, and dropped the duplicates, I ended up with a list of four computers. I poked around a little more on Google and discovered, to my chagrin, that a specific Intel Network Interface Controller, the I217-LM, sits at the center of exactly this flood scenario. I looked at the affected machines, and all of them were the same: HP ProDesk 600 G1 DMs. These tiny computers replaced a good portion of our oldest machines when I first started at Stafford-Smith, and I never gave them a second thought. Each of these systems had this very Intel NIC in it, with a driver from 2013. The published fix is to update the driver, and the problem goes away, so that’s exactly what I did on the workstations that were the source of the multicast packet storm.
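
Extracting the offenders from the capture doesn’t require scrolling through millions of rows by hand; a filter on the Multicast Listener Report type plus a field dump does it. A rough sketch, with the capture filename as a placeholder (type 131 is the MLDv1 report, 143 the MLDv2 version):

tshark -r phone-capture.pcap -Y "icmpv6.type == 131 or icmpv6.type == 143" -T fields -e eth.src | sort -u

That spits out the unique source MAC addresses, which is all you need to go hunting for the guilty NICs.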

I can’t believe anyone would design a NIC like this, where a multicast flood is even possible; that’s the worst kind of flood, I think. All it takes is a few computers flooding at once to set off a cascade that drags the switch to the ground.

We will see in the days to come if this solved the issue or not. This has all the hallmarks of what was going on and has filled me with a nearly certain hope that I’ve finally overcome this three-month headache.

Sparks and Shorts

Today I had a chance to get to work earlier than usual, so I connected up all my equipment and logged into the switch that was causing all the grief with my phones. Everything looked good, and all the phones were up and behaving. Then I had a thought: Cisco devices have impressive debug features. If power is the issue, why not debug power?

So I turned on the debug traps, adjusted the severity that gets shipped to the log server, and enabled the inline power debugs for both the event manager and the controller. My Syslog system started to flood with new debug output from this switch. The phones continued to behave themselves, so I sat down and read through the logs. There were notable sections where the switch was complaining about an IEEE short on a port. OMFG. Power being sent down twisted-pair Ethernet, and we’ve got a short condition!? Why doesn’t Cisco surface short conditions anywhere more obvious than a debug? Upon further investigation, ports connected to a server, computer, or printer were all randomly shorting out, and those shorts were making the POE policing system scream debugs at the Syslog server. So I found every port that did not have a Cisco IP Phone on it and was not an uplink to the backbone switch, and turned off its POE.
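
For anyone trying to reproduce this, the commands involved are roughly the ones below; the interface range is a placeholder for your own non-phone, non-uplink ports:

! from enable mode: turn on the inline power debugs
debug ilpower event
debug ilpower controller
! from config mode: ship debug-level messages to syslog, then kill POE where it is never needed
logging trap debugging
interface range GigabitEthernet0/1 - 24
 power inline never

The power inline never line is the part that actually shuts POE off on ports that will never need it; the debugs are just there to watch the shorts appear and then stop.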

Now that all the POE is off for devices that would never need it, the debug list has gone silent for shorts. It is still sending out debugs, but mostly that is the POE system regularly talking back and forth to the Cisco IP Phones, and that output looks tame enough to ignore. I updated my Cisco TAC case, and now we will wait and see if the phones fail in the mornings. At least, there can’t be any more POE shorts in the system!

Incommunicado

Here at work I’ve got a peculiar sort of failure with my Cisco phones. In the mornings, sometimes, all the phones connected to two Cisco Catalyst 3560-X 48-port POE switches fail at around 7:35 am and then un-fail, all by themselves, around 7:50 am.

I’ve tried to engage with Cisco TAC over this issue. I started a ticket when we first started noticing it, in November 2015. Yesterday I got in touch with the first Cisco TAC Engineer and was told that it was a CallManager fault, not a switching fault and that the ticket on the switches would close.

So I opened a new ticket for CallManager. Once I was able to prove my Cisco entitlements, I was underway with a new Cisco TAC engineer. We shall see how this goes. What concerns me is Cisco’s insistence that it obviously isn’t the Catalyst switches. I am a little at odds with this determination, because as part of the early diagnosis of this problem we had a small group of users who couldn’t endure phone failures at all, so I moved their connections from the Catalyst 3560 to the other switch, a Catalyst 3850. The phones that had been failing on the 3560 stopped failing when attached to the other switch. That tells me the issue isn’t with the phones or the CallManager, but rather with the switches themselves. But now that TAC has ruled out the switches, we’re looking at phones and CallManager.

My experience with Cisco so far is checkered. Their hardware is very handsome and generally works. That’s as far as I can go under the auspices of “If you can’t say anything nice, don’t say anything at all,” because the rest of what I have to say is unpleasant to hear. Alas, they have a name and public credibility, and one disgruntled customer isn’t going to alter the path of a machine as large and determined as Cisco.

We’ll see what TAC has for me next. I am eagerly in suspense, and I’ll update this blog if we find the answer. Holding my breath is probably inadvisable.