I love being with people. It’s the most incredible thing in the world. That world may change and evolve but the one thing that will never change – we’re all part of one big family.
Well, it’s been a minute, hasn’t it? WLPC 2022 was a few months ago. (I wrote this shortly after WLPC but forgot to hit the “publish” button.) Boy, was it great to be back with most of the family once again. It was also, I have to admit, a bit stressful. I had more in-person interaction in those 7 days than I had in the last few years. I can’t speak for anyone else, but I was usually glad to get a little quiet time at the end of each day.
I was really looking forward to the event. First, my last WLPC, back in 2020, had been rather stressful. While I don’t exactly regret it, teaching a Deep Dive at this amazing event is a HUGE effort, and much respect to those who pull it off better than I did. I was glad to just be an attendee this year. Also, this was my first WLPC since becoming CWNE #414 and I was really looking forward to hanging out with the folks from that community. (Maybe flex a little bit…just a little bit.)
WLPC has a bit of everything (if you want to partake of it). There are bootcamps the weekend before – some of the best training you can find anywhere. There’s the actual conference itself where you hear presentations from your peers about current events in our industry. And last and certainly not least there are the social aspects. There are parties to attend and friends to hang out with. All good fun.
My week started off with the Python bootcamp with Jake Snyder. Automation is a big focus at work and I keep hearing how Python is supposed to be super hot and whatnot. I was fortunate to not be coming in completely ignorant about writing code – I actually did graduate with a CS degree once upon a time. Python just didn’t exist back then. (We managed our own memory allocation and we LIKED IT! Wait, no we hated that.)
I’m not going to pretend that in 3 days I came out of it a Python master but because the class is geared towards Wi-Fi engineers I did come out with a good chunk of healthy examples that show how to do things that can help me in my day to day job. Like: Pull a list of Mist APs from a switch using LLDP information in order to properly configure their ports. (I did that by hand earlier this year for a 500+ AP building and that was, you know, not fun.) And then use that info to pull even more details from the Mist API about those APs. I had been slowly working my way through a Python bootcamp on Udemy but that sort of individual thing lacked the focus and interaction that a real in-person class provides.
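For flavor, here’s a rough sketch of that LLDP-to-Mist workflow in Python. To be clear, this isn’t the class material – the sample LLDP output, the `ap-` hostname convention, and the parsing are all my own stand-ins, and the actual Mist API call is left as a comment:

```python
# Hypothetical "show lldp neighbors" output from a campus switch --
# real output varies by vendor, so treat this parser as a sketch.
SAMPLE_LLDP = """\
Port       Neighbor Device       Neighbor Port    TTL
Et1        ap-lobby-01.example   eth0             120
Et2        ap-conf-02.example    eth0             120
Et3        core-sw-1.example     Et49             120
"""

def ap_ports(lldp_text, ap_prefix="ap-"):
    """Map switch port -> AP hostname for LLDP neighbors that look like APs."""
    mapping = {}
    for line in lldp_text.splitlines()[1:]:   # skip the header row
        fields = line.split()
        if len(fields) >= 2 and fields[1].startswith(ap_prefix):
            mapping[fields[0]] = fields[1]
    return mapping

# With the port-to-AP map in hand, you could then query the Mist API for
# more detail on each AP (e.g. GET /api/v1/sites/{site_id}/devices with a
# token header) -- endpoint and auth specifics are in Mist's API docs.
print(ap_ports(SAMPLE_LLDP))
```

The nice part is that once the mapping exists, generating per-port switch config is just string templating.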
Moving on to the main conference, I was surprised by the number of “Wi-Fi 7” talks there were. Let’s keep in mind that the initial 802.11be draft is going to be over a year late and that 6E hardware is barely shipping. I think the vendors need to slow their roll a bit. I know, there’s a lot of cool nerd stuff in there, but let’s call it 802.11be for now. Invoking the “Wi-Fi 7” moniker makes folks think that products are on the way. And I also wouldn’t be shocked if leadership is going “Why buy Wi-Fi 6E if Wi-Fi 7 will be out soon?” (Here’s a hint: it won’t be.)
One of my favorite talks was the first one – Peter Mackenzie’s “It is Impossible to Calculate Wi-Fi Capacity” was a great talk. He spoke the truth about something we all know – and it’s something that I’ve bookmarked to send to management the next time they ask. He also said out loud how Wi-Fi design is both art and science. At times it feels like it gets treated as a commodity…which is how folks end up with bad designs.
The other big surprise was the WLAN Pi Pro. It’s amazing how this tool has evolved over time and I have to say that this most recent iteration is an amazing tool. If you find a way to pick it up I can highly recommend it. It’s a Swiss Army Knife for Wi-Fi and at the moment it’s the only Wi-Fi 6E client I have.
It was another great WLPC. If you’ve found yourself looking for a conference that gives a great bang for the buck I can’t recommend WLPC highly enough. If you missed Phoenix then WLPC EU in Prague will be in October. If you have a chance you should totally check it out.
Current events compel action. Black Lives Matter. Black Trans Lives Matter. Those are words I believe in but are easy to say and I kept asking myself “what can I do to help?” Saying the words is fine, donating money is better, but is it enough? How to help in the right way?
My industry, the “world of Information Technology”, certainly suffers from a lack of diverse voices. I’ve heard a lot of talk about how “there are no candidates in the pipeline”…ok, fine. But why aren’t there? Is it a lack of training and opportunities? Or is it something much deeper – institutionalized racism and discrimination that keeps black people out?
Irrespective of the cause, we can do something about that. Here is what I’ve come up with: I’ve joined Blacks in Technology as a dues-paying member and donated to fund scholarships for people who want to get their CWNA. We’ve got 10 scholarships for this round. (We’ll do another chunk in about 6 months.) It’s for the “Self Paced Training Kit”.
The key thing here is that I’ve aligned myself with BIT as a supporter and I’m listening. You can do the same. Joining and listening is just the beginning. Or, you can do what thousands of others like me are now doing; speaking out, bringing others along, donating, contributing and actively doing something.
If you’ve been to this blog before (or if you know me outside of this blog) then you probably know one of the areas of networking I am passionate about is Wi-Fi. One of the things I like about Wi-Fi as a career is how much education is valued. There is much to learn and a lot about Wi-Fi that we take for granted. Fortunately we have a great resource in the form of the Certified Wireless Network Professional certifications.
The path to being a Certified Wireless Network Expert (a path I am on myself) starts with the Certified Wireless Network Administrator (CWNA) certification. It covers a lot of really great topics that folks often don’t cover when they start out doing Wi-Fi:
Radio Frequency (RF) Technologies
Wireless LAN Hardware and Software
Network Design, Installation, and Management
Wireless Standards and Organizations
802.11 Network Architecture
Wireless LAN Security
How to Perform Site Surveys
It gives you a very strong foundation about how Wi-Fi works and it’s all vendor neutral. Cisco? Aruba? Mist? Arista? Ruckus? Doesn’t matter – it’s information that anyone working in Wi-Fi should have at their fingertips. As a wise man once said, “You have to learn WHY things work on a starship.” (It was Captain Kirk. Kids, ask your parents.)
The self-paced training kit includes a study guide, access to an online self test, and a test voucher. CWNP’s exams are available in an at-home format, so you don’t need to go anywhere to get this certification. And one of the best parts of this certification, and about Wi-Fi in general, is that there is a lot of demand for wireless engineers all over the country. This isn’t a skill that is just needed in a few places – they need Wi-Fi everywhere.
(One slight note: Links will take you to the CWNA-107 exam, which is being refreshed in September with new goodies like Wi-Fi 6. We may wait for that to drop before sending out kits. Just a ‘heads up’.)
I’m not just going to throw money at the problem, as fun as that is. I am pledging to be a resource and mentor for anyone seeking to walk the path of the Wi-Fis. Whether it’s technical questions or anything else, feel free to slide up in those DMs (or probably email) if you want help. The Twitters, LinkedIn, Slack, Discord, whatever works for you.
If you’re wondering “why Wi-Fi?” it’s because of how much Wi-Fi is needed (and how bad a lot of it is). A couple of the major focus areas for Wi-Fi (we usually call them “verticals”) are K-12 education and healthcare. We’re building networks in parking lots for COVID-19 testing, helping teachers and students get connected, and providing support in hospitals so that front-line healthcare gets done. Wi-Fi is more important than ever.
So if this sounds interesting to you then you may be wondering what to do next. The logistical details are still being sorted and when BIT is ready for students they’ll announce it and I’ll be sure to try and hype it. The money is donated, so this is happening. And thank you to the NVIDIA Foundation for that sweet, sweet corporate match.
I cannot thank BIT enough for being such a great partner. I would have zero idea where to start without their help and Peter Beasley has been an absolute pleasure to work with. They’re doing the hard part, I’m just a guy with an idea. As anyone who has done a startup knows, ideas are cheap – execution is everything. And thanks to Keith Townsend who suggested I reach out to BIT in the first place.
This may sound strange to those of you who haven’t encountered it, but Wi-Fi is really a community. We help each other and share in our common burden – which is folks blaming the Wi-Fi when it isn’t the Wi-Fi. (It’s DNS. It’s ALWAYS DNS.) I’ve really enjoyed joining this community, and I hope you will enjoy it too.
I’ve been rather thrilled that my little “here, do this silly thing with the Nano” idea seems to have worked well for a bunch of folks. Thank you to everyone for their feedback and interest as we all excitedly dive in to this new protocol.
I think it goes without saying that the WLAN Pi is an amazing tool for network engineers. I’ve got one and I love it. And I’ve been wondering – could I do something similar? And possibly make the Nano easier for new users, so they can get right to the fun part, which is the 802.11ax stuff?
My answer is: The nanoax image.
I’ve taken the standard Nano image (release 4.2.3 as of this writing) and done the following:
Updated all of the pre-installed packages (duh)
Installed the AX200 drivers
Updated Docker to something more recent
Downloaded the version of the librespeed container I built for the Nano (jakichan/speedtest)
Installed Wireshark and other capture tools like François did, and I also hid the Eye PA coloring rules in the image.
Compiled Kismet – it now works with the AX200.
I will note, this isn’t *quite* like the WLAN Pi image. It still installs somewhat like the default Nano image. (A couple of minor steps are different from the “stock” image that you may notice if you look closely.) You still have to accept the EULA, you get to choose your username and password, configure a timezone, etc.
I’m still working on all the documentation, but for now you should be able to download the image here. By the time this goes live the people in the OFDMA deep dive should have been banging on it for a day or two. I’ll be working on the documentation (I guess I need a cool logo), but in the meantime give it a try and let me know if you have any suggestions.
In an earlier post I covered my favorite way to access the Nano via SSH over the micro-USB port. And I covered what Mac users need to do to remotely display the Wireshark GUI. But not everyone is a Mac user (I CAN’T EVEN) so what is a Windows user to do? I got you fam.
Step 1: Get an X11 Server
There are a few options, and I can’t speak to all of them. There is Xming, which has been around the longest as far as I know. It has free and pay versions, but the free version hasn’t really been updated in a while. Cygwin/X is part of the Cygwin package and may be a viable option, but with WSL these days I don’t really use Cygwin anymore. (Side note: you’ll notice that WSL is *not* part of the solution I’m presenting. I thought it would be easily done that way, but it turns out…not. WSL is not really Linux, and that became an issue. I’m not saying it can’t be done, but it was more work than I wanted to do. For SSH/SCP/etc. without X11, though, it works great.)
I ended up choosing VcXsrv. And to be honest, it “just worked”. Download that and install. Be sure to start it before proceeding.
Step 2: Install PuTTY
Ok, this may seem redundant – I’m pretty sure 99% of network engineers with Windows laptops have PuTTY installed, but in the off chance that you don’t have it installed…go do that.
Step 3: Configure networking
Now you want to take a micro-USB to USB-A cable and plug it into the micro-USB port on the Nano and into your laptop. You *should* end up seeing a new network interface. It should look something like:
You may also see some messages about preparing “Linux for Tegra” for use. It may also not be Ethernet 2 depending on your system, but the Remote NDIS part is what to look for. In theory you should have IP connectivity, but in at least one case I was seeing some…oddness…with the DHCP server on the Nano, so I highly recommend you configure a static address on this interface. Anything in 192.168.55.0/24 will work, just don’t use .1. (That’s what the Nano uses.) I’m not going to go over how to statically configure a Windows network interface here, though.
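That said, if you prefer a one-liner over clicking through adapter settings, something like this works from an elevated Command Prompt. The interface name “Ethernet 2” and the .100 host address here are assumptions – substitute whatever your system shows:

```shell
netsh interface ip set address name="Ethernet 2" static 192.168.55.100 255.255.255.0
```

Run `netsh interface ip set address name="Ethernet 2" dhcp` later to put the adapter back the way it was.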
Step 4: Configure display forwarding in PuTTY
Configure a PuTTY session for the Nano. You’ll be SSHing to 192.168.55.1:
and enable X11 forwarding. Scroll down on the left and it’s under Connection, SSH, X11.
A lot of really cool folks like François Vergès and Gjermund Raaen have been using the Nano as an 802.11ax capture tool, as have I. Let me show you what I’ve been doing to make my captures a bit easier. This method is Mac/Linux oriented but I’m confident it should be easy to do with the Windows Subsystem for Linux. I’ll be sorting out those details shortly.
Step 1 – XQuartz
No matter which Linux GUI you use, it can trace its roots back to X11, and believe it or not, when Mac OS X first came out it had native X11 support. That has fallen by the wayside, but it can still be yours thanks to the XQuartz project. So go to the XQuartz site, download XQuartz, and that gives you an X11 server. Yay!
By the way – if Linux is your desktop of choice then you already have an X11 server, but you also probably already knew that.
Step 2 – Connect to the Nano
As I mentioned in my original post, I power the Nano via the DC barrel connector because it allows 10 watt operation, which is a nice chunk of power. But did you know that the micro-USB port still works while the DC jack power is used? It doesn’t work as a host port but it works just great as a device port. If you plug it in to your Mac you should see something like this:
Yep, it works as an RNDIS device, very similar to what you see with the WLAN Pi if you have one of those (and you should). By the way, “Linux for Tegra” is the official name of the OS running on the Nano. If you ever see “L4T” references, that is what it means. In fact, if you look at the output of “ifconfig -a” on the Nano with the USB connected you should see:
On the WLAN Pi it’s usb0, but here it’s l4tbr0. (I keep seeing “L4T BRO!”, and the developers just laugh at me.) And with it you can connect to your Nano via SSH to 192.168.55.1.
However, before you SSH in you should make sure your SSH config includes X-Forwarding. So in your config file, located at ~/.ssh/config you should add a couple of lines:
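Something like this should do it (matching on the Nano’s USB address; a `Host` alias of your choosing works just as well):

```
# Forward X11 from the Nano back to the local X server
Host 192.168.55.1
    ForwardX11 yes
```

With that in place, GUI apps launched from your SSH session will display locally.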
Step 3: Install Wireshark
Now you want to make sure Wireshark is installed. François has great instructions on his blog – here are the most relevant bits for us at this moment:
# Install Wireshark (development version)
sudo add-apt-repository ppa:wireshark-dev/stable
sudo add-apt-repository ppa:dreibh/ppa
sudo apt update
sudo apt -y install wireshark
sudo apt -y install wireshark-qt
# Install aircrack-ng
sudo apt -y install aircrack-ng
# Install tcpdump
sudo apt -y install tcpdump
# Allow the user to run tcpdump over an SSH connection (remote capture)
sudo groupadd pcap
sudo usermod -a -G pcap $USER
sudo chgrp pcap /usr/sbin/tcpdump
sudo chmod 750 /usr/sbin/tcpdump
sudo setcap cap_net_raw,cap_net_admin=eip /usr/sbin/tcpdump
Step 4: Monitor Interface
I haven’t found a way around this yet, but what you need to do is use airmon to get the interface into monitor mode. So open a terminal on your Mac and SSH in to the Nano:
$ sudo airmon-ng start wlan0 140
Found 5 processes that could cause trouble.
If airodump-ng, aireplay-ng or airtun-ng stops working after
a short period of time, you may want to run 'airmon-ng check kill'
PHY Interface Driver Chipset
phy0 wlan0 iwlwifi Intel Corporation Device 2723 (rev 1a)
(mac80211 monitor mode vif enabled for [phy0]wlan0 on [phy0]wlan0mon)
(mac80211 station mode vif disabled for [phy0]wlan0)
All that looks scary, but I haven’t had problems yet. Now you’ll see we’ve chosen a channel on the CLI, but don’t worry – we can change it.
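If you’d rather retune from the Nano’s shell instead of re-running airmon, `iw` can do it against the monitor interface. Channel 36 here is just an example:

```shell
# Hop the monitor interface to channel 36 at 20 MHz; for an 80 MHz
# capture you'd use the control/center frequency form instead, e.g.
#   iw dev wlan0mon set freq 5180 80 5210
sudo iw dev wlan0mon set channel 36 HT20
```
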
Step 5: Start Wireshark
Simple as that. If things are working correctly you should see something like:
That copy of Wireshark is running on the Nano, being displayed on your Mac, and sent over USB. Select the wlan0mon interface and it will start capturing. Also of note: if you go to View -> Wireless Toolbar you can configure the channel number and width, which is a bit easier than doing it via the CLI. This functionality appears to work well – I went hopping around on a few channels and verified I was seeing the BSSIDs I expected to see.
Keep in mind – this application is running on the Nano directly. So if you want your coloring rules to work you need to copy them over. (Thanks Joel!)
That’s pretty much it. For those of you using the Nano as an 802.11ax capture and analysis tool I thank you – maybe this will be a bit easier for some folks. I know it’s coming in handy in my lab.
I’ve been messing around with something on the Nano, and I wanted to use the Librespeed Speedtest application. As part of their GitHub repo they have a Docker branch, and the container *is* on Docker Hub, but obviously it’s for amd64. So I rebuilt it on the Nano and pushed it back up as “jakichan/speedtest”. Here’s what you need to do if you’d like to run it on your Nano.
Step 1: Update docker
First things first, the default Nano image does come with Docker. It’s just a bit stale. So let’s update that. To do that you’ll need curl:
sudo apt install curl
Then you need to update Docker. Now, a good friend of mine (who is a serious expert on container security) said this was a BAD IDEA. (He REALLY DOESN’T LIKE IT.) You should never just download and run stuff from the internet, right? It’s horrible. But yeah, do this:
curl -fsSL https://get.docker.com | sh
And now your Docker is current! There may be a better way to do this, but several “how to Docker on Ubuntu ARM” pages I saw used this method.
There are some other things you may want to do, such as adding your default login to the docker group to avoid having to type sudo all the time. They tell you how to do that at the end of the install script, it looks like
sudo usermod -aG docker <username>
And then you do have to open a new terminal or login again. But for the rest of this I’ll use sudo in case you didn’t want to do that.
Step 2: Grab the container
sudo docker pull jakichan/speedtest
That will download the container from Docker Hub.
Step 3: Run the container
To make it easy, run the container with this command:
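Assuming the image behaves like the upstream LibreSpeed container (which serves its web UI on port 80), something like this should work – the host port and container name here are my choices:

```shell
# Publish the container's web server on host port 8080 and keep it
# running across reboots
sudo docker run -d --name speedtest -p 8080:80 --restart unless-stopped jakichan/speedtest
```

After that, browse to http://<nano-ip>:8080 and run a test.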
I need to start this guide off with a few disclaimers: I do work for NVIDIA. And while I’m a decent enough wireless engineer, I’m certainly not an expert on exactly how our embedded products work (but I do know where those experts sit). This is also not a sponsored post – I paid for all of the hardware involved except for the NIC (we don’t get a discount – no free 2080Tis for anyone). But I honestly think the Nano is a great little maker board and this has been a really fun project.
Also, this was certainly a team effort with my friend Robert Boardman. Disclaimers for him: He works for Mist. No, Mist didn’t sponsor this. No, Robert will not buy you a 3D printer. (I already asked, he’s got quite a few of them.) For my Cisco friends who are likely reading this: Relax, we’re just friends.
I know that some of us right now have 802.11ax-capable APs (we’ll leave ‘Wi-Fi 6’ for another day). But having APs with no clients isn’t fun, and for testing (and fun) having a real AX client or two is awesome. If you get two of them you have a chance of seeing OFDMA over the air.
At the moment you can get the Galaxy S10, which is pretty expensive, or you can get an AX200 from Intel and stick it in a laptop. That’s also a thing. But when I saw how the Jetson Nano has an M.2 slot it got me thinking. The 8265 works well with it but we wondered about getting the AX200 to work. Turns out it was pretty easy.
Without the keyboard and monitor the BOM is at $185.45. With the keyboard and monitor it’s $265.43. All prices are without tax and shipping.
The Jetson Nano can be powered by a 5V 2A USB power supply, but we used the 5V 4A barrel jack option. The reason that the jumper bag is on the list is that you do need to connect jumper J48 for the DC power input to work. So first thing, bridge the J48 jumper. It’s on the left side of this diagram towards the middle, below the camera connector:
If you go through the official NVIDIA getting started guide, you’ll note that the Nano can be powered by USB. However, in the testing that Robert and I did, the Nano is much happier when being used as a desktop if it has full power. You can find a discussion about the usage of DC power here. Also, with the DC power option we were able to plug the monitor’s USB into the Nano for power – when powering the Nano via USB, the monitor was unhappy. As a side bonus, powering the monitor from the Nano makes it a touchscreen. And it eliminates the need for two plugs.
Next, you’re going to want to prepare the SD Card. Download the image and use Etcher to write it to the SD card. If you’ve never done it before NVIDIA has pretty good instructions on how to do it for folks on Linux, Mac, and Windows.
Now it’s time to install the AX200. This is best explained via video and I think this one does a pretty good job. As Wi-Fi engineers we all know that the antenna connectors are a challenge. The antennas are from NVIDIA’s open source robot Kaya, but the 35cm cables are a bit long. I’m on the lookout for an antenna package with a shorter cable option.
Now you’re ready to boot everything up! Connect the USB from your keyboard/mouse to the USB ports, connect the HDMI and USB to the monitor, wired ethernet, and power. Next thing you know you should be at this screen:
Now that you’ve logged in you will probably note that there’s no sign that the Nano sees the AX200. What you’ll want to do is to build the core45 release of the iwlwifi driver that you can find here. Here are the commands:
git clone --single-branch --branch release/core45 https://git.kernel.org/pub/scm/linux/kernel/git/iwlwifi/backport-iwlwifi.git
cd backport-iwlwifi
make defconfig-iwlwifi-public
sed -i 's/CPTCFG_IWLMVM_VENDOR_CMDS=y/# CPTCFG_IWLMVM_VENDOR_CMDS is not set/' .config
make -j4
sudo make install
Now that the iwlwifi driver is up to date and installed, it’s a good idea to install the latest firmware for the AX200.
Once both the driver and the firmware are installed then reboot the Nano and you should be up and running.
Just in case…
This is all great, but you have to admit that it’s a bit unwieldy. If you happen to be able to do 3D printing (or, in my case, know a 3D-printing master such as Robert) there are some nice case options for both the Nano and that monitor. We are currently using this case for the Jetson Nano (note: Don’t try to use the antenna holes on the panel with all the connectors – it will block the DC jack). Robert also printed this case for the monitor and it’s pretty nice. He has some interesting ideas on how to perhaps modify these designs to make things a bit more functional.
Do you even ax bro?
Why yes. Yes we do.
So what now?
At this point we have an 802.11ax client for around $200 (give or take). Up next:
Taking packet captures from the CLI and Wireshark
Performance testing using iperf3 and IxChariot
General ax hacking
What would you like to see? Feel free to leave a comment below.
If you’ve been fortunate enough to deploy Arista in the datacenter you’re probably a fan. I certainly am. What’s not to love? They’ve got great hardware, the OS is super consistent, they do great DWDM, and CloudVision (CVP) has gotten to be pretty decent as an automation system and VxLAN controller. (I’ve seen datacenters come up in about 6 minutes or so, once all the patching is done. That’s full config, up and passing traffic. You’re not gonna get much better unless you write your own orchestration system.)
My only complaint with Arista has been a lack of love, on their part, for the campus. I’ve been asking them for about 5 years or so for two things: PoE and 802.1X. Once Arista acquired Mojo Networks it became clear that they wanted to expand into campus networking, so it should shock no one that they have been working on what I think of as a “campus feature set” for a while, and the first products in that line are now out in the open. It’s been quite a while since we’ve had a new contender in the campus switching space and I find it exciting.
One of the reasons I’m excited by this development is the approach that Arista has taken in terms of software. There is no code fork. These switches are running off of the same EOS code base as their datacenter leaf and spine switches. They come with all the datacenter features you love, especially all that VxLAN/EVPN goodness. This is really important because other vendors seem to have forked or re-written their SDN stack for campus and sometimes that leads to growing pains. Here you have a battle-proven SDN stack. I do see applications for SDN in the campus at an architectural level but at an engineering level I haven’t seen an implementation I like yet.
Disclaimer: As a function of $DayJob I am under NDA with Arista, so my comments will solely be based upon what’s been publicly released on their website, which you can find here. (Sorting out what I know that’s public and private is just too much stress.) I am participating in their EFT for these switches and have been playing with mine for about a week. I also am a bit of an Arista fanboi. (I love my Cisco friends too. Everyone play nice.)
Before we dive in to specs a couple of notes:
Just like with all PoE switches you have a choice between max power and redundancy. Based on the folks I’ve spoken to at Arista, the 720XP has available PSUs that can more than drive all ports concurrently, up to about 1800W. This is comparable to what the Catalyst 9300 series from Cisco offers. I find the published spec sheets a bit confusing in that regard and I think the data sheets will get updated in the future to make it clear.
Also, you’re going to see that all of the uplinks on these switches are 25G or 100G. I know that to some network engineers who focus on campus environments these port speeds may be unfamiliar. In today’s campus 10G and 40G ports are common, but the 40G port is really just 4 “lanes” of 10G. This is the same thing – 100G is 4 lanes of 25G. This jump from 10 to 25 is based around the design of a 28Gbps SerDes (Serializer/Deserializer) – you just lose a bit of the 28Gbps in overhead, and you end up with 25Gbps of payload. But don’t sweat it – all of the 25G SFP28 ports can take 10G SFP+ optics.
There are 4 switches in the 720XP line – here’s a quick summary.
720XP-24Y6 and 720XP-48Y6 start with 24 or 48 1G PoE ports and 6 25G uplinks. All of the ports provide 802.3at (30W) power.
720XP-24ZY4 has 16 2.5G 30W ports, 8 5G 802.3bt (60W) ports, and 4 25G uplinks. This is what Arista gave me as part of their EFT so my hands-on comments will specifically apply to this model.
720XP-48ZC2 is the big daddy with 40 2.5G 30W ports, 8 5G 60W ports, 4 25G uplinks and 2 100G uplinks.
When comparing features and what not I think these switches are best compared to Cisco’s Catalyst 9300 offering. Before we get in to dueling spec sheets, I will note that the Arista switches don’t stack. Arista is staying true to their spine-leaf architecture here (and you don’t stack in spine-leaf). That may seem like a management challenge, but keep in mind that automation scales to solve this problem. (If you’re deploying Catalyst 9000 you’ll be encouraged to deploy DNA Center, and with Arista you’ll be similarly encouraged to deploy CloudVision.) So in the following comparisons we’ll only be looking at single-switch performance.
By the way, these comparisons are based off of spec sheets, not any lab testing I have done. You can find the Arista spec sheets here and the Cisco spec sheets here. Also, Cisco seems to be counting “switching capacity” a bit differently than Arista. With the Arista switches if you add up all the ports the switching capacity matches the port capacities. With Cisco if you add up all the ports then the switching capacity is 2x the port capacity. So let’s just all agree that all switches in this comparison offer “line-rate switching”. The actual switching fabrics inside the boxes exceed the port capabilities.
The 24Y6 and the 48Y6 compare pretty directly to the C9300-24P and C9300-48P. Cisco does offer modular uplinks, but the best option they have is 2×40GbE. They also have 2×25GbE. The Arista version has 6×25GbE. (However, given that these are all 24- or 48-port 1G PoE switches, more than 50Gbps of uplink is a bit of overkill.) All of these switches are “non-blocking”, in that there’s both enough backplane capacity to support all ports and enough “northbound” capacity for all ports. Arista, however, provides a pretty solid edge in forwarding.
From here on out things get a bit more fuzzy. It’s harder to make “apples to apples” comparisons due to differing port configurations and capacities.
The C9300-24UX has some significant advantages over the 24ZY4. All of the 24UX’s ports are 1G/2.5G/5G/10G and 60W capable. The 24ZY4 has a mix of 2.5G and 5G, and a mix of 30W and 60W. However, I will note that while the 24UX has the backplane capacity to switch all ports, it lacks the uplink capacity to forward all 24 of its copper ports if they’re running at 10G. In fact, its uplinks are oversubscribed by 3 to 1.
The 48ZC2 and the 48UXM are not the same but they’re a closer comparison. The 48UXM has 36 2.5G ports, and 12 “Multigigabit” (1/2.5/5/10) ports. All ports are 60W capable (but it can only do 30 60W ports). The 48ZC2 has 40 2.5G 30W ports and 8 2.5G/5G 60W ports. Since it doesn’t have as many 60W ports it can always power all ports. (The 48UXM could power the same configuration.) But again we’re limited by the uplink choices. The 48UXM will be trying to shove 210Gbps of traffic from the copper ports through an 80Gbps pipe, or about a 2.625:1 oversubscription. With the 48ZC2 it depends on how you look at those 4×25 ports. It starts with 140Gbps of copper ports. If you use the 4 25G ports as client ports and the 100G as uplinks then you’re 1.2:1 oversubscribed. If you only use 2 of the 25G ports for clients and 2 as uplinks (along with the 100G) you’re back to 1:1.
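If you want to check my math, here’s the oversubscription arithmetic, using the spec-sheet port counts quoted above:

```python
def oversub(access_gbps, uplink_gbps):
    """Oversubscription ratio: access-port capacity vs. uplink capacity."""
    return access_gbps / uplink_gbps

# Cisco C9300-48UXM: 36 x 2.5G + 12 x 10G copper, 2 x 40G uplinks
print(oversub(36 * 2.5 + 12 * 10, 2 * 40))        # 210 / 80 = 2.625

# Arista 720XP-48ZC2: 140G of copper, 4 x 25G as clients, 2 x 100G uplinks
print(oversub(40 * 2.5 + 8 * 5 + 4 * 25, 2 * 100))  # 240 / 200 = 1.2
```
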
So before folks get out their pitchforks allow me to acknowledge: the campus is not the datacenter. Do you *need* your switches to be non-blocking? Probably not. However, when comparing product lines from two different vendors it’s nice to be able to objectively compare specifications. It may not be important to you that your uplinks aren’t oversubscribed, but it’s interesting that they can be.
One of the things I think about when designing a large-ish campus network is whether I want a separate switch fabric for APs or not. On the one hand, life is simpler when you have fewer switches and a unified topology. On the other hand, isolating your APs from the user switch fabric can have its advantages. And if you don’t have huge PoE requirements (like, say, you don’t have desk phones) then you may be able to limit your PoE to separate switches.
Right now with PoE switches we’re seeing a move from 802.3at to 802.3bt. How much PoE are we going to actually need going forward? If we look at how the 802.11ax APs are shaping up we can note a few things:
The 8×8:8 APs, with all features on, need 802.3bt power. The Cisco 9117 can run on an 802.3at port, but you lose the USB function. The Aruba AP-555 has to downgrade the 8×8 to 4×4 on single 802.3at power *and* kills the USB (although it does offer you the option to plug both Ethernet ports into 802.3at power and get full functionality that way). They also both come with 5G Ethernet.
The 4×4:4 APs are a bit more forgiving. The Cisco 9120 and the Aruba AP-535 lose the USB port on 802.3at but otherwise survive just fine on 30W. It's interesting that the AP-535 has a 5G port while the 9120 has a 2.5G port. (Not sure WHY it has a 5G port; I don't think it needs it.) They can both use 802.3bt and need it for "full functionality", but if you power them with 802.3at you won't really be missing anything. The only thing I've used the USB for on an AP is to add BLE to APs that don't have it natively.
Edit: I am somewhat glossing over all the possible AP configurations. As Stephen Cooper pointed out to me, the Cisco APs (and most vendors' APs, to be honest) can run on 802.3af power, just with reduced transmit chains. And as Scott Lester reminded me, the Aruba AP-555 has Intelligent Power Monitoring to dynamically adjust its features based on the provided power. The point of this article isn't to deep dive on AP power management, but rather to provide a broad overview of what the current landscape looks like when it comes to AP power consumption.
(Yes, Arista C-250, I see you there. But your spec sheet doesn’t help me here because it says it can operate on 802.3at power with “reduced function” without telling me what I’m giving up. You’re an 8×8:8 AP, so I assume that you fall back to 4×4:4 but your spec sheet is missing that data.)
So here's my take on these switches in light of what the next generation of APs are looking like. Cisco can deliver a lot more 60W ports. If you're deploying 8×8:8 APs, or using any optional features that require more power (like the USB port), then you need to take that into account. If you're deploying mostly 4×4 APs then you're a bit more free in your choices.
The 24ZY4 could handle a mix of 4×4 and 8×8 APs, as could the 48ZC2. I’m just not sure if I’m going to be mixing and matching APs like that. The 24UX is fine as well, because if you run all the copper ports at 5G you’re much less oversubscribed, and if you’re running them at 2.5G then you’re not even oversubscribed at all. Wi-Fi isn’t likely to run ports at full line rate, so in my mind the 24UX is a really solid choice for “AP switch”. I’d probably choose the 48ZC2 over the 48UXM simply due to the uplink options.
I will note the existence of the C9300-48UN. It's got 48 ports of 1G/2.5G/5G and they're all 60W capable, although again you're limited to powering only 30 of them at 60W at once. And if you need 5G you'll likely need 60W anyway, since the only APs that truly need a 5G port are the 8×8 models.
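The "all ports are 60W capable, but only 30 at once" constraint is really a PoE budget problem. Here's a minimal sketch of the sanity check I'd run on an AP power plan; the 1800W budget is inferred from the "30 ports at 60W" limit above, not pulled from a datasheet:

```python
# Check an AP power plan against an assumed switch PoE budget.
# BUDGET_W is inferred from "can power 30 ports at 60W" -- an assumption,
# not an official Cisco figure.

BUDGET_W = 30 * 60  # 1800W implied total PoE budget

def plan_fits(ap_draws_w: list[float]) -> bool:
    """True if the sum of per-AP power draws fits within the budget."""
    return sum(ap_draws_w) <= BUDGET_W

# A mixed deployment: 20 4x4 APs at 30W plus 10 8x8 APs at 60W
print(plan_fits([30] * 20 + [60] * 10))  # 1200W total -> True

# All 48 ports asking for 60W blows the budget
print(plan_fits([60] * 48))              # 2880W total -> False
```

The interesting design takeaway is that "60W capable on every port" and "60W available on every port simultaneously" are very different claims.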
This is Arista’s first set of offerings in the campus switch portfolio. I’m sure it won’t be their last. For now I’m going to play with my EFT sample and let folks know what I find. If you have questions or want me to try something, let me know!
If I think about what I like about Wi-Fi as a networking discipline, I would say it's how interesting layers 1 and 2 are in our domain. 802.11 is a fascinating protocol to study, and we also get to practice RF engineering. I might be known to, from time to time, tease my datacenter teammates about how "cute" it is that their signals go through these copper wires and how it's all deterministic and stuff. Must be nice…but I digress.
One of the more contentious areas of debate (see what I did there?) is how we manage our RF space. There's a contingent that advocates that static channel and power is the way to go. And then there are vendors with their Radio Resource Management (RRM) algorithms, and some of us do use those. I use RRM, even if sometimes it needs to be slapped upside the head.
(Side note: Because I started my Wi-Fi journey in an Aruba environment I thought that "RRM" was a Cisco-specific name, especially since Aruba called it "ARM" at the time (and now AirMatch), but it turns out that RRM is really a generic term. I mean, there's even a Wikipedia entry about it.)
Neither side is wrong – both approaches have their benefits. At the end of the day we’re all trying to accomplish the same thing – we’re trying to provide a great user experience and that requires your RF to be clean. But what does clean really mean? And how clean is clean enough?
When I started my current job I was tasked with choosing 3 Wi-Fi Key Performance Indicators (KPIs – a very “enterprise” sort of thing) to have on a dashboard. What were the 3 metrics that I thought would be the most important to represent to the Wi-Fi user experience? That was quite a challenge, and one that I don’t feel I’ve fully resolved even 2+ years later. I knew one that I wanted was average client MCS Index, but I also didn’t have a way to get it from my mostly-Windows fleet. And I still don’t. I do track and graph average client SNR. It’s not perfect, but it is readily available. (The other two metrics are AP Uptime and average clients per AP, by the way.) So one way I’m managing and judging my RF performance is based on the metrics I had access to, even if they weren’t the right ones.
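The KPIs above are all simple roll-ups once you have the per-client data. A minimal sketch of the dashboard math, using a hypothetical data shape (in practice this would come from a controller or cloud API, and the field names here are made up):

```python
# Roll per-client samples up into the dashboard KPIs described above.
# The list-of-dicts shape and field names are hypothetical placeholders
# for whatever your controller/cloud API actually returns.

clients = [
    {"ap": "ap-1", "snr_db": 32},
    {"ap": "ap-1", "snr_db": 28},
    {"ap": "ap-2", "snr_db": 19},
]

# KPI: average client SNR across the fleet
avg_snr = sum(c["snr_db"] for c in clients) / len(clients)

# KPI: average clients per AP (clients divided by distinct APs seen)
clients_per_ap = len(clients) / len({c["ap"] for c in clients})

print(round(avg_snr, 1))  # 26.3
print(clients_per_ap)     # 1.5
```

The hard part, as the text says, isn't computing these; it's deciding whether they're the right three numbers to watch.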
So we come to the meat of something that’s been in my head a while: with all the time we spend worrying about and managing and designing RF how do we correlate that RF performance to user experience? Are we focusing on the right things? We have a lot of metrics about RF performance, but do they really help us improve the user experience?
If you were expecting an answer to the question I’m going to have to disappoint you. I don’t have one. I’m more proposing a topic for debate. But here’s why I’m thinking about this out loud: I think we spend a lot of time focusing on RF stats because we can get them, look at them, and understand what they mean. We are assuming the impact they have on user experience based on our understanding of the protocol and our own experiences but I don’t think there’s enough data out there to prove those assumptions.
Let’s take channel utilization as an example. It’s a GREAT set of RF metrics. You can look at AP duty cycle, how much time the channel is in use by other APs and their clients to see what the impact of CCI is, and yet no one can give me a data-derived value for what an acceptable level of channel utilization is. I understand that so much of Wi-Fi is more art than science, which is to say that it’s experience based, and so there may be no way to have a universal value.
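In practice most of us end up picking a threshold anyway and flagging radios that exceed it. A small sketch of that approach; the 50% cutoff is an arbitrary placeholder standing in for exactly the data-derived value the paragraph above says nobody can supply:

```python
# Flag radios whose channel utilization exceeds a threshold.
# UTIL_THRESHOLD is an arbitrary rule of thumb, NOT a data-derived value --
# which is precisely the gap the article is pointing at.

UTIL_THRESHOLD = 0.50  # assumption; tune per environment

# Hypothetical per-radio utilization readings (fraction of airtime busy)
readings = {"ap-1": 0.22, "ap-2": 0.61, "ap-3": 0.47}

busy = [ap for ap, util in readings.items() if util > UTIL_THRESHOLD]
print(busy)  # ['ap-2']
```

Whether 0.50 (or 0.40, or 0.70) is the "right" line is exactly the open question.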
I don't want to sound like I "don't believe in RF tuning" or something crazy like that. RF performance absolutely matters. If you let anyone's RRM run with out-of-the-box settings you're going to have a bad day. You'll see all the radio stats be bad, and your users will be frustrated, and yes, you absolutely need to adjust things. All those great RF stats will guide you and help you understand what you need to fix. As those numbers get better your users will be happier.
And let’s keep in mind that everyone’s radio resource management (RRM) algorithms are designed for the “common case” scenario. The less standard your physical space is – the further you move away from drop-ceiling office land – the more help those algorithms are going to need to achieve a good result. If your environment is completely insane (I’ve got a building like that) even the strongest advocate of RRM might say “I’m just gonna turn that off…”.
But it does make me wonder something. We can spend a lot of time tuning and dialing our RF in to be as close to perfect as we can get but what is the ROI on that effort? Where is the point of diminishing returns? What does “good enough” mean? And can we define “good enough” in a way that reduces (but doesn’t eliminate perhaps) the need for hand-tuning? Because I’m pretty sure that for a lot of engineers responsible for Wi-Fi in enterprise settings that sort of manual tuning just doesn’t scale.
This may be mostly a data science problem for the various Wi-Fi vendors. Can they extract enough data from the systems we have to infer what the user experience is and then tie that to the RF metrics they already have? I know that it’s what just about everyone is working on, given the number of analytics platforms I’m seeing these days.
Right now, whether by algorithm or by hand, we're all managing our radios the same way – based on RF parameters whose impact on user experience is difficult to quantify. Sure, I can say "I changed AP foo from channel bar to channel baz and channel utilization decreased by X%", but what can you tell me about how that change impacted the users? Can you tell me how that improved their experience? Was it disruptive?
I know these are very hard, perhaps almost impossible, questions to answer. But that doesn’t mean they aren’t the right questions to ask. Right now we manage what we can measure but does that lead to the best results? If we focus more on measuring the user experience then should that data influence how we manage our radios? And if we did that, what would happen? Feel free to share your thoughts on this!
The other day I was chatting with a co-worker and the conversation ended up turning to Wi-Fi and for some reason I ended up explaining what “dBi” meant, what an isotropic radiator was, and how antennas basically worked. At the end of the conversation he asked me where I had learned all that stuff – he was curious if it had been part of some Wi-Fi training I had undergone. He knew it wasn’t college because I make it clear to folks that my major was Computer Science – not EE or some hybrid. (And I work in a company with a lot of EEs.)
Nope, I told him. My training in RF fundamentals came from amateur radio.
Several years ago I was reading about the emergency response to the Loma Prieta earthquake and it included some recordings of amateur radio traffic. (I remember the Loma Prieta quake well – it was so powerful that it caused the light fixtures to sway all the way in my parents' home in Sacramento.) This led to learning more about amateur radio and its role in disaster response. Living in the San Francisco Bay Area I'm well aware of the need to be ready. I have a disaster kit, I have non-perishable food stores, but this got me interested in being able to communicate in a disaster.
This story will be familiar to most “hams”. Disaster communications is sort of the gateway drug to amateur radio. It starts there, then you do your first Field Day and make your first DX contacts, and then you’re wondering if you have room in the back yard for a tower. You might fall in with a group of contesters and get that bug. They prey on your civic responsibility and then the next thing you know you’re hanging out at HRO Sunnyvale (RIP) thinking that the shiny radio isn’t THAT expensive…
But I digress.
Part of becoming an amateur radio operator is getting licensed. And the studying you do for that is a pretty good introduction to how radio waves work, how antennas work, and many of the other things that tend to be important in Wi-Fi. Yes, the frequencies are very different but all of the concepts carry over. Amateur radio is nice because it’s very “hands on”. You build and test things yourself, you find out what works and what doesn’t (and sometimes why). This mostly happens around the dark art of antenna building.
I was pretty far along in my career as a network engineer when I "fell into" Wi-Fi. And as I started doing more and more work in that area I became more aware of how much of a head start my amateur radio experience had given me. It's not only a fun hobby (with lots of cool toys), but it also provides a lot of very valuable professional education for wireless network engineers. And as a side bonus it will help with your qualifications for a CWNE certification!
Studying for an Amateur Radio license in the US is pretty easy. There are 3 different classes of license: Technician, General, and Extra. As you get licensed for a more advanced class you get access to more and more spectrum that you can use to try and talk to folks. The question pools are all public so there should be no surprises on the exam. I used HamTestOnline as a study tool back when I was getting licensed.
If you want to get started look for a local radio club. They have been, in my experience, very welcoming to new people who are interested in the hobby and mentoring is part of that. A good resource for finding a club is the ARRL Club Finder. You’ll be able to find out about testing opportunities as well as club events where you can get a chance to operate and practice without having to invest in equipment yourself. It can’t get any easier than that.