The Gathering Technical blog

Network design TG17

16 Apr 2017, by Martin Karlsen from Tech:Net

The Gathering has almost come to an end and it's about time we posted some details about our network design.
Network design

Our network is designed in the traditional three-layer hierarchical model, with core, distribution and access layers; L3 is terminated at our distribution layer.


Core layer
The core L3 switches consist of 2x Juniper QFX5100 in a virtual chassis, which is Juniper's stacking technology. In addition to providing our distribution switches with uplinks, the core also connects our 80Gbps backbone ring, our stand and the border router.
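A virtual chassis makes the two physical switches behave as one logical device with a single configuration. As a rough illustration, a preprovisioned virtual chassis looks something like this in Junos (the member roles are standard, but the serial numbers are placeholders, not our actual values):

    # Hedged sketch of a two-member preprovisioned virtual chassis
    set virtual-chassis preprovisioned
    set virtual-chassis member 0 role routing-engine serial-number XXXXXXXXXXXX
    set virtual-chassis member 1 role routing-engine serial-number YYYYYYYYYYYY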
Between our border router and the internet we have an inline Juniper SRX5800, which is capable of pushing 2Tbps worth of firewall throughput(!). This is where we terminate our BGP peering with Telenor and redistribute routes into OSPF, making the SRX our OSPF ASBR.
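As a minimal sketch of that redistribution: an eBGP session towards the upstream, plus an export policy that hands the learned routes to OSPF. The group and policy names, the neighbor address and the peer AS shown here are assumptions for illustration, not our actual config:

    # Illustrative Junos sketch: eBGP peering exported into OSPF
    set protocols bgp group telenor type external
    set protocols bgp group telenor peer-as 2119
    set protocols bgp group telenor neighbor 192.0.2.1
    set policy-options policy-statement bgp-to-ospf term bgp-routes from protocol bgp
    set policy-options policy-statement bgp-to-ospf term bgp-routes then accept
    set protocols ospf export bgp-to-ospf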

Distribution layer
The L3 distribution switches consist of 3x Juniper EX3300 in a virtual chassis per distribution. Each distro connects to the core using 2x 10Gbps single-mode transceivers patched into our MPO cassettes pulled from the ceiling. The distro redistributes its connected routes into the OSPF area and advertises them to the core.
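That connected-route redistribution is just a small export policy, along these lines (policy name, area and uplink interface are assumptions for the sketch):

    # Hedged sketch: advertise a distro's directly connected subnets into OSPF
    set policy-options policy-statement connected-to-ospf term direct from protocol direct
    set policy-options policy-statement connected-to-ospf term direct then accept
    set protocols ospf export connected-to-ospf
    set protocols ospf area 0.0.0.0 interface ae0.0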

Access layer
The L2 access switches consist of 144+ Juniper EX2200, each with a 3x 1Gbps connection to its distribution switch. To protect our network at the edge, we run a series of security features collectively called first-hop security. This takes care of a lot of potential issues such as loops, spoofing and ARP poisoning.
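On the EX2200 these features live under ethernet-switching-options. A minimal sketch, assuming a participant VLAN called "floor" and a single access port (both names made up for the example):

    # DHCP snooping, dynamic ARP inspection and BPDU protection at the edge
    set ethernet-switching-options secure-access-port vlan floor examine-dhcp
    set ethernet-switching-options secure-access-port vlan floor arp-inspection
    set ethernet-switching-options bpdu-block interface ge-0/0/0.0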

vc1.ring
One of the design choices this year was to turn our backbone ring, which traverses the entire arena, into a single virtual chassis instead of separate routers. This effectively makes the ring a distribution switch for our crew network, and makes it easy for us to provision edge/access switches for our sponsors and crew areas. As a result, for the first time ever, we have automatically provisioned our entire access network. Not a single access switch has been configured manually this year!

Internet capacity

TL;DR - 40Gbps...

At TG16 we suffered several DDoS attacks against our network and even our website (gathering.org). To be able to handle a potential DDoS attack this year, we decided to upgrade our internet capacity from 40Gbps to 40Gbps + 10Gbps, where the newly added 10Gbps link would be reserved for our production environment. Instead of dedicating a single physical interface to it, we included the interface in our aggregated interface and rate-limit the participants' network to 40Gbps. This way we keep our production network alive when our participants' network gets lit up.
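Conceptually, that rate-limit can be expressed as a Junos policer applied to participant traffic on the aggregated uplink. The filter, policer and prefix-list names, the burst size and the match term below are all assumptions for illustration; the prefix-list would be defined elsewhere:

    # Sketch: police participant traffic to 40Gbps, pass everything else
    set firewall policer participants-40g if-exceeding bandwidth-limit 40g
    set firewall policer participants-40g if-exceeding burst-size-limit 125m
    set firewall policer participants-40g then discard
    set firewall family inet filter uplink-out term participants from source-prefix-list participant-nets
    set firewall family inet filter uplink-out term participants then policer participants-40g
    set firewall family inet filter uplink-out term default then accept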

A report from the NOC

The party is well underway, and I was dumb enough to say aloud the phrase "We should probably blog something?". Everyone agreed, and thus told me to do it. Damn it.

Anyway, things are going disturbingly well. We were done with our setup 24 hours ahead of time, more or less. I've had the special honor of being the first to get a valid DHCP lease in the NOC and the first to get a proper DHCP lease "on the floor". And I've zeroized the entire west side of the ship (i.e., reset the switches to factory default so they request proper configuration; this involved physically walking to each switch with a console cable and a laptop).
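For the curious, the factory reset in question is a single operational command, run on the switch over the console cable:

    # Wipes all configuration and state; on next boot the switch starts
    # auto-provisioning and requests its configuration over DHCP
    request system zeroize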

But we have had some minor issues.

First, as you might have picked up, we have to tickle the edge switches a bit to get them to request configuration. This cost us a couple of hours of delay during the setup. It also means that whenever we get a power failure, our edge switches boot up in a useless state and we have to poke them with a console cable. We've been trying to improve this situation, but it's not really a disaster.

We've also had some CPU issues on our distribution switches, mainly whenever we power on all the edge switches. To reduce the load, we disabled LACP - the protocol used to control how the three uplinks to each edge switch are combined into a single link. This worked great, until we ran into the next problem.
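For context, the uplink bundle looks something like this in Junos (the interface names are assumptions); disabling LACP amounts to removing the lacp stanza, leaving a static bundle:

    # 3x 1Gbps uplinks aggregated into ae0 with LACP
    set interfaces ge-0/1/0 ether-options 802.3ad ae0
    set interfaces ge-0/1/1 ether-options 802.3ad ae0
    set interfaces ge-0/1/2 ether-options 802.3ad ae0
    set interfaces ae0 aggregated-ether-options lacp active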

The next problem was a crash on one of the EX3300 switches that make up a distribution switch (each distribution switch is 3x EX3300 in a virtual chassis). We're working with Juniper on the root cause of these crashes (we've had at least four so far, as far as I'm aware). A single member in a VC crashing shouldn't be a big deal: at worst, we could get a minute or two of downtime on that single distribution switch before the two remaining members take over.
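A quick way to see which members are present and healthy after such a crash is the standard status command on the virtual chassis:

    show virtual-chassis status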

However, since we had disabled LACP earlier, that caused some trouble: the link between the core router and the distribution switch didn't come back up again, because re-establishing the bundle is LACP's job. This happened to distro7 on Wednesday. We were able to bring distro7 up again quite fast regardless, even with a member missing.

After that, we re-enabled LACP on all distribution switches, which was the cause of the (very short) network outage across the entire site on Wednesday.

Other than that, there is little to report. On my side, being in charge of monitoring and tooling (e.g. Gondul), the biggest challenge is that the ring is now a single virtual chassis, which makes it trickier to measure the individual members. And the fact that graphite-api has completely broken down.

Oh, and we've had to move our SRX firewall, because it was getting far too hot... more on that later?

This time, there were no funny pictures though!

Labels: Tech TG17
