The image below is an example of how channel layers can be deployed to support high-client-density areas such as an open space like TG. For this we use different ESSIDs; some are deployed on multiple channels, while others may only be needed on one or two. The ESSID used for video broadcasting, for example, may live on just one channel, which is then reserved "in the air" for that purpose only. If we need more capacity, it's simply a matter of adding more APs and perhaps assigning more channels to that ESSID.
For TG this could in principle be handled by a single wireless controller, but for density and RF reasons we can also split the channels and ESSIDs we need across multiple controllers. We have also placed a redundant controller on standby, ready to take over if one of the master controllers should fail.
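To make the layering idea concrete, here is a minimal Python sketch of what such a channel/ESSID plan could look like. The ESSID names, channel numbers and controller assignments are invented for illustration and are not the actual TG plan.

```python
# Hypothetical channel/ESSID layout for a high-density hall.
# All ESSIDs, channels and controller names below are made up.
CHANNEL_PLAN = {
    "TG-main":  {"channels": [36, 44, 52, 60], "controller": "mc1"},
    "TG-crew":  {"channels": [100, 108],       "controller": "mc2"},
    "TG-video": {"channels": [132],            "controller": "mc1"},  # one channel reserved for video
}

def capacity_report(plan):
    """Print how many channel layers each ESSID occupies and where."""
    for essid, cfg in plan.items():
        print(f"{essid}: {len(cfg['channels'])} channel layer(s) on {cfg['controller']}")

if __name__ == "__main__":
    capacity_report(CHANNEL_PLAN)
```

Adding capacity to one ESSID is then just a matter of appending channels to its list and deploying APs on them.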
In addition we use the unique Airtime Fairness technology Meru provides by default.
Meru's Airtime Fairness governs Wi-Fi access so that every client gets the same amount of airtime, ensuring consistent performance for all users. With Meru Airtime Fairness®, the speed of the network is not determined by the slowest traffic. By allocating time equally among clients, Airtime Fairness allows every transmission to move at its highest potential. At TG this is very useful, since we will serve many Wi-Fi clients in the same RF space ☺. So we will most likely use all the available RF bandwidth, but divided equally among the clients based on airtime.
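As a toy illustration of the airtime-fairness idea (not Meru's actual implementation), the sketch below gives each client an equal slice of one second of airtime: a slow client simply moves fewer bytes in its slice instead of dragging everyone else down.

```python
# Toy model of airtime fairness: clients with different PHY rates
# share one second of airtime equally. All numbers are made up.
clients = {
    "fast-laptop": 300e6,  # bits per second this client can achieve on air
    "old-phone":    20e6,
    "tablet":      150e6,
}

airtime_per_client = 1.0 / len(clients)  # equal time share of one second

for name, phy_rate in clients.items():
    throughput = phy_rate * airtime_per_client
    print(f"{name}: {airtime_per_client:.2f}s of air -> {throughput / 1e6:.0f} Mbit/s")

# With per-packet fairness instead, the slow client would occupy the
# medium far longer per frame and drag the fast clients down with it.
```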
To monitor and follow the wireless network at TG, we use Meru Network Manager.
With it we can track down clients, see usage per AP/radio and per controller, and so on. This gives us good insight into where to fine-tune and optimize the entire installation at TG.
(we will make the final version in high quality format available after TG).
Here are some relevant links to Meru resources, if you are interested in wireless networking:
These are the 10Gig fiber-optic transceivers provided by SmartOptics. The transceivers run on different wavelengths and can therefore "talk" over the same pair of fibers from Hamar to Oslo. SmartOptics; awesome people that deliver awesome equipment <3 🙂
Most of the Tech:Net crew is traveling up to Vikingskipet tomorrow to establish the internet connection and some of the most critical components of the infrastructure. From Saturday morning we'll be working on getting the core equipment up and running, so that we are absolutely sure we can provide the best and most stable networking services for our users when they arrive on Wednesday.
For some years now, The Gathering has utilized different methods for automatic provisioning of the edge switches that the participants connect to. The first iteration of this system was used to configure ZyXel switches and was called 'zyxel-ng'. Then, in 2010, The Gathering bought new D-Link edge switches with gigabit ports. New vendor, new configuration methods: 'dlink-ng' was born. It had lots of ugly hacks and exception handling. This was due to several reasons, but mainly because the D-Links wouldn't take configuration automatically from TFTP/FTP or similar.
Five years had passed. We'd outgrown the number of switches that were bought in 2010, and we needed more. After thorough research and several rounds of RFQs, we decided to buy new switches for TG15. We ended up buying Juniper EX2200s as edge switches. This meant, once again, a new configuration tool. We had this in mind when writing the RFQ, so we already knew what to expect. After some testing, trial and error, we landed on a proof of concept. It involves DHCP Option 82, a custom-made DHCP server and some scripts that serve software and configuration files over HTTP. The name? Fast and Agile Provisioning (FAP).
With this tool we can connect all the edge switches on the fly, and each will get the configuration designed for that specific switch (based on which port on the distro it connects to). If a switch doesn't have the specific software we want it to have, it will automatically download and install that software.
It's completely automated once set up, and can be kept running during the entire party (so if, for example, an edge switch fails during the party, we can just replace it with a blank one and it'll get the same configuration as the old one).
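To give a feel for how this works, here is a much-simplified Python sketch of the serving side. The circuit-ID format, URL scheme and generated config are assumptions for illustration; the real tool's internals are more involved. The idea is that the DHCP relay on the distro inserts Option 82, the circuit-ID identifies the distro port, and a small HTTP handler serves the configuration meant for whatever switch hangs off that port.

```python
# Much-simplified sketch of FAP-style provisioning. The circuit-ID
# format, URL layout and config snippet are assumptions, not FAP's
# actual implementation.
from http.server import BaseHTTPRequestHandler, HTTPServer

def config_for_circuit(circuit_id: str) -> bytes:
    """Map a DHCP Option 82 circuit-ID (e.g. 'distro3:ge-0/0/17')
    to the configuration intended for the switch on that port."""
    distro, port = circuit_id.split(":", 1)
    # In reality the config would be rendered from templates and a
    # database; here we just fake a tiny Junos-style snippet.
    hostname = f"edge-{distro}-{port.replace('/', '-')}"
    return f"system {{ host-name {hostname}; }}\n".encode()

class FapHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Assume the DHCP server pointed the switch at
        # /config?circuit=<circuit-id> when handing out the lease.
        if self.path.startswith("/config?circuit="):
            circuit = self.path.split("=", 1)[1]
            body = config_for_circuit(circuit)
            self.send_response(200)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

if __name__ == "__main__":
    HTTPServer(("", 8080), FapHandler).serve_forever()
```

Because the configuration is keyed on the distro port rather than the switch itself, a blank replacement switch plugged into the same port picks up the old unit's config automatically.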
As The Gathering 2015 draws closer we thought it was about time for an update regarding the network.
We have been through a comprehensive round of evaluating and purchasing new edge/access switches to replace the D-Links that have been the access switches for the last five events. After a lot of planning, meetings, e-mails, more meetings, shortlisting and yet more meetings, we ended up choosing nLogic as our main collaborator, and we are happy to announce that TG will be using equipment from Juniper Networks for TG15 and the years to come. nLogic have been very forthcoming and fantastic to work with, and we look forward to working with them. nLogic is a consultancy company in Oslo, which happens to be a Juniper Elite Portfolio Partner in Norway.
Most of the equipment has been purchased as part of the deal with nLogic, at very good prices (of course, or we could never have afforded these cool switches). Thus, the equipment will end up being owned by KANDU/TG, free for us to do what we want with after the contract ends and we, of course, have paid the bank all its money…
As core switches this year we will be using two Juniper QFX5100-48S switches. These high-performance, low-latency switches are based on the Trident 2 chipset and offer 48 x 10G and 6 x 40G interfaces, making them ideal as core switches in a network such as ours.
This year we will be running the Juniper EX3300-48P switches in stacks (Virtual Chassis) of four, with 20Gbps uplink to the core switches (upgradable to 80Gbps if needed). The EX3300-48 comes with 48 x 1G copper and 4 x 10G SFP+ interfaces. Running these switches in a stack gives us both full redundancy and the scalability and speed we need. This switch model will also be used for the backend network in the arena (CamGW, LogGW, etc.).
For The Gathering 2015 we will be utilizing the EX2200-48T-4G as the edge switch. The EX2200-48 comes with 48 x 1G copper and 4 SFP interfaces and offers a rich feature set that is ideal for us. Functionality worth mentioning includes IGMP and MLD snooping, first-hop security for both IPv4 and IPv6 (IP source guard, IPv6 source guard, DHCP snooping, DHCPv6 snooping, IPv6 ND inspection, dynamic ARP inspection), sFlow, DHCPv4 Option 82, DHCPv6 Option 17/37, etc.
NocGW and TeleGW this year will consist of stacks of EX4300-24T and QFX5100-48S. This gives us the ideal port combination of 1G, 10G and 40G, and also provides us with a fully redundant 80G (2 x 40GbE) ring between TeleGW, NocGW and Core.
With the above setup in mind, we have designed a network that can suffer an outage of any single network element without an outage on any critical services.
This weekend we completed one of the Juniper workshops at nLogic, led by senior network consultant Harald Karlsen. This is part of the track for us in Tech:Net (and some from Tech:Server and Tech:Support) to be prepared for working with Juniper Junos after ten very good and pleasant years with Cisco IOS.
(*) All pictures are taken, owned and copyrighted by Marius Hole – ask before you download them and use them anywhere!
Wannabe is now open for applications for TG15, and you can read the description for Tech:Net here:
If this sounds right for you, I recommend that you register in Wannabe and submit an application: http://wannabe.gathering.org/tg15/
We hope to see many interesting applications and applicants! 🙂
*Who is doing something for the Internet and Free Software in Norway?*
Some of you may have experienced problems with the internet, the wireless and the network in general. We have had some minor issues with the internet link, the internal routing and the wireless. Everything was on track and working before 09:00 Wednesday morning, but we never really know how well things work until at least a few thousand participants actually arrive, connect to the network and put some load on it.
*The wireless:*
We had some small problems with the servers to start with, and then with the configuration. The main problem was that we had to prioritize the cabled network.
We are still working on improving the wireless solution and hope that we have everything optimized by tomorrow morning.
*The internal network:*
We don't have one specific problem to point to; it's more like hundreds of small problems. The list is long, and it contains everything from bugs in software to missing parts and some human error. But there have not been any major incidents.
*The internet, which is a two-part problem:*
1. We have 4x10Gig links in a port bundle down to Blix Solutions in Oslo. These were connected and tested OK on Friday. When participants arrived on Wednesday and the links became loaded with traffic, we started to see problems with the load balancing (the sketch below shows the idea behind how flows are balanced onto member links). We removed the two ports that weren't performing well from the bundle and continued on a 100% working 20Gig (2x10Gig).
This morning, around 11:00, SmartOptics arrived with new optical transceivers and converters. They inspected the transceivers on the problematic links with an optical microscope and could see that they weren't completely clean. Using a special cleaning solution, they removed the dust and dirt from our transceivers, leaving it to us to put them back in the bundle, now in 100% working condition. Next year we'll make sure to be more thorough about this before patching things together.
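To see why a single bad member link hurts only some users, consider how a link aggregation group typically balances traffic: each flow is hashed onto one member and stays there. The toy Python sketch below illustrates the concept; the actual header fields and hash function used by the hardware are platform-dependent, and all names here are made up.

```python
# Toy illustration of LAG flow hashing: each flow is pinned to one
# member link by hashing its 5-tuple. Which fields the real hardware
# hashes on is platform-dependent; this only shows the concept.
import zlib

MEMBERS = ["xe-0/0/0", "xe-0/0/1", "xe-0/0/2", "xe-0/0/3"]

def member_for_flow(src_ip, dst_ip, src_port, dst_port, proto="tcp"):
    """Deterministically pick a member link for one flow."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    return MEMBERS[zlib.crc32(key) % len(MEMBERS)]

# Every packet of this flow follows the same member link, so if that
# member has a dirty transceiver, the whole flow suffers while flows
# hashed onto the healthy members look perfectly fine.
print(member_for_flow("151.216.1.10", "185.12.0.5", 50231, 443))
```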
2. Origin, Steam, Blizzard, NRK, Microsoft, HP, Twitch… Some of these services rely on geolocation. There are multiple providers of geolocation services (such as MaxMind), and the providers usually charge money per database pull, so the cheaper a company wants to be, the longer the interval between pulls. As a result we may be seen as being in Norway by services that update often, but in Russia, Puerto Rico, Italy, Antarctica, etc. by companies that pull data from the geolocation database less frequently.
The reason for this is that our IP address range is a temporary allocation from RIPE. RIPE has a pool of IP addresses that it lends out for short periods to temporary events. This means we are not guaranteed to get the same IP addresses every year, and that a lot of different events in different countries may have been using the allocation in the months before us.
We are working continuously to solve this. We talk to Origin/EA and Valve, we NAT the best-known and most used services through permanent Norwegian IP addresses, and we do ugly DNS hacks. The sad fact, however, is that in the limited time we have during TG, we won't be able to solve this for every service.
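For the curious, this is roughly what a country lookup against MaxMind's free GeoLite2 database looks like using the maxminddb Python module. The database file name and example address are assumptions; the point is that a provider serving a months-old copy of this data will keep placing our temporary range in whatever country last used it.

```python
# Sketch of a MaxMind GeoLite2 country lookup with the maxminddb
# module. The database path and example address are assumptions.
import maxminddb

reader = maxminddb.open_database("GeoLite2-Country.mmdb")
record = reader.get("151.216.1.10")  # say, an address in a temporary RIPE event range
if record and "country" in record:
    print(record["country"]["iso_code"])
else:
    print("address not in this copy of the database")
reader.close()

# A service that pulled this database months ago will place the range
# wherever it was announced for the previous event, not in Norway.
```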
5GHz… Connect to the broadcasted ESSID: “The Gathering 2014” <- this one is only 5GHz and you are 100% surest to getest the bestest and freshestest frequencies. YAY!! 😉
Legacy clients with only 2.4GHz can connect to the “The Gathering 2014 2.4Ghz” ESSID.
2.4GHz is best effort only; the main focus is a stable 5GHz network.
The password for both is: Transylvania
N.B. with capital T
TG - Technical Blog is the unofficial rambling place of the Systemstøtte and Tech crews from The Gathering.