Home and server rack 10GbE network upgrade and deployment

When I built the house I planned to one day upgrade to 10GbE, so I had Cat6A run throughout. The electrician said he had never seen a residential customer ask for it, only businesses, and he got a good laugh out of that.

The 1GbE standard has been around since 1999. That is 24 years! Much of the consumer market is making an incremental jump to 2.5GbE, but I decided to go straight to 10GbE.

Phase One:

Originally the upgrade to 10GbE networking was going to happen further down the road because of the high cost of the switches.

What really kicked the upgrade into gear was wanting to learn switches and networking for work. A colleague had an extra Dell switch, an S4810P. Granted, the switch is over ten years old, but it is a start, and I only paid for shipping. The one negative I did not realize at first was that it is an SFP+ switch and all of my cabling was RJ45, so the added cost of SFP+ modules added up.

Now to plan the client side. I built a new PC this past year that we use as a family. I wanted to be able to get to my photos and files with the responsiveness of a local drive, and I wanted a central location for all my files shared across my systems. After 23+ years as a photographer and a parent, the data adds up. With 10 gig networking, the network is no longer the bottleneck for my weekly backups.

When I started looking for 10 gig NICs, I quickly found that the system I had built before planning the upgrade could not handle the cards, since its free slots were only PCIe x1 and x2 and a typical 10 gig NIC wants more lanes than that. While searching for solutions I ran across a few M.2 NICs.
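If you want to check what link a card actually negotiated, lspci on Linux will show it: LnkCap is what the card supports and LnkSta is what it got from the slot. A quick sketch, where the PCI address is just a placeholder:

    # Find the NIC's PCI address (the AQtion card should show up among the Ethernet controllers)
    lspci | grep -i ethernet

    # Substitute the address from the previous command for 04:00.0
    sudo lspci -vv -s 04:00.0 | grep -E 'LnkCap|LnkSta'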

I pulled the trigger on an off-brand card from Amazon. It is a no-name Chinese card, but it is based on the well-known Marvell AQtion chipset.

At first I was having issues with the card resetting. It turned out my system fans were not pushing enough air to keep it cool.

Once that was stable, I started seeing transfers drop from around 400MB/s down to 1MB/s. I was about to return the card when I decided to dig deeper, and it turns out the M.2 SATA SSD I had bought was known to have issues: disk utilization would spike to 100% and the transfer would grind to a halt.

Most of the time it is stable unless I push it too hard. I have bought a new NVMe M.2 drive that is still pending install.
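When transfers tank like that, it helps to rule the network in or out before blaming the card. iperf3 only exercises the network path, not the disks, so if it still shows near line rate while file copies crawl, the problem is on the storage side. A minimal sketch, with the IP as a placeholder:

    # On the NAS/server end
    iperf3 -s

    # On the PC; -P 4 runs four parallel streams to help fill a 10GbE link
    iperf3 -c 10.0.0.10 -P 4 -t 30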

For non-vital storage I decided to use FreeNAS. ZFS's caching helps overcome the limits of spinning SAS drives. All my production NAS storage is still on a Linux VM with XFS that rsyncs to an external drive.
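The rsync job itself is nothing fancy. A minimal sketch of that kind of mirror, with both paths as placeholders:

    # Mirror the production share to the external drive.
    # -a preserves ownership/permissions/timestamps, -H keeps hard links,
    # and --delete keeps the copy an exact mirror of the source.
    rsync -aH --delete /srv/nas/ /mnt/external-backup/nas/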

At this point in my adventure I had my PC connected to the 10 gig switch, which in turn connected to my R720. It was working well.

Phase Two:

I had been talking to some colleagues at work about the setup, and I planned to get a 10 gig card for my work desktop along with spares for future servers and desktops.

A friend mentioned he had some cards and twinax cables he would send me. Next thing I know, six Intel 10 gig cards and twelve twinax cables show up in the mail. I put a few in my R720 to run some tests with NIC teaming and LAGs.
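For the LAG tests, the switch ports need to sit in a matching LACP port-channel, and the host side needs an 802.3ad bond. If you are testing from a Linux box or VM (rather than through the ESXi vSwitch teaming policies), nmcli can build the bond; the interface names below are placeholders for the two Intel ports:

    # Create an 802.3ad (LACP) bond; the switch side must run a matching LACP LAG
    nmcli con add type bond ifname bond0 con-name bond0 \
        bond.options "mode=802.3ad,miimon=100,lacp_rate=fast"

    # Enslave the two 10 gig ports (names are placeholders)
    nmcli con add type bond-slave ifname ens1f0 master bond0
    nmcli con add type bond-slave ifname ens1f1 master bond0

    nmcli con up bond0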

Phase Three:

Now here is where things got a bit out of hand, though in a good way. The same friend had a 10 gig Cisco Nexus N3K-C3172TQ-10GT (RJ45) switch and an R730xd 3.5" server. I had to pay shipping on both, which came to around $350, but for what I got it was still a good deal. I was also having issues with my patch panel, so I decided to make some new cables and reconfigure.

After racking everything I migrated my eight 3TB 7200 RPM SAS HDDs over to the R730xd, then put eight 300GB 10K SAS disks in the R720 for now. Moving drives around is always stressful, and I did run into one problem: my virtualized TrueNAS array faulted. Fortunately it held no data I did not already have somewhere else, but lesson learned so I do not make that mistake again.

For testing I created a non-prod RAID0 on the R720 with a TrueNAS VM on it (prod is RAID6). I then created an iSCSI volume and presented it to both ESXi hosts, and I have put all my temporary test VMs on that volume. It is super fast over the 10 gig network.
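For reference, hooking an ESXi host up to an iSCSI target from the command line is roughly: enable the software initiator, add the portal as a send-targets discovery address, and rescan. A rough sketch; the vmhba name and portal IP are placeholders and will differ per host:

    # Enable the software iSCSI initiator (harmless if already enabled)
    esxcli iscsi software set --enabled=true

    # Point dynamic discovery at the TrueNAS portal (placeholder adapter/IP)
    esxcli iscsi adapter discovery sendtarget add --adapter=vmhba64 --address=10.0.0.20

    # Rescan so the LUN shows up, then create or mount the VMFS datastore on it
    esxcli storage core adapter rescan --all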

Now I have a full rack with two enterprise 10 gig switches and two PowerEdge servers. I am planning to split things so I can shut down the R720 and the Dell switch when I am not actively using them, though I still need to buy two more Intel RJ45 SFP+ modules first. Right now the whole rack pulls about 900 watts, so it is not ideal for power savings, but I do not plan on leaving every system up 24/7.

On to the next adventure!

#iworkfordell
