
10 GbE for the home lol

gringott

Killed then Resurrected
Midas Member
Site Supporter ++
Joined
Apr 2, 2010
Messages
14,756
Likes
19,093
Location
You can't get there from here.
#1
When I worked on the "Network" Team at an enterprise-level company, we used very expensive fiber for our SAN. Our poor hundreds of staffers and workers got barely any bandwidth because my boss thought nobody needed bandwidth except the SAN. He had switches with 48 x 1 Gb ports funneled out through a 1 Gb or 2 Gb uplink. WTF? To me, time is money - get the stuff done and you can start on something else.
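
Just to put numbers on how starved those users were, a quick sketch of the math (Python, arithmetic only):

```python
# Rough oversubscription on that old setup: 48 access ports at 1 Gb each,
# all funneled through a 1 Gb or 2 Gb uplink.
edge_capacity_gb = 48 * 1  # total access-port capacity in Gb/s

for uplink_gb in (1, 2):
    ratio = edge_capacity_gb / uplink_gb
    print(f"{uplink_gb} Gb uplink -> {ratio:.0f}:1 oversubscription")
# Prints 48:1 and 24:1 - which is why nobody downstream saw real bandwidth.
```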

Anyway, I read some stories about using 10 GbE in the home a few years ago, based on used hardware from ebay. Switches, even used, cost a fortune, so the only real solution was peer-to-peer - a dedicated link between one computer and a server, for example. Any other access from outside those two would have to be over the normal 1 Gb NIC.

Had a few minutes on my hands the other day, so I decided to investigate again. Turns out I was able to buy two new NICs for the price of used ones on ebay - 2 for $40, or $20 each. I paid a lot more than that for my Intel 1 Gb NICs not that long ago!
I picked up a cable at the same time, $17.
Now without tweaking the cards - just default settings, and NOT transferring to or from a RAM disk, but copying to and from the disks I use daily - I got anywhere from over 300 MB/s to about 550 MB/s. Compare this to my best 1 GbE rates: around 120 MB/s at best.
So I was pretty happy. Transfer times cut to a third or better, and no bogging down with 35 GB files, for example. Sweet!
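
To see what that buys you on a big file, here are rough transfer times for a 35 GB file at those rates (back-of-the-envelope only; real transfers vary with disks and protocol overhead):

```python
# Rough transfer-time estimate for a 35 GB file at the rates mentioned above.
file_mb = 35 * 1024  # 35 GB expressed in MB

for label, rate_mb_s in [("1 GbE  (~120 MB/s)", 120),
                         ("10 GbE (~300 MB/s)", 300),
                         ("10 GbE (~550 MB/s)", 550)]:
    minutes = file_mb / rate_mb_s / 60
    print(f"{label}: {minutes:.1f} minutes")
# Roughly 5.0, 2.0, and 1.1 minutes respectively.
```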

Then I thought, well, if I put a two-port card in the server, I can add one more workstation. I found one card for $42 that matched the two cards I already had.

I started investigating further and found a switch with 4 x 10 GbE ports, 28 x 1 GbE ports, and a management port. It goes for anywhere from $300 to over $500 depending on where you buy it, but I found ONE going for $200, so I ordered it. This will be sweet, I can tell you. It means the 10 GbE machines can all talk to each other, and the 1 GbE machines can also talk to them over the same NIC, at the lower speed of course.

Should be here by Friday. I'm excited.
 

solarion

Gold Member
Gold Chaser
Site Supporter ++
Joined
Nov 25, 2013
Messages
5,050
Likes
7,614
#2
Nice. I've been toying with the idea of replacing my gb infrastructure with faster goodies. What kind of connectors are you running? Are these still RJ45 or SFP+ based toys?
 

gringott

Killed then Resurrected
Midas Member
Site Supporter ++
Joined
Apr 2, 2010
Messages
14,756
Likes
19,093
Location
You can't get there from here.
#3
I'm using SFP+.

Keep in mind the cards I am using are "old tech", which is 100x better than most "new tech" for the home PC/gamer guys. This grade of equipment isn't made cheap and nasty.
 

the_shootist

Targeted!
Midas Member
Sr Site Supporter
Joined
May 31, 2015
Messages
20,900
Likes
22,571
#4
I'm using SFP+.

Keep in mind the cards I am using are "old tech", which is 100x better than most "new tech" for the home PC/gamer guys. This grade of equipment isn't made cheap and nasty.
So 10 Gb running over fiber?

Sounds like you guys (back at work) were using iSCSI or FCoE for SAN transport. If so, they should have had a separate isolated network for the SAN traffic, or gone to Fibre Channel!
 

gringott

Killed then Resurrected
Midas Member
Site Supporter ++
Joined
Apr 2, 2010
Messages
14,756
Likes
19,093
Location
You can't get there from here.
#5
I'm running copper at home. The distances are a few feet, so no reason to go to fiber.
Fibre Channel at work, redundancy and all that shit.
Perhaps I confused things by bringing up the user experience.
We had EMC storage in two plants with dark fiber between them.
Plus virtual tape drives and robotic IBM tape libraries; everything in one plant was duplicated in the other, etc.
For example, I took over a rather tiny VMware install [about 4 hosts], and by the time I left it was huge - if I remember right, close to 100 hosts and tons of VMs. You could move VMs between plants while they were running and doing a job, and only 1 ping would be dropped. Every host I had in one plant was duplicated in the other plant. Same thing with the storage. We had truckloads of old physical servers that went to the dump. Power requirements dropped like a rock. If we had a power failure in one plant [and the generators failed too], it wasn't an issue for the servers on VMware - I had enough host resources to run everything from one plant, with automatic failover. I also ran some physical clusters across plants, some for specialized imaging software they had and also for our email system, although by the time I left we were already discussing moving the email to VMware since the failover was more robust.

What might have confused the issue was the actual users' workstations: the SAN worked great, it was the users who had shit. Of course, many machines were doing industrial jobs and didn't need much bandwidth, if any, but some of the users were moving a lot of data and did need it. They had 1 GbE NICs, but there was no point - they would not have noticed if they dropped to 100.

Boot from SAN was a big thing when I started there, but after a couple of years we didn't bother; drives were cheap but SAN storage was not. Those fibre HBAs ran hot, and we went through them like candy. QLogic was one brand that lasted about 3 years or so before you had to replace it.
 

gringott

Killed then Resurrected
Midas Member
Site Supporter ++
Joined
Apr 2, 2010
Messages
14,756
Likes
19,093
Location
You can't get there from here.
#6
OK, this evening I have been processing some just-under-5 GB video files on the workstation. They end up on a RAID 0 volume [HDD], and when I finish each one I move it to the server storage, which has SSDs as a "landing zone" for new files; they are migrated into the archive spinners later automatically. My average speed is around 414 MB/s when moving them from the workstation to the server. Not too shabby.
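
If anyone wants to check their own numbers the same way, here is a minimal sketch that times a single copy and reports the average MB/s. The paths are made up - point them at your own local volume and the share on the server:

```python
# Minimal sketch: time one big file copy and report average throughput.
import os
import shutil
import time

src = r"D:\video\render_output.mkv"          # hypothetical local RAID 0 path
dst = r"\\server\landing\render_output.mkv"  # hypothetical share on the SSD landing zone

size_mb = os.path.getsize(src) / (1024 * 1024)
start = time.perf_counter()
shutil.copyfile(src, dst)
elapsed = time.perf_counter() - start
print(f"{size_mb:.0f} MB in {elapsed:.1f} s = {size_mb / elapsed:.0f} MB/s")
```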
 

solarion

Gold Member
Gold Chaser
Site Supporter ++
Joined
Nov 25, 2013
Messages
5,050
Likes
7,614
#7
Nice. At least 3x the speeds I'm getting. Thanx bud, now I have network envy. lol
 

gringott

Killed then Resurrected
Midas Member
Site Supporter ++
Joined
Apr 2, 2010
Messages
14,756
Likes
19,093
Location
You can't get there from here.
#8
Just do it solarion.
2 x NIC, $40, a cable, say $20, that's $60.
Hook them up and you are rocking.

It is just pure fun.
Here are the cards I used; mine were brand new and flashed to the latest firmware. Yes, they are EOL, but what do I care?
Windows sees them and installs a driver, but I used the drivers from the manufacturer.
https://www.amazon.com/gp/product/B016OYD0D4/ref=oh_aui_detailpage_o06_s00?ie=UTF8&psc=1
Here is the cable I got, but I think I might have messed up - it is AWG 30, and I think I should have gotten AWG 24 (not AWG 28 as I first said). We will see; somebody said I might have a problem with the switch at AWG 30.
https://www.amazon.com/gp/product/B01DNBX4VY/ref=oh_aui_detailpage_o05_s00?ie=UTF8&psc=1
Or go to fleabay and pick up a set of used cards. I would say match them so you don't have any issues, and when you play with the settings it will carry over from card to card. You can save maybe 10 bucks or more over what I paid, but I don't think you will get brand-new stuff.

Maybe you would like my storage - I'll give a little hint, here is some of it:

 

gringott

Killed then Resurrected
Midas Member
Site Supporter ++
Joined
Apr 2, 2010
Messages
14,756
Likes
19,093
Location
You can't get there from here.
#9
Oh, BTW, the NICs I used are made in Israel.
I stopped my boycott because I got a good price, IMHO.

I hope they aren't spying on me, and that they won't turn on me when I don't like something Israel does.
 

gringott

Killed then Resurrected
Midas Member
Site Supporter ++
Joined
Apr 2, 2010
Messages
14,756
Likes
19,093
Location
You can't get there from here.
#10
I just found this cable on Amazon. It is AWG 30, and they say they test every cable on D-Link gear and they are good to go, so maybe I am OK with AWG 30. I will hold off buying more cables until I test the one I have first. Patience, I must tell myself. I just don't want to deal with a bunch of dropped packets and have to troubleshoot to figure out who or what is the guilty party.
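
When the switch shows up, a crude way to spot-check the AWG 30 cable for loss is just to hammer the far end with pings and count failures. The address below is only an example, and for a real diagnosis you would look at the error counters on the switch and the NICs; this is just a quick first pass:

```python
# Crude packet-loss spot check: fire N pings at the machine on the far end
# of the link and count how many fail.
import subprocess

HOST = "192.168.1.50"   # example address of the box on the other end of the DAC
N = 200                 # number of pings to send

lost = 0
for _ in range(N):
    # "-n 1" is Windows ping syntax; use "-c 1" on Linux.
    result = subprocess.run(["ping", "-n", "1", HOST], capture_output=True)
    if result.returncode != 0:
        lost += 1

print(f"{lost}/{N} pings lost ({100 * lost / N:.1f}%)")
```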

 

gringott

Killed then Resurrected
Midas Member
Site Supporter ++
Joined
Apr 2, 2010
Messages
14,756
Likes
19,093
Location
You can't get there from here.
#11
I have a good switch now - managed, does everything - but it will have to go. Perhaps I will flog it on fleabay.
I have consumer switches all over the place gathering dust; I am always looking for something better.
I should gather them up and flog them on fleabay as well. Same with wireless routers - I've got a bunch.
 

solarion

Gold Member
Gold Chaser
Site Supporter ++
Joined
Nov 25, 2013
Messages
5,050
Likes
7,614
#12
Yeah, tossing my dual-WAN router/switch would make me sad. No Interwebz makes me cranky. lol

I'd just keep that infrastructure in place and begin installing a new 10 GbE network for higher-bandwidth transfers when needed.
 

gringott

Killed then Resurrected
Midas Member
Site Supporter ++
Joined
Apr 2, 2010
Messages
14,756
Likes
19,093
Location
You can't get there from here.
#13
Yeah, I have a dual-WAN Cisco router stashed away somewhere. I used to have dual providers and it worked OK.