
Port configuration is ambiguous..

Posted Mar 28, 2025 19:18 UTC (Fri) by zorg24 (subscriber, #138982)
In reply to: Port configuration is ambiguous.. by pizza
Parent article: Making the OpenWrt One

My understanding is that the OpenWrt One is based on the BPI-R3 (with some changes to the board) and the Two will be based on the BPI-R4; both use MediaTek Filogic SoCs from the same family/generation. You can definitely get the BPI-R4 with 2x 10Gb SFP.

Port configuration is ambiguous..

Posted Mar 29, 2025 1:02 UTC (Sat) by pizza (subscriber, #46) (12 responses)

> You can definitely get the BPI-R4 with 2x 10Gb SFP.

Thanks, but I've long since learned to avoid trusting anything important to Arm-based SBCs that won't function properly with a mainline kernel and/or boot with off-the-shelf distros. (A special-snowflake, vendor-provided "preinstalled SD card image" doesn't count. Oh, and this includes most Raspberry Pis.)

As I'm not going to be able to wait until the end of the year for OpenWrt to build this new board (assuming it has the dual SFPs that I need), this is what I'll probably end up going with:

https://www.amazon.com/Healuck-Firewall-Appliance-OPNsens...

Port configuration is ambiguous..

Posted May 12, 2025 0:49 UTC (Mon) by pizza (subscriber, #46) (11 responses)

> As I'm not going to be able to wait until the end of the year for OpenWrt to build this new board (assuming it has the dual SFPs that I need), this is what I'll probably end up going with:

> https://www.amazon.com/Healuck-Firewall-Appliance-OPNsens...

I wanted to post a followup. The SoCs these CWWK designs are built on (N100/N150, or N305/N355 on the high end) only have a total of 9 PCIe 3.0 lanes. These lanes are split between a pair of i226 2.5GbE controllers, a slot for NVMe storage, a slot for a WiFi card, and one or two additional peripherals (a second NVMe slot, a WAN card slot, and/or a third 2.5GbE controller). That means anywhere from 5 to 7 of the possible 9 PCIe lanes are already spoken for, leaving at most 4 (but more likely 2) for the 10GbE ports.
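
To put rough numbers on that, here's a back-of-the-envelope lane budget; the per-device lane counts below are illustrative assumptions (boards differ in how the NVMe and expansion slots are wired), not a teardown of any specific unit:

```python
# Rough PCIe lane budget for an N100/N305-class design with 9 PCIe 3.0
# lanes. The per-device lane counts are assumptions for illustration;
# actual boards differ in how the NVMe and expansion slots are wired.
TOTAL_LANES = 9

# (device, minimum lanes, maximum lanes)
devices = [
    ("i226 2.5GbE controller #1", 1, 1),
    ("i226 2.5GbE controller #2", 1, 1),
    ("NVMe storage slot",         1, 2),
    ("WiFi card slot",            1, 1),
    ("extra peripherals",         1, 2),  # 2nd NVMe, WAN slot, 3rd i226
]

used_min = sum(lo for _, lo, _ in devices)
used_max = sum(hi for _, _, hi in devices)
print(f"lanes already spoken for: {used_min}-{used_max} of {TOTAL_LANES}")
print(f"left for the 10GbE ports: {TOTAL_LANES - used_max}-{TOTAL_LANES - used_min}")
# -> 5-7 spoken for, leaving 2-4 lanes for the 10GbE controller
```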

These designs all seem to use an i82599ES dual-port 10GbE controller, which is a PCIe 2.0 device whose documentation states that an x8 link is necessary to run both ports at full speed. This means that at _best_ (an x4 link and 0% overhead) this design provides only 80% of the raw bandwidth necessary to fully utilize both ports, and in an x2 configuration it won't even be able to run a single interface at full duplex (or both at half duplex).
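
For anyone who wants to check that arithmetic, a quick sketch, assuming 5 GT/s per PCIe 2.0 lane with 8b/10b encoding and ignoring protocol overhead:

```python
# PCIe 2.0 runs at 5 GT/s per lane with 8b/10b encoding, i.e. 4 Gbit/s
# of payload per lane per direction (protocol overhead ignored here).
GBPS_PER_GEN2_LANE = 5.0 * 8 / 10
NEEDED = 2 * 10.0  # two 10GbE ports at line rate, per direction

for lanes in (8, 4, 2):
    avail = lanes * GBPS_PER_GEN2_LANE
    print(f"x{lanes}: {avail:4.0f} Gbit/s, {avail / NEEDED:.0%} of what's needed")
# x8: 32 Gbit/s (headroom); x4: 16 Gbit/s = 80%; x2: 8 Gbit/s, which
# can't even carry a single port's 10 Gbit/s in one direction
```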

That's... quite disappointing.

Port configuration is ambiguous..

Posted May 12, 2025 1:39 UTC (Mon) by intelfx (subscriber, #130118) (8 responses)

> > https://www.amazon.com/Healuck-Firewall-Appliance-OPNsens...
> I wanted to post a followup <...> That's... quite disappointing.

So... another instance of "you get what you pay for"? ;-)

Port configuration is ambiguous..

Posted May 12, 2025 13:35 UTC (Mon) by pizza (subscriber, #46) (7 responses)

> So... another instance of "you get what you pay for"? ;-)

Generally I'd agree with you, but when your headliner feature is so badly kneecapped... it rather defeats the purpose.

FWIW, the rest of the system appears to be more than adequate. Variations of this thing exist with more 2.5GbE ports instead of the 10GbE SFPs, which is fine. Other variations exist with even more ports, but they're built on much more capable SoCs with at least *20* PCIe lanes to play with. But none of those seem to be available with 10GbE.

...FWIW, the i82599ES is used instead of something more capable because it's really, really cheap these days -- it was first released 16 years ago and, as it turns out, was formally EOL'd (order books closed and support formally ended) just seven days ago.

Port configuration is ambiguous..

Posted May 12, 2025 21:03 UTC (Mon) by intelfx (subscriber, #130118) (1 response)

> Generally I'd agree with you, but when your headliner feature is so badly kneecapped... it rather defeats the purpose.

What I wanted to say is that cheap x86 boxes are not *that much* better than Arm boxes. You just exchange driver issues for other kinds of issues. So if you have an aversion to Arm SBCs and your plan for dealing with this involves buying a cheap no-name x86 thing instead, there is a chance that you might be disappointed.

Port configuration is ambiguous..

Posted May 13, 2025 1:54 UTC (Tue) by pizza (subscriber, #46)

> What I wanted to say is that cheap x86 boxes are not *that much* better than Arm boxes. You just exchange driver issues for other kinds of issues. So if you have an aversion to Arm SBCs and your plan for dealing with this involves buying a cheap no-name x86 thing instead, there is a chance that you might be disappointed.

I disagree; all of the problems those "cheap x86" systems have (e.g. underspec'd buses for the peripherals and nonexistent vendor support), "cheap Arm SBCs" also have in spades. The primary advantage of those Arm SBCs is their lower power consumption, but that's balanced by the huge disadvantage of being one-off special snowflakes that rarely move beyond "only works with the vendor's never-updated original preinstalled image".

Still, if those SBCs give "good enough" performance/features/etc., that can be an overall win, though one has to consider how long it would take to come out ahead from the power savings.
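
(The payback arithmetic is simple enough; all of the numbers in this sketch are made up purely for illustration:)

```python
# Break-even sketch for the power-savings argument. Every number here
# is made up for illustration: a 10 W draw difference, a $40 price
# premium for the lower-power box, and electricity at $0.15/kWh.
watts_saved = 10.0    # assumed average draw difference, W
price_delta = 40.0    # assumed extra purchase cost, USD
kwh_price   = 0.15    # assumed electricity tariff, USD/kWh

usd_per_year = watts_saved * 24 * 365 / 1000 * kwh_price
print(f"saves ${usd_per_year:.2f}/year in electricity")
print(f"break-even after {price_delta / usd_per_year:.1f} years")
# -> ~$13/year, i.e. roughly three years to come out ahead
```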

Port configuration is ambiguous..

Posted May 13, 2025 10:38 UTC (Tue) by farnz (subscriber, #17727) (4 responses)

Note that, depending on use case, that box can be perfectly usable. For example, I have two interfaces between my home server and my switch, not because I need throughput increases (1G is plenty at the moment), but so that I have a redundant link; when a wire breaks, I get a message from network monitoring telling me that I've lost redundancy, rather than losing service.
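
(On Linux, the "tell me when I've lost redundancy" part can be as simple as watching the bonding driver's /proc file. A minimal sketch, where "bond0" and the print-based alert are placeholders for whatever your monitoring setup actually uses:)

```python
# Minimal redundancy check for a Linux bonding interface: parse
# /proc/net/bonding/bond0 and complain when any slave link is down.
# "bond0" and the print-based alert are placeholders for a real setup.
from pathlib import Path

def slave_states(bond="bond0"):
    states, slave = {}, None
    for line in Path(f"/proc/net/bonding/{bond}").read_text().splitlines():
        if line.startswith("Slave Interface:"):
            slave = line.split(":", 1)[1].strip()
        elif line.startswith("MII Status:") and slave is not None:
            states[slave] = line.split(":", 1)[1].strip()
            slave = None  # only record each slave's own status line
    return states

down = [s for s, state in slave_states().items() if state != "up"]
if down:
    print(f"WARNING: redundancy lost, link(s) down: {', '.join(down)}")
```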

Similarly, at a previous job, we had 2x10G links to our ISP, consisting of a primary and a failover link; if we'd wanted 20G service, we'd have had to have 3 links, two primary and one failover to cover "backhoe fade" between us and our ISP.

If that's why you want 2x10G, then this sort of box is useful; if you need high throughput, it's not so useful.

Port configuration is ambiguous..

Posted May 13, 2025 10:45 UTC (Tue) by paulj (subscriber, #341) (1 response)

> if we'd wanted 20G service, we'd have had to have 3 links, two primary and one failover to cover "backhoe fade" between us and our ISP.

You also need to obtain survey maps of where they have physically placed their fibre, and /verify/ any claims they make about path independence of the fibres -- potentially down to hiring independent surveyors to verify such claims.

A certain large tech company once lost connectivity to a DC for a while, discovering in the process that their fibre suppliers had lied^Wwere mistaken in their claims about physical independence, when a JCB somewhere took out, in one go, a number of fibre bundles that were not meant to be anywhere near each other. They significantly increased the level of verification of future suppliers' claims after that.

Port configuration is ambiguous..

Posted May 13, 2025 10:57 UTC (Tue) by farnz (subscriber, #17727)

We wouldn't have needed to do any of that - the reason for redundancy was not because we wanted it, but because our ISP insisted on it as part of the service (since the service came with a 6 hour SLA, after which they'd be paying out).

If the claims about diverse pathing turned out to be false, that would have been our ISP's problem - they'd have been paying out on a 6 hour SLA while chasing their suppliers to fix it ASAP.

And we'd agreed an SLA payout that was large enough that the business was better off with the Internet link down than with it up; we weren't foolish enough to believe that a "business" service meant it'd be prioritised for repair, but did believe that if we were getting more in SLA payouts than it was costing us to get alternatives (like LTE sticks for everyone), we'd be OK.

Port configuration is ambiguous..

Posted May 13, 2025 11:07 UTC (Tue) by pizza (subscriber, #46) (1 response)

> If that's why you want 2x10G, then this sort of box is useful; if you need high throughput, it's not so useful.

You make a valid point, but I do feel compelled to point out that having redundant 10Gb ISP uplinks but only 2.5Gb of internal network bandwidth seems backwards.

Port configuration is ambiguous..

Posted May 13, 2025 11:21 UTC (Tue) by farnz (subscriber, #17727)

Our ISP was fairly typical - it was paying for 2x10G bearers to us, to provide a symmetric 500 Mbit/s service on top of those bearers.

The reason it paid for 10G bearers is that a change of bearer is a slow process, since it involves taking down a bearer (or running fresh fibre) and then replacing kit on both ends, whereas getting a faster service is just a software change - and they wanted to be able to upgrade us on demand to a more expensive 1 Gbit/s or 2 Gbit/s service without delay.

This is not an atypical configuration for SME dedicated internet access (as opposed to "business" service on consumer products); fast bearer, slow service on top. You don't need more than 2.5 Gbit/s of internal network when you've got under 2.5 Gbit/s of external network, supplied on 2x 10G ports.

Port configuration is ambiguous..

Posted May 12, 2025 9:38 UTC (Mon) by farnz (subscriber, #17727) (1 response)

Do any of the designs have a PCIe switch chip involved? The situation you're describing (PCIe 3.0 lanes on the host, PCIe 2.0 lanes on the device) is what switch chips excel at, since you can have 8 lanes of PCIe 3.0 to the host become 32 lanes of PCIe 3.0 facing the devices, with the switch chip operating on a per-TLP basis (so 8 lanes of PCIe 2.0 to the device consume only 4 lanes of PCIe 3.0 on the host side).

You'd be looking for something like the Microchip Switchtec family of devices, or the PLX (now Broadcom) PEX family; a cheap design would put the 10G controller, WiFi card slot, and WAN card slot behind the switch, so that you can feed 4 PCIe 3.0 lanes to the switch and get 16 PCIe lanes out (4x PCIe 3.0 each for the WAN card slot and WiFi slot, 8x PCIe 2.0 for the 10G controller), with the WiFi card, WAN card, and 10G ports competing for the 4 PCIe 3.0 lanes' worth of throughput. If you're going overkill, you'd use a switch with more lanes, and have 8 lanes from the host to the switch, with more ports on the other side of the switch.
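
The throughput conversion is easy to sanity-check with per-lane payload rates (ignoring protocol overhead):

```python
# A per-TLP switch terminates each link, so only throughput has to
# match across it, not lane count or PCIe generation. Per-lane payload
# rates below ignore protocol overhead.
GEN2 = 5.0 * 8 / 10      # Gbit/s per PCIe 2.0 lane (8b/10b encoding)
GEN3 = 8.0 * 128 / 130   # Gbit/s per PCIe 3.0 lane (128b/130b encoding)

device_side = 8 * GEN2               # x8 Gen2 link to the 10G controller
host_lanes = device_side / GEN3      # Gen3 lanes carrying the same data
print(f"x8 Gen2 = {device_side:.0f} Gbit/s ~= x{host_lanes:.1f} Gen3")
# -> an x8 PCIe 2.0 device needs only ~4 PCIe 3.0 lanes on the host side
```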

Port configuration is ambiguous..

Posted May 12, 2025 13:51 UTC (Mon) by pizza (subscriber, #46)

> Do any of the designs have a PCIe switch chip involved?

Not that I could tell -- and using one would likely be more expensive (if it could even fit in that tiny form factor) than just using a PCIe 3.x-capable 10GbE controller to begin with.

(That is probably why their devices with more ports use more capable SoCs -- at the lower end, the Pentium 8505, which sports 20 PCIe 4.0 lanes...)

Port configuration is ambiguous..

Posted Mar 29, 2025 18:27 UTC (Sat) by Mook (subscriber, #71173)

The Two is (according to the initial proposal under vote linked from the article) planned to be manufactured by GL.iNet, so it's probably not quite the same as the BPI-R4. It doesn't look like any of GL.iNet's existing devices either, since none of their announced BE devices have SFP ports.

