@Andy : I’m now getting slightly over 500 Mb/s from lan1 to wan, routed by the EB. But read on.
There’s another, more subtle, performance limit, rooted in the architecture of the EB.
As you probably know, the board has an SoC with one 1 Gb/s port, and an Ethernet switch (Topaz) with four 1 Gb/s ports, one of which is internally connected to the SoC.
Now, as long as LAN switching can be done inside the switch, you can get full 1 Gb/s in / 1 Gb/s out performance between, say, wan and lan1. The board, and mainly the kernel code, are very smart about offloading functionality into the switch, so things like basic routing, and even some iptables filtering, can mostly be offloaded into the switch with little performance impact.
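To illustrate the switching case: with the kernel's DSA driver, simply putting the switch ports into an ordinary Linux bridge is enough for port-to-port traffic to be forwarded inside the switch chip, never touching the CPU. A sketch (port names lan0/lan1/wan are examples; check `ip link` on your board):

```shell
# Bridge the switch ports; DSA/switchdev offloads the forwarding
# into the switch silicon, so LAN-to-LAN traffic bypasses the CPU.
ip link add br0 type bridge
ip link set lan0 master br0
ip link set lan1 master br0
# Add wan only if it should be switched rather than routed:
# ip link set wan master br0
ip link set br0 up
```

Routed or filtered traffic is a different story, as described below.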
But: when you consider traffic between two of the switch’s ports with processing that must be done at the CPU level, all that traffic needs to flow into the CPU and then back into the switch. This hits the limits of both the internal port (each packet crosses it twice, in and out) and the kernel, which now has to handle double the packet-handling interrupts on a single core. So even though the internal port is full duplex, this translates to heavy per-packet processing load, and on Linux you may see soft-IRQ overload.
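If you suspect you’re hitting this wall, a quick way to spot soft-IRQ overload is to watch the kernel’s counters (a sketch; interface/IRQ naming varies by board and driver):

```shell
# NET_RX climbing rapidly on a single core while throughput plateaus
# is the classic symptom of the CPU path saturating.
grep -E 'NET_RX|NET_TX' /proc/softirqs

# Which core services the ethernet IRQs ("eth"/"mvneta" is an
# assumption; match whatever your driver calls it):
grep -iE 'eth|mvneta' /proc/interrupts || true

# mpstat -P ALL 1 (from the sysstat package) shows the same thing
# live, in its %soft column.
```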
Case in point (mine): I needed to terminate the ISP PPPoE tunnel on the EB. This means the board has to perform both PPPoE tunneling and point-to-multipoint NAT (a.k.a. masquerading). Neither can be offloaded to the switch chip, so all my wan-port traffic flows into the CPU and then back down to the lan port, with high interrupt load on the CPU. Net effect: it tops out at somewhat over 500 Mb/s.
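For reference, that setup boils down to something like the following (a hedged sketch: the interface names wan/ppp0, the rp-pppoe plugin, and the login are placeholders, and on distro-based router firmware this would normally live in config files rather than be run by hand):

```shell
# Terminate the ISP PPPoE tunnel on the wan port
# (pppd with the rp-pppoe plugin; password comes from pppd's
# secrets files, credentials elided here).
pppd plugin rp-pppoe.so wan user "myisp-login" persist defaultroute

# Point-to-multipoint NAT (masquerading) out of the resulting ppp0 -
# this is the part the switch chip cannot do for you.
iptables -t nat -A POSTROUTING -o ppp0 -j MASQUERADE
```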
If you terminate your tunnel (and do your inevitable NATing) outside of, i.e. in front of, your EB’s wan port, you can probably get much closer to the 1 Gb/s theoretical limit.
Hope this helps!