Re: no-carrier, state DOWN for using LACP
by Jiri Pirko
Fine, all is looking good. Aggregator "3" is selected for ports "ethi1"
and "ethi0". LACP is running just fine. I tried exactly the same config
and it works fine for me.
It looks like, in your case, somebody (NetworkManager perhaps) is playing
with operstate. You can see why from the iproute2 code:
if (flags & IFF_UP && !(flags & IFF_RUNNING))
        fprintf(fp, "NO-CARRIER%s", flags ? "," : "");
And when you look into the kernel core code:
unsigned int dev_get_flags(const struct net_device *dev)
{
        unsigned int flags;

        flags = (dev->flags & ~(IFF_PROMISC |
                                IFF_ALLMULTI |
                                IFF_RUNNING |
                                IFF_LOWER_UP |
                                IFF_DORMANT)) |
                (dev->gflags & (IFF_PROMISC |
                                IFF_ALLMULTI));

        if (netif_running(dev)) {
                if (netif_oper_up(dev))
                        flags |= IFF_RUNNING;
                if (netif_carrier_ok(dev))
                        flags |= IFF_LOWER_UP;
                if (netif_dormant(dev))
                        flags |= IFF_DORMANT;
        }

        return flags;
}
As you can see, IFF_RUNNING is only set when netif_oper_up(dev) is true, which
is why a device that is administratively UP but whose operstate is not "up" is
reported as NO-CARRIER. If you follow the code further, you can see that
dev->operstate can be set from userspace via RT netlink.
So I suggest you investigate who is messing with your newly created device.
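For example (a rough sketch, assuming the device name team0), check what
operstate the kernel currently reports and then watch link notifications while
NetworkManager or your network scripts start, to see who flips it:

  # operstate as the kernel sees it ("up" when everything is fine)
  cat /sys/class/net/team0/operstate

  # watch rtnetlink link notifications; flag/operstate changes for team0 show up here
  ip monitor link | grep team0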
Hope this helps.
Jiri
Fri, May 31, 2013 at 09:02:18AM CEST, wu.tommy(a)gmail.com wrote:
>result of 'teamdctl team0 sd':
>{
> "ports": {
> "ethi0": {
> "ifinfo": {
> "dev_addr": "8a:13:9c:72:7c:bc",
> "dev_addr_len": 6,
> "ifindex": 3,
> "ifname": "ethi0"
> },
> "link": {
> "duplex": "full",
> "speed": 1000,
> "up": true
> },
> "link_watches": {
> "list": {
> "link_watch_0": {
> "delay_down": 0,
> "delay_up": 0,
> "name": "ethtool",
> "up": true
> }
> },
> "up": true
> },
> "runner": {
> "actor_lacpdu_info": {
> "key": 0,
> "port": 3,
> "port_priority": 255,
> "state": 61,
> "system": "8a:13:9c:72:7c:bc",
> "system_priority": 65535
> },
> "aggregator": {
> "id": 3,
> "selected": true
> },
> "key": 0,
> "partner_lacpdu_info": {
> "key": 3,
> "port": 13,
> "port_priority": 128,
> "state": 5,
> "system": "c8:be:19:e7:b1:43",
> "system_priority": 32768
> },
> "prio": 255,
> "selected": true,
> "state": "current"
> }
> },
> "ethi1": {
> "ifinfo": {
> "dev_addr": "8a:13:9c:72:7c:bc",
> "dev_addr_len": 6,
> "ifindex": 5,
> "ifname": "ethi1"
> },
> "link": {
> "duplex": "full",
> "speed": 1000,
> "up": true
> },
> "link_watches": {
> "list": {
> "link_watch_0": {
> "delay_down": 0,
> "delay_up": 0,
> "name": "ethtool",
> "up": true
> }
> },
> "up": true
> },
> "runner": {
> "actor_lacpdu_info": {
> "key": 0,
> "port": 5,
> "port_priority": 255,
> "state": 61,
> "system": "8a:13:9c:72:7c:bc",
> "system_priority": 65535
> },
> "aggregator": {
> "id": 3,
> "selected": true
> },
> "key": 0,
> "partner_lacpdu_info": {
> "key": 3,
> "port": 14,
> "port_priority": 128,
> "state": 5,
> "system": "c8:be:19:e7:b1:43",
> "system_priority": 32768
> },
> "prio": 255,
> "selected": true,
> "state": "current"
> }
> }
> },
> "runner": {
> "active": true,
> "fast_rate": false,
> "select_policy": "lacp_prio",
> "sys_prio": 65535
> },
> "setup": {
> "daemonized": true,
> "dbus_enabled": false,
> "debug_level": 3,
> "kernel_team_mode_name": "loadbalance",
> "pid": 123390,
> "pid_file": "/var/run/teamd/team0.pid",
> "runner_name": "lacp"
> },
> "team_device": {
> "ifinfo": {
> "dev_addr": "8a:13:9c:72:7c:bc",
> "dev_addr_len": 6,
> "ifindex": 11,
> "ifname": "team0"
> }
> }
>}
>
>syslog entries matching 'team':
>May 31 14:56:26 fw1 teamd_team0[123390]: Using team runner "lacp".
>May 31 14:56:26 fw1 kernel: [75582.820000] team0: Mode changed to
>"loadbalance"
>May 31 14:56:26 fw1 teamd_team0[123390]: Using active "1".
>May 31 14:56:26 fw1 teamd_team0[123390]: Using sys_prio "65535".
>May 31 14:56:26 fw1 teamd_team0[123390]: Using fast_rate "0".
>May 31 14:56:26 fw1 teamd_team0[123390]: Using min_ports "1".
>May 31 14:56:26 fw1 teamd_team0[123390]: Using agg_select_policy
>"lacp_prio".
>May 31 14:56:26 fw1 teamd_team0[123390]: TX balancing disabled.
>May 31 14:56:26 fw1 teamd_team0[123390]: usock: Using sockpath
>"/var/run/teamd/team0.sock"
>May 31 14:56:26 fw1 teamd_team0[123390]: ethi0: Adding port (found ifindex
>"3").
>May 31 14:56:26 fw1 teamd_team0[123390]: ethi1: Adding port (found ifindex
>"5").
>May 31 14:56:26 fw1 kernel: [75582.882675] team0: Port device ethi0 added
>May 31 14:56:26 fw1 teamd_team0[123390]: 1.2 successfully started.
>May 31 14:56:26 fw1 kernel: [75582.945998] team0: Port device ethi1 added
>May 31 14:56:26 fw1 teamd_team0[123390]: <changed_option_list>
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_stats_refresh_interval 0
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:255)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:254)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:253)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:252)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:251)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:250)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:249)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:248)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:247)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:246)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:245)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:244)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:243)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:242)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:241)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:240)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:239)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:238)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:237)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:236)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:235)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:234)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:233)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:232)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:231)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:230)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:229)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:228)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:227)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:226)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:225)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:224)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:223)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:222)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:221)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:220)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:219)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:218)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:217)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:216)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:215)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:214)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:213)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:212)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:211)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:210)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:209)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:208)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:207)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:206)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:205)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:204)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:203)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:202)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:201)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:200)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:199)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:198)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:197)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:196)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:195)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:194)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:193)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:192)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:191)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:190)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:189)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:188)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:187)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:186)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:185)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:184)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:183)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:182)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:181)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:180)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:179)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:178)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:177)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:176)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:175)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:174)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:173)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:172)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:171)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:170)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:169)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:168)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:167)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:166)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:165)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:164)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:163)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:162)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:161)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:160)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:159)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:158)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:157)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:156)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:155)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:154)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:153)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:152)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:151)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:150)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:149)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:148)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:147)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:146)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:145)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:144)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:143)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:142)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:141)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:140)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:139)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:138)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:137)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:136)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:135)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:134)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:133)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:132)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:131)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:130)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:129)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:128)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:127)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:126)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:125)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:124)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:123)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:122)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:121)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:120)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:119)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:118)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:117)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:116)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:115)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:114)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:113)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:112)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:111)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:110)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:109)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:108)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:107)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:106)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:105)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:104)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:103)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:102)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:101)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:100)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:99)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:98)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:97)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:96)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:95)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:94)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:93)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:92)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:91)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:90)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:89)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:88)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:87)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:86)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:85)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:84)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:83)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:82)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:81)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:80)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:79)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:78)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:77)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:76)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:75)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:74)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:73)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:72)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:71)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:70)
>\00\00\00\00\00\00\00\00
>May 31 14:56:26 fw1 teamd_team0[123390]: *lb_hash_stats (arridx:69)
>\00\00\00\00\00\00\00\00
>May 31 14:57:00 fw1 kernel: [75616.623980] IPv6: ADDRCONF(NETDEV_UP):
>team0: link is not ready
>May 31 14:57:01 fw1 ntpd[8152]: Listen normally on 26 team0 192.168.0.11
>UDP 123
>May 31 14:57:23 fw1 teamd_team0[123390]: usock: calling method "ConfigDump"
>May 31 14:57:23 fw1 teamd_team0[123390]: usock: calling method
>"ConfigDumpActual"
>May 31 14:57:23 fw1 teamd_team0[123390]: usock: calling method "StateDump"
>May 31 14:57:31 fw1 teamd_team0[123390]: usock: calling method "ConfigDump"
>May 31 14:57:31 fw1 teamd_team0[123390]: usock: calling method
>"ConfigDumpActual"
>May 31 14:57:31 fw1 teamd_team0[123390]: usock: calling method "StateDump"
>
>
>
>
>2013/5/31 Jiri Pirko <jiri(a)resnulli.us>
>
>> Thu, May 30, 2013 at 05:04:35PM CEST, wu.tommy(a)gmail.com wrote:
>> >I just tried libteam 1.2 (and also 1.0) with kernel 3.9.4.
>> >
>> >my setting for teamd (lacp.conf):
>> >{
>> > "device": "team0",
>> > "runner": {
>> > "name": "lacp",
>> > "active": true,
>> > "fast_rate": false,
>> > "tx_hash": ["eth", "ipv4", "ipv6"]
>> > },
>> > "link_watch": {"name": "ethtool"},
>> > "ports": {"ethi0": {}, "ethi1": {}}
>> >}
>> >
>> >After executing 'teamd -f lacp.conf -d', the team0 device is created, but I
>> >can't bring it up.
>>
>> Try to execute that with -ggg so we can see debug messages.
>>
>> please attach teamd syslog messages.
>>
>> Also, please do "teamdctl team0 s d" and attach the output here.
>>
>> Thanks!
>>
>> Jiri
>>
>> >
>> >ip link show:
>> >1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode
>> >DEFAULT
>> > link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>> >2: ethha0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast
>> >state UP mode DEFAULT qlen 1000
>> > link/ether 00:24:1d:5e:64:1e brd ff:ff:ff:ff:ff:ff
>> >3: ethi0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master team0
>> >state UP mode DEFAULT qlen 1000
>> > link/ether c6:b1:c1:d0:2d:d6 brd ff:ff:ff:ff:ff:ff
>> >4: ethext: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast
>> >state UP mode DEFAULT qlen 1000
>> > link/ether 00:1b:21:10:ab:9f brd ff:ff:ff:ff:ff:ff
>> >5: ethi1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master team0
>> >state UP mode DEFAULT qlen 1000
>> > link/ether c6:b1:c1:d0:2d:d6 brd ff:ff:ff:ff:ff:ff
>> >6: ethha1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast
>> >state UP mode DEFAULT qlen 1000
>> > link/ether 00:0e:0c:35:e4:c2 brd ff:ff:ff:ff:ff:ff
>> >8: team0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state
>> >DOWN mode DEFAULT
>> > link/ether c6:b1:c1:d0:2d:d6 brd ff:ff:ff:ff:ff:ff
>> >
>> >ip addr show:
>> >1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
>> > link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>> > inet 127.0.0.1/8 scope host lo
>> > valid_lft forever preferred_lft forever
>> > inet6 ::1/128 scope host
>> > valid_lft forever preferred_lft forever
>> >2: ethha0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast
>> >state UP qlen 1000
>> > link/ether 00:24:1d:5e:64:1e brd ff:ff:ff:ff:ff:ff
>> > inet 10.0.0.11/24 brd 10.0.0.255 scope global ethha0
>> > valid_lft forever preferred_lft forever
>> > inet6 fe80::224:1dff:fe5e:641e/64 scope link
>> > valid_lft forever preferred_lft forever
>> >3: ethi0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master team0
>> >state UP qlen 1000
>> > link/ether c6:b1:c1:d0:2d:d6 brd ff:ff:ff:ff:ff:ff
>> > inet6 fe80::c4b1:c1ff:fed0:2dd6/64 scope link
>> > valid_lft forever preferred_lft forever
>> >4: ethext: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast
>> >state UP qlen 1000
>> > link/ether 00:1b:21:10:ab:9f brd ff:ff:ff:ff:ff:ff
>> > inet 192.168.1.11/24 brd 192.168.1.255 scope global ethext
>> > valid_lft forever preferred_lft forever
>> > inet6 fe80::21b:21ff:fe10:ab9f/64 scope link
>> > valid_lft forever preferred_lft forever
>> >5: ethi1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master team0
>> >state UP qlen 1000
>> > link/ether c6:b1:c1:d0:2d:d6 brd ff:ff:ff:ff:ff:ff
>> > inet6 fe80::c4b1:c1ff:fed0:2dd6/64 scope link
>> > valid_lft forever preferred_lft forever
>> >6: ethha1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast
>> >state UP qlen 1000
>> > link/ether 00:0e:0c:35:e4:c2 brd ff:ff:ff:ff:ff:ff
>> > inet 10.0.1.11/24 brd 10.0.1.255 scope global ethha1
>> > valid_lft forever preferred_lft forever
>> > inet6 fe80::20e:cff:fe35:e4c2/64 scope link
>> > valid_lft forever preferred_lft forever
>> >8: team0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state
>> >DOWN
>> > link/ether c6:b1:c1:d0:2d:d6 brd ff:ff:ff:ff:ff:ff
>> > inet 192.168.0.11/24 brd 192.168.0.0 scope global team0
>> > valid_lft forever preferred_lft forever
>> >
>> >Executing 'ip link set dev team0 up' does not help;
>> >it always stays down.
>> >
>> >The switch configuration should be OK, because the same setup works fine
>> >with bonding LACP, as shown here:
>> >Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)
>> >
>> >Bonding Mode: IEEE 802.3ad Dynamic link aggregation
>> >Transmit Hash Policy: layer3+4 (1)
>> >MII Status: up
>> >MII Polling Interval (ms): 100
>> >Up Delay (ms): 200
>> >Down Delay (ms): 200
>> >
>> >802.3ad info
>> >LACP rate: slow
>> >Min links: 0
>> >Aggregator selection policy (ad_select): stable
>> >Active Aggregator Info:
>> > Aggregator ID: 1
>> > Number of ports: 2
>> > Actor Key: 17
>> > Partner Key: 3
>> > Partner Mac Address: c8:be:19:e7:b1:43
>> >
>> >Slave Interface: ethi1
>> >MII Status: up
>> >Speed: 1000 Mbps
>> >Duplex: full
>> >Link Failure Count: 0
>> >Permanent HW addr: 00:1b:21:36:52:7f
>> >Aggregator ID: 1
>> >Slave queue ID: 0
>> >
>> >Slave Interface: ethi0
>> >MII Status: up
>> >Speed: 1000 Mbps
>> >Duplex: full
>> >Link Failure Count: 0
>> >Permanent HW addr: 00:1b:21:36:52:7e
>> >Aggregator ID: 1
>> >Slave queue ID: 0
>> >
>> >Any suggestions for this?
>> >
>> >--
>> >
>> >Tommy Wu
>>
>>
>>
>
>
>--
>
>Tommy Wu
no-carrier, state DOWN for using LACP
by Tommy Wu
I just tried libteam 1.2 (and also 1.0) with kernel 3.9.4.
my setting for teamd (lacp.conf):
{
"device": "team0",
"runner": {
"name": "lacp",
"active": true,
"fast_rate": false,
"tx_hash": ["eth", "ipv4", "ipv6"]
},
"link_watch": {"name": "ethtool"},
"ports": {"ethi0": {}, "ethi1": {}}
}
After executing 'teamd -f lacp.conf -d', the team0 device is created, but I
can't bring it up.
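For reference, a quick sketch of the exact commands involved (config file and
device names as above; 'state dump' can be abbreviated to 'sd'):

  # create team0 from lacp.conf and let teamd daemonize
  teamd -f lacp.conf -d

  # query teamd's runtime state for the device
  teamdctl team0 state dump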
ip link show:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode
DEFAULT
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ethha0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast
state UP mode DEFAULT qlen 1000
link/ether 00:24:1d:5e:64:1e brd ff:ff:ff:ff:ff:ff
3: ethi0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master team0
state UP mode DEFAULT qlen 1000
link/ether c6:b1:c1:d0:2d:d6 brd ff:ff:ff:ff:ff:ff
4: ethext: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast
state UP mode DEFAULT qlen 1000
link/ether 00:1b:21:10:ab:9f brd ff:ff:ff:ff:ff:ff
5: ethi1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master team0
state UP mode DEFAULT qlen 1000
link/ether c6:b1:c1:d0:2d:d6 brd ff:ff:ff:ff:ff:ff
6: ethha1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast
state UP mode DEFAULT qlen 1000
link/ether 00:0e:0c:35:e4:c2 brd ff:ff:ff:ff:ff:ff
8: team0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state
DOWN mode DEFAULT
link/ether c6:b1:c1:d0:2d:d6 brd ff:ff:ff:ff:ff:ff
ip addr show:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ethha0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast
state UP qlen 1000
link/ether 00:24:1d:5e:64:1e brd ff:ff:ff:ff:ff:ff
inet 10.0.0.11/24 brd 10.0.0.255 scope global ethha0
valid_lft forever preferred_lft forever
inet6 fe80::224:1dff:fe5e:641e/64 scope link
valid_lft forever preferred_lft forever
3: ethi0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master team0
state UP qlen 1000
link/ether c6:b1:c1:d0:2d:d6 brd ff:ff:ff:ff:ff:ff
inet6 fe80::c4b1:c1ff:fed0:2dd6/64 scope link
valid_lft forever preferred_lft forever
4: ethext: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast
state UP qlen 1000
link/ether 00:1b:21:10:ab:9f brd ff:ff:ff:ff:ff:ff
inet 192.168.1.11/24 brd 192.168.1.255 scope global ethext
valid_lft forever preferred_lft forever
inet6 fe80::21b:21ff:fe10:ab9f/64 scope link
valid_lft forever preferred_lft forever
5: ethi1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master team0
state UP qlen 1000
link/ether c6:b1:c1:d0:2d:d6 brd ff:ff:ff:ff:ff:ff
inet6 fe80::c4b1:c1ff:fed0:2dd6/64 scope link
valid_lft forever preferred_lft forever
6: ethha1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast
state UP qlen 1000
link/ether 00:0e:0c:35:e4:c2 brd ff:ff:ff:ff:ff:ff
inet 10.0.1.11/24 brd 10.0.1.255 scope global ethha1
valid_lft forever preferred_lft forever
inet6 fe80::20e:cff:fe35:e4c2/64 scope link
valid_lft forever preferred_lft forever
8: team0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state
DOWN
link/ether c6:b1:c1:d0:2d:d6 brd ff:ff:ff:ff:ff:ff
inet 192.168.0.11/24 brd 192.168.0.0 scope global team0
valid_lft forever preferred_lft forever
Executing 'ip link set dev team0 up' does not help;
it always stays down.
The switch configuration should be OK, because the same setup works fine
with bonding LACP, as shown here:
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)
Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer3+4 (1)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 200
Down Delay (ms): 200
802.3ad info
LACP rate: slow
Min links: 0
Aggregator selection policy (ad_select): stable
Active Aggregator Info:
Aggregator ID: 1
Number of ports: 2
Actor Key: 17
Partner Key: 3
Partner Mac Address: c8:be:19:e7:b1:43
Slave Interface: ethi1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:1b:21:36:52:7f
Aggregator ID: 1
Slave queue ID: 0
Slave Interface: ethi0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:1b:21:36:52:7e
Aggregator ID: 1
Slave queue ID: 0
Any suggestions for this?
--
Tommy Wu
Any performance info on 'teamed' adapters V. bonding?
by Linda A. Walsh
I was wondering if anyone had looked at how, say, 2 (or more) aggregated
10Gb Ethernet connections compare when teamed versus when bonded?
I know that a major performance hit on both Linux and Windows platforms is
the time spent in interrupt processing (mostly in deferred or SW interrupts,
not HW interrupts).
With teamed connections on Linux being managed in user space, I tend to
think of how user-space file-system drivers perform compared to drivers in
the kernel. On another list, people were saying that one of the limiting
factors in NTFS performance on Linux was that it was a user-space driver,
and as such it seemed to show noticeable performance limitations compared
to native drivers.
Note, I use the example of 10G connections, as 1G connections don't really
stress today's processors on large reads/writes. (Bad applications that use
small, <64K buffers can; really horrid are those that use 4K or smaller
sizes at the application level, where the round-trip time is a killer,
especially on local nets with jumbo packets in the 9000+ byte range.)
In case it isn't obvious, at those speeds we are talking only about local,
unencrypted links, as encryption imposes extra latency and bandwidth
limitations.
Anyway, just wondering whether teamed connections might be subject to the
same performance problems seen with user-space file systems?
Feature request: VLAN balancing
by Bartosz Lis
Dear "team" developers and users,
Is it possible to configure VLANs, say VLAN2 and VLAN3, on top of a team with
enslaved interfaces, say eth0 and eth1, in such a way that:
1. when only one interface is up, all the traffic goes through that interface;
2. when both interfaces are up, VLAN2 traffic goes through eth0 while VLAN3
traffic goes through eth1?
This (requested) behaviour is a little bit similar to bonding mode 1
(active/backup). The difference is that there is no single "global" active
interface; instead, each VLAN has its active interface defined independently
of the other VLANs. Looking from the side of the enslaved interfaces, any
given interface is not strictly in an active or a backup state; rather, it is
active for one group of VLANs while being in a backup state for the other
VLANs.
Kind regards,
--
Bartosz Lis
[pl] Instytut Informatyki Politechniki Łódzkiej
[en] Institute of Information Technology, Lodz University of Technology
Wolczanska 215
90-924 Lodz, Poland
phone: +48(42)6312796
fax: +48(42)6303414
email: bartosz.lis(a)ics.p.lodz.pl