[GTER] l2circuit/xconnect between MX-104 and ME3600X
Caio
caiot5 at gmail.com
Thu Sep 29 11:07:11 -03 2016
Guilherme,
Neither the ping gets a reply nor does the traceroute complete:
# run traceroute mpls ldp XX.XX.XX.XX
Probe options: ttl 64, retries 3, wait 10, paths 16, exp 7, fanout 16
  ttl    Label  Protocol    Address          Previous Hop     Probe Status
    1                       WW.WW.WW.WW      (null)           No reply
    2                       (null)           (null)           No reply
    3                       (null)           (null)           No reply
    4                       (null)           (null)           No reply
    5                       (null)           (null)           No reply
    6                       (null)           (null)           No reply
    7                       (null)           (null)           No reply
    8                       (null)           (null)           No reply
WW.WW.WW.WW is the IP directly connected on the interface (non-loopback).
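
For reference, this is roughly what I am checking on the MX side (just a
sketch; XX.XX.XX.XX being the ME3600X loopback), since a healthy LSP should
show an LDP route and a label binding for that /32:

# run show route table inet.3 XX.XX.XX.XX
# run show ldp database session XX.XX.XX.XX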
Would you mind telling me which IOS you are running on your ME3600X boxes and,
if you have a similar scenario, sharing the configuration, or at least any
relevant part of it that differs?
2016-09-29 10:44 GMT-03:00 Guilherme de Freitas Figueiredo <
guilhermefreitasfigueiredo at gmail.com>:
> I run quite a lot of MPLS with 3600s here without any problem. Does an mpls
> ping or mpls traceroute towards the Juniper destination get a reply? It is
> very strange that the forwarding-table has no entry for the desired prefix.
>
> Regards!
>
> --
> Guilherme de Freitas Figueiredo
>
> On Thu, Sep 29, 2016 at 9:55 AM, Caio <caiot5 at gmail.com> wrote:
>
> > Guilherme,
> >
> > Service instance without a bridge domain, with the xconnect on the service
> > instance. CEF is OK, but the forwarding-table is empty, which I believe is
> > due to a failure in the MPLS dataplane; see:
> >
> > Local      Outgoing   Prefix            Bytes Label   Outgoing    Next Hop
> > Label      Label      or Tunnel Id      Switched      interface
> > 17         No Label   l2ckt()           0                         drop
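> >
> > For reference, the local label 17 pointing to "drop" above is what I am
> > reading from the ME3600X with roughly these commands (sketch only, using
> > VC ID 2 from the config below):
> >
> > show mpls l2transport vc 2 detail
> > show mpls forwarding-table labels 17 detail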
> >
> >
> > Eduardo, here is the service configuration, it is quite simple (note: I
> > have other scenarios running with exactly the same configuration and
> > working fine, but on other devices, C2951, etc.):
> >
> > Juniper side:
> >
> > set interfaces ge-0/0/0 mtu 1600
> > set interfaces ge-0/0/0 encapsulation ethernet-ccc
> > set interfaces ge-0/0/0 unit 0
> > set protocols l2circuit neighbor XX.XX.XX.XX interface ge-0/0/0.0 virtual-circuit-id 2
> > set protocols l2circuit neighbor XX.XX.XX.XX interface ge-0/0/0.0 control-word
> > set protocols l2circuit neighbor XX.XX.XX.XX interface ge-0/0/0.0 mtu 1600
> > set protocols l2circuit neighbor XX.XX.XX.XX interface ge-0/0/0.0 pseudowire-status-tlv
> > set protocols ldp interface xe-2/0/1.1 transport-address router-id
> > set protocols mpls interface xe-2/0/1.1
> > set protocols ldp egress-policy connected
> > set protocols ldp deaggregate
> > set protocols ldp interface lo0.0 transport-address interface
> > set interfaces lo0 unit 0 family inet address YY.YY.YY.YY/32
> >
> >
> > Cisco side:
> >
> > pseudowire-class eompls
> > encapsulation mpls
> > control-word
> >
> > interface GigabitEthernet0/1
> > no switchport
> > mtu 1600
> > no ip address
> > xconnect YY.YY.YY.YY 2 encapsulation mpls pw-class eompls
> >
> >
> > mpls ldp router-id Loopback0 force
> >
> > interface Loopback0
> > ip address XX.XX.XX.XX 255.255.255.255
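> >
> > One variation I still intend to rule out (just a sketch on top of the
> > config above, not something I have tested yet) is taking the control word
> > out of the picture on both ends:
> >
> > Juniper: delete protocols l2circuit neighbor XX.XX.XX.XX interface ge-0/0/0.0 control-word
> > Cisco (under pseudowire-class eompls): no control-word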
> >
> >
> > 2016-09-28 23:42 GMT-03:00 Eduardo Schoedler <listas at esds.com.br>:
> >
> > > If you had sent the configuration, it would be much simpler to understand...
> > >
> > > On Wednesday, September 28, 2016, Guilherme de Freitas Figueiredo <
> > > guilhermefreitasfigueiredo at gmail.com> wrote:
> > >
> > > > How is the transport configured on your g0/1? A service instance with a
> > > > bridge-domain? A switchport with a VLAN? A service instance without a
> > > > bridge domain and the xconnect on the service instance? Is the
> > > > forwarding-table also correct, as well as CEF? And what does a
> > > > traceroute with MPLS packets towards the destination look like?
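> > > >
> > > > Something along these lines from the ME3600X is what I have in mind
> > > > (just a sketch, assuming YY.YY.YY.YY is the MX loopback):
> > > >
> > > > ping mpls ipv4 YY.YY.YY.YY/32
> > > > traceroute mpls ipv4 YY.YY.YY.YY/32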
> > > >
> > > >
> > > >
> > > >
> > > > Regards!
> > > >
> > > > --
> > > > Guilherme de Freitas Figueiredo
> > > >
> > > > On Wed, Sep 28, 2016 at 2:24 PM, Caio <caiot5 at gmail.com> wrote:
> > > >
> > > > > Rubens,
> > > > >
> > > > > I tried everything suggested in the post, still the same.
> > > > > I captured the debug of the VC trying to come up, for anyone who wants
> > > > > to take a look:
> > > > >
> > > > > Sep 28 17:19:08.264: AToM[XX.XX.XX.XX, 2]: Circuit attributes, Receive update:
> > > > > Sep 28 17:19:08.264: AToM[XX.XX.XX.XX, 2]: . Status: UP (0x1)
> > > > > Sep 28 17:19:08.264: AToM[XX.XX.XX.XX, 2]: . Alarm: 0x0
> > > > > Sep 28 17:19:08.264: AToM[XX.XX.XX.XX, 2]: Process attrs
> > > > > Sep 28 17:19:08.264: AToM[XX.XX.XX.XX, 2]: Receive status update
> > > > > Sep 28 17:19:08.264: AToM[XX.XX.XX.XX, 2]: . Receive AC STATUS(UP)
> > > > > Sep 28 17:19:08.264: AToM[XX.XX.XX.XX, 2]: .. AC status UP
> > > > > Sep 28 17:19:08.264: AToM[XX.XX.XX.XX, 2]: .... S:Evt local up, LrdRruD->LruRruD
> > > > > Sep 28 17:19:08.264: AToM[XX.XX.XX.XX, 2]: .... S:Act send notify(DOWN)
> > > > > Sep 28 17:19:08.264: AToM[XX.XX.XX.XX, 2]: ..... Send notify(DOWN)
> > > > > Sep 28 17:19:08.264: AToM[XX.XX.XX.XX, 2]: ..... Dataplane : DOWN(pw-tx-fault)
> > > > > Sep 28 17:19:08.264: AToM[XX.XX.XX.XX, 2]: ..... Overall : DOWN(pw-tx-fault)
> > > > > Sep 28 17:19:08.264: AToM[XX.XX.XX.XX, 2]: ..... Send LDP for status change from DOWN AC(rx/tx faults), (pw-tx-fault)
> > > > > Sep 28 17:19:08.264: AToM[XX.XX.XX.XX, 2]: ..... NMS: VC oper state: DOWN
> > > > > Sep 28 17:19:08.264: AToM[XX.XX.XX.XX, 2]: ..... NMS: err codes: pw-rx-err
> > > > > Sep 28 17:19:08.264: AToM[XX.XX.XX.XX, 2]: ..... NMS:           : + dp-err
> > > > > Sep 28 17:19:08.264: AToM[XX.XX.XX.XX, 2]: ..... SYSLOG: VC is DOWN, PW Err
> > > > > Sep 28 17:19:08.264: AToM[XX.XX.XX.XX, 2]: ... Local ready
> > > > > Sep 28 17:19:08.264: AToM[XX.XX.XX.XX, 2]: .... Local service is ready; send a label
> > > > > Sep 28 17:19:08.264: AToM[XX.XX.XX.XX, 2]: .... Alloc local binding
> > > > > Sep 28 17:19:08.264: AToM[XX.XX.XX.XX, 2]: ..... No need to update the local binding
> > > > > Sep 28 17:19:08.264: AToM[XX.XX.XX.XX, 2]: .... Generate local event
> > > > > Sep 28 17:19:08.264: AToM[XX.XX.XX.XX, 2]: .... Ready, label 17
> > > > > Sep 28 17:19:08.264: AToM[XX.XX.XX.XX, 2]: .... Evt local ready, in activating
> > > > > Sep 28 17:19:08.264: AToM[XX.XX.XX.XX, 2]: ..... Take no action
> > > > > Sep 28 17:19:08.264: AToM[XX.XX.XX.XX, 2]: .. Check if can activate dataplane
> > > > > Sep 28 17:19:08.264: AToM[XX.XX.XX.XX, 2]: ... Not activating dataplane: not establishing
> > > > > Sep 28 17:19:08.264: AToM: 1631 cumulative msgs handled. rc=0
> > > > > Sep 28 17:20:06.464: AToM[XX.XX.XX.XX, 2]: Evt dataplane reactivate, in activating
> > > > > Sep 28 17:20:06.464: AToM[XX.XX.XX.XX, 2]: . Activate dataplane
> > > > > Sep 28 17:20:06.464: AToM[XX.XX.XX.XX, 2]: . Need to setup the dataplane
> > > > > Sep 28 17:20:06.464: AToM[XX.XX.XX.XX, 2]: . Setup dataplane, PWID 1
> > > > > Sep 28 17:20:06.464: AToM[XX.XX.XX.XX, 2]: .. Provision SSM with PWID 1, VC ID 2, Block ID 0
> > > > > Sep 28 17:20:06.464: AToM[XX.XX.XX.XX, 2]: .. Set imp flags: cw ra vcw
> > > > > Sep 28 17:20:06.464: AToM[XX.XX.XX.XX, 2]: ..               : nsf
> > > > > Sep 28 17:20:06.464: AToM[XX.XX.XX.XX, 2]: .. Set segment count to 1
> > > > > Sep 28 17:20:06.464: AToM[XX.XX.XX.XX, 2]: .. Provision SSM with 5489/5527 (sw/seg)
> > > > > Sep 28 17:20:06.464: AToM[XX.XX.XX.XX, 2]: Receive SSM dataplane unavailable notification
> > > > > Sep 28 17:20:06.464: AToM[XX.XX.XX.XX, 2]: Evt dataplane down, in activating
> > > > > Sep 28 17:20:06.464: AToM[XX.XX.XX.XX, 2]: . Dataplane unavailable
> > > > > Sep 28 17:20:06.464: AToM[XX.XX.XX.XX, 2]: . Set last error: MPLS dataplane reported a fault to the nexthop
> > > > > Sep 28 17:20:06.464: AToM[XX.XX.XX.XX, 2]: .. S:Evt dataplane fault in LruRruD
> > > > > Sep 28 17:20:06.464: AToM[XX.XX.XX.XX, 2]: .. S:Act send SSS(DOWN), notify(DOWN)
> > > > > Sep 28 17:20:06.464: AToM[XX.XX.XX.XX, 2]: ... Dataplane : DOWN(pw-tx-fault)
> > > > > Sep 28 17:20:06.464: AToM[XX.XX.XX.XX, 2]: ... Overall : DOWN(pw-rx-fault)
> > > > > Sep 28 17:20:06.464: AToM[XX.XX.XX.XX, 2]: ... [filtered AC]
> > > > > Sep 28 17:20:06.464: AToM[XX.XX.XX.XX, 2]: ... Send notify(DOWN)
> > > > > Sep 28 17:20:06.464: AToM[XX.XX.XX.XX, 2]: ... Dataplane : DOWN(pw-tx-fault)
> > > > > Sep 28 17:20:06.464: AToM[XX.XX.XX.XX, 2]: ... Overall : DOWN(pw-tx-fault)
> > > > > Sep 28 17:20:06.464: AToM[XX.XX.XX.XX, 2]: ... [filtered LDP]
> > > > > Sep 28 17:20:06.464: AToM[XX.XX.XX.XX, 2]: . Notify dataplane down
> > > > > Sep 28 17:20:06.464: AToM[XX.XX.XX.XX, 2]: Deactivating data plane
> > > > > Sep 28 17:20:06.464: AToM[XX.XX.XX.XX, 2]: Notify dataplane down
> > > > > Sep 28 17:20:06.464: AToM[XX.XX.XX.XX, 2]: Unprovision and deallocate SSM segment
> > > > > Sep 28 17:20:06.464: AToM[XX.XX.XX.XX, 2]: Added vc to 60 sec retry queue
> > > > > Sep 28 17:20:06.464: AToM[XX.XX.XX.XX, 2]: Event provision retry already in retry queue
> > > > > Sep 28 17:20:06.464: AToM: 1632 cumulative msgs handled. rc=0
> > > > >
> > > > >
> > > > > Searching on Google I found some reports of problems with the ME3600X
> > > > > using BGP signaling, but I am using LDP for signaling, so I cannot see
> > > > > a relation between the problems.
> > > > >
> > > > > Well, I will leave this open for anyone who can help; ANY help is welcome.
> > > > >
> > > > >
> > > > > Regards,
> > > > >
> > > > > Caio
> > > > >
> > > > > 2016-09-28 12:50 GMT-03:00 Caio <caiot5 at gmail.com>:
> > > > >
> > > > > > Rubens,
> > > > > >
> > > > > > Thanks for the tip, I saw a few things I can try in that post.
> > > > > > I will test all the possibilities today and report back to the list.
> > > > > >
> > > > > > Regards,
> > > > > > Caio
> > > > > >
> > > > > > On 28/09/2016 11:43, "Lista" <lista.gter at gmail.com> wrote:
> > > > > >
> > > > > > If it works, let us know; the feedback would be interesting.
> > > > > >
> > > > > > On 28 September 2016 at 07:40, Rubens Kuhl <rubensk at gmail.com> wrote:
> > > > > >
> > > > > > > http://blog.ipspace.net/2011/11/junos-versus-cisco-ios-mpls-and-ldp.html
> > > > > > > might shed some light...
> > > > > > >
> > > > > > > Rubens
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > 2016-09-27 15:05 GMT-03:00 Caio <caiot5 at gmail.com>:
> > > > > > >
> > > > > > > > Gentlemen,
> > > > > > > >
> > > > > > > > I am having a strange problem trying to bring up an interop
> > > > > > > > l2circuit/xconnect between an MX-104 and an ME3600X.
> > > > > > > > An interesting detail is that both the ME3600X and the MX-104
> > > > > > > > already have l2circuits/xconnects established with other devices
> > > > > > > > (other Junipers and other Ciscos such as the 2951, etc.).
> > > > > > > > On the MX side everything comes up:
> > > > > > > >
> > > > > > > > Neighbor: XX.XX.XX.XX
> > > > > > > >     Interface                 Type  St     Time last up          # Up trans
> > > > > > > >     ge-0/0/0.0(vc 2)          rmt   Up     Sep 27 14:54:56 2016           1
> > > > > > > >       Remote PE: YY.YY.YY.YY, Negotiated control-word: Yes (Null)
> > > > > > > >       Incoming label: 300192, Outgoing label: 18
> > > > > > > >       Negotiated PW status TLV: No
> > > > > > > >       Local interface: ge-0/0/0.0, Status: Up, Encapsulation: ETHERNET
> > > > > > > >
> > > > > > > > However, on the Cisco side it will not come up at all:
> > > > > > > >
> > > > > > > > Local interface: Gi0/1 up, line protocol up, Ethernet up
> > > > > > > > Destination address: XX.XX.XX.XX, VC ID: 2, VC status: down
> > > > > > > >     Last error: *MPLS dataplane reported a fault to the nexthop*
> > > > > > > >
> > > > > > > > The adjacencies are OK on both sides (even though the uptimes do not match):
> > > > > > > >
> > > > > > > > #sh mpls ldp neighbor
> > > > > > > >     Peer LDP Ident: XX.XX.XX.XX:0; Local LDP Ident YY.YY.YY.YY:0
> > > > > > > >         TCP connection: XX.XX.XX.XX.646 - 177.21.44.122.23511
> > > > > > > >         State: Oper; Msgs sent/rcvd: 62103/54188; Downstream
> > > > > > > >         *Up time: 6d07h*
> > > > > > > >
> > > > > > > > > show ldp neighbor YY.YY.YY.YY detail
> > > > > > > > Address            Interface          Label space ID         Hold time
> > > > > > > > YY.YY.YY.YY        lo0.0              YY.YY.YY.YY:0             41
> > > > > > > >   Transport address: YY.YY.YY.YY, Configuration sequence: 0
> > > > > > > >   *Up for 1w1d 23:35:12*
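> > > > > > > >
> > > > > > > > For completeness, the next thing I would compare (just a sketch) is
> > > > > > > > whether each side holds a label binding for the other side's loopback /32:
> > > > > > > >
> > > > > > > > Cisco:   show mpls ldp bindings
> > > > > > > > Juniper: show ldp database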
> > > > > > > >
> > > > > > > > I searched quite a bit on Google and found nothing, only people
> > > > > > > > with the same problem who apparently either did not manage to
> > > > > > > > solve it or did not post their results.
> > > > > > > >
> > > > > > > > Has anyone been through this or knows what it might be?
> > > > > > > >
> > > > > > > > Thanks in advance.
> > > > > > > > Regards.
> > > --
> > > Eduardo Schoedler