[GTER] l2circuit/xconnect between MX-104 and ME3600X
Caio
caiot5 at gmail.com
Tue Oct 4 15:56:21 -03 2016
I tested on the MX104; the VC also comes up with the lab switch.
It is some problem with that specific ME3600X; I will replace it soon and
bring it to the bench to see what happened.
Anyway, thanks to everyone once again.
2016-10-03 22:08 GMT-03:00 Caio <caiot5 at gmail.com>:
> I tested on another ME3600X in a lab together with an MX80 and it worked
> fine, both on version 15.5-3S and on 12.
> Tomorrow I will take this lab over to the MX104 that failed last time and
> see if the VC comes up. I still have no idea what the problem was, whether
> it was physical on the other switch or something on the MX104.
>
> As soon as I find out I will report back to the list so we have a record
> of this problem.
>
> Regards, and thanks to everyone.
>
>
> 2016-10-03 12:03 GMT-03:00 Caio <caiot5 at gmail.com>:
>
>> Westphal, I think you may be right; that made sense to me.
>> The tough part is that I will have to continue the lab on another ME3600X:
>> the one that was running apparently 'broke' its crypto after a downgrade
>> to version 12 and is now unreachable.
>> We have another one in stock; I will still test this during the week,
>> maybe tomorrow.
>>
>> Thanks!
>>
>> On 01/10/2016 13:09, "Renato Westphal" <renato at opensourcerouting.org>
>> wrote:
>>
>> It looks like the Cisco is not receiving a label mapping
>> (implicit-null) for the Juniper's loopback, even though the Juniper
>> configuration looks ok.
>>
>> If you can, send the output of the commands below (on the Cisco):
>> #show mpls ip binding <IP-LO-JUNOS> 32
>> #show ip cef <IP-LO-JUNOS> detail
>> #show mpls ldp neighbor <IP-LO-JUNOS>  // there must be two adjacencies (normal and extended)
>> #show mpls l2transport vc detail
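>>
>> On the Junos side, a minimal complementary check would be to confirm that
>> LDP really advertises a label for lo0 to this peer; a known Junos pitfall
>> is that an LDP egress-policy replaces the default advertisement of the
>> loopback, so the policy must still accept lo0. A sketch only, with a
>> hypothetical policy name:
>>
>> show ldp database session <IP-LO-CISCO>
>> (the output database should list <IP-LO-JUNOS>/32 with label 3, i.e. implicit-null)
>>
>> set policy-options policy-statement ldp-egress term lo0 from interface lo0.0
>> set policy-options policy-statement ldp-egress term lo0 then accept
>> set protocols ldp egress-policy ldp-egress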
>>
>> 2016-09-29 19:35 GMT-03:00 Caio <caiot5 at gmail.com>:
>> > @Guilherme
>> > Directly connected
>> >
>> > @List
>> > The loopbacks ping each other at L3 and the normal adjacency comes up,
>> > but they do not answer when pinging the FEC directly (ping mpls).
>> >
>> > 2016-09-29 17:00 GMT-03:00 Lista <lista.gter at gmail.com>:
>> >
>> >> If your loopbacks do not ping each other, check that the protocol you
>> >> use to distribute the routes is properly established and propagating
>> >> the routes to the neighbors. Once you can ping them over plain layer 3,
>> >> you will be able to establish your MPLS adjacencies.
>> >>
>> >>
>> >> 2016-09-29 16:34 GMT-03:00 Caio <caiot5 at gmail.com>:
>> >>
>> >> > That does not work either:
>> >> >
>> >> > #ping mpls
>> >> > Target IPv4, pseudowire or traffic-eng [ipv4]: ipv4
>> >> > Target IPv4 address: YY.YY.YY.YY
>> >> > Target mask: 255.255.255.255
>> >> > Repeat count [5]:
>> >> > Datagram size [72]:
>> >> > Timeout in seconds [2]:
>> >> > Send interval in msec [0]:
>> >> > Extended commands? [no]:
>> >> > Sweep range of sizes? [no]:
>> >> > Sending 5, 72-byte MPLS Echos to YY.YY.YY.YY/32,
>> >> > timeout is 2 seconds, send interval is 0 msec:
>> >> >
>> >> > Codes: '!' - success, 'Q' - request not sent, '.' - timeout,
>> >> > 'L' - labeled output interface, 'B' - unlabeled output interface,
>> >> > 'D' - DS Map mismatch, 'F' - no FEC mapping, 'f' - FEC mismatch,
>> >> > 'M' - malformed request, 'm' - unsupported tlvs, 'N' - no label entry,
>> >> > 'P' - no rx intf label prot, 'p' - premature termination of LSP,
>> >> > 'R' - transit router, 'I' - unknown upstream index,
>> >> > 'l' - Label switched with FEC change, 'd' - see DDMAP for return code,
>> >> > 'X' - unknown return code, 'x' - return code 0
>> >> >
>> >> > Type escape sequence to abort.
>> >> > QQQQQ
>> >> > Success rate is 0 percent (0/5)
>> >> > Total Time Elapsed 0 ms
>> >> >
>> >> >
>> >> > There really is something wrong, I just cannot figure out *WHAT*.
>> >> > The adjacency comes up normally and the VC comes up on the Juniper
>> >> > side. I compared the config with the one on the C2951, which is also
>> >> > an adjacency of both, and it is IDENTICAL except for the IPs, and on
>> >> > the C2951 it comes up. I really have no clue what it could be.
>> >> >
>> >> > 2016-09-29 14:45 GMT-03:00 Guilherme de Freitas Figueiredo <guilhermefreitasfigueiredo at gmail.com>:
>> >> >
>> >> > > Caio, I have also brought this up against Juniper without major
>> >> > > problems. Does an mpls ipv4 ping from the Cisco loopback to the
>> >> > > Juniper loopback not work either? If not even that works, there is
>> >> > > something wrong in MPLS.
>> >> > >
>> >> > > Regards!
>> >> > >
>> >> > > --
>> >> > > Guilherme de Freitas Figueiredo
>> >> > >
>> >> > > 2016-09-29 12:07 GMT-03:00 Caio <caiot5 at gmail.com>:
>> >> > >
>> >> > > > Another question, Guilherme: in the scenarios you have with the
>> >> > > > ME3600X, are you bringing the VC up directly against a Juniper
>> >> > > > MX, or only between ME3600Xs? (the latter I know works with no
>> >> > > > surprises)
>> >> > > >
>> >> > > > 2016-09-29 11:07 GMT-03:00 Caio <caiot5 at gmail.com>:
>> >> > > >
>> >> > > > > Guilherme,
>> >> > > > > The ping does not go through, and the traceroute does not
>> >> > > > > complete either:
>> >> > > > >
>> >> > > > > # run traceroute mpls ldp XX.XX.XX.XX
>> >> > > > > Probe options: ttl 64, retries 3, wait 10, paths 16, exp 7, fanout 16
>> >> > > > >
>> >> > > > >   ttl  Label  Protocol  Address      Previous Hop  Probe Status
>> >> > > > >     1                   WW.WW.WW.WW  (null)        No reply
>> >> > > > >     2                   (null)       (null)        No reply
>> >> > > > >     3                   (null)       (null)        No reply
>> >> > > > >     4                   (null)       (null)        No reply
>> >> > > > >     5                   (null)       (null)        No reply
>> >> > > > >     6                   (null)       (null)        No reply
>> >> > > > >     7                   (null)       (null)        No reply
>> >> > > > >     8                   (null)       (null)        No reply
>> >> > > > >
>> >> > > > > WW.WW.WW.WW is the directly connected interface IP (non-loopback).
>> >> > > > >
>> >> > > > > Would you mind telling me which IOS you are using on your
>> >> > > > > ME3600X boxes and, if you have a similar scenario, sharing the
>> >> > > > > configuration, or any relevant part of it that differs?
>> >> > > > >
>> >> > > > > 2016-09-29 10:44 GMT-03:00 Guilherme de Freitas Figueiredo <guilhermefreitasfigueiredo at gmail.com>:
>> >> > > > >
>> >> > > > >> I have a lot of MPLS running on 3600s here with no problems.
>> >> > > > >> Do a ping mpls or a traceroute mpls towards the Juniper
>> >> > > > >> destination get a response? It is very strange that the
>> >> > > > >> forwarding-table is missing the desired destination prefix.
>> >> > > > >>
>> >> > > > >> Regards!
>> >> > > > >>
>> >> > > > >> --
>> >> > > > >> Guilherme de Freitas Figueiredo
>> >> > > > >>
>> >> > > > >> On Thu, Sep 29, 2016 at 9:55 AM, Caio <caiot5 at gmail.com> wrote:
>> >> > > > >>
>> >> > > > >> > Guilherme,
>> >> > > > >> >
>> >> > > > >> > Service-instance without bridge-domain, with the xconnect on
>> >> > > > >> > the service-instance. CEF is ok, but the forwarding-table is
>> >> > > > >> > empty, which I believe is due to the MPLS dataplane failure.
>> >> > > > >> > See:
>> >> > > > >> >
>> >> > > > >> > Local    Outgoing   Prefix         Bytes Label  Outgoing    Next Hop
>> >> > > > >> > Label    Label      or Tunnel Id   Switched     interface
>> >> > > > >> > 17       No Label   l2ckt()        0                        drop
>> >> > > > >> >
>> >> > > > >> > Eduardo, here is the service configuration; it is quite
>> >> > > > >> > simple (note: I have other scenarios running with exactly
>> >> > > > >> > the same configuration working fine, but on other devices,
>> >> > > > >> > C2951, etc.):
>> >> > > > >> >
>> >> > > > >> > Juniper side:
>> >> > > > >> >
>> >> > > > >> > set interfaces ge-0/0/0 mtu 1600
>> >> > > > >> > set interfaces ge-0/0/0 encapsulation ethernet-ccc
>> >> > > > >> > set interfaces ge-0/0/0 unit 0
>> >> > > > >> > set protocols l2circuit neighbor XX.XX.XX.XX interface ge-0/0/0.0 virtual-circuit-id 2
>> >> > > > >> > set protocols l2circuit neighbor XX.XX.XX.XX interface ge-0/0/0.0 control-word
>> >> > > > >> > set protocols l2circuit neighbor XX.XX.XX.XX interface ge-0/0/0.0 mtu 1600
>> >> > > > >> > set protocols l2circuit neighbor XX.XX.XX.XX interface ge-0/0/0.0 pseudowire-status-tlv
>> >> > > > >> > set protocols ldp interface xe-2/0/1.1 transport-address router-id
>> >> > > > >> > set protocols mpls interface xe-2/0/1.1
>> >> > > > >> > set protocols ldp egress-policy connected
>> >> > > > >> > set protocols ldp deaggregate
>> >> > > > >> > set protocols ldp interface lo0.0 transport-address interface
>> >> > > > >> > set interfaces lo0 unit 0 family inet address YY.YY.YY.YY/32
>> >> > > > >> >
>> >> > > > >> >
>> >> > > > >> > Cisco side:
>> >> > > > >> >
>> >> > > > >> > pseudowire-class eompls
>> >> > > > >> >  encapsulation mpls
>> >> > > > >> >  control-word
>> >> > > > >> >
>> >> > > > >> > interface GigabitEthernet0/1
>> >> > > > >> >  no switchport
>> >> > > > >> >  mtu 1600
>> >> > > > >> >  no ip address
>> >> > > > >> >  xconnect YY.YY.YY.YY 2 encapsulation mpls pw-class eompls
>> >> > > > >> >
>> >> > > > >> >
>> >> > > > >> > mpls ldp router-id Loopback0 force
>> >> > > > >> >
>> >> > > > >> > interface Loopback0
>> >> > > > >> >  ip address XX.XX.XX.XX 255.255.255.255
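>> >> > > > >> >
>> >> > > > >> > For reference, a minimal sketch of how a pseudowire like
>> >> > > > >> > this one is usually verified on both ends (my suggestion,
>> >> > > > >> > these commands are not taken from the configs above):
>> >> > > > >> >
>> >> > > > >> > show mpls l2transport vc 2 detail      (Cisco: VC state, labels, last error)
>> >> > > > >> > show mpls ip binding YY.YY.YY.YY 32    (Cisco: remote label for the Juniper loopback FEC)
>> >> > > > >> > show l2circuit connections extensive   (Junos: VC state and the down reason, if any)
>> >> > > > >> > show ldp database                      (Junos: labels advertised to / received from the peer)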
>> >> > > > >> >
>> >> > > > >> >
>> >> > > > >> > > 2016-09-28 23:42 GMT-03:00 Eduardo Schoedler <listas at esds.com.br>:
>> >> > > > >> >
>> >> > > > >> > > If you had sent the configuration, it would be much
>> >> > > > >> > > simpler to understand...
>> >> > > > >> > >
>> >> > > > >> > > On Wednesday, September 28, 2016, Guilherme de Freitas
>> >> > > > >> > > Figueiredo <guilhermefreitasfigueiredo at gmail.com> wrote:
>> >> > > > >> > >
>> >> > > > >> > > > How is the transport configured on g0/1? Service
>> >> > > > >> > > > instance with bridge-domain? Switchport with a VLAN?
>> >> > > > >> > > > Service-instance without bridge-domain and the xconnect
>> >> > > > >> > > > on the service instance? Is the forwarding-table also
>> >> > > > >> > > > correct, as well as the CEF? What does a traceroute with
>> >> > > > >> > > > MPLS packets towards the destination look like?
>> >> > > > >> > > >
>> >> > > > >> > > > Regards!
>> >> > > > >> > > >
>> >> > > > >> > > > --
>> >> > > > >> > > > Guilherme de Freitas Figueiredo
>> >> > > > >> > > >
>> >> > > > >> > > > On Wed, Sep 28, 2016 at 2:24 PM, Caio <caiot5 at gmail.com> wrote:
>> >> > > > >> > > >
>> >> > > > >> > > > > Rubens,
>> >> > > > >> > > > >
>> >> > > > >> > > > > I tried everything suggested in the post; still the same.
>> >> > > > >> > > > > I captured the debug of the VC trying to come up, for
>> >> > > > >> > > > > anyone who wants to take a look:
>> >> > > > >> > > > >
>> >> > > > >> > > > > Sep 28 17:19:08.264: AToM[XX.XX.XX.XX, 2]: Circuit attributes, Receive update:
>> >> > > > >> > > > > Sep 28 17:19:08.264: AToM[XX.XX.XX.XX, 2]: . Status: UP (0x1)
>> >> > > > >> > > > > Sep 28 17:19:08.264: AToM[XX.XX.XX.XX, 2]: . Alarm: 0x0
>> >> > > > >> > > > > Sep 28 17:19:08.264: AToM[XX.XX.XX.XX, 2]: Process attrs
>> >> > > > >> > > > > Sep 28 17:19:08.264: AToM[XX.XX.XX.XX, 2]: Receive status update
>> >> > > > >> > > > > Sep 28 17:19:08.264: AToM[XX.XX.XX.XX, 2]: . Receive AC STATUS(UP)
>> >> > > > >> > > > > Sep 28 17:19:08.264: AToM[XX.XX.XX.XX, 2]: .. AC status UP
>> >> > > > >> > > > > Sep 28 17:19:08.264: AToM[XX.XX.XX.XX, 2]: .... S:Evt local up, LrdRruD->LruRruD
>> >> > > > >> > > > > Sep 28 17:19:08.264: AToM[XX.XX.XX.XX, 2]: .... S:Act send notify(DOWN)
>> >> > > > >> > > > > Sep 28 17:19:08.264: AToM[XX.XX.XX.XX, 2]: ..... Send notify(DOWN)
>> >> > > > >> > > > > Sep 28 17:19:08.264: AToM[XX.XX.XX.XX, 2]: ..... Dataplane : DOWN(pw-tx-fault)
>> >> > > > >> > > > > Sep 28 17:19:08.264: AToM[XX.XX.XX.XX, 2]: ..... Overall : DOWN(pw-tx-fault)
>> >> > > > >> > > > > Sep 28 17:19:08.264: AToM[XX.XX.XX.XX, 2]: ..... Send LDP for status change from DOWN AC(rx/tx faults), (pw-tx-fault)
>> >> > > > >> > > > > Sep 28 17:19:08.264: AToM[XX.XX.XX.XX, 2]: ..... NMS: VC oper state: DOWN
>> >> > > > >> > > > > Sep 28 17:19:08.264: AToM[XX.XX.XX.XX, 2]: ..... NMS: err codes: pw-rx-err
>> >> > > > >> > > > > Sep 28 17:19:08.264: AToM[XX.XX.XX.XX, 2]: ..... NMS: : + dp-err
>> >> > > > >> > > > > Sep 28 17:19:08.264: AToM[XX.XX.XX.XX, 2]: ..... SYSLOG: VC is DOWN, PW Err
>> >> > > > >> > > > > Sep 28 17:19:08.264: AToM[XX.XX.XX.XX, 2]: ... Local ready
>> >> > > > >> > > > > Sep 28 17:19:08.264: AToM[XX.XX.XX.XX, 2]: .... Local service is ready; send a label
>> >> > > > >> > > > > Sep 28 17:19:08.264: AToM[XX.XX.XX.XX, 2]: .... Alloc local binding
>> >> > > > >> > > > > Sep 28 17:19:08.264: AToM[XX.XX.XX.XX, 2]: ..... No need to update the local binding
>> >> > > > >> > > > > Sep 28 17:19:08.264: AToM[XX.XX.XX.XX, 2]: .... Generate local event
>> >> > > > >> > > > > Sep 28 17:19:08.264: AToM[XX.XX.XX.XX, 2]: .... Ready, label 17
>> >> > > > >> > > > > Sep 28 17:19:08.264: AToM[XX.XX.XX.XX, 2]: .... Evt local ready, in activating
>> >> > > > >> > > > > Sep 28 17:19:08.264: AToM[XX.XX.XX.XX, 2]: ..... Take no action
>> >> > > > >> > > > > Sep 28 17:19:08.264: AToM[XX.XX.XX.XX, 2]: .. Check if can activate dataplane
>> >> > > > >> > > > > Sep 28 17:19:08.264: AToM[XX.XX.XX.XX, 2]: ... Not activating dataplane: not establishing
>> >> > > > >> > > > > Sep 28 17:19:08.264: AToM: 1631 cumulative msgs handled. rc=0
>> >> > > > >> > > > > Sep 28 17:20:06.464: AToM[XX.XX.XX.XX, 2]: Evt dataplane reactivate, in activating
>> >> > > > >> > > > > Sep 28 17:20:06.464: AToM[XX.XX.XX.XX, 2]: . Activate dataplane
>> >> > > > >> > > > > Sep 28 17:20:06.464: AToM[XX.XX.XX.XX, 2]: . Need to setup the dataplane
>> >> > > > >> > > > > Sep 28 17:20:06.464: AToM[XX.XX.XX.XX, 2]: . Setup dataplane, PWID 1
>> >> > > > >> > > > > Sep 28 17:20:06.464: AToM[XX.XX.XX.XX, 2]: .. Provision SSM with PWID 1, VC ID 2, Block ID 0
>> >> > > > >> > > > > Sep 28 17:20:06.464: AToM[XX.XX.XX.XX, 2]: .. Set imp flags: cw ra vcw
>> >> > > > >> > > > > Sep 28 17:20:06.464: AToM[XX.XX.XX.XX, 2]: .. : nsf
>> >> > > > >> > > > > Sep 28 17:20:06.464: AToM[XX.XX.XX.XX, 2]: .. Set segment count to 1
>> >> > > > >> > > > > Sep 28 17:20:06.464: AToM[XX.XX.XX.XX, 2]: .. Provision SSM with 5489/5527 (sw/seg)
>> >> > > > >> > > > > Sep 28 17:20:06.464: AToM[XX.XX.XX.XX, 2]: Receive SSM dataplane unavailable notification
>> >> > > > >> > > > > Sep 28 17:20:06.464: AToM[XX.XX.XX.XX, 2]: Evt dataplane down, in activating
>> >> > > > >> > > > > Sep 28 17:20:06.464: AToM[XX.XX.XX.XX, 2]: . Dataplane unavailable
>> >> > > > >> > > > > Sep 28 17:20:06.464: AToM[XX.XX.XX.XX, 2]: . Set last error: MPLS dataplane reported a fault to the nexthop
>> >> > > > >> > > > > Sep 28 17:20:06.464: AToM[XX.XX.XX.XX, 2]: .. S:Evt dataplane fault in LruRruD
>> >> > > > >> > > > > Sep 28 17:20:06.464: AToM[XX.XX.XX.XX, 2]: .. S:Act send SSS(DOWN), notify(DOWN)
>> >> > > > >> > > > > Sep 28 17:20:06.464: AToM[XX.XX.XX.XX, 2]: ... Dataplane : DOWN(pw-tx-fault)
>> >> > > > >> > > > > Sep 28 17:20:06.464: AToM[XX.XX.XX.XX, 2]: ... Overall : DOWN(pw-rx-fault)
>> >> > > > >> > > > > Sep 28 17:20:06.464: AToM[XX.XX.XX.XX, 2]: ... [filtered AC]
>> >> > > > >> > > > > Sep 28 17:20:06.464: AToM[XX.XX.XX.XX, 2]: ... Send notify(DOWN)
>> >> > > > >> > > > > Sep 28 17:20:06.464: AToM[XX.XX.XX.XX, 2]: ... Dataplane : DOWN(pw-tx-fault)
>> >> > > > >> > > > > Sep 28 17:20:06.464: AToM[XX.XX.XX.XX, 2]: ... Overall : DOWN(pw-tx-fault)
>> >> > > > >> > > > > Sep 28 17:20:06.464: AToM[XX.XX.XX.XX, 2]: ... [filtered LDP]
>> >> > > > >> > > > > Sep 28 17:20:06.464: AToM[XX.XX.XX.XX, 2]: . Notify dataplane down
>> >> > > > >> > > > > Sep 28 17:20:06.464: AToM[XX.XX.XX.XX, 2]: Deactivating data plane
>> >> > > > >> > > > > Sep 28 17:20:06.464: AToM[XX.XX.XX.XX, 2]: Notify dataplane down
>> >> > > > >> > > > > Sep 28 17:20:06.464: AToM[XX.XX.XX.XX, 2]: Unprovision and deallocate SSM segment
>> >> > > > >> > > > > Sep 28 17:20:06.464: AToM[XX.XX.XX.XX, 2]: Added vc to 60 sec retry queue
>> >> > > > >> > > > > Sep 28 17:20:06.464: AToM[XX.XX.XX.XX, 2]: Event provision retry already in retry queue
>> >> > > > >> > > > > Sep 28 17:20:06.464: AToM: 1632 cumulative msgs handled. rc=0
>> >> > > > >> > > > >
>> >> > > > >> > > > > Searching on Google I found some reports of problems
>> >> > > > >> > > > > with the ME3600X using BGP signaling, but I am using
>> >> > > > >> > > > > LDP for signaling, so I cannot see a connection
>> >> > > > >> > > > > between the problems.
>> >> > > > >> > > > >
>> >> > > > >> > > > > Well, I leave it open to anyone who can help; ANY help
>> >> > > > >> > > > > is welcome.
>> >> > > > >> > > > >
>> >> > > > >> > > > > Regards,
>> >> > > > >> > > > >
>> >> > > > >> > > > > Caio
>> >> > > > >> > > > >
>> >> > > > >> > > > > 2016-09-28 12:50 GMT-03:00 Caio <caiot5 at gmail.com>:
>> >> > > > >> > > > >
>> >> > > > >> > > > > > Rubens,
>> >> > > > >> > > > > >
>> >> > > > >> > > > > > Thanks for the tip; I saw a few things I can try in
>> >> > > > >> > > > > > that post. I will test all the possibilities today
>> >> > > > >> > > > > > and send a report to the list.
>> >> > > > >> > > > > >
>> >> > > > >> > > > > > Regards,
>> >> > > > >> > > > > > Caio
>> >> > > > >> > > > > >
>> >> > > > >> > > > > > On 28/09/2016 11:43, "Lista" <lista.gter at gmail.com> wrote:
>> >> > > > >> > > > > >
>> >> > > > >> > > > > > If it works, report back to us; the feedback would
>> >> > > > >> > > > > > be interesting.
>> >> > > > >> > > > > >
>> >> > > > >> > > > > > On 28 September 2016 at 07:40, Rubens Kuhl <rubensk at gmail.com> wrote:
>> >> > > > >> > > > > >
>> >> > > > >> > > > > > > http://blog.ipspace.net/2011/11/junos-versus-cisco-ios-mpls-and-ldp.html
>> >> > > > >> > > > > > > may shed some light...
>> >> > > > >> > > > > > >
>> >> > > > >> > > > > > > Rubens
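>> >> > > > >> > > > > > >
>> >> > > > >> > > > > > > For context (my summary of that interop gotcha, not a
>> >> > > > >> > > > > > > quote from the post): Cisco IOS advertises LDP labels
>> >> > > > >> > > > > > > for every IGP prefix by default, while Junos by default
>> >> > > > >> > > > > > > advertises a label only for its loopback, so a quick
>> >> > > > >> > > > > > > sketch of comparing what each side actually learned:
>> >> > > > >> > > > > > >
>> >> > > > >> > > > > > > show mpls ldp bindings    (IOS: local/remote label bindings per FEC)
>> >> > > > >> > > > > > > show ldp database         (Junos: input/output label databases per session)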
>> >> > > > >> > > > > > >
>> >> > > > >> > > > > > >
>> >> > > > >> > > > > > >
>> >> > > > >> > > > > > > 2016-09-27 15:05 GMT-03:00 Caio <caiot5 at gmail.com>:
>> >> > > > >> > > > > > >
>> >> > > > >> > > > > > > > Folks,
>> >> > > > >> > > > > > > >
>> >> > > > >> > > > > > > > I am hitting a strange problem while trying to
>> >> > > > >> > > > > > > > bring up an interop l2circuit/xconnect between an
>> >> > > > >> > > > > > > > MX-104 and a ME3600X.
>> >> > > > >> > > > > > > > An interesting detail is that both the ME3600X and
>> >> > > > >> > > > > > > > the MX-104 already have l2circuits/xconnects
>> >> > > > >> > > > > > > > established with other devices (other Junipers and
>> >> > > > >> > > > > > > > other Ciscos such as the 2951, etc.).
>> >> > > > >> > > > > > > > On the MX side everything comes up:
>> >> > > > >> > > > > > > >
>> >> > > > >> > > > > > > > Neighbor: XX.XX.XX.XX
>> >> > > > >> > > > > > > >     Interface         Type  St     Time last up          # Up trans
>> >> > > > >> > > > > > > >     ge-0/0/0.0(vc 2)  rmt   Up     Sep 27 14:54:56 2016           1
>> >> > > > >> > > > > > > >       Remote PE: YY.YY.YY.YY, Negotiated control-word: Yes (Null)
>> >> > > > >> > > > > > > >       Incoming label: 300192, Outgoing label: 18
>> >> > > > >> > > > > > > >       Negotiated PW status TLV: No
>> >> > > > >> > > > > > > >       Local interface: ge-0/0/0.0, Status: Up, Encapsulation: ETHERNET
>> >> > > > >> > > > > > > >
>> >> > > > >> > > > > > > > But on the Cisco side it refuses to come up:
>> >> > > > >> > > > > > > >
>> >> > > > >> > > > > > > > Local interface: Gi0/1 up, line protocol up, Ethernet up
>> >> > > > >> > > > > > > >   Destination address: XX.XX.XX.XX, VC ID: 2, VC status: down
>> >> > > > >> > > > > > > >     Last error: *MPLS dataplane reported a fault to the nexthop*
>> >> > > > >> > > > > > > >
>> >> > > > >> > > > > > > > The adjacencies are ok on both sides (even though
>> >> > > > >> > > > > > > > the uptimes do not match):
>> >> > > > >> > > > > > > >
>> >> > > > >> > > > > > > > #sh mpls ldp neighbor
>> >> > > > >> > > > > > > >     Peer LDP Ident: XX.XX.XX.XX:0; Local LDP Ident YY.YY.YY.YY:0
>> >> > > > >> > > > > > > >         TCP connection: XX.XX.XX.XX.646 - 177.21.44.122.23511
>> >> > > > >> > > > > > > >         State: Oper; Msgs sent/rcvd: 62103/54188; Downstream
>> >> > > > >> > > > > > > >         *Up time: 6d07h*
>> >> > > > >> > > > > > > >
>> >> > > > >> > > > > > > > > show ldp neighbor YY.YY.YY.YY detail
>> >> > > > >> > > > > > > > Address        Interface    Label space ID    Hold time
>> >> > > > >> > > > > > > > YY.YY.YY.YY    lo0.0        YY.YY.YY.YY:0        41
>> >> > > > >> > > > > > > >   Transport address: YY.YY.YY.YY, Configuration sequence: 0
>> >> > > > >> > > > > > > >   *Up for 1w1d 23:35:12*
>> >> > > > >> > > > > > > >
>> >> > > > >> > > > > > > > I searched Google quite a bit and found nothing,
>> >> > > > >> > > > > > > > only people with the same problem who apparently
>> >> > > > >> > > > > > > > either could not solve it or did not post the
>> >> > > > >> > > > > > > > results.
>> >> > > > >> > > > > > > >
>> >> > > > >> > > > > > > > Has anyone been through this or knows what it
>> >> > > > >> > > > > > > > could be?
>> >> > > > >> > > > > > > >
>> >> > > > >> > > > > > > > Thanks in advance.
>> >> > > > >> > > > > > > > Regards.
>> >> > > > >> > > > > > > > --
>> >> > > > >> > > > > > > > gter list https://eng.registro.br/mailman/listinfo/gter
>> >> > > > >> > > --
>> >> > > > >> > > Eduardo Schoedler
>> --
>> Renato Westphal
More information about the gter mailing list