From anisio.neto at hotmail.com.br Tue Nov 2 10:16:37 2010
From: anisio.neto at hotmail.com.br (Anísio J. Moreira Neto)
Date: Tue, 2 Nov 2010 10:16:37 -0200
Subject: [MASOCH-L] Problemas com VPN (Windows 2003 RRAS)

Renato, good morning.

Is this server virtual? PPTP or L2TP?

Regards.

-----Original Message----- From: Renato Pinheiro de Souza
Sent: Sunday, October 31, 2010 12:19 AM
To: masoch-l at eng.registro.br
Subject: [MASOCH-L] Problemas com VPN (Windows 2003 RRAS)

Friends,

I have a problem with my VPN server: it cuts off communication after a certain amount of time. The VPN does not actually drop, but nothing gets through. The problem is somehow traffic-related, because if I keep a "ping -t" running to anywhere, everything works normally. I have already gone through every idle-time/timeout setting and found nothing that would cause this.

Has anyone here run into this problem, and/or could you help me?

Thanks in advance for the help.

Regards,
Renato Pinheiro
renato.pinheiro at pobox.com
pinheiro at gmail.com
__
masoch-l list
https://eng.registro.br/mailman/listinfo/masoch-l

From renato.pinheiro at pobox.com Tue Nov 2 10:45:22 2010
From: renato.pinheiro at pobox.com (Renato Pinheiro de Souza)
Date: Tue, 2 Nov 2010 10:45:22 -0200
Subject: [MASOCH-L] Problemas com VPN (Windows 2003 RRAS)

Hi Anísio,

no, the server is not virtual, and it uses PPTP.

Thanks in advance for your attention ;)

Regards,
Renato Pinheiro
renato.pinheiro at pobox.com
pinheiro at gmail.com

From anisio.neto at hotmail.com.br Tue Nov 2 20:47:10 2010
From: anisio.neto at hotmail.com.br (Anísio J. Moreira Neto)
Date: Tue, 2 Nov 2010 20:47:10 -0200
Subject: [MASOCH-L] Problemas com VPN (Windows 2003 RRAS)

Renato,

I have seen a similar situation on a virtual server; in that case the problem was a Windows 2003 bug under Hyper-V when more than one virtual processor was assigned. In your case my guess would be some kind of "total protection" antivirus, the kind that bundles a firewall. And, as a last and less likely possibility, the IP address handed to the VPN client may also be in use by an internal workstation.

Regards.

From renato.pinheiro at pobox.com Wed Nov 3 07:43:48 2010
From: renato.pinheiro at pobox.com (Renato Pinheiro de Souza)
Date: Wed, 3 Nov 2010 07:43:48 -0200
Subject: [MASOCH-L] Problemas com VPN (Windows 2003 RRAS)

Well, I only run Kaspersky on this server (the firewall, a FreeBSD box, sits on the router), the VPN hands out addresses from the 172.16.x.x range and the workstations, as incredible as it may seem, use public IPs. I came across a post saying this could be related to the bogus IP being registered in DNS; I will look into that. In any case, thank you!

Regards,
Renato Pinheiro
renato.pinheiro at pobox.com
pinheiro at gmail.com
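[Editor's note: Renato's own observation — the tunnel only stays healthy while a continuous "ping -t" runs — points at an idle timer somewhere on the path expiring PPTP/NAT state. A minimal keep-alive sketch of that workaround, not from the thread; the function name, target host, and port are placeholders, and any small periodic packet through the tunnel would do:]

```python
import socket
import time

def vpn_keepalive(host, port=53, interval=30.0, beats=None):
    """Send a tiny UDP datagram toward `host` every `interval` seconds
    so idle/NAT timers along the VPN path never expire.

    beats=None runs forever; an integer limits the number of datagrams
    (useful for testing). Returns how many datagrams were sent."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sent = 0
    try:
        while beats is None or sent < beats:
            sock.sendto(b"keepalive", (host, port))
            sent += 1
            if beats is None or sent < beats:
                time.sleep(interval)
    finally:
        sock.close()
    return sent
```

Run against any host reachable through the tunnel (e.g. the remote DNS server); the point is only that the traffic is periodic, not what it carries.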
>> >> Abra?os, >> Renato Pinheiro >> renato.pinheiro at pobox.com >> pinheiro at gmail.com >> __ >> masoch-l list >> https://eng.registro.br/mailman/listinfo/masoch-l >> __ >> masoch-l list >> https://eng.registro.br/mailman/listinfo/masoch-l >> >> __ > masoch-l list > https://eng.registro.br/mailman/listinfo/masoch-l > __ > masoch-l list > https://eng.registro.br/mailman/listinfo/masoch-l > From rejaine at bhz.jamef.com.br Thu Nov 4 12:38:36 2010 From: rejaine at bhz.jamef.com.br (Rejaine Monteiro) Date: Thu, 04 Nov 2010 12:38:36 -0200 Subject: [MASOCH-L] Problemas Dell PowerEdge 1900 com SLES11 SP1 Message-ID: <4CD2C56C.1020108@bhz.jamef.com.br> Pessoal, Venho pedir socorro.. Estou com um grave problema de performance em um servidor com SLES11(SP1) instalado em um servidor PowerEdge 1900 (configura??o abaixo) Ocorre o seguinte: T?nhamos em nossas localidades v?rios servidores bem inferiores, atendendo ao mesmo n?mero de usu?rios e mesmos servi?os, por?m, utilizando OpenSuSE 10.2. Tudo funcionava perfeitamente bem at? ent?o, mas seguindo nosso planejamento de atualiza??o do parque de m?quinas, optamos por fazer upgrade de hardware e S.O (que se encontrava bastante desatualizado) nessas localidades e eis que come?aram os problemas. Inicialmente, fizemos a substitui??o em apenas duas localidades de menor porte e com menor n?mero de usu?rios e j? hav?amos notado um certo aumento na carga da CPU. Atualizamos para SLES11 e SP1 e a coisa parece que melhorou um pouco. Por?m, em uma outra localidade em especial, com cerca de 300 usu?rios, a performance do servidor est? simplesmente sofr?vel A carga de CPU sobe tanto, que as vezes mal consigo fazer login para visualizar o syslog, tendo muitas vezes que derrubar v?rios servi?os ou dar um reboot para voltar ao normal. J? 
fizemos v?rios ajustes de tunning no Kernel e v?rias outros ajustes de tunning nas v?rias aplica??es que o servidor executa (especialmente no servi?os mais importantes como drbd, heartebeat, ldap, nfsserver, etc) Nada parece surgir qualquer efeito no problema, nenhuma melhoria consider?vel mesmo ap?s dezenas de ajustes. Como temos dois servidores id?nticos (um em modo failover, por causa do HA), fizemos o teste subindo todos os servi?os no servidor backup, para descartar problemas de disco e/ou hardware na m?quina principal, por?m os problemas continuaram tamb?m no outro servidor. Quando a carga est? muito alta, o syslog come?a a gerar v?rios dumps no /var/log/messages (descritos abaixo) Aparentemente, n?o h? problemas de I/O (j? incluimos at? um RAID para melhorar a performance de disco e fizemos v?rios ajustes, mas nada resolveu ou surtiu efeito) O que percebemos, ? que n?o h? rela??o com iowait e cpu load , ou seja, quando a carga est? alta, o disco n?o apresenta sobrecarga. Parece ser algo haver com mem?ria, mas o servidor antigo trabalha com 4G no OpenSuSE 10.2 e dava conta do recado e j? este servidor, apesar de mais ser ainda "parrudo" e com o dobro de mem?ria n?o. Sinceramente, vamos tentar fazer um downgrade do S.O. porque um hardware inferior, rodando basicamente os mesmos servi?os e com mesmo n?mero de usu?rios funcionava muito bem com o OpenSuSE 10.2 Segue abaixo descri??o do hardware, software e servi?os utilizados no servidor e logo mais adiante algumas mensgens que aparecem no syslog Se algu?m puder ajudar com qualquer dica, eu agrade?o muit?ssimo (qualquer ajuda ? 
bem vinda) Servidor> Del PowerEdge 1900 2 x Intel(R) Xeon(R) CPU E5310 1.60GHz DualCore 8G RAM 4 HDs SAS 15000rpm Software> Suse Linux Enterprise Server 11 - Service Pack 1 Kernel> Linux srv-linux 2.6.32.12-0.7-default #1 SMP 2010-05-20 11:14:20 +0200 x86_64 x86_64 x86_64 GNU/Linux Servicos basicos que est?o rodando nesse servidor: linux-ha (drbd+heartbeat), openldap, qmail-ldap, samba-ldap, nfsserver, dhcp, named, squid e jabberd Numero de usuarios: 300 Usuarios Linux utilizam HOMEDIR montado via NFS Usuarios Windows utilizacao SAMBA para compartilhamento de arquivos de grupo e/ou backup de profile top - 10:33:37 up 57 min, 19 users, load average: 40.44, 49.96, 42.26 Tasks: 510 total, 1 running, 509 sleeping, 0 stopped, 0 zombie Cpu(s): 1.3%us, 1.5%sy, 0.0%ni, 94.2%id, 1.7%wa, 0.0%hi, 1.4%si, 0.0%st Mem: 8188816k total, 8137392k used, 51424k free, 57116k buffers Swap: 2104432k total, 0k used, 2104432k free, 7089980k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 9901 qscand 20 0 207m 164m 2032 S 0 2.1 0:04.63 clamd 4074 root 20 0 358m 57m 1992 S 0 0.7 0:03.03 nscd 9016 named 20 0 320m 54m 2464 S 0 0.7 0:17.37 named 22761 root 20 0 115m 50m 4604 S 0 0.6 0:02.30 nxagent 23477 root 20 0 597m 33m 21m S 0 0.4 0:01.20 plasma-desktop 23357 root 20 0 453m 30m 23m S 0 0.4 0:00.51 kwin 9028 ldap 20 0 1930m 26m 4564 S 0 0.3 1:36.51 slapd 9248 root 20 0 324m 24m 17m S 0 0.3 0:03.92 kdm_greet 24164 root 20 0 486m 23m 16m S 0 0.3 0:00.35 krunner 10870 root 20 0 24548 20m 1168 S 2 0.3 0:22.59 jabberd 9014 root 20 0 120m 19m 5328 S 0 0.2 0:03.04 Xorg 24283 root 20 0 173m 19m 14m S 0 0.2 0:00.18 kdialog 22940 root 20 0 290m 18m 12m S 0 0.2 0:00.22 kded4 24275 root 20 0 191m 18m 13m S 0 0.2 0:00.22 kupdateapplet 24270 root 20 0 237m 16m 10m S 0 0.2 0:00.11 kmix 4061 root -2 0 92828 16m 8476 S 0 0.2 0:01.18 heartbeat 24274 root 20 0 284m 15m 9.9m S 0 0.2 0:00.10 klipper 23299 root 20 0 309m 14m 9844 S 0 0.2 0:00.08 ksmserver 22899 root 20 0 201m 14m 10m S 0 0.2 0:00.10 
kdeinit4 23743 root 20 0 228m 12m 7856 S 0 0.2 0:00.10 kglobalaccel 24167 root 20 0 235m 12m 7760 S 0 0.2 0:00.04 nepomukserver # /usr/bin/uptime 11:04am up 0:18, 7 users, load average: 27.52, 18.60, 10.27 # /usr/bin/vmstat 1 4 procs -----------memory---------- ---swap-- -----io---- -system-- -----cpu------ r b swpd free buff cache si so bi bo in cs us sy id wa st 2 0 0 50856 19300 7196808 0 0 507 378 1167 1175 3 3 88 6 0 0 0 0 41332 19300 7200960 0 0 176 1279 14284 10519 2 2 93 2 0 1 0 0 43184 19184 7181520 0 0 0 1074 7191 1856 0 1 99 0 0 0 0 0 43316 19128 7179868 0 0 0 1189 2237 2340 1 0 99 0 0 # /usr/bin/vmstat 1 4 procs -----------memory---------- ---swap-- -----io---- -system-- -----cpu------ r b swpd free buff cache si so bi bo in cs us sy id wa st 0 1 0 47276 19048 7177788 0 0 498 384 1166 1171 3 3 88 6 0 1 0 0 46128 19056 7167016 0 0 36 970 7530 4158 2 1 95 2 0 0 1 0 46452 19064 7163616 0 0 20 798 1411 1749 2 1 97 0 0 0 0 0 46868 19064 7162624 0 0 56 751 7079 2169 1 1 97 0 0 Nov 4 09:57:53 srv-linux kernel: [ 1284.893013] The following is only an harmless informational message. Nov 4 09:57:53 srv-linux kernel: [ 1284.893019] Unless you get a _continuous_flood_ of these messages it means Nov 4 09:57:53 srv-linux kernel: [ 1284.893021] everything is working fine. Allocations from irqs cannot be Nov 4 09:57:53 srv-linux kernel: [ 1284.893023] perfectly reliable and the kernel is designed to handle that. Nov 4 09:57:53 srv-linux kernel: [ 1284.893028] swapper: page allocation failure. 
order:0, mode:0x20, alloc_flags:0x30 pflags:0x10200042
Nov 4 09:57:53 srv-linux kernel: [ 1284.893032] Pid: 0, comm: swapper Tainted: G X 2.6.32.12-0.7-default #1
Nov 4 09:57:53 srv-linux kernel: [ 1284.893035] Call Trace:
Nov 4 09:57:53 srv-linux kernel: [ 1284.893054] [] dump_trace+0x6c/0x2d0
Nov 4 09:57:53 srv-linux kernel: [ 1284.893063] [] dump_stack+0x69/0x71
Nov 4 09:57:53 srv-linux kernel: [ 1284.893070] [] __alloc_pages_slowpath+0x3ed/0x550
Nov 4 09:57:53 srv-linux kernel: [ 1284.893077] [] __alloc_pages_nodemask+0x13a/0x140
Nov 4 09:57:53 srv-linux kernel: [ 1284.893084] [] kmem_getpages+0x56/0x170
Nov 4 09:57:53 srv-linux kernel: [ 1284.893089] [] fallback_alloc+0x166/0x230
Nov 4 09:58:12 srv-linux kernel: [ 1284.893095] [] kmem_cache_alloc+0x192/0x1b0
Nov 4 09:58:12 srv-linux kernel: [ 1284.893102] [] skb_clone+0x3a/0x80
Nov 4 09:58:12 srv-linux kernel: [ 1284.893109] [] dev_queue_xmit_nit+0x82/0x170
Nov 4 09:58:12 srv-linux kernel: [ 1284.893114] [] dev_hard_start_xmit+0x4a/0x210
Nov 4 09:58:12 srv-linux kernel: [ 1284.893120] [] sch_direct_xmit+0x16e/0x1e0
Nov 4 09:58:12 srv-linux kernel: [ 1284.893126] [] dev_queue_xmit+0x366/0x4d0
Nov 4 09:58:12 srv-linux kernel: [ 1284.893132] [] ip_queue_xmit+0x210/0x420
Nov 4 09:58:12 srv-linux kernel: [ 1284.893139] [] tcp_transmit_skb+0x4cb/0x760
Nov 4 09:58:12 srv-linux kernel: [ 1284.893145] [] tcp_delack_timer+0x14f/0x2a0
Nov 4 09:58:12 srv-linux kernel: [ 1284.893151] [] run_timer_softirq+0x174/0x240
Nov 4 09:58:12 srv-linux kernel: [ 1284.893157] [] __do_softirq+0xbf/0x170
Nov 4 09:58:12 srv-linux kernel: [ 1284.893163] [] call_softirq+0x1c/0x30
Nov 4 09:58:12 srv-linux kernel: [ 1284.893168] [] do_softirq+0x4d/0x80
Nov 4 09:58:12 srv-linux kernel: [ 1284.893173] [] irq_exit+0x85/0x90
Nov 4 09:58:12 srv-linux kernel: [ 1284.893178] [] smp_apic_timer_interrupt+0x6c/0xa0
Nov 4 09:58:12 srv-linux kernel: [ 1284.893185] [] apic_timer_interrupt+0x13/0x20
Nov 4 10:21:17 srv-linux kernel: [ 2687.090713] 449274
pages non-shared
Nov 4 10:21:17 srv-linux kernel: [ 2687.132671] The following is only an harmless informational message.
Nov 4 10:21:17 srv-linux kernel: [ 2687.132677] Unless you get a _continuous_flood_ of these messages it means
Nov 4 10:21:17 srv-linux kernel: [ 2687.132680] everything is working fine. Allocations from irqs cannot be
Nov 4 10:21:17 srv-linux kernel: [ 2687.132683] perfectly reliable and the kernel is designed to handle that.
Nov 4 10:21:17 srv-linux kernel: [ 2687.132688] swapper: page allocation failure. order:0, mode:0x20, alloc_flags:0x30 pflags:0x10200042
Nov 4 10:21:17 srv-linux kernel: [ 2687.132696] Pid: 0, comm: swapper Tainted: G X 2.6.32.12-0.7-default #1
Nov 4 10:21:17 srv-linux kernel: [ 2687.132699] Call Trace:
Nov 4 10:21:17 srv-linux kernel: [ 2687.132719] [] dump_trace+0x6c/0x2d0
Nov 4 10:21:17 srv-linux kernel: [ 2687.132729] [] dump_stack+0x69/0x71
Nov 4 10:21:17 srv-linux kernel: [ 2687.132738] [] __alloc_pages_slowpath+0x3ed/0x550
Nov 4 10:21:17 srv-linux kernel: [ 2687.132746] [] __alloc_pages_nodemask+0x13a/0x140
Nov 4 10:21:17 srv-linux kernel: [ 2687.132754] [] kmem_getpages+0x56/0x170
Nov 4 10:21:17 srv-linux kernel: [ 2687.132761] [] fallback_alloc+0x166/0x230
Nov 4 10:21:17 srv-linux kernel: [ 2687.132768] [] kmem_cache_alloc+0x192/0x1b0
Nov 4 10:21:17 srv-linux kernel: [ 2687.132777] [] skb_clone+0x3a/0x80
Nov 4 10:21:17 srv-linux kernel: [ 2687.132788] [] packet_rcv_spkt+0x78/0x190 [af_packet]
Nov 4 10:21:17 srv-linux kernel: [ 2687.132807] [] netif_receive_skb+0x3a2/0x660
Nov 4 10:21:17 srv-linux kernel: [ 2687.132819] [] bnx2_rx_int+0x59d/0x820 [bnx2]
Nov 4 10:21:17 srv-linux kernel: [ 2687.132836] [] bnx2_poll_work+0x6f/0x90 [bnx2]
Nov 4 10:21:17 srv-linux kernel: [ 2687.132851] [] bnx2_poll+0x61/0x1cc [bnx2]
Nov 4 10:21:17 srv-linux kernel: [ 2687.132865] [] net_rx_action+0xe3/0x1a0
Nov 4 10:21:17 srv-linux kernel: [ 2687.132873] [] __do_softirq+0xbf/0x170
Nov 4 10:21:17 srv-linux kernel: [ 2687.132881] [] call_softirq+0x1c/0x30
Nov 4 10:21:17 srv-linux kernel: [ 2687.132887] [] do_softirq+0x4d/0x80
Nov 4 10:21:17 srv-linux kernel: [ 2687.132893] [] irq_exit+0x85/0x90
Nov 4 10:21:17 srv-linux kernel: [ 2687.132899] [] do_IRQ+0x6e/0xe0
Nov 4 10:21:17 srv-linux kernel: [ 2687.132907] [] ret_from_intr+0x0/0xa
Nov 4 10:21:17 srv-linux kernel: [ 2687.132915] [] mwait_idle+0x62/0x70
Nov 4 10:21:17 srv-linux kernel: [ 2687.132922] [] cpu_idle+0x5a/0xb0
Nov 4 10:21:17 srv-linux kernel: [ 2687.132926] Mem-Info:
Nov 4 10:21:17 srv-linux kernel: [ 2687.132929] Node 0 DMA per-cpu:
Nov 4 10:21:17 srv-linux kernel: [ 2687.132934] CPU 0: hi: 0, btch: 1 usd: 0
Nov 4 10:21:17 srv-linux kernel: [ 2687.132938] CPU 1: hi: 0, btch: 1 usd: 0
Nov 4 10:21:17 srv-linux kernel: [ 2687.132941] CPU 2: hi: 0, btch: 1 usd: 0
Nov 4 10:21:17 srv-linux kernel: [ 2687.132945] CPU 3: hi: 0, btch: 1 usd: 0
Nov 4 10:21:17 srv-linux kernel: [ 2687.132948] CPU 4: hi: 0, btch: 1 usd: 0
Nov 4 10:21:17 srv-linux kernel: [ 2687.132951] CPU 5: hi: 0, btch: 1 usd: 0
Nov 4 10:21:17 srv-linux kernel: [ 2687.132955] CPU 6: hi: 0, btch: 1 usd: 0
Nov 4 10:21:17 srv-linux kernel: [ 2687.132958] CPU 7: hi: 0, btch: 1 usd: 0
Nov 4 10:21:17 srv-linux kernel: [ 2687.132961] Node 0 DMA32 per-cpu:
Nov 4 10:21:17 srv-linux kernel: [ 2687.132966] CPU 0: hi: 186, btch: 31 usd: 32
Nov 4 10:21:17 srv-linux kernel: [ 2687.132969] CPU 1: hi: 186, btch: 31 usd: 90
Nov 4 10:21:17 srv-linux kernel: [ 2687.132973] CPU 2: hi: 186, btch: 31 usd: 140
Nov 4 10:21:17 srv-linux kernel: [ 2687.132976] CPU 3: hi: 186, btch: 31 usd: 166
Nov 4 10:21:17 srv-linux kernel: [ 2687.132979] CPU 4: hi: 186, btch: 31 usd: 14
Nov 4 10:21:17 srv-linux kernel: [ 2687.132983] CPU 5: hi: 186, btch: 31 usd: 119
Nov 4 10:21:17 srv-linux kernel: [ 2687.132986] CPU 6: hi: 186, btch: 31 usd: 45
Nov 4 10:21:17 srv-linux kernel: [ 2687.132989] CPU 7: hi: 186, btch: 31 usd: 191
Nov 4 10:21:17 srv-linux kernel: [ 2687.132992] Node 0 Normal per-cpu:
Nov 4 10:21:17 srv-linux kernel: [ 2687.132997] CPU 0: hi: 186, btch: 31 usd: 16
Nov 4 10:21:17 srv-linux kernel: [ 2687.133000] CPU 1: hi: 186, btch: 31 usd: 4
Nov 4 10:21:17 srv-linux kernel: [ 2687.133003] CPU 2: hi: 186, btch: 31 usd: 44
Nov 4 10:21:17 srv-linux kernel: [ 2687.133006] CPU 3: hi: 186, btch: 31 usd: 164
Nov 4 10:21:17 srv-linux kernel: [ 2687.133010] CPU 4: hi: 186, btch: 31 usd: 98
Nov 4 10:21:17 srv-linux kernel: [ 2687.133013] CPU 5: hi: 186, btch: 31 usd: 19
Nov 4 10:21:17 srv-linux kernel: [ 2687.133017] CPU 6: hi: 186, btch: 31 usd: 76
Nov 4 10:21:17 srv-linux kernel: [ 2687.133020] CPU 7: hi: 186, btch: 31 usd: 192
Nov 4 10:21:17 srv-linux kernel: [ 2687.133028] active_anon:90321 inactive_anon:23282 isolated_anon:0
Nov 4 10:21:17 srv-linux kernel: [ 2687.133029] active_file:56108 inactive_file:1701629 isolated_file:0
Nov 4 10:21:17 srv-linux kernel: [ 2687.133030] unevictable:5709 dirty:677685 writeback:2 unstable:0
Nov 4 10:21:17 srv-linux kernel: [ 2687.133032] free:9755 slab_reclaimable:66787 slab_unreclaimable:50212
Nov 4 10:21:17 srv-linux kernel: [ 2687.133033] mapped:13499 shmem:67 pagetables:6893 bounce:0
Nov 4 10:21:17 srv-linux kernel: [ 2687.133037] Node 0 DMA free:15692kB min:20kB low:24kB high:28kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15320kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
Nov 4 10:21:17 srv-linux kernel: [ 2687.133051] lowmem_reserve[]: 0 3251 8049 8049
Nov 4 10:21:17 srv-linux kernel: [ 2687.133061] Node 0 DMA32 free:20800kB min:4632kB low:5788kB high:6948kB active_anon:69388kB inactive_anon:16256kB active_file:33564kB inactive_file:2898248kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:3329568kB mlocked:0kB dirty:1095648kB writeback:4kB mapped:1264kB shmem:16kB slab_reclaimable:107716kB slab_unreclaimable:11264kB kernel_stack:776kB pagetables:5120kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
Nov 4 10:21:17 srv-linux kernel: [ 2687.133076] lowmem_reserve[]: 0 0 4797 4797
Nov 4 10:21:17 srv-linux kernel: [ 2687.133086] Node 0 Normal free:2528kB min:6836kB low:8544kB high:10252kB active_anon:291896kB inactive_anon:76872kB active_file:190868kB inactive_file:3908268kB unevictable:22836kB isolated(anon):0kB isolated(file):0kB present:4912640kB mlocked:22836kB dirty:1615092kB writeback:4kB mapped:52732kB shmem:252kB slab_reclaimable:159432kB slab_unreclaimable:189584kB kernel_stack:4312kB pagetables:22452kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
Nov 4 10:21:17 srv-linux kernel: [ 2687.133101] lowmem_reserve[]: 0 0 0 0
Nov 4 10:21:17 srv-linux kernel: [ 2687.133110] Node 0 DMA: 3*4kB 4*8kB 2*16kB 2*32kB 1*64kB 1*128kB 0*256kB 0*512kB 1*1024kB 1*2048kB 3*4096kB = 15692kB
Nov 4 10:21:17 srv-linux kernel: [ 2687.133135] Node 0 DMA32: 1087*4kB 1592*8kB 39*16kB 17*32kB 2*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 1*2048kB 0*4096kB = 20428kB
Nov 4 10:21:17 srv-linux kernel: [ 2687.133160] Node 0 Normal: 110*4kB 7*8kB 4*16kB 2*32kB 2*64kB 2*128kB 0*256kB 0*512kB 1*1024kB 0*2048kB 0*4096kB = 2032kB
Nov 4 10:21:17 srv-linux kernel: [ 2687.133185] 1759923 total pagecache pages
Nov 4 10:21:17 srv-linux kernel: [ 2687.133188] 0 pages in swap cache
Nov 4 10:21:17 srv-linux kernel: [ 2687.133191] Swap cache stats: add 0, delete 0, find 0/0
Nov 4 10:21:17 srv-linux kernel: [ 2687.133194] Free swap = 2104432kB
Nov 4 10:21:17 srv-linux kernel: [ 2687.133197] Total swap = 2104432kB
Nov 4 10:21:17 srv-linux kernel: [ 2687.136597] 2097152 pages RAM
Nov 4 10:21:17 srv-linux kernel: [ 2687.136597] 49948 pages reserved
Nov 4 10:21:17 srv-linux kernel: [ 2687.136597] 1656353 pages shared
Nov 4 10:21:17 srv-linux kernel: [ 2687.136597] 449267 pages non-shared
Nov 4 11:07:27 srv-linux kernel: [ 1293.436013] The following is only an harmless informational message.
Nov 4 11:07:27 srv-linux kernel: [ 1293.436018] Unless you get a _continuous_flood_ of these messages it means
Nov 4 11:07:27 srv-linux kernel: [ 1293.436020] everything is working fine. Allocations from irqs cannot be
Nov 4 11:07:27 srv-linux kernel: [ 1293.436022] perfectly reliable and the kernel is designed to handle that.
Nov 4 11:07:27 srv-linux kernel: [ 1293.436026] swapper: page allocation failure.
1293.481264] CPU 7: hi: 186, btch: 31 usd: 142 Nov 4 11:07:28 srv-linux kernel: [ 1293.481269] Node 0 Normal per-cpu: Nov 4 11:07:28 srv-linux kernel: [ 1293.481278] CPU 0: hi: 186, btch: 31 usd: 46 Nov 4 11:07:28 srv-linux kernel: [ 1293.481284] CPU 1: hi: 186, btch: 31 usd: 158 Nov 4 11:07:28 srv-linux kernel: [ 1293.481289] CPU 2: hi: 186, btch: 31 usd: 151 Nov 4 11:07:28 srv-linux kernel: [ 1293.481295] CPU 3: hi: 186, btch: 31 usd: 39 Nov 4 11:07:28 srv-linux kernel: [ 1293.481300] CPU 4: hi: 186, btch: 31 usd: 114 Nov 4 11:07:28 srv-linux kernel: [ 1293.481306] CPU 5: hi: 186, btch: 31 usd: 59 Nov 4 11:07:28 srv-linux kernel: [ 1293.481311] CPU 6: hi: 186, btch: 31 usd: 124 ov 4 11:07:28 srv-linux kernel: [ 1293.481316] CPU 7: hi: 186, btch: 31 usd: 173 Nov 4 11:07:29 srv-linux kernel: [ 1293.481325] active_anon:121650 inactive_anon:21539 isolated_anon:0 Nov 4 11:07:29 srv-linux kernel: [ 1293.481327] active_file:65104 inactive_file:1679351 isolated_file:0 Nov 4 11:07:29 srv-linux kernel: [ 1293.481328] unevictable:5709 dirty:474043 writeback:6102 unstable:0 Nov 4 11:07:29 srv-linux kernel: [ 1293.481329] free:9712 slab_reclaimable:51092 slab_unreclaimable:49524 Nov 4 11:07:29 srv-linux kernel: [ 1293.481330] mapped:13595 shmem:109 pagetables:6308 bounce:0 Nov 4 11:07:29 srv-linux kernel: [ 1293.481336] Node 0 DMA free:15692kB min:20kB low:24kB high:28kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15320kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? 
yes Nov 4 11:07:29 srv-linux kernel: [ 1293.481354] lowmem_reserve[]: 0 3251 8049 8049 Nov 4 11:07:29 srv-linux kernel: [ 1293.481377] Node 0 DMA32 free:20696kB min:4632kB low:5788kB high:6948kB active_anon:79808kB inactive_anon:17188kB active_file:55724kB inactive_file:2866240kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:3329568kB mlocked:0kB dirty:1287108kB writeback:8764kB mapped:168kB shmem:76kB slab_reclaimable:108972kB slab_unreclaimable:12288kB kernel_stack:824kB pagetables:6980kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no Nov 4 11:07:29 srv-linux kernel: [ 1293.481396] lowmem_reserve[]: 0 0 4797 4797 Nov 4 11:07:29 srv-linux kernel: [ 1293.481419] Node 0 Normal free:2460kB min:6836kB low:8544kB high:10252kB active_anon:406792kB inactive_anon:68968kB active_file:204692kB inactive_file:3851164kB unevictable:22836kB isolated(anon):0kB isolated(file):0kB present:4912640kB mlocked:22836kB dirty:609064kB writeback:15644kB mapped:54212kB shmem:360kB slab_reclaimable:95396kB slab_unreclaimable:185808kB kernel_stack:3912kB pagetables:18252kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? 
no Nov 4 11:07:29 srv-linux kernel: [ 1293.481438] lowmem_reserve[]: 0 0 0 0 Nov 4 11:07:29 srv-linux kernel: [ 1293.481462] Node 0 DMA: 3*4kB 4*8kB 2*16kB 2*32kB 1*64kB 1*128kB 0*256kB 0*512kB 1*1024kB 1*2048kB 3*4096kB = 15692kB Nov 4 11:07:29 srv-linux kernel: [ 1293.481518] Node 0 DMA32: 53*4kB 586*8kB 309*16kB 50*32kB 9*64kB 5*128kB 2*256kB 1*512kB 0*1024kB 1*2048kB 1*4096kB = 19828kB Nov 4 11:07:29 srv-linux kernel: [ 1293.481574] Node 0 Normal: 8*4kB 12*8kB 1*16kB 3*32kB 1*64kB 0*128kB 0*256kB 1*512kB 1*1024kB 0*2048kB 0*4096kB = 1840kB Nov 4 11:07:29 srv-linux kernel: [ 1293.481630] 1746592 total pagecache pages Nov 4 11:07:29 srv-linux kernel: [ 1293.481635] 0 pages in swap cache Nov 4 11:07:29 srv-linux kernel: [ 1293.481641] Swap cache stats: add 0, delete 0, find 0/0 Nov 4 11:07:29 srv-linux kernel: [ 1293.481646] Free swap = 2104432kB Nov 4 11:07:29 srv-linux kernel: [ 1293.481651] Total swap = 2104432kB Nov 4 11:07:29 srv-linux kernel: [ 1293.484802] 2097152 pages RAM Nov 4 11:07:29 srv-linux kernel: [ 1293.484802] 49948 pages reserved Nov 4 11:07:29 srv-linux kernel: [ 1293.484802] 1079742 pages shared Nov 4 11:07:29 srv-linux kernel: [ 1293.484802] 1013515 pages non-shared From javier.augusto at gmx.net Thu Nov 4 13:42:43 2010 From: javier.augusto at gmx.net (Javier Augusto) Date: Thu, 4 Nov 2010 13:42:43 -0200 Subject: [MASOCH-L] Problemas Dell PowerEdge 1900 com SLES11 SP1 In-Reply-To: <4CD2C56C.1020108@bhz.jamef.com.br> References: <4CD2C56C.1020108@bhz.jamef.com.br> Message-ID: One question... are you running XEN? From rejaine at bhz.jamef.com.br Thu Nov 4 14:19:21 2010 From: rejaine at bhz.jamef.com.br (Rejaine Monteiro) Date: Thu, 04 Nov 2010 14:19:21 -0200 Subject: [MASOCH-L] Problemas Dell PowerEdge 1900 com SLES11 SP1 In-Reply-To: References: <4CD2C56C.1020108@bhz.jamef.com.br> Message-ID: <4CD2DD09.90108@bhz.jamef.com.br> No... no virtualization at all. Just the services mentioned earlier. 
On 04-11-2010 13:42, Javier Augusto wrote: > One question... are you running XEN? > __ > masoch-l list > https://eng.registro.br/mailman/listinfo/masoch-l > From neto-listas at sagem-orga.com.br Thu Nov 4 14:36:01 2010 From: neto-listas at sagem-orga.com.br (=?ISO-8859-1?Q?Jos=E9_Augusto_dos_Santos_Neto?=) Date: Thu, 04 Nov 2010 14:36:01 -0200 Subject: [MASOCH-L] Problemas Dell PowerEdge 1900 com SLES11 SP1 In-Reply-To: <4CD2DD09.90108@bhz.jamef.com.br> References: <4CD2C56C.1020108@bhz.jamef.com.br> <4CD2DD09.90108@bhz.jamef.com.br> Message-ID: <4CD2E0F1.8090508@sagem-orga.com.br> Hello, I had a problem very similar to yours, though with slightly different hardware and OS. In my case I have an HP Compaq Proliant ML350 G5 with 3 GB of RAM (at the time), disks in RAID 0, and SLES 10 SP2. This server runs Zimbra, and at first I thought Zimbra itself was the problem. Out of nowhere the RAM would fill up, then usage would spill over into swap until that was exhausted too, and finally the kernel itself would start killing tasks as a form of self-defense. I didn't even know about that mechanism, but when these problems happened and I saw the message in syslog, I searched Google and found its description. Unfortunately, I no longer remember its name today. Well, to fix the problem I immediately increased the swap and raised the RAM to 9 GB, but to my surprise that did not help at all. As the situation got worse I gradually removed some of the features I had deployed on Zimbra, such as Postgrey, Apolicyd and others, but even so it kept hanging/denying service. The problems were so frequent that I decided to upgrade the kernel, which solved 99% of them. I say 99% because at least once a month I still have to reboot the server after it stops responding. Today my kernel is: Linux mail 2.6.16.60-0.42.10-smp #1 SMP Tue Apr 27 05:11:27 UTC 2010 x86_64 x86_64 x86_64 GNU/Linux The installed package is: kernel-smp-2.6.16.60-0.42.10 I hope this helps in some way, Regards, Neto. 
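[The buddy-allocator lines in the dumps above (e.g. "Node 0 Normal: 8*4kB 12*8kB ... = 1840kB") can be decoded to see how much free memory each zone has left and how fragmented it is. A minimal sketch in Python, using the Node 0 Normal line copied verbatim from the log; the helper name `parse_buddy_line` is illustrative, not from any tool mentioned in the thread:]

```python
import re

def parse_buddy_line(line):
    """Decode a kernel 'Node N <zone>: a*4kB b*8kB ... = totalkB' report
    into a {block_size_kB: count} dict plus the summed free kB."""
    counts = {int(size): int(n) for n, size in re.findall(r"(\d+)\*(\d+)kB", line)}
    total_kb = sum(size * n for size, n in counts.items())
    return counts, total_kb

# Node 0 Normal line copied verbatim from the dump above:
line = ("Node 0 Normal: 8*4kB 12*8kB 1*16kB 3*32kB 1*64kB 0*128kB "
        "0*256kB 1*512kB 1*1024kB 0*2048kB 0*4096kB = 1840kB")
counts, total_kb = parse_buddy_line(line)
print(total_kb)  # 1840, matching the kernel's own "= 1840kB" total
print(max(size for size, n in counts.items() if n > 0))  # 1024: largest free block
```

[The buddy lists account for only 1840kB free in the Normal zone, while the same dump shows free:2460kB against min:6836kB, i.e. the zone is below its watermark with most RAM tied up in dirty page cache. That is exactly the regime in which order-0 atomic allocations (mode:0x20, GFP_ATOMIC) from softirq context fail, as in these traces. A commonly suggested mitigation for write-heavy servers, offered here only as an assumption about this case, is raising vm.min_free_kbytes and lowering vm.dirty_ratio.]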
From fernando at bluesolutions.com.br Thu Nov 4 16:20:25 2010 From: fernando at bluesolutions.com.br (Fernando Ulisses dos Santos) Date: Thu, 04 Nov 2010 16:20:25 -0200 Subject: [MASOCH-L] Problemas Dell PowerEdge 1900 com SLES11 SP1 In-Reply-To: <4CD2C56C.1020108@bhz.jamef.com.br> References: <4CD2C56C.1020108@bhz.jamef.com.br> Message-ID: <4CD2F969.5070208@bluesolutions.com.br> Rejaine, I have seen load climb like this when network connections become unavailable, for example an NFS share that is mounted and loses its connection for some reason (say, a saturated network): processes start hanging, with no disk I/O and no network I/O, and the most common fix is a reboot. In such cases it is normal to find many zombie processes. Please send me the output of the following commands so we can look for signs of this and of other bottlenecks: cat /proc/diskstats cat /proc/interrupts cat /proc/sys/fs/file-nr wc -l /proc/net/tcp wc -l /proc/net/udp wc -l /proc/net/raw wc -l /proc/net/unix Fernando Ulisses dos Santos Blue Solutions - Soluções em TI 19-3321-9068 / 19-3551-3898 On 04-11-2010 12:38, Rejaine Monteiro wrote: > Folks, > > I'm here to ask for help... > I have a serious performance problem on a server running > SLES11(SP1), installed on a PowerEdge 1900 (configuration below). > > What happens is the following: > > At our sites we used to have several far less powerful servers, > serving the same number of users and the same services, but > running OpenSuSE 10.2. Everything worked perfectly well until then, > but following our plan to refresh the machine park, > we chose to upgrade the hardware and OS (which was quite > outdated) at those sites, and that is when the problems began. > > Initially we did the replacement at only two smaller sites > with fewer users, and we had already noticed a certain > increase in CPU load. 
We upgraded to SLES11 with SP1 and things seem > to have improved a bit. > > However, at one particular site, with about 300 users, > the server's performance is simply dreadful. > The CPU load climbs so high that at times I can barely log in to > look at the syslog, and I often have to shut down several services or > reboot to get back to normal. > > We have already done a lot of kernel tuning and plenty of other tuning > of the various applications the server runs (especially > the most important services such as drbd, heartbeat, ldap, nfsserver, > etc.). Nothing seems to have any effect on the problem, no noticeable > improvement even after dozens of adjustments. > > Since we have two identical servers (one in failover mode, for > HA), we tried bringing all the services up on the backup server, to > rule out disk and/or hardware problems on the primary machine, but > the problems continued on the other server as well. > > When the load is very high, syslog starts generating several dumps in > /var/log/messages (shown below). > > Apparently there are no I/O problems (we even added a RAID to > improve disk performance and made several adjustments, but nothing > solved it or had any effect). > What we notice is that there is no correlation between iowait and CPU load, > that is, when the load is high, the disk shows no overload. It seems to be > something to do with memory, but the old server coped with 4G on > OpenSuSE 10.2, while this one, even "beefier" and with twice the > memory, does not. > > Honestly, we are going to try a downgrade of the OS, because inferior hardware, > running basically the same services for the same number of > users, worked very well with OpenSuSE 10.2. > > Below is a description of the hardware, software and services used on the > server, and a bit further down some of the messages that show up in the syslog. > > If anyone can help with any tip at all, I will be very grateful > (any help is welcome). > > Server> Dell PowerEdge 1900 > 2 x Intel(R) Xeon(R) CPU E5310 1.60GHz DualCore > 8G RAM > 4 SAS 15000rpm disks > > Software> Suse Linux Enterprise Server 11 - Service Pack 1 > Kernel> Linux srv-linux 2.6.32.12-0.7-default #1 SMP 2010-05-20 11:14:20 > +0200 x86_64 x86_64 x86_64 GNU/Linux > > Basic services running on this server: linux-ha > (drbd+heartbeat), openldap, qmail-ldap, samba-ldap, nfsserver, dhcp, > named, squid and jabberd > Number of users: 300 > Linux users have their HOMEDIR mounted via NFS > Windows users use SAMBA for group file shares and/or > profile backups > > top - 10:33:37 up 57 min, 19 users, load average: 40.44, 49.96, 42.26 > Tasks: 510 total, 1 running, 509 sleeping, 0 stopped, 0 zombie > Cpu(s): 1.3%us, 1.5%sy, 0.0%ni, 94.2%id, 1.7%wa, 0.0%hi, 1.4%si, > 0.0%st > Mem: 8188816k total, 8137392k used, 51424k free, 57116k buffers > Swap: 2104432k total, 0k used, 2104432k free, 7089980k cached > > PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND > 9901 qscand 20 0 207m 164m 2032 S 0 2.1 0:04.63 clamd > 4074 root 20 0 358m 57m 1992 S 0 0.7 0:03.03 nscd > 9016 named 20 0 320m 54m 2464 S 0 0.7 0:17.37 named > 22761 root 20 0 115m 50m 4604 S 0 0.6 0:02.30 nxagent > 23477 root 20 0 597m 33m 21m S 0 0.4 0:01.20 plasma-desktop > 23357 root 20 0 453m 30m 23m S 0 0.4 0:00.51 kwin > 9028 ldap 20 0 1930m 26m 4564 S 0 0.3 1:36.51 slapd > 9248 root 20 0 324m 24m 17m S 0 0.3 0:03.92 kdm_greet > 24164 root 20 0 486m 23m 16m S 0 0.3 0:00.35 krunner > 10870 root 20 0 24548 20m 1168 S 2 0.3 0:22.59 jabberd > 9014 root 20 0 120m 
19m 5328 S 0 0.2 0:03.04 Xorg > 24283 root 20 0 173m 19m 14m S 0 0.2 0:00.18 kdialog > 22940 root 20 0 290m 18m 12m S 0 0.2 0:00.22 kded4 > 24275 root 20 0 191m 18m 13m S 0 0.2 0:00.22 kupdateapplet > 24270 root 20 0 237m 16m 10m S 0 0.2 0:00.11 kmix > 4061 root -2 0 92828 16m 8476 S 0 0.2 0:01.18 heartbeat > 24274 root 20 0 284m 15m 9.9m S 0 0.2 0:00.10 klipper > 23299 root 20 0 309m 14m 9844 S 0 0.2 0:00.08 ksmserver > 22899 root 20 0 201m 14m 10m S 0 0.2 0:00.10 kdeinit4 > 23743 root 20 0 228m 12m 7856 S 0 0.2 0:00.10 kglobalaccel > 24167 root 20 0 235m 12m 7760 S 0 0.2 0:00.04 nepomukserver > > # /usr/bin/uptime > 11:04am up 0:18, 7 users, load average: 27.52, 18.60, 10.27 > > # /usr/bin/vmstat 1 4 > procs -----------memory---------- ---swap-- -----io---- -system-- > -----cpu------ > r b swpd free buff cache si so bi bo in cs us sy > id wa st > 2 0 0 50856 19300 7196808 0 0 507 378 1167 1175 3 3 > 88 6 0 > 0 0 0 41332 19300 7200960 0 0 176 1279 14284 10519 2 > 2 93 2 0 > 1 0 0 43184 19184 7181520 0 0 0 1074 7191 1856 0 1 > 99 0 0 > 0 0 0 43316 19128 7179868 0 0 0 1189 2237 2340 1 0 > 99 0 0 > > # /usr/bin/vmstat 1 4 > procs -----------memory---------- ---swap-- -----io---- -system-- > -----cpu------ > r b swpd free buff cache si so bi bo in cs us sy > id wa st > 0 1 0 47276 19048 7177788 0 0 498 384 1166 1171 3 3 > 88 6 0 > 1 0 0 46128 19056 7167016 0 0 36 970 7530 4158 2 1 > 95 2 0 > 0 1 0 46452 19064 7163616 0 0 20 798 1411 1749 2 1 > 97 0 0 > 0 0 0 46868 19064 7162624 0 0 56 751 7079 2169 1 1 > 97 0 0 > > Nov 4 09:57:53 srv-linux kernel: [ 1284.893013] The following is only > an harmless informational message. > Nov 4 09:57:53 srv-linux kernel: [ 1284.893019] Unless you get a > _continuous_flood_ of these messages it means > Nov 4 09:57:53 srv-linux kernel: [ 1284.893021] everything is working > fine. Allocations from irqs cannot be > Nov 4 09:57:53 srv-linux kernel: [ 1284.893023] perfectly reliable and > the kernel is designed to handle that. 
> Nov 4 09:57:53 srv-linux kernel: [ 1284.893028] swapper: page > allocation failure. order:0, mode:0x20, alloc_flags:0x30 pflags:0x10200042 > Nov 4 09:57:53 srv-linux kernel: [ 1284.893032] Pid: 0, comm: swapper > Tainted: G X 2.6.32.12-0.7-default #1 > Nov 4 09:57:53 srv-linux kernel: [ 1284.893035] Call Trace: > Nov 4 09:57:53 srv-linux kernel: [ 1284.893054] [] > dump_trace+0x6c/0x2d0 > Nov 4 09:57:53 srv-linux kernel: [ 1284.893063] [] > dump_stack+0x69/0x71 > Nov 4 09:57:53 srv-linux kernel: [ 1284.893070] [] > __alloc_pages_slowpath+0x3ed/0x550 > Nov 4 09:57:53 srv-linux kernel: [ 1284.893077] [] > __alloc_pages_nodemask+0x13a/0x140 > Nov 4 09:57:53 srv-linux kernel: [ 1284.893084] [] > kmem_getpages+0x56/0x170 > Nov 4 09:57:53 srv-linux kernel: [ 1284.893089] [] > fallback_alloc+0x166/0x230 > Nov 4 09:58:12 srv-linux kernel: [ 1284.893095] [] > kmem_cache_alloc+0x192/0x1b0 > Nov 4 09:58:12 srv-linux kernel: [ 1284.893102] [] > skb_clone+0x3a/0x80 > Nov 4 09:58:12 srv-linux kernel: [ 1284.893109] [] > dev_queue_xmit_nit+0x82/0x170 > Nov 4 09:58:12 srv-linux kernel: [ 1284.893114] [] > dev_hard_start_xmit+0x4a/0x210 > Nov 4 09:58:12 srv-linux kernel: [ 1284.893120] [] > sch_direct_xmit+0x16e/0x1e0 > Nov 4 09:58:12 srv-linux kernel: [ 1284.893126] [] > dev_queue_xmit+0x366/0x4d0 > Nov 4 09:58:12 srv-linux kernel: [ 1284.893132] [] > ip_queue_xmit+0x210/0x420 > Nov 4 09:58:12 srv-linux kernel: [ 1284.893139] [] > tcp_transmit_skb+0x4cb/0x760 > Nov 4 09:58:12 srv-linux kernel: [ 1284.893145] [] > tcp_delack_timer+0x14f/0x2a0 > Nov 4 09:58:12 srv-linux kernel: [ 1284.893151] [] > run_timer_softirq+0x174/0x240 > Nov 4 09:58:12 srv-linux kernel: [ 1284.893157] [] > __do_softirq+0xbf/0x170 > Nov 4 09:58:12 srv-linux kernel: [ 1284.893163] [] > call_softirq+0x1c/0x30 > Nov 4 09:58:12 srv-linux kernel: [ 1284.893168] [] > do_softirq+0x4d/0x80 > Nov 4 09:58:12 srv-linux kernel: [ 1284.893173] [] > irq_exit+0x85/0x90 > Nov 4 09:58:12 srv-linux kernel: [ 1284.893178] [] 
> smp_apic_timer_interrupt+0x6c/0xa0 > Nov 4 09:58:12 srv-linux kernel: [ 1284.893185] [] > apic_timer_interrupt+0x13/0x20 > Nov 4 10:21:17 srv-linux kernel: [ 2687.090713] 449274 pages non-shared > Nov 4 10:21:17 srv-linux kernel: [ 2687.132671] The following is only > an harmless informational message. > Nov 4 10:21:17 srv-linux kernel: [ 2687.132677] Unless you get a > _continuous_flood_ of these messages it means > Nov 4 10:21:17 srv-linux kernel: [ 2687.132680] everything is working > fine. Allocations from irqs cannot be > Nov 4 10:21:17 srv-linux kernel: [ 2687.132683] perfectly reliable and > the kernel is designed to handle that. > Nov 4 10:21:17 srv-linux kernel: [ 2687.132688] swapper: page > allocation failure. order:0, mode:0x20, alloc_flags:0x30 pflags:0x10200042 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132696] Pid: 0, comm: swapper > Tainted: G X 2.6.32.12-0.7-default #1 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132699] Call Trace: > Nov 4 10:21:17 srv-linux kernel: [ 2687.132719] [] > dump_trace+0x6c/0x2d0 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132729] [] > dump_stack+0x69/0x71 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132738] [] > __alloc_pages_slowpath+0x3ed/0x550 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132746] [] > __alloc_pages_nodemask+0x13a/0x140 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132754] [] > kmem_getpages+0x56/0x170 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132761] [] > fallback_alloc+0x166/0x230 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132768] [] > kmem_cache_alloc+0x192/0x1b0 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132777] [] > skb_clone+0x3a/0x80 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132788] [] > packet_rcv_spkt+0x78/0x190 [af_packet] > Nov 4 10:21:17 srv-linux kernel: [ 2687.132807] [] > netif_receive_skb+0x3a2/0x660 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132819] [] > bnx2_rx_int+0x59d/0x820 [bnx2] > Nov 4 10:21:17 srv-linux kernel: [ 2687.132836] [] > bnx2_poll_work+0x6f/0x90 [bnx2] > Nov 4 10:21:17 srv-linux 
kernel: [ 2687.132851] [] > bnx2_poll+0x61/0x1cc [bnx2] > Nov 4 10:21:17 srv-linux kernel: [ 2687.132865] [] > net_rx_action+0xe3/0x1a0 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132873] [] > __do_softirq+0xbf/0x170 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132881] [] > call_softirq+0x1c/0x30 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132887] [] > do_softirq+0x4d/0x80 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132893] [] > irq_exit+0x85/0x90 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132899] [] > do_IRQ+0x6e/0xe0 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132907] [] > ret_from_intr+0x0/0xa > Nov 4 10:21:17 srv-linux kernel: [ 2687.132915] [] > mwait_idle+0x62/0x70 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132922] [] > cpu_idle+0x5a/0xb0 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132926] Mem-Info: > Nov 4 10:21:17 srv-linux kernel: [ 2687.132929] Node 0 DMA per-cpu: > Nov 4 10:21:17 srv-linux kernel: [ 2687.132934] CPU 0: hi: 0, > btch: 1 usd: 0 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132938] CPU 1: hi: 0, > btch: 1 usd: 0 > ov 4 10:21:17 srv-linux kernel: [ 2687.132938] CPU 1: hi: 0, > btch: 1 usd: 0 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132941] CPU 2: hi: 0, > btch: 1 usd: 0 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132945] CPU 3: hi: 0, > btch: 1 usd: 0 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132948] CPU 4: hi: 0, > btch: 1 usd: 0 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132951] CPU 5: hi: 0, > btch: 1 usd: 0 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132955] CPU 6: hi: 0, > btch: 1 usd: 0 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132958] CPU 7: hi: 0, > btch: 1 usd: 0 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132961] Node 0 DMA32 per-cpu: > Nov 4 10:21:17 srv-linux kernel: [ 2687.132966] CPU 0: hi: 186, > btch: 31 usd: 32 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132969] CPU 1: hi: 186, > btch: 31 usd: 90 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132973] CPU 2: hi: 186, > btch: 31 usd: 140 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132976] CPU 3: hi: 186, > 
btch: 31 usd: 166 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132979] CPU 4: hi: 186, > btch: 31 usd: 14 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132983] CPU 5: hi: 186, > btch: 31 usd: 119 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132986] CPU 6: hi: 186, > btch: 31 usd: 45 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132989] CPU 7: hi: 186, > btch: 31 usd: 191 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132992] Node 0 Normal per-cpu: > Nov 4 10:21:17 srv-linux kernel: [ 2687.132997] CPU 0: hi: 186, > btch: 31 usd: 16 > Nov 4 10:21:17 srv-linux kernel: [ 2687.133000] CPU 1: hi: 186, > btch: 31 usd: 4 > Nov 4 10:21:17 srv-linux kernel: [ 2687.133003] CPU 2: hi: 186, > btch: 31 usd: 44 > Nov 4 10:21:17 srv-linux kernel: [ 2687.133006] CPU 3: hi: 186, > btch: 31 usd: 164 > Nov 4 10:21:17 srv-linux kernel: [ 2687.133010] CPU 4: hi: 186, > btch: 31 usd: 98 > Nov 4 10:21:17 srv-linux kernel: [ 2687.133013] CPU 5: hi: 186, > btch: 31 usd: 19 > Nov 4 10:21:17 srv-linux kernel: [ 2687.133017] CPU 6: hi: 186, > btch: 31 usd: 76 > Nov 4 10:21:17 srv-linux kernel: [ 2687.133020] CPU 7: hi: 186, > btch: 31 usd: 192 > Nov 4 10:21:17 srv-linux kernel: [ 2687.133028] active_anon:90321 > inactive_anon:23282 isolated_anon:0 > Nov 4 10:21:17 srv-linux kernel: [ 2687.133029] active_file:56108 > inactive_file:1701629 isolated_file:0 > Nov 4 10:21:17 srv-linux kernel: [ 2687.133030] unevictable:5709 > dirty:677685 writeback:2 unstable:0 > Nov 4 10:21:17 srv-linux kernel: [ 2687.133032] free:9755 > slab_reclaimable:66787 slab_unreclaimable:50212 > Nov 4 10:21:17 srv-linux kernel: [ 2687.133033] mapped:13499 shmem:67 > pagetables:6893 bounce:0 > Nov 4 10:21:17 srv-linux kernel: [ 2687.133037] Node 0 DMA free:15692kB > min:20kB low:24kB high:28kB active_anon:0kB inactive_anon:0kB > active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB > isolated(file):0kB present:15320kB mlocked:0kB dirty:0kB writeback:0kB > mapped:0kB shmem:0kB slab_reclaimable:0kB slab_unreclaimable:0kB > 
kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB > writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no > Nov 4 10:21:17 srv-linux kernel: [ 2687.133051] lowmem_reserve[]: 0 > 3251 8049 8049 > Nov 4 10:21:17 srv-linux kernel: [ 2687.133061] Node 0 DMA32 > free:20800kB min:4632kB low:5788kB high:6948kB active_anon:69388kB > inactive_anon:16256kB active_file:33564kB inactive_file:2898248kB > unevictable:0kB isolated(anon):0kB isolated(file):0kB present:3329568kB > mlocked:0kB dirty:1095648kB writeback:4kB mapped:1264kB shmem:16kB > slab_reclaimable:107716kB slab_unreclaimable:11264kB kernel_stack:776kB > pagetables:5120kB unstable:0kB bounce:0kB writeback_tmp:0kB > pages_scanned:0 all_unreclaimable? no > Nov 4 10:21:17 srv-linux kernel: [ 2687.133076] lowmem_reserve[]: 0 0 > 4797 4797 > Nov 4 10:21:17 srv-linux kernel: [ 2687.133086] Node 0 Normal > free:2528kB min:6836kB low:8544kB high:10252kB active_anon:291896kB > inactive_anon:76872kB active_file:190868kB inactive_file:3908268kB > unevictable:22836kB isolated(anon):0kB isolated(file):0kB > present:4912640kB mlocked:22836kB dirty:1615092kB writeback:4kB > mapped:52732kB shmem:252kB slab_reclaimable:159432kB > slab_unreclaimable:189584kB kernel_stack:4312kB pagetables:22452kB > unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 > all_unreclaimable? 
no > Nov 4 10:21:17 srv-linux kernel: [ 2687.133101] lowmem_reserve[]: 0 0 0 0 > Nov 4 10:21:17 srv-linux kernel: [ 2687.133110] Node 0 DMA: 3*4kB 4*8kB > 2*16kB 2*32kB 1*64kB 1*128kB 0*256kB 0*512kB 1*1024kB 1*2048kB 3*4096kB > = 15692kB > Nov 4 10:21:17 srv-linux kernel: [ 2687.133135] Node 0 DMA32: 1087*4kB > 1592*8kB 39*16kB 17*32kB 2*64kB 0*128kB 0*256kB 0*512kB 0*1024kB > 1*2048kB 0*4096kB = 20428kB > Nov 4 10:21:17 srv-linux kernel: [ 2687.133160] Node 0 Normal: 110*4kB > 7*8kB 4*16kB 2*32kB 2*64kB 2*128kB 0*256kB 0*512kB 1*1024kB 0*2048kB > 0*4096kB = 2032kB > Nov 4 10:21:17 srv-linux kernel: [ 2687.133185] 1759923 total pagecache > pages > Nov 4 10:21:17 srv-linux kernel: [ 2687.133188] 0 pages in swap cache > Nov 4 10:21:17 srv-linux kernel: [ 2687.133191] Swap cache stats: add > 0, delete 0, find 0/0 > Nov 4 10:21:17 srv-linux kernel: [ 2687.133194] Free swap = 2104432kB > Nov 4 10:21:17 srv-linux kernel: [ 2687.133197] Total swap = 2104432kB > Nov 4 10:21:17 srv-linux kernel: [ 2687.136597] 2097152 pages RAM > Nov 4 10:21:17 srv-linux kernel: [ 2687.136597] 49948 pages reserved > Nov 4 10:21:17 srv-linux kernel: [ 2687.136597] 1656353 pages shared > Nov 4 10:21:17 srv-linux kernel: [ 2687.136597] 449267 pages non-shared > Nov 4 11:07:27 srv-linux kernel: [ 1293.436013] The following is only > an harmless informational message. > Nov 4 11:07:27 srv-linux kernel: [ 1293.436018] Unless you get a > _continuous_flood_ of these messages it means > Nov 4 11:07:27 srv-linux kernel: [ 1293.436020] everything is working > fine. Allocations from irqs cannot be > Nov 4 11:07:27 srv-linux kernel: [ 1293.436022] perfectly reliable and > the kernel is designed to handle that. > Nov 4 11:07:27 srv-linux kernel: [ 1293.436026] swapper: page > allocation failure. 
order:0, mode:0x20, alloc_flags:0x30 pflags:0x10200042 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436031] Pid: 0, comm: swapper > Tainted: G X 2.6.32.12-0.7-default #1 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436034] Call Trace: > Nov 4 11:07:27 srv-linux kernel: [ 1293.436052] [] > dump_trace+0x6c/0x2d0 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436061] [] > dump_stack+0x69/0x71 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436069] [] > __alloc_pages_slowpath+0x3ed/0x550 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436075] [] > __alloc_pages_nodemask+0x13a/0x140 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436083] [] > kmem_getpages+0x56/0x170 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436088] [] > fallback_alloc+0x166/0x230 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436094] [] > kmem_cache_alloc+0x192/0x1b0 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436101] [] > skb_clone+0x3a/0x80 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436108] [] > dev_queue_xmit_nit+0x82/0x170 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436113] [] > dev_hard_start_xmit+0x4a/0x210 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436119] [] > sch_direct_xmit+0x16e/0x1e0 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436125] [] > dev_queue_xmit+0x366/0x4d0 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436131] [] > ip_queue_xmit+0x210/0x420 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436138] [] > tcp_transmit_skb+0x4cb/0x760 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436144] [] > tcp_delack_timer+0x14f/0x2a0 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436150] [] > run_timer_softirq+0x174/0x240 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436156] [] > __do_softirq+0xbf/0x170 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436162] [] > call_softirq+0x1c/0x30 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436167] [] > do_softirq+0x4d/0x80 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436171] [] > irq_exit+0x85/0x90 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436177] [] > smp_apic_timer_interrupt+0x6c/0xa0 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436184] 
[] > apic_timer_interrupt+0x13/0x20 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436191] [] > mwait_idle+0x62/0x70 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436196] [] > cpu_idle+0x5a/0xb0 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436200] Mem-Info: > Nov 4 11:07:27 srv-linux kernel: [ 1293.436202] Node 0 DMA per-cpu: > Nov 4 11:07:27 srv-linux kernel: [ 1293.436205] CPU 0: hi: 0, > btch: 1 usd: 0 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436208] CPU 1: hi: 0, > btch: 1 usd: 0 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436210] CPU 2: hi: 0, > btch: 1 usd: 0 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436213] CPU 3: hi: 0, > btch: 1 usd: 0 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436215] CPU 4: hi: 0, > btch: 1 usd: 0 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436217] CPU 5: hi: 0, > btch: 1 usd: 0 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436220] CPU 6: hi: 0, > btch: 1 usd: 0 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436222] CPU 7: hi: 0, > btch: 1 usd: 0 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436224] Node 0 DMA32 per-cpu: > Nov 4 11:07:27 srv-linux kernel: [ 1293.436227] CPU 0: hi: 186, > btch: 31 usd: 30 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436229] CPU 1: hi: 186, > btch: 31 usd: 186 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436232] CPU 2: hi: 186, > btch: 31 usd: 147 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436234] CPU 3: hi: 186, > btch: 31 usd: 174 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436236] CPU 4: hi: 186, > btch: 31 usd: 92 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436239] CPU 5: hi: 186, > btch: 31 usd: 49 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436241] CPU 6: hi: 186, > btch: 31 usd: 141 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436244] CPU 7: hi: 186, > btch: 31 usd: 142 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436245] Node 0 Normal per-cpu: > Nov 4 11:07:27 srv-linux kernel: [ 1293.436248] CPU 0: hi: 186, > btch: 31 usd: 46 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436250] CPU 1: hi: 186, > btch: 31 usd: 158 > Nov 4 11:07:27 srv-linux kernel: 
[ 1293.436253] CPU 2: hi: 186, > btch: 31 usd: 151 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436255] CPU 3: hi: 186, > btch: 31 usd: 39 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436257] CPU 4: hi: 186, > btch: 31 usd: 114 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436260] CPU 5: hi: 186, > btch: 31 usd: 59 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436262] CPU 6: hi: 186, > btch: 31 usd: 124 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436265] CPU 7: hi: 186, > btch: 31 usd: 173 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436271] active_anon:121650 > inactive_anon:21539 isolated_anon:0 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436272] active_file:65104 > inactive_file:1679351 isolated_file:0 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436273] unevictable:5709 > dirty:474043 writeback:6102 unstable:0 > Nov 4 11:07:28 srv-linux kernel: [ 1293.436275] free:9712 > slab_reclaimable:51092 slab_unreclaimable:49524 > Nov 4 11:07:28 srv-linux kernel: [ 1293.436276] mapped:13595 shmem:109 > pagetables:6308 bounce:0 > Nov 4 11:07:28 srv-linux kernel: [ 1293.436279] Node 0 DMA free:15692kB > min:20kB low:24kB high:28kB active_anon:0kB inactive_anon:0kB > active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB > isolated(file):0kB present:15320kB mlocked:0kB dirty:0kB writeback:0kB > mapped:0kB shmem:0kB slab_reclaimable:0kB slab_unreclaimable:0kB > kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB > writeback_tmp:0kB pages_scanned:0 all_unreclaimable? 
yes > Nov 4 11:07:28 srv-linux kernel: [ 1293.436290] lowmem_reserve[]: 0 > 3251 8049 8049 > Nov 4 11:07:28 srv-linux kernel: [ 1293.436295] Node 0 DMA32 > free:20696kB min:4632kB low:5788kB high:6948kB active_anon:79808kB > inactive_anon:17188kB active_file:55724kB inactive_file:2866240kB > unevictable:0kB isolated(anon):0kB isolated(file):0kB present:3329568kB > mlocked:0kB dirty:1287108kB writeback:8764kB mapped:168kB shmem:76kB > slab_reclaimable:108972kB slab_unreclaimable:12288kB kernel_stack:824kB > pagetables:6980kB unstable:0kB bounce:0kB writeback_tmp:0kB > pages_scanned:0 all_unreclaimable? no > Nov 4 11:07:28 srv-linux kernel: [ 1293.436307] lowmem_reserve[]: 0 0 > 4797 4797 > Nov 4 11:07:28 srv-linux kernel: [ 1293.436311] Node 0 Normal > free:2460kB min:6836kB low:8544kB high:10252kB active_anon:406792kB > inactive_anon:68968kB active_file:204692kB inactive_file:3851164kB > unevictable:22836kB isolated(anon):0kB isolated(file):0kB > present:4912640kB mlocked:22836kB dirty:609064kB writeback:15644kB > mapped:54212kB shmem:360kB slab_reclaimable:95396kB > slab_unreclaimable:185808kB kernel_stack:3912kB pagetables:18252kB > unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 > all_unreclaimable? 
no > Nov 4 11:07:28 srv-linux kernel: [ 1293.436323] lowmem_reserve[]: 0 0 0 0 > Nov 4 11:07:28 srv-linux kernel: [ 1293.436327] Node 0 DMA: 3*4kB 4*8kB > 2*16kB 2*32kB 1*64kB 1*128kB 0*256kB 0*512kB 1*1024kB 1*2048kB 3*4096kB > = 15692kB > Nov 4 11:07:28 srv-linux kernel: [ 1293.436339] Node 0 DMA32: 53*4kB > 586*8kB 309*16kB 50*32kB 9*64kB 5*128kB 2*256kB 1*512kB 0*1024kB > 1*2048kB 1*4096kB = 19828kB > Nov 4 11:07:28 srv-linux kernel: [ 1293.436350] Node 0 Normal: 8*4kB > 12*8kB 1*16kB 3*32kB 1*64kB 0*128kB 0*256kB 1*512kB 1*1024kB 0*2048kB > 0*4096kB = 1840kB > Nov 4 11:07:28 srv-linux kernel: [ 1293.436361] 1746592 total pagecache > pages > Nov 4 11:07:28 srv-linux kernel: [ 1293.436363] 0 pages in swap cache > Nov 4 11:07:28 srv-linux kernel: [ 1293.436365] Swap cache stats: add > 0, delete 0, find 0/0 > Nov 4 11:07:28 srv-linux kernel: [ 1293.436367] Free swap = 2104432kB > Nov 4 11:07:28 srv-linux kernel: [ 1293.436369] Total swap = 2104432kB > Nov 4 11:07:28 srv-linux kernel: [ 1293.445967] 2097152 pages RAM > Nov 4 11:07:28 srv-linux kernel: [ 1293.445967] 49948 pages reserved > Nov 4 11:07:28 srv-linux kernel: [ 1293.445967] 1080140 pages shared > Nov 4 11:07:28 srv-linux kernel: [ 1293.445967] 1014865 pages non-shared > Nov 4 11:07:28 srv-linux kernel: [ 1293.480826] The following is only > an harmless informational message. > Nov 4 11:07:28 srv-linux kernel: [ 1293.480832] Unless you get a > _continuous_flood_ of these messages it means > Nov 4 11:07:28 srv-linux kernel: [ 1293.480838] everything is working > fine. Allocations from irqs cannot be > Nov 4 11:07:28 srv-linux kernel: [ 1293.480843] perfectly reliable and > the kernel is designed to handle that. > Nov 4 11:07:28 srv-linux kernel: [ 1293.480850] swapper: page > allocation failure. 
order:0, mode:0x20, alloc_flags:0x30 pflags:0x10200042 > Nov 4 11:07:28 srv-linux kernel: [ 1293.480856] Pid: 0, comm: swapper > Tainted: G X 2.6.32.12-0.7-default #1 > Nov 4 11:07:28 srv-linux kernel: [ 1293.480862] Call Trace: > Nov 4 11:07:28 srv-linux kernel: [ 1293.480883] [] > dump_trace+0x6c/0x2d0 > Nov 4 11:07:28 srv-linux kernel: [ 1293.480897] [] > dump_stack+0x69/0x71 > Nov 4 11:07:28 srv-linux kernel: [ 1293.480910] [] > __alloc_pages_slowpath+0x3ed/0x550 > Nov 4 11:07:28 srv-linux kernel: [ 1293.480921] [] > __alloc_pages_nodemask+0x13a/0x140 > Nov 4 11:07:28 srv-linux kernel: [ 1293.480933] [] > kmem_getpages+0x56/0x170 > Nov 4 11:07:28 srv-linux kernel: [ 1293.480944] [] > fallback_alloc+0x166/0x230 > Nov 4 11:07:28 srv-linux kernel: [ 1293.480955] [] > kmem_cache_alloc+0x192/0x1b0 > Nov 4 11:07:28 srv-linux kernel: [ 1293.480967] [] > skb_clone+0x3a/0x80 > Nov 4 11:07:28 srv-linux kernel: [ 1293.480979] [] > dev_queue_xmit_nit+0x82/0x170 > Nov 4 11:07:28 srv-linux kernel: [ 1293.480990] [] > dev_hard_start_xmit+0x4a/0x210 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481000] [] > sch_direct_xmit+0x16e/0x1e0 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481010] [] > __qdisc_run+0xaf/0x100 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481021] [] > dev_queue_xmit+0x4cb/0x4d0 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481032] [] > ip_queue_xmit+0x210/0x420 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481044] [] > tcp_transmit_skb+0x4cb/0x760 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481054] [] > tcp_delack_timer+0x14f/0x2a0 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481066] [] > run_timer_softirq+0x174/0x240 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481077] [] > __do_softirq+0xbf/0x170 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481088] [] > call_softirq+0x1c/0x30 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481098] [] > do_softirq+0x4d/0x80 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481108] [] > irq_exit+0x85/0x90 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481118] [] > 
smp_apic_timer_interrupt+0x6c/0xa0 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481131] [] > apic_timer_interrupt+0x13/0x20 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481142] [] > mwait_idle+0x62/0x70 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481152] [] > cpu_idle+0x5a/0xb0 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481159] Mem-Info: > Nov 4 11:07:28 srv-linux kernel: [ 1293.481163] Node 0 DMA per-cpu: > Nov 4 11:07:28 srv-linux kernel: [ 1293.481173] CPU 0: hi: 0, > btch: 1 usd: 0 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481178] CPU 1: hi: 0, > btch: 1 usd: 0 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481184] CPU 2: hi: 0, > btch: 1 usd: 0 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481189] CPU 3: hi: 0, > btch: 1 usd: 0 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481195] CPU 4: hi: 0, > btch: 1 usd: 0 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481200] CPU 5: hi: 0, > btch: 1 usd: 0 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481206] CPU 6: hi: 0, > btch: 1 usd: 0 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481211] CPU 7: hi: 0, > btch: 1 usd: 0 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481216] Node 0 DMA32 per-cpu: > Nov 4 11:07:28 srv-linux kernel: [ 1293.481226] CPU 0: hi: 186, > btch: 31 usd: 30 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481231] CPU 1: hi: 186, > btch: 31 usd: 186 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481237] CPU 2: hi: 186, > btch: 31 usd: 147 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481242] CPU 3: hi: 186, > btch: 31 usd: 174 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481248] CPU 4: hi: 186, > btch: 31 usd: 92 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481253] CPU 5: hi: 186, > btch: 31 usd: 49 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481259] CPU 6: hi: 186, > btch: 31 usd: 141 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481264] CPU 7: hi: 186, > btch: 31 usd: 142 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481269] Node 0 Normal per-cpu: > Nov 4 11:07:28 srv-linux kernel: [ 1293.481278] CPU 0: hi: 186, > btch: 31 usd: 46 > Nov 4 11:07:28 srv-linux kernel: 
[ 1293.481284] CPU 1: hi: 186, > btch: 31 usd: 158 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481289] CPU 2: hi: 186, > btch: 31 usd: 151 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481295] CPU 3: hi: 186, > btch: 31 usd: 39 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481300] CPU 4: hi: 186, > btch: 31 usd: 114 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481306] CPU 5: hi: 186, > btch: 31 usd: 59 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481311] CPU 6: hi: 186, > btch: 31 usd: 124 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481316] CPU 7: hi: 186, > btch: 31 usd: 173 > Nov 4 11:07:29 srv-linux kernel: [ 1293.481325] active_anon:121650 > inactive_anon:21539 isolated_anon:0 > Nov 4 11:07:29 srv-linux kernel: [ 1293.481327] active_file:65104 > inactive_file:1679351 isolated_file:0 > Nov 4 11:07:29 srv-linux kernel: [ 1293.481328] unevictable:5709 > dirty:474043 writeback:6102 unstable:0 > Nov 4 11:07:29 srv-linux kernel: [ 1293.481329] free:9712 > slab_reclaimable:51092 slab_unreclaimable:49524 > Nov 4 11:07:29 srv-linux kernel: [ 1293.481330] mapped:13595 shmem:109 > pagetables:6308 bounce:0 > Nov 4 11:07:29 srv-linux kernel: [ 1293.481336] Node 0 DMA free:15692kB > min:20kB low:24kB high:28kB active_anon:0kB inactive_anon:0kB > active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB > isolated(file):0kB present:15320kB mlocked:0kB dirty:0kB writeback:0kB > mapped:0kB shmem:0kB slab_reclaimable:0kB slab_unreclaimable:0kB > kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB > writeback_tmp:0kB pages_scanned:0 all_unreclaimable? 
yes > Nov 4 11:07:29 srv-linux kernel: [ 1293.481354] lowmem_reserve[]: 0 > 3251 8049 8049 > Nov 4 11:07:29 srv-linux kernel: [ 1293.481377] Node 0 DMA32 > free:20696kB min:4632kB low:5788kB high:6948kB active_anon:79808kB > inactive_anon:17188kB active_file:55724kB inactive_file:2866240kB > unevictable:0kB isolated(anon):0kB isolated(file):0kB present:3329568kB > mlocked:0kB dirty:1287108kB writeback:8764kB mapped:168kB shmem:76kB > slab_reclaimable:108972kB slab_unreclaimable:12288kB kernel_stack:824kB > pagetables:6980kB unstable:0kB bounce:0kB writeback_tmp:0kB > pages_scanned:0 all_unreclaimable? no > Nov 4 11:07:29 srv-linux kernel: [ 1293.481396] lowmem_reserve[]: 0 0 > 4797 4797 > Nov 4 11:07:29 srv-linux kernel: [ 1293.481419] Node 0 Normal > free:2460kB min:6836kB low:8544kB high:10252kB active_anon:406792kB > inactive_anon:68968kB active_file:204692kB inactive_file:3851164kB > unevictable:22836kB isolated(anon):0kB isolated(file):0kB > present:4912640kB mlocked:22836kB dirty:609064kB writeback:15644kB > mapped:54212kB shmem:360kB slab_reclaimable:95396kB > slab_unreclaimable:185808kB kernel_stack:3912kB pagetables:18252kB > unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 > all_unreclaimable? 
no > Nov 4 11:07:29 srv-linux kernel: [ 1293.481438] lowmem_reserve[]: 0 0 0 0 > Nov 4 11:07:29 srv-linux kernel: [ 1293.481462] Node 0 DMA: 3*4kB 4*8kB > 2*16kB 2*32kB 1*64kB 1*128kB 0*256kB 0*512kB 1*1024kB 1*2048kB 3*4096kB > = 15692kB > Nov 4 11:07:29 srv-linux kernel: [ 1293.481518] Node 0 DMA32: 53*4kB > 586*8kB 309*16kB 50*32kB 9*64kB 5*128kB 2*256kB 1*512kB 0*1024kB > 1*2048kB 1*4096kB = 19828kB > Nov 4 11:07:29 srv-linux kernel: [ 1293.481574] Node 0 Normal: 8*4kB > 12*8kB 1*16kB 3*32kB 1*64kB 0*128kB 0*256kB 1*512kB 1*1024kB 0*2048kB > 0*4096kB = 1840kB > Nov 4 11:07:29 srv-linux kernel: [ 1293.481630] 1746592 total pagecache > pages > Nov 4 11:07:29 srv-linux kernel: [ 1293.481635] 0 pages in swap cache > Nov 4 11:07:29 srv-linux kernel: [ 1293.481641] Swap cache stats: add > 0, delete 0, find 0/0 > Nov 4 11:07:29 srv-linux kernel: [ 1293.481646] Free swap = 2104432kB > Nov 4 11:07:29 srv-linux kernel: [ 1293.481651] Total swap = 2104432kB > Nov 4 11:07:29 srv-linux kernel: [ 1293.484802] 2097152 pages RAM > Nov 4 11:07:29 srv-linux kernel: [ 1293.484802] 49948 pages reserved > Nov 4 11:07:29 srv-linux kernel: [ 1293.484802] 1079742 pages shared > Nov 4 11:07:29 srv-linux kernel: [ 1293.484802] 1013515 pages non-shared

__
masoch-l list
https://eng.registro.br/mailman/listinfo/masoch-l

From rejaine at bhz.jamef.com.br Thu Nov 4 16:54:22 2010
From: rejaine at bhz.jamef.com.br (Rejaine Monteiro)
Date: Thu, 04 Nov 2010 16:54:22 -0200
Subject: [MASOCH-L] Problemas Dell PowerEdge 1900 com SLES11 SP1
In-Reply-To: <4CD2F969.5070208@bluesolutions.com.br>
References: <4CD2C56C.1020108@bhz.jamef.com.br> <4CD2F969.5070208@bluesolutions.com.br>
Message-ID: <4CD3015E.8030302@bhz.jamef.com.br>

Hello Fernando,

Here is the output of the commands you requested.
At the moment they were run, the load looked like this:

4:51pm up 2 days 15:13, 31 users, load average: 26.80, 38.18, 40.04

# cat /proc/diskstats
8 0 sda 2373526 5337314 362747321 6950680 1119916 589136 18078042 21478208 0 8884264 28427660
8 1 sda1 3 0 6 4 0 0 0 0 0 4 4
8 5 sda5 20588 7231 391563 101956 62443 241279 2429776 2818612 0 342780 2920548
8 6 sda6 8105 3851 355672 31132 201145 164353 2923944 2529036 0 1176384 2560192
8 7 sda7 28 106 884 200 8 36 352 344 0 504 544
8 8 sda8 2344773 5325879 361998092 6817276 856320 183468 12723970 16130216 0 7759176 22946228
8 16 sdb 233753 11745 3700365 1121176 1621349 8762163 441680121 213615504 0 7454384 214736244
8 17 sdb1 3 0 6 8 0 0 0 0 0 8 8
8 21 sdb5 9347 1041 75656 4332 262 1950 17744 16944 0 5016 21268
8 22 sdb6 19596 9215 1052600 338436 36287 89084 1002944 1286432 0 332092 1624860
8 23 sdb7 31 147 716 80 0 0 0 0 0 64 80
8 24 sdb8 204738 1070 2570167 778216 1584800 8671129 440659433 212312128 0 7270200 213089912
8 32 sdc 1103397 1831933 117183384 13374208 2833434 13409291 237123430 30021984 3 15776748 43396056
8 33 sdc1 1103374 1831894 117182888 13374172 2833434 13409291 237123430 30021984 3 15776672 43395904
11 0 sr0 0 0 0 0 0 0 0 0 0 0 0
7 0 loop0 0 0 0 0 0 0 0 0 0 0 0
7 1 loop1 0 0 0 0 0 0 0 0 0 0 0
7 2 loop2 0 0 0 0 0 0 0 0 0 0 0
7 3 loop3 0 0 0 0 0 0 0 0 0 0 0
7 4 loop4 0 0 0 0 0 0 0 0 0 0 0
7 5 loop5 0 0 0 0 0 0 0 0 0 0 0
7 6 loop6 0 0 0 0 0 0 0 0 0 0 0
7 7 loop7 0 0 0 0 0 0 0 0 0 0 0
147 0 drbd0 7670231 0 361980801 22353608 790391 0 12224264 23668176 0 6631480 42060388
147 1 drbd1 2932451 0 117145425 32553268 15218982 0 235073118 240505628 5 9461760 243188612

# cat /proc/interrupts
        CPU0 CPU1 CPU2 CPU3 CPU4 CPU5 CPU6 CPU7
0: 126 111 107 88 71 113 127 99 IO-APIC-edge timer
1: 0 1 0 1 0 0 0 0 IO-APIC-edge i8042
3: 1 0 0 0 0 0 0 1 IO-APIC-edge
4: 0 0 0 1 0 1 0 0 IO-APIC-edge
8: 1 0 0 0 0 0 0 0 IO-APIC-edge rtc0
9: 0 0 0 0 0 0 0 0 IO-APIC-fasteoi acpi
12: 1 0 0 0 0 1 1 1 IO-APIC-edge i8042
20: 4161 8 8 10 8 9 8 8 IO-APIC-fasteoi uhci_hcd:usb3, uhci_hcd:usb5
21: 3 4 3 3 5 3 3 2 IO-APIC-fasteoi ehci_hcd:usb1, uhci_hcd:usb2, uhci_hcd:usb4
23: 1157658 399 12 13 1190308 383 13 12 IO-APIC-fasteoi ata_piix
4340: 11835998 11961200 11961267 10498754 11873005 11961101 11961249 10499059 PCI-MSI-edge eth0
4341: 113512 5 1 2 1 223 2 7 PCI-MSI-edge eth1
4342: 1966 420 419 5559507 1969 423 419 5559192 PCI-MSI-edge ioc0
4346: 0 0 0 0 0 0 0 0 PCI-MSI-edge aerdrv
4347: 0 0 0 0 0 0 0 0 PCI-MSI-edge aerdrv
4348: 0 0 0 0 0 0 0 0 PCI-MSI-edge aerdrv
4349: 0 0 0 0 0 0 0 0 PCI-MSI-edge aerdrv
4350: 0 0 0 0 0 0 0 0 PCI-MSI-edge aerdrv
4351: 0 0 0 0 0 0 0 0 PCI-MSI-edge aerdrv
NMI: 0 0 0 0 0 0 0 0 Non-maskable interrupts
LOC: 6202471 5360089 5336912 5926017 4578018 4119057 4153793 7799872 Local timer interrupts
RES: 4075362 3703559 3670625 4459376 3952669 3597535 3861743 4510502 Rescheduling interrupts
CAL: 1497 1517 1522 1456 1471 1482 1487 452 function call interrupts
TLB: 101705 93607 93412 89613 161066 152551 153405 146574 TLB shootdowns
TRM: 0 0 0 0 0 0 0 0 Thermal event interrupts
THR: 0 0 0 0 0 0 0 0 Threshold APIC interrupts
SPU: 0 0 0 0 0 0 0 0 Spurious interrupts
ERR: 0

# cat /proc/sys/fs/file-nr
7168 0 766538

# wc -l /proc/net/tcp
886 /proc/net/tcp

# wc -l /proc/net/udp
48 /proc/net/udp

# wc -l /proc/net/raw
2 /proc/net/raw

# wc -l /proc/net/unix
306 /proc/net/unix

Rejaine da Silveira Monteiro
Suporte-TI
Jamef Encomendas Urgentes
Matriz - Contagem/MG
Tel: (31) 2102-8854
www.jamef.com.br

On 04-11-2010 16:20, Fernando Ulisses dos Santos wrote:
> cat /proc/diskstats
> cat /proc/interrupts
> cat /proc/sys/fs/file-nr
> wc -l /proc/net/tcp
> wc -l /proc/net/udp
> wc -l /proc/net/raw
> wc -l /proc/net/unix

From fernando at bluesolutions.com.br Thu Nov 4 17:55:50 2010
From: fernando at bluesolutions.com.br (Fernando Ulisses dos Santos)
Date: Thu, 04 Nov 2010 17:55:50 -0200
Subject: [MASOCH-L] Problemas Dell PowerEdge 1900 com SLES11 SP1
In-Reply-To: <4CD3015E.8030302@bhz.jamef.com.br>
References: <4CD2C56C.1020108@bhz.jamef.com.br> <4CD2F969.5070208@bluesolutions.com.br> <4CD3015E.8030302@bhz.jamef.com.br>
Message-ID: <4CD30FC6.8050408@bluesolutions.com.br>

Rejaine,

Ok, from what I could tell, drbd1 is blocking some processes, which could account for that load.

Is it in sync, by any chance? If it is not, that is suspect number one.

Is the eth0 network card the one used to synchronize DRBD with the other host? Is it linked at Gigabit? Watch its traffic with a utility such as iptraf or iftop, check that it is not saturating the link, and tune the DRBD parameters to use less bandwidth if that is the case.

If that does not solve it, run the following command at peak time and send me the result:
cat /proc/diskstats ; sleep 10 ; cat /proc/diskstats ; sleep 10 ; cat /proc/diskstats

Fernando Ulisses dos Santos
Blue Solutions - Soluções em TI - Araras/SP
19-3321-9068 / 19-9294-0556

On 04-11-2010 16:54, Rejaine Monteiro wrote:
> Hello Fernando,
>
> Here is the output of the commands you requested.
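[Editorial note: Fernando's reading of the paste relies on the "I/Os currently in progress" column of /proc/diskstats. A minimal sketch of pulling that column out of a sample, run against two of the lines Rejaine posted; the field layout is assumed from 2.6-era kernels, and the helper name is ours, not from the thread.]

```python
# Field layout assumed for 2.6-era /proc/diskstats:
# major minor name + 11 stat fields, where the 9th stat field
# (index 11 after splitting) is "I/Os currently in progress" --
# the number read here as requests stuck on a device.

def inflight(diskstats_text):
    """Map device name -> I/Os currently in progress."""
    result = {}
    for line in diskstats_text.strip().splitlines():
        fields = line.split()
        if len(fields) >= 14:            # full stat line for a device
            result[fields[2]] = int(fields[11])
    return result

# Two of the lines from the paste above:
sample = """
8 32 sdc 1103397 1831933 117183384 13374208 2833434 13409291 237123430 30021984 3 15776748 43396056
147 1 drbd1 2932451 0 117145425 32553268 15218982 0 235073118 240505628 5 9461760 243188612
"""

print(inflight(sample))   # {'sdc': 3, 'drbd1': 5}
```

This reproduces the 5-versus-3 in-flight difference discussed later in the thread.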
> No momento em que eles foram executados, a carga estava assim: 4:51pm > up 2 days 15:13, 31 users, load average: 26.80, 38.18, 40.04 > > > # cat /proc/diskstats > > 8 0 sda 2373526 5337314 362747321 6950680 1119916 589136 18078042 > 21478208 0 8884264 28427660 > 8 1 sda1 3 0 6 4 0 0 0 0 0 4 > 4 > 8 5 sda5 20588 7231 391563 101956 62443 241279 2429776 2818612 0 > 342780 2920548 > 8 6 sda6 8105 3851 355672 31132 201145 164353 2923944 2529036 0 > 1176384 2560192 > 8 7 sda7 28 106 884 200 8 36 352 344 0 504 > 544 > 8 8 sda8 2344773 5325879 361998092 6817276 856320 183468 12723970 > 16130216 0 7759176 22946228 > 8 16 sdb 233753 11745 3700365 1121176 1621349 8762163 441680121 > 213615504 0 7454384 214736244 > 8 17 sdb1 3 0 6 8 0 0 0 0 0 8 > 8 > 8 21 sdb5 9347 1041 75656 4332 262 1950 17744 16944 0 5016 > 21268 > 8 22 sdb6 19596 9215 1052600 338436 36287 89084 1002944 1286432 0 > 332092 1624860 > 8 23 sdb7 31 147 716 80 0 0 0 0 0 64 > 80 > 8 24 sdb8 204738 1070 2570167 778216 1584800 8671129 440659433 > 212312128 0 7270200 213089912 > 8 32 sdc 1103397 1831933 117183384 13374208 2833434 13409291 > 237123430 30021984 3 15776748 43396056 > 8 33 sdc1 1103374 1831894 117182888 13374172 2833434 13409291 > 237123430 30021984 3 15776672 43395904 > 11 0 sr0 0 0 0 0 0 0 0 0 0 0 > 0 > 7 0 loop0 0 0 0 0 0 0 0 0 0 0 > 0 > 7 1 loop1 0 0 0 0 0 0 0 0 0 0 > 0 > 7 2 loop2 0 0 0 0 0 0 0 0 0 0 > 0 > 7 3 loop3 0 0 0 0 0 0 0 0 0 0 > 0 > 7 4 loop4 0 0 0 0 0 0 0 0 0 0 > 0 > 7 5 loop5 0 0 0 0 0 0 0 0 0 0 > 0 > 7 6 loop6 0 0 0 0 0 0 0 0 0 0 > 0 > 7 7 loop7 0 0 0 0 0 0 0 0 0 0 > 0 > 147 0 drbd0 7670231 0 361980801 22353608 790391 0 12224264 23668176 > 0 6631480 42060388 > 147 1 drbd1 2932451 0 117145425 32553268 15218982 0 235073118 > 240505628 5 9461760 243188612 > > # cat > /proc/interrupts > > CPU0 CPU1 CPU2 CPU3 CPU4 > CPU5 CPU6 CPU7 > 0: 126 111 107 88 71 > 113 127 99 IO-APIC-edge timer > 1: 0 1 0 1 0 > 0 0 0 IO-APIC-edge i8042 > 3: 1 0 0 0 0 > 0 0 1 IO-APIC-edge > 4: 0 0 0 1 0 > 1 0 0 
IO-APIC-edge > 8: 1 0 0 0 0 > 0 0 0 IO-APIC-edge rtc0 > 9: 0 0 0 0 0 > 0 0 0 IO-APIC-fasteoi acpi > 12: 1 0 0 0 0 > 1 1 1 IO-APIC-edge i8042 > 20: 4161 8 8 10 8 > 9 8 8 IO-APIC-fasteoi uhci_hcd:usb3, uhci_hcd:usb5 > 21: 3 4 3 3 5 > 3 3 2 IO-APIC-fasteoi ehci_hcd:usb1, > uhci_hcd:usb2, uhci_hcd:usb4 > 23: 1157658 399 12 13 1190308 > 383 13 12 IO-APIC-fasteoi ata_piix > 4340: 11835998 11961200 11961267 10498754 11873005 > 11961101 11961249 10499059 PCI-MSI-edge eth0 > 4341: 113512 5 1 2 1 > 223 2 7 PCI-MSI-edge eth1 > 4342: 1966 420 419 5559507 1969 > 423 419 5559192 PCI-MSI-edge ioc0 > 4346: 0 0 0 0 0 > 0 0 0 PCI-MSI-edge aerdrv > 4347: 0 0 0 0 0 > 0 0 0 PCI-MSI-edge aerdrv > 4348: 0 0 0 0 0 > 0 0 0 PCI-MSI-edge aerdrv > 4349: 0 0 0 0 0 > 0 0 0 PCI-MSI-edge aerdrv > 4350: 0 0 0 0 0 > 0 0 0 PCI-MSI-edge aerdrv > 4351: 0 0 0 0 0 > 0 0 0 PCI-MSI-edge aerdrv > NMI: 0 0 0 0 0 > 0 0 0 Non-maskable interrupts > LOC: 6202471 5360089 5336912 5926017 4578018 > 4119057 4153793 7799872 Local timer interrupts > RES: 4075362 3703559 3670625 4459376 3952669 > 3597535 3861743 4510502 Rescheduling interrupts > CAL: 1497 1517 1522 1456 1471 > 1482 1487 452 function call interrupts > TLB: 101705 93607 93412 89613 161066 > 152551 153405 146574 TLB shootdowns > TRM: 0 0 0 0 0 > 0 0 0 Thermal event interrupts > THR: 0 0 0 0 0 > 0 0 0 Threshold APIC interrupts > SPU: 0 0 0 0 0 > 0 0 0 Spurious interrupts > ERR: 0 > > # cat /proc/sys/fs/file-nr > 7168 0 766538 > > # wc -l /proc/net/tcp > 886 /proc/net/tcp > > # wc -l /proc/net/udp > 48 /proc/net/udp > > # wc -l /proc/net/raw > 2 /proc/net/raw > > # wc -l /proc/net/unix > 306 /proc/net/unix > > > Rejaine da Silveira Monteiro > Suporte-TI > Jamef Encomendas Urgentes > Matriz - Contagem/MG > Tel: (31) 2102-8854 > www.jamef.com.br > > > Em 04-11-2010 16:20, Fernando Ulisses dos Santos escreveu: >> cat /proc/diskstats >> cat /proc/interrupts >> cat /proc/sys/fs/file-nr >> wc -l /proc/net/tcp >> wc -l /proc/net/udp >> wc -l /proc/net/raw >> wc 
-l /proc/net/unix > __ > masoch-l list > https://eng.registro.br/mailman/listinfo/masoch-l From rejaine at bhz.jamef.com.br Thu Nov 4 18:08:49 2010 From: rejaine at bhz.jamef.com.br (Rejaine Monteiro) Date: Thu, 04 Nov 2010 18:08:49 -0200 Subject: [MASOCH-L] Problemas Dell PowerEdge 1900 com SLES11 SP1 In-Reply-To: <4CD30FC6.8050408@bluesolutions.com.br> References: <4CD2C56C.1020108@bhz.jamef.com.br> <4CD2F969.5070208@bluesolutions.com.br> <4CD3015E.8030302@bhz.jamef.com.br> <4CD30FC6.8050408@bluesolutions.com.br> Message-ID: <4CD312D1.7090203@bhz.jamef.com.br> Pois ? Fernando. Uma das primeiras coisas que fizemos foi fazer um tunning geral no drbd, que n?o resolveu. E para tirar a d?vida sobre o sync, chegamos a desativar o sincronismo, justamente para eliminar essa causa e nada... Ou seja, o sync j? n?o estava rodando quando foram executados os comandos que voce~ pediu, por?m os dados continuam sendo gravados na camada drbd (j? pensamos at? em desfazer todo o drbd e gravar direto em disco local, mas isso ainda n?o foi poss?vel fazer nesse momento) E quanto a placa: a para fazer o sync ? a eth1 e n?o a eth0 (esta ?ltima est? ligada ? rede local) Nos dois casos, s?o placas gigabit e o swtich que liga o servidor tamb?m ? gigabit Em 04-11-2010 17:55, Fernando Ulisses dos Santos escreveu: > Rejaine, > > Ok, do que eu pude analisar, o drbd1 est? travando alguns processos > que podem justificar esse load. > > Por acaso ele est? sincronizado? Se n?o estiver, ? o culpado n?mero 1. > > A placa de rede eth0 ? usada para sincronizar o DRBD com o outro host? > Est? ligada a Gigabit? Analise o tr?fego dela com algum utilit?rio > como iptraf ou iftop, veja se n?o est? saturando o uso, altere os > par?metros do DRBD para usar menos banda se for o caso. 
> > Se isso n?o resolver, execute o seguinte comando em hor?rio de pico e > me passe o resultado: > cat /proc/diskstats ; sleep 10 ; cat /proc/diskstats ; sleep 10 ; cat > /proc/diskstats > > > Fernando Ulisses dos Santos > Blue Solutions - Solu??es em TI - Araras/SP > 19-3321-9068 / 19-9294-0556 > > > Em 04-11-2010 16:54, Rejaine Monteiro escreveu: >> Ol? Fernando, >> >> Segue a sa?da para os comandos que voc? solicitou. >> No momento em que eles foram executados, a carga estava assim: 4:51pm >> up 2 days 15:13, 31 users, load average: 26.80, 38.18, 40.04 >> >> >> # cat /proc/diskstats >> >> 8 0 sda 2373526 5337314 362747321 6950680 1119916 589136 18078042 >> 21478208 0 8884264 28427660 >> 8 1 sda1 3 0 6 4 0 0 0 0 0 4 >> 4 >> 8 5 sda5 20588 7231 391563 101956 62443 241279 2429776 2818612 0 >> 342780 2920548 >> 8 6 sda6 8105 3851 355672 31132 201145 164353 2923944 2529036 0 >> 1176384 2560192 >> 8 7 sda7 28 106 884 200 8 36 352 344 0 504 >> 544 >> 8 8 sda8 2344773 5325879 361998092 6817276 856320 183468 12723970 >> 16130216 0 7759176 22946228 >> 8 16 sdb 233753 11745 3700365 1121176 1621349 8762163 441680121 >> 213615504 0 7454384 214736244 >> 8 17 sdb1 3 0 6 8 0 0 0 0 0 8 >> 8 >> 8 21 sdb5 9347 1041 75656 4332 262 1950 17744 16944 0 5016 >> 21268 >> 8 22 sdb6 19596 9215 1052600 338436 36287 89084 1002944 1286432 0 >> 332092 1624860 >> 8 23 sdb7 31 147 716 80 0 0 0 0 0 64 >> 80 >> 8 24 sdb8 204738 1070 2570167 778216 1584800 8671129 440659433 >> 212312128 0 7270200 213089912 >> 8 32 sdc 1103397 1831933 117183384 13374208 2833434 13409291 >> 237123430 30021984 3 15776748 43396056 >> 8 33 sdc1 1103374 1831894 117182888 13374172 2833434 13409291 >> 237123430 30021984 3 15776672 43395904 >> 11 0 sr0 0 0 0 0 0 0 0 0 0 0 >> 0 >> 7 0 loop0 0 0 0 0 0 0 0 0 0 0 >> 0 >> 7 1 loop1 0 0 0 0 0 0 0 0 0 0 >> 0 >> 7 2 loop2 0 0 0 0 0 0 0 0 0 0 >> 0 >> 7 3 loop3 0 0 0 0 0 0 0 0 0 0 >> 0 >> 7 4 loop4 0 0 0 0 0 0 0 0 0 0 >> 0 >> 7 5 loop5 0 0 0 0 0 0 0 0 0 0 >> 0 >> 7 6 loop6 0 0 0 0 
0 0 0 0 0 0 >> 0 >> 7 7 loop7 0 0 0 0 0 0 0 0 0 0 >> 0 >> 147 0 drbd0 7670231 0 361980801 22353608 790391 0 12224264 23668176 >> 0 6631480 42060388 >> 147 1 drbd1 2932451 0 117145425 32553268 15218982 0 235073118 >> 240505628 5 9461760 243188612 >> >> # cat >> /proc/interrupts >> >> CPU0 CPU1 CPU2 CPU3 CPU4 >> CPU5 CPU6 CPU7 >> 0: 126 111 107 88 71 >> 113 127 99 IO-APIC-edge timer >> 1: 0 1 0 1 0 >> 0 0 0 IO-APIC-edge i8042 >> 3: 1 0 0 0 0 >> 0 0 1 IO-APIC-edge >> 4: 0 0 0 1 0 >> 1 0 0 IO-APIC-edge >> 8: 1 0 0 0 0 >> 0 0 0 IO-APIC-edge rtc0 >> 9: 0 0 0 0 0 >> 0 0 0 IO-APIC-fasteoi acpi >> 12: 1 0 0 0 0 >> 1 1 1 IO-APIC-edge i8042 >> 20: 4161 8 8 10 8 >> 9 8 8 IO-APIC-fasteoi uhci_hcd:usb3, uhci_hcd:usb5 >> 21: 3 4 3 3 5 >> 3 3 2 IO-APIC-fasteoi ehci_hcd:usb1, >> uhci_hcd:usb2, uhci_hcd:usb4 >> 23: 1157658 399 12 13 1190308 >> 383 13 12 IO-APIC-fasteoi ata_piix >> 4340: 11835998 11961200 11961267 10498754 11873005 >> 11961101 11961249 10499059 PCI-MSI-edge eth0 >> 4341: 113512 5 1 2 1 >> 223 2 7 PCI-MSI-edge eth1 >> 4342: 1966 420 419 5559507 1969 >> 423 419 5559192 PCI-MSI-edge ioc0 >> 4346: 0 0 0 0 0 >> 0 0 0 PCI-MSI-edge aerdrv >> 4347: 0 0 0 0 0 >> 0 0 0 PCI-MSI-edge aerdrv >> 4348: 0 0 0 0 0 >> 0 0 0 PCI-MSI-edge aerdrv >> 4349: 0 0 0 0 0 >> 0 0 0 PCI-MSI-edge aerdrv >> 4350: 0 0 0 0 0 >> 0 0 0 PCI-MSI-edge aerdrv >> 4351: 0 0 0 0 0 >> 0 0 0 PCI-MSI-edge aerdrv >> NMI: 0 0 0 0 0 >> 0 0 0 Non-maskable interrupts >> LOC: 6202471 5360089 5336912 5926017 4578018 >> 4119057 4153793 7799872 Local timer interrupts >> RES: 4075362 3703559 3670625 4459376 3952669 >> 3597535 3861743 4510502 Rescheduling interrupts >> CAL: 1497 1517 1522 1456 1471 >> 1482 1487 452 function call interrupts >> TLB: 101705 93607 93412 89613 161066 >> 152551 153405 146574 TLB shootdowns >> TRM: 0 0 0 0 0 >> 0 0 0 Thermal event interrupts >> THR: 0 0 0 0 0 >> 0 0 0 Threshold APIC interrupts >> SPU: 0 0 0 0 0 >> 0 0 0 Spurious interrupts >> ERR: 0 >> >> # cat /proc/sys/fs/file-nr >> 7168 0 
766538 >> >> # wc -l /proc/net/tcp >> 886 /proc/net/tcp >> >> # wc -l /proc/net/udp >> 48 /proc/net/udp >> >> # wc -l /proc/net/raw >> 2 /proc/net/raw >> >> # wc -l /proc/net/unix >> 306 /proc/net/unix >> >> >> Rejaine da Silveira Monteiro >> Suporte-TI >> Jamef Encomendas Urgentes >> Matriz - Contagem/MG >> Tel: (31) 2102-8854 >> www.jamef.com.br >> >> >> Em 04-11-2010 16:20, Fernando Ulisses dos Santos escreveu: >>> cat /proc/diskstats >>> cat /proc/interrupts >>> cat /proc/sys/fs/file-nr >>> wc -l /proc/net/tcp >>> wc -l /proc/net/udp >>> wc -l /proc/net/raw >>> wc -l /proc/net/unix >> __ >> masoch-l list >> https://eng.registro.br/mailman/listinfo/masoch-l > __ > masoch-l list > https://eng.registro.br/mailman/listinfo/masoch-l From fernando at bluesolutions.com.br Fri Nov 5 09:46:45 2010 From: fernando at bluesolutions.com.br (Fernando Ulisses dos Santos) Date: Fri, 05 Nov 2010 09:46:45 -0200 Subject: [MASOCH-L] Problemas Dell PowerEdge 1900 com SLES11 SP1 In-Reply-To: <4CD312D1.7090203@bhz.jamef.com.br> References: <4CD2C56C.1020108@bhz.jamef.com.br> <4CD2F969.5070208@bluesolutions.com.br> <4CD3015E.8030302@bhz.jamef.com.br> <4CD30FC6.8050408@bluesolutions.com.br> <4CD312D1.7090203@bhz.jamef.com.br> Message-ID: <4CD3EEA5.5040609@bluesolutions.com.br> Rejaine, Desativar o DRBD parece ser o pr?ximo passo. Na sa?da do /proc/diskstats o DRBD estava segurando 5 processos e o sdc apenas 3, tinha uma diferen?a de 2 processos por conta da camada DRBD, que pode ser o respons?vel pelo load alto. Mesmo assim, se o sdc mantiver essa linha, teu load deve ficar na casa dos 4 ou 5. Voc? comentou que tinha 4 discos, mas tem apresentado apenas sda, sdb, sdc, teu RAID n?o parece estar na melhor configura??o de performance e disponibilidade, eu teria apresentado apenas RAID 10 com os 4 discos. Talvez voc? tenha encontrado um bug muito s?rio no conjunto de kernel + m?dulos + hardware, se nada resolver, a sa?da final ser? 
trocar a vers?o (atualizar firmware, trocar kernel, drbd, etc). Fernando Ulisses dos Santos Blue Solutions - Solu??es em TI - Araras/SP 19-3321-9068 / 19-3551-3898 Em 04-11-2010 18:08, Rejaine Monteiro escreveu: > Pois ? Fernando. > > Uma das primeiras coisas que fizemos foi fazer um tunning geral no drbd, > que n?o resolveu. > E para tirar a d?vida sobre o sync, chegamos a desativar o sincronismo, > justamente para eliminar essa causa e nada... > Ou seja, o sync j? n?o estava rodando quando foram executados os > comandos que voce~ pediu, por?m os dados continuam sendo gravados na > camada drbd (j? pensamos at? em desfazer todo o drbd e gravar direto em > disco local, mas isso ainda n?o foi poss?vel fazer nesse momento) > > E quanto a placa: a para fazer o sync ? a eth1 e n?o a eth0 (esta ?ltima > est? ligada ? rede local) > > Nos dois casos, s?o placas gigabit e o swtich que liga o servidor > tamb?m ? gigabit > > > Em 04-11-2010 17:55, Fernando Ulisses dos Santos escreveu: >> Rejaine, >> >> Ok, do que eu pude analisar, o drbd1 est? travando alguns processos >> que podem justificar esse load. >> >> Por acaso ele est? sincronizado? Se n?o estiver, ? o culpado n?mero 1. >> >> A placa de rede eth0 ? usada para sincronizar o DRBD com o outro host? >> Est? ligada a Gigabit? Analise o tr?fego dela com algum utilit?rio >> como iptraf ou iftop, veja se n?o est? saturando o uso, altere os >> par?metros do DRBD para usar menos banda se for o caso. >> >> Se isso n?o resolver, execute o seguinte comando em hor?rio de pico e >> me passe o resultado: >> cat /proc/diskstats ; sleep 10 ; cat /proc/diskstats ; sleep 10 ; cat >> /proc/diskstats >> >> >> Fernando Ulisses dos Santos >> Blue Solutions - Solu??es em TI - Araras/SP >> 19-3321-9068 / 19-9294-0556 >> >> >> Em 04-11-2010 16:54, Rejaine Monteiro escreveu: >>> Ol? Fernando, >>> >>> Segue a sa?da para os comandos que voc? solicitou. 
>>> At the moment they were run, the load looked like this:
>>> 4:51pm up 2 days 15:13, 31 users, load average: 26.80, 38.18, 40.04
>>>
>>> # cat /proc/diskstats
>>> 8 0 sda 2373526 5337314 362747321 6950680 1119916 589136 18078042 21478208 0 8884264 28427660
>>> 8 1 sda1 3 0 6 4 0 0 0 0 0 4 4
>>> 8 5 sda5 20588 7231 391563 101956 62443 241279 2429776 2818612 0 342780 2920548
>>> 8 6 sda6 8105 3851 355672 31132 201145 164353 2923944 2529036 0 1176384 2560192
>>> 8 7 sda7 28 106 884 200 8 36 352 344 0 504 544
>>> 8 8 sda8 2344773 5325879 361998092 6817276 856320 183468 12723970 16130216 0 7759176 22946228
>>> 8 16 sdb 233753 11745 3700365 1121176 1621349 8762163 441680121 213615504 0 7454384 214736244
>>> 8 17 sdb1 3 0 6 8 0 0 0 0 0 8 8
>>> 8 21 sdb5 9347 1041 75656 4332 262 1950 17744 16944 0 5016 21268
>>> 8 22 sdb6 19596 9215 1052600 338436 36287 89084 1002944 1286432 0 332092 1624860
>>> 8 23 sdb7 31 147 716 80 0 0 0 0 0 64 80
>>> 8 24 sdb8 204738 1070 2570167 778216 1584800 8671129 440659433 212312128 0 7270200 213089912
>>> 8 32 sdc 1103397 1831933 117183384 13374208 2833434 13409291 237123430 30021984 3 15776748 43396056
>>> 8 33 sdc1 1103374 1831894 117182888 13374172 2833434 13409291 237123430 30021984 3 15776672 43395904
>>> 11 0 sr0 0 0 0 0 0 0 0 0 0 0 0
>>> 7 0 loop0 0 0 0 0 0 0 0 0 0 0 0
>>> 7 1 loop1 0 0 0 0 0 0 0 0 0 0 0
>>> 7 2 loop2 0 0 0 0 0 0 0 0 0 0 0
>>> 7 3 loop3 0 0 0 0 0 0 0 0 0 0 0
>>> 7 4 loop4 0 0 0 0 0 0 0 0 0 0 0
>>> 7 5 loop5 0 0 0 0 0 0 0 0 0 0 0
>>> 7 6 loop6 0 0 0 0 0 0 0 0 0 0 0
>>> 7 7 loop7 0 0 0 0 0 0 0 0 0 0 0
>>> 147 0 drbd0 7670231 0 361980801 22353608 790391 0 12224264 23668176 0 6631480 42060388
>>> 147 1 drbd1 2932451 0 117145425 32553268 15218982 0 235073118 240505628 5 9461760 243188612
>>>
>>> # cat /proc/interrupts
>>>        CPU0      CPU1      CPU2      CPU3      CPU4      CPU5      CPU6      CPU7
>>>   0:   126 111 107 88 71 113 127 99   IO-APIC-edge   timer
>>>   1:   0 1 0 1 0 0 0 0   IO-APIC-edge   i8042
>>>   3:   1 0 0 0 0 0 0 1   IO-APIC-edge
>>>   4:   0 0 0 1 0 1 0 0   IO-APIC-edge
>>>   8:   1 0 0 0 0 0 0 0   IO-APIC-edge   rtc0
>>>   9:   0 0 0 0 0 0 0 0   IO-APIC-fasteoi   acpi
>>>  12:   1 0 0 0 0 1 1 1   IO-APIC-edge   i8042
>>>  20:   4161 8 8 10 8 9 8 8   IO-APIC-fasteoi   uhci_hcd:usb3, uhci_hcd:usb5
>>>  21:   3 4 3 3 5 3 3 2   IO-APIC-fasteoi   ehci_hcd:usb1, uhci_hcd:usb2, uhci_hcd:usb4
>>>  23:   1157658 399 12 13 1190308 383 13 12   IO-APIC-fasteoi   ata_piix
>>> 4340:  11835998 11961200 11961267 10498754 11873005 11961101 11961249 10499059   PCI-MSI-edge   eth0
>>> 4341:  113512 5 1 2 1 223 2 7   PCI-MSI-edge   eth1
>>> 4342:  1966 420 419 5559507 1969 423 419 5559192   PCI-MSI-edge   ioc0
>>> 4346:  0 0 0 0 0 0 0 0   PCI-MSI-edge   aerdrv
>>> 4347:  0 0 0 0 0 0 0 0   PCI-MSI-edge   aerdrv
>>> 4348:  0 0 0 0 0 0 0 0   PCI-MSI-edge   aerdrv
>>> 4349:  0 0 0 0 0 0 0 0   PCI-MSI-edge   aerdrv
>>> 4350:  0 0 0 0 0 0 0 0   PCI-MSI-edge   aerdrv
>>> 4351:  0 0 0 0 0 0 0 0   PCI-MSI-edge   aerdrv
>>>  NMI:  0 0 0 0 0 0 0 0   Non-maskable interrupts
>>>  LOC:  6202471 5360089 5336912 5926017 4578018 4119057 4153793 7799872   Local timer interrupts
>>>  RES:  4075362 3703559 3670625 4459376 3952669 3597535 3861743 4510502   Rescheduling interrupts
>>>  CAL:  1497 1517 1522 1456 1471 1482 1487 452   function call interrupts
>>>  TLB:  101705 93607 93412 89613 161066 152551 153405 146574   TLB shootdowns
>>>  TRM:  0 0 0 0 0 0 0 0   Thermal event interrupts
>>>  THR:  0 0 0 0 0 0 0 0   Threshold APIC interrupts
>>>  SPU:  0 0 0 0 0 0 0 0   Spurious interrupts
>>>  ERR:  0
>>>
>>> # cat /proc/sys/fs/file-nr
>>> 7168 0 766538
>>>
>>> # wc -l /proc/net/tcp
>>> 886 /proc/net/tcp
>>>
>>> # wc -l /proc/net/udp
>>> 48 /proc/net/udp
>>>
>>> # wc -l /proc/net/raw
>>> 2 /proc/net/raw
>>>
>>> # wc -l /proc/net/unix
>>> 306 /proc/net/unix
>>>
>>> Rejaine da Silveira Monteiro
>>> Suporte-TI
>>> Jamef Encomendas Urgentes
>>> Matriz - Contagem/MG
>>> Tel: (31) 2102-8854
>>> www.jamef.com.br
>>>
>>> On 04-11-2010 16:20, Fernando Ulisses dos Santos wrote:
>>>> cat /proc/diskstats
>>>> cat /proc/interrupts
>>>> cat /proc/sys/fs/file-nr
>>>> wc -l /proc/net/tcp
>>>> wc -l /proc/net/udp
>>>> wc -l /proc/net/raw
>>>> wc -l /proc/net/unix
>>> __
>>> masoch-l list
>>> https://eng.registro.br/mailman/listinfo/masoch-l
>> __
>> masoch-l list
>> https://eng.registro.br/mailman/listinfo/masoch-l
> __
> masoch-l list
> https://eng.registro.br/mailman/listinfo/masoch-l

From fernando at bluesolutions.com.br  Fri Nov  5 09:46:45 2010
From: fernando at bluesolutions.com.br (Fernando Ulisses dos Santos)
Date: Fri, 05 Nov 2010 09:46:45 -0200
Subject: [MASOCH-L] Problemas Dell PowerEdge 1900 com SLES11 SP1
In-Reply-To: <4CD312D1.7090203@bhz.jamef.com.br>
References: <4CD2C56C.1020108@bhz.jamef.com.br> <4CD2F969.5070208@bluesolutions.com.br> <4CD3015E.8030302@bhz.jamef.com.br> <4CD30FC6.8050408@bluesolutions.com.br> <4CD312D1.7090203@bhz.jamef.com.br>
Message-ID: <4CD3EEA5.5040609@bluesolutions.com.br>

Rejaine,

Disabling DRBD looks like the next step.

In the /proc/diskstats output, DRBD was holding 5 in-flight requests while sdc held only 3; that difference of 2 comes from the DRBD layer, and it may be what is driving the load so high. Even so, if sdc keeps up this pattern, your load should settle around 4 or 5.

You mentioned having 4 disks, but only sda, sdb and sdc show up. Your RAID does not look like the best layout for performance and availability; I would have just built a single RAID 10 across the 4 disks.

You may also have hit a very serious bug in this particular combination of kernel + modules + hardware. If nothing else works, the last resort is to change versions (update firmware, swap the kernel, DRBD, etc.).

Fernando Ulisses dos Santos
Blue Solutions - Soluções em TI - Araras/SP
19-3321-9068 / 19-3551-3898

On 04-11-2010 18:08, Rejaine Monteiro wrote:
> Right, Fernando.
>
> One of the first things we did was a general tuning pass on DRBD, and it did not help.
> To rule out the sync, we went as far as disabling synchronization entirely, precisely to eliminate that cause, and still nothing...
> In other words, the sync was already not running when the commands you asked for were executed, although data is still being written through the DRBD layer (we have even considered tearing DRBD down and writing straight to local disk, but that has not been possible yet).
>
> As for the NIC: the one used for the sync is eth1, not eth0 (the latter is connected to the local network).
> Both are gigabit cards, and the switch the server is attached to is gigabit as well.
>
> On 04-11-2010 17:55, Fernando Ulisses dos Santos wrote:
>> Rejaine,
>>
>> OK, from what I could analyze, drbd1 is blocking some processes, which could explain that load.
>>
>> Is it in sync, by any chance? If not, it is suspect number 1.
>>
>> Is eth0 the card used to synchronize DRBD with the other host? Is it linked at Gigabit? Watch its traffic with a utility such as iptraf or iftop, check whether it is saturating, and lower the DRBD bandwidth parameters if that is the case.
>>
>> If that does not solve it, run the following at peak time and send me the result:
>> cat /proc/diskstats ; sleep 10 ; cat /proc/diskstats ; sleep 10 ; cat /proc/diskstats
>>
>> Fernando Ulisses dos Santos
>> Blue Solutions - Soluções em TI - Araras/SP
>> 19-3321-9068 / 19-9294-0556
>>
>> On 04-11-2010 16:54, Rejaine Monteiro wrote:
>>> Hello Fernando,
>>>
>>> Here is the output for the commands you requested.
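[Editor's note] Fernando's "holding N requests" reading comes from the ninth statistics column of /proc/diskstats (I/Os currently in progress), which is the 12th whitespace-separated field overall. A minimal sketch for pulling that column out; the `inflight` helper name is my own, not something from the thread:

```shell
# Print devices with a non-zero in-flight I/O count from a
# diskstats-format file. Field 12 overall (9th stats field) is
# "I/Os currently in progress": the number read above as
# drbd1 = 5 versus sdc = 3.
inflight() {
  awk '$12 > 0 { printf "%s %d\n", $3, $12 }' "$1"
}
# e.g.:  inflight /proc/diskstats
```

A persistently non-zero value here means requests are queued inside that block layer, which is exactly the symptom being attributed to DRBD.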
>>> At the moment they were run, the load looked like this:
>>> 4:51pm up 2 days 15:13, 31 users, load average: 26.80, 38.18, 40.04
>>>
>>> [diagnostics quoted above trimmed: /proc/diskstats, /proc/interrupts, file-nr and socket counts]
>>>
>>> Rejaine da Silveira Monteiro
>>> Suporte-TI
>>> Jamef Encomendas Urgentes
>>> Matriz - Contagem/MG
>>> Tel: (31) 2102-8854
>>> www.jamef.com.br
>>> __
>>> masoch-l list
>>> https://eng.registro.br/mailman/listinfo/masoch-l
>> __
>> masoch-l list
>> https://eng.registro.br/mailman/listinfo/masoch-l
> __
> masoch-l list
> https://eng.registro.br/mailman/listinfo/masoch-l

From rejaine at bhz.jamef.com.br  Fri Nov  5 09:50:25 2010
From: rejaine at bhz.jamef.com.br (Rejaine Monteiro)
Date: Fri, 05 Nov 2010 09:50:25 -0200
Subject: [MASOCH-L] Problemas Dell PowerEdge 1900 com SLES11 SP1
In-Reply-To: <4CD32B5D.9050100@kinghost.com.br>
References: <4CD2C56C.1020108@bhz.jamef.com.br> <4CD2F969.5070208@bluesolutions.com.br> <4CD3015E.8030302@bhz.jamef.com.br> <4CD30FC6.8050408@bluesolutions.com.br> <4CD312D1.7090203@bhz.jamef.com.br> <4CD32B5D.9050100@kinghost.com.br>
Message-ID: <4CD3EF81.6050800@bhz.jamef.com.br>

Juliano,

Here is the output of the commands.

We already tried the noatime and nodiratime mount options, and they did not help much. The filesystem is xfs on the busiest partitions (the ones under DRBD, such as /home and /samba) and ext3 on the others (/boot, /var, /usr).

# free
             total       used       free     shared    buffers     cached
Mem:       8176096    6244268    1931828          0      30496    4090952
-/+ buffers/cache:    2122820    6053276
Swap:      2104432        292    2104140

# /usr/bin/iostat -d -k -x /dev/sda
Linux 2.6.27.19-5-default (rede2-sao)  11/05/10  _x86_64_

Device:  rrqm/s  wrqm/s    r/s    w/s   rkB/s    wkB/s  avgrq-sz  avgqu-sz   await  svctm  %util
sda       24.82    4.71  11.42   8.33  845.79    68.63     92.62      0.21   10.88   2.80   5.52

# /usr/bin/iostat -d -k -x /dev/sdb
Linux 2.6.27.19-5-default (rede2-sao)  11/05/10  _x86_64_

Device:  rrqm/s  wrqm/s    r/s    w/s   rkB/s    wkB/s  avgrq-sz  avgqu-sz   await  svctm  %util
sdb        0.05   54.40   1.30   9.05   10.17  1239.72    241.51      1.26  121.72   4.00   4.14

# /usr/bin/iostat -d -k -x /dev/sdc
Linux 2.6.27.19-5-default (rede2-sao)  11/05/10  _x86_64_

Device:  rrqm/s  wrqm/s    r/s    w/s   rkB/s    wkB/s  avgrq-sz  avgqu-sz   await  svctm  %util
sdc       14.18   55.28   6.84  21.22  443.01   535.91     69.77      0.27    9.72   4.00  11.23

On 04-11-2010 19:53, Juliano Primavesi | KingHost wrote:
> Rejaine, please post a free... depending on the result, bumping RAM to 16 or 24 GB will help considerably.
>
> Are the disks in RAID? If so, does the controller have cache? How much internal cache do the drives have?
>
> What do the commands below return?
>
> /usr/bin/iostat -d -k -x /dev/sda
> /usr/bin/iostat -d -k -x /dev/sdb
> /usr/bin/iostat -d -k -x /dev/sdc
>
> Which filesystem are you using? Either way, a "mount -o remount,noatime,nodiratime /home" will help considerably on every partition, unless you need to keep the time of last access to each file (not last modification, actual access). Another point to consider is putting the data partitions on reiserfs or xfs.
>
> Juliano
>
> On 04/11/2010 18:08, Rejaine Monteiro wrote:
>> Right, Fernando.
>>
>> [...]
>>
>> On 04-11-2010 17:55, Fernando Ulisses dos Santos wrote:
>>> [...]
>>>
>>> On 04-11-2010 16:54, Rejaine Monteiro wrote:
>>>> Hello Fernando,
>>>>
>>>> Here is the output for the commands you requested.
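[Editor's note] Fernando's suggestion of watching the sync NIC with iptraf or iftop can be approximated with nothing but /proc/net/dev when neither tool is installed. A minimal sketch, assuming Linux; both function names are my own:

```shell
# ifbytes: RX and TX byte counters for one interface, read from a
# /proc/net/dev-style file (interface names are followed by ':').
ifbytes() {  # args: iface file
  awk -v i="$1" -F'[: ]+' '$2 == i { print $3, $11 }' "$2"
}

# netrate: average RX/TX bytes per second over an interval.
# Run e.g.  netrate eth1 10  while the DRBD sync is busy.
netrate() {
  iface=$1; secs=${2:-10}
  set -- $(ifbytes "$iface" /proc/net/dev); rx0=$1; tx0=$2
  sleep "$secs"
  set -- $(ifbytes "$iface" /proc/net/dev); rx1=$1; tx1=$2
  echo "$iface rx=$(( (rx1 - rx0) / secs ))B/s tx=$(( (tx1 - tx0) / secs ))B/s"
}
```

Sustained rates near the wire speed of the sync link would support the saturation theory; rates far below it point elsewhere.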
>>>> At the moment they were run, the load looked like this:
>>>> 4:51pm up 2 days 15:13, 31 users, load average: 26.80, 38.18, 40.04
>>>>
>>>> [diagnostics quoted above trimmed: /proc/diskstats, /proc/interrupts, file-nr and socket counts]
>>>>
>>>> Rejaine da Silveira Monteiro
>>>> Suporte-TI
>>>> Jamef Encomendas Urgentes
>>>> Matriz - Contagem/MG
>>>> Tel: (31) 2102-8854
>>>> www.jamef.com.br
>>>>
>>>> On 04-11-2010 16:20, Fernando Ulisses dos Santos wrote:
>>>>> [...]
>>>> __
>>>> masoch-l list
>>>> https://eng.registro.br/mailman/listinfo/masoch-l
>>> __
>>> masoch-l list
>>> https://eng.registro.br/mailman/listinfo/masoch-l
>> __
>> masoch-l list
>> https://eng.registro.br/mailman/listinfo/masoch-l

From anisio.neto at hotmail.com.br  Sun Nov  7 10:01:30 2010
From: anisio.neto at hotmail.com.br (=?utf-8?B?QW7DrXNpbyBKLiBNb3JlaXJhIE5ldG8=?=)
Date: Sun, 7 Nov 2010 10:01:30 -0200
Subject: [MASOCH-L] Provedor de e-mail
In-Reply-To:
References:
Message-ID:

Good morning, all.

I know this topic has already been discussed at length, but I think it would be in everyone's interest to bring it back now.

Today I use Google Apps as my mail service but, browsing around with no particular aim, I ran into a Microsoft service called Windows Live Admin Center, URL: domains.live.com. As far as I can tell it is the perfect competitor to Google Apps, and free. I just could not find anything saying whether I may use this service for my company or whether it is only for non-profit institutions.

I believe competition is healthy.

Regards.

-----Original Message----- From: Renato Pinheiro de Souza
Sent: Thursday, October 14, 2010 8:12 PM
To: Mail Aid and Succor, On-line Comfort and Help
Subject: Re: [MASOCH-L] Provedor de email

Sorry for not replying sooner.

Well, I would like to thank the companies that sent proposals, but what I am after right now is really the hands-on experience of friends on this list. I have been burned by hosting companies too many times and did not want to leave my current one without some customer feedback.

Anyway, I will take a look at this KingHost and/or try more users on Google Apps.

Thanks again for the help!

Regards,
Renato Pinheiro
renato.pinheiro at pobox.com
pinheiro at gmail.com

2010/10/10 Bruno Camargo
> Hey Renato,
>
> We went through an ordeal here at the company, an office of about 15 employees, over our e-mail service.
>
> I tested Locaweb; my conclusion is that they grew too much without investing in infrastructure.
> I tested HostGator and, despite all the pomp, it felt like a very amateur service; they did not even have a support phone number.
> Now we are using KingHost, and I have heard no complaints from the staff.
>
> Regards
>
> Bruno Camargo
>
> 2010/10/3 Renato Pinheiro de Souza:
> > Hi all,
> >
> > I am looking for reliable options for e-mail hosting. I thought about moving to Google Apps but, at US$ 50 per person/year and needing 80 accounts, it got a bit steep.
> >
> > So, does anyone have good experiences with some provider? Brazilian or international, either is fine.
> >
> > Thanks in advance for the help!!!
> >
> > Regards,
> > Renato Pinheiro
> > renato.pinheiro at pobox.com
> > pinheiro at gmail.com
> > __
> > masoch-l list
> > https://eng.registro.br/mailman/listinfo/masoch-l
>
> --
> BrCaBadT
> __
> masoch-l list
> https://eng.registro.br/mailman/listinfo/masoch-l

__
masoch-l list
https://eng.registro.br/mailman/listinfo/masoch-l

From paulo.rddck at bsd.com.br  Sun Nov  7 13:20:39 2010
From: paulo.rddck at bsd.com.br (Paulo Henrique)
Date: Sun, 7 Nov 2010 13:20:39 -0200
Subject: [MASOCH-L] Provedor de e-mail
In-Reply-To:
References:
Message-ID:

FreeBSD Brasil. I am currently evaluating both them and KingHost.

On 7 November 2010 10:01, Anísio J. Moreira Neto <anisio.neto at hotmail.com.br> wrote:
> Good morning, all.
>
> [...]
>
> 2010/10/10 Bruno Camargo
>> Hey Renato,
>>
>> We went through an ordeal here at the company, an office of about 15 employees, over our e-mail service.
>>
>> I tested Locaweb; my conclusion is that they grew too much without investing in infrastructure.
>> I tested HostGator and, despite all the pomp, it felt like a very amateur service; they did not even have a support phone number.
>> Now we are using KingHost, and I have heard no complaints from the staff.
>>
>> Regards
>>
>> Bruno Camargo
>>
>> 2010/10/3 Renato Pinheiro de Souza:
>> > [...]
>>
>> --
>> BrCaBadT
>> __
>> masoch-l list
>> https://eng.registro.br/mailman/listinfo/masoch-l
>
> __
> masoch-l list
> https://eng.registro.br/mailman/listinfo/masoch-l

--
:=)>Paulo Henrique (JSRD)<(=:
Alone, locked, a survivor, unfortunately not know who I am

__
masoch-l list
https://eng.registro.br/mailman/listinfo/masoch-l

From carlos at greco.com.br  Mon Nov  8 08:41:15 2010
From: carlos at greco.com.br (Carlos Alberto Greco)
Date: Mon, 8 Nov 2010 07:41:15 -0300 (GMT-03:00)
Subject: [MASOCH-L] Fwd: Provedor de e-mail
In-Reply-To: <1311026524.2316951289164345392.JavaMail.root@gi4.greco.com.br>
Message-ID: <13933553.131289212864109.JavaMail.SYSTEM@greco-c5cead7ef>

I find this a very fitting topic. What should we do: wait for the world to offer everything for free, or keep investing in knowledge and improving our services?

At an event I asked Google's representative what happens if I need support for my free domains. The answer was that the service is so good it has no support. I will never be able to say that to our clients.

I could not remember our domains.live.com account, since we use MSN with @greco.com.br. It has been a few months since we requested the password, and to this day they have not answered the e-mail.

Brazil is the flavor of the moment and we have to think about IT the same way. I know it is discouraging at times, since we keep chasing our own tails, but the shortest path is to band together through associations. I cannot recall ever attending an e-mail server course run by an association; I think exchanging information is essential so we do not pick the wrong path.

Greco

----- Original Message -----
From: Paulo Henrique
Sent: Sunday, November 7, 2010 12:20
To: Mail Aid and Succor, On-line Comfort and Help
Subject: Re: [MASOCH-L] Provedor de e-mail

FreeBSD Brasil. I am currently evaluating both them and KingHost.

On 7 November 2010 10:01, Anísio J. Moreira Neto <anisio.neto at hotmail.com.br> wrote:
> [...]

--
:=)>Paulo Henrique (JSRD)<(=:
Alone, locked, a survivor, unfortunately not know who I am

__
masoch-l list
https://eng.registro.br/mailman/listinfo/masoch-l

From mustardahc at gmail.com  Tue Nov  9 11:26:37 2010
From: mustardahc at gmail.com (Bruno Camargo)
Date: Tue, 9 Nov 2010 11:26:37 -0200
Subject: [MASOCH-L] ALGUEM DO BRADESCO PARA CONVERSAR EM PVT?
Message-ID:

Gentlemen,

Anyone from Bradesco available for a private chat?
Grato Bruno Camargo -- BrCaBadT From rejaine at bhz.jamef.com.br Thu Nov 11 17:37:57 2010 From: rejaine at bhz.jamef.com.br (Rejaine Monteiro) Date: Thu, 11 Nov 2010 17:37:57 -0200 Subject: [MASOCH-L] Script de balanceamanto Message-ID: <4CDC4615.9000701@bhz.jamef.com.br> Pessoal Estou usando um script de balanceamento e tem funcionado normalmente. S? estranho o fato de que as vezes, o acesso a uma p?gina da Internet parece ficar "preso" e ao dar um F5 ou ao clicar em um link novamente, para recarregar a p?gina, a mesma ? aberta instantaneamente e por isso estou com receio do script estar incorreto e consequentemente comprometendo a performance de uso da Internet de forma geral. Por isso pe?o a ajuda de voc?s para avaliar se h? algo errado aqui, apesar de acreditar tamb?m que possa ser algo relacionado ? m? qualidade do link ADSL , claro... A marca??o dos pacotes ? feito por um script de firewall ? parte e ao final ? executado o script de balanceamento abaixo. A ideia ? direcionar todos os pacotes de sa?da HTTP (80/TCP) para a internet ADSL, deixando o link principal Embratel livre para correio eletr?nico e aplica??o WEB da empresa. O iptraf demonstra que os pacotes marcados est?o de fato saindo pelo link ADSL e tudo funciona relativamente bem (salvo pela observa??o citada anteriormente) Uso tambem' um script para monitorar (pingar) a conex?o via ADSL e derrubar a interface da ADSL em caso de problemas de conex?o... Enfim... Qualquer ajuda ser? 
bem vinda ============================== #!/bin/bash #Script de balanceamento # Resetando tabelas de rotas, padrao do sistema echo "255 local" > /etc/iproute2/rt_tables echo "254 main" >> /etc/iproute2/rt_tables echo "253 default" >> /etc/iproute2/rt_tables echo "0 unspec" >> /etc/iproute2/rt_tables #Setando variaveis #IPD:LINK INTERNET EMBRATEL (ROTA DEFAULT) #DSL: LINK ADSL (USADO PARA SAIDA DOS PACOTES MARCADOS (80/TCP) PARA INTERNET) export IPD_DEV=eth1 export IPD_IP="200.243.222.66" export IPD_GW="200.243.222.65" export IPD_NET="$IPD_IP/27" export DSL_DEV=eth2 export DSL_IP="192.168.1.2" export DSL_GW="192.168.1.1" export DSL_NET="$DSL_GW/24" #limpando rotas default route del default gw $IPD_GW route del default gw $DSL_GW #Removendo detecao de pacotes marcianos for eee in /proc/sys/net/ipv4/conf/*/rp_filter; do echo 0 > $eee done cat /proc/sys/net/ipv4/conf/*/rp_filter #load balancing ip rule del fwmark 3 ip route del table 30 ip rule add fwmark 3 lookup 30 prio 30 ip route add default via $DSL_GW dev $DSL_DEV table 30 ip rule del from $IPD_IP ip rule del from $DSL_IP ip rule add prio 10 from $IPD_IP table 10 ip route del default via $IPD_GW dev $IPD_DEV src $IPD_IP proto static table 10 ip route add default via $IPD_GW dev $IPD_DEV src $IPD_IP proto static table 10 ip rule add prio 11 from $DSL_IP table 11 ip route del default via $DSL_GW dev $DSL_DEV src $DSL_IP proto static table 11 ip route add default via $DSL_GW dev $DSL_DEV src $DSL_IP proto static table 11 #adicionando routa default para Embratel (pacotes nao marcados devem sair por aqui) ip route add default via $IPD_GW #atualizar tabelas de roteamento ip route flush cache route -n ============================== From MBORBA at trf3.jus.br Thu Nov 11 17:55:39 2010 From: MBORBA at trf3.jus.br (MARLON BORBA) Date: Thu, 11 Nov 2010 17:55:39 -0200 Subject: [MASOCH-L] Script de balanceamanto In-Reply-To: <4CDC4615.9000701@bhz.jamef.com.br> References: <4CDC4615.9000701@bhz.jamef.com.br> Message-ID: 
<4CDC2E1B0200004600008595@d-gws03.jfsp.gov.br>

Rejane,

One recommendation: when posting your scripts to a public list, sanitize or hide public IP addresses, to keep someone from capturing them in the list archives and trying to exploit possible vulnerabilities.

>>> On 11/11/2010 at 17:37, Rejaine Monteiro wrote:

> Folks
>
> I am using a load-balancing script and it has been working normally. The
> only odd thing is that, every now and then, access to a web page seems
> to get "stuck"; pressing F5 or clicking the link again to reload makes
> the page open instantly. Because of that I am afraid the script is
> incorrect and consequently hurting overall Internet performance.
[...]

--
Regards,
Marlon Borba, CISSP, APC DataCenter Associate
Judiciary Technician - Information Security
IPv6 Evangelist - Moreq-Jus Evangelist
Local Incident Response Commission - CLRI
TRF 3rd Region
(11) 3012-2030 (VoIP)
--
Governing for enterprise security means viewing adequate security as a
non-negotiable requirement of being in business. Carnegie-Mellon CERT
--

From rejaine at bhz.jamef.com.br Thu Nov 11 18:05:05 2010
From: rejaine at bhz.jamef.com.br (Rejaine Monteiro)
Date: Thu, 11 Nov 2010 18:05:05 -0200
Subject: [MASOCH-L] Script de balanceamanto
In-Reply-To: <4CDC2E1B0200004600008595@d-gws03.jfsp.gov.br>
References: <4CDC4615.9000701@bhz.jamef.com.br> <4CDC2E1B0200004600008595@d-gws03.jfsp.gov.br>
Message-ID: <4CDC4C71.4040706@bhz.jamef.com.br>

Err... my bad. I normally follow that practice, but this time it slipped.

On 11-11-2010 17:55, MARLON BORBA wrote:
> Rejane,
>
> One recommendation: when posting your scripts to a public list,
> sanitize or hide public IP addresses, to keep someone from capturing
> them in the list archives and trying to exploit possible
> vulnerabilities.
>
>>> On 11/11/2010 at 17:37, Rejaine Monteiro wrote:
>
>> Folks
>>
>> I am using a load-balancing script and it has been working normally.
>> The only odd thing is that, every now and then, access to a web page
>> seems to get "stuck"; pressing F5 or clicking the link again to
>> reload makes the page open instantly. Because of that I am afraid
>> the script is incorrect and consequently hurting overall Internet
>> performance.
>>
> [...]

From hamilton at theforce.com.br Thu Nov 11 22:00:45 2010
From: hamilton at theforce.com.br (Hamilton Vera)
Date: Thu, 11 Nov 2010 22:00:45 -0200
Subject: [MASOCH-L] Script de balanceamanto
In-Reply-To: <4CDC4615.9000701@bhz.jamef.com.br>
References: <4CDC4615.9000701@bhz.jamef.com.br>
Message-ID:

Is there a proxy on your network? If there is, it may be the cause, and you can confirm it with tcpdump.

[]'s

Hamilton Vera

On 11 November 2010 17:37, Rejaine Monteiro wrote:
> Folks
>
> I am using a load-balancing script and it has been working normally.
> The only odd thing is that, every now and then, access to a web page
> seems to get "stuck"; pressing F5 or clicking the link again to reload
> makes the page open instantly. Because of that I am afraid the script
> is incorrect and consequently hurting overall Internet performance.
>
> So I ask for your help in checking whether anything is wrong here,
> although I also suspect it may simply be the poor quality of the ADSL
> link, of course...
>
> Packet marking is done by a separate firewall script, which at the end
> runs the balancing script below.
> The idea is to send all outgoing HTTP traffic (80/TCP) out through the
> ADSL link, leaving the main Embratel link free for e-mail and the
> company's web application.
> iptraf shows that the marked packets are indeed leaving through the
> ADSL link, and everything works reasonably well (apart from the issue
> noted above).
> I also use a script that monitors (pings) the ADSL connection and
> takes the ADSL interface down if the connection fails... Anyway,
>
> Any help will be welcome.
>
> ==============================
> #!/bin/bash
> # Load-balancing script
>
> # Reset the routing tables to the system defaults
> echo "255 local" > /etc/iproute2/rt_tables
> echo "254 main" >> /etc/iproute2/rt_tables
> echo "253 default" >> /etc/iproute2/rt_tables
> echo "0 unspec" >> /etc/iproute2/rt_tables
>
> # Variables
> # IPD: EMBRATEL INTERNET LINK (DEFAULT ROUTE)
> # DSL: ADSL LINK (EXIT PATH FOR MARKED (80/TCP) PACKETS TO THE INTERNET)
> export IPD_DEV=eth1
> export IPD_IP="200.243.222.66"
> export IPD_GW="200.243.222.65"
> export IPD_NET="$IPD_IP/27"
> export DSL_DEV=eth2
> export DSL_IP="192.168.1.2"
> export DSL_GW="192.168.1.1"
> export DSL_NET="$DSL_GW/24"
>
> # Remove the default routes
> route del default gw $IPD_GW
> route del default gw $DSL_GW
>
> # Disable martian-packet detection (reverse-path filtering)
> for eee in /proc/sys/net/ipv4/conf/*/rp_filter; do
> echo 0 > $eee
> done
> cat /proc/sys/net/ipv4/conf/*/rp_filter
>
> # Load balancing
> ip rule del fwmark 3
> ip route del table 30
> ip rule add fwmark 3 lookup 30 prio 30
> ip route add default via $DSL_GW dev $DSL_DEV table 30
> ip rule del from $IPD_IP
> ip rule del from $DSL_IP
> ip rule add prio 10 from $IPD_IP table 10
> ip route del default via $IPD_GW dev $IPD_DEV src $IPD_IP proto static
> table 10
> ip route add default via $IPD_GW dev $IPD_DEV src $IPD_IP proto static
> table 10
> ip rule add prio 11 from $DSL_IP table 11
> ip route del default via $DSL_GW dev $DSL_DEV src $DSL_IP proto static
> table 11
> ip route add default via $DSL_GW dev $DSL_DEV src $DSL_IP proto static
> table 11
>
> # Add the default route via Embratel (unmarked packets should leave here)
> ip route add default via $IPD_GW
>
> # Refresh the routing tables
> ip route flush cache
> route -n
>
> ==============================
>
> __
> masoch-l list
> https://eng.registro.br/mailman/listinfo/masoch-l
>

--
http://hvera.wordpress.com

From alexandre at telbrax.com.br Sat Nov 13 09:06:05 2010
From: alexandre at telbrax.com.br (Alexandre Flach)
Date: Sat, 13 Nov 2010 09:06:05 -0200
Subject: [MASOCH-L] Profissional Nivel 1, 2 e 3
Message-ID: <0a1e01cb8322$c60c6960$52253c20$@com.br>

Folks,

We have openings for level 1, 2 and 3 NOC professionals for our metro network operation in greater Belo Horizonte, and also for professionals with solid knowledge of BGP and IP backbone operation. If you have referrals or are interested, please send résumés to vagas at telbrax.com.br. Candidates may be from anywhere in Brazil.

Best regards,
Alexandre Flach

From listas at rafaelsantos.com Wed Nov 17 22:44:20 2010
From: listas at rafaelsantos.com (Rafael Santos)
Date: Wed, 17 Nov 2010 22:44:20 -0200
Subject: [MASOCH-L] =?iso-8859-1?q?DNAT_para_rede_local_n=E3o_funciona_=28?= =?iso-8859-1?q?Debian=29?=
Message-ID: <001801cb86b9$bd817200$38845600$@rafaelsantos.com>

Dear all,

I am facing a rather bizarre problem here and would like some help from you.

I have the following topology:

Eth0 = 192.168.1.0/24 (LAN)
Eth1 = 200.200.200.200 (WAN 1)
Eth2 = 201.201.201.201 (WAN 2)

There is a DNAT rule so that all packets arriving on certain ports with destination = WAN 1 IP are forwarded (DNAT) to the server 192.168.1.45. So far so good: from anywhere on the planet, and perhaps the universe (latency might be a nuisance at greater distances), the connection works like a charm.

The problem occurs when I try to reach the server through the WAN 1 IP from within the LAN itself. You will tell me: "Of course, you ignoramus, don't you see that the request is made to the WAN 1 IP, but, since they are on the same network, the reply goes straight from the server 192.168.1.45 to the machine that made the request?"

I answer: "Yes! That is exactly why I added an SNAT rule saying that all packets forwarded to the server must carry WAN 1 as their source address!"

I tried several variations, but the current rules are the following:

DNAT tcp -- * * !192.168.1.45 200.200.200.200 tcp multiport dports 80,443,5252,7531,7532 to:192.168.1.45
SNAT all -- * eth0 192.168.1.0/24 192.168.1.45 to:200.200.200.200

The packets leave the workstation (192.168.1.50), pass through the gateway, and are correctly forwarded to 192.168.1.45 with source address = 200.200.200.200; the server 192.168.1.45 replies to 200.200.200.200, but the packets never make it back to the workstation.

Shouldn't there be a connection-tracking entry to forward the return traffic? Can anyone tell me why this $#@%@ is not working?

I appreciate any help.

Regards,
Rafael Santos

From danton.nunes at inexo.com.br Thu Nov 18 09:00:31 2010
From: danton.nunes at inexo.com.br (Danton Nunes)
Date: Thu, 18 Nov 2010 09:00:31 -0200 (BRST)
Subject: [MASOCH-L] =?iso-8859-15?q?DNAT_para_rede_local_n=E3o_funciona_?= =?iso-8859-15?q?=28Debian=29?=
In-Reply-To: <001801cb86b9$bd817200$38845600$@rafaelsantos.com>
References: <001801cb86b9$bd817200$38845600$@rafaelsantos.com>
Message-ID:

On Wed, 17 Nov 2010, Rafael Santos wrote:

> Shouldn't there be a connection-tracking entry to forward the return
> traffic? Can anyone tell me why this $#@%@ is not working?

Since a packet originating inside the internal network never crosses the external interface WAN1, the iptables rules hanging off that interface do not apply. Repeat the rule for the internal interface and see what happens.
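Danton's suggestion amounts to classic NAT "hairpin" (loopback) rules. Below is a minimal sketch, not a confirmed fix from the thread: it reuses the interface names and addresses from Rafael's topology and assumes the gateway's own LAN address is 192.168.1.1 (an assumption, not stated in the thread).

```shell
# Hairpin-NAT sketch (assumptions: eth0 = LAN, gateway LAN IP = 192.168.1.1).
# 1) Repeat the DNAT so it also matches requests arriving from the LAN,
#    not only those crossing the WAN 1 interface:
iptables -t nat -A PREROUTING -i eth0 -d 200.200.200.200 \
  -p tcp -m multiport --dports 80,443,5252,7531,7532 \
  -j DNAT --to-destination 192.168.1.45

# 2) Rewrite the source of hairpinned connections so the server replies
#    back through the gateway, letting conntrack undo both translations:
iptables -t nat -A POSTROUTING -o eth0 -s 192.168.1.0/24 -d 192.168.1.45 \
  -p tcp -m multiport --dports 80,443,5252,7531,7532 \
  -j SNAT --to-source 192.168.1.1
```

With both rules in place the workstation sees the reply coming back through the gateway, as conntrack expects, instead of directly from 192.168.1.45. (Split-horizon DNS, pointing LAN clients straight at 192.168.1.45, avoids the hairpin altogether.)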
From jean.vosch at gmail.com Thu Nov 4 12:48:43 2010
From: jean.vosch at gmail.com (Jean Marcel Vosch)
Date: Thu, 4 Nov 2010 12:48:43 -0200
Subject: [MASOCH-L] Problemas Dell PowerEdge 1900 com SLES11 SP1
In-Reply-To: <4CD2C56C.1020108@bhz.jamef.com.br>
References: <4CD2C56C.1020108@bhz.jamef.com.br>
Message-ID:

Do you have support for SLES11? Have you tried contacting them? I would recommend trying OpenSuSE 10.2 on this new hardware as a sanity check, to find out whether this is a software problem or an incompatibility between the OS and the hardware.

[]s

On 4 November 2010 12:38, Rejaine Monteiro wrote:
>
> Folks,
>
> I come asking for help...
> I have a serious performance problem on a server running SLES11 (SP1),
> installed on a PowerEdge 1900 (configuration below).
>
> Here is what happens:
>
> We had several far less powerful servers at our sites, serving the
> same number of users and the same services, but running OpenSuSE 10.2.
> Everything worked perfectly well until then, but, following our plan
> to refresh the machine park, we chose to upgrade the hardware and the
> OS (which was quite outdated) at those sites, and that is when the
> problems began.
>
> Initially we did the replacement at only two smaller sites with fewer
> users, and we had already noticed some increase in CPU load. We
> updated to SLES11 SP1 and things seemed to improve a little.
>
> At one particular site, however, with about 300 users, the server's
> performance is simply dreadful.
> The CPU load climbs so high that sometimes I can barely log in to look
> at the syslog, and I often have to shut down several services or
> reboot to get back to normal.
>
> We have already made several kernel tuning adjustments and various
> other tuning adjustments to the applications the server runs
> (especially the most important services such as drbd, heartbeat, ldap,
> nfsserver, etc.). Nothing seems to have any effect on the problem; no
> considerable improvement even after dozens of adjustments.
>
> Since we have two identical servers (one in failover mode, because of
> HA), we tested by bringing all services up on the backup server, to
> rule out disk and/or hardware problems on the primary machine, but the
> problems continued on the other server as well.
>
> When the load is very high, syslog starts generating several dumps in
> /var/log/messages (shown below).
>
> Apparently there are no I/O problems (we even added a RAID to improve
> disk performance and made several adjustments, but nothing solved it
> or had any effect).
> What we notice is that there is no correlation between iowait and CPU
> load; that is, when the load is high, the disk shows no strain. It
> seems to be something to do with memory, but the old server handled
> the job with 4 GB on OpenSuSE 10.2, while this server, even beefier
> and with twice the memory, does not.
>
> Frankly, we are going to try a downgrade of the OS, because inferior
> hardware, running basically the same services for the same number of
> users, worked very well with OpenSuSE 10.2.
>
> Below is a description of the hardware, software and services used on
> the server, and further down some messages that appear in the syslog.
>
> If anyone can help with any hint, I would be extremely grateful (any
> help is welcome).
>
> Server> Dell PowerEdge 1900
> 2 x Intel(R) Xeon(R) CPU E5310 1.60GHz DualCore
> 8G RAM
> 4 SAS HDs, 15000 rpm
>
> Software> SuSE Linux Enterprise Server 11 - Service Pack 1
> Kernel> Linux srv-linux 2.6.32.12-0.7-default #1 SMP 2010-05-20 11:14:20
> +0200 x86_64 x86_64 x86_64 GNU/Linux
>
> Basic services running on this server: linux-ha (drbd+heartbeat),
> openldap, qmail-ldap, samba-ldap, nfsserver, dhcp, named, squid and
> jabberd
> Number of users: 300
> Linux users have their HOMEDIR mounted over NFS
> Windows users use SAMBA for group file shares and/or profile backup
>
> top - 10:33:37 up 57 min, 19 users, load average: 40.44, 49.96, 42.26
> Tasks: 510 total, 1 running, 509 sleeping, 0 stopped, 0 zombie
> Cpu(s): 1.3%us, 1.5%sy, 0.0%ni, 94.2%id, 1.7%wa, 0.0%hi, 1.4%si, 0.0%st
> Mem: 8188816k total, 8137392k used, 51424k free, 57116k buffers
> Swap: 2104432k total, 0k used, 2104432k free, 7089980k cached
>
> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
> 9901 qscand 20 0 207m 164m 2032 S 0 2.1 0:04.63 clamd
> 4074 root 20 0 358m 57m 1992 S 0 0.7 0:03.03 nscd
> 9016 named 20 0 320m 54m 2464 S 0 0.7 0:17.37 named
> 22761 root 20 0 115m 50m 4604 S 0 0.6 0:02.30 nxagent
> 23477 root 20 0 597m 33m 21m S 0 0.4 0:01.20 plasma-desktop
> 23357 root 20 0 453m 30m 23m S 0 0.4 0:00.51 kwin
> 9028 ldap 20 0 1930m 26m 4564 S 0 0.3 1:36.51 slapd
> 9248 root 20 0 324m 24m 17m S 0 0.3 0:03.92 kdm_greet
> 24164 root 20 0 486m 23m 16m S 0 0.3 0:00.35 krunner
> 10870 root 20 0 24548 20m 1168 S 2 0.3 0:22.59 jabberd
> 9014 root 20 0 120m 19m 5328 S 0 0.2 0:03.04 Xorg
> 24283 root 20 0 173m 19m 14m S 0 0.2 0:00.18 kdialog
> 22940 root 20 0 290m 18m 12m S 0 0.2 0:00.22 kded4
> 24275 root 20 0 191m 18m 13m S 0 0.2 0:00.22 kupdateapplet
> 24270 root 20 0 237m 16m 10m S 0 0.2 0:00.11 kmix
> 4061 root -2 0 92828 16m 8476 S 0 0.2 0:01.18 heartbeat
> 24274 root 20 0 284m 15m 9.9m S 0 0.2 0:00.10 klipper
> 23299 root
20 0 309m 14m 9844 S 0 0.2 0:00.08 ksmserver > 22899 root 20 0 201m 14m 10m S 0 0.2 0:00.10 kdeinit4 > 23743 root 20 0 228m 12m 7856 S 0 0.2 0:00.10 kglobalaccel > 24167 root 20 0 235m 12m 7760 S 0 0.2 0:00.04 nepomukserver > > # /usr/bin/uptime > 11:04am up 0:18, 7 users, load average: 27.52, 18.60, 10.27 > > # /usr/bin/vmstat 1 4 > procs -----------memory---------- ---swap-- -----io---- -system-- > -----cpu------ > r b swpd free buff cache si so bi bo in cs us sy > id wa st > 2 0 0 50856 19300 7196808 0 0 507 378 1167 1175 3 3 > 88 6 0 > 0 0 0 41332 19300 7200960 0 0 176 1279 14284 10519 2 > 2 93 2 0 > 1 0 0 43184 19184 7181520 0 0 0 1074 7191 1856 0 1 > 99 0 0 > 0 0 0 43316 19128 7179868 0 0 0 1189 2237 2340 1 0 > 99 0 0 > > # /usr/bin/vmstat 1 4 > procs -----------memory---------- ---swap-- -----io---- -system-- > -----cpu------ > r b swpd free buff cache si so bi bo in cs us sy > id wa st > 0 1 0 47276 19048 7177788 0 0 498 384 1166 1171 3 3 > 88 6 0 > 1 0 0 46128 19056 7167016 0 0 36 970 7530 4158 2 1 > 95 2 0 > 0 1 0 46452 19064 7163616 0 0 20 798 1411 1749 2 1 > 97 0 0 > 0 0 0 46868 19064 7162624 0 0 56 751 7079 2169 1 1 > 97 0 0 > > Nov 4 09:57:53 srv-linux kernel: [ 1284.893013] The following is only > an harmless informational message. > Nov 4 09:57:53 srv-linux kernel: [ 1284.893019] Unless you get a > _continuous_flood_ of these messages it means > Nov 4 09:57:53 srv-linux kernel: [ 1284.893021] everything is working > fine. Allocations from irqs cannot be > Nov 4 09:57:53 srv-linux kernel: [ 1284.893023] perfectly reliable and > the kernel is designed to handle that. > Nov 4 09:57:53 srv-linux kernel: [ 1284.893028] swapper: page > allocation failure. 
order:0, mode:0x20, alloc_flags:0x30 pflags:0x10200042 > Nov 4 09:57:53 srv-linux kernel: [ 1284.893032] Pid: 0, comm: swapper > Tainted: G X 2.6.32.12-0.7-default #1 > Nov 4 09:57:53 srv-linux kernel: [ 1284.893035] Call Trace: > Nov 4 09:57:53 srv-linux kernel: [ 1284.893054] [] > dump_trace+0x6c/0x2d0 > Nov 4 09:57:53 srv-linux kernel: [ 1284.893063] [] > dump_stack+0x69/0x71 > Nov 4 09:57:53 srv-linux kernel: [ 1284.893070] [] > __alloc_pages_slowpath+0x3ed/0x550 > Nov 4 09:57:53 srv-linux kernel: [ 1284.893077] [] > __alloc_pages_nodemask+0x13a/0x140 > Nov 4 09:57:53 srv-linux kernel: [ 1284.893084] [] > kmem_getpages+0x56/0x170 > Nov 4 09:57:53 srv-linux kernel: [ 1284.893089] [] > fallback_alloc+0x166/0x230 > Nov 4 09:58:12 srv-linux kernel: [ 1284.893095] [] > kmem_cache_alloc+0x192/0x1b0 > Nov 4 09:58:12 srv-linux kernel: [ 1284.893102] [] > skb_clone+0x3a/0x80 > Nov 4 09:58:12 srv-linux kernel: [ 1284.893109] [] > dev_queue_xmit_nit+0x82/0x170 > Nov 4 09:58:12 srv-linux kernel: [ 1284.893114] [] > dev_hard_start_xmit+0x4a/0x210 > Nov 4 09:58:12 srv-linux kernel: [ 1284.893120] [] > sch_direct_xmit+0x16e/0x1e0 > Nov 4 09:58:12 srv-linux kernel: [ 1284.893126] [] > dev_queue_xmit+0x366/0x4d0 > Nov 4 09:58:12 srv-linux kernel: [ 1284.893132] [] > ip_queue_xmit+0x210/0x420 > Nov 4 09:58:12 srv-linux kernel: [ 1284.893139] [] > tcp_transmit_skb+0x4cb/0x760 > Nov 4 09:58:12 srv-linux kernel: [ 1284.893145] [] > tcp_delack_timer+0x14f/0x2a0 > Nov 4 09:58:12 srv-linux kernel: [ 1284.893151] [] > run_timer_softirq+0x174/0x240 > Nov 4 09:58:12 srv-linux kernel: [ 1284.893157] [] > __do_softirq+0xbf/0x170 > Nov 4 09:58:12 srv-linux kernel: [ 1284.893163] [] > call_softirq+0x1c/0x30 > Nov 4 09:58:12 srv-linux kernel: [ 1284.893168] [] > do_softirq+0x4d/0x80 > Nov 4 09:58:12 srv-linux kernel: [ 1284.893173] [] > irq_exit+0x85/0x90 > Nov 4 09:58:12 srv-linux kernel: [ 1284.893178] [] > smp_apic_timer_interrupt+0x6c/0xa0 > Nov 4 09:58:12 srv-linux kernel: [ 1284.893185] 
[] > apic_timer_interrupt+0x13/0x20 > Nov 4 10:21:17 srv-linux kernel: [ 2687.090713] 449274 pages non-shared > Nov 4 10:21:17 srv-linux kernel: [ 2687.132671] The following is only > an harmless informational message. > Nov 4 10:21:17 srv-linux kernel: [ 2687.132677] Unless you get a > _continuous_flood_ of these messages it means > Nov 4 10:21:17 srv-linux kernel: [ 2687.132680] everything is working > fine. Allocations from irqs cannot be > Nov 4 10:21:17 srv-linux kernel: [ 2687.132683] perfectly reliable and > the kernel is designed to handle that. > Nov 4 10:21:17 srv-linux kernel: [ 2687.132688] swapper: page > allocation failure. order:0, mode:0x20, alloc_flags:0x30 pflags:0x10200042 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132696] Pid: 0, comm: swapper > Tainted: G X 2.6.32.12-0.7-default #1 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132699] Call Trace: > Nov 4 10:21:17 srv-linux kernel: [ 2687.132719] [] > dump_trace+0x6c/0x2d0 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132729] [] > dump_stack+0x69/0x71 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132738] [] > __alloc_pages_slowpath+0x3ed/0x550 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132746] [] > __alloc_pages_nodemask+0x13a/0x140 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132754] [] > kmem_getpages+0x56/0x170 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132761] [] > fallback_alloc+0x166/0x230 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132768] [] > kmem_cache_alloc+0x192/0x1b0 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132777] [] > skb_clone+0x3a/0x80 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132788] [] > packet_rcv_spkt+0x78/0x190 [af_packet] > Nov 4 10:21:17 srv-linux kernel: [ 2687.132807] [] > netif_receive_skb+0x3a2/0x660 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132819] [] > bnx2_rx_int+0x59d/0x820 [bnx2] > Nov 4 10:21:17 srv-linux kernel: [ 2687.132836] [] > bnx2_poll_work+0x6f/0x90 [bnx2] > Nov 4 10:21:17 srv-linux kernel: [ 2687.132851] [] > bnx2_poll+0x61/0x1cc [bnx2] > Nov 4 10:21:17 srv-linux kernel: 
[ 2687.132865] [] > net_rx_action+0xe3/0x1a0 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132873] [] > __do_softirq+0xbf/0x170 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132881] [] > call_softirq+0x1c/0x30 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132887] [] > do_softirq+0x4d/0x80 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132893] [] > irq_exit+0x85/0x90 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132899] [] > do_IRQ+0x6e/0xe0 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132907] [] > ret_from_intr+0x0/0xa > Nov 4 10:21:17 srv-linux kernel: [ 2687.132915] [] > mwait_idle+0x62/0x70 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132922] [] > cpu_idle+0x5a/0xb0 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132926] Mem-Info: > Nov 4 10:21:17 srv-linux kernel: [ 2687.132929] Node 0 DMA per-cpu: > Nov 4 10:21:17 srv-linux kernel: [ 2687.132934] CPU 0: hi: 0, > btch: 1 usd: 0 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132938] CPU 1: hi: 0, > btch: 1 usd: 0 > ov 4 10:21:17 srv-linux kernel: [ 2687.132938] CPU 1: hi: 0, > btch: 1 usd: 0 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132941] CPU 2: hi: 0, > btch: 1 usd: 0 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132945] CPU 3: hi: 0, > btch: 1 usd: 0 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132948] CPU 4: hi: 0, > btch: 1 usd: 0 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132951] CPU 5: hi: 0, > btch: 1 usd: 0 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132955] CPU 6: hi: 0, > btch: 1 usd: 0 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132958] CPU 7: hi: 0, > btch: 1 usd: 0 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132961] Node 0 DMA32 per-cpu: > Nov 4 10:21:17 srv-linux kernel: [ 2687.132966] CPU 0: hi: 186, > btch: 31 usd: 32 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132969] CPU 1: hi: 186, > btch: 31 usd: 90 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132973] CPU 2: hi: 186, > btch: 31 usd: 140 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132976] CPU 3: hi: 186, > btch: 31 usd: 166 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132979] CPU 4: hi: 186, > btch: 31 
usd: 14 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132983] CPU 5: hi: 186, > btch: 31 usd: 119 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132986] CPU 6: hi: 186, > btch: 31 usd: 45 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132989] CPU 7: hi: 186, > btch: 31 usd: 191 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132992] Node 0 Normal per-cpu: > Nov 4 10:21:17 srv-linux kernel: [ 2687.132997] CPU 0: hi: 186, > btch: 31 usd: 16 > Nov 4 10:21:17 srv-linux kernel: [ 2687.133000] CPU 1: hi: 186, > btch: 31 usd: 4 > Nov 4 10:21:17 srv-linux kernel: [ 2687.133003] CPU 2: hi: 186, > btch: 31 usd: 44 > Nov 4 10:21:17 srv-linux kernel: [ 2687.133006] CPU 3: hi: 186, > btch: 31 usd: 164 > Nov 4 10:21:17 srv-linux kernel: [ 2687.133010] CPU 4: hi: 186, > btch: 31 usd: 98 > Nov 4 10:21:17 srv-linux kernel: [ 2687.133013] CPU 5: hi: 186, > btch: 31 usd: 19 > Nov 4 10:21:17 srv-linux kernel: [ 2687.133017] CPU 6: hi: 186, > btch: 31 usd: 76 > Nov 4 10:21:17 srv-linux kernel: [ 2687.133020] CPU 7: hi: 186, > btch: 31 usd: 192 > Nov 4 10:21:17 srv-linux kernel: [ 2687.133028] active_anon:90321 > inactive_anon:23282 isolated_anon:0 > Nov 4 10:21:17 srv-linux kernel: [ 2687.133029] active_file:56108 > inactive_file:1701629 isolated_file:0 > Nov 4 10:21:17 srv-linux kernel: [ 2687.133030] unevictable:5709 > dirty:677685 writeback:2 unstable:0 > Nov 4 10:21:17 srv-linux kernel: [ 2687.133032] free:9755 > slab_reclaimable:66787 slab_unreclaimable:50212 > Nov 4 10:21:17 srv-linux kernel: [ 2687.133033] mapped:13499 shmem:67 > pagetables:6893 bounce:0 > Nov 4 10:21:17 srv-linux kernel: [ 2687.133037] Node 0 DMA free:15692kB > min:20kB low:24kB high:28kB active_anon:0kB inactive_anon:0kB > active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB > isolated(file):0kB present:15320kB mlocked:0kB dirty:0kB writeback:0kB > mapped:0kB shmem:0kB slab_reclaimable:0kB slab_unreclaimable:0kB > kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB > writeback_tmp:0kB pages_scanned:0 
all_unreclaimable? no > Nov 4 10:21:17 srv-linux kernel: [ 2687.133051] lowmem_reserve[]: 0 > 3251 8049 8049 > Nov 4 10:21:17 srv-linux kernel: [ 2687.133061] Node 0 DMA32 > free:20800kB min:4632kB low:5788kB high:6948kB active_anon:69388kB > inactive_anon:16256kB active_file:33564kB inactive_file:2898248kB > unevictable:0kB isolated(anon):0kB isolated(file):0kB present:3329568kB > mlocked:0kB dirty:1095648kB writeback:4kB mapped:1264kB shmem:16kB > slab_reclaimable:107716kB slab_unreclaimable:11264kB kernel_stack:776kB > pagetables:5120kB unstable:0kB bounce:0kB writeback_tmp:0kB > pages_scanned:0 all_unreclaimable? no > Nov 4 10:21:17 srv-linux kernel: [ 2687.133076] lowmem_reserve[]: 0 0 > 4797 4797 > Nov 4 10:21:17 srv-linux kernel: [ 2687.133086] Node 0 Normal > free:2528kB min:6836kB low:8544kB high:10252kB active_anon:291896kB > inactive_anon:76872kB active_file:190868kB inactive_file:3908268kB > unevictable:22836kB isolated(anon):0kB isolated(file):0kB > present:4912640kB mlocked:22836kB dirty:1615092kB writeback:4kB > mapped:52732kB shmem:252kB slab_reclaimable:159432kB > slab_unreclaimable:189584kB kernel_stack:4312kB pagetables:22452kB > unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 > all_unreclaimable? 
no > Nov 4 10:21:17 srv-linux kernel: [ 2687.133101] lowmem_reserve[]: 0 0 0 0 > Nov 4 10:21:17 srv-linux kernel: [ 2687.133110] Node 0 DMA: 3*4kB 4*8kB > 2*16kB 2*32kB 1*64kB 1*128kB 0*256kB 0*512kB 1*1024kB 1*2048kB 3*4096kB > = 15692kB > Nov 4 10:21:17 srv-linux kernel: [ 2687.133135] Node 0 DMA32: 1087*4kB > 1592*8kB 39*16kB 17*32kB 2*64kB 0*128kB 0*256kB 0*512kB 0*1024kB > 1*2048kB 0*4096kB = 20428kB > Nov 4 10:21:17 srv-linux kernel: [ 2687.133160] Node 0 Normal: 110*4kB > 7*8kB 4*16kB 2*32kB 2*64kB 2*128kB 0*256kB 0*512kB 1*1024kB 0*2048kB > 0*4096kB = 2032kB > Nov 4 10:21:17 srv-linux kernel: [ 2687.133185] 1759923 total pagecache > pages > Nov 4 10:21:17 srv-linux kernel: [ 2687.133188] 0 pages in swap cache > Nov 4 10:21:17 srv-linux kernel: [ 2687.133191] Swap cache stats: add > 0, delete 0, find 0/0 > Nov 4 10:21:17 srv-linux kernel: [ 2687.133194] Free swap = 2104432kB > Nov 4 10:21:17 srv-linux kernel: [ 2687.133197] Total swap = 2104432kB > Nov 4 10:21:17 srv-linux kernel: [ 2687.136597] 2097152 pages RAM > Nov 4 10:21:17 srv-linux kernel: [ 2687.136597] 49948 pages reserved > Nov 4 10:21:17 srv-linux kernel: [ 2687.136597] 1656353 pages shared > Nov 4 10:21:17 srv-linux kernel: [ 2687.136597] 449267 pages non-shared > Nov 4 11:07:27 srv-linux kernel: [ 1293.436013] The following is only > an harmless informational message. > Nov 4 11:07:27 srv-linux kernel: [ 1293.436018] Unless you get a > _continuous_flood_ of these messages it means > Nov 4 11:07:27 srv-linux kernel: [ 1293.436020] everything is working > fine. Allocations from irqs cannot be > Nov 4 11:07:27 srv-linux kernel: [ 1293.436022] perfectly reliable and > the kernel is designed to handle that. > Nov 4 11:07:27 srv-linux kernel: [ 1293.436026] swapper: page > allocation failure. 
order:0, mode:0x20, alloc_flags:0x30 pflags:0x10200042 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436031] Pid: 0, comm: swapper > Tainted: G X 2.6.32.12-0.7-default #1 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436034] Call Trace: > Nov 4 11:07:27 srv-linux kernel: [ 1293.436052] [] > dump_trace+0x6c/0x2d0 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436061] [] > dump_stack+0x69/0x71 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436069] [] > __alloc_pages_slowpath+0x3ed/0x550 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436075] [] > __alloc_pages_nodemask+0x13a/0x140 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436083] [] > kmem_getpages+0x56/0x170 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436088] [] > fallback_alloc+0x166/0x230 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436094] [] > kmem_cache_alloc+0x192/0x1b0 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436101] [] > skb_clone+0x3a/0x80 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436108] [] > dev_queue_xmit_nit+0x82/0x170 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436113] [] > dev_hard_start_xmit+0x4a/0x210 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436119] [] > sch_direct_xmit+0x16e/0x1e0 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436125] [] > dev_queue_xmit+0x366/0x4d0 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436131] [] > ip_queue_xmit+0x210/0x420 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436138] [] > tcp_transmit_skb+0x4cb/0x760 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436144] [] > tcp_delack_timer+0x14f/0x2a0 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436150] [] > run_timer_softirq+0x174/0x240 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436156] [] > __do_softirq+0xbf/0x170 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436162] [] > call_softirq+0x1c/0x30 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436167] [] > do_softirq+0x4d/0x80 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436171] [] > irq_exit+0x85/0x90 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436177] [] > smp_apic_timer_interrupt+0x6c/0xa0 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436184] 
[] > apic_timer_interrupt+0x13/0x20 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436191] [] > mwait_idle+0x62/0x70 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436196] [] > cpu_idle+0x5a/0xb0 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436200] Mem-Info: > Nov 4 11:07:27 srv-linux kernel: [ 1293.436202] Node 0 DMA per-cpu: > Nov 4 11:07:27 srv-linux kernel: [ 1293.436205] CPU 0: hi: 0, > btch: 1 usd: 0 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436208] CPU 1: hi: 0, > btch: 1 usd: 0 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436210] CPU 2: hi: 0, > btch: 1 usd: 0 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436213] CPU 3: hi: 0, > btch: 1 usd: 0 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436215] CPU 4: hi: 0, > btch: 1 usd: 0 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436217] CPU 5: hi: 0, > btch: 1 usd: 0 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436220] CPU 6: hi: 0, > btch: 1 usd: 0 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436222] CPU 7: hi: 0, > btch: 1 usd: 0 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436224] Node 0 DMA32 per-cpu: > Nov 4 11:07:27 srv-linux kernel: [ 1293.436227] CPU 0: hi: 186, > btch: 31 usd: 30 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436229] CPU 1: hi: 186, > btch: 31 usd: 186 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436232] CPU 2: hi: 186, > btch: 31 usd: 147 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436234] CPU 3: hi: 186, > btch: 31 usd: 174 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436236] CPU 4: hi: 186, > btch: 31 usd: 92 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436239] CPU 5: hi: 186, > btch: 31 usd: 49 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436241] CPU 6: hi: 186, > btch: 31 usd: 141 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436244] CPU 7: hi: 186, > btch: 31 usd: 142 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436245] Node 0 Normal per-cpu: > Nov 4 11:07:27 srv-linux kernel: [ 1293.436248] CPU 0: hi: 186, > btch: 31 usd: 46 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436250] CPU 1: hi: 186, > btch: 31 usd: 158 > Nov 4 11:07:27 srv-linux kernel: 
[ 1293.436253] CPU 2: hi: 186, > btch: 31 usd: 151 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436255] CPU 3: hi: 186, > btch: 31 usd: 39 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436257] CPU 4: hi: 186, > btch: 31 usd: 114 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436260] CPU 5: hi: 186, > btch: 31 usd: 59 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436262] CPU 6: hi: 186, > btch: 31 usd: 124 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436265] CPU 7: hi: 186, > btch: 31 usd: 173 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436271] active_anon:121650 > inactive_anon:21539 isolated_anon:0 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436272] active_file:65104 > inactive_file:1679351 isolated_file:0 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436273] unevictable:5709 > dirty:474043 writeback:6102 unstable:0 > Nov 4 11:07:28 srv-linux kernel: [ 1293.436275] free:9712 > slab_reclaimable:51092 slab_unreclaimable:49524 > Nov 4 11:07:28 srv-linux kernel: [ 1293.436276] mapped:13595 shmem:109 > pagetables:6308 bounce:0 > Nov 4 11:07:28 srv-linux kernel: [ 1293.436279] Node 0 DMA free:15692kB > min:20kB low:24kB high:28kB active_anon:0kB inactive_anon:0kB > active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB > isolated(file):0kB present:15320kB mlocked:0kB dirty:0kB writeback:0kB > mapped:0kB shmem:0kB slab_reclaimable:0kB slab_unreclaimable:0kB > kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB > writeback_tmp:0kB pages_scanned:0 all_unreclaimable? 
yes > Nov 4 11:07:28 srv-linux kernel: [ 1293.436290] lowmem_reserve[]: 0 > 3251 8049 8049 > Nov 4 11:07:28 srv-linux kernel: [ 1293.436295] Node 0 DMA32 > free:20696kB min:4632kB low:5788kB high:6948kB active_anon:79808kB > inactive_anon:17188kB active_file:55724kB inactive_file:2866240kB > unevictable:0kB isolated(anon):0kB isolated(file):0kB present:3329568kB > mlocked:0kB dirty:1287108kB writeback:8764kB mapped:168kB shmem:76kB > slab_reclaimable:108972kB slab_unreclaimable:12288kB kernel_stack:824kB > pagetables:6980kB unstable:0kB bounce:0kB writeback_tmp:0kB > pages_scanned:0 all_unreclaimable? no > Nov 4 11:07:28 srv-linux kernel: [ 1293.436307] lowmem_reserve[]: 0 0 > 4797 4797 > Nov 4 11:07:28 srv-linux kernel: [ 1293.436311] Node 0 Normal > free:2460kB min:6836kB low:8544kB high:10252kB active_anon:406792kB > inactive_anon:68968kB active_file:204692kB inactive_file:3851164kB > unevictable:22836kB isolated(anon):0kB isolated(file):0kB > present:4912640kB mlocked:22836kB dirty:609064kB writeback:15644kB > mapped:54212kB shmem:360kB slab_reclaimable:95396kB > slab_unreclaimable:185808kB kernel_stack:3912kB pagetables:18252kB > unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 > all_unreclaimable? 
no > Nov 4 11:07:28 srv-linux kernel: [ 1293.436323] lowmem_reserve[]: 0 0 0 0 > Nov 4 11:07:28 srv-linux kernel: [ 1293.436327] Node 0 DMA: 3*4kB 4*8kB > 2*16kB 2*32kB 1*64kB 1*128kB 0*256kB 0*512kB 1*1024kB 1*2048kB 3*4096kB > = 15692kB > Nov 4 11:07:28 srv-linux kernel: [ 1293.436339] Node 0 DMA32: 53*4kB > 586*8kB 309*16kB 50*32kB 9*64kB 5*128kB 2*256kB 1*512kB 0*1024kB > 1*2048kB 1*4096kB = 19828kB > Nov 4 11:07:28 srv-linux kernel: [ 1293.436350] Node 0 Normal: 8*4kB > 12*8kB 1*16kB 3*32kB 1*64kB 0*128kB 0*256kB 1*512kB 1*1024kB 0*2048kB > 0*4096kB = 1840kB > Nov 4 11:07:28 srv-linux kernel: [ 1293.436361] 1746592 total pagecache > pages > Nov 4 11:07:28 srv-linux kernel: [ 1293.436363] 0 pages in swap cache > Nov 4 11:07:28 srv-linux kernel: [ 1293.436365] Swap cache stats: add > 0, delete 0, find 0/0 > Nov 4 11:07:28 srv-linux kernel: [ 1293.436367] Free swap = 2104432kB > Nov 4 11:07:28 srv-linux kernel: [ 1293.436369] Total swap = 2104432kB > Nov 4 11:07:28 srv-linux kernel: [ 1293.445967] 2097152 pages RAM > Nov 4 11:07:28 srv-linux kernel: [ 1293.445967] 49948 pages reserved > Nov 4 11:07:28 srv-linux kernel: [ 1293.445967] 1080140 pages shared > Nov 4 11:07:28 srv-linux kernel: [ 1293.445967] 1014865 pages non-shared > Nov 4 11:07:28 srv-linux kernel: [ 1293.480826] The following is only > an harmless informational message. > Nov 4 11:07:28 srv-linux kernel: [ 1293.480832] Unless you get a > _continuous_flood_ of these messages it means > Nov 4 11:07:28 srv-linux kernel: [ 1293.480838] everything is working > fine. Allocations from irqs cannot be > Nov 4 11:07:28 srv-linux kernel: [ 1293.480843] perfectly reliable and > the kernel is designed to handle that. > Nov 4 11:07:28 srv-linux kernel: [ 1293.480850] swapper: page > allocation failure. 
order:0, mode:0x20, alloc_flags:0x30 pflags:0x10200042 > Nov 4 11:07:28 srv-linux kernel: [ 1293.480856] Pid: 0, comm: swapper > Tainted: G X 2.6.32.12-0.7-default #1 > Nov 4 11:07:28 srv-linux kernel: [ 1293.480862] Call Trace: > Nov 4 11:07:28 srv-linux kernel: [ 1293.480883] [] > dump_trace+0x6c/0x2d0 > Nov 4 11:07:28 srv-linux kernel: [ 1293.480897] [] > dump_stack+0x69/0x71 > Nov 4 11:07:28 srv-linux kernel: [ 1293.480910] [] > __alloc_pages_slowpath+0x3ed/0x550 > Nov 4 11:07:28 srv-linux kernel: [ 1293.480921] [] > __alloc_pages_nodemask+0x13a/0x140 > Nov 4 11:07:28 srv-linux kernel: [ 1293.480933] [] > kmem_getpages+0x56/0x170 > Nov 4 11:07:28 srv-linux kernel: [ 1293.480944] [] > fallback_alloc+0x166/0x230 > Nov 4 11:07:28 srv-linux kernel: [ 1293.480955] [] > kmem_cache_alloc+0x192/0x1b0 > Nov 4 11:07:28 srv-linux kernel: [ 1293.480967] [] > skb_clone+0x3a/0x80 > Nov 4 11:07:28 srv-linux kernel: [ 1293.480979] [] > dev_queue_xmit_nit+0x82/0x170 > Nov 4 11:07:28 srv-linux kernel: [ 1293.480990] [] > dev_hard_start_xmit+0x4a/0x210 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481000] [] > sch_direct_xmit+0x16e/0x1e0 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481010] [] > __qdisc_run+0xaf/0x100 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481021] [] > dev_queue_xmit+0x4cb/0x4d0 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481032] [] > ip_queue_xmit+0x210/0x420 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481044] [] > tcp_transmit_skb+0x4cb/0x760 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481054] [] > tcp_delack_timer+0x14f/0x2a0 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481066] [] > run_timer_softirq+0x174/0x240 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481077] [] > __do_softirq+0xbf/0x170 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481088] [] > call_softirq+0x1c/0x30 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481098] [] > do_softirq+0x4d/0x80 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481108] [] > irq_exit+0x85/0x90 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481118] [] > 
smp_apic_timer_interrupt+0x6c/0xa0 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481131] [] > apic_timer_interrupt+0x13/0x20 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481142] [] > mwait_idle+0x62/0x70 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481152] [] > cpu_idle+0x5a/0xb0 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481159] Mem-Info: > Nov 4 11:07:28 srv-linux kernel: [ 1293.481163] Node 0 DMA per-cpu: > Nov 4 11:07:28 srv-linux kernel: [ 1293.481173] CPU 0: hi: 0, > btch: 1 usd: 0 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481178] CPU 1: hi: 0, > btch: 1 usd: 0 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481184] CPU 2: hi: 0, > btch: 1 usd: 0 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481189] CPU 3: hi: 0, > btch: 1 usd: 0 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481195] CPU 4: hi: 0, > btch: 1 usd: 0 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481200] CPU 5: hi: 0, > btch: 1 usd: 0 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481206] CPU 6: hi: 0, > btch: 1 usd: 0 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481211] CPU 7: hi: 0, > btch: 1 usd: 0 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481216] Node 0 DMA32 per-cpu: > Nov 4 11:07:28 srv-linux kernel: [ 1293.481226] CPU 0: hi: 186, > btch: 31 usd: 30 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481231] CPU 1: hi: 186, > btch: 31 usd: 186 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481237] CPU 2: hi: 186, > btch: 31 usd: 147 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481242] CPU 3: hi: 186, > btch: 31 usd: 174 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481248] CPU 4: hi: 186, > btch: 31 usd: 92 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481253] CPU 5: hi: 186, > btch: 31 usd: 49 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481259] CPU 6: hi: 186, > btch: 31 usd: 141 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481264] CPU 7: hi: 186, > btch: 31 usd: 142 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481269] Node 0 Normal per-cpu: > Nov 4 11:07:28 srv-linux kernel: [ 1293.481278] CPU 0: hi: 186, > btch: 31 usd: 46 > Nov 4 11:07:28 srv-linux kernel: 
[ 1293.481284] CPU 1: hi: 186, > btch: 31 usd: 158 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481289] CPU 2: hi: 186, > btch: 31 usd: 151 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481295] CPU 3: hi: 186, > btch: 31 usd: 39 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481300] CPU 4: hi: 186, > btch: 31 usd: 114 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481306] CPU 5: hi: 186, > btch: 31 usd: 59 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481311] CPU 6: hi: 186, > btch: 31 usd: 124 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481316] CPU 7: hi: 186, > btch: 31 usd: 173 > Nov 4 11:07:29 srv-linux kernel: [ 1293.481325] active_anon:121650 > inactive_anon:21539 isolated_anon:0 > Nov 4 11:07:29 srv-linux kernel: [ 1293.481327] active_file:65104 > inactive_file:1679351 isolated_file:0 > Nov 4 11:07:29 srv-linux kernel: [ 1293.481328] unevictable:5709 > dirty:474043 writeback:6102 unstable:0 > Nov 4 11:07:29 srv-linux kernel: [ 1293.481329] free:9712 > slab_reclaimable:51092 slab_unreclaimable:49524 > Nov 4 11:07:29 srv-linux kernel: [ 1293.481330] mapped:13595 shmem:109 > pagetables:6308 bounce:0 > Nov 4 11:07:29 srv-linux kernel: [ 1293.481336] Node 0 DMA free:15692kB > min:20kB low:24kB high:28kB active_anon:0kB inactive_anon:0kB > active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB > isolated(file):0kB present:15320kB mlocked:0kB dirty:0kB writeback:0kB > mapped:0kB shmem:0kB slab_reclaimable:0kB slab_unreclaimable:0kB > kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB > writeback_tmp:0kB pages_scanned:0 all_unreclaimable? 
yes > Nov 4 11:07:29 srv-linux kernel: [ 1293.481354] lowmem_reserve[]: 0 > 3251 8049 8049 > Nov 4 11:07:29 srv-linux kernel: [ 1293.481377] Node 0 DMA32 > free:20696kB min:4632kB low:5788kB high:6948kB active_anon:79808kB > inactive_anon:17188kB active_file:55724kB inactive_file:2866240kB > unevictable:0kB isolated(anon):0kB isolated(file):0kB present:3329568kB > mlocked:0kB dirty:1287108kB writeback:8764kB mapped:168kB shmem:76kB > slab_reclaimable:108972kB slab_unreclaimable:12288kB kernel_stack:824kB > pagetables:6980kB unstable:0kB bounce:0kB writeback_tmp:0kB > pages_scanned:0 all_unreclaimable? no > Nov 4 11:07:29 srv-linux kernel: [ 1293.481396] lowmem_reserve[]: 0 0 > 4797 4797 > Nov 4 11:07:29 srv-linux kernel: [ 1293.481419] Node 0 Normal > free:2460kB min:6836kB low:8544kB high:10252kB active_anon:406792kB > inactive_anon:68968kB active_file:204692kB inactive_file:3851164kB > unevictable:22836kB isolated(anon):0kB isolated(file):0kB > present:4912640kB mlocked:22836kB dirty:609064kB writeback:15644kB > mapped:54212kB shmem:360kB slab_reclaimable:95396kB > slab_unreclaimable:185808kB kernel_stack:3912kB pagetables:18252kB > unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 > all_unreclaimable? 
no > Nov 4 11:07:29 srv-linux kernel: [ 1293.481438] lowmem_reserve[]: 0 0 0 0 > Nov 4 11:07:29 srv-linux kernel: [ 1293.481462] Node 0 DMA: 3*4kB 4*8kB > 2*16kB 2*32kB 1*64kB 1*128kB 0*256kB 0*512kB 1*1024kB 1*2048kB 3*4096kB > = 15692kB > Nov 4 11:07:29 srv-linux kernel: [ 1293.481518] Node 0 DMA32: 53*4kB > 586*8kB 309*16kB 50*32kB 9*64kB 5*128kB 2*256kB 1*512kB 0*1024kB > 1*2048kB 1*4096kB = 19828kB > Nov 4 11:07:29 srv-linux kernel: [ 1293.481574] Node 0 Normal: 8*4kB > 12*8kB 1*16kB 3*32kB 1*64kB 0*128kB 0*256kB 1*512kB 1*1024kB 0*2048kB > 0*4096kB = 1840kB > Nov 4 11:07:29 srv-linux kernel: [ 1293.481630] 1746592 total pagecache > pages > Nov 4 11:07:29 srv-linux kernel: [ 1293.481635] 0 pages in swap cache > Nov 4 11:07:29 srv-linux kernel: [ 1293.481641] Swap cache stats: add > 0, delete 0, find 0/0 > Nov 4 11:07:29 srv-linux kernel: [ 1293.481646] Free swap = 2104432kB > Nov 4 11:07:29 srv-linux kernel: [ 1293.481651] Total swap = 2104432kB > Nov 4 11:07:29 srv-linux kernel: [ 1293.484802] 2097152 pages RAM > Nov 4 11:07:29 srv-linux kernel: [ 1293.484802] 49948 pages reserved > Nov 4 11:07:29 srv-linux kernel: [ 1293.484802] 1079742 pages shared > Nov 4 11:07:29 srv-linux kernel: [ 1293.484802] 1013515 pages non-shared > > > __ masoch-l list > https://eng.registro.br/mailman/listinfo/masoch-l > From hamilton at theforce.com.br Thu Nov 4 14:29:10 2010 From: hamilton at theforce.com.br (Hamilton Vera) Date: Thu, 4 Nov 2010 14:29:10 -0200 Subject: [MASOCH-L] Problemas Dell PowerEdge 1900 com SLES11 SP1 In-Reply-To: <4CD2C56C.1020108@bhz.jamef.com.br> References: <4CD2C56C.1020108@bhz.jamef.com.br> Message-ID: Hi, check whether there is any problem with IRQs and the like, because at some point your network interface(s) are behaving strangely. However, I do not know whether this happens before, during or after the load goes up. If you run a cluster and your network interface misbehaves for some reason, that may be where your bottleneck is. 
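A side note on reading the dumps quoted in this thread: "order:0, mode:0x20" is an atomic (GFP_ATOMIC) allocation from softirq context (here skb_clone in the network path), and the Mem-Info lines show the Normal zone's free memory (free:2460kB) below its min watermark (min:6836kB), which is when atomic allocations, unable to wait for reclaim, start failing. A minimal sketch of that check, with the zone line copied verbatim from the dump (raising vm.min_free_kbytes is a commonly suggested mitigation, but treat that as a hypothesis to test, not a confirmed fix for this box):

```python
import re

# One zone line copied from the kernel dump quoted above.
zone_line = ("Node 0 Normal free:2460kB min:6836kB low:8544kB "
             "high:10252kB")

# Pull the watermark fields ("free", "min", "low", "high") out of the line.
fields = dict(re.findall(r"(\w+):(\d+)kB", zone_line))
free_kb, min_kb = int(fields["free"]), int(fields["min"])

# GFP_ATOMIC callers cannot sleep to reclaim memory, so once the zone
# drops below its min watermark they can fail, matching the repeated
# "swapper: page allocation failure. order:0, mode:0x20" messages.
print(free_kb < min_kb)  # True: Normal zone is below its min watermark
```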
mpstat lsdev cat /proc/interrupts Another thing you can try is the mainline kernel, which is at 2.6.36; since the hardware is theoretically reliable, you can resort to swapping the kernel. In the top output you showed I could not see what is consuming CPU; if you cannot see any other process taking up that much CPU time, there is probably some conflict over memory access or IRQs (as I mentioned above). []'s Hamilton Vera On November 4, 2010 at 12:38, Rejaine Monteiro wrote: > > Folks, > > I am writing to ask for help.. > I have a serious performance problem on a server running SLES11(SP1), > installed on a PowerEdge 1900 (configuration below) > > Here is what happens: > > At our sites we had several far inferior servers serving the same > number of users and the same services, but running OpenSuSE 10.2. > Everything worked perfectly well until then, but following our plan to > refresh the machine park, we chose to upgrade the hardware and OS > (which was quite outdated) at those sites, and that is when the > problems began. > > Initially we made the replacement at only two smaller sites with fewer > users, and we had already noticed a certain increase in CPU load. We > updated to SLES11 SP1 and things seemed to improve a little. > > However, at one site in particular, with about 300 users, server > performance is simply dreadful. > The CPU load climbs so high that sometimes I can barely log in to look > at the syslog, often having to shut down several services or reboot to > get back to normal. > > We have already made several kernel tuning adjustments and various > other tuning adjustments to the applications the server runs > (especially the most important services such as drbd, heartbeat, ldap, > nfsserver, etc.) Nothing seems to have any effect on the problem, no > considerable improvement even after dozens of adjustments. > > As we have two identical servers (one in failover mode, because of > HA), we tested bringing all services up on the backup server, to rule > out disk and/or hardware problems on the primary machine, but the > problems continued on the other server as well. > > When the load is very high, syslog starts generating several dumps in > /var/log/messages (described below) > > Apparently there are no I/O problems (we even added a RAID to improve > disk performance and made several adjustments, but nothing solved it > or had any effect) > What we have noticed is that there is no correlation between iowait > and CPU load; that is, when the load is high, the disk shows no > overload. It seems to be something to do with memory, but the old > server ran with 4G on OpenSuSE 10.2 and handled the job, while this > server, although even "beefier" and with twice the memory, does not. > > Honestly, we are going to try a downgrade of the OS, because inferior > hardware running basically the same services with the same number of > users worked very well with OpenSuSE 10.2 > > Below is a description of the hardware, software and services used on > the server, and further down some messages that appear in the syslog > > If anyone can help with any tip, I would be very grateful > (any help is welcome) > > Server> Dell PowerEdge 1900 > 2 x Intel(R) Xeon(R) CPU E5310 1.60GHz DualCore > 8G RAM > 4 SAS HDs 15000rpm > > Software> SUSE Linux Enterprise Server 11 - Service Pack 1 > Kernel> Linux srv-linux 2.6.32.12-0.7-default #1 SMP 2010-05-20 11:14:20 > +0200 x86_64 x86_64 x86_64 GNU/Linux > > Basic services running on this server: linux-ha > (drbd+heartbeat), openldap, qmail-ldap, samba-ldap, nfsserver, dhcp, > named, squid and jabberd > Number of users: 300 > Linux users have HOMEDIR mounted via NFS > Windows users use SAMBA for group file sharing and/or profile backup > > top - 10:33:37 up 57 min, 19 users, load average: 40.44, 49.96, 42.26 > Tasks: 510 total, 1 running, 509 sleeping, 0 stopped, 0 zombie > Cpu(s): 1.3%us, 1.5%sy, 0.0%ni, 94.2%id, 1.7%wa, 0.0%hi, 1.4%si, > 0.0%st > Mem: 8188816k total, 8137392k used, 51424k free, 57116k buffers > Swap: 2104432k total, 0k used, 2104432k free, 7089980k cached > > PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND > 9901 qscand 20 0 207m 164m 2032 S 0 2.1 0:04.63 clamd > 4074 root 20 0 358m 57m 1992 S 0 0.7 0:03.03 nscd > 9016 named 20 0 320m 54m 2464 S 0 0.7 0:17.37 named > 22761 root 20 0 115m 50m 4604 S 0 0.6 0:02.30 nxagent > 23477 root 20 0 597m 33m 21m S 0 0.4 0:01.20 plasma-desktop > 23357 root 20 0 453m 30m 23m S 0 0.4 0:00.51 kwin > 9028 ldap 20 0 1930m 26m 4564 S 0 0.3 1:36.51 slapd > 9248 root 20 0 324m 24m 17m S 0 0.3 0:03.92 kdm_greet > 24164 root 20 0 486m 23m 16m S 0 0.3 0:00.35 krunner > 10870 root 20 0 24548 20m 1168 S 2 0.3 0:22.59 jabberd > 9014 root 20 0 120m 19m 5328 S 0 0.2 0:03.04 Xorg > 24283 root 20 0 173m 19m 14m S 0 0.2 0:00.18 kdialog > 22940 root 20 0 290m 18m 12m S 0 0.2 0:00.22 kded4 > 24275 root 20 0 191m 18m 13m S 0 0.2 0:00.22 kupdateapplet > 24270 root 20 0 237m 16m 10m S 0 0.2 0:00.11 kmix > 4061 root -2 0 92828 16m 8476 S 0 0.2 0:01.18 heartbeat > 24274 root 20 0 284m 15m 9.9m S 0 0.2 0:00.10 klipper > 23299 root 
20 0 309m 14m 9844 S 0 0.2 0:00.08 ksmserver > 22899 root 20 0 201m 14m 10m S 0 0.2 0:00.10 kdeinit4 > 23743 root 20 0 228m 12m 7856 S 0 0.2 0:00.10 kglobalaccel > 24167 root 20 0 235m 12m 7760 S 0 0.2 0:00.04 nepomukserver > > # /usr/bin/uptime > 11:04am up 0:18, 7 users, load average: 27.52, 18.60, 10.27 > > # /usr/bin/vmstat 1 4 > procs -----------memory---------- ---swap-- -----io---- -system-- > -----cpu------ > r b swpd free buff cache si so bi bo in cs us sy > id wa st > 2 0 0 50856 19300 7196808 0 0 507 378 1167 1175 3 3 > 88 6 0 > 0 0 0 41332 19300 7200960 0 0 176 1279 14284 10519 2 > 2 93 2 0 > 1 0 0 43184 19184 7181520 0 0 0 1074 7191 1856 0 1 > 99 0 0 > 0 0 0 43316 19128 7179868 0 0 0 1189 2237 2340 1 0 > 99 0 0 > > # /usr/bin/vmstat 1 4 > procs -----------memory---------- ---swap-- -----io---- -system-- > -----cpu------ > r b swpd free buff cache si so bi bo in cs us sy > id wa st > 0 1 0 47276 19048 7177788 0 0 498 384 1166 1171 3 3 > 88 6 0 > 1 0 0 46128 19056 7167016 0 0 36 970 7530 4158 2 1 > 95 2 0 > 0 1 0 46452 19064 7163616 0 0 20 798 1411 1749 2 1 > 97 0 0 > 0 0 0 46868 19064 7162624 0 0 56 751 7079 2169 1 1 > 97 0 0 > > Nov 4 09:57:53 srv-linux kernel: [ 1284.893013] The following is only > an harmless informational message. > Nov 4 09:57:53 srv-linux kernel: [ 1284.893019] Unless you get a > _continuous_flood_ of these messages it means > Nov 4 09:57:53 srv-linux kernel: [ 1284.893021] everything is working > fine. Allocations from irqs cannot be > Nov 4 09:57:53 srv-linux kernel: [ 1284.893023] perfectly reliable and > the kernel is designed to handle that. > Nov 4 09:57:53 srv-linux kernel: [ 1284.893028] swapper: page > allocation failure. 
order:0, mode:0x20, alloc_flags:0x30 pflags:0x10200042 > Nov 4 09:57:53 srv-linux kernel: [ 1284.893032] Pid: 0, comm: swapper > Tainted: G X 2.6.32.12-0.7-default #1 > Nov 4 09:57:53 srv-linux kernel: [ 1284.893035] Call Trace: > Nov 4 09:57:53 srv-linux kernel: [ 1284.893054] [] > dump_trace+0x6c/0x2d0 > Nov 4 09:57:53 srv-linux kernel: [ 1284.893063] [] > dump_stack+0x69/0x71 > Nov 4 09:57:53 srv-linux kernel: [ 1284.893070] [] > __alloc_pages_slowpath+0x3ed/0x550 > Nov 4 09:57:53 srv-linux kernel: [ 1284.893077] [] > __alloc_pages_nodemask+0x13a/0x140 > Nov 4 09:57:53 srv-linux kernel: [ 1284.893084] [] > kmem_getpages+0x56/0x170 > Nov 4 09:57:53 srv-linux kernel: [ 1284.893089] [] > fallback_alloc+0x166/0x230 > Nov 4 09:58:12 srv-linux kernel: [ 1284.893095] [] > kmem_cache_alloc+0x192/0x1b0 > Nov 4 09:58:12 srv-linux kernel: [ 1284.893102] [] > skb_clone+0x3a/0x80 > Nov 4 09:58:12 srv-linux kernel: [ 1284.893109] [] > dev_queue_xmit_nit+0x82/0x170 > Nov 4 09:58:12 srv-linux kernel: [ 1284.893114] [] > dev_hard_start_xmit+0x4a/0x210 > Nov 4 09:58:12 srv-linux kernel: [ 1284.893120] [] > sch_direct_xmit+0x16e/0x1e0 > Nov 4 09:58:12 srv-linux kernel: [ 1284.893126] [] > dev_queue_xmit+0x366/0x4d0 > Nov 4 09:58:12 srv-linux kernel: [ 1284.893132] [] > ip_queue_xmit+0x210/0x420 > Nov 4 09:58:12 srv-linux kernel: [ 1284.893139] [] > tcp_transmit_skb+0x4cb/0x760 > Nov 4 09:58:12 srv-linux kernel: [ 1284.893145] [] > tcp_delack_timer+0x14f/0x2a0 > Nov 4 09:58:12 srv-linux kernel: [ 1284.893151] [] > run_timer_softirq+0x174/0x240 > Nov 4 09:58:12 srv-linux kernel: [ 1284.893157] [] > __do_softirq+0xbf/0x170 > Nov 4 09:58:12 srv-linux kernel: [ 1284.893163] [] > call_softirq+0x1c/0x30 > Nov 4 09:58:12 srv-linux kernel: [ 1284.893168] [] > do_softirq+0x4d/0x80 > Nov 4 09:58:12 srv-linux kernel: [ 1284.893173] [] > irq_exit+0x85/0x90 > Nov 4 09:58:12 srv-linux kernel: [ 1284.893178] [] > smp_apic_timer_interrupt+0x6c/0xa0 > Nov 4 09:58:12 srv-linux kernel: [ 1284.893185] 
[] > apic_timer_interrupt+0x13/0x20 > Nov 4 10:21:17 srv-linux kernel: [ 2687.090713] 449274 pages non-shared > Nov 4 10:21:17 srv-linux kernel: [ 2687.132671] The following is only > an harmless informational message. > Nov 4 10:21:17 srv-linux kernel: [ 2687.132677] Unless you get a > _continuous_flood_ of these messages it means > Nov 4 10:21:17 srv-linux kernel: [ 2687.132680] everything is working > fine. Allocations from irqs cannot be > Nov 4 10:21:17 srv-linux kernel: [ 2687.132683] perfectly reliable and > the kernel is designed to handle that. > Nov 4 10:21:17 srv-linux kernel: [ 2687.132688] swapper: page > allocation failure. order:0, mode:0x20, alloc_flags:0x30 pflags:0x10200042 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132696] Pid: 0, comm: swapper > Tainted: G X 2.6.32.12-0.7-default #1 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132699] Call Trace: > Nov 4 10:21:17 srv-linux kernel: [ 2687.132719] [] > dump_trace+0x6c/0x2d0 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132729] [] > dump_stack+0x69/0x71 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132738] [] > __alloc_pages_slowpath+0x3ed/0x550 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132746] [] > __alloc_pages_nodemask+0x13a/0x140 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132754] [] > kmem_getpages+0x56/0x170 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132761] [] > fallback_alloc+0x166/0x230 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132768] [] > kmem_cache_alloc+0x192/0x1b0 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132777] [] > skb_clone+0x3a/0x80 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132788] [] > packet_rcv_spkt+0x78/0x190 [af_packet] > Nov 4 10:21:17 srv-linux kernel: [ 2687.132807] [] > netif_receive_skb+0x3a2/0x660 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132819] [] > bnx2_rx_int+0x59d/0x820 [bnx2] > Nov 4 10:21:17 srv-linux kernel: [ 2687.132836] [] > bnx2_poll_work+0x6f/0x90 [bnx2] > Nov 4 10:21:17 srv-linux kernel: [ 2687.132851] [] > bnx2_poll+0x61/0x1cc [bnx2] > Nov 4 10:21:17 srv-linux kernel: 
[ 2687.132865] [] > net_rx_action+0xe3/0x1a0 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132873] [] > __do_softirq+0xbf/0x170 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132881] [] > call_softirq+0x1c/0x30 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132887] [] > do_softirq+0x4d/0x80 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132893] [] > irq_exit+0x85/0x90 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132899] [] > do_IRQ+0x6e/0xe0 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132907] [] > ret_from_intr+0x0/0xa > Nov 4 10:21:17 srv-linux kernel: [ 2687.132915] [] > mwait_idle+0x62/0x70 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132922] [] > cpu_idle+0x5a/0xb0 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132926] Mem-Info: > Nov 4 10:21:17 srv-linux kernel: [ 2687.132929] Node 0 DMA per-cpu: > Nov 4 10:21:17 srv-linux kernel: [ 2687.132934] CPU 0: hi: 0, > btch: 1 usd: 0 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132938] CPU 1: hi: 0, > btch: 1 usd: 0 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132941] CPU 2: hi: 0, > btch: 1 usd: 0 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132945] CPU 3: hi: 0, > btch: 1 usd: 0 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132948] CPU 4: hi: 0, > btch: 1 usd: 0 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132951] CPU 5: hi: 0, > btch: 1 usd: 0 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132955] CPU 6: hi: 0, > btch: 1 usd: 0 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132958] CPU 7: hi: 0, > btch: 1 usd: 0 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132961] Node 0 DMA32 per-cpu: > Nov 4 10:21:17 srv-linux kernel: [ 2687.132966] CPU 0: hi: 186, > btch: 31 usd: 32 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132969] CPU 1: hi: 186, > btch: 31 usd: 90 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132973] CPU 2: hi: 186, > btch: 31 usd: 140 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132976] CPU 3: hi: 186, > btch: 31 usd: 166 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132979] CPU 4: hi: 186, > btch: 31 
usd: 14 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132983] CPU 5: hi: 186, > btch: 31 usd: 119 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132986] CPU 6: hi: 186, > btch: 31 usd: 45 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132989] CPU 7: hi: 186, > btch: 31 usd: 191 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132992] Node 0 Normal per-cpu: > Nov 4 10:21:17 srv-linux kernel: [ 2687.132997] CPU 0: hi: 186, > btch: 31 usd: 16 > Nov 4 10:21:17 srv-linux kernel: [ 2687.133000] CPU 1: hi: 186, > btch: 31 usd: 4 > Nov 4 10:21:17 srv-linux kernel: [ 2687.133003] CPU 2: hi: 186, > btch: 31 usd: 44 > Nov 4 10:21:17 srv-linux kernel: [ 2687.133006] CPU 3: hi: 186, > btch: 31 usd: 164 > Nov 4 10:21:17 srv-linux kernel: [ 2687.133010] CPU 4: hi: 186, > btch: 31 usd: 98 > Nov 4 10:21:17 srv-linux kernel: [ 2687.133013] CPU 5: hi: 186, > btch: 31 usd: 19 > Nov 4 10:21:17 srv-linux kernel: [ 2687.133017] CPU 6: hi: 186, > btch: 31 usd: 76 > Nov 4 10:21:17 srv-linux kernel: [ 2687.133020] CPU 7: hi: 186, > btch: 31 usd: 192 > Nov 4 10:21:17 srv-linux kernel: [ 2687.133028] active_anon:90321 > inactive_anon:23282 isolated_anon:0 > Nov 4 10:21:17 srv-linux kernel: [ 2687.133029] active_file:56108 > inactive_file:1701629 isolated_file:0 > Nov 4 10:21:17 srv-linux kernel: [ 2687.133030] unevictable:5709 > dirty:677685 writeback:2 unstable:0 > Nov 4 10:21:17 srv-linux kernel: [ 2687.133032] free:9755 > slab_reclaimable:66787 slab_unreclaimable:50212 > Nov 4 10:21:17 srv-linux kernel: [ 2687.133033] mapped:13499 shmem:67 > pagetables:6893 bounce:0 > Nov 4 10:21:17 srv-linux kernel: [ 2687.133037] Node 0 DMA free:15692kB > min:20kB low:24kB high:28kB active_anon:0kB inactive_anon:0kB > active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB > isolated(file):0kB present:15320kB mlocked:0kB dirty:0kB writeback:0kB > mapped:0kB shmem:0kB slab_reclaimable:0kB slab_unreclaimable:0kB > kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB > writeback_tmp:0kB pages_scanned:0 
all_unreclaimable? no > Nov 4 10:21:17 srv-linux kernel: [ 2687.133051] lowmem_reserve[]: 0 > 3251 8049 8049 > Nov 4 10:21:17 srv-linux kernel: [ 2687.133061] Node 0 DMA32 > free:20800kB min:4632kB low:5788kB high:6948kB active_anon:69388kB > inactive_anon:16256kB active_file:33564kB inactive_file:2898248kB > unevictable:0kB isolated(anon):0kB isolated(file):0kB present:3329568kB > mlocked:0kB dirty:1095648kB writeback:4kB mapped:1264kB shmem:16kB > slab_reclaimable:107716kB slab_unreclaimable:11264kB kernel_stack:776kB > pagetables:5120kB unstable:0kB bounce:0kB writeback_tmp:0kB > pages_scanned:0 all_unreclaimable? no > Nov 4 10:21:17 srv-linux kernel: [ 2687.133076] lowmem_reserve[]: 0 0 > 4797 4797 > Nov 4 10:21:17 srv-linux kernel: [ 2687.133086] Node 0 Normal > free:2528kB min:6836kB low:8544kB high:10252kB active_anon:291896kB > inactive_anon:76872kB active_file:190868kB inactive_file:3908268kB > unevictable:22836kB isolated(anon):0kB isolated(file):0kB > present:4912640kB mlocked:22836kB dirty:1615092kB writeback:4kB > mapped:52732kB shmem:252kB slab_reclaimable:159432kB > slab_unreclaimable:189584kB kernel_stack:4312kB pagetables:22452kB > unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 > all_unreclaimable? 
no > Nov 4 10:21:17 srv-linux kernel: [ 2687.133101] lowmem_reserve[]: 0 0 0 0 > Nov 4 10:21:17 srv-linux kernel: [ 2687.133110] Node 0 DMA: 3*4kB 4*8kB > 2*16kB 2*32kB 1*64kB 1*128kB 0*256kB 0*512kB 1*1024kB 1*2048kB 3*4096kB > = 15692kB > Nov 4 10:21:17 srv-linux kernel: [ 2687.133135] Node 0 DMA32: 1087*4kB > 1592*8kB 39*16kB 17*32kB 2*64kB 0*128kB 0*256kB 0*512kB 0*1024kB > 1*2048kB 0*4096kB = 20428kB > Nov 4 10:21:17 srv-linux kernel: [ 2687.133160] Node 0 Normal: 110*4kB > 7*8kB 4*16kB 2*32kB 2*64kB 2*128kB 0*256kB 0*512kB 1*1024kB 0*2048kB > 0*4096kB = 2032kB > Nov 4 10:21:17 srv-linux kernel: [ 2687.133185] 1759923 total pagecache > pages > Nov 4 10:21:17 srv-linux kernel: [ 2687.133188] 0 pages in swap cache > Nov 4 10:21:17 srv-linux kernel: [ 2687.133191] Swap cache stats: add > 0, delete 0, find 0/0 > Nov 4 10:21:17 srv-linux kernel: [ 2687.133194] Free swap = 2104432kB > Nov 4 10:21:17 srv-linux kernel: [ 2687.133197] Total swap = 2104432kB > Nov 4 10:21:17 srv-linux kernel: [ 2687.136597] 2097152 pages RAM > Nov 4 10:21:17 srv-linux kernel: [ 2687.136597] 49948 pages reserved > Nov 4 10:21:17 srv-linux kernel: [ 2687.136597] 1656353 pages shared > Nov 4 10:21:17 srv-linux kernel: [ 2687.136597] 449267 pages non-shared > Nov 4 11:07:27 srv-linux kernel: [ 1293.436013] The following is only > an harmless informational message. > Nov 4 11:07:27 srv-linux kernel: [ 1293.436018] Unless you get a > _continuous_flood_ of these messages it means > Nov 4 11:07:27 srv-linux kernel: [ 1293.436020] everything is working > fine. Allocations from irqs cannot be > Nov 4 11:07:27 srv-linux kernel: [ 1293.436022] perfectly reliable and > the kernel is designed to handle that. > Nov 4 11:07:27 srv-linux kernel: [ 1293.436026] swapper: page > allocation failure. 
order:0, mode:0x20, alloc_flags:0x30 pflags:0x10200042 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436031] Pid: 0, comm: swapper > Tainted: G X 2.6.32.12-0.7-default #1 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436034] Call Trace: > Nov 4 11:07:27 srv-linux kernel: [ 1293.436052] [] > dump_trace+0x6c/0x2d0 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436061] [] > dump_stack+0x69/0x71 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436069] [] > __alloc_pages_slowpath+0x3ed/0x550 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436075] [] > __alloc_pages_nodemask+0x13a/0x140 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436083] [] > kmem_getpages+0x56/0x170 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436088] [] > fallback_alloc+0x166/0x230 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436094] [] > kmem_cache_alloc+0x192/0x1b0 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436101] [] > skb_clone+0x3a/0x80 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436108] [] > dev_queue_xmit_nit+0x82/0x170 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436113] [] > dev_hard_start_xmit+0x4a/0x210 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436119] [] > sch_direct_xmit+0x16e/0x1e0 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436125] [] > dev_queue_xmit+0x366/0x4d0 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436131] [] > ip_queue_xmit+0x210/0x420 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436138] [] > tcp_transmit_skb+0x4cb/0x760 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436144] [] > tcp_delack_timer+0x14f/0x2a0 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436150] [] > run_timer_softirq+0x174/0x240 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436156] [] > __do_softirq+0xbf/0x170 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436162] [] > call_softirq+0x1c/0x30 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436167] [] > do_softirq+0x4d/0x80 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436171] [] > irq_exit+0x85/0x90 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436177] [] > smp_apic_timer_interrupt+0x6c/0xa0 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436184] 
[] > apic_timer_interrupt+0x13/0x20 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436191] [] > mwait_idle+0x62/0x70 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436196] [] > cpu_idle+0x5a/0xb0 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436200] Mem-Info: > Nov 4 11:07:27 srv-linux kernel: [ 1293.436202] Node 0 DMA per-cpu: > Nov 4 11:07:27 srv-linux kernel: [ 1293.436205] CPU 0: hi: 0, > btch: 1 usd: 0 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436208] CPU 1: hi: 0, > btch: 1 usd: 0 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436210] CPU 2: hi: 0, > btch: 1 usd: 0 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436213] CPU 3: hi: 0, > btch: 1 usd: 0 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436215] CPU 4: hi: 0, > btch: 1 usd: 0 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436217] CPU 5: hi: 0, > btch: 1 usd: 0 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436220] CPU 6: hi: 0, > btch: 1 usd: 0 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436222] CPU 7: hi: 0, > btch: 1 usd: 0 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436224] Node 0 DMA32 per-cpu: > Nov 4 11:07:27 srv-linux kernel: [ 1293.436227] CPU 0: hi: 186, > btch: 31 usd: 30 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436229] CPU 1: hi: 186, > btch: 31 usd: 186 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436232] CPU 2: hi: 186, > btch: 31 usd: 147 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436234] CPU 3: hi: 186, > btch: 31 usd: 174 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436236] CPU 4: hi: 186, > btch: 31 usd: 92 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436239] CPU 5: hi: 186, > btch: 31 usd: 49 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436241] CPU 6: hi: 186, > btch: 31 usd: 141 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436244] CPU 7: hi: 186, > btch: 31 usd: 142 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436245] Node 0 Normal per-cpu: > Nov 4 11:07:27 srv-linux kernel: [ 1293.436248] CPU 0: hi: 186, > btch: 31 usd: 46 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436250] CPU 1: hi: 186, > btch: 31 usd: 158 > Nov 4 11:07:27 srv-linux kernel: 
[ 1293.436253] CPU 2: hi: 186, > btch: 31 usd: 151 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436255] CPU 3: hi: 186, > btch: 31 usd: 39 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436257] CPU 4: hi: 186, > btch: 31 usd: 114 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436260] CPU 5: hi: 186, > btch: 31 usd: 59 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436262] CPU 6: hi: 186, > btch: 31 usd: 124 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436265] CPU 7: hi: 186, > btch: 31 usd: 173 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436271] active_anon:121650 > inactive_anon:21539 isolated_anon:0 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436272] active_file:65104 > inactive_file:1679351 isolated_file:0 > Nov 4 11:07:27 srv-linux kernel: [ 1293.436273] unevictable:5709 > dirty:474043 writeback:6102 unstable:0 > Nov 4 11:07:28 srv-linux kernel: [ 1293.436275] free:9712 > slab_reclaimable:51092 slab_unreclaimable:49524 > Nov 4 11:07:28 srv-linux kernel: [ 1293.436276] mapped:13595 shmem:109 > pagetables:6308 bounce:0 > Nov 4 11:07:28 srv-linux kernel: [ 1293.436279] Node 0 DMA free:15692kB > min:20kB low:24kB high:28kB active_anon:0kB inactive_anon:0kB > active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB > isolated(file):0kB present:15320kB mlocked:0kB dirty:0kB writeback:0kB > mapped:0kB shmem:0kB slab_reclaimable:0kB slab_unreclaimable:0kB > kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB > writeback_tmp:0kB pages_scanned:0 all_unreclaimable? 
yes > Nov 4 11:07:28 srv-linux kernel: [ 1293.436290] lowmem_reserve[]: 0 > 3251 8049 8049 > Nov 4 11:07:28 srv-linux kernel: [ 1293.436295] Node 0 DMA32 > free:20696kB min:4632kB low:5788kB high:6948kB active_anon:79808kB > inactive_anon:17188kB active_file:55724kB inactive_file:2866240kB > unevictable:0kB isolated(anon):0kB isolated(file):0kB present:3329568kB > mlocked:0kB dirty:1287108kB writeback:8764kB mapped:168kB shmem:76kB > slab_reclaimable:108972kB slab_unreclaimable:12288kB kernel_stack:824kB > pagetables:6980kB unstable:0kB bounce:0kB writeback_tmp:0kB > pages_scanned:0 all_unreclaimable? no > Nov 4 11:07:28 srv-linux kernel: [ 1293.436307] lowmem_reserve[]: 0 0 > 4797 4797 > Nov 4 11:07:28 srv-linux kernel: [ 1293.436311] Node 0 Normal > free:2460kB min:6836kB low:8544kB high:10252kB active_anon:406792kB > inactive_anon:68968kB active_file:204692kB inactive_file:3851164kB > unevictable:22836kB isolated(anon):0kB isolated(file):0kB > present:4912640kB mlocked:22836kB dirty:609064kB writeback:15644kB > mapped:54212kB shmem:360kB slab_reclaimable:95396kB > slab_unreclaimable:185808kB kernel_stack:3912kB pagetables:18252kB > unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 > all_unreclaimable? 
no > Nov 4 11:07:28 srv-linux kernel: [ 1293.436323] lowmem_reserve[]: 0 0 0 0 > Nov 4 11:07:28 srv-linux kernel: [ 1293.436327] Node 0 DMA: 3*4kB 4*8kB > 2*16kB 2*32kB 1*64kB 1*128kB 0*256kB 0*512kB 1*1024kB 1*2048kB 3*4096kB > = 15692kB > Nov 4 11:07:28 srv-linux kernel: [ 1293.436339] Node 0 DMA32: 53*4kB > 586*8kB 309*16kB 50*32kB 9*64kB 5*128kB 2*256kB 1*512kB 0*1024kB > 1*2048kB 1*4096kB = 19828kB > Nov 4 11:07:28 srv-linux kernel: [ 1293.436350] Node 0 Normal: 8*4kB > 12*8kB 1*16kB 3*32kB 1*64kB 0*128kB 0*256kB 1*512kB 1*1024kB 0*2048kB > 0*4096kB = 1840kB > Nov 4 11:07:28 srv-linux kernel: [ 1293.436361] 1746592 total pagecache > pages > Nov 4 11:07:28 srv-linux kernel: [ 1293.436363] 0 pages in swap cache > Nov 4 11:07:28 srv-linux kernel: [ 1293.436365] Swap cache stats: add > 0, delete 0, find 0/0 > Nov 4 11:07:28 srv-linux kernel: [ 1293.436367] Free swap = 2104432kB > Nov 4 11:07:28 srv-linux kernel: [ 1293.436369] Total swap = 2104432kB > Nov 4 11:07:28 srv-linux kernel: [ 1293.445967] 2097152 pages RAM > Nov 4 11:07:28 srv-linux kernel: [ 1293.445967] 49948 pages reserved > Nov 4 11:07:28 srv-linux kernel: [ 1293.445967] 1080140 pages shared > Nov 4 11:07:28 srv-linux kernel: [ 1293.445967] 1014865 pages non-shared > Nov 4 11:07:28 srv-linux kernel: [ 1293.480826] The following is only > an harmless informational message. > Nov 4 11:07:28 srv-linux kernel: [ 1293.480832] Unless you get a > _continuous_flood_ of these messages it means > Nov 4 11:07:28 srv-linux kernel: [ 1293.480838] everything is working > fine. Allocations from irqs cannot be > Nov 4 11:07:28 srv-linux kernel: [ 1293.480843] perfectly reliable and > the kernel is designed to handle that. > Nov 4 11:07:28 srv-linux kernel: [ 1293.480850] swapper: page > allocation failure. 
order:0, mode:0x20, alloc_flags:0x30 pflags:0x10200042 > Nov 4 11:07:28 srv-linux kernel: [ 1293.480856] Pid: 0, comm: swapper > Tainted: G X 2.6.32.12-0.7-default #1 > Nov 4 11:07:28 srv-linux kernel: [ 1293.480862] Call Trace: > Nov 4 11:07:28 srv-linux kernel: [ 1293.480883] [] > dump_trace+0x6c/0x2d0 > Nov 4 11:07:28 srv-linux kernel: [ 1293.480897] [] > dump_stack+0x69/0x71 > Nov 4 11:07:28 srv-linux kernel: [ 1293.480910] [] > __alloc_pages_slowpath+0x3ed/0x550 > Nov 4 11:07:28 srv-linux kernel: [ 1293.480921] [] > __alloc_pages_nodemask+0x13a/0x140 > Nov 4 11:07:28 srv-linux kernel: [ 1293.480933] [] > kmem_getpages+0x56/0x170 > Nov 4 11:07:28 srv-linux kernel: [ 1293.480944] [] > fallback_alloc+0x166/0x230 > Nov 4 11:07:28 srv-linux kernel: [ 1293.480955] [] > kmem_cache_alloc+0x192/0x1b0 > Nov 4 11:07:28 srv-linux kernel: [ 1293.480967] [] > skb_clone+0x3a/0x80 > Nov 4 11:07:28 srv-linux kernel: [ 1293.480979] [] > dev_queue_xmit_nit+0x82/0x170 > Nov 4 11:07:28 srv-linux kernel: [ 1293.480990] [] > dev_hard_start_xmit+0x4a/0x210 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481000] [] > sch_direct_xmit+0x16e/0x1e0 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481010] [] > __qdisc_run+0xaf/0x100 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481021] [] > dev_queue_xmit+0x4cb/0x4d0 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481032] [] > ip_queue_xmit+0x210/0x420 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481044] [] > tcp_transmit_skb+0x4cb/0x760 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481054] [] > tcp_delack_timer+0x14f/0x2a0 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481066] [] > run_timer_softirq+0x174/0x240 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481077] [] > __do_softirq+0xbf/0x170 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481088] [] > call_softirq+0x1c/0x30 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481098] [] > do_softirq+0x4d/0x80 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481108] [] > irq_exit+0x85/0x90 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481118] [] > 
smp_apic_timer_interrupt+0x6c/0xa0 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481131] [] > apic_timer_interrupt+0x13/0x20 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481142] [] > mwait_idle+0x62/0x70 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481152] [] > cpu_idle+0x5a/0xb0 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481159] Mem-Info: > Nov 4 11:07:28 srv-linux kernel: [ 1293.481163] Node 0 DMA per-cpu: > Nov 4 11:07:28 srv-linux kernel: [ 1293.481173] CPU 0: hi: 0, > btch: 1 usd: 0 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481178] CPU 1: hi: 0, > btch: 1 usd: 0 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481184] CPU 2: hi: 0, > btch: 1 usd: 0 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481189] CPU 3: hi: 0, > btch: 1 usd: 0 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481195] CPU 4: hi: 0, > btch: 1 usd: 0 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481200] CPU 5: hi: 0, > btch: 1 usd: 0 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481206] CPU 6: hi: 0, > btch: 1 usd: 0 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481211] CPU 7: hi: 0, > btch: 1 usd: 0 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481216] Node 0 DMA32 per-cpu: > Nov 4 11:07:28 srv-linux kernel: [ 1293.481226] CPU 0: hi: 186, > btch: 31 usd: 30 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481231] CPU 1: hi: 186, > btch: 31 usd: 186 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481237] CPU 2: hi: 186, > btch: 31 usd: 147 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481242] CPU 3: hi: 186, > btch: 31 usd: 174 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481248] CPU 4: hi: 186, > btch: 31 usd: 92 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481253] CPU 5: hi: 186, > btch: 31 usd: 49 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481259] CPU 6: hi: 186, > btch: 31 usd: 141 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481264] CPU 7: hi: 186, > btch: 31 usd: 142 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481269] Node 0 Normal per-cpu: > Nov 4 11:07:28 srv-linux kernel: [ 1293.481278] CPU 0: hi: 186, > btch: 31 usd: 46 > Nov 4 11:07:28 srv-linux kernel: 
[ 1293.481284] CPU 1: hi: 186, > btch: 31 usd: 158 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481289] CPU 2: hi: 186, > btch: 31 usd: 151 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481295] CPU 3: hi: 186, > btch: 31 usd: 39 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481300] CPU 4: hi: 186, > btch: 31 usd: 114 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481306] CPU 5: hi: 186, > btch: 31 usd: 59 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481311] CPU 6: hi: 186, > btch: 31 usd: 124 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481316] CPU 7: hi: 186, > btch: 31 usd: 173 > Nov 4 11:07:29 srv-linux kernel: [ 1293.481325] active_anon:121650 > inactive_anon:21539 isolated_anon:0 > Nov 4 11:07:29 srv-linux kernel: [ 1293.481327] active_file:65104 > inactive_file:1679351 isolated_file:0 > Nov 4 11:07:29 srv-linux kernel: [ 1293.481328] unevictable:5709 > dirty:474043 writeback:6102 unstable:0 > Nov 4 11:07:29 srv-linux kernel: [ 1293.481329] free:9712 > slab_reclaimable:51092 slab_unreclaimable:49524 > Nov 4 11:07:29 srv-linux kernel: [ 1293.481330] mapped:13595 shmem:109 > pagetables:6308 bounce:0 > Nov 4 11:07:29 srv-linux kernel: [ 1293.481336] Node 0 DMA free:15692kB > min:20kB low:24kB high:28kB active_anon:0kB inactive_anon:0kB > active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB > isolated(file):0kB present:15320kB mlocked:0kB dirty:0kB writeback:0kB > mapped:0kB shmem:0kB slab_reclaimable:0kB slab_unreclaimable:0kB > kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB > writeback_tmp:0kB pages_scanned:0 all_unreclaimable? 
yes > Nov 4 11:07:29 srv-linux kernel: [ 1293.481354] lowmem_reserve[]: 0 > 3251 8049 8049 > Nov 4 11:07:29 srv-linux kernel: [ 1293.481377] Node 0 DMA32 > free:20696kB min:4632kB low:5788kB high:6948kB active_anon:79808kB > inactive_anon:17188kB active_file:55724kB inactive_file:2866240kB > unevictable:0kB isolated(anon):0kB isolated(file):0kB present:3329568kB > mlocked:0kB dirty:1287108kB writeback:8764kB mapped:168kB shmem:76kB > slab_reclaimable:108972kB slab_unreclaimable:12288kB kernel_stack:824kB > pagetables:6980kB unstable:0kB bounce:0kB writeback_tmp:0kB > pages_scanned:0 all_unreclaimable? no > Nov 4 11:07:29 srv-linux kernel: [ 1293.481396] lowmem_reserve[]: 0 0 > 4797 4797 > Nov 4 11:07:29 srv-linux kernel: [ 1293.481419] Node 0 Normal > free:2460kB min:6836kB low:8544kB high:10252kB active_anon:406792kB > inactive_anon:68968kB active_file:204692kB inactive_file:3851164kB > unevictable:22836kB isolated(anon):0kB isolated(file):0kB > present:4912640kB mlocked:22836kB dirty:609064kB writeback:15644kB > mapped:54212kB shmem:360kB slab_reclaimable:95396kB > slab_unreclaimable:185808kB kernel_stack:3912kB pagetables:18252kB > unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 > all_unreclaimable? 
no > Nov 4 11:07:29 srv-linux kernel: [ 1293.481438] lowmem_reserve[]: 0 0 0 0 > Nov 4 11:07:29 srv-linux kernel: [ 1293.481462] Node 0 DMA: 3*4kB 4*8kB > 2*16kB 2*32kB 1*64kB 1*128kB 0*256kB 0*512kB 1*1024kB 1*2048kB 3*4096kB > = 15692kB > Nov 4 11:07:29 srv-linux kernel: [ 1293.481518] Node 0 DMA32: 53*4kB > 586*8kB 309*16kB 50*32kB 9*64kB 5*128kB 2*256kB 1*512kB 0*1024kB > 1*2048kB 1*4096kB = 19828kB > Nov 4 11:07:29 srv-linux kernel: [ 1293.481574] Node 0 Normal: 8*4kB > 12*8kB 1*16kB 3*32kB 1*64kB 0*128kB 0*256kB 1*512kB 1*1024kB 0*2048kB > 0*4096kB = 1840kB > Nov 4 11:07:29 srv-linux kernel: [ 1293.481630] 1746592 total pagecache > pages > Nov 4 11:07:29 srv-linux kernel: [ 1293.481635] 0 pages in swap cache > Nov 4 11:07:29 srv-linux kernel: [ 1293.481641] Swap cache stats: add > 0, delete 0, find 0/0 > Nov 4 11:07:29 srv-linux kernel: [ 1293.481646] Free swap = 2104432kB > Nov 4 11:07:29 srv-linux kernel: [ 1293.481651] Total swap = 2104432kB > Nov 4 11:07:29 srv-linux kernel: [ 1293.484802] 2097152 pages RAM > Nov 4 11:07:29 srv-linux kernel: [ 1293.484802] 49948 pages reserved > Nov 4 11:07:29 srv-linux kernel: [ 1293.484802] 1079742 pages shared > Nov 4 11:07:29 srv-linux kernel: [ 1293.484802] 1013515 pages non-shared > > > __ > masoch-l list > https://eng.registro.br/mailman/listinfo/masoch-l > -- http://hvera.wordpress.com From dropsdef at gmail.com Thu Nov 4 15:07:44 2010 From: dropsdef at gmail.com (Armando Roque) Date: Thu, 4 Nov 2010 14:07:44 -0300 Subject: [MASOCH-L] Problemas Dell PowerEdge 1900 com SLES11 SP1 In-Reply-To: <4CD2C56C.1020108@bhz.jamef.com.br> References: <4CD2C56C.1020108@bhz.jamef.com.br> Message-ID: Rejaine, José Augusto has already asked about the updates. Besides SP1, how do things stand? Did you update anything after that? Regards. On 4 November 2010 at 11:38, Rejaine Monteiro wrote: > > Everyone, > > I am writing to ask for help.. 
> I have a serious performance problem on a server running SLES11(SP1), > installed on a PowerEdge 1900 (configuration below). > > Here is what happens: > > At our sites we had several far less powerful servers, serving the same > number of users and the same services, but running OpenSuSE 10.2. > Everything worked perfectly well until then, but following our plan to > refresh the machine park, we chose to upgrade the hardware and the OS > (which was quite outdated) at those sites, and that is when the > problems began. > > Initially we did the replacement at only two smaller sites with fewer > users, and we had already noticed a certain increase in CPU load. We > updated to SLES11 with SP1 and things seemed to improve a little. > > At one particular site, however, with around 300 users, server > performance is simply dreadful. The CPU load climbs so high that at > times I can barely log in to look at the syslog, and I often have to > shut down several services, or reboot, to get back to normal. > > We have already made many kernel tuning adjustments and many other > tuning adjustments to the various applications the server runs > (especially the most important services such as drbd, heartbeat, ldap, > nfsserver, etc.). Nothing seems to have any effect on the problem; no > appreciable improvement even after dozens of adjustments. > > Since we have two identical servers (one in failover mode, because of > HA), we tested bringing all services up on the backup server, to rule > out disk and/or hardware problems on the main machine, but the problems > continued on the other server as well. > > When the load is very high, syslog starts generating several dumps in > /var/log/messages (shown below). > > Apparently there are no I/O problems (we even added a RAID to improve > disk performance and made several adjustments, but nothing solved it or > had any effect). > What we notice is that there is no correlation between iowait and CPU > load; that is, when the load is high, the disk shows no overload. It > seems to be something to do with memory, but the old server handled the > job with 4 GB on OpenSuSE 10.2, while this one, despite being even > "beefier" and having twice the memory, does not. > > Honestly, we are going to try downgrading the OS, because lesser > hardware, running basically the same services for the same number of > users, worked very well with OpenSuSE 10.2. > > Below is a description of the hardware, software, and services used on > the server, followed by some of the messages that appear in the syslog. > > If anyone can help with any tip, I would be very grateful > (any help is welcome). > > Server> Dell PowerEdge 1900 > 2 x Intel(R) Xeon(R) CPU E5310 1.60GHz DualCore > 8G RAM > 4 SAS HDs, 15000rpm > > Software> Suse Linux Enterprise Server 11 - Service Pack 1 > Kernel> Linux srv-linux 2.6.32.12-0.7-default #1 SMP 2010-05-20 11:14:20 > +0200 x86_64 x86_64 x86_64 GNU/Linux > > Basic services running on this server: linux-ha > (drbd+heartbeat), openldap, qmail-ldap, samba-ldap, nfsserver, dhcp, > named, squid and jabberd > Number of users: 300 > Linux users have their HOMEDIR mounted via NFS > Windows users use SAMBA for group file sharing and/or > profile backup > > top - 10:33:37 up 57 min, 19 users, load average: 40.44, 49.96, 42.26 > Tasks: 510 total, 1 running, 509 sleeping, 0 stopped, 0 zombie > Cpu(s): 1.3%us, 1.5%sy, 0.0%ni, 94.2%id, 1.7%wa, 0.0%hi, 1.4%si, > 0.0%st > Mem: 8188816k total, 8137392k used, 51424k free, 57116k buffers > Swap: 2104432k total, 0k used, 2104432k free, 7089980k cached > > PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND > 9901 qscand 20 0 207m 164m 2032 S 0 2.1 0:04.63 clamd > 4074 root 
20 0 358m 57m 1992 S 0 0.7 0:03.03 nscd > 9016 named 20 0 320m 54m 2464 S 0 0.7 0:17.37 named > 22761 root 20 0 115m 50m 4604 S 0 0.6 0:02.30 nxagent > 23477 root 20 0 597m 33m 21m S 0 0.4 0:01.20 plasma-desktop > 23357 root 20 0 453m 30m 23m S 0 0.4 0:00.51 kwin > 9028 ldap 20 0 1930m 26m 4564 S 0 0.3 1:36.51 slapd > 9248 root 20 0 324m 24m 17m S 0 0.3 0:03.92 kdm_greet > 24164 root 20 0 486m 23m 16m S 0 0.3 0:00.35 krunner > 10870 root 20 0 24548 20m 1168 S 2 0.3 0:22.59 jabberd > 9014 root 20 0 120m 19m 5328 S 0 0.2 0:03.04 Xorg > 24283 root 20 0 173m 19m 14m S 0 0.2 0:00.18 kdialog > 22940 root 20 0 290m 18m 12m S 0 0.2 0:00.22 kded4 > 24275 root 20 0 191m 18m 13m S 0 0.2 0:00.22 kupdateapplet > 24270 root 20 0 237m 16m 10m S 0 0.2 0:00.11 kmix > 4061 root -2 0 92828 16m 8476 S 0 0.2 0:01.18 heartbeat > 24274 root 20 0 284m 15m 9.9m S 0 0.2 0:00.10 klipper > 23299 root 20 0 309m 14m 9844 S 0 0.2 0:00.08 ksmserver > 22899 root 20 0 201m 14m 10m S 0 0.2 0:00.10 kdeinit4 > 23743 root 20 0 228m 12m 7856 S 0 0.2 0:00.10 kglobalaccel > 24167 root 20 0 235m 12m 7760 S 0 0.2 0:00.04 nepomukserver > > # /usr/bin/uptime > 11:04am up 0:18, 7 users, load average: 27.52, 18.60, 10.27 > > # /usr/bin/vmstat 1 4 > procs -----------memory---------- ---swap-- -----io---- -system-- > -----cpu------ > r b swpd free buff cache si so bi bo in cs us sy > id wa st > 2 0 0 50856 19300 7196808 0 0 507 378 1167 1175 3 3 > 88 6 0 > 0 0 0 41332 19300 7200960 0 0 176 1279 14284 10519 2 > 2 93 2 0 > 1 0 0 43184 19184 7181520 0 0 0 1074 7191 1856 0 1 > 99 0 0 > 0 0 0 43316 19128 7179868 0 0 0 1189 2237 2340 1 0 > 99 0 0 > > # /usr/bin/vmstat 1 4 > procs -----------memory---------- ---swap-- -----io---- -system-- > -----cpu------ > r b swpd free buff cache si so bi bo in cs us sy > id wa st > 0 1 0 47276 19048 7177788 0 0 498 384 1166 1171 3 3 > 88 6 0 > 1 0 0 46128 19056 7167016 0 0 36 970 7530 4158 2 1 > 95 2 0 > 0 1 0 46452 19064 7163616 0 0 20 798 1411 1749 2 1 > 97 0 0 > 0 0 0 46868 19064 
7162624 0 0 56 751 7079 2169 1 1 > 97 0 0 > > Nov 4 09:57:53 srv-linux kernel: [ 1284.893013] The following is only > an harmless informational message. > Nov 4 09:57:53 srv-linux kernel: [ 1284.893019] Unless you get a > _continuous_flood_ of these messages it means > Nov 4 09:57:53 srv-linux kernel: [ 1284.893021] everything is working > fine. Allocations from irqs cannot be > Nov 4 09:57:53 srv-linux kernel: [ 1284.893023] perfectly reliable and > the kernel is designed to handle that. > Nov 4 09:57:53 srv-linux kernel: [ 1284.893028] swapper: page > allocation failure. order:0, mode:0x20, alloc_flags:0x30 pflags:0x10200042 > Nov 4 09:57:53 srv-linux kernel: [ 1284.893032] Pid: 0, comm: swapper > Tainted: G X 2.6.32.12-0.7-default #1 > Nov 4 09:57:53 srv-linux kernel: [ 1284.893035] Call Trace: > Nov 4 09:57:53 srv-linux kernel: [ 1284.893054] [] > dump_trace+0x6c/0x2d0 > Nov 4 09:57:53 srv-linux kernel: [ 1284.893063] [] > dump_stack+0x69/0x71 > Nov 4 09:57:53 srv-linux kernel: [ 1284.893070] [] > __alloc_pages_slowpath+0x3ed/0x550 > Nov 4 09:57:53 srv-linux kernel: [ 1284.893077] [] > __alloc_pages_nodemask+0x13a/0x140 > Nov 4 09:57:53 srv-linux kernel: [ 1284.893084] [] > kmem_getpages+0x56/0x170 > Nov 4 09:57:53 srv-linux kernel: [ 1284.893089] [] > fallback_alloc+0x166/0x230 > Nov 4 09:58:12 srv-linux kernel: [ 1284.893095] [] > kmem_cache_alloc+0x192/0x1b0 > Nov 4 09:58:12 srv-linux kernel: [ 1284.893102] [] > skb_clone+0x3a/0x80 > Nov 4 09:58:12 srv-linux kernel: [ 1284.893109] [] > dev_queue_xmit_nit+0x82/0x170 > Nov 4 09:58:12 srv-linux kernel: [ 1284.893114] [] > dev_hard_start_xmit+0x4a/0x210 > Nov 4 09:58:12 srv-linux kernel: [ 1284.893120] [] > sch_direct_xmit+0x16e/0x1e0 > Nov 4 09:58:12 srv-linux kernel: [ 1284.893126] [] > dev_queue_xmit+0x366/0x4d0 > Nov 4 09:58:12 srv-linux kernel: [ 1284.893132] [] > ip_queue_xmit+0x210/0x420 > Nov 4 09:58:12 srv-linux kernel: [ 1284.893139] [] > tcp_transmit_skb+0x4cb/0x760 > Nov 4 09:58:12 srv-linux kernel: 
[ 1284.893145] [] > tcp_delack_timer+0x14f/0x2a0 > Nov 4 09:58:12 srv-linux kernel: [ 1284.893151] [] > run_timer_softirq+0x174/0x240 > Nov 4 09:58:12 srv-linux kernel: [ 1284.893157] [] > __do_softirq+0xbf/0x170 > Nov 4 09:58:12 srv-linux kernel: [ 1284.893163] [] > call_softirq+0x1c/0x30 > Nov 4 09:58:12 srv-linux kernel: [ 1284.893168] [] > do_softirq+0x4d/0x80 > Nov 4 09:58:12 srv-linux kernel: [ 1284.893173] [] > irq_exit+0x85/0x90 > Nov 4 09:58:12 srv-linux kernel: [ 1284.893178] [] > smp_apic_timer_interrupt+0x6c/0xa0 > Nov 4 09:58:12 srv-linux kernel: [ 1284.893185] [] > apic_timer_interrupt+0x13/0x20 > Nov 4 10:21:17 srv-linux kernel: [ 2687.090713] 449274 pages non-shared > Nov 4 10:21:17 srv-linux kernel: [ 2687.132671] The following is only > an harmless informational message. > Nov 4 10:21:17 srv-linux kernel: [ 2687.132677] Unless you get a > _continuous_flood_ of these messages it means > Nov 4 10:21:17 srv-linux kernel: [ 2687.132680] everything is working > fine. Allocations from irqs cannot be > Nov 4 10:21:17 srv-linux kernel: [ 2687.132683] perfectly reliable and > the kernel is designed to handle that. > Nov 4 10:21:17 srv-linux kernel: [ 2687.132688] swapper: page > allocation failure. 
order:0, mode:0x20, alloc_flags:0x30 pflags:0x10200042 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132696] Pid: 0, comm: swapper > Tainted: G X 2.6.32.12-0.7-default #1 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132699] Call Trace: > Nov 4 10:21:17 srv-linux kernel: [ 2687.132719] [] > dump_trace+0x6c/0x2d0 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132729] [] > dump_stack+0x69/0x71 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132738] [] > __alloc_pages_slowpath+0x3ed/0x550 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132746] [] > __alloc_pages_nodemask+0x13a/0x140 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132754] [] > kmem_getpages+0x56/0x170 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132761] [] > fallback_alloc+0x166/0x230 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132768] [] > kmem_cache_alloc+0x192/0x1b0 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132777] [] > skb_clone+0x3a/0x80 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132788] [] > packet_rcv_spkt+0x78/0x190 [af_packet] > Nov 4 10:21:17 srv-linux kernel: [ 2687.132807] [] > netif_receive_skb+0x3a2/0x660 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132819] [] > bnx2_rx_int+0x59d/0x820 [bnx2] > Nov 4 10:21:17 srv-linux kernel: [ 2687.132836] [] > bnx2_poll_work+0x6f/0x90 [bnx2] > Nov 4 10:21:17 srv-linux kernel: [ 2687.132851] [] > bnx2_poll+0x61/0x1cc [bnx2] > Nov 4 10:21:17 srv-linux kernel: [ 2687.132865] [] > net_rx_action+0xe3/0x1a0 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132873] [] > __do_softirq+0xbf/0x170 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132881] [] > call_softirq+0x1c/0x30 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132887] [] > do_softirq+0x4d/0x80 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132893] [] > irq_exit+0x85/0x90 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132899] [] > do_IRQ+0x6e/0xe0 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132907] [] > ret_from_intr+0x0/0xa > Nov 4 10:21:17 srv-linux kernel: [ 2687.132915] [] > mwait_idle+0x62/0x70 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132922] [] > 
cpu_idle+0x5a/0xb0 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132926] Mem-Info: > Nov 4 10:21:17 srv-linux kernel: [ 2687.132929] Node 0 DMA per-cpu: > Nov 4 10:21:17 srv-linux kernel: [ 2687.132934] CPU 0: hi: 0, > btch: 1 usd: 0 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132938] CPU 1: hi: 0, > btch: 1 usd: 0 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132941] CPU 2: hi: 0, > btch: 1 usd: 0 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132945] CPU 3: hi: 0, > btch: 1 usd: 0 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132948] CPU 4: hi: 0, > btch: 1 usd: 0 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132951] CPU 5: hi: 0, > btch: 1 usd: 0 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132955] CPU 6: hi: 0, > btch: 1 usd: 0 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132958] CPU 7: hi: 0, > btch: 1 usd: 0 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132961] Node 0 DMA32 per-cpu: > Nov 4 10:21:17 srv-linux kernel: [ 2687.132966] CPU 0: hi: 186, > btch: 31 usd: 32 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132969] CPU 1: hi: 186, > btch: 31 usd: 90 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132973] CPU 2: hi: 186, > btch: 31 usd: 140 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132976] CPU 3: hi: 186, > btch: 31 usd: 166 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132979] CPU 4: hi: 186, > btch: 31 usd: 14 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132983] CPU 5: hi: 186, > btch: 31 usd: 119 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132986] CPU 6: hi: 186, > btch: 31 usd: 45 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132989] CPU 7: hi: 186, > btch: 31 usd: 191 > Nov 4 10:21:17 srv-linux kernel: [ 2687.132992] Node 0 Normal per-cpu: > Nov 4 10:21:17 srv-linux kernel: [ 2687.132997] CPU 0: hi: 186, > btch: 31 usd: 16 > Nov 4 10:21:17 srv-linux kernel: [ 2687.133000] CPU 1: hi: 186, > btch: 31 usd: 4 > Nov 4 10:21:17 srv-linux kernel: [ 2687.133003] CPU 2: hi: 186, > btch: 31 usd: 44 > Nov 4 10:21:17 srv-linux kernel: [ 
2687.133006] CPU 3: hi: 186, > btch: 31 usd: 164 > Nov 4 10:21:17 srv-linux kernel: [ 2687.133010] CPU 4: hi: 186, > btch: 31 usd: 98 > Nov 4 10:21:17 srv-linux kernel: [ 2687.133013] CPU 5: hi: 186, > btch: 31 usd: 19 > Nov 4 10:21:17 srv-linux kernel: [ 2687.133017] CPU 6: hi: 186, > btch: 31 usd: 76 > Nov 4 10:21:17 srv-linux kernel: [ 2687.133020] CPU 7: hi: 186, > btch: 31 usd: 192 > Nov 4 10:21:17 srv-linux kernel: [ 2687.133028] active_anon:90321 > inactive_anon:23282 isolated_anon:0 > Nov 4 10:21:17 srv-linux kernel: [ 2687.133029] active_file:56108 > inactive_file:1701629 isolated_file:0 > Nov 4 10:21:17 srv-linux kernel: [ 2687.133030] unevictable:5709 > dirty:677685 writeback:2 unstable:0 > Nov 4 10:21:17 srv-linux kernel: [ 2687.133032] free:9755 > slab_reclaimable:66787 slab_unreclaimable:50212 > Nov 4 10:21:17 srv-linux kernel: [ 2687.133033] mapped:13499 shmem:67 > pagetables:6893 bounce:0 > Nov 4 10:21:17 srv-linux kernel: [ 2687.133037] Node 0 DMA free:15692kB > min:20kB low:24kB high:28kB active_anon:0kB inactive_anon:0kB > active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB > isolated(file):0kB present:15320kB mlocked:0kB dirty:0kB writeback:0kB > mapped:0kB shmem:0kB slab_reclaimable:0kB slab_unreclaimable:0kB > kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB > writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no > Nov 4 10:21:17 srv-linux kernel: [ 2687.133051] lowmem_reserve[]: 0 > 3251 8049 8049 > Nov 4 10:21:17 srv-linux kernel: [ 2687.133061] Node 0 DMA32 > free:20800kB min:4632kB low:5788kB high:6948kB active_anon:69388kB > inactive_anon:16256kB active_file:33564kB inactive_file:2898248kB > unevictable:0kB isolated(anon):0kB isolated(file):0kB present:3329568kB > mlocked:0kB dirty:1095648kB writeback:4kB mapped:1264kB shmem:16kB > slab_reclaimable:107716kB slab_unreclaimable:11264kB kernel_stack:776kB > pagetables:5120kB unstable:0kB bounce:0kB writeback_tmp:0kB > pages_scanned:0 all_unreclaimable? 
no > Nov 4 10:21:17 srv-linux kernel: [ 2687.133076] lowmem_reserve[]: 0 0 > 4797 4797 > Nov 4 10:21:17 srv-linux kernel: [ 2687.133086] Node 0 Normal > free:2528kB min:6836kB low:8544kB high:10252kB active_anon:291896kB > inactive_anon:76872kB active_file:190868kB inactive_file:3908268kB > unevictable:22836kB isolated(anon):0kB isolated(file):0kB > present:4912640kB mlocked:22836kB dirty:1615092kB writeback:4kB > mapped:52732kB shmem:252kB slab_reclaimable:159432kB > slab_unreclaimable:189584kB kernel_stack:4312kB pagetables:22452kB > unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 > all_unreclaimable? no > Nov 4 10:21:17 srv-linux kernel: [ 2687.133101] lowmem_reserve[]: 0 0 0 0 > Nov 4 10:21:17 srv-linux kernel: [ 2687.133110] Node 0 DMA: 3*4kB 4*8kB > 2*16kB 2*32kB 1*64kB 1*128kB 0*256kB 0*512kB 1*1024kB 1*2048kB 3*4096kB > = 15692kB > Nov 4 10:21:17 srv-linux kernel: [ 2687.133135] Node 0 DMA32: 1087*4kB > 1592*8kB 39*16kB 17*32kB 2*64kB 0*128kB 0*256kB 0*512kB 0*1024kB > 1*2048kB 0*4096kB = 20428kB > Nov 4 10:21:17 srv-linux kernel: [ 2687.133160] Node 0 Normal: 110*4kB > 7*8kB 4*16kB 2*32kB 2*64kB 2*128kB 0*256kB 0*512kB 1*1024kB 0*2048kB > 0*4096kB = 2032kB > Nov 4 10:21:17 srv-linux kernel: [ 2687.133185] 1759923 total pagecache > pages > Nov 4 10:21:17 srv-linux kernel: [ 2687.133188] 0 pages in swap cache > Nov 4 10:21:17 srv-linux kernel: [ 2687.133191] Swap cache stats: add > 0, delete 0, find 0/0 > Nov 4 10:21:17 srv-linux kernel: [ 2687.133194] Free swap = 2104432kB > Nov 4 10:21:17 srv-linux kernel: [ 2687.133197] Total swap = 2104432kB > Nov 4 10:21:17 srv-linux kernel: [ 2687.136597] 2097152 pages RAM > Nov 4 10:21:17 srv-linux kernel: [ 2687.136597] 49948 pages reserved > Nov 4 10:21:17 srv-linux kernel: [ 2687.136597] 1656353 pages shared > Nov 4 10:21:17 srv-linux kernel: [ 2687.136597] 449267 pages non-shared 
yes > Nov 4 11:07:28 srv-linux kernel: [ 1293.436290] lowmem_reserve[]: 0 > 3251 8049 8049 > Nov 4 11:07:28 srv-linux kernel: [ 1293.436295] Node 0 DMA32 > free:20696kB min:4632kB low:5788kB high:6948kB active_anon:79808kB > inactive_anon:17188kB active_file:55724kB inactive_file:2866240kB > unevictable:0kB isolated(anon):0kB isolated(file):0kB present:3329568kB > mlocked:0kB dirty:1287108kB writeback:8764kB mapped:168kB shmem:76kB > slab_reclaimable:108972kB slab_unreclaimable:12288kB kernel_stack:824kB > pagetables:6980kB unstable:0kB bounce:0kB writeback_tmp:0kB > pages_scanned:0 all_unreclaimable? no > Nov 4 11:07:28 srv-linux kernel: [ 1293.436307] lowmem_reserve[]: 0 0 > 4797 4797 > Nov 4 11:07:28 srv-linux kernel: [ 1293.436311] Node 0 Normal > free:2460kB min:6836kB low:8544kB high:10252kB active_anon:406792kB > inactive_anon:68968kB active_file:204692kB inactive_file:3851164kB > unevictable:22836kB isolated(anon):0kB isolated(file):0kB > present:4912640kB mlocked:22836kB dirty:609064kB writeback:15644kB > mapped:54212kB shmem:360kB slab_reclaimable:95396kB > slab_unreclaimable:185808kB kernel_stack:3912kB pagetables:18252kB > unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 > all_unreclaimable? 
no > Nov 4 11:07:28 srv-linux kernel: [ 1293.436323] lowmem_reserve[]: 0 0 0 0 > Nov 4 11:07:28 srv-linux kernel: [ 1293.436327] Node 0 DMA: 3*4kB 4*8kB > 2*16kB 2*32kB 1*64kB 1*128kB 0*256kB 0*512kB 1*1024kB 1*2048kB 3*4096kB > = 15692kB > Nov 4 11:07:28 srv-linux kernel: [ 1293.436339] Node 0 DMA32: 53*4kB > 586*8kB 309*16kB 50*32kB 9*64kB 5*128kB 2*256kB 1*512kB 0*1024kB > 1*2048kB 1*4096kB = 19828kB > Nov 4 11:07:28 srv-linux kernel: [ 1293.436350] Node 0 Normal: 8*4kB > 12*8kB 1*16kB 3*32kB 1*64kB 0*128kB 0*256kB 1*512kB 1*1024kB 0*2048kB > 0*4096kB = 1840kB > Nov 4 11:07:28 srv-linux kernel: [ 1293.436361] 1746592 total pagecache > pages > Nov 4 11:07:28 srv-linux kernel: [ 1293.436363] 0 pages in swap cache > Nov 4 11:07:28 srv-linux kernel: [ 1293.436365] Swap cache stats: add > 0, delete 0, find 0/0 > Nov 4 11:07:28 srv-linux kernel: [ 1293.436367] Free swap = 2104432kB > Nov 4 11:07:28 srv-linux kernel: [ 1293.436369] Total swap = 2104432kB > Nov 4 11:07:28 srv-linux kernel: [ 1293.445967] 2097152 pages RAM > Nov 4 11:07:28 srv-linux kernel: [ 1293.445967] 49948 pages reserved > Nov 4 11:07:28 srv-linux kernel: [ 1293.445967] 1080140 pages shared > Nov 4 11:07:28 srv-linux kernel: [ 1293.445967] 1014865 pages non-shared > Nov 4 11:07:28 srv-linux kernel: [ 1293.480826] The following is only > an harmless informational message. > Nov 4 11:07:28 srv-linux kernel: [ 1293.480832] Unless you get a > _continuous_flood_ of these messages it means > Nov 4 11:07:28 srv-linux kernel: [ 1293.480838] everything is working > fine. Allocations from irqs cannot be > Nov 4 11:07:28 srv-linux kernel: [ 1293.480843] perfectly reliable and > the kernel is designed to handle that. > Nov 4 11:07:28 srv-linux kernel: [ 1293.480850] swapper: page > allocation failure. 
order:0, mode:0x20, alloc_flags:0x30 pflags:0x10200042 > Nov 4 11:07:28 srv-linux kernel: [ 1293.480856] Pid: 0, comm: swapper > Tainted: G X 2.6.32.12-0.7-default #1 > Nov 4 11:07:28 srv-linux kernel: [ 1293.480862] Call Trace: > Nov 4 11:07:28 srv-linux kernel: [ 1293.480883] [] > dump_trace+0x6c/0x2d0 > Nov 4 11:07:28 srv-linux kernel: [ 1293.480897] [] > dump_stack+0x69/0x71 > Nov 4 11:07:28 srv-linux kernel: [ 1293.480910] [] > __alloc_pages_slowpath+0x3ed/0x550 > Nov 4 11:07:28 srv-linux kernel: [ 1293.480921] [] > __alloc_pages_nodemask+0x13a/0x140 > Nov 4 11:07:28 srv-linux kernel: [ 1293.480933] [] > kmem_getpages+0x56/0x170 > Nov 4 11:07:28 srv-linux kernel: [ 1293.480944] [] > fallback_alloc+0x166/0x230 > Nov 4 11:07:28 srv-linux kernel: [ 1293.480955] [] > kmem_cache_alloc+0x192/0x1b0 > Nov 4 11:07:28 srv-linux kernel: [ 1293.480967] [] > skb_clone+0x3a/0x80 > Nov 4 11:07:28 srv-linux kernel: [ 1293.480979] [] > dev_queue_xmit_nit+0x82/0x170 > Nov 4 11:07:28 srv-linux kernel: [ 1293.480990] [] > dev_hard_start_xmit+0x4a/0x210 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481000] [] > sch_direct_xmit+0x16e/0x1e0 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481010] [] > __qdisc_run+0xaf/0x100 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481021] [] > dev_queue_xmit+0x4cb/0x4d0 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481032] [] > ip_queue_xmit+0x210/0x420 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481044] [] > tcp_transmit_skb+0x4cb/0x760 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481054] [] > tcp_delack_timer+0x14f/0x2a0 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481066] [] > run_timer_softirq+0x174/0x240 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481077] [] > __do_softirq+0xbf/0x170 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481088] [] > call_softirq+0x1c/0x30 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481098] [] > do_softirq+0x4d/0x80 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481108] [] > irq_exit+0x85/0x90 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481118] [] > 
smp_apic_timer_interrupt+0x6c/0xa0 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481131] [] > apic_timer_interrupt+0x13/0x20 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481142] [] > mwait_idle+0x62/0x70 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481152] [] > cpu_idle+0x5a/0xb0 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481159] Mem-Info: > Nov 4 11:07:28 srv-linux kernel: [ 1293.481163] Node 0 DMA per-cpu: > Nov 4 11:07:28 srv-linux kernel: [ 1293.481173] CPU 0: hi: 0, > btch: 1 usd: 0 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481178] CPU 1: hi: 0, > btch: 1 usd: 0 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481184] CPU 2: hi: 0, > btch: 1 usd: 0 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481189] CPU 3: hi: 0, > btch: 1 usd: 0 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481195] CPU 4: hi: 0, > btch: 1 usd: 0 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481200] CPU 5: hi: 0, > btch: 1 usd: 0 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481206] CPU 6: hi: 0, > btch: 1 usd: 0 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481211] CPU 7: hi: 0, > btch: 1 usd: 0 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481216] Node 0 DMA32 per-cpu: > Nov 4 11:07:28 srv-linux kernel: [ 1293.481226] CPU 0: hi: 186, > btch: 31 usd: 30 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481231] CPU 1: hi: 186, > btch: 31 usd: 186 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481237] CPU 2: hi: 186, > btch: 31 usd: 147 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481242] CPU 3: hi: 186, > btch: 31 usd: 174 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481248] CPU 4: hi: 186, > btch: 31 usd: 92 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481253] CPU 5: hi: 186, > btch: 31 usd: 49 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481259] CPU 6: hi: 186, > btch: 31 usd: 141 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481264] CPU 7: hi: 186, > btch: 31 usd: 142 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481269] Node 0 Normal per-cpu: > Nov 4 11:07:28 srv-linux kernel: [ 1293.481278] CPU 0: hi: 186, > btch: 31 usd: 46 > Nov 4 11:07:28 srv-linux kernel: 
[ 1293.481284] CPU 1: hi: 186, > btch: 31 usd: 158 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481289] CPU 2: hi: 186, > btch: 31 usd: 151 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481295] CPU 3: hi: 186, > btch: 31 usd: 39 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481300] CPU 4: hi: 186, > btch: 31 usd: 114 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481306] CPU 5: hi: 186, > btch: 31 usd: 59 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481311] CPU 6: hi: 186, > btch: 31 usd: 124 > Nov 4 11:07:28 srv-linux kernel: [ 1293.481316] CPU 7: hi: 186, > btch: 31 usd: 173 > Nov 4 11:07:29 srv-linux kernel: [ 1293.481325] active_anon:121650 > inactive_anon:21539 isolated_anon:0 > Nov 4 11:07:29 srv-linux kernel: [ 1293.481327] active_file:65104 > inactive_file:1679351 isolated_file:0 > Nov 4 11:07:29 srv-linux kernel: [ 1293.481328] unevictable:5709 > dirty:474043 writeback:6102 unstable:0 > Nov 4 11:07:29 srv-linux kernel: [ 1293.481329] free:9712 > slab_reclaimable:51092 slab_unreclaimable:49524 > Nov 4 11:07:29 srv-linux kernel: [ 1293.481330] mapped:13595 shmem:109 > pagetables:6308 bounce:0 > Nov 4 11:07:29 srv-linux kernel: [ 1293.481336] Node 0 DMA free:15692kB > min:20kB low:24kB high:28kB active_anon:0kB inactive_anon:0kB > active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB > isolated(file):0kB present:15320kB mlocked:0kB dirty:0kB writeback:0kB > mapped:0kB shmem:0kB slab_reclaimable:0kB slab_unreclaimable:0kB > kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB > writeback_tmp:0kB pages_scanned:0 all_unreclaimable?
yes > Nov 4 11:07:29 srv-linux kernel: [ 1293.481354] lowmem_reserve[]: 0 > 3251 8049 8049 > Nov 4 11:07:29 srv-linux kernel: [ 1293.481377] Node 0 DMA32 > free:20696kB min:4632kB low:5788kB high:6948kB active_anon:79808kB > inactive_anon:17188kB active_file:55724kB inactive_file:2866240kB > unevictable:0kB isolated(anon):0kB isolated(file):0kB present:3329568kB > mlocked:0kB dirty:1287108kB writeback:8764kB mapped:168kB shmem:76kB > slab_reclaimable:108972kB slab_unreclaimable:12288kB kernel_stack:824kB > pagetables:6980kB unstable:0kB bounce:0kB writeback_tmp:0kB > pages_scanned:0 all_unreclaimable? no > Nov 4 11:07:29 srv-linux kernel: [ 1293.481396] lowmem_reserve[]: 0 0 > 4797 4797 > Nov 4 11:07:29 srv-linux kernel: [ 1293.481419] Node 0 Normal > free:2460kB min:6836kB low:8544kB high:10252kB active_anon:406792kB > inactive_anon:68968kB active_file:204692kB inactive_file:3851164kB > unevictable:22836kB isolated(anon):0kB isolated(file):0kB > present:4912640kB mlocked:22836kB dirty:609064kB writeback:15644kB > mapped:54212kB shmem:360kB slab_reclaimable:95396kB > slab_unreclaimable:185808kB kernel_stack:3912kB pagetables:18252kB > unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 > all_unreclaimable? 
no > Nov 4 11:07:29 srv-linux kernel: [ 1293.481438] lowmem_reserve[]: 0 0 0 0 > Nov 4 11:07:29 srv-linux kernel: [ 1293.481462] Node 0 DMA: 3*4kB 4*8kB > 2*16kB 2*32kB 1*64kB 1*128kB 0*256kB 0*512kB 1*1024kB 1*2048kB 3*4096kB > = 15692kB > Nov 4 11:07:29 srv-linux kernel: [ 1293.481518] Node 0 DMA32: 53*4kB > 586*8kB 309*16kB 50*32kB 9*64kB 5*128kB 2*256kB 1*512kB 0*1024kB > 1*2048kB 1*4096kB = 19828kB > Nov 4 11:07:29 srv-linux kernel: [ 1293.481574] Node 0 Normal: 8*4kB > 12*8kB 1*16kB 3*32kB 1*64kB 0*128kB 0*256kB 1*512kB 1*1024kB 0*2048kB > 0*4096kB = 1840kB > Nov 4 11:07:29 srv-linux kernel: [ 1293.481630] 1746592 total pagecache > pages > Nov 4 11:07:29 srv-linux kernel: [ 1293.481635] 0 pages in swap cache > Nov 4 11:07:29 srv-linux kernel: [ 1293.481641] Swap cache stats: add > 0, delete 0, find 0/0 > Nov 4 11:07:29 srv-linux kernel: [ 1293.481646] Free swap = 2104432kB > Nov 4 11:07:29 srv-linux kernel: [ 1293.481651] Total swap = 2104432kB > Nov 4 11:07:29 srv-linux kernel: [ 1293.484802] 2097152 pages RAM > Nov 4 11:07:29 srv-linux kernel: [ 1293.484802] 49948 pages reserved > Nov 4 11:07:29 srv-linux kernel: [ 1293.484802] 1079742 pages shared > Nov 4 11:07:29 srv-linux kernel: [ 1293.484802] 1013515 pages non-shared > > > __ > masoch-l list > https://eng.registro.br/mailman/listinfo/masoch-l > -- Armando Roque Ferreira Pinto Analista de sistemas From cpereira at unisc.br Thu Nov 18 09:33:33 2010 From: cpereira at unisc.br (Cristiano Maynart Pereira) Date: Thu, 18 Nov 2010 09:33:33 -0200 Subject: [MASOCH-L] =?iso-8859-1?q?RES=3A__DNAT_para_rede_local_n=E3o_func?= =?iso-8859-1?q?iona_=28Debian=29?= In-Reply-To: <001801cb86b9$bd817200$38845600$@rafaelsantos.com> References: <001801cb86b9$bd817200$38845600$@rafaelsantos.com> Message-ID: <642A062A3EA5004EB84EBD1D34C1C54C040BBB74@sun78.unisc.br> -----Mensagem original----- De: masoch-l-bounces at eng.registro.br [mailto:masoch-l-bounces at eng.registro.br] Em nome de Rafael Santos Enviada em: quarta-feira, 
17 de novembro de 2010 22:44
Para: Mail Aid and Succor, On-line Comfort and Help
Assunto: [MASOCH-L] DNAT para rede local não funciona (Debian)

Dear all,

I am facing a truly bizarre problem here and would like some help from you.

I have the following topology:

Eth0 = 192.168.1.0/24 (LAN)
Eth1 = 200.200.200.200 (WAN 1)
Eth2 = 201.201.201.201 (WAN 2)

There is a DNAT rule so that every packet arriving on certain ports with destination = WAN 1 IP is forwarded (DNAT) to the server 192.168.1.45. So far so good: from anywhere on the planet, and perhaps the universe (the latency might be bothersome over greater distances), the connection works like a charm.

The problem occurs when I try to reach the server through the WAN 1 IP from the LAN itself. You will tell me: "Of course, you clueless pup, don't you see that the request is made to the WAN 1 IP but, since they are on the same network, the reply goes straight from the server 192.168.1.45 to the machine that made the request?" I answer: "Yes! That is exactly why I created an SNAT rule saying that every packet forwarded to the server must leave with a source address equal to WAN 1!"

I have tried several variations, but the current rules are the following:

DNAT tcp -- * * !192.168.1.45 200.200.200.200 tcp multiport dports 80,443,5252,7531,7532 to:192.168.1.45
SNAT all -- * eth0 192.168.1.0/24 192.168.1.45 to:200.200.200.200

The packets leave the workstation (192.168.1.50), pass through the gateway, and are correctly forwarded to 192.168.1.45 with source address = 200.200.200.200; the server 192.168.1.45 replies to 200.200.200.200, but the packets are never returned to the workstation. Shouldn't connection tracking exist to forward the return traffic? Could anyone tell me why this $#@%@ is not working?

I appreciate any help.

Regards,

Rafael Santos

Hello. Since the server is on the same network as the workstations, I see no reason to go through your gateway: it would be enough to configure your DNS (views) so that, internally, your server is resolved to the IP 192.168.1.45.

Cristiano Maynart

From listas at rafaelsantos.com Thu Nov 18 09:38:27 2010
From: listas at rafaelsantos.com (listas at rafaelsantos.com)
Date: Thu, 18 Nov 2010 09:38:27 -0200 (BRST)
Subject: [MASOCH-L] DNAT para rede local =?iso-8859-1?Q?n=E3o_funciona_?=(Debian)
In-Reply-To:
References: <001801cb86b9$bd817200$38845600$@rafaelsantos.com>
Message-ID:

> since a packet originated inside the internal network does not pass
> through the external interface WAN1, the iptables rules hanging off that
> interface do not apply. repeat the rule for the internal interface and
> see what happens.

Danton,

The DNAT rule does not specify the interface for exactly that reason; its condition is the following:

"If protocol is TCP and source is not 192.168.1.45 and destination is 200.200.200.200 and destination ports are 80,443,5252,7531,7532"

In the SNAT rule I did specify the output interface, as eth0 (LAN):

"If source is 192.168.1.0/24 and destination is 192.168.1.45 and output interface is eth0"

Of course, this is just the current situation... I have run several other tests without success. I am on the 21st floor; I hope to solve this problem before the server flies out the window!

Regards,

Rafael Santos

From listas at rafaelsantos.com Thu Nov 18 09:40:31 2010
From: listas at rafaelsantos.com (listas at rafaelsantos.com)
Date: Thu, 18 Nov 2010 09:40:31 -0200 (BRST)
Subject: [MASOCH-L] RES: DNAT para rede local =?iso-8859-1?Q?n=E3o_funciona_?=(Debian)
In-Reply-To: <642A062A3EA5004EB84EBD1D34C1C54C040BBB74@sun78.unisc.br>
References: <001801cb86b9$bd817200$38845600$@rafaelsantos.com> <642A062A3EA5004EB84EBD1D34C1C54C040BBB74@sun78.unisc.br>
Message-ID:

> Since the server is on the same network as the workstations, I see no
> reason to go through your gateway: it would be enough to configure your
> DNS (views) so that, internally, your server is resolved to the IP
> 192.168.1.45.

Hello Cristiano,

The problem is that I work with embedded systems and mobile platforms, where DNS problems are frequent, so all the devices are configured to reach the server directly by IP...

Regards,

Rafael Santos

From Rochele at unisinos.br Thu Nov 18 10:16:17 2010
From: Rochele at unisinos.br (Rochele at unisinos.br)
Date: Thu, 18 Nov 2010 10:16:17 -0200
Subject: [MASOCH-L] Convite: [GTER] Programa e Inscricoes GTER30/GTS16 - Sao Leopoldo - RS
References: <20101029220701.GT64648@registro.br> <4CD13B76.D9DA.00B6.0@unisinos.br> <4CD13D5F.D9DA.00B6.0@unisinos.br> <4CD13DAD.D9DA.00B6.0@unisinos.br>
Message-ID: <4CE4FCF1.D9DA.00B6.0@unisinos.br>

Dear all,

Below is the program for GTER30 and GTS16. Registration is already open on the event site, http://gter.nic.br/reunioes/gter-30/

25/11/2010 - Tutorials

08:00 - 17:00 Implementando o OSSEC HIDS - Jerônimo Zucco (Universidade de Caxias do Sul)
08:00 - 17:00 BGP para provedores de serviço

26/11/2010 - GTER 30

08:00 - 08:50 Reception
08:50 - 09:00 Opening
09:00 - 09:20 De onde vem o spam? Seis meses de funcionamento de um 'spamtrap' - Danton Nunes (Internexo)
09:20 - 10:00 Boas práticas para peering no PTTMetro - Luís Balbinot (Commcorp Telecom)
10:00 - 10:30 DNSSEC - Provisionamento e reassinatura automática usando BIND - David Robert Camargo de Campos and Wilson Rogério Lopes (Nic.br)
10:30 - 11:00 Coffee Break
11:00 - 11:30 Relato da entrada do servidor DNS raiz "I" em Porto Alegre - Leandro Bertholdo and Liane Tarouco (UFRGS)
11:30 - 12:30 DNS Root Signing HowTo, Lessons Learned, and Future Impact - Richard Lamb (ICANN)
12:30 - 14:00 Lunch
14:00 - 14:50 Ferramentas para coexistência e transição IPv4 e IPv6 - Antonio M. Moreiras (Nic.br)
14:50 - 15:10 ASN 32bits - Seu uso na Internet BR - Ricardo Patara (Nic.br)
15:10 - 15:40 Coffee Break
15:40 - 16:00 IPv6 - Análise sobre seu uso na Internet BR - Ricardo Patara (Nic.br)
16:00 - 16:30 IPv6 sobre Redes Metropolitanas. Estudo de Caso: MetroPoa - Cesar Loureiro, Leandro Bertholdo and Liane Tarouco (UFRGS)
16:30 - 17:30 BIND 10 - The architecture of the next generation DNS server - Shane Kerr (ISC)

27/11/2010 - GTS 16

08:00 - 08:50 Reception
08:50 - 09:00 Opening
09:00 - 09:40 Secure Application Development for the Enterprise: Practical, Real-world Tips - Luiz Gustavo Cunha Barbato, Mauricio Westendorff Pegoraro and Rafael Dreher (Dell)
09:40 - 10:20 Coleta, Identificação e Extração de Dados (Data Carving) em Mídias e em Redes - Ricardo Kléber Martins Galvão (IFRN)
10:20 - 10:50 Coffee Break
10:50 - 11:30 Análise Comportamental de Malware - André Grégio, Dario Fernandes, Vitor Afonso and Paulo Lício de Geus (CTI/MCT and UNICAMP)
11:30 - 12:10 Resposta a incidentes: Diagnósticos equivocados e finais felizes - Nelson Murilo (DTE)
12:10 - 14:00 Lunch
14:00 - 14:40 Usando visualização para documentação rápida de incidentes de segurança - Gabriel Dieterich Cavalcante and Paulo Lício de Geus (IC/UNICAMP)
14:40 - 15:20 Estudos de Caso - Reinaldo de Medeiros (Entropia Security)
15:20 - 15:50 Coffee Break
15:50 - 16:30 Segurança em Passaportes Eletrônicos - Ivo de Carvalho Peixinho (Polícia Federal)
16:30 - 17:10 Invited presentation (to be defined)
17:10 - 17:20 Closing

--
GTER Secretariat - 30th Meeting
São Leopoldo, RS - November 25 to 27, 2010
http://gter.nic.br/

--
Rochele A. S. Moreira
UNISINOS - GSI/Infraestrutura
www.unisinos.br - 51 3590-8386
inoc 19611*100 - internal extension 1886

From danton.nunes at inexo.com.br Thu Nov 18 11:28:50 2010
From: danton.nunes at inexo.com.br (Danton Nunes)
Date: Thu, 18 Nov 2010 11:28:50 -0200 (BRST)
Subject: [MASOCH-L] =?iso-8859-15?q?DNAT_para_rede_local_n=E3o_funciona_?= =?iso-8859-15?q?=28Debian=29?=
In-Reply-To:
References: <001801cb86b9$bd817200$38845600$@rafaelsantos.com>
Message-ID:

On Thu, 18 Nov 2010, listas at rafaelsantos.com wrote:

> The DNAT rule does not specify the interface for exactly that reason;
> its condition is the following:
>
> "If protocol is TCP and source is not 192.168.1.45 and destination is
> 200.200.200.200 and destination ports are 80,443,5252,7531,7532"

Is there a MASQUERADE rule alongside it? Which one fires first? In which "chain" is that rule? (From the little I understand of this, it should be PREROUTING.)

Check how iptables-save or iptables -L displays the rule.

Now the philosophical question: why do internal users have to reach that server through the external address? Some 'IP-based virtual host'? Someone on this very list suggested you configure DNS with "views" so that, when queried internally, the name server returns the internal address; I believe that is a much easier arrangement.

From listas at rafaelsantos.com Thu Nov 18 11:35:26 2010
From: listas at rafaelsantos.com (Rafael Santos)
Date: Thu, 18 Nov 2010 11:35:26 -0200
Subject: [MASOCH-L] =?iso-8859-1?q?RES=3A_=09DNAT_para_rede_local_n=E3o_fu?= =?iso-8859-1?q?nciona_=28Debian=29?=
In-Reply-To:
References: <001801cb86b9$bd817200$38845600$@rafaelsantos.com>
Message-ID: <003201cb8725$75f5e4e0$61e1aea0$@rafaelsantos.com>

> Is there a MASQUERADE rule alongside it? Which one fires first? In which
> "chain" is that rule? (From the little I understand of this, it should
> be PREROUTING.)
>
> Check how iptables-save or iptables -L displays the rule.
>
> Now the philosophical question: why do internal users have to reach that
> server through the external address? Some 'IP-based virtual host'?
> Someone on this very list suggested you configure DNS with "views" so
> that, when queried internally, the name server returns the internal
> address; I believe that is a much easier arrangement.

DNAT is in PREROUTING and SNAT in POSTROUTING. Here are the rules exactly as they appear in "iptables -L -t nat -n -v"; they are the first rules of each of the chains:

Chain PREROUTING (policy ACCEPT 128K packets, 12M bytes)
 pkts bytes target prot opt in out source destination
  385 20828 DNAT tcp -- * * !192.168.1.45 200.200.200.200 tcp multiport dports 80,443,5252,7531,7532 to:192.168.1.45

Chain POSTROUTING (policy ACCEPT 21452 packets, 1549K bytes)
 pkts bytes target prot opt in out source destination
  139 6688 SNAT all -- * eth0 192.168.1.0/24 192.168.1.45 to:200.200.200.200

The answer to your "philosophical question" is the one I gave the colleague who suggested the DNS+views solution: we work with embedded and mobile platforms, where DNS problems are frequent, so the devices are usually configured to access the server directly by IP, and it is extremely counterproductive to keep changing that configuration whenever someone needs to use a device, or even a simulator, here inside the company...

Tks again!

Rafael Santos

From mustardahc at gmail.com Thu Nov 18 16:17:53 2010
From: mustardahc at gmail.com (Bruno Camargo)
Date: Thu, 18 Nov 2010 16:17:53 -0200
Subject: [MASOCH-L] Dificuldades para Receber Email do Bradesco
Message-ID:

Dear list,

We are having trouble receiving emails coming from the domain corpr.bradesco.com.br. The emails never arrive at our domain, currently hosted at kinghost.
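For the hairpin-NAT (NAT reflection) problem discussed in the thread above, a commonly suggested arrangement keeps the DNAT rule but SNATs the reflected LAN traffic to the gateway's own LAN address, so the server's replies are forced back through the gateway regardless of the server's routing table. A minimal sketch in iptables-restore format; the addresses and ports come from the thread, but the gateway LAN address 192.168.1.1 is an assumption (the thread never states it):

```
*nat
# Redirect connections aimed at the public address to the internal server,
# whichever interface they arrive on (rule as posted in the thread):
-A PREROUTING -p tcp ! -s 192.168.1.45 -d 200.200.200.200 -m multiport --dports 80,443,5252,7531,7532 -j DNAT --to-destination 192.168.1.45
# For LAN-originated (reflected) connections, rewrite the source to the
# gateway's LAN address (192.168.1.1 here -- an assumption) so the server
# answers the gateway, letting conntrack de-NAT both directions:
-A POSTROUTING -o eth0 -p tcp -s 192.168.1.0/24 -d 192.168.1.45 -m multiport --dports 80,443,5252,7531,7532 -j SNAT --to-source 192.168.1.1
COMMIT
```

Loaded with `iptables-restore -n` (append without flushing). The only difference from the rules posted in the thread is the SNAT target: using the LAN-side address instead of 200.200.200.200 removes any dependence on the server's default route for the reply path.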
I tried to resolve this domain name, and a valid entry does exist:

C:\Documents and Settings\ba956x>nslookup bradesco.com.br
Server: brspdc01.americas.att.com
Address: 135.75.22.40

Non-authoritative answer:
Name: bradesco.com.br
Address: 200.155.82.1

C:\Documents and Settings\ba956x>nslookup corpr.bradesco.com.br
Server: brspdc01.americas.att.com
Address: 135.75.22.40
*** brspdc01.americas.att.com can't find corpr.bradesco.com.br: Non-existent domain

I added both to the provider's whitelist, but we still do not receive the emails. Does anyone have any idea?

Thanks,

Bruno Camargo
--
BrCaBadT

From paulo.rddck at bsd.com.br Sat Nov 20 06:31:31 2010
From: paulo.rddck at bsd.com.br (Paulo Henrique)
Date: Sat, 20 Nov 2010 06:31:31 -0200
Subject: [MASOCH-L] =?iso-8859-1?q?RES=3A_DNAT_para_rede_local_n=E3o_funci?= =?iso-8859-1?q?ona_=28Debian=29?=
In-Reply-To: <003201cb8725$75f5e4e0$61e1aea0$@rafaelsantos.com>
References: <001801cb86b9$bd817200$38845600$@rafaelsantos.com> <003201cb8725$75f5e4e0$61e1aea0$@rafaelsantos.com>
Message-ID:

Is this router by any chance separating the network segments? Have you tried NAT reflection?

On November 18, 2010 at 11:35, Rafael Santos wrote:

> > Is there a MASQUERADE rule alongside it? Which one fires first? In
> > which "chain" is that rule? (From the little I understand of this, it
> > should be PREROUTING.)
> >
> > Check how iptables-save or iptables -L displays the rule.
> >
> > Now the philosophical question: why do internal users have to reach
> > that server through the external address? Some 'IP-based virtual
> > host'? Someone on this very list suggested you configure DNS with
> > "views" so that, when queried internally, the name server returns the
> > internal address; I believe that is a much easier arrangement.
>
> DNAT is in PREROUTING and SNAT in POSTROUTING. Here are the rules exactly
> as they appear in "iptables -L -t nat -n -v"; they are the first rules of
> each of the chains:
>
> Chain PREROUTING (policy ACCEPT 128K packets, 12M bytes)
>  pkts bytes target prot opt in out source destination
>   385 20828 DNAT tcp -- * * !192.168.1.45 200.200.200.200 tcp multiport dports 80,443,5252,7531,7532 to:192.168.1.45
>
> Chain POSTROUTING (policy ACCEPT 21452 packets, 1549K bytes)
>  pkts bytes target prot opt in out source destination
>   139 6688 SNAT all -- * eth0 192.168.1.0/24 192.168.1.45 to:200.200.200.200
>
> The answer to your "philosophical question" is the one I gave the
> colleague who suggested the DNS+views solution: we work with embedded and
> mobile platforms, where DNS problems are frequent, so the devices are
> usually configured to access the server directly by IP, and it is
> extremely counterproductive to keep changing that configuration whenever
> someone needs to use a device, or even a simulator, here inside the
> company...
>
> Tks again!
>
> Rafael Santos
>
> __
> masoch-l list
> https://eng.registro.br/mailman/listinfo/masoch-l

--
:=)>Paulo Henrique (JSRD)<(=:
Alone, locked, a survivor, unfortunately not know who I am

From paulo.rddck at bsd.com.br Sat Nov 20 08:01:10 2010
From: paulo.rddck at bsd.com.br (Paulo Henrique)
Date: Sat, 20 Nov 2010 08:01:10 -0200
Subject: [MASOCH-L] Problemas Dell PowerEdge 1900 com SLES11 SP1
In-Reply-To:
References: <4CD2C56C.1020108@bhz.jamef.com.br>
Message-ID:

A Squid cache for 300 users with HA: check the kernel tuning for System V shared memory. Add the processor time demanded by two Gigabit interfaces, and all that load suggests poorly dimensioned services. And those 7 GB of disk cache held in memory are another strange thing; I know Linux uses fairly modest resources to reduce disk access, but using almost all of the memory just to avoid disk access is a fact I would not disregard.

To all Linux users: learn to compile the kernel and tune it to your actual needs. Running a GENERIC kernel is the main factor in 99% of problems; evaluate which scheduler to use and its mode of operation. Just because the system's name says "Server" does not mean it is ready for your environment.

After banging my head at a customer site for more than two months, and practically becoming an expert in Red Hat and Debian, I came to set those distributions aside and solved it in the most archaic way possible: Slackware Linux 10.2, fully compiled locally, as were all the other required services: NFS/Samba, httpd, DNS, LDAP, PostgreSQL. That was a little over 3 years ago; today I am not very patient with other distributions, and whenever I can I do not use Linux at all, opting for FreeBSD or another member of the BSD family; only in very specific cases do I use Linux (software with no FreeBSD support).

Food for thought.

On November 4, 2010 at 15:07, Armando Roque wrote:

> Rejaine,
>
> José Augusto has already taken care of asking about the updates. Besides
> SP1, how are things?! Did you update anything afterwards?
>
> Regards,
>
> On November 4, 2010 at 11:38, Rejaine Monteiro wrote:
>
> > Folks,
> >
> > I come asking for help..
> >
> > I have a serious performance problem on a server running SLES11 (SP1),
> > installed on a PowerEdge 1900 (configuration below).
> >
> > What happens is the following:
> >
> > We used to have, at our sites, several far less capable servers
> > serving the same number of users and the same services, but running
> > OpenSuSE 10.2. Everything worked perfectly well until then but,
> > following our plan to modernize the machine park, we chose to upgrade
> > the hardware and the OS (which was quite outdated) at those sites, and
> > that is when the problems began.
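The kernel tunings mentioned above live under /proc/sys and can be made persistent in /etc/sysctl.conf. As a hedged illustration only (the values below are placeholders to be sized for the real workload, not recommendations), the GFP_ATOMIC page-allocation failures logged earlier in this thread are commonly approached by enlarging the allocator's reserve and starting writeback earlier:

```
# /etc/sysctl.conf fragment -- illustrative values; apply with: sysctl -p
# Keep a larger reserve for atomic (interrupt-time) allocations, the kind
# that fails in the "page allocation failure. order:0, mode:0x20" traces:
vm.min_free_kbytes = 65536
# Start background writeback sooner so dirty page cache (well over 1 GB
# dirty in the logs above) does not accumulate:
vm.dirty_background_ratio = 5
vm.dirty_ratio = 10
# System V shared-memory ceilings, the tuning Paulo refers to:
kernel.shmmax = 4294967296
kernel.shmall = 1048576
```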
> >
> > Initially we did the replacement at only two smaller sites with fewer
> > users, and we had already noticed a certain increase in CPU load. We
> > updated to SLES11 with SP1 and things seemed to improve a bit.
> >
> > However, at one site in particular, with about 300 users, the server's
> > performance is simply dreadful. The CPU load climbs so high that at times
> > I can barely log in to look at the syslog, and I often have to shut down
> > several services, or reboot, to get back to normal.
> >
> > We have already made several kernel tuning adjustments and many other
> > tuning adjustments in the various applications the server runs
> > (especially in the most important services such as drbd, heartbeat,
> > ldap, nfsserver, etc.). Nothing seems to have any effect on the problem;
> > no considerable improvement even after dozens of adjustments.
> >
> > Since we have two identical servers (one in failover mode, because of
> > HA), we tested bringing all services up on the backup server, to rule
> > out disk and/or hardware problems on the main machine, but the problems
> > continued on the other server as well.
> >
> > When the load is very high, syslog starts generating several dumps in
> > /var/log/messages (shown below).
> >
> > Apparently there are no I/O problems (we even added a RAID to improve
> > disk performance and made several adjustments, but nothing solved it or
> > had any effect). What we notice is that there is no relation between
> > iowait and CPU load; that is, when the load is high, the disk shows no
> > overload. It seems to be something to do with memory, but the old server
> > handles it with 4 GB on OpenSuSE 10.2, while this server, despite being
> > even beefier and having twice the memory, does not.
> >
> > Honestly, we are going to try a downgrade of the OS, because weaker
> > hardware, running basically the same services for the same number of
> > users, worked very well with OpenSuSE 10.2.
> >
> > Below is a description of the hardware, software, and services used on
> > the server, followed by some messages that appear in the syslog.
> >
> > If anyone can help with any tip, I will be very grateful (any help is
> > welcome).
> >
> > Server> Dell PowerEdge 1900
> > 2 x Intel(R) Xeon(R) CPU E5310 1.60GHz DualCore
> > 8G RAM
> > 4 SAS HDs, 15000 rpm
> >
> > Software> SUSE Linux Enterprise Server 11 - Service Pack 1
> > Kernel> Linux srv-linux 2.6.32.12-0.7-default #1 SMP 2010-05-20 11:14:20 +0200 x86_64 x86_64 x86_64 GNU/Linux
> >
> > Basic services running on this server: linux-ha (drbd+heartbeat),
> > openldap, qmail-ldap, samba-ldap, nfsserver, dhcp, named, squid and jabberd
> > Number of users: 300
> > Linux users have their HOMEDIR mounted via NFS
> > Windows users use SAMBA for group file sharing and/or profile backup
> >
> > top - 10:33:37 up 57 min, 19 users, load average: 40.44, 49.96, 42.26
> > Tasks: 510 total, 1 running, 509 sleeping, 0 stopped, 0 zombie
> > Cpu(s): 1.3%us, 1.5%sy, 0.0%ni, 94.2%id, 1.7%wa, 0.0%hi, 1.4%si, 0.0%st
> > Mem: 8188816k total, 8137392k used, 51424k free, 57116k buffers
> > Swap: 2104432k total, 0k used, 2104432k free, 7089980k cached
> >
> >   PID USER   PR NI  VIRT  RES  SHR S %CPU %MEM   TIME+ COMMAND
> >  9901 qscand 20  0  207m 164m 2032 S    0  2.1 0:04.63 clamd
> >  4074 root   20  0  358m  57m 1992 S    0  0.7 0:03.03 nscd
> >  9016 named  20  0  320m  54m 2464 S    0  0.7 0:17.37 named
> > 22761 root   20  0  115m  50m 4604 S    0  0.6 0:02.30 nxagent
> > 23477 root   20  0  597m  33m  21m S    0  0.4 0:01.20 plasma-desktop
> > 23357 root   20  0  453m  30m  23m S    0  0.4 0:00.51 kwin
> >  9028 ldap   20  0 1930m  26m 4564 S    0  0.3 1:36.51 slapd
> >  9248 root   20  0  324m  24m  17m S    0  0.3 0:03.92 kdm_greet
> > 24164 root   20  0  486m  23m  16m S    0  0.3 0:00.35
krunner > > 10870 root 20 0 24548 20m 1168 S 2 0.3 0:22.59 jabberd > > 9014 root 20 0 120m 19m 5328 S 0 0.2 0:03.04 Xorg > > 24283 root 20 0 173m 19m 14m S 0 0.2 0:00.18 kdialog > > 22940 root 20 0 290m 18m 12m S 0 0.2 0:00.22 kded4 > > 24275 root 20 0 191m 18m 13m S 0 0.2 0:00.22 > kupdateapplet > > 24270 root 20 0 237m 16m 10m S 0 0.2 0:00.11 kmix > > 4061 root -2 0 92828 16m 8476 S 0 0.2 0:01.18 heartbeat > > 24274 root 20 0 284m 15m 9.9m S 0 0.2 0:00.10 klipper > > 23299 root 20 0 309m 14m 9844 S 0 0.2 0:00.08 ksmserver > > 22899 root 20 0 201m 14m 10m S 0 0.2 0:00.10 kdeinit4 > > 23743 root 20 0 228m 12m 7856 S 0 0.2 0:00.10 kglobalaccel > > 24167 root 20 0 235m 12m 7760 S 0 0.2 0:00.04 > nepomukserver > > > > # /usr/bin/uptime > > 11:04am up 0:18, 7 users, load average: 27.52, 18.60, 10.27 > > > > # /usr/bin/vmstat 1 4 > > procs -----------memory---------- ---swap-- -----io---- -system-- > > -----cpu------ > > r b swpd free buff cache si so bi bo in cs us sy > > id wa st > > 2 0 0 50856 19300 7196808 0 0 507 378 1167 1175 3 3 > > 88 6 0 > > 0 0 0 41332 19300 7200960 0 0 176 1279 14284 10519 2 > > 2 93 2 0 > > 1 0 0 43184 19184 7181520 0 0 0 1074 7191 1856 0 1 > > 99 0 0 > > 0 0 0 43316 19128 7179868 0 0 0 1189 2237 2340 1 0 > > 99 0 0 > > > > # /usr/bin/vmstat 1 4 > > procs -----------memory---------- ---swap-- -----io---- -system-- > > -----cpu------ > > r b swpd free buff cache si so bi bo in cs us sy > > id wa st > > 0 1 0 47276 19048 7177788 0 0 498 384 1166 1171 3 3 > > 88 6 0 > > 1 0 0 46128 19056 7167016 0 0 36 970 7530 4158 2 1 > > 95 2 0 > > 0 1 0 46452 19064 7163616 0 0 20 798 1411 1749 2 1 > > 97 0 0 > > 0 0 0 46868 19064 7162624 0 0 56 751 7079 2169 1 1 > > 97 0 0 > > > > Nov 4 09:57:53 srv-linux kernel: [ 1284.893013] The following is only > > an harmless informational message. 
> > Nov 4 09:57:53 srv-linux kernel: [ 1284.893019] Unless you get a > > _continuous_flood_ of these messages it means > > Nov 4 09:57:53 srv-linux kernel: [ 1284.893021] everything is working > > fine. Allocations from irqs cannot be > > Nov 4 09:57:53 srv-linux kernel: [ 1284.893023] perfectly reliable and > > the kernel is designed to handle that. > > Nov 4 09:57:53 srv-linux kernel: [ 1284.893028] swapper: page > > allocation failure. order:0, mode:0x20, alloc_flags:0x30 > pflags:0x10200042 > > Nov 4 09:57:53 srv-linux kernel: [ 1284.893032] Pid: 0, comm: swapper > > Tainted: G X 2.6.32.12-0.7-default #1 > > Nov 4 09:57:53 srv-linux kernel: [ 1284.893035] Call Trace: > > Nov 4 09:57:53 srv-linux kernel: [ 1284.893054] [] > > dump_trace+0x6c/0x2d0 > > Nov 4 09:57:53 srv-linux kernel: [ 1284.893063] [] > > dump_stack+0x69/0x71 > > Nov 4 09:57:53 srv-linux kernel: [ 1284.893070] [] > > __alloc_pages_slowpath+0x3ed/0x550 > > Nov 4 09:57:53 srv-linux kernel: [ 1284.893077] [] > > __alloc_pages_nodemask+0x13a/0x140 > > Nov 4 09:57:53 srv-linux kernel: [ 1284.893084] [] > > kmem_getpages+0x56/0x170 > > Nov 4 09:57:53 srv-linux kernel: [ 1284.893089] [] > > fallback_alloc+0x166/0x230 > > Nov 4 09:58:12 srv-linux kernel: [ 1284.893095] [] > > kmem_cache_alloc+0x192/0x1b0 > > Nov 4 09:58:12 srv-linux kernel: [ 1284.893102] [] > > skb_clone+0x3a/0x80 > > Nov 4 09:58:12 srv-linux kernel: [ 1284.893109] [] > > dev_queue_xmit_nit+0x82/0x170 > > Nov 4 09:58:12 srv-linux kernel: [ 1284.893114] [] > > dev_hard_start_xmit+0x4a/0x210 > > Nov 4 09:58:12 srv-linux kernel: [ 1284.893120] [] > > sch_direct_xmit+0x16e/0x1e0 > > Nov 4 09:58:12 srv-linux kernel: [ 1284.893126] [] > > dev_queue_xmit+0x366/0x4d0 > > Nov 4 09:58:12 srv-linux kernel: [ 1284.893132] [] > > ip_queue_xmit+0x210/0x420 > > Nov 4 09:58:12 srv-linux kernel: [ 1284.893139] [] > > tcp_transmit_skb+0x4cb/0x760 > > Nov 4 09:58:12 srv-linux kernel: [ 1284.893145] [] > > tcp_delack_timer+0x14f/0x2a0 > > Nov 4 09:58:12 
srv-linux kernel: [ 1284.893151] [] > > run_timer_softirq+0x174/0x240 > > Nov 4 09:58:12 srv-linux kernel: [ 1284.893157] [] > > __do_softirq+0xbf/0x170 > > Nov 4 09:58:12 srv-linux kernel: [ 1284.893163] [] > > call_softirq+0x1c/0x30 > > Nov 4 09:58:12 srv-linux kernel: [ 1284.893168] [] > > do_softirq+0x4d/0x80 > > Nov 4 09:58:12 srv-linux kernel: [ 1284.893173] [] > > irq_exit+0x85/0x90 > > Nov 4 09:58:12 srv-linux kernel: [ 1284.893178] [] > > smp_apic_timer_interrupt+0x6c/0xa0 > > Nov 4 09:58:12 srv-linux kernel: [ 1284.893185] [] > > apic_timer_interrupt+0x13/0x20 > > Nov 4 10:21:17 srv-linux kernel: [ 2687.090713] 449274 pages non-shared > > Nov 4 10:21:17 srv-linux kernel: [ 2687.132671] The following is only > > an harmless informational message. > > Nov 4 10:21:17 srv-linux kernel: [ 2687.132677] Unless you get a > > _continuous_flood_ of these messages it means > > Nov 4 10:21:17 srv-linux kernel: [ 2687.132680] everything is working > > fine. Allocations from irqs cannot be > > Nov 4 10:21:17 srv-linux kernel: [ 2687.132683] perfectly reliable and > > the kernel is designed to handle that. > > Nov 4 10:21:17 srv-linux kernel: [ 2687.132688] swapper: page > > allocation failure. 
order:0, mode:0x20, alloc_flags:0x30 > pflags:0x10200042 > > Nov 4 10:21:17 srv-linux kernel: [ 2687.132696] Pid: 0, comm: swapper > > Tainted: G X 2.6.32.12-0.7-default #1 > > Nov 4 10:21:17 srv-linux kernel: [ 2687.132699] Call Trace: > > Nov 4 10:21:17 srv-linux kernel: [ 2687.132719] [] > > dump_trace+0x6c/0x2d0 > > Nov 4 10:21:17 srv-linux kernel: [ 2687.132729] [] > > dump_stack+0x69/0x71 > > Nov 4 10:21:17 srv-linux kernel: [ 2687.132738] [] > > __alloc_pages_slowpath+0x3ed/0x550 > > Nov 4 10:21:17 srv-linux kernel: [ 2687.132746] [] > > __alloc_pages_nodemask+0x13a/0x140 > > Nov 4 10:21:17 srv-linux kernel: [ 2687.132754] [] > > kmem_getpages+0x56/0x170 > > Nov 4 10:21:17 srv-linux kernel: [ 2687.132761] [] > > fallback_alloc+0x166/0x230 > > Nov 4 10:21:17 srv-linux kernel: [ 2687.132768] [] > > kmem_cache_alloc+0x192/0x1b0 > > Nov 4 10:21:17 srv-linux kernel: [ 2687.132777] [] > > skb_clone+0x3a/0x80 > > Nov 4 10:21:17 srv-linux kernel: [ 2687.132788] [] > > packet_rcv_spkt+0x78/0x190 [af_packet] > > Nov 4 10:21:17 srv-linux kernel: [ 2687.132807] [] > > netif_receive_skb+0x3a2/0x660 > > Nov 4 10:21:17 srv-linux kernel: [ 2687.132819] [] > > bnx2_rx_int+0x59d/0x820 [bnx2] > > Nov 4 10:21:17 srv-linux kernel: [ 2687.132836] [] > > bnx2_poll_work+0x6f/0x90 [bnx2] > > Nov 4 10:21:17 srv-linux kernel: [ 2687.132851] [] > > bnx2_poll+0x61/0x1cc [bnx2] > > Nov 4 10:21:17 srv-linux kernel: [ 2687.132865] [] > > net_rx_action+0xe3/0x1a0 > > Nov 4 10:21:17 srv-linux kernel: [ 2687.132873] [] > > __do_softirq+0xbf/0x170 > > Nov 4 10:21:17 srv-linux kernel: [ 2687.132881] [] > > call_softirq+0x1c/0x30 > > Nov 4 10:21:17 srv-linux kernel: [ 2687.132887] [] > > do_softirq+0x4d/0x80 > > Nov 4 10:21:17 srv-linux kernel: [ 2687.132893] [] > > irq_exit+0x85/0x90 > > Nov 4 10:21:17 srv-linux kernel: [ 2687.132899] [] > > do_IRQ+0x6e/0xe0 > > Nov 4 10:21:17 srv-linux kernel: [ 2687.132907] [] > > ret_from_intr+0x0/0xa > > Nov 4 10:21:17 srv-linux kernel: [ 2687.132915] [] > 
> mwait_idle+0x62/0x70 > > Nov 4 10:21:17 srv-linux kernel: [ 2687.132922] [] > > cpu_idle+0x5a/0xb0 > > Nov 4 10:21:17 srv-linux kernel: [ 2687.132926] Mem-Info: > > Nov 4 10:21:17 srv-linux kernel: [ 2687.132929] Node 0 DMA per-cpu: > > Nov 4 10:21:17 srv-linux kernel: [ 2687.132934] CPU 0: hi: 0, > > btch: 1 usd: 0 > > Nov 4 10:21:17 srv-linux kernel: [ 2687.132938] CPU 1: hi: 0, > > btch: 1 usd: 0 > > Nov 4 10:21:17 srv-linux kernel: [ 2687.132941] CPU 2: hi: 0, > > btch: 1 usd: 0 > > Nov 4 10:21:17 srv-linux kernel: [ 2687.132945] CPU 3: hi: 0, > > btch: 1 usd: 0 > > Nov 4 10:21:17 srv-linux kernel: [ 2687.132948] CPU 4: hi: 0, > > btch: 1 usd: 0 > > Nov 4 10:21:17 srv-linux kernel: [ 2687.132951] CPU 5: hi: 0, > > btch: 1 usd: 0 > > Nov 4 10:21:17 srv-linux kernel: [ 2687.132955] CPU 6: hi: 0, > > btch: 1 usd: 0 > > Nov 4 10:21:17 srv-linux kernel: [ 2687.132958] CPU 7: hi: 0, > > btch: 1 usd: 0 > > Nov 4 10:21:17 srv-linux kernel: [ 2687.132961] Node 0 DMA32 per-cpu: > > Nov 4 10:21:17 srv-linux kernel: [ 2687.132966] CPU 0: hi: 186, > > btch: 31 usd: 32 > > Nov 4 10:21:17 srv-linux kernel: [ 2687.132969] CPU 1: hi: 186, > > btch: 31 usd: 90 > > Nov 4 10:21:17 srv-linux kernel: [ 2687.132973] CPU 2: hi: 186, > > btch: 31 usd: 140 > > Nov 4 10:21:17 srv-linux kernel: [ 2687.132976] CPU 3: hi: 186, > > btch: 31 usd: 166 > > Nov 4 10:21:17 srv-linux kernel: [ 2687.132979] CPU 4: hi: 186, > > btch: 31 usd: 14 > > Nov 4 10:21:17 srv-linux kernel: [ 2687.132983] CPU 5: hi: 186, > > btch: 31 usd: 119 > > Nov 4 10:21:17 srv-linux kernel: [ 2687.132986] CPU 6: hi: 186, > > btch: 31 usd: 45 > > Nov 4 10:21:17 srv-linux kernel: [ 2687.132989] CPU 7: hi: 186, > > btch: 31 usd: 191 > > Nov 4 10:21:17 srv-linux kernel: [ 2687.132992] Node 0 Normal per-cpu: > > Nov 4 10:21:17 srv-linux kernel: [ 2687.132997] CPU 0: hi: 186, > > btch: 31 usd: 16 > > Nov 4 10:21:17 srv-linux kernel: [
2687.133000] CPU 1: hi: 186, > > btch: 31 usd: 4 > > Nov 4 10:21:17 srv-linux kernel: [ 2687.133003] CPU 2: hi: 186, > > btch: 31 usd: 44 > > Nov 4 10:21:17 srv-linux kernel: [ 2687.133006] CPU 3: hi: 186, > > btch: 31 usd: 164 > > Nov 4 10:21:17 srv-linux kernel: [ 2687.133010] CPU 4: hi: 186, > > btch: 31 usd: 98 > > Nov 4 10:21:17 srv-linux kernel: [ 2687.133013] CPU 5: hi: 186, > > btch: 31 usd: 19 > > Nov 4 10:21:17 srv-linux kernel: [ 2687.133017] CPU 6: hi: 186, > > btch: 31 usd: 76 > > Nov 4 10:21:17 srv-linux kernel: [ 2687.133020] CPU 7: hi: 186, > > btch: 31 usd: 192 > > Nov 4 10:21:17 srv-linux kernel: [ 2687.133028] active_anon:90321 > > inactive_anon:23282 isolated_anon:0 > > Nov 4 10:21:17 srv-linux kernel: [ 2687.133029] active_file:56108 > > inactive_file:1701629 isolated_file:0 > > Nov 4 10:21:17 srv-linux kernel: [ 2687.133030] unevictable:5709 > > dirty:677685 writeback:2 unstable:0 > > Nov 4 10:21:17 srv-linux kernel: [ 2687.133032] free:9755 > > slab_reclaimable:66787 slab_unreclaimable:50212 > > Nov 4 10:21:17 srv-linux kernel: [ 2687.133033] mapped:13499 shmem:67 > > pagetables:6893 bounce:0 > > Nov 4 10:21:17 srv-linux kernel: [ 2687.133037] Node 0 DMA free:15692kB > > min:20kB low:24kB high:28kB active_anon:0kB inactive_anon:0kB > > active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB > > isolated(file):0kB present:15320kB mlocked:0kB dirty:0kB writeback:0kB > > mapped:0kB shmem:0kB slab_reclaimable:0kB slab_unreclaimable:0kB > > kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB > > writeback_tmp:0kB pages_scanned:0 all_unreclaimable? 
no > > Nov 4 10:21:17 srv-linux kernel: [ 2687.133051] lowmem_reserve[]: 0 > > 3251 8049 8049 > > Nov 4 10:21:17 srv-linux kernel: [ 2687.133061] Node 0 DMA32 > > free:20800kB min:4632kB low:5788kB high:6948kB active_anon:69388kB > > inactive_anon:16256kB active_file:33564kB inactive_file:2898248kB > > unevictable:0kB isolated(anon):0kB isolated(file):0kB present:3329568kB > > mlocked:0kB dirty:1095648kB writeback:4kB mapped:1264kB shmem:16kB > > slab_reclaimable:107716kB slab_unreclaimable:11264kB kernel_stack:776kB > > pagetables:5120kB unstable:0kB bounce:0kB writeback_tmp:0kB > > pages_scanned:0 all_unreclaimable? no > > Nov 4 10:21:17 srv-linux kernel: [ 2687.133076] lowmem_reserve[]: 0 0 > > 4797 4797 > > Nov 4 10:21:17 srv-linux kernel: [ 2687.133086] Node 0 Normal > > free:2528kB min:6836kB low:8544kB high:10252kB active_anon:291896kB > > inactive_anon:76872kB active_file:190868kB inactive_file:3908268kB > > unevictable:22836kB isolated(anon):0kB isolated(file):0kB > > present:4912640kB mlocked:22836kB dirty:1615092kB writeback:4kB > > mapped:52732kB shmem:252kB slab_reclaimable:159432kB > > slab_unreclaimable:189584kB kernel_stack:4312kB pagetables:22452kB > > unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 > > all_unreclaimable? 
no > > Nov 4 10:21:17 srv-linux kernel: [ 2687.133101] lowmem_reserve[]: 0 0 0 > 0 > > Nov 4 10:21:17 srv-linux kernel: [ 2687.133110] Node 0 DMA: 3*4kB 4*8kB > > 2*16kB 2*32kB 1*64kB 1*128kB 0*256kB 0*512kB 1*1024kB 1*2048kB 3*4096kB > > = 15692kB > > Nov 4 10:21:17 srv-linux kernel: [ 2687.133135] Node 0 DMA32: 1087*4kB > > 1592*8kB 39*16kB 17*32kB 2*64kB 0*128kB 0*256kB 0*512kB 0*1024kB > > 1*2048kB 0*4096kB = 20428kB > > Nov 4 10:21:17 srv-linux kernel: [ 2687.133160] Node 0 Normal: 110*4kB > > 7*8kB 4*16kB 2*32kB 2*64kB 2*128kB 0*256kB 0*512kB 1*1024kB 0*2048kB > > 0*4096kB = 2032kB > > Nov 4 10:21:17 srv-linux kernel: [ 2687.133185] 1759923 total pagecache > > pages > > Nov 4 10:21:17 srv-linux kernel: [ 2687.133188] 0 pages in swap cache > > Nov 4 10:21:17 srv-linux kernel: [ 2687.133191] Swap cache stats: add > > 0, delete 0, find 0/0 > > Nov 4 10:21:17 srv-linux kernel: [ 2687.133194] Free swap = 2104432kB > > Nov 4 10:21:17 srv-linux kernel: [ 2687.133197] Total swap = 2104432kB > > Nov 4 10:21:17 srv-linux kernel: [ 2687.136597] 2097152 pages RAM > > Nov 4 10:21:17 srv-linux kernel: [ 2687.136597] 49948 pages reserved > > Nov 4 10:21:17 srv-linux kernel: [ 2687.136597] 1656353 pages shared > > Nov 4 10:21:17 srv-linux kernel: [ 2687.136597] 449267 pages non-shared > > Nov 4 11:07:27 srv-linux kernel: [ 1293.436013] The following is only > > an harmless informational message. > > Nov 4 11:07:27 srv-linux kernel: [ 1293.436018] Unless you get a > > _continuous_flood_ of these messages it means > > Nov 4 11:07:27 srv-linux kernel: [ 1293.436020] everything is working > > fine. Allocations from irqs cannot be > > Nov 4 11:07:27 srv-linux kernel: [ 1293.436022] perfectly reliable and > > the kernel is designed to handle that. > > Nov 4 11:07:27 srv-linux kernel: [ 1293.436026] swapper: page > > allocation failure. 
order:0, mode:0x20, alloc_flags:0x30 > pflags:0x10200042 > > Nov 4 11:07:27 srv-linux kernel: [ 1293.436031] Pid: 0, comm: swapper > > Tainted: G X 2.6.32.12-0.7-default #1 > > Nov 4 11:07:27 srv-linux kernel: [ 1293.436034] Call Trace: > > Nov 4 11:07:27 srv-linux kernel: [ 1293.436052] [] > > dump_trace+0x6c/0x2d0 > > Nov 4 11:07:27 srv-linux kernel: [ 1293.436061] [] > > dump_stack+0x69/0x71 > > Nov 4 11:07:27 srv-linux kernel: [ 1293.436069] [] > > __alloc_pages_slowpath+0x3ed/0x550 > > Nov 4 11:07:27 srv-linux kernel: [ 1293.436075] [] > > __alloc_pages_nodemask+0x13a/0x140 > > Nov 4 11:07:27 srv-linux kernel: [ 1293.436083] [] > > kmem_getpages+0x56/0x170 > > Nov 4 11:07:27 srv-linux kernel: [ 1293.436088] [] > > fallback_alloc+0x166/0x230 > > Nov 4 11:07:27 srv-linux kernel: [ 1293.436094] [] > > kmem_cache_alloc+0x192/0x1b0 > > Nov 4 11:07:27 srv-linux kernel: [ 1293.436101] [] > > skb_clone+0x3a/0x80 > > Nov 4 11:07:27 srv-linux kernel: [ 1293.436108] [] > > dev_queue_xmit_nit+0x82/0x170 > > Nov 4 11:07:27 srv-linux kernel: [ 1293.436113] [] > > dev_hard_start_xmit+0x4a/0x210 > > Nov 4 11:07:27 srv-linux kernel: [ 1293.436119] [] > > sch_direct_xmit+0x16e/0x1e0 > > Nov 4 11:07:27 srv-linux kernel: [ 1293.436125] [] > > dev_queue_xmit+0x366/0x4d0 > > Nov 4 11:07:27 srv-linux kernel: [ 1293.436131] [] > > ip_queue_xmit+0x210/0x420 > > Nov 4 11:07:27 srv-linux kernel: [ 1293.436138] [] > > tcp_transmit_skb+0x4cb/0x760 > > Nov 4 11:07:27 srv-linux kernel: [ 1293.436144] [] > > tcp_delack_timer+0x14f/0x2a0 > > Nov 4 11:07:27 srv-linux kernel: [ 1293.436150] [] > > run_timer_softirq+0x174/0x240 > > Nov 4 11:07:27 srv-linux kernel: [ 1293.436156] [] > > __do_softirq+0xbf/0x170 > > Nov 4 11:07:27 srv-linux kernel: [ 1293.436162] [] > > call_softirq+0x1c/0x30 > > Nov 4 11:07:27 srv-linux kernel: [ 1293.436167] [] > > do_softirq+0x4d/0x80 > > Nov 4 11:07:27 srv-linux kernel: [ 1293.436171] [] > > irq_exit+0x85/0x90 > > Nov 4 11:07:27 srv-linux kernel: [ 1293.436177] 
[] > > smp_apic_timer_interrupt+0x6c/0xa0 > > Nov 4 11:07:27 srv-linux kernel: [ 1293.436184] [] > > apic_timer_interrupt+0x13/0x20 > > Nov 4 11:07:27 srv-linux kernel: [ 1293.436191] [] > > mwait_idle+0x62/0x70 > > Nov 4 11:07:27 srv-linux kernel: [ 1293.436196] [] > > cpu_idle+0x5a/0xb0 > > Nov 4 11:07:27 srv-linux kernel: [ 1293.436200] Mem-Info: > > Nov 4 11:07:27 srv-linux kernel: [ 1293.436202] Node 0 DMA per-cpu: > > Nov 4 11:07:27 srv-linux kernel: [ 1293.436205] CPU 0: hi: 0, > > btch: 1 usd: 0 > > Nov 4 11:07:27 srv-linux kernel: [ 1293.436208] CPU 1: hi: 0, > > btch: 1 usd: 0 > > Nov 4 11:07:27 srv-linux kernel: [ 1293.436210] CPU 2: hi: 0, > > btch: 1 usd: 0 > > Nov 4 11:07:27 srv-linux kernel: [ 1293.436213] CPU 3: hi: 0, > > btch: 1 usd: 0 > > Nov 4 11:07:27 srv-linux kernel: [ 1293.436215] CPU 4: hi: 0, > > btch: 1 usd: 0 > > Nov 4 11:07:27 srv-linux kernel: [ 1293.436217] CPU 5: hi: 0, > > btch: 1 usd: 0 > > Nov 4 11:07:27 srv-linux kernel: [ 1293.436220] CPU 6: hi: 0, > > btch: 1 usd: 0 > > Nov 4 11:07:27 srv-linux kernel: [ 1293.436222] CPU 7: hi: 0, > > btch: 1 usd: 0 > > Nov 4 11:07:27 srv-linux kernel: [ 1293.436224] Node 0 DMA32 per-cpu: > > Nov 4 11:07:27 srv-linux kernel: [ 1293.436227] CPU 0: hi: 186, > > btch: 31 usd: 30 > > Nov 4 11:07:27 srv-linux kernel: [ 1293.436229] CPU 1: hi: 186, > > btch: 31 usd: 186 > > Nov 4 11:07:27 srv-linux kernel: [ 1293.436232] CPU 2: hi: 186, > > btch: 31 usd: 147 > > Nov 4 11:07:27 srv-linux kernel: [ 1293.436234] CPU 3: hi: 186, > > btch: 31 usd: 174 > > Nov 4 11:07:27 srv-linux kernel: [ 1293.436236] CPU 4: hi: 186, > > btch: 31 usd: 92 > > Nov 4 11:07:27 srv-linux kernel: [ 1293.436239] CPU 5: hi: 186, > > btch: 31 usd: 49 > > Nov 4 11:07:27 srv-linux kernel: [ 1293.436241] CPU 6: hi: 186, > > btch: 31 usd: 141 > > Nov 4 11:07:27 srv-linux kernel: [ 1293.436244] CPU 7: hi: 186, > > btch: 31 usd: 142 > > Nov 4 11:07:27 srv-linux kernel: [ 1293.436245] Node 0 Normal per-cpu: > > Nov 4 11:07:27 srv-linux 
kernel: [ 1293.436248] CPU 0: hi: 186, > > btch: 31 usd: 46 > > Nov 4 11:07:27 srv-linux kernel: [ 1293.436250] CPU 1: hi: 186, > > btch: 31 usd: 158 > > Nov 4 11:07:27 srv-linux kernel: [ 1293.436253] CPU 2: hi: 186, > > btch: 31 usd: 151 > > Nov 4 11:07:27 srv-linux kernel: [ 1293.436255] CPU 3: hi: 186, > > btch: 31 usd: 39 > > Nov 4 11:07:27 srv-linux kernel: [ 1293.436257] CPU 4: hi: 186, > > btch: 31 usd: 114 > > Nov 4 11:07:27 srv-linux kernel: [ 1293.436260] CPU 5: hi: 186, > > btch: 31 usd: 59 > > Nov 4 11:07:27 srv-linux kernel: [ 1293.436262] CPU 6: hi: 186, > > btch: 31 usd: 124 > > Nov 4 11:07:27 srv-linux kernel: [ 1293.436265] CPU 7: hi: 186, > > btch: 31 usd: 173 > > Nov 4 11:07:27 srv-linux kernel: [ 1293.436271] active_anon:121650 > > inactive_anon:21539 isolated_anon:0 > > Nov 4 11:07:27 srv-linux kernel: [ 1293.436272] active_file:65104 > > inactive_file:1679351 isolated_file:0 > > Nov 4 11:07:27 srv-linux kernel: [ 1293.436273] unevictable:5709 > > dirty:474043 writeback:6102 unstable:0 > > Nov 4 11:07:28 srv-linux kernel: [ 1293.436275] free:9712 > > slab_reclaimable:51092 slab_unreclaimable:49524 > > Nov 4 11:07:28 srv-linux kernel: [ 1293.436276] mapped:13595 shmem:109 > > pagetables:6308 bounce:0 > > Nov 4 11:07:28 srv-linux kernel: [ 1293.436279] Node 0 DMA free:15692kB > > min:20kB low:24kB high:28kB active_anon:0kB inactive_anon:0kB > > active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB > > isolated(file):0kB present:15320kB mlocked:0kB dirty:0kB writeback:0kB > > mapped:0kB shmem:0kB slab_reclaimable:0kB slab_unreclaimable:0kB > > kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB > > writeback_tmp:0kB pages_scanned:0 all_unreclaimable? 
yes > > Nov 4 11:07:28 srv-linux kernel: [ 1293.436290] lowmem_reserve[]: 0 > > 3251 8049 8049 > > Nov 4 11:07:28 srv-linux kernel: [ 1293.436295] Node 0 DMA32 > > free:20696kB min:4632kB low:5788kB high:6948kB active_anon:79808kB > > inactive_anon:17188kB active_file:55724kB inactive_file:2866240kB > > unevictable:0kB isolated(anon):0kB isolated(file):0kB present:3329568kB > > mlocked:0kB dirty:1287108kB writeback:8764kB mapped:168kB shmem:76kB > > slab_reclaimable:108972kB slab_unreclaimable:12288kB kernel_stack:824kB > > pagetables:6980kB unstable:0kB bounce:0kB writeback_tmp:0kB > > pages_scanned:0 all_unreclaimable? no > > Nov 4 11:07:28 srv-linux kernel: [ 1293.436307] lowmem_reserve[]: 0 0 > > 4797 4797 > > Nov 4 11:07:28 srv-linux kernel: [ 1293.436311] Node 0 Normal > > free:2460kB min:6836kB low:8544kB high:10252kB active_anon:406792kB > > inactive_anon:68968kB active_file:204692kB inactive_file:3851164kB > > unevictable:22836kB isolated(anon):0kB isolated(file):0kB > > present:4912640kB mlocked:22836kB dirty:609064kB writeback:15644kB > > mapped:54212kB shmem:360kB slab_reclaimable:95396kB > > slab_unreclaimable:185808kB kernel_stack:3912kB pagetables:18252kB > > unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 > > all_unreclaimable? 
no > > Nov 4 11:07:28 srv-linux kernel: [ 1293.436323] lowmem_reserve[]: 0 0 0 > 0 > > Nov 4 11:07:28 srv-linux kernel: [ 1293.436327] Node 0 DMA: 3*4kB 4*8kB > > 2*16kB 2*32kB 1*64kB 1*128kB 0*256kB 0*512kB 1*1024kB 1*2048kB 3*4096kB > > = 15692kB > > Nov 4 11:07:28 srv-linux kernel: [ 1293.436339] Node 0 DMA32: 53*4kB > > 586*8kB 309*16kB 50*32kB 9*64kB 5*128kB 2*256kB 1*512kB 0*1024kB > > 1*2048kB 1*4096kB = 19828kB > > Nov 4 11:07:28 srv-linux kernel: [ 1293.436350] Node 0 Normal: 8*4kB > > 12*8kB 1*16kB 3*32kB 1*64kB 0*128kB 0*256kB 1*512kB 1*1024kB 0*2048kB > > 0*4096kB = 1840kB > > Nov 4 11:07:28 srv-linux kernel: [ 1293.436361] 1746592 total pagecache > > pages > > Nov 4 11:07:28 srv-linux kernel: [ 1293.436363] 0 pages in swap cache > > Nov 4 11:07:28 srv-linux kernel: [ 1293.436365] Swap cache stats: add > > 0, delete 0, find 0/0 > > Nov 4 11:07:28 srv-linux kernel: [ 1293.436367] Free swap = 2104432kB > > Nov 4 11:07:28 srv-linux kernel: [ 1293.436369] Total swap = 2104432kB > > Nov 4 11:07:28 srv-linux kernel: [ 1293.445967] 2097152 pages RAM > > Nov 4 11:07:28 srv-linux kernel: [ 1293.445967] 49948 pages reserved > > Nov 4 11:07:28 srv-linux kernel: [ 1293.445967] 1080140 pages shared > > Nov 4 11:07:28 srv-linux kernel: [ 1293.445967] 1014865 pages non-shared > > Nov 4 11:07:28 srv-linux kernel: [ 1293.480826] The following is only > > an harmless informational message. > > Nov 4 11:07:28 srv-linux kernel: [ 1293.480832] Unless you get a > > _continuous_flood_ of these messages it means > > Nov 4 11:07:28 srv-linux kernel: [ 1293.480838] everything is working > > fine. Allocations from irqs cannot be > > Nov 4 11:07:28 srv-linux kernel: [ 1293.480843] perfectly reliable and > > the kernel is designed to handle that. > > Nov 4 11:07:28 srv-linux kernel: [ 1293.480850] swapper: page > > allocation failure. 
order:0, mode:0x20, alloc_flags:0x30 > pflags:0x10200042 > > Nov 4 11:07:28 srv-linux kernel: [ 1293.480856] Pid: 0, comm: swapper > > Tainted: G X 2.6.32.12-0.7-default #1 > > Nov 4 11:07:28 srv-linux kernel: [ 1293.480862] Call Trace: > > Nov 4 11:07:28 srv-linux kernel: [ 1293.480883] [] > > dump_trace+0x6c/0x2d0 > > Nov 4 11:07:28 srv-linux kernel: [ 1293.480897] [] > > dump_stack+0x69/0x71 > > Nov 4 11:07:28 srv-linux kernel: [ 1293.480910] [] > > __alloc_pages_slowpath+0x3ed/0x550 > > Nov 4 11:07:28 srv-linux kernel: [ 1293.480921] [] > > __alloc_pages_nodemask+0x13a/0x140 > > Nov 4 11:07:28 srv-linux kernel: [ 1293.480933] [] > > kmem_getpages+0x56/0x170 > > Nov 4 11:07:28 srv-linux kernel: [ 1293.480944] [] > > fallback_alloc+0x166/0x230 > > Nov 4 11:07:28 srv-linux kernel: [ 1293.480955] [] > > kmem_cache_alloc+0x192/0x1b0 > > Nov 4 11:07:28 srv-linux kernel: [ 1293.480967] [] > > skb_clone+0x3a/0x80 > > Nov 4 11:07:28 srv-linux kernel: [ 1293.480979] [] > > dev_queue_xmit_nit+0x82/0x170 > > Nov 4 11:07:28 srv-linux kernel: [ 1293.480990] [] > > dev_hard_start_xmit+0x4a/0x210 > > Nov 4 11:07:28 srv-linux kernel: [ 1293.481000] [] > > sch_direct_xmit+0x16e/0x1e0 > > Nov 4 11:07:28 srv-linux kernel: [ 1293.481010] [] > > __qdisc_run+0xaf/0x100 > > Nov 4 11:07:28 srv-linux kernel: [ 1293.481021] [] > > dev_queue_xmit+0x4cb/0x4d0 > > Nov 4 11:07:28 srv-linux kernel: [ 1293.481032] [] > > ip_queue_xmit+0x210/0x420 > > Nov 4 11:07:28 srv-linux kernel: [ 1293.481044] [] > > tcp_transmit_skb+0x4cb/0x760 > > Nov 4 11:07:28 srv-linux kernel: [ 1293.481054] [] > > tcp_delack_timer+0x14f/0x2a0 > > Nov 4 11:07:28 srv-linux kernel: [ 1293.481066] [] > > run_timer_softirq+0x174/0x240 > > Nov 4 11:07:28 srv-linux kernel: [ 1293.481077] [] > > __do_softirq+0xbf/0x170 > > Nov 4 11:07:28 srv-linux kernel: [ 1293.481088] [] > > call_softirq+0x1c/0x30 > > Nov 4 11:07:28 srv-linux kernel: [ 1293.481098] [] > > do_softirq+0x4d/0x80 > > Nov 4 11:07:28 srv-linux kernel: [ 
1293.481108] [] > > irq_exit+0x85/0x90 > > Nov 4 11:07:28 srv-linux kernel: [ 1293.481118] [] > > smp_apic_timer_interrupt+0x6c/0xa0 > > Nov 4 11:07:28 srv-linux kernel: [ 1293.481131] [] > > apic_timer_interrupt+0x13/0x20 > > Nov 4 11:07:28 srv-linux kernel: [ 1293.481142] [] > > mwait_idle+0x62/0x70 > > Nov 4 11:07:28 srv-linux kernel: [ 1293.481152] [] > > cpu_idle+0x5a/0xb0 > > Nov 4 11:07:28 srv-linux kernel: [ 1293.481159] Mem-Info: > > Nov 4 11:07:28 srv-linux kernel: [ 1293.481163] Node 0 DMA per-cpu: > > Nov 4 11:07:28 srv-linux kernel: [ 1293.481173] CPU 0: hi: 0, > > btch: 1 usd: 0 > > Nov 4 11:07:28 srv-linux kernel: [ 1293.481178] CPU 1: hi: 0, > > btch: 1 usd: 0 > > Nov 4 11:07:28 srv-linux kernel: [ 1293.481184] CPU 2: hi: 0, > > btch: 1 usd: 0 > > Nov 4 11:07:28 srv-linux kernel: [ 1293.481189] CPU 3: hi: 0, > > btch: 1 usd: 0 > > Nov 4 11:07:28 srv-linux kernel: [ 1293.481195] CPU 4: hi: 0, > > btch: 1 usd: 0 > > Nov 4 11:07:28 srv-linux kernel: [ 1293.481200] CPU 5: hi: 0, > > btch: 1 usd: 0 > > Nov 4 11:07:28 srv-linux kernel: [ 1293.481206] CPU 6: hi: 0, > > btch: 1 usd: 0 > > Nov 4 11:07:28 srv-linux kernel: [ 1293.481211] CPU 7: hi: 0, > > btch: 1 usd: 0 > > Nov 4 11:07:28 srv-linux kernel: [ 1293.481216] Node 0 DMA32 per-cpu: > > Nov 4 11:07:28 srv-linux kernel: [ 1293.481226] CPU 0: hi: 186, > > btch: 31 usd: 30 > > Nov 4 11:07:28 srv-linux kernel: [ 1293.481231] CPU 1: hi: 186, > > btch: 31 usd: 186 > > Nov 4 11:07:28 srv-linux kernel: [ 1293.481237] CPU 2: hi: 186, > > btch: 31 usd: 147 > > Nov 4 11:07:28 srv-linux kernel: [ 1293.481242] CPU 3: hi: 186, > > btch: 31 usd: 174 > > Nov 4 11:07:28 srv-linux kernel: [ 1293.481248] CPU 4: hi: 186, > > btch: 31 usd: 92 > > Nov 4 11:07:28 srv-linux kernel: [ 1293.481253] CPU 5: hi: 186, > > btch: 31 usd: 49 > > Nov 4 11:07:28 srv-linux kernel: [ 1293.481259] CPU 6: hi: 186, > > btch: 31 usd: 141 > > Nov 4 11:07:28 srv-linux kernel: [ 1293.481264] CPU 7: hi: 186, > > btch: 31 usd: 142 > > Nov 4 
11:07:28 srv-linux kernel: [ 1293.481269] Node 0 Normal per-cpu: > > Nov 4 11:07:28 srv-linux kernel: [ 1293.481278] CPU 0: hi: 186, > > btch: 31 usd: 46 > > Nov 4 11:07:28 srv-linux kernel: [ 1293.481284] CPU 1: hi: 186, > > btch: 31 usd: 158 > > Nov 4 11:07:28 srv-linux kernel: [ 1293.481289] CPU 2: hi: 186, > > btch: 31 usd: 151 > > Nov 4 11:07:28 srv-linux kernel: [ 1293.481295] CPU 3: hi: 186, > > btch: 31 usd: 39 > > Nov 4 11:07:28 srv-linux kernel: [ 1293.481300] CPU 4: hi: 186, > > btch: 31 usd: 114 > > Nov 4 11:07:28 srv-linux kernel: [ 1293.481306] CPU 5: hi: 186, > > btch: 31 usd: 59 > > Nov 4 11:07:28 srv-linux kernel: [ 1293.481311] CPU 6: hi: 186, > > btch: 31 usd: 124 > > Nov 4 11:07:28 srv-linux kernel: [ 1293.481316] CPU 7: hi: 186, > > btch: 31 usd: 173 > > Nov 4 11:07:29 srv-linux kernel: [ 1293.481325] active_anon:121650 > > inactive_anon:21539 isolated_anon:0 > > Nov 4 11:07:29 srv-linux kernel: [ 1293.481327] active_file:65104 > > inactive_file:1679351 isolated_file:0 > > Nov 4 11:07:29 srv-linux kernel: [ 1293.481328] unevictable:5709 > > dirty:474043 writeback:6102 unstable:0 > > Nov 4 11:07:29 srv-linux kernel: [ 1293.481329] free:9712 > > slab_reclaimable:51092 slab_unreclaimable:49524 > > Nov 4 11:07:29 srv-linux kernel: [ 1293.481330] mapped:13595 shmem:109 > > pagetables:6308 bounce:0 > > Nov 4 11:07:29 srv-linux kernel: [ 1293.481336] Node 0 DMA free:15692kB > > min:20kB low:24kB high:28kB active_anon:0kB inactive_anon:0kB > > active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB > > isolated(file):0kB present:15320kB mlocked:0kB dirty:0kB writeback:0kB > > mapped:0kB shmem:0kB slab_reclaimable:0kB slab_unreclaimable:0kB > > kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB > > writeback_tmp:0kB pages_scanned:0 all_unreclaimable?
yes > > Nov 4 11:07:29 srv-linux kernel: [ 1293.481354] lowmem_reserve[]: 0 > > 3251 8049 8049 > > Nov 4 11:07:29 srv-linux kernel: [ 1293.481377] Node 0 DMA32 > > free:20696kB min:4632kB low:5788kB high:6948kB active_anon:79808kB > > inactive_anon:17188kB active_file:55724kB inactive_file:2866240kB > > unevictable:0kB isolated(anon):0kB isolated(file):0kB present:3329568kB > > mlocked:0kB dirty:1287108kB writeback:8764kB mapped:168kB shmem:76kB > > slab_reclaimable:108972kB slab_unreclaimable:12288kB kernel_stack:824kB > > pagetables:6980kB unstable:0kB bounce:0kB writeback_tmp:0kB > > pages_scanned:0 all_unreclaimable? no > > Nov 4 11:07:29 srv-linux kernel: [ 1293.481396] lowmem_reserve[]: 0 0 > > 4797 4797 > > Nov 4 11:07:29 srv-linux kernel: [ 1293.481419] Node 0 Normal > > free:2460kB min:6836kB low:8544kB high:10252kB active_anon:406792kB > > inactive_anon:68968kB active_file:204692kB inactive_file:3851164kB > > unevictable:22836kB isolated(anon):0kB isolated(file):0kB > > present:4912640kB mlocked:22836kB dirty:609064kB writeback:15644kB > > mapped:54212kB shmem:360kB slab_reclaimable:95396kB > > slab_unreclaimable:185808kB kernel_stack:3912kB pagetables:18252kB > > unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 > > all_unreclaimable? 
no > > Nov 4 11:07:29 srv-linux kernel: [ 1293.481438] lowmem_reserve[]: 0 0 0 > 0 > > Nov 4 11:07:29 srv-linux kernel: [ 1293.481462] Node 0 DMA: 3*4kB 4*8kB > > 2*16kB 2*32kB 1*64kB 1*128kB 0*256kB 0*512kB 1*1024kB 1*2048kB 3*4096kB > > = 15692kB > > Nov 4 11:07:29 srv-linux kernel: [ 1293.481518] Node 0 DMA32: 53*4kB > > 586*8kB 309*16kB 50*32kB 9*64kB 5*128kB 2*256kB 1*512kB 0*1024kB > > 1*2048kB 1*4096kB = 19828kB > > Nov 4 11:07:29 srv-linux kernel: [ 1293.481574] Node 0 Normal: 8*4kB > > 12*8kB 1*16kB 3*32kB 1*64kB 0*128kB 0*256kB 1*512kB 1*1024kB 0*2048kB > > 0*4096kB = 1840kB > > Nov 4 11:07:29 srv-linux kernel: [ 1293.481630] 1746592 total pagecache > > pages > > Nov 4 11:07:29 srv-linux kernel: [ 1293.481635] 0 pages in swap cache > > Nov 4 11:07:29 srv-linux kernel: [ 1293.481641] Swap cache stats: add > > 0, delete 0, find 0/0 > > Nov 4 11:07:29 srv-linux kernel: [ 1293.481646] Free swap = 2104432kB > > Nov 4 11:07:29 srv-linux kernel: [ 1293.481651] Total swap = 2104432kB > > Nov 4 11:07:29 srv-linux kernel: [ 1293.484802] 2097152 pages RAM > > Nov 4 11:07:29 srv-linux kernel: [ 1293.484802] 49948 pages reserved > > Nov 4 11:07:29 srv-linux kernel: [ 1293.484802] 1079742 pages shared > > Nov 4 11:07:29 srv-linux kernel: [ 1293.484802] 1013515 pages non-shared > > > > > > __ > > masoch-l list > > https://eng.registro.br/mailman/listinfo/masoch-l > > > > > > -- > Armando Roque Ferreira Pinto > Analista de sistemas > __ > masoch-l list > https://eng.registro.br/mailman/listinfo/masoch-l > -- :=)>Paulo Henrique (JSRD)<(=: Alone, locked, a survivor, unfortunately not know who I am From paulorsa at gmail.com Tue Nov 23 11:02:37 2010 From: paulorsa at gmail.com (=?ISO-8859-1?Q?Paulo_Rog=E9rio_Silva_Ara=FAjo?=) Date: Tue, 23 Nov 2010 11:02:37 -0200 Subject: [MASOCH-L] =?iso-8859-1?q?DNAT_para_rede_local_n=E3o_funciona_=28?= =?iso-8859-1?q?Debian=29?= In-Reply-To: References: <001801cb86b9$bd817200$38845600$@rafaelsantos.com> Message-ID: Uma boa pratica que 
ajudaria a resolver seria implantar o conceito de DMZ separando este host (e outros "servidores" se houverem) numa outra rede que nao seja a 192.168.1.0/24 e o melhor e que seja uma separacao fisica. Assim acaba-se estes tipos de problemas, e outros que por ventura ainda surgirao. Em nov 18, 2010 9:38 AM, escreveu: > como o pacote originado dentro da rede interna n?o passa pela interface > externa WAN1, as regras... Danton, A regra de DNAT n?o est? especificando a interface justamente por isso, a condi??o dela ? a seguinte: "If protocol is TCP and source is not 192.168.1.45 and destination is 200.200.200.200 and destination ports are 80,443,5252,7531,7532" Na regra de SNAT especifiquei a interface de sa?da como sendo a eth0 (LAN): "If source is 192.168.1.0/24 and destination is 192.168.1.45 and output interface is eth0" Claro, esta ? a situa??o atual... fiz v?rios outros testes sem sucesso. Estou no 21? andar, espero conseguir resolver este problema antes que o servidor voe pela janela! Att. Rafael Santos __ masoch-l list https://eng.registro.br/mailman/listinfo/masoch-l From listas at rafaelsantos.com Tue Nov 23 11:32:42 2010 From: listas at rafaelsantos.com (Rafael Santos) Date: Tue, 23 Nov 2010 11:32:42 -0200 Subject: [MASOCH-L] =?iso-8859-1?q?RES=3A_=09DNAT_para_rede_local_n=E3o_fu?= =?iso-8859-1?q?nciona_=28Debian=29?= In-Reply-To: References: <001801cb86b9$bd817200$38845600$@rafaelsantos.com> Message-ID: <004b01cb8b12$e8090ad0$b81b2070$@rafaelsantos.com> (Desculpem pelo top posting) Pois ?, o problema ? que isso ? um legado que assumi e tenho que manter desta maneira por um tempo, enquanto planejamos adequadamente a reestrutura??o da infra. Att. 
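For reference, the two rule conditions quoted in this thread map onto iptables roughly as follows. This is only a sketch: the exact chains and the SNAT source address 192.168.1.1 are assumptions for illustration, not taken from the original post.

```shell
# DNAT (hairpin NAT): internal clients, except the server itself,
# hitting the public IP on the listed ports are redirected to the
# internal server 192.168.1.45.
iptables -t nat -A PREROUTING -p tcp ! -s 192.168.1.45 -d 200.200.200.200 \
    -m multiport --dports 80,443,5252,7531,7532 \
    -j DNAT --to-destination 192.168.1.45

# SNAT on the LAN interface (eth0): rewrite the client's source so the
# server's replies return through the firewall rather than going
# straight back to the client (192.168.1.1 is an assumed gateway IP).
iptables -t nat -A POSTROUTING -o eth0 -s 192.168.1.0/24 -d 192.168.1.45 \
    -j SNAT --to-source 192.168.1.1
```

Without the SNAT half, hairpinned connections typically stall, because the server answers the client directly and the client rejects the unexpected source address.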
Rafael Santos

-----Original Message-----
From: masoch-l-bounces at eng.registro.br [mailto:masoch-l-bounces at eng.registro.br] On behalf of Paulo Rogério Silva Araújo
Sent: Tuesday, November 23, 2010 11:03 AM
To: Mail Aid and Succor, On-line Comfort and Help
Subject: Re: [MASOCH-L] DNAT to the local network does not work (Debian)

A good practice that would help solve this would be to implement the concept of a DMZ, separating this host (and any other "servers", if there are any) onto a network other than 192.168.1.0/24, ideally with physical separation. That would put an end to these kinds of problems, and to others that will still come up.

From felix.ricardo at gmail.com Wed Nov 24 14:24:01 2010
From: felix.ricardo at gmail.com (Ricardo Felix)
Date: Wed, 24 Nov 2010 14:24:01 -0200
Subject: [MASOCH-L] VPN ipsec
Message-ID: <201011241424.02403.felix.ricardo@gmail.com>

Good afternoon folks, here is a question that already has me sweating.
Has anyone here set up an IPsec VPN between Openswan and a Juniper SSG520?

I can bring the VPN up, but traffic does not cross from one end to the other...

My config files...
ipsec.conf

conn HQtoDC
        type=tunnel
        left=189.38.x.x
        leftsubnet=172.16.16.0/24
        leftnexthop=200.160.x.x
        right=200.160.x.x
        rightsubnet=172.16.18.0/24
        pfs=yes
        keyingtries=0
        aggrmode=no
        auto=start
        auth=esp
        esp=3des-sha1-96
        ike=3des-sha1-96
        authby=secret

My routing table after bringing ipsec up:

Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
201.6.249.136   189.38.250.1    255.255.255.255 UGH   0      0        0 eth0
200.207.121.196 189.38.250.1    255.255.255.255 UGH   0      0        0 eth0
200.204.154.71  189.38.250.1    255.255.255.255 UGH   0      0        0 eth0
10.8.0.2        0.0.0.0         255.255.255.255 UH    0      0        0 tun0
201.81.231.236  189.38.250.1    255.255.255.255 UGH   0      0        0 eth0
74.125.93.121   189.38.250.1    255.255.255.255 UGH   0      0        0 eth0
201.81.224.243  189.38.250.1    255.255.255.255 UGH   0      0        0 eth0
200.171.213.106 189.38.250.1    255.255.255.255 UGH   0      0        0 eth0
200.158.83.246  189.38.250.1    255.255.255.255 UGH   0      0        0 eth0
200.160.255.48  189.38.250.1    255.255.255.240 UG    0      0        0 eth0
172.16.18.0     0.0.0.0         255.255.255.0   U     0      0        0 eth0
10.0.0.0        0.0.0.0         255.255.255.0   U     0      0        0 eth2
189.38.250.0    0.0.0.0         255.255.255.0   U     0      0        0 eth0
10.8.0.0        10.8.0.2        255.255.255.0   UG    0      0        0 tun0
172.16.16.0     0.0.0.0         255.255.255.0   U     0      0        0 eth1
187.38.0.0      0.0.0.0         255.255.240.0   U     0      0        0 eth3
10.0.0.0        10.0.0.1        255.255.0.0     UG    0      0        0 eth2
74.125.0.0      189.38.250.1    255.255.0.0     UG    0      0        0 eth0
0.0.0.0         187.38.0.1      0.0.0.0         UG    0      0        0 eth3

iptables commands to allow traffic between the networks:

iptables -t nat -A POSTROUTING -o eth0 -s 172.16.16.0/24 -d ! 172.16.18.0/24 -j MASQUERADE
iptables -A FORWARD -p tcp -i eth0 -s 172.16.18.0/24 -o eth1 -d 172.16.16.0/24 -j ACCEPT
iptables -A FORWARD -p tcp -i eth1 -s 172.16.16.0/24 -o eth0 -d 172.16.18.0/24 -j ACCEPT

I am not seeing any packets being dropped on the Linux box.
Any bright ideas...?
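One detail worth noting in the FORWARD rules above: they match only -p tcp, so ICMP (a plain ping across the tunnel) and UDP are not covered and would be dropped if the FORWARD chain's policy defaults to DROP. A sketch of protocol-agnostic variants, using the interfaces and subnets from the post (the default-DROP policy is an assumption):

```shell
# Forward ALL protocols between the tunnel subnets, not just TCP;
# the original rules would silently drop ICMP and UDP under a
# default-DROP FORWARD policy.
iptables -A FORWARD -i eth0 -s 172.16.18.0/24 -o eth1 -d 172.16.16.0/24 -j ACCEPT
iptables -A FORWARD -i eth1 -s 172.16.16.0/24 -o eth0 -d 172.16.18.0/24 -j ACCEPT

# Same MASQUERADE exemption with the modern inverted-match syntax:
# "! -d" before the option, rather than the legacy "-d !".
iptables -t nat -A POSTROUTING -o eth0 -s 172.16.16.0/24 ! -d 172.16.18.0/24 -j MASQUERADE
```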
Regards,
Ricardo Felix do Nascimento

From emiliano.martins at ik1.com.br Wed Nov 24 18:28:14 2010
From: emiliano.martins at ik1.com.br (Emiliano Martins)
Date: Wed, 24 Nov 2010 18:28:14 -0200
Subject: [MASOCH-L] VPN ipsec
In-Reply-To: <201011241424.02403.felix.ricardo@gmail.com>
References: <201011241424.02403.felix.ricardo@gmail.com>
Message-ID:

Ricardo,

I am not familiar with Openswan or the Juniper, but I have had the same problem and solved it by enabling NAT Traversal. If it is already enabled, check that UDP port 4500 is not blocked at either end.

Regards,

On November 24, 2010 14:24, Ricardo Felix wrote:
> Good afternoon folks, here is a question that already has me sweating.
> Has anyone here set up an IPsec VPN between Openswan and a Juniper SSG520?
>
> I can bring the VPN up, but traffic does not cross from one end to the other...
> [...]
> I am not seeing any packets being dropped on the Linux box.
> Any bright ideas...?
> __
> masoch-l list
> https://eng.registro.br/mailman/listinfo/masoch-l

--
Emiliano Martins
iK1 Tecnologia Ltda

From diogo.montagner at gmail.com Thu Nov 25 03:43:02 2010
From: diogo.montagner at gmail.com (Diogo Montagner)
Date: Thu, 25 Nov 2010 13:43:02 +0800
Subject: [MASOCH-L] VPN ipsec
In-Reply-To: <201011241424.02403.felix.ricardo@gmail.com>
References: <201011241424.02403.felix.ricardo@gmail.com>
Message-ID:

It has been a long time since I last worked with iptables, so I may be wrong, but why do you need to NAT traffic toward the .18 network (the -j MASQUERADE option)?

Maybe the problem is there.

[]s

On 11/25/10, Ricardo Felix wrote:
> Good afternoon folks, here is a question that already has me sweating.
> Has anyone here set up an IPsec VPN between Openswan and a Juniper SSG520?
>
> I can bring the VPN up, but traffic does not cross from one end to the other...
> [...]
> I am not seeing any packets being dropped on the Linux box.
> Any bright ideas...?
> __
> masoch-l list
> https://eng.registro.br/mailman/listinfo/masoch-l

--
Sent from my mobile device

./diogo -montagner

From javier.augusto at gmx.net Thu Nov 25 11:01:54 2010
From: javier.augusto at gmx.net (Javier Augusto)
Date: Thu, 25 Nov 2010 11:01:54 -0200
Subject: [MASOCH-L] VPN ipsec
In-Reply-To:
References: <201011241424.02403.felix.ricardo@gmail.com>
Message-ID:

Ricardo,

On the Juniper, turn off the "Monitor" option.

[]s
saboteaur

From felix.ricardo at gmail.com Thu Nov 25 12:05:52 2010
From: felix.ricardo at gmail.com (Ricardo Felix)
Date: Thu, 25 Nov 2010 12:05:52 -0200
Subject: [MASOCH-L] VPN ipsec
In-Reply-To:
References: <201011241424.02403.felix.ricardo@gmail.com>
Message-ID: <201011251205.53203.felix.ricardo@gmail.com>

Diogo, I am creating that rule for two reasons...

1 - My firewall runs on the same box as this VPN concentrator, so packets originating on my internal network and destined for my remote network (172.16.18.0/24) will not be NATed. Note the ! in front of -d in the rule.
2 - The project documentation suggests this configuration.

But I will run some tests without the rule to see the effect.

Many thanks!

On Thursday 25 November 2010 03:43:02 Diogo Montagner wrote:
> It has been a long time since I last worked with iptables, so I may be
> wrong, but why do you need to NAT traffic toward the .18 network (the
> -j MASQUERADE option)?
>
> Maybe the problem is there.
> [...]
> __
> masoch-l list
> https://eng.registro.br/mailman/listinfo/masoch-l

From felix.ricardo at gmail.com Thu Nov 25 12:09:05 2010
From: felix.ricardo at gmail.com (Ricardo Felix)
Date: Thu, 25 Nov 2010 12:09:05 -0200
Subject: [MASOCH-L] VPN ipsec
In-Reply-To:
References: <201011241424.02403.felix.ricardo@gmail.com>
Message-ID: <201011251209.05701.felix.ricardo@gmail.com>

Javier, I had already removed that option; Juniper's own documentation mentions the instability of using it.

I will check a few other configuration options on the Linux side; I am probably having a routing problem... there are two Internet links on the Linux box, and this morning I managed to establish communication from my 172.16.18.0/24 network toward the 172.16.16.0/24 network... in the opposite direction, not yet.

Many thanks.

On Thursday 25 November 2010 11:01:54 Javier Augusto wrote:
> Ricardo,
>
> On the Juniper, turn off the "Monitor" option.
>
> []s
> saboteaur
> __
> masoch-l list
> https://eng.registro.br/mailman/listinfo/masoch-l

From marcelo at mginformatica.com Mon Nov 29 14:39:12 2010
From: marcelo at mginformatica.com (Marcelo da Silva)
Date: Mon, 29 Nov 2010 14:39:12 -0200
Subject: [MASOCH-L] Fwd: [GTER] IPS & IDS
Message-ID:

Hello folks, which IPS or IDS system have you been using, or what would you recommend, for a small ISP?

From webgeo at gmail.com Mon Nov 29 15:16:09 2010
From: webgeo at gmail.com (Giovane Heleno)
Date: Mon, 29 Nov 2010 15:16:09 -0200
Subject: [MASOCH-L] Fwd: [GTER] IPS & IDS
In-Reply-To:
References:
Message-ID:

I have played with it a bit; it seems efficient, but I have not tested its efficacy: www.vyatta.org with IDS enabled, using Snort signatures. Or plain Snort itself.
Giovane Heleno
www.giovane.pro.br

On November 29, 2010 14:39, Marcelo da Silva wrote:
>
> Hello folks, which IPS or IDS system have you been using, or what would
> you recommend, for a small ISP?
>
> __
> masoch-l list
> https://eng.registro.br/mailman/listinfo/masoch-l
>
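For the standalone Snort option mentioned in the thread, a minimal IDS-mode invocation looks roughly like this; the interface name and file paths are assumptions for illustration, not from the posts:

```shell
# Run Snort as a daemon (-D) in NIDS mode, sniffing the border
# interface, loading the standard ruleset, and writing alerts under
# /var/log/snort. Interface and paths are assumed defaults.
snort -D -i eth0 -c /etc/snort/snort.conf -l /var/log/snort
```

For a small ISP this is typically placed on a mirror/SPAN port of the border switch, so detection does not sit in the forwarding path.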