[MASOCH-L] Dell PowerEdge 1900 problems with SLES11 SP1

Paulo Henrique paulo.rddck at bsd.com.br
Sat Nov 20 08:01:10 BRST 2010


Squid cache for 300 users plus HA: check the kernel tunings for System V
shared memory.
Two Gigabit interfaces also demand processor time, but all of that load really
points to poorly sized services.
And those 7 GB of disk cache held in memory are another odd sign. I know Linux
normally uses far more modest resources to reduce disk access; using almost
all of the RAM just to avoid hitting the disk is something I would not
disregard.
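
Just as a reference, a minimal sketch of what I would review for the System V
shared memory limits; the values below are placeholders only and would have to
be sized for the real Squid cache_mem and the other services on the box:

# show the current limits
/sbin/sysctl kernel.shmmax kernel.shmall kernel.shmmni

# example overrides in /etc/sysctl.conf (placeholder values):
#   kernel.shmmax = 2147483648   largest single segment, in bytes
#   kernel.shmall = 2097152      total shared memory, in pages
#   kernel.shmmni = 4096         maximum number of segments

# apply the file without rebooting
/sbin/sysctl -p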

To every Linux user: learn to compile the kernel and tune it for your actual
needs. Running a GENERIC kernel is the main factor behind 99% of these
problems. Evaluate which scheduler to use and in which mode it should operate.
Just because the system has "Server" in its name does not mean it is ready for
your environment.
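
Whichever scheduler turns out to matter in a given case, the I/O elevator at
least is easy to inspect and change on a 2.6.32 kernel. A quick sketch (sda is
only an example device):

# the scheduler shown in brackets is the active one
cat /sys/block/sda/queue/scheduler

# switch it at runtime (does not survive a reboot)
echo deadline > /sys/block/sda/queue/scheduler

# or make it the default for all disks via the kernel command line:
#   elevator=deadline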

After banging my head against the wall at a customer for more than two
months, and practically becoming an expert in Red Hat and Debian, I stopped
considering those distributions. I solved it in the most old-school way
possible: Slackware Linux 10.2, fully compiled locally, along with every other
service it needed (NFS/Samba, httpd, DNS, LDAP, PostgreSQL). That was a little
over three years ago. Today I have little patience for other distributions,
and whenever I can I avoid Linux and go with FreeBSD or another member of the
BSD family; I only use Linux in very specific cases (software with no FreeBSD
support).


Those are my considerations.



On 4 November 2010 15:07, Armando Roque <dropsdef at gmail.com> wrote:

> Rejaine,
>
> José Augusto has already taken care of asking about the updates. Besides
> SP1, how are things? Did you update anything afterwards?
>
> Regards,
>
> On 4 November 2010 11:38, Rejaine Monteiro
> <rejaine at bhz.jamef.com.br> wrote:
>
> >
> > Folks,
> >
> > I am writing to ask for help.
> > I have a serious performance problem on a server running SLES11 (SP1),
> > installed on a PowerEdge 1900 (configuration below).
> >
> > Here is what is happening:
> >
> > Our branch offices used to run several far weaker servers, serving the
> > same number of users and the same services, but on OpenSuSE 10.2.
> > Everything worked perfectly well until then, but following our plan to
> > refresh the hardware fleet, we chose to upgrade both hardware and OS
> > (which was quite outdated) at those sites, and that is when the problems
> > began.
> >
> > Initially we did the replacement at only two smaller sites with fewer
> > users, and we had already noticed a certain increase in CPU load. We
> > updated to SLES11 with SP1 and things seemed to improve a little.
> >
> > At one site in particular, however, with around 300 users, the server's
> > performance is simply dreadful.
> > The CPU load climbs so high that at times I can barely log in to look at
> > the syslog, and I often have to shut down several services or reboot to
> > get things back to normal.
> >
> > We have already made several kernel tuning adjustments and many other
> > tuning changes to the various applications the server runs (especially
> > the most important services such as drbd, heartbeat, ldap, nfsserver,
> > etc.). Nothing seems to have any effect on the problem; no noticeable
> > improvement even after dozens of adjustments.
> >
> > Since we have two identical servers (one in failover mode, because of
> > the HA setup), we tried bringing all the services up on the backup server
> > to rule out disk and/or hardware problems on the primary machine, but the
> > problems continued on the other server as well.
> >
> > When the load is very high, syslog starts generating several dumps in
> > /var/log/messages (shown below).
> >
> > Apparently there are no I/O problems (we even added a RAID array to
> > improve disk performance and made several adjustments, but nothing
> > resolved it or had any effect).
> > What we did notice is that there is no correlation between iowait and CPU
> > load; in other words, when the load is high the disks show no overload.
> > It looks like something related to memory, but the old server runs with
> > 4 GB under OpenSuSE 10.2 and handles the job, while this one, despite
> > being even beefier and having twice the memory, does not.
> >
> > Honestly, we are going to try downgrading the OS, because inferior
> > hardware running basically the same services for the same number of
> > users worked very well with OpenSuSE 10.2.
> >
> > Below is a description of the hardware, software and services used on
> > the server, followed further down by some of the messages that appear in
> > the syslog.
> >
> > If anyone can help with any tip I will be extremely grateful
> > (any help is welcome).
> >
> > Server> Dell PowerEdge 1900
> > 2 x Intel(R) Xeon(R) CPU E5310  1.60GHz DualCore
> > 8 GB RAM
> > 4 SAS HDs, 15,000 rpm
> >
> > Software> Suse Linux Enterprise Server 11 - Service Pack 1
> > Kernel> Linux srv-linux 2.6.32.12-0.7-default #1 SMP 2010-05-20 11:14:20
> > +0200 x86_64 x86_64 x86_64 GNU/Linux
> >
> > Basic services running on this server: linux-ha
> > (drbd+heartbeat), openldap, qmail-ldap, samba-ldap, nfsserver, dhcp,
> > named, squid and jabberd
> > Number of users: 300
> > Linux users have their HOMEDIR mounted via NFS
> > Windows users use SAMBA for group file sharing and/or profile backup
> >
> > top - 10:33:37 up 57 min, 19 users,  load average: 40.44, 49.96, 42.26
> > Tasks: 510 total,   1 running, 509 sleeping,   0 stopped,   0 zombie
> > Cpu(s):  1.3%us,  1.5%sy,  0.0%ni, 94.2%id,  1.7%wa,  0.0%hi,  1.4%si,
> > 0.0%st
> > Mem:   8188816k total,  8137392k used,    51424k free,    57116k buffers
> > Swap:  2104432k total,        0k used,  2104432k free,  7089980k cached
> >
> >  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
> >  9901 qscand    20   0  207m 164m 2032 S    0  2.1   0:04.63 clamd
> >  4074 root      20   0  358m  57m 1992 S    0  0.7   0:03.03 nscd
> >  9016 named     20   0  320m  54m 2464 S    0  0.7   0:17.37 named
> > 22761 root      20   0  115m  50m 4604 S    0  0.6   0:02.30 nxagent
> > 23477 root      20   0  597m  33m  21m S    0  0.4   0:01.20 plasma-desktop
> > 23357 root      20   0  453m  30m  23m S    0  0.4   0:00.51 kwin
> >  9028 ldap      20   0 1930m  26m 4564 S    0  0.3   1:36.51 slapd
> >  9248 root      20   0  324m  24m  17m S    0  0.3   0:03.92 kdm_greet
> > 24164 root      20   0  486m  23m  16m S    0  0.3   0:00.35 krunner
> > 10870 root      20   0 24548  20m 1168 S    2  0.3   0:22.59 jabberd
> >  9014 root      20   0  120m  19m 5328 S    0  0.2   0:03.04 Xorg
> > 24283 root      20   0  173m  19m  14m S    0  0.2   0:00.18 kdialog
> > 22940 root      20   0  290m  18m  12m S    0  0.2   0:00.22 kded4
> > 24275 root      20   0  191m  18m  13m S    0  0.2   0:00.22 kupdateapplet
> > 24270 root      20   0  237m  16m  10m S    0  0.2   0:00.11 kmix
> >  4061 root      -2   0 92828  16m 8476 S    0  0.2   0:01.18 heartbeat
> > 24274 root      20   0  284m  15m 9.9m S    0  0.2   0:00.10 klipper
> > 23299 root      20   0  309m  14m 9844 S    0  0.2   0:00.08 ksmserver
> > 22899 root      20   0  201m  14m  10m S    0  0.2   0:00.10 kdeinit4
> > 23743 root      20   0  228m  12m 7856 S    0  0.2   0:00.10 kglobalaccel
> > 24167 root      20   0  235m  12m 7760 S    0  0.2   0:00.04 nepomukserver
> >
> > # /usr/bin/uptime
> >  11:04am  up   0:18,  7 users,  load average: 27.52, 18.60, 10.27
> >
> > # /usr/bin/vmstat 1 4
> > procs -----------memory---------- ---swap-- -----io---- -system--
> > -----cpu------
> >  r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy
> > id wa st
> >  2  0      0  50856  19300 7196808    0    0   507   378 1167 1175  3  3
> > 88  6  0
> >  0  0      0  41332  19300 7200960    0    0   176  1279 14284 10519  2
> > 2 93  2  0
> >  1  0      0  43184  19184 7181520    0    0     0  1074 7191 1856  0  1
> > 99  0  0
> >  0  0      0  43316  19128 7179868    0    0     0  1189 2237 2340  1  0
> > 99  0  0
> >
> > # /usr/bin/vmstat 1 4
> > procs -----------memory---------- ---swap-- -----io---- -system--
> > -----cpu------
> >  r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy
> > id wa st
> >  0  1      0  47276  19048 7177788    0    0   498   384 1166 1171  3  3
> > 88  6  0
> >  1  0      0  46128  19056 7167016    0    0    36   970 7530 4158  2  1
> > 95  2  0
> >  0  1      0  46452  19064 7163616    0    0    20   798 1411 1749  2  1
> > 97  0  0
> >  0  0      0  46868  19064 7162624    0    0    56   751 7079 2169  1  1
> > 97  0  0
> >
> > Nov  4 09:57:53 srv-linux kernel: [ 1284.893013] The following is only
> > an harmless informational message.
> > Nov  4 09:57:53 srv-linux kernel: [ 1284.893019] Unless you get a
> > _continuous_flood_ of these messages it means
> > Nov  4 09:57:53 srv-linux kernel: [ 1284.893021] everything is working
> > fine. Allocations from irqs cannot be
> > Nov  4 09:57:53 srv-linux kernel: [ 1284.893023] perfectly reliable and
> > the kernel is designed to handle that.
> > Nov  4 09:57:53 srv-linux kernel: [ 1284.893028] swapper: page
> > allocation failure. order:0, mode:0x20, alloc_flags:0x30 pflags:0x10200042
> > Nov  4 09:57:53 srv-linux kernel: [ 1284.893032] Pid: 0, comm: swapper
> > Tainted: G           X 2.6.32.12-0.7-default #1
> > Nov  4 09:57:53 srv-linux kernel: [ 1284.893035] Call Trace:
> > Nov  4 09:57:53 srv-linux kernel: [ 1284.893054]  [<ffffffff810061dc>]
> > dump_trace+0x6c/0x2d0
> > Nov  4 09:57:53 srv-linux kernel: [ 1284.893063]  [<ffffffff81394288>]
> > dump_stack+0x69/0x71
> > Nov  4 09:57:53 srv-linux kernel: [ 1284.893070]  [<ffffffff810baa7d>]
> > __alloc_pages_slowpath+0x3ed/0x550
> > Nov  4 09:57:53 srv-linux kernel: [ 1284.893077]  [<ffffffff810bad1a>]
> > __alloc_pages_nodemask+0x13a/0x140
> > Nov  4 09:57:53 srv-linux kernel: [ 1284.893084]  [<ffffffff810ed6b6>]
> > kmem_getpages+0x56/0x170
> > Nov  4 09:57:53 srv-linux kernel: [ 1284.893089]  [<ffffffff810ee466>]
> > fallback_alloc+0x166/0x230
> > Nov  4 09:58:12 srv-linux kernel: [ 1284.893095]  [<ffffffff810ee8d2>]
> > kmem_cache_alloc+0x192/0x1b0
> > Nov  4 09:58:12 srv-linux kernel: [ 1284.893102]  [<ffffffff812e8c5a>]
> > skb_clone+0x3a/0x80
> > Nov  4 09:58:12 srv-linux kernel: [ 1284.893109]  [<ffffffff812f38f2>]
> > dev_queue_xmit_nit+0x82/0x170
> > Nov  4 09:58:12 srv-linux kernel: [ 1284.893114]  [<ffffffff812f3cba>]
> > dev_hard_start_xmit+0x4a/0x210
> > Nov  4 09:58:12 srv-linux kernel: [ 1284.893120]  [<ffffffff81308abe>]
> > sch_direct_xmit+0x16e/0x1e0
> > Nov  4 09:58:12 srv-linux kernel: [ 1284.893126]  [<ffffffff812f6f46>]
> > dev_queue_xmit+0x366/0x4d0
> > Nov  4 09:58:12 srv-linux kernel: [ 1284.893132]  [<ffffffff81322f80>]
> > ip_queue_xmit+0x210/0x420
> > Nov  4 09:58:12 srv-linux kernel: [ 1284.893139]  [<ffffffff8133753b>]
> > tcp_transmit_skb+0x4cb/0x760
> > Nov  4 09:58:12 srv-linux kernel: [ 1284.893145]  [<ffffffff8133aa3f>]
> > tcp_delack_timer+0x14f/0x2a0
> > Nov  4 09:58:12 srv-linux kernel: [ 1284.893151]  [<ffffffff81057964>]
> > run_timer_softirq+0x174/0x240
> > Nov  4 09:58:12 srv-linux kernel: [ 1284.893157]  [<ffffffff81052b5f>]
> > __do_softirq+0xbf/0x170
> > Nov  4 09:58:12 srv-linux kernel: [ 1284.893163]  [<ffffffff810040bc>]
> > call_softirq+0x1c/0x30
> > Nov  4 09:58:12 srv-linux kernel: [ 1284.893168]  [<ffffffff81005cfd>]
> > do_softirq+0x4d/0x80
> > Nov  4 09:58:12 srv-linux kernel: [ 1284.893173]  [<ffffffff810528d5>]
> > irq_exit+0x85/0x90
> > Nov  4 09:58:12 srv-linux kernel: [ 1284.893178]  [<ffffffff8101aecc>]
> > smp_apic_timer_interrupt+0x6c/0xa0
> > Nov  4 09:58:12 srv-linux kernel: [ 1284.893185]  [<ffffffff81003a93>]
> > apic_timer_interrupt+0x13/0x20
> > Nov  4 10:21:17 srv-linux kernel: [ 2687.090713] 449274 pages non-shared
> > Nov  4 10:21:17 srv-linux kernel: [ 2687.132671] The following is only
> > an harmless informational message.
> > Nov  4 10:21:17 srv-linux kernel: [ 2687.132677] Unless you get a
> > _continuous_flood_ of these messages it means
> > Nov  4 10:21:17 srv-linux kernel: [ 2687.132680] everything is working
> > fine. Allocations from irqs cannot be
> > Nov  4 10:21:17 srv-linux kernel: [ 2687.132683] perfectly reliable and
> > the kernel is designed to handle that.
> > Nov  4 10:21:17 srv-linux kernel: [ 2687.132688] swapper: page
> > allocation failure. order:0, mode:0x20, alloc_flags:0x30 pflags:0x10200042
> > Nov  4 10:21:17 srv-linux kernel: [ 2687.132696] Pid: 0, comm: swapper
> > Tainted: G           X 2.6.32.12-0.7-default #1
> > Nov  4 10:21:17 srv-linux kernel: [ 2687.132699] Call Trace:
> > Nov  4 10:21:17 srv-linux kernel: [ 2687.132719]  [<ffffffff810061dc>]
> > dump_trace+0x6c/0x2d0
> > Nov  4 10:21:17 srv-linux kernel: [ 2687.132729]  [<ffffffff81394288>]
> > dump_stack+0x69/0x71
> > Nov  4 10:21:17 srv-linux kernel: [ 2687.132738]  [<ffffffff810baa7d>]
> > __alloc_pages_slowpath+0x3ed/0x550
> > Nov  4 10:21:17 srv-linux kernel: [ 2687.132746]  [<ffffffff810bad1a>]
> > __alloc_pages_nodemask+0x13a/0x140
> > Nov  4 10:21:17 srv-linux kernel: [ 2687.132754]  [<ffffffff810ed6b6>]
> > kmem_getpages+0x56/0x170
> > Nov  4 10:21:17 srv-linux kernel: [ 2687.132761]  [<ffffffff810ee466>]
> > fallback_alloc+0x166/0x230
> > Nov  4 10:21:17 srv-linux kernel: [ 2687.132768]  [<ffffffff810ee8d2>]
> > kmem_cache_alloc+0x192/0x1b0
> > Nov  4 10:21:17 srv-linux kernel: [ 2687.132777]  [<ffffffff812e8c5a>]
> > skb_clone+0x3a/0x80
> > Nov  4 10:21:17 srv-linux kernel: [ 2687.132788]  [<ffffffffa011a688>]
> > packet_rcv_spkt+0x78/0x190 [af_packet]
> > Nov  4 10:21:17 srv-linux kernel: [ 2687.132807]  [<ffffffff812f2cf2>]
> > netif_receive_skb+0x3a2/0x660
> > Nov  4 10:21:17 srv-linux kernel: [ 2687.132819]  [<ffffffffa02bcfad>]
> > bnx2_rx_int+0x59d/0x820 [bnx2]
> > Nov  4 10:21:17 srv-linux kernel: [ 2687.132836]  [<ffffffffa02bd29f>]
> > bnx2_poll_work+0x6f/0x90 [bnx2]
> > Nov  4 10:21:17 srv-linux kernel: [ 2687.132851]  [<ffffffffa02bd3f1>]
> > bnx2_poll+0x61/0x1cc [bnx2]
> > Nov  4 10:21:17 srv-linux kernel: [ 2687.132865]  [<ffffffff812f3773>]
> > net_rx_action+0xe3/0x1a0
> > Nov  4 10:21:17 srv-linux kernel: [ 2687.132873]  [<ffffffff81052b5f>]
> > __do_softirq+0xbf/0x170
> > Nov  4 10:21:17 srv-linux kernel: [ 2687.132881]  [<ffffffff810040bc>]
> > call_softirq+0x1c/0x30
> > Nov  4 10:21:17 srv-linux kernel: [ 2687.132887]  [<ffffffff81005cfd>]
> > do_softirq+0x4d/0x80
> > Nov  4 10:21:17 srv-linux kernel: [ 2687.132893]  [<ffffffff810528d5>]
> > irq_exit+0x85/0x90
> > Nov  4 10:21:17 srv-linux kernel: [ 2687.132899]  [<ffffffff8100525e>]
> > do_IRQ+0x6e/0xe0
> > Nov  4 10:21:17 srv-linux kernel: [ 2687.132907]  [<ffffffff81003913>]
> > ret_from_intr+0x0/0xa
> > Nov  4 10:21:17 srv-linux kernel: [ 2687.132915]  [<ffffffff8100ad52>]
> > mwait_idle+0x62/0x70
> > Nov  4 10:21:17 srv-linux kernel: [ 2687.132922]  [<ffffffff8100206a>]
> > cpu_idle+0x5a/0xb0
> > Nov  4 10:21:17 srv-linux kernel: [ 2687.132926] Mem-Info:
> > Nov  4 10:21:17 srv-linux kernel: [ 2687.132929] Node 0 DMA per-cpu:
> > Nov  4 10:21:17 srv-linux kernel: [ 2687.132934] CPU    0: hi:    0,
> > btch:   1 usd:   0
> > Nov  4 10:21:17 srv-linux kernel: [ 2687.132938] CPU    1: hi:    0,
> > btch:   1 usd:   0
> > Nov  4 10:21:17 srv-linux kernel: [ 2687.132941] CPU    2: hi:    0,
> > btch:   1 usd:   0
> > Nov  4 10:21:17 srv-linux kernel: [ 2687.132945] CPU    3: hi:    0,
> > btch:   1 usd:   0
> > Nov  4 10:21:17 srv-linux kernel: [ 2687.132948] CPU    4: hi:    0,
> > btch:   1 usd:   0
> > Nov  4 10:21:17 srv-linux kernel: [ 2687.132951] CPU    5: hi:    0,
> > btch:   1 usd:   0
> > Nov  4 10:21:17 srv-linux kernel: [ 2687.132955] CPU    6: hi:    0,
> > btch:   1 usd:   0
> > Nov  4 10:21:17 srv-linux kernel: [ 2687.132958] CPU    7: hi:    0,
> > btch:   1 usd:   0
> > Nov  4 10:21:17 srv-linux kernel: [ 2687.132961] Node 0 DMA32 per-cpu:
> > Nov  4 10:21:17 srv-linux kernel: [ 2687.132966] CPU    0: hi:  186,
> > btch:  31 usd:  32
> > Nov  4 10:21:17 srv-linux kernel: [ 2687.132969] CPU    1: hi:  186,
> > btch:  31 usd:  90
> > Nov  4 10:21:17 srv-linux kernel: [ 2687.132973] CPU    2: hi:  186,
> > btch:  31 usd: 140
> > Nov  4 10:21:17 srv-linux kernel: [ 2687.132976] CPU    3: hi:  186,
> > btch:  31 usd: 166
> > Nov  4 10:21:17 srv-linux kernel: [ 2687.132979] CPU    4: hi:  186,
> > btch:  31 usd:  14
> > Nov  4 10:21:17 srv-linux kernel: [ 2687.132983] CPU    5: hi:  186,
> > btch:  31 usd: 119
> > Nov  4 10:21:17 srv-linux kernel: [ 2687.132986] CPU    6: hi:  186,
> > btch:  31 usd:  45
> > Nov  4 10:21:17 srv-linux kernel: [ 2687.132989] CPU    7: hi:  186,
> > btch:  31 usd: 191
> > Nov  4 10:21:17 srv-linux kernel: [ 2687.132992] Node 0 Normal per-cpu:
> > Nov  4 10:21:17 srv-linux kernel: [ 2687.132997] CPU    0: hi:  186,
> > btch:  31 usd:  16
> > Nov  4 10:21:17 srv-linux kernel: [ 2687.133000] CPU    1: hi:  186,
> > btch:  31 usd:   4
> > Nov  4 10:21:17 srv-linux kernel: [ 2687.133003] CPU    2: hi:  186,
> > btch:  31 usd:  44
> > Nov  4 10:21:17 srv-linux kernel: [ 2687.133006] CPU    3: hi:  186,
> > btch:  31 usd: 164
> > Nov  4 10:21:17 srv-linux kernel: [ 2687.133010] CPU    4: hi:  186,
> > btch:  31 usd:  98
> > Nov  4 10:21:17 srv-linux kernel: [ 2687.133013] CPU    5: hi:  186,
> > btch:  31 usd:  19
> > Nov  4 10:21:17 srv-linux kernel: [ 2687.133017] CPU    6: hi:  186,
> > btch:  31 usd:  76
> > Nov  4 10:21:17 srv-linux kernel: [ 2687.133020] CPU    7: hi:  186,
> > btch:  31 usd: 192
> > Nov  4 10:21:17 srv-linux kernel: [ 2687.133028] active_anon:90321
> > inactive_anon:23282 isolated_anon:0
> > Nov  4 10:21:17 srv-linux kernel: [ 2687.133029]  active_file:56108
> > inactive_file:1701629 isolated_file:0
> > Nov  4 10:21:17 srv-linux kernel: [ 2687.133030]  unevictable:5709
> > dirty:677685 writeback:2 unstable:0
> > Nov  4 10:21:17 srv-linux kernel: [ 2687.133032]  free:9755
> > slab_reclaimable:66787 slab_unreclaimable:50212
> > Nov  4 10:21:17 srv-linux kernel: [ 2687.133033]  mapped:13499 shmem:67
> > pagetables:6893 bounce:0
> > Nov  4 10:21:17 srv-linux kernel: [ 2687.133037] Node 0 DMA free:15692kB
> > min:20kB low:24kB high:28kB active_anon:0kB inactive_anon:0kB
> > active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB
> > isolated(file):0kB present:15320kB mlocked:0kB dirty:0kB writeback:0kB
> > mapped:0kB shmem:0kB slab_reclaimable:0kB slab_unreclaimable:0kB
> > kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB
> > writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
> > Nov  4 10:21:17 srv-linux kernel: [ 2687.133051] lowmem_reserve[]: 0
> > 3251 8049 8049
> > Nov  4 10:21:17 srv-linux kernel: [ 2687.133061] Node 0 DMA32
> > free:20800kB min:4632kB low:5788kB high:6948kB active_anon:69388kB
> > inactive_anon:16256kB active_file:33564kB inactive_file:2898248kB
> > unevictable:0kB isolated(anon):0kB isolated(file):0kB present:3329568kB
> > mlocked:0kB dirty:1095648kB writeback:4kB mapped:1264kB shmem:16kB
> > slab_reclaimable:107716kB slab_unreclaimable:11264kB kernel_stack:776kB
> > pagetables:5120kB unstable:0kB bounce:0kB writeback_tmp:0kB
> > pages_scanned:0 all_unreclaimable? no
> > Nov  4 10:21:17 srv-linux kernel: [ 2687.133076] lowmem_reserve[]: 0 0
> > 4797 4797
> > Nov  4 10:21:17 srv-linux kernel: [ 2687.133086] Node 0 Normal
> > free:2528kB min:6836kB low:8544kB high:10252kB active_anon:291896kB
> > inactive_anon:76872kB active_file:190868kB inactive_file:3908268kB
> > unevictable:22836kB isolated(anon):0kB isolated(file):0kB
> > present:4912640kB mlocked:22836kB dirty:1615092kB writeback:4kB
> > mapped:52732kB shmem:252kB slab_reclaimable:159432kB
> > slab_unreclaimable:189584kB kernel_stack:4312kB pagetables:22452kB
> > unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0
> > all_unreclaimable? no
> > Nov  4 10:21:17 srv-linux kernel: [ 2687.133101] lowmem_reserve[]: 0 0 0 0
> > Nov  4 10:21:17 srv-linux kernel: [ 2687.133110] Node 0 DMA: 3*4kB 4*8kB
> > 2*16kB 2*32kB 1*64kB 1*128kB 0*256kB 0*512kB 1*1024kB 1*2048kB 3*4096kB
> > = 15692kB
> > Nov  4 10:21:17 srv-linux kernel: [ 2687.133135] Node 0 DMA32: 1087*4kB
> > 1592*8kB 39*16kB 17*32kB 2*64kB 0*128kB 0*256kB 0*512kB 0*1024kB
> > 1*2048kB 0*4096kB = 20428kB
> > Nov  4 10:21:17 srv-linux kernel: [ 2687.133160] Node 0 Normal: 110*4kB
> > 7*8kB 4*16kB 2*32kB 2*64kB 2*128kB 0*256kB 0*512kB 1*1024kB 0*2048kB
> > 0*4096kB = 2032kB
> > Nov  4 10:21:17 srv-linux kernel: [ 2687.133185] 1759923 total pagecache
> > pages
> > Nov  4 10:21:17 srv-linux kernel: [ 2687.133188] 0 pages in swap cache
> > Nov  4 10:21:17 srv-linux kernel: [ 2687.133191] Swap cache stats: add
> > 0, delete 0, find 0/0
> > Nov  4 10:21:17 srv-linux kernel: [ 2687.133194] Free swap  = 2104432kB
> > Nov  4 10:21:17 srv-linux kernel: [ 2687.133197] Total swap = 2104432kB
> > Nov  4 10:21:17 srv-linux kernel: [ 2687.136597] 2097152 pages RAM
> > Nov  4 10:21:17 srv-linux kernel: [ 2687.136597] 49948 pages reserved
> > Nov  4 10:21:17 srv-linux kernel: [ 2687.136597] 1656353 pages shared
> > Nov  4 10:21:17 srv-linux kernel: [ 2687.136597] 449267 pages non-shared
> > Nov  4 11:07:27 srv-linux kernel: [ 1293.436013] The following is only
> > an harmless informational message.
> > Nov  4 11:07:27 srv-linux kernel: [ 1293.436018] Unless you get a
> > _continuous_flood_ of these messages it means
> > Nov  4 11:07:27 srv-linux kernel: [ 1293.436020] everything is working
> > fine. Allocations from irqs cannot be
> > Nov  4 11:07:27 srv-linux kernel: [ 1293.436022] perfectly reliable and
> > the kernel is designed to handle that.
> > Nov  4 11:07:27 srv-linux kernel: [ 1293.436026] swapper: page
> > allocation failure. order:0, mode:0x20, alloc_flags:0x30 pflags:0x10200042
> > Nov  4 11:07:27 srv-linux kernel: [ 1293.436031] Pid: 0, comm: swapper
> > Tainted: G           X 2.6.32.12-0.7-default #1
> > Nov  4 11:07:27 srv-linux kernel: [ 1293.436034] Call Trace:
> > Nov  4 11:07:27 srv-linux kernel: [ 1293.436052]  [<ffffffff810061dc>]
> > dump_trace+0x6c/0x2d0
> > Nov  4 11:07:27 srv-linux kernel: [ 1293.436061]  [<ffffffff81394288>]
> > dump_stack+0x69/0x71
> > Nov  4 11:07:27 srv-linux kernel: [ 1293.436069]  [<ffffffff810baa7d>]
> > __alloc_pages_slowpath+0x3ed/0x550
> > Nov  4 11:07:27 srv-linux kernel: [ 1293.436075]  [<ffffffff810bad1a>]
> > __alloc_pages_nodemask+0x13a/0x140
> > Nov  4 11:07:27 srv-linux kernel: [ 1293.436083]  [<ffffffff810ed6b6>]
> > kmem_getpages+0x56/0x170
> > Nov  4 11:07:27 srv-linux kernel: [ 1293.436088]  [<ffffffff810ee466>]
> > fallback_alloc+0x166/0x230
> > Nov  4 11:07:27 srv-linux kernel: [ 1293.436094]  [<ffffffff810ee8d2>]
> > kmem_cache_alloc+0x192/0x1b0
> > Nov  4 11:07:27 srv-linux kernel: [ 1293.436101]  [<ffffffff812e8c5a>]
> > skb_clone+0x3a/0x80
> > Nov  4 11:07:27 srv-linux kernel: [ 1293.436108]  [<ffffffff812f38f2>]
> > dev_queue_xmit_nit+0x82/0x170
> > Nov  4 11:07:27 srv-linux kernel: [ 1293.436113]  [<ffffffff812f3cba>]
> > dev_hard_start_xmit+0x4a/0x210
> > Nov  4 11:07:27 srv-linux kernel: [ 1293.436119]  [<ffffffff81308abe>]
> > sch_direct_xmit+0x16e/0x1e0
> > Nov  4 11:07:27 srv-linux kernel: [ 1293.436125]  [<ffffffff812f6f46>]
> > dev_queue_xmit+0x366/0x4d0
> > Nov  4 11:07:27 srv-linux kernel: [ 1293.436131]  [<ffffffff81322f80>]
> > ip_queue_xmit+0x210/0x420
> > Nov  4 11:07:27 srv-linux kernel: [ 1293.436138]  [<ffffffff8133753b>]
> > tcp_transmit_skb+0x4cb/0x760
> > Nov  4 11:07:27 srv-linux kernel: [ 1293.436144]  [<ffffffff8133aa3f>]
> > tcp_delack_timer+0x14f/0x2a0
> > Nov  4 11:07:27 srv-linux kernel: [ 1293.436150]  [<ffffffff81057964>]
> > run_timer_softirq+0x174/0x240
> > Nov  4 11:07:27 srv-linux kernel: [ 1293.436156]  [<ffffffff81052b5f>]
> > __do_softirq+0xbf/0x170
> > Nov  4 11:07:27 srv-linux kernel: [ 1293.436162]  [<ffffffff810040bc>]
> > call_softirq+0x1c/0x30
> > Nov  4 11:07:27 srv-linux kernel: [ 1293.436167]  [<ffffffff81005cfd>]
> > do_softirq+0x4d/0x80
> > Nov  4 11:07:27 srv-linux kernel: [ 1293.436171]  [<ffffffff810528d5>]
> > irq_exit+0x85/0x90
> > Nov  4 11:07:27 srv-linux kernel: [ 1293.436177]  [<ffffffff8101aecc>]
> > smp_apic_timer_interrupt+0x6c/0xa0
> > Nov  4 11:07:27 srv-linux kernel: [ 1293.436184]  [<ffffffff81003a93>]
> > apic_timer_interrupt+0x13/0x20
> > Nov  4 11:07:27 srv-linux kernel: [ 1293.436191]  [<ffffffff8100ad52>]
> > mwait_idle+0x62/0x70
> > Nov  4 11:07:27 srv-linux kernel: [ 1293.436196]  [<ffffffff8100206a>]
> > cpu_idle+0x5a/0xb0
> > Nov  4 11:07:27 srv-linux kernel: [ 1293.436200] Mem-Info:
> > Nov  4 11:07:27 srv-linux kernel: [ 1293.436202] Node 0 DMA per-cpu:
> > Nov  4 11:07:27 srv-linux kernel: [ 1293.436205] CPU    0: hi:    0,
> > btch:   1 usd:   0
> > Nov  4 11:07:27 srv-linux kernel: [ 1293.436208] CPU    1: hi:    0,
> > btch:   1 usd:   0
> > Nov  4 11:07:27 srv-linux kernel: [ 1293.436210] CPU    2: hi:    0,
> > btch:   1 usd:   0
> > Nov  4 11:07:27 srv-linux kernel: [ 1293.436213] CPU    3: hi:    0,
> > btch:   1 usd:   0
> > Nov  4 11:07:27 srv-linux kernel: [ 1293.436215] CPU    4: hi:    0,
> > btch:   1 usd:   0
> > Nov  4 11:07:27 srv-linux kernel: [ 1293.436217] CPU    5: hi:    0,
> > btch:   1 usd:   0
> > Nov  4 11:07:27 srv-linux kernel: [ 1293.436220] CPU    6: hi:    0,
> > btch:   1 usd:   0
> > Nov  4 11:07:27 srv-linux kernel: [ 1293.436222] CPU    7: hi:    0,
> > btch:   1 usd:   0
> > Nov  4 11:07:27 srv-linux kernel: [ 1293.436224] Node 0 DMA32 per-cpu:
> > Nov  4 11:07:27 srv-linux kernel: [ 1293.436227] CPU    0: hi:  186,
> > btch:  31 usd:  30
> > Nov  4 11:07:27 srv-linux kernel: [ 1293.436229] CPU    1: hi:  186,
> > btch:  31 usd: 186
> > Nov  4 11:07:27 srv-linux kernel: [ 1293.436232] CPU    2: hi:  186,
> > btch:  31 usd: 147
> > Nov  4 11:07:27 srv-linux kernel: [ 1293.436234] CPU    3: hi:  186,
> > btch:  31 usd: 174
> > Nov  4 11:07:27 srv-linux kernel: [ 1293.436236] CPU    4: hi:  186,
> > btch:  31 usd:  92
> > Nov  4 11:07:27 srv-linux kernel: [ 1293.436239] CPU    5: hi:  186,
> > btch:  31 usd:  49
> > Nov  4 11:07:27 srv-linux kernel: [ 1293.436241] CPU    6: hi:  186,
> > btch:  31 usd: 141
> > Nov  4 11:07:27 srv-linux kernel: [ 1293.436244] CPU    7: hi:  186,
> > btch:  31 usd: 142
> > Nov  4 11:07:27 srv-linux kernel: [ 1293.436245] Node 0 Normal per-cpu:
> > Nov  4 11:07:27 srv-linux kernel: [ 1293.436248] CPU    0: hi:  186,
> > btch:  31 usd:  46
> > Nov  4 11:07:27 srv-linux kernel: [ 1293.436250] CPU    1: hi:  186,
> > btch:  31 usd: 158
> > Nov  4 11:07:27 srv-linux kernel: [ 1293.436253] CPU    2: hi:  186,
> > btch:  31 usd: 151
> > Nov  4 11:07:27 srv-linux kernel: [ 1293.436255] CPU    3: hi:  186,
> > btch:  31 usd:  39
> > Nov  4 11:07:27 srv-linux kernel: [ 1293.436257] CPU    4: hi:  186,
> > btch:  31 usd: 114
> > Nov  4 11:07:27 srv-linux kernel: [ 1293.436260] CPU    5: hi:  186,
> > btch:  31 usd:  59
> > Nov  4 11:07:27 srv-linux kernel: [ 1293.436262] CPU    6: hi:  186,
> > btch:  31 usd: 124
> > Nov  4 11:07:27 srv-linux kernel: [ 1293.436265] CPU    7: hi:  186,
> > btch:  31 usd: 173
> > Nov  4 11:07:27 srv-linux kernel: [ 1293.436271] active_anon:121650
> > inactive_anon:21539 isolated_anon:0
> > Nov  4 11:07:27 srv-linux kernel: [ 1293.436272]  active_file:65104
> > inactive_file:1679351 isolated_file:0
> > Nov  4 11:07:27 srv-linux kernel: [ 1293.436273]  unevictable:5709
> > dirty:474043 writeback:6102 unstable:0
> > Nov  4 11:07:28 srv-linux kernel: [ 1293.436275]  free:9712
> > slab_reclaimable:51092 slab_unreclaimable:49524
> > Nov  4 11:07:28 srv-linux kernel: [ 1293.436276]  mapped:13595 shmem:109
> > pagetables:6308 bounce:0
> > Nov  4 11:07:28 srv-linux kernel: [ 1293.436279] Node 0 DMA free:15692kB
> > min:20kB low:24kB high:28kB active_anon:0kB inactive_anon:0kB
> > active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB
> > isolated(file):0kB present:15320kB mlocked:0kB dirty:0kB writeback:0kB
> > mapped:0kB shmem:0kB slab_reclaimable:0kB slab_unreclaimable:0kB
> > kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB
> > writeback_tmp:0kB pages_scanned:0 all_unreclaimable? yes
> > Nov  4 11:07:28 srv-linux kernel: [ 1293.436290] lowmem_reserve[]: 0
> > 3251 8049 8049
> > Nov  4 11:07:28 srv-linux kernel: [ 1293.436295] Node 0 DMA32
> > free:20696kB min:4632kB low:5788kB high:6948kB active_anon:79808kB
> > inactive_anon:17188kB active_file:55724kB inactive_file:2866240kB
> > unevictable:0kB isolated(anon):0kB isolated(file):0kB present:3329568kB
> > mlocked:0kB dirty:1287108kB writeback:8764kB mapped:168kB shmem:76kB
> > slab_reclaimable:108972kB slab_unreclaimable:12288kB kernel_stack:824kB
> > pagetables:6980kB unstable:0kB bounce:0kB writeback_tmp:0kB
> > pages_scanned:0 all_unreclaimable? no
> > Nov  4 11:07:28 srv-linux kernel: [ 1293.436307] lowmem_reserve[]: 0 0
> > 4797 4797
> > Nov  4 11:07:28 srv-linux kernel: [ 1293.436311] Node 0 Normal
> > free:2460kB min:6836kB low:8544kB high:10252kB active_anon:406792kB
> > inactive_anon:68968kB active_file:204692kB inactive_file:3851164kB
> > unevictable:22836kB isolated(anon):0kB isolated(file):0kB
> > present:4912640kB mlocked:22836kB dirty:609064kB writeback:15644kB
> > mapped:54212kB shmem:360kB slab_reclaimable:95396kB
> > slab_unreclaimable:185808kB kernel_stack:3912kB pagetables:18252kB
> > unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0
> > all_unreclaimable? no
> > Nov  4 11:07:28 srv-linux kernel: [ 1293.436323] lowmem_reserve[]: 0 0 0 0
> > Nov  4 11:07:28 srv-linux kernel: [ 1293.436327] Node 0 DMA: 3*4kB 4*8kB
> > 2*16kB 2*32kB 1*64kB 1*128kB 0*256kB 0*512kB 1*1024kB 1*2048kB 3*4096kB
> > = 15692kB
> > Nov  4 11:07:28 srv-linux kernel: [ 1293.436339] Node 0 DMA32: 53*4kB
> > 586*8kB 309*16kB 50*32kB 9*64kB 5*128kB 2*256kB 1*512kB 0*1024kB
> > 1*2048kB 1*4096kB = 19828kB
> > Nov  4 11:07:28 srv-linux kernel: [ 1293.436350] Node 0 Normal: 8*4kB
> > 12*8kB 1*16kB 3*32kB 1*64kB 0*128kB 0*256kB 1*512kB 1*1024kB 0*2048kB
> > 0*4096kB = 1840kB
> > Nov  4 11:07:28 srv-linux kernel: [ 1293.436361] 1746592 total pagecache
> > pages
> > Nov  4 11:07:28 srv-linux kernel: [ 1293.436363] 0 pages in swap cache
> > Nov  4 11:07:28 srv-linux kernel: [ 1293.436365] Swap cache stats: add
> > 0, delete 0, find 0/0
> > Nov  4 11:07:28 srv-linux kernel: [ 1293.436367] Free swap  = 2104432kB
> > Nov  4 11:07:28 srv-linux kernel: [ 1293.436369] Total swap = 2104432kB
> > Nov  4 11:07:28 srv-linux kernel: [ 1293.445967] 2097152 pages RAM
> > Nov  4 11:07:28 srv-linux kernel: [ 1293.445967] 49948 pages reserved
> > Nov  4 11:07:28 srv-linux kernel: [ 1293.445967] 1080140 pages shared
> > Nov  4 11:07:28 srv-linux kernel: [ 1293.445967] 1014865 pages non-shared
> > Nov  4 11:07:28 srv-linux kernel: [ 1293.480826] The following is only
> > an harmless informational message.
> > Nov  4 11:07:28 srv-linux kernel: [ 1293.480832] Unless you get a
> > _continuous_flood_ of these messages it means
> > Nov  4 11:07:28 srv-linux kernel: [ 1293.480838] everything is working
> > fine. Allocations from irqs cannot be
> > Nov  4 11:07:28 srv-linux kernel: [ 1293.480843] perfectly reliable and
> > the kernel is designed to handle that.
> > Nov  4 11:07:28 srv-linux kernel: [ 1293.480850] swapper: page
> > allocation failure. order:0, mode:0x20, alloc_flags:0x30 pflags:0x10200042
> > Nov  4 11:07:28 srv-linux kernel: [ 1293.480856] Pid: 0, comm: swapper
> > Tainted: G           X 2.6.32.12-0.7-default #1
> > Nov  4 11:07:28 srv-linux kernel: [ 1293.480862] Call Trace:
> > Nov  4 11:07:28 srv-linux kernel: [ 1293.480883]  [<ffffffff810061dc>]
> > dump_trace+0x6c/0x2d0
> > Nov  4 11:07:28 srv-linux kernel: [ 1293.480897]  [<ffffffff81394288>]
> > dump_stack+0x69/0x71
> > Nov  4 11:07:28 srv-linux kernel: [ 1293.480910]  [<ffffffff810baa7d>]
> > __alloc_pages_slowpath+0x3ed/0x550
> > Nov  4 11:07:28 srv-linux kernel: [ 1293.480921]  [<ffffffff810bad1a>]
> > __alloc_pages_nodemask+0x13a/0x140
> > Nov  4 11:07:28 srv-linux kernel: [ 1293.480933]  [<ffffffff810ed6b6>]
> > kmem_getpages+0x56/0x170
> > Nov  4 11:07:28 srv-linux kernel: [ 1293.480944]  [<ffffffff810ee466>]
> > fallback_alloc+0x166/0x230
> > Nov  4 11:07:28 srv-linux kernel: [ 1293.480955]  [<ffffffff810ee8d2>]
> > kmem_cache_alloc+0x192/0x1b0
> > Nov  4 11:07:28 srv-linux kernel: [ 1293.480967]  [<ffffffff812e8c5a>]
> > skb_clone+0x3a/0x80
> > Nov  4 11:07:28 srv-linux kernel: [ 1293.480979]  [<ffffffff812f38f2>]
> > dev_queue_xmit_nit+0x82/0x170
> > Nov  4 11:07:28 srv-linux kernel: [ 1293.480990]  [<ffffffff812f3cba>]
> > dev_hard_start_xmit+0x4a/0x210
> > Nov  4 11:07:28 srv-linux kernel: [ 1293.481000]  [<ffffffff81308abe>]
> > sch_direct_xmit+0x16e/0x1e0
> > Nov  4 11:07:28 srv-linux kernel: [ 1293.481010]  [<ffffffff81308bdf>]
> > __qdisc_run+0xaf/0x100
> > Nov  4 11:07:28 srv-linux kernel: [ 1293.481021]  [<ffffffff812f70ab>]
> > dev_queue_xmit+0x4cb/0x4d0
> > Nov  4 11:07:28 srv-linux kernel: [ 1293.481032]  [<ffffffff81322f80>]
> > ip_queue_xmit+0x210/0x420
> > Nov  4 11:07:28 srv-linux kernel: [ 1293.481044]  [<ffffffff8133753b>]
> > tcp_transmit_skb+0x4cb/0x760
> > Nov  4 11:07:28 srv-linux kernel: [ 1293.481054]  [<ffffffff8133aa3f>]
> > tcp_delack_timer+0x14f/0x2a0
> > Nov  4 11:07:28 srv-linux kernel: [ 1293.481066]  [<ffffffff81057964>]
> > run_timer_softirq+0x174/0x240
> > Nov  4 11:07:28 srv-linux kernel: [ 1293.481077]  [<ffffffff81052b5f>]
> > __do_softirq+0xbf/0x170
> > Nov  4 11:07:28 srv-linux kernel: [ 1293.481088]  [<ffffffff810040bc>]
> > call_softirq+0x1c/0x30
> > Nov  4 11:07:28 srv-linux kernel: [ 1293.481098]  [<ffffffff81005cfd>]
> > do_softirq+0x4d/0x80
> > Nov  4 11:07:28 srv-linux kernel: [ 1293.481108]  [<ffffffff810528d5>]
> > irq_exit+0x85/0x90
> > Nov  4 11:07:28 srv-linux kernel: [ 1293.481118]  [<ffffffff8101aecc>]
> > smp_apic_timer_interrupt+0x6c/0xa0
> > Nov  4 11:07:28 srv-linux kernel: [ 1293.481131]  [<ffffffff81003a93>]
> > apic_timer_interrupt+0x13/0x20
> > Nov  4 11:07:28 srv-linux kernel: [ 1293.481142]  [<ffffffff8100ad52>]
> > mwait_idle+0x62/0x70
> > Nov  4 11:07:28 srv-linux kernel: [ 1293.481152]  [<ffffffff8100206a>]
> > cpu_idle+0x5a/0xb0
> > Nov  4 11:07:28 srv-linux kernel: [ 1293.481159] Mem-Info:
> > Nov  4 11:07:28 srv-linux kernel: [ 1293.481163] Node 0 DMA per-cpu:
> > Nov  4 11:07:28 srv-linux kernel: [ 1293.481173] CPU    0: hi:    0,
> > btch:   1 usd:   0
> > Nov  4 11:07:28 srv-linux kernel: [ 1293.481178] CPU    1: hi:    0,
> > btch:   1 usd:   0
> > Nov  4 11:07:28 srv-linux kernel: [ 1293.481184] CPU    2: hi:    0,
> > btch:   1 usd:   0
> > Nov  4 11:07:28 srv-linux kernel: [ 1293.481189] CPU    3: hi:    0,
> > btch:   1 usd:   0
> > Nov  4 11:07:28 srv-linux kernel: [ 1293.481195] CPU    4: hi:    0,
> > btch:   1 usd:   0
> > Nov  4 11:07:28 srv-linux kernel: [ 1293.481200] CPU    5: hi:    0,
> > btch:   1 usd:   0
> > Nov  4 11:07:28 srv-linux kernel: [ 1293.481206] CPU    6: hi:    0,
> > btch:   1 usd:   0
> > Nov  4 11:07:28 srv-linux kernel: [ 1293.481211] CPU    7: hi:    0,
> > btch:   1 usd:   0
> > Nov  4 11:07:28 srv-linux kernel: [ 1293.481216] Node 0 DMA32 per-cpu:
> > Nov  4 11:07:28 srv-linux kernel: [ 1293.481226] CPU    0: hi:  186,
> > btch:  31 usd:  30
> > Nov  4 11:07:28 srv-linux kernel: [ 1293.481231] CPU    1: hi:  186,
> > btch:  31 usd: 186
> > Nov  4 11:07:28 srv-linux kernel: [ 1293.481237] CPU    2: hi:  186,
> > btch:  31 usd: 147
> > Nov  4 11:07:28 srv-linux kernel: [ 1293.481242] CPU    3: hi:  186,
> > btch:  31 usd: 174
> > Nov  4 11:07:28 srv-linux kernel: [ 1293.481248] CPU    4: hi:  186,
> > btch:  31 usd:  92
> > Nov  4 11:07:28 srv-linux kernel: [ 1293.481253] CPU    5: hi:  186,
> > btch:  31 usd:  49
> > Nov  4 11:07:28 srv-linux kernel: [ 1293.481259] CPU    6: hi:  186,
> > btch:  31 usd: 141
> > Nov  4 11:07:28 srv-linux kernel: [ 1293.481264] CPU    7: hi:  186,
> > btch:  31 usd: 142
> > Nov  4 11:07:28 srv-linux kernel: [ 1293.481269] Node 0 Normal per-cpu:
> > Nov  4 11:07:28 srv-linux kernel: [ 1293.481278] CPU    0: hi:  186,
> > btch:  31 usd:  46
> > Nov  4 11:07:28 srv-linux kernel: [ 1293.481284] CPU    1: hi:  186,
> > btch:  31 usd: 158
> > Nov  4 11:07:28 srv-linux kernel: [ 1293.481289] CPU    2: hi:  186,
> > btch:  31 usd: 151
> > Nov  4 11:07:28 srv-linux kernel: [ 1293.481295] CPU    3: hi:  186,
> > btch:  31 usd:  39
> > Nov  4 11:07:28 srv-linux kernel: [ 1293.481300] CPU    4: hi:  186,
> > btch:  31 usd: 114
> > Nov  4 11:07:28 srv-linux kernel: [ 1293.481306] CPU    5: hi:  186,
> > btch:  31 usd:  59
> > Nov  4 11:07:28 srv-linux kernel: [ 1293.481311] CPU    6: hi:  186,
> > btch:  31 usd: 124
> > Nov  4 11:07:28 srv-linux kernel: [ 1293.481316] CPU    7: hi:  186,
> > btch:  31 usd: 173
> > Nov  4 11:07:29 srv-linux kernel: [ 1293.481325] active_anon:121650
> > inactive_anon:21539 isolated_anon:0
> > Nov  4 11:07:29 srv-linux kernel: [ 1293.481327]  active_file:65104
> > inactive_file:1679351 isolated_file:0
> > Nov  4 11:07:29 srv-linux kernel: [ 1293.481328]  unevictable:5709
> > dirty:474043 writeback:6102 unstable:0
> > Nov  4 11:07:29 srv-linux kernel: [ 1293.481329]  free:9712
> > slab_reclaimable:51092 slab_unreclaimable:49524
> > Nov  4 11:07:29 srv-linux kernel: [ 1293.481330]  mapped:13595 shmem:109
> > pagetables:6308 bounce:0
> > Nov  4 11:07:29 srv-linux kernel: [ 1293.481336] Node 0 DMA free:15692kB
> > min:20kB low:24kB high:28kB active_anon:0kB inactive_anon:0kB
> > active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB
> > isolated(file):0kB present:15320kB mlocked:0kB dirty:0kB writeback:0kB
> > mapped:0kB shmem:0kB slab_reclaimable:0kB slab_unreclaimable:0kB
> > kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB
> > writeback_tmp:0kB pages_scanned:0 all_unreclaimable? yes
> > Nov  4 11:07:29 srv-linux kernel: [ 1293.481354] lowmem_reserve[]: 0
> > 3251 8049 8049
> > Nov  4 11:07:29 srv-linux kernel: [ 1293.481377] Node 0 DMA32
> > free:20696kB min:4632kB low:5788kB high:6948kB active_anon:79808kB
> > inactive_anon:17188kB active_file:55724kB inactive_file:2866240kB
> > unevictable:0kB isolated(anon):0kB isolated(file):0kB present:3329568kB
> > mlocked:0kB dirty:1287108kB writeback:8764kB mapped:168kB shmem:76kB
> > slab_reclaimable:108972kB slab_unreclaimable:12288kB kernel_stack:824kB
> > pagetables:6980kB unstable:0kB bounce:0kB writeback_tmp:0kB
> > pages_scanned:0 all_unreclaimable? no
> > Nov  4 11:07:29 srv-linux kernel: [ 1293.481396] lowmem_reserve[]: 0 0
> > 4797 4797
> > Nov  4 11:07:29 srv-linux kernel: [ 1293.481419] Node 0 Normal
> > free:2460kB min:6836kB low:8544kB high:10252kB active_anon:406792kB
> > inactive_anon:68968kB active_file:204692kB inactive_file:3851164kB
> > unevictable:22836kB isolated(anon):0kB isolated(file):0kB
> > present:4912640kB mlocked:22836kB dirty:609064kB writeback:15644kB
> > mapped:54212kB shmem:360kB slab_reclaimable:95396kB
> > slab_unreclaimable:185808kB kernel_stack:3912kB pagetables:18252kB
> > unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0
> > all_unreclaimable? no
> > Nov  4 11:07:29 srv-linux kernel: [ 1293.481438] lowmem_reserve[]: 0 0 0 0
> > Nov  4 11:07:29 srv-linux kernel: [ 1293.481462] Node 0 DMA: 3*4kB 4*8kB
> > 2*16kB 2*32kB 1*64kB 1*128kB 0*256kB 0*512kB 1*1024kB 1*2048kB 3*4096kB
> > = 15692kB
> > Nov  4 11:07:29 srv-linux kernel: [ 1293.481518] Node 0 DMA32: 53*4kB
> > 586*8kB 309*16kB 50*32kB 9*64kB 5*128kB 2*256kB 1*512kB 0*1024kB
> > 1*2048kB 1*4096kB = 19828kB
> > Nov  4 11:07:29 srv-linux kernel: [ 1293.481574] Node 0 Normal: 8*4kB
> > 12*8kB 1*16kB 3*32kB 1*64kB 0*128kB 0*256kB 1*512kB 1*1024kB 0*2048kB
> > 0*4096kB = 1840kB
> > Nov  4 11:07:29 srv-linux kernel: [ 1293.481630] 1746592 total pagecache
> > pages
> > Nov  4 11:07:29 srv-linux kernel: [ 1293.481635] 0 pages in swap cache
> > Nov  4 11:07:29 srv-linux kernel: [ 1293.481641] Swap cache stats: add
> > 0, delete 0, find 0/0
> > Nov  4 11:07:29 srv-linux kernel: [ 1293.481646] Free swap  = 2104432kB
> > Nov  4 11:07:29 srv-linux kernel: [ 1293.481651] Total swap = 2104432kB
> > Nov  4 11:07:29 srv-linux kernel: [ 1293.484802] 2097152 pages RAM
> > Nov  4 11:07:29 srv-linux kernel: [ 1293.484802] 49948 pages reserved
> > Nov  4 11:07:29 srv-linux kernel: [ 1293.484802] 1079742 pages shared
> > Nov  4 11:07:29 srv-linux kernel: [ 1293.484802] 1013515 pages non-shared
> >
> >
> > __
> > masoch-l list
> > https://eng.registro.br/mailman/listinfo/masoch-l
> >
>
>
>
> --
> Armando Roque Ferreira Pinto
> Systems analyst
> __
> masoch-l list
> https://eng.registro.br/mailman/listinfo/masoch-l
>



-- 
:=)>Paulo Henrique (JSRD)<(=:

Alone, locked, a survivor, unfortunately not knowing who I am

