Optimizing Memory Allocation Failures on a Linux Server

Posted by 云谷计算 on May 7, 2019

Background

In a cold-storage cluster (48 CPU cores and 256 GB of RAM per machine), various processes frequently hit memory allocation stall errors like the one below:

Feb 28 11:16:37 MyHOST kernel: [4395803.602414] python: page allocation stalls for 15412ms, order:1, mode:0x26040c0(GFP_KERNEL|__GFP_COMP|__GFP_NOTRACK)
Feb 28 11:16:37 MyHOST kernel: [4395803.602420] CPU: 4 PID: 1467 Comm: python Tainted: G           O    4.9.0-8-amd64 #1 Debian 4.9.110-3+deb9u6
Feb 28 11:16:37 MyHOST kernel: [4395803.602421] Hardware name: XXXXXXXXXXXX, BIOS YYYYYYYYYYY
Feb 28 11:16:37 MyHOST kernel: [4395803.602422]  0000000000000000 ffffffff84f31e54 ffffffff856016b8 ffffac699b2e3ab0
Feb 28 11:16:37 MyHOST kernel: [4395803.602425]  ffffffff84d8a84a 026040c0026040c0 ffffffff856016b8 ffffac699b2e3a50
Feb 28 11:16:37 MyHOST kernel: [4395803.602427]  ffff9dbb00000010 ffffac699b2e3ac0 ffffac699b2e3a70 6165da10c2b2f272
Feb 28 11:16:37 MyHOST kernel: [4395803.602429] Call Trace:
Feb 28 11:16:37 MyHOST kernel: [4395803.602435]  [<ffffffff84f31e54>] ? dump_stack+0x5c/0x78
Feb 28 11:16:37 MyHOST kernel: [4395803.602438]  [<ffffffff84d8a84a>] ? warn_alloc+0x13a/0x160
Feb 28 11:16:37 MyHOST kernel: [4395803.602440]  [<ffffffff84d8a58a>] ? __alloc_pages_direct_compact+0x4a/0xf0
Feb 28 11:16:37 MyHOST kernel: [4395803.602442]  [<ffffffff84d8b275>] ? __alloc_pages_slowpath+0x995/0xbf0
Feb 28 11:16:37 MyHOST kernel: [4395803.602444]  [<ffffffff84d8ab74>] ? __alloc_pages_slowpath+0x294/0xbf0
Feb 28 11:16:37 MyHOST kernel: [4395803.602446]  [<ffffffff84d8b6d1>] ? __alloc_pages_nodemask+0x201/0x260
Feb 28 11:16:37 MyHOST kernel: [4395803.602449]  [<ffffffff84de4d0a>] ? cache_grow_begin+0x9a/0x560
Feb 28 11:16:37 MyHOST kernel: [4395803.602451]  [<ffffffff84de4d0a>] ? cache_grow_begin+0x9a/0x560
Feb 28 11:16:37 MyHOST kernel: [4395803.602453]  [<ffffffff84de5481>] ? fallback_alloc+0x161/0x200
Feb 28 11:16:37 MyHOST kernel: [4395803.602455]  [<ffffffff84de6d88>] ? kmem_cache_alloc+0x1c8/0x530
Feb 28 11:16:37 MyHOST kernel: [4395803.602458]  [<ffffffff84cac3af>] ? place_entity+0x1f/0x70
Feb 28 11:16:37 MyHOST kernel: [4395803.602460]  [<ffffffff84e28670>] ? dup_fd+0x30/0x270
Feb 28 11:16:37 MyHOST kernel: [4395803.602463]  [<ffffffff84d22861>] ? audit_alloc+0xc1/0x160
Feb 28 11:16:37 MyHOST kernel: [4395803.602467]  [<ffffffff84c77182>] ? copy_process.part.34+0x8a2/0x1c60
Feb 28 11:16:37 MyHOST kernel: [4395803.602469]  [<ffffffff84de6cdc>] ? kmem_cache_alloc+0x11c/0x530
Feb 28 11:16:37 MyHOST kernel: [4395803.602470]  [<ffffffff84de6cdc>] ? kmem_cache_alloc+0x11c/0x530
Feb 28 11:16:37 MyHOST kernel: [4395803.602472]  [<ffffffff84c78723>] ? _do_fork+0xe3/0x3f0
Feb 28 11:16:37 MyHOST kernel: [4395803.602475]  [<ffffffff84c033ce>] ? syscall_trace_enter+0x1ae/0x2c0
Feb 28 11:16:37 MyHOST kernel: [4395803.602477]  [<ffffffff84d22dc8>] ? __audit_syscall_exit+0x1d8/0x260
Feb 28 11:16:37 MyHOST kernel: [4395803.602478]  [<ffffffff84c03b7d>] ? do_syscall_64+0x8d/0xf0
Feb 28 11:16:37 MyHOST kernel: [4395803.602481]  [<ffffffff85215c4e>] ? entry_SYSCALL_64_after_swapgs+0x58/0xc6
Feb 28 11:16:37 MyHOST kernel: [4395803.602482] Mem-Info:
Feb 28 11:16:37 MyHOST kernel: [4395803.602488] active_anon:2135446 inactive_anon:219133 isolated_anon:0
Feb 28 11:16:37 MyHOST kernel: [4395803.602488]  active_file:26813845 inactive_file:1300246 isolated_file:0
Feb 28 11:16:37 MyHOST kernel: [4395803.602488]  unevictable:2298 dirty:1270262 writeback:18941 unstable:0
Feb 28 11:16:37 MyHOST kernel: [4395803.602488]  slab_reclaimable:1053450 slab_unreclaimable:78937
Feb 28 11:16:37 MyHOST kernel: [4395803.602488]  mapped:60055 shmem:712110 pagetables:7215 bounce:0
Feb 28 11:16:37 MyHOST kernel: [4395803.602488]  free:153575 free_pcp:0 free_cma:0
Feb 28 11:16:37 MyHOST kernel: [4395803.602490] Node 0 active_anon:4507860kB inactive_anon:593028kB active_file:51265328kB inactive_file:2530232kB unevictable:5948kB isolated(anon):0kB isolated(file):0kB mapped:114712kB dirty:2458908kB writeback:50312kB shmem:1965612kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 0kB writeback_tmp:0kB unstable:0kB pages_scanned:7486 all_unreclaimable? no
Feb 28 11:16:37 MyHOST kernel: [4395803.602493] Node 1 active_anon:4033924kB inactive_anon:283504kB active_file:55990052kB inactive_file:2670752kB unevictable:3244kB isolated(anon):0kB isolated(file):0kB mapped:125508kB dirty:2622140kB writeback:25452kB shmem:882828kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 0kB writeback_tmp:0kB unstable:0kB pages_scanned:0 all_unreclaimable? no
Feb 28 11:16:37 MyHOST kernel: [4395803.602494] Node 0 DMA free:15884kB min:4kB low:16kB high:28kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15968kB managed:15884kB mlocked:0kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
Feb 28 11:16:37 MyHOST kernel: [4395803.602498] lowmem_reserve[]: 0 1346 128303 128303 128303
Feb 28 11:16:37 MyHOST kernel: [4395803.602500] Node 0 DMA32 free:508276kB min:468kB low:1844kB high:3220kB active_anon:78008kB inactive_anon:9116kB active_file:763188kB inactive_file:29396kB unevictable:0kB writepending:29408kB present:1724612kB managed:1462468kB mlocked:0kB slab_reclaimable:70228kB slab_unreclaimable:508kB kernel_stack:64kB pagetables:108kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
Feb 28 11:16:37 MyHOST kernel: [4395803.602504] lowmem_reserve[]: 0 0 126956 126956 126956
Feb 28 11:16:37 MyHOST kernel: [4395803.602506] Node 0 Normal free:43880kB min:44456kB low:174456kB high:304456kB active_anon:4429852kB inactive_anon:583912kB active_file:50502140kB inactive_file:2500952kB unevictable:5948kB writepending:2479192kB present:132120576kB managed:130008524kB mlocked:5948kB slab_reclaimable:3189536kB slab_unreclaimable:162484kB kernel_stack:12744kB pagetables:16232kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
Feb 28 11:16:37 MyHOST kernel: [4395803.602516] lowmem_reserve[]: 0 0 0 0 0
Feb 28 11:16:37 MyHOST kernel: [4395803.602518] Node 0 DMA: 1*4kB (U) 1*8kB (U) 0*16kB 0*32kB 2*64kB (U) 1*128kB (U) 1*256kB (U) 0*512kB 1*1024kB (U) 1*2048kB (M) 3*4096kB (M) = 15884kB
Feb 28 11:16:37 MyHOST kernel: [4395803.602526] Node 0 DMA32: 1003*4kB (UME) 259*8kB (UME) 156*16kB (UME) 141*32kB (UME) 158*64kB (UME) 176*128kB (UME) 59*256kB (UME) 42*512kB (UME) 8*1024kB (UME) 8*2048kB (UM) 98*4096kB (UM) = 508324kB
Feb 28 11:16:37 MyHOST kernel: [4395803.602535] Node 0 Normal: 11218*4kB (UMEH) 32*8kB (MEH) 17*16kB (H) 11*32kB (H) 4*64kB (H) 1*128kB (H) 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 46136kB
Feb 28 11:16:37 MyHOST kernel: [4395803.602543] Node 1 Normal: 6038*4kB (UMEH) 2987*8kB (UMEH) 39*16kB (UMEH) 16*32kB (H) 6*64kB (H) 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 49568kB
Feb 28 11:16:37 MyHOST kernel: [4395803.602551] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
Feb 28 11:16:37 MyHOST kernel: [4395803.602552] Node 0 hugepages_total=32768 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
Feb 28 11:16:37 MyHOST kernel: [4395803.602553] Node 1 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
Feb 28 11:16:37 MyHOST kernel: [4395803.602554] Node 1 hugepages_total=32768 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
Feb 28 11:16:37 MyHOST kernel: [4395803.602555] 28827012 total pagecache pages
Feb 28 11:16:37 MyHOST kernel: [4395803.602555] 0 pages in swap cache
Feb 28 11:16:37 MyHOST kernel: [4395803.602556] Swap cache stats: add 0, delete 0, find 0/0
Feb 28 11:16:37 MyHOST kernel: [4395803.602557] Free swap  = 0kB
Feb 28 11:16:37 MyHOST kernel: [4395803.602557] Total swap = 0kB
Feb 28 11:16:37 MyHOST kernel: [4395803.602558] 67019721 pages RAM
Feb 28 11:16:37 MyHOST kernel: [4395803.602558] 0 pages HighMem/MovableOnly
Feb 28 11:16:37 MyHOST kernel: [4395803.602558] 1120110 pages reserved
Feb 28 11:16:37 MyHOST kernel: [4395803.602559] 0 pages hwpoisoned

An initial look showed that on the affected servers free memory had dropped to around 1 GB, the vast majority of memory was being used as page cache, and the kswapd threads were running at close to 100% CPU. The suspicion was therefore that, with so little free memory, allocations were entering the slow path to reclaim memory, and because each reclaim pass had to process a large number of pages, the allocation eventually stalled. The plan was to use the kernel's memory allocation parameters to raise the watermarks for free and high-order memory, so that the slow-path stalls are avoided as far as possible.
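
These symptoms can be double-checked with a few standard commands; a quick sketch (output omitted):

# free -g
# ps -eo pid,pcpu,comm | grep kswapd
# grep -E 'Dirty|Writeback' /proc/meminfo

free -g shows how little memory is truly free versus used as cache, the ps line shows the per-node kswapd threads and their CPU usage, and the Dirty/Writeback counters indicate how much page cache must still be written back before it can be reclaimed.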

/proc/buddyinfo

buddyinfo describes how the currently free memory is distributed across block sizes:

# cat /proc/buddyinfo
Node 0, zone      DMA      4      4      3      3      3      3      2      0      1      1      2
Node 0, zone   Normal    140     90     34   5201   2816    556     29      0      0      0      0
Node 0, zone  HighMem      0   2542   1859    253    961   3568    560     19      1      0      0

This sample output contains three zones: DMA, Normal, and HighMem. (Not every platform splits memory this way; on Qualcomm platforms, for example, a single DMA zone covers the whole memory space. The x86_64 servers in this post have DMA, DMA32 and Normal zones, as seen in the kernel log above.)

The third column of the DMA row means there are 3 free blocks of size 2^2 × page_size;

the fourth column of the HighMem row means there are 253 free blocks of size 2^3 × page_size;

and so on. The further right the column, the larger and more contiguous the blocks it counts, and a higher count there means more large contiguous regions are free; when large contiguous regions become scarce, memory is heavily fragmented. Adding all the columns together gives the total amount of currently free memory. If the free lists hold no block of the requested order, the allocation may drop into the slow path and trigger memory compaction and reclaim.
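
As a quick sanity check, the per-order counts can be folded back into kilobytes with a small awk one-liner (assuming a 4 kB page size; the result roughly tracks MemFree in /proc/meminfo, minus the per-CPU page lists):

# awk '{for (i = 5; i <= NF; i++) sum += $i * 2^(i-5) * 4} END {print sum " kB free"}' /proc/buddyinfo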

/sys/kernel/debug/extfrag/extfrag_index

# cat /sys/kernel/debug/extfrag/extfrag_index
Node 0, zone      DMA -1.000 -1.000 -1.000 -1.000 -1.000 -1.000 -1.000 -1.000 -1.000 -1.000 -1.000
Node 0, zone    DMA32 -1.000 -1.000 -1.000 -1.000 -1.000 -1.000 -1.000 -1.000 -1.000 -1.000 -1.000
Node 0, zone   Normal -1.000 -1.000 -1.000 -1.000 -1.000 -1.000 0.980 0.990 0.995 0.998 0.999
Node 1, zone   Normal -1.000 -1.000 -1.000 -1.000 -1.000 -1.000 0.977 0.989 0.995 0.998 0.999

extfrag_index indicates whether an allocation failure is caused by a shortage of memory or by fragmentation. Directly tied to extfrag_index is a threshold called extfrag_threshold, which ranges from 0 to 1000. When memory is tight, the kernel compares the current fragmentation index against extfrag_threshold.

The fragmentation index takes the following values:

  1. -1000: the allocation would succeed;
  2. values near 0: the failure is mainly due to insufficient memory;
  3. values near 1000: the failure is mainly due to fragmentation; the closer the value is to 1000, the more fragmentation is to blame.

(The extfrag_index file above shows this index divided by 1000, so -1.000 corresponds to -1000 and 0.980 to 980.)

/proc/sys/vm/extfrag_threshold

If the fragmentation index is greater than extfrag_threshold, kswapd will trigger memory compaction.

So setting this value close to 1000 means that, when dealing with fragmentation, the system prefers to reclaim (evict old pages) to satisfy the allocation, while setting it close to 0 means the system prefers to run memory compaction.

extfrag_threshold defaults to 500. Here we lower it to 100, so that each time memory pressure is handled the kernel leans toward compacting and producing more high-order pages:

cat /proc/sys/vm/extfrag_threshold
100
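
All of the /proc/sys/vm parameters in this post can also be set at runtime through sysctl, for example (the change is lost on reboot unless it is also written to /etc/sysctl.conf, see the sketch at the end of this post):

# sysctl -w vm.extfrag_threshold=100
vm.extfrag_threshold = 100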

/proc/sys/vm/vfs_cache_pressure

This controls how aggressively the dentry and inode caches are reclaimed during memory reclaim. The default value is 100 and the maximum is 10000.

On these machines this cache is mostly generated by routine operations activity, so it can be reclaimed quite aggressively:

# cat /proc/sys/vm/vfs_cache_pressure
10000
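
To judge how much memory is actually sitting in these reclaimable caches, and whether raising vfs_cache_pressure is paying off, the reclaimable slab counters can be watched before and after the change (slabtop needs root; the dentry and *_inode_cache slabs are the ones this knob targets):

# grep -E 'SReclaimable|SUnreclaim' /proc/meminfo
# slabtop -o -s c | head -15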

/proc/sys/vm/watermark_scale_factor

If allocstall or kswapd_low_wmark_hit_quickly in /proc/vmstat is large (allocstall usually means allocations are entering the slow path, and kswapd_low_wmark_hit_quickly means kswapd is being woken up frequently), the system is short of free memory.
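
These counters can be read directly from /proc/vmstat; on newer kernels allocstall may be split into per-zone counters such as allocstall_normal, so matching on the prefix is safer:

# grep -E 'allocstall|kswapd_low_wmark_hit_quickly' /proc/vmstat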

This parameter controls how much work kswapd does once it has been woken: the larger the value, the more memory it tries to reclaim before going back to sleep. The configurable range is [0, 1000], where 1000 corresponds to 10% of memory. The default is 10; since we want each background reclaim pass to free as many pages as possible, we raise it to 500.

# cat /proc/sys/vm/watermark_scale_factor
500
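
The effect can be verified in /proc/zoneinfo: raising watermark_scale_factor widens the gap between each zone's min, low and high watermarks (the same watermarks that appear as min/low/high in the kernel log above), which is what makes kswapd start earlier and reclaim more before it goes back to sleep:

# grep -E '^Node|^ +(min|low|high) ' /proc/zoneinfo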

Conclusion

After the tuning above, free memory on these systems rose from about 1 GB to 10-20 GB, and the memory allocation failures went away.
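
For reference, a minimal sketch of making all three changes persistent across reboots (assuming a Debian-style /etc/sysctl.conf; a drop-in file under /etc/sysctl.d/ works the same way):

# cat >> /etc/sysctl.conf <<'EOF'
vm.extfrag_threshold = 100
vm.vfs_cache_pressure = 10000
vm.watermark_scale_factor = 500
EOF
# sysctl -p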

References