Exactly how much physical memory is installed?

By Abhijit Menon-Sen

I've often wanted to calculate the exact amount of physical memory installed in a system running Linux, and I finally stumbled across a satisfying solution today: look under /sys/devices/system/memory.

I know I have exactly 16GB (or GiB, if you prefer) of memory installed in this system. I've always found it frustrating that commands like free -g report only 15GB as the “Total installed memory”.

I understand why this is the case — some of the total memory is reserved by the system, some is occupied by the kernel, and only the remainder is available to applications. That's the number reported as the “MemTotal” in /proc/meminfo, and that's the number everything else uses. For most practical purposes, this is the number that matters.

$ grep MemTotal: /proc/meminfo                          
MemTotal:       16283212 kB
$ echo $((
    $(getconf PAGE_SIZE)
    * $(getconf _PHYS_PAGES)
    / 1024 ))
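The same arithmetic is available from Python's os.sysconf, for scripts that don't want to shell out to getconf (a sketch; the sysconf names are POSIX, so this assumes a Linux-like system):

```python
import os

# The getconf arithmetic above, via os.sysconf: physical pages
# times the page size, expressed in kB (MemTotal's units).
page_size = os.sysconf("SC_PAGE_SIZE")    # bytes per page, typically 4096
phys_pages = os.sysconf("SC_PHYS_PAGES")  # physical pages known to the kernel
mem_total_kb = page_size * phys_pages // 1024
print(mem_total_kb)
```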

I knew of one other way to get an answer: dmidecode will report the exact size of each “memory device”. For example, my system has 2×8GB DIMMs installed:

$ sudo dmidecode --type memory | \
  grep Size:
        Size: 8192 MB
        Size: 8192 MB

Unfortunately, you must have root access in order to run dmidecode, the output is quite verbose (try running just "dmidecode"), and it works only on systems with (correct!) DMI tables.

It's not really a satisfying solution. But look at what I stumbled across today:

$ cd /sys/devices/system &&
  echo $((
    $(grep -x online memory/memory[0-9]*/state|wc -l)
    * 0x$(cat memory/block_size_bytes)
    / 1024**3 ))

Perfect! Each memoryN subdirectory under /sys/devices/system/memory represents a block_size_bytes chunk of physical memory, and if you add up everything, you get the correct total. On my system, the block size is 0x8000000 bytes, i.e., 128MB, and there are 128 subdirectories, to give a total of 16GB. (Or run lsmem; see updates below.)

On systems with hot-swappable RAM, memoryN/state might be “offline”, so I grep for “online” to make sure a block is not offline before counting it towards the total. (It seems one can "echo offline" into the state file to change a block's state, but I didn't dare to try it; in any case, the file is not writable on my machine.)
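The whole walk can be written as a small Python function. This is just a sketch of the sysfs layout described above (a hex value in block_size_bytes, one memoryN directory per block, a state file in each):

```python
import os

def sysfs_total_bytes(base="/sys/devices/system/memory"):
    """Sum the sizes of all online memory blocks under `base`.

    Each memoryN directory stands for one block_size_bytes chunk of
    physical memory; blocks whose state is not "online" are skipped.
    """
    with open(os.path.join(base, "block_size_bytes")) as f:
        block_size = int(f.read().strip(), 16)  # the file is hex, without 0x
    online = 0
    for entry in os.listdir(base):
        state_file = os.path.join(base, entry, "state")
        if entry.startswith("memory") and os.path.isfile(state_file):
            with open(state_file) as f:
                if f.read().strip() == "online":
                    online += 1
    return online * block_size
```

On the system described above, this would count 128 online blocks of 0x8000000 bytes each and return exactly 16GiB.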

Interestingly, the subdirectories are not memory0 to memory127, but memory0 to memory20 and memory32 to memory138. There must be some explanation for the 11-block hole in the numbers, but I don't know what it is (see updates below).

The dmesg mystery

If your last reboot was not too long ago, you might find a message like this in your dmesg:

Memory: 16265512K/16659852K available (10252K kernel code,
1222K rwdata, 3252K rodata, 1552K init, 656K bss, 394340K
reserved, 0K cma-reserved)

This doesn't tell you exactly how much physical memory there is, but the numbers are interesting. The 16265512K corresponds to MemTotal, and if you add the 394340K of reserved memory, you get 16659852K. Neat, but still 117364K (roughly 115MB) short of the expected 16777216K.
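The arithmetic, spelled out with the numbers from the dmesg line above:

```python
# Check the dmesg numbers against each other.
available = 16265512           # K, the first number ("available")
reserved = 394340              # K, from the parenthesised breakdown
physical = 16659852            # K, the second number
print(available + reserved == physical)  # True
expected = 16 * 1024 * 1024    # 16777216 K for 16GiB of installed RAM
print(expected - physical)     # 117364 K, roughly 115MB unaccounted for
```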

That was from some old dmesg output that I happened to have saved. While writing this article, however, I got curious about what my system would say and rebooted to check. To my utter surprise, I couldn't make any sense of the result:

$ uname -r
$ sudo dmesg -t|grep Memory:
Memory: 2627884K/16659852K available (12291K kernel
code, 1292K rwdata, 4060K rodata, 1632K init, 1864K bss,
442808K reserved, 0K cma-reserved)

The 16659852K is the same, and the other numbers in parentheses look plausible (for a different kernel), but 2627884K is only 2.5GB!

Despite this strange result, the values in /proc/meminfo look fine:

MemTotal:       16283212 kB
MemFree:         8985772 kB
MemAvailable:   11920424 kB
DirectMap4k:      306976 kB
DirectMap2M:     9013248 kB
DirectMap1G:     8388608 kB

The message in question comes from the mem_init_print_info() function in mm/page_alloc.c (edited for clarity):

pages_to_kb = PAGE_SHIFT - 10; /* == 2 for 4KB x86 pages */
reserved_pages = get_num_physpages() - totalram_pages() …;
pr_info("Memory: %luK/%luK available (%luK kernel code, …, "
        "%luK reserved, …)\n",
        nr_free_pages() << pages_to_kb,
        get_num_physpages() << pages_to_kb, codesize >> 10, …
        reserved_pages << pages_to_kb, …);
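In Python terms, the shift converts a page count into kB: with 4KB pages, PAGE_SHIFT is 12, so the shift amount is 2 (each page is four 1K units). The page count below is hypothetical, back-derived from the 2627884K figure:

```python
# The "pages << (PAGE_SHIFT - 10)" conversion from
# mem_init_print_info(), for 4KB x86 pages.
PAGE_SHIFT = 12
pages_to_kb = PAGE_SHIFT - 10        # == 2, i.e., multiply pages by 4
nr_free_pages = 2627884 // 4         # hypothetical, from the dmesg line
print(nr_free_pages << pages_to_kb)  # 2627884
```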

I don't know why this kernel suddenly thinks I have a drastically smaller nr_free_pages() than before, but I look forward to finding out.

(Do you know the answer? Please tell ams@toroid.org or @amenonsen)


The discussion on HN about this article threw up some interesting tidbits, summarised below. Thanks to everyone who contributed.


The lsmem command from util-linux summarises information obtained from /sys/devices/system/memory.

$ lsmem 
RANGE                                  SIZE  STATE REMOVABLE  BLOCK
0x0000000000000000-0x00000000a7ffffff  2.6G online       yes   0-20
0x0000000100000000-0x0000000457ffffff 13.4G online       yes 32-138

Memory block size:       128M
Total online memory:      16G
Total offline memory:      0B

You can run "lsmem -a --output-all" to see information about each block.
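A quick sanity check of the lsmem output, as a sketch in Python: each RANGE is inclusive, so dividing its length by the 128MB block size should reproduce the block counts in the BLOCK column:

```python
BLOCK_SIZE = 0x8000000  # 128M, the "Memory block size" lsmem reports

def blocks_in(start, end):
    """Number of memory blocks spanned by an inclusive address range."""
    return (end + 1 - start) // BLOCK_SIZE

low = blocks_in(0x0000000000000000, 0x00000000a7ffffff)   # blocks 0-20
high = blocks_in(0x0000000100000000, 0x0000000457ffffff)  # blocks 32-138
print(low, high, (low + high) * BLOCK_SIZE // 1024**3)    # 21 107 16
```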


Unfortunately, neither dmidecode nor lsmem works on a Raspberry Pi (armv7l). There's no /sys/devices/system/memory, and no DMI tables either. No doubt there are many other systems where these methods don't work (e.g., dmidecode doesn't work on Chromebooks).

MemTotal still works fine, though.

Memory discovery

How does the kernel find out about the memory available on a system? The traditional way is to use the BIOS E820 function, which reports a list of memory regions and their types. You can see this map early on in the dmesg output:

BIOS-provided physical RAM map:
[mem 0x0000000000000000-0x000000000009c7ff] usable
[mem 0x000000000009c800-0x000000000009ffff] reserved
[mem 0x00000000000e0000-0x00000000000fffff] reserved
[mem 0x0000000000100000-0x000000009d95afff] usable
[mem 0x000000009d95b000-0x000000009de23fff] reserved
[mem 0x000000009de24000-0x00000000a228ffff] usable
[mem 0x00000000a2290000-0x00000000a234bfff] reserved
[mem 0x00000000a234c000-0x00000000a2371fff] ACPI data
[mem 0x00000000a2372000-0x00000000a2ca1fff] ACPI NVS
[mem 0x00000000a2ca2000-0x00000000a2ffefff] reserved
[mem 0x00000000a2fff000-0x00000000a2ffffff] usable
[mem 0x00000000a3800000-0x00000000a7ffffff] reserved
[mem 0x00000000f8000000-0x00000000fbffffff] reserved
[mem 0x00000000fec00000-0x00000000fec00fff] reserved
[mem 0x00000000fed00000-0x00000000fed03fff] reserved
[mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
[mem 0x00000000fee00000-0x00000000fee00fff] reserved
[mem 0x00000000ff000000-0x00000000ffffffff] reserved
[mem 0x0000000100000000-0x0000000456ffffff] usable

The first two entries above, for example, say that the first ~640K bytes are normal "usable" RAM, whereas the next 14KB are "reserved". The last entry starts at 4GB and extends until the end of available memory, all of which is usable. We can add up the sizes of all the usable ranges, and get a number very close to that reported later by dmesg:

$ sudo dmesg | perl -nE \
  '/e820: \[mem (0x[^-]*)-(0x[^\]]*). usable$/
  && {$total += hex($2)-hex($1)+1};
  END{say $total/1024}'
$ sudo dmesg | grep Memory:
Memory: 2627884K/16659852K available …
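For comparison, here's the same sum as a Python sketch over the map shown earlier (the ranges are inclusive, so each contributes end - start + 1 bytes):

```python
# The "usable" ranges from the BIOS-provided physical RAM map above.
usable = [
    (0x0000000000000000, 0x000000000009c7ff),
    (0x0000000000100000, 0x000000009d95afff),
    (0x000000009de24000, 0x00000000a228ffff),
    (0x00000000a2fff000, 0x00000000a2ffffff),
    (0x0000000100000000, 0x0000000456ffffff),
]
total_kb = sum(end - start + 1 for start, end in usable) // 1024
print(total_kb)  # 16659858, within a few kB of the 16659852K above
```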

This difference is likely because we didn't account for the kernel updating its memory map as the boot progresses:

e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
e820: remove [mem 0x000a0000-0x000fffff] usable
e820: update [mem 0xa3000000-0xffffffff] usable ==> reserved

Bootloaders like GRUB can also compute and pass in a memory map to the kernel they're loading, and UEFI provides yet another way to obtain the map. Here's a detailed description of the various methods to detect memory on x86 systems, and the kernel's code to process the memory map. The kernel typically just uses the memory map it's given — in fact, GRUB has a "badram" option that can be used to remove ranges of RAM from the map that the kernel sees.

Here's another view of the memory on the system (reported later in the dmesg), after the kernel has divided up the available memory into zones:

Faking a node at [mem 0x0000000000000000-0x0000000456ffffff]
NODE_DATA(0) allocated [mem 0x456ff9000-0x456ffdfff]
Zone ranges:
  DMA      [mem 0x0000000000001000-0x0000000000ffffff]
  DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
  Normal   [mem 0x0000000100000000-0x0000000456ffffff]
  Device   empty
Movable zone start for each node
Early memory node ranges
  node   0: [mem 0x0000000000001000-0x000000000009bfff]
  node   0: [mem 0x0000000000100000-0x000000009d95afff]
  node   0: [mem 0x000000009de24000-0x00000000a228ffff]
  node   0: [mem 0x00000000a2fff000-0x00000000a2ffffff]
  node   0: [mem 0x0000000100000000-0x0000000456ffffff]
Zeroed struct page in unavailable ranges: 29341 pages
Initmem setup node 0 [mem 0x0000000000001000-0x0000000456ffffff]
On node 0 totalpages: 4164963
  DMA zone: 64 pages used for memmap
  DMA zone: 21 pages reserved
  DMA zone: 3995 pages, LIFO batch:0
  DMA32 zone: 10296 pages used for memmap
  DMA32 zone: 658888 pages, LIFO batch:63
  Normal zone: 54720 pages used for memmap
  Normal zone: 3502080 pages, LIFO batch:63
Reserving Intel graphics memory at [mem 0xa4000000-0xa7ffffff]

The DMA zone extends from 4KB to 16MB, the DMA32 zone from 16MB to 4GB, and the Normal zone from 4GB to the end of RAM, corresponding to the last "all usable" entry in the BIOS memory map. The total number of 4KB pages does correspond to the reported total of 16659852K available memory.
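The correspondence can be checked against the "Early memory node ranges" above: their 4KB page counts sum to exactly the reported totalpages.

```python
# The "Early memory node ranges" from the dmesg output above;
# each inclusive byte range is a whole number of 4KB pages.
node_ranges = [
    (0x0000000000001000, 0x000000000009bfff),
    (0x0000000000100000, 0x000000009d95afff),
    (0x000000009de24000, 0x00000000a228ffff),
    (0x00000000a2fff000, 0x00000000a2ffffff),
    (0x0000000100000000, 0x0000000456ffffff),
]
pages = sum((end + 1 - start) // 4096 for start, end in node_ranges)
print(pages, pages * 4)  # 4164963 16659852
```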

Reserved memory holes

Within these zones, certain ranges are reserved for memory-mapped I/O, including the PCI memory hole in the address space below 4GB (which begins at 0xA8000000, i.e., 2688MB, on this system). On a 32-bit system with 4GB of physical RAM installed, the kernel would have to use a hack like PAE to compensate for the limited address space and access the memory shadowed by that range.

On a 64-bit system, the kernel can just remap the physical memory to a different part of the address space. This is why my system is missing the memory21…memory31 directories (corresponding to memory between 2688MB and 4096MB in 128MB chunks) and instead has that portion of memory mapped under memory128…memory138.
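Mapping between a physical address and its memoryN block number is just integer division by the block size (a sketch, using the 0x8000000 block size from earlier):

```python
BLOCK = 0x8000000  # 128MB block size from block_size_bytes

def block_of(addr):
    """memoryN block number covering a physical address."""
    return addr // BLOCK

print(block_of(0xa8000000))   # 21, the first block of the sub-4GB hole
print(block_of(0x100000000))  # 32, the first block above 4GB
print(block_of(0x457ffffff))  # 138, the last block on this system
```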

You can see fine-grained information about memory reservations as root in /proc/iomem. Note that many of these ranges correspond to entries in the E820 memory map (e.g., look at the ACPI data ranges), while others are smaller slices of those entries.

$ sudo grep '^[^ ]' /proc/iomem
00000000-00000fff : Reserved
00001000-0009c7ff : System RAM
0009c800-0009ffff : Reserved
000a0000-000bffff : PCI Bus 0000:00
000c0000-000cfdff : Video ROM
000d0000-000d0fff : Adapter ROM
000e0000-000fffff : Reserved
00100000-9d95afff : System RAM
9d95b000-9de23fff : Reserved
9de24000-a228ffff : System RAM
a2290000-a234bfff : Reserved
a234c000-a2371fff : ACPI Tables
a2372000-a2ca1fff : ACPI Non-volatile Storage
a2ca2000-a2ffefff : Reserved
a2fff000-a2ffffff : System RAM
a3000000-a37fffff : RAM buffer
a3800000-a7ffffff : Reserved
a8000000-dfffffff : PCI Bus 0000:00
f000e2c3-f001e2c2 : pnp 00:04
f8000000-fbffffff : PCI MMCONFIG 0000 [bus 00-3f]
fe000000-fe113fff : PCI Bus 0000:00
fec00000-fec00fff : Reserved
fed00000-fed03fff : Reserved
fed10000-fed17fff : pnp 00:04
fed18000-fed18fff : pnp 00:04
fed19000-fed19fff : pnp 00:04
fed1c000-fed1ffff : Reserved
fed20000-fed3ffff : pnp 00:04
fed45000-fed8ffff : pnp 00:04
fed90000-fed90fff : dmar0
fed91000-fed91fff : dmar1
fee00000-fee00fff : Local APIC
ff000000-ffffffff : Reserved
100000000-456ffffff : System RAM
457000000-457ffffff : RAM buffer

The region between 0xA8000000 (2688MB) and 0xFFFFFFFF (4096MB) has no "System RAM" area, and that's why there are no corresponding memoryN directories. Here's the relevant kernel code.


Finally, a reminder: my question was motivated by curiosity and the desire to wrest a nice power-of-2 number from the kernel somehow. If you want to find out how much memory a system has in order to decide how much to allocate for something (e.g., shared_buffers in Postgres), just use MemTotal from /proc/meminfo.