Conquering NUMA land

Udo Steinberg udo at ...121...
Tue Mar 19 15:31:18 CET 2013



On Tue, 19 Mar 2013 14:36:17 +0100 Norman Feske (NF) wrote:

NF> One thing left me wondering, don't you see the different access
NF> latencies to local vs. remote memory in NUMA systems as a pressing
NF> problem that needs a solution by the OS? The consideration of memory
NF> locality was actually the driving motivation behind the vcore idea.

Definitely. But all cores on the same socket typically share the
last-level cache (LLC) and the memory controller, and therefore belong to
the same NUMA domain. For those cores, shared memory is much less painful
than when you go off-socket.

So for a multi-core VM, you would like to acquire physical cores that are
all on the same socket. If that doesn't work for whatever reason, then you
have to pay the price of going cross-socket (and likely into a different
NUMA domain). The system should discourage, but not prevent, that.

Applications probably want interfaces like:
* give me local memory for private use that is cheap to access
* give me memory that can be cheaply shared with cores X, Y, and Z
* give me globally shared memory

I don't think you would want to educate every application about NUMA, core
proximity and the like. Only a few memory managers and schedulers in the
system need to know about this stuff and can then make allocation and
placement decisions based on their knowledge and the application requests
they receive.

Cheers,
Udo

