Unverified Commit d28f8f70 authored by xmo-odoo's avatar xmo-odoo Committed by GitHub

[FIX] base: high memory usage in export

The overly attached cache's overhead turns out to be problematic. Clearing
the cache after each batch of records keeps the cache overhead low and does
not change performance.

Explanations:

env.cache is 3 levels of maps {field: {record_id: {env: value}}}, where
env can be either an Environment or a (cr, uid) pair, depending on
whether the field's value depends on the context.
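The shape described above can be sketched with plain dicts (this is an illustrative model, not Odoo's actual Cache class; `cache_set` and the `env_key` pair are hypothetical stand-ins):

```python
import sys

# Illustrative model of env.cache's three levels:
#   {field: {record_id: {env_key: value}}}
# where env_key is an Environment or a (cr, uid) pair depending on
# whether the field's value depends on the context.
cache = {}

def cache_set(field, record_id, env_key, value):
    cache.setdefault(field, {}).setdefault(record_id, {})[env_key] = value

# Simulate one context-independent field cached for a few records
# under a single (cr, uid) key.
env_key = ("cr_0", 1)  # hypothetical cursor/uid pair
for record_id in range(5):
    cache_set("name", record_id, env_key, "value-%d" % record_id)

# sys.getsizeof is shallow: it reports only the dict it is called on,
# not the per-record dicts it points to, so the real footprint is far
# larger than any single getsizeof figure.
print(sys.getsizeof(cache["name"]))
```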

This becomes an issue when the current request loads many fields on an
enormous number of records in a single environment. The (PaaS) case
here was the export of a res.partner field from 36358 records[0]:

* prefetch expanded the single field to 68, leading the base `cache`
  to have 68 entries. getsizeof(d<len=68>) == 3360 (3kB, which we will
  soon see we can ignore entirely).
* *each* of these entries would hold a map of 36358
  records. getsizeof(d<len=36358>) == 3146016 (3MB), 68 times = 213MB.
* finally each record entry is also a {(cr, uid): value}, here the
  dicts have a single entry which makes them 280B, and their key is a
  2-tuple "worth" 72B, or 352B/record/field, or 352 * 36358 * 68 ~
  870MB[1].

For a total of ~1GB, which is roughly the issue we can observe.
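The arithmetic above can be checked directly; the byte counts below are the getsizeof figures quoted in the bullets:

```python
FIELDS = 68       # fields after prefetch expansion
RECORDS = 36358   # res.partner records exported

top_level = 3360         # getsizeof of the 68-entry {field: ...} dict
per_field_map = 3146016  # getsizeof of one 36358-entry {record_id: ...} dict
per_leaf = 280 + 72      # single-entry {(cr, uid): value} dict + its 2-tuple key

middle = per_field_map * FIELDS       # the 68 per-field record maps (~213MB)
leaves = per_leaf * RECORDS * FIELDS  # one leaf dict per record per field (~870MB)

total = top_level + middle + leaves   # ~1GB altogether
print(middle, leaves, total)
```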

Future possibilities: extract the cache-clearing iterator to be more generic 
and available on BaseModel directly? Or even make the default iterator 
batched & cache-clearing?
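A batched, cache-clearing iterator could look like the following generic sketch; `iter_and_clear` and the `clear_cache` callback are hypothetical names standing in for whatever invalidation hook the model layer provides, and a real recordset would replace the plain list used here:

```python
def iter_and_clear(records, clear_cache, batch_size=1000):
    """Yield records one by one, dropping the cache between batches.

    `records` is any sliceable sequence (a recordset in Odoo);
    `clear_cache` is a callback standing in for the ORM's cache
    invalidation (hypothetical here). The cache never holds more
    than one batch's worth of values at a time.
    """
    for start in range(0, len(records), batch_size):
        for record in records[start:start + batch_size]:
            yield record
        clear_cache()

# Usage with plain Python stand-ins for the recordset and cache:
cache = {}
records = list(range(2500))
clears = []

def clear_cache():
    clears.append(len(cache))  # record how full the cache got
    cache.clear()

seen = []
for rec in iter_and_clear(records, clear_cache, batch_size=1000):
    cache[rec] = "value"  # simulate prefetching values into the cache
    seen.append(rec)
```

With 2500 records and batches of 1000, the cache is cleared three times and peaks at one batch of entries instead of growing to the full 2500.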

[0] note that sys.getsizeof only provides the size of the object it's
    called on, it is not recursive

[1] slightly more in actuality as there's some variation between the
    leaves depending on the field type e.g. M2O values are a 1-tuple
    adding 60B, ...

Fixes #22475
parent 668a090d