
Conversation

lmoneta
Member

@lmoneta lmoneta commented Aug 22, 2025

This PR fixes an issue in merging the free chunks of memory used in the memory pool for the
allocation of the intermediate tensors.
It provides a significant (roughly 2x) reduction in the total memory used for the intermediate tensors.

@lmoneta lmoneta requested a review from sanjibansg August 22, 2025 15:02
@lmoneta lmoneta self-assigned this Aug 22, 2025
Contributor

@sanjibansg sanjibansg left a comment


Thank you for fixing this, some comments:


github-actions bot commented Aug 22, 2025

Test Results

    19 files      19 suites   3d 11h 32m 14s ⏱️
 3 561 tests  3 424 ✅  0 💤 137 ❌
65 973 runs  65 819 ✅ 17 💤 137 ❌

For more details on these failures, see this check.

Results for commit 5e1d1e2.

♻️ This comment has been updated with latest results.

@lmoneta lmoneta force-pushed the tmva_sofie_fix_memory_pool branch from 59f9ea2 to 096e3ad on August 25, 2025 08:36
When broadcasting from a scalar tensor the tensor size is 0 and
shape.front() is undefined. Add a check on the size before calling shape.front().
Add counters in RModel to monitor the allocations of Constant, Weight, intermediate and other types of tensors at code generation.
By adding the conv temporary tensors to the input lists, they will be flushed
afterwards and their memory can be reused by the next operator.
Contributor

@sanjibansg sanjibansg left a comment


LGTM! Thanks for fixing this!

Fix an issue in merging free chunks in the available_stack list of memory.
This makes it easier to reuse the memory efficiently.

In addition, order the output tensors by decreasing size.

Also add debug output of the chunks currently allocated and available during the process.
@lmoneta lmoneta force-pushed the tmva_sofie_fix_memory_pool branch from 096e3ad to 5e1d1e2 on August 25, 2025 09:13
@lmoneta lmoneta merged commit 70da9f5 into root-project:master Aug 25, 2025
20 of 26 checks passed
@lmoneta lmoneta deleted the tmva_sofie_fix_memory_pool branch August 25, 2025 14:58