TAGGED: computer-memory, lumerical, lumopt
September 20, 2024 at 1:45 pm | Manuel.Rieger (Subscriber)
Hi Ansys,
here is the current memory consumption of my Lumerical script while running an optimisation using LumOpt (2D varFDTD). It keeps crashing after only a few iterations, and I'm fairly confident I have a memory leak, but I'm not sure how to fix it. I've managed to debug it down to an exact location in the package: lumopt -> geometries -> geometry.py, line 192:
    Geometry.get_eps_from_index_monitor(sim.fdtd, 'current_eps_data')

which calls the method get_eps_from_index_monitor (line 85), which runs the following through the lumapi eval method:

    opt_fields_index_data_set = getresult('opt_fields_index','index');
    current_eps_data = matrix(length(opt_fields_index_data_set.x), length(opt_fields_index_data_set.y), length(opt_fields_index_data_set.z), length(opt_fields_index_data_set.f), 3);
    current_eps_data(:, :, :, :, 1) = opt_fields_index_data_set.index_x^2;
    current_eps_data(:, :, :, :, 2) = opt_fields_index_data_set.index_y^2;
    current_eps_data(:, :, :, :, 3) = opt_fields_index_data_set.index_z^2;
    clear(opt_fields_index_data_set);

If I pause my script just before this part, and directly in the Lumerical console I enter:

    opt_fields_index_data_set = getresult('opt_fields_index','index');
my memory increases (by roughly 25 MB), which makes sense, as we now have a new variable storing data. However, if the very next thing I run is:
    clear(opt_fields_index_data_set);
my memory doesn't change. What's weird is that if I run the first command again (remaking the variable), my memory goes up again, and by roughly the same amount, which implies that the original variable was not actually freed from memory and the computer has just allocated a different address space for the "new" variable. Furthermore, if I repeatedly run the command "opt_fields_index_data_set = getresult('opt_fields_index','index');", the memory ticks up by roughly the same amount each time. This happens throughout the whole LumOpt iteration (once for each parameter); it then runs the forward and adjoint simulations, and in the next iteration the memory consumption continues to increase, but from that higher level, which leads to an eventual memory overload and causes the computer to crash.
For confirmation: after I run the clear command and try to call the variable again, Lumerical recognises that it has been deleted.
This effect happens regardless of whether or not I save the simulations during the iteration process.
Any ideas on how to actually clear the variable from memory?
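One possible workaround, if the leak really is in the Lumerical script workspace, is to avoid building current_eps_data there at all: pull the raw index dataset into Python through lumapi and square it with numpy, so the array memory is owned (and freed) by the Python process instead of the script workspace. This is only a sketch, not a tested fix: it assumes lumapi returns the index dataset as a Python dict of numpy arrays keyed by index_x/index_y/index_z, and the names get_eps_from_python and _FakeFDTD are illustrative, not part of lumopt.

```python
import numpy as np

def get_eps_from_python(fdtd, monitor_name='opt_fields_index'):
    """Fetch the index-monitor result into Python and compute eps = n^2
    with numpy, instead of creating current_eps_data inside the Lumerical
    script workspace (where clear() does not appear to release memory)."""
    dset = fdtd.getresult(monitor_name, 'index')  # lumapi session call
    # Stack the three squared index components along a trailing axis,
    # mirroring the (x, y, z, f, 3) layout that geometry.py builds.
    return np.stack([np.asarray(dset['index_x'])**2,
                     np.asarray(dset['index_y'])**2,
                     np.asarray(dset['index_z'])**2], axis=-1)

# Minimal stand-in for an fdtd session, just to demonstrate the call pattern:
class _FakeFDTD:
    def getresult(self, monitor, result):
        shape = (2, 2, 1, 1)  # (x, y, z, f)
        return {'index_x': np.full(shape, 2.0),
                'index_y': np.full(shape, 3.0),
                'index_z': np.full(shape, 1.5)}

eps = get_eps_from_python(_FakeFDTD())
print(eps.shape)  # (2, 2, 1, 1, 3)
```

Since the returned array lives on the Python side, dropping the last reference to it lets Python's garbage collector reclaim the memory, which sidesteps the script-workspace clear() behaviour entirely.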
P.S. I am using version 2024 R1 on Windows 11 (64-bit).
September 23, 2024 at 6:33 pm | Guilin Sun (Ansys Employee)
I am not sure this is a memory leak. From this article: https://optics.ansys.com/hc/en-us/articles/360050995394-Getting-Started-with-lumopt-Python-API
the default setting is to store all simulation files:

    store_all_simulations: bool
        Indicates if the project file for each iteration should be stored or not. Default = True

Please try setting it to False.
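For reference, a minimal sketch of where that flag is passed, assuming the lumopt Optimization constructor accepts store_all_simulations as described in the linked article; the other constructor arguments here are placeholders standing in for your own optimisation setup:

```python
from lumopt.optimization import Optimization

# base_script, wavelengths, fom, geometry and optimizer are whatever you
# already use in your optimisation setup; only the last flag changes.
opt = Optimization(base_script=base_script,
                   wavelengths=wavelengths,
                   fom=fom,
                   geometry=geometry,
                   optimizer=optimizer,
                   store_all_simulations=False)  # do not keep per-iteration project files
opt.run()
```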
In addition, since the mesh accuracy is fixed and the minimum resolution is predefined, the finer-mesh region may grow larger and larger as more material fills the design region. The mesh size depends on the material refractive index, so if the filling material occupies more and more space, the memory needed for the simulated data (mesh and field data) will also increase. This might be another reason.