Photonics

Topics related to Lumerical and more.

Ansys Insight: Why is my simulation result different from a published paper or experiment?

    • Guilin Sun
      Ansys Employee

      This is a commonly asked question in my experience. In the early years, some users wanted to compare simulation results with those obtained from other simulation tools; in recent years, many users want to reproduce published results with their own Lumerical simulations. Comparison gives you more confidence, yet it has some challenges.

      Published papers usually show the best simulated results, selected from many simulations with different settings. New users, and even experienced users who used other tools before Lumerical, will need some time to become familiar with the software's unique features and settings before obtaining reliable simulation results.

      Here is a brief list of points you should check (a scripted sketch of several of these settings follows the list):

      1. the geometrical parameters
      2. the mesh order used to create the actual geometry. Pay close attention when geometries overlap.
      3. whether the mesh accuracy and/or an override mesh can properly resolve the geometry. In most cases your mesh is not exactly the same as in the published results, and some structures are more sensitive to mesh size than others. However, for your initial test you do not need to duplicate the exact result; see the comment later.
      4. the simulation region or volume: PML placed too close to the structure can cause problems, even if the simulation does not diverge.
      5. the material models: is it a simple analytical Drude or Lorentz model, or measured data? Does the measured data agree with our database (we use measured data from popular references)? Is the simulated material property the same as that of the real material in the device under test?
      6. the material fitting: measured data have errors, so check whether the fit is proper: RMS error, artificial peaks?
      7. that the source type and polarization are correct
      8. that the source pulse length (and offset) are proper
      9. that the source location is proper
      10. that the boundary conditions match the source type (and the structure symmetry)
      11. that the PML type, number of layers (and thickness), and locations are proper. In general, PML should be located in a uniform-mesh region
      12. the mesh refinement: in general we recommend Conformal Variant 0; more information is here: https://support.lumerical.com/hc/en-us/articles/360034382614
      13. that the simulation time is long enough and the autoshutoff min is small enough to make sure the electromagnetic energy decays sufficiently. This is particularly important for transmission peak values.
      14. whether a grating-like structure has higher orders of diffraction: note that the transmission monitor gives the total transmission.
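
      A minimal scripted sketch of several of these checks, using Lumerical's Python API (lumapi). The project file name, the solver region name "FDTD", and the source name "source" are assumptions for illustration; the property names follow a typical FDTD solver region, so verify them in your version:

      # A minimal sketch, assuming lumapi is installed and importable, and that the
      # project contains a solver region named "FDTD" and a source named "source"
      # (file and object names are hypothetical).
      import lumapi

      with lumapi.FDTD("my_device.fsp") as fdtd:
          # items 3 and 12: mesh refinement method and global mesh accuracy
          fdtd.setnamed("FDTD", "mesh refinement", "conformal variant 0")
          fdtd.setnamed("FDTD", "mesh accuracy", 3)           # start coarse, refine later

          # item 11: PML layers (assumes "same settings on all boundaries" is enabled)
          fdtd.setnamed("FDTD", "pml layers", 24)

          # item 13: run long enough and let the fields decay sufficiently
          fdtd.setnamed("FDTD", "simulation time", 2000e-15)  # seconds
          fdtd.setnamed("FDTD", "auto shutoff min", 1e-6)

          # item 7: source polarization
          fdtd.setnamed("source", "polarization angle", 0)    # degrees

          fdtd.run()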



      In my opinion, if you can roughly get a result similar to the publication's, then the basic settings are almost correct. To get close agreement, more work retuning the settings is needed. Furthermore, convergence testing may be needed, which is complicated (a rough sketch follows): https://support.lumerical.com/hc/en-us/articles/360034915833-Convergence-testing-process-for-FDTD-simulations
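
      A rough sketch of the idea, again with lumapi (the project file, the solver region name "FDTD", and the transmission monitor name "T" are hypothetical): refine the mesh until the quantity you care about stops changing within a tolerance.

      import numpy as np
      import lumapi

      with lumapi.FDTD("my_device.fsp") as fdtd:
          previous = None
          for accuracy in range(2, 7):
              fdtd.switchtolayout()                      # back to layout before editing
              fdtd.setnamed("FDTD", "mesh accuracy", accuracy)
              fdtd.run()
              T = np.squeeze(fdtd.transmission("T"))     # net transmission vs frequency
              if previous is not None:
                  change = np.max(np.abs(T - previous))
                  print(f"mesh accuracy {accuracy}: max change {change:.3e}")
                  if change < 1e-3:                      # tolerance is problem dependent
                      break
              previous = T

      The tolerance, and the parameter being swept (mesh accuracy here, but PML layers, simulation span, etc. can be treated the same way), depend on your structure and on the quantity you report.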

      In addition, not all papers give all the necessary data, including device parameters, material models, and simulation settings, which makes duplication more challenging. And sometimes the data given in the paper are not exactly the data used in the simulation, for a number of reasons.

      Some papers used other algorithms, and their data are well suited to those algorithms, but maybe not to Lumerical tools.

      In other cases, the papers give only a vague description of their devices. You will need to use your common sense to figure out the correct parameters.

      The worst scenario is a publication whose simulated results have defects, without justification from experiment, theory, or benchmark. It may rely on assumptions that are not written down. Just because a result is published in a journal does not necessarily mean it is "correct" (compared with what?), so we should be cautious.

    • Guilin Sun
      Ansys Employee

      One of my suggestions for users duplicating published results is to save time: as long as you can get a similar result, do not pursue exact duplication. It is better to focus on your own design. This is because different software, algorithms, users, and lab experiments may treat things differently. There is a European group dedicated to evaluating software for microwave and antenna simulation; every two years they publish an article reporting the compared results. Please refer to these papers:

      Benchmarking of Optimally Used Commercial Software Tools for Challenging Antenna Topologies: The 2012-2013 Run, IEEE Antennas and Propagation Magazine, Vol. 55, No. 3, June 2013

      State-of-the-Art in Antenna Software Benchmarking: "Are We There Yet?", IEEE Antennas and Propagation Magazine, Vol. 56, No. 4, August 2014


      Bridging the Simulation-Measurement Gap, IEEE Antennas and Propagation Magazine, Vol. 58, No. 6, August 2016, pp. 12-14

      This article compares simulation and experiment: "Application-Tailored Specialty Optical Fibers," IEEE Photonics Society Newsletter, June 2017. As you can see, the manufactured structure has errors, whereas in simulation we use a perfect geometry.

      There are more challenges when comparing simulation with experiment: not only are there manufacturing errors (size, geometry, corners, etc.), but the measurement itself also has its own error, in addition to the fact that the actual material does not behave exactly the same across different process techniques and measurement conditions. In classical optics, the refractive index of each batch of manufactured optical glass must be measured to make sure the whole process produces glass within a predefined tolerance of refractive index. In the FDTD material database, you will find that several materials, such as gold and silver, have different refractive indices obtained from different labs (a small sketch below compares two of the gold entries).
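
      As a small illustration, the sketch below compares the fitted index of two gold entries from the material database (the entry names are as they typically appear; check your own database, and note that getfdtdindex returns the index of the material fit over the specified band):

      import numpy as np
      import lumapi

      c = 299792458.0
      wl = np.linspace(400e-9, 1000e-9, 200)   # visible/NIR band
      f = c / wl

      with lumapi.FDTD() as fdtd:
          # two measured-data entries for gold from different labs
          n_jc = np.squeeze(fdtd.getfdtdindex("Au (Gold) - Johnson and Christy",
                                              f, f.min(), f.max()))
          n_crc = np.squeeze(fdtd.getfdtdindex("Au (Gold) - CRC",
                                               f, f.min(), f.max()))
          # the two datasets differ noticeably; pick the one measured under
          # conditions closest to your fabricated film
          print("max |n_JC - n_CRC| =", np.max(np.abs(n_jc - n_crc)))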

      In addition, the excitation of the source might be different. Please refer to this post: /forum/discussion/comment/156414#Comment_156414

      So when comparing with experimental results, material, manufacturing error, excitation, and measurement can all introduce differences.

      At the nanoscale, the material properties may not behave as they do in bulk. This paper shows this:

      Size-dependent permittivity and intrinsic optical anisotropy of nanometric gold thin films: a density functional theory study

      and there are other models that deal with size- and shape-dependent material properties (a toy classical sketch follows these references):

      Bridging quantum and classical plasmonics with a quantum-corrected model

      Ruben Esteban, Andrei G. Borisov, Peter Nordlander & Javier Aizpurua 

      Nature Communications, volume 3, Article number: 825 (2012)

      Surface Plasmons and Nonlocality: A Simple Model

      Yu Luo, A. I. Fernandez-Dominguez, Aeneas Wiener, Stefan A. Maier, and J. B. Pendry
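
      To illustrate why nanoscale material response deviates from bulk, here is a toy sketch of the standard classical surface-scattering correction to the Drude model (this is a textbook correction, not the method of the papers above; the gold parameter values are typical literature numbers and are assumptions for illustration):

      import numpy as np

      hbar = 1.054571817e-34           # J*s
      eV = 1.602176634e-19             # J

      omega_p = 9.0 * eV / hbar        # gold plasma frequency, roughly 9 eV
      gamma_bulk = 0.07 * eV / hbar    # bulk damping, roughly 70 meV
      v_f = 1.4e6                      # Fermi velocity of gold, m/s
      A = 1.0                          # geometry-dependent constant, order unity

      def drude_eps(omega, radius):
          """Size-corrected Drude permittivity: surface scattering adds A*v_f/R."""
          gamma = gamma_bulk + A * v_f / radius
          return 1.0 - omega_p**2 / (omega * (omega + 1j * gamma))

      omega = 2.0 * eV / hbar          # probe at about 2 eV (about 620 nm)
      for r in (2e-9, 5e-9, 20e-9):
          print(f"R = {r * 1e9:4.0f} nm: eps = {drude_eps(omega, r):.3f}")

      The smaller the particle, the larger the damping, so a bulk material fit can overestimate the quality of nanoscale resonances.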


      In summary, many factors affect the simulated results. Duplicating others' results requires more careful and proper settings, which takes time, effort, and skill/knowledge.
