For a number of reasons, we are considering moving interpinic into CLM.
Here is a proposed algorithm for parallelizing interpinic - in particular, the time-consuming piece that finds nearest neighbors:
For each point on the destination grid (all tasks work on the same destination point at the same time):
- The task responsible for this destination point broadcasts its info (lat/lon and type) to all other tasks
- All tasks loop through their own source points, finding the closest point of the correct type
- Use a parallel reduce to find the absolute closest point (across all tasks), and which task owns that point
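The loop-and-reduce structure above can be sketched in serial Python, with each task's partition of the source points represented as a list. In a real MPI implementation, the final `min` would be an MPI_MINLOC-style allreduce and the per-task search would run concurrently; all names and the flat lat/lon distance here are illustrative assumptions, not taken from interpinic.

```python
import math

def distance(a, b):
    # Simple Euclidean lat/lon distance for the sketch; a real
    # implementation would use great-circle distance.
    return math.hypot(a[0] - b[0], a[1] - b[1])

def find_local_closest(dest, source_points):
    """One task scans only its own source points, considering only
    points whose type matches the destination point's type."""
    best = (math.inf, None)  # (distance, local index)
    for i, (lat, lon, ptype) in enumerate(source_points):
        if ptype != dest[2]:
            continue
        d = distance((lat, lon), (dest[0], dest[1]))
        if d < best[0]:
            best = (d, i)
    return best

def global_closest(dest, partitions):
    """Mimics the parallel reduce: take each task's local minimum and
    pick the global minimum, returning (distance, owning task, local index)."""
    candidates = []
    for task, pts in enumerate(partitions):
        d, idx = find_local_closest(dest, pts)
        candidates.append((d, task, idx))
    return min(candidates)

# Two simulated tasks, each owning part of the source grid:
partitions = [
    [(10.0, 20.0, 'soil'), (11.0, 21.0, 'glacier')],  # task 0
    [(10.5, 20.1, 'soil')],                            # task 1
]
dest = (10.4, 20.0, 'soil')  # destination point: lat, lon, type
print(global_closest(dest, partitions))  # task 1's soil point wins
```

Because ties and type mismatches (a task with no point of the right type returns an infinite distance) fall out of the same `min`, the reduce needs no special cases.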
If we plan to use this for initializing new grid cells with dynamic landunits (which requires running the interpinic algorithm at runtime rather than only at initialization), note that we may need more generality than initialization alone requires. In particular, the set of variables that needs to be interpolated at runtime may be larger than the set needed at initialization, because some variables may be computed from other variables during initialization. For example, variables A and B may be on the restart file, and variable C may then be computed from A and B in initialization code; in that case, C would also need to be interpolated when running interpinic at runtime. However, it is not yet known whether this is actually the case for any variables.
There are a number of bug reports that should be addressed by this work - search bugzilla for 'interpinic'.