Bug in SaveScalars in parallel?

Clearly defined bug reports and their fixes
gagliar
Posts: 80
Joined: 04 Sep 2009, 16:34
Location: LGGE - Grenoble

Bug in SaveScalars in parallel?

Post by gagliar »

Hi all,

I have an issue with the new version of SaveScalars (maybe not so new, but present since OnBed and OnSurface were introduced). It seems that "Boundary Sum" no longer works correctly in parallel.
I have made a simple test case of a cube with an imposed "External Pressure" on the top and no flux on the five other boundaries. I then compute the sum of the residual of the Stokes vector. The sum of the third (Z) component of the residual should equal the integral of the imposed external pressure over the top surface (200x10^3 * 900 / 1e6 = 180 MN). This works nicely in serial, but not in parallel. I suspect that nodes on the partition boundaries are summed more than once.
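The expected value follows from a quick arithmetic check (a sketch; the pressure and top-surface area below are simply the figures quoted above, with the result reported in MN):

```python
# Sanity check of the expected Z-force: imposed external pressure
# integrated over the flat top surface, converted to MN.
# The values 200e3 and 900 are taken from the numbers quoted in the post.
pressure = 200.0e3          # imposed "External Pressure"
top_area = 900.0            # area of the cube's top surface
force_mn = pressure * top_area / 1.0e6
print(force_mn)             # 180.0, matching the serial .dat result
```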

I have attached a small test case (mesh: mesh3d11.grd, sif: sinus3dnS_19_11_08_00_.sif). Results in the .dat file are
1.000000000000E-003 1.800000000000E+002 -1.800000000000E+002
in serial and
1.000000000000E-003 1.992000000000E+002 -1.893000000000E+002
in parallel (ElmerGrid 1 2 mesh3d11.grd -partition 2 2 1 -periodic 1 1 0).
And the result is different again if the mesh is partitioned differently (ElmerGrid 1 2 mesh3d11.grd -partition 4 1 1 -periodic 1 1 0):
1.000000000000E-003 1.623000000000E+002 -1.872000000000E+002
(then with a different number of shared nodes)
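The suspected double counting can be illustrated with a toy model (plain Python, not Elmer code; the node ids, residual values, partitioning and owner ranks below are invented for illustration):

```python
# Toy illustration of double counting in a parallel "Boundary Sum".
# residual holds one value per global node; shared nodes appear in the
# node lists of several partitions.
residual = {0: 10.0, 1: 20.0, 2: 30.0, 3: 40.0}
partitions = [[0, 1, 2], [2, 3]]           # node 2 is shared by both ranks
owner = {0: 0, 1: 0, 2: 0, 3: 1}           # each node owned by exactly one rank

# Naive parallel sum: every rank adds all of its nodes.
naive = sum(residual[n] for part in partitions for n in part)

# Owner-filtered sum: each node is counted once, by its owning rank only.
owned = sum(residual[n] for rank, part in enumerate(partitions)
            for n in part if owner[n] == rank)

print(naive)   # 130.0 -- node 2 counted twice
print(owned)   # 100.0 -- equal to the serial sum
```

The owner filter is exactly what the per-PE ownership test in SaveScalars is meant to provide.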

Is this a feature or a bug? If a feature, how do I specify that each node should be counted only once in the sum?

Thanks
Olivier

PS: I also don't understand why in the .names file the boundary is referenced as OnBed and OnSurface for "Boundary Sum", whereas it is the boundary number (as in the old version) that is used for "Boundary Int".
fgillet
Posts: 47
Joined: 30 Sep 2010, 16:58

Re: Bug in SaveScalars in parallel?

Post by fgillet »

Hello,

Yes, as of today (rev 71005ec) your test returns wrong results in parallel.

The issue is related to changes introduced in rev e396adc on Feb 21st, 2023.

The test in the BoundaryStatics routine in SaveScalars.F90 that skips nodes not owned by the current PE is:
IF( Var % Solver % Matrix % ParallelInfo % NeighbourList(NoDofs*(j2-1)+1) % Neighbours(1) /= ParEnv % MyPE ) CYCLE

In your case, the statistics are computed for a vector component with NoDofs=1; however, the permutation for the Solver % Matrix % ParallelInfo is associated with the Stokes solver, which has NoDofs=4.

A fix might be to revert to the old test using the mesh ParallelInfo structure:
IF( Mesh % ParallelInfo % NeighbourList(j) % Neighbours(1) /= ParEnv % MyPE ) CYCLE

or to access the solver variable to get the right permutation associated with the Matrix % ParallelInfo structure.
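The mismatch can be mimicked with a toy model (plain Python, not the actual Fortran; the names and ownership layout are invented): when the saved component has NoDofs=1 but the matrix ParallelInfo is laid out for the 4-DOF Stokes solver, the index NoDofs*(j2-1)+1 points at the wrong row unless the matrix's own DOF count is used.

```python
# Toy model of the ownership test in BoundaryStatics (SaveScalars.F90).
# The matrix ParallelInfo rows are laid out per node with matrix_dofs
# entries each (Stokes: u, v, w, p). Ownership of a node's rows must be
# looked up with the matrix's DOF count, not the saved component's.
matrix_dofs = 4
n_nodes = 3
# Invented owner ranks: one owner per node, repeated for each matrix row.
row_owner = [node % 2 for node in range(n_nodes) for _ in range(matrix_dofs)]

def owner_buggy(j2, no_dofs=1):
    """Mimics the buggy test: uses the component's NoDofs (= 1)."""
    return row_owner[no_dofs * (j2 - 1)]      # 0-based form of NoDofs*(j2-1)+1

def owner_fixed(j2):
    """Indexes with the matrix's own DOF count."""
    return row_owner[matrix_dofs * (j2 - 1)]

# For node j2 = 2 the two lookups disagree, so the CYCLE test skips
# (or keeps) the wrong nodes and the parallel sum goes wrong.
print(owner_buggy(2), owner_fixed(2))
```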

Fabien
raback
Site Admin
Posts: 4843
Joined: 22 Aug 2009, 11:57
Location: Espoo, Finland

Re: Bug in SaveScalars in parallel?

Post by raback »

Just to note: this was fixed by Fabien.

The reason for the original bug was that the application of this routine was extended beyond nodal fields. If we have other types of fields, we cannot simply use the parallel info related to the geometric nodes.

-Peter