
Some artefacts (nodal forces) at interchange nodes with mpi run

Posted: 07 Jul 2020, 00:09
by mabor

I'm interested in the magnetic forces induced by current-carrying wires and coils. I made a 2D test case (MgDyn2D) with two small loops and played with the MPI parallelisation. For some reason, there are artefacts in the nodal forces field. These are clearly located at the interchange nodes between the partitioned surfaces/lines (see picture). The same case run by a single process yields an unremarkable solution.
Is this a known issue produced by my immature Elmer skills or a potential parallelisation bug?


PS: Parallelisation commands

Code:

ElmerGrid 14 2 twowires.msh -2d -out ./ -partition 1 5 1 0
mpirun -np 5 ElmerSolver_mpi 

Re: Some artefacts (nodal forces) at interchange nodes with mpi run

Posted: 07 Jul 2020, 20:42
by raback

I must say that I don't remember offhand how the nodal forces are treated in parallel. Basically we could distribute them such that at the interfaces they sum up to zero (or close to it). The shared nodes get contributions from both sides of the interface. I guess you get conflicting results?
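To illustrate the point about shared nodes, here is a small toy sketch (my own illustration in Python, not Elmer code): each partition integrates only its local share of a nodal force, so a node on the partition interface carries an incomplete value until the contributions from both sides are summed. The node numbering and force values below are made up.

```python
import numpy as np

# Serial reference: nodes 0..4, node 2 lies on the partition interface.
serial_force = np.array([0.0, 1.0, 2.0, 1.0, 0.0])

# Partition A owns nodes 0..2, partition B owns nodes 2..4; each partition
# sees only its own side's share of the interface node's force.
force_A = np.array([0.0, 1.0, 0.8])   # local contribution at node 2: 0.8
force_B = np.array([1.2, 1.0, 0.0])   # local contribution at node 2: 1.2

# Parallel reduction over shared nodes: sum contributions from both sides.
combined = np.zeros(5)
combined[0:3] += force_A
combined[2:5] += force_B

# The interface node now matches the serial value (0.8 + 1.2 = 2.0).
# Skipping this reduction, or applying it twice, leaves exactly the kind
# of interface artefact visible in the plot.
print(combined)
```

If the reduction is missing, each partition reports only its half at the shared node, which would explain results that differ from the serial run precisely along the partition boundaries.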


Re: Some artefacts (nodal forces) at interchange nodes with mpi run

Posted: 08 Jul 2020, 00:33
by mabor
Hi Peter,

besides the missing weighting factor of 2*Pi for cylindrically symmetric cases (see your post: viewtopic.php?f=3&t=6959&start=10#p22098), the resulting magnetic force components seem to differ slightly from the single-process result but are acceptable (good results are achieved if the distance between the loops is <= 1; for larger distances the mesh resolution must be better, I guess). I would say this is because there is effectively only one line with small errors within a wire.
The situation is different for the vacuum component: as the picture indicates, the resulting force in the y-direction is about two orders of magnitude larger than the single-process result (which is virtually zero). This is, I would say, a clearly unphysical result. (In addition, there is a resulting x-force on all components in both the parallel and the single-process case. I really do not understand this, but that is another issue.)
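The spurious net y-force on the vacuum region is consistent with interface nodes being counted once per partition instead of once overall. A minimal sketch (my own illustration with made-up numbers, not Elmer code): the nodal forces sum to zero over the region, but double-counting two shared nodes breaks the cancellation and produces a nonzero resultant.

```python
import numpy as np

# Nodal y-forces over a region whose true net force is zero (e.g. vacuum).
fy = np.array([0.5, -0.2, 0.3, -0.6, 0.4, -0.4])

# Hypothetical nodes shared by two partitions.
interface = [2, 4]

net_correct = fy.sum()                          # ~0 up to round-off
net_doubled = fy.sum() + fy[interface].sum()    # interface counted twice

print(net_correct, net_doubled)
```

The doubled sum picks up the full interface contribution a second time, so the error scales with the force magnitude along the partition boundary rather than with the discretisation error, which could explain a result that is orders of magnitude off.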

I compared the results for a larger test case with two thick coils. The results in the y-direction are different but comparable. Here, I guess, the missing weighting factor blows up the otherwise correct solution.

But here comes the real problem:
If the nodal forces are coupled to the linear elasticity solver via Displacement Loads, the solver may fail to converge when these sharing artefacts are left in. In the small test cases the solution converges, but it takes many more iterations; in a large case it does not converge at all.