Parallel scaling of DOLFIN on ARCHER

Chris N. Richardson, Garth N. Wells

DOI: 10.6084/m9.figshare.1304537.v1
https://figshare.com/articles/figure/Parallel_scaling_of_DOLFIN_on_ARCHER/1304537
Posted: 2015-02-09 15:52:07
Keywords: FEniCS, DOLFIN, ARCHER, Computational Physics, Computation Theory and Mathematics

The figure shows the weak scaling of a DOLFIN finite element Poisson solver on a unit cube mesh with linear tetrahedral elements, using DOLFIN v1.4.0+. Simulations were performed on ARCHER, the UK national supercomputer (http://www.archer.ac.uk).

The benchmark was run on 1 to 1024 nodes, each node comprising 24 cores and 64 GB of RAM. The time for several key phases of the simulation was recorded, as well as the total wall time. "Build Mesh" is the time spent constructing the distributed tetrahedral grid, "FunctionSpace" is the time spent building the degree-of-freedom map across processes, "Assemble" is the time spent computing and inserting entries into the global matrix, and "Solve" is the time spent in the linear solver, in this case the PETSc conjugate gradient method preconditioned with GAMG, PETSc's smoothed aggregation algebraic multigrid.

Each simulation was run with approximately 500k degrees of freedom per core; the final simulation, on 24576 cores, solved for 12,584,301,976 degrees of freedom.

Over three orders of magnitude in degree-of-freedom count, the total run time approximately doubles, whilst the timings for many individual parts of the code grow much more slowly. The modest increase in run time can be attributed in large part to the modest increase in the number of iterations required by the conjugate gradient solver.
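For reference, a solver of this kind can be written in a few lines of DOLFIN's Python interface. The sketch below is illustrative only and is not the benchmark code itself: the mesh dimensions, source term, and boundary condition are assumptions, and the preconditioner name "petsc_amg" (which selects PETSc's GAMG in legacy DOLFIN) may differ between DOLFIN versions.

```python
from dolfin import *

# Distributed unit cube mesh of linear tetrahedral cells; DOLFIN
# partitions it across MPI ranks automatically ("Build Mesh" phase).
# The 64x64x64 size is an illustrative choice, not the benchmark's.
mesh = UnitCubeMesh(64, 64, 64)

# Scalar P1 Lagrange space; constructing V builds the parallel
# degree-of-freedom map ("FunctionSpace" phase).
V = FunctionSpace(mesh, "Lagrange", 1)

# Variational problem for the Poisson equation -div(grad(u)) = f.
u = TrialFunction(V)
v = TestFunction(V)
f = Constant(1.0)  # assumed source term
a = inner(grad(u), grad(v))*dx
L = f*v*dx

# Homogeneous Dirichlet condition on the whole boundary (assumed).
bc = DirichletBC(V, Constant(0.0), "on_boundary")

# Build the global matrix and vector ("Assemble" phase).
A, b = assemble_system(a, L, bc)

# Conjugate gradient solver with PETSc's smoothed aggregation AMG
# preconditioner ("Solve" phase); "petsc_amg" maps to GAMG.
solver = KrylovSolver("cg", "petsc_amg")
solver.set_operator(A)

u_h = Function(V)
solver.solve(u_h.vector(), b)

# Print a summary of wall-clock timings for each phase.
list_timings()
```

Run under MPI, e.g. mpirun -n 24 python poisson.py: the same script distributes the mesh, degree-of-freedom map, and linear algebra across the ranks without code changes, which is what makes the per-phase weak-scaling measurement straightforward.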