Executing MPI Simulations

Simulating dipoles in PyCharge is embarrassingly parallel, as the task of solving the equation of motion for each dipole at a given time step can be distributed across multiple processes. Ideally, each process is tasked with calculating the trajectory of a single Dipole object at each time step. However, if there are more Dipole objects in the simulation than available processes, the set of Dipole objects is distributed evenly among the processes, and each process calculates the trajectories of its assigned Dipole objects sequentially. Once the processes have finished calculating the trajectories of their assigned Dipole object(s) at a time step, the trajectories are broadcast to all of the other processes, and each process then updates its copies of the other dipoles' trajectories before proceeding to the next time step.
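
The per-time-step pattern can be sketched with mpi4py as below. The round-robin assignment, the advance_dipole helper, and the trajectory bookkeeping are illustrative assumptions for exposition, not PyCharge's internal API.

from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

n_dipoles = 8
n_steps = 100
# Round-robin assignment (illustrative): dipole i is owned by process i % size.
my_dipoles = [i for i in range(n_dipoles) if i % size == rank]

def advance_dipole(i, t):
    # Hypothetical stand-in for solving dipole i's equation of motion at step t.
    return (i, t)

# Every process keeps the trajectories of all N dipoles in memory,
# which is the source of the O(N^2) space complexity noted below.
trajectories = {i: [] for i in range(n_dipoles)}

for t in range(n_steps):
    # Each process advances only its own dipole(s), sequentially if it owns several.
    local = {i: advance_dipole(i, t) for i in my_dipoles}
    # Exchange results so every process sees every dipole's new trajectory point.
    for chunk in comm.allgather(local):
        for i, point in chunk.items():
            trajectories[i].append(point)

With one dipole per process, each iteration performs a single equation-of-motion solve per process, which corresponds to the ideal case described above.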

Dipole simulations using the run method execute in \(\mathcal{O}(N^2)\) time for \(N\) Dipole objects, since the driving electric field \(\mathbf{E}_\mathrm{d}\) of each dipole requires calculating the field contributions from the other \(N-1\) dipoles. By taking advantage of the parallel computations, the ideal time complexity of our MPI implementation (using \(N\) processes for \(N\) Dipole objects) is \(\mathcal{O}(N)\). However, since each process must store the trajectory arrays of all \(N\) dipoles, the MPI implementation has a space complexity of \(\mathcal{O}(N^2)\), while the space complexity of the original implementation is \(\mathcal{O}(N)\).
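
As a rough accounting, counting only driving-field evaluations per time step and ignoring communication overhead, the scaling with \(P\) processes is

\[
T_\mathrm{run} \propto N(N-1) = \mathcal{O}(N^2),
\qquad
T_\mathrm{MPI} \propto \frac{N(N-1)}{P} = \mathcal{O}(N) \quad \text{for } P = N.
\]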

To execute simulations using parallel processes, simply use the run_mpi method (which accepts the same input arguments as the run method) in your script. Then, execute the script using whichever MPI implementation is installed on your computer. The following command uses mpiexec to run the script example.py (which calls PyCharge's run_mpi method) with 2 processes:

mpiexec -n 2 python example.py
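
For reference, a minimal example.py might look like the following sketch. The Dipole constructor arguments and the run_mpi parameters shown here are illustrative placeholders; check them against the PyCharge documentation for your installed version.

# example.py -- minimal sketch of a two-dipole simulation run under MPI.
# The Dipole arguments and run_mpi parameters are illustrative placeholders;
# consult the PyCharge documentation for the exact signatures.
import numpy as np
import pycharge as pc

omega_0 = 100e12 * 2 * np.pi  # natural angular frequency (rad/s)
sources = (
    pc.Dipole(omega_0, (0, 0, 0), (0, 0, 1e-9)),
    pc.Dipole(omega_0, (80e-9, 0, 0), (0, 0, 1e-9)),
)
simulation = pc.Simulation(sources)
# Same input arguments as run; with 2 processes, each handles one dipole.
simulation.run_mpi(40000, 1e-18, 'two_dipoles.dat')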