Simulation on HPC

If you have access to an HPC cluster, a powerful way to launch many simulations is to use the cluster's compute distribution engine, while using the front node to launch Funz commands.

Assuming that the simulation software (say Modelica, our standard example) is already installed on the cluster, you can proceed in one of the following ways:

  1. start the backend on the computing nodes, and run Funz from the front node
  2. start the backend and run Funz on the front node
  3. start the backend on the front node, and run Funz from your own computer

1. Backend on computing nodes + Funz on front node

On a shared path between computing and front nodes:

  • install Funz:
    • Python: pip install Funz, then import Funz
    • R: remotes::install_github('Funz/Funz.R'), then library(Funz)
    • bash: download and unzip Funz-Bash.zip
  • install simulation plugin:
    • Python: Funz.installModel('Modelica')
    • R: Funz::install.Model('Modelica')
    • bash: download and unzip plugin-Modelica.zip
  • set up the simulation script ‘Funz/scripts/Modelica.sh’ to suit Modelica on the computing nodes
  • add the front node's IP (say 192.168.1.1) in the ‘Funz/calculator.xml’ file:
    <CALCULATOR>
    ...
    <HOST name="192.168.1.1" port="19001"/>
    <HOST name="192.168.1.1" port="19002"/>
    <HOST name="192.168.1.1" port="19003"/>
    <HOST name="192.168.1.1" port="19004"/>
    ...
    </CALCULATOR>
    
  • start background Funz computing daemon:
    • bash: submit the daemon as a cluster task, e.g. srun --exclusive Funz/FunzDaemon.sh (for SLURM)
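For SLURM, the daemon start-up above can be sketched as a small helper run from the front node; the node count, srun options and function name are placeholders to adapt to your cluster:

```shell
# Start one Funz computing daemon per node via SLURM (sketch; adapt the
# node count and srun options to your cluster).
NODES=4

submit_daemons() {
  for i in $(seq 1 "$NODES"); do
    # each daemon gets an exclusive single-task allocation
    srun --exclusive -N1 -n1 Funz/FunzDaemon.sh &
  done
  wait   # keep the submitting shell alive until the daemons exit
}
```

Each daemon then connects back to the front-node HOST entries declared in ‘Funz/calculator.xml’.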

You can now check that these computers are well set up by running a basic example yourself, which will use one of the computing nodes:

  • check that you receive Funz network heartbeats: nc -lu 19001 or socat -u udp-recv:19001
  • launch a basic calculation:
    • Python: Funz.Run(model="Modelica",input_files="samples/NewtonCooling.mo")
    • R: Funz::Run(model="Modelica",input.files="samples/NewtonCooling.mo")
    • bash: ./Funz.sh Run -m Modelica -if samples/NewtonCooling.mo
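If you prefer a bounded wait over a bare nc call, the heartbeat check can be wrapped in a small helper (a sketch; the function name is ours, and nc and timeout are assumed available):

```shell
# wait_heartbeat: block up to 30 s for one Funz UDP heartbeat (sketch).
# Prints the beginning of the first datagram received, if any.
wait_heartbeat() {
  port=${1:-19001}            # Funz daemons announce themselves on 19001+
  timeout 30 nc -lu "$port" | head -c 64
}
```

For example: wait_heartbeat 19001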

2. Backend + Funz on front node

On the front node:

  • install Funz:
    • Python: pip install Funz, then import Funz
    • R: remotes::install_github('Funz/Funz.R'), then library(Funz)
    • bash: download and unzip Funz-Bash.zip
  • install simulation plugin:
    • Python: Funz.installModel('Modelica')
    • R: Funz::install.Model('Modelica')
    • bash: download and unzip plugin-Modelica.zip
  • set up the simulation script ‘Funz/scripts/Modelica.sh’ to suit Modelica on the computing nodes
  • wrap the command through the cluster scheduler in the ‘Funz/calculator.xml’ file:
    <CALCULATOR>
    ...
    <CODE name="Modelica" command="./scripts/slurm.sh /opt/Funz/scripts/Modelica.sh"/>
    ...
    </CALCULATOR>
    

    note that SLURM, SGE and OAR wrapping scripts are available inside the ‘Funz/scripts’ directory

  • start background Funz computing daemon:
    • Python: Funz.startCalculators(1)
    • R: Funz::startCalculators(1)
    • bash: run Funz/FunzDaemon_start.sh 1
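For illustration, the role of such a scheduler wrapper can be sketched as a function that forwards the simulation command to the scheduler; this is a hypothetical SLURM example, not the shipped ‘scripts/slurm.sh’:

```shell
# slurm_wrap: run a simulation command on a computing node instead of the
# front node (sketch of what a scheduler wrapper does; options are examples).
slurm_wrap() {
  srun --ntasks=1 --exclusive "$@"
}
```

For example: slurm_wrap /opt/Funz/scripts/Modelica.sh case.mo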

You can now check that the backend is well set up by running a basic example:

  • check that you receive Funz network heartbeats: nc -lu 19001 or socat -u udp-recv:19001
  • launch a basic calculation:
    • Python: Funz.Run(model="Modelica",input_files="samples/NewtonCooling.mo")
    • R: Funz::Run(model="Modelica",input.files="samples/NewtonCooling.mo")
    • bash: ./Funz.sh Run -m Modelica -if samples/NewtonCooling.mo

3. Backend on front node + Funz on computer

On the front node:

  • install Funz:
    • Python: pip install Funz, then import Funz
    • R: remotes::install_github('Funz/Funz.R'), then library(Funz)
    • bash: download and unzip Funz-Bash.zip
  • install simulation plugin:
    • Python: Funz.installModel('Modelica')
    • R: Funz::install.Model('Modelica')
    • bash: download and unzip plugin-Modelica.zip
  • set up the simulation script ‘Funz/scripts/Modelica.sh’ to suit Modelica on the computing nodes
  • wrap the command through the cluster scheduler in the ‘Funz/calculator.xml’ file:
    <CALCULATOR>
    ...
    <CODE name="Modelica" command="./scripts/slurm.sh /opt/Funz/scripts/Modelica.sh"/>
    ...
    </CALCULATOR>
    

    note that SLURM, SGE and OAR wrapping scripts are available inside the ‘Funz/scripts’ directory

  • add your computer's IP (say 192.168.1.1) in the ‘Funz/calculator.xml’ file:
    <CALCULATOR>
    ...
    <HOST name="192.168.1.1" port="19001"/>
    <HOST name="192.168.1.1" port="19002"/>
    <HOST name="192.168.1.1" port="19003"/>
    <HOST name="192.168.1.1" port="19004"/>
    ...
    </CALCULATOR>
    
  • start background Funz computing daemon:
    • Python: Funz.startCalculators(1)
    • R: Funz::startCalculators(1)
    • bash: run Funz/FunzDaemon_start.sh 1
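The HOST entries must carry an IP address of your computer that is reachable from the cluster; on Linux it can be listed with (assuming hostname -I is available):

```shell
# Print this machine's first IP address; put an address reachable from the
# front node into the HOST entries of Funz/calculator.xml.
hostname -I | awk '{print $1}'
```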

You can now check that the backend is well set up by running a basic example from your computer (i.e. not on the cluster):

  • check that you receive Funz network heartbeats: nc -lu 19001 or socat -u udp-recv:19001
  • install Funz:
    • Python: pip install Funz, then import Funz
    • R: remotes::install_github('Funz/Funz.R'), then library(Funz)
    • bash: download and unzip Funz-Bash.zip
  • install simulation plugin:
    • Python: Funz.installModel('Modelica')
    • R: Funz::install.Model('Modelica')
    • bash: download and unzip plugin-Modelica.zip
  • launch a basic calculation:
    • Python: Funz.Run(model="Modelica",input_files="samples/NewtonCooling.mo")
    • R: Funz::Run(model="Modelica",input.files="samples/NewtonCooling.mo")
    • bash: ./Funz.sh Run -m Modelica -if samples/NewtonCooling.mo
    • cmd.exe: Funz.bat Run -m Modelica -if samples/NewtonCooling.mo

Autostart backend

The script ‘Funz/FunzDaemon_won.sh’ (“Wake on Network”) wakes up the backend on demand when Funz is called from the user side:

  • on the front node, just start Funz/FunzDaemon_won.sh
  • on your computer, wake the backend before launching Funz and send it back to sleep afterwards:
    echo "hi"| curl -m 1 telnet://frontnode:19000
    ./Funz.sh Run -m Modelica -if samples/NewtonCooling.mo
    echo "bye"| curl -m 1 telnet://frontnode:19000
    

    or

    echo "hi"| curl -m 1 telnet://frontnode:19000
    ./Funz.bat Run -m Modelica -if samples/NewtonCooling.mo
    echo "bye"| curl -m 1 telnet://frontnode:19000
    
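The wake/run/sleep sequence can be bundled into one helper function (a sketch; ‘frontnode’ remains a placeholder for your front node's hostname, and the function name is ours):

```shell
# funz_run_awake: wake the Funz backend, run the calculation, then send it
# back to sleep even if the run fails (sketch; adapt host and port).
funz_run_awake() {
  echo "hi"  | curl -m 1 telnet://frontnode:19000 || true   # wake on network
  ./Funz.sh Run "$@"
  status=$?
  echo "bye" | curl -m 1 telnet://frontnode:19000 || true   # back to sleep
  return "$status"
}
```

For example: funz_run_awake -m Modelica -if samples/NewtonCooling.mo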

