Alpine Linux provides an efficient platform for running particle physics simulations, offering minimal overhead while supporting complex scientific computing frameworks. This guide will walk you through setting up a complete particle physics simulation environment including ROOT, Geant4, and supporting tools.
Table of Contents
- Prerequisites
- Understanding Particle Physics Simulations
- Installing Development Environment
- Installing ROOT Framework
- Installing Geant4
- Setting Up PYTHIA
- Configuring MadGraph
- Data Analysis Tools
- Visualization Setup
- Running Simulations
- Performance Optimization
- Cluster Integration
- Troubleshooting
- Best Practices
- Conclusion
Prerequisites
Before setting up particle physics simulations, ensure you have:
- Alpine Linux with at least 8GB RAM (16GB recommended)
- 50GB+ free disk space
- C++ compiler and development tools
- Python 3.8 or later
- Basic understanding of particle physics concepts
- Familiarity with scientific computing
Understanding Particle Physics Simulations
Key Components
# Check system requirements
free -h
df -h
grep -c ^processor /proc/cpuinfo   # or simply: nproc
Particle physics simulations involve:
- Event Generation: Creating physics events
- Detector Simulation: Modeling detector response
- Reconstruction: Processing detector data
- Analysis: Extracting physics results
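These four stages form a pipeline, each handing its output to the next. A toy sketch in plain Python illustrates the shape (the stage functions, the 5% resolution, and the 20 GeV cut are illustrative assumptions, not part of any framework):

```python
import random

def generate_event(rng):
    """Event generation: produce one toy event (a single particle's true pT)."""
    return {"pt_true": rng.uniform(10.0, 100.0)}

def simulate_detector(event, rng):
    """Detector simulation: smear the true pT with a 5% Gaussian resolution."""
    event["pt_measured"] = rng.gauss(event["pt_true"], 0.05 * event["pt_true"])
    return event

def reconstruct(event):
    """Reconstruction: turn the detector readout into an analysis-level quantity."""
    event["pt_reco"] = event["pt_measured"]
    return event

def analyze(events, threshold=20.0):
    """Analysis: count events passing a pT cut."""
    return sum(1 for e in events if e["pt_reco"] > threshold)

rng = random.Random(42)
events = [reconstruct(simulate_detector(generate_event(rng), rng))
          for _ in range(1000)]
print(f"{analyze(events)} of {len(events)} events pass the cut")
```

Real frameworks implement each stage (PYTHIA for generation, Geant4 for detector simulation, ROOT for analysis), but the data flow is the same.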
Installing Development Environment
Step 1: Base Development Tools
# Update package repository
apk update
# Install essential build tools
apk add build-base cmake git wget curl
# Install development libraries
apk add \
python3-dev \
boost-dev \
gsl-dev \
libxml2-dev \
openssl-dev \
zlib-dev
# Install additional tools
apk add \
gfortran \
openblas-dev \
lapack-dev \
fftw-dev
Step 2: Python Scientific Stack
# Install Python package manager
apk add py3-pip py3-numpy py3-scipy
# Create virtual environment (reusing the system numpy/scipy installed above)
python3 -m venv --system-site-packages /opt/hep-env
source /opt/hep-env/bin/activate
# Install scientific Python packages
pip install --upgrade pip
pip install \
matplotlib \
pandas \
jupyter \
uproot \
awkward \
particle \
hepdata-lib
Installing ROOT Framework
Step 1: Download and Build ROOT
# Create build directory
mkdir -p /opt/root
cd /opt/root
# Download ROOT source
wget https://root.cern/download/root_v6.28.00.source.tar.gz
tar -xzf root_v6.28.00.source.tar.gz
# Create build directory
mkdir build && cd build
# Configure ROOT build
cmake ../root-6.28.00 \
-DCMAKE_INSTALL_PREFIX=/opt/root/install \
-Dpython=ON \
-Dmathmore=ON \
-Droofit=ON \
-Dbuiltin_freetype=ON \
-Dbuiltin_gsl=OFF \
-DCMAKE_CXX_STANDARD=17
# Build ROOT (this takes time)
make -j$(nproc)
make install
Step 2: Configure ROOT Environment
# Create ROOT setup script
cat > /etc/profile.d/root.sh << 'EOF'
#!/bin/sh
export ROOTSYS=/opt/root/install
export PATH=$ROOTSYS/bin:$PATH
export LD_LIBRARY_PATH=$ROOTSYS/lib:$LD_LIBRARY_PATH
export PYTHONPATH=$ROOTSYS/lib:$PYTHONPATH
EOF
# Source ROOT environment
source /etc/profile.d/root.sh
# Test ROOT installation
root -l -q -e 'printf("ROOT %s\n", gROOT->GetVersion());'
Installing Geant4
Step 1: Download and Build Geant4
# Create Geant4 directory
mkdir -p /opt/geant4
cd /opt/geant4
# Download Geant4 source
wget https://geant4-data.web.cern.ch/releases/geant4-v11.1.0.tar.gz
tar -xzf geant4-v11.1.0.tar.gz
# Download datasets
mkdir -p data
cd data
wget https://geant4-data.web.cern.ch/datasets/G4NDL.4.7.tar.gz
wget https://geant4-data.web.cern.ch/datasets/G4EMLOW.8.2.tar.gz
wget https://geant4-data.web.cern.ch/datasets/G4PhotonEvaporation.5.7.tar.gz
wget https://geant4-data.web.cern.ch/datasets/G4RadioactiveDecay.5.6.tar.gz
wget https://geant4-data.web.cern.ch/datasets/G4PARTICLEXS.4.0.tar.gz
wget https://geant4-data.web.cern.ch/datasets/G4PII.1.3.tar.gz
wget https://geant4-data.web.cern.ch/datasets/G4RealSurface.2.2.tar.gz
wget https://geant4-data.web.cern.ch/datasets/G4SAIDDATA.2.0.tar.gz
wget https://geant4-data.web.cern.ch/datasets/G4ENSDFSTATE.2.3.tar.gz
# Extract datasets
for file in *.tar.gz; do tar -xzf "$file"; done
cd ..
# Build Geant4
mkdir build && cd build
cmake ../geant4-v11.1.0 \
-DCMAKE_INSTALL_PREFIX=/opt/geant4/install \
-DGEANT4_USE_GDML=ON \
-DGEANT4_INSTALL_DATA=OFF \
-DGEANT4_INSTALL_DATADIR=/opt/geant4/data \
-DGEANT4_BUILD_MULTITHREADED=ON
make -j$(nproc)
make install
Step 2: Configure Geant4 Environment
# Create Geant4 setup script
cat > /etc/profile.d/geant4.sh << 'EOF'
#!/bin/sh
. /opt/geant4/install/bin/geant4.sh
export G4LEDATA=/opt/geant4/data/G4EMLOW8.2
export G4LEVELGAMMADATA=/opt/geant4/data/PhotonEvaporation5.7
export G4NEUTRONHPDATA=/opt/geant4/data/G4NDL4.7
export G4ENSDFSTATEDATA=/opt/geant4/data/G4ENSDFSTATE2.3
export G4PIIDATA=/opt/geant4/data/G4PII1.3
export G4RADIOACTIVEDATA=/opt/geant4/data/RadioactiveDecay5.6
export G4REALSURFACEDATA=/opt/geant4/data/RealSurface2.2
export G4SAIDXSDATA=/opt/geant4/data/G4SAIDDATA2.0
export G4PARTICLEXSDATA=/opt/geant4/data/G4PARTICLEXS4.0
EOF
source /etc/profile.d/geant4.sh
Setting Up PYTHIA
Step 1: Install PYTHIA8
# Create PYTHIA directory
mkdir -p /opt/pythia8
cd /opt/pythia8
# Download PYTHIA
wget https://pythia.org/download/pythia83/pythia8310.tgz
tar -xzf pythia8310.tgz
cd pythia8310
# Configure with ROOT support
./configure \
--prefix=/opt/pythia8/install \
--with-root=/opt/root/install \
--with-gzip \
--with-python
# Build and install
make -j$(nproc)
make install
Step 2: Python Bindings
# Build Python bindings
cd /opt/pythia8/pythia8310
make clean
./configure \
--prefix=/opt/pythia8/install \
--with-python-include=/usr/include/python3.11 \
--with-python-lib=/usr/lib/python3.11
make -j$(nproc)
make install
# Test PYTHIA (examples must be built before running)
cd examples
make main01
./main01
Configuring MadGraph
Step 1: Install MadGraph5
# Create MadGraph directory
mkdir -p /opt/madgraph
cd /opt/madgraph
# Download MadGraph
wget https://launchpad.net/mg5amcnlo/3.0/3.4.x/+download/MG5_aMC_v3.4.2.tar.gz
tar -xzf MG5_aMC_v3.4.2.tar.gz
cd MG5_aMC_v3_4_2
# Install dependencies
pip install six
Step 2: Configure MadGraph
# Create configuration file
cat > input/mg5_configuration.txt << 'EOF'
# MadGraph5 Configuration
pythia8_path = /opt/pythia8/install
eps_viewer = evince
web_browser = firefox
text_editor = vim
fortran_compiler = gfortran
cpp_compiler = g++
f2py_compiler = f2py3
EOF
# Test MadGraph
./bin/mg5_aMC
Data Analysis Tools
Step 1: Install HepMC
# Build HepMC3
mkdir -p /opt/hepmc3
cd /opt/hepmc3
wget https://gitlab.cern.ch/hepmc/HepMC3/-/archive/3.2.6/HepMC3-3.2.6.tar.gz
tar -xzf HepMC3-3.2.6.tar.gz
cd HepMC3-3.2.6
mkdir build && cd build
cmake .. \
-DCMAKE_INSTALL_PREFIX=/opt/hepmc3/install \
-DHEPMC3_ENABLE_ROOTIO=ON \
-DHEPMC3_ENABLE_PYTHON=ON
make -j$(nproc)
make install
Step 2: Install FastJet
# Build FastJet
mkdir -p /opt/fastjet
cd /opt/fastjet
wget http://fastjet.fr/repo/fastjet-3.4.0.tar.gz
tar -xzf fastjet-3.4.0.tar.gz
cd fastjet-3.4.0
./configure \
--prefix=/opt/fastjet/install \
--enable-python
make -j$(nproc)
make install
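FastJet's sequential-recombination algorithms cluster particles using angular distances in the eta-phi plane. The Delta-R metric they build on can be sketched in plain Python (illustrative only, not FastJet's API; note the azimuthal wrap-around):

```python
import math

def delta_phi(phi1, phi2):
    """Smallest signed azimuthal separation, wrapped into (-pi, pi]."""
    dphi = (phi1 - phi2) % (2 * math.pi)
    if dphi > math.pi:
        dphi -= 2 * math.pi
    return dphi

def delta_r(eta1, phi1, eta2, phi2):
    """Angular distance used in jet clustering and isolation cuts."""
    return math.hypot(eta1 - eta2, delta_phi(phi1, phi2))

# Two particles on opposite sides of the phi = 0 boundary:
print(round(delta_r(0.0, 0.1, 0.0, 2 * math.pi - 0.1), 3))  # wraps to 0.2
```

Without the wrap-around, the naive difference would report a separation near 2π instead of 0.2, splitting jets that straddle the boundary.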
Visualization Setup
Step 1: Install Visualization Tools
# Install X11 libraries for visualization
apk add \
mesa-dev \
mesa-gl \
mesa-gles \
libx11-dev \
libxext-dev \
libxft-dev \
libxpm-dev
# Install additional visualization tools
apk add \
gnuplot \
graphviz \
imagemagick
Step 2: Configure Event Display
# Create event display script
cat > /usr/local/bin/hep-display << 'EOF'
#!/usr/bin/env python3
import sys

import ROOT

def display_event(filename):
    """Display particle physics event"""
    ROOT.gROOT.SetBatch(False)
    ROOT.gStyle.SetOptStat(0)
    # Open file
    f = ROOT.TFile(filename)
    tree = f.Get("Events")
    # Create canvas
    canvas = ROOT.TCanvas("c1", "Event Display", 800, 600)
    canvas.Divide(2, 2)
    # Plot distributions
    canvas.cd(1)
    tree.Draw("pt")
    canvas.cd(2)
    tree.Draw("eta")
    canvas.cd(3)
    tree.Draw("phi")
    canvas.cd(4)
    tree.Draw("mass")
    canvas.Update()
    input("Press Enter to continue...")

if __name__ == "__main__":
    if len(sys.argv) > 1:
        display_event(sys.argv[1])
    else:
        print("Usage: hep-display <filename>")
EOF
chmod +x /usr/local/bin/hep-display
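The display script relies on ROOT to bin and draw each variable; the binning itself is simple to sketch in plain Python (a toy `fill_histogram` helper, not ROOT's API):

```python
def fill_histogram(values, nbins, lo, hi):
    """Return bin counts for values in [lo, hi); out-of-range values are dropped."""
    width = (hi - lo) / nbins
    counts = [0] * nbins
    for v in values:
        if lo <= v < hi:
            counts[int((v - lo) / width)] += 1
    return counts

# Four toy pT values binned into ten 10-GeV-wide bins:
print(fill_histogram([5, 15, 15, 95], nbins=10, lo=0, hi=100))
# -> [1, 2, 0, 0, 0, 0, 0, 0, 0, 1]
```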
Running Simulations
Example 1: Simple Particle Decay
#!/usr/bin/env python3
# File: /opt/examples/particle_decay.py
import ROOT
import numpy as np

# Create histogram
h_mass = ROOT.TH1F("h_mass", "Invariant Mass;M [GeV];Events", 100, 0, 200)

# Generate events
for i in range(10000):
    # Two-body Z decay: Z boson mass in GeV
    m_z = 91.2
    # Random decay angles (illustrative; not used in the smearing below)
    theta = np.random.uniform(0, np.pi)
    phi = np.random.uniform(0, 2 * np.pi)
    # Smear the true mass with a 2.5 GeV detector resolution
    mass = np.random.normal(m_z, 2.5)
    h_mass.Fill(mass)

# Draw histogram
canvas = ROOT.TCanvas("c1", "Z Boson Mass", 800, 600)
h_mass.Draw()
canvas.SaveAs("z_mass.png")
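The example above smears a known Z mass; in real reconstruction the invariant mass comes from the decay products' four-momenta via m² = (E₁+E₂)² − |p₁+p₂|². A minimal sketch (GeV units, massless decay products assumed):

```python
import math

def invariant_mass(p1, p2):
    """p = (E, px, py, pz). Returns sqrt(max(0, (sum E)^2 - |sum p|^2))."""
    E = p1[0] + p2[0]
    px = p1[1] + p2[1]
    py = p1[2] + p2[2]
    pz = p1[3] + p2[3]
    return math.sqrt(max(0.0, E * E - px * px - py * py - pz * pz))

# Two back-to-back 45.6 GeV massless leptons reconstruct the Z mass:
print(round(invariant_mass((45.6, 0, 0, 45.6), (45.6, 0, 0, -45.6)), 1))  # -> 91.2
```

In ROOT this is what `TLorentzVector` (or the newer `Math::LorentzVector`) computes when you add two four-vectors and call `.M()`.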
Example 2: Geant4 Detector Simulation
// File: /opt/examples/detector_sim.cc
#include "G4RunManager.hh"
#include "G4UImanager.hh"
#include "G4VisExecutive.hh"
#include "G4UIExecutive.hh"
#include "FTFP_BERT.hh"
#include "DetectorConstruction.hh"
#include "ActionInitialization.hh"

int main(int argc, char** argv) {
    // Initialize run manager
    G4RunManager* runManager = new G4RunManager;

    // Set mandatory initialization classes
    runManager->SetUserInitialization(new DetectorConstruction());
    runManager->SetUserInitialization(new FTFP_BERT);
    runManager->SetUserInitialization(new ActionInitialization());

    // Initialize G4 kernel
    runManager->Initialize();

    // Get UI manager
    G4UImanager* UI = G4UImanager::GetUIpointer();

    if (argc == 1) {
        // Interactive mode
        G4VisManager* visManager = new G4VisExecutive;
        visManager->Initialize();
        G4UIExecutive* ui = new G4UIExecutive(argc, argv);
        UI->ApplyCommand("/control/execute init_vis.mac");
        ui->SessionStart();
        delete ui;
        delete visManager;
    } else {
        // Batch mode
        G4String command = "/control/execute ";
        G4String fileName = argv[1];
        UI->ApplyCommand(command + fileName);
    }

    delete runManager;
    return 0;
}
Example 3: PYTHIA Event Generation
#!/usr/bin/env python3
# File: /opt/examples/pythia_gen.py
import pythia8
import ROOT

# Initialize PYTHIA
pythia = pythia8.Pythia()

# Configure process
pythia.readString("Beams:eCM = 13000.")        # 13 TeV
pythia.readString("HardQCD:all = on")          # QCD processes
pythia.readString("PhaseSpace:pTHatMin = 20.")

# Initialize
pythia.init()

# Create ROOT file
outFile = ROOT.TFile("pythia_events.root", "RECREATE")
tree = ROOT.TTree("Events", "PYTHIA Events")

# Branch variables
pt = ROOT.std.vector('float')()
eta = ROOT.std.vector('float')()
phi = ROOT.std.vector('float')()
pdgId = ROOT.std.vector('int')()
tree.Branch("pt", pt)
tree.Branch("eta", eta)
tree.Branch("phi", phi)
tree.Branch("pdgId", pdgId)

# Generate events
for iEvent in range(1000):
    if not pythia.next():
        continue
    # Clear vectors
    pt.clear()
    eta.clear()
    phi.clear()
    pdgId.clear()
    # Loop over particles, keeping charged final-state particles
    for i in range(pythia.event.size()):
        particle = pythia.event[i]
        if particle.isFinal() and particle.isCharged():
            pt.push_back(particle.pT())
            eta.push_back(particle.eta())
            phi.push_back(particle.phi())
            pdgId.push_back(particle.id())
    tree.Fill()

# Save and close
outFile.Write()
outFile.Close()

# Print statistics
pythia.stat()
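The `eta` stored for each particle is the pseudorapidity, η = −ln tan(θ/2), which PYTHIA computes from the momentum components. A standalone sketch of the same quantity:

```python
import math

def pseudorapidity(px, py, pz):
    """Pseudorapidity from momentum components (undefined exactly on the beam axis)."""
    p = math.sqrt(px * px + py * py + pz * pz)
    # Equivalent to -ln tan(theta/2); this form avoids computing the angle.
    return 0.5 * math.log((p + pz) / (p - pz))

# A particle at 45 degrees to the beam axis:
print(round(pseudorapidity(1.0, 0.0, 1.0), 3))  # -> 0.881
```

Pseudorapidity is preferred over the polar angle because differences in η are invariant under boosts along the beam axis (for massless particles).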
Performance Optimization
Parallel Processing
# Configure parallel ROOT
cat > /opt/scripts/parallel_analysis.py << 'EOF'
#!/usr/bin/env python3
import multiprocessing as mp

import ROOT

# Enable multi-threading within each RDataFrame
ROOT.ROOT.EnableImplicitMT()

def process_file(filename):
    """Process single file"""
    df = ROOT.RDataFrame("Events", filename)
    # Define computations
    df_filtered = df.Filter("pt > 20")
    hist = df_filtered.Histo1D(("pt", "pT;pT [GeV];Events", 100, 0, 200), "pt")
    return hist.GetValue()

# Process multiple files in parallel
if __name__ == "__main__":
    files = ["file1.root", "file2.root", "file3.root"]
    with mp.Pool() as pool:
        histograms = pool.map(process_file, files)
    # Merge histograms
    h_total = histograms[0].Clone()
    for h in histograms[1:]:
        h_total.Add(h)
EOF
chmod +x /opt/scripts/parallel_analysis.py
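The script maps files to per-file histograms and merges the results. The same map/reduce shape works without ROOT; a sketch (the `partial_counts` worker and toy chunks are illustrative):

```python
import multiprocessing as mp

def partial_counts(chunk):
    """Worker: count values above a 20 GeV cut in one chunk of data."""
    return sum(1 for v in chunk if v > 20)

if __name__ == "__main__":
    chunks = [[10, 25, 30], [5, 40], [21, 22, 19]]
    # Map: one worker per chunk; reduce: sum the partial counts.
    with mp.Pool(processes=2) as pool:
        totals = pool.map(partial_counts, chunks)
    print(sum(totals))  # -> 5
```

The key constraint is that whatever the worker returns must be picklable so it can cross the process boundary; plain numbers and lists always are, and modern ROOT histograms are as well.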
Memory Management
# Create memory optimization script
cat > /opt/scripts/optimize_memory.sh << 'EOF'
#!/bin/sh
# Optimize memory for large simulations
# Set memory limits
export ROOTSYS_MEMSTAT=1
export G4FORCENUMBEROFTHREADS=4
# Configure system limits
ulimit -v unlimited
ulimit -s unlimited
# Set OMP threads
export OMP_NUM_THREADS=$(nproc)
# Run with optimized settings
exec "$@"
EOF
chmod +x /opt/scripts/optimize_memory.sh
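The fixed `G4FORCENUMBEROFTHREADS=4` above can instead be derived from a simple budget: the usable thread count is limited by both the core count and the per-thread memory footprint. A sketch (the 3 GB-per-thread figure is an illustrative assumption, not a measured value):

```python
def choose_threads(total_mem_gb, mem_per_thread_gb, cores):
    """Largest thread count that fits in memory without oversubscribing cores."""
    by_memory = int(total_mem_gb // mem_per_thread_gb)
    return max(1, min(cores, by_memory))

# 16 GB machine, ~3 GB per Geant4 worker thread, 8 cores:
print(choose_threads(16, 3, 8))  # -> 5
```

Measure the actual per-thread footprint of your geometry and physics list before relying on any such number.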
Cluster Integration
SLURM Job Script
# Create SLURM submission script
cat > /opt/scripts/submit_simulation.sh << 'EOF'
#!/bin/bash
#SBATCH --job-name=hep_sim
#SBATCH --output=sim_%j.out
#SBATCH --error=sim_%j.err
#SBATCH --time=24:00:00
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=16
#SBATCH --mem=32G
#SBATCH --partition=compute
#SBATCH --array=0-9   # adjust the range to the number of simulation jobs
# Load environment
source /etc/profile.d/root.sh
source /etc/profile.d/geant4.sh
# Run simulation
cd $SLURM_SUBMIT_DIR
./run_simulation.sh $SLURM_ARRAY_TASK_ID
EOF
HTCondor Configuration
# Create HTCondor job file
cat > /opt/scripts/condor_job.sub << 'EOF'
Universe = vanilla
Executable = /opt/scripts/run_analysis.sh
Arguments = $(Process)
Log = job_$(Cluster).log
Output = job_$(Cluster)_$(Process).out
Error = job_$(Cluster)_$(Process).err
Requirements = (OpSys == "LINUX") && (Arch == "X86_64")
Request_memory = 4 GB
Request_cpus = 1
Queue 100
EOF
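Both schedulers fan a single task id (`$SLURM_ARRAY_TASK_ID`, `$(Process)`) out over many jobs; mapping that id to a contiguous event range is the usual pattern. A sketch (the function name is illustrative):

```python
def event_range(task_id, n_jobs, n_events):
    """Return [start, stop) event indices for one job, covering all events exactly once."""
    base, rem = divmod(n_events, n_jobs)
    # The first `rem` jobs each take one extra event.
    start = task_id * base + min(task_id, rem)
    stop = start + base + (1 if task_id < rem else 0)
    return start, stop

# 10 events over 3 jobs:
print([event_range(i, 3, 10) for i in range(3)])  # -> [(0, 4), (4, 7), (7, 10)]
```

Seed each job's random number generator from the task id as well, so parallel jobs produce statistically independent events.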
Troubleshooting
Common Issues
- Library conflicts:
# Check library dependencies
ldd $(which root)
# Fix library paths
export LD_LIBRARY_PATH=/opt/root/install/lib:$LD_LIBRARY_PATH
- Python module errors:
# Rebuild Python bindings
cd /opt/root/build
cmake .. -Dpython=ON -DPYTHON_EXECUTABLE=$(which python3)
make -j$(nproc)
- Geant4 data files:
# Verify data files
ls -la /opt/geant4/data/
# Re-download if missing
cd /opt/geant4/data
wget [missing data file URL]
Debug Mode
# Enable debug output
export G4VERBOSE=1
export ROOT_DEBUG=1
# Run with gdb
gdb ./simulation
(gdb) run
(gdb) bt # backtrace on crash
Best Practices
Code Organization
# Create project structure
mkdir -p /opt/hep-project/{src,include,scripts,data,results}
# Example CMakeLists.txt
cat > /opt/hep-project/CMakeLists.txt << 'EOF'
cmake_minimum_required(VERSION 3.16)
project(HEPAnalysis)
find_package(ROOT REQUIRED)
find_package(Geant4 REQUIRED)
include_directories(
${ROOT_INCLUDE_DIRS}
${Geant4_INCLUDE_DIRS}
${CMAKE_CURRENT_SOURCE_DIR}/include
)
add_executable(analysis src/main.cc)
target_link_libraries(analysis
${ROOT_LIBRARIES}
${Geant4_LIBRARIES}
)
EOF
Documentation
# Generate Doxygen documentation
cat > /opt/hep-project/Doxyfile << 'EOF'
PROJECT_NAME = "HEP Analysis"
INPUT = src include
RECURSIVE = YES
GENERATE_HTML = YES
GENERATE_LATEX = NO
EXTRACT_ALL = YES
EOF
# Run Doxygen
cd /opt/hep-project
doxygen Doxyfile
Validation Scripts
# Create validation script
cat > /opt/scripts/validate_results.py << 'EOF'
#!/usr/bin/env python3
import ROOT

def validate_simulation(data_file, mc_file):
    """Compare data with Monte Carlo"""
    # Open files
    f_data = ROOT.TFile(data_file)
    f_mc = ROOT.TFile(mc_file)
    # Get histograms
    h_data = f_data.Get("h_mass")
    h_mc = f_mc.Get("h_mass")
    # Normalize MC to the data yield
    h_mc.Scale(h_data.Integral() / h_mc.Integral())
    # Calculate chi2/ndf
    chi2 = h_data.Chi2Test(h_mc, "CHI2/NDF")
    print(f"Chi2/NDF: {chi2}")
    # Plot comparison
    canvas = ROOT.TCanvas("c1", "Validation", 800, 600)
    h_data.Draw("E")
    h_mc.Draw("HIST SAME")
    canvas.SaveAs("validation.png")
    return chi2 < 2.0  # Pass if chi2/ndf < 2

if __name__ == "__main__":
    validate_simulation("data.root", "mc.root")
EOF
chmod +x /opt/scripts/validate_results.py
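ROOT's `Chi2Test` implements a proper two-histogram test; the simpler per-bin sum it generalizes can be sketched directly (assuming Poisson errors on the data and skipping empty data bins):

```python
def chi2_ndf(data_bins, mc_bins):
    """Sum of (d - m)^2 / d over non-empty data bins, divided by the bin count."""
    terms = [(d - m) ** 2 / d for d, m in zip(data_bins, mc_bins) if d > 0]
    return sum(terms) / len(terms)

data = [100, 150, 120]
mc = [110, 140, 118]
print(round(chi2_ndf(data, mc), 3))  # -> 0.567
```

A value near 1 indicates the MC describes the data within statistical fluctuations, which is why the validation script uses chi2/ndf < 2 as its pass threshold.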
Conclusion
You've successfully set up a comprehensive particle physics simulation environment on Alpine Linux. This setup includes ROOT for data analysis, Geant4 for detector simulation, PYTHIA for event generation, and various analysis tools. The lightweight nature of Alpine Linux combined with these powerful frameworks provides an efficient platform for particle physics research.
Remember to regularly update your frameworks, validate your simulations against known results, and optimize your code for the specific physics processes you're studying. The modular nature of this setup allows you to add additional tools and frameworks as needed for your research.