Supercharge Your Shiny App by Offloading Computations to HPC Cluster
Remote job submission and resource management for interactive apps
Overview
Advanced Shiny HPC
With increasing data and analysis complexity, developers are tempted to put serious computations into Shiny apps. Given resource limitations and shared infrastructure, this often leads to crashes. This workshop teaches you to interact with remote HPC clusters from your laptop or Posit Connect, offloading computations while keeping apps interactive.
What You'll Learn
- HPC cluster interaction from your laptop
- Remote job submission from Shiny
- Parallel computing for faster results
- Resource management and optimization
- App stability while leveraging HPC power
Prerequisites
Required Knowledge:
- Intermediate Shiny development
- Basic understanding of HPC concepts
- R programming proficiency
Helpful:
- Experience with computational bottlenecks
- SSH and remote systems familiarity
Key Technologies
Shiny
HPC Cluster
Posit Connect
Job Schedulers
{future}
The Problem
Typical Scenario
```r
# BAD: Heavy computation in Shiny
server <- function(input, output, session) {
  output$result <- renderPlot({
    # This crashes the app!
    heavy_simulation(
      iterations = 1000000,
      samples = input$samples
    )
  })
}
```

Issues:
- App becomes unresponsive
- Other users on the shared server are affected
- Memory limits are exceeded
- Timeouts and crashes
The Solution
Offload to HPC
```r
# GOOD: Offload to cluster
server <- function(input, output, session) {
  result <- reactiveVal(NULL)

  observeEvent(input$run, {
    # Submit job to HPC
    job_id <- submit_hpc_job(
      script = "simulation.R",
      params = list(samples = input$samples)
    )
    # Poll for results
    result(poll_job_result(job_id))
  })

  # Render outputs outside the observer, driven by the reactive value
  output$result <- renderPlot({
    req(result())
  })
}
```

Benefits:
- App stays responsive
- Leverage massive compute power
- No local resource limits
- Parallel processing
- Other users unaffected
Workshop Content
Part 1: HPC Cluster Basics
Job Schedulers:
- SLURM
- PBS
- SGE
- LSF
Key Concepts:
- Job submission
- Resource requests (cores, memory, time)
- Queue management
- Job monitoring
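To make the resource-request concepts concrete, here is a small helper that assembles an `sbatch` command line from the usual knobs. The flag names are standard SLURM options; the helper itself is hypothetical, not from any package:

```r
# Build an sbatch command line from typical resource requests.
# (Illustrative helper; the --job-name/--ntasks/--mem/--time flags
# are standard SLURM options.)
build_sbatch_cmd <- function(script, job_name, ntasks = 1,
                             mem = "4G", time = "00:30:00") {
  paste(
    "sbatch",
    paste0("--job-name=", job_name),
    paste0("--ntasks=", ntasks),
    paste0("--mem=", mem),
    paste0("--time=", time),
    script
  )
}

build_sbatch_cmd("run_sim.sh", "shiny_analysis", ntasks = 16, mem = "64G")
# "sbatch --job-name=shiny_analysis --ntasks=16 --mem=64G --time=00:30:00 run_sim.sh"
```

Building the command in one place keeps the Shiny code free of string plumbing and makes the resource requests easy to expose as inputs.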
Part 2: Connecting Shiny to HPC
Architecture:
User → Shiny App → SSH Connection → HPC Cluster
                                        ↓
                                  Job Scheduler
                                        ↓
                                  Compute Nodes
                                        ↓
                          Results → back to Shiny
Authentication:
- SSH keys
- Certificates
- Secure credential management
Part 3: Job Submission from R
```r
library(ssh)

# Connect to cluster (hostname is illustrative)
session <- ssh_connect("user@hpc.university.edu")

# Submit job: sbatch expects a shell script, so wrap the Rscript call
ssh_exec_wait(session, command = paste(
  "sbatch --job-name=shiny_analysis",
  "--ntasks=16 --mem=64G --time=01:00:00",
  "--wrap='Rscript analysis_script.R'"
))

# Check status
status <- ssh_exec_internal(session, "squeue --user=myuser")

# Retrieve results (the ssh package's download function is scp_download)
scp_download(session,
  files = "results/output.rds",
  to = "local_results/")

ssh_disconnect(session)
```

Part 4: Building Reactive HPC Shiny App
Key Components:
- Job submission UI

```r
ui <- fluidPage(
  numericInput("n_samples", "Samples:", 1000000),
  numericInput("n_cores", "Cores:", 16),
  actionButton("submit", "Run on HPC"),
  textOutput("status"),
  plotOutput("results")
)
```

- Job management
```r
server <- function(input, output, session) {
  job_status <- reactiveVal("Idle")
  job_id     <- reactiveVal(NULL)
  results    <- reactiveVal(NULL)

  observeEvent(input$submit, {
    job_status("Submitting...")
    # Submit to HPC
    id <- submit_slurm_job(
      cores = input$n_cores,
      memory = "64G",
      script = generate_script(input$n_samples)
    )
    job_id(id)
    job_status(paste("Running - Job ID:", id))
  })

  # Poll for completion every 5 seconds while a job is active;
  # invalidateLater() is what makes this observer re-run
  observe({
    req(job_id())
    invalidateLater(5000)
    if (is_job_complete(job_id())) {
      results(fetch_results(job_id()))
      job_status("Complete")
      job_id(NULL)
    }
  })

  output$results <- renderPlot({
    req(results())
  })
  output$status <- renderText(job_status())
}
```

Part 5: Advanced Patterns
Parallel Workflows:
- Multiple simultaneous jobs
- Parameter sweeps
- Sensitivity analysis
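A parameter sweep expands every combination of inputs into one job specification per row, each of which can be submitted as an independent HPC job. A minimal sketch with made-up parameter names:

```r
# Expand all combinations of the swept parameters; each row becomes
# one independent cluster job. (Parameter names are illustrative.)
sweep_grid <- expand.grid(
  n_subjects = c(100, 1000, 10000),
  dose_mg    = c(5, 10, 20)
)
nrow(sweep_grid)
# 9 combinations -> 9 independent jobs

jobs <- lapply(seq_len(nrow(sweep_grid)), function(i) {
  list(id = i, params = as.list(sweep_grid[i, ]))
})
```

Each entry in `jobs` would then be handed to the submission helper, and results gathered as the jobs finish.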
Result Caching:
- Store results in shared filesystem
- Avoid re-computation
- Session persistence
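Caching can be as simple as keying an `.rds` file on the job parameters, so identical requests skip the cluster entirely. A minimal sketch; a real app would point `cache_dir` at a filesystem visible to both Shiny and the HPC:

```r
# Cache results keyed by the job parameters: recompute only on a miss.
# (Hypothetical helper; cache_dir defaults to tempdir() for illustration.)
cached_result <- function(params, compute, cache_dir = tempdir()) {
  key  <- paste0("res_", paste(names(params), unlist(params),
                               sep = "-", collapse = "_"), ".rds")
  path <- file.path(cache_dir, key)
  if (file.exists(path)) return(readRDS(path))
  result <- compute(params)
  saveRDS(result, path)
  result
}

slow_sq <- function(p) p$x^2
cached_result(list(x = 7), slow_sq)  # computes and stores 49
cached_result(list(x = 7), slow_sq)  # second call reads from the cache
```

In production you would also want cache invalidation (e.g. include a code version in the key) so stale results are not served after the analysis script changes.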
Progress Tracking:
- Real-time job monitoring
- Log file streaming
- Estimated completion
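For user-facing status text, the app needs to translate SLURM's compact state codes (as reported by, say, `squeue -h -o %t`) into something readable. The state abbreviations are standard SLURM; the helper is a hypothetical sketch:

```r
# Map SLURM compact state codes to user-facing labels. An empty result
# from squeue means the job has left the queue (finished or failed).
describe_slurm_state <- function(code) {
  labels <- c(PD = "Pending in queue",
              R  = "Running",
              CG = "Completing",
              S  = "Suspended")
  if (length(code) == 0 || !nzchar(code)) return("Finished (left the queue)")
  if (code %in% names(labels)) labels[[code]] else paste("Unknown state:", code)
}

describe_slurm_state("PD")  # "Pending in queue"
describe_slurm_state("")    # "Finished (left the queue)"
```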
Practical Example: Pharmacokinetic Simulation
```r
# Heavy PK simulation
pk_simulation <- function(n_subjects, n_doses, n_cores) {
  # This takes 30 minutes locally
  # But 2 minutes on HPC with 64 cores!
  results <- parallel_pk_sim(
    subjects = n_subjects,
    doses = n_doses,
    cores = n_cores
  )
  return(results)
}

# Shiny app
ui <- fluidPage(
  titlePanel("PK Simulation (HPC-Powered)"),
  sidebarLayout(
    sidebarPanel(
      sliderInput("subjects", "Subjects:", min = 100, max = 10000, value = 1000),
      sliderInput("doses", "Doses:", min = 1, max = 20, value = 5),
      sliderInput("cores", "HPC Cores:", min = 1, max = 128, value = 64),
      actionButton("run_hpc", "Run on HPC"),
      hr(),
      textOutput("job_status")
    ),
    mainPanel(
      plotOutput("concentration_time"),
      plotOutput("exposure_distribution")
    )
  )
)

server <- function(input, output, session) {
  # HPC job management logic
  # (See workshop materials for full code)
}
```

Use Cases in Pharma
Clinical Trial Simulations
- Design optimization
- Power calculations
- Scenario analysis
Pharmacokinetic Modeling
- Population PK
- PBPK simulations
- Dose optimization
Genomics Analysis
- Variant calling
- Pathway analysis
- Biomarker discovery
Real-World Evidence
- Large database queries
- Propensity matching
- Survival modeling
Best Practices
Do's
- Use HPC for genuinely heavy computations
- Implement proper error handling
- Cache results when possible
- Provide user feedback (progress bars)
- Set reasonable timeout limits

Don'ts
- Don't submit jobs for trivial computations
- Don't ignore job failures
- Don't hardcode credentials
- Don't monopolize cluster resources
- Don't forget to clean up temp files
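On "don't hardcode credentials": one simple pattern is reading connection details from environment variables, which Posit Connect can set per piece of content. A sketch; the variable names are made up for illustration:

```r
# Read HPC connection details from the environment instead of the source.
# (Hypothetical variable names; set them in .Renviron locally or in the
# content settings on Posit Connect.)
hpc_credentials <- function() {
  host <- Sys.getenv("HPC_HOST")
  user <- Sys.getenv("HPC_USER")
  key  <- Sys.getenv("HPC_SSH_KEYFILE")
  if (!nzchar(host) || !nzchar(user))
    stop("Set HPC_HOST and HPC_USER in the environment")
  list(host = paste0(user, "@", host), keyfile = key)
}

# Usage (assuming the ssh package):
#   cred <- hpc_credentials()
#   session <- ssh::ssh_connect(cred$host, keyfile = cred$keyfile)
```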
Deployment Considerations
Local Development
- SSH to institutional HPC
- Test with small jobs
- Debug locally when possible
Posit Connect Deployment
- Configure SSH keys
- Network access to HPC
- Service account credentials
- Monitor resource usage
Security
- Encrypted connections
- Credential management
- Audit trails
- Resource quotas
Learning Outcomes
- Connect Shiny apps to HPC clusters
- Submit and manage remote jobs
- Build responsive compute-heavy apps
- Implement parallel workflows
- Deploy HPC-enabled apps to Posit Connect
- Optimize resource usage
Workshop Format
Part 1 (1 hour):
- HPC basics and setup
- Simple job submission from R
- Monitoring and results
Part 2 (1 hour):
- Building HPC-enabled Shiny app
- Live demo and coding
- Troubleshooting
Part 3 (30 min):
- Participants' use cases discussion
- Q&A and problem-solving
- Deployment strategies
Next Steps
- Set up SSH access to your HPC
- Try example app from workshop
- Identify compute bottlenecks in your apps
- Consider HPC for heavy workloads
Similar Workshops
- From Data to Insights with {teal} - Shiny framework
- Getting Started with LLM APIs - Another compute-heavy use case
Learning Path
- Build Shiny apps first: {teal} workshop
- Advanced Shiny: Validating Shiny Apps
Last updated: November 2025 | R/Pharma 2025 Conference