
Background

The OpenCPU API can be used to develop a web application based on the R package kwb.qmra. It offers many useful features, which its author Jeroen Ooms describes as follows (Source):

"OpenCPU is mature has been put to the test in production for many years now, both in private and public organizations. The system was developed out of a need for a reliable, scalable system for embedding R that we used at UCLA for teaching R to students, sometimes several classes at once (think about a classroom settings with 100+ concurrent students). The core implementation of OpenCPU is the opencpu-server stack based on systems native Apache2 webserver. The opencpu-server stack has been packaged as deb/rpm packages and can be installed out of the box on all popular Linux systems. This really provides a super stable production ready system out-of-the-box. Not only does it easily scale up but you can configure the server (if you want to add auth, proxies, etc) via standard Apache configuration on your server. An incredible amount of time and energy has been invested into optimizing the internals of opencpu-server for security, reliability and performance. In opencpu-server, each incoming request gets processed in a temporary process fork which serves as a sandbox that controls memory/cpu limits, access control, timeouts, etc. All these are critical to ensure that the stability of the server does not get compromised by users or packages (accidentally) messing with the system or using excessive resources. I am not aware of any other R server system that does this.

More details about the why and how of OpenCPU are available from these papers:"

For the kwb.qmra R package, a very small example app (source code on GitHub) was developed that uses the OpenCPU framework. This web app performs a quantitative microbiological risk assessment (QMRA) for a dummy configuration and can be tested here: https://kwb-r.ocpu.io/kwb.qmra/www/

More advanced web apps can easily be developed using the OpenCPU API, but they will most probably need their own backend server hosting OpenCPU, because the publicly available OpenCPU server might be too limited. A sketch of such a remote API call is shown below.
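The following is only a minimal sketch of how a client could call the risk simulation remotely over HTTP. It assumes the standard OpenCPU endpoint layout (/R/<function>/json) on the public kwb-r.ocpu.io server used by the demo app above, and that the function's argument is named config; the dummy configuration file is introduced in the next section.


### Minimal sketch: call opencpu_simulate_risk() remotely via the OpenCPU HTTP API
### (assumes the /R/<function>/json endpoint on kwb-r.ocpu.io and that the
### function's argument is named "config")
library(httr)
library(jsonlite)

### Read the dummy configuration (see "Run risk simulation" below)
config <- jsonlite::fromJSON("config_dummy.json")

### POST the configuration as a JSON argument; the "/json" suffix asks OpenCPU
### to return the function result serialized as JSON
res <- httr::POST(
  url = "https://kwb-r.ocpu.io/kwb.qmra/R/opencpu_simulate_risk/json",
  body = list(config = config),
  encode = "json"
)
httr::stop_for_status(res)

### Parse the JSON response back into an R list
risk_remote <- jsonlite::fromJSON(httr::content(res, as = "text", encoding = "UTF-8"))
str(risk_remote, max.level = 1)

In a production web app the same request would typically be issued from the browser (e.g. with the opencpu.js client library) against a self-hosted opencpu-server.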

Run risk simulation

You need to provide the input parameters required by kwb.qmra::opencpu_simulate_risk() as a JSON data structure, as in the example file config_dummy.json (for details see: kwb.qmra::config_dummy_json), which then needs to be converted into an R list (see below).


### Convert "config.json" to R list
config <- jsonlite::fromJSON("config_dummy.json")

### Optionally directly import a configuration from CSV files
### for details see: https://github.com/KWB-R/kwb.qmra/tree/master/inst/extdata/configs
# config_dir <- system.file("extdata/configs/dummy", package = "kwb.qmra")
# config <- kwb.qmra::config_read(confDir = config_dir)

# Run risk simulation
risk_dummy <- kwb.qmra::opencpu_simulate_risk(config)
#> 
#> # STEP 0: BASIC CONFIGURATION
#> 
#> Simulated 3 pathogen(s): Campylobacter jejuni and Campylobacter coli, Rotavirus, Giardia duodenalis
#> Number of random distribution repeatings: 10
#> Number of exposure events: 365
#> 
#> # STEP 1: INFLOW
#> 
#> Simulated pathogen: Campylobacter jejuni and Campylobacter coli
#> Create 10 random distribution(s): uniform (n: 365, min: 10.000000, max: 10000.000000)
#> Warning in data.frame(..., check.names = FALSE): row names were found from a
#> short variable and have been discarded
#> Simulated pathogen: Rotavirus
#> Create 10 random distribution(s): uniform (n: 365, min: 10.000000, max: 10000.000000)
#> Warning in data.frame(..., check.names = FALSE): row names were found from a
#> short variable and have been discarded
#> Simulated pathogen: Giardia duodenalis
#> Create 10 random distribution(s): uniform (n: 365, min: 10.000000, max: 10000.000000)
#> Warning in data.frame(..., check.names = FALSE): row names were found from a
#> short variable and have been discarded
#> Providing inflow events ... ok. (0.00s) 
#> 
#> # STEP 2: TREATMENT SCHEMES
#> 
#> Create 10 random distribution(s): uniform (n: 365, min: 0.200000, max: 2.000000)
#> Create 10 random distribution(s): uniform (n: 365, min: 1.000000, max: 2.000000)
#> Create 10 random distribution(s): uniform (n: 365, min: 0.100000, max: 3.400000)
#> Create 10 random distribution(s): uniform (n: 365, min: 0.200000, max: 4.400000)
#> Create 10 random distribution(s): uniform (n: 365, min: 0.400000, max: 3.300000)
#> Create 10 random distribution(s): uniform (n: 365, min: 0.000000, max: 3.500000)
#> Create 10 random distribution(s): uniform (n: 365, min: 2.000000, max: 6.000000)
#> Create 10 random distribution(s): uniform (n: 365, min: 0.300000, max: 5.000000)
#> Create 10 random distribution(s): uniform (n: 365, min: 0.250000, max: 4.000000)
#> Create 10 random distribution(s): uniform (n: 365, min: 2.000000, max: 6.000000)
#> Create 10 random distribution(s): uniform (n: 365, min: 1.000000, max: 2.000000)
#> Create 10 random distribution(s): uniform (n: 365, min: 2.100000, max: 8.300000)
#> Create 10 random distribution(s): uniform (n: 365, min: 4.000000, max: 4.000000)
#> Create 10 random distribution(s): uniform (n: 365, min: 4.000000, max: 4.000000)
#> Create 10 random distribution(s): uniform (n: 365, min: 4.000000, max: 4.000000)
#> 
#> # STEP 3: EXPOSURE
#> 
#> Simulated exposure: volume per event
#> Create 10 random distribution(s): triangle (n: 365, min: 0.500000, max: 3.000000, mode = 1.500000)
#> 
#> # STEP 4: DOSE RESPONSE
#> 
#> 
#> # STEP 5: HEALTH
risk_dummy_json <- jsonlite::toJSON(risk_dummy, pretty = TRUE)

# Save simulation results in JSON format
writeLines(text = risk_dummy_json, "risk_dummy.json")

The risk simulation results are stored in JSON format in the R object risk_dummy_json and also saved to disk. To inspect them, open the risk_dummy.json file.
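Alternatively, the results can be inspected programmatically. A simple approach (assuming the file was written as above) is to read it back with jsonlite and look at its top-level structure:


### Read the saved results back into R and inspect the top-level structure
risk_reloaded <- jsonlite::fromJSON("risk_dummy.json")
str(risk_reloaded, max.level = 1)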

Real calculations should be performed using the config_default.json configuration developed by Christoph Sprenger (@chsprenger). However, due to the large size of the resulting risk_default.json object (~275 MB), this default example could not be hosted on GitHub (maximum single file size: 100 MB). Thus, this workflow is limited to the dummy configuration only.