Computes predicted dominance phase durations using the posterior predictive distribution.
Source: R/predict.R
Computes predicted dominance phase durations using the fitted model.
Usage
# S3 method for cumhist
predict(
object,
summary = TRUE,
probs = NULL,
full_length = TRUE,
predict_history = NULL,
...
)
Arguments
- object
An object of class cumhist
- summary
Whether summary statistics should be returned instead of raw sample values. Defaults to TRUE.
- probs
The percentiles used to compute the summary; defaults to NULL (no credible intervals).
- full_length
Only for summary = TRUE, whether the summary table should include rows with no predictions, i.e., rows with mixed phases, the first/last dominance phase in a run, etc. See preprocess_data(). Defaults to TRUE.
- predict_history
Option to predict a cumulative history state (or their difference). Disabled by default by setting it to NULL. You can specify "1" or "2" for the cumulative history of the first or second perceptual state (with indexes 1 and 2, respectively), "dominant" or "suppressed" for the cumulative history of the state that is, respectively, dominant or suppressed during the following phase, or "difference" for the difference between the suppressed and dominant histories. See the cumulative history vignette for details and the sketch after this list.
- ...
Unused
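A minimal illustrative sketch of the optional arguments, assuming a fitted cumhist object such as br_fit from the Examples below (not run here; the probs values are chosen only for illustration):

predict(br_fit, probs = c(0.055, 0.945))        # add 89% credible interval columns
predict(br_fit, full_length = FALSE)            # drop rows without predictions
predict(br_fit, predict_history = "difference") # cumulative history difference (suppressed vs. dominant)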
Value
If summary = FALSE, a numeric matrix (iterationsN x clearN).
If summary = TRUE but probs = NULL, a vector of mean predicted durations or of the requested cumulative history values.
If summary = TRUE and probs is not NULL, a data.frame with a column "Predicted" (mean) and a column for each specified quantile.
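As an illustrative sketch of these three return shapes (again assuming br_fit from the Examples below; not run here):

predict(br_fit, summary = FALSE)          # numeric matrix: iterationsN x clearN
predict(br_fit)                           # vector of mean predicted durations
predict(br_fit, probs = c(0.25, 0.75))    # data.frame: "Predicted" plus one column per quantile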
Examples
# \donttest{
br_fit <- fit_cumhist(br_singleblock, state = "State", duration = "Duration")
#>
#> SAMPLING FOR MODEL 'historylm' NOW (CHAIN 1).
#> Chain 1:
#> Chain 1: Gradient evaluation took 0 seconds
#> Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0 seconds.
#> Chain 1: Adjust your expectations accordingly!
#> Chain 1:
#> Chain 1:
#> Chain 1: Iteration: 1 / 2000 [ 0%] (Warmup)
#> Chain 1: Iteration: 200 / 2000 [ 10%] (Warmup)
#> Chain 1: Iteration: 400 / 2000 [ 20%] (Warmup)
#> Chain 1: Iteration: 600 / 2000 [ 30%] (Warmup)
#> Chain 1: Iteration: 800 / 2000 [ 40%] (Warmup)
#> Chain 1: Iteration: 1000 / 2000 [ 50%] (Warmup)
#> Chain 1: Iteration: 1001 / 2000 [ 50%] (Sampling)
#> Chain 1: Iteration: 1200 / 2000 [ 60%] (Sampling)
#> Chain 1: Iteration: 1400 / 2000 [ 70%] (Sampling)
#> Chain 1: Iteration: 1600 / 2000 [ 80%] (Sampling)
#> Chain 1: Iteration: 1800 / 2000 [ 90%] (Sampling)
#> Chain 1: Iteration: 2000 / 2000 [100%] (Sampling)
#> Chain 1:
#> Chain 1: Elapsed Time: 4.393 seconds (Warm-up)
#> Chain 1: 5.113 seconds (Sampling)
#> Chain 1: 9.506 seconds (Total)
#> Chain 1:
predict(br_fit)
#> [1] NA NA 3.111925 3.038940 3.108320 4.978573 3.556128 4.392861
#> [9] 3.324947 3.697454 4.186017 5.256282 4.150889 4.602041 3.813495 3.978832
#> [17] 4.107675 3.412406 3.866068 3.570002 4.050164 3.811569 4.391107 4.467650
#> [25] 3.885314 3.803299 5.091503 4.191165 5.187422 4.862474 3.569223 4.384790
#> [33] 3.821754 4.367063 4.813696 3.693666 4.361824 5.414457 4.442852 4.041422
#> [41] 2.964934 2.858787 4.374155 5.081645 3.797854 5.218570 2.600773 NA
#> [49] 2.545678 NA 2.619134 3.206757 4.191966 3.708508 5.145146 5.642685
#> [57] 3.601663 NA 2.363356 3.245703 4.477596 3.025451 3.688626 4.595827
#> [65] 4.705550 3.282014 3.727020 4.793072 4.725245 4.898412 5.619271 5.559445
#> [73] 3.974080 4.424304 4.665970 NA
# full posterior prediction samples
predictions_samples <- predict(br_fit, summary=FALSE)
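# A hedged sketch, not part of the original example: with summary = FALSE the
# result is a numeric matrix with one column per predicted phase, so per-phase
# posterior means and quantiles can be computed directly from the samples.
phase_means <- colMeans(predictions_samples, na.rm = TRUE)
phase_quantiles <- apply(predictions_samples, 2, quantile,
                         probs = c(0.055, 0.945), na.rm = TRUE)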
# }