// Parameter for Trial.run() to enable network sharing in memory among Sessions (Hogwild!).
// - false: disable Hogwild!
// - "shared": enable Hogwild!, share network parameters at all times
// - "synced": enable Hogwild!, sync network parameters only after each training step
"distributed": false|"shared"|"synced",
// Whether to use rigorous eval.
// Note that this is slower, since it spawns separate vector environments to run evaluations, but it is more rigorous.
// This is useful when the agent or environment behaves differently under LAB_MODE=eval.
// However, for all SLM-Lab-supported environments this is optional, since the total eval rewards can be inferred from the training environments (see env.wrapper.TrackReward).
// - int: if > 0, spawn that many vector environments to run eval separately
// - 0|null: infer eval scores from the training checkpoints
// - defaults to null for performance
"rigorous_eval": int|null,
// Frequency at which to checkpoint on the evaluation environment: log metrics and save the model.
// This uses env.clock, which counts the total timesteps summed across all vector environments.
"eval_frequency": int,
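// Example (illustrative value): with 4 vector environments, "eval_frequency": 10000
// triggers an eval checkpoint every 10000 summed timesteps, i.e. roughly every
// 2500 steps per environment.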
// Frequency at which to checkpoint on the training environment: log metrics and save the model.
// This uses env.clock, which counts the total timesteps summed across all vector environments.
"log_frequency": int,
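// Example (illustrative value): "log_frequency": 10000 writes a training checkpoint
// (metrics log + model save) every 10000 summed timesteps.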
// The maximum number of Sessions to spawn per Trial, indexed 0 to (max_session - 1)
"max_session": int,
// The maximum number of Trials to spawn per Experiment, indexed 0 to (max_trial - 1)
"max_trial": int,
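// Example (illustrative values): "max_session": 4, "max_trial": 2 runs an Experiment
// of 2 Trials (trial 0-1), each spawning 4 Sessions (session 0-3) of the same spec
// with different random seeds.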
// The number of CPUs to allocate for Ray.tune per Trial.
// If null, the trial is given {max_session} CPUs
"num_cpus": int|null,
// The number of GPUs to allocate for Ray.tune per Trial.
// If null, the trial is given {max_session} GPUs
"num_gpus": int|null,
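// Example (illustrative values): with "max_session": 4, leaving both as null asks
// Ray.tune for 4 CPUs and 4 GPUs per Trial; setting them explicitly (e.g. 2 CPUs,
// 0 GPUs) caps each Trial's resource request for Ray's scheduler.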