SLM Lab
v4.2.0


Help

Permission denied when running bin/setup

This means you don't have sufficient privileges on your machine. Run it with sudo:

sudo ./bin/setup
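
If sudo doesn't help (for example you already own the files), the error can also mean the script simply lacks the executable bit. A throwaway-script sketch of the symptom and the chmod fix:

```shell
# a freshly written script is not executable by default,
# so running it directly fails with "Permission denied"
printf '#!/bin/sh\necho ok\n' > demo_setup.sh
./demo_setup.sh 2>/dev/null || echo "permission denied"

# add the executable bit and it runs
chmod +x demo_setup.sh
./demo_setup.sh    # prints: ok
rm demo_setup.sh
```

The same fix for the real script is chmod +x bin/setup.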

conda activate lab fails

This happens when Conda complains that certain variables should not be in your PATH:

CommandNotFoundError: Your shell has not been properly configured to use 'conda activate'. If your shell is Bash or a Bourne variant, enable conda for the current user with

$ echo ". /home/ubuntu/miniconda3/etc/profile.d/conda.sh" >> ~/.bashrc

or, for all users, enable conda with

$ sudo ln -s /home/ubuntu/miniconda3/etc/profile.d/conda.sh /etc/profile.d/conda.sh

The options above will permanently enable the 'conda' command, but they do NOT put conda's base (root) environment on PATH. To do so, run

$ conda activate

in your terminal, or to put the base environment on PATH permanently, run

$ echo "conda activate" >> ~/.bashrc

Previous to conda 4.4, the recommended way to activate conda was to modify PATH in your ~/.bashrc file. You should manually remove the line that looks like

export PATH="/home/ubuntu/miniconda3/bin:$PATH"

^^^ The above line should NO LONGER be in your ~/.bashrc file! ^^^

To fix it, do the first thing it recommends and refresh your terminal session:

echo ". /home/ubuntu/miniconda3/etc/profile.d/conda.sh" >> ~/.bashrc
source ~/.bashrc
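
To remove the obsolete export line itself, here is a sed sketch (assuming ~/.bashrc, the default /home/ubuntu/miniconda3 path from the message above, and GNU sed; adjust for your setup):

```shell
# keep a backup, then delete the pre-4.4 Miniconda PATH export line
touch ~/.bashrc                       # no-op if the file already exists
cp ~/.bashrc ~/.bashrc.bak
sed -i '/miniconda3\/bin:\$PATH/d' ~/.bashrc
```

Then open a new shell or source ~/.bashrc for the change to take effect.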

Google Colab / Jupyter setup

For users of Google Colab or Jupyter, simply use the Conda environment lab set up by the SLM Lab installation as the notebook kernel. SLM Lab setup installs Conda into the home directory ~/miniconda3. Note that each bash command in a notebook cell runs in an entirely new session, so we have to expose the lab Conda environment directly before running the Python command. Furthermore, notebooks have no GUI, so the lab has to be run headless. The following is an example of running the quickstart:

%%bash
# since each shell is a new bash session, this sources the Conda environment directly
export PATH=~/miniconda3/envs/lab/bin:$PATH
# and we run it in headless mode (Colab has no GUI)
# NOTE since each cell evaluates as a session,
# the logs will only be printed in the cell output when the command is finished,
# i.e. logs don't stream in here, so wait a few minutes to see the output
xvfb-run -a python run_lab.py slm_lab/spec/demo.json dqn_cartpole dev

Please find an example Colab notebook here, with credit to @piosif97 for the initiative and discussion that led to it.

GLIBCXX_3.4.21 version errors due to gcc, g++, libstdc++

You may encounter libgcc errors like:

ImportError: /home/deploy/miniconda3/envs/lab/lib/python3.6/site-packages/torch/../../.././libstdc++.so.6: version `GLIBCXX_3.4.21' not found (required by /home/deploy/miniconda3/envs/lab/lib/python3.6/site-packages/ray/pyarrow_files/pyarrow/lib.cpython-36m-x86_64-linux-gnu.so)

Try installing libgcc in Conda:

  conda install libgcc

NVIDIA GPU driver problem

If you receive errors similar to the following when trying to use GPU:

NVIDIA-SMI has failed because it couldn't communicate with the NVIDIA driver

Reinstall your NVIDIA GPU driver using this instruction.

Building and setting up a Linux GPU server

If you build your own desktop and want a quick and smooth setup for an Ubuntu GPU server, refer to this gist.

Breakage from SLM-Lab update

Make sure you also install the updated packages after pulling the repo. Run:

git pull
./bin/setup

JSON parsing issue in spec

Newer dependencies of SLM Lab may cause issues when parsing JSON spec files. SLM Lab uses a looser JSON syntax that allows a trailing comma after the last element of an enumerable. If you encounter a JSON parsing issue, simply edit the spec file to remove these trailing commas.
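
As a quick illustration of why strict parsers fail, here is a check using python -m json.tool as a strict parser on a made-up spec fragment (not a real SLM Lab spec):

```shell
# trailing commas after the last elements: strict JSON parsers reject this
echo '{"agent": [{"name": "dqn",},]}' | python3 -m json.tool 2>/dev/null \
  || echo "parse failed: remove the trailing commas"

# the same fragment without trailing commas parses fine
echo '{"agent": [{"name": "dqn"}]}' | python3 -m json.tool
```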

Vizdoom installation fails or not found

Manually install it:

conda activate lab
sudo pip install vizdoom

How to kill stuck processes?

You can see the running processes using tools like glances. Use the following commands to kill processes by their names; you may need to use sudo.

pkill -f run_lab
pkill -f slm-env
pkill -f ipykernel
pkill -f ray
pkill -f orca
pkill -f Xvfb
ps aux | grep -i Unity | awk '{print $2}' | xargs sudo kill -9
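
Like pkill -f, pgrep -f matches patterns against the full command line, so you can preview what a pattern would kill first (run_lab here is just an example pattern):

```shell
# list PID and command line of matching processes before killing them
pgrep -af run_lab || echo "no processes matching run_lab"
```

Once the list looks right, run the corresponding pkill -f command from above.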

No GUI or images saved on a headless remote server

When running SLM Lab on a remote server, you may get NoSuchDisplayException: Cannot connect to "None", or your graphs may not be generated. This is because servers are typically headless, i.e. they have no display, and the error occurs when you try to render without one.

First, try setting environment variable RENDER=false before the lab command, for example:

RENDER=false python run_lab.py slm_lab/spec/demo.json dqn_cartpole train

Despite its simplicity, this option comes with the caveat that plots from Plotly cannot be generated. The safer option is to install Xvfb and prepend your command with xvfb-run -a. For example:

xvfb-run -a python run_lab.py slm_lab/spec/demo.json dqn_cartpole train

If the problem persists, install OpenGL and/or configure the Nvidia driver on your server.

How to forward GUI from a remote server?

If you are running via ssh and want GUI forwarding from a server, do:

  • install X11 on your server (follow instructions here).
  • install XQuartz/Xming on your laptop.
  • do ssh with a -X flag, e.g. ssh -X foo@bar.

How to sync data from a remote server?

SLM Lab produces a lot of data, which is then zipped for convenient transferring/syncing. We use Dropbox to upload these zip files. Follow this instruction to install the Dropbox CLI.

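One lightweight alternative (an assumption on my part; the SLM Lab docs point to a Dropbox CLI flow) is plain rsync over ssh. The snippet demonstrates the flags locally; the commented line shows the remote form, with placeholder host and paths:

```shell
# remote form (placeholders): rsync -avz user@server:~/SLM-Lab/data/ ./data/
# local demonstration: rsync copies the data directory, preserving attributes
mkdir -p demo_remote/data demo_local
echo "trial metrics" > demo_remote/data/experiment_metrics.csv
rsync -a demo_remote/data/ demo_local/data/
ls demo_local/data/    # prints: experiment_metrics.csv
rm -r demo_remote demo_local
```
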
What is SLM?

SLM stands for Strange Loop Machine, in homage to Hofstadter's iconic book Gödel, Escher, Bach: An Eternal Golden Braid. This lab was created as part of a long-term project to try out AI ideas heavily influenced by it.

Reporting Issues

Can't find the issue you encountered? Report new issues on GitHub; it helps all of us.

