Ask HN: Reinforcement learning for single, lower-end graphics cards?

49 points | by DrNuke 16 days ago

7 comments

  • pepemysurprised 16 days ago
    For many RL problems you don't really need GPUs, because the networks used are relatively simple compared to supervised learning, and most simulations are CPU-bound. Many RL problems are data-constrained, so running simulations (on the CPU) is the bottleneck, not the network.
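
    An easy way to check this on your own setup is to time the two halves of the loop separately. A rough sketch (assuming the classic gym API and a small PyTorch MLP; CartPole is just a stand-in for whatever env you care about):

      # Time environment stepping (CPU) vs. the policy forward pass.
      import time
      import gym
      import torch
      import torch.nn as nn

      env = gym.make("CartPole-v1")
      policy = nn.Sequential(nn.Linear(4, 64), nn.Tanh(), nn.Linear(64, 2))

      obs = env.reset()
      env_time = net_time = 0.0
      for _ in range(10000):
          t0 = time.perf_counter()
          with torch.no_grad():
              action = int(policy(torch.as_tensor(obs, dtype=torch.float32)).argmax())
          t1 = time.perf_counter()
          obs, reward, done, info = env.step(action)
          t2 = time.perf_counter()
          if done:
              obs = env.reset()
          net_time += t1 - t0
          env_time += t2 - t1

      print("env: %.2fs  network: %.2fs" % (env_time, net_time))
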
    • dimatura 15 days ago
      Yes. Unfortunately my experience in this area is about four years old, so it may be obsolete at this point. But at that time, for our problem -- which used many of the typical benchmark gym simulators -- the main bottleneck was the CPU, not the GPU.

      I suspect the equation changes depending on how complex the neural network is (if it's simple, not much is gained from the GPU), whether the simulation can take advantage of GPUs (the ones we used didn't, but for 3D graphics-heavy simulation and other kinds of computation I'm sure it can help), and the algorithm -- some algorithms rely more on online evaluation, while others make more of an effort to reuse older rollouts. (An extreme case is offline RL, which has also attracted a lot of interest recently. Since you were asking for references, this might be worth a read: https://arxiv.org/abs/2005.01643)
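
      To make the rollout-reuse point concrete: off-policy methods keep a replay buffer, so each expensive simulator step feeds many gradient updates instead of being thrown away after one. A minimal sketch (the names are mine, not from any particular library):

        import random
        from collections import deque

        class ReplayBuffer:
            def __init__(self, capacity=100000):
                self.data = deque(maxlen=capacity)   # drops oldest when full

            def add(self, s, a, r, s_next, done):
                self.data.append((s, a, r, s_next, done))

            def sample(self, batch_size=32):
                return random.sample(self.data, batch_size)

        # On-policy: collect a fresh rollout, update once, discard.
        # Off-policy: after every env step, sample several old batches here.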

      • KMnO4 14 days ago
        Yes. GPUs are extremely good at doing millions of small repetitive calculations, and not great at much else.

        My capstone project used RL on a Raspberry Pi to train a controller hardware-in-the-loop (essentially learning when to open and close valves based on sensor input). It was incredibly slow because it couldn’t be parallelized (without buying additional hardware at $500 a unit). Lots of professors asked why a Raspberry Pi was chosen when we had high-end GPUs in the lab, and I had to explain that the Pi was NOT the bottleneck, and in fact stayed idle 95% of the time.

        • seg_lol 14 days ago
          Would virtual physical simulation have helped in this case? What was the phase lag between valve actuation and the measured signal?

          Could you create a metalearner that can do one-shot learning when it gets access to the physical hardware?

        • irrk 15 days ago
          Agreed, deep architectures are really only needed for feature extraction. There have been a few papers showing that even for these very deep setups, the actual policy can almost always be fully captured by a small MLP.
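
          One way those papers demonstrate it is distillation: run the big trained policy, then fit a small MLP to its actions by behavioral cloning. A rough PyTorch sketch (the teacher is hypothetical and the sizes are placeholders):

            import torch
            import torch.nn as nn

            obs_dim, n_actions = 64, 4              # placeholder sizes
            student = nn.Sequential(nn.Linear(obs_dim, 32), nn.ReLU(),
                                    nn.Linear(32, n_actions))
            opt = torch.optim.Adam(student.parameters(), lr=1e-3)
            loss_fn = nn.CrossEntropyLoss()

            def distill_step(teacher, states):
                # label each state with the big policy's action, fit the MLP
                with torch.no_grad():
                    targets = teacher(states).argmax(dim=-1)
                loss = loss_fn(student(states), targets)
                opt.zero_grad()
                loss.backward()
                opt.step()
                return loss.item()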
          • algo_trader 15 days ago
            Can you share some recent references?

            (Are you referring to the early papers showing that MPC and LQR solve SOME problems faster?!)

            • lqr 14 days ago
              One example with model-based RL is "World Models" by Ha and Schmidhuber. They pre-train an autoencoder to reduce image observations to vectors, then pre-train an RNN to predict future reduced vectors, then use a parameter-space global optimization algorithm (but any RL algorithm would work) to train a policy that's linear in the concatenated observation vector and RNN hidden state.

              The important thing here is that the image encoder and the RNN weren't trained end-to-end with the policy. The learned "features" captured enough information to be an effective policy input, even though they only needed to be useful for predicting future states.

              It's also interesting that the image encoder was trained separately from the RNN. I think that only worked because the test environments were "almost" fully observable -- there is world state that cannot be inferred from a single image observation, but knowing that state is not necessary for a good policy.
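
              To make it concrete, the learned controller collapses to a single affine map; a sketch of its shape (mine, not the paper's code; the numbers are roughly the paper's CarRacing sizes):

                import numpy as np

                z_dim, h_dim, n_actions = 32, 256, 3

                W = np.zeros((n_actions, z_dim + h_dim))   # the only parameters the
                b = np.zeros(n_actions)                    # policy optimizer touches

                def controller(z, h):
                    # z: frozen encoder latent, h: frozen RNN hidden state
                    return np.tanh(W @ np.concatenate([z, h]) + b)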

            • eutectic 15 days ago
              I don't really see the dichotomy. Surely the features you need to learn depend on the task? Or do you mean running linear/shallow RL on top of unsupervised learning?
            • imvetri 14 days ago
              Once a model is built, why do we need high computation?
            • sxp 15 days ago
              Instead of using your low-end GPU, you could get a TPU like https://coral.ai/docs/edgetpu/benchmarks/. Or rent a single GPU in the cloud, which costs less than a dollar an hour and can be free in certain cases.

              In terms of APIs, you can try WebGPU, which is nominally meant for JavaScript in the browser, but there are native interfaces for it such as Rust's wgpu: https://github.com/gfx-rs/wgpu

              • hikarudo 15 days ago
                The Coral TPU is made for inference, not training, with a few exceptions.
              • bArray 15 days ago
                This is mostly in the realm of computer vision, but I would recommend checking out AlexeyAB's fork of Darknet: https://github.com/AlexeyAB/darknet It's got decent CUDA acceleration; I personally run a GTX 960M for training.
                • artifact_44 15 days ago
                  Check out Andrej Karpathy's ConvNetJS and deep Q-learning web apps.
                  • Tenoke 15 days ago
                    Convnet.js hasn't been updated in over half a decade.
                  • mate543 15 days ago
                    Not answering the question directly, but you could use a free GPU from Colab: https://colab.research.google.com/github/tensorflow/docs/blo... . Note that you need to be backing up checkpoints if you intend to run for more than a couple of hours.
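
                    The usual pattern is to mount Drive and save there every so often, so a disconnect costs you minutes instead of hours. A sketch (assuming PyTorch; the path and the stand-in model are placeholders):

                      import torch
                      import torch.nn as nn
                      from google.colab import drive

                      drive.mount("/content/drive")   # persists across VM resets
                      ckpt = "/content/drive/MyDrive/rl_ckpt.pt"

                      model = nn.Linear(4, 2)         # stand-in for your policy
                      opt = torch.optim.Adam(model.parameters())

                      def save_checkpoint(step):      # call every N training steps
                          torch.save({"model": model.state_dict(),
                                      "opt": opt.state_dict(),
                                      "step": step}, ckpt)
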
                    • bionhoward 15 days ago
                      GPU memory is a key limit here, and RL worsens the problem because it requires some eligibility trace or memory system. One option you could try is to store past (S, A, R, S') tuples on disk using something like DeepMind's Reverb, with a small batch size and a simpler model. Or, as mentioned in other comments, you can just use the CPU and RAM, which often has much higher capacity depending on your rig.
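
                      If you don't want a full Reverb setup, even numpy memmaps give you a disk-backed ring buffer whose size is bounded by disk rather than RAM. A generic sketch (not Reverb's API; sizes are placeholders):

                        import numpy as np

                        capacity, obs_dim = 200000, 84 * 84
                        S  = np.memmap("s.dat", np.uint8, "w+", shape=(capacity, obs_dim))
                        A  = np.memmap("a.dat", np.int64, "w+", shape=(capacity,))
                        R  = np.memmap("r.dat", np.float32, "w+", shape=(capacity,))
                        S2 = np.memmap("s2.dat", np.uint8, "w+", shape=(capacity, obs_dim))
                        size = 0

                        def add(s, a, r, s2):
                            global size
                            i = size % capacity        # overwrite oldest when full
                            S[i], A[i], R[i], S2[i] = s, a, r, s2
                            size += 1

                        def sample(batch=32):
                            idx = np.random.randint(0, min(size, capacity), batch)
                            return S[idx], A[idx], R[idx], S2[idx]
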
                      • vpj 15 days ago
                        Depends on what sort of RL you are doing. If you are trying to train agents to play small games from vision, the agent will need a small CNN to process images. That will need a GPU, and what you have should be enough.

                        I was training on Atari for a while with a 1080 Ti. The games run on the CPU, so you need a decent CPU as well.
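
                        For scale, the standard DQN conv net for Atari (the Nature 2015 sizes) is tiny by supervised-learning standards; a PyTorch sketch:

                          import torch.nn as nn

                          dqn = nn.Sequential(
                              nn.Conv2d(4, 32, kernel_size=8, stride=4),  # 4 stacked 84x84 frames
                              nn.ReLU(),
                              nn.Conv2d(32, 64, kernel_size=4, stride=2),
                              nn.ReLU(),
                              nn.Conv2d(64, 64, kernel_size=3, stride=1),
                              nn.ReLU(),
                              nn.Flatten(),
                              nn.Linear(64 * 7 * 7, 512),   # 84 -> 20 -> 9 -> 7 spatially
                              nn.ReLU(),
                              nn.Linear(512, 18),           # up to 18 Atari actions
                          )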