diff --git a/README.md b/README.md
index 777c19b6..363a2113 100644
--- a/README.md
+++ b/README.md
@@ -126,17 +126,6 @@ Then browse to `http://localhost:7860/?__theme=dark`
 
-##### AMD GPU on Windows
-
-1) Use `requirements_cpu_only.txt` or `requirements_cpu_only_noavx2.txt` in the command above.
-
-2) Manually install llama-cpp-python using the appropriate command for your hardware: [Installation from PyPI](https://github.com/abetlen/llama-cpp-python#installation-with-hardware-acceleration).
-  * Use the `LLAMA_HIPBLAS=on` toggle.
-  * Note the [Windows remarks](https://github.com/abetlen/llama-cpp-python#windows-remarks).
-
-3) Manually install AutoGPTQ: [Installation](https://github.com/PanQiWei/AutoGPTQ#install-from-source).
-  * Perform the from-source installation - there are no prebuilt ROCm packages for Windows.
-
 ##### Manual install
 
 The `requirements*.txt` above contain various wheels precompiled through GitHub Actions. If you wish to compile things manually, or if you need to because no suitable wheels are available for your hardware, you can use `requirements_nowheels.txt` and then install your desired loaders manually.
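
For reference, the removed section's step 2 amounted to building llama-cpp-python from source with the `LLAMA_HIPBLAS=on` toggle it mentions. A minimal sketch of that build invocation is below; the `CMAKE_ARGS`/`FORCE_CMAKE` variable names and the `--no-cache-dir` flag are assumptions taken from common llama-cpp-python build instructions, not from this diff, so verify them against the upstream README before use.

```shell
# Sketch only: CMAKE_ARGS and FORCE_CMAKE are assumed build toggles for
# llama-cpp-python; LLAMA_HIPBLAS=on (from the removed section) selects
# the ROCm/hipBLAS backend when compiling from source.
CMAKE_ARGS="-DLLAMA_HIPBLAS=on"
FORCE_CMAKE=1
# The install step itself (shown rather than executed here):
echo "CMAKE_ARGS=\"$CMAKE_ARGS\" FORCE_CMAKE=$FORCE_CMAKE pip install llama-cpp-python --no-cache-dir"
```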