This quick guide assumes you have already set up Auto-GPT. If you haven’t, follow our in-depth guide on the Finxter blog.
Use ./run.sh --help (Linux/macOS) or .\run.bat --help (Windows) to list the available command line arguments. If you run Auto-GPT with Docker, substitute docker-compose run --rm auto-gpt for ./run.sh in the examples below.
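For example, assuming your docker-compose.yml follows the standard Auto-GPT Docker setup, these two invocations do the same thing:
./run.sh --help
docker-compose run --rm auto-gpt --help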
Common Auto-GPT arguments include --ai-settings <filename>, --prompt-settings <filename>, and --use-memory <memory-backend>. Some arguments have short forms, such as -m for --use-memory. Replace anything in angle brackets (<>) with your desired values.
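For example, to load a custom AI settings file and use the Redis memory backend (the filename below is a placeholder, substitute your own):
./run.sh --ai-settings my_ai_settings.yaml --use-memory redis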
Enable Text-to-Speech using ./run.sh --speak.
Use continuous mode (potentially hazardous: the AI runs without asking for your authorization and may run indefinitely) with ./run.sh --continuous. Exit with Ctrl+C.
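If your Auto-GPT version supports the --continuous-limit flag, you can cap the number of continuous steps instead of letting it run unbounded, for example:
./run.sh --continuous --continuous-limit 5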
Enable Self-Feedback mode by entering S in the input field; note that it increases token usage and therefore API costs.
Run in GPT-3.5 only mode with ./run.sh --gpt3only, or set SMART_LLM_MODEL in .env to gpt-3.5-turbo. For GPT-4 only mode, use ./run.sh --gpt4only (note that this raises API costs).
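To force the smart model to GPT-3.5 via configuration, add or edit this line in your .env file:
SMART_LLM_MODEL=gpt-3.5-turbo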
Find activity and error logs in ./output/logs. To print debug logs to the console, run ./run.sh --debug.
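For example, you can follow the most recent log while Auto-GPT runs (the exact log filename may differ between versions; activity.log is an assumption here):
tail -f ./output/logs/activity.log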
Disable command categories by setting DISABLED_COMMAND_CATEGORIES in .env. For instance, to disable the coding-related features, use:
DISABLED_COMMAND_CATEGORIES=autogpt.commands.analyze_code,autogpt.commands.execute_code,autogpt.commands.git_operations
Okay, this was dry, here’s a more fun article:
Recommended: 30 Creative AutoGPT Use Cases to Make Money Online
https://www.sickgaming.net/blog/2023/05/...and-usage/

