# Common Issues
"km: command not found"¶
The `km` command isn't on your PATH. This usually means:
- **pip install location**: pip installed to a user-local directory not on your PATH. Try:

  ```
  python -m knowmarks.cli status
  ```

  Or add the pip scripts directory to your PATH. On macOS/Linux, this is typically `~/.local/bin/`.
- **Virtual environment**: If you installed in a venv, make sure it's activated:

  ```
  source .venv/bin/activate
  km status
  ```

- **uv**: If using uv, install with:

  ```
  uv pip install 'knowmarks[all,embeddings]'
  ```
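To find where pip put the `km` script and add that directory to your PATH for the current session, a sketch like the following works on macOS/Linux (the exact user-base path varies by platform and Python version):

```shell
# Locate the user-install base; pip places console scripts under its bin/ directory
python3 -m site --user-base
# Add the typical user scripts directory to PATH for this shell session
export PATH="$HOME/.local/bin:$PATH"
```

To make the change permanent, append the `export` line to your shell profile (`~/.bashrc` or `~/.zshrc`).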
## Embedding Model Download Is Slow
On first run with the `embeddings` extra, Knowmarks downloads the `BAAI/bge-small-en-v1.5` model (~130 MB). This is a one-time download. If it's slow:
- Check your internet connection.
- The model is cached in your data directory under `models/`. Once downloaded, no network access is needed.
- Alternatively, use Ollama for embeddings, so you control when models are downloaded.
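To confirm whether the model is already cached, list the `models/` directory under your data directory. The path below is an assumption (an XDG-style default); check your actual data directory location:

```shell
# Assumed data-dir location; Knowmarks may use a different default on your platform
KM_DATA="${XDG_DATA_HOME:-$HOME/.local/share}/knowmarks"
ls -lh "$KM_DATA/models" 2>/dev/null || echo "model not downloaded yet"
```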
## Search Returns No Results
- **Empty collection**: Run `km status` to check your item count. If it's zero, save some items first.
- **Pending enrichment**: After a bulk import, items need time for content extraction and embedding. Check Pulse for enrichment progress.
- **Missing embeddings**: If items were saved without embeddings (e.g., the embedding provider was unavailable), run the `reembed` governance action from the MCP or dashboard.
## Dashboard Won't Load
If `km serve` starts but the browser shows nothing:
- Check the terminal for error output.
- Verify the port isn't in use: `lsof -i :3749`
- Try a different port: `KNOWMARKS_PORT=8080 km serve`
- Check that the `web` extra is installed: `pip install 'knowmarks[web]'`
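If `lsof` isn't available on your system, a portable way to check whether the default dashboard port is free is to try binding it:

```shell
# Binding succeeds only if nothing else is listening on the port
python3 -c 'import socket; s = socket.socket(); s.bind(("127.0.0.1", 3749)); s.close(); print("port 3749 is free")'
```

If the bind raises an "Address already in use" error, something else holds the port, and `KNOWMARKS_PORT` is the way out.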
## macOS Gatekeeper Warning
If macOS blocks Knowmarks with "cannot be opened because the developer cannot be verified":
- Right-click the application and select Open
- Click Open in the dialog
This only needs to be done once.
## LLM Features Not Working
If conversational search returns standard results instead of generated answers:
- Check that an LLM endpoint is configured: `echo $KM_LLM_URL`
- Verify the endpoint is reachable: `curl -s $KM_LLM_URL/models | head -5`
- Check that LLM is enabled: `echo $KM_LLM_ENABLED` (should be "1" or unset)
- For LM Studio or Ollama, make sure the server is running.
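As a concrete example, a minimal local-LLM configuration for Ollama's OpenAI-compatible API might look like the sketch below. The URL and port are assumptions for a default Ollama install; LM Studio typically serves on port 1234 instead:

```shell
# Point Knowmarks at a local Ollama server (endpoint URL is an assumption)
export KM_LLM_URL="http://localhost:11434/v1"
export KM_LLM_ENABLED=1
echo "LLM endpoint: $KM_LLM_URL (enabled: $KM_LLM_ENABLED)"
```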
## Docker: Can't Connect to Ollama
When using the slim Docker image with Ollama on the host:

```
-e KNOWMARKS_EMBEDDING_ENDPOINT=http://host.docker.internal:11434
```
`host.docker.internal` resolves to the host machine from inside Docker on macOS and Windows. On Linux, use `--network host` instead, or pass the host's IP address.
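Putting it together, full `docker run` invocations might look like the sketch below. The image tag `knowmarks:slim` and the `-p 3749:3749` port mapping are assumptions; substitute your actual image name:

```shell
# macOS/Windows: host.docker.internal reaches the host's Ollama server
docker run -p 3749:3749 \
  -e KNOWMARKS_EMBEDDING_ENDPOINT=http://host.docker.internal:11434 \
  knowmarks:slim

# Linux: share the host network so localhost reaches Ollama directly
docker run --network host \
  -e KNOWMARKS_EMBEDDING_ENDPOINT=http://localhost:11434 \
  knowmarks:slim
```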
## Import Fails Partway
Bulk imports continue past individual failures. After an import:

- Run `km status` to see how many items were imported.
- Check Pulse for items needing triage.
- Use `km find` to verify your content is searchable.
If the entire import fails, check that the source file exists and is in the expected format (HTML for bookmark exports), or that your API credentials for the service are valid.