
Common Issues

"km: command not found"

The km command isn't on your PATH. This usually means:

  • pip install location: pip placed the km entry point in a user-local scripts directory that isn't on your PATH. You can bypass PATH by invoking the module directly:

    python -m knowmarks.cli status
    
    Or add the pip scripts directory to your PATH. On macOS/Linux, this is typically ~/.local/bin/.

  • Virtual environment: If you installed in a venv, make sure it's activated:

    source .venv/bin/activate
    km status
    

  • uv: If using uv, install with:

    uv pip install 'knowmarks[all,embeddings]'
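If you're unsure where pip put the km script, you can ask Python for the user-scheme scripts directory and add it to your PATH. A minimal sketch for macOS/Linux (the `posix_user` scheme; Windows uses a different scheme):

```shell
# Ask Python where user-installed scripts live, then put that dir on PATH.
SCRIPTS_DIR="$(python3 -c 'import sysconfig; print(sysconfig.get_path("scripts", "posix_user"))')"
echo "$SCRIPTS_DIR"               # typically ~/.local/bin on macOS/Linux
export PATH="$SCRIPTS_DIR:$PATH"
```

Add the `export` line to your shell profile (e.g. ~/.bashrc or ~/.zshrc) to make it permanent.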
    

Embedding Model Download Is Slow

On first run with the embeddings extra, Knowmarks downloads the BAAI/bge-small-en-v1.5 model (~130MB). This is a one-time download. If it's slow:

  • Check your internet connection
  • The model is cached in your data directory under models/. Once downloaded, no network is needed.
  • Alternatively, use Ollama for embeddings — you control when models are downloaded.
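The Ollama route from the last bullet comes down to an environment variable pointing at your local server. A sketch (the endpoint variable name is the one used in the Docker section below; the Ollama model name is only an example, pull whichever embedding model you prefer):

```shell
# Pull an embedding model on your own schedule, e.g.:
#   ollama pull nomic-embed-text
# Then point Knowmarks at the local Ollama server:
export KNOWMARKS_EMBEDDING_ENDPOINT=http://localhost:11434
```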

Search Returns No Results

  • Empty collection: Run km status to check your item count. If zero, save some items first.
  • Pending enrichment: After a bulk import, items need time for content extraction and embedding. Check Pulse for enrichment progress.
  • Missing embeddings: If items were saved without embeddings (e.g., embedding provider was unavailable), run the reembed governance action from the MCP or dashboard.

Dashboard Won't Load

If km serve starts but the browser shows nothing:

  1. Check the terminal for error output
  2. Verify the port isn't in use: lsof -i :3749
  3. Try a different port: KNOWMARKS_PORT=8080 km serve
  4. Check that the web extra is installed: pip install 'knowmarks[web]'
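Note that lsof isn't installed on every system; a portable way to check the port in step 2 is to try binding it yourself. A sketch assuming the default port 3749:

```shell
# Try to bind the dashboard port; failure means something already holds it.
python3 - <<'EOF'
import socket
s = socket.socket()
try:
    s.bind(("127.0.0.1", 3749))
    print("port 3749 is free")
except OSError:
    print("port 3749 is in use -- try: KNOWMARKS_PORT=8080 km serve")
finally:
    s.close()
EOF
```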

macOS Gatekeeper Warning

If macOS blocks Knowmarks with "cannot be opened because the developer cannot be verified":

  1. Right-click the application and select Open
  2. Click Open in the dialog

This only needs to be done once.

LLM Features Not Working

If conversational search returns standard results instead of generated answers:

  1. Check that an LLM endpoint is configured:
    echo $KM_LLM_URL
    
  2. Verify the endpoint is reachable:
    curl -s $KM_LLM_URL/models | head -5
    
  3. Check that LLM is enabled:
    echo $KM_LLM_ENABLED  # Should be "1" or unset
    
  4. For LM Studio or Ollama, make sure the server is running.
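Steps 1 and 2 can be rolled into one shell helper. A sketch (the function name is ours, not part of Knowmarks; it takes the URL as an argument so it works with any OpenAI-compatible server):

```shell
# Check that an LLM endpoint is configured and answering.
check_llm() {
  url="$1"
  if [ -z "$url" ]; then
    echo "KM_LLM_URL is not set"
    return 1
  fi
  if curl -fsS "$url/models" >/dev/null 2>&1; then
    echo "LLM endpoint reachable: $url"
  else
    echo "LLM endpoint unreachable: $url"
    return 1
  fi
}
```

Run `check_llm "$KM_LLM_URL"`; if the endpoint is unreachable, start LM Studio or Ollama and retry.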

Docker: Can't Connect to Ollama

When using the slim Docker image with Ollama on the host:

-e KNOWMARKS_EMBEDDING_ENDPOINT=http://host.docker.internal:11434

host.docker.internal resolves to the host machine from inside a container on macOS and Windows. On Linux, run the container with --network host (Ollama is then reachable at http://localhost:11434) or pass the host's IP address instead.
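Put together, a run command might look like the following. This is an illustrative sketch: the image name, tag, and port mapping are assumptions, not the published image.

```shell
# macOS/Windows: reach the host's Ollama via host.docker.internal
docker run -p 3749:3749 \
  -e KNOWMARKS_EMBEDDING_ENDPOINT=http://host.docker.internal:11434 \
  knowmarks:slim

# Linux: share the host network, so localhost reaches Ollama directly
docker run --network host \
  -e KNOWMARKS_EMBEDDING_ENDPOINT=http://localhost:11434 \
  knowmarks:slim
```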

Import Fails Partway

Bulk imports continue past individual failures. After import:

  1. Run km status to see how many items were imported
  2. Check Pulse for items needing triage
  3. Use km find to verify your content is searchable

If the entire import fails, check that the source file exists and is in the expected format (HTML for browser bookmark exports), or, for service imports, that your API credentials are valid.
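For HTML bookmark files, a quick pre-flight check is possible: browser exports start with the de-facto NETSCAPE-Bookmark-file doctype. A sketch (the helper name is ours):

```shell
# Return success if the file starts with the standard bookmark-export header.
looks_like_bookmarks() {
  head -n 1 "$1" 2>/dev/null | grep -qi 'NETSCAPE-Bookmark-file'
}
```

For example, `looks_like_bookmarks exported.html && echo ok` before importing a file you exported from your browser.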