
Support for different LLM hosts (remote or local, compatible with OpenAI API interface) #6


Merged: 1 commit into slowcoder360:master on May 30, 2025

Conversation

@P4o1o (Contributor) commented on May 24, 2025

This PR adds two flags to the scan command:

  • --url <llm_host_url> / -u <llm_host_url> specifies the URL of the LLM backend (if not given, the program uses the OpenAI URL: https://api.openai.com)
  • --model <model_name> / -m <model_name> selects the model to use (if not given, the program uses gpt-4.1-nano)

Example:

# example with Ollama on localhost, using the default Ollama port
vibesafe scan --url http://127.0.0.1:11434 --model gemma3:27b-it-q8_0
# or
vibesafe scan -u http://127.0.0.1:11434 -m gemma3:27b-it-q8_0

# or, if you prefer DeepSeek
vibesafe scan -u https://api.deepseek.com -m deepseek-chat
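
As a sketch of the fallback behaviour described above, omitting both flags should target the stated defaults:

# with no flags, the scan falls back to https://api.openai.com and gpt-4.1-nano
vibesafe scan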

@slowcoder360 (Owner) commented:

Thank you for doing this, I really appreciate it!

@slowcoder360 (Owner) left a comment:

Thank you so much for your work! This is a great addition to the project, you are a legend.

@slowcoder360 merged commit c832135 into slowcoder360:master on May 30, 2025
1 check failed