Insights: mudler/LocalAI
Overview
27 Pull requests merged by 4 people
- chore: ⬆️ Update ggml-org/llama.cpp to 41613437ffee0dbccad684fc744788bc504ec213 (#5968, merged Aug 4, 2025)
- docs: ⬆️ update docs version mudler/LocalAI (#5967, merged Aug 4, 2025)
- chore(build): Rename sycl to intel (#5964, merged Aug 4, 2025)
- chore: ⬆️ Update ggml-org/llama.cpp to d31192b4ee1441bbbecd3cbf9e02633368bdc4f5 (#5965, merged Aug 3, 2025)
- feat(backends): allow backends to not have a metadata file (#5963, merged Aug 3, 2025)
- feat(backends): install from local path (#5962, merged Aug 3, 2025)
- chore(stable-diffusion): bump, set GGML_MAX_NAME (#5961, merged Aug 3, 2025)
- chore: ⬆️ Update ggml-org/llama.cpp to 5c0eb5ef544aeefd81c303e03208f768e158d93c (#5959, merged Aug 2, 2025)
- chore: ⬆️ Update ggml-org/whisper.cpp to 0becabc8d68d9ffa6ddfba5240e38cd7a2642046 (#5958, merged Aug 2, 2025)
- docs: ⬆️ update docs version mudler/LocalAI (#5956, merged Aug 1, 2025)
- fix(docs): Improve responsiveness of tables (#5954, merged Aug 1, 2025)
- chore: ⬆️ Update ggml-org/llama.cpp to daf2dd788066b8b239cb7f68210e090c2124c199 (#5951, merged Aug 1, 2025)
- feat(swagger): update swagger (#5950, merged Jul 31, 2025)
- fix(intel): Set GPU vendor on Intel images and cleanup (#5945, merged Jul 31, 2025)
- chore(model gallery): add flux.1-krea-dev-ggml (#5949, merged Jul 31, 2025)
- chore(model gallery): add flux.1-dev-ggml-abliterated-v2-q8_0 (#5948, merged Jul 31, 2025)
- chore(model gallery): add flux.1-dev-ggml-q8_0 (#5947, merged Jul 31, 2025)
- chore(capability): improve messages (#5944, merged Jul 31, 2025)
- feat(stablediffusion-ggml): allow to load loras (#5943, merged Jul 31, 2025)
- chore: update swagger (#5946, merged Jul 31, 2025)
- chore: ⬆️ Update ggml-org/whisper.cpp to f7502dca872866a310fe69d30b163fa87d256319 (#5941, merged Jul 31, 2025)
- chore: ⬆️ Update ggml-org/llama.cpp to e9192bec564780bd4313ad6524d20a0ab92797db (#5940, merged Jul 31, 2025)
- feat(stablediffusion-ggml): add support to ref images (flux Kontext) (#5935, merged Jul 30, 2025)
- chore(model gallery): add qwen_qwen3-30b-a3b-thinking-2507 (#5939, merged Jul 30, 2025)
- chore(model gallery): add arcee-ai_afm-4.5b (#5938, merged Jul 30, 2025)
- chore(model gallery): add qwen_qwen3-30b-a3b-instruct-2507 (#5936, merged Jul 30, 2025)
- chore: ⬆️ Update ggml-org/llama.cpp to aa79524c51fb014f8df17069d31d7c44b9ea6cb8 (#5934, merged Jul 29, 2025)
3 Pull requests opened by 1 person
- chore(deps): bump torch and sentence-transformers (#5969, opened Aug 4, 2025)
- chore(deps): bump torch and diffusers (#5970, opened Aug 4, 2025)
- feat(backends install): allow to specify name and alias during manual installation (#5971, opened Aug 4, 2025)
13 Issues closed by 2 people
- error="unexpected end of JSON input" (#2850, closed Aug 5, 2025)
- Unable to find image 'localai/localai:21' locally (#2864, closed Aug 5, 2025)
- diffuser backend processes stack up and hog GPU memory (#2866, closed Aug 5, 2025)
- Document storage requirement (#2869, closed Aug 4, 2025)
- Document permission requirements (#2870, closed Aug 4, 2025)
- Allow sideloading backends on filesystem (#5917, closed Aug 3, 2025)
- unable to load hercules-5.0-qwen2-7b with docker (#2900, closed Aug 3, 2025)
- Must we need docker file to run LocalAI? (#2913, closed Aug 3, 2025)
- Build error in build with GO_TAGS=stablediffusion (#2934, closed Aug 3, 2025)
- intfloat/multilingual-e5-base can't load (#2946, closed Aug 3, 2025)
- Fails: could not load model: rpc error: code = Unavailable desc = error reading from server: EOF (#3007, closed Aug 3, 2025)
- Error generating images with LocalAI integrated with Nextcloud AI (CPU only) (#3017, closed Jul 30, 2025)
- Misspelling of the word "adequately" (#3028, closed Jul 30, 2025)
3 Issues opened by 3 people
- gRPC update to 1.74 breaks local-ai on metal (#5966, opened Aug 4, 2025)
- whisper backend not available on apple silicon (m4 pro) (#5953, opened Aug 1, 2025)
- CPU-only version on Kubernetes references nvidia - and fails (#5952, opened Aug 1, 2025)
11 Unresolved conversations
Conversations sometimes continue on older items that are not yet closed. Below is a list of all Issues and Pull Requests with unresolved conversations.
- old torch causes version restriction due to CVE (#5933, commented on Jul 29, 2025 • 0 new comments)
- Unable to generate sd3 medium images with 24gb gpu (#2845, commented on Aug 1, 2025 • 0 new comments)
- animagine-xl wont start in "aio-gpu-nvidia-cuda-12" (#2787, commented on Aug 2, 2025 • 0 new comments)
- "Error: Failed to process stream" appears when on the "Chat" tab trying to chat to any model. (#3001, commented on Aug 3, 2025 • 0 new comments)
- Invalid json crashes the server / Json schema suppport incomplete (#2938, commented on Aug 3, 2025 • 0 new comments)
- ERR error installing backend - failed to download layer 0: unexpected EOF (#5924, commented on Aug 3, 2025 • 0 new comments)
- The prompt execution is not interrupted where the conversation is cleared in the WebUI (#2731, commented on Aug 4, 2025 • 0 new comments)
- Create authorization bearer tokens directly in the WebUI (#2730, commented on Aug 4, 2025 • 0 new comments)
- There is something wrong with VLM (#2668, commented on Aug 5, 2025 • 0 new comments)
- Can't build LocalAI with llama.cpp with CUDA (#3418, commented on Aug 5, 2025 • 0 new comments)
- ci: add static-checker (#2781, commented on Aug 4, 2025 • 0 new comments)