k/llmServer
5 commits · 1 branch · 0 tags
Latest commit: 8e576ae332 "Now handling temperature and max_tokens correctly" by k, 2026-03-04 00:55:29 -05:00

| File | Last commit | Date |
| --- | --- | --- |
| bot.py | Now handling temperature and max_tokens correctly | 2026-03-04 00:55:29 -05:00 |
| model.py | Basic version working. | 2026-03-03 21:52:30 -05:00 |
| README.md | Add readme | 2026-03-03 13:42:41 -05:00 |
README.md
LLM server
A quick script to host my model using batching. It exists so more interesting projects can use it.
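The repo itself contains the actual server (bot.py, model.py); the README does not show any code. As a rough illustration of what "hosting a model using batching" can mean, here is a minimal stdlib-only sketch that collects concurrent requests into a batch before running one model call, with per-request `temperature` and `max_tokens` (the parameters mentioned in the latest commit). All names here are hypothetical and not taken from the repository, and the real implementation may differ entirely:

```python
import queue
import threading
import time
from dataclasses import dataclass, field


@dataclass
class Request:
    prompt: str
    temperature: float = 1.0  # per-request sampling params, kept with the
    max_tokens: int = 16      # request so they survive batching intact
    done: threading.Event = field(default_factory=threading.Event)
    result: str = ""


class BatchedServer:
    """Collects concurrent requests into batches before calling the model."""

    def __init__(self, model_fn, max_batch=8, max_wait=0.01):
        self.model_fn = model_fn    # runs one forward pass on a whole batch
        self.max_batch = max_batch  # hard cap on batch size
        self.max_wait = max_wait    # how long to wait for more requests (s)
        self.q = queue.Queue()
        threading.Thread(target=self._loop, daemon=True).start()

    def generate(self, prompt, temperature=1.0, max_tokens=16):
        # Called from request-handler threads; blocks until the batch
        # containing this request has been processed.
        req = Request(prompt, temperature, max_tokens)
        self.q.put(req)
        req.done.wait()
        return req.result

    def _loop(self):
        while True:
            batch = [self.q.get()]  # block until at least one request arrives
            deadline = time.monotonic() + self.max_wait
            while len(batch) < self.max_batch:
                remaining = deadline - time.monotonic()
                if remaining <= 0:
                    break
                try:
                    batch.append(self.q.get(timeout=remaining))
                except queue.Empty:
                    break
            # One model call for the whole batch, then fan results back out.
            for req, out in zip(batch, self.model_fn(batch)):
                req.result = out
                req.done.set()


def fake_model(batch):
    # Stand-in for the real model: echoes each prompt with its own params.
    return [f"{r.prompt}|t={r.temperature}|n={r.max_tokens}" for r in batch]
```

Usage under these assumptions: `BatchedServer(fake_model).generate("hi", temperature=0.5, max_tokens=8)` returns `"hi|t=0.5|n=8"`, showing that each request keeps its own sampling parameters even when processed in a shared batch.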
Topics: ai · Size: 40 KiB · Languages: Python 100%