📥 Download LM Studio:
👉 https://lmstudio.ai
I found I had already downloaded this one day earlier.
ChatGPT said:
❌ What LM Studio Cannot Run

| Not Supported | Why |
|---|---|
| Transformers checkpoints (`.bin`, `.pt`, `.safetensors`) | These require the PyTorch/Transformers runtime |
| Tokenizers that aren't llama.cpp-compatible | No runtime support for them |
| Hosted (non-local) models | LM Studio is an offline-first app |
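If you're unsure whether a downloaded file is actually a GGUF model (and not a Transformers checkpoint with a misleading name), you can check its magic bytes. Per the GGUF spec, every GGUF file begins with the ASCII bytes `GGUF`; this is a minimal sketch, not anything LM Studio itself exposes:

```python
def is_gguf(path):
    """Return True if the file starts with the GGUF magic bytes."""
    with open(path, "rb") as f:
        return f.read(4) == b"GGUF"
```

Handy when a repo mixes `.safetensors` originals with converted `.gguf` files and you want to confirm which one you grabbed.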
💡 Bonus Tip
On Hugging Face, search TheBloke + GGUF for a huge selection of ready-to-use LM Studio models:
👉 https://huggingface.co/TheBloke
📌 Why TheBloke Models Are Recommended

| Reason | Description |
|---|---|
| ✅ GGUF-ready | TheBloke consistently converts the best LLMs into GGUF format, which is exactly what tools like LM Studio and llama.cpp need. |
| 🧪 Tested + documented | Each model card includes detailed quantization options, example prompts, context-length support, and known issues. |
| 🧠 Wide selection | From general chat models to code-specific LLMs (like CodeLlama, DeepSeek, Mistral), TheBloke uploads nearly every hot new release. |
| 💬 Active community | Thousands of people use these models and leave feedback, making it easier to choose versions that work well locally. |
| 🎖️ Unofficial but trusted | He's become the de facto standard for converting Hugging Face models into formats people can actually run locally. |
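Those quantization options (Q4_K_M, Q5_K_M, Q8_0, …) map roughly to file size and RAM needs. As a back-of-envelope sketch (my own rule of thumb, not from any model card — real files run somewhat larger because of metadata and k-quant overhead):

```python
def approx_gguf_size_gb(n_params_billion, bits_per_weight):
    """Rough GGUF file size in GB: parameters * bits per weight / 8.

    Ignores metadata and the slightly higher effective bit-rate of
    k-quants, so actual downloads will be a bit bigger.
    """
    return n_params_billion * bits_per_weight / 8

# A 7B model at ~4 bits per weight lands around 3.5 GB before overhead.
print(approx_gguf_size_gb(7, 4))  # 3.5
```

Useful for a quick sanity check that a given quant will fit in your available RAM/VRAM before downloading.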
📦 Example Models You'll See from TheBloke

- CodeLLaMA-7B-GGUF
- Mistral-7B-Instruct-GGUF
- WizardCoder-Python-34B-GGUF
- Phi-2-GGUF
- Nous-Hermes-2-Mixtral-GGUF
🧭 TL;DR
Think of TheBloke like your favorite record label — he doesn’t make the models, but he remasters them for easy play on your local jukebox.
🥊 LM Studio vs Ollama: What's Easier?

| Feature | 🧠 LM Studio | 🛠️ Ollama |
|---|---|---|
| Setup | ✅ Plug-and-play GUI | 🟡 CLI-based, requires terminal comfort |
| Model format | GGUF only | Custom .bin format |
| UI | ✅ Beautiful chat interface | ❌ No native UI (CLI only) |
| Multiple model support | ✅ Easy model switching | ✅ Also supports multiple models |
| Chat now | ✅ Just download and go | 🟡 Requires command-line start |
| Customization | 🟡 Prompt-level only | ✅ Can build Modelfiles & API chains |
| Tool integrations | 🟡 Early stage | ✅ Strong (LangChain, VS Code, API) |
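The "Modelfiles" customization refers to Ollama's build format for packaging a base model with your own settings. A minimal sketch (the base model tag and assistant wording here are illustrative, not from this conversation):

```
FROM mistral:7b-instruct
PARAMETER temperature 0.2
SYSTEM You are a concise local coding assistant.
```

You'd build and run it with `ollama create my-assistant -f Modelfile` followed by `ollama run my-assistant`.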
Write a Python script that runs in the background and listens for the Print Screen key. When pressed, it should:
- Take a screenshot of the entire screen
- Open a simple Tkinter window with a preview of the screenshot
- Let the user enter a text label
- Save the image to a folder called "ProofOfVibeClips" with a timestamp and the label in the filename
Use only standard or common packages (like pillow, keyboard, tkinter).
import os
import time
import tkinter as tk

import keyboard  # pip install keyboard (may need admin/root privileges)
from PIL import ImageGrab, ImageTk  # pip install pillow

SAVE_DIR = "ProofOfVibeClips"


def save_screenshot(img, label_text):
    """Save the captured image with a timestamp and the user's label."""
    os.makedirs(SAVE_DIR, exist_ok=True)
    # Keep only filename-safe characters from the label.
    safe_label = "".join(c for c in label_text if c.isalnum() or c in "-_ ").strip()
    filename = os.path.join(
        SAVE_DIR, "{}_{}.png".format(int(time.time()), safe_label or "capture")
    )
    img.save(filename)
    print("Saved", filename)


def start_screenshot():
    """Grab the screen, show a preview window, and prompt for a label."""
    img = ImageGrab.grab()

    window = tk.Tk()
    window.title("Capture ProofOfVibeClips")

    # Shrink a copy for the preview so the window stays a reasonable size.
    preview_img = img.copy()
    preview_img.thumbnail((640, 480))
    photo = ImageTk.PhotoImage(preview_img)
    preview = tk.Label(window, image=photo)
    preview.image = photo  # keep a reference so Tk doesn't garbage-collect it
    preview.pack()

    label_entry = tk.Entry(window, width=60)
    label_entry.pack()
    label_entry.focus_set()

    def capture():
        save_screenshot(img, label_entry.get())
        window.destroy()

    tk.Button(window, text="Save", command=capture).pack()
    window.bind("<Return>", lambda event: capture())
    window.mainloop()


# Run the capture flow whenever Print Screen is pressed,
# and keep the script alive in the background.
keyboard.add_hotkey("print screen", start_screenshot)
print("Listening for Print Screen... press Ctrl+C to quit.")
keyboard.wait()