Installing ComfyUI on Linux to host a Stable Diffusion image-generation service
https://ivonblog.com/posts/stable-diffusion-comfyui/
https://ivonblog.com/posts/comfyui-linux-installation
python3 main.py --listen 0.0.0.0 --port 8188
sudo ufw allow 8188/tcp
To call the ComfyUI API cross-origin from a browser, add: --enable-cors-header "*" (or specify your own domain)
Let ComfyUI serve TLS itself (simpler; extra protection is still recommended)
python3 main.py --listen 0.0.0.0 --port 8188 \
  --tls-keyfile /path/privkey.pem \
  --tls-certfile /path/fullchain.pem
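For a quick local test of the TLS flags above, a self-signed certificate can be generated; a minimal sketch, assuming the filenames match the flags (browsers will warn about it, so use real Let's Encrypt certificates in production):

```shell
# Generate a self-signed key/cert pair for local testing only.
# privkey.pem / fullchain.pem match the --tls-keyfile / --tls-certfile flags.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout privkey.pem -out fullchain.pem \
  -days 365 -subj "/CN=localhost"
```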
Start automatically at boot (systemd)
Create /etc/systemd/system/comfyui.service:
[Unit]
Description=ComfyUI
After=network.target
[Service]
Type=simple
WorkingDirectory=/home/ubuntu/ComfyUI
ExecStart=/home/jeng/anaconda3/envs/comfyui/bin/python3 main.py --listen 0.0.0.0 --port 8188
Restart=on-failure
[Install]
WantedBy=multi-user.target
Enable it:
sudo systemctl daemon-reload
sudo systemctl enable --now comfyui
sudo systemctl status comfyui
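As written, the unit runs ComfyUI as root. A sketch of a drop-in (created with `sudo systemctl edit comfyui`) that switches it to an unprivileged account; the `ubuntu` user/group names are assumptions and should match the owner of the ComfyUI directory:

```ini
# /etc/systemd/system/comfyui.service.d/override.conf
# (created by: sudo systemctl edit comfyui)
[Service]
# Run as an unprivileged user instead of root; "ubuntu" is an example.
User=ubuntu
Group=ubuntu
```

After saving, run `sudo systemctl daemon-reload && sudo systemctl restart comfyui`.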
Integrating n8n Workflow Automation with Model Context Protocol (MCP) Servers
Integrating MCP Servers with FastAPI
How to block malicious requests
https://chatgpt.com/share/684e364e-dd08-8002-b363-a3582babee48
https://chatgpt.com/share/684e630f-998c-8002-849b-fcb31d1eec86
sudo apt install fail2ban # Debian/Ubuntu
sudo systemctl enable --now fail2ban
Add a jail
/etc/fail2ban/jail.local
[DEFAULT]
; ban duration, detection window, failure threshold
bantime = 1h ; ban for 1 hour (86400 or 1d also work)
findtime = 10m ; within a 10-minute window
maxretry = 5 ; ban after 5 failures
ignoreip = 127.0.0.1/8 192.0.2.10 ; whitelist (e.g. your static IP)

; enable the default sshd filter
[sshd]
enabled = true
port = ssh ; or a custom port such as 22,2222
logpath = %(sshd_log)s ; Debian=/var/log/auth.log, RHEL=/var/log/secure
backend = systemd ; use when the system logs to the systemd journal
[apache-badbots]
enabled = true
port = http,https
filter = apache-badbots
logpath = /var/log/apache2/access.log
maxretry = 1
bantime = 86400 # ban for 1 day
sudo systemctl restart fail2ban
Global status:
sudo fail2ban-client status
Details for the sshd jail:
sudo fail2ban-client status sshd
Global whitelist (applies to all jails)
[DEFAULT]
ignoreip = 127.0.0.1/8 192.0.2.10 203.0.113.0/24 2001:db8::/32
Write a custom filter that matches your log format
Create /etc/fail2ban/filter.d/apache-custombots.conf with:
[Definition]
failregex = ^<HOST> .*"(GET|POST|HEAD) .*\.env.* HTTP.*" [45]\d{2} .+ ".*"
            ^<HOST> .*"(GET|POST|HEAD) .*\.git.* HTTP.*" [45]\d{2} .+ ".*"
            ^<HOST> .*"(GET|POST|HEAD).*" [45]\d{2} .+ ".*ZmEu.*"
            ^<HOST> .*"(GET|POST|HEAD).*" [45]\d{2} .+ ".*Hello World.*"
            ^<HOST> .*"(GET|POST|HEAD).*" [45]\d{2} .+ ".*Keydrop.*"
ignoreregex =
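Before wiring the filter into fail2ban, the patterns can be sanity-checked against the real log. `fail2ban-regex` is the authoritative tool; a rough `grep` preview of the same patterns (log path assumed to match the jail) looks like this:

```shell
# Authoritative check once the filter file exists:
#   fail2ban-regex /var/log/apache2/access.log /etc/fail2ban/filter.d/apache-custombots.conf
# Rough grep approximation of what the patterns would catch:
grep -E '"(GET|POST|HEAD) [^"]*\.(env|git)' /var/log/apache2/access.log
grep -E '(ZmEu|Hello World|Keydrop)' /var/log/apache2/access.log
```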
Then add to jail.local:
[apache-custombots]
enabled = true
port = http,https
logpath = /var/log/apache2/access.log
filter = apache-custombots
maxretry = 1
findtime = 600
bantime = 1h
Then reload:
sudo fail2ban-client reload
sudo fail2ban-client status apache-custombots
Inspect iptables
sudo iptables -L
Building an MCP server and integrating it with LangGraph
The Best Free Offline AI for Video Generation (The Results Are Unreal)
Installing Google MedGemma models in Ollama (failed attempt)
The steps below show how to load MedGemma (Google's Gemma 3 variant for the medical domain) into Ollama, covering two scenarios: the 27B text-only model and the 4B multimodal (vision) model.
A dedicated folder is recommended:
mkdir -p ~/ollama-models/medgemma && cd ~/ollama-models/medgemma
27B (text-only, Q4_K_M quantization)
wget -c https://huggingface.co/unsloth/medgemma-27b-text-it-GGUF/resolve/main/medgemma-27b-text-it-Q4_K_M.gguf
4B (multimodal, Q4_K_M quantization + mmproj projection layer)
wget -c https://huggingface.co/unsloth/medgemma-4b-it-GGUF/resolve/main/medgemma-4b-it-Q4_K_M.gguf
wget -c https://huggingface.co/unsloth/medgemma-4b-it-GGUF/resolve/main/mmproj-F16.gguf
For the native pre-trained (-pt) version, mind the case of the filename: wget -c https://huggingface.co/mradermacher/medgemma-4b-pt-GGUF/resolve/main/medgemma-4b-pt-F16.gguf
2. Write the Modelfile
2.1 27B text model
Modelfile contents (place in the same directory):
Modelfile27B
FROM ./medgemma-27b-text-it-Q4_K_M.gguf
# Gemma-family models use <start_of_turn>/<end_of_turn> turn markers, not ChatML <|im_start|> tags
TEMPLATE """<start_of_turn>user
{{ if .System }}{{ .System }}

{{ end }}{{ .Prompt }}<end_of_turn>
<start_of_turn>model
{{ .Response }}<end_of_turn>
"""
PARAMETER num_ctx 8192
2.2 4B multimodal model
The multimodal model must load both the main GGUF and the mmproj projection file; Ollama allows two FROM lines:
Modelfile4B
# line 1: vision projection layer
FROM ./mmproj-F16.gguf
# line 2: 4B main model
FROM ./medgemma-4b-it-Q4_K_M.gguf
# Gemma-family models use <start_of_turn>/<end_of_turn> turn markers, not ChatML <|im_start|> tags
TEMPLATE """<start_of_turn>user
{{ if .System }}{{ .System }}

{{ end }}{{ .Prompt }}<end_of_turn>
<start_of_turn>model
{{ .Response }}<end_of_turn>
"""
PARAMETER num_ctx 4096
3. Create and test the models
# create the 27B model
ollama create medgemma-27b -f ./Modelfile27B
# create the 4B model
ollama create medgemma-4b-vision -f ./Modelfile4B
Text test:
ollama run medgemma-27b
Who are you?
Image/multimodal test (Ollama CLI). The CLI has no --image flag; include the image path inside the prompt instead:
ollama run medgemma-4b-vision "Describe the main abnormality in this image: ./chest_xray.png"
Or via the HTTP API:
curl http://localhost:11434/api/generate \
  -d '{
    "model": "medgemma-4b-vision",
    "prompt": "Read this fundus photo and report findings in Chinese",
    "images": ["<base64-encoded PNG, without any data: URI prefix>"]
  }'
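The payload can be built in the shell; a sketch assuming an image file named chest_xray.png (Ollama's /api/generate expects raw base64 strings in "images", with no "data:" prefix):

```shell
# Encode the image to plain base64 (macOS: base64 -i chest_xray.png)
IMG_B64=$(base64 -w0 chest_xray.png)

# Send the request with the encoded image inlined into the JSON body
curl -s http://localhost:11434/api/generate -d @- <<EOF
{
  "model": "medgemma-4b-vision",
  "prompt": "Read this chest X-ray and report findings",
  "images": ["$IMG_B64"],
  "stream": false
}
EOF
```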
Installing n8n with docker-compose.yml
https://github.com/dean9703111/n8n-google-sheet-exmaple
You can git clone the author's GitHub project directly, or create an n8n folder and add a docker-compose.yml file in it.
volumes:
  n8n_storage:

services:
  n8n:
    image: n8nio/n8n:latest
    restart: always
    ports:
      - "127.0.0.1:5678:5678" # adjust to your needs
    volumes:
      - n8n_storage:/home/node/.n8n
volumes:
  n8n_storage:

services:
  n8n:
    image: n8nio/n8n:latest
    ports:
      - "5678:5678"
    environment:
      - N8N_HOST=0.0.0.0
      - N8N_PORT=5678
      - N8N_SECURE_COOKIE=false
      - N8N_PROTOCOL=http
After pasting the content, start it from the terminal with:
docker compose --project-name n8n up -d
Add:
environment:
  - N8N_SECURE_COOKIE=false # or set it to true and serve over HTTPS
  - N8N_PROTOCOL=http
🔐 1. Install the certificate (port 80 is still needed temporarily)
sudo apt install certbot python3-certbot-apache
sudo certbot certonly --standalone -d yourdomain.com
On success, the certificate files will be under:
/etc/letsencrypt/live/yourdomain.com/
🛠 2. Configure Apache SSL (using port 8443 as an example)
Create the file:
sudo nano /etc/apache2/sites-available/n8n-ssl.conf
Contents:
<VirtualHost *:8443>
    ServerName yourdomain.com

    SSLEngine on
    SSLCertificateFile /etc/letsencrypt/live/yourdomain.com/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/yourdomain.com/privkey.pem

    ProxyPreserveHost On
    ProxyPass / http://localhost:5678/
    ProxyPassReverse / http://localhost:5678/

    ErrorLog ${APACHE_LOG_DIR}/n8n_error.log
    CustomLog ${APACHE_LOG_DIR}/n8n_access.log combined
</VirtualHost>
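Optionally, if port 80 stays reachable after issuing the certificate, a sketch of a companion vhost that redirects plain HTTP to the 8443 site (the yourdomain.com name is a placeholder, as above):

```apache
# Optional: redirect plain-HTTP requests on port 80 to the TLS site on 8443
<VirtualHost *:80>
    ServerName yourdomain.com
    Redirect permanent / https://yourdomain.com:8443/
</VirtualHost>
```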
Enable the required modules and the site. Note that Apache only binds to ports listed in a Listen directive, so 8443 must be added (e.g. in /etc/apache2/ports.conf):
sudo a2enmod ssl
sudo a2enmod proxy
sudo a2enmod proxy_http
echo "Listen 8443" | sudo tee -a /etc/apache2/ports.conf
sudo a2ensite n8n-ssl.conf
sudo systemctl reload apache2
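certbot installs a timer that renews certificates automatically, but Apache only picks up the new files after a reload. A sketch of a deploy hook (placed in certbot's standard renewal-hooks/deploy directory) plus a dry-run check:

```shell
# Hook that certbot runs after every successful renewal
sudo tee /etc/letsencrypt/renewal-hooks/deploy/reload-apache.sh >/dev/null <<'EOF'
#!/bin/sh
systemctl reload apache2
EOF
sudo chmod +x /etc/letsencrypt/renewal-hooks/deploy/reload-apache.sh

# Simulate the full renewal pipeline without touching the real certificate
sudo certbot renew --dry-run
```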
🐳 3. Docker Compose configuration (n8n over http on 5678)
services:
  n8n:
    image: n8nio/n8n
    restart: always
    ports:
      - "5678:5678"
    environment:
      - N8N_HOST=yourdomain.com
      - N8N_PORT=5678
      - N8N_PROTOCOL=http
      - N8N_SECURE_COOKIE=true
🔎 Access:
You can then reach it at:
https://yourdomain.com:8443
Settings -> Community nodes
Add the Line Bot node:
@aotoki/n8n-nodes-line-messaging