# SpamLLM
A multilingual spam classifier for RSpamd using fine-tuned DistilBERT. Classifies emails in German and English, with automatic language detection that flags unexpected languages as suspicious.
## How it works

```
Incoming Mail → RSpamd → HTTP POST → SpamLLM Service → Score + Reason + Quote → RSpamd
```

RSpamd sends mail metadata (subject, body, sender) to the SpamLLM FastAPI service. The service runs the text through a fine-tuned `distilbert-base-multilingual-cased` model and returns:

- **score** (0-15, RSpamd-compatible)
- **reason** (human-readable explanation, e.g. "High spam confidence (94%)")
- **quote** (the most suspicious snippet from the mail)
- **language** detection with a score bonus for non-DE/EN mails

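The score can be thought of as a mapping from classifier confidence onto RSpamd's 0-15 scale. A minimal sketch of such a mapping (the function name and exact formula here are hypothetical — the real logic lives in `server.py`):

```python
def confidence_to_score(confidence: float, is_spam: bool, max_score: float = 15.0) -> float:
    """Illustrative mapping from softmax confidence to a 0-15 RSpamd-style score."""
    if is_spam:
        # High spam confidence maps to the top of the range.
        return round(max_score * confidence, 1)
    # Confident ham stays near zero.
    return round(max_score * (1.0 - confidence) * 0.2, 1)

print(confidence_to_score(0.94, True))  # → 14.1
```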
## Project structure

```
spamllm/
├── train.py                 # Fine-tune DistilBERT on spam/ham data
├── server.py                # FastAPI classification service
├── test_classify.py         # Local model validation (DE/EN/foreign samples)
├── export_rspamd_data.py    # Export Maildir folders to CSV training data
├── requirements.txt         # Python dependencies
└── rspamd/
    ├── local.d/
    │   └── external_services.conf   # RSpamd config
    └── lua/
        └── spamllm.lua              # RSpamd Lua plugin
```

## Setup

```bash
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
```

## Training
### Option A: Demo dataset (quick start)

Uses the public SMS Spam Collection (~5,500 English messages):

```bash
python train.py --epochs 3
```

### Option B: Your own mail data (recommended for production)
1. Export your existing mails from Maildir:
```bash
python export_rspamd_data.py \
  --spam-dir /var/vmail/user/.Junk/cur \
  --ham-dir /var/vmail/user/.INBOX/cur \
  --max-per-class 5000
```
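Internally, an exporter like this has to flatten each stored message into training text. A sketch of that step using only the standard-library `email` package (the function name is hypothetical; the actual extraction in `export_rspamd_data.py` may differ):

```python
from email import message_from_bytes
from email.policy import default

def message_to_text(raw: bytes) -> str:
    """Combine subject and text/plain body of a raw message into one training string."""
    msg = message_from_bytes(raw, policy=default)
    body = msg.get_body(preferencelist=("plain",))
    content = body.get_content() if body is not None else ""
    return f"{msg['Subject'] or ''}\n{content}".strip()

raw = b"Subject: You won!\nContent-Type: text/plain\n\nClaim your prize now.\n"
print(message_to_text(raw))  # → "You won!\nClaim your prize now."
```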
2. Train on the exported data:

```bash
python train.py --custom-data --epochs 5
```

You can also place a German-only dataset at `data/train_de.csv` (columns: `text`, `labels`) to supplement the English demo data when training without `--custom-data`.
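A file in that shape can be produced with the stdlib `csv` module. The rows below are made up, and the label convention (1 = spam, 0 = ham) is an assumption — check `train.py` for the one it expects:

```python
import csv
import io

rows = [
    {"text": "Sie haben gewonnen! Klicken Sie hier.", "labels": 1},  # spam (assumed convention)
    {"text": "Anbei das Protokoll von Montag.", "labels": 0},        # ham
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["text", "labels"])
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue().splitlines()[0])  # → text,labels
```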
## Running the service

```bash
uvicorn server:app --host 127.0.0.1 --port 8000
```

### API
**POST /classify**

```json
{
  "subject": "Sie haben gewonnen!",
  "body": "Klicken Sie hier um Ihren Preis abzuholen...",
  "from_addr": "spam@example.com"
}
```

Response:

```json
{
  "is_spam": true,
  "confidence": 0.94,
  "score": 14.1,
  "language": "de",
  "foreign_lang_bonus": 0.0,
  "reason": "High spam confidence (94%)",
  "quote": "...Klicken Sie hier um Ihren Preis abzuholen. Senden Sie uns Ihre Bankdaten..."
}
```

**GET /health** — returns `{"status": "ok", "model_loaded": true}`
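On the RSpamd side, `spamllm.lua` maps such a response onto the symbols listed in the next section. The decision is roughly this (a Python sketch of the Lua logic, simplified to return a single symbol; the function name is hypothetical):

```python
def pick_symbol(resp: dict) -> str:
    """Pick an RSpamd symbol from a /classify response (simplified sketch)."""
    if resp.get("foreign_lang_bonus", 0.0) > 0:
        return "SPAMLLM_FOREIGN_LANG"
    return "SPAMLLM_SPAM" if resp["is_spam"] else "SPAMLLM_HAM"

resp = {"is_spam": True, "foreign_lang_bonus": 0.0, "reason": "High spam confidence (94%)"}
print(pick_symbol(resp))  # → SPAMLLM_SPAM
```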
## RSpamd integration
1. Copy `rspamd/lua/spamllm.lua` to `/etc/rspamd/plugins.d/`
2. Copy `rspamd/local.d/external_services.conf` to `/etc/rspamd/local.d/`
3. Reload RSpamd: `systemctl reload rspamd`

The plugin registers three symbols:
| Symbol | Weight | Description |
|--------|--------|-------------|
| `SPAMLLM_SPAM` | +5.0 | Spam detected by classifier |
| `SPAMLLM_HAM` | -2.0 | Ham detected by classifier |
| `SPAMLLM_FOREIGN_LANG` | +4.0 | Unexpected language (not DE/EN) |

The Lua plugin only sends mails in the RSpamd grey zone (score 3-12) to the service, so obvious spam/ham is handled by RSpamd's built-in rules without extra latency.
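That gate is just a range check on RSpamd's running score. As a sketch (thresholds taken from the text; the function name is hypothetical):

```python
GREY_MIN, GREY_MAX = 3.0, 12.0  # RSpamd grey-zone boundaries from the text above

def should_query_spamllm(rspamd_score: float) -> bool:
    """Query the classifier only when RSpamd's own verdict is inconclusive."""
    return GREY_MIN <= rspamd_score <= GREY_MAX

print(should_query_spamllm(7.5), should_query_spamllm(0.4))  # → True False
```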
## Language detection
Mails are classified by language using `langdetect`. Expected languages (German, English) are scored normally. All other languages receive a +4 point spam bonus and a lowered spam threshold (0.3 instead of 0.5), since non-DE/EN mails are almost always spam in this environment.
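That policy reduces to two per-language parameters. A sketch (constants from the text; the function name is hypothetical):

```python
EXPECTED_LANGS = {"de", "en"}

def language_policy(lang: str) -> tuple[float, float]:
    """Return (spam_threshold, score_bonus) for a detected language code."""
    if lang in EXPECTED_LANGS:
        return 0.5, 0.0  # normal threshold, no bonus
    return 0.3, 4.0      # lowered threshold plus the +4 spam bonus

print(language_policy("ru"))  # → (0.3, 4.0)
```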
## Performance
- Inference: ~20-50ms per mail on CPU
- Model size: ~500MB (distilbert-base-multilingual-cased)
- Training on demo dataset: ~10-15 min on CPU, ~2 min on GPU