File Parsing and Posting Bot
I have an SDR set up on a Windows 11 server running as a VM on my Proxmox host. It records a couple of radio systems in my area and generates .mp3 files. I keep this history as a kind of archive, since one of the radio systems is from my place of employment.
For the longest time I've struggled to find a way to keep these files around for a while in case we ever need the audio from something, and I've tried all kinds of approaches. Right now the audio files are simply dumped into a folder in my OneDrive so I can access them from anywhere. That works, but OneDrive makes the files awkward to get to and awkward to share. I wanted something that lets me share access without a lot of fuss and without building an entirely separate platform, and I've settled on using my Discord server for that.
I set up a category for the radio audio, then created a channel for each of the systems I'm monitoring. Each system has multiple talkgroups, so the threading capabilities of a text channel seemed like the best fit. I looked at using a forum channel, but for lots of audio files a plain text channel turned out to be easier (based on user input and polling). The bot/script auto-creates threads based on talkgroup and posts into the appropriate channel based on system name. If the parser can't figure out where a clip belongs, there is a catch-all channel in the same category it can post to.
I'm using SDRTrunk to monitor/record the audio, and it does a great job with file names and adding metadata to the files, which makes parsing and picking where to post really easy. Here is how I got all this to work.
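For reference, here is a typical SDRTrunk filename (the same example used in the parser's docstring further down) and the fields the script pulls out of it:
20250816_151031HonorHealth_11_Four_Peaks__TO_111_FROM_999.mp3
date = 20250816, time = 151031, system = HonorHealth, site = 11 Four Peaks, TO talkgroup = 111, FROM radio = 999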
General Information
- The server running SDRTrunk is a VM hosted on my Proxmox host and runs Windows 11 Server 2025.
- The directory where SDRTrunk dumps the audio files is here: C:\Users\Administrator\OneDrive\Scanner Recordings\SDRTrunk
- The uploader script parses the filename (date, time, system, site, TO talkgroup, FROM radio) and the ID3 tags (Title/Grouping/etc.)
- Resolves System → text channel under your AUDIO category (e.g., System1 → a01-system1)
- For each Talkgroup, creates/uses a thread named tg-<tgid> <alias> (e.g., tg-111 FP Security)
- Posts an embed + the audio file, then records it in a local SQLite DB so restarts don’t duplicate posts
- Logs to a file (rotating) and to console
If SDRTrunk “derps” a talkgroup, you can forward the message into the correct thread and delete the original. (Discord can’t truly move a message, but forward + delete is good enough.)
Prerequisites
- Python needs to be installed on the source server (in my case, the W11 2025 server)
- A Discord bot needs to be created with these permissions:
- View Channels, Send Messages, Attach Files, Read Message History
- Create Public Threads, Send Messages in Threads
- Manage Threads, Manage Messages (not required, but nice to have)
- Discord Server Layout
- Create a category for these files, or not - up to you
- Create channel names that you can recognize easily, and that ideally match the system names in your source audio
- Example:
- Category: Audio
- Text Channel 1: a01-system1
- Text Channel 2: a02-system2
Get Started
Files and Locations
Create a folder for the bot and logs on the W11 server; I placed mine in C:\bots\sdr-discord-uploader
- Env File:
C:\bots\sdr-discord-uploader\.env
- Script:
C:\bots\sdr-discord-uploader\sdrtrunk_discord_uploader.py
Install Python and Dependencies
If you haven't already, install Python on the machine. Then, from PowerShell in the above directory, install the packages the script needs:
py -m pip install -U discord.py watchdog mutagen python-dotenv certifi
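Optionally, confirm the packages import cleanly (this just imports each module and prints the discord.py version):
py -c "import discord, watchdog, mutagen, dotenv, certifi; print('discord.py', discord.__version__)"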
Then, once the .env described below is in place, you can sanity-check that the token loads. A bash-style heredoc (py - << 'PY') won't work in PowerShell, so pipe a here-string into Python instead:
$check = @'
import os
from dotenv import load_dotenv
load_dotenv()
t = os.getenv("DISCORD_TOKEN", "")
print("Loaded token?", bool(t), "length:", len(t))
'@
$check | py -
Set Up The .env File
C:\bots\sdr-discord-uploader\.env
# REQUIRED: rotate a fresh token in the Discord Dev Portal, then paste here
DISCORD_TOKEN=PASTE_YOUR_BOT_TOKEN
# Fallback text channel ID if a system can't be matched to a channel
DEFAULT_CHANNEL_ID=12345678901234567890
# SDRTrunk recordings folder
WATCH_FOLDER=C:\Users\Administrator\OneDrive\Scanner Recordings\SDRTrunk
# Category that holds your audio system channels
AUDIO_CATEGORY_NAME=AUDIO
# Optional exact mapping: SystemName -> channel-name
# (Useful if channel names don't match system slugs)
SYSTEM_TO_CHANNEL_OVERRIDES=HonorHealth->a01-honorhealth,Skynet->a02-skynet
# Thread behavior
ENABLE_THREADS=true
THREAD_AUTO_ARCHIVE_MIN=10080
THREAD_TITLE_MAX=64
# Process files that arrived while bot was offline
PROCESS_BACKLOG_ON_START=true
# Logging directory (auto-created)
LOG_DIR=C:\bots\sdr-discord-uploader
# Keep console open on errors when double-clicking the script
PAUSE_ON_EXIT=true
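If you're unsure how SYSTEM_TO_CHANNEL_OVERRIDES is interpreted, here is a minimal sketch of the same comma/arrow parsing the script's parse_overrides() performs (using the example values from above):
raw = "HonorHealth->a01-honorhealth,Skynet->a02-skynet"
overrides = {}
for part in raw.split(","):
    if "->" in part:
        sys_name, ch_name = part.split("->", 1)
        overrides[sys_name.strip()] = ch_name.strip()
print(overrides)  # {'HonorHealth': 'a01-honorhealth', 'Skynet': 'a02-skynet'}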
Set Up The Script
C:\bots\sdr-discord-uploader\sdrtrunk_discord_uploader.py
# sdrtrunk_discord_uploader.py
# ------------------------------------------------------------
# SDRTrunk -> Discord (Text Channels + per-TG Threads)
# Robust logging to file + console, optional pause on exit, ID3 + filename parsing.
# ------------------------------------------------------------
import asyncio
import os
import re
import sqlite3
import sys
import time
import logging
from logging.handlers import RotatingFileHandler
from datetime import datetime, timezone
from pathlib import Path
from typing import Optional, Dict, Any, Tuple
import discord
from discord import Embed, File
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler
from mutagen import File as MutagenFile
from dotenv import load_dotenv
# -------------------- Load env --------------------
load_dotenv()
DISCORD_TOKEN = os.getenv("DISCORD_TOKEN", "").strip()
DEFAULT_CHANNEL_ID = int(os.getenv("DEFAULT_CHANNEL_ID", "0"))
WATCH_FOLDER = Path(os.getenv("WATCH_FOLDER", r"C:\Users\Administrator\OneDrive\Scanner Recordings\SDRTrunk"))
AUDIO_CATEGORY_NAME = os.getenv("AUDIO_CATEGORY_NAME", "AUDIO").strip()
ENABLE_THREADS = (os.getenv("ENABLE_THREADS", "true").lower() == "true")
THREAD_AUTO_ARCHIVE_MIN = int(os.getenv("THREAD_AUTO_ARCHIVE_MIN", "10080"))
THREAD_TITLE_MAX = int(os.getenv("THREAD_TITLE_MAX", "64"))
PROCESS_BACKLOG_ON_START = (os.getenv("PROCESS_BACKLOG_ON_START", "true").lower() == "true")
PAUSE_ON_EXIT = (os.getenv("PAUSE_ON_EXIT", "true").lower() == "true")
ALLOWED_EXT = {".mp3", ".wav", ".m4a", ".flac", ".ogg"}
# DB lives in the watch folder
DB_PATH = WATCH_FOLDER / "_uploaded.sqlite"
OVERRIDES_RAW = os.getenv("SYSTEM_TO_CHANNEL_OVERRIDES", "").strip()
TEMP_PREFIXES = {"~", "."}
TEMP_SUFFIXES = {".tmp", ".part", ".partial", ".crdownload"}
STABILITY_INTERVAL_SEC = 1.0
STABILITY_PASSES = 3
# -------------------- Logging (robust) --------------------
def _setup_logging() -> logging.Logger:
logger = logging.getLogger("sdrtrunk_uploader")
logger.setLevel(logging.INFO)
fmt = logging.Formatter("%(asctime)s [%(levelname)s] %(message)s")
# Always console
ch = logging.StreamHandler(stream=sys.stdout)
ch.setFormatter(fmt)
logger.addHandler(ch)
# Try file as well
log_dir_env = os.getenv("LOG_DIR", str(WATCH_FOLDER))
try:
log_dir = Path(log_dir_env)
log_dir.mkdir(parents=True, exist_ok=True)
log_path = log_dir / "_uploader.log"
fh = RotatingFileHandler(log_path, maxBytes=5_000_000, backupCount=3, encoding="utf-8")
fh.setFormatter(fmt)
logger.addHandler(fh)
logger.info(f"File logging to: {log_path}")
except Exception as e:
logger.warning(f"File logging disabled (could not open log file): {e}")
return logger
logger = _setup_logging()
# Ensure watch folder exists early (helps DB/log placement)
try:
WATCH_FOLDER.mkdir(parents=True, exist_ok=True)
except Exception as e:
logger.warning(f"Could not create WATCH_FOLDER '{WATCH_FOLDER}': {e}")
# -------------------- Helpers --------------------
def pause_if_needed():
if PAUSE_ON_EXIT:
try:
input("\nPress Enter to exit...")
except Exception:
pass
def slugify(s: str) -> str:
s = s.strip().lower()
s = re.sub(r"[^a-z0-9]+", "-", s)
s = re.sub(r"-+", "-", s).strip("-")
return s
def parse_overrides(raw: str) -> Dict[str, str]:
out: Dict[str, str] = {}
if not raw:
return out
for part in raw.split(","):
if "->" in part:
sys_name, ch_name = part.split("->", 1)
out[sys_name.strip()] = ch_name.strip()
return out
SYSTEM_TO_CHANNEL_OVERRIDES = parse_overrides(OVERRIDES_RAW)
def ensure_db() -> sqlite3.Connection:
DB_PATH.parent.mkdir(parents=True, exist_ok=True)
conn = sqlite3.connect(DB_PATH)
conn.execute("""
CREATE TABLE IF NOT EXISTS uploaded (
path TEXT PRIMARY KEY,
mtime REAL NOT NULL,
size INTEGER NOT NULL,
uploaded_at REAL
)
""")
conn.execute("""
CREATE TABLE IF NOT EXISTS threads (
thread_key TEXT PRIMARY KEY, -- "<channel_id>:tg:<tgid>"
thread_id INTEGER NOT NULL,
name TEXT,
created_at REAL NOT NULL
)
""")
conn.commit()
return conn
def already_uploaded(conn: sqlite3.Connection, path: Path) -> bool:
cur = conn.execute("SELECT 1 FROM uploaded WHERE path=?", (str(path),))
return cur.fetchone() is not None
def mark_uploaded(conn: sqlite3.Connection, path: Path):
try:
st = path.stat()
except FileNotFoundError:
return
conn.execute(
"INSERT OR REPLACE INTO uploaded (path, mtime, size, uploaded_at) VALUES (?, ?, ?, ?)",
(str(path), st.st_mtime, st.st_size, time.time())
)
conn.commit()
def get_cached_thread_id(conn: sqlite3.Connection, key: str) -> Optional[int]:
cur = conn.execute("SELECT thread_id FROM threads WHERE thread_key=?", (key,))
row = cur.fetchone()
return int(row[0]) if row else None
def cache_thread_id(conn: sqlite3.Connection, key: str, thread_id: int, name: str):
conn.execute(
"INSERT OR REPLACE INTO threads (thread_key, thread_id, name, created_at) VALUES (?, ?, ?, ?)",
(key, thread_id, name, time.time())
)
conn.commit()
def delete_thread_cache(conn: sqlite3.Connection, key: str):
conn.execute("DELETE FROM threads WHERE thread_key=?", (key,))
conn.commit()
def channel_key_for_tg(channel_id: int, tgid: str) -> str:
return f"{channel_id}:tg:{tgid}"
# -------------------- File parsing --------------------
def is_stable(path: Path) -> bool:
try:
prev = None
stable_count = 0
time.sleep(0.25)
for _ in range(60): # ~60s cap
size = path.stat().st_size
if prev is None:
prev = size
stable_count = 1
else:
if size == prev:
stable_count += 1
else:
prev = size
stable_count = 1
if stable_count >= STABILITY_PASSES:
return True
time.sleep(STABILITY_INTERVAL_SEC)
return False
except FileNotFoundError:
return False
def parse_filename(fname: str) -> Dict[str, Any]:
"""
Example:
20250816_151031HonorHealth_11_Four_Peaks__TO_111_FROM_999.mp3
"""
stem = Path(fname).stem
if "_TO_" not in stem or "_FROM_" not in stem or "_" not in stem:
return {}
left, source = stem.rsplit("_FROM_", 1)
source = source.strip()
left2, dest = left.rsplit("_TO_", 1)
dest = dest.strip()
if "_" not in left2:
return {}
date_part, rest = left2.split("_", 1)
time_part = rest[:6] if len(rest) >= 6 else ""
remainder = rest[6:].strip("_")
system = ""
site = ""
if remainder:
parts = remainder.split("_")
if parts:
system = parts[0]
site_tokens = parts[1:]
site = " ".join([p for p in site_tokens if p]).replace("-", " ").replace("__", " ").strip()
dt = None
if len(date_part) == 8 and len(time_part) == 6:
try:
dt = datetime.strptime(f"{date_part}{time_part}", "%Y%m%d%H%M%S")
except ValueError:
pass
return {
"date": date_part or "",
"time": time_part or "",
"datetime": dt,
"system": system,
"site": site,
"talkgroup": dest,
"source_radio": source,
}
def read_tags(path: Path) -> Dict[str, str]:
out: Dict[str, str] = {}
try:
mf = MutagenFile(path)
if not mf or not mf.tags:
return out
def pick(*keys: str) -> str:
for k in keys:
if k in mf.tags:
v = mf.tags.get(k)
if isinstance(v, list):
v = v[0] if v else ""
return str(v)
return ""
out["title"] = pick("TIT2", "title") # e.g., "111'FP Security'"
out["artist"] = pick("TPE1", "artist") # "999 NOC Dispatch"
out["album"] = pick("TALB", "album") # "Four Peaks"
out["genre"] = pick("TCON", "genre") # "Scanner Audio"
out["composer"]= pick("TCOM", "composer") # "sdrtrunk v0.6.1"
out["date"] = pick("TDRC", "date") # "2025-08-16 15:10:31"
out["grouping"]= pick("TIT1", "grouping") # "HonorHealth"
out["comment"] = pick("COMM::XXX", "COMM", "comment")
except Exception as e:
logger.warning(f"Mutagen read failed for {path}: {e}")
return out
def fix_title_alias(s: str) -> str:
"""
"111'FP Security'" -> "111 FP Security"
Also handles 111"Alias", 111: Alias, 111-Alias, 111_Alias
"""
s = (s or "").strip()
if not s: return s
m = re.match(r"^\s*(\d+)\s*['\"]\s*(.*?)\s*['\"]\s*$", s)
if m: return f"{m.group(1)} {m.group(2)}".strip()
m = re.match(r"^\s*(\d+)\s*[:\-_/]\s*(.+)$", s)
if m: return f"{m.group(1)} {m.group(2)}".strip()
return s
def extract_tgid_and_alias(title_fixed: str) -> Tuple[Optional[str], Optional[str]]:
s = (title_fixed or "").replace("TG ", "").strip()
m = re.match(r"^\s*(\d+)\s+(.*)$", s)
if m: return m.group(1), (m.group(2).strip() or None)
if re.fullmatch(r"\d+", s): return s, None
return None, (s or None)
# -------------------- Watchdog --------------------
class NewFileHandler(FileSystemEventHandler):
def __init__(self, loop: asyncio.AbstractEventLoop, queue: asyncio.Queue):
self.loop = loop
self.queue = queue
def on_created(self, event):
if event.is_directory:
return
path = Path(event.src_path)
if path.suffix.lower() not in ALLOWED_EXT:
return
if any(path.name.startswith(p) for p in TEMP_PREFIXES):
return
if any(path.name.endswith(suf) for suf in TEMP_SUFFIXES):
return
self.loop.call_soon_threadsafe(self.queue.put_nowait, path)
# -------------------- Discord helpers --------------------
async def get_main_guild_and_category(client: discord.Client):
guild = None
category = None
if DEFAULT_CHANNEL_ID:
try:
ch = await client.fetch_channel(DEFAULT_CHANNEL_ID)
if hasattr(ch, "guild") and ch.guild:
guild = ch.guild
except Exception:
pass
if guild is None and client.guilds:
guild = client.guilds[0]
if guild:
for cat in guild.categories:
if cat.name.strip().lower() == AUDIO_CATEGORY_NAME.lower():
category = cat
break
return guild, category
def match_text_channel_for_system(system: str, category: Optional[discord.CategoryChannel]) -> Optional[discord.TextChannel]:
if not category or not system:
return None
desired = slugify(system)
for ch in category.channels:
if isinstance(ch, discord.TextChannel):
name = ch.name.lower()
name_no_prefix = re.sub(r"^a\d{2}-", "", name) # a01-honorhealth -> honorhealth
if name_no_prefix == desired:
return ch
return None
async def resolve_destination_text_channel(client: discord.Client, system: str) -> Optional[discord.TextChannel]:
guild, category = await get_main_guild_and_category(client)
# overrides by name -> exact channel name in guild
override_name = SYSTEM_TO_CHANNEL_OVERRIDES.get(system)
if override_name and guild:
ch = discord.utils.get(guild.channels, name=override_name)
if isinstance(ch, discord.TextChannel):
return ch
# match in AUDIO category by slug
ch = match_text_channel_for_system(system, category)
if ch:
return ch
# fallback to DEFAULT_CHANNEL_ID
if DEFAULT_CHANNEL_ID:
try:
fch = await client.fetch_channel(DEFAULT_CHANNEL_ID)
if isinstance(fch, discord.TextChannel):
return fch
except Exception:
pass
return None
def build_thread_name(tgid: str, alias: Optional[str], max_len: int) -> str:
tgid_clean = "".join(ch for ch in (tgid or "") if ch.isdigit()) or (tgid or "?")
alias = (alias or "").strip().replace("\n", " ")
if alias and len(alias) > max_len:
alias = alias[:max_len - 1] + "…"
base = f"tg-{tgid_clean}"
return f"{base} {alias}".strip()
async def get_or_create_tg_thread(
client: discord.Client,
conn: sqlite3.Connection,
parent: discord.TextChannel,
tgid: str,
alias: Optional[str]
) -> discord.abc.Messageable:
key = channel_key_for_tg(parent.id, tgid)
cached_id = get_cached_thread_id(conn, key)
if cached_id:
try:
ch = await client.fetch_channel(cached_id)
if isinstance(ch, discord.Thread) and ch.parent_id == parent.id:
return ch
delete_thread_cache(conn, key)
except discord.NotFound:
delete_thread_cache(conn, key)
except Exception:
pass
name = build_thread_name(tgid, alias, THREAD_TITLE_MAX)
try:
thread = await parent.create_thread(
name=name,
auto_archive_duration=THREAD_AUTO_ARCHIVE_MIN,
type=discord.ChannelType.public_thread
)
cache_thread_id(conn, key, thread.id, name)
return thread
except (discord.Forbidden, discord.HTTPException, AttributeError) as e:
logger.error(f"Thread create failed for TG {tgid} in #{parent}: {e}")
# Fallback: seed message then create thread
try:
seed = await parent.send(f"Auto-creating thread for TG {tgid}…")
thread = await seed.create_thread(
name=name,
auto_archive_duration=THREAD_AUTO_ARCHIVE_MIN
)
try:
await seed.delete()
except Exception:
pass
cache_thread_id(conn, key, thread.id, name)
return thread
except Exception as ee:
logger.error(f"Seed thread create failed for TG {tgid}: {ee}")
return parent # last resort: post into parent channel
# -------------------- Embed builder --------------------
def build_embed(path: Path, parsed: Dict[str, Any], tags: Dict[str, str]) -> Embed:
title_raw = tags.get("title") or ""
title_fixed = fix_title_alias(title_raw)
tgid_from_title, alias_from_title = extract_tgid_and_alias(title_fixed)
if tgid_from_title:
title = f"{tgid_from_title} {alias_from_title or ''}".strip()
elif parsed.get("talkgroup"):
title = f"{parsed['talkgroup']} {(alias_from_title or '')}".strip()
else:
title = path.stem
embed = Embed(title=title, description="Scanner clip", timestamp=datetime.now(timezone.utc))
if parsed.get("system"):
embed.add_field(name="System", value=parsed["system"], inline=True)
if parsed.get("site"):
embed.add_field(name="Site", value=parsed["site"], inline=True)
if parsed.get("talkgroup"):
embed.add_field(name="Talkgroup", value=parsed["talkgroup"], inline=True)
if parsed.get("source_radio"):
embed.add_field(name="Source", value=parsed["source_radio"], inline=True)
if parsed.get("datetime"):
embed.add_field(name="Timestamp", value=parsed["datetime"].strftime("%Y-%m-%d %H:%M:%S"), inline=True)
md_lines = []
label_map = [
("TX ID", "artist"),
("Site", "album"),
("Genre", "genre"),
("Software", "composer"),
("System", "grouping"),
("Tag Date", "date"),
("Comment", "comment"),
]
for label, key in label_map:
if tags.get(key):
md_lines.append(f"**{label}:** {tags[key]}")
if md_lines:
embed.add_field(name="Metadata", value="\n".join(md_lines), inline=False)
embed.set_footer(text=path.name)
return embed
# -------------------- Worker --------------------
async def upload_worker(queue: asyncio.Queue, client: discord.Client, conn: sqlite3.Connection):
await client.wait_until_ready()
logger.info("Uploader worker ready.")
while True:
path: Path = await queue.get()
try:
await asyncio.sleep(0.25)
if not is_stable(path):
logger.debug(f"Skipping unstable file: {path}")
queue.task_done(); continue
if already_uploaded(conn, path):
logger.debug(f"Already uploaded, skipping: {path}")
queue.task_done(); continue
parsed = parse_filename(path.name)
tags = read_tags(path)
system_name = tags.get("grouping") or parsed.get("system") or ""
parent = await resolve_destination_text_channel(client, system_name)
if parent is None:
logger.error(f"No text channel for system '{system_name}'. Check AUDIO category or overrides. Skipping {path.name}")
queue.task_done(); continue
title_fixed = fix_title_alias(tags.get("title") or "")
tgid_from_title, alias_from_title = extract_tgid_and_alias(title_fixed)
tgid = tgid_from_title or parsed.get("talkgroup") or "unknown"
alias = alias_from_title
destination = await get_or_create_tg_thread(client, conn, parent, tgid, alias)
embed = build_embed(path, parsed, tags)
try:
await destination.send(embed=embed, file=File(str(path)))
logger.info(f"Posted: {path.name} -> #{parent.name} / {getattr(destination, 'name', 'channel')}")
except discord.HTTPException as e:
logger.warning(f"Embed send failed ({e}); sending as text+file: {path.name}")
await destination.send(content=embed.title or path.stem, file=File(str(path)))
mark_uploaded(conn, path)
await asyncio.sleep(0.3)
except Exception as e:
logger.exception(f"Failed to upload {path}: {e}")
finally:
queue.task_done()
# -------------------- Client --------------------
class SDRUploader(discord.Client):
def __init__(self, conn: sqlite3.Connection, **kwargs):
super().__init__(**kwargs)
self.conn = conn
self.queue: asyncio.Queue[Path] = asyncio.Queue()
self.observer: Optional[Observer] = None
async def setup_hook(self) -> None:
self.loop.create_task(upload_worker(self.queue, self, self.conn))
async def on_ready(self):
logger.info(f"Logged in as {self.user} ({self.user.id})")
handler = NewFileHandler(self.loop, self.queue)
self.observer = Observer()
self.observer.schedule(handler, str(WATCH_FOLDER), recursive=False)
self.observer.start()
logger.info(f"Watching: {WATCH_FOLDER}")
if PROCESS_BACKLOG_ON_START:
candidates = [p for p in WATCH_FOLDER.glob("*") if p.suffix.lower() in ALLOWED_EXT]
candidates.sort(key=lambda p: (p.stat().st_mtime, p.name))
enqueued = 0
for path in candidates:
if any(path.name.startswith(pfx) for pfx in TEMP_PREFIXES): # temp
continue
if any(path.name.endswith(suf) for suf in TEMP_SUFFIXES): # partial
continue
if not already_uploaded(self.conn, path):
await self.queue.put(path); enqueued += 1
logger.info(f"Backlog enqueued: {enqueued} file(s)")
async def close(self):
if self.observer:
self.observer.stop()
self.observer.join()
await super().close()
# -------------------- Entrypoint --------------------
def validate_env() -> Optional[str]:
if not DISCORD_TOKEN:
return "DISCORD_TOKEN missing. Put it in .env next to this script."
# WATCH_FOLDER is ensured earlier; still warn if not found
if not WATCH_FOLDER.exists():
return f"WATCH_FOLDER does not exist: {WATCH_FOLDER}"
return None
def main():
err = validate_env()
if err:
logger.error(err)
pause_if_needed()
return
try:
conn = ensure_db()
intents = discord.Intents.default()
client = SDRUploader(conn=conn, intents=intents)
client.run(DISCORD_TOKEN)
except Exception as e:
logger.exception(f"Fatal error: {e}")
pause_if_needed()
if __name__ == "__main__":
main()
Run It
From the C:\bots\sdr-discord-uploader directory:
$env:PYTHONUNBUFFERED=1
py -u .\sdrtrunk_discord_uploader.py
You should see logs like this:
[INFO] File logging to: C:\bots\sdr-discord-uploader\_uploader.log
[INFO] discord.client logging in using static token
[INFO] Logged in as <BotName> (<id>)
[INFO] Watching: C:\Users\Administrator\OneDrive\Scanner Recordings\SDRTrunk
[INFO] Backlog enqueued: <n> file(s)
How Routing Works
- System detection
- Prefer ID3 Grouping (TIT1), which SDRTrunk sets to the System name (e.g., System1).
- Fall back to the system token in the filename (the text right after the timestamp).
- Channel selection
- If .env has an override mapping (SYSTEM_TO_CHANNEL_OVERRIDES), that wins.
- Else it looks in the AUDIO category for a text channel whose trailing slug matches the system name (System1 → a01-system1); see the sketch after this list.
- Else it uses DEFAULT_CHANNEL_ID.
- Per-talkgroup threads
- It extracts the TG from the filename (_TO_<tg>) or from the cleaned-up Title tag.
- Thread name: tg-<tgid> <alias> (alias cleaned from titles like 111'FP Security' → 111 FP Security).
- The bot caches the thread ID in a small SQLite file so it reuses the same thread.
- Backlog & stability
- On startup, it scans the folder and enqueues any unseen audio.
- A file is considered “ready” after its size is unchanged for ~3 passes at 1s apart.
- Deduping
- DB file: WATCH_FOLDER\_uploaded.sqlite tracks posted files. Delete it if you deliberately want to re-post everything.
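To make the channel and thread matching concrete, here is a minimal standalone sketch of the same rules the script applies in slugify(), match_text_channel_for_system(), and build_thread_name() (the channel list is just an example):
import re

def slugify(s: str) -> str:
    s = re.sub(r"[^a-z0-9]+", "-", s.strip().lower())
    return re.sub(r"-+", "-", s).strip("-")

channels = ["a01-honorhealth", "a02-skynet"]  # example channel names under AUDIO
system = "HonorHealth"                        # from ID3 Grouping or the filename
desired = slugify(system)                     # -> "honorhealth"

# Drop the "aNN-" ordering prefix before comparing, just like the bot does.
match = next((c for c in channels if re.sub(r"^a\d{2}-", "", c) == desired), None)
print(match)  # a01-honorhealth

# Thread name for TG 111 with alias "FP Security":
tgid, alias = "111", "FP Security"
print(f"tg-{tgid} {alias}".strip())  # tg-111 FP Security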
Day-to-Day Use
- It just runs. New SDRTrunk clips show up in the right channel/thread within a few seconds of the file finishing.
- If SDRTrunk assigns a wrong TG, you can:
- Forward the message to the correct TG thread, then delete the original.
- If you add a new system:
- Create a text channel under AUDIO with a matching slug (e.g., a05-newsystem), or add it to .env overrides.
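For example, if the new system's name in SDRTrunk won't slug-match the new channel's name, an override entry along these lines keeps routing working (NewSystem and a05-newsystem are placeholders):
SYSTEM_TO_CHANNEL_OVERRIDES=HonorHealth->a01-honorhealth,Skynet->a02-skynet,NewSystem->a05-newsystem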