Every 15 minutes exactly, in whatever terminal window(s) I have connected to my server, I’m getting these system-wide broadcast messages:

Broadcast message from systemd-journald@localhost (Sun 2026-02-15 00:45:00 PST):  

systemd[291622]: Failed to allocate manager object: Too many open files  


Message from syslogd@localhost at Feb 15 00:45:00 ...  
 systemd[291622]:Failed to allocate manager object: Too many open files  

Broadcast message from systemd-journald@localhost (Sun 2026-02-15 01:00:01 PST):  

systemd[330416]: Failed to allocate manager object: Too many open files  


Message from syslogd@localhost at Feb 15 01:00:01 ...  
 systemd[330416]:Failed to allocate manager object: Too many open files  

Broadcast message from systemd-journald@localhost (Sun 2026-02-15 01:15:01 PST):  

systemd[367967]: Failed to allocate manager object: Too many open files  


Message from syslogd@localhost at Feb 15 01:15:01 ...  
 systemd[367967]:Failed to allocate manager object: Too many open files  

The only thing I found online that’s kind of similar is this forum thread, but it doesn’t seem like this is an OOM issue. I could totally be wrong about that, but I have plenty of available physical RAM and swap. I have no idea where to even begin troubleshooting this, but any help would be greatly appreciated.

ETA: I’m not even sure if this is necessarily a bad thing that’s happening, but it definitely doesn’t look good, so I’d rather figure out what it is now before it bites me in the ass later

  • just_another_person@lemmy.world · 2 months ago

    You have a process holding open a bunch of FDs. Instead of just blindly increasing the system limits, try and find the culprit with something like: lsof | awk '{print $1}' | sort | uniq -c | sort -nr

    That will give you a rough count of open-file entries per command. See which are the worst offenders and try and fix the issue.
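    Note that lsof prints one line per thread and per memory-mapped file, so those counts can overstate real FD usage. A rough sketch that counts actual descriptors per process straight from /proc instead (run as root to see every user's processes):

```shell
# Count actual open file descriptors per process by listing /proc/<pid>/fd.
# Run as root to see processes owned by other users.
for pid in /proc/[0-9]*; do
  n=$(ls "$pid/fd" 2>/dev/null | wc -l)   # one entry per open FD
  [ "$n" -gt 0 ] && printf '%6d %s\n' "$n" "$(cat "$pid/comm" 2>/dev/null)"
done | sort -rn | head
```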

    You COULD just increase the fd open max, but then you’ll more than likely run into OOM-kill issues, because you aren’t fixing the problematic process.
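    Before changing anything, it’s worth checking how close you actually are to the limits (a quick sketch; these /proc paths are standard on Linux):

```shell
# System-wide usage: "allocated  unused  max" file handles
cat /proc/sys/fs/file-nr

# Per-process limit; substitute the PID from the error (e.g. 291622) for "self"
grep 'Max open files' /proc/self/limits

# Soft limit for your current shell
ulimit -n
```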

    • guynamedzero@piefed.zeromedia.vipOP · 2 months ago

      Running this, I find that qbittorrent fluctuates around 19,000 and python3 is steady around 18,000, with my arrs a bit behind. In this case, I’m not sure if there’s anything I can easily do without stopping seeding:

        18460 qbittorre
        18424 python3
        14056 docker-pr
        11424 Sonarr
        11072 Radarr
         9440 Prowlarr
      
      • just_another_person@lemmy.world · 2 months ago

        Reduce the number of active connections, or the total number of active transfers available at once, and that will lower that number.

        If you’re POSITIVE your memory situation is in good shape (meaning you’re not running out of memory), then you can increase the max number of open files allowed for your user, or globally: https://www.howtogeek.com/805629/too-many-open-files-linux/

        Again: if you do this, you will likely start hitting OOM-kill situations, which would be worse. The file limits set right now are preventing that from happening.
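        For reference, a sketch of where those limit changes usually live (values are illustrative, not recommendations; the linked article covers the details):

```shell
# Per-user limits, applied at login via PAM: /etc/security/limits.conf
#   youruser  soft  nofile  65536
#   youruser  hard  nofile  65536

# Default for all systemd-managed services: /etc/systemd/system.conf
#   [Manager]
#   DefaultLimitNOFILE=65536

# Or per service, via "systemctl edit <service>":
#   [Service]
#   LimitNOFILE=65536

# Verify what a running process actually got (substitute its PID for "self"):
grep 'Max open files' /proc/self/limits
```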