I found my solution using the inotify-tools package. I had originally planned to use a simple cron job with batch processing, but with that package I could perform on-demand single-file processing. Given the relatively low volume of files I expect to receive, this should suit my needs well.
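In case anyone else wants to replicate this: inotifywait comes from the inotify-tools package, so it needs to be installed before the script below runs. A minimal sketch, assuming an EL-style system with the package available in an enabled repo (EPEL on EL systems; Debian/Ubuntu ship a package of the same name under apt):
# Assumes inotify-tools is available in a configured repository
dnf install -y inotify-tools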
Everything else in my original script stays the same; I have added the following:
### File Processing
PROCESSED=/data/processed
mkdir -p $PROCESSED
# Ownership
find $PROCESSED -type f -exec chown root:wheel {} \;
find $PROCESSED -type d -exec chown root:wheel {} \;
# Permission
find $PROCESSED -type f -exec chmod 0644 {} \;
find $PROCESSED -type d -exec chmod 0755 {} \;
# Bash Script
mkdir -p /root/scripts
cat > /root/scripts/uploadprocesser.sh << '_EOF'
#!/bin/bash
SOURCE=/var/www/html/uploads
PROCESSED=/data/processed
inotifywait -m -e create -e moved_to --format "%f" "$SOURCE" |
while read -r FILENAME
do
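# Ignore temporary files created by davfs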
if [[ $FILENAME != ".davfs.tmp"* ]]; then
echo "Detected $FILENAME, moving"
mv --backup=numbered "$SOURCE/$FILENAME" "$PROCESSED/$FILENAME"
fi
done
_EOF
chmod +x /root/scripts/uploadprocesser.sh
# Systemd Service
cat > /etc/systemd/system/uploadprocesser.service << _EOF
[Unit]
Description=File Upload Processing Service
After=httpd.service
[Service]
Type=simple
Restart=always
RestartSec=1
User=root
ExecStart=/bin/bash /root/scripts/uploadprocesser.sh
[Install]
WantedBy=multi-user.target
_EOF
systemctl daemon-reload
systemctl enable uploadprocesser
systemctl restart uploadprocesser
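One quick way to confirm the watcher is actually running afterwards is with the standard systemd tools, using the unit name defined above:
# Check the unit state and follow its log output
systemctl status uploadprocesser
journalctl -u uploadprocesser -f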
And since mv has built-in backup rotation (--backup=numbered), I no longer needed to block overwrites myself. The comment from PMF got me thinking along these lines, so thanks for that!
[root@localhost ~]# ls /var/www/html/uploads/
lost+found
[root@localhost ~]# ls /data/processed/
test1 test1.~1~ test1.~2~ test1.~3~ test1.~4~ test1.~5~