Python
Alexander, 2020-09-14 13:12:55

Why does the logger create a new file before the maxBytes limit is exceeded?

The logger stops respecting maxBytes and cycles through the backupCount files prematurely.

logger_django.conf
ROTATE_LOG_SIZE = 10 * 1024 * 1024
ROTATE_LOG_COUNT = 5

LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'verbose': {
            'format': '{levelname} {asctime} {module}:{lineno} {message}',
            'style': '{',
            'datefmt': '%Y-%m-%d %H:%M:%S',
        },
        'extended': {
            'format': '{levelname} {asctime} {message}',
            'style': '{',
            'datefmt': '%Y-%m-%d %H:%M:%S',
        },
        'simple': {
            'format': '{levelname} {message}',
            'style': '{',
        },
    },
    'handlers': {
        'debug': {
            'level': 'DEBUG',
            'class': 'logging.FileHandler',
            'filename': os.path.join(LOG_ROOT, 'debug.log'),
            'formatter': 'verbose',
        },
        'logger': {
            'level': 'DEBUG',
            'class': 'logging.handlers.RotatingFileHandler',
            'filename': os.path.join(LOG_ROOT, 'logger.log'),
            'formatter': 'extended',
            'maxBytes': ROTATE_LOG_SIZE,
            'backupCount': ROTATE_LOG_COUNT,
        },
        'banking': {
            'level': 'DEBUG',
            'class': 'logging.handlers.RotatingFileHandler',
            'filename': os.path.join(LOG_ROOT, 'banking.log'),
            'formatter': 'extended',
            'maxBytes': ROTATE_LOG_SIZE,
            'backupCount': ROTATE_LOG_COUNT,
        },
    },
    'loggers': {
        'debug': {
            'handlers': ['debug',],
            'level': 'DEBUG',
            'propagate': False,
        },
        'logger': {
            'handlers': ['logger',],
            'level': 'INFO',
            'propagate': False,
        },
        'logger.main': {
            'handlers': ['logger',],
            'level': 'INFO',
            'propagate': False,
        },
        'banking': {
            'handlers': ['banking',],
            'level': 'INFO',
            'propagate': False,
        },
    },
}

log# ls -l --block-size=K ./
total 12172K
-rw-r--r-- 1 www-data www-data  125K Sep 14 13:01 banking.log
-rw-r--r-- 1 www-data www-data  121K Sep 14 13:01 banking.log.1
-rw-r--r-- 1 www-data www-data  285K Sep 14 12:20 banking.log.2
-rw-r--r-- 1 www-data www-data  333K Sep 14 12:17 banking.log.3
-rw-r--r-- 1 www-data www-data  298K Sep 14 10:29 banking.log.4
-rw-r--r-- 1 www-data www-data  315K Sep 14 10:20 banking.log.5

There are other logs in the folder at 4 and 6 MB, and there is plenty of free disk space. As you can see, the rotated files are 125-315 KB each, although 10 MB is set in the config.
The problem has happened before; restarting via supervisorctl helped. Its config:
uwsgi.conf
[program:workplace-uwsgi]
command = /usr/local/projects/env/bin/uwsgi --ini /usr/local/projects/conf/uwsgi.ini
user = www-data
stdout_logfile = /usr/local/projects/var/log/uwsgi.log
stdout_logfile_maxbytes = 10MB
stderr_logfile = /usr/local/projects/var/log/uwsgi.error.log
stderr_logfile_maxbytes = 10MB
autostart = true
autorestart = true
redirect_stderr = false
priority = 999
stopsignal = QUIT


But a few days later it all started over again.

The logs are written heavily: many users hit an API endpoint whose activity is written to this log. I suspect, though I'm not certain, that this is the problem: several workers trying to write to the same file at the same time.
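The suspicion can be reproduced with the stdlib alone. Two independent RotatingFileHandler instances on the same path (standing in for two uWSGI workers, each of which holds its own file handle) rotate against each other, so the resulting files bear little relation to maxBytes. A minimal sketch; the path, maxBytes value, and message are illustrative:

```python
import logging
import logging.handlers
import os
import tempfile

tmp = tempfile.mkdtemp()
path = os.path.join(tmp, "app.log")

# Two independent handlers on the same path stand in for two uWSGI workers:
# each worker process would hold its own file handle, just like these objects.
h1 = logging.handlers.RotatingFileHandler(path, maxBytes=200, backupCount=3)
h2 = logging.handlers.RotatingFileHandler(path, maxBytes=200, backupCount=3)

record = logging.LogRecord("demo", logging.INFO, __file__, 0, "x" * 50, None, None)
for _ in range(10):
    h1.emit(record)  # h1 rotates once *its* handle sees >= 200 bytes ...
    h2.emit(record)  # ... while h2 keeps appending to the renamed file,
                     # then later performs its own competing rollover.

h1.close()
h2.close()

for name in sorted(os.listdir(tmp)):
    # Sizes end up inconsistent with maxBytes, like in the ls output above.
    print(name, os.path.getsize(os.path.join(tmp, name)))
```

Each handler checks its own stream position against maxBytes and renames the base file when it rolls over, so the other handler keeps writing to the renamed file and then triggers its own rollover at the wrong moment.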

Thanks for the help.

1 answer
Dr. Bacon, 2020-09-14
@bacon

Yes, it's the workers. Use the concurrent-log-handler package or a similar multi-process-safe handler.
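For reference, this is a one-line change per handler in the LOGGING dict, assuming the concurrent-log-handler package is installed (pip install concurrent-log-handler); the handler name, formatter, and constants below mirror the question's config:

```python
# Sketch: same handler as in the question, but with a multi-process-safe
# class. Requires: pip install concurrent-log-handler
'handlers': {
    'logger': {
        'level': 'DEBUG',
        'class': 'concurrent_log_handler.ConcurrentRotatingFileHandler',
        'filename': os.path.join(LOG_ROOT, 'logger.log'),
        'formatter': 'extended',
        'maxBytes': ROTATE_LOG_SIZE,
        'backupCount': ROTATE_LOG_COUNT,
    },
},
```

ConcurrentRotatingFileHandler coordinates rotation across processes via a lock file next to the log, so several uWSGI workers can share one log path without the premature-rotation behavior described above.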
