Django
kukkk, 2016-02-12 12:09:17

In search of best practices: how to properly speed up and measure Django?

I've been writing a reporting page; at some point I decided to load real data into the database, and the Django Debug Toolbar showed 2 seconds to generate the page, with 150 database queries taking 200 ms. I would like my pages to render in no more than 100 ms.
I conclude that some part of the page is generated very inefficiently in the Python code itself, even apart from talking to the database. Yes, I know I should simplify templates and cache dynamic model fields and template fragments. But the question is: how do I understand what exactly eats the resources? How do I profile Django correctly? It is so easy to fool yourself and optimize something that takes no time anyway. How do I do this without pain, without sprinkling profiling lines through the code, and without knowing the framework's internals at the contributor level? What does a working workflow look like?
PS The question was partly prompted by a refactoring that forced me to add a huge number of get_model calls instead of standard Python imports to avoid circular imports. Let's say I have no idea how that works under the hood. What would tell me whether such a solution costs resources or not? And if it does, does it cost a lot, and should I avoid it? What would give me or my colleagues insight into adequate optimization?
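For what it is worth, the get_model question can be answered by measuring it in isolation. Below is a minimal sketch using timeit; the app label "reports" and the model Report are made-up names, and it is meant to be run from python manage.py shell so the app registry is already loaded.

import timeit

from django.apps import apps
from reports.models import Report  # hypothetical app and model

# apps.get_model() is essentially a registry (dict) lookup, so the
# per-call overhead is tiny compared to a single database query.
lookup = timeit.timeit(lambda: apps.get_model("reports", "Report"), number=100_000)
direct = timeit.timeit(lambda: Report, number=100_000)

print(f"get_model:        {lookup:.3f}s per 100k calls")
print(f"direct reference: {direct:.3f}s per 100k calls")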

5 answers
IvanOne, 2016-02-12
@kukkk

Perhaps I'm not entirely versed in this topic, but what you need is a profiler: https://github.com/rkern/line_profiler
habrahabr.ru/sandbox/54557
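A minimal sketch of the usual line_profiler workflow (the module and function names here are made up): decorate the suspect function with @profile, which kernprof injects at runtime, then run the script under kernprof to get per-line hit counts and timings.

# build_report.py -- hypothetical script with the slow code.
# @profile is injected into builtins by kernprof; do not import it.

@profile
def build_report(rows):
    # Every line below gets its own hit count and timing in the output.
    totals = {}
    for row in rows:
        totals[row["group"]] = totals.get(row["group"], 0) + row["value"]
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    data = [{"group": str(i % 10), "value": i} for i in range(100_000)]
    build_report(data)

Run it with "kernprof -l -v build_report.py": -l enables line-by-line profiling and -v prints the statistics when the script exits.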

un1t, 2016-02-12
@un1t

Firstly, you don't need to cache or optimize anything until you have figured out what actually slows you down.
In the debug toolbar, check whether many of those 150 queries are identical. It is highly likely that with select_related and prefetch_related you can cut the number of queries many times over (see the sketch after the decorator below).
The most primitive way is to use time() to measure how long particular sections take.
Here is a decorator (see below); wrap any function with it and you will see what is slow.
The only thing I want to note: if you pass a queryset to the template without converting it to a list, the query is actually executed inside the template, and such measurements will show that rendering is slow, although it is really the queries that are slow.

from decorator import decorator
from line_profiler import LineProfiler

@decorator
def profile_each_line(func, *args, **kwargs):
    # Wrap the decorated function in LineProfiler and print
    # per-line timings after every call, even if it raises.
    profiler = LineProfiler()
    profiled_func = profiler(func)
    try:
        return profiled_func(*args, **kwargs)
    finally:
        profiler.print_stats()
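To illustrate the select_related/prefetch_related point and the lazy-queryset caveat above, here is a minimal sketch; the Report model, its author and tags relations, and the report_list view are all made-up names.

from django.shortcuts import render

from reports.models import Report  # hypothetical model

def report_list(request):
    # Without select_related, accessing report.author in the template
    # would issue one extra query per row (the classic N+1 problem).
    reports = (
        Report.objects
        .select_related("author")      # FK/OneToOne: fetched via a JOIN
        .prefetch_related("tags")      # M2M/reverse FK: one extra query in total
    )

    # Force evaluation here, so the debug toolbar and profilers attribute
    # the SQL time to the view rather than to template rendering.
    reports = list(reports)

    return render(request, "reports/list.html", {"reports": reports})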

asd111, 2016-02-13
@asd111

https://docs.djangoproject.com/en/1.9/topics/perfo...
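That page covers, among other things, Django's built-in caching tools. As a rough illustration of the "cache dynamic model fields" idea from the question (the Report model, its lines relation, and the cache key are all made-up names):

from django.core.cache import cache
from django.db import models
from django.db.models import Sum
from django.utils.functional import cached_property


class Report(models.Model):  # hypothetical model
    name = models.CharField(max_length=100)

    @cached_property
    def expensive_total(self):
        # Computed once per instance and then memoized for the lifetime
        # of the object -- "caching a dynamic model field".
        return self.lines.aggregate(total=Sum("amount"))["total"]


def cached_report_names():
    # Cross-request caching through the configured cache backend.
    names = cache.get("report_names")
    if names is None:
        names = list(Report.objects.values_list("name", flat=True))
        cache.set("report_names", names, timeout=300)
    return names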

Ivan, 2016-02-19
@zeroplay

How do I do this without pain, without sprinkling profiling lines through the code, and without knowing the framework's internals at the contributor level?
It won't work without instrumenting the code; even from what was written above it is clear that some degree of immersion is required (time() is the most primitive and reliable tool),
although some people believe the browser timeline is quite enough to find the bottlenecks.

bash77, 2016-02-18
@bash77

Hard to say without any input data... well, you can:
try switching the template engine to Jinja2;
push sessions into Redis (a settings sketch follows below).
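A hedged settings.py sketch of both suggestions. It assumes the django-redis package is installed and that a Jinja2 environment() factory exists at myproject.jinja2.environment (a made-up path), as the Jinja2 backend documentation describes.

# settings.py (fragment)

TEMPLATES = [
    {
        # Jinja2 backend that ships with Django; templates go into
        # each app's jinja2/ directory when APP_DIRS is True.
        "BACKEND": "django.template.backends.jinja2.Jinja2",
        "DIRS": [],
        "APP_DIRS": True,
        "OPTIONS": {"environment": "myproject.jinja2.environment"},  # hypothetical path
    },
]

CACHES = {
    "default": {
        # Requires the django-redis package.
        "BACKEND": "django_redis.cache.RedisCache",
        "LOCATION": "redis://127.0.0.1:6379/1",
    }
}

# Store sessions in the cache (i.e. in Redis) instead of the database.
SESSION_ENGINE = "django.contrib.sessions.backends.cache"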
