Information Security

Puma Thailand, 2012-08-17 16:42:56

How do you do a site security audit?

Please be as detailed as possible, indicating the specific software, services, and actions.
Right now I do something like this:
1) I scan ports with nmap (see the sketch after this list).
2) I do a scan with Snort.
3) I run some of the site's pages through antivirus scanners.
4) I look at the version of the CMS or framework and search their bug trackers for known bugs.
5) I test input escaping.
6) I look at how everything runs on the server: suphp, mod_security.
7) I check that passwords are adequately strong.
8) I do a small DDoS test.

I probably do something else, but not systematically.
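A minimal sketch of step 1, assuming a plain TCP connect scan with a placeholder host and port list (scan only hosts you are authorized to audit):

```python
import socket

HOST = "example.com"  # placeholder -- the host you are authorized to audit
PORTS = [21, 22, 25, 80, 110, 143, 443, 3306, 8080]

def scan(host, ports, timeout=1.0):
    """TCP connect scan: a port counts as open if the handshake succeeds."""
    open_ports = []
    for port in ports:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                open_ports.append(port)
        except OSError:
            pass  # closed, filtered, or timed out
    return open_ports

if __name__ == "__main__":
    for port in scan(HOST, PORTS):
        print(f"{port}/tcp open")
```

nmap itself does far more (service and version detection, UDP, OS fingerprinting, timing control); this only illustrates the idea behind the first step.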

6 answers
Ents, 2012-08-17
@Ents

I check software versions (there are holes not only in scripts).
I check file permissions (so that php-fpm (or Apache) cannot write to directories from which a PHP script can be requested).
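A minimal sketch of that permission check, assuming a placeholder document root: it walks the web root and flags anything writable by "others", which is the crudest way a compromised process gets to drop a PHP shell.

```python
import os
import stat

WEBROOT = "/var/www/html"  # placeholder document root

def world_writable(path):
    """True if 'others' have write permission on the path."""
    return bool(os.lstat(path).st_mode & stat.S_IWOTH)

for dirpath, dirnames, filenames in os.walk(WEBROOT):
    for name in dirnames + filenames:
        full = os.path.join(dirpath, name)
        if world_writable(full):
            print("world-writable:", full)
```

A fuller check would also compare owners and groups against the user php-fpm actually runs as; this only catches the most obvious case.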

cat_crash, 2012-08-17
@cat_crash

I audit according to the OWASP ASVS standard.

ggagnidze, 2012-08-17
@ggagnidze

You can also outsource it to script kiddies and just compute checksums of all the files every day.
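A minimal sketch of the daily checksum idea, with placeholder paths: the first run records a SHA-256 baseline, and later runs (e.g. from a daily cron job) report any file that was added, removed, or changed.

```python
import hashlib
import json
import os

WEBROOT = "/var/www/html"              # placeholder site root
BASELINE = "/var/lib/audit/sums.json"  # placeholder baseline location

def snapshot(root):
    """Map every file path under root to its SHA-256 hex digest."""
    sums = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                sums[path] = hashlib.sha256(f.read()).hexdigest()
    return sums

current = snapshot(WEBROOT)
if os.path.exists(BASELINE):
    with open(BASELINE) as f:
        old = json.load(f)
    for path in sorted(set(old) | set(current)):
        if path not in current:
            print("removed:", path)
        elif path not in old:
            print("added:  ", path)
        elif old[path] != current[path]:
            print("changed:", path)
os.makedirs(os.path.dirname(BASELINE), exist_ok=True)
with open(BASELINE, "w") as f:
    json.dump(current, f)
```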

Evgeny Borisov, 2012-08-20
@Agel_Nash

Your algorithm is more about auditing a web server than a site.
If we take a specific site and have access to its sources, then the algorithm is as follows (for PHP; a search sketch follows the list):
1) Open the sources in an IDE.
2) Search for calls to functions like eval and check what data is passed to them.
3) Search for reads of the $_POST, $_GET, and $_REQUEST arrays and check how that data is filtered and used.
4) Search for move_uploaded_file and file_get_contents and check how the data passed to those functions is filtered.
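A minimal sketch of steps 2-4, assuming a placeholder source path: it walks the PHP files and prints each line that touches the functions and superglobals named above, leaving the actual filtering review to a human.

```python
import os
import re

SRC = "/var/www/html"  # placeholder path to the site's sources

# The targets from steps 2-4: eval-like calls, request superglobals,
# and file functions whose arguments must be checked.
PATTERN = re.compile(
    r"\beval\s*\(|\$_(?:GET|POST|REQUEST)\b"
    r"|\bmove_uploaded_file\s*\(|\bfile_get_contents\s*\("
)

for dirpath, _, filenames in os.walk(SRC):
    for name in filenames:
        if not name.endswith(".php"):
            continue
        path = os.path.join(dirpath, name)
        with open(path, encoding="utf-8", errors="replace") as f:
            for lineno, line in enumerate(f, 1):
                if PATTERN.search(line):
                    print(f"{path}:{lineno}: {line.rstrip()}")
```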
If there is no access to the sources, then the algorithm is as follows (a probing sketch follows the list):
1) Try to determine the engine.
1.1) Try to determine the engine's version.
1.2) Check for known bugs in that engine.
2) Try to find the admin panel (with an admin finder).
2.1) Try the standard login/password pairs.
3) Check for an exposed .svn folder.
3.1) Analyze the sources.
4) Check for dumps (a dump folder, dump.sql or dump.zip files, etc.).
4.1) Check the archives for passwords.
4.2) Analyze the sources.
5) Compile a list of unique URLs (http://example.com/q/1 and http://example.com/q/2 are not unique, but http://example.com/q/1 and http://example.com/p/2 are unique).
5.1) If human-readable URLs (SEF) are enabled, try to guess the name of the underlying GET variable.
6) Throw quotes into the address bar, convert variables to arrays (if a GET variable name was found), and inject HTML tags.
7) Check the site's cookies. If the site sets cookies of its own, change their values (again: quotes, HTML tags).
8) If the site has forms (comments, article publishing, notes, search fields, etc.), test them for XSS.
9) Test how the site behaves with a malformed browser name (User-Agent).
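A minimal sketch of steps 2-4 and 6, with a placeholder base URL, a hypothetical q parameter, and a deliberately tiny path list (probe only sites you are authorized to test): it looks for leftover admin/.svn/dump paths and checks whether a quote and an HTML tag sent through a GET parameter come back unescaped.

```python
import urllib.error
import urllib.parse
import urllib.request

BASE = "http://example.com"  # placeholder target

# Steps 2-4: a tiny illustrative wordlist of leftover paths.
PATHS = ["/admin/", "/administrator/", "/.svn/entries",
         "/dump.sql", "/dump.zip", "/dump/"]

def fetch(url):
    req = urllib.request.Request(url, headers={"User-Agent": "audit-sketch"})
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            return resp.status, resp.read().decode("utf-8", "replace")
    except urllib.error.HTTPError as e:
        return e.code, ""
    except OSError:
        return None, ""

for path in PATHS:
    status, _ = fetch(BASE + path)
    if status is not None and status != 404:
        print(f"{path} -> HTTP {status} (worth a manual look)")

# Step 6: does a quote/tag survive into the response unescaped?
MARKER = "'\"<audit123>"
status, body = fetch(BASE + "/?" + urllib.parse.urlencode({"q": MARKER}))
if MARKER in body:
    print("payload reflected unescaped -- investigate for XSS")
```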

egorinsk, 2012-08-17
@egorinsk

Theoretically, you can also read the source (though it's boring) to see how the input data is processed. You can use SQL injection/XSS scanners. There is also a tool from Google, skipfish.

prox, 2012-08-18
@prox

I'm probably repeating others:
- How the site/server behaves under load (a large number of connections, web server log rotation): I run LOIC against port 80 and webstress from as many machines as possible, and watch the effect of the load at the same time (a load sketch is below).
- I find the server's (OS's) weak point under load (there was a case where a site written in Perl ate all the memory under load).
- An nmap intense scan.
- Limiting connections per second per IP (on the firewall).
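A minimal sketch of the load check, with placeholder URL and concurrency numbers (again, only against hosts you are authorized to stress): it fires concurrent GETs and reports error counts and latency, which is roughly what one watches while LOIC or webstress runs.

```python
import concurrent.futures
import time
import urllib.request

URL = "http://example.com/"  # placeholder target
WORKERS = 20                 # concurrent connections
TOTAL = 200                  # total requests

def hit(_):
    """One GET request; returns (elapsed seconds, HTTP status or None)."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(URL, timeout=10) as resp:
            resp.read()
            status = resp.status
        return time.monotonic() - start, status
    except OSError:
        return time.monotonic() - start, None

with concurrent.futures.ThreadPoolExecutor(WORKERS) as pool:
    results = list(pool.map(hit, range(TOTAL)))

ok = [t for t, s in results if s == 200]
errors = len(results) - len(ok)
print(f"ok: {len(ok)}, errors: {errors}")
if ok:
    print(f"avg {sum(ok)/len(ok):.3f}s, max {max(ok):.3f}s")
```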
