Albert Tobacco, 2015-10-06 11:40:23

Why is PhpStorm indexing files all the time?

PhpStorm indexes constantly: it just finishes indexing and immediately starts a new pass. I unpacked another copy of PhpStorm into another folder, and the same thing happens.
Does anyone know what the problem is?


11 answers
Dmitry, 2018-03-01
@kaktys123

While I was looking for a solution to a similar problem, I came across this article: pontyk.com.ua/phpstorm/tormozit-phpstorm
Clearing the cache, which it suggests, helped in my case; the article lists many more options as well.

nick23, 2015-10-06
@nick23

I had a similar problem with a project that contained more than 1.2 million files. After removing files that are unnecessary for development (img, cache, logs, ...), which brought the count down to ~300k files, the endless indexing stopped.

asm0dey, 2012-03-18
@asm0dey

Isn't wget -r an option?
Then you can go through the result with grep and look for whatever you need.

Chii, 2012-03-18
@Chii

What's stopping the web server from logging 404/403 responses and then handing you the log to read?
I've always done it this way.
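
Scanning such a log takes only a few lines of PHP. A minimal sketch, assuming the standard Apache "combined" log format (the HTTP status code is the ninth whitespace-separated field) and a hypothetical log path:

<?php
// Scan an access log for 403/404 responses.
// Assumes the Apache "combined" log format; the path is hypothetical.
$log = fopen('/var/log/apache2/access.log', 'r');
while (($line = fgets($log)) !== false) {
    // In the combined format the HTTP status is the 9th field.
    $fields = preg_split('/\s+/', trim($line));
    if (isset($fields[8]) && in_array($fields[8], ['403', '404'], true)) {
        echo $line; // the request that produced the 403/404
    }
}
fclose($log);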

Mikhail Lyalin, 2012-03-19
@mr_jok

I usually check sites with Xenu's Link Sleuth (TM), which finds broken links on your site.

powder96, 2012-03-18
@powder96

IMHO, you need to write a simple spider that follows links, and set it loose on a local copy of the site.
First, download a page of the site and check whether it contains a PHP Notice/Error/Warning. Then pull out all of its links with a regular expression, and repeat the above for each link that has not been visited yet. A minimal sketch follows below.
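
A minimal sketch of such a spider, assuming a hypothetical local URL for the copy of the site and that PHP renders its notices into the page body (display_errors = On):

<?php
// Breadth-first crawl of a local copy of the site, flagging pages
// whose HTML contains a PHP Notice/Warning/Error.
$base    = 'http://localhost/';   // hypothetical local copy of the site
$queue   = [$base];
$visited = [];

while (($url = array_shift($queue)) !== null) {
    if (isset($visited[$url])) {
        continue;
    }
    $visited[$url] = true;

    $html = @file_get_contents($url);
    if ($html === false) {
        echo "FAILED:    $url\n";  // 404/403 or a connection problem
        continue;
    }
    if (preg_match('/\b(Notice|Warning|Fatal error|Parse error):/', $html)) {
        echo "PHP ERROR: $url\n";
    }

    // Pull out all links with a regular expression and queue local ones.
    preg_match_all('/href="([^"#]+)"/i', $html, $matches);
    foreach ($matches[1] as $link) {
        if (strpos($link, $base) === 0) {
            $queue[] = $link;                       // absolute local URL
        } elseif ($link[0] === '/') {
            $queue[] = rtrim($base, '/') . $link;   // root-relative URL
        }
    }
}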

xaker1, 2012-03-18
@xaker1

A small addition to the previous answers:
Instead of parsing the pages and searching them for Notice/Error/etc., you can register your own error handler via set_error_handler and simply log the errors there, plus a similar handler for 404/403 responses.
Then crawl the site recursively (with the same wget) and read the error log.
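
A minimal sketch of the error-handler part, with a hypothetical log path (register it early, e.g. from a common include):

<?php
// Log every notice/warning/error to a file instead of printing it,
// so the crawler's output stays clean. The log path is hypothetical.
set_error_handler(function ($severity, $message, $file, $line) {
    error_log(
        date('c') . " [$severity] $message in $file:$line\n",
        3,                              // message_type 3 = append to file
        '/tmp/php-crawl-errors.log'
    );
    return true; // tell PHP not to run its internal handler
});

The 404/403 side can be logged the same way, for example from whatever script Apache's ErrorDocument directive points at.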

Marcel Markhabulin, 2012-03-19
@milar

Automated security testing: code.google.com/p/skipfish/
And one more: w3af.sourceforge.net/
It is recommended to make a file/database backup before testing, because these programs probe forms with special characters, and the submitted forms will most likely be written to the database.

p4s8x, 2012-03-19
@p4s8x

You can use wget -r to "walk" the entire site, following all of its links without keeping the downloaded files, and then look at the Apache logs; but that covers only part of your task.

Evgeny Bezymyannikov, 2012-03-19
@psman

Google for Xenu; it does exactly what you need.

kantim, 2012-03-20
@kantim

KLinkStatus - checks links and images.
P.S. It works under Kubuntu, and probably under Ubuntu as well.
