bash
Matematik126, 2018-11-30 10:56:58

How do I select from a log file only the values that are repeated a given number of times?

There is a file with a list of IP addresses (ip.log). Counting occurrences gives:

$ cat ip.log | sort | uniq -c > ip2.log
2 185.1.0.1
121 185.1.0.150
3 185.1.0.2
1 185.1.0.3
The uniq -d option only drops values that occur once; it prints every duplicated line regardless of how many times it repeats:

$ cat ip.log | sort | uniq -d > ip2.log
185.1.0.1
185.1.0.150
185.1.0.2
Is it possible to process the log file so that it displays only those IP addresses that occur a given number of times?
For example, if an IP is repeated more than 100 times, select only 185.1.0.150 and drop 185.1.0.1, 185.1.0.2, and 185.1.0.3.


1 answer(s)
q2zoff, 2018-11-30
@Matematik126

sort ip.log | uniq -c | awk '{if ($1 > 100) {print $0}}'
UPD: or, if the count column is not needed in the output:
sort ip.log | uniq -c | awk '{if ($1 > 100) {print $2}}'
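The threshold in the answer above is hardcoded. A minimal sketch of the same pipeline with the threshold passed in as a shell variable (via awk's -v flag), run against a small hypothetical sample so the behavior is visible end to end:

```shell
#!/bin/sh
# Build a small sample log (assumption: one IP per line, as in the question).
printf '%s\n' 185.1.0.1 185.1.0.1 \
              185.1.0.2 185.1.0.2 185.1.0.2 \
              185.1.0.3 > /tmp/ip_sample.log

# Keep only addresses occurring more than $threshold times.
# sort groups identical lines, uniq -c prefixes each with its count,
# and awk filters on that count column ($1), printing the address ($2).
threshold=2
sort /tmp/ip_sample.log | uniq -c | awk -v n="$threshold" '$1 > n {print $2}'
# prints 185.1.0.2 (the only address occurring more than twice)
```

For "exactly N times" rather than "more than N times", the same pipeline works with the awk condition changed to `$1 == n`.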
