Spiders that honor the robots protocol can be blocked directly through robots.txt. Just add the following to the robots.txt file in the site's root directory:
User-agent: SemrushBot
Disallow: /
User-agent: DotBot
Disallow: /
User-agent: MegaIndex.ru
Disallow: /
User-agent: MauiBot
Disallow: /
User-agent: AhrefsBot
Disallow: /
User-agent: MJ12bot
Disallow: /
User-agent: BLEXBot
Disallow: /
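If you prefer a more compact form, most parsers that follow the robots.txt specification also accept several User-agent lines grouped in front of a single rule set, so the same block list can be written as (a sketch equivalent to the rules above):

User-agent: SemrushBot
User-agent: DotBot
User-agent: MegaIndex.ru
User-agent: MauiBot
User-agent: AhrefsBot
User-agent: MJ12bot
User-agent: BLEXBot
Disallow: /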
For spiders that ignore the robots rules, the only options at the moment are blocking by user agent or by IP. These crawlers can be handled with the following additions to the nginx conf:
# Block crawling by tools such as Scrapy
if ($http_user_agent ~* (Scrapy|Curl|HttpClient)) {
    return 403;
}
# Block the listed user agents, as well as requests with an empty UA
if ($http_user_agent ~ "yisouspider|FeedDemon|Indy Library|Alexa Toolbar|AskTbFXTV|AhrefsBot|CrawlDaddy|CoolpadWebkit|Java|Feedly|UniversalFeedParser|ApacheBench|Microsoft URL Control|Swiftbot|ZmEu|oBot|jaunty|Python-urllib|lightDeckReports Bot|YYSpider|DigExt|HttpClient|MJ12bot|heritrix|EasouSpider|YandexBot|ZoominfoBot|PetalBot|petalBot|Ezooms|^$") {
    return 403;
}
# Block request methods other than GET, HEAD, and POST
if ($request_method !~ ^(GET|HEAD|POST)$) {
    return 403;
}
# Block access to archive and backup files
location ~* \.(tgz|bak|zip|rar|tar|gz|bz2|xz|tar\.gz)$ {
    return 400;
}
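The rules above only match on the user agent; for bots that rotate or fake their UA, the remaining option mentioned earlier is blocking by IP. A minimal sketch using nginx's deny directive, placed in the server or location block (the addresses below are placeholders for whatever ranges the offending traffic actually comes from):

# Block requests from known bot IP ranges (addresses are placeholders)
deny 192.0.2.0/24;
deny 198.51.100.17;
# Everything else is still allowed
allow all;

After changing the configuration, nginx -t checks the syntax and nginx -s reload applies it without dropping existing connections.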