doujia2090 2014-04-15 13:07
81 views

How can I block crawlers such as spyder/Nutch-2 from accessing a specific page?

I have a Windows client application that consumes a PHP page hosted on a shared commercial web server.

This PHP page returns an encrypted JSON payload. The page also contains a piece of code that keeps track of which IPs visit it, and I have noticed a spyder/Nutch-2 crawler among the visitors.

I am wondering how it is possible for a crawler to find a page that is not published in any search engine. Is there a way to block crawlers from visiting this specific page?

Should I use a .htaccess file to configure this?


5 answers

  • douzhongqiu5032 2014-04-15 13:13

    You can indeed use a .htaccess file. robots.txt is another option, but some crawlers will ignore it. You can also block specific user-agent strings, which differ from crawler to crawler; a sketch of that is shown after the robots.txt notes below.

    robots.txt:

    User-agent: *
    Disallow: /
    

    This example tells all robots to stay out of the entire website. You can also block specific directories:

    Disallow: /demo/
    
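    Since the goal here is a single page, you could also disallow just that path. The filename below is a placeholder rather than the asker's real endpoint; note that robots.txt is publicly readable, so listing a path there also advertises that it exists:

    User-agent: *
    Disallow: /secret-data.php
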

    More information about robots.txt
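
    Because some crawlers ignore robots.txt, the .htaccess route mentioned above is the more reliable server-side block. Here is a minimal sketch, assuming the shared host runs Apache with mod_rewrite enabled and allows .htaccess overrides (worth confirming with the hosting provider); the "Nutch" and "spider" patterns are examples based on the user-agent seen in the question and should be adjusted to whatever appears in your logs:

    # Return 403 Forbidden to any request whose User-Agent matches the
    # listed patterns, case-insensitively (requires Apache mod_rewrite).
    RewriteEngine On
    RewriteCond %{HTTP_USER_AGENT} (Nutch|spider) [NC]
    RewriteRule ^ - [F,L]

    Unlike robots.txt this is enforced by the server, but a client can still spoof its User-Agent, so you may want to combine it with the IP logging you already have in place.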
