I have a script that grabs content from third-party sites. When a URL is not found, the site responds with a 302 redirect (a `Location` header) to a custom not-found page instead of returning a 404. The script also caches the content returned by `curl_exec()`, but I don't want to cache those error pages. Is there a way to log the redirects even with `CURLOPT_FOLLOWLOCATION` turned on? How can I solve this? I know I could search for the error message with a DOM parser and discard the page if it's found, but I'd like to know whether there are other ways to accomplish this.
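One way to keep `CURLOPT_FOLLOWLOCATION` on and still detect the redirect is to inspect `curl_getinfo()` after the transfer: `redirect_count` tells you whether cURL followed any `Location` header, and `url` (the effective URL) tells you where it ended up. A minimal sketch under that assumption — `fetchAndMaybeCache()`, `shouldCache()`, and the `$cache` callback are hypothetical names, not part of the original script:

```php
<?php
// Decide whether a response is worth caching: no redirects were
// followed and the final status code is 200.
function shouldCache(array $info): bool
{
    return $info['redirect_count'] === 0 && $info['http_code'] === 200;
}

// Fetch a URL with redirects enabled, but only hand the body to the
// cache callback when no redirect occurred.
function fetchAndMaybeCache(string $url, callable $cache)
{
    $ch = curl_init($url);
    curl_setopt_array($ch, [
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_FOLLOWLOCATION => true,
        CURLOPT_MAXREDIRS      => 5,   // avoid redirect loops
    ]);
    $body = curl_exec($ch);
    $info = curl_getinfo($ch);         // redirect_count, http_code, url, ...
    curl_close($ch);

    if ($body !== false && shouldCache($info)) {
        $cache($url, $body);
    } elseif ($info['redirect_count'] > 0) {
        error_log("redirected to {$info['url']}, not caching");
    }
    return $body;
}
```

This avoids parsing the page body at all: the redirect itself is the signal that the content is an error page.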
3 answers
- dpoxk64080 · 2013-05-17 12:50
I ended up disabling `CURLOPT_FOLLOWLOCATION`, so I just catch the 302 status code and skip caching when it's present. I thought there would be a way to catch all status codes before cURL redirects.
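A minimal sketch of this approach, assuming PHP cURL: with `CURLOPT_FOLLOWLOCATION` off, the first response comes back as-is, so any 3xx status code means "error page, don't cache". The helper names `fetchNoRedirect()` and `isRedirect()` are hypothetical:

```php
<?php
// Classify any 3xx status code as a redirect.
function isRedirect(int $code): bool
{
    return $code >= 300 && $code < 400;
}

// Fetch without following redirects and report the status code plus the
// Location target cURL would have followed.
function fetchNoRedirect(string $url): array
{
    $ch = curl_init($url);
    curl_setopt_array($ch, [
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_FOLLOWLOCATION => false,
    ]);
    $body     = curl_exec($ch);
    $code     = curl_getinfo($ch, CURLINFO_HTTP_CODE);
    $location = curl_getinfo($ch, CURLINFO_REDIRECT_URL); // '' if none
    curl_close($ch);
    return ['code' => $code, 'body' => $body, 'location' => $location];
}
```

The caller can then log `location` and only write to the cache when `isRedirect($result['code'])` is false, which matches what this answer describes.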
This answer was accepted as the best answer by the asker.