A question for anyone who knows XHTML

```javascript
function openwindow(){
window.open("newwindow","图片","toolbars=0,scrollbars=0,location=0,
statusbars=0,menubars=0,resizable=0");
}
```

This JavaScript was written the HTML way. How do I write it the XHTML way, especially the window-features part: ("toolbars=0,scrollbars=0,location=0,statusbars=0,menubars=0,resizable=0")?
I'm on VS 2008. Written in HTML style, the script has no effect; I need to write it for XHTML. Please advise!
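(Editor's note: for reference, a sketch of the same call with the feature string kept on one physical line — not the asker's exact code. The standard feature names are singular — toolbar, status, menubar — while scrollbars and resizable stay as they are; the plural forms above, and "newwindow" being passed where a URL is expected, are carried over from the question.)

```javascript
// Sketch only: "page.html" is a placeholder URL, "pic" a placeholder window name.
// The feature string must stay on a single line inside the quotes; JS string
// literals cannot span physical lines.
function openwindow() {
    window.open("page.html", "pic",
        "toolbar=0,scrollbars=0,location=0,status=0,menubar=0,resizable=0");
}
```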

2 answers

JS has nothing to do with HTML vs. XHTML. Did you put it inside a script tag? And your string is broken across two lines — it has to be on a single line.

A detailed write-up on window.open: http://www.cnblogs.com/stswordman/archive/2006/06/02/415853.html


```html
<script>
function openwindow(){
    window.open("newwindow","图片","toolbars=0,scrollbars=0,location=0,statusbars=0,menubars=0,resizable=0");
}
</script>
```
showbo
支付宝加好友偷能量挖 replying to zhangchenhuan123: Some browsers don't support the values set in the third parameter. Chrome, for example, has no status bar or toolbar in popups at all, so setting them does nothing.
replied over 4 years ago
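(Editor's note: as the comment says, Chrome ignores the chrome-toggling features; in practice only size and position are reliably honored. A minimal sketch with assumed URL and numbers:)

```javascript
// Chrome ignores toolbar/menubar/status toggles in the feature string,
// but it does honor window geometry:
window.open("page.html", "pic", "width=500,height=250,left=200,top=100");
```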
zhangchenhuan123
zhangchenhuan123: Hi — my code and its output are the two screenshots below. Even after setting all those properties to 1, the toolbar and the rest still never appear; I can't figure out why.
replied over 4 years ago

[Two screenshots of the code and the popup result were attached here; the images were not preserved.]
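(Editor's note on the original "how do I write this in XHTML" question: the JavaScript itself is identical — what changes is how the script is embedded. In strictly parsed XHTML the safe pattern is a typed script element with a CDATA wrapper, sketched below with assumed markup, not code from the thread:)

```html
<script type="text/javascript">
//<![CDATA[
function openwindow() {
    window.open("page.html", "pic", "width=500,height=250");
}
//]]>
</script>
```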

Other related questions
Can someone tell me where my code goes wrong?
![screenshot](https://img-ask.csdn.net/upload/201912/25/1577235379_120867.png)
It keeps throwing an error at this spot. Below are the relevant aspx and cs code sections:
```
<%@ Page Language="C#" AutoEventWireup="true" CodeFile="sqlcarss.aspx.cs" Inherits="sqlcars.MyClass" %>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" >
<head id="Head1" runat="server">
    <title>Display results of SQL commands on cars db</title>
    <style type="text/css">
        .titles {font-style: italic; font-weight: bold;}
    </style>
</head>
<body>
    <span class="titles">Please enter your command:</span>
    <form id="myForm" runat="server">
        <asp:TextBox ID="command" columns="80" runat="server" />
        <br /><br />
        <asp:Button ID="Button1" Text="Submit command" runat="server" />
        <br /><br />
        <span class="titles">Results of your command:</span>
        <br /><br />
        <asp:Label ID="errors" runat="server" />
        <asp:GridView ID="results" runat="server" />
    </form>
</body>
</html>
```
```
using System;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Data;
using System.Data.Odbc;

namespace sqlcars
{
    public partial class MyClass : System.Web.UI.Page
    {
        const string ConnStr = "Driver={MySQL ODBC 8.0 Unicode Driver};" +
            "Server=localhost;Database=cars;uid=root;pwd=root;options=3";

        protected void Page_Load()
        {
            if (IsPostBack)
            {
                DoCommand(command.Text);
            }
        }

        protected void DoCommand(string command)
        {
            OdbcConnection con = new OdbcConnection(ConnStr);
            OdbcCommand cmd = new OdbcCommand(command, con);
            try
            {
                con.Open();
                OdbcDataReader reader = cmd.ExecuteReader(CommandBehavior.CloseConnection);
                results.DataSource = reader;
                results.DataBind();
            }
            catch (Exception ex)
            {
                errors.Text = ex.Message;
            }
        }
    }
}
```
It's a course assignment and it just won't run — please take a look.
A Baidu Muzhi Doctor crawler — I want to first crawl all the links for a given question, but nothing gets scraped. Can anyone tell me why?
#A few words up front
In this crawler I want to collect every Baidu Muzhi Doctor link related to "咳嗽" (cough); the next step would be to scrape the items fields out of each of those links, but I'm stuck at step one. Please help me take a look. I posted this question once before, but somehow that post is gone now (deleted, maybe...?). Help a beginner out — much appreciated!
This is the structure of my crawler:
![screenshot](https://img-ask.csdn.net/upload/201911/27/1574787999_274479.png)
##ks:
```
# -*- coding: utf-8 -*-
import scrapy
from kesou.items import KesouItem
from scrapy.selector import Selector
from scrapy.spiders import Spider
from scrapy.http import Request, FormRequest
import pymongo

class KsSpider(scrapy.Spider):
    name = 'ks'
    allowed_domains = ['kesou,baidu.com']
    start_urls = ['https://www.baidu.com/s?wd=%E5%92%B3%E5%97%BD&pn=0&oq=%E5%92%B3%E5%97%BD&ct=2097152&ie=utf-8&si=muzhi.baidu.com&rsv_pq=980e0c55000e2402&rsv_t=ed3f0i5yeefxTMskgzim00cCUyVujMRnw0Vs4o1%2Bo%2Bohf9rFXJvk%2FSYX%2B1M']

    def parse(self, response):
        item = KesouItem()
        contents = response.xpath('.//h3[@class="t"]')
        for content in contents:
            url = content.xpath('.//a/@href').extract()[0]
            item['url'] = url
            yield item
        if self.offset < 760:
            self.offset += 10
            yield scrapy.Request(url="https://www.baidu.com/s?wd=%E5%92%B3%E5%97%BD&pn=" + str(self.offset) + "&oq=%E5%92%B3%E5%97%BD&ct=2097152&ie=utf-8&si=muzhi.baidu.com&rsv_pq=980e0c55000e2402&rsv_t=ed3f0i5yeefxTMskgzim00cCUyVujMRnw0Vs4o1%2Bo%2Bohf9rFXJvk%2FSYX%2B1M", callback=self.parse, dont_filter=True)
```
##items:
```
# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/items.html

import scrapy

class KesouItem(scrapy.Item):
    # question ID
    question_ID = scrapy.Field()
    # question text
    question = scrapy.Field()
    # time the doctor's answer was posted
    answer_pubtime = scrapy.Field()
    # question details
    description = scrapy.Field()
    # doctor's name
    doctor_name = scrapy.Field()
    # doctor's title
    doctor_title = scrapy.Field()
    # doctor's hospital
    hospital = scrapy.Field()
```
##middlewares:
```
# -*- coding: utf-8 -*-

# Define here the models for your spider middleware
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/spider-middleware.html

from scrapy import signals


class KesouSpiderMiddleware(object):
    # Not all methods need to be defined. If a method is not defined,
    # scrapy acts as if the spider middleware does not modify the
    # passed objects.

    @classmethod
    def from_crawler(cls, crawler):
        # This method is used by Scrapy to create your spiders.
        s = cls()
        crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
        return s

    def process_spider_input(self, response, spider):
        # Called for each response that goes through the spider
        # middleware and into the spider.
        # Should return None or raise an exception.
        return None

    def process_spider_output(self, response, result, spider):
        # Called with the results returned from the Spider, after
        # it has processed the response.
        # Must return an iterable of Request, dict or Item objects.
        for i in result:
            yield i

    def process_spider_exception(self, response, exception, spider):
        # Called when a spider or process_spider_input() method
        # (from other spider middleware) raises an exception.
        # Should return either None or an iterable of Request, dict
        # or Item objects.
        pass

    def process_start_requests(self, start_requests, spider):
        # Called with the start requests of the spider, and works
        # similarly to the process_spider_output() method, except
        # that it doesn't have a response associated.
        # Must return only requests (not items).
        for r in start_requests:
            yield r

    def spider_opened(self, spider):
        spider.logger.info('Spider opened: %s' % spider.name)


class KesouDownloaderMiddleware(object):
    # Not all methods need to be defined. If a method is not defined,
    # scrapy acts as if the downloader middleware does not modify the
    # passed objects.

    @classmethod
    def from_crawler(cls, crawler):
        # This method is used by Scrapy to create your spiders.
        s = cls()
        crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
        return s

    def process_request(self, request, spider):
        # Called for each request that goes through the downloader
        # middleware.
        # Must either:
        # - return None: continue processing this request
        # - or return a Response object
        # - or return a Request object
        # - or raise IgnoreRequest: process_exception() methods of
        #   installed downloader middleware will be called
        return None

    def process_response(self, request, response, spider):
        # Called with the response returned from the downloader.
        # Must either;
        # - return a Response object
        # - return a Request object
        # - or raise IgnoreRequest
        return response

    def process_exception(self, request, exception, spider):
        # Called when a download handler or a process_request()
        # (from other downloader middleware) raises an exception.
        # Must either:
        # - return None: continue processing this exception
        # - return a Response object: stops process_exception() chain
        # - return a Request object: stops process_exception() chain
        pass

    def spider_opened(self, spider):
        spider.logger.info('Spider opened: %s' % spider.name)
```
##pipelines:
```
# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html

import pymongo
from scrapy.utils.project import get_project_settings

settings = get_project_settings()

class KesouPipeline(object):
    def __init__(self):
        host = settings["MONGODB_HOST"]
        port = settings["MONGODB_PORT"]
        dbname = settings["MONGODB_DBNAME"]
        sheetname = settings["MONGODB_SHEETNAME"]
        # create the MongoDB connection
        client = pymongo.MongoClient(host=host, port=port)
        # select the database
        mydb = client[dbname]
        # collection that stores the data
        self.sheet = mydb[sheetname]

    def process_item(self, item, spider):
        data = dict(item)
        self.sheet.insert(data)
        return item
```
##settings:
```
# -*- coding: utf-8 -*-

# Scrapy settings for kesou project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
# https://docs.scrapy.org/en/latest/topics/settings.html
# https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
# https://docs.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'kesou'

SPIDER_MODULES = ['kesou.spiders']
NEWSPIDER_MODULE = 'kesou.spiders'

# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'kesou (+http://www.yourdomain.com)'

# Obey robots.txt rules
ROBOTSTXT_OBEY = False

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

USER_AGENT="Mozilla/5.0 (Windows NT 10.0; WOW64; rv:67.0) Gecko/20100101 Firefox/67.0"

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
#}

# Enable or disable spider middlewares
# See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'kesou.middlewares.KesouSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'kesou.middlewares.KesouDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See https://docs.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    'kesou.pipelines.KesouPipeline': 300,
}

# MongoDB host
MONGODB_HOST = "127.0.0.1"
# MongoDB port
MONGODB_PORT = 27017
# database name
MONGODB_DBNAME = "ks"
# collection that stores the data
MONGODB_SHEETNAME = "ks_urls"

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
```
##run.py:
```
# -*- coding: utf-8 -*-
from scrapy import cmdline
cmdline.execute("scrapy crawl ks".split())
```
##And this is the output of the run (some lines were truncated in the original paste and are left as-is):
```
PS D:\scrapy_project\kesou> scrapy crawl ks
2019-11-27 00:14:17 [scrapy.utils.log] INFO: Scrapy 1.7.3 started (bot: kesou)
2019-11-27 00:14:17 [scrapy.utils.log] INFO: Versions: lxml 4.3.2.0, libxml2 2.9.9, cssselect 1.1.0, parsel 1.5.2, w3lib 1.21.0, Twis.7.0, Python 3.7.3 (default, Mar 27 2019, 17:13:21) [MSC v.1915 64 bit (AMD64)], pyOpenSSL 19.0.0 (OpenSSL 1.1.1b 26 Feb 2019), cryphy 2.6.1, Platform Windows-10-10.0.18362-SP0
2019-11-27 00:14:17 [scrapy.crawler] INFO: Overridden settings: {'BOT_NAME': 'kesou', 'COOKIES_ENABLED': False, 'NEWSPIDER_MODULE': 'spiders', 'SPIDER_MODULES': ['kesou.spiders'], 'USER_AGENT': 'Mozilla/5.0 (Windows NT 10.0; WOW64; rv:67.0) Gecko/20100101 Firefox/67
2019-11-27 00:14:17 [scrapy.extensions.telnet] INFO: Telnet Password: 051629c46f34abdf
2019-11-27 00:14:17 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.logstats.LogStats']
2019-11-27 00:14:19 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2019-11-27 00:14:19 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2019-11-27 00:14:19 [scrapy.middleware] INFO: Enabled item pipelines:
['kesou.pipelines.KesouPipeline']
2019-11-27 00:14:19 [scrapy.core.engine] INFO: Spider opened
2019-11-27 00:14:19 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2019-11-27 00:14:19 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2019-11-27 00:14:20 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.baidu.com/s?wd=%E5%92%B3%E5%97%BD&pn=0&oq=%E5%92%B3%E5&ct=2097152&ie=utf-8&si=muzhi.baidu.com&rsv_pq=980e0c55000e2402&rsv_t=ed3f0i5yeefxTMskgzim00cCUyVujMRnw0Vs4o1%2Bo%2Bohf9rFXJvk%2FSYX% (referer: None)
2019-11-27 00:14:20 [scrapy.core.scraper] ERROR: Spider error processing <GET https://www.baidu.com/s?wd=%E5%92%B3%E5%97%BD&pn=0&oq=%B3%E5%97%BD&ct=2097152&ie=utf-8&si=muzhi.baidu.com&rsv_pq=980e0c55000e2402&rsv_t=ed3f0i5yeefxTMskgzim00cCUyVujMRnw0Vs4o1%2Bo%2Bohf9rFFSYX%2B1M> (referer: None)
Traceback (most recent call last):
  File "d:\anaconda3\lib\site-packages\scrapy\utils\defer.py", line 102, in iter_errback
    yield next(it)
  File "d:\anaconda3\lib\site-packages\scrapy\core\spidermw.py", line 84, in evaluate_iterable
    for r in iterable:
  File "d:\anaconda3\lib\site-packages\scrapy\spidermiddlewares\offsite.py", line 29, in process_spider_output
    for x in result:
  File "d:\anaconda3\lib\site-packages\scrapy\core\spidermw.py", line 84, in evaluate_iterable
    for r in iterable:
  File "d:\anaconda3\lib\site-packages\scrapy\spidermiddlewares\referer.py", line 339, in <genexpr>
    return (_set_referer(r) for r in result or ())
  File "d:\anaconda3\lib\site-packages\scrapy\core\spidermw.py", line 84, in evaluate_iterable
    for r in iterable:
  File "d:\anaconda3\lib\site-packages\scrapy\spidermiddlewares\urllength.py", line 37, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "d:\anaconda3\lib\site-packages\scrapy\core\spidermw.py", line 84, in evaluate_iterable
    for r in iterable:
  File "d:\anaconda3\lib\site-packages\scrapy\spidermiddlewares\depth.py", line 58, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "D:\scrapy_project\kesou\kesou\spiders\ks.py", line 19, in parse
    item['url'] = url
  File "d:\anaconda3\lib\site-packages\scrapy\item.py", line 73, in __setitem__
    (self.__class__.__name__, key))
KeyError: 'KesouItem does not support field: url'
2019-11-27 00:14:20 [scrapy.core.engine] INFO: Closing spider (finished)
2019-11-27 00:14:20 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 438,
 'downloader/request_count': 1,
 'downloader/request_method_count/GET': 1,
 'downloader/response_bytes': 68368,
 'downloader/response_count': 1,
 'downloader/response_status_count/200': 1,
 'elapsed_time_seconds': 0.992207,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2019, 11, 26, 16, 14, 20, 855804),
 'log_count/DEBUG': 1,
 'log_count/ERROR': 1,
 'log_count/INFO': 10,
 'response_received_count': 1,
 'scheduler/dequeued': 1,
 'scheduler/dequeued/memory': 1,
 'scheduler/enqueued': 1,
 'scheduler/enqueued/memory': 1,
 'spider_exceptions/KeyError': 1,
 'start_time': datetime.datetime(2019, 11, 26, 16, 14, 19, 863597)}
2019-11-27 00:14:21 [scrapy.core.engine] INFO: Spider closed (finished)
```
Help: how JavaScript window.open displays in Chrome
I used JavaScript on the page to set the window properties via window.open. When the new window opens, the toolbar, menu bar, resizable handle and so on are all missing. Please advise.
```
<%@ Page Language="C#" AutoEventWireup="true" CodeFile="Default.aspx.cs" Inherits="_Default" %>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head runat="server">
    <title></title>
    <script type="text/javascript" language="javascript">
        function openwindow() {
            window.open("HTMLPage.htm", "图片窗口", "toolbars=1,scrollbars=1,location=1,statusbars=1,menubars=1,resizable=1,width=500,height=250")
        }
    </script>
    <style type="text/css">
        body{background-image:(image/IMG_2796.JPG);}
    </style>
</head>
<body>
    <form id="form1" runat="server">
        <asp:Button ID="btnshow" runat="server" Text="点击" OnClientClick="openwindow()" />
    </form>
</body>
</html>
```
What the HTMLPage.htm popup looks like:
![screenshot](https://img-ask.csdn.net/upload/201506/06/1433556637_445534.png)
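(Editor's note: beyond Chrome simply not rendering toolbars or menu bars in popups — see the answer in the main thread above — the background-image rule in this snippet is also broken: CSS requires the url() wrapper, so a bare parenthesized path is ignored. A minimal fix:)

```html
<style type="text/css">
    /* a bare parenthesized path is invalid CSS; url() is required */
    body { background-image: url(image/IMG_2796.JPG); }
</style>
```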
How do I convert XHTML to HTML in PyCharm?
How do I convert xhtml to html in PyCharm? Please help — thanks.
API returns the wrong data — the browser opens the page fine, but requests gets the wrong response?
http://www.shtdsc.com/i/shtdsc/jglb?pn=2&ps=10 — this URL opens in a browser, but the data returned through requests is wrong, and the status code is 202.
```
import requests

url = 'http://www.shtdsc.com/i/shtdsc/jglb?pn=2&ps=10'
# url = 'http://www.shtdsc.com/2016/tdjy/dkjs/'
headers = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3',
    'Accept-Encoding': 'gzip, deflate',
    'Accept-Language': 'zh-CN,zh;q=0.9',
    'Cache-Control': 'max-age=0',
    'Connection': 'keep-alive',
    'Cookie': 'FSSBBIl1UgzbN7N80S=v5II3Ox4RUr46cw9mUaKp54cxsPxuL8u_JTfDG9DQV9ySF5uLGO4pH.i4pDquL0D; Hm_lvt_e951494076936a1cbdeaac613f3da44f=1578880532; Hm_lpvt_e951494076936a1cbdeaac613f3da44f=1578884586; FSSBBIl1UgzbN7Nenable=true; FSSBBIl1UgzbN7N80T=4hKEIXlAKI6wgL1WMPAxyS_behWrgdkeGu4fk7sB_7AQWawhqATZMuTis80M9U.tgS_D4Yx8u8jyTPlNcVS60RKKKccAxIN9TY0E9RaejTF9E08fWKl07oiCgOTKpXvOaEnqvClOamVJdLVJXRoyt5wHXjbzBpuzqTzZcUFOScPFqE.RaqPGDpOPszwkJkwfGBIFt1w5m.alQ7WGZkjWyH_Nxc6hTp735HU4Yr8jhrXgXGXHYjzwNPzzwJp37JUWpRsOIPqG6T8JOun5fdRV9YZMil2Z2bSt38.tjvRwWaVGsL.POjI4Mq.RLHOd7EuekG.DwODiBSgsOD3TcENWV6t3uStrN.gj4cHhL1vYBqjO0m22zkYAthgZmyEj0ugmekH9',
    'Host': 'www.shtdsc.com',
    'Upgrade-Insecure-Requests': '1',
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.108 Safari/537.36",
}
resp = requests.get(url=url, headers=headers)
print(resp)
```
Please help — this is extremely urgent!
Beginner question: in my CSS the ID and class selectors don't work but the tag selectors do
Firefox. Code as follows.
HTML fragment:
```
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
    <meta http-equiv="content-type" content="text/html; charset=$_SC[charset]" />
    <meta http-equiv="x-ua-compatible" content="ie=7" />
    <style type="text/css">
        @import url(template/default/style.css);
    </style>
</head>
<body>
<div id="header">
    <div class="topwrap">
        <hgroup class="hwrap">
```
That's the fragment — there's too much code to post all of it, and I've checked the div nesting; it's fine.
style.css fragment:
```
* { word-wrap: break-word; word-break: break-all; }
h1, h2, h3, h4, h5, h6 { font-size: 1em; }
#header, #header-sp-logo {
    background: url("http://photo.idate.163.com/static/cdn20130722120200/v2/images/pagebg-white.png") repeat scroll 0 0 #FFFFFF;
}
```
With the stylesheet linked externally, editing an element inside a tag selector changes the page, but editing the ID selector has no effect. What is going on? Please advise!
The same crawler question again — please advise; you can take the C-coins from my previous question along with this one
http://epub.sipo.gov.cn/flzt.jsp — pick any search conditions and fill in an announcement date of 2019. I POST the form below directly, but the response status is 202. How can I get this page printed out? Please advise — working code appreciated, and I'll add more C-coins.
![screenshot](https://img-ask.csdn.net/upload/201912/27/1577438354_916883.jpg)
```
import requests

url = 'http://epub.sipo.gov.cn/overTran.action'
headers = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3',
    'Accept-Encoding': 'gzip, deflate',
    'Accept-Language': 'zh-CN,zh;q=0.9,en;q=0.8',
    'Cache-Control': 'max-age=0',
    'Connection': 'keep-alive',
    'Content-Length': '183',
    'Content-Type': 'application/x-www-form-urlencoded',
    'Cookie': 'wIlwQR28aVgb80S=Lvo.g17PODuZgSBwRw5l_DbhAy2KLizhec2.qPccW7ZlsQGwXuZw4Wb5hOVq5oi8; WEB=20111132; JSESSIONID=FBCDD5153E797C518ED843E3AD1FB331; _gscu_884396235=77173060h9kwt732; _gscbrs_884396235=1; Hm_lvt_06635991e58cd892f536626ef17b3348=1577173065; Hm_lpvt_06635991e58cd892f536626ef17b3348=1577173065; _gscu_7281245=77173064qyzfmc15; _gscbrs_7281245=1; _gscs_7281245=7717306451fjlb15|pv:1; _gscs_884396235=77173060bpjon232|pv:3; wIlwQR28aVgb80T=4uHABazj.0t59Nq6rlCEGno19R_ZV0hQRyKhvNWAOrF48jAvrmpf9HW3lAO8BJGZ6XYZMEPfNUEiGv5qukwGzGvYHOBbXhvfIm6uWdcfupBcuyrmb0lubppaA2QciDK7GQHlwFO2OA8CPAjjVMNlb9vNguNiRhq2MfQC7FkGZT9CkU_yFz8uODRSS5Nr6rgQFGILh073HC18orKQQdnNdpkG7xipEjE1wz_VJb9FNRE6gwtG8ShAIz5sVNWQKSpK6cdrIUAbRWQGZZ84rE_JUFpnly61EJK2KE0duzqw7vQFTAH.jS6_Sx.oqxYhJnvnjPG9T86if_4Becmw.UgqaANEb',
    'Host': 'epub.sipo.gov.cn',
    'Origin': 'http://epub.sipo.gov.cn',
    'Referer': 'http://epub.sipo.gov.cn/flzt.jsp',
    'Upgrade-Insecure-Requests': '1',
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.132 Safari/537.36'
}
data = {
    'strWord': '法律状态公告日="2019"',
    'numType': 18,
    'numSortMethod': 4,
    'strLicenseCode': '',
    'selected': '',
    'numFM': 0,
    'numXX': 0,
    'numWG': 0,
    'pageSize': 10,
    'pageNow': 1
}
rep = requests.post(url, json=data, headers=headers)
```
SpringBoot + Thymeleaf can't open the page (505: "An error happened during template parsing") — how do I fix it?
The error log:
```
2019-12-26 15:45:08.541 ERROR 13316 --- [nio-8080-exec-9] org.thymeleaf.TemplateEngine : [THYMELEAF][http-nio-8080-exec-9] Exception processing template "admin_index": An error happened during template parsing (template: "class path resource [templates/admin_index.html]")
org.thymeleaf.exceptions.TemplateInputException: An error happened during template parsing (template: "class path resource [templates/admin_index.html]")
```
The controller class:
```
@Controller
@RequestMapping("/admin")
public class LoginController {

    @Autowired
    private UserService userService;

    @GetMapping
    public String loginPage() {
        return "admin/login";
    }

    @PostMapping("/login")
    public String login(@RequestParam String username,
                        @RequestParam String password,
                        HttpSession session,
                        RedirectAttributes attributes) {
        User user = userService.checkUser(username, password);
        if (username == "admin" && password == "111111") {
            return "admin_index";
        } else {
            attributes.addFlashAttribute("message", "用户名和密码错误");
            return "admin_index";
        }
    }
    // to make page navigation easy, jump to admin_index whether the credentials are right or wrong
}
```
The head of admin_index.html:
```
<!DOCTYPE html>
<html lang="en" xmlns="http://www.w3.org/1999/xhtml" xmlns:th="http://www.thymeleaf.org">
<head th:replace="admin/_fragments :: head(~{::title})">
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>博客管理</title>
    <link rel="stylesheet" href="https://cdn.jsdelivr.net/semantic-ui/2.2.4/semantic.min.css">
    <link rel="stylesheet" href="../../static/css/me.css">
</head>
<body>
```
Whatever username and password I enter, right or wrong, it goes straight to a 500. I've tried every fix I could find on CSDN and none of them worked.
With Thymeleaf, th:text doesn't replace the tag's value, yet JS can read it — why?
I'm writing an interceptor that sends the user back to the login page when the account is logged in elsewhere or frozen. I wrote a controller that returns the login page and wanted it to also show the reason the account was logged out, but the attribute passed from the controller never shows on the page, and I can't figure out why.
My controller:
```java
@Controller
public class RedirectController {
    @RequestMapping("/backToLogin")
    public ModelAndView backToLogin() {
        ModelAndView modelAndView = new ModelAndView();
        modelAndView.setViewName("backToLogin");
        modelAndView.addObject("msg", "后端传值123");
        return modelAndView;
    }
}
```
My template page:
```html
<!DOCTYPE html SYSTEM "http://www.thymeleaf.org/dtd/xhtml1-strict-thymeleaf-4.dtd">
<html lang="en" xmlns="http://www.w3.org/1999/xhtml" xmlns:th="http://www.thymeleaf.org">
<head>
    <meta charset="UTF-8">
    <title>重新登录</title>
    <script>
        window.onload = function () {
            //alert("账号已退出,请重新登录!");
            alert("${msg}");
            //window.location.href="../dist/index.html";
        }
    </script>
</head>
<body>
<p th:text="${msg}">P标签默认内容</p>
<input value="input默认" th:value="${msg}" />
</body>
</html>
```
After starting the project and requesting /backToLogin, the page shows:
![screenshot](https://img-ask.csdn.net/upload/201912/02/1575265487_297084.png)
![screenshot](https://img-ask.csdn.net/upload/201912/02/1575265495_39937.png)
Very strange — what did I write wrong? The official examples look just like this... The alert in my JS does receive the msg passed from the backend, but the default values inside the tags are never replaced — why? I did set xmlns on the html tag. An earlier small project written exactly this way had no problem... Please take a look — thanks!
The spring-thymeleaf part of my config file:
```
spring:
  # environment dev|test|prod
  profiles:
    active: dev
  servlet:
    multipart:
      max-file-size: 300MB
      max-request-size: 1000MB
      enabled: true
  jmx:
    enabled: false
  thymeleaf:
    suffix: .html
    mode: HTML5
    encoding: UTF-8
    cache: false
    prefix: classpath:/templates/
  mvc:
    static-path-pattern: /**
  resources:
    chain:
      strategy:
        content:
          enabled: true
          paths: /**
  freemarker:
    suffix: .html
    request-context-attribute: request
```
Found the cause. I tried it today without passing the parameter, and the error reported was a freemarker error... It turns out the project still had the freemarker dependency copied over from an earlier project, and its suffix in the config was also .html, so freemarker was handling the page and thymeleaf never ran... Changing the freemarker suffix to .ftl fixed it!!! The dependency and config had been carried over from the previous project and I hadn't looked closely... my oversight...
How do I add a background image to an asp.net login page?
How do I add a background image to a login page built with asp.net? The source is below — what do I change?
```
<%@ Page Language="C#" AutoEventWireup="true" CodeFile="index.aspx.cs" Inherits="_Default" %>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" >
<head runat="server">
    <title>游戏库</title>
</head>
<body>
    <form id="form1" runat="server">
    <div>
        用户名:<asp:TextBox ID="TextBox1" runat="server"></asp:TextBox><br />
        <br />
        密&nbsp;&nbsp;码:<asp:TextBox ID="TextBox2" runat="server" TextMode="Password"></asp:TextBox><br />
        <br />
        <asp:Button ID="Button1" runat="server" Text="登录" OnClick="Button1_Click" />
        <asp:Button ID="Button2" runat="server" Text="注册" OnClick="Button2_Click" style="height: 21px" />
    </div>
    </form>
</body>
</html>
```
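(Editor's note: one minimal way to do this — not from the thread, and the image path is an assumption — is a style block in the head that sets the page background:)

```html
<head runat="server">
    <title>游戏库</title>
    <style type="text/css">
        /* hypothetical path -- point it at a real image in the project */
        body {
            background-image: url(images/login-bg.jpg);
            background-repeat: no-repeat;
            background-size: cover; /* CSS3; scales the image to fill the page in modern browsers */
        }
    </style>
</head>
```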
After switching to the webview in Appium, I can't get the real HTML page — how should I handle this?
**Problem**
I'm using Appium to test my company's hybrid app. After switching to the webview with driver.context("WebView") and calling getPageSource(), it returns:
```
<html xmlns="http://www.w3.org/1999/xhtml"><head></head><body><iframe name="chromedriver dummy frame" src="about:blank"></iframe></body></html>
```
**Environment:**
```
appium: 1.15.1
OS: Windows 10
API: java
Android: 8.1
```
**chrome inspect**
![chrome inspect screenshot](https://i.stack.imgur.com/X9L6U.png)
![screenshot](https://i.stack.imgur.com/V8Ldq.png)
**Capability setup**
```
capabilities.setCapability("platformName", "Android");
capabilities.setCapability("deviceName", "b307aa10");
capabilities.setCapability("automationName", "appium");
capabilities.setCapability("platformVersion", "8.1.0");
capabilities.setCapability("appPackage", "com.dayizhihui.dayishi.hpv");
capabilities.setCapability("appActivity", ".main.view.WelcomeActivity");
capabilities.setCapability("noReset", "true");
Map<String, Object> chromeOptions = new HashMap<String, Object>();
chromeOptions.put("androidPackage", "com.android.chrome");
capabilities.setCapability(ChromeOptions.CAPABILITY, chromeOptions);
```
**Test code**
```
driver.manage().timeouts().implicitlyWait(5, TimeUnit.SECONDS);
introducePageHandle.clickIntroduceIcon();
System.out.println("Before " + driver.getContext());
System.out.println("All Contexts " + driver.getContextHandles());
driver.context("WEBVIEW_com.dayizhihui.dayishi.hpv");
System.out.println("After " + driver.getContext());
System.out.println("PageSource " + driver.getPageSource());
```
**Output**
```
Before NATIVE_APP
All Contexts [NATIVE_APP, WEBVIEW_com.dayizhihui.dayishi.hpv, WEBVIEW_chrome]
After WEBVIEW_com.dayizhihui.dayishi.hpv
PageSource <html xmlns="http://www.w3.org/1999/xhtml"><head></head><body><iframe name="chromedriver dummy frame" src="about:blank"></iframe></body></html>
```
How do I use jQuery to dynamically add different ids to several identical tags? My code is below — please help. (A corrected sketch follows the listing.)
```
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<title>无标题文档</title>
<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.8.3/jquery.min.js" type="text/javascript"></script>
<script>
var title = $("p.title-season"); // get all p elements with class title-season
var season = $("div.season");    // get all div elements with class season
for (i = 0; i < season.length; i++) {
    if (i == title.length) {
        //$(season[i]).attr("id","cnContent");
        season.length(i).setAttribute("id", "cnContent"); // add id for the Mandarin-dub episodes
        $("#cnContent").css("display", "none");
    }
    if (i < season.length && i != 0) {
        //$(season[i]).attr("id","jpContent");
        season.length(i).setAttribute("id", "jpContent"); // add id for the Japanese-dub episodes
        $("#jpContent").css("display", "none");
    }
}
for (i = 0; i < title.length; i++) {
    if (i == title.length) {
        //$(title[i]).attr("id","cn");
        title.length(i).setAttribute("id", "cn"); // add id for the Mandarin-dub title
        $("#cn").css("cursor", "pointer");
        $("#cn").live('click', function () {
            $("#cnContent").toggle();
        });
    }
    if (i < title.length && i != 0) {
        //$(title[i]).attr("id","jp");
        title.length(i).setAttribute("id", "jp"); // add id for the Japanese-dub title
        $("#jp").css("cursor", "pointer");
        $("#jp").live('click', function () {
            $("#jpContent").toggle();
        });
    }
}
</script>
</head>
<body>
<p class="title-season">日配HD版</p>
<div class="season">
    <a class="btn btn-ep active primary" href="" data-aid="893545" data-vid="765811">01-02话</a>
    <a class="btn btn-ep" href="" data-aid="894374" data-vid="766560">03-04话</a>
    <a class="btn btn-ep" href="" data-aid="894374" data-vid="779541">05-06话</a>
    <a class="btn btn-ep" href="" data-aid="905694" data-vid="781164">07-08话</a>
    <span class="clearfix"></span>
</div>
<p class="title-season">日配TV版</p>
<div class="season">
    <a class="btn btn-ep" href="" data-aid="294630" data-vid="771968">第1话</a>
    <a class="btn btn-ep" href="" data-aid="294630" data-vid="771969">第2话</a>
    <a class="btn btn-ep" href="" data-aid="294630" data-vid="771971">第3话</a>
    <a class="btn btn-ep" href="" data-aid="294630" data-vid="771972">第4话</a>
    <a class="btn btn-ep" href="" data-aid="294630" data-vid="771973">第5话</a>
    <span class="clearfix"></span>
</div>
<p class="title-season">国语配音版</p>
<div class="season">
    <a class="btn btn-ep" href="" data-aid="294630" data-vid="771968">第1话</a>
    <a class="btn btn-ep" href="" data-aid="294630" data-vid="771969">第2话</a>
    <a class="btn btn-ep" href="" data-aid="294630" data-vid="771971">第3话</a>
    <a class="btn btn-ep" href="" data-aid="294630" data-vid="771972">第4话</a>
    <a class="btn btn-ep" href="" data-aid="294630" data-vid="771973">第5话</a>
    <span class="clearfix"></span>
</div>
</body>
</html>
```
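(Editor's note: a sketch under assumptions — the "season-N" ids are invented, not from the question. The visible problems: `.length` is a property, so `season.length(i)` throws; ids must be unique, so assigning "jpContent" to several divs is invalid; and the script sits in the head, running before the p/div elements exist, so it needs a DOM-ready wrapper:)

```html
<script>
// Sketch only: give each episode block a unique, generated id on DOM ready,
// then make each title toggle the block with the matching index.
$(function () {
    $("div.season").each(function (i) {
        $(this).attr("id", "season-" + i);      // season-0, season-1, season-2
    });
    $("p.title-season").each(function (i) {
        $(this).css("cursor", "pointer").on("click", function () {
            $("#season-" + i).toggle();         // show/hide the matching list
        });
    });
});
</script>
```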
iOS: extracting text from an XHTML file via document.body.innerText ...
Below is what I get from the file. I've tried several other parsing approaches and these escape codes are still there, and their positions change every run; as far as I can tell they look like octal byte values. When I run it in the simulator, the parsed text comes out clean.
“党支部是党在社会基层组织中的战\346斗堡垒,不是经济组织和行政组织,也不是一般的社会组织,而是政治组织。党支部之所以是政治组织,具体有四个方面的内涵:一是担负政治任务。就是贴近群众、团结群众、引导群众、赢得***是由党的中央组织、地方组织和基层组织组成的,党支部是最基层的组织。在整个党的组织架构中,作为党的基层组织的党支部,占有特殊的重要位置。”
And this: “ 进一步明确党支部的性质、地位和作用,充分发挥党支部的战斗堡垒作用,对于全面贯彻党的***精神是党联系群众的桥梁和纽带,是从政治上、思想上团结、凝聚广大群众的核心,是贯彻落实党的路线方针政策的决定性力量,也是对党员进行教育管理的最基本单位。加强党的自身建设,加强党\345\221员队伍的教育和管理,维护党的纪律, 、、、、、、 ”
Page URL: http://resource.gbxx123.com/book/epubs/2016/4/29/1461927260070/ops/chapter_00008.xhtml
Desperately hoping someone can solve this.
Why won't this code, written in a TXT file and saved as an html file, run?
```
<html xmlns="http://www.w3.org/1991/xhtml">
<head>
    <meta http-equiv="Content-type" content="text/html;charset=utf-8"/>
    <title>链接<title>
</head>
<body>
    <a href="http://www.baidu.com" target="_blank">百度</a>
</body>
</html>
```
Why does nothing display when I open it?
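(Editor's note: two visible slips explain the symptom. The title element is never closed — `<title>链接<title>` instead of `</title>` — so the browser swallows the rest of the file as title text and the page renders blank; the xmlns year should also be 1999, not 1991. A corrected version:)

```html
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
    <meta http-equiv="Content-type" content="text/html;charset=utf-8" />
    <!-- the closing tag needs the slash: </title>, not <title> -->
    <title>链接</title>
</head>
<body>
    <a href="http://www.baidu.com" target="_blank">百度</a>
</body>
</html>
```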
How do I center an asp.net login page?
How do I center a login page built with asp.net? The source is below — what do I change?
```
<%@ Page Language="C#" AutoEventWireup="true" CodeFile="index.aspx.cs" Inherits="_Default" %>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" >
<head runat="server">
    <title>游戏库</title>
</head>
<body>
    <form id="form1" runat="server">
    <div>
        用户名:<asp:TextBox ID="TextBox1" runat="server"></asp:TextBox><br />
        <br />
        密&nbsp;&nbsp;码:<asp:TextBox ID="TextBox2" runat="server" TextMode="Password"></asp:TextBox><br />
        <br />
        <asp:Button ID="Button1" runat="server" Text="登录" OnClick="Button1_Click" />
        <asp:Button ID="Button2" runat="server" Text="注册" OnClick="Button2_Click" style="height: 21px" />
    </div>
    </form>
</body>
</html>
```
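(Editor's note: a common approach — the class name and sizes below are assumptions, not from the thread — is to give the login block a fixed width and auto horizontal margins:)

```html
<style type="text/css">
    /* centers a fixed-width box horizontally; the top margin is arbitrary */
    .login-box {
        width: 320px;
        margin: 120px auto;
    }
</style>
<!-- then wrap the controls inside the form: <div class="login-box"> ... </div> -->
```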
Linux-ARM connected to an IP camera: wrapping the HTTP protocol in raw TCP
1. Background
A stripped-down ARM board is wired directly to an IP camera over Ethernet. The vendor only provides a CGI API, no other interface, and the board's limited resources rule out installing an HTTP service or curl. The current plan is to wrap HTTP over raw TCP for control.
I captured the browser-to-camera traffic with Wireshark; the packet structure looks like this:
```
GET /cgi-bin/images_cgi?channel=0&user=admin&pwd=admin HTTP/1.1
Accept: text/html, application/xhtml+xml, image/jxr, */*
Accept-Language: zh-CN
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; Trident/7.0; rv:11.0) like Gecko
Accept-Encoding: gzip, deflate
Host: 192.168.14.18
DNT: 1
Connection: Keep-Alive
```
The problem: I removed the cookie, and sending the rest verbatim to the camera from a socket debugging tool does get a response:
![screenshot](https://img-ask.csdn.net/upload/201912/11/1576047462_916583.png)
My TCP client then sends in the same format, but it never gets a reply from the camera. Which side is the problem on?
TCP client code:
```
static protocol_err_t protocol_CameraCommand(PROTOCOL *this)
{
    unsigned short port = 80;             // server port
    char *server_ip = "192.168.14.18";    // server IP address

    int sockfd = socket(AF_INET, SOCK_STREAM, 0); // create a TCP socket
    if (sockfd < 0) {
        protocol_SendString(this, "socket failed\n");
        return PROTOCOL_ERR_FAIL;
    }

    struct sockaddr_in server_addr;       // server address struct
    bzero(&server_addr, sizeof(server_addr));
    server_addr.sin_family = AF_INET;
    server_addr.sin_port = htons(port);
    inet_pton(AF_INET, server_ip, &server_addr.sin_addr);

    int err_log = connect(sockfd, (struct sockaddr *)&server_addr, sizeof(server_addr)); // connect to the server
    if (err_log != 0) {
        sprintf(this->sendBuffer, "connect error: %s(errno: %d)\n", strerror(errno), errno);
        protocol_SendString(this, this->sendBuffer);
        close(sockfd);
        return PROTOCOL_ERR_FAIL;
    }

    char str1[1024] = "";
    sprintf(str1, "%s\r\n", "GET /cgi-bin/date_cgi?action=get&user=admin&pwd=admin HTTP/1.0"); // CGI path on the server, with parameters
    sprintf(str1, "%s%s\r\n", str1, "Accept: text/html, application/xhtml+xml, image/jxr, */*");
    sprintf(str1, "%s%s\r\n", str1, "Accept-Language: zh-CN");
    sprintf(str1, "%s%s\r\n", str1, "User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; Trident/7.0; rv:11.0) like Gecko");
    sprintf(str1, "%s%s\r\n", str1, "Accept-Encoding: gzip, deflate");
    sprintf(str1, "%s%s\r\n", str1, "Host: 192.168.14.18"); // server address
    sprintf(str1, "%s%s\r\n", str1, "Connection: Keep-Alive");
    //sprintf(str1, "%s%s\r\n", str1, "Cookie: JSESSIONID=5386A9443729D7EB0B61E38A9C7CF52F");
    sprintf(str1, "%s\r\n", str1);

    protocol_SendString(this, "----------------------------- HTTP Data ----------------------------------\n\n");
    sprintf(this->sendBuffer, "%s", str1);
    protocol_SendString(this, this->sendBuffer);
    sprintf(this->sendBuffer, "--------------------------- Data Len=%d ----------------------------------\n\n", strlen(str1));
    protocol_SendString(this, this->sendBuffer);

    int ret = send(sockfd, str1, strlen(str1), 0); // send the request to the server
    if (ret < 0) {
        protocol_SendString(this, "send");
        close(sockfd);
        return PROTOCOL_ERR_FAIL;
    }

    protocol_SendString(this, "------------------------ server retrun data -------------------------------\n");
    char recv_buf[1024 * 10240] = "";
    recv(sockfd, recv_buf, sizeof(recv_buf), 0);
    sprintf(this->sendBuffer, "%s\n\n", recv_buf);
    protocol_SendString(this, this->sendBuffer);

    close(sockfd);
    return PROTOCOL_ERR_NONE;
}
```
In PyCharm, how do I write the code so it becomes normal HTML that a URL can load?
```
<!DOCTYPE HTML PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
```
Please advise what I should change.
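(Editor's note: the question is hard to pin down, but for comparison, here is a minimal complete XHTML 1.0 Transitional skeleton that browsers render normally — the title and body content are placeholders:)

```html
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
    "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
    <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
    <title>Example</title>
</head>
<body>
    <p>Hello</p>
</body>
</html>
```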
ARM board as TCP client can't connect to an IP camera — error code 111
1. Background
An ARM board needs to control an IP camera over a direct Ethernet connection. The vendor only provides a CGI interface, and since the system is stripped down, curl and the like are unavailable, so I'm wrapping HTTP directly over TCP.
2. Symptom
Using a TCP tool with the PC as the client, the camera replies with a valid response as soon as I send. But the ARM board cannot connect to either the TCP test tool or the camera — it fails with error 111.
Client code:
```
static protocol_err_t protocol_CameraCommand(PROTOCOL *this)
{
    unsigned short port = 8000;           // server port
    char *server_ip = "192.168.13.11";    // server IP address

    int sockfd = socket(AF_INET, SOCK_STREAM, 0); // create a TCP socket
    if (sockfd < 0) {
        protocol_SendString(this, "socket failed\n");
        return PROTOCOL_ERR_FAIL;
    }

    struct sockaddr_in server_addr;       // server address struct
    bzero(&server_addr, sizeof(server_addr));
    server_addr.sin_family = AF_INET;
    server_addr.sin_port = htons(port);
    inet_pton(AF_INET, server_ip, &server_addr.sin_addr);

    int err_log = connect(sockfd, (struct sockaddr *)&server_addr, sizeof(server_addr)); // connect to the server
    if (err_log != 0) {
        sprintf(this->sendBuffer, "connect error: %s(errno: %d)\n", strerror(errno), errno);
        protocol_SendString(this, this->sendBuffer);
        close(sockfd);
        return PROTOCOL_ERR_FAIL;
    }

    char str1[1024] = "";
    sprintf(str1, "%s\r\n", "GET /cgi-bin/date_cgi?action=get&user=admin&pwd=admin HTTP/1.1"); // CGI path on the server, with parameters
    sprintf(str1, "%s%s\r\n", str1, "Accept: text/html, application/xhtml+xml, image/jxr, */*");
    sprintf(str1, "%s%s\r\n", str1, "Accept-Language: zh-CN");
    sprintf(str1, "%s%s\r\n", str1, "User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; Trident/7.0; rv:11.0) like Gecko");
    sprintf(str1, "%s%s\r\n", str1, "Accept-Encoding: gzip, deflate");
    sprintf(str1, "%s%s\r\n", str1, "Host: 192.168.13.11"); // server address
    sprintf(str1, "%s%s\r\n", str1, "Connection: Keep-Alive");
    //sprintf(str1, "%s%s\r\n", str1, "Cookie: JSESSIONID=5386A9443729D7EB0B61E38A9C7CF52F");
    sprintf(str1, "%s\r\n", str1);

    protocol_SendString(this, "----------------------------- HTTP Data ----------------------------------\n");
    sprintf(this->sendBuffer, "%s", str1);
    protocol_SendString(this, this->sendBuffer);
    sprintf(this->sendBuffer, "--------------------------- Data Len=%d ----------------------------------\n\n", strlen(str1));
    protocol_SendString(this, this->sendBuffer);

    int ret = send(sockfd, str1, strlen(str1), 0); // send the request to the server
    if (ret < 0) {
        protocol_SendString(this, "send");
        close(sockfd);
        return PROTOCOL_ERR_FAIL;
    }

    char recv_buf[521] = "";
    recv(sockfd, recv_buf, sizeof(recv_buf), 0);
    protocol_SendString(this, "------------------------ server retrun data -------------------------------\n");
    sprintf(this->sendBuffer, "%s\n\n", recv_buf);
    protocol_SendString(this, this->sendBuffer);

    close(sockfd);
    return PROTOCOL_ERR_NONE;
}
```
xhtml1-transitional.dtd error when the server has no internet access
Our software package is deployed on a customer's server that is not allowed to reach the internet, so clicking a product page always throws:
```
ERROR [stderr] (http-/10.10.12.52:8080-1) [Fatal Error] xhtml1-transitional.dtd:1:3: The markup declarations contained or pointed to by the document type declaration must be well-formed
```
Following fixes found online, we downloaded xhtml1-transitional.dtd and changed the two HTML files whose declarations reference it so that it loads from an absolute path on the server instead of from the web — but the error persists. We then simply deleted those two HTML files, so by rights the error should be gone, yet it still appears... We searched all files for uses of xhtml1-transitional.dtd with UE's multi-file search — could that really miss something? Has anyone run into this? Is there another way to find the remaining files that reference xhtml1-transitional.dtd?