JS: converting a hex string to an ArrayBuffer

How can I convert a hexadecimal string to an ArrayBuffer in JavaScript?

1 answer

Example:

```
var hex = 'AA5504B10000B5'

// Split the hex string into byte pairs, parse each pair as base 16,
// and build a Uint8Array from the resulting byte values.
var typedArray = new Uint8Array(hex.match(/[\da-f]{2}/gi).map(function (h) {
  return parseInt(h, 16)
}))

// The ArrayBuffer backing the typed array.
var buffer = typedArray.buffer
```
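Going the other direction is symmetric. As a quick check of the round trip, here is a small sketch (not part of the original answer; bufferToHex is a name introduced here) that converts an ArrayBuffer back to a hex string:

```
// Convert an ArrayBuffer back to an uppercase hex string.
function bufferToHex(buf) {
  return Array.prototype.map.call(new Uint8Array(buf), function (b) {
    return ('0' + b.toString(16)).slice(-2); // zero-pad each byte to two digits
  }).join('').toUpperCase();
}

console.log(bufferToHex(buffer)); // "AA5504B10000B5"
```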
Other related questions
How do I convert an ArrayBuffer to a string in a WeChat mini program?
I tried an approach myself: it runs under the devtools debugger, but then it reports the type as undefined........ really frustrating. The data is as follows:

```
Uint8Array(85) [123, 34, 100, 97, 116, 97, 84, 121, 112, 101, 34, 58, 34, 115, 101, 115, 115, 105, 111, 110, 95, 100, 97, 116, 97, 95, 116, 121, 112, 101, 95, 116, 101, 115, 116, 34, 44, 34, 101, 114, 114, 67, 111, 100, 101, 34, 58, 49, 44, 34, 100, 97, 116, 97, 34, 58, 123, 34, 110, 97, 109, 101, 34, 58, 34, 229, 188, 160, 228, 184, 137, 34, 44, 34, 97, 103, 101, 34, 58, 34, 50, 48, 34, 125, 125]

let arrayBuffer = res.data;
if (arrayBuffer instanceof ArrayBuffer) {
  let unit8Arr = new Uint8Array(arrayBuffer);
  console.log(unit8Arr);
  // This one works, but Chinese characters come out garbled:
  console.log(String.fromCharCode.apply(null, unit8Arr));
  // This one also works in the debugger, but in the WeChat preview it
  // fails with "TextDecoder undefined":
  var decodeStr = new TextDecoder("utf-8").decode(unit8Arr);
  console.log(decodeStr);
}
```

I also thought about converting to a Blob and parsing that. It works in the debugger too, but at runtime WeChat throws "Blob undefined". So many pitfalls. Could someone offer some advice? Thanks in advance.
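Not from the original thread, but since both TextDecoder and Blob are missing in the WeChat runtime, one common workaround is to decode the UTF-8 bytes by hand. A minimal sketch, assuming well-formed UTF-8 input (utf8ArrayToString is a name introduced here):

```
// Decode a Uint8Array of UTF-8 bytes into a string without TextDecoder.
function utf8ArrayToString(bytes) {
  var out = '';
  for (var i = 0; i < bytes.length; ) {
    var b = bytes[i++];
    if (b < 0x80) {            // 1-byte sequence (ASCII)
      out += String.fromCharCode(b);
    } else if (b < 0xe0) {     // 2-byte sequence
      out += String.fromCharCode(((b & 0x1f) << 6) | (bytes[i++] & 0x3f));
    } else if (b < 0xf0) {     // 3-byte sequence (covers CJK, e.g. 229,188,160 -> 张)
      out += String.fromCharCode(
        ((b & 0x0f) << 12) | ((bytes[i++] & 0x3f) << 6) | (bytes[i++] & 0x3f));
    } else {                   // 4-byte sequence -> surrogate pair
      var cp = ((b & 0x07) << 18) | ((bytes[i++] & 0x3f) << 12) |
               ((bytes[i++] & 0x3f) << 6) | (bytes[i++] & 0x3f);
      cp -= 0x10000;
      out += String.fromCharCode(0xd800 + (cp >> 10), 0xdc00 + (cp & 0x3ff));
    }
  }
  return out;
}
```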
When fetching the HTML of a GBK-encoded site and reading the response as an arrayBuffer, how do I convert the garbled Chinese correctly?
```
fetch(url1).then(function(response) {
  return response.arrayBuffer();
}).then(function(buffer) {
  try {
    let r = iconvLite.decode(buffer, 'gbk');
    console.log(r)
  } catch(e) {
    console.log(e)
  }
});
```
![图片说明](https://img-ask.csdn.net/upload/201901/12/1547303631_116418.png)

As shown above: the first line I print is the encoding from fetch's default request headers; I tried changing it to gbk, with no effect. The second line is the buffer passed to iconvLite.decode(buffer, 'gbk'); it is ArrayBuffer-typed data. The third line is the converted result, and it is empty......

It feels like I am just one step away from success, but I don't know how to continue. Please advise.
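Not from the original thread, but a likely culprit: iconv-lite decodes Buffer (or, in recent versions, Uint8Array) input, while response.arrayBuffer() yields a raw ArrayBuffer, which can come out as an empty result. A minimal sketch of wrapping the bytes first, assuming a bundler that provides Node's Buffer (otherwise the Uint8Array branch applies):

```
fetch(url1).then(function (response) {
  return response.arrayBuffer();
}).then(function (buffer) {
  // iconv-lite works on byte arrays, not raw ArrayBuffers.
  var bytes = (typeof Buffer !== 'undefined')
    ? Buffer.from(buffer)      // Node / bundled Buffer polyfill
    : new Uint8Array(buffer);  // newer iconv-lite versions accept views
  console.log(iconvLite.decode(bytes, 'gbk'));
});
```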
Kafka fails to start: java.lang.NoSuchMethodError: org.apache.zookeeper.ZooKeeper.multi(Ljava/lang/Iterable;Lorg/apache/zookeeper/AsyncCallback$MultiCallback;Ljava/lang/Object;)V
I've tried many things: downgrading ZooKeeper (3.4.14) to match the version that Kafka 2.3 depends on; deleting the Scala environment variable — still no luck; and there is only one JAVA_HOME.

```
java.lang.NoSuchMethodError: org.apache.zookeeper.ZooKeeper.multi(Ljava/lang/Iterable;Lorg/apache/zookeeper/AsyncCallback$MultiCallback;Ljava/lang/Object;)V
    at kafka.zookeeper.ZooKeeperClient.send(ZooKeeperClient.scala:238)
    at kafka.zookeeper.ZooKeeperClient.$anonfun$handleRequests$2(ZooKeeperClient.scala:160)
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
    at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:253)
    at kafka.utils.CoreUtils$.inReadLock(CoreUtils.scala:259)
    at kafka.zookeeper.ZooKeeperClient.$anonfun$handleRequests$1(ZooKeeperClient.scala:160)
    at kafka.zookeeper.ZooKeeperClient.$anonfun$handleRequests$1$adapted(ZooKeeperClient.scala:156)
    at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
    at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
    at kafka.zookeeper.ZooKeeperClient.handleRequests(ZooKeeperClient.scala:156)
    at kafka.zk.KafkaZkClient.retryRequestsUntilConnected(KafkaZkClient.scala:1660)
    at kafka.zk.KafkaZkClient.retryRequestsUntilConnected(KafkaZkClient.scala:1647)
    at kafka.zk.KafkaZkClient.retryRequestUntilConnected(KafkaZkClient.scala:1642)
    at kafka.zk.KafkaZkClient$CheckedEphemeral.create(KafkaZkClient.scala:1712)
    at kafka.zk.KafkaZkClient.checkedEphemeralCreate(KafkaZkClient.scala:1689)
    at kafka.zk.KafkaZkClient.registerBroker(KafkaZkClient.scala:97)
    at kafka.server.KafkaServer.startup(KafkaServer.scala:262)
    at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:38)
    at kafka.Kafka$.main(Kafka.scala:84)
    at kafka.Kafka.main(Kafka.scala)
```
(Vue / wfs.js) Why can I only call the send function in wfs.js, and not its WebSocket initialization and close functions?
The project needs to play a raw H.264 stream, and I found a working library on GitHub, [wfs.js](https://github.com/ChihChengYang/wfs.js "wfs.js"). Recently I hit another problem: I cannot call the WebSocket initialization and close functions inside wfs.js. An earlier question already solved [calling a function nested inside another function](https://ask.csdn.net/questions/891402 ""); the link given in that answer is https://blog.csdn.net/weixin_43694639/article/details/88723280. But with the same approach I can only call the data-sending function, not the WebSocket close function.

---

This is what the whole function looks like collapsed:

```
(function(f)){...
})(function() {...
});
```

This is the function I want to call, as a whole:

```
var WebsocketLoader = function(_EventHandler){
  _inherits(WebsocketLoader, _EventHandler);
  function WebsocketLoader(wfs){...
  }
  _createClass(WebsocketLoader, [{...
    // the functions I want to call are inside the ellipsis
  }]);
  return WebsocketLoader;
}(_eventHandler2.default);
exports.default = WebsocketLoader;
```

The function that **cannot** be called, which initializes the WebSocket:

```
key: 'initSocketClient',
value: function initSocketClient() {
  this.client.binaryType = 'arraybuffer';
  this.client.onmessage = this.receiveSocketMessage.bind(this);
  // clientSocket.binaryType = 'arraybuffer';
  // clientSocket.onmessage = socketReceive;
  this.wfs.trigger(_events2.default.WEBSOCKET_MESSAGE_SENDING, {
    commandType: "open",
    channelName: this.channelName,
    commandValue: "NA"
  }); // I don't know what this trigger does
  console.log('Websocket Open!');
  flagSP = true;
  // initSocket = initSocketClient
```

The function that **cannot** be called, which actively closes the WebSocket:

```
// I added this function myself later
key: 'onWebsocketClose',
value: function onWebsocketClose(i) {
  clientSocket.send(i)
  console.log('Switching pages, closing the connection. ' + i)
  clientSocket.close();
  socketClose = onWebsocketClose;
```

**Every call fails with this error**:

```
[Vue warn]: Error in v-on handler: "TypeError: Object(...) is not a function"

found in

---> <WindowFrame> at src/components/WindowFrame.vue
       <ElHeader> at packages/header/src/main.vue
         <ElContainer> at packages/container/src/main.vue
           <Ccqg> at src/components/main.vue
             <App> at src/App.vue
               <Root> vue.esm.js:628
TypeError: "Object(...) is not a function"
    showMain WindowFrame.vue:309
    click WindowFrame.vue:268
    VueJS 3
```

But this function **can** be called from outside:

```
key: 'onWebsocketMessageSending',
value: function onWebsocketMessageSending(i) {
  clientSocket.send(i)
  console.log('Sending video request: ' + i)
  // this.client.send(i)
  // this.client.send(JSON.stringify({ type: 2, carNum: 8888 }))
  sendMsg = onWebsocketMessageSending
```

I have written a standalone WebSocket before, and I could call every method I defined from outside, including init and close. I don't understand why it doesn't work here.
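An editorial note, not part of the original question: in the pasted code, the function that works ends by executing `sendMsg = onWebsocketMessageSending`, while the matching line for initialization is commented out (`// initSocket = initSocketClient`) and `socketClose = onWebsocketClose` only runs after onWebsocketClose has already been called once. With webpack-style bundles, calling an imported binding that the module never assigned produces exactly `TypeError: Object(...) is not a function`. A hedged sketch of one fix, assigning the module-level bindings up front (the wiring below is an assumption, not wfs.js's actual code):

```
// Hedged sketch: assign the externally visible bindings when the loader
// is constructed, so they exist before any Vue handler calls them.
var sendMsg, initSocket, socketClose;

function WebsocketLoader(wfs) {
  // ...original constructor body...
  var self = this;
  sendMsg = function (i) { self.onWebsocketMessageSending(i); };
  initSocket = function () { self.initSocketClient(); };
  socketClose = function (i) { self.onWebsocketClose(i); };
}
```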
Puzzling output from JavaScript typed arrays — could some kind soul explain? Thank you!
# Does Uint32 mean the data is stored in base 32, and Uint16 in base 16?
# Case 1: writing through a Uint32Array and then reading the same buffer through a Uint16Array — why is the output 1 0 2 0 3 0 4 0?

```
var buffer = new ArrayBuffer(16);
var array = new Uint32Array(buffer);
for (var i = 0; i < array.length; ++i) {
  array[i] = i + 1;
  document.write(array[i] + " "); // 1 2 3 4
}
document.write("<br>");
var array2 = new Uint16Array(buffer);
for (var i = 0; i < array2.length; ++i) {
  document.write(array2[i] + " "); // 1 0 2 0 3 0 4 0
}
```

# Case 2: writing through a Uint16Array and then reading through a Uint32Array — why is the output 131073 262147 393221 524295?

```
var buffer = new ArrayBuffer(16);
var array = new Uint16Array(buffer);
for (var i = 0; i < array.length; ++i) {
  array[i] = i + 1;
  document.write(array[i] + " "); // 1 2 3 4 5 6 7 8
}
document.write("<br>");
var array2 = new Uint32Array(buffer);
for (var i = 0; i < array2.length; ++i) {
  document.write(array2[i] + " "); // 131073 262147 393221 524295
}
```

# Please be as detailed as you can!
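Not part of the original thread, but a worked note on why these outputs appear: "Uint32" and "Uint16" refer to the element width in bits (32-bit and 16-bit unsigned integers), not a numeric base, and every typed-array view reads the same raw bytes of the shared buffer. On a little-endian machine (virtually all consumer hardware), the low byte comes first, which explains both runs:

```
var buf = new ArrayBuffer(4);

new Uint32Array(buf)[0] = 1;          // buffer bytes: 01 00 00 00
console.log(new Uint16Array(buf));    // [1, 0] -> hence "1 0 2 0 3 0 4 0"

new Uint16Array(buf).set([1, 2]);     // buffer bytes: 01 00 02 00
console.log(new Uint32Array(buf)[0]); // 0x00020001 = 2 * 65536 + 1 = 131073
// Likewise 262147 = 4 * 65536 + 3 (the pair 3, 4), and so on.
```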
WeChat BLE write reports success, but the Bluetooth module's serial-port monitor receives nothing
## 1. Problem description
A WeChat Bluetooth mini program sends a single hex byte, 0x00, to an HC-08 Bluetooth device (supports Bluetooth 4.0). WeChat's real-device debugger reports the write as successful: writeBLECharacteristicValue:ok. But the serial-port monitor attached to the HC-08 receives no data at all. Has anyone run into this? Any suggestions would be appreciated.

## 2. Code
```
startwrite: function () {
  // Send a single 0x00 hex byte to the Bluetooth device
  const buffer = new ArrayBuffer(1)
  const dataView = new DataView(buffer)
  dataView.setUint8(0, 0)
  wx.writeBLECharacteristicValue({
    deviceId: deviceId,
    serviceId: serviceId[1], // use service 1
    characteristicId: characteristicId[4], // the 5th characteristic of service 1; supports read, write, notify
    value: buffer,
    success: function (res) {
      console.log('writeBLECharacteristicValue success', res.errMsg)
    }
  })
}
```

## 3. Error output
There is actually no error; the Bluetooth module simply receives no data, and I don't know why. The mini program's real-device console prints: writeBLECharacteristicValue:ok

What the HC-08's serial monitor shows:
![图片说明](https://img-ask.csdn.net/upload/201812/18/1545123496_463917.png)

Note: COM5 is the correct port for the module, as shown here:
![图片说明](https://img-ask.csdn.net/upload/201812/18/1545123601_3091.png)

The HC-08 connects without requiring a pairing key.

## 4. What I have tried
I suspected I had picked the wrong service or characteristic, causing the write to fail. But after checking, it really is the 5th characteristic of service 1 that supports read, write, and notify. Screenshot:
![图片说明](https://img-ask.csdn.net/upload/201812/18/1545123896_404297.png)

## I can't work out what is going on — pointers from anyone experienced would be appreciated. Overall screenshot:
![图片说明](https://img-ask.csdn.net/upload/201812/18/1545124166_529665.png)
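Not related to the root cause here, but worth noting how the hex-string conversion from the accepted answer above composes with this API when the payload is longer than one byte. A hedged sketch (hexToArrayBuffer is a helper introduced here; the deviceId/serviceId/characteristicId values are the question's own placeholders):

```
// Hypothetical helper built from the accepted answer above.
function hexToArrayBuffer(hex) {
  return new Uint8Array(hex.match(/[\da-f]{2}/gi).map(function (h) {
    return parseInt(h, 16);
  })).buffer;
}

wx.writeBLECharacteristicValue({
  deviceId: deviceId,
  serviceId: serviceId[1],
  characteristicId: characteristicId[4],
  value: hexToArrayBuffer('AA5504B10000B5'), // multi-byte hex payload
  success: function (res) {
    console.log('write success', res.errMsg);
  }
});
```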
[Solved] Mini program generates a QR code on the front end; after converting the base64 to an image, the page does not display it
![the decoded base64](https://img-ask.csdn.net/upload/201910/17/1571311813_5948.png)
```
onLoad: function(options) {
  var that = this;
  var access_token = that.data.access_token
  var scene = decodeURIComponent(options.scene)
  wx.request({
    url: 'https://api.weixin.qq.com/wxa/getwxacodeunlimit?access_token=' + this.data.access_token,
    data: {
      scene: '000',
      page: "../../pages/home/home",
      width: 200
    },
    method: "POST",
    responseType: 'arraybuffer',
    success: res => {
      var base64 = wx.arrayBufferToBase64(res.data);
      this.setData({
        urlIMG: 'data:image/PNG;base64,' + base64
      })
      console.log(that.data)
    },
    fail(e) {
    }
  })
},
```
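Not from the original thread, but a common pitfall with getwxacodeunlimit: on failure the API returns a JSON error object instead of PNG bytes, and base64-encoding that JSON silently produces a "broken" image. A hedged sketch that inspects the first byte of the response before converting:

```
wx.request({
  url: 'https://api.weixin.qq.com/wxa/getwxacodeunlimit?access_token=' + this.data.access_token,
  data: { scene: '000', page: '../../pages/home/home', width: 200 },
  method: 'POST',
  responseType: 'arraybuffer',
  success: res => {
    var bytes = new Uint8Array(res.data);
    // A PNG begins with byte 0x89; a JSON error body begins with '{' (0x7B).
    if (bytes[0] === 0x7b) {
      console.error('API returned an error:', String.fromCharCode.apply(null, bytes));
      return;
    }
    this.setData({ urlIMG: 'data:image/png;base64,' + wx.arrayBufferToBase64(res.data) });
  }
})
```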
JS: how do I expose a variable set inside XMLHttpRequest.onreadystatechange as a global variable?
```
function MeshModel(url, flag) {
  var CellsNum;
  var oReq = new XMLHttpRequest();
  oReq.open("GET", url, true);
  oReq.responseType = "arraybuffer";
  oReq.send();
  oReq.onreadystatechange = function () {
    if (oReq.readyState == oReq.DONE) {
      var arrayBuffer = oReq.response;
      var dataView = new DataView(arrayBuffer);
      CellsNum = dataView.getInt32(flag, true); // CellsNum reads out correctly here
    }
  };
  this.getCellsNum = function () {
    return CellsNum;
  };
}

function main() {
  var url = "http://localhost:8080/Obj/TT.rtd";
  var m = new MeshModel(url, 2);
  document.write(m.getCellsNum()); // prints undefined
}
```
I want to access CellsNum, but it shows undefined. How should I change this? Please help.
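Not part of the original question, but the undefined comes from timing rather than scope: getCellsNum runs before the asynchronous request has completed, so CellsNum has not been assigned yet. One common fix is to hand the value to a callback or Promise instead of reading it synchronously; a minimal sketch (loadCellsNum is a name introduced here):

```
// Resolve the int32 at byte offset `flag` once the response arrives.
function loadCellsNum(url, flag) {
  return new Promise(function (resolve, reject) {
    var oReq = new XMLHttpRequest();
    oReq.open("GET", url, true);
    oReq.responseType = "arraybuffer";
    oReq.onload = function () {
      resolve(new DataView(oReq.response).getInt32(flag, true)); // little-endian
    };
    oReq.onerror = reject;
    oReq.send();
  });
}

loadCellsNum("http://localhost:8080/Obj/TT.rtd", 2).then(function (cellsNum) {
  document.write(cellsNum);
});
```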
How do I resolve the "200" error reported by a JavaScript WebSocket?
WebSocket connection to 'ws://127.0.0.1:8020/' failed: Error during WebSocket handshake: Unexpected response code: 200

A handshake response of 200 counts as an error??? I don't really understand. Here is the offending code:

```
connect: function(url) {
  // Websocket initialization
  if (this.ws != undefined) {
    this.ws.close();
    delete this.ws;
  }
  this.ws = new WebSocket(url); // <== there is an error here O~O
  this.ws.binaryType = "arraybuffer";
  this.ws.onopen = () => {
    log("Connected to " + url);
  };
```
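Not from the original thread, but for context: a successful WebSocket handshake must be answered with HTTP 101 Switching Protocols, so a 200 means the endpoint at ws://127.0.0.1:8020/ is a plain HTTP server that never upgrades the connection; the client code above is fine. A minimal sketch of a server this client could connect to, assuming Node.js with the ws package:

```
// npm install ws
const WebSocket = require('ws');

// Listen on the port the client dials (ws://127.0.0.1:8020/).
const wss = new WebSocket.Server({ port: 8020 });

wss.on('connection', (socket) => {
  socket.on('message', (data) => socket.send(data)); // echo frames back
});
```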
Problems exporting Excel via an Angular POST request
I have previously implemented direct data export and Excel upload, using POI 3.17 on the backend and Angular on the front end. The old export produced the file name and Excel extension I had preset. It worked like this:

```
window.location.href = "/api/livecoursepay?courseName=" + $scope.courseName + "&types=" + 14;
```

Here is the current POST-based code:

```
$scope.uploadlineGuangdian = function() {
  var fd = new FormData();
  fd.append("file", $("#file")[0].files[0]);
  fd.append("types", 1);
  $http.post("/api/improtExcel/guangdian", fd, {
    withCredentials: true,
    headers: { 'Content-Type': undefined },
    responseType: 'arraybuffer',
    transformRequest: angular.identity
  })
  .success(function(data) {
    var blob = new Blob([data], { type: "application/vnd.ms-excel" });
    var objectUrl = URL.createObjectURL(blob);
    window.open(objectUrl);
  })
}
```

This time the requirement changed: I need to upload an Excel file and receive an Excel file back, so I have to use a POST request. It mostly works — the content and sheet names are what I want — but the file name and extension come out as something strange, and I have to open the file with Excel and use Save As to get what I want.

![图片说明](https://img-ask.csdn.net/upload/201907/12/1562893703_461325.png) ![图片说明](https://img-ask.csdn.net/upload/201907/12/1562893755_839418.png)

With window.location.href the page downloads a proper Excel file with the name I set, but the POST approach above returns a file with no extension and a random-looking name that must be opened with Excel (as in the screenshots), even though the data inside is correct. Why does this happen, and how can I fix it? Many thanks.
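Not from the original thread: window.open(objectUrl) gives the browser no file name to use, which is why the download appears with a random name and no extension. A common workaround is a temporary <a> element with the download attribute; a minimal sketch (the file name below is a placeholder):

```
.success(function (data) {
  var blob = new Blob([data], { type: "application/vnd.ms-excel" });
  var objectUrl = URL.createObjectURL(blob);
  var a = document.createElement('a');
  a.href = objectUrl;
  a.download = 'export.xls'; // placeholder: pick your own file name
  document.body.appendChild(a);
  a.click();
  document.body.removeChild(a);
  URL.revokeObjectURL(objectUrl); // release the blob URL
})
```

If the backend sets a Content-Disposition header with a filename, that name can be read from the response headers instead of hard-coding one.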
PDF.js loads a PDF stream for preview but does not render it
![问题页面](https://img-ask.csdn.net/upload/201905/20/1558347609_598483.png)
![图片说明](https://img-ask.csdn.net/upload/201905/20/1558347749_672343.png)

As the first screenshot shows, the data stream has been received and all the buttons work, but the main pane renders nothing; in the second screenshot, presentation mode and everything else still work. Could someone explain what is going on? Version: PDF.js 1.9.426. Page code:

```
<%@ page pageEncoding="utf-8" contentType="text/html; charset=utf-8"%>
<link rel="stylesheet" href="<%=request.getContextPath()%>/skin/js/pdfjs/web/viewer.css">
<link rel="resource" type="application/l10n" href="<%=request.getContextPath()%>/skin/js/pdfjs/web/locale/locale.properties">
<script type="text/javascript">
var DEFAULT_URL = ""; // note: the deleted variable is redefined here
var PDFData = "";
$.ajax({
  type: "post",
  async: false,
  // mimeType: 'text/plain; charset=x-user-defined',
  url: 'publicworkindex.cmd?method=getPdfOutputStream',
  data: { 'doc_id': ${htmldata.STORE_ID} },
  success: function(data) {
    PDFData = data;
  }
});
var rawLength = PDFData.length;
// Convert to the Uint8Array type that pdf.js parses directly; see pdf.js issue 4068
var array = new Uint8Array(new ArrayBuffer(rawLength));
for (i = 0; i < rawLength; i++) {
  array[i] = PDFData.charCodeAt(i) & 0xff;
}
DEFAULT_URL = array;
</script>
<script src="<%=request.getContextPath()%>/skin/js/pdfjs/build/pdf.js"></script>
<script src="<%=request.getContextPath()%>/skin/js/pdfjs/web/viewer.js"></script>
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=1">
<meta name="google" content="notranslate">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<title>PDF.js viewer</title>
</head>
<body tabindex="1" class="loadingInProgress">
<div id="outerContainer">
<div id="sidebarContainer">
<div id="toolbarSidebar">
<div class="splitToolbarButton toggled">
<button id="viewThumbnail" class="toolbarButton toggled" title="Show Thumbnails" tabindex="2" data-l10n-id="thumbs"> <span data-l10n-id="thumbs_label">Thumbnails</span> </button>
<button id="viewOutline" class="toolbarButton" title="Show Document Outline (double-click to expand/collapse all items)" tabindex="3" data-l10n-id="document_outline"> <span data-l10n-id="document_outline_label">Document Outline</span> </button>
<button id="viewAttachments" class="toolbarButton" title="Show Attachments" tabindex="4" data-l10n-id="attachments"> <span data-l10n-id="attachments_label">Attachments</span> </button>
</div>
</div>
<div id="sidebarContent">
<div id="thumbnailView"> </div>
<div id="outlineView" class="hidden"> </div>
<div id="attachmentsView" class="hidden"> </div>
</div>
</div> <!-- sidebarContainer -->
<div id="mainContainer">
<div class="findbar hidden doorHanger" id="findbar">
<div id="findbarInputContainer">
<input id="findInput" class="toolbarField" title="Find" placeholder="Find in document…" tabindex="91" data-l10n-id="find_input">
<div class="splitToolbarButton">
<button id="findPrevious" class="toolbarButton findPrevious" title="Find the previous occurrence of the phrase" tabindex="92" data-l10n-id="find_previous"> <span data-l10n-id="find_previous_label">Previous</span> </button>
<div class="splitToolbarButtonSeparator"></div>
<button id="findNext" class="toolbarButton findNext" title="Find the next occurrence of the phrase" tabindex="93" data-l10n-id="find_next"> <span data-l10n-id="find_next_label">Next</span> </button>
</div>
</div>
<div id="findbarOptionsContainer">
<input type="checkbox" id="findHighlightAll" class="toolbarField" tabindex="94">
<label for="findHighlightAll" class="toolbarLabel" data-l10n-id="find_highlight">Highlight all</label>
<input type="checkbox" id="findMatchCase" class="toolbarField" tabindex="95">
<label for="findMatchCase"
class="toolbarLabel" data-l10n-id="find_match_case_label">Match case</label> <span id="findResultsCount" class="toolbarLabel hidden"></span> </div> <div id="findbarMessageContainer"> <span id="findMsg" class="toolbarLabel"></span> </div> </div> <!-- findbar --> <div id="secondaryToolbar" class="secondaryToolbar hidden doorHangerRight"> <div id="secondaryToolbarButtonContainer"> <button id="secondaryPresentationMode" class="secondaryToolbarButton presentationMode visibleLargeView" title="Switch to Presentation Mode" tabindex="51" data-l10n-id="presentation_mode"> <span data-l10n-id="presentation_mode_label">Presentation Mode</span> </button> <button id="secondaryOpenFile" class="secondaryToolbarButton openFile visibleLargeView" title="Open File" tabindex="52" data-l10n-id="open_file"> <span data-l10n-id="open_file_label">Open</span> </button> <button id="secondaryPrint" class="secondaryToolbarButton print visibleMediumView" title="Print" tabindex="53" data-l10n-id="print"> <span data-l10n-id="print_label">Print</span> </button> <button id="secondaryDownload" class="secondaryToolbarButton download visibleMediumView" title="Download" tabindex="54" data-l10n-id="download"> <span data-l10n-id="download_label">Download</span> </button> <a href="#" id="secondaryViewBookmark" class="secondaryToolbarButton bookmark visibleSmallView" title="Current view (copy or open in new window)" tabindex="55" data-l10n-id="bookmark"> <span data-l10n-id="bookmark_label">Current View</span> </a> <div class="horizontalToolbarSeparator visibleLargeView"></div> <button id="firstPage" class="secondaryToolbarButton firstPage" title="Go to First Page" tabindex="56" data-l10n-id="first_page"> <span data-l10n-id="first_page_label">Go to First Page</span> </button> <button id="lastPage" class="secondaryToolbarButton lastPage" title="Go to Last Page" tabindex="57" data-l10n-id="last_page"> <span data-l10n-id="last_page_label">Go to Last Page</span> </button> <div class="horizontalToolbarSeparator"></div> <button id="pageRotateCw" class="secondaryToolbarButton rotateCw" title="Rotate Clockwise" tabindex="58" data-l10n-id="page_rotate_cw"> <span data-l10n-id="page_rotate_cw_label">Rotate Clockwise</span> </button> <button id="pageRotateCcw" class="secondaryToolbarButton rotateCcw" title="Rotate Counterclockwise" tabindex="59" data-l10n-id="page_rotate_ccw"> <span data-l10n-id="page_rotate_ccw_label">Rotate Counterclockwise</span> </button> <div class="horizontalToolbarSeparator"></div> <button id="cursorSelectTool" class="secondaryToolbarButton selectTool toggled" title="Enable Text Selection Tool" tabindex="60" data-l10n-id="cursor_text_select_tool"> <span data-l10n-id="cursor_text_select_tool_label">Text Selection Tool</span> </button> <button id="cursorHandTool" class="secondaryToolbarButton handTool" title="Enable Hand Tool" tabindex="61" data-l10n-id="cursor_hand_tool"> <span data-l10n-id="cursor_hand_tool_label">Hand Tool</span> </button> <div class="horizontalToolbarSeparator"></div> <button id="documentProperties" class="secondaryToolbarButton documentProperties" title="Document Properties…" tabindex="62" data-l10n-id="document_properties"> <span data-l10n-id="document_properties_label">Document Properties…</span> </button> </div> </div> <!-- secondaryToolbar --> <div class="toolbar"> <div id="toolbarContainer"> <div id="toolbarViewer"> <div id="toolbarViewerLeft"> <button id="sidebarToggle" class="toolbarButton" title="Toggle Sidebar" tabindex="11" data-l10n-id="toggle_sidebar"> <span 
data-l10n-id="toggle_sidebar_label">Toggle Sidebar</span> </button> <div class="toolbarButtonSpacer"></div> <button id="viewFind" class="toolbarButton" title="Find in Document" tabindex="12" data-l10n-id="findbar"> <span data-l10n-id="findbar_label">Find</span> </button> <div class="splitToolbarButton hiddenSmallView"> <button class="toolbarButton pageUp" title="Previous Page" id="previous" tabindex="13" data-l10n-id="previous"> <span data-l10n-id="previous_label">Previous</span> </button> <div class="splitToolbarButtonSeparator"></div> <button class="toolbarButton pageDown" title="Next Page" id="next" tabindex="14" data-l10n-id="next"> <span data-l10n-id="next_label">Next</span> </button> </div> <input type="number" id="pageNumber" class="toolbarField pageNumber" title="Page" value="1" size="4" min="1" tabindex="15" data-l10n-id="page"> <span id="numPages" class="toolbarLabel"></span> </div> <div id="toolbarViewerRight"> <button id="presentationMode" class="toolbarButton presentationMode hiddenLargeView" title="Switch to Presentation Mode" tabindex="31" data-l10n-id="presentation_mode"> <span data-l10n-id="presentation_mode_label">Presentation Mode</span> </button> <button id="openFile" class="toolbarButton openFile hiddenLargeView" title="Open File" tabindex="32" data-l10n-id="open_file"> <span data-l10n-id="open_file_label">Open</span> </button> <button id="print" class="toolbarButton print hiddenMediumView" title="Print" tabindex="33" data-l10n-id="print"> <span data-l10n-id="print_label">Print</span> </button> <button id="download" class="toolbarButton download hiddenMediumView" title="Download" tabindex="34" data-l10n-id="download"> <span data-l10n-id="download_label">Download</span> </button> <a href="#" id="viewBookmark" class="toolbarButton bookmark hiddenSmallView" title="Current view (copy or open in new window)" tabindex="35" data-l10n-id="bookmark"> <span data-l10n-id="bookmark_label">Current View</span> </a> <div class="verticalToolbarSeparator hiddenSmallView"></div> <button id="secondaryToolbarToggle" class="toolbarButton" title="Tools" tabindex="36" data-l10n-id="tools"> <span data-l10n-id="tools_label">Tools</span> </button> </div> <div id="toolbarViewerMiddle"> <div class="splitToolbarButton"> <button id="zoomOut" class="toolbarButton zoomOut" title="Zoom Out" tabindex="21" data-l10n-id="zoom_out"> <span data-l10n-id="zoom_out_label">Zoom Out</span> </button> <div class="splitToolbarButtonSeparator"></div> <button id="zoomIn" class="toolbarButton zoomIn" title="Zoom In" tabindex="22" data-l10n-id="zoom_in"> <span data-l10n-id="zoom_in_label">Zoom In</span> </button> </div> <span id="scaleSelectContainer" class="dropdownToolbarButton"> <select id="scaleSelect" title="Zoom" tabindex="23" data-l10n-id="zoom"> <option id="pageAutoOption" title="" value="auto" selected="selected" data-l10n-id="page_scale_auto">Automatic Zoom</option> <option id="pageActualOption" title="" value="page-actual" data-l10n-id="page_scale_actual">Actual Size</option> <option id="pageFitOption" title="" value="page-fit" data-l10n-id="page_scale_fit">Page Fit</option> <option id="pageWidthOption" title="" value="page-width" data-l10n-id="page_scale_width">Page Width</option> <option id="customScaleOption" title="" value="custom" disabled="disabled" hidden="true"></option> <option title="" value="0.5" data-l10n-id="page_scale_percent" data-l10n-args='{ "scale": 50 }'>50%</option> <option title="" value="0.75" data-l10n-id="page_scale_percent" data-l10n-args='{ "scale": 75 }'>75%</option> <option 
title="" value="1" data-l10n-id="page_scale_percent" data-l10n-args='{ "scale": 100 }'>100%</option> <option title="" value="1.25" data-l10n-id="page_scale_percent" data-l10n-args='{ "scale": 125 }'>125%</option> <option title="" value="1.5" data-l10n-id="page_scale_percent" data-l10n-args='{ "scale": 150 }'>150%</option> <option title="" value="2" data-l10n-id="page_scale_percent" data-l10n-args='{ "scale": 200 }'>200%</option> <option title="" value="3" data-l10n-id="page_scale_percent" data-l10n-args='{ "scale": 300 }'>300%</option> <option title="" value="4" data-l10n-id="page_scale_percent" data-l10n-args='{ "scale": 400 }'>400%</option> </select> </span> </div> </div> <div id="loadingBar"> <div class="progress"> <div class="glimmer"> </div> </div> </div> </div> </div> <menu type="context" id="viewerContextMenu"> <menuitem id="contextFirstPage" label="First Page" data-l10n-id="first_page"></menuitem> <menuitem id="contextLastPage" label="Last Page" data-l10n-id="last_page"></menuitem> <menuitem id="contextPageRotateCw" label="Rotate Clockwise" data-l10n-id="page_rotate_cw"></menuitem> <menuitem id="contextPageRotateCcw" label="Rotate Counter-Clockwise" data-l10n-id="page_rotate_ccw"></menuitem> </menu> <div id="viewerContainer" tabindex="0"> <div id="viewer" class="pdfViewer"></div> </div> <div id="errorWrapper" hidden='true'> <div id="errorMessageLeft"> <span id="errorMessage"></span> <button id="errorShowMore" data-l10n-id="error_more_info"> More Information </button> <button id="errorShowLess" data-l10n-id="error_less_info" hidden='true'> Less Information </button> </div> <div id="errorMessageRight"> <button id="errorClose" data-l10n-id="error_close"> Close </button> </div> <div class="clearBoth"></div> <textarea id="errorMoreInfo" hidden='true' readonly="readonly"></textarea> </div> </div> <!-- mainContainer --> <div id="overlayContainer" class="hidden"> <div id="passwordOverlay" class="container hidden"> <div class="dialog"> <div class="row"> <p id="passwordText" data-l10n-id="password_label">Enter the password to open this PDF file:</p> </div> <div class="row"> <input type="password" id="password" class="toolbarField"> </div> <div class="buttonRow"> <button id="passwordCancel" class="overlayButton"><span data-l10n-id="password_cancel">Cancel</span></button> <button id="passwordSubmit" class="overlayButton"><span data-l10n-id="password_ok">OK</span></button> </div> </div> </div> <div id="documentPropertiesOverlay" class="container hidden"> <div class="dialog"> <div class="row"> <span data-l10n-id="document_properties_file_name">File name:</span> <p id="fileNameField">-</p> </div> <div class="row"> <span data-l10n-id="document_properties_file_size">File size:</span> <p id="fileSizeField">-</p> </div> <div class="separator"></div> <div class="row"> <span data-l10n-id="document_properties_title">Title:</span> <p id="titleField">-</p> </div> <div class="row"> <span data-l10n-id="document_properties_author">Author:</span> <p id="authorField">-</p> </div> <div class="row"> <span data-l10n-id="document_properties_subject">Subject:</span> <p id="subjectField">-</p> </div> <div class="row"> <span data-l10n-id="document_properties_keywords">Keywords:</span> <p id="keywordsField">-</p> </div> <div class="row"> <span data-l10n-id="document_properties_creation_date">Creation Date:</span> <p id="creationDateField">-</p> </div> <div class="row"> <span data-l10n-id="document_properties_modification_date">Modification Date:</span> <p id="modificationDateField">-</p> </div> <div class="row"> <span 
data-l10n-id="document_properties_creator">Creator:</span> <p id="creatorField">-</p> </div> <div class="separator"></div> <div class="row"> <span data-l10n-id="document_properties_producer">PDF Producer:</span> <p id="producerField">-</p> </div> <div class="row"> <span data-l10n-id="document_properties_version">PDF Version:</span> <p id="versionField">-</p> </div> <div class="row"> <span data-l10n-id="document_properties_page_count">Page Count:</span> <p id="pageCountField">-</p> </div> <div class="buttonRow"> <button id="documentPropertiesClose" class="overlayButton"><span data-l10n-id="document_properties_close">Close</span></button> </div> </div> </div> <div id="printServiceOverlay" class="container hidden"> <div class="dialog"> <div class="row"> <span data-l10n-id="print_progress_message">Preparing document for printing…</span> </div> <div class="row"> <progress value="0" max="100"></progress> <span data-l10n-id="print_progress_percent" data-l10n-args='{ "progress": 0 }' class="relative-progress">0%</span> </div> <div class="buttonRow"> <button id="printCancel" class="overlayButton"><span data-l10n-id="print_progress_close">Cancel</span></button> </div> </div> </div> </div> <!-- overlayContainer --> </div> <!-- outerContainer --> <div id="printContainer"></div> </body> ```
A Scala timer question — advice appreciated
In Scala I want to implement the following: an ArrayBuffer receives one row of data at a time.
1. When the buffer reaches a fixed length, say 300, submit it once, clear it, and continue.
2. After a fixed interval, say one minute, submit even if the length is under 300, then clear and continue. If it reaches 300 within the minute, submit and restart the timer.
I have tried several approaches and none of them work. Could an expert advise? Thanks.
Kafka writes hundreds of GB to Platform.log
Kafka inexplicably writes log output like this:

```
0918 150001 867 [operationplat_TEST-VM-JC-121-1474179379302-692862cb-leader-finder-thread] INFO kafka.consumer.ConsumerFetcherManager: [ConsumerFetcherManager-1474179379371] Added fetcher for partitions ArrayBuffer()
0918 150002 068 [operationplat_TEST-VM-JC-121-1474179379302-692862cb-leader-finder-thread] INFO kafka.utils.VerifiableProperties: Verifying properties
0918 150002 068 [operationplat_TEST-VM-JC-121-1474179379302-692862cb-leader-finder-thread] INFO kafka.utils.VerifiableProperties: Property client.id is overridden to operationplat
0918 150002 068 [operationplat_TEST-VM-JC-121-1474179379302-692862cb-leader-finder-thread] INFO kafka.utils.VerifiableProperties: Property metadata.broker.list is overridden to
0918 150002 068 [operationplat_TEST-VM-JC-121-1474179379302-692862cb-leader-finder-thread] INFO kafka.utils.VerifiableProperties: Property request.timeout.ms is overridden to 30000
0918 150002 068 [operationplat_TEST-VM-JC-121-1474179379302-692862cb-leader-finder-thread] WARN kafka.consumer.ConsumerFetcherManager$LeaderFinderThread: [operationplat_TEST-VM-JC-121-1474179379302-692862cb-leader-finder-thread], Failed to find leader for Set([operAsynDown,0])
kafka.common.KafkaException: fetching topic metadata for topics [Set(operAsynDown)] from broker [ArrayBuffer()] failed
    at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:72)
    at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:93)
    at kafka.consumer.ConsumerFetcherManager$LeaderFinderThread.doWork(ConsumerFetcherManager.scala:66)
    at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)
```

Can this logging be turned off, or how can I prevent this exception from occurring? Please help!
DecisionTreeClassifier.train() cannot be called
When I try to call DecisionTreeClassifier.train(), I get the following error:

> Error:(218, 41) method train in class DecisionTreeClassifier cannot be accessed in org.apache.spark.ml.classification.DecisionTreeClassifier
> Access to protected method train not permitted because
> enclosing object FeatureSelection in package core is not a subclass of
> class DecisionTreeClassifier in package classification where target is defined
> val dt = decisionTreeClassifier.train(trainRdd)

It says my object FeatureSelection cannot call the protected method because it is not a subclass of DecisionTreeClassifier in the classification package, but the official documentation lists train as public.

Environment: Scala 2.10.6, Spark 1.6.1 (Scala 2.10 build), JDK 1.8. Relevant code:

```scala
import org.apache.spark.ml.classification.DecisionTreeClassifier

object FeatureSelection {
  def selectFeatureGreedyDTNoLimit() {
    val selectfeature = ArrayBuffer[String]()
    val selectsize = selectfeature.size
    val tempfeature = selectfeature ++ ArrayBuffer(line)
    val vectorDF = new VectorAssembler()
      .setInputCols(tempfeature.toArray)
      .setOutputCol("features")
      .transform(tempdf)
      .select("label", "features")
    val Array(trainRdd, testRdd) = vectorDF
      .rdd
      .map(row => LabeledPoint(Common.any2Double(row.get(0)).get, row.getAs[Vector](1)))
      .randomSplit(Array(0.5, 0.5), 0L)
    val numClasses = 2
    val categoricalFeaturesInfo = Map[Int, Int]()
    val dt = decisionTreeClassifier.train(trainRdd, categoricalFeaturesInfo, numClasses)
  }
}
```
Error when calling DecisionTreeClassifier.train()
When I try to use DecisionTreeClassifier.train(), it keeps reporting the error below, saying the current object is not a subclass in the classification package and therefore cannot call its protected method — yet the official documentation says it is public, so I don't know what is wrong.

```
Error:(218, 41) method train in class DecisionTreeClassifier cannot be accessed in org.apache.spark.ml.classification.DecisionTreeClassifier
Access to protected method train not permitted because
enclosing object FeatureSelection in package core is not a subclass of
class DecisionTreeClassifier in package classification where target is defined
val dt = decisionTreeClassifier.train(trainRdd)
```

Environment: Scala 2.10.6, Spark 1.6.1 (Scala 2.10 build), JDK 1.8. Relevant code:

```
import org.apache.spark.ml.classification.DecisionTreeClassifier

object FeatureSelection {
  val selectfeature = ArrayBuffer[String]()
  val tempfeature = selectfeature ++ ArrayBuffer(line)
  val vectorDF = new VectorAssembler()
    .setInputCols(tempfeature.toArray)
    .setOutputCol("features")
    .transform(tempdf)
    .select("label", "features")
  val Array(trainRdd, testRdd) = vectorDF
    .rdd
    .map(row => LabeledPoint(Common.any2Double(row.get(0)).get, row.getAs[Vector](1)))
    .randomSplit(Array(0.5, 0.5), 0L)
  val numClasses = 2
  val categoricalFeaturesInfo = Map[Int, Int]()
  val decisionTreeClassifier = new DecisionTreeClassifier()
    .setMaxBins(maxBins)
    .setImpurity(impurity)
    .setFeaturesCol("features")
    .setLabelCol("label")
    .setMaxDepth(maxDepth)
  val dt = decisionTreeClassifier.train(trainRdd, categoricalFeaturesInfo, numClasses)
}
```
Spark Streaming throws the following exception after running for a while — how can I fix it?
```
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 2 in stage 1568735.0 failed 4 times, most recent failure: Lost task 2.3 in stage 1568735.0 (TID 11808399, iZ94pshi327Z): java.lang.Exception: Could not compute split, block input-0-1438413230200 not found
    at org.apache.spark.rdd.BlockRDD.compute(BlockRDD.scala:51)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
    at org.apache.spark.scheduler.Task.run(Task.scala:64)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:203)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:744)

Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1204)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1193)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1192)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1192)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:693)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:693)
    at scala.Option.foreach(Option.scala:236)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:693)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1393)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1354)
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)

15/08/01 08:53:09 WARN AkkaUtils: Error sending message [message = Heartbeat(0,[Lscala.Tuple2;@544fc1ff,BlockManagerId(0, iZ94w2tczvjZ, 41595))] in 2 attempts
java.util.concurrent.TimeoutException: Futures timed out after [30 seconds]
    at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219)
    at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
    at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:107)
    at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
    at scala.concurrent.Await$.result(package.scala:107)
    at org.apache.spark.util.AkkaUtils$.askWithReply(AkkaUtils.scala:195)
    at org.apache.spark.executor.Executor$$anon$1.run(Executor.scala:427)

15/08/01 08:53:28 WARN AkkaUtils: Error sending message [message = UpdateBlockInfo(BlockManagerId(0, iZ94w2tczvjZ, 41595),input-0-1438385673800,StorageLevel(false, false, false, false, 1),0,0,0)] in 1 attempts
java.util.concurrent.TimeoutException: Futures timed out after [30 seconds]
    at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219)
    at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
    at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:107)
    at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
    at scala.concurrent.Await$.result(package.scala:107)
    at org.apache.spark.util.AkkaUtils$.askWithReply(AkkaUtils.scala:195)
    at org.apache.spark.storage.BlockManagerMaster.askDriverWithReply(BlockManagerMaster.scala:221)
    at org.apache.spark.storage.BlockManagerMaster.updateBlockInfo(BlockManagerMaster.scala:62)
    at org.apache.spark.storage.BlockManager.org$apache$spark$storage$BlockManager$$tryToReportBlockStatus(BlockManager.scala:384)
    at org.apache.spark.storage.BlockManager.reportBlockStatus(BlockManager.scala:360)
    at org.apache.spark.storage.BlockManager.dropOldBlocks(BlockManager.scala:1138)
    at org.apache.spark.storage.BlockManager.org$apache$spark$storage$BlockManager$$dropOldNonBroadcastBlocks(BlockManager.scala:1115)
    at org.apache.spark.storage.BlockManager$$anonfun$1.apply$mcVJ$sp(BlockManager.scala:149)
    at org.apache.spark.util.MetadataCleaner$$anon$1.run(MetadataCleaner.scala:43)
    at java.util.TimerThread.mainLoop(Timer.java:555)
    at java.util.TimerThread.run(Timer.java:505)

15/08/01 08:53:42 WARN AkkaUtils: Error sending message [message = Heartbeat(0,[Lscala.Tuple2;@544fc1ff,BlockManagerId(0, iZ94w2tczvjZ, 41595))] in 3 attempts
java.util.concurrent.TimeoutException: Futures timed out after [30 seconds]
    at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219)
    at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
    at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:107)
    at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
    at scala.concurrent.Await$.result(package.scala:107)
    at org.apache.spark.util.AkkaUtils$.askWithReply(AkkaUtils.scala:195)
    at org.apache.spark.executor.Executor$$anon$1.run(Executor.scala:427)

15/08/01 08:53:45 WARN Executor: Issue communicating with driver in heartbeater
org.apache.spark.SparkException: Error sending message [message = Heartbeat(0,[Lscala.Tuple2;@544fc1ff,BlockManagerId(0, iZ94w2tczvjZ, 41595))]
    at org.apache.spark.util.AkkaUtils$.askWithReply(AkkaUtils.scala:209)
    at org.apache.spark.executor.Executor$$anon$1.run(Executor.scala:427)
Caused by: java.util.concurrent.TimeoutException: Futures timed out after [30 seconds]
    at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219)
    at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
    at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:107)
    at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
    at scala.concurrent.Await$.result(package.scala:107)
    at org.apache.spark.util.AkkaUtils$.askWithReply(AkkaUtils.scala:195)
    ... 1 more
```
Spark SQL: insert and `create table tablename as select ...` fail with an error I have not been able to track down
I am running spark-sql from Spark's bin directory. Querying the Hive databases works fine: show databases; show tables; select * from tablename; drop table tablename; all succeed. But any insert or create immediately fails. The log is below — can any expert shed some light?

```
> > > create table bak as select * from dm_bi_org_cfg; 18/10/20 15:31:53 ERROR Utils: Aborting task org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.IllegalArgumentException: No enum constant org.apache.hadoop.io.SequenceFile.CompressionType.block at org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getHiveRecordWriter(HiveFileFormatUtils.java:249) at org.apache.spark.sql.hive.execution.HiveOutputWriter.<init>(HiveFileFormat.scala:123) at org.apache.spark.sql.hive.execution.HiveFileFormat$$anon$1.newInstance(HiveFileFormat.scala:103) at org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.newOutputWriter(FileFormatWriter.scala:305) at org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.execute(FileFormatWriter.scala:314) at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:258) at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:256) at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1375) at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:261) at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$apply$mcV$sp$1.apply(FileFormatWriter.scala:191) at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$apply$mcV$sp$1.apply(FileFormatWriter.scala:190) at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87) at org.apache.spark.scheduler.Task.run(Task.scala:108) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:335) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:748) Caused by: java.lang.IllegalArgumentException: No enum constant org.apache.hadoop.io.SequenceFile.CompressionType.block at java.lang.Enum.valueOf(Enum.java:238) at org.apache.hadoop.io.SequenceFile$CompressionType.valueOf(SequenceFile.java:220) at org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getHiveRecordWriter(HiveFileFormatUtils.java:242) ... 16 more 18/10/20 15:31:53 WARN FileOutputCommitter: Could not delete hdfs://master.datavip.ezhiyang.com:8020/user/hive/warehouse/bi_dm.db/bak/.hive-staging_hive_2018-10-20_15-31-53_518_4175343188146321007-1/-ext-10000/_temporary/0/_temporary/attempt_20181020153153_0003_m_000000_0 18/10/20 15:31:53 ERROR FileFormatWriter: Job job_20181020153153_0003 aborted. 
18/10/20 15:31:53 ERROR Executor: Exception in task 0.0 in stage 3.0 (TID 3) org.apache.spark.SparkException: Task failed while writing rows at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:272) at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$apply$mcV$sp$1.apply(FileFormatWriter.scala:191) at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$apply$mcV$sp$1.apply(FileFormatWriter.scala:190) at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87) at org.apache.spark.scheduler.Task.run(Task.scala:108) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:335) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:748) Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.IllegalArgumentException: No enum constant org.apache.hadoop.io.SequenceFile.CompressionType.block at org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getHiveRecordWriter(HiveFileFormatUtils.java:249) at org.apache.spark.sql.hive.execution.HiveOutputWriter.<init>(HiveFileFormat.scala:123) at org.apache.spark.sql.hive.execution.HiveFileFormat$$anon$1.newInstance(HiveFileFormat.scala:103) at org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.newOutputWriter(FileFormatWriter.scala:305) at org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.execute(FileFormatWriter.scala:314) at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:258) at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:256) at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1375) at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:261) ... 8 more Caused by: java.lang.IllegalArgumentException: No enum constant org.apache.hadoop.io.SequenceFile.CompressionType.block at java.lang.Enum.valueOf(Enum.java:238) at org.apache.hadoop.io.SequenceFile$CompressionType.valueOf(SequenceFile.java:220) at org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getHiveRecordWriter(HiveFileFormatUtils.java:242) ... 
16 more 18/10/20 15:31:53 WARN TaskSetManager: Lost task 0.0 in stage 3.0 (TID 3, localhost, executor driver): org.apache.spark.SparkException: Task failed while writing rows at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:272) at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$apply$mcV$sp$1.apply(FileFormatWriter.scala:191) at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$apply$mcV$sp$1.apply(FileFormatWriter.scala:190) at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87) at org.apache.spark.scheduler.Task.run(Task.scala:108) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:335) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:748) Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.IllegalArgumentException: No enum constant org.apache.hadoop.io.SequenceFile.CompressionType.block at org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getHiveRecordWriter(HiveFileFormatUtils.java:249) at org.apache.spark.sql.hive.execution.HiveOutputWriter.<init>(HiveFileFormat.scala:123) at org.apache.spark.sql.hive.execution.HiveFileFormat$$anon$1.newInstance(HiveFileFormat.scala:103) at org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.newOutputWriter(FileFormatWriter.scala:305) at org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.execute(FileFormatWriter.scala:314) at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:258) at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:256) at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1375) at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:261) ... 8 more Caused by: java.lang.IllegalArgumentException: No enum constant org.apache.hadoop.io.SequenceFile.CompressionType.block at java.lang.Enum.valueOf(Enum.java:238) at org.apache.hadoop.io.SequenceFile$CompressionType.valueOf(SequenceFile.java:220) at org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getHiveRecordWriter(HiveFileFormatUtils.java:242) ... 16 more 18/10/20 15:31:53 ERROR TaskSetManager: Task 0 in stage 3.0 failed 1 times; aborting job 18/10/20 15:31:53 ERROR FileFormatWriter: Aborting job null. 
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 3.0 failed 1 times, most recent failure: Lost task 0.0 in stage 3.0 (TID 3, localhost, executor driver): org.apache.spark.SparkException: Task failed while writing rows at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:272) at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$apply$mcV$sp$1.apply(FileFormatWriter.scala:191) at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$apply$mcV$sp$1.apply(FileFormatWriter.scala:190) at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87) at org.apache.spark.scheduler.Task.run(Task.scala:108) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:335) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:748) Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.IllegalArgumentException: No enum constant org.apache.hadoop.io.SequenceFile.CompressionType.block at org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getHiveRecordWriter(HiveFileFormatUtils.java:249) at org.apache.spark.sql.hive.execution.HiveOutputWriter.<init>(HiveFileFormat.scala:123) at org.apache.spark.sql.hive.execution.HiveFileFormat$$anon$1.newInstance(HiveFileFormat.scala:103) at org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.newOutputWriter(FileFormatWriter.scala:305) at org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.execute(FileFormatWriter.scala:314) at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:258) at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:256) at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1375) at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:261) ... 8 more Caused by: java.lang.IllegalArgumentException: No enum constant org.apache.hadoop.io.SequenceFile.CompressionType.block at java.lang.Enum.valueOf(Enum.java:238) at org.apache.hadoop.io.SequenceFile$CompressionType.valueOf(SequenceFile.java:220) at org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getHiveRecordWriter(HiveFileFormatUtils.java:242) ... 
16 more
Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1499)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1487)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1486)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1486)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:814)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:814)
    at scala.Option.foreach(Option.scala:257)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:814)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1714)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1669)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1658)
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:630)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2022)
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply$mcV$sp(FileFormatWriter.scala:188)
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:173)
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:173)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:65)
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:173)
    at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.run(InsertIntoHiveTable.scala:317)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:56)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:74)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:117)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:117)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:138)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:135)
    at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:116)
    at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:92)
    at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:92)
    at org.apache.spark.sql.hive.execution.CreateHiveTableAsSelectCommand.run(CreateHiveTableAsSelectCommand.scala:81)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:56)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:67)
    at org.apache.spark.sql.Dataset.<init>(Dataset.scala:182)
    at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:67)
    at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:623)
    at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:691)
    at org.apache.spark.sql.hive.thriftserver.SparkSQLDriver.run(SparkSQLDriver.scala:62)
    at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.processCmd(SparkSQLCLIDriver.scala:340)
    at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:376)
    at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver$.main(SparkSQLCLIDriver.scala:248)
    at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.main(SparkSQLCLIDriver.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:755)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:119)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: org.apache.spark.SparkException: Task failed while writing rows
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:272)
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$apply$mcV$sp$1.apply(FileFormatWriter.scala:191)
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$apply$mcV$sp$1.apply(FileFormatWriter.scala:190)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
    at org.apache.spark.scheduler.Task.run(Task.scala:108)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:335)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.IllegalArgumentException: No enum constant org.apache.hadoop.io.SequenceFile.CompressionType.block
    at org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getHiveRecordWriter(HiveFileFormatUtils.java:249)
    at org.apache.spark.sql.hive.execution.HiveOutputWriter.<init>(HiveFileFormat.scala:123)
    at org.apache.spark.sql.hive.execution.HiveFileFormat$$anon$1.newInstance(HiveFileFormat.scala:103)
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.newOutputWriter(FileFormatWriter.scala:305)
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.execute(FileFormatWriter.scala:314)
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:258)
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:256)
    at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1375)
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:261)
    ... 8 more
Caused by: java.lang.IllegalArgumentException: No enum constant org.apache.hadoop.io.SequenceFile.CompressionType.block
    at java.lang.Enum.valueOf(Enum.java:238)
    at org.apache.hadoop.io.SequenceFile$CompressionType.valueOf(SequenceFile.java:220)
    at org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getHiveRecordWriter(HiveFileFormatUtils.java:242)
    ... 16 more
18/10/20 15:31:53 ERROR SparkSQLDriver: Failed in [create table bak as select * from dm_bi_org_cfg]
org.apache.spark.SparkException: Job aborted.
Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 3.0 failed 1 times, most recent failure: Lost task 0.0 in stage 3.0 (TID 3, localhost, executor driver): org.apache.spark.SparkException: Task failed while writing rows
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.IllegalArgumentException: No enum constant org.apache.hadoop.io.SequenceFile.CompressionType.block
Caused by: java.lang.IllegalArgumentException: No enum constant org.apache.hadoop.io.SequenceFile.CompressionType.block
spark-sql>
```