Urgent! An error message in Eclipse Debug's Variables window

The window I used to view variables now shows the following message: Unable to create view: Argument not valid. What is going on?

```
java.lang.IllegalArgumentException: Argument not valid
at org.eclipse.swt.SWT.error(SWT.java:3358)
at org.eclipse.swt.SWT.error(SWT.java:3297)
at org.eclipse.swt.SWT.error(SWT.java:3268)
at org.eclipse.swt.custom.SashForm.setWeights(SashForm.java:372)
at org.eclipse.debug.internal.ui.views.variables.VariablesView.showDetailPane(VariablesView.java:763)
at org.eclipse.debug.internal.ui.views.variables.VariablesView.setDetailPaneOrientation(VariablesView.java:747)
at org.eclipse.debug.internal.ui.views.variables.VariablesView.createViewer(VariablesView.java:561)
at org.eclipse.debug.ui.AbstractDebugView$ViewerPage.createControl(AbstractDebugView.java:270)
at org.eclipse.debug.ui.AbstractDebugView.createDefaultPage(AbstractDebugView.java:358)
at org.eclipse.ui.part.PageBookView.createPartControl(PageBookView.java:473)
at org.eclipse.debug.ui.AbstractDebugView.createPartControl(AbstractDebugView.java:319)
at org.eclipse.ui.internal.ViewReference.createPartHelper(ViewReference.java:332)
at org.eclipse.ui.internal.ViewReference.createPart(ViewReference.java:197)
at org.eclipse.ui.internal.WorkbenchPartReference.getPart(WorkbenchPartReference.java:566)
at org.eclipse.ui.internal.Perspective.showView(Perspective.java:1675)
at org.eclipse.ui.internal.WorkbenchPage.busyShowView(WorkbenchPage.java:987)
at org.eclipse.ui.internal.WorkbenchPage.access$13(WorkbenchPage.java:968)
at org.eclipse.ui.internal.WorkbenchPage$13.run(WorkbenchPage.java:3514)
at org.eclipse.swt.custom.BusyIndicator.showWhile(BusyIndicator.java:67)
at org.eclipse.ui.internal.WorkbenchPage.showView(WorkbenchPage.java:3511)
at org.eclipse.ui.internal.WorkbenchPage.showView(WorkbenchPage.java:3487)
at org.eclipse.ui.internal.ShowViewAction.run(ShowViewAction.java:76)
at org.eclipse.jface.action.Action.runWithEvent(Action.java:499)
at org.eclipse.jface.action.ActionContributionItem.handleWidgetSelection(ActionContributionItem.java:539)
at org.eclipse.jface.action.ActionContributionItem.access$2(ActionContributionItem.java:488)
at org.eclipse.jface.action.ActionContributionItem$5.handleEvent(ActionContributionItem.java:400)
at org.eclipse.swt.widgets.EventTable.sendEvent(EventTable.java:66)
at org.eclipse.swt.widgets.Widget.sendEvent(Widget.java:928)
at org.eclipse.swt.widgets.Display.runDeferredEvents(Display.java:3348)
at org.eclipse.swt.widgets.Display.readAndDispatch(Display.java:2968)
at org.eclipse.ui.internal.Workbench.runEventLoop(Workbench.java:1930)
at org.eclipse.ui.internal.Workbench.runUI(Workbench.java:1894)
at org.eclipse.ui.internal.Workbench.createAndRunWorkbench(Workbench.java:422)
at org.eclipse.ui.PlatformUI.createAndRunWorkbench(PlatformUI.java:149)
at org.eclipse.ui.internal.ide.IDEApplication.run(IDEApplication.java:95)
at org.eclipse.core.internal.runtime.PlatformActivator$1.run(PlatformActivator.java:78)
at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.runApplication(EclipseAppLauncher.java:92)
at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.start(EclipseAppLauncher.java:68)
at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:400)
at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:177)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:324)
at org.eclipse.core.launcher.Main.invokeFramework(Main.java:336)
at org.eclipse.core.launcher.Main.basicRun(Main.java:280)
at org.eclipse.core.launcher.Main.run(Main.java:977)
at org.eclipse.core.launcher.Main.main(Main.java:952)
```

2 answers

I don't know your code, so I can't point to the exact spot. Mainly, I can't tell which of these are framework functions and which ones you wrote~

It means an argument passed into SWT is invalid. Check your SWT arguments carefully; any unsuitable value will raise this kind of error.

That said, look at where the trace actually fails: SashForm.setWeights() inside VariablesView, while Eclipse is building the Variables view itself. That is framework code, not yours, and it usually means the view's saved layout (the sash weights kept in the workspace metadata) has become corrupted. Try Window > Reset Perspective, or start Eclipse with the -clean option; as a last resort, close Eclipse and delete the workspace's .metadata/.plugins/org.eclipse.ui.workbench folder (this resets only the UI layout, your projects are untouched).

Other related questions
A rather silly Eclipse question, about debugging
Here's the situation: I'm just starting with Java and not familiar with the IDE. While debugging I carelessly closed the Variables and Breakpoints views, and now I don't know how to bring them back. Screenshot attached. ![screenshot](https://img-ask.csdn.net/upload/201601/22/1453458993_751995.png)
Symbolic Logic Mechanization (an algorithmic logic problem)
Problem Description
Marvin, the robot with a brain the size of a planet, followed some . . . markedly less successful robots as the product line developed. One such was Monroe, the robot — except, to help him recognize his name, he was referred to as Moe. He is sufficiently mentally challenged that he needs external assistance to handle symbolic logic.

Polish notation is the prefix symbolic logic notation developed by Jan Lukasiewicz (1929). [Hence postfix expressions are referred to as being in Reverse Polish Notation — RPN.] The notation developed by Lukasiewicz (referred to as PN below) uses upper-case letters for the logic operators and lower-case letters for logic variables (which can only be true or false). Since prefix notation is self-grouping, there is no need for precedence, associativity, or parentheses, unlike infix notation. In the following table the PN operator is shown, followed by its operation. Operators not having exactly equivalent C/C++/Java operators are shown in the truth table (using 1 for true and 0 for false). [The operator J is not found in Lukasiewicz’ original work but is included from A.N.Prior’s treatment.]

For every combination of PN operators and variables, an expression is a "well-formed formula" (WFF) if and only if it is a variable or it is a PN operator followed by the requisite number of operands (WFF instances). A combination of symbols will fail to be a "well-formed formula" if it is composed of a WFF followed by extraneous text, it uses an unrecognized character [uppercase character not in the above table or a non-alphabetic character], or it has insufficient operands for its operators. For invalid expressions, report the first error discovered in a left-to-right scan of the expression. For instance, immediately report an error on an invalid character. If a valid WFF is followed by extraneous text, report that as the error, even if the extraneous text has an invalid character.

In addition, every WFF can be categorized as a tautology (true for all possible variable values), a contradiction (false for all possible variable values), or a contingent expression (true for some variable values, false for other variable values). The simplest contingent expression is simply 'p', true when p is true, false when p is false. One very simple contradiction is "KpNp", both p and not-p are true. Similarly, one very simple tautology is "ApNp", either p is true or not-p is true. For a more complex tautology, one expression of De Morgan’s Law is "EDpqANpNq".

Input
Your program is to accept lines until it receives an empty character string. Each line will contain only alphanumeric characters (no spaces or punctuation) that are to be parsed as potential "WFFs". Each line will contain fewer than 256 characters and will use at most 10 variables. There will be at most 32 non-blank lines before the terminating blank line.

Output
For each line read in, echo it back, followed by its correctness as a WFF, followed (if a WFF) with its category (tautology, contradiction, or contingent). In processing an input line, immediately terminate and report the line as not a WFF if you encounter an unrecognized operator (even though it may fail to be well-formed in another way as well). If it has extraneous text following the WFF, report it as incorrect. If it has insufficient operands, report that. Use exactly the format shown in the Sample Output below.
Sample Input
q
Cp
Cpq
A01
Cpqr
ANpp
KNpp
Qad
CKNppq
JDpqANpNq
CDpwANpNq
EDpqANpNq
KCDpqANpNqCANpNqDpq

Sample Output
q is valid: contingent
Cp is invalid: insufficient operands
Cpq is valid: contingent
A01 is invalid: invalid character
Cpqr is invalid: extraneous text
ANpp is valid: tautology
KNpp is valid: contradiction
Qad is invalid: invalid character
CKNppq is valid: tautology
JDpqANpNq is valid: contradiction
CDpwANpNq is valid: contingent
EDpqANpNq is valid: tautology
KCDpqANpNqCANpNqDpq is valid: tautology
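This reduces to a recursive-descent parse plus a brute-force sweep of truth assignments (at most 10 variables, so at most 2^10 rows). The problem's operator table did not survive on this page, so the sketch below assumes the standard Lukasiewicz connectives (A = or, C = implies, D = nand, E = iff, J = xor, K = and, N = not), which agree with the samples, e.g. "EDpqANpNq" (De Morgan) comes out a tautology:

```
from itertools import product

BINARY = {
    'A': lambda p, q: p or q,          # disjunction
    'C': lambda p, q: (not p) or q,    # implication
    'D': lambda p, q: not (p and q),   # nand (Sheffer stroke)
    'E': lambda p, q: p == q,          # equivalence
    'J': lambda p, q: p != q,          # exclusive or
    'K': lambda p, q: p and q,         # conjunction
}

def parse(s, i):
    """Return the index just past the WFF starting at s[i]; raise on error."""
    if i >= len(s):
        raise ValueError('insufficient operands')
    c = s[i]
    if c.islower():
        return i + 1
    if c == 'N':
        return parse(s, i + 1)
    if c in BINARY:
        return parse(s, parse(s, i + 1))
    raise ValueError('invalid character')

def evaluate(s, i, env):
    """Evaluate the already-validated WFF at s[i]; return (value, next_i)."""
    c = s[i]
    if c.islower():
        return env[c], i + 1
    if c == 'N':
        v, j = evaluate(s, i + 1, env)
        return (not v), j
    v1, j = evaluate(s, i + 1, env)
    v2, k = evaluate(s, j, env)
    return BINARY[c](v1, v2), k

def classify(s):
    try:
        end = parse(s, 0)
    except ValueError as e:
        return '{} is invalid: {}'.format(s, e)
    if end != len(s):                       # a WFF followed by leftover text
        return '{} is invalid: extraneous text'.format(s)
    vars_ = sorted(set(ch for ch in s if ch.islower()))
    results = set()
    for values in product([True, False], repeat=len(vars_)):
        v, _ = evaluate(s, 0, dict(zip(vars_, values)))
        results.add(v)
    kind = ('tautology' if results == {True}
            else 'contradiction' if results == {False} else 'contingent')
    return '{} is valid: {}'.format(s, kind)

if __name__ == '__main__':
    for line in ['q', 'Cp', 'Cpq', 'A01', 'ANpp', 'KNpp', 'EDpqANpNq']:
        print(classify(line))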
Liveness Analysis (a program-analysis problem)
Problem Description
A program syntax is defined as below:

PROGRAM ::= STATEMENTS end '\n'
STATEMENTS ::= STATEMENT STATEMENTS | epsilon
STATEMENT ::= a [variable] '\n' | u [variable] '\n' | IF-CLAUSE
IF-CLAUSE ::= if '\n' STATEMENTS end '\n' | if '\n' STATEMENTS else '\n' STATEMENTS end '\n'

In which '\n' means the new line character, epsilon means an empty string, and [variable] means a variable, which is represented by a number between 1 and n (inclusive). Same numbers denote the same variable.

The program runs as follows: If the statement is "a [variable]", it assigns the variable a new value; if the statement is "u [variable]", it uses the value of the variable to do something interesting; if there is an IF-CLAUSE, it is possible that the condition of this clause is satisfied or not satisfied. We only need to do liveness analysis on this program. This is why we do not explicitly write the assigned values, what we do with the variables, or the conditions of IF-CLAUSEs. In this task, there are three kinds of liveness for each of the variables.
1. Live. In one or more paths of this program, the value of the variable at the beginning of the program is used.
2. Dead. In all paths of this program, the variable is assigned a new value before any use of this variable.
3. Half-dead. In all paths of this program, the value of the variable at the beginning of the program is not used, but in one or more paths, the variable is never assigned a new value.

Input
First line, number of test cases, T. Following are T test cases. For each test case, the first line is n, the number of variables in the program. The following lines are the program. Number of lines in the input <= 500000

Output
T lines. Each line contains n numbers. If the variable is live, output 1; if the variable is dead, output -1; if the variable is half-dead, output 0.

Sample Input
10
1
end
1
a 1
end
1
u 1
end
1
a 1
u 1
end
1
u 1
a 1
end
1
if
a 1
else
a 1
end
end
1
if
a 1
else
u 1
end
end
1
if
a 1
end
end
1
if
u 1
end
end
2
a 1
u 2
end

Sample Output
0
-1
1
-1
1
-1
1
0
1
-1 1
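The classification reduces to one bottom-up pass. For every program fragment and every variable, record which "first touch" outcomes are possible along some path: used before assigned ('U'), assigned first ('A'), or untouched ('T'). Sequencing lets only the untouched paths fall through to the next statement, and an if/else is the union of its branches (plus 'T' when there is no else, since the condition may fail). The verdict is then live if 'U' is possible, dead if 'T' is impossible, and half-dead otherwise. A sketch in Python (recursive for clarity; at the stated 500,000-line limit you would want an explicit stack and a raised recursion limit):

```
import sys

def parse_block(lines, i, n):
    """Parse STATEMENTS up to an 'end'/'else' line; return (summary, stop, next_i).
    summary[v] is the set of first-access outcomes possible for variable v."""
    summ = [{'T'} for _ in range(n + 1)]            # identity: all untouched

    def seq(v, outcomes):                           # sequential composition
        new = set()
        if 'U' in summ[v]: new.add('U')             # already decided on that path
        if 'A' in summ[v]: new.add('A')
        if 'T' in summ[v]: new |= outcomes          # untouched paths fall through
        summ[v] = new

    while True:
        tok = lines[i].split(); i += 1
        if tok[0] in ('end', 'else'):
            return summ, tok[0], i
        if tok[0] == 'a':
            seq(int(tok[1]), {'A'})
        elif tok[0] == 'u':
            seq(int(tok[1]), {'U'})
        else:                                       # 'if'
            body, stop, i = parse_block(lines, i, n)
            if stop == 'else':
                other, _, i = parse_block(lines, i, n)
                merged = [body[v] | other[v] for v in range(n + 1)]
            else:                                   # condition may fail: body skipped
                merged = [body[v] | {'T'} for v in range(n + 1)]
            for v in range(1, n + 1):
                seq(v, merged[v])

def classify(lines, i, n):
    summ, _, i = parse_block(lines, i, n)
    def verdict(s):
        return '1' if 'U' in s else ('0' if 'T' in s else '-1')
    return ' '.join(verdict(summ[v]) for v in range(1, n + 1)), i

if __name__ == '__main__':
    data = sys.stdin.read().splitlines()
    t, pos = int(data[0]), 1
    for _ in range(t):
        n, pos = int(data[pos]), pos + 1
        ans, pos = classify(data, pos, n)
        print(ans)
```

Checking against the samples: "if / u 1 / end / end" gives variable 1 the set {'U', 'T'}, hence live (1), while "if / a 1 / end / end" gives {'A', 'T'}, hence half-dead (0), matching the expected output.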
Problem when installing Eclipse, please help. It's urgent. Thanks...

```
org.osgi.framework.BundleException: Could not resolve module: org.eclipse.ant.ui [29] Unresolved requirement: Require-Bundle: org.eclipse.jface.text; bundle-version="[3.5.0,4.0.0)"; resolution:="optional" -> Bundle-SymbolicName: org.eclipse.jface.text; bundle-version="3.10.0.v20150603-1752" org.eclipse.jface.text [173] Unresolved requirement: Require-Bundle: org.eclipse.core.runtime; bundle-version="[3.5.0,4.0.0)" -> Bundle-SymbolicName: org.eclipse.core.runtime; bundle-version="3.11.0.v20150405-1723"; singleton:="true" org.eclipse.core.runtime [50] Unresolved requirement: Require-Bundle: org.eclipse.core.jobs; bundle-version="[3.2.0,4.0.0)"; visibility:="reexport" -> Bundle-SymbolicName: org.eclipse.core.jobs; bundle-version="3.7.0.v20150330-2103"; singleton:="true" org.eclipse.core.jobs [45] Unresolved requirement: Require-Capability: osgi.ee; filter:="(&(osgi.ee=JavaSE)(version=1.7))" Unresolved requirement: Require-Bundle: org.eclipse.ui.views; bundle-version="[3.2.0,4.0.0)"; resolution:="optional" -> Bundle-SymbolicName: org.eclipse.ui.views; bundle-version="3.8.0.v20150422-0725"; singleton:="true" org.eclipse.ui.views [227] Unresolved requirement: Require-Bundle: org.eclipse.core.runtime; bundle-version="[3.11.0,4.0.0)" -> Bundle-SymbolicName: org.eclipse.core.runtime; bundle-version="3.11.0.v20150405-1723"; singleton:="true" Unresolved requirement: Require-Bundle: org.eclipse.ant.core; bundle-version="[3.2.0,4.0.0)" -> Bundle-SymbolicName: org.eclipse.ant.core; bundle-version="3.4.0.v20150428-1928"; singleton:="true" org.eclipse.ant.core [27] Unresolved requirement: Require-Bundle: org.eclipse.core.variables; bundle-version="[3.1.0,4.0.0)" -> Bundle-SymbolicName: org.eclipse.core.variables; bundle-version="3.2.800.v20130819-1716"; singleton:="true" org.eclipse.core.variables [53] Unresolved requirement: Require-Bundle: org.eclipse.core.runtime; bundle-version="[3.3.0,4.0.0)" -> Bundle-SymbolicName: org.eclipse.core.runtime; bundle-version="3.11.0.v20150405-1723"; singleton:="true" Unresolved requirement: Require-Bundle: org.eclipse.ui.workbench.texteditor; bundle-version="[3.5.0,4.0.0)"; resolution:="optional" -> Bundle-SymbolicName: org.eclipse.ui.workbench.texteditor; bundle-version="3.9.100.v20141023-1946"; singleton:="true" org.eclipse.ui.workbench.texteditor [232] Unresolved requirement: Require-Bundle: org.eclipse.core.runtime; bundle-version="[3.5.0,4.0.0)" -> Bundle-SymbolicName: org.eclipse.core.runtime; bundle-version="3.11.0.v20150405-1723"; singleton:="true" Unresolved requirement: Require-Bundle: org.eclipse.ui.editors; bundle-version="[3.2.0,4.0.0)"; resolution:="optional" -> Bundle-SymbolicName: org.eclipse.ui.editors; bundle-version="3.9.0.v20150213-1939"; singleton:="true" org.eclipse.ui.editors [214] Unresolved requirement: Require-Bundle: org.eclipse.core.runtime; bundle-version="[3.7.0,4.0.0)" -> Bundle-SymbolicName: org.eclipse.core.runtime; bundle-version="3.11.0.v20150405-1723"; singleton:="true" Unresolved requirement: Require-Bundle: org.eclipse.ui.ide; bundle-version="[3.2.0,4.0.0)"; resolution:="optional" -> Bundle-SymbolicName: org.eclipse.ui.ide; bundle-version="3.11.0.v20150510-1749"; singleton:="true" org.eclipse.ui.ide [217] Unresolved requirement: Require-Bundle: org.eclipse.core.resources; bundle-version="[3.7.0,4.0.0)"; resolution:="optional" -> Bundle-SymbolicName: org.eclipse.core.resources; bundle-version="3.10.0.v20150423-0755"; singleton:="true" org.eclipse.core.resources [48] Unresolved requirement: Require-Bundle: org.eclipse.ant.core; 
bundle-version="[3.1.0,4.0.0)"; resolution:="optional" -> Bundle-SymbolicName: org.eclipse.ant.core; bundle-version="3.4.0.v20150428-1928"; singleton:="true" Unresolved requirement: Require-Bundle: org.eclipse.core.expressions; bundle-version="[3.2.0,4.0.0)" -> Bundle-SymbolicName: org.eclipse.core.expressions; bundle-version="3.5.0.v20150421-2214"; singleton:="true" org.eclipse.core.expressions [39] Unresolved requirement: Require-Bundle: org.eclipse.core.runtime; bundle-version="[3.3.0,4.0.0)" -> Bundle-SymbolicName: org.eclipse.core.runtime; bundle-version="3.11.0.v20150405-1723"; singleton:="true" Unresolved requirement: Require-Bundle: org.eclipse.core.runtime; bundle-version="[3.2.0,4.0.0)" -> Bundle-SymbolicName: org.eclipse.core.runtime; bundle-version="3.11.0.v20150405-1723"; singleton:="true" at org.eclipse.osgi.container.Module.start(Module.java:434) at org.eclipse.osgi.container.ModuleContainer$ContainerStartLevel.incStartLevel(ModuleContainer.java:1582) at org.eclipse.osgi.container.ModuleContainer$ContainerStartLevel.incStartLevel(ModuleContainer.java:1561) at org.eclipse.osgi.container.ModuleContainer$ContainerStartLevel.doContainerStartLevel(ModuleContainer.java:1533) at org.eclipse.osgi.container.ModuleContainer$ContainerStartLevel.dispatchEvent(ModuleContainer.java:1476) at org.eclipse.osgi.container.ModuleContainer$ContainerStartLevel.dispatchEvent(ModuleContainer.java:1) at org.eclipse.osgi.framework.eventmgr.EventManager.dispatchEvent(EventManager.java:230) at org.eclipse.osgi.framework.eventmgr.EventManager$EventThread.run(EventManager.java:340) !ENTRY org.eclipse.compare 4 0 2015-07-27 00:21:10.073 !MESSAGE FrameworkEvent ERROR !STACK 0 org.osgi.framework.BundleException: Could not resolve module: org.eclipse.compare [30] Unresolved requirement: Require-Bundle: org.eclipse.ui; bundle-version="[3.5.0,4.0.0)" -> Bundle-SymbolicName: org.eclipse.ui; bundle-version="3.107.0.v20150507-1945"; singleton:="true" org.eclipse.ui [210] Unresolved requirement: Require-Bundle: org.eclipse.core.runtime; bundle-version="[3.2.0,4.0.0)" -> Bundle-SymbolicName: org.eclipse.core.runtime; bundle-version="3.11.0.v20150405-1723"; singleton:="true" org.eclipse.core.runtime [50] Unresolved requirement: Require-Bundle: org.eclipse.core.jobs; bundle-version="[3.2.0,4.0.0)"; visibility:="reexport" -> Bundle-SymbolicName: org.eclipse.core.jobs; bundle-version="3.7.0.v20150330-2103"; singleton:="true" org.eclipse.core.jobs [45] Unresolved requirement: Require-Capability: osgi.ee; filter:="(&(osgi.ee=JavaSE)(version=1.7))" at org.eclipse.osgi.container.Module.start(Module.java:434) at org.eclipse.osgi.container.ModuleContainer$ContainerStartLevel.incStartLevel(ModuleContainer.java:1582) at org.eclipse.osgi.container.ModuleContainer$ContainerStartLevel.incStartLevel(ModuleContainer.java:1561) at org.eclipse.osgi.container.ModuleContainer$ContainerStartLevel.doContainerStartLevel(ModuleContainer.java:1533) at org.eclipse.osgi.container.ModuleContainer$ContainerStartLevel.dispatchEvent(ModuleContainer.java:1476) at org.eclipse.osgi.container.ModuleContainer$ContainerStartLevel.dispatchEvent(ModuleContainer.java:1) at org.eclipse.osgi.framework.eventmgr.EventManager.dispatchEvent(EventManager.java:230) at org.eclipse.osgi.framework.eventmgr.EventManager$EventThread.run(EventManager.java:340) !ENTRY org.eclipse.compare.core 4 0 2015-07-27 00:21:10.075 !MESSAGE FrameworkEvent ERROR !STACK 0 org.osgi.framework.BundleException: Could not resolve module: org.eclipse.compare.core [31] Unresolved 
requirement: Require-Bundle: org.eclipse.core.runtime; bundle-version="[3.2.0,4.0.0)" -> Bundle-SymbolicName: org.eclipse.core.runtime; bundle-version="3.11.0.v20150405-1723"; singleton:="true" org.eclipse.core.runtime [50] Unresolved requirement: Require-Bundle: org.eclipse.core.jobs; bundle-version="[3.2.0,4.0.0)"; visibility:="reexport" -> Bundle-SymbolicName: org.eclipse.core.jobs; bundle-version="3.7.0.v20150330-2103"; singleton:="true" org.eclipse.core.jobs [45] Unresolved requirement: Require-Capability: osgi.ee; filter:="(&(osgi.ee=JavaSE)(version=1.7))" at org.eclipse.osgi.container.Module.start(Module.java:434) at org.eclipse.osgi.container.ModuleContainer$ContainerStartLevel.incStartLevel(ModuleContainer.java:1582) at org.eclipse.osgi.container.ModuleContainer$ContainerStartLevel.incStartLevel(ModuleContainer.java:1561) at org.eclipse.osgi.container.ModuleContainer$ContainerStartLevel.doContainerStartLevel(ModuleContainer.java:1533) at org.eclipse.osgi.container.ModuleContainer$ContainerStartLevel.dispatchEvent(ModuleContainer.java:1476) at org.eclipse.osgi.container.ModuleContainer$ContainerStartLevel.dispatchEvent(ModuleContainer.java:1) at org.eclipse.osgi.framework.eventmgr.EventManager.dispatchEvent(EventManager.java:230) at org.eclipse.osgi.framework.eventmgr.EventManager$EventThread.run(EventManager.java:340)
```

How can I fix this?
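One line in this log is not a bundle-version mismatch but the root cause: Unresolved requirement: Require-Capability: osgi.ee; filter:="(&(osgi.ee=JavaSE)(version=1.7))". These mid-2015 (Mars) bundles require a Java 7 runtime, and this cascade is what you get when Eclipse is launched on an older JRE: org.eclipse.core.jobs cannot resolve, so everything depending on org.eclipse.core.runtime falls over with it. Install a JDK 7 or newer and point eclipse.ini at it; the path below is only an example:

```
-vm
C:\Java\jdk1.7.0_80\bin\javaw.exe
```

Note that -vm and the path must be on two separate lines and must appear before -vmargs in eclipse.ini.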
TensorFlow error: "You must feed a value for placeholder tensor 'Placeholder_1' with dtype float and shape [?,32,32,3]", but the fed data looks fine no matter how I check it; please advise
调试googlenet的代码,总是报错 InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'Placeholder_1' with dtype float and shape [?,32,32,3],但是我怎么看喂的数据都没问题,请大神们指点 ``` # -*- coding: utf-8 -*- """ GoogleNet也被称为InceptionNet Created on Mon Feb 10 12:15:35 2020 @author: 月光下的云海 """ import tensorflow as tf from keras.datasets import cifar10 import numpy as np import tensorflow.contrib.slim as slim tf.reset_default_graph() tf.reset_default_graph() (x_train,y_train),(x_test,y_test) = cifar10.load_data() x_train = x_train.astype('float32') x_test = x_test.astype('float32') y_train = y_train.astype('int32') y_test = y_test.astype('int32') y_train = y_train.reshape(y_train.shape[0]) y_test = y_test.reshape(y_test.shape[0]) x_train = x_train/255 x_test = x_test/255 #************************************************ 构建inception ************************************************ #构建一个多分支的网络结构 #INPUTS: # d0_1:最左边的分支,分支0,大小为1*1的卷积核个数 # d1_1:左数第二个分支,分支1,大小为1*1的卷积核的个数 # d1_3:左数第二个分支,分支1,大小为3*3的卷积核的个数 # d2_1:左数第三个分支,分支2,大小为1*1的卷积核的个数 # d2_5:左数第三个分支,分支2,大小为5*5的卷积核的个数 # d3_1:左数第四个分支,分支3,大小为1*1的卷积核的个数 # scope:参数域名称 # reuse:是否重复使用 #*************************************************************************************************************** def inception(x,d0_1,d1_1,d1_3,d2_1,d2_5,d3_1,scope = 'inception',reuse = None): with tf.variable_scope(scope,reuse = reuse): #slim.conv2d,slim.max_pool2d的默认参数都放在了slim的参数域里面 with slim.arg_scope([slim.conv2d,slim.max_pool2d],stride = 1,padding = 'SAME'): #第一个分支 with tf.variable_scope('branch0'): branch_0 = slim.conv2d(x,d0_1,[1,1],scope = 'conv_1x1') #第二个分支 with tf.variable_scope('branch1'): branch_1 = slim.conv2d(x,d1_1,[1,1],scope = 'conv_1x1') branch_1 = slim.conv2d(branch_1,d1_3,[3,3],scope = 'conv_3x3') #第三个分支 with tf.variable_scope('branch2'): branch_2 = slim.conv2d(x,d2_1,[1,1],scope = 'conv_1x1') branch_2 = slim.conv2d(branch_2,d2_5,[5,5],scope = 'conv_5x5') #第四个分支 with tf.variable_scope('branch3'): branch_3 = slim.max_pool2d(x,[3,3],scope = 'max_pool') branch_3 = slim.conv2d(branch_3,d3_1,[1,1],scope = 'conv_1x1') #连接 net = tf.concat([branch_0,branch_1,branch_2,branch_3],axis = -1) return net #*************************************** 使用inception构建GoogleNet ********************************************* #使用inception构建GoogleNet #INPUTS: # inputs-----------输入 # num_classes------输出类别数目 # is_trainning-----batch_norm层是否使用训练模式,batch_norm和is_trainning密切相关 # 当is_trainning = True 时候,它使用一个batch数据的平均移动,方差值 # 当is_trainning = Flase时候,它就使用固定的值 # verbos-----------控制打印信息 # reuse------------是否重复使用 #*************************************************************************************************************** def googlenet(inputs,num_classes,reuse = None,is_trainning = None,verbose = False): with slim.arg_scope([slim.batch_norm],is_training = is_trainning): with slim.arg_scope([slim.conv2d,slim.max_pool2d,slim.avg_pool2d], padding = 'SAME',stride = 1): net = inputs #googlnet的第一个块 with tf.variable_scope('block1',reuse = reuse): net = slim.conv2d(net,64,[5,5],stride = 2,scope = 'conv_5x5') if verbose: print('block1 output:{}'.format(net.shape)) #googlenet的第二个块 with tf.variable_scope('block2',reuse = reuse): net = slim.conv2d(net,64,[1,1],scope = 'conv_1x1') net = slim.conv2d(net,192,[3,3],scope = 'conv_3x3') net = slim.max_pool2d(net,[3,3],stride = 2,scope = 'max_pool') if verbose: print('block2 output:{}'.format(net.shape)) #googlenet第三个块 with tf.variable_scope('block3',reuse = reuse): net = inception(net,64,96,128,16,32,32,scope = 'inception_1') net = 
inception(net,128,128,192,32,96,64,scope = 'inception_2') net = slim.max_pool2d(net,[3,3],stride = 2,scope = 'max_pool') if verbose: print('block3 output:{}'.format(net.shape)) #googlenet第四个块 with tf.variable_scope('block4',reuse = reuse): net = inception(net,192,96,208,16,48,64,scope = 'inception_1') net = inception(net,160,112,224,24,64,64,scope = 'inception_2') net = inception(net,128,128,256,24,64,64,scope = 'inception_3') net = inception(net,112,144,288,24,64,64,scope = 'inception_4') net = inception(net,256,160,320,32,128,128,scope = 'inception_5') net = slim.max_pool2d(net,[3,3],stride = 2,scope = 'max_pool') if verbose: print('block4 output:{}'.format(net.shape)) #googlenet第五个块 with tf.variable_scope('block5',reuse = reuse): net = inception(net,256,160,320,32,128,128,scope = 'inception1') net = inception(net,384,182,384,48,128,128,scope = 'inception2') net = slim.avg_pool2d(net,[2,2],stride = 2,scope = 'avg_pool') if verbose: print('block5 output:{}'.format(net.shape)) #最后一块 with tf.variable_scope('classification',reuse = reuse): net = slim.flatten(net) net = slim.fully_connected(net,num_classes,activation_fn = None,normalizer_fn = None,scope = 'logit') if verbose: print('classification output:{}'.format(net.shape)) return net #给卷积层设置默认的激活函数和batch_norm with slim.arg_scope([slim.conv2d],activation_fn = tf.nn.relu,normalizer_fn = slim.batch_norm) as sc: conv_scope = sc is_trainning_ph = tf.placeholder(tf.bool,name = 'is_trainning') #定义占位符 x_train_ph = tf.placeholder(shape = (None,x_train.shape[1],x_train.shape[2],x_train.shape[3]),dtype = tf.float32) x_test_ph = tf.placeholder(shape = (None,x_test.shape[1],x_test.shape[2],x_test.shape[3]),dtype = tf.float32) y_train_ph = tf.placeholder(shape = (None,),dtype = tf.int32) y_test_ph = tf.placeholder(shape = (None,),dtype = tf.int32) #实例化网络 with slim.arg_scope(conv_scope): train_out = googlenet(x_train_ph,10,is_trainning = is_trainning_ph,verbose = True) val_out = googlenet(x_test_ph,10,is_trainning = is_trainning_ph,reuse = True) #定义loss和acc with tf.variable_scope('loss'): train_loss = tf.losses.sparse_softmax_cross_entropy(labels = y_train_ph,logits = train_out,scope = 'train') val_loss = tf.losses.sparse_softmax_cross_entropy(labels = y_test_ph,logits = val_out,scope = 'val') with tf.name_scope('accurcay'): train_acc = tf.reduce_mean(tf.cast(tf.equal(tf.argmax(train_out,axis = -1,output_type = tf.int32),y_train_ph),tf.float32)) val_acc = tf.reduce_mean(tf.cast(tf.equal(tf.argmax(val_out,axis = -1,output_type = tf.int32),y_test_ph),tf.float32)) #定义训练op lr = 1e-2 opt = tf.train.MomentumOptimizer(lr,momentum = 0.9) #通过tf.get_collection获得所有需要更新的op update_op = tf.get_collection(tf.GraphKeys.UPDATE_OPS) #使用tesorflow控制流,先执行update_op再进行loss最小化 with tf.control_dependencies(update_op): train_op = opt.minimize(train_loss) #开启会话 sess = tf.Session() saver = tf.train.Saver() sess.run(tf.global_variables_initializer()) batch_size = 64 #开始训练 for e in range(10000): batch1 = np.random.randint(0,50000,size = batch_size) t_x_train = x_train[batch1][:][:][:] t_y_train = y_train[batch1] batch2 = np.random.randint(0,10000,size = batch_size) t_x_test = x_test[batch2][:][:][:] t_y_test = y_test[batch2] sess.run(train_op,feed_dict = {x_train_ph:t_x_train, is_trainning_ph:True, y_train_ph:t_y_train}) # if(e%1000 == 999): # loss_train,acc_train = sess.run([train_loss,train_acc], # feed_dict = {x_train_ph:t_x_train, # is_trainning_ph:True, # y_train_ph:t_y_train}) # loss_test,acc_test = sess.run([val_loss,val_acc], # feed_dict = {x_test_ph:t_x_test, # 
is_trainning_ph:False, # y_test_ph:t_y_test}) # print('STEP{}:train_loss:{:.6f} train_acc:{:.6f} test_loss:{:.6f} test_acc:{:.6f}' # .format(e+1,loss_train,acc_train,loss_test,acc_test)) saver.save(sess = sess,save_path = 'VGGModel\model.ckpt') print('Train Done!!') print('--'*60) sess.close() ``` 报错信息是 ``` Using TensorFlow backend. block1 output:(?, 16, 16, 64) block2 output:(?, 8, 8, 192) block3 output:(?, 4, 4, 480) block4 output:(?, 2, 2, 832) block5 output:(?, 1, 1, 1024) classification output:(?, 10) Traceback (most recent call last): File "<ipython-input-1-6385a760fe16>", line 1, in <module> runfile('F:/Project/TEMP/LearnTF/GoogleNet/GoogleNet.py', wdir='F:/Project/TEMP/LearnTF/GoogleNet') File "D:\ANACONDA\Anaconda3\envs\spyder\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 827, in runfile execfile(filename, namespace) File "D:\ANACONDA\Anaconda3\envs\spyder\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 110, in execfile exec(compile(f.read(), filename, 'exec'), namespace) File "F:/Project/TEMP/LearnTF/GoogleNet/GoogleNet.py", line 177, in <module> y_train_ph:t_y_train}) File "D:\ANACONDA\Anaconda3\envs\spyder\lib\site-packages\tensorflow\python\client\session.py", line 900, in run run_metadata_ptr) File "D:\ANACONDA\Anaconda3\envs\spyder\lib\site-packages\tensorflow\python\client\session.py", line 1135, in _run feed_dict_tensor, options, run_metadata) File "D:\ANACONDA\Anaconda3\envs\spyder\lib\site-packages\tensorflow\python\client\session.py", line 1316, in _do_run run_metadata) File "D:\ANACONDA\Anaconda3\envs\spyder\lib\site-packages\tensorflow\python\client\session.py", line 1335, in _do_call raise type(e)(node_def, op, message) InvalidArgumentError: You must feed a value for placeholder tensor 'Placeholder_1' with dtype float and shape [?,32,32,3] [[Node: Placeholder_1 = Placeholder[dtype=DT_FLOAT, shape=[?,32,32,3], _device="/job:localhost/replica:0/task:0/device:GPU:0"]()]] [[Node: gradients/block4/inception_4/concat_grad/ShapeN/_45 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_23694_gradients/block4/inception_4/concat_grad/ShapeN", tensor_type=DT_INT32, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]] ``` 看了好多遍都不是喂数据的问题,百度说是summary出了问题,可是我也没有summary呀,头晕~~~~
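The trace itself points at the cause rather than the data. The placeholders in this script are auto-named in creation order, so 'Placeholder_1' with shape [?,32,32,3] is x_test_ph. The only way a pure training step can demand x_test_ph is through update_op = tf.get_collection(tf.GraphKeys.UPDATE_OPS): that collection holds the batch_norm moving-average update ops of both networks, and tf.control_dependencies(update_op) therefore drags the validation network's updates (which read from x_test_ph) into every sess.run(train_op). The likely fix, stated as a strong assumption rather than a certainty: snapshot the collection right after building the training network. A minimal, self-contained TF 1.x demo of the pitfall and the fix; all names below are illustrative:

```
import numpy as np
import tensorflow as tf
import tensorflow.contrib.slim as slim

x_train_ph = tf.placeholder(tf.float32, [None, 4])   # auto-named 'Placeholder'
x_test_ph = tf.placeholder(tf.float32, [None, 4])    # auto-named 'Placeholder_1'

def tower(x, reuse=None):
    # a stand-in for the question's googlenet(): anything with batch_norm works
    with tf.variable_scope('net', reuse=reuse):
        return slim.fully_connected(x, 2, normalizer_fn=slim.batch_norm,
                                    normalizer_params={'is_training': True})

train_out = tower(x_train_ph)
# Snapshot NOW: only the training tower's batch_norm update ops exist yet.
train_updates = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
val_out = tower(x_test_ph, reuse=True)   # adds its updates after the snapshot

loss = tf.reduce_mean(train_out)
with tf.control_dependencies(train_updates):   # NOT the full, final collection
    train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Runs without feeding x_test_ph. Built with the full UPDATE_OPS collection
    # instead, this very line raises "You must feed a value for placeholder
    # tensor 'Placeholder_1'".
    sess.run(train_op, {x_train_ph: np.ones((3, 4), np.float32)})
    print('trained one step without touching x_test_ph')
```

In the question's script the equivalent one-line change is to move update_op = tf.get_collection(tf.GraphKeys.UPDATE_OPS) up, between train_out = googlenet(...) and val_out = googlenet(..., reuse=True).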
Simulation? (a simulation problem)
Problem Description
A computer simulation, a computer model, or a computational model is a computer program, or network of computers, that attempts to simulate an abstract model of a particular system. Computer simulations have become a useful part of mathematical modeling of many natural systems in physics, astrophysics, chemistry and biology, human systems in economics, psychology, social science, and engineering, of course, also computer.

“Fundamentals of compiling” is an important course for computer science students. In this course, most of us are asked to write a compiler to simulate how a programming language executes. Today, boring iSea invents a new programming language, whose name is Abnormal Cute Micro (ACM) language, and, YOU are assigned the task to write a compiler for it.

ACM language only contains two kinds of variables and a few kinds of operations or functions, and here are some BNF-like rules for ACM. Also, here is some explanation for these rules:
1) In ACM expressions, use exactly one blank to separate variables and operators, and as the rule indicates, the operator should apply right to left; for example, the result of "1 - 2 - 3" should be 2.
2) In the build function, use exactly one blank to separate integers, too.
3) Beside there are brackets in function, no other bracket exists.
4) All the variables are conformable, and never exceed 10000.

Given an ACM expression, your task is to output its value. If the result is an integer, just report it, otherwise report an array using the format “{integer_0, integer_1, … , integer_n}”.

Input
The first line contains a single integer T, indicating the number of test cases. Each test case includes a string indicating a valid ACM expression you have to process.
Technical Specification
1. 1 <= T <= 100
2. 1 <= |S| <= 100, |S| indicating the length of the string.

Output
For each test case, output the case number first, then the result variable.

Sample Input
10
1 + 1
1 - 2 - 3
dance(3)
vary(2) * 2
vary(sum(dance(5) - 1))
dance(dance(-3))
1 - 2 - 3 * vary(dull(build(1 2 3)))
dance(dance(dance(dance(dance(2)))))
sum(vary(100)) - sum(build(3038))
build(sum(vary(2)) dull(build(1 0)) 2 dull(dance(2))) - build(1 1 1 1)

Sample Output
Case 1: 2
Case 2: 2
Case 3: {3, -2, 1}
Case 4: {2, 4}
Case 5: {2, 1}
Case 6: -4
Case 7: {2, 5}
Case 8: {4, -3, 2, -1}
Case 9: 2012
Case 10: {2, 0, 1, 2}
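The BNF rules and the table defining dance/vary/sum/dull/build did not survive on this page, so a full interpreter cannot be reconstructed here. The one rule that is stated completely, right-to-left application, is easy to pin down with a small sketch (scalars and the three arithmetic operators only):

```
def eval_right_to_left(tokens):
    """Evaluate [value, op, value, op, value, ...] with right associativity."""
    if len(tokens) == 1:
        return tokens[0]
    # fold from the right: a op (rest)
    ops = {'+': lambda a, b: a + b,
           '-': lambda a, b: a - b,
           '*': lambda a, b: a * b}
    return ops[tokens[1]](tokens[0], eval_right_to_left(tokens[2:]))

print(eval_right_to_left([1, '-', 2, '-', 3]))   # 1 - (2 - 3) = 2, as stated
```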
Su-domino-ku: how do I write this program?
Problem Description
As if there were not already enough sudoku-like puzzles, the July 2009 issue of Games Magazine describes the following variant that combines facets of both sudoku and dominos. The puzzle is a form of a standard sudoku, in which there is a nine-by-nine grid that must be filled in using only digits 1 through 9. In a successful solution:
Each row must contain each of the digits 1 through 9.
Each column must contain each of the digits 1 through 9.
Each of the indicated three-by-three squares must contain each of the digits 1 through 9.

For a su-domino-ku, nine arbitrary cells are initialized with the numbers 1 to 9. This leaves 72 remaining cells. Those must be filled by making use of the following set of 36 domino tiles. The tile set includes one domino for each possible pair of unique numbers from 1 to 9 (e.g., 1+2, 1+3, 1+4, 1+5, 1+6, 1+7, 1+8, 1+9, 2+3, 2+4, 2+5, ...). Note well that there are not separate 1+2 and 2+1 tiles in the set; the single such domino can be rotated to provide either orientation. Also, note that dominos may cross the boundary of the three-by-three squares (as does the 2+9 domino in our coming example). To help you out, we will begin each puzzle by identifying the location of some of the dominos. For example, Figure 1 shows a sample puzzle in its initial state. Figure 2 shows the unique way to complete that puzzle.

Input
Each puzzle description begins with a line containing an integer N, for 10 ≤ N ≤ 35, representing the number of dominos that are initially placed in the starting configuration. Following that are N lines, each describing a single domino as U LU V LV. Value U is one of the numbers on the domino, and LU is a two-character string representing the location of value U on the board based on the grid system diagrammed in Figure 1. The variables V and LV represent the respective value and location of the other half of the domino. For example, our first sample input begins with a domino described as 6 B2 1 B3. This corresponds to the domino with values 6+1 being placed on the board such that value 6 is in row B, column 2 and value 1 in row B, column 3. The two locations for a given domino will always be neighboring. After the specification of the N dominos will be a final line that describes the initial locations of the isolated numbers, ordered from 1 to 9, using the same row-column conventions for describing locations on the board. All initial numbers and dominos will be at unique locations. The input file ends with a line containing 0.

Output
For each puzzle, output an initial line identifying the puzzle number, as shown below. Following that, output the 9x9 sudoku board that can be formed with the set of dominos. There will be a unique solution for each puzzle.

Sample Input
10
6 B2 1 B3
2 C4 9 C3
6 D3 8 E3
7 E1 4 F1
8 B7 4 B8
3 F5 2 F6
7 F7 6 F8
5 G4 9 G5
7 I8 8 I9
7 C9 2 B9
C5 A3 D9 I4 A9 E5 A2 C6 I1
11
5 I9 2 H9
6 A5 7 A6
4 B8 6 C8
3 B5 8 B4
3 C3 2 D3
9 D2 8 E2
3 G2 5 H2
1 A2 8 A1
1 H8 3 I8
8 I3 7 I4
4 I6 9 I7
I5 E6 D1 F2 B3 G9 H7 C9 E5
0

Sample Output
Puzzle 1
872643195
361975842
549218637
126754983
738169254
495832761
284597316
657381429
913426578
Puzzle 2
814267593
965831247
273945168
392176854
586492371
741358629
137529486
459683712
628714935
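A workable approach is plain backtracking over domino placements rather than single cells; this is one standard strategy, not the official solution. Take the first empty cell in row-major order: everything before it is already filled, so a domino covering it can only extend right or down. Then try every unused domino in both orientations, checking the ordinary sudoku constraints. A sketch (0-based indices; row letters A-I map to rows 0-8):

```
def valid(board, r, c, v):
    """Ordinary sudoku check: may value v go into empty cell (r, c)?"""
    if any(board[r][j] == v for j in range(9)): return False
    if any(board[i][c] == v for i in range(9)): return False
    br, bc = 3 * (r // 3), 3 * (c // 3)
    return all(board[i][j] != v for i in range(br, br + 3)
                                for j in range(bc, bc + 3))

def solve(board, used):
    """board: 9x9 lists with 0 for empty; used: set of frozenset({a, b})
    for dominoes already placed. Returns True when the board is complete."""
    pos = next(((r, c) for r in range(9) for c in range(9)
                if board[r][c] == 0), None)
    if pos is None:
        return True
    r, c = pos
    for dr, dc in ((0, 1), (1, 0)):          # partner cell: right or down only
        r2, c2 = r + dr, c + dc
        if r2 > 8 or c2 > 8 or board[r2][c2] != 0:
            continue
        for a in range(1, 10):
            if not valid(board, r, c, a):
                continue
            for b in range(1, 10):
                dom = frozenset((a, b))
                if a == b or dom in used or not valid(board, r2, c2, b):
                    continue
                board[r][c], board[r2][c2] = a, b
                used.add(dom)
                if solve(board, used):
                    return True
                board[r][c], board[r2][c2] = 0, 0   # undo and backtrack
                used.discard(dom)
    return False
```

To seed a puzzle, write the nine isolated digits and each given domino into board (a location like "B3" maps to row ord('B') - ord('A') = 1, column 2) and add each given pair to used; once solve(board, used) returns True, print the nine rows.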
C#: problem querying data from MySQL
While developing an application recently against a MySQL database, I ran into this: querying the database sometimes returns data and sometimes returns nothing, under exactly the same conditions. The data is not consistently there, and I can't tell where the problem is. The program reports no errors at all while executing, everything runs normally, and I couldn't find anything relevant online. Has anyone solved this problem? Attached is the connection string used for MySQL:

string connectionString = string.Empty;
connectionString = @"Data Source={0};Port={1};Database ={2};User Id={3};Password={4};pooling=false;CharSet=utf8;Allow User Variables=True;";
connectionString = string.Format(connectionString, Address, Port, Database, User, PassWord);
return connectionString;
MNIST deep-learning example does not converge; accuracy stays stuck in the teens
minst深度学习程序不收敛 是关于tensorflow的问题。我是tensorflow的初学者。从书上抄了minst的学习程序。但是运行之后,无论学习了多少批次,成功率基本不变。 我做了许多尝试,去掉了正则化,去掉了滑动平均,还是不行。把batch_size改成了2,观察变量运算情况,输入x是正确的,但神经网络的输出y很多情况下在x不一样的情况下y的两个结果是完全一样的。进而softmax的结果也是一样的。百思不得其解,找不到造成这种情况的原因。这里把代码和运行情况都贴出来,请大神帮我找找原因。大过年的,祝大家春节快乐万事如意。 补充一下,进一步的测试表明,不是不能完成训练,而是要到700000轮以上,且最高达到65%左右就不能提高了。仔细看每一步的参数,是regularization值过大10e15以上,一点点减少,前面的训练都在训练它了。这东西我不是很明白。 ``` import struct import numpy as np import matplotlib.pyplot as plt from matplotlib.widgets import Slider, Button import tensorflow as tf import time #把MNIST的操作封装在一个类中,以后用起来方便。 class MyMinst(): def decode_idx3_ubyte(self,idx3_ubyte_file): with open(idx3_ubyte_file, 'rb') as f: print('解析文件:', idx3_ubyte_file) fb_data = f.read() offset = 0 fmt_header = '>iiii' # 以大端法读取4个 unsinged int32 magic_number, num_images, num_rows, num_cols = struct.unpack_from(fmt_header, fb_data, offset) print('idex3 魔数:{},图片数:{}'.format(magic_number, num_images)) offset += struct.calcsize(fmt_header) fmt_image = '>' + str(num_rows * num_cols) + 'B' images = np.empty((num_images, num_rows*num_cols)) #做了修改 for i in range(num_images): im = struct.unpack_from(fmt_image, fb_data, offset) images[i] = np.array(im)#这里用一维数组表示图片,np.array(im).reshape((num_rows, num_cols)) offset += struct.calcsize(fmt_image) return images def decode_idx1_ubyte(self,idx1_ubyte_file): with open(idx1_ubyte_file, 'rb') as f: print('解析文件:', idx1_ubyte_file) fb_data = f.read() offset = 0 fmt_header = '>ii' # 以大端法读取两个 unsinged int32 magic_number, label_num = struct.unpack_from(fmt_header, fb_data, offset) print('idex1 魔数:{},标签数:{}'.format(magic_number, label_num)) offset += struct.calcsize(fmt_header) labels = np.empty(shape=[0,10],dtype=float) #神经网络需要把label变成10位float的数组 fmt_label = '>B' # 每次读取一个 byte for i in range(label_num): n=struct.unpack_from(fmt_label, fb_data, offset) labels=np.append(labels,[[0,0,0,0,0,0,0,0,0,0]],axis=0) labels[i][n]=1 offset += struct.calcsize(fmt_label) return labels def __init__(self): #固定的训练文件位置 self.img=self.decode_idx3_ubyte("/home/zhangyl/Downloads/mnist/train-images.idx3-ubyte") self.result=self.decode_idx1_ubyte("/home/zhangyl/Downloads/mnist/train-labels.idx1-ubyte") print(self.result[0]) print(self.result[1000]) print(self.result[25000]) #固定的验证文件位置 self.validate_img=self.decode_idx3_ubyte("/home/zhangyl/Downloads/mnist/t10k-images.idx3-ubyte") self.validate_result=self.decode_idx1_ubyte("/home/zhangyl/Downloads/mnist/t10k-labels.idx1-ubyte") #每一批读训练数据的起始位置 self.train_read_addr=0 #每一批读训练数据的batchsize self.train_batchsize=100 #每一批读验证数据的起始位置 self.validate_read_addr=0 #每一批读验证数据的batchsize self.validate_batchsize=100 #定义用于返回batch数据的变量 self.train_img_batch=self.img self.train_result_batch=self.result self.validate_img_batch=self.validate_img self.validate_result_batch=self.validate_result def get_next_batch_traindata(self): n=len(self.img) #对参数范围适当约束 if self.train_read_addr+self.train_batchsize<=n : self.train_img_batch=self.img[self.train_read_addr:self.train_read_addr+self.train_batchsize] self.train_result_batch=self.result[self.train_read_addr:self.train_read_addr+self.train_batchsize] self.train_read_addr+=self.train_batchsize #改变起始位置 if self.train_read_addr==n : self.train_read_addr=0 else: self.train_img_batch=self.img[self.train_read_addr:n] self.train_img_batch.append(self.img[0:self.train_read_addr+self.train_batchsize-n]) self.train_result_batch=self.result[self.train_read_addr:n] self.train_result_batch.append(self.result[0:self.train_read_addr+self.train_batchsize-n]) 
self.train_read_addr=self.train_read_addr+self.train_batchsize-n #改变起始位置,这里没考虑batchsize大于n的情形 return self.train_img_batch,self.train_result_batch #测试一下用临时变量返回是否可行 def set_train_read_addr(self,addr): self.train_read_addr=addr def set_train_batchsize(self,batchsize): self.train_batchsize=batchsize if batchsize <1 : self.train_batchsize=1 def set_validate_read_addr(self,addr): self.validate_read_addr=addr def set_validate_batchsize(self,batchsize): self.validate_batchsize=batchsize if batchsize<1 : self.validate_batchsize=1 myminst=MyMinst() #minst类的实例 batch_size=2 #设置每一轮训练的Batch大小 learning_rate=0.8 #初始学习率 learning_rate_decay=0.999 #学习率的衰减 max_steps=300000 #最大训练步数 #定义存储训练轮数的变量,在使用tensorflow训练神经网络时, #一般会将代表训练轮数的变量通过trainable参数设置为不可训练的 training_step = tf.Variable(0,trainable=False) #定义得到隐藏层和输出层的前向传播计算方式,激活函数使用relu() def hidden_layer(input_tensor,weights1,biases1,weights2,biases2,layer_name): layer1=tf.nn.relu(tf.matmul(input_tensor,weights1)+biases1) return tf.matmul(layer1,weights2)+biases2 x=tf.placeholder(tf.float32,[None,784],name="x-input") y_=tf.placeholder(tf.float32,[None,10],name="y-output") #生成隐藏层参数,其中weights包含784*500=39200个参数 weights1=tf.Variable(tf.truncated_normal([784,500],stddev=0.1)) biases1=tf.Variable(tf.constant(0.1,shape=[500])) #生成输出层参数,其中weights2包含500*10=5000个参数 weights2=tf.Variable(tf.truncated_normal([500,10],stddev=0.1)) biases2=tf.Variable(tf.constant(0.1,shape=[10])) #计算经过神经网络前后向传播后得到的y值 y=hidden_layer(x,weights1,biases1,weights2,biases2,'y') #初始化一个滑动平均类,衰减率为0.99 #为了使模型在训练前期可以更新的更快,这里提供了num_updates参数,并设置为当前网络的训练轮数 #averages_class=tf.train.ExponentialMovingAverage(0.99,training_step) #定义一个更新变量滑动平均值的操作需要向滑动平均类的apply()函数提供一个参数列表 #train_variables()函数返回集合图上Graph.TRAINABLE_VARIABLES中的元素。 #这个集合的元素就是所有没有指定trainable_variables=False的参数 #averages_op=averages_class.apply(tf.trainable_variables()) #再次计算经过神经网络前向传播后得到的y值,这里使用了滑动平均,但要牢记滑动平均值只是一个影子变量 #average_y=hidden_layer(x,averages_class.average(weights1), # averages_class.average(biases1), # averages_class.average(weights2), # averages_class.average(biases2), # 'average_y') #softmax,计算交叉熵损失,L2正则,随机梯度优化器,学习率采用指数衰减 #函数原型为sparse_softmax_cross_entropy_with_logits(_sential,labels,logdits,name) #与softmax_cross_entropy_with_logits()函数的计算方式相同,更适用于每个类别相互独立且排斥 #的情况,即每一幅图只能属于一类 #在1.0.0版本的TensorFlow中,这个函数只能通过命名参数的方式来使用,在这里logits参数是神经网 #络不包括softmax层的前向传播结果,lables参数给出了训练数据的正确答案 softmax=tf.nn.softmax(y) cross_entropy=tf.nn.sparse_softmax_cross_entropy_with_logits(logits=y+1e-10,labels=tf.argmax(y_,1)) #argmax()函数原型为argmax(input,axis,name,dimension)用于计算每一个样例的预测答案,其中 # input参数y是一个batch_size*10(batch_size行,10列)的二维数组。每一行表示一个样例前向传 # 播的结果,axis参数“1”表示选取最大值的操作只在第一个维度进行。即只在每一行选取最大值对应的下标 # 于是得到的结果是一个长度为batch_size的一维数组,这个一维数组的值就表示了每一个样例的数字识别 # 结果。 regularizer=tf.contrib.layers.l2_regularizer(0.0001) #计算L2正则化损失函数 regularization=regularizer(weights1)+regularizer(weights2) #计算模型的正则化损失 loss=tf.reduce_mean(cross_entropy)#+regularization #总损失 #用指数衰减法设置学习率,这里staircase参数采用默认的False,即学习率连续衰减 learning_rate=tf.train.exponential_decay(learning_rate,training_step, batch_size,learning_rate_decay) #使用GradientDescentOptimizer优化算法来优化交叉熵损失和正则化损失 train_op=tf.train.GradientDescentOptimizer(learning_rate).minimize(loss, global_step=training_step) #在训练这个模型时,每过一遍数据既需要通过反向传播来更新神经网络中的参数,又需要 # 更新每一个参数的滑动平均值。control_dependencies()用于这样的一次性多次操作 #同样的操作也可以使用下面这行代码完成: #train_op=tf.group(train_step,average_op) #with tf.control_dependencies([train_step,averages_op]): # train_op=tf.no_op(name="train") #检查使用了滑动平均模型的神经网络前向传播结果是否正确 #equal()函数原型为equal(x,y,name),用于判断两个张量的每一维是否相等。 
#如果相等返回True,否则返回False crorent_predicition=tf.equal(tf.argmax(y,1),tf.argmax(y_,1)) #cast()函数的原型为cast(x,DstT,name),在这里用于将一个布尔型的数据转换为float32类型 #之后对得到的float32型数据求平均值,这个平均值就是模型在这一组数据上的正确率 accuracy=tf.reduce_mean(tf.cast(crorent_predicition,tf.float32)) #创建会话和开始训练过程 with tf.Session() as sess: #在稍早的版本中一般使用initialize_all_variables()函数初始化全部变量 tf.global_variables_initializer().run() #准备验证数据 validate_feed={x:myminst.validate_img,y_:myminst.validate_result} #准备测试数据 test_feed= {x:myminst.img,y_:myminst.result} for i in range(max_steps): if i%1000==0: #计算滑动平均模型在验证数据上的结果 #为了能得到百分数输出,需要将得到的validate_accuracy扩大100倍 validate_accuracy= sess.run(accuracy,feed_dict=validate_feed) print("After %d trainning steps,validation accuracy using average model is %g%%" %(i,validate_accuracy*100)) #产生这一轮使用一个batch的训练数据,并进行训练 #input_data.read_data_sets()函数生成的类提供了train.next_batch()函数 #通过设置函数的batch_size参数就可以从所有的训练数据中读取一个小部分作为一个训练batch myminst.set_train_batchsize(batch_size) xs,ys=myminst.get_next_batch_traindata() var_print=sess.run([x,y,y_,loss,train_op,softmax,cross_entropy,regularization,weights1],feed_dict={x:xs,y_:ys}) print("after ",i," trainning steps:") print("x=",var_print[0][0],var_print[0][1],"y=",var_print[1],"y_=",var_print[2],"loss=",var_print[3], "softmax=",var_print[5],"cross_entropy=",var_print[6],"regularization=",var_print[7],var_print[7]) time.sleep(0.5) #使用测试数据集检验神经网络训练之后的正确率 #为了能得到百分数输出,需要将得到的test_accuracy扩大100倍 test_accuracy=sess.run(accuracy,feed_dict=test_feed) print("After %d training steps,test accuracy using average model is %g%%"%(max_steps,test_accuracy*100)) 下面是运行情况的一部分: x= [ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 8. 76. 202. 254. 255. 163. 37. 2. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 13. 182. 253. 253. 253. 253. 253. 253. 23. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 15. 179. 253. 253. 212. 91. 218. 253. 253. 179. 109. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 105. 253. 253. 160. 35. 156. 253. 253. 253. 253. 250. 113. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 19. 212. 253. 253. 88. 121. 253. 233. 128. 91. 245. 253. 248. 114. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 104. 253. 253. 110. 2. 142. 253. 90. 0. 0. 26. 199. 253. 248. 63. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 173. 253. 253. 29. 0. 84. 228. 39. 0. 0. 0. 72. 251. 253. 215. 29. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 36. 253. 253. 203. 13. 0. 0. 0. 0. 0. 0. 0. 0. 82. 253. 253. 170. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 36. 253. 253. 164. 0. 0. 0. 0. 0. 0. 0. 0. 0. 11. 198. 253. 184. 6. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 36. 253. 253. 82. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 138. 253. 253. 35. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 128. 253. 253. 47. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 48. 253. 253. 35. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 154. 253. 253. 47. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 48. 253. 253. 35. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 102. 253. 253. 99. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 48. 253. 253. 35. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 36. 253. 253. 164. 0. 0. 0. 0. 0. 0. 0. 0. 0. 16. 208. 253. 211. 17. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 32. 244. 253. 175. 4. 0. 0. 0. 0. 0. 0. 0. 0. 44. 253. 253. 156. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 171. 253. 253. 29. 0. 0. 0. 0. 0. 0. 0. 30. 217. 253. 188. 19. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 
0. 171. 253. 253. 59. 0. 0. 0. 0. 0. 0. 60. 217. 253. 253. 70. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 78. 253. 253. 231. 48. 0. 0. 0. 26. 128. 249. 253. 244. 94. 15. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 8. 151. 253. 253. 234. 101. 121. 219. 229. 253. 253. 201. 80. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 38. 232. 253. 253. 253. 253. 253. 253. 253. 201. 66. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.] [ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 232. 253. 253. 95. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 3. 86. 46. 0. 0. 0. 0. 0. 0. 91. 246. 252. 232. 57. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 103. 252. 187. 13. 0. 0. 0. 0. 22. 219. 252. 252. 175. 0. 0. 0. 0. 0. 0. 0. 0. 0. 10. 0. 0. 0. 0. 8. 181. 252. 246. 30. 0. 0. 0. 0. 65. 252. 237. 197. 64. 0. 0. 0. 0. 0. 0. 0. 0. 0. 87. 0. 0. 0. 13. 172. 252. 252. 104. 0. 0. 0. 0. 5. 184. 252. 67. 103. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 8. 172. 252. 248. 145. 14. 0. 0. 0. 0. 109. 252. 183. 137. 64. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 5. 224. 252. 248. 134. 0. 0. 0. 0. 0. 53. 238. 252. 245. 86. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 12. 174. 252. 223. 88. 0. 0. 0. 0. 0. 0. 209. 252. 252. 179. 9. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 11. 171. 252. 246. 61. 0. 0. 0. 0. 0. 0. 83. 241. 252. 211. 14. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 129. 252. 252. 249. 220. 220. 215. 111. 192. 220. 221. 243. 252. 252. 149. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 144. 253. 253. 253. 253. 253. 253. 253. 253. 253. 255. 253. 226. 153. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 44. 77. 77. 77. 77. 77. 77. 77. 77. 153. 253. 235. 32. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 74. 214. 240. 114. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 24. 221. 243. 57. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 8. 180. 252. 119. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 136. 252. 153. 7. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 3. 136. 251. 226. 34. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 123. 252. 246. 39. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 165. 252. 127. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 165. 175. 3. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.] 
y= [[ 0.58273095 0.50121385 -0.74845004 0.35842288 -0.13741069 -0.5839622 0.2642774 0.5101677 -0.29416046 0.5471707 ] [ 0.58273095 0.50121385 -0.74845004 0.35842288 -0.13741069 -0.5839622 0.2642774 0.5101677 -0.29416046 0.5471707 ]] y_= [[1. 0. 0. 0. 0. 0. 0. 0. 0. 0.] [0. 0. 0. 0. 1. 0. 0. 0. 0. 0.]] loss= 2.2801425 softmax= [[0.14659645 0.13512042 0.03872566 0.11714067 0.07134604 0.04564939 0.10661562 0.13633572 0.06099501 0.14147504] [0.14659645 0.13512042 0.03872566 0.11714067 0.07134604 0.04564939 0.10661562 0.13633572 0.06099501 0.14147504]] cross_entropy= [1.9200717 2.6402135] regularization= 50459690000000.0 50459690000000.0 after 45 trainning steps: x= [ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 25. 214. 225. 90. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 7. 145. 212. 253. 253. 60. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 106. 253. 253. 246. 188. 23. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 45. 164. 254. 253. 223. 108. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 24. 236. 253. 252. 124. 28. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 100. 217. 253. 218. 116. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 158. 175. 225. 253. 92. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 24. 217. 241. 248. 114. 2. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 21. 201. 253. 253. 114. 3. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 107. 253. 253. 213. 19. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 170. 254. 254. 169. 0. 0. 0. 0. 0. 2. 13. 100. 133. 89. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 18. 210. 253. 253. 100. 0. 0. 0. 19. 76. 116. 253. 253. 253. 176. 4. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 41. 222. 253. 208. 18. 0. 0. 93. 209. 232. 217. 224. 253. 253. 241. 31. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 157. 253. 253. 229. 32. 0. 154. 250. 246. 36. 0. 49. 253. 253. 168. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 128. 253. 253. 253. 195. 125. 247. 166. 69. 0. 0. 37. 236. 253. 168. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 37. 253. 253. 253. 253. 253. 135. 32. 0. 7. 130. 73. 202. 253. 133. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 7. 185. 253. 253. 253. 253. 64. 0. 10. 210. 253. 253. 253. 153. 9. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 66. 253. 253. 253. 253. 238. 218. 221. 253. 253. 235. 156. 37. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 5. 111. 228. 253. 253. 253. 253. 254. 253. 168. 19. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 9. 110. 178. 253. 253. 249. 63. 5. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.] [ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 121. 121. 240. 253. 218. 121. 121. 44. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 17. 107. 184. 240. 253. 252. 252. 252. 252. 252. 252. 219. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 75. 122. 230. 252. 252. 252. 253. 252. 252. 252. 252. 252. 252. 239. 56. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 77. 129. 213. 244. 252. 252. 252. 252. 252. 253. 252. 252. 209. 252. 252. 252. 225. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 240. 252. 252. 252. 252. 252. 252. 213. 185. 53. 53. 53. 89. 252. 252. 252. 120. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 240. 232. 198. 93. 164. 108. 66. 28. 0. 0. 0. 0. 81. 252. 252. 222. 24. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 76. 50. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 171. 252. 243. 108. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 144. 238. 252. 115. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 7. 70. 241. 248. 133. 28. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 121. 252. 252. 172. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 64. 255. 253. 209. 21. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 13. 246. 253. 207. 21. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 10. 172. 252. 209. 92. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 13. 168. 252. 252. 92. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 43. 208. 252. 241. 53. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 15. 166. 252. 204. 62. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 13. 166. 243. 191. 29. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 10. 168. 231. 177. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 6. 172. 241. 50. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 177. 202. 19. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.] y= [[ 0.8592988 0.3954708 -0.77875614 0.26675048 0.19804694 -0.61968666 0.18084174 0.4034736 -0.34189415 0.43645462] [ 0.8592988 0.3954708 -0.77875614 0.26675048 0.19804694 -0.61968666 0.18084174 0.4034736 -0.34189415 0.43645462]] y_= [[0. 0. 0. 0. 0. 0. 1. 0. 0. 0.] [0. 0. 0. 0. 0. 0. 0. 1. 0. 0.]] loss= 2.2191708 softmax= [[0.19166051 0.12052987 0.0372507 0.10597225 0.09893605 0.04367344 0.09724841 0.12149832 0.05765821 0.12557226] [0.19166051 0.12052987 0.0372507 0.10597225 0.09893605 0.04367344 0.09724841 0.12149832 0.05765821 0.12557226]] cross_entropy= [2.3304868 2.1078548] regularization= 50459690000000.0 50459690000000.0 after 46 trainning steps: x= [ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 196. 99. 0. 0. 0. 0. 0. 0. 0. 
...[two 784-element MNIST input vectors omitted]
y= [[ 0.7093834  0.30119324 -0.80789334  0.1838598  0.12065991 -0.6538477  0.49587095  0.6995347  -0.38699397  0.33823296]
    [ 0.7093834  0.30119324 -0.80789334  0.1838598  0.12065991 -0.6538477  0.49587095  0.6995347  -0.38699397  0.33823296]]
y_= [[0. 0. 0. 0. 1. 0. 0. 0. 0. 0.]
     [0. 0. 0. 0. 0. 0. 1. 0. 0. 0.]]
loss= 2.2107558
softmax= [[0.16371341 0.10884525 0.03590371 0.09679484 0.09086671 0.04188326 0.1322382 0.16210894 0.05469323 0.11295244]
          [0.16371341 0.10884525 0.03590371 0.09679484 0.09086671 0.04188326 0.1322382 0.16210894 0.05469323 0.11295244]]
cross_entropy= [2.3983614 2.0231504]
regularization= 50459690000000.0 50459690000000.0
after 47 trainning steps:
x= [two 784-element MNIST input vectors omitted]
y= [[ 0.5813921  0.21609789 -0.8359629  0.10818548  0.44052082 -0.6865921  0.78338754  0.5727978  -0.4297532  0.24992661]
    [ 0.5813921  0.21609789 -0.8359629  0.10818548  0.44052082 -0.6865921  0.78338754  0.5727978  -0.4297532  0.24992661]]
y_= [[0. 0. 0. 0. 0. 0. 0. 0. 1. 0.]
     [1. 0. 0. 0. 0. 0. 0. 0. 0. 0.]]
loss= 2.452383
softmax= [[0.14272858 0.09905256 0.03459087 0.08892009 0.1239742 0.04016358 0.1746773 0.14150718 0.05192496 0.10246069]
          [0.14272858 0.09905256 0.03459087 0.08892009 0.1239742 0.04016358 0.1746773 0.14150718 0.05192496 0.10246069]]
cross_entropy= [2.9579558 1.9468105]
regularization= 50459690000000.0 50459690000000.0
已终止 (terminated)
```
How do you write a program for "Polly Nomials"?
Description
The Avian Computation Mission of the International Ornithologists Union is dedicated to the study of intelligence in birds, and specifically the study of computational ability. One of the most promising projects so far is the "Polly Nomial" project on parrot intelligence, run by Dr. Albert B. Tross and his assistants, Clifford Swallow and Perry Keet. In the ACM, parrots are trained to carry out simple polynomial computations involving integers, variables, and simple arithmetic operators. When shown a formula consisting of a polynomial with non-negative integer coefficients and one variable x, each parrot uses a special beak-operated PDA, or "Parrot Digital Assistant," to tap out a sequence of operations for computing the polynomial. The PDA operates much like a calculator. It has keys marked with the following symbols: the digits from 0 through 9, the symbol 'x', and the operators '+', '*', and '='. (The x key is internally associated with an integer constant by Al B. Tross for testing purposes, but the parrot sees only the 'x'.) For instance, if the parrot were presented with the polynomial x^3 + x + 11, the parrot might tap the following sequence of symbols: x, *, x, *, x, +, x, +, 1, 1, =. The PDA has no extra memory, so each * or + operation is applied to the previous contents of the display and whatever succeeding operand is entered. If the polynomial had been x^3 + 2x^2 + 11, then the parrot would not have been able to "save" the value of x^3 while calculating the value of 2x^2. Instead, a different order of operations would be needed, for instance: x, +, 2, *, x, *, x, +, 1, 1, =. The cost of a calculation is the number of key presses. The cost of computing x^3 + x + 11 in the example above is 11 (four presses of the x key, two presses of '*', two presses of '+', two presses of the digit '1', and the '=' key). It so happens that this is the minimal cost for this particular expression using the PDA. You are to write a program that finds the least costly way for a parrot to compute a number of polynomial expressions. Because parrots are, after all, just bird-brains, they are intimidated by polynomials whose high-order coefficient is any value except 1, so this condition is always imposed.
Input
Input consists of a sequence of lines, each containing a polynomial and an x value. Each polynomial a_n x^n + a_{n-1} x^{n-1} + ... + a_0 is represented by its degree followed by the non-negative coefficients a_n, ..., a_0 of decreasing powers of x, where a_n is always 1. Degrees are between 1 and 100. The coefficients are followed on the same line by an integer value for the variable x, which is always either 1 or -1. The input is terminated by a single line containing the values 0 0.
Output
For each polynomial, print the polynomial number followed by the value of the polynomial at the given integer value x and the minimum cost of computing the polynomial; imitate the formatting in the sample output.
Sample Input
3 1 0 1 11 1
3 1 0 2 11 -1
0 0
Sample Output
Polynomial 1: 13 11
Polynomial 2: 8 11
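Before tackling the minimum-cost search, it helps to nail down the PDA semantics, which are easy to get wrong. Below is a minimal sketch (the helper name and structure are mine, not part of the problem) that replays a key sequence strictly left to right, the way the statement describes, and counts its cost; running it on the worked example reproduces the value 13 at x = 1 and the cost 11 from the sample output:
```
# Minimal sketch of the PDA semantics: keys apply strictly left to right to the
# display, with no precedence and no memory; cost = number of key presses.
def run_pda(keys, x):
    acc = None       # current display contents
    op = None        # operator waiting for its right-hand operand
    operand = None   # operand currently being keyed in
    cost = 0

    def commit():
        nonlocal acc, operand
        if operand is None:
            return
        if acc is None or op is None:
            acc = operand
        else:
            acc = acc + operand if op == '+' else acc * operand
        operand = None

    for k in keys:
        cost += 1
        if k == 'x':
            operand = x
        elif k.isdigit():
            # consecutive digit keys build a multi-digit operand
            operand = int(k) if operand is None else operand * 10 + int(k)
        else:  # '+', '*' or '='
            commit()
            if k == '=':
                break
            op = k
    return acc, cost

print(run_pda('x*x*x+x+11=', 1))  # (13, 11), matching the worked example
```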
Running TensorFlow raises tensorflow.python.framework.errors_impl.InternalError: Blas GEMM launch failed
When I run TensorFlow I get tensorflow.python.framework.errors_impl.InternalError: Blas GEMM launch failed. I looked it up and it supposedly means the GPU is occupied. Things start to go wrong from here: ``` 2019-10-17 09:28:49.495166: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1304] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 6382 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1070, pci bus id: 0000:01:00.0, compute capability: 6.1) (60000, 28, 28) (60000, 10) 2019-10-17 09:28:51.275415: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'cublas64_100.dll'; dlerror: cublas64_100.dll not found ``` ![图片说明](https://img-ask.csdn.net/upload/201910/17/1571277238_292620.png) The error finally shown: ![图片说明](https://img-ask.csdn.net/upload/201910/17/1571277311_655722.png) I tried the methods found online, for example adding this code: ``` gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.333) sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options)) ``` but then it reports: ![图片说明](https://img-ask.csdn.net/upload/201910/17/1571277460_72752.png) Now I don't know how to fix it. As a beginner I wanted to try simple digit recognition and followed a tutorial step by step, but my versions may differ from the tutorial's. I am using freshly installed TensorFlow 2.0 and the following: ![图片说明](https://img-ask.csdn.net/upload/201910/17/1571277627_439100.png) I don't know whether this is a version problem; urgently asking the experts for any other methods I could try. A simple addition test program runs fine; while running the digit-recognition script I checked and GPU utilization peaks at only 0.2%. Here is the complete digit-recognition code:
```
import os
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers, optimizers, datasets

os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'

#gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.2)
#sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.333)
sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))

(x, y), (x_val, y_val) = datasets.mnist.load_data()
x = tf.convert_to_tensor(x, dtype=tf.float32) / 255.
y = tf.convert_to_tensor(y, dtype=tf.int32)
y = tf.one_hot(y, depth=10)
print(x.shape, y.shape)
train_dataset = tf.data.Dataset.from_tensor_slices((x, y))
train_dataset = train_dataset.batch(200)

model = keras.Sequential([
    layers.Dense(512, activation='relu'),
    layers.Dense(256, activation='relu'),
    layers.Dense(10)])

optimizer = optimizers.SGD(learning_rate=0.001)

def train_epoch(epoch):
    # Step4. loop
    for step, (x, y) in enumerate(train_dataset):
        with tf.GradientTape() as tape:
            # [b, 28, 28] => [b, 784]
            x = tf.reshape(x, (-1, 28 * 28))
            # Step1. compute output
            # [b, 784] => [b, 10]
            out = model(x)
            # Step2. compute loss
            loss = tf.reduce_sum(tf.square(out - y)) / x.shape[0]
        # Step3. optimize and update w1, w2, w3, b1, b2, b3
        grads = tape.gradient(loss, model.trainable_variables)
        # w' = w - lr * grad
        optimizer.apply_gradients(zip(grads, model.trainable_variables))
        if step % 100 == 0:
            print(epoch, step, 'loss:', loss.numpy())

def train():
    for epoch in range(30):
        train_epoch(epoch)

if __name__ == '__main__':
    train()
```
Hoping someone can offer suggestions or a solution, many thanks!
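Two hedged observations on the log above. The warning "Could not load dynamic library 'cublas64_100.dll'" suggests the CUDA 10.0 runtime that TensorFlow 2.0 was built against is not installed or not on PATH, which by itself can make Blas GEMM launches fail; installing CUDA 10.0 plus the matching cuDNN is the first thing to verify. Separately, tf.GPUOptions and tf.Session were removed from the top-level namespace in TF 2.x, which would explain the error after adding that snippet. A minimal TF 2.x sketch of the equivalent memory setting, relevant only for the "GPU already occupied" variant of this failure:
```
import tensorflow as tf

# TF 2.x replacement for the tf.GPUOptions/tf.Session snippet: allocate GPU
# memory on demand instead of grabbing it all at startup, so another process
# holding the GPU is less likely to break cuBLAS initialization
gpus = tf.config.experimental.list_physical_devices('GPU')
for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)
```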
Can someone explain what this MySQL snippet does? How do I query the tracked statements for the database named [ShFTZNQI]? And why is this query so slow?
```mysql
-- Temporarily enable the slow query log
show variables like '%quer%';
set global slow_query_log=1;
-- Temporarily set the slow-query time threshold
show variables like 'long_query_time';
set global long_query_time=1;
-- Set how slow queries are stored
show variables like 'log_output';
set global log_output = 'FILE';
set global log_output = 'TABLE';
-- View the logs
SELECT * FROM mysql.slow_log;
SELECT * FROM mysql.general_log;
select sleep(30);
explain select ID from new_table;
-- Clear the SQL log
SET GLOBAL slow_query_log = 'OFF';
ALTER TABLE mysql.slow_log RENAME mysql.slow_log_drop;
CREATE TABLE mysql.slow_log LIKE mysql.slow_log_drop;
SET GLOBAL slow_query_log = 'ON';
DROP TABLE mysql.slow_log_drop;
SELECT * FROM mysql.slow_log order by start_time desc
```
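For reference: with log_output = 'TABLE' the entries land in the mysql.slow_log table, whose db column records the schema each statement ran against, so filtering for ShFTZNQI is a simple WHERE clause. And the script only "seems" slow because of select sleep(30), which deliberately pauses for 30 seconds so that a statement exceeding long_query_time shows up in the log. A minimal sketch of pulling the tracked statements from Python (pymysql and the connection credentials below are my assumptions, not from the question):
```
import pymysql

# hypothetical credentials; connect to the built-in `mysql` schema
conn = pymysql.connect(host='127.0.0.1', user='root', password='***', database='mysql')
try:
    with conn.cursor() as cur:
        # mysql.slow_log records the originating schema in its `db` column
        cur.execute(
            "SELECT start_time, query_time, rows_examined, sql_text "
            "FROM mysql.slow_log WHERE db = %s ORDER BY start_time DESC",
            ('ShFTZNQI',))
        for row in cur.fetchall():
            print(row)
finally:
    conn.close()
```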
"A Simple Language" expression problem
Problem Description
Professor X teaches the C programming language in college, but he finds it too hard for his students and only a few students can pass the exam. So he decides to invent a new language to reduce the burden on students. This new language supports only four data types, but its syntax is a strict subset of C. It supports only assignment, brackets, addition, subtraction, multiplication and division between variables and numbers. The priority of operations is the same as in C. In order to avoid the problem of forgetting the terminator ";", this new language allows it to be omitted. The variable naming rules are the same as in C. Comments are not allowed in this language. Now Prof. X needs to implement this language, and the variable part is done by himself. Now Prof. X needs you, an excellent ACM coder, to help: given a section of this language's code, please calculate its return value.
Input
The input contains many lines; each line is a section of code written in the language described above. You can assume that all variables have been declared as int and have been set to 0 initially.
Output
For each line of input, output an integer indicating the return value of that line. The semicolon will only appear at the end of a line. You can assume that every literal, variable value and intermediate result of the calculation never exceeds a short integer. Notice: the result may be affected by assignment operations; if you don't know the exact return value of a statement such as a=3, you can run code such as ' printf("%d",a=3); ' in C and check the result.
Sample Input
a=3
a+b
a=a*(b+2)+c;
a+b
a/4
_hello=2*a;
Sample Output
3
3
6
6
1
12
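The statement is essentially asking for a tiny C-like expression interpreter. As a starting point, here is a minimal recursive-descent evaluator sketch (function names are mine) with C precedence, truncating integer division, variables defaulting to 0, and a silently ignored trailing semicolon; on the sample input it prints 3 3 6 6 1 12, matching the sample output:
```
import re

def evaluate(line, env):
    # tokenize: integers, identifiers, single-char operators (';' is dropped)
    tokens = re.findall(r'\d+|[A-Za-z_]\w*|[=+\-*/()]', line)
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def next_tok():
        nonlocal pos
        tok = tokens[pos]
        pos += 1
        return tok

    def expr():
        # assignment: ident '=' expr, which returns the assigned value (as in C)
        if pos + 1 < len(tokens) and re.match(r'[A-Za-z_]', tokens[pos]) and tokens[pos + 1] == '=':
            name = next_tok()
            next_tok()                     # consume '='
            env[name] = expr()
            return env[name]
        return additive()

    def additive():
        val = term()
        while peek() in ('+', '-'):
            if next_tok() == '+':
                val += term()
            else:
                val -= term()
        return val

    def term():
        val = factor()
        while peek() in ('*', '/'):
            if next_tok() == '*':
                val *= factor()
            else:
                val = int(val / factor())  # C-style truncating division
        return val

    def factor():
        tok = next_tok()
        if tok == '(':
            val = expr()
            next_tok()                     # consume ')'
            return val
        if tok.isdigit():
            return int(tok)
        return env.get(tok, 0)             # undeclared variables start at 0

    return expr()

env = {}
for line in ["a=3", "a+b", "a=a*(b+2)+c;", "a+b", "a/4", "_hello=2*a;"]:
    print(evaluate(line, env))  # 3 3 6 6 1 12
```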
OpenPose CMake GUI configuration error: Caffe lib not found?
```
CMake Error: The following variables are used in this project, but they are set to NOTFOUND.
Please set them or make sure they are set and tested correctly in the CMake files:
Caffe_LIB
    linked by target "openpose" in directory E:/openpose-1.5.1/src/openpose
    linked by target "Calibration" in directory E:/openpose-1.5.1/examples/calibration
    linked by target "OpenPoseDemo" in directory E:/openpose-1.5.1/examples/openpose
    linked by target "1_custom_post_processing" in directory E:/openpose-1.5.1/examples/tutorial_add_module
    linked by target "01_body_from_image_default" in directory E:/openpose-1.5.1/examples/tutorial_api_cpp
    linked by target "02_whole_body_from_image_default" in directory E:/openpose-1.5.1/examples/tutorial_api_cpp
    linked by target "09_keypoints_from_heatmaps" in directory E:/openpose-1.5.1/examples/tutorial_api_cpp
    linked by target "08_heatmaps_from_image" in directory E:/openpose-1.5.1/examples/tutorial_api_cpp
    linked by target "03_keypoints_from_image" in directory E:/openpose-1.5.1/examples/tutorial_api_cpp
    linked by target "12_asynchronous_custom_input_output_and_datum" in directory E:/openpose-1.5.1/examples/tutorial_api_cpp
    linked by target "05_keypoints_from_images_multi_gpu" in directory E:/openpose-1.5.1/examples/tutorial_api_cpp
    linked by target "04_keypoints_from_images" in directory E:/openpose-1.5.1/examples/tutorial_api_cpp
    linked by target "06_face_from_image" in directory E:/openpose-1.5.1/examples/tutorial_api_cpp
    linked by target "07_hand_from_image" in directory E:/openpose-1.5.1/examples/tutorial_api_cpp
    linked by target "10_asynchronous_custom_input" in directory E:/openpose-1.5.1/examples/tutorial_api_cpp
    linked by target "11_asynchronous_custom_output" in directory E:/openpose-1.5.1/examples/tutorial_api_cpp
    linked by target "13_synchronous_custom_input" in directory E:/openpose-1.5.1/examples/tutorial_api_cpp
    linked by target "17_synchronous_custom_all_and_datum" in directory E:/openpose-1.5.1/examples/tutorial_api_cpp
    linked by target "14_synchronous_custom_preprocessing" in directory E:/openpose-1.5.1/examples/tutorial_api_cpp
    linked by target "15_synchronous_custom_postprocessing" in directory E:/openpose-1.5.1/examples/tutorial_api_cpp
    linked by target "16_synchronous_custom_output" in directory E:/openpose-1.5.1/examples/tutorial_api_cpp
    linked by target "1_thread_user_processing_function" in directory E:/openpose-1.5.1/examples/tutorial_api_thread
    linked by target "2_thread_user_input_processing_output_and_datum" in directory E:/openpose-1.5.1/examples/tutorial_api_thread
Caffe_Proto_LIB
    linked by target "openpose" in directory E:/openpose-1.5.1/src/openpose
    linked by target "Calibration" in directory E:/openpose-1.5.1/examples/calibration
    linked by target "OpenPoseDemo" in directory E:/openpose-1.5.1/examples/openpose
    linked by target "1_custom_post_processing" in directory E:/openpose-1.5.1/examples/tutorial_add_module
    linked by target "01_body_from_image_default" in directory E:/openpose-1.5.1/examples/tutorial_api_cpp
    linked by target "02_whole_body_from_image_default" in directory E:/openpose-1.5.1/examples/tutorial_api_cpp
    linked by target "09_keypoints_from_heatmaps" in directory E:/openpose-1.5.1/examples/tutorial_api_cpp
    linked by target "08_heatmaps_from_image" in directory E:/openpose-1.5.1/examples/tutorial_api_cpp
    linked by target "03_keypoints_from_image" in directory E:/openpose-1.5.1/examples/tutorial_api_cpp
    linked by target "12_asynchronous_custom_input_output_and_datum" in directory E:/openpose-1.5.1/examples/tutorial_api_cpp
    linked by target "05_keypoints_from_images_multi_gpu" in directory E:/openpose-1.5.1/examples/tutorial_api_cpp
    linked by target "04_keypoints_from_images" in directory E:/openpose-1.5.1/examples/tutorial_api_cpp
    linked by target "06_face_from_image" in directory E:/openpose-1.5.1/examples/tutorial_api_cpp
    linked by target "07_hand_from_image" in directory E:/openpose-1.5.1/examples/tutorial_api_cpp
    linked by target "10_asynchronous_custom_input" in directory E:/openpose-1.5.1/examples/tutorial_api_cpp
    linked by target "11_asynchronous_custom_output" in directory E:/openpose-1.5.1/examples/tutorial_api_cpp
    linked by target "13_synchronous_custom_input" in directory E:/openpose-1.5.1/examples/tutorial_api_cpp
    linked by target "17_synchronous_custom_all_and_datum" in directory E:/openpose-1.5.1/examples/tutorial_api_cpp
    linked by target "14_synchronous_custom_preprocessing" in directory E:/openpose-1.5.1/examples/tutorial_api_cpp
    linked by target "15_synchronous_custom_postprocessing" in directory E:/openpose-1.5.1/examples/tutorial_api_cpp
    linked by target "16_synchronous_custom_output" in directory E:/openpose-1.5.1/examples/tutorial_api_cpp
    linked by target "1_thread_user_processing_function" in directory E:/openpose-1.5.1/examples/tutorial_api_thread
    linked by target "2_thread_user_input_processing_output_and_datum" in directory E:/openpose-1.5.1/examples/tutorial_api_thread
Configuring incomplete, errors occurred!
See also "E:/openpose-1.5.1/bulid/CMakeFiles/CMakeOutput.log".
```
KFC -Z+W: how is this done?
Problem Description
Welcome to KFC! We love KFC, but we hate the looooooooooong queue. Z is a programmer; he's got an idea for computing the queue time. Z chooses the shortest queue at first. He wrote a J2ME program to predict the time he has to wait in the queue. Now he comes to KFC to test his program. BUT, he ignored some important things needed to get a precise time:
* People choose different foods
* Time spent on foods varies
W encountered him just while he was pondering the problem, so he discussed it with HER. W suggested that they could add variables for this:
* A type, someone looking down at a cell-phone novel who comes here alone, will order 1 hamburger, 1 drink and 1 small fries
* B type, two talkative lovers: 1 hamburger and 1 drink for each, and another big fries
* C type, a middle-aged father/mother who brings their child out: 3 hamburgers, 3 drinks and two big fries
These generally represent the types that usually appear; it is not exact mathematics. They reprogram the app on W's HTC-G1 with bash, run it and go for the fastest queue. (A short sketch of the computation follows below.)
Input
Input contains multiple test cases, and for each case:
First line: n B D f F, standing for n queues, time B for a hamburger, time D for a drink, time f for small fries, time F for big fries.
The next n lines: the types in each queue, using the characters ABC. (1 < n, B, D, f, F <= 100)
Output
For each case, please output the least time to wait, on one line.
Sample Input
3 2 2 1 2
ABCCBACBCBAB
CBCBABBCBA
ABC
Sample Output
31
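For what it's worth, the arithmetic reduces to one service time per customer type: A = B + D + f, B-type = 2B + 2D + F, C-type = 3B + 3D + 2F. Each queue's wait is the sum over its characters, and the answer is the minimum over queues. A minimal sketch (names are mine) that reproduces the sample answer of 31:
```
# One service time per customer type, from the order lists in the statement.
def least_wait(B, D, f, F, queues):
    per_type = {'A': B + D + f,              # 1 hamburger + 1 drink + 1 small fries
                'B': 2 * B + 2 * D + F,      # 2 hamburgers + 2 drinks + 1 big fries
                'C': 3 * B + 3 * D + 2 * F}  # 3 hamburgers + 3 drinks + 2 big fries
    # a queue's wait is the sum over its customers; pick the fastest queue
    return min(sum(per_type[c] for c in q) for q in queues)

print(least_wait(2, 2, 1, 2, ["ABCCBACBCBAB", "CBCBABBCBA", "ABC"]))  # 31
```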
The code compiles and runs, but no data is written to the files; I am using VS2017 and the data files are already in the project
I'd like to ask everyone: the code runs without reporting any errors, but the data is never written to the files. What could be the reason? This is the header file:
```
#ifndef _NS_H
#include<stdio.h>
#include<math.h>
#include<time.h>
#define imax 100
#define jmax 100
#define maxit 10000
extern double ma0, a0, u0, v0, p0, t0, tr, tw, tratio, visr, xl, gamma, gascon, pr;
extern double vis0;
extern double cv, cp, r0, rel, e0;
extern double kc0;
extern double delt, yl, dx, dy;
extern double dtime;
extern int iter;
extern int convergence;
extern int t1, t2;
extern double dt;
extern double u[imax][jmax], v[imax][jmax], p[imax][jmax], r[imax][jmax], rl[imax][jmax], t[imax][jmax], vis[imax][jmax], ma[imax][jmax];
extern double e[imax][jmax], a[imax][jmax], kc[imax][jmax];
void bc();
void conver();
void dynvis(double tr, double tout, double visr, double visout);
void mac();
void mdot();
void output();
void qcx(int i, int j, int icase, double qxout);
void qcy(int i, int j, int icase, double qyout);
void tauxx(int i, int j, int icase, double txx);
void tauxy(int i, int j, int icase, double txy);
void tauyy(int i, int j, int icase, double tyy);
void thermc(double vis0, double cp, double pr, double kcout);
void tstep();
#endif
```
This is the main loop:
```
/* A C program of two-dimensional complete Navier-Stokes equations for the supersonic flow
   over a flat plate based on MacCormack's explicit time-marching technique.
   Written by ZhouMingming, January, 2019
   Based on John D. Anderson, JR., Chapter 10, In Computational Fluid Dynamics,
   The Basics with Applications, McGraw-Hill, 2002, 4 */
#include"_NS_H.h"

int main()
{
    // Initialize the flow field and the various parameters
    ma0 = 8;
    a0 = 340.28;
    u0 = ma0 * a0;
    v0 = 0;
    p0 = 1.01325e5;
    t0 = 288.16;
    tr = 288.16;
    tratio = 2;
    tw = t0 * tratio;
    visr = 1.7894e-5;
    xl = 0.00001;
    gascon = 287;
    gamma = 1.4;
    pr = 0.71;
    dynvis(tr, t0, visr, vis0);
    cv = gascon / (gamma - 1);
    cp = cv * gamma;
    r0 = p0 / (gascon * t0);
    e0 = cv * t0;
    rel = r0 * sqrt(u0*u0 + v0 * v0)*xl / vis0;
    thermc(vis0, cp, pr, kc0);
    delt = 5 * xl / sqrt(rel);
    yl = 5 * delt;
    dx = xl / (imax - 1);
    dy = yl / (jmax - 1);
    FILE *fp4;
    fp4 = fopen("time.txt", "w");
    // Main loop
    t1 = clock();
    for (iter = 0; iter < maxit; iter++) {
        if (iter == 0) {
            for (int i = 0; i < imax; i++) {
                for (int j = 0; j < jmax; j++) {
                    u[i][j] = u0; v[i][j] = v0; p[i][j] = p0; t[i][j] = t0;
                    r[i][j] = r0; rl[i][j] = r0; vis[i][j] = vis0; ma[i][j] = ma0;
                    e[i][j] = e0; kc[i][j] = kc0; a[i][j] = a0;
                }
            }
        }
        // Determine the marching time step
        tstep();
        // Solve using the MacCormack time-marching algorithm
        mac();
        // Check whether the solution has converged
        conver();
        if (convergence) {
            mdot();
            output();
        }
    }
    t2 = clock();
    dt = (t2 - t1) / CLOCKS_PER_SEC;
    fprintf(fp4, "%f", dt);
    fclose(fp4);
}
```
Below are the individual cpp files:
```
#include"_NS_H.h"

void bc()
{
    // Define the boundary conditions
    // case1: at (0, 0)
    u[0][0] = 0;
    v[0][0] = 0;
    p[0][0] = p0;
    t[0][0] = t0;
    e[0][0] = cv * t[0][0];
    r[0][0] = p[0][0] / gascon / t[0][0];
    dynvis(tr, t[0][0], visr, vis[0][0]);
    thermc(vis[0][0], cp, pr, kc[0][0]);
    a[0][0] = a0;
    ma[0][0] = 0;
    // case2: at inflow and upper boundary
    for (int j = 1; j < jmax; j++) {
        u[0][j] = u0; v[0][j] = v0; p[0][j] = p0; t[0][j] = t0;
        e[0][j] = cv * t[0][j];
        r[0][j] = p[0][j] / gascon / t[0][j];
        dynvis(tr, t[0][j], visr, vis[0][j]);
        thermc(vis[0][j], cp, pr, kc[0][j]);
        a[0][j] = a0; ma[0][j] = ma0;
    }
    for (int i = 1; i < imax; i++) {
        u[i][jmax - 1] = u0; v[i][jmax - 1] = v0; p[i][jmax-1] = p0; t[i][jmax-1] = t0;
        e[i][jmax - 1] = cv * t[i][jmax - 1];
        r[i][jmax-1] = p[i][jmax-1] / gascon / t[i][jmax-1];
        dynvis(tr, t[i][jmax-1], visr, vis[i][jmax-1]);
        thermc(vis[i][jmax-1], cp, pr, kc[i][jmax-1]);
        a[i][jmax-1] = a0; ma[i][jmax-1] = ma0;
    }
    // case3: at lower surface boundary
    for (int i = 1; i < imax; i++) {
        u[i][0] = 0;
        v[i][0] = 0;
        p[i][0] = 2 * p[i][1] - p[i][2];
        t[i][0] = tw;
        e[i][0] = cv * t[i][0];
        r[i][0] = p[i][0] / gascon / t[i][0];
        dynvis(tr, t[i][0], visr, vis[i][0]);
        thermc(vis[i][0], cp, pr, kc[i][0]);
        a[i][0] = sqrt(gamma*gascon*t[i][0]);
        ma[i][0] = sqrt(u[i][0] * u[i][0] + v[i][0] * v[i][0]) / a[i][0];
    }
    // case4: at outflow boundary
    for (int j = 1; j < jmax - 1; j++) {
        u[imax - 1][j] = 2 * u[imax - 2][j] - u[imax - 3][j];
        v[imax - 1][j] = 2 * v[imax - 2][j] - v[imax - 3][j];
        p[imax - 1][j] = 2 * p[imax - 2][j] - p[imax - 3][j];
        t[imax - 1][j] = 2 * t[imax - 2][j] - t[imax - 3][j];
        e[imax - 1][j] = cv * t[imax - 1][j];
        r[imax - 1][j] = p[imax - 1][j] / gascon / t[imax - 1][j];
        dynvis(tr, t[imax - 1][j], visr, vis[imax - 1][j]);
        thermc(vis[imax - 1][j], cp, pr, kc[imax - 1][j]);
        a[imax - 1][j] = sqrt(gamma*gascon*t[imax - 1][j]);
        ma[imax - 1][j] = sqrt(u[imax - 1][j] * u[imax - 1][j] + v[imax - 1][j] * v[imax - 1][j]) / a[imax - 1][j];
    }
}
```
```
#include"_NS_H.h"

void conver()
{
    FILE *fp1;
    fp1 = fopen("time.txt", "a");
    double rcrit = 0;
    double dr;
    for (int i = 0; i < imax; i++) {
        for (int j = 0; j < jmax; j++) {
            dr = fabs(r[i][j] - rl[i][j]);
            if (rcrit < dr) { rcrit = dr; }
        }
    }
    if (rcrit <= 2e-6) {
        printf("Convergence is done!");
        convergence = 1;
    }
    else {
        fprintf(fp1, "%d ", iter);
        fprintf(fp1, "%f ", rcrit);
        fprintf(fp1, "%f ", dtime);
        fprintf(fp1, "\n");
        fclose(fp1);
        for (int i = 0; i < imax; i++) {
            for (int j = 0; j < jmax; j++) { rl[i][j] = r[i][j]; }
        }
        convergence = 0;
    }
}
```
```
#include"_NS_H.h"

void dynvis(double tr, double tout, double visr, double visout)
{
    visout = visr * pow(tout / tr, 1.5);
    visout = visout * (tr + 110) / (tout + 110);
}
```
```
#include"_NS_H.h"

double s[5][imax][jmax], f[5][imax][jmax], g[5][imax][jmax];
double sb[5][imax][jmax], sl[5][imax][jmax];
double txx[imax][jmax], txy[imax][jmax], tyy[imax][jmax];
double qx[imax][jmax], qy[imax][jmax];

void mac()
{
    double dsdt;
    // Update the flow-field variables using MacCormack's algorithm
    for (int i = 0; i < imax; i++) {
        for (int j = 0; j < jmax; j++) {
            s[0][i][j] = r[i][j];
            s[1][i][j] = r[i][j] * u[i][j];
            s[2][i][j] = r[i][j] * v[i][j];
            s[3][i][j] = 0;
            s[4][i][j] = r[i][j] * (e[i][j] + 1 / 2 * (u[i][j] * u[i][j] + v[i][j] * v[i][j]));
            sl[0][i][j] = r[i][j];
            sl[1][i][j] = r[i][j] * u[i][j];
            sl[2][i][j] = r[i][j] * v[i][j];
            sl[3][i][j] = 0;
            sl[4][i][j] = r[i][j] * (e[i][j] + 1 / 2 * (u[i][j] * u[i][j] + v[i][j] * v[i][j]));
            tauxx(i, j, 1, txx[i][j]);
            tauxy(i, j, 1, txy[i][j]);
            qcx(i, j, 1, qx[i][j]);
            f[0][i][j] = s[1][i][j];
            f[1][i][j] = s[1][i][j] * s[1][i][j] / s[0][i][j] + p[i][j] - txx[i][j];
            f[2][i][j] = s[1][i][j] * s[2][i][j] / s[0][i][j] - txy[i][j];
            f[3][i][j] = 0;
            f[4][i][j] = (s[4][i][j] + p[i][j])*s[1][i][j] / s[0][i][j] + qx[i][j] - s[1][i][j] / s[0][i][j] * txx[i][j] - s[2][i][j] / s[0][i][j] * txy[i][j];
            tauyy(i, j, 1, tyy[i][j]);
            tauxy(i, j, 3, txy[i][j]);
            qcy(i, j, 1, qy[i][j]);
            g[0][i][j] = s[2][i][j];
            g[1][i][j] = s[1][i][j] * s[2][i][j] / s[0][i][j] - txy[i][j];
            g[2][i][j] = s[2][i][j] * s[2][i][j] / s[0][i][j] + p[i][j] - tyy[i][j];
            g[3][i][j] = 0;
            g[4][i][j] = (s[4][i][j] + p[i][j])*s[2][i][j] / s[0][i][j] + qy[i][j] - s[1][i][j] / s[0][i][j] * txy[i][j] - s[2][i][j] / s[0][i][j] * tyy[i][j];
        }
    }
    // Predictor step
    for (int i = 1; i < imax - 1; i++) {
        for (int j = 1; j < jmax - 1; j++) {
            for (int k = 0; k < 5; k++) {
                dsdt = (f[k][i][j] - f[k][i + 1][j]) / dx + (g[k][i][j] - g[k][i][j + 1]) / dy;
                sb[k][i][j] = s[k][i][j] + dsdt * dtime;
            }
            // decode the variables
            r[i][j] = sb[0][i][j];
            u[i][j] = sb[1][i][j] / sb[0][i][j];
            v[i][j] = sb[2][i][j] / sb[0][i][j];
            e[i][j] = sb[4][i][j] / sb[0][i][j] - 0.5*v[i][j] * v[i][j];
            if (e[i][j] < 0) { e[i][j] = 0; }
            t[i][j] = e[i][j] / cv;
            p[i][j] = r[i][j] * gascon*t[i][j];
            dynvis(tr, t[i][j], visr, vis[i][j]);
            thermc(vis[i][j], cp, pr, kc[i][j]);
            a[i][j] = sqrt(gamma*gascon*t[i][j]);
            ma[i][j] = v[i][j] / a[i][j];
        }
    }
    // Update the boundary conditions
    bc();
    // Update the flow-field variables using MacCormack's algorithm
    for (int i = 0; i < imax; i++) {
        for (int j = 0; j < jmax; j++) {
            s[0][i][j] = r[i][j];
            s[1][i][j] = r[i][j] * u[i][j];
            s[2][i][j] = r[i][j] * v[i][j];
            s[3][i][j] = 0;
            s[4][i][j] = r[i][j] * (e[i][j] + 1 / 2 * (u[i][j] * u[i][j] + v[i][j] * v[i][j]));
            tauxx(i, j, 2, txx[i][j]);
            tauxy(i, j, 2, txy[i][j]);
            qcx(i, j, 2, qx[i][j]);
            f[0][i][j] = s[1][i][j];
            f[1][i][j] = s[1][i][j] * s[1][i][j] / s[0][i][j] + p[i][j] - txx[i][j];
            f[2][i][j] = s[1][i][j] * s[2][i][j] / s[0][i][j] - txy[i][j];
            f[3][i][j] = 0;
            f[4][i][j] = (s[4][i][j] + p[i][j])*s[1][i][j] / s[0][i][j] + qx[i][j] - s[1][i][j] / s[0][i][j] * txx[i][j] - s[2][i][j] / s[0][i][j] * txy[i][j];
            tauyy(i, j, 2, tyy[i][j]);
            tauxy(i, j, 4, txy[i][j]);
            qcy(i, j, 2, qy[i][j]);
            g[0][i][j] = s[2][i][j];
            g[1][i][j] = s[1][i][j] * s[2][i][j] / s[0][i][j] - txy[i][j];
            g[2][i][j] = s[2][i][j] * s[2][i][j] / s[0][i][j] + p[i][j] - tyy[i][j];
            g[3][i][j] = 0;
            g[4][i][j] = (s[4][i][j] + p[i][j])*s[2][i][j] / s[0][i][j] + qy[i][j] - s[1][i][j] / s[0][i][j] * txy[i][j] - s[2][i][j] / s[0][i][j] * tyy[i][j];
        }
    }
    // Corrector step
    for (int i = 1; i < imax - 1; i++) {
        for (int j = 1; j < jmax - 1; j++) {
            for (int k = 0; k < 5; k++) {
                dsdt = (f[i - 1][j][k] - f[i][j][k]) / dx + (g[i][j - 1][k] - g[i][j][k]) / dy;
                s[i][j][k] = 0.5*(sl[i][j][k] + sb[i][j][k] + dtime * dsdt);
            }
            // decode the variables
            r[i][j] = sb[0][i][j];
            u[i][j] = sb[1][i][j] / sb[0][i][j];
            v[i][j] = sb[2][i][j] / sb[0][i][j];
            e[i][j] = sb[4][i][j] / sb[0][i][j] - 0.5*v[i][j] * v[i][j];
            if (e[i][j] < 0) { e[i][j] = 0; }
            t[i][j] = e[i][j] / cv;
            p[i][j] = r[i][j] * gascon*t[i][j];
            dynvis(tr, t[i][j], visr, vis[i][j]);
            thermc(vis[i][j], cp, pr, kc[i][j]);
            a[i][j] = sqrt(gamma*gascon*t[i][j]);
            ma[i][j] = v[i][j] / a[i][j];
        }
    }
    // Update the boundary conditions
    bc();
}
```
```
#include"_NS_H.h"

void mdot()
{
    // Check whether mass conservation holds
    FILE *fp3;
    fp3 = fopen("dmass.txt", "a");
    double massin = 0.0, massout = 0.0;
    double dmass;
    for (int j = 0; j < jmax; j++) {
        massin += r[0][j] * u[0][j];
        massout += r[imax - 1][j] * u[imax - 1][j];
    }
    dmass = fabs(massout - massin) / massin * 100;
    if (dmass > 1.0) {
        fprintf(fp3, "%f ", dmass);
        fprintf(fp3, "\n");
        fclose(fp3);
    }
}
```
```
#include"_NS_H.h"

void output()
{
    double x[imax], y[jmax];
    FILE *fp2;
    fp2 = fopen("output.txt", "a");
    for (int i = 0; i < imax; i++) { x[i] = dx * i; }
    for (int j = 0; j < jmax; j++) { y[j] = dy * j; }
    for (int i = 0; i < imax; i++) {
        for (int j = 0; j < jmax; j++) {
            fprintf(fp2, "%f ", x[i]);
            fprintf(fp2, "%f ", y[i]);
            fprintf(fp2, "%f ", u[i][j] / u0);
            fprintf(fp2, "%f ", v[i][j]);
            fprintf(fp2, "%f ", p[i][j] / p0);
            fprintf(fp2, "%f ", t[i][j] / t0);
            fprintf(fp2, "%f ", r[i][j] / r0);
            fprintf(fp2, "%f ", a[i][j]);
            fprintf(fp2, "%f ", ma[i][j]);
            fprintf(fp2, "\n");
        }
    }
    fclose(fp2);
}
```
```
#include"_NS_H.h"

void qcx(int i, int j, int icase, double qxout)
{
    double dtdx;
    switch (icase) {
    // forward differencing
    case 1:
        if (i == 0) { dtdx = (t[1][j] - t[0][j]) / dx; }
        else { dtdx = (t[i][j] - t[i - 1][j]) / dx; }
        break;
    // backward differencing
    case 2:
        if (i == imax - 1) { dtdx = (t[imax - 1][j] - t[imax - 2][j]) / dx; }
        else { dtdx = (t[i + 1][j] - t[i][j]) / dx; }
        break;
    }
    qxout = -1 * kc[i][j] * dtdx;
}
```
```
#include"_NS_H.h"

void qcy(int i, int j, int icase, double qyout)
{
    double dtdy;
    switch (icase) {
    // forward differencing
    case 1:
        if (j == 0) { dtdy = (t[i][1] - t[i][0]) / dy; }
        else { dtdy = (t[i][j] - t[i][j - 1]) / dy; }
        break;
    // backward differencing
    case 2:
        if (j == jmax - 1) { dtdy = (t[i][jmax-1] - t[i][jmax-2]) / dy; }
        else { dtdy = (t[i][j+1] - t[i][j]) / dy; }
        break;
    }
    qyout = -1 * kc[i][j] * dtdy;
}
```
```
#include"_NS_H.h"

void tauxx(int i, int j, int icase, double txx)
{
    double dudx, dvdy;
    switch (icase) {
    // dF/dx: forward differencing
    case 1:
        if (j == 0) { dvdy = (v[i][1] - v[i][0]) / dy; }
        else if (j == jmax - 1) { dvdy = (v[i][jmax - 1] - v[i][jmax - 2]) / dy; }
        else { dvdy = (v[i][j + 1] - v[i][j - 1]) / 2 / dy; }
        if (i == 0) { dudx = (u[1][j] - u[0][j]) / dx; }
        else { dudx = (u[i][j] - u[i - 1][j]) / dx; }
        break;
    // dF/dx: backward differencing
    case 2:
        if (j == 0) { dvdy = (v[i][1] - v[i][0]) / dy; }
        else if (j == jmax - 1) { dvdy = (v[i][jmax - 1] - v[i][jmax - 2]) / dy; }
        else { dvdy = (v[i][j + 1] - v[i][j - 1]) / 2 / dy; }
        if (i == imax-1) { dudx = (u[imax - 1][j] - u[imax - 2][j]) / dx; }
        else { dudx = (u[i + 1][j] - u[i][j]) / dx; }
        break;
    }
    txx = vis[i][j] * (4 / 3 * dudx - 2 / 3 * dvdy);
}
```
```
#include"_NS_H.h"

void tauxy(int i, int j, int icase, double txy)
{
    double dudy, dvdx;
    switch (icase) {
    // dF/dx: forward differencing
    case 1:
        if (j == 0) { dudy = (u[i][1] - u[i][0]) / dy; }
        else if (j == jmax - 1) { dudy = (u[i][jmax - 1] - u[i][jmax - 2]) / dy; }
        else { dudy = (u[i][j + 1] - u[i][j - 1]) / 2 / dy; }
        if (i == 0) { dvdx = (v[1][j] - v[0][j]) / dx; }
        else { dvdx = (v[i][j] - v[i - 1][j]) / dx; }
        break;
    // dF/dx: backward differencing
    case 2:
        if (j == 0) { dudy = (u[i][1] - u[i][0]) / dy; }
        else if (j == jmax - 1) { dudy = (u[i][jmax - 1] - u[i][jmax - 2]) / dy; }
        else { dudy = (u[i][j + 1] - u[i][j - 1]) / 2 / dy; }
        if (i == imax - 1) { dvdx = (v[imax - 1][j] - v[imax - 2][j]) / dx; }
        else { dvdx = (v[i + 1][j] - v[i][j]) / dx; }
        break;
    // dG/dy: forward differencing
    case 3:
        if (i == 0) { dvdx = (v[1][j] - v[0][j]) / dx; }
        else if (i == imax - 1) { dvdx = (v[imax - 1][j] - v[imax - 2][j]) / dx; }
        else { dvdx = (v[i + 1][j] - v[i - 1][j]) / 2 / dx; }
        if (j == 0) { dudy = (u[i][1] - u[i][0]) / dy; }
        else { dudy = (u[i][j] - u[i][j - 1]) / dy; }
        break;
    // dG/dy: backward differencing
    case 4:
        if (i == 0) { dvdx = (v[1][j] - v[0][j]) / dx; }
        else if (i == imax - 1) { dvdx = (v[imax - 1][j] - v[imax - 2][j]) / dx; }
        else { dvdx = (v[i + 1][j] - v[i - 1][j]) / 2 / dx; }
        if (j == jmax-1) { dudy = (u[i][jmax - 1] - u[i][jmax - 2]) / dy; }
        else { dudy = (u[i][j + 1] - u[i][j]) / dy; }
        break;
    }
    txy = vis[i][j] * (dudy + dvdx);
}
```
```
#include"_NS_H.h"

void tauyy(int i, int j, int icase, double tyy)
{
    double dudx, dvdy;
    switch (icase) {
    // dG/dy: forward differencing
    case 1:
        if (i == 0) { dudx = (u[1][j] - u[0][j]) / dx; }
        else if (i == imax-1) { dudx = (u[imax - 1][j] - u[imax - 2][j]) / dx; }
        else { dudx = (u[i + 1][j] - u[i - 1][j]) / 2 / dx; }
        if (j == 0) { dvdy = (v[i][1] - v[i][0]) / dy; }
        else { dvdy = (v[i][j] - v[i][j - 1]) / dy; }
        break;
    // dG/dy: backward differencing
    case 2:
        if (i == 0) { dudx = (u[1][j] - u[0][j]) / dx; }
        else if (i == imax - 1) { dudx = (u[imax - 1][j] - u[imax - 2][j]) / dx; }
        else { dudx = (u[i + 1][j] - u[i - 1][j]) / 2 / dx; }
        if (j == jmax - 1) { dvdy = (v[i][jmax - 1] - v[i][jmax - 2]) / dy; }
        else { dvdy = (v[i][j + 1] - v[i][j]) / dy; }
        break;
    }
    tyy = vis[i][j] * (4 / 3 * dvdy - 2 / 3 * dudx);
}
```
```
#include"_NS_H.h"

void thermc(double vis0, double cp, double pr, double kcout)
{
    kcout = vis0 * cp / pr;
}
```
```
#include"_NS_H.h"

double vv[imax][jmax], dtt[imax][jmax];
double rex[imax][jmax], rey[imax][jmax];

void tstep()
{
    double vvmax = 0;
    double dt1, dt2, dt3;
    double dtmin = 1;
    double tf = 0.5;
    double rexmax = 0, reymax = 0;
    for (int i = 1; i < imax - 1; i++) {
        for (int j = 1; j < jmax - 1; j++) {
            vv[i][j] = 4 / 3*gamma*vis[i][j] * vis[i][j] / pr / r[i][j];
            rex[i][j] = r[i][j] * u[i][j] * dx / vis[i][j];
            rey[i][j] = r[i][j] * v[i][j] * dy / vis[i][j];
            if (rexmax < rex[i][j]) { rexmax = rex[i][j]; }
            if (reymax < rey[i][j]) { reymax = rey[i][j]; }
            if (vvmax < vv[i][j]) { vvmax = vv[i][j]; }
        }
    }
    for (int i = 1; i < imax - 1; i++) {
        for (int j = 1; j < jmax - 1; j++) {
            dt1 = sqrt(u[i][j]) / dx + sqrt(v[i][j]) / dy;
            dt2 = dt1 + a[i][j] * sqrt(1 / (dx*dx) + 1 / (dy*dy));
            dt3 = dt2 + 2 * vvmax*(1 / dx / dx + 1 / dy / dy);
            dtt[i][j] = 1 / dt3;
            if (dtmin > dtt[i][j]) { dtmin = dtt[i][j]; }
        }
    }
    dtime = tf * dtmin;
}
```
A question about the setup_variables method in Python
Below is a Python code snippet:
```
try:
    m = __import__(module)
    opt_modules.append(m)
    m.setup_variables(vars)
    print "loaded module" + module
except ImportError:
    print "failed load module" + module
    Exit(1)
```
I'd like to know where to find documentation on the setup_variables method in this code. Has anyone used this method and knows exactly what it does? Or has anyone seen it in a book?
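For context, a hedged reading of the snippet: setup_variables is not part of the Python standard library, so there is no official documentation to look up. Judging from Exit(1), this looks like an SCons-style options loader, and setup_variables is simply a hook function each optional module is expected to define (duck typing), so the loader can hand every plugin the shared vars object. A minimal sketch of that convention (file and variable names below are my own invention):
```
# file: my_module.py  (hypothetical plugin)
def setup_variables(vars):
    # register this module's configuration variables in the shared container
    vars['use_feature_x'] = True

# file: loader.py  (hypothetical host, mirroring the snippet above)
opt_modules = []
vars = {}
for module in ['my_module']:
    try:
        m = __import__(module)
        opt_modules.append(m)
        m.setup_variables(vars)  # any module defining this hook works
        print("loaded module " + module)
    except ImportError:
        print("failed to load module " + module)
```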
Why does testing right after training differ from testing a reloaded model in TensorFlow? One result is very good, the other noticeably worse
After training the model in TensorFlow, if I test within the same session I get good results, but if I reload the saved model and test with it, the results are much worse, and I don't understand why. Note: the test data is identical in both cases. Here are the model results: training set: loss 0.384, acc 0.931; validation set: loss 0.212, acc 0.968; test set in the same session right after training: acc 0.96; test after loading the saved model: acc 0.29.
```
def create_model(hps):
    global_step = tf.Variable(tf.zeros([], tf.float64), name = 'global_step', trainable = False)
    scale = 1.0 / math.sqrt(hps.num_embedding_size + hps.num_lstm_nodes[-1]) / 3.0
    print(type(scale))
    gru_init = tf.random_normal_initializer(-scale, scale)
    with tf.variable_scope('Bi_GRU_nn', initializer = gru_init):
        for i in range(hps.num_lstm_layers):
            cell_bw = tf.contrib.rnn.GRUCell(hps.num_lstm_nodes[i], activation = tf.nn.relu, name = 'cell-bw')
            cell_bw = tf.contrib.rnn.DropoutWrapper(cell_bw, output_keep_prob = dropout_keep_prob)
            cell_fw = tf.contrib.rnn.GRUCell(hps.num_lstm_nodes[i], activation = tf.nn.relu, name = 'cell-fw')
            cell_fw = tf.contrib.rnn.DropoutWrapper(cell_fw, output_keep_prob = dropout_keep_prob)
        rnn_outputs, _ = tf.nn.bidirectional_dynamic_rnn(cell_bw, cell_fw, inputs, dtype=tf.float32)
        embeddedWords = tf.concat(rnn_outputs, 2)
        finalOutput = embeddedWords[:, -1, :]
        outputSize = hps.num_lstm_nodes[-1] * 2  # the final output of a bidirectional LSTM is the concatenation of fw and bw, hence * 2
        last = tf.reshape(finalOutput, [-1, outputSize])  # reshape to the input dimension of the fully connected layer
        last = tf.layers.batch_normalization(last, training = is_training)
    fc_init = tf.uniform_unit_scaling_initializer(factor = 1.0)
    with tf.variable_scope('fc', initializer = fc_init):
        fc1 = tf.layers.dense(last, hps.num_fc_nodes, name = 'fc1')
        fc1_batch_normalization = tf.layers.batch_normalization(fc1, training = is_training)
        fc_activation = tf.nn.relu(fc1_batch_normalization)
        logits = tf.layers.dense(fc_activation, hps.num_classes, name = 'fc2')
    with tf.name_scope('metrics'):
        softmax_loss = tf.nn.sparse_softmax_cross_entropy_with_logits(logits = logits, labels = tf.argmax(outputs, 1))
        loss = tf.reduce_mean(softmax_loss)
        # [0, 1, 5, 4, 2] -> argmax: 2, because the largest value is at position 2
        y_pred = tf.argmax(tf.nn.softmax(logits), 1, output_type = tf.int64, name = 'y_pred')
        # compute accuracy: how many predictions are correct
        correct_pred = tf.equal(tf.argmax(outputs, 1), y_pred)
        # tf.cast converts the data to tf.float32
        accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
    with tf.name_scope('train_op'):
        tvar = tf.trainable_variables()
        for var in tvar:
            print('variable name: %s' % (var.name))
        grads, _ = tf.clip_by_global_norm(tf.gradients(loss, tvar), hps.clip_lstm_grads)
        optimizer = tf.train.AdamOptimizer(hps.learning_rate)
        train_op = optimizer.apply_gradients(zip(grads, tvar), global_step)
    # return((inputs, outputs, is_training), (loss, accuracy, y_pred), (train_op, global_step))
    return((inputs, outputs), (loss, accuracy, y_pred), (train_op, global_step))

placeholders, metrics, others = create_model(hps)
content, labels = placeholders
loss, accuracy, y_pred = metrics
train_op, global_step = others

def val_steps(sess, x_batch, y_batch, writer = None):
    loss_val, accuracy_val = sess.run([loss, accuracy],
                                      feed_dict = {inputs: x_batch, outputs: y_batch,
                                                   is_training: hps.val_is_training, dropout_keep_prob: 1.0})
    return loss_val, accuracy_val

loss_summary = tf.summary.scalar('loss', loss)
accuracy_summary = tf.summary.scalar('accuracy', accuracy)
# merge all the summaries together
merged_summary = tf.summary.merge_all()
# summary used for testing
merged_summary_test = tf.summary.merge([loss_summary, accuracy_summary])

LOG_DIR = '.'
run_label = 'run_Bi-GRU_Dropout_tensorboard'
run_dir = os.path.join(LOG_DIR, run_label)
if not os.path.exists(run_dir):
    os.makedirs(run_dir)
train_log_dir = os.path.join(run_dir, timestamp, 'train')
test_los_dir = os.path.join(run_dir, timestamp, 'test')
if not os.path.exists(train_log_dir):
    os.makedirs(train_log_dir)
if not os.path.join(test_los_dir):
    os.makedirs(test_los_dir)
# the handle returned by the saver can be used to save training snapshots into the folder
saver = tf.train.Saver(tf.global_variables(), max_to_keep = 5)
# training code
init_op = tf.global_variables_initializer()
train_keep_prob_value = 0.2
test_keep_prob_value = 1.0
# computing at every single step would be slow, so only store once every 100 steps
output_summary_every_steps = 100
num_train_steps = 1000
# how often to save the model
output_model_every_steps = 500
# how often to test on the test set
test_model_all_steps = 4000
i = 0

session_conf = tf.ConfigProto(
    gpu_options = tf.GPUOptions(allow_growth=True),
    allow_soft_placement = True,
    log_device_placement = False)

with tf.Session(config = session_conf) as sess:
    sess.run(init_op)
    # write loss and accuracy to file during training; pass sess.graph if you want the graph shown in tensorboard
    train_writer = tf.summary.FileWriter(train_log_dir, sess.graph)
    # likewise save the test results to tensorboard, without the graph
    test_writer = tf.summary.FileWriter(test_los_dir)
    batches = batch_iter(list(zip(x_train, y_train)), hps.batch_size, hps.num_epochs)
    for batch in batches:
        train_x, train_y = zip(*batch)
        eval_ops = [loss, accuracy, train_op, global_step]
        should_out_summary = ((i + 1) % output_summary_every_steps == 0)
        if should_out_summary:
            eval_ops.append(merged_summary)
        # feed the three placeholders in
        # run the graph for loss, accuracy, train_op, global_step
        eval_ops.append(merged_summary)
        outputs_train = sess.run(eval_ops,
                                 feed_dict={
                                     inputs: train_x,
                                     outputs: train_y,
                                     dropout_keep_prob: train_keep_prob_value,
                                     is_training: hps.train_is_training
                                 })
        loss_train, accuracy_train = outputs_train[0:2]
        if should_out_summary:
            # since we want the summary after every 100 steps, should_out_summary = ((i + 1) % output_summary_every_steps == 0) is True,
            # and the summary op was appended at the end of eval_ops, so the summary result is the last element
            train_summary_str = outputs_train[-1]
            # write the result to the train tensorboard folder; training starts from 0, so add 1 for the step number
            train_writer.add_summary(train_summary_str, i + 1)
            test_summary_str = sess.run([merged_summary_test],
                                        feed_dict = {inputs: x_dev, outputs: y_dev,
                                                     dropout_keep_prob: 1.0,
                                                     is_training: hps.val_is_training})[0]
            test_writer.add_summary(test_summary_str, i + 1)
        current_step = tf.train.global_step(sess, global_step)
        if (i + 1) % 100 == 0:
            print("Step: %5d, loss: %3.3f, accuracy: %3.3f" % (i + 1, loss_train, accuracy_train))
        # validate every 500 batches
        if (i + 1) % 500 == 0:
            loss_eval, accuracy_eval = val_steps(sess, x_dev, y_dev)
            print("Step: %5d, val_loss: %3.3f, val_accuracy: %3.3f" % (i + 1, loss_eval, accuracy_eval))
        if (i + 1) % output_model_every_steps == 0:
            path = saver.save(sess, os.path.join(out_dir, 'ckp-%05d' % (i + 1)))
            print("Saved model checkpoint to {}\n".format(path))
            print('model saved to ckp-%05d' % (i + 1))
        if (i + 1) % test_model_all_steps == 0:
            # test_loss, test_acc, all_predictions = sess.run([loss, accuracy, y_pred], feed_dict = {inputs: x_test, outputs: y_test, dropout_keep_prob: 1.0})
            test_loss, test_acc, all_predictions = sess.run([loss, accuracy, y_pred],
                                                            feed_dict = {inputs: x_test, outputs: y_test,
                                                                         is_training: hps.val_is_training,
                                                                         dropout_keep_prob: 1.0})
            print("test_loss: %3.3f, test_acc: %3.3d" % (test_loss, test_acc))
            batches = batch_iter(list(x_test), 128, 1, shuffle=False)
            # Collect the predictions here
            all_predictions = []
            for x_test_batch in batches:
                batch_predictions = sess.run(y_pred, {inputs: x_test_batch,
                                                      is_training: hps.val_is_training,
                                                      dropout_keep_prob: 1.0})
                all_predictions = np.concatenate([all_predictions, batch_predictions])
            correct_predictions = float(sum(all_predictions == y.flatten()))
            print("Total number of test examples: {}".format(len(y_test)))
            print("Accuracy: {:g}".format(correct_predictions/float(len(y_test))))
            test_y = y_test.argmax(axis = 1)
            # build the confusion matrix
            conf_mat = confusion_matrix(test_y, all_predictions)
            fig, ax = plt.subplots(figsize = (4,2))
            sns.heatmap(conf_mat, annot=True, fmt = 'd',
                        xticklabels = cat_id_df.category_id.values,
                        yticklabels = cat_id_df.category_id.values)
            font_set = FontProperties(fname = r"/usr/share/fonts/truetype/wqy/wqy-microhei.ttc", size=15)
            plt.ylabel(u'实际结果', fontsize = 18, fontproperties = font_set)
            plt.xlabel(u'预测结果', fontsize = 18, fontproperties = font_set)
            plt.savefig('./test.png')
            print('accuracy %s' % accuracy_score(all_predictions, test_y))
            print(classification_report(test_y, all_predictions, target_names = cat_id_df['category_name'].values))
            print(classification_report(test_y, all_predictions))
        i += 1
```
The model code is above; could any experts please take a look and explain why this happens?
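Two hedged things worth checking in the code above. First, the graph uses tf.layers.batch_normalization(training=is_training), but train_op is built without a dependency on tf.GraphKeys.UPDATE_OPS, so the batch-norm moving mean/variance used at inference time are never updated; that alone can make a freshly restored model evaluated with is_training=False score near chance while same-session evaluation looks fine. Second, when reloading, make sure nothing runs the variable initializer after saver.restore(), or the trained weights are overwritten with fresh random values. A minimal sketch of both fixes (names follow the snippet above; the checkpoint directory is an assumption):
```
# inside create_model, before building train_op: make training also run the
# batch-norm moving-average update ops
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(update_ops):
    train_op = optimizer.apply_gradients(zip(grads, tvar), global_step)

# at evaluation time: rebuild the same graph first, then restore LAST, and do
# NOT call tf.global_variables_initializer() after restoring
saver = tf.train.Saver()
with tf.Session() as sess:
    ckpt = tf.train.latest_checkpoint('./run_Bi-GRU_Dropout_tensorboard')  # assumed dir
    saver.restore(sess, ckpt)
    # acc = sess.run(accuracy, feed_dict={inputs: x_test, outputs: y_test,
    #                                     is_training: False, dropout_keep_prob: 1.0})
```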