OpenGL uniform blocks have an index but no location

An OpenGL shader has two separate concepts: stand-alone uniforms and uniform blocks. Every stand-alone uniform has both a uniform index and a uniform location: glGetActiveUniform takes the index and returns the uniform's type and name, the name is then passed to glGetUniformLocation to obtain the location, and finally a glUniform* function matching the type is used to upload the data. A uniform block, by contrast, has only a uniform block index, whose role and status resemble those of a uniform location: to supply data to a uniform block, the block index is bound to a binding point, and a uniform buffer object is then attached to that same binding point.

Why does a uniform block have an index but no location?
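For comparison, the two upload paths look roughly like this. This is an illustrative sketch, not runnable on its own: it assumes an already-linked program `prog` containing a stand-alone uniform `u_color` and a uniform block named `Matrices`, and it omits context setup and error checking.

```c
/* stand-alone uniform: index -> (name, type) -> location -> glUniform* */
GLint loc = glGetUniformLocation(prog, "u_color");
glUniform4f(loc, 1.0f, 0.0f, 0.0f, 1.0f);

/* uniform block: block index -> binding point -> uniform buffer object */
GLuint blockIndex = glGetUniformBlockIndex(prog, "Matrices");
glUniformBlockBinding(prog, blockIndex, 0);   /* attach block to binding point 0 */

GLuint ubo;
glGenBuffers(1, &ubo);
glBindBuffer(GL_UNIFORM_BUFFER, ubo);
glBufferData(GL_UNIFORM_BUFFER, 16 * sizeof(float), NULL, GL_DYNAMIC_DRAW);
glBindBufferBase(GL_UNIFORM_BUFFER, 0, ubo);  /* attach buffer to binding point 0 */
```

The asymmetry suggests the answer to the question: a location names a write target inside program-owned uniform storage that glUniform* fills directly, whereas a block's storage lives in an application-owned buffer object, so the API only needs an identifier (the index) to route the block to a binding point; there is nothing left for a location to address.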

Other related questions
Teleport Out!: how to solve it
Problem Description

You are in a rectangular maze and you would like to leave the maze as quickly as possible. The maze is a rectangular grid of square locations. Some locations are blocked. Some other locations are exits. If you arrive at an exit location, you can immediately leave the maze. You may walk one step at a time, onto one of the locations adjacent to your current location. Two locations are adjacent if they share a side. That is, you can only move one step North, South, East or West. Of course, you cannot step off the maze, and you cannot step onto a blocked location. In addition, at any step, you may choose to use your teleport device. This device will send you to a random non-blocked location inside the maze with uniform probability (including, possibly, the one where you currently are standing!). If the device happens to send you onto a spot that is also an exit, then you leave the maze immediately. Hooray! The only way to leave the maze is by moving onto an exit (either by teleport or walking); you may not walk off the boundary of the maze. Write a program to calculate the expected number of steps you need in order to leave the maze. Assume that you choose your actions (movements and use of the teleport device) optimally in order to minimize the expected number of steps to leave the maze. Using the teleport device counts as one step.

Input

There will be multiple test cases. Each test case starts with a line containing two positive integers R and C (R <= 200, C <= 200). Then, the next R lines each contain C characters, representing the locations of the maze. The characters will be one of the following:

E : represents an exit; there will be at least one 'E' in every maze.
Y : represents your initial location; there will be exactly one 'Y' in every maze.
X : represents a blocked location.
. : represents an empty space.

You may move/teleport onto any location that is marked 'E', 'Y' or '.'. The end of input is marked by a line with two space-separated 0's.

Output

For each test case, print one line containing the minimum expected number of steps required to leave the maze, given that you make your choices optimally to minimize this value. Print your result to 3 decimal places. Do not print any blank lines between outputs.

Sample Input

2 1
E
Y
2 2
E.
.Y
3 3
EX.
XX.
..Y
3 3
EXY
.X.
...
0 0

Sample Output

1.000
2.000
6.000
3.250
Uniform Generator: how to design the computation
Problem Description

Computer simulations often require random numbers. One way to generate pseudo-random numbers is via a function of the form

seed(x+1) = [seed(x) + STEP] % MOD

where '%' is the modulus operator. Such a function will generate pseudo-random numbers (seed) between 0 and MOD-1. One problem with functions of this form is that they will always generate the same pattern over and over. In order to minimize this effect, selecting the STEP and MOD values carefully can result in a uniform distribution of all values between (and including) 0 and MOD-1. For example, if STEP = 3 and MOD = 5, the function will generate the series of pseudo-random numbers 0, 3, 1, 4, 2 in a repeating cycle. In this example, all of the numbers between and including 0 and MOD-1 will be generated every MOD iterations of the function. Note that by the nature of the function to generate the same seed(x+1) every time seed(x) occurs means that if a function will generate all the numbers between 0 and MOD-1, it will generate pseudo-random numbers uniformly with every MOD iterations. If STEP = 15 and MOD = 20, the function generates the series 0, 15, 10, 5 (or any other repeating series if the initial seed is other than 0). This is a poor selection of STEP and MOD because no initial seed will generate all of the numbers from 0 and MOD-1. Your program will determine if choices of STEP and MOD will generate a uniform distribution of pseudo-random numbers.

Input

Each line of input will contain a pair of integers for STEP and MOD in that order (1 <= STEP, MOD <= 100000).

Output

For each line of input, your program should print the STEP value right-justified in columns 1 through 10, the MOD value right-justified in columns 11 through 20 and either "Good Choice" or "Bad Choice" left-justified starting in column 25. The "Good Choice" message should be printed when the selection of STEP and MOD will generate all the numbers between and including 0 and MOD-1 when MOD numbers are generated. Otherwise, your program should print the message "Bad Choice". After each output test set, your program should print exactly one blank line.

Sample Input

3 5
15 20
63923 99999

Sample Output

         3         5    Good Choice
        15        20    Bad Choice
     63923     99999    Good Choice
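The generator seed → (seed + STEP) % MOD steps through the additive subgroup generated by STEP, so it visits all of 0..MOD-1 exactly when gcd(STEP, MOD) = 1; there is no need to simulate the sequence. A short sketch, including the column layout the problem asks for:

```python
from math import gcd

def choice(step, mod):
    # seed -> (seed + STEP) % MOD cycles through MOD / gcd(STEP, MOD) distinct
    # values, so it covers all of 0..MOD-1 exactly when STEP and MOD are coprime
    return "Good Choice" if gcd(step, mod) == 1 else "Bad Choice"

for step, mod in [(3, 5), (15, 20), (63923, 99999)]:
    # STEP right-justified in columns 1-10, MOD in 11-20, verdict from column 25
    print("%10d%10d    %s" % (step, mod, choice(step, mod)))
    print()  # the problem asks for a blank line after each test set
```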
What is the difference between call and __call__ in a Python class?
When implementing linear regression with TensorFlow's low-level API, the model is defined as:

```python
class Model(object):
    def __init__(self):
        self.w = tf.random.uniform([1])
        self.b = tf.random.uniform([1])

    def __call__(self, x):
        return self.w * x + self.b
```

But with Keras it is written like this:

```python
class Model(tf.keras.Model):
    def __init__(self):
        super(Model, self).__init__()
        self.dense = tf.keras.layers.Dense(1)

    def __call__(self, x):
        return self.dense(x)
```

which raises the error `__call__() got an unexpected keyword argument 'training'`, and `__call__` has to be renamed to `call`. What is the difference between the two?
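The error can be reproduced without TensorFlow. `tf.keras.Model` defines `__call__` itself (it builds the layer, forwards the `training` flag, and so on) and delegates the actual computation to the `call` method you override; overriding `__call__` replaces that framework wrapper, which then no longer accepts `training`. A minimal sketch (the class names here are made up for illustration):

```python
class Layerish:
    # stands in for the framework base class: its __call__ does the
    # bookkeeping (here, accepting `training`) and delegates to call()
    def __call__(self, x, training=False):
        return self.call(x)

class Good(Layerish):
    def call(self, x):        # override call(): the wrapper stays intact
        return x * 2

class Bad(Layerish):
    def __call__(self, x):    # override __call__(): the wrapper is gone
        return x * 3

print(Good()(10, training=True))   # 20: wrapper accepts `training`, then delegates
try:
    Bad()(10, training=True)
except TypeError as e:
    print(e)                       # ... got an unexpected keyword argument 'training'
```

In the plain-Python linear-regression model there is no base class doing any wrapping, so defining `__call__` directly is fine; under Keras the computation belongs in `call`.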
The Seven Percent Solution: how to solve it
Problem Description

Uniform Resource Identifiers (or URIs) are strings like http://icpc.baylor.edu/icpc/, mailto:foo@bar.org, ftp://127.0.0.1/pub/linux, or even just readme.txt that are used to identify a resource, usually on the Internet or a local computer. Certain characters are reserved within URIs, and if a reserved character is part of an identifier then it must be percent-encoded by replacing it with a percent sign followed by two hexadecimal digits representing the ASCII code of the character. A table of seven reserved characters and their encodings is shown below. Your job is to write a program that can percent-encode a string of characters.

Character                 Encoding
" " (space)               %20
"!" (exclamation point)   %21
"$" (dollar sign)         %24
"%" (percent sign)        %25
"(" (left parenthesis)    %28
")" (right parenthesis)   %29
"*" (asterisk)            %2a

Input

The input consists of one or more strings, each 1–79 characters long and on a line by itself, followed by a line containing only "#" that signals the end of the input. The character "#" is used only as an end-of-input marker and will not appear anywhere else in the input. A string may contain spaces, but not at the beginning or end of the string, and there will never be two or more consecutive spaces.

Output

For each input string, replace every occurrence of a reserved character in the table above by its percent-encoding, exactly as shown, and output the resulting string on a line by itself. Note that the percent-encoding for an asterisk is %2a (with a lowercase "a") rather than %2A (with an uppercase "A").

Sample Input

Happy Joy Joy!
http://icpc.baylor.edu/icpc/
plain_vanilla
(**)
?
the 7% solution
#

Sample Output

Happy%20Joy%20Joy%21
http://icpc.baylor.edu/icpc/
plain_vanilla
%28%2a%2a%29
?
the%207%25%20solution
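Since '%' is itself a reserved character, the safe approach is a single left-to-right pass that maps each character through the table exactly once, so an emitted '%' is never re-encoded. A sketch:

```python
ENC = {' ': '%20', '!': '%21', '$': '%24', '%': '%25',
       '(': '%28', ')': '%29', '*': '%2a'}

def percent_encode(s):
    # one pass over the original characters; output is never rescanned
    return ''.join(ENC.get(ch, ch) for ch in s)

print(percent_encode("Happy Joy Joy!"))    # Happy%20Joy%20Joy%21
print(percent_encode("(**)"))              # %28%2a%2a%29
print(percent_encode("the 7% solution"))   # the%207%25%20solution
```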
Aligning Two Shapes: the alignment problem
Problem Description

The knowledge about Shape from Wikipedia: Simple two-dimensional shapes can be described by basic geometry such as points, line, curves, plane, and so on. (A shape whose points belong all the same plane is called a plane figure.) Most shapes occurring in the physical world are complex. Some, such as plant structures and coastlines, may be so arbitrary as to defy traditional mathematical description – in which case they may be analyzed by differential geometry, or as fractals.

Rigid shape definition: In geometry, two subsets of a Euclidean space have the same shape if one can be transformed to the other by a combination of translations, rotations (together also called rigid transformations), and uniform scalings. In other words, the shape of a set is all the geometrical information that is invariant to position (including rotation) and scale. Having the same shape is an equivalence relation, and accordingly a precise mathematical definition of the notion of shape can be given as being an equivalence class of subsets of a Euclidean space having the same shape.

From the message above, we know that we can use the operations of translations, rotations and scaling to align two shapes. Now we assume that the shapes we describe in the problem are formed with a set of points. For example, a shape S = {x1, y1, x2, y2, ..., xn, yn}. In the picture below, we use four points to represent a square. The two squares are both centred on the origin (0, 0). After the operation of scaling, S1 coincides with S2. To simplify the problem, we suppose two shapes, X1 and X2, centred on the origin (0, 0) initially. That means you can only use the operations of rotations and scaling, but not the translations. We wish to scale and rotate S1 by (s, θ) so as to minimize the sum of the square distances between the points of S1 and S2. Rotation means a two-dimensional object rotates around a center (or point). Scaling means a linear transformation that enlarges or diminishes objects.

Input

Each test case contains a single integer t (t <= 1000), indicating the size of the two shapes (i.e., each shape has t points). The next following t lines each contain two integers (xi, yi) representing the shape of S1, then the following t lines stand for the shape of S2. The input is terminated by a set starting with t = 0.

Output

For each test case, you should output one line containing a real number representing the minimum distance of the two shapes after the operations of rotation and scaling. The number should be rounded to three fractional digits.

Sample Input

4
1 1
1 -1
-1 1
-1 -1
2 2
2 -2
-2 2
-2 -2
0

Sample Output

0.000
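Writing the scaled rotation as the matrix M = [[a, -b], [b, a]] with a = s·cosθ and b = s·sinθ, the objective Σᵢ |M·xᵢ − yᵢ|² is an unconstrained quadratic in (a, b), so the minimum has a closed form. A sketch, assuming (as is usual for this problem) that the points of S1 and S2 correspond by input order:

```python
def min_aligned_distance(s1, s2):
    # the four sums the closed form needs
    sx = sum(p * p + q * q for p, q in s1)                     # sum |x_i|^2
    sy = sum(u * u + v * v for u, v in s2)                     # sum |y_i|^2
    c = sum(p * u + q * v for (p, q), (u, v) in zip(s1, s2))   # sum x_i . y_i
    s = sum(p * v - q * u for (p, q), (u, v) in zip(s1, s2))   # sum of 2D cross products
    if sx == 0:
        return sy  # S1 collapses to the origin; no (s, theta) can help
    # minimum over a, b is attained at a = c/sx, b = s/sx, giving:
    return sy - (c * c + s * s) / sx

square1 = [(1, 1), (1, -1), (-1, 1), (-1, -1)]
square2 = [(2, 2), (2, -2), (-2, 2), (-2, -2)]
print("%.3f" % min_aligned_distance(square1, square2))  # 0.000
```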
Parsing URL: how to extract the domain
Problem Description

In computing, a Uniform Resource Locator or Universal Resource Locator (URL) is a character string that specifies where a known resource is available on the Internet and the mechanism for retrieving it. The syntax of a typical URL is:

scheme://domain:port/path?query_string#fragment_id

In this problem, the scheme and domain are required in every URL and the other components are optional. That is, for example, the following are all correct URLs:

http://dict.bing.com.cn/#%E5%B0%8F%E6%95%B0%E7%82%B9
http://www.mariowiki.com/Mushroom
https://mail.google.com/mail/?shva=1#inbox
http://en.wikipedia.org/wiki/Bowser_(character)
ftp://fs.fudan.edu.cn/
telnet://bbs.fudan.edu.cn/
http://mail.bashu.cn:8080/BsOnline/

Your task is to find the domain for all given URLs.

Input

There are multiple test cases in this problem. The first line of input contains a single integer denoting the number of test cases. For each test case, there is only one line containing a valid URL.

Output

For each test case, you should output the domain of the given URL.

Sample Input

3
http://dict.bing.com.cn/#%E5%B0%8F%E6%95%B0%E7%82%B9
http://www.mariowiki.com/Mushroom
https://mail.google.com/mail/?shva=1#inbox

Sample Output

Case #1: dict.bing.com.cn
Case #2: www.mariowiki.com
Case #3: mail.google.com
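Since the scheme and domain are guaranteed to be present, the domain can be cut out with plain string scanning: take everything after "://" and stop at the first delimiter that can follow a domain. A sketch:

```python
def domain_of(url):
    # the domain is everything between "://" and the first "/", ":", "?" or "#"
    rest = url.split("://", 1)[1]
    for i, ch in enumerate(rest):
        if ch in "/:?#":
            return rest[:i]
    return rest

for case, url in enumerate([
        "http://dict.bing.com.cn/#%E5%B0%8F%E6%95%B0%E7%82%B9",
        "http://www.mariowiki.com/Mushroom",
        "https://mail.google.com/mail/?shva=1#inbox"], 1):
    print("Case #%d: %s" % (case, domain_of(url)))
```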
Why does my heatmap only show half of some cells? seaborn matplotlib
Why are the top and bottom rows of my heatmap only half-drawn?

```python
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np

a = np.random.uniform(0, 1, size=(10, 10))
sns.heatmap(a, cmap='Reds')
plt.show()
```

![the cells in the top and bottom rows are cut in half](https://img-ask.csdn.net/upload/201912/12/1576141826_261656.png)
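This is a known regression in matplotlib 3.1.1 that clips the first and last rows of heatmap-style plots; upgrading to matplotlib 3.1.2 or later (or staying on 3.1.0) fixes it. As a workaround, the y-limits can also be reset by hand. A sketch:

```python
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np

a = np.random.uniform(0, 1, size=(10, 10))
ax = sns.heatmap(a, cmap='Reds')
# matplotlib 3.1.1 rounds the axis limits so half of the first/last row is cut off;
# restore the limits to the full number of rows as a workaround
ax.set_ylim(a.shape[0], 0)
plt.show()
```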
Why do testing right after training in TensorFlow and testing after reloading the saved model give different results, one good and one worse?
After training the model in TensorFlow, testing in the same session gives good results, but reloading the saved model and testing with it gives much worse ones, and I don't understand why. Note: the test data is the same in both cases. These are the model results: training set: loss 0.384, acc 0.931; validation set: loss 0.212, acc 0.968; test set in the same session right after training: acc 0.96; test after loading the saved model: acc 0.29.

```python
def create_model(hps):
    global_step = tf.Variable(tf.zeros([], tf.float64), name = 'global_step', trainable = False)
    scale = 1.0 / math.sqrt(hps.num_embedding_size + hps.num_lstm_nodes[-1]) / 3.0
    print(type(scale))
    gru_init = tf.random_normal_initializer(-scale, scale)
    with tf.variable_scope('Bi_GRU_nn', initializer = gru_init):
        for i in range(hps.num_lstm_layers):
            cell_bw = tf.contrib.rnn.GRUCell(hps.num_lstm_nodes[i], activation = tf.nn.relu, name = 'cell-bw')
            cell_bw = tf.contrib.rnn.DropoutWrapper(cell_bw, output_keep_prob = dropout_keep_prob)
            cell_fw = tf.contrib.rnn.GRUCell(hps.num_lstm_nodes[i], activation = tf.nn.relu, name = 'cell-fw')
            cell_fw = tf.contrib.rnn.DropoutWrapper(cell_fw, output_keep_prob = dropout_keep_prob)
            rnn_outputs, _ = tf.nn.bidirectional_dynamic_rnn(cell_bw, cell_fw, inputs, dtype=tf.float32)
            embeddedWords = tf.concat(rnn_outputs, 2)
        finalOutput = embeddedWords[:, -1, :]
        outputSize = hps.num_lstm_nodes[-1] * 2  # the bidirectional LSTM output concatenates fw and bw, hence * 2
        last = tf.reshape(finalOutput, [-1, outputSize])  # reshape to the input dimension of the fully connected layer
        last = tf.layers.batch_normalization(last, training = is_training)
    fc_init = tf.uniform_unit_scaling_initializer(factor = 1.0)
    with tf.variable_scope('fc', initializer = fc_init):
        fc1 = tf.layers.dense(last, hps.num_fc_nodes, name = 'fc1')
        fc1_batch_normalization = tf.layers.batch_normalization(fc1, training = is_training)
        fc_activation = tf.nn.relu(fc1_batch_normalization)
        logits = tf.layers.dense(fc_activation, hps.num_classes, name = 'fc2')
    with tf.name_scope('metrics'):
        softmax_loss = tf.nn.sparse_softmax_cross_entropy_with_logits(logits = logits, labels = tf.argmax(outputs, 1))
        loss = tf.reduce_mean(softmax_loss)
        # [0, 1, 5, 4, 2] -> argmax: 2, because the largest value is at index 2
        y_pred = tf.argmax(tf.nn.softmax(logits), 1, output_type = tf.int64, name = 'y_pred')
        # compute the accuracy, i.e. count how many predictions are correct
        correct_pred = tf.equal(tf.argmax(outputs, 1), y_pred)
        # tf.cast converts the data to tf.float32
        accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
    with tf.name_scope('train_op'):
        tvar = tf.trainable_variables()
        for var in tvar:
            print('variable name: %s' % (var.name))
        grads, _ = tf.clip_by_global_norm(tf.gradients(loss, tvar), hps.clip_lstm_grads)
        optimizer = tf.train.AdamOptimizer(hps.learning_rate)
        train_op = optimizer.apply_gradients(zip(grads, tvar), global_step)
    # return((inputs, outputs, is_training), (loss, accuracy, y_pred), (train_op, global_step))
    return((inputs, outputs), (loss, accuracy, y_pred), (train_op, global_step))

placeholders, metrics, others = create_model(hps)
content, labels = placeholders
loss, accuracy, y_pred = metrics
train_op, global_step = others

def val_steps(sess, x_batch, y_batch, writer = None):
    loss_val, accuracy_val = sess.run([loss, accuracy], feed_dict = {inputs: x_batch, outputs: y_batch, is_training: hps.val_is_training, dropout_keep_prob: 1.0})
    return loss_val, accuracy_val

loss_summary = tf.summary.scalar('loss', loss)
accuracy_summary = tf.summary.scalar('accuracy', accuracy)
# gather all the summary variables together
merged_summary = tf.summary.merge_all()
# summary used for the test runs
merged_summary_test = tf.summary.merge([loss_summary, accuracy_summary])

LOG_DIR = '.'
run_label = 'run_Bi-GRU_Dropout_tensorboard'
run_dir = os.path.join(LOG_DIR, run_label)
if not os.path.exists(run_dir):
    os.makedirs(run_dir)
train_log_dir = os.path.join(run_dir, timestamp, 'train')
test_los_dir = os.path.join(run_dir, timestamp, 'test')
if not os.path.exists(train_log_dir):
    os.makedirs(train_log_dir)
if not os.path.join(test_los_dir):
    os.makedirs(test_los_dir)
# the saver handle can snapshot the training state into a directory
saver = tf.train.Saver(tf.global_variables(), max_to_keep = 5)

# training code
init_op = tf.global_variables_initializer()
train_keep_prob_value = 0.2
test_keep_prob_value = 1.0
# computing summaries at every step would be slow, so store one every 100 steps
output_summary_every_steps = 100
num_train_steps = 1000
# how often to save the model
output_model_every_steps = 500
# test-set evaluation
test_model_all_steps = 4000
i = 0

session_conf = tf.ConfigProto(
    gpu_options = tf.GPUOptions(allow_growth=True),
    allow_soft_placement = True,
    log_device_placement = False)

with tf.Session(config = session_conf) as sess:
    sess.run(init_op)
    # write loss and accuracy during training; pass sess.graph if the graph should show up in tensorboard
    train_writer = tf.summary.FileWriter(train_log_dir, sess.graph)
    # likewise save the test results to tensorboard, without the graph
    test_writer = tf.summary.FileWriter(test_los_dir)
    batches = batch_iter(list(zip(x_train, y_train)), hps.batch_size, hps.num_epochs)
    for batch in batches:
        train_x, train_y = zip(*batch)
        eval_ops = [loss, accuracy, train_op, global_step]
        should_out_summary = ((i + 1) % output_summary_every_steps == 0)
        if should_out_summary:
            eval_ops.append(merged_summary)
        # feed the three placeholders in
        # run the graph for loss, accuracy, train_op, global_step
        eval_ops.append(merged_summary)
        outputs_train = sess.run(eval_ops, feed_dict={
            inputs: train_x,
            outputs: train_y,
            dropout_keep_prob: train_keep_prob_value,
            is_training: hps.train_is_training
        })
        loss_train, accuracy_train = outputs_train[0:2]
        if should_out_summary:
            # since the summary is computed every 100 steps, should_out_summary is True here;
            # the summary op was appended at the end of eval_ops, so its result is the last element
            train_summary_str = outputs_train[-1]
            # write the result into the training tensorboard folder; steps start at 0, so add 1
            train_writer.add_summary(train_summary_str, i + 1)
            test_summary_str = sess.run([merged_summary_test], feed_dict = {
                inputs: x_dev,
                outputs: y_dev,
                dropout_keep_prob: 1.0,
                is_training: hps.val_is_training
            })[0]
            test_writer.add_summary(test_summary_str, i + 1)
        current_step = tf.train.global_step(sess, global_step)
        if (i + 1) % 100 == 0:
            print("Step: %5d, loss: %3.3f, accuracy: %3.3f" % (i + 1, loss_train, accuracy_train))
        # validate every 500 batches
        if (i + 1) % 500 == 0:
            loss_eval, accuracy_eval = val_steps(sess, x_dev, y_dev)
            print("Step: %5d, val_loss: %3.3f, val_accuracy: %3.3f" % (i + 1, loss_eval, accuracy_eval))
        if (i + 1) % output_model_every_steps == 0:
            path = saver.save(sess, os.path.join(out_dir, 'ckp-%05d' % (i + 1)))
            print("Saved model checkpoint to {}\n".format(path))
            print('model saved to ckp-%05d' % (i + 1))
        if (i + 1) % test_model_all_steps == 0:
            # test_loss, test_acc, all_predictions = sess.run([loss, accuracy, y_pred], feed_dict = {inputs: x_test, outputs: y_test, dropout_keep_prob: 1.0})
            test_loss, test_acc, all_predictions = sess.run([loss, accuracy, y_pred], feed_dict = {inputs: x_test, outputs: y_test, is_training: hps.val_is_training, dropout_keep_prob: 1.0})
            print("test_loss: %3.3f, test_acc: %3.3d" % (test_loss, test_acc))
            batches = batch_iter(list(x_test), 128, 1, shuffle=False)
            # Collect the predictions here
            all_predictions = []
            for x_test_batch in batches:
                batch_predictions = sess.run(y_pred, {inputs: x_test_batch, is_training: hps.val_is_training, dropout_keep_prob: 1.0})
                all_predictions = np.concatenate([all_predictions, batch_predictions])
            correct_predictions = float(sum(all_predictions == y.flatten()))
            print("Total number of test examples: {}".format(len(y_test)))
            print("Accuracy: {:g}".format(correct_predictions / float(len(y_test))))
            test_y = y_test.argmax(axis = 1)
            # build the confusion matrix
            conf_mat = confusion_matrix(test_y, all_predictions)
            fig, ax = plt.subplots(figsize = (4, 2))
            sns.heatmap(conf_mat, annot=True, fmt = 'd', xticklabels = cat_id_df.category_id.values, yticklabels = cat_id_df.category_id.values)
            font_set = FontProperties(fname = r"/usr/share/fonts/truetype/wqy/wqy-microhei.ttc", size=15)
            plt.ylabel(u'actual result', fontsize = 18, fontproperties = font_set)
            plt.xlabel(u'predicted result', fontsize = 18, fontproperties = font_set)
            plt.savefig('./test.png')
            print('accuracy %s' % accuracy_score(all_predictions, test_y))
            print(classification_report(test_y, all_predictions, target_names = cat_id_df['category_name'].values))
            print(classification_report(test_y, all_predictions))
        i += 1
```

That is the model code above; could anyone take a look and tell me why this happens?
Python with scapy: the packet-sending tool raises NameError: name 'udp' is not defined; please help
```python
#!env python
# -*- coding: <encoding name> -*-
import sys
from scapy.all import *
import os, random, datetime, time, math
from random import randrange
from random import uniform
from functools import reduce

def create_data(line):
    current_time = [
        datetime.datetime.now().strftime('%b %d %H:%M:%S'),
        datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S'),
        # datetime.datetime.now().strftime('%b %d %H:%M:%S %Y'),
        str(math.trunc(datetime.datetime.now().timestamp()))
    ]
    regular = [
        "(\w{3}\s+\d{1,2}\s+\d{1,2}:\d{1,2}:\d{1,2})",
        "(\d{4}-\d{1,2}-\d{1,2}\s+\d{1,2}:\d{1,2}:\d{1,2})",
        "(\w{3}\s+\d{1,2}\s+\d{1,2}:\d{1,2}:\d{1,2}\s+\d{1,4})",
        "(\d{10})"
    ]
    def current_time(line):
        for i in regular:
            for j in current_time(line):
                if len(re.findall(i, j)) != 0:
                    # if the regex matches the corresponding time format, substitute it into the string
                    line = re.sub(i, j, line)
        return line

dict_ip_logfile = {
    "192.168.58.84": r"C:\Users\CS\Desktop\HPHIDS.txt"
}

while True:
    for ip, logfile in dict_ip_logfile.items():
        with open(logfile, "r", encoding='unicode_escape') as log:
            for line in log:
                print(ip)
                print(logfile)
                print(create_data(line))
                # IP = (porto='udp','192.168.57.45')
                # UDP = (dst='192.168.57.14',8089)
                # scapy.all,send(IP,udp)
                scapy.all.send(IP(proto="udp",src=ip,dst="192.168.57.45")/udp(dst='192.168.57.14.',dport=8082)/line,inter=5,loop=1,count=2)
            log.close()
```

![error screenshot](https://img-ask.csdn.net/upload/201911/21/1574303068_105938.png)
matplotlib: add_subplot raises an error
The code below is an example from the astronomy package astroML. It raises an error at runtime; debugging shows the error happens when fig = plt.figure(figsize=(5, 1.66)) and ax = fig.add_subplot(131) are run together, but I don't know why. Even stranger, code that used to run fine now raises the same error when run again after this code has failed. The code:

```python
"""
EM example: Gaussian Mixture Models
-----------------------------------
Figure 6.6 A two-dimensional mixture of Gaussians for the stellar metallicity
data. The left panel shows the number density of stars as a function of two
measures of their chemical composition: metallicity ([Fe/H]) and alpha-element
abundance ([alpha/Fe]). The right panel shows the density estimated using
mixtures of Gaussians together with the positions and covariances (2-sigma
levels) of those Gaussians. The center panel compares the information criteria
AIC and BIC (see Sections 4.3.2 and 5.4.3).
"""
# Author: Jake VanderPlas
# License: BSD
# The figure produced by this code is published in the textbook
# "Statistics, Data Mining, and Machine Learning in Astronomy" (2013)
# For more information, see http://astroML.github.com
# To report a bug or issue, use the following forum:
# https://groups.google.com/forum/#!forum/astroml-general
from __future__ import print_function

import numpy as np
from matplotlib import pyplot as plt
from sklearn.mixture import GaussianMixture

from astroML.datasets import fetch_sdss_sspp
from astroML.utils.decorators import pickle_results
from astroML.plotting.tools import draw_ellipse

#----------------------------------------------------------------------
# This function adjusts matplotlib settings for a uniform feel in the textbook.
# Note that with usetex=True, fonts are rendered with LaTeX. This may
# result in an error if LaTeX is not installed on your system. In that case,
# you can set usetex to False.
if "setup_text_plots" not in globals():
    from astroML.plotting import setup_text_plots
setup_text_plots(fontsize=8, usetex=True)

#------------------------------------------------------------
# Get the Segue Stellar Parameters Pipeline data
data = fetch_sdss_sspp(cleaned=True)
X = np.vstack([data['FeH'], data['alphFe']]).T

# truncate dataset for speed
X = X[::5]

#------------------------------------------------------------
# Compute GaussianMixture models & AIC/BIC
N = np.arange(1, 14)

@pickle_results("GMM_metallicity.pkl")
def compute_GaussianMixture(N, covariance_type='full', max_iter=1000):
    models = [None for n in N]
    for i in range(len(N)):
        print(N[i])
        models[i] = GaussianMixture(n_components=N[i], max_iter=max_iter,
                                    covariance_type=covariance_type)
        models[i].fit(X)
    return models

models = compute_GaussianMixture(N)

AIC = [m.aic(X) for m in models]
BIC = [m.bic(X) for m in models]

i_best = np.argmin(BIC)
gmm_best = models[i_best]
print("best fit converged:", gmm_best.converged_)
print("BIC: n_components = %i" % N[i_best])

#------------------------------------------------------------
# compute 2D density
FeH_bins = 51
alphFe_bins = 51
H, FeH_bins, alphFe_bins = np.histogram2d(data['FeH'], data['alphFe'],
                                          (FeH_bins, alphFe_bins))

Xgrid = np.array(list(map(np.ravel,
                          np.meshgrid(0.5 * (FeH_bins[:-1] + FeH_bins[1:]),
                                      0.5 * (alphFe_bins[:-1] + alphFe_bins[1:]))))).T
log_dens = gmm_best.score_samples(Xgrid).reshape((51, 51))

#------------------------------------------------------------
# Plot the results
fig = plt.figure(figsize=(5, 1.66))
fig.subplots_adjust(wspace=0.45, bottom=0.25, top=0.9, left=0.1, right=0.97)

# plot density
ax = fig.add_subplot(131)
ax.imshow(H.T, origin='lower', interpolation='nearest', aspect='auto',
          extent=[FeH_bins[0], FeH_bins[-1],
                  alphFe_bins[0], alphFe_bins[-1]],
          cmap=plt.cm.binary)
ax.set_xlabel(r'$\rm [Fe/H]$')
ax.set_ylabel(r'$\rm [\alpha/Fe]$')
ax.xaxis.set_major_locator(plt.MultipleLocator(0.3))
ax.set_xlim(-1.101, 0.101)
ax.text(0.93, 0.93, "Input",
        va='top', ha='right', transform=ax.transAxes)

# plot AIC/BIC
ax = fig.add_subplot(132)
ax.plot(N, AIC, '-k', label='AIC')
ax.plot(N, BIC, ':k', label='BIC')
ax.legend(loc=1)
ax.set_xlabel('N components')
plt.setp(ax.get_yticklabels(), fontsize=7)

# plot best configurations for AIC and BIC
ax = fig.add_subplot(133)
ax.imshow(np.exp(log_dens), origin='lower', interpolation='nearest',
          aspect='auto',
          extent=[FeH_bins[0], FeH_bins[-1],
                  alphFe_bins[0], alphFe_bins[-1]],
          cmap=plt.cm.binary)
ax.scatter(gmm_best.means_[:, 0], gmm_best.means_[:, 1], c='w')
for mu, C, w in zip(gmm_best.means_, gmm_best.covariances_, gmm_best.weights_):
    draw_ellipse(mu, C, scales=[1.5], ax=ax, fc='none', ec='k')
ax.text(0.93, 0.93, "Converged",
        va='top', ha='right', transform=ax.transAxes)

ax.set_xlim(-1.101, 0.101)
ax.set_ylim(alphFe_bins[0], alphFe_bins[-1])
ax.xaxis.set_major_locator(plt.MultipleLocator(0.3))
ax.set_xlabel(r'$\rm [Fe/H]$')
ax.set_ylabel(r'$\rm [\alpha/Fe]$')

plt.show()
```
yolo3: problem with darknet.py
I built the GPU version of darknetAB (https://github.com/AlexeyAB/darknet), which produced darknet.py. I also built yolo_cpp_dll.sln to generate the DLL. But when I run darknet.py, no image is displayed and it exits abnormally. ![error screenshot](https://img-ask.csdn.net/upload/201911/02/1572688446_628910.png) I searched the problem and some people suggested switching to Python 3.5; I tried that, but it still doesn't display an image. How can this be solved? It's urgent, thank you!

```python
#!python3
"""
Python 3 wrapper for identifying objects in images

Requires DLL compilation

Both the GPU and no-GPU version should be compiled; the no-GPU version should
be renamed "yolo_cpp_dll_nogpu.dll".

On a GPU system, you can force CPU evaluation by any of:

- Set global variable DARKNET_FORCE_CPU to True
- Set environment variable CUDA_VISIBLE_DEVICES to -1
- Set environment variable "FORCE_CPU" to "true"

To use, either run performDetect() after import, or modify the end of this file.

See the docstring of performDetect() for parameters.

Directly viewing or returning bounding-boxed images requires scikit-image to
be installed (`pip install scikit-image`)

Original *nix 2.7: https://github.com/pjreddie/darknet/blob/0f110834f4e18b30d5f101bf8f1724c34b7b83db/python/darknet.py
Windows Python 2.7 version: https://github.com/AlexeyAB/darknet/blob/fc496d52bf22a0bb257300d3c79be9cd80e722cb/build/darknet/x64/darknet.py

@author: Philip Kahn
@date: 20180503
"""
#pylint: disable=R, W0401, W0614, W0703
from ctypes import *
import math
import random
import os

def sample(probs):
    s = sum(probs)
    probs = [a/s for a in probs]
    r = random.uniform(0, 1)
    for i in range(len(probs)):
        r = r - probs[i]
        if r <= 0:
            return i
    return len(probs)-1

def c_array(ctype, values):
    arr = (ctype*len(values))()
    arr[:] = values
    return arr

class BOX(Structure):
    _fields_ = [("x", c_float),
                ("y", c_float),
                ("w", c_float),
                ("h", c_float)]

class DETECTION(Structure):
    _fields_ = [("bbox", BOX),
                ("classes", c_int),
                ("prob", POINTER(c_float)),
                ("mask", POINTER(c_float)),
                ("objectness", c_float),
                ("sort_class", c_int)]

class IMAGE(Structure):
    _fields_ = [("w", c_int),
                ("h", c_int),
                ("c", c_int),
                ("data", POINTER(c_float))]

class METADATA(Structure):
    _fields_ = [("classes", c_int),
                ("names", POINTER(c_char_p))]

#lib = CDLL("/home/pjreddie/documents/darknet/libdarknet.so", RTLD_GLOBAL)
#lib = CDLL("libdarknet.so", RTLD_GLOBAL)
hasGPU = True
if os.name == "nt":
    cwd = os.path.dirname(__file__)
    os.environ['PATH'] = cwd + ';' + os.environ['PATH']
    winGPUdll = os.path.join(cwd, "yolo_cpp_dll.dll")
    winNoGPUdll = os.path.join(cwd, "yolo_cpp_dll_nogpu.dll")
    envKeys = list()
    for k, v in os.environ.items():
        envKeys.append(k)
    try:
        try:
            tmp = os.environ["FORCE_CPU"].lower()
            if tmp in ["1", "true", "yes", "on"]:
                raise ValueError("ForceCPU")
            else:
                print("Flag value '"+tmp+"' not forcing CPU mode")
        except KeyError:
            # We never set the flag
            if 'CUDA_VISIBLE_DEVICES' in envKeys:
                if int(os.environ['CUDA_VISIBLE_DEVICES']) < 0:
                    raise ValueError("ForceCPU")
            try:
                global DARKNET_FORCE_CPU
                if DARKNET_FORCE_CPU:
                    raise ValueError("ForceCPU")
            except NameError:
                pass
            # print(os.environ.keys())
            # print("FORCE_CPU flag undefined, proceeding with GPU")
        if not os.path.exists(winGPUdll):
            raise ValueError("NoDLL")
        lib = CDLL(winGPUdll, RTLD_GLOBAL)
    except (KeyError, ValueError):
        hasGPU = False
        if os.path.exists(winNoGPUdll):
            lib = CDLL(winNoGPUdll, RTLD_GLOBAL)
            print("Notice: CPU-only mode")
        else:
            # Try the other way, in case no_gpu was
            # compile but not renamed
            lib = CDLL(winGPUdll, RTLD_GLOBAL)
            print("Environment variables indicated a CPU run, but we didn't find `"+winNoGPUdll+"`. Trying a GPU run anyway.")
else:
    lib = CDLL("./libdarknet.so", RTLD_GLOBAL)

lib.network_width.argtypes = [c_void_p]
lib.network_width.restype = c_int
lib.network_height.argtypes = [c_void_p]
lib.network_height.restype = c_int

copy_image_from_bytes = lib.copy_image_from_bytes
copy_image_from_bytes.argtypes = [IMAGE, c_char_p]

def network_width(net):
    return lib.network_width(net)

def network_height(net):
    return lib.network_height(net)

predict = lib.network_predict_ptr
predict.argtypes = [c_void_p, POINTER(c_float)]
predict.restype = POINTER(c_float)

if hasGPU:
    set_gpu = lib.cuda_set_device
    set_gpu.argtypes = [c_int]

make_image = lib.make_image
make_image.argtypes = [c_int, c_int, c_int]
make_image.restype = IMAGE

get_network_boxes = lib.get_network_boxes
get_network_boxes.argtypes = [c_void_p, c_int, c_int, c_float, c_float, POINTER(c_int), c_int, POINTER(c_int), c_int]
get_network_boxes.restype = POINTER(DETECTION)

make_network_boxes = lib.make_network_boxes
make_network_boxes.argtypes = [c_void_p]
make_network_boxes.restype = POINTER(DETECTION)

free_detections = lib.free_detections
free_detections.argtypes = [POINTER(DETECTION), c_int]

free_ptrs = lib.free_ptrs
free_ptrs.argtypes = [POINTER(c_void_p), c_int]

network_predict = lib.network_predict_ptr
network_predict.argtypes = [c_void_p, POINTER(c_float)]

reset_rnn = lib.reset_rnn
reset_rnn.argtypes = [c_void_p]

load_net = lib.load_network
load_net.argtypes = [c_char_p, c_char_p, c_int]
load_net.restype = c_void_p

load_net_custom = lib.load_network_custom
load_net_custom.argtypes = [c_char_p, c_char_p, c_int, c_int]
load_net_custom.restype = c_void_p

do_nms_obj = lib.do_nms_obj
do_nms_obj.argtypes = [POINTER(DETECTION), c_int, c_int, c_float]

do_nms_sort = lib.do_nms_sort
do_nms_sort.argtypes = [POINTER(DETECTION), c_int, c_int, c_float]

free_image = lib.free_image
free_image.argtypes = [IMAGE]

letterbox_image = lib.letterbox_image
letterbox_image.argtypes = [IMAGE, c_int, c_int]
letterbox_image.restype = IMAGE
```
load_meta = lib.get_metadata lib.get_metadata.argtypes = [c_char_p] lib.get_metadata.restype = METADATA load_image = lib.load_image_color load_image.argtypes = [c_char_p, c_int, c_int] load_image.restype = IMAGE rgbgr_image = lib.rgbgr_image rgbgr_image.argtypes = [IMAGE] predict_image = lib.network_predict_image predict_image.argtypes = [c_void_p, IMAGE] predict_image.restype = POINTER(c_float) predict_image_letterbox = lib.network_predict_image_letterbox predict_image_letterbox.argtypes = [c_void_p, IMAGE] predict_image_letterbox.restype = POINTER(c_float) def array_to_image(arr): import numpy as np # need to return old values to avoid python freeing memory arr = arr.transpose(2,0,1) c = arr.shape[0] h = arr.shape[1] w = arr.shape[2] arr = np.ascontiguousarray(arr.flat, dtype=np.float32) / 255.0 data = arr.ctypes.data_as(POINTER(c_float)) im = IMAGE(w,h,c,data) return im, arr def classify(net, meta, im): out = predict_image(net, im) res = [] for i in range(meta.classes): if altNames is None: nameTag = meta.names[i] else: nameTag = altNames[i] res.append((nameTag, out[i])) res = sorted(res, key=lambda x: -x[1]) return res def detect(net, meta, image, thresh=.5, hier_thresh=.5, nms=.45, debug= False): """ Performs the meat of the detection """ #pylint: disable= C0321 im = load_image(image, 0, 0) if debug: print("Loaded image") ret = detect_image(net, meta, im, thresh, hier_thresh, nms, debug) free_image(im) if debug: print("freed image") return ret def detect_image(net, meta, im, thresh=.5, hier_thresh=.5, nms=.45, debug= False): #import cv2 #custom_image_bgr = cv2.imread(image) # use: detect(,,imagePath,) #custom_image = cv2.cvtColor(custom_image_bgr, cv2.COLOR_BGR2RGB) #custom_image = cv2.resize(custom_image,(lib.network_width(net), lib.network_height(net)), interpolation = cv2.INTER_LINEAR) #import scipy.misc #custom_image = scipy.misc.imread(image) #im, arr = array_to_image(custom_image) # you should comment line below: free_image(im) num = c_int(0) if debug: 
print("Assigned num") pnum = pointer(num) if debug: print("Assigned pnum") predict_image(net, im) letter_box = 0 #predict_image_letterbox(net, im) #letter_box = 1 if debug: print("did prediction") # dets = get_network_boxes(net, custom_image_bgr.shape[1], custom_image_bgr.shape[0], thresh, hier_thresh, None, 0, pnum, letter_box) # OpenCV dets = get_network_boxes(net, im.w, im.h, thresh, hier_thresh, None, 0, pnum, letter_box) if debug: print("Got dets") num = pnum[0] if debug: print("got zeroth index of pnum") if nms: do_nms_sort(dets, num, meta.classes, nms) if debug: print("did sort") res = [] if debug: print("about to range") for j in range(num): if debug: print("Ranging on "+str(j)+" of "+str(num)) if debug: print("Classes: "+str(meta), meta.classes, meta.names) for i in range(meta.classes): if debug: print("Class-ranging on "+str(i)+" of "+str(meta.classes)+"= "+str(dets[j].prob[i])) if dets[j].prob[i] > 0: b = dets[j].bbox if altNames is None: nameTag = meta.names[i] else: nameTag = altNames[i] if debug: print("Got bbox", b) print(nameTag) print(dets[j].prob[i]) print((b.x, b.y, b.w, b.h)) res.append((nameTag, dets[j].prob[i], (b.x, b.y, b.w, b.h))) if debug: print("did range") res = sorted(res, key=lambda x: -x[1]) if debug: print("did sort") free_detections(dets, num) if debug: print("freed detections") return res netMain = None metaMain = None altNames = None def performDetect(imagePath="data/dog.jpg", thresh= 0.25, configPath = "./cfg/yolov3.cfg", weightPath = "yolov3.weights", metaPath= "./cfg/coco.data", showImage= True, makeImageOnly = False, initOnly= False): """ Convenience function to handle the detection and returns of objects. Displaying bounding boxes requires libraries scikit-image and numpy Parameters ---------------- imagePath: str Path to the image to evaluate. Raises ValueError if not found thresh: float (default= 0.25) The detection threshold configPath: str Path to the configuration file. 
Raises ValueError if not found weightPath: str Path to the weights file. Raises ValueError if not found metaPath: str Path to the data file. Raises ValueError if not found showImage: bool (default= True) Compute (and show) bounding boxes. Changes return. makeImageOnly: bool (default= False) If showImage is True, this won't actually *show* the image, but will create the array and return it. initOnly: bool (default= False) Only initialize globals. Don't actually run a prediction. Returns ---------------------- When showImage is False, list of tuples like ('obj_label', confidence, (bounding_box_x_px, bounding_box_y_px, bounding_box_width_px, bounding_box_height_px)) The X and Y coordinates are from the center of the bounding box. Subtract half the width or height to get the lower corner. Otherwise, a dict with { "detections": as above "image": a numpy array representing an image, compatible with scikit-image "caption": an image caption } """ # Import the global variables. This lets us instance Darknet once, then just call performDetect() again without instancing again global metaMain, netMain, altNames #pylint: disable=W0603 assert 0 < thresh < 1, "Threshold should be a float between zero and one (non-inclusive)" if not os.path.exists(configPath): raise ValueError("Invalid config path `"+os.path.abspath(configPath)+"`") if not os.path.exists(weightPath): raise ValueError("Invalid weight path `"+os.path.abspath(weightPath)+"`") if not os.path.exists(metaPath): raise ValueError("Invalid data file path `"+os.path.abspath(metaPath)+"`") if netMain is None: netMain = load_net_custom(configPath.encode("ascii"), weightPath.encode("ascii"), 0, 1) # batch size = 1 if metaMain is None: metaMain = load_meta(metaPath.encode("ascii")) if altNames is None: # In Python 3, the metafile default access craps out on Windows (but not Linux) # Read the names file and create a list to feed to detect try: with open(metaPath) as metaFH: metaContents = metaFH.read() import re match = 
re.search("names *= *(.*)$", metaContents, re.IGNORECASE | re.MULTILINE) if match: result = match.group(1) else: result = None try: if os.path.exists(result): with open(result) as namesFH: namesList = namesFH.read().strip().split("\n") altNames = [x.strip() for x in namesList] except TypeError: pass except Exception: pass if initOnly: print("Initialized detector") return None if not os.path.exists(imagePath): raise ValueError("Invalid image path `"+os.path.abspath(imagePath)+"`") # Do the detection #detections = detect(netMain, metaMain, imagePath, thresh) # if is used cv2.imread(image) detections = detect(netMain, metaMain, imagePath.encode("ascii"), thresh) if showImage: try: from skimage import io, draw import numpy as np image = io.imread(imagePath) print("*** "+str(len(detections))+" Results, color coded by confidence ***") imcaption = [] for detection in detections: label = detection[0] confidence = detection[1] pstring = label+": "+str(np.rint(100 * confidence))+"%" imcaption.append(pstring) print(pstring) bounds = detection[2] shape = image.shape # x = shape[1] # xExtent = int(x * bounds[2] / 100) # y = shape[0] # yExtent = int(y * bounds[3] / 100) yExtent = int(bounds[3]) xEntent = int(bounds[2]) # Coordinates are around the center xCoord = int(bounds[0] - bounds[2]/2) yCoord = int(bounds[1] - bounds[3]/2) boundingBox = [ [xCoord, yCoord], [xCoord, yCoord + yExtent], [xCoord + xEntent, yCoord + yExtent], [xCoord + xEntent, yCoord] ] # Wiggle it around to make a 3px border rr, cc = draw.polygon_perimeter([x[1] for x in boundingBox], [x[0] for x in boundingBox], shape= shape) rr2, cc2 = draw.polygon_perimeter([x[1] + 1 for x in boundingBox], [x[0] for x in boundingBox], shape= shape) rr3, cc3 = draw.polygon_perimeter([x[1] - 1 for x in boundingBox], [x[0] for x in boundingBox], shape= shape) rr4, cc4 = draw.polygon_perimeter([x[1] for x in boundingBox], [x[0] + 1 for x in boundingBox], shape= shape) rr5, cc5 = draw.polygon_perimeter([x[1] for x in 
boundingBox], [x[0] - 1 for x in boundingBox], shape= shape) boxColor = (int(255 * (1 - (confidence ** 2))), int(255 * (confidence ** 2)), 0) draw.set_color(image, (rr, cc), boxColor, alpha= 0.8) draw.set_color(image, (rr2, cc2), boxColor, alpha= 0.8) draw.set_color(image, (rr3, cc3), boxColor, alpha= 0.8) draw.set_color(image, (rr4, cc4), boxColor, alpha= 0.8) draw.set_color(image, (rr5, cc5), boxColor, alpha= 0.8) if not makeImageOnly: io.imshow(image) io.show() detections = { "detections": detections, "image": image, "caption": "\n<br/>".join(imcaption) } except Exception as e: print("Unable to show image: "+str(e)) return detections if __name__ == "__main__": print(performDetect()) ```
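One thing worth ruling out before blaming the Python version: on Windows, an immediate abnormal exit when `CDLL(winGPUdll, RTLD_GLOBAL)` runs is very often caused by a missing *dependent* library (CUDA runtime, cuDNN, or the OpenCV DLLs the yolo_cpp_dll was linked against) rather than by darknet.py itself. Below is a minimal, hypothetical diagnostic helper — `check_darknet_dll` is my own sketch, not part of darknet.py — that reports a readable reason instead of crashing:

```python
import os
from ctypes import CDLL


def check_darknet_dll(dll_path):
    """Return a short diagnostic string for a darknet DLL instead of crashing."""
    if not os.path.exists(dll_path):
        return "missing: " + dll_path
    try:
        CDLL(dll_path)  # raises OSError if the file or a dependency can't load
    except OSError as e:
        # On Windows this typically means an absent dependent DLL
        # (e.g. CUDA runtime, cuDNN, opencv_world*.dll) or a bitness mismatch.
        return "load failed: " + str(e)
    return "ok"
```

Running `print(check_darknet_dll("yolo_cpp_dll.dll"))` before importing darknet.py tells you whether the DLL itself is the problem; a dependency-walker tool can then pinpoint which DLL is missing.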
Uniform Generator — how to solve it in C
Problem Description Computer simulations often require random numbers. One way to generate pseudo-random numbers is via a function of the form seed(x+1) = [seed(x) + STEP] % MOD where '%' is the modulus operator. Such a function will generate pseudo-random numbers (seed) between 0 and MOD-1. One problem with functions of this form is that they will always generate the same pattern over and over. In order to minimize this effect, selecting the STEP and MOD values carefully can result in a uniform distribution of all values between (and including) 0 and MOD-1. For example, if STEP = 3 and MOD = 5, the function will generate the series of pseudo-random numbers 0, 3, 1, 4, 2 in a repeating cycle. In this example, all of the numbers between and including 0 and MOD-1 will be generated every MOD iterations of the function. Note that by the nature of the function to generate the same seed(x+1) every time seed(x) occurs means that if a function will generate all the numbers between 0 and MOD-1, it will generate pseudo-random numbers uniformly with every MOD iterations. If STEP = 15 and MOD = 20, the function generates the series 0, 15, 10, 5 (or any other repeating series if the initial seed is other than 0). This is a poor selection of STEP and MOD because no initial seed will generate all of the numbers from 0 and MOD-1. Your program will determine if choices of STEP and MOD will generate a uniform distribution of pseudo-random numbers. Input Each line of input will contain a pair of integers for STEP and MOD in that order (1 <= STEP, MOD <= 100000). Output For each line of input, your program should print the STEP value right- justified in columns 1 through 10, the MOD value right-justified in columns 11 through 20 and either "Good Choice" or "Bad Choice" left-justified starting in column 25. The "Good Choice" message should be printed when the selection of STEP and MOD will generate all the numbers between and including 0 and MOD-1 when MOD numbers are generated. 
Otherwise, your program should print the message "Bad Choice". After each output test set, your program should print exactly one blank line. Sample Input 3 5 15 20 63923 99999 Sample Output 3 5 Good Choice 15 20 Bad Choice 63923 99999 Good Choice
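The whole problem reduces to one observation: the sequence `seed(x+1) = (seed(x) + STEP) % MOD` visits every value in 0..MOD-1 exactly when STEP and MOD are coprime. The statement asks for C, but the logic is a single gcd test in any language; here is a sketch in Python (the gcd translates directly to C via the Euclidean algorithm), including the column layout from the output spec:

```python
from math import gcd


def choice(step, mod):
    # The cycle visits every value 0..MOD-1 iff gcd(STEP, MOD) == 1.
    verdict = "Good Choice" if gcd(step, mod) == 1 else "Bad Choice"
    # STEP right-justified in columns 1-10, MOD in 11-20, verdict from column 25.
    return "%10d%10d    %s" % (step, mod, verdict)
```

For the sample input, `choice(3, 5)`, `choice(15, 20)`, and `choice(63923, 99999)` reproduce the sample output lines; remember the required blank line after each test set.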
OpenGL: implementing lighting with multiple light sources
I've recently been learning OpenGL from the LearnOpenGL website, and I ran into a problem in the Multiple Lights chapter. What I want is one directional light (no position), four point lights, and one spotlight, all lighting ten boxes. But when I comment out the spotlight contribution in the shader source, the objects render black — in other words, the directional light and the point lights have no effect at all. I've gone over the code repeatedly and can't find the mistake. Here is my object fragment shader:

```GLSL
#version 330 core
// material struct
struct Material {
    sampler2D texture0;
    sampler2D texture1;
    sampler2D texture2;
    float shininess;
};
// spotlight struct
struct FlashLight {
    vec3 ambient;      // factor for the object's ambient color component
    vec3 diffuse;      // light color, usually set to white
    vec3 specular;     // factor for the object's specular color component
    vec3 position;     // light position
    vec3 directional;  // light direction
    float cutoff;      // inner cone cosine
    float outcutoff;   // outer cone cosine
    float constant;    // attenuation equation: constant term
    float linear;      // attenuation equation: linear term
    float quadratic;   // attenuation equation: quadratic term
};
// point light struct
struct PointLight {
    vec3 ambient;      // factor for the object's ambient color component
    vec3 diffuse;      // light color, usually set to white
    vec3 specular;     // factor for the object's specular color component
    vec3 position;     // light position
    float constant;    // attenuation equation: constant term
    float linear;      // attenuation equation: linear term
    float quadratic;   // attenuation equation: quadratic term
};
// directional light struct
struct DirectLight {
    vec3 ambient;      // factor for the object's ambient color component
    vec3 diffuse;      // light color, usually set to white
    vec3 specular;     // factor for the object's specular color component
    vec3 directional;  // light direction
};

in vec3 Normal;
in vec3 fragPos;
in vec2 TexCoord;
out vec4 color;

uniform Material material;
// four point lights
uniform PointLight pointlight[4];
uniform FlashLight flashlight;
uniform DirectLight directlight;
uniform vec3 cameraPos;

// declare the lighting function for each light type
vec3 calcFlashLight(FlashLight light, vec3 fragPos, vec3 normal, vec3 viewDir);
vec3 calcPointLight(PointLight light, vec3 fragPos, vec3 normal, vec3 viewDir);
vec3 calcDirectLight(DirectLight light, vec3 normal, vec3 viewDir);

void main()
{
    vec3 normal = normalize(Normal);
    vec3 viewDir = normalize(cameraPos - fragPos);
    vec3 result = calcDirectLight(directlight, normal, viewDir);
    for (int i = 0; i < 4; i++)
        result += calcPointLight(pointlight[i], fragPos, normal, viewDir);
    result += calcFlashLight(flashlight, fragPos, normal, viewDir);
    color = vec4(result, 1.0f);
}

// spotlight
vec3 calcFlashLight(FlashLight light, vec3 fragPos, vec3 normal, vec3 viewDir)
{
    float distance = length(light.position - fragPos);
    float attenuation = 1.0f / (light.constant + light.linear * distance + light.quadratic * (distance * distance));
    // ambient
    vec3 ambient = light.ambient * vec3(texture(material.texture0, TexCoord));
    // diffuse
    vec3 lightDir = normalize(light.position - fragPos);
    float diff = max(dot(lightDir, normal), 0.0);
    vec3 diffuse = light.diffuse * diff * vec3(texture(material.texture0, TexCoord));
    // specular
    vec3 refletDir = reflect(-lightDir, normal);
    float spec = pow(max(dot(viewDir, refletDir), 0.0), material.shininess);
    vec3 specular = light.specular * spec * vec3(texture(material.texture1, TexCoord));
    // attenuation: applied to ambient too, so it doesn't accumulate across multiple lights
    ambient *= attenuation;
    diffuse *= attenuation;
    specular *= attenuation;
    // cutoff
    float theta = dot(normalize(-light.directional), lightDir);
    // keep the intensity inside [0, 1]
    float intensity = clamp((theta - light.outcutoff) / (light.cutoff - light.outcutoff), 0.0, 1.0);
    diffuse *= intensity;
    specular *= intensity;
    return (ambient + diffuse + specular);
}

// point light
vec3 calcPointLight(PointLight light, vec3 fragPos, vec3 normal, vec3 viewDir)
{
    float distance = length(light.position - fragPos);
    float attenuation = 1.0f / (light.constant + light.linear * distance + light.quadratic * (distance * distance));
    // ambient
    vec3 ambient = light.ambient * vec3(texture(material.texture0, TexCoord));
    // diffuse
    vec3 lightDir = normalize(light.position - fragPos);
    float diff = max(dot(lightDir, normal), 0.0);
    vec3 diffuse = light.diffuse * diff * vec3(texture(material.texture0, TexCoord));
    // specular
    vec3 refletDir = reflect(-lightDir, normal);
    float strength = pow(max(dot(viewDir, refletDir), 0.0), material.shininess);
    vec3 specular = light.specular * strength * vec3(texture(material.texture1, TexCoord));
    // attenuation: applied to ambient too, so it doesn't accumulate across multiple lights
    ambient *= attenuation;
    diffuse *= attenuation;
    specular *= attenuation;
    return (ambient + diffuse + specular);
}

// directional light
vec3 calcDirectLight(DirectLight light, vec3 normal, vec3 viewDir)
{
    // ambient
    vec3 ambient = light.ambient * vec3(texture(material.texture0, TexCoord));
    // diffuse
    vec3 directional = normalize(-light.directional);
    float diff = max(dot(directional, normal), 0.0);
    vec3 diffuse = light.diffuse * diff * vec3(texture(material.texture0, TexCoord));
    // specular
    vec3 refletDir = reflect(-directional, normal);
    float spec = pow(max(dot(viewDir, refletDir), 0.0), material.shininess);
    vec3 specular = light.specular * spec * vec3(texture(material.texture1, TexCoord));
    return (ambient + diffuse + specular);
}
```

This is my first post and I don't have any CSDN coins yet — sorry, I'll reward you once I do. I haven't pasted the Visual Studio code because I don't know which part to paste. Thanks to everyone who stops by and helps answer.
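Since the C++ side isn't shown, one common cause worth checking (a guess, not a confirmed diagnosis): every element and member of a uniform struct array must be set individually by its full name — `glGetUniformLocation(program, "pointlight[2].diffuse")` and so on. Querying just `"pointlight"` returns -1, `glUniform*` calls with location -1 are silently ignored, and any member left unset defaults to zero, which makes that light's contribution black. A small hypothetical helper (Python here, but the name strings are what matter) to enumerate every name this shader expects:

```python
def point_light_uniform_names(count=4):
    """Full uniform names for the pointlight[4] array of structs in the shader."""
    members = ["ambient", "diffuse", "specular", "position",
               "constant", "linear", "quadratic"]
    # GLSL requires each array element/member to be addressed individually.
    return [f"pointlight[{i}].{m}" for i in range(count) for m in members]
```

Looping over these names, checking that `glGetUniformLocation` returns something other than -1 for each, and setting each one explicitly should quickly reveal whether a light struct was left at its zero default.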
OpenGL: how to correctly blend texture colors?
While modifying a program, I got an error when blending the colors of two textures in the fragment shader. The OpenGL version is quite old, and I couldn't find a relevant explanation. The details: the base color is a variable called baseColor, sampled from a 1D texture: ```GLSL vec4 baseColor=texture1D(heightColorMapSampler,heightColorMapTexCoord); ``` The heightColorMapTexCoord coordinate is computed from the vertex position — apparently a very old OpenGL technique: ```GLSL uniform mat4 depthProjection; ......... ......... vec4 vertexDic=gl_Vertex; vec4 vertexCc=depthProjection*vertexDic; heightColorMapTexCoord=dot(heightColorMapPlaneEq,vertexCc); ``` That code is in the vertex shader. Then I added a new texture in the fragment shader and tried to blend baseColor with it. First I tested whether my texture was imported successfully like this: ```GLSL vec4 mycolor = vec4(texture2DRect(mapSampler,gl_FragCoord.xy).rgb, 1.0); baseColor =mycolor; ``` My texture completely covered the previous rendering result, which is exactly the effect I wanted. Then I used a mix function, changing the two lines above to the following, hoping to blend baseColor with my texture's color: ```GLSL vec4 mycolor = vec4(texture2DRect(mapSampler,gl_FragCoord.xy).rgb, 1.0); baseColor =mix(baseColor, mycolor, 0.5); ``` The result is an error: Invalid operation. I thought maybe baseColor can't be blended with a 2D texture because it's sampled from a 1D texture, but the renderer itself already blends baseColor with a 2D texture elsewhere, so blending should be possible. What is going wrong here, and how should I blend the two colors?
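For what it's worth, `mix` itself cannot be the culprit at the GLSL level: once sampled, a texel is an ordinary `vec4`, regardless of whether it came from a `sampler1D` or a `sampler2DRect`, and `mix(vec4, vec4, float)` is defined for any such pair. `GL_INVALID_OPERATION` is raised at draw time, and a common cause is that adding the extra sampler made two samplers of *different types* end up bound to the same texture unit, which the GL spec explicitly flags as an invalid operation at draw — so checking that `mapSampler` and `heightColorMapSampler` are assigned distinct units is a good first step. As a reference for what `mix` actually computes, it is plain component-wise linear interpolation:

```python
def mix(a, b, t):
    """Component-wise linear interpolation, like GLSL mix(a, b, t)."""
    return tuple(x * (1.0 - t) + y * t for x, y in zip(a, b))
```

With `t = 0.5` this is just the average of the two colors, e.g. `mix((0, 0, 0, 1), (1, 1, 1, 1), 0.5)` gives `(0.5, 0.5, 0.5, 1.0)`.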
Why does randomly resizing an image fail in TensorFlow?
I'm new to TensorFlow, so please bear with me. I've recently been studying data augmentation, and I found that randomly rescaling an image goes wrong. Here is the code:
```
img = tf.placeholder(tf.uint8,shape=[2048,1024,3])
# draw a random scale factor
scale = tf.random_uniform([1], minval=0.5, maxval=2.0, dtype=tf.float32, seed=None)
# compute the image size after resizing
h_new = tf.to_int32(tf.multiply(tf.to_float(tf.shape(img)[0]), scale))
w_new = tf.to_int32(tf.multiply(tf.to_float(tf.shape(img)[1]), scale))
# do the resize
new_shape = tf.squeeze(tf.stack([h_new, w_new]), squeeze_dims=[1])
img = tf.image.resize_images(img, size=new_shape)
```
But I found that after the resize, the height and width of img are actually None. ![图片说明](https://img-ask.csdn.net/upload/201910/24/1571929669_942601.png) It also fails if I write it as follows: ![图片说明](https://img-ask.csdn.net/upload/201910/24/1571929410_737680.png) But if I hard-code the resize dimensions, it works, which is consistent with most examples online: ![图片说明](https://img-ask.csdn.net/upload/201910/24/1571929641_46797.png) So, experts: why is this? Why can't I randomly resize an image?
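The `None` here is expected TF1 graph behavior, not a failure: the *static* shape is computed at graph-construction time, and since `scale` is only drawn when the session runs, TensorFlow cannot know the output height and width in advance, so it reports them as unknown (`None`). The tensor still has a concrete size at run time (`sess.run(tf.shape(img))` would show it). With the dimensions hard-coded, the size is a Python constant, so the static shape is known — hence the difference. What the graph computes at run time is just this (sketched eagerly with numpy, names are my own):

```python
import numpy as np


def random_resize_shape(h, w, minval=0.5, maxval=2.0, rng=None):
    """Mirror the graph ops: draw one scale factor, multiply, truncate to int."""
    rng = rng if rng is not None else np.random.default_rng(0)
    scale = rng.uniform(minval, maxval)  # like tf.random_uniform([1], ...)
    return int(h * scale), int(w * scale)  # like tf.to_int32(tf.multiply(...))
```

So the pipeline is fine as long as later ops tolerate a dynamic shape; if some op insists on a static one, `tf.Tensor.set_shape` (when you genuinely know the size) or padding/cropping to a fixed size after the random resize are the usual workarounds.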