About a TensorFlow DNN classifier

I wrote a simple DNN in TensorFlow (input layer, one hidden layer, output layer) for classification, using the UCI iris dataset.
The output activation is softmax and the loss is the log-likelihood (cross-entropy), so the final output is a probability distribution and the predicted class is the one with the highest probability.
The problem right now is that the predictions come out wrong; running on the test data gives the results shown below.

[screenshot: prediction output on the test set]

I'm just getting started... hoping someone can point me in the right direction, thanks!

# coding: utf-8
import matplotlib.pyplot as plt
import tensorflow as tf
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.decomposition import PCA
from sklearn import preprocessing
from sklearn.model_selection import cross_val_score

BATCH_SIZE = 30

iris = pd.read_csv(r'F:\dataset\iris\dataset.data', sep=',', header=None)

'''
# Inspect the loaded data
print("Dataset Length:: ", len(iris))
print("Dataset Shape:: ", iris.shape)
print("Dataset:: ")
print(iris.head(150))
'''

# Split each record into features and a label
X = iris.values[:, 0:4]
Y = iris.values[:, 4]

# Encode the string class labels as integers
# Iris-setosa       ---> 0
# Iris-versicolor   ---> 1
# Iris-virginica    ---> 2
for i in range(len(Y)):
    if Y[i] == 'Iris-setosa':
        Y[i] = 0
    elif Y[i] == 'Iris-versicolor':
        Y[i] = 1
    elif Y[i] == 'Iris-virginica':
        Y[i] = 2

# 划分训练集与测试集
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.3, random_state=10)

# Reshape X and Y; -1 lets NumPy infer the row count, so X becomes n x 4 and Y becomes n x 1
X_train = np.vstack(X_train).reshape(-1, 4)
Y_train = np.vstack(Y_train).reshape(-1, 1)
X_test = np.vstack(X_test).reshape(-1, 4)
Y_test = np.vstack(Y_test).reshape(-1, 1)

'''
print(X_train)
print(Y_train)
print(X_test)
print(Y_test)
'''

# Define the network inputs, parameters and outputs, and the forward pass
def get_weight(shape):
    w = tf.Variable(tf.random_normal(shape), dtype=tf.float32)
    return w

def get_bias(shape):
    b = tf.Variable(tf.constant(0.01, shape=shape))
    return b

x = tf.placeholder(tf.float32, shape=(None, 4))
yi = tf.placeholder(tf.float32, shape=(None, 1))

def BP_Model():
    w1 = get_weight([4, 10])  # first hidden layer: 4 inputs, 10 neurons
    b1 = get_bias([10])
    y1 = tf.nn.softmax(tf.matmul(x, w1) + b1)  # mind the dimensions

    w2 = get_weight([10, 3])  # output layer: 10 inputs, 3 neurons
    b2 = get_bias([3])
    y = tf.nn.softmax(tf.matmul(y1, w2) + b2)

    return y

def train():
    # Build the computation graph
    y = BP_Model()
    # Define the loss function
    ce = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=y, labels=tf.arg_max(yi, 1))
    loss_cem = tf.reduce_mean(ce)
    # Define the optimization (backprop) step
    train_step = tf.train.AdamOptimizer(0.001).minimize(loss_cem)
    # Create a saver
    saver = tf.train.Saver(tf.global_variables())
    # Create a session and train
    with tf.Session() as sess:
        init_op = tf.global_variables_initializer()
        sess.run(init_op)
        Steps = 5000
        for i in range(Steps):
            start = (i * BATCH_SIZE) % 300
            end = start + BATCH_SIZE
            sess.run(train_step, feed_dict={x: X_train[start:end], yi: Y_train[start:end]})
            if i % 100 == 0:
                loss_val = sess.run(loss_cem, feed_dict={x: X_train, yi: Y_train})
                print("step: ", i, "loss: ", loss_val)
        print("保存模型: ", saver.save(sess, './model_iris/bp_model.model'))
    tf.summary.FileWriter("logs/", sess.graph)
#train()

def prediction():
    # Build the computation graph
    y = BP_Model()
    # Define the loss function
    ce = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=y, labels=tf.arg_max(yi, 1))
    loss_cem = tf.reduce_mean(ce)
    # Create a saver
    saver = tf.train.Saver(tf.global_variables())
    with tf.Session() as sess:
        saver.restore(sess, './model_iris/bp_model.model')
        result = sess.run(y, feed_dict={x: X_test})
        loss_val = sess.run(loss_cem, feed_dict={x: X_test, yi: Y_test})
        print("result :", result)
        print("loss :", loss_val)
        result_set = sess.run(tf.argmax(result, axis=1))
        print("predict result: ", result_set)
        print("real result: ", Y_test.reshape(1, -1))

#prediction()
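
A couple of things I noticed while writing this up, which may or may not be the cause: tf.nn.sparse_softmax_cross_entropy_with_logits applies softmax internally, but the y I pass in as logits is already the output of a softmax (and the hidden layer also uses softmax as its activation); and since yi has shape (None, 1), tf.arg_max(yi, 1) always returns 0, so the loss may never see the real labels. Below is a minimal sketch (my own guess, not a confirmed fix, reusing get_weight/get_bias and the imports from above; the placeholder name "labels" is just for illustration) of how I understand the loss is usually wired: raw logits go into the loss, integer class labels are fed as a flat vector, and softmax/argmax is only used when reading out predictions.

x = tf.placeholder(tf.float32, shape=(None, 4))
labels = tf.placeholder(tf.int32, shape=(None,))   # class indices 0/1/2, not one-hot

w1 = get_weight([4, 10])
b1 = get_bias([10])
h1 = tf.nn.relu(tf.matmul(x, w1) + b1)             # hidden layer without softmax

w2 = get_weight([10, 3])
b2 = get_bias([3])
logits = tf.matmul(h1, w2) + b2                    # raw logits, no softmax here

# the loss applies softmax internally, so it takes the un-normalized logits
loss = tf.reduce_mean(
    tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=logits))
train_step = tf.train.AdamOptimizer(0.001).minimize(loss)

# softmax/argmax only when reading out predictions
probs = tf.nn.softmax(logits)
predict = tf.argmax(probs, axis=1)

With this wiring the labels would be fed flattened, e.g. feed_dict={x: X_train[start:end], labels: Y_train[start:end].reshape(-1).astype(np.int32)}, and the batch offset would probably need % len(X_train) instead of % 300, since the 70/30 split leaves only about 105 training rows. I'm not sure whether any of this is the real problem, so corrections are welcome.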
