How to fix "ValueError: None values not supported."

While running my code I hit `ValueError: None values not supported.` and execution stops here. This is the class I defined; the problem appears to happen when `fit` is called.

```
from keras import backend
from keras.callbacks import TensorBoard, EarlyStopping


class NetworkBase(object):
    def train(self, x_train, y_train, x_test, y_test, epochs, batch_size,
              log_dir='/tmp/fullyconnected', stop_early=False):
        callbacks = []
        # TensorBoard logging only works on the TensorFlow backend
        # (backend.backend() is the public way to check)
        if backend._BACKEND == 'tensorflow':
            callbacks.append(TensorBoard(log_dir=log_dir))

        # Optionally stop once val_loss stalls for two epochs
        if stop_early:
            callbacks.append(EarlyStopping(monitor='val_loss', patience=2,
                                           verbose=1, mode='auto'))

        self.fcnet.fit(x_train, y_train,
                       epochs=epochs,
                       batch_size=batch_size,
                       shuffle=True,
                       validation_data=(x_test, y_test),
                       callbacks=callbacks)
```

The error message follows:

```
  File "D:\R\实验室\代码\DL-hybrid-precoder-master\main_train\Model\network_base.py", line 20, in train
    callbacks=callbacks)
  File "C:\Users\admin\Anaconda3\lib\site-packages\keras\engine\training.py", line 1213, in fit
    self._make_train_function()
  File "C:\Users\admin\Anaconda3\lib\site-packages\keras\engine\training.py", line 316, in _make_train_function
    loss=self.total_loss)
  File "C:\Users\admin\Anaconda3\lib\site-packages\keras\legacy\interfaces.py", line 91, in wrapper
    return func(*args, **kwargs)
  File "C:\Users\admin\Anaconda3\lib\site-packages\keras\optimizers.py", line 543, in get_updates
    p_t = p - lr_t * m_t / (K.sqrt(v_t) + self.epsilon)
  File "C:\Users\admin\Anaconda3\lib\site-packages\tensorflow\python\ops\math_ops.py", line 815, in binary_op_wrapper
    y = ops.convert_to_tensor(y, dtype=x.dtype.base_dtype, name="y")
  File "C:\Users\admin\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 1039, in convert_to_tensor
    return convert_to_tensor_v2(value, dtype, preferred_dtype, name)
  File "C:\Users\admin\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 1097, in convert_to_tensor_v2
    as_ref=False)
  File "C:\Users\admin\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 1175, in internal_convert_to_tensor
    ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
  File "C:\Users\admin\Anaconda3\lib\site-packages\tensorflow\python\framework\constant_op.py", line 304, in _constant_tensor_conversion_function
    return constant(v, dtype=dtype, name=name)
  File "C:\Users\admin\Anaconda3\lib\site-packages\tensorflow\python\framework\constant_op.py", line 245, in constant
    allow_broadcast=True)
  File "C:\Users\admin\Anaconda3\lib\site-packages\tensorflow\python\framework\constant_op.py", line 283, in _constant_impl
    allow_broadcast=allow_broadcast))
  File "C:\Users\admin\Anaconda3\lib\site-packages\tensorflow\python\framework\tensor_util.py", line 454, in make_tensor_proto
    raise ValueError("None values not supported.")
ValueError: None values not supported.
```

I hope someone can help me figure out where the problem is.
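
For reference, the traceback bottoms out inside Keras's Adam update rule, `p_t = p - lr_t * m_t / (K.sqrt(v_t) + self.epsilon)`, while converting `self.epsilon` to a tensor, which means the optimizer is carrying `epsilon=None` into training. That typically happens with certain Keras/TensorFlow version pairings, or when `epsilon=None` is passed to a Keras version that does not substitute `K.epsilon()` for it. A minimal workaround sketch, assuming `self.fcnet` is compiled with Adam somewhere in this class (the compile call and loss below are placeholders, not the asker's actual code):

```
from keras import optimizers

# Give Adam an explicit epsilon so self.epsilon can never be None
# inside get_updates. Hypothetical compile call; keep the real loss.
self.fcnet.compile(optimizer=optimizers.Adam(lr=1e-3, epsilon=1e-8),
                   loss='categorical_crossentropy')
```

If the model is compiled elsewhere, upgrading keras so its optimizers match the installed tensorflow is the other common fix.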

Related questions
ValueError: None values not supported.

```
Traceback (most recent call last):
  File "document_summarizer_training_testing.py", line 296, in <module>
    tf.app.run()
  File "/home/lyliu/anaconda3/envs/tensorflowgpu/lib/python3.5/site-packages/tensorflow/python/platform/app.py", line 48, in run
    _sys.exit(main(_sys.argv[:1] + flags_passthrough))
  File "document_summarizer_training_testing.py", line 291, in main
    train()
  File "document_summarizer_training_testing.py", line 102, in train
    model = MY_Model(sess, len(vocab_dict)-2)
  File "/home/lyliu/Refresh-master-self-attention/my_model.py", line 70, in __init__
    self.train_op_policynet_expreward = model_docsum.train_neg_expectedreward(self.rewardweighted_cross_entropy_loss_multi...
  File "/home/lyliu/Refresh-master-self-attention/model_docsum.py", line 835, in train_neg_expectedreward
    grads_and_vars_capped_norm = [(tf.clip_by_norm(grad, 5.0), var) for grad, var in grads_and_vars]
  File "/home/lyliu/Refresh-master-self-attention/model_docsum.py", line 835, in <listcomp>
    grads_and_vars_capped_norm = [(tf.clip_by_norm(grad, 5.0), var) for grad, var in grads_and_vars]
  File "/home/lyliu/anaconda3/envs/tensorflowgpu/lib/python3.5/site-packages/tensorflow/python/ops/clip_ops.py", line 107, in clip_by_norm
    t = ops.convert_to_tensor(t, name="t")
  ...
  File "/home/lyliu/anaconda3/envs/tensorflowgpu/lib/python3.5/site-packages/tensorflow/python/framework/tensor_util.py", in make_tensor_proto
    raise ValueError("None values not supported.")
ValueError: None values not supported.
```

Using the GPU build of tensorflow, version 1.2.0. I'm hoping to find a fix, or the reason this error occurs.
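
`tf.clip_by_norm` raises exactly this error when `grad` is `None`, which `compute_gradients` returns for every variable that does not participate in the loss. A minimal sketch of the usual guard, assuming `grads_and_vars` comes from `optimizer.compute_gradients(loss)` as the surrounding code suggests:

```
# Skip variables that received no gradient instead of handing
# grad=None to tf.clip_by_norm.
grads_and_vars_capped_norm = [
    (tf.clip_by_norm(grad, 5.0), var)
    for grad, var in grads_and_vars
    if grad is not None
]
```

If a variable you expect to train shows up with a `None` gradient, that points at a disconnect in the graph rather than at the clipping code.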

How do I fix "None values not supported" from a CNN model?

I'm a deep-learning beginner. I've been following a skin-disease dataset kernel on Kaggle → [skin-disease dataset CNN model](https://www.kaggle.com/sid321axn/step-wise-approach-cnn-model-77-0344-accuracy ""). I copied the kernel's code as-is (only the folder-access code near the top was changed, which shouldn't affect what follows), but running it in VS Code fails at:

```
history = model.fit_generator(datagen.flow(x_train,y_train, batch_size=batch_size),
                              steps_per_epoch=x_train.shape[0] // batch_size,
                              epochs = epochs,
                              verbose = 1,
                              callbacks=[learning_rate_reduction],
                              validation_data = (x_validate,y_validate))
```

The traceback, as screenshotted: ![error screenshot](https://img-ask.csdn.net/upload/202002/01/1580524933_147759.png) ![error screenshot](https://img-ask.csdn.net/upload/202002/01/1580525568_202234.png) ![error screenshot](https://img-ask.csdn.net/upload/202002/01/1580525554_366954.png)

The error is: ValueError: None values not supported. How can I solve this? The full source is at the Kaggle link above.
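
`fit_generator` builds the training function through the same `_make_train_function` path as `fit`, so the first thing worth ruling out is the optimizer carrying `epsilon=None`, as in the traceback at the top of this page. A sketch, assuming the kernel compiles with Adam (the loss and metrics below are placeholders):

```
from keras import backend as K
from keras import optimizers

print(K.epsilon())  # should print a small float such as 1e-07

# Recompile with an explicit epsilon before calling fit_generator.
model.compile(optimizer=optimizers.Adam(lr=0.001, epsilon=1e-8),
              loss='categorical_crossentropy',
              metrics=['accuracy'])
```

If that changes nothing, mismatched keras and tensorflow versions between the Kaggle kernel and the local VS Code environment are the next suspect.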

ValueError when training a network with keras

As the title says: training with keras's `model.fit` raises ValueError: None values not supported. The traceback:

```
File "C:/Users/Desktop/MNISTpractice/mnist.py", line 93, in <module>
  model.fit(x_train,y_train, epochs=2, callbacks=callback_list,validation_data=(x_val,y_val))
File "C:\Anaconda3\lib\site-packages\keras\engine\training.py", line 1575, in fit
  self._make_train_function()
File "C:\Anaconda3\lib\site-packages\keras\engine\training.py", line 960, in _make_train_function
  loss=self.total_loss)
File "C:\Anaconda3\lib\site-packages\keras\legacy\interfaces.py", line 87, in wrapper
  return func(*args, **kwargs)
File "C:\Anaconda3\lib\site-packages\keras\optimizers.py", line 432, in get_updates
  m_t = (self.beta_1 * m) + (1. - self.beta_1) * g
File "C:\Anaconda3\lib\site-packages\tensorflow\python\ops\math_ops.py", line 820, in binary_op_wrapper
  y = ops.convert_to_tensor(y, dtype=x.dtype.base_dtype, name="y")
File "C:\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 639, in convert_to_tensor
  as_ref=False)
File "C:\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 704, in internal_convert_to_tensor
  ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
File "C:\Anaconda3\lib\site-packages\tensorflow\python\framework\constant_op.py", line 113, in _constant_tensor_conversion_function
  return constant(v, dtype=dtype, name=name)
File "C:\Anaconda3\lib\site-packages\tensorflow\python\framework\constant_op.py", line 102, in constant
  tensor_util.make_tensor_proto(value, dtype=dtype, shape=shape, verify_shape=verify_shape))
File "C:\Anaconda3\lib\site-packages\tensorflow\python\framework\tensor_util.py", line 360, in make_tensor_proto
  raise ValueError("None values not supported.")
ValueError: None values not supported.
```
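
This traceback fails one step earlier than the one at the top of the page: inside Adam's `m_t = (self.beta_1 * m) + (1. - self.beta_1) * g`, the operand being converted is almost certainly the gradient `g`, so a `None` gradient (a trainable weight not connected to the loss) is the likely culprit alongside the `epsilon=None` issue. A diagnostic sketch, assuming Keras 2.x on the TensorFlow backend and that `model` is the compiled model from the question:

```
import keras.backend as K

# After model.compile(...), list every trainable weight whose gradient
# with respect to the total loss comes back as None.
grads = K.gradients(model.total_loss, model.trainable_weights)
for weight, grad in zip(model.trainable_weights, grads):
    if grad is None:
        print('no gradient flows to', weight.name)
```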

Tensorflow beginner building a simple neural network: I can't understand the error, please help!

```
import tensorflow as tf
import numpy as np

def add_layer(input_data,input_size,output_size,activation_function=None):
    Weights=tf.Variable(tf.random.normal([input_size,output_size]))
    biases=tf.Variable(tf.zeros(None,output_size)+0.1)
    Wx_plus_biases=tf.matmul(input_data,Weights)
    if activation_function==None:
        output=Wx_plus_biases
    else:
        output=activation_function(output)
    return output

x_data=np.linspace(-1,1,300)[:,np.newaxis]
noise=np.random.normal(0.0,0.05,x_data)
y_data=np.square(x_data)-0.5+noise
xs=tf.compat.v1.placeholder(tf.float32,[None,1])
ys=tf.compat.v1.placeholder(tf.float32,[None,1])
l1=add_layer(x_data,1,10,activation_function=tf.nn.relu)
prediction=add_layer(l1,10,1,activation_function=None)
loss=tf.reduce_mean(tf.reduce_sum(tf.square(ys-precdition), reduction_indices=[1]))
train_op=tf.train.GradientDicentOptimizer(0.1).minize(loss)
init=tf.global_variables_initializer()
for i in range(1000):
    sess.run(train_op,feed_dict={xs:x_data,ys:y_data})
    if i%50:
        print (sess.run(loss,feed_dict={xs:x_data,ys:y_data}))
```

That's the source; the error follows:

```
Traceback (most recent call last):
  File "/home/vdarkknightx/.local/lib/python3.6/site-packages/tensorflow/python/ops/array_ops.py", line 1877, in zeros
    tensor_shape.TensorShape(shape))
  File "/home/vdarkknightx/.local/lib/python3.6/site-packages/tensorflow/python/framework/constant_op.py", line 326, in _tensor_shape_tensor_conversion_function
    "Cannot convert a partially known TensorShape to a Tensor: %s" % s)
ValueError: Cannot convert a partially known TensorShape to a Tensor: <unknown>

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "Easy_network.py", line 17, in <module>
    l1=add_layer(x_data,1,10,activation_function=tf.nn.relu)
  File "Easy_network.py", line 5, in add_layer
    biases=tf.Variable(tf.zeros(None,output_size)+0.1)
  File "/home/vdarkknightx/.local/lib/python3.6/site-packages/tensorflow/python/ops/array_ops.py", line 1880, in zeros
    shape = ops.convert_to_tensor(shape, dtype=dtypes.int32)
  File "/home/vdarkknightx/.local/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1087, in convert_to_tensor
    return convert_to_tensor_v2(value, dtype, preferred_dtype, name)
  File "/home/vdarkknightx/.local/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1145, in convert_to_tensor_v2
    as_ref=False)
  File "/home/vdarkknightx/.local/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1224, in internal_convert_to_tensor
    ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
  File "/home/vdarkknightx/.local/lib/python3.6/site-packages/tensorflow/python/framework/constant_op.py", line 305, in _constant_tensor_conversion_function
    return constant(v, dtype=dtype, name=name)
  File "/home/vdarkknightx/.local/lib/python3.6/site-packages/tensorflow/python/framework/constant_op.py", line 246, in constant
    allow_broadcast=True)
  File "/home/vdarkknightx/.local/lib/python3.6/site-packages/tensorflow/python/framework/constant_op.py", line 284, in _constant_impl
    allow_broadcast=allow_broadcast))
  File "/home/vdarkknightx/.local/lib/python3.6/site-packages/tensorflow/python/framework/tensor_util.py", line 454, in make_tensor_proto
    raise ValueError("None values not supported.")
ValueError: None values not supported.
```

A beginner asking for help!!
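
The first failure comes from the `biases` line: `tf.zeros(None,output_size)` passes `None` as the shape (and `output_size` where the dtype belongs), and converting that `None` to a tensor is what raises "None values not supported." A corrected line:

```
# tf.zeros takes the shape first, as a list; dtype is the second argument.
biases = tf.Variable(tf.zeros([output_size]) + 0.1)
```

Several more typos are waiting behind it: `precdition` vs `prediction`, `GradientDicentOptimizer`/`minize` vs `GradientDescentOptimizer`/`minimize`, `np.random.normal(0.0, 0.05, x_data)` should receive `x_data.shape` as its size argument, the activation branch in `add_layer` should apply `activation_function(Wx_plus_biases)` rather than the not-yet-assigned `output`, and a `tf.Session` (`sess`) is never created before the training loop.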

Tensorflow RNN LSTM code not running correctly?

The error is ValueError: None values not supported., raised at the `cross_entropy` line. Thanks, everyone.

```
#7.2 RNN
import tensorflow as tf
#tf.reset_default_graph()
from tensorflow.examples.tutorials.mnist import input_data

# load the dataset
mnist = input_data.read_data_sets("MNIST_data/", one_hot = True)

# input images are 28*28
n_inputs = 28    # feed one row at a time, 28 values per row
max_time = 28    # 28 rows in total
lstm_size = 100  # hidden units
n_classes = 10   # 10 classes
batch_size = 50  # 50 samples per batch
n_batch = mnist.train.num_examples//batch_size  # number of batches

# None here would make the first dimension any length
x = tf.placeholder(tf.float32, [batch_size, 784])
# correct labels
y = tf.placeholder(tf.float32, [batch_size, 10])

# initialize the weights
weights = tf.Variable(tf.truncated_normal([lstm_size, n_classes], stddev = 0.1))
# initialize the biases
biases = tf.Variable(tf.constant(0.1, shape = [n_classes]))

# define the RNN
def RNN(X, weights, biases):
    # inputs = [batch_size, max_time, n_inputs]
    inputs = tf.reshape(X, [-1, max_time, n_inputs])
    # basic LSTM cell
    lstm_cell = tf.nn.rnn_cell.BasicLSTMCell(lstm_size)
    # final_state[0] is the cell state
    # final_state[1] is the hidden state
    outputs, final_state = tf.nn.dynamic_rnn(lstm_cell, inputs, dtype = tf.float32)
    results = tf.nn.softmax(tf.matmul(final_state[1], weights) + biases)

# compute the RNN's output
prediction = RNN(x, weights, biases)
# loss function
cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(labels = y, logits = prediction))
# optimize with AdamOptimizer
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
# store the results in a list of booleans
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(prediction, 1))
# compute the accuracy
accuracy = tf.reduce_mean(tf.cast(correct_precdition, tf.float32))
# initialize
init = tf.global_variable_initializer()

with tf.Session() as sess:
    sess.run(init)
    for epoch in range(6):
        for batch in range(n_batch):
            batch_xs, batch_ys = mnist.train.next_batch(batch_size)
            sess.run(train_step, feed_dict={x: batch_xs, y: batch_ys})
        acc = sess.run(accuracy, feed_dict={x: mnist.test.images, y: mnist.test.labels})
        print('Iter' + str(epoch) + ',Testing Accuracy = ' + str(acc))
```
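
The cause is visible in the snippet: `RNN` computes `results` but never returns it, so `prediction = RNN(x, weights, biases)` is `None`, and `softmax_cross_entropy_with_logits_v2(logits=None)` fails at the `cross_entropy` line with exactly this error. A minimal fix to that function:

```
def RNN(X, weights, biases):
    inputs = tf.reshape(X, [-1, max_time, n_inputs])
    lstm_cell = tf.nn.rnn_cell.BasicLSTMCell(lstm_size)
    outputs, final_state = tf.nn.dynamic_rnn(lstm_cell, inputs, dtype=tf.float32)
    results = tf.nn.softmax(tf.matmul(final_state[1], weights) + biases)
    return results  # without this, prediction ends up as None
```

Two later typos will surface after that: `correct_precdition` should be `correct_prediction`, and `tf.global_variable_initializer` should be `tf.global_variables_initializer`.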
