Spark write to Elasticsearch fails with "Could not write all entries"

When I use Spark to write an RDD to an Elasticsearch cluster, the following exception is thrown:

Could not write all entries [199/161664] (maybe ES was overloaded?). Bailing out...
    at org.elasticsearch.hadoop.rest.RestRepository.flush(RestRepository.java:250)
    at org.elasticsearch.hadoop.rest.RestRepository.doWriteToIndex(RestRepository.java:201)
    at org.elasticsearch.hadoop.rest.RestRepository.writeToIndex(RestRepository.java:163)
    at org.elasticsearch.spark.rdd.EsRDDWriter.write(EsRDDWriter.scala:49)
    at org.elasticsearch.spark.rdd.EsSpark$$anonfun$doSaveToEs$1.apply(EsSpark.scala:84)
    at org.elasticsearch.spark.rdd.EsSpark$$anonfun$doSaveToEs$1.apply(EsSpark.scala:84)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
    at org.apache.spark.scheduler.Task.run(Task.scala:89)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

The RDD is roughly 50 million rows, and the ES cluster has two nodes. The write call is:

EsSpark.saveToEs(result, "userindex/users", Map("es.mapping.id" -> "uid"))
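For context, the "maybe ES was overloaded?" hint usually means the connector's bulk requests are being rejected faster than its retry policy can absorb. The es-hadoop connector exposes bulk-size and retry settings through the same options map, so one thing worth trying is smaller batches, more patient retries, and fewer concurrent writer tasks. This is only a sketch with illustrative values (the setting names come from the es-hadoop configuration docs; `result` is the RDD from the call above):

```
// Smaller bulk batches, more patient retries, fewer parallel writers,
// so a 2-node cluster is not flooded. All values here are examples only.
val esConf = Map(
  "es.mapping.id"              -> "uid",
  "es.batch.size.entries"      -> "500",  // default is 1000 docs per bulk request
  "es.batch.size.bytes"        -> "1mb",  // default is 1mb per bulk request
  "es.batch.write.retry.count" -> "10",   // default is 3 retries on rejected bulks
  "es.batch.write.retry.wait"  -> "60s"   // default is 10s between retries
)

// Fewer partitions means fewer tasks bulk-writing to ES at the same time.
EsSpark.saveToEs(result.repartition(20), "userindex/users", esConf)
```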

2 answers

bnvm1401
@螳螂之怒 The link you posted doesn't cover this exception. Have you run into it yourself, and how did you handle it? I'm new to ES.
Replied over 2 years ago

OP, did you solve this in the end? I'm running into the same problem.
