C#: reading database rows into a tree-shaped List

The database rows look like this:
Head office 01
Subsidiary 0101
Subsidiary 0102
... and so on.

How do I read this with C# and build a tree-shaped list?

c#

2 answers

id   Name              Parent node
1    Head office 1     root
2    Subsidiary 1      1
3    Subsidiary 2      1
4    Sub-subsidiary 1  2

Query this table recursively; "root" marks the root element. For each node, look up the rows whose parent-node column holds its id, and build the tree from those links.
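The parent-id layout above can be turned into a tree in a single pass over the rows, without issuing a recursive query per node. A minimal sketch (the `OrgNode` class and its field names are illustrative, not the asker's real schema):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class OrgNode
{
    public int Id;
    public string Name;
    public string ParentId; // "root" for the top row, otherwise the parent's id
    public List<OrgNode> Children = new List<OrgNode>();
}

public static class OrgTree
{
    // Index every row by id, then attach each row to its parent.
    // Rows whose ParentId is "root" become the top-level nodes.
    public static List<OrgNode> Build(IEnumerable<OrgNode> rows)
    {
        var byId = rows.ToDictionary(n => n.Id);
        var roots = new List<OrgNode>();
        foreach (var node in byId.Values)
        {
            if (node.ParentId == "root")
                roots.Add(node);
            else if (int.TryParse(node.ParentId, out int pid) && byId.TryGetValue(pid, out var parent))
                parent.Children.Add(node);
        }
        return roots;
    }
}
```

With the four rows from the table above this yields one root ("Head office 1") holding two subsidiaries, the first of which holds one sub-subsidiary.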

You also need a marker column; the marker column holds the id of the node's root.

q465162770
Replying to 小慧哥: the database can't be changed, so the layering has to come from the 0101-style codes.
Replied over 5 years ago
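If the only structure available is the numeric code itself, the tree can be derived from code prefixes. A minimal sketch, assuming each level adds exactly two digits ("01" to "0101" to "010101"), which is what the sample data suggests:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class CodeNode
{
    public string Code; // e.g. "01", "0101", "0102"
    public string Name;
    public List<CodeNode> Children = new List<CodeNode>();
}

public static class CodeTree
{
    // A node's parent code is its own code with the last two digits removed.
    public static List<CodeNode> Build(IEnumerable<CodeNode> rows)
    {
        var byCode = rows.ToDictionary(n => n.Code);
        var roots = new List<CodeNode>();
        foreach (var node in byCode.Values)
        {
            if (node.Code.Length <= 2)
            {
                roots.Add(node); // two-digit codes are top-level companies
                continue;
            }
            string parentCode = node.Code.Substring(0, node.Code.Length - 2);
            if (byCode.TryGetValue(parentCode, out var parent))
                parent.Children.Add(node);
            else
                roots.Add(node); // orphan row: its parent code is missing from the table
        }
        return roots;
    }
}
```

This needs no extra columns in the database, which matches the commenter's constraint; only the two-digits-per-level assumption has to hold.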
Other related questions
C#: editing the selected ListView row has no effect on the database
Figure 1 [the code; please check whether the SQL statement is the problem. I can modify the data directly in the database](https://img-ask.csdn.net/upload/201711/26/1511707102_8261.png) Figure 2 [the change only shows on the surface; it never reaches the database](https://img-ask.csdn.net/upload/201711/26/1511706637_879759.png) The name textbox is textBOX2 and customerid is the row number. Is the SQL connection statement wrong? The exact same code runs perfectly in another form, but stops working when copied here. The intended behavior: the ListView loads rows from the database and displays them; double-clicking a row binds its values to the textboxes above; you edit the textbox and click the Modify button to save the change.
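For reference, a common reason an edit "sticks in the UI but not in the database" is an UPDATE whose WHERE clause matches no row, so ExecuteNonQuery silently affects 0 rows. A hedged sketch with parameters (the `Customer` table and column names here are guesses based on the question's description, not the asker's real schema):

```csharp
using System;
using System.Data.SqlClient;

public static class CustomerUpdater
{
    // Returns the number of rows actually changed; 0 means the WHERE
    // clause matched nothing, which looks exactly like "the UI updates
    // but the database does not".
    public static int UpdateName(string connectionString, int customerId, string newName)
    {
        const string sql = "UPDATE Customer SET Name = @name WHERE CustomerId = @id";
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(sql, conn))
        {
            cmd.Parameters.AddWithValue("@name", newName);
            cmd.Parameters.AddWithValue("@id", customerId);
            conn.Open();
            return cmd.ExecuteNonQuery();
        }
    }
}
```

Checking the return value of ExecuteNonQuery, and the customerid value actually bound from the selected ListView row, is usually enough to locate this kind of bug.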
I have never studied C#, but my company asked me to modify some C# code, so I am working through it on my own and have hit a question.
List<BaiduPoi> list = BaiduPoi.Fetch(" where issearch=0 order by createdate desc"); This line apparently reads data from the database, but I don't quite understand what it means. I'm using Visual Studio 2015, and the entity class seems to have been generated from the database. Can someone explain exactly what that database query fragment does? Much appreciated! [screenshot](https://img-ask.csdn.net/upload/201701/20/1484879311_598818.png)
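A hedged reading of that line: `Fetch` is very likely a micro-ORM helper (PetaPoco-style generated entity classes expose exactly this pattern) that prepends something like `SELECT * FROM BaiduPoi` to the fragment you pass in, so the full query becomes `SELECT * FROM BaiduPoi WHERE issearch=0 ORDER BY createdate DESC`: every row not yet searched, newest first. The in-memory LINQ equivalent of that filter and ordering would be:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class BaiduPoi
{
    public bool IsSearch;       // maps to the issearch column (0 = not searched yet)
    public DateTime CreateDate; // maps to the createdate column
}

public static class BaiduPoiQuery
{
    // Same semantics as "where issearch=0 order by createdate desc".
    public static List<BaiduPoi> NotYetSearched(IEnumerable<BaiduPoi> all) =>
        all.Where(p => !p.IsSearch)
           .OrderByDescending(p => p.CreateDate)
           .ToList();
}
```

The property names above are illustrative; the generated entity's actual member names depend on the code generator used.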
C#: after unzipping an uploaded file, nothing gets saved to the database
```
Stream stream = HttpContext.Current.Request.InputStream;
account = HttpContext.Current.Request.Params["Account"];
if (account == null || account == "") { account = "unknown"; }
desZipFilePrefix = ServiceHelp.getDesZipFileName(account);
fileSuffixNme = ".rar";
// Read the stream and save the file to a dedicated directory on the server
compressFilePath = OperateZipFile.writeCompressFile(stream, desZipFilePrefix + fileSuffixNme);
if (Directory.Exists(System.Web.HttpContext.Current.Server.MapPath("DesFile")) == false)
{
    Directory.CreateDirectory(System.Web.HttpContext.Current.Server.MapPath("DesFile"));
}
deCompressFilePath = Path.Combine(System.Web.HttpContext.Current.Server.MapPath("DesFile"), desZipFilePrefix);
OperateZipFile.UnRar(deCompressFilePath, compressFilePath, desZipFilePrefix + fileSuffixNme);

// Parse the shapefile
FeatureShapeFileParse featureShapeFileParse = new FeatureShapeFileParse();
featureShapeFileParse.filePathForShp = featureShapeFileParse.getShpPath(deCompressFilePath);
featureShapeFileParse.filePathForShx = featureShapeFileParse.getShxPath(deCompressFilePath);
featureShapeFileParse.filePathForDbf = featureShapeFileParse.getDbfPath(deCompressFilePath);
List<ShpFileData> shpFileDataList = featureShapeFileParse.readShpFile();
List<DbfFileData> dbfFileDataList = featureShapeFileParse.readDbfile();
if (dbfFileDataList == null)
{
    new Normal_AdminlogsDAL().SaveMapServiceLog(" Unzip failed, or the file is empty");
    return;
}
SurveyFeatureAddRequest surveyFeatureAddRequest = getSurveyFeatureAddRequest(shpFileDataList, dbfFileDataList);
string featureSql = string.Format(
    " INSERT INTO [PRO_PollingCheckLog]([P_ProID],[P_UserID],[P_CheckLogName],[P_StartTime],[P_EndTime],[P_Attributes],[P_LayerName],[P_FeatureName],[P_PicURI],[P_MediaURI],[P_Note],[P_StatusID],[P_CreateUserName],[P_CreateTime]) " +
    "VALUES ('{0}','{1}','{2}','{3}','{4}','{5}','{6}','{7}','{8}','{9}','{10}','{11}','{12}',getdate());",
    surveyFeatureAddRequest.ProjectID, surveyFeatureAddRequest.SysUserId, surveyFeatureAddRequest.ProjectName,
    surveyFeatureAddRequest.SurveyTime, surveyFeatureAddRequest.SurveyTime, surveyFeatureAddRequest.Attributes,
    surveyFeatureAddRequest.LayerName, surveyFeatureAddRequest.FeatureName, surveyFeatureAddRequest.PicURI,
    surveyFeatureAddRequest.MediaURI, surveyFeatureAddRequest.Note, 1, account);
int exerst = new SurveyFeatureDAL().InsertSurveyFeature(featureSql);
if (exerst > 0)
{
    // Delete the temporary files left over from unzipping
    deleteTempFile();
    // Record the user's operation in the log
    //new Normal_AdminlogsDAL().SaveLog(surveyFeatureAddRequest.SysUserId.ToString(), "Project: " + surveyFeatureAddRequest.ProjectName + ", the linked feature data was uploaded successfully.");
}
else
{
    new Normal_AdminlogsDAL().SaveMapServiceLog("Insert failed"); // Test
}
```
dbfFileDataList is always null.
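As for dbfFileDataList always being null: readDbfile can only return data if a .dbf file actually exists where getDbfPath looks for it. A small diagnostic sketch using only standard IO (the folder parameter mirrors the snippet's deCompressFilePath) that would show whether the archive really unpacked any .dbf, including into nested subdirectories:

```csharp
using System;
using System.IO;

public static class DbfDiagnostics
{
    // List every .dbf found under the extraction folder, recursively.
    // If this returns nothing, the UnRar step failed, or the files landed
    // in a nested subdirectory that getDbfPath does not search.
    public static string[] FindDbfFiles(string deCompressFilePath)
    {
        if (!Directory.Exists(deCompressFilePath))
        {
            Console.WriteLine("Extraction folder does not exist: " + deCompressFilePath);
            return Array.Empty<string>();
        }
        return Directory.GetFiles(deCompressFilePath, "*.dbf", SearchOption.AllDirectories);
    }
}
```

Running this right after the UnRar call narrows the problem to either the extraction step or the path-lookup step before suspecting the dbf parser itself.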
UWP: running the app normally jumps out of the main page with nothing displayed (single-stepping works), C#
```
using System;
using System.Collections;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using Windows.Storage;
using Windows.Storage.Pickers;
using Windows.Storage.Streams;
using Windows.UI.Xaml.Media.Imaging;
using Main_Page.Pages;
using Main_Page;
using Windows.Storage.AccessCache;
using Windows.Graphics.Imaging;
using DataAccessLibrary;
using System.Collections.ObjectModel;
using System.Threading;

namespace Main_Page.Models
{
    public class Items
    {
        public SoftwareBitmapSource AccessSource { set; get; }
        public string Date { set; get; }
        public string Size { set; get; }
        public string Name { set; get; }
        public string Path { set; get; }
    }

    public class PicturePicker
    {
        public static async Task<SoftwareBitmapSource> GetitemsAsync(int n)
        {
            var source = new SoftwareBitmapSource();
            FileOpenPicker fileOpenPicker = new FileOpenPicker();
            fileOpenPicker.SuggestedStartLocation = PickerLocationId.PicturesLibrary;
            fileOpenPicker.FileTypeFilter.Add(".jpg");
            fileOpenPicker.FileTypeFilter.Add(".png");
            fileOpenPicker.FileTypeFilter.Add(".bmp");
            fileOpenPicker.ViewMode = PickerViewMode.Thumbnail;
            var inputFile = await fileOpenPicker.PickSingleFileAsync();
            if (inputFile == null)
            {
                // The user cancelled the picking operation
                return source;
            }
            SoftwareBitmap softwareBitmap;
            using (IRandomAccessStream stream = await inputFile.OpenAsync(FileAccessMode.Read))
            {
                // Create the decoder from the stream
                BitmapDecoder decoder = await BitmapDecoder.CreateAsync(stream);
                // Get the SoftwareBitmap representation of the file
                softwareBitmap = await decoder.GetSoftwareBitmapAsync();
            }
            if (softwareBitmap.BitmapPixelFormat != BitmapPixelFormat.Bgra8
                || softwareBitmap.BitmapAlphaMode == BitmapAlphaMode.Straight)
            {
                softwareBitmap = SoftwareBitmap.Convert(softwareBitmap, BitmapPixelFormat.Bgra8, BitmapAlphaMode.Premultiplied);
            }
            await source.SetBitmapAsync(softwareBitmap); // Set the source of the Image control
            return source;
        }
    }

    // Ask for a folder and add it to the app's future-access list
    public class GetUserPermissions
    {
        public static string mruToken { set; get; }
        public static string faToken { set; get; }

        public static async Task<string> GetAccessPermissions()
        {
            var folderPicker = new Windows.Storage.Pickers.FolderPicker();
            folderPicker.SuggestedStartLocation = Windows.Storage.Pickers.PickerLocationId.Desktop;
            folderPicker.FileTypeFilter.Add("*");
            System.DateTime currentTime = System.DateTime.Now;
            StorageFolder folder = await folderPicker.PickSingleFolderAsync();
            if (folder != null)
            {
                // Add to MRU with metadata (for example, a string that represents the date)
                mruToken = Windows.Storage.AccessCache.StorageApplicationPermissions.MostRecentlyUsedList.Add(folder, currentTime.ToShortDateString());
                // Add to FA without metadata
                faToken = Windows.Storage.AccessCache.StorageApplicationPermissions.FutureAccessList.Add(folder, currentTime.ToShortDateString());
            }
            else
            {
                return "Operation cancelled.";
            }
            return folder.Path;
        }
    }

    public class ItemMannager
    {
        // Fill an ObservableCollection<Items> from the folders in the future-access list
        public static async Task ItemsAccessSourceAsync(ObservableCollection<Items> AccessSource)
        {
            var Item = await AdditemsAsync();
            AccessSource.Clear();
            var Items1 = Item;
            Items1.ForEach(p => AccessSource.Add(p));
        }

        // Read the database and return a List<Items>
        public static async Task<List<Items>> AdditemsAsync()
        {
            var Tokens = new List<String>();
            var ItemList = new List<Items>();
            Tokens = faTokenDataAccess.GetData(); // returns every file-access token (faToken) stored in the database: List<string>
            foreach (String Token in Tokens)
            {
                var Bitmaplist = await ItemAccess.GetitemsAsync(Token); // run normally: jumps out of the main page, nothing shows (single-stepping is fine)
                foreach (SoftwareBitmapSource Bitmap in Bitmaplist)
                    ItemList.Add(new Items { AccessSource = Bitmap, Date = "2019/4/16", Size = "30", Name = "NULL", Path = "C:/Users/I1661/AppData/Local" });
            }
            return ItemList;
        }
    }

    // Get every picture in a folder and return a List<SoftwareBitmapSource>
    public static class ItemAccess
    {
        public static async Task<List<SoftwareBitmapSource>> GetitemsAsync(string Token)
        {
            var ListofBitmap = new List<SoftwareBitmapSource>();
            var source = new SoftwareBitmapSource();
            var inputFloder = await StorageApplicationPermissions.FutureAccessList.GetFolderAsync(Token);
            var inputFiles = inputFloder.GetFilesAsync();
            var inputFiles_ = inputFiles.GetResults();
            foreach (StorageFile inputFile in inputFiles_)
            {
                SoftwareBitmap softwareBitmap;
                using (IRandomAccessStream stream = inputFile.OpenAsync(FileAccessMode.Read).GetResults())
                {
                    // Create the decoder from the stream
                    var decodert = BitmapDecoder.CreateAsync(stream);
                    BitmapDecoder decoder = decodert.GetResults();
                    // Get the SoftwareBitmap representation of the file
                    var softwareBitmapt = decoder.GetSoftwareBitmapAsync();
                    softwareBitmap = softwareBitmapt.GetResults();
                }
                if (softwareBitmap.BitmapPixelFormat != BitmapPixelFormat.Bgra8
                    || softwareBitmap.BitmapAlphaMode == BitmapAlphaMode.Straight)
                {
                    softwareBitmap = SoftwareBitmap.Convert(softwareBitmap, BitmapPixelFormat.Bgra8, BitmapAlphaMode.Premultiplied);
                }
                await source.SetBitmapAsync(softwareBitmap); // Set the source of the Image control
                ListofBitmap.Add(source);
            }
            return ListofBitmap;
        }
    }
}
```
Debug output:
```
The thread 0x3f38 has exited with code 0 (0x0).
The thread 0x5618 has exited with code 0 (0x0).
Exception thrown: 'System.InvalidOperationException' in Main_Page.exe. WinRT information: A method was called at an unexpected time.
Exception thrown: 'System.InvalidOperationException' in System.Private.CoreLib.dll. WinRT information: A method was called at an unexpected time.
Exception thrown: 'System.InvalidOperationException' in System.Private.CoreLib.dll. WinRT information: A method was called at an unexpected time.
```
When launched directly, Bitmaplist contains: Id = 6, Status = Faulted, Method = "{null}", Result = "{Not yet computed}". When single-stepping there is no problem.
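The "works when single-stepping, faults when run normally" symptom points at the GetResults() calls in ItemAccess.GetitemsAsync. Calling GetResults() on a WinRT async operation that has not completed yet throws exactly this InvalidOperationException ("A method was called at an unexpected time"); pausing in the debugger gives the operation time to finish, which is why stepping hides the bug. A sketch of the method with every operation awaited instead, and with one SoftwareBitmapSource per image rather than a single shared instance (this is illustrative UWP code, runnable only inside a UWP project):

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Windows.Graphics.Imaging;
using Windows.Storage;
using Windows.Storage.AccessCache;
using Windows.Storage.Streams;
using Windows.UI.Xaml.Media.Imaging;

public static class ItemAccess
{
    public static async Task<List<SoftwareBitmapSource>> GetitemsAsync(string token)
    {
        var bitmaps = new List<SoftwareBitmapSource>();
        var folder = await StorageApplicationPermissions.FutureAccessList.GetFolderAsync(token);
        var files = await folder.GetFilesAsync(); // was: GetFilesAsync().GetResults()
        foreach (StorageFile file in files)
        {
            SoftwareBitmap softwareBitmap;
            using (IRandomAccessStream stream = await file.OpenAsync(FileAccessMode.Read))
            {
                BitmapDecoder decoder = await BitmapDecoder.CreateAsync(stream);
                softwareBitmap = await decoder.GetSoftwareBitmapAsync();
            }
            if (softwareBitmap.BitmapPixelFormat != BitmapPixelFormat.Bgra8
                || softwareBitmap.BitmapAlphaMode == BitmapAlphaMode.Straight)
            {
                softwareBitmap = SoftwareBitmap.Convert(
                    softwareBitmap, BitmapPixelFormat.Bgra8, BitmapAlphaMode.Premultiplied);
            }
            var source = new SoftwareBitmapSource(); // a fresh source per image
            await source.SetBitmapAsync(softwareBitmap);
            bitmaps.Add(source);
        }
        return bitmaps;
    }
}
```

The Status = Faulted, Result = "{Not yet computed}" values in the debug output are consistent with this diagnosis: the task was inspected before it had produced a result.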
Spark cannot read the Hive metastore and cannot find the database
Posting the exception log directly:
```
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/data01/hadoop/yarn/local/filecache/355/spark2-hdp-yarn-archive.tar.gz/slf4j-log4j12-1.7.16.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/2.6.5.0-292/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
19/08/13 19:53:17 INFO SignalUtils: Registered signal handler for TERM
19/08/13 19:53:17 INFO SignalUtils: Registered signal handler for HUP
19/08/13 19:53:17 INFO SignalUtils: Registered signal handler for INT
19/08/13 19:53:17 INFO SecurityManager: Changing view acls to: yarn,hdfs
19/08/13 19:53:17 INFO SecurityManager: Changing modify acls to: yarn,hdfs
19/08/13 19:53:17 INFO SecurityManager: Changing view acls groups to:
19/08/13 19:53:17 INFO SecurityManager: Changing modify acls groups to:
19/08/13 19:53:17 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(yarn, hdfs); groups with view permissions: Set(); users with modify permissions: Set(yarn, hdfs); groups with modify permissions: Set()
19/08/13 19:53:18 INFO ApplicationMaster: Preparing Local resources
19/08/13 19:53:19 INFO ApplicationMaster: ApplicationAttemptId: appattempt_1565610088533_0087_000001
19/08/13 19:53:19 INFO ApplicationMaster: Starting the user application in a separate Thread
19/08/13 19:53:19 INFO ApplicationMaster: Waiting for spark context initialization...
19/08/13 19:53:19 INFO SparkContext: Running Spark version 2.3.0.2.6.5.0-292 19/08/13 19:53:19 INFO SparkContext: Submitted application: voice_stream 19/08/13 19:53:19 INFO SecurityManager: Changing view acls to: yarn,hdfs 19/08/13 19:53:19 INFO SecurityManager: Changing modify acls to: yarn,hdfs 19/08/13 19:53:19 INFO SecurityManager: Changing view acls groups to: 19/08/13 19:53:19 INFO SecurityManager: Changing modify acls groups to: 19/08/13 19:53:19 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(yarn, hdfs); groups with view permissions: Set(); users with modify permissions: Set(yarn, hdfs); groups with modify permissions: Set() 19/08/13 19:53:19 INFO Utils: Successfully started service 'sparkDriver' on port 20410. 19/08/13 19:53:19 INFO SparkEnv: Registering MapOutputTracker 19/08/13 19:53:19 INFO SparkEnv: Registering BlockManagerMaster 19/08/13 19:53:19 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information 19/08/13 19:53:19 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up 19/08/13 19:53:19 INFO DiskBlockManager: Created local directory at /data01/hadoop/yarn/local/usercache/hdfs/appcache/application_1565610088533_0087/blockmgr-94d35b97-43b2-496e-a4cb-73ecd3ed186c 19/08/13 19:53:19 INFO MemoryStore: MemoryStore started with capacity 366.3 MB 19/08/13 19:53:19 INFO SparkEnv: Registering OutputCommitCoordinator 19/08/13 19:53:19 INFO JettyUtils: Adding filter: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter 19/08/13 19:53:19 INFO Utils: Successfully started service 'SparkUI' on port 28852. 
19/08/13 19:53:19 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://datanode02:28852 19/08/13 19:53:19 INFO YarnClusterScheduler: Created YarnClusterScheduler 19/08/13 19:53:20 INFO SchedulerExtensionServices: Starting Yarn extension services with app application_1565610088533_0087 and attemptId Some(appattempt_1565610088533_0087_000001) 19/08/13 19:53:20 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 31984. 19/08/13 19:53:20 INFO NettyBlockTransferService: Server created on datanode02:31984 19/08/13 19:53:20 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy 19/08/13 19:53:20 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, datanode02, 31984, None) 19/08/13 19:53:20 INFO BlockManagerMasterEndpoint: Registering block manager datanode02:31984 with 366.3 MB RAM, BlockManagerId(driver, datanode02, 31984, None) 19/08/13 19:53:20 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, datanode02, 31984, None) 19/08/13 19:53:20 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, datanode02, 31984, None) 19/08/13 19:53:20 INFO EventLoggingListener: Logging events to hdfs:/spark2-history/application_1565610088533_0087_1 19/08/13 19:53:20 INFO ApplicationMaster: =============================================================================== YARN executor launch context: env: CLASSPATH -> 
{{PWD}}<CPS>{{PWD}}/__spark_conf__<CPS>{{PWD}}/__spark_libs__/*<CPS>/usr/hdp/2.6.5.0-292/hadoop/conf<CPS>/usr/hdp/2.6.5.0-292/hadoop/*<CPS>/usr/hdp/2.6.5.0-292/hadoop/lib/*<CPS>/usr/hdp/current/hadoop-hdfs-client/*<CPS>/usr/hdp/current/hadoop-hdfs-client/lib/*<CPS>/usr/hdp/current/hadoop-yarn-client/*<CPS>/usr/hdp/current/hadoop-yarn-client/lib/*<CPS>/usr/hdp/current/ext/hadoop/*<CPS>$PWD/mr-framework/hadoop/share/hadoop/mapreduce/*:$PWD/mr-framework/hadoop/share/hadoop/mapreduce/lib/*:$PWD/mr-framework/hadoop/share/hadoop/common/*:$PWD/mr-framework/hadoop/share/hadoop/common/lib/*:$PWD/mr-framework/hadoop/share/hadoop/yarn/*:$PWD/mr-framework/hadoop/share/hadoop/yarn/lib/*:$PWD/mr-framework/hadoop/share/hadoop/hdfs/*:$PWD/mr-framework/hadoop/share/hadoop/hdfs/lib/*:$PWD/mr-framework/hadoop/share/hadoop/tools/lib/*:/usr/hdp/2.6.5.0-292/hadoop/lib/hadoop-lzo-0.6.0.2.6.5.0-292.jar:/etc/hadoop/conf/secure:/usr/hdp/current/ext/hadoop/*<CPS>{{PWD}}/__spark_conf__/__hadoop_conf__ SPARK_YARN_STAGING_DIR -> *********(redacted) SPARK_USER -> *********(redacted) command: LD_LIBRARY_PATH="/usr/hdp/current/hadoop-client/lib/native:/usr/hdp/current/hadoop-client/lib/native/Linux-amd64-64:$LD_LIBRARY_PATH" \ {{JAVA_HOME}}/bin/java \ -server \ -Xmx5120m \ -Djava.io.tmpdir={{PWD}}/tmp \ '-Dspark.history.ui.port=18081' \ '-Dspark.rpc.message.maxSize=100' \ -Dspark.yarn.app.container.log.dir=<LOG_DIR> \ -XX:OnOutOfMemoryError='kill %p' \ org.apache.spark.executor.CoarseGrainedExecutorBackend \ --driver-url \ spark://CoarseGrainedScheduler@datanode02:20410 \ --executor-id \ <executorId> \ --hostname \ <hostname> \ --cores \ 2 \ --app-id \ application_1565610088533_0087 \ --user-class-path \ file:$PWD/__app__.jar \ --user-class-path \ file:$PWD/hadoop-common-2.7.3.jar \ --user-class-path \ file:$PWD/guava-12.0.1.jar \ --user-class-path \ file:$PWD/hbase-server-1.2.8.jar \ --user-class-path \ file:$PWD/hbase-protocol-1.2.8.jar \ --user-class-path \ file:$PWD/hbase-client-1.2.8.jar \ 
--user-class-path \ file:$PWD/hbase-common-1.2.8.jar \ --user-class-path \ file:$PWD/mysql-connector-java-5.1.44-bin.jar \ --user-class-path \ file:$PWD/spark-streaming-kafka-0-8-assembly_2.11-2.3.2.jar \ --user-class-path \ file:$PWD/spark-examples_2.11-1.6.0-typesafe-001.jar \ --user-class-path \ file:$PWD/fastjson-1.2.7.jar \ 1><LOG_DIR>/stdout \ 2><LOG_DIR>/stderr resources: spark-streaming-kafka-0-8-assembly_2.11-2.3.2.jar -> resource { scheme: "hdfs" host: "CID-042fb939-95b4-4b74-91b8-9f94b999bdf7" port: -1 file: "/user/hdfs/.sparkStaging/application_1565610088533_0087/spark-streaming-kafka-0-8-assembly_2.11-2.3.2.jar" } size: 12271027 timestamp: 1565697198603 type: FILE visibility: PRIVATE spark-examples_2.11-1.6.0-typesafe-001.jar -> resource { scheme: "hdfs" host: "CID-042fb939-95b4-4b74-91b8-9f94b999bdf7" port: -1 file: "/user/hdfs/.sparkStaging/application_1565610088533_0087/spark-examples_2.11-1.6.0-typesafe-001.jar" } size: 1867746 timestamp: 1565697198751 type: FILE visibility: PRIVATE hbase-server-1.2.8.jar -> resource { scheme: "hdfs" host: "CID-042fb939-95b4-4b74-91b8-9f94b999bdf7" port: -1 file: "/user/hdfs/.sparkStaging/application_1565610088533_0087/hbase-server-1.2.8.jar" } size: 4197896 timestamp: 1565697197770 type: FILE visibility: PRIVATE hbase-common-1.2.8.jar -> resource { scheme: "hdfs" host: "CID-042fb939-95b4-4b74-91b8-9f94b999bdf7" port: -1 file: "/user/hdfs/.sparkStaging/application_1565610088533_0087/hbase-common-1.2.8.jar" } size: 570163 timestamp: 1565697198318 type: FILE visibility: PRIVATE __app__.jar -> resource { scheme: "hdfs" host: "CID-042fb939-95b4-4b74-91b8-9f94b999bdf7" port: -1 file: "/user/hdfs/.sparkStaging/application_1565610088533_0087/spark_history_data2.jar" } size: 44924 timestamp: 1565697197260 type: FILE visibility: PRIVATE guava-12.0.1.jar -> resource { scheme: "hdfs" host: "CID-042fb939-95b4-4b74-91b8-9f94b999bdf7" port: -1 file: "/user/hdfs/.sparkStaging/application_1565610088533_0087/guava-12.0.1.jar" } 
size: 1795932 timestamp: 1565697197614 type: FILE visibility: PRIVATE hbase-client-1.2.8.jar -> resource { scheme: "hdfs" host: "CID-042fb939-95b4-4b74-91b8-9f94b999bdf7" port: -1 file: "/user/hdfs/.sparkStaging/application_1565610088533_0087/hbase-client-1.2.8.jar" } size: 1306401 timestamp: 1565697198180 type: FILE visibility: PRIVATE __spark_conf__ -> resource { scheme: "hdfs" host: "CID-042fb939-95b4-4b74-91b8-9f94b999bdf7" port: -1 file: "/user/hdfs/.sparkStaging/application_1565610088533_0087/__spark_conf__.zip" } size: 273513 timestamp: 1565697199131 type: ARCHIVE visibility: PRIVATE fastjson-1.2.7.jar -> resource { scheme: "hdfs" host: "CID-042fb939-95b4-4b74-91b8-9f94b999bdf7" port: -1 file: "/user/hdfs/.sparkStaging/application_1565610088533_0087/fastjson-1.2.7.jar" } size: 417221 timestamp: 1565697198865 type: FILE visibility: PRIVATE hbase-protocol-1.2.8.jar -> resource { scheme: "hdfs" host: "CID-042fb939-95b4-4b74-91b8-9f94b999bdf7" port: -1 file: "/user/hdfs/.sparkStaging/application_1565610088533_0087/hbase-protocol-1.2.8.jar" } size: 4366252 timestamp: 1565697198023 type: FILE visibility: PRIVATE __spark_libs__ -> resource { scheme: "hdfs" host: "CID-042fb939-95b4-4b74-91b8-9f94b999bdf7" port: -1 file: "/hdp/apps/2.6.5.0-292/spark2/spark2-hdp-yarn-archive.tar.gz" } size: 227600110 timestamp: 1549953820247 type: ARCHIVE visibility: PUBLIC mysql-connector-java-5.1.44-bin.jar -> resource { scheme: "hdfs" host: "CID-042fb939-95b4-4b74-91b8-9f94b999bdf7" port: -1 file: "/user/hdfs/.sparkStaging/application_1565610088533_0087/mysql-connector-java-5.1.44-bin.jar" } size: 999635 timestamp: 1565697198445 type: FILE visibility: PRIVATE hadoop-common-2.7.3.jar -> resource { scheme: "hdfs" host: "CID-042fb939-95b4-4b74-91b8-9f94b999bdf7" port: -1 file: "/user/hdfs/.sparkStaging/application_1565610088533_0087/hadoop-common-2.7.3.jar" } size: 3479293 timestamp: 1565697197476 type: FILE visibility: PRIVATE 
=============================================================================== 19/08/13 19:53:20 INFO RMProxy: Connecting to ResourceManager at namenode02/10.1.38.38:8030 19/08/13 19:53:20 INFO YarnRMClient: Registering the ApplicationMaster 19/08/13 19:53:20 INFO YarnAllocator: Will request 3 executor container(s), each with 2 core(s) and 5632 MB memory (including 512 MB of overhead) 19/08/13 19:53:20 INFO YarnSchedulerBackend$YarnSchedulerEndpoint: ApplicationMaster registered as NettyRpcEndpointRef(spark://YarnAM@datanode02:20410) 19/08/13 19:53:20 INFO YarnAllocator: Submitted 3 unlocalized container requests. 19/08/13 19:53:20 INFO ApplicationMaster: Started progress reporter thread with (heartbeat : 3000, initial allocation : 200) intervals 19/08/13 19:53:20 INFO AMRMClientImpl: Received new token for : datanode03:45454 19/08/13 19:53:21 INFO YarnAllocator: Launching container container_e20_1565610088533_0087_01_000002 on host datanode03 for executor with ID 1 19/08/13 19:53:21 INFO YarnAllocator: Received 1 containers from YARN, launching executors on 1 of them. 19/08/13 19:53:21 INFO ContainerManagementProtocolProxy: yarn.client.max-cached-nodemanagers-proxies : 0 19/08/13 19:53:21 INFO ContainerManagementProtocolProxy: Opening proxy : datanode03:45454 19/08/13 19:53:21 INFO AMRMClientImpl: Received new token for : datanode01:45454 19/08/13 19:53:21 INFO YarnAllocator: Launching container container_e20_1565610088533_0087_01_000003 on host datanode01 for executor with ID 2 19/08/13 19:53:21 INFO YarnAllocator: Received 1 containers from YARN, launching executors on 1 of them. 
19/08/13 19:53:21 INFO ContainerManagementProtocolProxy: yarn.client.max-cached-nodemanagers-proxies : 0 19/08/13 19:53:21 INFO ContainerManagementProtocolProxy: Opening proxy : datanode01:45454 19/08/13 19:53:22 INFO AMRMClientImpl: Received new token for : datanode02:45454 19/08/13 19:53:22 INFO YarnAllocator: Launching container container_e20_1565610088533_0087_01_000004 on host datanode02 for executor with ID 3 19/08/13 19:53:22 INFO YarnAllocator: Received 1 containers from YARN, launching executors on 1 of them. 19/08/13 19:53:22 INFO ContainerManagementProtocolProxy: yarn.client.max-cached-nodemanagers-proxies : 0 19/08/13 19:53:22 INFO ContainerManagementProtocolProxy: Opening proxy : datanode02:45454 19/08/13 19:53:24 INFO YarnSchedulerBackend$YarnDriverEndpoint: Registered executor NettyRpcEndpointRef(spark-client://Executor) (10.1.198.144:41122) with ID 1 19/08/13 19:53:25 INFO YarnSchedulerBackend$YarnDriverEndpoint: Registered executor NettyRpcEndpointRef(spark-client://Executor) (10.1.229.163:24656) with ID 3 19/08/13 19:53:25 INFO BlockManagerMasterEndpoint: Registering block manager datanode03:3328 with 2.5 GB RAM, BlockManagerId(1, datanode03, 3328, None) 19/08/13 19:53:25 INFO BlockManagerMasterEndpoint: Registering block manager datanode02:28863 with 2.5 GB RAM, BlockManagerId(3, datanode02, 28863, None) 19/08/13 19:53:25 INFO YarnSchedulerBackend$YarnDriverEndpoint: Registered executor NettyRpcEndpointRef(spark-client://Executor) (10.1.229.158:64276) with ID 2 19/08/13 19:53:25 INFO YarnClusterSchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.8 19/08/13 19:53:25 INFO YarnClusterScheduler: YarnClusterScheduler.postStartHook done 19/08/13 19:53:25 INFO BlockManagerMasterEndpoint: Registering block manager datanode01:20487 with 2.5 GB RAM, BlockManagerId(2, datanode01, 20487, None) 19/08/13 19:53:25 WARN SparkContext: Using an existing SparkContext; some configuration may not take 
effect. 19/08/13 19:53:25 INFO SparkContext: Starting job: start at VoiceApplication2.java:128 19/08/13 19:53:25 INFO DAGScheduler: Registering RDD 1 (start at VoiceApplication2.java:128) 19/08/13 19:53:25 INFO DAGScheduler: Got job 0 (start at VoiceApplication2.java:128) with 20 output partitions 19/08/13 19:53:25 INFO DAGScheduler: Final stage: ResultStage 1 (start at VoiceApplication2.java:128) 19/08/13 19:53:25 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 0) 19/08/13 19:53:25 INFO DAGScheduler: Missing parents: List(ShuffleMapStage 0) 19/08/13 19:53:26 INFO DAGScheduler: Submitting ShuffleMapStage 0 (MapPartitionsRDD[1] at start at VoiceApplication2.java:128), which has no missing parents 19/08/13 19:53:26 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 3.1 KB, free 366.3 MB) 19/08/13 19:53:26 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 2011.0 B, free 366.3 MB) 19/08/13 19:53:26 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on datanode02:31984 (size: 2011.0 B, free: 366.3 MB) 19/08/13 19:53:26 INFO SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:1039 19/08/13 19:53:26 INFO DAGScheduler: Submitting 50 missing tasks from ShuffleMapStage 0 (MapPartitionsRDD[1] at start at VoiceApplication2.java:128) (first 15 tasks are for partitions Vector(0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14)) 19/08/13 19:53:26 INFO YarnClusterScheduler: Adding task set 0.0 with 50 tasks 19/08/13 19:53:26 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, datanode02, executor 3, partition 0, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:26 INFO TaskSetManager: Starting task 1.0 in stage 0.0 (TID 1, datanode03, executor 1, partition 1, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:26 INFO TaskSetManager: Starting task 2.0 in stage 0.0 (TID 2, datanode01, executor 2, partition 2, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:26 INFO TaskSetManager: Starting task 
3.0 in stage 0.0 (TID 3, datanode02, executor 3, partition 3, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:26 INFO TaskSetManager: Starting task 4.0 in stage 0.0 (TID 4, datanode03, executor 1, partition 4, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:26 INFO TaskSetManager: Starting task 5.0 in stage 0.0 (TID 5, datanode01, executor 2, partition 5, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:26 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on datanode02:28863 (size: 2011.0 B, free: 2.5 GB) 19/08/13 19:53:26 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on datanode03:3328 (size: 2011.0 B, free: 2.5 GB) 19/08/13 19:53:26 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on datanode01:20487 (size: 2011.0 B, free: 2.5 GB) 19/08/13 19:53:26 INFO TaskSetManager: Starting task 6.0 in stage 0.0 (TID 6, datanode02, executor 3, partition 6, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:26 INFO TaskSetManager: Starting task 7.0 in stage 0.0 (TID 7, datanode02, executor 3, partition 7, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:26 INFO TaskSetManager: Finished task 3.0 in stage 0.0 (TID 3) in 693 ms on datanode02 (executor 3) (1/50) 19/08/13 19:53:26 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 712 ms on datanode02 (executor 3) (2/50) 19/08/13 19:53:26 INFO TaskSetManager: Starting task 8.0 in stage 0.0 (TID 8, datanode02, executor 3, partition 8, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:26 INFO TaskSetManager: Finished task 7.0 in stage 0.0 (TID 7) in 21 ms on datanode02 (executor 3) (3/50) 19/08/13 19:53:26 INFO TaskSetManager: Starting task 9.0 in stage 0.0 (TID 9, datanode02, executor 3, partition 9, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:26 INFO TaskSetManager: Finished task 6.0 in stage 0.0 (TID 6) in 26 ms on datanode02 (executor 3) (4/50) 19/08/13 19:53:26 INFO TaskSetManager: Starting task 10.0 in stage 0.0 (TID 10, datanode02, executor 3, partition 10, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:26 INFO TaskSetManager: Finished 
task 8.0 in stage 0.0 (TID 8) in 23 ms on datanode02 (executor 3) (5/50) 19/08/13 19:53:26 INFO TaskSetManager: Starting task 11.0 in stage 0.0 (TID 11, datanode02, executor 3, partition 11, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:26 INFO TaskSetManager: Finished task 9.0 in stage 0.0 (TID 9) in 25 ms on datanode02 (executor 3) (6/50) 19/08/13 19:53:26 INFO TaskSetManager: Starting task 12.0 in stage 0.0 (TID 12, datanode02, executor 3, partition 12, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:26 INFO TaskSetManager: Finished task 10.0 in stage 0.0 (TID 10) in 18 ms on datanode02 (executor 3) (7/50) 19/08/13 19:53:26 INFO TaskSetManager: Finished task 11.0 in stage 0.0 (TID 11) in 14 ms on datanode02 (executor 3) (8/50) 19/08/13 19:53:26 INFO TaskSetManager: Starting task 13.0 in stage 0.0 (TID 13, datanode02, executor 3, partition 13, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:26 INFO TaskSetManager: Starting task 14.0 in stage 0.0 (TID 14, datanode02, executor 3, partition 14, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:26 INFO TaskSetManager: Finished task 12.0 in stage 0.0 (TID 12) in 16 ms on datanode02 (executor 3) (9/50) 19/08/13 19:53:26 INFO TaskSetManager: Starting task 15.0 in stage 0.0 (TID 15, datanode02, executor 3, partition 15, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:26 INFO TaskSetManager: Finished task 13.0 in stage 0.0 (TID 13) in 22 ms on datanode02 (executor 3) (10/50) 19/08/13 19:53:26 INFO TaskSetManager: Starting task 16.0 in stage 0.0 (TID 16, datanode02, executor 3, partition 16, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:26 INFO TaskSetManager: Finished task 14.0 in stage 0.0 (TID 14) in 16 ms on datanode02 (executor 3) (11/50) 19/08/13 19:53:26 INFO TaskSetManager: Starting task 17.0 in stage 0.0 (TID 17, datanode02, executor 3, partition 17, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:26 INFO TaskSetManager: Finished task 15.0 in stage 0.0 (TID 15) in 13 ms on datanode02 (executor 3) (12/50) 19/08/13 19:53:26 INFO TaskSetManager: Starting 
task 18.0 in stage 0.0 (TID 18, datanode01, executor 2, partition 18, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:26 INFO TaskSetManager: Starting task 19.0 in stage 0.0 (TID 19, datanode01, executor 2, partition 19, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:26 INFO TaskSetManager: Finished task 5.0 in stage 0.0 (TID 5) in 787 ms on datanode01 (executor 2) (13/50) 19/08/13 19:53:26 INFO TaskSetManager: Finished task 2.0 in stage 0.0 (TID 2) in 789 ms on datanode01 (executor 2) (14/50) 19/08/13 19:53:26 INFO TaskSetManager: Starting task 20.0 in stage 0.0 (TID 20, datanode03, executor 1, partition 20, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:26 INFO TaskSetManager: Starting task 21.0 in stage 0.0 (TID 21, datanode03, executor 1, partition 21, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:27 INFO TaskSetManager: Finished task 4.0 in stage 0.0 (TID 4) in 905 ms on datanode03 (executor 1) (15/50) 19/08/13 19:53:27 INFO TaskSetManager: Finished task 1.0 in stage 0.0 (TID 1) in 907 ms on datanode03 (executor 1) (16/50) 19/08/13 19:53:27 INFO TaskSetManager: Starting task 22.0 in stage 0.0 (TID 22, datanode02, executor 3, partition 22, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:27 INFO TaskSetManager: Starting task 23.0 in stage 0.0 (TID 23, datanode02, executor 3, partition 23, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:27 INFO TaskSetManager: Starting task 24.0 in stage 0.0 (TID 24, datanode01, executor 2, partition 24, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:27 INFO TaskSetManager: Finished task 18.0 in stage 0.0 (TID 18) in 124 ms on datanode01 (executor 2) (17/50) 19/08/13 19:53:27 INFO TaskSetManager: Finished task 16.0 in stage 0.0 (TID 16) in 134 ms on datanode02 (executor 3) (18/50) 19/08/13 19:53:27 INFO TaskSetManager: Starting task 25.0 in stage 0.0 (TID 25, datanode01, executor 2, partition 25, PROCESS_LOCAL, 7831 bytes) 19/08/13 19:53:27 INFO TaskSetManager: Starting task 26.0 in stage 0.0 (TID 26, datanode03, executor 1, partition 26, PROCESS_LOCAL, 7831 bytes) 
19/08/13 19:53:27 INFO TaskSetManager: Finished task 49.0 in stage 0.0 (TID 49) in 25 ms on datanode01 (executor 2) (50/50)
19/08/13 19:53:27 INFO YarnClusterScheduler: Removed TaskSet 0.0, whose tasks have all completed, from pool
19/08/13 19:53:27 INFO DAGScheduler: ShuffleMapStage 0 (start at VoiceApplication2.java:128) finished in 1.174 s
19/08/13 19:53:27 INFO DAGScheduler: Submitting ResultStage 1 (ShuffledRDD[2] at start at VoiceApplication2.java:128), which has no missing parents
19/08/13 19:53:27 INFO YarnClusterScheduler: Adding task set 1.0 with 20 tasks
19/08/13 19:53:27 INFO TaskSetManager: Finished task 18.0 in stage 1.0 (TID 68) in 20 ms on datanode02 (executor 3) (20/20)
19/08/13 19:53:27 INFO YarnClusterScheduler: Removed TaskSet 1.0, whose tasks have all completed, from pool
19/08/13 19:53:27 INFO DAGScheduler: ResultStage 1 (start at VoiceApplication2.java:128) finished in 0.406 s
19/08/13 19:53:27 INFO DAGScheduler: Job 0 finished: start at VoiceApplication2.java:128, took 1.850883 s
19/08/13 19:53:27 INFO ReceiverTracker: Starting 1 receivers
19/08/13 19:53:27 INFO KafkaInputDStream: Initialized and validated org.apache.spark.streaming.kafka.KafkaInputDStream@5fd3dc81
19/08/13 19:53:27 INFO ReceiverTracker: Receiver 0 started
19/08/13 19:53:27 INFO TaskSetManager: Starting task 0.0 in stage 2.0 (TID 70, datanode01, executor 2, partition 0, PROCESS_LOCAL, 8757 bytes)
19/08/13 19:53:27 INFO JobGenerator: Started JobGenerator at 1565697240000 ms
19/08/13 19:53:27 INFO StreamingContext: StreamingContext started
19/08/13 19:53:27 INFO ReceiverTracker: Registered receiver for stream 0 from 10.1.229.158:64276
19/08/13 19:54:00 INFO JobScheduler: Added jobs for time 1565697240000 ms
19/08/13 19:54:00 INFO JobScheduler: Starting job streaming job 1565697240000 ms.2 from job set of time 1565697240000 ms
19/08/13 19:54:00 INFO SharedState: loading hive config file: file:/data01/hadoop/yarn/local/usercache/hdfs/filecache/85431/__spark_conf__.zip/__hadoop_conf__/hive-site.xml
19/08/13 19:54:00 INFO SharedState: Setting hive.metastore.warehouse.dir ('null') to the value of spark.sql.warehouse.dir ('hdfs://CID-042fb939-95b4-4b74-91b8-9f94b999bdf7/apps/hive/warehouse').
19/08/13 19:54:00 INFO SharedState: Warehouse path is 'hdfs://CID-042fb939-95b4-4b74-91b8-9f94b999bdf7/apps/hive/warehouse'.
19/08/13 19:54:00 INFO StateStoreCoordinatorRef: Registered StateStoreCoordinator endpoint
19/08/13 19:54:02 INFO CodeGenerator: Code generated in 175.416957 ms
19/08/13 19:54:02 INFO JobScheduler: Finished job streaming job 1565697240000 ms.2 from job set of time 1565697240000 ms
19/08/13 19:54:02 ERROR JobScheduler: Error running job streaming job 1565697240000 ms.2
org.apache.spark.sql.catalyst.analysis.NoSuchDatabaseException: Database 'meta_voice' not found;
	at org.apache.spark.sql.catalyst.catalog.ExternalCatalog.requireDbExists(ExternalCatalog.scala:40)
	at org.apache.spark.sql.catalyst.catalog.InMemoryCatalog.tableExists(InMemoryCatalog.scala:331)
	at org.apache.spark.sql.catalyst.catalog.SessionCatalog.tableExists(SessionCatalog.scala:388)
	at org.apache.spark.sql.DataFrameWriter.saveAsTable(DataFrameWriter.scala:398)
	at org.apache.spark.sql.DataFrameWriter.saveAsTable(DataFrameWriter.scala:393)
	at com.stream.VoiceApplication2$2.call(VoiceApplication2.java:122)
	at com.stream.VoiceApplication2$2.call(VoiceApplication2.java:115)
	at org.apache.spark.streaming.api.java.JavaDStreamLike$$anonfun$foreachRDD$2.apply(JavaDStreamLike.scala:280)
	at org.apache.spark.streaming.api.java.JavaDStreamLike$$anonfun$foreachRDD$2.apply(JavaDStreamLike.scala:280)
	at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ForEachDStream.scala:51)
	at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:51)
	at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:51)
	at org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:416)
	at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply$mcV$sp(ForEachDStream.scala:50)
	at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:50)
	at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:50)
	at scala.util.Try$.apply(Try.scala:192)
	at org.apache.spark.streaming.scheduler.Job.run(Job.scala:39)
	at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply$mcV$sp(JobScheduler.scala:257)
	at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:257)
	at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:257)
	at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
	at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler.run(JobScheduler.scala:256)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
19/08/13 19:54:02 ERROR ApplicationMaster: User class threw exception: org.apache.spark.sql.catalyst.analysis.NoSuchDatabaseException: Database 'meta_voice' not found;
19/08/13 19:54:02 INFO ApplicationMaster: Final app status: FAILED, exitCode: 15, (reason: User class threw exception: org.apache.spark.sql.catalyst.analysis.NoSuchDatabaseException: Database 'meta_voice' not found;)
19/08/13 19:54:02 INFO StreamingContext: Invoking stop(stopGracefully=true) from shutdown hook
19/08/13 19:54:02 INFO ReceiverTracker: Sent stop signal to all 1 receivers
19/08/13 19:54:02 ERROR ReceiverTracker: Deregistered receiver for stream 0: Stopped by driver
19/08/13 19:54:02 INFO TaskSetManager: Finished task 0.0 in stage 2.0 (TID 70) in 35055 ms on datanode01 (executor 2) (1/1)
19/08/13 19:54:02 INFO YarnClusterScheduler: Removed TaskSet 2.0, whose tasks have all completed, from pool
19/08/13 19:54:02 INFO DAGScheduler: ResultStage 2 (start at VoiceApplication2.java:128) finished in 35.086 s
19/08/13 19:54:02 INFO ReceiverTracker: All of the receivers have deregistered successfully
19/08/13 19:54:02 INFO ReceiverTracker: ReceiverTracker stopped
19/08/13 19:54:02 INFO JobGenerator: Stopping JobGenerator gracefully
19/08/13 19:54:12 WARN ShutdownHookManager: ShutdownHook '$anon$2' timeout, java.util.concurrent.TimeoutException
java.util.concurrent.TimeoutException
	at java.util.concurrent.FutureTask.get(FutureTask.java:205)
	at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:67)
19/08/13 19:54:12 ERROR Utils: Uncaught exception in thread pool-1-thread-1
java.lang.InterruptedException
	at java.lang.Object.wait(Native Method)
	at java.lang.Thread.join(Thread.java:1252)
	at java.lang.Thread.join(Thread.java:1326)
	at org.apache.spark.streaming.util.RecurringTimer.stop(RecurringTimer.scala:86)
	at org.apache.spark.streaming.scheduler.JobGenerator.stop(JobGenerator.scala:137)
	at org.apache.spark.streaming.scheduler.JobScheduler.stop(JobScheduler.scala:123)
	at org.apache.spark.streaming.StreamingContext$$anonfun$stop$1.apply$mcV$sp(StreamingContext.scala:681)
	at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1357)
	at org.apache.spark.streaming.StreamingContext.stop(StreamingContext.scala:680)
	at org.apache.spark.streaming.StreamingContext.org$apache$spark$streaming$StreamingContext$$stopOnShutdown(StreamingContext.scala:714)
	at org.apache.spark.streaming.StreamingContext$$anonfun$start$1.apply$mcV$sp(StreamingContext.scala:599)
	at org.apache.spark.util.SparkShutdownHook.run(ShutdownHookManager.scala:216)
	at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ShutdownHookManager.scala:188)
	at scala.util.Try$.apply(Try.scala:192)
	at org.apache.spark.util.SparkShutdownHookManager.runAll(ShutdownHookManager.scala:188)
	at org.apache.spark.util.SparkShutdownHookManager$$anon$2.run(ShutdownHookManager.scala:178)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
```
Python: convert Excel files to CSV and merge them into a single file
Could someone help me debug this program? It doesn't run through, and I'd be very grateful if you could post the corrected source. The code is as follows:

```
# load the required libraries
import os
import pandas as pd
import glob

# convert Excel to CSV
def xlsx_to_csv_pd():
    c = os.getcwd()
    excel_list1 = glob.glob('*.xls')
    excel_list2 = glob.glob('*.xlsx')
    for a in excel_list1:
        data_xls = pd.read_excel(a, index_col=0)
        outfile = c + "/" + a
        data_xls.to_csv(outfile, encoding='utf-8')
    for b in excel_list2:
        data_xls = pd.read_excel(b, index_col=0)
        outfile = c + "/" + b
        data_xls.to_csv(outfile, encoding='utf-8')

# define the merge function hebing
def hebing():
    csv_list = glob.glob('*.csv')  # count the csv files in this folder
    print(u'共发现%s个CSV文件' % len(csv_list))
    print(u'正在处理............')
    for i in csv_list:  # read each csv file in the folder in turn
        fr = open(i, 'r').read()
        with open('result.csv', 'rb') as f:  # save the result as result.csv
            f.write(fr)
    print(u'合并完毕!')

# define quchong(file): drop duplicate rows, mainly to remove repeated headers
def quchong(file):
    df = pd.read_csv(file, header=0)
    datalist = df.drop_duplicates()
    datalist.to_csv(file)

# run
if __name__ == '__main__':
    # xlsx_to_csv_pd()
    print("转化完成!!!")
    hebing()
    quchong("result.csv")
    print("已完成数据文件合并清单所处位置:" + str(file))
```
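A few things stand out in the script above: `to_csv` is given the original `.xls`/`.xlsx` name, so no `.csv` file is ever produced; `result.csv` is opened in `'rb'` (read-binary) mode but then written to; and the final `print` uses `file`, which is undefined at module level. The merge-and-dedup part can be sketched with only the standard library (function and file names here are my own, not from the original):

```python
import csv
import glob
import os

def merge_csvs(folder, out_name="result.csv"):
    """Merge every .csv in `folder` into one file, keeping a single header row."""
    paths = sorted(p for p in glob.glob(os.path.join(folder, "*.csv"))
                   if os.path.basename(p) != out_name)   # don't re-read the output
    out_path = os.path.join(folder, out_name)
    header_written = False
    with open(out_path, "w", newline="", encoding="utf-8") as out:
        writer = csv.writer(out)
        for p in paths:
            with open(p, newline="", encoding="utf-8") as f:
                rows = list(csv.reader(f))
            if not rows:
                continue
            if not header_written:
                writer.writerow(rows[0])      # keep the first file's header
                header_written = True
            writer.writerows(rows[1:])        # skip each later file's header
    return out_path
```

The Excel-to-CSV step would still use `pandas.read_excel`, but writing to `os.path.splitext(name)[0] + '.csv'` so the output actually gets a `.csv` extension.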
"Specified argument was out of the range of valid values. Parameter name: index" — what is wrong?
I'm an ASP.NET beginner. The commented-out code next to the failing lines is my earlier attempt. I can't work out how to fix this and it's driving me crazy! Could the experts please point out the mistake? It's fairly urgent; I can raise the bounty. Please paste the corrected code in your answer.

```
<%@ Page Language="C#" AutoEventWireup="true" CodeBehind="adminList.aspx.cs" Inherits="Last.Last" %>
<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml">
<head runat="server">
    <meta http-equiv="Content-Type" content="text/html; charset=utf-8"/>
    <title>adminlist</title>
    <link rel="stylesheet" type="text/css" href="css/index.css"/>
    <link rel="stylesheet" type="text/css" href="css/adminList.css" />
</head>
<body>
    <form id="form1" runat="server">
        <!-- administrator list -->
        <div id="admin_list">
            <div class="header radius">
                <h3>管理员列表</h3>
            </div>
            <div class="admin_list_content">
                <asp:GridView ID="GridView1" runat="server" AutoGenerateColumns="False" CellPadding="4"
                    ForeColor="#333333" GridLines="None"
                    OnRowDeleting="GridView1_RowDeleting" OnRowEditing="GridView1_RowEditing"
                    OnRowUpdating="GridView1_RowUpdating" OnRowCancelingEdit="GridView1_RowCancelingEdit"
                    OnRowDataBound="GridView1_RowDataBound">
                    <FooterStyle BackColor="#990000" Font-Bold="True" ForeColor="White" />
                    <Columns>
                        <asp:TemplateField>
                            <ItemTemplate>
                                <asp:CheckBox ID="CheckBox1" runat="server" />
                            </ItemTemplate>
                        </asp:TemplateField>
                        <asp:BoundField DataField="id" HeaderText="" ReadOnly="True" /> <%-- read-only, not editable; this is the lookup key --%>
                        <asp:BoundField DataField="List" HeaderText="管理员列表" />
                        <asp:BoundField DataField="Name" HeaderText="管理员姓名" />
                        <asp:BoundField DataField="Type" HeaderText="管理员类型" />
                        <asp:BoundField DataField="Range" HeaderText="管理员区域" />
                        <asp:CommandField HeaderText="修改" ButtonType="image" EditImageUrl="./images/change.png" ShowEditButton="True" />
                        <asp:CommandField HeaderText="删除" ButtonType="image" EditImageUrl="./images/remove.png" ShowEditButton="True" />
                    </Columns>
                    <RowStyle ForeColor="Black" /> <%-- font color --%>
                    <PagerStyle BackColor="white" ForeColor="white" HorizontalAlign="Left" />
                    <HeaderStyle BackColor="White" Font-Bold="True" ForeColor="Black" /> <%-- background / header colors --%>
                </asp:GridView>
                <br />
                &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<asp:TextBox ID="Num" TextMode="SingleLine" runat="server" Height="30px" Width="30px"></asp:TextBox>
                <asp:TextBox ID="List" TextMode="SingleLine" runat="server" Height="30px" Width="230px"></asp:TextBox>
                <asp:TextBox ID="Name" TextMode="SingleLine" runat="server" Height="30px" Width="320px"></asp:TextBox>
                <asp:TextBox ID="Type" TextMode="SingleLine" runat="server" Height="30px" Width="225px"></asp:TextBox>
                <asp:TextBox ID="Range" TextMode="SingleLine" runat="server" Height="30px" Width="320px"></asp:TextBox>
                <asp:Button ID="AddAdmin" runat="server" OnClick="AddAdmin_Click" Text="添加管理员" Width="100px" Height="30px" />
            </div>
        </div>
    </form>
</body>
</html>

using System;
using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;
using System.Linq;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;

namespace Last
{
    public partial class Last : System.Web.UI.Page
    {
        SqlConnection sqlcon;
        SqlCommand sqlcom;
        string strCon = "server=.;database=ado;uid=sa;pwd=123456";

        protected void Page_Load(object sender, EventArgs e)
        {
            // IsPostBack tells us whether the page is rendering for the first time
            if (!IsPostBack) // after a refresh, show the modified content
            {
                bind();
            }
        }

        protected void GridView1_RowEditing(object sender, GridViewEditEventArgs e)
        {
            GridView1.EditIndex = e.NewEditIndex; // hand the textbox input to the GridView code, which drives the database
            bind();
        }

        // delete
        protected void GridView1_RowDeleting(object sender, GridViewDeleteEventArgs e)
        {
            string sqlstr = "delete from ado.dbo.ziliao where 管理员列表='" + GridView1.DataKeys[e.RowIndex].Value.ToString() + "'"; // get the id to delete
            sqlcon = new SqlConnection(strCon);
            sqlcom = new SqlCommand(sqlstr, sqlcon); // the delete command
            sqlcon.Open();
            sqlcom.ExecuteNonQuery(); // execute
            sqlcon.Close();
            bind();
        }

        // update
        protected void GridView1_RowUpdating(object sender, GridViewUpdateEventArgs e)
        {
            string sqlstr = "update ado.dbo.adminList set List='" + ((TextBox)(GridView1.Rows[e.RowIndex].Cells[0].Controls[0])).Text.ToString().Trim() + "',Name='" +
                ((TextBox)(GridView1.Rows[e.RowIndex].Cells[1].Controls[0])).Text.ToString().Trim() + "',Type='" +
                ((TextBox)(GridView1.Rows[e.RowIndex].Cells[2].Controls[0])).Text.ToString().Trim() + "',Range='" +
                GridView1.Rows[e.RowIndex].Cells[3].Controls[0].ToString().Trim() + "' where id='" + GridView1.DataKeys[e.RowIndex].Value.ToString() + "'";
            SqlHelper.ExecuteNonQuery(sqlstr);
            sqlcon.Close();
            GridView1.EditIndex = -1; // leave edit mode (rows are numbered from 0) and return to browse mode
            bind();
        }

        // cancel
        protected void GridView1_RowCancelingEdit(object sender, GridViewCancelEditEventArgs e)
        {
            GridView1.EditIndex = -1; // leave edit mode (rows are numbered from 0) and return to browse mode
            bind();
        }

        // bind
        public void bind()
        {
            string sqlstr = "select * from ado.dbo.adminList";
            sqlcon = new SqlConnection(strCon);
            SqlDataAdapter myda = new SqlDataAdapter(sqlstr, sqlcon);
            DataSet myds = new DataSet();
            sqlcon.Open();
            myda.Fill(myds, "ado.dbo.adminList");
            GridView1.DataSource = myds; // bind the GridView to the data set
            GridView1.DataBind();
            sqlcon.Close();
        }

        protected void GridView1_RowDataBound(object sender, GridViewRowEventArgs e)
        {
            // first check that this is a data row
            if (e.Row.RowType == DataControlRowType.DataRow)
            {
                // change the background color on mouseover
                e.Row.Attributes.Add("onmouseover", "c=this.style.backgroundColor;this.style.backgroundColor='#d7d7f1'");
                // restore the background color on mouseout
                e.Row.Attributes.Add("onmouseout", "this.style.backgroundColor=c");
            }
            //if (e.Row.RowType == DataControlRowType.DataRow)
            //{
            //    e.Row.Cells[6].Text = " <img src='Images/change.png' style='width: 20;height: 20;'/>" + e.Row.Cells[1].Text;
            //    e.Row.Cells[7].Text = " <img src='Images/remove.png' style='width: 20;height: 20;'/>" + e.Row.Cells[1].Text;
            //}
        }

        SqlConnection con = new SqlConnection("server=.;database=ado;uid=sa;pwd=123456"); // create the connection object

        protected void AddAdmin_Click(object sender, EventArgs e)
        {
            string Sql = "select * from ado.dbo.adminList where List='" + List.Text + "'";
            SqlDataAdapter da = new SqlDataAdapter(Sql, con); // create the adapter
            DataSet ds = new DataSet(); // create the data set
            da.Fill(ds, "table"); // fill the data set
            if (da.Fill(ds, "table") > 0) // duplicate user name
            {
                Response.Write("<script>alert('新增失败,存在相同用户名')</script>"); // show the message
            }
            else // no duplicate
            {
                string text5 = Num.Text.Trim();
                string text1 = List.Text.Trim();
                string text2 = Name.Text.Trim();
                string text3 = Type.Text.Trim();
                string text4 = Range.Text.Trim();
                string str = "insert into ado.dbo.adminList (id,List,Name,Type,Range) VALUES('" + text5 + "','" + text1 + "','" + text2 + "','" + text3 + "','" + text4 + "')";
                if (SqlHelper.ExecuteNonQuery(str) > 0) // executed successfully
                {
                    Response.Redirect(Request.Url.ToString());
                }
                else // execution failed
                {
                    Response.Write("<script>alert('增加失败,请检查系统内部')</script>"); // show the message
                }
            }
        }
    }
}
```

![图片说明](https://img-ask.csdn.net/upload/201908/09/1565337412_110449.png)
![图片说明](https://img-ask.csdn.net/upload/201908/09/1565338046_314223.png)
![图片说明](https://img-ask.csdn.net/upload/201908/09/1565338057_516946.png)
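The "Parameter name: index" crash almost always comes from `GridView1.DataKeys[e.RowIndex]`: the markup never declares `DataKeyNames`, so the DataKeys collection is empty. A second problem is the cell indexing in `RowUpdating`: column 0 is the checkbox template and column 1 is the read-only id, so in edit mode the first editable TextBox lives in `Cells[2]`, not `Cells[0]`. A sketch of the corrected handler, assuming `DataKeyNames="id"` is added to the GridView tag:

```csharp
// Markup change (assumption): <asp:GridView ID="GridView1" runat="server" DataKeyNames="id" ...>
protected void GridView1_RowUpdating(object sender, GridViewUpdateEventArgs e)
{
    // With DataKeyNames="id" set, DataKeys is populated and this no longer throws.
    string id = GridView1.DataKeys[e.RowIndex].Value.ToString();

    // Cells[0] = checkbox template, Cells[1] = read-only id,
    // so the editable columns start at Cells[2].
    string list  = ((TextBox)GridView1.Rows[e.RowIndex].Cells[2].Controls[0]).Text.Trim();
    string name  = ((TextBox)GridView1.Rows[e.RowIndex].Cells[3].Controls[0]).Text.Trim();
    string type  = ((TextBox)GridView1.Rows[e.RowIndex].Cells[4].Controls[0]).Text.Trim();
    string range = ((TextBox)GridView1.Rows[e.RowIndex].Cells[5].Controls[0]).Text.Trim();

    // Prefer a parameterized command over string concatenation.
    string sqlstr = "update ado.dbo.adminList set List=@l, Name=@n, Type=@t, Range=@r where id=@id";
    using (var con = new SqlConnection(strCon))
    using (var cmd = new SqlCommand(sqlstr, con))
    {
        cmd.Parameters.AddWithValue("@l", list);
        cmd.Parameters.AddWithValue("@n", name);
        cmd.Parameters.AddWithValue("@t", type);
        cmd.Parameters.AddWithValue("@r", range);
        cmd.Parameters.AddWithValue("@id", id);
        con.Open();
        cmd.ExecuteNonQuery();
    }
    GridView1.EditIndex = -1;   // back to browse mode
    bind();
}
```

The same `DataKeyNames` fix applies to the delete handler, which also indexes into `DataKeys`.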
I'm an OO beginner and would like to discuss how to divide responsibilities between methods.
The code is C#; please advise. The requirement: a user queries the dietary records they have already logged. There are a User class and a Food class.

Design 1 (the User class reads the database directly):

```
public class User
{
    private string username, email;
    ... // getters/setters omitted
    public List<Food> readFood()
    {
        // read the database here
    }
}

public class Food
{
    string foodname, foodweight;
    ...
}
```

Design 2 (User delegates the query to the Food class):

```
public class User
{
    private string username, email;
    ... // getters/setters omitted
    public List<Food> readFood()
    {
        // delegate to the Food class
        Food food = new Food();
        return food.readFood(this.username);
    }
}

public class Food
{
    string foodname, foodweight;
    ... // getters/setters omitted
    public List<Food> readFood(string username)
    {
        // read the database here
    }
}
```

1. Of the two approaches above — reading the database directly in the User class versus delegating to the Food class — which is better?
2. In the second design, Food contains a readFood method, so it is no longer a pure entity class. Is it acceptable to pass such a non-pure entity around as a parameter? Does it risk violating OO principles?
3. Still in the second design, what if I declare a separate property-only interface for Food, used purely to carry the attributes — is that helpful, or just overkill? (Code below.)

```
public interface IFoodModule
{
    string Foodname { get; set; }
    string Foodweight { get; set; }
}

public class Food : IFoodModule
{
    ... // getters/setters omitted
    public List<IFoodModule> readFood(string username)
    {
        // read the database here
    }
}
```
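Both designs put data access inside a domain class. A third option, common in layered designs, keeps User and Food as pure entities and moves the query into a dedicated data-access (repository) class. This is only a sketch; `IFoodRepository` and `SqlFoodRepository` are my own names, not from the question:

```csharp
using System.Collections.Generic;

public class Food
{
    public string Foodname { get; set; }
    public string Foodweight { get; set; }
}

// Data access lives in its own class, so entities stay persistence-ignorant.
public interface IFoodRepository
{
    List<Food> ReadFoods(string username);
}

public class SqlFoodRepository : IFoodRepository
{
    public List<Food> ReadFoods(string username)
    {
        // query the database here (e.g. with SqlCommand) and map each row to a Food
        var result = new List<Food>();
        return result;
    }
}
```

User then takes an `IFoodRepository` (for example through its constructor) instead of knowing about the database, which also answers question 2: Food can stay a pure entity.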
TreeView: added nodes are not displayed
```
// Code 1
private void btn_mb_lj_Click_1(object sender, EventArgs e)
{
    btn_mb_cslj.Enabled = false;
    btn_mb_lj.Enabled = false;
    String mbdz = txt_mb_dz.Text;
    String mbdk = txt_mb_dk.Text;
    String mbzh = txt_mb_zh.Text;
    String mbpwd = txt_mb_pwd.Text;
    String mbmc = txt_mb_mc.Text;
    this.MBTableName = mbmc;
    //Console.WriteLine(mbdz + mbdk + mbzh + mbpwd + mbmc);
    //Console.Read();
    MBSqlu = new SqlConnectionUtils();
    MBConn = MBSqlu.connsql(mbdz, mbdk, mbzh, mbpwd, mbmc);
    if (MBConn != null)
    {
        cMessage2.Text = "连接成功";
        List<String> tableList = MBSqlu.queryDB(); // query the tables in the database
        AddTree(tableList, mbmc, MBTreeView);
    }
    else
    {
        cMessage2.Text = "连接失败";
    }
    btn_mb_cslj.Enabled = true;
    btn_mb_lj.Enabled = true;
}
```

```
// Code 2
public List<String> queryDB()
{
    List<String> DB = new List<string>();
    String sql = "select name from sysobjects where xtype='u'";
    SqlDataReader reader = null;
    try
    {
        SqlCommand cmd = new SqlCommand(sql, this.conn);
        reader = cmd.ExecuteReader();
        while (reader.Read())
        {
            DB.Add(reader["name"].ToString());
        }
    }
    catch (Exception ex)
    {
        throw new Exception(ex.ToString() + "读取数据失败");
    }
    finally
    {
        reader.Close();
    }
    return DB;
}
```

```
// Code 3
public void AddTree(List<String> list, string mc, TreeView tree)
{
    tree.Nodes.Clear();
    tree.StateImageList = dbImg;
    TreeNode parent = new TreeNode();
    parent.Text = mc.Trim();
    parent.StateImageIndex = 0;
    tree.Nodes.Add(parent);
    foreach (string s in list)
    {
        TreeNode son = new TreeNode();
        son.StateImageIndex = 1;
        parent.Nodes.Add(son);
    }
}
```

```
// Code 4
public SqlConnection connsql(string ip, string dk, string zh, string pwd, string dbname)
{
    try
    {
        if (conn != null)
        {
            conn.Close();
            conn.Dispose();
        }
        string conncetion = "data source=" + ip + "," + dk + ";initial catalog=" + dbname + ";user id=" + zh + ";pwd=" + pwd + "";
        conn = new SqlConnection(conncetion);
        conn.Open();
    }
    catch (Exception e)
    {
        throw new Exception(e.ToString() + "打开数据库失败");
    }
    return conn;
}
```

Code 1 is the button click handler on the form, Code 2 reads the data, Code 3 adds nodes to the TreeView, and Code 4 opens the database connection.
The WinForms front end has a TreeView and a Button. Clicking the button is supposed to add the retrieved content to the TreeView, but only the single parent node shows up; none of the child nodes are displayed. Could an expert point out what's wrong? For context, here is a screenshot of the running front end:
![图片说明](https://img-ask.csdn.net/upload/201907/05/1562316797_571798.png)
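In Code 3 above, each child `TreeNode` is created but its `Text` is never assigned (the loop variable `s` goes unused), so the children are added as blank nodes under a collapsed parent. A corrected sketch:

```csharp
public void AddTree(List<String> list, string mc, TreeView tree)
{
    tree.Nodes.Clear();
    tree.StateImageList = dbImg;
    TreeNode parent = new TreeNode(mc.Trim());
    parent.StateImageIndex = 0;
    tree.Nodes.Add(parent);
    foreach (string s in list)
    {
        TreeNode son = new TreeNode(s);   // the original never set son.Text
        son.StateImageIndex = 1;
        parent.Nodes.Add(son);
    }
    parent.Expand();                      // make the children visible immediately
}
```

Passing the text through the `TreeNode(string)` constructor is equivalent to setting `son.Text` explicitly; `parent.Expand()` just saves a click when checking the result.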
Reading a data set, writing to a file, and printing the output — how do I implement this in C using library functions?
Problem Description

With a typical operating system, a filesystem consists of a number of directories, in which reside files. These files generally have a canonical location, known as the absolute path (such as /usr/games/bin/kobodl), which can be used to refer to the file no matter where the user is on a system. Most operating system environments allow you to refer to files in other directories without having to be so explicit about their locations, however. This is often stored in a variable called PATH, and is an ordered list of locations (always absolute paths in this problem) to search for a given name. We will call these search paths.

In the brand-new crash shell, paths are handled somewhat differently. Users still provide an ordered list of locations that they wish to search for files (their search paths); when a particular filename is requested, however, crash tries to be even more helpful than usual. The process it follows is as follows:

- If there is an exact match for the filename, it is returned. Exact matches in locations earlier in the list are preferred. (There are no duplicate filenames in a single location.)
- If there are no exact matches, a filename that has a single extra character is returned. That character may be at any point in the filename, but the order of the non-extra characters must be identical to the requested filename. As before, matches in locations earlier in the list are preferred; if there are multiple matches in the highest-ranked location, all such matches in that location are returned.
- If there are no exact matches or one-extra-character matches, files that have two extra characters are looked for. The same rules of precedence and multiple matches apply as for the one-extra-character case.
- If no files meet the three criteria above, no filenames are returned. Two characters is considered the limit of "permissiveness" for the crash shell.

So, for example, given the two files bang and tang, they are both one character away from the filename ang and two from ag. (All characters in this problem will be lowercase.) In the sample data below, both cat and rat are one character away from at.

Given a complete list of locations and files in those locations on a system, a set of users each with their own ordered lists of search paths, and a set of files that they wish to search for, what filenames would crash return?

For the purposes of simplification, all locations will be described by a single alphabetic string, as will filenames and usernames. Real operating system paths often have many components separated by characters such as slashes, but this problem does not. Also note that users may accidentally refer to nonexistent locations in their search paths; these (obviously) contain no files.

Input

All alphabetic strings in the input will have at least one and at most 20 characters, and will contain no special characters such as slashes or spaces; all letters will be lowercase. Input to this problem will begin with a line containing a single integer N (1 ≤ N ≤ 100) indicating the number of data sets. Each data set consists of the following components:

- A line containing a single integer F (1 ≤ F ≤ 100) indicating the number of files on the system;
- A series of F lines representing the files on the system, in the format "location filename", where location and filename are both alphabetic strings;
- A line containing a single integer U (1 ≤ U ≤ 10) indicating the number of users on the system;
- A series of U stanzas representing the users. Each stanza consists of the following components:
  - A line containing a single alphabetic string which is the user's username;
  - A line containing a single integer L (1 ≤ L ≤ 10) representing the number of locations in the user's search path; and
  - A series of L lines containing a single alphabetic string apiece listing the locations in the user's search path. The first one is the highest priority, the second (if present) is the second-highest priority, and so on.
- A line containing a single integer S (1 ≤ S ≤ 200) indicating the number of file searches to run;
- A series of S lines representing the searches, in the format "username filename", where username is an alphabetic string that matches one of the users defined in the data set, and filename is an alphabetic string that represents the requested filename.

Output

For each data set in the input, output the heading "DATA SET #k" where k is 1 for the first data set, 2 for the second, etc. Then for each of the S searches in the data set (and in the same order as read from the input) do the following:

- Print the line "username REQUESTED filename" where filename is the file requested by username.
- For each file (if any) that matches this search, print the line "FOUND filename IN location" where filename is the file that matched the user's request and that was found in location. The list of matching files must be sorted in alphabetical order by filename.

Sample Input

```
1
4
food oat
food goat
animal rat
animal cat
2
bob
2
food
animal
bill
1
animal
4
bob at
bob cat
bill goat
bill at
```

Sample Output

```
DATA SET #1
bob REQUESTED at
FOUND oat IN food
bob REQUESTED cat
FOUND cat IN animal
bill REQUESTED goat
bill REQUESTED at
FOUND cat IN animal
FOUND rat IN animal
```
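The "at most two extra characters" rule above is an insertion-only edit: a candidate matches when it is a supersequence of the requested name and at most two characters longer. A minimal sketch of that check in C (the function name is my own):

```c
#include <stdbool.h>
#include <string.h>

/* True if `cand` can be formed from `req` by inserting at most
   `max_extra` characters (req's characters must appear in order). */
bool matches_with_extras(const char *req, const char *cand, int max_extra)
{
    size_t lr = strlen(req), lc = strlen(cand);
    if (lc < lr || lc - lr > (size_t)max_extra)
        return false;
    /* greedy subsequence scan: every char of req must appear in cand in order */
    size_t i = 0;
    for (size_t j = 0; j < lc && i < lr; j++)
        if (cand[j] == req[i])
            i++;
    return i == lr;
}
```

With this helper, a search tries exact matches first (distance 0), then distance 1, then distance 2, always preferring earlier locations in the user's path.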
Why does opening an existing Word document and adding a paragraph raise an error whenever I pass a paragraph style?
```
from docx import Document                            # core docx library
from docx.enum.style import WD_STYLE_TYPE            # style types
from docx.enum.text import WD_PARAGRAPH_ALIGNMENT    # alignment
from docx.shared import Pt                           # per-run font tweaks
import os
import time


class WordOperate:
    """Word-related operations."""

    def __init__(self, failename=None):
        """Initializer.
        :failename: path of the file
        """
        self.failename = failename
        self.save_path = os.path.join(os.getcwd(),
                                      str(time.strftime('%Y%m%d%H%M%S', time.localtime(time.time()))),
                                      r'new_doc' + r'.docx')
        if failename is None:
            self.doc = Document()
        else:
            self.doc = Document(self.failename)

    def read_text(self):
        """Read the text paragraphs of the document; returns a list."""
        try:
            paragraph_list = []
            for paragraph in self.doc.paragraphs:
                paragraph_list.append(paragraph.text)
            return paragraph_list
        except Exception as e:
            return print(e)

    def read_table(self):
        """Read the tables in the document; returns a list
        shaped like [table[row[cell]]].
        """
        try:
            tables = self.doc.tables
            table_rsult = []
            for i in range(len(tables)):
                table = tables[i]
                table_text = WordOperate.read_table_rows(table)
                table_rsult.append(table_text)
            return table_rsult
        except Exception as e:
            return print(e)

    def read_table_rows(table):
        """Read every row of a table."""
        reust = []
        for rows in range(len(table.rows)):
            result_rows = WordOperate.read_table_columns(rows, table)
            reust.append(result_rows)
        return reust

    def read_table_columns(rows, table):
        """Read every column of one row."""
        result_rows = []
        for columns in range(len(table.columns)):
            result_row = table.cell(rows, columns).text
            result_rows.append(result_row)
        return result_rows

    def write_heading(self, text, level=0):
        """Add a heading.
        :text: heading text
        :level: heading level, default 0
        """
        self.doc.add_heading(text, level=level)

    def write_paragraph(self, text, style='Normal'):
        """Add a paragraph.
        :text: paragraph text
        :style: paragraph style, default 'Normal'
        (call search_paragraph_styles to list the styles)
        """
        paragraph = self.doc.add_paragraph(text, style=style)
        return paragraph

    def write_text(self, text, paragraph, style=None):
        """Append text to a paragraph.
        :text: text content
        :paragraph: the paragraph to append to
        :style: character style, default 'Title Char'
        (call search_character_styles to list the styles)
        """
        if style is None:
            text = paragraph.add_run(text)
        else:
            text = paragraph.add_run(text, style=style)
        return text

    def search_paragraph_styles(self):
        """List the paragraph styles."""
        print('以下为段落样式:')
        for s in self.doc.styles:
            if s.type == WD_STYLE_TYPE.PARAGRAPH:
                print(s.name)

    def search_character_styles(self):
        """List the character styles."""
        print('以下为字体样式:')
        for s in self.doc.styles:
            if s.type == WD_STYLE_TYPE.CHARACTER:
                print(s.name)

    def write_picture(self):
        pass

    def write_table(self):
        pass

    def style_paragraph(self):
        pass

    def style_character(self):
        pass

    def save_word(self):
        if os.path.exists(self.save_path) is True:
            pass
        else:
            os.mkdir(os.path.join(os.getcwd(), str(time.strftime('%Y%m%d%H%M%S', time.localtime(time.time())))))
        self.doc.save(self.save_path)
        return print('文件保存成功,存储路径为:', self.save_path)


if __name__ == "__main__":
    docx = WordOperate(r'C:\Users\14836\OneDrive\Python学习\test\file\新建 Microsoft Word 文档.docx')
    paragraph = doc.write_paragraph('5456465', 'ListNumber')
    doc.write_text('wqqwqw', paragraph, 'Title Char')
    docx.save_word()
```
A problem when populating a ListView from a SqlDataReader
It compiles fine. My goal: read all the rows of one table and insert them into a ListView.

```
.....
while (sqlReader.Read())
{
    ListViewItem it = new ListViewItem();
    it.SubItems[0].Text = sqlReader[0].ToString();
    for (int i = 1; i < 12; i++)   // my table has 12 columns; first time using ListView, not sure this is right
    {
        //listView1.Items.Clear();
        it.SubItems.Add(sqlReader[i].ToString());
        listView1.Items.Add(it);
    }
    listView1.EndUpdate();
}
sqlReader.Close();
...
```

Error message: "Cannot add or insert the item '201201' in more than one place. You must first remove it from its current location or clone it. Parameter name: item"
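The error comes from calling `listView1.Items.Add(it)` inside the column loop: the same `ListViewItem` instance gets added once per column, and an item may belong to only one place. Build the item completely, then add it once; and call `BeginUpdate()` to pair with the existing `EndUpdate()`. A sketch:

```csharp
listView1.BeginUpdate();
while (sqlReader.Read())
{
    // first column becomes the item text
    ListViewItem it = new ListViewItem(sqlReader[0].ToString());
    for (int i = 1; i < 12; i++)          // remaining 11 columns become sub-items
    {
        it.SubItems.Add(sqlReader[i].ToString());
    }
    listView1.Items.Add(it);              // add once, after the sub-items are filled
}
listView1.EndUpdate();
sqlReader.Close();
```

Passing the first column to the `ListViewItem(string)` constructor also removes the need for `it.SubItems[0].Text = ...`.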
How can the SQL statements on my PHP site be optimized? Hoping to discuss with the experts here
My site is a PHP site with a MySQL database. I'm currently using a crawler plugin taken from the web, and it generates a lot of slow queries and full table scans. I built ordinary indexes on the tables involved, but that didn't solve it; the errors keep pointing at the SQL statements, and my knowledge here is still shallow. The MySQL engine is InnoDB; here is a screenshot of the indexes:

![图片说明](https://img-ask.csdn.net/upload/201906/16/1560632068_941383.jpg)

The statements being flagged:

select url from ve123_links_temp where url like 'jmw.com.cn%'

UPDATE `ve123_links_temp` SET `no_id`='1' WHERE url='http://app.hiapk.com/hiapk/about/agreement'

Could the experts here point me in the right direction: how exactly should these be optimized? The code:

<?php
//抓全站--- 多线程
function all_links_duo($site_id,$ceng,$include_word,$not_include_word)
{
global $db;
$new_url=array();
$fenge=array();
$nei=1;//1代表只收内链 2代表外链 空代表所有
$numm=2;//开启多少线程
echo "<br><b>开始抓取第".$ceng."层</b><br>";
$ceng++;
$row=$db->get_one("select * from ve123_links_temp where site_id='".$site_id."' and no_id='0'");
if(empty($row)){echo " ---------- 没有新链接了<br>";return;}//如果找不到新增加url,则结束
$query=$db->query("select * from ve123_links_temp where site_id='".$site_id."' and no_id='0'");
while($row=$db->fetch_array($query))
{
$new_url[]=$row[url];
}
$he_num = ceil(count($new_url)/$numm);//计算需要循环多少次
$fenge=array_chunk($new_url,$numm);//把数组分割成多少块数组 每块大小$numm
/* echo "一共多少个"; echo count($new_url); echo "需要循环"; echo $he_num; echo "次<br>"; */
for($i=0;$i<=$he_num;$i++)
{
/*echo "开始循环第 ".$i."
次<br>"; print_r($fenge[$i]); echo "<br>";*/ $fen_url = array(); $fen_url = cmi($fenge[$i]); //需要把得到的数组 (数组只包括 网址和源码) 分析 写入数据库 , /*echo "<b>本次抓完的网址为</b>"; print_r($fen_url[url]); echo "<br>";*/ foreach ((array)$fen_url as $url => $file) { $links = array(); $temp_links = array(); $cha_temp = array(); $loy = array(); $new_links = array(); $cha_links = array(); $cha_links_num = array(); $links = _striplinks($file); //从htmlcode中提取网址 $links = _expandlinks($links, $url); //补全网址 $links=check_wai($links,$nei,$url); $links=array_values(array_unique($links)); $bianma = bianma($file); //获取得到htmlcode的编码 $file = Convert_File($file,$bianma); //转换所有编码为gb2312 $loy = clean_lry($file,$url,"html"); $title=$loy["title"]; //从数组中得到标题,赋值给title $pagesize=number_format(strlen($file)/1024, 0, ".", ""); $fulltxt=Html2Text($loy["fulltext"]); $description=$loy["description"]; //从数组中得到标题,赋值给description $keywords=$loy["keywords"]; //从数组中得到标题,赋值给keywords $lrymd5=md5($fulltxt); $updatetime=time(); if($title==""){$title=str_cut($fulltxt,65); } //根据url,更新内容 $array=array('lrymd5'=>$lrymd5,'title'=>$title,'fulltxt'=>$fulltxt,'description'=>$description,'keywords'=>$keywords,'pagesize'=>$pagesize,'updatetime'=>$updatetime); $db->update("ve123_links",$array,"url='".$url."'"); $all_num = count($links); //开始读取 ve123_links_temp 中所有site_id 为$site_id 的url 然后和抓取的 $links 数组比较,将得到的差集创建到 ve123_links_temp 中 $query=$db->query("select url from ve123_links_temp where url like '%".getdomain($url)."%'"); while($row=$db->fetch_array($query)) { $temp_links[]=rtrim($row[url],"/"); } $cha_temp=array_diff($links,$temp_links); foreach((array)$cha_temp as $value) { if(check_include($value, $include_word, $not_include_word )) { $arral=array('url'=>$value,'site_id'=>$site_id); $db->insert("ve123_links_temp",$arral); } } //开始读取 ve123_links 中所有site_id 为 $site_id 的url 然后和抓取的 $links 数组比较,将得到的差集创建到 ve123_links 中 合集则输出 已存在了 $query=$db->query("select url from ve123_links where url like '%".getdomain($url)."%'"); 
while($row=$db->fetch_array($query)) { $new_links[]=rtrim($row[url],"/"); } $cha_links=array_diff($links,$new_links); foreach((array)$cha_links as $value) { if(check_include($value, $include_word, $not_include_word )) { $array=array('url'=>$value,'site_id'=>$site_id,'level'=>'1'); $db->insert("ve123_links",$array); $cha_links_num[]=$value; } } $cha_num = count($cha_links_num); printLinksReport($cha_num, $all_num, $cl=0); echo "<a href=".$url." target=_blank>".$url. "</a><br>"; $arral=array('no_id'=>1); $db->update("ve123_links_temp",$arral,"url='$url'"); ob_flush(); flush(); } } all_links_duo($site_id,$ceng,$include_word,$not_include_word);//再次调用本函数开始循环 } //一键找站 function find_sites($site_id,$ceng) { global $db; $new_url=array(); $fenge=array(); $numm=20;//开启多少线程 echo "<br><b>开始抓取第".$ceng."层</b><br>"; $ceng++; $row=$db->get_one("select * from ve123_sites_temp where site_id='".$site_id."' and no_id='0'"); if(empty($row)){echo " ---------- 没有新链接了<br>";return;}//如果找不到新增加url,则结束 $query=$db->query("select * from ve123_sites_temp where site_id='".$site_id."' and no_id='0'"); while($row=$db->fetch_array($query)) { $new_url[]=$row[url]; } $he_num = ceil(count($new_url)/$numm);//计算需要循环多少次 $fenge=array_chunk($new_url,$numm);//把数组分割成多少块数组 每块大小$numm for($i=0;$i<=$he_num;$i++) { $fen_url = array(); $fen_url = cmi($fenge[$i]); //需要把得到的数组 (数组只包括 网址和源码) 分析 写入数据库 , foreach ((array)$fen_url as $url => $file) { $links = array(); $fen_link = array(); $nei_link = array(); $wai_link = array(); $new_temp = array(); $cha_temp = array(); $new_site = array(); $cha_site = array(); $new_lik = array(); $cha_lik = array(); $links = _striplinks($file); //从htmlcode中提取网址 $links = _expandlinks($links, $url); //补全网址 $fen_link=fen_link($links,$url); //把内链和外链分开 $nei_link=array_values(array_unique($fen_link[nei])); //过滤内链 重复的网址 $wai_link=GetSiteUrl($fen_link[wai]); //把外链都转换成首页 $wai_link=array_values(array_unique($wai_link)); //过滤外链 重复的网址 //读出 ve123_sites_temp 中所有 site_id=-1 and no_id=0 
$query=$db->query("select url from ve123_sites_temp where site_id='".$site_id."'"); while($row=$db->fetch_array($query)) { $new_temp[]=$row[url]; } $cha_temp=array_diff($nei_link,$new_temp);//与内链进行比较 得出差集 //将差集创建到 ve123_sites_temp 中 foreach((array)$cha_temp as $value) { $arral=array('url'=>$value,'site_id'=>$site_id,'no_id'=>0); $db->insert("ve123_sites_temp",$arral); } //读出 ve123_sites 中所有 site_id=-1 global $db; $query=$db->query("select url from ve123_sites where site_no='".$site_id."'"); while($row=$db->fetch_array($query)) { $new_site[]=$row[url]; } $cha_site=array_diff($wai_link,$new_site);//与外链进行比较 得出差集 //将差集创建到 ve123_sites 中 foreach((array)$cha_site as $value) { $arral=array('url'=>$value,'site_no'=>$site_id); $db->insert("ve123_sites",$arral); } //读出 ve123_links 中所有 site_id=-1 global $db; global $db; $query=$db->query("select url from ve123_links where site_id='".$site_id."'"); while($row=$db->fetch_array($query)) { $new_lik[]=$row[url]; } $cha_lik=array_diff($wai_link,$new_lik);//与外链进行比较 得出差集 //将得到的差集 创建到 ve123_links foreach ((array)$cha_lik as $value) { $array=array('url'=>$value,'site_id'=>$site_id); $db->insert("ve123_links",$array); echo "<font color=#C60A00><b>抓取到:</b></font>"; echo "<a href=".$value." 
target=_blank>".$value."</a><br>"; } $arral=array('no_id'=>1); $db->update("ve123_sites_temp",$arral,"url='$url'"); ob_flush(); flush(); } } find_sites($site_id,$ceng);//再次调用本函数开始循环 } //一键更新 已抓站 function Update_sites($site_id) { global $db; $numm=20;//开启多少线程 $new_url = array(); $fenge = array(); $query=$db->query("select url from ve123_links where site_id='".$site_id."' and length(lrymd5)!=32"); while($row=$db->fetch_array($query)) { $new_url[]=$row[url]; } $he_num = ceil(count($new_url)/$numm);//计算需要循环多少次 $fenge=array_chunk($new_url,$numm);//把数组分割成多少块数组 每块大小$numm for($i=0;$i<=$he_num;$i++) { $fen_url = array(); $fen_url = cmi($fenge[$i]); //需要把得到的数组 (数组只包括 网址和源码) 分析 写入数据库 , foreach ((array)$fen_url as $url => $file) { $links = array(); $temp_links = array(); $cha_temp = array(); $loy = array(); $new_links = array(); $cha_links = array(); $cha_links_num = array(); $bianma = bianma($file); //获取得到htmlcode的编码 $file = Convert_File($file,$bianma); //转换所有编码为gb2312 if($file==-1) {echo "<b><font color=#C60A00>抓取失败</b></font> ".$url."<br>"; continue;} $loy = clean_lry($file,$url,"html"); //设置分析数组 $title=$loy["title"]; //从数组中得到标题,赋值给title $pagesize=number_format(strlen($file)/1024, 0, ".", ""); $fulltxt=Html2Text($loy["fulltext"]); $description=$loy["description"]; //从数组中得到标题,赋值给description $keywords=$loy["keywords"]; //从数组中得到标题,赋值给keywords $lrymd5=md5($fulltxt); $updatetime=time(); if($title==""){$title=str_cut($fulltxt,65); } //根据url,更新内容 echo "<b><font color=#0Ae600>已更新</font></b>"; echo $title; echo "<a href=".$url." target=_blank>".$url. 
"</a><br>"; $array=array('lrymd5'=>$lrymd5,'title'=>$title,'fulltxt'=>$fulltxt,'description'=>$description,'keywords'=>$keywords,'pagesize'=>$pagesize,'updatetime'=>$updatetime); $db->update("ve123_links",$array,"url='".$url."'"); } } } //一键找站 暂时不用的 function find_sites_($url) { $oldtime=time(); $site_id = -1; $numm=10; $links=array(); $fen_link=array(); $lrp =array(); $nei_link =array(); $wai_link =array(); $new_temp =array(); $cha_temp =array(); $new_site =array(); $cha_site =array(); $new_lik =array(); $cha_lik =array(); $fenge =array(); $lrp = cmi($url); $links = _striplinks($lrp[$url]); //从htmlcode中提取网址 $links = _expandlinks($links, $url); //补全网址 $fen_link=fen_link($links,$url); //把内链和外链分开 $nei_link=array_values(array_unique($fen_link[nei])); //过滤内链 重复的网址 $wai_link=GetSiteUrl($fen_link[wai]); //把外链都转换成首页 $wai_link=array_values(array_unique($wai_link)); //过滤外链 重复的网址 /*print_r($nei_link); echo "<br><br>"; print_r($wai_link);*/ //读出 ve123_sites_temp 中所有 site_id=-1 and no_id=0 global $db; $query=$db->query("select url from ve123_sites_temp where site_id='-1' and no_id='0'"); while($row=$db->fetch_array($query)) { $new_temp[]=$row[url]; } $cha_temp=array_diff($nei_link,$new_temp);//与内链进行比较 得出差集 //将差集创建到 ve123_sites_temp 中 foreach((array)$cha_temp as $value) { $arral=array('url'=>$value,'site_id'=>$site_id,'no_id'=>0); $db->insert("ve123_sites_temp",$arral); } //读出 ve123_temp 中所有 site_id=-1 global $db; global $db; $query=$db->query("select url from ve123_sites where site_no='-1'"); while($row=$db->fetch_array($query)) { $new_site[]=$row[url]; } $cha_site=array_diff($wai_link,$new_site);//与外链进行比较 得出差集 //将差集创建到 ve123_sites 中 foreach((array)$cha_site as $value) { $arral=array('url'=>$value,'site_no'=>$site_id); $db->insert("ve123_sites",$arral); } //读出 ve123_links 中所有 site_id=-1 global $db; global $db; $query=$db->query("select url from ve123_sites where site_id='-1'"); while($row=$db->fetch_array($query)) { $new_lik[]=$row[url]; } 
$cha_lik=array_diff($wai_link,$new_lik);//与外链进行比较 得出差集 //将得到的差集 创建到 ve123_links $he_num = ceil(count($cha_lik)/$numm);//计算需要循环多少次 $fenge=array_chunk($cha_lik,$numm);//把数组分割成多少块数组 每块大小$numm for($i=0;$i<=$he_num;$i++) { $fen_url = array(); $fen_url = cmi($fenge[$i]); //多线程开始采集 foreach ((array)$fen_url as $url => $file) { $bianma = bianma($file); //获取得到htmlcode的编码 $file = Convert_File($file,$bianma); //转换所有编码为gb2312 $loy = clean_lry($file,$url,"html"); //过滤 file 中标题等 到数组 $title=$loy["title"]; //从数组中得到标题,赋值给title $pagesize=number_format(strlen($file)/1024, 0, ".", ""); $fulltxt=Html2Text($loy["fulltext"]); $description=$loy["description"]; //从数组中得到标题,赋值给description $keywords=$loy["keywords"]; //从数组中得到标题,赋值给keywords $lrymd5=md5($fulltxt); $updatetime=time(); if($title==""){$title=str_cut($fulltxt,65); } //根据url,更新内容 $array=array('url'=>$value,'lrymd5'=>$lrymd5,'title'=>$title,'fulltxt'=>$fulltxt,'description'=>$description,'keywords'=>$keywords,'pagesize'=>$pagesize,'updatetime'=>$updatetime); $db->insert("ve123_links",$array); echo "<font color=#C60A00><b>抓取到:</b></font>".$title; echo "<a href=".$url." 
target=_blank>".$url."</a><br>"; } } $newtime=time(); echo " --- <b>用时:</b>"; echo date("H:i:s",$newtime-$oldtime-28800); echo "<br>"; del_links_temp($site_id); } //抓全站--- 单线程 function all_url_dan($url,$old,$nei,$ooo,$site_id,$include_word,$not_include_word) { if(!is_url($url)) { return false;} global $db,$config; $snoopy = new Snoopy; //国外snoopy程序 $snoopy->fetchlry($url); $links=$snoopy->resulry; if(!is_array($links)) {return;} $links=check_wai($links,$nei,$url); $links=array_values(array_unique($links)); $title=$snoopy->title; $fulltxt=$snoopy->fulltxt; $lrymd5=md5($fulltxt); $pagesize=$snoopy->pagesize; $description=$snoopy->description; $keywords=$snoopy->keywords; $updatetime=time(); if($title==""){$title=str_cut($fulltxt,65); } //读取url,更新内容 $array=array('lrymd5'=>$lrymd5,'title'=>$title,'fulltxt'=>$fulltxt,'description'=>$description,'keywords'=>$keywords,'pagesize'=>$pagesize,'updatetime'=>$updatetime); $db->update("ve123_links",$array,"url='".$url."'"); $all_num = count($links); $temp_links=array(); $cha_temp=array(); //开始读取 ve123_links_temp 中所有site_id 为$site_id 的url 然后和抓取的 $links 数组比较,将得到的差集创建到 ve123_links_temp 中 $query=$db->query("select url from ve123_links_temp where url like '%".getdomain($url)."%'"); while($row=$db->fetch_array($query)) { $temp_links[]=rtrim($row[url],"/"); } $cha_temp=array_diff($links,$temp_links); foreach((array)$cha_temp as $value) { $arral=array('url'=>$value,'site_id'=>$site_id); $db->insert("ve123_links_temp",$arral); } //开始读取 ve123_links 中所有site_id 为 $site_id 的url 然后和抓取的 $links 数组比较,将得到的差集创建到 ve123_links 中 合集则输出 已存在了 $query=$db->query("select url from ve123_links where url like '%".getdomain($url)."%'"); while($row=$db->fetch_array($query)) { $new_links[]=rtrim($row[url],"/"); } $cha_links=array_diff($links,$new_links); $cha_num = count($cha_links); foreach((array)$cha_links as $value) { if(check_include($value, $include_word, $not_include_word )) { $array=array('url'=>$value,'site_id'=>$site_id,'level'=>'1'); 
$db->insert("ve123_links",$array); } } printLinksReport($cha_num, $all_num, $cl=0); echo "<a href=".$old." target=_blank>".$old. "</a>"; ob_flush(); flush(); } //抓全站--- 单线程---不用的 function add_all_url_ ($url,$old,$numm,$ooo,$site_id,$include_word,$not_include_word) { if(!is_url($url)) { return false;} global $db,$config; $snoopy = new Snoopy; //国外snoopy程序 $snoopy->fetchlry($url); $links=$snoopy->resulry; if(!is_array($links)) {return;} $links=check_wai($links,$numm,$url); $links=array_values(array_unique($links)); $title=$snoopy->title; $fulltxt=$snoopy->fulltxt; $lrymd5=md5($fulltxt); $pagesize=$snoopy->pagesize; $description=$snoopy->description; $keywords=$snoopy->keywords; $updatetime=time(); if($title==""){$title=str_cut($fulltxt,65); } //读取url,更新内容 $array=array('lrymd5'=>$lrymd5,'title'=>$title,'fulltxt'=>$fulltxt,'description'=>$description,'keywords'=>$keywords,'pagesize'=>$pagesize,'updatetime'=>$updatetime); $db->update("ve123_links",$array,"url='".$url."'"); $all_num = count($links); $temp_links=array(); $cha_temp=array(); //开始读取 ve123_links_temp 中所有site_id 为$site_id 的url 然后和抓取的 $links 数组比较,将得到的差集创建到 ve123_links_temp 中 $query=$db->query("select url from ve123_links_temp where url like '%".getdomain($url)."%'"); while($row=$db->fetch_array($query)) { $temp_links[]=rtrim($row[url],"/"); } $cha_temp=array_diff($links,$temp_links); foreach((array)$cha_temp as $value) { $arral=array('url'=>$value,'site_id'=>$site_id); $db->insert("ve123_links_temp",$arral); } //开始读取 ve123_links 中所有site_id 为 $site_id 的url 然后和抓取的 $links 数组比较,将得到的差集创建到 ve123_links 中 合集则输出 已存在了 $query=$db->query("select url from ve123_links where url like '%".getdomain($url)."%'"); while($row=$db->fetch_array($query)) { $new_links[]=rtrim($row[url],"/"); } $he_links=array_intersect($links,$new_links); $he_num = count($he_links); $cha_links=array_diff($links,$new_links); $cha_num = count($cha_links); foreach((array)$cha_links as $value) { if(check_include($value, $include_word, $not_include_word )) 
{ $array=array('url'=>$value,'site_id'=>$site_id,'level'=>'1'); $db->insert("ve123_links",$array); } } printLinksReport($cha_num, $all_num, $cl=0); echo "<a href=".$old." target=_blank>".$old. "</a>"; ob_flush(); flush(); } function printLinksReport($cha_num, $all_num, $cl) { global $print_results, $log_format; $cha_html = " <font color=\"blue\">页面包含<b>$all_num</b>条链接</font>。 <font color=\"red\"><b>$cha_num</b>条新链接。</font>\n"; $no_html = " <font color=\"blue\">页面包含<b>$all_num</b>条链接</font>。 没有新链接。\n"; if($cha_num==0) {print $no_html; flush();} else{print $cha_html;} } function add_links_insite($link,$old,$numm,$ooo,$site_id,$include_word,$not_include_word) { if(!is_url($link)) { return false; } global $db,$config; /* $spider=new spider; //系统自带蜘蛛 echo "<b>网站编码</b>(默认GB2312)<b>:"; $spider->url($link); echo "</b><br>"; $links= $spider->get_insite_links(); */ //$site_url=GetSiteUrl($link); $url_old=GetSiteUrl($old); echo "原始页=".$url_old." - - <"; echo "首层 id=".$site_id."> - - <"; echo "包含字段=".$include_word.">"; echo "<br>"; /*if($ooo==0) { $site=$db->get_one("select * from ve123_sites where url='".$url_old."'"); $site_id=$site["site_id"]; $include_word=$site["include_word"]; $not_include_word=$site["not_include_word"]; $spider_depth=$site["spider_depth"]; } */ $snoopy = new Snoopy; //国外snoopy程序 $snoopy->fetchlinks($link); $links=$snoopy->results; $links=check_wai($links,$numm,$link); $links=array_values(array_unique($links)); foreach((array)$links as $value) { $row=$db->get_one("select * from ve123_links_temp where url='".$value."'"); if(empty($row)) { $arral=array('url'=>$value,'site_id'=>$site_id); $db->insert("ve123_links_temp",$arral); } $value=rtrim($value,"/"); $row=$db->get_one("select * from ve123_links where url='".$value."'"); if (check_include($value, $include_word, $not_include_word )) { if(empty($row)&&is_url($value)) { echo "<font color=#C60A00><b>抓取到:</b></font>"; $array=array('url'=>$value,'site_id'=>$site_id,'level'=>'1'); 
$db->insert("ve123_links",$array); } else { echo "<b>已存在了:</b>";} echo "<a href=".$value." target=_blank>".$value. "</a><br>"; ob_flush(); flush(); //$row=$db->get_one("select * from ve123_links_temp where url='".$value."'"); // if(empty($row)&&is_url($value)) // { // $array=array('url'=>$value,'site_id'=>$site_id); // $db->insert("ve123_links_temp",$array); // } } } } //只保留内链或者外链 function check_wai($lry_all,$nei,$url) { $lry_nei=array();//站内链接数组 $lry_wai=array();//站外链接数组 $new_url=getdomain($url); if($nei=="") { foreach ((array)$lry_all as $value) { $lry_nei[]=rtrim($value,"/"); } return $lry_nei; } foreach ((array)$lry_all as $value) { if(getdomain($value)==$new_url) { $lry_nei[]=rtrim($value,"/"); //$lry_nei[]=$value; } else { $lry_wai[]=rtrim($value,"/"); } } if($nei==1){return $lry_nei;} if($nei==2){return $lry_wai;} } //把内链和外链分开 function fen_link($lry_all,$url) { $data=array();//站外链接数组 $new_url=getdomain($url); foreach ((array)$lry_all as $value) { if(getdomain($value)==$new_url) { $data['nei'][]=rtrim($value,"/"); } else { $data['wai'][]=rtrim($value,"/"); } } return $data; } function check_include($link, $include_word, $not_include_word) { $url_word = Array (); $not_url_word = Array (); $is_shoulu = true; if ($include_word != "") { $url_word = explode(",", $include_word); } if ($not_include_word != "") { $not_url_word = explode(",", $not_include_word); } foreach ($not_url_word as $v_key) { $v_key = trim($v_key); if ($v_key != "") { if (substr($v_key, 0, 1) == '*') { if (preg_match(substr($v_key, 1), $link)) { $is_shoulu = false; break; } } else { if (!(strpos($link, $v_key) === false)) { $is_shoulu = false; break; } } } } if ($is_shoulu && $include_word != "") { $is_shoulu = false; foreach ($url_word as $v_key) { $v_key = trim($v_key); if ($v_key != "") { if (substr($v_key, 0, 1) == '*') { if (preg_match(substr($v_key, 1), $link)) { $is_shoulu = true; break 2; } } else { if (strpos($link, $v_key) !== false) { $is_shoulu = true; break; } } } } } return 
$is_shoulu; } function add_links_site_fromtemp($in_url) { global $db; $domain=getdomain($in_url); $query=$db->query("select * from ve123_links_temp where url like '%".$domain."%' and no_id='0'"); while($row=$db->fetch_array($query)) { @$db->query("update ve123_links_temp set no_id='1' where url='".$row["url"]."'"); add_links_insite($row["url"],$row["url"],1,1); //sleep(3); } //sleep(5); add_links_site_fromtemp($in_url) ; } function insert_links($url) { global $db,$config; $spider=new spider; $spider->url($url); $links= $spider->links(); $sites= $spider->sites(); foreach($sites as $value) { $site_url=GetSiteUrl($link); $site=$db->get_one("select * from ve123_sites where url='".$site_url."'"); $site_id=$site["site_id"]; $row=$db->get_one("select * from ve123_links where url='".$value."'"); if(empty($row)&&is_url($value)) { echo $value."<br>"; $array=array('url'=>$value,'site_id'=>$site_id,'level'=>'0'); $db->insert("ve123_links",$array); } else { echo "已存在:".$value."<br>"; } ob_flush(); flush(); //sleep(1); $row=$db->get_one("select * from ve123_sites where url='".$value."'"); if(empty($row)&&is_url($value)) { $array=array('url'=>$value,'spider_depth'=>$config["spider_depth"],'addtime'=>time()); $db->insert("ve123_sites",$array); } } //sleep(1); foreach($links as $value) { $row=$db->get_one("select * from ve123_links_temp where url='".$value."'"); if(empty($row)&&is_url($value)) { $array=array('url'=>$value); $db->insert("ve123_links_temp",$array); } } } function GetUrl_AllSite($in_url) { global $db; $query=$db->query("select * from ve123_links_temp where url like '%".$in_url."%' and updatetime<='".(time()-(86400*30))."'"); while($row=$db->fetch_array($query)) { @$db->query("update ve123_links_temp set updatetime='".time()."' where url='".$row["url"]."'"); insert_links($row["url"]); //sleep(3); } //sleep(5); GetUrl_AllSite($in_url) ; } function Updan_link($url,$site_id) { global $db; $row=$db->get_one("select * from ve123_links_temp where url='".$url."'"); 
if(empty($row)) { $arral=array('url'=>$url,'site_id'=>$site_id); $db->insert("ve123_links_temp",$arral); } $row=$db->get_one("select * from ve123_links where url like '%".$url."%'"); if(empty($row)) { echo "<font color=#C60A00><b>抓取到:</b></font>".$url."<br>"; $array=array('url'=>$url,'site_id'=>$site_id,'level'=>'1'); $db->insert("ve123_links",$array); } else { echo "已存在:".$url."<br>"; } } function Updan_zhua($url,$site_id) { global $db; $lrp = array(); $links = array(); $fen_link = array(); $nei_link = array(); $new_temp = array(); $cha_temp = array(); $lrp = cmi($url); $links = _striplinks($lrp[$url]); //从htmlcode中提取网址 $links = _expandlinks($links, $url); //补全网址 $fen_link=fen_link($links,$url); //把内链和外链分开 $nei_link=array_values(array_unique($fen_link[nei])); //过滤内链 重复的网址 //读出 ve123_sites_temp 中所有 site_id=-1 and no_id=0 $query=$db->query("select url from ve123_sites_temp where site_id='".$site_id."'"); while($row=$db->fetch_array($query)) { $new_temp[]=$row[url]; } $cha_temp=array_diff($nei_link,$new_temp);//与内链进行比较 得出差集 //将差集创建到 ve123_sites_temp 中 foreach((array)$cha_temp as $value) { $arral=array('url'=>$value,'site_id'=>$site_id,'no_id'=>0); $db->insert("ve123_sites_temp",$arral); } } function Update_link($url) { global $db,$bug_url; $is_success=FALSE; $is_shoulu=FALSE; /*$spider=new spider; $spider->url($url); $title=$spider->title; $fulltxt=$spider->fulltxt; $lrymd5=md5($spider->fulltxt); $pagesize=$spider->pagesize; $keywords=$spider->keywords; $htmlcode=$spider->htmlcode; $description=$spider->description;*/ $snoopy = new Snoopy; //国外snoopy程序 $snoopy->fetchtext($url); $title=$snoopy->title; $fulltxt=$snoopy->fulltxt; $lrymd5=md5($fulltxt); $pagesize=$snoopy->pagesize; $description=$snoopy->description; $keywords=$snoopy->keywords; //echo "fulltxt=".$fulltxt."<br>"; $updatetime=time(); //$site_url=GetSiteUrl($url); //$site=$db->get_one("select * from ve123_sites where url='".$site_url."'"); //$site_id=$site["site_id"]; //echo 
"site_id".$site["site_id"]."<br>"; if($title==""){$title=str_cut($fulltxt,65); } echo "<b><font color=#0Ae600>已更新</font></b>"; echo $title; $array=array('lrymd5'=>$lrymd5,'title'=>$title,'fulltxt'=>$fulltxt,'description'=>$description,'keywords'=>$keywords,'pagesize'=>$pagesize,'updatetime'=>$updatetime); //$db->query("update ve123_links set updatetime='".time()."' where url='".$url."'"); //更新时间 //$s=array(); //$s=explode("?",$title); //$domain=GetSiteUrl($url); //$site=$db->get_one("select * from ve123_sites where url='".$domain."'"); $db->update("ve123_links",$array,"url='".$url."'"); $is_success=TRUE; if(empty($bug_url)) { exit(); } return $is_success; } function Update_All_Link_($in_url='',$days,$qiangzhi) { global $db; $new_url=array(); $fen_url=array(); $fenge=array(); $numm=20;//开启多少线程 //if($qiangzhi==0){ $lry="and strlen(lrymd5)!=32";} //else { ;} if(empty($in_url)) { $sql="select url from ve123_links where length(lrymd5)!=32 order by link_id desc"; } else { $sql="select url from ve123_links where url like '%".getdomain($in_url)."%' and length(lrymd5)!=32 order by link_id desc"; } echo $sql."<br>"; $query=$db->query($sql); while($row=$db->fetch_array($query)) { $new_url[]=$row[url]; } $he_num = ceil(count($new_url)/$numm);//计算需要循环多少次 //echo "<br><b>需要循环多少次=</b>".$he_num."<br>"; $fenge=array_chunk($new_url,$numm);//把数组分割成多少块数组 每块大小$numm for($i=0;$i<=$he_num;$i++) //for($i=0;$i<=1;$i++) { $fen_url=cmi($fenge[$i]); //需要把得到的数组 (数组只包括 网址和源码) 分析 写入数据库 , foreach ((array)$fen_url as $url => $file) { $bianma = bianma($file); //获取得到htmlcode的编码 $file = Convert_File($file,$bianma); //转换所有编码为gb2312 $lry = clean_lry($file,$url,"html"); $title=$lry["title"]; //从数组中得到标题,赋值给title $pagesize=number_format(strlen($file)/1024, 0, ".", ""); $fulltxt=Html2Text($lry["fulltext"]); $description=$lry["description"]; //从数组中得到标题,赋值给description $keywords=$lry["keywords"]; //从数组中得到标题,赋值给keywords $lrymd5=md5($fulltxt); $updatetime=time(); if($title==""){$title=str_cut($fulltxt,65); } echo 
"<b><font color=#0Ae600>已更新</font></b>"; echo $title; echo "<a href=".$url." target=_blank>".$url. "</a><br>"; $array=array('lrymd5'=>$lrymd5,'title'=>$title,'fulltxt'=>$fulltxt,'description'=>$description,'keywords'=>$keywords,'pagesize'=>$pagesize,'updatetime'=>$updatetime); $db->update("ve123_links",$array,"url='".$url."'"); } } } function cmi($links,$killspace=TRUE,$forhtml=TRUE,$timeout=6,$header=0,$follow=1){ $res=array();//用于保存结果 $mh = curl_multi_init();//创建多curl对象,为了几乎同时执行 foreach ((array)$links as $i => $url) { $conn[$url]=curl_init($url);//若url中含有gb2312汉字,例如FTP时,要在传入url的时候处理一下,这里不用 curl_setopt($conn[$url], CURLOPT_TIMEOUT, $timeout);//此时间须根据页面的HTML源码出来的时间,一般是在1s内的,慢的话应该也不会6秒,极慢则是在16秒内 curl_setopt($conn[$url], CURLOPT_HEADER, $header);//不返回请求头,只要源码 curl_setopt($conn[$url],CURLOPT_RETURNTRANSFER,1);//必须为1 curl_setopt($conn[$url], CURLOPT_FOLLOWLOCATION, $follow);//如果页面含有自动跳转的代码如301或者302HTTP时,自动拿转向的页面 curl_multi_add_handle ($mh,$conn[$url]);//关键,一定要放在上面几句之下,将单curl对象赋给多对象 } //下面一大步的目的是为了减少cpu的无谓负担,暂时不明,来自php.net的建议,几乎是固定用法 do { $mrc = curl_multi_exec($mh,$active);//当无数据时或请求暂停时,active=true } while ($mrc == CURLM_CALL_MULTI_PERFORM);//当正在接受数据时 while ($active and $mrc == CURLM_OK) {//当无数据时或请求暂停时,active=true,为了减少cpu的无谓负担,这一步很难明啊 if (curl_multi_select($mh) != -1) { do { $mrc = curl_multi_exec($mh, $active); } while ($mrc == CURLM_CALL_MULTI_PERFORM); } } foreach ((array)$links as $i => $url) { $cinfo=curl_getinfo($conn[$url]);//可用于取得一些有用的参数,可以认为是header $res[$url]=curl_multi_getcontent($conn[$url]); if(!$forhtml){//节约内存 $res[$url]=NULL; } /*下面这一段放一些高消耗的程序代码,用来处理HTML,我保留的一句=NULL是要提醒,及时清空对象释放内存,此程序在并发过程中如果源码太大,内在消耗严重 //事实上,这里应该做一个callback函数或者你应该将你的逻辑直接放到这里来,我为了程序可重复,没这么做 preg_match_all($preg,$res[$i],$matchlinks); $res[$i]=NULL;*/ curl_close($conn[$url]);//关闭所有对象 curl_multi_remove_handle($mh , $conn[$url]); //用完马上释放资源 } curl_multi_close($mh);$mh=NULL;$conn=NULL;$links=NULL; return $res; } function clean_lry($file, $url, $type) { $data=array(); $file = 
preg_replace("/<link rel[^<>]*>/i", " ", $file); //$file = preg_replace("@<!--sphider_noindex-->.*?<!--\/sphider_noindex-->@si", " ",$file); $file = preg_replace("@<!--.*?-->@si", " ",$file); $file = preg_replace("@<script[^>]*?>.*?</script>@si", " ",$file); $file = preg_replace("/&nbsp;/", " ", $file); $file = preg_replace("/&raquo;/", " ", $file); $file=str_replace("'","‘",$file); $regs = Array (); preg_match("/<meta +name *=[\"']?description[\"']? *content=[\"']?([^<>'\"]+)[\"']?/i", $file, $regs); if (isset ($regs)) { $description = $regs[1]; $file = str_replace($regs[0], "", $file); } $regs = Array (); preg_match("/<meta +name *=[\"']?keywords[\"']? *content=[\"']?([^<>'\"]+)[\"']?/i", $file, $regs); if (isset ($regs)) { $keywords = $regs[1]; $file = str_replace($regs[0], "", $file); } $regs = Array (); $keywords = preg_replace("/[, ]+/", " ", $keywords); if (preg_match("@<title *>(.*?)<\/title*>@si", $file, $regs)) { $title = trim($regs[1]); $file = str_replace($regs[0], "", $file); } $file = preg_replace("@<style[^>]*>.*?<\/style>@si", " ", $file); //create spaces between tags, so that removing tags doesnt concatenate strings $file = preg_replace("/<[\w ]+>/", "\\0 ", $file); $file = preg_replace("/<\/[\w ]+>/", "\\0 ", $file); $file = strip_tags($file); //$fulltext = $file; //$file .= " ".$title; $file = preg_replace('~&#x([0-9a-f]+);~ei', 'chr(hexdec("\\1"))', $file); $file = preg_replace('~&#([0-9]+);~e', 'chr("\\1")', $file); $file = strtolower($file); $file = preg_replace("/&[a-z]{1,6};/", " ", $file); $file = preg_replace("/[\*\^\+\?\\\.\[\]\^\$\|\{\)\(\}~!\"\/@#?%&=`?><:,]+/", " ", $file); $file = preg_replace("/\s+/", " ", $file); //$data['fulltext'] = $fulltext; $data['fulltext'] = addslashes($file); $data['title'] = addslashes($title); $data['description'] = $description; $data['keywords'] = $keywords; return $data; } function bianma($file) { preg_match_all("/<meta.+?charset=([-\w]+)/i",$file,$rs); $chrSet=strtoupper(trim($rs[1][0])); return 
$chrSet; } function Convert_File($file,$charSet) { $conv_file = html_entity_decode($file); $charSet = strtoupper(trim($charSet)); if($charSet != "GB2312"&&$charSet != "GBK") { $file=convertfile($charSet,"GB2312",$conv_file); if($file==-1){ return -1; } } return $file; } function convertfile($in_charset, $out_charset, $str) { //if(function_exists('mb_convert_encoding')) //{ $in_charset=explode(',',$in_charset); $encode_arr = array('GB2312','GBK','UTF-8','ASCII','BIG5','JIS','eucjp-win','sjis-win','EUC-JP'); $cha_temp=array_intersect($encode_arr,$in_charset); $cha_temp=implode('',$cha_temp); if(empty($in_charset)||empty($cha_temp)) { $encoded = mb_detect_encoding($str, $encode_arr); $in_charset=$encoded; } if(empty($in_charset)){ return -1; } echo $in_charset; return mb_convert_encoding($str, $out_charset, $in_charset); /*} else { require_once PATH.'include/charset.func.php'; $in_charset = strtoupper($in_charset); $out_charset = strtoupper($out_charset); if($in_charset == 'UTF-8' && ($out_charset == 'GBK' || $out_charset == 'GB2312')) { return utf8_to_gbk($str); } if(($in_charset == 'GBK' || $in_charset == 'GB2312') && $out_charset == 'UTF-8') { return gbk_to_utf8($str); } return $str; }*/ } function Update_All_Link($in_url='',$days,$qiangzhi) { global $db; if(empty($in_url)) { //$sql="select * from ve123_links where updatetime<='".(time()-(86400*$days))."' order by link_id desc";//echo $days."<br>"; $sql="select * from ve123_links where updatetime+86400 <".time()." 
order by link_id ";//echo $days."<br>"; } else { $sql="select * from ve123_links where url like '%".getdomain($in_url)."%' order by link_id desc";//echo $days."<br>"; //$sql="select * from ve123_links where url like '%".$in_url."%' order by link_id desc";//echo $days."<br>"; } //$sql="select * from ve123_links order by link_id"; echo $sql."<br>"; $query=$db->query($sql); while($row=$db->fetch_array($query)) { if(is_url($row["url"])) { // echo "呵呵呵呵".$row["lrymd5"]."<br>"; ob_flush(); flush(); //sleep(1); //if($row["lrymd5"]==""){ Update_link($row["url"],$row["lrymd5"]); } if($qiangzhi==1){ Update_link($row["url"]); } else { if(strlen($row["lrymd5"])!=32){ Update_link($row["url"]); } else {echo ""; } } echo ""; } ////sleep(2); } // echo "<br><b>全部更新完成</b> 完成日期:"; // echo date("Y年m月d日 H:i:s",time()); //sleep(2); // Update_All_Link($in_url) ; } function url_ce($val, $parent_url, $can_leave_domain) { global $ext, $mainurl, $apache_indexes, $strip_sessids; $valparts = parse_url($val); $main_url_parts = parse_url($mainurl); //if ($valparts['host'] != "" && $valparts['host'] != $main_url_parts['host'] && $can_leave_domain != 1) {return '';} reset($ext); while (list ($id, $excl) = each($ext)) if (preg_match("/\.$excl$/i", $val)) return ''; if (substr($val, -1) == '\\') {return '';} if (isset($valparts['query'])) {if ($apache_indexes[$valparts['query']]) {return '';}} if (preg_match("/[\/]?mailto:|[\/]?javascript:|[\/]?news:/i", $val)) {return '';} if (isset($valparts['scheme'])) {$scheme = $valparts['scheme'];} else {$scheme ="";} if (!($scheme == 'http' || $scheme == '' || $scheme == 'https')) {return '';} $regs = Array (); while (preg_match("/[^\/]*\/[.]{2}\//", $valpath, $regs)) { $valpath = str_replace($regs[0], "", $valpath); } $valpath = preg_replace("/\/+/", "/", $valpath); $valpath = preg_replace("/[^\/]*\/[.]{2}/", "", $valpath); $valpath = str_replace("./", "", $valpath); if(substr($valpath,0,1)!="/") {$valpath="/".$valpath;} $query = ""; if 
(isset($val_parts['query'])) {$query = "?".$val_parts['query'];} if ($main_url_parts['port'] == 80 || $val_parts['port'] == "") {$portq = "";} else {$portq = ":".$main_url_parts['port'];} return $val; } function iframe_ce($val, $parent_url, $can_leave_domain) { global $ext, $mainurl, $apache_indexes, $strip_sessids; $valparts = parse_url($val); $main_url_parts = parse_url($mainurl); //if ($valparts['host'] != "" && $valparts['host'] != $main_url_parts['host'] && $can_leave_domain != 1) {return '';} reset($ext); while (list ($id, $excl) = each($ext)) if (preg_match("/\.$excl$/i", $val)) return ''; if (substr($val, -1) == '\\') {return '';} if (isset($valparts['query'])) {if ($apache_indexes[$valparts['query']]) {return '';}} if (preg_match("/[\/]?mailto:|[\/]?javascript:|[\/]?news:/i", $val)) {return '';} if (isset($valparts['scheme'])) {$scheme = $valparts['scheme'];} else {$scheme ="";} if (!($scheme == 'http' || $scheme == '' || $scheme == 'https')) {return '';} $regs = Array (); while (preg_match("/[^\/]*\/[.]{2}\//", $valpath, $regs)) { $valpath = str_replace($regs[0], "", $valpath); } $valpath = preg_replace("/\/+/", "/", $valpath); $valpath = preg_replace("/[^\/]*\/[.]{2}/", "", $valpath); $valpath = str_replace("./", "", $valpath); if(substr($valpath,0,1)!="/") {$valpath="/".$valpath;} $query = ""; if (isset($val_parts['query'])) {$query = "?".$val_parts['query'];} if ($main_url_parts['port'] == 80 || $val_parts['port'] == "") {$portq = "";} else {$portq = ":".$main_url_parts['port'];} return $val; } function _striplinks($document) { $match = array(); $links = array(); preg_match_all("'<\s*(a\s.*?href|[i]*frame\s.*?src)\s*=\s*([\'\"])?([+:%\/\?~=&\\\(\),._a-zA-Z0-9-]*)'isx",$document,$links,PREG_PATTERN_ORDER); foreach ($links[3] as $val) { if (($a = url_ce($val, $url, $can_leave_domain)) != '') { $match[] = $a; } $checked_urls[$val[1]] = 1; } return $match; } function _expandlinks($links,$URI) { preg_match("/^[^\?]+/",$URI,$match); $match = 
preg_replace("|/[^\/\.]+\.[^\/\.]+$|","",$match[0]);
	$match = preg_replace("|/$|","",$match);
	$match_part = parse_url($match);
	$match_root = $match_part["scheme"]."://".$match_part["host"];
	$URI_PARTS = parse_url($URI);
	$host = $URI_PARTS["host"];
	$search = array(
		"|^http://".preg_quote($host)."|i",
		"|^(\/)|i",
		"|^(?!http://)(?!mailto:)|i",
		"|/\./|",
		"|/[^\/]+/\.\./|"
	);
	$replace = array("", $match_root."/", $match."/", "/", "/");
	$expandedLinks = preg_replace($search,$replace,$links);
	return $expandedLinks;
}

function foothtml()
{
	echo "<div style=\"text-align:center;\"><a target=\"_blank\" href=\"http://www.php.com\"> Php</a></div>";
}
?>
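The crawler functions above keep repeating one pattern: trim trailing slashes from the harvested URLs, drop duplicates (`array_values(array_unique(...))`), diff them against what is already stored in `ve123_links` (`array_diff`), and split the remainder into fixed-size batches (`array_chunk($cha_lik, $numm)`) for the concurrent fetcher. A minimal Python sketch of that same pattern — function and variable names here are illustrative, not part of the original code:

```python
def batch_new_links(candidates, known, batch_size=20):
    """Sketch of the crawler's dedup pattern: normalize URLs, drop
    duplicates (array_unique), remove URLs already stored (array_diff),
    and split the rest into fixed-size batches (array_chunk)."""
    known = {u.rstrip("/") for u in known}
    seen = set()
    fresh = []
    for url in candidates:
        url = url.rstrip("/")          # rtrim($value, "/")
        if url not in seen and url not in known:
            seen.add(url)
            fresh.append(url)
    # chunk into batches of batch_size, like array_chunk($links, $numm)
    return [fresh[i:i + batch_size] for i in range(0, len(fresh), batch_size)]
```

For example, `batch_new_links(["http://a/1/", "http://a/1", "http://a/2"], ["http://a/2"])` returns `[["http://a/1"]]`: the trailing-slash variant collapses into its duplicate, the already-known URL is dropped, and only the genuinely new link reaches the fetch queue.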
Help wanted: a Spring AOP-configured transaction won't roll back — how do I fix it?
spring 版本 4.1.7 代码如下: 表: CREATE TABLE `users` ( `id` int(11) unsigned NOT NULL AUTO_INCREMENT, `nick_name` varchar(100) DEFAULT NULL, `password` varchar(100) DEFAULT NULL, PRIMARY KEY (`id`) ) ENGINE=InnoDB AUTO_INCREMENT=5 DEFAULT CHARSET=utf8; INSERT INTO `users` VALUES ('1', 'Jennifer', 'Alice'); INSERT INTO `users` VALUES ('2', '爱', '克斯莱'); java 文件: 1) package com.maxwell.spring.jdbc.tx; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.stereotype.Service; import com.maxwell.spring.jdbc.vo.User; @Service public class UserAopService { @Autowired private UserDao dao; public UserAopService() {} /** * 此处的@Transactional注解把这个方法中的所有数据库操作当作事务进行处理 */ public void testTransactionManager() { String nn = dao.addUser(new User().setNickName("Jensen").setPassword("jensen")); int i = dao.getIdByNickName(nn); System.out.println(i); dao.setNickNameById(i, "Nimei"); } } 2) package com.maxwell.spring.jdbc.tx; import java.util.HashMap; import java.util.List; import java.util.Map; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.beans.factory.annotation.Qualifier; import org.springframework.jdbc.core.BeanPropertyRowMapper; import org.springframework.jdbc.core.RowMapper; import org.springframework.jdbc.core.namedparam.BeanPropertySqlParameterSource; import org.springframework.jdbc.core.namedparam.NamedParameterJdbcTemplate; import org.springframework.jdbc.core.namedparam.SqlParameterSource; import org.springframework.stereotype.Repository; import com.maxwell.spring.jdbc.vo.User; /** * 用于测试事务 * @author Techwork * */ @Repository public class UserDao { @Autowired @Qualifier("npJdbcTpl") private NamedParameterJdbcTemplate jdbcTpl; public String addUser(User user) { String sql = "INSERT INTO users (nick_name, password) VALUES (:nickName, :password)"; SqlParameterSource paramSource = new BeanPropertySqlParameterSource(user); jdbcTpl.update(sql, paramSource); return user.getNickName(); } public int 
getIdByNickName(String nickName) { String sql = "select id, nick_name, password from users where nick_name = :xxx"; Map<String, Object> paramMap = new HashMap<>(); paramMap.put("xxx", nickName); RowMapper<User> rowMapper = new BeanPropertyRowMapper<>(User.class); List<User> users = jdbcTpl.query(sql, paramMap, rowMapper); if(users == null || users.size() < 1) { throw new RuntimeException("无此用户:" + nickName); } return users.get(0).getId(); } public void setNickNameById(int id, String nickName) { if(nickName.length() < 10) throw new RuntimeException("性别错误"); String sql = "update users set nick_name = :n where id = :i"; Map<String, Object> paramMap = new HashMap<>(); paramMap.put("n", nickName); paramMap.put("i", id); jdbcTpl.update(sql, paramMap); } } 3) package com.maxwell.spring.jdbc.tx; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.stereotype.Service; import org.springframework.transaction.annotation.Isolation; import org.springframework.transaction.annotation.Transactional; import com.maxwell.spring.jdbc.vo.User; @Service public class UserService { @Autowired private UserDao dao; public UserService() {} /** * 此处的@Transactional注解把这个方法中的所有数据库操作当作事务进行处理 */ @Transactional public void testTransactionManager() { String nn = dao.addUser(new User().setNickName("Jensen").setPassword("jensen")); int i = dao.getIdByNickName(nn); System.out.println(i); dao.setNickNameById(i, "Nimei"); } /** * propagation: 指定事务的传播行为 * isolation: 指定事务的隔离级别 * noRollbackFor: 指定对哪些异常不回滚。通常取默认值。 * rollbackFor: 指定对哪些异常回滚。通常取默认值。 * readOnly: 只读事务。如果事务只读取数据,而不写数据的话,设置为true,有助于数据库引擎优化事务。 * timeout: 指定强制回滚事务(就算可能成功)之前,事务可以存在的时间,以防止事务占用数据库连接时间过长。 */ @Transactional(isolation = Isolation.READ_COMMITTED, noRollbackFor = { Exception.class }, rollbackFor = { RuntimeException.class }, readOnly = true, timeout = 3) public void testTransactionManager2() { String nn = dao.addUser(new User().setNickName("Jensen").setPassword("jensen")); int i = dao.getIdByNickName(nn); 
dao.setNickNameById(i, "哈哈"); } } 4) package com.maxwell.spring.jdbc; import org.junit.Test; import org.springframework.context.ApplicationContext; import org.springframework.context.support.ClassPathXmlApplicationContext; import com.maxwell.spring.jdbc.tx.UserAopService; /** * 测试AOP事务 * * @author Angrynut * */ public class AOPTransactionTest { private ApplicationContext ctx = null; private UserAopService service = null; { ctx = new ClassPathXmlApplicationContext("aoptx.xml"); service = ctx.getBean("userAopService", UserAopService.class); } /** * 1.测试AOP事务。 */ @Test public void testTransactionManager() { service.testTransactionManager(); } } spring 配置文件: <?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:util="http://www.springframework.org/schema/util" xmlns:tx="http://www.springframework.org/schema/tx" xmlns:context="http://www.springframework.org/schema/context" xmlns:aop="http://www.springframework.org/schema/aop" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-4.1.xsd http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop-4.1.xsd http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx-4.1.xsd http://www.springframework.org/schema/util http://www.springframework.org/schema/util/spring-util-4.1.xsd"> <context:component-scan base-package="com.maxwell.spring.jdbc"/> <!-- 数据源配置文件 --> <util:properties id="db" location="classpath:db.properties"/> <!-- 配置C3p0数据源 --> <bean id="dataSource" class="com.mchange.v2.c3p0.ComboPooledDataSource"> <property name="user" value="#{db['jdbc.user']}"/> <property name="password" value="#{db['jdbc.password']}"/> <property name="driverClass" 
value="#{db['jdbc.driverClass']}"/> <property name="jdbcUrl" value="#{db['jdbc.jdbcUrl']}"/> <property name="initialPoolSize" value="#{db['jdbc.initPoolSize']}"/> <property name="maxPoolSize" value="#{db['jdbc.maxPoolSize']}"/> </bean> <bean id="npJdbcTpl" class="org.springframework.jdbc.core.namedparam.NamedParameterJdbcTemplate"> <constructor-arg ref="dataSource"/> </bean> <!-- 配置事务管理器 --> <bean id="transactionManager" class="org.springframework.jdbc.datasource.DataSourceTransactionManager"> <property name="dataSource" ref="dataSource"/> </bean> <tx:advice id="txAdvice" transaction-manager="transactionManager"> <tx:attributes> <tx:method name="*" propagation="REQUIRED"/> </tx:attributes> </tx:advice> <aop:config> <aop:pointcut expression="execution(* com.maxwell.spring.jdbc.tx.UserService.*(..))" id="pc"/> <aop:advisor advice-ref="txAdvice" pointcut-ref="pc"/> </aop:config> </beans>
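One detail worth checking in the config above: the `<aop:pointcut>` expression only matches methods of `UserService`, while the failing test drives `UserAopService`, so `txAdvice` never wraps `UserAopService.testTransactionManager()` and nothing runs in a transaction to be rolled back. A hedged sketch of a wider pointcut (package and class names taken from the question; adjust the pattern to your layout):

```xml
<aop:config>
    <!-- match every *Service class in the tx package,
         so UserAopService.* is advised as well -->
    <aop:pointcut id="pc"
        expression="execution(* com.maxwell.spring.jdbc.tx.*Service.*(..))"/>
    <aop:advisor advice-ref="txAdvice" pointcut-ref="pc"/>
</aop:config>
```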
java.sql.SQLException: Io exception: Broken pipe — extremely urgent, please help!!
SSH程序 tomcat proxool连接池 配置如下: <proxool-config> <proxool> <alias>DBPool</alias> <driver-url>jdbc:oracle:thin:@(description=(address_list= (address=(host=*****)(protocol=tcp)(port=1521)) (address=(host=*****)(protocol=tcp)(port=1521)) (load_balance=yes)) (connect_data=(server=dedicated)(service_name=ORCL))) </driver-url> <driver-class>oracle.jdbc.driver.OracleDriver</driver-class> <driver-properties> <property name="user" value="****"/> <property name="password" value="****"/> </driver-properties> <house-keeping-sleep-time>90000</house-keeping-sleep-time> <maximum-new-connections>20</maximum-new-connections> <prototype-count>15</prototype-count> <maximum-connection-count>80</maximum-connection-count> <minimum-connection-count>10</minimum-connection-count> <house-keeping-test-sql>select 1 from dual</house-keeping-test-sql> <maximum-active-time>1200000</maximum-active-time> <trace>true</trace> <verbose>true</verbose> </proxool> </proxool-config> 上线运行一周左右 orcle数据库就倒 大家帮分析下啊 日志信息如下: [ERROR] 21 七月 11:16:43.338 下午 HouseKeeper [org.logicalcobwebs.proxool.DBPool] #0048 encountered errors during destruction: java.sql.SQLException: Io 异常: Broken pipe at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:112) at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:146) at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:255) at oracle.jdbc.driver.T4CConnection.logoff(T4CConnection.java:481) at oracle.jdbc.driver.PhysicalConnection.close(PhysicalConnection.java:1228) at org.logicalcobwebs.proxool.ProxyConnection.reallyClose(ProxyConnection.java:192) at org.logicalcobwebs.proxool.ConnectionPool.removeProxyConnection(ConnectionPool.java:423) at org.logicalcobwebs.proxool.HouseKeeper.sweep(HouseKeeper.java:90) at org.logicalcobwebs.proxool.HouseKeeperThread.run(HouseKeeperThread.java:39) [ERROR] 21 七月 11:16:43.347 下午 HouseKeeper [org.logicalcobwebs.proxool.DBPool] #0050 encountered errors during destruction: 
java.sql.SQLException: 无法从套接字读取更多的数据 at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:112) at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:146) at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:208) at oracle.jdbc.driver.T4CMAREngine.unmarshalUB1(T4CMAREngine.java:1123) at oracle.jdbc.driver.T4CMAREngine.unmarshalSB1(T4CMAREngine.java:1075) at oracle.jdbc.driver.T4C7Ocommoncall.receive(T4C7Ocommoncall.java:106) at oracle.jdbc.driver.T4CConnection.logoff(T4CConnection.java:465) at oracle.jdbc.driver.PhysicalConnection.close(PhysicalConnection.java:1228) at org.logicalcobwebs.proxool.ProxyConnection.reallyClose(ProxyConnection.java:192) at org.logicalcobwebs.proxool.ConnectionPool.removeProxyConnection(ConnectionPool.java:423) at org.logicalcobwebs.proxool.HouseKeeper.sweep(HouseKeeper.java:90) at org.logicalcobwebs.proxool.HouseKeeperThread.run(HouseKeeperThread.java:39) [ERROR] 21 七月 11:16:43.358 下午 HouseKeeper [org.logicalcobwebs.proxool.DBPool]
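Both stack traces show the Proxool housekeeper trying to close connections the Oracle side has already dropped ("Broken pipe" / "no more data from socket"), i.e. the pool is serving stale sockets. Besides the `house-keeping-test-sql` already configured, Proxool can also validate each connection right before handing it out, and retire connections after a bounded lifetime so they are recycled before the database or a firewall times them out. A sketch — property names are from the Proxool documentation, so verify they exist in your 0.9.x build:

```xml
<!-- validate a connection immediately before it is served -->
<test-before-use>true</test-before-use>
<!-- retire connections (here after 30 min) before the DB/firewall can drop them -->
<maximum-connection-lifetime>1800000</maximum-connection-lifetime>
```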
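Since the poster says the database cannot be changed and the hierarchy is already encoded in the code column itself (总公司 `01`, 子公司 `0101`, 子子公司 `010101`, ...), no parent-id column is needed: each node's parent code is its own code minus the last two digits. A minimal sketch of that idea — the `OrgNode` type, the two-digits-per-level assumption, and all names here are illustrative, not taken from the poster's actual schema:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical node type for illustration only.
class OrgNode
{
    public string Code { get; set; }      // e.g. "01", "0101", "010101"
    public string Name { get; set; }
    public List<OrgNode> Children { get; } = new List<OrgNode>();
}

static class OrgTreeBuilder
{
    // Builds a forest from flat (code, name) rows, assuming every level
    // appends exactly two digits to the parent's code ("01" -> "0101").
    public static List<OrgNode> Build(IEnumerable<(string Code, string Name)> rows)
    {
        // Index every row by its code so parents can be looked up in O(1).
        var byCode = rows.ToDictionary(
            r => r.Code,
            r => new OrgNode { Code = r.Code, Name = r.Name });

        var roots = new List<OrgNode>();
        foreach (var node in byCode.Values.OrderBy(n => n.Code))
        {
            // Parent code = current code with its last two digits removed.
            var parentCode = node.Code.Length > 2
                ? node.Code.Substring(0, node.Code.Length - 2)
                : null;

            if (parentCode != null && byCode.TryGetValue(parentCode, out var parent))
                parent.Children.Add(node);
            else
                roots.Add(node);   // no parent found: treat it as a root
        }
        return roots;
    }
}
```

The flat rows would typically come from a single `SELECT code, name FROM ...` read with a `SqlDataReader`; the tree is then assembled in memory in one pass, with no recursive SQL required. If some levels use a different code width, the `Substring` step would need to be adjusted to match that scheme.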