cv::FileStorage fails to read/write a yml file

Running the sample opencv\samples\cpp\tutorial_code\core\file_input_output\file_input_output.cpp that ships with OpenCV 2.4.9, 2.4.10 and 2.4.11:

```cpp
int main(int ac, char** av)
{
    if (ac != 2)
    {
        help(av);
        return 1;
    }

    string filename = av[1];
    { //write
        Mat R = Mat_<uchar>::eye(3, 3),
            T = Mat_<double>::zeros(3, 1);
        MyData m(1);

        FileStorage fs(filename, FileStorage::WRITE);

        fs << "iterationNr" << 100;
        // ..........
    }
```

In this code, `FileStorage fs(filename, FileStorage::WRITE);` does not produce a valid fs — the debugger reports fs as a bad pointer. The variable filename is taken from the command-line arguments and is correctly set to q.yml, but no q.yml file is ever produced. Checking the file state with fs.isOpened() returns 0. How do I read and write a yml file correctly?
My environment is Windows 10 + VS2010 + OpenCV 2.4.10/2.4.11/2.4.9. It feels as if my machine simply cannot use the cv::FileStorage class, although the C-style CvFileStorage struct reads yml files just fine.

2 answers

Check the file path, and whether you have write permission on the corresponding directory.
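Not the original answerer's code, but a minimal sketch of how to isolate that advice: open the file at an absolute path in a directory you know is writable, and bail out on isOpened() before writing anything (the path C:/temp/q.yml is only an illustration):

```cpp
#include <opencv2/core/core.hpp>
#include <iostream>
#include <string>

int main()
{
    // Hypothetical absolute path; pick a directory you can definitely write to.
    const std::string filename = "C:/temp/q.yml";

    cv::FileStorage fs(filename, cv::FileStorage::WRITE);
    if (!fs.isOpened())
    {
        // If this triggers, the problem is the path or permissions,
        // not the data being written afterwards.
        std::cerr << "Failed to open " << filename << " for writing" << std::endl;
        return 1;
    }

    fs << "iterationNr" << 100;                 // simple scalar node
    fs << "R" << cv::Mat::eye(3, 3, CV_8U);     // a matrix node
    fs.release();                               // flush and close the file
    return 0;
}
```

If this still fails under VS2010, it is also worth confirming that the Debug configuration links the debug OpenCV libraries (opencv_core2410d.lib etc.) and Release links the release ones; mixing the two is a common way to get crashes inside FileStorage even when the path is fine.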

OP, did you ever solve this? I've run into it too.

qq_23161489: Hello, did you ever solve this problem? I've run into the same thing. (replied nearly 3 years ago)
Other related questions
Undefined reference to 'cv::optflow::createOptFlow_DualTVL1()'

<div class="post-text" itemprop="text"> <p>I have successfully installed <code>gocv</code> package from <a href="https://github.com/hybridgroup/gocv" rel="nofollow noreferrer">https://github.com/hybridgroup/gocv</a> and I am trying to run C++ code which I wrapped in C library inside my Go project. But there seems to be the problem when I try to call <code>cv::optflow::createOptFlow_DualTVL1()</code> method. I get undefined reference error. I dont know why, since in <code>optflow.hpp</code> file there is a method called <code>createOptFlow_DualTVL1()</code></p> <p>Here is my <code>main.go</code> file:</p> <pre><code>package main /* #cgo LDFLAGS: -L/usr/local/lib -lopencv_core -lopencv_video -lopencv_videoio -lopencv_highgui -lopencv_tracking -lopencv_optflow #include "dense_flow.h" */ import ( "C" "gocv.io/x/gocv" ) func main(){ } </code></pre> <p><strong>NOTE:</strong> <code>dense_flow.h</code> is where I have declared my <code>CalculateT4VL1()</code> function which calls to <code>createOptFlow_DualTVL1()</code> inside <code>dense_flow.cpp</code> file.</p> <p>Operating System and version: Ubuntu 18.04</p> <p>OpenCV version used: 4.0.0</p> <p>GoCV version used: 0.18</p> <p>Go version: 1.12</p> </div>

A problem when writing a yaml file with OpenCV

I want to write the following into config.yaml:

```
AAA:
  - BBB:
      a: 1
      b: 2
```

My code is:

```cpp
#include "opencv2/opencv.hpp"
using namespace cv;

int main(int, char** argv)
{
    FileStorage fw("./config.yaml", FileStorage::WRITE);
    fw << "AAA" << "[" << "{";
    fw << "BBB" << "{";
    fw << "a" << 1;
    fw << "b" << 2;
    fw << "}" << "}" << "]";
    return 0;
}
```

The resulting config.yaml is:

```
%YAML:1.0
AAA:
   -
      BBB:
         a: 1
         b: 2
```

The "-" and "BBB" are not on the same line. Is there a way to fix this? Also, I do not want "{ }" to appear in config.yaml, so please don't suggest replacing "{" with "{:". Thanks!
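Not an answer from the thread, but it may help to note that the emitted file, with the dash on its own line, still describes the same YAML structure (a sequence whose element is a map containing BBB). A minimal sketch that reads the generated config.yaml back and prints the nested values, assuming the writer code above has already run:

```cpp
#include "opencv2/opencv.hpp"
#include <iostream>
using namespace cv;

int main()
{
    FileStorage fr("./config.yaml", FileStorage::READ);
    if (!fr.isOpened())
        return 1;

    // "AAA" is a sequence; each element here is a map with one key "BBB".
    FileNode seq = fr["AAA"];
    for (FileNodeIterator it = seq.begin(); it != seq.end(); ++it)
    {
        FileNode bbb = (*it)["BBB"];
        std::cout << "a = " << (int)bbb["a"]
                  << ", b = " << (int)bbb["b"] << std::endl;
    }
    return 0;
}
```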

Content backed up with Bacula cannot be restored; the SD thread status shows the following

```
==== Device status: Device "FileStorage" (/tmp/backup) is not open. == ==== Used Volume status: ==== ====
```

How to pass contentType info from Burzum/FileStorage through to KnpLabs/Gaufrette

<div class="post-text" itemprop="text"> <p>CakePHP 3.4 application using the <a href="https://github.com/burzum/cakephp-file-storage" rel="nofollow noreferrer">Burzum/FileStorage</a> plugin (which uses <a href="https://github.com/knplabs/Gaufrette" rel="nofollow noreferrer">KnpLabs/Gaufrette</a>) to manage uploads to AWS S3. Unfortunately, I was running into the issue of MS Office files (docx, xlsx, etc) being detected as ZIP files.</p> <p>I altered my code to use finfo first and if it detects zip, look at the extension to see if it maybe is an office file. Now I can pass this correct mimetype on to the file_storage table by doing a patchEntity. So far so good.</p> <p>However, the FileStorage plugin calls KnpLabs/Gaufrette to actually send the file to S3, but it doesn't seem to send along the mimetype/contenttype. So Gaufrette then does its own little finfo trick in the <a href="https://github.com/KnpLabs/Gaufrette/blob/master/src/Gaufrette/Adapter/AwsS3.php#L166" rel="nofollow noreferrer">AwsS3 Adapter</a>, writing a metadata field 'Content-Type: application/zip' to the item on S3, causing the Office file to be downloaded as a zip file...</p> <p>Is there any way to set the correct content type in the options of the AwsS3 adapter?</p> <p>thanks!</p> </div>

CakePHP 3 FileStorage plugin: triggering the ImageProcessingListener to delete versions

<div class="post-text" itemprop="text"> <p><a href="https://github.com/burzum/cakephp-file-storage/blob/3.0/docs/Home.md" rel="nofollow">https://github.com/burzum/cakephp-file-storage/blob/3.0/docs/Home.md</a></p> <p>I have two tables</p> <p>ProductStylesTable.php and ProductStyleImagesTable.php which extends ImageStorageTable which is connect to my FileStorage Table in my sql that was created with the migration tool.</p> <p>Upload works fine.</p> <pre><code>// ProductStylesController.php public function upload($product_style_id = null) { $this-&gt;ProductStyles-&gt;ProductStyleImages-&gt;upload( $product_style_id, $entity) //ProductStyleImagesTable.php class ProductStyleImagesTable extends ImageStorageTable { //initialize code... public function upload($product_style_id, $entity) { $entity = $this-&gt;patchEntity($entity, [ 'adapter' =&gt; 'Local', 'model' =&gt; 'ProductStyles', 'foreign_key' =&gt; $product_style_id, ]); return $this-&gt;save($entity); } </code></pre> <p>Awesome, ProductStyleImages is listening for the upload method and places it in the appropriate folder. I was hoping this would work the same for delete.</p> <p>So I called</p> <pre><code> //ProductStylesController.php $this-&gt;ProductStyles-&gt;ProductStyleImages-&gt;delete($fileStorage_id) //In my ProductStyleImagesTable.php public function delete($fileStorageID = null) { //deleting the row, hoping for ImageProcessingListener to pick it up? $FileStorageTable = TableRegistry::get('FileStorage'); $query = $FileStorageTable-&gt;query(); $query-&gt;delete() -&gt;where(['id' =&gt; $fileStorageID ]) -&gt;execute(); } </code></pre> <p>I get an error that delete must be compatible with the interface. So to avoid this conflict I renamed my function to 'removeImage'. It works in removing the row but the Listener isn't picking it up. I looked in ImageStorageTable.php and FileStorageTable.php. I see the afterDelete methods. But i'm unsure how to trigger them since i'm unsure how to configure my delete methods to match the interface.</p> </div>

error: no match for 'operator[]'

A very simple program that only uses functions. In the definitions I have int a[500],e[500]; and then every attempt below to operate on the a array reports an error (a[p]=0; a[p]++; etc.). The error message is: error: no match for 'operator[]' (operand types are 'QCoreApplication' and 'int') a[p]++; ^ What is strange is that the e array, declared in exactly the same way, gives no error (presumably it would too once the a array is changed). Here is the main function part: #include #include"opencv2/imgproc/imgproc.hpp" #include"opencv2/highgui/highgui.hpp" #include #include //#include #include #include #include #include #include #include using namespace std; using namespace cv; Mat marker_mask; Mat markers; Mat img0,img,img1,img2,img_propo_measure,img_gray,wshed,roi_circle,roi_circle2,res,marker_circle,marker_circle2; Point prev_pt(-1,-1),pt2(-1,-1),prev_pt2(-1,-1),pt1(-1,-1),prev_pt1(-1,-1),center_of_circle(-1,-1); int a[500],e[500]; double d[500],f[500]; int b=1,pixels=1; double bound_num=0,comp_count; double c=0,radius,roi_w,roi_h; vectorx; vector > storage; vector contours; void img_propo_measure_fun(int event,int,int,int,void*); void on_mouse(int event,int,int,int,void*); void on_mouse2(int event,int,int,int,void*); void sortation(); void convertion_workoutG(); void output(); int main(int argc, char *argv[]) { QCoreApplication a(argc, argv); if(argc==2&&(img0=imread(argv[1],1))!=0) { } else { img0=imread("e:/codes/sample2.jpg"); } string filemname="811.jpg"; RNG rng=RNG(-1); namedWindow("image",1); namedWindow("cut",1); namedWindow("watershed transform",1); namedWindow("proportion",1); img=img0.clone(); img_propo_measure=img0.clone(); imshow("Proportion",img_propo_measure); imshow("cut",img); img1=img.clone(); setMouseCallback("Proportion",img_propo_measure_fun,0); setMouseCallback("cut",on_mouse2,0); setMouseCallback("image",on_mouse,0); for(;;) { char c=waitKey(); if(c==27||c=='b') break; else if(c=='r') { //marker_mask=Mat::zeros(); img1.copyTo(img2); imshow("image",img2); } else if(c=='w'||c==' ') { comp_count=1; findContours(marker_mask,storage,&contours,CV_RETR_LIST,CV_CHAIN_APPROX_SIMPLE); //markers=Mat::zeros(); for(int i;i<contours.size();i++,comp_count++) //contours declaration is changed, trouble may be called,the next contours should be write as? 
{ Scalar color=Scalar(rng.uniform(0,255),rng.uniform(0,255),rng.uniform(0,255)); drawContours(markers,contours,i,color,1,8,vector<Vec4i>(),0,Point()); } Mat color_tab(1,comp_count,CV_8UC3); for(int i=0;i<comp_count;i++) { uchar* ptr=color_tab.ptr<uchar>(i*3); ptr[0]=(uchar)(RNG::next(&rng)%180+50);//重置并返回32u的随机值 ptr[1]=(uchar)(RNG::next(&rng)%180+50); ptr[2]=(uchar)(RNG::next(&rng)%180+50); } double t=(double)getTickCount(); watered(res,markers); t=(double)getTickCount()-t; marker_circle=Mat::zeros(markers.size(),CV_32S); marker_circle2=Mat::zeros(markers.size(),CV_32S); bitwise_and(markers,markers,marker_circle,roi_circle); bitwise_and(markers,markers,marker_circle,roi_circle); for(int p=0;p<500;p++) { a[p]=0; e[p]=0; } int k=0; bound_num=0; for(int i=0;i<marker_circle2.rows;i++) for(int j=0;i<marker_circle2.cols-1;j++) { int n=0; int pres=CV_MAT_ELEM(marker_circle2,int,i,j); int next=CV_MAT_ELEM(marker_circle2,int,i,j+1); int sub=abs(pres-next); if(pres*next==0&&sub!=0) { for(n=0;n<k+1;n++) if(e[n]==sub) break; if(n==k+1) { e[k]=sub; bound_num++; k++; } } } for(int i=0;i<marker_circle.rows;i++) for(int j=0;j<marker_circle.cols;j++) { int idx=CV_MAT_ELEM(marker_circle,int,i,j); for(int p=1;p<comp_count;p++) { if((idx-1)==b++) { for(int boundary=0;boundary<k+1;boundary++) { if(e[boundary]==(b+1)) a[p]++; } a[p]++; break; } } b=1; uchar* dst=&CV_MAT_ELEM(whed,uchar,i,j*3); if(idx==-1) dst[0]=dst[1]=dst[2]=(uchar)255; else if(idx<=0||idx>comp_count) dst[0]=dst[1]=dst[2]=(uchar)0; else { uchar* ptr=color_tab.ptr<uchar>(idx-1)*3; dst[0]=ptr[0]; dst[1]=ptr[1]; dst[2]=ptr[2]; } } addWeighted(wshed,0.5,img_gray,0.5,0,wshed); imshow("watershed transform",wshed); } } for(int i=1;i<comp_count;i++) { for(int j=i;j<comp_count;j++) { if(a[i]<a[j]) { int temp=a[i]; a[i]=a[j]; a[j]=temp; } } } convertion_workoutG(); sortation(); output(); waitKey(0); return a.exec(); }
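Not from the thread, but the error message itself names the likely culprit: inside main, the line `QCoreApplication a(argc, argv);` declares a local object called `a` that shadows the global `int a[500]`, so `a[p]` tries to call `QCoreApplication::operator[]`, which does not exist, while `e` is not shadowed and still works. A minimal sketch of the same effect, with the Qt class replaced by a hypothetical empty struct so it stands alone:

```cpp
#include <iostream>

int a[500];          // global array, as in the question

struct App { };      // stands in for QCoreApplication (hypothetical)

int main()
{
    App a;           // local object named 'a' shadows the global array
    // a[0] = 1;     // error: no match for 'operator[]' (operand types are 'App' and 'int')

    ::a[0] = 1;      // explicit scope resolution reaches the global array again,
    std::cout << ::a[0] << std::endl;   // but renaming one of the two is the cleaner fix
    return 0;
}
```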

Backup with a Bacula server fails: the client is on Windows, the server on Linux

```
06-7月 19:29 saas-dir JobId 101: Start Backup JobId 101, Job=FullBackup.2015-07-06_19.29.16_24
06-7月 19:29 saas-dir JobId 101: Using Device "FileStorage" to write.
06-7月 19:19 saas-fd JobId 101: Warning: lib/bsock.c:132 Could not connect to Storage daemon on 192.168.8.170:9103. ERR=\B2\D9\D7\F7\B3ɹ\A6\CD\EA\B3ɡ\A3 Retrying ...
06-7月 19:35 saas-fd JobId 101: Warning: lib/bsock.c:132 Could not connect to Storage daemon on 192.168.8.170:9103. ERR=\B2\D9\D7\F7\B3ɹ\A6\CD\EA\B3ɡ\A3 Retrying ...
06-7月 19:48 saas-fd JobId 101: Fatal error: lib/bsock.c:138 Unable to connect to Storage daemon on 192.168.8.170:9103. ERR=\B2\D9\D7\F7\B3ɹ\A6\CD\EA\B3ɡ\A3
06-7月 19:48 saas-fd JobId 101: Fatal error: Failed to connect to Storage daemon: 192.168.8.170:9103
06-7月 19:59 saas-dir JobId 101: Fatal error: Bad response to Storage command: wanted 2000 OK storage , got 2902 Bad storage
06-7月 19:59 saas-dir JobId 101: Error: Bacula saas-dir 7.0.5 (28Jul14):
  Build OS:               x86_64-unknown-linux-gnu redhat (Core)
  JobId:                  101
  Job:                    FullBackup.2015-07-06_19.29.16_24
  Backup Level:           Full
  Client:                 "saas-fd" 5.2.10 (28Jun12) Microsoft Windows Home ServerEnterprise Edition Service Pack 2 (build 3790),Cross-compile,Win32
  FileSet:                "Full Set" 2015-07-06 11:16:12
  Pool:                   "Default" (From Job resource)
  Catalog:                "MyCatalog" (From Client resource)
  Storage:                "saas-sd" (From command line)
  Scheduled time:         06-7月-2015 19:29:16
  Start time:             06-7月-2015 19:29:18
  End time:               06-7月-2015 19:59:22
  Elapsed time:           30 mins 4 secs
  Priority:               10
  FD Files Written:       0
  SD Files Written:       0
  FD Bytes Written:       0 (0 B)
  SD Bytes Written:       0 (0 B)
  Rate:                   0.0 KB/s
  Software Compression:   None
  VSS:                    no
  Encryption:             no
  Accurate:               no
  Volume name(s):
  Volume Session Id:      3
  Volume Session Time:    1436168494
  Last Volume Bytes:      20,672 (20.67 KB)
  Non-fatal FD errors:    2
  SD Errors:              0
  FD termination status:  Error
  SD termination status:  Waiting on FD
  Termination:            *** Backup Error ***
```

OpenCV program crashes at runtime in Release mode, please advise ORZ

Multi-camera parameter calibration, VS2015 + OpenCV 2.4.13. The code crashes as soon as it reaches the part that builds the match relationships. I found roughly where the problem is but cannot solve it; advice would be appreciated. The error reads: Unhandled exception at 0x00007FF8DCE3D3D8 (ucrtbase.dll) in calib_stitch.exe: An invalid parameter was passed to a function that considers invalid parameters fatal. The code is below: #include <iostream> #include <fstream> #include <string> #include "opencv2/core/core.hpp" #include "opencv2/opencv_modules.hpp" #include "opencv2/highgui/highgui.hpp" #include "opencv2/stitching/detail/autocalib.hpp" #include "opencv2/stitching/detail/blenders.hpp" #include "opencv2/stitching/detail/camera.hpp" #include "opencv2/stitching/detail/exposure_compensate.hpp" #include "opencv2/stitching/detail/matchers.hpp" #include "opencv2/stitching/detail/motion_estimators.hpp" #include "opencv2/stitching/detail/seam_finders.hpp" #include "opencv2/stitching/detail/util.hpp" #include "opencv2/stitching/detail/warpers.hpp" #include "opencv2/stitching/warpers.hpp" #include "opencv2/features2d/features2d.hpp" #include "opencv2/nonfree/nonfree.hpp" #include <opencv2/calib3d/calib3d.hpp> using namespace std; using namespace cv; using namespace cv::detail; // Default command line args bool preview = false; bool try_gpu = false; double work_megapix = -0.6; // scaling parameter double seam_megapix = 0.1; double compose_megapix = -1; float conf_thresh = 1.f; string features_type = "sift"; //orb surf sift string ba_cost_func = "reproj"; //reproj ray string ba_refine_mask = "xxllx"; bool do_wave_correct = true; WaveCorrectKind wave_correct = detail::WAVE_CORRECT_HORIZ; bool save_graph = true; std::string save_graph_to; string warp_type = "spherical"; //spherical cylindrical plane int expos_comp_type = ExposureCompensator::GAIN_BLOCKS; //GAIN,OR NO float match_conf = 0.3f; string seam_find_type = "gc_color"; //no voronoi gc_color gc_colorgrad dp_color dp_colorgrad int blend_type = Blender::MULTI_BAND; // Blender::FEATHER Blender::MULTI_BAND float blend_strength = 5;// 0 means off, default 5 string result_name = "result.jpg"; void detection(const vector<string> imagelist, vector<vector<Point2f>>& ransac_image_points_seq) { if (imagelist.size() % 2 != 0) { cout << "Error: the image list contains odd (non-even) number of elements\n"; return; } bool displayCorners = true;//true; const int maxScale = 2; const float squareSize = 1.f; // Set this to your actual square size // ARRAY AND VECTOR STORAGE: Size boardSize = Size(11, 8); vector<vector<Point2f>> imagePoints[2]; vector<vector<Point3f> > objectPoints; Size imageSize; int i, j, k, nimages = (int)imagelist.size() / 2; imagePoints[0].resize(nimages); imagePoints[1].resize(nimages); vector<string> goodImageList; for (i = j = 0; i < nimages; i++) { for (k = 0; k < 2; k++) { const string& filename = imagelist[i * 2 + k]; Mat img = imread(filename, 0); if (img.empty()) break; if (imageSize == Size()) imageSize = img.size(); else if (img.size() != imageSize) { cout << "The image " << filename << " has the size different from the first image size. Skipping the pair\n"; break; } bool found = false; vector<Point2f>& corners = imagePoints[k][j]; for (int scale = 1; scale <= maxScale; scale++) { Mat timg; if (scale == 1) timg = img; else resize(img, timg, Size(), scale, scale); found = findChessboardCorners(timg, boardSize, corners, CV_CALIB_CB_ADAPTIVE_THRESH | CV_CALIB_CB_NORMALIZE_IMAGE); if (found) { if (scale > 1) { Mat cornersMat(corners); cornersMat *= 1. / scale; } break; } } if (displayCorners) { //cout << filename << endl; Mat cimg, cimg1; cvtColor(img, cimg, COLOR_GRAY2BGR); drawChessboardCorners(cimg, boardSize, corners, found); double sf = 640. 
/ MAX(img.rows, img.cols); resize(cimg, cimg1, Size(), sf, sf); namedWindow("corners", 0); imshow("corners", cimg1); char c = (char)waitKey(1); if (c == 27 || c == 'q' || c == 'Q') //Allow ESC to quit exit(-1); } else putchar('.'); if (!found) break; cornerSubPix(img, corners, Size(11, 11), Size(-1, -1), TermCriteria(CV_TERMCRIT_ITER + CV_TERMCRIT_EPS, 30, 0.01)); /* 亚像素精确化 */ //find4QuadCornerSubpix(img, corners, Size(5, 5)); //对粗提取的角点进行精确化 } if (k == 2) { goodImageList.push_back(imagelist[i * 2]); goodImageList.push_back(imagelist[i * 2 + 1]); j++; } } cout << j << " pairs have been successfully detected.\n"; nimages = j; if (nimages < 2) { cout << "Error: too little pairs to run the calibration\n"; return; } imagePoints[0].resize(nimages); imagePoints[1].resize(nimages); vector<vector<Point2f>> image_points_seq; for (int i = 0; i < 2; i++) { vector<Point2f> buf; for (int j = 0; j < imagePoints[i].size(); j++) { for (int k = 0; k < imagePoints[i][j].size(); k++) { buf.push_back(imagePoints[i][j][k]); } } image_points_seq.push_back(buf); } //RANSAC cout << image_points_seq[0].size() << endl; cout << image_points_seq[1].size() << endl; vector<uchar> mask; Mat h = findHomography(image_points_seq[0], image_points_seq[1], mask, CV_FM_RANSAC); vector<Point2f> point1, point2; for (int i = 0; i < image_points_seq[0].size(); i++) { //if (mask[i] == 1) { point1.push_back(image_points_seq[0][i]); point2.push_back(image_points_seq[1][i]); } } ransac_image_points_seq.push_back(point1); ransac_image_points_seq.push_back(point2); //cout << imagePoints[0].size() << endl; //cout << imagePoints[1].size() << endl; //return imagePoints; } int main(int argc, char* argv[]) { int64 app_start_time = getTickCount(); string xml_name = "144-146-147-1481.yaml"; vector<vector<string>> img_names; vector<vector<string>> names; char file_name[256]; int num_pairs = 3; int nums_pairs_count[4] = { 23,23,20 }; for (int i =0; i <= num_pairs; i++) { vector<string> temp; for (int j = 1; j <= nums_pairs_count[i]; j++) { sprintf(file_name, "1234/%d/1/(%d).jpg", i, j); temp.push_back(file_name); sprintf(file_name, "1234/%d/2/(%d).jpg", i, j); temp.push_back(file_name); } names.push_back(temp); } //棋盘格检测 vector<vector<Point2f>> double_image_points_seq; int match_num[4][4] = {0}; int match_start[4][4][2] = {0}; //vector<vector<Point2f>> ransac_image_points_seq; //detection(names[0], ransac_image_points_seq); //match_num[0][1] = ransac_image_points_seq[0].size(); //match_num[1][0] = ransac_image_points_seq[0].size(); //match_start[0][1] = 0; //match_start[1][0] = 0; //match_num.push_back(ransac_image_points_seq[0].size()); //cout << ransac_image_points_seq[0].size() << endl; //cout << ransac_image_points_seq[1].size() << endl; for (int i = 0; i < num_pairs; i++) { vector<vector<Point2f>> ransac_image_points_seq; detection(names[i], ransac_image_points_seq); if (i != 0) { match_num[i][i + 1] = ransac_image_points_seq[0].size(); match_num[i+1][i] = ransac_image_points_seq[0].size(); match_start[i][i + 1][0] = match_num[i - 1][i]; match_start[i][i + 1][1] = 0; match_start[i+1][i][0] = 0; match_start[i+1][i][1] = match_num[i - 1][i]; for (int j = 0; j < ransac_image_points_seq[0].size(); j++) { double_image_points_seq[double_image_points_seq.size() - 1].push_back(ransac_image_points_seq[0][j]); } double_image_points_seq.push_back(ransac_image_points_seq[1]); } else { double_image_points_seq.push_back(ransac_image_points_seq[0]); double_image_points_seq.push_back(ransac_image_points_seq[1]); match_num[0][1] = 
ransac_image_points_seq[0].size(); match_num[1][0] = ransac_image_points_seq[0].size(); match_start[0][1][0] = 0; match_start[0][1][1] = 0; match_start[1][0][0] = 0; match_start[1][0][1] = 0; } } //特征点 vector<ImageFeatures> features(num_pairs + 1); for (int i = 0; i <= num_pairs; i++) { vector<KeyPoint> keypoints; for (int j = 0; j < double_image_points_seq[i].size(); j++) { KeyPoint point; point.pt = double_image_points_seq[i][j]; keypoints.push_back(point); } features[i].keypoints = keypoints; features[i].img_size = Size(2560, 1440); features[i].img_idx = i; } //匹配关系 vector<MatchesInfo> pairwise_matches; for (int i = 0; i <= num_pairs; i++) { for (int j = 0; j <= num_pairs; j++) { MatchesInfo matches_info; if(j==i+1 || j==i-1) { vector<DMatch> match(match_num[i][j]); vector<uchar> mask(match_num[i][j]); for (int n = 0; n < match_num[i][j]; n++) { match[n].queryIdx = match_start[i][j][0] + n; match[n].trainIdx = match_start[i][j][1] + n; mask[n] = 1; } matches_info.src_img_idx = i; matches_info.dst_img_idx = j; matches_info.matches = match; //info.inliers_mask = inliers_mask; //info.num_inliers = match_num[i][j]; //vector<Point2f> pts_src, pts_dst; Mat src_points(1, static_cast<int>(matches_info.matches.size()), CV_32FC2); Mat dst_points(1, static_cast<int>(matches_info.matches.size()), CV_32FC2); for (int n = 0; n < match_num[i][j]; n++) { const DMatch& m = matches_info.matches[n]; Point2f p = features[i].keypoints[m.queryIdx].pt; p.x -= features[i].img_size.width * 0.5f; p.y -= features[i].img_size.height * 0.5f; src_points.at<Point2f>(0, static_cast<int>(n)) = p; p = features[j].keypoints[m.trainIdx].pt; p.x -= features[j].img_size.width * 0.5f; p.y -= features[j].img_size.height * 0.5f; dst_points.at<Point2f>(0, static_cast<int>(n)) = p; //pts_src.push_back(features[i].keypoints[match[n].queryIdx].pt); //pts_dst.push_back(features[j].keypoints[match[n].trainIdx].pt); } //vector<uchar> mask; matches_info.H = findHomography(src_points, dst_points, matches_info.inliers_mask,CV_FM_RANSAC); //matches_info.H = h.clone(); matches_info.num_inliers = 0; for (size_t i = 0; i < matches_info.inliers_mask.size(); ++i) if (matches_info.inliers_mask[i]) matches_info.num_inliers++; //info.confidence = 2; matches_info.confidence = matches_info.num_inliers / (8 + 0.3 * matches_info.matches.size()); // Set zero confidence to remove matches between too close images, as they don't provide // additional information anyway. The threshold was set experimentally. matches_info.confidence = matches_info.confidence > 3. ? 0. 
: matches_info.confidence; //// Construct point-point correspondences for inliers only src_points.create(1, matches_info.num_inliers, CV_32FC2); dst_points.create(1, matches_info.num_inliers, CV_32FC2); int inlier_idx = 0; for (size_t n = 0; n < matches_info.matches.size(); ++n) { if (!matches_info.inliers_mask[n]) continue; const DMatch& m = matches_info.matches[n]; Point2f p = features[i].keypoints[m.queryIdx].pt; p.x -= features[i].img_size.width * 0.5f; p.y -= features[i].img_size.height * 0.5f; src_points.at<Point2f>(0, inlier_idx) = p; p = features[j].keypoints[m.trainIdx].pt; p.x -= features[j].img_size.width * 0.5f; p.y -= features[j].img_size.height * 0.5f; dst_points.at<Point2f>(0, inlier_idx) = p; inlier_idx++; } // Rerun motion estimation on inliers only matches_info.H = findHomography(src_points, dst_points, CV_RANSAC); } else { matches_info.src_img_idx = -1; matches_info.dst_img_idx = -1; } pairwise_matches.push_back(matches_info);//发现程序崩在哪一行了 } } cout << pairwise_matches.size() << endl; /*Mat img1 = imread(img_names[0], 1); Mat img2 = imread(img_names[1], 1); Mat out1, out2, out; drawKeypoints(img1, features[0].keypoints, out1); drawKeypoints(img1, features[0].keypoints, out2); drawMatches(img1, features[0].keypoints, img2, features[1].keypoints, pairwise_matches[0].matches, out); cv::namedWindow("out1", 0); cv::imshow("out1", out); cv::namedWindow("out2", 0); cv::imshow("out2", out); cv::namedWindow("out", 0); cv::imshow("out", out); cv::waitKey();*/ //for(int i=0; i<nu) HomographyBasedEstimator estimator; vector<CameraParams> cameras; estimator(features, pairwise_matches, cameras); for (size_t i = 0; i < cameras.size(); ++i) { Mat R; cameras[i].R.convertTo(R, CV_32F); cameras[i].R = R; //cout << "Initial intrinsics #" << indices[i] + 1 << ":\n" << cameras[i].K() << endl; } Mat K1(Matx33d( 1.2755404529239545e+03, 0., 1.3099971348805052e+03, 0., 1.2737998060528048e+03, 8.0764915313791903e+02, 0., 0., 1. )); Mat K2(Matx33d( 1.2832823446505638e+03, 0., 1.2250954954648896e+03, 0., 1.2831721912770793e+03, 7.1743301498758751e+02, 0., 0., 1. )); Mat K3(Matx33d( 1.2840711959594287e+03, 0., 1.2473666273838244e+03, 0., 1.2840499404560594e+03, 7.9051574509733359e+02, 0., 0., 1.)); Mat K4(Matx33d( 1.2865853945042952e+03, 0., 1.1876049192856492e+03, 0., 1.2869927339670007e+03, 6.2306976561458930e+02, 0., 0., 1. 
)); Mat K[4]; K[0] = K1.clone(); K[1] = K2.clone(); K[2] = K3.clone(); K[3] = K4.clone(); for (size_t i = 0; i < cameras.size(); ++i) { K[i].convertTo(K[i], CV_32F); } for (size_t i = 0; i < cameras.size(); ++i) { Mat R; cameras[i].R.convertTo(R, CV_32F); cameras[i].R = R; cameras[i].focal = 0.5*(K[i].at<float>(0, 0)+ K[i].at<float>(1, 1)); // Focal length cameras[i].ppx = K[i].at<float>(0,2); // Principal point X cameras[i].ppy = K[i].at<float>(1,2); ; // Principal point Y cout << cameras[i].K() << endl; //cout << "Initial intrinsics #" << indices[i] + 1 << ":\n" << cameras[i].K() << endl; } Ptr<detail::BundleAdjusterBase> adjuster; if (ba_cost_func == "reproj") adjuster = new detail::BundleAdjusterReproj(); else if (ba_cost_func == "ray") adjuster = new detail::BundleAdjusterRay(); else { cout << "Unknown bundle adjustment cost function: '" << ba_cost_func << "'.\n"; return -1; } adjuster->setConfThresh(conf_thresh); Mat_<uchar> refine_mask = Mat::zeros(3, 3, CV_8U); if (ba_refine_mask[0] == 'x') refine_mask(0, 0) = 1; if (ba_refine_mask[1] == 'x') refine_mask(0, 1) = 1; if (ba_refine_mask[2] == 'x') refine_mask(0, 2) = 1; if (ba_refine_mask[3] == 'x') refine_mask(1, 1) = 1; if (ba_refine_mask[4] == 'x') refine_mask(1, 2) = 1; adjuster->setRefinementMask(refine_mask); for (int i = 0; i < features.size(); i++) { features[i].descriptors = Mat(); } (*adjuster)(features, pairwise_matches, cameras); cout << "camera number: " << cameras.size() << endl; cv::FileStorage fs(xml_name, cv::FileStorage::WRITE); int num = cameras.size(); fs << "CameraNumber" << num; //char file_name[256]; for (int i = 0; i<cameras.size(); i++) { sprintf(file_name, "Focal_Camera%d", i); fs << file_name << cameras[i].focal; sprintf(file_name, "ppx_Camera%d", i); fs << file_name << cameras[i].ppx; sprintf(file_name, "ppy_Camera%d", i); fs << file_name << cameras[i].ppy; sprintf(file_name, "K_Camera%d", i); fs << file_name << cameras[i].K(); sprintf(file_name, "R_Camera%d", i); fs << file_name << cameras[i].R; } //fs << "indices" << indices; fs.release(); return 0; } ![图片说明](https://img-ask.csdn.net/upload/201904/12/1555002609_315025.png) ![图片说明](https://img-ask.csdn.net/upload/201904/12/1555002619_770672.png)

How do I save images with Cakephp-File-Storage?

<div class="post-text" itemprop="text"> <p><strong>UPDATE</strong></p> <p>So I added some logs to the action upload in ProductsController and the method upload in MediasTable to find out what is happening. The entity from ProductsController <code>this-&gt;Products-&gt;Medias-&gt;newEntity()</code> was pass to MediasTable but it wasn't save.</p> <p><a href="https://i.stack.imgur.com/bHsRH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bHsRH.png" alt="enter image description here"></a></p> <p>It is necessary to upload the file to save the data in the db? Like if all the data is ok but the file is no present the event will be reject the data and do nothing in the db?</p> <hr> <p>I'm using cakephp 3.1 with the file-storage plugin. I'm following the quickstart guide from the docs: <a href="https://github.com/burzum/cakephp-file-storage/blob/1.0/docs/Tutorials/Quick-Start.md" rel="nofollow noreferrer">Quick-Start</a> but I don't understand some parts and doesn't upload, insert in database neither make thumbnails.</p> <p>This is my database:</p> <pre><code>CREATE TABLE products ( id INT AUTO_INCREMENT PRIMARY KEY, product_name VARCHAR(255) NOT NULL, quantity INT NOT NULL, sold INT NOT NULL, description VARCHAR(1000), price DECIMAL(7,2) NOT NULL, old_price DECIMAL(7,2) NOT NULL, visited INT NOT NULL, status INT NOT NULL, created DATETIME, modified DATETIME ); CREATE TABLE media_types ( id INT AUTO_INCREMENT PRIMARY KEY, name_media_type VARCHAR(255) NOT NULL, created DATETIME, modified DATETIME ); CREATE TABLE medias ( id INT AUTO_INCREMENT PRIMARY KEY, media_type_id INT NOT NULL, product_id INT NOT NULL, path VARCHAR(255) NOT NULL, created DATETIME, modified DATETIME, FOREIGN KEY media_type_key (media_type_id) REFERENCES media_types(id), FOREIGN KEY product_key (product_id) REFERENCES products(id) ); </code></pre> <p>MediasTable:</p> <pre><code>... use Burzum\FileStorage\Model\Table\ImageStorageTable; class MediasTable extends ImageStorageTable { public function initialize(array $config) { parent::initialize($config); $this-&gt;table('medias'); $this-&gt;displayField('id'); $this-&gt;primaryKey('id'); $this-&gt;addBehavior('Timestamp'); $this-&gt;belongsTo('MediaTypes', [ 'foreignKey' =&gt; 'media_type_id', 'joinType' =&gt; 'INNER' ]); $this-&gt;belongsTo('Products', [ 'foreignKey' =&gt; 'product_id', 'joinType' =&gt; 'INNER' ]); } ... public function upload($productId, $entity) { $media = $this-&gt;patchEntity($entity, [ 'adapter' =&gt; 'Local', 'model' =&gt; 'Media', 'foreign_key' =&gt; $productId ]); Log::write('debug', $media); return $this-&gt;save($media); } } </code></pre> <p>ProductsTable:</p> <pre><code>class ProductsTable extends Table { public function initialize(array $config) { parent::initialize($config); $this-&gt;table('products'); $this-&gt;displayField('id'); $this-&gt;primaryKey('id'); $this-&gt;addBehavior('Timestamp'); $this-&gt;hasMany('Medias', [ 'className' =&gt; 'Medias', 'foreignKey' =&gt; 'foreign_key', 'conditions' =&gt; [ 'Medias.model' =&gt; 'Media' ] ]); } ... 
} </code></pre> <p>ProductsController:</p> <pre><code>class ProductsController extends AppController { public function upload($productId = null) { $productId = 2; $entity = $this-&gt;Products-&gt;Medias-&gt;newEntity(); if ($this-&gt;request-&gt;is(['post', 'put'])) { $entity = $this-&gt;Products-&gt;Medias-&gt;patchEntity( $entity, $this-&gt;request-&gt;data ); if ($this-&gt;Products-&gt;Medias-&gt;upload($productId, $entity)) { $this-&gt;Flash-&gt;set(__('Upload successful!')); } } $this-&gt;set('productImage', $entity); } ... } </code></pre> <p>In config/local_store.php is the same as the example (I include this file in boostrap.php)</p> <pre><code>... $listener = new LocalFileStorageListener(); EventManager::instance()-&gt;on($listener); $listener = new ImageProcessingListener(); EventManager::instance()-&gt;on($listener); Configure::write('FileStorage', [ 'imageSizes' =&gt; [ 'Medias' =&gt; [ 'large' =&gt; [ ... ]); FileStorageUtils::generateHashes(); StorageManager::config('Local', [ 'adapterClass' =&gt; '\Gaufrette\Adapter\Local', 'adapterOptions' =&gt; [TMP, true], 'class' =&gt; '\Gaufrette\Filesystem' ]); </code></pre> <p>upload.ctp</p> <pre><code>echo $this-&gt;Form-&gt;create($productImage, array('type' =&gt; 'file')); echo $this-&gt;Form-&gt;file('file'); echo $this-&gt;Form-&gt;error('file'); echo $this-&gt;Form-&gt;submit(__('Upload')); echo $this-&gt;Form-&gt;end(); </code></pre> <p>In the quick start there is two upload methods: uploadImage and uploadDocument but in the controller they use "upload".</p> <p>There is another association in Products in the example, I need this?:</p> <pre><code> $this-&gt;hasMany('Documents', [ 'className' =&gt; 'FileStorage.FileStorage', 'foreignKey' =&gt; 'foreign_key', 'conditions' =&gt; [ 'Documents.model' =&gt; 'ProductDocument' ] ]); </code></pre> <p>I found this question (from there is the db I'm using) <a href="https://stackoverflow.com/questions/32031237/getting-started-with-cakephp-file-storage-quickstart-guide">Getting Started with cakephp-file-storage quickstart guide</a> and upload and insert but doesn't make the thumbnails and if I change the table to ImageStoreTable shows an error "Class not found"</p> <p>So if anybody can help me I will be very grateful!</p> </div>

Problem parsing image info with Python

In Python, how do I parse out the image URL/filename and other related information from console output like the following? Any advice appreciated.

```
<FileStorage: u'wx996ee0fd19218c1b.o6zAJs0NTua2_B-g5REK3kgqWAGY.c5cd06ebde4ede81 0844db0ac04f2278.png' ('image/png')>
```

TypeError: expected str, bytes or os.PathLike object, not tuple — has anyone run into this error, and how do you fix it?

```python
import gensim
import torch
from util import read_caption_data, char_table_to_sentence, word2vec
from torch.utils.serialization import load_lua
import os


def save_embeddings(filepath, filename, embeddings):
    if not os.path.exists(filepath):
        os.mkdir(filepath)
    target_path = os.path.join(filepath, filename)
    torch.save({'embeds': embeddings}, target_path)  # save the whole network, including the whole computation graph
    return True


class Word_Embeddings():
    def __init__(self, root_dir, caption_dir, split_file, alphabet):
        self.caption_dir = caption_dir
        self.split_file = split_file
        self.alphabet = alphabet
        self.dir_path = os.path.join(root_dir, 'pretrained_embeddings')
        self.model = gensim.models.KeyedVectors.load_word2vec_format('/data0/Masters/yanwencai/CrossModalRetrieval-master/models/GoogleNews-vectors-negative300.bin', binary=True)  # load Google's pre-trained word vectors

    def load_caption(self, cap):
        assert (os.path.isfile(cap))
        cls, fn = cap.split('/')[-2], cap.split('/')[-1]
        file_path = os.path.join((self.dir_path, cls))  # error line
        caption = load_lua(cap)
        char = caption['char']
        sentence = char_table_to_sentence(self.alphabet, char)
        embeds = word2vec(self.model, sentence, sen_size=16, emb_size=300)
        return file_path, fn, embeds


alphabet = "abcdefghijklmnopqrstuvwxyz0123456789-,;.!?:'\"/\\|_@#$%^&*~`+-=<>()[]{} "
data_dir = '/data0/Masters/yanwencai/CrossModalRetrieval-master/datasets/CUB_200_2011/'
split_file = os.path.join(data_dir, 'train_val.txt')
caption_dir = os.path.join(data_dir, 'cub_icml')
caption_list = read_caption_data(caption_dir, split_file)
WE = Word_Embeddings(data_dir, caption_dir, split_file, alphabet)
for idx, cap in enumerate(caption_list):
    filepath, fn, embeds = WE.load_caption(cap)  # error line
    if save_embeddings(filepath, fn, embeds):
        print(filepath, fn, 'saved')
```

Error description:

```
Traceback (most recent call last):
  File "word_embeddings.py", line 61, in <module>
    filepath, fn, embeds,type = WE.load_caption(cap)
  File "word_embeddings.py", line 36, in load_caption
    file_path,type = os.path.join((self.dir_path, cls))
  File "/home/amax/anaconda2/lib/python3.6/posixpath.py", line 80, in join
    a = os.fspath(a)
TypeError: expected str, bytes or os.PathLike object, not tuple
```

Please help take a look, thanks.

undefined reference to.... How do I fix this? Thanks. See the image below.

![screenshot](https://img-ask.csdn.net/upload/201607/20/1468994286_642791.png)

Error calling Fdfs_client from Python 3

Uploading files with fdfs_client on Windows, following https://www.cnblogs.com/kindleheart/p/10134502.html. I have already installed fdfs_client with pip. When I call `client = Fdfs_client(r'C:\client.conf')` I get an error:

```
>>> client = Fdfs_client(r'client.conf')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\ProgramData\Anaconda3\lib\site-packages\fdfs_client\client.py", line 51, in __init__
    self.tracker_pool = poolclass(**self.trackers)
TypeError: type object argument after ** must be a mapping, not str
```

Please help me figure this out. ![screenshot](https://img-ask.csdn.net/upload/201812/28/1545988262_317592.png)

A question about processing XML data

The XML contains duplicate data. My plan was to read all the data in, mark the corresponding positions in the m_Lable image with 1, and then read out the positions marked 1, so that duplicates are removed. But a large amount of non-duplicate data can no longer be read out either. What is going wrong?

```cpp
FileStorage fs2((LPCSTR)m_xmlfile, FileStorage::READ);
FileNode Arm = fs2["Arm"];
m_Lable = cvCreateImage(cvSize(m_nImgWidth, m_nImgHeight), IPL_DEPTH_8U, 1);
cvZero(m_Lable);
for (int i = 0; i < Arm.size(); i += 2)
{
    int x = Arm[i];
    int y = Arm[i + 1];
    char* label = m_Lable->imageData + y * m_Lable->widthStep + x * m_Lable->nChannels;
    *label = 1;
}
int with = m_Lable->width;
int hight = m_Lable->height;
for (int i = 0; i < with; i++)
{
    for (int j = 0; j < hight; j++)
    {
        char* label = m_Lable->imageData + j * m_Lable->widthStep + i * m_Lable->nChannels;
        if (*label == 1)
        {
            CvPoint point = cvPoint(i, j);
            m_Data.m_Arm_new.push_back(point);
        }
    }
}
```
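Not from the thread, but if the only goal is to drop duplicate (x, y) pairs, the intermediate label image is not strictly needed; a minimal sketch of the same deduplication using std::set instead (the file name data.xml is hypothetical, and the "Arm" node layout is taken from the code above):

```cpp
#include <opencv2/core/core.hpp>
#include <set>
#include <utility>
#include <vector>

int main()
{
    cv::FileStorage fs("data.xml", cv::FileStorage::READ);  // hypothetical file name
    cv::FileNode arm = fs["Arm"];

    std::set<std::pair<int, int> > seen;   // keeps each coordinate pair exactly once
    std::vector<cv::Point> unique_points;

    for (size_t i = 0; i + 1 < arm.size(); i += 2)
    {
        int x = (int)arm[(int)i];
        int y = (int)arm[(int)i + 1];
        if (seen.insert(std::make_pair(x, y)).second)   // true only the first time a pair is seen
            unique_points.push_back(cv::Point(x, y));
    }
    return 0;
}
```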

In Python, how do I convert an OpenCV image object into a binary file stream?

Given:

```python
cap = cv2.VideoCapture(0)
while(True):
    ret, frame = cap.read()
```

I want to convert frame into the same format as:

```python
fout = open("path", "rb")
frame = fout.read()
fout.close
```

so that the two frame objects have the same format. How do I convert it? The reason is that an API I am using needs a binary file stream to be passed in, but each frame object obtained from the camera through OpenCV is not binary.
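Not from the thread, but the usual approach is to encode the frame into an image container in memory; in Python that is cv2.imencode (the returned buffer's tobytes() gives the binary stream). The same idea in the C++ API is shown below as a sketch, since the rest of this page uses C++, and it assumes a JPEG container is acceptable for the API being called:

```cpp
#include <opencv2/opencv.hpp>
#include <vector>
#include <iostream>

int main()
{
    cv::VideoCapture cap(0);
    cv::Mat frame;
    if (!cap.read(frame))
        return 1;

    // Encode the raw frame into a JPEG byte stream held in memory;
    // buf then contains the same bytes a .jpg file on disk would.
    std::vector<uchar> buf;
    cv::imencode(".jpg", frame, buf);

    std::cout << "encoded " << buf.size() << " bytes" << std::endl;
    // buf.data() / buf.size() is the binary stream to hand to the API.
    return 0;
}
```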

Meteor: HTTP call returns Undefined even though the HTTP response arrives

<div class="post-text" itemprop="text"> <p>I try to upload a file by encoding the content as base64 using a meteor app and a custom php script.</p> <p>The php script is the following:</p> <pre><code>require_once '../vendor/autoload.php'; use WindowsAzure\Common\ServicesBuilder; use MicrosoftAzure\Storage\Common\ServiceException; use MicrosoftAzure\Storage\Blob\Models\ListBlobsOptions; error_log("Method:".$_SERVER['REQUEST_METHOD'],0); if($_SERVER['REQUEST_METHOD'] === 'OPTIONS'){ header('Access-Control-Allow-Origin: *'); header('Access-Control-Allow-Methods: POST'); header('Access-Control-Allow-Headers: Origin, Content-Type, X-Auth-Token , Authorization'); error_log("Options Called",0); die(); } else { error_log("Post Called",0); function create_storage_connection() { return "DefaultEndpointsProtocol=https;AccountName=".getenv('AZURE_ACCOUNT').";AccountKey=".getenv('AZURE_KEY'); } $connectionString=create_storage_connection(); $blobRestProxy= ServicesBuilder::getInstance()-&gt;createBlobService($connectionString); $container_name=getenv('AZURE_CONTAINER'); $data=file_get_contents('php://input'); $data=json_decode($data,true); try{ //Upload data $file_data=base64_decode($data['data']); $data['name']=uniqid().$data['name']; $blobRestProxy-&gt;createBlockBlob($container_name,$data['name'],$file_data); $blob = $blobRestProxy-&gt;getBlob($container_name, $data['name']); //Download url info $listBlobsOptions = new ListBlobsOptions(); $listBlobsOptions-&gt;setPrefix($data['name']); $blob_list = $blobRestProxy-&gt;listBlobs($container_name, $listBlobsOptions); $blobs = $blob_list-&gt;getBlobs(); $url=[]; foreach($blobs as $blob) { $urls[]=$blob-&gt;getUrl(); } error_log("Urls: ".implode(" , ",$urls),0); header("Content-type: application/json"); $result=json_encode(['files'=&gt;"sent",'url'=&gt;$urls]); error_log("Result: ".$result,0); echo $result; } catch(ServiceException $e) { $code = $e-&gt;getCode(); $error_message = $e-&gt;getMessage(); header("Content-type: application/json"); echo json_encode(['code'=&gt;$code,'message'=&gt;$error_message]); } } </code></pre> <p>And on my meteor script I created a file named "imports/ui/File.jsx" having the following content:</p> <pre><code>import React, { Component } from 'react'; import {FileUpload} from '../api/FileUpload.js'; class File extends Component { changeFile(e) { e.preventDefault() let files = document.getElementById('fileUpload'); var file = files.files[0]; var reader=new FileReader(); reader.onloadend = function() { Meteor.call('fileStorage.uploadFile',reader.result,file.name,file.type) } reader.readAsDataURL(file); } render() { return ( &lt;form onSubmit={ this.changeFile.bind(this) }&gt; &lt;label&gt; &lt;input id="fileUpload" type="file" name="file" /&gt; &lt;/label&gt; &lt;button type="submit"&gt;UploadFile&lt;/button&gt; &lt;/form&gt; ) } } export default File; </code></pre> <p>And I also have a file named <code>imports/api/FileUpload.js</code> that handles the http call to the server:</p> <pre><code>import { Meteor } from 'meteor/meteor'; import { HTTP } from 'meteor/http' export default Meteor.methods({ 'fileStorage.uploadFile'(base64Data,name,mime) { // this.unblock(); let http_obj={ 'data':{ 'data':base64Data, 'name':name, 'mime':mime }, } HTTP.call("POST","http://localhost/base64Upload/",http_obj,function(err,response){ console.log("Response:",response); }); } }); </code></pre> <p>The problem is even though I get I successfull response from my server the:</p> <pre><code> console.log("Response:",response); </code></pre> <p>Does not 
print the returned json response from my server script to the console. Instead I get the following message (in my browser console):</p> <blockquote> <p>Response: undefined</p> </blockquote> <p>I cannot uinderstand why I get undefined on response even though the php script returns a response. Also if I <code>console.log</code> the err I get the following:</p> <blockquote> <p>Error Error: network Καταγραφή στοίβας: httpcall_client.js/HTTP.call/xhr.onreadystatechange@<a href="http://localhost:3000/packages/http.js?hash=d7408e6ea3934d8d6dd9f1b49eab82ac9f6d8340:244:20" rel="nofollow noreferrer">http://localhost:3000/packages/http.js?hash=d7408e6ea3934d8d6dd9f1b49eab82ac9f6d8340:244:20</a></p> </blockquote> <p>And I cannot figure out why does it happen.</p> <h1>Edit 1:</h1> <p>The meteor App does 2 Http calls 1 using <code>OPTIONS</code> method and one that uses <code>POST</code></p> <p>As requested when replaced the <code>die()</code> with:</p> <pre><code> var_dump($_SERVER['REQUEST_METHOD']); exit; </code></pre> <p>I get the response:</p> <blockquote> /home/pcmagas/Kwdikas/php/apps/base64Upload/src/public/index.php:14:string 'OPTIONS' <i>(length=7)</i> </blockquote> <p>Also on my network tab of the browser it says:</p> <p><a href="https://i.stack.imgur.com/ptvGB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ptvGB.png" alt="The http calls on network tab"></a></p> <p>Please keep in mind that the meteor performs 2 http calls to the script one using http <code>OPTIONS</code> method and one that uses the http <code>POST</code> one. What I want to get is the one that uses the http <code>POST</code> one.</p> <h1>Edit 2:</h1> <p>I also tried to put a timeout of 2 seconds by changing the <code>http_obj</code> into:</p> <pre><code>let http_obj={ 'data':{ 'data':base64Data, 'name':name, 'mime':mime }, 'timeout':2000 } </code></pre> <p>But I get the following error:</p> <blockquote> <p>Error Error: Can't set timers inside simulations</p> </blockquote> </div>

VC++ 2010 MFC writes XML document data into a SQL Server 2008 database (field type text). Why does it come out garbled?

Data written earlier did not come out garbled; it was smaller than the 433 KB I am writing now, but that shouldn't matter, since a text field can store far more than 433 KB, right?

```cpp
// Save the Mat idx into idx_file.xml under the variable name prob_idx
FileStorage fs("idx_file.xml", FileStorage::WRITE);
fs << "prob_idx" << descriptor;
fs.release();
// The XML written above is not garbled; its size is 433 KB (444,010 bytes)

CFile file;
CString FileName = _T("idx_file.xml");
char buf[1024];               // read 1 KB
memset(buf, 0, 1024);         // zero the memory to avoid garbage at the end of the string
if (!file.Open(FileName, CFile::modeRead))
{
    AfxMessageBox(_T("File not found!"));
    return;
}
file.Read(buf, sizeof(buf));
file.Close();
pRst->Fields->GetItem("value")->Value = (_variant_t)buf;
// What gets written to the database is garbled; the database field type is text, size 292 KB (300,007 bytes)
```

![screenshot](https://img-ask.csdn.net/upload/201703/18/1489831428_680953.png)
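Not from the thread, but note that char buf[1024] only ever reads the first 1 KB of a 433 KB file. A minimal sketch of reading the whole XML into memory before handing it to the database layer, written with standard C++ streams rather than MFC's CFile:

```cpp
#include <fstream>
#include <sstream>
#include <string>
#include <iostream>

int main()
{
    // Read the whole file instead of a fixed 1 KB buffer.
    std::ifstream in("idx_file.xml", std::ios::binary);
    if (!in)
    {
        std::cerr << "cannot open idx_file.xml" << std::endl;
        return 1;
    }

    std::ostringstream buf;
    buf << in.rdbuf();
    std::string xml = buf.str();   // complete file contents, byte for byte

    std::cout << "read " << xml.size() << " bytes" << std::endl;
    // xml.c_str() / xml.data() can then be passed on to the DB layer.
    return 0;
}
```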

File upload error "application/octet-stream"

<div class="post-text" itemprop="text"> <p>我不是很精通 PHP,我用以下的PHP代码上传一个csv文件却怎么都实现不了。我已经修复了上传最大尺寸等相关属性的问题。其在我的本地网站上运行良好,但在沙盒网站上就不行了。错误:“ application / octet-stream”。 我该怎么办?</p> <p>数据非常简单,以.csv 格式存储:</p> <pre><code>27589 16990 161.7000095 0.838494 27589 17067 161.7000095 0.838494 27820 17144 315.7000095 0.859458 27820 17221 315.7000095 0.859458 27820 17606 315.7000095 0.866033 27820 17683 315.7000095 0.866033 </code></pre> <p>错误输出: "-- CSV file to load: Invalid type: application/octet-stream"</p> <pre><code>&lt;?php ini_set('display_errors', 1); error_reporting(E_ALL); // using upload at click from http://code.google.com/p/upload-at-click/ // FileData is the name for the input file $file_result = ""; $file = $_FILES['Filedata']; $allowedExtensions = array("csv", "txt"); $arrayVar = explode(".", $file["name"]); $extension = end($arrayVar); //commented out for strict standard error //$extension = end(explode(".", $file["name"])); function isAllowedExtension($fileName) { global $allowedExtensions; return in_array(end(explode(".", $fileName)), $allowedExtensions); } if($file["error"] &gt; 0){ echo "failure to upload the file &gt;&gt;&gt; ". "Error code: ".$file["error"]."&lt;br&gt;"; }else{ //echo " &gt;&gt;&gt; CURRENT DIR: ".getcwd() . " "; $workDir = getcwd(); $dir = substr($workDir, 0, -10); $path = $file["name"]; $newFileLoc = $dir.$path; $file_result.= "&lt;br&gt; Upload: " . $file["name"] . "&lt;br&gt;" . " Type: " . $file["type"] . "&lt;br&gt;" . " Size: " . $file["size"] . "&lt;br&gt;" . " file uploaded to: ".$newFileLoc."&lt;br&gt;"; // txt - text/plain // rtf - application/msword // dat/obj - application/octet-stream // csv - application/vnd.ms-excel // maximum 200 MB file - 200,000,000 k if ($file["type"] == "application/vnd.ms-excel" || $file["type"] == "text/plain"){ if( isAllowedExtension($file["name"]) ) { if( $file["size"] &lt; 200000000 ) { move_uploaded_file($file["tmp_name"], $newFileLoc); echo "|".$path;//"filePath : " . $newFileLoc; } else { echo "Invalid file size: " . $file["size"] . " "; } } else { echo "Invalid extension: " . $file["name"]." "; } } else { echo "Invalid type: " . $file["type"] . " "; } } ?&gt; </code></pre> </div>

Problem reading .doc files in Python

I only know that Python can read .docx files by importing the docx library, but opening a .doc document raises an error, so presumably it isn't supported. A .doc file is a binary file, so should it be read the way binary files are read? I tried reading it with struct.unpack("s", f.read(1)); the .doc document contains only English characters, but everything read out is garbled. What is the correct way to read it?
