Lucene 4.9 Chinese tokenization example fails

Here is the code:

    Directory directory = new RAMDirectory();
    CharArraySet c = new CharArraySet(Version.LUCENE_4_9, 1, true);
    c.add("#");
    IndexWriterConfig conf = new IndexWriterConfig(Version.LUCENE_4_9,
            new SmartChineseAnalyzer(Version.LUCENE_4_9, c));

    IndexWriter indexWriter = new IndexWriter(directory, conf);
    Document doc = new Document();
    doc.add(new StringField("keyword", "青菜#土豆#牛肉", Field.Store.YES));
    doc.add(new StringField("id", "1aaa", Field.Store.YES));
    indexWriter.addDocument(doc);
    indexWriter.close();

    Term term = new Term("keyword", "土豆");
    AtomicReader atomicReader = SlowCompositeReaderWrapper.wrap(DirectoryReader.open(directory));
    IndexSearcher searcher = new IndexSearcher(atomicReader);
    TopDocs tdoc = searcher.search(new TermQuery(term), 10);
    System.out.println(tdoc.totalHits);
    for (ScoreDoc s : tdoc.scoreDocs) {
        Document firstHit = searcher.doc(s.doc);
        System.out.println(firstHit.getField("id").stringValue());
    }

But the search finds nothing! It only matches when the term is the full string "青菜#土豆#牛肉". From the documentation I understood that the `c` argument to SmartChineseAnalyzer is the stopword set — did I misunderstand it? Any help appreciated.
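A likely cause, judging only from the snippet above (a guess, not a confirmed answer): `StringField` is never analyzed — it indexes the whole value as one opaque token, so the SmartChineseAnalyzer (and its stopword set) never touches the `keyword` field, and only the exact full string can match. Switching that field to `TextField` lets the analyzer segment the text at index time. The sketch below assumes `lucene-core` and `lucene-analyzers-smartcn` 4.9 on the classpath, and that the segmenter emits 土豆 as its own token; the class name `SmartCnDemo` and the helper `searchKeyword` are made up for illustration.

```java
import org.apache.lucene.analysis.cn.smart.SmartChineseAnalyzer;
import org.apache.lucene.analysis.util.CharArraySet;
import org.apache.lucene.document.*;
import org.apache.lucene.index.*;
import org.apache.lucene.search.*;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.RAMDirectory;
import org.apache.lucene.util.Version;

public class SmartCnDemo {

    // Index one document and return the number of hits for a single-term query.
    static int searchKeyword(String queryTerm) throws Exception {
        Directory directory = new RAMDirectory();
        CharArraySet stop = new CharArraySet(Version.LUCENE_4_9, 1, true);
        stop.add("#");
        IndexWriterConfig conf = new IndexWriterConfig(Version.LUCENE_4_9,
                new SmartChineseAnalyzer(Version.LUCENE_4_9, stop));
        IndexWriter writer = new IndexWriter(directory, conf);

        Document doc = new Document();
        // TextField runs the analyzer at index time, so "青菜#土豆#牛肉" is
        // segmented into separate terms. StringField would index the whole
        // value as one un-analyzed token, which is why the original
        // TermQuery for "土豆" found nothing.
        doc.add(new TextField("keyword", "青菜#土豆#牛肉", Field.Store.YES));
        doc.add(new StringField("id", "1aaa", Field.Store.YES));
        writer.addDocument(doc);
        writer.close();

        IndexSearcher searcher = new IndexSearcher(DirectoryReader.open(directory));
        TopDocs hits = searcher.search(new TermQuery(new Term("keyword", queryTerm)), 10);
        return hits.totalHits;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(searchKeyword("土豆"));
    }
}
```

Note also that `TermQuery` does no query-time analysis, so its term must exactly match a token produced at index time; for free-form user input, building the query with `QueryParser` and the same analyzer is the usual approach.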
