firework2018 2018-09-11 14:53

Format problem when segmenting text with jieba

from gensim.models import word2vec
import gensim
import logging
import jieba, re, codecs

# jieba segmentation: load a custom user dictionary
jieba.load_userdict("E:/workplace/data/userdict.txt")
test = open("E:/workplace/data/test.txt", 'r', encoding='utf-8')
words = list(jieba.cut(test, cut_all=False, HMM=True))
# arguments: the input text, whether to use full mode, whether to enable the HMM for unknown words
words = ''.join(words)  # convert the token list back to a string
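(Side note on the custom dictionary: jieba.load_userdict expects a UTF-8 text file with one entry per line, in the form "word [frequency] [POS tag]", the last two being optional. A minimal sketch of how such a file could be generated, with purely hypothetical entries:)

    # Hypothetical entries for userdict.txt; each line is
    # "word [frequency] [POS tag]", frequency and tag optional.
    sample_entries = [
        "自然语言处理 10 n",
        "word2vec 5 eng",
    ]
    with open("E:/workplace/data/userdict.txt", "w", encoding="utf-8") as f:
        f.write("\n".join(sample_entries) + "\n")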

Error:

   warnings.warn("detected Windows; aliasing chunkize to chunkize_serial")
Traceback (most recent call last):

  File "<ipython-input-17-a64173a4fbe2>", line 1, in <module>
    runfile('E:/workplace/code/untitled0.py', wdir='E:/workplace/code')

  File "D:\Program Files (x86)\anaconda\lib\site-packages\spyder\utils\site\sitecustomize.py", line 866, in runfile
    execfile(filename, namespace)

  File "D:\Program Files (x86)\anaconda\lib\site-packages\spyder\utils\site\sitecustomize.py", line 102, in execfile
    exec(compile(f.read(), filename, 'exec'), namespace)

  File "E:/workplace/code/untitled0.py", line 15, in <module>
    words=list(jieba.cut(test,cut_all=False,HMM=True))

  File "D:\Program Files (x86)\anaconda\lib\site-packages\jieba\__init__.py", line 282, in cut
    sentence = strdecode(sentence)

  File "D:\Program Files (x86)\anaconda\lib\site-packages\jieba\_compat.py", line 37, in strdecode
    sentence = sentence.decode('utf-8')

AttributeError: '_io.TextIOWrapper' object has no attribute 'decode'

Both test.txt and userdict.txt are UTF-8 encoded.
The contents of test.txt are as follows:

[screenshot of test.txt contents]
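For reference, the AttributeError in the traceback comes from passing the open file object (an _io.TextIOWrapper) to jieba.cut, which expects a string and therefore tries to call .decode() on it. A minimal sketch of the likely fix, assuming the goal is to segment the whole file and keep the tokens separated by spaces for later use with word2vec:

    import jieba

    jieba.load_userdict("E:/workplace/data/userdict.txt")

    # Read the file contents into a string first; jieba.cut segments text, not file objects.
    with open("E:/workplace/data/test.txt", "r", encoding="utf-8") as f:
        text = f.read()

    words = list(jieba.cut(text, cut_all=False, HMM=True))
    # Join with spaces rather than '' so the token boundaries survive for downstream tools.
    segmented = " ".join(words)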
