I'm doing named-entity recognition on text our instructor provided. The logistic-regression iteration is not supposed to use torch, only the given update formula. I've found that, whether on the provided dataset or on a small dataset I made up for testing, F1 is a constant that does not change as θ changes. What could be the reason? Below is part of my code and data:
for i in range(1000):
    temp = 0
    for j in range(len(xto)):
        # gradient of the log-likelihood: sum over samples of (y - g(theta^T x)) * x
        temp += (yto[j] - sigmoid(np.dot(sta, xto[j]))) * xto[j]
    print("temp=", temp)
    sta = sta + 0.02 * temp  # gradient-ascent step, learning rate 0.02
    print("sta[", i, "]=", sta)
    correct = 0
    presum = 0
    truesum = 0
    for t in range(len(wordsn)):
        if wordsn[t].count("/ns") == 1:
            truesum += 1  # gold "/ns" mentions
        if dic.get(wordsn[t]) != None:
            presum += 1  # words found in the feature dictionary
            # one-hot features for the previous, current and next word
            x1 = np.zeros(len(num))
            if t > 0 and dic.get(wordsn[t - 1]) != None:
                x1[dic.get(wordsn[t - 1])] = 1
            x = np.zeros(len(num))
            x[dic.get(wordsn[t])] = 1
            x2 = np.zeros(len(num))
            if t + 1 < len(wordsn) and dic.get(wordsn[t + 1]) != None:
                x2[dic.get(wordsn[t + 1])] = 1
            xtemp2 = np.concatenate((x1, x, x2))
            if np.dot(sta, xtemp2) > 0 and wordsn[t].count("/ns") == 1:
                correct += 1  # predicted positive that is also a gold positive
    print("presum[", i, "]=", presum)
    print("correct[", i, "]=", correct)
    print("truesum[", i, "]=", truesum)
    if presum == 0:
        r1 = 0
    else:
        r1 = correct / presum  # precision-like ratio
    if truesum == 0:
        r2 = 0
    else:
        r2 = correct / truesum  # recall
    if r1 == 0 and r2 == 0:
        F1 = 0
    else:
        F1 = (2 * r1 * r2) / (r1 + r2)
    print("F1[", i, "]", "=", F1)
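For comparison, here is F1 computed from explicit true-positive / false-positive / false-negative counts, where precision divides by the number of *predicted* positives rather than by dictionary hits. This is a minimal self-contained sketch; the helper name `f1_score` is mine, not from the code above.

```python
def f1_score(y_true, y_pred):
    # counts over 0/1 labels
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

print(f1_score([1, 0, 1, 1], [1, 1, 1, 0]))  # tp=2, fp=1, fn=1 -> 2/3
```

If this value also stays fixed across iterations, the predictions themselves are not changing, which narrows the problem down to the feature/prediction step rather than the metric.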
Over the one thousand iterations F1 stays at 0.5 the whole time (that is with data I wrote myself; with the large dataset it is likewise unchanged). The formula I'm using is

θ := θ + α Σ_{i=1}^{m} (y^(i) − g(θ^T x^(i))) x^(i)
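That update can be written in vectorized NumPy as θ += α · Xᵀ(y − g(Xθ)). Below is a minimal sketch on made-up toy data (a bias column plus two indicator features), just to show that θ does move under this rule; it is not the original feature setup.

```python
import numpy as np

def sigmoid(z):
    # logistic function g(z) = 1 / (1 + e^(-z))
    return 1.0 / (1.0 + np.exp(-z))

# toy data (hypothetical): column 0 is a bias term,
# column 1 fires for positives, column 2 for negatives
X = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0]])
y = np.array([1, 0, 1, 0])

theta = np.zeros(3)
alpha = 0.02
for _ in range(1000):
    # batch gradient ascent on the log-likelihood:
    # theta := theta + alpha * X^T (y - g(X theta))
    theta += alpha * X.T @ (y - sigmoid(X @ theta))

print(theta)  # weight on column 1 grows positive, on column 2 negative
```

With correctly built one-hot features the score np.dot(theta, x), and hence the predictions and F1, should change from iteration to iteration; if they do not, the feature vectors fed to the evaluation are worth double-checking.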