Author | 李秋鍵 (Li Qiujian)
Editor | Carol
Cover image | CSDN, licensed from Visual China (視覺中國(guó))
Speech recognition technology has developed rapidly in recent years. Siri on the phone, Microsoft's Cortana, the smart speakers of various platforms: speech recognition applications of every kind are now in widespread use.
Speech recognition itself belongs to perceptual intelligence. Moving from merely recognizing speech to actually understanding it takes a machine up to the level of cognitive intelligence; a machine's natural language understanding ability has become a mark of how intelligent it is, and natural language understanding remains the hard part today.
At the same time, most speech recognition platforms rely on cloud services, so how a speech recognizer is actually trained still seems mysterious to most people. In this article we will therefore use Python to build our own speech recognition system.
The final model's recognition results look like this:
Preparation before the experiment
First, the Python version we use is 3.6.5. The libraries involved are: cv2 for image processing; numpy for matrix operations; the Keras framework for training and loading the model; librosa and python_speech_features for extracting audio features; and glob and pickle for reading the local dataset.
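The article never shows an import preamble. Judging from the calls the snippets below make, they assume roughly the following setup (my reconstruction: Keras 2.x on the TensorFlow 1.x backend, and an older librosa in which librosa.feature.rmse still exists):
import glob
import pickle
import random

import numpy as np
import librosa
import matplotlib.pyplot as plt
from tqdm import tqdm
from python_speech_features import mfcc

from keras.models import Model, load_model
from keras.layers import Input, Conv1D, BatchNormalization, Activation, Multiply, Add, Lambda
from keras.optimizers import SGD
from keras.callbacks import ModelCheckpoint, ReduceLROnPlateau
import keras.backend as K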
Dataset preparation
The dataset we use is Tsinghua University's THCHS-30 Chinese speech corpus.
The recordings are divided into four groups by text content: A (sentences 1–250), B (sentences 251–500), C (sentences 501–750), and D (sentences 751–1000). Groups A, B, and C together contain 10,893 utterances from 30 speakers and are used for training; group D contains 2,496 utterances from 10 speakers and is used for testing.
The data folder contains .wav files and .trn files. Each .trn file holds the text description of its .wav file: the first line is the transcript in characters, the second line is the pinyin, and the third line is the phonemes.
The dataset looks like this:
Model training
1. Extracting MFCC features from the speech dataset: the human voice is produced through the vocal tract, and the shape of the vocal tract determines what sound comes out. If we could determine this shape accurately, we could describe the resulting phonemes accurately. The shape of the vocal tract shows up in the envelope of the short-time power spectrum of speech, and MFCCs are a feature designed to describe this envelope accurately.
The extracted MFCC features can be seen in the figure below.
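Since the figure itself is not reproduced here, the following is a minimal sketch of my own (not from the original article) showing how such a visualization can be produced; the filename 'data/A2_0.wav' is illustrative and assumes the THCHS-30 files are in place:
# Visualize the MFCC features of one utterance as a heatmap
audio, sr = librosa.load('data/A2_0.wav')
feat = mfcc(audio, sr, numcep=13, nfft=551)   # shape: (frames, 13)

# One row per MFCC coefficient, one column per time frame
plt.imshow(feat.T, aspect='auto', origin='lower')
plt.xlabel('Frame')
plt.ylabel('MFCC coefficient')
plt.show()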
故我們?cè)谧x取數(shù)據(jù)集得基礎(chǔ)上,要將其語(yǔ)音特征提取存儲(chǔ)以方便加載入神經(jīng)網(wǎng)絡(luò)進(jìn)行訓(xùn)練。
其對(duì)應(yīng)得代碼如下:
# List the dataset's transcript (.trn) files
text_paths = glob.glob('data/*.trn')
total = len(text_paths)
print(total)

with open(text_paths[0], 'r', encoding='utf8') as fr:
    lines = fr.readlines()
    print(lines)

# Read each .trn file's contents into arrays
texts = []
paths = []
for path in text_paths:
    with open(path, 'r', encoding='utf8') as fr:
        lines = fr.readlines()
        line = lines[0].strip('\n').replace(' ', '')
        texts.append(line)
        paths.append(path.rstrip('.trn'))
print(paths[0], texts[0])

# Number of MFCC coefficients
mfcc_dim = 13

# Load an audio file and trim leading/trailing silence by energy
def load_and_trim(path):
    audio, sr = librosa.load(path)
    energy = librosa.feature.rmse(audio)
    frames = np.nonzero(energy >= np.max(energy) / 5)
    indices = librosa.core.frames_to_samples(frames)[1]
    audio = audio[indices[0]:indices[-1]] if indices.size else audio[0:0]
    return audio, sr

# Extract and store the audio features
features = []
for i in tqdm(range(total)):
    path = paths[i]
    audio, sr = load_and_trim(path)
    features.append(mfcc(audio, sr, numcep=mfcc_dim, nfft=551))
print(len(features), features[0].shape)
2. Neural network preprocessing: before loading the data and training the network, we need to normalize the extracted MFCC features. The main purpose is to speed up convergence, improve accuracy, and reduce interference. Then we prepare the dataset and labels and define the inputs and outputs.
The corresponding code is as follows:
# Randomly sample 100 feature arrays to estimate normalization statistics
samples = random.sample(features, 100)
samples = np.vstack(samples)
# Per-dimension MFCC mean, for normalization
mfcc_mean = np.mean(samples, axis=0)
# Per-dimension standard deviation, for normalization
mfcc_std = np.std(samples, axis=0)
print(mfcc_mean)
print(mfcc_std)

# Normalize the features
features = [(feature - mfcc_mean) / (mfcc_std + 1e-14) for feature in features]

# Build the character vocabulary and map each label character to an id
chars = {}
for text in texts:
    for c in text:
        chars[c] = chars.get(c, 0) + 1
chars = sorted(chars.items(), key=lambda x: x[1], reverse=True)
chars = [char[0] for char in chars]
print(len(chars), chars[:100])

char2id = {c: i for i, c in enumerate(chars)}
id2char = {i: c for i, c in enumerate(chars)}
data_index = np.arange(total)
np.random.shuffle(data_index)
train_size = int(0.9 * total)
test_size = total - train_size
train_index = data_index[:train_size]
test_index = data_index[train_size:]
# Read the dataset features into the network's inputs X and labels Y
X_train = [features[i] for i in train_index]
Y_train = [texts[i] for i in train_index]
X_test = [features[i] for i in test_index]
Y_test = [texts[i] for i in test_index]
3. Defining the neural network functions: these include the training batch generator, the convolution layer function, the batch normalization function, the activation function, and so on.
For each sample's feature array, the first dimension is the number of time frames: the longer the original audio, the larger this dimension. The second dimension is the MFCC feature dimension. Once the raw audio has this numerical representation, we can implement the model with WaveNet. Since the MFCC features form a one-dimensional sequence, we use Conv1D for the convolutions. "Causal" means that a convolution's output depends only on inputs at or before the current position, i.e. it never uses future features; you can think of it as shifting the convolution window toward the past. The WaveNet model structure is shown below:
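The structure diagram is not reproduced here. To make the "causal" property concrete, here is a minimal standalone sketch of my own (not part of the article's pipeline): it feeds a single impulse through a causal Conv1D and shows that nothing before the impulse's time step is affected.
# Demo: a causal dilated Conv1D never lets information flow backwards in time
inp = Input(shape=(None, 1))
out = Conv1D(filters=1, kernel_size=2, padding='causal',
             dilation_rate=4, use_bias=False)(inp)
demo = Model(inp, out)

impulse = np.zeros((1, 20, 1), dtype='float32')
impulse[0, 10, 0] = 1.0  # a single spike at time step 10
response = demo.predict(impulse)
# With kernel_size=2 and dilation_rate=4, the spike can only influence
# outputs at t=10 and t=14, never anything earlier than t=10
print(np.nonzero(response[0, :, 0])[0])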
The details are as follows:
batch_size = 16

# Batch generator: yields 16 padded samples per training step
def batch_generator(x, y, batch_size=batch_size):
    offset = 0
    while True:
        offset += batch_size
        if offset == batch_size or offset >= len(x):
            # Reshuffle the data at the start of each pass
            data_index = np.arange(len(x))
            np.random.shuffle(data_index)
            x = [x[i] for i in data_index]
            y = [y[i] for i in data_index]
            offset = batch_size
        X_data = x[offset - batch_size: offset]
        Y_data = y[offset - batch_size: offset]
        X_maxlen = max([X_data[i].shape[0] for i in range(batch_size)])
        Y_maxlen = max([len(Y_data[i]) for i in range(batch_size)])
        # Zero-pad features; pad labels with the blank id len(char2id)
        X_batch = np.zeros([batch_size, X_maxlen, mfcc_dim])
        Y_batch = np.ones([batch_size, Y_maxlen]) * len(char2id)
        X_length = np.zeros([batch_size, 1], dtype='int32')
        Y_length = np.zeros([batch_size, 1], dtype='int32')
        for i in range(batch_size):
            X_length[i, 0] = X_data[i].shape[0]
            X_batch[i, :X_length[i, 0], :] = X_data[i]
            Y_length[i, 0] = len(Y_data[i])
            Y_batch[i, :Y_length[i, 0]] = [char2id[c] for c in Y_data[i]]
        inputs = {'X': X_batch, 'Y': Y_batch, 'X_length': X_length, 'Y_length': Y_length}
        outputs = {'ctc': np.zeros([batch_size])}
        yield (inputs, outputs)
epochs = 50
num_blocks = 3
filters = 128
X = Input(shape=(None, mfcc_dim,), dtype='float32', name='X')
Y = Input(shape=(None,), dtype='float32', name='Y')
X_length = Input(shape=(1,), dtype='int32', name='X_length')
Y_length = Input(shape=(1,), dtype='int32', name='Y_length')
# 1D convolution layer helper (causal, optionally dilated)
def conv1d(inputs, filters, kernel_size, dilation_rate):
    return Conv1D(filters=filters, kernel_size=kernel_size, strides=1, padding='causal',
                  activation=None, dilation_rate=dilation_rate)(inputs)

# Batch normalization helper
def batchnorm(inputs):
    return BatchNormalization()(inputs)

# Activation layer helper
def activation(inputs, activation):
    return Activation(activation)(inputs)

# WaveNet residual block: gated activation, residual output and skip output
def res_block(inputs, filters, kernel_size, dilation_rate):
    hf = activation(batchnorm(conv1d(inputs, filters, kernel_size, dilation_rate)), 'tanh')
    hg = activation(batchnorm(conv1d(inputs, filters, kernel_size, dilation_rate)), 'sigmoid')
    h0 = Multiply()([hf, hg])
    ha = activation(batchnorm(conv1d(h0, filters, 1, 1)), 'tanh')
    hs = activation(batchnorm(conv1d(h0, filters, 1, 1)), 'tanh')
    return Add()([ha, inputs]), hs

h0 = activation(batchnorm(conv1d(X, filters, 1, 1)), 'tanh')
shortcut = []
for i in range(num_blocks):
    for r in [1, 2, 4, 8, 16]:
        h0, s = res_block(h0, filters, 7, r)
        shortcut.append(s)

h1 = activation(Add()(shortcut), 'relu')
h1 = activation(batchnorm(conv1d(h1, filters, 1, 1)), 'relu')
# Softmax output over the characters plus the CTC blank
Y_pred = activation(batchnorm(conv1d(h1, len(char2id) + 1, 1, 1)), 'softmax')
sub_model = Model(inputs=X, outputs=Y_pred)

# CTC loss
def calc_ctc_loss(args):
    y, yp, ypl, yl = args
    return K.ctc_batch_cost(y, yp, ypl, yl)
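As a quick sanity check of what K.ctc_batch_cost expects, here is a toy example of my own (not from the article; the shapes are the point, not the values):
# y_true:        (batch, max_label_len)  integer label ids
# y_pred:        (batch, time, classes)  per-frame softmax probabilities
# input_length:  (batch, 1)              valid frames per sample
# label_length:  (batch, 1)              valid labels per sample
batch, T, n_classes = 2, 8, 5            # 4 characters + 1 CTC blank (last id)
y_true = np.array([[1, 2], [3, 3]])
y_pred = np.random.uniform(size=(batch, T, n_classes)).astype('float32')
y_pred /= y_pred.sum(axis=-1, keepdims=True)   # each frame must sum to 1
input_length = np.full((batch, 1), T)
label_length = np.full((batch, 1), 2)

loss = K.eval(K.ctc_batch_cost(y_true, y_pred, input_length, label_length))
print(loss.shape)   # (2, 1): one CTC loss value per sample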
4. Training the model: the training procedure looks like this:
ctc_loss = Lambda(calc_ctc_loss, output_shape=(1,), name='ctc')([Y, Y_pred, X_length, Y_length])

# Build the training model
model = Model(inputs=[X, Y, X_length, Y_length], outputs=ctc_loss)
# Set up the optimizer
optimizer = SGD(lr=0.02, momentum=0.9, nesterov=True, clipnorm=5)
# Compile the model; the Lambda layer already outputs the CTC loss
model.compile(loss={'ctc': lambda ctc_true, ctc_pred: ctc_pred}, optimizer=optimizer)

checkpointer = ModelCheckpoint(filepath='asr.h5', verbose=0)
lr_decay = ReduceLROnPlateau(monitor='loss', factor=0.2, patience=1, min_lr=0.0)

# Start training
history = model.fit_generator(
    generator=batch_generator(X_train, Y_train),
    steps_per_epoch=len(X_train) // batch_size,
    epochs=epochs,
    validation_data=batch_generator(X_test, Y_test),
    validation_steps=len(X_test) // batch_size,
    callbacks=[checkpointer, lr_decay])
# Save the inference model
sub_model.save('asr.h5')
# Save the vocabulary and normalization statistics in dictionary.pkl
with open('dictionary.pkl', 'wb') as fw:
    pickle.dump([char2id, id2char, mfcc_mean, mfcc_std], fw)
train_loss = history.history['loss']
valid_loss = history.history['val_loss']
plt.plot(np.linspace(1, epochs, epochs), train_loss, label='train')
plt.plot(np.linspace(1, epochs, epochs), valid_loss, label='valid')
plt.legend(loc='upper right')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.show()
Testing the model
We read the dictionary generated from our speech dataset and call the model to recognize the audio features.
The code is as follows:
# Locate the test wav file(s)
wavs = glob.glob('A2_103.wav')
print(wavs)

# Load the saved vocabulary and normalization statistics
with open('dictionary.pkl', 'rb') as fr:
    [char2id, id2char, mfcc_mean, mfcc_std] = pickle.load(fr)

mfcc_dim = 13
# Load the trained inference model
model = load_model('asr.h5')

# Pick a random file, trim silence, and extract normalized MFCC features
index = np.random.randint(len(wavs))
print(wavs[index])
audio, sr = librosa.load(wavs[index])
energy = librosa.feature.rmse(audio)
frames = np.nonzero(energy >= np.max(energy) / 5)
indices = librosa.core.frames_to_samples(frames)[1]
audio = audio[indices[0]:indices[-1]] if indices.size else audio[0:0]
X_data = mfcc(audio, sr, numcep=mfcc_dim, nfft=551)
X_data = (X_data - mfcc_mean) / (mfcc_std + 1e-14)
print(X_data.shape)

# Run the model and beam-search decode the CTC output
pred = model.predict(np.expand_dims(X_data, axis=0))
pred_ids = K.eval(K.ctc_decode(pred, [X_data.shape[0]], greedy=False, beam_width=10, top_paths=1)[0][0])
pred_ids = pred_ids.flatten().tolist()
print(''.join([id2char[i] for i in pred_ids]))
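For repeated use, the steps above can be bundled into a small helper. This is a convenience sketch of my own (not from the original article), assuming model, id2char, mfcc_mean, and mfcc_std are already loaded as above:
# Convenience wrapper around the recognition steps above
def transcribe(path, mfcc_dim=13):
    # Load the audio and trim silence, exactly as during training
    audio, sr = librosa.load(path)
    energy = librosa.feature.rmse(audio)
    frames = np.nonzero(energy >= np.max(energy) / 5)
    indices = librosa.core.frames_to_samples(frames)[1]
    audio = audio[indices[0]:indices[-1]] if indices.size else audio[0:0]
    # Extract and normalize the MFCC features
    x = (mfcc(audio, sr, numcep=mfcc_dim, nfft=551) - mfcc_mean) / (mfcc_std + 1e-14)
    # Predict and beam-search decode; -1 entries are CTC decode padding
    pred = model.predict(np.expand_dims(x, axis=0))
    ids = K.eval(K.ctc_decode(pred, [x.shape[0]],
                              greedy=False, beam_width=10, top_paths=1)[0][0])
    return ''.join(id2char[i] for i in ids.flatten().tolist() if i != -1)

print(transcribe('A2_103.wav'))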
At this point the whole program is complete. Below are the results of running it:
Source code:
pan.baidu.com/s/1tFlZkMJmrMTD05cd_zxmAg
Extraction code: ndrr
The dataset needs to be downloaded separately.
About the author:
李秋鍵 (Li Qiujian), CSDN blog expert and CSDN course author. He is a master's student at China University of Mining and Technology and has won awards in the TapTap competition, among other projects.