neural network - Does Word2Vec have a hidden layer?


While reading one of the papers by Tomas Mikolov: http://arxiv.org/pdf/1301.3781.pdf

I have a question about the continuous bag-of-words model section:

"The first proposed architecture is similar to the feedforward NNLM, where the non-linear hidden layer is removed and the projection layer is shared for all words (not just the projection matrix); thus, all words get projected into the same position (their vectors are averaged)."

I find people mentioning that there is a hidden layer in the word2vec model, but from my understanding there is only one projection layer in the model. Does the projection layer do the same work as a hidden layer?

My other question is: how is the input data projected onto the projection layer?

"the projection layer shared words (not projection matrix)", mean?

From the original paper, section 3.1, it is clear that there is no hidden layer:

"the first proposed architecture similar feedforward nnlm non-linear hidden layer removed , projection layer shared words".

With respect to the second question (what sharing the projection layer means): it means you consider one single vector, the centroid of the vectors of all the words in the context. Thus, instead of having n-1 word vectors as input, you consider a single vector. This is also why it is called continuous bag of words: word order is lost within the context of size n-1.
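
As a small illustration of that last point (a toy numpy sketch with made-up dimensions), permuting the context words leaves the centroid unchanged, so word order cannot influence what reaches the output layer:

```python
import numpy as np

rng = np.random.default_rng(0)
vecs = rng.normal(size=(4, 5))  # four context word vectors (toy dimensions)

centroid_a = vecs.mean(axis=0)                # context in one order
centroid_b = vecs[[2, 0, 3, 1]].mean(axis=0)  # same words, shuffled order

assert np.allclose(centroid_a, centroid_b)    # identical input to the output layer
```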

