50 recent changes in Addis2023 Web retrieved at 00:48 (GMT)

TeachingMaterial
Teaching Material * Introductory slides for the course * Daniel Jurafsky and James H. Martin (forthcoming), Speech and Language Processing. This link points to ...
ScheDule
New Schedule. Schedule for each session: 15.15-16.45 and 17.00-18.30. The session on May 8th will most probably be a short one (15.15-16.45). All times ...
QA13b
Q: In backtranslation, when we translate the target language back to the source language, wouldn't that be duplication? For example, suppose we have parallel input "how ...
QA10
Q: Do we capture long-term dependencies in n-gram language models? LE A: No, that is impossible. Everything beyond the n-gram history is invisible to the model. ...
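The answer above can be illustrated with a short sketch: in a bigram model (n = 2), the prediction of the next word conditions only on the single preceding token, so anything earlier in the sentence cannot influence it. The tiny corpus and helper functions below are illustrative, not from the course materials.

```python
from collections import defaultdict, Counter

def train_bigram(corpus):
    """Count bigram frequencies from a list of token lists."""
    counts = defaultdict(Counter)
    for sent in corpus:
        tokens = ["<s>"] + sent + ["</s>"]
        for prev, cur in zip(tokens, tokens[1:]):
            counts[prev][cur] += 1
    return counts

def bigram_prob(counts, prev, word):
    """P(word | prev): everything before `prev` is invisible to the model."""
    total = sum(counts[prev].values())
    return counts[prev][word] / total if total else 0.0

corpus = [["the", "dog", "barks"], ["the", "cat", "sleeps"]]
counts = train_bigram(corpus)
# The probability of "barks" depends only on the immediately preceding
# token "dog", not on any earlier context.
print(bigram_prob(counts, "dog", "barks"))  # 1.0
print(bigram_prob(counts, "the", "dog"))    # 0.5
```

Extending the history to trigrams or 4-grams only pushes the horizon out by a token or two; dependencies beyond n-1 tokens remain invisible by construction.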
QA13a
Q: Is it unigram tokenization or BPE that works better for MT? Why? DM A: See Jurafsky & Martin, page 254. Q: I need more clarification (working principles) of Data ...
QSM14
Questions for self monitoring * What's the benefit of using dense document representations for IR? * How can dense vectors be generated for IR? * What are ...
QSM16
Questions for self monitoring * What are typical characteristics of ASR tasks? * How can the encoder decoder architecture be applied to speech recognition? ...
GuidelinesForTheEssay
Guidelines for Writing the Outline Your outline should contain * the research question(s) you want to address with your essay * a short motivation, why this ...
QA9a
Q: The static nature of the context vector in the encoder decoder model is the bottleneck, since it must represent everything about the meaning of the source text ...
QA9b
Q: How do we systematically determine the learning rate to avoid overshooting and slow convergence to the optimal point in RNN learning? (MW) A: If the learning rate ...
QA8
Q: What would happen if an unseen vocabulary item occurs in the middle of the sequence (in POS labeling) in a hidden Markov model? (MW) A: In a pure HMM the emission probability ...
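The truncated answer can be made concrete with a minimal sketch: in an unsmoothed HMM, a word never seen with any tag gets emission probability 0, which zeroes out every Viterbi path passing through it; smoothing restores a small nonzero probability. The toy emission table and add-k setup below are hypothetical, not the course's model.

```python
# Toy emission table for two tags; a pure (unsmoothed) HMM assigns zero
# probability to any word it never saw during training.
emission = {
    "NOUN": {"dog": 0.6, "cat": 0.4},
    "VERB": {"runs": 0.7, "barks": 0.3},
}

def emit_prob(tag, word, smooth=0.0, vocab_size=4):
    """Emission probability with optional add-k smoothing (illustrative)."""
    p = emission[tag].get(word, 0.0)
    if smooth:
        # add-k smoothing redistributes mass so unseen words get p > 0
        p = (p + smooth) / (1.0 + smooth * vocab_size)
    return p

print(emit_prob("NOUN", "zebra"))              # 0.0 — would kill every Viterbi path
print(emit_prob("NOUN", "zebra", smooth=0.1))  # small but nonzero
```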
QSM5
Questions for self monitoring * What's the difference between generative and discriminative models? * Why is the sigmoid function used in the logistic regression ...
QSM7
Questions for self monitoring * What are the similarities and differences between a binary logistic classifier and a neural network composed of a single unit with ...
QSM8
Questions for Self monitoring: Sequence labeling with HMM * What's the difference between POS tagging and named entity recognition? * How can a segment labeling ...
QSM9b
Questions for Self monitoring: Encoder Decoder Architecture and Attention * Why can't machine translation be treated as a sequence or segment labeling task? * ...
QSM13a
Questions for Self Monitoring: Machine Translation (1) * What difficulties does translation face? * Why is a simple recurrent network not sufficient ...
QSM13b
Questions for Self monitoring: Machine Translation (2) * How can backtranslations help to alleviate data sparsity? * Is there an alternative to the use of backtranslations ...
QSM10
Questions for self monitoring: Transformers * What is the core idea of attention based computing? * How is attention used in encoder decoder models? * How ...
QA7
Q: Can we say feedforward networks are better than other deep learning architectures, or vice versa, without knowing the application? YM A: Feed forward networks can ...
QA5
22.2.2023 ch. 5: Logistic regression Q: When we use logistic regression for sentiment analysis, it uses the probability of positive and negative words; how about words ...
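A minimal sketch of the mechanism discussed above: logistic regression sums per-word weights and passes the result through a sigmoid, so words that carry no sentiment weight simply leave the score unchanged. The weights below are hypothetical stand-ins for learned parameters, not values from the course.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical per-word weights as logistic regression might learn them:
# positive words get positive weight, negative words negative weight,
# and words absent from the model contribute nothing (weight 0).
weights = {"great": 1.5, "terrible": -2.0, "boring": -0.8}
bias = 0.1

def p_positive(tokens):
    """P(positive | document) under a bag-of-words logistic model."""
    z = bias + sum(weights.get(t, 0.0) for t in tokens)
    return sigmoid(z)

print(p_positive(["a", "great", "movie"]))     # > 0.5 → classified positive
print(p_positive(["a", "terrible", "movie"]))  # < 0.5 → classified negative
```

Note that "a" and "movie" have no entry in the weight table: they neither help nor hurt either class, which is how such a model handles non-sentiment words.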
QSM9a
Questions for Self monitoring: RNN and LSTM * What's the difference between recurrent and recursive NN? * What are the benefits of RNN compared to window-based ...
CourseStructure
Learning Goals Based on the experience from earlier runs of the course I have designed this year's edition to serve two different goals * impart fundamental concepts ...
QA2
Q: Is lemmatization used in current NLP tasks (since it is a bit complex in morphologically complex languages like Amharic)? What if we use the word as it is and represent ...
QA3
Q: How can we integrate syntactic structure and semantic analysis (similarity of words) into an n-gram language model? (MW) A: n-gram models only describe the probability of ...
QA6
Q: What are the benefits of using vector semantics for NLP? AH A: Word embeddings are the ideal form of input to neural networks. In comparison to one hot vectors ...
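The contrast in the answer above can be shown with a short sketch: under one-hot encoding every pair of distinct words has cosine similarity 0, while dense embeddings place related words close together. The two-dimensional vectors below are made-up illustrations, not trained embeddings.

```python
import math

# One-hot vectors: every pair of distinct words is equally dissimilar.
vocab = ["cat", "dog", "car"]
one_hot = {w: [1.0 if i == j else 0.0 for j in range(len(vocab))]
           for i, w in enumerate(vocab)}

# Hypothetical dense embeddings (in practice learned, e.g. with word2vec):
dense = {"cat": [0.9, 0.1], "dog": [0.8, 0.2], "car": [0.1, 0.9]}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

print(cosine(one_hot["cat"], one_hot["dog"]))  # 0.0 — no similarity signal
print(cosine(dense["cat"], dense["dog"]))      # high — related words are close
print(cosine(dense["cat"], dense["car"]))      # low — unrelated words are far
```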
WebHome
CIT 831 course on Natural Language Processing Welcome to the webpage of the course on Natural Language Processing held at the PhD School on Information Technology ...
WebAtom
Foswiki's Addis2023 web
WebLeftBar
* ** * Index * Changes * Notifications * Preferences
WebNotify
* .WikiGuest * .WikiGuest example@your.company
WebPreferences
Addis2023 Web Preferences Appearance * Set WEBBGCOLOR = #efefef * web specific background color, current color * Set SITEMAPLIST = on * set to ...
WebRss
Foswiki's Addis2023 web
Number of topics: 37
