⟩ Fresh Natural Language Processing Engineer Job Interview Questions
☛ What is an intuitive explanation of recurrent neural networks (RNNs)?
☛ How do RNNs store ‘memory’? (See the RNN sketch after this list.)
☛ What are encoder-decoder models in recurrent neural networks?
☛ Why do Recurrent Neural Networks (RNNs) combine the input and hidden state together rather than processing them separately?
☛ What is an intuitive explanation of LSTMs and GRUs?
☛ Are GRUs (Gated Recurrent Units) a special case of LSTMs?
☛ How many time-steps can LSTM RNNs remember inputs for?
☛ How does an attention model work with an LSTM?
☛ How do RNNs differ from Markov Chains?
☛ For modelling sequences, what are the pros and cons of using Gated Recurrent Units in place of LSTMs?
☛ What exactly is the attention mechanism introduced to recurrent neural networks (RNNs)? (A minimal attention sketch follows this list.)
☛ Is there an intuitive or simple explanation of how attention works in deep learning models such as LSTMs and GRUs?
☛ Why is it a problem to have exploding gradients in a neural net (especially in an RNN)?
☛ For a sequence-to-sequence model in RNN, does the input have to contain only sequences or can it accept contextual information as well?
☛ Can generative adversarial networks (GANs) be applied to sequential data with recurrent neural networks? How effective would they be?
☛ What is the difference between states and outputs in an LSTM? (See the LSTM sketch after this list.)
☛ What is the advantage of combining Convolutional Neural Network (CNN) and Recurrent Neural Network (RNN)?
☛ Which is better for text classification: CNN or RNN?
☛ How are recurrent neural networks different from convolutional neural networks?
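Several of the RNN questions above come up easier with a concrete toy implementation in hand. The following is a minimal sketch of a vanilla RNN step in NumPy, assuming illustrative sizes and variable names (they are not from any particular library). It shows both how the hidden state acts as ‘memory’ and why the input and previous hidden state are combined into a single transformation.

```python
import numpy as np

rng = np.random.default_rng(0)
input_size, hidden_size = 4, 8

# One weight matrix acts on the concatenated [h_prev, x_t]. This is
# mathematically equivalent to W_h @ h_prev + W_x @ x_t, which is why
# RNN cells are usually written with the two combined rather than kept separate.
W = rng.normal(scale=0.1, size=(hidden_size, hidden_size + input_size))
b = np.zeros(hidden_size)

def rnn_step(h_prev, x_t):
    """One time step: the new hidden state is a function of the previous
    hidden state and the current input, so information persists over time."""
    return np.tanh(W @ np.concatenate([h_prev, x_t]) + b)

h = np.zeros(hidden_size)            # initial "memory" is empty
sequence = rng.normal(size=(5, input_size))
for x_t in sequence:
    h = rnn_step(h, x_t)             # h now summarizes everything seen so far
print(h.shape)                       # (8,)
```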
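For the states-versus-outputs question, here is a minimal sketch of one LSTM step, again in NumPy with assumed shapes and names. The point it makes concrete: the cell state c_t is the long-term memory, while the hidden state h_t is both the short-term state and the output the cell emits at each step.

```python
import numpy as np

rng = np.random.default_rng(1)
input_size, hidden_size = 4, 8

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Four learned maps, each acting on the concatenated [h_prev, x_t].
W_f, W_i, W_o, W_c = (
    rng.normal(scale=0.1, size=(hidden_size, hidden_size + input_size))
    for _ in range(4)
)

def lstm_step(h_prev, c_prev, x_t):
    z = np.concatenate([h_prev, x_t])
    f = sigmoid(W_f @ z)             # forget gate: what to erase from c
    i = sigmoid(W_i @ z)             # input gate: what to write into c
    o = sigmoid(W_o @ z)             # output gate: what to expose as h
    c_tilde = np.tanh(W_c @ z)       # candidate memory content
    c = f * c_prev + i * c_tilde     # cell state: additive update eases gradient flow
    h = o * np.tanh(c)               # hidden state doubles as the step's output
    return h, c

h = np.zeros(hidden_size)
c = np.zeros(hidden_size)
for x_t in rng.normal(size=(5, input_size)):
    h, c = lstm_step(h, c, x_t)
```

A GRU, by contrast, merges the cell and hidden state into one vector and uses two gates instead of three, which is why it is often discussed as the lighter-weight alternative in the GRU-vs-LSTM questions above.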
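Finally, for the attention questions, a minimal sketch of dot-product attention over a stack of encoder hidden states, with illustrative dimensions and names assumed. The intuition: the decoder state scores every encoder state, a softmax turns the scores into weights, and the context vector is the resulting weighted average.

```python
import numpy as np

rng = np.random.default_rng(2)
seq_len, hidden_size = 6, 8

encoder_states = rng.normal(size=(seq_len, hidden_size))  # one row per input step
decoder_state = rng.normal(size=hidden_size)              # current query

scores = encoder_states @ decoder_state                   # one relevance score per step
weights = np.exp(scores - scores.max())
weights /= weights.sum()                                  # softmax: weights sum to 1
context = weights @ encoder_states                        # weighted average of encoder states

print(weights.round(3))   # which input steps the decoder "attends" to
print(context.shape)      # (8,)
```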