May 2, 2025

Discussed here:

Module 1: REINFORCEMENT OVERVIEW

Syllabus: Course logistics and overview. Origin and history of Reinforcement Learning research. Its connections with other related fields and with different branches of machine learning. The Reinforcement Learning Problem - Elements of Reinforcement Learning - Limitations and Scope - Examples - Extended Example: Tic-Tac-Toe. Are ‘representation learning’ and ‘reinforcement learning’ the same terms? - Future Trends - Deep reinforcement learning.

Important Questions:

  1. What is Reinforcement Learning? Define its key components (agent, environment, state, action, reward, policy, value function).
  2. Are Representation Learning and Reinforcement Learning the same? Compare and explain their roles in ML. (Prev Year Q1)

  3. What are the main challenges and limitations of Reinforcement Learning? In which domains is it most effective? (Prev Year Q2)

  4. How is RL related to Supervised/Unsupervised Learning, Control Theory, and Operations Research? (Prev Year Q11)

  5. Explain how Tic-Tac-Toe can be solved using Reinforcement Learning. (Prev Year Q5) (See the sketch after this list.)

  6. What are some real-world applications of RL?
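
As a companion to question 5, here is a minimal sketch of the tabular value-function approach from the module's extended Tic-Tac-Toe example (in the spirit of Sutton and Barto, Chapter 1). The board encoding, hyperparameters, and the random opponent are illustrative assumptions, not taken from the syllabus.

```python
# Minimal sketch: tabular value learning for Tic-Tac-Toe.
# All constants and the opponent policy are illustrative assumptions.
import random

ALPHA, EPSILON = 0.1, 0.1          # TD step size; exploration rate
LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
V = {}                             # board state -> estimated win probability for 'X'

def winner(s):
    for a, b, c in LINES:
        if s[a] != " " and s[a] == s[b] == s[c]:
            return s[a]
    return None

def value(s):
    w = winner(s)
    if w == "X": return 1.0        # win for the learner
    if w == "O": return 0.0        # loss
    if " " not in s: return 0.5    # draw
    return V.get(s, 0.5)           # unseen non-terminal states default to 0.5

def episode():
    """One game: 'X' learns by TD(0) over board states; 'O' plays randomly."""
    s, prev = (" ",) * 9, None
    while winner(s) is None and " " in s:
        # X moves: epsilon-greedy over the boards reachable in one move.
        opts = [s[:i] + ("X",) + s[i+1:] for i in range(9) if s[i] == " "]
        s = random.choice(opts) if random.random() < EPSILON else max(opts, key=value)
        # O replies at random if the game is still going.
        if winner(s) is None and " " in s:
            j = random.choice([j for j in range(9) if s[j] == " "])
            s = s[:j] + ("O",) + s[j+1:]
        if prev is not None:
            # TD(0) backup: shift the previous state's value toward its successor's.
            # (The classic scheme skips backups through exploratory moves;
            # that refinement is omitted here for brevity.)
            V[prev] = value(prev) + ALPHA * (value(s) - value(prev))
        prev = s

for _ in range(20000):
    episode()
```

After many episodes, playing greedily with respect to V gives a strong policy against this particular opponent; the key idea is the temporal-difference backup, which needs no model of the opponent.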

Module 2: REINFORCEMENT DECISION PROCESS AND POLICIES

Syllabus: Markov Decision Process: Introduction to Markov decision processes (MDPs), state and action value functions. Bellman expectation equations, optimality of value functions and policies, Bellman optimality equations. Policy gradient methods - Reducing variance in policy gradient estimates.

Important Questions:

  1. What is a Markov Decision Process (MDP)? Explain with an example (e.g., chess).

  2. Define state-value and action-value functions.

  3. What is the Bellman Expectation Equation and how does it lead to the Bellman Optimality Equation?

  4. What does it mean for a value function and policy to be optimal? (Prev Year Q12)

  5. What are the methods to derive an optimal policy? (e.g., value iteration, policy iteration, with examples; see the value-iteration sketch after this list)

  6. What is the exploration vs. exploitation trade-off in Q-learning? (Prev Year Q13)

  7. Compare Policy Gradient methods and Value-based methods (like Q-learning): advantages and disadvantages. (Prev Year Q3)
  8. What are some real-world applications of Policy Gradient methods in robotics, game playing, and continuous control? (Prev Year Q6)
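
For question 5, here is a minimal value-iteration sketch that repeatedly applies the Bellman optimality backup V(s) ← max_a Σ_{s'} P(s'|s,a) [R(s,a,s') + γ V(s')] until it converges. The two-state MDP below is an invented toy example, not from the syllabus.

```python
# Minimal value-iteration sketch on an invented 2-state MDP.
GAMMA, THETA = 0.9, 1e-8

# P[s][a] = list of (probability, next_state, reward) transitions.
P = {
    "s0": {"stay": [(1.0, "s0", 0.0)],
           "go":   [(0.8, "s1", 5.0), (0.2, "s0", 0.0)]},
    "s1": {"stay": [(1.0, "s1", 1.0)],
           "go":   [(1.0, "s0", 0.0)]},
}

V = {s: 0.0 for s in P}
while True:
    delta = 0.0
    for s in P:
        # Bellman optimality backup: best one-step lookahead value.
        q = {a: sum(p * (r + GAMMA * V[s2]) for p, s2, r in P[s][a])
             for a in P[s]}
        best = max(q.values())
        delta = max(delta, abs(best - V[s]))
        V[s] = best
    if delta < THETA:          # stop once V is (nearly) a fixed point
        break

# The optimal policy acts greedily with respect to the converged V.
policy = {s: max(P[s], key=lambda a: sum(p * (r + GAMMA * V[s2])
                                         for p, s2, r in P[s][a]))
          for s in P}
print(V, policy)
```

Policy iteration reaches the same fixed point by alternating full policy evaluation with greedy policy improvement.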

Module 3: REINFORCEMENT ALGORITHMS AND APPLICATIONS

Syllabus: Algorithms for control learning, Q-learning. Discrete action space: SARSA(λ), DQN (Deep Q-Network). Continuous action space: Deep Deterministic Policy Gradient (DDPG), Asynchronous Advantage Actor-Critic algorithm (A3C).

Important Questions:

  1. Explain Q-learning with an example. What is the discount factor (γ) and how does it affect learning? (See the tabular sketch after this list.)

  2. What is SARSA(λ) and how does it differ from standard SARSA? (Prev Year Q4)

  3. Explain the Deep Q-Network (DQN) algorithm.

  4. What is the DDPG (Deep Deterministic Policy Gradient) algorithm?

  5. What is A3C (Asynchronous Advantage Actor-Critic)?
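
For question 1, here is a minimal tabular Q-learning sketch on an invented five-state corridor (step left or right; reward 1 at the right end). The environment and hyperparameters are illustrative assumptions.

```python
# Minimal tabular Q-learning sketch on an invented corridor environment.
import random

N, GAMMA, ALPHA, EPSILON = 5, 0.9, 0.5, 0.1
Q = {(s, a): 0.0 for s in range(N) for a in (-1, 1)}

def step(s, a):
    """Move within the corridor; reward 1 for reaching the right end."""
    s2 = min(max(s + a, 0), N - 1)
    return s2, (1.0 if s2 == N - 1 else 0.0), s2 == N - 1

for _ in range(500):                 # episodes
    s, done = 0, False
    while not done:
        # Epsilon-greedy action selection: explore vs. exploit.
        if random.random() < EPSILON:
            a = random.choice((-1, 1))
        else:
            a = max((-1, 1), key=lambda a: Q[(s, a)])
        s2, r, done = step(s, a)
        # Off-policy TD target: max over next-state actions, not the
        # action actually taken next (this is what distinguishes
        # Q-learning from SARSA).
        target = r if done else r + GAMMA * max(Q[(s2, -1)], Q[(s2, 1)])
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])
        s = s2

# Greedy policy per state after learning.
print({s: max((-1, 1), key=lambda a: Q[(s, a)]) for s in range(N)})
```

γ controls how strongly distant rewards shape the estimates: with γ near 0 the agent is myopic, while γ near 1 makes the reward at the far end propagate back through the whole corridor.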

Module 4: REPRESENTATION LEARNING OVERVIEW

Syllabus: Machine learning on graphs. Background and Traditional Approaches - Graph Statistics and Kernel Methods, Neighborhood Overlap Detection, Graph Laplacians and Spectral Methods, Neighborhood Reconstruction Methods, Multi-relational Data and Knowledge Graphs.

Important Questions:

  1. What is Representation Learning on Graphs?

  2. How do neighborhood reconstruction methods help in node classification? (Prev Year Q7)

  3. What are the advantages of kernel methods in graph-based learning? (Prev Year Q8)

  4. How does multi-relational data help in link prediction in knowledge graphs? (Prev Year Q14) (Traditional neighborhood-overlap baselines for link prediction are sketched after this list.)
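
Related to questions 2 and 4: before learned embeddings, link prediction was commonly scored with neighborhood-overlap statistics such as common neighbors and Jaccard similarity. A minimal sketch on an invented toy graph:

```python
# Minimal neighborhood-overlap sketch for link prediction.
# The toy graph is an invented example.
from itertools import combinations

edges = {("a", "b"), ("b", "c"), ("a", "c"), ("c", "d"), ("d", "e")}
nodes = {u for e in edges for u in e}
# Neighbor sets built from the undirected edge list.
nbrs = {u: {v for e in edges for v in e if u in e and v != u} for u in nodes}

def common_neighbors(u, v):
    return len(nbrs[u] & nbrs[v])

def jaccard(u, v):
    union = nbrs[u] | nbrs[v]
    return len(nbrs[u] & nbrs[v]) / len(union) if union else 0.0

# Score every non-edge pair: higher overlap suggests a likelier future link.
for u, v in combinations(sorted(nodes), 2):
    if (u, v) not in edges and (v, u) not in edges:
        print(u, v, common_neighbors(u, v), round(jaccard(u, v), 3))
```

In a knowledge graph the same idea extends to multi-relational data: counting overlapping neighbors per relation type gives richer evidence for whether a specific typed edge should exist.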

Module 5: GRAPH NEURAL NETWORK

Syllabus: The Graph Neural Network Model, Neural Message Passing, Generalized Neighborhood Aggregation, Generalized Update Methods, Graph Pooling, Graph Neural Networks in Practice, GNNs and Graph Convolutions.

Important Questions:

  1. What is a Graph Neural Network (GNN)?

  2. Explain neural message passing and how it enables information propagation in graphs. (Prev Year Q9) (A one-layer sketch follows this list.)

  3. What is generalized neighborhood aggregation?

  4. What is graph pooling and how is it used?

  5. How do GNNs differ from CNNs and fully connected layers (FCLs)?

  6. What are some real-world applications of GNNs (e.g., social networks, recommender systems)? (Prev Year Q10)

  7. What are some emerging trends in GNNs, such as attention mechanisms and scalability improvements? (Prev Year Q15)
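
For question 2, here is a minimal one-round neural message-passing sketch in NumPy: each node aggregates (sums) its neighbors' features, then applies shared linear maps and a ReLU. The toy graph, feature dimensions, and weight names (W_self, W_nbr) are illustrative assumptions.

```python
# Minimal neural message-passing sketch (one GNN layer) in NumPy.
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[0, 1, 1, 0],        # adjacency matrix of a 4-node toy graph
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H = rng.normal(size=(4, 3))        # initial node features (d = 3)
W_self = rng.normal(size=(3, 3))   # transform for a node's own state
W_nbr = rng.normal(size=(3, 3))    # transform for aggregated neighbor messages

def message_passing_layer(A, H):
    M = A @ H                                       # AGGREGATE: sum of neighbor features
    return np.maximum(0.0, H @ W_self + M @ W_nbr)  # UPDATE: ReLU of combined signals

# Stacking k rounds lets information propagate k hops through the graph.
H1 = message_passing_layer(A, H)
H2 = message_passing_layer(A, H1)
print(H2.shape)    # (4, 3): one embedding per node
```

This k-hop propagation is also the cleanest answer to question 5: a fully connected layer sees each node in isolation, a CNN assumes a fixed grid neighborhood, while a GNN shares the same aggregate-and-update weights over arbitrary graph neighborhoods.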

Nancy Kumari RL Paper

  1. Are representation learning and reinforcement learning the same? Explain the differences and their respective roles in machine learning
  2. What are the main challenges and limitations of Reinforcement Learning? In which domains is RL most effectively applied?
  3. Compare policy gradient methods with value-based methods like Q-learning. What are the advantages and disadvantages of each?
  4. Explain the SARSA(λ) algorithm. How does it differ from standard SARSA?
  5. Explain how Reinforcement Learning can be applied to solve the Tic-Tac-Toe game. What learning approach is used?
  6. Discuss real-world applications where policy gradient methods are used. How are they applied in robotics, game playing, and continuous control tasks?
  7. How do neighborhood reconstruction methods help in node classification tasks in graphs?
  8. What are the advantages of using kernel methods in graph-based machine learning?
  9. Explain the concept of neural message passing in GNNs. How does it enable information propagation in graphs?
  10. Discuss real-world applications of GNNs in areas such as social networks and recommendation systems.
  11. How is Reinforcement Learning related to other fields such as supervised learning, unsupervised learning, control theory, and operations research?
  12. What does it mean for a value function and policy to be optimal? How does an optimal policy lead to the best possible rewards in an MDP?
  13. What is the role of the exploration-exploitation trade-off in Q-learning?
  14. How does multi-relational graph data contribute to link prediction tasks in knowledge graphs?
  15. What are some emerging trends in Graph Neural Networks? Discuss advancements such as attention-based GNNs and scalability improvements.