Sample exam questions for CSC785: Introduction to Neural Networks


    The exam will include problems similar to the ones below. Please use them together with the exercises in the book to prepare for the exam.

  1. Describe four learning rules of your choice. For each learning rule discuss:
    1. the type of learning (e.g., supervised, unsupervised)
    2. the weight changing rule (a sketch of one such rule is given after this question)
    3. its main features
    4. its advantages and disadvantages
    5. what the rule achieves
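
    For illustration, here is a minimal sketch of one weight changing rule, the delta (Widrow-Hoff/LMS) rule, for a single linear unit. The learning rate, toy patterns, and targets below are invented placeholders, not values from the course.

        import numpy as np

        # Delta (Widrow-Hoff / LMS) rule for a single linear neuron:
        #   w <- w + eta * (t - y) * x, with output y = w . x
        # eta, X, and T are illustrative assumptions.
        eta = 0.1                               # learning rate (assumed)
        X = np.array([[0.0, 1.0], [1.0, 0.0]])  # toy input patterns
        T = np.array([1.0, 0.0])                # toy targets
        w = np.zeros(2)                         # initial weights

        for epoch in range(20):
            for x, t in zip(X, T):
                y = w @ x                       # linear output
                w += eta * (t - y) * x          # delta-rule update

        print(w)  # approaches [0, 1], the exact solution for this toy data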

  2. Discuss the main categories of neural network applications. Describe each category, discuss the relevant issues, and give the advantages and disadvantages of each type of application.

  3. Describe the X neural network paradigm. Address the following (a minimal example of one paradigm is sketched after this question):
    1. Type of learning
    2. Network structure
    3. Network training
    4. Advantages and disadvantages
    5. Training and generalization issues
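
    As one concrete instantiation (assuming X = the perceptron paradigm; the AND-gate data and learning rate are invented for illustration), a minimal training loop might look like:

        import numpy as np

        # Perceptron rule for a single hard-limit unit:
        #   y = step(w . x + b);  w <- w + eta * (t - y) * x
        # Supervised learning; weights change only on misclassified patterns.
        eta = 1.0                                          # assumed learning rate
        X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
        T = np.array([0, 0, 0, 1], dtype=float)            # AND-gate targets

        w, b = np.zeros(2), 0.0
        for epoch in range(10):
            for x, t in zip(X, T):
                y = 1.0 if (w @ x + b) > 0 else 0.0        # hard-limit activation
                w += eta * (t - y) * x                     # error-driven update
                b += eta * (t - y)

        print(w, b)  # a separating hyperplane for the AND patterns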

  4. Discuss training and generalization issues in neural network learning. Address issues such as the number of weights and the number of patterns, and relate them to the probability of implementing a function that generalizes well.
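
    As a hedged reference point (the specific bound is my addition, not taken from the course material): a widely quoted rule of thumb, related to the Baum-Haussler result, links the number of training patterns N, the number of weights W, and the tolerated generalization error \epsilon:

        N \gtrsim \frac{W}{\epsilon}

    so with \epsilon = 0.1 (roughly 90% correct on unseen data) one would want on the order of ten times as many training patterns as weights.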

  5. Compare X and Y neural network paradigms.

  6. Compare A and B learning rules.
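
    For instance, taking A = supervised Hebbian learning and B = the delta rule (my own instantiation of A and B), the two weight updates can be set side by side, with \eta the learning rate, x_j the input, t_i the target, and y_i the actual output:

        \Delta w_{ij} = \eta \, t_i \, x_j               % supervised Hebbian
        \Delta w_{ij} = \eta \, (t_i - y_i) \, x_j       % delta rule

    The Hebbian update keeps growing the weights whenever input and target are correlated, while the delta rule stops changing them once the error t_i - y_i reaches zero.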

  7. What are the main advantages and disadvantages of neural networks with respect to other artificial intelligence techniques (e.g., expert systems, formal logic deduction, automated theorem-proving systems)?

  8. What are the main disadvantages and potential problems of standard backpropagation?
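
    One concrete instance of such a problem (my choice of example) is the vanishing gradient effect: the logistic sigmoid's derivative is at most 0.25, so the backpropagated error can shrink geometrically with depth. A back-of-the-envelope check:

        # s(a) = 1 / (1 + exp(-a)) has derivative s'(a) = s(a)(1 - s(a)) <= 0.25.
        # The backpropagated error picks up one such factor per layer, so through
        # L layers it can shrink by up to 0.25**L (weights ignored, illustration only).
        for L in (1, 5, 10):
            print(L, 0.25 ** L)   # 0.25, ~0.00098, ~9.5e-07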

  9. Describe enhancement W of standard backpropagation (a sketch of one such enhancement, momentum, follows question 11 below).

  10. Briefly describe the main enhancements of standard backpropagation.

  11. Compare enhancements W and Z of standard backpropagation.
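
    As an illustration for the last three questions (taking W = momentum, with a made-up quadratic loss and assumed hyperparameters), the momentum enhancement replaces the plain gradient step w <- w - eta * grad with a running velocity term:

        import numpy as np

        # Plain gradient descent:  w <- w - eta * grad
        # With momentum:           v <- mu * v - eta * grad;  w <- w + v
        # eta, mu, and the toy loss 0.5 * ||w - w_opt||^2 are assumptions.
        eta, mu = 0.1, 0.9
        w_opt = np.array([1.0, -2.0])

        def grad(w):
            return w - w_opt            # gradient of the toy quadratic loss

        w = np.zeros(2)
        v = np.zeros(2)                 # velocity: a decaying sum of past gradients
        for step in range(100):
            v = mu * v - eta * grad(w)  # momentum smooths and accelerates descent
            w = w + v

        print(w)  # close to w_opt; the velocity term damps oscillations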

  12. Describe how CBD (constraint based decomposition) solves the problem above. Write the expression of the solution and draw several pictures explaining how the training proceeds.

Note:
X and Y can be (without limitation): standard backpropagation, Widrow-Hoff, perceptron, supervised Hebbian, constraint based decomposition, etc.
A and B can be (without limitation): Hebbian learning, delta rule, Widrow-Hoff rule, etc.
W and Z can be (without limitation): momentum, Newton's method, conjugate gradient, variable learning rate, simulated annealing, etc.