21. The average squared difference between a classifier's predicted output and the actual output is known as:
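The quantity described here is the mean squared error (MSE). A minimal pure-Python sketch (the function name `mse` is my own choice, not from any library):

```python
def mse(predicted, actual):
    """Mean squared error: the average of the squared differences
    between predicted and actual outputs."""
    assert len(predicted) == len(actual)
    return sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(predicted)

# Example: predictions [2.5, 0.0, 2.0] vs. actual [3.0, -0.5, 2.0]
# squared errors are 0.25, 0.25, 0.0, so the MSE is 0.5 / 3.
```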
22. Which of the following methods is used to find the best-fit line for the data in linear regression?
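The standard answer is ordinary least squares. For a single feature the best-fit line y = m·x + b has a closed form: m = cov(x, y) / var(x) and b = mean(y) − m·mean(x). A sketch (the helper name `least_squares_line` is mine):

```python
def least_squares_line(xs, ys):
    """Ordinary least squares fit of y = m*x + b for one feature."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    # Unnormalised covariance and variance; the shared 1/n factors cancel.
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    m = cov / var
    b = my - m * mx
    return m, b

# For the points (0, 1), (1, 3), (2, 5), (3, 7) the exact fit is y = 2x + 1.
```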
23. Which of the following are descriptive models?
24. Assume that you are given a data set and a neural network model trained on the data set. You are asked to build a decision tree model with the sole purpose of understanding/interpreting the built neural network model. In such a scenario, which among the following measures would you concentrate most on optimising?
25. What are common feature selection methods in regression task?
26. Regarding bias and variance, which of the following statements are true? (Here 'high' and 'low' are relative to the ideal model.)
i. Models which overfit are more likely to have high bias
ii. Models which overfit are more likely to have low bias
iii. Models which overfit are more likely to have high variance
iv. Models which overfit are more likely to have low variance
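Overfitting models have low bias but high variance: they track each particular training set too closely, so their predictions swing widely across datasets. A pure-Python simulation sketch (the data-generating setup, model choices, and all names are my own illustration): a mean predictor (underfits) versus a 1-nearest-neighbour predictor (overfits), each retrained on many independent datasets.

```python
import random

random.seed(0)

def true_f(x):
    return 2.0 * x                        # the noiseless target function

def sample_dataset(n=20):
    xs = [random.uniform(0, 1) for _ in range(n)]
    ys = [true_f(x) + random.gauss(0, 1) for x in xs]
    return xs, ys

def predict_mean(xs, ys, x0):
    # Underfitting model: ignores x entirely (high bias, low variance).
    return sum(ys) / len(ys)

def predict_1nn(xs, ys, x0):
    # Overfitting model: memorises the training set (low bias, high variance).
    i = min(range(len(xs)), key=lambda i: abs(xs[i] - x0))
    return ys[i]

x0 = 0.9
mean_preds, nn_preds = [], []
for _ in range(500):                      # many independent training sets
    xs, ys = sample_dataset()
    mean_preds.append(predict_mean(xs, ys, x0))
    nn_preds.append(predict_1nn(xs, ys, x0))

def avg(v):
    return sum(v) / len(v)

def variance(v):
    m = avg(v)
    return sum((x - m) ** 2 for x in v) / len(v)

# The overfitting model shows higher variance across training sets,
# while the underfitting model's average prediction sits further from
# the true value true_f(x0) = 1.8 (higher bias).
var_mean, var_nn = variance(mean_preds), variance(nn_preds)
bias_mean = abs(avg(mean_preds) - true_f(x0))
bias_nn = abs(avg(nn_preds) - true_f(x0))
```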
27. Which of the following can only be used when the training data are linearly separable?
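The classic example is the perceptron: its training rule is guaranteed to converge only when the classes are linearly separable. A minimal sketch (the function name and the AND-function dataset are my own illustration):

```python
def train_perceptron(X, y, epochs=20):
    """Perceptron learning rule; labels y must be in {-1, +1}.
    Converges (zero training errors) iff the data are linearly separable."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        errors = 0
        for xi, yi in zip(X, y):
            # A point is misclassified when yi * (w . xi + b) <= 0.
            if yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b) <= 0:
                w = [wj + yi * xj for wj, xj in zip(w, xi)]
                b += yi
                errors += 1
        if errors == 0:          # a full pass with no mistakes: converged
            break
    return w, b

# The AND function is linearly separable, so the perceptron converges.
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [-1, -1, -1, 1]
w, b = train_perceptron(X, y)
```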
28. Wrapper methods are feature selection methods that
29. Given that we can select the same feature multiple times during the recursive partitioning of the input space, is it always possible to achieve 100% accuracy on the training data (given that we allow for trees to grow to their maximum size) when building decision trees?
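The answer hinges on a caveat: a fully grown tree reaches 100% training accuracy only if no two identical feature vectors carry different labels. A pure-Python sketch of recursive partitioning with threshold splits, grown until every leaf is pure (all helper names and the alternating-label dataset are my own illustration; the dataset forces the single feature to be split on more than once):

```python
def majority(labels):
    return max(set(labels), key=labels.count)

def impurity(labels):
    # Misclassification count if this node predicted its majority label.
    return len(labels) - labels.count(majority(labels))

def grow_tree(X, y):
    if len(set(y)) == 1:                 # pure leaf: stop
        return ("leaf", y[0])
    best = None
    for j in range(len(X[0])):           # features may be reused at any depth
        for t in set(row[j] for row in X):
            L = [i for i in range(len(X)) if X[i][j] <= t]
            R = [i for i in range(len(X)) if X[i][j] > t]
            if not L or not R:
                continue
            score = impurity([y[i] for i in L]) + impurity([y[i] for i in R])
            if best is None or score < best[0]:
                best = (score, j, t, L, R)
    if best is None:                     # identical rows, conflicting labels:
        return ("leaf", majority(y))     # 100% accuracy is impossible here
    _, j, t, L, R = best
    return ("node", j, t,
            grow_tree([X[i] for i in L], [y[i] for i in L]),
            grow_tree([X[i] for i in R], [y[i] for i in R]))

def predict(tree, row):
    while tree[0] == "node":
        _, j, t, left, right = tree
        tree = left if row[j] <= t else right
    return tree[1]

# One feature with alternating labels: a pure tree must split on that
# same feature several times, yet it still fits the training set exactly.
X = [[1], [2], [3], [4]]
y = [0, 1, 0, 1]
tree = grow_tree(X, y)
accuracy = sum(predict(tree, row) == label for row, label in zip(X, y)) / len(y)
```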
30. In many classification problems, the target dataset is made up of categorical labels that cannot be processed directly by most algorithms. An encoding is needed, and scikit-learn offers at least . . . . . . . . valid options
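scikit-learn's `preprocessing` module provides several such encoders, including `LabelEncoder`, `OrdinalEncoder`, `OneHotEncoder`, and `LabelBinarizer`. A pure-Python sketch of the two underlying ideas (the toy labels and variable names are my own illustration, not scikit-learn's API):

```python
labels = ["cat", "dog", "cat", "bird"]
classes = sorted(set(labels))                       # ['bird', 'cat', 'dog']

# Integer encoding (the idea behind LabelEncoder / OrdinalEncoder):
# map each category to an index.
to_int = {c: i for i, c in enumerate(classes)}
int_encoded = [to_int[c] for c in labels]           # [1, 2, 1, 0]

# One-hot encoding (the idea behind OneHotEncoder / LabelBinarizer):
# one indicator column per category.
one_hot = [[1 if c == cls else 0 for cls in classes] for c in labels]
```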