91.
Suppose we train a hard-margin linear SVM on n > 100 data points in R^2, yielding a hyperplane with exactly 2 support vectors. If we add one more data point and retrain the classifier, what is the maximum possible number of support vectors for the new hyperplane (assuming the n + 1 points remain linearly separable)?
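
For intuition, here is a minimal sketch using scikit-learn. The dataset, the added point, and the very large C used to emulate a hard margin are all assumptions for illustration, not part of the question:

```python
# Sketch: count support vectors of an (approximately) hard-margin linear SVM,
# then add one point and retrain. A very large C emulates the hard margin.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_pos = rng.normal(loc=[2.0, 2.0], scale=0.5, size=(60, 2))
X_neg = rng.normal(loc=[-2.0, -2.0], scale=0.5, size=(60, 2))
X = np.vstack([X_pos, X_neg])
y = np.array([1] * 60 + [-1] * 60)

clf = SVC(kernel="linear", C=1e10).fit(X, y)
print("support vectors per class:", clf.n_support_)

# Adding one point near the boundary and retraining can change
# which (and how many) points end up as support vectors.
X2 = np.vstack([X, [[0.3, 0.3]]])
y2 = np.append(y, 1)
clf2 = SVC(kernel="linear", C=1e10).fit(X2, y2)
print("after adding one point:", clf2.n_support_)
```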

92.
Which of the following can be true for selecting base learners for an ensemble?
1. Different learners can come from the same algorithm with different hyperparameters
2. Different learners can come from different algorithms
3. Different learners can come from different training spaces
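
As a hedged illustration of all three options, here is a sketch using scikit-learn; the estimator choices and synthetic data are assumptions for the example:

```python
# Sketch: three ways to diversify base learners in an ensemble.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, random_state=0)

# 1. Same algorithm, different hyperparameters (two trees of different depth).
# 2. Different algorithms entirely (a tree plus logistic regression).
voter = VotingClassifier([
    ("tree_shallow", DecisionTreeClassifier(max_depth=2)),
    ("tree_deep", DecisionTreeClassifier(max_depth=10)),
    ("logreg", LogisticRegression(max_iter=1000)),
]).fit(X, y)

# 3. Different training spaces: bagging trains each learner on a bootstrap
#    sample of the rows (and here a random half of the features).
bagger = BaggingClassifier(DecisionTreeClassifier(), n_estimators=10,
                           max_features=0.5, random_state=0).fit(X, y)
print(voter.score(X, y), bagger.score(X, y))
```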

93.
Suppose you have fitted a complex regression model on a dataset. Now you are using Ridge regression with tuning parameter lambda to reduce its complexity. Choose the option(s) below that describe the relationship of bias and variance with lambda.
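
For reference, lambda here corresponds to the alpha parameter in scikit-learn's Ridge. A minimal sketch with synthetic data showing the shrinkage effect (larger lambda shrinks coefficients toward zero, lowering variance at the cost of added bias):

```python
# Sketch: coefficients shrink as the Ridge penalty grows.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = X @ np.array([3.0, -2.0, 0.5, 0.0, 1.0]) + rng.normal(scale=0.5, size=100)

for alpha in [0.01, 1.0, 100.0]:        # alpha plays the role of lambda
    model = Ridge(alpha=alpha).fit(X, y)
    print(alpha, np.round(model.coef_, 3))
```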

95.
You've just finished training a decision tree for spam classification, and it is getting abnormally bad performance on both your training and test sets. You know that your implementation has no bugs, so what could be causing the problem?
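
One common cause of poor performance on both sets is an over-constrained tree that underfits. A hedged sketch with synthetic data (the depth limit and dataset are assumptions for illustration):

```python
# Sketch: a depth-limited tree underfits, scoring poorly on BOTH
# the training and the test set.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=20, n_informative=8,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

stump = DecisionTreeClassifier(max_depth=1).fit(X_tr, y_tr)
print("train:", stump.score(X_tr, y_tr), "test:", stump.score(X_te, y_te))
```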

96.
Point out the wrong statement.

98.
________ adopts a dictionary-oriented approach, associating a progressive integer with each category label.
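
As an illustration of the behavior described, scikit-learn's LabelEncoder is one class that works this way; the labels below are made up for the example:

```python
# Sketch: each distinct category label is mapped to a progressive integer.
from sklearn.preprocessing import LabelEncoder

labels = ["spam", "ham", "spam", "eggs"]
enc = LabelEncoder()
codes = enc.fit_transform(labels)
print(dict(zip(enc.classes_, range(len(enc.classes_)))))  # {'eggs': 0, 'ham': 1, 'spam': 2}
print(codes)  # [2 1 2 0]
```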

99.
Which of the following properties are characteristic of decision trees?
1. High bias
2. High variance
3. Lack of smoothness of prediction surfaces
4. Unbounded parameter set
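
Two of these properties are easy to see empirically. A sketch (synthetic data, assumptions for illustration): trees grown on different resamples of the same data can disagree noticeably (high variance), and their regression surfaces are step functions (lack of smoothness):

```python
# Sketch: refitting an unpruned tree on bootstrap resamples of the same
# data yields visibly different predictions at the same query points.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 1))
y = np.sin(X.ravel()) + rng.normal(scale=0.3, size=200)

grid = np.linspace(0, 10, 5).reshape(-1, 1)
for seed in (1, 2):
    idx = rng.integers(0, 200, size=200)      # bootstrap resample
    tree = DecisionTreeRegressor().fit(X[idx], y[idx])
    print(np.round(tree.predict(grid), 2))    # predictions differ per resample
```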

100.
What is backpropagation?
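
In short, backpropagation applies the chain rule to propagate the loss gradient backward through a network, layer by layer. A minimal NumPy sketch for a one-hidden-layer regression network (sizes, learning rate, and data are assumptions for illustration):

```python
# Sketch: forward pass, then backward pass via the chain rule,
# then a gradient-descent update.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))            # 8 samples, 3 features
y = rng.normal(size=(8, 1))            # regression targets
W1, W2 = rng.normal(size=(3, 4)), rng.normal(size=(4, 1))

for step in range(100):
    # Forward pass.
    h = np.tanh(X @ W1)                # hidden activations
    y_hat = h @ W2                     # network output
    loss = np.mean((y_hat - y) ** 2)

    # Backward pass (chain rule, layer by layer).
    d_yhat = 2 * (y_hat - y) / len(y)  # dLoss / d y_hat
    dW2 = h.T @ d_yhat                 # gradient w.r.t. W2
    d_h = d_yhat @ W2.T                # push the gradient back through W2
    d_pre = d_h * (1 - h ** 2)         # through tanh (derivative 1 - tanh^2)
    dW1 = X.T @ d_pre                  # gradient w.r.t. W1

    # Gradient-descent update.
    W1 -= 0.1 * dW1
    W2 -= 0.1 * dW2

print("final loss:", loss)
```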
