## 41. What would you do in PCA to get the same projection as SVD?
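A quick sketch of the idea behind this question: if the data matrix is centered (the column means subtracted) before taking the SVD, the right singular vectors coincide with the principal axes, so projecting onto them reproduces the PCA projection. This is an illustrative demo on random data, not part of the original question:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))

# PCA: eigendecomposition of the covariance matrix of the *centered* data
Xc = X - X.mean(axis=0)
eigvals, eigvecs = np.linalg.eigh(Xc.T @ Xc / (len(X) - 1))
order = np.argsort(eigvals)[::-1]          # sort components by variance
pca_proj = Xc @ eigvecs[:, order]

# SVD of the same centered matrix: right singular vectors = principal axes
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
svd_proj = Xc @ Vt.T
```

The two projections agree column by column up to an arbitrary sign flip, which is the expected ambiguity in the direction of each principal axis.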

## 42. The ________ of the hyperplane depends upon the number of features.

## 43. What is the approach of basic algorithm for decision tree induction?

## 44. Can we extract knowledge without applying feature selection?

## 45. Suppose there are 25 base classifiers, each with an error rate of e = 0.35, combined by majority voting. What is the probability that the ensemble of these 25 classifiers makes a wrong prediction? Note: all classifiers are independent of each other.
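The arithmetic behind this question can be worked out directly: under majority voting with independent classifiers, the ensemble errs only when at least 13 of the 25 base classifiers err, which is a binomial tail sum. A minimal sketch (the function name is illustrative):

```python
from math import comb

def ensemble_error(n=25, e=0.35):
    """Probability that a majority of n independent classifiers,
    each with error rate e, are simultaneously wrong."""
    k_min = n // 2 + 1  # at least 13 of 25 must err
    return sum(comb(n, k) * e**k * (1 - e)**(n - k)
               for k in range(k_min, n + 1))
```

For n = 25 and e = 0.35 this tail sum comes out to roughly 0.06, far below the 0.35 error rate of any single base classifier.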

## 46. When the number of classes is large, the Gini index is not a good choice.

## 47. Data used to build a data mining model.

## 48. This technique associates a conditional probability value with each data instance.

## 49. What is the purpose of the Kernel Trick?
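A small numeric illustration of the kernel trick: a degree-2 polynomial kernel evaluated in the input space equals the ordinary inner product in an explicit 6-dimensional feature space, so that feature space never needs to be constructed. The feature map `phi` below is the standard one for this kernel on 2-D inputs:

```python
import numpy as np

def phi(x):
    """Explicit degree-2 feature map for a 2-D input."""
    x1, x2 = x
    return np.array([x1 * x1, x2 * x2, np.sqrt(2) * x1 * x2,
                     np.sqrt(2) * x1, np.sqrt(2) * x2, 1.0])

def poly_kernel(x, y):
    """Degree-2 polynomial kernel, computed purely in input space."""
    return (np.dot(x, y) + 1) ** 2

x = np.array([1.0, 2.0])
y = np.array([3.0, 0.5])
# poly_kernel(x, y) == phi(x) @ phi(y), with no explicit 6-D mapping needed
```

This is exactly why kernelized methods such as SVMs can operate in very high-dimensional (even infinite-dimensional) feature spaces at the cost of an input-space computation.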

## 50. Having multiple perceptrons can actually solve the XOR problem satisfactorily: each perceptron can partition off a linear part of the space by itself, and their results can then be combined.
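The claim in this question can be demonstrated with hand-set weights: one hidden perceptron carves out the OR half-plane, another the NAND half-plane, and an output perceptron ANDs them together, yielding XOR. A minimal sketch with step activations (the weights are one of many possible choices):

```python
def step(z):
    """Heaviside step activation."""
    return 1 if z > 0 else 0

def xor_net(x1, x2):
    h1 = step(x1 + x2 - 0.5)     # hidden unit 1: OR half-plane
    h2 = step(-x1 - x2 + 1.5)    # hidden unit 2: NAND half-plane
    return step(h1 + h2 - 1.5)   # output unit: AND of the two
```

No single perceptron can compute XOR (it is not linearly separable), but this two-layer combination of linear partitions can.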

## Read More Section (Machine Learning)

Each section contains a maximum of **100 MCQ questions** on **Machine Learning**. To see more questions, visit the other sections.