61. Suppose you are using stacking with n different machine learning algorithms and k folds on the data. Which of the following is true about one-level (m base models + 1 stacker) stacking? Note: this is a binary classification problem, all base models are trained on all features, and you are using k folds for the base models.
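The setup in question 61 can be sketched with scikit-learn's `StackingClassifier`. The specific base models (logistic regression and a decision tree), the synthetic dataset, and k=5 are illustrative assumptions, not part of the question:

```python
# Sketch of one-level stacking: m base models + 1 stacker (meta-model).
# Model choices and k=5 are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

# Binary classification data, as in the question
X, y = make_classification(n_samples=200, n_features=10, random_state=0)

stack = StackingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("dt", DecisionTreeClassifier(random_state=0))],  # m base models
    final_estimator=LogisticRegression(),  # the stacker
    cv=5,  # k folds: the stacker trains on out-of-fold base-model predictions
)
stack.fit(X, y)
print(stack.score(X, y))
```

Each base model sees all features; the stacker sees only the base models' out-of-fold predictions, which is what the k folds are for.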
62. The type of dataset available in supervised learning is
63. Another name for an output attribute.
64. PCA works better if there is
1. A linear structure in the data
2. If the data lies on a curved surface and not on a flat surface
3. If variables are scaled in the same unit
65. What is/are true about kernels in SVM?
1. Kernel functions map low-dimensional data to a high-dimensional space
2. A kernel is a similarity function
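The "similarity function" view in question 65 is easy to see with the RBF kernel, which equals 1 for identical points and decays toward 0 as points grow apart. The `gamma` value and example points below are illustrative:

```python
import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    # K(a, b) = exp(-gamma * ||a - b||^2): 1.0 for identical points,
    # decaying toward 0 as the points become less similar
    return np.exp(-gamma * np.sum((a - b) ** 2))

a = np.array([1.0, 2.0])
print(rbf_kernel(a, a))                      # 1.0: maximal similarity
print(rbf_kernel(a, np.array([5.0, 6.0])))   # near 0: dissimilar points
```

Implicitly, this kernel corresponds to an inner product in an (infinite-dimensional) feature space, which is the mapping described in statement 1.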
66. Which statement about outliers is true?
67. Select the correct answer(s) for the following statements.
1. Filter methods are much faster compared to wrapper methods.
2. Wrapper methods use statistical tests to evaluate a subset of features, while filter methods use cross-validation.
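The filter/wrapper distinction in question 67 can be sketched with scikit-learn (the dataset, the ANOVA-F filter, and the RFE-with-logistic-regression wrapper are illustrative choices): a filter scores each feature with a fast statistical test, while a wrapper repeatedly fits a model, which is why filters are much faster.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE, SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, n_features=20, n_informative=5,
                           random_state=0)

# Filter method: scores each feature with a statistical test (ANOVA F-value);
# no model is trained, so it is fast
filt = SelectKBest(score_func=f_classif, k=5).fit(X, y)

# Wrapper method: repeatedly fits a model and prunes the weakest features,
# so it is far more expensive
wrap = RFE(LogisticRegression(max_iter=1000), n_features_to_select=5).fit(X, y)

print(filt.get_support().sum(), wrap.get_support().sum())  # 5 5
```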
68. With Bayes classifier, missing data items are
69. scikit-learn also provides functions for creating dummy datasets from scratch:
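As background for question 69, scikit-learn's dataset generators live in `sklearn.datasets`; the sample counts and feature counts below are arbitrary:

```python
# Three common scikit-learn generators for synthetic ("dummy") datasets
from sklearn.datasets import make_blobs, make_classification, make_regression

X, y = make_classification(n_samples=100, n_features=4, random_state=0)   # labels
Xr, yr = make_regression(n_samples=100, n_features=3, random_state=0)     # targets
Xb, yb = make_blobs(n_samples=100, centers=3, random_state=0)             # clusters

print(X.shape, Xr.shape, Xb.shape)  # (100, 4) (100, 3) (100, 2)
```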
70. Support vectors are the data points that lie closest to the decision surface.
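The definition in question 70 can be checked directly with scikit-learn's `SVC`, whose fitted `support_vectors_` attribute holds exactly those points; the toy data below is an illustrative assumption:

```python
import numpy as np
from sklearn.svm import SVC

# Toy linearly separable data; the fit determines which points
# end up closest to the decision surface
X = np.array([[0, 0], [1, 1], [2, 2], [8, 8], [9, 9], [10, 10]], dtype=float)
y = np.array([0, 0, 0, 1, 1, 1])

clf = SVC(kernel="linear", C=1.0).fit(X, y)
print(clf.support_vectors_)  # the training points closest to the decision surface
```

Only these boundary points determine the separating hyperplane; removing any other training point leaves the fitted model unchanged.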