91. What is/are true about ridge regression?
1. When lambda is 0, the model works like a linear regression model
2. When lambda is 0, the model doesn't work like a linear regression model
3. When lambda goes to infinity, we get very, very small coefficients approaching zero
4. When lambda goes to infinity, we get very, very large coefficients approaching infinity
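The two correct behaviours in question 91 can be checked numerically with the closed-form ridge solution. The sketch below uses made-up toy data; the point is that with lambda = 0 ridge reproduces ordinary least squares, and with a huge lambda the coefficients shrink toward zero.

```python
import numpy as np

# Toy regression data (assumed, for illustration only).
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.1, size=50)

def ridge(X, y, lam):
    """Closed-form ridge solution: w = (X^T X + lam * I)^(-1) X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

w_ols = np.linalg.lstsq(X, y, rcond=None)[0]  # plain linear regression
w_lam0 = ridge(X, y, 0.0)    # lambda = 0: matches linear regression
w_huge = ridge(X, y, 1e8)    # lambda -> infinity: coefficients -> 0
```

So options 1 and 3 are the true statements.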
92. In what type of learning is labelled training data used?
93. Based on a survey, it was found that the probability that a person likes to watch serials is 0.25 and the probability that a person likes to watch Netflix series is 0.43. The probability that a person likes to watch both serials and Netflix series is 0.12. What is the probability that a person doesn't like to watch either?
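Question 93 is a direct application of inclusion-exclusion: P(neither) = 1 - P(S or N) = 1 - (P(S) + P(N) - P(S and N)). A quick check in Python:

```python
# Inclusion-exclusion for "doesn't like either"
p_serials = 0.25
p_netflix = 0.43
p_both = 0.12

p_either = p_serials + p_netflix - p_both  # P(S or N) = 0.56
p_neither = 1 - p_either                   # 1 - 0.56 = 0.44
```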
94. Which type of learning requires self-assessment to identify patterns within data?
95. The linear SVM classifier works by drawing a straight line between two classes.
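The statement in question 95 can be illustrated with a minimal linear SVM trained by subgradient descent on the hinge loss. This is a sketch, not a production solver, and the tiny separable dataset is assumed for illustration; the learned w and b define the straight-line decision boundary w·x + b = 0.

```python
import numpy as np

# Two linearly separable point clouds (assumed toy data)
X = np.array([[-2, -1], [-1, -2], [-2, -2], [2, 1], [1, 2], [2, 2]], float)
y = np.array([-1, -1, -1, 1, 1, 1], float)

w = np.zeros(2)
b = 0.0
lr, lam = 0.1, 0.01  # learning rate and regularisation strength

# Subgradient descent on the regularised hinge loss
for _ in range(200):
    for xi, yi in zip(X, y):
        if yi * (w @ xi + b) < 1:       # point inside the margin: push it out
            w += lr * (yi * xi - lam * w)
            b += lr * yi
        else:                            # point is fine: only shrink w
            w -= lr * lam * w

preds = np.sign(X @ w + b)  # the line w.x + b = 0 separates the classes
```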
96. In which of the following cases will K-Means clustering fail to give good results?
1. Data points with outliers
2. Data points with different densities
3. Data points with round shapes
4. Data points with non-convex shapes
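The non-convex case from question 96 is easy to demonstrate: K-Means partitions space with straight (Voronoi) boundaries, so it cannot recover two interleaving crescent shapes. The sketch below builds the moons directly from trig formulas (assumed data, no scikit-learn needed) and runs a plain Lloyd's iteration.

```python
import numpy as np

# Two interleaving "moons" (non-convex clusters), built from trig formulas
t = np.linspace(0, np.pi, 50)
moon_a = np.c_[np.cos(t), np.sin(t)]             # upper arc, true class 0
moon_b = np.c_[1 - np.cos(t), 0.5 - np.sin(t)]   # lower arc, true class 1
X = np.vstack([moon_a, moon_b])
true = np.r_[np.zeros(50, int), np.ones(50, int)]

def lloyd(X, centers, iters=25):
    """Plain Lloyd's algorithm -- the core of K-Means."""
    for _ in range(iters):
        dists = ((X[:, None, :] - centers[None]) ** 2).sum(-1)
        labels = dists.argmin(1)
        centers = np.array([X[labels == k].mean(0)
                            for k in range(len(centers))])
    return labels

labels = lloyd(X, np.array([[-1.0, 0.0], [2.0, 0.0]]))
# Best accuracy over the two possible cluster-to-class matchings
acc = max((labels == true).mean(), (labels != true).mean())
# acc stays below 1.0: the linear K-Means boundary mixes the two moons
```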
97. What do we call a person trained to interact with a human expert in order to capture their knowledge?
98. In many classification problems, the target . . . . . . . . is made up of categorical labels which cannot immediately be processed by any algorithm.
99. What is PCA, KPCA and ICA used for?
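PCA, KPCA, and ICA are all used for dimensionality reduction / feature extraction. The sketch below shows the plainest of the three, PCA via SVD of centred data, on assumed toy data where the second feature is nearly a multiple of the first, so one principal component captures almost all the variance.

```python
import numpy as np

# Toy 2-D data with strong linear correlation (assumed for illustration)
rng = np.random.default_rng(0)
x = rng.normal(size=200)
X = np.c_[x, 2 * x + rng.normal(scale=0.1, size=200)]

Xc = X - X.mean(axis=0)                 # centre the data
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

explained = S**2 / np.sum(S**2)         # variance ratio per component
Z = Xc @ Vt[0]                          # project onto the first component
# explained[0] is close to 1: one dimension preserves most of the data
```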
100. Which of the following can be one of the steps in stacking?
1. Divide the training data into k folds
2. Train k models, each on k-1 folds, and get the out-of-fold predictions for the remaining fold
3. Divide the test data set into k folds and get individual fold predictions from different algorithms
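Options 1 and 2 describe the standard out-of-fold step in stacking. A minimal sketch of that step, with assumed toy data and a trivial stand-in "model" (a real stack would fit an actual learner per fold):

```python
import numpy as np

# Assumed toy data: 20 points with y = 2x
X = np.arange(20, dtype=float)
y = 2.0 * X

k = 5
folds = np.array_split(np.arange(20), k)  # step 1: divide into k folds

oof = np.full(20, np.nan)                 # out-of-fold predictions
for test_idx in folds:
    train_idx = np.setdiff1d(np.arange(20), test_idx)
    # Stand-in base model: predict the training-fold mean of y.
    # In real stacking this would be a fitted learner (tree, SVM, ...).
    oof[test_idx] = y[train_idx].mean()   # step 2: predict the held-out fold

# Every point now has a prediction from a model that never saw it --
# these become the input features for the level-2 meta-model.
```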