## 11. What characterizes unlabeled examples in machine learning?

## Answer & Solution

Answer:

**Option A**

Solution:

What characterizes unlabeled examples in machine learning?

**Option A: There is no prior knowledge**

Unlabeled examples in machine learning typically do not have associated target labels or outcomes. This means there is no prior knowledge or information about the specific categories or values these examples belong to. So, **Option A** accurately characterizes unlabeled examples.

**Option B: There is no confusing knowledge**

Option B, stating "there is no confusing knowledge," does not adequately describe unlabeled examples in machine learning. It does not address the absence of labels or the lack of prior knowledge about these examples.

**Option C: There is prior knowledge**

Option C is not an accurate description of unlabeled examples. Unlabeled examples are typically characterized by the absence of prior knowledge or labels.

**Option D: There is plenty of confusing knowledge**

Option D, mentioning "plenty of confusing knowledge," is not an appropriate characterization of unlabeled examples. Unlabeled examples are typically blank slates without known categories or values.

In summary, unlabeled examples in machine learning are characterized by the absence of prior knowledge or target labels, as described in **Option A**.
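The distinction can be made concrete with a toy snippet (the feature values and labels below are invented purely for illustration):

```python
# Labeled examples pair each input with a known target value;
# unlabeled examples carry only the inputs, with no prior knowledge
# of the category each one belongs to.
labeled = [
    ([5.1, 3.5], "setosa"),      # (features, label)
    ([6.7, 3.1], "versicolor"),
]
unlabeled = [
    [5.9, 3.0],                  # features only -- label unknown
    [6.2, 2.8],
]

# Supervised learning consumes the (features, label) pairs; unsupervised
# methods such as clustering must work from the unlabeled features alone.
labels_known = [y for _, y in labeled]
print(labels_known)              # labels exist only for the labeled set
```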

## 12. What characterizes a hyperplane in the geometrical model of machine learning?

## Answer & Solution

Answer:

**Option A**

Solution:

What characterizes a hyperplane in the geometrical model of machine learning?

**Option A: A plane with one dimension fewer than the number of input attributes**

In the geometrical model of machine learning, a hyperplane is a flat subspace of one dimension less than the number of input attributes or features. It separates data points in space, and this separation is achieved with one dimension less than the original feature space. So, **Option A** accurately characterizes a hyperplane.

**Option B: A plane with two dimensions fewer than the number of input attributes**

Option B describes a hyperplane as having two dimensions fewer than the number of input attributes, which is not a correct characterization. A hyperplane typically has one dimension less than the feature space.

**Option C: A plane with one dimension more than the number of input attributes**

Option C suggests that a hyperplane has one dimension more than the number of input attributes, which is not accurate in the geometrical model of machine learning.

**Option D: A plane with two dimensions more than the number of input attributes**

Option D describes a hyperplane as having two dimensions more than the number of input attributes, which is not a correct characterization. A hyperplane typically has one dimension less than the feature space.

In summary, a hyperplane in the geometrical model of machine learning is characterized by being a plane with one dimension fewer than the number of input attributes, as described in **Option A**.
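This can be checked numerically: for n = 3 input attributes, the hyperplane w·x = b contains exactly two independent directions (the particular w, b, point, and direction vectors below are arbitrary illustrative choices):

```python
# A hyperplane in the geometrical model is the set {x : w . x = b}.
# With n = 3 input attributes it is a 2-dimensional plane: exactly
# n - 1 independent directions lie inside it.
w = [1.0, 2.0, -1.0]
b = 4.0

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

x0 = [4.0, 0.0, 0.0]    # a point on the hyperplane: w . x0 = 4 = b
d1 = [2.0, -1.0, 0.0]   # direction inside the plane: w . d1 = 0
d2 = [1.0, 0.0, 1.0]    # second independent direction: w . d2 = 0

# Moving from x0 along any combination of d1 and d2 stays on the
# hyperplane, giving n - 1 = 2 degrees of freedom.
for s, t in [(1, 0), (0, 1), (3, -2)]:
    x = [x0[i] + s * d1[i] + t * d2[i] for i in range(3)]
    assert abs(dot(w, x) - b) < 1e-9
```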

## 13. Imagine a newborn starting to learn to walk. It will try to find a suitable policy for walking after repeatedly falling and getting up. What type of machine learning is best suited to this scenario?

## Answer & Solution

Answer:

**Option D**

Solution:

Imagine a newborn starting to learn to walk. It will try to find a suitable policy for walking after repeatedly falling and getting up. What type of machine learning is best suited?

**Option A: Classification**

Classification is not the most suitable type of machine learning for this scenario. Classification typically involves assigning data points to predefined categories or classes. It may not capture the continuous and dynamic nature of learning to walk.

**Option B: Regression**

Regression involves predicting a continuous numerical output based on input features. While it allows for continuous predictions, it may not be the best fit for the scenario described, which involves learning a policy for walking rather than predicting specific values.

**Option C: K-Means Algorithm**

The K-Means algorithm is a clustering technique that is not suitable for learning to walk. It is used to partition data into clusters, which does not align with the concept of learning a policy for a dynamic task like walking.

**Option D: Reinforcement Learning**

Reinforcement learning is the most appropriate type of machine learning for the scenario described. It involves learning a policy through trial and error, where actions are taken in an environment to maximize rewards. In the context of a newborn learning to walk, this aligns with the process of repeated falling and getting up to find a suitable walking policy. So, **Option D** is the correct choice.

In summary, reinforcement learning is the most suitable type of machine learning for a newborn learning to walk, as it involves learning a policy through trial and error and maximizing rewards.
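The trial-and-error loop can be sketched with tabular Q-learning on a toy "corridor" task. The environment, the reward of 1 at the goal, and all hyperparameters below are illustrative choices, not part of the question:

```python
import random

random.seed(0)

# Toy corridor: states 0..4, agent starts at 0, reward 1 for reaching state 4.
N_STATES = 5
ACTIONS = ["left", "right"]

def step(state, action):
    """Apply an action; return (next_state, reward, done)."""
    nxt = state + 1 if action == "right" else max(state - 1, 0)
    if nxt == N_STATES - 1:
        return nxt, 1.0, True
    return nxt, 0.0, False

# Q-table: estimated return for each (state, action) pair.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2

for episode in range(2000):
    state, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit the current policy, sometimes explore.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward, done = step(state, action)
        # Q-learning update: move the estimate toward reward + discounted best next value.
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = nxt

# The learned policy: after many "falls", walking right is preferred everywhere.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)}
print(policy)
```

The agent starts with no knowledge, stumbles around at random, and the reward signal gradually shapes a policy, which is exactly the falling-and-getting-up dynamic the question describes.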

## 14. What are the popular algorithms of Machine Learning?

## Answer & Solution

Answer:

**Option D**

Solution:

What are the popular algorithms of Machine Learning?

**Option A: Decision Trees and Neural Networks (Back Propagation)**

Decision trees and neural networks (back propagation) are indeed popular machine learning algorithms. Decision trees are used for classification and regression tasks, while neural networks, especially those trained with backpropagation, are widely used for various machine learning tasks, including deep learning.

**Option B: Probabilistic Networks and Nearest Neighbor**

Probabilistic networks, such as Bayesian networks, and nearest neighbor algorithms are also popular in machine learning. They are used for tasks involving probabilistic modeling and similarity-based classification.

**Option C: Support Vector Machines**

Support Vector Machines (SVMs) are a well-known and widely used machine learning algorithm for classification and regression tasks. SVMs are known for their effectiveness in finding decision boundaries in complex data.

**Option D: All**

The correct answer is **Option D**: all of the mentioned algorithms (Decision Trees, Neural Networks, Probabilistic Networks, Nearest Neighbor, and Support Vector Machines) are indeed popular algorithms in the field of machine learning.

In summary, popular machine learning algorithms include Decision Trees, Neural Networks (Back Propagation), Probabilistic Networks, Nearest Neighbor, and Support Vector Machines, as indicated in **Option D**.
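As a taste of one of these algorithms, here is a minimal 1-nearest-neighbour classifier written from scratch; the training points are made-up toy data:

```python
import math

# Four labeled points forming two well-separated groups (toy data).
train = [
    ([1.0, 1.0], "a"),
    ([1.2, 0.8], "a"),
    ([5.0, 5.0], "b"),
    ([4.8, 5.2], "b"),
]

def predict(x):
    # 1-NN rule: return the label of the closest training point
    # under Euclidean distance.
    nearest = min(train, key=lambda point: math.dist(x, point[0]))
    return nearest[1]

print(predict([1.1, 0.9]))   # near the "a" cluster
print(predict([5.1, 4.9]))   # near the "b" cluster
```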

## 15. A machine learning problem involves four attributes plus a class. The attributes have 3, 2, 2, and 2 possible values each. The class has 3 possible values. How many maximum possible different examples are there?

## Answer & Solution

Answer:

**Option D**

Solution:

A machine learning problem involves four attributes plus a class. The attributes have 3, 2, 2, and 2 possible values each. The class has 3 possible values. How many maximum possible different examples are there?

**Option A: 12**

To calculate the maximum possible different examples, you can multiply the number of possible values for each attribute and the class together: 3 (attribute 1) x 2 (attribute 2) x 2 (attribute 3) x 2 (attribute 4) x 3 (class) = 72. So, the correct answer is not **Option A**.

**Option B: 24**

Option B is not the correct answer because it doesn't reflect the correct calculation for the maximum possible different examples.

**Option C: 48**

Option C is not the correct answer because it also does not represent the correct calculation for the maximum possible different examples.

**Option D: 72**

The correct answer is **Option D**. To find the maximum possible different examples, you can multiply the number of possible values for each attribute and the class together: 3 (attribute 1) x 2 (attribute 2) x 2 (attribute 3) x 2 (attribute 4) x 3 (class) = 72.

In summary, the maximum number of possible different examples for this machine learning problem is 72, as correctly represented in **Option D**.
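The product rule used above can be checked in a few lines of Python:

```python
import math

# Each example is one joint assignment of attribute values plus a class
# value, so the counts simply multiply.
attribute_values = [3, 2, 2, 2]   # possible values per attribute
class_values = 3

total = math.prod(attribute_values) * class_values
print(total)  # 3 * 2 * 2 * 2 * 3 = 72
```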

## 16. In machine learning, an algorithm (or learning algorithm) is said to be unstable if a small change in the training data causes a large change in the learned classifiers. True or False: Bagging of unstable classifiers is a good idea.

## Answer & Solution

Answer:

**Option A**

Solution:

In machine learning, instability refers to the sensitivity of an algorithm to changes in the training data. When an algorithm is unstable, small variations in the training data can lead to significant changes in the learned classifiers. Bagging, which stands for Bootstrap Aggregating, is a technique that aims to reduce the variance and improve the stability of machine learning models. By combining predictions from multiple unstable classifiers trained on different subsets of the data, bagging can often produce a more robust and stable ensemble model. Therefore, **Option A: TRUE** is the correct answer. Bagging of unstable classifiers is generally a good idea to enhance the overall performance of a machine learning model.
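A minimal sketch of bagging, assuming a deliberately unstable "decision stump" learner and a made-up 1-D dataset:

```python
import random

random.seed(1)

# Toy 1-D dataset: the true rule is "label 1 when x > 5".
data = [(x, int(x > 5)) for x in range(11)]

def train_stump(sample):
    """Fit a threshold classifier (predict 1 when x >= t) by picking the
    threshold with the fewest errors on the sample. Small changes in the
    sample can move the threshold, so this learner is 'unstable'."""
    best_t, best_err = 0, float("inf")
    for t in range(12):
        err = sum((1 if x >= t else 0) != y for x, y in sample)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

# Bagging: train each stump on a bootstrap resample of the data,
# then combine the ensemble by majority vote.
stumps = [train_stump(random.choices(data, k=len(data))) for _ in range(25)]

def bagged_predict(x):
    votes = sum(1 if x >= t else 0 for t in stumps)
    return int(2 * votes > len(stumps))

print(bagged_predict(2), bagged_predict(9))
```

Each individual stump's threshold jumps around with its bootstrap sample, but the majority vote averages those fluctuations away, which is exactly why bagging helps unstable learners.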

## 17. Which of the following is a characteristic of the best machine learning method?

## Answer & Solution

Answer:

**Option D**

Solution:

Machine learning methods can vary widely in terms of their characteristics and suitability for different tasks. The "best" machine learning method depends on the specific requirements and goals of the problem at hand. Let's evaluate each option:

**Option A: fast**

Speed or efficiency is an important characteristic for machine learning methods, especially in real-time or time-sensitive applications. However, being fast alone does not necessarily make a method the best choice, as accuracy and scalability are also important considerations.

**Option B: accuracy**

Accuracy is a crucial characteristic of a machine learning method. A good method should provide accurate predictions or classifications on the given data. However, accuracy alone may not be sufficient if the method is not fast or scalable.

**Option C: scalable**

Scalability is another important factor, especially when dealing with large datasets or the need to process data efficiently at scale. Scalability ensures that the method can handle growing data without a significant drop in performance. However, scalability alone does not make a method the best choice if it lacks accuracy.

**Option D: all above**

The "all above" option suggests that the best machine learning method should possess all three characteristics: being fast, accurate, and scalable. This is a reasonable choice because the best machine learning method should ideally combine speed, accuracy, and scalability to be effective in a wide range of applications.

In conclusion, the best machine learning method is one that is **fast, accurate, and scalable**. Therefore, **Option D: all above** is the correct answer.

## 18. Machine learning techniques differ from statistical techniques in that machine learning methods

## Answer & Solution

Answer:

**Option A**

Solution:

Machine learning techniques and statistical techniques are related fields, but they have distinct differences in their approaches and characteristics.

**Option A: typically assume an underlying distribution for the data.**

In statistical techniques, it is common to assume specific probability distributions for the data, and many statistical methods are based on these assumptions. In contrast, machine learning methods often do not make strong assumptions about the underlying data distribution. Instead, they focus on learning patterns and relationships directly from the data.

**Option B: are better able to deal with missing and noisy data.**

Machine learning methods often have techniques and algorithms designed to handle missing and noisy data effectively. They can adapt to imperfect data and still make predictions or classifications, whereas statistical methods may struggle with data quality issues.

**Option C: are not able to explain their behavior.**

This statement is not entirely accurate. Machine learning methods can be interpretable to some extent, and efforts have been made to develop explainable AI techniques. While some complex machine learning models may be less interpretable than traditional statistical models, they are not inherently incapable of explaining their behavior.

**Option D: have trouble with large-sized datasets.**

Machine learning methods are often well-suited for large-sized datasets, and many machine learning algorithms can scale to handle massive amounts of data. In fact, they are frequently used in big data analytics and large-scale applications.

In summary, the key differences between machine learning and statistical techniques lie in their approaches to data assumptions, handling missing/noisy data, and explainability. Therefore, the correct answer is **Option A: typically assume an underlying distribution for the data.**

## 19. What is Model Selection in Machine Learning?

## Answer & Solution

Answer:

**Option A**

Solution:

Model selection in machine learning refers to the process of choosing the most appropriate model or algorithm from a set of candidate models to make predictions or capture relationships within a given dataset.

**Option A: The process of selecting models among different mathematical models, which are used to describe the same data set.**

This option correctly defines model selection in machine learning. It involves comparing and choosing from different mathematical models to find the one that best fits and describes the data.

**Option B: when a statistical model describes random error or noise instead of the underlying relationship.**

This statement appears to describe a situation where a model fails to capture the true underlying relationship in the data and instead models random error or noise. However, it is not the primary definition of model selection.

**Option C: Find interesting directions in data and find novel observations/database cleaning.**

This option seems to describe the process of exploratory data analysis and data preprocessing rather than model selection itself.

**Option D: All above.**

This option suggests that all of the statements (A, B, and C) are correct definitions of model selection. While option A is indeed a correct definition, options B and C are not. Therefore, **Option D** is not the correct choice.

In conclusion, the correct definition of model selection in machine learning is **Option A: The process of selecting models among different mathematical models, which are used to describe the same data set.**
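A minimal sketch of model selection, assuming two made-up candidate models for the same data, compared by held-out mean squared error:

```python
# Toy dataset split into a training set and a held-out validation set.
train = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 7.8)]
valid = [(5, 10.1), (6, 11.9)]

# Candidate 1: constant model y = c, where c is the training mean.
c = sum(y for _, y in train) / len(train)

def const_model(x):
    return c

# Candidate 2: proportional model y = a*x, least-squares fit of the slope.
a = sum(x * y for x, y in train) / sum(x * x for x, _ in train)

def linear_model(x):
    return a * x

def mse(model, data):
    """Mean squared error of a model on a dataset."""
    return sum((model(x) - y) ** 2 for x, y in data) / len(data)

# Model selection: keep whichever candidate describes held-out data better.
best = min([const_model, linear_model], key=lambda m: mse(m, valid))
print("linear" if best is linear_model else "const")
```

Both candidates describe the same dataset; selection simply picks the one with the smaller validation error, which is the process Option A defines.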

## 20. Some people use the term ........ instead of prediction, to avoid the weird idea that machine learning is a sort of modern magic.

## Answer & Solution

Answer:

**Option A**

Solution:

The term used instead of *prediction*, to avoid the weird idea that machine learning is a sort of modern magic, is **Option A: Inference**. In machine learning, inference refers to the process of drawing conclusions or making educated guesses based on a model that has been trained on data. It involves using the model to make predictions or extract meaningful information from new data, which helps demystify the idea of machine learning as a kind of magical black box.
