Saturday 16 July 2022

Inductive classification: The concept learning task

We learn about our surroundings through the five senses — eyes, ears, nose, tongue and skin. We learn a lot of things over a lifetime; some are based on experience and some on memorization. On this basis, we can divide learning methods into five types:

  • Rote Learning (memorization): Memorizing things without knowing the concept/ logic behind them.
  • Passive Learning (instructions): Learning from a teacher/expert.
  • Analogy (experience): Learning new things from our past experience.
  • Inductive Learning (experience): On the basis of past experience, formulating a generalized concept.
  • Deductive Learning: Deriving new facts from past facts.
In machine learning, our interest is in inductive learning, which is based on formulating a generalized concept after observing a number of instances (examples) of the concept.

Concept learning: inferring a boolean-valued function from training examples of its input and output.
Assume that we have collected data for some attributes/features of the day, like Sky, Air Temperature, Humidity, Wind, Water, and Forecast. Let this set of instances be denoted by X; many concepts can be defined over X. For example, the concepts can be
  1. Days on which my friend Sachin enjoys his favorite water sport.
  2. Days on which my friend Sachin will not go outside of his house.
  3. Days on which my friend Sachin will have a night drink.

Target concept — The concept or function to be learned is called the target concept and is denoted by c. It can be seen as a boolean-valued function defined over X and can be represented as c: X → {0, 1}.
For the target concept c, “Days on which my friend Sachin enjoys his favorite water sport”, an attribute EnjoySport is included in the dataset X below, and it indicates whether or not my friend Sachin enjoys his favorite water sport on that day.

  Example | Sky   | AirTemp | Humidity | Wind   | Water | Forecast | EnjoySport
  1       | Sunny | Warm    | Normal   | Strong | Warm  | Same     | Yes
  2       | Sunny | Warm    | High     | Strong | Warm  | Same     | Yes
  3       | Rainy | Cold    | High     | Strong | Warm  | Change   | No
  4       | Sunny | Warm    | High     | Strong | Cool  | Change   | Yes

Now the target concept is EnjoySport: X → {0, 1}. With this, the learner's task is to learn to predict the value of EnjoySport for an arbitrary day, based on the values of its attributes. When a new sample with values for the attributes <Sky, Air Temperature, Humidity, Wind, Water, Forecast> is given, the value of EnjoySport (i.e., 0 or 1) is predicted based on the previous learning.
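To make the notation concrete, here is a small sketch (not from the original post) in which an instance x in X is a tuple of six attribute values, and a stand-in target concept c maps instances to {0, 1}. The rule inside c is purely illustrative — the real target concept is unknown, which is exactly what the learner must approximate.

```python
# A single day, in the order <Sky, AirTemp, Humidity, Wind, Water, Forecast>
x = ('Sunny', 'Warm', 'Normal', 'Strong', 'Warm', 'Same')

# Illustrative stand-in for the target concept c: X -> {0, 1}.
# (Assumed rule for demonstration only: "enjoys the sport on warm-air days".)
def c(instance):
    return 1 if instance[1] == 'Warm' else 0

print(c(x))  # 1: a positive example of the concept
```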

Let H denote the set of all possible hypotheses that the learner may consider regarding the identity of the target concept. So H = {h1, h2, …. }.

What hypothesis representation shall we provide to the learner in this case? Let us begin by considering a simple representation in which each hypothesis consists of a conjunction of constraints on the instance attributes. Let each hypothesis be a vector of six constraints, specifying the values of the six attributes <Sky, Air Temperature, Humidity, Wind, Water, Forecast>. In this representation, the constraint on each attribute can be

  • “?” — any value is acceptable for this attribute,
  • a single required value (e.g., Warm) for the attribute, or
  • “0” — no value is acceptable for this attribute.

For example, the hypothesis that my friend Sachin enjoys his favorite sport only on cold days with high humidity (independent of the values of the other attributes) is represented by the expression <?, Cold, High, ?, ?, ?>.

  • The most general hypothesis — <?, ?, ?, ?, ?, ?> — classifies every day as a positive example.
  • The most specific hypothesis — <0, 0, 0, 0, 0, 0> — classifies no day as a positive example.
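The constraint semantics above can be sketched in code. This is an illustrative encoding (not from the original post): a hypothesis is a Python tuple in which “?” accepts any value, a literal value must match exactly, and “0” — being a literal that never equals a real attribute value — rejects everything.

```python
def matches(hypothesis, instance):
    """Return True if the instance satisfies every attribute constraint."""
    return all(
        h == '?' or h == x   # '?' accepts anything; otherwise exact match needed
        for h, x in zip(hypothesis, instance)
    )

# Attribute order: <Sky, AirTemp, Humidity, Wind, Water, Forecast>
h = ('?', 'Cold', 'High', '?', '?', '?')   # the example hypothesis from the text

day1 = ('Sunny', 'Cold', 'High', 'Strong', 'Warm', 'Same')
day2 = ('Sunny', 'Warm', 'High', 'Strong', 'Warm', 'Same')

print(matches(h, day1))  # True:  Cold and High match; the rest are '?'
print(matches(h, day2))  # False: AirTemp is Warm, not Cold

most_general  = ('?',) * 6   # matches every day
most_specific = ('0',) * 6   # '0' never equals an attribute value, so matches no day
print(matches(most_general, day1))   # True
print(matches(most_specific, day1))  # False
```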

So far we have looked into what a concept, a target concept, and concept learning are, extended these definitions to the example where my friend Sachin enjoys the water sport on certain days, and examined the hypothesis representation. With this knowledge we can say the EnjoySport concept learning task requires

  • learning the set of days for which EnjoySport = Yes, and then
  • describing this set by a conjunction of constraints over the instance attributes.



Notice that the learning algorithm's objective is to find a hypothesis h in H such that h(x) = c(x) for every x in D, the set of training examples.
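This objective can be sketched as a consistency check: a hypothesis h is consistent with the training set D when it classifies every example the same way the target concept does. The encoding below is illustrative (tuples with “?” wildcards, as assumed above); the labels follow the EnjoySport data.

```python
def matches(hypothesis, instance):
    # '?' accepts any value; any other constraint must match exactly.
    return all(h == '?' or h == x for h, x in zip(hypothesis, instance))

def consistent(hypothesis, examples):
    """examples: list of (instance, label) pairs; label 1 means positive."""
    return all(matches(hypothesis, x) == bool(label) for x, label in examples)

# Training set D, attribute order <Sky, AirTemp, Humidity, Wind, Water, Forecast>
D = [
    (('Sunny', 'Warm', 'Normal', 'Strong', 'Warm', 'Same'),   1),
    (('Sunny', 'Warm', 'High',   'Strong', 'Warm', 'Same'),   1),
    (('Rainy', 'Cold', 'High',   'Strong', 'Warm', 'Change'), 0),
    (('Sunny', 'Warm', 'High',   'Strong', 'Cool', 'Change'), 1),
]

h = ('Sunny', 'Warm', '?', 'Strong', '?', '?')
print(consistent(h, D))  # True: h agrees with the label on every example in D
```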

So we can state the inductive learning hypothesis: any hypothesis found to approximate the target function well over a sufficiently large set of training examples will also approximate the target function well over unobserved examples.
