Its Importance for Machines, Not Just Humans

What is Transfer Learning?

Transfer learning is the idea of taking knowledge learned while performing one task and applying it to a different but related task.
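A schematic sketch of this idea, using hypothetical tasks and made-up weight values rather than a real training pipeline: parameters learned on a source task are reused to initialise a model for a related target task, so only the task-specific part must be learned from scratch.

```python
# Sketch: transfer learning as weight reuse. All numbers are
# hypothetical illustrative values, not weights from a real model.
# A "model" here is just a dict of layer weights.

source_model = {
    "feature_layers": [[0.4, -0.1], [0.7, 0.2]],  # learned on the source task
    "head": [0.9, -0.3],                          # task-specific output layer
}

def transfer(source, fresh_head):
    """Reuse the source model's feature layers; replace the task head."""
    return {
        "feature_layers": [layer[:] for layer in source["feature_layers"]],
        "head": fresh_head,
    }

# The target model starts from the source's feature knowledge;
# only the new head needs to be trained for the target task.
target_model = transfer(source_model, fresh_head=[0.0, 0.0])
```

In practice this is what deep-learning frameworks do when a pretrained network's early layers are kept (often frozen) and only a new output layer is trained.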

How is Transfer Learning Useful to Me?

In the context of humans, transfer learning is crucial to our lives. Examples include:

  • the knowledge of riding a bicycle can be transferred to, and further built upon, when learning to ride a motorbike

Beyond Achieving Top Accuracy

We are familiar with the idea of using machine learning to make predictions and inferences with high accuracy. This is, after all, a big part of what machine learning is expected to do.

Interestingly and importantly, machine learning models can be leveraged even further. Beyond using a model to make highly accurate predictions, we can use it to create uncertainty, ambiguity, or even contention.

Don’t We Like Clarity and Certainty?

Not all the time. There are good reasons to seek out uncertainty:

  • Testers may want to create uncertain examples, so that a system can be stress-tested on how it behaves, or even what it decides, on borderline inputs.
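A minimal sketch of this use of a model: given a binary classifier that outputs a probability for the positive class, we can mine a pool of candidate inputs for those the model is least certain about. The classifier here is a hypothetical stand-in (a plain sigmoid on a single number), not a trained model.

```python
import math

def predict_proba(x):
    """Toy stand-in for a trained binary classifier's probability output."""
    # Sigmoid: inputs near 0 map to probabilities near 0.5.
    return 1.0 / (1.0 + math.exp(-x))

def borderline_inputs(candidates, band=0.1):
    """Keep inputs whose predicted probability lies near the 0.5 boundary."""
    return [x for x in candidates
            if abs(predict_proba(x) - 0.5) < band]

pool = [-3.0, -0.2, 0.05, 1.5, 4.0]
print(borderline_inputs(pool))  # prints [-0.2, 0.05]
```

The surviving inputs are exactly the borderline cases a tester would want to feed back into the system under test.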


Neural Networks

In machine learning, neural networks are an instrumental class of models with wide-ranging applications. Generally, a neural network contains numerous parameters that are collectively used to infer predictions or forecasts from relevant input information.

Neural networks allow us to create sophisticated relationships and answer practical questions such as:

(i) Could we understand the traits of a job applicant based on the applicant’s written responses before a formal interview?

(ii) Instead of sifting through a full catalogue of apartments, furniture or dishes, could we use their photographs to automatically shortlist those that meet a consumer’s requirements?
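As a minimal illustration of "numerous parameters collectively used to infer predictions", here is the forward pass of a tiny one-hidden-layer network in plain Python. The weight values are arbitrary placeholders, not learned parameters.

```python
# Sketch: forward pass of a tiny neural network (2 inputs -> 3 hidden -> 1 output).
# Weights below are arbitrary illustrative values, not trained ones.

def relu(v):
    """Elementwise rectified linear activation."""
    return [max(0.0, x) for x in v]

def dense(inputs, weights, biases):
    """Fully connected layer: output_j = sum_i inputs_i * W[i][j] + b_j."""
    return [sum(i * w for i, w in zip(inputs, col)) + b
            for col, b in zip(zip(*weights), biases)]

W1 = [[0.5, -0.2, 0.1],   # weights from input 1 to each hidden unit
      [0.3, 0.8, -0.5]]   # weights from input 2 to each hidden unit
b1 = [0.0, 0.1, 0.0]
W2 = [[1.0], [-1.0], [0.5]]  # hidden -> output
b2 = [0.0]

x = [1.0, 2.0]
hidden = relu(dense(x, W1, b1))
score = dense(hidden, W2, b2)[0]  # a single output score
```

Even this toy network has eleven parameters; real networks apply the same pattern at a vastly larger scale, which is what lets them capture the sophisticated relationships described above.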

Learning Relationships using the Neural Network

The above examples make…


My inspiration for this article arises from frequently hearing "neural network training is not just optimization as there is a need to prevent overfitting". So, here I am, sipping coffee and sharing one piece of knowledge!

The Construction

“Neural network training is not just optimization as there is a need to prevent overfitting.”

At face value, this statement seems consistent and correct:

  1. Yes, we optimize neural networks during training

Now, let us be a little more inquisitive, look past face value, and dig deeper. This statement suggests that it is not just doing optimization…


1. What is feature scaling?

Feature scaling is the statistical operation of transforming a feature's values so that they fall within smaller, comparable ranges.

It is widely used in data pre-processing before performing further feature engineering in machine learning and deep learning.

2. What are the methods to feature scaling?

There are 3 main approaches to feature scaling, each with its own variants:

  1. Min-Max Scaling
    x_scaled = [x - min(x)] / [max(x) - min(x)]
    where min(x) and max(x) are respectively the minimum and maximum value across all data points for feature x.
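The formula above translates directly into code. A minimal sketch in plain Python (the example height values are made up for illustration):

```python
def min_max_scale(values):
    """Scale one feature's values into [0, 1] via (x - min) / (max - min)."""
    lo, hi = min(values), max(values)
    if hi == lo:
        # Constant feature: there is no spread to scale; map everything to 0.
        return [0.0 for _ in values]
    return [(x - lo) / (hi - lo) for x in values]

heights_cm = [150.0, 160.0, 175.0, 200.0]
print(min_max_scale(heights_cm))  # prints [0.0, 0.2, 0.5, 1.0]
```

Note that the minimum maps to 0, the maximum maps to 1, and everything else lands proportionally in between, regardless of the feature's original units.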

runyan

Machine learning scientist. To fail than to regret not having tried. Connect with me: www.linkedin.com/in/runyantan
