Monday, January 29, 2018

Types of machine learning and Deep Learning in simple terms

Main types of machine learning:


Supervised Learning vs. Unsupervised Learning

Supervised Learning:
1. Input data is labeled.
2. Uses a training dataset.
3. Used for prediction.
4. Enables classification and regression.

Unsupervised Learning:
1. Input data is unlabeled.
2. Uses the input dataset as-is.
3. Used for analysis.
4. Enables clustering, density estimation, and dimensionality reduction.
Supervised machine learning
In this scenario, you provide a computer program with labeled data. For instance, if the assigned task is to separate pictures of boys and girls using an image-sorting algorithm, pictures with a male child carry a “boy” label and pictures with a female child a “girl” label. This labeled collection is the “training” dataset, and the labels stay in place until the program can sort the images at an acceptable success rate.
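To make this concrete, here is a minimal supervised-learning sketch in Python using scikit-learn. The features (hypothetical hair_length and jaw_width numbers standing in for real images) and the tiny dataset are invented purely for illustration.

    # Supervised learning: fit a classifier to labeled examples,
    # then predict labels for new, unseen examples.
    from sklearn.linear_model import LogisticRegression

    # Hypothetical features per image: [hair_length_cm, jaw_width_cm]
    X_train = [[5, 12], [4, 13], [30, 9], [25, 8]]
    y_train = ["boy", "boy", "girl", "girl"]   # the human-provided labels

    model = LogisticRegression()
    model.fit(X_train, y_train)                # learn from the labeled data

    print(model.predict([[28, 9]]))            # likely ['girl'] on this toy data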
Semi-supervised machine learning
In this case, only a few of the images are labeled. The computer program uses an algorithm to make its best guess about the unlabeled images, and the results are fed back to it as training data. A new batch of images is then provided, again with only a few carrying labels. The process repeats until the program can distinguish between boys and girls at an acceptable rate.
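One common way to implement this feedback loop is self-training. Below is a minimal sketch of it, reusing the same made-up features; the 0.8 confidence cutoff and the number of rounds are arbitrary choices for illustration.

    # Self-training: fit on the few labeled points, promote the model's
    # confident guesses on unlabeled points to labels, and repeat.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    X_labeled = np.array([[5.0, 12.0], [30.0, 9.0]])
    y_labeled = np.array(["boy", "girl"])
    X_unlabeled = np.array([[4.0, 13.0], [28.0, 8.0], [6.0, 11.0], [26.0, 9.0]])

    model = LogisticRegression()
    for _ in range(3):                          # a few self-training rounds
        model.fit(X_labeled, y_labeled)
        if len(X_unlabeled) == 0:               # nothing left to pseudo-label
            break
        probs = model.predict_proba(X_unlabeled)
        confident = probs.max(axis=1) > 0.8     # keep only confident guesses
        if not confident.any():
            break
        X_labeled = np.vstack([X_labeled, X_unlabeled[confident]])
        y_labeled = np.concatenate(
            [y_labeled, model.predict(X_unlabeled[confident])])
        X_unlabeled = X_unlabeled[~confident]   # the rest stays unlabeled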
Unsupervised machine learning
This type of machine learning involves no labels whatsoever. Instead, the program is thrown blindly into the task of splitting images of boys and girls into two groups, using one of two methods. The first, “clustering,” groups similar objects together based on characteristics such as hair length, jaw size, eye placement, and so on. The second, “association,” has the program create if/then rules based on similarities it discovers. In other words, it finds a common pattern across the images and sorts them accordingly.
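Clustering is easy to sketch with scikit-learn's KMeans. Note that the algorithm never sees the words “boy” or “girl”; a human still has to interpret the two groups it produces. The feature values are again hypothetical.

    # Unsupervised learning: no labels, just split the data into 2 groups.
    from sklearn.cluster import KMeans

    X = [[5, 12], [4, 13], [6, 11], [30, 9], [25, 8], [28, 9]]

    kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
    groups = kmeans.fit_predict(X)   # e.g. [0, 0, 0, 1, 1, 1]

    print(groups)                    # group numbers only -- naming them is up to us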
Reinforcement machine learning
Chess is an excellent example of this type of algorithm. The program knows the rules of the game and how to play, and it plays through its moves to complete each game. The only information it receives is whether it won or lost the match. It keeps replaying the game, keeping track of its successful moves, until it finally wins a match.
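Chess itself is far too large for a short example, so here is a minimal Q-learning sketch with the same flavor on a toy board: the agent walks left or right along a short strip, and the only feedback is +1 for reaching the right edge (a “win”) or -1 for the left edge (a “loss”). All parameter values are illustrative.

    # Tabular Q-learning: the agent learns which moves lead to a win
    # purely from end-of-game feedback.
    import random

    N_STATES, ACTIONS = 6, [-1, +1]        # board cells; move left or right
    Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    alpha, gamma, eps = 0.5, 0.9, 0.2      # learning rate, discount, exploration

    for episode in range(500):
        s = N_STATES // 2                  # start in the middle
        while 0 < s < N_STATES - 1:        # play until the game ends
            if random.random() < eps:      # sometimes explore a random move
                a = random.choice(ACTIONS)
            else:                          # otherwise play the best-known move
                a = max(ACTIONS, key=lambda m: Q[(s, m)])
            s2 = s + a
            # reward arrives only when the game ends: win or lose
            r = 1 if s2 == N_STATES - 1 else (-1 if s2 == 0 else 0)
            done = s2 in (0, N_STATES - 1)
            best_next = 0 if done else max(Q[(s2, m)] for m in ACTIONS)
            Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
            s = s2

    # After enough games, the learned policy moves right, toward the win
    print({s: max(ACTIONS, key=lambda m: Q[(s, m)]) for s in range(1, N_STATES - 1)})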




Deep Learning
Deep learning is basically machine learning on a “deeper” level (pun unavoidable, sorry). It’s inspired by how the human brain works, but it requires high-end machines with discrete graphics cards capable of heavy number crunching, as well as enormous amounts of “big” data; small amounts of data actually yield lower performance.
Unlike standard machine learning algorithms, which break a problem into parts and solve each part individually, deep learning solves the problem end to end. Better yet, the more data and time you feed a deep learning algorithm, the better it gets at the task.
In our machine learning examples, we used images of boys and girls, and the program sorted them mostly on the basis of spoon-fed features. With deep learning, those hand-picked features aren’t provided. Instead, the program scans all the pixels within an image to discover edges that can be used to distinguish between a boy and a girl. It then puts edges and shapes into a ranked order of likely importance to decide which images show which.
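As a sketch of what that end-to-end pixel learning looks like in code, here is a tiny convolutional network in Keras (assuming TensorFlow is installed). The 64x64 grayscale input size and the layer sizes are illustrative choices, not part of the original example.

    # A small CNN: early layers pick up edges from raw pixels, deeper
    # layers combine them into shapes, and the final layer scores the image.
    from tensorflow import keras
    from tensorflow.keras import layers

    model = keras.Sequential([
        keras.Input(shape=(64, 64, 1)),           # raw grayscale pixels in
        layers.Conv2D(16, 3, activation="relu"),  # learns edge-like filters
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),  # combines edges into shapes
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(1, activation="sigmoid"),    # one boy-vs-girl score out
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    # model.fit(images, labels, epochs=10)  # given an (N, 64, 64, 1) array
    model.summary()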
On an even more simplified level, machine learning will distinguish between a square and triangle based on information provided by humans: squares have four points, and triangles have three. With deep learning, the program doesn’t start out with pre-fed information. Instead, it uses an algorithm to determine how many lines the shapes have, if those lines are connected, and if they are perpendicular. Naturally, the algorithm would eventually figure out that an inserted circle does not fit in with its square and triangle sorting.
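The machine-learning half of that comparison, where the rule comes from a human, fits in a few lines of plain Python; representing each shape as a list of corner points is itself a simplification made for this sketch.

    # A human-provided rule: count the corners and look up the shape.
    def classify_shape(corners):
        rules = {3: "triangle", 4: "square"}       # human-supplied knowledge
        return rules.get(len(corners), "unknown")  # a circle has no corners

    print(classify_shape([(0, 0), (1, 0), (0.5, 1)]))        # triangle
    print(classify_shape([(0, 0), (1, 0), (1, 1), (0, 1)]))  # square
    print(classify_shape([]))                                # unknown (circle)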
Again, this latter “deep thinking” process requires more hardware to handle the big data the algorithm generates. These machines tend to reside in large datacenters, working together as an artificial neural network that handles the data generated for and supplied to AI applications. Programs using deep learning algorithms also take longer to train, because they’re learning on their own instead of relying on hand-fed shortcuts.
“Deep Learning breaks down tasks in ways that makes all kinds of machine assists seem possible, even likely. Driverless cars, better preventive healthcare, even better movie recommendations, are all here today or on the horizon,” writes Nvidia’s Michael Copeland. “With Deep Learning’s help, A.I. may even get to that science fiction state we’ve so long imagined.”
A great recent example of deep learning is translation: technology that can listen to a presenter speaking English and translate the words into another language, as both text and an electronic voice, in real time. This achievement was a slow burn over the years, owing to differences in vocabulary and usage, variation in voice pitch, and hardware capabilities that were still maturing.
Deep learning is also responsible for conversation-carrying chatbots, Amazon Alexa, Microsoft Cortana, Facebook, Instagram, and more. On social media, algorithms based on deep learning are what cough up contact and page suggestions. Deep learning even helps companies customize their creepy advertising to your tastes even when you’re not on their site. Yay for technology.



Saturday, January 27, 2018

Difference between Artificial Intelligence and Augmented Intelligence

Artificial Intelligence is the collection of mathematics, computing, and statistics that makes it possible to perform tasks once deemed exclusively human, tackling problems that would otherwise be solved by human intelligence. Within this context, the algorithm is acknowledged to have a degree of independence: once training is complete, the system initiates action in its environment on its own and pursues the objectives it was set without interacting with a human agent.
Conversely, Augmented Intelligence integrates with and supports human thinking, analysis, and planning, while keeping the intent of a human actor at the center of the human-machine interaction.



Tuesday, January 16, 2018

Classification

Decision Trees

Jeff and Maria approach a bank to get a loan. The loan officer asks two questions.

1) Are you married?
They say yes. Great, a positive sign.
2) Are you both working?
They say yes. The officer checks their records, which confirm it; they have both been working at their company for three years. Job stability is another positive sign.

The loan officer then checks their credit history and finds that they have missed payments three times. A big negative.
Now he starts thinking: should I give them the loan? This is where a decision tree helps (see the sketch after the list below).
  • A classification approach that uses input variables to predict a categorical target variable.
  • Builds one tree for each predictable attribute.
  • Does not support aggregation.
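Here is a minimal sketch of the loan decision as a decision tree, using scikit-learn. The historical loan records are invented to mirror the story, with one row per past applicant and features [married, both_working, missed_payments].

    # Fit a small decision tree on made-up past loan outcomes,
    # then ask it about Jeff and Maria's application.
    from sklearn.tree import DecisionTreeClassifier, export_text

    X = [
        [1, 1, 0],   # married, both working, clean history
        [1, 1, 3],   # married, both working, 3 missed payments
        [0, 1, 0],
        [1, 0, 1],
        [0, 0, 2],
        [1, 1, 1],
    ]
    y = ["approve", "reject", "approve", "reject", "reject", "approve"]

    tree = DecisionTreeClassifier(max_depth=3, random_state=0)
    tree.fit(X, y)

    # Jeff and Maria: married (1), both working (1), 3 missed payments
    print(tree.predict([[1, 1, 3]]))                 # the tree's verdict
    print(export_text(tree, feature_names=[
        "married", "both_working", "missed_payments"]))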

GEN AI

  Stay Tuned....