
Machine Learning Support Notes - 1

Taxonomy of AI Algorithms

Presentation - Taxonomy of AI Algos - bom - 06 mar 2025.jpg

by easodre@gmail.com (March, 2025)

Map of Grokking Artificial Intelligence Algorithms

map of grokkin artificial intelligence.jpg

Rishal Hurbans, “Grokking Artificial Intelligence Algorithms”, Manning, 2020.


Course "Intro to Machine Learning"

from Kaggle (https://www.kaggle.com/learn/intro-to-machine-learning)

1.1 - How Models Work (the first step if you're new to machine learning)

1.2 - Basic Data Exploration (load and understand your data)

1.3 - Your First Machine Learning Model (building your first model. Hurray!)

1.4 - Model Validation (measure the performance of your model, so you can test and compare alternatives)

1.5 - Underfitting and Overfitting (fine-tune your model for better performance)

1.6 - Random Forests (using a more sophisticated machine learning algorithm)
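The course steps above can be sketched end to end with scikit-learn. This is only an illustrative outline: the synthetic dataset and all variable names are assumptions, not the course's own housing data.

```python
# Sketch of the "Intro to Machine Learning" workflow: split the data,
# fit a decision tree, validate it, then try a random forest.
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

# Synthetic data stands in for the course's housing dataset.
X, y = make_regression(n_samples=500, n_features=5, noise=10.0, random_state=0)

# 1.4 - Model Validation: hold out data the model never sees during training.
train_X, val_X, train_y, val_y = train_test_split(X, y, random_state=1)

# 1.3 - Your First Machine Learning Model: a single decision tree.
tree = DecisionTreeRegressor(random_state=1).fit(train_X, train_y)
tree_mae = mean_absolute_error(val_y, tree.predict(val_X))

# 1.6 - Random Forests: an ensemble of trees that usually validates better.
forest = RandomForestRegressor(random_state=1).fit(train_X, train_y)
forest_mae = mean_absolute_error(val_y, forest.predict(val_X))

print(f"decision tree MAE: {tree_mae:.1f}")
print(f"random forest MAE: {forest_mae:.1f}")
```

Comparing the two validation MAEs is exactly the workflow lessons 1.4-1.6 build toward: the held-out score, not the training score, decides which model to keep.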

o free lunch theorem.png

Main Challenges of Machine Learning

        Insufficient Quantity of Training Data

        Nonrepresentative Training Data

        Poor-Quality Data

        Irrelevant Features

        Overfitting the Training Data

        Underfitting the Training Data

Aurélien Géron, “Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems”, O'Reilly Media, 2nd edition, October 15, 2019.

Andreas Müller and Sarah Guido, "Introduction to Machine Learning with Python: A Guide for Data Scientists", O'Reilly Media, 1st edition, November 15, 2016.

Beale and Jackson, "Neural Computing", 1990.

Chapter 2 - Training Simple Machine Learning Algorithms for Classification

image linearly and non-linerly separable.png
image-bilogical neurons.png
weights and bias neuron.png
perceptron learning rule.png

Sebastian Raschka, Yuxi (Hayden) Liu, Vahid Mirjalili, “Machine Learning with PyTorch and Scikit-Learn: Develop machine learning and deep learning models with Python”, Packt Publishing, February 25, 2022.

Notes demonstrating the calculation of the partial derivatives used to minimize the ADALINE loss function.
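Those partial derivatives translate directly into a gradient descent update. The sketch below is a minimal, assumption-level illustration of full-batch gradient descent on the ADALINE mean-squared-error loss (the function name `fit_adaline` and the toy AND-style data are mine, not Raschka's code): with net input z = Xw + b and loss L = mean((y - z)^2), the gradients are dL/dw = -(2/n) Xᵀ(y - z) and dL/db = -2 · mean(y - z).

```python
import numpy as np

def fit_adaline(X, y, eta=0.01, n_iter=50):
    """Full-batch gradient descent on the ADALINE MSE loss."""
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.01, size=X.shape[1])
    b = 0.0
    for _ in range(n_iter):
        errors = y - (X @ w + b)                  # y - z
        w += eta * 2.0 * X.T @ errors / len(y)    # w -= eta * dL/dw
        b += eta * 2.0 * errors.mean()            # b -= eta * dL/db
    return w, b

# Tiny toy set with class labels encoded as -1 / +1 (an AND-like target).
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
y = np.array([-1.0, -1.0, -1.0, 1.0])
w, b = fit_adaline(X, y, eta=0.1, n_iter=500)

# ADALINE learns on the continuous net input; class labels come from
# thresholding it at zero.
preds = np.where(X @ w + b >= 0.0, 1.0, -1.0)
```

Note the key difference from the perceptron learning rule pictured above: the weight update uses the continuous error y - z, not the thresholded prediction.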

Difference Between Classification and Regression in Machine Learning

Classification predictive modeling problems are different from regression predictive modeling problems:

1. Classification is the task of predicting a discrete class label.

2. Regression is the task of predicting a continuous quantity.

There is some overlap between the algorithms for classification and regression; for example:

a) A classification algorithm may predict a continuous value, but the continuous value is in the form of a probability for a class label.

b) A regression algorithm may predict a discrete value, but the discrete value is in the form of an integer quantity.

Some algorithms can be used for both classification and regression with small modifications, such as decision trees and artificial neural networks. Others cannot, or cannot easily, be used for both problem types, such as linear regression for regression predictive modeling and logistic regression for classification predictive modeling.

Importantly, the way we evaluate classification and regression predictions differs and does not overlap; for example:

a) Classification predictions can be evaluated using accuracy, whereas regression predictions cannot.

b) Regression predictions can be evaluated using root mean squared error (RMSE), whereas classification predictions cannot.
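The evaluation split in points a) and b) can be shown in a few lines with scikit-learn (the toy label and number arrays are illustrative):

```python
import math

from sklearn.metrics import accuracy_score, mean_squared_error

# Classification predictions are discrete labels, evaluated with accuracy.
y_true_cls = [0, 1, 1, 0, 1]
y_pred_cls = [0, 1, 0, 0, 1]
acc = accuracy_score(y_true_cls, y_pred_cls)   # 4 of 5 correct -> 0.8

# Regression predictions are continuous quantities, evaluated with RMSE.
y_true_reg = [2.0, 3.0, 5.0]
y_pred_reg = [2.5, 2.5, 5.0]
rmse = math.sqrt(mean_squared_error(y_true_reg, y_pred_reg))
```

Swapping the metrics would be meaningless: accuracy on continuous targets almost never finds exact matches, and RMSE on class labels treats label codes as if their numeric distance mattered.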

Overfitting and Underfitting; Training and Test Data Set.

livro do Geron - Overfitting.jpg
livro do Geron - Underfitting.jpg
livro do Geron - Training and Testing.jpg

Aurélien Géron, “Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems”, O'Reilly Media, 2nd edition, October 15, 2019.
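The pattern in Géron's figures above can be reproduced numerically: compare the training score against the held-out test score as model capacity grows. This is an illustrative sketch on synthetic data, not Géron's own example.

```python
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

# A shallow tree underfits (both scores low); an unrestricted tree overfits
# (training score near perfect, test score clearly lower).
X, y = make_regression(n_samples=400, n_features=4, noise=15.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for depth in (1, 4, None):   # None = grow until leaves are pure
    model = DecisionTreeRegressor(max_depth=depth, random_state=0)
    model.fit(X_train, y_train)
    print(depth,
          round(model.score(X_train, y_train), 2),   # R² on training data
          round(model.score(X_test, y_test), 2))     # R² on held-out data
```

The gap between the two columns is the diagnostic: a large train/test gap signals overfitting, while two equally poor scores signal underfitting.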

Feature Scaling
# What is the difference between MinMaxScaler() and StandardScaler()?

# Gradient descent is one of the many algorithms that benefit from feature scaling.

# MinMaxScaler(feature_range=(0, 1)) rescales each value in the column
# proportionally into the range [0, 1]. Use it as the first scaler choice
# for a feature, as it preserves the shape of the original distribution
# (no distortion).

# StandardScaler() transforms each value in the column so that the column
# has mean 0 and standard deviation 1, i.e., each value is standardized by
# subtracting the mean and dividing by the standard deviation. Use
# StandardScaler if you know the data distribution is normal.

# If there are outliers, use RobustScaler(). Alternatively, you could remove
# the outliers and use either of the two scalers above (the choice depends
# on whether the data is normally distributed).
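The three scalers from the notes above can be compared side by side on one feature that contains an outlier (the toy array is illustrative):

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler, RobustScaler, StandardScaler

# One feature with an outlier (100.0), to contrast the three scalers.
X = np.array([[1.0], [2.0], [3.0], [4.0], [100.0]])

# MinMaxScaler: maps min -> 0 and max -> 1; the outlier squashes the rest
# of the values into a narrow band near 0.
mm = MinMaxScaler(feature_range=(0, 1)).fit_transform(X)

# StandardScaler: subtracts the mean and divides by the standard deviation;
# the outlier inflates both statistics.
std = StandardScaler().fit_transform(X)

# RobustScaler: centers on the median and divides by the IQR, so the four
# typical values keep a sensible spread regardless of the outlier.
rob = RobustScaler().fit_transform(X)
```

Running this shows why the notes recommend RobustScaler when outliers are present: under MinMaxScaler the non-outlier values all land below 0.04, while under RobustScaler they stay evenly spaced around 0.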

Performance Measure

imagem RMSE para Regression Prediction - em 28 jun 2023.png

Aurélien Géron, “Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems”, O'Reilly Media, 2nd edition, October 15, 2019.

imagem r2_score.jpg

Sebastian Raschka, Yuxi (Hayden) Liu, Vahid Mirjalili, “Machine Learning with PyTorch and Scikit-Learn: Develop machine learning and deep learning models with Python”, Packt Publishing, February 25, 2022.
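Both performance measures pictured above are one-liners in scikit-learn (the toy arrays are illustrative): RMSE is the square root of the mean squared error, and R² is 1 - SS_res / SS_tot.

```python
import numpy as np
from sklearn.metrics import mean_squared_error, r2_score

y_true = np.array([3.0, -0.5, 2.0, 7.0])
y_pred = np.array([2.5,  0.0, 2.0, 8.0])

# RMSE: square root of the mean of the squared residuals.
rmse = np.sqrt(mean_squared_error(y_true, y_pred))

# R²: 1 - SS_res / SS_tot, i.e. the fraction of the target's variance
# explained by the predictions (1.0 is a perfect fit).
r2 = r2_score(y_true, y_pred)
```

RMSE is in the same units as the target, which makes it easy to interpret; R² is unitless, which makes it easy to compare across targets with different scales.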

Support Vector Machines (SVM) with Scikit-learn Tutorial 
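As a minimal placeholder for this section, here is a basic SVM classification sketch with scikit-learn on the built-in Iris dataset (the split and hyperparameters are illustrative choices, not the tutorial's):

```python
from sklearn.datasets import load_iris
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Load a small built-in dataset and hold out a stratified test set.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

# SVC with the (default) RBF kernel; C controls the margin/error trade-off.
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(X_train, y_train)
acc = accuracy_score(y_test, clf.predict(X_test))
```

Note that SVMs are sensitive to feature scales, so on real data the classifier would normally be wrapped in a Pipeline with one of the scalers from the Feature Scaling section above.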

Chapter 11 - Implementing a Multilayer Artificial Neural Network from Scratch

 - Under Construction - 