**Find machine learning algorithms for your data.**

Use **Filters** to describe your data or model requirements. Then look at the **Applicable Models** that match.


**Column types**

What is the data type of your columns (a.k.a. “attributes” or “input / independent variables”)?

Columns can be:

- numerical, e.g. "7.6"
- binary, e.g. "true/false"
- categorical, e.g. "red/green/blue"
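The three column types above can be illustrated with a small helper. This is a hypothetical function (not part of any library) that applies the same distinctions the filters use; the classification order (numeric first, then two-valued, then everything else) is an assumption for illustration:

```python
def column_type(values):
    """Classify a column as 'numerical', 'binary', or 'categorical'.

    Hypothetical helper, illustrative only: numeric values win first,
    then any column with exactly two distinct values counts as binary,
    and everything else is categorical.
    """
    distinct = set(values)
    # bools are ints in Python, so exclude them from the numeric check
    if all(isinstance(v, (int, float)) and not isinstance(v, bool)
           for v in distinct):
        return "numerical"
    if len(distinct) == 2:
        return "binary"
    return "categorical"

print(column_type([7.6, 3.2, 1.0]))           # numerical
print(column_type([True, False, True]))       # binary
print(column_type(["red", "green", "blue"]))  # categorical
```

Note that a numeric column with only two distinct values (e.g. 0.0/1.0) is reported as numerical here; a real tool might treat it as binary instead.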

**Target type**

What is the data type of your target column (a.k.a. “label”, “class”, or “dependent variable”)? This is the column your model will predict.

Targets can be:

- numerical, e.g. "7.6"
- binary, e.g. "true/false"
- categorical, e.g. "red/green/blue"

It's fine not to have a target column!
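The target type determines the learning task. A minimal sketch of that mapping, assuming the usual correspondence (numerical target → regression, binary/categorical target → classification, no target → unsupervised learning):

```python
def task_for_target(target_type):
    """Map a target type to the learning task it implies.

    Illustrative helper only; mirrors the filter descriptions above.
    """
    if target_type is None:
        return "unsupervised"      # no target column: clustering etc.
    if target_type == "numerical":
        return "regression"        # predict a number, e.g. 7.6
    return "classification"        # binary or categorical targets

print(task_for_target("numerical"))  # regression
print(task_for_target("binary"))     # classification
print(task_for_target(None))         # unsupervised
```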

**Number of columns**

Approximately how many columns does your data have (order of magnitude)?

**Number of rows**

Approximately how many rows does your data have (order of magnitude)?

**Advanced options**

**Updatable:** Select if your model should accept new training data without retraining on the complete data set.
**Missings:** Select if your model should handle missing values in the data.
**Row weights:** Select if your model should take the importance of rows into account, giving rows with a higher weight more emphasis during training.
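All three advanced options can be demonstrated on a deliberately trivial "model", an incrementally updated weighted mean. This is an illustrative sketch, not any library's API; real learners expose analogous switches:

```python
class OnlineWeightedMean:
    """Trivial 'model' showing the three advanced options:
    - updatable: update() folds in new rows without revisiting old data
    - missings:  rows whose value is None are skipped
    - row weights: each row contributes in proportion to its weight
    """
    def __init__(self):
        self.weight_sum = 0.0
        self.weighted_total = 0.0

    def update(self, value, weight=1.0):
        if value is None:           # handle missing values by skipping
            return
        self.weight_sum += weight
        self.weighted_total += weight * value

    @property
    def mean(self):
        # undefined until at least one non-missing row has been seen
        return self.weighted_total / self.weight_sum

m = OnlineWeightedMean()
m.update(2.0, weight=1.0)
m.update(None)                  # missing value, ignored
m.update(4.0, weight=3.0)       # heavier row pulls the estimate toward 4
print(m.mean)                   # (1*2 + 3*4) / (1 + 3) = 3.5
```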

**Applicable Models**

- Predictive
  - Decision Tree: 19.3%
  - Naive Bayes: 12.6%
  - k-NN (MEM!): 11.2%
  - Neural Net: 8.2%
  - Linear Regression (MEM!): 7.3%
  - Support Vector Machine: 5.4%
  - Support Vector Machine (LibSVM): 4.7%
  - Logistic Regression: 4.5%
  - Rule Induction (MEM!): 2.8%
  - Random Forest (MEM!): 2.5%
  - ID3: 1.9%
  - Default Model: 1.5%
  - Support Vector Machine (Linear): 1.5%
  - Linear Discriminant Analysis: 1.5%
  - Naive Bayes (Kernel): 1.4%
  - Polynomial Regression: 1.4%
  - Perceptron (MEM!): 1.3%
  - Decision Tree (Multiway): 1.2%
  - Random Tree: 1.1%
  - CHAID: 1.0%
  - Decision Stump: 1.0%
  - Decision Tree (Weight-Based): 0.9%
  - AutoMLP: 0.9%
  - Logistic Regression (Evolutionary): 0.6%
  - Support Vector Machine (Evolutionary): 0.6%
  - Vector Linear Regression (MEM!): 0.5%
  - Gaussian Process: 0.4%
  - Support Vector Machine (PSO): 0.4%
  - Local Polynomial Regression: 0.3%
  - Relevance Vector Machine (MEM!): 0.3%
  - Quadratic Discriminant Analysis: 0.3%
  - Single Rule Induction (MEM!): 0.2%
  - Subgroup Discovery (MEM!): 0.2%
  - Seemingly Unrelated Regression: 0.2%
  - Fast Large Margin (MEM!): 0.2%
  - Hyper Hyper: 0.2%
  - Regularized Discriminant Analysis: 0.2%
  - Gradient Boosted Trees: 0%
  - Single Rule Induction (Single Attribute): 0%
  - Deep Learning: 0%
  - Generalized Linear Model: 0%
- Segmentations
  - k-Means: 50.0%
  - Agglomerative Clustering: 9.4%
  - DBSCAN: 6.9%
  - X-Means: 5.8%
  - k-Medoids: 5.8%
  - k-Means (Kernel): 4.7%
  - k-Means (fast) (MEM!): 4.5%
  - Support Vector Clustering: 4.2%
  - Expectation Maximization Clustering: 3.3%
  - Random Clustering: 2.7%
  - Top Down Clustering: 2.7%
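k-Means, by far the most used segmentation model above, alternates between assigning points to their nearest centroid and moving each centroid to the mean of its assigned points. A minimal one-dimensional sketch (illustrative only, not any specific tool's operator; the data and parameters are made up):

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal 1-D k-Means sketch: assign points to the nearest
    centroid, recompute centroids as cluster means, repeat."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)       # random distinct starting points
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        # keep the old centroid if a cluster ends up empty
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

# two obvious groups: one near 1.0, one near 9.5
print(kmeans([1.0, 1.2, 0.8, 9.0, 9.5, 10.0], k=2))
```

Real implementations add smarter initialization (e.g. k-means++) and a convergence check instead of a fixed iteration count.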
- Correlations
  - Correlations Matrix: 71.4%
  - Covariance Matrix: 8.8%
  - ANOVA Matrix: 7.7%
  - Grouped ANOVA: 3.8%
  - Mutual Information Matrix: 3.8%
  - Transition Graph: 2.2%
  - Rainflow Matrix: 1.3%
  - Transition Matrix: 1.1%
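The most used option here, the correlations matrix, is just the pairwise Pearson correlation between every pair of numerical columns. A self-contained sketch (illustrative; real operators also handle missing values and non-numeric columns):

```python
def pearson(xs, ys):
    """Pearson correlation of two equally long numerical columns."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def correlation_matrix(columns):
    """All pairwise correlations; diagonal entries are 1."""
    return [[pearson(a, b) for b in columns] for a in columns]

cols = [[1, 2, 3], [2, 4, 6], [3, 2, 1]]
for row in correlation_matrix(cols):
    print([round(v, 3) for v in row])
```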
- Feature Weights
  - Weight by Information Gain: 29.4%
  - Weight by Correlation: 16.6%
  - Weight by Information Gain Ratio: 14.9%
  - Weight by Gini Index: 13.7%
  - Weight by SVM: 11.1%
  - Weight by Chi Squared Statistic: 5.2%
  - Weight by PCA: 2.7%
  - Weight by Relief: 1.7%
  - Weight by Tree Importance: 1.2%
  - Weight by Rule: 1.0%
  - Weight by Deviation: 0.8%
  - Weight by Component Model: 0.7%
  - Weight by Uncertainty: 0.5%
  - Weight by Value Average: 0.5%
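Weighting by information gain, the most used option above, scores a column by how much knowing its value reduces the entropy of the target. A sketch for a single categorical column (illustrative; continuous columns would first need discretization):

```python
from math import log2
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label column, in bits."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(column, labels):
    """Entropy of the labels minus the expected entropy after
    splitting the rows by this column's value."""
    n = len(labels)
    remainder = 0.0
    for value in set(column):
        subset = [l for v, l in zip(column, labels) if v == value]
        remainder += len(subset) / n * entropy(subset)
    return entropy(labels) - remainder

# a column that perfectly predicts the label gets the maximum weight
print(information_gain(["a", "a", "b", "b"], [0, 0, 1, 1]))  # 1.0
# an uninformative column gets weight 0
print(information_gain(["a", "b", "a", "b"], [0, 0, 1, 1]))  # 0.0
```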
- Similarities
  - Data to Similarity (MEM!): 69.8%
  - Cross Distances (MEM!): 16.8%
  - Data to Similarity Data (MEM!): 13.5%
- Ensemble
  - AdaBoost: 16.3%
  - Bagging: 15.8%
  - Vote: 15.4%
  - Polynominal by Binominal Classification: 11.3%
  - Classification by Regression: 9.4%
  - MetaCost: 7.9%
  - Bayesian Boosting: 7.1%
  - Stacking: 6.8%
  - Subgroup Discovery (Meta): 3.7%
  - Hierarchical Classification: 2.8%
  - Additive Regression: 1.1%
  - Relative Regression: 1%
  - Find Threshold (Meta): 0.8%
  - Transformed Regression: 0.7%
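The Vote ensemble in the list above combines several base models by majority vote over their predictions. The core of the idea fits in a few lines (a sketch, not any specific operator's implementation):

```python
from collections import Counter

def vote(predictions):
    """Majority vote over base-model predictions for one row.
    With a tie, Counter returns the first-seen class (CPython 3.7+
    preserves insertion order)."""
    return Counter(predictions).most_common(1)[0][0]

# three hypothetical base models predict a class for the same row
print(vote(["spam", "ham", "spam"]))  # spam
```

Bagging and AdaBoost build on the same combination step but differ in how the base models are trained (resampling vs. reweighting of rows).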
- Optimizations
  - Optimize Parameters (Grid): 49.0%
  - Forward Selection: 11.6%
  - Optimize Selection: 7.7%
  - Optimize Selection (Evolutionary): 6.6%
  - Backward Elimination: 5.8%
  - Optimize Parameters (Evolutionary): 5.6%
  - Optimize Weights (Evolutionary): 4.0%
  - Optimize Selection (Brute Force): 2.6%
  - Optimize Weights (Forward): 1.3%
  - Optimize Weights (PSO): 1.3%
  - Optimize Selection (Weight-Guided): 1.1%
  - Optimize Parameters (Quadratic): 0.7%
  - Optimize by Generation (Evolutionary Aggregation): 0.7%
  - Optimize Weights (Backward): 0.6%
  - Optimize by Generation (YAGGA2): 0.5%
  - Optimize by Generation (GGA): 0.4%
  - Optimize by Generation (YAGGA): 0.4%
  - Optimize by Generation (AGA): 0.1%
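Grid optimization, the dominant choice above, simply evaluates the model once for every combination of candidate parameter values and keeps the best. A minimal sketch with a toy objective (the parameter names `k` and `depth` and the scoring function are hypothetical):

```python
from itertools import product

def grid_search(evaluate, grid):
    """Exhaustive grid search: try every parameter combination,
    return the best-scoring one (higher score is better)."""
    names = list(grid)
    best_params, best_score = None, float("-inf")
    for values in product(*(grid[n] for n in names)):
        params = dict(zip(names, values))
        score = evaluate(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

# toy objective that peaks at k=3, depth=2
best, score = grid_search(
    lambda p: -(p["k"] - 3) ** 2 - (p["depth"] - 2) ** 2,
    {"k": [1, 2, 3, 4], "depth": [1, 2, 3]},
)
print(best)   # {'k': 3, 'depth': 2}
```

The cost grows multiplicatively with each added parameter, which is why the evolutionary and quadratic variants above exist for larger search spaces.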