The following node is available in the open-source KNIME predictive analytics and data mining platform, version 2.7.1. Discover over 1000 other nodes, as well as enterprise functionality, at http://knime.com.
Class for building a best-first decision tree classifier. This class uses binary splits for both nominal and numeric attributes. Missing values are handled with the method of 'fractional' instances. For more information, see: Haijian Shi (2007). Best-first decision tree learning. Hamilton, NZ. Jerome Friedman, Trevor Hastie, Robert Tibshirani (2000). Additive logistic regression: a statistical view of boosting. Annals of Statistics, 28(2):337-407.
(based on WEKA 3.6)
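The node wraps the corresponding WEKA learner, so the same classifier can also be built directly through the WEKA Java API. Below is a minimal sketch, assuming WEKA 3.6 on the classpath with its weka.classifiers.trees.BFTree class, and a hypothetical local ARFF file named 'training.arff' whose last column is the class attribute.

```java
import weka.classifiers.trees.BFTree;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class BFTreeExample {
    public static void main(String[] args) throws Exception {
        // Load the training data; 'training.arff' is a placeholder file name.
        DataSource source = new DataSource("training.arff");
        Instances data = source.getDataSet();
        // Take the last attribute as the class attribute for this example.
        data.setClassIndex(data.numAttributes() - 1);

        // Build a best-first decision tree with default settings
        // (post-pruning, binary splits, fractional instances for missing values).
        BFTree tree = new BFTree();
        tree.buildClassifier(data);
        System.out.println(tree);
    }
}
```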
For further options, click the 'More' button in the dialog.
All WEKA dialogs have a panel where you can specify classifier-specific parameters.
The Preliminary Attribute Check tests the underlying classifier against the DataTable specification at the node's input port. Columns that are compatible with the classifier are marked with a green 'ok'; columns that are potentially incompatible are flagged with a red error message.
Important: if a column is marked as 'incompatible', this does not necessarily mean that the classifier cannot be executed. Sometimes the error message 'Cannot handle String class' simply means that no nominal values are available (yet); this may change during execution of the predecessor nodes.
Capabilities: Nominal attributes, Binary attributes, Unary attributes, Empty nominal attributes, Numeric attributes, Missing values, Nominal class, Binary class
Dependencies: none
Minimum number of instances: 1
S: The random number seed. (default: 1)
D: If set, the classifier is run in debug mode and may output additional information to the console.
P: The pruning strategy. (default: POSTPRUNED)
M: The minimum number of instances at the terminal nodes. (default: 2)
N: The number of folds used in the pruning. (default: 5)
H: If set, heuristic search is not used for nominal attributes in multi-class problems. (default: use heuristic search)
G: If set, the Gini index is not used for splitting; information gain is used instead. (default: use the Gini index)
R: If set, the root mean squared error is used in the internal cross-validation instead of the error rate. (default: use the error rate)
A: Use the 1 SE rule to make the pruning decision. (default: no)
C: The percentage of the training data size to use, in the range (0, 1]. (default: 1)
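The option letters above correspond to the command-line flags of the underlying WEKA classifier. As a hedged sketch (again assuming WEKA 3.6 and its weka.classifiers.trees.BFTree class), the settings could be reproduced programmatically as shown below; the flag values used are simply the documented defaults.

```java
import weka.classifiers.trees.BFTree;
import weka.core.Utils;

public class BFTreeOptions {
    public static void main(String[] args) throws Exception {
        BFTree tree = new BFTree();
        // -S 1 : random number seed, -M 2 : minimum instances at terminal nodes,
        // -N 5 : number of pruning folds, -C 1.0 : percentage of the training data size.
        // The pruning strategy (-P) is left at its default, post-pruning.
        tree.setOptions(Utils.splitOptions("-S 1 -M 2 -N 5 -C 1.0"));
        // Print the full option string the classifier will actually use.
        System.out.println(Utils.joinOptions(tree.getOptions()));
    }
}
```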
Input Ports
0: Training data

Output Ports
0: Trained classifier