Why does a model trained on principal components perform significantly worse than a model trained on the original predictors?
For a certain set of predictors, principal component analysis (PCA) produced new predictors that describe approximately 99% of the variance in the original data. A model (I don't know if this is important, but it was a regression decision tree) was trained both on the original predictors and on the principal components. The quality of the model on the principal components turned out to be about 17% worse than on the original predictors. What is the reason, given that the principal components capture almost all of the variation in the data?
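For reference, here is a minimal sketch of the comparison described above, assuming scikit-learn; the dataset, the tree depth, and the 99% variance threshold are illustrative choices, not the asker's actual setup:

```python
# Sketch: compare a regression tree on the original predictors
# vs. on principal components explaining ~99% of the variance.
from sklearn.datasets import fetch_california_housing
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeRegressor

X, y = fetch_california_housing(return_X_y=True)  # illustrative dataset
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Tree trained on the original predictors.
tree_raw = DecisionTreeRegressor(max_depth=6, random_state=0)
tree_raw.fit(X_train, y_train)
print("R^2 on original predictors:", tree_raw.score(X_test, y_test))

# Tree trained on principal components covering ~99% of the variance.
pca_tree = make_pipeline(
    StandardScaler(),
    PCA(n_components=0.99),  # keep enough components for 99% of variance
    DecisionTreeRegressor(max_depth=6, random_state=0),
)
pca_tree.fit(X_train, y_train)
print("R^2 on principal components:", pca_tree.score(X_test, y_test))
```

Note that PCA is fitted inside the pipeline on the training split only, so the comparison is not distorted by data leakage from the test set.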