Python
Jesu0s, 2020-07-09 23:28:45

How do I predict a variable as closely as possible, but never above it?

I use sklearn's GradientBoostingRegressor to predict a variable. The goal is not to predict it as accurately as possible, but to get as close as possible from one side only. That is, if y_true = [9, 6, 4, 9, 7], then y_pred should be [<=9, <=6, <=4, <=9, <=7]. A prediction of [0, 0, 0, 0, 0] would technically satisfy this, but I want the predictions to be as close to the true values as possible while never exceeding them. In other words, I need something like a constraint on the prediction in terms of the target variable. How can this be implemented? At least point me in a direction worth digging into (one idea I had was a neural network trained with a genetic algorithm and a modified error function).
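One direction worth noting (a sketch, not a guaranteed hard constraint): GradientBoostingRegressor supports an asymmetric quantile loss via loss="quantile". A small alpha makes the model estimate a low conditional quantile of the target, so predictions are biased downward and rarely exceed the true values, while still tracking them. The synthetic data below is purely illustrative.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(500, 1))
y = 2.0 * X.ravel() + rng.normal(0.0, 1.0, size=500)  # toy target

# loss="quantile" with a small alpha fits a low quantile of y given X,
# penalizing over-predictions much more than under-predictions.
model = GradientBoostingRegressor(
    loss="quantile", alpha=0.1,
    n_estimators=200, max_depth=3, random_state=0,
)
model.fit(X, y)
pred = model.predict(X)

# Fraction of predictions at or below the true value; with alpha=0.1
# this should be large (roughly 90% on the training data).
frac_below = float(np.mean(pred <= y))
```

This is not a hard guarantee that every prediction stays below the target, but tuning alpha trades off how often predictions overshoot against how close they stay.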


1 answer
Roman Mirilaczvili, 2020-07-10
@Jesu0s

Sounds like an optimization problem.
https://ru.wikipedia.org/wiki/%D0%9E%D0%BF%D1%82%D...

If the optimization consists of computing optimal parameter values for a fixed structure of the object, it is called parametric optimization.
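To make the parametric-optimization framing concrete, here is a minimal sketch (my own illustration, not from the answer): fit the parameters w of a linear model so that every prediction X @ w stays at or below y while being as large, i.e. as close to y, as possible. Maximizing the sum of predictions subject to X @ w <= y is a linear program, solvable with scipy.optimize.linprog.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=50)
X = np.column_stack([x, np.ones_like(x)])  # slope + intercept features
y = 2.0 * x + 1.0                          # exact linear target (toy data)

res = linprog(
    c=-X.sum(axis=0),           # maximize the total prediction
    A_ub=X, b_ub=y,             # constraint: never predict above the target
    bounds=[(None, None)] * 2,  # parameters are unbounded
)
w = res.x
pred = X @ w
```

For a linear model this gives a true hard constraint on the training set; for nonlinear models the same idea generalizes to constrained optimization with scipy.optimize.minimize and inequality constraints.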
