Hello,
I am hitting TensorFlow limits when feeding a larger dataset to the model. My model looks like this:
```python
import pymc4 as pm
import tensorflow as tf


@pm.model
def model(X, clicks, conversions):
    # Priors for the intercept and the per-feature coefficients
    b_0 = yield pm.Normal(loc=0., scale=10., name='b_0')
    betas = yield pm.Normal(loc=0., scale=10., name='betas', batch_stack=X.shape[1])
    # Expected conversion probability
    p = tf.math.sigmoid(b_0 + tf.tensordot(betas, tf.cast(X.T, tf.float32), axes=1))
    # Data likelihood
    obs = yield pm.Binomial('obs', clicks, p, observed=conversions)
```
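For context, this is roughly how I instantiate and sample the model (a sketch with placeholder arrays just to show the shapes involved; my real dataset is much larger, and I am assuming `pm.sample` with default settings, as in the PyMC4 notebooks):

```python
import numpy as np

# Placeholder data, only to illustrate the shapes;
# the real dataset is what triggers the limit.
n_obs, n_features = 1_000_000, 50
X = np.random.randn(n_obs, n_features)
clicks = np.random.randint(1, 100, size=n_obs)
conversions = np.random.binomial(clicks, 0.1)

# Sampling traces the model with tf.function; as far as I can tell,
# this is where the NumPy arrays end up baked into the graph.
trace = pm.sample(model(X, clicks, conversions))
```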
In this way I believe TensorFlow is embedding the whole dataset in the graph. Is this the correct way of doing linear regression? How can I avoid hitting this limit? Other examples do something similar, e.g.:
https://github.com/pymc-devs/pymc4/blob/master/notebooks/radon_hierarchical.ipynb
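One workaround I have been considering (I am not sure whether PyMC4's internal tracing makes this sufficient) is to convert the arrays to eager tensors once, before calling the model, since `tf.function` captures eager tensors as graph inputs instead of embedding them as `tf.constant` nodes:

```python
# Convert the data to eager tensors outside of any tf.function trace.
# NumPy arrays touched inside the trace get baked in as constants,
# which is (I believe) what hits the graph-size limit, whereas eager
# tensors are captured as inputs.
X_t = tf.convert_to_tensor(X, dtype=tf.float32)
clicks_t = tf.convert_to_tensor(clicks, dtype=tf.float32)
conversions_t = tf.convert_to_tensor(conversions, dtype=tf.float32)

# Note: a tf.Tensor has no .T attribute, so inside the model X.T
# would need to become tf.transpose(X) (and the tf.cast can go).
trace = pm.sample(model(X_t, clicks_t, conversions_t))
```

Would that be the recommended approach, or is there a better way?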
Thanks in advance
C