Interpret the results of a model
items = dummy_data_generator(50, 10, nrows=100)
dbunch = TSDataLoaders.from_items(items, horizon=5, lookback=20, step=5)
dbunch.show_batch(max_n=6)
(1, 60)
Train:400; Valid: 200; Test 100
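
Each element of a batch pairs a 20-step lookback window with the 5-step horizon the model has to forecast. A quick way to check this is to look at one batch; the exact tensor layout below is an assumption about this setup, not something shown above.

# Sketch: inspect one training batch to confirm the lookback/horizon split
# (assumes a univariate series; exact shapes depend on the library's tensor layout).
xb, yb = dbunch.one_batch()
print(xb.shape)   # expected to cover the 20-step lookback window
print(yb.shape)   # expected to cover the 5-step forecast horizon
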
learn = nbeats_learner(dbunch, layers=[100])
learn.lr_find()
learn.fit_flat_cos(3, 2e-1)
learn.recorder.plot_loss()
epoch  train_loss  valid_loss  mae       smape     theta     b_loss  f_loss  time
0      4.164892    3.244691    0.832825  1.312350  0.389949  nan     nan     00:00
1      3.600806    2.668536    0.741687  1.066607  0.983290  nan     nan     00:00
2      3.257172    2.495960    0.726824  1.043808  1.051242  nan     nan     00:00
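
The same numbers can be read back programmatically instead of from the printed table; a small sketch, assuming the standard fastai2 Recorder attributes metric_names and values:

# Sketch: pull the per-epoch metrics out of the recorder.
print(learn.recorder.metric_names)   # e.g. epoch, train_loss, valid_loss, mae, ...
print(learn.recorder.values[-1])     # values logged for the last epoch
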
learn.show_results(max_n=9)
learn.metrics = []
# Collect every 'bar' entry from the nested dict (nested matches are prefixed with
# their parent key) while skipping the 'ignore' subtree.
dct = {'foo': {'bar': 1}, 'bar': 2, 'foo2': {'foo3': 3}, 'ignore': {'bar': 1000}}
r = _get_key_from_nested_dct(dct, 'bar', ['ignore'])
test_eq(r, {'foobar': 1, 'bar': 2})
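
A minimal sketch of what a helper with this behaviour could look like; this is an illustration consistent with the test above, not the library's actual implementation:

def get_key_from_nested_dct_sketch(dct, key, exclude=(), prefix=''):
    "Collect values whose key contains `key`, flattening nested dicts and skipping `exclude`."
    res = {}
    for k, v in dct.items():
        if k in exclude: continue
        if isinstance(v, dict):
            res.update(get_key_from_nested_dct_sketch(v, key, exclude, prefix + k))
        elif key in k:
            res[prefix + k] = v
    return res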

class NBeatsInterpretation[source]

NBeatsInterpretation(dl, inputs, preds, targs, decoded, losses, dct=None)

Interpretation base class; can be inherited for task-specific Interpretation classes.

from fastai2.interpret import *
interp = NBeatsInterpretation.from_learner(learn)
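
Besides plotting, the object gives access to the raw losses through the fastai2 Interpretation base class; for example:

# The k largest losses and the indices of the corresponding validation items.
losses, idxs = interp.top_losses(3)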

add_stack[source]

add_stack(b)

Sum the per-block forecast/backcast tensors in b into one tensor per stack: keys such as 'bias0_0_f', 'bias0_1_f', 'bias0_2_f' are combined into 'bias0_f', as shown in the test below.

add_stack_full[source]

add_stack_full(b)

Same aggregation for the '_full' decomposition: 'bias0_0_full', 'bias0_1_full', … become 'bias0_full'.

# Per-block outputs from two stacks ('bias0' and 'trend1'); add_stack sums away the
# block index, keeping forecast ('_f') and backcast ('_b') parts separate.
dct = {'bias0_0_f': torch.ones(1,1), 'bias0_1_f': torch.ones(1,1), 'bias0_2_f': torch.ones(1,1), 'bias0_1_b': torch.ones(1,1)*10,
       'trend1_0_f': torch.ones(1,1)*100, 'trend1_1_f': torch.ones(1,1)*100}
res = add_stack(dct)
test_eq(res, {'trend1_f': tensor([[200.]]), 'bias0_f': tensor([[3.]]), 'bias0_b': tensor([[10.]])})

# Same idea for the '_full' decomposition.
dct = {'bias0_0_full': torch.ones(1,1), 'bias0_1_full': torch.ones(1,1), 'bias0_2_full': torch.ones(1,1),
       'trend1_1_full': torch.ones(1,1)*100, 'trend1_2_full': torch.ones(1,1)*100}
res = add_stack_full(dct)
test_eq(res, {'trend1_full': torch.ones(1,1)*200, 'bias0_full': tensor([[3.]])})
add stack before dict_keys(['bias0_0_f', 'bias0_1_f', 'bias0_2_f', 'bias0_1_b', 'trend1_0_f', 'trend1_1_f'])
add stack after dict_keys(['bias0_f', 'bias0_b', 'trend1_f'])
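
As a hypothetical illustration of the grouping these tests exercise (not the library's implementation): keys are named '<stack>_<block>_<part>', and dropping the block index while summing gives one tensor per stack and part.

def add_stack_sketch(b):
    "Sum '<stack>_<block>_<part>' entries into one '<stack>_<part>' tensor each."
    out = {}
    for k, v in b.items():
        stack, _block, part = k.split('_')   # e.g. 'bias0', '0', 'f'
        new_k = f'{stack}_{part}'
        out[new_k] = out.get(new_k, 0) + v
    return out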

plot_top_losses[source]

plot_top_losses(x, y, *args, **kwargs)

Type-dispatched function: the implementation that runs depends on the types of x and y; for time-series tensors (TSTensorSeq, TSTensorSeqy) it dispatches to ts_plot_top_losses below.

ts_plot_top_losses[source]

ts_plot_top_losses(x:TSTensorSeq, y:TSTensorSeqy, *args, blocks={}, total_b=None, combine_stack=False, rows=None, cols=None, figsize=None, **kwargs)

Plot the validation samples with the highest losses together with the N-Beats stack decomposition of the forecast; with combine_stack=True the per-block outputs are first summed per stack (see add_stack above).

interp.plot_top_losses(3, combine_stack=True)

TODO: make the scale work. TODO2: something besides the scale seems off; the last part of the plot does not seem to be shown.