Replies: 2 comments
-
My training dataset is TuSimple, with 3000 training images and 2000 testing images.
-
@zkyntu yes, it's been noted that those models can be more challenging to use in downstream tasks without hyperparameters similar to the original training, i.e. the LAMB optimizer and stronger augmentations. The original torchvision weights can be more forgiving. It's also worth noting that the newer 'with batteries' IMAGENET1K_V2 weights are similar to these... though they never did a resnet34 from what I recall. It's for this reason I made a wide variety of options available; it's best to figure out which ones work for your use case, since ImageNet-1k evals aren't always the best indicator. So, among the resnet34-sized weights, I'd see how these compare.
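Not a snippet from this thread, just a minimal sketch (assuming a recent timm, >= 0.9) of how one could enumerate resnet34 weight variants and load each as a features-only backbone for that kind of comparison; the tag names in the loop are examples, check the `list_pretrained` output for what's actually available:

```python
# Sketch: list resnet34 pretrained weight tags and load a few as feature backbones
# so they can be swapped into a downstream (e.g. lane detection) head for comparison.
import timm

print(timm.list_pretrained('resnet34*'))  # all available resnet34 weight tags (timm >= 0.9)

for tag in ('resnet34.a1_in1k', 'resnet34.a2_in1k', 'resnet34.tv_in1k'):  # example tags
    # features_only=True returns multi-scale feature maps, the usual input to dense-prediction heads
    backbone = timm.create_model(tag, pretrained=True, features_only=True, out_indices=(1, 2, 3, 4))
    print(tag, backbone.feature_info.channels())
```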
-
Hi, thanks for your great work! I have a question about transferring the ResNet Strikes Back (RSB) model to downstream tasks: I use resnet34.a1_in1k as the backbone in a lane detection task, and I find the performance degrades compared to the vanilla resnet34. F1 score results: resnet34-rsb (95.14) vs resnet34 (96.20). What could be the reasons? I expected performance to increase with the better backbone. My optimizer is AdamW with learning rate 0.0001 and weight decay 0.05. Image norm is {mean: [0, 0, 0], std: [255, 255, 255]}. Looking forward to your answer!
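For context, a minimal sketch (assuming timm's `resolve_data_config` helper) of how to inspect the mean/std these pretrained weights were trained with; the {mean: [0,0,0], std: [255,255,255]} setting above only rescales pixels to [0, 1]:

```python
# Sketch: check the preprocessing config attached to the pretrained weights.
import timm
from timm.data import resolve_data_config

model = timm.create_model('resnet34.a1_in1k', pretrained=True)
cfg = resolve_data_config({}, model=model)
print(cfg['mean'], cfg['std'])  # for these weights, the standard ImageNet mean/std
```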