xpixelgroup/hat 🖼️ → 🖼️
About
Activating More Pixels in Image Super-Resolution Transformer (HAT)

Example Output
[example output image]

Performance Metrics
- Prediction time: 7.27s
- Total time: 99.21s
Input Parameters
- image (required): Input image.
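To call the model programmatically, the sketch below uses the Replicate Python client with this input. The version string is the one listed under Version Details; the local file name and the API token environment variable are illustrative assumptions, not part of this page.

# Minimal sketch: run xpixelgroup/hat through the Replicate Python client.
# Assumes the `replicate` package is installed and REPLICATE_API_TOKEN is set
# in the environment; "low_res.png" is a hypothetical local input file.
import replicate

output = replicate.run(
    "xpixelgroup/hat:ad47b01e4923c19fed424451d925d7e95b743be1df19e9b3b0dbb9ed8685ed6b",
    input={"image": open("low_res.png", "rb")},
)
print(output)  # for this model, a pointer to the upscaled image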
Output Schema
Output
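The prediction output is a single upscaled image, which the API typically returns as a URL. A small sketch for saving it locally, assuming `output` is the URL string returned by the call above and that the `requests` package is available:

# Download and save the upscaled image returned by the prediction above.
# Assumes `output` is a plain URL string; newer replicate client versions may
# instead return a file-like object whose bytes can be written directly.
import requests

resp = requests.get(output, timeout=120)
resp.raise_for_status()
with open("upscaled.png", "wb") as f:
    f.write(resp.content)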
Example Execution Logs
Disable distributed.
2022-08-28 18:09:38,591 INFO:
    [BasicSR ASCII art banner]
    Version Information:
        BasicSR: 1.3.4.9
        PyTorch: 1.9.1+cu102
        TorchVision: 0.10.1+cu102
2022-08-28 18:09:38,592 INFO:
  name: HAT_SRx4_ImageNet-LR
  model_type: HATModel
  scale: 4
  num_gpu: 1
  manual_seed: 0
  datasets:[
    test_1:[
      name: custom
      type: SingleImageDataset
      dataroot_lq: input_dir
      io_backend:[
        type: disk
      ]
      phase: test
      scale: 4
    ]
  ]
  network_g:[
    type: HAT
    upscale: 4
    in_chans: 3
    img_size: 64
    window_size: 16
    compress_ratio: 3
    squeeze_factor: 30
    conv_scale: 0.01
    overlap_ratio: 0.5
    img_range: 1.0
    depths: [6, 6, 6, 6, 6, 6]
    embed_dim: 180
    num_heads: [6, 6, 6, 6, 6, 6]
    mlp_ratio: 2
    upsampler: pixelshuffle
    resi_connection: 1conv
  ]
  path:[
    pretrain_network_g: experiments/pretrained_models/HAT_SRx4_ImageNet-pretrain.pth
    strict_load_g: True
    param_key_g: params_ema
    results_root: /src/results/HAT_SRx4_ImageNet-LR
    log: /src/results/HAT_SRx4_ImageNet-LR
    visualization: /src/results/HAT_SRx4_ImageNet-LR/visualization
  ]
  val:[
    save_img: True
    suffix: None
  ]
  dist: False
  rank: 0
  world_size: 1
  auto_resume: False
  is_train: False
2022-08-28 18:09:38,592 INFO: Dataset [SingleImageDataset] - custom is built.
2022-08-28 18:09:38,592 INFO: Number of test images in custom: 1
2022-08-28 18:09:38,943 INFO: Network [HAT] is created.
2022-08-28 18:09:42,882 INFO: Network: HAT, with parameters: 20,772,507
2022-08-28 18:09:42,882 INFO: HAT(
    [full nn.Module printout condensed: conv_first, patch_embed/patch_unembed, and six RHAG groups, each containing six HAB blocks (WindowAttention + channel-attention CAB + Mlp) plus an OCAB overlapping cross-attention block and a 3x3 conv, followed by norm, conv_after_body, conv_before_upsample, a PixelShuffle x2 + x2 upsampler, and conv_last]
)
2022-08-28 18:09:42,955 INFO: Loading HAT model from experiments/pretrained_models/HAT_SRx4_ImageNet-pretrain.pth, with param key: [params_ema].
2022-08-28 18:09:43,225 INFO: Model [HATModel] is created.
2022-08-28 18:09:43,226 INFO: Testing custom...
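For reference, the network_g options in the log above map directly onto the HAT constructor. The sketch below shows this, assuming the hat.archs.hat_arch module layout from the XPixelGroup/HAT repository, BasicSR installed, and a local copy of the pretrained checkpoint at the path printed in the log.

# Sketch: build the x4 HAT generator from the options printed in the log and
# load the `params_ema` weights, as the HATModel does internally.
# Assumes the XPixelGroup/HAT code (hat/archs/hat_arch.py) is on the path.
import torch
from hat.archs.hat_arch import HAT  # assumed module path from the HAT repo

net = HAT(
    upscale=4,
    in_chans=3,
    img_size=64,
    window_size=16,
    compress_ratio=3,
    squeeze_factor=30,
    conv_scale=0.01,
    overlap_ratio=0.5,
    img_range=1.0,
    depths=[6, 6, 6, 6, 6, 6],
    embed_dim=180,
    num_heads=[6, 6, 6, 6, 6, 6],
    mlp_ratio=2,
    upsampler='pixelshuffle',
    resi_connection='1conv',
)

state = torch.load(
    'experiments/pretrained_models/HAT_SRx4_ImageNet-pretrain.pth',
    map_location='cpu',
)
net.load_state_dict(state['params_ema'], strict=True)
net.eval()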
Version Details
- Version ID: ad47b01e4923c19fed424451d925d7e95b743be1df19e9b3b0dbb9ed8685ed6b
- Version Created: August 31, 2022