In fact, because SwinUnet has a symmetric encoder-decoder structure, when loading pretrained weights the author does not do the usual thing of loading only the encoder and leaving the decoder uninitialized; instead, the encoder weights are also mirrored onto the corresponding decoder stages (except for swin_unet.layers_up.1/2/3.upsample).
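The mirroring can be sketched roughly as follows (a minimal sketch, assuming a Swin checkpoint whose encoder stages are keyed layers.0–layers.3; the function and variable names here are illustrative, not the author's exact load_from code):

import copy
import torch

def load_mirrored_weights(model, pretrained_path):
    # Load the pretrained Swin encoder checkpoint (official checkpoints keep weights under 'model')
    pretrained_dict = torch.load(pretrained_path, map_location='cpu')['model']
    full_dict = copy.deepcopy(pretrained_dict)
    # Mirror each encoder stage layers.i onto the decoder stage layers_up.(3-i)
    for k, v in pretrained_dict.items():
        if k.startswith('layers.'):
            mirrored_k = 'layers_up.' + str(3 - int(k[7])) + k[8:]
            full_dict[mirrored_k] = v
    # Drop keys that do not exist in the model or whose shapes differ;
    # this is what leaves the layers_up.*.upsample modules randomly initialized
    model_dict = model.state_dict()
    for k in list(full_dict.keys()):
        if k not in model_dict or full_dict[k].shape != model_dict[k].shape:
            del full_dict[k]
    model.load_state_dict(full_dict, strict=False)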
def get_current_consistency_weight(epoch):
    # Consistency ramp-up from https://arxiv.org/abs/1610.02242
    return args.consistency * ramps.sigmoid_rampup(epoch, args.consistency_rampup)
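The ramp-up factor comes from ramps.sigmoid_rampup; a typical implementation (as in the Mean Teacher / SSL4MIS codebases, shown here as a sketch rather than the author's exact file) clamps the epoch to the ramp-up length and returns exp(-5(1-t)^2):

import numpy as np

def sigmoid_rampup(current, rampup_length):
    # Exponential ramp-up from https://arxiv.org/abs/1610.02242 (Laine & Aila)
    if rampup_length == 0:
        return 1.0
    current = np.clip(current, 0.0, rampup_length)
    phase = 1.0 - current / rampup_length
    return float(np.exp(-5.0 * phase * phase))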
parser.add_argument('--max_iterations', type=int, default=300, help='maximum iterations number to train')
# original default was 30000; changing max_iterations changes the number of training epochs
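Because training is bounded by iterations rather than epochs, lowering the default from 30000 to 300 shortens training proportionally; the epoch count is usually derived from the loader length, roughly as in this sketch (trainloader and the loop structure are assumptions in the style of such training scripts, not the author's exact code):

# Sketch of an iteration-driven training loop (hypothetical names)
iter_num = 0
max_epoch = args.max_iterations // len(trainloader) + 1  # epochs follow from max_iterations
for epoch_num in range(max_epoch):
    for i_batch, sampled_batch in enumerate(trainloader):
        iter_num += 1
        # one forward/backward step per iteration goes here
        if iter_num >= args.max_iterations:
            break
    if iter_num >= args.max_iterations:
        break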
os.environ[ " CUDA_VISIBLE_DEVICES" ]=‘4,5’
os.environ[ " CUDA_VISIBLE_DEVICES" ]=‘4,5’ …41