Shape-mismatch report (Model 1 vs Model 2), summarized. Every differing tensor is reduced in dim 0 only, by the same factor of roughly 0.70 (2150/3072 ≈ 8601/12288 ≈ 12902/18432 ≈ 0.700; 89/128 ≈ 0.695); all other dims match Model 2 exactly.

The mismatches fall into four patterns:

  2150 vs 3072    (hidden size)
    context_embedder.{weight,bias}
    time_text_embed.{guidance_embedder,text_embedder,timestep_embedder}.linear_{1,2}.{weight,bias}
    transformer_blocks.N.attn.{to_q,to_k,to_v,to_out.0,add_q_proj,add_k_proj,add_v_proj,to_add_out}.{weight,bias}
    transformer_blocks.N.{ff,ff_context}.net.2.{weight,bias}   (weight: [2150, 12288] vs [3072, 12288])

  89 vs 128       (per-head dim)
    transformer_blocks.N.attn.{norm_q,norm_k,norm_added_q,norm_added_k}.weight

  8601 vs 12288   (FF inner dim, 12288 = 4 x 3072)
    transformer_blocks.N.{ff,ff_context}.net.0.proj.{weight,bias}   (weight: [8601, 3072] vs [12288, 3072])

  12902 vs 18432  (modulation dim, 18432 = 6 x 3072)
    transformer_blocks.N.{norm1,norm1_context}.linear.{weight,bias}

Affected modules: context_embedder, time_text_embed, and transformer_blocks 0, 1, 2, 10, 11, 12, 13, 14, plus the start of block 3 (the log is cut off partway through block 3's attn entries, so later blocks are not shown). Block 14 reports only the attn and norm1/norm1_context mismatches; no ff/ff_context entries for it appear in the log.

Note that Model 1's shapes are internally inconsistent: for example, transformer_blocks.0.ff.net.0.proj.weight is [8601, 3072] (input width still the full 3072, not 2150) while ff.net.2.weight is [2150, 12288]. A coherent narrower architecture would shrink the matching input dims as well, so this pattern (every weight keeping its full input width but losing ~30% of its rows) suggests Model 1's tensors are truncated or corrupted, not a genuinely smaller model.
Tensor 'transformer_blocks.3.attn.to_q.weight' has different shapes: Model 1: torch.Size([2150, 3072]), Model 2: torch.Size([3072, 3072]) Tensor 'transformer_blocks.3.attn.to_v.bias' has different shapes: Model 1: torch.Size([2150]), Model 2: torch.Size([3072]) Tensor 'transformer_blocks.3.attn.to_v.weight' has different shapes: Model 1: torch.Size([2150, 3072]), Model 2: torch.Size([3072, 3072]) Tensor 'transformer_blocks.3.ff.net.0.proj.bias' has different shapes: Model 1: torch.Size([8601]), Model 2: torch.Size([12288]) Tensor 'transformer_blocks.3.ff.net.0.proj.weight' has different shapes: Model 1: torch.Size([8601, 3072]), Model 2: torch.Size([12288, 3072]) Tensor 'transformer_blocks.3.ff.net.2.bias' has different shapes: Model 1: torch.Size([2150]), Model 2: torch.Size([3072]) Tensor 'transformer_blocks.3.ff.net.2.weight' has different shapes: Model 1: torch.Size([2150, 12288]), Model 2: torch.Size([3072, 12288]) Tensor 'transformer_blocks.3.ff_context.net.0.proj.bias' has different shapes: Model 1: torch.Size([8601]), Model 2: torch.Size([12288]) Tensor 'transformer_blocks.3.ff_context.net.0.proj.weight' has different shapes: Model 1: torch.Size([8601, 3072]), Model 2: torch.Size([12288, 3072]) Tensor 'transformer_blocks.3.ff_context.net.2.bias' has different shapes: Model 1: torch.Size([2150]), Model 2: torch.Size([3072]) Tensor 'transformer_blocks.3.ff_context.net.2.weight' has different shapes: Model 1: torch.Size([2150, 12288]), Model 2: torch.Size([3072, 12288]) Tensor 'transformer_blocks.3.norm1.linear.bias' has different shapes: Model 1: torch.Size([12902]), Model 2: torch.Size([18432]) Tensor 'transformer_blocks.3.norm1.linear.weight' has different shapes: Model 1: torch.Size([12902, 3072]), Model 2: torch.Size([18432, 3072]) Tensor 'transformer_blocks.3.norm1_context.linear.bias' has different shapes: Model 1: torch.Size([12902]), Model 2: torch.Size([18432]) Tensor 'transformer_blocks.3.norm1_context.linear.weight' has different shapes: Model 1: 
torch.Size([12902, 3072]), Model 2: torch.Size([18432, 3072]) Tensor 'transformer_blocks.4.attn.add_k_proj.bias' has different shapes: Model 1: torch.Size([2150]), Model 2: torch.Size([3072]) Tensor 'transformer_blocks.4.attn.add_k_proj.weight' has different shapes: Model 1: torch.Size([2150, 3072]), Model 2: torch.Size([3072, 3072]) Tensor 'transformer_blocks.4.attn.add_q_proj.bias' has different shapes: Model 1: torch.Size([2150]), Model 2: torch.Size([3072]) Tensor 'transformer_blocks.4.attn.add_q_proj.weight' has different shapes: Model 1: torch.Size([2150, 3072]), Model 2: torch.Size([3072, 3072]) Tensor 'transformer_blocks.4.attn.add_v_proj.bias' has different shapes: Model 1: torch.Size([2150]), Model 2: torch.Size([3072]) Tensor 'transformer_blocks.4.attn.add_v_proj.weight' has different shapes: Model 1: torch.Size([2150, 3072]), Model 2: torch.Size([3072, 3072]) Tensor 'transformer_blocks.4.attn.norm_added_k.weight' has different shapes: Model 1: torch.Size([89]), Model 2: torch.Size([128]) Tensor 'transformer_blocks.4.attn.norm_added_q.weight' has different shapes: Model 1: torch.Size([89]), Model 2: torch.Size([128]) Tensor 'transformer_blocks.4.attn.norm_k.weight' has different shapes: Model 1: torch.Size([89]), Model 2: torch.Size([128]) Tensor 'transformer_blocks.4.attn.norm_q.weight' has different shapes: Model 1: torch.Size([89]), Model 2: torch.Size([128]) Tensor 'transformer_blocks.4.attn.to_add_out.bias' has different shapes: Model 1: torch.Size([2150]), Model 2: torch.Size([3072]) Tensor 'transformer_blocks.4.attn.to_add_out.weight' has different shapes: Model 1: torch.Size([2150, 3072]), Model 2: torch.Size([3072, 3072]) Tensor 'transformer_blocks.4.attn.to_k.bias' has different shapes: Model 1: torch.Size([2150]), Model 2: torch.Size([3072]) Tensor 'transformer_blocks.4.attn.to_k.weight' has different shapes: Model 1: torch.Size([2150, 3072]), Model 2: torch.Size([3072, 3072]) Tensor 'transformer_blocks.4.attn.to_out.0.bias' has different 
shapes: Model 1: torch.Size([2150]), Model 2: torch.Size([3072]) Tensor 'transformer_blocks.4.attn.to_out.0.weight' has different shapes: Model 1: torch.Size([2150, 3072]), Model 2: torch.Size([3072, 3072]) Tensor 'transformer_blocks.4.attn.to_q.bias' has different shapes: Model 1: torch.Size([2150]), Model 2: torch.Size([3072]) Tensor 'transformer_blocks.4.attn.to_q.weight' has different shapes: Model 1: torch.Size([2150, 3072]), Model 2: torch.Size([3072, 3072]) Tensor 'transformer_blocks.4.attn.to_v.bias' has different shapes: Model 1: torch.Size([2150]), Model 2: torch.Size([3072]) Tensor 'transformer_blocks.4.attn.to_v.weight' has different shapes: Model 1: torch.Size([2150, 3072]), Model 2: torch.Size([3072, 3072]) Tensor 'transformer_blocks.4.ff.net.0.proj.bias' has different shapes: Model 1: torch.Size([8601]), Model 2: torch.Size([12288]) Tensor 'transformer_blocks.4.ff.net.0.proj.weight' has different shapes: Model 1: torch.Size([8601, 3072]), Model 2: torch.Size([12288, 3072]) Tensor 'transformer_blocks.4.ff.net.2.bias' has different shapes: Model 1: torch.Size([2150]), Model 2: torch.Size([3072]) Tensor 'transformer_blocks.4.ff.net.2.weight' has different shapes: Model 1: torch.Size([2150, 12288]), Model 2: torch.Size([3072, 12288]) Tensor 'transformer_blocks.4.ff_context.net.0.proj.bias' has different shapes: Model 1: torch.Size([8601]), Model 2: torch.Size([12288]) Tensor 'transformer_blocks.4.ff_context.net.0.proj.weight' has different shapes: Model 1: torch.Size([8601, 3072]), Model 2: torch.Size([12288, 3072]) Tensor 'transformer_blocks.4.ff_context.net.2.bias' has different shapes: Model 1: torch.Size([2150]), Model 2: torch.Size([3072]) Tensor 'transformer_blocks.4.ff_context.net.2.weight' has different shapes: Model 1: torch.Size([2150, 12288]), Model 2: torch.Size([3072, 12288]) Tensor 'transformer_blocks.4.norm1.linear.bias' has different shapes: Model 1: torch.Size([12902]), Model 2: torch.Size([18432]) Tensor 
'transformer_blocks.4.norm1.linear.weight' has different shapes: Model 1: torch.Size([12902, 3072]), Model 2: torch.Size([18432, 3072]) Tensor 'transformer_blocks.4.norm1_context.linear.bias' has different shapes: Model 1: torch.Size([12902]), Model 2: torch.Size([18432]) Tensor 'transformer_blocks.4.norm1_context.linear.weight' has different shapes: Model 1: torch.Size([12902, 3072]), Model 2: torch.Size([18432, 3072]) Tensor 'transformer_blocks.5.attn.add_k_proj.bias' has different shapes: Model 1: torch.Size([2150]), Model 2: torch.Size([3072]) Tensor 'transformer_blocks.5.attn.add_k_proj.weight' has different shapes: Model 1: torch.Size([2150, 3072]), Model 2: torch.Size([3072, 3072]) Tensor 'transformer_blocks.5.attn.add_q_proj.bias' has different shapes: Model 1: torch.Size([2150]), Model 2: torch.Size([3072]) Tensor 'transformer_blocks.5.attn.add_q_proj.weight' has different shapes: Model 1: torch.Size([2150, 3072]), Model 2: torch.Size([3072, 3072]) Tensor 'transformer_blocks.5.attn.add_v_proj.bias' has different shapes: Model 1: torch.Size([2150]), Model 2: torch.Size([3072]) Tensor 'transformer_blocks.5.attn.add_v_proj.weight' has different shapes: Model 1: torch.Size([2150, 3072]), Model 2: torch.Size([3072, 3072]) Tensor 'transformer_blocks.5.attn.norm_added_k.weight' has different shapes: Model 1: torch.Size([89]), Model 2: torch.Size([128]) Tensor 'transformer_blocks.5.attn.norm_added_q.weight' has different shapes: Model 1: torch.Size([89]), Model 2: torch.Size([128]) Tensor 'transformer_blocks.5.attn.norm_k.weight' has different shapes: Model 1: torch.Size([89]), Model 2: torch.Size([128]) Tensor 'transformer_blocks.5.attn.norm_q.weight' has different shapes: Model 1: torch.Size([89]), Model 2: torch.Size([128]) Tensor 'transformer_blocks.5.attn.to_add_out.bias' has different shapes: Model 1: torch.Size([2150]), Model 2: torch.Size([3072]) Tensor 'transformer_blocks.5.attn.to_add_out.weight' has different shapes: Model 1: torch.Size([2150, 3072]), 
Model 2: torch.Size([3072, 3072]) Tensor 'transformer_blocks.5.attn.to_k.bias' has different shapes: Model 1: torch.Size([2150]), Model 2: torch.Size([3072]) Tensor 'transformer_blocks.5.attn.to_k.weight' has different shapes: Model 1: torch.Size([2150, 3072]), Model 2: torch.Size([3072, 3072]) Tensor 'transformer_blocks.5.attn.to_out.0.bias' has different shapes: Model 1: torch.Size([2150]), Model 2: torch.Size([3072]) Tensor 'transformer_blocks.5.attn.to_out.0.weight' has different shapes: Model 1: torch.Size([2150, 3072]), Model 2: torch.Size([3072, 3072]) Tensor 'transformer_blocks.5.attn.to_q.bias' has different shapes: Model 1: torch.Size([2150]), Model 2: torch.Size([3072]) Tensor 'transformer_blocks.5.attn.to_q.weight' has different shapes: Model 1: torch.Size([2150, 3072]), Model 2: torch.Size([3072, 3072]) Tensor 'transformer_blocks.5.attn.to_v.bias' has different shapes: Model 1: torch.Size([2150]), Model 2: torch.Size([3072]) Tensor 'transformer_blocks.5.attn.to_v.weight' has different shapes: Model 1: torch.Size([2150, 3072]), Model 2: torch.Size([3072, 3072]) Tensor 'transformer_blocks.5.ff.net.0.proj.bias' has different shapes: Model 1: torch.Size([8601]), Model 2: torch.Size([12288]) Tensor 'transformer_blocks.5.ff.net.0.proj.weight' has different shapes: Model 1: torch.Size([8601, 3072]), Model 2: torch.Size([12288, 3072]) Tensor 'transformer_blocks.5.ff.net.2.bias' has different shapes: Model 1: torch.Size([2150]), Model 2: torch.Size([3072]) Tensor 'transformer_blocks.5.ff.net.2.weight' has different shapes: Model 1: torch.Size([2150, 12288]), Model 2: torch.Size([3072, 12288]) Tensor 'transformer_blocks.5.ff_context.net.0.proj.bias' has different shapes: Model 1: torch.Size([8601]), Model 2: torch.Size([12288]) Tensor 'transformer_blocks.5.ff_context.net.0.proj.weight' has different shapes: Model 1: torch.Size([8601, 3072]), Model 2: torch.Size([12288, 3072]) Tensor 'transformer_blocks.5.ff_context.net.2.bias' has different shapes: Model 1: 
torch.Size([2150]), Model 2: torch.Size([3072]) Tensor 'transformer_blocks.5.ff_context.net.2.weight' has different shapes: Model 1: torch.Size([2150, 12288]), Model 2: torch.Size([3072, 12288]) Tensor 'transformer_blocks.5.norm1.linear.bias' has different shapes: Model 1: torch.Size([12902]), Model 2: torch.Size([18432]) Tensor 'transformer_blocks.5.norm1.linear.weight' has different shapes: Model 1: torch.Size([12902, 3072]), Model 2: torch.Size([18432, 3072]) Tensor 'transformer_blocks.5.norm1_context.linear.bias' has different shapes: Model 1: torch.Size([12902]), Model 2: torch.Size([18432]) Tensor 'transformer_blocks.5.norm1_context.linear.weight' has different shapes: Model 1: torch.Size([12902, 3072]), Model 2: torch.Size([18432, 3072]) Tensor 'transformer_blocks.6.attn.add_k_proj.bias' has different shapes: Model 1: torch.Size([2150]), Model 2: torch.Size([3072]) Tensor 'transformer_blocks.6.attn.add_k_proj.weight' has different shapes: Model 1: torch.Size([2150, 3072]), Model 2: torch.Size([3072, 3072]) Tensor 'transformer_blocks.6.attn.add_q_proj.bias' has different shapes: Model 1: torch.Size([2150]), Model 2: torch.Size([3072]) Tensor 'transformer_blocks.6.attn.add_q_proj.weight' has different shapes: Model 1: torch.Size([2150, 3072]), Model 2: torch.Size([3072, 3072]) Tensor 'transformer_blocks.6.attn.add_v_proj.bias' has different shapes: Model 1: torch.Size([2150]), Model 2: torch.Size([3072]) Tensor 'transformer_blocks.6.attn.add_v_proj.weight' has different shapes: Model 1: torch.Size([2150, 3072]), Model 2: torch.Size([3072, 3072]) Tensor 'transformer_blocks.6.attn.norm_added_k.weight' has different shapes: Model 1: torch.Size([89]), Model 2: torch.Size([128]) Tensor 'transformer_blocks.6.attn.norm_added_q.weight' has different shapes: Model 1: torch.Size([89]), Model 2: torch.Size([128]) Tensor 'transformer_blocks.6.attn.norm_k.weight' has different shapes: Model 1: torch.Size([89]), Model 2: torch.Size([128]) Tensor 
'transformer_blocks.6.attn.norm_q.weight' has different shapes: Model 1: torch.Size([89]), Model 2: torch.Size([128]) Tensor 'transformer_blocks.6.attn.to_add_out.bias' has different shapes: Model 1: torch.Size([2150]), Model 2: torch.Size([3072]) Tensor 'transformer_blocks.6.attn.to_add_out.weight' has different shapes: Model 1: torch.Size([2150, 3072]), Model 2: torch.Size([3072, 3072]) Tensor 'transformer_blocks.6.attn.to_k.bias' has different shapes: Model 1: torch.Size([2150]), Model 2: torch.Size([3072]) Tensor 'transformer_blocks.6.attn.to_k.weight' has different shapes: Model 1: torch.Size([2150, 3072]), Model 2: torch.Size([3072, 3072]) Tensor 'transformer_blocks.6.attn.to_out.0.bias' has different shapes: Model 1: torch.Size([2150]), Model 2: torch.Size([3072]) Tensor 'transformer_blocks.6.attn.to_out.0.weight' has different shapes: Model 1: torch.Size([2150, 3072]), Model 2: torch.Size([3072, 3072]) Tensor 'transformer_blocks.6.attn.to_q.bias' has different shapes: Model 1: torch.Size([2150]), Model 2: torch.Size([3072]) Tensor 'transformer_blocks.6.attn.to_q.weight' has different shapes: Model 1: torch.Size([2150, 3072]), Model 2: torch.Size([3072, 3072]) Tensor 'transformer_blocks.6.attn.to_v.bias' has different shapes: Model 1: torch.Size([2150]), Model 2: torch.Size([3072]) Tensor 'transformer_blocks.6.attn.to_v.weight' has different shapes: Model 1: torch.Size([2150, 3072]), Model 2: torch.Size([3072, 3072]) Tensor 'transformer_blocks.6.ff.net.0.proj.bias' has different shapes: Model 1: torch.Size([8601]), Model 2: torch.Size([12288]) Tensor 'transformer_blocks.6.ff.net.0.proj.weight' has different shapes: Model 1: torch.Size([8601, 3072]), Model 2: torch.Size([12288, 3072]) Tensor 'transformer_blocks.6.ff.net.2.bias' has different shapes: Model 1: torch.Size([2150]), Model 2: torch.Size([3072]) Tensor 'transformer_blocks.6.ff.net.2.weight' has different shapes: Model 1: torch.Size([2150, 12288]), Model 2: torch.Size([3072, 12288]) Tensor 
'transformer_blocks.6.ff_context.net.0.proj.bias' has different shapes: Model 1: torch.Size([8601]), Model 2: torch.Size([12288]) Tensor 'transformer_blocks.6.ff_context.net.0.proj.weight' has different shapes: Model 1: torch.Size([8601, 3072]), Model 2: torch.Size([12288, 3072]) Tensor 'transformer_blocks.6.ff_context.net.2.bias' has different shapes: Model 1: torch.Size([2150]), Model 2: torch.Size([3072]) Tensor 'transformer_blocks.6.ff_context.net.2.weight' has different shapes: Model 1: torch.Size([2150, 12288]), Model 2: torch.Size([3072, 12288]) Tensor 'transformer_blocks.6.norm1.linear.bias' has different shapes: Model 1: torch.Size([12902]), Model 2: torch.Size([18432]) Tensor 'transformer_blocks.6.norm1.linear.weight' has different shapes: Model 1: torch.Size([12902, 3072]), Model 2: torch.Size([18432, 3072]) Tensor 'transformer_blocks.6.norm1_context.linear.bias' has different shapes: Model 1: torch.Size([12902]), Model 2: torch.Size([18432]) Tensor 'transformer_blocks.6.norm1_context.linear.weight' has different shapes: Model 1: torch.Size([12902, 3072]), Model 2: torch.Size([18432, 3072]) Tensor 'transformer_blocks.7.attn.add_k_proj.bias' has different shapes: Model 1: torch.Size([2150]), Model 2: torch.Size([3072]) Tensor 'transformer_blocks.7.attn.add_k_proj.weight' has different shapes: Model 1: torch.Size([2150, 3072]), Model 2: torch.Size([3072, 3072]) Tensor 'transformer_blocks.7.attn.add_q_proj.bias' has different shapes: Model 1: torch.Size([2150]), Model 2: torch.Size([3072]) Tensor 'transformer_blocks.7.attn.add_q_proj.weight' has different shapes: Model 1: torch.Size([2150, 3072]), Model 2: torch.Size([3072, 3072]) Tensor 'transformer_blocks.7.attn.add_v_proj.bias' has different shapes: Model 1: torch.Size([2150]), Model 2: torch.Size([3072]) Tensor 'transformer_blocks.7.attn.add_v_proj.weight' has different shapes: Model 1: torch.Size([2150, 3072]), Model 2: torch.Size([3072, 3072]) Tensor 'transformer_blocks.7.attn.norm_added_k.weight' has 
different shapes: Model 1: torch.Size([89]), Model 2: torch.Size([128]) Tensor 'transformer_blocks.7.attn.norm_added_q.weight' has different shapes: Model 1: torch.Size([89]), Model 2: torch.Size([128]) Tensor 'transformer_blocks.7.attn.norm_k.weight' has different shapes: Model 1: torch.Size([89]), Model 2: torch.Size([128]) Tensor 'transformer_blocks.7.attn.norm_q.weight' has different shapes: Model 1: torch.Size([89]), Model 2: torch.Size([128]) Tensor 'transformer_blocks.7.attn.to_add_out.bias' has different shapes: Model 1: torch.Size([2150]), Model 2: torch.Size([3072]) Tensor 'transformer_blocks.7.attn.to_add_out.weight' has different shapes: Model 1: torch.Size([2150, 3072]), Model 2: torch.Size([3072, 3072]) Tensor 'transformer_blocks.7.attn.to_k.bias' has different shapes: Model 1: torch.Size([2150]), Model 2: torch.Size([3072]) Tensor 'transformer_blocks.7.attn.to_k.weight' has different shapes: Model 1: torch.Size([2150, 3072]), Model 2: torch.Size([3072, 3072]) Tensor 'transformer_blocks.7.attn.to_out.0.bias' has different shapes: Model 1: torch.Size([2150]), Model 2: torch.Size([3072]) Tensor 'transformer_blocks.7.attn.to_out.0.weight' has different shapes: Model 1: torch.Size([2150, 3072]), Model 2: torch.Size([3072, 3072]) Tensor 'transformer_blocks.7.attn.to_q.bias' has different shapes: Model 1: torch.Size([2150]), Model 2: torch.Size([3072]) Tensor 'transformer_blocks.7.attn.to_q.weight' has different shapes: Model 1: torch.Size([2150, 3072]), Model 2: torch.Size([3072, 3072]) Tensor 'transformer_blocks.7.attn.to_v.bias' has different shapes: Model 1: torch.Size([2150]), Model 2: torch.Size([3072]) Tensor 'transformer_blocks.7.attn.to_v.weight' has different shapes: Model 1: torch.Size([2150, 3072]), Model 2: torch.Size([3072, 3072]) Tensor 'transformer_blocks.7.ff.net.0.proj.bias' has different shapes: Model 1: torch.Size([8601]), Model 2: torch.Size([12288]) Tensor 'transformer_blocks.7.ff.net.0.proj.weight' has different shapes: Model 1: 
torch.Size([8601, 3072]), Model 2: torch.Size([12288, 3072]) Tensor 'transformer_blocks.7.ff.net.2.bias' has different shapes: Model 1: torch.Size([2150]), Model 2: torch.Size([3072]) Tensor 'transformer_blocks.7.ff.net.2.weight' has different shapes: Model 1: torch.Size([2150, 12288]), Model 2: torch.Size([3072, 12288]) Tensor 'transformer_blocks.7.ff_context.net.0.proj.bias' has different shapes: Model 1: torch.Size([8601]), Model 2: torch.Size([12288]) Tensor 'transformer_blocks.7.ff_context.net.0.proj.weight' has different shapes: Model 1: torch.Size([8601, 3072]), Model 2: torch.Size([12288, 3072]) Tensor 'transformer_blocks.7.ff_context.net.2.bias' has different shapes: Model 1: torch.Size([2150]), Model 2: torch.Size([3072]) Tensor 'transformer_blocks.7.ff_context.net.2.weight' has different shapes: Model 1: torch.Size([2150, 12288]), Model 2: torch.Size([3072, 12288]) Tensor 'transformer_blocks.7.norm1.linear.bias' has different shapes: Model 1: torch.Size([12902]), Model 2: torch.Size([18432]) Tensor 'transformer_blocks.7.norm1.linear.weight' has different shapes: Model 1: torch.Size([12902, 3072]), Model 2: torch.Size([18432, 3072]) Tensor 'transformer_blocks.7.norm1_context.linear.bias' has different shapes: Model 1: torch.Size([12902]), Model 2: torch.Size([18432]) Tensor 'transformer_blocks.7.norm1_context.linear.weight' has different shapes: Model 1: torch.Size([12902, 3072]), Model 2: torch.Size([18432, 3072]) Tensor 'transformer_blocks.8.attn.add_k_proj.bias' has different shapes: Model 1: torch.Size([2150]), Model 2: torch.Size([3072]) Tensor 'transformer_blocks.8.attn.add_k_proj.weight' has different shapes: Model 1: torch.Size([2150, 3072]), Model 2: torch.Size([3072, 3072]) Tensor 'transformer_blocks.8.attn.add_q_proj.bias' has different shapes: Model 1: torch.Size([2150]), Model 2: torch.Size([3072]) Tensor 'transformer_blocks.8.attn.add_q_proj.weight' has different shapes: Model 1: torch.Size([2150, 3072]), Model 2: torch.Size([3072, 3072]) 
Tensor 'transformer_blocks.8.attn.add_v_proj.bias' has different shapes: Model 1: torch.Size([2150]), Model 2: torch.Size([3072]) Tensor 'transformer_blocks.8.attn.add_v_proj.weight' has different shapes: Model 1: torch.Size([2150, 3072]), Model 2: torch.Size([3072, 3072]) Tensor 'transformer_blocks.8.attn.norm_added_k.weight' has different shapes: Model 1: torch.Size([89]), Model 2: torch.Size([128]) Tensor 'transformer_blocks.8.attn.norm_added_q.weight' has different shapes: Model 1: torch.Size([89]), Model 2: torch.Size([128]) Tensor 'transformer_blocks.8.attn.norm_k.weight' has different shapes: Model 1: torch.Size([89]), Model 2: torch.Size([128]) Tensor 'transformer_blocks.8.attn.norm_q.weight' has different shapes: Model 1: torch.Size([89]), Model 2: torch.Size([128]) Tensor 'transformer_blocks.8.attn.to_add_out.bias' has different shapes: Model 1: torch.Size([2150]), Model 2: torch.Size([3072]) Tensor 'transformer_blocks.8.attn.to_add_out.weight' has different shapes: Model 1: torch.Size([2150, 3072]), Model 2: torch.Size([3072, 3072]) Tensor 'transformer_blocks.8.attn.to_k.bias' has different shapes: Model 1: torch.Size([2150]), Model 2: torch.Size([3072]) Tensor 'transformer_blocks.8.attn.to_k.weight' has different shapes: Model 1: torch.Size([2150, 3072]), Model 2: torch.Size([3072, 3072]) Tensor 'transformer_blocks.8.attn.to_out.0.bias' has different shapes: Model 1: torch.Size([2150]), Model 2: torch.Size([3072]) Tensor 'transformer_blocks.8.attn.to_out.0.weight' has different shapes: Model 1: torch.Size([2150, 3072]), Model 2: torch.Size([3072, 3072]) Tensor 'transformer_blocks.8.attn.to_q.bias' has different shapes: Model 1: torch.Size([2150]), Model 2: torch.Size([3072]) Tensor 'transformer_blocks.8.attn.to_q.weight' has different shapes: Model 1: torch.Size([2150, 3072]), Model 2: torch.Size([3072, 3072]) Tensor 'transformer_blocks.8.attn.to_v.bias' has different shapes: Model 1: torch.Size([2150]), Model 2: torch.Size([3072]) Tensor 
'transformer_blocks.8.attn.to_v.weight' has different shapes: Model 1: torch.Size([2150, 3072]), Model 2: torch.Size([3072, 3072]) Tensor 'transformer_blocks.8.ff.net.0.proj.bias' has different shapes: Model 1: torch.Size([8601]), Model 2: torch.Size([12288]) Tensor 'transformer_blocks.8.ff.net.0.proj.weight' has different shapes: Model 1: torch.Size([8601, 3072]), Model 2: torch.Size([12288, 3072]) Tensor 'transformer_blocks.8.ff.net.2.bias' has different shapes: Model 1: torch.Size([2150]), Model 2: torch.Size([3072]) Tensor 'transformer_blocks.8.ff.net.2.weight' has different shapes: Model 1: torch.Size([2150, 12288]), Model 2: torch.Size([3072, 12288]) Tensor 'transformer_blocks.8.ff_context.net.0.proj.bias' has different shapes: Model 1: torch.Size([8601]), Model 2: torch.Size([12288]) Tensor 'transformer_blocks.8.ff_context.net.0.proj.weight' has different shapes: Model 1: torch.Size([8601, 3072]), Model 2: torch.Size([12288, 3072]) Tensor 'transformer_blocks.8.ff_context.net.2.bias' has different shapes: Model 1: torch.Size([2150]), Model 2: torch.Size([3072]) Tensor 'transformer_blocks.8.ff_context.net.2.weight' has different shapes: Model 1: torch.Size([2150, 12288]), Model 2: torch.Size([3072, 12288]) Tensor 'transformer_blocks.8.norm1.linear.bias' has different shapes: Model 1: torch.Size([12902]), Model 2: torch.Size([18432]) Tensor 'transformer_blocks.8.norm1.linear.weight' has different shapes: Model 1: torch.Size([12902, 3072]), Model 2: torch.Size([18432, 3072]) Tensor 'transformer_blocks.8.norm1_context.linear.bias' has different shapes: Model 1: torch.Size([12902]), Model 2: torch.Size([18432]) Tensor 'transformer_blocks.8.norm1_context.linear.weight' has different shapes: Model 1: torch.Size([12902, 3072]), Model 2: torch.Size([18432, 3072]) Tensor 'transformer_blocks.9.attn.add_k_proj.bias' has different shapes: Model 1: torch.Size([2150]), Model 2: torch.Size([3072]) Tensor 'transformer_blocks.9.attn.add_k_proj.weight' has different shapes: 
Model 1: torch.Size([2150, 3072]), Model 2: torch.Size([3072, 3072]) Tensor 'transformer_blocks.9.attn.add_q_proj.bias' has different shapes: Model 1: torch.Size([2150]), Model 2: torch.Size([3072]) Tensor 'transformer_blocks.9.attn.add_q_proj.weight' has different shapes: Model 1: torch.Size([2150, 3072]), Model 2: torch.Size([3072, 3072]) Tensor 'transformer_blocks.9.attn.add_v_proj.bias' has different shapes: Model 1: torch.Size([2150]), Model 2: torch.Size([3072]) Tensor 'transformer_blocks.9.attn.add_v_proj.weight' has different shapes: Model 1: torch.Size([2150, 3072]), Model 2: torch.Size([3072, 3072]) Tensor 'transformer_blocks.9.attn.norm_added_k.weight' has different shapes: Model 1: torch.Size([89]), Model 2: torch.Size([128]) Tensor 'transformer_blocks.9.attn.norm_added_q.weight' has different shapes: Model 1: torch.Size([89]), Model 2: torch.Size([128]) Tensor 'transformer_blocks.9.attn.norm_k.weight' has different shapes: Model 1: torch.Size([89]), Model 2: torch.Size([128]) Tensor 'transformer_blocks.9.attn.norm_q.weight' has different shapes: Model 1: torch.Size([89]), Model 2: torch.Size([128]) Tensor 'transformer_blocks.9.attn.to_add_out.bias' has different shapes: Model 1: torch.Size([2150]), Model 2: torch.Size([3072]) Tensor 'transformer_blocks.9.attn.to_add_out.weight' has different shapes: Model 1: torch.Size([2150, 3072]), Model 2: torch.Size([3072, 3072]) Tensor 'transformer_blocks.9.attn.to_k.bias' has different shapes: Model 1: torch.Size([2150]), Model 2: torch.Size([3072]) Tensor 'transformer_blocks.9.attn.to_k.weight' has different shapes: Model 1: torch.Size([2150, 3072]), Model 2: torch.Size([3072, 3072]) Tensor 'transformer_blocks.9.attn.to_out.0.bias' has different shapes: Model 1: torch.Size([2150]), Model 2: torch.Size([3072]) Tensor 'transformer_blocks.9.attn.to_out.0.weight' has different shapes: Model 1: torch.Size([2150, 3072]), Model 2: torch.Size([3072, 3072]) Tensor 'transformer_blocks.9.attn.to_q.bias' has different 
shapes: Model 1: torch.Size([2150]), Model 2: torch.Size([3072]) Tensor 'transformer_blocks.9.attn.to_q.weight' has different shapes: Model 1: torch.Size([2150, 3072]), Model 2: torch.Size([3072, 3072]) Tensor 'transformer_blocks.9.attn.to_v.bias' has different shapes: Model 1: torch.Size([2150]), Model 2: torch.Size([3072]) Tensor 'transformer_blocks.9.attn.to_v.weight' has different shapes: Model 1: torch.Size([2150, 3072]), Model 2: torch.Size([3072, 3072]) Tensor 'transformer_blocks.9.ff.net.0.proj.bias' has different shapes: Model 1: torch.Size([8601]), Model 2: torch.Size([12288]) Tensor 'transformer_blocks.9.ff.net.0.proj.weight' has different shapes: Model 1: torch.Size([8601, 3072]), Model 2: torch.Size([12288, 3072]) Tensor 'transformer_blocks.9.ff.net.2.bias' has different shapes: Model 1: torch.Size([2150]), Model 2: torch.Size([3072]) Tensor 'transformer_blocks.9.ff.net.2.weight' has different shapes: Model 1: torch.Size([2150, 12288]), Model 2: torch.Size([3072, 12288]) Tensor 'transformer_blocks.9.ff_context.net.0.proj.bias' has different shapes: Model 1: torch.Size([8601]), Model 2: torch.Size([12288]) Tensor 'transformer_blocks.9.ff_context.net.0.proj.weight' has different shapes: Model 1: torch.Size([8601, 3072]), Model 2: torch.Size([12288, 3072]) Tensor 'transformer_blocks.9.ff_context.net.2.bias' has different shapes: Model 1: torch.Size([2150]), Model 2: torch.Size([3072]) Tensor 'transformer_blocks.9.ff_context.net.2.weight' has different shapes: Model 1: torch.Size([2150, 12288]), Model 2: torch.Size([3072, 12288]) Tensor 'transformer_blocks.9.norm1.linear.bias' has different shapes: Model 1: torch.Size([12902]), Model 2: torch.Size([18432]) Tensor 'transformer_blocks.9.norm1.linear.weight' has different shapes: Model 1: torch.Size([12902, 3072]), Model 2: torch.Size([18432, 3072]) Tensor 'transformer_blocks.9.norm1_context.linear.bias' has different shapes: Model 1: torch.Size([12902]), Model 2: torch.Size([18432]) Tensor 
'transformer_blocks.9.norm1_context.linear.weight' has different shapes: Model 1: torch.Size([12902, 3072]), Model 2: torch.Size([18432, 3072]) Tensor 'x_embedder.bias' has different shapes: Model 1: torch.Size([2150]), Model 2: torch.Size([3072]) Tensor 'x_embedder.weight' has different shapes: Model 1: torch.Size([2150, 64]), Model 2: torch.Size([3072, 64])
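A notable regularity in the dump: every mismatched leading dimension in Model 1 is roughly 0.7× the corresponding dimension in Model 2 (2150/3072, 89/128, 8601/12288, 12902/18432), which is consistent with Model 1 being a width-reduced variant rather than a corrupted file. A minimal sketch of how such a shape diff can be produced and the ratio checked; `diff_shapes` is a hypothetical helper (not from the log), and the sample entries below are taken from the dump above:

```python
# Sketch of a state-dict shape comparison like the one in the log.
# diff_shapes is an assumed helper, not part of any library.

def diff_shapes(shapes1, shapes2):
    """Return (name, shape1, shape2) for every shared tensor whose shapes differ."""
    mismatches = []
    for name in sorted(set(shapes1) & set(shapes2)):
        if shapes1[name] != shapes2[name]:
            mismatches.append((name, shapes1[name], shapes2[name]))
    return mismatches

# A few representative entries copied from the dump, keyed by tensor name.
model1 = {
    "x_embedder.weight": (2150, 64),
    "transformer_blocks.2.ff.net.0.proj.weight": (8601, 3072),
    "transformer_blocks.2.attn.norm_q.weight": (89,),
}
model2 = {
    "x_embedder.weight": (3072, 64),
    "transformer_blocks.2.ff.net.0.proj.weight": (12288, 3072),
    "transformer_blocks.2.attn.norm_q.weight": (128,),
}

for name, s1, s2 in diff_shapes(model1, model2):
    # Leading-dim ratio: each mismatch in the dump comes out near 0.70.
    print(f"{name}: {s1} vs {s2} (ratio {s1[0] / s2[0]:.2f})")
```

With real checkpoints, the two dicts would be built from the loaded weights (e.g. `{k: tuple(v.shape) for k, v in sd.items()}` over each state dict) instead of being written out by hand.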