Title of Competition | Title of Writeup | User | Writeup
---|---|---|---|
Google - American Sign Language Fingerspelling Recognition | 122nd Solution for Google - American Sign Language Fingerspelling Recognition Competition | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google - American Sign Language Fingerspelling Recognition
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>We did this in our freshman summer, definitely learned a lot.</p>
<p><strong>Context</strong></p>
<ul>
<li>Business context:<a href="https://www.kaggle.com/competitions/asl-fingerspelling/overview" target="_blank">https://www.kaggle.com/competitions/asl-fingerspelling/overview</a></li>
<li>Data context:<a href="https://www.kaggle.com/competitions/asl-fingerspelling/data" target="_blank">https://www.kaggle.com/competitions/asl-fingerspelling/data</a></li>
</ul>
<p><strong>Overview of the Approach</strong><br>
Our work consists of improvements on top of the <a href="https://www.kaggle.com/code/royalacecat/the-deeper-the-better" target="_blank">public notebook baseline</a> with an LB score of 0.699, which we improved by +0.007.<br>
It combines a Transformer and a 1D-CNN.<br>
We mainly improved it by doing some feature engineering on the frames.</p>
<p><strong>Details on the Submission</strong><br>
Originally our work was based on another public notebook baseline that only uses the Transformer, and we improved it by rescheduling the learning rate, specifically by increasing it near the end of training, but this didn't help our final notebook.<br>
We also processed the supplemental data, but it didn't bring any improvement (maybe we did it the wrong way).<br>
Our data augmentation includes techniques like random cropping, rotation, affine transformations, etc.</p>
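<p>As an illustration of the learning-rate rescheduling idea mentioned above, a minimal Keras callback that raises the learning rate again for the final epochs could look like the sketch below; all schedule values and epoch counts are assumptions, not our actual settings.</p>
<pre><code>import math
import tensorflow as tf

BASE_LR, TOTAL_EPOCHS, BOOST_LR, BOOST_EPOCHS = 1e-3, 60, 3e-4, 5  # illustrative values

def lr_schedule(epoch, lr):
    # Cosine decay for most of training, then raise the LR again for the final epochs.
    if epoch >= TOTAL_EPOCHS - BOOST_EPOCHS:
        return BOOST_LR
    progress = epoch / max(1, TOTAL_EPOCHS - BOOST_EPOCHS)
    return BASE_LR * 0.5 * (1.0 + math.cos(math.pi * progress))

lr_callback = tf.keras.callbacks.LearningRateScheduler(lr_schedule, verbose=1)
# model.fit(..., callbacks=[lr_callback])
</code></pre>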
<p><strong>Code Samples for Data Augmentation</strong></p>
<pre><code>@tf.function()
def :
tf.[0] < FRAME_LEN:
x = tf.pad(x, ([[, FRAME_LEN-tf.shape(x)[]], [, ], [, ]]), constant_values=())
:
x = tf.image.resize(x, (FRAME_LEN, tf.shape(x)[]))
x
@tf.function()
def resize_pad_channel(x):
tf.shape(x)[] < FRAME_LEN:
x = tf.pad(x, ([[, FRAME_LEN-tf.shape(x)[]], [, ], [, ]]), constant_values=())
:
x = tf.image.resize(x, (FRAME_LEN, tf.shape(x)[]))
x
def interp1d_(x, target_len, method=):
target_len = tf.maximum(,target_len)
method == :
tf.random.uniform(()) < :
x = tf.image.resize(x, (target_len,),)
:
tf.random.uniform(()) < :
x = tf.image.resize(x, (target_len,),)
:
x = tf.image.resize(x, (target_len,),)
:
x = tf.image.resize(x, (target_len,),method)
resize_pad(x)
def flip_lr(x):
dx, dy, dz = tf.unstack(x, axis=)
dx = -dx
new_x = tf.([dx,dy,dz], )
new_x
def resample(x, rate=(,)):
rate = tf.random.uniform((), rate[], rate[])
length = tf.shape(x)[]
new_size = tf.cast(rate*tf.cast(length,tf.float32), tf.int32)
new_x = tf.image.resize(x, (new_size, ))
resize_pad_channel(new_x)
def spatial_random_affine(xyz,
scale = (,),
shear = (,),
shift = (,),
degree = (,),
):
center = tf.constant([,])
scale is not None:
scale = tf.random.uniform((),*scale)
xyz = scale*xyz
shear is not None:
xy = xyz[...,:]
z = xyz[...,:]
shear_x = shear_y = tf.random.uniform((),*shear)
tf.random.uniform(()) < :
shear_x =
:
shear_y =
shear_mat = tf.identity([
[,shear_x],
[shear_y,]
])
xy = xy @ shear_mat
center = center + [shear_y, shear_x]
xyz = tf.concat([xy,z], axis=)
degree is not None:
xy = xyz[...,:]
z = xyz[...,:]
xy -= center
degree = tf.random.uniform((),*degree)
radian = degree/*np.pi
c = tf.math.(radian)
s = tf.math.(radian)
rotate_mat = tf.identity([
[c,s],
[-s, c],
])
xy = xy @ rotate_mat
xy = xy + center
xyz = tf.concat([xy,z], axis=)
shift is not None:
shift = tf.random.uniform((),*shift)
xyz = xyz + shift
xyz
def temporal_crop(x, length=FRAME_LEN):
l = tf.shape(x)[]
offset = tf.random.uniform((), , tf.clip_by_value(l-length,,length), dtype=tf.int32)
x = x[offset:offset+length]
x
def temporal_mask0(x, size=(,), mask_value=()):
l = tf.shape(x)[]
mask_size = tf.random.uniform((), *size)
mask_size = tf.cast(tf.cast(l, tf.float32) * mask_size, tf.int32)
mask_offset = tf.random.uniform((), , tf.clip_by_value(l-mask_size,,l), dtype=tf.int32)
x = tf.tensor_scatter_nd_update(x,tf.range(mask_offset, mask_offset+mask_size)[...,None],tf.fill([mask_size,,],mask_value))
x
def temporal_mask(x, rate=, mask_value=()):
mask_size=(FRAME_LEN*rate)
mask = tf.squeeze(tf.random.categorical(np.mat([/FRAME_LEN i in range(FRAME_LEN)]),mask_size))
x = tf.tensor_scatter_nd_update(x,mask[...,None],tf.fill([mask_size,,],mask_value))
x
def spatial_mask(x, size=(,), mask_value=()):
mask_offset_y = tf.random.uniform(())
mask_offset_x = tf.random.uniform(())
mask_size = tf.random.uniform((), *size)
mask_x = (mask_offset_x<x[...,]) & (x[...,] < mask_offset_x + mask_size)
mask_y = (mask_offset_y<x[...,]) & (x[...,] < mask_offset_y + mask_size)
mask = mask_x & mask_y
x = tf.where(mask[...,None], mask_value, x)
x
def augment_fn(x, always=False):
tf.random.uniform(())< or always:
x = resample(x, (,))
tf.random.uniform(())< or always:
x = flip_lr(x)
tf.random.uniform(())< or always:
x = spatial_random_affine(x)
tf.random.uniform(())< or always:
x = temporal_mask(x)
tf.random.uniform(())< or always:
x = spatial_mask(x)
x
@tf.function(jit_compile=True)
def pre_process0(x):
lip_x = tf.gather(x, LIP_IDX_X, axis=)
lip_y = tf.gather(x, LIP_IDX_Y, axis=)
lip_z = tf.gather(x, LIP_IDX_Z, axis=)
rhand_x = tf.gather(x, RHAND_IDX_X, axis=)
rhand_y = tf.gather(x, RHAND_IDX_Y, axis=)
rhand_z = tf.gather(x, RHAND_IDX_Z, axis=)
lhand_x = tf.gather(x, LHAND_IDX_X, axis=)
lhand_y = tf.gather(x, LHAND_IDX_Y, axis=)
lhand_z = tf.gather(x, LHAND_IDX_Z, axis=)
rpose_x = tf.gather(x, RPOSE_IDX_X, axis=)
rpose_y = tf.gather(x, RPOSE_IDX_Y, axis=)
rpose_z = tf.gather(x, RPOSE_IDX_Z, axis=)
lpose_x = tf.gather(x, LPOSE_IDX_X, axis=)
lpose_y = tf.gather(x, LPOSE_IDX_Y, axis=)
lpose_z = tf.gather(x, LPOSE_IDX_Z, axis=)
lip = tf.concat([lip_x[..., tf.newaxis], lip_y[..., tf.newaxis], lip_z[..., tf.newaxis]], axis=)
rhand = tf.concat([rhand_x[..., tf.newaxis], rhand_y[..., tf.newaxis], rhand_z[..., tf.newaxis]], axis=)
lhand = tf.concat([lhand_x[..., tf.newaxis], lhand_y[..., tf.newaxis], lhand_z[..., tf.newaxis]], axis=)
rpose = tf.concat([rpose_x[..., tf.newaxis], rpose_y[..., tf.newaxis], rpose_z[..., tf.newaxis]], axis=)
lpose = tf.concat([lpose_x[..., tf.newaxis], lpose_y[..., tf.newaxis], lpose_z[..., tf.newaxis]], axis=)
hand = tf.concat([rhand, lhand], axis=)
hand = tf.where(tf.math.is_nan(hand), , hand)
mask = tf.math.not_equal(tf.reduce_sum(hand, axis=[, ]), )
lip = lip[mask]
rhand = rhand[mask]
lhand = lhand[mask]
rpose = rpose[mask]
lpose = lpose[mask]
lip, rhand,lhand, rpose, lpose
@tf.function()
def pre_process1(lip, rhand,lhand, rpose, lpose, augment = False):
lip = (resize_pad(lip) - LIPM) / LIPS
rhand = (resize_pad(rhand) - RHM) / RHS
lhand = (resize_pad(lhand) - LHM) / LHS
rpose = (resize_pad(rpose) - RPM) / RPS
lpose = (resize_pad(lpose) - LPM) / LPS
x = tf.concat([lip, rhand, lhand, rpose, lpose], axis=)
x = tf.where(tf.math.is_nan(x), , x)
augment:
x = augment_fn(x)
s = tf.shape(x)
x = tf.reshape(x, (s[], s[]*s[]))
x
pre0 = pre_process0(frames)
pre1 = pre_process1(*pre0)
INPUT_SHAPE = (pre1.shape)
print(INPUT_SHAPE)
</code></pre>
<p><strong>Some Useful References</strong><br>
<a href="https://www.kaggle.com/code/royalacecat/the-deeper-the-better" target="_blank">https://www.kaggle.com/code/royalacecat/the-deeper-the-better</a><br>
<a href="https://www.kaggle.com/code/hebasaleh00/aslfr-eda-preprocessing" target="_blank">https://www.kaggle.com/code/hebasaleh00/aslfr-eda-preprocessing</a><br>
<a href="https://www.kaggle.com/competitions/asl-fingerspelling/discussion/409438" target="_blank">https://www.kaggle.com/competitions/asl-fingerspelling/discussion/409438</a></p> |
Google - American Sign Language Fingerspelling Recognition | [1st place solution] Improved Squeezeformer + TransformerDecoder + Clever augmentations | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google - American Sign Language Fingerspelling Recognition
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to kaggle and everyone involved for hosting such an interesting competition. It was a great extension to the isolated sign language classification and it was very interesting to see how much of speech-to-text research could also be applied to sign language fingerspelling. As always it was a great teaming experience with <a href="https://www.kaggle.com/darraghdog" target="_blank">@darraghdog</a> </p>
<h2>TLDR</h2>
<p>Our solution is based on a single encoder-decoder architecture. The encoder is a significantly improved version of Squeezeformer, where the feature extraction was adapted to handle mediapipe landmarks instead of speech signals. The decoder is a simple 2-layer transformer. We additionally predicted a confidence score to identify corrupted examples which can be useful for post-processing. We also introduced efficient and creative augmentations to regularize the model, where the most important ones were CutMix, FingerDropout and TimeStretch, DecoderInput Masking. We used pytorch for developing and training our models and then manually translated model architecture and ported weights to tensorflow from which we exported to tf-lite.</p>
<h2>Cross validation</h2>
<p>We split the training data into 4 folds by signer. In the beginning we had nearly perfect correlation between CV and public LB with this approach. With higher scores improvements on CV reflected a bit less on LB, mostly due to the fact that the LB score was always decently higher and hence saturated earlier. Most of the time we only trained and tracked the score of fold0 and not all folds. </p>
<h2>Data preprocessing</h2>
<p>In total 130 key points were used. These consisted of 21 key points from each hand, 6 pose key points from each arm, and the remaining 76 from the face (lips, nose, eyes). Locally the 130 key points were cached to .npy files for fast data loading. <br>
Prior to data augmentations, the data was normalized with std/mean and nans were zero filled. </p>
<h2>Augmentations</h2>
<p>Augmentations were essential to prevent overfitting, generalize to new signers and enable deep models. We used augmentations which were popular in the first ASL competition but also came up with a lot of new creative augmentations and some have proven to be very effective. </p>
<ul>
<li>Resizing along time axis.</li>
<li>Shift the sequence along the time axis. </li>
<li>Windowed resizing along time axis (similar to warping).</li>
<li>Left-right flip of keypoints. </li>
<li>Cutmix of samples timewise - draw a random percentage between [0,1] and cut 2 sequences and related phrases at that percentage and mix. Mixing only within same signer was best</li>
<li>Spatial affine - scale, shear, shift and rotate. </li>
<li>Drop/Zero-fill between 2 to 6 different fingers over 2 to 3 time windows. </li>
<li>Drop/Zero-fill either all face landmarks or all pose landmarks. </li>
<li>In rare cases (~5% of samples) drop/zero-fill all hand landmarks. </li>
<li>Temporal masking (zero-fill) in windows of different sizes or counts. </li>
<li>Spatial masking </li>
</ul>
<p>Most augmentations were applied to 50% of the samples, except for resizing and spatial affine which were applied to ~80% of samples. </p>
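<p>To make the time-wise CutMix bullet above concrete, here is a simplified sketch; array shapes, names, and the exact mixing of the phrases are assumptions rather than the team's actual implementation:</p>
<pre><code>import numpy as np

def cutmix_timewise(seq_a, phrase_a, seq_b, phrase_b):
    # seq_*: (frames, landmarks, 3) keypoint arrays, ideally from the same signer.
    p = np.random.uniform(0.0, 1.0)
    cut_a = int(len(seq_a) * p)
    cut_b = int(len(seq_b) * p)
    new_seq = np.concatenate([seq_a[:cut_a], seq_b[cut_b:]], axis=0)
    # Cut the corresponding phrases at (roughly) the same fraction and join them.
    new_phrase = phrase_a[: int(len(phrase_a) * p)] + phrase_b[int(len(phrase_b) * p):]
    return new_seq, new_phrase
</code></pre>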
<p>After augmentation, samples with more than 384 frames were resized along time axis, with channel-wise linear interpolation. Samples of less than 384 were padded to 384 for training only. tf-lite ran on variable length samples. <br>
No frames were dropped in preprocessing. </p>
<h2>Model</h2>
<p>In general, we observed that a deeper model gives significant gains (if we are able to prevent overfitting). As a consequence not only regularization techniques like augmentations are essential, but also every improvement in computational efficiency creates space to use deeper models and hence is equally important as the model architecture itself.</p>
<p>Our model consists of 3 parts, Feature Extraction, Encoder, Decoder which are shown in the image below</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1424766%2Fdc7c0fc365f54fb4d35f03da1748ed3b%2FScreenshot%202023-08-25%20at%2012.47.53.png?generation=1692960500706760&alt=media" alt=""></p>
<p>We interpret the data like a 3 channel image, where width is defined by the number of frames, height is given by the number of the selected 130 landmarks and channels are given by raw xyz coordinates. <br>
The feature extraction is based on a 2D convolution followed by batchnorm and a linear layer on the flattened features to extract features per frame. We have 5 of this feature extraction modules, one for all landmarks at once and one per landmark type (left_hand, right_hand, face, pose). The all-landmark module outputs 208 dim vector/ frame. The other 4 output 52 dim vectors each which are then concatenated to have also 208 dims. We had those two 208-dim vectors per frame and get a (batch_size x 384 x 208) input for our encoder where 384 is the maximum sequence length we chose.</p>
<p>The main component of our model is an encoder which was adapted from the Squeezeformer architecture. We did not use the actual “squeeze” idea, i.e. a temporal Unet, but used the general architecture of Squeezeformer Blocks which consist of a combination of MultiHeadSelfAttention (MHSA), Convolution and FeedForward modules. We made several improvements to this architecture:<br>
ASR conformers (and Squeezeformer) use relative positional encoding which allow the self-attention module to generalize better on different input lengths. Relative positional encoding is performance intensive, as well as using many parameters, as they are stored separately in each layer. Replacing this with Llama attention which uses rotary embeddings sped up training ~2X and tf-lite inference approx ~3X allowing larger models to be used. In addition, we cached the rotary embeddings once and fed them into each layer with the input data so they are not duplicated in each layer. This resulted in 20% less parameters in the model. We saw no benefit in using time reduction which was introduced with Squeezeformer. So in our model all layers had the same sequence length as the original input. As suggested in the Squeezeformer paper, the pre-Layer Norm from the Macaron structure is redundant and was replaced with a learnable scaling layer which scales and shifts the activations. </p>
<p>For decoding we used a simple 2-layer transformer decoder similar to Hugging Face's <a href="https://github.com/huggingface/transformers/blob/main/src/transformers/models/speech_to_text/modeling_speech_to_text.py#L857" target="_blank">Speech2TextDecoder</a>, which outputs a sequence prediction. We then used cross entropy loss for training our model end-to-end. An extra cross entropy auxiliary loss of the reversed sequence was used. A causal decoder mainly uses encoder cross attentions for the sequence's beginning and previous characters and cross attentions for the end. To improve the model's accuracy, we use a separate causal decoder on the reversed sequence as an auxiliary loss, making the model rely more on the encoder cross attention for the label's end. For decoder inference, early stopping and past key-value caching were used, which sped up inference significantly. It should be noted that we found a transformer-based decoder superior to CTC-based decoding, even in a setting where computational efficiency matters a lot. </p>
<p>Additionally to the decoder, we also added a single linear layer to take the features of the first token of the encoder output to predict a confidence score, which helps to identify garbage data and can be used in post-processing. As a target for this we used the normalized Levenshtein distance, clipped to [0,1], of OOF predictions of a decent previous model.</p>
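<p>A minimal sketch of how such a confidence target could be built from out-of-fold predictions; this assumes a standard Levenshtein implementation (here the <code>python-Levenshtein</code> package) and is an illustration, not the team's exact code:</p>
<pre><code>import Levenshtein  # pip install python-Levenshtein

def confidence_target(predicted_phrase, true_phrase):
    # Normalized Levenshtein distance, clipped to [0, 1]; 0 means a perfect OOF prediction.
    # Depending on convention, the "confidence" may instead be defined as 1 minus this value.
    dist = Levenshtein.distance(predicted_phrase, true_phrase)
    norm = dist / max(1, len(true_phrase))
    return min(max(norm, 0.0), 1.0)

# targets = [confidence_target(p, t) for p, t in zip(oof_predictions, labels)]
# the model's extra linear head is then trained to regress these targets
</code></pre>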
<p>We explored and trained all our models in pytorch, but whenever we deemed it good enough for a submission we translated each component manually to tensorflow and ported weights from our pytorch models. </p>
<h2>Training procedure</h2>
<p>Models were trained with a cosine learning rate schedule for 400 epochs with peak LR of 0.0045, weight decay of 0.08, 10 epochs warmup, mixed precision and an effective batch size of 512 samples. Dropout of 0.1 was used in the transformer encoder/decoder layers. It was important to train with mixed precision in order to leverage fp16 inference without performance drop. Training with fp32 and using tf-lite fp16 inference causes a drop of ~0.01 in CV vs LB. </p>
<p>tf-lite inference used a single sample without padding, while model training was performed with time padded mini-batches. To avoid the model learning with pads, and ensure optimal inference runtime, the feature extractor and Macaron structure encoder layers were masked time wise during training. This needed to be manually implemented in pytorch on each layer. This took some effort, but paid off by significantly speeding up inference. As <a href="https://www.kaggle.com/competitions/asl-signs/discussion/406684" target="_blank">1st place team</a> in the previous ISL competition explained, this is much easier to do in tensorflow as keras has an off the shelf masking layer. </p>
<p>Model training and tf-lite inference ran in fp16, so tf-lite files consumed almost half the disk size of fp32. This was important, as the 40MB size limit was our limitation in the end. The two final model seeds measured 39988kb. </p>
<h2>Postprocessing</h2>
<p>The main idea of our postprocessing is to replace poor predictions with a dummy phrase which has a small levensthein distance to the train/ test data. <a href="https://www.kaggle.com/anokas" target="_blank">@anokas</a> showed in his <a href="https://www.kaggle.com/code/anokas/static-greedy-baseline-0-157-lb" target="_blank">notebook</a> why '2 a-e -aroe' is a good candidate for that. Most of the poor prediction resulted from corrupted input data often only a few frames long. We used a confidence score predicted by our model as basis. Whenever the confidence score is below 0.15 or the sequence is shorter than 15 frames we replace the prediction with '2 a-e -aroe'.</p>
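<p>The replacement rule described above is simple enough to state in a few lines of code; the thresholds below are the ones quoted in the text, and the function signature is an illustrative sketch:</p>
<pre><code>DUMMY_PHRASE = '2 a-e -aroe'  # phrase with a small average Levenshtein distance to the data

def postprocess(prediction, confidence, n_frames,
                conf_threshold=0.15, min_frames=15):
    # Replace likely-garbage predictions (low confidence or very short inputs) with the dummy phrase.
    if confidence < conf_threshold or n_frames < min_frames:
        return DUMMY_PHRASE
    return prediction
</code></pre>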
<h2>Supplemental Data</h2>
<p>We only marginally profited from using the supplemental data. We think the main reason is that although there are 50k samples in this supplemental data there are only 500 unique phrases, and hence the model rather learns to classify than to actually decode character-by-character. We tried a lot of approaches but only the following one gave a small boost (0.838 -> 0.839): <br>
First we group the supplemental data by phrase, which only leaves us with 500 groups. In each epoch of training we add one sample per group to the training dataset for our model. That means in each epoch we use 50k samples of the training data and only 500 samples of the supplemental data. </p>
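<p>In pseudo-code form, the per-epoch sampling described above could look like this sketch (the <code>supplemental_by_phrase</code> structure and function names are assumptions):</p>
<pre><code>import random

# supplemental_by_phrase: dict mapping each of the ~500 unique phrases
# to the list of its supplemental samples.
def build_epoch_dataset(train_samples, supplemental_by_phrase):
    epoch_samples = list(train_samples)              # all ~50k training samples
    for phrase, group in supplemental_by_phrase.items():
        epoch_samples.append(random.choice(group))   # one supplemental sample per phrase group
    random.shuffle(epoch_samples)
    return epoch_samples
</code></pre>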
<h2>Ensembling</h2>
<p>Our final submission is a 2-seed ensemble of our model trained on the complete training data (fullfit). We average resulting logits in each decoding step for ensembling.</p>
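<p>A simplified sketch of step-wise logit averaging during greedy decoding; the decoder call signature and tensor shapes here are assumptions, not the team's actual API:</p>
<pre><code>import torch

@torch.no_grad()
def ensemble_greedy_decode(models, encoder_outs, bos_id, eos_id, max_len=40):
    # Average the per-step logits of all (here: two) models, then take the argmax.
    tokens = [bos_id]
    for _ in range(max_len):
        step_logits = []
        for model, enc in zip(models, encoder_outs):
            logits = model.decode_step(torch.tensor([tokens]), enc)  # assumed helper: (1, len, vocab)
            step_logits.append(logits[0, -1])
        avg = torch.stack(step_logits).mean(dim=0)
        next_token = int(avg.argmax())
        if next_token == eos_id:
            break
        tokens.append(next_token)
    return tokens[1:]
</code></pre>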
<h2>What did not help</h2>
<ul>
<li>Fully using supplemental data</li>
<li>Using edit distance as loss (tried different approaches)</li>
<li>CTC loss (even as an auxiliary loss it hurt score)</li>
<li>Label smoothing</li>
<li>AWP - kept getting nans with FP16</li>
<li>TTA (flip/stretch)</li>
<li>Mixup of hidden layers & Specaugment++</li>
<li>Beam search decoding (too costly)</li>
</ul>
<h2>Ablation study (roughly)</h2>
<h4>Augmentations</h4>
<ul>
<li>Cutmix +0.005</li>
<li>FingerDropout +0.005</li>
<li>Face/PoseDropout +0.005</li>
<li>masking decoder inputs +0.003</li>
</ul>
<h4>Model improvements</h4>
<ul>
<li>CNN Feature extraction +0.005</li>
<li>2-branch Feature extraction with indiv norm +0.003</li>
<li>Squeezeformer over the 1st-place net from round 1 +0.005</li>
<li>Decoder over CTC +0.003</li>
<li>Confidence over simple rules for post-processing +0.002</li>
</ul>
<h4>Efficiency Improvements:</h4>
<ul>
<li>Deeper model due to fp16 +0.003</li>
<li>Deeper model due to llama attention +0.003</li>
<li>Deeper model due to masking/ variable sequence len +0.005</li>
<li>Deeper model due to caching/ early stopping in decoder +0.005</li>
</ul>
<h4>Postprocessing</h4>
<ul>
<li>Replace bad predictions with dummy phrase +0.006</li>
</ul>
<h2>Used tools/ repos</h2>
<ul>
<li>Pytorch/Tensorflow/Tf-lite (no onnx this time)</li>
<li>Huggingface</li>
<li>Albumentations (adapted their framework for using things like OneOf or Compose, but wrote our own augmentation implementations)</li>
<li>Neptune.ai was our MLOps stack to track, compare and share models. Below are example training runs using different model parameters and hardware (a100 card vs kaggle kernel). The data loading in the kaggle kernel below was slow and could probably be sped up with some work. </li>
</ul>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1424766%2F4693e3379ea6a1631bb3cb7b9044d42a%2FScreenshot%202023-08-25%20at%2012.58.07.png?generation=1692961107306495&alt=media" alt=""></p>
<h2>Code & model weights</h2>
<p><a href="https://github.com/ChristofHenkel/kaggle-asl-fingerspelling-1st-place-solution" target="_blank">https://github.com/ChristofHenkel/kaggle-asl-fingerspelling-1st-place-solution</a></p>
<h2>Paper</h2>
<p>tbd. Due to the novelty of our approach we are thinking about summarizing it in a paper.</p>
<p><strong>Thank you for reading, questions welcome</strong></p> |
Google Research - Identify Contrails to Reduce Global Warming | 102nd place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Research - Identify Contrails to Reduce Global Warming
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, I want to thank the organizers who hosted such a great competition and kagglers who generously shared their knowledge. <br>
I did not intend to write up my solution because my rank is not fascinating and I'm busy with the entrance exam for graduate school, but I couldn't resist the temptation of a duck T-shirt…</p>
<h1>Model</h1>
<p>I created two models: TwoStageModel and TwoStageModelMultiChannel.</p>
<pre><code>config.seg_model =
(nn.Module):
():
().__init__()
self.config = config
self.model = seg_models[config.seg_model](
encoder_name=config.encoder_name,
encoder_weights=,
in_channels=,
classes=,
activation=,
)
self.model2= seg_models[config.seg_model](
encoder_name=,
encoder_weights=,
in_channels=,
classes=,
activation=,
)
self.conv2d = nn.Conv2d(, , config.kernel_size, padding=config.padding)
():
out = self.model(x)
out2 = torch.cat([out, x], dim=)
out2 = self.conv2d(out2)
out2 = self.model2(out2)
out2, out
(nn.Module):
():
().__init__()
self.config = config
self.model = seg_models[config.seg_model](
encoder_name=config.encoder_name,
encoder_weights=,
in_channels=,
classes=,
activation=,
)
self.model2= seg_models[config.seg_model](
encoder_name=,
encoder_weights=,
in_channels=,
classes=,
activation=,
)
self.conv2d1 = nn.Conv2d(, , config.kernel_size, padding=config.padding)
self.conv2d2 = nn.Conv2d(, , config.kernel_size, padding=config.padding)
self.conv2d3 = nn.Conv2d(+, , config.kernel_size, padding=config.padding)
():
out = self.conv2d1(x)
out2 = self.conv2d2(out)
out3 = self.model(out2)
out2 = torch.cat([out3, out, x], dim=)
out2 = self.conv2d3(out2)
out2 = self.model2(out2)
out2, out3
</code></pre>
<h1>Encoder</h1>
<p>I used 'timm-resnest26d', 'efficientnet-b6', 'vgg19_bn', and so on. Larger models gave me better performance.</p>
<h1>Data</h1>
<p>I used false-color images for TwoStageModel and all 9 bands for TwoStageModelMultiChannel.</p>
<h1>Augmentation</h1>
<p>HorizontalFlip(p=0.1),<br>
VerticalFlip(p=0.1),<br>
RandomRotate90(p=0.1),<br>
ShiftScaleRotate(shift_limit=0.0625/2, scale_limit=0.2/2, rotate_limit=15, p=0.1, border_mode=cv2.BORDER_REFLECT)</p>
<h1>TTA</h1>
<p>As mentioned in others' write-ups, flip/rotation TTA severely hurt CV scores. I felt something was wrong but thought it was because of <a href="https://www.kaggle.com/competitions/google-research-identify-contrails-reduce-global-warming/discussion/419994" target="_blank">data leakage</a> and ignored it.<br>
Instead of flip/rotation TTA, I applied a 1-pixel slide TTA and calculated the max value of original predictions and TTA predictions. 1-pixel right-down slide and left-up slide were effective and gave me about a 0.01 score boost, but 1-pixel right-up slide and left-down slide didn't work. I should have given more thought to why this happens…</p>
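<p>A rough sketch of this shift-TTA idea, assuming predictions come back as per-pixel probability maps; this is an illustration, not the exact notebook code:</p>
<pre><code>import numpy as np

def shift_tta_max(predict_fn, image):
    # predict_fn: callable returning an (H, W) probability map for one image.
    base = predict_fn(image)
    preds = [base]
    for dy, dx in [(1, 1), (-1, -1)]:             # right-down and left-up 1-pixel slides
        shifted = np.roll(image, shift=(dy, dx), axis=(0, 1))
        p = predict_fn(shifted)
        preds.append(np.roll(p, shift=(-dy, -dx), axis=(0, 1)))  # shift prediction back
    return np.max(np.stack(preds, axis=0), axis=0)  # pixel-wise max over original + TTA
</code></pre>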
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F9268451%2Fe104d2aef5442825ea965e711a0b5856%2F2023-08-18%20191942.png?generation=1692354026668584&amp;alt=media" alt=""></p> |
Google Research - Identify Contrails to Reduce Global Warming | 2nd place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Research - Identify Contrails to Reduce Global Warming
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h1>Summary</h1>
<ul>
<li>Customized U-shape network with CoaT, NeXtViT, SAM-B, and tf_efficientnetv2_s backbones</li>
<li>×2 (with pixel shuffle up-scaling of predictions to 256×256) and ×4 input upscaling: pixel-level accuracy of predictions is critical</li>
<li>Sequence of images: LSTM, Transformer, Convolutional temporal mixing</li>
<li>No flip and 90-degree rotation augmentation (nor TTA) because masks are shifted</li>
<li>Train on soft labels: label = average of all annotators</li>
<li>BCE + dice Lovasz loss</li>
</ul>
<h1>Code</h1>
<p>My and <a href="https://www.kaggle.com/drhabib" target="_blank">@drhabib</a> 's <a href="https://github.com/DrHB/2nd-place-contrails" target="_blank">part</a> (includes the best performing CoaT_ULSTM model)<br>
<a href="https://www.kaggle.com/theoviel" target="_blank">@theoviel</a> 's <a href="https://github.com/TheoViel/kaggle_contrails" target="_blank">part</a><br>
<a href="https://www.kaggle.com/code/theoviel/contrails-inference-comb" target="_blank">Inference</a></p>
<h1>Introduction</h1>
<p>Our team would like to thank the organizers and Kaggle for making this competition possible. Also, I want to express my gratitude to my outstanding teammates <a href="https://www.kaggle.com/drhabib" target="_blank">@drhabib</a> and <a href="https://www.kaggle.com/theoviel" target="_blank">@theoviel</a> for their incredible contribution toward our final result. </p>
<h1>Details</h1>
<p>This competition has two main challenges: (1) noisy labels and (2) pixel-level accuracy requirements. (1) If one checks the annotation from all annotators, a major disagreement is quite apparent. This label noise challenge may be partially addressed by using soft labels (the average of all annotator labels) during training. However, since the evaluation requires hard labels, the major model failure is the prediction of contrails near the decision boundary, which cannot be avoided. (2) The thing that could really be addressed by the models is achieving pixel-level accuracy of predictions. Pixel-level accuracy is quite important since contrails are only several pixels thick, and even a mistake by one pixel in the mask boundary in the lateral direction may result in a significant decrease in the dice score, used as a metric. For such a task, a typical solution is up-sampling the input of the model or replacing the final linear up-sampling in segmentation models with transposed convolution or pixel shuffle up-sampling. This modification gave us a substantial boost in initial experiments over the direct use of models on the original resolution.</p>
<h2>Data</h2>
<p>Following the <a href="https://arxiv.org/pdf/2304.02122.pdf" target="_blank">organizer's publication</a>, we used “ash” false color images considering the 12 μm band, the difference between 12 and 11 μm bands, and the difference between 11 and 8 μm bands, respectively. We also tried to consider all bands or expand the “ash” color images with 8, 10, and 12 μm, but it resulted in lower performance (the pre-trained weights of the first convolution were replicated). We hypothesize that the best performance of ash color images may be a consequence of the use of the ash color images by annotators, and biased label noise. In our experiments, we up-sample the input with bi-cubic interpolation by ×2 for CoaT and NeXtViT models, and by ×4 for SAM model. EfficientNet model uses the original input but the stride in the first convolutional block is set to 1, which is equivalent to ×4 input up-sampling.</p>
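<p>For reference, the widely shared "ash" false-color construction from the competition data pipeline looks roughly like the sketch below; the band numbers and normalization bounds are quoted from memory and should be treated as assumptions:</p>
<pre><code>import numpy as np

def normalize_range(data, bounds):
    return (data - bounds[0]) / (bounds[1] - bounds[0])

def ash_false_color(band11, band14, band15):
    # band11 ~ 8.4 um, band14 ~ 11.2 um, band15 ~ 12.3 um brightness temperatures
    r = normalize_range(band15 - band14, (-4, 2))
    g = normalize_range(band14 - band11, (-4, 5))
    b = normalize_range(band14, (243, 303))
    return np.clip(np.stack([r, g, b], axis=2), 0, 1)
</code></pre>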
<h2>Model</h2>
<p>The interesting thing about this competition is that the model is required to do both (1) tracking the global pixel dependencies because contrails are quite elongated, and (2) capable of generating predictions with pixel-level accuracy because contrails are only several pixels thick. (1) can be addressed with transformers, while (2) is addressed with convolutional networks or local attention. In the competition, we tried to address these points independently with SAM-B ViT backbone and EfficientNet v2 purely convolutional network. In addition, there is a class of networks, transformers with a hierarchical structure, which addresses both aspects of the considered problem, (1) + (2), simultaneously. This class includes CoaT, which resulted in the best single model performance in our experiments, NeXtViT, MaxViT (which we missed), and MiT transformer from Segformer (unfortunately, it cannot be used in the competition because of the license).<br>
For the decoder we either considered a vanilla U-Net decoder (EfficientNet models) or a more customized set of U-blocks using pixel shuffle up-sampling and factorized FPN (CoaT, NeXtViT, SAM). In these blocks, we also replaced the BatchNorm with LayerNorm2d to have proper model convergence at the low batch size and used GELU instead of ReLU activation. <br>
The models are schematically illustrated in the figure below.<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1212661%2F16e90aba2cb8ee7ac94837e09455688f%2F4.png?generation=1692419847964897&alt=media" alt=""></p>
<h2>Temporal mixing</h2>
<p>In addition to our initial runs with single-frame models, we performed experiments with image sequences. In our best setup, multi-frame models get a ~0.01 improvement.<br>
Since the images are taken with a considerable temporal delay, and the displacement of clouds between frames is huge (~10-30 pixels) and is deteriorated by input up-sampling, video models based on 3D convolutions or window attention (VideoSwin) are not effective in the context of the considered problem. For the same reason, early mixing or mixing the predictions of sequential models is also not expected to work. Therefore, mixing at the intermediate feature map scales, such as res/32 and res/16, is most promising: even if clouds/contrails are misplaced, the feature maps at low resolution are reasonably aligned. In our experiments with the CoaT model, the best performance is achieved if two low-resolution feature maps (res/32 and res/16) are considered. The output of the temporal mixing modules as well as the output of the remaining feature maps are pooled at frame 4 and the decoder sees only the input corresponding to the considered frame, as illustrated in Figure 1.<br>
As temporal mixing modules, we considered LSTMs (which achieved the best performance), 1D temporal convolutions, and Transformers. LSTM (1 layer) and 1D temporal convolutions perform mixing along the temporal dimension only, while the spatial dependences between features are considered in the backbone and the decoder. The transformer-based mixing was proposed to perform an implicit feature map registration for a large displacement of contrails/clouds because it works with temporal and spatial mixing of features simultaneously and can match them even if they are not spatially aligned. <br>
This mixing is performed in the following way. To each feature map, corresponding to a given frame, we add encoding to distinguish it from others, then we flatten tokens from all frames into a single sequence and process it with 2 transformer decoder blocks. Key and value input to the transformer is represented by the concatenated sequence of all frames, while the query is a sequence of tokens from the 4-th frame only. Unfortunately, transformer-based mixing ended up with slightly lower performance, and the training with it is less stable. However, we kept both approaches in the final ensemble, benefitting from the improved diversity.<br>
In our experiments with CoaT/NeXtViT we followed the <a href="https://arxiv.org/pdf/2304.02122.pdf" target="_blank">organizer's publication</a> and selected the first 5 frames as the input. Both LSTM and Transformer mixing were considered. In our experiments with EfficientNet we considered 4 frame input (2nd – 5th, 2 frames before and 1 frame after the annotated frame) with bi-directional LSTM. This frame selection is dictated by the instruction to annotators to have a contrail at least on two subsequent frames. With the SAM model, we considered 1D-convolutional mixing with 3 frames (1 frame before and 1 frame after the annotation) because of the heavy VRAM requirement for this model, which was also an issue in inference. <br>
One idea we had is explicit registration of input frames to have contrails and clouds aligned between frames (temporal mixing is the most effective), for example considering optical flow. Unfortunately, the images contain not only clouds but also the Earth's surface, which does not move, or several layers of clouds moving in different directions. Therefore, we could not come up with any methods to proceed with the frame registration task. We were also considering performing registration based on the predicted contrails, but since contrails appear and disappear from frame to frame, this approach is also not feasible.</p>
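<p>A bare-bones sketch of the LSTM-based temporal mixing described above, applied to one low-resolution feature map and pooled at the annotated frame; dimensions, the frame index, and the module layout are illustrative assumptions:</p>
<pre><code>import torch
import torch.nn as nn

class TemporalLSTMMixer(nn.Module):
    # Mixes a (B, T, C, H, W) feature map along the time axis only,
    # then keeps the features of the annotated frame for the decoder.
    def __init__(self, channels, target_frame=4):
        super().__init__()
        self.lstm = nn.LSTM(channels, channels, num_layers=1, batch_first=True)
        self.target_frame = target_frame

    def forward(self, feats):                      # feats: (B, T, C, H, W)
        b, t, c, h, w = feats.shape
        x = feats.permute(0, 3, 4, 1, 2).reshape(b * h * w, t, c)  # one temporal sequence per pixel
        x, _ = self.lstm(x)
        x = x.reshape(b, h, w, t, c).permute(0, 3, 4, 1, 2)
        return x[:, self.target_frame]             # (B, C, H, W)
</code></pre>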
<h2>Pseudo-labels (PL)</h2>
<p>In addition to the competition data, we have collected a dataset “Contrails GOES16 Images May” (single frame setup) [9]. We split the images into 256×256 tiles with partial overlap and generated the masks based on the ensemble of best single-frame models. This data is shared as a Kaggle dataset [10]. This data was used for pretraining the models in a single-frame setup in some experiments, followed by finetuning the models either in single-frame or multi-frame setups. The use of PL has drastically improved the performance of individual models (folds), but the diversity of the models is reduced even if each fold of the finetuned model uses an independently trained PL model. Therefore, the performance on the average prediction over all folds is comparable to one for the setup trained without PL.<br>
We also have experimented with PL training on both external data + unannotated competition frames, but it did not give any benefit in finetuning a single frame model. Since such pretraining could affect multi-frame models, in the production experiments we used only external data at the PL step.</p>
<h2>Training</h2>
<p>CoaT and NeXtViT models: Training is performed for 24 epochs with Over9000 (Radam+LAMB+LookAhead) optimizer, learning rate of 3.5e-4, weight decay of 0.01. It appeared that this optimizer has superior compatibility with CoaT. In the case of PL pretraining, it is performed for 12 epochs on the external data, and then the models are finetuned for 12 epochs for CoaT and 18 epochs for NeXtViT models.<br>
EfficientNet models: Pretraining is performed in a single frame setup for 100 epochs with additional CutMix augmentation, learning rate is 1e-3, AdamW optimizer, weight decay 0.2. During pretraining the fraction of PL data is linearly decreased to zero. Then the model is fine-tuned in the image sequence setup with 3e-5 for the encoder and 1e-4 for the decoder. The addition of external data for pretraining resulted in much stronger 2D models, hence multi-frame EfficientNet models were not considered in the final ensemble. For diversity, we also pre-trained models for 200 epochs, and in a full-fit setup (training on all data, including validation set).<br>
SAM models: Our final submission includes only single-frame SAM-B ViT-based models. During training, we, first, train the model with a frozen encoder for 10 epochs, and then we unfreeze the model and continue for 20 more epochs. AdamW optimizer is used with the learning rate of 4e-6 for the encoder and 4e-5 for the decoder.<br>
During training, we used ShiftScaleRotate, RandomGamma, RandomBrightnessContrast, MotionBlur, and GaussianBlur augmentations. Flip and 90-rotate augmentation were not used because they resulted in worse performance. The training of all production models is performed for 5-6 folds; our folds are trained on the entire train dataset with different random seeds, while the evaluation is performed on the provided validation set. This strategy is used because evaluation in a standard K-fold split of the training data overestimates CV (cross-validation), which may be caused by spatial overlaps of tiles in the training dataset.</p>
<h2>Loss</h2>
<p>As a loss function, we used BCE + dice Lovasz loss. The second term is a modified version of a Lovasz loss [12] where (1) ReLU is replaced with 1 + ELU, (2) a symmetric version is used, and (3) instead of IoU we use a surrogate function maximizing dice. The second term provides an insignificant boost to the individual model performance. However, this term gives a wider maximum of the dice with respect to the threshold selection, i.e. weaker dependence of the performance on the threshold selection. This property is vital to avoid shakeup at the private leaderboard because of the improper threshold and is preferable for more effective ensembling.</p>
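<p>The modified Lovasz/dice-surrogate term is fairly involved; as a simpler stand-in that conveys the structure of the objective (BCE on soft labels plus a differentiable dice term), a sketch could look like this. It is explicitly not the team's actual loss:</p>
<pre><code>import torch
import torch.nn.functional as F

def bce_soft_dice_loss(logits, soft_labels, eps=1.0):
    # logits, soft_labels: (B, 1, H, W); soft_labels are the averaged annotator masks.
    bce = F.binary_cross_entropy_with_logits(logits, soft_labels)
    probs = torch.sigmoid(logits)
    inter = (probs * soft_labels).sum(dim=(1, 2, 3))
    union = probs.sum(dim=(1, 2, 3)) + soft_labels.sum(dim=(1, 2, 3))
    dice = (2 * inter + eps) / (union + eps)
    return bce + (1 - dice).mean()
</code></pre>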
<h2>Results</h2>
<p>The performance of our final models is summarized in the table below. Ex refers to models trained in PL setup (EfficientNet is trained with PL only). The postfix U refers to the single frame model, UT – transformer-based temporal mixing, and ULSTM – LSTM-based temporal mixing. The threshold is selected based on the search maximizing CV value. For most models, it is 0.46-0.50.<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1212661%2F5b5010ef692bccaca91f04bdd8ae6be3%2F5.png?generation=1692423621715751&alt=media" alt=""></p>
<h2>Best Single model</h2>
<p>Our best single model is CoaT, which for 5 folds gets 0.7039 CV, 0.71790 private, and 0.71243 public LB. Single fold CV is 0.6960+-0.0003 evaluated on the val set (our folds are trained on the train set with different seeds because eval in a standard Kfold split of train data overestimates CV due to possible spatial overlaps). <strong>We could have taken top-4 in the competition with this single model</strong>. </p>
<h2>Ensemble</h2>
<p>Our best ensemble consists of 8 models with weights selected to make approximately equal contributions from all components: CV is 0.7140 (on eval set), 0.72574 public and 0.72304 private LB. The models and their weight are summarized in the table below.<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1212661%2F2ddf4778766d0b5aa44a24a81409d460%2F6.png?generation=1692421380873655&alt=media" alt=""><br>
EfficientNet includes 3 model setups (100 epochs, 200 epochs, and full-fit) with the total efficient weight equal to 1.</p>
<h2>Model Execution Time</h2>
<p>CoaT and NeXtViT models: Training time is 2 hours for single-frame and 8 hours for multi-frame models per fold on 2×RTX4090 GPUs. The inference time at Kaggle (P100) for multi-frame models is 35 minutes for 5 folds.<br>
EfficientNet models: Training time is 3 hours 45 minutes for 100 epochs on 8×V100 GPUs per fold. Inference at Kaggle (P100) takes 20 minutes for 6 folds including data loading.<br>
SAM models: Training time is 16 hours per fold on 2×A6000 GPUs. Inference at Kaggle (P100) takes 48 minutes for 5 folds including data loading.</p>
<h2>Things we missed</h2>
<p>(1) During the competition one surprising finding for us was that flip and 90-degree rotation augmentations in model training resulted in worse performance. It was quite surprising, but, unfortunately, we did not think sufficiently about the origin of this behavior and attributed it to prevailing winds and a particular shape of clouds and contrails. After the competition, it appeared that the masks were shifted by 0.5 pixels, which, as we discovered, made any flips inapplicable. If the mask issue were fixed in our setup, flip/90-rotation augmentation could lead to an 8-fold increase of the effective training dataset size and enable a further performance boost at the inference stage due to test-time augmentation. So, potentially the performance of our models can be noticeably improved. <br>
(2) PL single-model pretraining on the external data gives a moderate CV improvement to the single-frame model (interestingly, at private LB this improvement is quite large). Meanwhile, this single-frame pretraining didn't give much to sequence models. So, external sequence data collection + proper reprojection could potentially serve as a better pretraining strategy for sequence models, and we may expect a boost comparable to single-frame models.</p> |
Google - American Sign Language Fingerspelling Recognition | [429th Solution] Google - American Sign Language Fingerspelling Recognition - own, native) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google - American Sign Language Fingerspelling Recognition
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hello. I had the opportunity to describe my solution and the path by which I came to it, and I decided not to deny myself the pleasure of taking that opportunity) Unfortunately, I no longer remember which options I chose for the final submission (I got confused in the list of submitted materials), and it probably doesn't matter, considering this is a description of the solution that took 429th place. I did a lot of experiments, reworked any number of layers, and tried to assemble something worthwhile out of them - it helped me reach 429th) Keras is interesting. Thanks to the organizers for a good time.</p>
<h2><strong>Context</strong></h2>
<p>Business context: <a href="https://www.kaggle.com/competitions/asl-fingerspelling/overview" target="_blank">https://www.kaggle.com/competitions/asl-fingerspelling/overview</a><br>
Data context: <a href="https://www.kaggle.com/competitions/asl-fingerspelling/data" target="_blank">https://www.kaggle.com/competitions/asl-fingerspelling/data</a></p>
<h2><strong>Overview of the Approach</strong></h2>
<p>In the beginning, I worked with two publicly available, well-known approaches (thanks to their authors). I was able to achieve a CV of 0.9+ with an encoder-decoder transformer by adding some LSTM or DepthwiseConv1D+ConvLSTM1D layers before the classifier, but all my attempts to convert and finalize the tflite model ended in failure.</p>
<p>The result of this work was a search for, and an attempt to create, a custom LSTM layer. Somewhere on GitHub, I found a variant of a simple LSTM layer (AI Summer link below). Converting it to Keras was the start. The created layer did not lead to the desired result, but it laid the foundation for an interesting search process. Everything that already existed was rewritten in Keras. No less amusing was combining everything that existed with new finds.</p>
<p>My experiments are based on the work of Rohith Ingilela and Mark Wijkhuizen. Loading, data processing, and model output, minus my little additions, are their work.</p>
<h2><strong>Details of the submission</strong></h2>
<p>Once again, I note that I no longer remember which variants were selected by me in the end. I do not exclude that it could even be variants based on the above-mentioned works - pardon me. Here are a couple of options for the layers that I used. As I understand it, something similar is often used in work on pose classification.</p>
<p>Simple LSTM layer:</p>
<pre><code> :
def :
super(Cell_AI_SUMMER, self).
self.input_length = input_length
self.linear_forget_w1 = tf.keras.layers.
self.linear_forget_r1 = tf.keras.layers.
self.sigmoid_forget = tf.keras.activations.sigmoid
self.linear_gate_w2 = tf.keras.layers.
self.linear_gate_r2 = tf.keras.layers.
self.sigmoid_gate = tf.keras.activations.sigmoid
self.linear_gate_w3 = tf.keras.layers.
self.linear_gate_r3 = tf.keras.layers.
self.activation_gate = tf.keras.activations.tanh
self.linear_gate_w4 = tf.keras.layers.
self.linear_gate_r4 = tf.keras.layers.
self.sigmoid_hidden_out = tf.keras.activations.sigmoid
self.drop_forget = tf.keras.layers.
self.activation_final = tf.keras.activations.tanh #gelu #tf.keras.activations.tanh
def forget(self, x, h):
x = self.linear
x = self.drop
h = self.linear
h = self.drop
return self.sigmoid
def input:
x_temp = self.linear
x_temp = self.drop
h_temp = self.linear
h_temp = self.drop
i = self.sigmoid
return i
def cell:
x = self.linear
h = self.linear
x = self.drop
h = self.drop
k = self.activation
g = ki
c = fc_prev
c_next = g + c
return c_next
def out:
x = self.linear
h = self.linear
return self.sigmoid
def call(self, x, tuple_in ):
(h, c_prev) = tuple_in
i = self.input
f = self.forget(x, h)
h = tf.cast(h, dtype=tf.float32)
c_prev = tf.cast(c_prev, dtype=tf.float32)
c_next = self.cell
o = self.out
h_next = oself.activation
return h_next, c_next
</code></pre>
<p><code>zero_core = 128</code></p>
<pre><code> (tf.keras.Model):
():
(Sequence2, self).__init__(name=)
self.input_length = input_length
self.rnn1 = Cell_AI_SUMMER(, input_length)
self.rnn2 = Cell_AI_SUMMER(, input_length)
self.add = tf.keras.layers.Add()
self.linear = tf.keras.activations.gelu
():
outputs = []
h_t = tf.zeros([zero_core, self.input_length], dtype=tf.double, name=)
c_t = tf.zeros([zero_core, self.input_length], dtype=tf.double, name=)
h_t2 = tf.zeros([zero_core, self.input_length], dtype=tf.double, name=)
c_t2 = tf.zeros([zero_core, self.input_length], dtype=tf.double, name=)
h_t, c_t = self.rnn1(, (h_t, c_t))
h_t2, c_t2 = self.rnn2(h_t, (h_t2, c_t2))
output = self.linear(h_t2)
outputs += [output]
i (future):
h_t, c_t = self.rnn1(, (h_t, c_t))
h_t2, c_t2 = self.rnn2(h_t, (h_t2, c_t2))
output = self.linear(h_t2)
outputs += [output]
outputs = self.add(outputs)
outputs = self.add([outputs, ])
outputs
</code></pre>
<p>Simple Custom Layer:</p>
<pre><code> :
def :
super(Convs, self).
self.input_length = input_length
self.ch1, self.ch2, self.ch3 = , , #, ,
self.conv_0 = tf.keras.layers.
self.conv_1 = tf.keras.layers.
self.conv_2 = tf.keras.layers.
self.conv_3 = tf.keras.layers.
self.conv_4 = tf.keras.layers.
self.conv_5 = tf.keras.layers.
self.conv_6 = tf.keras.layers.
self.conv_7 = tf.keras.layers.
self.batch1 = tf.keras.layers.
self.batch2 = tf.keras.layers.
self.batch3 = tf.keras.layers.
self.batch4 = tf.keras.layers.
self.relu = tf.keras.layers.
self.add = tf.keras.layers.
self.drop = tf.keras.layers.
self.max2 = tf.keras.layers.
self.uns = tf.keras.layers.
self.ave = tf.keras.layers.
def conv0(self, x):
x = self.conv
x = self.batch1(x)
x = self.relu(x)
x = self.conv
x = self.drop(x)
x = self.max2(x)
return x
def conv1(self, x):
x = self.conv
x = self.batch2(x)
x = self.relu(x)
x = self.conv
x = self.drop(x)
x = self.uns(x)
return x
def conv2(self, x):
x = self.conv
x = self.batch3(x)
x = self.relu(x)
x = self.conv
x = self.drop(x)
x = self.max2(x)
return x
def conv3(self, x):
x = self.conv
x = self.batch4(x)
x = self.relu(x)
x = self.conv
x = self.drop(x)
return x
def call(self, input):
out = self.conv0(input)
out = self.conv1(out)
out = self.conv2(out)
out = self.conv3(out)
out = self.add()
return out
</code></pre>
<p>Model example (this is just one of the smaller options):</p>
<pre><code>def get:
inp = tf.keras.
x = tf.keras.layers.(inp)
x = tf.keras.layers.(x)
x = tf.keras.layers.(x)
pe = tf.cast(positional, dtype=x.dtype)
x = x + pe
x = (x)
x = (x)
x1 = (x)
x = (x)
x = tf.keras.layers.()
x = (x)
x = (x)
x = tf.keras.layers.(x)
model = tf.keras.
return model
</code></pre>
<h2><strong>Which led to an improvement.</strong></h2>
<p>Augmentations at the initial stage. Subsequently, I abandoned them.<br>
Alternating UpSampling1D and MaxPooling1D in custom layers.<br>
Replacing Add with Average.<br>
Keeping a certain number of Conv1DBlock and TransformerBlock layers from the 1st place solution of the last competition by HOYSO48 in the model<br>
What I didn't think to do)</p>
<h2><strong>Sources</strong></h2>
<p>My sincere respect to the authors of the following works:<br>
<a href="https://www.kaggle.com/code/markwijkhuizen/aslfr-transformer-training-inference" target="_blank">https://www.kaggle.com/code/markwijkhuizen/aslfr-transformer-training-inference</a><br>
<a href="https://github.com/The-AI-Summer/RNN_tutorial/blob/master/cutom_LSTM.py" target="_blank">https://github.com/The-AI-Summer/RNN_tutorial/blob/master/cutom_LSTM.py</a><br>
<a href="https://www.kaggle.com/competitions/asl-signs/discussion/406978" target="_blank">https://www.kaggle.com/competitions/asl-signs/discussion/406978</a><br>
<a href="https://www.kaggle.com/code/irohith/aslfr-preprocess-dataset-tfrecords-mean-std" target="_blank">https://www.kaggle.com/code/irohith/aslfr-preprocess-dataset-tfrecords-mean-std</a></p>
<p>Something like this. Thanks again for an interesting challenge. Have a good day.</p> |
ICR - Identifying Age-Related Conditions | 6th Position winner solution for the : ICR - Identifying Age-Related Conditions | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>My solution is divided into 7 big steps:<br>
(Code version 10)</p>
<ol>
<li>Interpolate the missing data using a linear method using the 'interpolate' instance from pandas.</li>
<li>Use a random forest classifier to find the most important features in the dataset using 'gini-importance'.</li>
<li>Use Bayesian optimization to find the optimal parameters of the XGBoost classifier.</li>
<li>Repeat step 3 multiple times to gather many optimal parameters for the XGBoost classifier.</li>
<li>Make an ensemble of XGBoost classifiers using the optimal parameters.</li>
<li>Fine-tune the XGBoost classifiers again using GridSearchCV (because Bayesian optimization is just an estimation of the parameters).</li>
<li>Use a voting classifier (the mean of the probabilities of each XGBoost) to classify the test set (a brief sketch follows this list).</li>
</ol>
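<p>A minimal sketch of the soft-voting idea in step 7; the parameter sets, data variables (<code>X_train</code>, <code>y_train</code>, <code>X_test</code>) and model settings are placeholders, not the tuned values from the Bayesian-optimization steps:</p>
<pre><code>import numpy as np
from xgboost import XGBClassifier

# A few hypothetical parameter sets standing in for the tuned ones from steps 3-6.
param_sets = [
    {"n_estimators": 300, "max_depth": 4, "learning_rate": 0.05},
    {"n_estimators": 500, "max_depth": 3, "learning_rate": 0.03},
]

models = [XGBClassifier(**p).fit(X_train, y_train) for p in param_sets]

# Step 7: soft voting by averaging the predicted class-1 probabilities.
probs = np.mean([m.predict_proba(X_test)[:, 1] for m in models], axis=0)
</code></pre>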
<p>Here is my code (I performed step 2 in my personal computer):<br>
<a href="https://www.kaggle.com/code/diegosilvadefrana/notebooke87ef51e7e/notebook" target="_blank">https://www.kaggle.com/code/diegosilvadefrana/notebooke87ef51e7e/notebook</a></p> |
ICR - Identifying Age-Related Conditions | 9th Place Solution for the "ICR - Identifying Age-Related Conditions" Competition | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hello! This was unexpected, I'm really happy! </p>
<h1>Context section</h1>
<ul>
<li>Business context: <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions" target="_blank">https://www.kaggle.com/competitions/icr-identify-age-related-conditions</a></li>
<li>Data context: <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/data" target="_blank">https://www.kaggle.com/competitions/icr-identify-age-related-conditions/data</a></li>
</ul>
<h1>Overview of the Approach</h1>
<p>I was very concerned about overfitting my model. With so many different opinions in the discussions and the difficulty of accurately gauging technique effectiveness due to the small dataset, I was unsure about the best approach to take. </p>
<p>As a solution, I decided to integrate a mix of different models along with a variety of data preprocessing techniques.</p>
<h1>Details of the submission</h1>
<h3>Part one:</h3>
<ul>
<li>Data Preprocessing: <ul>
<li>Leveraged an over sampler to balance data distribution.</li>
<li>Use of Greeks' Epsilon with <code>Epsilon.max() + 1</code> for the test set.</li>
<li>Employing the SimpleImputer with the strategy set to 'constant'.</li></ul></li>
<li>Models:<ul>
<li>Ensemble of two XGBClassifiers and two TabPFNClassifiers.</li>
<li>Employed 5-fold cross-validation, picking the best model.</li></ul></li>
</ul>
<h3>Part two:</h3>
<ul>
<li>Data Preprocessing: <ul>
<li>No Greek's Epsilon.</li>
<li>Implemented feature scaling.</li>
<li>Used feature selection, opting for a subset of 40.</li></ul></li>
<li>Models: <ul>
<li>XGBClassifier and LGBMClassifier.</li>
<li>Employed 15-fold cross-validation, culminating in an ensemble mean.</li></ul></li>
</ul>
<h3>Conclusion</h3>
<p>I found that the best score came from computing a weighted mean, assigning a weight of 3 to the first ensemble while each of the other two models held a weight of 1.</p>
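<p>In code, that blend is just a weighted average of the predicted probabilities; the variable names below are placeholders for the outputs of the parts described above:</p>
<pre><code># p_part1: probabilities from the part-one ensemble (weight 3)
# p_xgb, p_lgbm: probabilities from the two part-two models (weight 1 each)
final_prob = (3 * p_part1 + p_xgb + p_lgbm) / 5
</code></pre>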
<h3>What did not work for me:</h3>
<ul>
<li>postprocessing (obviously! 😉).</li>
<li>Other models such as Tree Classifiers and Neural Networks…</li>
<li>Using other fields (apart from Epsilon) in Greeks</li>
</ul>
<h1>Sources:</h1>
<ul>
<li>I drew inspiration from a publicly shared notebook for the first part: <a href="https://www.kaggle.com/code/aikhmelnytskyy/public-krni-pdi-with-two-additional-models" target="_blank">https://www.kaggle.com/code/aikhmelnytskyy/public-krni-pdi-with-two-additional-models</a></li>
</ul>
<p>I'd also like to thank everyone who actively participated in the discussion forum, I believe that this allowed me to learn a lot!</p> |
ICR - Identifying Age-Related Conditions | Silver Medal Solution Write-up (Top 3%) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Howdy! </p>
<p><strong>Notebook:</strong> <a href="https://www.kaggle.com/code/conradkrueger/silver-medal-catboost-no-postprocess-or" target="_blank">https://www.kaggle.com/code/conradkrueger/silver-medal-catboost-no-postprocess-or</a></p>
<p><strong>Model:</strong> My model is just an ensemble of 10 binary CatBoostClassifiers from CV. </p>
<h5>Feature Engineering</h5>
<p>Baseline GBT models on the default dataset did not cut it for me. So, I turned to feature engineering to generate new features via arithmetic operations.<br>
Huge shout-out to <a href="https://www.kaggle.com/tatudoug" target="_blank">@tatudoug</a> 's Logistic Regression Baseline Notebook (<a href="https://www.kaggle.com/code/tatudoug/logistic-regression-baseline)" target="_blank">https://www.kaggle.com/code/tatudoug/logistic-regression-baseline)</a>.<br>
That notebook inspired me to try other arithmetic operations and see if GBTs could find any discriminating features.</p>
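<p>A minimal sketch of what pairwise arithmetic feature generation can look like (the exact operations and column pairs used are not listed here, so treat this as illustrative):</p>
<pre><code>import itertools
import pandas as pd

def add_arithmetic_features(df: pd.DataFrame, cols: list) -> pd.DataFrame:
    # Generate pairwise arithmetic combinations of numeric columns.
    out = df.copy()
    for a, b in itertools.combinations(cols, 2):
        out[f"{a}_plus_{b}"] = df[a] + df[b]
        out[f"{a}_minus_{b}"] = df[a] - df[b]
        out[f"{a}_times_{b}"] = df[a] * df[b]
        out[f"{a}_div_{b}"] = df[a] / (df[b] + 1e-9)
    return out
</code></pre>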
<h5>Feature Selection</h5>
<p>From this point, I tried to be as robust as possible when doing feature selection. <br>
<code>RepeatedStratifiedKFold(10, 10)</code> was used to pick features (RFE gave me great results). I did nested CV with multiple seeds to see if ensembling the models in the inner CV loop would lead to better results. Shout-out to <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> for talking about nested CV in discussion posts.<br>
My final decision was to select my best 2 CVs. </p>
<h5>Results</h5>
<ul>
<li>CV: 0.157, STDDEV: 0.67 </li>
<li>Public LB: 0.21</li>
<li>Private LB: 0.39 </li>
</ul>
<p>My best submission (same strat but different features were generated) scored 0.37 on private and 0.21 on public. </p>
<p>Looking at the standard deviation, it was not surprising to see my model score 0.21 on the public LB. My logic was to hope my model was robust enough and that many kagglers had overfitted to the public LB (and, of course, there was definitely totally zero luck involved). </p>
<p>Let me know what y’all think! And congrats to those who had the gut to include Epsilon in their final submission.</p> |
ICR - Identifying Age-Related Conditions | "Simple" and "Easy" Time Series approach but still have a medal for ICR-2023 | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hello everyone,</p>
<p>In the ICR competition, we're tackling the overfitting challenge. To ensure robust models, rigorous validation is a must. While cross-validation is common, I used time-series validation for a fresh perspective and a better defense against overfitting.</p>
<p>I also tried cross-validation and nested CV, but I chose the time-series approach in the end (explanation below).</p>
<h1><strong>Time-Series Validation Approach:</strong></h1>
<p>I climbed <strong>4078 places</strong> from the public leaderboard to the private leaderboard with only <strong>3 entries</strong>.<br>
This is how I do it!</p>
<h2>1. Data process</h2>
<ul>
<li>StandardScaler for the input data (this might not matter much when using a tree-based model)</li>
<li>Feature selection </li>
<li>Added Epsilon as a feature</li>
</ul>
<h2>2. Model</h2>
<ul>
<li>Single LGBM with tuning parameters</li>
</ul>
<h2>3. Validation</h2>
<p>The parts above are not much different from most other solutions. Here is the time-series approach. <br>
I split train data into 2 fold:</p>
<ul>
<li><p>Fold 1: </p>
<ul>
<li>train->2019/01/03 </li>
<li>test: 2019/01/04 -> 2019/11/12<br>
=> This is <code>model_1</code> with <code>balance_log_loss_1</code> and <code>weight_1</code>. </li></ul></li>
<li><p>Fold 2: </p>
<ul>
<li>train->2019/11/12</li>
<li>test: 2019/11/13 -> 2020/09/29<br>
=> This is <code>model_2</code> with <code>balance_log_loss_2</code> and <code>weight_2</code>. </li></ul></li>
</ul>
<p><strong>Note</strong>: <code>weight_{i} = 1 / balance_log_loss_{i}</code>; these weights are used to reduce imbalance, I think.<br>
Here are other validations of loss</p>
<table>
<thead>
<tr>
<th></th>
<th>Balance Log Loss</th>
</tr>
</thead>
<tbody>
<tr>
<td>CV</td>
<td>0.21180</td>
</tr>
<tr>
<td>Nested-CV</td>
<td>0.13583</td>
</tr>
<tr>
<td>Time Series</td>
<td>0.43275</td>
</tr>
</tbody>
</table>
<p>=> CV and Nested-CV seemed to overfit, so I chose the time-series approach instead! </p>
<h2>4. Output predict</h2>
<p>To get the final output, I calculate the weighted average of the <code>pred_{i}</code> of the 2 models:<br>
$$
output = \frac{weight_1 \cdot pred_1 + weight_2 \cdot pred_2}{weight_1 + weight_2}
$$</p>
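<p>A minimal sketch of that loss-weighted blend (variable names are illustrative):</p>
<pre><code>def weighted_output(pred_1, pred_2, balance_log_loss_1, balance_log_loss_2):
    # weight_i = 1 / balance_log_loss_i, as described above
    weight_1 = 1.0 / balance_log_loss_1
    weight_2 = 1.0 / balance_log_loss_2
    return (weight_1 * pred_1 + weight_2 * pred_2) / (weight_1 + weight_2)
</code></pre>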
<h1>Conclusion</h1>
<p>This is the approach that helped me build my first medal-winning model. Beginners can use it to achieve their first medals, like I did.<br>
Please have an upvote if you found it interesting. This is my <a href="https://www.kaggle.com/code/hwhuynguyen/icr-base-line" target="_blank">code</a>.</p> |
ICR - Identifying Age-Related Conditions | 158th solution🥈 | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h1><a href="https://www.kaggle.com/code/nynyny67/icr-submission?scriptVersionId=139202837" target="_blank">Kaggle ICR - Identifying Age-Related Conditions</a></h1>
<h1>Best solution among final selection (cv: 0.185, public: 0.17, private: 0.39)</h1>
<p>ensemble: (XGB, NN, TabPFN)<br>
weights: (19, 1, 10)</p>
<p>The weight combination was optimized by distributing a total of 30 among the models in the ensemble, using a grid search based on the CV score.</p>
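<p>A sketch of how such a grid search over integer weights summing to 30 might look (the metric implementation and variable names here are illustrative):</p>
<pre><code>import itertools
import numpy as np

def balanced_log_loss(y_true, y_pred):
    # competition metric (standard formulation)
    y_pred = np.clip(y_pred, 1e-15, 1 - 1e-15)
    n0, n1 = np.sum(y_true == 0), np.sum(y_true == 1)
    loss_0 = -np.sum((1 - y_true) * np.log(1 - y_pred)) / n0
    loss_1 = -np.sum(y_true * np.log(y_pred)) / n1
    return (loss_0 + loss_1) / 2

def best_weights(oof_preds, y_true, total=30):
    # oof_preds: list of three OOF prediction arrays (XGB, NN, TabPFN)
    best = (None, np.inf)
    for w1, w2 in itertools.product(range(total + 1), repeat=2):
        w3 = total - w1 - w2
        if w3 < 0:
            continue
        blend = (w1 * oof_preds[0] + w2 * oof_preds[1] + w3 * oof_preds[2]) / total
        score = balanced_log_loss(y_true, blend)
        if score < best[1]:
            best = ((w1, w2, w3), score)
    return best
</code></pre>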
<h2>Out of fold result</h2>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F8363906%2F175ce142039d556b282b859afe31ad53%2Fcv.png?generation=1691739845457339&alt=media" alt=""></p>
<h1>best model (cv: 0.235, public: 0.19, private: 0.34, not in final selection)</h1>
<p>Bagging of 20 XGB models.<br>
The bagging method was used with datasets in which negatives were down-sampled (learned from: <a href="https://www.kaggle.com/code/cdeotte/rapids-cuml-svc-baseline-lb-0-27-cv-0-35" target="_blank">https://www.kaggle.com/code/cdeotte/rapids-cuml-svc-baseline-lb-0-27-cv-0-35</a>).<br>
3% of labels were flipped in each bagging dataset so that the diversity and robustness of the ensemble increased.<br>
I didn't choose this model for the final selection because its CV score was not as good.</p>
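<p>A sketch of that bagging procedure (not the exact code; the sampling details and column names are assumptions):</p>
<pre><code>import numpy as np
import pandas as pd
from xgboost import XGBClassifier

def make_bag(df: pd.DataFrame, seed: int, flip_frac: float = 0.03) -> pd.DataFrame:
    # Down-sample negatives, then flip 3% of the labels to add diversity.
    rng = np.random.default_rng(seed)
    pos = df[df["Class"] == 1]
    neg = df[df["Class"] == 0].sample(n=len(pos), random_state=seed)
    bag = pd.concat([pos, neg]).sample(frac=1.0, random_state=seed).reset_index(drop=True)
    flip_idx = rng.choice(len(bag), size=int(flip_frac * len(bag)), replace=False)
    bag.loc[flip_idx, "Class"] = 1 - bag.loc[flip_idx, "Class"]
    return bag

def train_bagged_xgb(train: pd.DataFrame, n_bags: int = 20):
    models = []
    for seed in range(n_bags):
        bag = make_bag(train, seed)
        model = XGBClassifier(random_state=seed)
        model.fit(bag.drop(columns="Class"), bag["Class"])
        models.append(model)
    return models
</code></pre>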
<h2>Out of fold result</h2>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F8363906%2Fecd886fa266dcd780b6a2d9b9c0010c0%2Fsingle_xgb.png?generation=1691739879738216&alt=media" alt=""></p>
<h1>optimization to competition metric</h1>
<p>I think it was important to optimize model outputs for the competition metric. These are methods I tried.</p>
<h2>Based on the model output.</h2>
<p>The pp_prob function from this notebook was useful.<br>
<a href="https://www.kaggle.com/code/ulrich07/tabfpn-and-xgboost-cv-0-19-lb-0-17/notebook" target="_blank">https://www.kaggle.com/code/ulrich07/tabfpn-and-xgboost-cv-0-19-lb-0-17/notebook</a></p>
<p>I think this method is robust against data distribution change because it does not use the counts of positives and negatives in the training set, but rather the sum of the model's outputs.<br>
It is not guaranteed that the private dataset has the same ratio of positive and negative samples.</p>
<h2>analytical approach</h2>
<p>Given the probability that each sample is positive, what value $$\hat{y_i}$$ should we submit to get a lower metric value?<br>
Let's think about the i-th sample's expected loss.</p>
<p>$$l_i = -\frac{p_i}{n_1} \log{\hat{y_i}} - \frac{1-p_i}{n_0} \log{(1-\hat{y_i})} $$</p>
<p>Finding the $$\hat{y_i}$$ that makes this value small will minimize the metric.</p>
<p>$$\frac{\partial l_i}{\partial \hat{y_i}} = -\frac{p_i}{n_1} \frac{1}{\hat{y_i}} + \frac{1-p_i}{n_0} \frac{1}{1 - \hat{y_i}} = 0$$</p>
<p>$$\hat{y_i} = \frac{1}{1-\frac{n_1}{n_0} (1 - \frac{1}{p_i})}$$</p>
<p>Then you can estimate the p value with a machine learning approach and calculate the y value for each sample. This method worked for me, but I didn't use it in the final selection because the former method had a better CV score.</p>
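<p>A minimal sketch of that analytical post-processing (names are illustrative; n0 and n1 are the negative/positive counts used in the metric):</p>
<pre><code>import numpy as np

def postprocess(p, n0, n1):
    # y_hat = 1 / (1 - (n1 / n0) * (1 - 1 / p)), as derived above,
    # where p is the estimated probability of the positive class.
    p = np.clip(p, 1e-6, 1 - 1e-6)
    return 1.0 / (1.0 - (n1 / n0) * (1.0 - 1.0 / p))
</code></pre>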
<h2>code</h2>
<p><a href="https://github.com/NYNYNY67/KaggleIcr" target="_blank">https://github.com/NYNYNY67/KaggleIcr</a></p> |
ICR - Identifying Age-Related Conditions | [475 Post 0.37649] Place Solution for the ICR - Identifying Age-Related Conditions Competition | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p></p>
<h2> Context </h2>
<div><p>
<b>Business context:</b> https://www.kaggle.com/competitions/icr-identify-age-related-conditions/overview
<br>
<b>Data context: </b> https://www.kaggle.com/competitions/icr-identify-age-related-conditions/data
</p></div>
<h2> Overview of the Approach</h2>
<div><p>
<b>Handling Missing Values: </b> <br>
Median values were computed on train, grouped by EJ and Alpha.
Train columns with missing data (NaNs) were filled with the median for the row's EJ and Alpha group.
Median values were then computed on train grouped by EJ only.
Test columns with missing data (NaNs) were filled with the train median for the row's EJ group.</p>
<p>
<b>Decision to Split: </b> <br>
Split the data into EJ A and EJ B for use in separate models for A and B.
This was based on a review of some of the EDA notebooks and discussions and on consideration of differences in columns,
e.g. some are static for A ('DU','EH','FD','FL','GL') and there is no Alpha B for A,
while those columns were relevant for Alpha B and EJ B only,
as well as analysis of differences in distributions and stats between A and B for other columns, especially correlated ones. </p>
<p>
<b>Correlations:</b> <br>
Separate correlations for EJ A Alpha D and G and EJ A overall for analysis<br>
Separate correlations for EJ B Alpha B, D and G and EJ B overall for analysis</p>
<p>
<b>Feature columns:</b> <br>
np.log1p for all columns excluding EJ for train and test. (EJ column was not used in data for models) <br>
Combination columns via addition for AB_CF, CD_DI, BC_FR, CR_DA for EJ A data and model<br>
Combination columns via addition as for EJ A with DU_FL specific for Alpha B, for EJ B data and model.</p>
<p>
<b>Weighted PCA features:</b> <br>
Separate Weighted PCA for EJ A and EJ B using weight columns max-normed for correlated groupings of columns and default weights for remaining columns.
Used wpca.EMPCA based on the algorithm presented in Bailey (2012) to handle weights on each input
value and heteroscedastic noise.<br>
Considering correlations in different ways gave a view for which columns influence other columns as well as Class and which might be combined as features and/or used for weights or in groupings for weighted PCA.</p>
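<p>A minimal sketch of applying weighted PCA with the wpca package (array shapes, weights and the number of components are illustrative; the weights keyword follows the package README):</p>
<pre><code>import numpy as np
from wpca import EMPCA  # https://github.com/jakevdp/wpca

# X: samples x features for one correlated grouping of columns;
# W: per-value weights (max-normed for the correlated group, defaults elsewhere).
X = np.random.rand(100, 8)
W = np.ones_like(X)

pca = EMPCA(n_components=3)
pca.fit(X, weights=W)      # EM algorithm from Bailey (2012) handles the weights
Z = pca.transform(X)       # weighted principal components used as features
</code></pre>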
<p>
<b>Percentiles Differences:</b> <br>
Separate Percentiles 10-90 were obtained for EJ A and EJ B for all columns excluding EJ <br>
Difference of columns to percentiles were calculated and for EJ A used 10, 40, 60, 90 and for EJ B used 20, 40, 60, 80<br>
Although column meanings were unknown, the percentiles differences were included as a proxy for
what could perhaps be considered in or out of normal range for test results, what may indicate
hyper or hypo conditions, etc. </p>
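<p>A sketch of the percentile-difference features (column names and the exact percentile handling are illustrative; the percentiles follow the EJ A setting above):</p>
<pre><code>import numpy as np
import pandas as pd

def percentile_diff_features(df: pd.DataFrame, cols, percentiles=(10, 40, 60, 90)):
    # For each column, compute the chosen percentiles on the (EJ-specific) train
    # subset and add the differences to them as new features.
    out = {}
    for c in cols:
        for q in percentiles:
            out[f"{c}_diff_p{q}"] = df[c] - np.percentile(df[c].dropna(), q)
    return pd.DataFrame(out, index=df.index)
</code></pre>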
<p><b>Training and Models:</b> <br>
2 sets of features were transformed, one with QuantileTransformer and one with MinMaxScaler 0-255;
difference percentiles were transformed with MinMaxScaler 0-255, and a subset of np.log1p columns with the most correlation (different for A and B) were all stacked together. This was then transformed using Table to Image into a 2D image-style representation of 16x16, the input for a CNN model. <br>
The model used was a simple Convolutional Neural Network (CNN) with a stack of 3 Conv2D, MaxPooling2D and BatchNormalization layers with Dropout on the last, swish activation on first, relu on second and third.
Flatten then 2 Dense layers, the first outputs 4 like Alphas activation swish, the second outputs 1 for Class activation Sigmoid.<br>
Trained with optimizer AdamW, CyclicalLearningRate and Batch size 8.<br>
Cross validation MultilabelStratifiedKFold split on Class, Alpha, Gamma with 8 folds for EJ A and 10 folds for EJ B (post competition 6 folds for EJ B was best)<br>
A proof-of-concept XGB, LGBM, TabFPN model was trialled for evaluating the use of the Weighted PCA and combination-column features. This was based on a notebook using Epsilon and modified to use KNNImputer(n_neighbors=8, weights='distance') for unknown Epsilon in train.
</p></div>
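<p>A hedged sketch of what the described table-to-image CNN could look like in Keras (filter counts, kernel sizes and the dropout rate are assumptions; the activations, block count, head sizes, optimizer and batch size follow the description above):</p>
<pre><code>import tensorflow as tf

def build_cnn(input_shape=(16, 16, 1)):
    inp = tf.keras.Input(shape=input_shape)
    x = inp
    # three Conv2D / MaxPooling2D / BatchNormalization stacks: swish, relu, relu
    for i, act in enumerate(["swish", "relu", "relu"]):
        x = tf.keras.layers.Conv2D(32 * (i + 1), 3, padding="same", activation=act)(x)
        x = tf.keras.layers.MaxPooling2D(2)(x)
        x = tf.keras.layers.BatchNormalization()(x)
    x = tf.keras.layers.Dropout(0.3)(x)            # Dropout on the last stack (rate assumed)
    x = tf.keras.layers.Flatten()(x)
    x = tf.keras.layers.Dense(4, activation="swish")(x)       # 4 outputs, like the Alphas
    out = tf.keras.layers.Dense(1, activation="sigmoid")(x)   # Class output
    return tf.keras.Model(inp, out)

model = build_cnn()
model.compile(optimizer=tf.keras.optimizers.AdamW(), loss="binary_crossentropy")
# model.fit(X_img, y, batch_size=8, ...)  # with a cyclical learning-rate schedule
</code></pre>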
<h2>Details of the submission</h2>
<div><p>
The competition Overview stated "Currently, models like XGBoost and random forest are used to predict medical conditions yet the models' performance is not good enough." <br></p><p>
<b>What was special, creative, important:</b>
What was creative here was to try and see if a CNN would work with this data and how features could be used in different ways to that end.
Separate EJ A and EJ B data and models were important and allowed for insights gained from A which was easier to develop B. Using weighted PCA was special in this competition (first proposed in the IceCube - Neutrinos in Deep Ice competition (<a href="https://www.kaggle.com/competitions/icecube-neutrinos-in-deep-ice/overview"> IceCube - Neutrinos</a>) by bowaka). Using percentiles differences was another feature idea that leveraged the weighted PCA columns for potential additional information the model could use.<br>
Table to image, proposed in this competition by miyawakiyoshifumi, was a novel approach for NN solutions here.
Initially it did not seem as if it would be able to get good results, but the input size was small at 5x5. To consider how features could be expanded and augmented, a variation of ulrich07's XGB, LGBM, TabFPN notebook was used.
(After competition end and more decimals revealed, the original notebook Private Score was 0.48347 and the variation including weighted PCA Private Score was 0.42157 so these features did work.) <br>
Returning to a CNN with table to image, initial versions of the combination columns, weighted PCA and percentiles
for the 16x16 input looked good with CV 0.138855 but seemed not to fare well with LB 0.36. After the competition ended and more decimals were revealed, this had <b>Private Score 0.40326 Public Score 0.36356 </b> but regrettably it was not selected as one of the final submissions. <br></p>
<p><b>What did not work:</b> Use of Epsilon in the data for CNN models or for cross-validation splits did not help. Nothing managed to improve some of the hard-to-classify points, like the notorious Id cf5439add02c, an Alpha D, which seemed to seesaw with another Alpha D, Id 3924089e4396, a case of robbing Peter to pay Paul. Various ideas were tried to add or remove features, use new combinations, more percentiles, other seeds. Attempts to create larger inputs, e.g. 25x25 or 28x28, did not work even with more Dropout.<br></p>
<p><b>Post competition</b>
However, for the most part, improvements in CV for the CNN model translated to improved Private LB even if Public LB was not great, and spread between them was smaller than many other models. This is perhaps impactful for future considerations of CNN models with this type of dataset. <br>
Post competition, I wanted to try to improve further on the best CNN table-to-image notebook with Private Score 0.40326. The EJ A model always performed best, so I focused on EJ B correlation and a rework of its weighted PCA. Also, with a reduction in the number of folds to 6 and the batch size from 16 to 8, I obtained the best result so far: <br>
<b>Private Score 0.37649 Public Score 0.32110 </b> with CV 0.16837328202944316
see notebook <a href="https://www.kaggle.com/code/something4kag/icr-split-a-b-wpca-pctiles-table2img-cnn">ICR - Split A B WPCA Pctiles Table2Img CNN</a>
<br></p><p>There are no doubt ways to improve further, especially with knowledge of the true features meanings. <br>
Maybe a 2nd ICR competition can find out!
</p></div>
<h2>Sources </h2>
<p></p><div><p><br>
<a href="https://www.kaggle.com/competitions/icecube-neutrinos-in-deep-ice/discussion/381279" target="_blank">https://www.kaggle.com/competitions/icecube-neutrinos-in-deep-ice/discussion/381279</a><br>
<a href="https://github.com/jakevdp/wpca" target="_blank">https://github.com/jakevdp/wpca</a><br>
<a href="http://arxiv.org/abs/1412.4533" target="_blank">http://arxiv.org/abs/1412.4533</a> Delchambre (2014) algorithm<br>
<a href="http://arxiv.org/abs/1208.4122" target="_blank">http://arxiv.org/abs/1208.4122</a> co Bailey (2012) algorithm<br>
<a href="https://www.kaggle.com/datasets/something4kag/wpca-01-py3-whl" target="_blank">https://www.kaggle.com/datasets/something4kag/wpca-01-py3-whl</a><br>
<a href="https://www.kaggle.com/code/something4kag/example-weighted-pca-no-internet" target="_blank">https://www.kaggle.com/code/something4kag/example-weighted-pca-no-internet</a><br>
<a href="https://www.kaggle.com/datasets/miyawakiyoshifumi/offline-tab2img" target="_blank">https://www.kaggle.com/datasets/miyawakiyoshifumi/offline-tab2img</a><br>
<a href="https://www.kaggle.com/code/miyawakiyoshifumi/table-to-image" target="_blank">https://www.kaggle.com/code/miyawakiyoshifumi/table-to-image</a><br>
<a href="https://www.tensorflow.org/tutorials/images/cnn" target="_blank">https://www.tensorflow.org/tutorials/images/cnn</a><br>
<a href="https://www.kaggle.com/code/ulrich07/tabfpn-and-xgboost-cv-0-19-lb-0-17" target="_blank">https://www.kaggle.com/code/ulrich07/tabfpn-and-xgboost-cv-0-19-lb-0-17</a><br>
<a href="https://www.kaggle.com/code/propriyam/icr-tabpfn-lgb-xgb-lb-0-17-cv-0-16" target="_blank">https://www.kaggle.com/code/propriyam/icr-tabpfn-lgb-xgb-lb-0-17-cv-0-16</a><br>
<a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/discussion/429029" target="_blank">https://www.kaggle.com/competitions/icr-identify-age-related-conditions/discussion/429029</a><br>
<a href="https://www.kaggle.com/tilii7/data-splitting" target="_blank">https://www.kaggle.com/tilii7/data-splitting</a><br>
<a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/discussion/428997" target="_blank">https://www.kaggle.com/competitions/icr-identify-age-related-conditions/discussion/428997</a><br>
</p></div> |
ICR - Identifying Age-Related Conditions | Unveiling a Potential Silver Lining: The Power of a Public Notebook 🥈 | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Greetings Kagglers,</p>
<p>I'm excited to share a revelation from our recent competition that underscores the influence of a well-crafted public notebook. Allow me to introduce you to my public notebook, "ICR | MultiHeads Ensemble Baseline [CV 0.23] & EDA" which holds the potential to secure a silver medal for those who made the strategic choice to incorporate it into their approach.</p>
<p>Link to the notebook: <a href="https://www.kaggle.com/code/elcaiseri/icr-multiheads-ensemble-baseline-cv-0-23-eda" target="_blank">https://www.kaggle.com/code/elcaiseri/icr-multiheads-ensemble-baseline-cv-0-23-eda</a></p>
<p>In the ever-evolving landscape of Kaggle competitions, decisions can often play a pivotal role in shaping our outcomes. While my public notebook was designed with meticulous care, it's important to note that the impact of certain choices becomes more evident as we transition to the private leaderboard.</p>
<p>I'm hopeful that some of you recognized the promise of this notebook and opted to integrate it into your strategy. This choice could have translated into an impressive silver medal on your profile, and I genuinely hope that it worked out in your favor.</p>
<p>As we navigate the complexities of Kaggle, let's continue to appreciate the power of sharing knowledge and solutions within our community. Every notebook, every insight, and every decision contributes to our collective growth and understanding.</p>
<p>Thank you for being a part of this journey. I eagerly await your thoughts and insights on the notebook, and I'm looking forward to witnessing the diverse impacts it might have had.</p>
<p>Wishing you all happy coding and successful insights!</p> |
Google Research - Identify Contrails to Reduce Global Warming | 9th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Research - Identify Contrails to Reduce Global Warming
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to the organizers for hosting competition and congrats to all the winners.</p>
<h1>The key finding</h1>
<p>Many people have likely noticed that flip/rot90 augmentation doesn't work well on this dataset.</p>
<p>My guess is that during the conversion of polygon annotations to binary masks, the leftmost point is not included while the rightmost point is, causing a misalignment between the image and the mask.</p>
<ul>
<li>original<ul>
<li>img: [0, 128, 128, 3, 4, 5, 6]</li>
<li>mask: [0, 255a, 255b, 255pad, 0, 0, 0]</li></ul></li>
<li>normal flip that does not work<ul>
<li>img: [6, 5, 4, 3, 128, 128, 0]</li>
<li>mask: [0, 0, 0, 255pad, 255b, 255a, 0]</li></ul></li>
<li>consistent flip should be<ul>
<li>img: [6, 5, 4, 3, 128, 128, 0]</li>
<li>mask: [0, 0, 0, 0, 255b, 255a, 255pad]</li></ul></li>
</ul>
<p>I tried two solutions to address this issue.</p>
<h3>solution 1, train with misaligned (img ,mask)</h3>
<ul>
<li>training<ul>
<li>img: normal flip</li>
<li>mask: consistent flip</li></ul></li>
<li>inference<ul>
<li>flip tta: normal img flip -> predict -> consistent mask flip</li></ul></li>
</ul>
<p>training/test time augmentation is a bit tricky in this case.</p>
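<p>For reference, one way to read the "consistent flip" above is: flip the mask, then shift it by one pixel so the padded boundary stays on the correct side. This is my reading of the example arrays, not necessarily the exact implementation used:</p>
<pre><code>import numpy as np

def consistent_flip_lr(mask: np.ndarray) -> np.ndarray:
    # flip left-right, then roll one pixel to reproduce the example above
    return np.roll(mask[..., ::-1], shift=1, axis=-1)

mask = np.array([0, 255, 255, 255, 0, 0, 0])   # [0, 255a, 255b, 255pad, 0, 0, 0]
print(consistent_flip_lr(mask))                 # [0, 0, 0, 0, 255b, 255a, 255pad]
</code></pre>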
<h3>solution 2, train with aligned (img, mask)</h3>
<ul>
<li>training<ul>
<li>shift img by +0.5 pixel</li></ul></li>
<li>inference<ul>
<li>shift img by +0.5 pixel</li></ul></li>
</ul>
<p>training/test time augmentation is as normal in this case.</p>
<p>Both solutions worked. For simplicity, I chose solution 2. I use the following code to apply resize512&shift1</p>
<pre><code>import cv2
import numpy as np

img_affine_matrix = np.array([[2.0, 0.0, 1.5], [0.0, 2.0, 1.5]], dtype=np.float64)
img = cv2.warpAffine(
    img,
    img_affine_matrix,
    (512, 512),
    flags=cv2.INTER_LINEAR,
    borderMode=cv2.BORDER_CONSTANT,
    borderValue=0,
)
<p>Note that you need to calibrate M for warpAffine. So it's 1.5 in the final affine matrix.</p>
<pre><code>def calibrate(M):
    # account for warpAffine's pixel-center convention (half-pixel offset)
    M[:, 2] += M[:, 0] * 0.5 + M[:, 1] * 0.5 - 0.5
    return M
</code></pre>
<h1>Models</h1>
<p>My submission notebook is public <a href="https://www.kaggle.com/code/tascj0/contrail-submit?scriptVersionId=139432132" target="_blank">here</a>.</p>
<p>I joined the competition quite late and only briefly explored other bands and 2.5D before giving up on them. In the end, I only used false_color images and UNet models.</p>
<ul>
<li>Strong backbones were very helpful.</li>
<li>Training with all individual annotations was helpful.</li>
</ul>
<p>I had some success with pseudo-labeling on a small model, but unfortunately, I didn't have time to apply it on larger models.</p> |
HuBMAP - Hacking the Human Vasculature | 54th place solution YOLOV8x + WBF + TTA ensemble train and inference code | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: HuBMAP - Hacking the Human Vasculature
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Summary:<br>
MultilabelStratifiedKFolds on dataset, source, num_cells with yolov8 on one class, drop unsure, TTA, WBF, dilation</p>
<p>Inference code: <a href="https://www.kaggle.com/code/sirtimio/infer-yolov8-tta-wbf-55th-place" target="_blank">https://www.kaggle.com/code/sirtimio/infer-yolov8-tta-wbf-55th-place</a></p>
<p>Train code: <a href="https://github.com/sir-timio/HuBMAP-2023" target="_blank">https://github.com/sir-timio/HuBMAP-2023</a></p>
<p>Best private: 0.552 without dilation and trained mostly on dataset 1</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F7215366%2Fab482c349804007b4699e3250f68f6db%2F2023-08-01%20%2010.30.49.jpg?generation=1690875066944109&alt=media" alt=""></p> |
ICR - Identifying Age-Related Conditions | 28th Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>To begin, I'd like to express my gratitude to Kaggle for hosting this competition, and extend my heartfelt congratulations to the winners of this competition!</p>
<p>Here is the main takeaway from my solution. After reviewing discussions on the dangers of post-processing and data distribution on the forums and conducting several experiments, I've come to the realization that I should emphasize the prevention of overfitting due to the small dataset, rather than complicating my model. I allocated a significant amount of time to feature engineering while focusing on maintaining simplicity in model building.</p>
<p>I used stratified cross-validation, and an XGBoost model was constructed with parameters obtained through RandomizedSearchCV. A number of standard feature engineering methods were used. </p>
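<p>A minimal sketch of that tuning setup (the parameter grid and settings below are placeholders, not the ones actually used):</p>
<pre><code>from sklearn.model_selection import RandomizedSearchCV, StratifiedKFold
from xgboost import XGBClassifier

# Hypothetical search space
param_dist = {
    "n_estimators": [200, 400, 800],
    "max_depth": [3, 4, 6],
    "learning_rate": [0.01, 0.05, 0.1],
    "subsample": [0.6, 0.8, 1.0],
    "colsample_bytree": [0.6, 0.8, 1.0],
}
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
search = RandomizedSearchCV(
    XGBClassifier(eval_metric="logloss"),
    param_distributions=param_dist,
    n_iter=30,
    scoring="neg_log_loss",
    cv=cv,
    random_state=42,
)
# search.fit(X, y)  # X, y: training features and Class labels
</code></pre>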
<p>The public score for my submission is 0.18.</p>
Google - American Sign Language Fingerspelling Recognition | 95th Bronze solutions: Focal loss + KD + Supplemental data | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google - American Sign Language Fingerspelling Recognition
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h2><strong>95th Bronze Solutions (public score 0.714, private score 0.671)</strong></h2>
<p>I express my gratitude to Kaggle and Google for organizing this competition. Participating in this competition has been an invaluable learning experience for me.</p>
<p>One significant achievement was improving the public score from Version 17 of Saidineshpola's notebook, which initially achieved a public score of 0.689. The improvements were achieved by implementing the following modifications:</p>
<h1><strong>Model & Config:</strong></h1>
<ul>
<li>Set model dimensions (Dim) to 192.</li>
<li>Set 16 CNN-Transformer blocks.</li>
<li>Incorporated a dropout rate of 0.4.</li>
<li>Utilized 8 attention heads and an expansion factor of 8 in the Transformer blocks.</li>
<li>The total number of parameters amounted to 19,255,980.</li>
<li>Employed tf.float16 for model conversion.</li>
<li>Configured training with 100 epochs, with 10 warm-up epochs.</li>
<li>Set the maximum learning rate (LR_MAX) to 1e-3.</li>
<li>Set the weight decay (WD_RATIO) to 1e-4.</li>
</ul>
<h1><strong>CTC loss:</strong></h1>
<p>I utilized the CTC loss function from the <code>pip install tf-seq2seq-losses</code> library, which provided comparable results to TensorFlow's built-in function. This implementation offered a speed enhancement of around 25% on V100 GPUs. Additionally, I applied the Focal loss to the CTC loss function.</p>
<pre><code>import tensorflow as tf
from tf_seq2seq_losses import classic_ctc_loss

def CTCLoss(labels, logits):
    label_length = tf.reduce_sum(tf.cast(labels != pad_token_idx, tf.int32), axis=-1)
    logit_length = tf.ones(tf.shape(logits)[0], dtype=tf.int32) * tf.shape(logits)[1]
    ctc_loss = classic_ctc_loss(
        labels=tf.cast(labels, dtype=tf.int32),
        logits=logits,
        label_length=label_length,
        logit_length=logit_length,
        blank_index=pad_token_idx,
    )
    # focal weighting of the per-sample CTC loss
    p = tf.exp(-ctc_loss)
    alpha = 0.8  # placeholder: the original alpha value was lost in formatting
    gamma = 2.0  # placeholder: the original gamma value was lost in formatting
    focal_ctc_loss = tf.multiply(tf.multiply(alpha, tf.pow(1 - p, gamma)), ctc_loss)
    loss = tf.reduce_mean(focal_ctc_loss)
    return loss
</code></pre>
<h1><strong>Data augmentation:</strong></h1>
<p>No data augmentation techniques were applied.</p>
<h1><strong>Additional data:</strong></h1>
<p>Supplemental data was integrated into the training process. I followed <a href="https://www.kaggle.com/code/irohith/aslfr-preprocess-dataset/notebook" target="_blank">ROHITH INGILELA's notebook</a> to generate supplemental TFrecord files. The resources used are available <a href="https://www.kaggle.com/datasets/kongpasom/aslfr-sup-tfrecord" target="_blank">here</a> and <a href="https://www.kaggle.com/datasets/kongpasom/aslfr-sup" target="_blank">here</a> .</p>
<h1><strong>Training Approach:</strong></h1>
<ol>
<li>Trained the model using supplemental data for 100 epochs.</li>
<li>Trained the model with the training data and the pre-trained weights derived from the supplemental data for an additional 100 epochs (resulting in a public score of 0.713).</li>
<li>Executed a fine-tuning process using knowledge distillation, refining the model with the best pre-trained weights through the AdamW optimizer, resulting in a modest score improvement of +0.001.</li>
</ol>
<h1><strong>Model Architecture:</strong></h1>
<p>Here's a summarized version of the model architecture:</p>
<pre><code># Reconstructed from the garbled original: dim/blocks/dropout/heads/expand follow the
# config listed above; kernel sizes, layer names and optimizer settings follow the
# referenced public notebook and are assumptions where values were lost.
def get_model(dim=192):
    inp = tf.keras.Input(INPUT_SHAPE)
    x = tf.keras.layers.Masking(mask_value=0.0)(inp)
    x = tf.keras.layers.Dense(dim, use_bias=False, name='stem_conv')(x)
    pe = tf.cast(positional_encoding(INPUT_SHAPE[0], dim), dtype=x.dtype)
    x = x + pe
    x = tf.keras.layers.BatchNormalization(name='stem_bn')(x)
    num_blocks = 16   # 16 CNN-Transformer blocks
    drop_rate = 0.4   # dropout rate from the config above
    for i in range(num_blocks):
        x = Conv1DBlock(dim, 11, drop_rate=drop_rate)(x)
        x = Conv1DBlock(dim, 5, drop_rate=drop_rate)(x)
        x = Conv1DBlock(dim, 3, drop_rate=drop_rate)(x)
        x = TransformerBlock(dim=dim, num_heads=8, expand=8, attn_dropout=0.2, drop_rate=drop_rate)(x)  # attn_dropout value assumed
    x = tf.keras.layers.Dense(dim*2, activation='relu', name='top_conv')(x)
    x = tf.keras.layers.Dropout(drop_rate)(x)
    x = tf.keras.layers.Dense(len(char_to_num), name='classifier')(x)
    model = tf.keras.Model(inp, x)
    loss = CTCLoss
    optimizer = tfa.optimizers.RectifiedAdam(sma_threshold=4)   # value assumed
    optimizer = tfa.optimizers.Lookahead(optimizer, sync_period=5)  # value assumed
    model.compile(loss=loss, optimizer=optimizer)
    return model
</code></pre>
<h1><strong>Knowledge Distillation Training :</strong></h1>
<p>A training step using knowledge distillation was employed to refine the model:</p>
<pre><code>loss_fn = CTCLoss
kld_loss_fn = tf.keras.losses.KLDivergence()

def train_step(x, y):  # function name reconstructed; the original was lost in formatting
    temperature = 10.0  # placeholder: original value lost in formatting
    alpha = 0.5         # placeholder: original value lost in formatting
    teacher_pred = model_teacher(x, training=False)
    with tf.GradientTape() as tape:
        logits = model(x, training=True)
        loss_value = loss_fn(y, logits)
        kld_loss = kld_loss_fn(
            tf.nn.softmax(teacher_pred / temperature, axis=-1),
            tf.nn.softmax(logits / temperature, axis=-1),
        ) * temperature**2
        loss = alpha * loss_value + (1 - alpha) * kld_loss
    grads = tape.gradient(loss, model.trainable_weights)
    model.optimizer.apply_gradients(zip(grads, model.trainable_weights))
    return loss, loss_value, kld_loss
</code></pre>
<h1><strong>Reference (</strong>If I have seen further, it is by standing on the shoulders of giants.<strong>) :</strong></h1>
<p><a href="https://www.kaggle.com/code/irohith/aslfr-ctc-based-on-prev-comp-1st-place" target="_blank">ROHITH INGILELA's notebook</a><br>
<a href="https://www.kaggle.com/code/saidineshpola/aslfr-ctc-based-on-prev-comp-1st-place" target="_blank">RSAIDINESHPOLA's notebook</a><br>
<a href="https://www.kaggle.com/code/irohith/aslfr-preprocess-dataset" target="_blank">ROHITH INGILELA's pre-process dataset notebook</a></p> |
ICR - Identifying Age-Related Conditions | 15th Place Solution for the "ICR - Identifying Age-Related Conditions" | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I feel incredibly fortunate to have reached 15th place in the competition. I started by referencing the familiar baseline(<a href="https://www.kaggle.com/code/aikhmelnytskyy/public-krni-pdi-with-two-additional-models)" target="_blank">https://www.kaggle.com/code/aikhmelnytskyy/public-krni-pdi-with-two-additional-models)</a>. By delving into the comprehensive notebooks and discussions, I learned and adapted my approach.</p>
<p>What did not work for me:</p>
<ul>
<li>Avoided using Epsilon from greeks due to potential data drift.</li>
<li>De-anonymizing (But it's still a magic method for me who is not good at math, thanks!) and feature derivation.</li>
<li>Feature selecture using target permutation, I found that the features filtered by "gain" and "split" are not in good agreement.</li>
<li>Optuna, postprocessing, and oversampling, they weren't effective for me.</li>
</ul>
<p>What did work for me:</p>
<ul>
<li>Simulate a local verification process through Nested k-folds with StratifiedKFold. Not the best_model but full models of cv_outer.</li>
<li>A diverse ensemble model with probability reweighting.</li>
<li>Incorporating the greeks.Alpha into the training.</li>
</ul>
<p>I'm very lucky this time (contrary to real life…orz). My deepest gratitude goes out to the members of the Kaggle community who generously and selflessly share their knowledge. I also hold immense respect for those inquisitive minds who are never hesitant to ask questions and challenge the status quo. Kindly bear with any oversights or shortcomings!!</p> |
Kaggle - LLM Science Exam | 17th Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kaggle - LLM Science Exam
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi, it's harsh to narrowly miss my first (solo) gold with a big score gap, but it was an intense competition! I don't know if I'll have the courage to try so hard for a solo gold again. 😅 I had shared a summary of my solution yesterday. Now, I've tried to explain it in as much detail and clarity as possible.</p>
<hr>
<p>When dealing with Wikipedia dump data, you realize that you come across a large number of articles. Processing all of them and getting them into a suitable format for a competition is a quite intense process. I also generated all the articles by processing the common dump files we all know. However, I noticed issues with the mathematical formulas and certain Wikipedia LUA templates in publicly available open-source dump extractor outputs.</p>
<h3>STEM Filtering</h3>
<p>So, I decided to design a separate pipeline to extract the articles related to STEM in their most original form. First and foremost, I needed to locate the articles related to STEM.</p>
<ul>
<li>I downloaded category dumps from wikipedia and <strong>reconstructed the category-links graph</strong> from the sql dumps.</li>
<li>Ran breadth-first search on STEM main categories for 4 levels and got ~1.1M STEM articles total from all that category tree.</li>
</ul>
<hr>
<h3>Data</h3>
<p>In my final solution, I used three different original data sources.</p>
<ul>
<li><strong>Wikipedia JSON Dump (F1)</strong>: It includes all articles, but open-source extractors have LUA template compilation errors at extraction, as I said above, which cause parsed texts with missing numbers or garbage in them.</li>
<li><strong>Local MediaWiki Server TextExtract API (F2)</strong>: Using the filtering technique I described above, I collected 1.1 million STEM article titles. I sent API requests to the MediaWiki server I set up on my own computer to ensure that these articles were parsed in formats closest to their originals. This way, LaTeX formulas and all LUA templates were parsed without any issues.</li>
<li><strong>Cohere WikiPedia corpus (F3)</strong></li>
</ul>
<p>I passed all the data I listed above through a Unicode character decode process. Then, I created corpus chunks by shifting 3-sentence windows with a stride of one-sentence.</p>
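<p>A minimal sketch of that sliding-window chunking (the sentence splitter below is a naive placeholder for whatever splitter is actually used):</p>
<pre><code>import re

def chunk_text(text: str, window: int = 3, stride: int = 1):
    # naive sentence split, then overlapping 3-sentence windows with a 1-sentence stride
    sents = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    n = max(len(sents) - window + 1, 1)
    return [" ".join(sents[i:i + window]) for i in range(0, n, stride)]
</code></pre>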
<hr>
<h3>Retrieval</h3>
<p>I noticed that different retrieval methods improved predictive performance. Therefore, I preferred to use three main approaches:</p>
<ul>
<li><strong>FAISS (BAAI/bge-small-en)</strong>: I processed the <strong>F1</strong> data using sentence-embedding models, resulting in an embedding data of approximately ~150GB. I used these without compression during the search phase. I noticed that my retrieval performance decreased when I compressed them.</li>
<li><strong>TFIDF</strong>: Since the competition's question-answer texts are paraphrased versions of the original Wikipedia texts, TFIDF was quite effective in capturing exact word clusters. I used TFIDF for ngrams 1,2 with no stop-words on <strong>F2</strong> and <strong>F3</strong>.</li>
<li><strong>FAISS + TFIDF</strong>: I also reordered F1 embedding retrieval results with TFIDF (called <strong>F4</strong>)</li>
</ul>
<hr>
<h3>Ensemble</h3>
<p>As I mentioned above, I created four different sources of context. I just ran a single debertav3-large model on all contexts and did a max-ensemble on probabilities. Since I used a single model, I noticed that max-ensemble was quite effective. If it found relevant text in any of the four contexts related to the question, it significantly boosted the probability of the corresponding answer. If it couldn't find the relevant context, the probabilities for all options were similar and relatively low.</p>
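<p>A minimal sketch of the max-ensemble over the four context sources (array names are illustrative):</p>
<pre><code>import numpy as np

def max_ensemble(probs_per_context):
    # probs_per_context: list of four (n_questions, 5) arrays, one per context (F1..F4)
    stacked = np.stack(probs_per_context, axis=0)   # (4, n_questions, 5)
    return stacked.max(axis=0)                      # keep the most confident context

# ranked options per question: np.argsort(-max_ensemble(probs_per_context), axis=1)
</code></pre>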
<hr>
<h2>Summary</h2>
<p>I'm also including a summary diagram of my solution below. It may provide a clearer understanding:</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2514988%2F93b60fa0e3d06e32d8cb106b5a848617%2FScreenshot%20from%202023-10-11%2014-07-33.png?generation=1697022470900193&alt=media" alt=""></p> |
Kaggle - LLM Science Exam | 1st Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kaggle - LLM Science Exam
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks for this well executed competition that attracted a lot of interest from the community. Thanks a lot as well to my great teammates <a href="https://www.kaggle.com/ybabakhin" target="_blank">@ybabakhin</a> and <a href="https://www.kaggle.com/philippsinger" target="_blank">@philippsinger</a>. </p>
<p>Check out our tldr summary <a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam/discussion/446240" target="_blank">here</a>!</p>
<h2>Context Retrieval</h2>
<p>We started working on this competition very early and quickly faced a performance cap with the simple classification approach for both deberta and LLM models.</p>
<p>So, the next step was to explore the context retrieval from Wikipedia articles. Our first submission was with the <code>all-MiniLM-L6-v2</code> model. And to our surprise, this first attempt was much better than all our previous non-context models. From that point we were mostly focusing on improving the context retrieval part, while keeping the modeling part almost the same. We tried the majority all models from the top-20 list on MTEB Leaderboard: <a href="https://huggingface.co/spaces/mteb/leaderboard" target="_blank">https://huggingface.co/spaces/mteb/leaderboard</a>. In the end, we had a local choice of about 300 combinations of Retrieval + LLM models + different Wikipedia dumps. Our final submission has contexts from <code>e5-base-v2</code>, <code>e5-large-v2</code>, <code>gte-base</code>, <code>gte-large</code> and <code>bge-large</code> models.</p>
<p>Apart from exploring various models, we also tried different ways to encode both Wikipedia chunks and Prompts+Answers from the competition data. For Wikipedia we were mostly encoding <code>title</code> of the article + article chunk, and for the question the final model has 2 variations:</p>
<ol>
<li>Just a simple concatenation: “{prompt} {A}, {B}, {C}, {D}, {E}”</li>
<li>Search contexts individually for each “{prompt} {A}”, “{prompt} {B}”, etc. option, and sort them by similarity in the end.</li>
</ol>
<p>Our Wikipedia dumps had from 30M to 60M text chunks. However, we’ve implemented a pretty quick and scalable similarity search on GPUs. Firstly, we split the whole 60M database into smaller parts, and load them one after another. Then we calculate a similarity matrix between each part and whole test data (using simple matrix multiplication for torch tensors on GPU). All we have to do now is to store the global Top-k list among all parts. And to make it twice faster we’re doing it on 2 GPUs in parallel.</p>
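<p>A minimal sketch of that sharded GPU top-k search (names and shapes are illustrative, and a single GPU is assumed here):</p>
<pre><code>import torch

@torch.no_grad()
def global_topk(query_emb, corpus_parts, k=5, device="cuda"):
    # query_emb: (n_queries, d) normalized embeddings of the test questions;
    # corpus_parts: iterable of (m_i, d) embedding tensors loaded one at a time.
    q = query_emb.to(device)
    n = q.shape[0]
    best_scores = torch.full((n, k), float("-inf"), device=device)
    best_ids = torch.full((n, k), -1, dtype=torch.long, device=device)
    offset = 0
    for part in corpus_parts:
        p = part.to(device)
        sim = q @ p.T                                     # similarity matrix (n, m_i)
        part_ids = torch.arange(p.shape[0], device=device) + offset
        cand_scores = torch.cat([best_scores, sim], dim=1)
        cand_ids = torch.cat([best_ids, part_ids.unsqueeze(0).expand(n, -1)], dim=1)
        best_scores, idx = cand_scores.topk(k, dim=1)     # keep the running global top-k
        best_ids = torch.gather(cand_ids, 1, idx)
        offset += p.shape[0]
    return best_scores, best_ids
</code></pre>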
<h2>Different Wikipedia dumps</h2>
<p>As most teams, we started out using the <a href="https://www.kaggle.com/datasets/narek1110/wikipedia-22-12-en-embeddings-all-minilm-l6-v2" target="_blank">public wiki</a> as our source of context and embeddings. We invested quite some time to create our own wiki corpus. First, we recreated the above wiki taking a snapshot from a newer date closer to the start of the competition. We added a few extra post processing steps, such as joining very small paragraphs (usually subtitles) with the subsequent paragraph. When investigating errors in the context matches, we noticed that all public wikipedia dump parsers are not properly expanding lua code which is often used on wikipedia to render values in scientific articles. To include these into our context, we decided to use the <a href="https://dumps.wikimedia.org/other/cirrussearch/" target="_blank">cirrussearch dump</a> which is a dump of all (almost) fully rendered wikipedia pages. Unfortunately, this dump doesn’t include newline characters, so we merged the sentences together to different target lengths of either 256, 512 or 1024 characters while not breaking any sentences. In the end we considered all of these wikis (and multiple embeddings for each) for the blend and it helped to increase diversity. As a standalone wikipedia corpus, the cirrussearch wiki with a target length of 512 characters was best for us.</p>
<p>We also tried to filter the wikis for science articles, but never achieved a significant improvement on public leaderboard over using the whole wiki corpus. Apparently, the embedding models are strong enough to not get distracted by irrelevant articles and we decided to play safe and included the full wikis in our final submissions. </p>
<h2>Longer contexts</h2>
<p>We started by using a single most similar wiki chunk for both train and inference. However, we quickly noticed that increasing the context length during the inference might improve the results. The optimal strategy for us was to train models with 3-chunk contexts and run inference with 5-chunks. Increasing it further for training was too slow, and increasing it more for inference was probably adding too much noise to the context. One interesting test that we’ve done was to inverse the order of contexts during the inference which dropped the score quite a bit. So, it might mean that models learnt the position of the “best” context.</p>
<h2>Data and validation strategy</h2>
<p>For training we mostly used only the publicly shared datasets, and adding more data wasn’t helpful for our models. For validation, we used the original 200 samples provided as our validation data since the very beginning. However, after introducing the context retrieval part, we quickly got to 0.99 performance there. For the rest of the competition we were using 6k STEM questions from one of the publicly shared datasets.</p>
<h2>Models</h2>
<p>Our final ensemble is a combination of fine-tuned 7B & 13B LLMs. All models are trained in binary fashion, meaning a single sample for each of the five options and final predicted probabilities are used for ranking. This makes the models independent of the ordering of answers. For training data, we found the following steps to be useful:</p>
<ul>
<li>Pick synthetically generated multi-choice samples. There are many of such samples shared on Kaggle. Interestingly, data quantity was not too important here to improve performance.</li>
<li>For each sample, we generate the context using our RAG embedding approaches. This allows us to generate multiple contexts like in inference and adding noise here is helpful for training. Training on the true context the question was generated on, was a bit worse than training on the generated context.</li>
</ul>
<p>We train all models with LORA on all linear layers. We use a binary classification head after the final next-token logits. We also use a lower learning rate for the head vs. the LORA layers. We did tuning on LORA settings, learning rate, and other minor things, but they were not that impactful as for some other deep learning use cases. All models are trained over a single epoch with typical cosine lr decay and BCE loss, picking the last checkpoint.</p>
<p>We had most success with the following LLM models:</p>
<ul>
<li>Llama-2-7b</li>
<li>Mistral-7B-v0.1</li>
<li>xgen-7b-8k-base</li>
<li>Llama-2-13b</li>
</ul>
<p>The following diagram explains our architecture and inference setup on a high level:</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2675447%2Fe304b1553bae71f790a313f92ea3bc1b%2Fimage.png?generation=1697040115030343&alt=media" alt=""></p>
<ol>
<li>First, we feed the context and the question through the backbone and save the past_key_values. For each question, we only need to do this once as each answer has the same context and question. This is a big benefit of decoder only models for such use-cases and saves a lot of runtime.</li>
<li>We then have another backbone forward on each of the five answers where each one gets the past_key_values of the previous forward as cache input. We can run this in a batch of five now.</li>
<li>We take the final next token prediction of each output which are the logits over the full vocabulary and feed this through a final classification head trained from scratch. This head now learns to map the output of the LLM across the full vocabulary to a single binary prediction whether the answer is correct or not.</li>
<li>The final ordering is then simply determined by sorting the predicted probabilities.</li>
</ol>
<p>One downside of this binary approach is that each sample lacks the cross-information from all the other options which might hold individual signal. We experimented a lot with multi-class model or simply adding the other answers as context to the binary model, but there can be a huge positional bias added by doing so. Hence, we tried to solve this with TTA approaches, using various ordered combinations of the answers, which definitely helped, but was more shaky and could not outcompete the binary model.</p>
<p>We still came up with, what we think is a pretty clever idea, another solution to help the models a bit with learning cross-information across samples not adding a positional bias. Here, the idea is to add the average next token logits of all other options as an additional input to the final classification head. So the final head gets logits for all possible tokens for the answer at hand, and the average of all logits for all possible tokens of all the other four answers. This does not add positional bias and gives extra information that boosted CV and LB and worked well when blended with the simpler approach.</p>
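<p>A hedged sketch of how such a head could be wired up (dimensions, names and the exact way the logits are combined are assumptions on my part):</p>
<pre><code>import torch
import torch.nn as nn

class CausalClsHead(nn.Module):
    # Maps the next-token logits of the answer at hand, concatenated with the
    # mean next-token logits of the other four answers, to one binary score.
    def __init__(self, vocab_size: int):
        super().__init__()
        self.fc = nn.Linear(2 * vocab_size, 1)

    def forward(self, own_logits: torch.Tensor, other_logits_mean: torch.Tensor) -> torch.Tensor:
        # own_logits, other_logits_mean: (batch, vocab_size)
        return self.fc(torch.cat([own_logits, other_logits_mean], dim=-1))
</code></pre>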
<p>As you may have guessed already from our team name, we used a forked <a href="https://github.com/h2oai/h2o-llmstudio" target="_blank">H2O LLM Studio</a> to experiment and to train our models. We have added a fully flexible way of training causal classification models. The <a href="https://github.com/h2oai/h2o-llmstudio/pull/449" target="_blank">merged PR</a> allows to train binary and multiclass classification LLM models using the core techniques used in our solution. You can just add a csv with a binary target and input text and train models.</p>
<h2>Inference solution</h2>
<p>Our final ensemble is a blend of five 7B models and one 13B model. Each model uses a different context approach using different wikis, embeddings and topk. The pipeline fits precisely into 9-hour runtime and utilizes 2.5TB of input data.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2675447%2F043168b3c4ac871415b03d8cfcd117be%2FBildschirmfoto%20vom%202023-10-11%2021-22-51.png?generation=1697052212889820&alt=media" alt=""></p>
<p>We have shared our inference code now:</p>
<p>A winning kernel with 0.933 Private: <a href="https://www.kaggle.com/code/ybabakhin/1st-place-team-h2o-llm-studio" target="_blank">https://www.kaggle.com/code/ybabakhin/1st-place-team-h2o-llm-studio</a><br>
A single model with 0.932 Private: <a href="https://www.kaggle.com/code/ybabakhin/1st-place-single-model-inference" target="_blank">https://www.kaggle.com/code/ybabakhin/1st-place-single-model-inference</a></p> |
Kaggle - LLM Science Exam | 191st place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kaggle - LLM Science Exam
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to Kaggle for hosting this fun competition, and to all those who shared throughout! I learned a ton about LLMs, RAG, and inference optimization. </p>
<h2>RAG part</h2>
<p>The Wikipedia dataset shared by <a href="https://www.kaggle.com/jjinho" target="_blank">JJ</a> was great, but it was missing some articles and sometimes key information needed to answer questions. (discussion <a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam/discussion/434913" target="_blank">here</a>). This can also be confirmed by looking for the answer to the <code>Modified Newtonian Dynamics</code> question provided by the hosts, as the article containing the answer (4th paragraph <a href="https://en.wikipedia.org/wiki/Modified_Newtonian_dynamics" target="_blank">here</a>) is not found in the dataset.</p>
<p>To find more article IDs, I combined JJs and <a href="https://huggingface.co/datasets/Cohere/wikipedia-22-12" target="_blank">Cohere’s Wikipedia data</a>. Then, I used the MediaWiki API to collect the text for all the articles as suggested by Steubk <a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam/discussion/434913#2411374" target="_blank">here</a>. Finally, I split each article into subsections by article headings. After some light filtering, this gave me roughly 22 million texts. In hindsight, even the MediaWiki API missed pieces of the text, and I should have combined multiple wiki data sources.</p>
<p>For retrieval, I used the <a href="https://huggingface.co/intfloat/e5-base-v2" target="_blank">intfloat/e5-base-v2</a> model and the Faiss library. I split the Faiss index into 16 shards to be searchable with <13 GB of RAM. Finally, I split the selected texts based on <code>\n</code> characters and performed a final search to get context for each question.</p>
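<p>A sketch of searching the sharded Faiss index one shard at a time and merging the results (paths, the index type and offsets are illustrative; an inner-product index over normalized e5 embeddings is assumed):</p>
<pre><code>import faiss
import numpy as np

def search_shards(query_emb: np.ndarray, shard_paths, k=5):
    all_scores, all_ids, offset = [], [], 0
    for path in shard_paths:
        index = faiss.read_index(path)          # load one shard to stay within RAM
        scores, ids = index.search(query_emb, k)
        all_scores.append(scores)
        all_ids.append(ids + offset)
        offset += index.ntotal
    scores = np.concatenate(all_scores, axis=1)
    ids = np.concatenate(all_ids, axis=1)
    order = np.argsort(-scores, axis=1)[:, :k]  # higher inner product = more similar
    return np.take_along_axis(scores, order, 1), np.take_along_axis(ids, order, 1)
</code></pre>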
<h2>Inference Part</h2>
<p>The model I used for inference was an <code>AutoModelForMultipleChoice</code> with a <code>deepset/deberta-v3-large-squad2</code> backbone. I enhanced the class to include other answer options as part of the context. This allowed me to perform TTA by shuffling answer options. See an example of this below.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5570735%2Fdf6897f36282af45f7a18cf841c6cd38%2Fwp2.png?generation=1696982588123448&alt=media" alt="TTA image"></p>
<p>All my experiments were done with the <code>microsoft/deberta-v3-base</code> model, and I scaled up to the deberta-v3-large model variant for my final submission.</p>
<h2>Training Data</h2>
<p>I used similar question-answer sources to <a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam/discussion/436383" target="_blank">Chris’s 60k Dataset</a>, but added <a href="https://www.kaggle.com/erijoel" target="_blank">erijoel’s</a> llmse-few-shot-prompting-1k data. (dataset <a href="https://www.kaggle.com/datasets/erijoel/llmse-few-shot-prompting-1k" target="_blank">here</a>). I also re-generated all question contexts using the e5-base-v2 retrieval model.</p>
<h2>Improvements</h2>
<table>
<thead>
<tr>
<th>Feature</th>
<th>CV</th>
</tr>
</thead>
<tbody>
<tr>
<td>RAG</td>
<td>+0.1</td>
</tr>
<tr>
<td>Add more article IDs</td>
<td>+0.03</td>
</tr>
<tr>
<td>Bigger retriever (intfloat/e5-base-v2)</td>
<td>+0.01</td>
</tr>
<tr>
<td>Other answers in context</td>
<td>+0.01</td>
</tr>
<tr>
<td>TTA</td>
<td>+0.005</td>
</tr>
<tr>
<td>deberta-v3-base -> deberta-v3-large</td>
<td>+0.01</td>
</tr>
<tr>
<td>1024 max tokens (training)</td>
<td>+0.005</td>
</tr>
<tr>
<td>1536 max tokens (inference)</td>
<td>+0.005</td>
</tr>
</tbody>
</table>
<h2>Frameworks</h2>
<p>Models: <a href="https://huggingface.co/models" target="_blank">Huggingface</a><br>
Retrieval: <a href="https://github.com/facebookresearch/faiss" target="_blank">Faiss</a>, <a href="https://www.sbert.net/" target="_blank">SentenceTransformers</a><br>
Logging: <a href="https://wandb.ai/" target="_blank">WandB</a></p> |
Kaggle - LLM Science Exam | 20th Place (Top 1%, Solo Silver) Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kaggle - LLM Science Exam
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First and foremost, I would like to express our gratitude to the hosts and the Kaggle team for arranging this wonderful competition. Also, a huge shout-out to all the participants for fighting through this intense competition. It was an extremely educational and exciting competition.</p>
<p>As my solution is quite similar to the top teams' and I'm a little busy these days, I didn't work on the writeup until I was mentioned by <a href="https://www.kaggle.com/kaggleqrdl" target="_blank">@kaggleqrdl</a> in this <a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam/discussion/447396#2486398" target="_blank">discussion</a>. It's time to finish this writeup. Thanks <a href="https://www.kaggle.com/kaggleqrdl" target="_blank">@kaggleqrdl</a> for remembering me in this competition:)</p>
<p>My final solution is quite simple, 4 RAG + 1 deberta-v3-large ensembling:</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3563032%2Fedcd41b7d3935ee3aa69148e88ea9cf2%2Fllm-sci-writeup.jpg?generation=1697638240812008&alt=media" alt="llm-solution"></p>
<p>The score can be improved to 0.917 by changing the mean ensembling to max + mean ensembling, as sketched below. Refer to my submission <a href="https://www.kaggle.com/code/wuwenmin/llm-science-ensemble-exp-max/notebook" target="_blank">code</a>.</p>
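<p>A minimal sketch of max + mean ensembling over per-model option probabilities; the array names and shapes are assumptions, not the author's code.</p>
<pre><code>import numpy as np

# model_preds: list of arrays, one per model, each of shape (n_questions, 5 options)
stacked = np.stack(model_preds, axis=0)
combined = stacked.mean(axis=0) + stacked.max(axis=0)

# take the 3 highest-scoring options per question for MAP@3
top3 = np.argsort(-combined, axis=1)[:, :3]
</code></pre>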
<p><strong>Things that didn't work for me</strong></p>
<ul>
<li>Pre-finetune on RACE<br>
Near the end of the competition, I tried to pre-finetune the model on RACE and then fine-tune on the 60k data with a much smaller LR, but it didn't improve the CV & LB.</li>
</ul>
<p><strong>Special thanks to</strong> </p>
<ul>
<li><a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>, <a href="https://www.kaggle.com/mbanaei" target="_blank">@mbanaei</a>, and <a href="https://www.kaggle.com/yalickj" target="_blank">@yalickj</a> for providing us the valuable datasets.</li>
<li><a href="https://www.kaggle.com/ksmcg90" target="_blank">@ksmcg90</a> for providing a faster data loading method using pyarrow (refer to his <a href="https://www.kaggle.com/code/ksmcg90/faster-context-extraction-pyarrow-fixed" target="_blank">notebook</a>)</li>
<li><a href="https://www.kaggle.com/itsuki9180" target="_blank">@itsuki9180</a> for providing a solution to training using deepspeed (refer to his <a href="https://www.kaggle.com/code/itsuki9180/using-deepspeed-with-hf-trainer" target="_blank">notebook</a>). I trained the deberta-v3-large on 8xRTX4090, the speedup was amazing:)</li>
<li><a href="https://www.kaggle.com/datafan07" target="_blank">@datafan07</a> for providing a way to finetune LLMs for sequence classification using RewardTrainer (refer to this <a href="https://www.kaggle.com/code/datafan07/single-model-rewardtrainer-lora-llm/notebook" target="_blank">notebook</a>)</li>
<li>and all the teams for sharing their valuable solutions for this task. I need to spend one more weekend to learn all the techniques:)</li>
</ul> |
Kaggle - LLM Science Exam | 10th(11th) Place Solution: This is my first solo gold medal !!! | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kaggle - LLM Science Exam
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2522289%2F2b8138bddf09cb3168d9e32079b02e84%2Fkaggle-llm.drawio.png?generation=1697103353661989&alt=media" alt=""></p>
<p>It was the help of the Kaggle community that enabled me to get this far on my own. I would like to thank all the participants who made their data, code and ideas available to the public and the competition hosts.</p>
<p><a href="https://www.kaggle.com/code/kami634/11th-solution/notebook" target="_blank">final submission code</a></p>
<h2>Overview</h2>
<ul>
<li>multiple context retrieval from 2+2 different wikipedia data</li>
<li>multiple choice by single DeBERTa model</li>
<li>max probability ensemble</li>
<li>tf-idf post process</li>
</ul>
<h2>Retrieval</h2>
<p>There are four main types of Wiki data</p>
<ul>
<li>Wikipedia database backup dump (extracted with <a href="https://github.com/attardi/wikiextractor" target="_blank">wikiextractor</a>)<ul>
<li>Cons<ul>
<li>All articles are there, but bullet lists and some sections are missing.</li>
<li>Some formulas are not extracted, e.g. "formula_1", "formula_13".</li></ul></li></ul></li>
<li>Wikipedia Cirrus dumps (extracted with wikiextractor)<ul>
<li>Con<ul>
<li>Cannot split into sections</li></ul></li></ul></li>
<li><a href="https://www.kaggle.com/datasets/mbanaei/stem-wiki-cohere-no-emb" target="_blank">270k Data Cohere</a></li>
<li><a href="https://www.kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded" target="_blank">270k Data parsed</a></li>
</ul>
<h3>About dumps data</h3>
<ul>
<li>Split and embed articles 200~250 characters at a time, quantise the index with ivfpq in faiss to get it into memory and search for articles</li>
<li>After retrieving 10 articles, split the text with sliding window, vectorise it and search with faiss's indexflat to create a context<ul>
<li>The answer should be contained within a few sentences</li>
<li>A single sentence is too short.</li>
<li>However, if you split the text into several sentences, it may be split at the centre of the sentence that is the answer → split with sliding window</li></ul></li>
</ul>
<h3>About 270K data</h3>
<p>The following two types of retrieval are used</p>
<ul>
<li>tf-idf</li>
<li>Search with faiss indexflat as embedding</li>
</ul>
<p>Since embedding of only questions can be computed during inference, even relatively large models (e.g. BAAI/bge-large-en-v1.5) can be used, but this did not make much difference.</p>
<h2>Model</h2>
<ul>
<li>Same DeBERTa large single model used for all inference except post-processing</li>
<li>Inference used a context twice as long as the training one</li>
</ul>
<h2>Training data (especially Exam data)</h2>
<ul>
<li>60K data (data generated by GPT-3.5 based on wikipedia + α)</li>
</ul>
<h3>Validation Strategy</h3>
<ul>
<li>Hold out</li>
<li>200 train.csv + 1000 additional data (science exam from wikipedia made by GPT-3.5)</li>
</ul>
<p>No high correlation, but informative. However, the last one was not very helpful as it was a mixture of questions with wrong answers or more than one correct answer.</p>
<h2>Ensemble</h2>
<ul>
<li>Take the max of each prediction probability, and try to predict from largest to smallest (see the sketch after this list)<ul>
<li>If there is an answer in the context, it should be high confidence</li>
<li>If there is no answer in the context, it should be low confidence</li></ul></li>
<li>Robust, less likely to overfit, but better performance than simple blending.</li>
</ul>
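<p>One way to read the max ensembling, sketched with assumed names: pool the option probabilities from all models by taking the per-option maximum, then rank the options from the most to the least confident.</p>
<pre><code>import numpy as np

# preds: array of shape (n_models, n_questions, 5) with per-option probabilities
pooled = preds.max(axis=0)                  # the most confident model wins per option
top3 = np.argsort(-pooled, axis=1)[:, :3]   # predict from largest to smallest confidence
answers = [" ".join("ABCDE"[i] for i in row) for row in top3]
</code></pre>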
<h2>Post Process</h2>
<ul>
<li>Modification of predictions by tf-idf with (4,7)-gram for specific conditions.<ul>
<li>Post-processing to correct what is obviously wrong</li></ul></li>
</ul> |
Kaggle - LLM Science Exam | 13th(14th) place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kaggle - LLM Science Exam
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, thanks to competition organizers for hosting this competition. <br>
I would also like to thank <a href="https://www.kaggle.com/radek1" target="_blank">@radek1</a>, <a href="https://www.kaggle.com/leonidkulyk" target="_blank">@leonidkulyk</a>, <a href="https://www.kaggle.com/mozattt" target="_blank">@mozattt</a>, <a href="https://www.kaggle.com/nlztrk" target="_blank">@nlztrk</a>, <a href="https://www.kaggle.com/mgoksu" target="_blank">@mgoksu</a> for sharing useful datasets, <a href="https://www.kaggle.com/jjinho" target="_blank">@jjinho</a> for sharing the retrieval approach, and <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> for summarizing these datasets and approaches in <a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam/discussion/436383" target="_blank">this discussion</a>. Without your work, we could not have achieved these results. I learned a lot from you!</p>
<p>I started from <a href="https://www.kaggle.com/cderotte" target="_blank">@cderotte</a>'s <a href="https://www.kaggle.com/code/cdeotte/how-to-train-open-book-model-part-1" target="_blank">notebook</a>. I made four changes from it: <b>dataset, retrieval, models, ensemble</b>.</p>
<h1>1. dataset</h1>
<p>Because some of the data (mainly numbers?) were missing, as discussed <a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam/discussion/434913" target="_blank">here</a>, I changed the wiki dump from <a href="https://www.kaggle.com/datasets/jjinho/wikipedia-20230701" target="_blank">https://www.kaggle.com/datasets/jjinho/wikipedia-20230701</a> to the Cirrussearch wiki dump.</p>
<h1>2. retrieval</h1>
<p>I tried to improve search accuracy.</p>
<h2>2-1. dataset for evaluation</h2>
<p>I created 250 questions with ChatGPT to evaluate retrieval. To run a lot of experiments, only data starting with "a" was used in the evaluation, not the entire Wikipedia data. Finally, I achieved <code>0.94@recall1</code> and <code>1.0@recall30</code> on this dataset (questions from the whole wiki dataset: about <code>0.85@recall1</code>).</p>
<h2>2-2. text processing</h2>
<p>For more accurate retrieval, I split the Wikipedia text into overlapping chunks. Specifically, the text is read from the beginning, and once a chunk reaches 90 or more words it is saved; the last three sentences are kept as overlap to continue reading the text. </p>
<p>The pseudo code is as follows:</p>
<pre><code>def count_words(text_processed):
    return len(text_processed.split())

def leave_last_sentence(text, n):
    ret = text.split(".")[-n:]
    return ".".join(ret)

texts = load_wiki()
texts_processed = []
text_processed = ""
length = 90   # save a chunk once it exceeds 90 words
window = 3    # keep the last 3 sentences as overlap
for text in texts:
    text_split_period = text.split(".")
    for sentence in text_split_period:
        text_processed += sentence + "."
        if count_words(text_processed) > length:
            texts_processed.append(text_processed)
            text_processed = leave_last_sentence(text_processed, window)
</code></pre>
<p>I tried <code>(length, window) = (60, 2), (75, 2), (90, 2), (90, 3), (90, 4), (120, 4), (150, 6)</code> and chose the one with the best LB score.</p>
<h2>2-3. faiss</h2>
<p>I used these parameters: <code>{"nlists": 1, "M": 64, "nbits": 8}</code></p>
<h2>2-4. retrieval model</h2>
<p>I used <code>gte-base</code> and <code>e5-base</code>. I tried {<code>gte</code>, <code>bge</code>, <code>e5</code>}_{<code>small</code>, <code>base</code>, <code>large</code>} and chose the best. (In the beginning, I selected the model by looking at the search accuracy on the only-"a" subset. However, I realized that this retrieval accuracy was not correlated with LB, so in the end I checked it by looking at the LB 😭.)</p>
<h1>3. model</h1>
<p>Almost the same as the baseline notebook. <br>
I set <code>max_length</code> to 256 for training and 768 for inference. I wanted to train with <code>max_length</code> 768, but I did not have enough memory. 256 was the best when I trained models with <code>max_length</code> 256, 384, and 512 and then predicted with <code>max_length</code> 768.</p>
<p>training parameter is below:</p>
<pre><code>training_args = TrainingArguments(
warmup_ratio=,
learning_rate=,
per_device_train_batch_size=,
per_device_eval_batch_size=,
num_train_epochs=config.epochs,
report_to=,
optim=,
overwrite_output_dir=,
fp16=,
gradient_accumulation_steps=,
load_best_model_at_end=,
metric_for_best_model=,
lr_scheduler_type=,
weight_decay=,
save_total_limit=,
)
</code></pre>
<p>I ensembled 4 models, combinations of the following elements:</p>
<ul>
<li><p>models</p>
<ul>
<li><code>OpenAssistant/reward-model-deberta-v3-large-v2</code></li>
<li><code>deepset/deberta-v3-large-squad2</code></li>
<li><code>microsoft/deberta-v3-large</code></li></ul></li>
<li><p>retrieval models:</p>
<ul>
<li><code>gte-base</code></li>
<li><code>e5-base</code></li></ul></li>
<li><p>dataset:</p>
<ul>
<li>all</li>
<li>without 3, 7, 8, 9 : worse score compared with the other datasets.</li>
<li>without 10 : better score compared with the other datasets.</li></ul>
<p>The numbers refer to the source ids in <a href="https://www.kaggle.com/datasets/cdeotte/60k-data-with-context-v2" target="_blank">this dataset</a>.</p></li>
</ul>
<h1>4. ensemble</h1>
<h2>4-1. TTA</h2>
<p>I also used lower-ranking retrieval results for inference.<br>
I used the 4 TTA sets below:</p>
<ul>
<li><code>[ 0, 1, 2, 3, 4, 5]</code></li>
<li><code>[ 0, 6, 7, 8, 9, 10]</code></li>
<li><code>[ 0, 11, 12, 13, 14, 15]</code></li>
<li><code>[ 0, 16, 17, 18, 19, 20]</code></li>
</ul>
<p>The numbers in each array are the retrieval ranks of the contexts used for the question.</p>
<h2>4-2. ensemble</h2>
<p>After TTA, instead of simply averaging the scores, I took the sum of the maximum and the average, like below:</p>
<pre><code>df = pd.read_csv(pred_path)                  # pred_path: prediction csv (name assumed)
df["qid"] = np.arange(len(df))               # question index (column name assumed)
df = ensemble(df)                            # produces one row per (question, TTA) pair
df = df.groupby("qid").mean() + df.groupby("qid").max()
</code></pre>
<h1>5. What didn't work / what I couldn't get results from</h1>
<ul>
<li>Creating questions with ChatGPT.<ul>
<li>I tried many prompts, but the score decreased.</li></ul></li>
<li>Creating a CV set correlated with LB.<ul>
<li>I tried 300 rows of <a href="https://www.kaggle.com/cderotte" target="_blank">@cderotte</a>'s dataset, but finally used <a href="https://www.kaggle.com/wuwenmin" target="_blank">@wuwenmin</a>'s 300 + test 200. But the CV was not correlated with LB.</li></ul></li>
<li>More complex TTA.</li>
<li>Improving model accuracy.<ul>
<li>It was almost the same accuracy as <a href="https://www.kaggle.com/cderotte" target="_blank">@cderotte</a>'s notebook.</li>
<li>I thought that improving the model was not the key point of this competition, so I focused on retrieval.</li></ul></li>
</ul> |
Kaggle - LLM Science Exam | 82th Place Solution for the Kaggle - LLM Science Exam Competition | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kaggle - LLM Science Exam
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, I'd like to thank the organizers and congratulations to all the winners!<br>
Thanks to everyone who shared code, datasets, and ideas. Especially to <a href="https://www.kaggle.com/jjinho" target="_blank">@jjinho</a> who shared an open book approach. </p>
<h1>Context</h1>
<ul>
<li>Business context: <a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam/overview" target="_blank">https://www.kaggle.com/competitions/kaggle-llm-science-exam/overview</a></li>
<li>Data context: <a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam/data" target="_blank">https://www.kaggle.com/competitions/kaggle-llm-science-exam/data</a></li>
</ul>
<h1>Overview of the Approach</h1>
<ul>
<li>Self crafted wikipedia dataset (see Wikipedia data section)</li>
<li>Similarity search using bge-base/bge-large embeddings</li>
<li>Deberta ensemble</li>
</ul>
<h1>Wikipedia data</h1>
<p>I found that publicly available wiki dumps have missing text and values due to the removal of html tags/lua macros. I tried many different processing repos and tried to implement my own approach, but it was a waste of time. So my final solution was:</p>
<ul>
<li>Run private mediawiki service</li>
<li>Install ~10 necessary wiki plugins (e.g. math formula renderer)</li>
<li>Tweak mysql/apache/memcached performance</li>
<li>Upload ~6.5M articles via wiki API</li>
<li>Download rendered articles via wiki API</li>
</ul>
<p>Some insights:</p>
<ul>
<li>Final mysql DB size ~600GB (most likely due to different kinds of indexes)</li>
<li>If a page contains many macros it may take up to 2 mins to render it the first time. Once it's rendered, I suppose it's saved in cache and the next request renders it almost immediately</li>
<li>I achieved a speed of ~2k articles/min using a 64C/256GB instance. So it took ~3 days to upload articles and another ~3 days to download</li>
</ul>
<p>Result:<br>
<a href="https://postimg.cc/sBNrRKwQ" target="_blank"><img src="https://i.postimg.cc/G2chG5DQ/Screen-Shot-2023-10-12-at-00-34-01.png" alt="Screen-Shot-2023-10-12-at-00-34-01.png"></a></p>
<h1>Embeds</h1>
<p>~6.5M articles contain ~22M sections. It takes time to calculate embeddings. I tried a few models:</p>
<table>
<thead>
<tr>
<th>model</th>
<th>map@100 on train</th>
</tr>
</thead>
<tbody>
<tr>
<td>all-MiniLM-L6-v2</td>
<td>0.670</td>
</tr>
<tr>
<td>BAAI/bge-base-en</td>
<td>0.706</td>
</tr>
<tr>
<td>BAAI/bge-large-en</td>
<td>0.727</td>
</tr>
</tbody>
</table>
<p>The BAAI/bge-large-en embeddings for the 22M sections take ~50GB in fp16. </p>
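<p>A minimal sketch of computing and storing the section embeddings in fp16, assuming a list <code>sections</code> of texts; the batch size and file name are illustrative.</p>
<pre><code>import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("BAAI/bge-large-en", device="cuda")
emb = model.encode(sections, batch_size=256, normalize_embeddings=True,
                   show_progress_bar=True)
# 22M sections x 1024 dims x 2 bytes is roughly 45-50 GB on disk
np.save("section_embeddings_fp16.npy", emb.astype(np.float16))
</code></pre>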
<h1>Pipeline</h1>
<p><a href="https://postimg.cc/7bWcKfb9" target="_blank"><img src="https://i.postimg.cc/ZKbSKNSG/Screen-Shot-2023-10-12-at-01-12-25.png" alt="Screen-Shot-2023-10-12-at-01-12-25.png"></a></p>
<h1>What would I do differently?</h1>
<ul>
<li>Invest more time on CV dataset</li>
<li>Models diversity</li>
<li>Start ensembling earlier</li>
<li>Try different augmentations</li>
</ul>
<h1>What didn't work</h1>
<ul>
<li>Longformer model (0.864/0.851)</li>
<li>MLM deberta fine-tuning on wikipedia texts </li>
</ul>
<h1>Optimization</h1>
<ul>
<li>inference 2xT4 deberta fp16</li>
<li>similarity search 2xT4</li>
<li>store embeds in fp16</li>
<li>SentenceTransformers fp16 inference showed worse performance</li>
</ul>
<h1>Sources</h1>
<ul>
<li><a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam/discussion/434913" target="_blank">https://www.kaggle.com/competitions/kaggle-llm-science-exam/discussion/434913</a></li>
<li><a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam/discussion/436383" target="_blank">https://www.kaggle.com/competitions/kaggle-llm-science-exam/discussion/436383</a></li>
</ul> |
Kaggle - LLM Science Exam | 50th Place Solution for the Kaggle - LLM Science Exam Competition | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kaggle - LLM Science Exam
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>That was a long road for all of us! Without great insights and sharings by <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>, <a href="https://www.kaggle.com/mbanaei" target="_blank">@mbanaei</a>, <a href="https://www.kaggle.com/simjeg" target="_blank">@simjeg</a>, <a href="https://www.kaggle.com/mgoksu" target="_blank">@mgoksu</a>, <a href="https://www.kaggle.com/jjinho" target="_blank">@jjinho</a> and others we would not have been able to achieve this result. Kaggle is a great place with great people indeed.</p>
<p><strong>Context</strong></p>
<ul>
<li>Business context: <a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam/overview" target="_blank">https://www.kaggle.com/competitions/kaggle-llm-science-exam/overview</a></li>
<li>Data context: <a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam/data" target="_blank">https://www.kaggle.com/competitions/kaggle-llm-science-exam/data</a></li>
</ul>
<p><strong>Overview of the Approach</strong><br>
Before the wonderful retrieval algorithm by <a href="https://www.kaggle.com/mbanaei" target="_blank">@mbanaei</a> was published, our main strategy was to find the best sentence transformer. We tried a lot of options and the best boost was given by <strong>gte-small</strong> (+0.02 average LB boost in comparison with the other STs, in particular all-minilm-v6 and all-minilm-v12 that were used publicly). We also fine-tuned a few deberta-v3-large models (different lrs, epochs, datasets, etc.) and used them in a weighted ensemble.</p>
<p>After the new way of retrieval was published, we had to reconsider our strategy — new ST no longer gave us an advantage and LB got so tight. We blended 2 ways of retrieval and totally used 8 different models:</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F13977249%2F883edae001ad95e0ff795a2d83794505%2Fsolution.jpg?generation=1696984676432568&alt=media" alt=""></p>
<p><strong>Details of the submission</strong><br>
As always, it is very hard to pick the best private submission. It turns out that our best submission, with <code>additional stop words</code> and other models, scores 0.912 — +20 places on the private LB from our final position!</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F13977249%2F0ab2e1bf43fb90c245f5e8f0edbac4a9%2Fsol2.jpg?generation=1696984882419435&alt=media" alt=""></p>
<p><strong>Conclusion</strong><br>
As we can see, deberta-v3-large with a 304M-parameter backbone (and we froze the first 18 layers!), used in an ensemble together with great retrieval algorithms, can easily score 0.91+ on an exam written by chatgpt3.5 with 175B parameters! That's insane!</p>
<p><strong>Sources</strong></p>
<ul>
<li><a href="https://www.kaggle.com/code/cdeotte/how-to-train-open-book-model-part-1" target="_blank">https://www.kaggle.com/code/cdeotte/how-to-train-open-book-model-part-1</a> — great starter notebook to train your own LLM model</li>
<li><a href="https://www.kaggle.com/code/simjeg/platypus2-70b-with-wikipedia-rag" target="_blank">https://www.kaggle.com/code/simjeg/platypus2-70b-with-wikipedia-rag</a> — who could have possibly known we can run 70B model on Kaggle…</li>
<li><a href="https://www.kaggle.com/code/mbanaei/86-2-with-only-270k-articles" target="_blank">https://www.kaggle.com/code/mbanaei/86-2-with-only-270k-articles</a> — game changing retrieval approach</li>
<li><a href="https://www.kaggle.com/datasets/jjinho/wikipedia-20230701" target="_blank">https://www.kaggle.com/datasets/jjinho/wikipedia-20230701</a> — we used this dataset for training</li>
</ul>
<p>Thanks Kaggle for giving us the opportunity to compete and learn! Thanks all the participants who share their thoughts and findings!</p> |
CommonLit - Evaluate Student Summaries | 16th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit - Evaluate Student Summaries
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks, Kaggle and <a href="https://www.kaggle.com/cookiecutters" target="_blank">@cookiecutters</a> for this interesting competition. Congratulations to all winners, and gold medalists. I would like to give credit to my talented and hard-working teammates <a href="https://www.kaggle.com/conjuring92" target="_blank">@conjuring92</a> and <a href="https://www.kaggle.com/syhens" target="_blank">@syhens</a>.</p>
<p>Kudos to <a href="https://www.kaggle.com/tsunotsuno" target="_blank">@tsunotsuno</a> for their fantastic work on LGBM trick, and all participants of this competition for their sharing and hard work.</p>
<h1>First words:</h1>
<ul>
<li>We think this competition is a perfect example of out-of-distribution forecasting, therefore, we don’t fully trust either CV or LB. Instead, we judge the models by our sense of training stability, partly CV, partly LB, and diversity;</li>
<li>Therefore, we try to think of as many model architectures, and model training techniques as possible;</li>
</ul>
<h1>Model 1 - Transformer + Catboost</h1>
<h3>Transformer model</h3>
<ul>
<li>CV/LB/PB: 0.552/0.488/0.556</li>
<li>Input: text + [SEP] + prompt_question</li>
<li>Loss: MCRMSE (see the sketch after this list)</li>
<li>Backbone: deberta-v3-large</li>
<li>Hyperparams:<ul>
<li>max_length: 256</li>
<li>pooling: attention</li>
<li>random-reinit: 1</li>
<li>differential learning rate factor: 2.6</li>
<li>lr: 1e-5</li></ul></li>
<li>epochs: 4</li>
<li>evaluate at the end of each epoch</li>
<li>add additional 2x full-fit checkpoints, it helps to gain 0.003 in PB</li>
</ul>
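<p>The MCRMSE loss mentioned above can be sketched in PyTorch as the mean of the column-wise RMSEs over the two targets (content and wording); variable names are mine.</p>
<pre><code>import torch
import torch.nn as nn

class MCRMSELoss(nn.Module):
    def forward(self, preds, targets):
        # preds, targets: tensors of shape (batch, 2)
        mse_per_target = torch.mean((preds - targets) ** 2, dim=0)
        return torch.mean(torch.sqrt(mse_per_target + 1e-9))
</code></pre>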
<h3>GBDT - Catboost</h3>
<ul>
<li>CV/LB/PB: 0.513/0.443/0.484</li>
<li>Hyperparams:<ul>
<li>learning_rate: 0.05</li>
<li>Cat_features = [“summary_n_sentence”]</li></ul></li>
<li>Features:<ul>
<li>summary_length, summary_n_sentence, mean_edit_distance of summary sentences, …</li>
<li>remove many features like spell errors, POS tag features, …</li>
<li>removing prompt_length improves CV a bit, but no help in LB/PB</li></ul></li>
<li>Total inference time: 32mins</li>
</ul>
<h1>Model 2 - Span Model</h1>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2176524%2F6170216dc252bbe8f8ef098e626a3b4b%2FSpanModel.png?generation=1697201774978961&alt=media" alt=""></p>
<ul>
<li>Reformulated the problem as a span-regression task by randomly grouping N summaries associated with a particular prompt and predicting their scores together </li>
<li>Predictions were conditioned on a context that consisted of the top 8 sentences from the prompt text as ranked by their importance. </li>
<li>The importance value of a sentence from the prompt text was determined as the average cosine similarity score between the sentence and all summaries for the prompt (see the sketch after this list)</li>
<li>Used MSE Loss on the context and wording targets</li>
<li>Discretized the content and wording scores into 16 bins to create auxiliary targets and used ArcFace loss on aux targets</li>
<li>Used ranking loss among summaries in each example span</li>
<li>During inference, the random grouping of N summaries provided a TTA-like impact</li>
<li>Private LB by using 2x full-fit checkpoints: 0.473</li>
</ul>
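<p>The sentence-importance ranking described above might look like the sketch below, where each prompt sentence is scored by its average cosine similarity to all summaries for that prompt; the encoder choice and names are assumptions.</p>
<pre><code>import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")   # encoder choice is an assumption

def rank_prompt_sentences(prompt_sentences, summaries, top_k=8):
    sent_emb = encoder.encode(prompt_sentences, normalize_embeddings=True)
    summ_emb = encoder.encode(summaries, normalize_embeddings=True)
    # with normalized embeddings, the dot product equals cosine similarity
    importance = (sent_emb @ summ_emb.T).mean(axis=1)
    order = np.argsort(-importance)[:top_k]
    return [prompt_sentences[i] for i in order]
</code></pre>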
<h1>Model 3 - Search Approach</h1>
<ul>
<li>Input text is comprised of a context and the student summary</li>
<li>Context:<ul>
<li>Retrieved the top 3 sentences from the prompt text based on cosine similarity scores with the summary. </li>
<li>Concatenated prompt title and prompt question </li></ul></li>
<li>Trained the model using BCEWithLogitsLoss (see the sketch after this list)<ul>
<li>Scaled the scores between 0-1 </li>
<li>Used BCEWithLogitsLoss to compute the loss between scaled scores and logits</li></ul></li>
<li>Private LB by using 2x full-fit checkpoints: 0.476</li>
</ul>
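<p>A minimal sketch of the scaled-target BCE training described above, assuming <code>train_targets</code> is an (n, 2) tensor of raw scores and <code>logits</code> are the raw model outputs; using the train-set min/max for scaling is an assumption.</p>
<pre><code>import torch
import torch.nn as nn

# min-max scale the two targets to [0, 1]
y_min = train_targets.min(dim=0).values
y_max = train_targets.max(dim=0).values
scaled_targets = (train_targets - y_min) / (y_max - y_min)

criterion = nn.BCEWithLogitsLoss()
loss = criterion(logits, scaled_targets)    # logits: raw model outputs of shape (n, 2)

# at inference time, invert the scaling after a sigmoid
preds = torch.sigmoid(logits) * (y_max - y_min) + y_min
</code></pre>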
<h1>Model 4 - Long Context</h1>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2176524%2F9184846a10fbdd62ab5d384d294d81d4%2FLongContext.png?generation=1697201712777096&alt=media" alt=""></p>
<ul>
<li>Remarks:<ul>
<li>max_len: 1024</li>
<li>lr: 2e-5</li>
<li>weight_decay: 1e-2</li>
<li>n_epochs: 3</li>
<li>AWP and EMA after 1 epoch</li>
<li>evaluate 10 times/epoch</li>
<li>Inference time: 205 mins (4 folds)</li>
<li>CV/Public LB/Private LB: 0.499/0.438/0.462</li>
<li>Backbone: OpenAssistant/reward-model-deberta-v3-large-v2</li>
<li>We have another model with this structure and its CV/Public LB/Private LB: 0.494/0.448/0.459. Unfortunately, we didn’t include this model in the ensemble</li></ul></li>
<li>Problems:<ul>
<li>Inference time is too long due to the long context → not good for ensemble</li></ul></li>
<li>Code:<ul>
<li>Training: <a href="https://www.kaggle.com/datasets/shinomoriaoshi/commonlitsummaryv5e" target="_blank">https://www.kaggle.com/datasets/shinomoriaoshi/commonlitsummaryv5e</a></li>
<li>Inference: <a href="https://www.kaggle.com/code/shinomoriaoshi/commonlitsummary-v5e-infer" target="_blank">https://www.kaggle.com/code/shinomoriaoshi/commonlitsummary-v5e-infer</a> </li></ul></li>
</ul>
<h1>Model 5 - Dual Encoder with Transfer Learning</h1>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2176524%2Fde86838dc52ff852ab461f461ab2a4b7%2FDualEncoder.png?generation=1697201727858488&alt=media" alt=""></p>
<ul>
<li>Model Ideas:<ul>
<li>Instead of concatenating the text and the prompt info (prompt text, question, title) before passing through the backbone, we can concatenate them after the transformer. One benefit is that many summaries share the same prompt, so its embedding can be computed only once and stored in a hash table; once the embedding of a prompt is computed, we can simply look it up instead of computing it again.</li>
<li>However, I think this way is not good in terms of performance because there is no connection between text and prompt (they’re processed separately in spite of being processed by the same backbone). To overcome it, I load the trained backbone from the long-context model (trained with concatenated text and prompt, with a long sequence length), freeze the top 12 layers, and train the second stage model on top.</li></ul></li>
<li>Remarks:<ul>
<li>max_len context: 1536</li>
<li>max_len text: 512</li>
<li>n_epochs: 1</li>
<li>AWP and EMA enabled after 50 steps</li>
<li>Use LGBM trick</li>
<li>Inference time: 39 mins (4 folds)</li>
<li>CV/Public LB/Private LB: 0.500/0.445/0.470</li>
<li>Backbone: OpenAssistant/reward-model-deberta-v3-large-v2</li></ul></li>
<li>Code:<ul>
<li>Training: <a href="https://www.kaggle.com/datasets/shinomoriaoshi/commonlitsummaryv6c" target="_blank">https://www.kaggle.com/datasets/shinomoriaoshi/commonlitsummaryv6c</a> </li>
<li>Inference: <a href="https://www.kaggle.com/code/shinomoriaoshi/commonlitsummary-v6c-infer" target="_blank">https://www.kaggle.com/code/shinomoriaoshi/commonlitsummary-v6c-infer</a> </li></ul></li>
</ul>
<h1>Final ensemble</h1>
<p>We make ensembles of the above models and the public models by <a href="https://www.kaggle.com/tsunotsuno" target="_blank">@tsunotsuno</a> (we actually trained them ourselves rather than taking the models directly). We observed a very messy correlation between CV and public LB at the ensemble phase. In the end, we essentially took an average of the component models; the weights were optimized with some constraints to deal with overfitting.</p>
<h1>What didn’t work</h1>
<ul>
<li>We tried to generate synthetic data from LLMs (T5 and Llama), but it didn’t work at all;</li>
<li>Training with LoRA for target-wise models worked and got CV 0.53+, but didn’t help in the local ensemble;</li>
</ul>
<h1>Team member</h1>
<ul>
<li><a href="https://www.kaggle.com/conjuring92" target="_blank">@conjuring92</a></li>
<li><a href="https://www.kaggle.com/syhens" target="_blank">@syhens</a></li>
<li><a href="https://www.kaggle.com/shinomoriaoshi" target="_blank">@shinomoriaoshi</a><br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2176524%2F1ff6a8913b16a36c1474d8418353be2a%2Fteam.PNG?generation=1697202450769081&alt=media" alt=""></li>
</ul> |
CommonLit - Evaluate Student Summaries | 9th Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit - Evaluate Student Summaries
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h1>9th Place Solution</h1>
<p>First and foremost, I'd like to extend my gratitude to the host for organizing this competition. Congratulations to the winners as well! I'd like to present a brief overview of my solution that secured the 9th position.</p>
<h2>Cross-Validation</h2>
<p>The wording of prompt_id 814d6b stood out, being notably different from other prompt_ids. When cross-validating with groupkfold using the prompt_id, predicting for this particular prompt_id didn't correlate with scores from other prompt_ids. Hypothetically, if there were any test ids with a distribution similar to 814d6b, having 814d6b in the training data might produce decent accuracy. However, there was no way to validate this. For this competition, I employed two methods of cross-validation and submitted for both:<br>
Cross-validation 1: groupkfold (group = prompt_id), excluding 814d6b from evaluation.<br>
Cross-validation 2: groupkfold (group = prompt_id), Including 814d6b in the evaluation.<br>
The final private score was better for the model from cross-validation 2, so I'll delve into the details of that model. <br>
Given that changing the seed alone resulted in variable CV scores, I performed cross-validation using three different seeds and evaluated the model based on their ensemble. The final submission was based on the predictions of models trained on all the data using three different seeds.</p>
<h2>Training</h2>
<p>I tokenized the text as follows:<br>
text1: summary_text<br>
text2: prompt_question + [SEP] + prompt_text</p>
<pre><code># flags other than max_length are assumed typical settings, not the author's exact values
inputs = self.tokenizer.encode_plus(text1,
                     text2,
                     add_special_tokens=True,
                     max_length=self.max_len,
                     padding="max_length",
                     truncation=True,
                     return_tensors=None)
</code></pre>
<p>Model: Consisted of two deberta-v3-large and one LSTM as described below:</p>
<ul>
<li>The first deberta-v3-large was trained using the entire input text.</li>
<li>The second deberta-v3-large and LSTM were trained solely on the summary_text.</li>
</ul>
<p>Training settings:</p>
<ul>
<li>token_len: 1680</li>
<li>epochs: 3</li>
<li>Loss: SmoothL1Loss</li>
<li>lr: 8e-6</li>
<li>optimizer: AdamW<ul>
<li>weight_decay: 0.01</li>
<li>beta: (0.9, 0.98)</li></ul></li>
<li>scheduler: get_linear_schedule_with_warmup<ul>
<li>num_warmup_steps: 10% of total training steps</li></ul></li>
<li>EMA (Exponential Moving Average)<ul>
<li>ema_decay: 0.995</li>
<li>Using EMA helped stabilize the training.</li></ul></li>
</ul>
<h2>Inference</h2>
<p>token_len: 4200</p>
<h2>Scores</h2>
<ul>
<li>CV: 0.495 (ensemble of 3 seeds)<ul>
<li>814d6b: 0.604982</li>
<li>ebad26: 0.431438</li>
<li>3b9047: 0.49692</li>
<li>39c16e: 0.483208</li></ul></li>
<li>Public Score: 0.456</li>
<li>Private Score: 0.457</li>
</ul>
<h2>Inference Code</h2>
<p><a href="https://www.kaggle.com/code/takoihiraokazu/sub-ex263-seq4200?scriptVersionId=145902061" target="_blank">https://www.kaggle.com/code/takoihiraokazu/sub-ex263-seq4200?scriptVersionId=145902061</a></p> |
Kaggle - LLM Science Exam | 171st Place Solution - New Kaggle Expert! 🎉 | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kaggle - LLM Science Exam
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>This was a fantastic competition! Thank you so much to Kaggle for running it, and to everyone who shared such interesting notebooks and discussions. (Plus it's always great to get some more medals under your belt! 😄)</p>
<h1>Data Collection</h1>
<p>It was clear early on that this was essentially a <em>data</em> competition, rather than focussing on modelling. Very little data was available in the beginning, so in the early stages my main focus was on generating high quality training data that could be representative of the train data provided, and hopefully also the hidden test data. </p>
<p>With some manual work, I was able to identify the <a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam/discussion/425242" target="_blank">wikipedia pages used to generate the training data</a>. I then focussed on identifying which <a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam/discussion/425246" target="_blank">wikipedia categories would be most effective to train on</a>, using some <a href="https://www.kaggle.com/code/judehunt23/llm-science-exam-wikipedia-graph-analysis" target="_blank">graph analysis</a> to look at the pages and categories with the most connections. I was able to find that the <code>Concepts in physics</code> category was able to cover 72% of the training dataset with just 900 wikipedia pages. </p>
<p>I also decided to include the categories <code>Chemistry</code>, <code>Biology</code>, and <code>Mathematics</code> to cover the remaining training data, and any drift between this data and the test dataset. I used these wikipedia pages to generate additional questions to train on using gpt-3.5-turbo.</p>
<h1>Retrieval Augmented Generation (RAG)</h1>
<p>After initially experimenting with finetuning deberta for multiple-choice question answering, later on in the competition it became clear that the real improvements in performance would come from adding <em>context</em> to the inputs of the model. This is a common technique used in real LLM applications, known as <em>Retrieval Augmented Generation (RAG)</em>. I used two retrieval pipelines for my final solution:</p>
<ol>
<li>FAISS + Sentence Transformers, finding the top 6 out of 6M Wikipedia articles using the similarity to the question and possible answers, and extracting the top 22 sentences (thank you <a href="https://www.kaggle.com/jjinho" target="_blank">@jjinho</a> for sharing your wonderful notebook!)</li>
<li>TF-IDF on 270k wikipedia articles using cohere embeddings (thank you <a href="https://www.kaggle.com/mbanaei" target="_blank">@mbanaei</a> for your amazing notebook!)</li>
</ol>
<p>In hindsight I should have experimented further with other techniques and combined more of them together, for example searching using only questions, questions + answers, questions + one answer at a time, answers only, etc.</p>
<h1>Modelling</h1>
<p>I experimented and ensembled multiple different modelling approaches for my final solution, inspired by <a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam/discussion/440908" target="_blank">this amazing post</a> from <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>. My final submission included:</p>
<ul>
<li>Deberta <code>AutoModelForMultipleChoice</code> with no context</li>
<li>Deberta <code>AutoModelForMultipleChoice</code> with context 1 </li>
<li>Deberta <code>AutoModelForSequenceClassification</code> (multi-class, num_labels=5) with context 1</li>
<li>Longformer <code>LongformerForMultipleChoice</code> with context 2</li>
<li>Deberta <code>AutoModelForMultipleChoice</code> with context 2 </li>
</ul>
<p>I then took averages of the model outputs to determine the top 3 results.</p>
<p>I also experimented with <code>AutoModelForSequenceClassification</code> (multi-label, num_labels=1), however this didn't give me a CV/LB boost, and tried adding AutoModelForCausalLM using Platypus 70B and Mistral 7B; however, these notebooks unfortunately timed out on submission. I only finetuned Deberta using context 1, and in hindsight should have experimented with finetuning on other contexts as well, and also should have tried running e.g. AutoModelForSequenceClassification using context 2.</p>
<h1>Things I missed</h1>
<ul>
<li>Ensembling multiple different retrieval techniques together - e.g. retrieving just with questions, with questions + answers, answers only etc</li>
<li>Answer permutation - I thought about using this, but decided against it since my training runs were already taking incredibly long, but in hindsight I could have used this for retrieval as well!</li>
<li>Dropping out incorrect answers - a technique that would have been interesting to try would have been removing answers that were clearly incorrect and then reranking the remaining answers</li>
<li>Stacking CV - with the small amount of training data provided and by using open source models, I never managed to get a holdout dataset that I was happy with to run some stacking on, and instead ended up running trial and error with different model combinations and averages in the last few days. Having a strong local CV is massively important, but I lost track of this in the scramble at the end</li>
<li>Causal LMs - this is the one I really wish I'd played around with more. Deberta worked incredibly well in this competition, but I wish I'd played around more with different language models like Mistral 7B and Llama 2. I have a feeling that especially with context they could have performed really well, and added more diverse predictions to ensemble with at the end.</li>
</ul>
<p>Ultimately a lot of these points came down to time management and compute/submission constraints on Kaggle. I still learned absolutely loads during this competition, and can't wait to take this forward into the next one!</p>
<p>Huge congratulations to everybody who made it to the end of this competition, enjoy those precious points and medals. See you again on the leaderboard soon!</p> |
CommonLit - Evaluate Student Summaries | 61th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit - Evaluate Student Summaries
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h1>61th place solution</h1>
<p>First of all, I'd like to thank the competition host for organizing such an intriguing contest, and also my team members. <br>
Many of the ideas for this solution were contributed by my teammates.</p>
<h3>Summary</h3>
<p>The pipeline of our solution is as follows <br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3836857%2F03a0f6f09ed783bbdbc1cdc759ab0fca%2Fimage.png?generation=1697118683872550&alt=media" alt=""><br>
This solution was inspired by this <a href="https://www.kaggle.com/code/tsunotsuno/updated-debertav3-lgbm-with-spell-autocorrect" target="_blank">notebook</a>. Thanks for sharing this great notebook.</p>
<h3>Strategy</h3>
<p>The public leaderboard was derived from a single prompt, but based on <a href="https://www.kaggle.com/competitions/commonlit-evaluate-student-summaries/discussion/443223" target="_blank">discussion</a>, the private leaderboard encompasses over 100 prompts. This observation led me to place a heightened emphasis on cross-validation (CV). However, the CV too was based on merely four prompts, making it hard to fully trust.<br>
Given the uncertainty in both CV and the public leaderboard, I opted for a strategy that emphasizes robustness. Consequently, I integrated various backbones and input texts into the ensemble, optimizing to reduce the inference time. Furthermore, considering that the evaluation metric was RMSE, ensuring prediction accuracy was paramount. Therefore, even models with suboptimal individual scores were incorporated into the ensemble.</p>
<p>I couldn't effectively utilize the prompt_text, so I shifted my strategy to shorten the input length and reduce inference time. </p>
<h3>BERT Model Training</h3>
<p>Training used Nakama's excellent <a href="https://www.kaggle.com/code/yasufuminakama/fb3-deberta-v3-base-baseline-train" target="_blank">notebook</a> on Feedback3.<br>
Common to all <br>
epoch : 3 <br>
scheduler : cosine <br>
No dropout </p>
<table>
<thead>
<tr>
<th>backbone</th>
<th>text</th>
<th>maxlen</th>
<th>loss</th>
<th>freeze</th>
<th>CV</th>
</tr>
</thead>
<tbody>
<tr>
<td>deberta-v3-base</td>
<td>title question summary</td>
<td>512</td>
<td>SmoothL1Loss</td>
<td>4</td>
<td>0.54756</td>
</tr>
<tr>
<td>deberta-v3-large</td>
<td>question summary</td>
<td>512</td>
<td>SmoothL1Loss</td>
<td>12</td>
<td>0.53988</td>
</tr>
<tr>
<td>deberta-xlarge</td>
<td>summary title question</td>
<td>512</td>
<td>SmoothL1Loss</td>
<td>2</td>
<td>0.54400</td>
</tr>
<tr>
<td>electra-large</td>
<td>question summary</td>
<td>1024</td>
<td>RMSE</td>
<td>2</td>
<td>0.55925</td>
</tr>
<tr>
<td>muppet-roberta-large</td>
<td>question summary</td>
<td>512</td>
<td>SmoothL1Loss</td>
<td>4</td>
<td>0.56337</td>
</tr>
</tbody>
</table>
<h3>GBDT</h3>
<p>LightGBM, CatBoost, and XGBoost were all hyperparameter-optimized with Optuna. <br>
The feature set is based on this <a href="https://www.kaggle.com/code/wchan757/achieving-lb-0-47-with-just-lightgbm-detail" target="_blank">notebook</a>.</p>
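<p>A rough sketch of the second-stage GBDT on top of the transformer out-of-fold predictions and the hand-crafted text features; the column names and parameters below are placeholders, not the tuned values.</p>
<pre><code>import lightgbm as lgb

feature_cols = ["pred_content", "pred_wording", "summary_length", "word_overlap_count"]  # placeholders

models = {}
for target in ["content", "wording"]:
    dtrain = lgb.Dataset(train_df[feature_cols], label=train_df[target])
    params = {"objective": "regression", "metric": "rmse", "learning_rate": 0.05}
    models[target] = lgb.train(params, dtrain, num_boost_round=500)

pred_content = models["content"].predict(test_df[feature_cols])
pred_wording = models["wording"].predict(test_df[feature_cols])
</code></pre>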
<h3>did not work</h3>
<ul>
<li>awp</li>
<li>fgm</li>
<li>SVR<ul>
<li>Chris's feedback3 notebook: <a href="https://www.kaggle.com/code/cdeotte/rapids-svr-cv-0-450-lb-0-44x" target="_blank">https://www.kaggle.com/code/cdeotte/rapids-svr-cv-0-450-lb-0-44x</a></li></ul></li>
<li>pseudo (might have worked)<ul>
<li>Single-model CV with feedback3 and old CommonLit pseudo labels gained more than 0.01 on any backbone, but it got worse when fed into the GBDT, so it was not used.</li></ul></li>
<li>text cleaning<ul>
<li>Could capitalization and STOPWORD have also been a feature of wording?</li></ul></li>
<li>wording anomaly detection<ul>
<li>By removing data with extremely poor wording, the RMSE deteriorated significantly, so an anomaly detection model was created specifically for this, but it was abandoned due to lack of progress in learning.</li></ul></li>
<li>FlanT5 encoder<ul>
<li>The encoder part was taken from FlanT5, an encoder-decoder model often used in actual summarization, and used in the backbone, but the CV was 0.8x, which was very poor, so it was not used.</li></ul></li>
</ul>
<h3>Important Citations</h3>
<ul>
<li><a href="https://www.kaggle.com/code/tsunotsuno/updated-debertav3-lgbm-with-spell-autocorrect" target="_blank">2stage pipeline</a></li>
<li><a href="https://www.kaggle.com/code/yasufuminakama/fb3-deberta-v3-base-baseline-train" target="_blank">feedback3 train notebook</a></li>
<li><a href="https://www.kaggle.com/competitions/commonlitreadabilityprize/discussion/260729" target="_blank">no dropout</a></li>
<li><a href="https://www.kaggle.com/code/wchan757/achieving-lb-0-47-with-just-lightgbm-detail" target="_blank">get features</a></li>
</ul> |
CommonLit - Evaluate Student Summaries | 2nd Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit - Evaluate Student Summaries
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><strong>Inputs</strong></p>
<p>The data was input to the model as follows : </p>
<p>'Think through this step by step : ' + prompt_question + [SEP] + 'Pay attention to the content and wording : ' + text + [SEP] + prompt_text</p>
<p><strong>Pooling Method [High Impact]</strong></p>
<p><strong>Input :</strong> [TOKEN] [TOKEN] [SEP] [TOKEN] [TOKEN] [SEP] [TOKEN] [TOKEN]<br>
<strong>Head Mask :</strong> [0] [0] [1] [1] [1] [0] [0] [0]</p>
<p>Instead of using the normal attention mask created by the model tokenizer for pooling, I used a head mask that only had ones for the student's answer (text) portion of the input and zeros for all other tokens. The model still consumed the normal attention mask, but the mean pooling used the head mask.</p>
<p>This had the biggest impact out of all the tricks I used. It increased the CV by a huge margin in all folds, but especially for the difficult prompts : 3b9047 and 814d6b. In my opinion this was the “magic” for this competition.</p>
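<p>A minimal sketch of mean pooling with a separate head mask (ones only over the student-summary tokens) while the backbone still consumes the normal attention mask; tensor names are mine.</p>
<pre><code>import torch

def masked_mean_pool(last_hidden_state, head_mask):
    # last_hidden_state: (batch, seq_len, hidden); head_mask: (batch, seq_len) of 0/1
    mask = head_mask.unsqueeze(-1).float()
    summed = (last_hidden_state * mask).sum(dim=1)
    counts = mask.sum(dim=1).clamp(min=1e-9)
    return summed / counts

# outputs = backbone(input_ids, attention_mask=attention_mask)     # normal attention mask
# pooled = masked_mean_pool(outputs.last_hidden_state, head_mask)  # head mask only for pooling
</code></pre>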
<p><strong>Prompt Question Augmentation [Moderate Impact]</strong></p>
<p>I created 10 extra prompt questions per prompt. I used an LLM and asked it to give me 10 variations of the prompt question. I then used these as augmentation during training. In inference, I used the default prompt question. In total I had 44 different prompt questions across all folds.</p>
<p><strong>Auxiliary Classes [Moderate Impact]</strong></p>
<p>I used auxiliary classes during the competition. These auxiliary classes were the target classes from Feedback 3.0 - <br>
['cohesion','syntax','vocabulary','phraseology','grammar','conventions'].</p>
<p>To create these labels I used models that were trained on the Feedback 3.0 data and ran the data from this competition through those models. I used only the ‘text’ column from this competition. In doing this I produced pseudo labels to use for this competition.</p>
<p>I used the auxiliary classes in the following way : (loss * .5) + (aux_loss * .5)</p>
<p>The auxiliary classes were used every second step.</p>
<p>The Feedback 3.0 competition was hosted by The Learning Agency Lab and to the best of my knowledge this is a legal technique.</p>
<p><strong>Max Length</strong></p>
<p>Models were trained on a maximum length ranging from 896-1280 during initial training. During the pseudo labelling rounds they were trained with a maximum length ranging from 1280-2048. Pseudo labels allowed the models to learn at a higher maximum length.</p>
<p>During inference the models used 1792 for large and 2048 for base.</p>
<p><strong>Pseudo Labels [Moderate Impact]</strong></p>
<p>Once a CV of .4581 was reached across the grouped kfold I started creating pseudo labels. </p>
<p>The pseudo labels allowed me to train deberta-v3-base effectively. Before PL, I was not able to train the base model. They also allowed me to increase the maximum length during training.</p>
<p>PL increased the CV from .4581 to .4476</p>
<p>The models were trained using a concatenation of the original labels and pseudo labels.</p>
<p><strong>Final Ensemble (PL)</strong></p>
<p><br></p>
<table>
<thead>
<tr>
<th>Model Name</th>
<th>Training Max Length</th>
<th>Inference Max Length</th>
<th>Head</th>
<th>Model CV</th>
</tr>
</thead>
<tbody>
<tr>
<td>microsoft/deberta-v3-large</td>
<td>2048</td>
<td>1792</td>
<td>Mean Pooling + LSTM Layer Pooling</td>
<td>.460</td>
</tr>
<tr>
<td>microsoft/deberta-v3-base</td>
<td>2048</td>
<td>2048</td>
<td>Mean Pooling + LSTM Sequence Pooling</td>
<td>.468</td>
</tr>
<tr>
<td>OpenAssistant/reward-model-deberta-v3-large-v2</td>
<td>2048</td>
<td>1792</td>
<td>Mean Pooling + LSTM Layer Pooling</td>
<td>.464</td>
</tr>
<tr>
<td>microsoft/deberta-large</td>
<td>2048</td>
<td>1792</td>
<td>Mean Pooling + Linear</td>
<td>.466</td>
</tr>
<tr>
<td>microsoft/deberta-v3-large</td>
<td>1280</td>
<td>1792</td>
<td>Mean Pooling + LSTM Sequence Pooling</td>
<td>.461</td>
</tr>
</tbody>
</table>
<p><br><br>
<br></p>
<p><strong>Did work:</strong></p>
<ul>
<li>Layer wise learning rate decay</li>
<li>Freezing layers (bottom 8)</li>
<li>LSTM layer pooling</li>
<li>LSTM Sequence pooling</li>
<li>Turn off dropout in transformer backbone</li>
<li>Multisample dropout in head</li>
</ul>
<p><strong>Did not work:</strong></p>
<ul>
<li>AWP</li>
<li>SWA</li>
</ul>
<p><strong>Inference script :</strong> <a href="https://www.kaggle.com/code/ivanaerlic/2nd-place-solution/notebook" target="_blank">https://www.kaggle.com/code/ivanaerlic/2nd-place-solution/notebook</a></p>
<p><strong>Note :</strong> The inference script and input datasets will contain extra tricks that I haven't mentioned here, such as the inclusion of additional head for the model. Only the main points are outlined in this write-up.</p>
<p>Also, you can find the training script for each model inside of the dataset that contains the weights.</p> |
Kaggle - LLM Science Exam | [8th Public / 18th Private] - Full Wikipedia Passage-level Retrieval | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kaggle - LLM Science Exam
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congratulations to the top teams, especially to <a href="https://www.kaggle.com/philippsinger" target="_blank">@philippsinger</a>, <a href="https://www.kaggle.com/ilu000" target="_blank">@ilu000</a>, and <a href="https://www.kaggle.com/ybabakhin" target="_blank">@ybabakhin</a>! This was a fun competition and I learned a lot about training language models and effective inferencing for LLM on small GPUs. building the whole retrieval pipeline from scratch was a really fun exercise.</p>
<h1>A few thoughts about this competition</h1>
<p>This competition was a rollercoaster for me: I got to the top 20 just 1 week after joining, then the STEM 270K dataset changed the LB completely (7 days before the competition end I was at ~190th place on the LB), I got busy with work and abandoned the competition for weeks, and finally bouncing back to the top 20 in the last few days was both stressful and exciting. Not to mention that my Azure account got banned 11 hours before the end of the competition (forgot to increase budget limits), just when I was uploading data to Kaggle, so my latest models did not make it into my final submissions 😅</p>
<p>At the end, the shakeup was super disappointing for me - after such an intense week without much sleep, I hoped for a better result. I'm glad I learned a lot by doing though 😁 My full inference code can be found here: <a href="https://www.kaggle.com/code/chankhavu/fork-of-0-92-submission" target="_blank">https://www.kaggle.com/code/chankhavu/fork-of-0-92-submission</a></p>
<h1>My solution</h1>
<p>My pipeline is quite standard - first, I retrieve relevant sentences from Wikipedia to add to the context for my final DeBERTa models. However, instead of retrieving articles and then sentences as in most public notebooks, I perform <strong>passage-level retrieval from full-text Wikipedia!</strong> This decision comes from the observation that, in the competition's 200 train set, there are many examples where the actual answer to the question is found in a different, less related article. The re-ranking and prediction steps that come after that are quite standard.</p>
<p>The code of my final submission is here: <a href="https://www.kaggle.com/code/chankhavu/fork-of-0-92-submission" target="_blank">https://www.kaggle.com/code/chankhavu/fork-of-0-92-submission</a></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2270821%2F6ae277ca9e188913392575378a8ae114%2FPresentation1.png?generation=1696987202333885&alt=media" alt=""></p>
<h2>1. Full Passage-level Retrieval - 54.5 million vectors/passages</h2>
<p>I split the whole of Wikipedia into passages of 128 tokens each (which is approx. <strong>100 words per passage</strong>), with a stride of 96 tokens (so, for example, an article after removing citations/categories/bibliography and after tokenization will be split into passages with offsets <code>{ [0, 127], [96, 224], ... }</code>). This results in a HUGE embeddings dataset of <strong>54.4 million vectors</strong>, each of dimension 768, taking <strong>154 GBs</strong> on disk. Obviously, that's too large for Kaggle. We need to compress this index to make it usable on Kaggle.</p>
<p>I've tried different combinations of FAISS indexes and quantization techniques - IVF, HNSW, Product Quantization (PQ) with OPQ preprocessing, Scalar Quantization (SQ), etc. I validate the indexes by generating questions with GPT-3.5, saving the Wikipedia article and passage that was used for generating the question, and measuring the recall of the index (if the retrieved passage is contained inside the passage used). The best and fastest combination is <strong>IVF with 256 centroids</strong>, and each vector is encoded into <strong>96 bytes with Product Quantization</strong>. Obviously, the more bytes in PQ the better, and 96 is the maximum possible PQ for GPU inference. The index is compressed to less than <strong>6 GBs</strong>, takes only ~20 seconds to perform a full search for 4000 questions (with <code>K=100</code> and <code>nprobe=96</code>)</p>
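<p>A sketch of building and querying such an index with the Faiss index factory (IVF with 256 lists, 96-byte PQ codes on 768-dim vectors); the training-sample size and variable names are assumptions.</p>
<pre><code>import faiss
import numpy as np

d = 768
index = faiss.index_factory(d, "IVF256,PQ96")     # 96 sub-quantizers -> 96 bytes per vector

sample = embeddings[np.random.choice(len(embeddings), 1_000_000, replace=False)]
index.train(sample)            # train IVF centroids and PQ codebooks on a sample
index.add(embeddings)          # ~54.5M vectors at ~96 bytes each stays under 6 GB

index.nprobe = 96
scores, ids = index.search(question_embeddings, 100)   # K=100 passages per question
</code></pre>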
<p>With <strong>only</strong> this index, I was able to achieve <strong>0.891 LB</strong> using just <strong>one public model</strong> (from Chris notebook), and this is before the STEM 270K dataset was published!</p>
<h2>2. Science Articles Sub-Index - 7.6 million vectors/passages</h2>
<p>I split the STEM 270K articles into passages of 128 tokens each with a stride of 64 tokens. Because we have fewer vectors than the Full Wikipedia Passage-level Index, we can afford to use more precise indexes and get rid of the PQ quantization (which, in my opinion, is too aggressive). I ended up using <strong>IVF256</strong> with simple <strong>FP16</strong> quantization for this, allowing more precise retrieval. This boosted my <strong>0.891</strong> LB solution to <strong>0.895</strong> - the boost is much less than for other teams, which I think is an indicator that my full passage-level retrieval is quite powerful by itself.</p>
<p><strong>12 hours before the end of the competition</strong>, I also added 300K more articles from Wikipedia (I did not scrape using the API, just added from the Huggingface dataset) that are related to Science. I got those 300K articles by asking GPT-4 to generate top-level topics in Science (Physics, Chemistry, Biology, Engineering, etc.), and did some clustering with <code>bge-base</code> embeddings. In total, the Science sub-index has <strong>600K articles</strong> and <strong>7.6 million vectors</strong> (passages). This added a slight boost to both my public and private scores as well.</p>
<h2>3. Re-ranking sentences</h2>
<p>This part is quite straightforward. I split the passages into chunks of 3 sentences with a stride of 1 sentence, and re-rank those chunks using <code>BAAI/bge-large-en-v1.5</code>. Additionally, I also calculate the BM25 retrieval score for each of the sentences (using all retrieved sentences for all questions as the corpus).</p>
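<p>A simplified sketch of this re-ranking step (the 0.1 BM25 weight and the exact score combination below are illustrative assumptions, not my tuned values):</p>
<pre><code>from sentence_transformers import SentenceTransformer, util
from rank_bm25 import BM25Okapi

reranker = SentenceTransformer("BAAI/bge-large-en-v1.5")

def rerank(question, chunks, top_k=10):
    q_emb = reranker.encode([question], normalize_embeddings=True)
    c_emb = reranker.encode(chunks, normalize_embeddings=True)
    dense = util.cos_sim(q_emb, c_emb)[0]                    # embedding similarity
    bm25 = BM25Okapi([c.lower().split() for c in chunks])
    sparse = bm25.get_scores(question.lower().split())       # BM25 over the retrieved chunks
    order = sorted(range(len(chunks)),
                   key=lambda i: float(dense[i]) + 0.1 * sparse[i],
                   reverse=True)
    return [chunks[i] for i in order[:top_k]]
</code></pre>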
<h2>4. Training DeBERTa models</h2>
<p>I trained 6 DeBERTa models, each with a slightly different set of training data and context length. All those models were trained in the last 3 days of the competition using 1xA100 and 1x3090, and are likely under-trained because I did not have much time. A few interesting observations:</p>
<ul>
<li>Training data was generated using GPT-3.5, using different prompts to add diversity. I also used the 60k dataset by Chris Deotte. I also prompted the model to try to include more numbers, formulas, etc. to make the question harder (from a tokenizing standpoint).</li>
<li><strong>Training on worse context gives better models!</strong> So I generated my training data with a weaker version of my retrieval pipeline (using <code>bge-small</code>). The difference is ~0.005 on LB.</li>
<li>This competition is about reading comprehension and unleashing the knowledge inside the LM. So, a smaller <code>lr</code> and more Q/A examples work.</li>
</ul>
<h1>What did not work</h1>
<ul>
<li>I spent some time trying to train a re-ranking model with LightGBM/XGBoost using LambdaRank objectives and various embedding scores/bm25 scores/tfidf scores as features. This is worse than a simple embedding score unfortunately.</li>
</ul> |
CommonLit - Evaluate Student Summaries | 29th Place Solution for the CommonLit - Evaluate Student Summaries Competition | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit - Evaluate Student Summaries
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I would like to thank the host for organizing this competition and all the participants who generously shared valuable information. I have learned a lot.</p>
<h2>Context</h2>
<ul>
<li>Business context: <a href="https://www.kaggle.com/competitions/commonlit-evaluate-student-summaries/overview" target="_blank">https://www.kaggle.com/competitions/commonlit-evaluate-student-summaries/overview</a> </li>
<li>Data context: <a href="https://www.kaggle.com/competitions/commonlit-evaluate-student-summaries/data" target="_blank">https://www.kaggle.com/competitions/commonlit-evaluate-student-summaries/data</a></li>
</ul>
<h2>Overview of the approach</h2>
<p>My approach is averaging the following models. </p>
<ul>
<li>LightGBM stacking with meta-features of 9 deberta and bart models. </li>
<li>A deberta-v3-large model for the concatenation of summary text and prompt text. (Exp ID: 10) <br>
The key aspect of the latter model is an attention mask that focuses only on the summary text, as explained by other top solutions.</li>
</ul>
<h4>Model Training Settings</h4>
<p>I trained the deberta models shown in this table. <br>
Out of these, 8 models are deberta-v3-large, and 2 models are bart-large. <br>
The bart-large models take prompt text as the encoder input and summary text as the decoder input. (However, I don't think they effectively utilize prompt text information…) <br>
CV strategy is Group k fold (group id = prompt_id) </p>
<table>
<thead>
<tr>
<th>exp id</th>
<th>model</th>
<th>input</th>
<th>pooling method</th>
<th>token max length</th>
<th>epoch</th>
<th>freezing layer</th>
<th>loss</th>
<th>awp</th>
<th>CV</th>
<th>Public LB</th>
<th>Private LB</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>deberta-v3-large</td>
<td>text</td>
<td>attention</td>
<td>822</td>
<td>10</td>
<td>18</td>
<td>smooth l1</td>
<td>false</td>
<td>0.542</td>
<td>0.494</td>
<td>0.541</td>
</tr>
<tr>
<td>2</td>
<td>facebook/bart-large</td>
<td>text, question+prompt_text</td>
<td>mean</td>
<td>822</td>
<td>4</td>
<td>-</td>
<td>smooth l1</td>
<td>false</td>
<td>0.555</td>
<td>0.514</td>
<td>0.556</td>
</tr>
<tr>
<td>3</td>
<td>deberta-v3-large-squad2</td>
<td>text</td>
<td>attention</td>
<td>822</td>
<td>4</td>
<td>18</td>
<td>smooth l1</td>
<td>false</td>
<td>0.581</td>
<td>0.496</td>
<td>0.549</td>
</tr>
<tr>
<td>4</td>
<td>deberta-v3-large</td>
<td>text</td>
<td>attention</td>
<td>822</td>
<td>10</td>
<td>18</td>
<td>smooth l1</td>
<td>true</td>
<td>0.535</td>
<td>0.488</td>
<td>0.532</td>
</tr>
<tr>
<td>5</td>
<td>bart-large</td>
<td>text+question, prompt_text</td>
<td>attention</td>
<td>822</td>
<td>10</td>
<td>-</td>
<td>smooth l1</td>
<td>true</td>
<td>0.533</td>
<td>0.486</td>
<td>0.510</td>
</tr>
<tr>
<td>6</td>
<td>deberta-v3-large</td>
<td>question+title+text</td>
<td>attention</td>
<td>822</td>
<td>10</td>
<td>18</td>
<td>smooth l1</td>
<td>false</td>
<td>0.538</td>
<td>0.497</td>
<td>0.563</td>
</tr>
<tr>
<td>7</td>
<td>deberta-v3-large</td>
<td>question+title+text</td>
<td>attention</td>
<td>512</td>
<td>4</td>
<td>-</td>
<td>smooth l1</td>
<td>true</td>
<td>0.628</td>
<td>0.519</td>
<td>0.535</td>
</tr>
<tr>
<td>8</td>
<td>deberta-v3-large</td>
<td>title+question+text</td>
<td>multi-scale</td>
<td>822</td>
<td>10</td>
<td>18</td>
<td>smooth l1</td>
<td>true</td>
<td>0.535</td>
<td>0.477</td>
<td>0.538</td>
</tr>
<tr>
<td>9</td>
<td>deberta-v3-large</td>
<td>text</td>
<td>attention</td>
<td>822</td>
<td>4</td>
<td>18</td>
<td>mse + rank</td>
<td>false</td>
<td>0.551</td>
<td>0.487</td>
<td>0.544</td>
</tr>
<tr>
<td>10</td>
<td>deberta-v3-large</td>
<td>text+title+question+prompt_text</td>
<td><a href="#model-for-summary-text-and-prompt-text-input">See this section</a></td>
<td>training : 1800, prediction : 1024</td>
<td>4</td>
<td>18</td>
<td>smooth l1</td>
<td>false</td>
<td>0.504</td>
<td>0.461</td>
<td>0.47</td>
</tr>
</tbody>
</table>
<p>Every training utilized layer-wise learning rate decay between 0.8 and 0.9.<br>
*Sum of losses calculated in each of the last 3 hidden states.<br>
**Two attention pooling layers for content and wording separately.<br>
***Pooling at different levels: words, sentences, and paragraphs, then the results are pooled.</p>
<h4>LightGBM stacking with meta features</h4>
<p>I stacked 9 models (Exp ID: 1 to 9) using LightGBM with the following meta-features:</p>
<ul>
<li>Number of characters in the summary text</li>
<li>Number of characters in the prompt text</li>
<li>Number of unique words overlapping between summary and prompt text (excluding punctuation)</li>
<li>Number of overlapping words between summary and prompt text</li>
<li>Ratio of overlapping bigrams</li>
<li>Ratio of overlapping 4-grams</li>
<li>Cosine similarity between TF-IDF values of summary and prompt text</li>
<li>Mean of edit distances between each summary sentence and prompt sentence</li>
<li>Number of words in the summary text (excluding punctuation, correcting misspelling)</li>
<li>Number of unique words overlapping between summary and prompt text (excluding punctuation, correcting misspelling)</li>
<li>Number of overlapping bigrams between summary and prompt text (excluding punctuation, correcting misspelling)</li>
<li>Ratio of overlapping bigrams (correcting misspelling)</li>
<li>Number of similar summary texts (similarity calculated by cosine similarity using Universal Sentence Encoder embedding)</li>
<li>Readability score of the prompt text</li>
</ul>
<p>This stacking model achieved CV: 0.478.</p>
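<p>A small sketch of how a couple of the features above can be computed and fed to LightGBM together with the out-of-fold predictions of the 9 transformer models (feature set and hyperparameters here are illustrative, and <code>X_train</code>/<code>y_train</code> are assumed):</p>
<pre><code>import lightgbm as lgb
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def ngram_overlap_ratio(summary, prompt, n=2):
    s, p = summary.lower().split(), prompt.lower().split()
    s_ngrams = {tuple(s[i:i + n]) for i in range(len(s) - n + 1)}
    p_ngrams = {tuple(p[i:i + n]) for i in range(len(p) - n + 1)}
    return len(s_ngrams & p_ngrams) / max(len(s_ngrams), 1)

def tfidf_cosine(summary, prompt):
    vec = TfidfVectorizer().fit([summary, prompt])
    mat = vec.transform([summary, prompt])
    return cosine_similarity(mat[0], mat[1])[0, 0]

model = lgb.LGBMRegressor(n_estimators=500, learning_rate=0.02)
model.fit(X_train, y_train)   # X_train: oof predictions + meta features, y_train: content or wording
</code></pre>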
<h4>Averaging the Two Models</h4>
<p>For the final submission, I simply averaged the stacking model and the long-token model (Exp ID: 10), and submitted it. <br>
The CV is 0.471, Public LB: 0.433. <br>
I also tried nelder-mead weight optimization, but its CV was not as good as the simple average.</p>
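<p>For completeness, the kind of weight search I tried looks roughly like this (a sketch; <code>preds</code> is the (n_samples, n_models) out-of-fold prediction matrix and <code>y</code> the targets):</p>
<pre><code>import numpy as np
from scipy.optimize import minimize

def blend_rmse(w, preds, y):
    w = np.clip(w, 0, None)
    w = w / w.sum()
    return np.sqrt(np.mean((preds @ w - y) ** 2))

n_models = preds.shape[1]
res = minimize(blend_rmse, x0=np.full(n_models, 1 / n_models),
               args=(preds, y), method="Nelder-Mead")
weights = np.clip(res.x, 0, None)
weights /= weights.sum()
</code></pre>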
<h2>Details of the submission</h2>
<h3>What Worked</h3>
<h4>Model for Summary Text and Prompt Text Input</h4>
<p>I initially concatenated the summary text and prompt text and fed them into deberta-v3-large model. However, this didn't lead to an improvement in CV performance. <br>
I realized it was necessary to inform the model about the positions of the summary text tokens. Therefore, I prepared attention masks to indicate the positions of the summary text, title, question, and prompt text within the input tokens.<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2419980%2F8d48845c845333584ec6044c09f7d33e%2Fattention.drawio.png?generation=1697274325973927&alt=media" alt=""><br>
These masks were used in conjunction with deberta's output. The architecture of this model is described below.<br>
(I apologize if it seems overly complex. According to other solutions, it appears that using an attention mask solely for the summary text may be sufficient.)<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2419980%2Fcb53a1430da088ad8e964119ff05acbe%2Fdeberta.drawio.png?generation=1697274351083740&alt=media" alt=""><br>
This model was trained with a token length of 1800. However, during prediction, I had to reduce the token length to 1024 due to a 9-hour time limit. <br>
This model (Exp ID: 10) improved CV scores, especially on challenging prompts, resulting in CV: 0.504, Public LB: 0.461; its inference time exceeded ~4 hours.</p>
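<p>The core of this idea can be sketched as a masked pooling over the summary positions only (simplified relative to my actual head, which combines several segment masks):</p>
<pre><code>import torch

def masked_mean_pool(hidden_states, segment_mask):
    # hidden_states: [batch, seq_len, hidden]; segment_mask: [batch, seq_len], 1 on summary tokens only
    mask = segment_mask.unsqueeze(-1).float()
    return (hidden_states * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-6)

# out = deberta(input_ids, attention_mask=attention_mask)               # full input: text+title+question+prompt_text
# summary_feat = masked_mean_pool(out.last_hidden_state, summary_mask)  # summary_mask marks summary token positions
# preds = regression_head(summary_feat)                                 # e.g. nn.Linear(hidden, 2) for content/wording
</code></pre>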
<h3>What didn't Work</h3>
<h4>CNN Model to the Sentence Similarity Matrix between Summary Text and Prompt Text</h4>
<p>I noticed that significant errors were caused by summary texts that simply copied and pasted many sentences from their prompt texts. These summaries were given the same low content and wording score. I believe that when scorers come across such a copy-and-paste summary, they tend to stop reading it and automatically assign low scores. Therefore, I wanted to address this issue by calculating the edit distances between each of the N sentences in the summary text and each of the M sentences in the prompt text. I then created an N x M similarity matrix, with each element containing edit distance ratio. <br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2419980%2F300454278eb2e87f36d4e263c64cc282%2Fheatmap.png?generation=1697274594179322&alt=media" alt=""> <br>
This matrix was fed into a CNN model, and target values were generated. However, this CNN model was not as effective as anticipated… (this idea was inspired by a method described in <a href="https://www.cl.cam.ac.uk/~ek358/Summarization_NAACL.pdf" target="_blank">this</a> paper).</p>
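<p>The similarity matrix itself is straightforward to build; a sketch with the standard library (<code>nltk.sent_tokenize</code> needs the punkt data downloaded):</p>
<pre><code>import numpy as np
import nltk
from difflib import SequenceMatcher

def sentence_similarity_matrix(summary, prompt):
    s_sents = nltk.sent_tokenize(summary)
    p_sents = nltk.sent_tokenize(prompt)
    mat = np.zeros((len(s_sents), len(p_sents)), dtype=np.float32)
    for i, s in enumerate(s_sents):
        for j, p in enumerate(p_sents):
            mat[i, j] = SequenceMatcher(None, s, p).ratio()  # edit-distance-based ratio in [0, 1]
    return mat
</code></pre>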
<h3>Sources</h3>
<ul>
<li><a href="https://www.kaggle.com/competitions/feedback-prize-english-language-learning/discussion/369956" target="_blank">https://www.kaggle.com/competitions/feedback-prize-english-language-learning/discussion/369956</a><ul>
<li>For information on the multi-scale approach.</li></ul></li>
<li><a href="https://www.cl.cam.ac.uk/~ek358/Summarization_NAACL.pdf" target="_blank">https://www.cl.cam.ac.uk/~ek358/Summarization_NAACL.pdf</a><ul>
<li>Reference material for tackling the same task.</li></ul></li>
<li><a href="https://www.kaggle.com/code/tsunotsuno/updated-debertav3-lgbm-with-feature-engineering" target="_blank">https://www.kaggle.com/code/tsunotsuno/updated-debertav3-lgbm-with-feature-engineering</a><ul>
<li>I borrowed many features from this source. Thank you very much for your contribution!</li></ul></li>
</ul> |
CommonLit - Evaluate Student Summaries | 5th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit - Evaluate Student Summaries
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, I would like to thank the host for hosting such a wonderful competition. Also, thanks to the kagglers who shared valuable information in notebooks and discussions.</p>
<p>I'll briefly share my approach that led to the 5th place finish. Honestly, I didn't use any special techniques, so I believe the shake-up was largely due to luck. However, I'm extremely pleased with my first solo gold.</p>
<h3>Overview</h3>
<p>The key points of my approach are mainly threefold.</p>
<ul>
<li>Use all of summary, question, title, and prompt_text.</li>
<li>Use a single deberta v3 large model, improved with longer max_len and freeze layer, etc.</li>
<li>An ensemble of deberta v3 large and LightGBM.</li>
</ul>
<h3>Approach</h3>
<ul>
<li><p>deberta v3 large</p>
<ul>
<li>Input all of summary + question + title + prompt_text.</li>
<li>No preprocessing.</li>
<li>max_len: 1536.</li>
<li>Freeze embedding and the first 18 layers (see the sketch below). </li>
<li>No dropout.</li>
<li>Train content and wording simultaneously.</li></ul></li>
<li><p>LightGBM</p>
<ul>
<li>Uses mostly the same features as public notebooks.</li></ul></li>
<li><p>ensemble</p>
<ul>
<li>Weighted average of deberta and LGB.</li>
<li>Weights optimized using nelder mead.</li></ul></li>
</ul>
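<p>As a concrete illustration of the layer freezing mentioned above (a sketch, not the exact training code):</p>
<pre><code>from transformers import AutoModel

backbone = AutoModel.from_pretrained("microsoft/deberta-v3-large")   # 24 encoder layers

for p in backbone.embeddings.parameters():      # freeze the embedding layer
    p.requires_grad = False
for layer in backbone.encoder.layer[:18]:       # freeze the first 18 encoder layers
    for p in layer.parameters():
        p.requires_grad = False
</code></pre>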
<h3>validation</h3>
<ul>
<li>strategy<ul>
<li>groupkfold (group = prompt_id)</li></ul></li>
<li>cv score<ul>
<li>deberta v3 large :0.4816</li>
<li>LGBM : 0.5513</li>
<li>ensemble : 0.4748</li></ul></li>
</ul>
<h3>What Didn't Work</h3>
<ul>
<li><p>Models other than deberta v3 large.</p>
<ul>
<li>When including prompt_text as input, other models (including deberta-v3-base) had much poorer performance.</li></ul></li>
<li><p>Additional training data with LLM</p>
<ul>
<li>Considered data augmentation by creating summaries from prompt_texts using LLM.</li>
<li>I used the scraped prompt_text from commonlit.</li>
<li>Created ground truth using pseudo labels and added to training, but it did not improve.</li></ul></li>
<li><p>LGB-based stacking</p>
<ul>
<li>Tried stacking using deberta's oof, similar to public notebooks, but it did not improve in my model.</li>
<li>Hence, changed to the weighted average of each model as mentioned above.</li></ul></li>
<li><p>Text preprocessing.</p></li>
<li><p>Inputting other summaries from the same prompt.</p>
<ul>
<li>Inspired by the PPPM competition's magic, considered concatenating other summaries from the same prompt, but it did not improve.</li></ul></li>
<li><p>etc…</p></li>
</ul> |
CommonLit - Evaluate Student Summaries | 98th Place Solution Overview 🥈 | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit - Evaluate Student Summaries
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h1>🥈 <strong>Silver Medal Solution Overview for CommonLit - Evaluate Student Summaries<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F7523757%2F717c6210085b3728d0a839bb7b35e846%2FCDS.jpg?generation=1697980812788529&alt=media" alt=""></strong></h1>
<ul>
<li><strong>Solution Link</strong>: <a href="https://www.kaggle.com/code/aleron751/sanseymysamba-final-solution/notebook" target="_blank">Kaggle Notebook</a></li>
</ul>
<h2><strong>1. 📖 Data Reading</strong></h2>
<ul>
<li><strong>Datasets Used</strong>: Data was sourced from Kaggle, including student prompts for train/test and their corresponding essays.</li>
</ul>
<h2><strong>2. 🛠 Feature Creation</strong></h2>
<ul>
<li>Generated features were primarily based on comparing student essays with a summarized version of the original prompts.</li>
<li>An extraction-based summarization technique was employed to distill the essence of the original prompts, making them more comparable to student essays.</li>
</ul>
<h2><strong>3. 🔗 Data Concatenation</strong></h2>
<ul>
<li>To facilitate a holistic view and ease of processing, prompts and essays were concatenated.</li>
</ul>
<h2><strong>4. 🤖 Transformer Predictions</strong></h2>
<ul>
<li><p>Several transformer models were leveraged, forming a "stack of BERTs" to enhance prediction accuracy.</p></li>
<li><p>Pre-existing models were harnessed to make out-of-fold predictions on the training data. The training process can be reviewed <a href="https://www.kaggle.com/code/bulivington/transformers-predictions-base" target="_blank">here</a>.</p></li>
<li><p>Functions were defined to facilitate the efficient inference of test data using the trained models.</p></li>
</ul>
<h2><strong>5. 📚 Stacking Approach</strong></h2>
<ul>
<li><p>A customized model architecture was employed, utilizing Huggingface's trainer.</p></li>
<li><p>Different pooling strategies were explored for the transformer outputs. <code>WeightedLayerPooling</code> was the chosen method due to its efficacy (see the sketch below).</p></li>
<li><p>A custom dataset class in PyTorch was designed for test inference, accompanied by its associated function.</p></li>
</ul>
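<p>A minimal sketch of the <code>WeightedLayerPooling</code> idea referenced above (a learnable weighted sum over the transformer's hidden layers):</p>
<pre><code>import torch
import torch.nn as nn

class WeightedLayerPooling(nn.Module):
    def __init__(self, num_hidden_layers, layer_start=4):
        super().__init__()
        self.layer_start = layer_start
        self.layer_weights = nn.Parameter(torch.ones(num_hidden_layers + 1 - layer_start))

    def forward(self, all_hidden_states):
        # all_hidden_states: tuple of (num_hidden_layers + 1) tensors of shape [batch, seq, hidden]
        stacked = torch.stack(all_hidden_states[self.layer_start:], dim=0)
        w = torch.softmax(self.layer_weights, dim=0).view(-1, 1, 1, 1)
        return (w * stacked).sum(dim=0)   # weighted average of the selected layers
</code></pre>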
<h2><strong>6. 🎛 Meta Model Training</strong></h2>
<ul>
<li><p>Given its resilience against overfitting and robustness to outliers, the <code>HuberRegressor</code> was chosen as the meta model.</p></li>
<li><p>Beyond transformer-based features, various intuitive features were conceptualized, like word/token statistics, text readability score, and prompt-essay comparisons.</p></li>
</ul>
<h2><strong>7. 🔍 Feature Engineering</strong></h2>
<ul>
<li><p>The BARTScorer class was utilized to compute the BART score, shedding light on the essay's quality.</p></li>
<li><p>Several pre-trained models were deployed to derive features that measure the degree of alignment between the prompt and the essay.</p></li>
<li><p>A plethora of custom features were engineered to capture the nuances of the text, such as sentence structure, token intersections, and linguistic errors.</p></li>
</ul>
<h2><strong>8. 🧪 Feature Generation</strong></h2>
<ul>
<li><p>Sentence encoders, reward models, and other tools were employed to craft features that capture the essence of the text.</p></li>
<li><p>Features that provide insights into the text's structure and coherence, like the count of stop words, n-gram token intersections, and sentence errors, were extracted.</p></li>
<li><p>Models trained on previous competition datasets were leveraged to craft additional, insightful features.</p></li>
</ul>
<h2><strong>9. 🚀 Final Model Training</strong></h2>
<ul>
<li><p>All features, including transformer predictions and manually crafted ones, were integrated and readied for the final model.</p></li>
<li><p>Cross-validation methods were possibly used to gauge the model's performance on unseen data.</p></li>
<li><p>The model was trained with a limited iteration count to prevent overfitting and ensure its general applicability.</p></li>
</ul>
<h2><strong>10. 📤 Submission Generation</strong></h2>
<ul>
<li><p>Predictions for the test dataset were crafted using the meticulously trained HuberRegressor model.</p></li>
<li><p>Predictions were organized into the required format, typically consisting of an ID and its corresponding predicted value.</p></li>
<li><p>The structured submission file was uploaded to Kaggle's competition page, marking the culmination of the process.</p></li>
</ul> |
CommonLit - Evaluate Student Summaries | 19th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit - Evaluate Student Summaries
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I'd like to start by expressing my gratitude to the organizers for hosting this competition and inspiring engaging discussions.<br>
A special congratulations to the top participants for their outstanding achievements.<br>
Personally, I was on the cusp of becoming a GM with a gold medal, and it's a bit disappointing to have come so close yet fall short. Nevertheless, your hard work and insights have truly made this a memorable event for everyone. <br>
Thank you all for making it such a rewarding experience.</p>
<p>And also a big thank you to my teammate takai380. </p>
<h2>Overview</h2>
<ul>
<li>A two-stage structure with the 1st stage being a Transformer and the 2nd stage being a GBDT (LightGBM).</li>
<li>For the 2nd stage GBDT, a model is created for each out-of-fold (oof) of the Transformer, and finally, a Weighted Average is taken.<ul>
<li>Some raw prediction values from the Transformers are also included in the final Weighted Average.</li></ul></li>
<li>Adjust the weight according to the total token count of summary_text and prompt_text.</li>
</ul>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1305588%2F9c362ac762756babd02f63e96e36e971%2Foverview.png?generation=1697077612844422&alt=media" alt="overview"></p>
<h2>Final Model</h2>
<p>Three models with different features and weights were selected for the final submission:</p>
<ul>
<li>Sub1 (TrustLB) LB:0.422 / CV:0.463-0.465 (Different CV calculation method)</li>
<li>Sub2 (TrustCV) LB:0.448 / CV:0.458</li>
<li>Sub3 (Balance) LB:0.441 / CV:0.460</li>
</ul>
<h2>Transformer</h2>
<ul>
<li>pao<ul>
<li>model-p1: deberta-v3-large (question + summary_text)</li>
<li>model-p2: deberta-v2-xlarge (question + summary_text)</li></ul></li>
<li>takai<ul>
<li>model-t1: deberta-v3-large (question + summary_text)</li>
<li>model-t2: deberta-v3-large (question + summary_text)</li>
<li>model-t3: deberta-v3-large (summary_text + prompt_text)</li>
<li>model-t4: deberta-v3-large (summary_text + question + prompt_text)</li></ul></li>
</ul>
<p>Predictions from model-t3 and t4 are also used in the final ensemble.<br>
Models with prompt_text are strong, but they take a long time at inference, so we used just 2 of them.</p>
<ul>
<li>For pao model:<ul>
<li>Freezing layer</li>
<li>Concatenate the embedding of deberta-v3-large before training to the final layer (only for model-p1)</li>
<li>Prediction using CLS token</li>
<li>Linear scheduler with warmup, 3 epochs</li></ul></li>
<li>For takai's models:<ul>
<li>Freezing layer</li>
<li>concat pooling (concat multiple layer cls tokens)</li></ul></li>
</ul>
<h2>LightGBM</h2>
<ul>
<li>Ultimately, about 80 features.</li>
<li>10 seeds x 4 folds of LightGBM Average for each Transformer model.</li>
<li>To add features, check if the CV is improved with a score from a 50 seed average.</li>
</ul>
<h3>Features</h3>
<ul>
<li>From public notebooks: (great thanks to <a href="https://www.kaggle.com/nogawanogawa" target="_blank">@nogawanogawa</a>)<ul>
<li><a href="https://www.kaggle.com/code/tsunotsuno/updated-debertav3-lgbm-with-feature-engineering" target="_blank">https://www.kaggle.com/code/tsunotsuno/updated-debertav3-lgbm-with-feature-engineering</a></li></ul></li>
<li>Jaccard coefficient between question/prompt_text and summary_text.</li>
<li>Calculate tfidf for summary_text per prompt, and take the average tfidf per record.<ul>
<li>For all of summary_text and only words not in prompt_text.</li>
<li>Taking average of all columns and average of non-zero elements.</li></ul></li>
<li>Cosine similarity of average tfidf/BoW of summary_text per prompt (see the sketch below).</li>
<li>Sentence Transformer embedding features.<ul>
<li>Cosine similarity between prompt_text and summary_text.</li>
<li>Cosine similarity between question and summary_text.</li>
<li>Cosine similarity between average embedding of summary_text per prompt.</li>
<li>Calculate cosine similarity between embeddings of sentences split from prompt_text and summary_text, and take the standard deviation of similarity for all sentences.</li>
<li>Similarly, calculate cosine similarity between embeddings of sentences split from summary_text and prompt_text, and take the standard deviation of similarity for all sentences.</li></ul></li>
<li>kNN-based features.<ul>
<li>For each prompt, kNN with the embedding of summary_text.</li>
<li>Feature extraction of the average and cosine similarity of the top 5% of oof features with high similarity for each record.</li>
<li>The average of oof features also includes the difference from the average oof features per prompt.</li></ul></li>
<li>Text similarity metrics:<ul>
<li>BERT Score</li>
<li>ROUGE</li>
<li>BLEU</li></ul></li>
<li>Learning Word2Vec with prompt_text for each prompt. Cosine similarity of the vectors of prompt_text and summary_text.</li>
<li>Include prediction values from the model of Feedback competition 3 as features.<ul>
<li><a href="https://www.kaggle.com/code/yasufuminakama/fb3-deberta-v3-base-baseline-inference?scriptVersionId=104649006" target="_blank">https://www.kaggle.com/code/yasufuminakama/fb3-deberta-v3-base-baseline-inference?scriptVersionId=104649006</a></li></ul></li>
<li>Features from pyreadability.</li>
<li>Average the general frequency of words present only in summary_text and not in prompt_text for each summary_text.<ul>
<li>For bert_score and embedding features, the average embedding was taken from extracting text from the beginning, middle, and end for long prompts.</li></ul></li>
</ul>
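<p>A sketch of the per-prompt TF-IDF features from the list above (simplified; the exact aggregations used differ slightly):</p>
<pre><code>import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def prompt_tfidf_features(summaries):
    # summaries: all summary_text strings belonging to one prompt
    mat = TfidfVectorizer().fit_transform(summaries)
    mean_tfidf = np.asarray(mat.mean(axis=1)).ravel()            # average tf-idf value per record
    centroid = np.asarray(mat.mean(axis=0))                      # prompt-level mean summary vector
    sim_to_centroid = cosine_similarity(mat, centroid).ravel()   # similarity of each summary to that mean
    return mean_tfidf, sim_to_centroid
</code></pre>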
<h2>Ensemble</h2>
<ul>
<li>Weight optimization using Nelder-Mead.<ul>
<li>Ultimately, only use those above 0 and round to a total of 1.0.</li></ul></li>
<li>Change ensemble weights according to the token count of summary_text + prompt_text.<ul>
<li>For those with a certain number of tokens or more, gradually reduce the weight of raw Transformer predictions and increase the weight of LightGBM (linearly adjust the weight by token count).</li>
<li>It is assumed that when the token count is high, the prompt_text does not fit entirely in the Transformer with prompt_text input, reducing accuracy.</li></ul></li>
</ul> |
CommonLit - Evaluate Student Summaries | Learnings from this competition | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit - Evaluate Student Summaries
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I'd first of all like to express my gratitude to the Kaggle team for organizing this challenge, and to my colleagues from my office for supporting me through the journey.<br>
Apart from the coding & conceptual knowledge that I gained through this, I also learnt the important values of patience, perseverance & hard work.<br>
Late into the competition I also joined the efficiency track, which gave me additional learning about making inference superfast and actually developing strong & meaningful textual features.</p>
<p>this is my code for the efficiency track: <a href="https://www.kaggle.com/bhavesjain/commonlit-2023-train" target="_blank">https://www.kaggle.com/bhavesjain/commonlit-2023-train</a></p>
<p>and for the accuracy track i modified the publicly available solution with my fixes: <a href="https://www.kaggle.com/code/bhavesjain/commonlit-ess-lgbm-autocorrect-deberta-v3-tuned/" target="_blank">https://www.kaggle.com/code/bhavesjain/commonlit-ess-lgbm-autocorrect-deberta-v3-tuned/</a><br>
Overview: <br>
My prime objective in the journey was to experiment with logical approaches rather than just brute-force techniques of concatenating everything & feeding it to powerful, large models. I experimented with textual features like lexical diversity, readability indices, and syllable & word metrics. I wanted to explore more in the area of metadiscourse & publicly available datasets for grading quality, but I had time & computation quota limitations. I also experimented with BERT, different versions of DeBERTa, and sentence transformers, but that's about it.<br>
Submission Details:<br>
I primarily stuck to XGBoost with Optuna. The rest was just trial & error with various models.<br>
The USP of my solution is the wide array of features that I used, inspired by <a href="https://textinspector.com/help/statistics-readability/" target="_blank">https://textinspector.com/help/statistics-readability/</a><br>
I hope you all check it out too.<br>
Additionally, I also learnt that I need to focus more on the robustness of my training techniques & optimization to avoid such losses in shakeups.</p>
<p>Even though I didn't get a medal, I'm glad that I learnt a lot of things worth applying in my own domain & possibly in upcoming competitions.</p>
CommonLit - Evaluate Student Summaries | 35th Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit - Evaluate Student Summaries
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks for organizing the challenging competition. Though we only got a silver medal in the end, we learned a lot from the experiments. And thanks to my teammates <a href="https://www.kaggle.com/penpentled" target="_blank">@penpentled</a> <a href="https://www.kaggle.com/shigeria" target="_blank">@shigeria</a> <a href="https://www.kaggle.com/chaudharypriyanshu" target="_blank">@chaudharypriyanshu</a> <a href="https://www.kaggle.com/takanashihumbert" target="_blank">@takanashihumbert</a>; they have been really helpful these past months. </p>
<h1>1. Summary</h1>
<p>Our best private solution submitted is an ensemble of models, which contains a model with prompt_text, 3 wording models and a content+lgbm model without prompt_text. The inference time is about 8 hrs. </p>
<table>
<thead>
<tr>
<th>Sub</th>
<th>Type</th>
<th>CV</th>
<th>LB</th>
<th>PB</th>
<th>Comment</th>
</tr>
</thead>
<tbody>
<tr>
<td>No.1</td>
<td>best lb</td>
<td>0.4655</td>
<td>0.426</td>
<td>0.466</td>
<td>model with prompt_text training max_len=896 infer 1024</td>
</tr>
<tr>
<td>No.2</td>
<td>best cv</td>
<td>0.463</td>
<td>0.427</td>
<td>0.467</td>
<td>substitute the content model with a distilled model+lgbm</td>
</tr>
<tr>
<td>No.3</td>
<td>conservative</td>
<td>0.476</td>
<td>0.439</td>
<td>0.484</td>
<td>only models without prompt_text</td>
</tr>
</tbody>
</table>
<p>Though we had a model that scored 0.461 on the private LB, we didn't select it since it only scored 0.445 on the public LB. It contains <a href="https://www.kaggle.com/takanashihumbert" target="_blank">@takanashihumbert</a>'s models trained on 2 folds and validated on the other 2 folds, plus the base model with prompt_text.</p>
<h1>2. Training</h1>
<p>Models submitted are shown below.</p>
<table>
<thead>
<tr>
<th>ID</th>
<th>Type</th>
<th>Train max_len</th>
<th>Infer max_len</th>
<th>Text Cleaning</th>
<th>CV</th>
<th>Model</th>
<th>With prompt_text</th>
</tr>
</thead>
<tbody>
<tr>
<td>base</td>
<td>both</td>
<td>896</td>
<td>1024</td>
<td>public methods</td>
<td>0.477</td>
<td>v3-large</td>
<td>✓</td>
</tr>
<tr>
<td>w2</td>
<td>wording</td>
<td>512</td>
<td>512</td>
<td>replaced copied text with [PASSAGE]</td>
<td>0.556</td>
<td>v3-large</td>
<td></td>
</tr>
<tr>
<td>w4</td>
<td>wording</td>
<td>768</td>
<td>768</td>
<td>replaced copied text with [PASSAGE]</td>
<td>0.559</td>
<td>v3-large</td>
<td></td>
</tr>
<tr>
<td>w6</td>
<td>wording</td>
<td>512</td>
<td>512</td>
<td>replaced copied text with [PASSAGE] and [REFERENCE]</td>
<td>0.5561</td>
<td>v3-base</td>
<td></td>
</tr>
<tr>
<td>c1</td>
<td>content</td>
<td>1024</td>
<td>1024</td>
<td>public methods</td>
<td>0.4445</td>
<td>v3-large</td>
<td></td>
</tr>
<tr>
<td>distill</td>
<td>content</td>
<td>1024</td>
<td>1024</td>
<td>joined copied text with [PASSAGE] and [REFERENCE] tokens</td>
<td>0.488</td>
<td>v3-base</td>
<td></td>
</tr>
</tbody>
</table>
<h1>3. Inference</h1>
<p>We used optuna and nelder-mead to optimize the ensemble, and the features that public notebooks shared for lgbm. We strived for both better cv and lb when ensembling.</p>
<h1>4. Not working for us</h1>
<ul>
<li>Models except for v3</li>
<li>Mean Pooling, Concat Pooling, …</li>
</ul>
<h1>5. Acknowledgements</h1>
<ul>
<li><a href="https://www.kaggle.com/tsunotsuno" target="_blank">@tsunotsuno</a> idea of lgbm and feature engineering</li>
</ul>
<h1>6. Notebooks</h1>
<p>Best PB submission: <a href="https://www.kaggle.com/code/snorfyang/commonlit-exp-e6-shigeria1/notebook" target="_blank">https://www.kaggle.com/code/snorfyang/commonlit-exp-e6-shigeria1/notebook</a><br>
Text cleaning, ensemble methods, final submissions and other notebooks may be published later.</p> |
CommonLit - Evaluate Student Summaries | 3rd place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit - Evaluate Student Summaries
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>My solution is rather simple, because I used almost no complex tricks - instead I just built a reliable pipeline and found good hyperparameters. I had some brilliant ideas (at least I think so :)) - to use augmentation and generate synthetic data using some big model based on other texts from commonlit.org. But at that time, when I was still actively contributing, there was uncertainty with the license for the texts, and I did not complete the use of augmentations - I was too busy with my work (trying to get RLHF to work), so I left the competition - my last commit was a month ago. But apparently the decision was good, at least I didn’t overfit :)</p>
<p>So let's break my solution down into parts.</p>
<p><strong>1. Data:</strong><br>
I used a fairly standard template: prompt + question + text. At the end of my participation, I tried to dig into the data and do a good pre-processing - after all, there were quite a lot of very similar essays with either identical or different scores. So, I tried to find these samples by similarity (like Levenshtein) and then merge them. In addition, I decided to make augmentations based on this insight - if there are many similar essays (differing only in typos, for example) - I could use something like reverse autocorrect - randomly replace some words with their close analogues. With this technique I got 0.453 in private ONLY on fold3 (which is better than my chosen blend and probably could lead to a second place) - but I was too tired at this point so I didn't look further into augmentations. But I think augmentations could probably lead me to victory.</p>
<p><strong>2. Models</strong><br>
Deberta is the king, so there's not much to say here. I tried using decoder models like Llama, but Deberta was still better. There were some techniques that gave me a boost - using EMA (honestly, without EMA it was very unstable, so it's probably just necessary) and using differential learning rates. I tried several pooling options, but the best option for me was a concatenation of the CLS token and mean pooling over the student's text. I also used token_type_ids to separate the prompt, question and essay.</p>
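<p>For reference, the kind of EMA I mean is just a shadow copy of the weights updated after every optimizer step (a sketch, not my exact implementation):</p>
<pre><code>import torch

class EMA:
    def __init__(self, model, decay=0.999):
        self.decay = decay
        self.shadow = {n: p.detach().clone()
                       for n, p in model.named_parameters() if p.requires_grad}

    @torch.no_grad()
    def update(self, model):
        for n, p in model.named_parameters():
            if n in self.shadow:
                self.shadow[n].mul_(self.decay).add_(p.detach(), alpha=1 - self.decay)

    @torch.no_grad()
    def copy_to(self, model):
        for n, p in model.named_parameters():
            if n in self.shadow:
                p.copy_(self.shadow[n])

# ema = EMA(model); call ema.update(model) after each optimizer step,
# and ema.copy_to(model) before validation / saving a checkpoint.
</code></pre>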
<p><strong>3. Inference & train</strong><br>
I used the following scheme - I tried to find good hyperparameters on some fold (for example, fold0), and then trained with exactly those hyperparameters on the other folds. I then submitted the entire blend and 4 individual models (5 submissions total for one day) and repeated the procedure the next day. I realized that I could use maxlen 1500 for inference (didn't research this number much, tried something like 1024 and 2048, but 1500 was better in terms of efficiency), so in my final mix I took the 10 best checkpoints across folds (some folds got 2 checkpoints, some folds got 3). First I averaged by folds, then averaged the rest. That's all. </p>
<p>Briefly what worked (ranked from most important to least important, IMO):</p>
<ol>
<li><p>Using Deberta</p></li>
<li><p>EMA</p></li>
<li><p>Augmentation</p></li>
<li><p>Differential learning rates</p></li>
<li><p>Custom pooling</p></li>
<li><p>token_type_ids</p></li>
<li><p>Data cleaning</p></li>
</ol>
<p>What did not work (random order):</p>
<ol>
<li><p>Decoder models</p></li>
<li><p>AWP</p></li>
<li><p>FGM</p></li>
<li><p>WD</p></li>
<li><p>Constant LR</p></li>
<li><p>Handcrafted features</p></li>
<li><p>GBT for stacking</p></li>
</ol>
<p>In the end, it was a good competition for me. Last year I competed in another NLP competition and got a silver medal, but I grinded all day at that competition (I wasn't working that time, so I had a lot of free time). This time I also expected silver, which I consider a solid result, but I got 3rd place. In any case, this competition was a cakewalk for me, since I spend very little effort on it (compared to the previous competition, at least). I'm hoping this means I'll grow a lot this year - and I think that's the main goal of participating in Kaggle.</p>
<p>Good luck to all of you.</p> |
Kaggle - LLM Science Exam | 6th Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kaggle - LLM Science Exam
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to Kaggle for hosting the LLM competition - it was both challenging and well-structured. A big shout-out to our community for the insightful discussions and for demonstrating what's possible under limited GPU resources. Congratulations to the winning teams, we learned a lot from the solution write ups. Many thanks to my super talented teammates <a href="https://www.kaggle.com/ubamba98" target="_blank">@ubamba98</a>, <a href="https://www.kaggle.com/nbroad" target="_blank">@nbroad</a>, <a href="https://www.kaggle.com/trushk" target="_blank">@trushk</a> and <a href="https://www.kaggle.com/abhishek" target="_blank">@abhishek</a> for such an amazing collaboration and perfect teamwork!</p>
<h2>Code Links</h2>
<ul>
<li>Github repo: <a href="https://github.com/rbiswasfc/llm-science-exam" target="_blank">https://github.com/rbiswasfc/llm-science-exam</a> </li>
<li>Inference notebook: <a href="https://www.kaggle.com/code/ubamba98/a08-exam-solver?scriptVersionId=145861577" target="_blank">https://www.kaggle.com/code/ubamba98/a08-exam-solver?scriptVersionId=145861577</a></li>
</ul>
<h2>Summary</h2>
<p>From the very beginning, we followed the retriever-reader framework for answering the STEM MCQs.</p>
<ul>
<li>Retrievers: Get context data<ul>
<li>Corpus: we created a custom STEM corpus by filtering wiki articles based on their category metadata</li>
<li>Generated synthetic MCQs using GPT-3.5, GPT4, LLaMA2 70b & Falcon 180b, which were used to train both retrievers and readers (downstream MCQ models)</li>
<li>Reranked top 10 retrieved chunks using a cross-encoder and provided top 2 to 4 re-ranked chunks to readers to solve the MCQs.</li></ul></li>
<li>Readers: Solve MCQs<ul>
<li>We used an ensemble of DeBERTas and LLMs.</li>
<li>DeBERTa: we explored different training strategies, such as<ul>
<li>Span approach: to make use of cross-option information</li>
<li>DebertaV2ForMultipleChoice: for diversity</li>
<li><a href="https://aclanthology.org/2021.eacl-main.20/" target="_blank">PET</a>: for diversity</li></ul></li>
<li>LLM: we explored both fine-tuning and zero-shot strategies<ul>
<li>Finetuned Open-Orca/Mistral-7B-OpenOrca using LoRA</li>
<li>Top 5% of MCQs (as ranked by prediction entropy - a proxy for MCQ difficulty) were handled by two 70b LLMs:<ul>
<li>platypus2-70b-instruct</li>
<li>sheep-duck-llama-2</li></ul></li></ul></li></ul></li>
</ul>
<p><strong>NOTE</strong>: We used <code>platypus2-70b-instruct</code> in our pipeline - We're uncertain about the compatibility of this model with the competition rules.</p>
<h2>1. Retrievers: Get Context Data</h2>
<p>We adopted the standard retrieve & re-rank pipeline to find MCQ specific relevant text chunks from a custom STEM wikipedia corpus.</p>
<h3>1.1 STEM Wiki Corpus</h3>
<p>To address frequent rendering issues (number, equations & symbols) and to filter out irrelevant articles from the existing wiki corpuses, we created a custom STEM wiki corpus as below:</p>
<ul>
<li>Define a set of seed wikipedia categories related to STEM topics such as Category:Concepts in physics, Category:Physical quantities, etc.</li>
<li>For each category, recursively collect the member pages and subcategories up to a certain depth.</li>
<li>Extract the page contents of the collected wiki URLs using the Wikipedia-API package (~500k pages).</li>
</ul>
<p><strong>Chunking:</strong> We first split the full text from each article based on different sections. The longer sections were further broken down into smaller chunks containing approximately 300 tokens. We maintained two representations for each chunk:</p>
<ul>
<li>A short representation without overlap - used for embedding and search</li>
<li>A longer representation with overlap from the previous and next chunks - used as contexts in the downstream MCQ models.</li>
</ul>
<h3>1.2 Retriever</h3>
<p>Retrieved text chunks from pre-trained embedding models such as thenlper/gte-base, BAAI/bge-base-en-v1.5 provided a strong performance boost for the downstream MCQ models. In our pipeline, we observed further performance improvement by fine-tuning the embedding models. We re-purposed the synthetically generated MCQs for the retrieval task, as illustrated in the figure below:</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2125251%2F51fc3bccc92c4d90ce6d5358ed93fe4d%2FScreenshot%202023-10-17%20at%2012.09.15%20AM.png?generation=1697472582799041&alt=media" alt=""></p>
<p>Retrievers were fine-tuned with the NCE loss + dynamically computed in-batch hard negatives. Specifically, we used the batching strategy from this <a href="https://www.kaggle.com/competitions/learning-equality-curriculum-recommendations/discussion/395110" target="_blank">solution post</a>. In our pipeline, we used a union of top 10 text chunks retrieved from fine-tuned <code>thenlper/gte-base</code> & <code>BAAI/bge-base-en-v1.5</code> models.</p>
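<p>As a simplified stand-in for this training setup (we used a custom NCE loss with dynamically mined in-batch hard negatives; the off-the-shelf <code>MultipleNegativesRankingLoss</code> below is the closest analogue, and <code>qa_pairs</code> is assumed to hold (question, positive passage) pairs built from the synthetic MCQs):</p>
<pre><code>from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("thenlper/gte-base")

train_examples = [InputExample(texts=[q, pos]) for q, pos in qa_pairs]
loader = DataLoader(train_examples, shuffle=True, batch_size=32)
loss = losses.MultipleNegativesRankingLoss(model)   # other passages in the batch act as negatives

model.fit(train_objectives=[(loader, loss)], epochs=1, warmup_steps=100)
</code></pre>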
<h3>1.3 Reranker</h3>
<p>To reduce noise in the contexts provided to the downstream MCQ models, we re-ranked top retrieved candidates using a cross-encoder (deberta-v3-base). We selected top 2 to 4 re-ranked contexts as per a dynamic threshold on computed relevancy score from the re-ranker.</p>
<h2>2. Readers: Solve MCQs</h2>
<p>We explored several strategies to solve the MCQs using the retrieved contexts.</p>
<h3>2.1 Span Classification</h3>
<ul>
<li>Mean pooling of tokens from each option to extract option-wise features</li>
<li>Classification with cross-option information</li>
</ul>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2125251%2Fe0bc561a6930b8247d3f0142db008b2e%2FScreenshot%202023-10-17%20at%2012.12.08%20AM.png?generation=1697472753776815&alt=media" alt=""></p>
<h3>2.2 LLM Fine-tuning using LoRA</h3>
<p>Towards the end of the competition, it was evident that we were hitting a performance ceiling with DeBERTa models. Hence, we decided to explore fine-tuning of LLMs. While we experimented with several models (e.g. flan-t5-xl (3b), flan-t5-xxl (11b), llama-7b, mistral-7b, llama-30b and llama-70b), our final pipeline includes 1x LoRA fine-tuned mistral-7b model. The LoRA settings were as follows (a config sketch follows the list):</p>
<ul>
<li>lr 3.e-4 (slight variations)</li>
<li>rank 16</li>
<li>alpha 16</li>
<li>lora modules on all linear components (qkv, out_proj, ffn, gate_proj, etc.) and embeddings</li>
<li>dropout 0.1</li>
<li>no sequence packing</li>
<li>bf16</li>
</ul>
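<p>In <code>peft</code> terms, that configuration looks roughly like the sketch below (the module names are the standard Mistral projection names; treat this as an illustration rather than our exact script):</p>
<pre><code>from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model, TaskType

base = AutoModelForCausalLM.from_pretrained("Open-Orca/Mistral-7B-OpenOrca")

lora_config = LoraConfig(
    r=16,
    lora_alpha=16,
    lora_dropout=0.1,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    modules_to_save=["embed_tokens"],   # keep the embeddings trainable as well
    task_type=TaskType.CAUSAL_LM,
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()
</code></pre>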
<p>We followed the cross-option setting during LLM fine-tuning, i.e. predicting the right answer given all the options. We did not explore training the model in a binary classification setup. </p>
<h3>2.3 Zero Shot using 70b models</h3>
<p>A huge shout out to <a href="https://www.kaggle.com/simjeg" target="_blank">@simjeg</a> for demonstrating the potential of 70b LLM using Kaggle GPU!!</p>
<p>About 5 days before the end of the competition, we started experimenting with 70b models. We realized we wouldn’t be able to run every example though a model this big, but we also knew that our deberta models were doing a good job for 90-95% of the questions. Thus, we took the least confident predictions and passed them to the 70b model, using <a href="https://www.kaggle.com/simjeg" target="_blank">@simjeg</a>'s code. We did slight prompt engineering the last few days and found that taking the difference between two tokens (yes/no or true/false) gave slightly better results. In the end we used both platypus 70b and sheep-duck-llama-70b. We adjusted the prompt from the public notebook to match the way it was trained. Our last few submissions were experiments passing 5-10% of the samples, blending with other models, or just having the 70b predictions be the final predictions. We did get lucky because our best scores came on the final day.</p>
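<p>The yes/no scoring trick boils down to comparing next-token logits; a minimal sketch (the prompt formatting and model path are placeholders):</p>
<pre><code>import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

MODEL_PATH = "path/to/70b-weights"   # placeholder
tok = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModelForCausalLM.from_pretrained(MODEL_PATH, torch_dtype=torch.float16, device_map="auto")

yes_id = tok(" yes", add_special_tokens=False).input_ids[-1]
no_id = tok(" no", add_special_tokens=False).input_ids[-1]

@torch.no_grad()
def option_score(prompt_text):
    # prompt_text asks whether one answer option is correct and ends where "yes"/"no" should follow
    inputs = tok(prompt_text, return_tensors="pt").to(model.device)
    logits = model(**inputs).logits[0, -1]
    return (logits[yes_id] - logits[no_id]).item()

# the five options of a question are then ranked by option_score in descending order
</code></pre>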
<p>CV on our custom dataset showed slightly better performance passing all the options at once, but we couldn’t figure out how to do that on Kaggle.</p>
<p>Local validation results with <code>Sheep-duck-llama</code>:</p>
<ul>
<li>(all options at once) ABCDE tokens: 0.91</li>
<li>(one option at a time) only yes token: 0.886</li>
<li>(one option at a time) only no token: 0.888</li>
<li>(one option at a time) yes token - no token: 0.891</li>
</ul>
<p>We did try fine-tuning the 70b model using QLoRA, but we ran into issues we could not debug before time ran out.</p>
<h2>Team Members</h2>
<ul>
<li>Abhishek Thakur (@abhishek)</li>
<li>Trushant Kalyanpur (@trushk)</li>
<li>Udbhav Bamba (@ubamba98)</li>
<li>Nicholas Broad (@nbroad)</li>
<li>Raja Biswas (@conjuring92)</li>
</ul> |
CommonLit - Evaluate Student Summaries | 21th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit - Evaluate Student Summaries
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, I would like to thank the organizers for hosting such an interesting competition. Also thanks to kaggler for sharing so much useful information through notebooks and discussions.</p>
<p>I share my solution.</p>
<p><br></p>
<h2>Overview</h2>
<p>My final submission is an ensemble of two deberta-v3-large models and LightGBM.<br>
The pipeline is as follows.<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6454400%2F9c4ca8159dbb9fb98f9cefa2e4e90b3d%2F(81).png?generation=1697805022893951&alt=media" alt=""></p>
<p><br></p>
<h2>Model</h2>
<ul>
<li>Deberta v3 large<ul>
<li>input_text = text+prompt_question+prompt_text</li>
<li>maxlen = 1024</li>
<li>CLS token</li>
<li>loss : RMSELoss</li>
<li>epoch = 4</li></ul></li>
</ul>
<p><br></p>
<ul>
<li>Deberta v3 base<ul>
<li>input_text = prompt_title+len2text+prompt_question+text (※len2text : [<a href="https://www.kaggle.com/competitions/commonlit-evaluate-student-summaries/discussion/437591" target="_blank">ref</a>])</li>
<li>maxlen = 512</li>
<li>Attention Pooling</li>
<li>loss : RMSELoss</li>
<li>epoch = 4</li></ul></li>
</ul>
<p><br></p>
<ul>
<li>LightGBM<ul>
<li>Almost the same as the <a href="https://www.kaggle.com/code/tsunotsuno/debertav3-lgbm-no-autocorrect" target="_blank">published notebook</a></li></ul></li>
</ul>
<p><br><br></p>
<h2>Key Points</h2>
<p>The following two points contributed significantly to improving the accuracy of the deberta model.</p>
<ul>
<li>Grouped-LLRD (Layer-Wise Learning Rate Decay) (see the sketch below)</li>
<li>freeze layers (freeze the 4 layers closest to the embedding) [<a href="https://www.kaggle.com/competitions/commonlit-evaluate-student-summaries/discussion/433754#2405543" target="_blank">ref</a>]</li>
</ul>
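<p>A sketch of the Grouped-LLRD idea mentioned above (it assumes the deberta backbone is stored as <code>model.backbone</code>; the learning rates and decay factor are illustrative):</p>
<pre><code>import torch

def llrd_param_groups(model, base_lr=2e-5, decay=0.85, head_lr=1e-4, weight_decay=0.01):
    layers = [model.backbone.embeddings] + list(model.backbone.encoder.layer)
    groups, lr = [], base_lr
    for layer in reversed(layers):   # top encoder layer keeps base_lr, lower layers get decayed lr
        groups.append({"params": [p for p in layer.parameters() if p.requires_grad],
                       "lr": lr, "weight_decay": weight_decay})
        lr *= decay
    head = [p for n, p in model.named_parameters()
            if not n.startswith("backbone.") and p.requires_grad]
    groups.append({"params": head, "lr": head_lr, "weight_decay": 0.0})
    return groups

optimizer = torch.optim.AdamW(llrd_param_groups(model))   # model: regression model wrapping the backbone
</code></pre>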
<p><br></p>
<h2>CV</h2>
<ul>
<li>validation strategy : GroupKFold(groups=prompt_id)</li>
</ul>
<table>
<thead>
<tr>
<th></th>
<th>39c16e</th>
<th>3b9047</th>
<th>ebad26</th>
<th>814d6b</th>
<th>cv</th>
<th>public</th>
<th>private</th>
</tr>
</thead>
<tbody>
<tr>
<td>model1</td>
<td>0.4768</td>
<td>0.5045</td>
<td>0.4461</td>
<td>0.5842</td>
<td>0.4948</td>
<td>0.466</td>
<td>0.471</td>
</tr>
<tr>
<td>model2</td>
<td>0.4540</td>
<td>0.5154</td>
<td>0.4427</td>
<td>0.5830</td>
<td>0.4907</td>
<td></td>
<td></td>
</tr>
<tr>
<td>LGBM</td>
<td>0.4730</td>
<td>0.5802</td>
<td>0.4592</td>
<td>0.5809</td>
<td>0.5197</td>
<td>0.449</td>
<td>0.481</td>
</tr>
</tbody>
</table>
<p><br></p>
<h3>Ensemble</h3>
<ul>
<li>weighted average<ul>
<li>cv : 0.4777</li>
<li>Public LB : 0.4351</li>
<li>Private LB : 0.46163</li></ul></li>
</ul>
<p><br></p>
<h2>Didn't Work</h2>
<ul>
<li>text cleaning</li>
<li>awp</li>
<li>fgm</li>
<li>svr </li>
<li>pseudo labels </li>
<li>etc…</li>
</ul> |
CommonLit - Evaluate Student Summaries | Solution Write up For CommonLit - Evaluate Student Summaries | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit - Evaluate Student Summaries
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi Everyone! I'm posting my solution here to share a slightly different approach I tried in the competition.</p>
<h1>Competition Context</h1>
<p>The context of this solution is the CommonLit - Evaluate Student Summaries competition:<br>
<a href="https://www.kaggle.com/competitions/commonlit-evaluate-student-summaries/overview" target="_blank">https://www.kaggle.com/competitions/commonlit-evaluate-student-summaries/overview</a></p>
<p>The data of this competition can be found here:<br>
<a href="https://www.kaggle.com/competitions/commonlit-evaluate-student-summaries/data" target="_blank">https://www.kaggle.com/competitions/commonlit-evaluate-student-summaries/data</a></p>
<h1>My Approach</h1>
<p>My approach was very similar to the DeBERTa + LGBM approach in this competition, however, I added a few details such as using 2 different DeBERTa backbones to parse the summary text and the prompt text separately, and 2 extra heads which then computed both the target values.</p>
<p>The idea for this approach was that I wanted to give more flexibility to the model by fine-tuning two different backbones entirely as there may be very different writing styles that require different processing.</p>
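<p>A rough sketch of this two-backbone architecture (simplified relative to my actual model; the names are illustrative):</p>
<pre><code>import torch
import torch.nn as nn
from transformers import AutoModel

class DualEncoderRegressor(nn.Module):
    # one backbone encodes the summary, the other the prompt; pooled outputs are
    # concatenated and fed to two heads for content and wording
    def __init__(self, model_name="microsoft/deberta-v3-base"):
        super().__init__()
        self.summary_encoder = AutoModel.from_pretrained(model_name)
        self.prompt_encoder = AutoModel.from_pretrained(model_name)
        hidden = self.summary_encoder.config.hidden_size
        self.content_head = nn.Linear(2 * hidden, 1)
        self.wording_head = nn.Linear(2 * hidden, 1)

    @staticmethod
    def mean_pool(last_hidden_state, attention_mask):
        mask = attention_mask.unsqueeze(-1).float()
        return (last_hidden_state * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-6)

    def forward(self, summary_inputs, prompt_inputs):
        s = self.summary_encoder(**summary_inputs).last_hidden_state
        p = self.prompt_encoder(**prompt_inputs).last_hidden_state
        feats = torch.cat([
            self.mean_pool(s, summary_inputs["attention_mask"]),
            self.mean_pool(p, prompt_inputs["attention_mask"]),
        ], dim=-1)
        return self.content_head(feats), self.wording_head(feats)
</code></pre>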
<p>However, it is also quite reasonable to assume that passing the prompt text and the summary can give a better feature vector that combines knowledge from both, and this may be better in fact, as seen in many top solutions.</p>
<p>Lastly, I pass the wording and content predictions from the double DeBERTa model into an LGBM ensemble along with the n-gram features as computed in <a href="https://www.kaggle.com/code/tsunotsuno/updated-debertav3-lgbm-with-spell-autocorrect#Model-Function-Definition" target="_blank">https://www.kaggle.com/code/tsunotsuno/updated-debertav3-lgbm-with-spell-autocorrect#Model-Function-Definition</a></p>
<p>My goal was to not try and overfit the model because the summaries and prompts in the hidden test dataset could have been very different from the train dataset.</p>
<p><em>Validation</em><br>
I validated my DeBERTa predictions using a random hold out validation set.<br>
I used KFold CV for LGBM predictions.</p>
<h1>Improvements</h1>
<ol>
<li>Use DeBERTa Large instead of Base</li>
<li>Use CV ensemble for DeBERTa itself</li>
<li>Made better validation set by separating by prompt and not randomly</li>
<li>Could have tried making a dataset using ChatGPT</li>
</ol> |
CommonLit - Evaluate Student Summaries | 1st Place Efficiency Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit - Evaluate Student Summaries
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>This is a classic distillation into deberta-v3-xsmall. It runs in ~15 mins. </p>
<p>Inference on prompt title, prompt question, summary text.</p>
<p>The magic: tokenize the prompt text, look up token/2-token/3-token matches, and include them as token_type_ids (0-6, keeping all windows, works better than 0-3 max-pooled over the 1/2/3-token windows).</p>
<p>The model knows exactly which text is quoted/copied, and which words and phrases come from the prompt, which is a lot of contextual information at 1/10th the sequence length. </p>
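<p>A simplified sketch of that lookup, using 0-3 ids where the longest matching window wins (the full 0-6 scheme keeps every window size separately, and the backbone's type-embedding table has to be resized accordingly):</p>
<pre><code>from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v3-xsmall")

def match_type_ids(summary, prompt, max_n=3):
    p_ids = tokenizer(prompt, add_special_tokens=False).input_ids
    p_ngrams = [{tuple(p_ids[i:i + n]) for i in range(len(p_ids) - n + 1)}
                for n in range(1, max_n + 1)]
    s_ids = tokenizer(summary, add_special_tokens=False).input_ids
    type_ids = [0] * len(s_ids)                          # 0 = token not found in the prompt
    for n in range(1, max_n + 1):
        for i in range(len(s_ids) - n + 1):
            if tuple(s_ids[i:i + n]) in p_ngrams[n - 1]:
                for j in range(i, i + n):
                    type_ids[j] = max(type_ids[j], n)    # 1/2/3 = longest prompt match covering the token
    return s_ids, type_ids
</code></pre>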
<p>—-</p>
<p>Base models are deberta-v3-base with attn/cls pooling, and 4/7 token type ids. These four plus lgb are stacked into out of fold predictions, for distillation. </p>
<p>—</p>
<p>Content is adjusted by ~80% and wording is adjusted by 20-40% by prompt mean. </p>
<p>Final ensemble is 68/42 nn/lgb for content; 75/30 for wording. </p>
<p>—</p>
<p>Distillation includes cosine annealing from pseudo labels to half pseudo half label across the course of three epochs. </p>
<p>Many other ideas were tried (cross-attention to prompt; embedding approaches; massive library of generative text; etc); these were the ones that worked. </p>
<p>Credit to Psi, Team Turing, and Kaggle—also the gzip paper. May release code after packaging. </p> |
CommonLit - Evaluate Student Summaries | Public 5th & Private 24th Place Solution (CV = LB) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit - Evaluate Student Summaries
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congratulations to the winners and everyone who put lots of effort into this competition, and thanks to Kaggle & the host for organising it.<br>
Even though my team shook down on the private leaderboard, we don't think it was truly a 'shakedown', because our CVs are very close to the LBs (public + private) and we had already selected almost our best results. Our final solutions are purely based on the best CVs.</p>
<h1>1 Summary</h1>
<h2>1.1 Key points</h2>
<p>● Various inputs using common words, phrases, etc. between “prompt_text” and “text” as inputs.<br>
● Custom heads combining various pooling.<br>
● Pseudo labeling with back translated texts.<br>
● Weighted loss.<br>
● Ensemble weights based on ridge regression with large alpha </p>
<h2>1.2 Our submissions (3 final submissions and the best submission)</h2>
<table>
<thead>
<tr>
<th>Submission</th>
<th>CV</th>
<th>Public LB</th>
<th>Private LB</th>
</tr>
</thead>
<tbody>
<tr>
<td>① best public LB</td>
<td>0.4596</td>
<td>0.420</td>
<td>0.463</td>
</tr>
<tr>
<td>② best CV</td>
<td>0.4566</td>
<td>0.423</td>
<td>0.463</td>
</tr>
<tr>
<td>③ other</td>
<td>0.4579</td>
<td>0.422</td>
<td>0.464</td>
</tr>
<tr>
<td>(best private LB)</td>
<td>0.4601</td>
<td>0.420</td>
<td>0.462</td>
</tr>
</tbody>
</table>
<p>CV roughly equals LB: 0.457 = 0.420 x 0.13 (public) + 0.463 x 0.87 (private)</p>
<h1>2 Details</h1>
<h2>2.1 Various inputs</h2>
<p>We added common words, phrases, etc. shared between “prompt_text” and “text” to the end of the input texts. (Once the 2- and 3-gram overlaps were added to the input, the 2nd-stage LGB models stopped adding value, at least on Jie’s models.)<br>
● overlap words<br>
● overlap N-grams (N=2,3)<br>
● overlap phrases<br>
● overlap key phrases by Rake method</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F530572%2Fc4af5af50522e5116d8ff8a626d594ef%2FScreenshot%202023-10-12%20at%2017.42.56.png?generation=1697129165509557&alt=media" alt=""></p>
<h2>2.2 Custom heads</h2>
<p>We created many custom heads that concatenate the output values of different poolings.<br>
● CLS output<br>
● Mean pooling<br>
● Max pooling<br>
● GeM pooling<br>
● Attention pooling<br>
● Mean-Max-Attention Pooling (MMA)<br>
● Mean-Max-Gem Pooling (MMG)<br>
● Gem-Attention Pooling (GA)<br>
Example: GA Pooling<br>
gemtext_out = self.gempooler(lasthiddenstate, attention_mask)<br>
attpool_out = self.attpooler(lasthiddenstate, attention_mask)<br>
context_vector = torch.cat((gemtext_out, attpool_out), dim=-1)</p>
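<p>For reference, here is a self-contained sketch of a GeM-Attention (GA) head along the lines of the snippet above; the pooling implementations are my own assumptions, not the team's exact code.</p>
<pre><code>import torch
import torch.nn as nn

class AttentionPooling(nn.Module):
    def __init__(self, hidden_size):
        super().__init__()
        self.attention = nn.Sequential(nn.Linear(hidden_size, hidden_size), nn.Tanh(),
                                       nn.Linear(hidden_size, 1))

    def forward(self, last_hidden_state, attention_mask):
        scores = self.attention(last_hidden_state).squeeze(-1)
        scores = scores.masked_fill(attention_mask == 0, -1e4)
        weights = torch.softmax(scores, dim=-1).unsqueeze(-1)
        return torch.sum(weights * last_hidden_state, dim=1)

class GeMTextPooling(nn.Module):
    def __init__(self, p=3.0, eps=1e-6):
        super().__init__()
        self.p, self.eps = nn.Parameter(torch.tensor(p)), eps

    def forward(self, last_hidden_state, attention_mask):
        mask = attention_mask.unsqueeze(-1).float()
        x = (last_hidden_state.clamp(min=self.eps) ** self.p) * mask
        x = x.sum(dim=1) / mask.sum(dim=1).clamp(min=self.eps)
        return x ** (1.0 / self.p)

class GAHead(nn.Module):
    """Concatenate GeM and attention pooled vectors, then regress content/wording."""
    def __init__(self, hidden_size, num_targets=2):
        super().__init__()
        self.gempooler = GeMTextPooling()
        self.attpooler = AttentionPooling(hidden_size)
        self.fc = nn.Linear(hidden_size * 2, num_targets)

    def forward(self, last_hidden_state, attention_mask):
        gemtext_out = self.gempooler(last_hidden_state, attention_mask)
        attpool_out = self.attpooler(last_hidden_state, attention_mask)
        context_vector = torch.cat((gemtext_out, attpool_out), dim=-1)
        return self.fc(context_vector)
</code></pre>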
<h2>2.3 Pseudo labeling for back translated texts.</h2>
<p>Data Augmentation and Pseudo Labelling: Translate summary text to Chinese, then translate back to English, then use a good model to Pseudo label.</p>
<h2>2.4 Weighted loss</h2>
<p>We created a weighted loss for wording and content as follows; the ratios are decided by the RMSE ratio between content and wording.</p>
<pre><code>class WeightedCRMSE(nn.Module):
    def __init__(self):
        super(WeightedCRMSE, self).__init__()
        self.weights = torch.tensor([0.85, 1.15]).to(device)

    def forward(self, y_pred, y_true):
        squared_errors = torch.square(y_pred - y_true)
        # column-wise RMSE per target, then weighted mean (dim/exponent reconstructed from the MCRMSE definition)
        weighted_root_squared_errors = torch.mean(squared_errors, dim=0) ** 0.5 * self.weights
        loss = torch.mean(weighted_root_squared_errors)
        return loss
</code></pre>
<h2>2.5 Ensemble weights based on ridge regression with large alpha</h2>
<p>We trained ridge regression out-of-fold and used the mean of the normalized coefficients as the ensemble weights (a small sketch follows the list below). Both CV and LB scores were better when using ridge regression with a large alpha than with linear regression and other methods.</p>
<ol>
<li>The intercept is zero.</li>
<li>Large alpha (500)</li>
</ol>
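<p>A small sketch of how such ridge-based blend weights can be computed (the exact fold handling is my assumption):</p>
<pre><code>import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

def ridge_blend_weights(oof_preds, y, alpha=500, n_splits=4):
    """oof_preds: (n_samples, n_models) out-of-fold predictions for one target."""
    weights = []
    for tr_idx, _ in KFold(n_splits=n_splits, shuffle=True, random_state=0).split(oof_preds):
        reg = Ridge(alpha=alpha, fit_intercept=False)  # zero intercept, large alpha
        reg.fit(oof_preds[tr_idx], y[tr_idx])
        weights.append(reg.coef_ / reg.coef_.sum())    # normalize each fit's coefficients
    return np.mean(weights, axis=0)                    # mean of the normalized coefficients
</code></pre>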
<h2>2.6 Others</h2>
<p>● Backbone<br>
○ deberta-v3-large<br>
○ electra-large<br>
○ roberta-large<br>
○ roberta-large-squad2<br>
● Model for ensemble<br>
○ LightGBM<br>
○ Ridge regression<br>
● Max length<br>
○ 300<br>
○ 512<br>
○ 768</p>
<h1>3 Not working</h1>
<p>● LSTM head<br>
● Freezing the embeddings and some layers.<br>
● AWP<br>
● MLM<br>
● 30° rotated targets<br>
● Different max length between training and inference.</p>
<h1>4 Jie’s model performance</h1>
<p>● Name convention backbon_maxlength_2&3gramsoverlap_dataaugmentionflag_pooling, e.g.<br>
del_768_23_aug_mmg: DeBERTa-V3-Large, maxlength=768, 2- and 3-gram overlaps were added to the input, data augmentation was used for model training, with Mean-Max-GeM pooling.<br>
● Table below: 1st row: model name, 2nd row: CV(average of 4 folds), 3rd row: CV (oof), 4th row: LB, 5th row: Inference time. <br>
● Ensemble of LB 0.437, 0.432, 0.439, 0.436, 0.433 and 0.436 gives CV: 0.45909, Public LB: 0.424, Private LB: 0.464<br>
● Note: Deberta Large models' inference time is 80 mins, not 125-140 min; all other models are <=36 mins
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F530572%2F231869326fa934ba8ccf68f282a8a442%2FJies%20Model%20Performance.png?generation=1697129193727475&alt=media" alt=""></p>
<h1>5 yyykrk’s model performance</h1>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F530572%2F692c701395194fb917ea8a5be76ef559%2Fyyykrk.png?generation=1697129202595643&alt=media" alt=""></p> |
CommonLit - Evaluate Student Summaries | 333rd place...sharing my findings | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit - Evaluate Student Summaries
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Firstly, huge thanks for organizing this competition and everyone who has shared their ideas in the notebooks and discussion forums!</p>
<p><strong>Approach</strong></p>
<p>My solution is built upon <a href="https://www.kaggle.com/tsunotsuno" target="_blank">@tsunotsuno</a>'s effective idea of stacking Debertav3base with LGBM. I mostly played around with adding more nlp features and tweaking with the model.</p>
<p><strong>What worked</strong></p>
<p>Freezing the first 4 encoder layers gave the best results<br>
Using CLS token output for regression was effective<br>
Adding the 6 features from <a href="https://www.kaggle.com/competitions/feedback-prize-english-language-learning/overview" target="_blank">FB3 commonlit competition</a> gave a good boost<br>
Adding textstat features such as <code>automated_readability_index, smog_index, dale_chall_readability_score, gunning_fog, reading_time, difficult_words</code> showed improvement (see the sketch below)<br>
Adding cosine similarity between prompt text and text embeddings also helped to a small extent<br>
Removing <code>length_ratio</code> also showed small improvement</p>
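<p>The textstat features mentioned above can be computed roughly like this (the dataframe join is my own assumption about how they were attached):</p>
<pre><code>import pandas as pd
import textstat

def readability_features(text):
    return {
        "automated_readability_index": textstat.automated_readability_index(text),
        "smog_index": textstat.smog_index(text),
        "dale_chall_readability_score": textstat.dale_chall_readability_score(text),
        "gunning_fog": textstat.gunning_fog(text),
        "reading_time": textstat.reading_time(text),
        "difficult_words": textstat.difficult_words(text),
    }

# summaries_df = summaries_df.join(summaries_df["text"].apply(readability_features).apply(pd.Series))
</code></pre>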
<p><strong>What didn't work</strong></p>
<p>Freezing embedding layers deteriorated performance<br>
Mean pooling/ GeM pooling<br>
Autocorrect didn't improve score<br>
Tried other models such as Roberta, Electra which were not as good as Debertav3base. Even ensembling Debertav3base with them couldn't help<br>
Training separate models for each target was not effective. In fact, content scores improved when trained together with wording scores</p>
<p>I wanted to include <code>prompt_text</code> and also try using Debertav3large, however it didn't turn out well this time :(</p>
<p>Although I couldn't earn a medal, my learnings through this competition have been immense and I'm very happy to have come this far. I've already learnt a lot from the teams who have published their solutions and I look forward to participating in the next competitions!</p> |
Kaggle - LLM Science Exam | 37th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kaggle - LLM Science Exam
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h1>37th Place Solution</h1>
<p>First off, I would like to thank my teammate, Simon (@simonveitner) for his collaboration. He is definitely responsible for most of the work. Second, I would like to thank all the people who contributed datasets and public notebooks. It seemed like any idea we had was published publicly within the next week (looking at you <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>, <a href="https://www.kaggle.com/mgoksu" target="_blank">@mgoksu</a>, <a href="https://www.kaggle.com/mbanei" target="_blank">@mbanei</a>, and <a href="https://www.kaggle.com/itsuki9180" target="_blank">@itsuki9180</a>). This was my first foray into NLP and LLMs, and it has been quite instructive.</p>
<p>Our solution is based on high quality RAG and an ensemble of models. We pulled results from <a href="https://www.kaggle.com/simjeg" target="_blank">@simjeg</a>, <a href="https://www.kaggle.com/jasonzhang2022" target="_blank">@jasonzhang2022</a>, <a href="https://www.kaggle.com/itsuki9180" target="_blank">@itsuki9180</a>, and <a href="https://www.kaggle.com/mgoksu" target="_blank">@mgoksu</a> for modelling. We would like to thank <a href="https://www.kaggle.com/radek1" target="_blank">@radek1</a> for his datasets, especially when he included the retrieval context, and <a href="https://www.kaggle.com/yalickj" target="_blank">@yalickj</a> for his 300 high quality samples. We also used <a href="https://www.kaggle.com/itsuki9180" target="_blank">@itsuki9180</a> for his optimization of ensemble weights.</p>
<h2>Context Retrieval</h2>
<p>Early on, it was clear from the way the train and test datasets were generated we needed some context to do the reverse and then would rely on the language model to do reasoning over the context. Given that it was an easier task to pick from 5 answers than to generate a question in the first place, we had hope that the smaller models would be able to match the work of the larger LLMs used for generating questions. As is pointed out by <a href="https://www.kaggle.com/phillippsinger" target="_blank">@phillippsinger</a> <a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam/discussion/446240" target="_blank">here</a>, there is a limit to the best you can do because GPT-3.5 occasionally gets it wrong, as for example the <em>triskeles</em> question pointed out in discussions early in the competition (sorry, I could not find the original discussion now).</p>
<p>Initial work focused on the 60k dataset from <a href="https://www.kaggle.com/nbroad" target="_blank">@nbroad</a>, but we soon realized it was missing many key articles. We then moved to the larger full wikipedia set, which would allow us to train with non-science related questions. Further research showed the full set was missing articles, so we parsed and uploaded our own full set. After some initial small successes with a custom reranking model and some tests with section sized inputs, <a href="https://www.kaggle.com/mbanei" target="_blank">@mbanei</a> published the now famous longformer notebook and 270K article set, and we abandoned this work to focus on model building. </p>
<p>Simon did the hard work of testing different embedding models, and we settled on <code>bge-base-en-v1-5</code>, uploaded by <a href="https://www.kaggle.com/wuwenmin" target="_blank">@wuwenmin</a>. Incidentally this did better than the large model. We then used <code>LuceneSearcher</code> in the pyserini package for the lookup. </p>
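<p>A minimal sketch of that lookup step with a recent pyserini version (the index name is hypothetical, and using the stored raw field assumes the index keeps the passage text):</p>
<pre><code>from pyserini.search.lucene import LuceneSearcher

searcher = LuceneSearcher("wiki_lucene_index")   # hypothetical local Lucene index of wiki passages

def retrieve_context(query, k=5):
    hits = searcher.search(query, k=k)
    return " ".join(hit.raw for hit in hits)     # hit.raw: stored passage text
</code></pre>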
<h2>Models and Ensembling</h2>
<p>There is not much to add here. We did look into some multiple choice models besides the DebertaV2ForMultipleChoice, but did not get anywhere. We settled on an ensemble of 3 different public notebooks using context lookup with the DebertaV2ForMultipleChoice. We also used one model without any context.</p>
<p>Using the 200 training + 300 high quality samples as our training set for ensembling, we used the ensemble weight optimization introduced by <a href="https://www.kaggle.com/itsuki9180" target="_blank">@itsuki9180</a>. </p>
<p>With the probability table results from the ensemble, we calculated how much time we had left in the 9 hour window, and calculated the max number of samples we could redo (basically 800 samples per 2 hours left). We used a Platypus 70B model introduced by <a href="https://www.kaggle.com/simjeg" target="_blank">@simjeg</a>, after retraining the <code>lm_head</code> on a limited number of samples. <em>This turned out to be the game changer, as this model performed better on the private board than the public leading to a huge jump in rankings from public leaderboard to private leaderboard!</em> </p>
<h2>Breakdown</h2>
<table>
<thead>
<tr>
<th>Addition</th>
<th>Score Change</th>
</tr>
</thead>
<tbody>
<tr>
<td>BM25 context change</td>
<td>+0.03</td>
</tr>
<tr>
<td>BM25 base model</td>
<td>+ 0.02</td>
</tr>
<tr>
<td>DebertaV3 ensemble</td>
<td>+ 0.02</td>
</tr>
<tr>
<td>Platypus 70B on unsure</td>
<td>+ 0.02</td>
</tr>
</tbody>
</table>
<h2>What didn't work</h2>
<ol>
<li><a href="https://ink.library.smu.edu.sg/cgi/viewcontent.cgi?article=8618&context=sis_research" target="_blank">Evidence Filter models</a>: We altered the DebertaV2 architecture to include these, but they did not improve the result. Perhaps it was poor training methodology, we had hoped that allowing some contrast between the answers would improve scores. </li>
<li>Sparse attention longer context models such as <code>google/bigbird-roberta-large</code>. These also did not train appropriately in our experiments, although we did not spend much time on them.</li>
</ol> |
CommonLit - Evaluate Student Summaries | 69th Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit - Evaluate Student Summaries
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Firstly, thanks to Kaggle and the hosts for this interesting competition, this is the first competition that I ever commit to, and I'm glad that it paid off.</p>
<p>As always, I knew that diversity wins, so I focused on building a diverse set of models. My final submission consisted of two LGBM models on top of Deberta-Base predictions and raw Deberta-Large predictions, with weights optimized on oofs. </p>
<p><strong>LGBM Models</strong><br>
These two models are based on Feature Engineering ideas and model from <a href="https://www.kaggle.com/tsunotsuno" target="_blank">@tsunotsuno</a> (thanks!). Each model was trained with a different set of features and hyperparameters, each scoring around 0.505 cv, 0.447 public and 0.480 private. A simple average between these two models scores 0.493 cv , 0.442 lb and a bronze zone finish. </p>
<p><strong>Stabilizing Deberta-Large Training</strong><br>
I knew that this would be key for diversity. However, at first, the training seemed unstable and the predictions untrustworthy; the model would converge at different epochs for different folds. So I spent the last week trying to stabilize the training pipeline, and these are the key things that helped:</p>
<ul>
<li>Re-Initializing last layers</li>
<li>Attention Pooling seemed to work best</li>
<li>Layer-wise Learning Rate Decay (see the sketch after this list)</li>
<li>I've also experimented with AWP but with no success</li>
</ul>
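<p>A minimal layer-wise learning rate decay sketch; the <code>backbone</code>/<code>head</code> attribute names are assumptions about the model wrapper, not the exact code used here.</p>
<pre><code>import torch

def llrd_param_groups(model, base_lr=2e-5, decay=0.9, head_lr=1e-4, weight_decay=0.01):
    """Top encoder layers get the largest lr; each lower layer is scaled by `decay`."""
    groups = [{"params": model.head.parameters(), "lr": head_lr, "weight_decay": weight_decay}]
    lr = base_lr
    for layer in reversed(model.backbone.encoder.layer):
        groups.append({"params": layer.parameters(), "lr": lr, "weight_decay": weight_decay})
        lr *= decay
    groups.append({"params": model.backbone.embeddings.parameters(),
                   "lr": lr, "weight_decay": weight_decay})
    return groups

# optimizer = torch.optim.AdamW(llrd_param_groups(model))
</code></pre>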
<p>With this in mind, I managed to somewhat stabilize the training, I trained the final model for 4 epochs with 1 epoch for warmup and always picking the last epoch as the final checkpoint.</p>
<p><strong>Final Ensemble</strong><br>
The final weights of the 3 models were optimized on oofs on each of the targets, I managed to get a small boost for the cv (0.489) but a slightly worse lb (0.445). But that performed the best on private and got me a 0.473 ( Trust your cv!). </p> |
CommonLit - Evaluate Student Summaries | 30th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit - Evaluate Student Summaries
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>My thank you to Kaggle, the competition hosts, and the Kaggle community for a fun and educational three months. I'm really looking forward to reading everyone's solutions and learning from them. The 30th ranked submission was an ensemble of 6 deberta-v3-large models.</p>
<p>Throughout this competition, I worried about overfitting. My earliest taste of overfitting was when I reached a gold position briefly with 0.461 public LB score, by not splitting on prompt-id, and by doing a random 80/20 train/val split. After spending a day being thrilled at the gold symbol in my public LB rank, I started thinking that it was all a mirage, and I was overfitting (and indeed the private LB score of that submission confirms this). Back then the model did not even train if I split by prompt-id. It was some more weeks before I learned to train transformer models in a stable way, with a somewhat steadily decreasing validation loss. I did not use many of the features from the top-scoring public notebooks, because I feared overfitting. I don't know if it was a good decision, but the shakedown was only by 5 places, and I don't regret it.</p>
<p><strong>Things that worked:</strong></p>
<ul>
<li><p>Training separately for content & wording scores. I trained 8 models (4 per target, since there were 4 folds) for each "full model". For CV, I averaged out-of-fold predictions. For predicting on test data, I averaged the 4 models for each target. </p></li>
<li><p>It was 3 levels of modeling: <br>
1) deberta-v3-large models with many differences (pooling techniques, different inputs, frozen vs unfrozen, different max-len, etc)<br>
2) Make LGBM predictions from each deberta's output, plus features like text length etc. The features list was the same for all models.<br>
3) Both deberta and LGBM outputs of 6 models were fed into a hill climbing algorithm (with negative weights) to predict the final output (a minimal weight-search sketch follows this list). Hill climbing, being a greedy algorithm, was sensitive to the order of inputs, and I tried to find alternatives but none of them worked as well.</p></li>
<li><p>Layerwise learning rate decay - stabilizing transformer training using this method, with linear or cosine scheduler with warmup was the foundation of all the models. </p></li>
<li><p>Using the input text after replacing n-grams shared with the prompt text with the "_" character. The input to most of the models was prompt_question + separator + prompt_title + [start] + modified input text + [end]. One model uses a modified prompt_text appended at the end -- I've explained that below. Another model had [P] prepended to each sentence in summary_text.</p></li>
<li><p>Taking the encoder output of only some tokens, like [start] and [end], or [start] to [end], or gem pool of only [P] tokens.</p></li>
<li><p>weighted layer pooling (all layers), gem pooling & mean max pooling</p></li>
<li><p>The head was the same for all models: 2 linear layers with tanh inbetween.</p></li>
<li><p>freezing embeddings & layers. I froze between 12 and 18 layers, not including embeddings layer, which was frozen for all models.</p></li>
<li><p>pseudo labeling: I did it by training T5 on competition data, using it to generate texts, which I labeled using my best ensemble, then trained fresh models first on PL data, then on competition data. The first time I did it, it was leaky & LB score tanked. The second implementation gave a small LB boost (2-model ensemble score went from 0.431 to 0.430), but I don't know if it was worth all the trouble.</p></li>
<li><p>mixed precision training seemed to give results that were as good as full precision training, while being much faster</p></li>
<li><p>I included prompt text like this: Gather sentences from prompt_text that are quoted by at least one student, who had written a summary that was at least 20% original (as calculated by ngram matching). The ensemble of 6 models included one model that did this. That model's private LB score was 0.48 (public LB was 0.443). I don't know how useful it was. I was just happy to have found a way to include prompt_text.</p></li>
<li><p>Hill climbing</p></li>
</ul>
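<p>The hill climbing referenced above, in a bare-bones form (my sketch, not the author's implementation); negative weights are allowed and a lower metric value is assumed to be better:</p>
<pre><code>import numpy as np

def hill_climb(preds, y, metric, step=0.05, max_iter=200):
    """preds: (n_samples, n_models) out-of-fold predictions; metric(y_true, y_pred) -> lower is better."""
    n_models = preds.shape[1]
    weights = np.ones(n_models) / n_models
    best = metric(y, preds @ weights)
    for _ in range(max_iter):
        improved = False
        for i in range(n_models):
            for delta in (step, -step):          # negative steps let weights drop below zero
                cand = weights.copy()
                cand[i] += delta
                score = metric(y, preds @ cand)
                if score < best:
                    best, weights, improved = score, cand, True
        if not improved:
            break
    return weights, best
</code></pre>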
<p><strong>Things that did not work:</strong></p>
<ul>
<li>Re-initializing transformer layers</li>
<li>AWP</li>
<li>adversarial attack through SIFT</li>
<li>MLM</li>
<li>SWA (stochastic weight averaging - maybe my implementation was wrong)</li>
<li>ridge regression or xgb for ensembling</li>
<li>auxiliary loss calculated for classifying the summary into discretized score buckets</li>
<li>RAPIDS SVR using embeddings of various models (without training on competition data)</li>
<li>many features I'd calculated, such as mean sentence transformer cosine similarity scores (sentence-wise) between prompt_text and summary_text, improved CV score but made LB score much worse</li>
<li>Trying to make vicuna-13b generate various features (including "gold" summaries) that deberta or LGBM could use did not work. Making chatgpt guess the individual rubric scores that must have gone into the calculation of wording & content also did not work.</li>
<li>I felt that if a summary provided on-topic, original insight, it was scored higher. I spent a lot of energy in trying to predict this using various pretrained models. I also generated lots of chatgpt or llama-generated data for fine tuning deberta for this purpose. It did not work at all. I think I spent the first month mostly on this pursuit & in making vicuna-13b useful. By doing this I learned a lot about the datasets that are used for benchmarking LLMs, about finetuning llama and sentence transformers, etc. So maybe it was useful (I doubt it lol). </li>
<li>One alarming thing that did not work till the end was Clipping grad norm. The model stopped training if I clipped.</li>
<li>Using a model trained on previous feedback competition data to guess syntax, cohesion, vocabulary, etc scores of competition data didn't work. But I tried it early on, and back then I didn't know that much about various transformer techniques that I learned later. So maybe it would've worked if I'd tried it near the end.</li>
<li>I picked only 2 submissions because I was going to pick one submission that I'd made last minute, and it errored out, and I forgot to pick an alternative.</li>
<li>I wish I'd trusted the CV score all along. It correlated the best with private LB.</li>
</ul>
<p><strong>Credits & Gratitude:</strong></p>
<ul>
<li>I learned about LLRD from Vignesh in Sanyam's interview: <a href="https://www.youtube.com/watch?v=XLaq-boyAGk" target="_blank">https://www.youtube.com/watch?v=XLaq-boyAGk</a> and then by reading <a href="https://www.kaggle.com/torch" target="_blank">@torch</a>'s wonderful notebooks - <a href="https://www.kaggle.com/code/rhtsingh/on-stability-of-few-sample-transformer-fine-tuning" target="_blank">https://www.kaggle.com/code/rhtsingh/on-stability-of-few-sample-transformer-fine-tuning</a></li>
<li>Learned about pooling techniques from <a href="https://www.kaggle.com/code/rhtsingh/utilizing-transformer-representations-efficiently/notebook" target="_blank">https://www.kaggle.com/code/rhtsingh/utilizing-transformer-representations-efficiently/notebook</a></li>
<li>Learned about GeM pool from Team hydrogen's notebooks ( <a href="https://www.kaggle.com/code/philippsinger/team-hydrogen-efficiency-prize-1st-place" target="_blank">https://www.kaggle.com/code/philippsinger/team-hydrogen-efficiency-prize-1st-place</a> )</li>
<li>Nvidia's videos with Chris Deotte, Christof Henkel, Jean-Francois Puget and Ahmet Erdem were super helpful for me. I learned how to do leak-proof PL (and other things) from them. Despite this, my first implemention of PL was leaky, but I was able to fix it because frankly, I learned from the best: <a href="https://www.youtube.com/watch?v=CwDKy0EKwHE" target="_blank">https://www.youtube.com/watch?v=CwDKy0EKwHE</a> and <a href="https://www.youtube.com/watch?v=PXc_SlnT2g0" target="_blank">https://www.youtube.com/watch?v=PXc_SlnT2g0</a> </li>
<li>Learned about hill climbing from <a href="https://www.kaggle.com/competitions/feedback-prize-english-language-learning/discussion/369609" target="_blank">https://www.kaggle.com/competitions/feedback-prize-english-language-learning/discussion/369609</a></li>
</ul>
<p><strong>General Educational Material:</strong><br>
I started learning ML in april, and until this competition, I was very intimidated by kaggle. If someone else like me is reading this, I highly recommend these courses. I found all them very helpful and educational: </p>
<ul>
<li>Andrew Ng's introductory course <a href="https://www.coursera.org/specializations/machine-learning-introduction" target="_blank">https://www.coursera.org/specializations/machine-learning-introduction</a></li>
<li>Fast.ai (book and videos) <a href="https://www.fast.ai/" target="_blank">https://www.fast.ai/</a></li>
<li>Karpathy's Zero to Hero course <a href="https://www.youtube.com/watch?v=VMj-3S1tku0&list=PLAqhIrjkxbuWI23v9cThsA9GvCAUhRvKZ" target="_blank">https://www.youtube.com/watch?v=VMj-3S1tku0&list=PLAqhIrjkxbuWI23v9cThsA9GvCAUhRvKZ</a></li>
<li>Kaggle Learn <a href="https://www.kaggle.com/learn" target="_blank">https://www.kaggle.com/learn</a></li>
<li>Kaggle Playground Competitions</li>
<li>Aladdin Pearson's youtube channel <a href="https://www.youtube.com/@AladdinPersson" target="_blank">https://www.youtube.com/@AladdinPersson</a> : He implements papers like UNet, Gans etc, and it makes the papers & code very approchable. </li>
<li>I found it very helpful to use weights & biases for tracking things and highly recommend them, especially for beginners like me.</li>
<li>I used paperspace & jarvislabs.ai for training.</li>
</ul> |
CommonLit - Evaluate Student Summaries | 167th Place Solution [Deberta Large with LoRA] | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit - Evaluate Student Summaries
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h3>Submission Overview</h3>
<table>
<thead>
<tr>
<th>Models</th>
<th>CV</th>
<th>Public</th>
<th>Private</th>
</tr>
</thead>
<tbody>
<tr>
<td>Deberta base + lgb</td>
<td>0.508</td>
<td>0.447</td>
<td>0.485</td>
</tr>
<tr>
<td>Deberta large LoRA + lgb</td>
<td>0.501</td>
<td>0.441</td>
<td>0.495</td>
</tr>
<tr>
<td>Result blend + lgb</td>
<td>0.492</td>
<td>0.435</td>
<td>0.482</td>
</tr>
</tbody>
</table>
<p><br></p>
<h3>Things which didn't work</h3>
<p>Fine-tuning GPT-like models for regression<br>
SFT Llama for direct target generation<br>
Additional data from ChatGPT, SFT Llama2<br>
<br></p>
<h3>Example code for fine-tuning with LoRA</h3>
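<p>A minimal sketch of what such a setup can look like with <code>peft</code>; the hyperparameters and target modules below are illustrative assumptions, and the author's actual configuration is in the notebook linked below.</p>
<pre><code>from transformers import AutoModelForSequenceClassification
from peft import LoraConfig, TaskType, get_peft_model

model = AutoModelForSequenceClassification.from_pretrained(
    "microsoft/deberta-v3-large", num_labels=2)               # content + wording heads
lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    r=16, lora_alpha=32, lora_dropout=0.1,                    # illustrative values
    target_modules=["query_proj", "key_proj", "value_proj"],  # DeBERTa-v2/v3 attention projections
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
</code></pre>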
<p><a href="https://www.kaggle.com/code/alexcei/fine-tuning-deberta-large-with-lora-adapter" target="_blank">fine-tuning-deberta-large-with-lora-adapter</a></p> |
HuBMAP - Hacking the Human Vasculature | 8th Place Solution: single model with heavy augmentation + tuned threshold | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: HuBMAP - Hacking the Human Vasculature
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to Hubmap for hosting such a nice competition and to the many helpful notebooks and discussions.</p>
<ul>
<li>Summary<ul>
<li>Train using all train data with the yolov8x-seg model with strong augmentation</li>
<li>The threshold for converting mask prob to binary seems to be a key for scores</li></ul></li>
<li>Training<ul>
<li>yolov8x-seg model (with settings below) with all train data</li>
<li>Trained all 3 classes </li>
<li>2-fold validation <ul>
<li>fold1: wsi1, 3</li>
<li>fold2: wsi2, 4</li>
<li>The key insight for me here was that the optimal value of the mask threshold varies considerably with fold1, 2. For example, in one experiment, the following scores were obtained<ul>
<li>mask_threshold, fold1 val , fold2 val </li>
<li>0.2, <strong>0.349</strong>, 0.35</li>
<li>0.5, 0.211, <strong>0.481</strong></li></ul></li>
<li>Therefore, in final 2 submission, I thought it was a safe bet to choose different thresholds for masks</li></ul></li>
<li>The yolov8 setup is as follows</li></ul></li>
</ul>
<pre><code>
</code></pre>
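<p>As an illustration only (these are assumed values, not the author's settings), a yolov8x-seg training run with strong augmentation via the ultralytics API can look like this:</p>
<pre><code>from ultralytics import YOLO

model = YOLO("yolov8x-seg.pt")
model.train(
    data="hubmap.yaml",                 # hypothetical dataset config file
    imgsz=512, epochs=50, batch=8,
    degrees=90.0, translate=0.1, scale=0.5,
    fliplr=0.5, flipud=0.5,
    mosaic=1.0, mixup=0.1,
    hsv_h=0.015, hsv_s=0.7, hsv_v=0.4,  # color jitter
)
</code></pre>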
<ul>
<li>Inference<ul>
<li>image size: 768 (better than 512, why?)</li>
<li>as noted above, two different thresholds were used<ul>
<li>thresh 0.5: private <strong>0.56</strong>, public: 0.391</li>
<li>thresh 0.2: private: 0.369, public: 0.506</li></ul></li></ul></li>
<li>Did not work for me<ul>
<li>combined with semantic segmentation</li>
<li>Training on large image sizes (768, 1024)</li>
<li>tta(rot90) (a bug in my implementation?)</li></ul></li>
<li>Did not try<ul>
<li>Pseudo labeling in instance segmentation model</li>
<li>Stain augmentation</li>
<li>WBF</li>
<li>And many more models</li></ul></li>
</ul> |
Kaggle - LLM Science Exam | 45th Place solutions , learnings , trials and what could have been | 0.919/0.917 submitted too late :D | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kaggle - LLM Science Exam
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, thanks to Kaggle for hosting this competition that opened the door to trying LLMs and solving problems deemed to be solvable by today's LLMs. Thanks to the whole community for generously sharing ideas and code, without which most of us, at least, might not have progressed so far.<br>
Also thanks to all the team <a href="https://www.kaggle.com/urvishp80" target="_blank">@urvishp80</a> <a href="https://www.kaggle.com/ayaanjang" target="_blank">@ayaanjang</a> <a href="https://www.kaggle.com/tonymarkchris" target="_blank">@tonymarkchris</a> as they joined and helped along the way. We were lucky to fix our CV in time, but not in time to submit the 0.917/0.919 private LB scores; still, it was good to have figured out the former on time :) </p>
<h1>Datasets</h1>
<h2>Train datasets</h2>
<p>Below are the datasets which worked best for us</p>
<ol>
<li>Chris 60k with context</li>
<li>Generating context1 and context2 with <a href="https://www.kaggle.com/mbanaei" target="_blank">@mbanaei</a>'s TF-IDF approach on the 270k data and Cohere.</li>
</ol>
<h2>Validation</h2>
<p>Thankful for this dataset <a href="https://www.kaggle.com/datasets/yeoyunsianggeremie/validation-500" target="_blank">https://www.kaggle.com/datasets/yeoyunsianggeremie/validation-500</a> from <a href="https://www.kaggle.com/yeoyunsianggeremie" target="_blank">@yeoyunsianggeremie</a>. This helped us a lot in getting correlation between CV and LB.</p>
<h2>Retrieval /Infer</h2>
<ol>
<li>Openbook MiniLM; we also tried mpnet-base-v2, which worked well but was not part of our final submission [0.834 LB with openbook on that]. </li>
<li>MB tfidf technique on wiki data created with <a href="https://www.kaggle.com/code/nbroad/create-science-wikipedia-dataset" target="_blank">https://www.kaggle.com/code/nbroad/create-science-wikipedia-dataset</a>. - worked well</li>
<li>MB tfidf technique on cohere by MB - worked well</li>
<li>we didn't use the 270k MB dataset for inference retrieval.</li>
<li>We found the sorting order in the MB notebook was reversed (ascending by score); we made a simple correction for this, which helped</li>
</ol>
<pre><code>context1 =
</code></pre>
<h1>Training</h1>
<ul>
<li>CV 0.893 (on the 500 dataset): Deberta on 60k with context</li>
<li>CV 0.8963 (on the 500 dataset): Deberta on context1 and context2 TF-IDF context data, trained <a href="https://huggingface.co/docs/datasets/stream#interleave" target="_blank">interleaved</a> with the Hugging Face datasets utility (a minimal interleaving sketch follows this list) .. <a href="https://www.kaggle.com/code/gauravbrills/customdeberta-w-context-seq-mix/notebook" target="_blank">https://www.kaggle.com/code/gauravbrills/customdeberta-w-context-seq-mix/notebook</a> - our best model; interleaving seems to have helped generalization a lot </li>
<li>CV 0.896 (on the 500 dataset): Deberta trained only on context1 generated via TF-IDF on 60k</li>
<li>CV 0.836 (on the 500 dataset): Deberta with 15 frozen layers on the 60k dataset</li>
<li>CV 0.92: Llama, but only on the 100-question dataset and not good on LB; trained with PEFT for multiple-choice sequence classification.</li>
</ul>
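<p>A minimal sketch of the interleaving step mentioned above (the dataframe names are assumptions):</p>
<pre><code>from datasets import Dataset, interleave_datasets

def build_interleaved_dataset(df_context1, df_context2):
    ds_ctx1 = Dataset.from_pandas(df_context1)   # questions paired with TF-IDF context 1
    ds_ctx2 = Dataset.from_pandas(df_context2)   # questions paired with context 2
    return interleave_datasets([ds_ctx1, ds_ctx2], probabilities=[0.5, 0.5], seed=42)
</code></pre>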
<p><strong>Training graph of our best model (the interleaved-dataset one, CV 0.8963); the graph below shows 0.8917 CV during training</strong><br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F768166%2F494b9dd2fe5620dfcb63eb0de5e6d166%2Ftrain_0.8917.png?generation=1697333461425893&alt=media" alt=""></p>
<p>Params </p>
<pre><code>warmup_ratio=,
learning_rate=,
per_device_train_batch_size=,
per_device_eval_batch_size=,
num_train_epochs=,
FREEZE_LAYERS =
</code></pre>
<h1>Final ensemble and infer</h1>
<p>We did struggle with ensembling until we had the 500 dataset; we were just mixing things up with context and non-context models, and sometimes Llama, but were never confident in our submission.<br>
Once we got the 500 dataset we just picked our top 3 models and ensembled them with <code>average</code> and <code>optuna</code>-weighted blending.<br>
<a href="https://www.kaggle.com/code/gauravbrills/t2-0-901-openbook-tfidf?scriptVersionId=146318816" target="_blank">https://www.kaggle.com/code/gauravbrills/t2-0-901-openbook-tfidf?scriptVersionId=146318816</a> </p>
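<p>A sketch of the Optuna-weighted blending step (the <code>map_at_3</code> scorer is an assumed helper, not shown here):</p>
<pre><code>import numpy as np
import optuna

def tune_blend_weights(model_probs, labels, map_at_3, n_trials=200):
    """model_probs: list of (n_questions, 5) probability arrays; map_at_3(probs, labels) -> score."""
    def objective(trial):
        w = np.array([trial.suggest_float(f"w{i}", 0.0, 1.0) for i in range(len(model_probs))])
        w /= w.sum()
        blended = sum(wi * p for wi, p in zip(w, model_probs))
        return map_at_3(blended, labels)
    study = optuna.create_study(direction="maximize")
    study.optimize(objective, n_trials=n_trials)
    return study.best_params
</code></pre>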
<p>We were lucky to get some in for the final submission, but we missed submitting one last one which could have scored 0.917/0.919 on the private LB, as we didn't have time.</p>
<table>
<thead>
<tr>
<th>cv</th>
<th>Private LB</th>
<th>Public LB</th>
<th>description</th>
</tr>
</thead>
<tbody>
<tr>
<td>0.9036</td>
<td><strong>0.909</strong></td>
<td>0.903</td>
<td>before deadline weighted slightly better</td>
</tr>
<tr>
<td>0.9017</td>
<td><strong>0.909</strong></td>
<td>0.905</td>
<td>before deadline SELECTED</td>
</tr>
<tr>
<td>0.91</td>
<td><strong>0.919297</strong></td>
<td>0.917</td>
<td>Missed submitting too late 😃</td>
</tr>
<tr>
<td>0.908</td>
<td><strong>0.917</strong></td>
<td>0.912</td>
<td>Missed submitting too late 😃</td>
</tr>
</tbody>
</table>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F768166%2F9a42967e5a51c866e27cc0bc18cbc553%2Fllm_sol.png?generation=1697337180495104&alt=media" alt=""></p>
<h1>What tried, did not work or could have worked</h1>
<ul>
<li><strong>Rerankers</strong>: We tried a bunch of rerankers using the ST MARCO ones, but they did not help on the public LB, though they seem to have been good on private. We also researched ColBERT and <a href="https://huggingface.co/ibm/re2g-reranker-nq" target="_blank">https://huggingface.co/ibm/re2g-reranker-nq</a>, which other teams tried, but somehow lost confidence</li>
<li>Training on different contexts (openbook and TF-IDF); we were also trying to generate context with openbook to check how well that works, but we could not finish in time </li>
<li>Dataset cleaning: we did fiddle around with mwparserfromhell, LatexNodes2Text and Beautiful Soup, but just lost patience and didn't go ahead with cleaning the wiki data from scratch. It seems some top teams did explore this and implemented it better.</li>
</ul>
<pre><code>from pylatexenc.latex2text import LatexNodes2Text
from bs4 import BeautifulSoup
import mwparserfromhell

def clean_wiki_text(text):  # function name assumed; the original name was not preserved
    l2t = LatexNodes2Text()
    code = mwparserfromhell.parse(text)
    print(code.filter_templates())
    for template in code.filter_templates():
        print(template)
        code.replace(template, l2t.latex_to_text(str(template)))
    text = str(code)
    # text = text.replace(..., ...).replace(..., ...)  # original replacement targets were not preserved
    return text
</code></pre>
<ul>
<li>Using faiss with different embeddings, and simple cosine similarity via sentence-transformers. The faiss indexes over different embeddings weren't giving scores as good as TF-IDF, so we dropped trying other embeddings even though it was way faster. Cosine similarity was a bit GPU-intensive, so we dropped it too, though we think its retrieval was good. These were all tried with MiniLM, and we could not finish trying BGE or e5 in time :( .</li>
<li><strong>LLMs</strong>: Urvish in our team did crack how to use Llama for multiple-choice sequence classification; unfortunately our inference was taking around 4-5 hours, so in the end we could not add it to our ensemble. Plus we were not able to improve the Llama CV much. <br>
Custom llama by <a href="https://www.kaggle.com/urvishp80" target="_blank">@urvishp80</a> </li>
</ul>
<pre><code>class CustomLlamaModel(nn.Module):
    def __init__(self, backbone, num_labels=1, use_gradient_checkpointing=False):
        super(CustomLlamaModel, self).__init__()
        self.model = backbone
        self.model.config.use_cache = False
        self.config = self.model.config
        self.num_labels = num_labels
        if use_gradient_checkpointing:
            self.model.gradient_checkpointing_enable()
        self.pooler = MeanPooling()
        self.dense = nn.Linear(self.config.hidden_size, self.config.hidden_size)  # output size assumed
        self.dropout = nn.Dropout(0.1)  # dropout probability assumed
        self.classifier = nn.Linear(self.config.hidden_size, 1, bias=False)

    def forward(self, input_ids=None, attention_mask=None, token_type_ids=None,
                position_ids=None, inputs_embeds=None, labels=None):
        num_choices = input_ids.shape[1] if input_ids is not None else inputs_embeds.shape[1]
        batch_size = input_ids.shape[0] if input_ids is not None else inputs_embeds.shape[0]
        flat_input_ids = input_ids.view(-1, input_ids.size(-1)) if input_ids is not None else None
        flat_position_ids = position_ids.view(-1, position_ids.size(-1)) if position_ids is not None else None
        flat_token_type_ids = token_type_ids.view(-1, token_type_ids.size(-1)) if token_type_ids is not None else None
        flat_attention_mask = attention_mask.view(-1, attention_mask.size(-1)) if attention_mask is not None else None
        flat_inputs_embeds = (
            inputs_embeds.view(-1, inputs_embeds.size(-2), inputs_embeds.size(-1))
            if inputs_embeds is not None
            else None
        )
        outputs = self.model.model(input_ids=flat_input_ids, attention_mask=flat_attention_mask,
                                   output_attentions=False, output_hidden_states=True)
        last_hidden_state = outputs[1][-1]  # last entry of the hidden-states tuple (index assumed)
        x = self.pooler(last_hidden_state, flat_attention_mask)
        x = self.dropout(x)
        logits = self.classifier(x)
        reshaped_logits = logits.view(batch_size, num_choices)
        loss = None
        if labels is not None:
            loss_fct = nn.CrossEntropyLoss()
            loss = loss_fct(reshaped_logits, labels)
        if self.model.config.output_attentions:
            attentions = outputs.attentions
        else:
            attentions = None
        if self.model.config.output_hidden_states:
            hidden_states = outputs.hidden_states
        else:
            hidden_states = None
        return SequenceClassifierOutput(
            loss=loss,
            logits=reshaped_logits,
            hidden_states=hidden_states,
            attentions=attentions,
        )
</code></pre>
<p>In the end it was a great learning experience for all of us, and we are still learning a lot from this competition to use in the real world. We will update this as we recall other things we tried, or we may write a blog.</p> |
CommonLit - Evaluate Student Summaries | 12th Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit - Evaluate Student Summaries
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you for organizing such an interesting competition!<br>
I was unable to sleep due to the fear of shakedown in the last part of the competition, but I am happy to have remained in the gold medal position.</p>
<p>I would like to briefly summarize my methodology.</p>
<h1>Model & Result</h1>
<p>My best private solution is an ensemble of a long model with prompt_text and a short model without prompt_text.<br>
The former is intended to evaluate that the text is summarized correctly, and latter to evaluate that sentence structure and word quality is good or bad.</p>
<h2>Long Model</h2>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3310894%2F0c8e645a91c267f1452d1356066ca558%2Flong.png?generation=1697338005852669&alt=media"></p>
<p>Using prompt_text and a long max_len worked very well for me. But the inference time is long (over 6 hours) and I was not able to ensemble it with other long models. So, using the output of the pooling layer, I trained LightGBM to improve robustness.</p>
<h2>Short Model</h2>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3310894%2F0626c69c7fba14b4528ea768288fe45c%2Fshort.png?generation=1697338755742782&alt=media"></p>
<p>Most of this model is based on <a href="https://www.kaggle.com/code/tsunotsuno/debertav3-lgbm-no-autocorrect" target="_blank">a very nice public notebook</a>.<br>
The changes are as follows</p>
<ul>
<li>change deberta-base to deberta-v3-large</li>
<li>non text cleaning</li>
<li>drop prompt_length</li>
<li>change definition of "overlap"</li>
<li>freeze top 12 deberta layers / ensemble with a variant without frozen layers</li>
</ul>
<h2>Ensemble Model</h2>
<p>I simply ensemble (Long + Short)/2<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3310894%2F420023ddb770abcdfe7ad91ac5fec276%2Fens.png?generation=1697339278226917&alt=media"></p>
<h1>Not working for me</h1>
<ul>
<li>Text Cleaning</li>
<li>Backbone except deberta-v3-large</li>
<li>MLM</li>
<li>AWP</li>
<li>Augmentation with ChatGPT</li>
<li>SVR, xgboost (As an alternative to lightgbm)</li>
</ul>
<p>Thanks.</p> |
ICR - Identifying Age-Related Conditions | My Summary Notes on the 'ICR - Identifying Age-Related Conditions' Competition | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h1>My Summary Notes on the 'ICR - Identifying Age-Related Conditions' Competition</h1>
<h3>About the Competition</h3>
<ul>
<li>see <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions" target="_blank">ICR - Identifying Age Related Conditions</a></li>
</ul>
<hr>
<h3>Particularities of the Competition</h3>
<h5>Test Sets (1) [aka 'public'] & (2) [aka 'private'] are unknown</h5>
<ul>
<li>During the competition, the test set could not be displayed.</li>
</ul>
<h5>Final results computed from Test Set (2)</h5>
<ul>
<li>During the competition, test runs were possible on the 1st half of the entire Test Set.</li>
<li>The final results were computed on the 2nd half of the entire Test Set.</li>
</ul>
<h5>Testing is scarce</h5>
<ul>
<li>The competition participants were only to run/test 1 model per day. (Other competitions usually allow for up to 5 trials per day.)</li>
</ul>
<h5>Test metric penalizes confident, wrong predictions</h5>
<ul>
<li>A test run only returns one metric. Here, the evaluation metric is computed by a '<em>balanced</em> log loss' function (see competition page).</li>
<li>The predictions were to be made as probabilities (e.g. 0.89, 0.11), not as class labels ('0' or '1').</li>
<li>A confident prediction for a class label ('The predicted probability is high.') that is wrong, leads to a huge drop in the final evaluation metric.</li>
<li>A model should take that into account, and predict approx. (0.5, 0.5) (coin flip) when in doubt.</li>
</ul>
<h5>Models trained on the Training Set perform (far) worse on the Test Set (1) & Test Set (2)</h5>
<ul>
<li>With all that, models trained & cross-validated on the Training Set generally perform badly on the Test Set. Overfitting was a general issue to deal with.</li>
<li><em>A low CV mean training error & low CV training error spread does not say a thing, unless the model is run on Test Set giving similar results.</em></li>
</ul>
<hr>
<h3>Strategic Approach</h3>
<h5>Get an overview</h5>
<ul>
<li>With a limited number of test trials, an unknown test set, and overfitting as a main general issue, the task of classifying data points gets even harder.</li>
</ul>
<h5>Test many different models</h5>
<ul>
<li>In order to counter overfitting, a possible strategy is to test many single ML models one-by-one. A drawback with this approach is that it wastes a lot of trials. However, it's beneficial to establish which models perform at all & which are useless.</li>
</ul>
<h5>Decide on which models to test further</h5>
<ul>
<li>Based on the single tests, one can decide on which model to work on further. It's also possible to combine/ensemble different models.</li>
</ul>
<h5>Do not shy away from trying specialized/smooth models</h5>
<ul>
<li>Many approaches tried to find specialized models as far as the training & test set allowed.</li>
<li>Many approaches used probing techniques on the test set. Though not helpful for answering which model describes the data best, these approaches were allowed. Looking at the difference between the performance of these approaches on Test Set (1) & Test Set (2), it becomes clear that they should be discouraged.</li>
</ul>
<h5>Generalize/regularize well</h5>
<ul>
<li>The only one strategy in this case (strong overfitting) is choosing generalized/regularized approaches.</li>
<li>That is, the model is allowed to make more errors to not fit the data as perfectly as before.</li>
<li>One takes into account a larger test error in return for greater consistency in predictions.</li>
</ul>
<h5>Choose your submissions</h5>
<ul>
<li>During the competition, the best performing approaches reached eval metrics of 0.06 and below. Many approaches favored post-processing. A large part of very well performing models centered around approaches with an eval metric of about 0.11. Better performing models in general were around 0.20 or smaller.</li>
<li>A model that performed with an eval metric of around 0.69 made (0.50, 0.50) predictions for all data points, basically.</li>
<li>I found the approaches with a test balanced log loss > 0.11 and < 0.3 the most convincing. Though in doubt, I decided on submitting one approach in the middle, and one more generalized approach.</li>
</ul>
<h5>Final Results</h5>
<ul>
<li>Due to the particularities of the competition, a (huge) shake-up from test set (1) evaluation to the final evaluation on the test set (2) was to be expected. - (No) Surprise!: There was indeed a huge shake-up among all the participants!</li>
<li>I should have decided on, and worked on, approaches that generalize even better!</li>
</ul> |
Kaggle - LLM Science Exam | 64th Place - Elasticsearch to find them all | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kaggle - LLM Science Exam
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><strong>And then the Dark AI lord etched into one of the hyperdimensional manifolds</strong></p>
<p><em>One RAG to rule them all</em><br>
<em>One RAG to find them</em><br>
<em>One RAG to bring them all</em><br>
<em>And in the <strong>context</strong> bind them</em></p>
<h1>Solution highlight - Elasticsearch to find them all</h1>
<p>Having started with just 3 weeks left in the competition, I was convinced that the most important element would be <strong>search</strong> and <strong>retrieval</strong>. After a few days of trying out some TF-IDF and BM25 ideas, it struck me that a search engine is what's needed.</p>
<p>If you take the correct answer field for many of the questions on the train set, Google straight up highlights the sentence it came from.</p>
<p>(<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3468635%2F44a044ca559c7bb1346f4adbbb2eee14%2FScreenshot%202023-10-11%20at%205.51.58%20AM-1.png?generation=1696992434300492&alt=media" alt="Google search highlight">)</p>
<p>So can we get a search engine running locally? Yes!!</p>
<p>Don't reinvent the wheel - <a href="https://lucene.apache.org/" target="_blank">Lucene</a> is the best search engine for this. <a href="https://www.elastic.co/elasticsearch/" target="_blank">Elasticsearch</a> is an amazing software built on top of it, and it comes integrated in Huggingface Datasets!! (Well, sort of … Usage is a pain on Kaggle 🥹)</p>
<p><a href="https://www.kaggle.com/code/dipamc77/66th-place-elasticsearch-deberta/notebook" target="_blank">Here's my solution notebook</a>, the best submission just uses 5 Deberta models trained on the shared datasets and some data I generated privately. Plus lots of engineering to collect context and run elasticsearch.</p>
<h2>Document Recall</h2>
<p>I'm fairly certain the competition dataset was generated with chunks of 3-5 consecutive sentences from the same article. I hand labelled each of the 200 questions to see the recall rate of finding the correct article based on the text. To test this further, I generated some data with 5 sentence excerpts so that I have the exact section and page for each text. <strong>Elasticsearch beat all other approaches at finding the correct articles by a big margin.</strong></p>
<h1>Context generation</h1>
<p>I opted to pair up question with single answers for sentence semantic search. Intuitively this seemed more meaningful and boosted CV scores on Kaggle set as well as other shared datasets.</p>
<p>It also helped to add upto 2 sentences which were immediately above and below the retrieved sentences in the original article. Since often a single sentence starts with "It" or a similar word, the sentence above and below give context to the context. 😉</p>
<p>Generating contexts based on (question + single answer) also aligns well with how AutoModelForMultipleChoice models are trained. I also used the same single answer pairing approach to train a Mistral LoRA and initial experiments showed it did better than predicting the option directly. Details of context selection can be found in my <a href="https://www.kaggle.com/code/dipamc77/66th-place-elasticsearch-deberta/notebook" target="_blank">solution notebook</a> and <a href="https://www.kaggle.com/code/dipamc77/elasticsearch-mistral-lora-deberta/notebook" target="_blank">Mistral notebook</a>.</p>
<h1>Which wikipedia to use?</h1>
<p>Since the dataset is generated using wikipedia, it makes sense to use the wikipedia dumps. The catch is, wikipedia keeps getting edited, <a href="https://en.wikipedia.org/w/index.php?title=Dark_matter&oldid=1158114319#:~:text=These%20models%20attempt%20to%20account%20for%20all%20observations%20without%20invoking%20supplemental%20non%2Dbaryonic%20matter" target="_blank">Here's a line from June Wikipedia that matches Question ID 47</a>, and it is not present in the <a href="https://en.wikipedia.org/w/index.php?title=Dark_matter#:~:text=These%20include%20modified,still%20be%20required." target="_blank">latest article</a>. On top of that, searching all of wikipedia is really difficult to pull off on Kaggle, so filtering the data for science articles seemed like a reasonable idea.</p>
<p>Initially, I decided to filter the dataset by following the <a href="https://en.wikipedia.org/wiki/Category:Main_topic_classifications" target="_blank">Wikipedia Category DAG</a>, but they quickly started including non-science articles after some depth. One approach that seemed to work is taking different starting points and selecting overlaps between them based on the depth. But I realized that this was a quick way to overfit to the test set and using the full wikipedia is better. Some other science wiki datasets shared had as few as 40% coverage of the articles I hand labelled from the Kaggle training set.</p>
<p>I started using <a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam/discussion/424821" target="_blank">JJ's wiki dataset</a>, but later found it had a number of missing articles, and decided to take the June 2023 dump of Wikipedia, dataset shared <a href="https://www.kaggle.com/datasets/dipamc77/wikipedia-full-june-2023" target="_blank">here</a>.</p>
<h1>So many errors !!</h1>
<p>I picked the correct submissions, selected based on <em>eval_loss</em> instead of <em>map@3</em>, which seemed to correlate better. And yes I lost 3 submissions on the last day due to slow network drives!! 🥳</p>
<p>(<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3468635%2F0efd5be4b56fd7b31cbfa373b6a9e4f4%2FScreenshot%202023-10-11%20at%206.03.27%20AM-1.png?generation=1696992526193175&alt=media" alt="Errors and final picks">)</p>
<p>Found out the most painful way that 2x T4 nodes have much slower disk read times than P100, enough to slow down elasticsearch to a crawl and hit search timeouts. <a href="https://www.kaggle.com/code/dipamc77/elasticsearch-mistral-lora-deberta/notebook" target="_blank">In this notebook</a>, I added a Mistral 7B LoRA that I trained on the last day, but had to switch to T4 x2 accelerator to make it run.</p>
<p>I was later able to fix it by increasing Elasticsearch's query timeout, but the search would still take 7 hours for 4000 queries (as opposed to 1 hour on P100 nodes, and 3 minutes on my local machine with a regular SSD). So the Elasticsearch approach wouldn't always work on Kaggle for millions of articles, but it's amazing anywhere else. I hope some of the engineering hacks are useful for someone in the future.</p>
<h1>Ensembling</h1>
<p>The models I used had 3 different contexts and the competition had a MAP@3 metric, so some custom ensembling ideas worked well.</p>
<p>If the top probability is above 0.7, the models seem to be well-calibrated, meaning the model has a high chance of being correct. Hence above this threshold, I pick the best model predictions first, but only the top prediction. If the next best model disagrees with high probability, its prediction gets placed in prediction slot 2, and so on for slot 3. This method boosted both CV and LB scores by a good margin.</p>
<p>I think this method worked well because the top model has fewer sentences from different articles but more surrounding context. While the other models have single sentences from more articles which has a higher chance of containing a sentence from the correct article.</p>
<p>Below the max probability of 0.7 and above 0.4 - I take a weighted average of the probabilities.</p>
<p>Below 0.4 the predictions seem close to random, so I just pick the best model's predictions.</p>
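<p>A rough sketch of that rule (my own reconstruction, not the actual submission code); <code>prob_list</code> holds each model's (n_questions, 5) probabilities with the best model first:</p>
<pre><code>import numpy as np

def combine_predictions(prob_list, weights, hi=0.7, lo=0.4):
    best = prob_list[0]
    preds = []
    for i in range(best.shape[0]):
        if best[i].max() >= hi:
            order = []                                 # confident zone: stack confident top picks
            for probs in prob_list:
                top = int(probs[i].argmax())
                if probs[i].max() >= hi and top not in order:
                    order.append(top)
            for c in np.argsort(-best[i]):             # fill remaining slots from the best model
                if int(c) not in order:
                    order.append(int(c))
            preds.append(order[:3])
        elif best[i].max() >= lo:
            blend = sum(w * p[i] for w, p in zip(weights, prob_list))
            preds.append(list(np.argsort(-blend)[:3])) # middle zone: weighted average
        else:
            preds.append(list(np.argsort(-best[i])[:3]))  # low zone: best model only
    return preds
</code></pre>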
<p><em>Extra note</em> - Using an LLM or a Long context for the low probability questions might be a good idea which also saves time. My experiment with Mistral 7B suggested the same, I'll post an update with that score if I'm able to fix the engineering issues.</p>
<h1>Ideas left unexplored</h1>
<p>It's now clear I didn't focus on the right things, or at least I shouldn't have worked on this competition solo. Training models was just as important, and I sadly I ignored it until there were just 5 days left.</p>
<p>I trained 4 Deberta models in parallel on the last day, on different contexts generated for the same questions, and a LoRA on Mistral 7B that failed to submit.</p>
<p>These are some of the ideas that I probably should have explored more:</p>
<ol>
<li>Training LLMs instead of just Deberta</li>
<li>Long context models - For the long tail of articles that don't have matches in the top sentences.</li>
<li>Different sentence embedding models</li>
</ol>
<h1>Personal notes</h1>
<p>This is perhaps the only serious amount of NLP work I've ever done, and it was a rollercoaster of a learning experience. Huge thanks to everyone who shared their code, notebooks, datasets and insights, I'll not add a list because that list will never end. Kaggle truly lets you stand on the shoulders of giants. Practically my entire solution is based on what others shared.</p>
<p>The compute management experience was a nightmare but also a great exercise in hacky software engineering. I look forward to all the tricks people use in their solutions. I do believe that the Kaggle hardware is not suitable for such competitions and needs a major update.</p>
<p>The last few days were extremely intense and I barely had any sleep. I'm glad I was able to get into the Silver zone on the last day after being below 300 rank for the entire competition, and more so given that this was the most competitive challenge in recent Kaggle history.</p>
<p>Lessons learnt:</p>
<ol>
<li>Start early, and team up sooner.</li>
<li>Don't trust Kaggle's compute, avoid major engineering changes at the end.</li>
<li>Don't get stuck on one idea even if it seems the most important and delivers results, explore more.</li>
</ol>
<p>In the end, I knew the one RAG to rule them all was not mine. Congrats to <a href="https://www.kaggle.com/philippsinger" target="_blank">Psi</a>, <a href="https://www.kaggle.com/ybabakhin" target="_blank">ybabakhin</a> and <a href="https://www.kaggle.com/ilu000" target="_blank">ilu000</a>, looking forward to your solution.</p>
<p>(<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3468635%2F99aec1716b14465e0ff1712c9fa64833%2Ffrodo-baggins-its-over.gif?generation=1696992407253017&alt=media" alt="Frodo meme">)</p> |
CommonLit - Evaluate Student Summaries | 3rd Place Efficiency Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit - Evaluate Student Summaries
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h1>1. Summary and the Magic</h1>
<p>I trained a deberta-v3-xsmall, running in ~20 minutes, with additional steps taking another 4 minutes.<br>
My solution is very similar to the 1st Place on the Efficiency Leaderboard, see <a href="https://www.kaggle.com/competitions/commonlit-evaluate-student-summaries/discussion/447944" target="_blank">Shochoshi write-up</a>. The main idea was to give the model as much information as possible from the original text without increasing the token count by much. I did that by looking at sequences of the summary that also occur in the prompt text and putting braces around them; the code and examples for this step can be seen <a href="https://www.kaggle.com/code/raki21/marked-summary-gen-efficient-v3xsmall/notebook" target="_blank">here</a>. If I had known of the optional token_type_ids I would probably have gone with these instead of using braces that use up a few additional tokens. I also needed to focus on longer sequences for that reason (at least 3-gram) while the winner focused especially on short sequences. I would have used up lots of tokens to mark trivial single word overlaps like 'the' if I had focused on short ones with my approach. I also utilized multiprocessing and tried out tons of tricks, but most did not stick. </p>
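<p>As a rough illustration of that marking step (a sketch of the idea, not the exact notebook code; the greedy longest-match and whitespace handling are assumptions):</p>
<pre><code>import re

def mark_overlaps(summary: str, prompt_text: str, min_n: int = 3) -> str:
    # Wrap word sequences of the summary that also occur in the prompt text
    # in braces, greedily taking the longest match of at least min_n words.
    words = summary.split()
    source = " " + re.sub(r"\s+", " ", prompt_text) + " "
    out, i = [], 0
    while i < len(words):
        j = len(words)
        while j - i >= min_n and " " + " ".join(words[i:j]) + " " not in source:
            j -= 1
        if j - i >= min_n:
            out.append("{" + " ".join(words[i:j]) + "}")
            i = j
        else:
            out.append(words[i])
            i += 1
    return " ".join(out)
</code></pre>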
<h1>2. Getting Sidetracked</h1>
<p>I also pursued the performance track, after seeing how effective the sequence length reduction was, hoping I could get a good medal by just being able to iterate more quickly and use huge ensembles. I also did not expect to win any prize money and because no medals are given for the Efficiency track my focus shifted towards the safer bet of the performance track.<br>
In the end I split my submissions 2 for the performance and 1 for the efficiency track, getting 32nd place in performance and 3rd in Efficiency. The write-up for the performance track focusing more on topics like ensembling is <a href="https://www.kaggle.com/competitions/commonlit-evaluate-student-summaries/discussion/446631" target="_blank">here</a>.</p>
<h1>3. Approach</h1>
<ol>
<li>Clean summaries and prompt from whitespaces.</li>
<li>Put braces around sequences that appear in the original text with n-gram >=3.</li>
<li>Predict content and wording with the same deberta-v3-xsmall (only 1 model here of course).<br>
3.1 This is done with Ray Multiprocessing to utilize the 4 cores well.</li>
</ol>
<p>Training was pretty standard with embeddings and first layer frozen.</p>
<h1>4. Journey</h1>
<p>I tried a lot of options, at first I just looked at examples of summaries for different scores, a notebook I shared for that <a href="https://www.kaggle.com/code/raki21/summary-examples-by-content-and-wording" target="_blank">here</a>. I looked up the CommonLit site, tried to make sense of how the grade the student is in might impact mean scores for the metrics and just absorbed information from discussion threads and code. I looked at the efficiency score implementation and made sense of the time/performance tradeoff.</p>
<p>The tradeoff I worked with was based on the assumption that with a baseline of 0.84 and a best assumed score of around 0.42, the divisor of the first part of the efficiency metric would also be 0.420. With 9 hours of runtime / 540 minutes contributing to 1 score I worked with the assumption that 0.008 score is worth around 10 minutes.</p>
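<p><em>(Worked out: with a divisor of 0.420 and a 540-minute budget, one minute of runtime is worth about 0.420 / 540 ≈ 0.00078 RMSE, i.e. roughly 0.008 RMSE per 10 minutes, which is where the tradeoff figure above comes from.)</em></p>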
<p>I then implemented simple baselines like linear and non-linear models using only the length of the summary. I then looked at the biggest error sources and saw that most of them by a large margin are children just copy-pasting large parts of the original text. Those are long and rated well by my simple approach, but are rated very poorly by teachers, especially in wording. Just removing all common 3-grams (link ngram), counting the length of the remaining text instead and using this information again in an exponential formula decreases wording RMSE much further. I then switched to LGBM and added additional features like relative words copied, counts for N-grams… At this point I noticed content score being pretty good already but the wording prediction still being really bad. For better predictions I added a tiny bert transformer for wording only.</p>
<p>I wanted to use bigger transformers and tried out a lot of approaches to make them more efficient: conversion from torch to ONNX, static and dynamic quantization in torch and ONNX, multiprocessing with ray. In the end only ray stuck. The quantization gave a 25% speedup on my CPU but only around 10% on the submission server's CPU, and the tradeoff was not good enough as there was a performance drop of course. Maybe I could have made it work with quantization aware training, but then you have a conflict with the early frozen layers (at least I think so as I found nothing about the standard weights being quantized already). In the end I only had around 15 days to invest in total and a lot of that time was gone at this point.</p>
<p>I also tried misspelling correction, averaging weights of multiple models (does not work at all), varying token length, adjusting for token length > max_length by adding some score to the prediction for longer sequences. I also tried post processing with LGBMs (which are pretty lightweight and don't need much compute), but that did not work out well at first. I got LGBMs working in the end using a trick, but only on the last day, and I only used that for my final performance submission; I described this in more detail in my performance write-up. I switched almost everything to deberta-v3-xsmall prediction after working on the performance track for a while. In an error analysis afterwards I found some texts with tons of tabs being misclassified and solved that by replacing white-space runs with a single space.</p>
<table>
<thead>
<tr>
<th>Model</th>
<th>Local CV Score</th>
<th>Public Score</th>
<th>Private Score</th>
<th>Inference Time</th>
</tr>
</thead>
<tbody>
<tr>
<td>0-submission</td>
<td>1.030</td>
<td>0.840</td>
<td>0.985</td>
<td><1min</td>
</tr>
<tr>
<td>summary-length-only</td>
<td>0.677</td>
<td>0.596</td>
<td>0.610</td>
<td><1min</td>
</tr>
<tr>
<td>cutting 3-grams</td>
<td>0.621</td>
<td>0.535</td>
<td>0.565</td>
<td>1 min</td>
</tr>
<tr>
<td>LGBM+tiny-bert</td>
<td>?</td>
<td>0.471</td>
<td>0.515</td>
<td>31 min</td>
</tr>
<tr>
<td>v3-xsmall with all tricks</td>
<td>?</td>
<td>0.455</td>
<td>0.491</td>
<td>25 min</td>
</tr>
</tbody>
</table>
<p>I had a lot of things left to do: <br>
I only just got LGBMs behind a strong transformer working when the competition ended, I wanted to throw away some layers of deberta-v3-xsmall completely and see if it improves the time-performance tradeoff. I wanted to try pruning and distillation too. </p>
<h1>5. Acknowledgements</h1>
<p>I especially want to thank <a href="https://www.kaggle.com/tsunotsuno" target="_blank">@tsunotsuno</a> for the great baseline.<br>
There were also a lot of other great things I read up on intermediately like how and why to do layer freezing. I want to thank everyone I gained good ideas and insights from this competition and all the people freely sharing their models performance and settings. I also want to give a general thank you to the community, I really have been learning a lot in the last months on Kaggle!</p> |
Bengali.AI Speech Recognition | 67th Place Solution - Bronze - LB 0.515 | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Speech Recognition
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, thank you to the organizers for arranging this exciting competition, and congratulations to the winners. I have learned a lot throughout this journey. The discussion section and public notebooks were quite helpful. I feel a bit regretful because I couldn't dedicate enough time due to my final exams.</p>
<h2>My Approach</h2>
<p>A few months back, I improved <a href="https://www.kaggle.com/code/nischaydnk/bengali-finetuning-baseline-wav2vec2-inference" target="_blank">the public notebook</a> by <a href="https://www.kaggle.com/nischaydnk" target="_blank">@nischaydnk</a> just by training my own <a href="https://www.kaggle.com/code/umongsain/build-an-n-gram-with-kenlm-macro" target="_blank">n-gram language model</a>. It improved both my public LB (0.445 -> 0.436) and private LB (0.534 -> 0.515).</p>
<h2>Preparing the Data</h2>
<p>I normalized the training texts using <a href="https://github.com/mnansary/bnUnicodeNormalizer" target="_blank">bnunicodenormalizer</a> as well as the normalizer developed by <a href="https://github.com/csebuetnlp/normalizer" target="_blank">CSE BUET NLP</a>. Some data could not be normalized because of confusion caused by the character " ঃ" (<a href="https://bn.wikipedia.org/wiki/%E0%A6%AC%E0%A6%BF%E0%A6%B8%E0%A6%B0%E0%A7%8D%E0%A6%97" target="_blank">বিসর্গ</a>) appearing as a separate word. As it is mostly used in place of ":" (colon), I replaced it with a colon instead.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4772132%2Fc86799e65036ebc9228f9f709a98a309%2FScreenshot%20from%202023-10-20%2020-15-23.png?generation=1697811401235581&alt=media" alt=""></p>
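<p>A minimal sketch of this normalization step (the word-level loop is the standard bnunicodenormalizer usage pattern; the exact cleaning pipeline in the notebook may differ):</p>
<pre><code>from bnunicodenormalizer import Normalizer
from normalizer import normalize  # csebuetnlp/normalizer

bnorm = Normalizer()

def clean_text(sentence: str) -> str:
    sentence = normalize(sentence)
    # bnorm(word) returns a dict; "normalized" can be None, so fall back to the raw word
    words = [bnorm(w)["normalized"] or w for w in sentence.split()]
    # বিসর্গ used as a standalone word is treated as a colon (assumption on exact handling)
    return " ".join(words).replace(" ঃ ", " : ")
</code></pre>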
<h2>Training the N-gram Model</h2>
<p>Before training the n-gram model, I removed the punctuation from the normalized text data because Wav2Vec2 does not predict any punctuation. I trained the n-gram model with KenLM and plugged it into the CTC decoder via <a href="https://github.com/kensho-technologies/pyctcdecode" target="_blank">pyctcdecode</a>.</p>
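<p>A minimal sketch of building such a decoder (the model name and file paths are illustrative; the n-gram itself is trained beforehand with the KenLM CLI):</p>
<pre><code># Train the n-gram with the KenLM CLI (outside Python):
#   lmplz -o 5 < normalized_corpus.txt > 5gram.arpa
from transformers import Wav2Vec2Processor
from pyctcdecode import build_ctcdecoder

processor = Wav2Vec2Processor.from_pretrained("arijitx/wav2vec2-xls-r-300m-bengali")
vocab_dict = processor.tokenizer.get_vocab()
labels = [tok for tok, _ in sorted(vocab_dict.items(), key=lambda kv: kv[1])]

decoder = build_ctcdecoder(labels, kenlm_model_path="5gram.arpa")
# transcription = decoder.decode(logits)  # logits: (time, vocab) numpy array from wav2vec2
</code></pre>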
<h2>Solution Code</h2>
<ul>
<li>Data preparation: <a href="https://www.kaggle.com/code/umongsain/macro-normalization" target="_blank">https://www.kaggle.com/code/umongsain/macro-normalization</a></li>
<li>Training code: <a href="https://www.kaggle.com/code/umongsain/build-an-n-gram-with-kenlm-macro/notebook" target="_blank">https://www.kaggle.com/code/umongsain/build-an-n-gram-with-kenlm-macro/notebook</a></li>
<li>Inference code: <a href="https://www.kaggle.com/code/umongsain/67th-place-solution-inference-notebook" target="_blank">https://www.kaggle.com/code/umongsain/67th-place-solution-inference-notebook</a></li>
</ul>
<h2>Things I Tried But Did Not Work</h2>
<p>Later, I tried to fine-tune multiple XLS-R and MMS models from scratch. As I had limited resources, I focused on low-resource training, using only 30k samples from the Common Voice 13. Unfortunately, they could not beat my previous baseline. But what I found is that <a href="https://huggingface.co/facebook/mms-1b" target="_blank">MMS 1B</a> works quite well for low-resource languages. Sadly, it is available only for non-commercial uses. The checkpoints can be found here:</p>
<ul>
<li><a href="https://huggingface.co/Umong/wav2vec2-xls-r-300m-bengali" target="_blank">https://huggingface.co/Umong/wav2vec2-xls-r-300m-bengali</a></li>
<li><a href="https://huggingface.co/Umong/wav2vec2-large-mms-1b-bengali" target="_blank">https://huggingface.co/Umong/wav2vec2-large-mms-1b-bengali</a></li>
</ul>
<h2>Things That I Could Have Tried</h2>
<p>I had plans to use QLoRa to fine-tune on a few out-of-domain (OOD) hand-labeled samples for domain adaptation after fine-tuning on the crowd-sourced data. I wish I had a bit more free time.</p>
<p>I am glad to have my 1st Bronze competition medal. Again, thanks to the organizers.</p> |
Bengali.AI Speech Recognition | 33rd place silver medal solution - My first competition medal | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Speech Recognition
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First, we want to thank Bengali.ai and Kaggle for hosting such an excellent competition on Bengali ASR and releasing a large-scale dataset for this domain. This competition was challenging, to say the least. But I enjoyed the last three months working on it ngl. It was a pretty good learning curve for me. I got the opportunity to contribute to the community and also engage with the community through this competition. I got 1 Gold, 2 silver and 8 bronze medals for my notebooks in this competition. So overall, a very busy and happy three months!</p>
<h2>Summary :</h2>
<p><strong>Acoustic Model</strong>: We fine-tuned the ai4bharat/indicwav2vec_v1_bengali model<br>
<strong>Dataset</strong> : Competition Data + Openslr53+openslr37+ TTS dataset<br>
In the first phase, we fine-tuned the model with the whole competition data, which made things worse, dragging the LB performance below the best public NB (0.445).<br>
Then we filtered out the bad-quality audio from the competition data using the training metadata provided by the host (we took the audios with MOS>=2).<br>
We did a random split with both the train and validation data (since it was evident MaCro validation audio quality was better than train audio) and trained with 95% of the audios, then added the other datasets. This improved the LB score to 0.430.<br>
Augmentation: reverberation, speed perturbation, volume perturbation (0.125x ~ 2.0x),<br>
and adding background noise from the example audios (improved performance slightly).</p>
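<p>A minimal sketch of what those waveform augmentations can look like (library choice, parameter ranges and the SNR-based noise mixing are assumptions, not our exact settings; reverberation is usually added by convolving with a room impulse response and is omitted here):</p>
<pre><code>import random
import torch
import torchaudio

def speed_perturb(wave: torch.Tensor, sr: int) -> torch.Tensor:
    # sox-style speed perturbation; wave is a (channels, time) tensor
    factor = random.uniform(0.9, 1.1)
    effects = [["speed", f"{factor:.3f}"], ["rate", str(sr)]]
    out, _ = torchaudio.sox_effects.apply_effects_tensor(wave, sr, effects)
    return out

def volume_perturb(wave: torch.Tensor, low: float = 0.125, high: float = 2.0) -> torch.Tensor:
    return wave * random.uniform(low, high)

def add_background_noise(wave: torch.Tensor, noise: torch.Tensor, snr_db: float = 15.0) -> torch.Tensor:
    # assumes the noise clip is at least as long as the speech clip
    noise = noise[:, : wave.shape[1]]
    wave_power = wave.pow(2).mean()
    noise_power = noise.pow(2).mean().clamp_min(1e-10)
    scale = (wave_power / (noise_power * 10 ** (snr_db / 10))).sqrt()
    return wave + scale * noise
</code></pre>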
<p><strong>Language Model</strong>: We built a 5-gram LM using competition sentences + <a href="https://github.com/csebuetnlp/banglanmt" target="_blank">banglanmt</a> + IndicCorp_V2</p>
<h2>What did not work:</h2>
<ol>
<li><strong>Punctuation model</strong> : I tried to add the <a href="https://github.com/xashru/punctuation-restoration" target="_blank">xashru-punctuation-restoration</a> model but it always gave CUDA OOM. Now it's bugging me to see others implemented it and got a huge boost with it. I also tried to fine-tune T5 model for this task but didn't see much improvement. So I gave up the idea.</li>
<li><strong>Speech enhancement</strong> : </li>
</ol>
<ul>
<li><a href="https://github.com/Rikorose/DeepFilterNet" target="_blank">DeepFIlterNet</a></li>
<li><a href="https://huggingface.co/speechbrain/sepformer-wham-enhancement" target="_blank">Speechbrain wham</a></li>
</ul>
<p>I spent a significant amount of time on them. First I tried to add a speech enhancer model to my inference pipeline. It didn't improve the performance. Then I tried to train with enhanced audio, but it didn't help either. The noise cancellation performance by these models was quite good, but it also affected the pitch and distorted the speech a little bit. </p>
<p><strong>What could have been done to improve further/ Next steps</strong> : </p>
<ol>
<li>Adding the punctuation model.</li>
<li>Train the acoustic model with more data (MADASR,Shrutilipi)</li>
<li>Add more texts to the LM</li>
</ol> |
Bengali.AI Speech Recognition | 8th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Speech Recognition
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you for organizing this competition.<br>
I am new to speech recognition and have learned a lot. Thanks to the competition participants and hosts.</p>
<h1>Solution Overview</h1>
<ul>
<li>Dataset cleaning</li>
<li>Punctuation</li>
<li>Simple Ensemble (Rescoring)</li>
</ul>
<h1>Dataset</h1>
<p>First, I applied a data filter. Since I'm not familiar with the audio domain, I used a simple filter.<br>
One was the ratio between the transcribed text and the length of the audio data. I filtered the data based on a threshold of 20 characters per audio second. If the ratio is higher than that, I considered the possibility that there are too many characters for the length of the audio: it could be abnormally fast speech or corrupted audio data.<br>
Another filter I used was based on the prediction results from the yellow_king model. I filtered out data with scores above a certain threshold (CER > 0.6).<br>
By training on clean data, I was able to start off with a good score.</p>
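<p>A rough sketch of those two filters (the 20 chars/sec and CER 0.6 thresholds follow the text above; the column names and file are assumptions):</p>
<pre><code>import pandas as pd

df = pd.read_csv("train_with_durations.csv")  # hypothetical file with durations and reference-model CER

chars_per_sec = df["sentence"].str.len() / df["duration"]
keep_speed = chars_per_sec <= 20        # too many characters per second: likely corrupted or too fast
keep_cer = df["reference_cer"] <= 0.6   # drop clips the yellow_king model cannot transcribe

clean_df = df[keep_speed & keep_cer].reset_index(drop=True)
</code></pre>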
<p>Because the competition data set was so large, training was repeated on a subset of it rather than using all of it. Once the model achieved some performance, the WER for the remaining data set was calculated and added to the data. This round was repeated several times.</p>
<h2>external data</h2>
<p>kathbath, openslr37, 53, ulca<br>
These external data were also filtered</p>
<h1>Model</h1>
<p><code>IndicWav2Vec</code><br>
This model was very powerful for the test set.<br>
Since the previous competition report suggested that retraining a model once it had converged would not perform well, I used a pre-trained model instead of Bengali fine-tuned checkpoints (converted to an HF model from fairseq checkpoints). Another reason was that I customized the vocab (mentioned in the section on punctuation).</p>
<h2>LM Model</h2>
<p>5-gram Language Model (using all of Indiccorpus v1 and v2, BangaLM, and mC4). The text corpus totaled 45GB, with the 5gram.bin file being around 22GB in size.<br>
Using pyctcdecode, alpha, beta and beam_width were adjusted with example-audio CV. This initially provided a 0.01 boost to the LB score, but the benefit decreased as the score improved, eventually settling at around 0.003.</p>
<h1>Punctuation</h1>
<p>punctuation is very important.<br>
Instead of creating a punctuation model, I took the simple approach of including punctuation in the vocab and LM corpus. Even when including punctuation, wav2vec was largely unable to predict it. However, it was found that including them in the vocab allowed them to be considered as candidates during LM decoding. Simply by including punctuation in both the model and the LM corpus, it was possible to improve the score from 0.44 to 0.428.</p>
<h1>Ensemble (Rescoring)</h1>
<p>Due to the structure of the output, it was not possible to do a simple ensemble at the logit level. Therefore, a strategy was adopted to calculate the LM score for the decoded sentences using the LM model, and the sentence with the highest score was selected.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1197187%2F5e6ee0ffb1f1b28b93f1a744fe77c438%2Fbengali.png?generation=1697624102630269&alt=media" alt=""></p>
<p>The candidate sentences were the outputs of three models with variations in training data and decoding parameters. Both public and private scores improve by about 0.006 - 0.008.</p>
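<p>In essence, the rescoring step boils down to something like the following sketch (the KenLM call is the real API; the surrounding plumbing is an assumption):</p>
<pre><code>import kenlm

lm = kenlm.Model("5gram.bin")

def rescore(candidates):
    # candidates: the decoded sentence from each model / decoding setting
    return max(candidates, key=lambda s: lm.score(s, bos=True, eos=True))
</code></pre>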
<h1>Not working for me</h1>
<ul>
<li>xls-r-300m<ul>
<li>Common Voice's CV is competitive with indicwav2vec, but performs very poorly for the test set. Not effective when added to an ensemble.</li></ul></li>
<li>Speech Enhancement<ul>
<li>It was found that the score was poor for example audio with reverb, so noise removal was performed using DeepFilterNet. There were cases where this boosted the LB score and cases where it didn't, so it was finally not used.</li></ul></li>
<li>Adding Assamese<ul>
<li>my understanding is that it shares characters with Bengali. Addition did not improve score.</li></ul></li>
</ul> |
Bengali.AI Speech Recognition | 92th Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Speech Recognition
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you Bengali.AI & Kaggle for organizing this competition. </p>
<h1>Overview of the Approach</h1>
<h2>ASR Model</h2>
<ol>
<li>Dataset: I only used a small subset of the competition dataset, filtering out audios shorter than 1 second as outliers. I removed all kinds of punctuation and then applied normalization.</li>
<li>Augmentation: BackgroundNoise, Gain</li>
<li>Pretrain model: arijitx-full-model/wav2vec2-xls-r-300m-bengali</li>
</ol>
<p>I didn't have the time or resources to train for more epochs (the model was only trained for 1 epoch). With more training, the results could probably be even better!</p>
<h2>Language Model</h2>
<p>Same as public notebook: 5-gram LM (arijitx-full-model/wav2vec2-xls-r-300m-bengali)</p> |
Bengali.AI Speech Recognition | 23nd Solution - My first competition medal | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Speech Recognition
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I would like to express my gratitude to the organizers of this competition. As a relative newcomer to Automatic Speech Recognition (ASR), I'm thrilled to have secured a medal in this event. I am immensely satisfied with the outcome, and I extend my thanks to both the competition participants and its hosts.</p>
<p><strong>The solution consists of 2 components:</strong></p>
<ol>
<li>ASR model</li>
<li>Ngram KenLM model</li>
</ol>
<p><strong>1. ASR model</strong><br>
Initially, I experimented with numerous Hugging Face (HF) models, training them on approximately 20,000 data samples. The models I tested included:</p>
<ul>
<li>LegolasTheElf/Wav2Vec2_XLSR_Bengali_1b</li>
<li>arijitx/wav2vec2-large-xlsr-bengali</li>
<li>arijitx/wav2vec2-xls-r-300m-bengali</li>
<li>tanmoyio/wav2vec2-large-xlsr-bengali</li>
<li>ai4bharat/indicwav2vec_v1_bengali</li>
<li>Umong/wav2vec2-large-mms-1b-bengali</li>
<li>kabir5297/Wav2Vec2-90k-Bengali</li>
<li>tanmoyio/wav2vec2-large-xlsr-bengali</li>
<li>bayartsogt/bengali-2023-0016</li>
<li>shahruk10/wav2vec2-xls-r-300m-bengali-commonvoice</li>
<li>jonatasgrosman/wav2vec2-large-xlsr-53-english</li>
<li>wav2vec2-xls-r-2b - training from scratch</li>
<li>wav2vec2-xls-r-1b - training from scratch</li>
</ul>
<p>Eventually, I settled on the "shahruk10/wav2vec2-xls-r-300m-bengali-commonvoice" model. You can find all the fine-tuned models in my HF repository <a href="url" target="_blank">https://huggingface.co/Aspik101</a><br>
The final fine-tuned model can be accessed: <a href="url" target="_blank">https://huggingface.co/Aspik101/shahruk10_checkpoint-360_2444</a></p>
<p>The final model underwent a two-step training process. In the first step, the model was trained on filtered data, where the Word Error Rate (WER) was less than 80, sourced from the yellowking model, MADASR dataset, and google/fleurs. This training was executed in streaming mode and took approximately 20 hours.</p>
<pre><code>trainer = Trainer(
model=model,
data_collator=data_collator,
args=training_args,
compute_metrics=compute_metrics,
train_dataset=IterableWrapper(train),
eval_dataset=IterableWrapper(val),
tokenizer=processor.feature_extractor)
</code></pre>
<p>Loading data via streaming on Hugging Face can be useful when processing large data sets or when you want to feed data to a model while it is running rather than loading the entire data set into memory at once.</p>
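<p>For reference, a minimal sketch of streaming-mode loading (the dataset name is illustrative, and Common Voice additionally requires an access token):</p>
<pre><code>from datasets import load_dataset
from torchdata.datapipes.iter import IterableWrapper

train = load_dataset(
    "mozilla-foundation/common_voice_13_0", "bn",
    split="train", streaming=True,
)
train = train.shuffle(seed=42, buffer_size=1000)
# IterableWrapper(train) is what gets passed to the Trainer in the snippet above
</code></pre>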
<p>At this stage, the best result I managed to get was around 0.424 on the leaderboard</p>
<p>In the next step, the model was trained using data from the competition, but from the <strong>examples folder!</strong> Thanks to this, I managed to get a result of <strong>0.404</strong> with the KenLM model.</p>
<p><strong>2. Ngram KenLM model</strong><br>
A 5-gram KenLM model trained on the competition dataset and the external Bengali corpus IndicCorp V1+V2.</p>
<p><strong>What doesn't work</strong></p>
<ul>
<li>Punctuation model <a href="url" target="_blank">https://huggingface.co/1-800-BAD-CODE/xlm-roberta_punctuation_fullstop_truecase</a> On the test data set, my calculations show that I should get about 0.02 WER, unfortunately it did not work on the final data set</li>
<li>Attempting to use a large language model to correct typos, such as Qwen 7b, from <a href="url" target="_blank">https://huggingface.co/Qwen/Qwen-7B-Chat</a> showed potential but did not yield successful results in my case.</li>
<li>I experimented with the "indicparser" as follows:</li>
</ul>
<pre><code>from indicparser import graphemeParser
gp = graphemeParser()
</code></pre>
<ul>
<li>Exploring the stacking of logits from various wav2vec models.</li>
<li>Generating transcriptions using the Conformer model available <a href="url" target="_blank">https://huggingface.co/bengaliAI/BanglaConformer</a>, while considering only punctuation and combining it with the final wav2vec model.</li>
<li>And many other ideas that I have already forgotten</li>
</ul>
<p>In conclusion, I want to reiterate my appreciation to the competition organizers and participants. This journey has been an incredible learning experience for me, and I'm excited to continue exploring the world of ASR.</p> |
Bengali.AI Speech Recognition | 20th Rank Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Speech Recognition
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Our Final Solution is a single Wave2Vec model along with a N-Gram Model</p>
<h1>Step 1:</h1>
<p>Trained a model for 5 epochs on </p>
<ol>
<li>OpenSLR37</li>
<li>OpenSLR53</li>
<li>Fleurs</li>
<li>Common voice</li>
</ol>
<p>Validation:<br>
Kaggle Validation data</p>
<h1>Step 2:</h1>
<p>Estimate the WER for the Kaggle training data with the model from Step 1.<br>
Remove samples which have WER greater than 0.5<br>
Resulting cleaned version of Training data had 50-60% of overall Training data</p>
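<p>A minimal sketch of this filtering step (the 0.5 threshold follows the text; column names and the jiwer call are illustrative):</p>
<pre><code>import jiwer

# hypothetical columns: "sentence" is the label, "step1_pred" the Step-1 transcript
df["wer"] = [jiwer.wer(ref, hyp) for ref, hyp in zip(df["sentence"], df["step1_pred"])]
clean_df = df[df["wer"] <= 0.5]
</code></pre>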
<h1>Step 3:</h1>
<p>Trained a model for 20 epochs on </p>
<ol>
<li>OpenSLR37</li>
<li>OpenSLR53</li>
<li>Fleurs</li>
<li>Common voice</li>
<li>Cleaned version of Training data</li>
</ol>
<h1>Step 4:</h1>
<p>Build a N-Gram model with Common Voice dataset</p>
<p>Our best Wave2Vec model 0.407 in LB<br>
Our best Wave2Vec model + N-Gram Model 0.396 in Public LB</p>
<p><strong>Things that worked:</strong></p>
<ol>
<li>Caching of the datasets reduced the overall training time, but it didn't allow us to add any augmentations on the fly</li>
<li>Having more warmup steps ensured that the model doesn't throw NaN. We used warmup of 0.25</li>
<li>MSD as Final layer instead of Linear layer</li>
</ol>
<p><strong>Things that didnt work:</strong></p>
<ol>
<li>We tried adding few more finetuning steps. But mostly the model started to overfit</li>
</ol>
<p>We will add a few more points about our pipeline in the next few hours</p>
Kaggle - LLM Science Exam | 13th place solution: Ensemble context ensemble model | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kaggle - LLM Science Exam
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, thank you to Kaggle for organizing this competition and to my teammates.<a href="https://www.kaggle.com/natnitarach" target="_blank">@natnitarach</a> <a href="https://www.kaggle.com/pongtsu" target="_blank">@pongtsu</a> <a href="https://www.kaggle.com/kunato" target="_blank">@kunato</a> <a href="https://www.kaggle.com/yoyoismee" target="_blank">@yoyoismee</a> 🔥🔥🔥🔥🔥</p>
<p>We have been participating in this competition since its first week. We dropped from the top 20 down to nowhere about 3 times in this competition but always managed to get back up. Each time we saw the rankings fall from the top 20 to nowhere, it made us feel frightened (it's like riding a roller coaster LOL😅), but at the end of the competition we were able to get the gold medal 🥇🎉. <br>
Our solution is an ensemble of 2 Wikipedia versions + the 270k dataset, and an ensemble of DebertaV3 large models combined by max probability.<br>
I will split this into 2 sections, retrieval and modeling. Retrieval had the biggest impact on our score.</p>
<p><strong>Retrieval</strong><br>
In the retrieval phase, we use two sources of Wikipedia embeddings, utilizing "bge-small-en" to create a FaissFlatL2 index.</p>
<ol>
<li>Wikipedia cohere 35M -> 35M vector</li>
<li><a href="https://www.kaggle.com/datasets/jjinho/wikipedia-20230701" target="_blank">Wikipedia 2023</a> -> split by character chunk = 1000 -> 21M vector</li>
</ol>
<p>and <a href="https://www.kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded" target="_blank">270k dataset</a><br>
In total, we had 35M + 21M = 56M vectors to search over, which is huge! We use a technique that splits the big index into small indexes and then merges the results at the end; we call it "Faiss Batch" (each Wikipedia source has around 6 indexes, each index about 10GB in size), so the retrieval part alone has a total size of 100GB+.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5283167%2F5dc1440b9f7f92e5da231ea771ca60fb%2FRetrivalPart.png?generation=1696997560116021&alt=media" alt=""><br>
For inference, as shown in the image above, the steps were:</p>
<ol>
<li>For the Wikipedia datasets, we use faiss query search to get the top ~100 articles (6 indexes * 15 neighbors)</li>
<li>Use TF-IDF for reranking; from experiments we found that TF-IDF reranking scores better than the bge-reranker</li>
</ol>
<p>Example code of what we call Faiss Batch</p>
<pre><code>from datasets import Dataset
import os
import gc
import ctypes

import faiss
import numpy as np
import torch

# NOTE: the string literals of the original snippet were lost, so the paths,
# constants and a few call names below are reconstructed placeholders.
test = Dataset.from_pandas(test)

dir_path = "/kaggle/input/wiki-faiss-index/"   # folder holding the Faiss index shards
index_list = sorted(os.listdir(dir_path))

k = 15              # neighbors per index shard
total = 0           # running number of vectors seen in earlier shards
distance_list = []
indices_list = []
res = faiss.StandardGpuResources()
libc = ctypes.CDLL("libc.so.6")
# query_vector: (n_queries, dim) float32 query embeddings computed earlier
for indexBatch in index_list:
    index_Batch = dir_path + indexBatch
    print(f"searching {index_Batch}")
    index1 = faiss.read_index(index_Batch)
    index1 = faiss.index_cpu_to_gpu(res, 0, index1)
    distances1, indices1 = index1.search(query_vector, k)
    # shift shard-local ids by the number of vectors in earlier shards
    updated_indices2 = [[idx + total for idx in inner_list] for inner_list in indices1]
    total += index1.ntotal
    print(total)
    distance_list.append(distances1)
    indices_list.append(updated_indices2)
    del index1
    _ = gc.collect()
    libc.malloc_trim(0)
    torch.cuda.empty_cache()
concatenated_indices = np.concatenate(indices_list, axis=1)
concatenated_distances = np.concatenate(distance_list, axis=1)
</code></pre>
<p>What messy code 😂<br>
For the 270k dataset we follow the <a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam/discussion/442595" target="_blank">discussion</a> that uses TF-IDF. </p>
<p>At the end we have 3 contexts, and each model predicts 3 times.</p>
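<p>For illustration, TF-IDF reranking of retrieved paragraphs against a question can look roughly like this (an assumed implementation, not our exact code):</p>
<pre><code>from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def rerank(query: str, paragraphs: list[str], top_k: int = 5) -> list[str]:
    # fit TF-IDF on the query plus its candidate paragraphs, then rank by cosine similarity
    vec = TfidfVectorizer(ngram_range=(1, 2), stop_words="english")
    mat = vec.fit_transform([query] + paragraphs)
    sims = cosine_similarity(mat[0], mat[1:]).ravel()
    order = sims.argsort()[::-1][:top_k]
    return [paragraphs[i] for i in order]
</code></pre>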
<p><strong>Model</strong><br>
In the modeling phase, we use DebertaV3 large and ensemble it.<br>
We use the same pipeline as <a href="https://www.kaggle.com/code/cdeotte/how-to-train-open-book-model-part-1" target="_blank">Chris</a>, unfreezing the embeddings and using max token lengths from 512-768, with the <a href="https://www.kaggle.com/datasets/cdeotte/60k-data-with-context-v2" target="_blank">60k dataset</a> + a dataset of around 5k questions that we generated with ChatGPT-3.5.</p>
<p><strong>What Didn't Work</strong></p>
<ul>
<li>Attempts to compress the index (IVQ, PQ, etc.) resulted in low accuracy.</li>
<li>We tried combining Platypus2-70B-instruct with our retrieval method, but it proved impractical due to the extended runtime, exceeding 9 hours during submission.</li>
<li>The application of PEFT on DeBERTaV3 did not yield the desired results.</li>
</ul>
<p><strong>To improve</strong></p>
<ul>
<li>Obtain better Wikipedia text sources.</li>
<li>Enhance the performance of our models.</li>
</ul>
<p>Github Code<br>
<a href="https://github.com/nat-nischw/kaggel-llm-science-exam-2023" target="_blank">https://github.com/nat-nischw/kaggel-llm-science-exam-2023</a></p> |
Kaggle - LLM Science Exam | 0.894 without training any model (+ 56th solution) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kaggle - LLM Science Exam
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Lots of people discuss LLM fine tuning, but prompt fine tuning is another way to improve LLMs. A great example was provided early by <a href="https://www.kaggle.com/steubk" target="_blank">@steubk</a> in <a href="https://www.kaggle.com/code/steubk/the-art-of-prompt-engineering" target="_blank">The Art of Prompt Engineering</a>. This and the 70B model inference code by <a href="https://www.kaggle.com/simjeg" target="_blank">@simjeg</a> made me spend most of my competition time on how to use pretrained models as effectively as possible. To my surprise I got as far as 0.89 on the public LB and 0.892 private only using pretrained models. I shared one submitted notebook with 0.89 public / 0.892 private <a href="https://www.kaggle.com/code/cpmpml/llm-008-orig" target="_blank">here</a>. Another one that has a better private LB at 0.894 is <a href="https://www.kaggle.com/cpmpml/llm-008" target="_blank">here</a>.</p>
<p>Not surprisingly my solution is a RAG method with these steps:</p>
<ul>
<li>Indexing wikipedia chunks</li>
<li>Retrieving relevant chunks from each sample</li>
<li>Predicting the top 3 answers</li>
</ul>
<p>However it differs from publicly shared ones in how each individual step is performed. Let's review these one by one.</p>
<h1>Indexing Wikipedia</h1>
<p>When I joined, RAG was already a hot topic, with a public notebook disclosing how to retrieve sentences in a two step way: retrieve relevant articles using their titles and first sentence, then retrieve relevant sentences within these articles. I quickly found that retrieving full paragraphs (separated by <code>'\n'</code>) was much better than retrieving sentences. I then wondered why did we need to do it in two steps? Why not directly retrieve relevant paragraphs? Indeed, the article title plus first sentence may not represent correctly all the context of the article. </p>
<p>Issue was that the number of paragraphs is huge. Even when discarding short ones that are not really useful. I settled for all paragraphs with at least 150 characters. This makes about 35 million of these. Using an embedding model of size 1024 (such as bge-large-en-v1.5 or e5-large-v2) this represents about 140 GB. This is way too large for a Faiss index we could load in memory.</p>
<h1>Retrieval</h1>
<p>I went on nevertheless and coded a cosine similarity retrieval between a query embedding and these 140 GB that runs in about 25 minutes on Kaggle notebooks. Code is in the notebook I shared. I previously computed offline the paragraph embeddings and saved them in separate files, one file per million paragraphs. One file uses 4GB for the largest embedding models. Files are loaded in memory one by one and released afterwards.</p>
<p>For the query embedding for each sample I first compute the embedding for the prompt. I also compute one embedding per answer, then take their average. This yields an answers embedding which does not depend on the order in which answers are listed as options. I then average the prompt embedding and the answers embedding. This worked better than other ways I tried.</p>
<p>I then use faiss for retrieval but I don't create an index. I directly used the <code>faiss.knn_gpu(res, key_embeddings, value_embeddings, k)</code> method.</p>
<p>In the notebook shared above I retrieve the top 8 paragraphs then concatenate them with a newline, so that they appear as a set of paragraphs.</p>
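<p>A sketch of what that chunked brute-force search looks like (file layout, shapes and variable names are assumptions; <code>faiss.knn_gpu</code> is the real call):</p>
<pre><code>import faiss
import numpy as np

# query_embs: per-sample query embeddings, i.e. the average of the prompt embedding
# and the mean of the answer embeddings, L2-normalized (as described above)
res = faiss.StandardGpuResources()
k = 8
all_D, all_I, offset = [], [], 0
for shard_path in shard_paths:                       # one .npy file per ~1M paragraphs
    shard = np.load(shard_path).astype(np.float32)   # (n, 1024) normalized embeddings
    D, I = faiss.knn_gpu(res, query_embs, shard, k, metric=faiss.METRIC_INNER_PRODUCT)
    all_D.append(D)
    all_I.append(I + offset)                         # shift shard-local ids to global ids
    offset += shard.shape[0]
    del shard

D = np.concatenate(all_D, axis=1)
I = np.concatenate(all_I, axis=1)
order = np.argsort(-D, axis=1)[:, :k]                # merge the per-shard top-k
top_ids = np.take_along_axis(I, order, axis=1)
</code></pre>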
<h1>Answer prediction</h1>
<p>I use the Xwin model I shared <a href="https://www.kaggle.com/datasets/cpmpml/xwin-lm-70b-v0-1-part1" target="_blank">here</a> and <a href="https://www.kaggle.com/datasets/cpmpml/xwin-lm-70b-v0-1-part2" target="_blank">here</a> with the following prompt. I had better results with the Platypus model shared by <a href="https://www.kaggle.com/simjeg" target="_blank">@simjeg</a> but that model has a non-commercial license, hence is not compliant with the competition rules IMHO. I switched for safety.</p>
<p>The prompt I used is a fine tuned version of the one <a href="https://www.kaggle.com/simjeg" target="_blank">@simjeg</a> used in his code. This fine tuning improves the public LB significantly. </p>
<pre><code>Find below an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
Your task is to evaluate the correctness of a proposed answer to a question. Respond yes if the answer is correct, respond no if it is not correct. Some background Wikipedia articles are available, even if they might not always be relevant.
Background:
CONTEXT
Question: PROMPT
Proposed answer: ANSWER
</code></pre>
<p>I then use the logit for the 'yes' token as in <a href="https://www.kaggle.com/simjeg" target="_blank">@simjeg</a> code.</p>
<p>I tried other logits and also tried differences, like the logit for yes minus the logit for no, but nothing worked better than the yes logit on both CV and LB. I just found out that yes minus no scores a bit better on the private LB at 0.894, see the second notebook I shared above.</p>
<p>I see I made a typo in the code, and the result will probably be a bit better with the typo corrected to mimic the above prompt.</p>
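<p>For reference, scoring with the yes logit (or the yes-minus-no difference) boils down to something like this sketch (standard transformers calls; the prompt assembly and token handling are simplified assumptions):</p>
<pre><code>import torch

def yes_score(model, tokenizer, filled_prompt: str) -> float:
    # model: a causal LM (e.g. AutoModelForCausalLM); filled_prompt is the
    # template above with CONTEXT / PROMPT / ANSWER substituted in
    ids = tokenizer(filled_prompt, return_tensors="pt").input_ids.to(model.device)
    with torch.no_grad():
        next_token_logits = model(ids).logits[0, -1]
    yes_id = tokenizer("yes", add_special_tokens=False).input_ids[0]
    no_id = tokenizer("no", add_special_tokens=False).input_ids[0]
    return (next_token_logits[yes_id] - next_token_logits[no_id]).item()
</code></pre>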
<h1>Some training to 0.908</h1>
<p>Picking one pretrained logit may not be the best text classifier. During the last day of the competition I trained a logistic regression that takes as input the last hidden state of the model. It scores 0.894 on the public LB and 0.908 on the private LB. This is my best submission. </p>
<p>The improvement seems small but I started from a lower point as my 0.89 submission consistently failed when I tried it in the last week of the competition. I had to use a lower MAX_LENGTH of 1024 to make it run. And it took 8h58 to run… I therefore think that training the head yield close to 0.02 improvement.</p>
<p>I share the inference code <a href="https://www.kaggle.com/code/cpmpml/llm-021" target="_blank">here</a>.</p>
<h1>Misc</h1>
<p>I worked most of the time with embeddings computed with the <code>gte-small</code> model as it was the best for me on CV. I switched to larger ones like <code>bge-large-en-v1.5</code>and <code>e5-large-v2</code> during the last few days. My best sub uses <code>e5-large-v2</code>.</p>
<p>I wanted to look at TF-IDF and BM25 retrieval but did not find time for it.</p>
<p>The 9 hours limit and the memory were a real pain, and I was always at the border. My best submission ran for 8h58 minutes. And to keep up with memory I had to trim down max token length to 1024. What is weird is that versions with 3000 tokens ran fine two weeks ago. It would have moved me up by something like 0.005.</p>
<p>I also worked on a Deberta solution where I replace the 70B model above with Deberta. I tried several variants and the best private score is 0.902. It used 3 different embedding models: <code>gte-small</code> , <code>bge-large-en-v1.5</code>and <code>e5-large-v2</code>. For each of these I run the above pipeline with Deberta replacing the 70B model, then average the Deberta logits over the three runs. This yields 0.902 on private LB (and 0.895 public.) This notebook has nearly 400 GB of data sources attached to it! I used FP16 and Pytorch DDP to speedup inference 4x on T4.</p>
<p><strong>Edit.</strong> I also tried middle ground, i.e. 7B and 13B models. I fine tuned some but each time my fine tuned one was a bit worse than what I started from. Maybe I didn't use the right training data? Looking forward to top teams writeups to see how they fine tuned their models.</p> |
CAFA 5 Protein Function Prediction | 1st Place Solution for the CAFA5 | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CAFA 5 Protein Function Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h1>A. MODEL SUMMARY</h1>
<h2>A.1 Background on you/your team</h2>
<p>Competition Name: CAFA5 Protein Function Prediction</p>
<p>Team Name: GOCurator</p>
<p>Private Leaderboard Score: 0.61623</p>
<p>Private Leaderboard Place: 1</p>
<p>Name: Shaojun Wang</p>
<p>Location: Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, Shanghai, China</p>
<p>Email: <a href="mailto:20110850027@fudan.edu.cn">20110850027@fudan.edu.cn</a></p>
<p>Name: Weiqi Zhai</p>
<p>Location: Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, Shanghai, China</p>
<p>Email: <a href="mailto:20110850010@fudan.edu.cn">20110850010@fudan.edu.cn</a></p>
<p>Name: Wei Liu</p>
<p>Location: Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, Shanghai, China</p>
<p>Email: <a href="mailto:21110850007@m.fudan.edu.cn">21110850007@m.fudan.edu.cn</a></p>
<p>Name: Tianyang Huang</p>
<p>Location: Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, Shanghai, China</p>
<p>Email: <a href="mailto:tyhuang22@m.fudan.edu.cn">tyhuang22@m.fudan.edu.cn</a></p>
<p>Name: Huiying Yan</p>
<p>Location: Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, Shanghai, China</p>
<p>Email: <a href="mailto:yanhuiyingwangyi@163.com">yanhuiyingwangyi@163.com</a></p>
<h2>A2. Background on you/your team</h2>
<p>Our team consists of five PhD students working on bioinformatics, especially automated protein function prediction, and biomedical text mining from ZhuLab, which is led by Prof. Shanfeng Zhu at Fudan University. Our lab has participated in the CAFA3 and CAFA4 and achieved the top performance. Our team developed GOCurator by considering all kinds of information including protein sequence, network, structure, and textual information. During the competition, S.W. was responsible for collecting data, building the framework, and training the model. W.L developed a component method based on FoldSeek. T.H and H.Y were mainly engaged in biomedical text mining for protein annotation. W.Z submitted the prediction result, and performed the result analysis.</p>
<h2>A3. Summary</h2>
<p>Our pipeline, GOCurator, was developed based on NetGO 3.0. In addition, we also introduced new component methods to extract functional information from protein 3D structure, protein textual description, and scientific literature. Thanks to the learning to rank model, we eventually integrated multiple component methods effectively. Figure 1 shows the scores that the component methods achieved on the public leaderboard.<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F16029558%2F27233f10b1cbcfd1e5aec8ed8a5b85c5%2Fsolution1.png?generation=1704887698109856&alt=media" alt="The performance of component methods in public leaderboard"></p>
<ul>
<li>LR-MEM, it combined protein description, scientific literature, and protein sequence together based on the annotation from GO and GOA. Specifically, it concatenated representations from different information source as protein representations. Then we trained logistic regression classifiers based on it. We integrated this method into NetGO 3.0 and ranked in the first place for the first time on public leaderboard. </li>
<li>FoldSeek-KNN, it made prediction according to the structure information. By comparing the structural similarities between the test proteins and the proteins in the training set, we calculated the prediction score as weighted voting by similar annotated proteins.</li>
<li>GOXML, an extreme multi-label classifier based on literature information. Similar to DeepText2GO, it used AttentionXML to achieve fast high-precision large-scale multi-label classification without training logistic regression classifiers for each GO terms.</li>
<li>GORetrieval, a novel two-stage deep learning-based framework based on the protein description and protein literature. Given a target protein, it first retrieved candidate Gene Ontology (GO) terms from annotated proteins with similar descriptions. In the second stage, the retrieved GO terms were reranked based on semantic matching between the GO descriptions and the literature information of the target protein. Moreover, the second stage can also be used to enhance the prediction from other method via reranking.</li>
</ul>
<p>When doing ensemble learning, the input to the model contains not only the predicted scores from each method, but also a 20-dim one-hot vector to characterize the protein species information.</p>
<h2>A4. Features Selections / Engineering</h2>
<p>According to the public leaderboard scores, we believed that protein language models and textual information are useful for improving accuracy. For protein language models, we have also tried the ESM2 and ProtT5 to obtain protein embeddings. Similar to LR-ESM (based on ESM-1b), we trained logistic regression classifiers for ESM2 and ProtT5 embedding and the concatenation of all three embeddings. We found little difference in the scores achieved by these methods, but the best performance was achieved when LR-ESM was integrated as a component method into the GOCurator. Therefore, we finally used ESM-1b to generate protein embeddings. For textual information, how to generate text representations is important. Following to DeepText2GO, we used document to vector (D2V) and TF-IDF to represent scientific literature. In GORetrieval and GOXML, PubMedBERT was used to generate text embedding.</p>
<h2>A5. Training Method(s)</h2>
<p>We collected functional annotation information from three database SwissProt, GOA, and GO, according to the CAFA5 evidence code. Unlike the temporal split in NetGO, we selected proteins for validation based on the species distribution of proteins in the CAFA5 test set. Specifically, we calculate the proportion of proteins from each species in the CAFA5 test set, and sampled proteins for the corresponding species from the dataset to form a validation dataset of 1000 proteins. We have improved the performance of our methods as much as possible in this way.</p>
<h2>A6. Interesting findings</h2>
<p>We found that the Net-KNN based on protein-protein interaction performs poorly in predicting proteins from those species with less functional annotation information. When we integrated all the component methods together, we even got better results by not fusing Net-KNN into it. However, according to our previous studies (NetGO series), Net-KNN is a powerful method for BPO and CCO. Therefore, we tried to filter the prediction from Net-KNN by the species. Specifically, we considered the predictions for a target protein to be valid if the number of annotated proteins in the target protein’s species is large enough. As shown in the figure, the number of valid species has an impact on the prediction performance. Finally, we treated Net-KNN predictions for the top 15 species as effective results.<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F16029558%2Fc669b39b4fb0047b017c6b75a9440223%2Fsolution2.png?generation=1704888407902780&alt=media" alt=""></p>
<h2>A7. Simple Features and Methods</h2>
<p>According to the performance of different source information, we believe that incorporating a component method based on textual or structural information into NetGO 3.0 can achieve 90% of the final performance.</p>
<h2>A8. Model Execution Time</h2>
<p>Firstly, we used InterProScan and ESM-1b to get protein domain/family/motif information and protein embeddings, which took about 1-2 days, where most of the time was spent on InterProScan. Secondly, we trained each component method separately on several servers, spending roughly 10-15 hours per method. Then, the learning to rank model used for ensemble learning could be trained within 50 mins. For inference, we first used component methods to generate initial prediction, which took about 2 hours for CAFA5 test proteins. Finally, it took about 1.5 hours for ensemble learning on all test proteins.</p>
<h2>A9. References</h2>
<ol>
<li>You R, Zhang Z, Xiong Y, et al. GOLabeler: improving sequence-based large-scale protein function prediction by learning to rank. Bioinformatics, Volume 34, Issue 14, 2018, 2465–2473.</li>
<li>You R, Huang X, Zhu S. DeepText2GO: Improving large-scale protein function prediction with deep semantic text representation. Methods, 2018, 145: 82-90.</li>
<li>You R, Yao S, Xiong Y, et al. NetGO: improving large-scale protein function prediction with massive network information. Nucleic Acids Research, Volume 47, Issue W1, 2019, W379–W387.</li>
<li>Yao S, You R, Wang S, et al. NetGO 2.0: improving large-scale protein function prediction with massive sequence, text, domain, family and network information. Nucleic Acids Research, Volume 49, Issue W1, 2021, W469-W475</li>
<li>Gu Y, Tinn R, Cheng H, et al. Domain-specific language model pretraining for biomedical natural language processing. ACM Transactions on Computing for Healthcare (HEALTH), 3(1):1–23, 2021</li>
<li>Rives A, et al. Biological structure and function emerge from scaling unsupervised learning to 250 million protein sequences. PNAS, 118(15), 2021</li>
<li>You R, Yao S, Mamitsuka H, et al. DeepGraphGO: graph neural network for large-scale, multispecies protein function prediction. Bioinformatics (ISMB 2021), Volume 37, Supplement_1, 2021, i262–i271.</li>
<li>Wang S, You R, Liu Y, et al. NetGO 3.0: Protein Language Model Improves Large-scale Functional Annotations. Genomics, Proteomics & Bioinformatics, Volume 21, Issue 2, 2023, 349-358.</li>
</ol> |
CommonLit - Evaluate Student Summaries | 7th Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit - Evaluate Student Summaries
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, thanks to the competition organizers for hosting this interesting competition, and to my teammate <a href="https://www.kaggle.com/emiria" target="_blank">@emiria</a>. I learned a lot from emiria's ideas and code. Congrats on emiria's fourth gold medal, a new GM is on the way. And this is my first gold medal in an NLP competition.</p>
<p>The <strong>key points</strong> of our strategy:</p>
<ul>
<li>Add "prompt_text" for inputs (0.03 boost on cv)</li>
<li>Freezing layers (0.01 boost on cv; see the sketch after this list)</li>
<li>Different inputs for blending(0.01 boost on cv)</li>
<li>Blending with result of lightgbm(0.005 boost on cv)</li>
<li>Focus on local cv (LB only has 13% of data)</li>
</ul>
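<p>A minimal sketch of what the layer freezing can look like for deberta-v3-large (the number of frozen encoder layers here is illustrative, not our exact setting):</p>
<pre><code>from transformers import AutoModel

backbone = AutoModel.from_pretrained("microsoft/deberta-v3-large")
for p in backbone.embeddings.parameters():
    p.requires_grad = False
for layer in backbone.encoder.layer[:6]:   # freeze the first few encoder layers
    for p in layer.parameters():
        p.requires_grad = False
</code></pre>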
<p><strong>Did not work</strong> for us:</p>
<ul>
<li>AWP</li>
<li>SWA</li>
<li>Text preprocess</li>
<li>MLM</li>
</ul>
<h3>Models</h3>
<p>Here's the description of the models we used for our final submissions. We used GroupKFold on "prompt_id" for local validation, and used all prompts for training at inference time.</p>
<table>
<thead>
<tr>
<th>id</th>
<th>backbone</th>
<th>inputs</th>
<th>maxlen</th>
<th>loss</th>
<th>cv</th>
</tr>
</thead>
<tbody>
<tr>
<td>model1</td>
<td>deberta-v3-large</td>
<td>text+sep+prompt_text+sep+prompt_question</td>
<td>1280</td>
<td>mseloss</td>
<td>0.500</td>
</tr>
<tr>
<td>model2</td>
<td>deberta-v3-large</td>
<td>text+sep+prompt_title+sep+prompt_question+sep+prompt_text</td>
<td>1280</td>
<td>mseloss</td>
<td>0.489</td>
</tr>
<tr>
<td>model3</td>
<td>deberta-v3-large</td>
<td>prompt_title+sep+prompt_question+sep+text+sep+prompt_text</td>
<td>1280</td>
<td>mseloss</td>
<td>0.506</td>
</tr>
<tr>
<td>model4</td>
<td>deberta-v3-large+lgb</td>
<td>prompt_question+sep+text</td>
<td>512</td>
<td>mseloss</td>
<td>0.520</td>
</tr>
<tr>
<td>model5</td>
<td>deberta-v3-large</td>
<td>text+sep+prompt_title+sep+prompt_question+sep+prompt_text</td>
<td>768</td>
<td>mseloss</td>
<td>-</td>
</tr>
<tr>
<td>model6</td>
<td>deberta-v3-large</td>
<td>text+sep+prompt_title+sep+prompt_question+sep+prompt_text</td>
<td>768</td>
<td>logloss</td>
<td>-</td>
</tr>
<tr>
<td>model7</td>
<td>deberta-large</td>
<td>text+sep+prompt_title+sep+prompt_question+sep+prompt_text</td>
<td>1024</td>
<td>mseloss</td>
<td>-</td>
</tr>
</tbody>
</table>
<h3>Results</h3>
<p>And here are our models with the best scores:<br>
Each model is an average of 2 seeds, except "model4" (with lightgbm).</p>
<table>
<thead>
<tr>
<th>PB</th>
<th>LB</th>
<th>Picked</th>
<th>models</th>
</tr>
</thead>
<tbody>
<tr>
<td>0.456</td>
<td>0.427</td>
<td>Yes</td>
<td>0.32*model1+0.32*model2+0.16*model3+0.2*model7</td>
</tr>
<tr>
<td>0.453</td>
<td>0.428</td>
<td>No</td>
<td>0.32*model1+0.32*model2+0.16*model4+0.1*model5+0.1*model6</td>
</tr>
</tbody>
</table> |
Google Research - Identify Contrails to Reduce Global Warming | 92nd Place Solution for the Google Research - Identify Contrails to Reduce Global Warming Competition | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google Research - Identify Contrails to Reduce Global Warming
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Welcome! Congratulations to everyone on finishing this great competition. It was an honor for us to compete with you all. It makes me happy to see that AI solutions can benefit the environment and nature.</p>
<p>Our team (<a href="https://www.kaggle.com/slavabarkov" target="_blank">@slavabarkov</a> and me) would like to present our approach. We didn't get the SOTA result, but I believe we gained some insights.</p>
<h1>Context section</h1>
<ul>
<li>Business context: <a href="http://www.kaggle.com/competitions/google-research-identify-contrails-reduce-global-warming" target="_blank">www.kaggle.com/competitions/google-research-identify-contrails-reduce-global-warming</a></li>
<li>Data context: <a href="https://www.kaggle.com/competitions/google-research-identify-contrails-reduce-global-warming/data" target="_blank">https://www.kaggle.com/competitions/google-research-identify-contrails-reduce-global-warming/data</a></li>
</ul>
<h1>Overview of the Approach</h1>
<p>Our best solution was an ensemble of several models with equal weights: efficientnet-b5, b6, b7, b8 and eca_nfnet_l2, all with the Unet decoder. All models were trained on 512x512 images (Ash Color images, shout-out to <a href="https://www.kaggle.com/shashwatraman" target="_blank">@shashwatraman</a>) with DICE loss. We used a threshold of 0.3 for the final pixel predictions. It got 0.685 on the public LB and 0.681 on the private.</p>
<p>Our validation score correlated rather nicely with the public LB. We used StratifiedKFold validation (4 folds), stratified by mask size. I'll describe our approach more precisely in the next paragraph.</p>
<h1>Details of the submission</h1>
<h3>What was special about the submission</h3>
<p>We had a straightforward, robust approach that performed nicely without using SOTA models. We developed a thoughtful validation strategy that can be reused in future competitions.</p>
<p>We also had ideas that we did not manage to get working (we joined the competition in the last week). However, judging by the winning solutions, we were moving in the right direction.</p>
<h3>Validation strategy</h3>
<p>I didn't like the idea of random folds, since folds might contain images of different complexity levels. Some folds might contain too many empty masks or too many large masks. I thought that models might struggle with predicting empty masks, for example. Therefore, a nice approach was to divide images across folds evenly, based on their mask size. One just needs to sum the total number of mask pixels per image, then create bins labelling ranges of mask sizes. For empty masks we created a separate bin. Then you simply stratify the data based on these labels.</p>
<p>Code example:</p>
<pre><code>import pandas as pd
from sklearn.model_selection import StratifiedKFold

# Reconstructed sketch: the config keys and column names ("mask_pixels", "bin", "fold")
# are assumptions, since the exact identifiers were lost from the original post.
skf = StratifiedKFold(
    n_splits=config["split"]["n_splits"],
    shuffle=True,
    random_state=config["split"]["random_state"],
)
# Bin non-empty masks by total mask size; empty masks get their own bin (-1)
df["bin"] = pd.qcut(
    df[df["mask_pixels"] > 0]["mask_pixels"],
    q=config["split"]["n_bins"] - 1,
    labels=False,
)
df.loc[df["mask_pixels"] == 0, "bin"] = -1
for fold_number, (train_index, val_index) in enumerate(
    skf.split(df, df["bin"])
):
    df.loc[val_index, "fold"] = int(fold_number)
df.drop(columns=["bin"], inplace=True)
</code></pre>
<h3>Other details about the models</h3>
<p>For segmentation models we used the smp (segmentation_models_pytorch) library. We tried different backbones from the timm library and found that the efficientnet family worked best; resnet was not successful in our case. We also liked the nfnet models: they were faster and could be trained on bigger image sizes. However, 512x512 worked the best. We also found that model hyperparameters were not that important: the learning rate and scheduler did not matter much.</p>
<p>The Unet decoder was also the best one; other decoders were worse or showed almost the same performance.</p>
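<p>For reference, a minimal sketch of building such a model with segmentation_models_pytorch (the exact encoder names and arguments used by the team are not known, so treat this as an assumption-laden example):</p>
<pre><code>import segmentation_models_pytorch as smp

# Unet decoder with a timm efficientnet encoder and a single-channel (binary) output
model = smp.Unet(
    encoder_name="timm-efficientnet-b5",
    encoder_weights="imagenet",
    in_channels=3,   # the ash-color composite has 3 channels
    classes=1,
)
loss_fn = smp.losses.DiceLoss(mode="binary")  # DICE loss, as used in the write-up
</code></pre>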
<h3>Ideas that didn't work in our case (but they can)</h3>
<h3>Pretraining on the synthetic dataset</h3>
<p>It was a really cool idea to generate similar background images with generative networks and draw masks on them to pretrain the models. I think it is a very promising idea for future work. In our case it didn't work well: the trained models performed nicely on these images, but this didn't improve the quality on the competition dataset.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2590337%2Fc04e6ac0c033d17bac52b6405ea828ee%2Fphoto_2023-08-05_00-49-53.jpg?generation=1692294981919634&alt=media" alt=""></p>
<h3>Training on the soft labels</h3>
<p>Instead of using the ground-truth masks, we can average the predictions of the individual annotators. We preprocessed the data but then just didn't have the time to test it. This could work nicely, since there are controversial images in the dataset.</p>
<h3>Pseudo labeling other frames</h3>
<p>Pseudo labeling is a great approach to enlarge the dataset. We did this for frames 2, 3, 5 and 6 and trained models with the DICE loss. That was a mistake: we trained on binary masks, whereas we should have used BCE loss with soft labels in this case. Another mistake was that we used the whole ensemble to label this data, so we could not really validate the models on the previous folds. Our validation score improved, but the LB did not. I believe this is due to the hard labels and the knowledge (data) leakage from the models trained on other folds.</p>
<h3>Training the additional classifier</h3>
<p>I wanted to train a classifier model just to predict whether an image has an empty mask or not. I was really concerned about the empty masks, since wrong predictions on these images incur a high error. I tried different backbones and got about 0.9 accuracy, but in the pipeline it didn't work well. We didn't have time to improve this step.</p>
<h3>Training code</h3>
<p>We have created a python project with loggers (wandb), which runs from the command line with YAML config. You can find it on the github: <a href="https://github.com/25icecreamflavors/contrails" target="_blank">https://github.com/25icecreamflavors/contrails</a></p>
<h1>Sources</h1>
<ul>
<li><a href="https://www.kaggle.com/code/shashwatraman/contrails-dataset-ash-color" target="_blank">https://www.kaggle.com/code/shashwatraman/contrails-dataset-ash-color</a></li>
<li><a href="https://www.kaggle.com/competitions/understanding_cloud_organization/discussion/118080" target="_blank">https://www.kaggle.com/competitions/understanding_cloud_organization/discussion/118080</a></li>
<li><a href="https://github.com/25icecreamflavors/contrails" target="_blank">https://github.com/25icecreamflavors/contrails</a></li>
</ul> |
Bengali.AI Speech Recognition | 24th Place Solution for the Bengali.AI Speech Recognition Competition | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Speech Recognition
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h2>First of all, we want to thank the organizers who held this wonderful competition. This is my first Kaggle competition, and I feel satisfied with the final outcome. Thanks to my teammate <a href="https://www.kaggle.com/nisshokuitsuki" target="_blank">@nisshokuitsuki</a>, who worked with me during this three-month journey. Thanks to everyone who contributed to the discussions and the notebooks; your work gave us a lot of inspiration.</h2>
<h1>Model</h1>
<p>The model we used is</p>
<pre><code>
</code></pre>
<p>We finetuned further from this model: <a href="https://www.kaggle.com/datasets/nischaydnk/bengali-wav2vec2-finetuned" target="_blank">bengali_wav2vec2_finetuned (kaggle.com)</a><br>
Since the data quality of the competition dataset is low, we used the train split of <a href="https://www.kaggle.com/datasets/umongsain/common-voice-13-bengali-normalized?select=train.tsv" target="_blank">Common Voice 13 | Bengali (Normalized) (kaggle.com)</a> for training. We trained for about 10 epochs on the roughly 20,000 samples in the dataset and split off about 700 samples for validation.<br>
The data here is normalized and has punctuation removed.<br>
Here are the training arguments:</p>
<pre><code># Note: the concrete values were lost from the original post; the ones below are
# illustrative assumptions (epochs follow the "about 10 epochs" mentioned above).
training_args = TrainingArguments(
    group_by_length=True,
    weight_decay=0.01,
    num_train_epochs=10,
    fp16=True,
    learning_rate=1e-4,
    warmup_steps=500,
)
</code></pre>
<p>And we used a cosine learning-rate schedule.</p>
<p>The best model got a local WER of 0.15 and improved the public score from <em>0.445->0.434</em>.</p>
<p>However, my teammate also trained a model on the same data for just 60 steps (with different args) and got the same score (even 0.001 better on the private LB). How interesting and confusing.</p>
<h1>Language Model</h1>
<p>We trained a 6-gram LM with <a href="https://www.kaggle.com/datasets/mbmmurad/lm-no-punc" target="_blank">lm_no_punc (kaggle.com)</a>.<br>
Note that there is an error in the LM provided in <a href="https://www.kaggle.com/datasets/sameen53/yellowking-dlsprint-model" target="_blank">YellowKing_DLSprint_Model (kaggle.com)</a>:</p>
<pre><code>There is no terminator </s> in the arpa file, so we need to add </s>.
</code></pre>
<p>Thanks to this notebook, <a href="https://www.kaggle.com/code/umongsain/build-an-n-gram-with-kenlm-macro" target="_blank">Build an n-gram with KenLM | MaCro | Kaggle</a>, we were able to realize this.<br>
Adding the LM improved the LB from <em>0.445->0.422</em>.<br>
And building a 6-gram with <a href="https://www.kaggle.com/datasets/mbmmurad/lm-no-punc" target="_blank">lm_no_punc (kaggle.com)</a> improved it by about <em>0.001</em>.</p>
<h1>Punctuation Restoration</h1>
<p>Punctuation really matters. Thanks to this post, <a href="https://www.kaggle.com/competitions/bengaliai-speech/discussion/432305" target="_blank">Bengali.AI Speech Recognition | Kaggle</a>, we were able to realize this. A response under this post showed us a way to restore the punctuation:<br>
<a href="https://github.com/xashru/punctuation-restoration" target="_blank">xashru/punctuation-restoration: Punctuation Restoration using Transformer Models for High-and Low-Resource Languages (github.com)</a>. First we trained a model with the dataset provided in this repository. It can restore 3 punctuation marks:</p>
<pre><code>{: , : , : }
</code></pre>
<p>This improved the LB from <em>0.422->0.400</em>.<br>
Combining it with the finetuned model, we got <em>0.400->0.397</em>.<br>
Afterwards, we thought that 3 punctuation marks might not be enough. So we made a dataset from <a href="https://huggingface.co/datasets/oscar" target="_blank">oscar · Datasets at Hugging Face</a>, filtered for samples containing only Bengali words. We chose 7<br>
punctuation marks:</p>
<pre><code>{: , : , : , : , : , : , : }
</code></pre>
<p>We trained for 6 epochs with the default params.<br>
And we got <em>0.393->0.387</em>.</p>
<h1>Model ensemble</h1>
<p>We simply ensembled our models like this:</p>
<pre><code>y = w1 * model_1(x).logits + w2 * model_2(x).logits + w3 * model_3(x).logits  # the exact weights were lost from the original post
</code></pre>
<p>Wait, this works???<br>
Yes. Although the predictions may not be aligned, since the three models are trained on the same datasets, the alignment problem is partially solved.<br>
This improved our performance by about <em>0.001</em>.</p>
<h1>Decoder params selection</h1>
<p>There are three main params for the decoder:</p>
<pre><code>alpha: weight of the language model during shallow fusion
beta: weight of the length-score adjustment during scoring
beam_width: determines how many candidate output sequences are retained at each step.
</code></pre>
<p>To find the best params, we used Optuna (<a href="https://www.kaggle.com/code/snnclsr/0-444-optimize-decoding-parameters-with-optuna" target="_blank">[0.444] Optimize Decoding Parameters with Optuna | Kaggle</a>). We searched the params on the example data in the dataset, which is out-of-distribution data, and this brought us a better LB score.<br>
The final decoder params are:</p>
<pre><code>{'alpha': , 'beta': , 'beam_width': }
</code></pre>
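<p>For illustration, a minimal sketch of such an Optuna search (not the authors' code; the <code>compute_wer</code> helper and the search ranges are assumptions):</p>
<pre><code>import optuna

def objective(trial):
    alpha = trial.suggest_float("alpha", 0.0, 1.0)
    beta = trial.suggest_float("beta", 0.0, 1.0)
    beam_width = trial.suggest_categorical("beam_width", [64, 128, 256, 512, 1024])
    # compute_wer is assumed to decode a validation set with these parameters
    # and return the word error rate
    return compute_wer(alpha=alpha, beta=beta, beam_width=beam_width)

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=50)
print(study.best_params)
</code></pre>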
<h1>What doesn't work for us</h1>
<ul>
<li>Data augmentation. We added background noise downloaded from <a href="https://pixabay.com/sound-effects/search/noise/" target="_blank">https://pixabay.com/sound-effects/search/noise/</a> and also pitch shift, time stretch, etc. But the LB got worse (<em>0.397->0.415</em>). Every data-augmentation experiment takes too much time, and I felt too tired to do more experiments. Maybe I could have written some code to run them automatically.</li>
<li>Denoising models. We tried three denoising models:<br>
UVR: the notebook ran out of time.<br>
<a href="https://github.com/facebookresearch/denoiser" target="_blank">facebookresearch/denoiser</a>: decreased the score by about 0.01.<br>
<a href="https://github.com/NVIDIA/CleanUNet/blob/main/exp/DNS-large-high/checkpoint/pretrained.pkl" target="_blank">CleanUNet</a>: decreased the score by about 0.005.<br>
We thought that denoising harms the features and makes some short syllables unrecognizable.</li>
<li>Training a bigger LM with more data. We used 15 GB of normalized Bengali data to build a KenLM, but the score got worse. We still haven't found the cause of the problem.</li>
</ul> |
Kaggle - LLM Science Exam | 55th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kaggle - LLM Science Exam
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, thanks to the organization for this amazing competition and all the participants that shared their data and code. Especially <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> for the his datasets <a href="https://www.kaggle.com/datasets/cdeotte/60k-data-with-context-v2" target="_blank">60k-data</a> and <a href="https://www.kaggle.com/datasets/cdeotte/40k-data-with-context-v2" target="_blank">40k-data</a>, <a href="https://www.kaggle.com/mbanaei" target="_blank">@mbanaei</a> for the <a href="https://www.kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded" target="_blank">270Wikipedia STEM articles</a> and the idea of using TF-IDF. Without this help, it would have been impossible to achieve the silver medal in this competition.</p>
<h1>Overview of the final solution</h1>
<ul>
<li>Retrieval: Single TF-IDF on <a href="https://www.kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded" target="_blank">270Wikipedia STEM articles</a> extended with 159K more articles using Cohere embeddings to filter the articles and wikiextractor to get the paragraphs.</li>
<li>Models: Ensemble of 3 DeBERTa-v3-large models trained and validated with different data.</li>
</ul>
<h1>Training</h1>
<p>We spent most of the time using the <a href="https://www.kaggle.com/datasets/cdeotte/60k-data-with-context-v2" target="_blank">60k-data</a> and <a href="https://www.kaggle.com/datasets/cdeotte/40k-data-with-context-v2" target="_blank">40k-data</a> to finetune DeBERTa, which was a poor approach because we should have focused on improving the retrieval (we were using the first open-book approximation shared by <a href="https://www.kaggle.com/jjinho" target="_blank">@jjinho</a> in this <a href="https://www.kaggle.com/code/jjinho/open-book-llm-science-exam/notebook" target="_blank">notebook</a>).<br>
However, when changing the retrieval to the <a href="https://www.kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded" target="_blank">270Wikipedia STEM articles</a> + TF-IDF, having good models allows for small gains that make a difference compared to other DeBERTa models that are slightly less well fine-tuned.</p>
<p>Specifically, we used 3 models fine-tuned with different data from the 60k dataset.</p>
<ul>
<li>One model using the <a href="https://www.kaggle.com/datasets/cdeotte/60k-data-with-context-v2" target="_blank">60k-data</a> for training and 200 data provided in the competition for validation.</li>
<li>Two models using a 20k subset from the <a href="https://www.kaggle.com/datasets/cdeotte/60k-data-with-context-v2" target="_blank">60k-data</a> + <a href="https://www.kaggle.com/datasets/cdeotte/40k-data-with-context-v2" target="_blank">40k-data</a>. This subset was built trying to have questions with a distribution similar to the 200 examples provided in the competition (we noticed that the questions among the 200 were longer, so we looked for a subset that resembled this distribution more closely). These two models were validated with two different subsets of 2,000 scientific questions also obtained from the <a href="https://www.kaggle.com/datasets/cdeotte/60k-data-with-context-v2" target="_blank">60k-data</a>, as this validation generated models with a higher LB-CV correlation and also higher LB scores than using the original 200 questions for validation.</li>
</ul>
<p>In addition, when selecting the 3 models, we chose the 3 cases that were least correlated with each other among the sufficiently good cases. When choosing the weights, we tried to optimize them using the 200 original questions and other datasets, but there was no correlation with the LB. So we finally settled on the combination that worked best among 3 or 4 simple weight schemes (equally weighted, one weighted twice as much as the other two, …).</p>
<h2>Traning parameters/ experiments</h2>
<ul>
<li>Modifying the number of frozen layers. Ultimately, we chose 2 models with 18 frozen layers and one model with no frozen layers.</li>
<li>We didn't see many differences when varying the maximum number of tokens during training. The selected models are trained with max_tokens values ranging from 350 to 500. As for the max_tokens in validation, we found that 650 allowed us to obtain models that correlated well with what later happened on the LB.</li>
<li>Altering the learning rate (both the learning rate and the warmup ratio). The models that generally performed best required around 25,000 steps with batch_size=2. At step 25,000, the learning rate was 1.5e-6, and the warmup had not yet completed, but generally beyond that point, the model stopped improving.</li>
<li>Step strategy for evaluation instead of epoch strategy.</li>
<li>Things that didn't work: gradual unfreezing and discriminative fine-tuning.</li>
</ul>
<h1>Retrieval.</h1>
<p>When <a href="https://www.kaggle.com/mbanaei" target="_blank">@mbanaei</a> shared <a href="https://www.kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded" target="_blank">270Wikipedia STEM articles</a> we started to use TF-IDF on it as in his <a href="https://www.kaggle.com/code/mbanaei/86-2-with-only-270k-articles" target="_blank">notebook</a>.<br>
We improved the retrieval in two distinct ways.</p>
<ul>
<li><p>On the one hand, we modified certain parameters of the TfidfVectorizer that led to improvements in CV, public LB, and private LB:</p>
<ul>
<li>A substantial improvement was adding the parameter sublinear_tf=True to the TfidfVectorizer, which gave us a gain of 0.005 on both the public and the private LB (see the sketch after this list). This makes sense because in this case it is beneficial to give less weight to terms that are very frequent across many documents and more weight to very infrequent ones.</li>
<li>Another small improvement was switching from the regular expression <code>r"(?u)\b[\w/.-]+\b|!|/|?|"|'"</code> to <code>r"(?u)\b[\w/.-]+\b"</code> for token generation. The latter discards punctuation signs like , ' - etc., while the former uses them as tokens (this gave us a 0.001 improvement on occasions).</li></ul></li>
<li><p>On the other hand, we tried to increase the number of articles in the dataset. This provided another boost of around 0.006 on both the public and the private LB. To select new articles, we used Cohere/wikipedia-22-12-en-embeddings and the mean or max cosine similarity between each new article and the 270,000 articles already in use. Our final submission uses 159K articles in addition to the original 270K.</p></li>
</ul> |
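<p>As a rough sketch of a TfidfVectorizer configured along these lines (illustrative only; the corpus variables and the top-k choice are assumptions, not the authors' pipeline):</p>
<pre><code>from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

vectorizer = TfidfVectorizer(
    ngram_range=(1, 2),
    token_pattern=r"(?u)\b[\w/.-]+\b",  # tokens only, punctuation discarded
    sublinear_tf=True,                  # log-scaled term frequencies
    stop_words="english",
)
doc_matrix = vectorizer.fit_transform(wiki_paragraphs)   # assumed list of paragraph strings
query_matrix = vectorizer.transform(question_texts)      # assumed question + options strings
scores = cosine_similarity(query_matrix, doc_matrix)
top_k = scores.argsort(axis=1)[:, ::-1][:, :5]           # indices of the 5 best paragraphs per question
</code></pre>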
HuBMAP - Hacking the Human Vasculature | Enhancing Kidney Microvascular Structure Segmentation with Morphological Operations - Private LB .451 | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: HuBMAP - Hacking the Human Vasculature
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><strong>Summary:</strong><br>
In this competition, the goal is to segment microvascular structures in healthy human kidney tissue slides. To achieve this, a model is trained on 2D PAS-stained histology images. I used morphological operations to significantly boost accuracy; however, I didn't select this submission, but thought I'd share the approach for posterity.</p>
<p>I forked a popular notebook and made minor changes: <br>
<a href="https://www.kaggle.com/code/skeller/hubmap-mmdet3-1-single-fold-inference/notebook" target="_blank">https://www.kaggle.com/code/skeller/hubmap-mmdet3-1-single-fold-inference/notebook</a></p>
<p><strong>Introduction:</strong><br>
Segmenting microvascular structures, such as capillaries, arterioles, and venules, in kidney tissue slides is a challenging task due to their intricate nature and complex patterns. While deep learning models can achieve impressive results, incorporating domain-specific knowledge can further refine the predictions. In this approach, morphological operations are applied to the binary masks generated by the model to improve the accuracy of segmenting these delicate structures.</p>
<p><strong>Morphological Operations for Microvascular Segmentation:</strong><br>
Morphological operations are image processing techniques that alter the shape and structure of objects in an image. In the context of this competition, binary masks are used to represent the presence or absence of microvascular structures in the kidney tissue slides.</p>
<p><strong>Morphological Gradient:</strong><br>
The morphological gradient is calculated by taking the difference between the dilated and eroded versions of the binary mask. It highlights the boundaries and edges of the structures. This operation helps in refining the masks and enhancing the edges of the segmented structures.</p>
<p><strong>Combining Multiple Operations:</strong><br>
To further improve the segmentation, a combination of morphological operations is applied to the binary masks. Specifically, the code uses erosion, opening, and closing operations iteratively. These operations help remove noise, close small gaps, and smooth the masks, resulting in cleaner and more accurate segmentations.</p>
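<p>A minimal sketch of these post-processing steps with OpenCV (illustrative; the kernel size, iteration counts and how the gradient is merged back are assumptions, not the values used in the submission):</p>
<pre><code>import cv2
import numpy as np

kernel = np.ones((3, 3), np.uint8)

def refine_mask(mask: np.ndarray) -> np.ndarray:
    """mask: binary uint8 array (0/1) predicted by the segmentation model."""
    # Morphological gradient = dilation - erosion; highlights structure boundaries
    gradient = cv2.morphologyEx(mask, cv2.MORPH_GRADIENT, kernel)
    # Erosion, opening and closing applied to remove noise, close small gaps and smooth the mask
    refined = cv2.erode(mask, kernel, iterations=1)
    refined = cv2.morphologyEx(refined, cv2.MORPH_OPEN, kernel)
    refined = cv2.morphologyEx(refined, cv2.MORPH_CLOSE, kernel)
    # One option is to OR the gradient back in so thin boundary pixels are kept
    return refined | gradient
</code></pre>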
<p><strong>Weighting Glomerulus Class:</strong><br>
Glomerulus structures are of particular importance in kidney tissue analysis. To ensure accurate segmentation of these structures, the code assigns a higher score (weight) to the glomerulus class during the final prediction. This weighting ensures that glomeruli are given more significance in the final segmentation.</p>
<p><strong>Conclusion:</strong><br>
I didn't spend much time on this contest (last 5 days) but saw some decent results with morphological operations with a deep learning model to accurately segment microvascular structures in kidney tissue slides. By enhancing the binary masks through morphological gradient calculation and applying a combination of erosion, opening, and closing operations, the approach significantly boosted accuracy. Additionally, the approach effectively assigns a higher weight to the glomerulus class, further improving the segmentation performance. This combination of deep learning and domain-specific image processing techniques showcases the power of leveraging morphological operations for microvascular structure segmentation in histology images.</p>
<p><strong>Methodology's Final Private Leaderboard Score: 0.451</strong></p>
Kaggle - LLM Science Exam | 49th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kaggle - LLM Science Exam
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I had not had much experience using language models before, so I entered this competition to learn about them. As a result, I was able to learn a lot about working with language models, and in addition, I was able to win my first silver medal!</p>
<p>However, this result wasn't achieved through my own abilities, but through the collective knowledge of many Kagglers, especially the following people.</p>
<ul>
<li><a href="https://www.kaggle.com/radek1" target="_blank">@radek1</a> for pointing me in the initial direction of Multiple Choice using Deberta.</li>
<li><a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> for publishing the RAG approach and providing a large amount of well-maintained training data.</li>
<li><a href="https://www.kaggle.com/yalickj" target="_blank">@yalickj</a> for providing a validation dataset that correlates to LB and help model training.</li>
<li><a href="https://www.kaggle.com/mbanaei" target="_blank">@mbanaei</a> for providing an improved context retrieving method and pushing the competition to the next stage.</li>
</ul>
<p>What I have done is merely a combination of these things. I did not have the time or the ability to make the fundamental, cool improvements that were the major challenges in this competition, such as an efficient and accurate context retrieval method, or the advanced engineering needed to run huge models on limited compute resources (like <a href="https://www.kaggle.com/cpmpml" target="_blank">@cpmpml</a> showed).</p>
<p>Anyway, the approach I took was the following points.</p>
<ul>
<li>A Multiple Choice model for DeBERTa-v3-large and a Sequential Classification model for Mistral-7B are trained on the 60k+40k+99k dataset with TF-IDF-based 270k wiki contexts attached.<ul>
<li>For DeBERTa-v3-large, I froze the weights of the upper layers to improve the training time.</li>
<li>I originally used LLaMa2-7B as the LLM, but Mistral-7B, which had just been released, performed wonderfully. I trained Mistral-7B with 4-bit quantization and LoRA for sequential classification and obtained a clear improvement in accuracy.</li>
<li>Actually, a submission made with a model using only the 60k dataset as training data achieved Public LB 0.9 and Private LB 0.911, which was my best, although I did not select it 😅. In my case, the addition of the 40k+99k training data improved the CV and the public score (0.910) but didn't improve the private score.</li></ul></li>
<li>Using the DeBERTa-v3-large (multiple choice) and Mistral-7B (sequential classification) trained with the above methods, I performed inference separately for the two contexts, the parsed and un-parsed TF-IDF-based 270k wiki contexts, and then used simple soft voting for ensembling (see the sketch after this list).<ul>
<li>When using multiple models, I took care to avoid OOM by writing all code in <code>.py</code> files and running it as <code>!python xxx.py</code>.</li></ul></li>
</ul>
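<p>A minimal sketch of what such soft voting can look like (illustrative; the array names and the equal weights are assumptions):</p>
<pre><code>import numpy as np

# probs_* : arrays of shape (num_questions, 5) with per-option probabilities
# from each model/context combination
ensemble = np.mean([probs_deberta_parsed, probs_deberta_unparsed,
                    probs_mistral_parsed, probs_mistral_unparsed], axis=0)

# MAP@3 submission: take the three most probable options per question
top3 = np.argsort(-ensemble, axis=1)[:, :3]
letters = np.array(list("ABCDE"))
predictions = [" ".join(letters[row]) for row in top3]
</code></pre>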
<p>Here is the notebook: <a href="https://www.kaggle.com/code/bobfromjapan/49th-solution-deberta-and-mistral-7b-w-270k-cont" target="_blank">https://www.kaggle.com/code/bobfromjapan/49th-solution-deberta-and-mistral-7b-w-270k-cont</a></p>
<p>Once again, I would like to thank all the Kagglers and staff(, and my RTX4090 that kept computing for almost a whole month😀)</p> |
CommonLit - Evaluate Student Summaries | 25th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit - Evaluate Student Summaries
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congrats to all the winners!<br>
Thanks to organizers and all participants! I really enjoyed this exciting competition!</p>
<p>This is my final solution!</p>
<h1>Overview</h1>
<p>As <a href="https://www.kaggle.com/code/tsunotsuno/updated-debertav3-lgbm-with-feature-engineering" target="_blank">I published before</a>, I used the Deberta + LGBM trick.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2033239%2Fa993ec6cdadb4afed646085dfa13e511%2F2023-10-13%201.24.56.png?generation=1697127934161831&alt=media" alt=""></p>
<h2>CV-LB Score Summary</h2>
<table>
<thead>
<tr>
<th></th>
<th>wording CV</th>
<th>content CV</th>
<th>CV</th>
<th>Public</th>
<th>Private</th>
</tr>
</thead>
<tbody>
<tr>
<td>DebertaV3-base</td>
<td>0.425</td>
<td>0.542</td>
<td>0.483</td>
<td>0.490</td>
<td>0.478</td>
</tr>
<tr>
<td>DebertaV3-large</td>
<td>0.412</td>
<td>0.534</td>
<td>0.477</td>
<td>not submit</td>
<td>not submit</td>
</tr>
<tr>
<td>DebertaV3-base + DebertaV3-large with LGBM tric</td>
<td>0.419</td>
<td>0.551</td>
<td>0.485</td>
<td>0.433</td>
<td>0.463</td>
</tr>
</tbody>
</table>
<h2>1st stage: Deberta</h2>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2033239%2F6df068eeabe2ee8f944bc53b2f15e57e%2F2023-10-13%201.28.04.png?generation=1697128165767762&alt=media" alt=""></p>
<ul>
<li>base model : DebertaV3-base, DebertaV3-large</li>
<li>input: <code>prompt_text</code> + <code>[SEP]</code> + <code>prompt_question</code> + <code>[SEP]</code> + <code>text</code><ul>
<li>max_len = 2000</li></ul></li>
<li>loss: MCRMSE loss</li>
<li>custom header: average pooling (using <code>text</code> only)</li>
<li>inference: top 3 cv score model average (to avoid inference timeouts)</li>
</ul>
<table>
<thead>
<tr>
<th></th>
<th>wording CV</th>
<th>content CV</th>
<th>CV</th>
<th>Public</th>
<th>Private</th>
</tr>
</thead>
<tbody>
<tr>
<td>DebertaV3-base</td>
<td>0.425</td>
<td>0.542</td>
<td>0.483</td>
<td>0.490</td>
<td>0.478</td>
</tr>
<tr>
<td>DebertaV3-large</td>
<td>0.412</td>
<td>0.534</td>
<td>0.477</td>
<td>not submit</td>
<td>not submit</td>
</tr>
</tbody>
</table>
<p>By the way, I show the score models trained using all data without cutting CV.</p>
<table>
<thead>
<tr>
<th></th>
<th>wording CV</th>
<th>content CV</th>
<th>CV</th>
<th>Public</th>
<th>Private</th>
</tr>
</thead>
<tbody>
<tr>
<td>DebertaV3-base (full data, epoch=2)</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>0.470</td>
<td>0.481</td>
</tr>
<tr>
<td>DebertaV3-large (full data, epoch=2)</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>0.480</td>
<td>0.465</td>
</tr>
</tbody>
</table>
<p>It seems that using a single model (DebertaV3-large, trained on the full data, epoch=2) can achieve a silver-medal score.</p>
<h3>Why I used <code>prompt_text</code>?</h3>
<p>I think many participants suffered from large <code>wording</code> rmse.</p>
<p>In my opinion, this big error comes from the <a href="https://www.kaggle.com/competitions/commonlit-evaluate-student-summaries/discussion/430705" target="_blank">score rotation</a> performed by the competition host.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2033239%2Fee4c02f37d0f5c8ae695d2f6a016aade%2F2023-10-13%202.29.03.png?generation=1697131789891984&alt=media" alt=""></p>
<p>If the content and wording scores we get from this competition have already been rotated by 30 degrees, the scores we observe look like the formulas below (see the numeric sketch after the list).</p>
<ul>
<li>content score (apparent) = original_content score * cos30° - original_wording score * sin30°</li>
<li>wording score (apparent) = original_content score * sin30° + original_wording score * cos30°</li>
</ul>
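<p>A tiny numeric sketch of this hypothesis (purely illustrative; the input values are made up, and the 30° angle is only the one hypothesized above):</p>
<pre><code>import numpy as np

theta = np.deg2rad(30)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

original = np.array([1.2, -0.4])   # hypothetical (original_content, original_wording) scores
apparent = R @ original            # what we would observe after the rotation
# apparent[0] = content*cos30 - wording*sin30
# apparent[1] = content*sin30 + wording*cos30
</code></pre>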
<p>Thus, if my idea is correct, the content and wording score dataset we get from competition will contain both original content and wording components.</p>
<p>The meanings of content and wording score are shared from host, and original content score is based on following components.</p>
<ul>
<li>Main Idea<ul>
<li>How well did the summary capture the main idea of the source?</li></ul></li>
<li>Details<ul>
<li>How accurately did the summary capture the details from the source?</li></ul></li>
<li>Cohesion<ul>
<li>How well did the summary transition from one idea to the next?</li></ul></li>
</ul>
<p>I think the apparent wording score contains not only the original wording component but also the original content component. So, in order to improve the accuracy on the apparent wording score, I think it is necessary to accurately predict the original content score, and <code>prompt_text</code> is needed for predicting that content score.</p>
<p>(No one knows what the truth is. This is just my opinion…)</p>
<h3>Why I used custom header?</h3>
<p>So far, I have shown that it is better to use prompt_text.</p>
<p>However, I think that simply using prompt_text results in longer inputs, which in turn carry more noise. As a result, I think we can't get a good CV score that way.</p>
<p>So I used the custom header to feed only the <code>text</code> portion into the final layer.<br>
I think we can use <code>prompt_text</code> and <code>prompt_question</code> as input, but focus DeBERTa on the <code>text</code>.</p>
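<p>A minimal sketch of such a header, pooling hidden states only over the summary-text tokens (illustrative; it assumes a <code>text_mask</code> that marks which tokens belong to the student's <code>text</code>):</p>
<pre><code>import torch
import torch.nn as nn

class TextOnlyMeanPooling(nn.Module):
    """Average the encoder hidden states over the summary tokens only."""
    def forward(self, hidden_states, text_mask):
        # hidden_states: (batch, seq_len, hidden); text_mask: (batch, seq_len) of 0/1
        mask = text_mask.unsqueeze(-1).float()
        summed = (hidden_states * mask).sum(dim=1)
        counts = mask.sum(dim=1).clamp(min=1e-6)
        return summed / counts  # (batch, hidden), fed to a 2-unit regression head
</code></pre>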
<h2>2nd stage: LightGBM</h2>
<p>I used the LGBM trick as I published earlier.</p>
<h3>Using features</h3>
<ul>
<li>deberta output <ul>
<li>content_debertav3_large_prompt_text_text_ver2_pred</li>
<li>content_debertav3_base_prompt_text_text_ver2_pred</li>
<li>wording_debertav3_large_prompt_text_text_ver2_pred</li>
<li>wording_debertav3_base_prompt_text_text_ver2_pred</li></ul></li>
<li>handcrafted features<ul>
<li>num_chars</li>
<li>num_unique_words</li>
<li>embedding_sim (sentence transformer)</li>
<li>tfidf_sim</li>
<li>subjectivity</li>
<li>trigram_overlap_ratio</li>
<li>difficult_words</li>
<li>coleman_liau_score</li>
<li>polarity</li>
<li>nn_count</li>
<li>summary_length</li>
<li>dale_chall_readability_score</li>
<li>num_stopwords</li>
<li>mean_tokens_in_sentence</li>
<li>bigram_overlap_ratio</li>
<li>flesch_reading_ease</li>
<li>jj_count</li>
<li>max_count_per_sentence</li>
<li>vb_count</li>
<li>mean_count_per_sentence</li>
<li>osman</li>
<li>linsear_write_formula</li>
<li>gunning_fog</li>
<li>word_overlap_count</li>
<li>splling_err_num</li>
<li>szigriszt_pazos</li>
<li>smog_index</li>
<li>median_count_per_sentence</li>
<li>pr_count</li>
<li>num_words_title</li>
<li>min_count_per_sentence</li>
<li>num_punctuations</li>
<li>fernandez_huerta</li>
<li>gulpease_index</li>
<li>gutierrez_polini</li>
<li>crawford</li>
<li>automated_readability_index</li>
<li>bigram_overlap_count</li>
<li>trigram_overlap_count</li>
<li>flesch_kincaid_grade</li>
<li>num_sentences</li>
<li>cd_count</li>
<li>quotes_count</li>
<li>num_words_upper</li>
<li>num_paragraphs</li>
<li>uh_count</li></ul></li>
</ul> |
Kaggle - LLM Science Exam | 2nd Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kaggle - LLM Science Exam
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to Kaggle for organizing this competition. I do similar things in my work, and it was interesting to see what an open community could achieve here. My final solution is mostly based on Deberta type models and only half of the test questions are handled by Mistral LLM.</p>
<h2>System components</h2>
<ol>
<li><p>Knowledge database<br>
The Hugging Face dataset <a href="https://huggingface.co/datasets/graelo/wikipedia/viewer/20230601.en" target="_blank">graelo/wikipedia/20230601.en</a> was used as a single source of knowledge. Each document was divided into sentences and grouped into overlapping chunks with an average length of 1000 characters. The chunks were indexed using the <code>pyserini.index.lucene</code> package. The resulting index takes up 23 GB, and retrieving 200*5 queries takes about 5 minutes.</p></li>
<li><p>Reranker model<br>
Each retrieved chunk was rescored using a custom Reranker model. This Reranker <code>deberta-v3-base</code> model is built like the standard <code>DebertaV2ForMultipleChoice</code> model, but instead of predicting the correct answer given a context, this model predicts the best context for a given question/answer pair. To create the training dataset for this, a teacher model was used to label pseudo-ranks. Specifically, for each correct question/answer pair, a few contexts were retrieved and evaluated using the teacher model. The best context was considered the correct option for the Reranker model. The teacher model was taken from early experiments and is based on the <code>deberta-v3-large</code> model.</p></li>
<li><p>Multiple Choice model<br>
Nothing special here: the standard <code>DebertaV2ForMultipleChoice</code> approach was used for the Deberta- and Mistral-based models. Just one note: to train on datasets with different numbers of options, the loss function was scaled like this: <code>loss * (self.num_labels / num_choices_in_sample)</code>.</p></li>
<li><p>Masked LM<br>
Error analysis showed that the MC models performed very poorly on questions where the options differ in only a few words. For these specific cases, when the options are very similar and quite long, a special Masked LM was used. It is very easy to train this model, but it is quite difficult to apply it to answering a 5-option question. Ideally, we should find a multiple sequence alignment of the 5 options and compare the differing words in a single pass. But I couldn't find a python package for this and decided to use <code>sed.standard_sed_backtrace</code> between each pair of options.</p></li>
<li><p>Cross reference options<br>
To be honest, I thought the shakeup would be stronger. I assumed that the test data included more questions with options like <code>None of the above</code>. For example, when I replaced all correct answers in my validation sets with <code>None of the above</code>, the quality dropped drastically from 0.9 to 0.3 MAP@3. Keeping that in mind, I built a second-stage ranker with more training examples of this kind.</p></li>
<li><p>XGBRanker<br>
XGBRanker was used as the fusion mechanism. This ranker is trained on validation sets and is based on a few features: scores from the retrieval models, logits from the Deberta/Mistral MC models, and one flag feature for <code>None of the above</code> options (see the sketch after this list).</p></li>
</ol>
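<p>For illustration, a minimal sketch of fitting such a ranker with xgboost (not the author's code; the feature construction and hyperparameters are assumptions):</p>
<pre><code>import numpy as np
import xgboost as xgb

# X: one row per (question, option) pair with features such as retrieval scores,
#    Deberta/Mistral logits and a "none of the above" flag
# y: 1 for the correct option, 0 otherwise; 5 options per question
ranker = xgb.XGBRanker(objective="rank:pairwise", n_estimators=200, learning_rate=0.05)
ranker.fit(X_train, y_train, group=[5] * (len(X_train) // 5))

scores = ranker.predict(X_test).reshape(-1, 5)
top3 = np.argsort(-scores, axis=1)[:, :3]   # MAP@3 prediction: three best options per question
</code></pre>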
<h2>Failed experiments</h2>
<ul>
<li>Focal loss for <code>DebertaV2ForMultipleChoice</code></li>
<li>Removing causal constraints for Llama attentions</li>
</ul>
<h2>Datasets</h2>
<table>
<thead>
<tr>
<th>Author</th>
<th>Name</th>
<th>Size</th>
<th>Licence</th>
</tr>
</thead>
<tbody>
<tr>
<td><a href="https://www.kaggle.com/radek1" target="_blank">@radek1</a></td>
<td>all</td>
<td>39249</td>
<td></td>
</tr>
<tr>
<td><a href="https://www.kaggle.com/leonidkulyk" target="_blank">@leonidkulyk</a></td>
<td>stem_1k_v1</td>
<td>928</td>
<td></td>
</tr>
<tr>
<td><a href="https://www.kaggle.com/nlztrk" target="_blank">@nlztrk</a></td>
<td>eduqg_llm_formatted</td>
<td>3297</td>
<td></td>
</tr>
<tr>
<td><a href="https://www.kaggle.com/mozattt" target="_blank">@mozattt</a></td>
<td>test</td>
<td>3084</td>
<td></td>
</tr>
<tr>
<td><a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a></td>
<td>MMLU</td>
<td>17433</td>
<td>NC</td>
</tr>
<tr>
<td>AI2</td>
<td><a href="https://huggingface.co/datasets/openbookqa" target="_blank">openbookqa</a></td>
<td>5957</td>
<td></td>
</tr>
<tr>
<td>AI2</td>
<td><a href="https://huggingface.co/datasets/ai2_arc" target="_blank">ai2_arc</a></td>
<td>7787</td>
<td></td>
</tr>
<tr>
<td>AI2</td>
<td><a href="https://huggingface.co/datasets/qasc" target="_blank">qasc</a></td>
<td>9060</td>
<td></td>
</tr>
<tr>
<td>AI2</td>
<td><a href="https://huggingface.co/datasets/sciq" target="_blank">sciq</a></td>
<td>13679</td>
<td>NC</td>
</tr>
</tbody>
</table>
<h2>Ablation analysis</h2>
<table>
<thead>
<tr>
<th>System</th>
<th>Private Score</th>
<th>Public Score</th>
<th>Kernel</th>
</tr>
</thead>
<tbody>
<tr>
<td>k=1, 2xDebertaMC</td>
<td>0.909763</td>
<td>0.907823</td>
<td></td>
</tr>
<tr>
<td>k=8, 2xDebertaMC, Reranker</td>
<td>0.916119</td>
<td>0.907823</td>
<td></td>
</tr>
<tr>
<td>k=8, 2xDebertaMC, Reranker, XGBRanker</td>
<td>0.915963</td>
<td>0.909904</td>
<td></td>
</tr>
<tr>
<td>k=8, 2xDebertaMC, Reranker, XGBRanker, MLM</td>
<td>0.915546</td>
<td>0.911568</td>
<td></td>
</tr>
<tr>
<td>k=8, 2xDebertaMC, Reranker, XGBRanker, MLM, MistralMC</td>
<td>0.931489</td>
<td>0.931543</td>
<td><a href="https://www.kaggle.com/code/sorokin/llm-xgboost-abc?scriptVersionId=145958932" target="_blank">llm-xgboost-abc</a></td>
</tr>
</tbody>
</table>
<p>*k - number of documents retrieved from the index.</p>
<h2>Conclusion</h2>
<p>I think the key components of this solution are the full Wikipedia index and XGBRanker on top of different models: BM-25/Reranker scores, Multiple Choice Deberta and Mistral logits.</p> |
Kaggle - LLM Science Exam | From nothing to something... a 103 LB jump! | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kaggle - LLM Science Exam
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congratulations to all winners!</p>
<p>Just like the title says, I was quite surprised: although for the whole of the last 2 weeks I had slipped out of bronze into the 300s in ranking, when the private LB was released I found that I made the cut after all. I ended up with a medal (and rank 197).</p>
<p>But in all honesty, my primary reason for joining the comp was to learn more about the LLM space. I knew there was so much progress being made over the past few months in this area, and I had no clue how or where to start. So firstly, very grateful to kaggle for this competition!</p>
<p>Regarding my solution, here is a summary:</p>
<ul>
<li>Uses <a href="https://www.kaggle.com/mbanaei" target="_blank">@mbanaei</a>'s amazing notebook for wiki context retrieval. I modified it to use 14 chunks and 10 top queries, and also fit the tfidf on a larger fraction of the text corpus.</li>
<li>Made a custom wiki index and custom wiki embeddings, and then used faiss to search the index for relevant articles (see the sketch after this list). The embeddings and index were built from the title and 2 paragraphs of every page.</li>
<li>I then use a single deberta model fine-tuned on a subset of the 60k data shared by <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> + a custom dataset generated by gpt3.5</li>
<li>I then run the model on the parsed and Cohere contexts of the 270k wiki data, and if the ensemble has a top probability score higher than 0.41, I use that prediction; otherwise I use the prediction from the same model using my custom 6M wiki embeddings.</li>
</ul>
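<p>A minimal sketch of that kind of faiss lookup (illustrative; the embedding model, file names and k are assumptions, not the author's setup):</p>
<pre><code>import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")            # assumed embedding model

wiki_emb = np.load("wiki_embeddings.npy").astype("float32")  # (num_pages, dim), pre-built offline
faiss.normalize_L2(wiki_emb)
index = faiss.IndexFlatIP(wiki_emb.shape[1])                 # inner product == cosine after L2-norm
index.add(wiki_emb)

query_emb = encoder.encode(questions, normalize_embeddings=True).astype("float32")
scores, page_ids = index.search(query_emb, 5)                # top-5 relevant pages per question
</code></pre>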
<p>Approaches I tried but failed to capitalise on:</p>
<ul>
<li>4 different trained models, each having predictions from 3 different contexts (parsed, cohere and 6M wiki), and then running an extensive dynamic weighted voting ensemble. This got quite complex in the end and gave me a private and public LB score of 0.892. I know I could have improved it, but I did not have the time to work on this.</li>
<li>A key aspect that I realised really late into the comp is that I should have experimented with more models for context retrieval, as I see that played a significant role.</li>
</ul>
<p>But there's so much I've learned from this comp, and I really am grateful to the community for sharing so much on here. I'm sure I'm gonna be visiting back now and then to read up on or refresh my memory of some LLM-related concept discussed here.</p>
<p>Shoutout to <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>, <a href="https://www.kaggle.com/radek1" target="_blank">@radek1</a>, <a href="https://www.kaggle.com/nlztrk" target="_blank">@nlztrk</a>, <a href="https://www.kaggle.com/simjeg" target="_blank">@simjeg</a>, <a href="https://www.kaggle.com/cpmp" target="_blank">@cpmp</a> (congrats on your huge private LB jump!), <a href="https://www.kaggle.com/mbanaei" target="_blank">@mbanaei</a>, <a href="https://www.kaggle.com/mgoksu" target="_blank">@mgoksu</a>, <a href="https://www.kaggle.com/serjhenrique" target="_blank">@serjhenrique</a>, <a href="https://www.kaggle.com/jjinho" target="_blank">@jjinho</a>, <a href="https://www.kaggle.com/simonveitner" target="_blank">@simonveitner</a>, <a href="https://www.kaggle.com/nbroad" target="_blank">@nbroad</a> and so many more contestants for being super helpful and active on the discussion forums. Until next time (or maybe season 2….?)</p> |
CommonLit - Evaluate Student Summaries | 627th Solution for CommonLit ESS competition- What doesn't kill you makes you stronger | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit - Evaluate Student Summaries
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>This was the first NLP task & first Kaggle competition that I worked really hard on.<br>
Even though my final result is not satisfying, I have learned a lot through the journey.<br>
<br></p>
<h1>For those who didn't join the competition.</h1>
<p>You can check the competition overview <a href="https://www.kaggle.com/competitions/commonlit-evaluate-student-summaries/overview" target="_blank">here</a>.<br>
You can check the competition dataset <a href="https://www.kaggle.com/competitions/commonlit-evaluate-student-summaries/data" target="_blank">here</a>.</p>
<h1>Overview of Approach</h1>
<h2>Spell Checking Tools</h2>
<p>In the beginning, I started off by modifying a notebook created by other competitors (<code>debertav3base</code>).<br>
Then, I realized that I wouldn’t be developing myself in this field just by copying others’ work. (I had to <code>copy & edit</code> later on anyway, trying to achieve a high score.)<br>
So I started developing my own preprocessing method, focusing on <a href="https://www.kaggle.com/code/jasonheesanglee/eda-spellcheck-tool-comparison" target="_blank">misspelled words</a>.<br>
Here, I found that SymSpell worked better on our dataset than any other library (a minimal sketch is shown below).<br>
Later (now), I understand that this wasn’t the key to this competition.<br>
<br></p>
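<p>A minimal sketch of spell-checking with symspellpy (illustrative; the bundled English dictionary and the parameters are assumptions, not the exact setup used here):</p>
<pre><code>import pkg_resources
from symspellpy import SymSpell, Verbosity

sym_spell = SymSpell(max_dictionary_edit_distance=2, prefix_length=7)
dictionary_path = pkg_resources.resource_filename(
    "symspellpy", "frequency_dictionary_en_82_765.txt")
sym_spell.load_dictionary(dictionary_path, term_index=0, count_index=1)

def correct_word(word: str) -> str:
    suggestions = sym_spell.lookup(word, Verbosity.CLOSEST, max_edit_distance=2,
                                   include_unknown=True)
    return suggestions[0].term  # best suggestion, or the word itself if unknown

print(correct_word("studnet"))  # -> "student"
</code></pre>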
<h2>MobileBERT</h2>
<p>After taking a closer look at the Competition Overview, I decided to try out <code>MobileBERT</code> with the idea of helping teachers located in areas where there is little or no internet connection.<br>
My journey with <code>MobileBERT</code> is explained in this <a href="https://www.kaggle.com/code/jasonheesanglee/en-mobilebert-implementation" target="_blank">notebook</a>.<br>
<strong>TL;DR</strong><br>
I first tried to apply <code>MobileBERT</code> on the code that I was working on with <code>debertav3base</code>.<br>
However, it didn’t work.<br>
After reading the <a href="https://arxiv.org/pdf/2004.02984.pdf" target="_blank">research paper</a>, I understood that the teacher model was needed to educate MobileBERT on the task.<br>
However, in terms of the competition, it didn’t give a good result.<br>
<br></p>
<h2>All models</h2>
<p>So I had to move back to <code>debertav3base</code> + <code>LGBM</code>.<br>
I finally got an OK score, but I couldn’t stop trying other methods. (Check this <a href="https://www.kaggle.com/code/jasonheesanglee/lgbm-autocorrect-allmodels" target="_blank">notebook</a> for my record)<br>
So, I have tried all different models in this <a href="https://www.kaggle.com/datasets/kozodoi/transformers" target="_blank">Dataset</a>.<br>
Most models worked fine with my pipeline but didn’t give a good score, as expected. (There must be a reason why other people don’t go with them, haha.)<br>
The only thing I regret is that I couldn’t try with <code>T5</code>, which didn’t work on my logic (Of course it wouldn’t work.)<br>
<br></p>
<h2>Modifying Preprocessing</h2>
<p>In the last few days, I tried to focus on developing the preprocessing part & the <code>LGBM</code> part of the code.
<br>
For the preprocessing part, I <a href="https://www.kaggle.com/code/jasonheesanglee/datapreprocess" target="_blank">separated it</a> from the submission notebook to save some submission time and added SymSpell as a spell-checking tool.<br>
And I used Optuna to find the best parameters for LGBM.<br>
It worked fine on the Public Leaderboard (0.44) but unfortunately didn’t work on the Private Leaderboard.<br>
<br></p>
<p>At the end of the day, I am still alive and learned a lot through this competition.<br>
On top of that, I am happy that I have more opportunities to challenge myself!</p> |
Bengali.AI Speech Recognition | 81st Place Solution for Bengali.Ai Speech Recognition. | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Speech Recognition
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h2>1. Fine-Tuning Wav2Vec2:</h2>
<p>Training Notebook: <a href="https://www.kaggle.com/code/takanashihumbert/bengali-sr-wav2vec-v1-bengali-training" target="_blank">https://www.kaggle.com/code/takanashihumbert/bengali-sr-wav2vec-v1-bengali-training</a></p>
<p>Dataset Used: <a href="https://www.kaggle.com/datasets/umongsain/common-voice-13-bengali-normalized" target="_blank">https://www.kaggle.com/datasets/umongsain/common-voice-13-bengali-normalized</a></p>
<p>Hyperparameters:</p>
<p>LR: 1e-4<br>
Epochs: 15<br>
Logging step: 5000</p>
<h2>2. Use Optuna to find the Best decoding Parameters.</h2>
<p>Notebook: <a href="https://www.kaggle.com/code/royalacecat/lb-0-442-the-best-decoding-parameters" target="_blank">https://www.kaggle.com/code/royalacecat/lb-0-442-the-best-decoding-parameters</a></p>
<p>best_params = {'alpha': 0.3802723523729998, 'beta': 0.053996879617918436, 'beam_width': 1024}</p>
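<p>For illustration, a rough sketch of how these decoding parameters plug into a pyctcdecode beam-search decoder (assuming pyctcdecode and a KenLM ARPA file; the paths and variable names are made up):</p>
<pre><code>from pyctcdecode import build_ctcdecoder

# labels: the wav2vec2 vocabulary in id order, e.g. list(processor.tokenizer.get_vocab())
decoder = build_ctcdecoder(
    labels,
    kenlm_model_path="bengali_lm.arpa",     # hypothetical LM path
    alpha=0.3802723523729998,               # LM weight found by Optuna
    beta=0.053996879617918436,              # length-score weight found by Optuna
)
logits = model(input_values).logits[0].cpu().numpy()  # (time, vocab) frame-level logits
text = decoder.decode(logits, beam_width=1024)
</code></pre>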
<h2>Score:</h2>
<table>
<thead>
<tr>
<th>public</th>
<th>private</th>
</tr>
</thead>
<tbody>
<tr>
<td>0.432</td>
<td>0.518</td>
</tr>
<tr>
<td>0.435</td>
<td>0.518</td>
</tr>
</tbody>
</table> |
Kaggle - LLM Science Exam | 47th place solution -- Simple Method Based on Open Codes | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kaggle - LLM Science Exam
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h1>Thanks</h1>
<p>First and foremost, we would like to express our sincere gratitude to the hosts and the Kaggle team for organizing this amazing competition. Thanks to my team member <a href="https://www.kaggle.com/yuanji1239" target="_blank">@yuanji1239</a> for the hard work. Thank you to Chris Deotte, MGoksu, MB and Radek for sharing the OpenBook technique!</p>
<h1>RAG</h1>
<h2>Dataset</h2>
<ol>
<li><a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam/discussion/436383" target="_blank">Chris Deotte's 60k datasets</a> by <a href="https://www.kaggle.com/cderotte" target="_blank">@cderotte</a> </li>
</ol>
<ul>
<li>this dataset with original contexts is split into train, eval, test_offline datasets for training deberta-v3-large model. </li>
</ul>
<ol>
<li><a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam/discussion/425941" target="_blank">Radek's 6.5k dataset</a> </li>
</ol>
<ul>
<li><p>using <a href="https://www.kaggle.com/code/mbanaei/86-2-with-only-270k-articles" target="_blank">tfidf recall codes</a> on this dataset. </p></li>
<li><p>this dataset with tfidf recall contexts is divided into train, eval datasets for training deberta-v3-large. </p></li>
</ul>
<h2>Retrieval methods</h2>
<ol>
<li><p><a href="https://www.kaggle.com/code/mbanaei/86-2-with-only-270k-articles" target="_blank">all-MiniLM-L6-v2 and tfidf methods</a> based on <a href="https://www.kaggle.com/mbanaei" target="_blank">@mbanaei</a> sharing. </p></li>
<li><p><a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam/discussion/443334" target="_blank">bge-small-faiss by </a><a href="https://www.kaggle.com/simjeg" target="_blank">@simjeg</a> and tfidf methods. </p></li>
</ol>
<h1>Training</h1>
<ol>
<li><p><a href="https://www.kaggle.com/code/cdeotte/how-to-train-open-book-model-part-1" target="_blank">Train-model code by </a><a href="https://www.kaggle.com/cderotte" target="_blank">@cderotte</a> based on Dataset 1 (the 60k dataset): we selected 6,000-24,000 samples, set many seeds, and generated models. Then we selected the best model as model 1.</p></li>
<li><p>Based on the code in step 1, we added noise with the code below, retrained model 1, and generated model 2.</p></li>
</ol>
<pre><code>import numpy as np

# Reconstructed sketch of the noise injection used for retraining: the column name,
# the quantile and the masking fractions are assumptions, since the exact values
# were lost from the original post.
def get_len_q85(df, col):
    a = df[col].str.split().apply(len)
    len_q85 = int(np.quantile(a, q=0.85))
    return len_q85

def add_noise(s):
    s = s.split()
    s = np.array(s)
    mask = tokenizer.mask_token
    if len(s) > len_q85:
        num_replacements = np.random.uniform(low=0.1, high=0.3)   # assumed range
        num_replacements = int(len(s) * num_replacements)
        maskidx = np.random.choice(len(s), size=num_replacements, replace=False)
        s[maskidx] = mask
    s = " ".join(s.tolist())
    return s
</code></pre>
<ol>
<li>Applying steps 1 and 2 to Dataset 2 (the 6.5k dataset), and adding revised prompts to the options using the code below. We trained the models and generated models 3 and 4.</li>
</ol>
<pre><code># Reconstructed sketch: the exact condition, separator strings and column names
# were lost from the original post and are assumptions here.
def add_prompt_to_options(df):
    cond1 = lambda x: x["prompt"].endswith("?")          # assumed condition
    cond1 = df.apply(cond1, axis=1)
    for col in ["A", "B", "C", "D", "E"]:
        df.loc[cond1, col] = (df.loc[cond1, "prompt"].apply(lambda x: x[:-1])
                              + " " + df.loc[cond1, col])
    return df
</code></pre>
<h1>Inference</h1>
<p>We used an ensemble of the 4 deberta models and the <a href="https://www.kaggle.com/code/mbanaei/86-2-with-only-270k-articles" target="_blank">longformer model</a>, put together by <a href="https://www.kaggle.com/yuanji1239" target="_blank">@yuanji1239</a>, to infer on the online test dataset.</p>
<h1>Thanks to Dr. JI Yuan, all teams and Kaggle 😄</h1>
Kaggle - LLM Science Exam | 161th place solution(without training a model) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kaggle - LLM Science Exam
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I entered this competition only two weeks before the deadline. I did not train a model or use any tricks. Thanks to <a href="https://www.kaggle.com/code/mbanaei/86-2-with-only-270k-articles" target="_blank">MB's notebook</a>, I only made some modifications to it and was able to get a bronze medal (very close to silver).</p>
<h1>Method</h1>
<ol>
<li>Base on the opensource 270k notebook, I replaced the Longformer model with the finetune <a href="https://www.kaggle.com/code/cdeotte/how-to-train-open-book-model-part-1" target="_blank">60k</a> deberta-V3-large model that I found <a href="https://www.kaggle.com/code/bobfromjapan/my-best-submission-with-openbook-context-v2" target="_blank">here</a>. With this simple replacement, I got an LB score of 0.898.</li>
<li>I found that the input of deberta is composed of several parts. I think the retrieval results should be sorted in descending order of similarity, so I put the most similar retrieval result first instead of last.</li>
<li>Increasing the length of the deberta input can improve performance, but be aware of OOM. I limit the maximum input length to 4500.</li>
<li>Drop <code>util_openbook</code>. MB's original notebook used <code>util_openbook.py</code> to generate a <code>backup_submission.csv</code> for ensembling, but it is harmful to the final result.</li>
<li>A little probability ensembling. I used the Longformer and set its probability weight to 0.1, while the weights of the other two deberta outputs are 0.5 and 0.4 (see the sketch after this list).</li>
<li>Trust CV! Thanks to <a href="https://www.kaggle.com/wuwenmin" target="_blank">wu</a> and <a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam/discussion/445053" target="_blank">500val dataset</a>. The CV is closely related to LB. I got 0.897 on CV while the private board is 0.902.</li>
</ol>
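<p>For concreteness, the weighted probability ensemble from step 5 can be written in a few lines of numpy; the file and array names below are hypothetical placeholders for the per-option probabilities produced by each model:</p>
<pre><code>import numpy as np

# hypothetical arrays of per-option probabilities, each of shape (n_questions, 5)
longformer_probs = np.load("longformer_probs.npy")
deberta_probs_1 = np.load("deberta_probs_1.npy")
deberta_probs_2 = np.load("deberta_probs_2.npy")

blended = 0.1 * longformer_probs + 0.5 * deberta_probs_1 + 0.4 * deberta_probs_2
# MAP@3 submission: indices of the three highest-probability options per question
top3 = np.argsort(-blended, axis=1)[:, :3]
</code></pre>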
<h1>Ablation experiment</h1>
<ol>
<li>Replacing the TF-IDF with <code>bge-small</code> or <code>allminiv2</code> did not bring a performance improvement and cost too much time.</li>
<li>Adding Wikipedia data did not bring a performance improvement. The retrieval result on the Wiki data is completely different from the 270k dataset.</li>
</ol>
<p>Thanks to <a href="https://www.kaggle.com/mbanaei" target="_blank">MB</a> and <a href="https://www.kaggle.com/cdeotte" target="_blank">Chris</a> again!</p> |
Kaggle - LLM Science Exam | How the 300 validation dataset is generated & 57th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kaggle - LLM Science Exam
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h2>How the 300 validation dataset is generated</h2>
<p>The <a href="https://www.kaggle.com/datasets/yalickj/dataset-wiki-new-1" target="_blank">300</a> validation dataset was generated by picking interesting contexts from the <a href="https://www.kaggle.com/datasets/nbroad/wiki-20220301-en-sci" target="_blank">130k stem dataset</a> and feeding prompts to GPT-3.5-Turbo. The distribution of topics is quite random with a focus on physics and biology (this competition has really been a good learning experience as I dived into both LLM and interesting science topics at the same time!)</p>
<p>I wasn't able to check every question and it seems that some of the samples are flawed (e.g., incorrect answer, ambiguous question/options, more than one correct answer, no correct answers). Many thanks to <a href="https://www.kaggle.com/wuwenmin" target="_blank">@wuwenmin</a> for correcting some of the questions in <a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam/discussion/444933" target="_blank">300 Validation Samples Corrected by GPT4
</a>!</p>
<p>The prompt used for the generation is as follows:</p>
<pre><code>You are a professional multiple-choice question generation robot. When generating multiple-choice questions, you make sure that the choices are balanced too short, so that the questions are quite hard to answer. Please choose the most suitable question the :
[,
,
,
,
,
,
,
,
,
]
Please read the following material generate a multiple-choice question five options marked A, B, C, D, E, respecively, only one of the five correct the other four options are incorrect. Provide the correct answer after the question choices. Create choices that are long enough but vary length (e.g., more than words, but can be even longer, some of the answers can be shorter). The beginning phrases of the choices are generally identical so that we know the choices are on the same topic, but this a strict rule. It desirable to randomly asign the choices.
Material: {
}
</code></pre>
<h2>57th place solution</h2>
<h3>Retrieval</h3>
<ul>
<li>The retrieval method is mainly a combination of two pipelines:<ul>
<li>Encoding text sections split by '\n' from a 435k title dataset (270k + 130k + an additional set of titles found by k-means) with all-MiniLM-L6-v2 and building a flat faiss-gpu index. Retrieve at most 66 sections and keep the ones with distance <0.9.</li>
<li>Use <a href="https://www.kaggle.com/mbanaei" target="_blank">@mbanaei</a> 's public global tf-idf retrieval pipeline and keep the top 4 and keep the ones with similarity >0.25.</li>
<li>Combine the two groups of retrieved sections above and re-rank them with another tf-idf vectorizer with ngram_range=(1, 4). Keep the top 22 sections after re-ranking and filter out the ones with 6 or fewer words (a rough sketch of this re-ranking step follows this list).</li></ul></li>
<li>To construct the prompt, keep the first 2300 characters and combine it with the question and one of the options.</li>
</ul>
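<p>The re-ranking step above can be sketched roughly as follows; the function name, thresholds, and the exact ordering of filtering versus ranking are illustrative rather than the exact code used:</p>
<pre><code>from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def rerank_sections(question, sections, top_k=22, min_words=6):
    # drop very short sections, then score the rest against the question with a word n-gram TF-IDF
    sections = [s for s in sections if len(s.split()) > min_words]
    vectorizer = TfidfVectorizer(ngram_range=(1, 4))
    matrix = vectorizer.fit_transform([question] + sections)
    scores = cosine_similarity(matrix[0], matrix[1:]).ravel()
    order = scores.argsort()[::-1][:top_k]
    return [sections[i] for i in order]
</code></pre>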
<h3>Model</h3>
<ul>
<li>An ensemble of public and private DeBERTa-v3-large models. The ones I trained mainly follow <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> 's pipeline with 60k dataset with roughly 6k filtered out by tf-idf.</li>
<li>Validation is on the 500 dataset (which is indeed less effective when the score is higher than 0.89 level).</li>
</ul>
<h3>Other stuff</h3>
<ul>
<li>Adding 'sublinear_tf=True' to all of the tf-idfs gives a 0.003 boost on private LB but a 0.004 drop on public LB.</li>
</ul> |
CommonLit - Evaluate Student Summaries | 79th place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit - Evaluate Student Summaries
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you for hosting this competition. <br>
I'm so excited to have finished my first Kaggle competition with a silver medal!<br>
Thanks to the shake-up, we ended up earning our medal.</p>
<p>The main techniques are not different from those in other well-organized public notebooks. <br>
We took our baseline from the public notebook made by <a href="https://www.kaggle.com/tsunotsuno" target="_blank">@tsunotsuno</a> .<br>
Special thanks to the best contributor of this competition.</p>
<p>We made a few changes though, which are briefly addressed below.</p>
<h1>What Worked</h1>
<ul>
<li>deberta-v3-large + LGBM</li>
<li>re-init last 2 layers</li>
<li>replace '\n\n', '\r\n' with [PARAGRAPH]</li>
<li>Use Optuna</li>
<li>Nelder-Mead for finding ensemble weights (see the sketch after the post-processing code below)</li>
<li>Get target-wised Model</li>
<li>Post-processing (low impact, private score: +0.0001)<br>
We took a look at the preprocessed data and found some patterns in it.<br>
So we adjusted the content and wording scores like this, based on our analysis</li>
</ul>
<pre><code>test.loc[(test[]>)&(test[]>-), ] = -
test.loc[(test[]>)&(test[]<=)&(test[]>), ] =
test.loc[(test[]>)&(test[]>-), ] = -
test.loc[(test[]>)&(test[]<=)&(test[]>-), ] = -
test.loc[(test[]>)&(test[]<=)&(test[]>), ] =
</code></pre>
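<p>The Nelder-Mead weight search mentioned in the "What Worked" list can be sketched as below; <code>oof_preds</code> and <code>y_true</code> are hypothetical arrays holding out-of-fold predictions and targets:</p>
<pre><code>import numpy as np
from scipy.optimize import minimize

def mcrmse(y_true, y_pred):
    # column-wise RMSE averaged over the content and wording targets
    return np.mean(np.sqrt(np.mean((y_true - y_pred) ** 2, axis=0)))

def objective(weights, oof_preds, y_true):
    weights = np.abs(weights) / np.abs(weights).sum()   # keep weights positive and normalized
    blend = np.tensordot(weights, oof_preds, axes=1)    # (n_samples, n_targets)
    return mcrmse(y_true, blend)

# oof_preds: (n_models, n_samples, 2) out-of-fold predictions, y_true: (n_samples, 2) targets
n_models = oof_preds.shape[0]
result = minimize(objective, x0=np.ones(n_models) / n_models,
                  args=(oof_preds, y_true), method="Nelder-Mead")
best_weights = np.abs(result.x) / np.abs(result.x).sum()
</code></pre>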
<h1>What Didn't Work</h1>
<ul>
<li>freezing n layers</li>
<li>MeanPooling, AttentionPooling</li>
<li>change batch size</li>
<li>Models other than deberta-v3-large (deberta-v2-xlarge, albert-large-v2, muppet-roberta-large, bigbird-roberta-large, roberta-large)</li>
</ul>
<p>Thanks again to the competition hosts and everyone who participated!! Good luck for your next competitions!</p>
<h2><strong>Team member</strong></h2>
<p><a href="https://www.kaggle.com/kimseunghee" target="_blank">@kimseunghee</a> <br>
<a href="https://www.kaggle.com/danielchae" target="_blank">@danielchae</a> </p> |
Kaggle - LLM Science Exam | 41st Place Solution for the Kaggle - LLM Science Exam | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kaggle - LLM Science Exam
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h3>41st Place Solution for the Kaggle - LLM Science Exam</h3>
<p>First of all thank you to the competition office for organizing such a great competition! Also, thanks to <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>, <a href="https://www.kaggle.com/mbanaei" target="_blank">@MB</a> and all the competition participants who shared useful ideas and codes. It was a very exciting and interesting competition! My result was 58th for Public LB, 41st for private LB and I got a silver medal.</p>
<h3>Context</h3>
<ul>
<li>Business context: <a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam/overview" target="_blank">https://www.kaggle.com/competitions/kaggle-llm-science-exam/overview</a></li>
<li>Data context: <a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam/data" target="_blank">https://www.kaggle.com/competitions/kaggle-llm-science-exam/data</a></li>
</ul>
<h3>Overview of the approach</h3>
<p>Our final model was an average ensemble of seven single microsoft/deberta-v3-large models (each with a different seed, dataset, and pause-token setting), combined with TF-IDF-based retrieval. The final result was Public LB = 0.916 / Private LB = 0.909.</p>
<h3>Details of the submission</h3>
<p>The datasets used for training were <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>'s 60k dataset and the 99k dataset. A training model that takes pause tokens into account was also built, following "Think before you speak: <a href="https://arxiv.org/abs/2310.02226" target="_blank">Training Language Models With Pause Tokens</a>". Then, a TF-IDF-based context retrieval method was used at inference time, following <a href="https://www.kaggle.com/mbanaei" target="_blank">@MB</a>'s method.</p>
<p>Below is a summary of CV, Public LB, and Private LB for each model (using the TF-IDF-based contextual search method during inference). The validation data for CV consisted of 200 samples from <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>'s 60k dataset in all cases.</p>
<table>
<thead>
<tr>
<th>model</th>
<th>CV</th>
<th>Public LB</th>
<th>Private LB</th>
</tr>
</thead>
<tbody>
<tr>
<td>microsoft/deberta-v3-large (seed=42, dataset=60k, without pause token)</td>
<td>0.915</td>
<td>0.907</td>
<td>0.905</td>
</tr>
<tr>
<td>microsoft/deberta-v3-large (seed=52, dataset=60k, without pause tokens)</td>
<td>0.893</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td>microsoft/deberta-v3-large (seed=62, dataset=60k, without pause tokens)</td>
<td>0.910</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td>microsoft/deberta-v3-large (seed=72, dataset=60k, without pause tokens)</td>
<td>0.895</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td>microsoft/deberta-v3-large (seed=82, dataset=60k, without pause tokens)</td>
<td>0.905</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td>microsoft/deberta-v3-large (seed=42, dataset=60k, with pause tokens)</td>
<td>0.906</td>
<td>0.902</td>
<td>0.902</td>
</tr>
<tr>
<td>microsoft/deberta-v3-large (seed=42, dataset=60k+99k, without pause tokens)</td>
<td>0.904</td>
<td>-</td>
<td>-</td>
</tr>
</tbody>
</table>
<h3>Freeze layers</h3>
<p>Some layers were frozen because microsoft/deberta-v3-large did not learn well as it was. (If the layers were not frozen, MAP@3 was poor, with CV ~0.37 at the first epoch.)</p>
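<p>As a rough sketch, freezing can be done by disabling gradients for the embeddings and the lower encoder blocks; the number of frozen layers below is an assumption, since the exact value is not given:</p>
<pre><code>from transformers import AutoModelForMultipleChoice

model = AutoModelForMultipleChoice.from_pretrained("microsoft/deberta-v3-large")

NUM_FROZEN_LAYERS = 18  # assumed value

for param in model.deberta.embeddings.parameters():
    param.requires_grad = False
for layer in model.deberta.encoder.layer[:NUM_FROZEN_LAYERS]:
    for param in layer.parameters():
        param.requires_grad = False
</code></pre>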
<h3>Max length</h3>
<p>The max_length for training was 256, 512, and 1024, and 512 was the best CV, so 512 was used.</p>
<h3>TF-IDF-based context retrieving method</h3>
<p>I tried updating <a href="https://www.kaggle.com/mbanaei" target="_blank">@MB</a>'s method, combining two different contexts and setting a threshold for frequency of occurrence, etc., but the most effective was applying logarithmic scaling of TF-IDF frequencies and updating the stopwords. I combined the original stopwords with ENGLISH_STOP_WORDS from sklearn.feature_extraction.text.</p>
<h3>Pause tokens</h3>
<p>The following was done to take pause tokens into account during training. (According to the <a href="https://arxiv.org/abs/2310.02226" target="_blank">paper</a>, the accuracy of language models improves when pause tokens are added during training, just as humans do better when they think carefully, so hopefully the CV would improve a little more… I tried the following, but a more creative implementation might increase the CV further.)</p>
<ul>
<li>Add pause tokens to tokenizer</li>
</ul>
<pre><code>tokenizer.add_tokens(["[PAUSE]"], special_tokens=True)  # "[PAUSE]" is an assumed token string
model.resize_token_embeddings(len(tokenizer))
</code></pre>
<ul>
<li>Add pause tokens in random position on input</li>
</ul>
<pre><code>import random

PAUSE_TOKEN_COUNT = 10  # number of pause tokens inserted per context (value assumed)

def insert_pause_tokens(sentence, count):
    # insert `count` pause tokens at random positions in the tokenized sentence
    tokens = tokenizer.tokenize(sentence)
    for _ in range(count):
        position = random.randint(0, len(tokens) - 1)
        tokens.insert(position, "[PAUSE]")
    return tokenizer.convert_tokens_to_string(tokens)

def preprocess(example):
    # separator strings follow the referenced public training notebook
    context_with_pause = insert_pause_tokens(example["context"], PAUSE_TOKEN_COUNT)
    first_sentence = ["[CLS] " + context_with_pause] * 5
    second_sentences = [
        " #### " + example["prompt"] + " [SEP] " + example[option] + " [SEP]"
        for option in "ABCDE"
    ]
    tokenized_example = tokenizer(
        first_sentence,
        second_sentences,
        truncation="only_first",
        max_length=MAX_INPUT,
        add_special_tokens=False,
    )
    tokenized_example["label"] = option_to_index[example["answer"]]
    return tokenized_example
</code></pre>
<ul>
<li>Custom Loss Settings</li>
</ul>
<pre><code>def custom_loss(logits, labels, input_ids):
    # per-sample cross-entropy, kept unreduced so it can be re-weighted
    loss = F.cross_entropy(logits, labels, reduction="none")
    # down-weight samples by their fraction of pause tokens
    # (approximate reconstruction; "[PAUSE]" is the assumed token string)
    pause_id = tokenizer.convert_tokens_to_ids("[PAUSE]")
    pause_mask = (input_ids == pause_id).to(torch.float)
    masked_loss = loss * (1 - pause_mask.flatten(1).mean(dim=1))
    return masked_loss.mean()
</code></pre>
<ul>
<li>Setup and run custom trainers</li>
</ul>
<pre><code>class CustomTrainer(Trainer):
    def compute_loss(self, model, inputs, return_outputs=False):
        labels = inputs.pop("labels")
        outputs = model(**inputs)
        logits = outputs.logits
        loss = custom_loss(logits, labels, inputs["input_ids"])
        return (loss, outputs) if return_outputs else loss

trainer = CustomTrainer(
    model=model,
    args=training_args,
    tokenizer=tokenizer,
    data_collator=DataCollatorForMultipleChoice(tokenizer=tokenizer),
    train_dataset=tokenized_dataset,
    eval_dataset=tokenized_dataset_valid,
    compute_metrics=compute_metrics,
    callbacks=[EarlyStoppingCallback(early_stopping_patience=1)],
)
trainer.train()
</code></pre>
<h3>Sources</h3>
<ul>
<li><a href="https://www.kaggle.com/datasets/cdeotte/60k-data-with-context-v2" target="_blank">https://www.kaggle.com/datasets/cdeotte/60k-data-with-context-v2</a></li>
<li><a href="https://www.kaggle.com/datasets/cdeotte/99k-data-with-context-v2" target="_blank">https://www.kaggle.com/datasets/cdeotte/99k-data-with-context-v2</a></li>
<li><a href="https://www.kaggle.com/code/cdeotte/how-to-train-open-book-model-part-1" target="_blank">https://www.kaggle.com/code/cdeotte/how-to-train-open-book-model-part-1</a></li>
<li><a href="https://www.kaggle.com/code/mbanaei/86-2-with-only-270k-articles" target="_blank">https://www.kaggle.com/code/mbanaei/86-2-with-only-270k-articles</a></li>
</ul> |
CommonLit - Evaluate Student Summaries | 4th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit - Evaluate Student Summaries
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you very much for organizing such an interesting competition. I am greatly thankful to the hosts and the Kaggle staff.</p>
<p>This competition turned out to be exactly what we had imagined and we are really happy to have won 4th place.</p>
<p>And thank you so much <a href="https://www.kaggle.com/kurokurob" target="_blank">@kurokurob</a> for teaming up with me and for the daily discussions and experiments. Congratulations on your 4th GOLD MEDAL.<br>
(And I will become competition Grand Master!!!).<br>
We are a great team!</p>
<h1>1. Summary</h1>
<p>Our best private solution is 1 fulltrain x 7 models ensemble.</p>
<p>This competition had only 4 prompts in the training data and with the information that 122 prompts were in the test data, one could imagine a big shake. Also, the public lb was very unstable. These things made us think that robustness is important, not only for cv.</p>
<p>I used all 4 kfold models at first. This was especially an issue for the models with prompt text: increasing the maxlen was needed to improve the score, which increased inference time, so I could not ensemble many other models (only 2-3 models could be included).</p>
<p>Just then, kuro and I merged teams, and we mixed in the fulltrain idea he had been working on. By replacing 4kfold with 1fulltrain, we arrived at the idea of compressing the information of 4kfold into 1fulltrain and ensembling more models.</p>
<p>We imagined that by doing so, we would prevent the variation and reduction of scores on each prompt in the private test data and, in total, get a better score.</p>
<p>As a result, we believe we were able to prevent a shake down and finish in this position!</p>
<table>
<thead>
<tr>
<th>sub</th>
<th>type</th>
<th>models</th>
<th>cv</th>
<th>public lb</th>
<th>private lb</th>
<th>rank</th>
<th>comment</th>
</tr>
</thead>
<tbody>
<tr>
<td>sub1</td>
<td>best lb</td>
<td>1 fulltrain × 9 models</td>
<td>0.4679</td>
<td>0.41991</td>
<td>0.45785</td>
<td>11</td>
<td>using GBDT and expanded inference max length</td>
</tr>
<tr>
<td>sub2</td>
<td>best cv</td>
<td>1 fulltrain × 7 models</td>
<td>0.4639</td>
<td>0.42979</td>
<td>0.45515</td>
<td>4</td>
<td>just NLP result (expanded inference max length)</td>
</tr>
<tr>
<td>sub3</td>
<td>insurance</td>
<td>1 fulltrain × 8 models</td>
<td>0.4693</td>
<td>0.43855</td>
<td>0.45597</td>
<td>6</td>
<td>just NLP result</td>
</tr>
</tbody>
</table>
<p>The best cv in sub2 is our best private, and that is the main explanation below.</p>
<h1>2. training</h1>
<p>Each of our models in sub2 is shown below.</p>
<table>
<thead>
<tr>
<th>modelno</th>
<th>InfNo</th>
<th>model</th>
<th>training<br>maxlen</th>
<th>inference <br>maxlen</th>
<th>freezing</th>
<th>layerwise</th>
<th>input</th>
<th>pooling</th>
<th></th>
<th></th>
<th>2nd loss</th>
<th>preprocess</th>
<th>cv of 4kfold<br> earlystop</th>
</tr>
</thead>
<tbody>
<tr>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td>original prompt</td>
<td>cls</td>
<td>attention of <br>original prompt</td>
<td>mean of <br>text part only</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>1</td>
<td>91</td>
<td>deberta-v3-large</td>
<td>768</td>
<td>1500</td>
<td>✓</td>
<td></td>
<td></td>
<td>✓</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td>0.4818</td>
</tr>
<tr>
<td>2</td>
<td>22</td>
<td>deberta-v3-large</td>
<td>1050</td>
<td>950</td>
<td></td>
<td>✓</td>
<td>✓</td>
<td></td>
<td>✓</td>
<td></td>
<td></td>
<td></td>
<td>0.4855</td>
</tr>
<tr>
<td>3</td>
<td>63</td>
<td>deberta-v3-large</td>
<td>850</td>
<td>1500</td>
<td></td>
<td>✓</td>
<td></td>
<td></td>
<td></td>
<td>✓</td>
<td></td>
<td></td>
<td>0.4984</td>
</tr>
<tr>
<td>4</td>
<td>72</td>
<td>deberta-v3-large</td>
<td>868</td>
<td>1024</td>
<td></td>
<td>✓</td>
<td></td>
<td>✓</td>
<td></td>
<td></td>
<td>Arcface</td>
<td>✓</td>
<td>0.4919</td>
</tr>
<tr>
<td>5</td>
<td>2,3</td>
<td>deberta-v3-large</td>
<td>868</td>
<td>868</td>
<td></td>
<td>✓</td>
<td></td>
<td>✓</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td>0.4880</td>
</tr>
<tr>
<td>6</td>
<td>259</td>
<td>deberta-v3-large-squad2</td>
<td>768</td>
<td>1500</td>
<td>✓</td>
<td></td>
<td></td>
<td>✓</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td>0.4952</td>
</tr>
<tr>
<td>7</td>
<td>331</td>
<td>deberta-v3-large-squad2</td>
<td>1050</td>
<td>950</td>
<td></td>
<td>✓</td>
<td>✓</td>
<td></td>
<td>✓</td>
<td></td>
<td></td>
<td></td>
<td>0.4993</td>
</tr>
</tbody>
</table>
<p>The details are described below.</p>
<h2>2.1 model no1 : basemodel</h2>
<p>This model is a typical base for our model.</p>
<p>First, two inputs are prepared and put into the tokenizer as a pair (an example of dataset). The same result can be obtained by connecting them with [SEP] even if you do not put them as a pair.</p>
<pre><code># inside the Dataset __getitem__ (attribute names and separators are illustrative)
self.text = self.summary_text
self.prompt = self.prompt_title + " " + self.prompt_question + " " + self.prompt_text
tokens = tokenizer(
    self.text,
    self.prompt,
    ...
)
</code></pre>
<p>Now all we have to do is put this in the model, increase the maxlen, and output the cls in the fc layer.</p>
<h2>2.2 model no2 : original prompt</h2>
<p>In this model, an original prompt was created and used for input.</p>
<pre><code># a hand-written instruction prompt is prepended to the student summary
# (its exact wording, starting with "Evaluating the summarized ...", is abbreviated here)
self.text = "Evaluating the summarized ..." + self.text
self.prompt = prompt_title + " " + prompt_question + " " + prompt_text
tokens = tokenizer(
    self.text,
    self.prompt,
    ...
)
</code></pre>
<p>Then, only the part of the original prompt (Evaluating the summarized…) is attentional pooled (an example of a model).</p>
<pre><code>## init
self.pool = AttentionPooling(self.config.hidden_size)
...
## forward
out = self.model(ids, mask, token_type_ids)
out = out[0][:, :prompt_len, :]             # keep only the tokens of the prepended original prompt
out = self.pool(out, mask[:, :prompt_len])  # prompt_len is the length of that part (illustrative)
out = self.fc(out)
</code></pre>
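<p>The <code>AttentionPooling</code> module itself is not shown above; a common implementation that matches the call signature used in the forward pass looks roughly like this:</p>
<pre><code>import torch
import torch.nn as nn

class AttentionPooling(nn.Module):
    def __init__(self, hidden_size):
        super().__init__()
        self.attention = nn.Sequential(
            nn.Linear(hidden_size, hidden_size),
            nn.Tanh(),
            nn.Linear(hidden_size, 1),
        )

    def forward(self, last_hidden_state, attention_mask):
        # score each token, mask out padding, and take the weighted sum
        scores = self.attention(last_hidden_state).squeeze(-1)
        scores = scores.masked_fill(attention_mask == 0, -1e4)
        weights = torch.softmax(scores, dim=-1).unsqueeze(-1)
        return (last_hidden_state * weights).sum(dim=1)
</code></pre>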
<h2>2.3 model no4 : using 2nd loss and preprocess</h2>
<p>As discussed in the discussion and notebook, the train data can be classified into 38 types. I thought I could make good use of this, so I included arcface as an auxiliary loss.<br>
I tried to make good use of the embeddings at the end, but could not. However, it did contribute to the diversity of the ensemble.</p>
<p>Also, as a preprocess, I made sure to include a space after the period and comma; after doing EDA, I noticed that if a sentence comes after a period or comma without a space, it is divided differently by the tokenizer. However, I don't think this affected the score significantly.</p>
<h2>2.4 Extending the inference max length</h2>
<p>Scores have been greatly improved by increasing the inference length over that of training. An example is shown below. The following was trained with maxlen 850, but by setting maxlen to 1500 during inference, cv and public lb improved (maybe not so much contribution to private).</p>
<table>
<thead>
<tr>
<th>traing<br>maxlen</th>
<th>inference<br>maxlen</th>
<th>39c16e<br>fold0</th>
<th>814d6b<br>fold1</th>
<th>ebad26<br>fold2</th>
<th>3b9047<br>fold3</th>
<th>cv</th>
</tr>
</thead>
<tbody>
<tr>
<td>850</td>
<td>850</td>
<td>0.4505</td>
<td>0.5595</td>
<td>0.5051</td>
<td>0.5024</td>
<td>0.4984</td>
</tr>
<tr>
<td>850</td>
<td>1024</td>
<td>0.4524</td>
<td>0.5590</td>
<td>0.4836</td>
<td>0.5018</td>
<td>0.4927</td>
</tr>
<tr>
<td>850</td>
<td>1500</td>
<td>0.4527</td>
<td>0.5588</td>
<td>0.4614</td>
<td>0.5013</td>
<td>0.4867</td>
</tr>
</tbody>
</table>
<p>However, some models may degrade. The insurance submission (sub3) consisted only of models that did not expand the inference max length. As a result, CV and public LB were affected, but private LB was almost unaffected.</p>
<h1>3. inference (ensemble)</h1>
<p>For the best cv sub, we initially post-processed using GBDT (LGBM + CatBoost + XGBoost), but in the end, the ensemble without GBDT resulted in a better CV, and the submission that produced the best private score did not use GBDT post-processing. We also considered Nelder-Mead and Optuna, but did not use them because of the concern that they would definitely overfit. So we used a simple mean ensemble built with a hill-climbing technique (we also added a fulltrain model with a different seed to further increase robustness, using the remaining inference time).</p>
<p>final cv(4kfold earlystopping result) : 0.46394 , public : 0.42979, private 0.45515 4th</p>
<p>※ Furthermore, as insurance (sub3), we submitted a version in which the inference max lengths were the same as in training. The result was as follows.</p>
<p>final cv 0.4693 , public : 0.43855, private 0.45597 6th.</p>
<p>In this result, CV and public LB were worse, but private LB was good, which shows how important model robustness was in this competition.</p>
<h1>4. Not working for us</h1>
<ul>
<li>mlm</li>
<li>awp</li>
<li>svr</li>
<li>other model</li>
<li>regression from classification model</li>
<li>other many many things…</li>
</ul>
<h1>5. Acknowledgements</h1>
<p>We could not have achieved these results on our own. We were greatly influenced by those who we have collaborated with in the past, and we are grateful for their contributions. We would also like to express our sincere gratitude to those who have shared their knowledge and insights through previous competitions. Thank you very much.</p>
<p>We would especially like to thank the following people for their help in this competition! Thank you very much.</p>
<ul>
<li><p>Fulltrain : Team Hydrogen <a href="https://www.kaggle.com/competitions/feedback-prize-english-language-learning/discussion/369578" target="_blank">ref1</a> <a href="https://www.kaggle.com/competitions/feedback-prize-effectiveness/discussion/347536" target="_blank">ref2</a>, Raja Biswas <a href="https://www.kaggle.com/conjuring92" target="_blank">@conjuring92</a> <a href="https://www.kaggle.com/competitions/commonlit-evaluate-student-summaries/discussion/424330#2354068" target="_blank">ref3</a></p></li>
<li><p>Freezing : Takamichi Toda <a href="https://www.kaggle.com/takamichitoda" target="_blank">@takamichitoda</a> <a href="https://www.kaggle.com/competitions/commonlit-evaluate-student-summaries/discussion/433754#2405543" target="_blank">ref4</a></p></li>
<li><p>37-38 classification : MOTH <a href="https://www.kaggle.com/alejopaullier" target="_blank">@alejopaullier</a> <a href="https://www.kaggle.com/competitions/commonlit-evaluate-student-summaries/discussion/430705" target="_blank">ref5</a>, <br>
Alexander Chervov <a href="https://www.kaggle.com/alexandervc" target="_blank">@alexandervc</a> <a href="https://www.kaggle.com/code/alexandervc/commonlit-levenshtein-distances?scriptVersionId=141542492&cellId=23" target="_blank">ref6</a></p></li>
<li><p>Postprocess : nogawanogawa <a href="https://www.kaggle.com/tsunotsuno" target="_blank">@tsunotsuno</a> <a href="https://www.kaggle.com/code/tsunotsuno/updated-debertav3-lgbm-with-spell-autocorrect" target="_blank">ref7</a></p></li>
<li><p>Model selection(feedback3) : CHRIS DEOTTE <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> <a href="https://www.kaggle.com/competitions/feedback-prize-english-language-learning/discussion/369609" target="_blank">ref8</a>, TOM <a href="https://www.kaggle.com/tikutiku" target="_blank">@tikutiku</a> <a href="https://www.kaggle.com/competitions/feedback-prize-english-language-learning/discussion/369567" target="_blank">ref9</a></p></li>
</ul>
<h1>6. team member</h1>
<ul>
<li><a href="https://www.kaggle.com/chumajin" target="_blank">@chumajin</a></li>
<li><a href="https://www.kaggle.com/kurokurob" target="_blank">@kurokurob</a> </li>
</ul>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4001300%2Fa311809edebd32836455da5b13e1dc1e%2Fkuromajin.jpg?generation=1697084418755389&alt=media" alt=""></p>
<h1>7. code</h1>
<p>inference code : <a href="https://www.kaggle.com/code/chumajin/commonlit2-4th-place-inference" target="_blank">https://www.kaggle.com/code/chumajin/commonlit2-4th-place-inference</a> (This is the cleaned code. Same score as the submission.)</p>
<p>training code(chumajin part) : <a href="https://github.com/chumajin/kaggle-commonlit" target="_blank">https://github.com/chumajin/kaggle-commonlit</a><br>
training code(kuro_B part) : <a href="https://github.com/kurokuroB/kaggle-commonlit2-4th_place_solution-my_training_code" target="_blank">https://github.com/kurokuroB/kaggle-commonlit2-4th_place_solution-my_training_code</a></p> |
CommonLit - Evaluate Student Summaries | 8th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit - Evaluate Student Summaries
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h1>Submission Overview</h1>
<p>I split the data into 4 folds on <code>prompt_id</code> and computed a 4-fold CV. Rather than submitting all 4 models for inference, I retrained on 100% of the training data. My final submission is 6x deberta-v3-large: 2 models with different random seeds from each of my top 3 experiments. I took the unweighted mean of the 6 models' predictions.</p>
<h1>Source code</h1>
<p>I mostly work in Python scripts rather than notebooks. Here's a repo containing a package for running my training and inference scripts: <a href="https://github.com/AMontgomerie/commonlit-summaries-competition" target="_blank">https://github.com/AMontgomerie/commonlit-summaries-competition</a></p>
<h1>Inputs and max length</h1>
<p>In addition to using the summary text, I found that using the <code>prompt_question</code> improved the score slightly, and using the <code>prompt_text</code> helped quite a bit, assuming that the max length was also increased. Increasing the max length only helped up to 1024 tokens for training, although increasing up to 1536 during inference improved both CV (on prompt <code>ebad26</code>) and LB scores.</p>
<p>It was hard to tell if increasing beyond 1536 was worth it as none of the prompts in the training set were long enough, and I didn't get any consistent improvements from the LB when increasing to 2048 either.</p>
<h1>Special tokens</h1>
<p>Another thing that helped was adding a special token for each section of the input rather than using <code>[SEP]</code>. I added <code><text></code>, <code><question></code>, and <code><summary></code> tokens to the tokenizer before training, and then fine-tuned them. Inputs took the format:</p>
<pre><code>"<question> The prompt question text here. <> The student text here. <text> The original prompt text here."
</code></pre>
<p>I put the prompt text last to account for extremely long texts that would need to be truncated.</p>
<p>I did also try adding the prompt title and a <code><title></code> token, but this didn't improve CV.</p>
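<p>Registering the extra tokens is a standard two-step operation in <code>transformers</code>; the snippet below is a minimal sketch rather than the exact code used (the full training code is in the linked repo):</p>
<pre><code>from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v3-large")
tokenizer.add_special_tokens(
    {"additional_special_tokens": ["<text>", "<question>", "<summary>"]}
)

model = AutoModelForSequenceClassification.from_pretrained("microsoft/deberta-v3-large")
# make room for the new tokens so their embeddings can be fine-tuned
model.resize_token_embeddings(len(tokenizer))
</code></pre>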
<h1>Inference</h1>
<p>Earlier in the competition I was submitting the mean of the 4 models trained in each experiment. However after I started increasing the max length up to and beyond 1024 tokens, I found that inference time increased quite significantly. It took about an hour to run inference with 1 deberta-v3-large at 1024 token max length, and about 1.5 hours at 1280 or 1536 tokens.</p>
<p>This means a 4-fold mean would take 6 hours or more at 1536 tokens. To be able to submit a more diverse ensemble I decided to start retraining on 100% of training data instead. I found the LB scores of these 100% models to be fairly consistent with the ones trained on 75% of the data. This allowed me to submit an ensemble of my best 3 CV experiments. I retrained on 100% of training data twice with different random seeds for each of the experiments.</p>
<h1>GPU resources</h1>
<p>I don't have access to any dedicated GPUs, and each 4 fold experiment took a long time on Kaggle P100s (particularly when training with long sequence lengths), so I subscribed to Paperspace Gradient which gave me intermittent access to an RTX5000 or an A4000, both of which allowed me to run experiments about 4x faster than running on Kaggle GPUs.</p>
<p>Since none of these GPUs have more than 16GB of RAM, I was only able to train with a batch size of 1 at max length 1024. To compensate for this, I used gradient accumulation. I initially accumulated gradients for 8 steps per optimizer update, but found that 16 steps worked even better.</p>
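<p>A bare-bones sketch of gradient accumulation with batch size 1; names like <code>compute_loss</code>, <code>train_loader</code> and <code>optimizer</code> are placeholders for objects assumed to exist in the training script:</p>
<pre><code>ACCUMULATION_STEPS = 16  # effective batch size = 1 x 16

optimizer.zero_grad()
for step, batch in enumerate(train_loader):
    # scale the loss so the accumulated gradient is an average over the window
    loss = compute_loss(model, batch) / ACCUMULATION_STEPS
    loss.backward()
    if (step + 1) % ACCUMULATION_STEPS == 0:
        optimizer.step()
        optimizer.zero_grad()
</code></pre>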
<h1>Dropout</h1>
<p><a href="https://www.kaggle.com/competitions/commonlit-evaluate-student-summaries/discussion/432186" target="_blank">This thread</a> pointed out that turning off dropout can be useful in regression tasks. I gave this a try and found that turning off dropout in fully-connected layers helped a little, but not in attention layers.</p>
<h1>Pooling</h1>
<p>I saw some public notebooks which used mean pooling or GeM pooling, but in my experiments neither of these worked better than the default <a href="https://github.com/huggingface/transformers/blob/3e93dd295b5343557a83bc07b0b2ea64c926f9b4/src/transformers/models/deberta_v2/modeling_deberta_v2.py#L56C7-L56C8" target="_blank">ContextPooler</a> which is built into <code>transformers.DebertaV2ForSequenceClassification</code>.</p>
<h1>Things which didn't work</h1>
<p>Here's a list of ideas either by me, or which I borrowed from public notebooks or discussions, which didn't lead to any noticeable improvement:</p>
<ul>
<li>Using <code>prompt_title</code>.</li>
<li>Freezing embeddings.</li>
<li>Freezing encoder layers.</li>
<li>Max, mean, or GeMText pooling.</li>
<li>Extra attention head.</li>
<li>MSE, SmoothL1, or (modified) MarginRankingLoss. (I mostly used MCRMSE loss).</li>
<li>Training separate models to predict content and wording scores.</li>
<li>Training with max sequence lengths longer than 1024.</li>
<li>deberta-v2-xlarge with LoRA (even with large alpha).</li>
<li>Autocorrecting spelling.</li>
<li>Ensembling with linear models or gradient-boosted regressors with hand-made features. (I didn't spend much time on this, but I tried using some of the features from the high-scoring public notebooks and they didn't seem very useful)</li>
<li>Using abstractive summarization to reduce the length of the <code>prompt_text</code>. I was able to improve my CV a bit with this, but it significantly hurt my public LB, which was a good warning not to do this. I decided it was too risky, because I wouldn't be able to validate any generated summaries when scoring on the private test data. If the generative model hallucinated even once it could screw up my final LB score.</li>
</ul>
<p>I would've liked to have tried extractive summarization as this would prevent the hallucination problem, but I didn't come up with a good way to train a model to rank the most important sentences in each <code>prompt_text</code>. It would've been a fun direction to go down, but probably ultimately a waste of time in comparison to just increasing the max sequence length.</p> |
RSNA 2023 Abdominal Trauma Detection | 1st Place Solution: Team Oxygen | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA 2023 Abdominal Trauma Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Firstly, Thank you RSNA for hosting another interesting competition & my teammates <a href="https://www.kaggle.com/haqishen" target="_blank">@haqishen</a> <a href="https://www.kaggle.com/harshitsheoran" target="_blank">@harshitsheoran</a> - formation of <strong>Team oxygen</strong> ? :)) It was amazing to be #1 on public leaderboard for almost a month. I am sharing a quick overview of our solution, we will release the entire solution soon. It was really fun competing for #1 with <a href="https://www.kaggle.com/theoviel" target="_blank">@theoviel</a> <br>
<strong>Edit: Full solution published.</strong> <br>
Here is the inference code you may refer: <a href="https://www.kaggle.com/nischaydnk/rsna-super-mega-lb-ensemble" target="_blank">link</a> <br>
Our GitHub repo w/ all preprocessing + training code: <a href="https://github.com/Nischaydnk/RSNA-2023-1st-place-solution" target="_blank">link</a><br>
Demo Inference notebook: <a href="https://www.kaggle.com/code/haqishen/rsna-2023-1st-place-best-model-infer-cleaned" target="_blank">link</a></p>
<h4><strong>Split used:</strong> 4 Fold GroupKFold ( Patient Level)</h4>
<h2><strong>Our solution is divided into three parts:</strong></h2>
<p><strong>Part 1:</strong> 3D segmentation for generating masks / crops [Stage 1]<br>
<strong>Part 2:</strong> 2D CNN + RNN based approach for Kidney, Liver, Spleen & Bowel [Stage 2]<br>
<strong>Part 3:</strong> 2D CNN + RNN based approach for Bowel + Extravasation [Stage 2]</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4712534%2Fee28ed8eef7827d8f2cc69601875e5c2%2FScreenshot%202023-10-22%20at%2011.54.15%20AM.png?generation=1697955883461262&alt=media" alt=""></p>
<h2><strong>Data Preprocessing:</strong></h2>
<p>Here comes the key part of our solution; we will describe it in more depth later. <strong>Note:</strong> <em>All models were trained on image size 384 x 384. We use dataset preprocessing from <a href="https://www.kaggle.com/TheoVeol" target="_blank">@TheoVeol</a> and our own data, for which we rescaled the DICOMs and applied soft-tissue windowing.</em></p>
<p>We take a patient/study and run a 3D segmentation model on it; it outputs masks for each slice, and we make a study-level crop based on the boundaries of the organs - liver, spleen, kidney & bowel. </p>
<p>Next, we make volumes from the patient, each volume extracted with equi-distant 96 slices for a study which is then reshaped to (32, 3, image_size, image_size) in a 2.5D manner for training CNN based models.</p>
<p>3 channels are formed by using the adjacent slices.</p>
<p>All our model takes in input in shape (2, 32, 3, height, width) and outputs it as (2, 32, n_classes) as the targets are also kept in shape (2, 32, n_classes).</p>
<p>To make the targets, we need two things: the patient-level target for each organ, and how much the organ is visible compared to its maximum visibility. This data is available after normalizing the segmentation model masks to 0-1 based on the number of positive pixels.</p>
<p>Then we multiply the visibility by the patient-level target for each middle slice of the sequence, and that is our label.</p>
<p>For example if a patient has label 0 for liver-injury and the liver visibility is as follows in the slice sequence</p>
<p>[0., 0., 0., 0.01, 0.05, 0.1, 0.23, 0.5, 0.7, 0.95, 0.99, 1., 0.95, 0.8, 0.4 …. 0. ,0., 0.]</p>
<p>Multiplying it by the label, which is currently 0, results in an all-zeros list as output; but if the target label for liver injury were 1, then we would use the list above as our soft labels.</p>
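<p>In code, the soft-label construction boils down to an elementwise product; the numbers below are just an illustration of the example above:</p>
<pre><code>import numpy as np

# per-slice liver visibility, normalized to 0-1 from the segmentation masks
liver_visibility = np.array([0., 0., 0.01, 0.05, 0.23, 0.7, 0.99, 1., 0.8, 0.4, 0.])

patient_liver_injury = 1  # patient-level label (0 or 1)
slice_soft_labels = liver_visibility * patient_liver_injury
# label 0 -> all zeros; label 1 -> the visibility curve is used as the soft target
</code></pre>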
<h2><strong>Stage2: 2.5D Approach ( 2D CNN + RNN):</strong></h2>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4712534%2Fe8df4581839b1fa7dcadf68fe2a715a1%2FScreenshot%202023-10-22%20at%205.31.23%20AM.png?generation=1697935695484067&alt=media" alt=""></p>
<p>In stage 2, we trained our models using the volumes (based either on our windowing or on Theo's preprocessing) and the masks/crops generated by the 3D segmentation. Each model is trained for multiple tasks (segmentation + classification). For all 32 sequences, we predicted slice-level masks and sigmoid predictions. A simple maximum aggregation is then applied to the sigmoid predictions to obtain the study-level prediction used in submissions. </p>
<p>For training our models, some common settings were:</p>
<ul>
<li><strong>Learning rate:</strong> (1e-4 to 4e-4) range</li>
<li><strong>Optimizer:</strong> AdamW</li>
<li><strong>Scheduler:</strong> Cosine Annealing w/ Warmup </li>
<li><strong>Loss:</strong> BCE Loss for Classification, Dice Loss for segmentation</li>
</ul>
<h3><strong>Auxiliary Segmentation Loss:</strong></h3>
<p>One of the key things which made our training much more stable and helped in improving scores was using auxiliary losses based on segmentation. </p>
<p>The encoder was kept the same for both the classification and segmentation decoders; we used two types of segmentation head:</p>
<ul>
<li><strong><em>Unet based decoder</em></strong> for generating masks</li>
<li><strong><em>2D-CNN</em></strong> based head </li>
</ul>
<pre><code>nn.Sequential(
            nn.Conv2d(nb_ft, 128, kernel_size=3, padding=1),
            nn.BatchNorm2d(128),
            nn.ReLU(inplace=True),
            nn.Conv2d(128, 128, kernel_size=3, padding=1),
            nn.BatchNorm2d(128),
            nn.ReLU(inplace=True),
            nn.Conv2d(128, 4, kernel_size=1, padding=0),
        )
</code></pre>
<pre><code>    # attribute and helper names are illustrative: one auxiliary head per tapped encoder block
    self.mask_head_1 = self.get_mask_head(true_encoder)
    self.mask_head_2 = self.get_mask_head(true_encoder)
</code></pre>
<p>We used the feature maps generated mainly from the last and 2nd-last blocks of the backbone and applied dice loss between the predicted masks and the true masks. This trick gave us around a +0.01 to +0.03 boost in our models. We used a similar technique in the COVID-19 detection competition held a few years back; you can also refer to my solution there for more detailed use of auxiliary losses and code snippets. <br>
<a href="https://www.kaggle.com/c/siim-covid19-detection/discussion/266571" target="_blank">link of discussion</a></p>
<p>Here is a rough sketch of how such an auxiliary dice loss can be applied alongside the classification loss; the tensor names (<code>cls_logits</code>, <code>mask_logits</code>, <code>masks</code>, <code>targets</code>) and the weighting factor are illustrative:</p>
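<pre><code>import torch
import torch.nn.functional as F

def dice_loss(mask_logits, masks, eps=1e-6):
    probs = torch.sigmoid(mask_logits)
    intersection = (probs * masks).sum(dim=(-2, -1))
    union = probs.sum(dim=(-2, -1)) + masks.sum(dim=(-2, -1))
    return 1 - ((2 * intersection + eps) / (union + eps)).mean()

cls_loss = F.binary_cross_entropy_with_logits(cls_logits, targets)   # classification part
seg_loss = dice_loss(mask_logits, masks)                             # auxiliary segmentation part
loss = cls_loss + aux_weight * seg_loss                              # aux_weight is a hyperparameter
</code></pre>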
<h3><strong>Architectures used in Final ensemble:</strong></h3>
<ul>
<li>Coat Lite Medium w/ GRU - <a href="https://github.com/mlpc-ucsd/CoaT" target="_blank">original source code</a></li>
<li>Coat Lite Small w/ GRU!</li>
<li>Efficientnet v2s w/ GRU [Timm]</li>
</ul>
<h3><strong>Augmentations:</strong></h3>
<p>We couldn't come up with several augmentations to use, but these were the ones which we used in our training.</p>
<pre><code>        A.Perspective(p=0.5),          # albumentations assumed; probabilities and limits are placeholders
        A.HorizontalFlip(p=0.5),
        A.VerticalFlip(p=0.5),
        A.Rotate(p=0.5, limit=(-30, 30)),
</code></pre>
<h2><strong>Post Processing / Ensemble:</strong></h2>
<p>The final ensemble for the all-organs model includes <strong>multiple CoaT Medium and V2s based models</strong> trained on either 4 folds or the full data. </p>
<p>For extravasation, we mainly used CoaT Small and v2s in the ensemble. <br>
<strong>No major postprocessing</strong> was applied except for tuning scaling factors based on CV scores.<br>
To get the predictions, we aggregated the model outputs at slice level and simply took the maximum value for each patient.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4712534%2Fd6fa2cc524588b85b82906cccb6552bf%2FScreenshot%202023-10-22%20at%206.02.32%20AM.png?generation=1697936329146043&alt=media" alt=""></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4712534%2Fb69766140607127e717966085c927bce%2FScreenshot%202023-10-22%20at%206.07.13%20AM.png?generation=1697936371125187&alt=media" alt=""></p>
<h4><strong>Ensemble:</strong></h4>
<p>Within the folds of each model, we do a slice-level ensemble.<br>
For different architectures & cross-data models (Theo's/ours), we ensembled after the max aggregation. </p>
<h4><strong>Best Ensemble OOF CV</strong>: 0.31x</h4>
<h4><strong>Best single model 4 fold OOF CV</strong>: 0.326 [Coat lite Medium]</h4>
<p>Organ level OOF for single model looks like this:<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4712534%2F9143afd07ba3069f2b2259b1d8fe80eb%2FScreenshot%202023-10-16%20at%204.05.07%20AM.png?generation=1697413214073394&alt=media" alt=""></p>
<p>Thank you. </p>
<p>EDIT 1: 3D segmentation code: <a href="https://www.kaggle.com/code/haqishen/rsna-2023-1st-place-solution-train-3d-seg/notebook" target="_blank">notebook link</a></p> |
RSNA 2023 Abdominal Trauma Detection | 14th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA 2023 Abdominal Trauma Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h2>Overview</h2>
<p>I used an efficient preprocessing pipeline and small multi-task models in a single stage framework. I didn't use image level labels and segmentation masks because I forgot they were given 🤦♂️.</p>
<p>Kaggle Notebook: <a href="https://www.kaggle.com/code/gunesevitan/rsna-2023-abdominal-trauma-detection-inference" target="_blank">https://www.kaggle.com/code/gunesevitan/rsna-2023-abdominal-trauma-detection-inference</a><br>
Kaggle Dataset: <a href="https://www.kaggle.com/datasets/gunesevitan/rsna-2023-abdominal-trauma-detection-dataset" target="_blank">https://www.kaggle.com/datasets/gunesevitan/rsna-2023-abdominal-trauma-detection-dataset</a><br>
GitHub Repository: <a href="https://github.com/gunesevitan/rsna-2023-abdominal-trauma-detection" target="_blank">https://github.com/gunesevitan/rsna-2023-abdominal-trauma-detection</a></p>
<h2>Dataset</h2>
<h3>2D Dataset</h3>
<ul>
<li>Bit shift with DICOM's bits allocated and stored attributes</li>
<li>Linear pixel value rescale with DICOM's rescale slope and intercept attributes</li>
<li>Window with DICOM's window width and center attributes (abdominal soft tissue window; width 400, center 50)</li>
<li>Adjust minimum pixel value to 0 and scale pixel values with the new maximum</li>
<li>Invert pixel values if DICOM's photometric interpretation attribute is MONOCHROME1</li>
<li>Multiply pixel values with 255 and cast image to uint8</li>
<li>Write image in lossless png format with raw size</li>
</ul>
<p>My 2D and 3D dataset pipelines are separated because this part can run very fast in parallel because of non-blocking IO. I can export all of the training DICOMs as pngs in approximately 20 minutes.</p>
<h3>3D Dataset</h3>
<p>I saved lots of different CT scans from training set as videos and examined them. I noticed each of their start and end points were different on the z dimension. Some of them were starting from the shoulders and ending just before the legs or some of them were starting from the lungs and ending somewhere around middle femur.</p>
<p>I studied the anatomy and decided to localize ROIs. I manually annotated bounding boxes around the largest contour on axial plane. I labeled slices before the liver as "upper" and slices after the femur head as "lower". Slices between those two location are labeled as "abdominal". I trained a YOLOv8 nano model and it was reaching to 0.99x mAP@50 on all those classes easily. I dropped slices that are predicted as "upper" and "lower", and I used slices that are predicted as "abdominal" and cropped them with the predicted bounding box.<br>
<img src="https://i.ibb.co/JpNsJD1/val-batch2-pred.jpg" alt="yolo"></p>
<p>Eventually, I ditched this approach because it was too slow and it didn't improve my overall score at all. In my latest 3D pipeline, I was using a lightweight localization by simply cropping the largest contour on the axial plane and keeping all slices on the z dimension.</p>
<ul>
<li>Read all images that are exported as pngs in a scan and stack them on the z-axis</li>
<li>Sort z-axis in descending order by DICOMs' image position patient z attribute</li>
<li>Flip x-axis if DICOMs' patient position attribute is HFS (head first supine)</li>
<li>Drop partial slices (some slices at the beginning or end of the scan were partially black)</li>
</ul>
<p>I dropped those slices by counting all black vertical lines and their differences on z-axis. Normal slices had 0-5 all black vertical lines. If all black vertical line count suddenly increases or decreases then that slice is partial.</p>
<pre><code>
if scan.shape[0] != 0:
    scan_all_zero_vertical_line_transitions = np.diff(np.all(scan == 0, axis=1).sum(axis=-1))
    # a transition threshold of 5 is assumed from the description above
    slices_with_all_zero_vertical_lines = (scan_all_zero_vertical_line_transitions > 5) | (scan_all_zero_vertical_line_transitions < -5)
    slices_with_all_zero_vertical_lines = np.append(slices_with_all_zero_vertical_lines, slices_with_all_zero_vertical_lines[-1])
    scan = scan[~slices_with_all_zero_vertical_lines]
    del scan_all_zero_vertical_line_transitions, slices_with_all_zero_vertical_lines
</code></pre>
<ul>
<li>Crop the largest contour on the axial plane</li>
</ul>
<p>I didn't do that to each image separately because it would break the alignment of slices. I calculated bounding boxes for each slice and calculate the largest bounding box by taking minimum of starting points and maximum of ending points.</p>
<pre><code>
largest_contour_bounding_boxes = np.array([dicom_utilities.get_largest_contour(image) for image in scan])
# bounding boxes assumed to be (x_min, y_min, x_max, y_max): take min of starts and max of ends
largest_contour_bounding_box = [
    int(largest_contour_bounding_boxes[:, 0].min()),
    int(largest_contour_bounding_boxes[:, 1].min()),
    int(largest_contour_bounding_boxes[:, 2].max()),
    int(largest_contour_bounding_boxes[:, 3].max()),
]
scan = scan[
    :,
    largest_contour_bounding_box[1]:largest_contour_bounding_box[3] + 1,
    largest_contour_bounding_box[0]:largest_contour_bounding_box[2] + 1,
]
</code></pre>
<ul>
<li>Crop non-zero slices along 3 planes</li>
</ul>
<pre><code>
mmin = np.array((scan > 0).nonzero()).min(axis=1)
mmax = np.array((scan > 0).nonzero()).max(axis=1)
scan = scan[
    mmin[0]:mmax[0] + 1,
    mmin[1]:mmax[1] + 1,
    mmin[2]:mmax[2] + 1
]
</code></pre>
<ul>
<li>Resize 3D volume into 96x256x256 with area interpolation</li>
<li>Write image as a numpy array file</li>
</ul>
<p>To conclude, those numpy arrays are used as model inputs. I wasn't able to benefit from parallel execution at this stage. </p>
<h2>Validation</h2>
<p>I used multi label stratified group kfold for cross-validation. Group functionality can be achieved by splitting at patient level. I converted one-hot encoded classes into ordinal encoded single columns and created another column for patient scan count. I split dataset into 5 folds and 5 ordinal encoded target columns + patient scan count column are used for stratification.</p>
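<p>A rough sketch of this split, assuming the <code>iterative-stratification</code> package and hypothetical column names for the patient-level targets and scan counts:</p>
<pre><code>import pandas as pd
from iterstrat.ml_stratifiers import MultilabelStratifiedKFold

# df: one row per scan with ordinal-encoded targets; aggregate to one row per patient
patient_df = df.groupby("patient_id").agg(
    bowel=("bowel", "max"), extravasation=("extravasation", "max"),
    kidney=("kidney", "max"), liver=("liver", "max"), spleen=("spleen", "max"),
    scan_count=("scan_id", "nunique"),
).reset_index()

stratify_columns = ["bowel", "extravasation", "kidney", "liver", "spleen", "scan_count"]
mskf = MultilabelStratifiedKFold(n_splits=5, shuffle=True, random_state=42)
for fold, (_, validation_idx) in enumerate(mskf.split(patient_df, patient_df[stratify_columns])):
    patient_df.loc[validation_idx, "fold"] = fold
</code></pre>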
<h2>Models</h2>
<p>I tried lots of different models, heads and necks but two simple models were the best performing ones.</p>
<h3>MIL-like 2D multi-task classification model</h3>
<p>This model is a very simple one that is similar to a MIL approach, and ironically this was my best performing model. The architecture is as follows (a rough sketch in code follows the list):</p>
<ol>
<li>Extract features on 2D slices</li>
<li>Average or max pooling on z dimension</li>
<li>Average, max, gem or attention pooling on x and y dimension</li>
<li>Dropout</li>
<li>5 classification heads for each target</li>
</ol>
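<p>A rough PyTorch sketch of this MIL-like model; the backbone name, pooling choices and head sizes are illustrative rather than the exact configuration used:</p>
<pre><code>import torch
import torch.nn as nn
import timm

class MILClassifier(nn.Module):
    def __init__(self, backbone="efficientnet_b0"):
        super().__init__()
        # 2D feature extractor applied to every slice; returns feature maps, not pooled vectors
        self.backbone = timm.create_model(backbone, pretrained=True, in_chans=1,
                                          num_classes=0, global_pool="")
        n_features = self.backbone.num_features
        self.dropout = nn.Dropout(0.1)
        self.bowel = nn.Linear(n_features, 1)
        self.extravasation = nn.Linear(n_features, 1)
        self.kidney = nn.Linear(n_features, 3)
        self.liver = nn.Linear(n_features, 3)
        self.spleen = nn.Linear(n_features, 3)

    def forward(self, x):                                    # x: (batch, depth, height, width)
        b, d, h, w = x.shape
        features = self.backbone(x.reshape(b * d, 1, h, w))  # (b * d, c, h', w')
        features = features.reshape(b, d, *features.shape[1:]).mean(dim=1)  # pool over z
        features = features.amax(dim=(-2, -1))               # max pool over x and y
        features = self.dropout(features)
        return (self.bowel(features), self.extravasation(features),
                self.kidney(features), self.liver(features), self.spleen(features))
</code></pre>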
<h3>RNN 2D multi-task classification model</h3>
<p>This model is similar to what others used in previous competitions. The architecture is:</p>
<ol>
<li>Extract features on 2D slices</li>
<li>Average, max or gem pooling on x and y dimension</li>
<li>Bidirectional LSTM or GRU max while using z dimension as a sequence </li>
<li>Dropout</li>
<li>5 classification heads for each target</li>
</ol>
<h3>Backbones, necks and heads</h3>
<ul>
<li>I tried lots of backbones from timm and monai but my best backbones were EfficientNet b0, EfficientNet v2 tiny and DenseNet121. I think I wasn't able to make large models converge.</li>
<li>I also tried lots of different pooling types including average, sum, logsumexp, max, gem, attention but average and attention worked best for the first model and max worked best for the second model.</li>
<li>I only used 5 regular classification heads for 5 targets<ul>
<li>n_features x 1 bowel head + sigmoid at inference time</li>
<li>n_features x 1 extravasation head + sigmoid at inference time</li>
<li>n_features x 3 kidney head + softmax at inference time</li>
<li>n_features x 3 liver head + softmax at inference time</li>
<li>n_features x 3 spleen head + softmax at inference time</li></ul></li>
</ul>
<h2>Training</h2>
<p>I used BCEWithLogitsLoss for the bowel and extravasation heads, and CrossEntropyLoss for the kidney, liver and spleen heads. The only modification I made was implementing the exact same sample weights, like this:</p>
<pre><code>class SampleWeightedBCEWithLogitsLoss(nn.BCEWithLogitsLoss):

    def __init__(self, weight=None, reduction='mean'):
        super(SampleWeightedBCEWithLogitsLoss, self).__init__(weight=weight, reduction=reduction)
        self.weight = weight
        self.reduction = reduction

    def forward(self, inputs, targets, sample_weights):
        # element-wise loss first, then scale by the per-sample weights before reducing
        loss = F.binary_cross_entropy_with_logits(inputs, targets, reduction='none', weight=self.weight)
        loss = loss * sample_weights
        if self.reduction == 'mean':
            loss = loss.mean()
        elif self.reduction == 'sum':
            loss = loss.sum()
        return loss
</code></pre>
<pre><code>class SampleWeightedCrossEntropyLoss(nn.CrossEntropyLoss):

    def __init__(self, weight=None, reduction='mean'):
        super(SampleWeightedCrossEntropyLoss, self).__init__(weight=weight, reduction=reduction)
        self.weight = weight
        self.reduction = reduction

    def forward(self, inputs, targets, sample_weights):
        loss = F.cross_entropy(inputs, targets, reduction='none', weight=self.weight)
        loss = loss * sample_weights
        if self.reduction == 'mean':
            loss = loss.mean()
        elif self.reduction == 'sum':
            loss = loss.sum()
        return loss
</code></pre>
<p>The final loss is calculated as the sum of each head's loss, and backward is called on that.</p>
<p>Training transforms are:</p>
<ul>
<li>Scale by max 8 bit pixel value</li>
<li>Random X, Y and Z flip that are independent of each other</li>
<li>Random 90 degree rotation on axial plane</li>
<li>Random 0-45 degree rotation on axial plane</li>
<li>Histogram equalization or random contrast shift</li>
<li>Random 224x224 crop on axial plane</li>
<li>3D cutout</li>
</ul>
<p>Test transforms are:</p>
<ul>
<li>Scale by max 8 bit pixel value</li>
<li>Center 224x224 crop on axial plane</li>
</ul>
<pre><code>training_transforms = T.Compose([
T.EnsureChannelFirst(channel_dim=),
T.RandFlip(spatial_axis=, prob=transform_parameters[]),
T.RandFlip(spatial_axis=, prob=transform_parameters[]),
T.RandFlip(spatial_axis=, prob=transform_parameters[]),
T.RandRotate90(spatial_axes=(, ), max_k=, prob=transform_parameters[]),
T.RandRotate(
range_x=transform_parameters[],
range_y=transform_parameters[],
range_z=transform_parameters[],
prob=transform_parameters[]
),
T.OneOf([
T.RandHistogramShift(num_control_points=transform_parameters[], prob=transform_parameters[]),
T.RandAdjustContrast(gamma=transform_parameters[], prob=transform_parameters[])
], weights=(, )),
T.RandSpatialCrop(roi_size=transform_parameters[], max_roi_size=, random_center=, random_size=),
T.RandCoarseDropout(
holes=transform_parameters[],
spatial_size=transform_parameters[],
dropout_holes=,
fill_value=,
max_holes=transform_parameters[],
max_spatial_size=transform_parameters[],
prob=transform_parameters[]
),
T.ToTensor(dtype=torch.float32, track_meta=)
])
inference_transforms = T.Compose([
T.EnsureChannelFirst(channel_dim=),
T.CenterSpatialCrop(roi_size=transform_parameters[]),
T.ToTensor(dtype=torch.float32, track_meta=)
])
</code></pre>
<pre><code>
[, ]
[, , ]
[, , ]
[, , ]
</code></pre>
<p>Batch size of 2 or 4 is used depending on the model size. Cosine annealing learning rate schedule is utilized to explore different regions with a small base and minimum learning rate. AMP is also used for faster training and regularization.</p>
<h2>Inference</h2>
<p>2x MIL-like model (efficientnetb0 and densenet121) and 2x RNN model (efficientnetb0 and efficientnetv2t) are used on the final ensemble. </p>
<p>Since the models were trained with random crop augmentation, inputs are center cropped at test time. 4x TTA (xyz, xy, xz and yz flip) are applied and predictions are averaged.</p>
<p>Predictions of 5 folds are averaged and then activated with sigmoid or softmax functions.</p>
<h2>Post-processing</h2>
<p>Different weights are used for 4 models for different targets. Those weights are found by minimizing the OOF score.</p>
<pre><code>mil_efficientnetb0_bowel_weight =
mil_densenet121_bowel_weight =
lstm_efficientnetb0_bowel_weight =
lstm_efficientnetv2t_bowel_weight =
mil_efficientnetb0_extravasation_weight =
mil_densenet121_extravasation_weight =
lstm_efficientnetb0_extravasation_weight =
lstm_efficientnetv2t_extravasation_weight =
mil_efficientnetb0_kidney_weight =
mil_densenet121_kidney_weight =
lstm_efficientnetb0_kidney_weight =
lstm_efficientnetv2t_kidney_weight =
mil_efficientnetb0_liver_weight =
mil_densenet121_liver_weight =
lstm_efficientnetb0_liver_weight =
lstm_efficientnetv2t_liver_weight =
mil_efficientnetb0_spleen_weight =
mil_densenet121_spleen_weight =
lstm_efficientnetb0_spleen_weight =
lstm_efficientnetv2t_spleen_weight =
</code></pre>
<p>I aggregated scan level predictions on patient_id and took the maximum prediction.</p>
<p>I also scaled injury target predictions with different multipliers and they are also set by minimizing OOF score.</p>
<pre><code>df_predictions[] *=
df_predictions[] *=
df_predictions[] *=
df_predictions[] *=
df_predictions[] *=
df_predictions[] *=
df_predictions[] *=
df_predictions[] *=
</code></pre>
<p>My final ensemble score was <strong>0.3859</strong> and target scores are listed below. I really enjoyed how my OOF scores are almost perfectly correlated with LB scores. I selected the submission that had the best OOF, public and private LB scores thanks to stable cross-validation.</p>
<table>
<thead>
<tr>
<th>bowel</th>
<th>extravasation</th>
<th>kidney</th>
<th>liver</th>
<th>spleen</th>
<th>any</th>
</tr>
</thead>
<tbody>
<tr>
<td>0.1282</td>
<td>0.5070</td>
<td>0.2831</td>
<td>0.4186</td>
<td>0.4736</td>
<td>0.5050</td>
</tr>
</tbody>
</table> |
RSNA 2023 Abdominal Trauma Detection | 2nd Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA 2023 Abdominal Trauma Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>My solution combines knowledge acquired in participating in the previous RSNA challenges, and involves much more than the month I spent working intensively in the competition. I've always enjoyed joining RSNA challenges, and have a special affection for medical imaging because of my relatives' medical profession. </p>
<p>Although 2nd is a great finish, the conditions in which it happened (i.e. unjustified deadline extension) make it really painful. The Kaggle team still does not understand how much modifying rules last minute hurts participants, or they simply don't care. I was already burnt out by competing full time for a month, adding 2 days on top plus missing first place by nothing is too much for me.</p>
<p><strong>Updates:</strong> </p>
<ul>
<li>More details added, fixed num_classes mistake.</li>
<li>Inference code : <a href="https://www.kaggle.com/code/theoviel/rsna-abdominal-inf" target="_blank">https://www.kaggle.com/code/theoviel/rsna-abdominal-inf</a></li>
<li><strong>Training code on Github :</strong> <a href="https://github.com/TheoViel/kaggle_rsna_abdominal_trauma" target="_blank">https://github.com/TheoViel/kaggle_rsna_abdominal_trauma</a></li>
</ul>
<h2>Data</h2>
<p>I use <a href="https://www.kaggle.com/theoviel/datasets?sort=votes" target="_blank">my datasets</a>! Give them a quick upvote so I can reach 4x GM. <br>
In addition, I resize the longest edge to 512 & center crop to 384. I also use 1 frame out of 2 to speed up 2D model inference, and limit the stack size to 600. For models requiring a specific input size, images were simply resized afterwards. <br>
Images are loaded with <code>dicomsdl</code> and processed on GPU. It's fast. The pipeline without ensembling runs in less than 4h. </p>
<h2>Models</h2>
<h3>Overview</h3>
<p>Pipeline is below. It has two components: </p>
<ul>
<li>2D models + RNN, where the frame-level labels are inferred using organ visibility classification when needed. </li>
<li>Crop models for kidney / liver / spleen. Results are fed to the RNN after pooling.</li>
</ul>
<p>It re-uses winning ideas from the RSNA fracture competition (main references: <a href="https://www.kaggle.com/competitions/rsna-2022-cervical-spine-fracture-detection/discussion/363232" target="_blank">[1]</a>, <a href="https://www.kaggle.com/competitions/rsna-2022-cervical-spine-fracture-detection/discussion/362640" target="_blank">[2]</a>, <a href="https://www.kaggle.com/competitions/rsna-2022-cervical-spine-fracture-detection/discussion/363232" target="_blank">[3]</a>).</p>
<p><a href="https://ibb.co/MBBh8wP"><img src="https://i.ibb.co/fDDS86r/RSNA-Abd-drawio.png" alt="RSNA-Abd-drawio"></a></p>
<h3>2D models</h3>
<p>The key to achieving good performance with 2D models is cleverly sampling frames to feed meaningful information and reduce label noise.<br>
To do so, I use a simple but fast <code>efficientnetv2_rw_t</code> to infer which organs are present on every frame. During training, frames are sampled the following way:</p>
<ul>
<li>kidney / liver / spleen / negative bowel : Pick a random frame inside the organ.</li>
<li>positive bowel / positive extravasation : Use the frame-level labels.</li>
<li>Negative extravasation : Sample anywhere</li>
</ul>
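<p>A minimal sketch of this sampling logic is shown below; <code>organ_ranges</code> and <code>frame_labels</code> are hypothetical lookups built from the organ-visibility predictions and the image-level labels, not the author's actual code.</p>
<pre><code>import random

def sample_frame(series_id, target, organ_ranges, frame_labels, n_frames):
    """Pick a training frame index for one series depending on the target (sketch)."""
    if target in ("kidney", "liver", "spleen", "bowel_negative"):
        start, end = organ_ranges[(series_id, target.replace("_negative", ""))]
        return random.randint(start, end)                 # random frame inside the organ
    if target in ("bowel_positive", "extravasation_positive"):
        return random.choice(frame_labels[(series_id, target)])  # use frame-level labels
    return random.randint(0, n_frames - 1)                # negative extravasation: anywhere
</code></pre>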
<p>This model extracts probabilities for every 1/2 frame in the stack, and a RNN is trained on top to aggregate results. </p>
<p><strong>Details :</strong></p>
<ul>
<li>Heavy augmentations (HFlip, ShiftScaleRotate, Color augs, Blur augs, ElasticTransform) + cutmix (<code>p=0.5</code>)</li>
<li><code>maxvit_tiny_tf_512</code> was best. <code>convnextv2_tiny</code> and <code>maxvit_tiny_tf_384</code> were also great. </li>
<li>Ranger optimizer, <code>bs=32</code>, 40 epochs, <code>lr=4e-5</code></li>
<li>Only 3D info is the 3 adjacent frames used as channels.</li>
<li>11 classes : <code>[bowel/extravasation]_injury</code>(BCE optimized). And <code>[kidney/liver/spleen]_[healthy/low/high]</code> optimized with the cross entropy.</li>
</ul>
<h3>Crop models</h3>
<p>Strategy is similar : key is to feed to the model crops where the information is located. In that case, I used a 3D <code>resnet18</code> to crop the organs, and feed the crop to a 2D CNN + RNN model. It improves performances on kidney, liver and spleen by a good margin. </p>
<p><strong>Details :</strong></p>
<ul>
<li>Same augmentations with more cutmix (<code>p=1.</code>)</li>
<li>Ranger optimizer, <code>bs=8</code>, 20 epochs, <code>lr=2e-5</code></li>
<li>Best model uses 11 frames sampled uniformly in the organ. I used different number of frames for ensembling.</li>
<li><code>coatnet_1_rw_224</code> + RNN was best. I used different heads (RNN + attention, transformers) and other models CoatNet variants for ensembling.</li>
<li>3 class cross-entropy loss.</li>
</ul>
<h3>RNN model</h3>
<p>It is trained separately. Its role is to aggregate information from previous models, and optimize the competition metric directly.</p>
<p><strong>Details :</strong></p>
<ul>
<li>Restrict stack size to 600 (for faster loading), use 1/2 frame (for faster 2D inference). Sequences are then resized to 200 for batching. </li>
<li>Heavily tweaked LSTM architecture :<ul>
<li>1x Dense + Bidi-LSTM for the 2D models probabilities. Input is the concatenation of the segmentation proba (<code>size=5</code>), the classification probas (<code>size=11 x n_models</code>), and the classification probas multiplied by the associated segmentation (<code>size=11 x n_models</code>)</li>
<li>Pool using probabilities predicted by the segmentation model to get organ-conditioned features.</li>
<li>Use the mean and max pooling of the <code>22 x n_models</code> 2D classification features</li>
<li>Independent per organ logits, which have access to the corresponding pooled features. For instance the kidney logits sees only the crop features for the kidney (<code>3x n_crop_models</code> fts) , the RNN features pooled using the kidney segmentation, and the <code>3 x n_models</code> pooled 2D features for the kidney class.</li></ul></li>
<li>AdamW optimizer, <code>bs=64</code>, 10 epochs, <code>lr=4e-5</code></li>
</ul>
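<p>The organ-conditioned pooling mentioned in the list above can be sketched as a probability-weighted mean over frames; tensor names and shapes here are assumptions for illustration only.</p>
<pre><code>import torch

def organ_conditioned_pool(frame_features, organ_probs):
    """frame_features: (B, T, F) per-frame RNN features; organ_probs: (B, T)
    predicted visibility of one organ per frame. Returns (B, F)."""
    weights = organ_probs / organ_probs.sum(dim=1, keepdim=True).clamp(min=1e-6)
    return (frame_features * weights.unsqueeze(-1)).sum(dim=1)
</code></pre>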
<h3>Things that did not work</h3>
<ul>
<li>YoloX + Ian Pan extravasation boxes. Tried using the data to get crops, and adding the confidence to the RNN model. It did not really help and was painful to implement.</li>
<li>Adding a sequential head to the first stage worked early on, but as my crop models got stronger I figured out 2D was enough. This allowed for a significant speed up of my inference pipeline which is nice.</li>
<li>Ensembling did not really help on private ultimately and my best sub is my strongest single model.</li>
<li>Stuff I tried during the 2 days extended deadline. I was happy with how I managed my time and knew the extension only meant more time for other teams to catch up. Thanks again Kaggle 😭</li>
</ul>
<h2>Scores :</h2>
<ul>
<li>2D Classification + RNN :<ul>
<li>Using ConvNext-v2: <strong>Public 0.41</strong> - <strong>Private 0.39</strong></li></ul></li>
<li>Add the crop model:<ul>
<li>MaxVit (instead of ConvNext) + CoatNet-RNN : <strong>Public 0.37</strong> - <strong>Private 0.35</strong> (best private)</li></ul></li>
<li>Ensemble:<ul>
<li>3x2D models, 8x 2.5D models : <strong>Public 0.35</strong> - <strong>Private 0.35</strong></li></ul></li>
</ul>
<p><em>Thanks for reading !</em></p> |
RSNA 2023 Abdominal Trauma Detection | 439th-Solution-RSNA_ATD | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA 2023 Abdominal Trauma Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h1>Place 439th Solution for the RSNA Abdominal Trauma Detection competition and Insights.</h1>
<h2><strong>Context</strong></h2>
<p>Clinical Context: <a href="https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection/overview" target="_blank">https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection/overview</a></p>
<p>Data Context: <a href="https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection/data" target="_blank">https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection/data</a></p>
<h2>Intro</h2>
<p>Firstly, thanks to the RSNA for hosting the Abdominal Trauma Detection competition. It was both challenging and well-structured. A big shout-out to our community for the insightful discussions and for demonstrating what's possible; we learned a lot not only from researching and reading papers but also from reading the community's solution notebooks. Congratulations to the winning teams; we learned a lot from the solution write-ups. Congratulations as well to our team <a href="https://www.kaggle.com/diegoramirezmendoza" target="_blank">@diegoramirezmendoza</a>, <a href="https://www.kaggle.com/pedromartnezbarrn" target="_blank">@pedromartnezbarrn</a>, <a href="https://www.kaggle.com/arantzabazalda" target="_blank">@arantzabazalda</a>, <a href="https://www.kaggle.com/eliudlimon" target="_blank">@eliudlimon</a> for such an amazing collaboration and perfect teamwork!</p>
<h2>Overview</h2>
<p>We implemented 2 different pipelines for this competition: a 3D-CNN and a 2D-CNN combined with an LSTM architecture.</p>
<h2>EDA</h2>
<p>As usual, we couldn't start this competition without an exploratory data analysis. The aim of this stage was to understand the data, its particular distribution, getting acquainted with the labels to classify, and exploring metadata that could provide better quality data preprocessing. By doing this, we were able to build a robust strategy based on the characteristics of the data and also considering the available resources. This is something we will discuss later in this document, but for now, let's say this first step (EDA), as a data scientist's best practice, provides good insights about the data and possible strategies, but also about the computational resources required.</p>
<h2>Data Preprocessing</h2>
<p>The data preprocessing step was laborious; the objective was to write scripts that automate the generation and transformation of the competition data stored locally. Even though we developed static functions and methods to behave like image data generators, we observed that preprocessing all of this data during training was not really efficient: when training the proposed models, the time needed for a single epoch was enormously high.</p>
<p>Essentially the "standard" part of the data preprocessing was to use the raw dicom files transformed to arrays, then rescaled them from the original shape to 128 x 128, normalized and fixed the pixel value representations due to some dicom files storage characteristics. From here we start exploring different models and possible solutions that summarizes our 3 proposed models, the first model is a 3D CNN based from the work of [], the second model is a CNN + LSTM layers and the third model was designed to train from scratch some state-of-the-art CNN´s like VGG16 and ResNet50b (which don´t gave us good results).</p>
<h3>The strength of segmentations data</h3>
<p>From the EDA stage, we gained insights into the segmentation data provided for the competition. These NIfTI files could offer a better understanding of the data and help develop a more refined data preprocessing pipeline.</p>
<p>Although there is a limited amount of segmentation data, it was deemed sufficient for our purposes. The proposed methodology was to train a U-Net model from scratch using this data and integrate its predictions into the main pipeline. But why? The reason is that some of the scans are not as informative as we would like, and this is particularly important given the competition's objective to focus on abdominal trauma. Consequently, some of the scans above and below the abdomen can be considered as "noise".</p>
<p>Based on this proposition, we decided to use the segmentation data to train a U-Net for the task of segmenting the organs, as provided by the masks in the NIfTI files. We then developed a threshold function to clean the inferences, using the first appearance of the liver as the upper limit and the last segment of the bowel as the lower limit. This strategy for reducing the data was successfully implemented and integrated into the preprocessing pipeline.</p>
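<p>A minimal sketch of that thresholding idea, assuming the U-Net outputs an integer label mask per slice (the label ids below are placeholders), might look like:</p>
<pre><code>import numpy as np

LIVER, BOWEL = 1, 4  # placeholder label ids from the predicted masks

def crop_to_abdomen(volume, pred_masks):
    """Keep slices from the first liver appearance down to the last bowel slice."""
    liver_slices = np.where((pred_masks == LIVER).any(axis=(1, 2)))[0]
    bowel_slices = np.where((pred_masks == BOWEL).any(axis=(1, 2)))[0]
    upper = liver_slices.min()   # first slice where the liver appears
    lower = bowel_slices.max()   # last slice containing bowel
    return volume[upper:lower + 1]
</code></pre>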
<p>The experiment setup for training the U-Net model consisted of several steps. First, we implemented the model by extracting data as numpy files. Second, we defined the hyperparameters for the model. Third, we evaluated the performance of the model.</p>
<p>The final model used for the data reduction preprocessing pipeline was trained using the Adam optimizer with a learning rate of 0.000001. We used categorical crossentropy as the loss function, MeanIoU as our tunable metric, and set the batch size to 128. The model was trained for 128 epochs, although due to GPU resource limitations on Google Colab our training crashed somewhere around epoch 75. The total amount of data used for training was 15,520 DICOM and NIfTI files, which included both images and segmentation masks. Despite the challenges, the trained U-Net model provided valuable information for the preprocessing pipeline and allowed us to effectively reduce the data to focus on the region of interest in abdominal trauma cases.</p>
<h2>Training Results:</h2>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F11304439%2Fa610792c5a9897038986087883df671c%2Fdescarga.png?generation=1699395350287888&alt=media" alt=""></p>
<h2>Inference Results:</h2>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F11304439%2F7b77812541848956d381fbbf1416ed2e%2Fdescarga%20(2).png?generation=1699395316405025&alt=media" alt=""></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F11304439%2F28d7f4e90e094f9dbd48760e27c4b037%2Fdescarga%20(3).png?generation=1699395391353207&alt=media" alt=""></p>
<h2>3D CNN - data preprocessing</h2>
<p>For the 3D CNN model, after completing our "standard" preprocessing procedure and fixing the scale to 128 x 128, the question of determining the depth of the volume arose. To answer this, we referred to the work from [] and considered the GPU capacity. Ultimately, we set the volume depth to 64.</p>
<p>To get the data into the shape of 128x128x64, we used the zoom [] function from Python's scipy module. To automate this preprocessing task and data generation, we developed a preprocessing script that maps the patient data, storing the DICOM paths in a DataFrame (which was later saved as a CSV file and also stored in an SQLite database). From these mapped paths of the original data, we then distributed the specific "maps" to correctly generate the data.</p>
<p>The data was generated and saved as numpy files, normalized to values between 0 and 1, with the final shape of 128x128x64 for each series (patients' folders store series, with some patients having only one series and others having two series). As a reminder, the total number of 128x128x64 training volumes generated equals the total number of series present in the competition's dataset.</p>
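<p>A small sketch of this resizing step with <code>scipy.ndimage.zoom</code> (shapes and normalization as described above; the interpolation order is an assumption) is:</p>
<pre><code>import numpy as np
from scipy.ndimage import zoom

def resize_volume(volume, target_shape=(128, 128, 64)):
    """Rescale a stacked CT volume to a fixed shape and normalize it to [0, 1]."""
    factors = [t / s for t, s in zip(target_shape, volume.shape)]
    resized = zoom(volume, factors, order=1)
    resized = (resized - resized.min()) / (resized.max() - resized.min() + 1e-8)
    return resized.astype(np.float32)
</code></pre>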
<h2>CNN-LSTM - data preprocessing</h2>
<p>For this model we used the down-sampling block of the U-Net previously trained for semantic segmentation with the masks to extract the feature vector of each image; this part of the U-Net is called the encoder. The pipeline was:</p>
<p>Semantic Segmentation with the U-net -> Reduce Volume Shape to 128x128x64 (using Zoom Function) -> Feature Extraction (Pretrained Encoder) -> Bidirectional-LSTM<br>
Note: during feature extraction we noticed there were many 0's in the feature vectors; we dropped them (they don't carry important information), and the final shape input to the model was (batch_size, 64, 624)</p>
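<p>A minimal Keras sketch of the Bi-LSTM head over the extracted (64, 624) feature sequences is shown below; the hidden size, dropout and 13-output sigmoid head are assumptions, not the trained configuration.</p>
<pre><code>import tensorflow as tf

def build_bilstm_head(n_slices=64, n_features=624, n_outputs=13):
    """Classify a series from pre-extracted encoder features (sketch only)."""
    inputs = tf.keras.Input(shape=(n_slices, n_features))
    x = tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(128))(inputs)
    x = tf.keras.layers.Dropout(0.3)(x)
    outputs = tf.keras.layers.Dense(n_outputs, activation="sigmoid")(x)
    return tf.keras.Model(inputs, outputs)
</code></pre>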
<h2>Models</h2>
<ul>
<li>3D CNN</li>
<li>CNN-LSTM (Pretrained Encoder from U-net -> Bi-LSTM)</li>
</ul>
<h2>What did not work</h2>
<ul>
<li>3D-CNN model has bad results, we think that this kind of arquitecture only works with a very good cropping of the volumes, the bad performance is because of the black pixels who tend to overfit the model.<br>
Currently working on different approaches…</li>
</ul>
<h2>NOTEBOOKS</h2>
<p>Training: (The train of the model was on a local computer)</p>
<p>Inference: <a href="https://www.kaggle.com/alejandrolunamtz/inference-rsna-atd-fv" target="_blank">https://www.kaggle.com/alejandrolunamtz/inference-rsna-atd-fv</a></p> |
RSNA 2023 Abdominal Trauma Detection | Solution Write-up: Unleashing the Healing Potential: Abdominal Trauma | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA 2023 Abdominal Trauma Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><strong>Introduction</strong></p>
<p>The RSNA Abdominal Trauma Detection AI Challenge addresses the critical issue of prompt and accurate diagnosis of traumatic injuries in the abdomen using computed tomography (CT) scans. This is crucial as traumatic injuries are a leading cause of death worldwide. CT scans provide detailed cross-sectional images, but interpreting them for abdominal trauma can be complex, especially with multiple injuries or subtle bleeding.<br>
The competition aims to harness artificial intelligence and machine learning to assist medical professionals in rapidly and precisely detecting injuries and grading their severity. This will significantly enhance trauma care and improve patient outcomes on a global scale.</p>
<p><strong>Overview</strong></p>
<p>The provided dataset contains information on patients and their abdominal health status. It includes variables indicating the health and injury status of various abdominal organs (bowel, extravasation, kidney, liver, spleen). Additionally, an "any_injury" variable provides an overall count of injuries detected in a patient.</p>
<p><strong>Preprocessing</strong></p>
<p>The dataset was preprocessed to create a cleaned DataFrame. It includes information on patient ID, health, and injury status of different abdominal organs. Descriptive statistics were generated to provide a summary of the categorical variables, showcasing the prevalence of healthy and injured conditions for each organ.</p>
<p><strong>Data Exploration and Visualizations</strong></p>
<p><strong>Occurrence of Injuries in Different Organs</strong></p>
<p>A bar plot was used to visualize the occurrence of injuries in different organs. It revealed that bowel injuries were the least common, while other organs had one or more injuries.<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F7212220%2F4221c4f134c8d908f2b5b837c805b8a2%2FOccurrence%20of%20injuries%20in%20different%20organs.png?generation=1698915704986187&alt=media" alt="image"></p>
<p><strong>Overall Prevalence of Injuries</strong></p>
<p>The "any_injury" variable indicated the overall prevalence of injuries in the dataset, with a total count of 11 injuries.</p>
<p><strong>Relationship Between Injuries in Different Organs</strong></p>
<p>A heatmap illustrated the correlation between injuries in different organs. While perfect correlation (1) was observed on the diagonal, off-diagonal elements indicated correlations between injuries in different organs. In this small dataset, strong correlations may not be present.<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F7212220%2Fc03e89a61bd88c6e53cf310f33f3df41%2FCorrelation%20between%20organ.png?generation=1698916278467505&alt=media" alt="image"></p>
<p><strong>Multi-Planar Reconstruction (MPR)</strong></p>
<p>MPR involves displaying slices in different planes. Libraries like matplotlib or pyvista can be used to create interactive MPR visualizations, enhancing the interpretation of CT scans.</p>
<p><strong>Analysis of Organ Health</strong></p>
<p>The prevalence of healthy, low, and high health conditions for each organ was compared. The plot highlighted that the dataset predominantly consisted of healthy organ conditions, with only a few instances of low or high health status.<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F7212220%2F44858e883cf40a860263bb2c31f8e20a%2FOrgan%20health.png?generation=1698916380566194&alt=media" alt="image"></p>
<p><strong>Analysis of Injuries</strong></p>
<p>Occurrences of injuries in different organs were analyzed, along with the overall prevalence of injuries in the dataset. The heatmap revealed potential relationships between injuries in different organs.<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F7212220%2F4d807a5cfa2d37a1bdb6277863b16ee6%2FCorrelation%20between%20Injuries.png?generation=1698916468169775&alt=media" alt="image"></p>
<p><strong>Relationship Between "any_injury" and Organ Health</strong></p>
<p>The relationship between the presence of "any_injury" and the health status of each organ was examined. The plot indicated a strong relationship, with certain organs showing injuries in patients with low or high health status.<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F7212220%2F7539ee4980700fcf95f04cc8a0695aff%2FComparison%20of%20organs.png?generation=1698916646965693&alt=media" alt="image"></p>
<p><strong>Patient Profiles</strong></p>
<p>Patient profiles were identified based on organ health and injury status using K-means clustering. Two clusters were identified: one representing healthy patients and the other representing injured patients. In a larger dataset, more meaningful patient profiles may emerge.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F7212220%2F1e55b56c5d14a7e12826b6b89dd02a25%2FPatient%20Profile.png?generation=1698916166021267&alt=media" alt="image"></p>
<p><strong>Model Evaluation</strong></p>
<p>Three machine learning models (Random Forests, SVM, Gradient Boosting) were trained and evaluated for accuracy. The results indicated high performance across all models, with Random Forests achieving an accuracy of 96%.</p>
<p><strong>Conclusion</strong></p>
<p>The solution presented here demonstrates a comprehensive approach to addressing the RSNA Abdominal Trauma Detection AI Challenge. Through data preprocessing, exploratory data analysis, visualizations, and machine learning, this solution aims to significantly improve the diagnosis and treatment of traumatic abdominal injuries. The high model accuracy underscores the potential impact of this approach in real-world clinical settings.</p>
<p><strong>Notebook:</strong> <a href="https://www.kaggle.com/code/jocelyndumlao/unleashing-the-healing-potential-abdominal-trauma/notebook" target="_blank">https://www.kaggle.com/code/jocelyndumlao/unleashing-the-healing-potential-abdominal-trauma/notebook</a></p> |
RSNA 2023 Abdominal Trauma Detection | 3rd Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA 2023 Abdominal Trauma Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I salute you all for fighting to the end. Thanks also to the RSNA for their support. It was another great competition.</p>
<h2>Overview</h2>
<p>1, The 3D segmentation was trained with the given masks and each organ was cut into a cube shape using the predicted results.<br>
2, Multiple organ cubes were each entered into various 2.5D+3D classification models and the results were ensembled.</p>
<h2>Segmentation</h2>
<p>I used Qishen's 3D segmentation code. <a href="https://www.kaggle.com/competitions/rsna-2022-cervical-spine-fracture-detection/discussion/362607" target="_blank">https://www.kaggle.com/competitions/rsna-2022-cervical-spine-fracture-detection/discussion/362607</a><br>
The models used were resnet18 and resnet50. The average of the output of all models was used as a mask.</p>
<h2>Crop</h2>
<p>Since information around the organs is essential for trauma detection, the mask was slightly enlarged before the boxes were cut out. Two patterns of mask sizes were employed and two datasets were created for each organ.</p>
<h2>Classification</h2>
<p>All classification models follow a 2.5D + 3D structure. Typically, multiple 2.5D images are generated from the organ box and input into the model. The input size is (8, 15, 3, 128, 128), where 8 is the batch size, 15 is the number of images, and 3 is the number of channels. Each image is transformed into a feature map through a 2D CNN and input to subsequent processes such as pooling and LSTM.<br>
Because of the correlation in injuries between organs, the problem was solved in a multiclass problem, using essentially all targets. I trained a variety of model patterns, including:</p>
<p>・Multiple Image Sizes<br>
・Multiple image counts<br>
・Multiple necks (average pooling / max pooling / lstm / gru)<br>
・Multiple crop sizes<br>
・Multiple backbones (convnext / se_resnext / maxvit / caformer / xcit)<br>
・Multiple augmentation sets<br>
・Multiple epochs(without early stop)<br>
・Multiple targets (all targets / single organ targets)<br>
・Some models are pre-trained with image-level bowel / extravasation labels and used as initial values for weights.<br>
・Some models reduce noise by using box*mask (the cropped box multiplied by its segmentation mask) as input. This type of model is specific to the liver. Because of its shape, the liver has a lot of noise information if you just crop it with a box, so masking was very effective.<br>
・Some models use all organ boxes for training. Different image sizes/number of images are used for each organ to ensure the same resolution. A custom sampler was defined so that only boxes of the same organ exist in the same batch, allowing simultaneous training with different sizes/numbers.</p>
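<p>A minimal sketch of such a per-organ batch sampler (to be passed as <code>batch_sampler=</code> to a PyTorch DataLoader; class and variable names are illustrative) is:</p>
<pre><code>import math
import random
from collections import defaultdict
from torch.utils.data import Sampler

class SameOrganBatchSampler(Sampler):
    """Yield batches whose samples all come from the same organ, so each batch can
    use that organ's own image size / number of slices."""

    def __init__(self, organs, batch_size):
        self.batch_size = batch_size
        self.by_organ = defaultdict(list)
        for idx, organ in enumerate(organs):      # organs[i] = organ of dataset item i
            self.by_organ[organ].append(idx)

    def __iter__(self):
        batches = []
        for indices in self.by_organ.values():
            random.shuffle(indices)
            batches += [indices[i:i + self.batch_size]
                        for i in range(0, len(indices), self.batch_size)]
        random.shuffle(batches)
        return iter(batches)

    def __len__(self):
        return sum(math.ceil(len(v) / self.batch_size) for v in self.by_organ.values())
</code></pre>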
<p>The following are those that have made a particularly significant contribution to accuracy:<br>
・masking for liver model<br>
・custom sampler for all class models<br>
・2types of crops</p>
<h2>Ensemble</h2>
<p>A simple weighted average was performed for each target.<br>
Below are simplified weights.</p>
<pre><code>{
'bowel_injury':
{
'liver_gru_chaug_256_cropv1': ,
'liver_maxvit_224_cropv1': ,
'liver_maxvit_224_cropv2': ,
'spleen_gru_128_cropv1': ,
'spleen_maxvit_224_25epochs_cropv1': ,
'kidney_maxvit_224_cropv1': ,
'kidney_caformer_192_cropv2_pretrain': ,
'kidney_maxvit_224_cropv2': ,
'kidney_maxvit_224_25epochs_cropv2': ,
'bowel_lstm_256_n15_cropv1': ,
'bowel_288_n25_cropv1_pretrain': ,
'bowel_288_n25_25epochs_cropv1_pretrain': ,
'all_pretrain_cropv1_input_bowel': ,
'all_pretrain_cropv1_input_kidney': ,
'all_lstm_pretrain_cropv2_input_kidney': ,
},
'kidney_healthy':
{
'liver_gru_chaug_256_cropv1': ,
'liver_maxvit_224_cropv1': ,
'liver_maxvit_224_cropv2': ,
'spleen_gru_128_cropv1': ,
'spleen_maxvit_224_25epochs_cropv1': ,
'kidney_maxvit_224_cropv1': ,
'kidney_caformer_192_cropv2_pretrain': ,
'kidney_maxvit_224_cropv2': ,
'kidney_maxvit_224_25epochs_cropv2': ,
'bowel_lstm_256_n15_cropv1': ,
'bowel_288_n25_cropv1_pretrain': ,
'bowel_288_n25_25epochs_cropv1_pretrain': ,
'all_pretrain_cropv1_input_bowel': ,
'all_pretrain_cropv1_input_kidney': ,
'all_lstm_pretrain_cropv2_input_kidney': ,
},
}
</code></pre>
<p>The final oof log loss is as follows:</p>
<pre><code>{
'bowel_injury': ,
'extravasation_injury': ,
'kidney_healthy': ,
'kidney_low': ,
'kidney_high': ,
'liver_healthy': ,
'liver_low': ,
'liver_high': ,
'spleen_healthy': ,
'spleen_low': ,
'spleen_high':
}
</code></pre>
<h2>Post processing</h2>
<p>After a simple weighted average of each model, each prediction was further weighted. This is the same post-processing as in the public notebook.<br>
I also tried stacking, which directly optimizes the metric, but the results were slightly worse than the simple post-processing due to overfitting.<br>
The final CV is 0.3316 and the respective scores are as follows.</p>
<table>
<thead>
<tr>
<th></th>
<th>bowel</th>
<th>extravasation</th>
<th>kidney</th>
<th>liver</th>
<th>spleen</th>
<th>any</th>
</tr>
</thead>
<tbody>
<tr>
<td></td>
<td>0.095</td>
<td>0.4853</td>
<td>0.2434</td>
<td>0.3489</td>
<td>0.3751</td>
<td>0.4417</td>
</tr>
</tbody>
</table>
<p><br></p>
<p>train code: <a href="https://github.com/yujiariyasu/rsna_2023_abdominal_trauma_detection" target="_blank">https://github.com/yujiariyasu/rsna_2023_abdominal_trauma_detection</a><br>
inference code: <a href="https://www.kaggle.com/code/yujiariyasu/3rd-place-inf-code/notebook" target="_blank">https://www.kaggle.com/code/yujiariyasu/3rd-place-inf-code/notebook</a></p>
<p>Thanks everyone for your hard work!</p> |
RSNA 2023 Abdominal Trauma Detection | 514th Place Solution for the RSNA 2023 Abdominal Trauma Detection Competition | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA 2023 Abdominal Trauma Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, I would like to thank Radiological Society of North America and Kaggle for organizing and running this competition. I would also like to thank all the competitors who shared their views and notebooks.<br>
Congratulations to all the winners! Although my ranking was not good, I would like to share what I did and the results. I worked on predictions using a small number of images to conserve computational resources, and made predictions only for the liver, spleen and kidneys for a subset of patients. I set my goals small, because dealing with a lot of data was likely to cause me to give up in the process.</p>
<h2><strong>Context</strong></h2>
<ul>
<li>Business context: <a href="https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection/overview" target="_blank">https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection/overview</a></li>
<li>Data context: <a href="https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection/data" target="_blank">https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection/data</a></li>
</ul>
<h2><strong>Overview of the Approach</strong></h2>
<p>Twenty input images were selected per series_id. From the selected images, the nearest neighbors model selected images that showed the target organs, excluding images that only showed lungs, legs, etc. The remaining images were used to predict damage to the spleen, liver, and kidneys, and so on.</p>
<h2><strong>Details of the submission</strong></h2>
<p>First, using the <a href="https://www.kaggle.com/code/parhammostame/construct-3d-arrays-from-dcm-nii-3-view-angles?scriptVersionId=138964121" target="_blank">notebook [1]</a> method by <a href="https://www.kaggle.com/parhammostame" target="_blank">Parham Mostame</a>, 20 images (256*256 PNG images) were selected per series_id. In this process, unusual DICOM files were standardized according to the <a href="https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection/discussion/427217" target="_blank">post [2]</a> by <a href="https://www.kaggle.com/huiminglin" target="_blank">Hui Ming Lin</a>. <br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5800072%2F1afcba921294d620d637fa28cb1c681c%2Fcor.png?generation=1699094504090575&alt=media" alt=""><br>
Because some of the selected images contained mainly non-target-related items, such as lungs and legs, only the necessary images were further selected. For this, 206 sets of data with segmentation information were used to predict images containing the liver, images containing the spleen, images containing the liver and spleen, and so on. The model was from the <a href="https://www.kaggle.com/competitions/digit-recognizer" target="_blank">"Digit Recognizer" Competition</a> notebook [<a href="https://www.kaggle.com/code/t0m0ff3l/top-score-using-nearest-neighbours?scriptVersionId=12826678" target="_blank">3</a>, <a href="https://www.kaggle.com/code/shadabhussain/digit-recognition-using-nearest-neighbors" target="_blank">4</a>] and used nearest neighbors learning. It runs fast, can be used without a GPU [<a href="https://www.kaggle.com/code/hiroshisakiyama/top-score-using-nearest-neighbours-without-gpu?scriptVersionId=146578718" target="_blank">5</a>], and seems to have worked somewhat well. <br>
Then, about one-third of the data was used to make predictions for each organ. The aforementioned nearest neighbors model was used, but the plan was to change it later to a better model. For the liver, it appeared to work somewhat well, and for the spleen, some of the results seemed to work well. The kidneys were not so good, and the bowels were not good at all. This time, I stopped here and only made final predictions for the liver, spleen, and kidneys for limited patients. Where no prediction was made, the average value was used.<br>
In the private score results, the score was better when only the liver and spleen were predicted, without including the kidney prediction. (Note: LS means liver and spleen; LSK means liver, spleen and kidneys.)<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5800072%2F061e1fe83f5391b7e5763477f98fa576%2FRSNAscore.png?generation=1699094621201857&alt=media" alt=""><br>
Next time, I would like to learn more and work with a better model appropriately.</p>
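<p>For illustration, the nearest-neighbours image-selection step described above could be sketched as follows; <code>train_imgs</code>, <code>train_labels</code> and <code>test_imgs</code> are placeholder arrays of flattened 256x256 slices, not the actual pipeline.</p>
<pre><code>from sklearn.neighbors import KNeighborsClassifier

def fit_organ_presence_knn(train_imgs, train_labels, n_neighbors=5):
    """Classify whether a slice shows a target organ, using the segmented series as examples."""
    knn = KNeighborsClassifier(n_neighbors=n_neighbors)
    knn.fit(train_imgs.reshape(len(train_imgs), -1), train_labels)
    return knn

# keep only slices predicted to contain at least one target organ, e.g.:
# keep_mask = knn.predict(test_imgs.reshape(len(test_imgs), -1)) != "no_organ"
</code></pre>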
<h2><strong>Sources</strong></h2>
<p>[1] <a href="https://www.kaggle.com/parhammostame" target="_blank">Parham Mostame</a>, <a href="https://www.kaggle.com/code/parhammostame/construct-3d-arrays-from-dcm-nii-3-view-angles?scriptVersionId=138964121" target="_blank">Construct 3D arrays from DCM/NII (+ 3 view angles)</a>.<br>
[2] <a href="https://www.kaggle.com/huiminglin" target="_blank">Hui Ming Lin</a>, <a href="https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection/discussion/427217" target="_blank">Standardizing Unusual Dicoms</a>.<br>
[3] <a href="https://www.kaggle.com/t0m0ff3l" target="_blank">Hendrik</a>, <a href="https://www.kaggle.com/code/t0m0ff3l/top-score-using-nearest-neighbours?scriptVersionId=12826678" target="_blank">Top Score using Nearest Neighbours</a>.<br>
[4] <a href="https://www.kaggle.com/shadabhussain" target="_blank">Shadab Hussain</a>, <a href="https://www.kaggle.com/code/shadabhussain/digit-recognition-using-nearest-neighbors" target="_blank">Digit Recognition using Nearest Neighbors</a>.<br>
[5] <a href="https://www.kaggle.com/hiroshisakiyama" target="_blank">Hiroshi Sakiyama</a>, <a href="https://www.kaggle.com/code/hiroshisakiyama/top-score-using-nearest-neighbours-without-gpu?scriptVersionId=146578718" target="_blank">Top Score using Nearest Neighbours without GPU</a>.</p> |
RSNA 2023 Abdominal Trauma Detection | 63rd Place Solution for the RSNA 2023 Abdominal Trauma Detection | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA 2023 Abdominal Trauma Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I created my solution following the advice of the <a href="https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection/discussion/435053#2425570" target="_blank">discussion</a>. My solution is not the solution of the top winners, but I hope it will be of some help. When participating in this competition, I referred to various public notebooks and data. I would like to express my appreciation to everyone who was willing to post.</p>
<h2>Overall solution</h2>
<ul>
<li><p>Trauma detection for all organs is performed using a simple single-stage architecture. Then, detection for the three organs is performed using a multi-stage architecture, and finally ensembled.</p></li>
<li><p>single-stage architecture</p>
<ul>
<li>using <a href="https://arxiv.org/ftp/arxiv/papers/2002/2002.04752.pdf" target="_blank">CT-Net</a></li></ul></li>
<li><p>multi-stage architecture</p>
<ul>
<li>for kidney+liver+spleen</li>
<li>stage1 : sub-volume prediction using EfficientNet-v1b0</li>
<li>stage2 : prediction using CT-Net</li></ul></li>
</ul>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3593902%2Fd6f113ee7831e5e087d7eeb7148640ca%2F1.PNG?generation=1698845265814490&alt=media" alt=""></p>
<p>I was doing trial and error right up until the deadline, so I ended up with a half-finished architecture configuration.<br>
With the single-stage architecture alone, the score was LB 0.61 without tuning.</p>
<h2>notebook</h2>
<ul>
<li><a href="https://www.kaggle.com/code/sho124/rsna-infer-notebook" target="_blank">inference notebook</a></li>
</ul>
<h2>Impression</h2>
<ul>
<li>I had no experience implementing recognition technology using 3D data. So, this competition was very challenging and educational for me. </li>
<li>My best solution was created during the two-day extension period. I think this two-day extension was controversial. But it was a precious two days where I was able to get the best score for me. As a result I was able to get my first medal.</li>
</ul> |
Kaggle - LLM Science Exam | 28th-place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kaggle - LLM Science Exam
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, thanks to competition organizers for hosting this competition. I would also like to congratulate everyone who made it through this tough competition to the end.</p>
<p>Unfortunately my solution overfits the public lb and I missed the gold medal, but I would like to share a summary of my solution here.</p>
<h2>Wikipedia data for retrieval</h2>
<p>As reported in <a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam/discussion/442595" target="_blank">this discussion</a>, there're cases in which some numbers or even paragraphs are missing from the wikipedia data, so I created whole wikipedia plain text from <a href="https://dumps.wikimedia.org/other/cirrussearch/" target="_blank">CirrusSearch dumped data</a>.</p>
<h2>Context retrieval</h2>
<p>I combined two types of retrieval methods.</p>
<h3>1. bi-gram tfidf inverted index</h3>
<p>For each wikipedia article, I created a 2-gram tfidf based inverted index. I referred <a href="https://github.com/GINK03/minimal-search-engine" target="_blank">this repository</a> to create the inverted index. Indexing all articles resulted in slow search speeds, so I indexed about half of the articles based on their creation date and popularity.</p>
<h4>1st step</h4>
<p>Using prompt and answer as query, extract some wikipedia articles with high similarity from the inverted index database. The example of retrieval notebook is <a href="https://www.kaggle.com/code/daisuketakahashi/wikipedia-bi-gram-inverted-index/notebook?scriptVersionId=146229927" target="_blank">here</a>.</p>
<h4>2nd step</h4>
<p>The extracted wikipedia articles are split into sentences, and the similarity with the query is calculated by tfidf to extract some sentences. The three sentences before and after the extracted sentence were added as the context.</p>
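<p>A minimal sketch of this second step, assuming <code>sentences</code> from the retrieved articles and a <code>query</code> built from the prompt plus answer options (vectorizer settings are illustrative), is:</p>
<pre><code>from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def top_sentences(query, sentences, k=5):
    """Return the k sentences most similar to the query under a tfidf representation."""
    n = len(sentences)
    vectorizer = TfidfVectorizer(ngram_range=(1, 2), stop_words="english")
    matrix = vectorizer.fit_transform(sentences + [query])
    scores = cosine_similarity(matrix[n], matrix[:n]).ravel()
    return [sentences[i] for i in scores.argsort()[::-1][:k]]
</code></pre>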
<h3>2. faiss</h3>
<p>The full text of wikipedia was split into sentences so that each chunk is about 25 words. Each chunk was vectorized by <a href="https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2" target="_blank">sentence-transformers/all-MiniLM-L6-v2</a> and a faiss index was created. The faiss index was quantized and compressed to about 7 GB.</p>
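<p>A hedged sketch of building such a quantized index is shown below; the nlist/m/nbits values and the file name are illustrative, not the settings actually used to reach roughly 7 GB.</p>
<pre><code>import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

def build_wiki_index(chunks, index_path="wiki_minilm_ivfpq.index"):
    """chunks: list of ~25-word sentence chunks from the wikipedia dump (placeholder)."""
    model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
    embeddings = model.encode(chunks, normalize_embeddings=True).astype(np.float32)
    d = embeddings.shape[1]                              # 384 for all-MiniLM-L6-v2
    quantizer = faiss.IndexFlatIP(d)
    index = faiss.IndexIVFPQ(quantizer, d, 4096, 48, 8)  # nlist=4096, m=48 sub-vectors, 8 bits
    index.train(embeddings)
    index.add(embeddings)
    faiss.write_index(index, index_path)
    return index
</code></pre>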
<h4>1st step</h4>
<p>Using prompt and answer as query, extract some chunks with high similarity from the faiss index.</p>
<h4>2nd step</h4>
<p>The extracted chunks and the chunks before and after were used as context.</p>
<h2>Model</h2>
<p>Almost the same as a baseline notebook.<br>
I trained two deberta-v3-large models with different training datasets.<br>
The results were inferred using the following 7 contexts and ensembled by simply averaging each output.</p>
<ul>
<li>two contexts from bi-gram tfidf inverted index</li>
<li>five contexts from faiss index</li>
</ul> |
CommonLit - Evaluate Student Summaries | 2nd Place Efficiency Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit - Evaluate Student Summaries
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you to the sponsors and admins! Absolutely love the impact that these education-focused competitions will have down the road so please keep them coming - these are awesome initiatives.</p>
<p>Thank you to <a href="https://www.kaggle.com/kononenko" target="_blank">@kononenko</a> and <a href="https://www.kaggle.com/andreippv" target="_blank">@andreippv</a> for providing support to the competition's community to help overcome those license compliancy issues.</p>
<p>And finally thank you to the heavyweight grandmasters both old and new who didn’t enter the efficiency track. Not sure I would be writing this if they had!</p>
<h1>2ND PLACE SOLUTION</h1>
<p>With most competitors focusing on (and clearly succeeding in) pushing the boundaries of transformers, I decided from the onset that I would focus solely on the efficiency track. Not sure if I loved the efficiency leaderboard delay but for some reason it made things a bit more exciting lol - maybe that's just me though.</p>
<p>Anyway after exhausting readability metrics and prompt/summary overlaps, I turned to exploiting missteps in English writing (e.g. don’t start sentences with subordinating conjunctions, don’t use first-person pronoun I in this context, etc.) and also the human nature of the scorers (e.g. focus on early sections of summary and the lasting first impressions they make, bias towards penalizing for spelling errors). I felt that by doing so I could hone in on the few elements (i.e. scorers and their rubrics/scoring instructions) that were relatively consistent across all prompts no matter the grade or subject - maybe this helped stabilize things when facing the private data.</p>
<p>Below are some of my solution’s specs:</p>
<ul>
<li>Catboost</li>
<li>CV/PUBLIC/PRIVATE - 0.52/0.452/0.504<ul>
<li>Used GroupKFold with five different seeds for validation - due to the nature of the public LB I often found myself using it as another fold and re-calculating CV when I was on the fence about a feature. </li></ul></li>
<li>Hyperparameters by way of Optuna: <ul>
<li>content_params = {'grow_policy': 'Depthwise', 'iterations': 1490, 'min_data_in_leaf': 93, 'learning_rate': 0.01, 'l2_leaf_reg': 2.0484264500233307, 'colsample_bylevel': 0.42, 'subsample': 0.42, 'max_depth': 6}</li>
<li>wording_params = {'grow_policy': 'Depthwise', 'iterations': 1230, 'min_data_in_leaf': 7, 'learning_rate': 0.01, 'l2_leaf_reg': 8.403269352596215, 'colsample_bylevel': 0.75, 'subsample': 0.4, 'max_depth': 6}</li></ul></li>
<li>Features:<ul>
<li>% of unique words that were misspelled - found that just the count didn’t work while % worked very well</li>
<li># of quotes in the summary that were found in the prompt</li>
<li>% of prompt words found in summary</li>
<li># of bigrams/trigrams that intersect between prompt and summary, also represented these values as a % of the total summary bigrams/trigrams</li>
<li>% of stop words in summary</li>
<li># of periods that aren’t followed by a space and capital letter</li>
<li># of commas that aren’t followed by a space and a letter</li>
<li># of commas that are preceeded by a space or not typed directly beside a character</li>
<li># of cardinal numbers + # of existential there + # of co-ordinating conjunctions <- for this I replaced nltk pos_tag with just a list of words/numbers that are captured by pos_tag</li>
<li># of uppercase characters</li>
<li>Mean words in each sentence, mean unique words in each sentence, # of sentences, syntax count, # of “and”s, # of characters, # of words, # of difficult words</li>
<li>% of words in summary that are all uppercase</li>
<li># of unique verbs and # of unique nouns found in the second sentence of the summary</li>
<li># of unique word meanings in the summary</li>
<li>Index of first period, binary flag if summary ends with a period</li>
<li># of “ i “ or “ I “ - to help indicate if writer used first-person</li>
<li># of sentences starting with a subordinating conjunction</li>
<li>Pretty much every readability metric out there except the McAlpine EFLAW score, szigriszt-pazos is the only non-english language metric used</li>
<li>Difference between Dale-Chall readability scores of prompt and summary, also did this with Spache readability</li>
<li>Difference between # of stop words found in summary and mean number of stop words across all summaries for the respective prompt_id</li></ul></li>
<li>Speed-ups (i.e. vrooms):<ul>
<li>Catboost for faster inference! Much slower to train, but for inference it's great. It was always considerably faster than my LightGBM models, but at the same time slightly worse in terms of CV/PUBLIC LB.</li>
<li>In almost all cases I used regex patterns instead of word_tokenize and sent_tokenize from nltk - found that they really slowed things down so I avoided them at all costs.</li>
<li>Used simple list instead of pos_tag from nltk when possible, also used simple list instead of importing stopwords</li>
<li>Used numpy vectorize as much as possible (groupby features were done with pandas data frame)</li>
<li>Also tried libraries like daal4py, treelite, and lleaves for faster inference but the compile time didn’t seem to be worth the inference speed-up so I just ended up loading my models normally.</li></ul></li>
</ul>
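<p>As noted in the speed-ups above, regex patterns replaced the nltk tokenizers. A rough sketch of that style of feature extraction (the patterns are illustrative, not the exact ones used) is:</p>
<pre><code>import re
import numpy as np

WORD_RE = re.compile(r"[A-Za-z']+")
SENT_RE = re.compile(r"[.!?]+\s+")
BAD_PERIOD_RE = re.compile(r"\.(?!\s*[A-Z])(?!\d)")   # period not followed by space + capital
BAD_COMMA_RE = re.compile(r",(?!\s[A-Za-z])")         # comma not followed by space + letter

def fast_features(text):
    words = WORD_RE.findall(text)
    return {
        "n_words": len(words),
        "n_sentences": len(SENT_RE.split(text)),
        "n_bad_periods": len(BAD_PERIOD_RE.findall(text)),
        "n_bad_commas": len(BAD_COMMA_RE.findall(text)),
        "pct_upper": float(np.mean([w.isupper() for w in words])) if words else 0.0,
    }

# e.g. df["n_bad_commas"] = df["text"].map(lambda t: fast_features(t)["n_bad_commas"])
</code></pre>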
<p><strong>Inference Notebook:</strong> <a href="https://www.kaggle.com/code/kpfmma/efficiency-2nd-place-inference-catboost" target="_blank">https://www.kaggle.com/code/kpfmma/efficiency-2nd-place-inference-catboost</a></p> |
ICR - Identifying Age-Related Conditions | 322 place solution for the "ICR - Identifying Age-Related Conditions" competition | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Really didn't think I would make it in the Top 6% with such a simple solution. </p>
<h1>Context section</h1>
<ul>
<li>Business context: <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions" target="_blank">https://www.kaggle.com/competitions/icr-identify-age-related-conditions</a></li>
<li>Data context: <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/data" target="_blank">https://www.kaggle.com/competitions/icr-identify-age-related-conditions/data</a></li>
</ul>
<h1>Overview of the Approach</h1>
<p>My main concern was that the dataset was imbalanced and the final model might be biased towards the label with more instances. So my main focus was to keep an eye on recall and to use ensemble models to get good results.</p>
<h1>Details of the submission</h1>
<h2>Data Preprocessing</h2>
<p>I have done basic data preprocessing like using standardization for numerical features and label encoding for categorical features. I did not try to impute the null values or drop the outliers since I did not have any information about the features as they were anonymized.</p>
<ul>
<li>Used label encoding for categorical features</li>
<li>Used standarization for numerical features</li>
<li>Dropped nan values and kept outliers</li>
</ul>
<h2>Model Selection</h2>
<p>I wanted to go with an ensemble model trained on 5 folds, so I selected the XGBoost model as it has a large number of hyperparameters to optimize, which gave me a much better chance of getting good results.</p>
<ul>
<li>Used XGBoost</li>
<li>Used 5 folds of data</li>
</ul>
<h2>Hyperparameter Optimization</h2>
<p>I decided to go for bayesian optimization using Optuna since I had a large number of parameters to search through. My exact hyperparameter grid search is as follows:</p>
<pre><code> `params = {
: trial.suggest_(, 50, 300),
: trial.suggest_(, 2, 25),
: trial.suggest_log(, 0.001, 0.5),
: trial.suggest_log(, 0.1, 1),
: trial.suggest_log(, 0.5, 1),
: trial.suggest_log(, 0.001, 1),
: trial.suggest_log(, 0.001, 10),
: trial.suggest_log(, 0.001, 10.0),
: trial.suggest_log(, 0.001, 10.0),
: trial.suggest_log(, 0.001, 10), }`
</code></pre>
<ul>
<li>Used optuna optimization</li>
</ul>
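<p>A minimal sketch of how a search space like the one above can be wrapped in an Optuna objective with 5-fold XGBoost is shown below; the parameter names, ranges and data variables (<code>X</code>, <code>y</code>) are assumptions, not the tuned configuration.</p>
<pre><code>import numpy as np
import optuna
import xgboost as xgb
from sklearn.metrics import log_loss
from sklearn.model_selection import StratifiedKFold

def objective(trial, X, y):
    params = {
        "n_estimators": trial.suggest_int("n_estimators", 50, 300),
        "max_depth": trial.suggest_int("max_depth", 2, 25),
        "learning_rate": trial.suggest_float("learning_rate", 0.001, 0.5, log=True),
        "subsample": trial.suggest_float("subsample", 0.5, 1.0),
    }
    scores = []
    skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
    for tr_idx, va_idx in skf.split(X, y):
        model = xgb.XGBClassifier(**params, eval_metric="logloss")
        model.fit(X.iloc[tr_idx], y.iloc[tr_idx])
        preds = model.predict_proba(X.iloc[va_idx])[:, 1]
        scores.append(log_loss(y.iloc[va_idx], preds))
    return float(np.mean(scores))

# study = optuna.create_study(direction="minimize")
# study.optimize(lambda t: objective(t, X, y), n_trials=100)
</code></pre>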
<h2>Final Results</h2>
<p>After getting the best parameters, I used them to train 5 different models on the 5 folds of data. I selected the best model from those folds on the basis of F1-score and used it to generate the predictions.</p>
<ul>
<li>Selected best model for 5 folds on basis of F1-score</li>
</ul>
<h2>What didn't work</h2>
<p>I tried to use balanced class weights for different models since the dataset was imbalanced, but I didn't get any good results with that.</p>
<h1>Sources</h1>
<p>I learned about class weights from <a href="https://www.kaggle.com/code/gusthema/identifying-age-related-conditions-w-tfdf" target="_blank">gusthema's notebook</a>.</p> |
RSNA 2023 Abdominal Trauma Detection | 9th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA 2023 Abdominal Trauma Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, I would like to express my gratitude to the Kaggle staff for hosting this fantastic competition, as well as to the organizers at RSNA. I also want to extend my appreciation to all the hardworking participants who dedicated themselves to this competition. Special thanks go to my teammate <a href="https://www.kaggle.com/liushuzhi" target="_blank">@liushuzhi</a> for his tireless efforts and insightful discussions.</p>
<h1><strong>Overview</strong></h1>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3285259%2F81780865e35219e456c375ea284989bd%2FScreenshot%202023-10-16%20at%2017.23.32.png?generation=1697444632841544&alt=media"></p>
<h1><strong>Details of Each Model</strong></h1>
<h3><strong>3D Segmentation</strong></h3>
<ul>
<li>Used the implementation from <a href="https://www.kaggle.com/haqishen" target="_blank">@haqishen</a>'s top solution of the previous competition. [<a href="https://www.kaggle.com/competitions/rsna-2022-cervical-spine-fracture-detection/discussion/362607" target="_blank">1</a>] (Thank you!)</li>
<li>Input size: (128, 128, 128)</li>
<li>Models used: ResNet18d, UNet</li>
</ul>
<h3>Classification - <strong>Liver/Kidney/Spleen</strong></h3>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3285259%2F2a1524549d583462f54ab94a9804204d%2FScreenshot%202023-10-19%20at%2020.36.04.png?generation=1697715592043327&alt=media"></p>
<ul>
<li>Developed a single model to classify all three organs instead of individual models for each organ due to better performance.</li>
<li>For patients with 2 series, only used the one with the lower aortic_hu for training. For patients with 1 series, used all of them. This decision was based on experimentation rather than a specific rationale.</li>
<li>Performed cuboid crop using the 3D segmentation masks, slightly expanding the cuboid to preserve edges.</li>
<li>Weighted the loss during training according to the competition metrics [1, 2, 4].</li>
<li>Model architecture: SE-ResNeXt → LSTM → Concatenation of [1D CNN, Attention]</li>
<li>Input size: (96, 4, 256, 256)<ul>
<li>Channels are (i-1, i, i+1, mask), with 'i' representing the index of images along the z-axis.</li></ul></li>
</ul>
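<p>A simplified sketch of the SE-ResNeXt -> LSTM architecture described above is shown below; the backbone variant, hidden sizes, attention-only head and 9-way output (3 severity classes x 3 organs) are assumptions for illustration.</p>
<pre><code>import timm
import torch
import torch.nn as nn

class Slice2p5DModel(nn.Module):
    """2D backbone over each 4-channel slice, then an LSTM + attention over the sequence."""

    def __init__(self, n_classes=9):
        super().__init__()
        self.backbone = timm.create_model("seresnext26d_32x4d", pretrained=True,
                                          in_chans=4, num_classes=0)
        feat_dim = self.backbone.num_features
        self.lstm = nn.LSTM(feat_dim, 256, batch_first=True, bidirectional=True)
        self.attn = nn.Sequential(nn.Linear(512, 1), nn.Softmax(dim=1))
        self.head = nn.Linear(512, n_classes)

    def forward(self, x):                                       # x: (B, T, 4, H, W)
        b, t = x.shape[:2]
        feats = self.backbone(x.flatten(0, 1)).view(b, t, -1)   # (B, T, F)
        seq, _ = self.lstm(feats)                               # (B, T, 512)
        pooled = (seq * self.attn(seq)).sum(dim=1)              # attention-weighted pooling
        return self.head(pooled)
</code></pre>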
<h3>Classification - <strong>Bowel</strong></h3>
<p>Two-stage training was employed to leverage image-level labels.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3285259%2F55260f9b200a2d93f7a51ebcda070f59%2FScreenshot%202023-10-19%20at%2020.36.18.png?generation=1697715686083231&alt=media"></p>
<p><strong>1st Stage</strong></p>
<ul>
<li>Binary classification with a 2D model using image_level_label.csv.</li>
<li>Similar to liver/kidney/spleen model, cuboid crop was done using 3D segmentation masks to create 4-channel input data. Thus, input size was (4, 384, 384).</li>
<li>Sampled data randomly to achieve a 1:10 ratio between positive and negative.</li>
<li>The loss is weighted based on the competition's metrics.</li>
<li>SE-ResNeXt was used for the model.</li>
</ul>
<p><strong>2nd Stage</strong></p>
<ul>
<li>Aggregated features obtained from the model trained in the 1st stage.</li>
<li>Model architecture: SE-ResNeXt → LSTM</li>
<li>Weights from the 1st stage were loaded and kept frozen, updating only the network beyond the LSTM during training.</li>
</ul>
<h3>Classification - <strong>Extravasation</strong></h3>
<p>Similar to the approach for Bowel, a two-stage training strategy was used to leverage image-level labels. The model and aggregation methods were the same, but there were some differences.</p>
<ul>
<li>Cuboid crop was not performed.</li>
<li>Since anomalies were small, its resolution was increased to (96, 3, 512, 512).</li>
<li>Used data with higher aortic_hu for patients with 2 series to highlight features.</li>
</ul>
<h1><strong>Post-Processing</strong></h1>
<p>A post-processing step was introduced to improve the optimization of any_injury. The only post-processing step we performed was simply multiplying the predictions by coefficients.</p>
<p>Coefficients were searched and applied based on OOF prediction values. Although the original plan was to create a stacking model for this purpose, it was simplified due to time constraints.</p>
<pre><code># Coefficient values were found via the OOF search described above; the exact
# numbers were stripped from the original post, hence the placeholders.
low_coef = ...
high_coef = ...
ev_coef = ...
bowel_coef = ...

# Target column names assumed from the standard submission format.
df_pred["kidney_low"] *= low_coef
df_pred["kidney_high"] *= high_coef
df_pred["liver_low"] *= low_coef
df_pred["liver_high"] *= high_coef
df_pred["spleen_low"] *= low_coef
df_pred["spleen_high"] *= high_coef
df_pred["extravasation_injury"] *= ev_coef
df_pred["bowel_injury"] *= bowel_coef
</code></pre>
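<p>A minimal sketch of how such coefficients can be searched on the OOF predictions is shown below; <code>oof_preds</code>, <code>oof_targets</code> and <code>competition_metric</code> are assumed helpers, not part of the original code.</p>
<pre><code>import numpy as np
from scipy.optimize import minimize

def objective(coefs):
    low_coef, high_coef, ev_coef, bowel_coef = coefs
    preds = oof_preds.copy()
    for organ in ["kidney", "liver", "spleen"]:
        preds[f"{organ}_low"] *= low_coef
        preds[f"{organ}_high"] *= high_coef
    preds["extravasation_injury"] *= ev_coef
    preds["bowel_injury"] *= bowel_coef
    return competition_metric(oof_targets, preds)

result = minimize(objective, x0=np.ones(4), method="Nelder-Mead")
low_coef, high_coef, ev_coef, bowel_coef = result.x
</code></pre>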
<h1><strong>CV</strong></h1>
<table>
<thead>
<tr>
<th></th>
<th>bowel</th>
<th>ev</th>
<th>kidney</th>
<th>liver</th>
<th>spleen</th>
<th>any_injury</th>
<th>mean</th>
</tr>
</thead>
<tbody>
<tr>
<td>w/o Post-Processing</td>
<td>0.1293</td>
<td>0.5348</td>
<td>0.3146</td>
<td>0.4192</td>
<td>0.4454</td>
<td>0.5533</td>
<td>0.3994</td>
</tr>
<tr>
<td>w/ Post-Processing</td>
<td>0.1293</td>
<td>0.5303</td>
<td>0.3141</td>
<td>0.4190</td>
<td>0.4485</td>
<td>0.4925</td>
<td>0.3889</td>
</tr>
</tbody>
</table>
<h1><strong>Source Code</strong></h1>
<table>
<thead>
<tr>
<th></th>
<th>URL</th>
</tr>
</thead>
<tbody>
<tr>
<td>Inference notebook</td>
<td><a href="https://www.kaggle.com/code/kapenon/rsna2023atd-9th-place-inference" target="_blank">https://www.kaggle.com/code/kapenon/rsna2023atd-9th-place-inference</a></td>
</tr>
<tr>
<td>Training code</td>
<td><a href="https://github.com/kapenon/rsna2023atd_9th_solution" target="_blank">https://github.com/kapenon/rsna2023atd_9th_solution</a></td>
</tr>
</tbody>
</table>
<h1>Acknowledgments</h1>
<p>We want to extend our thanks to the Kaggle staff and hosts who organized this fantastic competition, all the dedicated participants, and Rist Inc. for their support in providing computational resources.</p> |
CommonLit - Evaluate Student Summaries | 49th Place Solution [Single Model] | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit - Evaluate Student Summaries
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks a lot to the hosts and Kaggle for hosting this interesting competition. I personally had a great time working on this competition.</p>
<h2>Summary:</h2>
<p>My solution primarily relies on a single DeBERTa-v3-large model with a custom MeanPooling layer. I used all the text columns <code>text</code>, <code>prompt_text</code>, <code>prompt_question</code> and <code>prompt_title</code> for training. Trained with max_len 1500 and inferred with 2048.</p>
<h2>Input Processing:</h2>
<p>To prepare the input data, I added two special tokens, '[SUMMARY_START]' and '[SUMMARY_END]', before and after the summary text, respectively. Then, I appended all the other prompt data with a '[SEP]' token. The final input text structure looked like this:<br>
'[SUMMARY_START]text[SUMMARY_END][SEP]prompt_text[SEP]prompt_question[SEP]prompt_title[SEP]'.</p>
<h2>Training Details:</h2>
<p>The DeBERTa-v3-large model was trained with a maximum sequence length of 1500 using the input structure mentioned above. I also modified the MeanPooling layer to calculate the mean only between the '[SUMMARY_START]' and '[SUMMARY_END]' tokens.</p>
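<p>A minimal sketch of such a span-restricted mean pooling is shown below; it assumes the two markers were added to the tokenizer as special tokens with ids <code>start_id</code> and <code>end_id</code> (this is my reconstruction of the idea, not the author's exact code).</p>
<pre><code>import torch
import torch.nn as nn


class SummarySpanMeanPooling(nn.Module):
    """Mean-pool hidden states only between [SUMMARY_START] and [SUMMARY_END]."""

    def __init__(self, start_id, end_id):
        super().__init__()
        self.start_id = start_id   # tokenizer id of [SUMMARY_START]
        self.end_id = end_id       # tokenizer id of [SUMMARY_END]

    def forward(self, hidden_states, input_ids):
        # index of the first occurrence of each marker in every sample
        start_pos = (input_ids == self.start_id).float().argmax(dim=1)
        end_pos = (input_ids == self.end_id).float().argmax(dim=1)
        positions = torch.arange(input_ids.size(1), device=input_ids.device)
        span_mask = (positions.unsqueeze(0) > start_pos.unsqueeze(1)) & \
                    (positions.unsqueeze(0) < end_pos.unsqueeze(1))
        span_mask = span_mask.unsqueeze(-1).float()
        return (hidden_states * span_mask).sum(1) / span_mask.sum(1).clamp(min=1e-9)
</code></pre>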
<h2>CV - LB Details</h2>
<ul>
<li>CV - 0.474 Public LB - 0.43 Private LB 0.469</li>
</ul>
<h2>Inference Time:</h2>
<p>It's important to note that this single model has a significant inference time of approximately 8.5 hours.</p>
<h2>What Worked:</h2>
<ul>
<li>Inferencing on larger Max Len</li>
<li>Averaging Multiple checkpoints per epoch.</li>
</ul>
<h2>What Didn't Work:</h2>
<ul>
<li>Implementing a second-stage model (LGBM/XGBOOST).</li>
<li>Text cleaning.</li>
<li>Post-processing techniques, such as target smoothing.</li>
</ul>
<h2>Training Code</h2>
<p><a href="https://github.com/rohitsingh02/CommonLit-ESS" target="_blank">https://github.com/rohitsingh02/CommonLit-ESS</a></p>
<h2>Inference Kernel</h2>
<p><a href="https://www.kaggle.com/code/rohitsingh9990/commonlit-ensemble-new-v2?scriptVersionId=145818122" target="_blank">https://www.kaggle.com/code/rohitsingh9990/commonlit-ensemble-new-v2?scriptVersionId=145818122</a></p> |
CommonLit - Evaluate Student Summaries | 42nd place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit - Evaluate Student Summaries
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>This was my first Kaggle competition and I ended in 42nd position out of over 2000 teams.</p>
<ul>
<li>Training code: <a href="https://github.com/Lukasz-Cesarski/cless/blob/main/cless/run_sweep.py" target="_blank">cless/run_sweep.py</a> </li>
<li>Inference <a href="https://www.kaggle.com/code/luki493/lgbm-ensemble-iter-e6-open" target="_blank">code</a></li>
</ul>
<h2>Solution</h2>
<p>Final solution contains</p>
<ul>
<li>6 <a href="https://huggingface.co/docs/transformers/model_doc/deberta" target="_blank">Deberta</a> models (2 base and 4 large)</li>
<li>LGBM on top of every model </li>
<li>Late Fusion ensemble</li>
</ul>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2200033%2Feab6107c139b5f93f4f14b440ed33a7d%2Fsolution.png?generation=1697541256075838&alt=media" alt=""></p>
<h2>Results</h2>
<h3>Cross Validation:</h3>
<table>
<thead>
<tr>
<th></th>
<th>checkpoint_name</th>
<th>model_name</th>
<th>adds</th>
<th>pseudo</th>
<th>content_rmse</th>
<th>wording_rmse</th>
<th>mcrmse</th>
<th>lgbm_content_rmse</th>
<th>lgbm_wording_rmse</th>
<th>lgbm_mcrmse</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>lrg-add-nops-202310101531</td>
<td>large</td>
<td>True</td>
<td>False</td>
<td>0.436815</td>
<td>0.563804</td>
<td>0.500309</td>
<td>0.416913</td>
<td>0.547992</td>
<td>0.482452</td>
</tr>
<tr>
<td>2</td>
<td>lrg-add2-nops-202310071225</td>
<td>large</td>
<td>True</td>
<td>False</td>
<td>0.422034</td>
<td>0.557535</td>
<td>0.489785</td>
<td>0.422094</td>
<td>0.552657</td>
<td>0.487376</td>
</tr>
<tr>
<td>3</td>
<td>deberta-large-pseudo-20231004</td>
<td>large</td>
<td>False</td>
<td>True</td>
<td>0.447833</td>
<td>0.599299</td>
<td>0.523566</td>
<td>0.431393</td>
<td>0.559015</td>
<td>0.495204</td>
</tr>
<tr>
<td>4</td>
<td>cless-deberta-20230919-2131-ensamble</td>
<td>base</td>
<td>False</td>
<td>False</td>
<td>0.478477</td>
<td>0.6168</td>
<td>0.547638</td>
<td>0.436959</td>
<td>0.556711</td>
<td>0.496835</td>
</tr>
<tr>
<td>5</td>
<td>base-noadd-pseu-202310101211</td>
<td>base</td>
<td>False</td>
<td>True</td>
<td>0.450015</td>
<td>0.631051</td>
<td>0.540533</td>
<td>0.426784</td>
<td>0.570818</td>
<td>0.498801</td>
</tr>
<tr>
<td>6</td>
<td>lrg-add2-ps-frz8-202310061415</td>
<td>large</td>
<td>True</td>
<td>True</td>
<td>0.450208</td>
<td>0.574412</td>
<td>0.51231</td>
<td>0.439451</td>
<td>0.563622</td>
<td>0.501536</td>
</tr>
</tbody>
</table>
<p>Columns:</p>
<ul>
<li><strong>checkpoint_name</strong> - name of model checkpoint (4 folds inside) </li>
<li><strong>model_name</strong> - which model was used: <code>microsoft/deberta-v3-base</code> or <code>microsoft/deberta-v3-large</code></li>
<li><strong>adds</strong> - if <code>True</code> then <code>prompt_text</code> and <code>prompt_question</code> were both added to model as input</li>
<li><strong>pseudo</strong> - pretraining on <a href="https://www.kaggle.com/competitions/feedback-prize-english-language-learning/data" target="_blank">Feedback Prize - English Language Learning</a> data</li>
<li><strong>content_rmse</strong>, <strong>wording_rmse</strong>, <strong>mcrmse</strong> - metrics of raw Deberta Regressor model </li>
<li><strong>lgbm_content_rmse</strong>, <strong>lgbm_wording_rmse</strong>, <strong>lgbm_mcrmse</strong> - metrics of LGBM Regressor on top of every Deberta</li>
</ul>
<p><strong>Note</strong>: Models without <code>prompt_text</code> and <code>prompt_question</code> in the input (adds=False) have a much worse CV <code>mcrmse</code>. <br>
However, this is mitigated by the LGBM, which uses handcrafted text-mining features built from <code>prompt_text</code>, so <code>lgbm_mcrmse</code> <br>
is significantly improved for these models.</p>
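<p>A minimal sketch of the "LGBM on top of every Deberta" step is shown below; the handcrafted features and parameters are illustrative placeholders, the real pipeline used a richer feature set.</p>
<pre><code>import lightgbm as lgb
import pandas as pd

def handcrafted_features(df):
    feats = pd.DataFrame(index=df.index)
    feats["summary_len"] = df["text"].str.split().str.len()
    feats["word_overlap"] = [
        len(set(t.lower().split()) & set(p.lower().split()))
        for t, p in zip(df["text"], df["prompt_text"])
    ]
    return feats

# oof_preds holds the Deberta out-of-fold predictions for each sample
X = pd.concat([handcrafted_features(train_df),
               oof_preds[["content_pred", "wording_pred"]]], axis=1)

models = {}
for target in ["content", "wording"]:
    models[target] = lgb.LGBMRegressor(n_estimators=1000, learning_rate=0.02)
    models[target].fit(X, train_df[target])
</code></pre>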
<h3>Ensemble metrics</h3>
<table>
<thead>
<tr>
<th></th>
<th>metric</th>
<th>value</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>content_rmse</td>
<td>0.40780695722821314</td>
</tr>
<tr>
<td>2</td>
<td>wording_rmse</td>
<td>0.5389858871892348</td>
</tr>
<tr>
<td>3</td>
<td>mcrmse</td>
<td>0.47339642220872397</td>
</tr>
</tbody>
</table>
<h3>Leaderboard</h3>
<table>
<thead>
<tr>
<th></th>
<th>metric</th>
<th>value</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>mcrmse (public)</td>
<td>0.432</td>
</tr>
<tr>
<td>2</td>
<td>mcrmse (private)</td>
<td>0.468</td>
</tr>
</tbody>
</table>
<p>Kudos for the author of <a href="https://www.kaggle.com/code/tsunotsuno/updated-debertav3-lgbm-with-spell-autocorrect" target="_blank">this notebook</a> it helped me a lot.</p> |
Bengali.AI Speech Recognition | 11th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Speech Recognition
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi, all the fellow Kagglers! </p>
<p>I'd like to first express massive gratitude to Kaggle and Bengali.AI. This competition is my first competition and I have learnt a lot in the process. Besides, I want to thank many fellow competitors for sharing inspirational insights, and the authors of the repos we built on. Lastly, thanks to my teammates, particularly <a href="https://www.kaggle.com/rkxuan" target="_blank">@rkxuan</a> for spotting the repo that forms the basis of the punctuation model.</p>
<hr>
<h2>Model Architecture:</h2>
<ul>
<li><strong>ASR model:</strong> Wav2vec2 CTC model</li>
<li><strong>Ngram:</strong> Kenlm</li>
<li><strong>Punctuation Model:</strong> xlm-roberta-large</li>
</ul>
<h2>What Worked:</h2>
<ul>
<li>Fine-tuning on <a href="https://www.kaggle.com/umongsain" target="_blank">@umongsain</a> filtered Common Voice dataset. Public LB: <code>0.439 -> 0.428</code></li>
<li>Fine-tuning on Openslr 37 resulted in: Public LB <code>0.428 -> 0.414</code></li>
<li>Incorporating Oscar corpus into ngram</li>
<li>Data augmentation with <a href="https://github.com/asteroid-team/torch-audiomentations" target="_blank">audiomentations</a>.</li>
<li>xlm-roberta-large configuration from this <a href="https://github.com/xashru/punctuation-restoration" target="_blank">repo</a>.</li>
<li>Optuna search for decoding hyperparameters, as demonstrated in this <a href="https://www.kaggle.com/code/royalacecat/lb-0-442-the-best-decoding-parameters" target="_blank">notebook</a> (a minimal sketch follows this list).</li>
</ul>
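<p>A minimal sketch of the Optuna search over decoding hyperparameters is shown below; <code>decoder</code> is a pyctcdecode decoder built with the KenLM model, and <code>logits_list</code>/<code>references</code> come from a locally annotated validation set (all of these are assumptions, not the author's code).</p>
<pre><code>import optuna
from jiwer import wer

def objective(trial):
    alpha = trial.suggest_float("alpha", 0.1, 1.0)
    beta = trial.suggest_float("beta", 0.0, 1.0)
    beam_width = trial.suggest_int("beam_width", 64, 512)
    decoder.reset_params(alpha=alpha, beta=beta)
    preds = [decoder.decode(logits, beam_width=beam_width) for logits in logits_list]
    return wer(references, preds)

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=50)
print(study.best_params)
</code></pre>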
<h2>Challenges:</h2>
<h3>Datasets:</h3>
<ul>
<li><strong>Openslr 53:</strong> Voluminous and might've led to overfitting during training. It being crowdsourced could be a factor, especially when compared to the more refined Openslr 37.</li>
<li><strong>Fleurs:</strong> Plagued with inconsistent quality. A consistent sharp noise mars the dataset.</li>
<li><strong>Competition ds:</strong> Exhibits quality diversity.</li>
</ul>
<h3>Punctuation Restoration:</h3>
<ul>
<li>Difficulties in restoring five punctuation marks: <strong>!</strong>, <strong>,</strong>, <strong>?</strong>, <strong>।</strong>, and <strong>-</strong>.<ul>
<li>Notably, hyphens appear to be tricky. Maybe isolating them during training could help.</li></ul></li>
</ul>
<h3>Audio Augmentation:</h3>
<ul>
<li>Might've overdone it with background music (BGM). Modulating pitch could potentially be more effective.</li>
</ul>
<h3>Speech Enhancement:</h3>
<ul>
<li>Both FAIR Denoiser and Nvidia CleanUnet fell short of expectations. It's perplexing, but perhaps they inadvertently degraded human voice quality.</li>
</ul>
<h2>Pro-Tips:</h2>
<ul>
<li>Prefer ARPA over BIN. KenLM seems to lose unigrams after conversion to binary, and keeping the ARPA amplified the score by <code>0.007</code>. Mind the 13GB RAM constraint on Kaggle; I eventually settled on a trimmed 4-gram of approximately 12GB (see the sketch after this list).</li>
</ul>
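<p>A minimal sketch of building the decoder directly from the (trimmed) ARPA file is shown below; the vocabulary comes from an assumed <code>Wav2Vec2Processor</code> and the file path is a placeholder.</p>
<pre><code>from pyctcdecode import build_ctcdecoder

vocab = list(processor.tokenizer.get_vocab().keys())
decoder = build_ctcdecoder(
    labels=vocab,
    kenlm_model_path="bn_4gram_trimmed.arpa",  # pass the ARPA directly, not a .bin
)
</code></pre>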
<h2>Observations:</h2>
<ul>
<li>Local CV, grounded in annotated OOD examples, aligns well with the public LB. This might explain the negligible shake-up in the end.</li>
<li>Oscar primarily features formal <strong>articles</strong>. So the ngram, despite its size, might miss the niche vocabulary of the OOD test sets, especially given the high OOV rate observed.</li>
</ul>
<hr>
<p>Feel free to share your thoughts and experiences!</p> |
Bengali.AI Speech Recognition | 4th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Speech Recognition
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Many thanks to the organizers for this interesting competition. Speech recognition is a very interesting direction and I did learn a lot from the discussion and public code of the many contestants in this competition. The past three months have been stressful but rewarding. I will try to make my solution clear in my broken English</p>
<h2>Summary</h2>
<p>My solution is relatively simple: a wav2vec2 1b model is used as the pretrained checkpoint to train a Wav2Vec2ForCTC model. In the post-processing stage, a 6-gram language model trained with KenLM is used for decoding, followed by further post-processing of the output: normalization and adding the dari (।).</p>
<h2>Wav2Vec2ForCTC Training</h2>
<p>Specifically, I use <a href="https://huggingface.co/facebook/wav2vec2-xls-r-1b" target="_blank">facebook/wav2vec2-xls-r-1b</a> as the pretrained model. Training this model takes three stages, each with a different random seed but the same data augmentation and hyperparameters:</p>
<ul>
<li>Optimizer: AdamW (weight_decay: 0.05; betas: (0.9, 0.999))</li>
<li>Scheduler: Modified linear_warmup_cosine scheduler (init_lr: 1e-5; min_lr: 5e-6; warmup_start_lr: 1e-6; warmup_steps: 1000; max_epoch: 120; iters_per_epoch: 1000)</li>
</ul>
<pre><code>import math


def warmup_lr_schedule(optimizer, step, max_step, init_lr, max_lr):
    """Linearly ramp the LR from init_lr to max_lr during warmup."""
    lr = min(max_lr, init_lr + (max_lr - init_lr) * step / max(max_step, 1))
    for param_group in optimizer.param_groups:
        param_group["lr"] = lr


def cosine_lr_schedule(epoch, optimizer, max_epoch, init_lr, min_lr):
    """Cosine decay of the LR from init_lr to min_lr."""
    lr = (init_lr - min_lr) * 0.5 * (
        1.0 + math.cos(math.pi * epoch / max_epoch)
    ) + min_lr
    for param_group in optimizer.param_groups:
        param_group["lr"] = lr


class LinearWarmupCosineLRScheduler:
    # Reconstructed from the garbled original post; names follow the LAVIS
    # scheduler this was modified from. The exact segment boundaries were
    # stripped, so the 1/3 and 1/2 fractions below are plausible guesses.
    def __init__(self, optimizer, max_epoch, min_lr, init_lr,
                 warmup_steps=0, iters_per_epoch=1000, warmup_start_lr=-1):
        self.optimizer = optimizer
        self.max_epoch = max_epoch
        self.min_lr = min_lr
        self.init_lr = init_lr
        self.warmup_steps = warmup_steps
        self.iters_per_epoch = iters_per_epoch
        self.warmup_start_lr = warmup_start_lr if warmup_start_lr >= 0 else init_lr
        self.max_iters = max_epoch * iters_per_epoch

    def step(self, cur_epoch, cur_step):
        total_steps = cur_epoch * self.iters_per_epoch + cur_step
        if total_steps < self.warmup_steps:
            warmup_lr_schedule(
                step=cur_step,
                optimizer=self.optimizer,
                max_step=self.warmup_steps,
                init_lr=self.warmup_start_lr,
                max_lr=self.init_lr,
            )
        elif total_steps <= self.max_iters // 3:
            # cosine decay from init_lr to min_lr
            cosine_lr_schedule(
                epoch=total_steps,
                optimizer=self.optimizer,
                max_epoch=self.max_iters // 3,
                init_lr=self.init_lr,
                min_lr=self.min_lr,
            )
        elif total_steps <= self.max_iters // 2:
            # hold at min_lr
            cosine_lr_schedule(
                epoch=self.max_iters // 3,
                optimizer=self.optimizer,
                max_epoch=self.max_iters // 3,
                init_lr=self.init_lr,
                min_lr=self.min_lr,
            )
        else:
            # decay from min_lr down to 0 for the rest of training
            cosine_lr_schedule(
                epoch=total_steps - self.max_iters // 2,
                optimizer=self.optimizer,
                max_epoch=self.max_iters // 2,
                init_lr=self.min_lr,
                min_lr=0,
            )
</code></pre>
<ul>
<li>base DataAugmentation (denoted as <code>base_aug</code>): </li>
</ul>
<pre><code>from audiomentations import (AddBackgroundNoise, AddGaussianNoise, Compose, Gain,
                             OneOf, PitchShift, PolarityInversion, TimeStretch)


def get_base_aug(musan_dir=None):
    # Numeric parameters were stripped from the original post; the values
    # below are reasonable placeholders, not the author's exact settings.
    trans = Compose(
        [
            TimeStretch(min_rate=0.9, max_rate=1.1, p=0.3, leave_length_unchanged=False),
            Gain(min_gain_in_db=-6, max_gain_in_db=6, p=0.3),
            PitchShift(min_semitones=-4, max_semitones=4, p=0.3),
            OneOf(
                [
                    AddBackgroundNoise(sounds_path=musan_dir, min_snr_in_db=3, max_snr_in_db=30,
                                       noise_transform=PolarityInversion(), p=1.0),
                    AddGaussianNoise(min_amplitude=0.001, max_amplitude=0.015, p=1.0),
                ] if musan_dir else [
                    AddGaussianNoise(min_amplitude=0.001, max_amplitude=0.015, p=1.0),
                ],
                p=0.5,
            ),
        ]
    )
    return trans
</code></pre>
<ul>
<li><p>composite DataAugmentation (denoted as <code>comp_aug</code>). I use three comp_augs:</p>
<ol>
<li>split an audio wave evenly into 3 segments and perform base_aug on each segment;</li>
<li>randomly select two speeches from the dataset, perform base_aug on each of them separately, and then concatenate them together;</li>
<li>combine the above two data augmentation methods.</li></ol></li>
<li><p>dataset<br>
For Wav2Vec2ForCTC training, I didn't use external data; I filtered the competition dataset with the following steps:</p>
<ol>
<li>train a model based on <a href="https://huggingface.co/arijitx/wav2vec2-xls-r-300m-bengali" target="_blank">arijitx/wav2vec2-xls-r-300m-bengali</a> </li>
<li>use the model above to run inference on the whole dataset, sort all sample scores from small to large and retain the top 70% of the data (sketched right after this list)</li></ol></li>
</ul>
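<p>A rough sketch of this filtering step is shown below; <code>iterate_training_clips</code> and <code>transcribe</code> are assumed helpers wrapping the dataset and the 300m baseline model.</p>
<pre><code>import pandas as pd
from jiwer import wer

records = []
for clip_id, audio, text in iterate_training_clips():
    pred = transcribe(audio)                         # baseline model prediction
    records.append({"id": clip_id, "score": wer(text, pred)})

scores = pd.DataFrame(records).sort_values("score")  # small -> large
kept_ids = scores.head(int(len(scores) * 0.7))["id"].tolist()
</code></pre>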
<h2>KenLM training</h2>
<ul>
<li><p>dataset:<br>
I use IndicCorpv1 and IndicCorpv2 as the corpus. After cleaning, the two corpora are combined and deduplicated.</p></li>
<li><p>corpus cleaning:<br>
I clean each sentence in the corpus using the below code:</p></li>
</ul>
<pre><code>import re

from bnunicodenormalizer import Normalizer
from ftfy import fix_text

# The exact regex patterns and length threshold were stripped from the
# original post; the ones below are plausible reconstructions.
chars_to_ignore = re.compile(r'[,?.!\-;:"“”‘’%(){}\[\]।]')
long_space_to_ignore = re.compile(r"\s+")
bnorm = Normalizer()


def normalize(text):
    text = re.sub(chars_to_ignore, "", text)
    text = re.sub(long_space_to_ignore, " ", text).strip()
    return text


def clean_sentence(sentence):
    sentence = normalize(sentence)
    sentence = fix_text(sentence)
    words = sentence.split()
    try:
        all_words = [bnorm(word)["normalized"] for word in words]
        all_words = [_ for _ in all_words if _]
        if len(all_words) < 2:   # drop very short sentences (threshold assumed)
            return None
        return " ".join(all_words).strip()
    except TypeError:
        return None
</code></pre>
<ul>
<li>A 6-gram language model is trained (an example command is sketched below).</li>
</ul>
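<p>Building such an n-gram model with KenLM boils down to one command over the cleaned corpus; the sketch below uses assumed paths and pruning thresholds.</p>
<pre><code>import subprocess

# lmplz comes from a compiled KenLM install; paths and pruning values are placeholders.
subprocess.run(
    "lmplz -o 6 --prune 0 0 1 1 1 1 < cleaned_corpus.txt > bn_6gram.arpa",
    shell=True, check=True,
)
</code></pre>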
<h2>Tried but did not work</h2>
<ol>
<li>use external data, like openslr, shrutilipi</li>
<li>use deepfilternet to denoise the audio</li>
<li>use larger model (wav2vec2-xls-r-2b)</li>
<li>use whisper-small</li>
<li>use more corpus to train an language model (BanglaLM)</li>
<li>train a spelling error correction model</li>
<li>train the fourth stage model</li>
<li>…<br>
There's a lot more, off the top of my head</li>
</ol>
<h2>Update</h2>
<ul>
<li>I just found that the 7-gram KenLM model was chosen as the final submission version, not the 6-gram one, even though their scores are the same.</li>
<li>inference code is made public: <a href="https://www.kaggle.com/code/hanx2smile/4th-place-solution-inference-code" target="_blank">https://www.kaggle.com/code/hanx2smile/4th-place-solution-inference-code</a></li>
<li>training code is made public: <a href="https://github.com/HanxSmile/lavis-kaggle" target="_blank">https://github.com/HanxSmile/lavis-kaggle</a></li>
</ul> |
RSNA 2023 Abdominal Trauma Detection | 32nd place overview + mistakes | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA 2023 Abdominal Trauma Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I know that 32nd place is not necessarily the dream placement people aim for in future competitions, but it's always important to learn from mistakes and we've made a few, so I'll share our solution here and note, for my future self and others, how to improve next time.</p>
<h2>Overall solution</h2>
<p><strong>First stage: organ segmentation</strong><br>
In order to go from weak labels to strong labels it's important to be able to segment the organs of interest, so that's what we did like many other teams.</p>
<p>One potential mistake we made here was to keep the original labels of the annotated data and treat them as out-of-fold predictions for the next stages. Although it might seem a harmless decision, the fact that the annotated data all had injuries and that ground-truth segmentations could be distinguishable by second-stage models might have introduced a small data leak.</p>
<p><strong>Second stage: organ injury classification</strong><br>
Like many others we used our segmentation mask to crop region of interests and feed them to a 2.5D + LSTM models (one for each of the segmented organs: bowel, liver, spleen and kidney). But we decided to also give the segmentation mask to the model as input, this is where we did something different which probably was not necessary and might have introduce a leak in our pipeline (not the only problem though but a point of concern).</p>
<p>One thing we've done differently from most solutions I've read so far is to train one single model for all targets, but slicing along planes parallel to the z-axis: the CT scan is fully stacked to form a 3D image (Z, W, H) and then slices of data are generated by slicing across W and H. This allows the model to see all the organs within only a few frames (16 or 24), to train end to end with all organs, to augment training by switching the slicing between W and H, and to ensemble a very different approach.</p>
<p>With both approaches we noticed that our models were bad for extravasation, so we simply ignored it and never had time to come back to it before the end of the competition. Note to self: joining a competition 3 weeks before the end is short.</p>
<p><strong>Thrid stage: MLP for ensembling and competition metric optimization</strong></p>
<p>Since the goal of a Kaggle competition is to optimize a metric, it's important to have a final approach that tries to minimize that metric. So for each patient we took the predictions of our two approaches, stacked them, and stacked the predictions of the two series belonging to the same patient.<br>
Now we just train an MLP to minimize the official competition metric.<br>
Note that the final model is predicting extravasation class without any input information about it.</p>
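<p>A rough sketch of such a stacking MLP is shown below; the input layout, sizes and the <code>sample_weighted_log_loss</code> helper (a differentiable version of the official metric) are assumptions for illustration.</p>
<pre><code>import torch
import torch.nn as nn

class Stacker(nn.Module):
    def __init__(self, n_inputs, n_outputs=13):   # 13 = all submission target columns
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_inputs, 128), nn.ReLU(), nn.Dropout(0.2),
            nn.Linear(128, n_outputs),
        )

    def forward(self, x):
        return self.net(x)

model = Stacker(n_inputs=2 * 2 * 11)   # 2 approaches x 2 series x 11 organ targets (assumed)
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
for x, y, w in train_loader:           # stacked OOF predictions, labels, sample weights
    loss = sample_weighted_log_loss(model(x), y, w)
    optim.zero_grad(); loss.backward(); optim.step()
</code></pre>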
<p>Individually, both of our second-stage approaches reached about CV 0.42 - LB 0.52; when stacked together they reached CV 0.386 - LB 0.49.</p>
<h2>Big mistake</h2>
<p>When entering a competition you are always eager to know what your first solution is worth on the public LB, so you go for a quick and dirty inference notebook. For a basic image competition everything goes fine; for a 3-stage solution things can get ugly pretty fast.</p>
<p>You don't care too much about your crappy inference code when your LB score is improving, so you add new stuff until your score does not improve anymore and your CV - LB gap is the worst among all participants! 😭</p>
<p>In the end I've spent the last few days (+ the 2 extra days) trying to figure out where this huge CV - LB gap came from…</p>
<p>Score on fold 0 during CV 0.393 -> score on fold 0 out of kaggle inference notebook 0.418… Silent errors in machine learning are frequent so be careful!</p>
<p>Anyway it was a fun competition to participate in, congratulations to everyone and see you soon!</p> |
Kaggle - LLM Science Exam | 3rd place solution [Update + Code links] | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kaggle - LLM Science Exam
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, I want to thank the organizers for an amazing competition which provided a nice chance to dive deep into LLMs, and thanks to co-kagglers who shared their datasets and approaches. Specifically, thanks to <a href="https://www.kaggle.com/radek1" target="_blank">@radek1</a> and <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> for high quality datasets, which helped massively in training good question answering models. Thanks to <a href="https://www.kaggle.com/simjeg" target="_blank">@simjeg</a> for showing how to do the impossible and to <a href="https://www.kaggle.com/cpmpml" target="_blank">@cpmpml</a> for developing these ideas and adding Xwin model weights. Special thanks to <a href="https://www.kaggle.com/sugupoko" target="_blank">@sugupoko</a> for sharing the 70k dataset, which helped me tune the re-ranker model (the dataset contributed to the qa models as well) and in turn gave a huge boost to my scores. To contribute to knowledge sharing, I will publish my code, but it is going to take a few days.</p>
<p>Important note before going into details about the solution: I used Platypus2, but still not sure if it is allowed or not due to its license restrictions. But the main reason why I kept it in final submissions is that organizers stated that winning models are not used for any business purpose, which, I guess, implies that commercial usage restrictions in license should not be a problem.</p>
<p><strong>[Update]</strong> Turns out it was fine to use Platypus2, but surprisingly additional ablation studies showed that slightly better scores could have been achieved without it! More details are in a table below.</p>
<h1>Diagram of the solution</h1>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1210465%2F8f8a9a7095f0f75c61c687a8ff02eb62%2FSolution5.png?generation=1697013081108751&alt=media" alt=""></p>
<h1>Some notes about the solution</h1>
<h2>Wikipedia processing</h2>
<p>It starts with Wikipedia dump processing, because, as many noted before, available processed wikipedia data have numerous flaws from missing numbers to missing articles. I used <a href="https://github.com/attardi/wikiextractor" target="_blank">https://github.com/attardi/wikiextractor</a> and modified it to solve problem of deleted numerical values. Most likely some problems still remained, but some were solved.</p>
<p>Search for context is performed in one stage at the passage level, as opposed to searching for articles first and then for sentences. The reason is that in some cases a two-stage search just has no chance to find the necessary context. For example, in the training data I saw a question about the actions of a character from some play. But the title or beginning of the article about this play does not mention this character, so the encoding of the article does not contain the necessary information. Not sure how frequent such cases are, but probably not negligible.</p>
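<p>A minimal sketch of such single-stage passage-level retrieval is shown below; the encoder name and index type are illustrative (the actual pipeline combined two small bi-encoders, see the ablation table).</p>
<pre><code>import faiss
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
passage_emb = encoder.encode(passages, normalize_embeddings=True, batch_size=512)

index = faiss.IndexFlatIP(passage_emb.shape[1])
index.add(passage_emb)

query = prompt + " " + " ".join(options)
query_emb = encoder.encode([query], normalize_embeddings=True)
scores, ids = index.search(query_emb, 20)
context = "\n".join(passages[i] for i in ids[0])
</code></pre>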
<h2>Reranker training</h2>
<p>I saw some comments in other discussions that people tried to train reranker model, but without success. From what I understood in the process of tuning my reranker, there are two key ingredients here:</p>
<ol>
<li>Having candidates in pairs of (question, candidate), both positive and negative, from the same distribution as they appear during inference. In my case it was important to use wikipedia passages, that I got from wikipedia processing, and not the context from original 70k dataset.</li>
<li>Train with hard negatives. Though there is a caveat here: a raw model that was not pretrained for the reranking task will fail miserably on hard negatives. So it is important to take some pretrained one (ibm/re2g-reranker-nq in my case) or train it in two stages (a rough sketch follows this list).</li>
</ol>
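<p>A rough sketch of such reranker fine-tuning with hard negatives is shown below, framed with sentence-transformers' <code>CrossEncoder</code> for brevity; <code>training_pairs</code> (question, positive passage, hard negatives mined from the retrieval stage) is an assumed iterable, not the author's code.</p>
<pre><code>from sentence_transformers import CrossEncoder, InputExample
from torch.utils.data import DataLoader

model = CrossEncoder("ibm/re2g-reranker-nq", max_length=512)

examples = []
for question, positive, hard_negatives in training_pairs:
    examples.append(InputExample(texts=[question, positive], label=1.0))
    for negative in hard_negatives:        # top-ranked wrong passages
        examples.append(InputExample(texts=[question, negative], label=0.0))

loader = DataLoader(examples, shuffle=True, batch_size=16)
model.fit(train_dataloader=loader, epochs=1, warmup_steps=100)
</code></pre>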
<h2>Ablation studies</h2>
<p>It is interesting to see how each part of the solution contributes to the overall score, but I have not done thorough ablation studies because I assembled all parts together very late. But there are some hints from earlier submissions:</p>
<table>
<thead>
<tr>
<th>Pipeline configuration</th>
<th>Private</th>
<th>Public</th>
</tr>
</thead>
<tbody>
<tr>
<td>Inference only with 5 debertas (not final models), not tuned reranker</td>
<td>0.894</td>
<td>0.893</td>
</tr>
<tr>
<td>+answering 500 hard questions with Platypus</td>
<td>0.91</td>
<td>0.909</td>
</tr>
<tr>
<td>+replacing debertas with new ones and adding 3 electras and 5 robertas</td>
<td>0.912</td>
<td>0.914</td>
</tr>
<tr>
<td>+replacing reranker with tuned one</td>
<td>0.927</td>
<td>0.922</td>
</tr>
<tr>
<td>+adding Xwin to ensemble</td>
<td>0.928</td>
<td>0.926</td>
</tr>
</tbody>
</table>
<p>I think such results show that question answering part does its job very well if provided with correct context (boost from tuning reranker) and the weakest part of the solution is the initial retrieval. Which makes sense, since it is the part that is not tuned for competition data and also it consists of two very small models with very limited capacity, which are tasked to index huge dataset. Probably much higher score could have been achieved with tuned retrieval models.</p>
<p><strong>[Update]</strong><br>
Table with measurements of each part's contribution to the final score</p>
<table>
<thead>
<tr>
<th>Pipeline configuration</th>
<th>Private</th>
<th>Public</th>
</tr>
</thead>
<tbody>
<tr>
<td>Final submit</td>
<td>0.9284</td>
<td>0.9286</td>
</tr>
<tr>
<td>Without Xwin</td>
<td>0.9272</td>
<td>0.9257</td>
</tr>
<tr>
<td>Without Platypus2</td>
<td>0.9288</td>
<td>0.9309</td>
</tr>
<tr>
<td>Without both LLMs</td>
<td>0.9165</td>
<td>0.9201</td>
</tr>
<tr>
<td>Without bge-small in retrieval</td>
<td>0.9272</td>
<td>0.9261</td>
</tr>
<tr>
<td>Without MiniLM-L6-v2 in retrieval</td>
<td>0.9258</td>
<td>0.9259</td>
</tr>
<tr>
<td>Without Electra and Roberta in MLMs ensemble</td>
<td>0.9246</td>
<td>0.9268</td>
</tr>
<tr>
<td>Without reranker</td>
<td>0.9113</td>
<td>0.9130</td>
</tr>
</tbody>
</table>
<p><br><br>
Code:<br>
<a href="https://www.kaggle.com/code/podpall/3rd-place-reranker-training" target="_blank">Reranker training</a><br>
<a href="https://www.kaggle.com/code/podpall/3rd-place-full-inference" target="_blank">Full pipeline inference</a></p> |
RSNA 2023 Abdominal Trauma Detection | 8th Place Solution & Code | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA 2023 Abdominal Trauma Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congratulations to all of the winners and competitors. Thank you to Kaggle and the organizers for this interesting competition.</p>
<p>I joined relatively late after releasing my <a href="https://www.kaggle.com/datasets/vaillant/rsna-abdominal-trauma-extravasation-bounding-boxes" target="_blank">extravasation bounding box labels</a>. My solution was pretty similar to my prior solutions in RSNA cross-sectional imaging challenges (pulmonary embolism, cervical spine fracture). I treated the task as 3 separate subtasks: predicting solid organ injury (liver, kidney, spleen), bowel injury, and extravasation. All models were based on CNN-transformer "2.5D" models. </p>
<h2>Data</h2>
<p>I converted all DICOMs to 3-channel PNGs, where each channel was a separate CT window. I used 3 windows (soft tissue: WL=50, WW=400, liver: WL=90, WW=150, and angiography: WL=100, WW=700).</p>
<h2>Crop Model</h2>
<p>Using 3D connected components, I was able to generate a mask for each CT volume in order to train a model to eliminate the empty black space. This mask was converted into bounding box coordinates for each image, which allowed me to train a 2D CNN mobilenetv3_small_050 model on 256 x 256 images to predict the coordinates. For each CT volume, I took the union of the predicted bounding boxes for each individual slice and used this to crop each individual image.</p>
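<p>A minimal sketch of this coordinate-regression crop model is shown below; the head layout is my assumption.</p>
<pre><code>import timm
import torch.nn as nn

class CropModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = timm.create_model(
            "mobilenetv3_small_050", pretrained=True, num_classes=0)
        self.head = nn.Linear(self.backbone.num_features, 4)

    def forward(self, x):                        # x: (B, 3, 256, 256)
        # normalized (x_min, y_min, x_max, y_max) of the body region
        return self.head(self.backbone(x)).sigmoid()
</code></pre>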
<h2>Liver-Kidney-Spleen Organ Identification Model</h2>
<p>Using the segmentation output from TotalSegmentator, I trained another mobilenetv3_small_050 model on 256 x 256 images to predict presence of these organs on individual slices. </p>
<h2>Liver-Kidney-Spleen Injury Model</h2>
<p>With about a week left, I decided to label some slices with injuries to the liver, kidneys, and spleen so I could train a 2D model slice-wise model. I did not release the labels since the end of the competition was near, and I did not want to create any disruption. They are now available <a href="https://www.kaggle.com/datasets/vaillant/rsna-abd-trauma-organ-injury-slice-labels" target="_blank">here</a>.</p>
<p>I trained a ConvNeXt-tiny model on 2D slice labels with an additional linear layer to reduce the feature dimension to 256. This included the laterality of the kidney injury (i.e., left vs. right), though I am not sure how much this actually helped in the end. The model was trained on cropped images of size 288 x 384 from cropped volumes using the above models. This model was used to extract features from each slice; CT volumes were sampled to 128 images. Thus a CT series was converted to a sequence of shape 128 x 256. </p>
<p>A 3-layer transformer was trained on these sequences to predict the series-level label. A weighted binary cross-entropy loss was constructed to mimic the competition metric. The validation loss was rather unstable, so I also tracked the AUC to make sure the model was learning.</p>
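<p>A rough sketch of this second-stage transformer over the 128 x 256 slice-feature sequences is shown below; the label layout and loss weights are simplified stand-ins for the competition-metric weighting described above, not the exact implementation.</p>
<pre><code>import torch
import torch.nn as nn

class SeriesTransformer(nn.Module):
    def __init__(self, feat_dim=256, n_labels=9):
        super().__init__()
        layer = nn.TransformerEncoderLayer(
            d_model=feat_dim, nhead=8, dim_feedforward=1024,
            dropout=0.1, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=3)
        self.head = nn.Linear(feat_dim, n_labels)

    def forward(self, x):                  # x: (B, 128, 256)
        x = self.encoder(x)
        return self.head(x.mean(dim=1))    # series-level logits

# illustrative per-label weights mimicking the metric's low/high injury weights
criterion = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([2.0, 4.0] * 3 + [1.0] * 3))
</code></pre>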
<h2>Bowel Injury Model</h2>
<p>I used the provided slice-wise bowel injury labels to train CNN-transformer model in the same manner as above, except I only cropped the individual images to remove black space, not the CT volume since bowel is present on more slices. Images were resized to 384 x 512. </p>
<h2>Extravasation Model</h2>
<p>Using the bounding box labels I annotated, I generated 12 nonoverlapping patches of size 128 x 128 from images of size 384 x 512 and assigned each patch with a label of injury vs. healthy. I trained a ConvNeXt-tiny model on these patches and extracted features for each patch. Thus each image was converted into a sequence of shape 12 x 256. </p>
<p>I trained a 3-layer transformer using those sequences on slice-wise labels. I then used this transformer to extract features from each image of a resampled 128-slice volume, again resulting in a sequence of shape 128 x 256. This method improved performance over simply training a 2D CNN on whole images. A second-stage transformer was trained on these sequences to predict the series-level labels, using a weighted loss similar to the above.</p>
<h2>Inference</h2>
<p>5-fold ensemble of the above was used for final inference. For patient-level prediction, predictions were averaged across the series, if there were 2. Softmax activation function was applied to each label group's predictions, so all the probabilities were already normalized to 1. All probabilities were then scaled by taking the square root, which improved the private LB loss by 0.4. OOF CV was 0.375.</p>
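<p>A minimal sketch of that final scaling on the submission dataframe <code>sub</code> is shown below (column names follow the competition's submission format).</p>
<pre><code>import numpy as np
from scipy.special import softmax

groups = [
    ["bowel_healthy", "bowel_injury"],
    ["extravasation_healthy", "extravasation_injury"],
    ["kidney_healthy", "kidney_low", "kidney_high"],
    ["liver_healthy", "liver_low", "liver_high"],
    ["spleen_healthy", "spleen_low", "spleen_high"],
]
for cols in groups:
    sub[cols] = softmax(sub[cols].to_numpy(), axis=1)  # normalize within each label group
    sub[cols] = np.sqrt(sub[cols])                     # flatten over-confident predictions
</code></pre>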
<h2>Additional Thoughts</h2>
<p>I tried to incorporate 3D models, but they were taking too long to train and the performance was not as high. I was also interested in training segmentation models and training on cropped organs using the segmentation masks but did not have enough time. I tried training models on images with stacked slices as channels (i.e., each channel of the "image" was a separate slice), but this resulted in similar performance (slightly worse on LB). I tried training a single transformer on the concatenation of the features from the 3 types of models above, but this resulted in worse performance. Overall, I am happy to have won my 10th gold medal. </p>
<p>Inference Notebook: <a href="https://www.kaggle.com/code/vaillant/rsna-trauma-submission-v2-1" target="_blank">https://www.kaggle.com/code/vaillant/rsna-trauma-submission-v2-1</a></p>
<p>Source Code: <a href="https://www.kaggle.com/datasets/vaillant/rsna-trauma-src" target="_blank">https://www.kaggle.com/datasets/vaillant/rsna-trauma-src</a></p> |