Title of Competition | Title of Writeup | User | Writeup
---|---|---|---|
Bengali.AI Speech Recognition | 11th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Speech Recognition
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h1>11th Solution</h1>
<p>The solution consisted of a single Whisper medium model using a beam search decoder with a beam size of 4.</p>
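<p>As an illustration only (not the author's actual inference code), beam search with beam size 4 can be requested through the <code>generate</code> API in <code>transformers</code>; the checkpoint name, the 16 kHz input and the omitted language/task settings below are assumptions.</p>
<pre><code>import torch
from transformers import WhisperProcessor, WhisperForConditionalGeneration

processor = WhisperProcessor.from_pretrained("openai/whisper-medium")  # stand-in for the fine-tuned checkpoint
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-medium").eval()

def transcribe(waveform_16khz):
    inputs = processor(waveform_16khz, sampling_rate=16000, return_tensors="pt")
    with torch.no_grad():
        ids = model.generate(inputs.input_features, num_beams=4)  # beam size 4 as in the write-up
    return processor.batch_decode(ids, skip_special_tokens=True)[0]
</code></pre>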
<h6>Training</h6>
<ul>
<li>Like other solutions, the most important step is to use cleaned data. I used the following rules, based on the metadata shared by the host:</li>
</ul>
<pre><code># Numeric thresholds were elided from the original write-up; the uppercase
# names below are placeholders, not the author's actual values.
cond0 = train_df[
    (
        (train_df.ykg_wer < YKG_WER_MAX) | (train_df.ggl_wer < GGL_WER_MAX)
    ) & (
        (train_df.total_wer_by_client_ykg < CLIENT_WER_MAX_YKG) |
        (train_df.total_wer_by_client_ggl < CLIENT_WER_MAX_GGL)
    ) & (train_df.mos_pred > MOS_MIN)
]
</code></pre>
<ul>
<li><p>To speed up training, about 80% of the dataset was presented to the model as concatenations of two audios, while the remaining samples were shown as single audios to prevent hallucinations (a minimal sketch follows this list). This also helps to reduce the impact of possible bad annotations.</p></li>
<li><p>SpecAugment, SpecAugment++, CutOut were used.</p></li>
</ul>
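<p>A minimal sketch of that pairing idea (an illustration with assumed field names, not the author's data loader):</p>
<pre><code>import random
import numpy as np

def make_training_sample(dataset, pair_prob=0.8):
    # dataset: list of dicts with "audio" (1-D numpy array) and "text" fields (assumed names)
    first = random.choice(dataset)
    if random.random() < pair_prob:   # ~80% of samples are pairs of two audios
        second = random.choice(dataset)
        audio = np.concatenate([first["audio"], second["audio"]])
        text = first["text"] + " " + second["text"]
    else:                             # keep some single audios so the model does not expect a second utterance
        audio, text = first["audio"], first["text"]
    return {"audio": audio, "text": text}
</code></pre>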
<h6>Inference</h6>
<ul>
<li><p>For inference, the most important thing is to correctly handle audios longer than 30 seconds and sentences longer than 448 tokens. This improved the LB from 0.43 to 0.38.</p></li>
<li><p>Inference code: <a href="https://www.kaggle.com/code/themadrambito/11th-place-whisper-inference" target="_blank">https://www.kaggle.com/code/themadrambito/11th-place-whisper-inference</a></p></li>
</ul> |
Bengali.AI Speech Recognition | A low ranked solution!!!😅 | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Speech Recognition
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Summary: I used the "ai4bharat/indicwav2vec_v1_bengali" acoustic model and worked with the competition dataset. Since I don't have any physical GPU, I relied on Kaggle's free GPU. Training the model with the entire competition dataset was impossible for me, so I decided to focus on the 29k validation samples that have more accurate labels and randomly selected an equal number of training samples. I still couldn't train with these 58k samples, so I further reduced them to 30k randomly selected training samples and 2k validation samples. Before training, I also normalized the labels.</p>
<p>I experimented with various open-source acoustic models and found that "ai4bharat/indicwav2vec_v1_bengali" performed the best. I fine-tuned this model with my modified dataset on Kaggle. I saved checkpoints after 12 hours of training, then reloaded the checkpoint in another kernel, and repeated this process three times (this is not comparable to a continuous run, as the learning rate returns to its initial value each time). Unfortunately, due to hardware limitations, I couldn't make further progress without a personal offline GPU workstation. That's why I gave up on the idea of using augmentations, incorporating a punctuation model, and so on.</p>
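<p>A hedged sketch of that 12-hour "relay" training with a standard Hugging Face <code>Trainer</code> (assuming <code>model</code>, <code>train_ds</code> and <code>eval_ds</code> are already defined; the hyperparameters are assumptions): resuming from a checkpoint restores the optimizer and scheduler state, whereas restarting from the saved weights alone resets the learning-rate schedule, which is the effect described above.</p>
<pre><code>from transformers import TrainingArguments, Trainer

args = TrainingArguments(
    output_dir="/kaggle/working/ckpt",
    save_strategy="steps",
    save_steps=500,            # assumed value; save often enough to survive the session limit
    num_train_epochs=10,       # assumed value
    learning_rate=3e-5,        # assumed value
)
trainer = Trainer(model=model, args=args, train_dataset=train_ds, eval_dataset=eval_ds)

# First Kaggle session:
# trainer.train()
# Next session, after re-attaching the saved checkpoints as an input dataset:
# trainer.train(resume_from_checkpoint=True)   # restores optimizer/scheduler state too
</code></pre>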
<p>I noticed that many sentences ended with "|", so I added "|" at the end of all labels. But then I realized that Bengali interrogative sentences generally start with words like {'কি', 'কী', 'কেন', 'কিভাবে','কবে','কখন'}, so I used the "?" sign instead of "|" after all sentences starting with those question words. Additionally, I incorporated a 5-gram language model into my approach. So, that's it.</p> |
RSNA 2023 Abdominal Trauma Detection | 382nd Place Solution for the RSNA 2023 Abdominal Trauma Detection | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA 2023 Abdominal Trauma Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h1>1. Introduction</h1>
<p>It brings me joy to be a part of the RSNA 2023 Abdominal Trauma Detection. I would like to express my gratitude to the organizers, sponsors, and Kaggle staff for their efforts, and I wish all the participants the best. I gained a lot of knowledge from this competition and from the other participants.</p>
<p>I want to express my gratitude to MIRENA ANGELOVA for providing the public notebook <a href="https://www.kaggle.com/code/mirenaborisova/rsna-0-66-lb" target="_blank">https://www.kaggle.com/code/mirenaborisova/rsna-0-66-lb</a> </p>
<h1>2. Context</h1>
<ul>
<li>Business context: <a href="https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection/overview" target="_blank">https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection/overview</a></li>
<li>Data context: <a href="https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection/data" target="_blank">https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection/data</a></li>
</ul>
<h1>3. Overview of the approach</h1>
<p>The solution was a copy of the public notebook (Public/Private LB of 0.66708/0.67282), <br>
with the mean multiplier for extravasation injuries and any injuries changed to 27 (Public/Private LB of 0.66669/0.67180).</p>
<p>No data preprocessing was used.<br>
The algorithm employed is a simple mean (no model).</p>
<ul>
<li>Algorithm and Inference: <a href="https://www.kaggle.com/code/liudacheldieva/rsna-0-66-lb-aa7716?scriptVersionId=141544801" target="_blank">https://www.kaggle.com/code/liudacheldieva/rsna-0-66-lb-aa7716?scriptVersionId=141544801</a> <br>
(a copy of the code provided by MIRENA ANGELOVA in a public notebook: <a href="https://www.kaggle.com/code/mirenaborisova/rsna-0-66-lb?scriptVersionId=141091151" target="_blank">https://www.kaggle.com/code/mirenaborisova/rsna-0-66-lb?scriptVersionId=141091151</a>)<br>
No validation strategy was implemented.</li>
</ul>
<table>
<thead>
<tr>
<th>Target (filled with train-set mean)</th>
<th>Multiplier in public notebook</th>
<th>My multiplier</th>
</tr>
</thead>
<tbody>
<tr>
<td>bowel_healthy</td>
<td></td>
<td></td>
</tr>
<tr>
<td>bowel_<strong>injury</strong></td>
<td>4</td>
<td>-</td>
</tr>
<tr>
<td>extravasation_healthy</td>
<td></td>
<td></td>
</tr>
<tr>
<td>extravasation_<strong>injury</strong></td>
<td>28</td>
<td>27</td>
</tr>
<tr>
<td>kidney_healthy</td>
<td></td>
<td></td>
</tr>
<tr>
<td>kidney_<strong>low</strong></td>
<td>4</td>
<td>-</td>
</tr>
<tr>
<td>kidney_<strong>high</strong></td>
<td>6</td>
<td>-</td>
</tr>
<tr>
<td>liver_healthy</td>
<td></td>
<td></td>
</tr>
<tr>
<td>liver_<strong>low</strong></td>
<td>4</td>
<td>-</td>
</tr>
<tr>
<td>liver_<strong>high</strong></td>
<td>6</td>
<td>-</td>
</tr>
<tr>
<td>spleen_healthy</td>
<td></td>
<td></td>
</tr>
<tr>
<td>spleen_<strong>low</strong></td>
<td>4</td>
<td>-</td>
</tr>
<tr>
<td>spleen_<strong>high</strong></td>
<td>6</td>
<td>-</td>
</tr>
<tr>
<td>any_<strong>injury</strong></td>
<td>28</td>
<td>27</td>
</tr>
</tbody>
</table>
<p>These means from the training set were used to fill in the solution values.</p>
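<p>A minimal sketch of this "mean without a model" baseline (the file layout and column handling are assumptions, not the exact code of the referenced notebook): take each target's mean over the training labels, scale the injury-style columns by the chosen coefficients, and write the same row of probabilities for every patient.</p>
<pre><code>import pandas as pd

train = pd.read_csv("train.csv")                      # competition training labels
sub = pd.read_csv("sample_submission.csv")            # assumed layout: patient_id + target columns

target_cols = [c for c in train.columns if c != "patient_id"]
means = train[target_cols].mean()

for col in target_cols:
    sub[col] = means[col]
    if col.endswith("_low") or col == "bowel_injury":
        sub[col] *= 4
    elif col.endswith("_high"):
        sub[col] *= 6
    elif col.split("_")[1] == "injury" and col != "bowel_injury":   # extravasation_injury, any_injury
        sub[col] *= 27

sub.to_csv("submission.csv", index=False)
</code></pre>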
<h1>4. Method</h1>
<p>Solution: Mean.</p>
<table>
<thead>
<tr>
<th>notebook</th>
<th>score(private)</th>
</tr>
</thead>
<tbody>
<tr>
<td>MIRENA ANGELOVA public <a href="https://www.kaggle.com/code/mirenaborisova/rsna-0-66-lb" target="_blank">https://www.kaggle.com/code/mirenaborisova/rsna-0-66-lb</a></td>
<td>0.67282</td>
</tr>
<tr>
<td>Change feature.split('_')[1] == 'injury' and feature != 'bowel_injury': submission[feature] *= 28 <br> to submission[feature] *= 27</td>
<td>0.6718</td>
</tr>
</tbody>
</table>
<p><br></p>
<h1>5. Final result</h1>
<table>
<thead>
<tr>
<th>low or bowel_injury</th>
<th>high</th>
<th>injury (excluding bowel_injury)</th>
<th>public LB</th>
<th>private LB</th>
</tr>
</thead>
<tbody>
<tr>
<td>mean * 4</td>
<td>mean * 6</td>
<td>mean * 27</td>
<td><strong>0.6718</strong></td>
<td>0.66669</td>
</tr>
<tr>
<td>mean * 4</td>
<td>mean * 6</td>
<td>mean * 29</td>
<td>0.6739</td>
<td>0.66754</td>
</tr>
<tr>
<td>mean * 4</td>
<td>mean * 5</td>
<td>mean * 28</td>
<td>0.67335</td>
<td>0.66502</td>
</tr>
<tr>
<td>mean * 3</td>
<td>mean * 6</td>
<td>mean * 28</td>
<td>0.67181</td>
<td>0.66819</td>
</tr>
<tr>
<td>mean * 4</td>
<td>mean * 6</td>
<td>mean * 28</td>
<td>0.67282</td>
<td>0.66708</td>
</tr>
</tbody>
</table>
<h1>6. Sources</h1>
<ul>
<li><a href="url" target="_blank">https://www.kaggle.com/code/mirenaborisova/rsna-0-66-lb?scriptVersionId=141091151</a></li>
</ul> |
Bengali.AI Speech Recognition | 5th place solution - ensembling works | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Speech Recognition
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you for organizing this competition so well. Whenever questions arose, the hosts responded promptly and clearly. I really enjoyed working on this problem. Thank you very much <a href="https://www.kaggle.com/imtiazprio" target="_blank">@imtiazprio</a> </p>
<p><strong>Modeling</strong><br>
Like most of the participants, I started with the YellowKing pipeline. I changed the backbone to IndicWav2Vec and reinitialized the CTC layer. With this method and the public Dari/normalizing pipeline, it was already possible to achieve 0.48x scores without a Language Model. However, there was a big problem: <strong>the local WER score did not match the public lb anymore if the model was trained for too long</strong>. At a learning rate of 3e-5, local overfitting started at 40k steps (batch size 16). I suspect that the model started learning the relationship of erroneous audio/annotation pairs.</p>
<p>Due to the noisy annotation, I excluded all samples with a mos-score of >2.0 in the subsequent training stage, reaching a score of 0.472. After re-labeling with the new model, I removed all samples with a WER above 0.5 for my model and a WER above 0.5 for YellowKing. <strong>This resolved the local overfitting issue</strong>, ensuring that local improvements correlated with better public lb scores. I trained the model over 210k steps (bs 16) with a learning rate of 8e-5, achieving a score of 0.452 without a language model on top.</p>
<p>Additionally, I trained a model with a <a href="https://huggingface.co/facebook/wav2vec2-xls-r-1b" target="_blank">larger backbone</a>. Unfortunately, due to lack of time, I could not fully exploit the potential, but after 135k steps I could already achieve a score of 0.454. The learning rate for the larger model had to be much smaller (1e-5).</p>
<p><strong>Key Takeaways:</strong></p>
<ul>
<li>Quality filtering of data significantly enhanced training performance.</li>
</ul>
<p><strong>Ensembling</strong><br>
Early on, I tried to average the model logits of different finetuned models. However, it doesn't work because the predictions may not be aligned; this is even the case for models of the same architecture. In <a href="https://arxiv.org/pdf/2206.05518.pdf" target="_blank">this paper</a> a procedure is described where <strong>features of different ASR models are concatenated</strong>, followed by the addition of a transformer encoder and a CTC layer. The weights of the ASR models are frozen and only the transformer encoder and the CTC layer are trained. I applied this approach to my finetuned models (a minimal sketch follows the list below):</p>
<ul>
<li>Extract embeddings of the finetuned models</li>
<li>Concatenate the features</li>
<li>Concatenated features are passed into the Transformer encoder and then processed in the CTC layer</li>
<li>A few steps (7k steps for batchsize 8) on the same training data were enough to increase the performance of the overall model from 0.355 to 0.344 (I justed tested it in the full pipeline, so with language model, punctuation and normalizing)<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2959383%2Fadb7e22f32c07808c16ed5ece66f5032%2FBild1.png?generation=1697605346189304&alt=media" alt=""></li>
</ul>
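<p>A minimal PyTorch sketch of that fusion module (an illustration, not the author's exact configuration; the layer sizes are assumptions, and the frozen models are assumed to be base encoders with the same frame rate, e.g. <code>Wav2Vec2Model</code>):</p>
<pre><code>import torch
import torch.nn as nn

class FusionCTC(nn.Module):
    def __init__(self, asr_models, hidden_sizes, vocab_size, d_model=1024, n_layers=2):
        super().__init__()
        self.asr_models = nn.ModuleList(asr_models)
        for m in self.asr_models:               # freeze the finetuned ASR backbones
            for p in m.parameters():
                p.requires_grad = False
        self.proj = nn.Linear(sum(hidden_sizes), d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.ctc_head = nn.Linear(d_model, vocab_size)   # trained with nn.CTCLoss

    def forward(self, input_values):
        with torch.no_grad():
            feats = [m(input_values).last_hidden_state for m in self.asr_models]
        x = torch.cat(feats, dim=-1)            # concatenate along the feature dimension
        x = self.proj(x)
        x = self.encoder(x)
        return self.ctc_head(x)                 # (batch, time, vocab) logits for CTC decoding
</code></pre>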
<p><strong>Key takeaways:</strong></p>
<ul>
<li>Ensembling asr model outputs by concatenating the last hidden states and adding a transformer encoder with CTC layer improves performance a lot</li>
</ul>
<p><strong>Potentials:</strong></p>
<ul>
<li>I think this is where the <strong>greatest further potential lies</strong> in my solution. I didn't have enough time, but it would be interesting to see how Whisper or HUBERT in the ensemble would have improved the performance. I think because of the different architectures it should be even better because of the larger diversity in the ensemble.</li>
<li>My finetuned model with the 1b backbone is undertrained. There is still potential for better performance if the model is trained for a longer period.</li>
<li>I have hardly experimented with the parameters and training routine of the added transformer encoder. There is room for improvement as well.</li>
</ul>
<p><strong>Language Model</strong><br>
I have thought a lot about the language model part. I had thought about interpreting the whole thing as a seq2seq problem and taking a pretrained transformer as the language model. In the end I just used a standard KenLM model. For this I used datasets which were shared in the forums (e.g. IndicCorp, SLR, Comp data).</p>
<p>In preprocessing, I radically <strong>limited the vocabulary to the character set of the training data</strong>, as the<a href="https://www.kaggle.com/competitions/bengaliai-speech/discussion/433469#2410416" target="_blank"> hosts said </a> there are no new characters in the test data. I replaced numbers, other languages, and other characters with special tokens. This reduced the number of unigrams significantly, which was a bottleneck in the Kaggle environment. I ended up with <strong>a 5-gram model with 0 0 0 2 2 pruning</strong>. Beyond a certain point, adding more data didn't increase the performance.</p>
<p>Much more important for the Kaggle workflow was the following fact: If you use the standard Wav2Vec2ProcessorWithLM of Huggingface and then remove the model, the <a href="https://github.com/kensho-technologies/pyctcdecode/pull/111" target="_blank">RAM is not released</a>. I therefore disassembled the pipeline and first computed all logits, stored them locally, and then read them back in for the decoding process. Afterwards I removed the language model cleanly with decoder.cleanup() before deleting the decoder. Without this measure, I would not have been able to use the ensemble model because of out of memory errors.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2959383%2Fd077b0fb9e9042395af9401e0b509c65%2FBild2.png?generation=1697605762499198&alt=media" alt=""></p>
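<p>A hedged sketch of that decoupled workflow (names, paths and the beam width below are assumptions): compute and save the logits with the acoustic model first, then decode them in a separate pass with the KenLM-backed decoder, and clean the decoder up explicitly so the RAM is released.</p>
<pre><code>import numpy as np
from pyctcdecode import build_ctcdecoder

# Pass 1 (acoustic model only): for each sample, save the (time, vocab) logits, e.g.
# np.save(f"logits/{sample_id}.npy", logits), then free the model.

# Pass 2 (decoding only), assuming vocab_list and sample_ids are defined:
decoder = build_ctcdecoder(labels=vocab_list, kenlm_model_path="bn_5gram.arpa")  # assumed file name
texts = []
for sample_id in sample_ids:
    logits = np.load(f"logits/{sample_id}.npy")
    texts.append(decoder.decode(logits, beam_width=128))   # beam width is an assumption

decoder.cleanup()   # release the language-model resources before deleting the decoder
del decoder
</code></pre>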
<p><strong>Key Takeaways:</strong></p>
<ul>
<li>The language model is important: precise preprocessing manages size while excessive data may not necessarily improve the LM's performance. Corrupted data can even degrade the performance of the language model.</li>
</ul>
<p><strong>Punctuation</strong><br>
I tried different punctuation models and ensembles. In the end, I used a <a href="https://github.com/xashru/punctuation-restoration" target="_blank">XLM-Roberta-Large model</a>. It improved the score by ~0.02. I could improve the performance even more by forcing the model to output a | or ? after the last word (+0.002).</p>
<p><strong>Code</strong><br>
Cleaned up inference code can be found here:<br>
<a href="https://www.kaggle.com/code/benbla/5th-place-solution/notebook" target="_blank">Kaggle Inference</a><br>
Cleaned up training code can be found here:<br>
<a href="https://github.com/bd317/BengaliAI_Speech_Recognition_5th_solution" target="_blank">Github training</a></p> |
CommonLit - Evaluate Student Summaries | 1st: A brief review of the competition experience (Detail solution is on the way) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit - Evaluate Student Summaries
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h2>Some Feelings</h2>
<p>I would like to thank the organizers for bringing us this wonderful competition, and thank everyone for the atmosphere of sharing and discussion. This competition was a practical project required by our natural language processing course. My teammates and I learned more in this competition than in the school class. Of course, thanks to the lucky shake-up, luck was also an important factor.</p>
<h2>What we did in the comp</h2>
<p>Our final submission only used a 4-fold microsoft/deberta-v3-large. Because of each team member's academic pressure and limited time, we did not plan to study various tricks or training techniques from the beginning, but instead started with the quality and diversity of the data.</p>
<p>WHY data? Because the provided training data contains only four topics, yet from our own experience, assessing a student's writing ability requires many different topics. So in the real test scenario, the model's understanding of different topics matters most (other participants later confirmed this by probing the test set). We also found that in LLM-related competitions many people use LLM-based data augmentation with exciting results, so we focused on improving the diversity of topics. (We keep following other competitions and competitors' experience and progress in using LLMs, but we have not seen anyone openly reveal whether using LLMs is "USEFUL", even when they have tried the generated data.)</p>
<p><strong>We sincerely hope that our approach can provide some feasible ideas for the use of LLMs in FUTURE competitions, and can serve as a reference for other participants.</strong></p>
<h3>The key points of our solution:</h3>
<ol>
<li><p>Meta pseudo label (3 rounds), the most critical and time-consuming part, is the key to associating unannotated data with annotated data, from a paper by Google.(<a href="https://arxiv.org/pdf/2003.10580.pdf" target="_blank">https://arxiv.org/pdf/2003.10580.pdf</a>)</p></li>
<li><p>A carefully designed prompt to guide the LLM to produce the topics and topic texts it has memorized (we could actually have used the CommonLit website, but we didn't realize that at the time).</p></li>
<li><p>Another prompt used to generate ten summaries of different quality for each additional topic.</p></li>
<li><p>Changing the data preprocessing in the open-source training code to feed the prompt text into the model; most other strong teams had this key point, which was commonly mentioned in the discussions.</p></li>
<li><p>Two-stage training (see the sketch after this list): stage 1 - use pseudo-labeled data only, for 2 epochs, validating on the train set; stage 2 - use the train data only, for 2-3 epochs. In this way we did not need to pay too much attention to the data distribution of the pseudo-labels, as mentioned by <a href="https://www.kaggle.com/philippsinger" target="_blank">@philippsinger</a> in his insightful <a href="https://www.kaggle.com/competitions/feedback-prize-english-language-learning/discussion/369578" target="_blank">solution</a> for Feedback-ELL.</p></li>
<li><p>Sorting the indices by the length of the input text to reduce inference time. The whole inference is estimated at 7 hours; without this, the inference time would exceed the 9-hour limit.</p></li>
<li><p>The training and inference pipeline <a href="https://www.kaggle.com/code/tsunotsuno/debertav3-lgbm-no-autocorrect" target="_blank">code</a> shared by <a href="https://www.kaggle.com/tsunotsuno" target="_blank">@tsunotsuno</a>; the clean pipeline saved us a lot of time building our own.</p></li>
</ol>
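<p>A minimal sketch of the two-stage schedule from point 5, on top of a generic Hugging Face <code>Trainer</code> (an illustration, not the team's exact setup; names and hyperparameters are assumptions, and <code>model</code> and the datasets are assumed to be defined):</p>
<pre><code>from transformers import TrainingArguments, Trainer

def run_stage(model, train_ds, eval_ds, out_dir, epochs):
    args = TrainingArguments(output_dir=out_dir, num_train_epochs=epochs,
                             evaluation_strategy="epoch", save_strategy="epoch",
                             learning_rate=2e-5)          # assumed value
    trainer = Trainer(model=model, args=args, train_dataset=train_ds, eval_dataset=eval_ds)
    trainer.train()
    return trainer.model

# Stage 1: pseudo-labeled (LLM-generated) summaries only, validated on the real train set.
model = run_stage(model, pseudo_ds, real_train_ds, "stage1", epochs=2)
# Stage 2: real competition data only, starting from the stage-1 weights.
model = run_stage(model, real_train_ds, real_valid_ds, "stage2", epochs=3)
</code></pre>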
<h2>Other words</h2>
<p>We only used the open-source code and did not make any improvements to the model itself.<br>
In recent days we have read the other teams' solutions, which impressed us, especially the "Head Mask" mentioned by <a href="https://www.kaggle.com/ivanaerlic" target="_blank">@ivanaerlic</a>, whose deep understanding of the competition and of the nature of the model structure made us admire him; we hope that one day we can make such excellent model optimizations too.</p>
<p>I believe that by combining the head mask with the LLM meta pseudo-label procedure, the private score might improve to 0.43+ with only one model.</p>
<p>Thanks again to all the participants for your selfless sharing. I hope our working can contribute to the community!! :)</p>
<h2>Prompt used for LLM generate prompts:</h2>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F15921268%2F18c17ba5a587a959e9b1c548b063c954%2FCOMMONLIT-1.jpg?generation=1697356510042870&alt=media" alt=""></p>
<h2>Prompt used for LLM generate summaries:</h2>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F15921268%2F3e612d66341007d176e84c454b284517%2FCOMMONLIT-2.jpg?generation=1697356525760792&alt=media" alt=""></p>
<h2>Related source:</h2>
<p>inference code: <br>
<a href="https://www.kaggle.com/itsmeanszao/debertav3-lgbm-no-autocorrect" target="_blank">https://www.kaggle.com/itsmeanszao/debertav3-lgbm-no-autocorrect</a><br>
llm generated data: <a href="https://docs.google.com/spreadsheets/d/1gKT1jYOVVtcMsIMiqgrxXGxdLSBQPkIM/edit?usp=sharing&ouid=117192899697376325370&rtpof=true&sd=true" target="_blank">https://docs.google.com/spreadsheets/d/1gKT1jYOVVtcMsIMiqgrxXGxdLSBQPkIM/edit?usp=sharing&ouid=117192899697376325370&rtpof=true&sd=true</a></p> |
ICR - Identifying Age-Related Conditions | 8th Place Solution for the "ICR - Identifying Age-Related Conditions" Competition | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: ICR - Identifying Age-Related Conditions
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Frankly, I was somewhat shocked by the result. I had a pretty solid idea that there would be a lot of shakeup - but certainly wasn't expecting to be in the top 10. </p>
<p>The public/private scores for this submission are 0.19/0.34 respectively.</p>
<h1>Context:</h1>
<p>Business context:<a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/overview" target="_blank"> https://www.kaggle.com/competitions/icr-identify-age-related-conditions/overview</a><br>
Data context: <a href="https://www.kaggle.com/competitions/icr-identify-age-related-conditions/data" target="_blank">https://www.kaggle.com/competitions/icr-identify-age-related-conditions/data</a></p>
<h1>Overview of the approach</h1>
<p>I focused first on predicting if someone had a specific Age-Related Condition rather than the class. This proved more effective than predicting class alone. Effectively, I create an Ensemble of Ensemble Predictors focused on specific conditions. The models in the primary ensembles were XGBoost & TabPFN. </p>
<h1>Details of Submission</h1>
<h2>Imputing Strategy</h2>
<p>My primary imputing strategy utilized XGBoost to predict most of the missing values rather than dropping, filling with 0, mean, mode, etc.</p>
<p>Compared to median imputing, this resulted in an improvement in the Public score of 0.04 (with no change in Private Score). </p>
<p>In this submission, there were two fields that contained the majority of the nan values in the training set. As I couldn't be sure this would hold true in any other set, I created a function to impute these values in a brief loop while dropping additional columns from subsequent impute runs when multiple columns in the target were nan values. </p>
<pre><code># Reconstruction: the function signature, placeholder names and the values
# marked below were elided from the original write-up and are assumptions.
def impute_field(df, field, dl):
    drop_list = set(df.columns).intersection(set(dl))
    imputeDF = df.copy()
    imputeDF.drop(drop_list, axis=1, inplace=True)
    impute_X = imputeDF.drop(field, axis=1).dropna()
    impute_y = imputeDF[field]
    imputeDF = impute_X.join(impute_y)
    trainDF = imputeDF[imputeDF[field].notna()].dropna()
    train_X = pd.get_dummies(trainDF.drop(field, axis=1))
    train_y = trainDF[field]
    testDF = imputeDF[imputeDF[field].isna()]
    test_X = testDF.drop(field, axis=1)
    test_y = testDF[field]
    test_X.dropna(inplace=True)
    testDF = test_X.join(test_y)
    test_y = testDF[field]
    if len(test_X) == 0:
        return df
    colsToUse = list(set(train_X.columns).intersection(set(test_X.columns)))
    train_X = train_X[colsToUse]
    test_X = test_X[colsToUse]
    modelOne = XGBRegressor(n_estimators=N_ESTIMATORS).fit(train_X, train_y)   # value elided in original
    preds = modelOne.predict(pd.get_dummies(test_X))
    predictedDF = test_X
    predictedDF[field] = preds.astype(float)
    for i, row in predictedDF.iterrows():
        df.loc[i, field] = row[field]
    remainingToSolve = df[df[field].isna()].count()
    return df
</code></pre>
<h2>Feature Engineering</h2>
<p>Aside from the imputing step above and converting categoricals into ints, I did no manipulation of the float data. There are quite a few fields with outliers that could perhaps be addressed, and at least one where the values appear to max out. In this case, I didn't have time to play around.</p>
<h2>Ensemble Models Used</h2>
<p>XGBoost's Classifier w/o fine tuning (fine tuning overfitted)<br>
TabPFN</p>
<p>As a metric (for XGBoost) I used balanced log loss, with the binary logistic objective. </p>
<p>Each model was fit using cross validation, scored by Balanced Log Loss, and the best performing model from each run was chosen. </p>
<h2>Greeks & Class</h2>
<p>The greeks file contains the Alpha field that differentiates the specific conditions that might result in the Class value being equal to 1. The remainder of the Greeks file was ignored. </p>
<h2>Ensemble of Ensemble Strategy</h2>
<p>Instead of a single ensemble predicting Class, we train 4. We train ensemble models to predict</p>
<ol>
<li>Class</li>
<li>Alpha "B"</li>
<li>Alpha "D"</li>
<li>Alpha "G"</li>
</ol>
<p>For Positive predictions, we take the max of positive predictions for B,D,G & average the result with the positive prediction of class.<br>
For Negative predictions, we take the min of negative predictions for B,D,G & average the result with the negative prediction of class.</p>
<p>Note that in the code below, I refer to the predictors for B, D, and G, as Beta, Delta, and Gamma. This is not to be confused with the fields of the same name (which are not used). </p>
<pre><code># Reconstruction: names and class indices were elided from the original
# write-up; index 1 is assumed to be the positive class and index 0 the negative.
def predict_proba_ensemble(train_X, postProcess=False):
    alphaPreds = modelAlpha.predict_proba(train_X)
    betaPreds = modelBeta.predict_proba(train_X)
    deltaPreds = modelDelta.predict_proba(train_X)
    gammaPreds = modelGamma.predict_proba(train_X)
    ensemblePreds = []
    for i, pred in enumerate(betaPreds):
        # positive: max of the B/D/G positive probs, averaged with the Class positive prob
        truthy = mean([max([betaPreds[i][1], deltaPreds[i][1], gammaPreds[i][1]]), alphaPreds[i][1]])
        # negative: min of the B/D/G negative probs, averaged with the Class negative prob
        falsey = mean([min([betaPreds[i][0], deltaPreds[i][0], gammaPreds[i][0]]), alphaPreds[i][0]])
        ensemblePreds.append([falsey, truthy])
    if postProcess:
        ensemblePreds = modelAlpha.post_process_proba(ensemblePreds)
    return np.array(ensemblePreds)
</code></pre>
<h3>Special Note on Balanced Log Loss & The Greek Strategy</h3>
<p>For this competition, we did not need to ensure that predicted probabilities sum to 1. There are quite a few public notebooks utilizing a version of this function that simply inverts the positive predictions. This would obviously be incorrect with my prediction strategy described above as I can have total probabilities both greater and less than 1. For this notebook, I chose to ensure the Balanced Log Loss function I used took the full prediction probabilities and dealt with them independently rather than ensuring the probabilities summed to 1. Both should presumably be equivalent. </p>
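<p>A small sketch of the variant described above (an illustration, not the notebook's exact function): both probability columns are treated independently, so nothing forces them to sum to 1.</p>
<pre><code>import numpy as np

def balanced_log_loss(y_true, p_pred, eps=1e-15):
    # p_pred[:, 0] = predicted probability of class 0, p_pred[:, 1] = class 1;
    # the two columns are used independently and need not sum to 1
    y_true = np.asarray(y_true)
    p = np.clip(np.asarray(p_pred, dtype=float), eps, 1 - eps)
    loss_0 = -np.mean(np.log(p[y_true == 0, 0]))   # average over the negative class
    loss_1 = -np.mean(np.log(p[y_true == 1, 1]))   # average over the positive class
    return (loss_0 + loss_1) / 2
</code></pre>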
<h3>Caveats from a better performing notebook.</h3>
<p>A better performing notebook (Private 0.33) ignored the Class model altogether.</p>
<h2>Prediction Balancing</h2>
<p>I balanced the prediction probabilities based on the weight of the predictions by the ensembled models on a per-model basis during training, and in aggregate when making predictions. I found this performed better during limited testing than providing an up-front weighting to XGBoost for the classes based on the ratio in the training set. </p>
<h2>Picking Features</h2>
<p>For each of the Ensemble Models (Class, B, G, D), I ran it through a loop removing features with feature importances scored <=0 to check which had the best performance through training. I kept the feature set with the best scoring run.</p>
<h3>Caveats from a better performing notebook.</h3>
<p>A better performing notebook (Private 0.33) used all features.</p>
<h2>Hyperparameter Tuning</h2>
<p>I ran through 200 trials w/ Optuna on each model (Class, B, D, G) to tune hyper parameters. Ultimately, I found that tuning resulted in worse scores for both public & private sets (0.01 to 0.05 worse). However, given the time limitation, I did not re-perform this for the reduced feature set for each model. This leaves room for re-running hyperparameter tuning on the reduced feature sets. I still suspect that at this point we're already working on overfitting against my core solution.</p>
<h2>A note on Post Processing</h2>
<p>I tried simple prediction post processing strategies where I pushed high values towards 1, and low values towards 0. I also tried ratio oriented approaches. Both tended to improve the training result score significantly (~0.5 all the way down to ~0.04 range) - but this resulted in significantly worse scores on both the public and private data set. I observed this strategy in the highest scoring public notebook (<a href="https://www.kaggle.com/code/vadimkamaev/postprocessin-ensemble" target="_blank">https://www.kaggle.com/code/vadimkamaev/postprocessin-ensemble</a>) that seemed to have been the source of many high scores. I'm curious now if this harmed those notebooks in a fashion as it did mine. </p>
<h1>Sources</h1>
<p>(<a href="https://www.kaggle.com/code/vadimkamaev/postprocessin-ensemble" target="_blank">https://www.kaggle.com/code/vadimkamaev/postprocessin-ensemble</a>)<br>
Inspired my failed attempts at post processing & source of my swap to balancing predictions during prediction rather than via the xgboost class weighting parameter (tabPFN didn't seem to have this, and attempts to mix/match were unnecessarily complex).</p> |
RSNA 2023 Abdominal Trauma Detection | 7th Place Solution & Code | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA 2023 Abdominal Trauma Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you RSNA for hosting this great competition, which gave me a nice experience and I believe this will start a wonderful journey on the Kaggel.</p>
<p>I will briefly introduce the solution I used in this competition.</p>
<h2>Dataset</h2>
<h3>sequence image data</h3>
<ol>
<li><p>My solution is a 2.5D pipeline, so each sequence has to be processed to a fixed shape, that is, <strong>[T * 3, 512, 512]</strong>; every series is <strong>sampled</strong> to a fixed length, for example T = 32.</p></li>
<li><p>Then each individual slice image is <strong>cropped</strong> to include as much valid data as possible in the image. This can be achieved by counting effective pixels.</p></li>
<li><p>Finally, the slices are resized to a <strong>[256, 384]</strong> shape (a rough sketch of these steps follows this list).</p></li>
</ol>
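<p>A rough sketch of the sampling and cropping steps (an illustration with assumed thresholds, not the author's exact preprocessing):</p>
<pre><code>import numpy as np
import cv2

def preprocess_series(volume, T=32, out_hw=(256, 384), thresh=50):
    # volume: (num_slices, H, W) array of CT slices
    idx = np.linspace(0, len(volume) - 1, T).astype(int)    # sample T slices evenly
    sampled = volume[idx]

    # crop to the region that actually contains tissue ("effective" pixels)
    mask = (sampled > thresh).any(axis=0)                   # threshold is an assumed value
    ys, xs = np.where(mask)
    sampled = sampled[:, ys.min():ys.max() + 1, xs.min():xs.max() + 1]

    # resize every slice to the final (256, 384) shape
    resized = np.stack([cv2.resize(s.astype(np.float32), out_hw[::-1]) for s in sampled])
    return resized                                          # (T, 256, 384)
</code></pre>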
<p>The visualization of cropping and reshaping results is as follows.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F16201938%2F6ff85850d5189bfbba5eae614eb76096%2Fimage-20231016105954580.png?generation=1697456066783359&alt=media" alt=""></p>
<h3>sequence organ mask</h3>
<p>Just use <a href="https://pubs.rsna.org/doi/10.1148/ryai.230024" target="_blank">the total segmentor model</a> to generate segmentation results for all series. The bowel_mask = colon_mask + duodenum_mask + small_bowel_mask + esophagus_mask.</p>
<p>These masks will be used as mask ground truth, to assist with classification tasks.</p>
<h2>Models</h2>
<p><strong>Backbone:</strong> </p>
<p>InternImage (base) -> out stride (8, 16, 32)</p>
<p><strong>neck:</strong></p>
<p>UnetPlusPlus -> out stride (4, 8)</p>
<p><strong>head:</strong></p>
<p>I think the head section is the most valuable and effective part of this scheme.</p>
<p>The bowel, liver, spleen, and kidneys all have specific shapes and positions, unlike extravasation. So there are two heads for classification.</p>
<p>For the first head, I referred to the decoding idea of Mask2Former, which learns to predict a mask from a query and uses it as the attention of the decoder layer. This helps each query extract effective information for its organ.</p>
<p>For the extravasation head, the image-level labels are used to assist feature learning and enable better classification.</p>
<p>The entire pipeline is as follows.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F16201938%2F186012aa8c3e147e92ece5017bb14650%2Fimage-20231016183612262.png?generation=1697456103599237&alt=media" alt=""></p>
<p><strong>loss:</strong></p>
<p>All cross-entropy losses are weighted according to the status of each organ in each patient. The organ weights are consistent with the weights used in the official evaluation metric.</p>
<h2>Post processing</h2>
<p>I simply averaged the results of different series of the same patient id.</p>
<h2>Ensemble</h2>
<p>All ensembled models use the same model architecture, only using different sequence lengths(T=24/32/48) and different data folds.</p>
<h3>train code:</h3>
<p><a href="https://github.com/llreda/RSNA/tree/master" target="_blank">https://github.com/llreda/RSNA/tree/master</a></p>
<h3>inference code:</h3>
<p><a href="https://www.kaggle.com/code/hongx0615/rsna-2023-7th-place-solution-inference" target="_blank">https://www.kaggle.com/code/hongx0615/rsna-2023-7th-place-solution-inference</a></p> |
Bengali.AI Speech Recognition | 2nd place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Speech Recognition
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Many thanks to Kaggle and Bengali.AI for hosting such an interesting competition. As I was about to start my PhD research on speech processing and I knew nothing about the field, the competition came as the perfect opportunity for me to learn. In the end it was a highly rewarding experience.</p>
<p>The solution consists of 3 components:</p>
<ul>
<li>ASR model</li>
<li>Language model</li>
<li>Punctuation model</li>
</ul>
<p><strong>1. ASR model</strong></p>
<p>I used <code>ai4bharat/indicwav2vec_v1_bengali</code> as pretrained model.</p>
<p><strong>Datasets:</strong></p>
<ul>
<li>Speech data: Competition data, Shrutilipi, MADASR, ULCA (for ULCA data most of the links are dead but some are still downloadable, only a few thousands samples though)</li>
<li>Noise data: music data from MUSAN and noise data from DNS Challenge 2020</li>
<li>All data are normalized and punctuation marks are removed except for dot (<code>.</code>) and hyphen (<code>-</code>)</li>
</ul>
<p><strong>Augmentation:</strong></p>
<ul>
<li>use augmentation from <code>audiomentations</code>, apply heavy augmentation for read speech (comp data, MADASR) and lighter augmentation for spontaneous speech (Shrutilipi, ULCA)<br>
Example of read speech augmentation:</li>
</ul>
<pre><code># Reconstruction: keyword names follow the audiomentations API; the background
# noise paths and the values marked below were elided from the original write-up.
augments = Compose([
    TimeStretch(min_rate=0.8, max_rate=2.0, p=0.5, leave_length_unchanged=False),  # last flag elided in original
    RoomSimulator(p=0.3),
    OneOf([
        AddBackgroundNoise(
            sounds_path=[
                MUSAN_MUSIC_DIR,        # path elided in original
            ],
            min_snr_in_db=5.0,
            max_snr_in_db=30.0,
            noise_transform=PolarityInversion(),
            p=1.0
        ),
        AddBackgroundNoise(
            sounds_path=[
                DNS_NOISE_DIR,          # path elided in original
            ],
            min_snr_in_db=5.0,
            max_snr_in_db=30.0,
            noise_transform=PolarityInversion(),
            p=1.0
        ),
        AddGaussianNoise(min_amplitude=0.005, max_amplitude=0.015, p=1.0),
    ], p=0.7),
    Gain(min_gain_in_db=-6, max_gain_in_db=6, p=0.2),
])
</code></pre>
<p>For spontaneous speech, the augmentation probabilities are smaller and the <code>TimeStretch</code> rate is much less extreme.</p>
<ul>
<li>concat augment: randomly concatenate short samples together to make length distribution of training set closer to OOD test set.</li>
<li>SpecAugment: mask_time_prob = 0.1, mask_feature_prob = 0.05.</li>
</ul>
<p><strong>Training:</strong></p>
<ul>
<li>First fit on all training data, then remove from the train set the roughly 10% of samples with the highest WER under the fitted model.</li>
<li>Don't freeze feature encoder.</li>
<li>Use cosine schedule with warmups and restarts: 1st cycle 5 epochs peak lr 4e-5, 2nd cycle 3 epochs peak lr 3e-5, third cycle 3 epochs peak lr 2e-5.</li>
</ul>
<p><strong>Inference:</strong><br>
Use <code>AutomaticSpeechRecognitionPipeline</code> from <code>transformers</code> to apply inference with chunking and stride:</p>
<pre><code># chunk and stride lengths were elided from the original write-up
text = pipe(w, chunk_length_s=CHUNK_S, stride_length_s=(STRIDE_LEFT_S, STRIDE_RIGHT_S))["text"]
</code></pre>
<p><strong>2. Language model</strong></p>
<p>6-gram kenlm model trained on multiple external Bengali corpus:</p>
<ul>
<li>IndicCorp V1+V2.</li>
<li>Bharat Parallel Corpus Collection.</li>
<li>Samanantar.</li>
<li><a href="https://www.kaggle.com/datasets/truthr/free-bengali-poetry" target="_blank">Bengali poetry dataset</a>.</li>
<li><a href="https://data.statmt.org/news-crawl/" target="_blank">WMT News Crawl</a>.</li>
<li>Hate speech corpus from <a href="https://github.com/rezacsedu/Classification_Benchmarks_Benglai_NLP" target="_blank">https://github.com/rezacsedu/Classification_Benchmarks_Benglai_NLP</a>.</li>
</ul>
<p><strong>3. Punctuation model</strong></p>
<p>Train a token classification model to add the following punctuation set: <code>।,?!</code> (a minimal sketch follows the list below)</p>
<ul>
<li>use <code>ai4bharat/IndicBERTv2-MLM-Sam-TLM</code> as backbone</li>
<li>add LSTM head</li>
<li>train for 6 epochs, cosine schedule, lr 3e-5 on competition data + subset of IndicCorp</li>
<li>mask 15% of the tokens during training as augmentation</li>
<li>ensemble 3 folds of model trained on 3 different subsets of IndicCorp</li>
<li>beam search decoding for inference.</li>
</ul>
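<p>A minimal sketch of such a backbone + LSTM token-classification head (an illustration; sizes and the label set are assumptions, not the author's exact configuration):</p>
<pre><code>import torch.nn as nn
from transformers import AutoModel

class PunctuationModel(nn.Module):
    def __init__(self, backbone="ai4bharat/IndicBERTv2-MLM-Sam-TLM", n_labels=5):
        super().__init__()
        self.backbone = AutoModel.from_pretrained(backbone)
        hidden = self.backbone.config.hidden_size
        self.lstm = nn.LSTM(hidden, hidden // 2, batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(hidden, n_labels)      # e.g. none, ।, comma, ?, !

    def forward(self, input_ids, attention_mask):
        x = self.backbone(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state
        x, _ = self.lstm(x)                                 # LSTM head over the token states
        return self.classifier(x)                           # per-token punctuation logits
</code></pre>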
<p>Thank you very much for reading and please let me know if you have any questions.</p>
<p>Update: </p>
<ul>
<li>training code: <a href="https://github.com/quangdao206/Kaggle_Bengali_Speech_Recognition_2nd_Place_Solution" target="_blank">https://github.com/quangdao206/Kaggle_Bengali_Speech_Recognition_2nd_Place_Solution</a></li>
<li>inference notebook: <a href="https://www.kaggle.com/code/qdv206/2nd-place-bengali-speech-infer/" target="_blank">https://www.kaggle.com/code/qdv206/2nd-place-bengali-speech-infer/</a></li>
</ul> |
RSNA 2023 Abdominal Trauma Detection | 12th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA 2023 Abdominal Trauma Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks kaggle and the hosts for this interesting competition, <br>
Big thanks to kagglers out there for their great ideas and engaging descussions.<br>
Thanks a lot as well to my great teammate <a href="https://www.kaggle.com/siwooyong" target="_blank">@siwooyong</a> </p>
<h1>Summary</h1>
<p>Our solution is an ensemble of one stage approach without segmentation and two stage approach with segmentation.</p>
<h2>One stage approach (public LB = 0.5, private LB = 0.45)</h2>
<h3>Data Pre-Processing</h3>
<p>If the dimension of the image is greater than (512, 512), we cropped the area with a higher density of pixels to get a (512, 512) image; then the input is resized to (96, 256, 256) for each series, following the same preprocessing steps that were used by hengck23 in his great <a href="https://www.kaggle.com/code/hengck23/lb0-55-2-5d-3d-sample-model" target="_blank">notebook</a></p>
<h3>Model : resnest50d + GRU Attention</h3>
<p>We predicted each target independently from the others, so the model has 13 outputs.</p>
<pre><code># Reconstruction: the class name and several literals were elided from the
# original write-up and are restored with plausible values (marked below).
class RSNAClassifier(nn.Module):
    def __init__(self, model_arch, hidden_dim, seq_len, pretrained=False):
        super().__init__()
        self.seq_len = seq_len
        self.model_arch = model_arch
        self.model = timm.create_model(model_arch, in_chans=3, pretrained=pretrained)
        cnn_feature = self.model.fc.in_features
        self.model.global_pool = nn.Identity()
        self.model.fc = nn.Identity()
        self.pooling = nn.AdaptiveAvgPool2d(1)
        self.spatialdropout = SpatialDropout(CFG.dropout)
        self.gru = nn.GRU(cnn_feature, hidden_dim, num_layers=2,            # num_layers elided in original
                          batch_first=True, bidirectional=True)
        self.mlp_attention_layer = MLPAttentionNetwork(2 * hidden_dim)
        self.logits = nn.Sequential(
            nn.Linear(2 * hidden_dim, 13),                                  # 13 independent targets
        )

    def forward(self, x):
        bs = x.size(0)
        x = x.reshape(bs * self.seq_len // 3, 3, x.size(2), x.size(3))
        features = self.model(x)
        features = self.pooling(features).view(bs * self.seq_len // 3, -1)
        features = self.spatialdropout(features)
        features = features.reshape(bs, self.seq_len // 3, -1)
        features, _ = self.gru(features)
        atten_out = self.mlp_attention_layer(features)
        pred = self.logits(atten_out)
        pred = pred.view(bs, -1)
        return pred
</code></pre>
<h3>Augmentation</h3>
<ul>
<li>Mixup</li>
<li>Random crop + resize</li>
<li>Random shift, scale, rotate</li>
<li>shuffle randomly the indexes of the sequence, but respecting the same order and keeping the dependency between each three consecutive images:</li>
</ul>
<pre><code># sizes were elided in the original write-up: pick random centre indices, then
# keep each centre with its two neighbours so triplets of slices stay consecutive
inds = np.random.choice(np.arange(1, seq_len - 1), n_centers, replace=False)
inds.sort()
inds = np.stack([inds - 1, inds, inds + 1]).T.flatten()
image = image[inds]
</code></pre>
<p>Loss : BCEWithLogitsLoss<br>
scheduler : CosineAnnealingLR<br>
optimizer : AdamW<br>
learning rate : 5e-5</p>
<h3>Postprocessing</h3>
<p>We simply multiplied the output by the weights of the competition metric :</p>
<pre><code># column names and multipliers were elided from the original write-up; the
# values below assume the competition metric weights (2 / 4 / 6)
preds.loc[:, ["bowel_injury", "kidney_low", "liver_low", "spleen_low"]] *= 2
preds.loc[:, ["kidney_high", "liver_high", "spleen_high"]] *= 4
preds.loc[:, ["extravasation_injury"]] *= 6
</code></pre>
<h2>Two stage approach (public LB = 0.45, private LB = 0.43)</h2>
<h3>stage1 : Segmentation</h3>
<p>Model : regnety002 + unet</p>
<p>Even with only 160 of the 200 segmentation-labeled cases (the 1st fold) used as training data, the model already showed good performance.</p>
<pre><code># Reconstruction: several literals (class names, decoder channels, flags) were
# elided from the original write-up and are restored with plausible values.
class SegModel(nn.Module):
    def __init__(self):
        super(SegModel, self).__init__()
        # organ classes (the exact names were elided in the original)
        self.n_classes = len([
            "liver", "spleen", "kidney_left", "kidney_right", "bowel",
        ])
        in_chans = 1
        self.encoder = timm.create_model(
            "regnety_002",
            pretrained=True,
            features_only=True,
            in_chans=in_chans,
        )
        encoder_channels = (
            [in_chans]
            + [
                self.encoder.feature_info[i]["num_chs"]
                for i in range(len(self.encoder.feature_info))
            ]
        )
        self.decoder = UnetDecoder(
            encoder_channels=encoder_channels,
            decoder_channels=(256, 128, 64, 32, 16),   # values elided in original
            n_blocks=5,
            use_batchnorm=True,
            center=False,
            attention_type=None,
        )
        self.segmentation_head = SegmentationHead(
            in_channels=16,
            out_channels=self.n_classes,
            activation=None,
            kernel_size=3,
        )
        self.bce_seg = nn.BCEWithLogitsLoss()

    def forward(self, x_in):
        enc_out = self.encoder(x_in)
        decoder_out = self.decoder(*[x_in] + enc_out)
        x_seg = self.segmentation_head(decoder_out)
        return nn.Sigmoid()(x_seg)
</code></pre>
<h3>stage2 : 2.5DCNN</h3>
<h4>Data Pre-Processing:</h4>
<p>We used the segmentation logits obtained from stage 1 to crop the liver, spleen, and kidneys, and then resized each crop to (96, 224, 224). <br>
(We use a padding of 10 when we crop the organs with the segmentation logits.)<br>
In addition, the full, uncropped CT volume is resized to (128, 224, 224), and a total of four inputs are fed to the model (full_video, crop_liver, crop_spleen, crop_kidney).</p>
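<p>A rough sketch of cropping one organ from the volume with its predicted mask and a fixed padding (the padding of 10 follows the description above; everything else is an assumption):</p>
<pre><code>import numpy as np
import torch
import torch.nn.functional as F

def crop_organ(volume, organ_logits, pad=10, out_shape=(96, 224, 224), thr=0.5):
    # volume, organ_logits: (D, H, W) arrays for one series and one organ
    mask = organ_logits > thr
    zs, ys, xs = np.where(mask)
    z0, z1 = max(zs.min() - pad, 0), min(zs.max() + pad + 1, volume.shape[0])
    y0, y1 = max(ys.min() - pad, 0), min(ys.max() + pad + 1, volume.shape[1])
    x0, x1 = max(xs.min() - pad, 0), min(xs.max() + pad + 1, volume.shape[2])
    crop = volume[z0:z1, y0:y1, x0:x1].astype(np.float32)

    # resize the crop to the fixed (96, 224, 224) shape with trilinear interpolation
    t = torch.from_numpy(crop)[None, None]                  # (1, 1, D, H, W)
    t = F.interpolate(t, size=out_shape, mode="trilinear", align_corners=False)
    return t[0, 0].numpy()
</code></pre>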
<h4>Model : regnety002 + transformer</h4>
<p>We initially used a custom any_injury_loss function, but found that it did not improve the performance. For the model input channel, we experimented with different values, including 2, 3, 4, and 8. <br>
We found that a channel size of 2 performed best. We also initially tried using a shared CNN and transformer model for all organs, but found that separate CNN and transformer models for each organ performed better. We also experimented with increasing the size of the CNN (using ConvNeXt and EfficientNet models), but this resulted in a decrease in performance. Therefore, we used the RegNet002 model, which is a smaller CNN model.</p>
<pre><code># Reconstruction: several literals (layer counts, head sizes, class indices)
# were elided from the original write-up and are restored with plausible values.
class FeatureExtractor(nn.Module):
    def __init__(self, hidden, num_channel):
        super(FeatureExtractor, self).__init__()
        self.hidden = hidden
        self.num_channel = num_channel
        self.cnn = timm.create_model(model_name="regnety_002",
                                     pretrained=True,
                                     num_classes=0,
                                     in_chans=num_channel)
        self.fc = nn.Linear(hidden, hidden // 2)

    def forward(self, x):
        batch_size, num_frame, h, w = x.shape
        x = x.reshape(batch_size, num_frame // self.num_channel, self.num_channel, h, w)
        x = x.reshape(-1, self.num_channel, h, w)
        x = self.cnn(x)
        x = x.reshape(batch_size, num_frame // self.num_channel, self.hidden)
        x = self.fc(x)
        return x

class ContextProcessor(nn.Module):
    def __init__(self, hidden):
        super(ContextProcessor, self).__init__()
        self.transformer = RobertaPreLayerNormModel(
            RobertaPreLayerNormConfig(
                hidden_size=hidden // 2,
                num_hidden_layers=2,          # elided in original
                num_attention_heads=4,        # elided in original
                intermediate_size=hidden * 2,
                hidden_act="gelu",            # elided in original
            )
        )
        del self.transformer.embeddings.word_embeddings
        self.dense = nn.Linear(hidden, hidden)
        self.activation = nn.ReLU()

    def forward(self, x):
        x = self.transformer(inputs_embeds=x).last_hidden_state
        apool = torch.mean(x, dim=1)
        mpool, _ = torch.max(x, dim=1)
        x = torch.cat([mpool, apool], dim=-1)
        x = self.dense(x)
        x = self.activation(x)
        return x

class Custom3DCNN(nn.Module):
    def __init__(self, hidden, num_channel):
        super(Custom3DCNN, self).__init__()
        self.full_extractor = FeatureExtractor(hidden=hidden, num_channel=num_channel)
        self.kidney_extractor = FeatureExtractor(hidden=hidden, num_channel=num_channel)
        self.liver_extractor = FeatureExtractor(hidden=hidden, num_channel=num_channel)
        self.spleen_extractor = FeatureExtractor(hidden=hidden, num_channel=num_channel)
        self.full_processor = ContextProcessor(hidden=hidden)
        self.kidney_processor = ContextProcessor(hidden=hidden)
        self.liver_processor = ContextProcessor(hidden=hidden)
        self.spleen_processor = ContextProcessor(hidden=hidden)
        self.bowel = nn.Linear(hidden, 2)
        self.extravasation = nn.Linear(hidden, 2)
        self.kidney = nn.Linear(hidden, 3)
        self.liver = nn.Linear(hidden, 3)
        self.spleen = nn.Linear(hidden, 3)
        self.softmax = nn.Softmax(dim=-1)

    def forward(self, full_input, crop_kidney, crop_liver, crop_spleen):
        full_output = self.full_extractor(full_input)
        kidney_output = self.kidney_extractor(crop_kidney)
        liver_output = self.liver_extractor(crop_liver)
        spleen_output = self.spleen_extractor(crop_spleen)
        full_output2 = self.full_processor(torch.cat([full_output, kidney_output, liver_output, spleen_output], dim=1))
        kidney_output2 = self.kidney_processor(torch.cat([full_output, kidney_output], dim=1))
        liver_output2 = self.liver_processor(torch.cat([full_output, liver_output], dim=1))
        spleen_output2 = self.spleen_processor(torch.cat([full_output, spleen_output], dim=1))
        bowel = self.bowel(full_output2)
        extravasation = self.extravasation(full_output2)
        kidney = self.kidney(kidney_output2)
        liver = self.liver(liver_output2)
        spleen = self.spleen(spleen_output2)
        any_injury = torch.stack([
            self.softmax(bowel)[:, 0],            # index 0 assumed to be the healthy class
            self.softmax(extravasation)[:, 0],
            self.softmax(kidney)[:, 0],
            self.softmax(liver)[:, 0],
            self.softmax(spleen)[:, 0]
        ], dim=-1)
        any_injury = 1 - any_injury
        any_injury, _ = any_injury.max(dim=-1)
        return bowel, extravasation, kidney, liver, spleen, any_injury
</code></pre>
<h4>Augmentation</h4>
<pre><code># Reconstruction: the rotation / scale ranges and a few flags were elided from
# the original write-up and are marked as assumptions below.
class CustomAug(nn.Module):
    def __init__(self, s, prob):
        super(CustomAug, self).__init__()
        self.prob = prob
        self.do_random_rotate = v2.RandomRotation(
            degrees=(-15, 15),              # range elided in original
            interpolation=torchvision.transforms.InterpolationMode.BILINEAR,
            expand=False,
            center=None,
            fill=0
        )
        self.do_random_scale = v2.ScaleJitter(
            target_size=[s, s],
            scale_range=(0.8, 1.2),         # range elided in original
            interpolation=torchvision.transforms.InterpolationMode.BILINEAR,
            antialias=True)
        self.do_random_crop = v2.RandomCrop(
            size=[s, s],
            pad_if_needed=True,
            fill=0,
            padding_mode="constant"
        )
        self.do_horizontal_flip = v2.RandomHorizontalFlip(self.prob)
        self.do_vertical_flip = v2.RandomVerticalFlip(self.prob)

    def forward(self, x):
        if np.random.rand() < self.prob:
            x = self.do_random_rotate(x)
        if np.random.rand() < self.prob:
            x = self.do_random_scale(x)
            x = self.do_random_crop(x)
        x = self.do_horizontal_flip(x)
        x = self.do_vertical_flip(x)
        return x
</code></pre>
<p>Loss : nn.CrossEntropyLoss(no class weight)<br>
scheduler : cosine_schedule_with_warmup<br>
optimizer : AdamW<br>
learning rate :2e-4</p>
<h4>Postprocessing</h4>
<p>We multiplied by the value that maximizes the validation score for each pred_df obtained for each fold.</p>
<pre><code># The per-fold multipliers and the exact column names were elided from the
# original write-up; the w* / col* names below are placeholders, and the
# groupby key is assumed to be the patient id.
weights = [
    [w1, w2, w3, w4, w5, w6, w7, w8],    # fold 1 (values elided)
    [w1, w2, w3, w4, w5, w6, w7, w8],    # fold 2 (values elided)
    [w1, w2, w3, w4, w5, w6, w7, w8],    # fold 3 (values elided)
    [w1, w2, w3, w4, w5, w6, w7, w8],    # fold 4 (values elided)
    [w1, w2, w3, w4, w5, w6, w7, w8],    # fold 5 (values elided)
]
y_pred = pred_df.copy().groupby("patient_id").mean().reset_index()
w1, w2, w3, w4, w5, w6, w7, w8 = weights[i]
y_pred[col1] *= w1
y_pred[col2] *= w2
y_pred[col3] *= w3
y_pred[col4] *= w4
y_pred[col5] *= w5
y_pred[col6] *= w6
y_pred[col7] *= w7
y_pred[col8] *= w8
y_pred = y_pred ** power                 # exponent elided in the original
</code></pre>
<h3>Reference</h3>
<p><a href="https://www.kaggle.com/competitions/rsna-2022-cervical-spine-fracture-detection/discussion/362607" target="_blank">RSNA 2022 1st place solution</a><br>
<a href="https://www.kaggle.com/competitions/rsna-2022-cervical-spine-fracture-detection/discussion/365115" target="_blank">RSNA 2022 2nd place solution</a><br>
<a href="https://github.com/pascal-pfeiffer/kaggle-rsna-2022-5th-place" target="_blank">RSNA 2022 5th place solution</a><br>
<a href="https://www.kaggle.com/hengck23" target="_blank">@hengck23</a>'s <a href="https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection/discussion/435053" target="_blank">descussion</a></p>
<h2>Code :</h2>
<p><a href="https://github.com/siwooyong/RSNA-2023-Abdominal-Trauma-Detection" target="_blank">https://github.com/siwooyong/RSNA-2023-Abdominal-Trauma-Detection</a><br>
Inference notebook : <a href="https://www.kaggle.com/code/ahmedelfazouan/rsna-atd-channel2-512-inference-ensemble?scriptVersionId=146616538" target="_blank">https://www.kaggle.com/code/ahmedelfazouan/rsna-atd-channel2-512-inference-ensemble?scriptVersionId=146616538</a></p> |
Kaggle - LLM Science Exam | 19th-place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kaggle - LLM Science Exam
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you Kaggle for hosting this challenging competition on LLM . <br>
A special thanks to Kagglers who shared their work, making the competition even more stimulating. <br>
As usual, congratulations to the winners and everyone who enjoyed the competition!</p>
<p>Here is a write-up of my solution</p>
<h2>Context Retrieval</h2>
<p>It's a two-step process:</p>
<ul>
<li>In the first step, I generate candidates through an indexed search across all Wikipedia paragraphs.</li>
<li>In the second step, I rerank the candidates using a TF-IDF model to fully leverage the fact that prompts/answers use the same words as the Wikipedia sentences (a small sketch follows this list). This second step proved to be very important for the final score.</li>
</ul>
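<p>A small sketch of that reranking step (an illustration, not the author's exact code): fit a TF-IDF vectorizer on the candidate paragraphs and score them against the prompt plus its answer options.</p>
<pre><code>from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def rerank(candidates, prompt, options, top_k=3):
    # candidates: list of paragraph strings returned by the Faiss search
    query = prompt + " " + " ".join(options)
    vec = TfidfVectorizer(ngram_range=(1, 2), stop_words="english")
    cand_mat = vec.fit_transform(candidates)
    query_vec = vec.transform([query])
    scores = cosine_similarity(query_vec, cand_mat)[0]
    order = scores.argsort()[::-1][:top_k]
    return [candidates[i] for i in order]
</code></pre>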
<p>For index generation, I used the Hugging Face dataset <a href="https://huggingface.co/datasets/wikipedia" target="_blank">wikipedia_20220301.en</a>: I removed all sections such as 'See also,' 'References,' 'External links', … and split the text by <code>\n\n</code>. <br>
Additionally, I eliminated all paragraphs with fewer than 10 words.</p>
<p>I generated the embedding for each paragraph using 'all-MiniLM-L6-v2' with 'MAX_LENGTH = 512' and created 10 Faiss indexes, each containing 4.2 million rows</p>
<p>The sequential search of these indexes on the GPU T4 x2 instances is relatively fast (less than 10 minutes for 200 prompts)</p>
<p>You can find the dataset and an example of a search here: <a href="https://www.kaggle.com/code/steubk/semantic-search-of-wikipedia-en-with-faiss" target="_blank">Semantic search of Wikipedia EN with Faiss</a></p>
<h2>LLAMA2 70B Inference</h2>
<p>I used 'orca-mini-v3-70b' and the amazing job done by <a href="https://www.kaggle.com/simjeg" target="_blank">@simjeg</a> to run LLaMA 2 70B on GPU T4 x2.</p>
<p>This is the prompt:</p>
<pre><code>
You are an AI assistant that follows instructions extremely well. Help as much as you can.
Your task is to analyze the question and the proposed answer below. If the answer is correct, respond ...; if not, respond ... (the literal response labels were elided from this write-up). As a potential aid, context sentences from Wikipedia pages are delimited below.
Context: {row[...]}
Question: {row[...]}
Proposed answer: {row[...]}
</code></pre>
<h2>ModelForMultipleChoice</h2>
<p>I trained Deberta V3 (max_length = 450) with <code>AutoModelForMultipleChoice</code> on a dataset of 24k prompts extracted from the dataset <a href="https://www.kaggle.com/datasets/mgoksu/llm-science-exam-dataset-w-context" target="_blank">llm-science-exam-dataset-w-context</a> by <a href="https://www.kaggle.com/mgoksu" target="_blank">@mgoksu</a>, taking only the prompts that start with 'What,' 'Which,' 'How,' 'Who,' and removing duplicate prompts from the training set. <br>
I then validated it on the 200-row training set.</p>
<h2>Submission</h2>
<p>The final submission is a merge of the softmax outputs of the 2 models.</p>
<p>To minimize memory leaks, each step was executed in a separate script.</p>
<p>The submission time of the pipeline varies from 7 hours and 20 minutes to the timeout (in the last days!!) </p>
<p>Below is the contribution of each step to the final score:</p>
<p><strong>Single ModelForMultipleChoice Model</strong>: <br>
LB: 0.822, Private: 0.805</p>
<p><strong>Single ModelForMultipleChoice Model with first step context retrivial</strong>:<br>
LB: 0.889, Private: 0.884</p>
<p><strong>Single orca-mini-v3-70b Model with first step context retrivial</strong>:<br>
LB: 0.897, Private: 0.889</p>
<p><strong>Merge ModelForMultipleChoice + orca-mini-v3-70b with first-step context retrieval</strong>:<br>
LB: 0.906, Private: 0.901</p>
<p><strong>Single orca-mini-v3-70b Model with two-step context retrieval</strong>:<br>
LB: 0.904, Private: 0.906</p>
<p><strong>Merge ModelForMultipleChoice + orca-mini-v3-70b with two-step context retrieval</strong>:<br>
LB: 0.921, Private:0.917</p>
<p>Link to Inference notebook: <a href="https://www.kaggle.com/code/steubk/llm-mc-qa-openbook-v3-inf-val-tfidf-db450" target="_blank">https://www.kaggle.com/code/steubk/llm-mc-qa-openbook-v3-inf-val-tfidf-db450</a></p> |
RSNA 2023 Abdominal Trauma Detection | 74th Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA 2023 Abdominal Trauma Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to kaggle and the organizers.</p>
<p>After experiencing a month of competition, I am thrilled to have achieved a bronze medal as a beginner. Now, I would like to share my code and model with other novice kagglers for learning purposes.</p>
<p>Thank you very much for <a href="https://www.kaggle.com/competitions/rsna-2022-cervical-spine-fracture-detection/discussion/362787" target="_blank">the first-place solution</a> of <a href="https://www.kaggle.com/haqishen" target="_blank">Qishen Ha</a> in the 2022 RSNA competition. I have completely copied their baseline and made corresponding modifications based on the content of this competition.</p>
<p>The submission takes around 180 min (mostly due to data loading). It achieves a public score of 0.65 and a private score of 0.59, for a final ranking of 74th place.</p>
<h1><strong>Code</strong></h1>
<p>inference code : <a href="https://www.kaggle.com/code/naocanzouyihui/rsna-2023-74th-place-solution-inference" target="_blank">https://www.kaggle.com/code/naocanzouyihui/rsna-2023-74th-place-solution-inference</a><br>
training code : <a href="https://github.com/naozouyihui/RSNA_abdominal_trauma_74th_solution/tree/main" target="_blank">https://github.com/naozouyihui/RSNA_abdominal_trauma_74th_solution/tree/main</a></p>
<h1><strong>Summary</strong></h1>
<p>Designed a 2-stage pipeline to deal with this problem.</p>
<p>stage1: 3D semantic segmentation -> stage2: 2.5D w/ LSTM classification.</p>
<h1><strong>3D Semantic Segmentation</strong></h1>
<p>I use 128x128x128 input to train an efficientnet v2s + unet model for segmenting organs (5-channel output, including bowel, left kidney, right kidney, spleen and liver).</p>
<p>After the training was completed, I predicted 3d masks for each organ for all 10k samples in the training set.</p>
<h1><strong>Prepare Data for Classification</strong></h1>
<p>Next step is to prepare data for classification.</p>
<p>First, using the predicted 3D mask for each organ, we can crop out 5 parts from a single original 3D image. After combining the masks of the left and right kidneys, we cropped 10k * 4 = 40k samples.</p>
<p>Then for each organ sample, I extracted 20 slices evenly by z-dimension, and for each slice, I further extracted +-2 adjacent slices to form an image with 5 channels. </p>
<p>In addition, I added the predicted mask of corresponding organ as the 6th channel to each image.</p>
<p>I chose the 2.5D approach to do this work. here 2.5D means that each 2D slice in a sample has the information of several adjacent slices, so it is written 2.5D. But the model is a normal 2D CNN with 5-channels input.</p>
<p>The structure of this model is as follows: I first input the 20 slices from a single sample into a 2D CNN to extract features for each slice, and then follow it with an LSTM model, so that the whole model can learn the features of the whole organ.</p>
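<p>A rough sketch of such a 2.5D CNN + LSTM classifier in PyTorch; the 6-channel input, 20 slices and resnet50d backbone follow the write-up, while the hidden size and number of output classes are assumptions for illustration:</p>
<pre><code>import timm
import torch
import torch.nn as nn

class OrganClassifier(nn.Module):
    """Per-slice 2D CNN features followed by an LSTM over the 20 slices of one organ."""
    def __init__(self, backbone="resnet50d", in_chans=6, n_classes=3, hidden=256):
        super().__init__()
        self.cnn = timm.create_model(backbone, pretrained=True,
                                     in_chans=in_chans, num_classes=0)  # feature extractor
        self.lstm = nn.LSTM(self.cnn.num_features, hidden,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(hidden * 2, n_classes)

    def forward(self, x):                      # x: (batch, 20, 6, 224, 224)
        b, t, c, h, w = x.shape
        feats = self.cnn(x.view(b * t, c, h, w)).view(b, t, -1)
        seq, _ = self.lstm(feats)
        return self.head(seq.mean(dim=1))      # pool over slices to get organ-level logits

model = OrganClassifier()
logits = model(torch.randn(2, 20, 6, 224, 224))
</code></pre>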
<h1><strong>Final Submission</strong></h1>
<p>3D Seg</p>
<ul>
<li>5fold effv2s unet (128x128x128)</li>
</ul>
<p>2.5D Cls</p>
<ul>
<li>5fold resnet 50d (224x224)</li>
</ul> |
Kaggle - LLM Science Exam | 14th place solution: An attempt to reverse-engineer the dataset | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kaggle - LLM Science Exam
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Unlike many other competitions, I believe in this contest emphasis was placed on the quality and preprocessing of the data rather than complexity of the model architecture. Since we only had 200 rows of initial data, data gathering became a pivotal part of a solution. Here are some key points of my approach:</p>
<h2>Manual labeling of train.csv</h2>
<p>The starting point was the train.csv. I manually found the origin page of each question. Although I don't believe it was 100% accurate, it served its purpose.<br>
<strong>Key Discovery:</strong> A significant observation during this phase was that many questions were generated from the same categories. Some origin pages were from non-scientific articles like "Droste effect" and "Triskelion" from "Category:Symmetry". Yet, the majority of other articles from this category, like "Spontaneous symmetry breaking", were related to science. This led to my hypothesis that <strong>the competition dataset was possibly generated from science categories</strong> rather than specific science pages. Moreover, this seemed like an automated extraction rather than a manual selection. </p>
<h2>Constructing a category-centric dataset</h2>
<p>With the newfound understanding of categories playing a significant role, I focused on constructing a category-centric dataset:</p>
<ul>
<li><strong>Adjacent by Category (AC) Dataset:</strong> I extracted all categories from the manually labeled csv and fetched every page within these categories. The pages were then split into segments, approximately 1500 characters each. </li>
<li><strong>Training Dataset Creation:</strong> Using gpt3.5, I created 5 multiple-choice questions, each having 5 options, for every 1500-character segment. This generated a massive dataset of about 750k segment-question-answer triplets, which became the one and only dataset of my model training. <a href="https://www.kaggle.com/datasets/chubasik/kaggle-llm-science-exam-gpt-ac-dataset/" target="_blank">Link</a></li>
</ul>
<h3>Dataset Variations</h3>
<p>To expand the available data, I introduced several versions of the AC dataset. These datasets were only used during model inference/validation, not the training part:</p>
<ul>
<li><strong>Grandparent (GP) Dataset:</strong> Starting from the grandparent categories (i.e parent categories of parent categories) of pages in the labeled train.csv, I fetched every child and grandchild page within these categories. </li>
<li><strong>Natural Sciences (NS) Dataset:</strong> A graph search on "Category:Natural_sciences" with a 5-level cutoff covered 193 out of 200 pages from train.csv. To include the remaining 7 pages I added pages from other categories with varying cutoff levels ("Category:Mathematics", "Category:Measuring_instruments", "Category:Crystals", "Category:History_of_science_by_discipline"). </li>
<li><strong>GP Plus 1 (GPP1) Dataset:</strong> An enhanced version of the GP dataset, this included child categories of the leaf categories. It was also heavily modified by manually removing hundreds of unrelated categories, e.g. "Category:Soundtrack_albums_by_year". </li>
<li><strong>AC-750 Dataset:</strong> Similar to the AC dataset but with segments of 750 characters. </li>
</ul>
<p>(The dataset nomenclature was improvised and it later stuck) </p>
<h2>Article search using vector and TFIDF search</h2>
<p>With the datasets ready, I utilized a hybrid search strategy. A combination of vector (with e5 and bge models finetuned on AC) and TF-IDF similarities was used to identify the most relevant excerpts corresponding to the questions.</p>
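<p>A simplified sketch of combining the two similarity signals; the embedding model, weights and n-gram range are illustrative choices, not the exact configuration used:</p>
<pre><code>import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sentence_transformers import SentenceTransformer

passages = ["first candidate excerpt ...", "second candidate excerpt ..."]
question = "example question with its five options appended"

encoder = SentenceTransformer("BAAI/bge-large-en")          # stand-in for the finetuned e5/bge
p_emb = encoder.encode(passages, normalize_embeddings=True)
q_emb = encoder.encode([question], normalize_embeddings=True)
dense_sim = (q_emb @ p_emb.T)[0]

tfidf = TfidfVectorizer(ngram_range=(1, 2), sublinear_tf=True)
p_tfidf = tfidf.fit_transform(passages)
sparse_sim = cosine_similarity(tfidf.transform([question]), p_tfidf)[0]

score = 0.5 * dense_sim + 0.5 * sparse_sim                  # illustrative weighting
top_ids = np.argsort(-score)[:3]
</code></pre>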
<h2>Final Model Ensemble</h2>
<p>My final solution was an ensemble of three Debertas(ForMultipleChoice) with some insignificant differences, like (1) the number of output layers, (2) the order of context-question-option in their inputs, (3) the model awareness of other options. Each of these models was enhanced with 4 distinct context augmentations - GP, NS, GPP1, AC-750. <br>
In conclusion, this competition was a great learning experience. By observing patterns, recognizing the significance of categories, and implementing diverse strategies, I managed to build a solution that was robust enough to survive the transition to private leaderboard.</p> |
CommonLit - Evaluate Student Summaries | 17th Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit - Evaluate Student Summaries
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congrats to all the winners, and thanks to organizers and all participants. <br>
This is my first NLP competition and I have learnt a lot to deal with this area.</p>
<p><strong>Overview</strong><br>
Weighted average of 4 models:</p>
<ol>
<li>deberta-v3-large. Using all of title, question, prompt_text and summary. GroupKFold with prompt_id. (CV:0.488, Public:0.430, Private:0.463)</li>
<li>Same as model1, but using mask augmentation and trained on all data.</li>
<li>Same as model2, using mask augmentation and trained on all data (only the seed is different from model2).</li>
<li>LGBM trained from the output of debereta-v3-large (prompt_text is not used for this model) and handcraft features.</li>
</ol>
<p>Ensemble Score: Public:0.422, Private:0.459</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5568744%2Fbe6e40d6a2a2017d84435e1ab12dce53%2Fcommonlit_solution.png?generation=1697117455399929&alt=media" alt=""></p>
<p><strong>Training Settings (model2,3)</strong></p>
<ul>
<li>Epoch: 3</li>
<li>Loss: SmoothL1Loss</li>
<li>lr: 1e-5</li>
<li>Optimizer: Adam (weight_decay: 5e-4, beta=(0.9, 0.999))</li>
<li>Scheduler: cosine scheduler</li>
<li>token length: 2024 (training: 2024, inference: 1664)</li>
</ul>
<p><strong>Did work</strong></p>
<ul>
<li>Mask augmentation<ul>
<li>Mask augmentation doesn't improve CV, but the loss curve became more stable. I tuned hyperparameters in GroupKFold training and applied the same parameters for the training with all data (model2, model3); see the sketch after this list.</li></ul></li>
<li>Freezing Layers</li>
</ul>
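<p>A minimal sketch of such token-level mask augmentation, assuming a Hugging Face tokenizer; the masking probability is an illustrative value rather than the tuned one:</p>
<pre><code>import random
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v3-large")

def mask_augment(token_ids, p=0.1):
    """Randomly replace non-special tokens with the [MASK] token id."""
    special = set(tokenizer.all_special_ids)
    return [tokenizer.mask_token_id
            if (tok not in special and random.random() < p) else tok
            for tok in token_ids]

encoded = tokenizer("an example student summary", add_special_tokens=True)
augmented_ids = mask_augment(encoded["input_ids"], p=0.1)
</code></pre>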
<p><strong>Did not work</strong></p>
<ul>
<li>Text cleaning</li>
<li>AWP</li>
<li>Back translation augmentation</li>
</ul> |
Kaggle - LLM Science Exam | 21st place private LB solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kaggle - LLM Science Exam
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi fellow kagglers,</p>
<p>We are the “Contextual Kagglers” team (<a href="https://www.kaggle.com/alexryzhkov" target="_blank">Alex</a>, <a href="https://www.kaggle.com/titericz" target="_blank">Giba</a>, <a href="https://www.kaggle.com/olyatsimboy" target="_blank">Olya</a> and <a href="https://www.kaggle.com/ey4read1mk4realtv" target="_blank">Dima</a>) with the 0.916 private LB score (21st place) and 0.928 public LB score (11th place). Yep, we were in gold medals on public and gone out of it on private - sad news 🙁</p>
<p>Here we want to disclose our solution and share some interesting insights on the data and competition itself. One more thing - I want to mention here that the <strong>majority of ideas and work were done by <a href="https://www.kaggle.com/olyatsimboy" target="_blank">Olya</a> and <a href="https://www.kaggle.com/ey4read1mk4realtv" target="_blank">Dima</a></strong> so they have had a huge impact on the final solution score. I also would like to congratulate <a href="https://www.kaggle.com/olyatsimboy" target="_blank">Olya</a> with <strong>her new Kaggle Competitions Expert tier</strong>.</p>
<h2>Final structure</h2>
<p>The <strong>structure of our solution</strong> is like the scheme below:</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F19099%2Fea90eb23380f0e712a493076d6f96e3c%2F2023-10-16%20%2016.21.43.png?generation=1697462519468216&alt=media" alt=""></p>
<h2>Retrieval strategies</h2>
<p>It contains <strong>2 different context retrieval strategies</strong>:</p>
<ul>
<li>the first is based on the <strong>full pack of Wikipedia article sentences (around 117 mln elements)</strong> and <code>all-MiniLM-L6-V2</code>- and <code>GTE-base</code>-based quantized FAISS indices (we use an IVF, PQ64 index; see the sketch after this list). Here we used <strong>both variants (prompt only and prompt + all answers)</strong> to find the best context - so the final count of the contexts from this part is 4:<ul>
<li><code>all-MiniLM-L6-V2</code> with prompt,</li>
<li><code>all-MiniLM-L6-V2</code> with prompt and all answers, </li>
<li><code>GTE-base</code> with prompt, </li>
<li><code>GTE-base</code> with prompt and all answers</li></ul></li>
<li>the second one is the TF-IDF retrieval part from the great kernel created by <a href="https://www.kaggle.com/mbanaei" target="_blank">@mbanaei</a> with the only change: we added <code>sublinear_tf=True</code> for the 200k parsed paragraphs dataset; according to our validation this was the best option. So here we receive 2 more context variants - for the 200k and 270k datasets. </li>
</ul>
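<p>A rough sketch of building and querying such a quantized index with FAISS; the embedding dimension matches all-MiniLM-L6-V2, while the nlist/nprobe values and the placeholder embeddings are illustrative:</p>
<pre><code>import faiss
import numpy as np

d = 384                                              # all-MiniLM-L6-V2 embedding size
emb = np.random.rand(100_000, d).astype(np.float32)  # placeholder for the sentence embeddings
faiss.normalize_L2(emb)

index = faiss.index_factory(d, "IVF4096,PQ64", faiss.METRIC_INNER_PRODUCT)
index.train(emb)                                     # in practice, train on a large subsample
index.add(emb)

index.nprobe = 32                                    # nprobe was the main knob worth tuning
query = emb[:1]
scores, ids = index.search(query, k=10)
</code></pre>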
<h2>Single models</h2>
<p><strong>As for single models</strong>, we have used 2 common architectures as a base for our fine tuned checkpoints:</p>
<ul>
<li>DeBERTaV3 Large</li>
<li>LongFormer Large<br>
We finetuned our models using the 60k <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> dataset and our generated STEM 39k dataset. To enrich the variety of contexts for the 39k custom dataset we generated refined contexts using the GTE-base index, both with prompt only and with prompt plus all answers. For finetuning we trained <code>ckpt-3850</code>, <code>ckpt-7200</code> and <code>ckpt-8750</code> for the DeBERTa architecture; at each step we relabeled the least confident samples, where the finetuned model put the correct answer in the last place. For LongFormer we took only the last checkpoint due to the inference time limitations. During inference we used a 2300-character limited context for the DeBERTa-s and no limit for LongFormer (we grid-searched over 2000, 2100, 2200, 2300, 2400 and 2500 locally on a 500-sample dataset, and the 2300 limit appeared to be optimal in terms of inference time and the final metric MAP@3).</li>
</ul>
<h2>What we have tried and it didn’t work:</h2>
<ul>
<li>CutOut, CutMix augmentations</li>
<li>separate models for each contexts</li>
<li>reward training with and without hard negatives</li>
<li>multiple negatives loss</li>
<li>RoBERTa, ELECTRA, BERT finetuning</li>
<li>averaging over answer order during retrieval: <code>mean(prompt + A + B + C + D + E, prompt + E + A + B + C + D , prompt D + E + A + B + C … )</code> gives exactly the same score on 500 samples, while <code>mean(prompt + A, prompt + B, prompt + C, prompt + D, prompt + E)</code> scored lower on MAP@3</li>
<li>t5, e5, GTE and BGE, but for <a href="https://www.kaggle.com/mbanaei" target="_blank">@mbanaei</a> datasets, TF-IDF was the best option</li>
<li>different index versions, the only meaningful parameter was nprobs</li>
<li>TF-IDF ranker + BM25 as reranker.</li>
<li>Training a single question+single answer pair classifier.</li>
</ul>
<p><strong>We do regret about not probing the other embedders’ options for full wikipedia index creation and using only sentences not chunks.</strong></p>
<h2>Compositions</h2>
<p><strong>To create compositions</strong> we have checked a lot of crazy variants including blending, stacking and backup strategy - all of them were tested and compared on the 500 rows publicly available (200 + 300 rows). The best variant we have found here is to blend 4 predictions from the same DeBERTaV3 checkpoint based on contexts from the first retrieval strategy to receive so called <code>blend_7200</code> and <code>blend_8500</code> along with average backup for predictions <code>DeBERTa_3850</code> using the LongFormer predictions. The final formula looks like: 0.58 * blend_7200 + 0.42 * blend_8500 + TF-IDF_backups<br>
Important notes we found about blending:</p>
<ul>
<li>Blending predictions for more than 4 models at a time on the 500 rows validation dataset decrease the MAP@3 score on both public and private LB</li>
<li>Blending predictions compositions from different retrieval strategies also made the things worse in comparison with simple averaging</li>
<li>Backup strategy for TF-IDF retrieval works better if we replace <code>DeBERTa_3850</code> predictions with <code>max_probability < 0.514</code> using the blend <code>0.5 * DeBERTa_3850 + 0.5 * LongFormer</code> instead of the LongFormer predictions themselves (see the sketch after this list).</li>
</ul>
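<p>A small sketch, with placeholder probability arrays, of the weighted blend and the confidence-based backup described above:</p>
<pre><code>import numpy as np

rng = np.random.default_rng(0)
n = 4                                            # placeholder number of questions
blend_7200   = rng.dirichlet(np.ones(5), n)      # per-option probabilities, shape (n, 5)
blend_8500   = rng.dirichlet(np.ones(5), n)
deberta_3850 = rng.dirichlet(np.ones(5), n)
longformer   = rng.dirichlet(np.ones(5), n)

# Backup: where DeBERTa_3850 is not confident enough, average it with LongFormer.
low_conf = deberta_3850.max(axis=1) < 0.514
deberta_3850[low_conf] = 0.5 * deberta_3850[low_conf] + 0.5 * longformer[low_conf]

# Weighted blend of the two context blends (the TF-IDF backup term enters analogously).
final = 0.58 * blend_7200 + 0.42 * blend_8500
</code></pre>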
<h2>Submissions timing</h2>
<p>As our notebooks can run only for 9 hours max, we <strong>need to monitor how long each submission was</strong>. This can help us to find out how many more models we can add to our solution and also helps to find out in what part the OOM/exception was. To check the submission time we have created a specific script which was run in tmux session with <code>while True</code> loop inside to check the submission status every 1 minute. So we know that <strong>our best submission is 08:52:43 out of 9 hours timeout</strong> - pretty awesome usage of the time, right? 🙂 </p>
<p>As this is not the last kernel competition, <strong>I would like to share my script <a href="https://www.kaggle.com/code/alexryzhkov/submissions-timer-script/notebook" target="_blank">here</a></strong>, which you can use in the further competitions on your side. You can also change the <code>GMT_OFFSET</code> variable to make it more suitable with your timezone.</p>
<h2>Another tricks</h2>
<p>We also tried to use <strong>pseudolabeling techniques</strong> to make our predictions look more similar to the test data distribution. Basically, adding the 4k test samples on top of the 500-row OOF dataset and running ML models didn't score better than the simple weighted average approach.</p>
<h2>Conclusion</h2>
<p>Thanks Kaggle and other participants for such a great competition. For me even after 12 years of kaggling this is still full of learning and fun.</p> |
RSNA 2023 Abdominal Trauma Detection | Kaggle Solution Write-Up | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA 2023 Abdominal Trauma Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><pre><code>import pandas as pd
import numpy as np

df1 = pd.read_csv("/kaggle/input/rsna-2023-abdominal-trauma-detection/sample_submission.csv")
df1

df1.to_csv("submission.csv", index=False)
df1
</code></pre>
Kaggle - LLM Science Exam | 53rd place solution: Ensemble model | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kaggle - LLM Science Exam
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, thank you to Kaggle for organizing this competition and to my teammates. <a href="https://www.kaggle.com/aditya2718" target="_blank">@aditya2718</a> <a href="https://www.kaggle.com/namanbansal4122" target="_blank">@namanbansal4122</a> </p>
<p>We participated in this competition to learn about LLMs. At that time we were noobs in this field (we still are 😅), so our initial scores were a mess:<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F13602048%2F7be717605e918798f5aff66cb5e475d0%2FKaggle%20-%20LLM%20Science%20Exam%20_%20Kaggle%20-%20Google%20Chrome%2011-10-2023%2019_13_02.png?generation=1697031905209471&alt=media" alt=""><br>
But somehow, at the end, we managed to finish with a score of:<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F13602048%2F53e21a7e4efcb0d2b35aaffd9b0b0c4a%2FKaggle%20-%20LLM%20Science%20Exam%20_%20Kaggle%20-%20Google%20Chrome%2011-10-2023%2019_16_57.png?generation=1697032143163355&alt=media" alt=""></p>
<p>We used TF-IDF search to retrieve context from <a href="url" target="_blank">https://www.kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded</a> .</p>
<p>During the initial days, we tried a few different models and got the following results (note: these results were obtained with no context given to the model, trained on the given data of 200 questions):</p>
<table>
<thead>
<tr>
<th>model_name</th>
<th>public_score</th>
<th>private_score</th>
</tr>
</thead>
<tbody>
<tr>
<td>NystromformerForMultipleChoice</td>
<td>0.485</td>
<td>0.480</td>
</tr>
<tr>
<td>Bert Base cased</td>
<td>0.493</td>
<td>0.505</td>
</tr>
<tr>
<td>Flan-t5</td>
<td>0.469</td>
<td>0.456</td>
</tr>
<tr>
<td>Deberta base</td>
<td>0.679</td>
<td>0.681</td>
</tr>
<tr>
<td>Distil Bert</td>
<td>0.538</td>
<td>0.525</td>
</tr>
</tbody>
</table>
<p>We decided to go with Deberta and created an ensemble of 6 models (this was the maximum number of models that did not cause a time-out).</p>
<p>These models were trained on different sample sizes of the 60k data <a href="url" target="_blank">https://www.kaggle.com/competitions/kaggle-llm-science-exam/discussion/436383</a>, as we did not have the resources to train on the entire dataset. Some samples were of size 5k or 6k, and two were of size 20k.</p>
<p>The weights of the models were decided by the following formula:<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F13602048%2Fd73e08a44e663e99281920dff2fd3367%2FDocument1%20-%20Word%2011-10-2023%2021_25_31.png?generation=1697039794626917&alt=media" alt=""></p>
<p>Some prompts which we tried that improved our results :</p>
<ul>
<li>"Answer the following questions by eliminating wrong options:\n" </li>
<li>"Read the following passage carefully and answer the questions :\n"</li>
<li>"Please re-evaluate your answer before giving final answer.\n"</li>
</ul>
<p><strong>Things we tried that failed</strong>:</p>
<ul>
<li>Concept of negative weights: we prompted the model to identify the wrong answer and then subtracted the weighted probability from the ensemble. It got a public score of 0.903. It was not exactly a failure, but it was below our expectations.</li>
<li>Implementing the Shepherd model: we tried to implement the model given in the paper <a href="https://arxiv.org/abs/2308.04592" target="_blank">https://arxiv.org/abs/2308.04592</a>, which was meant to act as a critic for our ensemble. Because we did not have sufficient training resources, we failed to extract any meaningful result.</li>
</ul>
<p>We would also like to thank <a href="https://www.kaggle.com/cpmpml" target="_blank">@cpmpml</a>, <a href="https://www.kaggle.com/steubk" target="_blank">@steubk</a>, <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>, <a href="https://www.kaggle.com/radek1" target="_blank">@radek1</a> and <a href="https://www.kaggle.com/simjeg" target="_blank">@simjeg</a> for providing datasets and various insights into the competition.</p>
Bengali.AI Speech Recognition | 57th Place Solution for the Bengali.AI Speech Recognition Competition (Top 8%) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Speech Recognition
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you Bengali.AI & Kaggle for organizing this competition. It was a very wonderful learning experience for my team <a href="https://www.kaggle.com/kamaruladha" target="_blank">@kamaruladha</a> <a href="https://www.kaggle.com/ariffnazhan" target="_blank">@ariffnazhan</a>. This is our first bronze medal and first-time dealing with audio data! </p>
<p>Business context: <a href="http://www.kaggle.com/competitions/bengaliai-speech/overview" target="_blank">www.kaggle.com/competitions/bengaliai-speech/overview</a><br>
Data context: <a href="http://www.kaggle.com/competitions/bengaliai-speech/data" target="_blank">www.kaggle.com/competitions/bengaliai-speech/data</a></p>
<h1>Overview of the Approach</h1>
<h2>ASR Model</h2>
<p>Our most performing model is from finetuning <code>ai4bharat/indicwav2vec_v1_bengali</code> trained for 2 epochs with training data containing both train and valid split. </p>
<h2>Dataset</h2>
<p>We took 80% of the training data and 96% of the validation split by random sampling for our training set. The dataset is also preprocessed first using bnunicodenormalizer. We noticed a difference in performance for preprocessed data compared to unprocessed data.</p>
<h2>Dataset Cleaning</h2>
<p>We conducted data cleaning on the audio transcription by:</p>
<ul>
<li>Removing all kinds of punctuation. Manually added quotations into the list.</li>
</ul>
<pre><code>import string

base = string.punctuation
quotes = "…"   # the quotation characters that were added manually
punct = base + quotes
</code></pre>
<ul>
<li>Strip any whitespaces at the front and at the back of string.</li>
<li>Normalize the transcriptions using bnunicodenormalizer.</li>
</ul>
<pre><code>!pip install bnunicodenormalizer

from bnunicodenormalizer import Normalizer
bnorm = Normalizer()

def norm(transcription):
    text_list = []
    texts = transcription.split()
    for text in texts:
        result = bnorm(text)
        if len(result["ops"]) > 0:
            text_list.append(result["ops"][0]["after"])
        else:
            text_list.append(text)
    normalized_transcription = " ".join(text_list)
    return normalized_transcription
</code></pre>
<p>In order for the normalizer to work on our data, we had to split it into tokens (words). This is due to the normalizer not accepting a full sentence and can only process single words. We also noticed that the process could take some time. Hence, we implemented multi-threading to speed up the entire normalization process. We applied the data cleaning process on the base dataset from Bengali.AI and also Google Fleurs.</p>
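<p>A small sketch of the multi-threaded normalization step, reusing the <code>norm</code> function above; the dataframe and column name are hypothetical:</p>
<pre><code>from concurrent.futures import ThreadPoolExecutor

transcriptions = train_df["sentence"].tolist()   # hypothetical dataframe/column
with ThreadPoolExecutor(max_workers=8) as ex:
    normalized = list(ex.map(norm, transcriptions))
train_df["sentence"] = normalized
</code></pre>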
<ul>
<li>Calculated the length for each of the audio files.</li>
</ul>
<p>Based on our analysis, we noticed the audio files had varying lengths. This resulted in unexpected OOM during validation. Thus, we filtered the audio files and only kept those of 18~20 seconds.</p>
<ul>
<li>Mixing Training and Validation Data</li>
</ul>
<p>The data given by Bengali.AI was already separated into 2 (training and validation). We noticed that the validation data was somewhat better as compared to the training data. Thus, we mixed some of the validation data into the training set. </p>
<ul>
<li>[Untested Process]</li>
</ul>
<p>In total we had almost 1 million audio files. From our analysis and preliminary training evaluation, we noticed that the data contains audio of very bad quality. This affected the performance of the model directly and prevented the model from improving. Initially we thought of calculating the amplitude of the audio files, but the amplitude does not correlate with background noise. What we decided to do instead was to filter out the audio files that the model has trouble transcribing.</p>
<p>To put it simply, by training the model on the cleaned data, we would use the best checkpoint and transcribe the training and validation data. The data that produces the worst WER (word error rate) shall be excluded. This is to ensure that the model is only trained and validated on good quality data. </p>
<h2>Language Model</h2>
<p>We attached a language model trained using the kenlm library on top of our wav2vec model predictions to improve our score, and used Word Error Rate (WER) and Character Error Rate (CER) as our evaluation metrics to benchmark the predictions. We trained the language model with the texts from the competition datasets (train and validation) and Google Fleurs Bengali.</p>
<p>After rigorous evaluation, the 3-gram model showed the best performance compared with the 4-gram and 5-gram models.</p>
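<p>A rough sketch, under assumptions, of how such an n-gram LM can be trained with KenLM's command-line tools and attached to the CTC output with pyctcdecode; the file names are illustrative and <code>processor</code> is assumed to be the acoustic model's Wav2Vec2 processor:</p>
<pre><code># Train and binarize a 3-gram KenLM model on the combined corpus (one sentence per line).
!lmplz -o 3 --text bengali_corpus.txt --arpa lm_3gram.arpa
!build_binary lm_3gram.arpa lm_3gram.bin

from pyctcdecode import build_ctcdecoder

vocab_dict = processor.tokenizer.get_vocab()                 # CTC vocabulary of the wav2vec2 model
vocab = [tok for tok, _ in sorted(vocab_dict.items(), key=lambda kv: kv[1])]
decoder = build_ctcdecoder(vocab, kenlm_model_path="lm_3gram.bin")
text = decoder.decode(logits)                                # logits: (time, vocab) numpy array
</code></pre>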
<h2>Details of Submission</h2>
<h3>Model Saturation</h3>
<p>In the early stages of our training phase, we dumped the entire dataset onto the model. Not only did we face OOM problems, but we slowly noticed that the model became saturated. Due to not having an early stopping function and not stopping the training at certain points, the model performed even worse compared to those who only trained with 10%-20% of the data.</p>
<p>To prevent the model from becoming saturated, we had to feed the data in byte-sized pieces (pun intended). As it turns out, feeding a small partition of the data improved the model's performance quite drastically. From there, we tested the model using the test set. If the performance was increasing, we maintained the current data partition. If the performance was constant or decreasing, we would increase the data partition, i.e. add in more data.</p>
<p>By using this strategy, we were able to prevent the model from being saturated. The only downside is that we had to keep track of the model's performance and constantly push out the checkpoints. </p>
<p>All in all, implementing these strategies showed a significant improvement in the model's performance compared to not cleaning the data at all. In our previous projects, we would only conduct an extremely minor data cleaning process so as to preserve the structure of the data. We wanted the model to learn from both clean and unclean data in the hope of making it robust. Unfortunately, this was not the case.</p>
<p>Not cleaning the data at all would degrade the performance and training the model for long periods of time will result in model saturation.</p>
<h3>What didn't work for our team</h3>
<p>We experimented with a few other pretrained models, but their result is not as good as our final model. We tried models from:</p>
<ul>
<li>facebook/mms-1b</li>
<li>wav2vec2-large-xlsr-53</li>
<li>whisper-small</li>
<li>conformer-rnn-t</li>
</ul>
<p>We also noticed a degradation in performance when we trained the model longer; our validation score no longer correlated with the public LB.</p>
<p>We experimented with adding Google Fleurs to our dataset for training, but the model trained on that dataset didn't improve our submission score much.</p>
<p>We tried increasing the value in wav2vec2 parameters for mask_time_prob & mask_feature_prob but we didn't notice any improvement in performance.</p>
<p>We learned that it's very important that the dataset is properly cleaned and filtered in this competition. It could help boost the score if we manage to filter poor quality audio with high WER. We didn't manage to work more on preprocessing the dataset.</p>
<h3>Sources</h3>
<p>Pretrained model: <a href="https://huggingface.co/ai4bharat/indicwav2vec_v1_bengali" target="_blank">https://huggingface.co/ai4bharat/indicwav2vec_v1_bengali</a><br>
Source Code: <a href="https://github.com/malaysia-ai/bengali.ai-stt-competition" target="_blank">https://github.com/malaysia-ai/bengali.ai-stt-competition</a><br>
Inference Notebook: <a href="https://www.kaggle.com/code/aisyahhrazak/inference-wav2vec2-kenlm" target="_blank">https://www.kaggle.com/code/aisyahhrazak/inference-wav2vec2-kenlm</a></p> |
Kaggle - LLM Science Exam | 5th place solution: Llama 2 70B meets Sparse & Dense Retrievals from Own parsed wikipedia dataset | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kaggle - LLM Science Exam
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you to the organizers and congratulations to all the participants.</p>
<p>There are really lots of things that can be done, from dataset preparation and model training to post-processing (how to utilize the trained LLM model), and it was a very tough competition for me. But I enjoyed the competition overall!<br>
I would like to thank my team members <a href="https://www.kaggle.com/zaburo" target="_blank">@zaburo</a>, <a href="https://www.kaggle.com/qhapaq49" target="_blank">@qhapaq49</a>, <a href="https://www.kaggle.com/charmq" target="_blank">@charmq</a>, <a href="https://www.kaggle.com/flowlight" target="_blank">@flowlight</a> for their hard work too. </p>
<p>The figure shows our retrieval to the model overall pipeline.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F518134%2F8f34b4a6918523b94b9ff6fec9b10da1%2Fkaggle-llm-5th-overall.jpg?generation=1697001871726670&alt=media" alt=""></p>
<p> --> Write-up Updated!</p>
<h2>Short summary</h2>
<ul>
<li>We successfully run Llama 70B model on kaggle notebook</li>
<li>We used BM-25 based sparse retrieval (using <a href="https://github.com/castorini/pyserini" target="_blank">pyserini</a> library whose backend is <a href="https://lucene.apache.org/" target="_blank">Apache Lucene</a>) to search all the wikipedia pages.</li>
<li>We parsed wikipedia dataset by ourselves to cover most of the pages with number information.</li>
</ul>
<h2>Dataset creation</h2>
<h3>Wikipedia dataset</h3>
<p>In the early stage of the competition when I checked the given train.csv and tried to answer manually, I felt this competition is a “retrieval” competition rather than a model training competition. We also noticed some pages or numbers are missing in the publicly available dataset. So I decided to parse our wikipedia dump manually. </p>
<p>Below are comparison tables of datasets:</p>
<table>
<thead>
<tr>
<th>Dataset</th>
<th>Parser</th>
<th>Page</th>
<th>Text info</th>
</tr>
</thead>
<tbody>
<tr>
<td>Kaggle published dataset by <a href="https://www.kaggle.com/jjinho" target="_blank">@jjinho</a> <br> <a href="https://www.kaggle.com/datasets/jjinho/wikipedia-20230701" target="_blank">jjinho/wikipedia-20230701</a></td>
<td><code>wikitextparser</code></td>
<td>Some pages are dropped</td>
<td>All templates are dropped.<br>All the “\n” are dropped (difficult to read by human).</td>
</tr>
<tr>
<td>Huggingface dataset<br> - <a href="https://huggingface.co/datasets/wikipedia" target="_blank">wikipedia</a> <br> - <a href="https://huggingface.co/datasets/graelo/wikipedia" target="_blank">graelo/wikipedia</a></td>
<td><code>mwparserfromhell</code></td>
<td>Better coverage</td>
<td>All templates are dropped.<br> <code><math></code>, <code><ref></code>, <code><table></code> tags are dropped.</td>
</tr>
<tr>
<td>Our dataset</td>
<td><code>wikitextparser</code> + custom template processing</td>
<td>Better coverage</td>
<td><code>{val}</code> and <code>{math}</code> templates text are kept (Other templates are dropped).<br> Keep <code><math></code> tag, but <code><ref></code>, <code><table></code> tags are dropped.</td>
</tr>
</tbody>
</table>
<p>Huggingface's Wikipedia dataset covers most of the pages but numbers are missing, seemingly due to a <code>mwparserfromhell</code> parser issue. So, I replaced the parser with <code>wikitextparser</code> in the <a href="https://huggingface.co/datasets/graelo/wikipedia" target="_blank">graelo/wikipedia</a> code base (there was a helpful instruction on how to run the code at the bottom of the dataset description!).<br>
I believe our dataset covers most of the pages with numbers or math expression information.</p>
<h3>QA dataset creation</h3>
<p>Many datasets were shared publically during the competition, but we also tried to make it by ourselves as well. Below is the prompt for ChatGPT 3.5 that I used to generate the QA dataset.</p>
<pre><code>system_content =
user_content_template_qa = Template(
new lineWhat ...Which following statements ...What did ` person` doWhat was ...skipmissing baryonic mass
) MOND a theory reduces observed missing baryonic mass galaxy clusters postulating existence a new form matter called
) MOND a theory increases discrepancy observed missing baryonic mass galaxy clusters measured velocity dispersions a factor a factor
) MOND a theory explains missing baryonic mass galaxy clusters was previously considered dark matter demonstrating mass form neutrinos axions.
) MOND a theory reduces discrepancy observed missing baryonic mass galaxy clusters measured velocity dispersions a factor a factor
) MOND a theory eliminates observed missing baryonic mass galaxy clusters imposing a new mathematical formulation gravity require existence dark matter.
A)
Let's start. Here TEXT: $title\n$
</code></pre>
<p>I only give ChatGPT 3.5 an abstract and 1 paragraph from specific page as <code>$text</code>.</p>
<p>Since we could achieve very high scores like 0.995 on train.csv at the end of the competition, we used our generated dataset of 1000 questions for local validation.</p>
<h2>Retrieval</h2>
<h3>Sparse retrieval</h3>
<p>I'm not a professional in the NLP field. I surveyed existing research, especially related to retrieval-based LLMs. The <a href="https://aclanthology.org/P19-1612/" target="_blank">ORQA</a> paper influenced our approach: it reported that “On datasets where the questioner already knows the answer, a traditional IR system such as BM25 is sufficient.”. Since ChatGPT 3.5 was given the sentence to generate each QA, we thought the same applied in this competition too.<br>
So we read <a href="https://aclanthology.org/N19-4013/" target="_blank">BERTserini</a> paper and decided to use <a href="https://github.com/castorini/pyserini" target="_blank">pyserini</a> library, which is Python wrapper of <a href="https://github.com/castorini/anserini" target="_blank">anserini</a> that uses Apache Lucene in the backend. </p>
<p><a href="https://www.kaggle.com/code/strifonov/pyserini-scibert" target="_blank">This notebook</a> by <a href="https://www.kaggle.com/strifonov" target="_blank">@strifonov</a> was really helpful to run pyserini on kaggle notebook (thank you!).</p>
<p>Apache Lucene was very efficient and convenient. It was easy to index all the wikipedia dataset and search through BM25 score.<br>
We split each Wikipedia page into paragraphs (by “\n\n”). It only required several hours to index a total of 74M records. At inference time, searching is accelerated by multi-threading, and 200 searches finish in 2 min or so.</p>
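<p>A rough sketch of paragraph indexing and multi-threaded BM25 search with pyserini; the paths, query construction and parameters are illustrative, not the exact configuration used:</p>
<pre><code># Index paragraph-level JSONL documents ({"id": ..., "contents": ...}) into Lucene.
!python -m pyserini.index.lucene --collection JsonCollection --input wiki_paragraphs_jsonl/ --index wiki_bm25_index/ --generator DefaultLuceneDocumentGenerator --threads 8 --storeRaw

from pyserini.search.lucene import LuceneSearcher

searcher = LuceneSearcher("wiki_bm25_index/")
queries = ["example science question text"]              # e.g. the prompt of each test row
qids = [str(i) for i in range(len(queries))]
results = searcher.batch_search(queries, qids, k=2, threads=16)   # multi-threaded BM25 search
contexts = {qid: [searcher.doc(hit.docid).raw() for hit in hits]
            for qid, hits in results.items()}
</code></pre>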
<h3>Dense retrieval</h3>
<p>We created 2 types of dense retrieval.</p>
<p><strong>1. All the Wikipedia contents split by sentence wise block</strong></p>
<p>We used <a href="https://huggingface.co/hkunlp/instructor-xl" target="_blank">instructor-xl</a> for the embedding calculation and used <a href="https://github.com/facebookresearch/faiss" target="_blank">faiss</a> to create index.<br>
The original index is too big (300GB) to run on memory, so we quantized the index (300GB → 10GB) for this approach.</p>
<p><strong>2. STEM 270k contents split by paragraph wise block</strong></p>
<p>We used <a href="https://huggingface.co/BAAI/bge-large-en" target="_blank">bge-large-en</a> for the embedding calculation.<br>
And only used same pages published in <a href="https://www.kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded" target="_blank">STEM270k</a> dataset by <a href="https://www.kaggle.com/mbanaei" target="_blank">@mbanaei</a>.</p>
<p>We added “title” information in each block, so that we can obtain correct search result even when “it/he/she” is used to refer to the title.</p>
<p>In total, we used 3 types of retrievals, and combined them as a context to input to the LLM.</p>
<h2>Model (written by <a href="https://www.kaggle.com/zaburo" target="_blank">@zaburo</a>)</h2>
<p>Our team used Llama-2 7B, 13B, 70B and <a href="https://huggingface.co/mistralai/Mistral-7B-v0.1" target="_blank">Mistral 7B</a>. Since Mistral 7B was achieving better scores compared to Llama-2 of the same size, we used Mistral 7B and Llama-70B in our final submission. Based on the scores from MMLU and the results of our preliminary experiments, we used models without Instruction Tuning, such as Llama-2-70b-hf and Mistral-7B-v0.1.</p>
<p>For each model, we input the problem in the following format and had the model output the probability that the token corresponding to "▁A", "▁B", "▁C", "▁D", "▁E" comes immediately after this sentence. This yielded decent performance even with models without any fine-tuning.</p>
<pre><code></code></pre>
<p>Fine-tuning was done using QLoRA. We incorporated bitsandbytes' 4bit quantization and PEFT's LoRAModel into our training pipeline. The hyperparameters of QLoRA were almost exactly the ones in QLoRA's paper, except for the batch size which had to be lowered due to long input and the GPU memory constraint.</p>
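<p>A condensed sketch of this kind of setup with bitsandbytes 4-bit quantization and a PEFT LoRA adapter; the target modules and LoRA hyperparameters shown are close to the QLoRA paper defaults but are illustrative here:</p>
<pre><code>import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",
    quantization_config=bnb_config,
    device_map="auto",
)

lora_config = LoraConfig(
    r=64, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
</code></pre>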
<p>We expected that running Llama 70B on Kaggle Notebook would be challenging, so we started working on it from the early stages of the competition. We registered quantized weights (less than 40GB) for each layer as a dataset, and performed inference layer-by-layer. By applying xformers' <code>memory_efficient_attention</code> to the attention layers, we were able to keep memory consumption linear even with long contexts, leaving plenty of room in the GPU memory. We used only approximately 6GB of GPU memory for inference.</p>
<p>LLM can considerably change inference results depending on the order of choices, so we used a test time augmentation strategy in which we rotated the order of choices 5 times to average this effect. This TTA was most effective with pretrained models without any retrieval, and although the effect drastically decreased with retrievals or QLoRA, it remained effective to the end.</p>
<p>Naively carrying out TTA inference would increase the execution time 5 times, but rather than inferring the entire set 5 times, we concatenated all patterns of choice order like <code>{context} {Q} {A B C D E} {B C D E A} … {E A B C D}</code> and set the <code>attention_mask</code> appropriately to reduce the inference of the long common part to only once. By applying this technique, a 70B model with 5 times TTA worked within time for the full test set, and using a single 70B + BM25 Top2 + 5 times TTA resulted in a Private 0.925 score.</p>
<h2>Validation</h2>
<p>We noticed MAP@3 is a bit unstable metric, and can fluctuate a bit. Below are our strategy not to overfit to the public LB.</p>
<ul>
<li>We used 3 datasets, train.csv (200), <a href="https://www.kaggle.com/yalickj" target="_blank">@yalickj</a> <a href="https://www.kaggle.com/datasets/yalickj/dataset-wiki-new-1" target="_blank">dataset</a> (300), and our own dataset (1000, described above), for the local validation.<ul>
<li>We did not use these data in the training, even in the final submission.</li></ul></li>
<li>We tried to run various experiments with local dataset validation. We did not submit methods which do not work well in our local dataset. I think it prevents us from adopting various methods that overfit the public LB.<ul>
<li>For example, a custom query like <a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam/discussion/442595#2454966" target="_blank">this</a> did not work well in our experiment, so we did not adopt this.</li>
<li>Also, we tried embedding model fine tuning, but it did not work well in our local dataset, so we did not submit this too.</li></ul></li>
</ul>
<h2>Final Pipeline - Multi Stage Inference (written by <a href="https://www.kaggle.com/charmq" target="_blank">@charmq</a>)</h2>
<p>We noticed that putting multiple contexts in a prompt at the same time improves the performance of the model. However, because simultaneous input of contexts increases inference time, we used a multi-stage inference pipeline in which easy problems are answered by small models (mistral-7B) and difficult problems are answered by large models (llama2-70B). The final submission consists of three stages as follows:</p>
<p><strong>1st stage: ensemble of mistral-7B models</strong><br>
mistral-7B with BM25 top2 contexts + mistral-7B with instructor and bge contexts<br>
Inference on all data</p>
<p><strong>2nd stage: llama2-70B</strong><br>
llama2-70B with BM25 top2, instructor and bge contexts<br>
Inference on bottom 40% data with low confidence in the 1st stage</p>
<p><strong>3rd stage llama2-70B</strong><br>
llama2-70B with bge contexts (2~3 times longer than bge contexts used in the 1st and 2nd stage)<br>
Inference on bottom 5% data with low confidence in the 1st and 2nd stage</p>
<p>This multi stage pipeline achieved private:0.926, public:0.928.</p> |
RSNA 2023 Abdominal Trauma Detection | 59th Place Solution for the Detect and Classify Traumatic Abdominal Injuries Competition | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA 2023 Abdominal Trauma Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First and foremost, we extend our gratitude to RSNA for organizing this captivating competition, through which we gained invaluable insights. </p>
<h1>1. Context section</h1>
<ul>
<li>Business context: <a href="https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection/overview" target="_blank">https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection/overview</a></li>
<li>Data context: <a href="https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection/data" target="_blank">https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection/data</a></li>
</ul>
<h1>2. Overview of the Approach</h1>
<p>In this competition, the motivation is to contribute to the improvement of patient outcomes from traumatic injuries, which account for over 5 million global deaths each year. Blunt-force abdominal trauma, often caused by vehicle accidents, can damage organs and lead to internal bleeding. With traditional methods like physical exams and lab tests often being inconclusive, the emphasis shifts to the need for accurate medical imaging interpretations.</p>
<p>In this study, our models predict a probability for each of the different possible injury types and degrees using CT scans. While vital for evaluating abdominal trauma, CT scans can be challenging to decipher, especially when injuries are subtle or multiple. Our goal is to harness the power of AI and ML to better interpret CT scans.</p>
<h1>3. Details of the submission</h1>
<h2>3.1 Data Interpretation and Preprocessing</h2>
<p>The dataset is primarily stored in the .dcm (DICOM) format, which offers a detailed insight into traumatic injuries. The patients underwent CT scans, with each undergoing one to two scans, generally covering from the upper neck down to the region below the anus. However, the specific number of scan images per individual, per scan, remained variable. This variability was due to the unknown intervals between each scan slice and the differing heights of the individuals, introducing a level of complexity in the initial assessment of the data. </p>
<p>The scans could be classified into two types: complete and incomplete. A complete scan provides a comprehensive view, encompassing all organs within the range from the upper neck to below the anal region. In contrast, an incomplete scan is a localized examination, focusing on specific organs within this anatomical spectrum. While the training dataset came labeled with indications of completeness, such classifications were absent in the test dataset. The training dataset encompassed the CT scan images of 3,147 patients, while the test dataset was composed of approximately 1,300 patients’ scans.</p>
<p>When these multiple .dcm images from a single scan were stacked together, they coalesced to form a vivid 3D representation of the human body, encapsulating a plethora of details. Such data, especially in the field of medical imaging, is paramount to diagnose and comprehend the intricacies of internal injuries.</p>
<p>For convenience, we converted the given .dcm (DICOM) files into the more universally recognized .png format, facilitated by a <a href="https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection/discussion/427427" target="_blank">method</a> provided by Kaggle platform, streamlining subsequent operations and analyses. </p>
<h2>3.2 Model Pipeline Design</h2>
<h3>3.2.1 2D Semantic Segmentation Model</h3>
<h4>Segmentation with U-Net</h4>
<p>Our objective with U-Net was twofold: to identify the span of CT images that contained each organ (liver, spleen, kidneys, bowels) and to pinpoint their precise locations within each image. </p>
<p>We initially employed the Total Segmentator tool (available at: <a href="https://github.com/wasserth/TotalSegmentator" target="_blank">Total Segmentator on GitHub</a>), but faced significant time constraints due to the tool’s processing speed. This constraint necessitated a shift in our approach toward a more efficient solution, leading us to use the U-Net architecture. </p>
<p>We utilized the Total Segmentator to generate marked pixel points as input data. These annotated 2D images served as training data for our U-Net model. By harnessing the efficiency of U-Net, we trained a 2D segmenter to efficiently discern the presence and location of the targeted organs across the CT scan images. The refined segmentation process not only indicated which scans contained the organs of interest but also provided their spatial coordinates, thus streamlining the subsequent stages of our analysis. </p>
<p>The following figures are an illustration of segmentation with U-Net. The black-and-white figure on the left is the original CT scan image, and the right one is the one segmented. </p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2497266%2F79d5484239de43371fef834bdb1fd493%2Ffigure3.png?generation=1699408525140588&alt=media" alt="U-Net"></p>
<h4>Constructing 2.5D Input</h4>
<p>For this part, our objective was to create a 2.5D input data for the neural network. We implemented a strategy to select 32 evenly spaced images from each scan across the training dataset. Given that the minimum number of slices per scan was 44, this uniform selection process ensured comprehensive coverage of each individual’s scan while maintaining a focus on representation consistency for each organ. Our selection also guaranteed that every organ appeared in at least 4 separate images, thereby capturing the essential anatomical features needed for accurate analysis. </p>
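<p>A minimal sketch of the even slice selection; only the count of 32 slices comes from the description above:</p>
<pre><code>import numpy as np

def select_even_slices(num_slices, n=32):
    """Indices of n evenly spaced slices across a scan with num_slices images."""
    return np.linspace(0, num_slices - 1, n).round().astype(int)

indices = select_even_slices(215, n=32)   # e.g. a scan with 215 slices
</code></pre>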
<p>Then we used EfficientNet architecture. Each image was processed through the network, with the output of the penultimate layer capturing the feature representation extracted by the network after processing. Subsequently, a fully connected layer served as the final decision-making component, distinguishing between health and injury. </p>
<p>To synthesize the 32 image-derived insights into a single diagnostic outcome for each scan, we utilized Long Short-Term Memory (LSTM) implemented in TensorFlow. This approach allowed us to analyze the sequence of images capturing the spatial continuity and progression of anatomical structures. </p>
<p>The following is a prototype of our model.<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2497266%2F920be15372ec12783f6e8891e5f34336%2Ffigure2.png?generation=1699408559685906&alt=media" alt="model"></p>
<h3>3.2.2 Data Augmentation Techniques</h3>
<p>To bolster the quality and variety of our dataset, we initially planned to use the <code>albumentations</code> library for data augmentation. However, due to compatibility issues with TPU training, the library could not be directly applied. As a workaround, we drew inspiration from the augmentation techniques available in <code>albumentations</code> and replicated them using TensorFlow. Our customized augmentation pipeline included transformations such as horizontal and vertical flipping, transposition, and various blurring techniques like Gaussian blur, mean blur, and motion blur. We also introduced random Gaussian noise to further augment the dataset.</p>
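<p>A small sketch of such a TensorFlow-native augmentation function (flips, transpose and Gaussian noise are shown; the blur variants are omitted, square images are assumed, and the noise level is illustrative):</p>
<pre><code>import tensorflow as tf

def augment(image):
    image = tf.image.random_flip_left_right(image)
    image = tf.image.random_flip_up_down(image)
    image = tf.cond(tf.random.uniform(()) < 0.5,
                    lambda: tf.image.transpose(image),   # random transpose (square images)
                    lambda: image)
    noise = tf.random.normal(tf.shape(image), stddev=0.02)   # random Gaussian noise
    return tf.clip_by_value(image + noise, 0.0, 1.0)

# usage: ds = ds.map(augment, num_parallel_calls=tf.data.AUTOTUNE)
</code></pre>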
<h2>3.3 Loss Function Design</h2>
<h3>3.3.1 Organ-Specific Loss Calculation</h3>
<p>For each organ (liver, spleen, kidneys, and bowels), the loss is calculated by taking the slices that correspond to the starting and ending indices of the organ within each scan. For example. If the liver is visualized from slice index 1 to 5, the model will predict the probability of injury for these slices, and the five separate loss values will be computed. The final loss for the liver will be the average of these five loss values. This process is repeated for each organ to obtain individual organ losses. </p>
<h3>3.3.2 Extravasation Loss Calculation</h3>
<p>For extravasation, the loss is computed across 32 evenly selected images from each scan. This comprehensive approach ensures that the loss calculation for extravasation is representative of the entire scan.</p>
<h3>3.3.3 Any Injury Loss Calculation</h3>
<p>The <code>any_injury</code> loss is calculated automatically on the Kaggle platform based on the previously mentioned loss values. This loss serves as an aggregate indicator of the model’s ability to detect any form of injury present across the scan.</p>
<h3>3.3.4 Loss Function Weights</h3>
<p>Different weights are applied within the loss function: a healthy prediction is assigned a weight of 1, and an injury prediction a weight of 2, emphasizing the model’s need to accurately identify injuries.</p>
<h1>4. Sources</h1>
<ul>
<li>Standardizing Unusual Dicoms: <a href="https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection/discussion/427217" target="_blank">https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection/discussion/427217</a></li>
<li>Total Segmentator: <a href="https://github.com/wasserth/TotalSegmentator" target="_blank">https://github.com/wasserth/TotalSegmentator</a></li>
<li>Data in PNG Format: <a href="https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection/discussion/427427" target="_blank">https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection/discussion/427427</a></li>
<li>EfficientNet: Improving Accuracy and Efficiency through AutoML and Model Scaling: <a href="https://ai.googleblog.com/2019/05/efficientnet-improving-accuracy-and.html" target="_blank">https://ai.googleblog.com/2019/05/efficientnet-improving-accuracy-and.html</a></li>
</ul> |
RSNA 2023 Abdominal Trauma Detection | 4th Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA 2023 Abdominal Trauma Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you to RSNA and Kaggle for hosting this competition. Congratulations to all the winners and participants.</p>
<h2>Overview</h2>
<p>I employed a 2.5D pipeline and trained for both classification and segmentation tasks.</p>
<h2>Dataset</h2>
<p>I utilized 3D masks from TotalSegmentator and subsequently retrained a 2D model for the liver, spleen, bowel, kidney, and body. The DICOM images were rescaled to (1, 1, 5) and stored with a 5-channel mask.<br>
For each epoch, I first sampled N=14 frames from the rescaled array, then used the body mask to filter out hands or other irrelevant areas.<br>
I used the organ mask to limit the Z-axis space, as slices without the target organ might contain less valuable information.</p>
<h2>Model</h2>
<p>I used a Unet model integrated with Pyramid Vision Transformer V2 and MaxViT encoder. Transformers outperformed the convolutional models, especially for the extravasation target.</p>
<p>Pretraining on the 2D mask facilitated convergence.<br>
6 classification heads were employed for prediction.<br>
Perhaps the lack of an RNN layer is the primary reason I didn't match the performance of the top teams.</p>
<h2>Loss</h2>
<p>I used the CE loss with weights identical to the metric.</p>
<h2>Results</h2>
<table>
<thead>
<tr>
<th>Encoder</th>
<th>CV</th>
<th>LB</th>
</tr>
</thead>
<tbody>
<tr>
<td>pvt-b2</td>
<td>0.3783</td>
<td>0.41</td>
</tr>
<tr>
<td>pvt-b3</td>
<td>0.3750</td>
<td>0.41</td>
</tr>
<tr>
<td>pvt-b4</td>
<td>0.3786</td>
<td>0.4</td>
</tr>
<tr>
<td>maxvit_t</td>
<td>0.3810</td>
<td>0.42</td>
</tr>
<tr>
<td>ensemble</td>
<td>0.3570</td>
<td>0.4</td>
</tr>
<tr>
<td>ensemble w/ scale</td>
<td>0.3530</td>
<td>0.39</td>
</tr>
</tbody>
</table>
<h2>Code</h2>
<p><a href="https://github.com/iseekwonderful/RSNA-2023-Abdominal-Trauma-Detection-4th-Place-Code.git" target="_blank">Github link</a></p> |
Bengali.AI Speech Recognition | 3rd place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Speech Recognition
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, we want to thank the competition organizers who hosted this fantastic competition and gave us an excellent opportunity to learn how to improve the ASR model against low-resource languages like Bengali and other competitors who generously shared their knowledge.<br>
The following is a brief explanation of our solution. We will open-source more detailed codes later, but if you have questions, ask us freely.</p>
<hr>
<h1>Model Architecture</h1>
<h3>・CTC</h3>
<p>We fine-tuned "ai4bharat/indicwav2vec_v1_bengali" with competition data(CD).<br>
We observed low-quality samples in the CD, and mindlessly fine-tuning a model with all of the CD deteriorated its performance. So first, we fine-tuned a model only with the split=”valid” CD (this improved the model’s performance) and used it to predict on the split=”train” CD. After that, we added the high-quality split=’train’ CD (WER<0.75) to the split=’valid’ CD and fine-tuned "ai4bharat/indicwav2vec_v1_bengali" again from scratch. <br>
This improved the public baseline to LB=0.405.</p>
<h3>・kenlm</h3>
<p>Because there are many out-of-vocabulary words in the OOD data, we thought training a strong LM with large external text data was important. So we downloaded text data and the scripts of ASR datasets (CD, indicCorp v2, common voice, fleurs, openslr, openslr37, and oscar) and trained a 5-gram LM.<br>
This LM improved the LB score by about 0.01 compared with "arijitx/wav2vec2-xls-r-300m-bengali".</p>
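<p>A minimal sketch of how such a 5-gram KenLM model is typically plugged into CTC beam-search decoding; the file paths, the vocabulary variable and the alpha/beta values are illustrative, not the team's actual settings:</p>
<pre><code># The LM itself is trained offline with KenLM, e.g.
#   lmplz -o 5 --text normalized_corpus.txt --arpa bn_5gram.arpa
from pyctcdecode import build_ctcdecoder

decoder = build_ctcdecoder(
    labels=list(vocab_dict.keys()),     # CTC vocabulary of the wav2vec2 model
    kenlm_model_path="bn_5gram.arpa",
    alpha=0.5,                          # LM weight (placeholder, tuned on validation)
    beta=1.5,                           # word-insertion bonus (placeholder)
)
text = decoder.decode(logits)           # logits: (time, vocab) numpy array
</code></pre>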
<h1>Data</h1>
<h3>・audio data</h3>
<p>We used only the CD. As mentioned in the Model Architecture section, we did something like an adversarial validation using a CTC model trained with split=’valid’ and used about 70% of all the CD.</p>
<h3>・text data</h3>
<p>We used the text data and scripts of ASR datasets (CD, indicCorp v2, common voice, fleurs, openslr, openslr37, and oscar). As preprocessing, we normalized the text with bnUnicodeNormalizer and removed some characters ('[\,\?.!-\;:\"\।\—]').</p>
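<p>A minimal sketch of this preprocessing, assuming the usual bnUnicodeNormalizer API (the exact details of the team's cleaning may differ):</p>
<pre><code>import re
from bnunicodenormalizer import Normalizer

bnorm = Normalizer()
PUNCT = re.compile(r'[\,\?\.\!\-\;\:\"\।\—]')

def clean_text(text):
    words = [bnorm(w)["normalized"] for w in text.split()]
    words = [w for w in words if w is not None]     # normalizer may drop a word
    return PUNCT.sub("", " ".join(words))
</code></pre>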
<h1>Inference</h1>
<h3>・sort data by audio length</h3>
<p>Padding at the end of the audio negatively affected CTC, so we sorted the data by audio length and dynamically padded each batch. This increased both prediction speed and performance.</p>
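<p>A minimal sketch of length-sorted batching with per-batch dynamic padding (variable names and batch handling are illustrative):</p>
<pre><code>import numpy as np

order = np.argsort([len(a) for a in audio_list])            # short to long
for start in range(0, len(order), batch_size):
    idx = order[start:start + batch_size]
    batch = [audio_list[i] for i in idx]
    max_len = max(len(a) for a in batch)
    padded = np.stack([np.pad(a, (0, max_len - len(a))) for a in batch])
    # run the CTC model on `padded`, then map predictions back to idx order
</code></pre>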
<h3>・Denoising with Demucs, a music source separation model.</h3>
<p>We utilized Demucs to denoise audios. This improved the LB score by about 0.003.</p>
<h3>・Judge if we use Demucs or not</h3>
<p>Demucs sometimes makes the audio worse, so we evaluated whether the audio got worse and switched which audio was used for the prediction. This improved the LB score by about 0.001.<br>
The procedure is as follows.</p>
<ol>
<li>We made two predictions: one with Demucs and one without Demucs. To speed up this step, we set the LM parameter beam_width=10.</li>
<li>We compared the number of tokens in the two predictions. If the prediction with Demucs has fewer tokens than the one without, we used the prediction without Demucs; otherwise, we used the prediction with Demucs (a sketch follows below).<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F9268451%2Fc66948ac5c61830bd513e6af4564cfa4%2Fdenoising_solution.png?generation=1697586862740699&alt=media" alt=""></li>
</ol>
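<p>A minimal sketch of this switch, assuming a pyctcdecode-style decoder (variable names are illustrative):</p>
<pre><code># Cheap beam search on both versions, keep the one whose hypothesis is not
# shortened by denoising, then decode that version with the full-size beam.
pred_raw = decoder.decode(logits_raw, beam_width=10)
pred_dns = decoder.decode(logits_denoised, beam_width=10)

use_denoised = len(pred_dns.split()) >= len(pred_raw.split())
final_logits = logits_denoised if use_denoised else logits_raw
final_text = decoder.decode(final_logits)
</code></pre>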
<h1>Post Processing</h1>
<h3>・punctuation model</h3>
<p>We built models that predict the punctuation marks that go into the spaces between words. These improved the LB by more than 0.030.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F9268451%2Fdc61bcc13bd81e9a0259a3df7b95ac6f%2Fesprit_solution.png?generation=1697586896853219&alt=media" alt=""></p>
<p>As a tip for training, the CV score was better improved by setting the loss weight of "PAD" to 0.0.<br>
Backbone: xlm-roberta-large, xlm-roberta-base<br>
Trainer: XLMRobertaForTokenClassification<br>
Dataset: train.csv (given), indicCorp v2<br>
Punctuations: [ ,।?-]</p>
<hr>
<p>・<a href="https://github.com/sagawatatsuya/BengaliAI_Speech_Recognition_3rd_solution/tree/main" target="_blank">CTC and LM train code</a><br>
・<a href="https://github.com/espritmirai/bengali-punctuation-model" target="_blank">punctuation model code</a><br>
・<a href="https://www.kaggle.com/code/takuji/3rd-place-solution?scriptVersionId=147177786" target="_blank">inference notebook</a></p> |
Kaggle - LLM Science Exam | 85th place solution; 0.906 public and private scores; our 2nd back to back medal; kaggle experts now ! | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kaggle - LLM Science Exam
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all thanks to Kaggle team for arranging such an amazing competition and congratulations to all the participants and winners.</p>
<p>We have participated in back to back generative AI competitions in last 7-8 months. 1st one was based upon stable diffusion [<a href="https://www.kaggle.com/competitions/stable-diffusion-image-to-prompts" target="_blank">https://www.kaggle.com/competitions/stable-diffusion-image-to-prompts</a>] where we earned a bronze medal and in this one we earned a silver medal. <br>
More importantly, it has been a great learning experience from so many inspiring kagglers. Next landmark is Kaggle master 😉. That path will be quite tough but the biggest gain will be the learnings irrespective of where we reach. </p>
<p>Our solution: <br>
Training:<br>
We mainly trained deberta v3 large models (all layers were fine tuned) with context lengths 512, 768 and 1024 on [<a href="https://www.kaggle.com/datasets/cdeotte/60k-data-with-context-v2" target="_blank">https://www.kaggle.com/datasets/cdeotte/60k-data-with-context-v2</a>] and also on some random samples from [<a href="https://www.kaggle.com/datasets/cdeotte/40k-data-with-context-v2" target="_blank">https://www.kaggle.com/datasets/cdeotte/40k-data-with-context-v2</a>]<br>
Hyperparams:<br>
NUM_TRAIN_EPOCHS = 5<br>
LEARNING_RATE = 2e-6<br>
GLOBAL_BATCH_SIZE = 32<br>
PRECISION = BF16<br>
For context length=1024, we had to increase the learning rate to 1e-5 and managed to train it for ~3 epochs.</p>
<p>Inference:<br>
m1 - deberta trained with context len 512<br>
m2 - deberta trained with context len 768<br>
m3 - deberta trained with context len 1024 </p>
<p>1st Ensembling on wiki 6M RAG: <br>
FILTER_LEN = 10<br>
IND_SEARCH = 20<br>
NUM_SENTENCES_INCLUDE = 20<br>
CONTEXT_LEN = 3000 <br>
predictions_ensemble_1 = m1 * 0.2 + m2 * 0.6 + m3 * 0.2</p>
<p>2nd Ensembling on STEM 270K PARSED and UNPARSED:<br>
We made some updates to the tokenizer, maximum length and dataset creation logic here. <br>
predictions_ensemble_2 = m1 * 0.2 + m2 * 0.6 + m3 * 0.2</p>
<p>final_ensemble = predictions_ensemble_1 * 0.2 + predictions_ensemble_2 * 0.8</p>
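<p>A minimal sketch of how these weighted blends turn into a MAP@3 submission; the array names are illustrative, and each array holds per-option probabilities of shape (n_questions, 5):</p>
<pre><code>import numpy as np

ens1 = 0.2 * m1_wiki + 0.6 * m2_wiki + 0.2 * m3_wiki    # wiki 6M context
ens2 = 0.2 * m1_stem + 0.6 * m2_stem + 0.2 * m3_stem    # STEM 270K context
final = 0.2 * ens1 + 0.8 * ens2

top3 = np.argsort(-final, axis=1)[:, :3]                # top-3 options per question
</code></pre>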
<p>Inference notebook: <br>
[<a href="https://www.kaggle.com/code/puneeshk/final-inference-notebook" target="_blank">https://www.kaggle.com/code/puneeshk/final-inference-notebook</a>] <br>
This will get to 0.908 public and private scores and around 55th rank but we could not solve the OOM error in this notebook within the deadline time and ran out of submissions.</p>
<p>We are ensuring that we are upvoting all discussions and notebooks that really helped us in this competition and hope to be more proactive in the discussions in our next competitions. Also we are planning to conduct more research on this competition dataset and problem and try to come up with a whitepaper.</p>
<p>Our linkedin profiles:<br>
<a href="https://www.linkedin.com/in/puneeshkhanna/" target="_blank">https://www.linkedin.com/in/puneeshkhanna/</a><br>
<a href="https://www.linkedin.com/in/ankur-shukla-3144906b/" target="_blank">https://www.linkedin.com/in/ankur-shukla-3144906b/</a></p>
<p>Best Regards,<br>
Puneesh Khanna / Ankur Shukla</p> |
CommonLit - Evaluate Student Summaries | 9th Place Efficiency Solution for CommonLit-Evaluate Student Summaries | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit - Evaluate Student Summaries
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, congratulations to the organizers and winners of the competition. <br>
I am very grateful for the opportunity to have participated in this wonderful contest and to have learned from the other competitors.</p>
<h1>Context</h1>
<ul>
<li>Overview <a href="https://www.kaggle.com/competitions/commonlit-evaluate-student-summaries/overview" target="_blank">https://www.kaggle.com/competitions/commonlit-evaluate-student-summaries/overview</a></li>
<li>Data <a href="https://www.kaggle.com/competitions/commonlit-evaluate-student-summaries/data" target="_blank">https://www.kaggle.com/competitions/commonlit-evaluate-student-summaries/data</a></li>
</ul>
<h1>Overview of the Approach:</h1>
<p>My solution for the efficiency award consists of a feature-engineering step followed by the pipeline shown below:</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1647119%2F64b8524f73ff2bb814cc907c4952f502%2Fpipe.png?generation=1697930928007385&alt=media" alt=""></p>
<p>The CV by prompt:</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1647119%2F9adc3576d7b060c04cd91902cf924b1b%2Ftable.PNG?generation=1697930966613777&alt=media" alt=""></p>
<p>Public LB 0.45943, private LB 0.51579</p>
<h1>Details of the submission:</h1>
<p>The best performing features were the logarithms of the counts of words, 2-grams, and 3-grams that appear in the summary text but not in the prompt text. I tried to extract more features with DistilBERT and TfidfVectorizer, but they didn't help.</p>
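<p>A minimal sketch of such overlap features, assuming whitespace tokenization (the feature names are illustrative, not the author's code):</p>
<pre><code>import numpy as np

def ngrams(tokens, n):
    return set(zip(*[tokens[i:] for i in range(n)]))

def overlap_features(summary, prompt_text):
    s, p = summary.lower().split(), prompt_text.lower().split()
    feats = {}
    for n in (1, 2, 3):
        novel = ngrams(s, n) - ngrams(p, n)             # in summary, not in prompt
        feats[f"log_novel_{n}gram"] = np.log1p(len(novel))
    return feats
</code></pre>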
<h1>Sources:</h1>
<p>Some ideas for feature engineering were taken from:</p>
<ul>
<li><a href="https://www.kaggle.com/competitions/commonlit-evaluate-student-summaries/discussion/424372" target="_blank">https://www.kaggle.com/competitions/commonlit-evaluate-student-summaries/discussion/424372</a></li>
<li><a href="https://www.kaggle.com/code/cody11null/tuned-debertav3-lgbm-autocorrect" target="_blank">https://www.kaggle.com/code/cody11null/tuned-debertav3-lgbm-autocorrect</a></li>
<li><a href="https://www.kaggle.com/code/vassylkorzh/feature-engineering" target="_blank">https://www.kaggle.com/code/vassylkorzh/feature-engineering</a></li>
<li><a href="https://www.kaggle.com/code/gusthema/commonlit-evaluate-student-summaries-w-tfdf" target="_blank">https://www.kaggle.com/code/gusthema/commonlit-evaluate-student-summaries-w-tfdf</a></li>
</ul>
<h1>Codes:</h1>
<ul>
<li>Training: <a href="https://www.kaggle.com/code/jhony1/9th-place-efficiency-solution-training" target="_blank">https://www.kaggle.com/code/jhony1/9th-place-efficiency-solution-training</a></li>
<li>Inference: <a href="https://www.kaggle.com/code/jhony1/9th-place-efficiency-solution" target="_blank">https://www.kaggle.com/code/jhony1/9th-place-efficiency-solution</a></li>
</ul> |
Kaggle - LLM Science Exam | 67th Place Solution for the Kaggle - LLM Science Exam Competition | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kaggle - LLM Science Exam
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks a lot for this great challenge on such a hot topic. We (thanks my great teammate <a href="https://www.kaggle.com/fejust" target="_blank">@fejust</a>) learnt a lot in particular through all the opened kernels and lively discussions. </p>
<h2>Approach</h2>
<p>We used the open book approach <a href="https://www.kaggle.com/code/cdeotte/how-to-train-open-book-model-part-1" target="_blank">How To Train Open Book Model - Part 1</a> to train our multiple choice models, and the disruptive retrieval option <a href="https://www.kaggle.com/code/mbanaei/86-2-with-only-270k-articles" target="_blank">[86.2] with only 270K articles!</a> based on the cleaned datasets and the TF-IDF retriever. Our best submission used this retrieval and was an ensemble of two fine-tuned DeBERTa large models, one trained for 2 epochs on all 60k records of <a href="https://www.kaggle.com/datasets/cdeotte/60k-data-with-context-v2" target="_blank">60k-data-with-context-v2</a> with 256 max length, and another trained on 30k with max length 512.</p>
<h2>Key Points</h2>
<ul>
<li>Rather than computing the logits of all 5 options at once, we computed the logit for each option separately in the inference notebook. This allowed us to compute logits with much larger contexts, since in the open book approach we had seen several reported improvements from adding more context.</li>
<li>We managed to fine-tune a TensorFlow DeBERTa large model with only frozen embeddings and a max length of 512 on the Kaggle TPU, using 30k records from the open book approach for 2 epochs, in a similar spirit to <a href="https://www.kaggle.com/code/defdet/tpu-accelerated-training" target="_blank">TPU - Accelerated training</a>. This took around 8.5 hours. The weights were finally converted to PyTorch, so that we didn't have to worry about two frameworks in the submission notebook.</li>
<li>We also slightly changed the inference prompt to <code>f"[CLS] {context} #### {question} [SEP] {option} [SEP]"</code>, because this was something that we found more natural to the training. This showed a minor improvement in our first tests and we kept this till the end.</li>
</ul>
<h2>Other</h2>
<p>We spent a considerable amount of time experimenting with dense retrieval options, based on normalized sentence transformer embeddings. However in all our experiments, using this directly on the cleaned datasets, for a second stage selection of smaller chunks, or using them in combination to the TD-IDF retriever, we never reached performance similar to using the TD-IDF alone. We are very interested to see what we might have missed there.</p>
<h2>Key Learning</h2>
<ul>
<li>Multiple choice models and how to train them.</li>
<li>Be careful when extracting data and check if the relevant information is not destroyed by special tags.</li>
<li>Reverse ordering the context helps.</li>
<li>TFIDF is impossible to beat.</li>
<li>Context preparation can be different between training and inference.</li>
</ul>
<p>Thanks all!</p> |
CommonLit - Evaluate Student Summaries | 191th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit - Evaluate Student Summaries
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h2>Solution Summary</h2>
<p>My <a href="https://www.kaggle.com/code/tsunghsuanhung/lgbm-deberta-autocorrect-readability" target="_blank">solution notebook link</a> is a fork of <a href="https://www.kaggle.com/code/siddhvr/commonlit-ess-lgbm-autocorrect-deberta-v3-tuned?scriptVersionId=144476692" target="_blank">tuned-debertav3-lgbm-autocorrect</a> notebook with only one change: </p>
<ul>
<li>add <code>readability</code> features</li>
</ul> |
CommonLit - Evaluate Student Summaries | 14th place Solution (lucky for me) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit - Evaluate Student Summaries
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First, my HUGE thanks to the competition host. What a fantastic competition! I indeed had a fun time kaggling!<br>
Also HUGE thanks for the shared notebooks and the valuable discussions, from which I got a lot of inspiration and enlightenment.</p>
<p>I'm not actually an NLP expert with all sorts of experiences and tuning skills, thus I felt very lucky that I could have got this far. Also because of the shake which fortunately turned out to be in my favour this time, doesn't really happen very often, :p.</p>
<p>Like many other top solutions, my model is actually a very simple deberta-v3-large. But the way I got there was rather serpentine.</p>
<h3>MY SOLUTION</h3>
<ul>
<li><strong>Inputs</strong>:<br>
Paired input into the tokenizer: <em>summary text</em> and <em>prompt title</em> + [SEP] + <em>prompt text</em> + [SEP] + <em>prompt question</em></li>
</ul>
<pre><code># The column names and tokenizer keyword arguments were garbled in the original
# post; they are reconstructed here from the description above (summary text
# paired with prompt title + [SEP] + prompt text + [SEP] + prompt question).
prompt = sep.join([data["prompt_title"], data["prompt_text"], data["prompt_question"]])
tokenized = tokenizer(
    data["text"],
    prompt,
    truncation=True,
    padding=False,
    max_length=config.max_seq_length,
)
</code></pre>
<ul>
<li><p><strong>Model</strong>:<br>
deberta-v3-large. Like everybody else :P. I also tried some other models: deberta-v3-large-squad2, deberta-v3-base</p></li>
<li><p><strong>Max Length</strong>:<br>
I tried different max lengths; the final model used 1600.</p></li>
<li><p><strong>Model set up and Hyperparameters</strong>:<br>
learning rate: after many rounds of hyperparameter tuning, I found 6e-6 the best<br>
num_epochs: 4 <br>
weight_decay: after hyperparameter tuning, 1e-4 <br>
warmup_ratio: 0<br>
batch_size: 2 for training and eval (I bascially cannot afford a larger batch_size due to GPU memory)<br>
lr_scheduler_type: cosine<br>
optimizer: adamw<br>
I also tried freezing some layers, but it did not boost the local CV, so in the final model I did not freeze anything.<br>
No dropout was used and no extra pooling was used.<br>
Moreover, I found that local CVs and public LBs positively correlated with max length in training. Later the private LB proved this. Models with larger max lengths tend to have better private LBs.</p></li>
<li><p><strong>CV Results and Ensemble</strong>:<br>
the best private LB model achieved CV: 0.5513 (fold 814d6b), 0.4587 (fold 39c16e), 0.5038 (fold 3b9047) and 0.4316 (fold ebad26)<br>
The final model is a simple ensemble of the 4 folds. <br>
I actually considered training the model with the setup above on the whole training data set. However, in the end I chose an ensemble of the models from the different folds. This might not be the best practice, but it gave me a better opportunity to study the relation between CV and public LB.<br>
Public LB: 0.429<br>
Private LB: 0.458</p></li>
</ul>
<h3>Things did NOT work</h3>
<ol>
<li><p><strong>MLM did not work</strong><br>
This was actually a major mistake I made in this competition. I took it for granted that MLM pretraining of deberta on previous commonlit datasets would help boost the CV. With that in mind, I spent the whole of August and the first half of September training only models pretrained with MLM. The best model I got with MLM had, however, local CVs of 0.5449, 0.4782, 0.5182, 0.4473 and a public LB of 0.445, and this could not be improved any further.<br>
It was not until the third week of September that I threw everything out and started again without any MLM pretraining, ending up with the model described above.</p></li>
<li><p><strong>Manually generated datasets with pseudo scores could help accelerate the training process but can lead to overfitting</strong><br>
Near the end of the competition, I came up with the idea of expanding the training data for the pretraining task. Here is how I did this:</p>
<ul>
<li>Minor part of the extended training data: I downloaded some texts from the commonlit website as prompt texts and questions as prompt questions. I then generated summary texts using gpt-4 by letting the machine mimic the way students write. The pseudo scores were given by the best model I had created at that time. That gave me around 300 more training rows.</li>
<li>Major part of the extended training data: without any prompt text, I directly used the student written texts in the previous commonlit competitions and let my best model score them. That gave me ~4000 extra rows</li></ul>
<p>I hope this explains your concern <a href="https://www.kaggle.com/kononenko" target="_blank">@kononenko</a> :p. </p></li>
</ol>
<p>I used the extended data to do pretraining task. I got better local CV results and earlier converged networks. <strong>HOWEVER</strong>, the public LBs were not improving and in the end, the private LB turned out to be even worse :(. It could have led to overfitting (good lesson learnt). So my final model was not trained on this dataset.</p>
<ol>
<li><strong>Lightgbm did not work</strong><br>
I did not dive deeply enough to understand why lgbm did not work. I actually expected it could boost my CV.</li>
</ol>
<h3>Things that I would like to try if I were given more time</h3>
<ol>
<li>EMA<br>
I tried EMA in the early phase of this competition, with models pretrained by MLM. It did not help boosting the local CV so I didn't even bother submitting them. I would like to try EMA on the model without MLM pretraining if I had been given more time.</li>
<li>layer-wise discriminative learning rates <br>
same with EMA, i only applied this in my early models but did not boost local CV.</li>
<li>play with more max length set ups</li>
</ol>
<h3>Here is a summary of the models I tried</h3>
<p>*All LBs are the ensembles over 4 folds</p>
<table>
<thead>
<tr>
<th>Model name</th>
<th>Remarks</th>
<th>Local CV</th>
<th>LB</th>
<th>Private LB</th>
</tr>
</thead>
<tbody>
<tr>
<td>Model 2308</td>
<td>pretrained with MLM, both pretraining and training are with max length 1536</td>
<td>0.5449, 0.4782, 0.5182, 0.4473</td>
<td>0.445</td>
<td>0.481</td>
</tr>
<tr>
<td>Model 2708</td>
<td>pretrained with MLM with max length 2048, model training with max length 1536</td>
<td>0.5778, 0.4598, 0.5305, 0.4369</td>
<td>0.451</td>
<td>0.466</td>
</tr>
<tr>
<td>Model 3008</td>
<td>pretrained with MLM, both pretraining and training are with max length 2048</td>
<td>0.5532, 0.4706, 0.5226, 0.4383</td>
<td>0.447</td>
<td>0.471</td>
</tr>
<tr>
<td>Model 1009</td>
<td>pretrained with MLM. EMA used in training</td>
<td>0.5631, 0.4728, 0.5201, 0.4407</td>
<td>not submitted</td>
<td>not submitted</td>
</tr>
<tr>
<td>Model 2609 (best model)</td>
<td>no pretraining, max length 1600</td>
<td>0.5513, 0.4587, 0.5038, 0.4316</td>
<td>0.429</td>
<td>0.458</td>
</tr>
<tr>
<td>Model 0810</td>
<td>with incomplete extended dataset to pretrain, max length 1600</td>
<td>0.5363, 0.4454, 0.4974, 0.4314</td>
<td>0.431</td>
<td>0.474</td>
</tr>
<tr>
<td>Model 1010</td>
<td>with complete extended data to pretrain, max length 1600</td>
<td>0.5199, 0.4451, 0.4958, 0.4356</td>
<td>0.458</td>
<td>0.469</td>
</tr>
<tr>
<td>Model 0810 + lgbm</td>
<td>with complete extended data to pretrain, max length 1600, plus lgbm</td>
<td>0.6154, 0.4695, 0.4990, 0.4391</td>
<td>0.435</td>
<td>0.47</td>
</tr>
</tbody>
</table>
<p>As you can tell, with only a few exceptios, my public LBs and private LBs correlate somehow well.</p> |
CAFA 5 Protein Function Prediction | 5th Place Solution for the CAFA 5 Protein Function Prediction Challenge | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CAFA 5 Protein Function Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><strong>A. MODEL SUMMARY</strong><br>
<strong>A1. Background on you/your team</strong><br>
Competition Name: CAFA 5 Protein Function Prediction<br>
Team Name: hfm7zc<br>
Private Leaderboard Score: 0.56171 (maximum weighted F-measure, wFmax)<br>
Private Leaderboard Place: 5</p>
<p>Name: Chengxin Zhang<br>
Location: Department of Computational Medicine and Bioinformatics, University of Michigan – Ann Arbor, MI, USA<br>
Email: <a href="mailto:zcx@umich.edu">zcx@umich.edu</a> </p>
<p>Name: P Lydia Freddolino<br>
Location: Department of Biological Chemistry, University of Michigan – Ann Arbor, MI, USA<br>
Email: <a href="mailto:lydsf@umich.edu">lydsf@umich.edu</a></p>
<p><strong>A2. Background on you/your team</strong><br>
Our team consists of two faculty members working on algorithm development of protein function annotation tools as well as their proteome-wide application to microbes and human. We have previously participated in the CAFA3 challenge as team Zhang-Freddolino lab, which was ranked first in the CAFA3 limited knowledge Biological Process category in CAFA3 [1]. We participated in CAFA5 to benchmark our latest protein function prediction methods, with a particular focus on the assessment of utility of structure templates and deep learning in function prediction. C.Z. and P.L.F. conceived the project and designed the pipeline. C.Z. developed the method, performed the CAFA5 prediction and submitted the prediction result.</p>
<p><strong>A3. Summary</strong><br>
Our workflow, StarFunc, first independently generates five sets of Gene Ontology (GO) predictions for a query protein using five component methods:</p>
<ul>
<li>Sequence homolog search through the UniProt-GOA database by BLASTp [2].</li>
<li>Structure alignment between the AlphaFold2 model [3] of the query protein and template structure from the BioLiP2 [4] and AlphaFold databases [5] by Foldseek [6] plus TM-align [7].</li>
<li>Protein-protein interaction (PPI) partners recorded by the STRING database [8].</li>
<li>Logistic regression models (one model per GO term) which use Pfam family matches as the input features.</li>
<li>Prediction from a deep learning model (SPROF-GO) [9].</li>
</ul>
<p>The prediction scores from these five components, as well as the background frequency of the GO term in the UniProt-GOA database (i.e., the “Naïve” method in CAFA3 evaluation [1]), are used as six input features to train three random forest models (one model per GO aspect) to derive the consensus score used for final submission.</p>
<p><strong>A4. Features Selection / Engineering</strong><br>
Based on wFmax of the component methods in an independent test set, the features (i.e., predictions from component methods) ranked in descending order of importance are sequence, deep learning, structure, Pfam, PPI and Naïve (Figure 1). <br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F16315279%2Ffe2b66c5b505f9bad5465314041c8a2d%2FFeatureImportance.png?generation=1703270675891555&alt=media" alt=""><br>
<strong>Figure 1.</strong> Importance of different features, as measured by the wFmax of GO prediction on independent test set.</p>
<p>A major reason for the high performance of the sequence-based component method is the introduction of a new scoring function that weights each BLASTp hit by both the bit-score and the sequence identity [10]. <br>
For the component methods and the consensus prediction, only experimental GO terms (evidence codes EXP, IDA, IPI, IMP, IGI, IEP, HTP, HDA, HMP, HGI, HEP, TAS, and IC) are included in the training dataset. On the other hand, during training, if a protein has already has a non-experimental GO annotation in UniProt-GOA, this GO annotation is also included in the final result, where the prediction score is calculated as one minus the error rate associated with the evidence code, as determined by our previous study [11]. Note that GO annotations with IBA evidence are always excluded as they are found to have higher error rate than other types of GO annotations [11].</p>
<p><strong>A5. Training Method(s)</strong><br>
We tested traditional Gradient Boosting Decision Tree (GBDT), Random Forest (RF), and Dropouts meet Multiple Additive Regression Trees (DART) implemented by LightGBM to ensemble the predictions from the five component methods. On an independent test set based on UniProt-GOA release 2023-05-18, we found that comparable results can be achieved by GBDT with 1000 trees and RF with 4000 trees, both of which are more accurate than DART. We eventually selected RF to derives the consensus prediction.</p>
<p><strong>A6. Interesting findings</strong><br>
The treatment of term GO:0005515 “protein binding” is different from previous CAFA challenges. In previous CAFA challenges (CAFA1 to CAFA3), proteins whose only Molecular Function (MF) leaf term is "protein binding" will be excluded from MF evaluation. According to the CAFA3 paper, "protein binding is a highly generalized function description, does not provide more specific information about the actual function of a protein, and in many cases may indicate a non-functional, non-specific binding. If it is the only annotation that a protein has gained, … we deleted these annotations from our benchmark set." <br>
However, the CAFA5 Kaggle challenge does not treat the “protein binding” term differently, while it is unclear whether the “protein binding” term will be treated differently in the academic paper reporting the CAFA5 official assessment. Due to these discrepancies, we submitted two different models using the same architecture to CAFA5, where model “zcx” excludes protein binding-only proteins when training the MF prediction model while model “hfm7zc” includes these training proteins. Model “hfm7zc” seems to achieve a better Leaderboard score (0.56171, ranked 5) than “zcx” (0.55539, ranked 8), showing that different treatment of “protein binding” can have a significant impact on the final Leaderboard ranking. We would like to register our opinion, however, that for the academic CAFA5 assessment, the exclusion of protein binding as a leaf term would be more appropriate and biologically useful. <br>
We also note, in contrast to the findings reported below in section A7, that based on our prior experience we expect that for more challenging annotation targets, it is likely that the non-sequence pipelines will be more important [12, 13].</p>
<p><strong>A7. Simple Features and Methods</strong><br>
Based on independent test, 98.7% of the performance (in terms of wFmax) can be achieved by the sequence-based component alone.</p>
<p><strong>A8. Model Execution Time</strong><br>
The random forest models as well as the logistic regression models mentioned in section A3 can be trained within < 1 day using the CAFA5 training set. To perform inference, the overall pipeline takes a few minutes per protein, where most of the time is spent by the structure template alignment component. If only the sequence-based component is run, it takes at most a few seconds per protein in large-scale inference.</p>
<p><strong>A9. Acknowledgement</strong><br>
We thank Quancheng Liu and Dr Xiaoqiong Wei for technical supports and insightful discussions. This work used the Advanced Cyberinfrastructure Coordination Ecosystem: Services & Support (ACCESS) program, which is supported by National Science Foundation grants #2138259, #2138286, #2138307, #2137603, and #2138296.</p>
<p><strong>A10. References</strong></p>
<ol>
<li>Zhou NH, et al: The CAFA challenge reports improved protein function prediction and new functional annotations for hundreds of genes through experimental screens. Genome Biology 2019, 20.</li>
<li>Altschul SF, Madden TL, Schaffer AA, Zhang J, Zhang Z, Miller W, Lipman DJ: Gapped BLAST and PSI-BLAST: a new generation of protein database search programs. Nucleic Acids Res 1997, 25:3389-3402.</li>
<li>Jumper J, Evans R, Pritzel A, Green T, Figurnov M, Ronneberger O, Tunyasuvunakool K, Bates R, Zidek A, Potapenko A, et al: Highly accurate protein structure prediction with AlphaFold. Nature 2021, 596:583-589.</li>
<li>Zhang C, Zhang X, Freddolino PL, Zhang Y: BioLiP2: an updated structure database for biologically relevant ligand-protein interactions. Nucleic Acids Res 2023.</li>
<li>Varadi M, Anyango S, Deshpande M, Nair S, Natassia C, Yordanova G, Yuan D, Stroe O, Wood G, Laydon A, et al: AlphaFold Protein Structure Database: massively expanding the structural coverage of protein-sequence space with high-accuracy models. Nucleic Acids Res 2022, 50:D439-D444.</li>
<li>van Kempen M, Kim SS, Tumescheit C, Mirdita M, Lee J, Gilchrist CLM, Soding J, Steinegger M: Fast and accurate protein structure search with Foldseek. Nat Biotechnol 2023.</li>
<li>Zhang Y, Skolnick J: TM-align: a protein structure alignment algorithm based on the TM-score. Nucleic Acids Res 2005, 33:2302-2309.</li>
<li>Szklarczyk D, Kirsch R, Koutrouli M, Nastou K, Mehryary F, Hachilif R, Gable AL, Fang T, Doncheva NT, Pyysalo S, et al: The STRING database in 2023: protein-protein association networks and functional enrichment analyses for any sequenced genome of interest. Nucleic Acids Research 2023, 51:D638-D646.</li>
<li>Yuan Q, Xie J, Xie J, Zhao H, Yang Y: Fast and accurate protein function prediction from sequence through pretrained language model and homology-based label diffusion. Briefings in Bioinformatics 2023, 24.</li>
<li>Zhang C, Freddolino PL: A large-scale assessment of sequence database search tools for homology-based protein function prediction. bioRxiv 2023:2023.2011. 2014.567021.</li>
<li>Wei X, Zhang C, Freddolino PL, Zhang Y: Detecting Gene Ontology misannotations using taxon-specific rate ratio comparisons. Bioinformatics 2020, 36:4383-4388.</li>
<li>Zhang C, Freddolino PL, Zhang Y: COFACTOR: improved protein function prediction by combining structure, sequence and protein-protein interaction information. Nucleic Acids Res 2017, 45:W291-W299.</li>
<li>Zhang CX, Zheng W, Freddolino PL, Zhang Y: MetaGO: Predicting Gene Ontology of Non-homologous Proteins Through Low-Resolution Protein Structure Prediction and Protein Protein Network Mapping. Journal of Molecular Biology 2018, 430:2256-2265.</li>
</ol> |
Kaggle - LLM Science Exam | [76th Place solution] - STEM WIKI Data filtering & Lesson learned points | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kaggle - LLM Science Exam
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h2>🔥 Entire Architecture 🔥</h2>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2133178%2Fc6c3e5315813adc5b80ebe9d45723318%2FKakaoTalk_20231013_011613425.png?generation=1697128039286975&alt=media" alt=""></p>
<h2>Architecture in detail</h2>
<p>Divided into parts 1 and 2. similar to <a href="https://www.kaggle.com/mbanaei" target="_blank">@mbanaei</a> <a href="https://www.kaggle.com/code/mbanaei/86-2-with-only-270k-articles" target="_blank">strategy.</a></p>
<h3>Part 1</h3>
<ol>
<li>The <a href="https://www.kaggle.com/datasets/jjinho/wikipedia-20230701" target="_blank">6.2M wiki data</a> contained articles from various fields. When we applied a <a href="https://huggingface.co/jonaskoenig/topic_classification_04" target="_blank">model to classify topics</a> to the 200 rows of test.csv, the Science & Mathematics and Computers & Internet fields accounted for more than 97%. <br>
Therefore, we reduced the wiki data to shrink the index (6.2M → 4.7M). This reduces the number of documents to retrieve when constructing new contexts; the intent is to use documents from the scientific domain as contexts whenever possible.</li>
<li>In method 1, the index was constructed using the bge-small-1.5-en embedding model & the faiss IndexFlatIP method.</li>
<li>The models (weights) used in the ensemble are as follows : </li>
</ol>
<ul>
<li>Model 1(0.32) : Modify the context in <a href="https://www.kaggle.com/datasets/cdeotte/60k-data-with-context-v2" target="_blank">60K Data</a>. On 60K data, make prompt as query, apply wiki data + BGE Large 1.5 model embedding model, configure the search results as new context and train with DeBERTa large.</li>
<li>Model 2(0.28) : <a href="https://www.kaggle.com/datasets/cdeotte/60k-data-with-context-v2" target="_blank">60K Data</a> & DeBERTa</li>
<li>Model 3(0.28) : <a href="https://www.kaggle.com/datasets/cdeotte/99k-data-with-context-v2" target="_blank">Only RACE Data 97K</a> & DeBERTa</li>
<li>Model 4 (0.32) : <a href="https://www.kaggle.com/mgoksu" target="_blank">@mgoksu</a> 's <a href="https://www.kaggle.com/code/mgoksu/0-807-sharing-my-trained-with-context-model" target="_blank">Model</a></li>
</ul>
<h3>Part 2</h3>
<ol>
<li>We used 70K parsed data, 270k cohere data.</li>
<li>When performing retrieval, we utilized TF-IDF with bi-grams and stopword removal. We also set sublinear_tf=True, which plays an important role in dampening the raw TF values and increased performance (see the sketch after this list).</li>
<li>Our solution is similar to <a href="https://www.kaggle.com/code/mbanaei/86-2-with-only-270k-articles" target="_blank">MB's</a>. We didn't use a long-former. But an ensemble application, utilizing the same models we used in part 1.</li>
<li>Adjusted for probability_.max() > 0.45 in part1, part2 results. A higher prob value means trust the TF-IDF search result value. </li>
</ol>
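<p>A minimal sketch of such a retrieval vectorizer; the parameters are reconstructed from the description above, not the team's exact code:</p>
<pre><code>from sklearn.feature_extraction.text import TfidfVectorizer

vectorizer = TfidfVectorizer(
    ngram_range=(1, 2),        # uni- and bi-grams
    stop_words="english",
    sublinear_tf=True,         # replaces tf with 1 + log(tf)
)
doc_matrix = vectorizer.fit_transform(wiki_passages)
query_matrix = vectorizer.transform(questions_with_options)
scores = query_matrix @ doc_matrix.T     # cosine similarity (rows are l2-normalized)
</code></pre>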
<h2>😇 Lesson Learned Points 😇</h2>
<ol>
<li>We needed more validation data to make the model more performant and robust. I was too obsessed with the 200-sample data. </li>
<li>Consider different metrics like MAP@1 as well as MAP@3 for the 200 data. </li>
<li>In this competition, the embedding models on the <a href="https://huggingface.co/spaces/mteb/leaderboard" target="_blank">MTEB leaderboard</a> were very similar in terms of experience, and we spent a lot of time obsessing over them.</li>
<li>I have participated in fewer kaggle competitions and this was the first time I saw the shake-up phenomenon. I realized that the lower the percentage of test datasets used in the public LB, the more dangerous it is. Our team was obsessed with the public LB, which was not a good thing 😭</li>
<li>I never thought of the chunk, window sentence, slide sentence method in wiki data, it's really basic. But after running the contest for so long, I didn't have any fresh ideas and kept sticking to the way everyone else was doing it.</li>
<li>With 1 week left in the competition, and submissions running into OOM errors and inference times over 9 hours, the stress was considerable.</li>
<li>I was surprised to see that I could get good enough results without using a 7B+ model. But I realized that to get a gold medal, I needed a more novel method than the one published on kaggle. Gold medals are too hard 🫥<br>
(I envision myself winning a gold medal in 2024!!)</li>
</ol>
<h2>Closing</h2>
<p>Thanks also to all the kagglers who shared their ideas and code during the competition. <br>
Also, sharing people's solutions after the competition is over seems like a great net function of kaggle. </p>
<p>And thanks to our team members <a href="https://www.kaggle.com/yumyeom" target="_blank">@yumyeom</a> <a href="https://www.kaggle.com/jisukim8873" target="_blank">@jisukim8873</a> <a href="https://www.kaggle.com/limhaa" target="_blank">@limhaa</a> <a href="https://www.kaggle.com/shengshingyuan" target="_blank">@shengshingyuan</a> for their hard work.</p> |
Kaggle - LLM Science Exam | 7th Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kaggle - LLM Science Exam
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to Kaggle staff for organizing the brand new and interesting competition.<br>
I really learned a lot of new things through this competition, including RAG and fine-tuning of LLM. I would like to thank all those who contributed meaningful discussions and published high-level notebooks during the competition.<br>
I would also like to thank my teammates <a href="https://www.kaggle.com/thedrcat" target="_blank">@thedrcat</a>, <a href="https://www.kaggle.com/anonamename" target="_blank">@anonamename</a> and <a href="https://www.kaggle.com/kashiwaba" target="_blank">@kashiwaba</a> for refining our solution with a variety of ideas.</p>
<h1>Overview</h1>
<ul>
<li>Use contexts created from various wikipedia data.</li>
<li>Ensemble of DeBERTa-V3 large models and LLMs.</li>
<li>Multi-stage strategy: incorporating various combinations of contexts and models by reducing the number of questions to be predicted at the later stage.</li>
</ul>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6102861%2Fb579b81a690fa6d046945483570c4cdf%2Fllm-overview-v4.png?generation=1697287757990102&alt=media" alt=""></p>
<h1>Retrieval part</h1>
<h3>Wikipedia-20230801 (article retrieve -> sentence retrieve)</h3>
<p>This search strategy is based on the method used in JJ's <a href="https://www.kaggle.com/code/jjinho/open-book-llm-science-exam" target="_blank">public notebook</a>, but with the following changes:</p>
<ul>
<li>We use <a href="https://www.kaggle.com/datasets/bwandowando/wikipedia-index-and-plaintext-20230801/versions/2" target="_blank">Wikipedia-20230801-dump</a>. This contains more articles (6.67M) than <a href="https://www.kaggle.com/datasets/jjinho/wikipedia-20230701" target="_blank">Wikipedia Plaintext (2023-07-01)</a> (6.27M).</li>
<li>For the article retrieve, we used the embeddings consisting of the full text of the article (for each article, we split all text in an article into chunks of 256 tokens).</li>
<li>We use gte-small for creating the above embeddings.</li>
<li>We have modified the sectionize_documents function for better sententialization.</li>
<li>We use TF-IDF for the sentence retrieve.</li>
</ul>
<h3>MB 270K + TF-IDF</h3>
<p>We use the MB's 270K dataset and TF-IDF method from <a href="https://www.kaggle.com/code/mbanaei/86-2-with-only-270k-articles" target="_blank">MB's great notebook</a>.</p>
<h3>MB 270K + sentence-transformer</h3>
<p>We search for MB's 270K text using gte-small with max_seq_length=512</p>
<h3>Cohere TF-IDF</h3>
<p>This method is based on the hypothesis that paragraph-based search and sparse method may be better than dense methods specifically for the competition data. The questions and options often contain very specific terms which might not be well represented via dense embeddings. Also, paragraph-based search is generally considered more effective for question answering. The challenge is how to effectively perform sparse retrieval on many millions of Wikipedia paragraphs.</p>
<ul>
<li>We start with Cohere en-wiki dataset via HF. It's not the latest dump, which might cause some problems as some articles changed, but it was conveniently split into paragraphs. It also suffers from the problem mentioned on the forums, some LUA numbers/expressions are not properly parsed. </li>
<li>We use BERT tokenizer and vocabulary for TF-IDF to keep a reasonably sized vocab. We train TF-IDF vectorizer on the first 10M paragraphs and use that vectorizer for all paragraphs.</li>
<li>We precompute indexes and store both indexes and paragraphs in kaggle datasets, in 4x batches of 10 million rows</li>
<li>We use <a href="https://github.com/ing-bank/sparse_dot_topn" target="_blank">fast sparse KNN lookup</a>, co-developed and previously shared by Ahmet Erdem, to find the top 5 nearest-neighbor context paragraphs for each question (a simplified stand-in is sketched after this list). </li>
</ul>
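<p>A simplified stand-in for that lookup is shown below; the actual solution used the fast sparse_dot_topn library, whereas this sketch uses a plain sparse product for a small batch of questions (variable names are illustrative):</p>
<pre><code>import numpy as np

sims = question_tfidf @ paragraph_tfidf.T        # sparse (n_questions, n_paragraphs)
sims = np.asarray(sims.todense())                # fine for a small question batch
top5 = np.argsort(-sims, axis=1)[:, :5]          # 5 context paragraphs per question
</code></pre>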
<h1>Model part</h1>
<h2>Validation</h2>
<p>Initially, we used the samples provided in train.csv for validation, but after achieving CV ~ 0.99, it became challenging to assess the correlation between CV and LB. Therefore, we generated approximately 3000 additional samples with Chat-GPT and used these as a new validation set. We used the <a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam/discussion/425106" target="_blank">130K STEM articles</a> as Chat-GPT input.<br>
Although there is some noise, CV and LB correlate relatively well, and the combination of models in the final ensemble was determined based on the CV on this validation set.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6102861%2F50ab5d20ecab000cb86c96947e62ba54%2Fllm_cv_vs_lb.jpg?generation=1697277999186303&alt=media"></p>
<h2>DeBERTa-V3 large</h2>
<p>Based on <a href="https://www.kaggle.com/code/cdeotte/how-to-train-open-book-model-part-1" target="_blank">cderotte's notebook</a>, we train the following four models.</p>
<ul>
<li>max_length=512, microsoft/deberta-v3-large (x2, we will refer to these as v1 and v2 in the Ensemble section)</li>
<li>max_length=512, deepset/deberta-v3-large-squad2 (x1, we will refer to it as v3)</li>
<li>max_length=1024, microsoft/deberta-v3-large (x1)</li>
</ul>
<h2>LLM</h2>
<h3>CausalLM</h3>
<p>We used AutoModelForCausalLM class from transformers and SFTTrainer class from trl for LLM training.</p>
<h4>Training</h4>
<ul>
<li>We trained the model to output one of the letters A, B, C, D, or E with the following prompt</li>
</ul>
<pre><code>### Input: <context>\n\n### System: Answer the following multiple choice question by giving the most appropriate response. Answer should be one among [A, B, C, D, E]. Use the input text above a reference for your answers needed.### Question: <prompt>\nA) <option A>\nB) <option B>\nC) <option C>\nD) <option D>\nE) <option E>\n\n### Answer:
</code></pre>
<ul>
<li>Use QLoRa for memory saving.</li>
<li>Shuffling the choices during the training worked a bit.</li>
</ul>
<h4>Inference</h4>
<ul>
<li>max_new_tokens=1</li>
<li>We put the logits of the [A,B,C,D,E] tokens at the first output position through a softmax function to obtain the probability of each choice (a sketch follows below).</li>
</ul>
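<p>A minimal sketch of this scoring step with the transformers API; the way the option token ids are obtained and the variable names are assumptions, not the team's exact code:</p>
<pre><code>import torch

option_token_ids = [tokenizer(l, add_special_tokens=False).input_ids[-1]
                    for l in ["A", "B", "C", "D", "E"]]

with torch.no_grad():
    out = model(**inputs)                    # prompt ends right after "### Answer:"
next_token_logits = out.logits[0, -1, :]     # logits for the first generated token
probs = torch.softmax(next_token_logits[option_token_ids], dim=-1)
</code></pre>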
<h3>CausalLM Reward Modeling</h3>
<ul>
<li>Modified trl's RewardTrainer class to train AutoModelForCausalLM with Reward Modeling. </li>
<li>Input is in the following format<br>
<code><context> #### <prompt> #### <option> ####</code></li>
<li>Generate pairs with the correct choice as "chosen" and any other choice as "rejected" (this is the same as normal Reward Modeling).</li>
<li>Use the predicted value of the next "yes" token in the input string as the logit to be compared.</li>
<li>LLMs that do not support AutoModelForSequenceClassification can now be trained with Reward Modeling, and can be combined with normal CausalLM models to ensemble more diverse LLM models.</li>
</ul>
<h3>Single Model Results</h3>
<p>context : Wikipedia-20230801</p>
<table>
<thead>
<tr>
<th>Model</th>
<th>Type</th>
<th>Private LB</th>
<th>Public LB</th>
</tr>
</thead>
<tbody>
<tr>
<td>Mistral-7B-v0.1</td>
<td>CausalLM</td>
<td>0.874</td>
<td>0.868</td>
</tr>
<tr>
<td>Mistral-7B-OpenOrca</td>
<td>CausalLM</td>
<td>0.882</td>
<td>0.876</td>
</tr>
<tr>
<td>Mistral-7B-v0.1</td>
<td>CausalLM Reward Modeling</td>
<td>0.897</td>
<td>0.888</td>
</tr>
<tr>
<td>Mistral-7B-OpenOrca</td>
<td>CausalLM Reward Modeling</td>
<td>0.896</td>
<td>0.888</td>
</tr>
<tr>
<td>OpenOrca-Platypus2-13B</td>
<td>CausalLM</td>
<td>0.872</td>
<td>0.880</td>
</tr>
<tr>
<td>Llama2-chat-AYT-13B</td>
<td>CausalLM</td>
<td>0.872</td>
<td>0.875</td>
</tr>
</tbody>
</table>
<h1>Ensemble</h1>
<p>The problem with LLMs is their long inference time. Therefore, we first use Deberta models to filter out easy questions, which we define as questions with a high prediction probability (max probability > 0.7), and then use LLMs to infer only the remaining questions. In this way, the inference time is reduced by narrowing down the data to be inferred with LLMs. We also applied this step-by-step inference to the Deberta models to keep the inference within 9 hours. This ensemble method allowed us to add 3 LLMs to the 9 hours of inference. The final prediction is a weighted average of 8 Deberta models and 3 LLMs across different contexts. We assigned higher weights to LLMs than to Deberta models. The weights were determined from the MAP@3 on the validation set.</p>
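<p>A minimal sketch of this staged filtering; the helper function and the blending weights are hypothetical, the actual weights came from the validation MAP@3:</p>
<pre><code>import numpy as np

probs = deberta_probs                        # (n_questions, 5), softmax outputs
hard_mask = probs.max(axis=1) <= 0.7         # only uncertain questions go further

llm_probs = run_llm(questions[hard_mask])    # hypothetical slow-stage helper
probs[hard_mask] = (w_deberta * probs[hard_mask] + w_llm * llm_probs) \
                   / (w_deberta + w_llm)
</code></pre>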
<h3>Final submission</h3>
<p>Private LB : 0.925, Public LB : 0.931</p>
<table>
<thead>
<tr>
<th>Model</th>
<th>Context</th>
<th>Stage</th>
<th>Weight</th>
</tr>
</thead>
<tbody>
<tr>
<td>DeBERTa-V3 large (max_length=512), v1</td>
<td>Wikipedia-20230801</td>
<td>1st</td>
<td>1.0</td>
</tr>
<tr>
<td>DeBERTa-V3 large (max_length=512), v1</td>
<td>MB 270K + TF-IDF</td>
<td>2nd</td>
<td>1.0</td>
</tr>
<tr>
<td>DeBERTa-V3 large (max_length=512), v2</td>
<td>Wikipedia-20230801</td>
<td>3rd</td>
<td>0.5</td>
</tr>
<tr>
<td>DeBERTa-V3 large (max_length=512), v2</td>
<td>MB 270k + sentence-transformer</td>
<td>3rd</td>
<td>0.5</td>
</tr>
<tr>
<td>DeBERTa-V3 large (max_length=1024)</td>
<td>Wikipedia-20230801</td>
<td>3rd</td>
<td>0.5</td>
</tr>
<tr>
<td>DeBERTa-V3 large (max_length=1024)</td>
<td>MB 270k + sentence-transformer</td>
<td>3rd</td>
<td>0.5</td>
</tr>
<tr>
<td>DeBERTa-V3 large (max_length=512), v3</td>
<td>Cohere TF-IDF</td>
<td>3rd</td>
<td>0.5</td>
</tr>
<tr>
<td>DeBERTa-V3 large (max_length=512), v3</td>
<td>MB 270K + TF-IDF</td>
<td>3rd</td>
<td>0.5</td>
</tr>
<tr>
<td>Mistral-7B-v0.1, CausalLM Reward Modeling</td>
<td>Wikipedia-20230801</td>
<td>4th</td>
<td>1.25</td>
</tr>
<tr>
<td>Mistral-7B-v0.1, CausalLM Reward Modeling</td>
<td>MB 270K + TF-IDF</td>
<td>4th</td>
<td>1.5</td>
</tr>
<tr>
<td>OpenOrca-Platypus2-13B, CausalLM</td>
<td>Wikipedia-20230801</td>
<td>4th</td>
<td>1.25</td>
</tr>
</tbody>
</table> |
RSNA 2023 Abdominal Trauma Detection | 79th place - beginner tutorial in applying a previous solution with minimal changes and training time | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA 2023 Abdominal Trauma Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>This post is meant to show other beginners how we can take a previous solution, apply the smallest changes possible, and still achieve a bronze medal with 30 hours of kaggle gpu (silver medal with 30 additional hours of a kaggle gpu). </p>
<p>Thanks to kaggle and the organizers. <br>
Thanks to <a href="https://www.kaggle.com/haqishen" target="_blank">Qishen Ha</a> for their <a href="https://www.kaggle.com/competitions/rsna-2022-cervical-spine-fracture-detection/discussion/362787" target="_blank">1st place solution of the rsna 2022 competition</a>, where the input data was the same, but the targets were fractures in the spinal vertebrae C1-C7. My aim was to learn how this code works and apply it to this competition. Also thanks to <a href="https://www.kaggle.com/theoviel" target="_blank">Theo Viel</a>, who pointed out Qishen’s solution in his post about <a href="https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection/discussion/441557" target="_blank">beating the baseline</a>, and for his notebook about <a href="https://www.kaggle.com/code/theoviel/get-started-quicker-dicom-png-conversion" target="_blank">processing the dicom files into pngs</a>.</p>
<p>I trained one model to segment all the organs, followed by one model for each organ to classify injury. For extravasation I just predicted an optimized constant value, frequency_of_extravasation x 6. </p>
<p>I only used kaggle resources (about 30 gpu hours total), scoring .615 private leaderboard. This is with only 15 epochs for the final classification models. After training those models for 45 total epochs (taking about 30 extra kaggle gpu hours), we get .548 private leaderboard, which puts us right on the edge for a silver medal at 56th place. </p>
<h2>Adapting <a href="https://www.kaggle.com/haqishen" target="_blank">Qishen Ha</a>’s approach step by step</h2>
<h3>Stage 1: Segmentation</h3>
<p>The first step is to build a model that will segment out the relevant organs. <br>
For reference, this is <a href="https://www.kaggle.com/code/haqishen/rsna-2022-1st-place-solution-train-stage1" target="_blank">Qishen’s notebook</a> which builds such a model for spinal vertebrae C1-C7. As an input into that notebook, he has built a dataset with the studies processed into 3D images of size 128x128x128 to save time. </p>
<p>Here is my <a href="https://www.kaggle.com/code/chrisrichardmiles/rsna23-dicom-to-3d-array-128x128x128-fixed" target="_blank">notebook that processes all training studies into 3D images of size 128x128x128</a>. I have combined Qishen’s original code with Theo Veil’s previously mentioned dicom processing code. The output of this notebook is an input into the next notebook, which trains the segmenter model. </p>
<p>Here is the <a href="https://www.kaggle.com/code/chrisrichardmiles/rsna-2023-train-stage1-seg-mask?scriptVersionId=145931960" target="_blank">stage 1 segmentation model training notebook</a>. I link to version 7 because linking to a notebook that timed out crashes your browser. In version 9, I train one fold for 15 epochs (which is what I get after the 12 hours of kaggle gpu). From visualizing the output masks, it seems to be good enough.<br>
Here is the <a href="https://www.kaggle.com/datasets/chrisrichardmiles/rsna23-train-stage1" target="_blank">dataset with the model output</a>,<br>
Input size: 128x128x128<br>
Epochs: 15</p>
<h4>Stage 1.5: Segmentation inference and building 2.5D image input into stage 2 model</h4>
<p>In order to make the stage 2 training efficient, we precompute the segmentation masks by using the stage 1 segmentation model to infer the segmentation mask for each study_id. After finding the mask for the entire study_id, we build 15 “2.5 dimensional” images for each organ. For each organ we find the min and max value across the z axis (spinal axis), and take 15 images evenly spaced across this z range. For each image we also include 2 images from above and 2 from below for extra information. We also include the segmentation mask so that the model knows where the region of interest is. So for each organ (liver, left_kidney, right_kidney, spleen, bowel), our final result is an array of shape (15, 6, 224, 224). We stack all organ’s outputs and save one file. So for each study_id, we save a file of shape (75, 6, 224, 224). This will be used to create 15 inputs into the stage 2 classifier model for each organ. </p>
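<p>To make the slice-sampling step above concrete, here is a minimal sketch of how the 15 evenly spaced 2.5D slices for a single organ could be assembled. It assumes a resampled CT volume <code>volume</code> of shape (Z, H, W) and a binary mask <code>organ_mask</code> from the stage 1 model; the function and variable names are illustrative, not the exact code used in the notebooks.</p>
<pre><code>import numpy as np
import cv2

def build_organ_slices(volume, organ_mask, n_slices=15, size=224):
    """volume: (Z, H, W) CT volume; organ_mask: (Z, H, W) binary mask for one organ."""
    zs = np.where(organ_mask.any(axis=(1, 2)))[0]            # z range where the organ is visible
    z_min, z_max = zs.min(), zs.max()
    out = np.zeros((n_slices, 6, size, size), dtype=np.float32)
    for i, z in enumerate(np.linspace(z_min, z_max, n_slices).round().astype(int)):
        channels = []
        for dz in (-2, -1, 0, 1, 2):                          # the slice plus 2 neighbours above/below
            zz = int(np.clip(z + dz, 0, volume.shape[0] - 1))
            channels.append(cv2.resize(volume[zz], (size, size)))
        # add the segmentation mask so the model knows the region of interest
        channels.append(cv2.resize(organ_mask[z].astype(np.float32), (size, size)))
        out[i] = np.stack(channels)
    return out                                                # (15, 6, 224, 224) for one organ
</code></pre>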
<p>Since the output of kaggle notebooks is limited to 20GB, we use 30 notebooks to get the segmentation masks for all the training data. Here is <a href="https://www.kaggle.com/code/chrisrichardmiles/fork-of-rsna23-s1-inf-30parts-2/output" target="_blank">#2 of 30 as an example</a>. All 30 must be put as an input into the stage 2 model training notebook.</p>
<h3>Stage 2 models: [classification]</h3>
<p>For reference, here is Qishen’s <a href="https://www.kaggle.com/code/haqishen/rsna-2022-1st-place-solution-train-stage2-type1" target="_blank">stage 2 training notebook</a>.</p>
<p>Here is my <a href="https://www.kaggle.com/chrisrichardmiles/rsna23-train-stage2-final-5" target="_blank">stage 2 training notebook</a>. Note that there is code added that is used to continue training, using the best models saved from previous versions of the same notebook. This code should be commented out on the first run. </p>
<p>Here is the <a href="https://www.kaggle.com/code/chrisrichardmiles/fork-of-rsna23-final-inference-4-diff-agg?scriptVersionId=147503119" target="_blank">final inference notebook</a> which scores .548 private LB. </p>
<p><strong>Key changes in my stage 2 models, compared to Qishen's model</strong>: </p>
<ul>
<li>In Qishen’s notebook, he builds one single model to classify if a vertebra has a fracture. For each vertebrae C1-C7, he makes 105 input samples to train with. This makes sense because each vertebrae looks similar. C2 looks a lot like C5. But for this competition, each organ does not look like the other, so I chose to build 4 different models for liver, kidney, spleen, and bowel. <br>
<strong>special note about kidney</strong>: Since the segmentation data from the organizers had different labels for left and right kidney, my segmentation masks also had left and right kidney. In order to build a single model for the kidneys, I concatenated the left and right kidney. To be clear I took the left and right kidney arrays (shape (15,6,224,224)) resulting from the input building in stage 1.5, and combined them to get an array of shape (15, 6, 448, 224). </li>
</ul>
<p>Here is the dataloader for the stage 2 classifier model: </p>
<pre><code>class CLSDataset(Dataset):
    def __init__(self, df, mode, transform):
        self.df = df.reset_index()
        self.mode = mode
        self.transform = transform   # e.g. transforms_train / transforms_valid depending on mode

    def __len__(self):
        return self.df.shape[0]

    def __getitem__(self, index):
        row = self.df.iloc[index]
        # (75, 6, 224, 224): 15 slices x 5 organs, 6 channels per slice
        image_full = np.load(row.cls_inp_path)
        out = defaultdict(dict)
        for organ, cols, (a, b) in zip(ORGANS, LABELS, ABS):
            images = []
            for image in image_full[a: b]:
                image = np.transpose(image, (1, 2, 0))            # to HWC for albumentations
                image = self.transform(image=image)['image']
                image = np.transpose(image, (2, 0, 1))            # back to CHW
                images.append(image)
            images = np.stack(images, 0)
            if organ == 'kidney':
                # concatenate left and right kidney crops along the height axis -> (15, 6, 448, 224)
                images = np.concatenate((images[:n_slice_per_c], images[n_slice_per_c:]), 2)
            out[organ]['images'] = torch.tensor(images).float()
            out[organ]['labels'] = torch.tensor([row[cols]] * n_slice_per_c).float()
        return out
</code></pre>
<p><strong>Note</strong>: </p>
<ul>
<li>Even though the batch_size I use for the dataloader is 1, we get 15 training examples for each batch. So the model treats each 6x224x224 image on its own, but processes all 15 images at once, as if the batch size were 15 (see the sketch after this list). </li>
</ul> |
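<p>As referenced in the note above, a rough sketch of how such a batch could be consumed in the training loop (assuming <code>models</code> is a dict of per-organ classifiers; names are illustrative, not the exact training code):</p>
<pre><code># sketch: one study per dataloader batch, but each organ tensor already
# contains 15 slice-level samples that act as an effective batch of 15
for batch in train_loader:
    for organ in ORGANS:
        images = batch[organ]['images'].squeeze(0).cuda()   # (1, 15, 6, 224, 224) -> (15, 6, 224, 224)
        labels = batch[organ]['labels'].squeeze(0).cuda()   # (1, 15, n_targets)   -> (15, n_targets)
        logits = models[organ](images)
        loss = criterion(logits, labels)
        loss.backward()
</code></pre>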
RSNA 2023 Abdominal Trauma Detection | 17th Place Solution - How to Learn and Practice as a Beginner | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA 2023 Abdominal Trauma Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to RSNA for hosting this interesting competition and congrats to all the winnners for their hard work.<br>
I am a beginner who switched from physics to software engineering only a year ago. I like to learn by practice, so I chose the Kaggle competition platform, which has many excellent learning resources.</p>
<p>I am very happy that I can go from having no knowledge about 3D image processing to beating the baseline and finally getting the silver medal. I hope my competition experience can bring you some inspiration, especially the beginners like me who are struggling to beat the baseline at first.</p>
<h1>Method</h1>
<h3>Summary</h3>
<p>For learning purposes, I plan to try all of the 2d classification, 3d classification, 2d segmentation, 3d segmentation in this competition. So my pipeline may be a little complicated.</p>
<h3>Stage1: 2D + 3D segmentation</h3>
<ol>
<li><code>2D UNet</code> to segment liver, spleen, kidney_left, kidney_right and bowel.</li>
<li><code>3D UNet</code> to further finely segment spleen.</li>
</ol>
<h3>Stage2: liver, spleen, kidney: 3D classification</h3>
<ol>
<li>Crop organs from segmentation and resize: liver(64, 312, 312), spleen(80, 224, 224), kidney_left(40, 128, 128), kidney_right(40, 128, 128).</li>
<li>Use the 3D classifier <code>X3D_l</code> to classify liver, spleen and kidney (for the kidneys, only backpropagate the side with the higher predicted probability of being positive)</li>
</ol>
<h3>Stage3: bowel: 2.5D + 1D classification</h3>
<ol>
<li>Crop the bowel using the bbox of the bowel masks (only keep slices where the mask has more than 1000 positive pixels)</li>
<li>Sample 64 slices uniformly from the cropped bowel and resize each slice to (512, 512); each slice has 4 channels (z-1, z, z+1, mask). So the input shape is (B, N, 4, 512, 512), where B is the batch size and N is the number of slices.</li>
<li>Use <code>convnext_tiny</code> as the feature extractor. Input data (B, N, 4, 512, 512) is converted into features (B, N, 768) after passing through it.<br>
If <code>N < 64</code>, zero features are used as padding, so the final features have shape (B, 64, 768). The fusion head of the next steps is sketched after this list.</li>
<li>Use <code>lstm</code> + <code>attention pooling concat maxpooling</code> to fusion features.</li>
<li>Use <code>nn.Linear</code> to classify.</li>
</ol>
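<p>A minimal sketch of the kind of fusion head described in steps 3-5 (per-slice features from <code>convnext_tiny</code>, an LSTM, then attention pooling concatenated with max pooling); a bidirectional LSTM and the layer sizes are my assumptions, not the exact configuration used:</p>
<pre><code>import torch
import torch.nn as nn

class SequenceHead(nn.Module):
    """Fuses per-slice features (B, N, 768) into one series-level prediction."""
    def __init__(self, dim=768, hidden=256, num_classes=1):
        super().__init__()
        self.lstm = nn.LSTM(dim, hidden, batch_first=True, bidirectional=True)
        self.att = nn.Sequential(nn.Linear(2 * hidden, 1), nn.Softmax(dim=1))
        self.fc = nn.Linear(4 * hidden, num_classes)        # attention pooling concat max pooling

    def forward(self, feats):                               # feats: (B, N, 768) from convnext_tiny
        h, _ = self.lstm(feats)                             # (B, N, 2*hidden)
        att_pool = (self.att(h) * h).sum(dim=1)             # (B, 2*hidden)
        max_pool = h.max(dim=1).values                      # (B, 2*hidden)
        return self.fc(torch.cat([att_pool, max_pool], dim=1))
</code></pre>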
<h3>Stage4: extravasation: 2.5D + 1D classification</h3>
<ol>
<li>Sample 64 slices uniformly. For each slice I use 5 crops (top_left, top_right, bottom_left, bottom_right, center), resize each crop to (512, 512), and then stack them (see the 5-crop sketch after this list). So the input shape is (B, N, 5, 3, 512, 512), where B is the batch size and N is the number of slices.</li>
<li>Use <code>convnext_tiny</code> as the feature extractor. Input data (B, N, 5, 3, 512, 512) is converted into features (B, N, 5, 768).<br>
If <code>N < 64</code>, zero features are used as padding, so the final features have shape (B, 64, 5, 768).</li>
<li>Use <code>attention_pooling concat maxpooling</code> to fuse 5 features of each slice, so the output is shape of (B, 64, 768).</li>
<li>Use <code>lstm</code> + <code>attention pooling concat maxpooling</code> to fusion features.</li>
<li>Use <code>nn.Linear</code> to classify.</li>
</ol>
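<p>As an illustration of step 1, one way to build the 5-crop stack for a slice with torchvision (the crop size and interpolation are assumptions, not the exact settings used):</p>
<pre><code>import torch
from torchvision.transforms import functional as F

def make_five_crop_stack(img, crop_size, out_size=512):
    """img: (3, H, W) tensor -> (5, 3, out_size, out_size): TL, TR, BL, BR and center crops."""
    crops = F.five_crop(img, crop_size)                      # tuple of 5 (3, crop, crop) tensors
    crops = [F.resize(c, [out_size, out_size], antialias=True) for c in crops]
    return torch.stack(crops)                                # stacked along a new "crop" dimension
</code></pre>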
<h3>Some useful details:</h3>
<ol>
<li>When I started working on the extravasation classification, I saw <a href="https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection/discussion/441402" target="_blank">IAN PAN's extravasation bbox discussion</a> (thanks!), and I thought it would help with the classification of extravasation.<br>
For positive samples, I use it with <code>albumentations.BBoxSafeRandomCropFixedSize(288, 288)</code>, a useful augmentation that randomly crops a part of the image around the bbox. This helps <code>convnext_tiny</code> pay more attention to small areas.<br>
For negative samples, I use <code>albumentations.RandomCrop(288, 288)</code> to randomly crop a part of the input.</li>
<li>I counted every positive bowel slice and found that all of them have <code>mask.sum() >= 1000</code> (the mask is generated by TotalSegmentator). So slices with <code>mask.sum() < 1000</code> can be ignored, which makes the network concentrate on the ROI area.</li>
<li>From <a href="https://www.kaggle.com/competitions/hubmap-hacking-the-human-vasculature/discussion/430242" target="_blank">NISCHAY DHANKHAR's 3rd solution</a>, I learned to use a pseudo-labeled dataset for initial training with a large learning rate and a finely labeled dataset for final fine-tuning. So when training the 2D UNet, I use TotalSegmentator's predictions as pseudo labels and then use the 206 fine segmentations to fine-tune it.</li>
<li>Since about 5% of TotalSegmentator's zero-shot segmentations have serious problems, I use the trained 2D UNet segmentation to compute the Dice score against them; if it is less than 0.75, the study is reconsidered (see the sketch after this list).</li>
<li>For Notebook Out of Memory:</li>
</ol>
<ul>
<li>by <a href="https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection/discussion/443256" target="_blank">Shai Ronen's discussion</a>, convert 2D dicom to uint8 as soon as loaded and delete + <code>gc.collect()</code> helped a lot.</li>
<li>reduce <code>DataLoader.num_workers</code> can save RAM, set <code>DataLoader.pin_memory=False</code> can save GPU memory.</li>
</ul>
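<p>For the Dice-based agreement check referenced in point 4 above, a minimal sketch (here <code>masks</code> is a hypothetical dict mapping each study id to its (2D-UNet prediction, TotalSegmentator pseudo label) pair):</p>
<pre><code>import numpy as np

def dice(a, b, eps=1e-6):
    """Dice overlap between two binary masks of the same shape."""
    a, b = a.astype(bool), b.astype(bool)
    return (2.0 * np.logical_and(a, b).sum() + eps) / (a.sum() + b.sum() + eps)

# flag studies where the trained 2D UNet and TotalSegmentator strongly disagree
suspicious = [sid for sid, (pred, pseudo) in masks.items() if dice(pred, pseudo) < 0.75]
</code></pre>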
<h3>Things may work</h3>
<ol>
<li>For the task which has only small area of contrast, <code>maxpooling</code> may better than <code>avgpooling</code></li>
<li>Training <code>convnext_tiny</code> + <code>lstm</code> + <code>attention pooling concat maxpooling</code> + <code>nn.Linear</code> end-to-end.<br>
(I see local CV increased a lot, But I didn't have enough time to submit it before the end of competition)</li>
</ol>
<h3>Things may not work</h3>
<ol>
<li>Use <code>uniformer</code> instead of <code>X3D</code></li>
<li>Use <code>efficientnetv2_s</code> instead of <code>convnext_tiny</code></li>
</ol>
<h1>For beginners like me</h1>
<p>I know it will be a little hard when you first join a competition which unfamiliar to you.<br>
I will list some personal experience to help beginners take the first step.</p>
<h3>Before joining the competition</h3>
<p>Before joining the competition, you must first clarify the competition task type(3D CT multi-label classification) and estimate the required calculation and required computing resources and hard disk capacity.<br>
This is used to decide whether you should join this competition, because you may be distressed when you don't have enough hard disk capacity to store the processed data.</p>
<h3>Before coding</h3>
<p>For an unfamiliar task(3D CT classification), the best way to get started is to read solultions from similar competitions that have ended.<br>
It just so happens that RSNA has held many similar competitions in the past few years.<br>
I find <a href="https://www.kaggle.com/competitions/rsna-2022-cervical-spine-fracture-detection" target="_blank">RSNA 2022 Cervical Spine Fracture Detection (last year)</a> and <a href="https://www.kaggle.com/competitions/rsna-str-pulmonary-embolism-detection/" target="_blank">RSNA STR Pulmonary Embolism Detection (3 years ago)</a> are highly relevant to this competition, their tasks are all 3D CT classification.<br>
You should read a lot about the winner's solution to decide the method of your own experiments.<br>
My finding are that 2d backbone to extract features(or 2.5d) + 1d rnn to fusion them tend to perform best.</p>
<h3>About coding</h3>
<p>After reading the top solutions, you have two routes to develop your own pipeline.<br>
First is to copy and edit public code, but you should understand every line of code.<br>
Second is to refer to public code and use your own programming habits or code segment to develop pipeline.<br>
I use the second method because it greatly improves my coding ability.<br>
Over my last year of learning deep learning, I have continuously accumulated and written a <a href="https://github.com/m1dsolo/yangDL" target="_blank">simple pytorch-based framework for multi-fold train, val, test, predict</a>.<br>
It exercises my coding skills and greatly speeds up my coding; it feels really good to have all the code in your own hands.<br>
At the same time, this accumulated code will also be useful for similar tasks in the future.</p>
<h3>Design method</h3>
<p>Most people's methods can't beat the baseline probably because they just throw the 3D data at the network for training.<br>
I did this too at the beginning, but the results were poor.<br>
<a href="https://www.kaggle.com/competitions/rsna-str-pulmonary-embolism-detection/discussion/194145" target="_blank">GUANSHUO XU's 1st solution</a> tells me that cropping ROI is really important.<br>
Maybe this is because it is difficult for the network to learn the ROI from just a few thousand training samples in a 3D classification task.<br>
So I think segmenting each organ is an important first step.</p>
<p>Since there are 2D classification labels for bowel and extravasation, so I decide to use 2d classification method for both of them and 3d classification method for liver, spleen and kidney.</p>
<p>By <a href="https://www.kaggle.com/competitions/rsna-2022-cervical-spine-fracture-detection/discussion/364837" target="_blank">Selim's 4th solution (CSN)</a> and <a href="https://www.kaggle.com/competitions/rsna-2022-cervical-spine-fracture-detection/discussion/362651" target="_blank">IAN PAN's 6th solution (X3D)</a>, it seems that using <code>transformer</code> as backbone is not good for small amounts of training data. Finally I choose <code>X3D_l</code> as my 3d classifier.</p>
<p>Next is to design the 2d + 1d method for bowel and extravasation classification.<br>
By <a href="https://www.kaggle.com/competitions/rsna-2022-cervical-spine-fracture-detection/discussion/362607" target="_blank">QISHEN HA's 1st solution</a> and <a href="https://www.kaggle.com/competitions/rsna-breast-cancer-detection/discussion/392449" target="_blank">Đăng Nguyễn Hồng's 1st solution</a>, <code>convnext</code> should be a good choice as 2D features extractor.<br>
By <a href="https://www.kaggle.com/c/rsna-str-pulmonary-embolism-detection/discussion/194145" target="_blank">Guanshuo Xu's 1st solution</a>, model of <code>lstm</code> + <code>attention pooling concat maxpooling</code> is selected by me to fuse features.</p>
<p>For segmentation, because I already wrote 2D segmentation code during last competition <a href="https://www.kaggle.com/competitions/hubmap-hacking-the-human-vasculature" target="_blank">hubmap-hacking-the-human-vasculature</a>, so I just decide to train a 2D UNet in early experiments.<br>
Later I discovered that 2D UNet was not very good for segmenting spleen (maybe because my bad training skill), so I use 3D UNet to further refine segment spleen by inputting only data cropped by bbox of 2D UNet's coarse segmentation (spleen dice: 0.880 -> 0.943)</p>
<h1>Further</h1>
<p>After I briefly readed other winning teams' solution, I list some tips that I might try in the future.</p>
<ol>
<li><a href="https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection/discussion/447449" target="_blank">NISCHAY DHANKHAR's 1st solution</a>: Auxiliary Segmentation Loss, 3D segmentation, generate image level label from series level label by organ visibility.</li>
<li><a href="https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection/discussion/447453" target="_blank">THEO VIEL's 2nd solution</a>: infer which organs are present each of the slice, heavy augmentation, 3D <code>resnet18</code> to crop organs, use <code>RNN</code> to aggregate information from previous model and optimize the competition metric directly.</li>
<li><a href="https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection/discussion/447464" target="_blank">YUJIARIYASU's 3rd solution</a>: enlarge mask before crop, input all organs into one model.</li>
<li><a href="https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection/discussion/447549" target="_blank">LLREDA's 7th solution</a>: idea of <code>Mask2Former</code>, use image level label to assist feature learning, method of crop.</li>
<li><a href="https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection/discussion/447706" target="_blank">IAN PAN's 8th solution</a>: use square root to scale probabilities.</li>
<li><a href="https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection/discussion/447506" target="_blank">KAPENON's 9th solution</a>: post-process to improve the optimization of any_injury.</li>
<li><a href="https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection/discussion/447450" target="_blank">YU4U's 10th solution</a>: stacking model to optimize any_injury, upsample positive smaples, train <code>max(logits, gt)</code> instead of <code>gt</code> because noise in image level label, <code>region_crop()</code> to remove outer black areas.</li>
</ol> |
Kaggle - LLM Science Exam | Public Top3 -> Private Top15 Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kaggle - LLM Science Exam
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>What a big surprise! But we still lost the $10,000 prize. It looks like overfitting: we shook from 3rd on the public LB down to 15th on the private LB.<br>
To be honest, we focused almost all of our attention on RAG and used just two DeBERTa models for the multiple-choice head, so the overfitting problem probably lies in the DeBERTa models (unlike other teams' 7B LLMs).<br>
My English is not very good, so some of the content is translated from Chinese with a translator. 🤣<br>
We used a 5-way recall strategy, which can be divided into two parts: Sentence Model and TF-IDF.<br>
<strong>Here's a flowchart of our overall pipeline.</strong><br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F7722685%2F597bed18f695c4a355f6a88029836607%2F.png?generation=1697180108921230&alt=media" alt=""></p>
<p><strong>Part 1. Sentence Model:</strong><br>
We fine-tuned our sentence model with SimCSE, which improved our LB by 0.015. <strong>In particular, we used a trick we call Difficult Sample Comparison Learning (hard-negative contrastive learning).</strong> We first train SimCSE, then run inference with it on our training set to produce a top-5 recall, and feed the top 4 retrieved passages (excluding the ground truth) into the SimCSE loss as negatives. This gives our model a stronger textual representation and an additional 0.005 LB lift over plain SimCSE.<br>
<strong>Note that all the sentence models we use are trained with SimCSE.</strong></p>
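<p>A minimal sketch of what such a hard-negative contrastive objective could look like: an InfoNCE-style loss where, in addition to in-batch negatives, the top-4 retrieved-but-wrong passages from the first-round model are used as extra negatives. Variable names and the temperature are illustrative, not our exact implementation.</p>
<pre><code>import torch
import torch.nn.functional as F

def contrastive_loss_with_hard_negatives(q, pos, hard_negs, temperature=0.05):
    """q: (B, D) question embeddings, pos: (B, D) gold passages,
    hard_negs: (B, K, D) top-K retrieved-but-wrong passages from the first-round model."""
    q, pos = F.normalize(q, dim=-1), F.normalize(pos, dim=-1)
    hard_negs = F.normalize(hard_negs, dim=-1)
    in_batch = q @ pos.t()                                  # (B, B): diagonal holds the positives
    hard_sim = torch.einsum('bd,bkd->bk', q, hard_negs)     # (B, K): hard-negative similarities
    logits = torch.cat([in_batch, hard_sim], dim=1) / temperature
    labels = torch.arange(q.size(0), device=q.device)       # each question's positive is its own row
    return F.cross_entropy(logits, labels)
</code></pre>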
<p><strong>Part 2. TF-IDF:</strong><br>
Just like the open-source notebooks. <strong>In particular, we sped up TF-IDF so that it takes only 20 minutes.</strong><br>
<strong>Since we found that the sentence model cannot encode the full text of a Wikipedia article, TF-IDF, BM25 and an LGBM ranker are effective complements to it, because they can encode text of any length.</strong></p>
<p>Our 5-way recall strategy:</p>
<ol>
<li>6800K-article Wikipedia dump. We use the sentence model to recall the top 1000 wiki articles, rerank them with BM25 (or a LightGBM ranker) and keep only the top 30, then split those articles and use the sentence model again to get the top 20 sentences. This path alone scores 0.885 on the LB.</li>
<li>270K-article Wikipedia dump (2100K paragraphs). The sentence model recalls the top 5 paragraphs.</li>
<li>270K-article Wikipedia dump (2100K paragraphs). TF-IDF recalls the top 8 paragraphs.</li>
<li>270K-article Wikipedia dump (2800K paragraphs). The sentence model recalls the top 5 paragraphs.</li>
<li>270K-article Wikipedia dump (2800K paragraphs). TF-IDF recalls the top 8 paragraphs.</li>
</ol>
<p>Interestingly, we slice the dataset into multiple indexes, recall from each one individually, and then sort by search score to get the top 5. This solves the problem of Faiss taking up too much memory. 🤣</p>
<p><strong>Some miscellany</strong><br>
<strong>This was a discussion-driven competition!</strong><br>
It was a very competitive competition, but there was also a lot of great open-source work that completely changed it. MB's 270K dataset is undoubtedly excellent work, but it also completely reshaped the competition in its second half; after that, most of the tricks could be found in the discussion forum.<br>
Thanks to <a href="https://www.kaggle.com/code/mbanaei/86-2-with-only-270k-articles" target="_blank">MB's notebook</a>, which led us to fuse multiple datasets.<br>
Thanks to <a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam/discussion/442595#2462052" target="_blank">william.wu</a>, who led us to fuse Faiss and TF-IDF retrieval.<br>
Thanks to <a href="https://www.kaggle.com/code/mgoksu/0-807-sharing-my-trained-with-context-model" target="_blank">MGÖKSU</a> for a wonderful baseline!<br>
This is the first Kaggle competition I've been seriously involved in. Thanks to my teammates, the organizers, and everyone who shared their work!<br>
<strong>Life is full of surprises!</strong></p>
CommonLit - Evaluate Student Summaries | 22nd Efficiency - My creative ideas | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit - Evaluate Student Summaries
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First and foremost, I want to express how much I've enjoyed participating in this competition and how deeply I adore delving into the world of machine learning. However, I must confess that I became quite engrossed in this competition, finding it somewhat addictive to explore the plethora of novel ideas and approaches constantly bubbling up in my mind. I couldn't resist the urge to implement and experiment with them. In all honesty, especially during the last few weeks, I did experience a degree of disappointment. It stemmed from my inability to unearth any substantial correlations, structures, or approaches that significantly contributed to improving my scores. I meticulously crafted features that exhibited a high correlation with the score, but to my chagrin, they often led to worse results. Towards the end, it felt more akin to filling out a lottery ticket than engaging in scientific endeavors; it had become far too random.</p>
<p>Since we're encouraged to share our solutions, I'd like to offer my highest-scoring notebook and provide an overview of some of the strategies I pursued with great hope, only to see them fall short in the end.</p>
<p>I should mention that from the start, I made a conscious decision to focus exclusively on the Efficiency track, given my passion for constructing computationally efficient solutions. I also refused to simply run one of the high scoring publicly shared notebooks.</p>
<p>Here are some of the techniques I experimented with, despite their ultimate lack of impact on my scores:</p>
<ul>
<li>Extracting 10 stemmed noun keywords from both prompt_question + prompt_text and input_text, and then comparing them, both by direct overlap and by TF-IDF-vectorized cosine similarity (see the sketch after this list).</li>
<li>Extracting the three most crucial sentences from the prompt_text and utilizing them for comparisons with the input_text.</li>
<li>Employing metrics such as ROUGE scores, METEOR scores, JACCARD similarity, and BLEU scores.</li>
</ul>
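<p>For reference, a minimal sketch of the TF-IDF cosine-similarity feature mentioned in the first bullet (column names follow the competition data; fitting a vectorizer per row is shown only for clarity):</p>
<pre><code>from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def tfidf_similarity(df):
    """Cosine similarity between each summary and its prompt question + text."""
    sims = []
    for _, row in df.iterrows():
        vec = TfidfVectorizer(stop_words='english')
        mat = vec.fit_transform([row['prompt_question'] + ' ' + row['prompt_text'], row['text']])
        sims.append(cosine_similarity(mat[0], mat[1])[0, 0])
    return sims
</code></pre>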
<p>In my best-performing notebook, I did incorporate BM25 and textblob features. However, I can't say I was entirely confident that these features were responsible for the observed score improvements. There must have been some effect, but I wished I could have established a more concrete relationship between these features and my scores.</p>
<p><strong>My unsuccessful attempt with a BERT model:</strong></p>
<p>I aimed to deploy a BERT model capable of processing data in under 10 minutes using only CPU resources. I experimented with various BERT variants but eventually settled on deberta-v3-xsmall, with a limited input length of only 96 tokens to heavily expedite inference. I trained the model on 30% of the data in a single epoch with a learning rate of 1.5e-5. Subsequently, I froze the layers and added content and wording heads, one for each fold. These heads consisted of simple GRU models connected to the CLS layer, essentially creating an ensemble without incurring additional time costs.</p>
<p>While the model achieved an RMSE of approximately 0.58 in isolation, it lowered my cv score to around 0.44. However, my public score remained at 0.56, indicating severe overfitting to the training data.</p>
<p>In hindsight, I acknowledge that I might have achieved better results with a longer input length, but given my CPU constraints and my desire to perform well on the efficiency leaderboard, I opted to work with a smaller text portion.</p>
<p>I want to emphasize that I'm not a seasoned LLM expert; rather, I'm a learner eager to experiment and get creative in crafting solutions that work effectively and score well. I truly wish my efforts had borne fruit, but that is true for all of us of course.</p>
<p><strong>Regarding the BERT model I did use:</strong></p>
<p>In this particular notebook, I employed a deberta-v3-xsmall model trained using the data from the English Language competition. My approach involved selecting the three middle sentences from each text in that competition data to train the model, as this approximated the average text length of the input_text. However, I restricted the maximum input length as well to only 96 tokens.</p>
<p>This managed to boost my score by approximately 0.04.</p>
<p>I welcome any feedback on my approach, as there's nothing more valuable than the opportunity to learn and improve.</p>
<p>Thank you very much.</p>
<p><a href="https://www.kaggle.com/code/geoffreybeulque/commonlit-efficiency-final-submission" target="_blank">https://www.kaggle.com/code/geoffreybeulque/commonlit-efficiency-final-submission</a></p> |
CommonLit - Evaluate Student Summaries | 108th solution🥉 | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit - Evaluate Student Summaries
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h1>Solution</h1>
<p>My best submission was almost same as this public code.<br>
(<a href="https://www.kaggle.com/code/tsunotsuno/debertav3-lgbm-no-autocorrect" target="_blank">https://www.kaggle.com/code/tsunotsuno/debertav3-lgbm-no-autocorrect</a>)<br>
I appreciate <a href="https://www.kaggle.com/tsunotsuno" target="_blank">@tsunotsuno</a> to share the sophisticated notebooks!</p>
<p>I changed the deberta-v3-base to deberta-v3-large.<br>
Training was done on google colab notebook.</p>
<h1>What I tried but didn't work</h1>
<ul>
<li>MLM training for summary texts</li>
<li>summary_text + prompt_question +prompt_text training.</li>
<li>feed manual features made in the above public notebook to the nlp header.</li>
</ul>
<p>The fact that the training and test sets share no prompt_id, and that the training set has only 4 prompt_ids (which seems insufficient for the model to learn the interaction pattern between prompts and summaries), seemed to me the most important factor in this competition.</p>
<p>So I thought it might be effective to teach the model the interaction between prompt and summary using the test set, because we have only 4 prompts in the training set but many prompts in the test set. I therefore tried deberta-v3-base MLM training on the test set before the regression training, but it didn't work.</p>
<p>Thinking about how to tackle this point took up most of the time I spent on this competition…<br>
I eventually selected the simple deberta-v3-large solution, but there was still potential for improvement…</p>
<h1>What I should have done</h1>
<ul>
<li>ensemble nlp models using different combination of input order</li>
<li>use longer token length</li>
<li>ensemble GBDT stacking</li>
<li>pooling techniques (mean/max pooling, convolution, LSTM)</li>
</ul>
<p>and so on.</p>
<p>I hadn't noticed that changing the order or combination of the inputs to the NLP model makes enough of a difference between models for them to be useful in an ensemble!</p>
<h1>Other</h1>
<p>I noticed that the DeBERTa sequence classification model implementation uses a 'type_ids' argument instead of 'token_type_ids'.<br>
This seems to be one of the causes of including the prompt_text in the DeBERTa input harming model performance when you use the huggingface Trainer and AutoTokenizer combination.<br>
Maybe the SEP token alone is not enough for the model to distinguish which part to score for content and wording.<br>
You can work around this issue by renaming the key returned by the tokenizer from token_type_ids to type_ids.</p>
<p><a href="https://github.com/microsoft/DeBERTa/blob/master/DeBERTa/apps/models/sequence_classification.py" target="_blank">https://github.com/microsoft/DeBERTa/blob/master/DeBERTa/apps/models/sequence_classification.py</a></p> |
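<p>Following that suggestion, a minimal sketch of renaming the key in the tokenization step (column names and max length are illustrative):</p>
<pre><code>def tokenize(example, tokenizer, max_len=512):
    enc = tokenizer(example["text"], example["prompt_question"],
                    truncation=True, max_length=max_len)
    # the microsoft/DeBERTa implementation expects `type_ids`, not `token_type_ids`
    if "token_type_ids" in enc:
        enc["type_ids"] = enc.pop("token_type_ids")
    return enc
</code></pre>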
RSNA 2023 Abdominal Trauma Detection | 10th Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA 2023 Abdominal Trauma Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congrats to all prize and medal winners! I enjoyed this competition because there are a great variety of options for solving this task as with last year's RSNA competition. We share our team's solution ( <a href="https://www.kaggle.com/ren4yu" target="_blank">@ren4yu</a> + <a href="https://www.kaggle.com/tattaka" target="_blank">@tattaka</a> ).</p>
<h1>Summary</h1>
<p>Our solution is to first segment the organs, cut out each organ region, and build a dedicated model for each organ.<br>
For the bowel and extravasation classes, for which large regions must be explored, we do not perform segmentation to cut out the regions, but instead perform simple black region removal and input the large regions into the models.<br>
The results of each model are refined by the stacking model and submitted as the final result.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F745525%2Ff5b5b27bf27dbd11ae78efca228ff017%2F1.png?generation=1697414699260439&alt=media" alt=""></p>
<h1>Segmentation Model</h1>
<p>We used the 3D SwinUNETR model provided by MONAI. It works surprisingly well with even a small amount of training data. To reduce the computational cost, the entire voxel was resized into 128x128x128 before segmentation.</p>
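<p>For reference, a minimal sketch of instantiating that model with MONAI (the number of output channels and the feature size here are assumptions, not our exact configuration):</p>
<pre><code>import torch
from monai.networks.nets import SwinUNETR

# one channel per structure (liver, spleen, left/right kidney, bowel) plus background
model = SwinUNETR(img_size=(128, 128, 128), in_channels=1, out_channels=6, feature_size=48)

volume = torch.randn(1, 1, 128, 128, 128)   # whole study resized to 128^3
logits = model(volume)                      # (1, 6, 128, 128, 128)
</code></pre>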
<h1>Organ Models</h1>
<p>For liver, spleen, kidney, and bowel cropped regions, 2.5D CNN + LSTM models are used.</p>
<h2>Liver and Spleen Models</h2>
<p>Cropped region is resized into 16x386x386, and fed into dedicated models.</p>
<h2>Kidney Model</h2>
<p>The left and right kidneys are independently cropped, resized, and concatenated along the horizontal axis. This enables horizontal flip augmentation and TTA. The concatenated region becomes 16x224x448.</p>
<h2>Bowel Model</h2>
<p>The cropped region is resized into 64x224x224. In training the bowel model, both patient-level and image-level labels were used.</p>
<h1>Stacking Model</h1>
<p>The purpose of the stacking model is to directly optimize the average weighted logloss, the metric for this competition, including the logloss for any_injury. Each organ model is optimized for the weighted logloss of its own injury type, but not for any_injury, whose probability is derived automatically from the probabilities of the other injuries. Optimizing any_injury explicitly is essential because its weight is relatively large.<br>
As a stacking model, we use a simple 4-layer MLP trained with the competition metric as the loss function.</p>
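<p>A minimal sketch of such a stacking MLP (the input is the concatenated per-organ model outputs; the hidden size and the 13 output columns are assumptions, and the competition weighted log-loss used for training is not reproduced here):</p>
<pre><code>import torch.nn as nn

class StackingMLP(nn.Module):
    """Refines the concatenated per-organ model outputs into the 13 submission probabilities."""
    def __init__(self, in_dim, hidden=64, out_dim=13):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim),   # bowel(2) + extravasation(2) + kidney/liver/spleen(3 each)
        )

    def forward(self, x):                 # x: concatenated predictions of the organ models
        return self.net(x)
</code></pre>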
<p>The table below shows the CV evaluation results before and after stacking (The values below are out of date because we wrote this solution before the competition was extended. The final submission's CV is 0.3686).</p>
<table>
<thead>
<tr>
<th></th>
<th>bowel</th>
<th>extravasation</th>
<th>kidney</th>
<th>liver</th>
<th>spleen</th>
<th>any</th>
<th>overall</th>
</tr>
</thead>
<tbody>
<tr>
<td>w/o stacking</td>
<td>0.1186</td>
<td>0.4716</td>
<td>0.2948</td>
<td>0.4282</td>
<td>0.4301</td>
<td>0.5528</td>
<td>0.3827</td>
</tr>
<tr>
<td>with stacking</td>
<td>0.1034</td>
<td>0.4885</td>
<td>0.2861</td>
<td>0.4409</td>
<td>0.4452</td>
<td>0.4777</td>
<td>0.3736</td>
</tr>
</tbody>
</table>
<h1>tattaka's Part</h1>
<p>I was in charge of bowel and extravasation classification. Our solution followed <a href="https://www.kaggle.com/competitions/rsna-str-pulmonary-embolism-detection/discussion/194145" target="_blank">the 2-stage approach of the RSNA competition 3 years ago</a>. <br>
The basic setup is as follows</p>
<h2>Image-Level Modeling (1st stage)</h2>
<p>The input for the 1st stage is a 3-channel image including adjacent frames. <br>
One epoch of training was performed on all labeled images.</p>
<ul>
<li>backbone: resnetrs50 </li>
<li>head: <ul>
<li>Separate the head by bowel and extravasation</li></ul></li>
</ul>
<pre><code>nn.Sequential(
    nn.Conv2d(num_features[-1], 512, kernel_size=1),   # project backbone features to 512 channels
    nn.AdaptiveAvgPool2d((1, 1)),
    Flatten(),                                          # this 512-d vector is reused as the image feature
    nn.Dropout(0.1, inplace=True),
    nn.Linear(512, 1),                                  # one logit per head (bowel / extravasation)
)
</code></pre>
<p>In the 1st stage, the model learns bowel and extravasation at the same time.</p>
<h2>Series-Level Modeling (2nd stage)</h2>
<p>The input for the 2nd stage also followed the previous solution. <br>
Use the 512 dimensions after Flatten in the head of the 1st stage as image features.<br>
Image features are created with stride=3 instead of all images, and the input sequence length is set to a maximum of 256 in the same way as the previous solution. <br>
The differences between adjacent features are combined, and the input to the model is in the form (bs, 256, 1536). </p>
<ul>
<li>model:<ul>
<li>Combines attention pooling and max pooling of the BiGRU outputs in one layer to create a series-level prediction.</li>
<li>The BiGRU outputs are also used to predict the image-level labels. </li></ul></li>
</ul>
<p>Unlike the 1st stage, bowel and extravasation were optimized separately.</p>
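<p>One plausible construction of the 1536-dimensional sequence input described above (512-d image features taken every third slice, padded or truncated to length 256, concatenated with differences to the previous and next features); this is a sketch under those assumptions, not the exact code, and the BiGRU + attention/max pooling head on top follows the same pattern as the pooling heads described elsewhere in this thread:</p>
<pre><code>import torch

def build_sequence(features, max_len=256, dim=512):
    """features: (T, 512) image features taken every 3rd slice -> (256, 1536) model input."""
    T = features.shape[0]
    if T < max_len:
        features = torch.cat([features, features.new_zeros(max_len - T, dim)], dim=0)
    else:
        features = features[:max_len]
    diff_prev = features - torch.roll(features, shifts=1, dims=0)
    diff_next = features - torch.roll(features, shifts=-1, dims=0)
    return torch.cat([features, diff_prev, diff_next], dim=1)   # 512 * 3 = 1536 channels
</code></pre>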
<h2>Tricks for Successful Training</h2>
<p>There are a few tricks to successful learning in this competition.</p>
<ul>
<li>Because of large data imbalance, upsampling of the positive sample by a factor of 10 (high impact)<ul>
<li>In addition, focal loss is used</li></ul></li>
<li>After training stage 1 at the image level, train again using max(logit, gt) as the new image-level labels (high impact)<ul>
<li>repeated this twice</li>
<li>perhaps because the image-level labels are noisy</li></ul></li>
<li>Rule-based removal of the outer black areas before the image is entered into the model<ul>
<li>Because of the longer computation time when using a larger resolution, a size of 384x384 is used after removing the outer area.</li></ul></li>
</ul>
<pre><code>def crop_outer_black(img: np.ndarray) -> np.ndarray:
    # collapse channels to a single uint8 image
    image_1ch = (img.mean(axis=2) * 255).astype(np.uint8)
    kernel = np.ones((10, 10), np.uint8)
    image_1ch = cv2.erode(image_1ch, kernel, iterations=1)
    mask = image_1ch > 0
    if mask.sum() == 0:
        return img
    rows = np.any(mask, axis=1)
    cols = np.any(mask, axis=0)
    y_min, y_max = np.where(rows)[0][[0, -1]]
    x_min, x_max = np.where(cols)[0][[0, -1]]
    if (y_max - y_min) > 0 and (x_max - x_min) > 0:
        img = img[y_min:y_max, x_min:x_max]
    return img
</code></pre>
<p>The bowel and extravasation scores before stacking are</p>
<table>
<thead>
<tr>
<th></th>
<th>bowel logloss</th>
<th>bowel auc</th>
<th>ev logloss</th>
<th>ev auc</th>
</tr>
</thead>
<tbody>
<tr>
<td>stage1</td>
<td>0.2719</td>
<td>0.9314</td>
<td>0.4602</td>
<td>0.8287</td>
</tr>
<tr>
<td>stage2</td>
<td>0.1167</td>
<td>0.9167</td>
<td>0.4579</td>
<td>0.8264</td>
</tr>
</tbody>
</table>
<h2>Not works</h2>
<ul>
<li>label smoothing</li>
<li>resnet3dcsn<ul>
<li>Not bad, but I didn't have time to tune it.</li></ul></li>
<li>scaling logit</li>
<li>GeM pooling</li>
<li>Other backbone<ul>
<li>ConvNeXt is slightly worse than resnetrs50</li>
<li>I could not get the backbone of the transformer to work </li></ul></li>
</ul> |
CommonLit - Evaluate Student Summaries | 11th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit - Evaluate Student Summaries
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you Learning Agency Lab and Kaggle for hosting this competition. We are relieved 😌 to survive the shake up. </p>
<p><strong>Context:</strong> <a href="http://www.kaggle.com/competitions/commonlit-evaluate-student-summaries/overview/description" target="_blank">www.kaggle.com/competitions/commonlit-evaluate-student-summaries/overview/description</a></p>
<p><strong>Data:</strong> <a href="https://www.kaggle.com/competitions/commonlit-evaluate-student-summaries/data" target="_blank">https://www.kaggle.com/competitions/commonlit-evaluate-student-summaries/data</a></p>
<h3>Overview of the approach</h3>
<p>Our selected submission had 17 diverse models. We used stacking with LGBM. </p>
<p>We used one prompt per fold cv strategy. All our models had collate function which speeds up training as well as inference. Lower batch sizes were helpful as well. Most models had evaluation at the end of epoch, few models had evaluation multiple times during epoch. Many of the diverse and smaller models were distilled (described below). </p>
<p>All of us were able to contribute models to the stack, some of us never checked individual models scores because public LB seemed unstable. Our stack seemed more stable than individual models. </p>
<p>Apart from model predictions we had following features in stack: </p>
<pre><code>word_overlap_count
bigram_overlap_count
bigram_overlap_ratio
trigram_overlap_count
trigram_overlap_ratio
- Text stat features
Osman
Gutierrez_polini
Dale_chall_readability_score
flesch_kincaid_grade
</code></pre>
<h3>Model Description</h3>
<p>Here we describe each of the models : </p>
<p><strong>(A)</strong> <strong>Deberta v3 large - full prompt text psuedo labeled model</strong> [ar3]<br>
This model had the best CV (0.4721) in the stack. It was trained with max len 1790 and an attention head on top, and it also used a back-translated pseudo-labeled dataset: every sample was translated to German and then back to English. We randomly selected 50% of this data for training, so every training fold had a different set of ~3100 pseudo-labeled samples, while the validation set had no pseudo-labeled samples. <br>
Loss function for this model was combination of SmoothL1Loss, RMSE Loss and RankLoss - <a href="https://www.kaggle.com/fightingmuscle" target="_blank">@fightingmuscle</a> described the loss function <a href="https://www.kaggle.com/competitions/commonlit-evaluate-student-summaries/discussion/446554" target="_blank">here</a><br>
Input for this model - <code>[ANSWER_START] + summary_text + [ANSWER_END] + [PROMPT_START] + prompt_text [PROMPT_END] + [QUESTION_START] + prompt_question + [QUESTION_END]</code><br>
Inference maxlen = 1024</p>
<hr>
<p><strong>(B)</strong> </p>
<ul>
<li>Deberta V3 Large [MK 43] - MeanPooling</li>
<li>Deberta V3 Large [MK 45] - MeanPooling</li>
</ul>
<p>These were the same models with different seeds and mean pooled, input to these were summary_text + prompt_question, RMSE loss, validation end of epoch. </p>
<hr>
<p><strong>(C)</strong> </p>
<ul>
<li>Deberta V3 Large [r3] - CLS </li>
<li>Deberta V3 Large [r4] - CLS</li>
<li>Deberta V3 Large [r5] - CLS </li>
</ul>
<p>These were the same models with different seeds and with CLS token, input to these were summary_text + prompt_question, MSE Loss,validation end of epoch. I added extra text “Summarize:” to prompt question , if we look 👀 at the data 🔎carefully, notice that there are prompts with "cite evidence" and these have much longer summary texts and it doesn't say summarize. After reading other solutions, should have tried some more variations to this. </p>
<hr>
<p><strong>(D)</strong> </p>
<ul>
<li>Bart Large - Attention Head (Distilled)</li>
<li>Electra Large - Attention Head (Distilled)</li>
<li>Funnel Large - Mean pooling (Distilled)</li>
</ul>
<p>All these models were distilled - 50% oof labels and 50% true labels, with AWP added - only summary_text as input, validated multiple times during epoch. </p>
<hr>
<p><strong>(E)</strong></p>
<ul>
<li>Deberta v3 large [ar2] - CLS Token </li>
<li>Deberta v3 base [ar4] - CLS Token - Distilled model </li>
<li>Deberta v3 XSmall [ar5] - CLS Token - Distilled model </li>
<li>Deberta v3 XSmall [ar7] - Distilled model - Bigram Signal </li>
</ul>
<hr>
<p>Deberta v3 large [ar2] - Order of input to this model was different than the remaining models - <code>text + SEP + prompt_question</code> </p>
<p>Deberta v3 Base and XSmall models [ar4, ar5, ar7] - These were distilled models. We utilized oof labels from a previous stack: these models were trained using oof labels, whereas the validation set kept the true labels. </p>
<p>Bigram Signal - </p>
<pre><code>def forward_features(self, input_ids, attention_mask, token_type_ids):
    outputs = self.model(input_ids, attention_mask, token_type_ids)
    # element-wise product of each token embedding with its right neighbour ("bigram signal")
    bigram_signal = outputs.last_hidden_state[:, :-1, :] * outputs.last_hidden_state[:, 1:, :]
    feature = bigram_signal[:, 0, :]
    return feature
</code></pre>
<p><strong>(F)</strong> <strong>Deberta v3 large - Classification - Attention Head , Mean Pooling - (4x)</strong> [ar6]<br>
After analyzing data, we noticed there were 1134 distinct categories of content and wording, so we added a classification model with <code>BCEWithLogitsLoss</code>. We trained 4 different models with different heads, averaged them together and then added to the stack. </p>
<h4>Other</h4>
<p>We haven't understood how or why the splits for training/public lb/private lb were created. The distribution of prompts for training and public lb is very different than private lb. Very long prompt text in public and training set but not in private lb. Also number of summary text per prompt is very different for training/public LB and private LB, unfortunately these resulted in shake down for many many teams. </p>
<p><em>Many other things tried but were not part of selected submission</em></p>
<ul>
<li>MLM based on this dataset shared <a href="https://www.kaggle.com/competitions/commonlit-evaluate-student-summaries/discussion/441202" target="_blank">here</a> </li>
<li>RAG (Similar to what was shared in LLM )</li>
<li>Many different architectures , heads, hyperparameters, loss functions </li>
<li>2x T4 Inference </li>
<li>Large Distilled models </li>
<li>Adding more features to LGBM</li>
<li>Catboost, XGBoost, NN alongwith LGBM</li>
<li>Generating data from GPT3.5</li>
<li>I'm glad we didn't spend much time on sliding window approach because the maxlen of prompts in private lb is less than or equal to public lb. </li>
</ul>
<p><strong>Acknowledgements</strong><br>
Thank you <a href="https://www.kaggle.com/kononenko" target="_blank">@kononenko</a> for <a href="https://www.kaggle.com/code/kononenko/pip-install-nlp-mit?scriptVersionId=142526877" target="_blank">this</a> compliant textstat kernel </p>
<p>Thank you <a href="https://www.kaggle.com/tsunotsuno" target="_blank">@tsunotsuno</a> for <a href="https://www.kaggle.com/code/tsunotsuno/updated-debertav3-lgbm-with-spell-autocorrect" target="_blank">LGBM features</a> </p>
<hr>
<p>🙏 Thank you to my amazing teammates, grateful to have worked together on this with you all. </p>
<ul>
<li><a href="https://www.kaggle.com/aman1391" target="_blank">@aman1391</a> </li>
<li><a href="https://www.kaggle.com/phoenix9032" target="_blank">@phoenix9032</a> </li>
<li><a href="https://www.kaggle.com/ragnar123" target="_blank">@ragnar123</a> </li>
<li><a href="https://www.kaggle.com/fightingmuscle" target="_blank">@fightingmuscle</a> </li>
</ul> |
Bengali.AI Speech Recognition | 699th Rank for Bengali.AI Speech Recognition Challenge! | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Speech Recognition
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><ul>
<li>Business context: <a href="https://www.kaggle.com/competitions/bengaliai-speech" target="_blank">https://www.kaggle.com/competitions/bengaliai-speech</a></li>
<li>Data context: <a href="https://www.kaggle.com/competitions/bengaliai-speech/data" target="_blank">https://www.kaggle.com/competitions/bengaliai-speech/data</a></li>
</ul>
<p>I'm relatively new to competitions and submissions, and also this is my first time working with Audio data. I had zero knowledge about ASR and Audio models. During this competition, I gained a lot of experience with ASR and Transformers and implementing and training models.</p>
<p><strong>Model</strong> : OpenAI Whisper-small pre-trained model <br>
<strong>Training step</strong> : trained up to step 200,000 <br>
<strong>Learning rate</strong> :1e-5<br>
<strong>Batch_size</strong> =12<br>
<strong>Trained on</strong> : 12GB RTX 3080 Ti<br>
<strong>Trainer</strong>: Hugging face trainer<br>
<strong>External Dataset</strong>: MUSAN<br>
<strong>Normalization</strong>: Normalize Training Data with librosa.util.normalize<br>
<strong>Audio Augmentation</strong>: add 4 types of audio augmentation randomly to the data while preparing it for training (see the sketch after the list below)</p>
<ul>
<li><strong>Pitch</strong>: Change the pitch of the audio clip with librosa.effects.pitch_shift </li>
<li><strong>Speed</strong>: Change the speed of the audio clip with librosa.effects.time_stretch </li>
<li><strong>Noise</strong>: Add randomly chosen background noise audio clips from the MUSAN dataset to the Training Data</li>
<li><strong>Music</strong>: Add randomly chosen background music instruments audio clips from the MUSAN dataset to the Training Data</li>
</ul>
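<p>A minimal sketch of these augmentations with librosa, as referenced above (the probability of applying each augmentation, the pitch/speed ranges, and the noise gain are my assumptions; <code>noise</code> would be a randomly chosen MUSAN clip):</p>
<pre><code>import numpy as np
import librosa

def augment(y, sr=16000, noise=None):
    """Randomly apply pitch shift, speed change, or background noise/music mixing."""
    y = librosa.util.normalize(y)
    if np.random.rand() < 0.5:
        y = librosa.effects.pitch_shift(y, sr=sr, n_steps=np.random.uniform(-2, 2))
    if np.random.rand() < 0.5:
        y = librosa.effects.time_stretch(y, rate=np.random.uniform(0.9, 1.1))
    if noise is not None and np.random.rand() < 0.5:
        n = noise[:len(y)] if len(noise) >= len(y) else np.pad(noise, (0, len(y) - len(noise)))
        y = y + 0.1 * librosa.util.normalize(n)              # mix a MUSAN clip at low gain
    return y
</code></pre>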
<p>I chose the Whisper model because the dataset paper mentioned that Whisper had better results than any other model they tried. <br>
I applied some audio augmentation to the training data, but it did not help much in getting a better score, which suggests these augmentation steps did not close the gap between the training and test data. <br>
After the competition, reading the leaderboard solutions, I realized I should have spent more time cleaning the data: removing noisy audio clips, poorly annotated samples, and poor-quality recordings, and making the training data more representative of the test dataset and its diverse domains.</p>
<p>Thank you to everyone who takes the time to read this, and I really appreciate any suggestions you may have to help me enhance my machine learning skills.</p>
<p>Inference notebook: <a href="https://www.kaggle.com/code/saeidehmousavi/bengali-competition-notebook" target="_blank">https://www.kaggle.com/code/saeidehmousavi/bengali-competition-notebook</a><br>
Model Training notebook: <a href="https://www.kaggle.com/code/saeidehmousavi/audio-augmentation-notebook" target="_blank">https://www.kaggle.com/code/saeidehmousavi/audio-augmentation-notebook</a></p>
CommonLit - Evaluate Student Summaries | 43th place solution (711 shake up) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit - Evaluate Student Summaries
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you to the organizers for hosting the competition. I also appreciate everyone at Kaggle sharing insights through code and discussions.</p>
<p>The reason I ended up at a very low public rank (754th) is that I didn't have enough time to improve my public leaderboard score, due to a severe fever from Oct 3 until the end of the competition.<br>
(I had planned to first reach a CV score similar to those reported in the <a href="https://www.kaggle.com/competitions/commonlit-evaluate-student-summaries/discussion/424330" target="_blank">Single Model CV-LB discussion</a>, and then try to improve both CV and public leaderboard score, but there wasn't enough time for the latter.)</p>
<h1>Overview of the approach</h1>
<p>My solution is a mean ensemble of two deberta-v3-large models with different settings. Each model has 4 checkpoints (one per fold), so 8 checkpoints are used for inference in total. I used the best-CV checkpoints.<br>
The CV setup is Group K-Fold by prompt_id, and the CV scores of the two models are 0.482 and 0.492.</p>
<h1>Details of submission</h1>
<p>The most influential change for the CV score was pooling only inside the student summary; everything else brought small improvements in comparison. Details are written in the following sections.</p>
<h2>Model input</h2>
<p>Each model has different model input.<br>
One is <code>[START] text [END] prompt_question [SEP] prompt_text</code>.<br>
Another is <code>[START_QUESTION] prompt_question [END_QUESTION] [START_SUMMARY] text [END_SUMMARY] [SEP] [START_TITLE] prompt_title [END_TITLE] [START_PROMPT] prompt_text [END_PROMPT]</code>.</p>
<h2>Model architecture</h2>
<p>Each model has a different pooling.<br>
One uses mean pooling between the <code>[START]</code> and <code>[END]</code> tokens, i.e. pooling only over the student summary.<br>
The other uses GeM pooling between the <code>[CLS]</code> and <code>[SEP]</code> tokens, i.e. pooling over the prompt question and student summary.</p>
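<p>A minimal sketch of the first model's pooling, i.e. mean pooling only over the tokens between the <code>[START]</code> and <code>[END]</code> markers (assuming each sequence contains exactly one of each marker; names are illustrative):</p>
<pre><code>import torch

def span_mean_pool(last_hidden_state, input_ids, start_id, end_id):
    """Mean-pool the token embeddings strictly between the [START] and [END] marker tokens."""
    pooled = []
    for hs, ids in zip(last_hidden_state, input_ids):
        start = (ids == start_id).nonzero()[0, 0].item() + 1
        end = (ids == end_id).nonzero()[0, 0].item()
        pooled.append(hs[start:end].mean(dim=0))
    return torch.stack(pooled)
</code></pre>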
<p>Second model has separate heads to predict content and wording scores.</p>
<h2>Other settings</h2>
<h3>Same settings across models</h3>
<ul>
<li>Gradient clipping = 10.</li>
<li>Set <code>hidden_dropout_prob</code> and <code>attention_probs_dropout_prob</code> are 0.</li>
<li>Tokenizer max length is 1024.</li>
</ul>
<h3>Different settings across models</h3>
<ul>
<li>The CV used for saving the best checkpoint is computed every 100 and every 300 training steps, respectively.</li>
<li>Second model has different learning rate for heads.</li>
</ul>
<h1>Things not worked</h1>
<p>I have guessed most important thing of this competition is combining student summary and prompt text effectively. Most of time I have focused this direction but none of them bring significant improvements (They bring similar CV score of final models but even ensemble of them didn't give huge improvements).</p>
<ul>
<li>After passing summary and prompt_text to backbone, pool summary and prompt_text then take bert cross attention of them<ul>
<li>I have also tried to stack bert cross attention and bert layer multiple times like <a href="https://arxiv.org/abs/2112.03857" target="_blank">this paper</a>, but not worked</li></ul></li>
<li>Pass summary and prompt text separately to the backbone, then concatenate those hidden states and pass to the head</li>
<li>re initialization of layers</li>
<li>freezing layers</li>
<li>ensemble with other backbones (deberta-large, longformer)</li>
<li>pretty long max length (4096)</li>
</ul> |
Google - American Sign Language Fingerspelling Recognition | Silver - LB 0.770 - Two Lines of Code! | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google - American Sign Language Fingerspelling Recognition
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks Kaggle and Google for hosting a fun ASL competition.</p>
<p>I joined a few days ago, so I didn't have time to build my own model. Instead, I began with the best public notebook <a href="https://www.kaggle.com/code/saidineshpola/aslfr-ctc-based-on-prev-comp-1st-place?scriptVersionId=139048726" target="_blank">here</a> (version 17) and attempted to improve it. After making a few changes, I boosted the LB from 0.700 to an amazing 0.770! and obtained Silver medal !!</p>
<h1>Change Two Lines of Code</h1>
<p>The best public notebook is Rohith Ingilela's awesome public notebook <a href="https://www.kaggle.com/code/irohith/aslfr-ctc-based-on-prev-comp-1st-place" target="_blank">here</a> which was improved by Saidineshpola <a href="https://www.kaggle.com/code/saidineshpola/aslfr-ctc-based-on-prev-comp-1st-place?scriptVersionId=139048726" target="_blank">here</a>. Version 17 of Saidineshpola's notebook achieves CV = 0.689 (scroll to bottom of version 17) and LB = 0.697.</p>
<p>That notebook uses TF Records made by Rohith Ingilela <a href="https://www.kaggle.com/code/irohith/aslfr-preprocess-dataset-tfrecords-mean-std" target="_blank">here</a>. An easy trick to boost the performance of the public notebook is create TF Records which keep frames where hands are missing. The TF Records made by linked notebook removes all frames without hands. Instead we can use <code>"output two"</code> below and keep 50% of the frames with missing hands below to boost CV and LB. Updated notebook published <a href="https://www.kaggle.com/code/cdeotte/2-lines-of-code-change-lb-0-760" target="_blank">here</a>.</p>
<p><img src="https://raw.githubusercontent.com/cdeotte/Kaggle_Images/main/Aug-2023/asl_preprocess.png" alt=""></p>
<p>Here is preprocess code to use when making TF Records and during inference:</p>
<pre><code>hand = tf.concat([rhand, lhand], axis=1)                 # right/left hand landmarks per frame
hand = tf.where(tf.math.is_nan(hand), 0.0, hand)
# frames where at least one hand landmark is present
mask = tf.math.not_equal(tf.reduce_sum(hand, axis=[1, 2]), 0)
# alternating pattern: keep every other frame regardless of hands (i.e. 50% of missing-hand frames)
alternating_tensor = tf.math.equal(tf.cumsum(tf.ones(tf.shape(mask))) % 2, 0)
mask = tf.math.logical_or(mask, alternating_tensor)
</code></pre>
<h1>CTC (Connectionist Temporal Classification) Loss:</h1>
<p>In this competition, I learned about CTC loss. This loss is amazing! It allows us to create a model with variable-length input and predict variable-length output in one step. So we can do things like seq2seq without waiting for a sequential decoder to decode each step. Instead we predict the entire output at once, super fast! Giving the model some of the frames with hands missing helps identify duplicates and transitions:</p>
<p><img src="https://raw.githubusercontent.com/cdeotte/Kaggle_Images/main/Aug-2023/asl_ctc.png" alt=""></p>
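<p>For reference, a minimal sketch of how CTC loss can be applied with TensorFlow (this is a generic usage example, not the exact loss code of the public notebook):</p>
<pre><code>import tensorflow as tf

def ctc_loss(labels, logits, label_length, logit_length):
    """labels: (B, L) int32 target phrase tokens; logits: (B, T, vocab+1) per-frame scores."""
    loss = tf.nn.ctc_loss(
        labels=labels,
        logits=logits,
        label_length=label_length,
        logit_length=logit_length,
        logits_time_major=False,
        blank_index=-1,            # the last class is the CTC blank
    )
    return tf.reduce_mean(loss)
</code></pre>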
<h1>Time Augmentation LB +0.004!</h1>
<p>The public notebook trains for 50 epochs; I found that training for more epochs continues to boost the CV and LB scores! My final submission trains for 200 epochs. Furthermore, we can add augmentation (i.e. regularization), which helps the model train longer, prevents overfitting, and generalizes better.</p>
<p>Below we see the histogram of train data Number of Frames divided by Character Length of Target Phrase. From this plot, we see that the ratio varies a lot. Some videos have a different recorded frame rate than others and some participants sign faster than others.</p>
<p><img src="https://raw.githubusercontent.com/cdeotte/Kaggle_Images/main/Aug-2023/frame_hist.png" alt=""></p>
<p>What this means is that the model has a hard time transferring what it learns at one frame rate to another frame rate. We can help the model by using time augmentation. For each input sequence, we can randomly shrink the frame length by 50% or enlarge it to 150%. This adds lots of new train data and helps the model learn about different frame rates. Here is the augmentation code:</p>
<pre><code># approximate reconstruction -- literal values were garbled in this copy
if tf.random.uniform(shape=(), minval=0.0, maxval=1.0) < 0.5:
    new_len = tf.math.round(tf.random.uniform(
        shape=(), minval=tf.cast(tf.shape(lip)[0], tf.float32) / 1.5,
        maxval=tf.cast(tf.shape(lip)[0], tf.float32) * 1.5))
    # (the original then resizes each feature tensor x to new_len frames; that line was truncated in this copy)
</code></pre>
<h1>Post Processing LB +0.004!</h1>
<p>Sometimes the model doesn't make a good prediction. The shortest train data is length 3. When the model predicts length 2 or less, we know it is a bad prediction. Therefore we can replace bad predictions with the best constant-length prediction. Anokas found the best constant-length prediction <a href="https://www.kaggle.com/code/anokas/static-greedy-baseline-0-157-lb" target="_blank">here</a>. We add this to our TF Lite model with the following code:</p>
<pre><code># approximate reconstruction -- identifiers were garbled in this copy; BEST_CONSTANT stands for the
# token sequence of the best constant-length prediction from the linked notebook
x = tf.cond(tf.shape(x)[0] < 3,
            lambda: tf.cast(BEST_CONSTANT, tf.int64),
            lambda: tf.identity(x))
</code></pre>
<h1>Solution Code</h1>
<p>I published a Kaggle notebook <a href="https://www.kaggle.com/code/cdeotte/2-lines-of-code-change-lb-0-760" target="_blank">here</a> demonstrating the above 3 changes. The first four bullet points below achieve <code>LB = 0.763</code>, then time augmentation boosts <code>+0.004</code> and PP boosts <code>+0.004</code>. Note that the second bullet point (about batch size) just makes things faster but doesn't change the CV or LB score. The other bullet points are the key:</p>
<ul>
<li>Change 2 lines of code to keep 50% missing hand frames</li>
<li>Change batch size from 32 to 128 and learning rate from 1e-3 to 4e-3</li>
<li>Change FRAME_LEN from 128 to 216</li>
<li>Increase epochs 50 to 200</li>
<li>Add time augmentation</li>
<li>Add post process for preds less than 3 chars</li>
</ul> |
Kaggle - LLM Science Exam | 12th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kaggle - LLM Science Exam
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, we are grateful to the organizers of this amazing competition, which was a great learning opportunity for us. We also appreciate all the kagglers who shared their ideas and code during the competition. Thank my team members <a href="https://www.kaggle.com/zzy990106" target="_blank">@zzy990106</a>, <a href="https://www.kaggle.com/berserker408" target="_blank">@berserker408</a>, <a href="https://www.kaggle.com/fankaixie" target="_blank">@fankaixie</a> for their hard work.</p>
<p>In short, our solution consists of diverse contexts and models.</p>
<h1>RAG</h1>
<h2>Context #1:</h2>
<p>Built a Faiss index over English Wikipedia and combined the top-5 results of three recall paths as the context for each question (a rough sketch follows the list):</p>
<ul>
<li>bge prompt + answer</li>
<li>gte prompt + answer</li>
<li>gte prompt </li>
</ul>
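<p>Here is a minimal sketch of the multi-path recall described above, assuming one prebuilt Faiss index per embedding model over English Wikipedia passages; the checkpoints, file names, and the simple top-5 merge are illustrative of the idea rather than the team's exact pipeline:</p>
<pre><code>import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

# Assumed artifacts: passage texts plus one Faiss index per embedding model.
passages = np.load("wiki_passages.npy", allow_pickle=True)
index_bge = faiss.read_index("wiki_en_bge.index")
index_gte = faiss.read_index("wiki_en_gte.index")

bge = SentenceTransformer("BAAI/bge-small-en-v1.5")   # illustrative checkpoints
gte = SentenceTransformer("thenlper/gte-small")

def recall_top5(query, encoder, index):
    emb = encoder.encode([query], normalize_embeddings=True).astype(np.float32)
    _, idx = index.search(emb, 5)
    return [passages[i] for i in idx[0]]

def build_context(prompt, answers):
    hits = (recall_top5(prompt + " " + answers, bge, index_bge)   # path 1: bge, prompt + answers
            + recall_top5(prompt + " " + answers, gte, index_gte) # path 2: gte, prompt + answers
            + recall_top5(prompt, gte, index_gte))                # path 3: gte, prompt only
    return " ".join(dict.fromkeys(hits))     # de-duplicate while keeping order
</code></pre>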
<h2>Context #2:</h2>
<p>Sentence based openbook context shared by JJ ( <a href="https://www.kaggle.com/jjinho" target="_blank">@jjinho</a> )</p>
<h2>Context #3:</h2>
<p>The 270k dataset shared by MB (@mbanaei), using both the Cohere and parsed variants of the 270k data.</p>
<h2>Context #4:</h2>
<p>Built a 480k dataset, using the notebook MB used to build the 270k dataset, in order to cover more articles.<br>
The 270k dataset did not cover all of the 154 articles, so we adjusted the clustering parameters so that the 480k version covers them all.</p>
<h1>Models</h1>
<ul>
<li>Deberta V3 large multiple-choice classification<br>
Shared by <a href="https://www.kaggle.com/cderotte" target="_blank">@cderotte</a>.</li>
<li>Deberta V3 large one-shot: <br>
Concatenate all answers together, so only one inference pass is needed per question.<br>
Model architecture reference <a href="https://www.kaggle.com/competitions/feedback-prize-effectiveness/discussion/347433" target="_blank">here</a></li>
</ul>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2125251%2F1b6b3d2fbb7551da6de6fe45f3f00f94%2FScreenshot%202022-08-24%20at%2010.40.29%20AM.png?generation=1661308851093353&alt=media" alt=""></p>
<ul>
<li>Llama2 7B<br>
We trained Llama2 7B with LoRA but were not able to optimize it very well: it runs inference 5 times for a single question and takes 3 hours at 512 sequence length. Even so, the naive implementation gave us a 0.003 boost in the ensemble (a rough sketch of LoRA fine-tuning follows the list).</li>
</ul>
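<p>The write-up does not include the LoRA code, but as a rough sketch of how a LoRA adapter is typically attached to Llama-2 7B with the peft library (the checkpoint name and all hyperparameters here are illustrative assumptions, not the team's settings):</p>
<pre><code>import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "meta-llama/Llama-2-7b-hf"                 # assumed base checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16)

lora_cfg = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,             # illustrative values
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()                      # only the small adapter matrices are trained
</code></pre>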
<p>We trained multiple DeBERTa checkpoints; in our final solution, we used 8 checkpoints fed with different contexts. We also observed that training with better context did not improve the model, so we selected the checkpoints trained with the sentence-based context.</p>
Kaggle - LLM Science Exam | 70th place solution (PB:0.907) and PB:0.911 solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kaggle - LLM Science Exam
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you to everyone involved in this competition.<br>
And this competition was very fierce and exciting.<br>
Our solution centers around training on <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>'s 60k dataset and retrieval with <a href="https://www.kaggle.com/mbanaei" target="_blank">@mbanaei</a>'s 270k articles.</p>
<h1>PB:0.907 solution</h1>
<h4>1. Models</h4>
<p>We created an ensemble of models with four codenames. itk16, cody, jonge and itk17. All models used are DebertaV3L. Of these, cody and jonge use <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>'s <a href="https://www.kaggle.com/code/cdeotte/how-to-train-open-book-model-part-1" target="_blank">baseline</a>. <br>
<a href="https://www.kaggle.com/domdejonge" target="_blank">@domdejonge</a>'s model (jonge) used <strong>label smoothing=0.1</strong>. It works a little.</p>
<p>itk16 and itk17 process the choices as follows (pseudocode), allowing the model to answer each choice while comparing it with the other choices.</p>
<pre><code># pseudocode -- column letters were garbled in this copy; each option is concatenated with a
# random permutation of the other four options
option_A = [df["A"] + rand_perm(df["B"] ~ df["E"])]
option_B = [df["B"] + rand_perm(df["A"], df["C"] ~ df["E"])]
...
option_E = [df["E"] + rand_perm(df["A"] ~ df["D"])]
</code></pre>
<p>Here, rand_perm is a function that randomly rearranges the order. This also has the effect of data augmentation. But It didn't have much effect.</p>
<p>Also, the two models were trained using <a href="https://arxiv.org/abs/1605.07725" target="_blank">Fast Gradient Method (FGM)</a> adversarial training. <strong>FGM was more powerful than label smoothing; CV improved by about 0.01~0.02.</strong></p>
<h4>2. Retrieval</h4>
<p>Two types of context were used. Applying the context retrieved in <a href="https://www.kaggle.com/mbanaei" target="_blank">@mbanaei</a>'s <a href="https://www.kaggle.com/code/mbanaei/86-2-with-only-270k-articles" target="_blank">270k-article notebook</a> to the models described above and running inference significantly increases the score. For example, an itk16 model trained on Chris' 60k goes from <strong>CV 0.9083 to 0.9783</strong> on the 200-sample training dataset. <strong>We just changed the Longformer to DeBERTa!</strong><br>
It's amazing how much the score can go up zero-shot, but we do not understand the reason for this. <br>
Our teammate <a href="https://www.kaggle.com/domdejonge" target="_blank">@domdejonge</a> showed that changing the ngram_range in TfidfVectorizer increased the LB, but it had no effect on the PB.</p>
<h4>3. Inference and Ensemble</h4>
<p>We had a hard time getting inference to work. We realized too late that the cause of the error was a VRAM OOM caused by overly long token lengths. In the end, we chose to branch using try/except statements.<br>
We concluded empirically that <strong>maximizing MAP@3 matters more for the LB (or PB) than minimizing loss</strong>, but for robustness we weighted the models with a simple average.</p>
<h4>4. Submission</h4>
<p>We also had a hard time submitting. We kept getting errors until <a href="https://www.kaggle.com/itsuki9180" target="_blank">@itsuki9180</a> found a <a href="https://www.kaggle.com/discussions/product-feedback/276051" target="_blank">discussion showing that when the notebook output exceeds 500</a>, "Submission CSV Not Found" appears.</p>
<h4>5. Overall Result</h4>
<p>Although about half of our submissions were consumed by errors, we managed to make it work. The model results are as follows: the 4 models and 2 contexts produce a total of 8 sets of predictions.</p>
<table>
<thead>
<tr>
<th>model_name</th>
<th>dataset</th>
<th>map@3 200</th>
<th>Loss 200</th>
<th>weight</th>
</tr>
</thead>
<tbody>
<tr>
<td>itk16_context1</td>
<td>Chris'60k</td>
<td>0.9600</td>
<td>0.2481</td>
<td>0.125</td>
</tr>
<tr>
<td>itk16_context2</td>
<td>Chris'60k</td>
<td>0.9783</td>
<td>0.1457</td>
<td>0.125</td>
</tr>
<tr>
<td>cody_context1</td>
<td>Chris'60k</td>
<td>0.9650</td>
<td>0.3156</td>
<td>0.125</td>
</tr>
<tr>
<td>cody_context2</td>
<td>Chris'60k</td>
<td>0.9850</td>
<td>0.2207</td>
<td>0.125</td>
</tr>
<tr>
<td>jonge_context1</td>
<td>Chris'60k</td>
<td>0.9616</td>
<td>0.4208</td>
<td>0.125</td>
</tr>
<tr>
<td>jonge_context2</td>
<td>Chris'60k</td>
<td>0.9741</td>
<td>0.3846</td>
<td>0.125</td>
</tr>
<tr>
<td>itk17_context1</td>
<td>Chris'60k</td>
<td>0.9666</td>
<td>0.2356</td>
<td>0.125</td>
</tr>
<tr>
<td>itk17_context2</td>
<td>Chris'60k</td>
<td>0.9825</td>
<td>0.1467</td>
<td>0.125</td>
</tr>
<tr>
<td>---</td>
<td>---</td>
<td>---</td>
<td>---</td>
<td>---</td>
</tr>
<tr>
<td>ensembled</td>
<td>-</td>
<td>0.9916</td>
<td>0.2386</td>
<td>1.0</td>
</tr>
</tbody>
</table>
<p><strong>LB was 0.920 and PB is 0.907.</strong></p>
<p>As a bonus, We've included some additional information below</p>
<h1>PB:0.911 solution</h1>
<p>Our best submission scored PB 0.911; we'll give a quick introduction.<br>
We removed the itk17 model from the 0.907 submission.<br>
We still used Chris's 60k as the dataset, but changed the context to the one introduced by <a href="https://www.kaggle.com/mbanaei" target="_blank">@mbanaei</a>, and trained the model on context 2 (retrieved_articles_parsed). This model is called itk30.<br>
This model is not better than zero-shot, but it does provide more versatility.</p>
<table>
<thead>
<tr>
<th>model_name</th>
<th>dataset</th>
<th>map@3 200</th>
<th>Loss 200</th>
<th>weight</th>
</tr>
</thead>
<tbody>
<tr>
<td>itk30_context1</td>
<td>Chris'60k + replaced context2</td>
<td>0.9558</td>
<td>0.2131</td>
<td>0.125</td>
</tr>
<tr>
<td>itk30_context2</td>
<td>Chris'60k + replaced context2</td>
<td>0.9766</td>
<td>0.1475</td>
<td>0.125</td>
</tr>
<tr>
<td>---</td>
<td>---</td>
<td>---</td>
<td>---</td>
<td>---</td>
</tr>
<tr>
<td>ensembled</td>
<td>-</td>
<td>0.9891</td>
<td>0.2368</td>
<td>1.0</td>
</tr>
</tbody>
</table>
<p><strong>LB was 0.910 and PB is 0.911.</strong></p>
<h4>optimising MAP@3</h4>
<p>It is difficult to optimize MAP@3 directly. <a href="https://www.kaggle.com/code/itsuki9180/llm-sciex-optimise-ensemble-weights" target="_blank">We proposed loss optimization</a>, but we also suggest a search method using random weights. There is some risk of overfitting the CV, but in this competition CV, LB, and PB all moved together. Below is the code for the case of 8 weights to optimize.</p>
<pre><code># reconstructed sketch -- the function name and literal values were garbled in this copy
def optimise_weights(preds, trial, es, seed):
    np.random.seed(seed)
    best = 0.0
    es_step = 0
    w1b, w2b, w3b, w4b, w5b, w6b, w7b, w8b = -1, -1, -1, -1, -1, -1, -1, -1
    for t in tqdm(range(trial)):
        ws = np.random.rand(8)
        ws /= np.sum(ws)
        pred = 0.0
        for i in range(8):
            pred += preds[i] * ws[i]
        overall_score = calc_mtr(pred)
        if overall_score > best:
            best = overall_score
            best_loss = log_loss_numpy(pred)
            w1b = ws[0]; w2b = ws[1]; w3b = ws[2]; w4b = ws[3]; w5b = ws[4]; w6b = ws[5]; w7b = ws[6]; w8b = ws[7]
            print(best)
            print(best_loss)
            es_step = 0
        else:
            es_step += 1
            if es_step >= es:
                break
    return w1b, w2b, w3b, w4b, w5b, w6b, w7b, w8b
</code></pre>
<h4>FGM and Other adversarial trainings</h4>
<p>We used <a href="https://www.kaggle.com/takamichitoda" target="_blank">@takamichitoda</a>'s <a href="https://www.ai-shift.co.jp/techblog/2985" target="_blank">implementation of FGM (Japanese Language).</a><br>
AWP was mentioned <a href="https://www.kaggle.com/code/itsuki9180/introducing-adversarial-weight-perturbation-awp" target="_blank">in my public notebook</a>, but it had no effect.</p>
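<p>For readers unfamiliar with FGM, here is a minimal, generic PyTorch sketch of the idea (perturb the embedding weights along the normalized gradient, run a second backward pass on the perturbed model, then restore the weights). It is not the linked implementation, and the epsilon and embedding-name filter are illustrative:</p>
<pre><code>import torch

class FGM:
    def __init__(self, model, eps=1.0, emb_name="word_embeddings"):
        self.model, self.eps, self.emb_name, self.backup = model, eps, emb_name, {}

    def attack(self):
        # Add an adversarial perturbation to the embedding weights.
        for name, param in self.model.named_parameters():
            if param.requires_grad and self.emb_name in name and param.grad is not None:
                self.backup[name] = param.data.clone()
                norm = torch.norm(param.grad)
                if norm != 0:
                    param.data.add_(self.eps * param.grad / norm)

    def restore(self):
        # Put the original embedding weights back after the adversarial backward pass.
        for name, param in self.model.named_parameters():
            if name in self.backup:
                param.data = self.backup[name]
        self.backup = {}

# Typical use inside the training loop:
#   fgm = FGM(model)
#   loss = model(**batch).loss; loss.backward()
#   fgm.attack(); model(**batch).loss.backward(); fgm.restore()
#   optimizer.step(); optimizer.zero_grad()
</code></pre>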
<h4>Our codes (We will add more from time to time)</h4>
<p><a href="https://www.kaggle.com/itsuki9180" target="_blank">@itsuki9180</a>'s <a href="https://www.kaggle.com/code/itsuki9180/70th-place-solution-itk16-training" target="_blank">itk16 FGM training Notebook</a><br>
<a href="https://www.kaggle.com/domdejonge" target="_blank">@domdejonge</a>'s <a href="https://www.kaggle.com/code/domdejonge/label-smoothing-train" target="_blank">label_smoothing w/HF🤗 Trainer Notebook</a><br>
<a href="https://www.kaggle.com/itsuki9180" target="_blank">@itsuki9180</a>'s <a href="https://www.kaggle.com/code/itsuki9180/private-lb-0-911-submission" target="_blank">PB:0.911 sub Notebook</a></p>
<h1>Conclusion</h1>
<p>Anyway, congratulations to all the winners! And to the people who didn't give up on this competition until the end: don't forget that the skills that got you here are your bread and butter. Kaggle is not only a place for winning, but also a place for learning.<br>
See you at the next competition!</p> |
CommonLit - Evaluate Student Summaries | 20th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit - Evaluate Student Summaries
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you to the organizers for the fun competition and everyone who participated. <br>
I share my solution.</p>
<h1>Summary</h1>
<ul>
<li>When I added prompt_text, the local score clearly improved. However, through experiments I found that prompt_text is very long (more than 4000) in the test dataset, so I set the max_length to 5000. This was the key point for me.</li>
<li>weighted ensemble: deberta-v3-large + LightGBM = 7:3</li>
</ul>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F164129%2F5c1484cc862d2bcea8e3c359b287799b%2Fimage2.png?generation=1697126808205146&alt=media" alt="solution-image"></p>
<h1>1. Validation Strategy</h1>
<ul>
<li>GroupKFold : 4-fold (groups=prompt_id)</li>
</ul>
<h1>2. deberta-v3-large model</h1>
<ul>
<li>input<ul>
<li>input_text = answer: summary_text [SEP] title: prompt_title | question: prompt_question | text: prompt_text</li>
<li>tokenizer: max_length=5000 (see the sketch after this list)</li>
<li>not text cleansing</li></ul></li>
<li>model:<ul>
<li>pretrained-model: microsoft/deberta-v3-large</li>
<li>freeze: 18 layers</li>
<li>head: cls-token</li>
<li>max_position_embeddings=5000</li></ul></li>
<li>train: <ul>
<li>loss: torch.nn.SmoothL1Loss()</li>
<li>optimizer: Adam</li>
<li>batch_size=2</li>
<li>epoch=10</li>
<li>scheduler: cosine annealing</li></ul></li>
<li>predict:<ul>
<li>max_token=5000 </li>
<li>batch_size=1</li>
<li>only 2-fold (When it was 4-fold, it timed out.)</li></ul></li>
</ul>
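<p>A minimal sketch of the input construction and long-sequence tokenization described in the list above (the sample row is a toy example, and overriding max_position_embeddings via the config is shown as an assumption of how it can be done rather than the author's exact code):</p>
<pre><code>from transformers import AutoConfig, AutoModel, AutoTokenizer

MAX_LEN = 5000
name = "microsoft/deberta-v3-large"
tokenizer = AutoTokenizer.from_pretrained(name)

config = AutoConfig.from_pretrained(name)
config.max_position_embeddings = MAX_LEN       # allow inputs up to 5000 tokens
backbone = AutoModel.from_pretrained(name, config=config)

sample_row = {                                  # toy example row
    "text": "The author explains ...",
    "prompt_title": "Example title",
    "prompt_question": "Summarize the text.",
    "prompt_text": "A very long source passage ...",
}

def build_input(row):
    # answer first, then title / question / full prompt_text, exactly as in the bullet above
    return (f"answer: {row['text']} [SEP] title: {row['prompt_title']} | "
            f"question: {row['prompt_question']} | text: {row['prompt_text']}")

enc = tokenizer(build_input(sample_row), max_length=MAX_LEN, truncation=True, return_tensors="pt")
out = backbone(**enc)        # the CLS-token vector of out.last_hidden_state feeds the regression head
</code></pre>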
<h1>3. LightGBM</h1>
<ul>
<li>features: 141 (count_word, count_paragraph, count_sentence, count_space, etc.; see the sketch after this list)</li>
<li>train: 2 models (content, wording)</li>
</ul>
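<p>A rough sketch of the count-style features and the two separate regressors mentioned above, assuming a train_df with <code>text</code>, <code>content</code>, and <code>wording</code> columns (only a handful of the 141 features are shown, and the hyperparameters are illustrative):</p>
<pre><code>import lightgbm as lgb
import pandas as pd

def make_features(df):
    out = pd.DataFrame(index=df.index)
    out["count_word"] = df["text"].str.split().str.len()
    out["count_paragraph"] = df["text"].str.count("\n") + 1
    out["count_sentence"] = df["text"].str.count(r"[.!?]") + 1
    out["count_space"] = df["text"].str.count(" ")
    return out

X_train = make_features(train_df)
models = {}
for target in ["content", "wording"]:           # one LightGBM model per target
    models[target] = lgb.LGBMRegressor(n_estimators=1000, learning_rate=0.05)
    models[target].fit(X_train, train_df[target])
</code></pre>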
<h1>4. Ensemble</h1>
<ul>
<li>weighted average (tune weight by train_oof)</li>
<li>deberta-v3-large : LightGBM = 7:3</li>
</ul>
<table>
<thead>
<tr>
<th>id</th>
<th>model</th>
<th>local-cv</th>
<th>public</th>
<th>private</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>Deberta-v3-large (4fold)</td>
<td>0.4964</td>
<td>Timeout</td>
<td>Timeout</td>
</tr>
<tr>
<td>2</td>
<td>Deberta-v3-large (2fold)</td>
<td>0.5007</td>
<td>0.461</td>
<td>0.476</td>
</tr>
<tr>
<td>3</td>
<td>LightGBM</td>
<td>0.5532</td>
<td>0.495</td>
<td>0.516</td>
</tr>
<tr>
<td>4</td>
<td>ensemble(2+3)</td>
<td>0.4822</td>
<td>0.455</td>
<td>0.461</td>
</tr>
</tbody>
</table>
<h1>Others</h1>
<ul>
<li>pooling-layer: almost same</li>
<li>N-hidden-layer: almost same</li>
<li>LSTM/1dcnn-layer: not work</li>
<li>Set lr for each layer: not work</li>
<li>Longformer: not work</li>
<li>summarizing prompt_text using Llama2: not work</li>
</ul>
<p>When I added prompt_text, the local score clearly improved. I also considered it important to have a model that could handle the long token lengths of the test data. For this reason, even though this submission was outside the medal range on the public LB, I trusted the local score and selected it. I'm glad I chose it.<br>
However, I only realized how long the tokens were two days before the end of the competition, so I regret not being able to try the 4-fold model.</p>
<p>Thank you for reading.</p> |
CommonLit - Evaluate Student Summaries | 27th Place Solution: How to use prompts? | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit - Evaluate Student Summaries
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Firstly, I want to thank Kaggle and the Host for organizing this fun competition, and thanks to my teammates <a href="https://www.kaggle.com/muhammad4hmed" target="_blank">@muhammad4hmed</a> , <a href="https://www.kaggle.com/mohammad2012191" target="_blank">@mohammad2012191</a> , <a href="https://www.kaggle.com/cody11null" target="_blank">@cody11null</a> , <a href="https://www.kaggle.com/ivanisaev" target="_blank">@ivanisaev</a> . This wouldn't be possible to achieve without them!</p>
<h1>Summary</h1>
<p>Our solution is an ensemble of diverse 1st-stage models (mostly with a Deberta-v3-large backbone) fed into a 2nd-stage stacking model. Diversity came from the inputs, different custom pooling techniques, and heads.<br>
Our best submission out of the 3 we chose had CV 0.486 and LB 0.425. It gives PB 0.464 which puts us at 27th. The best submission out of all our submissions had LB of 0.459, so it is a silver in both cases :)</p>
<h1>First Stage Models</h1>
<p>We finetuned Deberta Base, Large, and Squad using the summary texts + question only. We also added other models with some custom pooling ideas which involved using the prompts inside, and they worked pretty nicely, thanks to <a href="https://www.kaggle.com/ihebch" target="_blank">@ihebch</a>'s and <a href="https://www.kaggle.com/muhammad4hmed" target="_blank">@muhammad4hmed</a>'s work! </p>
<h3>Model 1</h3>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F8785809%2F985b784a03c245a29e6113d8de4f7957%2FScreen%20Shot%202023-10-12%20at%207.19.37%20AM.png?generation=1697091595551878&alt=media" alt=""><br>
CV: 0.505; public LB: 0.456; private LB: 0.474 </p>
<h3>Model 2 (not included in ensemble 😢 )</h3>
<p>same as Model1 but without cross-attention mechanism<br>
CV: 0.49; publicLB: 0.575; privateLB: 0.468 (wow)</p>
<h3>Model 3</h3>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F8785809%2F9c30cf54687f2fb2126330e56af09975%2FScreen%20Shot%202023-10-12%20at%207.01.34%20AM.png?generation=1697090518733548&alt=media" alt=""><br>
CV: 0.52; PublicLB: 0.457; PrivateLB: 0.499</p>
<h1>2nd Stage Model</h1>
<p>We then added all these predictions as features for our LGBM+XG+CB models along with the same features published in public notebooks. <br>
We also added <strong>FB3</strong> labels (good boost) and used metadata features (especially grade).<br>
We tried hard to add new features, do feature selection, and do extensive hyperparameter tuning, but all these trials ended up with better CV and much worse LB, so we gave them up and focused on improving our 1st-stage models.</p>
<h1>What Didn't Work</h1>
<ul>
<li>Feature Selection</li>
<li>Extensive Hyperparameters Tuning</li>
<li>Adding FB2 Labels</li>
</ul>
<h1>Team Members</h1>
<ul>
<li><a href="https://www.kaggle.com/ivanisaev" target="_blank">@ivanisaev</a> </li>
<li><a href="https://www.kaggle.com/cody11null" target="_blank">@cody11null</a> </li>
<li><a href="https://www.kaggle.com/mohammad2012191" target="_blank">@mohammad2012191</a></li>
<li><a href="https://www.kaggle.com/muhammad4hmed" target="_blank">@muhammad4hmed</a></li>
<li><a href="https://www.kaggle.com/ihebch" target="_blank">@ihebch</a> </li>
</ul> |
RSNA 2023 Abdominal Trauma Detection | Awaiting private results. Results of the competition for me. | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA 2023 Abdominal Trauma Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hello everybody!<br>
After weeks of trying, I still couldn't get the model to actually work out and not just dance around the class frequencies, but I did gain some useful experience and will share it. I know that in theory I'm posting this a little early, but as I wrote above, my models don't learn to classify classes, so it obviously won't affect the problem solving or the allocation of places. <br>
So let's start:</p>
<ol>
<li>As a basis, I used this public notebook: <a href="https://www.kaggle.com/code/awsaf49/rsna-atd-cnn-tpu-infer" target="_blank">https://www.kaggle.com/code/awsaf49/rsna-atd-cnn-tpu-infer</a> and other notebooks of this branch, special thanks to the author for the work and for the errors…..</li>
<li>I used 2D, 2.5D, and 3D models; none of them worked. In the end I came up with the idea of passing the 3D volume to the model and "compressing it to 2.5D": in effect I compressed the volume with something similar to an encoder, immediately followed by the remaining layers of the model. This is a serious time saver compared to training a full 3D model.</li>
<li>Volumes were resized to 256x256x256.</li>
<li>Augmentation was done using volumentations-3D. It's a great library, but slow; I only got it wired into a TensorFlow dataset after a lot of "dancing with a tambourine", and even on TPU it seriously slowed down training.</li>
<li>After these tests, I realized I needed to try highlighting the regions where the organs are located, but it did not help: it needs high-quality segmentation.</li>
<li>And today I prepared the model with segmentation, but I got a Submission Scoring Error. I don't know why: everything works on part of the training data and on the public test data. In a word, debugging against a private dataset is painful (I'm just stating the fact; I understand why it's done this way, because people will cheat as soon as you give them more freedom).</li>
</ol>
<p>In conclusion, the result is bad, but I learned a lot. I significantly improved my knowledge of working with data and with TensorFlow. My next serious competition will be with PyTorch only.</p>
<p>And I think the classification models are not the critical part here; this problem requires proper data preprocessing. I look forward to the publication of better solutions. </p>
<p>Good luck to everyone!</p>
<p>P.S. The obvious ignoring of my letters and messages by the kaggle team when they write that "Please let me know if you have any questions!" seemed disrespectful to me. I think that if you do not have the opportunity to answer letters, it is better not to write such a sentence. In general, in the context of correspondence, it looked to me like you do not exist for us. Peace to all!</p> |
Kaggle - LLM Science Exam | 4th Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kaggle - LLM Science Exam
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First and foremost, we would like to express our gratitude to the hosts and the Kaggle team for arranging this wonderful competition. Also, a huge shout-out to all the participants for fighting through this intense competition. It was an extremely educational and exciting competition.<br>
Below, we present our solutions.</p>
<p>Our solution is based on the high-quality combination of retrieval + deberta v3 models. The implementation of high-quality retrieval and how it can be incorporated as context was critically important.</p>
<h1>Retrieval Part</h1>
<p>Example code: <a href="https://www.kaggle.com/code/linshokaku/4th-elasticsearch-retrieval-example" target="_blank">https://www.kaggle.com/code/linshokaku/4th-elasticsearch-retrieval-example</a></p>
<h2>Retrieval Workflow</h2>
<ul>
<li>Stage 1 <ul>
<li>Sentence-wise keyword retrieval</li>
<li>Using Elasticsearch </li></ul></li>
<li>Stage 2<ul>
<li>Version 3: Elasticsearch score</li>
<li>Version 5: Edit distance score</li>
<li>Version 7: Semantic search score </li></ul></li>
</ul>
<h2>Validation (Retrieval)</h2>
<p>Implemented multiple retrieval rules and evaluated scores in a zero-shot manner without intervening model training. Initially, evaluations were carried out with llama2 7b and train.csv. After the performance of the pre-trained model using context significantly improved, this model was used to evaluate using a validation dataset drawn from the 60k dataset at <a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam/discussion/436383" target="_blank">https://www.kaggle.com/competitions/kaggle-llm-science-exam/discussion/436383</a>, taking out 2000 instances. As no model training was inserted, it was possible to execute improvement cycles at a relatively high speed.</p>
<h2>Details</h2>
<p>We believed the following elements to be necessary for high-quality retrieval:</p>
<ul>
<li>As much as possible, clean Wikipedia data</li>
<li>Search relating to all elements of the prompt, options</li>
<li>Comprehensive search </li>
</ul>
<p>Acquiring clean Wikipedia data is possible through the cirrussearch dump, which can be referenced from the Wikiextractor document at <a href="https://github.com/attardi/wikiextractor#wikipedia-cirrus-extractor" target="_blank">https://github.com/attardi/wikiextractor#wikipedia-cirrus-extractor</a>. We built an Elasticsearch server with the texts contained in the cirrussearch dump divided into sentences and ran it on the Kaggle kernel.</p>
<p>The searches are always performed sentence by sentence. For each prompt or option divided into sentences, we extract all the words, eliminate duplicates and stop words, access the Elasticsearch server, and extract similar sentences. Based on the Wikipedia sentences fetched in this way, we generate three types of contexts: v3, v5, and v7. Each context involves basic processing such as concatenating surrounding sentences of the target sentence and concatenating consecutive sentences. The differences are as follows:</p>
<ul>
<li>v3: Sorts context based on the score at the time of Elasticsearch search.</li>
<li>v5: Sorts context based on the edit distance with the question sentence.</li>
<li>v7: Sorts context using semantic search, implemented using sentence-transformers.<br>
For the embedding between the prompt and Wikipedia, we used msmarco-bert-base-dot-v5. For the embedding between the option and Wikipedia, we used all-mpnet-base-v2.</li>
</ul>
<h3>Tips:</h3>
<p>Elasticsearch requires writing at startup, so it needs to be copied and run on /kaggle/temp. As discussed in the discussion at <a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam/discussion/444742" target="_blank">https://www.kaggle.com/competitions/kaggle-llm-science-exam/discussion/444742</a>, the I/O speed of /kaggle/temp became unstable, so we added an innovation to directly perform data I/O from the file system on /kaggle/input by linking files in the Elasticsearch database that only perform reading with a symbolic link.</p>
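<p>To make the stage-1 keyword search concrete, here is a minimal sketch of a per-sentence Elasticsearch query of the kind described above (the index name, field name, stop-word list, and client version are assumptions, not the team's actual configuration):</p>
<pre><code>from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")     # assumed server started inside the kernel

STOPWORDS = {"the", "a", "an", "of", "and", "is", "in", "to"}   # toy stop-word list

def search_sentence(sentence, k=10):
    # Keep unique, non-stop-word keywords from one prompt/option sentence ...
    words = [w for w in set(sentence.lower().split()) if w not in STOPWORDS]
    # ... and fetch the k Wikipedia sentences that best match them.
    resp = es.search(
        index="wiki_sentences",                 # assumed index of sentence-split cirrussearch text
        query={"match": {"text": " ".join(words)}},
        size=k,
    )
    return [(hit["_score"], hit["_source"]["text"]) for hit in resp["hits"]["hits"]]
</code></pre>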
<h1>Modeling Part</h1>
<h2>Validation</h2>
<p>Since it was important to train a model that could effectively reference context irrespective of the domain, we randomly extracted 2,000 instances as validation data from the open dataset without limiting to a specific domain. The raw train.csv was too easy for validation and was hardly useful; therefore, we did not use it much in the later stages. We were able to avoid overfitting by trusting the obtained correlation between validation scores (2,000 samples) and LB.<br>
Here is an illustration of the correlation between validation scores and LB: </p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F520183%2Fef62d0c56d959b1d341c72ac5d9a266f%2F2023-10-11%2014.37.50.png?generation=1697002694325672&alt=media"></p>
<h2>training</h2>
<p>Model: Deberta v3 Large <br>
Datasets: Our main training datasets were 60k and 40k. <br>
Improvement in accuracy was noted by extending the number of tokens from 512 to 768 and 1280.</p>
<h3>Method to include context:</h3>
<p>We realized that the performance improved as more tokens were included as context. Thus, we implemented a method where we first positioned the prompt & choices and then packed the remaining portions with as much context as possible.</p>
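<p>A minimal sketch of this packing idea: tokenize the question part first, then fill whatever token budget remains with context (the separator handling and budget arithmetic are simplified assumptions, not the team's exact code):</p>
<pre><code>from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v3-large")
MAX_LEN = 1280

def pack_example(context, prompt, choice):
    # Reserve space for the prompt and choice first ...
    qa_ids = tokenizer.encode(prompt + " [SEP] " + choice, add_special_tokens=False)
    budget = MAX_LEN - len(qa_ids) - 3                      # room for CLS / SEP tokens
    # ... then pack as much retrieved context as still fits.
    ctx_ids = tokenizer.encode(context, add_special_tokens=False)[:budget]
    input_ids = ([tokenizer.cls_token_id] + ctx_ids + [tokenizer.sep_token_id]
                 + qa_ids + [tokenizer.sep_token_id])
    return {"input_ids": input_ids, "attention_mask": [1] * len(input_ids)}
</code></pre>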
<h2>Context Ensemble</h2>
<p>There were questions that matched the retrieval and some that did not. Ensembling the results inferred with each of the three types of contexts for a single model improved the score (for training, v7 alone was good, but ensembling multiple contexts at inference improved performance). This likely indicates that the diversity of the retrievers makes the search results more robust.</p>
Bengali.AI Speech Recognition | 66th Place Solution for the Bengali.AI Speech Recognition Competition | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Speech Recognition
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Business context: <a href="http://www.kaggle.com/competitions/bengaliai-speech/overview" target="_blank">www.kaggle.com/competitions/bengaliai-speech/overview</a><br>
Data context: <a href="http://www.kaggle.com/competitions/bengaliai-speech/data" target="_blank">www.kaggle.com/competitions/bengaliai-speech/data</a></p>
<h2>Overview of the Approach</h2>
<p>Basically, we decided to use a very straightforward approach: fine-tuning only one model and applying a language model afterward. We thought that the key to success in this competition was the data, as the private dataset contains different OOD samples. In our final submission we used the <code>ai4bharat/indicwav2vec_v1_bengali</code> model, fine-tuned for 3 epochs on the data. For the data we used: the SLR53 dataset, the SLR37 dataset, Common Voice 13 data, a random 0.3 of the competition train split, and 0.85 of the competition validation split. We deleted Common Voice samples from the validation set; however, our CV score did not really correlate with the LB score. For the language model we just used <code>arijitx/wav2vec2-xls-r-300m-bengali</code>.</p>
<h2>Details of the submission</h2>
<p>In conclusion, we have tried other pretrained checkpoints and other models, but they performed worse. We have also tried some augmentations for the volume and noise, but they did not work for us.</p>
<p>It is important to clean the competition dataset, for example by pseudo-labeling it. Otherwise you get a poor correlation between CV and LB, as we did. It is also important to train the model on several datasets; then the model trains better. We suppose that this is due to "unlearning": if we fine-tune the model only on the competition data, the model starts forgetting previous knowledge, whereas for OOD data the model should remember many variations of speech. </p>
<p>We also did not do the punctuation part due to lack of time, but it is essential when working on this type of problem.</p>
<h2>Sources</h2>
<p>Pretrained model: <a href="https://huggingface.co/ai4bharat/indicwav2vec_v1_bengali" target="_blank">https://huggingface.co/ai4bharat/indicwav2vec_v1_bengali</a><br>
Dataset 1: <a href="https://www.openslr.org/53/" target="_blank">https://www.openslr.org/53/</a><br>
Dataset 2: <a href="https://www.openslr.org/37/" target="_blank">https://www.openslr.org/37/</a><br>
Dataset 3: <a href="https://www.kaggle.com/datasets/umongsain/common-voice-13-bengali-normalized?select=train.tsv" target="_blank">https://www.kaggle.com/datasets/umongsain/common-voice-13-bengali-normalized?select=train.tsv</a></p> |
Kaggle - LLM Science Exam | Kaggle LLMSE 86th Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kaggle - LLM Science Exam
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h1>Introduction</h1>
<p>First of all, we would like to express our sincere appreciation to the competition organizers and competitors who shared their valuable thoughts and resources during the competition.</p>
<p>Before starting, feel free to take a quick look at our <strong><a href="https://www.kaggle.com/code/hqfang/kaggle-llmse-inference" target="_blank">notebook</a></strong> solution. I would also like to introduce our team members: <strong><a href="https://www.kaggle.com/yuekaixueirc" target="_blank">@yuekaixueirc</a></strong>, <strong><a href="https://www.kaggle.com/lindseywei" target="_blank">@lindseywei</a></strong>, and <strong><a href="https://www.kaggle.com/hqfang" target="_blank">@hqfang</a></strong>.</p>
<p>Our team's solution started from <a href="https://www.kaggle.com/mbanaei" target="_blank">@mbanaei</a>'s <a href="https://www.kaggle.com/code/mbanaei/86-2-with-only-270k-articles" target="_blank">notebook</a>. We made changes in three different parts: <strong>Context Retrieval</strong>, <strong>Model Inference</strong>, and <strong>Ensemble Models</strong>.</p>
<p><br></p>
<h1>Context Retrieval</h1>
<p>We kept the original RAG method in the notebook. On top of that, we borrowed one more context source from <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>'s <a href="https://www.kaggle.com/code/cdeotte/how-to-train-open-book-model-part-2" target="_blank">notebook</a>. Moreover, when retrieving relevant contexts using prompt and options, we found that <a href="https://www.kaggle.com/mbanaei" target="_blank">@mbanaei</a>'s way of weighting the prompt by repeating it three times worked better, so we did it the same way for the context retrieval we added, that is:</p>
<pre><code># reconstructed sketch -- column names were garbled in this copy: join the five options,
# then repeat the prompt three times before appending them, so the prompt dominates the query
trn["answer_all"] = trn.apply(lambda x: " ".join([x["A"], x["B"], x["C"], x["D"], x["E"]]), axis=1)
trn["prompt_answer_stem"] = trn["prompt"] + " " + trn["prompt"] + " " + trn["prompt"] + " " + trn["answer_all"]
</code></pre>
<p>We also found that <a href="https://www.kaggle.com/mbanaei" target="_blank">@mbanaei</a>'s way of reversing the order of the relevant context, so that the most relevant sentences sit closest to the prompt and options, was also useful, so we did the same for the context retrieval we added, that is:</p>
<pre><code># reconstructed sketch -- identifiers were garbled in this copy; variable names follow the
# public open-book notebooks this code was adapted from
contexts = []
for r in tqdm(trn.itertuples(), total=len(trn)):
    prompt_id = r.Index
    prompt_indices = processed_wiki_text_data[processed_wiki_text_data["document_id"].isin(
        wikipedia_file_data[wikipedia_file_data["prompt_id"] == prompt_id]["id"].values)].index.values
    context = ""
    if len(prompt_indices) > 0:
        prompt_index = faiss.index_factory(wiki_data_embeddings.shape[1], "Flat")
        prompt_index.add(wiki_data_embeddings[prompt_indices])
        context_temp = []
        ss, ii = prompt_index.search(question_embeddings, NUM_SENTENCES_INCLUDE)
        for _s, _i in zip(ss[prompt_id], ii[prompt_id]):
            context_temp.append(processed_wiki_text_data.loc[prompt_indices]["text"].iloc[_i])
        context_temp.reverse()          # most relevant sentences end up closest to the question
        for t in range(len(context_temp)):
            context += context_temp[t] + " "
    contexts.append(context)
contexts_wiki = contexts
del contexts
gc.collect()
</code></pre>
<p>We also changed the variable <code>NUM_SENTENCES_INCLUDE</code> in the added retrieval to 15 in order to make our contexts have less irrelevant information.</p>
<p>Also, inspired by <a href="https://www.kaggle.com/simjeg" target="_blank">@simjeg</a>'s <a href="https://www.kaggle.com/code/simjeg/platypus2-70b-with-wikipedia-rag" target="_blank">notebook</a>, we made use of the variable <code>IS_TEST_SET</code> to save local run time for context retrieval.</p>
<h1>Model Inference</h1>
<p>Instead of using a Longformer, we chose to use DeBERTa, as it outperformed the Longformer in our experiments. To add more diversity to the inference, also inspired by <a href="https://www.kaggle.com/itsuki9180" target="_blank">@itsuki9180</a>'s <a href="https://www.kaggle.com/code/itsuki9180/llm-sciex-optimise-ensemble-weights" target="_blank">notebook</a>, we decided to use both OpenBook models and non-OpenBook models.</p>
<p>Among the OpenBook models, we used three DeBERTas trained locally by <a href="https://www.kaggle.com/yuekaixueirc" target="_blank">@yuekaixueirc</a> primarily using <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>'s <a href="https://www.kaggle.com/code/cdeotte/how-to-train-open-book-model-part-1" target="_blank">notebook</a>. Among the non-OpenBook models, we used one DeBERTa trained locally by <a href="https://www.kaggle.com/hqfang" target="_blank">@hqfang</a> primarily using <a href="https://www.kaggle.com/radek1" target="_blank">@radek1</a>'s <a href="https://www.kaggle.com/code/radek1/new-dataset-deberta-v3-large-training" target="_blank">notebook</a>, one DeBERTa trained locally by <a href="https://www.kaggle.com/lindseywei" target="_blank">@lindseywei</a> using the <a href="https://www.kaggle.com/code/datafan07/single-model-rewardtrainer-lora-llm/notebook" target="_blank">LoRA</a> technique, and one DeBERTa posted publicly by <a href="https://www.kaggle.com/itsuki9180" target="_blank">@itsuki9180</a> using the <a href="https://www.kaggle.com/code/itsuki9180/introducing-adversarial-weight-perturbation-awp" target="_blank">AWP</a> technique.</p>
<p>All of the models we trained are posted publicly in this <strong><a href="https://www.kaggle.com/datasets/hqfang/kaggle-llmse-dataset" target="_blank">dataset</a></strong>.</p>
<p>For the inference part of the OpenBook models, we changed the way of tokenizing samples to the one we used for training, that is:</p>
<pre><code>def prepare_answering_input(
        tokenizer,
        question,
        options,
        context,
        max_seq_length=1024,
    ):
    # reconstructed sketch -- string literals were garbled in this copy; it follows the
    # DeBERTa multiple-choice format used for training: context first, then question + option
    first_sentence = ["[CLS] " + context] * 5
    second_sentences = [" #### " + question + " [SEP] " + options[i] + " [SEP]" for i in range(0, 5)]
    tokenized_examples = tokenizer(
        first_sentence, second_sentences,
        max_length=max_seq_length,
        truncation="only_first",
        add_special_tokens=False,
        padding="longest",
        return_tensors="pt",
    )
    input_ids = tokenized_examples["input_ids"].unsqueeze(0)
    attention_mask = tokenized_examples["attention_mask"].unsqueeze(0)
    example_encoded = {
        "input_ids": input_ids.to(model.device),
        "attention_mask": attention_mask.to(model.device),
    }
    return example_encoded
</code></pre>
<p>Note that we also changed the <code>max_seq_length</code> to 1024.</p>
<p>We kept the inference code as it was, but changed some details so that it only outputs the probabilities for further use. What's more, we weighted the three predictions from the different contexts 4:4:2. See more below:</p>
<pre><code># reconstructed sketch -- identifiers and literals were garbled in this copy;
# the function name, file names, and column indices are illustrative
def get_openbook_probs(model_dir):
    df_valid = pd.read_csv("test.csv")
    trn2 = pd.read_csv("test_contexts.csv")
    tokenizer = AutoTokenizer.from_pretrained(model_dir)
    model = AutoModelForMultipleChoice.from_pretrained(model_dir).cuda()
    predictions = []
    for index in tqdm(range(trn2.shape[0])):
        columns = df_valid.iloc[index].values
        question = columns[1]
        options = [columns[2], columns[3], columns[4], columns[5], columns[6]]
        context1 = trn2["context_1"][index]
        context2 = trn2["context_2"][index]
        context3 = trn2["context_3"][index]
        inputs1 = prepare_answering_input(
            tokenizer=tokenizer, question=question,
            options=options, context=context1,
        )
        inputs2 = prepare_answering_input(
            tokenizer=tokenizer, question=question,
            options=options, context=context2,
        )
        inputs3 = prepare_answering_input(
            tokenizer=tokenizer, question=question,
            options=options, context=context3,
        )
        with torch.no_grad():
            outputs1 = model(**inputs1)
            losses1 = -outputs1.logits[0].detach().cpu().numpy()
            probability1 = torch.softmax(torch.tensor(-losses1), dim=-1)
        with torch.no_grad():
            outputs2 = model(**inputs2)
            losses2 = -outputs2.logits[0].detach().cpu().numpy()
            probability2 = torch.softmax(torch.tensor(-losses2), dim=-1)
        with torch.no_grad():
            outputs3 = model(**inputs3)
            losses3 = -outputs3.logits[0].detach().cpu().numpy()
            probability3 = torch.softmax(torch.tensor(-losses3), dim=-1)
        probability_ = probability1 * 0.4 + probability2 * 0.4 + probability3 * 0.2   # 4 : 4 : 2
        predictions.append(probability_.numpy())
    predictions = np.array(predictions)
    return predictions
</code></pre>
<p>Also, inspired by <a href="https://www.kaggle.com/simjeg" target="_blank">@simjeg</a>'s <a href="https://www.kaggle.com/code/simjeg/platypus2-70b-with-wikipedia-rag" target="_blank">notebook</a> again, we made use of the variable <code>IS_TEST_SET</code> to save GPU run time for OpenBook model inference.</p>
<p><br></p>
<h1>Ensemble Models</h1>
<p>To avoid overfitting the public LB, we simply took the average of the three OpenBook models. When ensembling the three non-OpenBook models, we assigned specific weights we derived from previous experience when playing with <a href="https://www.kaggle.com/itsuki9180" target="_blank">@itsuki9180</a>'s <a href="https://www.kaggle.com/code/itsuki9180/llm-sciex-optimise-ensemble-weights" target="_blank">notebook</a>. After that, we assigned the OpenBook models to take 90% of the total prediction, while the non-OpenBook models could only take 10%. Finally we can ensemble the inference using the weights below:</p>
<pre><code># reconstructed sketch -- the raw weight list was garbled in this copy; the non-OpenBook
# weights below are placeholders for values elided in the original
w4 = w5 = w6 = ...                      # weights derived from earlier experiments (not given here)
ws = np.array([1.0, 1.0, 1.0, w4, w5, w6])
openbook_w = 0.9 / 3.0                  # three OpenBook models share 90% of the prediction
other_w = 0.1 / 3.0                     # three non-OpenBook models share the remaining 10%
ws[0] = ws[0] * openbook_w
ws[1] = ws[1] * openbook_w
ws[2] = ws[2] * openbook_w
ws[3] = ws[3] * other_w
ws[4] = ws[4] * other_w
ws[5] = ws[5] * other_w
predictions = deberta_ob_preds_eric_0897 * ws[0] + deberta_ob_preds_eric_088 * ws[1] + deberta_ob_preds_eric_0916 * ws[2] + deberta_preds_billy_v1 * ws[3] + deberta_awp_preds_itk * ws[4] + deberta_lora_preds_lindsey * ws[5]
<p><br></p>
<h1>Conclusion</h1>
<p>In summary, what we mainly did in this competition was trying different kinds of ensembles, including ensembling different contexts and models. In those ensembles, we mainly contributed to training different models that were better than the ones posted publicly, and we borrowed those useful RAGs and brought them together with some fine-tuning. We were deeply surprised by the amazing open-source environment which helped everyone thrive in this competition. </p>
<p>Finally, thanks again to all the competitors who shared those invaluable ideas that we could work on. We wouldn't get to this position without their effort. </p>
<p><br></p>
<p><em>P.S. Our score of 0.905 on the PB was achieved by a solution that did not use non-OpenBook models, it maintained the same weights for contexts ensemble but took the average of the three OpenBook models. However, the approach we posted here could achieve a score of 0.906, which is the best score on PB among all of our submissions. Hence we chose to post the best one publicly. Also, the score of 0.915 on the LB was achieved by a solution that is exactly the same as the one posted here, where we only changed to take the average of the three contexts.</em></p> |
Kaggle - LLM Science Exam | 31th Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kaggle - LLM Science Exam
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h1><a href="https://www.kaggle.com/code/sugupoko/llm-31th-place-solution/notebook" target="_blank">Our Inference code</a></h1>
<h1><a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam/discussion/445438" target="_blank">70k Dataset</a></h1>
<p>Thank you to the host and the competitors.<br>
It was an interesting competition for me.</p>
<p>My first NLP competition was a great learning experience for me.<br>
I'm looking forward to the solutions from the top.</p>
<h2>Team Member</h2>
<p><a href="https://www.kaggle.com/shuichiurabe" target="_blank">@shuichiurabe</a><br>
co-workers!!</p>
<h2>Solution overview</h2>
<h3>inference flow</h3>
<p>We decided on a strategy of fewer models and more variation in the data. After observing that LB scores increased significantly once the 270k dataset was released, we felt this approach was validated.<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2930242%2F56096167685f983ea2d2a27a17a6798e%2F2023-10-11%20090525.png?generation=1696982746546638&alt=media" alt=""></p>
<h3>model</h3>
<ul>
<li>We trained with the 60k dataset from <a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam/discussion/436383" target="_blank">cdeotte's discussion</a>.</li>
<li>We only changed the max length to 640, froze the first 18 layers, and froze the embeddings (see the sketch after this list).</li>
</ul>
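<p>A minimal sketch of freezing the embeddings and the first 18 encoder layers of DeBERTa-v3-large with PyTorch/Transformers (shown as an assumption of how this setting can be implemented, since the referenced training code may use a different framework):</p>
<pre><code>from transformers import AutoModelForMultipleChoice

model = AutoModelForMultipleChoice.from_pretrained("microsoft/deberta-v3-large")

# Freeze the token embeddings ...
for param in model.deberta.embeddings.parameters():
    param.requires_grad = False

# ... and the first 18 of the 24 encoder layers; only the top layers and the head are trained.
for layer in model.deberta.encoder.layer[:18]:
    for param in layer.parameters():
        param.requires_grad = False

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"trainable parameters: {trainable:,}")
</code></pre>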
<h2>Things that didn't go well:</h2>
<ul>
<li>Searching for similar documents. <ul>
<li>Training the Sentence Transformer.</li>
<li>Replacing the Sentence Transformer.</li>
<li>BM25 method</li></ul></li>
<li>Regarding answer prediction:<ul>
<li>Increasing the data.</li>
<li>Spent $70 on custom data, but it was ineffective.</li></ul></li>
</ul>
<h2>Things I couldn't do:</h2>
<ul>
<li><p>Searching for similar documents.</p>
<ul>
<li>Expanding data sources.</li>
<li>Couldn't due to PC specs (Needed more than 200GB RAM).</li></ul></li>
<li><p>Regarding answer prediction (continued):</p>
<ul>
<li>Making labels soft.<ul>
<li>Wanted to give partial scores like 0.5 when the answer contained some correct content.</li></ul></li></ul></li>
</ul>
<h2>Reference previous competition</h2>
<p>Many of our experiments drew on the following previous competitions:</p>
<ul>
<li><a href="https://www.kaggle.com/competitions/feedback-prize-2021/leaderboard" target="_blank">https://www.kaggle.com/competitions/feedback-prize-2021/leaderboard</a></li>
<li><a href="https://www.kaggle.com/competitions/AI4Code/leaderboard" target="_blank">https://www.kaggle.com/competitions/AI4Code/leaderboard</a></li>
<li><a href="https://www.kaggle.com/competitions/feedback-prize-effectiveness/leaderboard" target="_blank">https://www.kaggle.com/competitions/feedback-prize-effectiveness/leaderboard</a></li>
<li><a href="https://www.kaggle.com/competitions/us-patent-phrase-to-phrase-matching/leaderboard" target="_blank">https://www.kaggle.com/competitions/us-patent-phrase-to-phrase-matching/leaderboard</a></li>
<li><a href="https://www.kaggle.com/competitions/nbme-score-clinical-patient-notes/leaderboard" target="_blank">https://www.kaggle.com/competitions/nbme-score-clinical-patient-notes/leaderboard</a></li>
<li><a href="https://www.kaggle.com/competitions/feedback-prize-2021/leaderboard" target="_blank">https://www.kaggle.com/competitions/feedback-prize-2021/leaderboard</a></li>
</ul> |
Bengali.AI Speech Recognition | 14th Place Solution for the Bengali.AI Speech Recognition Competition | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Speech Recognition
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to the organizers and Kaggle staff for holding the competition, and congratulations to the winners!</p>
<h1>Context</h1>
<p>Business context: <a href="https://www.kaggle.com/competitions/bengaliai-speech/overview" target="_blank">https://www.kaggle.com/competitions/bengaliai-speech/overview</a><br>
Data context: <a href="https://www.kaggle.com/competitions/bengaliai-speech/data" target="_blank">https://www.kaggle.com/competitions/bengaliai-speech/data</a></p>
<h1>Overview of the Approach</h1>
<p>My approach addressed two main challenges:</p>
<p><strong>Challenge:</strong></p>
<ol>
<li>The need for robust speech recognition capable of handling diverse speakers.</li>
<li>The requirement to restore punctuation in transcriptions.</li>
</ol>
<p><strong>Approach:</strong></p>
<ol>
<li>Fine-tuning the <code>indicwav2vec_v1_bengali</code> model using competition data.</li>
<li>Leveraging the Punctuation Restoration tool from <a href="https://github.com/xashru/punctuation-restoration" target="_blank">https://github.com/xashru/punctuation-restoration</a>.</li>
</ol>
<p>This approach led to a Public Leaderboard score of 0.38.</p>
<p>I began with the foundation provided by <a href="https://www.kaggle.com/ttahara" target="_blank">@ttahara</a>’s <a href="https://www.kaggle.com/code/ttahara/bengali-sr-public-wav2vec2-0-w-lm-baseline" target="_blank">notebook</a>.</p>
<h1>Details of the submission</h1>
<p><strong>Diverse Speaker Recognition:</strong><br>
The test audio data comes mostly from YouTube, which means that the speakers' identities are often unknown. To create a versatile model, I trained it on diverse audio data.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F8577629%2F04724b5e0fab1e4b550271153025dcd3%2Fdomain.png?generation=1697591004781091&alt=media" alt=""><br>
(dataset paper: <a href="https://arxiv.org/abs/2305.09688" target="_blank">https://arxiv.org/abs/2305.09688</a> C.1.1. Data Scraping Roadmap & Prerequisites)</p>
<p><strong>Punctuation Restoration:</strong><br>
It's known that the labels are normalized, which implies that punctuation is preserved, as mentioned <a href="https://www.kaggle.com/competitions/bengaliai-speech/discussion/432305#2400110" target="_blank">here</a>.<br>
Predicting punctuation during transcription is difficult, and leaving it out would result in word errors.</p>
<p>By restoring punctuation, Word Error Rate (WER) can be reduced:</p>
<ul>
<li>(label) hello. how are you?</li>
<li>(predict) hello how are you<br>
→ wer: 0.5</li>
<li>(restore) hello. how are you.<br>
→ wer: 0.25</li>
</ul>
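<p>The effect can be checked with a quick WER computation (the jiwer package is used here purely for illustration):</p>
<pre><code>import jiwer

label = "hello. how are you?"
print(jiwer.wer(label, "hello how are you"))    # 0.5  -- two of the four words differ
print(jiwer.wer(label, "hello. how are you."))  # 0.25 -- only the final token differs
</code></pre>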
<p>While the public notebook appends periods at the end of sentences, the test data has an average of 34.42 words per sample and a Macro Train/Validation set with averages of 8.42/9.21. This suggests that multiple sentences may exist in one audio file, making it necessary to restore punctuation at points other than sentence endings.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F8577629%2F1aa92dc382aaefb5ee7797fdc42a5c94%2Fwps.png?generation=1697591051306745&alt=media" alt=""><br>
(dataset paper: <a href="https://arxiv.org/abs/2305.09688" target="_blank">https://arxiv.org/abs/2305.09688</a> Table 1: OOD-Speech Dataset Statistics)</p>
<h2>Models</h2>
<ol>
<li>Wav2vec2CTC model<ul>
<li>Fine-tuned Wav2vec2 <code>ai4bharat/indicwav2vec_v1_bengali</code></li></ul></li>
<li>Language model<ul>
<li>KenLM <code>arijitx/wav2vec2-xls-r-300m-bengali</code></li></ul></li>
<li>Punctuation Restore model<ul>
<li>XLM-RoBERTa-large from <a href="https://github.com/xashru/punctuation-restoration" target="_blank">https://github.com/xashru/punctuation-restoration</a></li></ul></li>
</ol>
<h2>Training</h2>
<p><a href="https://github.com/Neilsaw/kaggle_Bengali.AI_ASR_16th_solution" target="_blank">Github</a></p>
<p>Fine-tuning Wav2vec2CTC with Transformers involved choosing datasets based on Yellowking’s WER, CER, and MOS_PRED metrics, as outlined in this <a href="https://www.kaggle.com/code/imtiazprio/listen-to-training-samples-data-quality-eda" target="_blank">notebook</a>. Two dataset splits were used:</p>
<ol>
<li><strong>Easy Data:</strong> audio samples where inference was straightforward.<ul>
<li>YKG WER < 0.1</li></ul></li>
<li><strong>Hard Data:</strong> audio samples where the character content was correct but the WER was high.<ul>
<li>0.3 < WER < 1.5</li>
<li>CER < 0.15</li>
<li>MOS_PRED > 3</li></ul></li>
</ol>
<p>The first dataset helped the model adapt to a variety of voices, while the second dataset allowed it to handle audio with higher WER.</p>
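<p>A minimal pandas sketch of the two splits above, assuming a metadata dataframe that already contains the Yellowking WER/CER and MOS_PRED columns (the file and column names are assumptions based on the referenced EDA notebook):</p>
<pre><code>import pandas as pd

meta = pd.read_csv("train_metadata_with_yellowking.csv")   # assumed metadata file

# Easy data: samples the pseudo-labeling model already transcribes almost perfectly.
easy_df = meta[meta["ykg_wer"] < 0.1]

# Hard data: high WER, but the characters are mostly right and the audio quality is decent.
hard_df = meta[
    (meta["ykg_wer"] > 0.3) & (meta["ykg_wer"] < 1.5)
    & (meta["ykg_cer"] < 0.15)
    & (meta["mos_pred"] > 3)
]
print(len(easy_df), len(hard_df))
</code></pre>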
<p>Additionally, the inclusion of white noise during training led to a slight improvement in the Leaderboard score by 0.001.</p>
<h2>Validate</h2>
<p>I only used the LB for validation.<br>
I couldn't rely on local cross-validation because the domain shift between the training data and the test data was too significant.</p>
<h2>Inference</h2>
<p><a href="https://www.kaggle.com/code/neilus/16th-solution/notebook" target="_blank">inference notebook</a></p>
<p>Improvements over the public notebook:</p>
<ul>
<li><p>Punctuation restoration (-0.023)</p>
<ul>
<li>To use <a href="https://github.com/xashru/punctuation-restoration" target="_blank">xashru/punctuation-restoration</a>, we need to switch to transformers==2.11.0.</li>
<li>So after LM inference, we pip install transformers==2.11.0 and run punctuation restoration from the command line so that the imported packages are reset.</li></ul></li>
<li><p>using unigrams.txt for KenLM ( -0.005)</p></li>
</ul>
<pre><code># reconstructed sketch -- paths and literals were garbled in this copy
with open(lm_dir / "unigrams.txt", encoding="utf-8") as f:
    unigram_list = [t.strip() for t in f.read().strip().split("\n")]
decoder = pyctcdecode.build_ctcdecoder(
    list(sorted_vocab_dict.keys()),
    str(lm_dir / "language_model.arpa"),   # KenLM file name is illustrative
    unigram_list,
)
</code></pre>
<ul>
<li>beam width 1500 (-0.002 ~ -0.001)</li>
</ul>
<h2>Results</h2>
<table>
<thead>
<tr>
<th></th>
<th>Public</th>
<th>Improvement</th>
<th></th>
<th>Private</th>
</tr>
</thead>
<tbody>
<tr>
<td>Baseline</td>
<td>0.471</td>
<td></td>
<td></td>
<td>0.564</td>
</tr>
<tr>
<td>Training (Easy Data)</td>
<td>0.425</td>
<td>-0.046</td>
<td></td>
<td>0.508</td>
</tr>
<tr>
<td>Beam width 1500</td>
<td>0.423</td>
<td>-0.002</td>
<td></td>
<td>0.506</td>
</tr>
<tr>
<td>Punctuation restoration</td>
<td>0.400</td>
<td>-0.023</td>
<td></td>
<td>0.488</td>
</tr>
<tr>
<td>Using unigrams.txt</td>
<td>0.395</td>
<td>-0.005</td>
<td></td>
<td>0.48</td>
</tr>
<tr>
<td>Training (Hard Data)</td>
<td>0.380</td>
<td>-0.015</td>
<td></td>
<td>0.458</td>
</tr>
</tbody>
</table>
<h2>Things that didn't work for me</h2>
<ul>
<li>NER (Named Entity Recognition)<ul>
<li>Intended to restore "-" via NER, but applying NER to ASR output sentences produced many false detections and lowered the LB.</li></ul></li>
<li>Creating my own LM<ul>
<li>Since I only used the MaCro train data, the text corpus may have been too small.</li></ul></li>
<li>Fine-tuned punctuation model<ul>
<li>Same reason as the LM: only the MaCro train data was used.</li></ul></li>
<li>Denoiser (<a href="https://github.com/facebookresearch/denoiser" target="_blank">https://github.com/facebookresearch/denoiser</a>)<ul>
<li>LB and CV went down, so I didn't use it.</li></ul></li>
<li>Adding various noise while training<ul>
<li>Only white noise worked.</li></ul></li>
</ul>
<h1>Conclusion</h1>
<p>At the beginning of the competition, I tried adding noise to adapt to different domains, but it didn't improve the LB. From this, I thought that there were other challenges to address besides noise.</p>
<p>This competition challenged participants to achieve generalization and deal with label noise (punctuation). It required addressing the question of how much generalization is necessary and how to handle punctuation noise effectively. </p>
<p>I am grateful for the opportunity to learn from this competition.</p>
<h1>Source</h1>
<ul>
<li><a href="https://www.kaggle.com/code/ttahara/bengali-sr-public-wav2vec2-0-w-lm-baseline" target="_blank">https://www.kaggle.com/code/ttahara/bengali-sr-public-wav2vec2-0-w-lm-baseline</a></li>
<li><a href="https://www.kaggle.com/competitions/bengaliai-speech/discussion/432305#2400110" target="_blank">https://www.kaggle.com/competitions/bengaliai-speech/discussion/432305#2400110</a></li>
<li><a href="https://github.com/xashru/punctuation-restoration" target="_blank">https://github.com/xashru/punctuation-restoration</a></li>
</ul> |
Kaggle - LLM Science Exam | Top 100 Solution - Fast RAPIDS TF-IDF RAG - 2xT4 GPU Acceleration! | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kaggle - LLM Science Exam
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Awesome competition! Thanks everyone for sharing many wonderful discussions and notebooks. This competition was so much fun. I'm ready for another LLM QA comp already!</p>
<h1>RAG Ensemble</h1>
<p>Early in the competition, I jumped into gold using a single DeBERTa-v3-large (trained on my 60k dataset <a href="https://www.kaggle.com/datasets/cdeotte/60k-data-with-context-v2" target="_blank">here</a>) and ensembling multiple RAG pipelines. Each RAG pipeline extracts context. Then we infer each <code>context + question + choices</code> with DeBERTa-v3-large. And finally we ensemble the sets of logits and make predictions:</p>
<p><img src="https://raw.githubusercontent.com/cdeotte/Kaggle_Images/main/Sep-2023/rag-ensemble2.png" alt=""></p>
<h1>GPU Speed Up</h1>
<p>I was surprised to discover that each new RAG pipeline I added above improved my CV and LB more than adding additional DeBERTa. Therefore I focused on accelerating the speed of RAG pipeline + DeBERTa inference (with the goal of ensemble in mind). I created lots of diverse RAG pipelines and I accelerated RAG with:</p>
<ul>
<li>Use <strong>GPU</strong> for Faiss</li>
<li>Use <strong>GPU</strong> for TF-IDF with <strong>RAPIDS</strong></li>
<li>Use <strong>GPU</strong> (and fp16 on 2xT4) for Sentence Transformers</li>
</ul>
<p>Regarding DeBERTa, with 2xT4 GPU I was able to infer DeBERTa <strong>4x faster</strong> than using 1xP100 GPU. This was accomplished using </p>
<ul>
<li><code>from torch.cuda.amp import autocast</code> (Note fp16 only speeds up T4 not P100 GPU) for 2x speed</li>
<li><code>import threading</code> where we infer half the test samples on each T4 GPU for 2x speed</li>
</ul>
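<p>A minimal sketch of this 2xT4 pattern (hedged: the Hugging-Face-style <code>model(**batch).logits</code> call and the <code>make_model</code> factory are assumptions for the example, not the exact code used here):</p>
<pre><code># Run half of the test batches on each T4, each thread under fp16 autocast.
import threading
import torch
from torch.cuda.amp import autocast

def infer_on_device(make_model, batches, device, out, slot):
    model = make_model().to(device).eval()
    preds = []
    with torch.no_grad(), autocast():
        for batch in batches:
            batch = {k: v.to(device) for k, v in batch.items()}
            preds.append(model(**batch).logits.float().cpu())
    out[slot] = torch.cat(preds)

def infer_two_gpus(make_model, batches):
    out, half = [None, None], len(batches) // 2
    threads = [
        threading.Thread(target=infer_on_device, args=(make_model, batches[:half], "cuda:0", out, 0)),
        threading.Thread(target=infer_on_device, args=(make_model, batches[half:], "cuda:1", out, 1)),
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return torch.cat(out)
</code></pre>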
<h1>Example Notebook</h1>
<p>Example notebook with GPU Speed Up is <a href="https://www.kaggle.com/code/cdeotte/rapids-tf-idf-lb-0-900" target="_blank">here</a>. Submit notebook single model takes only <strong>30 minutes</strong> and achieves <strong>LB 0.900</strong>!<br>
<strong>UPDATE:</strong> After competition ended, I learned about the TF-IDF parameter <code>sublinear_tf=True</code> (<a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam/discussion/446558" target="_blank">here</a> and <a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam/discussion/446693" target="_blank">here</a>). This boost CV and LB by an amazing <code>+0.010</code> WOW! I updated notebook version 2 with this parameter.</p>
<h1>Diverse RAGs</h1>
<p>The goal of RAG is to search Wikipedia and find text context. My final ensemble used the following diverse RAGs:</p>
<ul>
<li>Find top 5 articles out of 6M Wikipedia using title, key words, first paragraph. Then extract top 20 sentences.</li>
<li>Find top 5 sections out of 128k STEM Wikipedia. Then extract top 20 sentences</li>
<li>Find top 5 chunks out of 12M Wikipedia 512 token chunks. Then extract 20 sentences</li>
<li>Use Question plus Choices to search Wikipedia.</li>
<li>Use Question without Choices to search Wikipedia</li>
<li>Use Choices without Questions to search Wikipedia</li>
<li>Find top 5 sections out of 271k Wikipedia STEM using <strong>RAPIDs TF-IDF</strong>. Then use paragraphs</li>
<li>Find top 5 sections out of 271k Wikipedia STEM using Sentence Transformers. Then use paragraphs.</li>
</ul>
<h1>Memory Error</h1>
<p>Unfortunately I could only ensemble 7 of my dozens of RAG pipelines before receiving OOM errors. This ensemble submit took only 6 hours to infer 7x Wikipedia RAG and 7x three DeBERTa. If I could solve the OOM errors then I could infer more RAG + DeBERTa pipelines in the remaining 3 hours and boost CV and LB. (In retrospect, perhaps I should have focused on improved RAG quality over RAG quantity).</p>
<h1>Question Drop Out Augmentation</h1>
<p>My best DeBERTa-v3-large was trained on my 60k dataset and used <strong>Question Drop Out Augmentation</strong>. During training, samples typically use <code>Question + 5 Choices</code> to locate <code>context</code>. Then the DeBERTa is trained with <code>context + question + 5 choices</code>. The idea of <strong>Question Drop Out Augmentation</strong> is that randomly 50%, we only use <code>5 Choices</code> to locate <code>context</code>. And the DeBERTa is trained with <code>context + 5 choices</code> without seeing the question.</p>
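<p>A rough sketch of how such an augmentation can be wired into the retrieval step (the helper name and the 50% rate are only illustrative):</p>
<pre><code># Question Drop Out: half the time, build the retrieval query from the choices only.
import random

def build_retrieval_query(question, choices, p_drop=0.5):
    use_question = random.random() >= p_drop
    parts = ([question] if use_question else []) + list(choices)
    return " ".join(parts)  # fed to TF-IDF / sentence-transformer retrieval
</code></pre>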
<p>Using <strong>Question Drop Out Augmentation</strong> achieved my best <code>MAP@3</code> loss for 200 train sample of <code>0.470</code>. This single model achieved <code>LB 0.904</code> in notebook <a href="https://www.kaggle.com/code/cdeotte/rapids-tf-idf-lb-0-900" target="_blank">here</a>.</p>
<h1>Choice Permute TTA</h1>
<p>Another interesting technique that I observed was <strong>Choice Permute TTA</strong>. When extracting context we can concatenate <code>question + choice A, B, C, D, E</code>. Using sentence transformers, this gets context. If we use concatenate <code>question + choice E, B, A, C, D</code> where we randomly permute the choices, then sentence transformer extracts different context. If we infer both context with DeBERTa and ensemble the two sets of logits, we observe a boost in CV LB!</p>
<h1>Another 1.6x Speed Up Trick!</h1>
<p>We can achieve another 1.6x Speed Up for a total of <strong>7x speed up</strong> when comparing 2xT4 vs. 1xP100 using <strong>Drop 2 Wrong Choice</strong>. After we infer a few DeBERTa, we can use the result to determine the 2 wrong choices for each test question. Then we can infer only the 3 best choices for a <code>5/3 = 1.6x</code> speed up. (i.e. we only generate and infer tokens for top 3 choices). We set the prediction for unused choice 4 and 5 as the same logit as choice 3. Then we have 5 logits and we can ensemble this with our previous inference.</p>
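<p>A hedged sketch of the logit padding described above (the <code>infer_top3</code> callback that runs DeBERTa on just three choices is a hypothetical placeholder):</p>
<pre><code># After a first full 5-choice pass, re-infer only the 3 most likely choices and
# pad the two dropped choices with the lowest of the three new logits.
import numpy as np

def second_pass_logits(first_pass_logits, infer_top3):
    padded = np.empty_like(first_pass_logits)         # shape (n_questions, 5)
    for q in range(len(first_pass_logits)):
        top3 = np.argsort(-first_pass_logits[q])[:3]  # indices of the 3 best choices
        logits3 = infer_top3(q, top3)                 # DeBERTa logits for those 3 choices only
        padded[q, :] = logits3.min()                  # dropped choices share the 3rd-best logit
        padded[q, top3] = logits3
    return padded                                     # ensemble this with first_pass_logits
</code></pre>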
<h1>My 60k, 40k, and 99k Datasets!</h1>
<p>Thank you Kagglers for sharing so many wonderful train datasets. To train OpenBook QA DeBERTa-v3-large, I added context to every public dataset and then published the resultant data with context column added in my three Kaggle datasets. Discussions <a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam/discussion/436383" target="_blank">here</a>, <a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam/discussion/440908" target="_blank">here</a>, and <a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam/discussion/444202" target="_blank">here</a>. I trained many DeBERTa-v3-large models using all sorts of combinations of these three datasets in search for the best train procedure and data! However the success of my final solution was a result of having many diverse RAG pipelines and not because of having many (or a great) DeBERTa-v3-large.</p> |
Kaggle - LLM Science Exam | 69th Place Solution for the LLM Science Exam Competition | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kaggle - LLM Science Exam
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank Kaggle for hosting such a great competition and all the participants who generously shared useful insights/models/datasets. Thank my teammate <a href="https://www.kaggle.com/jiesong" target="_blank">@jiesong</a> for making this ranking possible.</p>
<p><strong>Context</strong></p>
<ul>
<li>Business context: <a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam" target="_blank">https://www.kaggle.com/competitions/kaggle-llm-science-exam</a></li>
<li>Data context: <a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam/data" target="_blank">https://www.kaggle.com/competitions/kaggle-llm-science-exam/data</a></li>
</ul>
<p><strong>Overview of the approach</strong><br>
We followed <a href="https://www.kaggle.com/mbanaei" target="_blank">@mbanaei</a>'s "270K Wiki STEM articles + TF-IDF retrieval" method and used four ensemble models that three are DeBERTa and one is Longformer.</p>
<p>Due to our computational limits, although we fine-tuned a lot of DeBERTa models ourselves, most of them performed lower than these two open-sourced models - <a href="https://www.kaggle.com/bobfromjapan" target="_blank">@bobfromjapan</a>'s <a href="https://www.kaggle.com/datasets/bobfromjapan/llm-science-exam-context-v2-models" target="_blank">model</a> and <a href="https://www.kaggle.com/mgoksu" target="_blank">@mgoksu</a>'s <a href="https://www.kaggle.com/datasets/mgoksu/llm-science-run-context-2" target="_blank">model</a>. One of our fine-tuned models which used different sentence transformer and dataset was also included. We followed <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>'s <a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam/discussion/436383" target="_blank">method</a> for the fine-tuning.</p>
<p>For Longformer, we just used zero-shot as our fine-tuning wasn't a success.</p>
<p><strong>Details of the submission</strong><br>
<em>Our fine-tuned model</em></p>
<ul>
<li>used all-mpnet_base_v2 to embed <a href="https://www.kaggle.com/datasets/jjinho/wikipedia-20230701" target="_blank">the 6+ million wiki articles</a>, and obtained a new index file</li>
<li>selected 27k "similar" questions from the <a href="https://www.kaggle.com/datasets/cdeotte/60k-data-with-context-v2" target="_blank">60k samples</a>, as those questions IMO have closer style to the 200 kaggle sample questions, in terms of content and length.<ul>
<li><a href="https://www.kaggle.com/radek1" target="_blank">@radek1</a> <a href="https://www.kaggle.com/datasets/radek1/sci-or-not-sci-hypthesis-testing-pack" target="_blank">6k + 6k</a> + <a href="https://www.kaggle.com/datasets/radek1/15k-high-quality-examples" target="_blank">15k</a></li></ul></li>
<li>used the new index to find relevant context for the 27k questions for training</li>
<li>used the new index to find relevant context for the 200 kaggle questions for validation</li>
</ul>
<p><em>Ensemble weights</em><br>
We followed <a href="https://www.kaggle.com/itsuki9180" target="_blank">@itsuki9180</a>'s method to optimize the ensemble weights. Originally we applied this method to more models, including more of our fine-tuned models and Llama and Galactica models. It showed higher CV score. However they (about five in the last 36 hours) were all timeout at the end of the submission. </p>
<p>Then it came to the last 12 hours in the competition… We don't have enough time to re-run the optimization on our at-the-moment best score which used 0.25 : 0.25 : 0.25 : 0.25 ratios on the models mentioned above. Hence we just used our educated guess to set the ensemble weights as following:<br>
- <a href="https://www.kaggle.com/bobfromjapan" target="_blank">@bobfromjapan</a>'s <a href="https://www.kaggle.com/datasets/bobfromjapan/llm-science-exam-context-v2-models" target="_blank">model</a> : 0.5<br>
- <a href="https://www.kaggle.com/mgoksu" target="_blank">@mgoksu</a>'s <a href="https://www.kaggle.com/datasets/mgoksu/llm-science-run-context-2" target="_blank">model</a>: 0.25<br>
- our model: 0.15<br>
- longformer (zero-shot): 0.1</p>
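<p>For reference, the blend itself is just a fixed-weight average of the per-choice probabilities followed by a top-3 sort for MAP@3; a minimal sketch (array shapes are assumptions):</p>
<pre><code># prob_list: one array of shape (n_questions, 5) per model, aligned on choices A-E.
import numpy as np

def blend_and_rank(prob_list, weights=(0.5, 0.25, 0.15, 0.1)):
    blended = sum(w * p for w, p in zip(weights, prob_list))
    top3 = np.argsort(-blended, axis=1)[:, :3]  # choice indices, best first (MAP@3)
    return blended, top3
</code></pre>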
<p><strong>Acknowledge</strong><br>
Last but not least, we learned a lot from this competition and will keep learning from the solution write-ups. I made a 67 page notes by just copying and pasting the valuable insights shared by the participants. We want to give a big shout-out to the people who inspired us the most and made our ranking possible: <a href="https://www.kaggle.com/leonidkulyk" target="_blank">@leonidkulyk</a>, <a href="https://www.kaggle.com/radek1" target="_blank">@radek1</a>, <a href="https://www.kaggle.com/nlztrk" target="_blank">@nlztrk</a>, <a href="https://www.kaggle.com/jjinho" target="_blank">@jjinho</a>, <a href="https://www.kaggle.com/mgoksu" target="_blank">@mgoksu</a>, <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>, <a href="https://www.kaggle.com/mbanaei" target="_blank">@mbanaei</a>, <a href="https://www.kaggle.com/bobfromjapan" target="_blank">@bobfromjapan</a>, <a href="https://www.kaggle.com/itsuki9180" target="_blank">@itsuki9180</a>. We also want to thank all the participants who provides valuable insights in the discussions/comments.</p> |
CommonLit - Evaluate Student Summaries | We got so close! :((( | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit - Evaluate Student Summaries
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to Kaggle and CommonLit for such an amazing competition! This was one of the first non-starter/playground competitions that our team has competed in. Despite our close call I’m still really happy with the results, and this has definitely equipped us with the skills needed to conquer future Kaggle competitions.</p>
<p>Our best submission had a public LB score of 0.438, which was in the bronze threshold, but ended up getting a private LB score of 0.485, which was 0.001 away from the bronze threshold :(( That being said, I still wanted to make a post detailing our approach, since our team invested a lot of time into this competition. We found that our CV was around 0.5-0.6 points higher than the public LB, and was 0.1-0.2 points higher than the private LB.</p>
<p>Thank you to my teammates <a href="https://www.kaggle.com/yeoyunsianggeremie" target="_blank">@yeoyunsianggeremie</a> and <a href="https://www.kaggle.com/aayushpatni" target="_blank">@aayushpatni</a> for helping me with this competition! You guys were amazing :))</p>
<p>Congratulations to all of the winners! And what a shakeup!</p>
<h1>Method</h1>
<p>We used a modification of the DebertaV3 + LGBM approach shared by <a href="https://www.kaggle.com/tsunotsuno" target="_blank">@tsunotsuno</a> in his public notebook. Big kudos to them!</p>
<h2>Feature Engineering</h2>
<p>We utilized more features in our submission than were present in the original notebook. In particular,</p>
<ul>
<li>we added 6-gram and 12-gram features to model copying from the text instead of paraphrasing (which was associated with lower wording scores);</li>
<li>we added polarity and subjectivity scores using the NLTK library; and</li>
<li>we added counts of the NLTK pos tags in the student summary (e.g. nouns, verbs, exclamations, etc.)</li>
</ul>
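<p>A minimal sketch of two of these features (hypothetical helper names; the actual implementation differed in details), covering the n-gram copying overlap and the NLTK POS-tag counts:</p>
<pre><code>from collections import Counter
import nltk  # assumes the 'punkt' tokenizer and POS tagger data are downloaded

def ngram_copy_ratio(summary, source, n=6):
    def ngrams(text):
        toks = text.lower().split()
        return {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}
    summ, src = ngrams(summary), ngrams(source)
    return len(summ & src) / max(len(summ), 1)  # share of summary n-grams copied verbatim

def pos_tag_counts(summary):
    tags = [tag for _, tag in nltk.pos_tag(nltk.word_tokenize(summary))]
    return Counter(tags)                         # e.g. counts of NN, VB, UH, ...
</code></pre>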
<h2>Modelling</h2>
<p>We ensembled 4 LLM models: 2 DebertaV3 large models trained on individual metrics, 1 deberta v3 large model trained on both metrics, and 1 deberta v3 base model trained on both metrics. The predictions from these models were then fed into individual LGBM models, that had been tuned to each LLM model using Optuna. Finally, the resultant predictions were ensembled using hill climbing.</p>
<p>We found that a variety of techniques helped with the LLM models’ performance when training them, including</p>
<ul>
<li>freezing the first N layers (N=6 or 18)</li>
<li>freezing the embeddings layer</li>
<li>using a max seq len of 1024 for training and a max seq len of 512 for inference</li>
<li>using step evaluation instead of epoch evaluation</li>
</ul>
<h1>What we could have done to get that sweet, sweet competition medal</h1>
<ul>
<li>We used <code>autocorrect-1.1.0</code> because of the whole licensing issue, but we should have looked into alternatives (since this was known to be buggy)</li>
<li>We could have tried ensembling the boosting models (e.g. using CatBoost / XGB / etc.)</li>
</ul>
<p>In general, we were really cautious about over-engineering our solution as we did not want to overfit to the public LB, especially given that there were not that many prompts in the training data. I do wish that we had a bigger distribution of prompts in our training data, perhaps one for each grade (grade 3-12).</p>
<h1>Code</h1>
<p>The notebooks we used for this competition can be found here:</p>
<ul>
<li><a href="https://www.kaggle.com/code/mcpenguin/inference-commonlit-student-summaries" target="_blank">Inference/Submission Notebook</a></li>
<li><a href="https://www.kaggle.com/code/mcpenguin/eda-commonlit-student-summaries" target="_blank">EDA Notebook</a></li>
<li><a href="https://www.kaggle.com/code/mcpenguin/utility-commonlit-student-summaries" target="_blank">Utility Notebook</a></li>
</ul> |
Google - Fast or Slow? Predict AI Model Runtime | 17th place solution: GNN + TransformerEncoderLayer w/ BCELoss. | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google - Fast or Slow? Predict AI Model Runtime
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks for hosting the interesting competition and congratulations to the winners!<br>
I was disappointed with the result, but I learned a lot.</p>
<p>I will only describe T0m's part.</p>
<h1>Layout</h1>
<ul>
<li>GNN(SAGEConv) + TransformerEncoderLayer</li>
<li>BCELoss</li>
<li>Predictions were made for all pairs, and the average of the predictions was used as a final guideline for sorting</li>
<li>For tests with graphs similar to train, Lightgbm was used to train and make predictions individually. Only node_config_feat was used in that case.<ul>
<li>This is not the cause of the shake down.</li></ul></li>
</ul>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F4705999%2F6520e81b8d8ee65c6b97dbf86ec6f5ce%2F2023-11-20%2014.48.00.png?generation=1700459296767515&alt=media" alt=""></p> |
Google - Fast or Slow? Predict AI Model Runtime | 10th place solution: Fast or Slow with Graph Transformers | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google - Fast or Slow? Predict AI Model Runtime
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h2>Summary</h2>
<ol>
<li>Intermediate config fusion</li>
<li>Graph transformer based solution</li>
<li>The best lucky model got 0.715 at private LB</li>
</ol>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1212661%2F04febf5cf4f2e15f365b040eeca533d8%2FModel.png?generation=1700325074557958&alt=media" alt=""></p>
<p>Congratulations to everyone on finishing the Fast or Slow competition and a big thanks to my amazing teammate <a href="https://www.kaggle.com/drhabib" target="_blank">@drhabib</a>. After the competition announcement, we were quite excited about seeing quite an interesting problem considered in it. However, a more detailed look has disclosed that the evaluation is performed only on a few graphs, i.e. the evaluation may be very unstable, and the risk of massive two-digit shakeup was significant (I'm curious if organizers performed a config-based split instead of graph one to avoid it?). So we considered this competition as a lottery and spent here only ~1.5 weeks while working on another competition.</p>
<h2>Details</h2>
<p>The approaches frequently considered in <a href="https://arxiv.org/pdf/2008.01040.pdf" target="_blank">the literature</a> are often based either on early or late fusion, i.e. injection of the configuration input is performed either at the very beginning or after collapsing the graph representation into a single embedding vector. However, the first one struggles with computational inefficiency because the computation must be performed for each configuration independently, which results in issues when thousands of configurations are considered with graphs containing tens of thousands of nodes. Meanwhile, the late fusion method loses the node-specific information at the moment of fusion after collapsing all nodes into a single embedding vector, i.e. the model may have a difficult time assigning a particular configuration to the corresponding nodes. </p>
<p><strong>The key idea of our approach is using the intermediate fusion</strong> that provides <strong>a good balance between computational efficiency and the expressiveness of the graph representation at the moment of fusion with configuration input</strong>. Therefore, we may perform training with 1000s configurations and huge (graphs up to 10^4 nodes) at the same time without a significant overhead. The use of a large number of configurations in each batch is critical for sequence ranking losses like listMLE. Meanwhile, at the moment of fusion, the nodes do not collapse into a single embedding vector, and a particular configuration may be directly associated with the nodes.</p>
<h3>Data</h3>
<p>Some of the node features describe the dimensions of data and kernels, and the product of the dimensions may be as large as 10^6. It is not really acceptable for models, and we simply took a log of input features + 3 to have comparable scales. In our experiments, we also tried to extract additional 29 graph features by customizing the organizer's code but got comparable results to just using the base 140 features.</p>
<p>In the layout pipeline instead of storing everything in the RAM, which may be quite large in comparison to tile data, we loaded only graph features (several GB only). Meanwhile, the configs were sampled in chunks of 1000 and loaded during training in a just-in-time manner instead of loading all configurations and using only 1000 of them. It has eliminated the data loading bottleneck.</p>
<h3>Model</h3>
<p><strong>Our model is based on transformers adapted to work effectively with graph data on consumer-grade hardware.</strong> </p>
<p>(Tile) Specifically, the tile model uses a sequence of residual graph blocks, enabling the creation of the local neighborhood representation, before global mixing with self-attention blocks followed by node information retrieval for particular configurations with cross-attention transformer blocks. With this setup training of the model reaching 0.97+ slowdown metric takes under 1 min. The LB of our tile-only model is 0.196 at private and 0.197 at public out of 0.2. The combination of different folds gave 0.196 and 0.198 at private and public LB.</p>
<p>(Layout) In the layout model, we must replace standard dot product attention with multi-head <a href="https://arxiv.org/pdf/2205.14756.pdf" target="_blank">linear attention</a>, which has linear complexity on the number of tokens. This attention enables a global receptive field within the entire 10^4 node graph while being relatively computationally cheap enabling running 12 block transformer on GPUs like 4090 or A6000. We replaced the ReLU nonlinearity in the attention with a more robust ELU+1 function. We do not use SLA in the tile model because of difficulties with masking padding tokens. The MLP part of the transformer blocks, meanwhile, is replaced with graph networks, which enable local mixing of features between neighboring nodes. We considered APPNP, SAGE, GAT, GPRGNN graph modules, with APPNP getting slightly better results. We considered both, unidirectional and bidirectional graph models. In one set of experiments, we also considered 4 and 6 layers <a href="https://arxiv.org/pdf/2301.09474.pdf" target="_blank">DiFFormer model</a>. The graph transformer is followed by pooling of configurable nodes, concatenation of them with the corresponding configurations and mixing with MLP, and finally by global pooling among all configurable nodes and prediction of the overall ranking. The models are illustrated in the plot at the beginning of the post.</p>
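<p>For readers unfamiliar with linear attention, here is a simplified single-head sketch of the ELU+1 feature-map trick (the blocks described above are multi-head and interleaved with graph modules, so this is only an illustration):</p>
<pre><code>import torch
import torch.nn.functional as F

def linear_attention(q, k, v, eps=1e-6):
    # q, k, v: (batch, n_nodes, dim); phi(x) = elu(x) + 1 keeps the kernel positive
    q = F.elu(q) + 1
    k = F.elu(k) + 1
    kv = torch.einsum("bnd,bne->bde", k, v)            # sum_j phi(k_j) v_j^T, linear in n
    z = 1.0 / (torch.einsum("bnd,bd->bn", q, k.sum(dim=1)) + eps)
    return torch.einsum("bnd,bde,bn->bne", q, kv, z)   # global mixing without the n^2 attention matrix
</code></pre>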
<p>With the intermediate fusion approach even relatively wide (n=256) and deep (up to 12 layers) models are very fast, taking just a few seconds per epoch on tile setup, a few minutes per epoch with XLA layouts, and 30 min to 1 hour for NLP layouts while using up to 1000 configurations simultaneously. The convergence takes ~5 epochs for tile models and 16 epochs for layout ones.</p>
<h3>Training</h3>
<p>We used standard for transformers AdamW optimizer, cosine annealing with lr=5e-4, and wd of 1e-2. The training is performed for 5 epochs for tile and for 16 epochs for layout models. In some runs, we used EMA average of weights and gradient accumulation. We used listMLE as the loss function, which was possible because we considered ~1000 configurations for a given graph in each batch.</p>
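<p>A compact sketch of a listMLE-style objective over one graph's sampled configurations (assuming the model outputs a score where higher means faster; an illustration rather than the exact implementation):</p>
<pre><code>import torch

def list_mle(scores, runtimes):
    # scores, runtimes: (n_configs,); lower runtime = better, higher score = predicted better
    order = torch.argsort(runtimes)                    # true ranking, fastest config first
    s = scores[order]
    # log sum over the remaining suffix of exp(s_j), via a reversed cumulative logsumexp
    log_denom = torch.flip(torch.logcumsumexp(torch.flip(s, [0]), dim=0), [0])
    return (log_denom - s).mean()
</code></pre>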
<p>Since XLA and NLP graphs exhibit a drastic difference in graph size we decided to split them into two separate sets of models trained independently. Specifically, XLA graphs are huge (VRAM hungry) and the number of configs is low. While NLP graphs are quite smaller with a huge number of configs (i.e. longer training). So we could train XLA models with 256 configs per graph on 24 GB VRAM GPUs and 256-1000 configs on 48 GB VRAM GPUs. While NLP models are easily trained with 1000 configs per graph taking less than 24 GB VRAM.</p>
<h3>Evaluation</h3>
<p><strong>Running experiments we quickly realized that evaluation on eval set is not stable, and nearly all considered setups achieved statistically insignificant differences in comparison to reruning the same setup with a different seed.</strong> It did not allow us to assess the real improvement of models under different setups and report the ablation study. </p>
<p>We set an 11-fold split scheme (val set + 10-fold split of train set) to enable sufficient precision of evaluation needed to distinguish small performance differences in the considered setups. However, given only ~1 week remaining for us in the competition and 8-16 hours per fold total training time for NLP layouts, we did not proceed much further with that and ran just a few folds for diversity in several experiments. </p>
<h3>Final model</h3>
<p>Our main strategy in this competition, since we could not reliably assess the performance, was running as many configurations as possible and mixing them together to mitigate the effect of the possible shakeup. We composed 10-20 different configurations for XLA and NLP models. This setup got 0.712 and 0.696 at public and private LB.<br>
In addition, we selected our best public LB submission, which consists of 2 APPNP XLA models and 2 DIFFormer NLP models. It reaches 0.721 at public and 0.703 at private LB. A very surprising result for us. <br>
Our best lucky private LB sub meanwhile, is 0.706 and 0.715 at public and private LB. In this sub we used APPNP models with gated units instead of residual sum.</p> |
Google - Fast or Slow? Predict AI Model Runtime | 11th place solution: LightGBM | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google - Fast or Slow? Predict AI Model Runtime
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to the host and congratulations to the winners.<br>
I'm glad to win my 4th gold medal in this competition.<br>
I did not use GNN, but used LightGBM with feature engineering.<br>
The reason I chose LightGBM is that I thought the runtime of the model on a single TPU is equal to the sum of the runtimes of the individual nodes, and that learning aggregated statistics from the graph would be sufficient to achieve a good score without learning the graph structure.</p>
<h2>Layout</h2>
<h3>Features</h3>
<p>I extracted the following features from the graph.</p>
<ul>
<li>Number of nodes of each type for conv/dot/reshape when using the following classification method<ul>
<li>Conv: if any of the input/output dimension counts, config values, or the 93rd~106th values of node_feat differ, it is treated as a different type</li>
<li>Dot: if any of the input/output dimension counts, config values, or the index features of the dot operation (extracted separately from the .pb file) differ, it is treated as a different type</li>
<li>Reshape: if any of the input/output dimension counts or config values differ, or the products of the elements over a given set of dimensions are inconsistent, it is treated as a different type</li>
<li>Number of times the element is copied<ul>
<li>Determine if a copy is needed by comparing the layout of each configurable node and the configurable nodes connected to it.</li></ul></li>
<li>Counting binning the size of the 1st/2nd minor dimension</li>
<li>Sum of padding generated at each configurable node</li>
<li>(only default) Occurrence rate of config for each node relative to the total data for that model<ul>
<li>Because genetic algorithms are used in the search, the more frequently a config pattern appears, the faster the config runtime will tend to be.</li></ul></li>
</ul>
<h3>Models</h3>
<p>I trained the pointwise and pairwise models.</p>
<ul>
<li>Pointwise LightGBM<ul>
<li>Target: Normalized rankings(0~1)</li>
<li>Loss: MAE Loss</li>
<li>Public LB: 0.715 (Private LB: 0.680)</li></ul></li>
<li>Pairwise LightGBM<ul>
<li>Loss: Binary</li>
<li>Randomly select the same number of pairs as the number of configurations and generate train/valid.</li>
<li>Inference on test data predicts for all pairs of 1000^2 and then sort configs using the sum of the predictions.</li>
<li>Input features are as follows: <ul>
<li>The features for one config of the pair</li>
<li>The difference between the features of the two configs</li></ul></li>
<li>Public LB: 0.728 (Private LB:0.701)</li></ul></li>
</ul>
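<p>A rough sketch of this pairwise setup (the per-graph feature matrix <code>X</code> and runtimes <code>y</code> are placeholders, and the sampling details differ from the actual pipeline):</p>
<pre><code>import numpy as np

def make_pairs(X, y, n_pairs):
    i = np.random.randint(0, len(X), n_pairs)
    j = np.random.randint(0, len(X), n_pairs)
    feats = np.hstack([X[i], X[i] - X[j]])     # one config's features + the difference
    labels = (y[j] > y[i]).astype(int)         # 1 if config i is the faster of the pair
    return feats, labels

def rank_configs(model, X):
    n = len(X)
    ii, jj = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    feats = np.hstack([X[ii.ravel()], X[ii.ravel()] - X[jj.ravel()]])
    scores = model.predict(feats).reshape(n, n).sum(axis=1)  # sum of "i beats j" predictions
    return np.argsort(-scores)                 # predicted fastest config first
</code></pre>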
<p>I used the same train/valid given by the host. (Failure to devise a better CV may have been the cause of the shake down)</p>
<h2>Tile</h2>
<h3>Features</h3>
<ul>
<li>Config features</li>
<li>Node features averaged over all nodes</li>
</ul>
<h3>Models</h3>
<p>I trained the pointwise LightGBM model.</p>
<ul>
<li>Target: Normalized rankings(0~1)</li>
<li>Loss: MAE Loss</li>
<li>Public LB(only tile): 0.198 (Private LB(only tile): 0.195)</li>
</ul> |
Google - Fast or Slow? Predict AI Model Runtime | 7th Place Solution Write-up | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google - Fast or Slow? Predict AI Model Runtime
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks for hosting this fascinating competition, and congratulations to the winners! I really learned a lot through this competition.</p>
<p><strong>Tile Part:</strong><br>
My approach to the Tile part was quite similar to the official code. I also check the public tile codes and find the validation score was already quite high , and ensembling them didn't lead to significant improvements, so I didn't invest too much time in this section.</p>
<p><strong>Layout Part:</strong><br>
For the Layout part, I primarily referred to the gst code <a href="url" target="_blank">https://github.com/kaidic/GST</a>. However, I encountered some challenges in making it work efficiently due to GPU memory constraints. Eventually, I used a sampling method to reduce GPU usage. </p>
<p><strong>Sampling Method:</strong><br>
The raw data consumed too much memory, so I decided to sample only 500 or 1000 configurations for each sample. This significantly reduced training time and GPU memory requirements.</p>
<p><strong>Training Strategy:</strong></p>
<ol>
<li>Training them all.</li>
<li>Training separately based on the edge & node shapes. While the test data didn't explicitly specify the model type, we could infer it based on the edge & node shapes.</li>
<li>Using different kinds of parameters, such as graph conv type, learning rate, batchsize, layers number and hidden size. </li>
</ol>
<p><strong>Ensembling</strong><br>
Ensembling above models proved to be effective in improving the results. Every time I trained a new model, I found that ensembling it with existing models contributed to score improvements.</p>
<p><strong>Regarding the Public Score:</strong><br>
I did identify the fact mentioned in a post <a href="https://www.kaggle.com/competitions/predict-ai-model-runtime/discussion/456083" target="_blank">https://www.kaggle.com/competitions/predict-ai-model-runtime/discussion/456083</a> but couldn't fully understand why it worked. I exercised extreme caution in utilizing it during the private phase since I considered it very risky.</p>
Google - Fast or Slow? Predict AI Model Runtime | 8th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google - Fast or Slow? Predict AI Model Runtime
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to the competition organizers for giving us such an interesting competition. Our team joined very late and didn't have time to deep-dive, but we found a simple solution that worked very well.</p>
<h1>Data Processing & Feature Engineering</h1>
<p>Tile: counts of configs, nodes and edges; mean, max, std, last of node_feat; mean, max, std of config_feat<br>
Layout: flatten node_config_feat -> remove columns with a single unique value -> remove duplicate columns</p>
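<p>A minimal sketch of that layout preprocessing (the array shape follows the competition data; the exact dropping logic is an assumption):</p>
<pre><code>import pandas as pd

def flatten_config_features(node_config_feat):
    # node_config_feat: (n_configs, n_config_nodes, 18) -> one row per configuration
    n_configs = node_config_feat.shape[0]
    df = pd.DataFrame(node_config_feat.reshape(n_configs, -1))
    df = df.loc[:, df.nunique() > 1]   # drop columns with a single unique value
    df = df.T.drop_duplicates().T      # drop duplicated columns
    return df
</code></pre>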
<h1>Train Data</h1>
<p>The key point of our layout solution is finding the most similar train data for each test graph. We can observe that some graphs have almost the same edge & node counts and can guess they are the same model type, differing only in size; the test data should then be the same model with a different batch size.<br>
e.g.</p>
<pre><code>train: small_bert_bert_en_uncased_L-12_H-768_A-12_batch_size_16_test
valid: small_bert_bert_en_uncased_L-12_H-768_A-12_batch_size_32_test
test(same edge&node number) should be the small_bert_bert_en_uncased_L-12_H-768_A-12_batch_size_64_test
</code></pre>
<p>We could not find a similar train graph for every test graph, but enough of them to get a good result. And it's very fast to iterate on.</p>
<h1>Model</h1>
<p>We min-max scaled the target, used xentropy as the loss function and LightGBM as the model. We didn't use a validation set and simply trained a fixed number of rounds for each model.</p>
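<p>A hedged sketch of that setup (the parameter values and the fixed round count below are placeholders, not the actual values used):</p>
<pre><code>import lightgbm as lgb

def fit_one_graph_model(X, runtimes, num_rounds=500):
    # min-max scale the runtime into [0, 1] so it is a valid xentropy target
    y = (runtimes - runtimes.min()) / (runtimes.max() - runtimes.min() + 1e-9)
    params = {"objective": "xentropy", "learning_rate": 0.05, "verbosity": -1}
    return lgb.train(params, lgb.Dataset(X, label=y), num_boost_round=num_rounds)
</code></pre>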
Google - Fast or Slow? Predict AI Model Runtime | 16th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google - Fast or Slow? Predict AI Model Runtime
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks for hosting the competition on a very interesting topic and congratulations to all the winners!</p>
<p>I'll share my solution briefly.</p>
<h1>Summary</h1>
<ul>
<li>3-hop subgraphs from configurable nodes</li>
<li>Drop duplicated configs</li>
<li>Listwise loss. In my experiments, listwise loss converged faster and performed better than pairwise loss.</li>
<li>3-layer <code>SAGEConv</code> with <code>LayerNorm</code> and residual connections. I implemented residual connections by simply adding the initial embedding to the output of each layer. Below is a code snippet.</li>
</ul>
<pre><code># Reconstructed from a garbled snippet; the keywords and concat dims were lost
# in extraction, so `dim=-1` and the control flow below are assumptions.
def forward(self, batch):
    node_opcode = batch.node_opcode.long()
    opcode_embeds = self.opcode_embedding(node_opcode)
    x = torch.concat([batch.node_feat, opcode_embeds, batch.node_config_feat * self.node_config_weights], dim=-1)
    x = self.lin1(x)
    x = self.norm1(x).relu()
    x_init = x
    for i in range(self.n_layers):
        x = self.convs[i](x, batch.edge_index)
        x = self.norms[i](x).relu()
        x = x_init + x  # residual connection back to the initial embedding
    x = torch.concat([global_mean_pool(x, batch.batch), global_max_pool(x, batch.batch)], dim=-1)
    x = self.dropout(x)
    x = self.readout(x)
    return x
</code></pre>
<ul>
<li><p>Models were trained separately for different subtypes</p></li>
<li><p>CV scores (provided train, valid splits)</p>
<ul>
<li>xla default: 0.37</li>
<li>xla random: 0.71</li>
<li>nlp default: 0.55</li>
<li>nlp random: 0.96</li>
<li>tile: 0.97</li></ul>
<p>With these CV scores, I got a score of 0.684 (public) and 0.688 (private). </p></li>
</ul> |
Google - Fast or Slow? Predict AI Model Runtime | 9th Place Solution: GNN with Compressed Graphs Using Dijkstra’s Algorithm | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google - Fast or Slow? Predict AI Model Runtime
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>We would like to express our sincere gratitude to our Kaggle teammates and to our hosts for providing the opportunity to participate in such an engaging competition. And thanks to my excellent teammate <a href="https://www.kaggle.com/yoichi7yamakawa" target="_blank">@yoichi7yamakawa</a>. </p>
<p>This was my 7th (and 4th with <a href="https://www.kaggle.com/yoichi7yamakawa" target="_blank">@yoichi7yamakawa</a>) gold medal this year. Wow!</p>
<p>The problem posed was of a type that we had not tackled much before, which made it extremely fascinating to delve into. Here below we outline our solution.</p>
<h1>On Tiles</h1>
<p>We referenced <a href="https://github.com/google-research-datasets/tpu_graphs" target="_blank">GitHub - google-research-datasets/tpu_graphs</a> and public notebooks. <br>
Since the score from the public notebook was already satisfactory, the successful learning of the layout part was the key to this competition.</p>
<h1>On Layout</h1>
<p>We mainly used the implementation of <a href="https://github.com/kaidic/GST/tree/main" target="_blank">GitHub - kaidic/GST</a> as a reference and made modifications that we felt were necessary to improve the score.</p>
<h2>Graph Compression</h2>
<p>During the learning of the layout, the configurable data was limited. Therefore, our implementation only extracted nodes influenced by this and their connected components. <br>
By applying this, our learning efficiency dramatically improved, significantly contributing to the improvement of the score.</p>
<p>Specifically, the following procedures were performed for each data set</p>
<ol>
<li>An undirected graph was constructed using edge_index, and Dijkstra’s algorithm was applied starting from the node with the largest index <code>s</code> (i.e., <code>s=data["node_feat"].shape[0]-1</code>). We chose <code>s</code> as the starting point because many graphs were trees with <code>s</code> as the parent.</li>
<li>The shortest path from <code>s</code> to all the nodes in <code>node_config_ids</code> was calculated, and the union set of nodes and edges in the path was considered as a compressed graph.</li>
</ol>
<p>This compression method reduced the average number of nodes for each layout data from 13894 to 1736 for xla and from 5711 to 570 for nlp. For features of nodes not included in the compressed graph, we simply ignored them completely.</p>
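<p>A small sketch of this compression with networkx (the original implementation may differ; with unit edge weights Dijkstra is equivalent to BFS here):</p>
<pre><code>import networkx as nx

def compress_graph(edge_index, n_nodes, node_config_ids):
    g = nx.Graph()
    g.add_nodes_from(range(n_nodes))
    g.add_edges_from(map(tuple, edge_index))        # treat edges as undirected
    s = n_nodes - 1                                 # start from the largest-index node
    _, paths = nx.single_source_dijkstra(g, s)
    keep_nodes, keep_edges = set(), set()
    for t in node_config_ids:
        path = paths.get(int(t), [])
        keep_nodes.update(path)
        keep_edges.update(zip(path[:-1], path[1:])) # edges along the shortest path
    return keep_nodes, keep_edges
</code></pre>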
<h2>Preprocessing (log-transform)</h2>
<p>At times training could not proceed because the input features differed by orders of magnitude across dimensions. <br>
To cope with this, we applied a log transformation to the node features, after which training proceeded smoothly.</p>
<h2>Training Strategy</h2>
<ul>
<li>512 configs were sampled for each graph per iteration. The batch_size was 2 or 4.</li>
<li>Trained separate models for the four data types xla-random, xla-default, nlp-random, and nlp-default for 1000 epochs.</li>
<li>The pairwise hinge loss was used (same as original implementation).</li>
</ul>
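<p>For reference, a compact sketch of such a pairwise hinge objective over the sampled configs (the margin and sign conventions are assumptions; the reference implementation in the tpu_graphs repository is the authoritative version):</p>
<pre><code>import torch

def pairwise_hinge_loss(preds, runtimes, margin=1.0):
    # preds, runtimes: (n_configs,); a slower config should receive a larger predicted score
    diff = preds.unsqueeze(1) - preds.unsqueeze(0)                      # s_i - s_j for all pairs
    i_slower = (runtimes.unsqueeze(1) > runtimes.unsqueeze(0)).float()  # 1 where runtime_i > runtime_j
    loss = i_slower * torch.relu(margin - diff)
    return loss.sum() / i_slower.sum().clamp(min=1)
</code></pre>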
<h3>CV score</h3>
<ul>
<li>xla random: 0.7</li>
<li>xla default: 0.33</li>
<li>nlp random: 0.96</li>
<li>nlp default: 0.51</li>
</ul>
<h2>Data Specialized for Specific Model Types</h2>
<p>Upon examining the data, we found that architectures (such as BERT, U-Net, ResNet…) could be inferred from IDs or node numbers (or op_codes for test data).<br>
Leveraging this information, we performed learning using data related only to each architecture, which substantially contributed to improving the private score. In addition to the models trained on the entire dataset, we trained models focused on resnet, efficientnet, or bert. Proper EDA is indeed crucial.</p>
<h2>Multiple GNN Architectures</h2>
<p>The TransformerConv and GATConv as Graph conv layers had minimal impact but were useful for ensemble purposes.</p>
<h3>Ensemble</h3>
<p>Although we didn't have sufficient time for meticulous weight tuning, it certainly contributed to steady score improvement (+0.01 - 0.02).</p>
<h3>What Didn't Work</h3>
<ul>
<li>MSELoss as aux-loss</li>
<li>Learning using all nodes</li>
</ul> |
CommonLit - Evaluate Student Summaries | Lessons learned and experiments. DeBERTaV3-base solutions + extra features. | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit - Evaluate Student Summaries
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to the host and Kaggle for an interesting challenge. Also thanks for the three submission choices, which should be a standard from here on.</p>
<p>Congrats to all the top solutions; they will be a valuable source of information. With only a quick look I can see that large models and long max lengths seem to boost the score.</p>
<p>As many times before, I took an experimentation-first approach; the knowledge one gains can't be shaken up, even if one’s leaderboard place was 😉</p>
<p>I experimented with DeBERTaV3-base with 512 max length, which was the best option given that I used T4s; my experimenting never really ended, so those models were used for different solutions. Sadly I didn’t pick my best-CV solution (CV .460, which would have given a final LB place around #80); there were no ML errors behind that choice though.</p>
<p><strong>Short summary of the experimenting and solutions</strong></p>
<p><strong>Frameworks for training</strong></p>
<ul>
<li>Huggingface training and inference code. Only multi-target training.</li>
<li>Past top solutions code – feedback-prize-english-language-learning 1st place <a href="https://www.kaggle.com/competitions/feedback-prize-english-language-learning/discussion/369457" target="_blank">https://www.kaggle.com/competitions/feedback-prize-english-language-learning/discussion/369457</a>. Multi-target training with mixout augmentation etc.</li>
<li>AutoGluon MultiModalPredictor and TabularPredictor – for both model and later tabular training. <a href="https://auto.gluon.ai/stable/index.html" target="_blank">https://auto.gluon.ai/stable/index.html</a>. Single target training.</li>
</ul>
<p><strong>Input to transformers</strong></p>
<p>Different combinations for different models, but mainly text_sum + Prompt Question. Sometimes with the original text and sometimes corrected; for the autocorrection I used NeuSpell: A Neural Spelling Correction Toolkit <a href="https://github.com/neuspell/neuspell" target="_blank">https://github.com/neuspell/neuspell</a>, which worked well and fast with CUDA and batch support.<br>
The exceptions to text-only inputs were the models trained with the AutoGluon MultiModalPredictor framework. In that framework one can experiment with different fusion heads for the inputs, building a complete end-to-end LM model.</p>
<p><strong>Features used for final tabular prediction</strong></p>
<ul>
<li>The public created 9 features.</li>
<li>Stacking: Predictions from every model and its ensemble value.</li>
<li>6 feature [cohesion, syntax, vocabulary, phraseology, grammar, conventions] from the Feedback Prize - English Language Learning competition and used 1st place code for it <a href="https://www.kaggle.com/competitions/feedback-prize-english-language-learning/discussion/369457" target="_blank">https://www.kaggle.com/competitions/feedback-prize-english-language-learning/discussion/369457</a> Even though grammar etc. shouldn’t take in count for the true human score it helped the solution.</li>
<li>2 features, reading level, for the Prompt Question and Prompt Text from the CommonLit Readability Prize Competition and the 2nd place code was used <a href="https://www.kaggle.com/competitions/commonlitreadabilityprize/discussion/258328" target="_blank">https://www.kaggle.com/competitions/commonlitreadabilityprize/discussion/258328</a></li>
<li>4 features, value of similarity between the texts. <a href="https://huggingface.co/sentence-transformers/all-MiniLM-L12-v2" target="_blank">https://huggingface.co/sentence-transformers/all-MiniLM-L12-v2</a> was used for it.</li>
</ul>
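<p>A small sketch of how such similarity features can be computed with that model (which text pairs were used is only loosely specified above, so the pairing shown is an assumption):</p>
<pre><code>from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence-transformers/all-MiniLM-L12-v2")

def cosine_similarity_feature(text_a, text_b):
    emb = model.encode([text_a, text_b], convert_to_tensor=True, normalize_embeddings=True)
    return float(util.cos_sim(emb[0], emb[1]))

# e.g. cosine_similarity_feature(summary, prompt_text), (summary, prompt_question), ...
</code></pre>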
<p><strong>Model for tabular prediction</strong></p>
<p>As said before, the SOTA framework AG TabularPredictor was used, with some changes from the standard settings. I used a fixed 4-fold validation with the 4 prompt groups as folds, and fold_fitting_strategy = sequential_local as it always works without issue. Also the new feature “experimental_zeroshot_hpo_hybrid” was used to quickly get the params for the dataset in a zero-shot manner, thanks. Great work <a href="https://www.kaggle.com/innixma" target="_blank">@innixma</a> 👍<br>
Then a 3-level stacked ensemble of ~100 trained models did the final prediction. I also manually fed the first target's prediction into the next target's training, a.k.a. auto-regressive training.</p>
<p><strong>Other interesting experimenting</strong></p>
<ul>
<li>Pseudo labeling at one level worked well; taking the 4-fold GroupKFold predictions into a 24-fold StratifiedGroupKFold gave a boost to the score.</li>
<li>Some new findings and own-made code along the way:<br>
o A custom-made optimizer with a cosine scheduler built in, plus separate learning rates per target for multi-target training. I also used MADGRAD instead of AdamW as it worked better in my testing.<br>
o A multi-target feature for a torch model that saves the best per-target states inside the model and ensembles across those target states, smoothing each target's best loss closer together. In this way I could get a DeBERTa-base model CV close to DeBERTa-large. I didn't include it in a final solution as it was close to the deadline, but a late post-deadline submission showed the Base model with this feature had a better private score than the Large model without it. More testing needed of course.</li>
</ul>
<p>Next in line somewhere for testing would have been larger models and more text lengths. Next time 😊</p>
<p>That's it! Happy Kaggling!</p> |
Google - Fast or Slow? Predict AI Model Runtime | 33rd Solution Writeup and Discussion on GST | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google - Fast or Slow? Predict AI Model Runtime
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, thanks Kaggle and the competition host for hosting this exiting competition and congrats to all the winners. I would like to share my solution (though not that good) mainly from the perspective of <a href="https://github.com/kaidic/GST" target="_blank"><strong>G</strong>raph <strong>S</strong>egment <strong>T</strong>raining (<strong>GST</strong>)</a>. The code is released <a href="https://github.com/JiangJiaWei1103/Google-Fast-or-Slow" target="_blank">here</a>.</p>
<h2>Overview</h2>
<ul>
<li>Data cleaning and preprocessing<ul>
<li>Add new features following instructions <a href="https://github.com/google-research-datasets/tpu_graphs/tree/main#graph-feature-extraction" target="_blank">here</a>.</li>
<li>Drop constant and quasi-constant features.</li>
<li>Label encode features related to <code>shape_element_type_is_X</code>.</li>
<li>Log transform features with max value greater than 20.</li></ul></li>
<li>Model architecture<ul>
<li>Train all models in the <strong>early-join</strong> manner (<em>i.e.,</em> fuse node and config features before getting the whole graph context).</li>
<li>Use <code>SAGEConv</code> as the GNN block.</li></ul></li>
<li>Training strategy<ul>
<li>Use <strong>GST</strong> (with history embedding table and stale embedding dropout) to train <em>layout</em> models in a collection-specific manner (<em>i.e.,</em> one model per collection).</li>
<li>Sample a subset of configurations to train models per iteration.</li></ul></li>
<li>Experimental Setup<ul>
<li>Loss criterion: <code>PairWiseHingeLoss</code> for <em>layout</em> and <code>ListMLE</code> for <em>tile</em></li>
<li>Optimizer: <code>AdamW</code> with base learning rate <code>1e-3</code> (I decrease lr when increasing #epochs)</li>
<li>Learning rate scheduler: Cosine schedule without warmup</li>
<li>Checkpoint: Always pick the model at the last epoch</li></ul></li>
</ul>
<h2>Data Cleaning and Preprocessing</h2>
<p>I process the data by a simple four-stage workflow. Firstly, I find out that some of the features are constant among all the datasets, which can be viewed as the redundant dimensions and dropped directly. Also, those with constant ratio like above 0.999 (<em>i.e.,</em> quasi-constant) are thrown away. Then, I label encode the remaining dimensions related to <code>shape_element_type_is_X</code>, which can be represented with a dense embedding. Finally, considering features can span a wide value range (also, some outliers exist), I simply use <code>np.log1p</code> to log transform features with max value greater than 20.<br>
After processing, the node feature dimension drops to 116 and 50 (89 and 33 without new features added) for <em>xla</em> and <em>nlp</em>, respectively.</p>
<h2>CV Scheme</h2>
<p>Considering there are only ~4 graphs and 8 graphs for <em>xla</em> and <em>nlp</em> evaluated on public LB, I try to enlarge the validation set by splitting train+val stratified on runtime, which can somewhat balance the intrinsic graph properties (I explore relationship between graph stats and runtime in <a href="https://www.kaggle.com/code/abaojiang/google-fast-or-slow-detailed-eda" target="_blank">this notebook</a>). However, I don't think it's much different from just using the official train-val splitting.</p>
<h2>Model Architecture</h2>
<p><a href="https://postimg.cc/mtC0KZzX" target="_blank"><img src="https://i.postimg.cc/dtxwkLqY/Screenshot-2023-11-20-at-15-10-31.png" alt="Screenshot-2023-11-20-at-15-10-31.png"></a><br>
The figure above illustrates the overview of model architecture, where \(d_n \) and \(d_c \) denote the node and config feature dimensions. And, \(L \) is the number of graph convolution layers.<br>
Since my first submission on 22nd, Oct, I use early-join to fuse the node and config features. After experimenting with different GNN blocks (<em>e.g.,</em> <code>GATConv</code>, <code>GATv2Conv</code>, <code>GINConv</code>), <code>SAGEConv</code> always outperforms, so I stick to it till the end. Also \(L \) is always set to 3. To be honest, there's no fancy design in my model architecture. Hence, I want to talk more about the training strategy.</p>
<h2>Training Strategy - <strong>G</strong>raph <strong>S</strong>egment <strong>T</strong>raining (GST)</h2>
<p>Considering the memory limitation, I quickly decide to choose off-the-shelf <strong>GST</strong> as my training framework. As there exists some unsolved issues in the official implementation of <strong>GST</strong>, I rewrite the pipeline without <a href="https://github.com/rampasek/GraphGPS" target="_blank">GraphGPS</a>.<br>
The main concern with <strong>GST</strong> is that the training loss increases as the training process progresses, but validation performance still improves over time. After fixing the \(\eta \), the weight for each graph segment, for final sum pooling, the training loss decreases normally as shown below (special thanks to <a href="https://www.kaggle.com/dsfhe49854" target="_blank">@dsfhe49854</a> 's analysis <a href="https://www.kaggle.com/competitions/predict-ai-model-runtime/discussion/448367#2497447" target="_blank">here</a>),<br>
<a href="https://postimg.cc/jCyB81QT" target="_blank"><img src="https://i.postimg.cc/Vsh1NQJJ/Screenshot-2023-11-20-at-17-50-00-Weights-Biases.png" alt="Screenshot-2023-11-20-at-17-50-00-Weights-Biases.png"></a><br>
Let's see how \(\eta \) is derived in the original paper. Let \(n \) be the number of segments for one graph and \(k \) be the number of segments to be trained per iteration. Also, select \(p \) as the dropout ratio of <strong>stale embedding dropout</strong>. Assume we sample only one segment for training per iteration(<em>i.e.,</em> \(k = 1\)), as described in the paper. The weight \(\alpha \) of the trained segment can be derived as follows, <br>
$$<br>
(n-k)p + k\alpha = n<br>
$$<br>
$$<br>
\alpha = (1-p)\frac{n}{k} + p<br>
$$</p>
<p>The logic behind the scene is that the final runtime estimation is the <strong>sum pooling</strong> of runtimes of all segments. Considering some segments are dropped with probability \(p \), we need to increase the weight of the trained segment for compensation. However, the problem is that most of the entries in historical embedding table are zeros. Therefore, in early epochs, the objective can be approximated as,<br>
$$<br>
\hat{y} = \alpha \hat{y}_{i} ,<br>
$$</p>
<p>where \(\hat{y} \) is the predicting runtime of the current graph and \(\hat{y}_{i} \) is the predicting runtime of the segment \(i \) of the current graph. What's interesting is that I observe the <strong>unfixed \(\eta \)</strong> always leads to better generalizability compared with the fixed one. Also, if the model is trained with sufficient number of iterations, the training loss actually goes downward (the red line turns the direction at ~100 epochs).</p>
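<p>As a quick sanity check of the weighting formula with made-up numbers: for a graph cut into \(n = 10 \) segments, training \(k = 1 \) segment per iteration with stale-embedding dropout \(p = 0.5 \), the trained segment is up-weighted to \(\alpha = (1-0.5)\cdot\frac{10}{1} + 0.5 = 5.5 \), and indeed \((10-1)\cdot 0.5 + 1\cdot 5.5 = 10 \) recovers the full-graph total.</p>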
<h2>Experimental Results</h2>
<p>Following table shows the local CV scores of my final submission.</p>
<table>
<thead>
<tr>
<th>Collection</th>
<th>CV</th>
</tr>
</thead>
<tbody>
<tr>
<td><em>tile</em></td>
<td>0.9551</td>
</tr>
<tr>
<td><em>xla-default</em></td>
<td>0.3188</td>
</tr>
<tr>
<td><em>xla-random</em></td>
<td>0.5569</td>
</tr>
<tr>
<td><em>nlp-default</em></td>
<td>0.5053</td>
</tr>
<tr>
<td><em>nlp-random</em></td>
<td>0.8845</td>
</tr>
</tbody>
</table>
<h2>What Didn't Work for Me</h2>
<ul>
<li>Use other GNN blocks (<em>e.g.,</em> <code>GATConv</code>, <code>GATv2Conv</code>, <code>GINConv</code>)</li>
<li>Retrain models on the whole dataset</li>
<li>Finetune <em>default</em> using the pretrained weights from <em>random</em><ul>
<li>Freezing different parts of network makes no difference.</li></ul></li>
<li>Segment graphs with other strategies (<em>e.g.,</em> Metis)</li>
</ul>
<h2>Conclusion</h2>
<p>It's not good to stick to only one method (<strong>GST</strong>) for the whole implementation; I should have explored other potential solutions, as the amazing write-ups I've digested so far did. Though the result isn't that promising this time, I will keep progressing and learning from the top tiers. This journey never stops! Thanks for your patience!</p>
Bengali.AI Speech Recognition | 40th Place Solution without External Dataset! | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Speech Recognition
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>We achieved 40th Place (Silver Medal). Congratulations to all of Team Members from <strong>BengalX</strong>: <a href="https://www.kaggle.com/iftekharamin" target="_blank">@iftekharamin</a> , <a href="https://www.kaggle.com/mdfahimreshm" target="_blank">@mdfahimreshm</a> , <a href="https://www.kaggle.com/fahimshahriarkhan" target="_blank">@fahimshahriarkhan</a> 🎉</p>
<p><em>I would like to acknowledge my Team Lead <a href="https://www.kaggle.com/iftekharamin" target="_blank">@iftekharamin</a> Bhaiya for giving opportunity to do this competition and giving proper guideline for achieving silver medal.</em></p>
<p><strong>Dataset:</strong> We subset the dataset based on the train metadata published by <a href="https://www.kaggle.com/imtiazprio" target="_blank">@imtiazprio</a>. We filtered for clean samples using the train meta features (yellowking_preds & google_preds), keeping examples where the two predictions were at least 90% similar by WER. After this we further filtered for mos_pred > 2, which left around 100k+ datapoints.</p>
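<p>A hedged sketch of this filtering step (the file name and exact thresholds are assumptions; only the column names come from the shared metadata):</p>
<pre><code>import pandas as pd
from jiwer import wer  # word error rate between the two pseudo-label columns

meta = pd.read_csv("train_metadata.csv")  # hypothetical file name
meta["yg_wer"] = [wer(y, g) for y, g in zip(meta["yellowking_preds"], meta["google_preds"])]
clean = meta[(meta["yg_wer"] <= 0.10) & (meta["mos_pred"] > 2.0)]
</code></pre>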
<p><strong>Data Cleaning:</strong> We filtered out audios shorter than 1 second as outliers, so they would not mislead model training.</p>
<p><strong>Augmentations:</strong> We used audio augmentations, e.g., noise, background sound mixing, speed up/down, SpecAugment, and resampling to different sampling rates. </p>
<p><strong>STT Modeling:</strong> We used the pretrained Indic wav2vec2 model and fine-tuned it on the augmented subset. </p>
<p><strong>Post-processing - LM Decode:</strong> We used arijit's pretrained Indic KenLM.</p>
<p><strong>Post-processing - Punctuation:</strong> We used the xashru/punctuation-restoration repo with an xlm-roberta-base model and fine-tuned it on this competition's dataset for punctuation restoration. We only consider 4 punctuation classes: {'O': 0, 'COMMA': 1, 'PERIOD': 2, 'QUESTION': 3} </p>
<p><strong>Post-processing - Error Correction:</strong> We used this repo's solution for further error correction: <a href="https://github.com/Tawkat/Bengali-Spell-Checker-and-Auto-Correction-Suggestion-for-MS-Word" target="_blank">https://github.com/Tawkat/Bengali-Spell-Checker-and-Auto-Correction-Suggestion-for-MS-Word</a></p>
RSNA 2023 Abdominal Trauma Detection | 16th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA 2023 Abdominal Trauma Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to the organizers for such a great competition. Unfortunately, I couldn’t take part last year in a similar one due to the lack of hardware. However, this year is different, and I’m really happy with the results that we managed to achieve. </p>
<h2>Problem</h2>
<p>In this competition we were tasked with predicting the intensity of injuries for different abdominal organs. The available data consists of big CT images (in DICOM format) with partial supplemental segmentation annotations (in NIFTI format).</p>
<p>In ML terms, it all comes down to 3D segmentation / classification models, a lack of data / annotations, and a heavily penalizing metric.</p>
<h2>Summary</h2>
<ul>
<li><strong>U-Net Bi-Conv-LSTM</strong> segmentation for organs (kidney, liver, spleen, bowel) and separate model for Extravasation based on boxes from here (thanks a lot!)</li>
<li><strong>Resnet 3D CSN</strong> for 3D crops separated into 2 stages: (kidney, liver, spleen) and (bowel)</li>
</ul>
<h2>3D Semantic Segmentation</h2>
<p>The semantic segmentation part to identify organs is quite straightforward (compared to the later classification) and could be effectively performed by <strong>U-Net Bi-Conv-LSTM</strong> with Effnet_v2_b0 backbone and <strong>CE-Dice-Focal</strong> loss. </p>
<p>To make things efficient, we train semantic segmentation on <strong>96x256x256 crops</strong> and predict the whole image using crops of size 96x256x256 with <strong>overlaps of 48x256x256</strong> (later on overlaps were removed to save time and space in inference).</p>
<p>To alleviate overfitting (it’s not that critical, especially compared to classification), we added <strong>geometric augmentations</strong> like ShiftScaleRotate, RandomBrightnessContrast, Flips, GridDistortion, ElasticTransform. </p>
<p>On average, about 10% of total volume was dedicated to kidney, liver, spleen, and about 20% - to bowel.</p>
<p>The <strong>macro dice score per image</strong> is around <strong>0.96</strong>. </p>
<p><strong>Training time</strong> - about <strong>12 hours</strong>.</p>
<h2>3D classification</h2>
<p>Based on extracted crops from segmentation masks, we train 2 models: one for kidney, liver, spleen, one for bowel. </p>
<p>The <strong>CSN models</strong> from mmaction proved to be very fast and accurate. To figure out how to deal with the temporal dimension, several possibilities were explored, but in the end basic interpolation (<strong>3D resize</strong>) was used to convert crops to <strong>96x256x256 resolution</strong>.</p>
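<p>A minimal sketch of that 3D resize, assuming PyTorch (names are illustrative):</p>
<pre><code>import torch
import torch.nn.functional as F

def resize_crop(vol: torch.Tensor, size=(96, 256, 256)) -> torch.Tensor:
    # vol: (D, H, W) organ crop -> trilinear interpolation to a fixed 3D resolution
    vol = vol[None, None].float()  # (1, 1, D, H, W)
    return F.interpolate(vol, size=size, mode="trilinear", align_corners=False)[0, 0]
</code></pre>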
<p>To battle overfitting (which is really severe even with CSN), <strong>intensive geometric augmentations</strong> were used, including ShiftScaleRotate, RandomBrightnessContrast, and 4 different types of Blurs.</p>
<p>The mean competition loss across all folds is <strong>0.401</strong> for kidney, liver, spleen and 0<strong>.156</strong> for bowel.</p>
<p><strong>Training time</strong> - about <strong>4 hours</strong> per kidney, liver, spleen fold; and <strong>8-10 hours</strong> per bowel fold. </p>
<h2>3D classification for Extravasation</h2>
<p>In order to make predictions for Extravasation, a segmentation model was utilized. The motivation is simple: if the semantic segmentation model predicts anything, there is Extravasation, and that should be reflected in the probabilities. </p>
<p>To make the <strong>transition from semantic segmentation to classification</strong>, the following trick was used:</p>
<ul>
<li>Turn 3D mask to 1D </li>
<li>Sort probabilities </li>
<li>Take top_n probabilities</li>
<li>Find mean values of them. That’s the probability for positive Extravasation.</li>
</ul>
<p>In pseudo-code:<br>
<code>cls_pred = np.mean(np.sort(np.ravel(sigmoid(mask)))[::-1][:top_n])</code></p>
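<p>A runnable version of that pseudo-code (the <code>top_n</code> value here is arbitrary and just for illustration):</p>
<pre><code>import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def extravasation_prob(mask_logits: np.ndarray, top_n: int = 1000) -> float:
    # flatten the 3D mask, sort probabilities in descending order and average the top_n values
    probs = np.sort(np.ravel(sigmoid(mask_logits)))[::-1]
    return float(np.mean(probs[:top_n]))
</code></pre>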
<p>The mean log loss across all folds is <strong>0.543</strong> for extravasation, and <strong>0.501</strong> for any_injury. </p>
<p><strong>Training time</strong> - about <strong>4 hours</strong> per fold. </p>
<h2>Validation</h2>
<p><strong>StratifiedGroupKFold</strong> (stratification based on classification labels, grouping based on patients) with 4 folds. </p>
<p>Mean log loss across all folds and all groups (kidney, liver, spleen, bowel, extravasation, any_injury), (which is the <strong>competition metric</strong>) is <strong>0.400</strong>. </p>
<h2>Additional tricks</h2>
<ul>
<li>No post-processing.</li>
<li>SWA on final checkpoints.</li>
<li>EMA during training.</li>
<li>Temporal shifting in classification to battle overfitting even more.</li>
<li>Gradient checkpointing to have bigger batches (important for classification).</li>
<li>memmap (uint8) using numpy to speed-up data reading and crop extraction. </li>
<li>2 final subs: one minimizing competition loss, one maximizing AUC</li>
</ul>
<h2>Things that didn’t work</h2>
<ul>
<li>Samplers</li>
<li>Heavier models (2+1D or Uniformer)</li>
</ul>
<h2>Final notes</h2>
<p>During the final 2 days of the competition, we managed to improve the models for kidney, liver, and spleen from <strong>0.4</strong> to roughly <strong>0.38</strong>, which brought the overall loss from <strong>0.4</strong> to <strong>0.39</strong>, but made some errors in the submission process, which made them useless. The trick is simple - increase batch size. Usually we train with the batch of 14, but could increase it to 24 (with the help of A100 cards). </p>
<p>The total <strong>training time</strong> (including all 4 folds for each stage) is around <strong>80 hours</strong> using a single RTX A6000 Ada.</p>
<p>The total <strong>submission time</strong> is around <strong>8-9 hours</strong> using a single Tesla P100. </p>
<p>We believe this solution could be pushed much further. However, we made the first sub (that isn’t sample submission or just a bunch of static predictions) 2 days before the competition ended, so that also played some role.</p>
<p>P.S. Man that sucked to mess up the models for 0.39 :) </p> |
Google - Fast or Slow? Predict AI Model Runtime | 201st Place Solution for the Google - Fast or Slow? Predict AI Model Runtime | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google - Fast or Slow? Predict AI Model Runtime
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h1>1. Introduction</h1>
<p>I am happy to be part of Google - Fast or Slow? Predict AI Model Runtime. I want to express my appreciation to the organizers, sponsors, and Kaggle staff for their efforts, and I hope everyone has the best time. This competition and other participants provided me with a lot of knowledge.</p>
<p>I am grateful to MIHU for providing the public notebook <a href="https://www.kaggle.com/code/chenboluo/simple-prediction-for-five-datasets?scriptVersionId=145767932" target="_blank">https://www.kaggle.com/code/chenboluo/simple-prediction-for-five-datasets?scriptVersionId=145767932</a></p>
<h1>2. Context</h1>
<ul>
<li>Business context: <a href="https://www.kaggle.com/competitions/predict-ai-model-runtime" target="_blank">https://www.kaggle.com/competitions/predict-ai-model-runtime</a></li>
<li>Data context: <a href="https://www.kaggle.com/competitions/predict-ai-model-runtime/data" target="_blank">https://www.kaggle.com/competitions/predict-ai-model-runtime/data</a></li>
</ul>
<h1>3. Overview of the approach</h1>
<p>The solution was a copy of the public notebook (Public/Private LB of 0.39199/0.27376), <br>
with the dataset created as the output of that public notebook added as an input (Public/Private LB of 0.39199/0.27376).</p>
<p>Data preprocessing: adjacency matrix construction and a virtual first node.</p>
<p>The algorithm employed: a modified BERT.</p>
<ul>
<li>Inference: <br>
(code copied from MIHU's public notebook <a href="https://www.kaggle.com/code/chenboluo/simple-prediction-for-five-datasets?scriptVersionId=145767932" target="_blank">https://www.kaggle.com/code/chenboluo/simple-prediction-for-five-datasets?scriptVersionId=145767932</a>)<br>
The validation strategy: split into train and valid datasets.</li>
</ul>
<h1>4. Method modified Bert.</h1>
<p>Solution: </p>
<table>
<thead>
<tr>
<th>notebook</th>
<th>score private</th>
</tr>
</thead>
<tbody>
<tr>
<td>MIHU public <a href="https://www.kaggle.com/code/liudacheldieva/simple-prediction-for-five-datasets-2d0336?scriptVersionId=151176695" target="_blank">https://www.kaggle.com/code/liudacheldieva/simple-prediction-for-five-datasets-2d0336?scriptVersionId=151176695</a></td>
<td>0.27376</td>
</tr>
<tr>
<td>Change: add the output of the public notebook as an input, and copy that input to the output.</td>
<td>0.27376</td>
</tr>
</tbody>
</table>
<p><br></p>
<h1>5. Final result</h1>
<table>
<thead>
<tr>
<th>public LB</th>
<th>private LB</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>0.39199</strong></td>
<td>0.27376</td>
</tr>
<tr>
<td>0.13262</td>
<td>0.15293</td>
</tr>
<tr>
<td>0.12882</td>
<td>0.14862</td>
</tr>
<tr>
<td>0.14798</td>
<td>0.12904</td>
</tr>
<tr>
<td>0.14798</td>
<td>0.12904</td>
</tr>
<tr>
<td>0.15345</td>
<td>0.13017</td>
</tr>
</tbody>
</table>
<h1>6. Sources</h1>
<ul>
<li><a href="https://www.kaggle.com/code/chenboluo/simple-prediction-for-five-datasets?scriptVersionId=145767932" target="_blank">https://www.kaggle.com/code/chenboluo/simple-prediction-for-five-datasets?scriptVersionId=145767932</a></li>
<li><a href="https://www.kaggle.com/datasets/liudacheldieva/gf-sub-last" target="_blank">https://www.kaggle.com/datasets/liudacheldieva/gf-sub-last</a></li>
</ul> |
Google - Fast or Slow? Predict AI Model Runtime | 6th solution: Node-level Instance Norm + Residual SageConv on 5-hop-neighbour Subgraph | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google - Fast or Slow? Predict AI Model Runtime
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h1><strong>Training and inference code can be downloaded from:</strong></h1>
<p><a href="https://github.com/hengck23/solution-predict-ai-model-runtime/" target="_blank">https://github.com/hengck23/solution-predict-ai-model-runtime/</a> <br>
</p>
<h2>1. Layout runtime prediction</h2>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F113660%2Fb5c1c8af2f166b73061177ae313a2c6b%2FSelection_999(4192).png?generation=1701482437470415&alt=media" alt=""></p>
<p>main problem:</p>
<ul>
<li>We have a very large graph as input. How do we design a learning model and algorithm that can fit into GPU memory? </li>
</ul>
<p>summary of approach :</p>
<ul>
<li>Instead of using the whole graph, we can reduce it by considering only the 5-hop neighbours of the nodes marked as "config id". We call this the 5-hop-neighbour subgraph. We think this is reasonable because we are comparing the relative ranking of 2 graphs, so we just need to input the "difference nodes" (instead of the whole graph) to the neural net.</li>
</ul>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F113660%2F5e4c66d69dcf5ea590ad619c37dc2179%2FSelection_999(4193).png?generation=1701483096930612&alt=media" alt=""></p>
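<p>One way to build such a subgraph with PyTorch Geometric (a sketch; the actual implementation may differ):</p>
<pre><code>from torch_geometric.utils import k_hop_subgraph

def extract_5hop(x, edge_index, config_nodes, num_hops=5):
    # keep only the nodes within 5 hops of any configurable ("config id") node
    subset, sub_edge_index, _, _ = k_hop_subgraph(
        config_nodes, num_hops, edge_index, relabel_nodes=True)
    return x[subset], sub_edge_index
</code></pre>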
<ul>
<li>Using the reduced subgraph, we can sample 32 to 100 configurations, using all full subgraphs, on a single 48-GB GPU card during training.</li>
<li>Batch size is not an issue here, because we use gradient accumulation: we accumulate over one subgraph at a time when training a batch.</li>
</ul>
<pre><code>optimizer.zero_grad()
for r in range(num_configs):   # one configuration (subgraph) at a time
    loss = net(r)
    loss.backward()            # gradients accumulate across configurations
optimizer.step()
</code></pre>
<ul>
<li><p>Normalisation is important. We use "graph instance norm" (over nodes), see paper [1], which works well with gradient accumulation </p></li>
<li><p>We use a pairwise ranking loss for training.</p></li>
<li><p>We try 2 GNN: </p>
<ul>
<li>4-layer SAGE-conv[2] with residual shortcut</li>
<li>4-layer GIN-conv[3] </li></ul></li>
</ul>
<p>SAGE-conv is better than GIN-conv.</p>
<h2>2. Tile runtime prediction</h2>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F113660%2Fc13d2c462b0958161cd4ce09e4249bc7%2FSelection_999(4191).png?generation=1701482450756660&alt=media" alt=""></p>
<p>main problem:</p>
<ul>
<li>There are no issues here, as the graphs in the Kaggle training data are much smaller. They are actually subgraphs of the much larger original computation graphs.</li>
</ul>
<p>summary of approach</p>
<ul>
<li>We still use "graph instance norm" (over nodes) [1] and the same gradient accumulation approach, with batch size = 64.</li>
<li>We try both SAGE-conv[2] and GAT-conv[4]. GAT-conv gives better results.</li>
<li>Since we are interested in the top-5 ranks, we find ListMLE is a better loss (a minimal sketch follows after this list).</li>
</ul>
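<p>A possible ListMLE implementation in PyTorch (a sketch, not the exact code; higher score means predicted faster):</p>
<pre><code>import torch

def list_mle(scores: torch.Tensor, runtimes: torch.Tensor) -> torch.Tensor:
    # Plackett-Luce likelihood of the ground-truth order (fastest config first)
    order = torch.argsort(runtimes)
    s = scores[order]
    # log sum_{j >= i} exp(s_j) for every position i, computed from the end
    log_cumsum = torch.logcumsumexp(s.flip(0), dim=0).flip(0)
    return (log_cumsum - s).sum()
</code></pre>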
<hr>
<h2>[Reference]</h2>
<p>[1] GraphNorm: A Principled Approach to Accelerating Graph Neural Network Training<br>
<a href="https://arxiv.org/abs/2009.03294" target="_blank">https://arxiv.org/abs/2009.03294</a> <br>
[2] Inductive Representation Learning on Large Graphs<br>
<a href="https://arxiv.org/abs/1706.02216" target="_blank">https://arxiv.org/abs/1706.02216</a> <br>
[3] How Powerful are Graph Neural Networks?<br>
<a href="https://arxiv.org/pdf/1810.00826.pdf" target="_blank">https://arxiv.org/pdf/1810.00826.pdf</a><br>
[4] Graph Attention Networks <br>
<a href="https://arxiv.org/abs/1710.10903" target="_blank">https://arxiv.org/abs/1710.10903</a></p>
<hr>
<h2>local validation and public/private score</h2>
<p>The metrics are: slowdown (top-5) for tile and Kendall tau for layout.<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F113660%2F0caab07b1cf25d81b358b1234d459b8c%2FSelection_999(4194).png?generation=1701488206301120&alt=media" alt=""></p>
<hr>
<h2>Acknowledgment</h2>
<h2><em>"I would like to express my sincere gratitude to HP for the generous provision of the Z8-G4 Data Science Workstation that was instrumental in the successful completion of kaggle competition. The two 48GB Nvidia Quadro RTX 8000 GPU cards give me a distinct advantage to easily bulid models with the largest public graph dataset TPUGraphs, with 100 millions graphs of 10 thousands nodes."</em></h2> |
Google - Fast or Slow? Predict AI Model Runtime | 13th Place Solution for the Google - Fast or Slow? Predict AI Model Runtime Competition | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google - Fast or Slow? Predict AI Model Runtime
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h1>13th Place Solution for the Google - Fast or Slow? Predict AI Model Runtime Competition</h1>
<p>This is my solution for the "Fast or Slow? Predict AI Model Runtime" Competition. I hope you find it useful!</p>
<p>The key principles of my approach were to start with something simple and improve it in many iterations, and to work under the hardware constraints that I had (16 GB RAM, 4 GB VRAM).</p>
<h2>Context section</h2>
<ul>
<li>Business context: <a href="https://www.kaggle.com/competitions/predict-ai-model-runtime/overview" target="_blank">https://www.kaggle.com/competitions/predict-ai-model-runtime/overview</a></li>
<li>Data Context: <a href="https://www.kaggle.com/competitions/predict-ai-model-runtime/data" target="_blank">https://www.kaggle.com/competitions/predict-ai-model-runtime/data</a></li>
</ul>
<h2>Overview of the approach</h2>
<h3>Layout model</h3>
<p>The core idea is to extract features from the graph and its nodes, and train a Multi-layer Perceptron with that information. For each configurable node of the computational graph we took some node properties, the layout used, and properties from its "parents" and "siblings". The "parent" nodes are the ones that produce the node inputs. Two nodes are "siblings" if they share a parent. For example, in the following figure, nodes 1 and 2 are the parents and nodes 3, 5, and 6 are the siblings.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2528048%2F665c76b0456511e708fcdc2859c2ab98%2Fimg_kaggle.jpeg?generation=1701268499581469&alt=media" alt=""></p>
<p>For each configurable node in the graph we had a list of features that were processed by 3 fully-connected layers with dropout layers in between. After that, the "node" dimension was averaged, so all the node information is represented as a vector. Two extra inputs were concatenated to this vector: a "graph description" and the "subset information". The "graph description" is a vector with how many nodes of each type are present in the graph (normalized to sum 1), with an extra value that is the number of nodes of the graph (with a logarithm) to give the model an idea of the graph's size. The "subset information" is a vector that signals if the graph comes from the "xla:default", "xla:random", "nlp:default" or "nlp:random" subset, for what we used an "Embedding" layer of keras.</p>
<p>This new vector was processed by 3 additional fully-connected layers (no dropout this time). The Pairwise Hinge loss was used as objective function during training. In order to do that, each training batch of size 128 had examples of 16 different graphs, with 8 configuration examples each. Each batch is built by randomly choosing a subset with equal probability, so the batches are subset-balanced on average. The final submission was an ensemble of 3 independent training runs. </p>
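<p>A hedged sketch of the per-graph pairwise hinge loss described above (TensorFlow; the margin and reduction are illustrative):</p>
<pre><code>import tensorflow as tf

def pairwise_hinge_loss(scores, runtimes, margin=1.0):
    # scores, runtimes: (num_configs,) for one graph; a faster config should score higher
    s_diff = scores[:, None] - scores[None, :]                    # s_i - s_j for all pairs
    faster = tf.cast(runtimes[:, None] < runtimes[None, :], tf.float32)
    loss = tf.nn.relu(margin - s_diff) * faster                   # hinge on mis-ordered pairs
    return tf.reduce_sum(loss) / tf.maximum(tf.reduce_sum(faster), 1.0)
</code></pre>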
<h3>Tile model</h3>
<p>The tile model is a simplified version of the layout model. Each configuration is described with a vector that has the "config_feat" information and also a "graph descriptor" (the same one from the layout model). This model is a Multi-layer Perceptron with 3 fully-connected layers and one dropout layer after the first layer. The loss function and batch structure is the same as the layout model, but with a larger batch size (600) and a larger number of configurations per graph (20).</p>
<h3>Validation</h3>
<p>The "valid" folder examples from the dataset were used as the validation set. We performed validation every 10000 training iterations, and we computed the competition metric over that set. In the case of the layout model, the metric was computed for each one of the four subsets and then an average is computed. If the validation metric did not improve after 5 validations, the training is stopped.</p>
<h2>Details of the submission</h2>
<h3>Tile model details</h3>
<p>The "tile problem" was relatively easy in comparison to the layout problem, so we did not spend that much effort improving this model. The number of configurations per graph was limited to 160, and we sampled them giving the configurations with a lower runtime a higher probability (using an exponential distribution), as the challenge here was about finding the fastest configuration, not sorting all the configurations.</p>
<h3>Node features (Layout model)</h3>
<p>From all the available features in the "node_feat" matrices, we selected the ones that we thought were the most important. This helps keeping the required memory low. These features are:</p>
<ul>
<li>shape_dimensions (21-26).</li>
<li>reshape/broadcast dimensions (31-36).</li>
<li>convolution_dim_numbers_input_spatial_dims (95-98).</li>
<li>convolution_dim_numbers_kernel_spatial_dims (101-104).</li>
<li>layout_minor_to_major (134-139)</li>
</ul>
<p>Values in parentheses correspond to the selected indices of "node_feat". We also gave the node layout information and the opcode to the model (encoded as a vector with a keras Embedding layer). Another important thing is that a re-ordered version of the shapes was given to the network according to the layout information (in addition to the original version). </p>
<p>For the siblings we took each sibling output shape, its layout and a boolean that compares the node layout with the sibling layout to check if they are the same. As the competition overview mentioned, if the layouts of two siblings are different, an extra copy operation is needed, so that motivated the creation of this variable.</p>
<p>For the parents we keep their output shapes and physical layouts. We also took the opcodes from the parents and the siblings, and express them as a vector with the help of the Embedding layer of keras.</p>
<h3>Keep the training stable (Layout model)</h3>
<p>To facilitate the training process, all the features that took values across many orders of magnitude (e.g. tensor shapes) were passed through a logarithm to avoid very large input values. The features were scaled using mean / standard deviation normalization, with some clipping in the std estimate to avoid dividing by a very small value.</p>
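<p>A minimal sketch of that normalization (the clipping value is an assumption):</p>
<pre><code>import numpy as np

def normalize(feat: np.ndarray, eps: float = 1e-2) -> np.ndarray:
    # log to tame features spanning many orders of magnitude (e.g. tensor shapes),
    # then mean/std scaling with the std clipped away from zero
    feat = np.log1p(feat)
    mean, std = feat.mean(axis=0), feat.std(axis=0)
    return (feat - mean) / np.clip(std, eps, None)
</code></pre>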
<p>A cosine decay schedule was used for the learning rate. After a 10000 iteration linear warm-up in the learning rate, the cosine decay reduced the parameter across 250k iterations until it reached a 5 % of the original value. Adam was used as optimizer, with a clipnorm value of 1.0 to avoid large weight updates.</p>
<p>Many configurations had the same layout. All of them were replaced by just one instance of that layout, and the runtime replaced by the mean of the runtimes.</p>
<h3>Things that didn't work that well…</h3>
<p>We had some instability problems with the ListMLE loss, so we chose to use the pairwise hinge loss instead. The problem was that after many iterations, a NaN value suddenly appeared in the model (or the loss, we are not sure which) and destroyed all the model weights.</p>
<h3>Keeping the training under the memory budget</h3>
<p>As we mentioned, we worked with a limited memory budget of 16 GB of RAM and 4 GB of VRAM, so we had to be very careful of not loading too much data at the same time and be conservative with the model size. The first important thing was to process all the npz files and save the necessary information in the tfrecords format of tensorflow (with file compression activated). During training, these files were read from disk, trying to give the model samples from many different graphs instead of seeing just one graph at a time.</p>
<p>The number of configurations per graph was capped at 7500, and the number of configurable nodes given to the network was capped at 1000. As many different graphs were given to the network, and considering that each graph has a different number of configurable nodes, it is necessary to pad and mask tensors to make all the samples the same length across the "node" dimension. This can carry a heavy memory burden if the number of used nodes is increased too much, so the number 1000 was chosen given this restriction. I also put a limit on the number of parents (2) and siblings (3) for each node.</p>
<h2>Sources</h2>
<p>Code: <a href="https://github.com/ignacioreyes/kaggle_model_runtime" target="_blank">https://github.com/ignacioreyes/kaggle_model_runtime</a></p>
<p>Embedding layer (tf/keras): <a href="https://www.tensorflow.org/api_docs/python/tf/keras/layers/Embedding" target="_blank">https://www.tensorflow.org/api_docs/python/tf/keras/layers/Embedding</a></p>
<h4>Note: I wrote many sections in plural, as is customary in academic papers.</h4> |
CommonLit - Evaluate Student Summaries | 217th Place Solution for the CommonLit - Evaluate Student Summaries Competition | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit - Evaluate Student Summaries
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, I would like to thank the Learning Agency Lab and Kaggle for organizing and running this competition. I would also like to thank all the competitors who shared their views and notebooks.<br>
Congratulations to all the winners! Although I am not one of the winners, I would like to share with you what I did and the results. I am interested in NLP but have never had good results in competitions. So I started to study in public notebooks. All I could finally do was change a few parameters due to my lack of ability and the GPU's time limit, but the private score improved very slightly.</p>
<h2><strong>Context</strong></h2>
<ul>
<li>Business context: <a href="https://www.kaggle.com/competitions/commonlit-evaluate-student-summaries/overview" target="_blank">https://www.kaggle.com/competitions/commonlit-evaluate-student-summaries/overview</a></li>
<li>Data context: <a href="https://www.kaggle.com/competitions/commonlit-evaluate-student-summaries/data" target="_blank">https://www.kaggle.com/competitions/commonlit-evaluate-student-summaries/data</a></li>
</ul>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5800072%2Fbe5c9aaa507498ae435000694059aa8e%2FCommonLit%202023_0.png?generation=1697846085252308&alt=media" alt=""></p>
<h2><strong>Overview of the Approach</strong></h2>
<p>Since I did not have the ability to create my own notebook, I started by modifying the excellent notebook [<a href="https://www.kaggle.com/code/cody11null/tuned-debertav3-lgbm-autocorrect?scriptVersionId=141032742" target="_blank">1</a>] by <a href="https://www.kaggle.com/cody11null" target="_blank">Cody_Null</a>. The main modification was to change the batch size.</p>
<h2><strong>Details of the submission</strong></h2>
<p>I changed all random seed values to 0 just because I like 0, although it may not make sense. Then, I decided to try changing the batch size. First, the batch size was changed from the original 12 to 16, but the run went out of memory. Second, the batch size was decreased to 8, but the public score worsened from 0.45021 to 0.46446. Third, the batch size was changed to 10, but the public score worsened further to 0.46992. So I stopped.<br>
I looked only at the public scores and thought there was no improvement, but the changes in the private scores were different. When the batch size was decreased to 8, contrary to the worsening of the public score (0.45021 -> 0.46446), the private score improved very slightly (0.48972 -> 0.48483).</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5800072%2Fd426b4cdac7276d987b7644faf1c0d61%2FCommonLit2023_1.png?generation=1697846131832554&alt=media" alt=""></p>
<p>I always tend to worry only about the public scores. I want to have a firm standard for myself.</p>
<h2><strong>Sources</strong></h2>
<p>[1] <a href="https://www.kaggle.com/cody11null" target="_blank">Cody_Null</a>, <a href="https://www.kaggle.com/code/cody11null/tuned-debertav3-lgbm-autocorrect?scriptVersionId=141032742" target="_blank">Tuned Debertav3+LGBM+autocorrect</a>.</p> |
Google - Fast or Slow? Predict AI Model Runtime | 19th Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google - Fast or Slow? Predict AI Model Runtime
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to Kaggle and Google for organizing this competition! This was my first competition and I really enjoyed it. We didn't know anything about GNN's before starting this competition and we learned a lot throughout this competition. I would also like to thank my great teammate <a href="https://www.kaggle.com/roysegalz" target="_blank">@roysegalz</a> </p>
<h3>Tile dataset:</h3>
<p>For the Tile dataset we used an RGCN where the relations are the node_opcode. The model consisted of 4 blocks where a block consisted of RGCN -> LayerNorm -> ReLU</p>
<ul>
<li>loss function - ListMLE</li>
<li>hidden dim = 128</li>
<li>CosineLRScheduler</li>
<li>AdamW</li>
<li>lr = 4e-4</li>
<li>weight decay = 1e-4</li>
<li>max aggr for final graph representation</li>
</ul>
<p>We reached 0.198 on Tile + sample_submission (random predictions on Layout datasets)</p>
<h3>Layout datasets:</h3>
<p>We realized that the best way to differentiate between different graphs is by emphasizing the node_config_feat. We recognized the problem that the number of nodes that contain node configs is relatively small compared to the number of nodes in the graph. Hence we understood that this data might get lost in the model throughout the forward pass. Our way to solve the problem was the following:<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6061339%2Fc3795f9d4cb3cd838f865260a2433cd1%2FNew%20node%20config%20feat.png?generation=1700264018548073&alt=media" alt=""></p>
<p>First we represent the node_config_feat in a different way (nn.Embedding and one hot vector). After doing that we created this model architecture (which is the most important part of our solution):<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6061339%2Fd852425e39c06ed17a4d2c822fda94f4%2FLayout%20Architecture.png?generation=1700264126117504&alt=media" alt=""></p>
<p>Concatenating the data again after each block increased our scores dramatically.</p>
<p>We used the same model architecture for all the different layout datasets.</p>
<p>The model consisted of 3 blocks where a block consisted of GATv2-> LayerNorm -> ReLU (We decided not to use RGCN since it was really computationally expensive to run on the layout dataset)</p>
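<p>A hedged sketch of one such block with the config features concatenated back in (PyTorch Geometric; the hidden size follows the list below, the rest is illustrative):</p>
<pre><code>import torch
import torch.nn as nn
from torch_geometric.nn import GATv2Conv

class LayoutBlock(nn.Module):
    # one block: GATv2 -> LayerNorm -> ReLU, with the (embedded) per-node config
    # features concatenated back in so they are not washed out by message passing
    def __init__(self, in_dim, cfg_dim, hidden=64):
        super().__init__()
        self.conv = GATv2Conv(in_dim + cfg_dim, hidden)
        self.norm = nn.LayerNorm(hidden)

    def forward(self, x, cfg, edge_index):
        h = torch.cat([x, cfg], dim=-1)
        return torch.relu(self.norm(self.conv(h, edge_index)))
</code></pre>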
<ul>
<li>hidden dim = 64</li>
<li>loss function - nn.MarginRankingLoss(0.5) </li>
<li>CosineLRScheduler</li>
<li>AdamW</li>
<li>lr = 2e-4</li>
<li>weight decay = 1e-4</li>
<li>Virtual node for final graph representation</li>
</ul>
<p>Our CV Scores:</p>
<table>
<thead>
<tr>
<th>Dataset</th>
<th>Kendal-Tau CV</th>
</tr>
</thead>
<tbody>
<tr>
<td>XLA default</td>
<td>~0.3</td>
</tr>
<tr>
<td>NLP default</td>
<td>~0.5</td>
</tr>
<tr>
<td>XLA random</td>
<td>~0.62</td>
</tr>
<tr>
<td>NLP random</td>
<td>~0.94</td>
</tr>
</tbody>
</table> |
RSNA 2023 Abdominal Trauma Detection | 6th Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA 2023 Abdominal Trauma Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, Thank you to RSNA and Kaggle for hosting this competition. <br>
Congratulations to all competitors. <br>
My solution is based on my mistakes in past RSNA competitions and solutions I learned from great competitors.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2816670%2Fa4d9546ab3d62dfe93918f09bc08219c%2FRSNA-Summary.jpg?generation=1697997852740205&alt=media" alt=""></p>
<h2>Data</h2>
<p>I use the datasets from <a href="https://www.kaggle.com/theoviel" target="_blank">@theoviel</a>. Thanks to him. <br>
I make 5 folds based on patient id (n=3147). </p>
<h2>Models</h2>
<p>I divided the task based on the label type.</p>
<ul>
<li><strong>Organ Model</strong> : Seg Label(nii) + Study Label</li>
<li><strong>Bowel Model</strong> : Seg Label(nii) + Study Label + Image Label</li>
<li><strong>Extra Model</strong> : Study Label + Image Label</li>
</ul>
<h3>1) Organ Model</h3>
<p>First, I trained 3D segmentation model for generating masks. </p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2816670%2F5c42e73d71ae8c375b2a321024122e17%2F3d-segmentation.jpg?generation=1698161673119881&alt=media"></p>
<p>I used Qishen's 3D segmentation code. <a href="https://www.kaggle.com/competitions/rsna-2022-cervical-spine-fracture-detection/discussion/362607" target="_blank">https://www.kaggle.com/competitions/rsna-2022-cervical-spine-fracture-detection/discussion/362607</a><br>
Then I cropped each organ and took 15 slices for each one. Because I got many ideas from previous RSNA competitions, I started to use adjacent +-2 channels. <br>
I also tried using only 1 slice with 5 channels, just to see how different it would be, and it performed better. <br>
So in the end I used that approach, although I think the original method makes more sense.</p>
<p>And then I trained CNN + sequence model With cropped volumes and study label. </p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2816670%2F729010131b30b3f5e629ec0c66d2f764%2Forgan.jpg?generation=1698161982441147&alt=media"></p>
<h4>Model:</h4>
<ol>
<li>3D segmentation : generate masks and crop (15 slices in each organ) <ul>
<li>resnet18d</li></ul></li>
<li>CNN 2.5D + sequence : train Organ classifier with study label.<ul>
<li>efficientnetv2s + LSTM</li>
<li>seresnext50_32x4d + LSTM</li></ul></li>
</ol>
<h3>2) Bowel Model</h3>
<p>The 3D segmentation part is same with above. </p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2816670%2F6dadd1dddf4a0a682e76e1700534da30%2Fslice.png?generation=1698161851139718&alt=media"></p>
<p>The only difference is I cropped 30 slices for bowel. </p>
<p>I trained also CNN + sequence model with cropped volumes and study and image label. </p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2816670%2F2faf6f6bbbbecb9df6c68c94e97bc78c%2Fbowel.jpg?generation=1698161933059070&alt=media"></p>
<h4>Model:</h4>
<ol>
<li>3D segmentation : generate masks and crop (30 slices in each organ)<ul>
<li>resnet18d</li></ul></li>
<li>CNN 2.5D + sequence : train Organ classifier with study label and image label.<ul>
<li>efficientnetv2s + LSTM</li>
<li>seresnext50_32x4d + LSTM</li></ul></li>
</ol>
<h3>3) Extra Model</h3>
<p>For the Extra model, I took slices with stride 5 and +-2 adjacent channels. <br>
For example, each image has shape (5, size, size) and the 5 channels are [n-2, n-1, n, n+1, n+2]. <br>
I also just resized images to 384. I tried other options like 512 size or cropped images, but they did not work well. </p>
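<p>A minimal sketch of building those stride-5, 5-channel inputs (function and variable names are illustrative):</p>
<pre><code>import numpy as np

def make_inputs(volume: np.ndarray, stride: int = 5) -> np.ndarray:
    # volume: (num_slices, H, W) -> (num_samples, 5, H, W) with channels [n-2 .. n+2]
    centers = np.arange(2, len(volume) - 2, stride)
    return np.stack([volume[i - 2:i + 3] for i in centers])
</code></pre>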
<p>The Extra model is based on 2 stages. <br>
First, I trained a feature extractor and got feature embeddings. <br>
Second, I trained a sequence model. <br>
This alone was enough for the gold zone. </p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2816670%2Fe2a01ab8a35bb6b060bfa694f322a538%2Fextravasation%20feature.jpg?generation=1698162042017089&alt=media"><br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2816670%2F79ef199186ad3a5fe4f8b8463026e836%2Fextravasation%20sequence.jpg?generation=1698162053473465&alt=media"></p>
<p>In addition, thanks to Ian's bbox label, I could improve Extra model more. </p>
<p>In my experiments, training a detector with the bbox labels did not work. <br>
So I used these labels to make the model focus on the extravasation region.<br>
I added a segmentation head to the feature extractor, and it worked well.</p>
<p>The idea to add a segmentation head comes from the previous SIIM competition.</p>
<h4>Model:</h4>
<ol>
<li>Feature Extractor<ul>
<li>seresnext50_32x4d</li>
<li>efficientnetv2s</li></ul></li>
<li>Sequence<ul>
<li>GRU</li></ul></li>
</ol>
<h2>Things that did not work</h2>
<ul>
<li>YOLOv7 + Ian Pan extravasation boxes. Training a detector to crop bboxes did not work well.</li>
<li>A separate organ model. </li>
</ul>
<p>I truly appreciate the many competitors who produce and share great solutions every time. <br>
Thanks to them, I was able to learn so much and become a Kaggle master. <br>
Also, thank you to the host and everyone who contributed to the best solutions.</p>
<h2>Code</h2>
<p>inference code : <a href="https://www.kaggle.com/madquer/rsna-inference-6th-solution" target="_blank">https://www.kaggle.com/madquer/rsna-inference-6th-solution</a><br>
training code : <a href="https://github.com/sushi58373/RSNA_abdominal_trauma_6th_solution" target="_blank">https://github.com/sushi58373/RSNA_abdominal_trauma_6th_solution</a><br>
Demo Notebook : <a href="https://www.kaggle.com/code/madquer/rsna-inference-6th-solution-clean-version/notebook" target="_blank">https://www.kaggle.com/code/madquer/rsna-inference-6th-solution-clean-version/notebook</a></p> |
Google - Fast or Slow? Predict AI Model Runtime | Kaggle Solution Write-up | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google - Fast or Slow? Predict AI Model Runtime
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><pre><code>import pandas as pd
import numpy as np

df1 = pd.read_csv("/kaggle/input/predict-ai-model-runtime/sample_submission.csv")

df1.to_csv("submission.csv", index=False)
</code></pre>
Google - Fast or Slow? Predict AI Model Runtime | 58th place basic vanilla keras solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google - Fast or Slow? Predict AI Model Runtime
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>The generated features are provided in the attached .txt file (note: it has manual inputs)<br>
Solution- <a href="https://www.kaggle.com/code/anshuman8514/58th-place-vanilla-keras-solution" target="_blank">https://www.kaggle.com/code/anshuman8514/58th-place-vanilla-keras-solution</a></p> |
Bengali.AI Speech Recognition | 90th Place Solution for the Bengali.AI Speech Recognition | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Speech Recognition
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I would like to thank the hosts for the wonderful task! I feel so bad that I had only 3 days for this competition and didn't join earlier.</p>
<p><strong>Context</strong></p>
<ul>
<li>Business context: <a href="https://www.kaggle.com/competitions/bengaliai-speech/overview" target="_blank">https://www.kaggle.com/competitions/bengaliai-speech/overview</a></li>
<li>Data context: <a href="https://www.kaggle.com/competitions/bengaliai-speech/data" target="_blank">https://www.kaggle.com/competitions/bengaliai-speech/data</a></li>
</ul>
<p><strong>Overview of the Approach</strong></p>
<p>Like almost everyone else, I used a model pretrained on Bengali speech. The main parameters for fine-tuning are listed below.</p>
<pre><code>=,
=,
=,
=,
=,
=,
=,
=,
=,
=e-,
=
</code></pre>
<p>and the following configuration was used</p>
<pre><code>model.freeze_feature_extractor()  # "model" here stands for the wav2vec2 model being fine-tuned
model.freeze_feature_encoder()
</code></pre>
<p><strong>Details of the submission</strong></p>
<p>It took me 16h to fine-tune a model on 10% of the train data for 3 epochs on my GPU — that's where I really felt the lack of time.</p>
<p>I found that increasing <code>beam_width</code> in the decoder from the default to 2048 led to a 0.003 boost on both public and private (and inference time is ~8h, so we can't really increase it further).</p>
<p><strong>Sources</strong></p>
<p><a href="https://www.kaggle.com/code/takanashihumbert/bengali-sr-wav2vec-v1-bengali-training" target="_blank">https://www.kaggle.com/code/takanashihumbert/bengali-sr-wav2vec-v1-bengali-training</a> — notebook that I used for fine-tuning<br>
<a href="https://www.kaggle.com/competitions/bengaliai-speech/discussion/435300" target="_blank">https://www.kaggle.com/competitions/bengaliai-speech/discussion/435300</a> — datasets and model checkpoints for resource efficient training</p> |
Bengali.AI Speech Recognition | 1st place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Bengali.AI Speech Recognition
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>STT Model:</p>
<ul>
<li>OpenAI whisper-medium</li>
<li>Huggingface trainer</li>
<li>Trained on 8x 48GB RTX A6000</li>
<li>bs=8 and lr=1e-5</li>
<li>Train steps 50k</li>
<li>Spectrogram dithering</li>
<li>Spectrogram time and frequency masking</li>
<li>Resampling 16khz->8khz->16khz as augmentation</li>
<li>Inference with max_length=260, num_beams=4 and chunk_length_s=20.1s</li>
<li>Libsonic based speed/pitch augmentation</li>
<li>Datasets: OpenSLR 37, OpenSLR 53, MadASR, Shrutilipi, Macro, Kathbath, GoogleTTS generated audios and pseudo labeled YouTube videos</li>
</ul>
<p>Punctuation Model:</p>
<ul>
<li>AutoModelForTokenClassification google/muril-base-cased</li>
<li>Huggingface trainer</li>
<li>Labels: period, comma and question mark</li>
<li>bs=64, lr=2e-4 and max_seq_length=512</li>
<li>Ensemble of 4 models (using 6, 8, 11 and 12 layers of google/muril-base-cased)</li>
<li>Normalized IndicCorp v2 Bangla dataset</li>
</ul>
<p>In my daily job, I do speech recognition for low-resource Central Asian languages. From my experience, OpenAI Whisper works really well for OOD audios and can even transcribe song lyrics. The downside is that it is very sensitive to annotation noise. So fixing the annotation noise is the most crucial part of this competition.</p>
<p>Because the competition dataset was not validated, the initial model was trained on OpenSLR datasets. We normalized the texts and filtered out texts containing Bengali digits. All punctuation was also removed. Additionally, we sampled 420k texts from the IndicCorp and synthesized audios using GoogleTTS, which were then used as training datasets.</p>
<p>Following the training of an initial Whisper-medium model on OpenSLR and GoogleTTS, we conducted inference on MadASR, Shrutilipi, Macro, and Kathbath. We included audios with a WER of less than 15% in the next training phase. After three rounds of training, the model achieved an 8% WER on the Macro validation dataset and a public leaderboard score of approximately 0.380.</p>
<p>Since most of the training set audios were short, we merged some short audios to create around 70k longer audios. Subsequently, we achieved a public leaderboard score of approximately 0.370.</p>
<p>Whisper with the original tokenizer was slow on Bengali audios. Therefore, we trained a Whisper tokenizer with a 12k vocabulary on Bengali texts. With this tokenizer, we were able to perform inference with a num_beam value of up to 8 and a chunk_length_s of 20.1 seconds in less than 7 hours.</p>
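<p>For reference, a hedged sketch of this kind of chunked beam-search inference with the HF pipeline (the checkpoint path is hypothetical):</p>
<pre><code>from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="path/to/whisper-medium-bn",  # hypothetical fine-tuned checkpoint
    chunk_length_s=20.1,
    device=0,
)
out = asr("sample.wav", generate_kwargs={"num_beams": 4, "max_length": 260})
print(out["text"])
</code></pre>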
<p>In the next step, we applied pseudo labeling to some YouTube videos, which enabled us to achieve a public leaderboard score of approximately 0.360. When we combined the predictions of four punctuation models, our public leaderboard score improved to around 0.325.</p>
<p>By adding more pseudo-labeled YouTube videos, our public leaderboard score further improved to 0.312 (private LB 0.372)</p>
<p>model weight and inference notebook: <a href="https://www.kaggle.com/competitions/bengaliai-speech/discussion/447970" target="_blank">https://www.kaggle.com/competitions/bengaliai-speech/discussion/447970</a><br>
cleaned/long/pseudo data: <a href="https://www.kaggle.com/competitions/bengaliai-speech/discussion/448110" target="_blank">https://www.kaggle.com/competitions/bengaliai-speech/discussion/448110</a></p> |
Kaggle - LLM Science Exam | 🏅Top 6% - 149 Rank solution (Ensembling techniques + Training/inferencing) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kaggle - LLM Science Exam
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><strong>Kaggle Competition Write-Up</strong><br>
<em>Acknowledgments:</em><br>
I wish to extend my heartfelt gratitude to Kaggle for hosting this remarkable competition, which has been a tremendous learning experience. The Kaggle community, with its collective wisdom and camaraderie, played a pivotal role in shaping my journey. I've learned invaluable lessons from fellow competitors, and their willingness to share knowledge has been an immense source of inspiration.<br>
<strong>Datasets Used:</strong><br>
For this competition, I harnessed the power of two vital datasets:</p>
<ol>
<li><strong>60k Dataset by <a href="https://www.kaggle.com/cdeotte" target="_blank">Chris Deotte</a>:</strong> This dataset laid a strong foundation for my approach. It provided me with a wealth of data to train and fine-tune my models.</li>
<li><strong>40k Dataset by Chris Deotte:</strong> This dataset further complemented my training efforts, enabling me to capture diverse patterns in the data.<br>
<strong>Retrieval Process:</strong><br>
A key element of my strategy was the ingenious TF-IDF technique provided by <a href="https://www.kaggle.com/code/mbanaei/86-2-with-only-270k-articles" target="_blank">MB</a>. This technique proved to be a game-changer, working wonders in enhancing the quality of my retrieval process.<br>
<strong>Inference:</strong><br>
To bolster my inference and retrieval capabilities, I utilized the extensive 270k MB dataset. This vast resource significantly contributed to the accuracy and reliability of my predictions.<br>
<strong>Models:</strong><br>
My model portfolio consisted of a variety of powerful components:</li>
<li><strong>DistilBERT (DiBERTa) Models:</strong><ul>
<li>I trained two DistilBERT models, one on the 60k dataset and the other on the 40k dataset.</li>
<li>The 60k dataset-trained DiBERTa achieved a remarkable public leaderboard score of 0.901. I harnessed the openbook technique for this <a href="https://www.kaggle.com/code/cdeotte/how-to-train-open-book-model-part-1" target="_blank">model</a>, and I want to acknowledge the invaluable reference provided by Chris Deotte's notebook.</li>
<li>The DiBERTa model trained on the 40k dataset exhibited a competitive score of 0.84.</li></ul></li>
<li><strong>Other Models:</strong><ul>
<li>I also experimented with other models to explore their potential:</li>
<li>Longformer, though untrained, exhibited a respectable score of 0.83.</li>
<li>The <a href="https://www.kaggle.com/datasets/itsuki9180/llm-kaggle-awp" target="_blank">ITK</a> model, provided for the competition, achieved a solid score of 0.82 (used just to introduce a different aspect to my ensemble).</li>
<li>A pre-trained DiBERTa model showed promise with a score of 0.88.<br>
<strong>Ensemble Model:</strong><br>
In pursuit of a robust and competitive solution, I employed an ensemble approach. The final ensemble model was meticulously crafted as follows:</li></ul></li>
</ol>
<ul>
<li>I executed an averaging operation between the two DiBERTa models. While I did not anticipate a substantial change, this step helped improve the overall stability of my model.</li>
<li>Subsequently, I integrated a voting mechanism that is provided by <a href="https://www.kaggle.com/code/radek1/an-introduction-to-voting-ensemble" target="_blank">radek</a>. This algorithm made its decision based on the consensus of all models, and it would only replace the primary answer, which was derived from the DiBERTa averaging, if all other models unanimously agreed on an alternative answer.<br>
<strong>Final Score:</strong><br>
After all these efforts I was lucky enough to get a public leaderboard score of 0.904 and a private leaderboard score of 0.902. <br>
Special thanks to:<br>
<a href="https://www.kaggle.com/mbanaei" target="_blank">Mb</a> , <a href="https://www.kaggle.com/cdeotte" target="_blank">Chris Deotte</a>, <a href="https://www.kaggle.com/radek1" target="_blank">Radek</a> and many more </li>
</ul> |
Child Mind Institute - Detect Sleep States | 7th Place Solution - Wavenet and Some Tricks | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Child Mind Institute - Detect Sleep States
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, thanks my teammate <a href="https://www.kaggle.com/robikscube" target="_blank">@robikscube</a> . It was a fun ride. We have noticed that Public LB is not very trustworthy and focused on our CV. We had around 0.826 CV at the end.</p>
<p><strong>Our main model is a WaveNet with some modifications.</strong> It works on a 3-day sequence aggregated to minute resolution.</p>
<pre><code>class SleepModel(nn.Module):
    def __init__(self, inch, kernel_size):
        super(SleepModel, self).__init__()
emb_size =
self.minute_emb = nn.Embedding(, embedding_dim=emb_size)
self.bn = nn.BatchNorm1d(inch-)
self.wave_block1 = Wave_Block(inch-+emb_size, , , kernel_size)
self.wave_block2 = Wave_Block(, , , kernel_size, base=)
self.wave_block3 = Wave_Block(, , , kernel_size)
self.wave_block4 = nn.Sequential(nn.Conv1d(, , kernel_size=, dilation=DAY_LEN//),
nn.BatchNorm1d(), nn.LeakyReLU(),
nn.Conv1d(, , kernel_size=, dilation=DAY_LEN//),
nn.BatchNorm1d(), nn.LeakyReLU()
)
self.top = nn.Conv1d(, , kernel_size=)
self.top2 = nn.Conv1d(, , kernel_size=)
self.gn1 = nn.GroupNorm(, )
self.gn2 = nn.GroupNorm(, )
self.avgpool = nn.AvgPool1d(kernel_size=DAY_LEN+, padding=DAY_LEN//, stride=)
self.maxpool = nn.MaxPool1d(kernel_size=DAY_LEN+, padding=DAY_LEN//, stride=)
    def forward(self, x):
x = torch.cat([x[:, -DAY_LEN:]*, x, x[:, :DAY_LEN]*], axis=)
x = x.permute(, , )
minute, x = x[:, -, :], x[:, :-, :]
x[:, -, :] += minute /
x = self.bn(x)
minute_emb = self.minute_emb(torch.fmod(minute, ).long())
x = torch.cat([x, minute_emb.permute(, , )], axis=)
x = self.wave_block1(x)
x = self.gn1(x)
x = self.wave_block2(x)
x = self.gn2(x)
x = self.wave_block3(x)
x = torch.cat([x, self.avgpool(x[:, :]), self.maxpool(x[:, :])], axis=)
x = self.wave_block4(x)
x, x2 = self.top(x), self.top2(x)
        return x, x2
</code></pre>
<p>Its <strong>features</strong> are: <code>
["target", "idx", "anglez_mean", "anglez_std",
"enmo_mean", "enmo_std",
"same_anglez_prev_min", "same_anglez_next_min"] +
volatility_cols + ["hour", "minute"]
</code><br>
Volatility columns are the median of the absolute anglez difference over 3 different time windows (5, 30, 480 steps).<br>
The same-anglez prev and next features are the difference of anglez from the same minute of the previous/next day.</p>
<p><strong>Training details:</strong></p>
<ul>
<li>Target is also set to 1 for the adjacent minute because we predict with +3 step offset to cover the next minute too.</li>
<li>Ignoring near misses in the loss: the 2nd and 3rd minutes around the target are set to -1, so that we don't penalize positive predictions there.</li>
<li>2 heads (1 for actual target -> categorical crossentropy, 1 for 15min windows -> BCE)</li>
<li>Online Hard Example Mining with 50%</li>
<li>6 epochs with diminishing LR. Each epoch takes 18 seconds on RTX3090. (and submission was taking 30 minutes in total)</li>
<li>2 iterations with different seed for diversity. 1st iteration excludes some bad series and 2nd iteration has very little augmentation.</li>
</ul>
<p><strong>Prediction details:</strong></p>
<ul>
<li>1 day sliding predictions with 3 days span. Only the middle day is used.</li>
<li>Postprocessing: Starting from the maximum prediction, we add its location as a prediction. Its score is its probability combined with the 2nd-highest probability within a ±4-minute window. We then set that window to zero, multiply the surrounding ±20-minute window by 0.5, and continue. Later, we add very-low-probability predictions over a ±18-step span around predictions with a high score.</li>
<li>Stacking with LightGBM: a LightGBM model with many median-absolute-anglez-difference features is used. NN probabilities, plus time-to/since onset/wakeup features derived from these probabilities, are added. AUC improved significantly but the competition metric improved very little (0.001).</li>
<li>Ensembling: We had another model with LSTM+Transformer. We added it to get 0.001 improvement.</li>
</ul> |
Child Mind Institute - Detect Sleep States | 3rd place solution - GRU, UNET and LGB! | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Child Mind Institute - Detect Sleep States
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h1>3rd place solution - GRU, UNET and LGB!</h1>
<p>Congratulations to kaggle and the host for this competition, it has been a very interesting and fun competition. Before we start, special thanks to <a href="https://www.kaggle.com/kyakovlev" target="_blank">@kyakovlev</a>, it has been exceptional to be able to work with you and learn from you.</p>
<h2>Structure of our approach</h2>
<p><strong>1. Preprocessing</strong><br>
<strong>2. Training (GRU + UNET + LGB)</strong><br>
<strong>3. Inference and postprocess</strong></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3147995%2F970876bd40cd74ade6b68a7ea415c349%2FCaptura%20desde%202023-12-06%2001-35-18.png?generation=1701823419282889&alt=media" alt=""></p>
<h3>1. Preprocessing</h3>
<p>For the final submission our GRU + UNET models used just 7 features. We tried to add more features, but we did not find any that improved local validation.</p>
<h4>1.1. <strong>How do we structure the series before introducing them into our model?</strong></h4>
<p>We decided to divide the series into one-day sequences and reduce the granularity from 5 secs to 30 secs. Therefore we had sequences of length 2880, in which there was normally one onset and one wakeup. Here is a standard input for our model.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3147995%2F4d5bb92e188eb1a4e751d18505ffa7ff%2Fstandard_series.png?generation=1701879825870419&alt=media" alt=""></p>
<h4>1.2. <strong>Key points and features in preprocesing</strong></h4>
<ul>
<li><p>Make anglez absolute; this was giving +0.002 on local validation.</p></li>
<li><p>For the only two variables we had (anglez and enmo), we tried to find useful aggregations (diff, mean, median, skew, etc.), but the only thing that seemed to work was the standard deviation (<strong>anglez_abs_std</strong> and <strong>enmo_std</strong>).</p></li>
<li><p>Detecting noise was another key point of our solution. We realized that when exactly the same value is repeated in the same series at the same hour, minute and second, it is basically noise (see the sketch after this list). Here the detected noise is the red line.<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3147995%2F4926d54727c25100d547d5656cbc3667%2F04f547b8017d.png?generation=1701879870683731&alt=media" alt=""></p></li>
<li><p>To incorporate temporal information into the model, we decided to add 2 frequency encoding variables (one for onsets and one for wakeups) at the hour-minute level. </p></li>
<li><p>A good augmentation trick was to reverse all the series during training; this allowed us to have more sequences and increased our local validation by 0.01.</p></li>
</ul>
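<p>A minimal sketch of the repeated-value noise rule described above, assuming a per-series dataframe whose <code>timestamp</code> column has been parsed to datetime; the column and function names are illustrative and this is not the authors' exact code.</p>
<pre><code>import pandas as pd

def flag_repeated_noise(df: pd.DataFrame) -> pd.Series:
    # time of day at 5-second granularity: hour, minute, second
    tod = df["timestamp"].dt.strftime("%H:%M:%S")
    # exactly the same anglez value at exactly the same time of day
    # appearing more than once in the series is treated as noise
    dup = df.assign(tod=tod).duplicated(subset=["tod", "anglez"], keep=False)
    return dup.astype(int)
</code></pre>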
<h4>1.3. <strong>Small ablation study of single model in the middle of the competition</strong></h4>
<p>The public leaderboard was really making us dizzy throughout the competition; we thought there was a bug in our code. So halfway through the competition we studied how each variable performed on the leaderboard, and these are the results:</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3147995%2F43c76444fe8e5fc6a312bced2d320ddb%2FCaptura%20desde%202023-12-06%2017-26-09.png?generation=1701880005278625&alt=media" alt=""></p>
<h3>2. Training</h3>
<p>Our best model was GRU, which alone gave us a local validation of <strong>0.835</strong></p>
<h4>2.1. <strong>GRU and UNET -- training details</strong> (we did the training similar)</h4>
<ul>
<li><strong>Augmentation</strong>: just invert series</li>
<li><strong>Number of features</strong>: 7 (anglez_std, enmo_std, noise_day_before, noise_day_after, noise, hour_min_onset, hour_min_wakeup)</li>
<li><strong>Target</strong>: 2 outputs (one for onsets and other for wakeups)</li>
<li><strong>Target transformation</strong>: Add two steps back and one forward (0,0,0,0,1,0,0,0 -> 0,0,1,1,1,1,0,0); see the sketch after this list.</li>
<li><strong>loss</strong> : cross-entropy</li>
</ul>
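<p>The target transformation above can be sketched as a simple dilation of the binary event array. A hedged numpy example (not the authors' exact code) that reproduces the 0,0,0,0,1,0,0,0 -> 0,0,1,1,1,1,0,0 mapping:</p>
<pre><code>import numpy as np

def dilate_target(target: np.ndarray, back: int = 2, forward: int = 1) -> np.ndarray:
    # target: binary array with 1 at the event step
    out = target.copy()
    for pos in np.flatnonzero(target):
        out[max(0, pos - back): pos + forward + 1] = 1
    return out

# example: 0,0,0,0,1,0,0,0 -> 0,0,1,1,1,1,0,0
print(dilate_target(np.array([0, 0, 0, 0, 1, 0, 0, 0])))
</code></pre>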
<h4>2.2. <strong>Leaderboard variability</strong></h4>
<p>Now that the competition is over, we can observe that the private LB is more correlated with our local validation than the public one. The instability of the public LB was consuming us, because we really thought we had a bug somewhere in our code.<br>
Our best private submission is this ensemble: (GRU*0.68 + UNET * 0.2 + LGB*0.12)<br>
CV: ~<strong>0.840</strong> / public LB <strong>0.784</strong> / private LB <strong>0.848</strong></p>
<h4>2.3. <strong>LGB part</strong></h4>
<p>We had been trying to make LGB competitive since we formed the team. We finally managed to get it to contribute something to the ensemble the day before the competition ended, so I think we were only a few days short of refining our LGB model further.</p>
<p>Anyway, in our last submission we can see how it was giving us <strong>+0.002</strong> in the private LB.<br>
The maximum we achieved with an LGB single was <strong>0.757</strong> public / <strong>0.82</strong> private.</p>
<h3>3. Inference and postprocess</h3>
<p>For the best submission that we finally chose, we made an ensemble of several models (<strong>8</strong> GRU and <strong>2</strong> UNET)<br>
Running time for inference was ~<strong>1.5</strong> hours.</p>
<p>For postprocessing we tried several things, but basically we kept peaks separated by a minimum distance that was optimized on our predictions.</p>
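<p>A minimal sketch of distance-constrained peak picking with scipy, assuming a 1D probability curve per event type is available; the threshold and distance values are illustrative, not the tuned ones.</p>
<pre><code>import numpy as np
from scipy.signal import find_peaks

def extract_events(prob: np.ndarray, distance: int = 100, height: float = 0.01):
    # keep local maxima separated by at least `distance` steps
    peaks, props = find_peaks(prob, distance=distance, height=height)
    return list(zip(peaks.tolist(), props["peak_heights"].tolist()))
</code></pre>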
<h3>4. Conclusions</h3>
<p>Surely there are things we are forgetting, so don't hesitate to ask anything and I'll update it in the post.</p>
<p>It has been a type of problem that could be approached in many ways, as we are seeing in the different solutions. We would like to mention that it has been a really challenging competition where it was a constant battle with other teams that pushed us to the limit (everything was very tight). Apart from that, I hope we can help your mission and be useful to the host.</p>
<p>Github code is here: <a href="https://github.com/FNoaGut/child-mind-institute-detect-sleep-states-3rd-place-solution" target="_blank">https://github.com/FNoaGut/child-mind-institute-detect-sleep-states-3rd-place-solution</a></p> |
Child Mind Institute - Detect Sleep States | 203 Place Solution for the Detect Sleep States Competition | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Child Mind Institute - Detect Sleep States
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h1>Overview of the Approach</h1>
<p>It was an interesting time-series competition. I tried many approaches; some worked well, some did not.</p>
<h1>Details of the submission</h1>
<p>I want to highlight the great notebooks and code by <a href="https://www.kaggle.com/tubotubo" target="_blank">@tubotubo</a>:</p>
<ul>
<li><a href="https://github.com/tubo213/kaggle-child-mind-institute-detect-sleep-states" target="_blank">https://github.com/tubo213/kaggle-child-mind-institute-detect-sleep-states</a></li>
<li><a href="https://www.kaggle.com/code/tubotubo/cmi-code" target="_blank">https://www.kaggle.com/code/tubotubo/cmi-code</a></li>
</ul>
<p>My solution is a neural network with peak-based postprocessing. I had a silver-scoring submission, but had not selected it ((</p>
<h1>Not working</h1>
<p>Boostings<br>
TSFresh<br>
Blending with public kernels</p> |
Child Mind Institute - Detect Sleep States | 29th Place Solution and Initial Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Child Mind Institute - Detect Sleep States
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h2>29th Solution Overview:</h2>
<h3>Model Architecture (from tubo's GitHub Repository):</h3>
<ul>
<li>2DCNN Feature Extractor -> Unet -> Unet Decoder</li>
<li>LSTM Feature Extractor -> 1DCNN -> LSTM Decoder</li>
</ul>
<h3>Input Features:</h3>
<ul>
<li>enmo, anglez, anglez_diff, hour, weekday, and a noise flag.<ul>
<li>Noise is flagged if identical (enmo + anglez) occur within a 5-minute span in the same series_id.</li>
<li>EDA indicated that the day before a holiday typically shows later onset times, prompting the inclusion of weekday and holiday features.</li></ul></li>
<li>Input steps of 5760 and 11520 were used, with four models based on ensemble model and input step pairs.</li>
</ul>
<h3>Target Size:</h3>
<ul>
<li>The target size is identical to the input size.</li>
</ul>
<h3>Training Targets:</h3>
<ul>
<li>Events with Gaussian soft labels (2 channels)</li>
<li>Sleep flag (1 channel)</li>
</ul>
<h3>Inference Process:</h3>
<ul>
<li>Chunks are created by sliding the input_step/4.<ul>
<li>For example, in the 5760 model, chunk steps begin from 0, 1440, 2880, etc.</li></ul></li>
<li>All chunks are predicted and averaged by step.</li>
<li>In CNN-based models, edge predictions are trimmed by 12.5%.</li>
</ul>
<h3>Postprocessing:</h3>
<ul>
<li>Use only events prediction (sleep flag is not used).</li>
<li>Utilizes find_peaks from scipy.signal, and applies a weighted average to the steps near each detected peak based on the prediction scores (see the sketch after this list).<ul>
<li>For instance, if the candidate step is 5000, the weighted average of steps 4998-5002 is calculated.</li></ul></li>
</ul>
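<p>A hedged sketch of the peak refinement described above: detect candidates with scipy's find_peaks and replace each candidate step by the prediction-weighted average of its neighbourhood. The ±2-step radius follows the 4998-5002 example; the other parameter values and names are illustrative.</p>
<pre><code>import numpy as np
from scipy.signal import find_peaks

def refine_peaks(prob: np.ndarray, radius: int = 2, distance: int = 100, height: float = 0.005):
    peaks, _ = find_peaks(prob, distance=distance, height=height)
    refined = []
    for p in peaks:
        lo, hi = max(0, p - radius), min(len(prob), p + radius + 1)
        weights = prob[lo:hi]
        # prediction-score-weighted average of the nearby steps
        step = int(round(np.average(np.arange(lo, hi), weights=weights)))
        refined.append((step, float(prob[p])))
    return refined
</code></pre>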
<hr>
<p>The final solution is just a small modification of tubo's approach, so let me share my initial solution instead. (I spent most of my competition time on this approach…)</p>
<h2>Initial Solution (LB 0.677):</h2>
<h3>Based on a two-stage model:</h3>
<ul>
<li>Stage 1: Chunk Detection by binary classification <ul>
<li>Segmenting the dataset into 1-hour chunks with weak labels to identify useful segments. Label 1 is added if an event occurs within a chunk.</li></ul></li>
<li>Stage 2: L1 Regression <ul>
<li>Pinpointing the event position within the candidate chunks from Stage 1, followed by post-processing.</li></ul></li>
</ul>
<h3>Challenges and Learning Points:</h3>
<p>As a newcomer to neural networks, my approach lacked certain elements:</p>
<ul>
<li>Random Dataloader: I did not know how to implement a random chunk dataloader. Each epoch started from the same position, leading to overfitting and a lack of robustness.</li>
<li>Model Exploration: I used a very basic model (a single MLP, attention layer, LSTM, and head). Exploring more sophisticated models, as seen in tubo's work, would have been beneficial.</li>
</ul>
<hr>
<h2>Concluding Thoughts:</h2>
<p>This competition was a tremendous learning experience. My thanks to Kaggle for this opportunity, and to all the participants for their contributions and shared insights.</p> |
Child Mind Institute - Detect Sleep States | 858th Place Solution: MLP-BiGRU | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Child Mind Institute - Detect Sleep States
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First, I'd like to thank the organizers and Kaggle team for this interesting competition. Second, a huge congratulation to all the winners and everyone who participated in this competition. It was a very great competition for me to learn more about time-series approaches. Finally, many thanks to <a href="https://www.kaggle.com/werus23" target="_blank">werus23</a>, <a href="https://www.kaggle.com/tolgadincer" target="_blank">Tolga</a>, <a href="https://www.kaggle.com/tubotubo" target="_blank">213tubo</a>, and <a href="https://www.kaggle.com/cdeotte" target="_blank">Chris Deotte</a> for sharing their valuable knowledge and discussions throughout the competition. It was super helpful for me to understand and develop the final solution for this competition. </p>
<p><strong>1. Data preparation:</strong></p>
<ul>
<li><strong>Data</strong>: I truncated the original data at the end based on the valid events. Then, I added some engineered features based on the public notebooks and codes from <a href="https://www.kaggle.com/code/werus23/sleep-critical-point-prepare-data" target="_blank">[1]</a> <a href="https://github.com/tubo213/kaggle-child-mind-institute-detect-sleep-states/blob/main/run/prepare_data.py" target="_blank">[2]</a>. The 18 final features I selected for my training/inference phase were: <br>
<code>num_features = ['anglez', 'enmo',
'anglez_lag_diff', 'anglez_lead_diff','anglez_mean_60','anglez_max_60','anglez_std_60',
'enmo_lag_diff','enmo_lead_diff','enmo_mean_60', 'enmo_max_60','enmo_std_60',
'hour_sin','hour_cos','month_sin','month_cos','minute_sin','minute_cos'
]</code></li>
<li><strong>Label</strong>: There were 2 approaches for preparing the labels in this problem: hard labelling vs soft labelling. Hard labelling is created by pre-selecting an offset around the events and treating those steps as part of the training labels. In contrast, soft labelling is created using a Gaussian distribution around the onset/awake positions with different sigma and offset values <a href="https://github.com/tubo213/kaggle-child-mind-institute-detect-sleep-states/blob/main/src/utils/common.py" target="_blank">[3]</a>. This approach has demonstrated much better results compared to hard labelling in this competition. Since I didn't have a good understanding of the soft labelling approach, I focused on training with hard-labelled data. A visualization of these two approaches is as follows (with an offset of 180 on each side; a soft-label sketch is shown after this list): <br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6261540%2F8618580d7b03ac5fd064a28762da89f0%2Flabel_encoding.png?generation=1701984345645339&alt=media" alt=""></li>
</ul>
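<p>For reference, Gaussian soft labels around an event position can be generated as in the sketch below; this is a simplified version of the idea in [3], and the sigma value is illustrative.</p>
<pre><code>import numpy as np

def gaussian_soft_label(length: int, event_step: int, sigma: float = 10.0) -> np.ndarray:
    # bell curve centred on the event position, peak value 1.0
    steps = np.arange(length)
    return np.exp(-0.5 * ((steps - event_step) / sigma) ** 2)

label = gaussian_soft_label(length=17280, event_step=5000)
</code></pre>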
<p><strong>2. Cross-validation:</strong><br>
I used 269 series ids with group 10-fold. I initially used K-Fold but after reading <a href="https://www.kaggle.com/competitions/child-mind-institute-detect-sleep-states/discussion/447882#2491744" target="_blank">Chris's discussion</a>, I switched to group K-Fold and it boosted my LB score by ~0.05. However, I only selected folds 4 and 7 for training purposes. <br>
Valid series ids:</p>
<ul>
<li><strong>fold_4:</strong> <code>['062dbd4c95e6', '0ef7d94fde99', '154fe824ed87', '280e08693c6d',
'31011ade7c0a', '3aceb17ef7bd', '4743bdde25df', '51fdcc8d9fe7',
'5f40907ec171', '653622ac8363', '703b5efa9bc1', '7822ee8fe3ec',
'87a6cbb7c4ed', '8877a6586606', '89bd631d1769', '91127c2b0e60',
'971207c6a525', '9b9cd7b7af8c', 'aed3850f65f0', 'ba8083a2c3b8',
'c38707ef76df', 'cca14d1966c1', 'd2d6b9af0553', 'db5e0ee1c0ab',
'e69aff66e0cb', 'eef041dd50aa', 'f981a0805fd0']</code></li>
<li><strong>fold_7:</strong> <code>['04f547b8017d', '0dee4fda51c3', '137b99e936ab', '25e2b3dd9c3b',
'2e9ced2c7976', '3664fe9233f9', '3c336d6ba566', '449766346eb1',
'5c088d7e916c', '601559e1777d', '6d6b9d22d48a', '752900afe3a6',
'83fa182bec3a', '8f6f15b9f598', '99b829cbad2d', 'a2b0a64ec9cf',
'a88088855de5', 'a9e5f5314bcb', 'b750c8c1556c', 'c107b5789660',
'd0f613c700f7', 'd93b0c7de16b', 'dff367373725', 'e0686434d029',
'e34b496b84ce', 'ece2561f07e9', 'f7eb179216c2']</code></li>
</ul>
<p><strong>3. Model architecture:</strong><br>
The model architecture was adapted and modified based on this MLP-Residual BiGRU model <a href="https://www.kaggle.com/code/werus23/sleep-critical-point-train/notebook" target="_blank">[4]</a><a href="https://www.kaggle.com/competitions/tlvmc-parkinsons-freezing-gait-prediction/discussion/416410" target="_blank">[5]</a>. <br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6261540%2F4326451a4c802ae15d99fb35f8e039de%2FChild_mind_model_architecture.png?generation=1701992082562879&alt=media" alt=""><br>
<strong>4. Training:</strong></p>
<ul>
<li><p>Optimizer: Adam</p>
<ul>
<li>LR=1.0e-3</li></ul></li>
<li><p>Scheduler: CosineAnnealing</p>
<ul>
<li>T_max=10</li></ul></li>
<li><p>Epochs=10, batch_size=3</p></li>
<li><p>Loss function: <code>BCEWithLogitsLoss()</code></p></li>
<li><p>Evaluation metrics: <code>Event Detection AP</code>, <code>precision_score(average='macro')</code><br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6261540%2Ff326b99e48fb43ea867cad1343ff4705%2Floss.png?generation=1702001203166923&alt=media" alt=""><br>
<strong>5. Inference:</strong></p></li>
<li><p>Used the same preprocessing steps as training phase, except there was no truncating in the test series.</p></li>
<li><p>Inferred with batch size = 1, and input_size=(17280,18)</p></li>
<li><p>Filtered out insignificant predictions</p></li>
<li><p>Added as many onset/awake positions as possible within the input time frame of 17280 (24 hours)</p></li>
<li><p>Selected only middle point of onset/awake transition periods<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6261540%2F2da3fe65d4c375da4d9282b66fa93c2c%2Fmodel_0.png?generation=1701998529729107&alt=media" alt=""><br>
<strong>6. Results:</strong></p>
<table>
<thead>
<tr>
<th>Fold</th>
<th>Model</th>
<th>Hidden_size</th>
<th>CV</th>
<th>Public LB</th>
<th>Private LB</th>
</tr>
</thead>
<tbody>
<tr>
<td>4</td>
<td>model_0</td>
<td>128</td>
<td>0.743</td>
<td>0.681</td>
<td>0.706</td>
</tr>
<tr>
<td>4</td>
<td>model_1</td>
<td>224</td>
<td>0.737</td>
<td>0.675</td>
<td>0.716</td>
</tr>
<tr>
<td>7</td>
<td>model_2</td>
<td>128</td>
<td>0.748</td>
<td>0.680</td>
<td>0.71</td>
</tr>
<tr>
<td>7</td>
<td>model_3</td>
<td>224</td>
<td>0.745</td>
<td>0.677</td>
<td>0.711</td>
</tr>
</tbody>
</table></li>
<li><p>Ensemble <strong>public LB</strong>: 0.25*(model_0+model_1+mode_2+model_3) = <strong>0.687</strong></p></li>
<li><p>Ensemble <strong>private LB</strong>: 0.25*(model_0+model_1+mode_2+model_3) = <strong>0.716</strong></p></li>
</ul>
<p><strong>7. Conclusion:</strong><br>
Things I tried that were unsuccessful 🫠:</p>
<ul>
<li>WaveNet</li>
<li>UNet</li>
<li>Spectrogram-UNet</li>
<li>Transformer</li>
<li>Soft-labelling</li>
</ul>
<p><strong>References:</strong><br>
<a href="https://www.kaggle.com/code/werus23/sleep-critical-point-prepare-data" target="_blank"><em>[1] Sleep critical point prepare data-werus23</em></a> </p>
<p><a href="https://github.com/tubo213/kaggle-child-mind-institute-detect-sleep-states/blob/main/run/prepare_data.py" target="_blank"><em>[2] Prepare data-tubo213</em></a></p>
<p><a href="https://github.com/tubo213/kaggle-child-mind-institute-detect-sleep-states/blob/main/src/utils/common.py" target="_blank"><em>[3] Common utils funtions-tubo213</em></a></p>
<p><a href="https://www.kaggle.com/code/werus23/sleep-critical-point-train/notebook" target="_blank"><em>[4] Sleep critical point train-werus23</em></a></p>
<p><a href="https://www.kaggle.com/competitions/tlvmc-parkinsons-freezing-gait-prediction/discussion/416410" target="_blank"><em>[5] 4th Place Solution: a MultiLayer Bidirectional GRU with Residual Connections-zinxira</em></a></p>
<p>Thanks for reading! 😀</p> |
Child Mind Institute - Detect Sleep States | 5th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Child Mind Institute - Detect Sleep States
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to Kaggle and the competition host for this interesting competition, which could be approached by a variety of methods; it was an enjoyable journey to explore this dataset.<br>
And also great thanks to the people who shared insights and ideas during the competition, I learnt a lot from you all.</p>
<p><strong>overview</strong><br>
I approach this problem in four stages: </p>
<ol>
<li>candidate step generation by heuristic rules </li>
<li>step correct model to improve step quality </li>
<li>score model generate score for submission </li>
<li>post-processing </li>
</ol>
<p><strong>candidate generation</strong><br>
The basic ideas of the heuristic rules are listed below; you can check the code for more details and tricks.</p>
<ul>
<li>region with |diff(anglez)|<5: start -> onset, end -> wakeup. <br>
Onset/wakeup is not likely to happen in the middle of an inactive region. Regions with suspicious fake data (the rules to detect such regions are described later) are removed.</li>
<li>fake region extended by inactive region: start->wakeup, end->onset. <br>
One explanation is that the behavior of removing the watch can only happen when one is awake.</li>
<li>current candidates are shifted by +/-720 when there are no other candidates within a range of 720*2.</li>
</ul>
<p>The three rules generate 254653 candidate steps in total for all series, with a best possible score of 0.9006.</p>
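<p>A minimal sketch of the first rule (boundaries of inactive regions as candidates), assuming a per-series anglez array. The threshold of 5 comes from the rule above, while the minimum run length is illustrative, and the fake-data filtering and other tricks of the real pipeline are omitted.</p>
<pre><code>import numpy as np

def inactive_region_candidates(anglez: np.ndarray, thr: float = 5.0, min_len: int = 360):
    inactive = np.abs(np.diff(anglez, prepend=anglez[0])) < thr
    # locate the starts and ends of inactive runs
    change = np.diff(inactive.astype(int), prepend=0, append=0)
    starts, ends = np.flatnonzero(change == 1), np.flatnonzero(change == -1) - 1
    onset_cands, wakeup_cands = [], []
    for s, e in zip(starts, ends):
        if e - s + 1 >= min_len:
            onset_cands.append(s)   # start of inactivity -> onset candidate
            wakeup_cands.append(e)  # end of inactivity -> wakeup candidate
    return onset_cands, wakeup_cands
</code></pre>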
<p><strong>step correct model</strong></p>
<ul>
<li>Lightgbm L1 regression with (nearest_target_step-step) as target</li>
<li>since we are only interested in the quality of steps matched with a target, the weight of data points with |target|>=360 is set to zero</li>
<li>data points are weighted by threshold_class_width^(-6) to focus on the ones near the target, as I don't think accurately predicting the gap is possible when the true target is far away (threshold_class_width e.g.: 120<y=127<150, then the respective class_width is (150-120)=30). There are also many other choices of weights that can reach similar performance based on my experiments.</li>
<li>As minute%15 and second are the most important features, I think this stage mainly corrects for the bias in the label generation process rather than actually improving the step quality. Most of the gains of this stage actually come from shifting the step by ~-11, which is not reflected in the feature importance list.</li>
<li>To validate the next stage without leakage, CV is done in an inner fold with 5-fold group-k-fold by series_id, so 2*5*5 models in total.</li>
</ul>
<p>Best possible score after this stage is 0.932.</p>
<p><strong>score model</strong></p>
<ul>
<li>Lightgbm with cross-entropy objective</li>
<li>target=max(0,1-|nearest_target_step-step|/360) for data points nearest to each target else target is set to zero.</li>
<li>negative data points (target=0) are down-weighted by *0.7 / *0.4 for those with |nearest_target_step-step| > / < 360.</li>
<li>the target is created on the corrected step given by the out-of-fold prediction from the last stage.</li>
<li>I create different models for onset and wakeup candidates and use 5fold group-k-fold by series_id to do validation, so 2*5 models in total.</li>
</ul>
<p>I compute competition metric after post processing, so no score is reported here, but It is easy to add one.</p>
<p><strong>post-processing</strong><br>
this consists of three parts:</p>
<ul>
<li>if step%12==0 then shift the step by +/-1 to match more target steps.</li>
<li>for two candidates with gap < 720 only keep them when: <br>
<code>min(|gap|/720,1)*(exp_score1*exp_score2/(exp_score1**2+exp_score2**2))**0.5>0.083</code><br>
here <br>
<code>exp_score=np.exp(np.arctanh(2*score-1))</code><br>
else remove the one with the smaller score (see the sketch after this list).<br>
This avoids too dense candidates and only works when the model's preference between the candidates is clearly biased.</li>
<li>If the score sum in a day for some series is larger than 1, then all scores in that day are divided by the score sum.</li>
</ul>
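<p>A hedged sketch of the second rule, implementing the gap/score criterion exactly as written above; the helper names are illustrative.</p>
<pre><code>import numpy as np

def exp_score(score: float) -> float:
    # assumes 0 < score < 1
    return float(np.exp(np.arctanh(2 * score - 1)))

def keep_both(step1, score1, step2, score2, max_gap=720, thr=0.083) -> bool:
    gap = abs(step1 - step2)
    if gap >= max_gap:
        return True
    e1, e2 = exp_score(score1), exp_score(score2)
    crit = min(gap / max_gap, 1) * (e1 * e2 / (e1 ** 2 + e2 ** 2)) ** 0.5
    return crit > thr  # otherwise drop the candidate with the smaller score
</code></pre>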
<p>After post-processing CV score 0.825, public LB score 0.783, private LB score 0.844.</p>
<p><strong>more details</strong></p>
<ul>
<li><p><strong>fake region</strong></p>
<ol>
<li>anglez 5min mean (6 decimal accuracy) appears more than once in the same series</li>
<li>anglez 5min std>0.5, to avoid marking some inactive regions as fake</li>
<li>mark regions surrounded by long fake regions as fake, as described on the GGIR website.</li></ol></li>
<li><p><strong>data cleaning</strong></p>
<ol>
<li>remove some series with many wrong labels by manual inspection</li>
<li>data points covered by a long unlabeled tail region are set to weight zero</li>
<li>data points related to targets with an unexpected anglez distribution are set to weight zero (onset followed by an active region, wakeup followed by an inactive region)</li></ol>
<p>Those cases are only removed from the training process; I still include them when computing the CV score.</p></li>
<li><p><strong>features</strong></p>
<p>2040 in total; I only describe the most important ones. The remaining ones may be duplicated, unimportant or useless. You can check the code for more details.</p>
<ol>
<li>window based features: <strong>source_column X stats X window_size X window_operations</strong>, not full combination.<br>
<strong>source_column:</strong> <em>anglez_abs_diff, enmo_abs_diff, anglez_abs_diff_quantile, smoothed_anglez_abs_diff,
anglez_abs_diff_in_given_range, fake_mark, sleep_mark by GGIR heuristic rules, …</em><br>
<strong>stats</strong>: <em>mean/50 quantile/95 quantile/max/min</em><br>
<strong>window_size</strong>: <em>1min,3min,5min,10min,15min,30min,1h,2h,4h,8h,12h</em><br>
<strong>window_operations</strong>: <em>left_side_window(lw), right_side_window(rw), (lw-rw), [(lw-rw)/(lw+rw)], concat(lw,rw), [(lw-rw)/ concat(lw,rw))]</em></li>
<li>time features<br>
weekday, hour, second, minute, second_in_day, step/max_step<br>
minute%15 (as discussed in <a href="https://www.kaggle.com/competitions/child-mind-institute-detect-sleep-states/discussion/444374" target="_blank">https://www.kaggle.com/competitions/child-mind-institute-detect-sleep-states/discussion/444374</a>)</li>
<li>feature stats aggregated in day<br>
mainly fake_mark and sleep_mark</li>
<li>step gap to mean event time in series given by heuristic methods</li>
<li>stats values at the step time</li>
<li>step gap to nearby candidates and feature value gap with some features that ranked high based on feature importance, manually picked.</li></ol></li>
</ul>
<p><strong>Reproduce</strong><br>
re-run following notebooks in order<br>
<a href="https://www.kaggle.com/code/w5833946/cmi-lgb-v9-train-reproduce" target="_blank">https://www.kaggle.com/code/w5833946/cmi-lgb-v9-train-reproduce</a><br>
<a href="https://www.kaggle.com/code/w5833946/cmi-lgb-v9-predict-reproduce" target="_blank">https://www.kaggle.com/code/w5833946/cmi-lgb-v9-predict-reproduce</a></p> |
Child Mind Institute - Detect Sleep States | 77th solution - UNet1D -> WaveNet, Bi-GRU | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Child Mind Institute - Detect Sleep States
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you to the organizer and Kaggle staff for this exciting competition. And congratulations to the winners. I only participated in the last three weeks, but I learned a lot and enjoyed this competition.</p>
<h1>PreProcess</h1>
<ul>
<li>standard scaler (for each subject)</li>
<li>sequence_size = 17,280 (12 * 60 * 24)</li>
<li>sequence overlap = 11,520 (12 * 60 * 16)</li>
<li>prepare features to be used for NN inputs<ul>
<li>anglez</li>
<li>enmo</li>
<li>anglez_diff</li>
<li>enmo_diff</li>
<li>time_unit_sin</li>
<li>time_unit_cos</li></ul></li>
</ul>
<h1>Validation</h1>
<ul>
<li>GroupKFold (k=5)</li>
</ul>
<h1>Model</h1>
<table>
<thead>
<tr>
<th></th>
<th>CV(=5fold)</th>
<th>Public</th>
<th>Private</th>
<th>Weight</th>
</tr>
</thead>
<tbody>
<tr>
<td>UNet1D - WaveNet</td>
<td>0.746</td>
<td>0.735</td>
<td>0.772</td>
<td>0.3375</td>
</tr>
<tr>
<td>UNet1D - BiGRU(num_block=2)</td>
<td>0.731</td>
<td>not submit</td>
<td>not submit</td>
<td>0.14875</td>
</tr>
<tr>
<td>UNet1D - BiGRU(num_block=3)</td>
<td>0.721</td>
<td>0.708</td>
<td>0.754</td>
<td>0.06375</td>
</tr>
<tr>
<td>UNet1D - ViT - BiGRU(num_block=2)</td>
<td>0.728</td>
<td>not submit</td>
<td>not submit</td>
<td>0.0375</td>
</tr>
<tr>
<td>UNet1D - WaveNet - BiGRU(num_block=1)</td>
<td>0.751</td>
<td>0.730</td>
<td>0.770</td>
<td>0.4125</td>
</tr>
<tr>
<td>Ensemble</td>
<td>0.775</td>
<td>0.756</td>
<td>0.786</td>
<td></td>
</tr>
</tbody>
</table>
<h3>Common in all models</h3>
<ul>
<li>Linear Layer at beginning</li>
<li>Downsampling before calculation of loss (negative = positive * 10)</li>
<li>Use hard label (window=12)</li>
<li>Multi Task Learning<ul>
<li>segmentation: “asleep”, “awake”, “not_worn”</li>
<li>event detection: “onset”, “wakeup”</li></ul></li>
<li>I add the segmentation output to the input of the event detection head. After that, I add multiple conv1d layers with various kernel sizes (3, 7, 15) before the head.</li>
<li>Skip Connection is very important.</li>
</ul>
<h3>Each Component</h3>
<ul>
<li><strong>UNet1D</strong><ul>
<li>used skip connection instead concatenate when merge encoder and decoder</li></ul></li>
<li><strong>WaveNet</strong><ul>
<li>block = 2</li>
<li>dilation = 4 ** N (N = 6, 4)</li></ul></li>
<li><strong>Bi-GRU</strong><ul>
<li>Transformer-like architecture ( (Bi-GRU → FeedForward) * num_block, with skip connection and layer normalization)</li></ul></li>
<li><strong>ViT</strong><ul>
<li>patch_size=4</li>
<li>I use ViT for diversity, but the attention-based model didn't work well in my case.</li></ul></li>
</ul>
<h1>Ensemble</h1>
<ul>
<li>Weighted average of the model's output probabilities</li>
</ul>
<h1>PostProcess</h1>
<ul>
<li>Moving Average (window=3)</li>
<li>Find Peaks (using scipy.signal.find_peaks, score_th=0.01, distance=100)</li>
<li>Eliminate time periods when events are unlikely to occur. (Removed predictions outside the following duration)<ul>
<li>onset : 20:00 ~ 12:00</li>
<li>wakeup : 3:00 ~ 20:00</li></ul></li>
</ul> |
Child Mind Institute - Detect Sleep States | 6th Place Solution - BiLSTM-UNet | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Child Mind Institute - Detect Sleep States
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks a lot to the competition host and kaggle for hosting this fun competition!</p>
<h1>Data Processing</h1>
<p>I split the data into separate days with a padding of 60 minutes. The days are shifted by 500 minutes (so that one day starts and ends at 15:40). I did this to avoid wakeup/onset events near the end or start of a day, which I hypothesized to be beneficial to the model (since there is only limited padding in both directions). I discarded 8 series ids which were mostly filled with nan values, and hand picked a first-n-day cutoff for a few series (removing the end of the series whenever there were no events but still regular activity). Since events only ever happened on full minutes, I constructed features for every one-minute bucket of data and generated an array of size (1440, n_feats). The features were mostly basic: anglez, enmo, anglez*enmo, hdcza were selectively paired with min, max, mean, std aggregations to generate 10 features, where hdcza was inspired by this discussion post:<br>
<a href="https://www.kaggle.com/competitions/child-mind-institute-detect-sleep-states/discussion/453267" target="_blank">https://www.kaggle.com/competitions/child-mind-institute-detect-sleep-states/discussion/453267</a></p>
<p><a href="https://www.kaggle.com/ymatioun" target="_blank">@ymatioun</a> made the great discovery that events had different frequencies at different steps of the 15-minute cycle. Therefore, one additional feature is (step // 12) % 15.<br>
Since there were large repeated periods in the data, I used an algorithm that utilized hashing and hashmaps which looked for such patterns in the data:<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6735218%2F5afb15a36f6816d1a90766c8983d30fb%2Frepeat1.PNG?generation=1701822960376424&alt=media" alt=""></p>
<p>Such a feature proved incredibly useful since there were many, non-obvious such repeating patterns in the data.<br>
Here is a plot of enmo (blue), awake (red), and this new features, which indicates whether the data occurs somewhere else in the series or not (black):<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6735218%2F83b03dd0cfbd6bb590934c9a3ad8a71b%2Frepeat2.PNG?generation=1701823230374898&alt=media" alt=""></p>
<p>As you can see, there were many parts of the data which were copy-pasted from somewhere else. I am sure some models will have learned this anyway from the sometimes apparent cuts in the signal, but this of course works much better. The important thing about this feature is that the patient is almost always sleeping when it is 1 (>99%). I didn't do too much in-depth testing on how this affected the score, but local cv indicated that it improved the score by roughly ~0.015. Since the data was repeated and likely not valid anyway, I null out all anglez and enmo values where this feature is 1 to prevent overfitting.</p>
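<p>A minimal sketch of the repeated-segment idea: hash fixed-length windows of the raw signal and mark any window whose content occurs more than once in the series. The window length and rounding are illustrative and this is not the author's exact hashing algorithm.</p>
<pre><code>import numpy as np
from collections import defaultdict

def mark_repeated_windows(anglez: np.ndarray, enmo: np.ndarray, win: int = 360) -> np.ndarray:
    repeated = np.zeros(len(anglez), dtype=np.int8)
    buckets = defaultdict(list)
    for start in range(0, len(anglez) - win + 1, win):
        key = hash((anglez[start:start + win].round(3).tobytes(),
                    enmo[start:start + win].round(3).tobytes()))
        buckets[key].append(start)
    for starts in buckets.values():
        if len(starts) > 1:          # the same window content occurs elsewhere in the series
            for s in starts:
                repeated[s:s + win] = 1
    return repeated
</code></pre>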
<h1>Model Architecture</h1>
<p>I used 4 model architectures which were all very similar and mostly based on Bidirectional LSTM layers. I used a 6 fold split. </p>
<p>My main model used a UNet LSTM/Transformer architecture with a fully connected dimension of 1200 and an LSTM/Transformer dim of 300. In contrast to the typical UNet implementation, I added the layers in the skip connection rather than concatenating.</p>
<h2>Main model:</h2>
<p>This is the model pipeline used by the best single models. The other 3 architectures I used are very similar (with varying layer counts and dimensions), except 1 architecture which doesn't use the UNet.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6735218%2Fdf7fde6fec78f361a99cad3a934031b6%2Fmodel1.PNG?generation=1702143119651974&alt=media" alt=""></p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6735218%2F40cbb13011a969d0f9f1a6969af1bb07%2Fmodel2.PNG?generation=1702142987749483&alt=media" alt=""></p>
<p>1 day of data in 1-minute intervals (60 minutes padded in both directions), shape: (batch_size, 1440 + 60*2, n_feats) -> fully connected layers -> BiLSTM -> max pooling (pool size 15) -> 2 LSTM layers -> max pooling (pool size 2) -> transformer block -> upsampling (factor 2) -> skip -> 2 LSTM layers -> upsampling (factor 15) -> skip -> LSTM layer -> fully connected layers -> output layer</p>
<p>The 'skip' connections indicate that the output from the corresponding downsampling stage is added onto the current sequence in the upsampling stage.</p>
<p>The minute-mod-15 feature was used to calculate an embedding which was added to the sequence before, and after the UNet. Also, I used the mish activation most of the time.</p>
<p>The other models had slight variations, e.g. leaving out the UNet, changing dimensions, increasing layer count.<br>
Some variations also include a sub-lstm layer, which applies an LSTM layer on all 180 minutes intervals within a day, with 120 minute overlaps between the intervals.</p>
<h2>Loss function and output</h2>
<p>The model predicted if an onset or wakeup event occurred within 0, 1, 3, 5, 7 minutes of the current step. I use a weighted BCE loss which weights the loss of the different minute thresholds by 1/(threshold+1). The prediction is the sum of all the probabilities that an event is in the 0, 1, 3, 5, 7 threshold. This is done for wakeup and onset separately so the model had 10 outputs per step.</p>
<h1>Postprocessing</h1>
<p>The model output itself had dimension (minutes_in_series, 10). This is the process of retrieving predictions from the model output (a sketch follows the list):</p>
<ol>
<li>Concatenate the model predictions from each day in a series.</li>
<li>Take the sum of the prediction that an event is in 0, 1, 3, 5, 7 minutes range at each step, for onset and wakeup respectively.</li>
<li>Until there are no predictions larger than 5e-5 left:<br>
a. Select the highest prediction.<br>
b. Null out all predictions within a range of 10 minutes of the prediction.</li>
<li>Since we are still in minute space, multiply by 12 to obtain the correct steps.</li>
<li>As was pointed out in the discussion forums, predictions for full minutes are punished since the metric doesn’t count a prediction as within a threshold, if it is directly on a threshold. Therefore, to maximize the number of possible thresholds we may be in, it is beneficial to add or subtract 1. This is decided by looking at the neighboring prediction.</li>
<li>Use the summed predictions from step 2 as scores for the retrieved steps.</li>
</ol>
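<p>A hedged sketch of steps 3-5 of that procedure (greedy peak extraction with a 10-minute suppression window and the ±1-step adjustment); names and edge handling are illustrative.</p>
<pre><code>import numpy as np

def extract_predictions(prob_min: np.ndarray, min_score: float = 5e-5, suppress: int = 10):
    # prob_min: per-minute probabilities for one event type in one series
    prob = prob_min.copy()
    preds = []
    while prob.max() > min_score:
        m = int(prob.argmax())
        preds.append((m, float(prob_min[m])))
        lo, hi = max(0, m - suppress), min(len(prob), m + suppress + 1)
        prob[lo:hi] = 0.0                       # null out a 10-minute window around the pick
    out = []
    for minute, score in preds:
        step = minute * 12                      # back to 5-second steps
        left = prob_min[minute - 1] if minute > 0 else 0.0
        right = prob_min[minute + 1] if minute + 1 < len(prob_min) else 0.0
        step += 1 if right >= left else -1      # avoid landing exactly on a threshold boundary
        out.append((step, score))
    return out
</code></pre>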
<p>Thanks to <a href="https://www.kaggle.com/maruichi01" target="_blank">@maruichi01</a> for pointing that out! The discussion related to this is here:<br>
<a href="https://www.kaggle.com/competitions/child-mind-institute-detect-sleep-states/discussion/446919#2487136" target="_blank">https://www.kaggle.com/competitions/child-mind-institute-detect-sleep-states/discussion/446919#2487136</a></p>
<h1>Training</h1>
<p>I used the AdamW optimizer with a weight decay of 1e-7 and an lr of 1e-3 that decayed exponentially. It helped training stability to clip the norm of the gradient to 1e-2. Regular dropout was used, plus a kind of dropout that randomly masked 15-minute segments with a probability of 15-25%. I trained on 4 seeds and picked the best seed of each architecture within each fold. I also refitted the best 2 architectures using the entire data.</p>
<h1>Other stuff</h1>
<p>Like Chris mentioned in his write-up, the discovery that we can make a lot of predictions per event was also very important. I first had a decoder model, which used the previous predictions to generate the next one using a probability distribution over all future steps. This worked quite well, but only if just the maximum prediction had been counted. So I discarded this idea after discovering that we can make a lot of predictions per event without getting punished too much.<br>
My local cv is ~0.833.</p>
<h1>Things that didn't work</h1>
<p>Like I said earlier, I first worked on a decoder model to predict the next change event, which didn't work so well since it was beneficial to make many predictions per night. I also worked on stacking the model with an LGBM, but that only gave tiny improvements so I avoided the idea to not add too much complexity, though if I had more time I would have probably reconsidered it.<br>
Since the ranking of the predictions was important, I tried other losses, e.g. a Pearson correlation loss, but that did not work better.</p>
<p>I hope I covered the most important parts and didn't forget anything. I tried a lot of things I may add later. I am very happy I managed to get my second solo Gold today, thus becoming a Kaggle master 😃</p>
<p>Edit:<br>
Link to git repo:<br>
<a href="https://gitlab.com/fritz_cremer/cmidss_final" target="_blank">https://gitlab.com/fritz_cremer/cmidss_final</a></p> |
RSNA 2023 Abdominal Trauma Detection | 51 Place Solution for the RSNA 2023 Abdominal Trauma Detection | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA 2023 Abdominal Trauma Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>This solution only focused on solid organ classification because I didn't find a good solution for the bowel and extravasation. In the final submission, the bowel and extravasation predictions are the same as the mean baseline.</p>
<h1>Overview of the Approach</h1>
<ul>
<li>2D UNet Segmentation (<a href="https://github.com/qubvel/segmentation_models.pytorch" target="_blank">Pytorch Segmentation Model</a> with efficientnet-b0) + Bbox crop</li>
<li>2.5D CNN classification (timm EfficientNetV2-s + FC head)</li>
<li>4-fold Ensemble</li>
</ul>
<h1>Details of the submission</h1>
<h2>2D UNet Segmentation</h2>
<p>A 2D UNet is trained on the front view (coronal) slices from the given segmentations. This is because my main goal is to roughly crop the solid organs from the full height CT scans to avoid extra dataloading and remove noise for classification training.</p>
<p>The model is trained on 256x256 resolution because the pixel-wise accuracy is not critical for getting the bounding box, which makes the training very fast (<1hr).</p>
<h3>GT vs predicted mask:</h3>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3964695%2F07ac8b5d37ea68487242c15b67c549f3%2FScreenshot%20from%202023-11-02%2011-37-46.png?generation=1698943423363415&alt=media"></p>
<p>The bbox of each slice is retrieved from the segmentation mask. The solid-organ bbox of the entire CT scan is found from the union of the slice bboxes plus a small margin.</p>
<p>All the solid organ volumes are then cropped from the full-height scans and saved as 3D arrays (.npy) for classification training and inference.</p>
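<p>A minimal sketch of turning the predicted mask into one crop box for the whole scan (equivalent to taking the union of the per-slice boxes plus a margin); it assumes a binary mask volume aligned with the CT volume, and the margin value is illustrative.</p>
<pre><code>import numpy as np

def union_bbox_3d(mask_volume: np.ndarray, margin: int = 10):
    # mask_volume: binary solid-organ segmentation, same shape as the CT volume
    coords = np.argwhere(mask_volume > 0)
    lo = np.maximum(coords.min(axis=0) - margin, 0)
    hi = np.minimum(coords.max(axis=0) + margin, np.array(mask_volume.shape))
    return lo, hi

def crop_to_bbox(ct_volume: np.ndarray, lo, hi) -> np.ndarray:
    z0, y0, x0 = lo
    z1, y1, x1 = hi
    return ct_volume[z0:z1, y0:y1, x0:x1]
</code></pre>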
<h2>2.5D CNN Classification</h2>
<p>A 2.5D CNN Classification model is trained on the cropped solid organ volume.<br>
Each volume is resized and augmented by the data loader. (Some samples have a large number of slices (tensor height); it is faster to do the interpolation before the augmentations.)</p>
<h3>DataLoader Example (Batch x 160 x 352 x 352):</h3>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F3964695%2F4b5f19a8b6cb0943b748898573e777df%2FScreenshot%20from%202023-11-02%2011-38-15.png?generation=1698943501191093&alt=media" alt=""></p>
<p>The model is trained at the series level and the final prediction for each patient is calculated from the mean of the series predictions.</p>
<h1>Sources</h1>
<p><a href="https://github.com/qubvel/segmentation_models.pytorch" target="_blank">Pytorch Segmentation Model</a> </p> |
Kaggle - LLM Science Exam | 317th Place Solution for the LLM Science Exam | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Kaggle - LLM Science Exam
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to Kaggle for hosting the <a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam" target="_blank">LLM Science competition</a>, which is about answering multiple-choice questions written by an LLM. Despite joining my first Kaggle competition just a month before the deadline and not achieving good results😅, I learned a lot about LLMs, including how to train them on small GPUs and make effective inferences. The experience was still awesome, and I'm excited to share the lessons I learned from my experiments😁. </p>
<h1>Context</h1>
<ul>
<li>Business context: <a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam" target="_blank">https://www.kaggle.com/competitions/kaggle-llm-science-exam</a></li>
<li>Data context: <a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam/data" target="_blank">https://www.kaggle.com/competitions/kaggle-llm-science-exam/data</a></li>
</ul>
<h1>Overview of the Approach</h1>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F16611574%2F84814798a66c9a4555d6521bf647cc8a%2FPicture1.jpg?generation=1697668918085576&alt=media" alt="Workflow"></p>
<p>The figure illustrates the LLM-based question answering system. For each question, we first build up a prompt that consists of: (1) Instruction, (2) Background context relevant to the question, (3) Question and (4) Answer options. Second, we pass the prompt to the LLM, which is run one layer after another to save memory space. Finally, the LLM returns the scores of each answer option, and we select the top three options as the final result. The final solution is the Platypus2-70B-Instruct model with Retrieval-Augmented Generation (RAG) and Weight-Loader, with an accuracy of 0.888. The notebook is published publicly <a href="https://www.kaggle.com/code/minhsienweng/patypus2-70b-wiki-retrieval-weight-loader" target="_blank">here</a>. </p>
<p><strong>Step 1.</strong> Retrieval-Augmented Generation (RAG) retrieves Wikipedia texts as background context for each question to help the LLM generate accurate predictions.<br>
(a) Generate the question embeddings by concatenating the question and answer options, then converting them using the sentence transformer model <code>all-MiniLM-L6-v2</code>.<br>
(b) Search the Wikipedia embeddings to find the top 5 most similar Wikipedia texts to the question embedding, based on cosine similarity.<br>
(c) Store the retrieved Wikipedia texts to be used as the background context of the question.</p>
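<p>A minimal sketch of Step 1 with the sentence-transformers library, assuming the Wikipedia embedding matrix has been precomputed with the same model; the top-k of 5 follows the description above.</p>
<pre><code>from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

def retrieve_context(question: str, options: list, wiki_texts: list, wiki_emb, top_k: int = 5):
    # embed the question together with its answer options
    query = question + " " + " ".join(options)
    q_emb = model.encode(query, convert_to_tensor=True)
    # cosine-similarity search against the precomputed Wikipedia embeddings
    hits = util.semantic_search(q_emb, wiki_emb, top_k=top_k)[0]
    return [wiki_texts[h["corpus_id"]] for h in hits]
</code></pre>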
<p><strong>Step 2.</strong> Construct the prompt for each question<br>
For each question, create a prompt using <code>question text</code>, <code>answer options</code> and <code>background context</code> (relevant Wikipedia texts). Here is an example of question prompt.<br>
<code>Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
Instruction: Your task is to analyze the question and answer below. If the answer is correct, respond yes; if it is not correct, respond no. As a potential aid to your answer, background context from Wikipedia articles is at your disposal, even if it might not always be relevant.
Input: Background context: {context}
Question: {question}
Proposed answer: {answers['A']} {answers['B']} {answers['C']} {answers['D']} {answers['E']}</code></p>
<p><strong>Step 3.</strong> Predict the answer to a question by passing the prompt to the LLM<br>
(a) Run the LLM model layer by layer. The <code>Platypus2-70B</code> model is too large (140GB) for T4 GPUs, so we split it into layers to save memory and run one layer at a time. Each layer is roughly 1.7GB in size, which is small enough to fit on most GPUs.</p>
<p>(b) The question prompt is passed to the LLM. The LLM returns a probability distribution normalised over all answer options. We sort the answer options by their probabilities and select the top three for the question as submission result. </p>
<p>The prediction process is slow and memory hungry, so we use multithreading to parallelize prediction tasks, caching to reuse results, and memory clearing to reduce memory usage after each layer is finished.</p>
<h1>Details of the submission</h1>
<p>I tried the below solutions apart from the final solution.</p>
<h2>Fine-tuned LLMs</h2>
<p>I first tried the <code>flan-t5-base</code> and <code>flan-t5-large</code> models without fine-tuning, but their accuracy was poor (0.46 and 0.33). So I decided to fine-tune the models. Since fine-tuning a large model takes up a lot of time, I switched to the smaller <code>bert-base-uncased</code> model (110M parameters) as an experiment and fine-tuned it on the given training dataset, which improved the accuracy to 0.55. As the given training dataset is small (200 questions), another attempt was to include 6,500 extra examples in the training dataset, but this only resulted in a small improvement (0.56). This is likely because the fine-tuned LLMs alone could not handle such a complex task and need more background text to make the predictions.</p>
<h2>Short Sentence texts as background context</h2>
<p>On the discussion forum, Retrieval-Augmented Generation (RAG) has been shown to improve LLM accuracy and efficiency by finding relevant Wikipedia data as context, supplementing LLMs with reliable information, and reducing model retraining costs.</p>
<p>The RAG approach was used to retrieve Wikipedia sentences relevant to each question, based on cosine similarity. The retrieved sentences were then combined with the question to form a prompt for the model. Our solution used the <code>deberta-v3-large model</code> (304M), which resulted in a significant improvement in accuracy (to 0.75). A voting ensemble of pretrained models was also experimented with, but the improvement in accuracy was slight (to 0.78). RAG can improve accuracy, but its improvement is limited because the sentences it retrieves may come from different articles and may not be well-connected to provide useful information. Therefore, we should provide them with longer and more coherent paragraph texts, rather than individual sentences. This will allow LLMs to infer the answer from the context of the longer text and improve their accuracy.</p>
<h2>Long Paragraph texts as background context</h2>
<p>The next solution used TF-IDF to retrieve paragraph texts from Wikipedia articles as background contexts for a question. As the paragraph texts are longer than the maximum length of the transformer BERT model (512 tokens), we used the LongFormer model to process these long paragraph texts (4096 tokens or longer). Additionally, to increase diversity, the solution used two different datasets and combined the model predictions from these two datasets as the final result. Our solution used the Longformer-RACE model, a smaller language model (2GB), to achieve an accuracy of 0.85 with a faster execution time (approximately 40 minutes on 2 T4 GPUs). This model is considerably smaller, so in our next (final) solution, we experimented with a larger language model to improve the performance of our solution.</p>
<h2>Very Large LLM</h2>
<p>In our final solution, we used a very "large" <code>Platypus2-70B-Instruct</code> model that contains 70 billion parameters and amounts to 140 GB. The accuracy improved to 0.888, even when the model was given short sentence texts as background context. However, the downsides of this solution are the long execution time (200 questions take 3 hours on 2 T4 GPUs) and high memory demands.</p>
<p>Because I had run out of my weekly quota and also had another competition close to its deadline, I was not able to improve the final solution further. My next attempt would be to pass the long paragraph texts to this "large" LLM and see if we can obtain better accuracy.</p>
<h1>Sources</h1>
<p>The final solution was inspired by the notebook by <a href="https://www.kaggle.com/simjeg" target="_blank">@simjeg</a> and the LongFormer solution by <a href="https://www.kaggle.com/MB" target="_blank">@MB</a></p>
<ul>
<li><a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam/discussion/440620" target="_blank">https://www.kaggle.com/competitions/kaggle-llm-science-exam/discussion/440620</a></li>
<li><a href="https://www.kaggle.com/competitions/kaggle-llm-science-exam/discussion/442595" target="_blank">https://www.kaggle.com/competitions/kaggle-llm-science-exam/discussion/442595</a></li>
</ul> |
CommonLit - Evaluate Student Summaries | 429th Place Solution for Evaluate Student Summaries Competition | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: CommonLit - Evaluate Student Summaries
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to Kaggle for hosting Evaluate Student Summaries, a competition that assesses the quality of summaries written by students in grades 3-12. Despite joining this competition just a month before the deadline and being busy with another competition, I did not achieve good results, but I learned a lot about how to train a model to automatically evaluate student summaries in terms of the <em>content score</em> (how relevant the content of a student summary is to the original text) and the <em>wording score</em> (spelling, grammar errors, word overlapping…). </p>
<h1>Context</h1>
<p>Business context: <a href="https://www.kaggle.com/competitions/commonlit-evaluate-student-summaries" target="_blank">https://www.kaggle.com/competitions/commonlit-evaluate-student-summaries</a><br>
Data context: <a href="https://www.kaggle.com/competitions/commonlit-evaluate-student-summaries/data" target="_blank">https://www.kaggle.com/competitions/commonlit-evaluate-student-summaries/data</a></p>
<h1>Overview of the Approach</h1>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F16611574%2Fecf091a2f0599e26d3b37d2ddbb7d210%2FPicture1.jpg?generation=1698114172221971&alt=media" alt="Workflow"></p>
<p>The LLM-based summary evaluation system works as follows:</p>
<ol>
<li><p>Fix misspellings in student summaries and collect features for evaluation.</p></li>
<li><p>Fine-tune a pre-trained LLM and train two models on training datasets. </p></li>
<li><p>Predict student summary scores in test dataset using fine-tuned LLM and LightGBM models</p>
<p>The best solution I found for the Evaluate Student Summaries competition was a combination of <code>Deberta-v3-base</code> and <code>LightGBM</code> models. This solution achieved an accuracy of 0.488, with a training time of 3.8 hours on a P100 GPU. I have published my notebook <a href="https://www.kaggle.com/code/minhsienweng/debert-v3-base-lightgbm-model-training?scriptVersionId=146058874" target="_blank">here</a>.</p></li>
</ol>
<p><strong>Step 1.</strong> Text preprocessing collects and fixes grammar/spelling errors in student summaries.<br>
Text preprocessing collects features, such as the number of incorrect words, overlapping words, and named entities (people, places or medical codes), from student summaries and original texts. It then uses <code>pyspellchecker</code> to find spelling errors and fixes incorrect words using <code>autocorrect</code>. The features include (1) text length, (2) misspellings, (3) N-gram overlapping (tri-gram, bi-gram and word), (4) grammar errors, (5) quote overlapping.</p>
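<p>A hedged sketch of the misspelling feature and the correction step with <code>pyspellchecker</code> and <code>autocorrect</code>; the function name and the returned feature set are illustrative.</p>
<pre><code>from spellchecker import SpellChecker
from autocorrect import Speller

spell = SpellChecker()
corrector = Speller(lang="en")

def spelling_features(text: str):
    words = text.split()
    misspelled = spell.unknown(words)           # words not found in the dictionary
    corrected = corrector(text)                 # auto-corrected version of the summary
    return {"n_misspelled": len(misspelled),
            "misspell_ratio": len(misspelled) / max(1, len(words)),
            "corrected_text": corrected}
</code></pre>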
<p><strong>Step 2.</strong> Fine-tune pretrained LLM model and then train two models with training datasets for content and wording scores.</p>
<p>Fine-tune the pretrained <code>Deberta-v3-base</code> using 4-fold group cross-validation (<code>GroupKFold</code>) to predict content and wording scores for student summaries training dataset. The fine-tuned model takes <code>prompt title</code> [SEP] <code>prompt question</code> [SEP] <code>student summary</code> as input, and predicts their content and wording scores as output, and the evaluation is based on mean squared error. </p>
<p>Next, train two <code>LightGBM</code> models on the training dataset, one to predict content score and the other to predict wording score. Both models predict using the same fold groups, based on the features from Step 1 and the scores from fine-tuned model, and are then evaluated using the same metric.</p>
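<p>A minimal sketch of this second stage, assuming the fine-tuned Deberta scores have already been merged into a feature table (function and column names below are illustrative, not the author's code):</p>
<pre><code>import lightgbm as lgb
import numpy as np
from sklearn.model_selection import GroupKFold

def train_lgbm(X, y, groups, n_splits=4):
    oof = np.zeros(len(y))
    models = []
    for tr_idx, va_idx in GroupKFold(n_splits=n_splits).split(X, y, groups):
        model = lgb.LGBMRegressor(objective="regression", n_estimators=1000, learning_rate=0.05)
        model.fit(
            X.iloc[tr_idx], y.iloc[tr_idx],
            eval_set=[(X.iloc[va_idx], y.iloc[va_idx])],
            callbacks=[lgb.early_stopping(50, verbose=False)],
        )
        oof[va_idx] = model.predict(X.iloc[va_idx])
        models.append(model)
    return models, oof

# one model per target, same fold groups (grouped by prompt) for both
# content_models, _ = train_lgbm(features, train["content"], train["prompt_id"])
# wording_models, _ = train_lgbm(features, train["wording"], train["prompt_id"])
</code></pre>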
<p><strong>Step 3.</strong> Predict content and wording scores using the fine-tuned and LightGBM models. <br>
For the testing dataset, we predict content and wording scores for student summaries using the fine-tuned and LightGBM models, and average the predictions to produce the final score.</p>
<h1>Details of the submission</h1>
<p>I tried the below approaches apart from the final solution.</p>
<h2>Deberta-V3-large model</h2>
<p>We used the <code>Deberta-V3-large</code> model without any fine-tuning to predict scores on the test dataset. The score was very poor (1.08). This indicates that LLMs require further information to make accurate predictions.</p>
<h2>Sentence Transformer + XGBoost</h2>
<p>We used Sentence Transformer to extract text similarity and word diversity features from student summaries and original texts, then trained XGBoost and BERT models to predict content and wording scores. BERT outperformed XGBoost with a score of 0.59, suggesting fine-tuning can improve accuracy.</p>
<h2>Text Preprocessing + Fast-AI training</h2>
<p>We applied the FastAI tabular library to simplify the training process. We used the same text processing as the final solution to extract features and fix misspellings. Next, we used a FastAI tabular learner to train a model with layers <code>[300, 200]</code>, using mean absolute error (<code>MAE</code>) as the loss function. The trainer fit the training data and found the optimal learning rate, using a random 30/70 split for validation. However, the submission threw exceptions, even though the notebook ran successfully and produced a submission file.</p>
<h1>Sources</h1>
<p>The final solution was inspired by</p>
<ul>
<li><a href="https://www.kaggle.com/code/dangnguyen97/tuned-debertav3-lgbm" target="_blank">https://www.kaggle.com/code/dangnguyen97/tuned-debertav3-lgbm</a></li>
<li><a href="https://www.kaggle.com/code/yusuphmustaphaladi/implementing-fastai-tabular" target="_blank">https://www.kaggle.com/code/yusuphmustaphaladi/implementing-fastai-tabular</a></li>
</ul> |
RSNA 2023 Abdominal Trauma Detection | 97th Place Solution for the RSNA 2023 Abdominal Trauma Detection | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: RSNA 2023 Abdominal Trauma Detection
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><strong>Context</strong></p>
<p>Business context: <a href="https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection/overview" target="_blank">https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection/overview</a><br>
Data context: <a href="https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection/data" target="_blank">https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection/data</a></p>
<p><strong>Overview of the Approach</strong></p>
<p>The final model is a ResNet 2.5D + LSTM (1 layer with <code>hidden_size = 128</code>), trained with <code>volume = (128, 64, 64)</code>, <code>window_step = 2</code> and <code>window_width = 3</code>.</p>
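<p>A small sketch of what "2.5D" with <code>window_step = 2</code> and <code>window_width = 3</code> means here (illustrative only; the exact slicing in the original pipeline may differ):</p>
<pre><code>import numpy as np

def make_25d_windows(volume, window_width=3, window_step=2):
    # volume: (D, H, W) CT volume resampled to (128, 64, 64)
    windows = []
    for start in range(0, volume.shape[0] - window_width + 1, window_step):
        windows.append(volume[start:start + window_width])   # 3 consecutive slices = 3 channels
    return np.stack(windows)                                  # (num_windows, 3, H, W)

print(make_25d_windows(np.zeros((128, 64, 64), dtype=np.float32)).shape)  # (63, 3, 64, 64)
</code></pre>
<p>Each 3-slice window is then treated as a 3-channel image by the ResNet encoder, and the per-window embeddings form the sequence consumed by the LSTM.</p>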
<p><strong>Details of the submission</strong></p>
<p>The <em>sigmoid</em> activation function was used for the outputs of <code>bowel_injury</code> and <code>extravasation_injury</code>, while <em>softmax</em> was used for the other targets.</p>
<p><strong>Sources</strong></p>
<p><a href="https://www.kaggle.com/code/ayushs9020/understanding-the-competition-rsna" target="_blank">https://www.kaggle.com/code/ayushs9020/understanding-the-competition-rsna</a> — nice EDA notebook<br>
<a href="https://www.kaggle.com/code/awsaf49/rsna-atd-cnn-tpu-train" target="_blank">https://www.kaggle.com/code/awsaf49/rsna-atd-cnn-tpu-train</a> — train EfficientNet on TPU<br>
<a href="https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection/discussion/447449" target="_blank">https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection/discussion/447449</a> — 1st place solution</p> |
Stanford Ribonanza RNA Folding | 4th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Stanford Ribonanza RNA Folding
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, a big thank you to Kaggle staff and host for providing a fun competition. <br>
<a href="https://www.kaggle.com/tattaka" target="_blank">tattaka</a> and <a href="https://www.kaggle.com/fuumin621" target="_blank">monnu</a> were able to complete their journey to Competitions Grandmaster with this result.</p>
<h2>Summary</h2>
<ul>
<li>Various models ensemble<ul>
<li>Modified RNAdegformer proposed by Shujun</li>
<li>1D Conv & Residual BPP Attention</li>
<li>Transformer with BPP Attention Bias</li></ul></li>
<li>bp_matrix generated by eternafold (partially contrafold)<ul>
<li>I didn't realize until halfway through that it was being provided.</li></ul></li>
<li>finetuning with higher s/n threshold</li>
<li>Pseudo Labeling using reactivity error prediction</li>
</ul>
<h2>Modified RNAdegformer</h2>
<p>code: <a href="https://github.com/tattaka/stanford-ribonanza-rna-folding-public" target="_blank">https://github.com/tattaka/stanford-ribonanza-rna-folding-public</a> </p>
<h3>Input</h3>
<ul>
<li>Sequence</li>
<li>BPP Matrix (by EternaFold and Contrafold)</li>
</ul>
<h3>Architecture</h3>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1745801%2Fd89808656d85bd143d94055a543a5a55%2F2023-12-09%2018.25.55.png?generation=1702114143461138&alt=media" alt=""><br>
Made some changes to the <a href="https://academic.oup.com/bib/article/24/1/bbac581/6986359" target="_blank">RNAdegformer</a>(<a href="https://github.com/Shujun-He/RNAdegformer" target="_blank">https://github.com/Shujun-He/RNAdegformer</a>) proposed by <a href="https://www.kaggle.com/competitions/stanford-covid-vaccine/discussion/189564" target="_blank">Shujun</a>, including the order of layers.</p>
<ul>
<li>kernel_size = 7 except for the last transformer, which is 1</li>
<li>postnorm</li>
<li>GLU family activation</li>
<li><a href="https://arxiv.org/abs/2108.12409" target="_blank">ALiBi</a> positional encoding is applied separately head from bp_matrix</li>
<li>Other minor changes varied between the ensemble members (see the table below)</li>
</ul>
<table>
<thead>
<tr>
<th>model name</th>
<th>error prediction for pseudo labeling</th>
<th>use pseudo label</th>
<th>act_fn for feedforward</th>
<th>norm layer</th>
<th>add norm and act for conv1d</th>
<th>use contrafold(second bpps)</th>
<th>connect attn_weight to bpps bias</th>
<th>CV</th>
</tr>
</thead>
<tbody>
<tr>
<td>exp064</td>
<td>yes</td>
<td>no</td>
<td>SwiGLU</td>
<td>Layernorm</td>
<td>no</td>
<td>no</td>
<td>no</td>
<td>0.12087</td>
</tr>
<tr>
<td>exp070</td>
<td>no</td>
<td>exp064</td>
<td>SwiGLU</td>
<td>RMSnorm</td>
<td>yes</td>
<td>no</td>
<td>no</td>
<td>0.1199 / tiny: 0.12143</td>
</tr>
<tr>
<td>exp071</td>
<td>yes</td>
<td>no</td>
<td>GeGLU</td>
<td>RMSnorm</td>
<td>yes</td>
<td>yes</td>
<td>no</td>
<td>0.12146</td>
</tr>
<tr>
<td>exp072</td>
<td>no</td>
<td>exp064 + exp071</td>
<td>GeGLU</td>
<td>RMSnorm</td>
<td>yes</td>
<td>no</td>
<td>yes</td>
<td>0.11976</td>
</tr>
</tbody>
</table>
<h3>Training</h3>
<ul>
<li>simple kfold (k = 5)</li>
<li>1st stage: First train with sn > 0.5 (300epoch)</li>
<li>2nd stage: Then finetune with sn > 1.0 for a short number of epochs (15epoch)</li>
<li>When training with train dataset only, output errors for each nucleotide in addition to reactivity</li>
<li>For pseudo labels, use sn_pred>0.75 for the 1st stage, sn_pred>1.0 for the 2nd stage, and future=1 only<ul>
<li>Train from scratch with the pseudo labels added</li></ul></li>
<li>lr=1e-3, bs=256, AdamW(eps=1e-6), with warmup for 1st stage</li>
</ul>
<h3>Score for the single model</h3>
<p>best model: exp072 </p>
<ul>
<li>CV (k-fold): 0.11976</li>
<li>Public Score: 0.13681</li>
<li>Private Score: 0.14124</li>
</ul>
<h2>1D Conv & Residual BPP Attention</h2>
<p>code: <a href="https://github.com/fuumin621/stanford-ribonanza-rna-folding-4th" target="_blank">https://github.com/fuumin621/stanford-ribonanza-rna-folding-4th</a></p>
<p>Based on the 1D Conv + BPP Attention architecture proposed by <a href="https://www.kaggle.com/competitions/stanford-covid-vaccine/discussion/189241" target="_blank">nyanp</a>, we made improvements.</p>
<h3>Architecture</h3>
<ol>
<li>Sequence Embedding</li>
<li>Conv 1d</li>
<li>SE Residual & Residual BPP Attention x 12 layers<ul>
<li>Using BPP as attention weight</li>
<li>Reduced the kernel size of 1D convolution as the layer depth increased</li></ul></li>
<li>Bi-LSTM x 2 layers </li>
<li>Linear</li>
</ol>
<h3>Input</h3>
<ul>
<li>Sequence</li>
<li>BPP Matrix (by EternaFold)</li>
</ul>
<h3>Parameter</h3>
<ul>
<li>Drop out rate: 0.1</li>
<li>n_dim: 256</li>
<li>Kernel size: 9, 7, 5, 3 (decreases as the layer depth increases)</li>
<li>Learning rate: 4e-3 with cosine scheduler</li>
<li>Batch size: 64</li>
</ul>
<h3>Training</h3>
<p>The training strategy essentially adopted the same approach as the above Modified RNAdegformer</p>
<h3>Score for the single model</h3>
<ul>
<li>CV (k-fold): 0.12161</li>
<li>Public Score: 0.13889</li>
<li>Private Score: 0.1425</li>
</ul>
<h3>What Didn't Work</h3>
<ul>
<li>BPP packages other than EternaFold (contrafold_2)</li>
<li>Distance Matrix</li>
<li>Structure</li>
<li>BPP Feature Engineering (max, sum, nb_count)</li>
<li>Sample Weight by SN</li>
<li>etc…</li>
</ul>
<h2>Transformer with BPP Attention Bias (@ren4yu's Part)</h2>
<p>code: <a href="https://github.com/yu4u/kaggle-stanford-ribonanza-rna-folding-4th-place-solution" target="_blank">https://github.com/yu4u/kaggle-stanford-ribonanza-rna-folding-4th-place-solution</a><br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1745801%2F7eea07453fa8011a1435c012533a084d%2Ftransformer.png?generation=1702099940846948&alt=media"></p>
<p>This model borrows the bpp attention bias idea from the RNAdegformer, but is closer to the original Transformer architecture.<br>
Instead of using positional embedding, Conv1D was used in FFN to give relative positional information.</p>
<h3>Training Procedure</h3>
<ul>
<li>Train model with AdamW LR=2e-3 to 2e-4, BS=128, use s/n filter threshold=0.5</li>
<li>Finetune model with AdamW LR=2e-4 to 0, BS=256, use s/n filter threshold=1.0</li>
</ul>
<h3>Score for the single model</h3>
<ul>
<li>CV (k-fold): 0.12188</li>
<li>Public Score: 0.13948</li>
<li>Private Score: 0.14267</li>
</ul>
<h3>What did not work for me</h3>
<ul>
<li>Increasing dimension or number of layers</li>
<li>UNet-like hierarchical architecture</li>
</ul> |
Child Mind Institute - Detect Sleep States | 155th solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Child Mind Institute - Detect Sleep States
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you to organizers and congrats to all participants.</p>
<p>I started working after I found <a href="https://www.kaggle.com/tubotubo" target="_blank">@tubotubo</a>'s public notebook & his github repo.<br>
<a href="https://www.kaggle.com/code/tubotubo/cmi-submit" target="_blank">https://www.kaggle.com/code/tubotubo/cmi-submit</a><br>
<a href="https://github.com/tubo213/kaggle-child-mind-institute-detect-sleep-states" target="_blank">https://github.com/tubo213/kaggle-child-mind-institute-detect-sleep-states</a><br>
It was my first time tackling a deep learning technique instead of GBDT, and I learned so many things.</p>
<h2>My solution</h2>
<p>Basically my solution comes from tubotubo 's public notebook.<br>
Differences are following;</p>
<ul>
<li>Use top-5 models instead of best model on an epoch.</li>
<li>Ensembling over a variety of K-Folds</li>
</ul>
<p>I found that the validation score fluctuates during training, and since the private data is 75% of the test dataset, I decided to ensemble as much as I could.</p>
<h2>A Tip tried but not worked</h2>
<p>Here is a tip I couldn't find in the discussions. It was not effective, but I'd appreciate any feedback.</p>
<ul>
<li>Mean/Standardization on each series_id, instead of the whole dataset<br>
I wanted my model to learn the difference between awake/sleep instead of each series_id's difference.<br>
So I applied mean/standardization to anglez/enmo on each series_id, but it didn't work as I expected.</li>
</ul> |
Stanford Ribonanza RNA Folding | 16th Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Stanford Ribonanza RNA Folding
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First off, I wanted to thank the competition hosts for putting together this competition as well as the community for all of the discussion.</p>
<h1>Summary</h1>
<ul>
<li>Transformer architecture<ul>
<li>Separate models for 2A3 and DMS</li></ul></li>
<li>Used Pr(base paired) derived from BPPM</li>
<li>Dynamic position bias</li>
</ul>
<h1>Approach</h1>
<h2>Data:</h2>
<p>The input features were:</p>
<ul>
<li>Sequence</li>
<li>Pr(base paired) (i.e. sum along one axis of the BPPM) from 4 RNA structure packages</li>
</ul>
<p>I was hesitant to include the BPPM for a few reasons that I thought would result in hindering the generalized performance for longer sequences:</p>
<ol>
<li>Base-base pairing probability is overly specific ‘wrong’ information</li>
<li>Lack of consideration of pseudoknots</li>
<li>Dilution of individual probabilities for longer test sequences compared to the shorter train sequences</li>
</ol>
<p>My compromise was to consider the global probability of pairing for each base. <a href="https://www.nature.com/articles/s41592-022-01605-0" target="_blank">This paper from the Das lab</a> assessed how Pr(paired) as calculated by various RNA structure packages compared to chemical mapping assay results. They found CONTRAfold 2, ViennaRNA 2 at 60 degrees C, and RNAsoft with BLStar parameters to be the most effective approaches. Therefore, I generated Pr(paired) vectors for all sequences from these three methods as well as Eternafold to use as features.</p>
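<p>A minimal sketch of the Pr(base paired) feature (variable names are placeholders): sum the base-pair probability matrix over one axis so each base gets a single pairing probability, then stack the vectors from the four packages.</p>
<pre><code>import numpy as np

def prob_paired(bppm):
    # bppm: (L, L) base-pair probability matrix for one sequence
    return bppm.sum(axis=-1).clip(0.0, 1.0)   # (L,) Pr(paired) per base

# (L, 4) per-sequence feature: CONTRAfold 2, ViennaRNA 2 @ 60C, RNAsoft BLStar, EternaFold
pr_paired_feat = np.stack(
    [prob_paired(m) for m in (bppm_contrafold, bppm_vienna60, bppm_rnasoft, bppm_eternafold)],
    axis=-1,
)
</code></pre>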
<p>As an aside, I used Arnie to run these packages and spent more time than I care to admit getting it set up for multiprocessing, although overall I think it ended up saving time. I did try to expand to additional methods, however, I pretty quickly abandoned it due to bugs / run time constraints.</p>
<p>Earlier on in the competition, I did also try various structure features generated from ViennaRNA (MFE, centroid, ensemble). Compared to sequence alone, I was able to get improved performance, however, it added no benefit once the Pr(base paired) feature was included.</p>
<h2>Architecture:</h2>
<p>My approach ended up being a single transformer architecture separately trained to predict 2A3 or DMS.</p>
<p><strong>Seq</strong></p>
<ul>
<li>4 token embedding -> 512 output dimension</li>
</ul>
<p><strong>Pr(base paired)</strong></p>
<ul>
<li>Linear layer 4 input dimension -> 512 output dimension</li>
</ul>
<p><strong>Transformer</strong></p>
<ul>
<li>18 layers, 512 dimension, 8 heads</li>
<li>Postnorm</li>
<li>Dynamic position bias</li>
<li>Intra-attention Gating on Values</li>
<li>Input: Seq + Pr(base paired)</li>
</ul>
<p>This was my first time working with a transformer architecture. Luckily, I found the library <a href="https://github.com/lucidrains/x-transformers" target="_blank">x-transformers</a> by <a href="https://www.kaggle.com/lucidrains" target="_blank">@lucidrains</a> and used that to try out / learn about numerous variations of transformers.</p>
<p><strong>What helped:</strong><br>
<em>Dynamic Position Bias</em><br>
One of my first goals was to understand how to generalize a transformer to longer sequences. Initially I found the ALiBi paper and thought that would be a useful approach. However, based on the x-transformers documentation it appeared as if dynamic position bias would be more likely to be effective in this context. I did not attempt any direct comparisons between ALiBi (or other relative position biases) and dynamic position bias since I did not trust performance differences on length extrapolation from the train set alone. Dynamic position bias did show the largest bump of any modification on the public vs. private sets (public - 0.15400 vs 0.15002; private - 0.18476 vs 0.14976). Note: this result was before the change to using Pr(base paired) data.</p>
<p><em>Intra-attention Gating on Values</em><br>
This is an attention variant from Alphafold2 that gates the aggregated values with the input. This was one of the feature variations available on x-transformers that appeared to have a small improvement on performance. Note: I added this one a while back and just kept it in, so it is entirely possible it adds nothing of value in the final model.</p>
<p><em>Postnorm</em><br>
One of the first changes I assessed with the transformer was to switch from the generally more preferred prenorm to the original postnorm approach. This resulted in improvements to convergence rate and performance. I rechecked this multiple times over the course of the competition as the architecture and input data approaches shift and consistently got the same type of result. I also attempted the various other norm approaches found in x-transformers, but none resulted in any noticeable improvement.</p>
<p><em>18 Layers</em><br>
18 layers seemed to be the sweet spot of performance and training time. The main depths I tested were 12, 18, 24. 12 vs 18 showed a small performance improvement on the leader boards (public - 0.14383 vs 0.14330; private - 0.14595 vs 0.14517). 18 vs 24 I did not notice any difference in the CV score, so I did not submit any 24 layer models.</p>
<p><em>Sum embeds</em><br>
My approach to integrating the embedded sequence and the Pr(base paired) after passing through a linear layer (both of dimension 512) was to simply sum them elementwise. I tried other approaches, but they all appeared to have negative impacts to convergence and/or performance.</p>
<h2>Training:</h2>
<p>I used the splits used in <a href="https://www.kaggle.com/iafoss" target="_blank">@iafoss</a> starter transformer notebook and all of my experimental work was done using fold 0. Because I ended up using individual models for 2A3 and DMS prediction, the set of sequences satisfying SN >= 1.0 and reads >= 100 was higher for each group since there was a subset of sequences where only one of 2A3/DMS satisfied the constraint.</p>
<p>Additionally, I incorporated the react_error values as a means of weighting the loss so that I could use more of the lower quality sequences with greater confidence (luckily this happened right after the time the issue with react_error was identified and corrected). The react_error values were strictly greater than 0 and were usually fairly low (I believe the median on at least the high quality samples was ~0.125). I decided to try weighting the loss of each base using an exponential function:</p>
<p>Weighting v1:<br>
exp(-react_error)</p>
<p>Weighting v2:<br>
exp(-(react_error^2))</p>
<p>This would result in a weight value between 0 and 1 for each base in the sequence. While I don’t have a good mathematical rationale for this approach and I am sure there may be better options, it did give a noticeable boost to CV performance (~0.0005 - 0.001), including when using only the higher quality sequences.</p>
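<p>A sketch of how such a weighting can be applied per base (shapes and masking are assumptions; this is not the author's exact code):</p>
<pre><code>import torch
import torch.nn.functional as F

def weighted_l1_loss(pred, target, react_error, mask, squared=True):
    # weight in (0, 1]: exp(-err^2) for the cleaner data, exp(-err) for the noisier fine-tuning data
    w = torch.exp(-(react_error ** 2)) if squared else torch.exp(-react_error)
    loss = F.l1_loss(pred, target, reduction="none") * w
    return loss[mask].mean()
</code></pre>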
<p>My approach to training ended up being somewhat erratic for the final model.</p>
<ul>
<li>2A3 model<ul>
<li>fit_one_cycle(epochs=256, lr=5e-4, pctstart=0.01)</li>
<li>SN >= 1.0, reads >= 100</li>
<li>exp(-(react_error^2)) weighting</li>
<li>Early stopping at epoch 45</li></ul></li>
<li>2A3 model part 2<ul>
<li>Fine tune 2A3 model, lr=5e-5</li>
<li>SN >= 0.2, reads >= 75</li>
<li>exp(-react_error) weighting</li></ul></li>
<li>DMS model<ul>
<li>Fine tune 2A3 model part 2, lr=5e-5</li>
<li>SN >= 1.0, reads >= 100</li>
<li>exp(-(react_error^2)) weighting</li></ul></li>
<li>DMS model part 2<ul>
<li>Fine tune DMS model, lr=5e-5</li>
<li>SN >= 0.2, reads >= 75</li>
<li>exp(-react_error) weighting</li></ul></li>
</ul>
<p>As I mentioned above, I used fold 0 of my CV splits for all experimentation. Unfortunately, I lost time testing out other architectural approaches in the final 2 weeks and only settled on coming back to the split models due to the slight performance improvement within the last day, so I was unable to run the full set of CV folds. However, in a last minute effort to at least capture some of the fold 0 validation data in my model I ended up running 1 epoch on this data for my second final submission, which ended up giving me a nice little last minute boost on the leaderboard (public: 0.14330 vs 0.14199; private: 0.14517 vs 0.14458).</p> |
Stanford Ribonanza RNA Folding | 8th place solution (KF Part) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Stanford Ribonanza RNA Folding
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, I would like to thank the competition hosts for their exceptional organization and the challenging yet enriching environment they created. Their dedication to fostering a space for learning and innovation is deeply appreciated. I also express my profound gratitude to my teammates, <a href="https://www.kaggle.com/onodera" target="_blank">@onodera</a> and <a href="https://www.kaggle.com/christofhenkel" target="_blank">@christofhenkel</a>. Their collaboration, expertise, and unwavering commitment throughout this project were invaluable.</p>
<p>Here I will try to summarize some of the main points of our solution.</p>
<h1>Solutions from our teammates</h1>
<ul>
<li><a href="https://www.kaggle.com/competitions/stanford-ribonanza-rna-folding/discussion/460956" target="_blank">onodera part</a></li>
</ul>
<h1>CV Strategy: GroupKFold</h1>
<p>In the field of RNA secondary structure estimation, various studies including <a href="https://academic.oup.com/bib/article/24/4/bbad186/7179751?login=false" target="_blank">Sato et al., 2023,</a> have highlighted the inappropriateness of using random splits for evaluation. This stems from the concept of 'Families' in RNA experimental data - groups of RNA molecules sharing specific functions or structures, resulting in high structural similarity within the same family. For instance, <a href="https://academic.oup.com/nar/article/50/3/e14/6430845" target="_blank">Fu et al., 2022</a> noted that the <a href="https://academic.oup.com/view-large/figure/333767038/gkab1074fig3.jpg" target="_blank">E2EFold model was overestimated due to this issue</a>.</p>
<p>To address this, we adopted clustering based on RNA sequence edit distances, using cluster IDs for GroupKFold. Additionally, we aggregated all the limited samples with seqlen=206 into fold=0. This strategy allowed for continuous evaluation of the model's performance on longer sequences.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2311404%2Fb39fa96937be0505cbe1b8480910784e%2Fsplit.png?generation=1702029388253122&alt=media" alt=""></p>
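<p>A toy sketch of this grouping idea (the real pipeline is far more scalable; the clustering library, linkage method, and distance threshold below are assumptions):</p>
<pre><code>import numpy as np
import Levenshtein
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform
from sklearn.model_selection import GroupKFold

def edit_distance_clusters(seqs, threshold=20):
    n = len(seqs)
    dist = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            dist[i, j] = dist[j, i] = Levenshtein.distance(seqs[i], seqs[j])
    # agglomerative clustering on the pairwise edit distances
    return fcluster(linkage(squareform(dist), method="average"), t=threshold, criterion="distance")

groups = edit_distance_clusters(train_seqs)
folds = list(GroupKFold(n_splits=5).split(train_seqs, groups=groups))
# on top of this, all seqlen=206 samples are forced into fold 0 as described above
</code></pre>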
<h1>Datasets</h1>
<p>As part of my models, I experimented with alternating training using both the RMDB dataset and the competition dataset. This approach was adopted with the anticipation that it would enhance the model's ability to handle longer RNA sequences, a critical aspect in RNA secondary structure prediction.</p>
<h1>Features</h1>
<p>We processed input sequences using the 'arnie' package, mainly utilizing eternafold, rnasoft, and rnastructure. The model inputs included embedding layers for structure and loop_type, and a Graph Neural Network (GNN) adjacency matrix for bpp (base pair probability).</p>
<p>However, bpp alone, representing the probability of forming a pair, requires many layers when combined with CNNs to view the entire graph. As an alternative to using Transformers for a global view, we utilized “structure” (renamed to 'chunk' due to naming conflicts) and “segment”, as defined in <a href="https://academic.oup.com/nar/article/46/11/5381/4994207" target="_blank">bpRNA</a> [Danaee et al., 2018].</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2311404%2Fd2a22732d565f34536b3ad5c9af99220%2Fgraph_features.png?generation=1702027001721359&alt=media" alt=""></p>
<h1>Models</h1>
<p>My architecture is inspired by <a href="https://academic.oup.com/bioinformatics/article/39/8/btad457/7230784?login=false" target="_blank">LegNet</a> from <a href="https://www.kaggle.com/dmitrypenzar1996" target="_blank">@dmitrypenzar1996</a> and <a href="https://www.kaggle.com/competitions/stanford-covid-vaccine/discussion/189241" target="_blank">OpenVaccine's 6th place solution</a> [nyanp]. It consists solely of CNN and GNN layers. The adjacency matrices for bpp, structure, chunk, and segment significantly differ, so I prepared independent CNN+GNN blocks for each type. Group Convolution and einsum enabled this without for loops.</p>
<p>Following <a href="https://academic.oup.com/bioinformatics/article/39/8/btad457/7230784?login=false" target="_blank">LegNet</a>, prediction head outputs a 100d vector instead of a 1d scalar, with weighted summation over bin values for the final output. This approach stabilized learning by controlling the output range in regression tasks.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2311404%2Feed37f263a4a1321dbd91ccaf915d66c%2Fmodel.png?generation=1702027231710653&alt=media" alt=""></p>
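<p>The 100-bin head can be sketched as follows (the bin range and layer sizes are assumptions; only the soft-binning idea is the point):</p>
<pre><code>import torch
import torch.nn as nn

class SoftBinHead(nn.Module):
    def __init__(self, dim, n_bins=100, lo=0.0, hi=1.0):
        super().__init__()
        self.proj = nn.Linear(dim, n_bins)
        self.register_buffer("bin_centers", torch.linspace(lo, hi, n_bins))

    def forward(self, x):                        # x: (B, L, dim) per-nucleotide features
        probs = self.proj(x).softmax(dim=-1)     # (B, L, n_bins) distribution over bins
        return (probs * self.bin_centers).sum(dim=-1)   # (B, L) expected reactivity
</code></pre>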
<h1>Loss</h1>
<p>We opted for an MAE + MSE loss weighted by signal-to-noise ratio as our optimization function. MSE provides gradients similar to MAE in the early stages of training but approaches zero later, aiding in convergence.</p>
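<p>A possible form of this loss (the exact SNR-to-weight mapping is an assumption):</p>
<pre><code>import torch
import torch.nn.functional as F

def mae_mse_loss(pred, target, snr, mask):
    # pred, target: (B, L); snr: (B,); mask: (B, L) boolean for labeled positions
    w = snr.clamp(max=10.0) / 10.0                  # assumed normalization of SNR into a weight
    per_base = F.l1_loss(pred, target, reduction="none") + F.mse_loss(pred, target, reduction="none")
    per_base = per_base * w.unsqueeze(-1)           # broadcast the per-sequence weight over positions
    return per_base[mask].mean()
</code></pre>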
<h1>Pseudo labels</h1>
<p>We conducted pseudo label training using predictions on the test set created collaboratively with teammates <a href="https://www.kaggle.com/onodera" target="_blank">@onodera</a> and <a href="https://www.kaggle.com/christofhenkel" target="_blank">@christofhenkel</a>.</p>
<h1>Result</h1>
<table>
<thead>
<tr>
<th>Name</th>
<th>CV</th>
<th>CV (seqlen=206)</th>
<th>Public LB</th>
<th>Private LB</th>
</tr>
</thead>
<tbody>
<tr>
<td>single best (scratch training)</td>
<td>0.1336</td>
<td>0.1133</td>
<td>0.14012</td>
<td>0.14299</td>
</tr>
<tr>
<td>single best (+pseudo label 1st)</td>
<td>0.1313</td>
<td>0.1152</td>
<td>0.13828</td>
<td>0.14222</td>
</tr>
<tr>
<td>single best (+pseudo label 2nd)</td>
<td>0.1306</td>
<td>0.1221</td>
<td>0.13739</td>
<td>0.14186</td>
</tr>
<tr>
<td>blending w/ all models</td>
<td>0.127645</td>
<td>0.109341</td>
<td>0.13626</td>
<td>0.14263</td>
</tr>
</tbody>
</table>
<h1>References</h1>
<ul>
<li>Fu, Laiyi, et al. "<a href="https://academic.oup.com/nar/article/50/3/e14/6430845" target="_blank">UFold: fast and accurate RNA secondary structure prediction with deep learning.</a>" Nucleic acids research 50.3 (2022): e14-e14.</li>
<li>Sato, Kengo, and Michiaki Hamada. "<a href="https://academic.oup.com/bib/article/24/4/bbad186/7179751?login=false" target="_blank">Recent trends in RNA informatics: a review of machine learning and deep learning for RNA secondary structure prediction and RNA drug discovery.</a>" Briefings in Bioinformatics (2023): bbad186.</li>
<li>Danaee, Padideh, et al. "<a href="https://academic.oup.com/nar/article/46/11/5381/4994207" target="_blank">bpRNA: large-scale automated annotation and analysis of RNA secondary structure.</a>" <em>Nucleic acids research</em> 46.11 (2018): 5381-5394.</li>
<li>Penzar, Dmitry, et al. "<a href="https://academic.oup.com/bioinformatics/article/39/8/btad457/7230784?login=false" target="_blank">LegNet: a best-in-class deep learning model for short DNA regulatory regions.</a>" <em>Bioinformatics</em> 39.8 (2023): btad45</li>
</ul> |
Google - Fast or Slow? Predict AI Model Runtime | 3rd place solution write-up | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google - Fast or Slow? Predict AI Model Runtime
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, thanks for hosting this competition. It was my first Kaggle competition which I entered on a whim because I had some free time. It turned out to be a lot of funstration (fun + frustration). Looking at all the amazing solutions from the other teams, I consider myself incredibly lucky to have ranked this high!</p>
<p><strong>Edit:</strong> After cleaning up the code I noticed that I forgot to mention some things which I now added. The code is now available <a href="https://github.com/jafluri/kaggle_tpu_graph" target="_blank">here</a>.</p>
<h2>Overview</h2>
<p>My solution is more or less composed of three parts. Minor feature extraction and engineering, tinkering with the graph, e.g. pruning, and training a graph neural network (GNN). The GNN layers are based on the <a href="https://openreview.net/pdf?id=lMMaNf6oxKM" target="_blank">GPS layers</a>, using SAGE convolutions, <a href="https://arxiv.org/pdf/2006.04768.pdf" target="_blank">Linformers</a> and <a href="https://arxiv.org/pdf/2110.07875.pdf" target="_blank">learnable positional encodings</a>. Note that this discussion concerns mainly the layout dataset. The solution for the tile dataset is mentioned briefly at the end.</p>
<h2>Input Features</h2>
<p>I used all 140 provided input features and used a simple log transform after shifting them such that each feature was at least 1. Additionally, I went through the protocol buffers and extracted the following features:</p>
<ul>
<li><code>has_dynamic_com</code>: A flag indicating whether the graph has dynamic computations.</li>
<li><code>is_root_of_com</code>: A flag indicating if a node is the output node of a computation.</li>
<li><code>indices_are_sorted</code>: A flag that I am not sure why I added.</li>
<li>For the <code>dot</code> operation, I extracted <code>lhs_contracting_dimensions</code>, <code>rhs_contracting_dimensions</code>, <br>
<code>lhs_batch_dimensions</code> and <code>rhs_batch_dimensions</code>, which are all integer sequences that I padded to a length of 3, so 12 features in total.</li>
<li>For the <code>gather</code> operation I added the integer sequences <code>offset_dims</code>, <code>collapsed_slice_dims</code> and <br>
<code>start_index_map</code> padded to length 3, the single integer <code>index_vector_dim</code> and the sequence <code>gather_slice_sizes</code> padded to length 5.</li>
</ul>
<p>The padding lengths were chosen based on the longest sequences contained in the dataset and I always used -1 as the padding value. Some of these features were useless depending on the applied graph pruning, but I left them in the input anyway. Additionally, while going through the protobufs, I added the input shapes (6D) of the two input arguments for the <code>dot</code> and <code>conv</code> operations as additional features, making sure that they are always ordered in the same way (e.g. lhs, rhs arguments for the <code>dot</code>). This adds 16 dimensions to the input (with sum and products of the shapes). I did this because I thought that it might be difficult for the network to learn the order of the inputs and dimensions which are reduced in these operations given solely the message-passing networks.<br>
I also took the 30-dimensional features and added them to the input features once as modulus 128 (<code>(x % 128)/128</code>) and once as true division (<code>(x // 128)/10</code>), with some normalization. I did this to make it easier for the network to process the dimensions of the tensors and compare them to the <a href="https://www.kaggle.com/competitions/predict-ai-model-runtime/discussion/437673" target="_blank">register size</a> of the TPUs.</p>
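<p>A small sketch of these numeric transforms (the shapes and normalization constants follow the text; the code itself is illustrative):</p>
<pre><code>import numpy as np

def transform_node_features(feats, dims):
    # feats: (num_nodes, 140) raw node features; dims: (num_nodes, 30) tensor dimensions
    log_feats = np.log(feats - feats.min(axis=0, keepdims=True) + 1.0)   # shift so min is 1, then log
    mod_part = (dims % 128) / 128.0     # position within a 128-wide register
    div_part = (dims // 128) / 10.0     # how many full registers are needed
    return np.concatenate([log_feats, mod_part, div_part], axis=-1)
</code></pre>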
<h2>OP code embeddings and positional encoding</h2>
<p>I used 128-dimensional embeddings for the OP codes. For the positional encodings, I used RWPE described <a href="https://arxiv.org/pdf/2110.07875.pdf" target="_blank">here</a>. I created 16 dimensional PEs with the directed adjacency matrix and 112 using the undirected one, for a total of 128 features. The encoding was always calculated with the full graph, independent of the pruning that was used during the training of the network.</p>
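<p>The RWPE features are the k-step return probabilities of a random walk on the graph; a minimal version (adjacency matrices <code>a_directed</code> / <code>a_undirected</code> are placeholders) looks like this:</p>
<pre><code>import numpy as np

def rwpe(adj, k):
    # adj: (N, N) adjacency matrix; returns (N, k) positional encodings
    deg = adj.sum(axis=1, keepdims=True).clip(min=1.0)
    rw = adj / deg                                  # row-normalized random-walk matrix
    pe, mat = [], np.eye(adj.shape[0])
    for _ in range(k):
        mat = mat @ rw
        pe.append(np.diag(mat))                     # probability of returning to each node after k steps
    return np.stack(pe, axis=-1)

pe = np.concatenate([rwpe(a_directed, 16), rwpe(a_undirected, 112)], axis=-1)   # (N, 128)
</code></pre>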
<h2>Graph Modifications</h2>
<p>I experimented with three versions of pruning/pooling:</p>
<ul>
<li>Dropping all nodes and connections besides the configurable one. This results essentially in training an MLP.</li>
<li>Dropping all the nodes besides the configurable nodes and their inputs/outputs.</li>
<li>Merging all nodes besides the configurable nodes and their inputs/outputs. Two nodes were merged if they had at least one connection and were neither configurable nor an input or output of a configurable node. This was done until no further merging was possible. The merged nodes had a unique OP code but their features are set to zero. </li>
</ul>
<p>Additionally, I added a virtual output node, connecting all nodes that produce outputs.</p>
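<p>An illustrative sketch of the second pruning strategy above (keep configurable nodes plus their direct inputs/outputs); this is not the actual implementation:</p>
<pre><code>import numpy as np

def prune_to_config_neighborhood(edges, config_ids, num_nodes):
    # edges: (E, 2) array of (src, dst) node ids; config_ids: ids of configurable nodes
    config = np.zeros(num_nodes, dtype=bool)
    config[config_ids] = True
    keep = config.copy()
    src, dst = edges[:, 0], edges[:, 1]
    keep[src[config[dst]]] = True     # direct inputs of configurable nodes
    keep[dst[config[src]]] = True     # direct outputs of configurable nodes
    kept_edges = edges[keep[src] & keep[dst]]
    return keep, kept_edges
</code></pre>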
<h2>GNN</h2>
<p>The GNN consists of SAGEConvolutions and Linformer. The architecture is shown in the figure below. The SAGEConvolutions use both the input and the output nodes with different weights and a message dimension of half the size of the input dimension. The Linformer dimension was set to 128 (or 256 in some experiments). I used Sigmoid Linear Units (SiLU) activation functions and a lot of layer normalisation.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F10842317%2F8203b36d4bbe981f7d172cd270451c94%2Flayer.drawio.png?generation=1700420261387549&alt=media" alt=""></p>
<p>The training was done with Adam and a cosine annealing scheduler. I tried batch sizes of 8, 16 and 32 with 5, 8 or 10 configs and pairwise hinge-loss. I trained on all collections at the same time and then did finetuning on the individual collections. However, I did not have time to train a network on all collections with the merged nodes. I only implemented this towards the very end and trained only one network on the <code>xla:default</code> collection, which gave me the best CV. The final submission was composed of networks trained with my second pruning strategy for <code>xla:random</code> and the <code>nlp</code> collections and the third pruning strategy for <code>xla:default</code>.</p>
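<p>A sketch of a pairwise hinge loss over the configs of one graph (the margin and sign convention are assumptions):</p>
<pre><code>import torch

def pairwise_hinge_loss(pred, runtime, margin=1.0):
    # pred, runtime: (C,) predicted score and measured runtime for the sampled configs of one graph
    pi, pj = pred.unsqueeze(1), pred.unsqueeze(0)        # pi[i, j] = pred[i], pj[i, j] = pred[j]
    ri, rj = runtime.unsqueeze(1), runtime.unsqueeze(0)
    i_faster = ri < rj                                   # pairs where config i is faster than config j
    loss = torch.relu(margin + pi - pj)[i_faster]        # want pred[i] + margin <= pred[j]
    return loss.mean() if loss.numel() > 0 else pred.sum() * 0.0
</code></pre>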
<h1>Tile Network</h1>
<p>The tile network was a simple GNN with 5 SAGEConvolutions, no extra features and no positional encodings. </p>
<h2>Other stuff</h2>
<p>These are things I tried out but failed or could not evaluate if they had a consistent positive impact on the results. </p>
<ul>
<li>Node dropout and test-time augmentations with the dropout. </li>
<li>Using the full graph</li>
<li>Transformers instead of Linformers (Memory)</li>
<li>Some self-implemented attention with configs and features</li>
</ul> |
Child Mind Institute - Detect Sleep States | 75th Place Detailed Solution - Spec2DCNN + CenterNet + Transformer + NMS | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Child Mind Institute - Detect Sleep States
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, I would like to thank <a href="https://www.kaggle.com/tubotubo" target="_blank">@tubotubo</a> for sharing your high-quality code, and also thank my teammates <a href="https://www.kaggle.com/liruiqi577" target="_blank">@liruiqi577</a> <a href="https://www.kaggle.com/brickcoder" target="_blank">@brickcoder</a> <a href="https://www.kaggle.com/xtzhou" target="_blank">@xtzhou</a> for their contributions in the competition. Here, I am going to share our team’s “<strong>snore like thunder</strong>” solution from the following aspects:</p>
<ol>
<li>Data preprocessing</li>
<li>Feature Engineering</li>
<li>Model</li>
<li>Post Processing</li>
<li>Model Ensemble</li>
</ol>
<h2>1. Data preprocessing</h2>
<p>We performed EDA and read the open discussions, and found that there are 4 types of data anomalies:</p>
<ul>
<li>Some series have a high missing rate and some of them do not even have any event labels;</li>
<li>In some series, there are no event annotations in the middle and tail (possibly because the collection activity has stopped);</li>
<li>The sleep record is incomplete (a period of sleep is only marked with onset or wakeup).</li>
<li>There are outliers in the enmo value.</li>
</ul>
<p>To this end, we made some attempts, such as:</p>
<ul>
<li>Eliminate series with high missing rates;</li>
<li>Cut the tail of the series without event labels;</li>
<li>Upper clip enmo to 1.</li>
</ul>
<p>But the above methods didn't completely work. In the end, our preprocessing method was:</p>
<p>We split the dataset into 5 folds, grouped by series. For each fold, we eliminate series with a label missing rate of 100% from the training dataset, while performing no data preprocessing on the validation set. This avoids introducing noise to the training set and ensures that the evaluation results on the validation set are closer to the real data distribution, which improved our LB score by +0.006.</p>
<p>Part of our experiments as below:</p>
<table>
<thead>
<tr>
<th>Experiment</th>
<th>Fold0</th>
<th>Public (single fold)</th>
<th>Private (5-fold)</th>
</tr>
</thead>
<tbody>
<tr>
<td>No preprocess missing data</td>
<td>0.751</td>
<td>0.718</td>
<td>0.744</td>
</tr>
<tr>
<td>Eliminate unlabeled data at the end of train_series & series with missing rate >80%</td>
<td>0.739</td>
<td>0.709</td>
<td>0.741</td>
</tr>
<tr>
<td>Drop train series which don’t have any event labels</td>
<td>0.752</td>
<td>0.724</td>
<td>0.749</td>
</tr>
</tbody>
</table>
<h2>2. Feature Engineering</h2>
<ul>
<li>Sensor features: we take the first-order difference of the enmo and anglez features and its absolute value, then smooth the result with a rolling mean. The original enmo and anglez features are replaced with these features, which improved our LB score by +0.01.</li>
</ul>
<pre><code># NOTE: column names restored from the description above; the rolling-window size and
# min_periods values were lost in formatting, so the ones below are placeholders
train_series["anglez"] = train_series["anglez"].diff().abs()
train_series["anglez"] = train_series["anglez"].rolling(window=5, center=True, min_periods=1).mean()
train_series["enmo"] = train_series["enmo"].diff().abs()
train_series["enmo"] = train_series["enmo"].rolling(window=5, center=True, min_periods=1).mean()
</code></pre>
<ul>
<li>Time features: sin and cos hour.</li>
</ul>
<p>In addition, we also made the following features based on open notebooks and our EDA, such as: differential features with different orders, rolling window statistical features, interactive features of enmo and anglez (such as anglez's differential abs * enmo, etc.), anglez_rad_sin/cos, dayofweek/is_weekend (I find that children have different sleeping habits on weekdays and weekends). But strangely enough, too much feature engineering didn’t bring us much benefit.</p>
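<p>For reference, the cyclic hour encoding listed above can be computed as follows (column names and the ISO-8601 timestamp format are assumptions):</p>
<pre><code>import numpy as np

# hour taken from the local-time part of the ISO-8601 timestamp string, e.g. "2018-08-14T15:30:00-0400"
train_series["hour"] = train_series["timestamp"].str[11:13].astype(int)
train_series["hour_sin"] = np.sin(2 * np.pi * train_series["hour"] / 24)
train_series["hour_cos"] = np.cos(2 * np.pi * train_series["hour"] / 24)
</code></pre>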
<table>
<thead>
<tr>
<th>Experiment</th>
<th>Fold0</th>
<th>Public (5-fold)</th>
<th>Private (5-fold)</th>
</tr>
</thead>
<tbody>
<tr>
<td>anglez + enmo + hour_sin + hour_cos</td>
<td>0.763</td>
<td>0.731</td>
<td>0.768</td>
</tr>
<tr>
<td>anglez_abs_diff + enmo_abs_diff + hour_sin + hour_cos</td>
<td>0.771</td>
<td>0.741</td>
<td>0.781</td>
</tr>
</tbody>
</table>
<h2>3. Model</h2>
<p>We used 4 models:</p>
<ul>
<li>CNNSpectrogram + Spec2DCNN + UNet1DDecoder;</li>
<li>PANNsFeatureExtractor + Spec2DCNN + UNet1DDecoder.</li>
<li>PANNsFeatureExtractor + CenterNet + UNet1DDecoder.</li>
<li>TransformerAutoModel (xsmall, downsample_rate=8).</li>
</ul>
<p><strong>Parameter Tuning</strong>: Adding kernel_size 8 to CNNSpectrogram gained +0.002 online.</p>
<p><strong>Multi-Task Learning Objectives</strong>: sleep status, onset, wake.</p>
<p><strong>Loss Function</strong>: For Spec2DCNN and TransformerAutoModel, we use BCE, but with multi-task target weighting, sleep:onset:wake = 0.5:1:1. The purpose of this is to allow the model to focus on learning the last two columns. We tried to train only on the onset and wake columns, but the score was not good. We speculate that the reason is that <strong>the positive samples in these two columns are sparse, and MTL is needed to transfer the information from positive samples in the sleep status to the prediction of sleep activity events.</strong> Also, I tried KL Loss but it didn't work that well. </p>
<pre><code># pos_weight literals restored from the text above: sleep : onset : wakeup = 0.5 : 1 : 1
self.loss_fn = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([0.5, 1.0, 1.0]))
</code></pre>
<p>At the same time, we adjusted epoch to 70 and added early stopping with patience=15. The early stopping criterion is the AP of the validation dataset, not the loss of the validation set. batch_size=32.</p>
<table>
<thead>
<tr>
<th>Experiment</th>
<th>Fold0</th>
<th>Public (single fold)</th>
<th>Private (5-fold)</th>
</tr>
</thead>
<tbody>
<tr>
<td>earlystop by val_loss</td>
<td>0.750</td>
<td>0.697</td>
<td>0.742</td>
</tr>
<tr>
<td>earlystop by val_score</td>
<td>0.751</td>
<td>0.718</td>
<td>0.744</td>
</tr>
<tr>
<td>loss_wgt = 1:1:1</td>
<td>0.752</td>
<td>0.724</td>
<td>0.749</td>
</tr>
<tr>
<td>loss_wgt = 0.5:1:1</td>
<td>0.755</td>
<td>0.723</td>
<td>0.753</td>
</tr>
</tbody>
</table>
<p>Note: we used the model_weight.pth with the best offline val_score for our LB submissions, instead of the best_model.pth with the best offline val_loss.</p>
<h2>4. Post Processing</h2>
<p>Our post-processing mainly includes:</p>
<ul>
<li><strong>find_peaks()</strong>: scipy.signal.find_peaks;</li>
<li><strong>NMS</strong>: This task can be treated as object detection. An [onset, wakeup] pair is regarded as a bounding box, and the score is the confidence of the box. Therefore, I used a time-series NMS. Using NMS eliminates redundant boxes with high IoU, which increased our AP.</li>
</ul>
<pre><code># NOTE: the function name/signature and the index, slice and threshold literals were lost in
# formatting; they are restored below following the standard NMS pattern described in the text
def nms_1d(dets_arr, thresh):
    x1 = dets_arr[:, 0]          # onset step of each [onset, wakeup] "box"
    x2 = dets_arr[:, 1]          # wakeup step
    scores = dets_arr[:, 2]      # confidence of the box
    areas = x2 - x1
    order = scores.argsort()[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        xx1 = np.maximum(x1[i], x1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        inter = np.maximum(0, xx2 - xx1 + 1)
        ovr = inter / (areas[i] + areas[order[1:]] - inter)
        inds = np.where(ovr <= thresh)[0]
        order = order[inds + 1]
    dets_nms_arr = dets_arr[keep, :]
    onset_steps = dets_nms_arr[:, 0].tolist()
    wakeup_steps = dets_nms_arr[:, 1].tolist()
    nms_save_steps = np.unique(onset_steps + wakeup_steps).tolist()
    return nms_save_steps
</code></pre>
<p>In addition, we set score_th=0.005 (If it is set too low, a large number of events will be detected and cause online scoring errors, so it is fixed at 0.005 here), and use optuna to simultaneously search the parameter distance in find_peaks and the parameter iou_threshold of NMS. Finally, when distance=72 and iou_threshold=0.995, the best performance is achieved.</p>
<pre><code># NOTE: stripped literals (search ranges, number of trials) are placeholders; per the text,
# score_th is fixed at 0.005 and the best values found were distance=72 and thresh=0.995
import optuna

def objective(trial):
    score_th = 0.005
    distance = trial.suggest_int("distance", 1, 100)
    thresh = trial.suggest_float("thresh", 0.5, 1.0)
    val_pred_df = post_process_for_seg(
        keys=keys,
        preds=preds[:, :, [1, 2]],
        score_th=score_th,
        distance=distance,
    )
    val_pred_df = val_pred_df.to_pandas()
    nms_pred_dfs = NMS_prediction(val_pred_df, thresh, verbose=False)
    score = event_detection_ap(valid_event_df.to_pandas(), nms_pred_dfs)
    return -score

study = optuna.create_study()
study.optimize(objective, n_trials=100)
print("best params:", study.best_params)
print("best value:", study.best_value)
</code></pre>
<table>
<thead>
<tr>
<th>Experiment</th>
<th>Fold0</th>
<th>Public (5-fold)</th>
<th>Private (5-fold)</th>
</tr>
</thead>
<tbody>
<tr>
<td>find_peak</td>
<td>-</td>
<td>0.745</td>
<td>0.787</td>
</tr>
<tr>
<td>find_peak+NMS+optuna</td>
<td>-</td>
<td>0.746</td>
<td>0.789</td>
</tr>
</tbody>
</table>
<h2>5. Model Ensemble</h2>
<p>Finally, we average the output probabilities of the following models and then feed them into the post-processing methods to detect events. By the way, I tried post-processing the detection events for each model and then concatenating them, but this resulted in too many detections. Even with NMS, I didn't get a better score.</p>
<p>The number of ensemble models: 4 (types of models) * 5 (fold number) = 20.</p>
<table>
<thead>
<tr>
<th>Experiment</th>
<th>Fold0</th>
<th>Pubic (5-fold)</th>
<th>Private (5-fold)</th>
</tr>
</thead>
<tbody>
<tr>
<td>model1: CNNSpectrogram + Spec2DCNN + UNet1DDecoder</td>
<td>0.77209</td>
<td>0.743</td>
<td>0.784</td>
</tr>
<tr>
<td>model2: PANNsFeatureExtractor + Spec2DCNN + UNet1DDecoder</td>
<td>0.777</td>
<td>0.743</td>
<td>0.782</td>
</tr>
<tr>
<td>model3: PANNsFeatureExtractor + CenterNet + UNet1DDecoder</td>
<td>0.75968</td>
<td>0.634</td>
<td>0.68</td>
</tr>
<tr>
<td>model4: TransformerAutoModel</td>
<td>0.74680</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td>model1 + model2(1:1)</td>
<td>-</td>
<td>0.746</td>
<td>0.789</td>
</tr>
<tr>
<td>model1 + model2+model3(1:1:0.4)</td>
<td>-</td>
<td>0.75</td>
<td>0.786</td>
</tr>
<tr>
<td>model1 + model2+model3+model4(1:1:0.4:0.2)</td>
<td></td>
<td>0.752</td>
<td>0.787</td>
</tr>
</tbody>
</table>
<p>Unfortunately, we only considered CenterNet and Transformer to model ensemble with a tentative attitude on the last day, but surprisingly found that <strong>a low-CV-scoring model still has a probability of improving final performance as long as it is heterogeneous compared with your previous models</strong>. But we didn’t have more opportunities to submit more, which was a profound lesson for me.</p>
<p>Thoughts not done:</p>
<ul>
<li><p><strong>Data Augmentation</strong>: Shift the time within the batch to increase more time diversity and reduce dependence on hour features.</p></li>
<li><p><strong>Model</strong>: Try more models. We tried a transformer, but it didn't work for us. I am very much looking forward to the solutions from the top-ranking players.</p></li>
</ul>
<p>Thanks again to Kaggle and all Kaggle players. This was a good competition and we learned a lot from it. If you think our solution is useful for you, welcome to upvote and discuss with us.</p>
<p>In addition, this is my first 🥈 silver medal. Thank you everyone for letting me learn a lot. I will continue to work hard. :)</p> |
Child Mind Institute - Detect Sleep States | 41st Place Solution for the Child Mind Institute - Detect Sleep States Competition | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Child Mind Institute - Detect Sleep States
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First, I would like to express my sincere thanks to the organizers of the Child Mind Institute - Detect Sleep States Competition. This competition provided an invaluable opportunity to engage with long one-dimensional sequences, offering a distinct and educational experience in an important area of data science.</p>
<h2><strong>Overview of the Inference Pipeline</strong></h2>
<p>Our training pipeline and models are based on the excellent code by <a href="https://www.kaggle.com/tubotubo" target="_blank">@213tubo</a>, available at <a href="https://www.kaggle.com/competitions/child-mind-institute-detect-sleep-states/discussion/452940?rvi=1" target="_blank">this page</a>. </p>
<h4><strong>Our solution is as follows</strong>:</h4>
<ul>
<li><p><strong>Data Segmentation:</strong></p>
<ul>
<li>Segmented input features using overlapping time windows for inference</li></ul></li>
<li><p><strong>Models:</strong></p>
<ul>
<li>Utilized 10 Unet-based models in total: five unique models, each trained on two distinct training-validation splits.</li>
<li>Each of the 5 models has a different backbone.</li></ul></li>
<li><p><strong>PostProcessing(Single Model):</strong></p>
<ul>
<li>Applied Non-Maximum Suppression (NMS) to retain only the highest-scoring prediction when detected peaks were within a specified proximity.</li></ul></li>
<li><p><strong>Ensemble Predictions:</strong></p>
<ul>
<li>Adopted an original ensemble method inspired by Weighted Boxes Fusion (WBF)</li>
<li>In the first stage, models within the same fold were assembled.</li>
<li>In the second stage, these results from different folds were combined.</li></ul></li>
<li><p><strong>Post Processing(Ensembled Results):</strong></p>
<ul>
<li>Removal of predictions in periodic padding intervals.</li></ul></li>
</ul>
<p>Below is an overview of our solution, as illustrated in the following diagram.<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F7135891%2F6087b564bf8572d8309d45d1586fdd6e%2Foverview_zzz.svg?generation=1702429424591245&alt=media" alt=""></p>
<h2>Details of the Submission</h2>
<h3>Models Used:</h3>
<p>We trained Unet-based models featuring five different CNN backbones: ResNet101, MiT-B3, MiT-B5, ResNeXt101, and ResNet152.<br>
Our models were trained using two different combinations of training-validation data, with approximately 20% of the training set allocated as validation data in each combination.</p>
<p>For each training session of the models, we varied the seed values to increase their diversity. This approach was employed to enhance the reliability of cross-validation (CV) results and to enrich the ensemble for submission.</p>
<h3>Data Segmentation for Inference:</h3>
<p>For the input data segmentation, we used a specific window size and set the hop size to half of this window size, creating overlapping segments for inference. This method allowed us to effectively utilize the central part of each window in the final analysis, thereby reducing false detections at the edges of the segments.</p>
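<p>A simplified sketch of this overlapping-window inference (shapes, padding, and the exact trimming rule are assumptions):</p>
<pre><code>import numpy as np

def sliding_window_predict(series, predict_fn, window, num_classes=3):
    hop = window // 2
    q = window // 4
    n = len(series)
    out = np.zeros((n, num_classes))
    starts = list(range(0, max(n - window, 0) + 1, hop))
    for start in starts:
        pred = predict_fn(series[start:start + window])      # (window, num_classes) per-step probabilities
        lo = 0 if start == 0 else q                          # keep only the central part of each window,
        hi = window if start == starts[-1] else window - q   # except at the series edges
        out[start + lo:start + hi] = pred[lo:hi]
    return out
</code></pre>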
<h3>Ensemble Method</h3>
<p>We used an original ensemble method inspired by Weighted Boxes Fusion (WBF). The algorithm steps include:</p>
<ul>
<li>Grouping the input dataframe by series_id and event.</li>
<li>Sorting each event group by its score in descending order.</li>
<li>Extracting other events within a certain range from different dataframes.</li>
<li>Calculating the weighted step as sum(step × score) / sum(score), and the new score as sum(score) / (number of dataframes)</li>
<li>Removing events within the specified range but with lower scores.</li>
</ul>
<h4>Our Code:</h4>
<pre><code># NOTE: the function name, column-name strings and other literals were lost in formatting;
# they are restored below as best-effort guesses following the algorithm description above
def ensemble_events(df_list, series_ids, step_threshold) -> pd.DataFrame:
    grouped_list = [df.groupby(["event", "series_id"]) for df in df_list]
    events = ["onset", "wakeup"]
    ensemble_predictions = []
    for series_id in series_ids:
        for event in events:
            sorted_group_list = []
            for grouped in grouped_list:
                if (event, series_id) in grouped.groups:
                    sorted_group_list.append(grouped.get_group((event, series_id)).sort_values(by="score", ascending=False))
            while any([not sorted_group.empty for sorted_group in sorted_group_list]):
                specific_index = next((index for index, sorted_group in enumerate(sorted_group_list) if not sorted_group.empty), None)
                highest = sorted_group_list[specific_index].iloc[0]
                ensemble_steps = [highest["step"]]
                ensemble_probas = [highest["score"]]
                probas_sum = highest["score"]
                highest_step = highest["step"]
                for i, sorted_group in enumerate(sorted_group_list):
                    compared_prediction = sorted_group[sorted_group["step"].sub(highest_step).abs() <= step_threshold]
                    condition = sorted_group["step"].sub(highest_step).abs() > step_threshold
                    filtered_group = sorted_group[condition]
                    sorted_group_list[i] = filtered_group
                    if i != specific_index:
                        if not compared_prediction["step"].empty:
                            first_score = compared_prediction["score"].iloc[0]
                            first_step = compared_prediction["step"].iloc[0]
                        else:
                            first_score = 0
                            first_step = 0
                        ensemble_steps.append(first_step)
                        ensemble_probas.append(first_score)
                        probas_sum += first_score
                ensemble_step = 0
                if probas_sum != 0:
                    for s, p in zip(ensemble_steps, ensemble_probas):
                        ensemble_step += s * (p / probas_sum)
                ensemble_predictions.append({"series_id": series_id, "step": round(ensemble_step), "event": event, "score": probas_sum / len(grouped_list)})
    ensemble_df = pd.DataFrame(ensemble_predictions)
    ensemble_df["row_id"] = ensemble_df.index
    return ensemble_df
</code></pre>
<h3>Post Processing</h3>
<p>Post-processing steps were implemented for both single models and ensembled results.</p>
<ul>
<li><p><strong>For single models:</strong></p>
<ul>
<li>Applied Non-Maximum Suppression (NMS) using the excellent code from <a href="https://www.kaggle.com/competitions/child-mind-institute-detect-sleep-states/discussion/458822" target="_blank">this discussion</a> to retain only the highest-scoring prediction when detected peaks were within a specified proximity.</li></ul></li>
<li><p><strong>For ensembled models:</strong></p>
<ul>
<li>Removal of needless predictions in periodic padding intervals was inspired by <a href="https://www.kaggle.com/code/takanashihumbert/a-simple-way-trying-to-find-the-dark-zones" target="_blank">this notebook</a>.</li></ul></li>
</ul>
<h2>Ablation Study</h2>
<p>The details of the scores are as follows.</p>
<table>
<thead>
<tr>
<th>Number</th>
<th>Description</th>
<th>CV (Fold1)</th>
<th>Public LB</th>
<th>Private LB</th>
</tr>
</thead>
<tbody>
<tr>
<td>(1)</td>
<td>ResNet101 (No overlap)</td>
<td>0.756</td>
<td>0.722</td>
<td>0.748</td>
</tr>
<tr>
<td>(2)</td>
<td>ResNet101 (overlap)</td>
<td>0.77</td>
<td>0.732</td>
<td>0.778</td>
</tr>
<tr>
<td>(3)</td>
<td>ResNet101 (overlap) + NMS</td>
<td>0.773</td>
<td>Not Submitted</td>
<td>Not Submitted</td>
</tr>
<tr>
<td>(4)</td>
<td>Ensemble of 5 models (overlap, same fold) + NMS</td>
<td>0.7844</td>
<td>0.753</td>
<td>0.790</td>
</tr>
<tr>
<td>(5)</td>
<td>Ensemble of 5 models (overlap, same fold) + NMS + Remove Predictions in periodic padding</td>
<td><strong>0.7847</strong></td>
<td><strong>0.756</strong></td>
<td>0.801</td>
</tr>
<tr>
<td>(6)</td>
<td>Ensemble of 5 models(overlap) across 2 folds + NMS</td>
<td>-</td>
<td>0.749</td>
<td>0.792</td>
</tr>
<tr>
<td>(7)</td>
<td>Ensemble of 5 models(overlap) across 2 folds + NMS + Remove Predictions in periodic padding</td>
<td>-</td>
<td>0.753</td>
<td><strong>0.802</strong></td>
</tr>
</tbody>
</table> |
Stanford Ribonanza RNA Folding | [3rd Place Solution] AlphaFold Style Twin Tower Architecture + Squeezeformer | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Stanford Ribonanza RNA Folding
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>A big thanks to the organizers for making this competition happen and putting their efforts in solving hard problems, such as RNA structure prediction. This sentence stuck with me throughout the whole competition: “… without being able to understand how RNA molecules fold, we are missing a deeper <strong>understanding of how nature works, how life began</strong>, and how we can design…“</p>
<p>Many will glance over this statement but it is a huge mystery that sometimes kept me awake at night. I mean it is SO strange… the evolution/creation of 4 nucleotides with specific physical/chemical properties that can code and take on different structures to fulfill specific actions in the body, and we still don’t understand the origin/purpose of these molecules… Hopefully we are in the right direction in finding out the truth and reason behind it or even “what” created them. </p>
<p>The solution is not based on dozens of ensembled models, but rather on 2 strong independent models. There are 2 models since I wasn’t sure if I would be able to produce a viable AlphaFold-style solution in the allotted time, so I focused on a smaller “safe” version model first and a bigger “riskier” model later in the competition. But the ensemble of the 2 proved crucial in gaining the 3rd place.</p>
<h2>TLDR</h2>
<p>Blend of 2 independent models. The smaller “safer” version model is based on an augmented Squeezeformer architecture, which consists of RelativeMultiheadSelfAttention, Convolution and FeedForward modules. Learnable BPP’s through 2d convolution are added to the attention scores, and augmented low Signal-to-Noise data is additionally used in training apart from clean training data. The bigger twin-tower model is based on an augmented AlphaFold-style architecture that consists of an MSA stack representation and a Pair stack representation, which communicate in a criss-cross fashion through Outer Product Mean and Pair Representation Bias.</p>
<h2>Data Preprocessing/Cross Validation</h2>
<p>Training data was split in 4 folds similar to this <a href="https://www.kaggle.com/code/iafoss/rna-starter-0-186-lb/notebook" target="_blank">notebook</a>. There was a stable correlation between local cv vs lb. BPP’s were processed and cached as .npz files. Most of the time, models were evaluated only on fold0. <br>
Twin-tower model was trained only on clean training data and it has the power to predict its confidence/error_estimate similar to pLDDT in AlphaFold. After getting a good generalization from the twin-tower model on the clean data, new training data was created by augmenting the noisy low SNR data with the model’s confidence/error_estimate. For a particular nucleotide position in the low SNR dataset the idea is to combine how confident the model's prediction is with the position's reactivity error from the experiment and thus “fix” the noisy data. This gave significant improvements in the smaller “safer” model which was trained on clean dataset + improved low snr dataset. Even with this useful new data being available for the bigger twin-tower model, it was never trained on this extra data because of time constraints and only the clean dataset was used for final submission. Twin-tower experiments are currently underway on the full dataset.</p>
<h2>Squeezeformer Model</h2>
<p>The smaller “safer” model is based on Squeezeformer architecture which was used in previous Google American Sign Language competition. The inputs to the model were tokenized RNA sequences and BPP matrices. The model consists of 14xSqueezeformer blocks and an output projection layer that predicts chemical reactivities at each position. One Squeezeformer block consists of three modules: Relative MultiHeadSelfAttention module, Convolution module, and FeedForward module.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F14754958%2F55cab60a14c35cb028ecc1de472c6566%2FSqueezeFormerJPG-01.jpg?generation=1702092589333558&alt=media" alt=""></p>
<p>Instead of absolute positional encodings, relative encodings are used for generalization on longer sequences. Attention scores apart from relative pos scores are further affected by BasePairProbability matrices, even though I was reluctant to use them at first since they are created with software that is not capable of detecting long range pseudo-knots and this bias is unfortunately induced in the model. The attention scores in the transformer are calculated in this manner: (content score + relative position score)/sqrt(head_dim) + bpp_bias_score. Bpp_bias_score is obtained by passing the BPP matrices through a 2D convolution block.</p>
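<p>A minimal PyTorch sketch of this BPP-biased attention score; the module name, channel counts and the small conv stack are illustrative assumptions rather than the exact implementation:</p>
<pre><code>import torch
import torch.nn as nn

class BppBiasedAttention(nn.Module):
    """Multi-head self-attention whose scores are biased by a 2D-conv projection of the BPP matrix."""
    def __init__(self, dim=384, num_heads=6, bpp_channels=1):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)
        # small conv stack that turns (B, 1, L, L) BPP matrices into one bias map per head
        self.bpp_conv = nn.Sequential(
            nn.Conv2d(bpp_channels, 16, kernel_size=3, padding=1),
            nn.GELU(),
            nn.Conv2d(16, num_heads, kernel_size=3, padding=1),
        )

    def forward(self, x, bpp, rel_pos_score=None):
        # x: (B, L, dim), bpp: (B, 1, L, L)
        B, L, _ = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        q = q.view(B, L, self.num_heads, self.head_dim).transpose(1, 2)
        k = k.view(B, L, self.num_heads, self.head_dim).transpose(1, 2)
        v = v.view(B, L, self.num_heads, self.head_dim).transpose(1, 2)
        content_score = q @ k.transpose(-2, -1)              # (B, H, L, L)
        if rel_pos_score is None:
            rel_pos_score = torch.zeros_like(content_score)
        bpp_bias = self.bpp_conv(bpp)                        # (B, H, L, L)
        # (content score + relative position score) / sqrt(head_dim) + bpp bias, as described above
        scores = (content_score + rel_pos_score) / self.head_dim ** 0.5 + bpp_bias
        attn = scores.softmax(dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(B, L, -1)
        return self.proj(out)
</code></pre>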
<h2>AlphaFold style Twin-tower Model</h2>
<p>This model was inspired by Google's AlphaFold and its derivatives OpenComplex/RhoFold. The original AlphaFold architecture relies on two different input representations for its predictions. It jointly uses MSA (Multiple Sequence Alignment) and Pair Representation features. The MSA representation uses row-wise attention to find intra-sequence features, while the column-wise attention is used to obtain inter-sequence evolutionary signals from the MSA stack. Since MSA for this competition wasn’t helpful in extracting evolutionary information, because of the synthetic nature of the RNA sequences, just the tokenized input sequence was used. Because MSA was not used, the axial self-attention was replaced with relative multi-head attention and convolution.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F14754958%2F99a3d1c0bbb83caf86c5a82c2d2a7627%2FChemFoldJPG-01.jpg?generation=1702092825428051&alt=media" alt=""><br>
(At the time of submission, only the Single representation branch was used for predictions, the pair features were completely ignored due to time restrictions)</p>
<h3>The workflow of the data from the input all the way to the prediction is as follows:</h3>
<p>1) Input sequence gets tokenized, sequence and pair masks are generated</p>
<p>2) Tokenized sequence are embedded through embedding networks: MSANet and PairNet</p>
<ul>
<li>PairRepresentation features are embedded using Relative2D positional encodings to provide information about position of residues. Maximum position is clipped at 32, and each position afterwards is considered as “far” away. This inductive bias trains the model not to rely heavily on nucleotide positions and generalize better to any length, as stated in the alpha fold supplemental materials.</li>
<li>MSA representation is passed through a simple Embedding layer (since positional encodings are added later in the transformer layer).</li>
</ul>
<p>3) Embedded MSA and Pair features are passed through the main trunk of the network, which consists of 8 Chemformer blocks illustrated above. Some key points are:</p>
<ul>
<li>MSA representation is updated within the Squeezeformer attention <br>
instead of axial self-attention proposed in the original paper.</li>
<li>Pair representation is updated only with triangular multiplicative updates. Triangular self-attention wasn’t used in this model because of memory constraints but using both should give an increase in score.</li>
<li>The MSA representation updates the pair representation through an element-wise outer product that is summed over the MSA sequence dimension.</li>
<li>The Pair representation updates the MSA representation through a projection of additional logits from the pair stack to bias the MSA attention scores.</li>
<li>Both representations are passed through a 2-layer MLP that acts as transition before the communication.</li>
<li>This communication is repeated within each block, so 8 times in total.</li>
<li>Residual connections and Row-wise/Column-wise dropouts are used.</li>
</ul>
<p>4) The processed representations MSA Features and Pair Features are then passed through various heads to extract meaningful information such as confidence/error_estimate, chemical reactivity, base_pair probabilities etc.</p>
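<p>A rough PyTorch sketch of the two communication paths described above (outer product from the MSA/single representation into the pair representation, and a pair-representation bias added to the MSA attention logits); dimensions and layer names are illustrative assumptions:</p>
<pre><code>import torch
import torch.nn as nn

class OuterProductMean(nn.Module):
    """MSA/single representation -> pair representation update via an outer product (illustrative sizes)."""
    def __init__(self, msa_dim=256, pair_dim=128, hidden=32):
        super().__init__()
        self.a = nn.Linear(msa_dim, hidden)
        self.b = nn.Linear(msa_dim, hidden)
        self.out = nn.Linear(hidden * hidden, pair_dim)

    def forward(self, msa):
        # msa: (B, S, L, msa_dim) -> pair update: (B, L, L, pair_dim)
        a, b = self.a(msa), self.b(msa)
        # mean over MSA sequences (the text describes a sum over the MSA dimension; with S=1 they coincide)
        outer = torch.einsum('bsic,bsjd->bijcd', a, b) / msa.shape[1]
        return self.out(outer.flatten(start_dim=-2))

class PairBias(nn.Module):
    """Pair representation -> per-head additive bias for the MSA attention scores."""
    def __init__(self, pair_dim=128, num_heads=8):
        super().__init__()
        self.norm = nn.LayerNorm(pair_dim)
        self.proj = nn.Linear(pair_dim, num_heads, bias=False)

    def forward(self, pair):
        # pair: (B, L, L, pair_dim) -> bias: (B, num_heads, L, L), added to the attention logits
        return self.proj(self.norm(pair)).permute(0, 3, 1, 2)
</code></pre>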
<h2>Training</h2>
<p>Training procedure is as follows:</p>
<p>First the Twin-tower model is trained on the clean dataset for 60 epochs with lr of 1e-3. Batch size of 8 per gpu was used with gradient accumulation of 4, leading to an effective batch size of 64 on 2 GPUs. The optimizer used was AdamW with cosine scheduling, weight decay parameter is 0.05 and warmup is 0.5. Model was trained on 2x4090 for 30 hours total. No extensive parameter tuning was done for this model.<br>
This leads to a 0.13746 public/0.14398 private single-model score. After this, a synthetic dataset was created by blending the model’s predictions with a subset of noisy data with 0.35 &lt; SNR &lt; 1. Selecting a lower SNR threshold increased the number of training examples but lowered their quality. The current synthetic dataset is created with simple weighting; because of time constraints, even though the model was trained and outputs valid pLDDT/error estimates, they were not fully utilized. I didn’t have time to experiment with the formula for combining the pLDDT score with the experimental reactivity_error on a per-nucleotide basis, which is guaranteed to produce better synthetic data. <br>
The smaller “safer” model was trained for 200 epochs with an lr of 7e-4 and a batch size of 64. Optimizer and lr scheduling were the same as before; however, no grad accumulation was used. This model was trained both on the clean dataset and on the synthetically created dataset, which doubled the training examples. This single model achieves a 0.13865 public/0.14256 private score.</p>
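<p>A minimal sketch of the optimizer/scheduler setup quoted above for the twin-tower model (AdamW with lr 1e-3, weight decay 0.05, cosine annealing, gradient accumulation of 4); the batch layout, the warmup handling and the plain L1 objective are simplifying assumptions:</p>
<pre><code>import torch

def train_with_accumulation(model, loader, total_steps, accum_steps=4):
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=0.05)
    scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=total_steps)
    optimizer.zero_grad()
    for step, (inputs, targets, mask) in enumerate(loader):   # batch layout is an assumption
        preds = model(inputs)
        loss = torch.nn.functional.l1_loss(preds[mask], targets[mask]) / accum_steps
        loss.backward()
        if (step + 1) % accum_steps == 0:
            optimizer.step()
            optimizer.zero_grad()
            scheduler.step()
</code></pre>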
<p>A day before the competition ended I decided to just run longer epochs on the Twin-tower model, so I continued the same training procedure with starting weights from epoch 30 of the previous iteration. This gave improvements just by running the model longer which led to the final 0.13706 public/0.14366 private score of this single model. The final score is a blend of the twin-tower model which was trained only on the input sequence and tried to learn all of the interactions on its own and the squeeze model which contributed with the BPP’s and the synthetic data to the final prediction.</p>
<h2>Not utilized/Can be improved for Twin-tower model</h2>
<ul>
<li>Model was trained only with sequences as input. </li>
<li>BPP’s and other supplemental data can be used. </li>
<li>Synthetic dataset which improved the smaller model should be used.</li>
<li>In my tests, increasing the depth (8->12 blocks) gives an increase in score; the submission is only 8 blocks deep. </li>
<li>Deeper models (16, 24 blocks) will be tested soon by utilizing checkpointing/rematerialization to counter the memory problem and by interleaving the convolution blocks after several MHSA blocks to aid training of this deeper model. </li>
<li>Recycling from the original paper was not utilized.</li>
<li>Only triangular multiplicative updates are used in the Pair Representation, triangular self-attention was not utilized.</li>
<li>Since RNA can take on multiple conformations, dropout can be utilized at inference time and results averaged to get a better estimate of the RNA’s structure</li>
</ul>
<p>All of these adjustments are very likely to provide benefits to the model. Some of the tests are currently underway.</p>
<h2>Comments</h2>
<p>The twin-tower model was a large task, partially because I was competing solo. The model at submission time was trained only on the clean dataset, and only the MSA Feature pathway was used in the predictions; the other Pair Feature pathway was completely ignored, but it can be used to try to recreate the BPP’s, which should increase the score (tests are underway as I’m writing). The best submission of the model without BPP’s, loop types or any pre/post-processing is 0.13709 public and 0.14366 private. However, the twin-tower model has a bigger gap in generalization compared to the smaller “safer” Squeezeformer model, which scored 0.13865 public but 0.14265 private, so I am currently investigating the reason behind it.</p>
<p>I am new to machine learning. I started to learn the field in May of this year, so I am sure there will be a lot of mistakes in the code and in my approach, and sorry if my explanation is all over the place; all this is new to me and I am still learning.</p>
<p>Open Sourced Code:<br>
<a href="https://github.com/GosUxD/OpenChemFold" target="_blank">https://github.com/GosUxD/OpenChemFold</a></p>
<h2>References:</h2>
<p>[1] Squeezeformer: An Efficient Transformer for Automatic Speech Recognition<br>
Sehoon Kim, Amir Gholami, Albert Shaw, Nicholas Lee, Karttikeya Mangalam, Jitendra Malik, Michael W. Mahoney, Kurt Keutzer arXiv:2206.00888 [eess.AS] <a href="https://doi.org/10.48550/arXiv.2206.00888" target="_blank">https://doi.org/10.48550/arXiv.2206.00888</a></p>
<p>[2] Winner of Google American Sign Language Fingerspelling Competition<br>
<a href="https://github.com/ChristofHenkel/kaggle-asl-fingerspelling-1st-place-solution" target="_blank">https://github.com/ChristofHenkel/kaggle-asl-fingerspelling-1st-place-solution</a></p>
<p>[3] Jumper, J., Evans, R., Pritzel, A. et al. Highly accurate protein structure prediction with AlphaFold. Nature 596, 583–589 (2021). <a href="https://doi.org/10.1038/s41586-021-03819-2" target="_blank">https://doi.org/10.1038/s41586-021-03819-2</a></p>
<p>[4] OpenComplex github code repository:<br>
<a href="https://github.com/baaihealth/OpenComplex" target="_blank">https://github.com/baaihealth/OpenComplex</a></p>
<p>[5] E2Efold-3D: End-to-End Deep Learning Method for accurate de novo RNA 3D Structure Prediction<br>
Tao Shen, Zhihang Hu, Zhangzhi Peng, Jiayang Chen, Peng Xiong, Liang Hong, Liangzhen Zheng, Yixuan Wang, Irwin King, Sheng Wang, Siqi Sun, Yu Li. arXiv:2207.01586 [q-bio.QM] <a href="https://doi.org/10.48550/arXiv.2207.01586" target="_blank">https://doi.org/10.48550/arXiv.2207.01586</a></p> |
Stanford Ribonanza RNA Folding | 7th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Stanford Ribonanza RNA Folding
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h2>Summary</h2>
<ul>
<li>Transformer based solution</li>
<li>Masked Conv1D instead of MLP</li>
<li>bpp injection into attention matrix, bpp-based token mixing, matrix mixing, and dual stream setup (attention boosting) for incorporating bpp</li>
</ul>
<h2>Introduction</h2>
<p>Our team would like to thank the organizers and Kaggle for making this competition possible. Also, I want to express my gratitude to my outstanding teammates <a href="https://www.kaggle.com/drhabib" target="_blank">@drhabib</a> and <a href="https://www.kaggle.com/martynoveduard" target="_blank">@martynoveduard</a> for their incredible contribution toward our final result.</p>
<h2>Details</h2>
<h3>Data</h3>
<p><strong>BPP</strong>: We generated additional bpp using vienna_2 (we also used SS output), contrafold_2, rnaformerv1, and rnafm. However, we saw nearly negligible improvement in comparison to using only bpp provided by organizers.<br>
<a href="https://www.kaggle.com/competitions/stanford-ribonanza-rna-folding/discussion/454397" target="_blank"><strong>EX data</strong></a>: We tried to do fine-tuning on this data since it is only the source that provides GT for sequence ends. We got 5-10 bps CV improvement from this procedure (Iafoss/DrHB) and got the best visualization for long-range dependencies. However, at the sequence ends the predictions tend to get values close to zero for unknown reasons. At private LB EX data did not give any boost.<br>
<strong>CV split</strong>. slime used a random 4-fold split. DrHB and Iafoss used a split based on sequence similarity to avoid any possible leaks, and we also excluded any train data overlapping with the test set from the train. The size of the val set is ~20k samples that passed the SN criterion.</p>
<h3>Model</h3>
<p>At the beginning of the competition we quickly realized that bpp is very helpful for model performance. Initially, we tried to use it as an auxiliary output computed as the mean attention matrix, but convergence was quite slow (the typical run was 200-250 epochs). Then we build setups that also take bpp as an input. It drastically accelerated convergence, so that it took only several tens of epochs, while giving similar results.<br>
We considered several setups:<br>
(1) <strong>bpp injection</strong> (Iafoss): add bpp (in the logit form) directly to the attention matrix (in case of multiple bpp, each bpp is added to the particular head). We gradually attenuate the injection towards zero in the last layers to give the mode the freedom to develop interactions missed in bpp. The critical part of the model was replacing MLP with a masked 1d x5 convolution that performs the mixing of neighboring tokens. The transformer has 384 width, a head size of 64, 24 depth, a droppath of 0.3, dropout of 0.1. We use rotary enc to define the relative position of tokens. Single bpp and 6 bpp setups were considered. The best single model got 0.14375 private and 0.14013 public LB and 0.14292 and 0.13973 with using pseudo labels.<br>
(2) <strong>bpp mixing</strong> (DrHB): instead of injection of bpp into the attention matrix, we tried to add matrix multiplication modules performing mixing tokens based on bpp. The total structure of the model: 3 x [bpp eterna -> conv ->transformer -> bpp rest mean ->conv ->transformer ->ss- as adj -> graph transformer ].<br>
(3) <strong>matrix mixing</strong> (slime): trying to follow the previous competition solutions we added a learnable attention bias produced by a convolutional stream applied to bpp and ALIBI-like positional encoding. We used 2 bpp sources. The matrix mixer is represented by several conv layers with SE modules. The transformer consists of 12 matrix mixing + 12 regular transformer blocks with a width of 384. The best single model is 0.14509 private and 0.14066 public LB. This model used a different training pipeline and cannot be directly compared to others.<br>
(4) <strong>dual stream</strong> (Iafoss). The drawback of matrix mixing is that the attention bias is updated independently from the transformer based on the input bpp. How about attention boosting? We perform a simple projection of the attention state followed by scaled tanh nonlinearity to limit accumulated values and stabilize training. This value is added to the Attention matrix and then the result is fed to the next transformer layer. This modification has improved the model performance giving 0.14296 at private and 0.13697 public LB as a single model. Unfortunately, we discovered this setup only a few days before the end of the competition, and didn't have a chance to run multiple trainings and PL setup, which we expect to give a further ~10 bps improvement of LB.</p>
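<p>A minimal sketch of the bpp-injection idea from setup (1): bpp in logit form is added to the attention scores with an attenuation factor, and the MLP is replaced by a masked 1D convolution. Layer sizes, the attenuation handling and the masking details are illustrative assumptions:</p>
<pre><code>import torch
import torch.nn as nn

class BppInjectedBlock(nn.Module):
    """Transformer block sketch: BPP (as logits) added to the attention scores, MLP replaced by a masked Conv1d."""
    def __init__(self, dim=384, num_heads=6, attn_gamma=1.0):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.attn_gamma = attn_gamma           # attenuation factor; decayed towards 0 in the last layers
        self.norm1 = nn.LayerNorm(dim)
        self.qkv = nn.Linear(dim, dim * 3)
        self.attn_out = nn.Linear(dim, dim)
        self.norm2 = nn.LayerNorm(dim)
        self.conv = nn.Conv1d(dim, dim, kernel_size=5, padding=2)  # mixes neighbouring tokens

    def forward(self, x, bpp, pad_mask):
        # x: (B, L, dim), bpp: (B, L, L) probabilities, pad_mask: (B, L) True for valid tokens
        B, L, _ = x.shape
        h = self.norm1(x)
        q, k, v = self.qkv(h).chunk(3, dim=-1)
        q = q.view(B, L, self.num_heads, self.head_dim).transpose(1, 2)
        k = k.view(B, L, self.num_heads, self.head_dim).transpose(1, 2)
        v = v.view(B, L, self.num_heads, self.head_dim).transpose(1, 2)
        scores = q @ k.transpose(-2, -1) / self.head_dim ** 0.5
        bpp_logit = torch.logit(bpp.clamp(1e-6, 1 - 1e-6))          # BPP in "logit form"
        scores = scores + self.attn_gamma * bpp_logit.unsqueeze(1)   # injected into every head
        scores = scores.masked_fill(~pad_mask[:, None, None, :], float('-inf'))
        attn = scores.softmax(dim=-1)
        x = x + self.attn_out((attn @ v).transpose(1, 2).reshape(B, L, -1))
        # masked 1D convolution in place of the usual MLP
        h = self.norm2(x) * pad_mask.unsqueeze(-1)
        x = x + self.conv(h.transpose(1, 2)).transpose(1, 2)
        return x
</code></pre>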
<p>The models are schematically depicted in the image below:<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1212661%2Fd537664beced5e0923d38a5cd8a612e3%2Fplots.png?generation=1702136589769473&alt=media" alt=""></p>
<h3>Training</h3>
<p><strong>(Iafoss/DrHB)</strong> The loss is weighted based on the error, while we do not downselect data based on SN criterion: <code>w = 1/sqrt(1/6 + err.clip(100))</code>. Use AdamW, cosine annealing with warmup, lr=5e-4, wd=0.05, bs=16 (small bs was working better for the reason we could not identify). We used flip augmentation for both train and TTA, but the key thing here is using bpp computed for the correct order of nucleotides. It quite improves CV giving 10-15 bps boost. We also used bin based auxiliary loss which gave a slight improvement.</p>
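<p>A small sketch of the error-based loss weighting quoted above; the direction of the clip (interpreted here as an upper bound of 100) and the masking are assumptions:</p>
<pre><code>import torch

def weighted_mae(pred, target, err, mask):
    # w = 1 / sqrt(1/6 + err.clip(100)); the clip bound is reproduced as written (upper bound assumed)
    w = 1.0 / torch.sqrt(1.0 / 6.0 + err.clamp(max=100))
    loss = (pred - target).abs() * w
    return loss[mask].mean()
</code></pre>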
<p>We split the training portion of the data into 4 folds, train 4 models (40-48 epochs), fine-tune on external data + train data (5-6 epochs with x10 EX data oversampling), fine-tune on noise-free samples for 12 epochs. Then we corrected the provided train data weighting GT and PL based on the inverse error (assuming 0.15 error for PL). The provided train data has a large level of noise slowing down the convergence. Test data and the sequence ends were labeled solely based on PL. Then we train a PL model on the whole train (excluding val samples) + test data, fine-tune on EX data, and fine-tune on noise-free samples. This procedure improved CV by 10 bpp, and the weighted average of all models generated in the procedure gets further improvement.</p>
<p><strong>(slime)</strong> <em>Pre-training</em>: First we performed MLM pre-training on a whole dataset for 5 epochs (40% of input tokens are replaced with mask tokens). We used cosine decay to zero with one epoch warmup and AdamW optimizer with base_lr= 5e-4 and wd=0.05. The model was trained to predict missing nucleotides with cross-entropy loss</p>
<p><em>Fine-tuning</em>: During the fine-tuning stage we initialized our model with weights from MLM pretraining; it gave a noticeable improvement to the final result [10 bps CV]. We fine-tuned our models for 50 epochs with batch_size=16 on samples where either SN(DMS) == 1 or SN(2A3) == 1, and we masked the loss depending on the SN of the given test [10 bps boost compared to filtering the train dataset with SN(DMS) == 1 and SN(2A3) == 1]. In addition, we weighted the samples based on their reactivity error provided by the ground-truth data as <code>loss *= torch.log(1.1 + snr) / 2</code>. Similar to pretraining, we used the AdamW optimizer [lr=5e-4, wd=0.05] and a cosine scheduler with lr decay to zero. Since the matrix mixing model did not consider masking in its convolutions, it was trained with length-matching batch sampling (samples of exactly the same length).</p>
<h3>Best singe model end ensemble</h3>
<p>Best single model (dual stream model): 0.14292 at private and 0.13711 public LB, which could take top 10 by itself. This result would be further improved by 10 bps to ~0.1419 if we had time to run our full PL pipeline.<br>
Our final submission is a combination of ~20 models that got 0.14189 at private and 0.13604 at public LB.</p>
<h3>Things didn't work</h3>
<ul>
<li>EX data was helpful at CV and public LB (5-10 bps boost), but not helpful at private LB</li>
<li>Additional bpps gave only a negligible improvement in comparison to the use of single bpp provided by organizers</li>
<li>MLM on 30M external RNA sequence dataset</li>
<li>EMA, AWP, Floyd-warshall distance matrices </li>
<li>2D Ushape models</li>
</ul> |
Child Mind Institute - Detect Sleep States | 4th Place Solution - Nikhil's Part (Modified Unet + Transformer and Weighted Box Fusion) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Child Mind Institute - Detect Sleep States
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Firstly thank you to my teammate <a href="https://www.kaggle.com/ryotayoshinobu" target="_blank">@ryotayoshinobu</a> for the amazing performance. Please read his part of the <a href="https://www.kaggle.com/competitions/child-mind-institute-detect-sleep-states/discussion/459597" target="_blank">solution</a>.<br>
I am really glad I teamed up with him and learnt a lot</p>
<p>A big thanks to the organizers as well for such an interesting competition. A really tough one.</p>
<p>You can find the solution code here:</p>
<p><a href="https://github.com/nikhilmishradevelop/kaggle-child-mind-institute-detect-sleep-states" target="_blank">https://github.com/nikhilmishradevelop/kaggle-child-mind-institute-detect-sleep-states</a></p>
<h1>Solution Summary</h1>
<p><strong>Validation Strategy</strong>: GroupKFold on Series Ids</p>
<p><strong>Model Inputs</strong>: 17280 x n_features length sequences as input (17280 = 12 steps_per_minute x 60_minutes * 24 hours)<br>
<strong>Model Outputs</strong>: 17280 x 2 (one for onset and other for wakeup)<br>
<strong>Model Type</strong>: Regression Model<br>
<strong>Loss Type</strong>: Cubic Loss i.e. abs(y_true-y_pred)**3</p>
<p>The last remaining part of each sequence, with length < 17280, was padded to a length of 17280</p>
<p>My solution is a modified UNET (averaged over 4 models, 2 LSTMs and 2 GRUs) using normalized Gaussian targets similar to what <a href="https://www.kaggle.com/tolgadincer" target="_blank">@tolgadincer</a> described. A big thanks to him for sharing a good method, early on in this competition.</p>
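<p>A small sketch of the target/loss construction described above (normalized Gaussian bumps around each labelled event, and the cubic loss); the Gaussian width is an assumed value:</p>
<pre><code>import numpy as np
import torch

def make_gaussian_targets(length, event_steps, sigma=50):
    # normalized Gaussian bumps centred on each labelled event step (sigma in steps is an assumed value);
    # one such channel is built for onset and one for wakeup
    target = np.zeros(length, dtype=np.float32)
    steps = np.arange(length)
    for s in event_steps:
        target = np.maximum(target, np.exp(-0.5 * ((steps - s) / sigma) ** 2))
    return target

def cubic_loss(pred, target):
    # abs(y_true - y_pred) ** 3, as used for the regression objective above
    return ((pred - target).abs() ** 3).mean()
</code></pre>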
<h2>Features:</h2>
<ol>
<li>Original Sequence Features: Enmo, Anglez</li>
<li>TimeStamp Features: Hour and Weekday</li>
<li>Derived Sequence Features: Anglez difference, Enmo Difference, HDCZA features etc)</li>
</ol>
<p>Good features help with faster convergence and much better scores, so adding good features was important (read penguin's solution for some other good features)</p>
<p><strong>My final 4 model ensemble results</strong>: </p>
<p><strong>CV</strong>: 0.828<br>
<strong>Public LB</strong>: 0.789<br>
<strong>Private LB</strong>: 0.841</p>
<h1><strong>Model Architecture</strong></h1>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1265131%2Fad78d36028ed6dc7ab1d3e79631b0172%2FFireShot%20Capture%20019%20-%20Online%20FlowChart%20%20Diagrams%20Editor%20-%20Mermaid%20Live%20Editor%20-%20mermaid.live%20(1).png?generation=1701841137052567&alt=media" alt="Model Architecture"></p>
<h2>Patching to reduce sequence length</h2>
<p>Since 17280 is a very long sequence length (makes training very slow and harder to train), we reduce the sequence length by patching it.</p>
<p><strong>Input:</strong> 17280 * n_features (34)<br>
<strong>Patch Size:</strong> (Used different patch sizes 3, 4, 5, or 6 in different models).</p>
<p><strong>Modified Sequence Length:</strong> 17280 to 17280//Patch_Size sequence length</p>
<p><strong>Modified Feature Size:</strong> k * 4 * num_features (where k is a Dense layer output dim)</p>
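<p>A minimal sketch of the patching step: <code>patch_size</code> consecutive steps are grouped into one token and projected with a Dense layer. The exact channel arithmetic (k * 4 * num_features) is not fully specified above, so the projection width here is an illustrative assumption:</p>
<pre><code>import torch
import torch.nn as nn

class PatchEmbed(nn.Module):
    """Group `patch_size` consecutive steps into one token, then project with a Dense layer."""
    def __init__(self, n_features=34, patch_size=4, k=2):
        super().__init__()
        self.patch_size = patch_size
        self.proj = nn.Linear(patch_size * n_features, k * 4 * n_features)

    def forward(self, x):
        # x: (B, 17280, n_features) -> (B, 17280 // patch_size, k * 4 * n_features)
        B, L, C = x.shape
        x = x.reshape(B, L // self.patch_size, self.patch_size * C)
        return self.proj(x)
</code></pre>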
<h2>Modified Unet Part</h2>
<p><strong><em>UNET Encoder -> Bottleneck -> Transformer -> GRU or LSTM -> Unet Decoder</em></strong></p>
<p>Each layer of UNET encoder had concatenated connections to the 1st layer</p>
<p><strong>Initial Output size:</strong> 17280//Patch_Size, 2 * Patch_size</p>
<p><strong>Reshaped Output Size:</strong> (17280, 2)</p>
<h1>Post Processing</h1>
<p>Initially I was doing just doing simple peak detection when I was competing solo.</p>
<p>I wanted to, but could not formulate a good way to rerank model predictions using LightGBM; <a href="https://www.kaggle.com/kmat2019" target="_blank">@kmat2019</a>'s solution is a great read for that.</p>
<p>Thanks to <a href="https://www.kaggle.com/ryotayoshinobu" target="_blank">@ryotayoshinobu</a>, I started applying NMS, and finally experimented with a kind of WBF algorithm and made it work. This WBF algorithm improved the score from 0.79 to 0.793 on the last day.</p>
<p>But it was worse by 0.001 in our private LB lol, so WBF hurt.</p>
<p>WBF Working</p>
<ol>
<li><p><strong>Initialization and Convolution</strong>: The function starts by applying convolution with the specified <code>convolution_kernel</code> to smooth the data.</p></li>
<li><p><strong>Peak Detection Loop</strong>: It iteratively searches for peaks in the data. The loop runs until the maximum count (<code>max_count</code>) is reached or the peak value falls below a threshold (<code>max_thresh</code>).</p></li>
<li><p><strong>Adaptive Window and Weight Calculation</strong>:</p>
<ul>
<li>A dynamic window size around each peak is determined based on the current maximum value's power (<code>curr_max_power</code>) and a distance parameter (<code>k_dist</code>).</li>
<li>Depending on the <code>section_weight_method</code> (logarithmic or linear), weights are calculated for each section of the data around the peak.</li></ul></li>
<li><p><strong>Weighted Average and Score Calculation</strong>: For each detected peak, a weighted average is calculated to determine the score of the peak. <br>
This is done by considering the peak and k of its neighbours. This score is influenced by the method of weight calculation and other hyperparameters like <code>log_base</code>, <code>log_scale</code>, and <code>weight_coeff</code>.</p></li>
<li><p><strong>Suppression and Updating Predictions</strong>:</p>
<ul>
<li>After each peak is detected, the function suppresses the neighboring values to avoid detecting the same peak multiple times. This is controlled by <code>overlap_coeff</code> and <code>preds_reduction_power</code>.</li>
<li>The indices and scores of detected peaks are stored and returned after the loop completes.</li></ul></li>
</ol>
<p>I will share the implementation of the WBF part below; you can see if tuning these hyperparameters helps in your models too. This accepts predictions per series id.</p>
<p>(I found the hyperparameters by a mix of manual and automated hyperparameter tuning)</p>
<pre><code># Note: the function name, dictionary keys, string values ('same', 'log', 'linear') and the
# built-ins on the adaptive-window line are inferred from the surrounding description.
import numpy as np

def wbf_peak_detection(preds_orig, hyperparams, max_count, max_thresh):
    k_dist = hyperparams['k_dist']
    log_base = hyperparams['log_base']
    log_scale = hyperparams['log_scale']
    curr_max_power = hyperparams['curr_max_power']
    weight_coeff = hyperparams['weight_coeff']
    convolution_kernel = hyperparams['convolution_kernel']
    section_weight_method = hyperparams['section_weight_method']
    preds_reduction_power = hyperparams['preds_reduction_power']
    overlap_coeff = hyperparams['overlap_coeff']
    min_distance = hyperparams['min_distance']
    preds = preds_orig.copy()
    preds = np.convolve(preds, convolution_kernel, mode='same')
    count = 0
    indices = []
    scores = []
    while count < max_count:
        curr_max_idx = np.argmax(preds)
        curr_max = preds[curr_max_idx]
        if curr_max < max_thresh:
            break
        # adaptive window size around the peak (the elided clipping built-ins are assumed)
        k = int(k_dist - max(min_distance, int(curr_max ** curr_max_power)))
        start_idx = max(curr_max_idx - k, 0)
        end_idx = min(curr_max_idx + k + 1, len(preds))
        section = preds[start_idx:end_idx]
        distances = np.abs(np.arange(len(section)) - k)
        if section_weight_method == 'log':
            weights = 1 / (log_base ** (distances / (k * log_scale)))
        elif section_weight_method == 'linear':
            weights = 1 - (distances / k) * weight_coeff
        # score of the peak: weighted average over the peak and its neighbours
        weighted_avg = np.sum(section * weights) / np.sum(weights)
        scores.append(weighted_avg)
        indices.append(curr_max_idx)
        # suppress neighbouring values so the same peak is not detected again
        preds[start_idx:end_idx] *= (1 - weights * overlap_coeff) ** preds_reduction_power
        count += 1
    return indices, scores
</code></pre>
<h2>Final Ensemble</h2>
<p>Final ensemble was a blended weight of my and Penguin's subs regression postprocessed using WBF</p>
<p>Final_Sub = WBF(Penguins_Predictions * 0.25 + Nikhil's Predictions*0.75)</p>
<p>Since our predictions had slightly different scales of regression targets, Penguin's predictions were first scaled using a simple power transform, i.e., Penguin's_Predictions ** 0.7</p>
<p><strong>Ensemble CV:</strong> 0.835<br>
<strong>Ensemble Public LB:</strong> 0.793<br>
<strong>Ensemble Private LB:</strong> 0.845</p>
<p>P.S.: This competition is the first time I did not use gradient boosting, even though I knew it worked so well, me being a tabular guy. A mistake, I guess. I love LightGBM and will definitely use it in the next competition.</p>
<p>Congratulations to all the the top performers, it was a good fight, till it lasted :)</p> |
Stanford Ribonanza RNA Folding | GraphAttention solution approach | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Stanford Ribonanza RNA Folding
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi there, thanks to Stanford University for this competition!<br>
Maybe someone will find my work interesting.<br>
Not having enough time was my biggest problem, but after all this is my first medal, and even though it's silver, I am really happy.</p>
<ol>
<li>Preprocessing RNA sequence to graph:<br>
1.1. Used Eternafold pkg for extracting secondary structure.<br>
1.2. OHE nucleotides -> node features<br>
1.3. Used as edge features -> [phosphodiester_bond, base_pairing (canonical or wobble), BPPS] (a construction sketch follows after this list)</li>
<li>My final solution was:<br>
2.1. Random walk as positional encoding.<br>
2.2. Architecture: <br>
Combination of:<br>
local attention -> GraphTransformer.<br>
global attention -> Attention encoder.<br>
A 20-layer-deep network (192 hidden dim); for me, expanding the hidden dimension didn't help.</li>
</ol>
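<p>A sketch of the graph construction from step 1 using PyTorch Geometric; the function name, encoding details and edge-feature layout are assumptions based on the description above:</p>
<pre><code>import torch
from torch_geometric.data import Data

def sequence_to_graph(seq, pairs, bpp):
    """Build a PyG graph: OHE nucleotides as node features, backbone + base-pair edges with
    [phosphodiester_bond, base_pairing, BPP] edge features. `pairs` are (i, j) base pairs from the
    EternaFold secondary structure; `bpp` is the LxL base-pair probability matrix."""
    vocab = {'A': 0, 'C': 1, 'G': 2, 'U': 3}
    L = len(seq)
    x = torch.eye(4)[[vocab[c] for c in seq]]                      # (L, 4) one-hot node features
    edges, edge_attr = [], []
    for i in range(L - 1):                                         # backbone (phosphodiester) edges
        for a, b in ((i, i + 1), (i + 1, i)):
            edges.append((a, b))
            edge_attr.append([1.0, 0.0, float(bpp[a, b])])
    for i, j in pairs:                                             # base-pairing edges
        for a, b in ((i, j), (j, i)):
            edges.append((a, b))
            edge_attr.append([0.0, 1.0, float(bpp[a, b])])
    return Data(x=x,
                edge_index=torch.tensor(edges, dtype=torch.long).t().contiguous(),
                edge_attr=torch.tensor(edge_attr, dtype=torch.float))
</code></pre>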
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F10350612%2F29ca6292363042edcf0923d231234396%2Fd.drawio.png?generation=1702059627165899&alt=media" alt=""></p>
<p>This works helps me a lot:<br>
<a href="https://arxiv.org/abs/2009.03509" target="_blank">https://arxiv.org/abs/2009.03509</a><br>
<a href="https://arxiv.org/abs/2205.12454" target="_blank">https://arxiv.org/abs/2205.12454</a></p>
<p>3.<br>
Default split of the data into train and validation (10% of the SN_filter = 1 data). <br>
During training I used RNA sequences with signal-to-noise >= 0.8</p>
<p>For the final submission I used only a single model's predictions with no stacking; stacking might have increased the quality of the predictions, but as I said before, lack of time was the problem.</p>
<p>github:<a href="https://github.com/cerenov94/ribonanzaRNA" target="_blank">https://github.com/cerenov94/ribonanzaRNA</a><br>
Instruments:<br>
Pytorch Geometric,Graphein</p> |
Stanford Ribonanza RNA Folding | 44th Place Solution for the Stanford Ribonanza RNA Folding Competition | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Stanford Ribonanza RNA Folding
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h2>Context section</h2>
<ul>
<li>Business context: <a href="https://www.kaggle.com/competitions/stanford-ribonanza-rna-folding/overview" target="_blank">https://www.kaggle.com/competitions/stanford-ribonanza-rna-folding/overview</a></li>
<li>Data context: <a href="https://www.kaggle.com/competitions/stanford-ribonanza-rna-folding/data" target="_blank">https://www.kaggle.com/competitions/stanford-ribonanza-rna-folding/data</a></li>
</ul>
<h2>Overview of the Approach</h2>
<p>This is my first write-up (also the first time in the medal section 😁). Thanks to the organizers and participants! My approach was based on work we are doing in our lab for RNA secondary structure prediction [1] (this article includes source code). This is a ResNet-based model that takes the input sequences as one-hot representations and outputs the interaction matrix prediction (similar to the bpp provided). The model has a first 1D feature extraction stage, then it is converted to 2D with a simple matmul, and undergoes a second stage of 2D feature extraction to reach the final prediction. For this competition, the output was "flattened" using a sum over columns, thus arriving at a representation of the activation of each nucleotide in the sequence. The inner 2D representation allows us to add bpp information as an additional channel to learn features from.</p>
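<p>A rough sketch of this 1D-to-2D-to-per-nucleotide pipeline (matmul lift to an LxL map, BPP appended as an extra channel, column-sum back to per-position activations); channel sizes and the tiny conv head are illustrative assumptions, not the exact code from [1]:</p>
<pre><code>import torch
import torch.nn as nn

class Seq2MapHead(nn.Module):
    """1D features -> LxL pair map (with BPP as an extra channel) -> per-nucleotide output via column sum."""
    def __init__(self, dim1d=64, dim2d=32):
        super().__init__()
        self.to_2d = nn.Conv2d(dim1d + 1, dim2d, kernel_size=3, padding=1)
        self.out = nn.Conv2d(dim2d, 1, kernel_size=3, padding=1)

    def forward(self, feats, bpp):
        # feats: (B, C, L) 1D features, bpp: (B, L, L)
        pair = torch.einsum('bci,bcj->bcij', feats, feats)   # simple matmul lift to (B, C, L, L)
        pair = torch.cat([pair, bpp.unsqueeze(1)], dim=1)    # BPP as an additional channel
        pair = torch.relu(self.to_2d(pair))
        pred_map = self.out(pair).squeeze(1)                  # (B, L, L) interaction-style map
        return pred_map.sum(dim=2)                            # column sum -> (B, L) per-nucleotide output
</code></pre>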
<p>In terms of data it was rather a simple approach, the best model used the training data with SNR>0.5. Train/test splits were performed using clustered sequences using cdhit-est (80% threshold) [2]. This is important to not overfit patterns on similar sequences. Provided BPP were also used. </p>
<p><img src="https://github.com/sinc-lab/sincFold/blob/main/abstract.png?raw=true" alt=""></p>
<h2>Details of the submission</h2>
<p>I wanted to try different approaches but focused on improving the one detailed above. As training was time consuming, using only the medoids of each sequence cluster proved to reach competitive results (compared to my best submission), so that was the approach during development. It would have been nice to filter leaked test sequences, but it seems that my best solution on the public LB is also the best on private, so good news for the model. Final results were an average of 5 models, 3 of them using only medoids because I ran out of time. I also didn't use the test data for anything except looking at the public LB.</p>
<p>An interesting take is the use of BPP information. As it is based on classical RNA structure prediction methods and does not model pseudoknots, it could affect model predictions in those cases. I trained models without BPP, reaching far worse average results (public LB ~0.16). I have not tried them on private yet, but analyzing a sample case as described in [3], it can be seen that the final ensemble model (middle image) misses the predictions pointed to by the arrow in the reference prediction (upper image), while the model without BPP (bottom image) has some resemblance. </p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F208067%2F77df07e5804b090f0301093fe0dceb73%2Fplot_reffinalsinbpp.png?generation=1702052560298402&alt=media" alt=""></p>
<p>Things that didn't work: </p>
<ul>
<li>Tried white noise and flip augmentation</li>
<li>Using sequences with less than 0.5 SNR</li>
<li>Tried to use the errors during training but didn't arrive at a converging method. It could work, though.</li>
</ul>
<p>Hope we see more work on bio sequences!</p>
<h2>Sources</h2>
<p>[1] <a href="https://www.biorxiv.org/content/10.1101/2023.10.10.561771v1" target="_blank">https://www.biorxiv.org/content/10.1101/2023.10.10.561771v1</a><br>
[2] <a href="https://sites.google.com/view/cd-hit" target="_blank">https://sites.google.com/view/cd-hit</a><br>
[3] <a href="https://www.kaggle.com/competitions/stanford-ribonanza-rna-folding/discussion/444653" target="_blank">https://www.kaggle.com/competitions/stanford-ribonanza-rna-folding/discussion/444653</a></p> |
Google - Fast or Slow? Predict AI Model Runtime | Discoveries through the Contest | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google - Fast or Slow? Predict AI Model Runtime
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to the organizers for holding this unique contest! I am currently an undergraduate student in CS, and after taking a course on Compiler Design, this contest helped me explore machine-learning-driven approaches to compiler optimization in the context of predicting how AI models run.</p>
<h1><strong>Background</strong></h1>
<p>Reference to the paper <a href="https://arxiv.org/pdf/2308.13490.pdf" target="_blank">"TpuGraphs: A Performance Prediction Dataset on Large Tensor Computational Graphs"</a> gives an overview on the computational graph representation of the programs running on TPUs with a compilation configuration.</p>
<h1><strong>Approach</strong></h1>
<p>The detailed description of the node features and opcodes for tiles and layout helps in deciding the normalization parameters, by noting the estimated time taken for a portion of the program to run given its flow representation. The use of a <strong>Graph Convolution Network (GCNConv)</strong> to obtain the most probable runtime order is inspired by its ability to capture local and global information with parameter sharing and transferability.</p>
<h1><strong>Data Preparation</strong></h1>
<p>Weights in the range [0.0055-0.01] are assigned to each feature vector based on the code (opcode) associated with a particular instruction. The runtime per node is obtained as a weighted summation of the node's feature vectors along with any specified connections through the edge values. The normalized config runtime is calculated as the difference between the found <em>config runtime</em> and the <em>config feature vectors</em>, finally divided by the runtime obtained by summing the runtimes of all the nodes. The classes to be predicted by the GCN are the normalized config runtime values ordered from fastest to slowest, labelled 0 to n-1. The GCN is trained with the config features as the input and the edge vectors as the connectives. </p>
<h1><strong>Interpretation</strong></h1>
<p>The probabilities of each configuration belonging to a particular runtime class (0 for the fastest, 1, 2, etc. specifying the order, and n-1 for the slowest) are the outputs, obtained as an n*n matrix, where n is the number of configurations. The highest probability in row 'i', at matrix entry pred[i][j], specifies that configuration 'i' belongs to class 'j', i.e., its runtime is the 'j'-th fastest. This trained model helps obtain the predicted runtime classes for nodes of comparable configurations in the test dataset. The nodes bearing the values of the top 5 classes are returned for the tiles dataset. This is similar to the bag-of-words features, represented as vectors, that are used to determine the class of machine learning keywords in the Cora dataset.</p>
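<p>One possible reading of this setup as a minimal PyG model, treating each configuration as a node carrying its config features and predicting its runtime-order class; the two-layer depth, hidden size and readout are assumptions:</p>
<pre><code>import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

class ConfigRuntimeGCN(torch.nn.Module):
    """Minimal GCNConv classifier: config features in, runtime-order class logits out (0 = fastest)."""
    def __init__(self, in_dim, n_classes, hidden=64):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.conv2 = GCNConv(hidden, n_classes)

    def forward(self, x, edge_index):
        x = F.relu(self.conv1(x, edge_index))
        return self.conv2(x, edge_index)          # (n_configs, n_classes) logits

# Reading out the predicted order: row i's argmax gives configuration i's runtime class, and the
# five configurations with the smallest predicted classes are returned for the tiles dataset, e.g.:
#   order = logits.argmax(dim=1); top5 = order.argsort()[:5]
</code></pre>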
<h1><strong>Key Takeaways</strong></h1>
<ol>
<li>Reviewing the solution by <a href="https://www.kaggle.com/competitions/predict-ai-model-runtime/discussion/456343" target="_blank">EDUARDO ROCHA DE ANDRADE</a> helped me understand the need for the use of Cross-Config Attention for making this approach more efficient.</li>
<li>The approach also helped identify the runtime ordering that most configurations follow, based on the most prominent predicted classes.</li>
</ol> |
Google - Fast or Slow? Predict AI Model Runtime | 5th Place Solution: GNN with Invariant Dimension Features | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Google - Fast or Slow? Predict AI Model Runtime
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks for hosting the interesting competition, and congratulations to the winners!</p>
<h2>Overview</h2>
<p>My solution is based on an end-to-end graph neural network (GNN). I implemented a 3-layer GraphSage based on <a href="https://pytorch-geometric.readthedocs.io/en/latest/" target="_blank">PyG</a>. In each layer, I operate graph convolution in both directions of edges by different weights and concatenate the outputs.<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F9088007%2Fa038962047b9285a904e29d787b3d1ff%2FTPUGraphs-GNN.drawio.png?generation=1702755579495636&alt=media" alt=""><br>
I trained the model to minimize pairwise hinge loss using the AdamW optimizer using a cosine annealing scheduler.<br>
For the loss, I used the average of a pairwise hinge loss among different configurations of the same graph and a pairwise hinge loss among all the samples in a batch (including different graphs). For this reason, I didn't use a subgraph but a whole graph as an input to GNN.</p>
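<p>A minimal PyG sketch of one such bidirectional layer and of a pairwise hinge loss; dimensions and the margin value are illustrative assumptions:</p>
<pre><code>import torch
import torch.nn as nn
from torch_geometric.nn import SAGEConv

class BiDirSageLayer(nn.Module):
    """Convolve along both edge directions with separate weights and concatenate the outputs."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.fwd = SAGEConv(in_dim, out_dim)
        self.bwd = SAGEConv(in_dim, out_dim)
        self.act = nn.ReLU()

    def forward(self, x, edge_index):
        rev_edge_index = edge_index.flip(0)   # reversed edge direction
        h = torch.cat([self.fwd(x, edge_index), self.bwd(x, rev_edge_index)], dim=-1)
        return self.act(h)

def pairwise_hinge_loss(pred, runtime, margin=0.1):
    # pairwise hinge loss over a set of configurations (margin value is an assumption)
    diff = pred[:, None] - pred[None, :]
    sign = torch.sign(runtime[:, None] - runtime[None, :])
    mask = sign != 0
    return (torch.relu(margin - sign * diff) * mask).sum() / mask.sum().clamp(min=1)
</code></pre>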
<h2>Dimension Feature Embed by Transformer</h2>
<p>Node features include 30 features (including tile and layout configurations) for each of the 6 dimensions. A naive approach to input this to GNN is to simply flatten them (I call this naive model), but I considered the following two disadvantages.</p>
<ul>
<li>It drops prior information about feature correspondence across dimensions</li>
<li>The output should be invariant to the indexing order of dimensions (I'm not sure if this is exactly correct)</li>
</ul>
<p>To tackle these issues, I implemented a dimension feature embedding layer using a transformer that handles each dimension as a token. In this layer, I transform (6, 30) input to (6, mid_ch) by a transformer and reduce to (mid_ch) by taking the sum in the token dimension.<br>
Since most dimension features are exactly the same (padded ones), I could compute this efficiently by calculating embedding for only unique ones in each batch and copying them.</p>
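<p>A minimal sketch of this dimension-feature embedding (each of the 6 dimensions as a token, a transformer encoder, then a sum over the token axis so the result is invariant to dimension order); widths and depth are illustrative assumptions:</p>
<pre><code>import torch
import torch.nn as nn

class DimFeatureEmbed(nn.Module):
    """(num_nodes, 6, 30) dimension features -> transformer over the 6 tokens -> sum -> (num_nodes, mid_ch)."""
    def __init__(self, in_ch=30, mid_ch=64, num_heads=4, num_layers=2):
        super().__init__()
        self.proj = nn.Linear(in_ch, mid_ch)
        layer = nn.TransformerEncoderLayer(d_model=mid_ch, nhead=num_heads,
                                           dim_feedforward=2 * mid_ch, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)

    def forward(self, dim_feats):
        # dim_feats: (num_nodes, 6, 30)
        h = self.encoder(self.proj(dim_feats))   # (num_nodes, 6, mid_ch)
        return h.sum(dim=1)                      # permutation-invariant reduction over the dimension tokens
</code></pre>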
<h2>Tile Config Dataset</h2>
<p>I trained the model using only the tile dataset.<br>
Using the transformer model, I could easily achieve 0.2 (nearly perfect) in public and private LB. The transformer model was significantly better than the naive approach on the validation Kendall tau score.</p>
<h2>Layout Config Dataset</h2>
<p>I trained the model using the whole layout dataset (random and default of xla and nlp). Also, including the tile dataset enhanced the performance a little.</p>
<p>I could not outperform the naive model with the transformer model on the validation score (due to limited time), but it was comparable. My final submission was an ensemble of naive models and transformer models.</p>
<h2>Tips</h2>
<ul>
<li>use the same opcode embedding for unary operations such as abs, ceil, cosine, etc.</li>
<li>override layout_minor_to_major by layout config features for configurable nodes</li>
<li><a href="https://arxiv.org/abs/1907.10903" target="_blank">DropEdge</a></li>
<li>apply log transformation to input features</li>
<li>oversampling</li>
<li>load layout config data by numpy's mmap mode to save RAM</li>
</ul>
<h2>What Didn't Work</h2>
<ul>
<li>graph pooling</li>
<li>pretrain on the tile dataset and finetune on the layout dataset</li>
<li>graph normalization</li>
<li>dropout node</li>
<li>GAT, GATv2, GIN</li>
<li>fp16</li>
<li>pseudo label</li>
</ul>
<h2>Acknowledgement</h2>
<p>I acknowledge Preferred Networks, Inc. for allowing me to use computational resources.</p>
<p>Source Code: <a href="https://github.com/knshnb/kaggle-tpu-graph-5th-place" target="_blank">https://github.com/knshnb/kaggle-tpu-graph-5th-place</a></p> |