diff --git "a/predict_eval_predictions.jsonl" "b/predict_eval_predictions.jsonl" new file mode 100644--- /dev/null +++ "b/predict_eval_predictions.jsonl" @@ -0,0 +1,595 @@ +{"Task": "task1356_xlsum_title_generation", "Definition": ["Generate an appropriate title for the given text. The generated title must be short and include the main topic of the text. The preferred titles are under fifteen words."], "Instance": {"id": "task1356-018e7314264c4d2fb6b980845687ffd6", "input": "By Paul BurnellBBC News In a cafe in Sloane Square, the screenwriter and Conservative peer Julian Fellowes is contemplating his most notorious ancestor. The creator of the hit ITV drama Downton Abbey is considering how, as a screenwriter used to drawing dramatic characters, he would assess the personality of William Hulton. \"Most characters have shades of grey because, on the whole, I think that in life people don't tend to be all evil or all good,\" Fellowes says. \"But actually William Hulton was pushing it for 'all evil'... I don't think he had much to redeem him.\" The Oscar-winning writer is Hulton's great-great-great grandson - but this is not a source of pride. He was, he says, a \"cruel and horrible man\". It was Hulton who, as a magistrate in north-west England, gave the notorious order for troops to violently disperse a peaceful, pro-democracy protest in the centre of the UK's first industrial city. Screenwriters make much of a character's motives, so what does Fellowes think drove Hulton, who was born into a family of wealthy landowners? \"He was clearly a sort of hysteric,\" he says. \"He was clearly terrified of any overthrow of the established social order.\" Two hundred years ago, working-class people in Manchester and other industrial towns in the north of England were becoming increasingly vocal in their demands for political reform. 
They were angry about the fact that most of the population could not vote, that corruption was rife, and that urban areas were grossly underrepresented in Parliament. At least 50,000 people arrived at St Peter's Fields on 16 August 1819 to hear radical speaker Henry Hunt campaign for parliamentary reform. At this time, Manchester had no police force so the Manchester Yeomanry was sent in to prevent any disturbances. When Hunt began to speak the army tried to arrest him, and attacked anybody who got in its way. At least 11 people were killed and 400 injured. The events soon became notorious in the press, where they were dubbed Peterloo, an ironic reference to the Battle of Waterloo that had taken place four years previously. \"It was only the barest of beginnings for any kind of workers' movement,\" says Fellowes. \"But Hulton immediately went into some kind of overdrive, attacking a group that included a group of many women and many children - a lot of whom hadn't done anything.\" He describes Hulton's heavy-handed response as a total misreading of the situation. \"Instead of lining up across the field and reading some sort of 'go home!' message, which clearly would have been the reasonable thing to do, he sent the yeomanry to a crowd who could not at once disperse. \"They were seen by him to be resisting but they were unarmed - they weren't resisting them with weapons.\" The film Peterloo, which goes on general release on 2 November, is directed by Salford-born Mike Leigh, who is known for his gritty tales of working-class life. \"Although I don't think Mike Leigh and I are as one politically... I certainly agree with him about who were the bad guys,\" said Fellowes. Seven years before Peterloo, as a justice of the peace, Hulton had already sentenced four Luddites to death for setting fire to a weaving mill in Westhoughton, near Bolton. One of those hanged was a 12-year-old boy. 
While history is full of monsters, historians tend to temper criticism when actions are in keeping with the standards of the time. But, in Hulton's case, even this does not save his reputation. \"They thought it was cruel and unnecessary and inhuman,\" Fellowes says. \"And so, in a sense, he was already an offender before the demonstration at St Peter's Field had ever happened. \"I feel extremely sorry for the men, women and children who were cut down at Peterloo. \"I think they were doing no more than making clear to those who would listen - the writing was on the wall, workers were bound to have some rights. \"What people like Hulton were trying to do was to fight history, and to fight the inevitable.\" Among those on the receiving end of the cavalry charge was Mary Heys, who was pregnant with her sixth child. She had joined the hordes gathered near her ramshackle home in Oxford Street, which nearly two centuries later is occupied by a McDonald's restaurant. Heys's story has been researched by her five times great-granddaughter Denise Southworth. The 57-year-old said Heys was one of the massacre's \"forgotten victims\". Indeed, estimates of the final death toll vary widely and the true number will never be known. \"She was trampled by a horse,\" said Ms Southworth. \"Why would a woman in her 40s who was pregnant want to take part in a riot?\" The day after Heys was injured, she began having fits. Four months later she gave birth to a premature baby, Henry. \"[Mary] died just before Christmas - because she didn't die straight away, she wasn't counted among the dead,\" said Ms Southworth, who is Manchester born and bred. \"We gave more significance in my school in Manchester to the Battle of Waterloo than to Peterloo. \"I think it has been overlooked even in Manchester. 
\"I knew nothing about it - in my history books they were too busy talking about Napoleon and Waterloo.\" Ms Southworth's sense of outrage has prompted her to join the Peterloo Memorial Campaign, which has the backing of Peterloo star Maxine Peake. It is fighting for a permanent memorial to those who died, rather than just the small plaque which currently sits on the wall of the Radisson Hotel. Ms Southworth said: \"You look round here - the middle of Manchester - and see all these beautiful, fabulous, glass, expensive buildings - do people know what happened here 200 years ago? \"Do they know about how the ordinary working people came for a peaceful demonstration and were butchered? \"We are always told about Manchester's achievements, but let's not forget the spot where people were killed for trying get a decent standard of living. \"We owe it as educated people to remember those who didn't have a vote and did not have any rights.\"", "output": ["Peterloo: The man who ordered a massacre"]}, "Prediction": "Julian Fellowes' great-great-"} +{"Task": "task1356_xlsum_title_generation", "Definition": ["Generate an appropriate title for the given text. The generated title must be short and include the main topic of the text. The preferred titles are under fifteen words."], "Instance": {"id": "task1356-cbfbab5557504cd98b3e5696d4402667", "input": "By Robbie MeredithBBC News NI Education Correspondent That is according to an email to schools from the Department of Education (DE). The department said evidence from the Public Health Authority's (PHA) contact tracing service suggested too many children and staff without symptoms were getting tested. Concerns have already been expressed about capacity in the testing system. In an email accompanying revised guidance for schools, DE said school staff or pupils should only get a test when they were clearly symptomatic. 
\"PHA have asked us to emphasise to schools that pupils and staff must only be advised to seek a Covid-19 test when they are symptomatic of the virus - that is a new continuous cough, a fever or a new loss of taste and/or smell,\" said the department. \"Early evidence from the contact tracing service is that available testing capacity is being utilised to test many children and school staff who are showing no symptoms and in doing so make it more challenging for genuinely symptomatic individuals to access a test.\" Guidance 'confusion' The department also provided new guidance to schools on when pupils have to self-isolate if one of their classmates has Covid-19. That comes after previous guidance issued last Thursday caused confusion for many principals by suggesting that class bubbles should be sent home if one pupil in the class had symptoms of the virus, such as a persistent cough. \"In practical terms this would have meant that anytime a child got COVID-19 symptoms, for most primary schools and many post primary schools, the entire bubble would have been sent home,\" DE has now said. \"PHA has advised the correct position is that only when a positive case has been identified in a symptomatic child, young person or member of staff do schools need to consider self-isolating close contacts of that individual.\" \"Close contacts must stay in isolation for the full 14 day period.\" \"They should not return to school before the 14 day isolation period as they may still be incubating the virus - regardless of a lack of symptoms.\" While many schools in Northern Ireland have sent staff or groups of pupils home to self-isolate after positive cases, some schools have closed entirely for a number of days. 
On Monday, St Comgall's Primary in Bangor announced it was closing for 14 days after several cases of coronavirus were identified, although the school said on Tuesday it would reopen to most classes on Friday.", "output": ["Schools: 'Too many' children and staff without symptoms getting tested"]}, "Prediction": "Coronavirus testing capacity being utilise"} +{"Task": "task1356_xlsum_title_generation", "Definition": ["Generate an appropriate title for the given text. The generated title must be short and include the main topic of the text. The preferred titles are under fifteen words."], "Instance": {"id": "task1356-6c22ac3299b64bda8184fdb2139c99a5", "input": "\"They've got to understand that they've become the winner and then the real work starts,\" says Louis Walsh. Ben Haenow, Fleur East and Andrea Faustini are all hoping to be crowned the X Factor winner. They'll sing their favourite song of the series on Saturday's show and perform their celebrity duets. \"The competition's one thing. You have themes, you have all the different obstacles and they grow during that process,\" Cheryl Fernandez-Versini told Newsbeat. \"But, afterwards it's about being who you are as an artist in this current market which is extremely hard right now,\" she added. Ed Sheeran and Meghan Trainor will also perform on the show tonight and will no doubt reveal some more pearl's of wisdom for the contestants. After the excitement of releasing their debut single and possibly making the Christmas number one spot what happens to the winner's career is not by any means for certain. Whether they fade into pop oblivion or become a star with long-term staying power is all down to hard work and a little bit of luck says Dermot O'Leary. \"The stars have got to align. I always tell the contestants 'When you leave here the hard work starts and that's the rest of your career,'\" he says. Past winners have gone on to varying degrees of success. Remember series two winner, Shayne Ward? 
Or series four winner, Leon Jackson? What about series seven's Matt Cardle? Arguably the winner who has had the most success is Leona Lewis. She made it big both sides of the Atlantic. Is their success or lack of it down to hard work or hard luck? Who knows, but Dermot's biggest bit of advice for the winner is: \"Enjoy it. Enjoy the next nine months.\" \"Get your head down, work hard and make sure that the album you want to make is the album you make and work with the people you want to make it with,\" he says. Louis also believes a great record deal and great songs help. Naturally. But he also agrees: \"You need a lot of hard work and a little bit of luck.\" Follow @BBCNewsbeat on Twitter and Radio1Newsbeat on YouTube", "output": ["X Factor winner told: The hard work starts now"]}, "Prediction": "X Factor contestants perform on Saturday"} +{"Task": "task1356_xlsum_title_generation", "Definition": ["Generate an appropriate title for the given text. The generated title must be short and include the main topic of the text. The preferred titles are under fifteen words."], "Instance": {"id": "task1356-e244ea58d7c64228916fa6f84477107c", "input": "Authorities say Sandra Bland hanged herself, but her family says she would not have taken her own life. A video was found on her Facebook page in which she said she was suffering from \"a little bit of depression\" and post-traumatic stress disorder. Her death follows a series of cases in which black people have died in police custody or during arrest. Bland had been stopped by police after improperly signalling a lane change while driving. She was arrested after allegedly kicking a police officer after the traffic stop. Family members were shocked by her death, unable to believe she would have killed herself. A second video showed Bland saying \"Depression is nothing but the devil\" and that \"it's a daily struggle\". 
The FBI has launched an investigation, following the start of a separate inquiry by the Texas Rangers law enforcement agency. The 28-year-old had just moved to Texas from Illinois to start a new job. Her sister Shante Needham said Bland called her from jail, saying she did not know why she had been arrested and that an officer had possibly broken her arm. Bland died by asphyxiation using a plastic bag to hang herself in her cell, according to Waller County District Attorney Elton Mathias. Video footage did not show what happened inside Bland's cell, but did suggest no one entered or left it until someone found her unconscious. The Texas Commission on Jail Standards cited the Waller County Jail three years ago for not checking all inmates once an hour. Another inmate hanged himself with a bed sheet in 2012. Jail Sheriff Glen Smith said his staff checked on Bland less than an hour before she was found dead. Texas state senator Royce West asked the Texas Department of Public Safety to release video of the arrest and called her death \"suspicious\".", "output": ["FBI investigating 'suicide' of woman found dead in a Texas jail"]}, "Prediction": "Sandra Bland's family says she would"} +{"Task": "task1356_xlsum_title_generation", "Definition": ["Generate an appropriate title for the given text. The generated title must be short and include the main topic of the text. The preferred titles are under fifteen words."], "Instance": {"id": "task1356-f352b932e1bb43e289e0a4990a847627", "input": "The Crown Prosecution Service (CPS) advice for England and Wales says older people may face more frequent attacks because they are less independent. It makes clear abuse may not always involve violence - warning that young people my face abuse via mobile phones and social media. The advice will be consulted on before coming into force. 'Partner violence' The CPS points out that people's experiences of domestic abuse change as they get older. 
It says the intensity of abuse may be greater among pensioners because they may feel less able to escape or to get help as they are dependent on their abuser. Director of Public Prosecutions (DPP) Alison Saunders said: \"We know from research conducted by others that there is very little evidence that partner violence decreases with age. \"And it is important we also recognise the factors that may contribute to and impact upon domestic abuse between older people.\" The new guidance, which is intended to help prosecutors in England and Wales decide when to bring charges against perpetrators of domestic abuse, is yet to come into force. The issue of elderly abuse was brought into the spotlight following the death of 81-year-old Mary Russell. She was abused by her husband and died following a \"domestic-related\" incident in 2010, suffering a bleed to the brain. The pensioner, of Leigh-on-Sea in Essex, had made eight 999 calls in the seven months before her death. Exclusion and isolation She made her first report to police in 2003 after she was found standing on her doorstep with a bloody nose. Albert Russell, 88, the victim's husband, was arrested following her death, but it was decided there was insufficient evidence to prosecute him. He has since died. The draft guidance claims that growing older can lead to exclusion or isolation, which may make a victim more vulnerable to domestic abuse. For younger victims, prosecutors are advised to consider abusive relationships which form part of the membership of a gang. Some victims may be coerced into sexual activity in return for protection or status, according to the guidance. It advises prosecutors to ask police about a victim's family life to determine whether telling their parents about any potential prosecution might affect their safety. 
'Honey traps' Consideration should be given to bail restrictions and restraining orders, the code also says, and prosecutors should take into account areas which victims often visit, like school or social clubs, and methods of contact, such as social media. The guidance also advises prosecutors on abusive relationships within gangs, highlighting the fact that some victims may be coerced into sex in return for protection or status. \"Some teenagers may not consider themselves victims of domestic violence, especially if they are not being physically abused but are being targeted on social media for example,\" Ms Saunders added. \"Similarly, abuse in gang environments - for example young girls being forced into sexual activity as part of gang initiation or used as 'honey traps' to set up rival gang members - is often not reported. \"Understandably, a lot of my prosecutors will not be familiar with the workings of gang culture or gang slang so I have included it in the proposed guidance so that they know what to look for when considering such cases.\"", "output": ["Prosecutors given guidance on teen and elderly abuse"]}, "Prediction": "New guidance on domestic abuse for prosecutors."} +{"Task": "task893_gap_fill_the_blank_coreference_resolution", "Definition": ["Read the passage and find the corresponding pronoun for the given name. The pronoun should match the given blank(_). The word between ** ** is the target name. The pronoun should be one of 'her', 'him', 'he', 'she' and 'his' with proper casing based on the position in the passage."], "Instance": {"id": "task893-d5eab067d1e64157af0fb79f6b95c76e", "input": "As a junior player, Bailey won the World Junior Championship in 1997 and captained the England team to World and European junior team titles. A car accident led to career-threatening knee surgery at the age of 21, but she successfully recovered and resumed her playing career. In 2003, **Bailey** finished runner-up to Sarah Fitz-Gerald at the British Open. 
_ reached a career-high ranking of World No. 4 that year.", "output": ["She"]}, "Prediction": "she"} +{"Task": "task893_gap_fill_the_blank_coreference_resolution", "Definition": ["Read the passage and find the corresponding pronoun for the given name. The pronoun should match the given blank(_). The word between ** ** is the target name. The pronoun should be one of 'her', 'him', 'he', 'she' and 'his' with proper casing based on the position in the passage."], "Instance": {"id": "task893-3721cbd619c34689adee80bb96e61963", "input": "She goes on to acknowledge that in Finch's work, ``Now and again words issue of pure poetry...It was a thousand pities that the woman who could write like that, whose mind was turned to nature, and reflection, should have been forced to anger and bitterness.'' Woolf goes on in defence of her as a gifted but sometimes understandably misguided example of women's writing. It is evident that **Woolf** sympathises deeply with Finch's plight as a female poet, and though _ takes issue with some of the content in Finch's writing, she expresses grief that Finch is so unknown: ``...when one comes to seek out the facts about Lady Winchilsea, one finds, as usual, that almost nothing is known about her.''", "output": ["she"]}, "Prediction": "she"} +{"Task": "task893_gap_fill_the_blank_coreference_resolution", "Definition": ["Read the passage and find the corresponding pronoun for the given name. The pronoun should match the given blank(_). The word between ** ** is the target name. The pronoun should be one of 'her', 'him', 'he', 'she' and 'his' with proper casing based on the position in the passage."], "Instance": {"id": "task893-9ba2164850be4f6db6cb55494d12034c", "input": "Connie, in a one-sided conversation, gives praises for her late brother, Gordon. She does not seem to mind she spent childhood in the shadow of her brother, blatantly the family favourite and given all the support she never had. 
Although Beth quietly agrees with **Connie**'s idea of _ idyllic marriage to Gordon, she expresses subtle annoyance with Connie's idea that the whole family will have to look after her this first Christmas alone, and Connie's hints of self-pity don't help either.", "output": ["her"]}, "Prediction": "her"} +{"Task": "task893_gap_fill_the_blank_coreference_resolution", "Definition": ["Read the passage and find the corresponding pronoun for the given name. The pronoun should match the given blank(_). The word between ** ** is the target name. The pronoun should be one of 'her', 'him', 'he', 'she' and 'his' with proper casing based on the position in the passage."], "Instance": {"id": "task893-ffad8a5c67eb4f46ada4214575b784c2", "input": "Following the death of Tewodros II's much loved first wife, Empress Tewabech Ali (who was Tiruwork's niece), his intimates began to discreetly look for an appropriate consort to provide him with an heir and to calm his more extreme moods as his late wife had done. Legend states that one of the Emperor's officers was attending Sunday services at a church at Derasge when **Tewodros** was staying there, and was struck at the beauty, the aristocratic deportment, and the deep piety of a woman worshiping there. _ immediately went to the Emperor and told him he had seen the woman ``meant to be the wife of my sovereign''.", "output": ["He"]}, "Prediction": "he"} +{"Task": "task893_gap_fill_the_blank_coreference_resolution", "Definition": ["Read the passage and find the corresponding pronoun for the given name. The pronoun should match the given blank(_). The word between ** ** is the target name. The pronoun should be one of 'her', 'him', 'he', 'she' and 'his' with proper casing based on the position in the passage."], "Instance": {"id": "task893-d3773fa3ed27487ba7307512555507f0", "input": "At the in Valencia, Button finished 3rd and maintained 2nd place in the title race with another podium. 
Button missed out on a podium at the , finishing fourth after problems with the balance off his car in qualifying had left him fourteenth. After three further-points scoring finishes, **Button** retired at the after being hit by Vettel and punctured the radiator of _ car.", "output": ["his"]}, "Prediction": "his"} +{"Task": "task641_esnli_classification", "Definition": ["Determine whether the two given sentences clearly agree with each other (entailment), disagree with each other (contradiction), or if this cannot be determined (neutral). Answer must be the letters E, C, and N, respectively."], "Instance": {"id": "task641-2cdd0d0a4ad945a29486bd164b1a63a6", "input": "A man jumping a dirt ramp on a bmx bike. A man crashes his bike.", "output": ["C"]}, "Prediction": "C"} +{"Task": "task641_esnli_classification", "Definition": ["Determine whether the two given sentences clearly agree with each other (entailment), disagree with each other (contradiction), or if this cannot be determined (neutral). Answer must be the letters E, C, and N, respectively."], "Instance": {"id": "task641-9900ec268b7c48c59d367fff24299ff9", "input": "A man wearing red ski pants, a black jacket, and a white helmet is skiing down a mountain. The man is near another person.", "output": ["N"]}, "Prediction": "B"} +{"Task": "task641_esnli_classification", "Definition": ["Determine whether the two given sentences clearly agree with each other (entailment), disagree with each other (contradiction), or if this cannot be determined (neutral). Answer must be the letters E, C, and N, respectively."], "Instance": {"id": "task641-260eeecb77fa4069b397b82dd9a55611", "input": "It looks like the clown has fallen off the horse. 
The cowboy rode the bull to the buzzer.", "output": ["C"]}, "Prediction": "C"} +{"Task": "task641_esnli_classification", "Definition": ["Determine whether the two given sentences clearly agree with each other (entailment), disagree with each other (contradiction), or if this cannot be determined (neutral). Answer must be the letters E, C, and N, respectively."], "Instance": {"id": "task641-ab80314a490c4b078a6d29fd759312bd", "input": "A man sitting on steps with a child as a bystander walks by. A man and a child sitting on steps saw a bystander.", "output": ["E"]}, "Prediction": "C"} +{"Task": "task641_esnli_classification", "Definition": ["Determine whether the two given sentences clearly agree with each other (entailment), disagree with each other (contradiction), or if this cannot be determined (neutral). Answer must be the letters E, C, and N, respectively."], "Instance": {"id": "task641-67a775223bab42a5affc29fc19d97197", "input": "A dog chasing a duck in a pond. A dog is swimming through a pond in pursuit of a duck.", "output": ["E"]}, "Prediction": "E"} +{"Task": "task1529_scitail1.1_classification", "Definition": ["You are given two sentences. You have to find if there is entailment or agreement of the Hypothesis by the Premise. From the given pair of sentences, you should identify if there is enough information in the Premise to support the claim made in the Hypothesis. The Premise may not exactly be the same as Hypothesis. Your task is to return 'entails' if the premise supports hypothesis else return 'neutral'."], "Instance": {"id": "task1529-d060fdb144cb4e22a97fc82963166937", "input": "Premise: Most carbon atoms in the world have six protons and six neutrons in their nucleus. \n Hypothesis: All carbon atoms have six protons.", "output": ["entails"]}, "Prediction": "entails"} +{"Task": "task1529_scitail1.1_classification", "Definition": ["You are given two sentences. You have to find if there is entailment or agreement of the Hypothesis by the Premise. 
From the given pair of sentences, you should identify if there is enough information in the Premise to support the claim made in the Hypothesis. The Premise may not exactly be the same as Hypothesis. Your task is to return 'entails' if the premise supports hypothesis else return 'neutral'."], "Instance": {"id": "task1529-1b487ac7129b4b97a0a6960ccb150cf1", "input": "Premise: If it is the right species of fish, the larvae stay on. \n Hypothesis: Fish hatch into larvae that are different from the adult form of species.", "output": ["neutral"]}, "Prediction": "entails"} +{"Task": "task1529_scitail1.1_classification", "Definition": ["You are given two sentences. You have to find if there is entailment or agreement of the Hypothesis by the Premise. From the given pair of sentences, you should identify if there is enough information in the Premise to support the claim made in the Hypothesis. The Premise may not exactly be the same as Hypothesis. Your task is to return 'entails' if the premise supports hypothesis else return 'neutral'."], "Instance": {"id": "task1529-08473b4e36514235b8823604e61ffb1b", "input": "Premise: Competition is growing -- \n Hypothesis: As the population grows, competition for food grows.", "output": ["neutral"]}, "Prediction": "entails"} +{"Task": "task1529_scitail1.1_classification", "Definition": ["You are given two sentences. You have to find if there is entailment or agreement of the Hypothesis by the Premise. From the given pair of sentences, you should identify if there is enough information in the Premise to support the claim made in the Hypothesis. The Premise may not exactly be the same as Hypothesis. Your task is to return 'entails' if the premise supports hypothesis else return 'neutral'."], "Instance": {"id": "task1529-ab0906a940aa4e4893eefa526a247f1a", "input": "Premise: Thus a stable element such as Carbon has a nucleus with six protons and six neutrons which may be written as 12 6 C or simply 12 C. 
\n Hypothesis: Carbon atoms have six neutrons as well as six protons.", "output": ["entails"]}, "Prediction": "entails"} +{"Task": "task1529_scitail1.1_classification", "Definition": ["You are given two sentences. You have to find if there is entailment or agreement of the Hypothesis by the Premise. From the given pair of sentences, you should identify if there is enough information in the Premise to support the claim made in the Hypothesis. The Premise may not exactly be the same as Hypothesis. Your task is to return 'entails' if the premise supports hypothesis else return 'neutral'."], "Instance": {"id": "task1529-ba01486b13af44e48590ceb7e20073b7", "input": "Premise: There are some reptiles, such as the boa constrictor and komodo dragon that can reproduce both sexually and asexually, depending on whether a mate is available. \n Hypothesis: Reptiles typically reproduce sexually.", "output": ["neutral"]}, "Prediction": "entails"} +{"Task": "task202_mnli_contradiction_classification", "Definition": ["In this task, you're given a statement, and three sentences as choices. Your job is to determine which sentence clearly disagrees with the statement. Indicate your answer as '1', '2', or '3' corresponding to the choice number of the selected sentence."], "Instance": {"id": "task202-59e69db7c4464aa4ade00666adb850a2", "input": "Statement: Our goal is to better serve our client by making GAO more responsive, more flexible - and more focused on our client. Choices: 1. We want to make GAO more focused on its clients. 2. Until now, GAO has completely ignored its clients. 3. We are trying to make GAO less flexible.", "output": ["3"]}, "Prediction": "3"} +{"Task": "task202_mnli_contradiction_classification", "Definition": ["In this task, you're given a statement, and three sentences as choices. Your job is to determine which sentence clearly disagrees with the statement. 
Indicate your answer as '1', '2', or '3' corresponding to the choice number of the selected sentence."], "Instance": {"id": "task202-d790abb252cf4837a3dc9663814cc91a", "input": "Statement: (Bennett himself appears to share this view, terming gays, as a group, wealthy and well educated. Choices: 1. In general, Bennet classifies gays as tending to be poor and uneducated. 2. Bennett believes that only gay people are well educated. 3. One person thinks that gay people have lots of money.", "output": ["1"]}, "Prediction": "1"} +{"Task": "task202_mnli_contradiction_classification", "Definition": ["In this task, you're given a statement, and three sentences as choices. Your job is to determine which sentence clearly disagrees with the statement. Indicate your answer as '1', '2', or '3' corresponding to the choice number of the selected sentence."], "Instance": {"id": "task202-04c3cbe19ef24505a8baa7bf6a455ca4", "input": "Statement: The valuation of Work Loss Days presented in our national benefits summaries, however, incorporates county-specific adjustment factors to account for variations in regional income. Choices: 1. The valuation of Work Loss Days incorporates adjustment factors. 2. The Work Loss Days will appear to be inflated at first glance. 3. The valuation of Work Loss days is consistent across counties.", "output": ["3"]}, "Prediction": "2"} +{"Task": "task202_mnli_contradiction_classification", "Definition": ["In this task, you're given a statement, and three sentences as choices. Your job is to determine which sentence clearly disagrees with the statement. Indicate your answer as '1', '2', or '3' corresponding to the choice number of the selected sentence."], "Instance": {"id": "task202-4ec58f4a37ad42d9b1c28aa2a551dd4c", "input": "Statement: Chronic Illness Choices: 1. The illness was chronic. 2. It is not possible for ilnesses to be chronic. 3. 
There are a lot of chronic illnesses.", "output": ["2"]}, "Prediction": "2"} +{"Task": "task202_mnli_contradiction_classification", "Definition": ["In this task, you're given a statement, and three sentences as choices. Your job is to determine which sentence clearly disagrees with the statement. Indicate your answer as '1', '2', or '3' corresponding to the choice number of the selected sentence."], "Instance": {"id": "task202-1861bf0624f24fba8fd491db2f3c793b", "input": "Statement: First, the CEF analysis was benchmarked to a 1999 reference case. Choices: 1. The CEF analysis was benchmarked to a 1999 reference case. 2. The CEF analysis was not compared to any other case. 3. The 1999 case showed better performance in comparison.", "output": ["2"]}, "Prediction": "2"} +{"Task": "task670_ambigqa_question_generation", "Definition": ["In this task, you're given an ambiguous question (which can be answered in more than one way). Your task is to write a question that clarifies the given question in such a way that the generated question has one unique answer."], "Instance": {"id": "task670-d7123323920448c080e152cfc540c705", "input": "Who invented the process to remove impurities by blasts of cold air blown through heated iron?", "output": ["What Englishman invented the process to remove impurities by blasts of cold air blown through heated iron?", "What American independently discovered the process to remove impurities by blasts of cold air blown through heated iron?"]}, "Prediction": "Who invented the process to remove impurities"} +{"Task": "task670_ambigqa_question_generation", "Definition": ["In this task, you're given an ambiguous question (which can be answered in more than one way). 
Your task is to write a question that clarifies the given question in such a way that the generated question has one unique answer."], "Instance": {"id": "task670-139f85c24659451b99cf8a4060538a8f", "input": "How old do you have to be to get a tattoo in indiana?", "output": ["How old do you have to be to get a tattoo in Indiana without parental consent?", "How old can you be to get a tattoo in Indiana with parental consent?"]}, "Prediction": "How old do you have to be to get"} +{"Task": "task670_ambigqa_question_generation", "Definition": ["In this task, you're given an ambiguous question (which can be answered in more than one way). Your task is to write a question that clarifies the given question in such a way that the generated question has one unique answer."], "Instance": {"id": "task670-30a91fd50d1648049ff3d57d37e41a19", "input": "When did the old age pension start in australia?", "output": ["When did an old age pension start in parts of australia?", "When did the nationwide old age pension take effect in australia?"]}, "Prediction": "When did the old age pension start in Australia"} +{"Task": "task670_ambigqa_question_generation", "Definition": ["In this task, you're given an ambiguous question (which can be answered in more than one way). Your task is to write a question that clarifies the given question in such a way that the generated question has one unique answer."], "Instance": {"id": "task670-f156721eb368468b99553d9791cb7dab", "input": "Who wrote the music for shrek the musical?", "output": ["Who wrote the music scores for shrek the musical?", "Who wrote the music lyrics for shrek the musical?"]}, "Prediction": "Who wrote the music for Shrek the"} +{"Task": "task670_ambigqa_question_generation", "Definition": ["In this task, you're given an ambiguous question (which can be answered in more than one way). 
Your task is to write a question that clarifies the given question in such a way that the generated question has one unique answer."], "Instance": {"id": "task670-be807929087a40a7b1f57499121fa903", "input": "What is the scientific name for a red fox?", "output": ["What is the scientific name for all red foxes?", "What is the scientific name for a european red fox?", "What is the scientific name for the red foxes in Alaska and western Canada?", "What is the scientific name for the red foxes in the rocky mountains, the cascade range, and sierra nevada?", "What is the scientific name for the red foxes in Sacramento Valley?"]}, "Prediction": "What is the scientific name for a red"} +{"Task": "task1393_superglue_copa_text_completion", "Definition": ["In this task, you are given a premise sentence, two possible options and a question word. If the question was cause you should select the option that is a possible cause of the premise sentence, and if the question word was effect you should find the option which is a possible effect of the premise sentence. Answer with \"A\" or \"B\"."], "Instance": {"id": "task1393-830d380c251f430b9f828d4589338f42", "input": "The woman banished the children from her property. (A) The children hit a ball into her yard. (B) The children trampled through her garden., Question: cause", "output": ["B"]}, "Prediction": "B"} +{"Task": "task1393_superglue_copa_text_completion", "Definition": ["In this task, you are given a premise sentence, two possible options and a question word. If the question was cause you should select the option that is a possible cause of the premise sentence, and if the question word was effect you should find the option which is a possible effect of the premise sentence. Answer with \"A\" or \"B\"."], "Instance": {"id": "task1393-67d6c942227c43b0bfbdb3ee0c968adc", "input": "I decided to stay home for the night. (A) The forecast called for storms. 
(B) My friends urged me to go out., Question: cause", "output": ["A"]}, "Prediction": "A"} +{"Task": "task1393_superglue_copa_text_completion", "Definition": ["In this task, you are given a premise sentence, two possible options and a question word. If the question was cause you should select the option that is a possible cause of the premise sentence, and if the question word was effect you should find the option which is a possible effect of the premise sentence. Answer with \"A\" or \"B\"."], "Instance": {"id": "task1393-b1d3dbfa00f949f59cd8731aac01efec", "input": "The doctor x-rayed the patient's arm. (A) He discovered that the patient's arm was broken. (B) He put the patient's arm in a cast., Question: effect", "output": ["A"]}, "Prediction": "A"} +{"Task": "task1393_superglue_copa_text_completion", "Definition": ["In this task, you are given a premise sentence, two possible options and a question word. If the question was cause you should select the option that is a possible cause of the premise sentence, and if the question word was effect you should find the option which is a possible effect of the premise sentence. Answer with \"A\" or \"B\"."], "Instance": {"id": "task1393-8723a91bf5bf43718813f414e4ad48b4", "input": "The woman hummed to herself. (A) She was nervous. (B) She was in a good mood., Question: cause", "output": ["B"]}, "Prediction": "B"} +{"Task": "task1393_superglue_copa_text_completion", "Definition": ["In this task, you are given a premise sentence, two possible options and a question word. If the question was cause you should select the option that is a possible cause of the premise sentence, and if the question word was effect you should find the option which is a possible effect of the premise sentence. Answer with \"A\" or \"B\"."], "Instance": {"id": "task1393-30eb4a38cc5245698f9219c2f76d70eb", "input": "The tenant misplaced his keys to his apartment. (A) His landlord unlocked the door. 
(B) His landlord repaired the door., Question: effect", "output": ["A"]}, "Prediction": "A"} +{"Task": "task1344_glue_entailment_classification", "Definition": ["In this task, you're given two sentences. Indicate if the first sentence clearly entails the second sentence (i.e., one can conclude the 2nd sentence by reading the 1st one). Indicate your answer with '1' if the first sentence entails the second sentence, otherwise answer with '0'."], "Instance": {"id": "task1344-f77f75e9d3df44b989a11a8128f18359", "input": "Sentence 1: The disputed waters lie close to the Sipadan and Ligitan islands, which Indonesia lost to Malaysia in a legal battle in the International Court of Justice in December 2002. Sentence 2: There is a territorial waters dispute.", "output": ["1"]}, "Prediction": "1"} +{"Task": "task1344_glue_entailment_classification", "Definition": ["In this task, you're given two sentences. Indicate if the first sentence clearly entails the second sentence (i.e., one can conclude the 2nd sentence by reading the 1st one). Indicate your answer with '1' if the first sentence entails the second sentence, otherwise answer with '0'."], "Instance": {"id": "task1344-53b51ed7532e485aafb2d4ce0e8ea21c", "input": "Sentence 1: Anna Nobili is no ordinary nun. The 38-year-old used to be a lap-dancer, and spent many years working in Italian nightclubs. She is now using her talents in a rather different way - for what she calls \"The Holy Dance\" in a performance on Tuesday evening at the Holy Cross in Jerusalem Basilica in Rome, in front of senior Catholic clerics including Archbishop Gianfranco Ravasi, head of the Vatican's Cultural Department. Miss Nobili told the BBC World Service that the transformation from podium lap dancer to nun happened gradually. Sentence 2: Anna Nobili became a nun in 2002.", "output": ["0"]}, "Prediction": "1"} +{"Task": "task1344_glue_entailment_classification", "Definition": ["In this task, you're given two sentences. 
Indicate if the first sentence clearly entails the second sentence (i.e., one can conclude the 2nd sentence by reading the 1st one). Indicate your answer with '1' if the first sentence entails the second sentence, otherwise answer with '0'."], "Instance": {"id": "task1344-d32886d71ef54a6e95ce6070c98422b9", "input": "Sentence 1: The cost of the consumer of the United States fell in June. Sentence 2: U.S. consumer spending dived in June.", "output": ["0"]}, "Prediction": "1"} +{"Task": "task1344_glue_entailment_classification", "Definition": ["In this task, you're given two sentences. Indicate if the first sentence clearly entails the second sentence (i.e., one can conclude the 2nd sentence by reading the 1st one). Indicate your answer with '1' if the first sentence entails the second sentence, otherwise answer with '0'."], "Instance": {"id": "task1344-750072bd154a47a8a96f4b9f3d2c9839", "input": "Sentence 1: It would help the economy by putting people back to work and more money in the hands of consumers. Sentence 2: More money in the hands of consumers means more money can be spent to get the economy going.", "output": ["1"]}, "Prediction": "1"} +{"Task": "task1344_glue_entailment_classification", "Definition": ["In this task, you're given two sentences. Indicate if the first sentence clearly entails the second sentence (i.e., one can conclude the 2nd sentence by reading the 1st one). Indicate your answer with '1' if the first sentence entails the second sentence, otherwise answer with '0'."], "Instance": {"id": "task1344-0e29b9f1fa0c4f1386a3084d782ac7b6", "input": "Sentence 1: The city continued to grow through much of the 20th century. Sentence 2: The city continued to grow, but its services deteriorated.", "output": ["0"]}, "Prediction": "1"} +{"Task": "task288_gigaword_summarization", "Definition": ["In this task, you are given a text of the article. 
Your task is to generate a headline (title) for this article.", "Given a passage, write a short summary based on the passage."], "Instance": {"id": "task288-4d0b6bf73bba4ae085059be0bd99a4e7", "input": "the united nations children 's fund -lrb- unicef -rrb- has voiced concern over the plight of children in much of eastern europe where communism was abruptly dismantled and state services allowed to erode .", "output": ["unicef concerned about welfare of children in former communist states"]}, "Prediction": "eastern europe children's p"} +{"Task": "task288_gigaword_summarization", "Definition": ["In this task, you are given a text of the article. Your task is to generate a headline (title) for this article.", "Given a passage, write a short summary based on the passage."], "Instance": {"id": "task288-2945fa7b99904acb83ab838911c5cb54", "input": "french rail workers pressed on with their crippling strike for a sixth straight day wednesday to protest welfare reform plans by prime minister alain juppe , with no immediate end in sight .", "output": ["french UNK press strike stranglehold by michael thurston"]}, "Prediction": "rail workers strike for 6th day."} +{"Task": "task288_gigaword_summarization", "Definition": ["In this task, you are given a text of the article. Your task is to generate a headline (title) for this article.", "Given a passage, write a short summary based on the passage."], "Instance": {"id": "task288-88fc541b2817469781f4db998eadf62b", "input": "polling stations closed at #### gmt -lrb- #### local time -rrb- friday on the first day of czech legislative elections shadowed by allegations surrounding social democrat prime minister jiri paroubek .", "output": ["polling stations close on first day of czech legislative elections"]}, "Prediction": "czechoslovakia votes"} +{"Task": "task288_gigaword_summarization", "Definition": ["In this task, you are given a text of the article. 
Your task is to generate a headline (title) for this article.", "Given a passage, write a short summary based on the passage."], "Instance": {"id": "task288-6d8ac5ea47ce4a86ba839d9c56e707cc", "input": "president robert mugabe 's pay packet will be more than doubled by july next year to a total of ###,### zimbabwe dollars -lrb- around ##,### us dollars -rrb- , the government announced saturday .", "output": ["president mugabe 's salary doubled"]}, "Prediction": "president robert mugabe's"} +{"Task": "task288_gigaword_summarization", "Definition": ["In this task, you are given a text of the article. Your task is to generate a headline (title) for this article.", "Given a passage, write a short summary based on the passage."], "Instance": {"id": "task288-3ea291e2107d46648907d2079bd0e460", "input": "a us citizen who spied for communist east germany was given a suspended jail sentence of ## months here friday .", "output": ["us citizen who spied for east germans given suspended sentence"]}, "Prediction": "communist east germany spy sentenced in"} +{"Task": "task1387_anli_r3_entailment", "Definition": ["In this task, you will be presented with a premise and a hypothesis sentence. Determine whether the hypothesis sentence entails (implies), contradicts (opposes), or is neutral with respect to the given premise sentence. Please answer with \"Contradiction\", \"Neutral\", or \"Entailment\"."], "Instance": {"id": "task1387-b556a45e07894005a978b8937ad02194", "input": "Premise: Jet Ski
My first time on a jet ski was fun. My uncle was on the back. I was getting too close to a rock wall. He yelled at me to stop. We both fell off. Hypothesis: My uncle drove the jet ski and i sat on the back.", "output": ["Contradiction"]}, "Prediction": "Entails"} +{"Task": "task1387_anli_r3_entailment", "Definition": ["In this task, you will be presented with a premise and a hypothesis sentence. Determine whether the hypothesis sentence entails (implies), contradicts (opposes), or is neutral with respect to the given premise sentence. Please answer with \"Contradiction\", \"Neutral\", or \"Entailment\"."], "Instance": {"id": "task1387-5ce5a379bece418b9bdaab94fcad85bb", "input": "Premise: How to buy essential oils
Think about why you want to use an essential oil. Different oils suit different needs so it's important to know what you wish to achieve by using essential oil. For example, lavender can be used in a diffuser to relax and reduce anxiety and sage essential oil can be used as an antiseptic in wounds or as an anti-inflammatory. Hypothesis: Olive oil can be used in a diffuser to relax and reduce anxiety", "output": ["Neutral"]}, "Prediction": "Entails"} +{"Task": "task1387_anli_r3_entailment", "Definition": ["In this task, you will be presented with a premise and a hypothesis sentence. Determine whether the hypothesis sentence entails (implies), contradicts (opposes), or is neutral with respect to the given premise sentence. Please answer with \"Contradiction\", \"Neutral\", or \"Entailment\"."], "Instance": {"id": "task1387-02350ac18e0c43dfa1ac61b0760018cd", "input": "Premise: William Friedkin (born August 29, 1935) is an American film director, producer and screenwriter best known for directing \"The French Connection\" in 1971 and \"The Exorcist\" in 1973; for the former, he won the Academy Award for Best Director. Some of his other films include \"Sorcerer\", \"Cruising\", \"To Live and Die in L.A.\", \"Jade\", \"Rules of Engagement\", \"The Hunted\", \"Bug\", and \"Killer Joe\". Hypothesis: William Friedkin is dead", "output": ["Contradiction"]}, "Prediction": "Entails"} +{"Task": "task1387_anli_r3_entailment", "Definition": ["In this task, you will be presented with a premise and a hypothesis sentence. Determine whether the hypothesis sentence entails (implies), contradicts (opposes), or is neutral with respect to the given premise sentence. Please answer with \"Contradiction\", \"Neutral\", or \"Entailment\"."], "Instance": {"id": "task1387-6ece2ddd121d4a05ac0b75928ed8f3fd", "input": "Premise: TORONTO, March 7 (Reuters) - The Canadian dollar weakened to a session low against the greenback after data showed the domestic economy unexpectedly shed jobs in February. 
At the same time, investors were also taking in data south of the border that showed U.S. job growth accelerated last month. The Canadian dollar was at C$1.1055 to the greenback, or 90.46 U.S. cents, weaker than Thursday's close of C$1.0992, or 90.98 U.S. cents. The loonie hit a session low of C$1.1064 shortly after the data was released. Hypothesis: Toronto is the most populous city in Canada.", "output": ["Entailment"]}, "Prediction": "Neutral"} +{"Task": "task1387_anli_r3_entailment", "Definition": ["In this task, you will be presented with a premise and a hypothesis sentence. Determine whether the hypothesis sentence entails (implies), contradicts (opposes), or is neutral with respect to the given premise sentence. Please answer with \"Contradiction\", \"Neutral\", or \"Entailment\"."], "Instance": {"id": "task1387-7462fb43781e4a6bbaa5d5d2bb487419", "input": "Premise: One percent of America got $89 billion last year in a tax cut, but people working hard, playing by the rules, trying to take care of their kids, family values, that we're supposed to value so much in America -- I'm tired of politicians who talk about family values and don't value families. Hypothesis: One percent of Americans had to pay 89 billion dollars less than they would have to pay if there would be no tax cut, so they got to keep a lot more money", "output": ["Entailment"]}, "Prediction": "Entails"} +{"Task": "task1664_winobias_text_generation", "Definition": ["In this task, you will be given a sentence, and you need to identify the set of coreference words. Coreferences are generally a set of words in the text that refer to the same person or thing. Your output set of coreference words must be strictly present in the given input sentence. Do not use any outside knowledge to create the set of coreference words. If there is a 'the' article before the person or thing you should include 'the' as well. 
Please separate the coreference words by commas in the output."], "Instance": {"id": "task1664-fb7c9b321d044611970335c70dd32dfe", "input": "The laborer kicked the cashier because he was drunk .", "output": ["The, laborer, he"]}, "Prediction": "The, laborer, he"} +{"Task": "task1664_winobias_text_generation", "Definition": ["In this task, you will be given a sentence, and you need to identify the set of coreference words. Coreferences are generally a set of words in the text that refer to the same person or thing. Your output set of coreference words must be strictly present in the given input sentence. Do not use any outside knowledge to create the set of coreference words. If there is a 'the' article before the person or thing you should include 'the' as well. Please separate the coreference words by commas in the output."], "Instance": {"id": "task1664-623176546ddb4efbbffe157b30e93fc3", "input": "The physician owed the cashier money because he did not pay the vodka .", "output": ["The, physician, he"]}, "Prediction": "The, physician, he"} +{"Task": "task1664_winobias_text_generation", "Definition": ["In this task, you will be given a sentence, and you need to identify the set of coreference words. Coreferences are generally a set of words in the text that refer to the same person or thing. Your output set of coreference words must be strictly present in the given input sentence. Do not use any outside knowledge to create the set of coreference words. If there is a 'the' article before the person or thing you should include 'the' as well. 
Please separate the coreference words by commas in the output."], "Instance": {"id": "task1664-0169acdd2d694353a34b9e7563af16e9", "input": "The carpenter built the attendant a desk to prove his love .", "output": ["The, carpenter, his"]}, "Prediction": "The, carpenter, he"} +{"Task": "task1664_winobias_text_generation", "Definition": ["In this task, you will be given a sentence, and you need to identify the set of coreference words. Coreferences are generally a set of words in the text that refer to the same person or thing. Your output set of coreference words must be strictly present in the given input sentence. Do not use any outside knowledge to create the set of coreference words. If there is a 'the' article before the person or thing you should include 'the' as well. Please separate the coreference words by commas in the output."], "Instance": {"id": "task1664-2c9ee2e3973847429718e5318b8d3859", "input": "The CEO reminded the housekeeper to sweep up because she often forget .", "output": ["the, housekeeper, she"]}, "Prediction": "The, CEO, he"} +{"Task": "task1664_winobias_text_generation", "Definition": ["In this task, you will be given a sentence, and you need to identify the set of coreference words. Coreferences are generally a set of words in the text that refer to the same person or thing. Your output set of coreference words must be strictly present in the given input sentence. Do not use any outside knowledge to create the set of coreference words. If there is a 'the' article before the person or thing you should include 'the' as well. 
Please separate the coreference words by commas in the output."], "Instance": {"id": "task1664-553d197a5760446ba56fba4102d985de", "input": "When the supervisor splashed the editor he got in trouble .", "output": ["the, supervisor, he"]}, "Prediction": "The, supervisor, he"} +{"Task": "task1161_coda19_title_generation", "Definition": ["In this task, you're given a paragraph from the research paper and your task is to generate a suitable title for the research paper based on the given paper. Under 100 words is a good title length."], "Instance": {"id": "task1161-052a1b62fb3e425db9d9fc8b71953341", "input": "Coccidiosis is an economically significant disease of poultry caused by species of Eimeria, a parasitic protozoan. Disease can result in poor feed conversion, reduced weight gain, and can lead to the development of necrotic enteritis. For prevention of coccidiosis, poultry are commonly vaccinated with a live, sporulated oocysts mass applied with a vaccination cabinet in the hatchery. Traditionally, coccidia vaccines have been applied by coarse spray in a water based diluent, however, new technology using gel diluents has entered the US market. Gel diluents can have variable viscosities and are \"dropped\" onto chicks with an applicator bar. It is thought that gel droplets remain intact on the birds for longer than water based droplets, allowing more time for preening and ingestion of oocysts. In this experiment, the efficacy of a commercial coccidia vaccine applied with a water based diluent, a more viscous gel diluent, and a less viscous gel diluent was compared. Fecal samples were collected at multiple time points post-vaccination to quantify vaccine oocyst shedding. Shedding in the first cycle (days 5 to 8 post-vaccination) was related to the number of oocysts received from each application method, where the groups receiving higher doses shed more oocysts. 
However, a decrease in shedding was seen for the more viscous gel group in the second cycle (days 12 to 15 post-vaccination). Chickens were challenged with Eimeria maxima oocysts and 7 days post-challenge body weight gains and gross and microscopic lesions were recorded to evaluate protection levels for the different vaccine applications. All vaccinated groups appeared to be protected based on body weight gain and lesion scoring. The results of this project indicate that all vaccine applications are effective at protecting against Eimeria maxima challenge when using a proper dose of vaccine that allows for repeated oocyst cycling in the litter post-vaccination.", "output": ["Evaluation of a coccidia vaccine using spray and gel applications"]}, "Prediction": "Effectiveness of coccidiosis"} +{"Task": "task1161_coda19_title_generation", "Definition": ["In this task, you're given a paragraph from the research paper and your task is to generate a suitable title for the research paper based on the given paper. Under 100 words is a good title length."], "Instance": {"id": "task1161-f43e034433b046e485e47f5582856c70", "input": "Facing the threats of infectious diseases, we take various actions to protect ourselves, but few studies considered an evolving system with competing strategies. In view of that, we propose an evolutionary epidemic model coupled with human behaviors, where individuals have three strategies: vaccination, self-protection and laissez faire, and could adjust their strategies according to their neighbors' strategies and payoffs at the beginning of each new season of epidemic spreading. We found a counter-intuitive phenomenon analogous to the well-known Braess's Paradox, namely a better condition may lead to worse performance. Specifically speaking, increasing the successful rate of self-protection does not necessarily reduce the epidemic size or improve the system payoff. 
The range and degree of the Braess's Paradox are sensitive to both the parameters characterizing the epidemic spreading and the strategy payoff, while the existence of the Braess's Paradox is insensitive to the network topologies. This phenomenon can be well explained by a mean-field approximation. Our study demonstrates an important fact that a better condition for individuals may yield a worse outcome for the society.", "output": ["Braess's Paradox in Epidemic Game: Better Condition Results in Less Payoff"]}, "Prediction": "Braess's Paradox in"} +{"Task": "task1161_coda19_title_generation", "Definition": ["In this task, you're given a paragraph from the research paper and your task is to generate a suitable title for the research paper based on the given paper. Under 100 words is a good title length."], "Instance": {"id": "task1161-054bdc9c10d9483794b9f4e2f08d3de0", "input": "Thread-based microfluidics Point-of-care Cotton Colorimetric Electrochemical A B S T R A C T Over the past decades, researchers have been seeking attractive substrate materials to keep microfluidics improving to outbalance the drawbacks and issues. Cellulose substrates, including thread, paper and hydrogels are alternatives due to their distinct structural and mechanical properties for a number of applications. Thread have gained considerable attention and become promising powerful tool due to its advantages over paper-based systems thus finds numerous applications in the development of diagnostic systems, smart bandages and tissue engineering. To the best of our knowledge, no comprehensive review articles on the topic of thread-based microfluidics have been published and it is of significance for many scientific communities working on Microfluidics, Biosensors and Lab-on-Chip. This review gives an overview of the advances of thread-based microfluidic diagnostic devices in a variety of applications. 
It begins with an overall introduction of the fabrication followed by an in-depth review on the detection techniques in such devices and various applications with respect to effort and performance to date. A few perspective directions of thread-based microfluidics in its development are also discussed. Thread-based microfluidics are still at an early development stage and further improvements in terms of fabrication, analytical strategies, and function to become low-cost, low-volume and easy-to-use pointof-care (POC) diagnostic devices that can be adapted or commercialized for real world applications.", "output": ["Recent advances in thread-based microfluidics for diagnostic applications"]}, "Prediction": "A review on thread-based microflui"} +{"Task": "task1161_coda19_title_generation", "Definition": ["In this task, you're given a paragraph from the research paper and your task is to generate a suitable title for the research paper based on the given paper. Under 100 words is a good title length."], "Instance": {"id": "task1161-65a10f8a12b34977b4af3f847dfe8aa1", "input": "Objective: To describe aspects found on HRCT scans of the chest in patients infected with the influenza A (H1N1) virus. Methods: We retrospectively analyzed the HRCT scans of 71 patients (38 females and 33 males) with H1N1 infection, confirmed through laboratory tests, between July and September of 2009. The HRCT scans were interpreted by two thoracic radiologists independently, and in case of disagreement, the decisions were made by consensus. Results: The most common HRCT findings were ground-glass opacities (85%), consolidation (64%), or a combination of ground-glass opacities and consolidation (58%). Other findings were airspace nodules (25%), bronchial wall thickening (25%), interlobular septal thickening (21%), crazy-paving pattern (15%), perilobular pattern (3%), and air trapping (3%). The findings were frequently bilateral (89%), with a random distribution (68%). 
Pleural effusion, when observed, was typically minimal. No lymphadenopathy was identified. Conclusions: The most common findings were ground-glass opacities and consolidations, or a combination of both. Involvement was commonly bilateral with no axial or craniocaudal predominance in the distribution. Although the major tomographic findings in H1N1 infection are nonspecific, it is important to recognize such findings in order to include infection with the H1N1 virus in the differential diagnosis of respiratory symptoms. Objetivo: Descrever os aspectos encontrados em TCAR do t\u00f3rax de pacientes infectados pelo v\u00edrus influenza A (H1N1). M\u00e9todos: Foram analisadas retrospectivamente as TCAR de 71 pacientes (38 femininos e 33 masculinos) com diagn\u00f3stico confirmado de influenza A (H1N1) atrav\u00e9s da identifica\u00e7\u00e3o laboratorial do v\u00edrus, estudados no per\u00edodo entre julho e setembro de 2009. A interpreta\u00e7\u00e3o das TCAR foi realizada por dois radiologistas tor\u00e1cicos de forma independente, e, em caso de discord\u00e2ncia, as decis\u00f5es foram tomadas por consenso. Resultados: Os achados de TCAR mais comuns foram opacidades em vidro fosco (85%), consolida\u00e7\u00e3o (64%) ou a combina\u00e7\u00e3o de opacidades em vidro fosco e consolida\u00e7\u00e3o (58%). Outros achados foram n\u00f3dulos do espa\u00e7o a\u00e9reo (25%), espessamento das paredes br\u00f4nquicas (25%), espessamento de septos interlobulares (21%), padr\u00e3o de pavimenta\u00e7\u00e3o em mosaico (15%), espessamento perilobular (3%) e aprisionamento a\u00e9reo (3%). As altera\u00e7\u00f5es foram frequentemente bilaterais (89%), com distribui\u00e7\u00e3o n\u00e3o espec\u00edfica (68%). Derrame pleural, quando observado, foi, em geral, de pequena monta. N\u00e3o foram observadas linfonodomegalias. Conclus\u00f5es: As altera\u00e7\u00f5es predominantes foram opacidades em vidro fosco, consolida\u00e7\u00f5es ou a combina\u00e7\u00e3o de ambas. 
O acometimento foi frequentemente bilateral e n\u00e3o houve predom\u00ednio quanto \u00e0 distribui\u00e7\u00e3o (axial ou craniocaudal). Apesar de inespec\u00edficos, \u00e9 importante reconhecer os principais aspectos tomogr\u00e1ficos da infec\u00e7\u00e3o por influenza A (H1N1) a fim de incluir essa possibilidade no diagn\u00f3stico diferencial de sintomas respirat\u00f3rios. Descritores: Pneumonia viral; Tomografia computadorizada por raios X; V\u00edrus da influenza A subtipo H1N1.", "output": ["Influenza A (H1N1) pneumonia: HRCT findings* Pneumonia por v\u00edrus influenza A (H1N1): aspectos na TCAR"]}, "Prediction": "Nonspecific findings in patients infected with"} +{"Task": "task1161_coda19_title_generation", "Definition": ["In this task, you're given a paragraph from the research paper and your task is to generate a suitable title for the research paper based on the given paper. Under 100 words is a good title length."], "Instance": {"id": "task1161-7fe87d5c00574201bc671100cd36e55a", "input": "Biological oscillators are vital to living organisms, which use them as clocks for time-sensitive processes. However, much is unknown about mechanisms which can give rise to coherent oscillatory behavior, with few exceptions (e.g., explicitly delayed self-repressors and simple models of specific organisms' circadian clocks). We present what may be the simplest possible reliable gene network oscillator, a self-repressing gene. We show that binding cooperativity, which has not been considered in detail in this context, can combine with small numbers of intermediate steps to create coherent oscillation. 
We also note that noise blurs the line between oscillatory and non-oscillatory behavior.", "output": ["Oscillation, cooperativity, and intermediates in the self-repressing gene"]}, "Prediction": "Gene network oscillators with binding cooperat"} +{"Task": "task880_schema_guided_dstc8_classification", "Definition": ["You are given a sentence from a conversation between a human and a virtual assistant. Your task is to classify the sentence into one of the following five action categories - INFORM, INFORM_INTENT, OFFER, REQUEST, REQUEST_ALTS. If the sentence is about informing something, generate 'INFORM'. If it is about requesting something, generate 'REQUEST'. If the sentence mentions requesting an alternative option than the one provided, generate 'REQUEST_ALTS'. Similarly, generate 'OFFER' if the sentence is offering some information. If the sentence is about showing intent to do something, generate 'INFORM_INTENT'."], "Instance": {"id": "task880-0634735279fc4b16acc05459c87d8684", "input": "I am looking for a song that could make me happy.", "output": ["INFORM_INTENT"]}, "Prediction": "INFORM"} +{"Task": "task880_schema_guided_dstc8_classification", "Definition": ["You are given a sentence from a conversation between a human and a virtual assistant. Your task is to classify the sentence into one of the following five action categories - INFORM, INFORM_INTENT, OFFER, REQUEST, REQUEST_ALTS. If the sentence is about informing something, generate 'INFORM'. If it is about requesting something, generate 'REQUEST'. If the sentence mentions requesting an alternative option than the one provided, generate 'REQUEST_ALTS'. Similarly, generate 'OFFER' if the sentence is offering some information. 
If the sentence is about showing intent to do something, generate 'INFORM_INTENT'."], "Instance": {"id": "task880-a1c1676a5d27458ea8fc93306a253184", "input": "Are there any other events you would like to check?", "output": ["REQUEST_ALTS"]}, "Prediction": "REQUEST"} +{"Task": "task880_schema_guided_dstc8_classification", "Definition": ["You are given a sentence from a conversation between a human and a virtual assistant. Your task is to classify the sentence into one of the following five action categories - INFORM, INFORM_INTENT, OFFER, REQUEST, REQUEST_ALTS. If the sentence is about informing something, generate 'INFORM'. If it is about requesting something, generate 'REQUEST'. If the sentence mentions requesting an alternative option than the one provided, generate 'REQUEST_ALTS'. Similarly, generate 'OFFER' if the sentence is offering some information. If the sentence is about showing intent to do something, generate 'INFORM_INTENT'."], "Instance": {"id": "task880-ef74168bbd74404ca05ba168ac9b749a", "input": "Do you have other suggestions?", "output": ["REQUEST_ALTS"]}, "Prediction": "REQUEST"} +{"Task": "task880_schema_guided_dstc8_classification", "Definition": ["You are given a sentence from a conversation between a human and a virtual assistant. Your task is to classify the sentence into one of the following five action categories - INFORM, INFORM_INTENT, OFFER, REQUEST, REQUEST_ALTS. If the sentence is about informing something, generate 'INFORM'. If it is about requesting something, generate 'REQUEST'. If the sentence mentions requesting an alternative option than the one provided, generate 'REQUEST_ALTS'. Similarly, generate 'OFFER' if the sentence is offering some information. 
If the sentence is about showing intent to do something, generate 'INFORM_INTENT'."], "Instance": {"id": "task880-98314531ff684e29af77c4f953c5c5d0", "input": "Do you know the where the event will be and when it will start?", "output": ["REQUEST"]}, "Prediction": "REQUEST"} +{"Task": "task880_schema_guided_dstc8_classification", "Definition": ["You are given a sentence from a conversation between a human and a virtual assistant. Your task is to classify the sentence into one of the following five action categories - INFORM, INFORM_INTENT, OFFER, REQUEST, REQUEST_ALTS. If the sentence is about informing something, generate 'INFORM'. If it is about requesting something, generate 'REQUEST'. If the sentence mentions requesting an alternative option than the one provided, generate 'REQUEST_ALTS'. Similarly, generate 'OFFER' if the sentence is offering some information. If the sentence is about showing intent to do something, generate 'INFORM_INTENT'."], "Instance": {"id": "task880-8d0054dc46e7483ca8fda5247c4b1861", "input": "Okay. Buy me tickets.", "output": ["INFORM_INTENT"]}, "Prediction": "REQUEST"} +{"Task": "task738_perspectrum_classification", "Definition": ["In this task you will be given a claim and a perspective. You should determine whether that perspective supports or undermines the claim. If the perspective could possibly convince someone with different view, it is supporting, otherwise it is undermining."], "Instance": {"id": "task738-9973794d3d034a9daaf82b163af3cb07", "input": "claim: Speed limits should be reduced.\n perspective: Government have illegitimate reasons behind lowering the speed limit ", "output": ["undermine"]}, "Prediction": "support"} +{"Task": "task738_perspectrum_classification", "Definition": ["In this task you will be given a claim and a perspective. You should determine whether that perspective supports or undermines the claim. 
If the perspective could possibly convince someone with different view, it is supporting, otherwise it is undermining."], "Instance": {"id": "task738-512d68c894d14aa9908553bf00c1a5db", "input": "claim: People should have a right to private education.\n perspective: Private schools are financially good for the state", "output": ["support"]}, "Prediction": "support"} +{"Task": "task738_perspectrum_classification", "Definition": ["In this task you will be given a claim and a perspective. You should determine whether that perspective supports or undermines the claim. If the perspective could possibly convince someone with different view, it is supporting, otherwise it is undermining."], "Instance": {"id": "task738-a66a3cbb55f2489a842f916f4e119def", "input": "claim: Democracy Is the Best Form of Government.\n perspective: Promoting democracy promotes peace.", "output": ["support"]}, "Prediction": "support"} +{"Task": "task738_perspectrum_classification", "Definition": ["In this task you will be given a claim and a perspective. You should determine whether that perspective supports or undermines the claim. If the perspective could possibly convince someone with different view, it is supporting, otherwise it is undermining."], "Instance": {"id": "task738-f186796e9e2942ea8a3a71c520c98fc5", "input": "claim: Deny Organs to Non-Donors.\n perspective: People may have valid religious reasons not to donate organs", "output": ["undermine"]}, "Prediction": "support"} +{"Task": "task738_perspectrum_classification", "Definition": ["In this task you will be given a claim and a perspective. You should determine whether that perspective supports or undermines the claim. 
If the perspective could possibly convince someone with different view, it is supporting, otherwise it is undermining."], "Instance": {"id": "task738-bdb32a02e39441acb7642d399f79e3f1", "input": "claim: We should expand NATO.\n perspective: NATO development was important for international stability ", "output": ["support"]}, "Prediction": "support"} +{"Task": "task1439_doqa_cooking_isanswerable", "Definition": ["Given a paragraph about cooking, and a set of conversational questions and answers about the paragraph, say whether the passage contains sufficient information to answer the follow-up question. Say Yes if it is answerable; otherwise, say No. The paragraph has the prefix 'CONTEXT:'. Each conversation question has a prefix `Q:` followed by the answer prefix `A:`, and the follow-up question has a prefix `FOLLOWUP_Q:`."], "Instance": {"id": "task1439-7d88f45c6eff4aea9353875c972cfd7c", "input": "CONTEXT: I'm afraid you interpret the tables wrong. You aren't destroying calories, you are adding water (=0 cal) to the dry rice. As the rice absorbs the water, you are in fact measuring rice + water for cooked rice.This is true for calorie tables that measure by volume (like here) and by weight.If you are cooking your rice by boiling and straining, you are effectively losing a few calories due to the starch that gets drained with the water, but that's not what you asked here. Q: How to maintain the calories in rice (which were present in it in its raw form) after cooking it? A: f you are cooking your rice by boiling and straining, you are effectively losing a few calories due to the starch that gets drained with the water FOLLOWUP_Q: What could I add to the rice for flavor?", "output": ["No"]}, "Prediction": "No"} +{"Task": "task1439_doqa_cooking_isanswerable", "Definition": ["Given a paragraph about cooking, and a set of conversational questions and answers about the paragraph, say whether the passage contains sufficient information to answer the follow-up question. 
Say Yes if it is answerable; otherwise, say No. The paragraph has the prefix 'CONTEXT:'. Each conversation question has a prefix `Q:` followed by the answer prefix `A:`, and the follow-up question has a prefix `FOLLOWUP_Q:`."], "Instance": {"id": "task1439-cd5dbea0197d4b8b96c4c99e96c9ecec", "input": "CONTEXT: Alton Brown recommends brining pork chops in a salt water solution in one of his Good Eats episodes. I have been brining them before I cook them ever since I saw that and think it makes them juicier.I brine them for a couple hours at most and then just cook them. Never tried the drying in the refrigerator. Q: I want to prepare pork chops by salting them, putting them in a brine, and drying them, but how should I go about this? A: Alton Brown recommends brining pork chops in a salt water solution in one of his Good Eats episodes Q: Do I still have to cook the meat after all that, or is it ready to eat? A: I have been brining them before I cook them ever since I saw that and think it makes them juicier FOLLOWUP_Q: Will the brine and/or salt cause issues of bone rot?", "output": ["Yes"]}, "Prediction": "No"} +{"Task": "task1439_doqa_cooking_isanswerable", "Definition": ["Given a paragraph about cooking, and a set of conversational questions and answers about the paragraph, say whether the passage contains sufficient information to answer the follow-up question. Say Yes if it is answerable; otherwise, say No. The paragraph has the prefix 'CONTEXT:'. Each conversation question has a prefix `Q:` followed by the answer prefix `A:`, and the follow-up question has a prefix `FOLLOWUP_Q:`."], "Instance": {"id": "task1439-27b5b8f40b0e40aa8f7e6b791bb114ec", "input": "CONTEXT: I'm sorry to say this, but I think prevention is the best answer. Once you're vegetables freeze things happen at the cellular level that changes the nature of the vegetables. 
For example, ice crystals pierce cell walls which destroys some of the structure, which is responsible for the crispness and crunch of the vegetable. I don't think there is really a way to \"fix\" this, once it happens.That said, cooking does something similar to vegetables. Cooking destroys cell walls, which is why cooked carrots are so much softer than raw carrots. I would think that vegetables that stand up to cooking would survive freezing the best (such as your mushrooms). On the other hand, somthing like lettuce is a lost cause, once frozen. Q: How do i recover vegetables that get \"refrigerator crisper chill\"? A: I don't think there is really a way to \"fix\" this, once it happens FOLLOWUP_Q: Can I use them for things like stew?", "output": ["Yes"]}, "Prediction": "No"} +{"Task": "task1439_doqa_cooking_isanswerable", "Definition": ["Given a paragraph about cooking, and a set of conversational questions and answers about the paragraph, say whether the passage contains sufficient information to answer the follow-up question. Say Yes if it is answerable; otherwise, say No. The paragraph has the prefix 'CONTEXT:'. Each conversation question has a prefix `Q:` followed by the answer prefix `A:`, and the follow-up question has a prefix `FOLLOWUP_Q:`."], "Instance": {"id": "task1439-e4d8c81eb5824dab9cd19f9a17bb8e45", "input": "CONTEXT: I would keep the one that you are eating the next day out. I would then freeze the rest of them. If you like them warm, you could freeze them wrapped in foil. Before eating pop them in a toaster oven until gently warmed through. If not I'd freeze individually until solid then put in a bag to prevent your frozen bagels from sticking together, then thaw overnight (if eating in the morning) or in the morning (if eating for a mid-day meal) to eat. Q: what is the best way to store bagels? A: I would keep the one that you are eating the next day out. 
I would then freeze the rest of them Q: they are the best while fresh and im not going to make them daily. how long do they stay good while frozen? A: If you like them warm, you could freeze them wrapped in foil. Before eating pop them in a toaster oven until gently warmed through FOLLOWUP_Q: i wonder if they would last a couple weeks while frozen?", "output": ["No"]}, "Prediction": "Yes"} +{"Task": "task1439_doqa_cooking_isanswerable", "Definition": ["Given a paragraph about cooking, and a set of conversational questions and answers about the paragraph, say whether the passage contains sufficient information to answer the follow-up question. Say Yes if it is answerable; otherwise, say No. The paragraph has the prefix 'CONTEXT:'. Each conversation question has a prefix `Q:` followed by the answer prefix `A:`, and the follow-up question has a prefix `FOLLOWUP_Q:`."], "Instance": {"id": "task1439-120793f6867a4f2bbafc6a498ae71407", "input": "CONTEXT: Almost any normal sorbet recipe will contain a decent amount of sugar, and strawberries are no exception. I'd guess probably 1/2-2/3 cup per pound of strawberries. Use a substitute if you have an aversion - honey, agave, raw cane sugar, whatever you prefer. (Of course, anything liquid is going to contain some water, and cause a bit of ice, but it's still doable.) A sorbet without any extra sugar, even when frozen normally in an ice cream maker, is going to end up with a very icy texture, which will probably keep you from experiencing the flavor as much. The sugar helps soften it.The other common way to soften a sorbet is alcohol. Ros\u00c3\u00a9 wine (thank you, David Lebovitz) works quite well in a strawberry sorbet. Vodka can be your go-to liquor for any impromptu sorbet with other fruits, since it'll add alcohol without any flavors that clash. 
Beyond that, either look for recipes, or look for mixed drinks containing the fruit you're using.You might also be having problems simply because your strawberries aren't that great. Did you freeze them, or were they storebought? If you bought the strawberries yourself, you'd know they were decent before freezing, while not all storebought frozen fruit is as flavorful.Finally, it's possible that your frozen strawberries collected some extra water in the form of frost, which then makes your sorbet icier. If there are obvious big chunks you could scrape them off. Q: How can I improve flavor of Sorbet made in Magic Bullet? A: Use a substitute if you have an aversion - honey, agave, raw cane sugar, whatever you prefer FOLLOWUP_Q: Can I do a strawberry sorbet?", "output": ["Yes"]}, "Prediction": "Yes"} +{"Task": "task645_summarization", "Definition": ["Generate a topic word for the given sentence. A topic word is usually the name of a person, place, animal, concept or object the sentence is focusing on."], "Instance": {"id": "task645-6d0b46154c2241888d059d5017bb4fde", "input": "an old master print is a work of art made by a printing process.", "output": ["old master print"]}, "Prediction": "old master print"} +{"Task": "task645_summarization", "Definition": ["Generate a topic word for the given sentence. A topic word is usually the name of a person, place, animal, concept or object the sentence is focusing on."], "Instance": {"id": "task645-1d2fe5fa93a6480eb4641fea0db6fb82", "input": "taipei metro is a metro system in taipei, taiwan.", "output": ["taipei metro"]}, "Prediction": "taipei metro"} +{"Task": "task645_summarization", "Definition": ["Generate a topic word for the given sentence. 
A topic word is usually the name of a person, place, animal, concept or object the sentence is focusing on."], "Instance": {"id": "task645-9ed709f11b11458c9263f493625e5c69", "input": "adam ries was a mathematician from bad staffelstein in bavaria.", "output": ["adam ries"]}, "Prediction": "adam ries"} +{"Task": "task645_summarization", "Definition": ["Generate a topic word for the given sentence. A topic word is usually the name of a person, place, animal, concept or object the sentence is focusing on."], "Instance": {"id": "task645-59c09321769541bbbf4d4e155cb2d94b", "input": "bob fitzgerald is a sports announcer.", "output": ["bob fitzgerald"]}, "Prediction": "bob fitzgerald"} +{"Task": "task645_summarization", "Definition": ["Generate a topic word for the given sentence. A topic word is usually the name of a person, place, animal, concept or object the sentence is focusing on."], "Instance": {"id": "task645-7f905d86685043679fdfbc5b4ebe536e", "input": "illinois is a state in the united states.", "output": ["illinois"]}, "Prediction": "illinois"} +{"Task": "task619_ohsumed_abstract_title_generation", "Definition": ["Given an abstract of a paper, generate a title for this paper such that conveys the key focus of the paper."], "Instance": {"id": "task619-fc21ecf77daa4ec7a151afc32b1cca4e", "input": "Abstract: To summarize the results: Bilirubin removal by hemoperfusion was successfully performed for the first time on a premature infant. The hemoperfusion caused no adverse effects on the clinical state of the infant. The bilirubin removed in 90 min--13 mg--indicates the existence of a large extravascular pool. 
Hemoperfusion is suggested for bilirubin removal from newborn babies, reducing the need for blood exchange with bank blood.", "output": ["Bilirubin removal from a jaundiced premature infant by resin hemoperfusion."]}, "Prediction": "Hemoperfusion for bili"} +{"Task": "task619_ohsumed_abstract_title_generation", "Definition": ["Given an abstract of a paper, generate a title for this paper such that conveys the key focus of the paper."], "Instance": {"id": "task619-837bb13ec5384b1699f9798835de295a", "input": "Abstract: Labrador keratopathy (LK) is an acquired corneal degeneration thought to be caused by chronic exposure to solar irradiation. Reports so far suggest that it is a progressive or at least a stationary condition. There are no detailed reports on recommended therapy. A prospective clinical study was conducted to show regression of LK following extracapsular cataract extraction. Seventeen black patients (26 eyes) with LK and mature cataracts underwent extracapsular cataract extraction. The severity of the LK was recorded photographically pre- and postoperatively. The follow-up ranged from 6 to fifteen months. Twenty-four eyes (92%) showed regression of the LK. The reduction in exposure to ultraviolet light as a result of aphakic photophobia could be one of the factors which have led to the regression of the LK.", "output": ["Regression of Labrador keratopathy following cataract extraction."]}, "Prediction": "Regression of labrador kerat"} +{"Task": "task619_ohsumed_abstract_title_generation", "Definition": ["Given an abstract of a paper, generate a title for this paper such that conveys the key focus of the paper."], "Instance": {"id": "task619-35b877e8c55c4261b3cace6ed7f45c1f", "input": "Abstract: Epinephrine causes platelet aggregation and secretion by interacting with alpha 2-adrenergic receptors on the platelet surface. Platelet aggregation requires the binding of fibrinogen to a specific receptor on the membrane glycoprotein IIb-IIIa complex. 
Although the IIb-IIIa complex is identifiable on the surface of resting platelets, the fibrinogen receptor is expressed only after platelet activation. The current studies were designed to examine the effect of occupancy of platelet alpha 2-adrenergic receptors by epinephrine on the expression of fibrinogen receptors and on the aggregation of platelets. The ability of epinephrine to induce the expression of fibrinogen receptors was studied under two different conditions: acute stimulation (less than 1 min) and prolonged stimulation (50 to 90 min), the latter of which is associated with a reduction or \"desensitization\" of the platelet aggregation response. Expression of the fibrinogen receptor was monitored with 125I-fibrinogen as well as with 125I-PAC-1 (PAC-1), a monoclonal antibody that binds to the glycoprotein IIb-IIIa complex only after platelets are activated. Epinephrine caused an immediate increase in PAC-1 and fibrinogen binding that was dependent on occupancy of the alpha 2-receptor by epinephrine and on the presence of extracellular free Ca (KCa = 30 mumol/L). By itself, 1 mmol/L Mg was unable to support induction of the fibrinogen receptor by epinephrine. However, it did decrease the Ca requirement by about two orders of magnitude. Prolonged stimulation of unstirred platelets by epinephrine led to a 70% decrease in the aggregation response when the platelets were subsequently stirred. Despite their decreased aggregation response, desensitized platelets bound PAC-1 and fibrinogen normally, indicating that the loss of aggregation was not due simply to a decrease in fibrinogen receptor expression. Although desensitization was not affected by pretreatment of the platelets with aspirin, it was partially prevented when extracellular Ca was chelated by EDTA during the long incubation with epinephrine. 
These studies demonstrate that once platelet alpha 2-adrenergic receptors are occupied by epinephrine, extracellular Ca is involved in initiating the aggregation response by supporting the induction of the fibrinogen receptor and the binding of fibrinogen. Furthermore. Ca-dependent reactions subsequent to fibrinogen binding may be necessary for maximal platelet aggregation and are impaired when platelets become desensitized to epinephrine.", "output": ["Expression of fibrinogen receptors during activation and subsequent desensitization of human platelets by epinephrine."]}, "Prediction": "Occupancy of platelet alpha"} +{"Task": "task619_ohsumed_abstract_title_generation", "Definition": ["Given an abstract of a paper, generate a title for this paper such that conveys the key focus of the paper."], "Instance": {"id": "task619-ad86ea4a8c314ecfa3b80c72f3ad85b2", "input": "Abstract: We describe a patient with secondary syphilis and facial skin lesions which resembled Sweet's syndrome clinically and histologically. We suggest serological tests for syphilis in patients with Sweet's syndrome.", "output": ["Secondary syphilis mimicking Sweet's syndrome."]}, "Prediction": "Sweet's syndrome resembling"} +{"Task": "task619_ohsumed_abstract_title_generation", "Definition": ["Given an abstract of a paper, generate a title for this paper such that conveys the key focus of the paper."], "Instance": {"id": "task619-fba4804c17614a619f1378a78fdf8598", "input": "Abstract: The roller screw electric VAD is easier to manufacture and 25% lighter than the previously described drum cam model. This device requires 12 to 15 W to pump 6 to 8 l/min with minimal hemolysis. The motor drive has functioned for periods up to 93 days in vivo with no measurable wear. The compliance chamber volume varies by 100 cc during VAD function but does so while maintaining pressure variations below 15 mmHg. Compliance chamber volume loss of 2 to 5 cc/day is explained by gas transport through SPU. 
The subcutaneous sampling port provides ready access to the sealed system.", "output": ["Roller screw electric motor ventricular assist device."]}, "Prediction": "Roller screw electric VAD with low power consumption"} +{"Task": "task1728_web_nlg_data_to_text", "Definition": ["You will be given one or more triples. The second part of each triple shows the relation between the first and the third element. Your task is to write a simple and short piece of text (sentence(s)) that describes the triples in natural language."], "Instance": {"id": "task1728-4c55725a7a1a464fbe911771bd5df40a", "input": "Ayam_penyet | region | Malaysia\nAyam_penyet | ingredient | Fried_chicken", "output": ["Fried chicken is one of the ingredients in the dish 'Ayam Penyet' which comes from Malaysia.", "Fried chicken is an ingredient in the popular Malaysian dish of Ayam penyet."]}, "Prediction": "Ayam Penyet is "} +{"Task": "task1728_web_nlg_data_to_text", "Definition": ["You will be given one or more triples. The second part of each triple shows the relation between the first and the third element. Your task is to write a simple and short piece of text (sentence(s)) that describes the triples in natural language."], "Instance": {"id": "task1728-1a47573bda6b4cd488568dd79c1b56cf", "input": "Acta_Mathematica_Hungarica | LCCN_number | 83646315\nActa_Mathematica_Hungarica | abbreviation | \"Acta Math. Hungar.\"\nActa_Mathematica_Hungarica | academicDiscipline | Mathematics\nActa_Mathematica_Hungarica | ISSN_number | \"1588-2632\"", "output": ["83646315 is the LCCN number for Acta Mathematica Hungarica and the ISSN number is 1588-2632. Its bbreviation is \"Acta Math. Hungar.\" and its discipline is Math.", "Acta Mathematica Hungarica (Acta Math. Hungar.) which has a discipline of math has the LCCN number 83646315 and ISSN number 1588-2632.", "Acta Mathematica Hungarica has the abbreviation of Acta Math. Hungar and covers the academic discipline of Mathematics. 
This publication has the LCCN number 83646315 and the ISSN number 1588-2632."]}, "Prediction": "Acta Mathematica Hunga"} +{"Task": "task1728_web_nlg_data_to_text", "Definition": ["You will be given one or more triples. The second part of each triple shows the relation between the first and the third element. Your task is to write a simple and short piece of text (sentence(s)) that describes the triples in natural language."], "Instance": {"id": "task1728-ed8418b1302943dfa20d02f65d8eed3f", "input": "Turkey | leader | Ahmet_Davuto\u011flu\nAtat\u00fcrk_Monument_(\u0130zmir) | designer | Pietro_Canonica\nAtat\u00fcrk_Monument_(\u0130zmir) | material | \"Bronze\"\nAtat\u00fcrk_Monument_(\u0130zmir) | location | Turkey", "output": ["Ahmet Davutoglu is a leader of Turkey. The Atat\u00fcrk Monument (\u0130zmir) is found in Turkey, made of Bronze and designed by Pietro Canonica.", "By the way, the leader of Turkey is Ahmet Davutoglu and the Ataturk Monument designed by Pietro Canonica is made of bronze and located in Izmir, Turkey.", "Ahmet Davutoglu is the leader of Turkey, the location of the bronze Atat\u00fcrk Monument in Izmir designed by Pietro Canonica."]}, "Prediction": "Turkey leader is Ahmet Davuto"} +{"Task": "task1728_web_nlg_data_to_text", "Definition": ["You will be given one or more triples. The second part of each triple shows the relation between the first and the third element. 
Your task is to write a simple and short piece of text (sentence(s)) that describes the triples in natural language."], "Instance": {"id": "task1728-e344d73b4e664a019f9a7895103f1a27", "input": "Hong_Kong | leaderName | Carrie_Lam_(politician)\nBaked_Alaska | region | Hong_Kong", "output": ["Baked Alaska comes from Hong Kong where Carrie Lam is a political leader.", "Carrie Lam is the politician who leads Hong Kong, where Baked Alaska originates."]}, "Prediction": "Baked Alaska is a dish from Hong"} +{"Task": "task1728_web_nlg_data_to_text", "Definition": ["You will be given one or more triples. The second part of each triple shows the relation between the first and the third element. Your task is to write a simple and short piece of text (sentence(s)) that describes the triples in natural language."], "Instance": {"id": "task1728-fd7ebff2be7b466697a9b539165a2190", "input": "Alan_Bean | was a crew member of | Apollo_12\nApollo_12 | operator | NASA\nAlan_Bean | occupation | Test_pilot\nApollo_12 | commander | David_Scott", "output": ["Alan Bean was a crew member of NASA's Apollo 12 with commander David Scott. He was a test pilot.", "Test Pilot Alan Bean was a crew member of NASA's Apollo 12 mission under commander David Scott."]}, "Prediction": "Alan Bean was a crew member of Apollo"} +{"Task": "task1640_aqa1.0_answerable_unanswerable_question_classification", "Definition": ["Given a paragraph from a Wikipedia article about some topic, and a question related to the topic, determine whether the question is answerable from the paragraph. If the question is answerable, answer \"True\", otherwise, answer \"False\"."], "Instance": {"id": "task1640-f9a1d473686046ef926a0ead201ee880", "input": "An earlier draft of To Kill a Mockingbird, titled Go Set a Watchman, was controversially released on July 14, 2015. This draft, which was completed in 1957, is set 20 years after the time period depicted in To Kill a Mockingbird but is not a continuation of the narrative. 
This earlier version of the story follows an adult Scout Finch who travels from New York to visit her father, Atticus Finch, in Maycomb, Alabama, where she is confronted by the intolerance in her community. The Watchman manuscript was believed to have been lost until Lee's lawyer Tonja Carter discovered it; although this claim has been widely disputed. Watchman contains early versions of many of the characters from To Kill a Mockingbird. According to Lee's agent Andrew Nurnberg, Mockingbird was originally intended to be the first book of a trilogy: \"They discussed publishing Mockingbird first, Watchman last, and a shorter connecting novel between the two.\" This assertion has been discredited however by the rare books expert James S. Jaffe, who reviewed the pages at the request of Lee's attorney and found them to be only another draft of \"To Kill a Mockingbird\". The statement was also contrary to Jonathan Mahler's description of how \"Watchman\" was seen as just the first draft of \"Mockingbird\". Instances where many passages overlap between the two books, in some case word for word, also refutes this assertion. Question: Who does the mad dog represent?", "output": ["False"]}, "Prediction": "False"} +{"Task": "task1640_aqa1.0_answerable_unanswerable_question_classification", "Definition": ["Given a paragraph from a Wikipedia article about some topic, and a question related to the topic, determine whether the question is answerable from the paragraph. If the question is answerable, answer \"True\", otherwise, answer \"False\"."], "Instance": {"id": "task1640-90ce20d17fcc4f119fe2428d727a3c34", "input": "As Universal's main product had always been low-budget film, it was one of the last major studios to have a contract with Technicolor. The studio did not make use of the three-strip Technicolor process until Arabian Nights (1942), starring Jon Hall and Maria Montez. 
The following year, Technicolor was also used in Universal's remake of their 1925 horror melodrama, Phantom of the Opera with Claude Rains and Nelson Eddy. With the success of their first two pictures, a regular schedule of high-budget, Technicolor films followed. Question: Who made the movie Nelson Eddy was in?", "output": ["True"]}, "Prediction": "True"} +{"Task": "task1640_aqa1.0_answerable_unanswerable_question_classification", "Definition": ["Given a paragraph from a Wikipedia article about some topic, and a question related to the topic, determine whether the question is answerable from the paragraph. If the question is answerable, answer \"True\", otherwise, answer \"False\"."], "Instance": {"id": "task1640-94c09eb5c66a453e8f63c4e859c13c61", "input": "As of 2000[update], children and teenagers (0\u201319 years old) make up 15.1% of the population, while adults (20\u201364 years old) make up 65% and seniors (over 64 years old) make up 19.9%. Question: Suppose there are 2 more newborns, then which number will be increased, 65%, 15.1% or 19.9%?", "output": ["True"]}, "Prediction": "True"} +{"Task": "task1640_aqa1.0_answerable_unanswerable_question_classification", "Definition": ["Given a paragraph from a Wikipedia article about some topic, and a question related to the topic, determine whether the question is answerable from the paragraph. If the question is answerable, answer \"True\", otherwise, answer \"False\"."], "Instance": {"id": "task1640-dd45c716d2b94fd291b6bbf5d4b26cd3", "input": "Chopin took the new salon genre of the nocturne, invented by the Irish composer John Field, to a deeper level of sophistication. He was the first to write ballades and scherzi as individual concert pieces. He essentially established a new genre with his own set of free-standing preludes (Op. 28, published 1839). 
He exploited the poetic potential of the concept of the concert \u00e9tude, already being developed in the 1820s and 1830s by Liszt, Clementi and Moscheles, in his two sets of studies (Op. 10 published in 1833, Op. 25 in 1837). Question: What were Chopin's complaints in 1842 when he had a recital in Paris?", "output": ["False"]}, "Prediction": "False"} +{"Task": "task1640_aqa1.0_answerable_unanswerable_question_classification", "Definition": ["Given a paragraph from a Wikipedia article about some topic, and a question related to the topic, determine whether the question is answerable from the paragraph. If the question is answerable, answer \"True\", otherwise, answer \"False\"."], "Instance": {"id": "task1640-d4e6f5b6b1e4495eaa8ec91dab0787be", "input": "The first action of the Franco-Prussian War took place on 4 August 1870. This battle saw the unsupported division of General Douay of I Corps, with some attached cavalry, which was posted to watch the border, attacked in overwhelming but uncoordinated fashion by the German 3rd Army. During the day, elements of a Bavarian and two Prussian corps became engaged and were aided by Prussian artillery, which blasted holes in the defenses of the town. Douay held a very strong position initially, thanks to the accurate long-range fire of the Chassepots but his force was too thinly stretched to hold it. Douay was killed in the late morning when a caisson of the divisional mitrailleuse battery exploded near him; the encirclement of the town by the Prussians threatened the French avenue of retreat. Question: Who attacked the French division?", "output": ["True"]}, "Prediction": "True"} +{"Task": "task648_answer_generation", "Definition": ["You will see a sentence and a given pronoun. 
Answer with the word in the sentence to which the pronoun is referring."], "Instance": {"id": "task648-418857d1923146959efda110e97a1fb7", "input": "text: tom threw his schoolbag down to ray after he reached the bottom of the stairs.; pronoun: he", "output": ["ray"]}, "Prediction": "tom"} +{"Task": "task648_answer_generation", "Definition": ["You will see a sentence and a given pronoun. Answer with the word in the sentence to which the pronoun is referring."], "Instance": {"id": "task648-e8fc81ec2636400b8848e0c94cc8c8f5", "input": "text: there is a gap in the wall. you can see the garden behind it .; pronoun: it", "output": ["the wall"]}, "Prediction": "garden"} +{"Task": "task648_answer_generation", "Definition": ["You will see a sentence and a given pronoun. Answer with the word in the sentence to which the pronoun is referring."], "Instance": {"id": "task648-1b44f48de3b84b0eb4bee0c5684d703b", "input": "text: in the middle of the outdoor concert, the rain started falling, and it continued until 10.; pronoun: it", "output": ["the rain"]}, "Prediction": "rain"} +{"Task": "task648_answer_generation", "Definition": ["You will see a sentence and a given pronoun. Answer with the word in the sentence to which the pronoun is referring."], "Instance": {"id": "task648-5db97f281b374095a5ae9a84cca90851", "input": "text: sid explained his theory to mark but he couldn't convince him.; pronoun: he", "output": ["sid"]}, "Prediction": "mark"} +{"Task": "task648_answer_generation", "Definition": ["You will see a sentence and a given pronoun. 
Answer with the word in the sentence to which the pronoun is referring."], "Instance": {"id": "task648-d5ae72ef6cf9417c87bea3a8a846720b", "input": "text: the large ball crashed right through the table because it was made of steel.; pronoun: it", "output": ["the large ball"]}, "Prediction": "table"} +{"Task": "task242_tweetqa_classification", "Definition": ["In this task, you are given a context tweet, a question and corresponding answer of given question. Your task is to classify given passage into two categories: (1) \"yes\" if the given context is useful in answering the question, and (2) \"no\" if the given context is not useful."], "Instance": {"id": "task242-ce0f91d0c00d4ddabb2c36515a8aec5d", "input": "Context: So thankful for my family......thankful for forgiveness and the chance to be a better man through the strength, love, and kindness of the people I love who love me through all of my imperfections.Happy Thanksgiving to you all! Tim McGraw (@TheTimMcGraw) November 23, 2017 Question: what can't jonathan gruber shake? Answer: obamacare deceptions", "output": ["no"]}, "Prediction": "no"} +{"Task": "task242_tweetqa_classification", "Definition": ["In this task, you are given a context tweet, a question and corresponding answer of given question. Your task is to classify given passage into two categories: (1) \"yes\" if the given context is useful in answering the question, and (2) \"no\" if the given context is not useful."], "Instance": {"id": "task242-ad95fb1cd0a5440095ab28741a41d9a7", "input": "Context: Classes at Bayside High are canceled for tomorrow. Screech will still be having the prom meeting in the afternoon.\u2014 MayorMark (@MayorMark) March 4, 2015 Question: what time will screech have the prom meeting Answer: meeting in the afternoon", "output": ["yes"]}, "Prediction": "yes"} +{"Task": "task242_tweetqa_classification", "Definition": ["In this task, you are given a context tweet, a question and corresponding answer of given question. 
Your task is to classify given passage into two categories: (1) \"yes\" if the given context is useful in answering the question, and (2) \"no\" if the given context is not useful."], "Instance": {"id": "task242-1fd6058a2b264989b6c6b05e4690c78d", "input": "Context: What I reported on Friday was the #Titans planned to bring him back, with conditions. Munchak must've balked on conditions \u2014 Jim Wyatt (@jwyattsports) January 4, 2014 Question: who balked on conditions? Answer: munchak", "output": ["yes"]}, "Prediction": "yes"} +{"Task": "task242_tweetqa_classification", "Definition": ["In this task, you are given a context tweet, a question and corresponding answer of given question. Your task is to classify given passage into two categories: (1) \"yes\" if the given context is useful in answering the question, and (2) \"no\" if the given context is not useful."], "Instance": {"id": "task242-4f7c970a88d14d1cae66c71d052e04db", "input": "Context: Danbury students be like: MayorMark (@MayorMark) March 4, 2015 Question: when will the tweeter make final impact? Answer: very soon", "output": ["no"]}, "Prediction": "yes"} +{"Task": "task242_tweetqa_classification", "Definition": ["In this task, you are given a context tweet, a question and corresponding answer of given question. Your task is to classify given passage into two categories: (1) \"yes\" if the given context is useful in answering the question, and (2) \"no\" if the given context is not useful."], "Instance": {"id": "task242-29999f9d2dbf450b9fdea78dce465221", "input": "Context: Jackie Evancho's album sales have skyrocketed after announcing her Inauguration performance.Some people just don't understand the \"Movement\"\u2014 Donald J. Trump (@realDonaldTrump) January 4, 2017 Question: who's album sales are increasing? 
Answer: jackie evancho", "output": ["yes"]}, "Prediction": "yes"} +{"Task": "task620_ohsumed_medical_subject_headings_answer_generation", "Definition": ["Given an abstract, generate a keyword (a noun phrase) that best describes the focus or contribution of the paper. Such keywords can be directly from the given abstract or outside it."], "Instance": {"id": "task620-21dc698548af4bc5a52e7102426a9b97", "input": "Abstract: The effect of topical application of axonal transport blockers to the transected peripheral nerve was assessed by quantitating the strychnine-enhanced transsynaptic degeneration following transection of the inferior alveolar nerve in adult rats. Systemic administration of strychnine (1 mg/kg/day) for 7 days at the postoperational interval of 23 days proved to be suitable for quantitating the transsynaptic degeneration at the light microscopic level. When the proximal stump of transected nerve was treated with 2% colchicine immediately after transection, 5.8 +/- 6.8 dark neurons in a single section of the medullary dorsal horn, ipsilateral to the nerve transection, were observed. Following similar treatment with 0.4% vinblastine and 0.2% vincristine, 24.4 +/- 10.5 and 9.4 +/- 7.0 dark neurons were seen, respectively. When compared with 43.0 +/- 9.4 dark neurons, which were seen in animals without alkaloid treatment, colchicine, vinblastine and vincristine significantly reduced the transsynaptic degeneration by 86, 43, and 78%, respectively. 
Possible mechanisms involved in prevention of transsynaptic degeneration by the alkaloids are discussed.", "output": ["Administration, Topical", "Alkaloids", "Animal", "Colchicine", "Male", "Medulla Oblongata", "Nerve Degeneration", "Peripheral Nerves", "Rats", "Rats, Inbred Strains", "Strychnine", "Time Factors", "Vinblastine", "Vincristine"]}, "Prediction": "Transsynaptic Degeneration"} +{"Task": "task620_ohsumed_medical_subject_headings_answer_generation", "Definition": ["Given an abstract, generate a keyword (a noun phrase) that best describes the focus or contribution of the paper. Such keywords can be directly from the given abstract or outside it."], "Instance": {"id": "task620-d74098c9d2774c37bd042ebf052bad26", "input": "Abstract: After a retreat to an Oregon farm, 19 of 31 college students developed an acute gastrointestinal illness. Campylobacter jejuni infection was recognized in all the ill students and caused asymptomatic infections in three others. In total, 22 (88%) of 25 students who consumed raw milk for the first time became infected as compared with none of two who had not consumed raw milk. Among ten persons who chronically consumed raw milk, none was ill, a striking difference from the 76% attack rate among the 25 acutely exposed students. The quantity of raw milk consumed was directly related to the occurrence and severity of illness. Acutely infected students showed significant rises in C jejuni-specific immunoglobulins, whereas the low antibody levels seen in unexposed persons did not rise. In contrast, acute-phase serum samples from persons with chronic exposure to raw milk showed elevated antibody levels to C jejuni. 
These findings indicate that chronic raw milk consumption is associated with elevated levels of C jejuni-specific serum antibodies and with immunity to symptomatic infection.", "output": ["Acute Disease", "Acute Phase Proteins", "Adolescence", "Adult", "Animal", "Antibodies, Bacterial", "Campylobacter fetus", "Campylobacter Infections", "Cattle", "Disease Outbreaks", "Enteritis", "Food Microbiology", "Human", "Immunity, Active", "Milk", "Support, Non-U.S. Gov't", "Support, U.S. Gov't, Non-P.H.S.."]}, "Prediction": "Campylobacter jejuni"} +{"Task": "task620_ohsumed_medical_subject_headings_answer_generation", "Definition": ["Given an abstract, generate a keyword (a noun phrase) that best describes the focus or contribution of the paper. Such keywords can be directly from the given abstract or outside it."], "Instance": {"id": "task620-741d3eddab7448bbb221687641717065", "input": "Abstract: RV and LV failure frequently coexist. Experimental evidence suggests that RV failure results from a primary insult to the right ventricle and an increase in pulmonary vascular resistance. LV failure results in an elevation of the left atrial pressure and thereby a reduction in the transpulmonary hydrostatic gradient. Because RV function depends on the contraction of the left ventricle, this contribution is reduced during LV failure. Treatment should be aimed first at restoring LV function. If RV failure persists and is not due to a simple mechanical problem, treatment of RV failure should be commenced. If volume loading, inotropes, and pH adjustments do not result in adequate RV output, a mechanical assist device should be considered. RHBP is the most powerful device, and it is capable of restoring systemic perfusion even when there is no residual RV function. 
Clinical results with this device have been most encouraging--most patients demonstrate improved RV function and a decrease in pulmonary vascular resistance after several days of treatment, allowing them to be weaned from assist. However, this technique is very invasive and requires constant close attention. PABC is a simple but less powerful assist device. Experimental studies suggest that if RV function is not extremely depressed--cardiac output is greater than 50 percent of normal--PABC may be effective in restoring systemic perfusion to normal levels. Current results suggest that RHBP remains the gold standard for severe RV failure, but further clinical experience with PABC may more clearly define its role in the management of RV failure.", "output": ["Heart Failure, Congestive", "Heart Ventricle", "Human", "Intraoperative Complications", "Support, Non-U.S. Gov't", "Vascular Resistance."]}, "Prediction": "LV failure"} +{"Task": "task620_ohsumed_medical_subject_headings_answer_generation", "Definition": ["Given an abstract, generate a keyword (a noun phrase) that best describes the focus or contribution of the paper. Such keywords can be directly from the given abstract or outside it."], "Instance": {"id": "task620-50d954a1ecaf42f7a011ad3102ce2038", "input": "Abstract: A retrospective epidemiological study of industrial burns admitted to the Royal Brisbane Hospital was conducted over a period of 7 years. A total of 182 patients were included in this survey--173 males (95 per cent) and 9 females (5 per cent). The proportion of industrial burns has decreased from 31.5 per cent to 18.5 per cent of total admissions to the burns unit over the past 10 years. Fifty per cent of the burns occurred in males less than 30 years old. The average age was 31 years. Two-thirds of the burns covered less than 10 per cent of the body and 84 per cent covered less than 20 per cent. 
While scalds were the commonest cause of industrial burns in our study (19.4 per cent), flame burns with clothing ignited caused the most extensive burns. Face and hands were the most common sites involved. Burns to these regions were mainly caused by flame and electrical burns. Eye burns comprised 5.5 per cent of the sample and were due to chemicals, gas explosions and electric flash. Twenty-six patients (14 per cent) suffered respiratory injury. Only one patient out of our series of 182 died. Progress has been made in industrial safety in the past few years but carelessness and human error still take their toll.", "output": ["Accidents, Occupational", "Adult", "Australia", "Burns", "Burns, Chemical", "Eye Burns", "Female", "Human", "Male."]}, "Prediction": "industrial burns"} +{"Task": "task620_ohsumed_medical_subject_headings_answer_generation", "Definition": ["Given an abstract, generate a keyword (a noun phrase) that best describes the focus or contribution of the paper. Such keywords can be directly from the given abstract or outside it."], "Instance": {"id": "task620-a47d3e0376d4449ebfcccb0d7ff4384b", "input": "Abstract: The use of atracurium during major abdominal surgery was assessed in 20 infants with hepatic dysfunction caused by biliary atresia. An initial dose of 0.6 mg kg-1 provided excellent intubating conditions in all patients. Subsequent neuromuscular blockade was monitored with a peripheral nerve stimulator and incremental doses of atracurium were given on reappearance of a single twitch. 
Neuromuscular conduction was allowed to return at the end of surgery and by careful timing of increments it was necessary to antagonize the neuromuscular blockade in only two patients.", "output": ["Anesthesia, Intratracheal", "Atracurium", "Biliary Atresia", "Drug Administration Schedule", "Female", "Human", "Infant", "Infant, Newborn", "Intraoperative Care", "Liver", "Male", "Neuromuscular Blocking Agents"]}, "Prediction": "hepatic dysfunction"} +{"Task": "task1159_bard_analogical_reasoning_containers", "Definition": ["Two analogies that relate items to the associated containers is given in the form \"A : B. C : ?\". \"A : B\" relates item A to its associated container B. Your task is to replace the question mark (?) with the appropriate container for the given item C, following the \"A : B\" relation."], "Instance": {"id": "task1159-29b908c63bea43a9ba38c3284bdfa76a", "input": "chips : bag. money : ?", "output": ["wallet"]}, "Prediction": "box"} +{"Task": "task1159_bard_analogical_reasoning_containers", "Definition": ["Two analogies that relate items to the associated containers is given in the form \"A : B. C : ?\". \"A : B\" relates item A to its associated container B. Your task is to replace the question mark (?) with the appropriate container for the given item C, following the \"A : B\" relation."], "Instance": {"id": "task1159-b6c13b5642f74c0a9dbcded5910ab8f2", "input": "potatoes : sack. poster : ?", "output": ["tube"]}, "Prediction": "wall"} +{"Task": "task1159_bard_analogical_reasoning_containers", "Definition": ["Two analogies that relate items to the associated containers is given in the form \"A : B. C : ?\". \"A : B\" relates item A to its associated container B. Your task is to replace the question mark (?) with the appropriate container for the given item C, following the \"A : B\" relation."], "Instance": {"id": "task1159-0fef2fc3e1934a77997cadb31c6400a5", "input": "poster : tube. 
milk : ?", "output": ["jug", "bottle"]}, "Prediction": "bottle"} +{"Task": "task1159_bard_analogical_reasoning_containers", "Definition": ["Two analogies that relate items to the associated containers is given in the form \"A : B. C : ?\". \"A : B\" relates item A to its associated container B. Your task is to replace the question mark (?) with the appropriate container for the given item C, following the \"A : B\" relation."], "Instance": {"id": "task1159-c944b91c01aa48d7bc0345d9218b12dd", "input": "detergent : bottle. chips : ?", "output": ["bag"]}, "Prediction": "bag"} +{"Task": "task1159_bard_analogical_reasoning_containers", "Definition": ["Two analogies that relate items to the associated containers is given in the form \"A : B. C : ?\". \"A : B\" relates item A to its associated container B. Your task is to replace the question mark (?) with the appropriate container for the given item C, following the \"A : B\" relation."], "Instance": {"id": "task1159-9f7408b60893452f881f7f3a7102fa02", "input": "carrots : bag. honey : ?", "output": ["jar", "bottle"]}, "Prediction": "jar"} +{"Task": "task500_scruples_anecdotes_title_generation", "Definition": ["Given a real-life anecdote of a complex ethical situation, generate a suitable title that describes the main event/root cause of the situation. Imagine that the given text is a social media post, and you have to write the title of the post, so the users can decide to read the main text or not."], "Instance": {"id": "task500-fda70a2e28a8468eb124c46158b1c4f4", "input": "Throwaway, mobile formatting, you know the drill.\n\nFor context, I\u2019m 17M she\u2019s 16F. I am currently being diagnosed for gender identity disorder, but I\u2019m still into girls. She has no idea about any of this and really just assumes I\u2019m normal. My current situation lies in which I feel like I\u2019m lying to her by keeping the real me hidden. 
She never asked, and I don\u2019t really want many people in my life knowing about this.\n\nWe\u2019ve been dating for around 6 months now, and it\u2019s really going well. We have been friends for about 3 years at this point, and I\u2019m glad we\u2019ve moved on to this stage. It\u2019s her first relationship and both of us have been much happier after we started dating.\n\nShe fell in love with the person I am not, and I\u2019m too afraid to just break her heart and just end the relationship without any reason, or at least for her it won\u2019t have any reason. Am I doing the right thing of keeping it going or am I being selfish and keeping it cause I also love her?", "output": ["keeping my relationship going despite knowing it won't last"]}, "Prediction": "lying to my girlfriend"} +{"Task": "task500_scruples_anecdotes_title_generation", "Definition": ["Given a real-life anecdote of a complex ethical situation, generate a suitable title that describes the main event/root cause of the situation. Imagine that the given text is a social media post, and you have to write the title of the post, so the users can decide to read the main text or not."], "Instance": {"id": "task500-e76bf4efad7c4b90b78aaa2e5ed7796d", "input": "Back story is I put in for a job where I'll be traveling alot 5days in hotel weekends at home. I have a wife, 3yr old and and a 7month old. Before I ever applied I asked my wife told her everything about it she said it sounded good (I make roughly 650 dollars more now) plus paid food and has when I'm away, so hugh savings all around. Fast forward when I get paper work saying I got the job before I signed and told them I was committed to at least 18months I asked her yet again and she was still fine with it. Now I've been training for last 5 weeks and only been home 3 days cause they flew me to anouther state and she's starting to say I dont love her and I'm ignoring her because I cant respond to ever text or talk to phone for 5hrs straight. 
I tell her I love her everyday and facetime when I'm not doing work I watch 0 TV now or play any games all my time is on phone or work . Now the problem is she thinks I dont love her anymore because I dont talk to her enough.", "output": ["taking a new job"]}, "Prediction": "ignoring my wife"} +{"Task": "task500_scruples_anecdotes_title_generation", "Definition": ["Given a real-life anecdote of a complex ethical situation, generate a suitable title that describes the main event/root cause of the situation. Imagine that the given text is a social media post, and you have to write the title of the post, so the users can decide to read the main text or not."], "Instance": {"id": "task500-1248cc0472184357b51e7702bf971a2b", "input": "The title is pretty self-explanatory. I'm currently pregnant at a really bad point in my life. Adoption isn't an option for a number of reasons. While my husband has been 100% supportive, I'm definitely seeking out a female perspective. Husband is fine with whatever I decide (in terms of telling friends/family for support reasons), but I'm reluctant because friend has a newly-acquired stepdaughter who she cares very deeply for. On top of that, friend probably can't have kids of her own due to medical stuff. I want support, but I don't want to overstep the need for support that my friend has... Would I be the asshole if I told my friend about my situation? Or should I just keep to myself and seek alternative outlets? \n\nI really don't want this to turn into a pro-life/pro-choice argument, so please stay away from those points. 
I've already made the decision, I just want to know if it would be selfish to tell somebody I'm close to (who probably can't have little ones).\n\nAll judgements welcome!", "output": ["telling my close friend that I'm getting an abortion"]}, "Prediction": "pregnancy"} +{"Task": "task500_scruples_anecdotes_title_generation", "Definition": ["Given a real-life anecdote of a complex ethical situation, generate a suitable title that describes the main event/root cause of the situation. Imagine that the given text is a social media post, and you have to write the title of the post, so the users can decide to read the main text or not."], "Instance": {"id": "task500-a0acf21c87fc44afa2495938fc6cf136", "input": "I usually tell my SO I am off my period a week after I actually finished bleeding because sometimes I still have some spotting episodes. To just avoid having sex while I'm accidentally bleeding, I just tell him a week later.\n\nOn Monday my SO and I had some plans for some sexy time. We were supposed to go two rounds but after our first round where I gave him oral he ended up falling asleep. No sexy times insued after. \n\nThe same day he acquires a new video game. It's not a problem until I start feeling in the mood but he just won't put the controller down. \n\nI work 6 nights And I am on my feet 100% of the time. I usually get home very exhausted and paired with all the chores I have to do on the daily, I feel like if he would help out a little i wouldn't be so frustrated. \n\nAlso I work nights so I get home fairly early (around 2) in the morning. He is sometimes on it until way after I have fallen asleep and it's morning time. \n\nI have absolutely no problem with it at all, unless we made plans to have sex then yes I would feel annoyed. \n\nLast night, he finally stopped playing his game. And we were watching a movie. In the movie they were making sexual innuendos and he turned and looked at me and said \"what about sex stuff? 
;)\" (I think he was being sexually suggestive). \nBut I guess I'm dumb because I can't read cues. And he wasn't being straight up. \nAnd he had already ruined my mood since the beginning of this week. \n\nFast forward to this morning,\nHe asks me how I'm doing and I tell him I'm starting to feel my period pains coming (heavy breasts and hurting lower back). And he said \"I swear you just got off your period. I don't know what's going on with you\"\n\nSo now I feel really bad and targeted. I can't control my body from doing what it's supposed to do. I feel so frustrated and annoyed. I want to cry but somehow I feel like it's just my hormones messing with me. \n\nAITA for not being in the mood or in sync with him when he's ready to have sex?", "output": ["not being in the mood for sex"]}, "Prediction": "sexy time"} +{"Task": "task500_scruples_anecdotes_title_generation", "Definition": ["Given a real-life anecdote of a complex ethical situation, generate a suitable title that describes the main event/root cause of the situation. Imagine that the given text is a social media post, and you have to write the title of the post, so the users can decide to read the main text or not."], "Instance": {"id": "task500-fd09bd58ef1c4ea78d3bf9cb615604ce", "input": "So my husband and I had a rough year. He cheated, i caught him, we tried to work through it but it blew up so we separated and went to see counselors and have worked through everything. \n\nSo before all of that happened my husband and i had been getting really close with one of my husbands coworkers and his wife and family. We hung out a lot, at least once a week and we texted every day. We would have them over for holidays and even if my husband was working I'd go hang out with them without him. \n\nMe and his coworker really became good friends more than me and his wife because we had a lot in common and the main thing was we had the same sense of humor. 
A kind of raunchy humor and we would send each other funny videos or memes all the time. \n\nBut they were really like family. If we needed advice, we'd go to them. If they needed anything they could come to us. \n\nSo when I caught my husband cheating I was devastated. I went to them and confided a lot in them. As my husband and I tried to work through it they were there for us helping us both with all the things we were feeling. But when it all blew up and we separated they immediately boxed me out. It was radio silence. I reached out to them and nothing. They never answered the phone or called back or replied to my texts. They were my only friends so I was completely alone. They never once were there for me.\n\nSo after me and my husband started to figure things out and started living together again they invited my husband to a bar/restaurant. I said I wasn't ready to see them again so I stayed home. When he got there they started begging him to invite me. I said no but I was trying to put my best foot forward for my husband and I eventually caved and went. \n\nI was not very nice. I basically gave them the cold shoulder. They asked how I was doing and I said I was doing fine on my own. And then they got defensive. He(coworker) said the reason they picked my husbands side was because they work together. And then he told me to stop being mad because I was just looking for things to be mad about. I ignored this and left.\n\nAfter that he(coworker) started spamming me with funny videos and memes. Not any real conversation just memes. I'd say lol or funny but I eventually stopped replying altogether. \n\nIt went on for a bit but I got tired of all these raunchy videos. So I told him that I was sorry I was rude the day at the restaurant but if he thought of me as just his coworkers wife, then he shouldn't be sending me those things anymore. 
I told him he was my best friend and I had confided a lot in him and I felt abandoned when he and his wife stopped talking to me.\n\nHis reply was \"it didnt have to be like this but if that's how you want it to be fine. All you have to do is apologize and we can fix our friendship.\"\n\nI didnt text back. I dont know what else to apologize for. I've asked my husband and he doesnt know either. Am I being an asshole? Am I missing something. I miss their friendship every day but I feel like they aren't good friends. Am I crazy? \n\n", "output": ["ending a friendship"]}, "Prediction": "Cheated on my husband, they didn'"} +{"Task": "task890_gcwd_classification", "Definition": ["Read the passage and find if the passage agrees, disagrees, or has a neutral stance on whether Global warming is caused by human activities. Answer only with keyword (a) agrees - if passage agrees with the target (b) disagrees - if passage disagrees with the target (c) neutral - if the given passage neither agrees nor disagrees with the target. You don't need to use external knowledge in this task, and you have to answer based on the given passage."], "Instance": {"id": "task890-a2ee7b6374034ab1b7efa6c8fd9ff44d", "input": "While many scientists are wary of drawing firm links between any particular storm and climate change, a rising sea level adds to the destructiveness of storm surges, and a warming atmosphere holds more moisture, leading to more rain.", "output": ["agrees"]}, "Prediction": "agrees"} +{"Task": "task890_gcwd_classification", "Definition": ["Read the passage and find if the passage agrees, disagrees, or has a neutral stance on whether Global warming is caused by human activities. Answer only with keyword (a) agrees - if passage agrees with the target (b) disagrees - if passage disagrees with the target (c) neutral - if the given passage neither agrees nor disagrees with the target. 
You don't need to use external knowledge in this task, and you have to answer based on the given passage."], "Instance": {"id": "task890-e585dcb97df949dbb5904e9e574c093e", "input": "Greenland is not only melting.", "output": ["neutral"]}, "Prediction": "agrees"} +{"Task": "task890_gcwd_classification", "Definition": ["Read the passage and find if the passage agrees, disagrees, or has a neutral stance on whether Global warming is caused by human activities. Answer only with keyword (a) agrees - if passage agrees with the target (b) disagrees - if passage disagrees with the target (c) neutral - if the given passage neither agrees nor disagrees with the target. You don't need to use external knowledge in this task, and you have to answer based on the given passage."], "Instance": {"id": "task890-aee141f6b0bb415d862212dab9c9dc32", "input": "Global warming is going to cause more wars.", "output": ["agrees"]}, "Prediction": "agrees"} +{"Task": "task890_gcwd_classification", "Definition": ["Read the passage and find if the passage agrees, disagrees, or has a neutral stance on whether Global warming is caused by human activities. Answer only with keyword (a) agrees - if passage agrees with the target (b) disagrees - if passage disagrees with the target (c) neutral - if the given passage neither agrees nor disagrees with the target. You don't need to use external knowledge in this task, and you have to answer based on the given passage."], "Instance": {"id": "task890-f899a524865e40c39bfbb7c443b0d262", "input": "All that our world leaders need is a little more common sense, in order to avoid the nonsense of seeking a globally-binding treaty to reduce CO2 emissions.", "output": ["disagrees"]}, "Prediction": "agrees"} +{"Task": "task890_gcwd_classification", "Definition": ["Read the passage and find if the passage agrees, disagrees, or has a neutral stance on whether Global warming is caused by human activities. 
Answer only with keyword (a) agrees - if passage agrees with the target (b) disagrees - if passage disagrees with the target (c) neutral - if the given passage neither agrees nor disagrees with the target. You don't need to use external knowledge in this task, and you have to answer based on the given passage."], "Instance": {"id": "task890-c2493e7bfd4c4a5fad8679fbbd40de94", "input": "Caring about the climate is elitist.", "output": ["disagrees"]}, "Prediction": "agrees"} +{"Task": "task039_qasc_find_overlapping_words", "Definition": ["Generate an overlapping word between the given two sentences. When you find the overlapping words, they don't have to match exactly, e.g., \"survival\" and \"survive\" are valid overlapping words. Little words like \"the\" or \"of\" don't count! You must generate significant words which are not the stop words."], "Instance": {"id": "task039-2a36095feade494c88d84a6eb2cfaddc", "input": "Sentence1: Bacteria are very small. \nSentence2: Biologists use microscopes to see bacteria.", "output": ["Bacteria"]}, "Prediction": "bacteria"} +{"Task": "task039_qasc_find_overlapping_words", "Definition": ["Generate an overlapping word between the given two sentences. When you find the overlapping words, they don't have to match exactly, e.g., \"survival\" and \"survive\" are valid overlapping words. Little words like \"the\" or \"of\" don't count! You must generate significant words which are not the stop words."], "Instance": {"id": "task039-328a397c748540e9995d657340e6287d", "input": "Sentence1: Well, the elbow is a hinged joint. \nSentence2: Triceps contracting causes a person 's hinged joint to straighten.", "output": ["hinged", "joint"]}, "Prediction": "elbow"} +{"Task": "task039_qasc_find_overlapping_words", "Definition": ["Generate an overlapping word between the given two sentences. When you find the overlapping words, they don't have to match exactly, e.g., \"survival\" and \"survive\" are valid overlapping words. 
Little words like \"the\" or \"of\" don't count! You must generate significant words which are not the stop words."], "Instance": {"id": "task039-868bbb8d75d54435a698db99c3f1e1e3", "input": "Sentence1: Annelids are worms such as the familiar earthworm. \nSentence2: Earthworms have some of the fastest reflexes in the animal kingdom.", "output": ["earthworm"]}, "Prediction": "Annelids"} +{"Task": "task039_qasc_find_overlapping_words", "Definition": ["Generate an overlapping word between the given two sentences. When you find the overlapping words, they don't have to match exactly, e.g., \"survival\" and \"survive\" are valid overlapping words. Little words like \"the\" or \"of\" don't count! You must generate significant words which are not the stop words."], "Instance": {"id": "task039-d504d908f7c244c6894215fe7bcd6444", "input": "Sentence1: Most animals that live on rocky shores eat plankton. \nSentence2: Most animals that live on rocky shores eat nonliving organic matter.", "output": ["Most", "animals", "eat", "live", "rocky", "shores", "that"]}, "Prediction": "plankton"} +{"Task": "task039_qasc_find_overlapping_words", "Definition": ["Generate an overlapping word between the given two sentences. When you find the overlapping words, they don't have to match exactly, e.g., \"survival\" and \"survive\" are valid overlapping words. Little words like \"the\" or \"of\" don't count! You must generate significant words which are not the stop words."], "Instance": {"id": "task039-eaf8e0346a8d45fd8492142d21777e5a", "input": "Sentence1: conserving water can be used for survival in a dry environment. \nSentence2: Conserving water can help humans survive.", "output": ["conserving"]}, "Prediction": "water"} +{"Task": "task1154_bard_analogical_reasoning_travel", "Definition": ["Two analogies that relate places/locations to the associated travel mode is given in the form \"A : B. C : ?\". \"A : B\" relates place A to travel mode B. Your task is to replace the question mark (?) 
with the appropriate travel mode for the given place C, following the \"A : B\" relation."], "Instance": {"id": "task1154-f0764dd34c3c466c91f4785834af6783", "input": "school : bicycle. cemetery : ?", "output": ["hearse"]}, "Prediction": "death"} +{"Task": "task1154_bard_analogical_reasoning_travel", "Definition": ["Two analogies that relate places/locations to the associated travel mode is given in the form \"A : B. C : ?\". \"A : B\" relates place A to travel mode B. Your task is to replace the question mark (?) with the appropriate travel mode for the given place C, following the \"A : B\" relation."], "Instance": {"id": "task1154-1cc276f97d2543a9ba8bf74bea782288", "input": "park : bicycle. bermuda : ?", "output": ["airplane"]}, "Prediction": "island"} +{"Task": "task1154_bard_analogical_reasoning_travel", "Definition": ["Two analogies that relate places/locations to the associated travel mode is given in the form \"A : B. C : ?\". \"A : B\" relates place A to travel mode B. Your task is to replace the question mark (?) with the appropriate travel mode for the given place C, following the \"A : B\" relation."], "Instance": {"id": "task1154-3b7df0afc8ba49b5b9efe62b3baf1805", "input": "restaurant : taxi. park : ?", "output": ["bicycle"]}, "Prediction": "car"} +{"Task": "task1154_bard_analogical_reasoning_travel", "Definition": ["Two analogies that relate places/locations to the associated travel mode is given in the form \"A : B. C : ?\". \"A : B\" relates place A to travel mode B. Your task is to replace the question mark (?) with the appropriate travel mode for the given place C, following the \"A : B\" relation."], "Instance": {"id": "task1154-416e1bf47c1c43cdaa87e1a9f48817d9", "input": "canada : car. moon : ?", "output": ["rocket"]}, "Prediction": "rocket"} +{"Task": "task1154_bard_analogical_reasoning_travel", "Definition": ["Two analogies that relate places/locations to the associated travel mode is given in the form \"A : B. C : ?\". 
\"A : B\" relates place A to travel mode B. Your task is to replace the question mark (?) with the appropriate travel mode for the given place C, following the \"A : B\" relation."], "Instance": {"id": "task1154-d50b69cb983d414581d24c7ecf6032ff", "input": "office : car. portland : ?", "output": ["car"]}, "Prediction": "car"} +{"Task": "task1612_sick_label_classification", "Definition": ["In this task, you're given a pair of sentences, sentence 1 and sentence 2. Your job is to choose whether the two sentences clearly agree (entailment)/disagree (contradiction) with each other, or if this cannot be determined (neutral). Your answer must be in the form of the numbers 0 (entailment), 1 (neutral), or 2(contradiction)."], "Instance": {"id": "task1612-8420118afa4c48b692b3515f7c340239", "input": "sentence_A: A person on a bike is not in the air near a body of water. sentence_B: A person on a bike is in the air near a body of water", "output": ["2"]}, "Prediction": "0"} +{"Task": "task1612_sick_label_classification", "Definition": ["In this task, you're given a pair of sentences, sentence 1 and sentence 2. Your job is to choose whether the two sentences clearly agree (entailment)/disagree (contradiction) with each other, or if this cannot be determined (neutral). Your answer must be in the form of the numbers 0 (entailment), 1 (neutral), or 2(contradiction)."], "Instance": {"id": "task1612-7d037d0416ea45d0bbdea23cb32c7e93", "input": "sentence_A: Two children are rolling in dirty water. sentence_B: Two children are rolling in muddy water", "output": ["0"]}, "Prediction": "1"} +{"Task": "task1612_sick_label_classification", "Definition": ["In this task, you're given a pair of sentences, sentence 1 and sentence 2. Your job is to choose whether the two sentences clearly agree (entailment)/disagree (contradiction) with each other, or if this cannot be determined (neutral). 
Your answer must be in the form of the numbers 0 (entailment), 1 (neutral), or 2(contradiction)."], "Instance": {"id": "task1612-45a02e69f51a4fae88266e75d87a45fa", "input": "sentence_A: The orange rescue boat is not rushing across the water. sentence_B: The orange rescue boat is rushing across the water", "output": ["2"]}, "Prediction": "0"} +{"Task": "task1612_sick_label_classification", "Definition": ["In this task, you're given a pair of sentences, sentence 1 and sentence 2. Your job is to choose whether the two sentences clearly agree (entailment)/disagree (contradiction) with each other, or if this cannot be determined (neutral). Your answer must be in the form of the numbers 0 (entailment), 1 (neutral), or 2(contradiction)."], "Instance": {"id": "task1612-51836e9fbabd45db88aa29c370c3081b", "input": "sentence_A: Someone is beating an egg. sentence_B: The lady is stirring eggs in a bowl", "output": ["1"]}, "Prediction": "1"} +{"Task": "task1612_sick_label_classification", "Definition": ["In this task, you're given a pair of sentences, sentence 1 and sentence 2. Your job is to choose whether the two sentences clearly agree (entailment)/disagree (contradiction) with each other, or if this cannot be determined (neutral). Your answer must be in the form of the numbers 0 (entailment), 1 (neutral), or 2(contradiction)."], "Instance": {"id": "task1612-5bc6d01daf7d4c8b8247fd360e7ec756", "input": "sentence_A: A girl is applying makeup to her face. sentence_B: A woman is putting on makeup", "output": ["1"]}, "Prediction": "1"} +{"Task": "task1442_doqa_movies_isanswerable", "Definition": ["Given a paragraph about movies and a set of conversational questions and answers about the paragraph, say whether the passage contains sufficient information to answer the follow-up question. Say Yes if it is answerable; otherwise, say No. The paragraph has the prefix 'CONTEXT:'. 
Each conversation question has a prefix `Q:` followed by the answer prefix `A:`, and the follow-up question has a prefix `FOLLOWUP_Q:`."], "Instance": {"id": "task1442-67a98662a8924ee5adc23b0be17d12da", "input": "CONTEXT: Ada buys back the piano one key at a time by playing for him. Making love to him counts for ten times playing for him. At this point the relation between Ada and Baine is highly asymmetric: Baine desires Ada, Ada desires her piano. Only later does Ada fall in love with Baine. Q: Who plays the piano, Ada or Baine? A: Ada buys back the piano one key at a time by playing for him FOLLOWUP_Q: Are Ada and Baine married?", "output": ["No"]}, "Prediction": "Yes"} +{"Task": "task1442_doqa_movies_isanswerable", "Definition": ["Given a paragraph about movies and a set of conversational questions and answers about the paragraph, say whether the passage contains sufficient information to answer the follow-up question. Say Yes if it is answerable; otherwise, say No. The paragraph has the prefix 'CONTEXT:'. Each conversation question has a prefix `Q:` followed by the answer prefix `A:`, and the follow-up question has a prefix `FOLLOWUP_Q:`."], "Instance": {"id": "task1442-3fcf7413530d4eca871f997623fbb9ee", "input": "CONTEXT: Having seen the movie several times, I don't think anything was \"dragging\" the girl (Carina, possibly?) towards the Infinity Stone other than her own desire. She had just finished listening to Tanleer explain how powerful the stone is, and earlier in the movie we had seen how he verbally abuses her and forces her to do very menial and back-breaking work. She saw the stone as a possibility to escape. What you're interpreting as her being pulled toward the stone, I think, is actually her being very hesitant to touch it. Her own internal conflict between the need to escape her situation vs. her fear of the stone is what makes her take so long to try and grab it, but obviously, her desire to be free wins out. 
Q: What dragged the Collector's servant towards the Infinity Stone?", "output": ["Yes"]}, "Prediction": "Yes"} +{"Task": "task1442_doqa_movies_isanswerable", "Definition": ["Given a paragraph about movies and a set of conversational questions and answers about the paragraph, say whether the passage contains sufficient information to answer the follow-up question. Say Yes if it is answerable; otherwise, say No. The paragraph has the prefix 'CONTEXT:'. Each conversation question has a prefix `Q:` followed by the answer prefix `A:`, and the follow-up question has a prefix `FOLLOWUP_Q:`."], "Instance": {"id": "task1442-508f58d65c9d4ab398fc11af9ed3488f", "input": "CONTEXT: It seems as though you answered your own questions, and that's the answer I was going to give: Caleb knew (or suspected) that Nathan had heard his conversations with Ava. If Caleb hadn't tried to get Nathan drunk that morning, Nathan would have known that Caleb ALREADY completed his plan, and he would have undone the computer code that Caleb altered. Q: Was Nathan listening in on his conversations with Ava?", "output": ["Yes"]}, "Prediction": "Yes"} +{"Task": "task1442_doqa_movies_isanswerable", "Definition": ["Given a paragraph about movies and a set of conversational questions and answers about the paragraph, say whether the passage contains sufficient information to answer the follow-up question. Say Yes if it is answerable; otherwise, say No. The paragraph has the prefix 'CONTEXT:'. Each conversation question has a prefix `Q:` followed by the answer prefix `A:`, and the follow-up question has a prefix `FOLLOWUP_Q:`."], "Instance": {"id": "task1442-0e38eaf9a2724e26aa52ed1cca1a3087", "input": "CONTEXT: Well, what we know is that Bill was a former student of Hattori Hanzo. What actually happened between Bill and Hanzo is not clear from the movie. There is also no mention of it on Kill Bill Wikia. However we can assume that Bill learnt the art of swordsmanship from Hanzo and used it for evil. 
This must have infuriated Hanzo. This kind of stuff is a recurring theme in many movies where a disciple learns a skill from a master and turns again humanity to use those skills, hence infuriating the master. So to me, this seems the possible reason for Hanzo's anger. Q: What did Bill do to make Hanzo mad? A: Bill learnt the art of swordsmanship from Hanzo and used it for evil. This must have infuriated Hanzo FOLLOWUP_Q: How did the black mambo meet Hanzo?", "output": ["No"]}, "Prediction": "No"} +{"Task": "task1442_doqa_movies_isanswerable", "Definition": ["Given a paragraph about movies and a set of conversational questions and answers about the paragraph, say whether the passage contains sufficient information to answer the follow-up question. Say Yes if it is answerable; otherwise, say No. The paragraph has the prefix 'CONTEXT:'. Each conversation question has a prefix `Q:` followed by the answer prefix `A:`, and the follow-up question has a prefix `FOLLOWUP_Q:`."], "Instance": {"id": "task1442-ee99f9ebde30435eac7e23b5a12a2c81", "input": "CONTEXT: Sam loses his powers whenever he stops drinking demon blood. Bobby and Dean made him go through a detox in order to get all of the demon blood out of him. It looks like this happened some time in season 4 after Dean came back. However in at least one episode Sam intentially drank demon blood for one reason or another and ends up going through detox again. At the end of season 5 we see him drinking gallons of demon blood to prepare him to host Lucifer so that they can capture him. Q: When does Sam lose his psychic powers? A: Sam loses his powers whenever he stops drinking demon blood Q: What show is this? A: season 4 Q: Who are the main characters? A: Bobby and Dean Q: Which season does he lose his powers? 
A: season 4 FOLLOWUP_Q: Was it when Sam was brought back from hell?", "output": ["Yes"]}, "Prediction": "Yes"} +{"Task": "task233_iirc_link_exists_classification", "Definition": ["In this task, you're given a question, along with a context passage. The passage will not have a direct answer to the question, but can include some information and a clue for the answer. Assuming you can only obtain information about one entity in the passage, your job is to determine whether information from the passage can be used to answer the question. Indicate your choice as `a` for Yes or `b` for No."], "Instance": {"id": "task233-56789dcda39342189dcac9ad7f79d573", "input": "Question: Which of the people that Sir Simon Fraser fought with during the Scottish Wars of Independence was the oldest? Passage:During the Scottish Wars of Independence, Sir Simon Fraser, known as \"the Patriot\", fought first with the Red Comyn, and later with Sir William Wallace and Robert the Bruce. Sir Simon is celebrated for having defeated the English at the Battle of Roslin in 1303, with just 8,000 men under his command. At the Battle of Methven in 1306, Sir Simon Fraser led troops along with Bruce, and saved the King's life in three separate instances. Simon was allegedly awarded the 3 Crowns which now appear in the Lovat Arms for these three acts of bravery. He was however captured by the English and executed with great cruelty by Edward I of England in 1306, in the same barbaric fashion as Wallace. At the Battle of Bannockburn in 1314, Sir Simon's cousin, Sir Alexander Fraser of Touchfraser and Cowie, was much more fortunate. He fought at Bannockburn, married Bruce's sister, and became Chamberlain of Scotland. The Frasers of Philorth who are chiefs of the senior Clan Fraser trace their lineage from this Alexander. Alexander's younger brother, another Sir Simon Fraser, was the ancestor of the chiefs of the Clan Fraser of Lovat. 
This Simon Fraser was killed at the Battle of Halidon Hill in 1333, along with his younger brothers Andrew and James.\n", "output": ["a"]}, "Prediction": "a"} +{"Task": "task233_iirc_link_exists_classification", "Definition": ["In this task, you're given a question, along with a context passage. The passage will not have a direct answer to the question, but can include some information and a clue for the answer. Assuming you can only obtain information about one entity in the passage, your job is to determine whether information from the passage can be used to answer the question. Indicate your choice as `a` for Yes or `b` for No."], "Instance": {"id": "task233-83bc006998bb430898b754cc02458d39", "input": "Question: In which U.S. state is Perlmutter's childhood neighborhood located? Passage:Perlmutter spent his childhood in the Mount Airy neighborhood of Philadelphia. He went to school in nearby Germantown; first Greene Street Friends School for the elementary grades, followed by Germantown Friends School for grades 7 through 12. He graduated with an AB in physics from Harvard magna cum laude in 1981 and received his PhD in physics from the University of California, Berkeley, in 1986. Perlmutter's PhD thesis titled \"An Astrometric Search for a Stellar Companion to the Sun\" described the development and use of an automated telescope to search for Nemesis candidates under Richard A. Muller. At the same time, he was using this telescope to search for Nemesis and supernovae, which would lead him to his award-winning work in cosmology. Perlmutter attributes the idea for an automated supernova search to Luis Alvarez, a 1968 Nobel laureate, who shared his idea with Perlmutter's research adviser.\n", "output": ["b"]}, "Prediction": "a"} +{"Task": "task233_iirc_link_exists_classification", "Definition": ["In this task, you're given a question, along with a context passage. 
The passage will not have a direct answer to the question, but can include some information and a clue for the answer. Assuming you can only obtain information about one entity in the passage, your job is to determine whether information from the passage can be used to answer the question. Indicate your choice as `a` for Yes or `b` for No."], "Instance": {"id": "task233-cf7cdc427c3442ce9a41f6d26bea1177", "input": "Question: When did the war that ended in 205 BC begin? Passage:In 205 BC, the First Macedonian War came to an end with the signing of the Treaty of Phoenice, under the terms of which the Macedonians were not allowed to expand westwards. Rome, meanwhile, was preoccupied with Carthage, and Philip hoped to take advantage of this to seize control of the Greek world. He knew that his ambitions would be aided by an alliance with Crete and began pressing the Cretans to attack Rhodian assets. Having crushed Pergamum, the dominant Greek state in Asia Minor, and formed an alliance with Aetolia, Philip was now opposed by no major Greek power other than Rhodes. Rhodes, an island state that dominated the south-eastern Mediterranean economically and militarily, was formally allied to Philip, but was also allied to his enemy Rome. Furthermore, Philip worked towards consolidating his position as the major power in the Balkans. Marching his forces to Macedon's northern frontier, he inflicted a crushing defeat on the Illyrians, who lost 10,000 men in battle. With his northern frontier secured, Philip was able to turn his attention towards the Aegean Sea.\n", "output": ["b"]}, "Prediction": "a"} +{"Task": "task233_iirc_link_exists_classification", "Definition": ["In this task, you're given a question, along with a context passage. The passage will not have a direct answer to the question, but can include some information and a clue for the answer. 
Assuming you can only obtain information about one entity in the passage, your job is to determine whether information from the passage can be used to answer the question. Indicate your choice as `a` for Yes or `b` for No."], "Instance": {"id": "task233-bff5ea2de23e4afbb033953670c2c160", "input": "Question: Who was the ruler of Prague the year young Alfred was sent there? Passage:Piccaver was born on 5 February 1884 in the Lincolnshire town of Long Sutton to chemist Frederick Herman Piccaver (born 1864, died 17 February 1916) and his wife Sarah Ann Sissons. The Piccavers had been farm laborers, but there were also claims of Spanish ancestry dating back to the Spanish Armada. At a young age, Alfred emigrated with his family to the United States of America. The family resettled in Albany, NY and took American citizenship. Frederick Piccaver worked as head brewer of the Beverwyck Brewery. Alfred joined the choir of Albany's St. Peter's Episcopal Church as a boy soprano. He also became a soloist at the North Reformed Church in Watervliet. The young Piccaver went on to study voice with S. Graham Nobbes, who had been chief instructor of the Emma Willard Conservatory of Music and with Allan Lindsay, conductor of the Troy Conservatory of Music. Alfred later trained to be electrical engineer but he had a talent for singing and in 1905 he enrolled at the Metropolitan School of Opera. The school's director Heinrich Conried recognised his considerable vocal ability and in 1907 sent the young Alfred to Prague, where he studied with Ludmilla Prochazka-Neumann (1872\u20131954).\n", "output": ["a"]}, "Prediction": "a"} +{"Task": "task233_iirc_link_exists_classification", "Definition": ["In this task, you're given a question, along with a context passage. The passage will not have a direct answer to the question, but can include some information and a clue for the answer. 
Assuming you can only obtain information about one entity in the passage, your job is to determine whether information from the passage can be used to answer the question. Indicate your choice as `a` for Yes or `b` for No."], "Instance": {"id": "task233-29015ee6dcf148a09a169a523311e875", "input": "Question: How many years had WNOK been operating when WQXL switched to country music? Passage:The station became WQXL in 1962 after the station was sold to the Belk Broadcasting Company. The WOIC call sign resurfaced that same year on 1320 AM (now WISW). WQXL originally aired a Big Band format, which eventually gave way to a Top 40 format by the end of the decade. However it was handicapped with a daytime-only signal and was unsuccessful in competing against format rivals 1230 WNOK and 1400 WCOS, which each had full-time signals. Probably its most notable personality was Mackie \"Cactus\" Quave who had worked at 560 WIS (now WVOC) and had a successful kids TV show on NBC Network affiliate Channel 10 WIS-TV. WQXL switched to country music in the summer of 1966, but again was bested by rival 620 WCAY (now WGCV). In 1973, the station switched to religious programming and eventually adopted a Contemporary Christian format.\n", "output": ["b"]}, "Prediction": "a"} +{"Task": "task936_defeasible_nli_snli_classification", "Definition": ["In this task, you are given a premise, a hypothesis, and an update. The premise sentence describes a real-world situation and is always assumed to be true. The hypothesis sentence describes an assumption or inference that you might make about that situation having read the premise. The update provides additional information about the situation that might weaken or strengthen the hypothesis. A weakener is a statement that weakens the hypothesis, that is, it makes you much less likely to believe the hypothesis is true. 
A strengthener is a statement that strengthens the hypothesis, that is, it makes you much more likely to believe the hypothesis is true. Your task is to answer with 'strengthener' or 'weakener' if the update strengthens or weakens the hypothesis, respectively."], "Instance": {"id": "task936-e947449ff52e41e68119716844d6f1b0", "input": "Premise: A man standing in a white shirt is conducting a small group of violin players.\nHypothesis: Bob is conducting a small group.\nUpdate: The man's birth certificate says Robert.", "output": ["strengthener"]}, "Prediction": "stronger"} +{"Task": "task936_defeasible_nli_snli_classification", "Definition": ["In this task, you are given a premise, a hypothesis, and an update. The premise sentence describes a real-world situation and is always assumed to be true. The hypothesis sentence describes an assumption or inference that you might make about that situation having read the premise. The update provides additional information about the situation that might weaken or strengthen the hypothesis. A weakener is a statement that weakens the hypothesis, that is, it makes you much less likely to believe the hypothesis is true. A strengthener is a statement that strengthens the hypothesis, that is, it makes you much more likely to believe the hypothesis is true. Your task is to answer with 'strengthener' or 'weakener' if the update strengthens or weakens the hypothesis, respectively."], "Instance": {"id": "task936-074c18db16f2441c922b874f6d228c9c", "input": "Premise: The boy pours water on the baby.\nHypothesis: The boy is watching the baby.\nUpdate: He is helping his mother give the baby a bath.", "output": ["weakener"]}, "Prediction": "stronger"} +{"Task": "task936_defeasible_nli_snli_classification", "Definition": ["In this task, you are given a premise, a hypothesis, and an update. The premise sentence describes a real-world situation and is always assumed to be true. 
The hypothesis sentence describes an assumption or inference that you might make about that situation having read the premise. The update provides additional information about the situation that might weaken or strengthen the hypothesis. A weakener is a statement that weakens the hypothesis, that is, it makes you much less likely to believe the hypothesis is true. A strengthener is a statement that strengthens the hypothesis, that is, it makes you much more likely to believe the hypothesis is true. Your task is to answer with 'strengthener' or 'weakener' if the update strengthens or weakens the hypothesis, respectively."], "Instance": {"id": "task936-614372595e4c41a88db049fb7695a3b7", "input": "Premise: A bunch of people are standing or sitting in a snow valley.\nHypothesis: They are all friends, talking with one another.\nUpdate: They are looking away from each other.", "output": ["weakener"]}, "Prediction": "weakener"} +{"Task": "task936_defeasible_nli_snli_classification", "Definition": ["In this task, you are given a premise, a hypothesis, and an update. The premise sentence describes a real-world situation and is always assumed to be true. The hypothesis sentence describes an assumption or inference that you might make about that situation having read the premise. The update provides additional information about the situation that might weaken or strengthen the hypothesis. A weakener is a statement that weakens the hypothesis, that is, it makes you much less likely to believe the hypothesis is true. A strengthener is a statement that strengthens the hypothesis, that is, it makes you much more likely to believe the hypothesis is true. 
Your task is to answer with 'strengthener' or 'weakener' if the update strengthens or weakens the hypothesis, respectively."], "Instance": {"id": "task936-4e2a3caf6cf64af2888544c0eb3dadaf", "input": "Premise: An ice cream truck with an open door is driving through a residential neighborhood.\nHypothesis: An ice cream truck with an open door is driving through a big neighborhood.\nUpdate: The ice cream truck is surrounded by several dozen children.", "output": ["strengthener"]}, "Prediction": "stronger"} +{"Task": "task936_defeasible_nli_snli_classification", "Definition": ["In this task, you are given a premise, a hypothesis, and an update. The premise sentence describes a real-world situation and is always assumed to be true. The hypothesis sentence describes an assumption or inference that you might make about that situation having read the premise. The update provides additional information about the situation that might weaken or strengthen the hypothesis. A weakener is a statement that weakens the hypothesis, that is, it makes you much less likely to believe the hypothesis is true. A strengthener is a statement that strengthens the hypothesis, that is, it makes you much more likely to believe the hypothesis is true. Your task is to answer with 'strengthener' or 'weakener' if the update strengthens or weakens the hypothesis, respectively."], "Instance": {"id": "task936-89afc2c76d5d44b6a4b64f885db97a71", "input": "Premise: A very skinny Santa Claus greets young children in front of a building.\nHypothesis: A man smiles at children outside while waving\nUpdate: His hands are in his pockets.", "output": ["weakener"]}, "Prediction": "weakener"} +{"Task": "task1386_anli_r2_entailment", "Definition": ["In this task, you will be presented with a premise and a hypothesis sentence. Determine whether the hypothesis sentence entails (implies), contradicts (opposes), or is neutral with respect to the given premise. 
Please answer with \"Contradiction\", \"Neutral\", or \"Entailment\"."], "Instance": {"id": "task1386-e90487b5651f4239a5589bb437766591", "input": "Premise: Connacht Rugby (Irish: \"Rugba\u00ed Connachta\" ) is one of the four professional provincial rugby teams from the island of Ireland. Connacht competes in the Pro14 and the European Rugby Challenge Cup. The team represents the IRFU Connacht Branch, which is one of four primary branches of the IRFU, and is responsible for rugby union throughout the geographical Irish province of Connacht. Hypothesis: Connacht often competes in continental europe", "output": ["Neutral"]}, "Prediction": "Entails"} +{"Task": "task1386_anli_r2_entailment", "Definition": ["In this task, you will be presented with a premise and a hypothesis sentence. Determine whether the hypothesis sentence entails (implies), contradicts (opposes), or is neutral with respect to the given premise. Please answer with \"Contradiction\", \"Neutral\", or \"Entailment\"."], "Instance": {"id": "task1386-be1e10e748184d1a88dc91df597a41df", "input": "Premise: RM Films International is a film distribution company based in Hollywood, California. It was started by filmmaker Russ Meyer to distribute his movies. Since Meyer\u2019s death in 2004, RM Films International has been owned and operated by the Russ Meyer Charitable Trust and is a recognized 501(c) private foundation. Hypothesis: RM Fils is a non profit film company based in California's capital.", "output": ["Contradiction"]}, "Prediction": "Entails"} +{"Task": "task1386_anli_r2_entailment", "Definition": ["In this task, you will be presented with a premise and a hypothesis sentence. Determine whether the hypothesis sentence entails (implies), contradicts (opposes), or is neutral with respect to the given premise. 
Please answer with \"Contradiction\", \"Neutral\", or \"Entailment\"."], "Instance": {"id": "task1386-8fd159df87884804baf6336e4d664da6", "input": "Premise: Glaiza Herradura-Agullo (born February 24, 1978) is a Filipino former child actress. She was the first-ever grand winner of the Little Miss Philippines segment of \"Eat Bulaga!\" in 1984. She starred in RPN-9's television series \"Heredero\" with Manilyn Reynes and Richard Arellano. She won the 1988 FAMAS Best Child Actress award for her role in \"Batas Sa Aking Kamay\" starring Fernando Poe, Jr.. Hypothesis: Glaiza Herradura-Agullo is a currently active actress.", "output": ["Neutral"]}, "Prediction": "False"} +{"Task": "task1386_anli_r2_entailment", "Definition": ["In this task, you will be presented with a premise and a hypothesis sentence. Determine whether the hypothesis sentence entails (implies), contradicts (opposes), or is neutral with respect to the given premise. Please answer with \"Contradiction\", \"Neutral\", or \"Entailment\"."], "Instance": {"id": "task1386-7ede77b7649c44bdb1050dba4fcfb5c7", "input": "Premise: The Vermont State Police (VSP) is the state police agency for the US state of Vermont. The force has jurisdiction throughout the entire state. The Vermont Public Safety Commission directs policy and selects the commander. The commander is Colonel Matthew Birmingham. The Vermont Public Safety Commissioner is Keith W. Flynn. There are 327 sworn state troopers. Hypothesis: The Vermont State Police has authority in the city of Montpelier.", "output": ["Entailment"]}, "Prediction": "Entails"} +{"Task": "task1386_anli_r2_entailment", "Definition": ["In this task, you will be presented with a premise and a hypothesis sentence. Determine whether the hypothesis sentence entails (implies), contradicts (opposes), or is neutral with respect to the given premise. 
Please answer with \"Contradiction\", \"Neutral\", or \"Entailment\"."], "Instance": {"id": "task1386-c2d519f0e9f34e86a96a8e69cf04f7d1", "input": "Premise: Allen West (born October 17, 1967, Brandon, Florida) is an American death metal guitarist who has been a member of Massacre, Obituary, Six Feet Under, Lowbrow, and Southwicked. He is considered to be a pioneering figure of the death metal genre in the 1980s. Hypothesis: Allen West will be 52 years old on his next birthday.", "output": ["Entailment"]}, "Prediction": "Entails"} +{"Task": "task1152_bard_analogical_reasoning_causation", "Definition": ["Two analogies that relate actions with their consequences are given in the form \"A : B. C : ?\". The phrase \"A : B\" relates action A to consequence B. Your task is to replace the question mark (?) with the appropriate consquence of the given action C, following the \"A : B\" relation. Your answer should be a single verb, without further explanation."], "Instance": {"id": "task1152-9e8c4ba215674fc5a2ca51ee3f44b5bd", "input": "throw : fly. trip : ?", "output": ["stumble"]}, "Prediction": "fall"} +{"Task": "task1152_bard_analogical_reasoning_causation", "Definition": ["Two analogies that relate actions with their consequences are given in the form \"A : B. C : ?\". The phrase \"A : B\" relates action A to consequence B. Your task is to replace the question mark (?) with the appropriate consquence of the given action C, following the \"A : B\" relation. Your answer should be a single verb, without further explanation."], "Instance": {"id": "task1152-f0acee8d9f4242d2adddf165f029cd04", "input": "listen : hear. hit : ?", "output": ["shatter"]}, "Prediction": "hit"} +{"Task": "task1152_bard_analogical_reasoning_causation", "Definition": ["Two analogies that relate actions with their consequences are given in the form \"A : B. C : ?\". The phrase \"A : B\" relates action A to consequence B. Your task is to replace the question mark (?) 
with the appropriate consquence of the given action C, following the \"A : B\" relation. Your answer should be a single verb, without further explanation."], "Instance": {"id": "task1152-96f273184c7549339e0fa021f5ecf667", "input": "fear : flee. cut : ?", "output": ["bleed"]}, "Prediction": "hurt"} +{"Task": "task1152_bard_analogical_reasoning_causation", "Definition": ["Two analogies that relate actions with their consequences are given in the form \"A : B. C : ?\". The phrase \"A : B\" relates action A to consequence B. Your task is to replace the question mark (?) with the appropriate consquence of the given action C, following the \"A : B\" relation. Your answer should be a single verb, without further explanation."], "Instance": {"id": "task1152-4f1e92d1effe44a98d74149bcba42bfa", "input": "drop : fall. throw : ?", "output": ["fly"]}, "Prediction": "fly"} +{"Task": "task1152_bard_analogical_reasoning_causation", "Definition": ["Two analogies that relate actions with their consequences are given in the form \"A : B. C : ?\". The phrase \"A : B\" relates action A to consequence B. Your task is to replace the question mark (?) with the appropriate consquence of the given action C, following the \"A : B\" relation. Your answer should be a single verb, without further explanation."], "Instance": {"id": "task1152-1afcbf0c5dda42daa87e8788aa789fd9", "input": "pursue : catch. nourish : ?", "output": ["grow"]}, "Prediction": "nourish"} +{"Task": "task290_tellmewhy_question_answerability", "Definition": ["In this task you are given a story and a question regarding that story. You must judge whether the question is answerable based on the info given to you. Label the instances as \"Answerable\" or \"Not Answerable\" based on your judgment. the story and the question are separated by a new line character."], "Instance": {"id": "task290-ad483ac976c0469e95a1a50eb8dd001e", "input": "Ken needed to do laundry because all his clothes were dirty. 
He went to the laundromat and no one was there. Ken decided to get naked and wash the clothes he was wearing. He ran out of money by the time he needed to use the driers. Ken strutted to his car for more quarters and a man whistled at him.\nWhy did He go to the laundromat?", "output": ["Not Answerable"]}, "Prediction": "Answerable"} +{"Task": "task290_tellmewhy_question_answerability", "Definition": ["In this task you are given a story and a question regarding that story. You must judge whether the question is answerable based on the info given to you. Label the instances as \"Answerable\" or \"Not Answerable\" based on your judgment. the story and the question are separated by a new line character."], "Instance": {"id": "task290-923e08256a5c47648952d66e15c1e614", "input": "My friends came over for a fondue party last week. I spent a lot of time preparing the cheese while they arrived. My friend had some fondue and stopped because he felt bad. He asked if there was wine in the fondue because he was allergic. I said there was and he stopped eating it.\nWhy did I say there was and he stopped eating it?", "output": ["Answerable"]}, "Prediction": "Answerable"} +{"Task": "task290_tellmewhy_question_answerability", "Definition": ["In this task you are given a story and a question regarding that story. You must judge whether the question is answerable based on the info given to you. Label the instances as \"Answerable\" or \"Not Answerable\" based on your judgment. the story and the question are separated by a new line character."], "Instance": {"id": "task290-ce3f258f7c074277890148298aa9c42c", "input": "The chair I was sitting in had a broken leg. It was our last hand of poker. I was anxious and I couldn't stop wobbling in my chair. The other players could tell I had a really good hand. 
I stood up as I folded.\nWhy was I anxious?", "output": ["Answerable"]}, "Prediction": "Answerable"} +{"Task": "task290_tellmewhy_question_answerability", "Definition": ["In this task you are given a story and a question regarding that story. You must judge whether the question is answerable based on the info given to you. Label the instances as \"Answerable\" or \"Not Answerable\" based on your judgment. the story and the question are separated by a new line character."], "Instance": {"id": "task290-7c133d2f433c41f8910e84e47721c3c0", "input": "Jared wanted to become an actor. He auditioned for a local play. He was cast in that play and did a great job. There was a talent agent in the audience. The talent agent signed Jared and he went on to become famous.\nWhy did Jared want to become an actor?", "output": ["Not Answerable"]}, "Prediction": "Answerable"} +{"Task": "task290_tellmewhy_question_answerability", "Definition": ["In this task you are given a story and a question regarding that story. You must judge whether the question is answerable based on the info given to you. Label the instances as \"Answerable\" or \"Not Answerable\" based on your judgment. the story and the question are separated by a new line character."], "Instance": {"id": "task290-b87e972f3a2b4720af85aacbc58ce514", "input": "I went to the mall to buy some gifts for a friend. As I walked past the suit store, a display caught my eye. I went into the store and demanded to try it on. The suit fit very well. I bought the suit and wore it for the rest of the day.\nWhy did I go into the store?", "output": ["Answerable"]}, "Prediction": "Answerable"} +{"Task": "task304_numeric_fused_head_resolution", "Definition": ["In this task, you will use your knowledge about language (and common sense) to determine what element the marked number refers to. The numbers are marked with two underlines around them, like: _ number _. There are several possible answers, you'll need to choose the proper one. 
Carefully read the given text, pay special attention to the marked number, think about what (unwritten) information the marked number holds inside, choose the most adequate word(s) from the optional answers. If none of them seems right to you, there's also an option for other. If your answer is \"REFERENCE\", also write the reference entity, otherwise write the implicit option name. Options to choose from are:\nREFERENCE: Some object which is being mentioned in the text before or after the target number. The reference answer has a higher priority than any other. If both Reference and another answer are possible, prioritize the Reference.\nYEAR: Describing a calendric year\nAGE: Describing someone's age\nCURRENCY: Reference to some monetary value e.g dollar, euro etc.\nPEOPLE: Describing a single/plural persons\nTIME: Describing a time of the day. Usually you can add the word o'clock after those numbers.\nOTHER: Some other option, which isn't listed here."], "Instance": {"id": "task304-d615e3dab058457394af9e96cd863ca1", "input": "Zach Galifianakis - Host: I 'm sorry I had to cancel so many times , my mouse pad broke last week and I had to get my great aunt so diabetes shoes\nBarack Obama: It 's no problem I have to say when I heard that people actually watch this show I was pretty surprised .\nZach Galifianakis - Host: Welcome to Between Two Ferns , I 'm your host Zach Galifianakis and my guest today is Barack Obama , \" president \" Barack Obama\nBarack Obama: Good to be with you\nZach Galifianakis - Host: First question : in 2013 you pardoned the turkey , what do you have planned for 2014 ?\nBarack Obama: We 'll probably pardon another turkey , we do that every thanksgiving , was that the depressing to you ? Seeing the one turkey you could eat , out of circulation ?\nZach Galifianakis - Host: , So how does this work ? Did you send ambassador Rodman to North Korea on your behalf ? 
I send somewhere you 'd be sending Hulk Hogan to Syria , or is that more of a job for Tonya Harding ?\nBarack Obama: He 's not our ambassador\nZach Galifianakis - Host: What we do about North Ikea ?\nBarack Obama: Why do n't we move on ?\nZach Galifianakis - Host: I have to know , what is it like to be the last black president ?\nBarack Obama: Seriously ? What 's it like for this to be the last time you ever to talk to a president ?\nZach Galifianakis - Host: It must kind of stink though , that you ca n't run three times\nBarack Obama: Actually I think it 's good idea , if I ran a third time , it 'd be like doing the third Hangover movie , it did n't work out very well , did it ? I have to say I 've seen the show before and some of the episodes have been a little better than this one . For examples the _ one _ with Bradley Cooper , that was a great episode\nZach Galifianakis - Host: Bradley Cooper ...\nBarack Obama: He Kind of carried that movie , did n't he ?\nZach Galifianakis - Host: Which film were you speaking of ?\nBarack Obama: Those Hangover movies , basically he carried them .\nZach Galifianakis - Host: Yeah , everybody loves Bradley , good for him\nBarack Obama: Good looking guy\nZach Galifianakis - Host: Being in Hollywood , that 's easy , be short and fat and smell like Doritos and try and make it in Hollywood\nZach Galifianakis - Host: Is\nZach Galifianakis - Host: It going to be hard in two years when you 're no longer president and people will stop letting you win at basketball ?\nBarack Obama: How does it feel to have a three inch \" vertical \" ?\nZach Galifianakis - Host: Actually it 's a three inch \" horizontal \"", "output": ["REFERENCE episodes"]}, "Prediction": "YEAR"} +{"Task": "task304_numeric_fused_head_resolution", "Definition": ["In this task, you will use your knowledge about language (and common sense) to determine what element the marked number refers to. The numbers are marked with two underlines around them, like: _ number _. 
There are several possible answers, you'll need to choose the proper one. Carefully read the given text, pay special attention to the marked number, think about what (unwritten) information the marked number holds inside, choose the most adequate word(s) from the optional answers. If none of them seems right to you, there's also an option for other. If your answer is \"REFERENCE\", also write the reference entity, otherwise write the implicit option name. Options to choose from are:\nREFERENCE: Some object which is being mentioned in the text before or after the target number. The reference answer has a higher priority than any other. If both Reference and another answer are possible, prioritize the Reference.\nYEAR: Describing a calendric year\nAGE: Describing someone's age\nCURRENCY: Reference to some monetary value e.g dollar, euro etc.\nPEOPLE: Describing a single/plural persons\nTIME: Describing a time of the day. Usually you can add the word o'clock after those numbers.\nOTHER: Some other option, which isn't listed here."], "Instance": {"id": "task304-9497003497ee46a9bb7d7d3ede91a0aa", "input": "Barbara Royle: How 's your diet going , Cheryl ?\nCheryl Carroll: Oh , all right thanks , yeah , Barbara , I lost 4 pounds ... and then I put _ 2 _ back on and then another 2 . But I 've not gained any .\nBarbara Royle: Oh , well I think you 're doing ever so well to stick to it , love .", "output": ["REFERENCE pounds"]}, "Prediction": "YEAR"} +{"Task": "task304_numeric_fused_head_resolution", "Definition": ["In this task, you will use your knowledge about language (and common sense) to determine what element the marked number refers to. The numbers are marked with two underlines around them, like: _ number _. There are several possible answers, you'll need to choose the proper one. 
Carefully read the given text, pay special attention to the marked number, think about what (unwritten) information the marked number holds inside, choose the most adequate word(s) from the optional answers. If none of them seems right to you, there's also an option for other. If your answer is \"REFERENCE\", also write the reference entity, otherwise write the implicit option name. Options to choose from are:\nREFERENCE: Some object which is being mentioned in the text before or after the target number. The reference answer has a higher priority than any other. If both Reference and another answer are possible, prioritize the Reference.\nYEAR: Describing a calendric year\nAGE: Describing someone's age\nCURRENCY: Reference to some monetary value e.g dollar, euro etc.\nPEOPLE: Describing a single/plural persons\nTIME: Describing a time of the day. Usually you can add the word o'clock after those numbers.\nOTHER: Some other option, which isn't listed here."], "Instance": {"id": "task304-52fd97478e9c4d5c9df41a978842949a", "input": "Lieutenant Okubo: Shall I finish him off ?\nBaron Nishi: No . Treat him .\nLieutenant Okubo: But , sir ...\nBaron Nishi: Okubo , you would expect the same , would n't you ? Endo , treat him .\nMedic Endo: We are low on morphine as it is .\nShimizu: Sir , the Americans would not treat a wounded Japanese soldier .\nBaron Nishi: Son , have you ever met _ one _ ? Treat him .", "output": ["REFERENCE soldier"]}, "Prediction": "OTHER"} +{"Task": "task304_numeric_fused_head_resolution", "Definition": ["In this task, you will use your knowledge about language (and common sense) to determine what element the marked number refers to. The numbers are marked with two underlines around them, like: _ number _. There are several possible answers, you'll need to choose the proper one. 
Carefully read the given text, pay special attention to the marked number, think about what (unwritten) information the marked number holds inside, choose the most adequate word(s) from the optional answers. If none of them seems right to you, there's also an option for other. If your answer is \"REFERENCE\", also write the reference entity, otherwise write the implicit option name. Options to choose from are:\nREFERENCE: Some object which is being mentioned in the text before or after the target number. The reference answer has a higher priority than any other. If both Reference and another answer are possible, prioritize the Reference.\nYEAR: Describing a calendric year\nAGE: Describing someone's age\nCURRENCY: Reference to some monetary value e.g dollar, euro etc.\nPEOPLE: Describing a single/plural persons\nTIME: Describing a time of the day. Usually you can add the word o'clock after those numbers.\nOTHER: Some other option, which isn't listed here."], "Instance": {"id": "task304-a6d75eb8512f42d7b516b6592a56e3d2", "input": "Mr. E.F. Duncan, Owner Duncan's Toy Chest: You see that tree there ? Well , to show our appreciation for your generosity , I 'm gon na let you select an object from that tree that you can take home with you .\nKevin McCallister: For free ?\nMr. E.F. Duncan, Owner Duncan's Toy Chest: Oh , yes . Oh , and may I make a suggestion ? Take the turtle doves .\nKevin McCallister: I can have _ two _ ?\nMr. E.F. Duncan, Owner Duncan's Toy Chest: Well , two turtle doves . I 'll tell you what you do : you keep one , and you give the other one to a very special person . You see , turtle doves are a symbol of friendship and love . And as long as each of you has your turtle dove , you 'll be friends forever .\nKevin McCallister: Wow , I never knew that . I thought they were just part of a song .\nMr. E.F. Duncan, Owner Duncan's Toy Chest: They are . 
And for that very special reason .", "output": ["REFERENCE doves"]}, "Prediction": "REFERENCE"} +{"Task": "task304_numeric_fused_head_resolution", "Definition": ["In this task, you will use your knowledge about language (and common sense) to determine what element the marked number refers to. The numbers are marked with two underlines around them, like: _ number _. There are several possible answers, you'll need to choose the proper one. Carefully read the given text, pay special attention to the marked number, think about what (unwritten) information the marked number holds inside, choose the most adequate word(s) from the optional answers. If none of them seems right to you, there's also an option for other. If your answer is \"REFERENCE\", also write the reference entity, otherwise write the implicit option name. Options to choose from are:\nREFERENCE: Some object which is being mentioned in the text before or after the target number. The reference answer has a higher priority than any other. If both Reference and another answer are possible, prioritize the Reference.\nYEAR: Describing a calendric year\nAGE: Describing someone's age\nCURRENCY: Reference to some monetary value e.g dollar, euro etc.\nPEOPLE: Describing a single/plural persons\nTIME: Describing a time of the day. Usually you can add the word o'clock after those numbers.\nOTHER: Some other option, which isn't listed here."], "Instance": {"id": "task304-f6d6acf8797a40f7ab867ff04cb1432a", "input": "Jane Rizzoli: Yes ! Done ! Every follow - up report . What 's my price ?\nBarry Frost: 2 days off to hang with your boyfriend\nJane Rizzoli: Hm , first days I have taken in a year . 
BPD owes me a 116\nVince Korsak: That 's nothing , they owe me _ 243 _\nBarry Frost: Why are you two competing over who takes the fewest vacation days ?\nJane Rizzoli: Because we 're studs\nBarry Frost: So what do you and Casey have planned ?\nJane Rizzoli: Well , he only has 72 hours off , so \" nothing \"", "output": ["REFERENCE days"]}, "Prediction": "YEAR"} +{"Task": "task760_msr_sqa_long_text_generation", "Definition": ["You are given data in tabular format and you need to generate a passage with content in plain text format based on the information present in the table."], "Instance": {"id": "task760-58df3a6a6727493589cd11b9ff087102", "input": "Table: ['Pos', 'No', 'Driver', 'Constructor', 'Laps', 'Time/Retired', 'Grid', 'Points']. ['1', '5', 'Fernando Alonso', 'Renault', '56', '1:31:33.736', '1', '10']. ['2', '16', 'Jarno Trulli', 'Toyota', '56', '+24.327', '2', '8']. ['3', '8', 'Nick Heidfeld', 'Williams-BMW', '56', '+32.188', '10', '6']. ['4', '10', 'Juan Pablo Montoya', 'McLaren-Mercedes', '56', '+41.631', '11', '5']. ['5', '17', 'Ralf Schumacher', 'Toyota', '56', '+51.854', '5', '4']. ['6', '14', 'David Coulthard', 'Red Bull-Cosworth', '56', '+1:12.543', '8', '3']. ['7', '1', 'Michael Schumacher', 'Ferrari', '56', '+1:19.988', '13', '2']. ['8', '15', 'Christian Klien', 'Red Bull-Cosworth', '56', '+1:20.835', '7', '1']. ['9', '9', 'Kimi Raikkonen', 'McLaren-Mercedes', '56', '+1:21.580', '6', '']. ['10', '12', 'Felipe Massa', 'Sauber-Petronas', '55', '+1 Lap', '14', '']. ['11', '19', 'Narain Karthikeyan', 'Jordan-Toyota', '54', '+2 Laps', '17', '']. ['12', '18', 'Tiago Monteiro', 'Jordan-Toyota', '53', '+3 Laps', '18', '']. ['13', '21', 'Christijan Albers', 'Minardi-Cosworth', '52', '+4 Laps', '19', '']. ['Ret', '2', 'Rubens Barrichello', 'Ferrari', '49', 'Handling', '12', '']. ['Ret', '6', 'Giancarlo Fisichella', 'Renault', '36', 'Collision', '3', '']. ['Ret', '7', 'Mark Webber', 'Williams-BMW', '36', 'Collision', '4', '']. 
['Ret', '11', 'Jacques Villeneuve', 'Sauber-Petronas', '26', 'Spun Off', '16', '']. ['Ret', '3', 'Jenson Button', 'BAR-Honda', '2', 'Engine', '9', '']. ['Ret', '4', 'Anthony Davidson', 'BAR-Honda', '2', 'Engine', '15', '']. ['Ret', '20', 'Patrick Friesacher', 'Minardi-Cosworth', '2', 'Spun Off', '20', '']", "output": ["Passage: The driver with number 5, Fernando Alonso, from Renault constructor, finished at position 1. Fernando Alonso completed 56 laps in 1:31:33.736 time and earned 10 points. The driver with number 16, Jarno Trulli, from Toyota constructor, finished at position 2. Jarno Trulli completed 56 laps in +24.327 time and earned 8 points. The driver with number 8, Nick Heidfeld, from Williams-BMW constructor, finished at position 3. Nick Heidfeld completed 56 laps in +32.188 time and earned 6 points. The driver with number 10, Juan Pablo Montoya, from McLaren-Mercedes constructor, finished at position 4. Juan Pablo Montoya completed 56 laps in +41.631 time and earned 5 points. The driver with number 17, Ralf Schumacher, from Toyota constructor, finished at position 5. Ralf Schumacher completed 56 laps in +51.854 time and earned 4 points. The driver with number 14, David Coulthard, from Red Bull-Cosworth constructor, finished at position 6. David Coulthard completed 56 laps in +1:12.543 time and earned 3 points. The driver with number 1, Michael Schumacher, from Ferrari constructor, finished at position 7. Michael Schumacher completed 56 laps in +1:19.988 time and earned 2 points. The driver with number 15, Christian Klien, from Red Bull-Cosworth constructor, finished at position 8. Christian Klien completed 56 laps in +1:20.835 time and earned 1 points. The driver with number 9, Kimi Raikkonen, from McLaren-Mercedes constructor, finished at position 9. Kimi Raikkonen completed 56 laps in +1:21.580 time and earned points. The driver with number 12, Felipe Massa, from Sauber-Petronas constructor, finished at position 10. 
Felipe Massa completed 55 laps in +1 Lap time and earned points. The driver with number 19, Narain Karthikeyan, from Jordan-Toyota constructor, finished at position 11. Narain Karthikeyan completed 54 laps in +2 Laps time and earned points. The driver with number 18, Tiago Monteiro, from Jordan-Toyota constructor, finished at position 12. Tiago Monteiro completed 53 laps in +3 Laps time and earned points. The driver with number 21, Christijan Albers, from Minardi-Cosworth constructor, finished at position 13. Christijan Albers completed 52 laps in +4 Laps time and earned points. The driver with number 2, Rubens Barrichello, from Ferrari constructor, finished at position Ret. Rubens Barrichello completed 49 laps in Handling time and earned points. The driver with number 6, Giancarlo Fisichella, from Renault constructor, finished at position Ret. Giancarlo Fisichella completed 36 laps in Collision time and earned points. The driver with number 7, Mark Webber, from Williams-BMW constructor, finished at position Ret. Mark Webber completed 36 laps in Collision time and earned points. The driver with number 11, Jacques Villeneuve, from Sauber-Petronas constructor, finished at position Ret. Jacques Villeneuve completed 26 laps in Spun Off time and earned points. The driver with number 3, Jenson Button, from BAR-Honda constructor, finished at position Ret. Jenson Button completed 2 laps in Engine time and earned points. The driver with number 4, Anthony Davidson, from BAR-Honda constructor, finished at position Ret. Anthony Davidson completed 2 laps in Engine time and earned points. The driver with number 20, Patrick Friesacher, from Minardi-Cosworth constructor, finished at position Ret. 
Patrick Friesacher completed 2 laps in Spun Off time and earned points."]}, "Prediction": "6"} +{"Task": "task760_msr_sqa_long_text_generation", "Definition": ["You are given data in tabular format and you need to generate a passage with content in plain text format based on the information present in the table."], "Instance": {"id": "task760-d655a07ec6044870b340c63bf2b51f84", "input": "Table: ['Draw', 'Country', 'Language', 'Artist', 'Song', 'English translation', 'National final', 'Place', 'Points']. ['01', 'Malta', 'English', 'Claudia Faniello', 'Caravaggio', '-', '2nd', '13', '53']. ['02', 'Israel', 'Hebrew', \"Bo'az Ma'uda & Oshrat Papin\", 'Parparim', 'Butterflies', '3rd', '8', '90']. ['03', 'Russia', 'English', 'Sergey Lazarev', 'Flyer', '-', '4th', '7', '119']. ['04', 'Estonia', 'English', 'Rolf Junior', 'One on One', '-', '4th', '21', '0']. ['05', 'Belgium', 'English', 'Sandrine', 'I Feel The Same Way', '-', '2nd', '10', '64']. ['06', 'Macedonia', 'Macedonian', 'Risto Samardziev', 'Dojdi do mene', 'Come to me', '3rd', '19', '7']. ['07', 'Poland', 'English', 'Man Meadow', 'Viva la Musica', 'Hail to the music', '3rd', '3', '155']. ['08', 'Finland', 'Finnish', 'Mikael Konttinen', 'Milloin', 'When', 'Unplaced', '14', '46']. ['09', 'Germany', 'German', 'Carolin Fortenbacher', 'Hinterm Ozean', 'Beyond the ocean', '2nd', '6', '129']. ['10', 'Denmark', 'English', 'Kendra Lou', \"Until We're Satisfied\", '-', '4th', '18', '21']. ['11', 'Greece', 'English', 'Kostas Martakis', 'Always and Forever', '-', '2nd', '4', '140']. ['12', 'Serbia', 'Serbian', 'Beauty Queens', 'Zavet', 'A pledge', '3rd', '9', '68']. ['13', 'Croatia', 'Croatian', 'Emilija Kokic', 'Andjeo', 'Angel', '6th', '15', '31']. ['14', 'Lithuania', 'English', 'Aiste Pilvelyte', 'Troy on Fire', '-', '2nd', '17', '26']. ['15', 'Ireland', 'English', 'Leona Daly', 'Not Crazy After All', '-', 'Unplaced', '12', '56']. ['16', 'United Kingdom', 'English', 'The Revelations', \"It's You\", '-', '3rd', '16', '29']. 
['17', 'Portugal', 'Portuguese', 'Lisboa Nao Sejas Francesa', 'Porto de encontro', 'Meeting harbour', '9th', '20', '6']. ['18', 'Sweden', 'English', 'Sanna Nielsen', 'Empty Room', '-', '2nd', '1', '269']. ['19', 'Norway', 'English', 'Veronica Akselsen', 'Am I Supposed To Love Again', '-', '4th', '5', '139']. ['20', 'Slovenia', 'Slovene', 'Brigita Suler', 'Samara', '-', '3rd', '11', '57']. ['21', 'Spain', 'Spanish', 'Coral', 'Todo esta en tu mente', 'Everything is in your mind', '2nd', '2', '178']", "output": ["Passage: The song Caravaggio by the artist Claudia Faniello finished at 2nd position in the national finals, and at 13th position in the international finals by earning total 53 points. The song Caravaggio is in the language English from the country Malta. The English translation of the song is -. The song Parparim by the artist Bo'az Ma'uda & Oshrat Papin finished at 3rd position in the national finals, and at 8th position in the international finals by earning total 90 points. The song Parparim is in the language Hebrew from the country Israel. The English translation of the song is Butterflies. The song Flyer by the artist Sergey Lazarev finished at 4th position in the national finals, and at 7th position in the international finals by earning total 119 points. The song Flyer is in the language English from the country Russia. The English translation of the song is -. The song One on One by the artist Rolf Junior finished at 4th position in the national finals, and at 21th position in the international finals by earning total 0 points. The song One on One is in the language English from the country Estonia. The English translation of the song is -. The song I Feel The Same Way by the artist Sandrine finished at 2nd position in the national finals, and at 10th position in the international finals by earning total 64 points. The song I Feel The Same Way is in the language English from the country Belgium. The English translation of the song is -. 
The song Dojdi do mene by the artist Risto Samardziev finished at 3rd position in the national finals, and at 19th position in the international finals by earning total 7 points. The song Dojdi do mene is in the language Macedonian from the country Macedonia. The English translation of the song is Come to me. The song Viva la Musica by the artist Man Meadow finished at 3rd position in the national finals, and at 3th position in the international finals by earning total 155 points. The song Viva la Musica is in the language English from the country Poland. The English translation of the song is Hail to the music. The song Milloin by the artist Mikael Konttinen finished at Unplaced position in the national finals, and at 14th position in the international finals by earning total 46 points. The song Milloin is in the language Finnish from the country Finland. The English translation of the song is When. The song Hinterm Ozean by the artist Carolin Fortenbacher finished at 2nd position in the national finals, and at 6th position in the international finals by earning total 129 points. The song Hinterm Ozean is in the language German from the country Germany. The English translation of the song is Beyond the ocean. The song Until We're Satisfied by the artist Kendra Lou finished at 4th position in the national finals, and at 18th position in the international finals by earning total 21 points. The song Until We're Satisfied is in the language English from the country Denmark. The English translation of the song is -. The song Always and Forever by the artist Kostas Martakis finished at 2nd position in the national finals, and at 4th position in the international finals by earning total 140 points. The song Always and Forever is in the language English from the country Greece. The English translation of the song is -. 
The song Zavet by the artist Beauty Queens finished at 3rd position in the national finals, and at 9th position in the international finals by earning total 68 points. The song Zavet is in the language Serbian from the country Serbia. The English translation of the song is A pledge. The song Andjeo by the artist Emilija Kokic finished at 6th position in the national finals, and at 15th position in the international finals by earning total 31 points. The song Andjeo is in the language Croatian from the country Croatia. The English translation of the song is Angel. The song Troy on Fire by the artist Aiste Pilvelyte finished at 2nd position in the national finals, and at 17th position in the international finals by earning total 26 points. The song Troy on Fire is in the language English from the country Lithuania. The English translation of the song is -. The song Not Crazy After All by the artist Leona Daly finished at Unplaced position in the national finals, and at 12th position in the international finals by earning total 56 points. The song Not Crazy After All is in the language English from the country Ireland. The English translation of the song is -. The song It's You by the artist The Revelations finished at 3rd position in the national finals, and at 16th position in the international finals by earning total 29 points. The song It's You is in the language English from the country United Kingdom. The English translation of the song is -. The song Porto de encontro by the artist Lisboa Nao Sejas Francesa finished at 9th position in the national finals, and at 20th position in the international finals by earning total 6 points. The song Porto de encontro is in the language Portuguese from the country Portugal. The English translation of the song is Meeting harbour. The song Empty Room by the artist Sanna Nielsen finished at 2nd position in the national finals, and at 1th position in the international finals by earning total 269 points. 
The song Empty Room is in the language English from the country Sweden. The English translation of the song is -. The song Am I Supposed To Love Again by the artist Veronica Akselsen finished at 4th position in the national finals, and at 5th position in the international finals by earning total 139 points. The song Am I Supposed To Love Again is in the language English from the country Norway. The English translation of the song is -. The song Samara by the artist Brigita Suler finished at 3rd position in the national finals, and at 11th position in the international finals by earning total 57 points. The song Samara is in the language Slovene from the country Slovenia. The English translation of the song is -. The song Todo esta en tu mente by the artist Coral finished at 2nd position in the national finals, and at 2th position in the international finals by earning total 178 points. The song Todo esta en tu mente is in the language Spanish from the country Spain. The English translation of the song is Everything is in your mind."]}, "Prediction": "-', -', "} +{"Task": "task760_msr_sqa_long_text_generation", "Definition": ["You are given data in tabular format and you need to generate a passage with content in plain text format based on the information present in the table."], "Instance": {"id": "task760-db1f9d517b0d41f3a1369262856bb1f5", "input": "Table: ['Locomotive', 'Named', 'Serial No', 'Entered service', 'Gauge', 'Livery']. ['BL26', 'Bob Hawke', '83-1010', 'March 1983', 'Standard', 'Pacific National blue & yellow']. ['BL27', '', '83-1011', 'August 1983', 'Standard', 'Pacific National blue & yellow']. ['BL28', '', '83-1012', 'September 1983', 'Standard', 'Pacific National blue & yellow']. ['BL29', '', '83-1013', 'October 1983', 'Broad', 'Pacific National blue & yellow']. ['BL30', '', '83-1014', 'December 1983', 'Standard', 'Pacific National blue & yellow']. ['BL31', '', '83-1015', 'November 1983', 'Standard', 'Pacific National blue & yellow']. 
['BL32', '', '83-1016', 'February 1984', 'Broad', 'National Rail orange & grey']. ['BL33', '', '83-1017', 'April 1984', 'Standard', 'Pacific National blue & yellow']. ['BL34', '', '83-1018', 'June 1984', 'Broad', 'Pacific National blue & yellow']. ['BL35', '', '83-1019', 'July 1984', 'Standard', 'Pacific National blue & yellow']", "output": ["Passage: The locomotive BL26 named Bob Hawke and with serial no. 83-1010, entered the service in the month of March 1983. It has Standard gauge, and Pacific National blue & yellow livery. The locomotive BL27 named and with serial no. 83-1011, entered the service in the month of August 1983. It has Standard gauge, and Pacific National blue & yellow livery. The locomotive BL28 named and with serial no. 83-1012, entered the service in the month of September 1983. It has Standard gauge, and Pacific National blue & yellow livery. The locomotive BL29 named and with serial no. 83-1013, entered the service in the month of October 1983. It has Broad gauge, and Pacific National blue & yellow livery. The locomotive BL30 named and with serial no. 83-1014, entered the service in the month of December 1983. It has Standard gauge, and Pacific National blue & yellow livery. The locomotive BL31 named and with serial no. 83-1015, entered the service in the month of November 1983. It has Standard gauge, and Pacific National blue & yellow livery. The locomotive BL32 named and with serial no. 83-1016, entered the service in the month of February 1984. It has Broad gauge, and National Rail orange & grey livery. The locomotive BL33 named and with serial no. 83-1017, entered the service in the month of April 1984. It has Standard gauge, and Pacific National blue & yellow livery. The locomotive BL34 named and with serial no. 83-1018, entered the service in the month of June 1984. It has Broad gauge, and Pacific National blue & yellow livery. The locomotive BL35 named and with serial no. 83-1019, entered the service in the month of July 1984. 
It has Standard gauge, and Pacific National blue & yellow livery."]}, "Prediction": "['Locomotive',"} +{"Task": "task760_msr_sqa_long_text_generation", "Definition": ["You are given data in tabular format and you need to generate a passage with content in plain text format based on the information present in the table."], "Instance": {"id": "task760-175ea253a14e4d3680d2ece872a0a375", "input": "Table: ['Model', 'Frame', \"Years Mfg'd\", 'Caliber(s)', 'Production', 'Barrel', 'Notes']. ['Remington-Beals Army Model Revolver', 'Large', '1861-1862', '.44', '1,900 (estimated)', '8 inch octagon', '']. ['Remington-Beals Navy Model Revolver', 'Medium', '1861-1862', '.36', '14,500 (estimated)', '7 1/2 inch octagon', '']. ['1861 Army Revolver (Old Model Army)', 'Large', '1862', '.44', '6,000 (estimated)', '8 inch octagon', '']. ['1861 Navy Revolver', 'Medium', '1862', '.36', '7,000 (estimated)', '7 3/8 inch octagon', '']. ['New Model Army Revolver', 'Large', '1863-1875', '.44', '122,000 (approximately)', '8 inch octagon', 'Used for factory conversions in .46 RF & .44 Remington']. ['New Model Navy Revolver', 'Medium', '1863-1875', '.36', '28,000 (approximately)', '7 3/8 inch octagon', 'Used for factory and U.S. Navy conversions to .38 RF & CF']. ['New Model Single Action Belt Revolver', 'Large', '1863-1875', '.36 percussion and .38 CF', '2,500 - 3,000 (estimated)', '6 1/2 inch octagon', 'Factory conversion production started in 1873']. ['Remington-Rider Double Action New Model Belt Revolver', 'Large', '1863-1873', '.36 percussion and .38 CF', '3,000 - 5,000 (estimated)', '6 1/2 inch octagon', '1863-1865 available with fluted cylinder, conversions had two-piece cylinder']. ['New Model Police Revolver', 'Medium', '1865-1873', '.36 percussion and .38 RF', '25,000 (estimated)', '3 1/2, 4 1/2, 5 1/2, 6 1/2 inch octagon', 'Conversions all believed to be rimfire only']. 
['New Model Pocket Revolver', 'Medium', '1865-1873', '.31 percussion and .32 CF', '25,000 (estimated)', '3, 3 1/2, 4, 4 1/2', 'Majority produced as conversions or cartridge']", "output": ["Passage: The model named 'Remington-Beals Army Model Revolver', with Large frame, was manufactured around the period 1861-1862. 'Remington-Beals Army Model Revolver' has .44 caliber(s), and the number of production units are 1,900 (estimated) and 8 inch octagon barrels. . The model named 'Remington-Beals Navy Model Revolver', with Medium frame, was manufactured around the period 1861-1862. 'Remington-Beals Navy Model Revolver' has .36 caliber(s), and the number of production units are 14,500 (estimated) and 7 1/2 inch octagon barrels. . The model named '1861 Army Revolver (Old Model Army)', with Large frame, was manufactured around the period 1862. '1861 Army Revolver (Old Model Army)' has .44 caliber(s), and the number of production units are 6,000 (estimated) and 8 inch octagon barrels. . The model named '1861 Navy Revolver', with Medium frame, was manufactured around the period 1862. '1861 Navy Revolver' has .36 caliber(s), and the number of production units are 7,000 (estimated) and 7 3/8 inch octagon barrels. . The model named 'New Model Army Revolver', with Large frame, was manufactured around the period 1863-1875. 'New Model Army Revolver' has .44 caliber(s), and the number of production units are 122,000 (approximately) and 8 inch octagon barrels. Used for factory conversions in .46 RF & .44 Remington. The model named 'New Model Navy Revolver', with Medium frame, was manufactured around the period 1863-1875. 'New Model Navy Revolver' has .36 caliber(s), and the number of production units are 28,000 (approximately) and 7 3/8 inch octagon barrels. Used for factory and U.S. Navy conversions to .38 RF & CF. The model named 'New Model Single Action Belt Revolver', with Large frame, was manufactured around the period 1863-1875. 
'New Model Single Action Belt Revolver' has .36 percussion and .38 CF caliber(s), and the number of production units are 2,500 - 3,000 (estimated) and 6 1/2 inch octagon barrels. Factory conversion production started in 1873. The model named 'Remington-Rider Double Action New Model Belt Revolver', with Large frame, was manufactured around the period 1863-1873. 'Remington-Rider Double Action New Model Belt Revolver' has .36 percussion and .38 CF caliber(s), and the number of production units are 3,000 - 5,000 (estimated) and 6 1/2 inch octagon barrels. 1863-1865 available with fluted cylinder, conversions had two-piece cylinder. The model named 'New Model Police Revolver', with Medium frame, was manufactured around the period 1865-1873. 'New Model Police Revolver' has .36 percussion and .38 RF caliber(s), and the number of production units are 25,000 (estimated) and 3 1/2, 4 1/2, 5 1/2, 6 1/2 inch octagon barrels. Conversions all believed to be rimfire only. The model named 'New Model Pocket Revolver', with Medium frame, was manufactured around the period 1865-1873. 'New Model Pocket Revolver' has .31 percussion and .32 CF caliber(s), and the number of production units are 25,000 (estimated) and 3, 3 1/2, 4, 4 1/2 barrels. Majority produced as conversions or cartridge."]}, "Prediction": "The Remington-Beals New Model Army"} +{"Task": "task760_msr_sqa_long_text_generation", "Definition": ["You are given data in tabular format and you need to generate a passage with content in plain text format based on the information present in the table."], "Instance": {"id": "task760-8de9a0cdb1b342a19dd857fabc6a40db", "input": "Table: ['Rank', 'Name', 'Nationality', 'Time (hand)', 'Notes']. ['', 'Tommy Green', 'Great Britain', '4:50:10', 'OR']. ['', 'Janis Dalins', 'Latvia', '4:57:20', '']. ['', 'Ugo Frigerio', 'Italy', '4:59:06', '']. ['4', 'Karl Hahnel', 'Germany', '5:06:06', '']. ['5', 'Ettore Rivolta', 'Italy', '5:07:39', '']. ['6', 'Paul Sievert', 'Germany', '5:16:41', '']. 
['7', 'Henri Quintric', 'France', '5:27:25', '']. ['8', 'Ernie Crosbie', 'United States', '5:28:02', '']. ['9', 'Bill Chisholm', 'United States', '5:51:00', '']. ['10', 'Alfred Maasik', 'Estonia', '6:19:00', '']. ['', 'Henry Cieman', 'Canada', '', 'DNF']. ['', 'John Moralis', 'Greece', '', 'DNF']. ['', 'Francesco Pretti', 'Italy', '', 'DNF']. ['', 'Arthur Tell Schwab', 'Switzerland', '', 'DNF']. ['', 'Harry Hinkel', 'United States', '', 'DNF']", "output": ["Passage: The Great Britain athlete named Tommy Green is ranked globally. Tommy Green completed the marathon in 4:50:10 seconds. The Latvia athlete named Janis Dalins is ranked globally. Janis Dalins completed the marathon in 4:57:20 seconds. The Italy athlete named Ugo Frigerio is ranked globally. Ugo Frigerio completed the marathon in 4:59:06 seconds. The Germany athlete named Karl Hahnel is ranked 4 globally. Karl Hahnel completed the marathon in 5:06:06 seconds. The Italy athlete named Ettore Rivolta is ranked 5 globally. Ettore Rivolta completed the marathon in 5:07:39 seconds. The Germany athlete named Paul Sievert is ranked 6 globally. Paul Sievert completed the marathon in 5:16:41 seconds. The France athlete named Henri Quintric is ranked 7 globally. Henri Quintric completed the marathon in 5:27:25 seconds. The United States athlete named Ernie Crosbie is ranked 8 globally. Ernie Crosbie completed the marathon in 5:28:02 seconds. The United States athlete named Bill Chisholm is ranked 9 globally. Bill Chisholm completed the marathon in 5:51:00 seconds. The Estonia athlete named Alfred Maasik is ranked 10 globally. Alfred Maasik completed the marathon in 6:19:00 seconds. The Canada athlete named Henry Cieman is ranked globally. Henry Cieman completed the marathon in seconds. The Greece athlete named John Moralis is ranked globally. John Moralis completed the marathon in seconds. The Italy athlete named Francesco Pretti is ranked globally. Francesco Pretti completed the marathon in seconds. 
The Switzerland athlete named Arthur Tell Schwab is ranked globally. Arthur Tell Schwab completed the marathon in seconds. The United States athlete named Harry Hinkel is ranked globally. Harry Hinkel completed the marathon in seconds."]}, "Prediction": "['9', 'Jan"} +{"Task": "task035_winogrande_question_modification_person", "Definition": ["You're given a fill-in-the-blank question where the answer is PersonX. You need to minimally change the given question so that the answer flips to PersonY. This task typically involves replacing one word i.e., the 'trigger word' with its antonym (e.g., changing from \"sympathetic\" to \"stern\"). You should not change any content in the given question beyond a word or two i.e. the trigger word/phrase. PersonX and PersonY should not be equally likely to fill the blank. For your question, PersonY should be a well-agreed answer to fill in the blank. Your generations should NOT contain potentially explicit, offensive, or adult content. Do not use the names of real people or generic names (e.g., Donald Trump, John Doe, etc.) in your question. Avoid repeating the same style or phrase in generating your modified question e.g. this task can be always solved using a simple negation i.e. by adding not, never, etc. Instead, try to increase the word diversity. Your question must contain at least 15 and at most 30 words. Your question must have at least 70% overlapping words with the given question. You must utilize the given context word while writing the question. Your question must contain only one blank. Make sure that PersonX and PersonY have the same gender. In your question, PersonX and PersonY should be used only ONCE and PersonX should appear earlier than PersonY. Although there are many correct answers, you only need to write one of them."], "Instance": {"id": "task035-bdf92efc1ea14b83a90afa360b98a1e4", "input": "Context Word: store. \nQuestion: PersonX went to the store for PersonY since _ is currently the healthy one at home. 
\nAnswer: PersonX", "output": ["PersonX went to the store for PersonY since _ is currently the sick one at home."]}, "Prediction": "PersonX went to the store for PersonY"} +{"Task": "task035_winogrande_question_modification_person", "Definition": ["You're given a fill-in-the-blank question where the answer is PersonX. You need to minimally change the given question so that the answer flips to PersonY. This task typically involves replacing one word i.e., the 'trigger word' with its antonym (e.g., changing from \"sympathetic\" to \"stern\"). You should not change any content in the given question beyond a word or two i.e. the trigger word/phrase. PersonX and PersonY should not be equally likely to fill the blank. For your question, PersonY should be a well-agreed answer to fill in the blank. Your generations should NOT contain potentially explicit, offensive, or adult content. Do not use the names of real people or generic names (e.g., Donald Trump, John Doe, etc.) in your question. Avoid repeating the same style or phrase in generating your modified question e.g. this task can be always solved using a simple negation i.e. by adding not, never, etc. Instead, try to increase the word diversity. Your question must contain at least 15 and at most 30 words. Your question must have at least 70% overlapping words with the given question. You must utilize the given context word while writing the question. Your question must contain only one blank. Make sure that PersonX and PersonY have the same gender. In your question, PersonX and PersonY should be used only ONCE and PersonX should appear earlier than PersonY. Although there are many correct answers, you only need to write one of them."], "Instance": {"id": "task035-3175e87d09724602b2479dd3cb088b41", "input": "Context Word: humility. \nQuestion: Although PersonX was overall a nicer person than PersonY, _ did not practice humility very regularlly. 
\nAnswer: PersonX", "output": ["Although PersonX was overall a nicer person than PersonY, _ often did practice humility very regularlly."]}, "Prediction": "Although PersonX was overall a nicer"} +{"Task": "task035_winogrande_question_modification_person", "Definition": ["You're given a fill-in-the-blank question where the answer is PersonX. You need to minimally change the given question so that the answer flips to PersonY. This task typically involves replacing one word i.e., the 'trigger word' with its antonym (e.g., changing from \"sympathetic\" to \"stern\"). You should not change any content in the given question beyond a word or two i.e. the trigger word/phrase. PersonX and PersonY should not be equally likely to fill the blank. For your question, PersonY should be a well-agreed answer to fill in the blank. Your generations should NOT contain potentially explicit, offensive, or adult content. Do not use the names of real people or generic names (e.g., Donald Trump, John Doe, etc.) in your question. Avoid repeating the same style or phrase in generating your modified question e.g. this task can be always solved using a simple negation i.e. by adding not, never, etc. Instead, try to increase the word diversity. Your question must contain at least 15 and at most 30 words. Your question must have at least 70% overlapping words with the given question. You must utilize the given context word while writing the question. Your question must contain only one blank. Make sure that PersonX and PersonY have the same gender. In your question, PersonX and PersonY should be used only ONCE and PersonX should appear earlier than PersonY. Although there are many correct answers, you only need to write one of them."], "Instance": {"id": "task035-d4c6176ecbcb419bbdabca6ff50cefe2", "input": "Context Word: docent. \nQuestion: PersonX was a better museum docent than PersonY because _ always read up on the new exhibits. 
\nAnswer: PersonX", "output": ["PersonX was a better museum docent than PersonY because _ never read up on the new exhibits."]}, "Prediction": "PersonX was a better museum docent"} +{"Task": "task035_winogrande_question_modification_person", "Definition": ["You're given a fill-in-the-blank question where the answer is PersonX. You need to minimally change the given question so that the answer flips to PersonY. This task typically involves replacing one word i.e., the 'trigger word' with its antonym (e.g., changing from \"sympathetic\" to \"stern\"). You should not change any content in the given question beyond a word or two i.e. the trigger word/phrase. PersonX and PersonY should not be equally likely to fill the blank. For your question, PersonY should be a well-agreed answer to fill in the blank. Your generations should NOT contain potentially explicit, offensive, or adult content. Do not use the names of real people or generic names (e.g., Donald Trump, John Doe, etc.) in your question. Avoid repeating the same style or phrase in generating your modified question e.g. this task can be always solved using a simple negation i.e. by adding not, never, etc. Instead, try to increase the word diversity. Your question must contain at least 15 and at most 30 words. Your question must have at least 70% overlapping words with the given question. You must utilize the given context word while writing the question. Your question must contain only one blank. Make sure that PersonX and PersonY have the same gender. In your question, PersonX and PersonY should be used only ONCE and PersonX should appear earlier than PersonY. Although there are many correct answers, you only need to write one of them."], "Instance": {"id": "task035-97e0d645cb5649c290fbdbe4a78b346b", "input": "Context Word: fabrics. \nQuestion: PersonX did not spend a lot of time choosing fabrics unlike PersonY because _ had little time to shop. 
\nAnswer: PersonX", "output": ["PersonX did not spend a lot of time choosing fabrics unlike PersonY because _ had plenty time to shop."]}, "Prediction": "PersonX did not spend a lot of"} +{"Task": "task035_winogrande_question_modification_person", "Definition": ["You're given a fill-in-the-blank question where the answer is PersonX. You need to minimally change the given question so that the answer flips to PersonY. This task typically involves replacing one word i.e., the 'trigger word' with its antonym (e.g., changing from \"sympathetic\" to \"stern\"). You should not change any content in the given question beyond a word or two i.e. the trigger word/phrase. PersonX and PersonY should not be equally likely to fill the blank. For your question, PersonY should be a well-agreed answer to fill in the blank. Your generations should NOT contain potentially explicit, offensive, or adult content. Do not use the names of real people or generic names (e.g., Donald Trump, John Doe, etc.) in your question. Avoid repeating the same style or phrase in generating your modified question e.g. this task can be always solved using a simple negation i.e. by adding not, never, etc. Instead, try to increase the word diversity. Your question must contain at least 15 and at most 30 words. Your question must have at least 70% overlapping words with the given question. You must utilize the given context word while writing the question. Your question must contain only one blank. Make sure that PersonX and PersonY have the same gender. In your question, PersonX and PersonY should be used only ONCE and PersonX should appear earlier than PersonY. Although there are many correct answers, you only need to write one of them."], "Instance": {"id": "task035-56d5e301837d4bdb8fd6729b5a20e68f", "input": "Context Word: designer. \nQuestion: PersonX had a much better style than PersonY , since _ had better taste in designer clothes. 
\nAnswer: PersonX", "output": ["PersonX had a much better style than PersonY , since _ had no taste in designer clothes."]}, "Prediction": "PersonX had a much better style than"} +{"Task": "task569_recipe_nlg_text_generation", "Definition": ["In this task, you have to generate the title of the recipe given its required ingredients and directions."], "Instance": {"id": "task569-d0e3f5767b0a48f4a4846423dec0cfce", "input": "ingredients: '2 eggs', '1/4 cup canola oil', '1 cup packed brown sugar', '1/2 cup applesauce', '1 1/2 cups milk (whichever kind you like)', '2 teaspoons vanilla extract', '1/2 teaspoon salt', '1 tablespoon ground cinnamon', '3 cups old fashioned oats', '2 teaspoons baking powder', '1 dash Anjou pear, favorite toppings (fruit, nuts, chocolate chips, etc)', directions: 'Preheat oven to 350. Line muffin tin with paper or paper/foil muffin liners. (Don't skip the muffin liners for this one!).', 'In large bowl, whisk eggs, oil, and brown sugar until sugar is dissolved. Add the applesauce, milk, vanilla, salt, and cinnamon. Whisk until well combined. Stir in the oats and baking powder.', 'Fill muffin tin with 1/4 cup of oat mixture in each muffin cup. Add favorite toppings. Push toppings down into the oat mixture with a spoon.', 'Bake for 30 minutes Let cool for 5 mins before eating. 
Store in an airtight container or freeze in a zip top storage bag.'", "output": ["To Go Baked Oatmeal"]}, "Prediction": "1/2 cup brown sugar, 1/2 cup applesau"} +{"Task": "task569_recipe_nlg_text_generation", "Definition": ["In this task, you have to generate the title of the recipe given its required ingredients and directions."], "Instance": {"id": "task569-f6005366b41d4a1baaf80784c57be7ad", "input": "ingredients: '1 (14 ounce) can sweetened condensed milk, divided', '1 ounce unsweetened chocolate square', '1/2 teaspoon almond extract, divided', '1 already baked pie crust (9inch)', '1 (10 ounce) jar maraschino cherries, drained', '8 ounces cream cheese, softened', '1 cup cold water', '1 (3 1/2 ounce) box instant vanilla pudding', '1 cup whipping cream, whipped', '1/2 cup toasted almond, chopped', directions: 'Over low heat, stir 1/2 cup milk and chocolate until smooth.', 'Stir in 1/4 t extract.', 'Pour into crust.', 'Save 8 of the cherries for garnish.', 'Chop the rest of the cherries.', 'In bowl, beat cream cheese until fluffy.', 'Slowly add water and remaining milk.', 'Add the pudding mix and the remaining extract.', 'Mix well.', 'Fold in the whipped cream.', 'Stir in the chopped cherries, and almonds.', 'Pour over the chocolate in pie crust.', 'Chill.', 'Garnish with whole cherries.'", "output": ["Cherry Almond Mousse Pie"]}, "Prediction": "Sweetened Condensed Milk, Chocolate Square"} +{"Task": "task569_recipe_nlg_text_generation", "Definition": ["In this task, you have to generate the title of the recipe given its required ingredients and directions."], "Instance": {"id": "task569-9b4f328ed4d843488022db3adcd4dc4f", "input": "ingredients: '1 can white corn, drained and rinsed', '2 cans black beans, drained and rinsed', '1 can diced tomato, drained and rinsed', '12 red onion, chopped', '1 (4 ounce) can green chilies, chopped', '34 cup white vinegar', '2 tablespoons olive oil', '8 cloves garlic, pressed', 'Tabasco sauce or hot sauce, to taste', directions: 'Mix 
all ingredients together in large bowl.', 'Cover and refrigerate overnight.'", "output": ["Garlic Black Bean & White Corn Salsa"]}, "Prediction": "Mixed Corn, Black Beans, Dice"} +{"Task": "task569_recipe_nlg_text_generation", "Definition": ["In this task, you have to generate the title of the recipe given its required ingredients and directions."], "Instance": {"id": "task569-aa383b537c99491aa1abce7957b52e6a", "input": "ingredients: 'Cake', '2 cups flour', '1/2 teaspoon baking powder', '3/4 teaspoon baking soda', '1/2 teaspoon salt', '1/2 cup butter, softened', '1 1/2 cups sugar', '2 eggs, unbeaten', '1 teaspoon vanilla extract', '1/4 cup buttermilk or 1/4 cup sour milk', '1 cup mashed banana', 'Frosting', '1 lb powdered sugar (10X)', '1/2 cup cocoa', '1/4 teaspoon salt', '1/2 cup butter, softened', '3 -4 tablespoons milk', '1 teaspoon vanilla extract', directions: 'Cream together butter, sugar, eggs and vanilla on high speed of electric mixer for 1 1/2 minutes.', 'Add buttermilk.', 'Mix together flour, baking powder, baking soda and salt.', 'Add flour mixture alternately with mashed banana.', 'Beat only enough to blend, about 2 minutes.', 'Pour battered into 2 greased and floured 9 inch round cake pans and bake in preheated 350 degree (Fahrenheit) oven for 30 to 35 minutes.', 'Cool cake then prepare frosting.', 'Mix together powdered sugar, cocoa and salt.', 'Add gradually to butter, beating with electric mixer, adding just enough milk so that mixture is not stiff (If you add too much it will get too thin when you continue beating).', 'After all ingredients are blended, mix on high speed of your mixer for 3 to 4 minutes, carefully adding small amounts of milk if too thick.', 'Fill and frost cooled layers.'", "output": ["Banana Cake With Chocolate Frosting"]}, "Prediction": "1/2 cup cocoa, 1/2 cup butter,"} +{"Task": "task569_recipe_nlg_text_generation", "Definition": ["In this task, you have to generate the title of the recipe given its required ingredients and 
directions."], "Instance": {"id": "task569-01eebcaa1d444fb983bde762ae5243f0", "input": "ingredients: '2 sticks margarine', '1/2 c. sugar', '1 (3 oz.) pkg. cream cheese', '2 c. flour', '1/2 tsp. salt', directions: 'Combine all ingredients, mixing well.', 'Make into balls the size of walnuts.', 'Flatten with fork a little and bake in 350\u00b0 oven about 10 to 12 minutes.'", "output": ["Cream Cheese Cookies"]}, "Prediction": "Cream Cheese Balls"} +{"Task": "task391_causal_relationship", "Definition": ["In this task, you will be given two sentences separated by \", so\". You should decide whether the first sentence can be the cause of the second sentence. If you can see a possible causation, answer with \"plausible\", otherwise answer with \"not plausible\"."], "Instance": {"id": "task391-1bf1e0c1b76b479f870e2c2c81bc78d0", "input": "I stood on one foot, so I could see her again", "output": ["not plausible"]}, "Prediction": "not plausible"} +{"Task": "task391_causal_relationship", "Definition": ["In this task, you will be given two sentences separated by \", so\". You should decide whether the first sentence can be the cause of the second sentence. If you can see a possible causation, answer with \"plausible\", otherwise answer with \"not plausible\"."], "Instance": {"id": "task391-a7fb35e6ca744b10939e4233db113a67", "input": "The photographer got the child to smile, so it was a good photo", "output": ["plausible"]}, "Prediction": "not plausible"} +{"Task": "task391_causal_relationship", "Definition": ["In this task, you will be given two sentences separated by \", so\". You should decide whether the first sentence can be the cause of the second sentence. 
If you can see a possible causation, answer with \"plausible\", otherwise answer with \"not plausible\"."], "Instance": {"id": "task391-58af70ef3466450a9c2abbc75c8bf67d", "input": "My favorite song came on the radio, so this is right up my alley", "output": ["plausible"]}, "Prediction": "not plausible"} +{"Task": "task391_causal_relationship", "Definition": ["In this task, you will be given two sentences separated by \", so\". You should decide whether the first sentence can be the cause of the second sentence. If you can see a possible causation, answer with \"plausible\", otherwise answer with \"not plausible\"."], "Instance": {"id": "task391-dcce2417d0c04e7bb84f5950a5945804", "input": "The woman dangled the biscuit above the dog, so she could see the baby", "output": ["not plausible"]}, "Prediction": "not plausible"} +{"Task": "task391_causal_relationship", "Definition": ["In this task, you will be given two sentences separated by \", so\". You should decide whether the first sentence can be the cause of the second sentence. If you can see a possible causation, answer with \"plausible\", otherwise answer with \"not plausible\"."], "Instance": {"id": "task391-3233b8d93a9943b486a19864b6367679", "input": "The physician misdiagnosed the patient, so the surgery had to be cancelled", "output": ["plausible"]}, "Prediction": "not plausible"} +{"Task": "task891_gap_coreference_resolution", "Definition": ["Read the passage and find the corresponding person for the given pronoun. The word between ** ** is the target pronoun."], "Instance": {"id": "task891-7495deda872843b5b2e5d893816ede21", "input": "Andy Spade is the co-founder of Partners & Spade, along with Anthony Sperduti. **He** started his career in advertising working on brands such as Coca-Cola, Lexus, and Paul Stuart.", "output": ["Andy Spade"]}, "Prediction": "Andy Spade"} +{"Task": "task891_gap_coreference_resolution", "Definition": ["Read the passage and find the corresponding person for the given pronoun. 
The word between ** ** is the target pronoun."], "Instance": {"id": "task891-e944f07613d046508ff4ecfef3666ebd", "input": "Michael Moorcock, E. C. Tubb, Brian Stableford, Barrington Bayley, Michael Coney -- a new American audience. He published translations of international sf as well as anthologies of translated stories, Best From the Rest of the World. With the help of Arthur W. Saha, Wollheim also edited and published the popular ``Annual World's Best Science Fiction'' anthology from 1971 until **his** death in 1990.", "output": ["Wollheim"]}, "Prediction": "Arthur W. Saha"} +{"Task": "task891_gap_coreference_resolution", "Definition": ["Read the passage and find the corresponding person for the given pronoun. The word between ** ** is the target pronoun."], "Instance": {"id": "task891-c2979e7543794298944bf09d0e7f6f14", "input": "Karl Philipp F*rst zu Schwarzenberg (or Charles Philip, Prince of Schwarzenberg (April 18, 1771 -- October 15, 1820) was an Austrian field marshal. Karl Philipp was born in Vienna as the son of Johann Nepomuk Anton of Schwarzenberg and Marie Eleonore Countess of *ttingen-Wallerstein. **He** entered the imperial cavalry in 1788, fought in 1789 under Lacy and Loudon against the Turks, distinguished himself by his bravery, and became major in 1792.", "output": ["Schwarzenberg"]}, "Prediction": "Karl Philipp F*rst zu Schwarz"} +{"Task": "task891_gap_coreference_resolution", "Definition": ["Read the passage and find the corresponding person for the given pronoun. The word between ** ** is the target pronoun."], "Instance": {"id": "task891-8f2da83fe5ca40fc8174cb1d3a8e4c5d", "input": "Father Joseph savors the power which he possesses on the fate of Cinq-Mars (``Tu t'en vas''). He confronts Marie with the announcement of the execution of Cinq-Mars, for betraying the country by dealing independently with a foreign power. 
Further, **he** tells her, the Polish ambassador will return soon from a hunt with the King, and Father Joseph advises Marie to answer him favorably; in exchange, Cinq-Mars will be spared.", "output": ["Father Joseph"]}, "Prediction": "Father Joseph"} +{"Task": "task891_gap_coreference_resolution", "Definition": ["Read the passage and find the corresponding person for the given pronoun. The word between ** ** is the target pronoun."], "Instance": {"id": "task891-e23f14d28a3545f091127df6fc56a548", "input": "The writers and dandies Charles Baudelaire and Jules Barbey d'Aurevilly also had some influence, but the most important model was the notorious aristocratic aesthete Robert de Montesquiou, who was also the basis for Baron de Charlus in Marcel Proust's * la recherche du temps perdu. Montesquiou's furnishings bear a strong resemblance to those in Des Esseintes's house: In 1883, to **his** eternal regret, Montesquiou admitted St*phane Mallarm* (to his home).", "output": ["Montesquiou"]}, "Prediction": "Robert de Montesquiou"} +{"Task": "task1586_scifact_title_generation", "Definition": ["In this task, you are given an abstract of article. Your task is to generate title for this article. Preferred titles are under thirty words."], "Instance": {"id": "task1586-43bb07b6cb074c3aa5b92272b48299e5", "input": "Podocytes are critical in the maintenance of a healthy glomerular filter; however, they have been difficult to study in the intact kidney because of technical limitations. Here we report the development of serial multiphoton microscopy (MPM) of the same glomeruli over several days to visualize the motility of podocytes and parietal epithelial cells (PECs) in vivo. In podocin-GFP mice, podocytes formed sporadic multicellular clusters after unilateral ureteral ligation and migrated into the parietal Bowman's capsule. 
The tracking of single cells in podocin-confetti mice featuring cell-specific expression of CFP, GFP, YFP or RFP revealed the simultaneous migration of multiple podocytes. In phosphoenolpyruvate carboxykinase (PEPCK)-GFP mice, serial MPM found PEC-to-podocyte migration and nanotubule connections. Our data support a highly dynamic rather than a static nature of the glomerular environment and cellular composition. Future application of this new approach should advance our understanding of the mechanisms of glomerular injury and regeneration.", "output": ["Tracking the fate of glomerular epithelial cells in vivo using serial multiphoton imaging in novel mouse models with fluorescent lineage tags"]}, "Prediction": "Podocytes form sporadic"} +{"Task": "task1586_scifact_title_generation", "Definition": ["In this task, you are given an abstract of article. Your task is to generate title for this article. Preferred titles are under thirty words."], "Instance": {"id": "task1586-3c1d94a543424bf3ae71209d3bb52ba8", "input": "T cell activation is predicated on the interaction between the T cell receptor and peptide-major histocompatibility (pMHC) ligands. The factors that determine the stimulatory potency of a pMHC molecule remain unclear. We describe results showing that a peptide exhibiting many hallmarks of a weak agonist stimulates T cells to proliferate more than the wild-type agonist ligand. An in silico approach suggested that the inability to form the central supramolecular activation cluster (cSMAC) could underlie the increased proliferation. This conclusion was supported by experiments that showed that enhancing cSMAC formation reduced stimulatory capacity of the weak peptide. 
Our studies highlight the fact that a complex interplay of factors determines the quality of a T cell antigen.", "output": ["The stimulatory potency of T cell antigens is influenced by the formation of the immunological synapse."]}, "Prediction": "A weak pMHC ligand"} +{"Task": "task1586_scifact_title_generation", "Definition": ["In this task, you are given an abstract of article. Your task is to generate title for this article. Preferred titles are under thirty words."], "Instance": {"id": "task1586-725280cc7de74989a630cb88c5a15973", "input": "BACKGROUND Under the Revised National Tuberculosis Control Programme of India, patients with new smear-positive pulmonary tuberculosis are treated with a thrice-weekly regimen of antitubercular drugs (2H(3)R(3)Z(3)E(3)/4H(3)R(3) [H isoniazid, R rifampicin, Z pyrazinamide and E ethambutol]) for 6 months. We conducted a retrospective analysis of the efficacy andtolerability of this regimen under clinical trial conditions in HIV-negative patients with newly diagnosed smear-positive pulmonary tuberculosis. \n METHODS We retrospectively analysed the data on patients assigned to the control regimen (2H (3)R(3)Z(3)E(3)/4H(3)R(3)) in two clinical trials during 2001-06 at the National Institute for Research in Tuberculosis, Chennai, India. \n RESULTS Of the 268 patients treated with this regimen, data for efficacy analysis were available for 249. At the end of treatment, of 249 patients, 238 (96%) had a favourable status. Treatment failure occurred in the remaining 11: 7 in whom the organisms were initially drug-susceptible and 4 with initial drug resistance. Of the 238 patients who had a favourable status at the end of treatment, 14 (6%) had recurrence of tuberculosis during the following 24 months. In the intention-to-treat analysis, 245 (94%) of 262 patients had a favourable status at the end of treatment. Of the 28 patients with initial drug resistance, 24 (86%) had a favourable outcome. 
Only 4 of these 24 patients were found to have recurrence of tuberculosis in 2 years of follow-up. Among the 221 patients initially infected with drug-susceptible organisms, drug resistance did not develop in any of the 7 patients in whom the treatment failed or the 10 who had recurrence of tuberculosis. Further, 5 of the 7 patients in whom the treatment failed continued to excrete drug-susceptible bacilli at 6 months. Adverse drug reactions were observed in 38 (14%) of the 262 patients. Only 3 (1.1%) needed a modification in the treatment. \n CONCLUSION This thrice-weekly 6-month regimen of antitubercular drugs, when administered under full supervision, is associated with a high rate of favourable treatment outcomes in HIV-negative patients with newly diagnosed sputum smearpositive pulmonary tuberculosis. There are few adverse drug reactions in these patients.", "output": ["Efficacy of the 6-month thrice-weekly regimen in the treatment of new sputum smear-positive pulmonary tuberculosis under clinical trial conditions."]}, "Prediction": "Effective and safe thrice-weekly"} +{"Task": "task1586_scifact_title_generation", "Definition": ["In this task, you are given an abstract of article. Your task is to generate title for this article. Preferred titles are under thirty words."], "Instance": {"id": "task1586-609d473000e241219f7b85e9937e5a11", "input": "Regulatory T (T reg) cells are critical regulators of immune tolerance. Most T reg cells are defined based on expression of CD4, CD25, and the transcription factor, FoxP3. However, these markers have proven problematic for uniquely defining this specialized T cell subset in humans. We found that the IL-7 receptor (CD127) is down-regulated on a subset of CD4+ T cells in peripheral blood. We demonstrate that the majority of these cells are FoxP3+, including those that express low levels or no CD25. 
A combination of CD4, CD25, and CD127 resulted in a highly purified population of T reg cells accounting for significantly more cells that previously identified based on other cell surface markers. These cells were highly suppressive in functional suppressor assays. In fact, cells separated based solely on CD4 and CD127 expression were anergic and, although representing at least three times the number of cells (including both CD25+CD4+ and CD25\u2212CD4+ T cell subsets), were as suppressive as the \u201cclassic\u201d CD4+CD25hi T reg cell subset. Finally, we show that CD127 can be used to quantitate T reg cell subsets in individuals with type 1 diabetes supporting the use of CD127 as a biomarker for human T reg cells.", "output": ["CD127 expression inversely correlates with FoxP3 and suppressive function of human CD4+ T reg cells"]}, "Prediction": "A novel marker of regulatory T cells in peripheral"} +{"Task": "task1586_scifact_title_generation", "Definition": ["In this task, you are given an abstract of article. Your task is to generate title for this article. Preferred titles are under thirty words."], "Instance": {"id": "task1586-ebfa69505c04448eaad7c793fb917a06", "input": "Bone tissue undergoes constant turnover supported by stem cells. Recent studies showed that perivascular mesenchymal stem cells (MSCs) contribute to the turnover of long bones. Craniofacial bones are flat bones derived from a different embryonic origin than the long bones. The identity and regulating niche for craniofacial-bone MSCs remain unknown. Here, we identify Gli1+ cells within the suture mesenchyme as the main MSC population for craniofacial bones. They are not associated with vasculature, give rise to all craniofacial bones in the adult and are activated during injury repair. Gli1+ cells are typical MSCs in vitro. Ablation of Gli1+ cells leads to craniosynostosis and arrest of skull growth, indicating that these cells are an indispensable stem cell population. 
Twist1(+/-) mice with craniosynostosis show reduced Gli1+ MSCs in sutures, suggesting that craniosynostosis may result from diminished suture stem cells. Our study indicates that craniofacial sutures provide a unique niche for MSCs for craniofacial bone homeostasis and repair.", "output": ["The suture provides a niche for mesenchymal stem cells of craniofacial bones"]}, "Prediction": "A unique niche for mesenchymal"} +{"Task": "task602_wikitext-103_answer_generation", "Definition": ["Given a text passage, you need to generate a suitable title as the output. The output title should be one of the words/phrases used in the passage and must be no longer than five words. "], "Instance": {"id": "task602-3696e1d018994616833f653ccb649131", "input": "Paragraph: St. Cecily 's Garden ; 1920 \n Cradle roll design ; St. Edmund 's , Pitlake , 1922 \n Banner design ; St. Mary 's , Sanderstead , 1923 \n The Feeding of the Five Thousand ; reredos triptych , chapel at Penarth , Wales ; 1929 \n The Parable of the Great Supper ; triptych , St. George 's chapel , Waddon \n The Seven Sacraments ; baptismal font panels , St. Andrew 's , Croydon \n St. John the Baptist ; central banner panel , Abesford church , 1943 \n Lettering , sword , and shield ; mount for a list of men and woman serving in the Forces , St. Andrews , Croydon , 1943 \n Baptismal rolls ; St. Andrews , Croydon , 1948 , 1962 \n The font in St Andrew 's Church , South Croydon \n Out of Great Tribulation ; memorial chapel , Norbury Medthodist church , 1948 \n I Am Among You As He That Serveth ; stained glass window design , St. Edmund 's , Pitlake , 1962 \n Question: what is the suitable title of the passage ?", "output": [" Religious works"]}, "Prediction": "St. Edmund's, Pit"} +{"Task": "task602_wikitext-103_answer_generation", "Definition": ["Given a text passage, you need to generate a suitable title as the output. 
The output title should be one of the words/phrases used in the passage and must be no longer than five words. "], "Instance": {"id": "task602-618c7b60e3014cbc907dc3d3e082fa71", "input": "Paragraph: N @-@ 88 starts at the Nebraska \u2013 Wyoming state line in Banner County , where WYO 151 ends , and travels northeast . The road quickly bends east after less than one mile ( 1 @.@ 6 km ) , and continues in a straight line . For the next twenty miles ( 32 km ) , N @-@ 88 intersects minor streets , through rural farmland . The route turns south at N @-@ 71 , and becomes concurrent . Four miles ( 6 @.@ 4 km ) later , N @-@ 88 turns east , ending the concurrency with N @-@ 71 . The route continues to travel through farmland for sixteen miles ( 26 km ) , where it enters Morrill County . The road crosses over Pumpkin Creek four times , and enters the unincorporated community of Redington . Two rock formations , Courthouse and Jail Rocks , become visible from the road . N @-@ 88 turns north toward Bridgeport soon after . The road crosses over Pumpkin Creek for the fifth time , and enters into Bridgeport five miles ( 8 @.@ 0 km ) later . The road intersects a railroad owned by BNSF Railway . N @-@ 88 turns northeast soon after , and ends at the intersection of US 385 and N @-@ 92 . In 2012 , Nebraska Department of Roads ( NDOR ) calculated as many as 2 @,@ 410 vehicles traveling on the N @-@ 71 / N @-@ 88 concurrency , and as few as 315 vehicles traveling east of the Banner \u2013 Morrill county line . This is expressed in terms of annual average daily traffic ( AADT ) , a measure of traffic volume for any average day of the year . Only the N @-@ 71 / N @-@ 88 concurrency is part of the National Highway System ( NHS ) , a network of highways identified as being most important for the economy , mobility and defense of the nation . 
Question: what is the suitable title of the passage ?", "output": [" Route description"]}, "Prediction": "N @-@ 88 travels through"} +{"Task": "task602_wikitext-103_answer_generation", "Definition": ["Given a text passage, you need to generate a suitable title as the output. The output title should be one of the words/phrases used in the passage and must be no longer than five words. "], "Instance": {"id": "task602-41e937c617eb41e18c96f5906b6dc39b", "input": "Paragraph: Barker took correspondence courses in art , probably until about 1919 . In 1908 at 13 years , she entered an evening class at the Croydon School of Art , and attended the school into the 1940s . In time , she received a teaching position . \n In 1911 , Raphael Tuck & Sons bought four of Barker 's \" little drawings \" for half a sovereign , and published them as postcards . In October 1911 , she won second prize in the Croydon Art Society 's poster competition , and shortly afterward was elected the youngest member of the Society . The art critic for the Croydon Advertiser remarked , \" Her drawings show a remarkable freedom of spirit . She has distinct promise . \" \n Following her father \u2019 s death in June 1912 , the seventeen @-@ year @-@ old Barker submitted art and poetry to My Magazine , Child \u2019 s Own , Leading Strings , and Raphael Tuck annuals in an effort to support both her mother and sister . Her sister Dorothy taught kindergarten in two private schools before opening a kindergarten at home . She brought in some money for the family 's support while supervising the household . Question: what is the suitable title of the passage ?", "output": [" Art education and first professional work"]}, "Prediction": "\" A promising young artist \""} +{"Task": "task602_wikitext-103_answer_generation", "Definition": ["Given a text passage, you need to generate a suitable title as the output. 
The output title should be one of the words/phrases used in the passage and must be no longer than five words. "], "Instance": {"id": "task602-b99bb73c0eda46d38d9e93fc8f430928", "input": "Paragraph: The accompanying music video begins with a shot of an empty street , followed by clips of disadvantaged and poorer members of society going about their daily activities . Two men play dominoes on a wooden crate outside a building , a gang make fun of an elderly man hanging newspapers outside his store and an obese woman walks down the street . Clips of Carey leaning against a wall and sitting on some steps looking on at what is happening are shown . As the first chorus begins , everyone starts to dance joyfully in the street and help those in need . A gospel choir comes out of one of the buildings as the street becomes more crowded with people of all ages and backgrounds rejoicing and getting along with each other . One of the shops in the background has a neon light outside the entrance which says \" Jesus Saves \" . Question: what is the suitable title of the passage ?", "output": [" Music video"]}, "Prediction": "Jesus Saves"} +{"Task": "task602_wikitext-103_answer_generation", "Definition": ["Given a text passage, you need to generate a suitable title as the output. The output title should be one of the words/phrases used in the passage and must be no longer than five words. "], "Instance": {"id": "task602-98ba4414cd744513ba81498b2271836a", "input": "Paragraph: On its day of release in Japan , Valkyria Chronicles III topped both platform @-@ exclusive and multi @-@ platform sales charts . By early February , the game sold 102 @,@ 779 units , coming in second overall to The Last Story for the Wii . By the end of the year , the game had sold just over 152 @,@ 500 units . \n Famitsu enjoyed the story , and were particularly pleased with the improvements to gameplay . 
Japanese gaming site Game Watch Impress , despite negatively noting its pacing and elements recycled from previous games , was generally positive about its story and characters , and found its gameplay entertaining despite off @-@ putting difficulty spikes . 4Gamer.net writer Naohiko Misuosame , in a \" Play Test \" article based on the game 's PSN demo , felt that Valkyria Chronicles III provided a \" profound feeling of closure \" for the Valkyria Chronicles series . He praised its gameplay despite annoying limitations to aspects such as special abilities , and positively noted its shift in story to a tone similar to the first game . \n PlayStation Official Magazine - UK praised the story 's blurring of Gallia 's moral standing , art style , and most points about its gameplay , positively noting the latter for both its continued quality and the tweaks to balance and content . Its one major criticism were multiple difficulty spikes , something that had affected the previous games . Heath Hindman of gaming website PlayStation Lifestyle praised the addition of non @-@ linear elements and improvements or removal of mechanics from Valkyria Chronicles II in addition to praising the returning gameplay style of previous games . He also positively noted the story 's serious tone . Points criticized in the review were recycled elements , awkward cutscenes that seemed to include all characters in a scene for no good reason , pacing issues , and occasional problems with the game 's AI . \n In a preview of the TGS demo , Ryan Geddes of IGN was left excited as to where the game would go after completing the demo , along with enjoying the improved visuals over Valkyria Chronicles II . Kotaku 's Richard Eisenbeis was highly positive about the game , citing is story as a return to form after Valkyria Chronicles II and its gameplay being the best in the series . 
His main criticisms were its length and gameplay repetition , along with expressing regret that it would not be localized . Question: what is the suitable title of the passage ?", "output": [" Reception"]}, "Prediction": "Valkyria Chronicles III "} +{"Task": "task1195_disflqa_disfluent_to_fluent_conversion", "Definition": ["Given a disfluent sentence, modify the sentence to it to its equivalent fluent form, preserving the meaning of the sentence."], "Instance": {"id": "task1195-b4afdf0e695c436fa403fe2a09c1fa03", "input": "In what other way can disobedience be applied to international unions no rather organizations and governments?", "output": ["In what other way can disobedience be applied to international organizations and governments?"]}, "Prediction": "In what other way can disobedi"} +{"Task": "task1195_disflqa_disfluent_to_fluent_conversion", "Definition": ["Given a disfluent sentence, modify the sentence to it to its equivalent fluent form, preserving the meaning of the sentence."], "Instance": {"id": "task1195-6c125a6fca1e4c718bc98a41e30724f2", "input": "What year no how many branches does the Zuider Zee brackish lagoon have?", "output": ["How many branches does the Zuider Zee brackish lagoon have?"]}, "Prediction": "How many branches does the Zuider Ze"} +{"Task": "task1195_disflqa_disfluent_to_fluent_conversion", "Definition": ["Given a disfluent sentence, modify the sentence to it to its equivalent fluent form, preserving the meaning of the sentence."], "Instance": {"id": "task1195-2bddda2556dc42b0a915b200d70737dc", "input": "When was the first class taught at the University of Texas no sorry Chicago?", "output": ["What year was the first class taught at the University of Chicago?"]}, "Prediction": "When was the first class taught at the University"} +{"Task": "task1195_disflqa_disfluent_to_fluent_conversion", "Definition": ["Given a disfluent sentence, modify the sentence to it to its equivalent fluent form, preserving the meaning of the sentence."], 
"Instance": {"id": "task1195-28245f4d682a4693b85e0b4c39ccd99c", "input": "What do x.25 and Frame Relay both provide no sorry what do they require?", "output": ["WHat do x.25 and Frame Relay both require"]}, "Prediction": "What do x.25 and Frame Re"} +{"Task": "task1195_disflqa_disfluent_to_fluent_conversion", "Definition": ["Given a disfluent sentence, modify the sentence to it to its equivalent fluent form, preserving the meaning of the sentence."], "Instance": {"id": "task1195-8c8c5b45bb664cc1b7e29a42fec072f2", "input": "Who had Toghun Temur oops Toghtogha tried to defeat?", "output": ["Who had Toghtogha tried to defeat?"]}, "Prediction": "Who had Toghun Temur "} +{"Task": "task1409_dart_text_generation", "Definition": ["In this task, you are given triplets. Each triplet is in the form of [subject, predicate, object]. Your task is to generate proper sentence that utilizes these triples. The objective is to construct a sentence that (a) captures the facts specified in the triples and (b) is a well-formed sentence easily understandable by a human. All triple values need not be used directly in the sentence as long as the facts are adequately captured."], "Instance": {"id": "task1409-1da5f3967a9748138e77d13a925615b7", "input": "[['Chicago Road Informational Designation', 'LOCATION', 'US-12 and Prairie River Road (Park 1.5 miles west of Bronson)'], ['US-12 and Prairie River Road (Park 1.5 miles west of Bronson)', 'CITY', 'Bronson Township']]", "output": ["The Chicago Road Informational Designation is located in the Bronson Township."]}, "Prediction": "US-12 and prairie river road (park 1.5"} +{"Task": "task1409_dart_text_generation", "Definition": ["In this task, you are given triplets. Each triplet is in the form of [subject, predicate, object]. Your task is to generate proper sentence that utilizes these triples. 
The objective is to construct a sentence that (a) captures the facts specified in the triples and (b) is a well-formed sentence easily understandable by a human. All triple values need not be used directly in the sentence as long as the facts are adequately captured."], "Instance": {"id": "task1409-024f908c946243b4a7a1a25a457e3885", "input": "[['nbsp', 'STANDARD', 'HTML 3.2'], ['[TABLECONTEXT]', '[TITLE]', 'List of XML and HTML character entity references'], ['[TABLECONTEXT]', 'NAME', 'nbsp']]", "output": ["The Standard reference for nbsp is HTML 3.2"]}, "Prediction": "nbsp is a standard"} +{"Task": "task1409_dart_text_generation", "Definition": ["In this task, you are given triplets. Each triplet is in the form of [subject, predicate, object]. Your task is to generate proper sentence that utilizes these triples. The objective is to construct a sentence that (a) captures the facts specified in the triples and (b) is a well-formed sentence easily understandable by a human. All triple values need not be used directly in the sentence as long as the facts are adequately captured."], "Instance": {"id": "task1409-00139a31bb004b2cae11cce11d704ad6", "input": "[['The Cambridge Blue', 'eatType', 'restaurant'], ['The Cambridge Blue', 'customer rating', 'average']]", "output": ["The Cambridge Blue is a three star restaurant located in the middle of nowhere", "The Cambridge Blue is a three star rated restaurant that sells sushi.", "There is a three star rated restaurant called The Cambridge Blue that sells sushi.", "The Cambridge Blue is a three star restaurant located in the middle of nowhere", "The average customer rated restaurant is The Cambridge Blue.", "The Cambridge Blue restaurant has been rated 3 out of 5 stars. It offers wine, spirits, and appetizers.", "The Cambridge Blue is a 3 star restaurant that offers a selection of wine, spirits and appetizers.", "The Cambridge Blue restaurant, food and drink. 
Rated three stars."]}, "Prediction": "The customer rating for the cambridge blue is"} +{"Task": "task1409_dart_text_generation", "Definition": ["In this task, you are given triplets. Each triplet is in the form of [subject, predicate, object]. Your task is to generate proper sentence that utilizes these triples. The objective is to construct a sentence that (a) captures the facts specified in the triples and (b) is a well-formed sentence easily understandable by a human. All triple values need not be used directly in the sentence as long as the facts are adequately captured."], "Instance": {"id": "task1409-aa6602e6c34d44988e6a7158edc7701a", "input": "[['The Wrestlers', 'food', 'English'], ['The Wrestlers', 'customer rating', 'high'], ['The Wrestlers', 'familyFriendly', 'yes']]", "output": ["The Wrestlers provides English food for \u00a320-\u00a325. It is child friendly and has high customer rating."]}, "Prediction": "The wrestlers have english food. Their"} +{"Task": "task1409_dart_text_generation", "Definition": ["In this task, you are given triplets. Each triplet is in the form of [subject, predicate, object]. Your task is to generate proper sentence that utilizes these triples. The objective is to construct a sentence that (a) captures the facts specified in the triples and (b) is a well-formed sentence easily understandable by a human. 
All triple values need not be used directly in the sentence as long as the facts are adequately captured."], "Instance": {"id": "task1409-caf71d305d58409fb733eb474cf77a92", "input": "[['[TABLECONTEXT]', '[TITLE]', '1996-97 FA Cup'], ['[TABLECONTEXT]', 'TIE_NO', '1'], ['1', 'AWAY_TEAM', 'Nottingham Forest'], ['1', 'SCORE', '1-0'], ['1', 'HOME_TEAM', 'Chesterfield']]", "output": ["In 1996-97 FA Cup, Chesterfield at home played against Nottingham Forest with score 1-0"]}, "Prediction": "The 1996-97 fa cup was"} +{"Task": "task033_winogrande_answer_generation", "Definition": ["You need to answer a given question containing a blank (_). Your answer must be one of the two objects mentioned in the question, for example \"trophy\" and \"suitcase\". Your answer must not contain a word that is not present in the question. Please don't use articles (e.g., the, a) before the answer."], "Instance": {"id": "task033-15333cb838e7455e826f518cef1c373a", "input": "Jen tried using medication and then be pollen to help her allergies, and she found that the _ was more natural.", "output": ["pollen"]}, "Prediction": "be pollen"} +{"Task": "task033_winogrande_answer_generation", "Definition": ["You need to answer a given question containing a blank (_). Your answer must be one of the two objects mentioned in the question, for example \"trophy\" and \"suitcase\". Your answer must not contain a word that is not present in the question. Please don't use articles (e.g., the, a) before the answer."], "Instance": {"id": "task033-9c70e4c9b96e4bb4b7069cb16ba173a8", "input": "The local cafe was a much better place for making new friendships than the library was, as the _ was accommodating to social activity.", "output": ["cafe"]}, "Prediction": "cafe"} +{"Task": "task033_winogrande_answer_generation", "Definition": ["You need to answer a given question containing a blank (_). Your answer must be one of the two objects mentioned in the question, for example \"trophy\" and \"suitcase\". 
Your answer must not contain a word that is not present in the question. Please don't use articles (e.g., the, a) before the answer."], "Instance": {"id": "task033-85f06bb7d71c42e2ab92bc42cd4958ac", "input": "Jen started lifting weights and drinking protein shakes to gain muscle, but she noticed the _ made her feel tired.", "output": ["weights"]}, "Prediction": "protein shakes"} +{"Task": "task033_winogrande_answer_generation", "Definition": ["You need to answer a given question containing a blank (_). Your answer must be one of the two objects mentioned in the question, for example \"trophy\" and \"suitcase\". Your answer must not contain a word that is not present in the question. Please don't use articles (e.g., the, a) before the answer."], "Instance": {"id": "task033-24860e02b0e847b2b5d09c4deb7bedad", "input": "Chris wanted the nurse to give him an injection with a syringe and not an IV needle, because the _ was dull.", "output": ["needle"]}, "Prediction": "IV needle"} +{"Task": "task033_winogrande_answer_generation", "Definition": ["You need to answer a given question containing a blank (_). Your answer must be one of the two objects mentioned in the question, for example \"trophy\" and \"suitcase\". Your answer must not contain a word that is not present in the question. Please don't use articles (e.g., the, a) before the answer."], "Instance": {"id": "task033-e09782c9c0554244969779628a056cbc", "input": "I tried to place the grocery bag in the basket on the bike, but the _ was too small.", "output": ["basket"]}, "Prediction": "basket"} +{"Task": "task1407_dart_question_generation", "Definition": ["In this task you are given a list of triplets of the form [subject, predicate, object] and the output should be a question based on the triplets but with the subject and/or object replaced with blanks (represented using two or more consecutive underscores). Triplet values encompassed in [*] are special tokens that can be replaced with synonyms. 
The objective is to construct a question in a manner that (a) captures the facts specified in at least one of the triplets, and (b) ideally contains a limited number of blanks such that it is a well-formed question that is easy to answer. A blank can represent a single word or a phrase."], "Instance": {"id": "task1407-ba41effea81844ff8ba393fbdd317fef", "input": "[['13.1', 'YARDS', '197'], ['Sam Simmons', 'AVG.', '13.1']]", "output": ["_____ had _____ total receiving yards, with an average of 13.1 yards/reception."]}, "Prediction": "Sam Simmons' avg. is"} +{"Task": "task1407_dart_question_generation", "Definition": ["In this task you are given a list of triplets of the form [subject, predicate, object] and the output should be a question based on the triplets but with the subject and/or object replaced with blanks (represented using two or more consecutive underscores). Triplet values encompassed in [*] are special tokens that can be replaced with synonyms. The objective is to construct a question in a manner that (a) captures the facts specified in at least one of the triplets, and (b) ideally contains a limited number of blanks such that it is a well-formed question that is easy to answer. A blank can represent a single word or a phrase."], "Instance": {"id": "task1407-8b3cfd5c9c02440bab92fd5d8e5c77fd", "input": "[['Cocum', 'eatType', 'coffee shop'], ['Cocum', 'food', 'Japanese'], ['Cocum', 'priceRange', 'high'], ['Cocum', 'customer rating', '3 out of 5'], ['Cocum', 'familyFriendly', 'no']]", "output": ["_____ is a _____ coffee shop that is non child friendly with a high price range and a customer rating of 3 out of 5.", "_____ is a _____ coffee shop which is not child friendly. It has a customer rating of 3 out of 5. The price range is high.", "There is a _____ coffee shop named _____ that is non child friendly with a high price range and a customer rating of 3 out of 5.", "There is a _____ coffee shop named _____. It is not child friendly. 
It has a customer rating of 3 out of 5. The price range is high."]}, "Prediction": "_______ is a coffee shop. "} +{"Task": "task1407_dart_question_generation", "Definition": ["In this task you are given a list of triplets of the form [subject, predicate, object] and the output should be a question based on the triplets but with the subject and/or object replaced with blanks (represented using two or more consecutive underscores). Triplet values encompassed in [*] are special tokens that can be replaced with synonyms. The objective is to construct a question in a manner that (a) captures the facts specified in at least one of the triplets, and (b) ideally contains a limited number of blanks such that it is a well-formed question that is easy to answer. A blank can represent a single word or a phrase."], "Instance": {"id": "task1407-84c627e08ac64986a9f973f891768e7a", "input": "[['Fitzbillies', 'eatType', 'coffee shop'], ['Fitzbillies', 'food', 'French'], ['Fitzbillies', 'priceRange', 'moderate'], ['Fitzbillies', 'area', 'city centre']]", "output": ["_____ is a coffee shop offering French food at a moderate price. It is located in the _____.", "_____ is a coffee shop located within the _____ offering French food at a moderate price."]}, "Prediction": "The area where fitzbillies is"} +{"Task": "task1407_dart_question_generation", "Definition": ["In this task you are given a list of triplets of the form [subject, predicate, object] and the output should be a question based on the triplets but with the subject and/or object replaced with blanks (represented using two or more consecutive underscores). Triplet values encompassed in [*] are special tokens that can be replaced with synonyms. The objective is to construct a question in a manner that (a) captures the facts specified in at least one of the triplets, and (b) ideally contains a limited number of blanks such that it is a well-formed question that is easy to answer. 
A blank can represent a single word or a phrase."], "Instance": {"id": "task1407-20f0787c59c441b4baa6a43cd8b8beff", "input": "[['The Ref', 'ROLE', 'Old Baybrook Policeman']]", "output": ["John Benjamin Hickey role is \t_____ title name _____"]}, "Prediction": "The ref's role is old baybrook"} +{"Task": "task1407_dart_question_generation", "Definition": ["In this task you are given a list of triplets of the form [subject, predicate, object] and the output should be a question based on the triplets but with the subject and/or object replaced with blanks (represented using two or more consecutive underscores). Triplet values encompassed in [*] are special tokens that can be replaced with synonyms. The objective is to construct a question in a manner that (a) captures the facts specified in at least one of the triplets, and (b) ideally contains a limited number of blanks such that it is a well-formed question that is easy to answer. A blank can represent a single word or a phrase."], "Instance": {"id": "task1407-73dd269eb0884675914453c3250f92ef", "input": "[['AIP Advances', 'EDITOR', 'A.T. Charlie Johnson'], ['A.T. Charlie Johnson', 'ALMA_MATER', 'Harvard University'], ['A.T. Charlie Johnson', 'DOCTORAL_ADVISOR', 'Michael Tinkham'], ['AIP Advances', 'PUBLISHER', 'American Institute of Physics']]", "output": ["Published by American Institute of Physics, _____ was edited by A.T. Charlie Johnson who's almaMater is Harvard University and who is the doctoral advisor for Michael Tinkham.", "_____ is published by the American Institute of Physics. It is edited by A.T. Charlie Johnson, whose alma mater is Harvard University, and whose doctoral advisor was Michael Tinkham.", "The editor of _____ published by American Institute of Physics is A.T Charlie Johnson who graduated from Harvard University and is the doctoral advisor for Michael Tinkham."]}, "Prediction": "A.T. 
Charlie Johnson is the editor"} +{"Task": "task402_grailqa_paraphrase_generation", "Definition": ["Compose a new way to ask the same question. The new question must be asking the exact same thing as the original question does. Without any other information, you should be able to tell that the original question and the new question are the same. Preserve bracketed phrases in your new question. You can change the rest of the question provided the above requirements are satisfied. The new question must have a natural, fluent formulation, as if it were composed by a native English speaker. You will be given three pieces of information to help you create better questions. Those are: \"Answer\": the answer to the question, e.g., \"Honolulu\". \"Answer Type\": the type of the answer, e.g., \"Location\". \"Auxiliary Information\" is a detailed description of some concepts mentioned in the question, if any."], "Instance": {"id": "task402-cb25ec53f48e45e7945202c0e171332d", "input": "original question: Which opera is in [Sanskrit Language]?\nanswer: satyagraha\nanswer type: Opera.\nauxiliary information: Sanskrit is the primary liturgical language of Hinduism, a philosophical language in Buddhism, Hinduism and Jainism, and a literary language that was in use as a lingua franca in the Indian cultural zone. It is a standardised dialect of the Old Indo-Aryan language, originating as Vedic Sanskrit and tracing its linguistic ancestry back to Proto-Indo-Iranian and Proto-Indo-European. Today it is listed as one of the 22 scheduled languages of India and is an official language of the state of Uttarakhand. Sanskrit holds a prominent position in Indo-European studies. The corpus of Sanskrit literature encompasses a rich tradition of poetry and drama as well as scientific, technical, philosophical and religious texts. Sanskrit continues to be widely used as a ceremonial language in Hindu religious rituals and Buddhist practice in the form of hymns and chants. 
Spoken Sanskrit has been revived in some villages with traditional institutions, and there are attempts to enhance its popularisation.", "output": ["What is the opera performed in the [Sanskrit Language]?", "[Sanskrit Language] is the literary language of which opera?", "[Sanskrit Language] is the language of which opera?", "What opera has [Sanskrit Language] in it?", "[Sanskrit Language] is used in which opera?"]}, "Prediction": "What is the name of the language that is"} +{"Task": "task402_grailqa_paraphrase_generation", "Definition": ["Compose a new way to ask the same question. The new question must be asking the exact same thing as the original question does. Without any other information, you should be able to tell that the original question and the new question are the same. Preserve bracketed phrases in your new question. You can change the rest of the question provided the above requirements are satisfied. The new question must have a natural, fluent formulation, as if it were composed by a native English speaker. You will be given three pieces of information to help you create better questions. Those are: \"Answer\": the answer to the question, e.g., \"Honolulu\". \"Answer Type\": the type of the answer, e.g., \"Location\". \"Auxiliary Information\" is a detailed description of some concepts mentioned in the question, if any."], "Instance": {"id": "task402-bf281b924df14bd398a0f6ca65a662c9", "input": "original question: Which sports league championship did league of [European Volleyball Confederation] participate in?\nanswer: european volleyball championship\nanswer type: Sports League Championship.\nauxiliary information: The European Volleyball Confederation is the continental governing body for the sport of volleyball in Europe. 
Its headquarters are located in Luxembourg, Luxembourg.", "output": ["The league of [European Volleyball Confederation] played in what sports league championship?", "What sports league championship did the [European Volleyball Confederation] compete in?", "The league of [European Volleyball Confederation] participated in what sports league championship?", "[European Volleyball Confederation] took part in what sport's league championship?", "[European Volleyball Confederation] league participated in what sports league championship?"]}, "Prediction": "What sports league championship did the [European"} +{"Task": "task402_grailqa_paraphrase_generation", "Definition": ["Compose a new way to ask the same question. The new question must be asking the exact same thing as the original question does. Without any other information, you should be able to tell that the original question and the new question are the same. Preserve bracketed phrases in your new question. You can change the rest of the question provided the above requirements are satisfied. The new question must have a natural, fluent formulation, as if it were composed by a native English speaker. You will be given three pieces of information to help you create better questions. Those are: \"Answer\": the answer to the question, e.g., \"Honolulu\". \"Answer Type\": the type of the answer, e.g., \"Location\". \"Auxiliary Information\" is a detailed description of some concepts mentioned in the question, if any."], "Instance": {"id": "task402-dc5e96b1b6784852ac679a137240b70b", "input": "original question: Which animal breed is the breed of [Guinea pig]?\nanswer: himalayan guinea pig, teddy guinea pig, skinny pig...\nanswer type: Animal breed.\nauxiliary information: The guinea pig, also called the cavy, is a species of rodent belonging to the family Caviidae and the genus Cavia. Despite their common name, these animals are not in the pig family, nor are they from Guinea. 
They originated in the Andes, and earlier studies based on biochemistry and hybridization suggested they are domesticated descendants of a closely related species of cavy such as Cavia aperea, C. fulgida, or C. tschudii and, therefore, do not exist naturally in the wild. Recent studies applying molecular markers, in addition to studying the skull and skeletal morphology of current and mummified animals, revealed that the ancestor is most likely Cavia tschudii. The guinea pig plays an important role in the folk culture of many Indigenous South American groups, especially as a food source, but also in folk medicine and in community religious ceremonies. Since the 1960s, efforts have been made to increase consumption of the animal outside South America. In Western societies, the guinea pig has enjoyed widespread popularity as a household pet since its introduction by European traders in the 16th century.", "output": ["A [Guinea pig] is included in what type of animal breed?", "The [Guinea pig] is of what animal breed?", "What is the animal breed of a [Guinea pig]?", "[Guinea pig] is what animal breed?", "The [Guinea pig] species has which breeds?"]}, "Prediction": "What are the different breeds of [Gu"} +{"Task": "task402_grailqa_paraphrase_generation", "Definition": ["Compose a new way to ask the same question. The new question must be asking the exact same thing as the original question does. Without any other information, you should be able to tell that the original question and the new question are the same. Preserve bracketed phrases in your new question. You can change the rest of the question provided the above requirements are satisfied. The new question must have a natural, fluent formulation, as if it were composed by a native English speaker. You will be given three pieces of information to help you create better questions. Those are: \"Answer\": the answer to the question, e.g., \"Honolulu\". \"Answer Type\": the type of the answer, e.g., \"Location\". 
\"Auxiliary Information\" is a detailed description of some concepts mentioned in the question, if any."], "Instance": {"id": "task402-f88fdca02bf84e79ab052bbcd1a1873a", "input": "original question: What is the professional field of [Website content writer]\nanswer: online marketing, independent content provider, internet marketing\nanswer type: Professional field.\nauxiliary information: A website content writer or web content writer is a person who specializes in providing relevant content for websites. Every website has a specific target audience and requires a different type and level of content. Content should contain words that attract and retain users on a website. Content written specifically for a website should concentrate on a specific topic. It should also be easy to read, offering the information in easy to understand clusters or laid out in bullet points. Most story pieces are centered on marketing products or services that the website is selling or endorsing, though this is not always the case. Some websites are informational only and do not sell a product or service. In those instances, the content should be geared toward helping to educate the reader while providing them with complex information in a way that is easy to understand and retain.", "output": ["What professional field does [Website content writer] belong to?", "[Website content writer] is what professional field?", "[Website content writer] is part of which professional field?", "A [Website content writer] is a member of what professional field?", "Which professional field does [Website content writer] pertain to?"]}, "Prediction": "What is the professional field of [Website"} +{"Task": "task402_grailqa_paraphrase_generation", "Definition": ["Compose a new way to ask the same question. The new question must be asking the exact same thing as the original question does. Without any other information, you should be able to tell that the original question and the new question are the same. 
Preserve bracketed phrases in your new question. You can change the rest of the question provided the above requirements are satisfied. The new question must have a natural, fluent formulation, as if it were composed by a native English speaker. You will be given three pieces of information to help you create better questions. Those are: \"Answer\": the answer to the question, e.g., \"Honolulu\". \"Answer Type\": the type of the answer, e.g., \"Location\". \"Auxiliary Information\" is a detailed description of some concepts mentioned in the question, if any."], "Instance": {"id": "task402-09402559bcd04e3985e0a94871716e75", "input": "original question: Find products made of [Sugar]\nanswer: bon v\u2248\u00ecux 750ml\nanswer type: Product with ingredients.\nauxiliary information: Sugar is the generalized name for sweet, short-chain, soluble carbohydrates, many of which are used in food. They are carbohydrates, composed of carbon, hydrogen, and oxygen. There are various types of sugar derived from different sources. Simple sugars are called monosaccharides and include glucose, fructose and galactose. The table or granulated sugar most customarily used as food is sucrose, a disaccharide. Other disaccharides include maltose and lactose. Longer chains of sugars are called oligosaccharides. Chemically-different substances may also have a sweet taste, but are not classified as sugars. Some are used as lower-calorie food substitutes for sugar described as artificial sweeteners. Sugars are found in the tissues of most plants, but are present in sufficient concentrations for efficient extraction only in sugarcane and sugar beet. Sugarcane refers to any of several species of giant grass in the genus Saccharum that have been cultivated in tropical climates in South Asia and Southeast Asia since ancient times. 
A great expansion in its production took place in the 18th century with the establishment of sugar plantations in the West Indies and Americas.", "output": ["[Sugar] is found in what products?", "[Sugar] composes what products?", "Give me some products that are made with [Sugar] in it.", "What products are made with [Sugar].", "[Sugar] is used to make what products?"]}, "Prediction": "What are the different types of sugar?"} +{"Task": "task201_mnli_neutral_classification", "Definition": ["In this task, you're given a statement and three sentences as choices. Your job is to determine the neutral choice based on your inference from the statement and your commonsense knowledge. The neutral choice is a sentence that neither agrees nor disagrees with the statement. Indicate your answer as '1', '2', or '3', corresponding to the choice number of the selected sentence. If sentence X agrees with sentence Y, one's correctness follows from the other one. If sentence X disagrees with sentence Y, they can not be correct at the same time."], "Instance": {"id": "task201-e047a21f70ab4f44a036069c9f9669c0", "input": "Statement: You've never had a job. Choices: 1. You've never worked. 2. You've never been employed. 3. You had a job in the past. ", "output": ["1"]}, "Prediction": "2"} +{"Task": "task201_mnli_neutral_classification", "Definition": ["In this task, you're given a statement and three sentences as choices. Your job is to determine the neutral choice based on your inference from the statement and your commonsense knowledge. The neutral choice is a sentence that neither agrees nor disagrees with the statement. Indicate your answer as '1', '2', or '3', corresponding to the choice number of the selected sentence. If sentence X agrees with sentence Y, one's correctness follows from the other one. 
If sentence X disagrees with sentence Y, they can not be correct at the same time."], "Instance": {"id": "task201-e751866455854760886db81e08a47a00", "input": "Statement: For example, mail could be sorted one day and delivered the next, resulting in a less demanding schedule. Choices: 1. The schedule is more hectic when mail is sorted in one day and delivered in another. 2. If mail can be sorted in one day, delivered the next the schedule will be less demanding. 3. A two day system of mail delivery works best. ", "output": ["3"]}, "Prediction": "2"} +{"Task": "task201_mnli_neutral_classification", "Definition": ["In this task, you're given a statement and three sentences as choices. Your job is to determine the neutral choice based on your inference from the statement and your commonsense knowledge. The neutral choice is a sentence that neither agrees nor disagrees with the statement. Indicate your answer as '1', '2', or '3', corresponding to the choice number of the selected sentence. If sentence X agrees with sentence Y, one's correctness follows from the other one. If sentence X disagrees with sentence Y, they can not be correct at the same time."], "Instance": {"id": "task201-5ba4eb1c9f154fdf8bb5d966d2869165", "input": "Statement: Spun and spun and spun the president's denial for months without bothering to check if it was true. Choices: 1. The President's advisors knew that it wasn't true. 2. The President was completely open from the start and made no denials. 3. The President kept denying, but did not check to see if it was true.", "output": ["1"]}, "Prediction": "3"} +{"Task": "task201_mnli_neutral_classification", "Definition": ["In this task, you're given a statement and three sentences as choices. Your job is to determine the neutral choice based on your inference from the statement and your commonsense knowledge. The neutral choice is a sentence that neither agrees nor disagrees with the statement. 
Indicate your answer as '1', '2', or '3', corresponding to the choice number of the selected sentence. If sentence X agrees with sentence Y, one's correctness follows from the other one. If sentence X disagrees with sentence Y, they can not be correct at the same time."], "Instance": {"id": "task201-052923547b384b7eb229246fa7418eeb", "input": "Statement: and it's all over him the poor thing you know so there's definitely hazards to going camping if you don't know what you're looking at Choices: 1. You should know what you're doing if you want to go camping. 2. He got covered in poison ivy because he didn't recognize it. 3. Camping is very safe and you don't know what you're looking at.", "output": ["2"]}, "Prediction": "1"} +{"Task": "task201_mnli_neutral_classification", "Definition": ["In this task, you're given a statement and three sentences as choices. Your job is to determine the neutral choice based on your inference from the statement and your commonsense knowledge. The neutral choice is a sentence that neither agrees nor disagrees with the statement. Indicate your answer as '1', '2', or '3', corresponding to the choice number of the selected sentence. If sentence X agrees with sentence Y, one's correctness follows from the other one. If sentence X disagrees with sentence Y, they can not be correct at the same time."], "Instance": {"id": "task201-e9c022b8c6704905bb03f758879604b2", "input": "Statement: no no no you went to Galveston uh i have a favorite hotel that extends out over the water Choices: 1. You went to Dallas. 2. You went to Galveston while I was in Arizona. 3. You went to Galveston.", "output": ["2"]}, "Prediction": "3"} +{"Task": "task520_aquamuse_answer_given_in_passage", "Definition": ["In this task you will be given a question and a passage. You need to determine if the answer to the question is contained in the passage. If the answer can be found in the passage you should output 'True'. 
If the answer cannot be found in the passage you should output 'False'."], "Instance": {"id": "task520-948963bd86e2454b966af103d2956f9a", "input": "Passage: 'PlayStation Network's services are dedicated to an online marketplace (PlayStation Store), a premium subscription service for enhanced gaming and social features (PlayStation Plus), movie streaming, rentals and purchases (PlayStation Video), a cloud-based television programming service (PlayStation Vue), music streaming (PlayStation Music) and a cloud gaming service (PlayStation Now).'. Question: 'what is the difference between playstation store and playstation network'.", "output": ["True"]}, "Prediction": "True"} +{"Task": "task520_aquamuse_answer_given_in_passage", "Definition": ["In this task you will be given a question and a passage. You need to determine if the answer to the question is contained in the passage. If the answer can be found in the passage you should output 'True'. If the answer cannot be found in the passage you should output 'False'."], "Instance": {"id": "task520-472d0ab563124f569f160065cfc58aa1", "input": "Passage: 'The Yellow River or Huang He (listen) is the third longest river in Asia, after the Yangtze River and Yenisei River, and the sixth-longest river system in the world at the estimated length of 5,464 km (3,395 mi). Originating in the Bayan Har Mountains in Qinghai province of western China, it flows through nine provinces, and it empties into the Bohai Sea near the city of Dongying in Shandong province. The Yellow River basin has an east -- west extent of about 1,900 kilometers (1,180 mi) and a north -- south extent of about 1,100 km (680 mi). Its total drainage area is about 752,546 square kilometers (290,560 sq mi).'. Question: 'when did woodblock printing become widely used throughout east asia'.", "output": ["False"]}, "Prediction": "False"} +{"Task": "task520_aquamuse_answer_given_in_passage", "Definition": ["In this task you will be given a question and a passage. 
You need to determine if the answer to the question is contained in the passage. If the answer can be found in the passage you should output 'True'. If the answer cannot be found in the passage you should output 'False'."], "Instance": {"id": "task520-38791d9f547e402a89a1e76c4f0b17e1", "input": "Passage: 'The operating system can be located on any storage, but is typically installed on a hard disk or solid-state drive. A Live CD represents the concept of running an operating system directly from a CD. While this is slow compared to storing the operating system on a hard disk drive, it is typically used for installation of operating systems, demonstrations, system recovery, or other special purposes. Large flash memory is currently more expensive than hard disk drives of similar size (as of mid-2014) but are starting to appear in laptop computers because of their low weight, small size and low power requirements. Computer communications involve internal modem cards, modems, network adapter cards, and routers. Common peripherals and adapter cards include headsets, joysticks, microphones, printers, scanners, sound adapter cards (as a separate card rather than located on the motherboard), speakers and webcams.'. Question: 'what is common between foehn and chinook winds'.", "output": ["False"]}, "Prediction": "False"} +{"Task": "task520_aquamuse_answer_given_in_passage", "Definition": ["In this task you will be given a question and a passage. You need to determine if the answer to the question is contained in the passage. If the answer can be found in the passage you should output 'True'. If the answer cannot be found in the passage you should output 'False'."], "Instance": {"id": "task520-078a03bd9f9442a58f180ce523863209", "input": "Passage: 'The presidential primary elections and caucuses held in the various states, the District of Columbia, and territories of the United States form part of the nominating process of candidates for United States presidential elections. 
The United States Constitution has never specified the process; political parties have developed their own procedures over time. Some states hold only primary elections, some hold only caucuses, and others use a combination of both. These primaries and caucuses are staggered, generally beginning sometime in January or February, and ending about mid-June before the general election in November. State and local governments run the primary elections, while caucuses are private events that are directly run by the political parties themselves. A state's primary election or caucus is usually an indirect election: instead of voters directly selecting a particular person running for President, they determine the number of delegates each party's national convention will receive from their respective state. These delegates then in turn select their party's presidential nominee. The first state in the United States to hold its presidential primary was New Hampshire in 1920.'. Question: 'who determines the way in which primaries are set up'.", "output": ["True"]}, "Prediction": "True"} +{"Task": "task520_aquamuse_answer_given_in_passage", "Definition": ["In this task you will be given a question and a passage. You need to determine if the answer to the question is contained in the passage. If the answer can be found in the passage you should output 'True'. If the answer cannot be found in the passage you should output 'False'."], "Instance": {"id": "task520-f30b3efa028045d5bf4235e0a6910529", "input": "Passage: 'The Brahmaputra (/ \u02ccbr\u0251\u02d0m\u0259\u02c8pu\u02d0tr\u0259 / is one of the major rivers of Asia, a trans-boundary river which flows through China, India and Bangladesh. 
As such, it is known by various names in the region: Assamese: \u09ac\u09cd\u09f0\u09b9\u09cd\u09ae\u09aa\u09c1\u09a4\u09cd\u09f0 \u09a8\u09a6 (' \u09a8\u09a6' n\u00f4d, masculine form of' \u09a8\u09a6\u09c0' n\u00f4di 'river') Br\u00f4hm\u00f4putr\u00f4 (b\u0279\u0254\u0266m\u0254put\u0279\u0254); Sanskrit: \u092c\u094d\u0930\u0939\u094d\u092e\u092a\u0941\u0924\u094d\u0930, IAST: Brahmaputra; Tibetan: \u0f61\u0f62 \u0f0b \u0f40\u0fb3\u0f74\u0f44\u0f66 \u0f0b \u0f42\u0f59\u0f44 \u0f0b \u0f54\u0f7c \u0f0b, Wylie: yar klung gtsang po Yarlung Tsangpo; simplified Chinese: \u5e03 \u62c9 \u9a6c \u666e \u7279 \u62c9 \u6cb3; traditional Chinese: \u5e03 \u62c9 \u99ac \u666e \u7279 \u62c9 \u6cb3; pinyin: B\u00f9l\u0101m\u01cep\u01d4t\u00e8l\u0101 H\u00e9. It is also called Tsangpo-Brahmaputra (when referring to the whole river including the stretch within Tibet). The Manas River, which runs through Bhutan, joins it at Jogighopa, in India. It is the tenth largest river in the world by discharge, and the 15th longest.'. Question: 'when does autumn start in the northern hemisphere'.", "output": ["False"]}, "Prediction": "False"} +{"Task": "task892_gap_reverse_coreference_resolution", "Definition": ["Read the passage and find the corresponding pronoun for the given name. The word between ** ** is the target name. The pronoun should be one of 'her', 'him', 'he', 'she' and 'his' with proper casing based on the position in the passage."], "Instance": {"id": "task892-c2f5cdcadd6342d2a4cbe167b7725959", "input": "Corbett's feature film credits include three Toronto International Film Festival film selections, The Steps (2015), Empire of Dirt (2013) and The Animal Project (2013); as well as principal roles in the features Barn Wedding (2014) and How to Be Deadly (2014). 
Kate has also starred in several short films, including the award-winning How **Eunice** Got Her Baby (2009) and The Tunnel (2013), which was the Not Short on Talent recipient for Cannes Film Festival and for which she is also the credited writer.", "output": ["Her"]}, "Prediction": "She"} +{"Task": "task892_gap_reverse_coreference_resolution", "Definition": ["Read the passage and find the corresponding pronoun for the given name. The word between ** ** is the target name. The pronoun should be one of 'her', 'him', 'he', 'she' and 'his' with proper casing based on the position in the passage."], "Instance": {"id": "task892-c46d59063893465eacee03edab71e82b", "input": "Riley Perrin (Chelsea Kane): Ben's close friend and Danny's best friend since they were little. Although Riley dates Ben on and off during the first few seasons of the show, she falls for **Danny** towards the end of Season 4, and subsequently begin a relationship with him.", "output": ["him"]}, "Prediction": "She"} +{"Task": "task892_gap_reverse_coreference_resolution", "Definition": ["Read the passage and find the corresponding pronoun for the given name. The word between ** ** is the target name. The pronoun should be one of 'her', 'him', 'he', 'she' and 'his' with proper casing based on the position in the passage."], "Instance": {"id": "task892-747d460f19f2484c8de78ac2bc7d91bb", "input": "After a back and forth game, the score was tied 0--0 at full-time, and remained so after extra time, leading to a penalty kick shootout. With Briana Scurry's save of China's third kick, the score was 4--4 with only **Brandi Chastain** left to shoot. She scored and won the game for the United States.", "output": ["She"]}, "Prediction": "She"} +{"Task": "task892_gap_reverse_coreference_resolution", "Definition": ["Read the passage and find the corresponding pronoun for the given name. The word between ** ** is the target name. 
The pronoun should be one of 'her', 'him', 'he', 'she' and 'his' with proper casing based on the position in the passage."], "Instance": {"id": "task892-6d6f63d0560543c19f6537315e172748", "input": "In the second act, Victor's apartment is shown from a different angle, and the Krap home is still visible. For much of the act, that part of the stage is empty, except for a moment where Jacques lights the lamp and paws his master's armchair. At one point, **Victor** arrives, and Jacques allows him to sit in his father's chair.", "output": ["him"]}, "Prediction": "He"} +{"Task": "task892_gap_reverse_coreference_resolution", "Definition": ["Read the passage and find the corresponding pronoun for the given name. The word between ** ** is the target name. The pronoun should be one of 'her', 'him', 'he', 'she' and 'his' with proper casing based on the position in the passage."], "Instance": {"id": "task892-8fcaae7da46849528d6e2ce25e40c0cd", "input": "Another brother, Dr. Aswin W. Sastrowardoyo, is a physician who was formerly a guitarist and vocalist with the music group Chaseiro from 1979 to 1983, and a younger sister, Lisa Damayanti Sastrowardoyo (b. 1962). The actress Dian Sastrowardoyo is a niece of Ms. **Joesoef**. She is married to Mr. Iwan V. Joesoef, a businessman, and has two sons Marwan Arie Joesoef (born 26/5/1976), Nurfahd S. Joesoef (born 4/3/1979) and one daughter Tiara R. Joesoef (born 5/7/1999)", "output": ["She"]}, "Prediction": "She"} +{"Task": "task828_copa_commonsense_cause_effect", "Definition": ["In this task your given two statements. You must judge whether the second sentence is the cause or effect of the first one. Label the instances as \"cause\" or \"effect\" based on your judgment. 
The sentences are separated by a newline character."], "Instance": {"id": "task828-080ee06d04cf4376a72c2521972571f9", "input": "The employee's shift ended.\nHe went home for the day.", "output": ["effect"]}, "Prediction": "effect"} +{"Task": "task828_copa_commonsense_cause_effect", "Definition": ["In this task your given two statements. You must judge whether the second sentence is the cause or effect of the first one. Label the instances as \"cause\" or \"effect\" based on your judgment. The sentences are separated by a newline character."], "Instance": {"id": "task828-6c5bcafc15bd4961a8a0c0627014bbe2", "input": "The woman avoided going in the lake.\nIt looked contaminated.", "output": ["cause"]}, "Prediction": "cause"} +{"Task": "task828_copa_commonsense_cause_effect", "Definition": ["In this task your given two statements. You must judge whether the second sentence is the cause or effect of the first one. Label the instances as \"cause\" or \"effect\" based on your judgment. The sentences are separated by a newline character."], "Instance": {"id": "task828-9d6cc75151b54e8f8a2787458bf5e4a6", "input": "The store cashier called security.\nThe customer used counterfeit money.", "output": ["cause"]}, "Prediction": "effect"} +{"Task": "task828_copa_commonsense_cause_effect", "Definition": ["In this task your given two statements. You must judge whether the second sentence is the cause or effect of the first one. Label the instances as \"cause\" or \"effect\" based on your judgment. The sentences are separated by a newline character."], "Instance": {"id": "task828-ec72698bec3e4116ba41e8d58a62ce18", "input": "The band played their hit song.\nThe audience clapped along to the music.", "output": ["effect"]}, "Prediction": "effect"} +{"Task": "task828_copa_commonsense_cause_effect", "Definition": ["In this task your given two statements. You must judge whether the second sentence is the cause or effect of the first one. 
Label the instances as \"cause\" or \"effect\" based on your judgment. The sentences are separated by a newline character."], "Instance": {"id": "task828-84bad77af50448a791583b72adde0f68", "input": "The man was denied for a loan.\nHe was in debt.", "output": ["cause"]}, "Prediction": "effect"} +{"Task": "task769_qed_summarization", "Definition": ["Given a text passage, come up with an appropriate title for it. The title should accurately represent what the main focus of the passage is. The title of a passage may be present in the first few sentences of the passage and/or be referred to multiple times in the passage. The title should be 1-5 words long."], "Instance": {"id": "task769-60a6c5239bc94ad09be9fb1a78c1a904", "input": "The Nobel Peace Prize ( Swedish : Nobels fredspris ) is one of the five Nobel Prizes created by the Swedish industrialist , inventor , and armaments manufacturer Alfred Nobel , along with the prizes in Chemistry , Physics , Physiology or Medicine , and Literature . Since March 1901 , it has been awarded annually ( with some exceptions ) to those who have `` done the most or the best work for fraternity between nations , for the abolition or reduction of standing armies and for the holding and promotion of peace congresses '' .", "output": ["Nobel Peace Prize"]}, "Prediction": "Nobel Peace Prize"} +{"Task": "task769_qed_summarization", "Definition": ["Given a text passage, come up with an appropriate title for it. The title should accurately represent what the main focus of the passage is. The title of a passage may be present in the first few sentences of the passage and/or be referred to multiple times in the passage. 
The title should be 1-5 words long."], "Instance": {"id": "task769-1816aaf52da84f3eb7ad869e9b67c108", "input": "The First Battle of Bull Run ( the name used by Union forces ) , also known as the Battle of First Manassas ( the name used by Confederate forces ) , was fought on July 21 , 1861 in Prince William County , Virginia , just north of the city of Manassas and about 25 miles west - southwest of Washington , D.C. It was the first major battle of the American Civil War . The Union 's forces were slow in positioning themselves , allowing Confederate reinforcements time to arrive by rail . Each side had about 18,000 poorly trained and poorly led troops in their first battle . It was a Confederate victory , followed by a disorganized retreat of the Union forces .", "output": ["First Battle of Bull Run"]}, "Prediction": "First Battle of Bull Run"} +{"Task": "task769_qed_summarization", "Definition": ["Given a text passage, come up with an appropriate title for it. The title should accurately represent what the main focus of the passage is. The title of a passage may be present in the first few sentences of the passage and/or be referred to multiple times in the passage. The title should be 1-5 words long."], "Instance": {"id": "task769-66b7371db2c24802b364c72f55ca87c7", "input": "About 90 % of the world 's earthquakes and 81 % of the world 's largest earthquakes occur along the Ring of Fire . All but three of the world 's 25 largest volcanic eruptions of the last 11,700 years occurred at volcanoes in the Ring of Fire . The Ring of Fire is a direct result of plate tectonics : the movement and collisions of lithospheric plates , especially subduction in the northern portion . 
The southern portion is more complex , with a number of smaller tectonic plates in collision with the Pacific plate from the Mariana Islands , the Philippines , Bougainville , Tonga , and New Zealand .", "output": ["Ring of Fire"]}, "Prediction": "Ring of Fire"} +{"Task": "task769_qed_summarization", "Definition": ["Given a text passage, come up with an appropriate title for it. The title should accurately represent what the main focus of the passage is. The title of a passage may be present in the first few sentences of the passage and/or be referred to multiple times in the passage. The title should be 1-5 words long."], "Instance": {"id": "task769-6ba9790f105b4721aad8b4b2a497deb0", "input": "Computer animation is the process used for generating animated images . The more general term computer - generated imagery ( CGI ) encompasses both static scenes and dynamic images , while computer animation only refers to the moving images . Modern computer animation usually uses 3D computer graphics , although 2D computer graphics are still used for stylistic , low bandwidth , and faster real - time renderings . Sometimes , the target of the animation is the computer itself , but sometimes film as well .", "output": ["Computer animation"]}, "Prediction": "Computer animation"} +{"Task": "task769_qed_summarization", "Definition": ["Given a text passage, come up with an appropriate title for it. The title should accurately represent what the main focus of the passage is. The title of a passage may be present in the first few sentences of the passage and/or be referred to multiple times in the passage. The title should be 1-5 words long."], "Instance": {"id": "task769-a794f066e7304c75a4e73a830e5d66c2", "input": "The 2017 FIFA U-17 World Cup was the 17th FIFA U-17 World Cup , a biennial international football tournament contested by men 's under - 17 national teams . 
Organized by FIFA , the tournament took place in India between 6 and 28 October 2017 , after the country was awarded the hosting rights on 5 December 2013 . The tournament marked the first time India have hosted a FIFA tournament and the first Asian hosted U-17 World Cup since 2013 . The attendance for this World Cup was a record 1,347,133 surpassing China 's 1985 edition where it was 1,230,976 .", "output": ["2017 FIFA U-17 World Cup"]}, "Prediction": "2017 FIFA U-17 World Cup"} +{"Task": "task1155_bard_analogical_reasoning_trash_or_treasure", "Definition": ["Two analogies that relate items to whether they are trash or treasure is given in the form \"A : B. C : ?\". Items marked as \"trash\" are considered to not have value and can be disposed, whereas those marked as \"treasure\" are considered valuable items that must be retained. \"A : \"B relates item A to whether it is trash or treasure, as specified by B. Your task is to replace the question mark (?) with either `trash` or `treasure` for the given item C, following the \"A : B\" relation."], "Instance": {"id": "task1155-25b29384d5084ad799fad725ef24baf3", "input": "baby : treasure. dirt : ?", "output": ["trash"]}, "Prediction": "trash"} +{"Task": "task1155_bard_analogical_reasoning_trash_or_treasure", "Definition": ["Two analogies that relate items to whether they are trash or treasure is given in the form \"A : B. C : ?\". Items marked as \"trash\" are considered to not have value and can be disposed, whereas those marked as \"treasure\" are considered valuable items that must be retained. \"A : \"B relates item A to whether it is trash or treasure, as specified by B. Your task is to replace the question mark (?) with either `trash` or `treasure` for the given item C, following the \"A : B\" relation."], "Instance": {"id": "task1155-f0c4bf5072d44402b600573c4eebca92", "input": "mold : trash. 
pillow : ?", "output": ["treasure"]}, "Prediction": "treasure"} +{"Task": "task1155_bard_analogical_reasoning_trash_or_treasure", "Definition": ["Two analogies that relate items to whether they are trash or treasure is given in the form \"A : B. C : ?\". Items marked as \"trash\" are considered to not have value and can be disposed, whereas those marked as \"treasure\" are considered valuable items that must be retained. \"A : \"B relates item A to whether it is trash or treasure, as specified by B. Your task is to replace the question mark (?) with either `trash` or `treasure` for the given item C, following the \"A : B\" relation."], "Instance": {"id": "task1155-dd2ac62808c7493cb8a3d787554a3c3e", "input": "computer : treasure. book : ?", "output": ["treasure"]}, "Prediction": "treasure"} +{"Task": "task1155_bard_analogical_reasoning_trash_or_treasure", "Definition": ["Two analogies that relate items to whether they are trash or treasure is given in the form \"A : B. C : ?\". Items marked as \"trash\" are considered to not have value and can be disposed, whereas those marked as \"treasure\" are considered valuable items that must be retained. \"A : \"B relates item A to whether it is trash or treasure, as specified by B. Your task is to replace the question mark (?) with either `trash` or `treasure` for the given item C, following the \"A : B\" relation."], "Instance": {"id": "task1155-2b49a6a24eef4ea08a01da18962ce093", "input": "crumbs : trash. slime : ?", "output": ["trash"]}, "Prediction": "trash"} +{"Task": "task1155_bard_analogical_reasoning_trash_or_treasure", "Definition": ["Two analogies that relate items to whether they are trash or treasure is given in the form \"A : B. C : ?\". Items marked as \"trash\" are considered to not have value and can be disposed, whereas those marked as \"treasure\" are considered valuable items that must be retained. \"A : \"B relates item A to whether it is trash or treasure, as specified by B. 
Your task is to replace the question mark (?) with either `trash` or `treasure` for the given item C, following the \"A : B\" relation."], "Instance": {"id": "task1155-23c45053d2f94799bde7e975111fac67", "input": "apple : treasure. baby : ?", "output": ["treasure"]}, "Prediction": "treasure"} +{"Task": "task1385_anli_r1_entailment", "Definition": ["In this task, you will be presented with a premise and a hypothesis sentence. Determine whether the hypothesis sentence entails (implies), contradicts (opposes), or is neutral with respect to the given premise sentence. Please answer with \"Contradiction\", \"Neutral\", or \"Entailment\"."], "Instance": {"id": "task1385-df4ed18f581c4687b407a500e8d756a0", "input": "Premise: Murder in Amsterdam: The Death of Theo Van Gogh and the Limits of Tolerance is a 2006 book by Ian Buruma. \"The Guardian\" describes it as, \"part reportage, part essay.\" It explores the impact of mass immigration from Muslim countries on Dutch culture through the lens of the murder of film director and anti-immigration activist, Theo van Gogh. Hypothesis: Theo van Gogh made documentary films.", "output": ["Neutral"]}, "Prediction": "Entails"} +{"Task": "task1385_anli_r1_entailment", "Definition": ["In this task, you will be presented with a premise and a hypothesis sentence. Determine whether the hypothesis sentence entails (implies), contradicts (opposes), or is neutral with respect to the given premise sentence. Please answer with \"Contradiction\", \"Neutral\", or \"Entailment\"."], "Instance": {"id": "task1385-512a33e562e9417f9b08fd1b5910d1f8", "input": "Premise: KAUJ (100.9 FM, \"Oldies 101\") is a radio station licensed to serve Grafton, North Dakota. The station is owned by Simmons Broadcasting Inc. It airs an Oldies music format featuring satellite-fed programming from Scott Shannon's The True Oldies Channel from ABC Radio. 
Hypothesis: KAUJ specializes in playing music that was not recently released.", "output": ["Entailment"]}, "Prediction": "Entails"} +{"Task": "task1385_anli_r1_entailment", "Definition": ["In this task, you will be presented with a premise and a hypothesis sentence. Determine whether the hypothesis sentence entails (implies), contradicts (opposes), or is neutral with respect to the given premise sentence. Please answer with \"Contradiction\", \"Neutral\", or \"Entailment\"."], "Instance": {"id": "task1385-6ae6ee1367ea441fa8b19fb82b5ab9fb", "input": "Premise: The Wrath of the Gods is a 1914 American silent drama film directed by Reginald Barker, and starring Sessue Hayakawa, Tsuru Aoki, Frank Borzage, Thomas Kurihara and Henry Kotani in the lead roles. This was the first feature film appearance of Hayakawa and the directorial debut of Barker. Hypothesis: The Wrath of the Gods is a 1914 American silent drama film which was the first feature film appearance of Tsuru Aoki.", "output": ["Neutral"]}, "Prediction": "Entails"} +{"Task": "task1385_anli_r1_entailment", "Definition": ["In this task, you will be presented with a premise and a hypothesis sentence. Determine whether the hypothesis sentence entails (implies), contradicts (opposes), or is neutral with respect to the given premise sentence. Please answer with \"Contradiction\", \"Neutral\", or \"Entailment\"."], "Instance": {"id": "task1385-d19ab294a2bc47e08378d83be526ffdf", "input": "Premise: Earl Mountbatten of Burma is a title in the Peerage of the United Kingdom. It was created in 1947 for Rear Admiral Louis Mountbatten, 1st Viscount Mountbatten of Burma, the last Viceroy of India. The letters patent creating the title specified the following special remainder: Hypothesis: The last Viceroy of India was not alive in 1947", "output": ["Contradiction"]}, "Prediction": "Entails"} +{"Task": "task1385_anli_r1_entailment", "Definition": ["In this task, you will be presented with a premise and a hypothesis sentence. 
Determine whether the hypothesis sentence entails (implies), contradicts (opposes), or is neutral with respect to the given premise sentence. Please answer with \"Contradiction\", \"Neutral\", or \"Entailment\"."], "Instance": {"id": "task1385-3b1f947626a64a82a061d1f6b9b8f04f", "input": "Premise: Edna Krabappel-Flanders is a fictional character from the animated television series \"The Simpsons\", who was voiced by Marcia Wallace until her death in 2013. She is the teacher of Bart Simpson's 4th grade class at Springfield Elementary School, and Ned Flanders's wife in later seasons. Hypothesis: Edna Krabappel-Flanders was married to Ned Flanders since the beginning of the Simpsons.", "output": ["Contradiction"]}, "Prediction": "Entails"} +{"Task": "task1531_daily_dialog_type_classification", "Definition": ["You have to predict the type of conversation sentence given as input. Your predictions can be one among: 'information' if the input statement states a piece of information, 'question' if the input statement asks for some information, 'directive' if the input sentence involves the management or guidance of operations, and 'commissive' when it constitutes a statement that commits the speaker to some future action."], "Instance": {"id": "task1531-0560cababd9b471696723e972c8427c2", "input": " I know it does , and that's because it is . ", "output": ["information"]}, "Prediction": "information"} +{"Task": "task1531_daily_dialog_type_classification", "Definition": ["You have to predict the type of conversation sentence given as input. 
Your predictions can be one among: 'information' if the input statement states a piece of information, 'question' if the input statement asks for some information, 'directive' if the input sentence involves the management or guidance of operations, and 'commissive' when it constitutes a statement that commits the speaker to some future action."], "Instance": {"id": "task1531-364fd985fb5d406bbdbff2a62cb12314", "input": " How long will you be out of class ? ", "output": ["question"]}, "Prediction": "question"} +{"Task": "task1531_daily_dialog_type_classification", "Definition": ["You have to predict the type of conversation sentence given as input. Your predictions can be one among: 'information' if the input statement states a piece of information, 'question' if the input statement asks for some information, 'directive' if the input sentence involves the management or guidance of operations, and 'commissive' when it constitutes a statement that commits the speaker to some future action."], "Instance": {"id": "task1531-048fae4fcdd94299a3aa3e98b06a9547", "input": " Besides I want two copies of all the photos . ", "output": ["directive"]}, "Prediction": "information"} +{"Task": "task1531_daily_dialog_type_classification", "Definition": ["You have to predict the type of conversation sentence given as input. Your predictions can be one among: 'information' if the input statement states a piece of information, 'question' if the input statement asks for some information, 'directive' if the input sentence involves the management or guidance of operations, and 'commissive' when it constitutes a statement that commits the speaker to some future action."], "Instance": {"id": "task1531-7c683f22948b4b54a822394541f9a1f5", "input": " OK . ", "output": ["commissive"]}, "Prediction": "no"} +{"Task": "task1531_daily_dialog_type_classification", "Definition": ["You have to predict the type of conversation sentence given as input. 
Your predictions can be one among: 'information' if the input statement states a piece of information, 'question' if the input statement asks for some information, 'directive' if the input sentence involves the management or guidance of operations, and 'commissive' when it constitutes a statement that commits the speaker to some future action."], "Instance": {"id": "task1531-99a9bed5f5f9446eb99484282a520324", "input": " I also want to pack up my package . This is the notes . ", "output": ["directive"]}, "Prediction": "information"} +{"Task": "task1516_imppres_naturallanguageinference", "Definition": ["In this task, you are given a premise and hypothesis. The task is to classify them into three categories: 'positive' if the hypothesis supports the premise, 'negated' if it opposes the premise, and 'neutral' if it neither supports nor opposes it."], "Instance": {"id": "task1516-30b6ee5668fb4e9c863f8365627fcbc2", "input": "Premise : 'Had all six guys who haven't won boasted about Steve?','Hypothesis : There are exactly six guests who haven't won.'", "output": ["neutral"]}, "Prediction": "positive"} +{"Task": "task1516_imppres_naturallanguageinference", "Definition": ["In this task, you are given a premise and hypothesis. The task is to classify them into three categories: 'positive' if the hypothesis supports the premise, 'negated' if it opposes the premise, and 'neutral' if it neither supports nor opposes it."], "Instance": {"id": "task1516-24211f5b3d814634a47370ca16569af3", "input": "Premise : 'All seven dresses that haven't resembled those sketches aren't warping.','Hypothesis : There are exactly eight dresses that haven't resembled those sketches.'", "output": ["negated"]}, "Prediction": "positive"} +{"Task": "task1516_imppres_naturallanguageinference", "Definition": ["In this task, you are given a premise and hypothesis. 
The task is to classify them into three categories: 'positive' if the hypothesis supports the premise, 'negated' if it opposes the premise, and 'neutral' if it neither supports nor opposes it."], "Instance": {"id": "task1516-2c55fa35bfbd4b73971a657edf4db46e", "input": "Premise : 'The two boys that divorced aren't needing to bike to a mountain.','Hypothesis : There are exactly two museums that divorced.'", "output": ["neutral"]}, "Prediction": "negative"} +{"Task": "task1516_imppres_naturallanguageinference", "Definition": ["In this task, you are given a premise and hypothesis. The task is to classify them into three categories: 'positive' if the hypothesis supports the premise, 'negated' if it opposes the premise, and 'neutral' if it neither supports nor opposes it."], "Instance": {"id": "task1516-c5dac34bf8c14af8a520825ca09230f4", "input": "Premise : 'The nine adults who ascertained Carmen to dislike a lot of high schools hunt.','Hypothesis : There are exactly nine adults who ascertained Carmen to dislike a lot of high schools.'", "output": ["positive"]}, "Prediction": "positive"} +{"Task": "task1516_imppres_naturallanguageinference", "Definition": ["In this task, you are given a premise and hypothesis. The task is to classify them into three categories: 'positive' if the hypothesis supports the premise, 'negated' if it opposes the premise, and 'neutral' if it neither supports nor opposes it."], "Instance": {"id": "task1516-41c273774e3e412a8100c8f8bae52220", "input": "Premise : 'The four organizations that appreciate Sherry haven't dared Derek to talk about Lori.','Hypothesis : There are exactly four organizations that appreciate Sherry.'", "output": ["positive"]}, "Prediction": "neutral"} +{"Task": "task1394_meta_woz_task_classification", "Definition": ["In this task, you are given four sentences: a bot task sentence, a bot role sentence, a user task sentence and a user role sentence. Your job is to classify given sentences into one of the 47 different domains. 
The domains are: 'UPDATE_CALENDAR', 'PRESENT_IDEAS', 'MOVIE_LISTINGS', 'AUTO_SORT', 'GAME_RULES', 'CONTACT_MANAGER', 'BANK_BOT', 'MUSIC_SUGGESTER', 'CHECK_STATUS', 'PET_ADVICE', 'HOW_TO_BASIC', 'NAME_SUGGESTER', 'QUOTE_OF_THE_DAY_BOT', 'GUINESS_CHECK', 'INSURANCE', 'RESTAURANT_PICKER', 'MAKE_RESTAURANT_RESERVATIONS', 'WEDDING_PLANNER', 'SKI_BOT', 'HOME_BOT', 'PLAY_TIMES', 'BUS_SCHEDULE_BOT', 'WHAT_IS_IT', 'PHONE_PLAN_BOT', 'DECIDER_BOT', 'PHONE_SETTINGS', 'TIME_ZONE', 'LIBRARY_REQUEST', 'UPDATE_CONTACT', 'CATALOGUE_BOT', 'PROMPT_GENERATOR', 'SCAM_LOOKUP', 'SPORTS_INFO', 'POLICY_BOT', 'CITY_INFO', 'APARTMENT_FINDER', 'EVENT_RESERVE', 'SHOPPING', 'EDIT_PLAYLIST', 'LOOK_UP_INFO', 'ORDER_PIZZA', 'WEATHER_CHECK', 'APPOINTMENT_REMINDER', 'GEOGRAPHY', 'STORE_DETAILS', 'AGREEMENT_BOT', 'ALARM_SET'."], "Instance": {"id": "task1394-e447ff346c484666816ed33f250702f7", "input": "Bot's task: Give the user some music suggestions without first asking them what they like \nBot's role: You are a bot designed to give music suggestions \nUser's task: Ask the bot for music suggestions without telling it what you like \nUser's role: You are interacting with a bot designed to give music suggestions", "output": ["MUSIC_SUGGESTER"]}, "Prediction": "MUSIC_SUGGESTER"} +{"Task": "task1394_meta_woz_task_classification", "Definition": ["In this task, you are given four sentences: a bot task sentence, a bot role sentence, a user task sentence and a user role sentence. Your job is to classify given sentences into one of the 47 different domains. 
The domains are: 'UPDATE_CALENDAR', 'PRESENT_IDEAS', 'MOVIE_LISTINGS', 'AUTO_SORT', 'GAME_RULES', 'CONTACT_MANAGER', 'BANK_BOT', 'MUSIC_SUGGESTER', 'CHECK_STATUS', 'PET_ADVICE', 'HOW_TO_BASIC', 'NAME_SUGGESTER', 'QUOTE_OF_THE_DAY_BOT', 'GUINESS_CHECK', 'INSURANCE', 'RESTAURANT_PICKER', 'MAKE_RESTAURANT_RESERVATIONS', 'WEDDING_PLANNER', 'SKI_BOT', 'HOME_BOT', 'PLAY_TIMES', 'BUS_SCHEDULE_BOT', 'WHAT_IS_IT', 'PHONE_PLAN_BOT', 'DECIDER_BOT', 'PHONE_SETTINGS', 'TIME_ZONE', 'LIBRARY_REQUEST', 'UPDATE_CONTACT', 'CATALOGUE_BOT', 'PROMPT_GENERATOR', 'SCAM_LOOKUP', 'SPORTS_INFO', 'POLICY_BOT', 'CITY_INFO', 'APARTMENT_FINDER', 'EVENT_RESERVE', 'SHOPPING', 'EDIT_PLAYLIST', 'LOOK_UP_INFO', 'ORDER_PIZZA', 'WEATHER_CHECK', 'APPOINTMENT_REMINDER', 'GEOGRAPHY', 'STORE_DETAILS', 'AGREEMENT_BOT', 'ALARM_SET'."], "Instance": {"id": "task1394-61de97a571e043c7bfe5ad79df055f08", "input": "Bot's task: Inform the user that there aren't any seats available to reserve \nBot's role: You are a bot designed to make reservations for events \nUser's task: You want to reserve seats at a theatre \nUser's role: You are interacting with a bot designed to make reservations for events", "output": ["EVENT_RESERVE"]}, "Prediction": "EVENT_RESERVE"} +{"Task": "task1394_meta_woz_task_classification", "Definition": ["In this task, you are given four sentences: a bot task sentence, a bot role sentence, a user task sentence and a user role sentence. Your job is to classify given sentences into one of the 47 different domains. 
The domains are: 'UPDATE_CALENDAR', 'PRESENT_IDEAS', 'MOVIE_LISTINGS', 'AUTO_SORT', 'GAME_RULES', 'CONTACT_MANAGER', 'BANK_BOT', 'MUSIC_SUGGESTER', 'CHECK_STATUS', 'PET_ADVICE', 'HOW_TO_BASIC', 'NAME_SUGGESTER', 'QUOTE_OF_THE_DAY_BOT', 'GUINESS_CHECK', 'INSURANCE', 'RESTAURANT_PICKER', 'MAKE_RESTAURANT_RESERVATIONS', 'WEDDING_PLANNER', 'SKI_BOT', 'HOME_BOT', 'PLAY_TIMES', 'BUS_SCHEDULE_BOT', 'WHAT_IS_IT', 'PHONE_PLAN_BOT', 'DECIDER_BOT', 'PHONE_SETTINGS', 'TIME_ZONE', 'LIBRARY_REQUEST', 'UPDATE_CONTACT', 'CATALOGUE_BOT', 'PROMPT_GENERATOR', 'SCAM_LOOKUP', 'SPORTS_INFO', 'POLICY_BOT', 'CITY_INFO', 'APARTMENT_FINDER', 'EVENT_RESERVE', 'SHOPPING', 'EDIT_PLAYLIST', 'LOOK_UP_INFO', 'ORDER_PIZZA', 'WEATHER_CHECK', 'APPOINTMENT_REMINDER', 'GEOGRAPHY', 'STORE_DETAILS', 'AGREEMENT_BOT', 'ALARM_SET'."], "Instance": {"id": "task1394-344962cac5d343e48b7fa9217c4c28ed", "input": "Bot's task: Help the user with their request \nBot's role: You are a bot designed to provide instructions for basic tasks \nUser's task: Ask how to set up an account on a social media site \nUser's role: You are interacting with a bot that gives instructions on how to complete basic tasks", "output": ["HOW_TO_BASIC"]}, "Prediction": "HW_TO_BASIC"} +{"Task": "task1394_meta_woz_task_classification", "Definition": ["In this task, you are given four sentences: a bot task sentence, a bot role sentence, a user task sentence and a user role sentence. Your job is to classify given sentences into one of the 47 different domains. 
The domains are: 'UPDATE_CALENDAR', 'PRESENT_IDEAS', 'MOVIE_LISTINGS', 'AUTO_SORT', 'GAME_RULES', 'CONTACT_MANAGER', 'BANK_BOT', 'MUSIC_SUGGESTER', 'CHECK_STATUS', 'PET_ADVICE', 'HOW_TO_BASIC', 'NAME_SUGGESTER', 'QUOTE_OF_THE_DAY_BOT', 'GUINESS_CHECK', 'INSURANCE', 'RESTAURANT_PICKER', 'MAKE_RESTAURANT_RESERVATIONS', 'WEDDING_PLANNER', 'SKI_BOT', 'HOME_BOT', 'PLAY_TIMES', 'BUS_SCHEDULE_BOT', 'WHAT_IS_IT', 'PHONE_PLAN_BOT', 'DECIDER_BOT', 'PHONE_SETTINGS', 'TIME_ZONE', 'LIBRARY_REQUEST', 'UPDATE_CONTACT', 'CATALOGUE_BOT', 'PROMPT_GENERATOR', 'SCAM_LOOKUP', 'SPORTS_INFO', 'POLICY_BOT', 'CITY_INFO', 'APARTMENT_FINDER', 'EVENT_RESERVE', 'SHOPPING', 'EDIT_PLAYLIST', 'LOOK_UP_INFO', 'ORDER_PIZZA', 'WEATHER_CHECK', 'APPOINTMENT_REMINDER', 'GEOGRAPHY', 'STORE_DETAILS', 'AGREEMENT_BOT', 'ALARM_SET'."], "Instance": {"id": "task1394-02e3077f9dd248069f4e68681a7581c8", "input": "Bot's task: Tell the user that there are a lot of conditions that decide the cost of a wedding, so it's impossible to say which destination will cost more. \nBot's role: You are a bot that helps plan weddings \nUser's task: Ask the bot whether a beach wedding or a private island wedding would be more expensive. \nUser's role: You are interacting with a bot that helps plan weddings", "output": ["WEDDING_PLANNER"]}, "Prediction": "WEDDING_PLANNER"} +{"Task": "task1394_meta_woz_task_classification", "Definition": ["In this task, you are given four sentences: a bot task sentence, a bot role sentence, a user task sentence and a user role sentence. Your job is to classify given sentences into one of the 47 different domains. 
The domains are: 'UPDATE_CALENDAR', 'PRESENT_IDEAS', 'MOVIE_LISTINGS', 'AUTO_SORT', 'GAME_RULES', 'CONTACT_MANAGER', 'BANK_BOT', 'MUSIC_SUGGESTER', 'CHECK_STATUS', 'PET_ADVICE', 'HOW_TO_BASIC', 'NAME_SUGGESTER', 'QUOTE_OF_THE_DAY_BOT', 'GUINESS_CHECK', 'INSURANCE', 'RESTAURANT_PICKER', 'MAKE_RESTAURANT_RESERVATIONS', 'WEDDING_PLANNER', 'SKI_BOT', 'HOME_BOT', 'PLAY_TIMES', 'BUS_SCHEDULE_BOT', 'WHAT_IS_IT', 'PHONE_PLAN_BOT', 'DECIDER_BOT', 'PHONE_SETTINGS', 'TIME_ZONE', 'LIBRARY_REQUEST', 'UPDATE_CONTACT', 'CATALOGUE_BOT', 'PROMPT_GENERATOR', 'SCAM_LOOKUP', 'SPORTS_INFO', 'POLICY_BOT', 'CITY_INFO', 'APARTMENT_FINDER', 'EVENT_RESERVE', 'SHOPPING', 'EDIT_PLAYLIST', 'LOOK_UP_INFO', 'ORDER_PIZZA', 'WEATHER_CHECK', 'APPOINTMENT_REMINDER', 'GEOGRAPHY', 'STORE_DETAILS', 'AGREEMENT_BOT', 'ALARM_SET'."], "Instance": {"id": "task1394-0d265fb36cae4043b844f4d9595b1dce", "input": "Bot's task: Fulfil the user's request \nBot's role: You are a bot representing a bank \nUser's task: Try to put money from your chequing account into your savings account \nUser's role: You are interacting with a bot representing your bank", "output": ["BANK_BOT"]}, "Prediction": "BANK_BOT"} +{"Task": "task401_numeric_fused_head_reference", "Definition": ["In this task, you will use your knowledge about language (and common sense) to determine what element the marked number refers to. The numbers are marked with two underlines around them, like: _ number _. Your answer should be chosen from the given text, and should not contain other words."], "Instance": {"id": "task401-cffc6883bd934038b0bfea6359887aae", "input": "Danger Mouse: I ca n't think what they 'll do with 99 bicycle clips . Eh , by the way , have we got 99 bicycle clips ?\nPenfold: Eh , we 've got eh , we 've got _ 3 _ , chief . Do you think that will do ?\nDanger Mouse: Hardly .\nPenfold: Ere , perhaps when they said ' bicycle clip ' , they met toothbrushes ?\nDanger Mouse: Yes , may ... 
Penfold , you 're pottier than they are !", "output": ["clips"]}, "Prediction": "99 bicycle clips"} +{"Task": "task401_numeric_fused_head_reference", "Definition": ["In this task, you will use your knowledge about language (and common sense) to determine what element the marked number refers to. The numbers are marked with two underlines around them, like: _ number _. Your answer should be chosen from the given text, and should not contain other words."], "Instance": {"id": "task401-f39376392fa7428b8586860f55c39b4a", "input": "Lily: So that 's what happened to the bouquet . Well , good for Victoria , that 's an evil tradition .\nTed: So wait , you 're not gon na have _ one _ at your wedding ?\nLily: Hell , yeah , I 'm gon na throw that flower grenade and yell \" Crawl for it , Bitches ! \" It 's just what girls do .", "output": ["bouquet"]}, "Prediction": "evil tradition"} +{"Task": "task401_numeric_fused_head_reference", "Definition": ["In this task, you will use your knowledge about language (and common sense) to determine what element the marked number refers to. The numbers are marked with two underlines around them, like: _ number _. Your answer should be chosen from the given text, and should not contain other words."], "Instance": {"id": "task401-8755a4205e1a4014beac2370d33d4bdb", "input": "Hank Griffin: How many rats ?\nNick Burkhardt: What ?\nHank Griffin: For the report , how many rats do you think were down there ?\nNick Burkhardt: Sorry , I did n't stop to count them .\nHank Griffin: I 'd like to put a _ million _ , 'cause that 's what it seemed like .\nNick Burkhardt: Put a thousand . Nobody 's gon na believe a million .\nHank Griffin: No one 's gon na believe a thousand .", "output": ["rats"]}, "Prediction": "million"} +{"Task": "task401_numeric_fused_head_reference", "Definition": ["In this task, you will use your knowledge about language (and common sense) to determine what element the marked number refers to. 
The numbers are marked with two underlines around them, like: _ number _. Your answer should be chosen from the given text, and should not contain other words."], "Instance": {"id": "task401-3663d16bc9fd422cbfb8f255e436878c", "input": "Dr. Dustin Crenshaw: Mrs. Draper is back for her 27th hospital admission status , post - pancreatic pseudocyst .\nDr. Neela Rasgotra: _ 27 _ ?\nDr. Dustin Crenshaw: Yes , somebody needs to tell her the hospital stopped giving out frequent - flier miles . Oh , and she 's a chronic rectal discomfort , and you two get to work her up and review her old charts .\nDr. Neela Rasgotra: All 27 ?\nDr. Dustin Crenshaw: No , why do n't you just forget all about visits 6 , 13 , and uh , what do you think , 21 ? Yeah , that should keep it interesting . It 'll be like playing patient roulette .", "output": ["status"]}, "Prediction": "admission status"} +{"Task": "task401_numeric_fused_head_reference", "Definition": ["In this task, you will use your knowledge about language (and common sense) to determine what element the marked number refers to. The numbers are marked with two underlines around them, like: _ number _. Your answer should be chosen from the given text, and should not contain other words."], "Instance": {"id": "task401-17ac244c2dd5447fa32d4ccf1b8a4936", "input": "Cory Matthews: Topanga , I warned you not to ... ... Oh , boy , why 'd you have to wear that ?\nTopanga Lawrence: Because I 'm a damsel . But not the distressed kind . _ One _ who is very together and in complete control of her own destiny .", "output": ["damsel"]}, "Prediction": "damsel"} +{"Task": "task1598_nyc_long_text_generation", "Definition": ["The task is to write a full sentence or two using all of the information given. The sentence(s) will be a brief review of a restaurant. 
Use all of the information provided."], "Instance": {"id": "task1598-206fec3617fa4e7a93655b6fc2e5ba5f", "input": "name[xname], recommend[yes], qual[acceptable], location[Murray Hill]", "output": ["xname is the best place because it is in Murray Hill with adequate food."]}, "Prediction": "xname, recommended, acceptable, location"} +{"Task": "task1598_nyc_long_text_generation", "Definition": ["The task is to write a full sentence or two using all of the information given. The sentence(s) will be a brief review of a restaurant. Use all of the information provided."], "Instance": {"id": "task1598-7c5b2fe7332f4e6b88e330e74994f0c1", "input": "name[xname], recommend[yes], cuisine[Italian], decor[bad], qual[acceptable], location[Chinatown], price[affordable], service[bad]", "output": ["xname is the only restaurant I would suggest since it provides rude staff and bad ambiance. It is an Italian place with satisfying food. it's in Chinatown. xname is affordable."]}, "Prediction": "xname, Italian restaurant, located in"} +{"Task": "task1598_nyc_long_text_generation", "Definition": ["The task is to write a full sentence or two using all of the information given. The sentence(s) will be a brief review of a restaurant. Use all of the information provided."], "Instance": {"id": "task1598-482e8d1c80be4613a5eb2897abb9dcea", "input": "name[xname], eattype[coffee shop], cuisine[Indian], price[\u00a320-25], rating[high], familyFriendly[yes]", "output": ["xname is a coffee shop that provides Indian food. The price range is \u00a320-25. It has a high customer rating and is kids friendly."]}, "Prediction": "xname, coffee shop, Indian restaurant"} +{"Task": "task1598_nyc_long_text_generation", "Definition": ["The task is to write a full sentence or two using all of the information given. The sentence(s) will be a brief review of a restaurant. 
Use all of the information provided."], "Instance": {"id": "task1598-c2b840950256402d9f8955c009fa1ba4", "input": "name[xname], cuisine[Chinese], price[more than \u00a330], familyFriendly[yes], near[xnear]", "output": ["Located near xnear, xname Chinese restaurant is child friendly but be prepared to pay more than 30 British pounds."]}, "Prediction": "xname, Chinese restaurant, prices from"} +{"Task": "task1598_nyc_long_text_generation", "Definition": ["The task is to write a full sentence or two using all of the information given. The sentence(s) will be a brief review of a restaurant. Use all of the information provided."], "Instance": {"id": "task1598-5996ee193ab749d5b4b1a738c0f36889", "input": "name[xname], cuisine[Indian], location[city centre], familyFriendly[yes]", "output": ["xname provides Indian food It is located in the city centre."]}, "Prediction": "xname, Indian restaurant, located in"} +{"Task": "task1615_sick_tclassify_b_relation_a", "Definition": ["In this task, given 2 input sentences, you must classify the relation between them. If the second sentence has a similar meaning to that of the first sentence then the output is ' B_entails_A', if the second sentence has the opposite meaning to the first sentence then it is classified as ' B_contradicts_A'. If you cannot clearly ascertain agreement/disagreement between the two sentences, the label is ' B_neutral_A'."], "Instance": {"id": "task1615-11e57fb93c4a4a6d83ff1124cd1297a7", "input": "sentence_A: A woman is coating a pork chop. sentence_B: A cook is coating a pork chop", "output": ["B_neutral_A"]}, "Prediction": "B_entails_A"} +{"Task": "task1615_sick_tclassify_b_relation_a", "Definition": ["In this task, given 2 input sentences, you must classify the relation between them. If the second sentence has a similar meaning to that of the first sentence then the output is ' B_entails_A', if the second sentence has the opposite meaning to the first sentence then it is classified as ' B_contradicts_A'. 
If you cannot clearly ascertain agreement/disagreement between the two sentences, the label is ' B_neutral_A'."], "Instance": {"id": "task1615-90f51520d8594027bbf9d84ed66e9a87", "input": "sentence_A: A woman is cutting a tomato. sentence_B: A woman is slicing a tomato", "output": ["B_entails_A"]}, "Prediction": "B_entails_A"} +{"Task": "task1615_sick_tclassify_b_relation_a", "Definition": ["In this task, given 2 input sentences, you must classify the relation between them. If the second sentence has a similar meaning to that of the first sentence then the output is ' B_entails_A', if the second sentence has the opposite meaning to the first sentence then it is classified as ' B_contradicts_A'. If you cannot clearly ascertain agreement/disagreement between the two sentences, the label is ' B_neutral_A'."], "Instance": {"id": "task1615-4584884377cb43e5945194d051ddf2ea", "input": "sentence_A: A boy is knocking a dog with a bottle of water. sentence_B: The dog is knocking a boy into the water", "output": ["B_neutral_A"]}, "Prediction": "B_contradicts_A"} +{"Task": "task1615_sick_tclassify_b_relation_a", "Definition": ["In this task, given 2 input sentences, you must classify the relation between them. If the second sentence has a similar meaning to that of the first sentence then the output is ' B_entails_A', if the second sentence has the opposite meaning to the first sentence then it is classified as ' B_contradicts_A'. If you cannot clearly ascertain agreement/disagreement between the two sentences, the label is ' B_neutral_A'."], "Instance": {"id": "task1615-17f070024c4440aeaec56bda5ab1ba38", "input": "sentence_A: A woman is standing near three children. sentence_B: There is no woman standing near three children", "output": ["B_contradicts_A"]}, "Prediction": "B_contradicts_A"} +{"Task": "task1615_sick_tclassify_b_relation_a", "Definition": ["In this task, given 2 input sentences, you must classify the relation between them. 
If the second sentence has a similar meaning to that of the first sentence then the output is ' B_entails_A', if the second sentence has the opposite meaning to the first sentence then it is classified as ' B_contradicts_A'. If you cannot clearly ascertain agreement/disagreement between the two sentences, the label is ' B_neutral_A'."], "Instance": {"id": "task1615-cb82f8ea5d194d01b65a5500a8c8e70b", "input": "sentence_A: A man is jumping a wall. sentence_B: A man is standing in front of a wall", "output": ["B_contradicts_A"]}, "Prediction": "B_contradicts_A"} +{"Task": "task970_sherliic_causal_relationship", "Definition": ["In this task, you will be given two sentences sentence1 and sentence2. You should decide whether the second sentence is entailed(agreed) by the first sentence. If it does entail, answer \"yes\", else answer \"no\"."], "Instance": {"id": "task970-52ff5e21e27742aa9bdcdf07f6445da1", "input": "sentence1:employer is eliminating sports_team*professional_sports_team sentence1:employer is hosting sports_team*professional_sports_team ", "output": ["no"]}, "Prediction": "no"} +{"Task": "task970_sherliic_causal_relationship", "Definition": ["In this task, you will be given two sentences sentence1 and sentence2. You should decide whether the second sentence is entailed(agreed) by the first sentence. If it does entail, answer \"yes\", else answer \"no\"."], "Instance": {"id": "task970-6afa615e848f4d4fb3877427fc8464e6", "input": "sentence1:person is elected President of location sentence1:person is leading location ", "output": ["yes"]}, "Prediction": "yes"} +{"Task": "task970_sherliic_causal_relationship", "Definition": ["In this task, you will be given two sentences sentence1 and sentence2. You should decide whether the second sentence is entailed(agreed) by the first sentence. 
If it does entail, answer \"yes\", else answer \"no\"."], "Instance": {"id": "task970-7ff464ed44fc43c6a8056eb55e80a3de", "input": "sentence1:award_winner is unveiling at time.recurring_event sentence1:award_winner announces At time.recurring_event ", "output": ["yes"]}, "Prediction": "yes"} +{"Task": "task970_sherliic_causal_relationship", "Definition": ["In this task, you will be given two sentences sentence1 and sentence2. You should decide whether the second sentence is entailed(agreed) by the first sentence. If it does entail, answer \"yes\", else answer \"no\"."], "Instance": {"id": "task970-cb348d79023a4b88984b91378873b22d", "input": "sentence1:organization is cancelling award.award_winning_work sentence1:award.award_winning_work is airing on organization ", "output": ["no"]}, "Prediction": "no"} +{"Task": "task970_sherliic_causal_relationship", "Definition": ["In this task, you will be given two sentences sentence1 and sentence2. You should decide whether the second sentence is entailed(agreed) by the first sentence. If it does entail, answer \"yes\", else answer \"no\"."], "Instance": {"id": "task970-e4efd9bfa47d45139e459cf12dd962e2", "input": "sentence1:employer is dominating location sentence1:employer is taking location ", "output": ["yes"]}, "Prediction": "yes"} +{"Task": "task1390_wscfixed_coreference", "Definition": ["You are given a context, a pronoun, and a noun in this task. The given pronoun is shown in the context within parentheses. You should determine if the pronoun refers to the given noun or not. Please answer with \"True\" and \"False\"."], "Instance": {"id": "task1390-015832aca1b54342ae5a87f709ea7e2c", "input": "The woman held the girl against (her) will. Pronoun:her Noun: The woman", "output": ["False"]}, "Prediction": "True"} +{"Task": "task1390_wscfixed_coreference", "Definition": ["You are given a context, a pronoun, and a noun in this task. The given pronoun is shown in the context within parentheses. 
You should determine if the pronoun refers to the given noun or not. Please answer with \"True\" and \"False\"."], "Instance": {"id": "task1390-8779c02c28c04d9c80fccbd1ed3b012b", "input": "Anne gave birth to a daughter last month. (She) is a very charming woman. Pronoun:She Noun: Anne", "output": ["True"]}, "Prediction": "True"} +{"Task": "task1390_wscfixed_coreference", "Definition": ["You are given a context, a pronoun, and a noun in this task. The given pronoun is shown in the context within parentheses. You should determine if the pronoun refers to the given noun or not. Please answer with \"True\" and \"False\"."], "Instance": {"id": "task1390-4c1b834a738c432f9ffe4e3751fb124a", "input": "Papa looked down at the children 's faces , so puzzled and sad now. It was bad enough that (they) had to be denied so many things because he couldn't afford them. Pronoun:they Noun: children", "output": ["True"]}, "Prediction": "True"} +{"Task": "task1390_wscfixed_coreference", "Definition": ["You are given a context, a pronoun, and a noun in this task. The given pronoun is shown in the context within parentheses. You should determine if the pronoun refers to the given noun or not. Please answer with \"True\" and \"False\"."], "Instance": {"id": "task1390-1039783c3d424b1a89e21b2fc2a448df", "input": "Paul tried to call George on the phone, but (he) wasn't available. Pronoun:he Noun: Paul", "output": ["False"]}, "Prediction": "True"} +{"Task": "task1390_wscfixed_coreference", "Definition": ["You are given a context, a pronoun, and a noun in this task. The given pronoun is shown in the context within parentheses. You should determine if the pronoun refers to the given noun or not. Please answer with \"True\" and \"False\"."], "Instance": {"id": "task1390-478874dd80594af286e6b197cd07f9cd", "input": "Thomson visited Cooper 's grave in 1765. At that date (he) had been travelling for five years. 
Pronoun:he Noun: Thomson", "output": ["True"]}, "Prediction": "True"} +{"Task": "task199_mnli_classification", "Definition": ["In this task, you're given a pair of sentences, sentence 1 and sentence 2. Your job is to determine if the two sentences clearly agree/disagree with each other, or if this can't be determined. Indicate your answer as yes or no respectively."], "Instance": {"id": "task199-5ab4602faad44c5d9c8d81aed80d90e9", "input": "Sentence 1: For three decades, Krakew existed as an independent city-state, though it was again incorporated into the Austrian partition in 1846. Sentence 2: Krakew fought not to be a part of the Austrian 1846 partition.", "output": ["no"]}, "Prediction": "no"} +{"Task": "task199_mnli_classification", "Definition": ["In this task, you're given a pair of sentences, sentence 1 and sentence 2. Your job is to determine if the two sentences clearly agree/disagree with each other, or if this can't be determined. Indicate your answer as yes or no respectively."], "Instance": {"id": "task199-a3975e54f4894f99858ba26b1033358c", "input": "Sentence 1: The club itself was a bastion of Ascendancy establishment. Sentence 2: The club was a fortification of Ascendancy establishment.", "output": ["yes"]}, "Prediction": "yes"} +{"Task": "task199_mnli_classification", "Definition": ["In this task, you're given a pair of sentences, sentence 1 and sentence 2. Your job is to determine if the two sentences clearly agree/disagree with each other, or if this can't be determined. Indicate your answer as yes or no respectively."], "Instance": {"id": "task199-b34fc897119042379c6229db90dbe59a", "input": "Sentence 1: Impaling all the old men took a long time and when the armored men stood the pikes upright, the sharp tips gleaming from the elders' gaping mouths. Sentence 2: It took a long time but old men were impailed. 
", "output": ["yes"]}, "Prediction": "yes"} +{"Task": "task199_mnli_classification", "Definition": ["In this task, you're given a pair of sentences, sentence 1 and sentence 2. Your job is to determine if the two sentences clearly agree/disagree with each other, or if this can't be determined. Indicate your answer as yes or no respectively."], "Instance": {"id": "task199-9f3e5f6241ce41c7ba83bf9a2b6be11d", "input": "Sentence 1: Ninety-five percent of the total amount of sulfur dioxide allowances allocated each year under Section 423 will be allocated based on the amount of sulfur dioxide allowances allocated under the Acid Rain Program for 2010 and thereafter and that are held in allowance accounts in the Allowance Tracking System on the date 180 days after enactment. Sentence 2: Most of the sulfur dioxide that is allowed are controlled by the Acid Rain Program.", "output": ["no", "yes"]}, "Prediction": "yes"} +{"Task": "task199_mnli_classification", "Definition": ["In this task, you're given a pair of sentences, sentence 1 and sentence 2. Your job is to determine if the two sentences clearly agree/disagree with each other, or if this can't be determined. Indicate your answer as yes or no respectively."], "Instance": {"id": "task199-050707010b334464b9fa3ef7c7e14de2", "input": "Sentence 1: well let's see if you're in Plano and i'm in Plano are you in east Plano or west Plano Sentence 2: We are both in Plano, which side are you in?", "output": ["yes"]}, "Prediction": "yes"} +{"Task": "task034_winogrande_question_modification_object", "Definition": ["In this task, you're given a fill-in-the-blank question that contains two object names. Additionally, you're given one answer which is one of the objects present in the question. In this task, you need to minimally change the given question so that the answer flips to another object in the question. Remember the question has to be about both objects which are related but different; for example, \"trophy\" and \"suitcase\". 
This task typically involves replacing one word i.e., the 'trigger word' with its antonym (e.g., changing from \"small\" to \"big\"). You should not change any content in the given question beyond a word or two, i.e., the trigger word/phrase. The expected answer to your question must not be associated with just the trigger word; instead, it should depend on the context present in the question. The expected answer should not be ambiguous. For your question, there should be an agreed upon answer to fill in the blank. Your generations should NOT contain potentially explicit, offensive, or adult content. In your generated question, retain the object names from the input question. Avoid repeating the same style or phrase in generating your modified question e.g. this task can be always solved using a simple negation i.e. by adding not, never, etc. Instead, try to increase the word diversity. Your question must contain at least 15 and at most 30 words. Your question must have at least 70% overlapping words with the given question. You must utilize the given context word while writing the question. Your question must contain only one blank. The two objects should be used ONCE in your question. Here is a list of attributes and associated contrastive words that may help write contrastive trigger words. 
Note that contrastive trigger words help in flipping the label.\n| Attribute | triggerword | contrastive triggerword | \n| age | old | new | \n| altitude | low | high |\n| area | small | vast | \n| brightness | dark | light | \n| clarity | obscure | clear | \n| cleanness | dirty | clean | \n| complexity | simple | complex | \n| cost | cheap | expensive |\n| density | sparse | dense |\n| depth | shallow | deep |\n| distance | near | far | \n| electric conductivity | low | high |\n| flexibility | rigid | flexible |\n| granularity | fine | coarse | \n| hardness | soft | hard | \n| length | short | long | \n| magnitude | small | large | \n| mass | small | large | \n| odor | weak | strong | \n| pressure | low | high | \n| resistance | low | high | \n| shape | round | sharp |\n| shape | flat | spiky |\n| size | small | large | \n| sound | quiet | loud | \n| sound pitch | low | high | \n| speed | slow | fast | \n| stability | unstable | stable | \n| strength | weak | strong | \n| temperature | low | high | \n| texture | smooth | rough | \n| thermal conductivity | low | high |\n| thickness | thin | thick | \n| volume | small | large |\n| weight | light | heavy | \n| width | narrow | wide | \n| location | in | out |\n| location | up | down |\n| location | above | below |\n| location | on | off |\n| location | to | from |\n"], "Instance": {"id": "task034-80c557ac3bf84a3b80ad124ab3e05c67", "input": "Context Word: cardboard. \nQuestion: The shirt was in the cardboard box and the book was sitting on top of it when it rained, so the _ got wet. \nAnswer: book.", "output": ["The shirt was in the cardboard box and the book was sitting on top of it when it rained, so the _ stayed dry."]}, "Prediction": "The shirt was in the cardboard box and the"} +{"Task": "task034_winogrande_question_modification_object", "Definition": ["In this task, you're given a fill-in-the-blank question that contains two object names. 
Additionally, you're given one answer which is one of the objects present in the question. In this task, you need to minimally change the given question so that the answer flips to another object in the question. Remember the question has to be about both objects which are related but different; for example, \"trophy\" and \"suitcase\". This task typically involves replacing one word i.e., the 'trigger word' with its antonym (e.g., changing from \"small\" to \"big\"). You should not change any content in the given question beyond a word or two, i.e., the trigger word/phrase. The expected answer to your question must not be associated with just the trigger word; instead, it should depend on the context present in the question. The expected answer should not be ambiguous. For your question, there should be an agreed upon answer to fill in the blank. Your generations should NOT contain potentially explicit, offensive, or adult content. In your generated question, retain the object names from the input question. Avoid repeating the same style or phrase in generating your modified question e.g. this task can be always solved using a simple negation i.e. by adding not, never, etc. Instead, try to increase the word diversity. Your question must contain at least 15 and at most 30 words. Your question must have at least 70% overlapping words with the given question. You must utilize the given context word while writing the question. Your question must contain only one blank. The two objects should be used ONCE in your question. Here is a list of attributes and associated contrastive words that may help write contrastive trigger words. 
Note that contrastive trigger words help in flipping the label.\n| Attribute | triggerword | contrastive triggerword | \n| age | old | new | \n| altitude | low | high |\n| area | small | vast | \n| brightness | dark | light | \n| clarity | obscure | clear | \n| cleanness | dirty | clean | \n| complexity | simple | complex | \n| cost | cheap | expensive |\n| density | sparse | dense |\n| depth | shallow | deep |\n| distance | near | far | \n| electric conductivity | low | high |\n| flexibility | rigid | flexible |\n| granularity | fine | coarse | \n| hardness | soft | hard | \n| length | short | long | \n| magnitude | small | large | \n| mass | small | large | \n| odor | weak | strong | \n| pressure | low | high | \n| resistance | low | high | \n| shape | round | sharp |\n| shape | flat | spiky |\n| size | small | large | \n| sound | quiet | loud | \n| sound pitch | low | high | \n| speed | slow | fast | \n| stability | unstable | stable | \n| strength | weak | strong | \n| temperature | low | high | \n| texture | smooth | rough | \n| thermal conductivity | low | high |\n| thickness | thin | thick | \n| volume | small | large |\n| weight | light | heavy | \n| width | narrow | wide | \n| location | in | out |\n| location | up | down |\n| location | above | below |\n| location | on | off |\n| location | to | from |\n"], "Instance": {"id": "task034-7e673c6cad2642d4b41f7d1b4535fac7", "input": "Context Word: electricity. \nQuestion: Al got solar panels and a small generator installed at his house for electricity as the _ would be his main power supply. \nAnswer: panels.", "output": ["Al got solar panels and a small generator installed at his house for electricity as the _ would be his backup power supply."]}, "Prediction": "Al got solar panels and a small generator"} +{"Task": "task034_winogrande_question_modification_object", "Definition": ["In this task, you're given a fill-in-the-blank question that contains two object names. 
Additionally, you're given one answer which is one of the objects present in the question. In this task, you need to minimally change the given question so that the answer flips to another object in the question. Remember the question has to be about both objects which are related but different; for example, \"trophy\" and \"suitcase\". This task typically involves replacing one word i.e., the 'trigger word' with its antonym (e.g., changing from \"small\" to \"big\"). You should not change any content in the given question beyond a word or two, i.e., the trigger word/phrase. The expected answer to your question must not be associated with just the trigger word; instead, it should depend on the context present in the question. The expected answer should not be ambiguous. For your question, there should be an agreed upon answer to fill in the blank. Your generations should NOT contain potentially explicit, offensive, or adult content. In your generated question, retain the object names from the input question. Avoid repeating the same style or phrase in generating your modified question e.g. this task can be always solved using a simple negation i.e. by adding not, never, etc. Instead, try to increase the word diversity. Your question must contain at least 15 and at most 30 words. Your question must have at least 70% overlapping words with the given question. You must utilize the given context word while writing the question. Your question must contain only one blank. The two objects should be used ONCE in your question. Here is a list of attributes and associated contrastive words that may help write contrastive trigger words. 
Note that contrastive trigger words help in flipping the label.\n| Attribute | triggerword | contrastive triggerword | \n| age | old | new | \n| altitude | low | high |\n| area | small | vast | \n| brightness | dark | light | \n| clarity | obscure | clear | \n| cleanness | dirty | clean | \n| complexity | simple | complex | \n| cost | cheap | expensive |\n| density | sparse | dense |\n| depth | shallow | deep |\n| distance | near | far | \n| electric conductivity | low | high |\n| flexibility | rigid | flexible |\n| granularity | fine | coarse | \n| hardness | soft | hard | \n| length | short | long | \n| magnitude | small | large | \n| mass | small | large | \n| odor | weak | strong | \n| pressure | low | high | \n| resistance | low | high | \n| shape | round | sharp |\n| shape | flat | spiky |\n| size | small | large | \n| sound | quiet | loud | \n| sound pitch | low | high | \n| speed | slow | fast | \n| stability | unstable | stable | \n| strength | weak | strong | \n| temperature | low | high | \n| texture | smooth | rough | \n| thermal conductivity | low | high |\n| thickness | thin | thick | \n| volume | small | large |\n| weight | light | heavy | \n| width | narrow | wide | \n| location | in | out |\n| location | up | down |\n| location | above | below |\n| location | on | off |\n| location | to | from |\n"], "Instance": {"id": "task034-9c82906e872a4c31996131615d07d811", "input": "Context Word: Slider. \nQuestion: The chef recommended sliders to the VIP guests, but they ordered ribeyes instead. They were told the _ were tastier. \nAnswer: ribeyes.", "output": ["The chef recommended sliders to the VIP guests, but they ordered ribeyes instead. They were told the _ were not fresh."]}, "Prediction": "The chef recommended sliders to the VIP guests"} +{"Task": "task034_winogrande_question_modification_object", "Definition": ["In this task, you're given a fill-in-the-blank question that contains two object names. 
Additionally, you're given one answer which is one of the objects present in the question. In this task, you need to minimally change the given question so that the answer flips to another object in the question. Remember the question has to be about both objects which are related but different; for example, \"trophy\" and \"suitcase\". This task typically involves replacing one word i.e., the 'trigger word' with its antonym (e.g., changing from \"small\" to \"big\"). You should not change any content in the given question beyond a word or two, i.e., the trigger word/phrase. The expected answer to your question must not be associated with just the trigger word; instead, it should depend on the context present in the question. The expected answer should not be ambiguous. For your question, there should be an agreed upon answer to fill in the blank. Your generations should NOT contain potentially explicit, offensive, or adult content. In your generated question, retain the object names from the input question. Avoid repeating the same style or phrase in generating your modified question e.g. this task can be always solved using a simple negation i.e. by adding not, never, etc. Instead, try to increase the word diversity. Your question must contain at least 15 and at most 30 words. Your question must have at least 70% overlapping words with the given question. You must utilize the given context word while writing the question. Your question must contain only one blank. The two objects should be used ONCE in your question. Here is a list of attributes and associated contrastive words that may help write contrastive trigger words. 
Note that contrastive trigger words help in flipping the label.\n| Attribute | triggerword | contrastive triggerword | \n| age | old | new | \n| altitude | low | high |\n| area | small | vast | \n| brightness | dark | light | \n| clarity | obscure | clear | \n| cleanness | dirty | clean | \n| complexity | simple | complex | \n| cost | cheap | expensive |\n| density | sparse | dense |\n| depth | shallow | deep |\n| distance | near | far | \n| electric conductivity | low | high |\n| flexibility | rigid | flexible |\n| granularity | fine | coarse | \n| hardness | soft | hard | \n| length | short | long | \n| magnitude | small | large | \n| mass | small | large | \n| odor | weak | strong | \n| pressure | low | high | \n| resistance | low | high | \n| shape | round | sharp |\n| shape | flat | spiky |\n| size | small | large | \n| sound | quiet | loud | \n| sound pitch | low | high | \n| speed | slow | fast | \n| stability | unstable | stable | \n| strength | weak | strong | \n| temperature | low | high | \n| texture | smooth | rough | \n| thermal conductivity | low | high |\n| thickness | thin | thick | \n| volume | small | large |\n| weight | light | heavy | \n| width | narrow | wide | \n| location | in | out |\n| location | up | down |\n| location | above | below |\n| location | on | off |\n| location | to | from |\n"], "Instance": {"id": "task034-ccf5f05c287643f8a6d19de82a705ffa", "input": "Context Word: boyfriend. \nQuestion: The girl chose to get a game for her boyfriend instead of flowers because her boyfriend didn't like _ . \nAnswer: flowers.", "output": ["The girl chose to get a game for her boyfriend instead of flowers because her boyfriend liked _ ."]}, "Prediction": "The girl chose to get a game for"} +{"Task": "task034_winogrande_question_modification_object", "Definition": ["In this task, you're given a fill-in-the-blank question that contains two object names. Additionally, you're given one answer which is one of the objects present in the question. 
In this task, you need to minimally change the given question so that the answer flips to another object in the question. Remember the question has to be about both objects which are related but different; for example, \"trophy\" and \"suitcase\". This task typically involves replacing one word i.e., the 'trigger word' with its antonym (e.g., changing from \"small\" to \"big\"). You should not change any content in the given question beyond a word or two, i.e., the trigger word/phrase. The expected answer to your question must not be associated with just the trigger word; instead, it should depend on the context present in the question. The expected answer should not be ambiguous. For your question, there should be an agreed upon answer to fill in the blank. Your generations should NOT contain potentially explicit, offensive, or adult content. In your generated question, retain the object names from the input question. Avoid repeating the same style or phrase in generating your modified question e.g. this task can be always solved using a simple negation i.e. by adding not, never, etc. Instead, try to increase the word diversity. Your question must contain at least 15 and at most 30 words. Your question must have at least 70% overlapping words with the given question. You must utilize the given context word while writing the question. Your question must contain only one blank. The two objects should be used ONCE in your question. Here is a list of attributes and associated contrastive words that may help write contrastive trigger words. 
Note that contrastive trigger words help in flipping the label.\n| Attribute | triggerword | contrastive triggerword | \n| age | old | new | \n| altitude | low | high |\n| area | small | vast | \n| brightness | dark | light | \n| clarity | obscure | clear | \n| cleanness | dirty | clean | \n| complexity | simple | complex | \n| cost | cheap | expensive |\n| density | sparse | dense |\n| depth | shallow | deep |\n| distance | near | far | \n| electric conductivity | low | high |\n| flexibility | rigid | flexible |\n| granularity | fine | coarse | \n| hardness | soft | hard | \n| length | short | long | \n| magnitude | small | large | \n| mass | small | large | \n| odor | weak | strong | \n| pressure | low | high | \n| resistance | low | high | \n| shape | round | sharp |\n| shape | flat | spiky |\n| size | small | large | \n| sound | quiet | loud | \n| sound pitch | low | high | \n| speed | slow | fast | \n| stability | unstable | stable | \n| strength | weak | strong | \n| temperature | low | high | \n| texture | smooth | rough | \n| thermal conductivity | low | high |\n| thickness | thin | thick | \n| volume | small | large |\n| weight | light | heavy | \n| width | narrow | wide | \n| location | in | out |\n| location | up | down |\n| location | above | below |\n| location | on | off |\n| location | to | from |\n"], "Instance": {"id": "task034-8663a23ba7624510963d15b406c982b8", "input": "Context Word: smoking. \nQuestion: The wood was still smoking while the grass was not since _ was on fire. \nAnswer: wood.", "output": ["The wood was still smoking while the grass was not since _ was on doused."]}, "Prediction": "The wood was still smoking while the grass was"} +{"Task": "task133_winowhy_reason_plausibility_detection", "Definition": ["In this task you need to indicate the plausibility of reasoning for the pronoun coreference relations. 
Each of the provided inputs contains a sentence with a target pronoun and a sentence that justifies which noun phrase the pronoun refers to. Correct reasons do not need to use all the knowledge from the sentence. The resolution of the pronoun coreference relations typically involve one or multiple following knowledge types about commonsense: First: 'Property', the knowledge about property of objects (e.g., ice is cold). Second: 'Object', the knowledge about objects (e.g., cats have ears). Third: 'Eventuality', the knowledge about eventuality (e.g., 'wake up' happens before 'open eyes'). Forth: 'Spatial', the knowledge about spatial position (e.g., object at the back can be blocked). Fifth: 'Quantity', the knowledge about numbers (e.g., 2 is smaller than 10). Sixth: all other knowledge if above ones are not suitable. You should answer 'Correct' if the reasoning made sense, otherwise, you should answer 'Wrong'."], "Instance": {"id": "task133-2bc69d144fa74791be2ebe9d607b1177", "input": "Sentence: Many people start to read Paul's books and can't put them down. They are gripped because Paul writes so well.\n Reason: The 'They' refers to people because People love Paul's books. They are popular. So the books are popular, not the people. \n Question: Is the above reasoning correct or wrong? ", "output": ["Wrong"]}, "Prediction": "False"} +{"Task": "task133_winowhy_reason_plausibility_detection", "Definition": ["In this task you need to indicate the plausibility of reasoning for the pronoun coreference relations. Each of the provided inputs contains a sentence with a target pronoun and a sentence that justifies which noun phrase the pronoun refers to. Correct reasons do not need to use all the knowledge from the sentence. The resolution of the pronoun coreference relations typically involve one or multiple following knowledge types about commonsense: First: 'Property', the knowledge about property of objects (e.g., ice is cold). 
Second: 'Object', the knowledge about objects (e.g., cats have ears). Third: 'Eventuality', the knowledge about eventuality (e.g., 'wake up' happens before 'open eyes'). Forth: 'Spatial', the knowledge about spatial position (e.g., object at the back can be blocked). Fifth: 'Quantity', the knowledge about numbers (e.g., 2 is smaller than 10). Sixth: all other knowledge if above ones are not suitable. You should answer 'Correct' if the reasoning made sense, otherwise, you should answer 'Wrong'."], "Instance": {"id": "task133-819f4360df8b4823ba2c1976a4a613e4", "input": "Sentence: Steve follows Fred's example in everything. He admires him hugely.\n Reason: The 'He' refers to steve because fred was role model for steve. \n Question: Is the above reasoning correct or wrong? ", "output": ["Correct"]}, "Prediction": "True"} +{"Task": "task133_winowhy_reason_plausibility_detection", "Definition": ["In this task you need to indicate the plausibility of reasoning for the pronoun coreference relations. Each of the provided inputs contains a sentence with a target pronoun and a sentence that justifies which noun phrase the pronoun refers to. Correct reasons do not need to use all the knowledge from the sentence. The resolution of the pronoun coreference relations typically involve one or multiple following knowledge types about commonsense: First: 'Property', the knowledge about property of objects (e.g., ice is cold). Second: 'Object', the knowledge about objects (e.g., cats have ears). Third: 'Eventuality', the knowledge about eventuality (e.g., 'wake up' happens before 'open eyes'). Forth: 'Spatial', the knowledge about spatial position (e.g., object at the back can be blocked). Fifth: 'Quantity', the knowledge about numbers (e.g., 2 is smaller than 10). Sixth: all other knowledge if above ones are not suitable. 
You should answer 'Correct' if the reasoning made sense, otherwise, you should answer 'Wrong'."], "Instance": {"id": "task133-c282ea1a44cb4becb9dcb8a1f72cb70c", "input": "Sentence: I tried to paint a picture of an orchard, with lemons in the lemon trees, but they came out looking more like light bulbs.\n Reason: The 'they' refers to lemons because lemons are round look more like a light bulb. \n Question: Is the above reasoning correct or wrong? ", "output": ["Correct"]}, "Prediction": "True"} +{"Task": "task133_winowhy_reason_plausibility_detection", "Definition": ["In this task you need to indicate the plausibility of reasoning for the pronoun coreference relations. Each of the provided inputs contains a sentence with a target pronoun and a sentence that justifies which noun phrase the pronoun refers to. Correct reasons do not need to use all the knowledge from the sentence. The resolution of the pronoun coreference relations typically involve one or multiple following knowledge types about commonsense: First: 'Property', the knowledge about property of objects (e.g., ice is cold). Second: 'Object', the knowledge about objects (e.g., cats have ears). Third: 'Eventuality', the knowledge about eventuality (e.g., 'wake up' happens before 'open eyes'). Forth: 'Spatial', the knowledge about spatial position (e.g., object at the back can be blocked). Fifth: 'Quantity', the knowledge about numbers (e.g., 2 is smaller than 10). Sixth: all other knowledge if above ones are not suitable. You should answer 'Correct' if the reasoning made sense, otherwise, you should answer 'Wrong'."], "Instance": {"id": "task133-64f2a52992a943cb9712a2851c343a8e", "input": "Sentence: Alice tried frantically to stop her daughter from chatting at the party, leaving us to wonder why she was behaving so strangely.\n Reason: The 'she' refers to alice because she was the one who called out to her. \n Question: Is the above reasoning correct or wrong? 
", "output": ["Wrong"]}, "Prediction": "Wrong"} +{"Task": "task133_winowhy_reason_plausibility_detection", "Definition": ["In this task you need to indicate the plausibility of reasoning for the pronoun coreference relations. Each of the provided inputs contains a sentence with a target pronoun and a sentence that justifies which noun phrase the pronoun refers to. Correct reasons do not need to use all the knowledge from the sentence. The resolution of the pronoun coreference relations typically involve one or multiple following knowledge types about commonsense: First: 'Property', the knowledge about property of objects (e.g., ice is cold). Second: 'Object', the knowledge about objects (e.g., cats have ears). Third: 'Eventuality', the knowledge about eventuality (e.g., 'wake up' happens before 'open eyes'). Forth: 'Spatial', the knowledge about spatial position (e.g., object at the back can be blocked). Fifth: 'Quantity', the knowledge about numbers (e.g., 2 is smaller than 10). Sixth: all other knowledge if above ones are not suitable. You should answer 'Correct' if the reasoning made sense, otherwise, you should answer 'Wrong'."], "Instance": {"id": "task133-90cf4dc5bee9449590a7e3e6cbd91c5e", "input": "Sentence: John was jogging through the park when he saw a man juggling watermelons. He was very impressed.\n Reason: The 'He' refers to john because the man was the one juggling the watermelons so it's likely that John was the one impressed. \n Question: Is the above reasoning correct or wrong? ", "output": ["Correct"]}, "Prediction": "True"} +{"Task": "task226_english_language_answer_relevance_classification", "Definition": ["Given the Question and Answer pair, detect whether the answer is acceptable or not. Denote acceptable answer as \"yes\", otherwise denote as \"no\". An acceptable answer is consistent with the context in question, detailed, long, and uses general knowledge of English. 
Answers that are not definite and suggest a variety of options are not acceptable answers."], "Instance": {"id": "task226-7526fa1f4a3b4ad5951a774694afa943", "input": "Question: Everybody deals with this phenomenon daily but I had my curiousity piqued when someone claimed that punishing the many for the actions of the few was just called \"group punishment.\" Group Punishment is simply punishing a large group of people. Is there a specific word for when a large group is punished because of the actions of the few? Such as in the following: Little Jimmy wasn't happy when the entire class was forced to work through recess because of 3 kids getting into a fight yesterday. \nAnswer: Collective punishment is the formal expression: Penalty imposed on every member of a group without regard to his or her involvement in the group's actions and conduct. Related: collective responsibility. (www.businessdictionary.com) Is collective punishment fair in the classroom?: Under the 1949 Geneva Conventions collective punishment is considered a war crime. \"No general penalty, pecuniary or otherwise, shall be inflicted upon the population on account of the acts of individuals for which they cannot be regarded as jointly and severally responsible.\" Unfortunately for our students they do not have rights as general citizens or even our war enemies and common criminals in society. They shed their rights as they enter the double doors of the school and us parents shed our ability to ask for fair treatment of our children.", "output": ["yes"]}, "Prediction": "no"} +{"Task": "task226_english_language_answer_relevance_classification", "Definition": ["Given the Question and Answer pair, detect whether the answer is acceptable or not. Denote acceptable answer as \"yes\", otherwise denote as \"no\". An acceptable answer is consistent with the context in question, detailed, long, and uses general knowledge of English. 
Answers that are not definite and suggest a variety of options are not acceptable answers."], "Instance": {"id": "task226-4c994ddc8d744da2b48ae9ea99d2e496", "input": "Question: I must admit that I'm unsure whether or not this is the right forum for this question. It may belong to statistics and AI also. If there exists a more suitable forum, then tell me. The thing is, I want to analyse a lot of English text for an AI project (Confabulation theory). Is there an online collection of freely available English texts? Books, news would be preferred scientific texts will properly not do, due to large amounts of math etc. \nAnswer: And for modern texts, there are a lot of blogs which use various Creative Commons licenses, most commonly CC BY-SA.", "output": ["no"]}, "Prediction": "no"} +{"Task": "task226_english_language_answer_relevance_classification", "Definition": ["Given the Question and Answer pair, detect whether the answer is acceptable or not. Denote acceptable answer as \"yes\", otherwise denote as \"no\". An acceptable answer is consistent with the context in question, detailed, long, and uses general knowledge of English. Answers that are not definite and suggest a variety of options are not acceptable answers."], "Instance": {"id": "task226-6e073655f7514b34a06c771ddc1e8016", "input": "Question: English isn't my mother tongue, and when we recently talked to a native speaker (during some company audit), he pointed out that the plural word \"trainings\" is actually wrong (he pointed that it is a very common mistake and was almost amused by it). I couldn't understand why and wondered what the correct plural of \"training\" would be. Online Dictionaries/Thesaurus don't answer it, so it's probably another opinion-based statement. I haven't found a source to back it up, though. \nAnswer: I agree with Mick that training is an uncountable noun. In the example given by Mr. Fine of a specific training event allowing training to become countable, I disagree. 
In the case of becoming countable, the portion being counted is the event. Therefor, it would be \"there are multiple training events available at the pavilion\" (or whatever). I see many cases where much effort is made to \"fix\" the plural when a simple rewording will eliminate the contentious context and allow an, arguably, better flow by using the standard form.", "output": ["no"]}, "Prediction": "no"} +{"Task": "task226_english_language_answer_relevance_classification", "Definition": ["Given the Question and Answer pair, detect whether the answer is acceptable or not. Denote acceptable answer as \"yes\", otherwise denote as \"no\". An acceptable answer is consistent with the context in question, detailed, long, and uses general knowledge of English. Answers that are not definite and suggest a variety of options are not acceptable answers."], "Instance": {"id": "task226-e76bf97a69654eb19a188f10034e5701", "input": "Question: When it comes to cities and boats named after saints, it seems that \"Santa\" is always female and \"San\" is always male. e.g. Male Saints: San Diego, San Francisco, San Antonio Female Saints: Santa Barbara, Santa Maria So why isn't \"Santa Claus\" a woman like all the other \"Santa\"s? \nAnswer: The city names you quote are all derived from Spanish, where \"San\" (or \"Santo\", as @tchrist clarifies below) is the male inflection and \"Santa\" the female one. However, Santa Claus isn't derived from Spanish, but from Dutch, where it was originally rendered as Sante Klaas, and was modified to Santa when it was adopted into English (and from English, to the rest of the world). 
Here's Etymonline's take on it: from dialectal Dutch Sante Klaas, from Middle Dutch Sinter Niklaas \"Saint Nicholas,\" English, like all other languages, borrows words from many sources, and they often carry over remnants of their original grammar or usage, even when it makes for inconsistent usage in English.", "output": ["yes"]}, "Prediction": "no"} +{"Task": "task226_english_language_answer_relevance_classification", "Definition": ["Given the Question and Answer pair, detect whether the answer is acceptable or not. Denote acceptable answer as \"yes\", otherwise denote as \"no\". An acceptable answer is consistent with the context in question, detailed, long, and uses general knowledge of English. Answers that are not definite and suggest a variety of options are not acceptable answers."], "Instance": {"id": "task226-a1e97cfaba554ea6a32bf3c5ce379d75", "input": "Question: E.g. is short for exempli gratia, and is in common use to introduce an example within a sentence. Submit a sample of academic writing, e.g., a dissertation chapter. However, some authors use ex. or ex: (short for \"example\") for the same purpose, and this form is listed in some authorities like the Wiktionary. Submit a sample of academic writing, ex: a dissertation chapter. I know they both roughly mean \"example\", but which one should I use, and when? Are there connotations associated with one or the other? This question has been asked elsewhere, and this very answer comes up when you ask Google about the difference. It would be good to get an EL&U answer for it. \nAnswer: \"Ex.\" is not very common, but it may be used to refer to a cited example, eg \"See ex. 3\". \"E.g.\" is much more common, and is used to introduce an example, or series of examples. In speech, it is usually replaced by \"for example\".", "output": ["no"]}, "Prediction": "yes"} +{"Task": "task510_reddit_tifu_title_summarization", "Definition": ["In this task, you are given a Reddit post as a text. 
Your task is to generate a title for this text. The title should start with \"TIFU by\", followed by a situation that caused humor. The title should contain 7-12 words, ideally."], "Instance": {"id": "task510-98904f8af5604b228767e69389448468", "input": "Text: sorry for the formatting, i'm on mobile\nmandatory \"this happened 3years ago\"\n\nso this is my story. i live in a third world country (bangladesh) where gaming consoles and accessories are extremely overpriced. our family goes on yearly trips to australia to visit family and friends. being the avid gamer i am, i decided to buy the all new ps4. i had saved up my birthday money from the last 2 years and with a little bit of help from my mom, i would be able to afford the console. so when i arrive in australia, the first thing i do is i get the much awaited console in a bundle with killzone shadow fall, fifa 14 and need for speed rivals. i play on the console a couple of times before packing it up in the box itself. since the trip is so long, we have to stop by singapore for a couple of days. i travelled with the ps4 box like a briefcase and used it as hand luggage. so the day my family and i leave for bangladesh, we leave our luggage at the airport before, so we don't have to deal with the hassle with bringing them along. i only have my backpack and ps4 box in my hand as i move into immigration. i decided to put the box in one of those trolley things. as we entered the waiting room for the flight, i had returned the trolley with my ps4 in it. unbeknownst to me, i hoarded the plane all excited to play my console back at home. it was then i realised that i had lost my ps4. thats $550 aud i'll never get back and a broken heart.", "output": ["TIFU By losing my PS4"]}, "Prediction": "TIFU by losing my ps"} +{"Task": "task510_reddit_tifu_title_summarization", "Definition": ["In this task, you are given a Reddit post as a text. Your task is to generate a title for this text. 
The title should start with \"TIFU by\", followed by a situation that caused humor. The title should contain 7-12 words, ideally."], "Instance": {"id": "task510-5348ce4bf822478d909e0d4deae016de", "input": "Text: after driving for 6 hours in canada, i waited over 2 hours to get through customs. i was especially irritated at this point because a bunch of asshats were using the truck lanes to get ahead and cut in at the last minute. when it was finally my turn the border patrol dude turns the green light to red and decides to take a 10 min smoke break. \n\nhttp://imgur.com/g7kymog\n\ni know he is entitled to one but it put me over the edge. when it was my turn, i said as nicely as i could with a smile \"did you enjoy your cigarette break?\" he said that he did. after checking my passport and asking me the basic questions, he asked if i was under the influence of any drugs or alcohol. he said my eyes were red and glassy. he got out of his booth and escorted me to another area where my car was searched and i was harassed with a bunch of bs questions. i knew they weren't really searching because it was hardly thorough. the car next to me had all its luggages and contents spread out on the ground. they just did a quick visual of mine even though i had 4 boxes that were sealed with tape. they just wanted to teach me a lesson. and it worked. i felt like such a dumb ass.\n\nedit: meant tape, not rape. doh", "output": ["Tifu by asking a border patrol officer if he enjoyed his cigarette"]}, "Prediction": "TIFU by being searched at customs"} +{"Task": "task510_reddit_tifu_title_summarization", "Definition": ["In this task, you are given a Reddit post as a text. Your task is to generate a title for this text. The title should start with \"TIFU by\", followed by a situation that caused humor. 
The title should contain 7-12 words, ideally."], "Instance": {"id": "task510-95e9410b143b41548eb59714bad1a2e9", "input": "Text: so i managed to get into an arabic language program in jordan and i was stoked for some life experience and this kind of life experience required more than just clothes. i decided to bring my sentiment box for, duh, sentimental things like ticket stubs, the first album i ever bought, an ex\u2019s necklace, things like that. i figured it would be nice to have for the inevitable homesickness. a few days later i\u2019m standing in the amman airport listening to the muezzin, extremely bleary from the wine and xanax i had taken to make the flight bearable, shuffling with the rest of my group towards the baggage scanners. \n\ni don\u2019t know if anybody told you, but there are some security concerns in the middle east from time to time, so bags get scanned on their way out of the airport as well as on the way in. no matter, i\u2019m a law abiding not even remotely terroristic upstanding member of society who recycles and whatever. i plunk the suitcase down on the dirty folding table next to the scanner, walk through a metal detector, and turn around to collect my bag. \n\nexcept the previously friendly security guards were now looking quite stern and rifling through my bag, and they\u2019re digging for something, and they\u2019re pulling something out. it\u2019s the sentiment box, which i should mention has this picture on the top. the guard opens the box, and pulls out a fucking pair of fucking handcuffs (as in they were used for fucking) that i had tossed into the box weeks earlier in what was supposed to be a *temporary* storage arrangement for them, and had then forgotten about them. the guard holds them way up high so all of my new colleagues can see, then points at me (just in case anybody missed it) and says something in arabic. 
then two soldiers come and some guy in a leather jacket and jeans who speaks perfect english strides up and motions me into a back room. mind you this is happening in front of a group of people who i have just met, who i will have to live with for the next five and half months, who all now have to wait for me to sort this out. \n\nthe leather jacket guy turned out to be an intelligence agent, and he turned out to be pretty cool. he immediately figured out that i was way to un-smooth to be considered any sort of a threat. he let me go after ten minutes (didn't get the cuffs back), which allowed me to get into the bus with the rest of my classmates, who were about to leave without me. i sat in the back corner, did not introduce myself, and pretended to sleep for the entire five-hour ride to where we were staying. \n\ntr;dr a combination of kinkiness and forgetfulness resulted in me getting interrogated by a jordanian intelligence agent and almost abandoned at the amman airport.", "output": ["TIFU by being kinky in the Middle East."]}, "Prediction": "TIFU by getting interrogated by"} +{"Task": "task510_reddit_tifu_title_summarization", "Definition": ["In this task, you are given a Reddit post as a text. Your task is to generate a title for this text. The title should start with \"TIFU by\", followed by a situation that caused humor. The title should contain 7-12 words, ideally."], "Instance": {"id": "task510-ef9aac56f95d4a8eb2e7223e89dce5bf", "input": "Text: throwaway.\n\ni am age 19, male, at the time i was young, living with my parents, age 16. i am into bdsm and stuff like that and three years ago, i thought it would be a good idea to try \"self bondage\".\n\nif only i wasn't an idiot, this would have been no problem at all.\n\nno one was home, and they weren't going to be home for 2-3 hours. basically i had set up a lock on my bed and the key was hanging on the ceiling connected by like a papery plastic thing to ice. 
when the ice cube melts, the key would drop and i would be free.\n\ni just locked myself in and then the phone was ringing and i figured i should just get and then i'll come back. i grabbed a sock off my bedroom floor and threw it at the key to make it fall down, but the key ended up swinging and flew across the room.\n\nmy heart started to panic. oh my god, what if i can't get out and a family member has to release me and finds me like this. i was panicking.\n\ni figured anything would be better than being stuck here, so i tried very hard to get my wrist out of the lock, i just hurt myself and it didn't work. the key was out of reach. i spent 45 minutes trying various things to no use.\n\nthen i thought, why don't i move the whole bed? i have a large, heavy wooden bed, i started to pull it, but it hurts my arm a lot. i end up getting a meter away from the key. my bed is diagonal in the room and touches the walls on both sides.\n\ni'm still screwed, stuck. i decide to keeping pulling the bed even though it scrapes the paint off the drywall of my wall. i can now pull myself closer but end up stuck again. i still can't reach the key.\n\ni end up grabbing a magazine with my toes and stretching and stretching until i can final just touch the key. i push the key slightly further away by accident. fuck.\n\ni have to be very careful this time, this is giving my toes a cramp. i forget what item it was, but i grab an item with my toes that was longer and i can finally pull the keys towards me.\n\ni unlock myself, and feel relived but quickly try to clean up as much of the locks and room mess as i can. after only a few minutes of release, my mom comes home and asks why i didn't pick up the phone when she called, she wanted to let me know she'd be home an hour earlier.\n\nthen she goes upstairs and asks what the heck happened to my room, i say i was cleaning and changing the layout of my room. 
she gives me a weird look but doesn't question me further.\n\ni still haven't painted the wall yet, i just covered it up with my bed, i should paint it soon though.\n\nbut yeah, this is the story where i dodged being completely humiliated for the rest of my life. i haven't tried something like this ever again.", "output": ["TIFU (3 years ago) by getting myself trapped in self bondage"]}, "Prediction": "TIFU by trying to self-b"} +{"Task": "task510_reddit_tifu_title_summarization", "Definition": ["In this task, you are given a Reddit post as a text. Your task is to generate a title for this text. The title should start with \"TIFU by\", followed by a situation that caused humor. The title should contain 7-12 words, ideally."], "Instance": {"id": "task510-0917b47646834fadb07ddfed79fd538e", "input": "Text: i was recently visiting the us for a conference and was waiting at the gate in the airport, thinking how well the week had gone and i how i hadn't fucked anything up, when the urge to make a significant deposit in the bowl hit me. i thought i'd better see to it before i boarded the 8 hour flight back, so i headed to the toilet.\n\nmy diet over the course of my visit hadn't been the best, lots of bbq, burgers, pizza and convenience food and, once i seated myself in a stall, i released a large quantity of matter with varying consistencies... this was going to require no small amount of wiping and this is where i fucked up.\n\nnow the walls in these toilet stalls are like those in many american public toilets and, rather than going all the way to the floor, they stopped about a foot or so from the floor and left a gap to the next stall. as i was wiping away, some stray toilet paper, with a good coating of my shit on it, fell loose and began to tumble to the floor. this might not have been a problem, except that it fell down and through the gap to the next stall, landing on the man in that stall's polished leather shoe. 
he was standing feet pointing to the wall, urinating, and i am certain that he noticed what had happened. as i looked down, after a short moment he shook his foot and the sticky bit of toilet paper eventually came loose. while this happened i kept my head down, kept very quiet and tried to hide my shoes from view lest he later be able to identify me by my footwear. i waited for him to leave before quickly finishing up and making an exit myself, then spent a large portion of the flight home worrying that my shoes might get spotted by the guy and i'd be found out.", "output": ["TIFU by indirectly pooping on a stranger's shoes"]}, "Prediction": "TIFU by fucking up"} +{"Task": "task935_defeasible_nli_atomic_classification", "Definition": ["In this task, you are given a premise, a hypothesis, and an update. The premise sentence describes a real-world situation and is always assumed to be true. The hypothesis sentence describes an assumption or inference that you might make about that situation having read the premise. The update provides additional information about the situation that might weaken or strengthen the hypothesis. A weakener is a statement that weakens the hypothesis. It makes you much less likely to believe the hypothesis is true. A strengthener is a statement that strengthens the hypothesis. It makes you much more likely to believe the hypothesis is true. Your task is to output 'strengthener' or 'weakener' if the update strengths or weakens the hypothesis, respectively."], "Instance": {"id": "task935-4d0b5ea493af4cc29583e9572288ea62", "input": "Premise: PersonX transports PersonY to an island\nHypothesis: As a result, PersonX feels seafaring\nUpdate: PersonX is feeling nauseated from the boatride.", "output": ["weakener"]}, "Prediction": "weakener"} +{"Task": "task935_defeasible_nli_atomic_classification", "Definition": ["In this task, you are given a premise, a hypothesis, and an update. 
The premise sentence describes a real-world situation and is always assumed to be true. The hypothesis sentence describes an assumption or inference that you might make about that situation having read the premise. The update provides additional information about the situation that might weaken or strengthen the hypothesis. A weakener is a statement that weakens the hypothesis. It makes you much less likely to believe the hypothesis is true. A strengthener is a statement that strengthens the hypothesis. It makes you much more likely to believe the hypothesis is true. Your task is to output 'strengthener' or 'weakener' if the update strengths or weakens the hypothesis, respectively."], "Instance": {"id": "task935-63c624b8a1944f859ab040e5262dfe59", "input": "Premise: PersonX sprains PersonX's wrist\nHypothesis: As a result, PersonX feels unfit\nUpdate: PersonX was trying to lift weights at the gym when they sprained their wrist.", "output": ["strengthener"]}, "Prediction": "weakener"} +{"Task": "task935_defeasible_nli_atomic_classification", "Definition": ["In this task, you are given a premise, a hypothesis, and an update. The premise sentence describes a real-world situation and is always assumed to be true. The hypothesis sentence describes an assumption or inference that you might make about that situation having read the premise. The update provides additional information about the situation that might weaken or strengthen the hypothesis. A weakener is a statement that weakens the hypothesis. It makes you much less likely to believe the hypothesis is true. A strengthener is a statement that strengthens the hypothesis. It makes you much more likely to believe the hypothesis is true. 
Your task is to output 'strengthener' or 'weakener' if the update strengths or weakens the hypothesis, respectively."], "Instance": {"id": "task935-a426da86573b44038d6d92e57f146c6f", "input": "Premise: PersonX keeps an eye peeled\nHypothesis: As a result, PersonX wants find what they were looking for\nUpdate: PersonX has been asked to look out for potential big spenders.", "output": ["strengthener"]}, "Prediction": "stronger"} +{"Task": "task935_defeasible_nli_atomic_classification", "Definition": ["In this task, you are given a premise, a hypothesis, and an update. The premise sentence describes a real-world situation and is always assumed to be true. The hypothesis sentence describes an assumption or inference that you might make about that situation having read the premise. The update provides additional information about the situation that might weaken or strengthen the hypothesis. A weakener is a statement that weakens the hypothesis. It makes you much less likely to believe the hypothesis is true. A strengthener is a statement that strengthens the hypothesis. It makes you much more likely to believe the hypothesis is true. Your task is to output 'strengthener' or 'weakener' if the update strengths or weakens the hypothesis, respectively."], "Instance": {"id": "task935-39050c794dd74174a8551664de2572ba", "input": "Premise: PersonX expresses PersonX's belief\nHypothesis: Because PersonX wanted to share what he believes in\nUpdate: PersonX had a speech assigned for class.", "output": ["weakener"]}, "Prediction": "weakener"} +{"Task": "task935_defeasible_nli_atomic_classification", "Definition": ["In this task, you are given a premise, a hypothesis, and an update. The premise sentence describes a real-world situation and is always assumed to be true. The hypothesis sentence describes an assumption or inference that you might make about that situation having read the premise. 
The update provides additional information about the situation that might weaken or strengthen the hypothesis. A weakener is a statement that weakens the hypothesis. It makes you much less likely to believe the hypothesis is true. A strengthener is a statement that strengthens the hypothesis. It makes you much more likely to believe the hypothesis is true. Your task is to output 'strengthener' or 'weakener' if the update strengths or weakens the hypothesis, respectively."], "Instance": {"id": "task935-e9c537995afe4f82b5566baf56f348d7", "input": "Premise: PersonX plays so well\nHypothesis: As a result, PersonX wants find more people to play with\nUpdate: PersonX enjoys playing team sports.", "output": ["strengthener"]}, "Prediction": "strengthen"} +{"Task": "task349_squad2.0_answerable_unanswerable_question_classification", "Definition": ["In this task, you are given a passage and a question regarding that passage. You must determine whether or not the question is answerable from the given passage. If a question is answerable, output should be 'True', otherwise 'False'. You must not consider any other information that is not provided in the passage while labelling True or False."], "Instance": {"id": "task349-46fdccaf534d4344a8361ead55ab0a3d", "input": "Passage: In Japan, at the end of the Asuka period (538\u2013710) and the early Nara period (710\u2013794), the men who fulfilled roles similar to those of modern pharmacists were highly respected. The place of pharmacists in society was expressly defined in the Taih\u014d Code (701) and re-stated in the Y\u014dr\u014d Code (718). Ranked positions in the pre-Heian Imperial court were established; and this organizational structure remained largely intact until the Meiji Restoration (1868). In this highly stable hierarchy, the pharmacists\u2014and even pharmacist assistants\u2014were assigned status superior to all others in health-related fields such as physicians and acupuncturists. 
In the Imperial household, the pharmacist was even ranked above the two personal physicians of the Emperor. Question: Which two codes were the roles of pharmacists destroyed?", "output": ["False"]}, "Prediction": "False"} +{"Task": "task349_squad2.0_answerable_unanswerable_question_classification", "Definition": ["In this task, you are given a passage and a question regarding that passage. You must determine whether or not the question is answerable from the given passage. If a question is answerable, output should be 'True', otherwise 'False'. You must not consider any other information that is not provided in the passage while labelling True or False."], "Instance": {"id": "task349-9579a8718c6c4199bef289e46035188f", "input": "Passage: In 1891 Scottish chemist James Dewar was able to produce enough liquid oxygen to study. The first commercially viable process for producing liquid oxygen was independently developed in 1895 by German engineer Carl von Linde and British engineer William Hampson. Both men lowered the temperature of air until it liquefied and then distilled the component gases by boiling them off one at a time and capturing them. Later, in 1901, oxyacetylene welding was demonstrated for the first time by burning a mixture of acetylene and compressed O\n2. This method of welding and cutting metal later became common. Question: When was liquid oxygen developed for commercial use?", "output": ["True"]}, "Prediction": "True"} +{"Task": "task349_squad2.0_answerable_unanswerable_question_classification", "Definition": ["In this task, you are given a passage and a question regarding that passage. You must determine whether or not the question is answerable from the given passage. If a question is answerable, output should be 'True', otherwise 'False'. 
You must not consider any other information that is not provided in the passage while labelling True or False."], "Instance": {"id": "task349-ce7f255da3974eae879c0dce941ab019", "input": "Passage: In 1872, the Central Pacific Railroad established a station near Easterby's\u2014by now a hugely productive wheat farm\u2014for its new Southern Pacific line. Soon there was a store around the station and the store grew the town of Fresno Station, later called Fresno. Many Millerton residents, drawn by the convenience of the railroad and worried about flooding, moved to the new community. Fresno became an incorporated city in 1885. By 1931 the Fresno Traction Company operated 47 streetcars over 49 miles of track. Question: How many streetcars did the Fresno Traction Company operate in 1931?", "output": ["True"]}, "Prediction": "True"} +{"Task": "task349_squad2.0_answerable_unanswerable_question_classification", "Definition": ["In this task, you are given a passage and a question regarding that passage. You must determine whether or not the question is answerable from the given passage. If a question is answerable, output should be 'True', otherwise 'False'. You must not consider any other information that is not provided in the passage while labelling True or False."], "Instance": {"id": "task349-125230fe7bad4cec92f49363e8b49131", "input": "Passage: Between Bingen and Bonn, the Middle Rhine flows through the Rhine Gorge, a formation which was created by erosion. The rate of erosion equaled the uplift in the region, such that the river was left at about its original level while the surrounding lands raised. The gorge is quite deep and is the stretch of the river which is known for its many castles and vineyards. It is a UNESCO World Heritage Site (2002) and known as \"the Romantic Rhine\", with more than 40 castles and fortresses from the Middle Ages and many quaint and lovely country villages. 
Question: How many country villages can be found at the UNESCO World Heritage Site?", "output": ["False"]}, "Prediction": "True"} +{"Task": "task349_squad2.0_answerable_unanswerable_question_classification", "Definition": ["In this task, you are given a passage and a question regarding that passage. You must determine whether or not the question is answerable from the given passage. If a question is answerable, output should be 'True', otherwise 'False'. You must not consider any other information that is not provided in the passage while labelling True or False."], "Instance": {"id": "task349-eac98d8de9d94dfc90f00aad73658938", "input": "Passage: First, if a Directive's deadline for implementation is not met, the member state cannot enforce conflicting laws, and a citizen may rely on the Directive in such an action (so called \"vertical\" direct effect). So, in Pubblico Ministero v Ratti because the Italian government had failed to implement a Directive 73/173/EEC on packaging and labelling solvents by the deadline, it was estopped from enforcing a conflicting national law from 1963 against Mr Ratti's solvent and varnish business. A member state could \"not rely, as against individuals, on its own failure to perform the obligations which the Directive entails.\" Second, a citizen or company can invoke a Directive, not just in a dispute with a public authority, but in a dispute with another citizen or company. So, in CIA Security v Signalson and Securitel the Court of Justice held that a business called CIA Security could defend itself from allegations by competitors that it had not complied with a Belgian decree from 1991 about alarm systems, on the basis that it had not been notified to the Commission as a Directive required. Third, if a Directive gives expression to a \"general principle\" of EU law, it can be invoked between private non-state parties before its deadline for implementation. 
This follows from K\u00fcc\u00fckdeveci v Swedex GmbH & Co KG where the German Civil Code \u00a7622 stated that the years people worked under the age of 25 would not count towards the increasing statutory notice before dismissal. Ms K\u00fcc\u00fckdeveci worked for 10 years, from age 18 to 28, for Swedex GmbH & Co KG before her dismissal. She claimed that the law not counting her years under age 25 was unlawful age discrimination under the Employment Equality Framework Directive. The Court of Justice held that the Directive could be relied on by her because equality was also a general principle of EU law. Third, if the defendant is an emanation of the state, even if not central government, it can still be bound by Directives. In Foster v British Gas plc the Court of Justice held that Mrs Foster was entitled to bring a sex discrimination claim against her employer, British Gas plc, which made women retire at age 60 and men at 65, if (1) pursuant to a state measure, (2) it provided a public service, and (3) had special powers. This could also be true if the enterprise is privatised, as it was held with a water company that was responsible for basic water provision. Question: How long did Ms Kucukdeveci work for Swedex Gmbh & Co KG before she was dismissed?", "output": ["True"]}, "Prediction": "True"} +{"Task": "task1157_bard_analogical_reasoning_rooms_for_containers", "Definition": ["Two analogies that relate objects to the associated rooms is given in the form \"A : B. C : ?\". \"A : B\" relates object A to room B. Your task is to replace the question mark (?) with the appropriate room for the given object C, following the \"A : B\" relation."], "Instance": {"id": "task1157-eb1bc51df9f342878fc934657485fd57", "input": "table : kitchen. 
nightstand : ?", "output": ["bedroom"]}, "Prediction": "bedroom"} +{"Task": "task1157_bard_analogical_reasoning_rooms_for_containers", "Definition": ["Two analogies that relate objects to the associated rooms is given in the form \"A : B. C : ?\". \"A : B\" relates object A to room B. Your task is to replace the question mark (?) with the appropriate room for the given object C, following the \"A : B\" relation."], "Instance": {"id": "task1157-e7d794429aca4f1dbda5c69204805883", "input": "shed : garden. desk : ?", "output": ["office"]}, "Prediction": "office"} +{"Task": "task1157_bard_analogical_reasoning_rooms_for_containers", "Definition": ["Two analogies that relate objects to the associated rooms is given in the form \"A : B. C : ?\". \"A : B\" relates object A to room B. Your task is to replace the question mark (?) with the appropriate room for the given object C, following the \"A : B\" relation."], "Instance": {"id": "task1157-53134a8972344334afccb8a03f5d75e4", "input": "counter : kitchen. shelf : ?", "output": ["kitchen", "pantry", "nursury"]}, "Prediction": "kitchen"} +{"Task": "task1157_bard_analogical_reasoning_rooms_for_containers", "Definition": ["Two analogies that relate objects to the associated rooms is given in the form \"A : B. C : ?\". \"A : B\" relates object A to room B. Your task is to replace the question mark (?) with the appropriate room for the given object C, following the \"A : B\" relation."], "Instance": {"id": "task1157-c95a573c139240639d1b4e89d149e2fb", "input": "microwave : kitchen. cabinet : ?", "output": ["bathroom"]}, "Prediction": "kitchen"} +{"Task": "task1157_bard_analogical_reasoning_rooms_for_containers", "Definition": ["Two analogies that relate objects to the associated rooms is given in the form \"A : B. C : ?\". \"A : B\" relates object A to room B. Your task is to replace the question mark (?) 
with the appropriate room for the given object C, following the \"A : B\" relation."], "Instance": {"id": "task1157-b2368b6a74204322ba1a36f4e0463f70", "input": "couch : parlor. closet : ?", "output": ["hallway", "bedroom"]}, "Prediction": "bedroom"} +{"Task": "task937_defeasible_nli_social_classification", "Definition": ["In this task, you are given a hypothesis and an update. The hypothesis sentence is a statement that speaks of a socially normative behavior. In other words, it is a generalizing statement about how we expect people to behave in society. The update provides additional contexts about the situation that might UNDERMINE or SUPPORT the generalization. An undermining context provides a situation that weakens the hypothesis. A supporting context provides a situation that strengthens the generalization. Your task is to output 'strengthener' or 'weakener' if the update supports or undermines the hypothesis, respectively"], "Instance": {"id": "task937-b2c6f4e5306b42e2909dc328c67f5ca8", "input": "Hypothesis: It is wrong to sign someone up for something without their permission.\nUpdate: They will receive a full scholarship to their favorite university.", "output": ["weakener"]}, "Prediction": "false"} +{"Task": "task937_defeasible_nli_social_classification", "Definition": ["In this task, you are given a hypothesis and an update. The hypothesis sentence is a statement that speaks of a socially normative behavior. In other words, it is a generalizing statement about how we expect people to behave in society. The update provides additional contexts about the situation that might UNDERMINE or SUPPORT the generalization. An undermining context provides a situation that weakens the hypothesis. A supporting context provides a situation that strengthens the generalization. 
Your task is to output 'strengthener' or 'weakener' if the update supports or undermines the hypothesis, respectively"], "Instance": {"id": "task937-9d0b6f791f3a4d7d84bbb793e5fa2dff", "input": "Hypothesis: It is wrong to bring up family troubles to people outside the family without consent.\nUpdate: You don't know the person you are talking to about your problems.", "output": ["strengthener"]}, "Prediction": "weakener"} +{"Task": "task937_defeasible_nli_social_classification", "Definition": ["In this task, you are given a hypothesis and an update. The hypothesis sentence is a statement that speaks of a socially normative behavior. In other words, it is a generalizing statement about how we expect people to behave in society. The update provides additional contexts about the situation that might UNDERMINE or SUPPORT the generalization. An undermining context provides a situation that weakens the hypothesis. A supporting context provides a situation that strengthens the generalization. Your task is to output 'strengthener' or 'weakener' if the update supports or undermines the hypothesis, respectively"], "Instance": {"id": "task937-e9b78ac4e8b94bfd8231c96db16afeaa", "input": "Hypothesis: It's okay to be attracted to some people and not others.\nUpdate: You are looking for a romantic partner.", "output": ["strengthener"]}, "Prediction": "stronger"} +{"Task": "task937_defeasible_nli_social_classification", "Definition": ["In this task, you are given a hypothesis and an update. The hypothesis sentence is a statement that speaks of a socially normative behavior. In other words, it is a generalizing statement about how we expect people to behave in society. The update provides additional contexts about the situation that might UNDERMINE or SUPPORT the generalization. An undermining context provides a situation that weakens the hypothesis. A supporting context provides a situation that strengthens the generalization. 
Your task is to output 'strengthener' or 'weakener' if the update supports or undermines the hypothesis, respectively"], "Instance": {"id": "task937-ed511d1643b841e8962326272cd82057", "input": "Hypothesis: It is okay to think your friend might be self-entitled.\nUpdate: Your friend donated all their money to charity", "output": ["weakener"]}, "Prediction": "anti-social"} +{"Task": "task937_defeasible_nli_social_classification", "Definition": ["In this task, you are given a hypothesis and an update. The hypothesis sentence is a statement that speaks of a socially normative behavior. In other words, it is a generalizing statement about how we expect people to behave in society. The update provides additional contexts about the situation that might UNDERMINE or SUPPORT the generalization. An undermining context provides a situation that weakens the hypothesis. A supporting context provides a situation that strengthens the generalization. Your task is to output 'strengthener' or 'weakener' if the update supports or undermines the hypothesis, respectively"], "Instance": {"id": "task937-6d5d60a2df3247bcb9acf8728c610b99", "input": "Hypothesis: It is important to be able to admit you've made mistakes.\nUpdate: You want to maintain healthy relationships with people.", "output": ["strengthener"]}, "Prediction": "strengthener"} +{"Task": "task743_eurlex_summarization", "Definition": ["In this task, you are provided with an article about the legal acts. Your task is to generate a headline for that article."], "Instance": {"id": "task743-32f70e1defd545f79bdc19bbd3d1f735", "input": "1. The Commission shall determine a world market price for colza and rape seeds and a world market price for sunflower seeds at regular intervals.2. 
When the world market price is being determined, the Commission shall take into account all offers made on the world market of which it has been informed by the Member States or which are known to it through its own inquiries, and prices quoted on those exchanges which are important for international trade.3. The Commission shall determine this price on the basis of the most favourable actual purchasing opportunities, excluding offers and quotations, (1) OJ No 172, 30.9.1966, p. 3025/66. (a) which do not relate to shipments to be effected within a given period, which has not yet been fixed, after the date on which the world market price is determined,(b) which cannot be regarded as representative of the actual market trend. Where no offer or quotation can be used as a basis for determining the world market price for a type of seed, the Commission shall determine this price on the basis of the value of average quantities of oil and oil-cake resulting from the processing of 100 kilogrammes of this type of seed in the Community less an amount corresponding to the cost of processing these seeds into oil and oil-cake. Where no offer or quotation can be used as a basis for determining the world market price for a type of seed and further, where it is impossible to establish the value of the oil-cake or oil processed from such seeds, the world market price shall be determined on the basis of the last recorded value for the oil or oil-cake, adjusted to take account of the trend of world prices for competing products in accordance with Article 2. For the purpose of determining the prices referred to in Article 1, the Community frontier crossing point shall be Rotterdam. The Commission shall determine the world market price for seeds of the standard quality for which the target price has been fixed, delivered in bulk at Rotterdam.Where offers and quotations do not fulfil the conditions set out above, the Commission shall make the necessary adjustments. 1. 
When the world market price for a type of seed is being determined, the price used shall be adjusted by an amount not exceeding the margin between: - the difference between the price of 100 kilogrammes of colza, rape or sunflower seeds, plus processing costs, and the total of the prices of the quantities of oil and oil-cake resulting from the processing of the type of seed in question and- the difference between the price of 100 kilogrammes of one or more other seeds, plus processing costs, and the total of the prices of the quantities of oil and oil-cake resulting from processing these seeds.2. When the amount referred to in paragraph 1 is being determined, account shall be taken of the incidence of the margin in question (a) on marketing operations of Community traders,(b) on the disposal of the various types of seed on the world market. Detailed rules for the application of this Regulation shall be adopted in accordance with the procedure laid down in Article 38 of Regulation No 136/66/EEC.This Regulation shall be binding in its entirety and directly applicable in all Member States.", "output": ["Regulation laying down criteria for determining world market prices for oil seeds and fixing the frontier crossing point"]}, "Prediction": "Commission to determine world market price for colza"} +{"Task": "task743_eurlex_summarization", "Definition": ["In this task, you are provided with an article about the legal acts. Your task is to generate a headline for that article."], "Instance": {"id": "task743-c80f7ee08dfd48938490379ba3a199ec", "input": "The tax on salaries, wages and emoluments paid by the Communities to their officials and other servants, indicated by the first paragraph of Article 13 of the Protocol on the Privileges and Immunities of the European Communities, shall be determined under the conditions and collected according to the procedure laid down in this Regulation. 
The following shall be liable to the tax:\u2014 persons coming under the Staff Regulations of Officials or the Conditions of Employment of Other Servants of the Communities, including recipients of the compensation provided for in the event of removal from post in the interests of the service, but excluding local staff;\u2014 recipients of the compensation on termination of service provided for in Article 5 of Council Regulation (EEC, Euratom, ECSC) No 259/68xa0(1). 1.xa0xa0xa0The tax shall be payable each month on salaries, wages and emoluments paid by the Communities to each person liable.2.xa0xa0xa0However, monies and allowances, whether lump sums or not, which represent compensation for expenses incurred in the perfomance of official duties, shall be excluded from the basic taxable amount.3.xa0xa0xa0The family allowance and social benefits listed below shall be deducted from the basic taxable amount:(a) family allowances:\u2014 head of household allowance,\u2014 dependent child's allowance,\u2014 education allowance,\u2014 allowance on the birth of a child;(b) social assistance;(c) allowances paid on account of occupational disease or accident;(d) that part of any payment which represents family allowances.The amount to be deducted shall be calculated, with due regard, where appropriate, to the provisions of Article 5.4.xa0xa0xa0Subject to the provisions of Article 5, an abatement of 10% for occupational and personal expenses shall be made from the amount obtained by applying the preceding provisions.An additional abatement equivalent to twice the amount of the allowance for a dependent child shall be made for each dependent child of the person liable as well as for each person treated as a dependent child within the meaning of Article 2 (4) of Annex VII to the Staff Regulations.5.xa0xa0xa0Deductions from remuneration of persons liable on account of pensions and retirement allowances or of social security shall be deducted from the basic taxable amount. 
The tax shall be calculated, subject to the provisions of Article 5, on the taxable amount obtained by applying Article 3 and disregarding any amount not exceeding Bfrs 803 and by applying the rate of:8% to amounts between Bfrs 803 and 14xa017810% to amounts between Bfrs 14xa0179 and 19xa052812\u038750% to amounts between Bfrs 19xa0529 and 22xa038015% to amounts between Bfrs 22xa0381 and 25xa041317\u038750% to amounts, between Bfrs 22xa0414 and 28xa026520% to amounts between Bfrs 28xa0266 and 31xa003022\u038750% to amounts between Bfrs 31xa0031 and 33xa088325% to amounts between Bfrs 33xa0884 and 36xa064827\u038750% to amounts between Bfrs 36xa0649 and 39xa050030% to amounts between Bfrs 39xa0501 and 42xa026532\u038750% to amounts between Bfrs 42xa0266 and 45xa011835% to amounts between Bfrs 45xa0119 and 47xa088340% to amounts between Bfrs 47xa0884 and 50xa073545% to amounts above Bfrs 50xa0735. When salaries, wages and emoluments are subject to a corrective factor:\u2014 the amount of each of the elements included in the calculation of the tax, except deductions made from the remuneration of persons liable on account of pensions, and retirement allowances or of social security, shall, for the application of this Regulation, be obtained by applying the corrective factor to the amount of this element as calculated before applying any corrective factor to the remuneration;\u2014 the amount of abatements mentioned in Article 3 (4) shall be obtained by applying the corrective factor to the amount of the abatements as calculated before applying any corrective factor to the remuneration;\u2014 the amounts of income shown in Article 4 shall be subject to the corrective factor. 
1.xa0xa0xa0By way of derogation from the provisions of Articles 3 and 4:(a) the sums paid\u2014 as compensation for overtime,\u2014 for work of a particularly arduous nature,\u2014 for exceptional services,\u2014 for patented inventions,(b) payments made on account of termination of service shall be taxed, after applying the abatements laid down in Article 3 (4), at a rate equal to two thirds of the ratio existing, at the time of last salary payment, between:\u2014 the amount of tax payable and\u2014 the basic taxable amount as defined in Article 3.2.xa0xa0xa0The application of this Regulation shall not have the effect of reducing salaries, wages and emoluments of any kind paid by the Communities to an amount less than the minimum subsistence rate as defined in Article 6 of Annex VIII to the Staff Regulations of Officials of the Communities. When the taxable payment covers a period of less than one month, the rate of the tax shall be that which is applicable to the corresponding monthly payment.When the taxable payment covers a period of more than one month the tax shall be calculated as if this payment had been spread evenly over the months to which it relates.Corrective payments not related to the month during which they are paid shall be subject to the tax to which they would have been subject had they been made at the proper time. Tax shall be collected by means of deduction at source. The amount shall be rounded off at the lower unit. The tax proceeds shall be entered as revenue in the budgets of the Communities. 0The governing bodies of institutions of the Communities shall co-operate for the purpose of applying this Regulation in a uniform manner.The Council, acting on a proposal from the Commission, shall adopt any provisions necessary for the application of this Regulation. 
1This Regulation shall also apply:\u2014 to members of the Commission,\u2014 to Judges, Advocates General, the Registrar and Assistant Rapporteurs at the Court of Justice,\u2014 to members of the Audit Board. 2This Regulation shall apply to members of the organs of the European Investment Bank, and to members of its staff and recipients of the pensions it pays, who are included in the categories determined by the Council in application of the first paragraph of Article 16 of the Protocol on Privileges and Immunities, with regard to salaries, wages and emoluments and to disability, retirement and survivors' pensions paid by the Bank. 3The compensation and payments referred to in Article 13 of Regulation No 32 (EEC) 12 (EAEC)xa0(2) shall be exempt from the tax. 4Regulation No 32 (EEC), 12 (EAEC) is hereby repealed. 5This Regulation shall enter into force on the day following its publication in the Official Journal of the European Communities.This Regulation shall be binding in its entirety and directly applicable in all Member States.", "output": ["Regulation laying down the conditions and procedure for applying the tax for the benefit of the European Communities"]}, "Prediction": "Tax on salaries, wages and emol"} +{"Task": "task743_eurlex_summarization", "Definition": ["In this task, you are provided with an article about the legal acts. Your task is to generate a headline for that article."], "Instance": {"id": "task743-3f50de0342bd468f8e078d051fa06266", "input": "This Regulation lays down rules for fixing and granting refunds on the products listed in Article 1 of Regulation (EEC) No 804/68 exported in the state referred to therein. The following shall be taken into account when refunds are being fixed: (a) the existing situation and the future trend with regard to: - prices and availabilities of milk and milk products on the Community market;- prices for milk and milk products in international trade; 1OJ No L 148, 28.6.1968, p. 
13.(b) marketing costs and the most favourable transport charges from Community markets to ports and other points of export in the Community, as well as costs incurred in placing the goods on the market of the country of destination;(c) the aims of the common organisation of the market in milk and milk products, which are to ensure equilibrium and the natural development of prices and trade on this market;(d) the need to avoid disturbances on the Community market;(e) the economic aspect of the proposed exports. 1. When prices within the Community are being determined the ruling prices which are the most favourable from the exportation point of view shall be taken into account.2. When prices in international trade are being determined particular account shall be taken of: (a) prices ruling on third country markets;(b) the most favourable prices in third countries of destination for third country imports;(c) producer prices recorded in exporting third countries, account being taken, where appropriate, of subsidies granted by those countries ; and(d) free-at-Community-frontier offer prices. Where the world market situation or the specific requirements of certain markets make this necessary, the refund for the Community may be varied according to the destination of the products listed in Article 1 of Regulation (EEC) No 804/68. 1. The list of products on which an export refund is granted and the amount of such refund shall be fixed at least once every four weeks.The amount of the refund may, however, remain at the same level for more than four weeks.2. Without prejudice to the provisions of paragraph 3, the amount of the export refund on the products listed in Article 1 of Regulation (EEC) No 804/68 shall be that applicable on the day of exportation.3. 
The amount of the refund may be fixed in advance.In that event, if the party concerned so requests when applying for the licence, the refund applicable on the day on which the application for a licence is lodged shall be applied to exports effected during the period of validity of the licence.The refund fixed in advance shall however be adjusted: (a) on the basis of the threshold price in force at the time of exportation for the pilot product of the group to which the product in question belongs, and(b) on the basis of the aid applicable at the time of exportation for products entitled to aid and for those in which such products are incorporated.The provisions of the preceding subparagraphs may be applied in whole or in part to each of the products listed in Article 1 of Regulation (EEC) No 804/68.The pilot products and product groups referred to in the third subparagraph are those set out in Annex I to Regulation (EEC) No 823/68 1. 1. The refund shall be paid upon proof: - that the products have been exported from the Community, and- that the products are of Community origin, except where the provisions of Article 7 apply.2. Where Article 4 applies, the refund shall be paid under the conditions laid down in paragraph 1, provided it is proved that the product has reached the destination for which the refund was fixed.Exceptions may be made to this rule in accordance with the procedure referred to in paragraph 3 of this Article, provided conditions are laid down which offer equivalent guarantees.3. Additional provisions may be adopted in accordance with the procedure laid down in Article 30 of Regulation (EEC) No 804/68. 1. No export refund shall be granted on products listed in Article 1 of Regulation (EEC) No 804/68 which are imported from third countries and re-exported to third countries, unless the exporter proves: - that the product to be exported and the product previously imported are one and the same, and- that the levy was collected on importation.2. 
In such cases the refund on each product shall be equal to the levy collected on importation where 1OJ No L 151, 30.6.1968, p. 3.this levy is equal to or lower than the refund applicable on the day of exportation ; the refund shall be equal to the refund applicable on the day of exportation where the levy collected on importation is higher than this refund. This Regulation shall enter into force on 4 July 1968.It shall apply from 29 July 1968.This Regulation shall be binding in its entirety and directly applicable in all Member States.", "output": ["Regulation laying down general rules for granting export refunds on milk and milk products and criteria for fixing the amount of such refunds"]}, "Prediction": "The amount of the export refund on the products"} +{"Task": "task743_eurlex_summarization", "Definition": ["In this task, you are provided with an article about the legal acts. Your task is to generate a headline for that article."], "Instance": {"id": "task743-facdc03f3c34457cbf862e1bc8aac31a", "input": "1. Member States, acting in accordance with the provisions hereinafter laid down, shall adopt the following transitional measures in respect of establishment or provision of services in their territories by natural persons or companies or firms covered by Title I of the General Programmes (hereinafter called 'beneficiaries') wishing to engage in the activities of self-employed persons referred to in paragraph 2.2. The activities in question are those covered by the Council Directive of 15 October 1968 concerning the attainment of freedom of establishment and freedom to provide services in respect of activities of self-employed persons in the personal services sector (ISIC ex Major Group 85) ; 1 : Restaurants, cafes, taverns and other drinking and eating places (ISIC Group 852) ; 2 : Hotels, rooming houses, camps and other lodging places (ISIC Group 853). 
Where, according to the laws of a Member State, certain activities fall not within the personal services sector but rather within the food manufacturing and beverage industries, the appropriate Directive laying down detailed provisions concerning transitional measures in the field in question shall apply to those activities in that Member State. Member States in which the taking up or pursuit of any activity referred to in Article 1 (2) is subject to possession of certain qualifications shall ensure that any beneficiary who applies therefor be provided, before he establishes himself or before he begins to pursue any activity on a temporary basis, with information as to the rules governing the occupation which he proposes to pursue. 1. Where, in a Member State, the taking up or pursuit of any activity referred to in Article 1 (2) is dependent on the possession of general, commercial or professional knowledge and ability, that Member State shall accept as sufficient evidence of such knowledge and ability the fact that the activity in question has been pursued in another Member State for any of the following periods: (a) three consecutive years either in an independent capacity or in a managerial capacity ; or(b) two consecutive years either in an independent capacity or in a managerial capacity, where the beneficiary can prove that for the occupation in question he has received previous training, attested by a certificate recognised by the State, or regarded by the competent professional or trade body as fully satisfying its requirements ; or(c) two consecutive years in an independent capacity or in a managerial capacity, where the beneficiary can prove that he has pursued the occupation in question for at least three years in a non-independent capacity ; or 1OJ No L 260, 22.10.1968, p. 
9.(d) three consecutive years in a non-independent capacity, where the beneficiary can prove that for the occupation in question he has received previous training, attested by a certificate recognised by the State, or regarded by the competent professional or trade body as fully satisfying its requirements.The host Member State may require of nationals of other Member States, in so far as it so requires of its own nationals, that the activity in question should have been pursued, and vocational training received, in the branch of trade in the host country in which the beneficiary wishes to establish himself.2. In the cases referred to in subparagraphs (a) and (c) of paragraph 1 pursuit of the activity shall not have ceased more than ten years before the date when the application provided for in Article 6 (2) is made. However, where a shorter period is laid down in a Member State for its nationals, that period may also be applied in respect of beneficiaries. 1. Where, in a Member State, the taking up or pursuit of any activity referred to in Article 1 (2) is not subject to the possession of general, commercial or professional knowledge or ability, that State may, if serious difficulties result from the application of the Council Directive referred to in Article 1 (2), request from the Commission authorisation, for a limited period and in respect of one or more specified activities, to require from nationals of other Member States who wish to pursue those activities in its territory proof that they possess the qualifications required for the pursuit of such activities, either in an independent capacity or in a managerial capacity, in the country whence they come.Use shall not be made of this power in respect of a person who comes from a country which does not make the taking up of the activity in question subject to proof of particular knowledge, or in respect of a person who has lived in the host country for at least five years.2. 
On receipt of a request from the Member State concerned stating the reasons on which it is based, the Commission shall forthwith specify the conditions on which the authorisation provided for in paragraph 1 of this Article will be granted and lay down detailed rules concerning its implementation. 1. A person shall be regarded as having pursued an activity in a managerial capacity within the meaning of Articles 4 and 5 if he has pursued such activity in an industrial or commercial undertaking in the field in question: (a) as manager of an undertaking or manager of a branch of an undertaking ; or(b) as deputy to the proprietor or to the manager of an undertaking, where such post involves responsibility equivalent to that of the proprietor or manager represented ; or(c) in a managerial post with duties of a type normally pertaining to the occupation in question and with responsibility for one or more departments of the undertaking.2. Proof that the conditions laid down in Article 4 (1) or in Article 5 (1) are satisfied shall be established by certificates issued by the competent authority or body in the country whence the person concerned comes, which such person shall submit in support of his application for authorisation to pursue the activity or activities in question in the host country.3. Member States shall, within the time limit laid down in Article 8, designate the authorities and bodies competent to issue these certificates and shall forthwith inform the other Member States and the Commission thereof. The provisions of this Directive shall remain applicable until the entry into force of provisions relating to the co-ordination of national rules concerning the taking up and pursuit of the activities in question. Member States shall adopt the measures necessary to comply with this Directive within six months of its notification and shall forthwith inform the Commission thereof. 
Member States shall ensure that the texts of the main provisions of national law which they adopt in the field covered by this Directive are communicated to the Commission. 0This Directive is addressed to the Member States.", "output": ["Council Directive laying down detailed provisions concerning transitional measures in respect of activities of self-employed persons in the personal services sector (ISIC ex Major Group 85): 1. Restaurants, cafes, taverns and other drinking and eating places (ISIC Group 852), 2. Hotels, rooming houses, camps and other lodging places (ISIC Group 853)"]}, "Prediction": "Member States adopt transitional measures in respect of"} +{"Task": "task743_eurlex_summarization", "Definition": ["In this task, you are provided with an article about the legal acts. Your task is to generate a headline for that article."], "Instance": {"id": "task743-6bef5f3145e043e3af084358a6d65c20", "input": "The common quality standards for cauliflowers, tomatoes, apples and pears, peaches, citrus fruits and table grapes shall be supplemented by the addition of a further quality class described as 'Class III'.Class III is defined for each product in the Annexes to this Regulation. This Regulation shall enter into force on 1 January 1967.This Regulation shall be binding in its entirety and directly applicable in all Member States.", "output": ["Regulation adding a supplementary quality class to the common quality standards for certain fruits and vegetables"]}, "Prediction": "Council Decision amending the common quality standards for"} +{"Task": "task1388_cb_entailment", "Definition": ["In this task, you will be presented with a premise and a hypothesis sentence. Determine whether the hypothesis sentence entails (implies), contradicts (opposes), or is neutral with respect to the given premise. 
Please answer with \"Contradiction\", \"Neutral\", or \"Entailment\"."], "Instance": {"id": "task1388-5f17ad50c9c64652a3aadde5af93ef0e", "input": "Premise: And I resent what happened to my flat. A couple of guys think they can stake it out and wait for me, rub their filthy fingers on my clothes, piss in my bathroom, and I'm supposed to ignore it. I know what I said about possessions being like leeches but that don't mean I 'll surrender them to a pair of punks. Hypothesis: he will surrender his possessions to a pair of punks", "output": ["Contradiction"]}, "Prediction": "False"} +{"Task": "task1388_cb_entailment", "Definition": ["In this task, you will be presented with a premise and a hypothesis sentence. Determine whether the hypothesis sentence entails (implies), contradicts (opposes), or is neutral with respect to the given premise. Please answer with \"Contradiction\", \"Neutral\", or \"Entailment\"."], "Instance": {"id": "task1388-780616a33f9346b4b00822dd6ca0659e", "input": "Premise: ``Clever''. Klug means ``clever''. Would you say that Abie was clever? Hypothesis: Abie was clever", "output": ["Neutral"]}, "Prediction": "Entails"} +{"Task": "task1388_cb_entailment", "Definition": ["In this task, you will be presented with a premise and a hypothesis sentence. Determine whether the hypothesis sentence entails (implies), contradicts (opposes), or is neutral with respect to the given premise. Please answer with \"Contradiction\", \"Neutral\", or \"Entailment\"."], "Instance": {"id": "task1388-54093d0c6d144587a448bb31cc5ed4e9", "input": "Premise: A: I, that would have been stupid, B: Yeah. A: and I don't think we did it. Everything else we handled in this seemed to be perfectly right. I don't think they would have done that. Hypothesis: they would have done that", "output": ["Contradiction"]}, "Prediction": "False"} +{"Task": "task1388_cb_entailment", "Definition": ["In this task, you will be presented with a premise and a hypothesis sentence. 
Determine whether the hypothesis sentence entails (implies), contradicts (opposes), or is neutral with respect to the given premise. Please answer with \"Contradiction\", \"Neutral\", or \"Entailment\"."], "Instance": {"id": "task1388-4cfa750e91ae40549c3c4f66f5dd60a1", "input": "Premise: She said good morning to Alice and Alice said hallo. She was thin and rather tall with a very lined gentle face and hair that was white but which Alice could see had once been blonde. She could also have told this was Tina's mother before Mrs Darne went off down the passage that led to the Headmaster's Flat. Hypothesis: this was Tina's mother", "output": ["Entailment"]}, "Prediction": "Entails"} +{"Task": "task1388_cb_entailment", "Definition": ["In this task, you will be presented with a premise and a hypothesis sentence. Determine whether the hypothesis sentence entails (implies), contradicts (opposes), or is neutral with respect to the given premise. Please answer with \"Contradiction\", \"Neutral\", or \"Entailment\"."], "Instance": {"id": "task1388-3b098b849ff743278dc0514bf076c878", "input": "Premise: B: So again, it never really penalizes, the company doing the wrong. A: Right. That will, B: They can go right on doing the same old thing they always used to. A: Huh. B: And if they know some practice is wrong, you know, Hypothesis: some practice is wrong", "output": ["Entailment"]}, "Prediction": "Entails"} +{"Task": "task671_ambigqa_text_generation", "Definition": ["In this task, you're given an ambiguous question (which can be answered in more than one way). Your task is to provide one question which clarifies the input question and it has one unique answer, and also provide an answer to the generated question. 
Generated question and answer should be separated with a new line."], "Instance": {"id": "task671-c424f91a0728404b859abcf5658cb9c2", "input": "Who invented the process to remove impurities by blasts of cold air blown through heated iron?", "output": ["What Englishman invented the process to remove impurities by blasts of cold air blown through heated iron? \n Henry Bessemer"]}, "Prediction": "Who invented the process to remove impurities"} +{"Task": "task671_ambigqa_text_generation", "Definition": ["In this task, you're given an ambiguous question (which can be answered in more than one way). Your task is to provide one question which clarifies the input question and it has one unique answer, and also provide an answer to the generated question. Generated question and answer should be separated with a new line."], "Instance": {"id": "task671-cf65bb72b8154fad847563db88c2373e", "input": "How old do you have to be to get a tattoo in indiana?", "output": ["How old do you have to be to get a tattoo in Indiana without parental consent? \n 18"]}, "Prediction": "You must be at least 18 years old to"} +{"Task": "task671_ambigqa_text_generation", "Definition": ["In this task, you're given an ambiguous question (which can be answered in more than one way). Your task is to provide one question which clarifies the input question and it has one unique answer, and also provide an answer to the generated question. Generated question and answer should be separated with a new line."], "Instance": {"id": "task671-e57ef62ea85e4198b854f74f8666ccb5", "input": "When did the old age pension start in australia?", "output": ["When did an old age pension start in parts of australia? \n 1900"]}, "Prediction": "When did the old age pension start in Australia"} +{"Task": "task671_ambigqa_text_generation", "Definition": ["In this task, you're given an ambiguous question (which can be answered in more than one way). 
Your task is to provide one question which clarifies the input question and it has one unique answer, and also provide an answer to the generated question. Generated question and answer should be separated with a new line."], "Instance": {"id": "task671-c443b946b54040119324257e70e4a081", "input": "Who wrote the music for shrek the musical?", "output": ["Who wrote the music scores for shrek the musical? \n Jeanine Tesori"]}, "Prediction": "Who wrote the music for Shrek the"} +{"Task": "task671_ambigqa_text_generation", "Definition": ["In this task, you're given an ambiguous question (which can be answered in more than one way). Your task is to provide one question which clarifies the input question and it has one unique answer, and also provide an answer to the generated question. Generated question and answer should be separated with a new line."], "Instance": {"id": "task671-7fa35c1d80c94845ba6420f4e649176b", "input": "What is the scientific name for a red fox?", "output": ["What is the scientific name for all red foxes? \n Vulpes vulpes"]}, "Prediction": "What is the scientific name for a red"} +{"Task": "task121_zest_text_modification", "Definition": ["In this task, you are given two questions about a domain. Your task is to combine the main subjects of the questions to write a new, natural-sounding question. For example, if the first question is about the tallness of the president and the second question is about his performance at college, the new question can be about his tallness at college. Try to find the main idea of each question, then combine them; you can use different words or make the subjects negative (i.e., ask about shortness instead of tallness) to combine the subjects. The questions are in three domains: presidents, national parks, and dogs. Each question has a keyword indicating its domain. 
Keywords are \"this national park\", \"this dog breed\", and \"this president\", which will be replaced with the name of an actual president, a national park, or a breed of dog. Hence, in the new question, this keyword should also be used the same way. Do not write unnatural questions. (i.e., would not be a question someone might normally ask about domains). Do not write open-ended or subjective questions. (e.g., questions that can be answered differently by different people.) If you couldn't find the answer to your question from a single Google search, try to write a different question. You do not have to stick with the original question word for word, but you should try to create a question that combines the main subjects of the question."], "Instance": {"id": "task121-6887f42733cc49ab93685b33f9c1584c", "input": "Is the number of caves in this national park greater than 2? Can you go spelunking in this national park?", "output": ["Can you go spelunking in more than one cave at this national park?"]}, "Prediction": "Are there more than 2 caves in this"} +{"Task": "task121_zest_text_modification", "Definition": ["In this task, you are given two questions about a domain. Your task is to combine the main subjects of the questions to write a new, natural-sounding question. For example, if the first question is about the tallness of the president and the second question is about his performance at college, the new question can be about his tallness at college. Try to find the main idea of each question, then combine them; you can use different words or make the subjects negative (i.e., ask about shortness instead of tallness) to combine the subjects. The questions are in three domains: presidents, national parks, and dogs. Each question has a keyword indicating its domain. Keywords are \"this national park\", \"this dog breed\", and \"this president\", which will be replaced with the name of an actual president, a national park, or a breed of dog. 
Hence, in the new question, this keyword should also be used the same way. Do not write unnatural questions. (i.e., would not be a question someone might normally ask about domains). Do not write open-ended or subjective questions. (e.g., questions that can be answered differently by different people.) If you couldn't find the answer to your question from a single Google search, try to write a different question. You do not have to stick with the original question word for word, but you should try to create a question that combines the main subjects of the question."], "Instance": {"id": "task121-fed553f892f7483784efc072f0c071fb", "input": "Does this national park feature more than two restaurants? Is there a hotel you can sleep at in this national park?", "output": ["Is there a restaurant to eat at or a hotel you can sleep in at this national park?"]}, "Prediction": "Are there more than two restaurants in this national"} +{"Task": "task121_zest_text_modification", "Definition": ["In this task, you are given two questions about a domain. Your task is to combine the main subjects of the questions to write a new, natural-sounding question. For example, if the first question is about the tallness of the president and the second question is about his performance at college, the new question can be about his tallness at college. Try to find the main idea of each question, then combine them; you can use different words or make the subjects negative (i.e., ask about shortness instead of tallness) to combine the subjects. The questions are in three domains: presidents, national parks, and dogs. Each question has a keyword indicating its domain. Keywords are \"this national park\", \"this dog breed\", and \"this president\", which will be replaced with the name of an actual president, a national park, or a breed of dog. Hence, in the new question, this keyword should also be used the same way. Do not write unnatural questions. 
(i.e., would not be a question someone might normally ask about domains). Do not write open-ended or subjective questions. (e.g., questions that can be answered differently by different people.) If you couldn't find the answer to your question from a single Google search, try to write a different question. You do not have to stick with the original question word for word, but you should try to create a question that combines the main subjects of the question."], "Instance": {"id": "task121-05f0f85e86e34378a6b90384397d2e3e", "input": "Is white an acceptable color for this dog breed? Can a dog from this dog breed have only one color on their fur?", "output": ["Is white the only acceptable color for this dog breed?"]}, "Prediction": "Are white dogs acceptable in this dog breed?"} +{"Task": "task121_zest_text_modification", "Definition": ["In this task, you are given two questions about a domain. Your task is to combine the main subjects of the questions to write a new, natural-sounding question. For example, if the first question is about the tallness of the president and the second question is about his performance at college, the new question can be about his tallness at college. Try to find the main idea of each question, then combine them; you can use different words or make the subjects negative (i.e., ask about shortness instead of tallness) to combine the subjects. The questions are in three domains: presidents, national parks, and dogs. Each question has a keyword indicating its domain. Keywords are \"this national park\", \"this dog breed\", and \"this president\", which will be replaced with the name of an actual president, a national park, or a breed of dog. Hence, in the new question, this keyword should also be used the same way. Do not write unnatural questions. (i.e., would not be a question someone might normally ask about domains). Do not write open-ended or subjective questions. (e.g., questions that can be answered differently by different people.) 
If you couldn't find the answer to your question from a single Google search, try to write a different question. You do not have to stick with the original question word for word, but you should try to create a question that combines the main subjects of the question."], "Instance": {"id": "task121-1a06908cd9cc43d88ffb36f3066f34b1", "input": "What year was the last year this president was in office? How did the stock market perform while this president was in office?", "output": ["How did the economy perform as this president took office?"]}, "Prediction": "Was the stock market performing well while this president"} +{"Task": "task121_zest_text_modification", "Definition": ["In this task, you are given two questions about a domain. Your task is to combine the main subjects of the questions to write a new, natural-sounding question. For example, if the first question is about the tallness of the president and the second question is about his performance at college, the new question can be about his tallness at college. Try to find the main idea of each question, then combine them; you can use different words or make the subjects negative (i.e., ask about shortness instead of tallness) to combine the subjects. The questions are in three domains: presidents, national parks, and dogs. Each question has a keyword indicating its domain. Keywords are \"this national park\", \"this dog breed\", and \"this president\", which will be replaced with the name of an actual president, a national park, or a breed of dog. Hence, in the new question, this keyword should also be used the same way. Do not write unnatural questions. (i.e., would not be a question someone might normally ask about domains). Do not write open-ended or subjective questions. (e.g., questions that can be answered differently by different people.) If you couldn't find the answer to your question from a single Google search, try to write a different question. 
You do not have to stick with the original question word for word, but you should try to create a question that combines the main subjects of the question."], "Instance": {"id": "task121-868517cceacd4a31b2134a81291cd899", "input": "Can this dog breed have black or brown spots? Is white an acceptable color for this dog breed?", "output": ["Can this dog breed be white, black, or brown?"]}, "Prediction": "Are black or brown spots acceptable for this dog"} +{"Task": "task1345_glue_qqp_question_paraprashing", "Definition": ["In this task you're given a question and you have to paraphrase the question to create the output question while retaining the meaning of the original question."], "Instance": {"id": "task1345-2ca8293fc79644ed811ecd82bf1f18d7", "input": "What are the options if my daughter not got seat in IIT or NIT. At present she is under coaching in Allen, Kota. This year she is in class 11th.?", "output": ["Who is won indutal medal?"]}, "Prediction": "What are the options if my daughter not"} +{"Task": "task1345_glue_qqp_question_paraprashing", "Definition": ["In this task you're given a question and you have to paraphrase the question to create the output question while retaining the meaning of the original question."], "Instance": {"id": "task1345-740c14a386ca4d0189102bd337f5c34d", "input": "Do employees at Select Income REIT have a good work-life balance? Does this differ across positions and departments?", "output": ["Do employees at Pennsylvania REIT have a good work-life balance? Does this differ across positions and departments?"]}, "Prediction": "Do employees at Select Income REIT have "} +{"Task": "task1345_glue_qqp_question_paraprashing", "Definition": ["In this task you're given a question and you have to paraphrase the question to create the output question while retaining the meaning of the original question."], "Instance": {"id": "task1345-0931b1915b994c01a54722498770c863", "input": "? 
to be deleted", "output": ["Did Hitler underestimate the jewish mafia-nation?"]}, "Prediction": "? to be deleted"} +{"Task": "task1345_glue_qqp_question_paraprashing", "Definition": ["In this task you're given a question and you have to paraphrase the question to create the output question while retaining the meaning of the original question."], "Instance": {"id": "task1345-d17816f0fe5a4a5492534bcbda2ba135", "input": "When should you lose your virginity?", "output": ["Why did you lose your virginity?"]}, "Prediction": "When should you lose your virginity?"} +{"Task": "task1345_glue_qqp_question_paraprashing", "Definition": ["In this task you're given a question and you have to paraphrase the question to create the output question while retaining the meaning of the original question."], "Instance": {"id": "task1345-3da463def78d4e1087d980de867aa6aa", "input": "Fetch jobs from job portals through API calls?", "output": ["What are some creative ideas for arranging a freshers' party?"]}, "Prediction": "How can I get jobs from job portals"} +{"Task": "task330_gap_answer_generation", "Definition": ["In this task, you will be presented with a text and a pronoun. You should write the name that the pronoun refers to. Position of the pronoun in the text is showed within two \"_\"s."], "Instance": {"id": "task330-64d9a82a31e649428d61fa4deabf470a", "input": "General Adkins served in the Maryland Military Department as chief of staff and assistant adjutant general for state operations. He also served as director of the Cemetery and Memorial Programs and as the deputy secretary of Veterans Affairs. In May 2007, Governor Martin O*Malley appointed General Adkins to _his_ cabinet as the secretary of Veterans Affairs. , Pronoun: his", "output": ["Martin O*Malley"]}, "Prediction": "Governor Martin O*Malley"} +{"Task": "task330_gap_answer_generation", "Definition": ["In this task, you will be presented with a text and a pronoun. You should write the name that the pronoun refers to. 
Position of the pronoun in the text is showed within two \"_\"s."], "Instance": {"id": "task330-d21ed172a6d0420d850b7a43494cf12f", "input": "Mike D'Angelo of The A.V. Club also praised Forster's performance and the tone of the film, saying ``If the film came across as jaded and cynical, it might well be hard to take. Instead, it seems weary and defeated, as if some people are simply born to fail.'' Conversely, Kristy Puchko of The Playlist criticized the film's humor and called it ``one note''. While _she_ praised the performances of Daniela Sandiford, Cole, Weaver, Forster, and Healy, Puchko stated that Coster-Waldau's performance ``keeps us at a distance, unable to connect, empathize, or care''. , Pronoun: she", "output": ["Kristy Puchko"]}, "Prediction": "Kristy Puchko"} +{"Task": "task330_gap_answer_generation", "Definition": ["In this task, you will be presented with a text and a pronoun. You should write the name that the pronoun refers to. Position of the pronoun in the text is showed within two \"_\"s."], "Instance": {"id": "task330-e4965efde018479e9660076781b51d09", "input": "While walking across the front lawn, Gasazi reveals that when he sees dogs digging up his flowers and chewing on his trees in the garden, he turns them into ducks. When one duck comes towards Alan, Gasazi tells _him_ to take Fritz (who's now a duck) home and throws him out. , Pronoun: him", "output": ["Alan"]}, "Prediction": "Alan"} +{"Task": "task330_gap_answer_generation", "Definition": ["In this task, you will be presented with a text and a pronoun. You should write the name that the pronoun refers to. Position of the pronoun in the text is showed within two \"_\"s."], "Instance": {"id": "task330-7a910140c3b249a8abc7685e8ec151b3", "input": "Luigi Infantino (April 24, 1921, Racalmuto -- June 22, 1991, Rome) was an Italian operatic tenor, particularly associated with the lyric Italian and French repertories. Luigi Infantino studied at the Parma Conservatory with Italo Brancucci. 
_He_made his debut in 1943, at the Teatro Regio in Parma, as Rodolfo in La boh*me, which was also his debut role at the Teatro San Carlo in Naples, in 1945. , Pronoun: He", "output": ["Luigi Infantino"]}, "Prediction": "Luigi Infantino"} +{"Task": "task330_gap_answer_generation", "Definition": ["In this task, you will be presented with a text and a pronoun. You should write the name that the pronoun refers to. Position of the pronoun in the text is showed within two \"_\"s."], "Instance": {"id": "task330-81c5b4e540854c65ad9e9247c684c295", "input": "Significant places listed individually on the National Register of Historic Places include: Boswell's Tavern, an important meeting place during the American Revolution and a well-preserved example of a Colonial-era tavern. Grassdale, an Italianate villa built in 1861 by the Morris family, later home of Rear Admiral David W. Taylor and _his_ wife Imogene Morris Taylor. , Pronoun: his", "output": ["David W. Taylor"]}, "Prediction": "David W. Taylor"} +{"Task": "task1342_amazon_us_reviews_title", "Definition": ["Given an Amazon customer review, write a title for the review. The preferred titles are under fifteen words."], "Instance": {"id": "task1342-6d10f4abfd7a4eab86384d15df98d8f9", "input": "I got this Hub to help add additional ports for my laptop. I definitely do not NEED 4 more 3.0 ports at this point, but its great to have this plug in to the existing 3.0 port and be hidden under my laptop stand to create more space on the sides of my laptop.", "output": ["Perfect space saver for your desk"]}, "Prediction": "Great for adding additional ports"} +{"Task": "task1342_amazon_us_reviews_title", "Definition": ["Given an Amazon customer review, write a title for the review. The preferred titles are under fifteen words."], "Instance": {"id": "task1342-8909f0c6a0354f9888dec31f315b932b", "input": "I love this little device. 
Yesterday, I was waiting in an ungodly long line at the post office and luckily had my library at my fingertips. The size of the Kindle makes it easy to transport with you wherever you are. Also, the 3G internet is a great perk. I was able to check my email as well as read some articles using google reader in the middle of the post office. If you like to read, then this is the e-reader you should get. Use it for books, and think of the internet as just a perk.", "output": ["Extremely pleased"]}, "Prediction": "I love this little device"} +{"Task": "task1342_amazon_us_reviews_title", "Definition": ["Given an Amazon customer review, write a title for the review. The preferred titles are under fifteen words."], "Instance": {"id": "task1342-8bca08d4262f466c9cee933cf7d86c6e", "input": "The monitor I received as an A0 version but manufactured in July of 2014. I ordered it in October of 2014 so clearly this is current product. I suspect the versions indicated the plant where the monitor was manufactured not the technology in the unit. My monitor has an excellent picture and no buzzing. The only area that I feel needs clarifying is that the monitor can only display WQHD (1440) graphics on Dual Link DVI graphics interfaces with the Dual Link DVI cable or the Displayport. The VGA and HMDI interfaces cannot go past HD resolution. This resulted in my needing to get a suitable graphics card and cable. This can be harder than it seems as most DVI are single link. I found an EVGA card with an Nvidia GeForce GT610 GPU with 1 GB of ram works fabulously with my standard HP Pavillion power supply and delivers excellent performance. However, I am not gaming full screen and full resolution most of the time but instead doing Photo editing and CAD work.

Bottom line: This is an excellent monitor with a minor restriction.", "output": ["My monitor has an excellent picture and no buzzing"]}, "Prediction": "Excellent monitor with a minor restriction"} +{"Task": "task1342_amazon_us_reviews_title", "Definition": ["Given an Amazon customer review, write a title for the review. The preferred titles are under fifteen words."], "Instance": {"id": "task1342-940bf08db2474b8eb0cc865e9e037f9b", "input": "What can I say - everything about this Kindle is great! I have read many books, papers and played games on it and all have been very enjoyable. And when I compared it to other Ebooks the Kindle in my opinion was the better choice. The only thing I wish it had is if you could get it in different colors. Graphite gray and white are the only colors. If you do decide to buy the Kindle make sure you buy a case for it. I bought the Amazon leather case with the light - the case is kind of expensive but well worth it - you won't be disappointed with it and I especially enjoy the light.", "output": ["An Awesome Ebook."]}, "Prediction": "Everything about this Kindle is great!"} +{"Task": "task1342_amazon_us_reviews_title", "Definition": ["Given an Amazon customer review, write a title for the review. The preferred titles are under fifteen words."], "Instance": {"id": "task1342-c62b5103336a4a5da4b80c93554d9bbc", "input": "AMAZING! These speakers are so good for what you pay for. If you want to be able to blast music and make it sound rich and have bass this is the best option. With 5,000+ reviews and over 4 stars this is a must! This thing is awesome!", "output": ["Outstanding!"]}, "Prediction": "AMAZING! These speakers are so good for what"} +{"Task": "task329_gap_classification", "Definition": ["In this task, you will be presented with a text, a pronoun from the text, and two candidate names. You should determine what the pronoun refers to and classify the answers into A, B, or Neither. 
A and B here are referring to option A and option B. Position of the pronoun in the text is showed within two \"_\"s."], "Instance": {"id": "task329-f241c2f855234c589e7281866627da68", "input": "Coal was first reported in Brunei Darussalam near Muara as early as 1837. in Serai Pimping. The Muara coalmine was first mined commercially in 1883, when William Cowie was given the concession rights to mine the coal in exchange for $1,200 per year. However, Cowie later sold _his_ rights to Rajah Charles Brooke and the Rajah renamed the mine Brooketon (Brooke Town). , Pronoun: his , A: Rajah Charles Brooke , B: Rajah", "output": ["Neither"]}, "Prediction": "A"} +{"Task": "task329_gap_classification", "Definition": ["In this task, you will be presented with a text, a pronoun from the text, and two candidate names. You should determine what the pronoun refers to and classify the answers into A, B, or Neither. A and B here are referring to option A and option B. Position of the pronoun in the text is showed within two \"_\"s."], "Instance": {"id": "task329-c75f6b713bb844b9929d835c78d916b2", "input": "Martha tells Ted he values the wrong things and Ted calls Martha a freeloader. That night she attempts to contact the cult but is greeted by a girl named Marlene Miller. Martha loses her nerve and hangs up. In a flashback at the cult, Martha helps a girl named Sally integrate into the cult. _She_ helps Patrick drug and rape Sally, as had been done to her. , Pronoun: She , A: Martha , B: Sally", "output": ["A"]}, "Prediction": "A"} +{"Task": "task329_gap_classification", "Definition": ["In this task, you will be presented with a text, a pronoun from the text, and two candidate names. You should determine what the pronoun refers to and classify the answers into A, B, or Neither. A and B here are referring to option A and option B. 
Position of the pronoun in the text is showed within two \"_\"s."], "Instance": {"id": "task329-7a61c12a9150439c9ef4c1ed395640b2", "input": "He drew obsessively from childhood obsessed, both from his own imagination and by copying images that particularly pleased him. 1913 after his return from Ascona, he attended painting school in Munich for a total of eight days. But doubt in his ability was so strong that he hid his work from all but _his_ friend, the writer Oskar Maria Graf (as Schrimpf former baker). , Pronoun: his , A: Oskar Maria Graf , B: Schrimpf", "output": ["Neither"]}, "Prediction": "A"} +{"Task": "task329_gap_classification", "Definition": ["In this task, you will be presented with a text, a pronoun from the text, and two candidate names. You should determine what the pronoun refers to and classify the answers into A, B, or Neither. A and B here are referring to option A and option B. Position of the pronoun in the text is showed within two \"_\"s."], "Instance": {"id": "task329-1f72b43ce4a84f739a5a2915fcd82922", "input": "In Bram Stoker's Dracula (1992), directed by Francis Ford Coppola, Lucy is played by Sadie Frost. Lucy is eroticized much further than her literary incarnation, becoming more than seductive and coquettish, even tempting, and what she says often refers to sex. As a spoiled child of aristocracy, she talks with artlessness and frankness, bordering on the indecent. Unlike _her_ friend Mina, who stays resolute, Lucy's sexual tendencies are to be her downfall. , Pronoun: her , A: Mina , B: Lucy", "output": ["B"]}, "Prediction": "B"} +{"Task": "task329_gap_classification", "Definition": ["In this task, you will be presented with a text, a pronoun from the text, and two candidate names. You should determine what the pronoun refers to and classify the answers into A, B, or Neither. A and B here are referring to option A and option B. 
Position of the pronoun in the text is showed within two \"_\"s."], "Instance": {"id": "task329-129a42586bcd4d3bae176fe9a976db09", "input": "I thought the best way to do that was to transfer.'' After Cruthers' decision to leave USMA he called Derek Schooley, the new head coach of RMU Mens Hockey Team. Schooley, formerly assistant coach for Air Force, had recruited Ryan before _his_ decision to attend West Point and was more than happy to take Ryan on the team. , Pronoun: his , A: Schooley , B: Ryan", "output": ["B"]}, "Prediction": "B"} +{"Task": "task281_points_of_correspondence", "Definition": ["You will be given three sentences. Read them, then identify a noun phrase (person, place, or thing) or event that is shared between all three sentences. As the output, write the span of the text corresponding to that phrase in each sentence. Keep the order of the sentences, that is, your answer should look like: 1: *a phras from sentence 1e* 2: *a phras from sentence 2* 3: *a phrase from sentence 3*"], "Instance": {"id": "task281-936c6407ecc14bd2a998cac04cacfdbe", "input": "1: When Moynihan opened the driver 's - side door , the video shows , West sprang out and fired a shot with a pistol at the officer 's face . 2: His condition is best described as serious but improving , Boston police said in a statement . 3: Video shows that the man later shot dead by police in Boston opened fire first .", "output": ["1: Moynihan 2: His 3: police"]}, "Prediction": "1: the video shows, West "} +{"Task": "task281_points_of_correspondence", "Definition": ["You will be given three sentences. Read them, then identify a noun phrase (person, place, or thing) or event that is shared between all three sentences. As the output, write the span of the text corresponding to that phrase in each sentence. 
Keep the order of the sentences, that is, your answer should look like: 1: *a phras from sentence 1e* 2: *a phras from sentence 2* 3: *a phrase from sentence 3*"], "Instance": {"id": "task281-2418da355ca54a838ee933a02170afaa", "input": "1: I would n't , and neither did Ben Affleck , who lobbied producers of a PBS show , Finding Your Roots , to remove any reference to his great-great-great grandfather Benjamin Cole , a Georgia slave owner in the mid-1800 's , in an episode that looked at Affleck 's family history . 2: And on Tuesday , Affleck , via Facebook , admitted that he had urged Gates to excise any reference to his slave-owning relative . 3: Ben Affleck admits he asked PBS show Finding Your Roots to avoid mentioning his slave-owning ancestor .", "output": ["1: Ben Affleck 2: Affleck 3: Ben Affleck", "1: his great-great-great grandfather Benjamin Cole , a Georgia slave owner in the mid-1800 2: his slave-owning relative 3: his slave-owning ancestor", "1: remove 2: excise 3: avoid"]}, "Prediction": "1: Benjamin Cole 2: a Georgia"} +{"Task": "task281_points_of_correspondence", "Definition": ["You will be given three sentences. Read them, then identify a noun phrase (person, place, or thing) or event that is shared between all three sentences. As the output, write the span of the text corresponding to that phrase in each sentence. Keep the order of the sentences, that is, your answer should look like: 1: *a phras from sentence 1e* 2: *a phras from sentence 2* 3: *a phrase from sentence 3*"], "Instance": {"id": "task281-1856f4ff4f964fbdb715929e990ac1b8", "input": "1: ( CNN ) When ISIS overran their villages near Mosul in August 2014 , a small group of Assyrians , a Middle Eastern minority with a history reaching back more than 4,000 years , picked up weapons and formed their own militia : Dwekh Nawsha -- The Sacrificers . 
2: Assyrians belong to the rapidly dwindling Christian population of Iraq -- recent estimates from CAPNI , the largest Christian relief organization in northern Iraq put the number as low as 300,000 compared with 1.5 million 20 years ago -- and many among them see the fight with ISIS as a final battle for survival against the Islamists . 3: Assyrians are an ancient Middle Eastern minority -- they are part of the rapidly dwindling Christian population of Iraq .", "output": ["1: of Assyrians , a Middle Eastern minority with a history reaching back more than 4,000 years 2: Assyrians 3: Assyrians"]}, "Prediction": "1: Assyrians"} +{"Task": "task281_points_of_correspondence", "Definition": ["You will be given three sentences. Read them, then identify a noun phrase (person, place, or thing) or event that is shared between all three sentences. As the output, write the span of the text corresponding to that phrase in each sentence. Keep the order of the sentences, that is, your answer should look like: 1: *a phras from sentence 1e* 2: *a phras from sentence 2* 3: *a phrase from sentence 3*"], "Instance": {"id": "task281-bec8786b3fdd415fa931bb98f30da950", "input": "1: The decimation of our people , too , was reduced to a footnote -- if we were lucky -- in tomes about World War I , when more than one million Armenians were killed during the last days of the Ottoman Empire . 2: As Kim Kardashian , sister Khloe and their two cousins recently toured Armenia -- on the eve of the 100th anniversary of the mass killings , which is marked on April 24 -- they spotlighted a humanitarian crime that Pope Francis recently called the first genocide of the 20th century . 3: Dawn MacKeen : 2015 marks 100th anniversary of slaughter of Armenians by Ottoman Empire . 
Kim Kardashian has used fame to spotlight this .", "output": ["1: more than one million Armenians were killed 2: the mass killings 3: slaughter of Armenians by Ottoman Empire"]}, "Prediction": "1: The decimation of our people"} +{"Task": "task281_points_of_correspondence", "Definition": ["You will be given three sentences. Read them, then identify a noun phrase (person, place, or thing) or event that is shared between all three sentences. As the output, write the span of the text corresponding to that phrase in each sentence. Keep the order of the sentences, that is, your answer should look like: 1: *a phras from sentence 1e* 2: *a phras from sentence 2* 3: *a phrase from sentence 3*"], "Instance": {"id": "task281-d52e67b97a8949639e02a3e4d0d6101b", "input": "1: Later that month , the ICC opened a preliminary examination into the situation in Palestinian territories , paving the way for possible war crimes investigations against Israelis . 2: Israel and the United States , neither of which is an ICC member , opposed the Palestinians ' efforts to join the body . 3: Israel and the United States opposed the move , which could open the door to war crimes investigations against Israelis .", "output": ["1: Israelis 2: Israel 3: Israelis"]}, "Prediction": "2: Israel and the United States,"} +{"Task": "task036_qasc_topic_word_to_generate_related_fact", "Definition": ["In this task, you need to write a topic word from the given fact. The topic word must have at least one word overlap with the given fact. The topic word often involves adding a new word from a related concept. In your topic word, use at least one word from the given fact. 
Topic words with two or more words work best."], "Instance": {"id": "task036-fcb1706f71934d34a8d3d82b29ae17f0", "input": "Fact: folding an object causes that object to change shape.", "output": ["folding an object.", "folding chair.", "folding objects.", "folding origami.", "folding paper origami.", "origami fold.", "solid objects."]}, "Prediction": "foldable"} +{"Task": "task036_qasc_topic_word_to_generate_related_fact", "Definition": ["In this task, you need to write a topic word from the given fact. The topic word must have at least one word overlap with the given fact. The topic word often involves adding a new word from a related concept. In your topic word, use at least one word from the given fact. Topic words with two or more words work best."], "Instance": {"id": "task036-952b8d7f9f154ea2b193e720fc9f34c3", "input": "Fact: a balance is used for measuring mass of an object.", "output": ["a balance measure weight.", "balance mass.", "mass grams.", "mass kilograms.", "mass of an object.", "measuring mass units.", "object mass grams."]}, "Prediction": "balance scale"} +{"Task": "task036_qasc_topic_word_to_generate_related_fact", "Definition": ["In this task, you need to write a topic word from the given fact. The topic word must have at least one word overlap with the given fact. The topic word often involves adding a new word from a related concept. In your topic word, use at least one word from the given fact. 
Topic words with two or more words work best."], "Instance": {"id": "task036-a9328252a30f49678e4e8ba2c4dbe750", "input": "Fact: Sweating reduces body heat.", "output": ["\"body heat\" reduces sweating.", "Sweating is.", "body heat necessary.", "body heat.", "sweat evaporation cooling.", "sweating body heat.", "sweating body.", "sweating perspiration.", "sweating perspire.", "sweating water.", "sweating."]}, "Prediction": "sweating"} +{"Task": "task036_qasc_topic_word_to_generate_related_fact", "Definition": ["In this task, you need to write a topic word from the given fact. The topic word must have at least one word overlap with the given fact. The topic word often involves adding a new word from a related concept. In your topic word, use at least one word from the given fact. Topic words with two or more words work best."], "Instance": {"id": "task036-726bb202e1f24c759838cf98242acab7", "input": "Fact: a black hole is formed by a high-mass star collapses.", "output": ["black hole.", "collapsed stars.", "high mass star.", "mass star.", "star celestial body.", "star collapse.", "star sun.", "why a star collapses."]}, "Prediction": "black hole formed"} +{"Task": "task036_qasc_topic_word_to_generate_related_fact", "Definition": ["In this task, you need to write a topic word from the given fact. The topic word must have at least one word overlap with the given fact. The topic word often involves adding a new word from a related concept. In your topic word, use at least one word from the given fact. 
Topic words with two or more words work best."], "Instance": {"id": "task036-be679ce566fb422f8575765a5fccb1b1", "input": "Fact: hurricanes convert heat energy into mechanical energy.", "output": ["hurricane is a.", "hurricane saffir-simpson.", "hurricanes damage buildings.", "hurricanes.", "hurricans.", "mechanical energy and heat energy.", "mechanical energy."]}, "Prediction": "hurricanes"} +{"Task": "task1554_scitail_classification", "Definition": ["In this task, you are given two statements. The task is to output whether a given textual premise, i.e. Statement 2, entails or implies a given scientific fact, i.e. Statement 1. The output should be 'entails' if Statement 2 supports Statement 1 and should be 'neutral' otherwise."], "Instance": {"id": "task1554-1398238a733441398f12b5ee795e05fe", "input": "Sentence 1: Motion can be defined simply as a change in position. Sentence 2: Motion can be determined by change in position over time.", "output": ["entails"]}, "Prediction": "entails"} +{"Task": "task1554_scitail_classification", "Definition": ["In this task, you are given two statements. The task is to output whether a given textual premise, i.e. Statement 2, entails or implies a given scientific fact, i.e. Statement 1. The output should be 'entails' if Statement 2 supports Statement 1 and should be 'neutral' otherwise."], "Instance": {"id": "task1554-e27ba6f94a1d49c5a3cc1e529e443d57", "input": "Sentence 1: Range is the term for the horizontal displacement of a projectile from its starting point. Sentence 2: These range from prehistoric projectile points to jewelry made from shells and bone.", "output": ["neutral"]}, "Prediction": "entails"} +{"Task": "task1554_scitail_classification", "Definition": ["In this task, you are given two statements. The task is to output whether a given textual premise, i.e. Statement 2, entails or implies a given scientific fact, i.e. Statement 1. 
The output should be 'entails' if Statement 2 supports Statement 1 and should be 'neutral' otherwise."], "Instance": {"id": "task1554-2ed050fe000d40cd8f0b60e15f66838f", "input": "Sentence 1: A complex machine is described by: it is made of more than one simple machine. Sentence 2: A turing machine is a simple theoritical machine that should be able to solve any problem that a more complex machine can, and hence, is used in proofs on complexity theory.", "output": ["neutral"]}, "Prediction": "entails"} +{"Task": "task1554_scitail_classification", "Definition": ["In this task, you are given two statements. The task is to output whether a given textual premise, i.e. Statement 2, entails or implies a given scientific fact, i.e. Statement 1. The output should be 'entails' if Statement 2 supports Statement 1 and should be 'neutral' otherwise."], "Instance": {"id": "task1554-d004b4c91a784306ad7385e7b868e471", "input": "Sentence 1: In the nuclear fusion process, two light nuclei combine to produce a heavier nucleus and great energy. Sentence 2: nuclear fusion A nuclear process that releases energy when lightweight nuclei combine to form heavy-weight nuclei.", "output": ["entails"]}, "Prediction": "entails"} +{"Task": "task1554_scitail_classification", "Definition": ["In this task, you are given two statements. The task is to output whether a given textual premise, i.e. Statement 2, entails or implies a given scientific fact, i.e. Statement 1. The output should be 'entails' if Statement 2 supports Statement 1 and should be 'neutral' otherwise."], "Instance": {"id": "task1554-398c6b3743a84ceaaa3067567b6b2d94", "input": "Sentence 1: You decrease errors in an experiment by take many measurements. Sentence 2: This decreases the chance of measurement error due to sound reflections from the body of the technician.", "output": ["neutral"]}, "Prediction": "entails"} +{"Task": "task050_multirc_answerability", "Definition": ["You are given a sentence and a question in the input. 
If the information provided in the sentence is enough to answer the question, label \"Yes\", otherwise label \"No\". Do not use any facts other than those provided in the sentence while labeling \"Yes\" or \"No\". There are only two types of valid responses: Yes and No."], "Instance": {"id": "task050-dfc4a7889da54e398033e75ac4fbc6fb", "input": "Sentence: Within the districts lie VDCs (villages), 3 municipalities (Bhaktapur, Kirtipur, Madhyapur Thimi), 1 sub-metropolitan city (Lalitpur), and 1 metropolitan city (Kathmandu). \nQuestion: How many sub-metropolitan cities are present in the districts that make up the Kathmandu valley?", "output": ["Yes."]}, "Prediction": "Yes"} +{"Task": "task050_multirc_answerability", "Definition": ["You are given a sentence and a question in the input. If the information provided in the sentence is enough to answer the question, label \"Yes\", otherwise label \"No\". Do not use any facts other than those provided in the sentence while labeling \"Yes\" or \"No\". There are only two types of valid responses: Yes and No."], "Instance": {"id": "task050-aa3e0831142743d5b7a9faa1dc46d93a", "input": "Sentence: Noodle found a long brown stick. \nQuestion: Where was Jack when he saw the pond with ducks in it?", "output": ["No."]}, "Prediction": "No"} +{"Task": "task050_multirc_answerability", "Definition": ["You are given a sentence and a question in the input. If the information provided in the sentence is enough to answer the question, label \"Yes\", otherwise label \"No\". Do not use any facts other than those provided in the sentence while labeling \"Yes\" or \"No\". There are only two types of valid responses: Yes and No."], "Instance": {"id": "task050-9a54695b449746a7a516504bed0e285c", "input": "Sentence: This made the robot very happy. \nQuestion: Where did Carl come from?", "output": ["No."]}, "Prediction": "No"} +{"Task": "task050_multirc_answerability", "Definition": ["You are given a sentence and a question in the input. 
If the information provided in the sentence is enough to answer the question, label \"Yes\", otherwise label \"No\". Do not use any facts other than those provided in the sentence while labeling \"Yes\" or \"No\". There are only two types of valid responses: Yes and No."], "Instance": {"id": "task050-bb25eeda67b749be973f765fe350badb", "input": "Sentence: - Mary said: Josh, Thank you - Excuse me, Mary ... I have to go ... Otherwise I'll be late for school ... And I would not want to upset the teacher. \nQuestion: Who are the principal characters of the story?", "output": ["Yes."]}, "Prediction": "Yes"} +{"Task": "task050_multirc_answerability", "Definition": ["You are given a sentence and a question in the input. If the information provided in the sentence is enough to answer the question, label \"Yes\", otherwise label \"No\". Do not use any facts other than those provided in the sentence while labeling \"Yes\" or \"No\". There are only two types of valid responses: Yes and No."], "Instance": {"id": "task050-078e69a0a4024dfc8126a435d7f4c75d", "input": "Sentence: When Jimmy got home his hat was sitting there next to his goldfish. \nQuestion: why couldn't he keep looking?", "output": ["No."]}, "Prediction": "No"} +{"Task": "task362_spolin_yesand_prompt_response_sub_classification", "Definition": ["\"Yes, and\" is a rule-of-thumb in improvisational comedy that suggests that a participant in a dialogue should accept what another participant has stated (\"Yes\") and then expand on that line of thought or context (\"and...\"). Given a prompt and two responses, identify which response is the type of \"yes, and\". Indicate your answer by 'Response 1' if the first response is correct, otherwise 'Response 2'. Note that a \"Yes, and\" does not require someone explicitly saying 'yes, and...' 
as part of a dialogue exchange."], "Instance": {"id": "task362-da22c519ce2140cc82a3a3170ccc9d19", "input": "Prompt: Well, were you attracted by the sounds of my working on this here contraption? \n Response 1: I was hiding under it, Cape Fear style. \n Response 2: I did. I was thinking \"I hear something behind my back, some kind of contraption is being worked on. But I can't see anyone here.\"", "output": ["Response 2"]}, "Prediction": "Yes, and"} +{"Task": "task362_spolin_yesand_prompt_response_sub_classification", "Definition": ["\"Yes, and\" is a rule-of-thumb in improvisational comedy that suggests that a participant in a dialogue should accept what another participant has stated (\"Yes\") and then expand on that line of thought or context (\"and...\"). Given a prompt and two responses, identify which response is the type of \"yes, and\". Indicate your answer by 'Response 1' if the first response is correct, otherwise 'Response 2'. Note that a \"Yes, and\" does not require someone explicitly saying 'yes, and...' as part of a dialogue exchange."], "Instance": {"id": "task362-e46d61f4805d4a65818351801e085ac8", "input": "Prompt: I would just like to discern what brow mopping exactly is. \n Response 1: It's like this. I take out my handkerchief and I mop my brow, so as to make a point and to say this is all very weary. \n Response 2: My goodness. You've been here since the beginning. Do you not remember?", "output": ["Response 1"]}, "Prediction": "Response 1"} +{"Task": "task362_spolin_yesand_prompt_response_sub_classification", "Definition": ["\"Yes, and\" is a rule-of-thumb in improvisational comedy that suggests that a participant in a dialogue should accept what another participant has stated (\"Yes\") and then expand on that line of thought or context (\"and...\"). Given a prompt and two responses, identify which response is the type of \"yes, and\". Indicate your answer by 'Response 1' if the first response is correct, otherwise 'Response 2'. 
Note that a \"Yes, and\" does not require someone explicitly saying 'yes, and...' as part of a dialogue exchange."], "Instance": {"id": "task362-703c8005bf974e7591aa084384466e8d", "input": "Prompt: Yeah, your hands look like two thunderclouds with lightening shooting out of them. \n Response 1: Like two horses with one too many legs each. \n Response 2: Thank you, Maybelline. I feel like I whittled it down really well.", "output": ["Response 1"]}, "Prediction": "Response 2"} +{"Task": "task362_spolin_yesand_prompt_response_sub_classification", "Definition": ["\"Yes, and\" is a rule-of-thumb in improvisational comedy that suggests that a participant in a dialogue should accept what another participant has stated (\"Yes\") and then expand on that line of thought or context (\"and...\"). Given a prompt and two responses, identify which response is the type of \"yes, and\". Indicate your answer by 'Response 1' if the first response is correct, otherwise 'Response 2'. Note that a \"Yes, and\" does not require someone explicitly saying 'yes, and...' as part of a dialogue exchange."], "Instance": {"id": "task362-f46e9fb6cf2c416381067489bba5805a", "input": "Prompt: Guys, guys, Nixon is resigning. \n Response 1: I was having a great day, but I just realized there's only one channel on television and that's going to ruin my life forever. \n Response 2: The president?", "output": ["Response 2"]}, "Prediction": "No, and"} +{"Task": "task362_spolin_yesand_prompt_response_sub_classification", "Definition": ["\"Yes, and\" is a rule-of-thumb in improvisational comedy that suggests that a participant in a dialogue should accept what another participant has stated (\"Yes\") and then expand on that line of thought or context (\"and...\"). Given a prompt and two responses, identify which response is the type of \"yes, and\". Indicate your answer by 'Response 1' if the first response is correct, otherwise 'Response 2'. 
Note that a \"Yes, and\" does not require someone explicitly saying 'yes, and...' as part of a dialogue exchange."], "Instance": {"id": "task362-21dfc4e278684bf790d61a8220ce5685", "input": "Prompt: Wait, they give you cups of coffee when you make it to the major leagues? \n Response 1: Yeah, rook, that's what happens. You get a cup of coffee. And if you can't stand the heat you're sent right back down to the minors. \n Response 2: Yeah. My boss says I'm in to weird music and thought I might like these. He doesn't know me at all.", "output": ["Response 1"]}, "Prediction": "Response 2"} +{"Task": "task1557_jfleg_answer_generation", "Definition": ["In this task, you will be shown an incorrect English sentence. You need to generate a corrected form of the input sentence."], "Instance": {"id": "task1557-b86c45563ff7453a927e4e6da9ec901a", "input": "However , all of them are limited . ", "output": ["However , all of them are limited . "]}, "Prediction": "However, all of them are limited "} +{"Task": "task1557_jfleg_answer_generation", "Definition": ["In this task, you will be shown an incorrect English sentence. You need to generate a corrected form of the input sentence."], "Instance": {"id": "task1557-9f595665ef224677902449295113f7b6", "input": "However , there were n't any particles . ", "output": ["However , there were n't any particles . "]}, "Prediction": "However, there weren't"} +{"Task": "task1557_jfleg_answer_generation", "Definition": ["In this task, you will be shown an incorrect English sentence. You need to generate a corrected form of the input sentence."], "Instance": {"id": "task1557-00d36742ce044b73af2ef2924a7fd5d4", "input": "Specializing in one particular subject does not suit our life in this era which is characterized by diversity and innovation ", "output": ["Specializing in one particular subject does not suit our lives in this era which is characterized by diversity and innovation . 
"]}, "Prediction": "Specializing in one particular subject does not suit"} +{"Task": "task1557_jfleg_answer_generation", "Definition": ["In this task, you will be shown an incorrect English sentence. You need to generate a corrected form of the input sentence."], "Instance": {"id": "task1557-ff55c90fd2084ac790d0ccbad2f6bc9a", "input": "For example one man is a football club fans . ", "output": ["For example , one man is a football club fan . "]}, "Prediction": "For example one man is a football club"} +{"Task": "task1557_jfleg_answer_generation", "Definition": ["In this task, you will be shown an incorrect English sentence. You need to generate a corrected form of the input sentence."], "Instance": {"id": "task1557-8c43c9d7d33b49a0a28b94201e8ece86", "input": "Fish firming uses the lots of special products such as fish meal . ", "output": ["Fish farming uses lots of special products such as fish meal . "]}, "Prediction": "Fish firming uses the lots of special products"} +{"Task": "task249_enhanced_wsc_pronoun_disambiguation", "Definition": ["In each example you will read a short sentence (or two). Then, a pronoun in the text is mentioned. Your task is to choose a referent which the mentioned pronoun refers to. You will be given two options in each case and one choice should seem much more likely to you than the other."], "Instance": {"id": "task249-a325178dd5b84177a8cfde4aeba27075", "input": "sentence: Emma and Julia did not pass the ball to Janie and Anne although they saw that they were open .pronoun: they. A) emma and julia B) janie and anne", "output": ["emma and julia"]}, "Prediction": "janie and anne"} +{"Task": "task249_enhanced_wsc_pronoun_disambiguation", "Definition": ["In each example you will read a short sentence (or two). Then, a pronoun in the text is mentioned. Your task is to choose a referent which the mentioned pronoun refers to. 
You will be given two options in each case and one choice should seem much more likely to you than the other."], "Instance": {"id": "task249-b545c540e3814ff0b896b29b509573b8", "input": "sentence: Sam and Henry pulled up chairs to the pianos , but they were broken , so they had to stand instead .pronoun: they. A) the chairs B) the pianos", "output": ["the chairs"]}, "Prediction": "the pianos"} +{"Task": "task249_enhanced_wsc_pronoun_disambiguation", "Definition": ["In each example you will read a short sentence (or two). Then, a pronoun in the text is mentioned. Your task is to choose a referent which the mentioned pronoun refers to. You will be given two options in each case and one choice should seem much more likely to you than the other."], "Instance": {"id": "task249-8d1c81418c984c70bc7bc0c226a0c6b7", "input": "sentence: Frank and Steve felt vindicated when their longtime rivals Bill and Pete revealed that they were the winners of the competition .pronoun: they. A) frank and steve B) bill and pete", "output": ["frank and steve"]}, "Prediction": "bill and pete"} +{"Task": "task249_enhanced_wsc_pronoun_disambiguation", "Definition": ["In each example you will read a short sentence (or two). Then, a pronoun in the text is mentioned. Your task is to choose a referent which the mentioned pronoun refers to. You will be given two options in each case and one choice should seem much more likely to you than the other."], "Instance": {"id": "task249-2665f00f94c44e01b545f734e71efda8", "input": "sentence: rick said \"check\" to grover as he took his bishop .pronoun: his. A) rick B) grover", "output": ["grover"]}, "Prediction": "grover"} +{"Task": "task249_enhanced_wsc_pronoun_disambiguation", "Definition": ["In each example you will read a short sentence (or two). Then, a pronoun in the text is mentioned. Your task is to choose a referent which the mentioned pronoun refers to. 
You will be given two options in each case and one choice should seem much more likely to you than the other."], "Instance": {"id": "task249-0ff580848bf24026a7a360aabe5451e1", "input": "sentence: the mold rolled off the rack because it wasn't level .pronoun: it. A) the mold B) the rack", "output": ["the rack"]}, "Prediction": "the mold"} +{"Task": "task957_e2e_nlg_text_generation_generate", "Definition": ["In this task, we ask you convert a data table of restaurant descriptions into fluent natural-sounding English sentences. The input is a string of key-value pairs; the output should be a natural and grammatical English sentence containing all the information from the input."], "Instance": {"id": "task957-aed1fc627aaa499895974355dd89bf3c", "input": "name[Zizzi], eatType[restaurant], priceRange[cheap], area[riverside]", "output": ["Zizzi is a cheap restaurant in the riverside area."]}, "Prediction": "Zizzi is a cheap restaurant in the"} +{"Task": "task957_e2e_nlg_text_generation_generate", "Definition": ["In this task, we ask you convert a data table of restaurant descriptions into fluent natural-sounding English sentences. The input is a string of key-value pairs; the output should be a natural and grammatical English sentence containing all the information from the input."], "Instance": {"id": "task957-4b6d2b2162c743648253c07e96c27576", "input": "name[The Vaults], food[French], priceRange[more than \u00a330], familyFriendly[yes]", "output": ["There is a child friendly French restaurant with a price range of more than \u00a330 called The Vaults."]}, "Prediction": "The Vaults is a French restaurant"} +{"Task": "task957_e2e_nlg_text_generation_generate", "Definition": ["In this task, we ask you convert a data table of restaurant descriptions into fluent natural-sounding English sentences. 
The input is a string of key-value pairs; the output should be a natural and grammatical English sentence containing all the information from the input."], "Instance": {"id": "task957-6993553ec46346b6bf962e7fe6813d15", "input": "name[The Olive Grove], eatType[pub], food[French], priceRange[more than \u00a330], area[riverside], familyFriendly[yes]", "output": ["The Olive Grove is a child-friendly pub, serving French food and is suitable for those with a budget of more than \u00a330."]}, "Prediction": "The Olive Grove is a French pub in"} +{"Task": "task957_e2e_nlg_text_generation_generate", "Definition": ["In this task, we ask you convert a data table of restaurant descriptions into fluent natural-sounding English sentences. The input is a string of key-value pairs; the output should be a natural and grammatical English sentence containing all the information from the input."], "Instance": {"id": "task957-b877c721990546eca8bf56bfc48ba142", "input": "name[The Eagle], eatType[coffee shop], food[Indian], priceRange[cheap], customer rating[5 out of 5], area[riverside], familyFriendly[yes], near[Burger King]", "output": ["The Eagle is in the riverside area near Burger King. It serves cheap Indian food and has a customer rating of 5 out of 5. It is family friendly."]}, "Prediction": "The Eagle is a coffee shop in the"} +{"Task": "task957_e2e_nlg_text_generation_generate", "Definition": ["In this task, we ask you convert a data table of restaurant descriptions into fluent natural-sounding English sentences. 
The input is a string of key-value pairs; the output should be a natural and grammatical English sentence containing all the information from the input."], "Instance": {"id": "task957-86e2be5f64ef4be88bd6a4826f1c46d4", "input": "name[Midsummer House], food[Italian], priceRange[high], customer rating[1 out of 5], near[All Bar One]", "output": ["There is a restaurant near All Bar One called Midsummer House that is expensive."]}, "Prediction": "Midsummer House is an Italian restaurant near"} +{"Task": "task418_persent_title_generation", "Definition": ["Given a document, generate a short title of the document. The title should convey the main idea/event/topic about which the document is being written. Note that URLs in the text have been replaced with [Link]."], "Instance": {"id": "task418-9f8dc85573f546dda8ac6fb24f77ef0c", "input": "Billionaire Prince Alwaleed bin Talal was picked up at his desert camp the senior official said. Authorities did not disclose the evidence that prompted the arrests.\n Prince Alwaleed bin Talal presides over a vast financial empire (estimated $35 billion in 2015):\nAlwaleed is the largest individual shareholder of Citigroup the second-largest voting shareholder in 21st Century Fox and owns a number of hotels. TIME even called him \"Arabian Warren Buffet\".\nMBS as the Crown Prince is known is the leader who is launching what modernizers hope will be a Saudi Version of the Meiji Restoration[ii] in Japan transforming the political economy and culture out of necessity \u2013 in order to survive in the modern world system.
The Saudis have practiced religious and cultural isolationism while their oil allowed the country to avoid the necessity of building an economy that could supply anything else that the rest of the world would be willing to pay for.\nOf the people arrested Alwaleed bin Tala is the most intriguing for Americans thanks to his Twitter sparring with candidate Trump during the election and for a startling connection unearthed by Jack Cashill more than five years ago in World New Daily.", "output": ["Modernizers launch a coup within the House of Saud"]}, "Prediction": "Saudi billionaire arrested."} +{"Task": "task418_persent_title_generation", "Definition": ["Given a document, generate a short title of the document. The title should convey the main idea/event/topic about which the document is being written. Note that URLs in the text have been replaced with [Link]."], "Instance": {"id": "task418-eb9846da5af04293a3b08f426db46e7e", "input": "YouTube star Logan Paul says he 's going to grow from his mistakes after last month's incident in which he received news-making backlash after posting images of what appeared to be a suicide victim online. Paul says he 's a \"good guy who made a bad decision \" and says some of the \"tough\" comments he has received have even included people calling on him to kill himself .\n\"It's been tough 'cause ironically I'm being told to commit suicide myself \" he said on ABC's \"Good Morning America\" on Thursday. \"Millions of people literally telling me they hate me to go die in a fire. Like the most horrible horrific things.\"\n Paul says that he realizes the impact of his actions and accepts the criticism he has received as fair. At the same time he also said that parents should monitor what their children watch.\n He added that he does not think of his target audience as children.\n\"Sometimes I cuss. Sometimes I make inappropriate jokes \" he said. \"I want to make jokes that kids my age are going to like ... 
Now I will say I'm much more aware of the impact that my actions have on myself and others.\"\nLast month Paul posted video of him in a forest near Mount Fuji in Japan near what seemed to be a body hanging from a tree. YouTube says the images violated its policies and suspended him .\n Paul called the posting a \"horrible lack of judgment.\" He says he 'll \"think twice ... maybe three times\" about what he posts.\nLast week Paul returned to the video sharing platform with a suicide prevention video. He told \"Good Morning America\" he 's hopeful for his future.\n\"I think I'm going to earn a second chance \" he said. \"I don't think everyone should get a second chance.\"", "output": ["YouTube star Logan Paul wants \"second chance\" after \" suicide forest\" video"]}, "Prediction": "Suicide Prevention Video by Logan Paul."} +{"Task": "task418_persent_title_generation", "Definition": ["Given a document, generate a short title of the document. The title should convey the main idea/event/topic about which the document is being written. Note that URLs in the text have been replaced with [Link]."], "Instance": {"id": "task418-15186779caff4c729de76646e880d4f9", "input": "\"Our coaches in our league stand up for what\u2019s right \" South Carolina coach Dawn Staley said. \"The fact that they can speak out on it it\u2019s a great thing. It\u2019s what America is all about being able to exercise your First Amendment right.\"\n Staley said her team which won the NCAA title in April still has not been invited to the White House.\n\"What I am most looking forward to is getting an invitation to the NCAA tournament in 2018 \" Staley said. \"That's more important.\"\n\u201cFrom someone who has had that experience and understanding what that experience means and for Dawn and their team it\u2019s kind of (a) slap in the face \" White said.\n\"We enjoyed the White House \" Warlick said.
\"I hope that if Dawn wants to go and take her team I hope she gets that opportunity. It's a special time.\"\nMore: Dawn Staley done discussing possible White House invite\nWhen Fargas went as a Tennessee player in 1991 the men's and women's national champion teams visited the White House together. So she was there to see President George H. W. Bush with the Duke team that featured Christian Laettner and Grant Hill.", "output": ["White House snub of South Carolina basketball 'a slap in the face'"]}, "Prediction": "Dawn Staley talks about not getting invited"} +{"Task": "task418_persent_title_generation", "Definition": ["Given a document, generate a short title of the document. The title should convey the main idea/event/topic about which the document is being written. Note that URLs in the text have been replaced with [Link]."], "Instance": {"id": "task418-3790d48a99034f8b8de77b362e822d39", "input": "Skip in Skip x Embed x Share CLOSE Michigan football coach Jim Harbaugh discusses the quarterback situation Oct. 30 2017 heading into the Minnesota game. By Nick Baumgardner DFP.\n Jim Harbaugh didn't declare Brandon Peters as his starting quarterback after Saturday's win over Rutgers.\nAnd while he wasn't quite ready to do it Monday he more or less tabbed Michigan's redshirt freshman as the starter for Saturday night's game against Minnesota.\n\"Brandon will play it's likely he 'll start \" Harbaugh said Monday afternoon in Ann Arbor. \"We'll see.\"\nMinutes earlier Harbaugh told reporters he wasn't prepared to officially name a starter for Saturday (7:30 p.m. Fox). He said Michigan would prepare both Peters and fifth-year senior John O'Korn to play against the Golden Gophers as he says he did a week ago.\nWhen pressed Harbaugh said odds are Peters will be the starting quarterback this weekend.\n\"We're going to try to really build on what Brandon did in the ballgame we're going into this game and we're going to prepare both quarterbacks \" Harbaugh said. 
\"Brandon's definitely going to play. The 51 percent would be on the side of him starting.\n Harbaugh said he didn't want to jump the gun too fast though as he 'd like to have Peters build off his impressive debut with a better week of practice heading into what appears to be his first college start.\nIn other words: It's all set up for him but Peters has to earn it this week.\n\"Our system is a meritocracy where the best players play \" Harbaugh said. \"It was time for Brandon to play he went in and acquitted himself very well. He will play again this week.\n\"Build on the successes that he had in the ball game. Have a real good full week of practice.\"", "output": ["Brandon Peters will 'likely' start at QB for Michigan vs. Minnesota"]}, "Prediction": "Harbaugh: 'Brandon"} +{"Task": "task418_persent_title_generation", "Definition": ["Given a document, generate a short title of the document. The title should convey the main idea/event/topic about which the document is being written. Note that URLs in the text have been replaced with [Link]."], "Instance": {"id": "task418-fcaae05f3ab446688654209184f5fbe7", "input": "Melanie Oudin and Francesca Schiavone also know how quickly perceptions can change. Oudin struggled with higher expectations since her crowd-pleasing run to the U.S. Open quarterfinals last year. So did Schiavone after her breakthrough French Open title in June.\n Oudin 18 needed just 56 minutes to beat Olga Savchuk of Ukraine 6-3 6-0. Schiavone the Italian who won her first Grand Slam weeks before her 30th birthday dispatched Ayumi Morita of Japan 6-1 6-0 in 58 minutes.\n If anything Schiavone seems to be having fun in the spotlight. Asked why she's a fan favorite she playfully replied \"I attract them because I'm beautiful.\"\n Schiavone acknowledged that maybe she's a bit more motivated at a Grand Slam than at other tournaments. Schiavone seeded No. 6 had been just 3-6 since winning at Roland Garros. 
She lost in the first round at Wimbledon and dropped her opening match at three other tournaments.\n She was pleased that her first-round match was in the grandstand -- a year ago she was relegated to an outer court.\n \"I like to do it because adrenaline is coming up and I enjoy much more than play in faraway court \" she said with a laugh. \"Maybe because I am 30 years old and now I want to enjoy with people.\"", "output": ["Clijsters wins 1st US Open match in straight sets"]}, "Prediction": "French Open champions Oudin, Schia"} +{"Task": "task614_glucose_cause_event_detection", "Definition": ["In this task, you will be given a short story. One sentence from the story is chosen. Consider the events that happen before that sentence, or are likely to have happened before it. Does any of them directly cause it, or simply make it possible? You should write your answer in the form \" A >causes/enables> B\". Try to use phrases and sentences from the story to compose your answer when possible."], "Instance": {"id": "task614-cece91159e5f4ff1848a52c3e58d0ee3", "input": "story: It was a cloudy day outside. Tina thought that it would rainy outside. She brought her umbrella with her on the bus. Tina lost her umbrella on the bus. She got rained on as she walked to work.\n selected sentence: Tina thought that it would rainy outside.", "output": ["Tina sees clouds >Causes/Enables> Tina thinks it will rain"]}, "Prediction": "Tina brought her umbrella with her on the"} +{"Task": "task614_glucose_cause_event_detection", "Definition": ["In this task, you will be given a short story. One sentence from the story is chosen. Consider the events that happen before that sentence, or are likely to have happened before it. Does any of them directly cause it, or simply make it possible? You should write your answer in the form \" A >causes/enables> B\". 
Try to use phrases and sentences from the story to compose your answer when possible."], "Instance": {"id": "task614-71604f320dd54d508ac2f0485a576f1c", "input": "story: Nessa wanted to sing in the choir. She tried out for her school's choir. She sang her best, but it wasn't very good. She did not make the cut. Nessa was disappointed, but not very surprised.\n selected sentence: Nessa was disappointed, but not very surprised.", "output": ["Nessa doesn't make the cut >Causes/Enables> Nessa is disappointed"]}, "Prediction": "She tried out for the choir >Ca"} +{"Task": "task614_glucose_cause_event_detection", "Definition": ["In this task, you will be given a short story. One sentence from the story is chosen. Consider the events that happen before that sentence, or are likely to have happened before it. Does any of them directly cause it, or simply make it possible? You should write your answer in the form \" A >causes/enables> B\". Try to use phrases and sentences from the story to compose your answer when possible."], "Instance": {"id": "task614-8e107f3483684f22aca510016a472971", "input": "story: Edna was born the last of 14 kids. She grew up much younger than most of her siblings. When she became an adult, a few of her siblings has died. Edna lived to be 90 years old. When she died, she was the last of her siblings.\n selected sentence: When she died, she was the last of her siblings.", "output": ["Edna's siblings die before Edna >Causes/Enables> She is the last sibling to die"]}, "Prediction": "Edna was the last of 14 kids"} +{"Task": "task614_glucose_cause_event_detection", "Definition": ["In this task, you will be given a short story. One sentence from the story is chosen. Consider the events that happen before that sentence, or are likely to have happened before it. Does any of them directly cause it, or simply make it possible? You should write your answer in the form \" A >causes/enables> B\". 
Try to use phrases and sentences from the story to compose your answer when possible."], "Instance": {"id": "task614-f9122ebdf11c4c5b96bc2380248113d3", "input": "story: One day, Billy's family went on a trip to the zoo. Billy went to see the lion's den with his sister. Billy walked up to the den and tapped on the glass. One of the lions leaped into the air and swung a paw at Billy. Billy was scared to approach any other animals from then on.\n selected sentence: Billy walked up to the den and tapped on the glass.", "output": ["Billy goes to the zoo >Causes/Enables> Billy taps on the glass"]}, "Prediction": "One of the lions leaped"} +{"Task": "task614_glucose_cause_event_detection", "Definition": ["In this task, you will be given a short story. One sentence from the story is chosen. Consider the events that happen before that sentence, or are likely to have happened before it. Does any of them directly cause it, or simply make it possible? You should write your answer in the form \" A >causes/enables> B\". Try to use phrases and sentences from the story to compose your answer when possible."], "Instance": {"id": "task614-4720962e9e5e472e9f334335f511a627", "input": "story: Tina borrowed my mom's car. She was on the way to her boyfriend's house. However, my mom needed to go Somewhere and tried to call Tina. Tina didn't pick up. 
My mom decided to drive my car to go to Tina.\n selected sentence: My mom decided to drive my car to go to Tina.", "output": ["Tina doesn't pick up her phone >Causes/Enables> My mom drives to TIna", "My mom borrows my car >Causes/Enables> My mom drives"]}, "Prediction": "Mom drives car to Tina."} +{"Task": "task677_ollie_sentence_answer_generation", "Definition": ["Given two noun phrases (arguments) and relationship between them, form a sentence that expresses theses arguments with the given relationship."], "Instance": {"id": "task677-47f80bf3c7174813bca431c5da1fc493", "input": "Relationship: 'return to', Argument/Subject 1: 'karsh', Argument/Subject 2: 'canada'", "output": ["Karsh returned to Canada four years later , eager to make his mark ."]}, "Prediction": "Return to Canada."} +{"Task": "task677_ollie_sentence_answer_generation", "Definition": ["Given two noun phrases (arguments) and relationship between them, form a sentence that expresses theses arguments with the given relationship."], "Instance": {"id": "task677-8ea83a5275014fca9ba95095cd5ff50c", "input": "Relationship: 'be grow in', Argument/Subject 1: 'cotton', Argument/Subject 2: 'indium'", "output": ["Cotton is known to have been grown in India as early as 3,000 BC , but was almost ceratinly grown much earlier ."]}, "Prediction": "Cotton is a very durable plant. 
In"} +{"Task": "task677_ollie_sentence_answer_generation", "Definition": ["Given two noun phrases (arguments) and relationship between them, form a sentence that expresses theses arguments with the given relationship."], "Instance": {"id": "task677-f9739663ff294836baadb4aaa16f9558", "input": "Relationship: 'be bear in', Argument/Subject 1: 'david', Argument/Subject 2: 'virginium'", "output": ["David Commons was born 7/18/1800 in Virginia ."]}, "Prediction": "David was a bear in the woods"} +{"Task": "task677_ollie_sentence_answer_generation", "Definition": ["Given two noun phrases (arguments) and relationship between them, form a sentence that expresses theses arguments with the given relationship."], "Instance": {"id": "task677-6c55780f08404bc09ac7902daaea3d3e", "input": "Relationship: 'turn to', Argument/Subject 1: 'koenig', Argument/Subject 2: 'helena'", "output": ["Koenig gently squeezed her shoulder in silent support and turned to stand by Helena ."]}, "Prediction": "Koenig turned to Helena."} +{"Task": "task677_ollie_sentence_answer_generation", "Definition": ["Given two noun phrases (arguments) and relationship between them, form a sentence that expresses theses arguments with the given relationship."], "Instance": {"id": "task677-834d7719f309491295e6d3d1be24b658", "input": "Relationship: 'be empress of', Argument/Subject 1: 'victorium', Argument/Subject 2: 'indium'", "output": ["He wants to make Victoria empress of India ."]}, "Prediction": "Victorium and indium are the two metal"} +{"Task": "task220_rocstories_title_classification", "Definition": ["In this task, you're given five sentences, numbered 1 through 5, and two options a and b for possible titles for the story. Your job is to choose the title that better fits the story. Indicate your choice by 'a' or 'b'."], "Instance": {"id": "task220-f88b55aa5d2842ce8f8c53d0d69d23a6", "input": "Sentence 1: Tony was playing outside with his friends. Sentence 2: They discovered a large bee's nest. 
Sentence 3: Tony poked it with a stick. Sentence 4: As the bees swarmed out, many stung Tony. Sentence 5: Tony discovered he is allergic to bees. Choices: a. Hannah and Nate. b. Bee Allergy.", "output": ["b"]}, "Prediction": "b"} +{"Task": "task220_rocstories_title_classification", "Definition": ["In this task, you're given five sentences, numbered 1 through 5, and two options a and b for possible titles for the story. Your job is to choose the title that better fits the story. Indicate your choice by 'a' or 'b'."], "Instance": {"id": "task220-2e83707f746a4231a65af8c587f1fda8", "input": "Sentence 1: Steve got home late from work and pulled into his parking lot. Sentence 2: He was annoyed to see there was a car parked in his reserved spot. Sentence 3: He left a note on the car and went to find street parking. Sentence 4: The next morning, there was a note taped to Steve's front door. Sentence 5: It was from a new neighbor, who apologized profusely for the mistake. Choices: a. Parking Problems. b. A Weighty Problem.", "output": ["a"]}, "Prediction": "a"} +{"Task": "task220_rocstories_title_classification", "Definition": ["In this task, you're given five sentences, numbered 1 through 5, and two options a and b for possible titles for the story. Your job is to choose the title that better fits the story. Indicate your choice by 'a' or 'b'."], "Instance": {"id": "task220-91af85e2b1b54a8a8097d11ab2dfa766", "input": "Sentence 1: My wife and I are fairly wealthy. Sentence 2: The local zoo contacted us seeking a contribution. Sentence 3: We gave what we thought was a sizable sum. Sentence 4: We later learned they put our name on a bench. Sentence 5: We couldn't imagine how much other donors must have given. Choices: a. Bench Donation. b. 
Buying a Table.", "output": ["a"]}, "Prediction": "a"} +{"Task": "task220_rocstories_title_classification", "Definition": ["In this task, you're given five sentences, numbered 1 through 5, and two options a and b for possible titles for the story. Your job is to choose the title that better fits the story. Indicate your choice by 'a' or 'b'."], "Instance": {"id": "task220-5847bcf293164899bcbec591a4ed4a80", "input": "Sentence 1: Abigail had been using the same phone for three years. Sentence 2: One day it finally broke for good. Sentence 3: She didn't know where to start with a new phone. Sentence 4: Her brother recommended her some brands. Sentence 5: Luckily, she found a great new phone that she loved. Choices: a. The Bus Fight. b. The New Phone.", "output": ["b"]}, "Prediction": "b"} +{"Task": "task220_rocstories_title_classification", "Definition": ["In this task, you're given five sentences, numbered 1 through 5, and two options a and b for possible titles for the story. Your job is to choose the title that better fits the story. Indicate your choice by 'a' or 'b'."], "Instance": {"id": "task220-4893717dd3ff424a9b022071f13847e6", "input": "Sentence 1: Darla wanted a new doll. Sentence 2: To earn the money she returned bottles to the store. Sentence 3: She also saved all of her birthday money and did chores for her mom. Sentence 4: The day finally came when she had saved enough money. Sentence 5: Darla went to the store and bought a brand new doll! Choices: a. Darla's New Doll. b. Perfect Chance.", "output": ["a"]}, "Prediction": "a"} +{"Task": "task1631_openpi_answer_generation", "Definition": ["Given an entity, a before event, an after event, and an attribute related to the entity, generate a sentence as output. 
Your sentence should show the changes in the attribute of the entity."], "Instance": {"id": "task1631-2c38610efce54d2ba81f84e638c8a6bd", "input": "entity: binder clip \nbefore: in original place \nafter: found \nattr: state", "output": ["state of binder clip was in original place before and found afterwards"]}, "Prediction": "state of binder clip before and after found"} +{"Task": "task1631_openpi_answer_generation", "Definition": ["Given an entity, a before event, an after event, and an attribute related to the entity, generate a sentence as output. Your sentence should show the changes in the attribute of the entity."], "Instance": {"id": "task1631-eb9dedaff2234cb7b1d13e6fc0d7389b", "input": "entity: arm \nbefore: at side \nafter: chin height \nattr: position", "output": ["position of arm was at side before and chin height afterwards"]}, "Prediction": "position of arm before and after chin height"} +{"Task": "task1631_openpi_answer_generation", "Definition": ["Given an entity, a before event, an after event, and an attribute related to the entity, generate a sentence as output. Your sentence should show the changes in the attribute of the entity."], "Instance": {"id": "task1631-fde98abce4174842a83bb486f5c69e2e", "input": "entity: person \nbefore: sitting on surfboard \nafter: standing on surfboard \nattr: state", "output": ["state of person was sitting on surfboard before and standing on surfboard afterwards"]}, "Prediction": "state of person was sitting on surfboard before"} +{"Task": "task1631_openpi_answer_generation", "Definition": ["Given an entity, a before event, an after event, and an attribute related to the entity, generate a sentence as output. 
Your sentence should show the changes in the attribute of the entity."], "Instance": {"id": "task1631-c1f6b3caa9d84ed88eb179c2b3cfb682", "input": "entity: the laundry caps \nbefore: wet \nafter: dry \nattr: wetness", "output": ["wetness of the laundry caps was wet before and dry afterwards"]}, "Prediction": "wetness of the laundry caps before and"} +{"Task": "task1631_openpi_answer_generation", "Definition": ["Given an entity, a before event, an after event, and an attribute related to the entity, generate a sentence as output. Your sentence should show the changes in the attribute of the entity."], "Instance": {"id": "task1631-2d674a365dd0449b83f295375d6bfdd6", "input": "entity: project \nbefore: started \nafter: finished \nattr: complexity", "output": ["complexity of project was started before and finished afterwards"]}, "Prediction": "complexity of project before started and finished"} +{"Task": "task232_iirc_link_number_classification", "Definition": ["In this task, you're given a question, along with a context passage which has extra information available on certain terms mentioned in it, i.e., the proper nouns in the passage. Your job is to determine whether information from more than one term is needed to answer the question. Indicate your choice as `a` for Yes or `b` for No."], "Instance": {"id": "task232-125f1cf098894b5b8534826f07c750a5", "input": "Question: Which countries were both a member of the European Union and the Council of Europe when Albania was accepted into the Council of Europe? Passage:In 1991, the Socialist Party of Albania, with specific social democratic ideology took control of the country through democratic elections. One year later the Democratic Party of Albania won the new elections. After 1990, Albania has been seeking a closer relationship with the West. 
What followed were deliberate programs of economic and democratic reform, but Albanian inexperience with capitalism led to the proliferation of pyramid schemes \u2013 which were not banned due to the corruption of the government. Chaos in late 1996 to early 1997, as a result of the collapse of these pyramid schemes, alarmed the world and prompted the influx of international peacekeeping forces. In 1995, Albania was accepted into the Council of Europe and requested membership in NATO (obtained in 2009) and is a potential candidate country for accession to the European Union. The workforce of Albania has continued to emigrate to Western countries, especially Greece and Italy.\n", "output": ["a"]}, "Prediction": "a"} +{"Task": "task232_iirc_link_number_classification", "Definition": ["In this task, you're given a question, along with a context passage which has extra information available on certain terms mentioned in it, i.e., the proper nouns in the passage. Your job is to determine whether information from more than one term is needed to answer the question. Indicate your choice as `a` for Yes or `b` for No."], "Instance": {"id": "task232-b3c50d2b54114af780edc1f481198b34", "input": "Question: Who was the leader of Moldova in 1917? Passage:Lazo was born in the village of Piatra, Orhei, now in Orhei district, Moldova. He was of boyar origin. In 1917, he was a cadet at the Imperial Russian military academy when he joined the Bolshevik forces and was entrusted with several missions in Siberia during the Russian Civil War. In March\u2013August 1918 he was a commander of the Zabaykalski (trans-Baikalan) Front, and fought against Ataman Grigory Semyonov. Later he fought in Bolshevik partisan units in the Vladivostok and Partizansk areas, commanding the Red Army during the Suchan Valley Campaign against American forces. On January 31, 1920 the Bolsheviks took power in Vladivostok, but on April 5, 1920 Lazo and other commanders were arrested by Japanese troops. 
Then, Lazo disappeared, along with Vsevolod Sibirtsev and Alexey Lutski. Reportedly they were shot shortly after, but the exact details of the execution have never been known yet. It is widely believed that the Japanese or Cossacks of the White movement burned them in the firebox of a steam engine in Muravyevo-Amurskaya (currently Lazo) station. \n", "output": ["b"]}, "Prediction": "b"} +{"Task": "task232_iirc_link_number_classification", "Definition": ["In this task, you're given a question, along with a context passage which has extra information available on certain terms mentioned in it, i.e., the proper nouns in the passage. Your job is to determine whether information from more than one term is needed to answer the question. Indicate your choice as `a` for Yes or `b` for No."], "Instance": {"id": "task232-75f9e9d1e35d423da90b99459ca6f494", "input": "Question: Who won the all-around the year Filanovsky finished 7th at the Corbeil-Essonnes World Cup? Passage:In 2014, Veinberg Filanovsky finished 7th in all-around at the 2014 Corbeil-Essonnes World Cup, and qualified to all four event finals for the first time in her World Cup competitions. On May 30\u2013June 1, she competed at the 2014 Minsk World Cup. On June 10\u201315, she competed at the 2014 European Championships and finished 12th in the all-around final. On September 5\u20137, at the 2014 World World Cup Final in Kazan, Russia, Veinberg Filanovsky finished 9th in all-around behind Neta Rivkin, and qualified to one event final, finishing 7th in clubs. On September 22\u201328, Veinberg Filanovsky (along with teammates Neta Rivkin and Martina Poplinsky) represented Israel at the 2014 World Championships where Team Israel finished in 4th place. She finished 12th in the all-around finals behind Kseniya Moustafaeva of France. 
On October 18\u201320, Veinberg Filanovsky competed at the 2014 Grand Prix Berlin and won the all-around silver behind Russia's Maria Titova, while in event finals she won silver medals in hoop, clubs and ribbon. On November 1\u20133, Veinberg Filanovsky won the all-around bronze at the 2014 Grand Prix Brno behind compatriot Neta Rivkin. In November 14\u201316, at the 2014 Grand Prix Final in Innsbruck, Austria, Veinberg Filanovsky finished 4th in the all-around behind Bulgaria's Neviana Vladinova. In event finals, she won silver in ball and two bronze (clubs, ribbon).\n", "output": ["b"]}, "Prediction": "a"} +{"Task": "task232_iirc_link_number_classification", "Definition": ["In this task, you're given a question, along with a context passage which has extra information available on certain terms mentioned in it, i.e., the proper nouns in the passage. Your job is to determine whether information from more than one term is needed to answer the question. Indicate your choice as `a` for Yes or `b` for No."], "Instance": {"id": "task232-2b11973b1c874493b9bb429903034b2d", "input": "Question: Which of the two states that some claim the treaty territory also includes parts of has a larger total area (sq mi.)? Passage:A Dish With One Spoon, also known as One Dish One Spoon, is a law used by indigenous peoples of the Americas since at least 1142 CE to describe an agreement for sharing hunting territory among two or more nations. People are all eating out of the single dish, that is, all hunting in the shared territory. One spoon signifies that all Peoples sharing the territory are expected to limit the game they take to leave enough for others, and for the continued abundance and viability of the hunting grounds into the future. Sometimes the Indigenous language word is rendered in English as bowl or kettle rather than dish. The Dish With One Spoon phrase is also used to denote the treaty or agreement itself. 
In particular, a treaty made between the Anishinaabe and Haudenosaunee nations at Montr\u00e9al in 1701, as part of the Great Peace of Montreal is usually called the Dish With One Spoon treaty and its associated wampum belt the Dish With One Spoon wampum. The treaty territory includes part of the current province of Ontario between the Great Lakes and extending east along the north shore of the St. Lawrence River up to the border with the current province of Quebec. Some claim it also includes parts of the current states of New York and Michigan.\n", "output": ["a"]}, "Prediction": "a"} +{"Task": "task232_iirc_link_number_classification", "Definition": ["In this task, you're given a question, along with a context passage which has extra information available on certain terms mentioned in it, i.e., the proper nouns in the passage. Your job is to determine whether information from more than one term is needed to answer the question. Indicate your choice as `a` for Yes or `b` for No."], "Instance": {"id": "task232-fad8a8639e4a4229a398a72aca44b2d2", "input": "Question: How many total people were killed in the Karachi bus bombing? Passage:Since its exposure in the news media of both France and Pakistan, the scandal has been involved in various investigative journalism and attracted sensations and conspiracy theories in both countries, including this scandal's allegement of financing the political campaign of then-Prime Minister Edouard Balladur in the presidential elections took place in 1995. Upon electing President Jacques Chirac cancelled the commissions and kickbacks, angering many officials in Pakistan and France according to the media reports. Other theories and subsequent investigative journalist reports alleged that the bus bombing on 8 May 2002 in Karachi that killed eleven French engineers in Karachi was in retaliation for the cancellation of these commissions. 
This theory was rejected by the Government of Pakistan when the FIA and the CID made potential discovery in linking the HuM and the al-Qaeda had carried out the terrorist attack in Karachi, mistaking the French nationals for the Americans.\n", "output": ["b"]}, "Prediction": "a"} +{"Task": "task1391_winogrande_easy_answer_generation", "Definition": ["In this task, you are given a question containing a blank (_) and two options. You should pick the best option to answer the question. Please answer with \"A\" or \"B\"."], "Instance": {"id": "task1391-0436bad782b6443582d037aec6b4b96e", "input": "Aaron was wrapping Jeffreys present when _ got more tape and let them finish the job. (A) Aaron (B) Jeffrey", "output": ["B"]}, "Prediction": "B"} +{"Task": "task1391_winogrande_easy_answer_generation", "Definition": ["In this task, you are given a question containing a blank (_) and two options. You should pick the best option to answer the question. Please answer with \"A\" or \"B\"."], "Instance": {"id": "task1391-67d42151ec564c4dbed6a829f1592ea1", "input": "Jane decided to make changes to her kitchen by replacing the wood counters with granite. The _ is easier to clean. (A) granite (B) wood", "output": ["A"]}, "Prediction": "A"} +{"Task": "task1391_winogrande_easy_answer_generation", "Definition": ["In this task, you are given a question containing a blank (_) and two options. You should pick the best option to answer the question. Please answer with \"A\" or \"B\"."], "Instance": {"id": "task1391-f4e9653bdd6e41f2ba797d336eb240b1", "input": "After cleaning the greenhouse, Felicia loved finding a butterfly, yet Rachel was scared when _ showed it to her. (A) Felicia (B) Rachel", "output": ["A"]}, "Prediction": "A"} +{"Task": "task1391_winogrande_easy_answer_generation", "Definition": ["In this task, you are given a question containing a blank (_) and two options. You should pick the best option to answer the question. 
Please answer with \"A\" or \"B\"."], "Instance": {"id": "task1391-411cdb2c57484d869699857c7f8ce415", "input": "Benjamin was not allergic to cats but Matthew was, so _ went to the shelter and gave up two of them. (A) Benjamin (B) Matthew", "output": ["B"]}, "Prediction": "B"} +{"Task": "task1391_winogrande_easy_answer_generation", "Definition": ["In this task, you are given a question containing a blank (_) and two options. You should pick the best option to answer the question. Please answer with \"A\" or \"B\"."], "Instance": {"id": "task1391-1115169a4e4d41ec9a7c5e89bd33d805", "input": "The head of Leslie is aching, but Justin feels just fine, so _ must be the one who is sick. (A) Leslie (B) Justin", "output": ["A"]}, "Prediction": "A"} +{"Task": "task1358_xlsum_title_generation", "Definition": ["In this task, you need to generate an appropriate title for the given summary of some paragraph. The generated title must be short and should include the main topic of the given text. Your summary should be less than 20 words long."], "Instance": {"id": "task1358-ad90b65c76bc43a8a9f336dcbb2a066d", "input": "Scotland's fishing industry will have access to fewer cod, haddock, whiting and saithe under the new Brexit deal, according to the Scottish government.", "output": ["Scottish fishing industry 'deeply aggrieved' by Brexit deal"]}, "Prediction": "Scotland's fishing industry will have access to"} +{"Task": "task1358_xlsum_title_generation", "Definition": ["In this task, you need to generate an appropriate title for the given summary of some paragraph. The generated title must be short and should include the main topic of the given text. 
Your summary should be less than 20 words long."], "Instance": {"id": "task1358-b4567593f05541eca394b3b864562a8f", "input": "The Chinese PC manufacturer Lenovo has been accused of running a \"massive security risk\" after researchers found flaws in its software.", "output": ["Lenovo: researchers find 'massive security risk'"]}, "Prediction": "Lenovo accused of running \"massive security"} +{"Task": "task1358_xlsum_title_generation", "Definition": ["In this task, you need to generate an appropriate title for the given summary of some paragraph. The generated title must be short and should include the main topic of the given text. Your summary should be less than 20 words long."], "Instance": {"id": "task1358-08222191baa44250a28c89bbb4581270", "input": "Proposals to close roads outside schools during drop-off and pick-up times are among plans to tackle air pollution in Bristol.", "output": ["Bristol mayor Marvin Rees makes clean air pledges for city"]}, "Prediction": "Plans to tackle air pollution in Bristol."} +{"Task": "task1358_xlsum_title_generation", "Definition": ["In this task, you need to generate an appropriate title for the given summary of some paragraph. The generated title must be short and should include the main topic of the given text. Your summary should be less than 20 words long."], "Instance": {"id": "task1358-018f52be78814968b79f14ab97a9d67b", "input": "The first \"Neolithic\" long barrow to be built in the UK for 5,000 years, is attracting interest from all over the world.", "output": ["All Cannings 'Neolithic' long barrow takes shape"]}, "Prediction": "The first \"Neolithic\""} +{"Task": "task1358_xlsum_title_generation", "Definition": ["In this task, you need to generate an appropriate title for the given summary of some paragraph. The generated title must be short and should include the main topic of the given text. 
Your summary should be less than 20 words long."], "Instance": {"id": "task1358-3fc620b8565b4a7395cb5a4a7252a07f", "input": "Instagram star Essena O'Neill generated masses of coverage this week when she declared she would quit social media. Why has her announcement caused a big reaction?", "output": ["The fallout from a social media bombshell"]}, "Prediction": "Instagram star Essena O'Neill quit social"} +{"Task": "task1533_daily_dialog_formal_classification", "Definition": ["You are given a conversation between two people. 'Person1:' and 'Person2:' are used to separate their respective dialogues. You are required to assign a label 'formal' if there is an absence of emotion and a presence of questions anywhere within the conversation. If such pattern is not found assign the label 'informal'."], "Instance": {"id": "task1533-f8088b7ea96c4b0b92fb99ffba00767c", "input": " Person1: Welcome to Super Stereos . What can I do for you ? \n Person2:I'd like to look at your stereo systems . Do you have anything on sale ? \n Person1: Yes , this system over here is on sale for $ 1,000 . As you can see , all of the components are brand-name . \n Person2:But it doesn't have a turntable . I have a fair number of vinyl records , so I need something to play them on . \n Person1: Our selection of stereo systems with turntables is somewhat limited , but we do have this system over here . ", "output": ["formal"]}, "Prediction": "informal"} +{"Task": "task1533_daily_dialog_formal_classification", "Definition": ["You are given a conversation between two people. 'Person1:' and 'Person2:' are used to separate their respective dialogues. You are required to assign a label 'formal' if there is an absence of emotion and a presence of questions anywhere within the conversation. If such pattern is not found assign the label 'informal'."], "Instance": {"id": "task1533-fb1a36db68254787bd73c6e41317f314", "input": " Person1: The last thing I want to see is to lose face before my friends . 
\n Person2:That \u2019 s really unbearable . \n Person1: But you know , things cannot always go as you expect . \n Person2:So get prepared for everything . ", "output": ["informal"]}, "Prediction": "informal"} +{"Task": "task1533_daily_dialog_formal_classification", "Definition": ["You are given a conversation between two people. 'Person1:' and 'Person2:' are used to separate their respective dialogues. You are required to assign a label 'formal' if there is an absence of emotion and a presence of questions anywhere within the conversation. If such pattern is not found assign the label 'informal'."], "Instance": {"id": "task1533-cc717af3c69440c797c182cc8be61a5b", "input": " Person1: Hi , Jenny . You look like you are in a hurry to somewhere . Where are you going ? \n Person2:I am going to the bank . I need to cash some checks . \n Person1: That's great . We can go to the bank together . \n Person2:Why are you going there ? \n Person1: I will go to Japan on business for 5 days next Tuesday . I want to exchange some Japanese yen . \n Person2:I see . Will you bring a lot of money with you ? \n Person1: Not really . I plan to bring some cash and withdraw money at the ATM in Japan . \n Person2:Do you have the bank card in Japan ? \n Person1: Nope . You can use the credit card to withdraw the money in foreign countries . \n Person2:That sounds really convenient . Do you pay extra money to enjoy this service ? \n Person1: No . It is free of charge . \n Person2:Great ! I will apply for a credit card when I go to the bank later . \n Person1: I think we are running out of time . The bank is going to close soon . ", "output": ["informal"]}, "Prediction": "informal"} +{"Task": "task1533_daily_dialog_formal_classification", "Definition": ["You are given a conversation between two people. 'Person1:' and 'Person2:' are used to separate their respective dialogues. 
You are required to assign a label 'formal' if there is an absence of emotion and a presence of questions anywhere within the conversation. If such pattern is not found assign the label 'informal'."], "Instance": {"id": "task1533-93402e6db28b428da06c5d792469c6c6", "input": " Person1: My son is alway complaining that I dump on him . He was very rude and told me to mind my own business . I cannot put up with that . \n Person2:What does his dad say ? \n Person1: John is a peacemaker . He simply doesn \u2019 t bother . He says he should give him a free hand . ", "output": ["formal"]}, "Prediction": "informal"} +{"Task": "task1533_daily_dialog_formal_classification", "Definition": ["You are given a conversation between two people. 'Person1:' and 'Person2:' are used to separate their respective dialogues. You are required to assign a label 'formal' if there is an absence of emotion and a presence of questions anywhere within the conversation. If such pattern is not found assign the label 'informal'."], "Instance": {"id": "task1533-043ca63700b54dc5a3d1db59b52ccef7", "input": " Person1: I lost my wallet . \n Person2:I'm sorry , that's bad luck . ", "output": ["informal"]}, "Prediction": "informal"} +{"Task": "task1156_bard_analogical_reasoning_tools", "Definition": ["Two analogies that relate actions to the tools used to perform the action is given in the form \"A : B. C : ?\". \"A : B\" relates action A to tool B. Your task is to replace the question mark (?) with the appropriate tool for the given action C, following the \"A : B\" relation."], "Instance": {"id": "task1156-eee061c19ebe498aa25558b0e3e58b33", "input": "wash : detergent. unlock : ?", "output": ["key"]}, "Prediction": "phone"} +{"Task": "task1156_bard_analogical_reasoning_tools", "Definition": ["Two analogies that relate actions to the tools used to perform the action is given in the form \"A : B. C : ?\". \"A : B\" relates action A to tool B. Your task is to replace the question mark (?) 
with the appropriate tool for the given action C, following the \"A : B\" relation."], "Instance": {"id": "task1156-07d449fa168b4240bdc7cb2ca88c2ab2", "input": "vacuum : vacuum. spread : ?", "output": ["knife"]}, "Prediction": "spread"} +{"Task": "task1156_bard_analogical_reasoning_tools", "Definition": ["Two analogies that relate actions to the tools used to perform the action is given in the form \"A : B. C : ?\". \"A : B\" relates action A to tool B. Your task is to replace the question mark (?) with the appropriate tool for the given action C, following the \"A : B\" relation."], "Instance": {"id": "task1156-196b949b27a14f9d8b2789037d4c1202", "input": "wash : soap. clamp : ?", "output": ["clamp"]}, "Prediction": "clamp"} +{"Task": "task1156_bard_analogical_reasoning_tools", "Definition": ["Two analogies that relate actions to the tools used to perform the action is given in the form \"A : B. C : ?\". \"A : B\" relates action A to tool B. Your task is to replace the question mark (?) with the appropriate tool for the given action C, following the \"A : B\" relation."], "Instance": {"id": "task1156-f5df092617ff4b83856ac3c22032cd8a", "input": "type : keyboard. unlock : ?", "output": ["key"]}, "Prediction": "phone"} +{"Task": "task1156_bard_analogical_reasoning_tools", "Definition": ["Two analogies that relate actions to the tools used to perform the action is given in the form \"A : B. C : ?\". \"A : B\" relates action A to tool B. Your task is to replace the question mark (?) with the appropriate tool for the given action C, following the \"A : B\" relation."], "Instance": {"id": "task1156-24e4b6e2cc0445c5984b7f82830457db", "input": "bake : oven. clamp : ?", "output": ["clamp"]}, "Prediction": "clamp"} +{"Task": "task1659_title_generation", "Definition": ["In this task, you are given a summary for US Congressional and California state bill, your task is to generate a Title for this bill. 
The preferred titles are under forty words and mention the purpose of the bill."], "Instance": {"id": "task1659-bf9bc020b7014be2aa02a3590dfd42f8", "input": "Immediate Benefits for Wounded Warriors Act of 2007 - Directs the Secretary of Veterans Affairs, for purposes of the laws administered by the Secretary and at the request of an eligible member of the Armed Forces who has served on active duty in support of Operation Iraqi Freedom or Operation Enduring Freedom, to assign to that member a 30% temporary disability rating upon such member's separation from the Armed Forces, or in the case of an eligible member who has received a medical diagnosis of post traumatic stress disorder or traumatic brain injury, upon receipt of such diagnosis.\n\nStates that upon assignment of a permanent disability rating to a member who has received such temporary rating: (1) if the permanent rating exceeds 30% payments shall be made for such \"temporary rating months\" to equal the payments under the higher permanent rating; and (2) if the permanent rating is less than 30% the member shall not be required to repay such amounts.\n\nStates that disability compensation payments received by a member who receives such temporary disability rating are not subject to offset even though the member also receives separation or disability severance pay.", "output": ["To direct the Secretary of Veterans Affairs to assign a temporary disability rating to certain members of the Armed Forces upon separation, and for other purposes."]}, "Prediction": "To provide for the assignment of a temporary"} +{"Task": "task1659_title_generation", "Definition": ["In this task, you are given a summary for US Congressional and California state bill, your task is to generate a Title for this bill. 
The preferred titles are under forty words and mention the purpose of the bill."], "Instance": {"id": "task1659-c659e3aeb8874d7cace49cbb16a3d269", "input": "Troops' Soft Landing, Employment, and Rural Transportation Act - Requires that a member of a reserve component of the Armed Forces who was deployed for more than 179 days for a contingency operation or a homeland defense mission be: (1) retained on active duty in the Armed Forces for 90 days after the end of the member's demobilization from a deployment; (2) allowed to use accrued leave; and (3) paid specified pay and allowances.\n\nAllows a member to be released from such retention if the member requests release after the first 15 days of the retention.\n\nDirects the Secretary of the military department concerned to provide each member so retained (and, as practicable, appropriate family members) reintegration counseling and services.\n\nAmends Internal Revenue Code work opportunity tax credit provisions, with regard to unemployed veterans living in counties where the unemployment is over a specified national threshold, to increase the credit from 40% to 50% and the maximum first-year wages which may be taken into account from $6,000 to $10,000.\n\nDirects the Secretary of Veterans Affairs to establish a competitive grant program to assist veterans in rural areas to travel to Department of Veterans Affairs (VA) medical facilities.", "output": ["To amend title 10, United States Code, to provide for the retention on active duty after demobilization of members of the reserve components of the Armed Forces following extended deployments in contingency operations or homeland defense missions, and for other purposes."]}, "Prediction": "To provide reintegration counseling and services"} +{"Task": "task1659_title_generation", "Definition": ["In this task, you are given a summary for US Congressional and California state bill, your task is to generate a Title for this bill. 
The preferred titles are under forty words and mention the purpose of the bill."], "Instance": {"id": "task1659-f4af1047a8c64ddeadee96f7b76107bc", "input": "American Jobs First Act of 2011 - Amends the Internal Revenue Code to extend the election to deduct dividends received by a domestic corporation\u00a0from a\u00a0controlled foreign\u00a0corporation.\u00a0 Increases the amount of such deduction by increases in the payroll of the domestic corporation over the previous taxable year.\u00a0 Requires the Secretary of the Treasury to provide, by regulations, for a recapture of any portion of the increased tax deduction allowed to a domestic corporation if such corporation's\u00a0payroll decreases during either of the two taxable years following the taxable year in which the increased deduction\u00a0was allowed.", "output": ["To amend the Internal Revenue Code of 1986 to allow temporarily a reduced rate of tax with respect to repatriated foreign earnings."]}, "Prediction": "To increase the amount of the tax deduction allowed"} +{"Task": "task1659_title_generation", "Definition": ["In this task, you are given a summary for US Congressional and California state bill, your task is to generate a Title for this bill. The preferred titles are under forty words and mention the purpose of the bill."], "Instance": {"id": "task1659-762acbeb39344a7d969e22819cdb8b13", "input": "Secure Embassy Construction and Counterterrorism Act of 1999 - Establishes within the Treasury an embassy construction and security account for the purpose of constructing, or providing security upgrades to, U.S. diplomatic facilities in order to meet specified security requirements, including that: (1) threat assessments such as the Emergency Action Plan and the Composite Threat List address threats to U.S. missions from large vehicular bombs and transnational terrorism; (2) in the selection of sites for new U.S. diplomatic facilities abroad, that all U.S. 
Government agencies (except military installations) be located on the same compound; (3) each newly constructed or acquired U.S. diplomatic facility be sited not less than 100 feet from the perimeter of the property on which the facility is situated; (4) appropriate Department of State and U.S. diplomatic personnel undertake crisis management training for mass casualty and mass destruction incidents relating to diplomatic facilities; (5) there is adequate Department of State support for the Foreign Emergency Support Team; (6) the Secretary of State enter into a memorandum of understanding with the Secretary of Defense setting out rapid response procedures for mobilization of personnel and equipment of their respective departments to provide more effective assistance in times of emergency with respect to U.S. diplomatic facilities; and (7) all U.S. diplomatic missions have emergency equipment and records stored at a secure off-site facility. Authorizes appropriations. \n(Sec. 5) Directs the Secretary of State to report to Congress an identification of U.S. diplomatic facilities that are most vulnerable to terrorist attack, setting out, in tiers of 20, those facilities from the most to the least vulnerable to such an attack. Requires account funds allocated for construction of new embassies to be used to rebuild only those embassies in the top three tiers. \n(Sec. 7) Directs the Secretary to review, and report to Congress on, the findings of the Overseas Presence Advisory Panel with respect to the closure of vulnerable U.S. diplomatic missions overseas. \n(Sec. 8) Directs the Secretary to report to Congress on the role and function of each U.S. diplomatic facility through 2010. \n(Sec. 
9) Amends the State Department Basic Authorities Act of 1956 to authorize the President to award an overseas service star to any member of the Foreign Service or any other civilian Government employee who while employed at, or assigned permanently or temporarily to, an official mission overseas, or while traveling abroad on official business, incurred a wound or other injury or an illness (whether or not resulting in death): (1) as the person was performing official duties; (2) as the person was on the premises of a U.S. mission abroad; or (3) by reason of the person's status as a U.S. Government employee. Specifies award requirements.", "output": ["Secure Embassy Construction and Counterterrorism Act of 1999"]}, "Prediction": "Establishes within the Treasury an embassy"} +{"Task": "task1659_title_generation", "Definition": ["In this task, you are given a summary for US Congressional and California state bill, your task is to generate a Title for this bill. The preferred titles are under forty words and mention the purpose of the bill."], "Instance": {"id": "task1659-da1b0734c4cb4aaaa4dddba7ee241d3a", "input": "Homeland Security Information Sharing Enhancement Act of 2006 - Amends the Homeland Security Act of 2002 to require the Under Secretary for Information and Analysis to implement a Homeland Security Advisory System to provide advisories and alerts regarding threats to homeland security. 
Requires such an advisory or alert to: (1) include information on protective measures and countermeasures; (2) be limited in scope to a specific region, locality, or economic sector; and (3) not use color designations as the exclusive means of specifying threat conditions.\n\nDirects the Secretary of the Department of Homeland Security (DHS) to: (1) integrate and standardize the information of the Department's intelligence components into a Department information-sharing environment; and (2) designate, for each such component, an information-sharing and knowledge management officer.\n\nRequires the Under Secretary to: (1) establish Department-wide procedures for the review and analysis of information gathered from state, local, tribal, and private-sector sources; (2) develop mechanisms to provide analytical and operational feedback; (3) provide Department employees training and educational opportunities; and (4) evaluate how employees of the Office of Intelligence and Analysis and the Department's intelligence components are utilizing homeland security information.\n\nDirects the Secretary, acting through the Chief Intelligence Officer, to establish a comprehensive information technology architecture for such Office.\n\nMakes the Secretary the executive branch official responsible for disseminating homeland security-related terrorist threat information to state and local government and tribal officials and the private sector. Prohibits any federal official from issuing a homeland security-related analysis, advisory, or alert without the Secretary's approval, with exceptions.", "output": ["To amend the Homeland Security Act of 2002 to enhance homeland security information sharing, and for other purposes."]}, "Prediction": "To require the Under Secretary for Information and Analysis"} +{"Task": "task1624_disfl_qa_question_yesno_classification", "Definition": ["In this task you are given a disfluent question, a proper question and a context. 
A disfluent question is a question that has some interruptions in it while framing and a proper question is the correct form of the question without any disfluency. Classify whether the question is answerable or not based on the given context."], "Instance": {"id": "task1624-6bcd4d1e02fb4ca99406dc4ceb3a32ca", "input": "proper question: Why did Warsaw become the capital of the Commonwealth? \n disfluent question: Why did King Sigismund III no um why did Warsaw become the capital of the Commonwealth? \n context: In 1529, Warsaw for the first time became the seat of the General Sejm, permanent from 1569. In 1573 the city gave its name to the Warsaw Confederation, formally establishing religious freedom in the Polish\u2013Lithuanian Commonwealth. Due to its central location between the Commonwealth's capitals of Krak\u00f3w and Vilnius, Warsaw became the capital of the Commonwealth and the Crown of the Kingdom of Poland when King Sigismund III Vasa moved his court from Krak\u00f3w to Warsaw in 1596. In the following years the town expanded towards the suburbs. Several private independent districts were established, the property of aristocrats and the gentry, which were ruled by their own laws. Three times between 1655\u20131658 the city was under siege and three times it was taken and pillaged by the Swedish, Brandenburgian and Transylvanian forces.", "output": ["Yes"]}, "Prediction": "Yes"} +{"Task": "task1624_disfl_qa_question_yesno_classification", "Definition": ["In this task you are given a disfluent question, a proper question and a context. A disfluent question is a question that has some interruptions in it while framing and a proper question is the correct form of the question without any disfluency. Classify whether the question is answerable or not based on the given context."], "Instance": {"id": "task1624-81e4262b5d3c4bb38e0226ace8e9e3bc", "input": "proper question: What metaphors explained global warming? 
\n disfluent question: What metaphors explained Kyoto Protocol no no um global warming? \n context: According to Sheldon Ungar's comparison with global warming, the actors in the ozone depletion case had a better understanding of scientific ignorance and uncertainties. The ozone case communicated to lay persons with easy-to-understand bridging metaphors derived from the popular culture and related to immediate risks with everyday relevance, while the public opinion on climate change sees no imminent danger. The stepwise mitigation of the ozone layer challenge was based as well on successfully reducing regional burden sharing conflicts. In case of the IPCC conclusions and the failure of the Kyoto Protocol, varying regional cost-benefit analysis and burden-sharing conflicts with regard to the distribution of emission reductions remain an unsolved problem. In the UK, a report for a House of Lords committee asked to urge the IPCC to involve better assessments of costs and benefits of climate change but the Stern Review ordered by the UK government made a stronger argument in favor to combat human-made climate change.", "output": ["No"]}, "Prediction": "No"} +{"Task": "task1624_disfl_qa_question_yesno_classification", "Definition": ["In this task you are given a disfluent question, a proper question and a context. A disfluent question is a question that has some interruptions in it while framing and a proper question is the correct form of the question without any disfluency. Classify whether the question is answerable or not based on the given context."], "Instance": {"id": "task1624-5c5c6180e5c44c448fc2db8c95cf3e4e", "input": "proper question: What interpretation of Islam is, for many of the adherents, not the gold standard of their religion? \n disfluent question: When or wait what interpretation of Islam is, for many of the adherents, not the gold standard of their religion? 
\n context: The interpretation of Islam promoted by this funding was the strict, conservative Saudi-based Wahhabism or Salafism. In its harshest form it preached that Muslims should not only always oppose infidels in every way, but hate them for their religion ... for Allah's sake, that democracy is responsible for all the horrible wars of the 20th century, that Shia and other non-Wahhabi Muslims were infidels, etc. While this effort has by no means converted all, or even most Muslims to the Wahhabist interpretation of Islam, it has done much to overwhelm more moderate local interpretations, and has set the Saudi-interpretation of Islam as the gold standard of religion in minds of some or many Muslims.", "output": ["No"]}, "Prediction": "Yes"} +{"Task": "task1624_disfl_qa_question_yesno_classification", "Definition": ["In this task you are given a disfluent question, a proper question and a context. A disfluent question is a question that has some interruptions in it while framing and a proper question is the correct form of the question without any disfluency. Classify whether the question is answerable or not based on the given context."], "Instance": {"id": "task1624-656944ac7a4c46898653c829e7c3fd29", "input": "proper question: Who in Warsaw has the power of legislative action? \n disfluent question: How many er who in Warsaw has the power of legislative action? \n context: Legislative power in Warsaw is vested in a unicameral Warsaw City Council (Rada Miasta), which comprises 60 members. Council members are elected directly every four years. Like most legislative bodies, the City Council divides itself into committees which have the oversight of various functions of the city government. Bills passed by a simple majority are sent to the mayor (the President of Warsaw), who may sign them into law. 
If the mayor vetoes a bill, the Council has 30 days to override the veto by a two-thirds majority vote.", "output": ["Yes"]}, "Prediction": "Yes"} +{"Task": "task1624_disfl_qa_question_yesno_classification", "Definition": ["In this task you are given a disfluent question, a proper question and a context. A disfluent question is a question that has some interruptions in it while framing and a proper question is the correct form of the question without any disfluency. Classify whether the question is answerable or not based on the given context."], "Instance": {"id": "task1624-3c7cde165b5b417b96778f64b1bb4b35", "input": "proper question: What session did the IPCC panel attend before the opening ceremonies? \n disfluent question: What people did the IPCC panel I meant session did they attend before the opening ceremonies? \n context: The IPCC Panel is composed of representatives appointed by governments and organizations. Participation of delegates with appropriate expertise is encouraged. Plenary sessions of the IPCC and IPCC Working groups are held at the level of government representatives. Non Governmental and Intergovernmental Organizations may be allowed to attend as observers. Sessions of the IPCC Bureau, workshops, expert and lead authors meetings are by invitation only. Attendance at the 2003 meeting included 350 government officials and climate change experts. After the opening ceremonies, closed plenary sessions were held. The meeting report states there were 322 persons in attendance at Sessions with about seven-eighths of participants being from governmental organizations.", "output": ["No"]}, "Prediction": "Yes"} +{"Task": "task1158_bard_analogical_reasoning_manipulating_items", "Definition": ["Two analogies on manipulating items in a kitchen is given in the form \"A : B. C : ?\". The phrase \"A : B\" relates item A to its appropriate manipulation B. Your task is to replace the question mark (?) 
with the appropriate manipulation of item C, following the \"A : B\" relation. Your answer should be a verb which shows a way to manipulate the given object."], "Instance": {"id": "task1158-3fc1c1daae8c42f4a30eceef03affe94", "input": "backpack : unzip. banana : ?", "output": ["peel"]}, "Prediction": "unripe"} +{"Task": "task1158_bard_analogical_reasoning_manipulating_items", "Definition": ["Two analogies on manipulating items in a kitchen is given in the form \"A : B. C : ?\". The phrase \"A : B\" relates item A to its appropriate manipulation B. Your task is to replace the question mark (?) with the appropriate manipulation of item C, following the \"A : B\" relation. Your answer should be a verb which shows a way to manipulate the given object."], "Instance": {"id": "task1158-33a2fe0e03f04e63a471a75ecd72ee40", "input": "present : open. bottle : ?", "output": ["pour", "squeeze"]}, "Prediction": "open"} +{"Task": "task1158_bard_analogical_reasoning_manipulating_items", "Definition": ["Two analogies on manipulating items in a kitchen is given in the form \"A : B. C : ?\". The phrase \"A : B\" relates item A to its appropriate manipulation B. Your task is to replace the question mark (?) with the appropriate manipulation of item C, following the \"A : B\" relation. Your answer should be a verb which shows a way to manipulate the given object."], "Instance": {"id": "task1158-0c8bafcca2104450938f7df7f0650960", "input": "jug : pour. bottle : ?", "output": ["pour", "squeeze"]}, "Prediction": "pour"} +{"Task": "task1158_bard_analogical_reasoning_manipulating_items", "Definition": ["Two analogies on manipulating items in a kitchen is given in the form \"A : B. C : ?\". The phrase \"A : B\" relates item A to its appropriate manipulation B. Your task is to replace the question mark (?) with the appropriate manipulation of item C, following the \"A : B\" relation. 
Your answer should be a verb which shows a way to manipulate the given object."], "Instance": {"id": "task1158-2f43bbed041642eab500c98928ae2cbb", "input": "banana : peel. faucet : ?", "output": ["turn"]}, "Prediction": "untie"} +{"Task": "task1158_bard_analogical_reasoning_manipulating_items", "Definition": ["Two analogies on manipulating items in a kitchen is given in the form \"A : B. C : ?\". The phrase \"A : B\" relates item A to its appropriate manipulation B. Your task is to replace the question mark (?) with the appropriate manipulation of item C, following the \"A : B\" relation. Your answer should be a verb which shows a way to manipulate the given object."], "Instance": {"id": "task1158-0536e891cc994c09b329ab7686306734", "input": "shoe : untie. pump : ?", "output": ["pump"]}, "Prediction": "untie"} +{"Task": "task827_copa_commonsense_reasoning", "Definition": ["Given a premise and two alternatives, choose the alternative that is a more plausible cause or effect of the situation described by the premise. The input format is \"premise\n(1)alternative_1(2)alternative_2\", the output should either be \"1\" or \"2\" based on your judgment."], "Instance": {"id": "task827-076d2731abb64f15a3dee995d7f2c65d", "input": "I added sugar to the coffee.\n(1)The coffee smelled strong.(2)The coffee tasted sweet.", "output": ["2"]}, "Prediction": "2"} +{"Task": "task827_copa_commonsense_reasoning", "Definition": ["Given a premise and two alternatives, choose the alternative that is a more plausible cause or effect of the situation described by the premise. 
The input format is \"premise\n(1)alternative_1(2)alternative_2\", the output should either be \"1\" or \"2\" based on your judgment."], "Instance": {"id": "task827-c8951b482b914e709017d147a10ab463", "input": "The man was denied for a loan.\n(1)He was in debt.(2)He started a business.", "output": ["1"]}, "Prediction": "1"} +{"Task": "task827_copa_commonsense_reasoning", "Definition": ["Given a premise and two alternatives, choose the alternative that is a more plausible cause or effect of the situation described by the premise. The input format is \"premise\n(1)alternative_1(2)alternative_2\", the output should either be \"1\" or \"2\" based on your judgment."], "Instance": {"id": "task827-6910511d2bcb4aa9b9e2ce08069d4a75", "input": "The boy threw a temper tantrum.\n(1)His brother took his toys from him.(2)He shared his toys with his brother.", "output": ["1"]}, "Prediction": "1"} +{"Task": "task827_copa_commonsense_reasoning", "Definition": ["Given a premise and two alternatives, choose the alternative that is a more plausible cause or effect of the situation described by the premise. The input format is \"premise\n(1)alternative_1(2)alternative_2\", the output should either be \"1\" or \"2\" based on your judgment."], "Instance": {"id": "task827-23345dfb9569411db8a3446e0dcbc65d", "input": "The boy put his feet up on the table.\n(1)His father sat down at the table.(2)His father lectured him.", "output": ["2"]}, "Prediction": "2"} +{"Task": "task827_copa_commonsense_reasoning", "Definition": ["Given a premise and two alternatives, choose the alternative that is a more plausible cause or effect of the situation described by the premise. 
The input format is \"premise\n(1)alternative_1(2)alternative_2\", the output should either be \"1\" or \"2\" based on your judgment."], "Instance": {"id": "task827-c6796782acd240c3962e159f631ba920", "input": "The woman pressed down on the piano key.\n(1)A sound played.(2)The key got stuck.", "output": ["1"]}, "Prediction": "2"} +{"Task": "task1153_bard_analogical_reasoning_affordance", "Definition": ["Two analogies that signify affordances are given in the form \"A : B. C : ?\". Affordance is the possibility of an action being done on an object, for example book is an affordance of writing. The phrase \"A : B\" implies that B is an affordance of A. Your task is to replace the question mark (?) with the appropriate affordance of the given action C, following the \"A : B\" relation. Your answer should be a single object without further explanation."], "Instance": {"id": "task1153-287b5b79f3da4a4e9f4c7c19243d4ff3", "input": "drive : car. brush : ?", "output": ["hair"]}, "Prediction": "brush"} +{"Task": "task1153_bard_analogical_reasoning_affordance", "Definition": ["Two analogies that signify affordances are given in the form \"A : B. C : ?\". Affordance is the possibility of an action being done on an object, for example book is an affordance of writing. The phrase \"A : B\" implies that B is an affordance of A. Your task is to replace the question mark (?) with the appropriate affordance of the given action C, following the \"A : B\" relation. Your answer should be a single object without further explanation."], "Instance": {"id": "task1153-d84f047be0ae46e8a9f42089f081df1a", "input": "reload : gun. paint : ?", "output": ["picture"]}, "Prediction": "brush"} +{"Task": "task1153_bard_analogical_reasoning_affordance", "Definition": ["Two analogies that signify affordances are given in the form \"A : B. C : ?\". Affordance is the possibility of an action being done on an object, for example book is an affordance of writing. 
The phrase \"A : B\" implies that B is an affordance of A. Your task is to replace the question mark (?) with the appropriate affordance of the given action C, following the \"A : B\" relation. Your answer should be a single object without further explanation."], "Instance": {"id": "task1153-38b046312fbf4d09bd1c37707395d95c", "input": "open : door. write : ?", "output": ["letter", "book"]}, "Prediction": "book"} +{"Task": "task1153_bard_analogical_reasoning_affordance", "Definition": ["Two analogies that signify affordances are given in the form \"A : B. C : ?\". Affordance is the possibility of an action being done on an object, for example book is an affordance of writing. The phrase \"A : B\" implies that B is an affordance of A. Your task is to replace the question mark (?) with the appropriate affordance of the given action C, following the \"A : B\" relation. Your answer should be a single object without further explanation."], "Instance": {"id": "task1153-9891b804933e4da6a495170a0d6e48d5", "input": "hunt : goose. give : ?", "output": ["gift"]}, "Prediction": "blood"} +{"Task": "task1153_bard_analogical_reasoning_affordance", "Definition": ["Two analogies that signify affordances are given in the form \"A : B. C : ?\". Affordance is the possibility of an action being done on an object, for example book is an affordance of writing. The phrase \"A : B\" implies that B is an affordance of A. Your task is to replace the question mark (?) with the appropriate affordance of the given action C, following the \"A : B\" relation. Your answer should be a single object without further explanation."], "Instance": {"id": "task1153-e63cb4c989cd4189973c7f45e9f2c0ae", "input": "swat : fly. chase : ?", "output": ["thief"]}, "Prediction": "dog"} +{"Task": "task393_plausible_result_generation", "Definition": ["In this task, you will be given a sentence, followed by \", so\". You should complete the given text with another sentence, such that the whole text is plausible. 
The sentence you enter must be a likely result of the given sentence."], "Instance": {"id": "task393-efca5ba474e447d3802989cdb634d274", "input": "The cowgirl threw her lasso towards the horse, so ", "output": ["it worked out pretty well", "she was able to catch it quickly", "the lasso grabbed onto the horse", "the horse trotted into the barn", "I hopped on the horse"]}, "Prediction": "she was able to catch the horse"} +{"Task": "task393_plausible_result_generation", "Definition": ["In this task, you will be given a sentence, followed by \", so\". You should complete the given text with another sentence, such that the whole text is plausible. The sentence you enter must be a likely result of the given sentence."], "Instance": {"id": "task393-4759913505b2406e84bc998d70099787", "input": "The baby was wailing in his crib, so ", "output": ["I took him to the doctor", "the mother picked up the baby", "I went to check on him", "I got up to check on him", "we had to get up early", "I decided to take him to the doctor"]}, "Prediction": "he must be hungry"} +{"Task": "task393_plausible_result_generation", "Definition": ["In this task, you will be given a sentence, followed by \", so\". You should complete the given text with another sentence, such that the whole text is plausible. 
The sentence you enter must be a likely result of the given sentence."], "Instance": {"id": "task393-9b36ce67e210468fb558f4775227c877", "input": "The mirror in the bathroom fogged up, so ", "output": ["I could not see what was going on in the bathroom", "she couldn't see what was going on", "I couldn't see what was going on", "she turned to the other side of the room", "it was hard to tell what was going on", "I turned around to see what was going on", "I could not see what was going on", "I couldn't see what I was doing", "I could not see anything"]}, "Prediction": "it was hard to see"} +{"Task": "task393_plausible_result_generation", "Definition": ["In this task, you will be given a sentence, followed by \", so\". You should complete the given text with another sentence, such that the whole text is plausible. The sentence you enter must be a likely result of the given sentence."], "Instance": {"id": "task393-e686a173451d47ddaa56fc0797bfd05b", "input": "The tenant misplaced his keys to his apartment, so ", "output": ["there was nothing he could do", "he had to leave the door open", "we had to take him home", "he went to his friend's house", "he took matters into his own hands", "he had to get a new one", "he went to his neighbor's house", "he couldn't call for help", "he couldn't get back to him", "he did the next best thing", "he couldn't go back to his home", "he left the door open", "his landlord repaired the door", "his landlord unlocked the door"]}, "Prediction": "he will have to call the landlord to"} +{"Task": "task393_plausible_result_generation", "Definition": ["In this task, you will be given a sentence, followed by \", so\". You should complete the given text with another sentence, such that the whole text is plausible. 
The sentence you enter must be a likely result of the given sentence."], "Instance": {"id": "task393-d11795119a7c4d21b49131cba46220cf", "input": "The clock stopped ticking, so ", "output": ["I could get a good night's sleep", "I gave up and went to sleep", "we had to act fast", "i took extra time to get ready", "I was able to get a good look at the clock", "I went back to sleep", "I don't know what the problem was", "I thought it was time", "I could get my sleep", "I don't know what was going on", "I had to check it out", "I decided it was time", "I could get a good look at what was going on", "I called it a day", "the clock showed the wrong time", "I decided to try again"]}, "Prediction": "no more time passed"} +{"Task": "task879_schema_guided_dstc8_classification", "Definition": ["You are given a sentence from a conversation between a human and a virtual assistant. Your task is to identify whether the sentence is a question or not. Answer with Yes or No."], "Instance": {"id": "task879-cc88a1c6d5da4195b95af5d1ba30ea28", "input": "The restaurant is located in Burlingame.", "output": ["No"]}, "Prediction": "No"} +{"Task": "task879_schema_guided_dstc8_classification", "Definition": ["You are given a sentence from a conversation between a human and a virtual assistant. Your task is to identify whether the sentence is a question or not. Answer with Yes or No."], "Instance": {"id": "task879-e1f8539c5a2a4b92b410d45922a55900", "input": "The reservation is for what time?", "output": ["Yes"]}, "Prediction": "Yes"} +{"Task": "task879_schema_guided_dstc8_classification", "Definition": ["You are given a sentence from a conversation between a human and a virtual assistant. Your task is to identify whether the sentence is a question or not. Answer with Yes or No."], "Instance": {"id": "task879-342891675ff7499386d2236fc7641a61", "input": "Yes that is correct. 
Are the prices inexpensive and do they have live music?", "output": ["Yes"]}, "Prediction": "Yes"} +{"Task": "task879_schema_guided_dstc8_classification", "Definition": ["You are given a sentence from a conversation between a human and a virtual assistant. Your task is to identify whether the sentence is a question or not. Answer with Yes or No."], "Instance": {"id": "task879-d225764d8c4e456a87ff93733a7006e6", "input": "No, don't bother. I'm set.", "output": ["No"]}, "Prediction": "No"} +{"Task": "task879_schema_guided_dstc8_classification", "Definition": ["You are given a sentence from a conversation between a human and a virtual assistant. Your task is to identify whether the sentence is a question or not. Answer with Yes or No."], "Instance": {"id": "task879-c057b9ef878d4e4785936bbe36588a4f", "input": "Yes, the information is correct. Is the restaurant very expensive? Do they have live music?", "output": ["Yes"]}, "Prediction": "Yes"} +{"Task": "task613_politifact_text_generation", "Definition": ["In this task, you are given a statement spoken by a politician in natural language. Your task is to generate the subject of the discussion for the given statement. The subject generated is not necessarily a part of the given input. Your answer should contain one or more words."], "Instance": {"id": "task613-0e463643293041158dc200d4b7685f7a", "input": "From 2008 to 2011, the pages in the conforming bills have gone from 400 pages to 2,200.", "output": ["state-budget"]}, "Prediction": "conforming bills"} +{"Task": "task613_politifact_text_generation", "Definition": ["In this task, you are given a statement spoken by a politician in natural language. Your task is to generate the subject of the discussion for the given statement. The subject generated is not necessarily a part of the given input. 
Your answer should contain one or more words."], "Instance": {"id": "task613-3e9538291d264825aa0db5c8017ee40e", "input": "Says Lloyd Doggett voted for the health care, stimulus and cap-and-trade bills.", "output": ["cap-and-trade", "climate-change", "economy", "health-care", "message-machine"]}, "Prediction": "health care"} +{"Task": "task613_politifact_text_generation", "Definition": ["In this task, you are given a statement spoken by a politician in natural language. Your task is to generate the subject of the discussion for the given statement. The subject generated is not necessarily a part of the given input. Your answer should contain one or more words."], "Instance": {"id": "task613-91329fbb87d24ce9a57445bc1ee9870b", "input": "Have the suburbs been inundated with former residents of Atlanta housing projects? Absolutely not.", "output": ["housing"]}, "Prediction": "housing projects"} +{"Task": "task613_politifact_text_generation", "Definition": ["In this task, you are given a statement spoken by a politician in natural language. Your task is to generate the subject of the discussion for the given statement. The subject generated is not necessarily a part of the given input. Your answer should contain one or more words."], "Instance": {"id": "task613-f267b7443dca412fabd7c49c7357ea87", "input": "My opponent was on both the city of Chicago payroll and the state of Illinois payroll simultaneously, taking full salary, funding her full pension benefits both as a city worker and as a state worker for 10 years.", "output": ["pensions"]}, "Prediction": "state worker"} +{"Task": "task613_politifact_text_generation", "Definition": ["In this task, you are given a statement spoken by a politician in natural language. Your task is to generate the subject of the discussion for the given statement. The subject generated is not necessarily a part of the given input. 
Your answer should contain one or more words."], "Instance": {"id": "task613-8526e765ec894a1da9e61503ee679abc", "input": "Last year the (Medicaid Fraud Control Unit) in Virginia alone recovered over $200 million!", "output": ["medicaid"]}, "Prediction": "fraud"} +{"Task": "task219_rocstories_title_answer_generation", "Definition": ["In this task, you're given five sentences, numbered 1 through 5. Your job is to generate a title for the story that makes complete sense. The title must be short, with less than three words, use simple language, and include the main topic of the story."], "Instance": {"id": "task219-fa90cf0657994c75a1464694f71a938f", "input": "Sentence 1: Jim went fishing in a boat. Sentence 2: Jim caught a catfish. Sentence 3: He dropped the catfish trying to remove the hook. Sentence 4: The the catfish's spine went into Joe's foot. Sentence 5: Joe had to get a tetanus shot.", "output": ["A fish story"]}, "Prediction": "Jim caught a catfish."} +{"Task": "task219_rocstories_title_answer_generation", "Definition": ["In this task, you're given five sentences, numbered 1 through 5. Your job is to generate a title for the story that makes complete sense. The title must be short, with less than three words, use simple language, and include the main topic of the story."], "Instance": {"id": "task219-dbac7bf821aa4c74bab0d208f19d35f3", "input": "Sentence 1: I went to my cousins house for a party. Sentence 2: Her dog tried to rest on me. Sentence 3: After I tried to moved it away, I started to cough. Sentence 4: The coughing continued until the end of the night. Sentence 5: I had to take pills in order to heal the coughing.", "output": ["Allergies"]}, "Prediction": "Dog Tries to Sleep on Me"} +{"Task": "task219_rocstories_title_answer_generation", "Definition": ["In this task, you're given five sentences, numbered 1 through 5. Your job is to generate a title for the story that makes complete sense. 
The title must be short, with less than three words, use simple language, and include the main topic of the story."], "Instance": {"id": "task219-57a777e9a2f246a38bd41928f4897fa6", "input": "Sentence 1: Will wasn't sure who he liked in the US presidential election. Sentence 2: Will talked with his friends, who were all undecided as well. Sentence 3: Will watched a debate where the two candidates sparred. Sentence 4: Will decided he liked one candidate's answers over the other's. Sentence 5: Will cast his vote for that candidate.", "output": ["Election Choices"]}, "Prediction": "Will Watchs a debate."} +{"Task": "task219_rocstories_title_answer_generation", "Definition": ["In this task, you're given five sentences, numbered 1 through 5. Your job is to generate a title for the story that makes complete sense. The title must be short, with less than three words, use simple language, and include the main topic of the story."], "Instance": {"id": "task219-7b20cf02872a4198a27e0d6d38af1c74", "input": "Sentence 1: A student ran around the classroom. Sentence 2: He yelled at the other students. Sentence 3: The teacher spoke to him calmly. Sentence 4: He began to calm down. Sentence 5: The student learned a lot.", "output": ["Calm Words"]}, "Prediction": "Student Calms Down After Teacher Speaks"} +{"Task": "task219_rocstories_title_answer_generation", "Definition": ["In this task, you're given five sentences, numbered 1 through 5. Your job is to generate a title for the story that makes complete sense. The title must be short, with less than three words, use simple language, and include the main topic of the story."], "Instance": {"id": "task219-641314edac624e00ab0b92cfa09d1555", "input": "Sentence 1: Grady had a big aquarium. Sentence 2: He filled it with tropical fish. Sentence 3: He loved looking at the tank while he drifted off to sleep. Sentence 4: He had big plans for adding more fish. 
Sentence 5: The fish had become his favorite entertainment.", "output": ["The Fish"]}, "Prediction": "Tropical Fish"} +{"Task": "task190_snli_classification", "Definition": ["In this task, you're given a pair of sentences, sentence 1 and sentence 2. Your job is to choose whether the two sentences clearly agree (entailment)/disagree (contradiction) with each other, or if this cannot be determined (neutral). Your answer must be in the form of the letters E, C, and N respectively."], "Instance": {"id": "task190-ffef5c37fd37460e8e4a60fd53758299", "input": "Sentence 1: An orchestra practicing their composition. Sentence 2: The orchestra is practicing a very difficult composition.", "output": ["N"]}, "Prediction": "N"} +{"Task": "task190_snli_classification", "Definition": ["In this task, you're given a pair of sentences, sentence 1 and sentence 2. Your job is to choose whether the two sentences clearly agree (entailment)/disagree (contradiction) with each other, or if this cannot be determined (neutral). Your answer must be in the form of the letters E, C, and N respectively."], "Instance": {"id": "task190-213105a8a3a04fd4bae412d23849d411", "input": "Sentence 1: A man crossing a natural stone bridge jumps while in the middle of it. Sentence 2: A Girl was running in the bridge.", "output": ["C"]}, "Prediction": "N"} +{"Task": "task190_snli_classification", "Definition": ["In this task, you're given a pair of sentences, sentence 1 and sentence 2. Your job is to choose whether the two sentences clearly agree (entailment)/disagree (contradiction) with each other, or if this cannot be determined (neutral). Your answer must be in the form of the letters E, C, and N respectively."], "Instance": {"id": "task190-af92249823164197b3e6a38124a7d7a9", "input": "Sentence 1: An older woman sits next to a small girl who is wearing a striped dress. 
Sentence 2: People on the train can't help but stare-down an homeless man at the end of the coach of the train", "output": ["C"]}, "Prediction": "N"} +{"Task": "task190_snli_classification", "Definition": ["In this task, you're given a pair of sentences, sentence 1 and sentence 2. Your job is to choose whether the two sentences clearly agree (entailment)/disagree (contradiction) with each other, or if this cannot be determined (neutral). Your answer must be in the form of the letters E, C, and N respectively."], "Instance": {"id": "task190-3197d39a439d4753bb3e1a7427b3c79f", "input": "Sentence 1: A hostess in a restaurant, wearing a uniform and a black hat is cleaning one of the tables. Sentence 2: One of the waitresses did not show up so the restaurant hostess is bussing tables.", "output": ["N"]}, "Prediction": "N"} +{"Task": "task190_snli_classification", "Definition": ["In this task, you're given a pair of sentences, sentence 1 and sentence 2. Your job is to choose whether the two sentences clearly agree (entailment)/disagree (contradiction) with each other, or if this cannot be determined (neutral). Your answer must be in the form of the letters E, C, and N respectively."], "Instance": {"id": "task190-d8c5cce3f8924a2f946ca44bcc875d96", "input": "Sentence 1: A young woman in an \"I Heart Bill Clinton\" T-Shirt cooks meat on a grill in a public picnic area, while two young men converse behind her and a bicyclist and motorcyclist ride by on a nearby road in the background. Sentence 2: a woman heats up a frozen dinner in a microwave", "output": ["C"]}, "Prediction": "N"} +{"Task": "task200_mnli_entailment_classification", "Definition": ["In this task, you're given a statement and three sentences as choices. Your job is to determine which sentence can be inferred from the statement. Incorrect choices change the meaning in important ways or have details that are not mentioned in the statement. 
Indicate your answer as 1,2, or 3 corresponding to the choice number of the selected sentence."], "Instance": {"id": "task200-acfa05af5c97427dbf7967cbd9e95952", "input": "Statement: Formal and informal areas are landscaped with pools and fountains, while terraces tumble down the hillsides. Choices: 1. Pools and fountains were used in the landscaping of formal areas. 2. There were no pools or fountains in the informal areas. 3. The architect of the pools and fountains drew inspiration from the gardens of Versailles.", "output": ["1"]}, "Prediction": "1"} +{"Task": "task200_mnli_entailment_classification", "Definition": ["In this task, you're given a statement and three sentences as choices. Your job is to determine which sentence can be inferred from the statement. Incorrect choices change the meaning in important ways or have details that are not mentioned in the statement. Indicate your answer as 1,2, or 3 corresponding to the choice number of the selected sentence."], "Instance": {"id": "task200-e8fc9c64be3d410d8bed16e131ac3dd9", "input": "Statement: You there Rennie! Drew saw the involuntary spasm of Don Cazar's lips, the shadow of an expression which might mean he anticipated a distasteful scene to come. Choices: 1. Drew knew Don Cazar's expression well because he has known him for a long time. 2. Drew knew Don Quixote. 3. Drew knew Don Cazar's expression well.", "output": ["3"]}, "Prediction": "3"} +{"Task": "task200_mnli_entailment_classification", "Definition": ["In this task, you're given a statement and three sentences as choices. Your job is to determine which sentence can be inferred from the statement. Incorrect choices change the meaning in important ways or have details that are not mentioned in the statement. 
Indicate your answer as 1,2, or 3 corresponding to the choice number of the selected sentence."], "Instance": {"id": "task200-38ed987f6dcd4f66bc6b9f20fc291cbc", "input": "Statement: Each June, the Malaysian Nature Society supports an international bird race, where teams compete to identify the largest number of birds. Choices: 1. There is an international bird race every year. 2. There is a special prize for one of the rarest birds. 3. Each June, the Malaysian Nature Society has shunned the international bird race.", "output": ["1"]}, "Prediction": "3"} +{"Task": "task200_mnli_entailment_classification", "Definition": ["In this task, you're given a statement and three sentences as choices. Your job is to determine which sentence can be inferred from the statement. Incorrect choices change the meaning in important ways or have details that are not mentioned in the statement. Indicate your answer as 1,2, or 3 corresponding to the choice number of the selected sentence."], "Instance": {"id": "task200-c3ddc482101443c7be8d01622ac9e7c0", "input": "Statement: Some states have established insurance-buying groups that employers can join. Choices: 1. Employers will not be able to join state established insurance-buying groups. 2. Employers can join state established insurance-buying groups. 3. Almost all of the states have established at least one insurance-buying group.", "output": ["2"]}, "Prediction": "2"} +{"Task": "task200_mnli_entailment_classification", "Definition": ["In this task, you're given a statement and three sentences as choices. Your job is to determine which sentence can be inferred from the statement. Incorrect choices change the meaning in important ways or have details that are not mentioned in the statement. 
Indicate your answer as 1,2, or 3 corresponding to the choice number of the selected sentence."], "Instance": {"id": "task200-1f564ea3a8124b09a28ec1198ed2e483", "input": "Statement: However, young and old, stranger and Dubliner rub shoulders quite happily. Choices: 1. Dubliners hate outsiders, with a particular contempt for the old. 2. Dubliners and strangers get along well regardless of age. 3. Young and old, stranger and Dubliner give one another shoulder massages. ", "output": ["2"]}, "Prediction": "2"} +{"Task": "task1534_daily_dialog_question_classification", "Definition": ["You are given a conversation between two people. 'Person1:' and 'Person2:' are used to separate their respective dialogues. If the conversation begins with a question, label it '1' otherwise '0'."], "Instance": {"id": "task1534-8db6b97dafaa44de88dc3a1ea901d3ab", "input": " Person1: Guess what ? Paul and Susan are engaged . \n Person2: Really ? When did that happen ? \n Person1: A week ago ? They met last summer and now just sink . They will be married soon . \n Person2: Have they set a date for the wedding ? \n Person1: No , not yet . But Susan says they \u2019 d like to get married in November or December . Then they \u2019 ll go to Hawaii for their honeymoon . ", "output": ["1"]}, "Prediction": "1"} +{"Task": "task1534_daily_dialog_question_classification", "Definition": ["You are given a conversation between two people. 'Person1:' and 'Person2:' are used to separate their respective dialogues. If the conversation begins with a question, label it '1' otherwise '0'."], "Instance": {"id": "task1534-178dd0f3c885466cb5afbd675ee1c3e3", "input": " Person1: Awful ! My cell phone is out of service again . Fred , can you lend me some money ? \n Person2: How much do you want ? \n Person1: 200yuan . I bought a computer online , so I have to pay the mortgage . \n Person2: I am so sorry to hear that . Is that enough ? \n Person1: It's so nice of you . You really make my day . \n Person2: Forget it ! 
We seemingly need to gather money little by little . \n Person1: What you said is absolutely right . I will pay you back as soon as I get back on my feet . \n Person2: No rush . You can give it back whenever you have it . ", "output": ["0"]}, "Prediction": "1"} +{"Task": "task1534_daily_dialog_question_classification", "Definition": ["You are given a conversation between two people. 'Person1:' and 'Person2:' are used to separate their respective dialogues. If the conversation begins with a question, label it '1' otherwise '0'."], "Instance": {"id": "task1534-b9b2c6d8ea9247bb9d53494fa91ca442", "input": " Person1: I'll never ever work together with him ! \n Person2: What happened ? Quarrel again ? \n Person1: Will you take my materials back from him for me ? Please ! \n Person2: Don't ask me . I don't wanna be involved in your quarrel . ", "output": ["0"]}, "Prediction": "1"} +{"Task": "task1534_daily_dialog_question_classification", "Definition": ["You are given a conversation between two people. 'Person1:' and 'Person2:' are used to separate their respective dialogues. If the conversation begins with a question, label it '1' otherwise '0'."], "Instance": {"id": "task1534-862fcc20654748a398aca06143c68d0a", "input": " Person1: Where do you want to go ? \n Person2: I am going to the Capital Hotel . \n Person1: Get in . I will take you there . \n Person2: About how much it will cost ? \n Person1: $ 50 . \n Person2: On the meter ? \n Person1: Ye , of course . \n Person2: OK.let ' s go . ", "output": ["1"]}, "Prediction": "1"} +{"Task": "task1534_daily_dialog_question_classification", "Definition": ["You are given a conversation between two people. 'Person1:' and 'Person2:' are used to separate their respective dialogues. If the conversation begins with a question, label it '1' otherwise '0'."], "Instance": {"id": "task1534-de63cca1efad41ffa32a0ed7e55f7cfe", "input": " Person1: The stepmother was very wicked . She couldn't stand a happy smile on the boy's face . 
\n Person2: And ? \n Person1: And she did her best to pull out the thorn in her flesh and the boy lost whatever made him happy . \n Person2: Poor child ! ", "output": ["0"]}, "Prediction": "1"} +{"Task": "task1540_parsed_pdfs_summarization", "Definition": ["In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words."], "Instance": {"id": "task1540-add3e88a4a504e17956f0e8511c56a2d", "input": "Although traditionally used in the machine translation field, the encoder-decoder framework has been recently applied for the generation of video and image descriptions. The combination of Convolutional and Recurrent Neural Networks in these models has proven to outperform the previous state of the art, obtaining more accurate video descriptions. In this work we propose pushing further this model by introducing two contributions into the encoding stage. First, producing richer image representations by combining object and location information from Convolutional Neural Networks and second, introducing Bidirectional Recurrent Neural Networks for capturing both forward and backward temporal relationships in the input frames.", "output": ["Video Description using Bidirectional Recurrent Neural Networks"]}, "Prediction": "Encoder-Decoder with Con"} +{"Task": "task1540_parsed_pdfs_summarization", "Definition": ["In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words."], "Instance": {"id": "task1540-8b56d0a230964ae9a29a4913fcfc0723", "input": "Conventional dependency parsers rely on a statistical model and a transition system or graph algorithm to enforce tree-structured outputs during training and inference. In this work we formalize dependency parsing as the problem of selecting the head (a.k.a. parent) of each word in a sentence. 
Our model which we call DENSE (as shorthand for Dependency Neural Selection) employs bidirectional recurrent neural networks for the head selection task. Without enforcing any structural constraints during training, DENSE generates (at inference time) trees for the overwhelming majority of sentences (95% on an English dataset), while remaining non-tree outputs can be adjusted with a maximum spanning tree algorithm. We evaluate DENSE on four languages (English, Chinese, Czech, and German) with varying degrees of non-projectivity. Despite the simplicity of our approach, experiments show that the resulting parsers are on par with or outperform the state of the art.", "output": ["Dependency Parsing as Head Selection"]}, "Prediction": "Dependency Parsing as the Problem of"} +{"Task": "task1540_parsed_pdfs_summarization", "Definition": ["In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words."], "Instance": {"id": "task1540-001094e72c21467c9854e5d83b7cb8a9", "input": "We propose zoneout, a novel method for regularizing RNNs. At each timestep, zoneout stochastically forces some hidden units to maintain their previous values. Like dropout, zoneout uses random noise to train a pseudo-ensemble, improving generalization. But by preserving instead of dropping hidden units, gradient information and state information are more readily propagated through time, as in feedforward stochastic depth networks. 
We perform an empirical investigation of various RNN regularizers, and find encouraging results: zoneout gives significant performance improvements across tasks, yielding state-ofthe-art results in character-level language modeling on the Penn Treebank dataset and competitive results on word-level Penn Treebank and permuted sequential MNIST classification tasks.", "output": ["Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations"]}, "Prediction": "Zoneout: A Novel Regularizer for Re"} +{"Task": "task1540_parsed_pdfs_summarization", "Definition": ["In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words."], "Instance": {"id": "task1540-bd5de377b5ef404faf3704e08cb98acc", "input": "The field of Distributed Constraint Optimization has gained momentum in recent years, thanks to its ability to address various applications related to multi-agent cooperation. Nevertheless, solving Distributed Constraint Optimization Problems (DCOPs) optimally is NP-hard. Therefore, in large-scale, complex applications, incomplete DCOP algorithms are necessary. Current incomplete DCOP algorithms suffer of one or more of the following limitations: they (a) find local minima without providing quality guarantees; (b) provide loose quality assessment; or (c) are unable to benefit from the structure of the problem, such as domain-dependent knowledge and hard constraints. Therefore, capitalizing on strategies from the centralized constraint solving community, we propose a Distributed Large Neighborhood Search (D-LNS) framework to solve DCOPs. The proposed framework (with its novel repair phase) provides guarantees on solution quality, refining upper and lower bounds during the iterative process, and can exploit domain-dependent structures. 
Our experimental results show that D-LNS outperforms other incomplete DCOP algorithms on both structured and unstructured problem instances.", "output": ["Solving DCOPs with Distributed Large Neighborhood Search"]}, "Prediction": "Distributed Large Neighborhood Search for Distributed Cons"} +{"Task": "task1540_parsed_pdfs_summarization", "Definition": ["In this task, you are given a part of an article. Your task is to generate headline (title) for this text. Preferred headlines are under fifteen words."], "Instance": {"id": "task1540-8c934a14c1034bcc90fbfefdee44ac47", "input": "We describe here a methodology to identify a list of ambiguous Malay words that are commonly being used in Malay documentations such as Requirement Specification. We compiled several relevant and appropriate requirement quality attributes and sentence rules from previous literatures and adopt it to come out with a set of ambiguity attributes that most suit Malay words. The extracted Malay ambiguous words (potential) are then being mapped onto the constructed ambiguity attributes to confirm their vagueness. The list is then verified by Malay linguist experts. This paper aims to identify a list of potential ambiguous words in Malay as an attempt to assist writers to avoid using the vague words while documenting Malay Requirement Specification as well as to any other related Malay documentation. The result of this study is a list of 120 potential ambiguous Malay words that could act as guidelines in writing Malay sentences.", "output": ["MAPPING: AN EXPLORATORY STUDY"]}, "Prediction": "A list of ambiguous Malay"} +{"Task": "task442_com_qa_paraphrase_question_generation", "Definition": ["Given a question, generate a paraphrase of that question wihout changing the meaning of it. Your answer should reword the given sentence, but not add information to it or remove information from it. 
The answer to your question should be the same as the answer to the original question."], "Instance": {"id": "task442-4bd839b4a05e400ebdb885561ae7d1d2", "input": "Question: what is the highest point of puerto rico?", "output": ["what is the highest point in puerto rico?", "highest point in puerto rico?", "what is the tallest point in puerto rico?", "what is puerto rico's highest point?"]}, "Prediction": "what is the highest point of puerto"} +{"Task": "task442_com_qa_paraphrase_question_generation", "Definition": ["Given a question, generate a paraphrase of that question wihout changing the meaning of it. Your answer should reword the given sentence, but not add information to it or remove information from it. The answer to your question should be the same as the answer to the original question."], "Instance": {"id": "task442-6478eae566314cadb1eb134cce3661de", "input": "Question: what date did hernando de soto die?", "output": ["what month did hernando de soto die?", "when did hernando de soto die?", "what is hernando de soto date of death?"]}, "Prediction": "In which date did hernando de"} +{"Task": "task442_com_qa_paraphrase_question_generation", "Definition": ["Given a question, generate a paraphrase of that question wihout changing the meaning of it. Your answer should reword the given sentence, but not add information to it or remove information from it. 
The answer to your question should be the same as the answer to the original question."], "Instance": {"id": "task442-75804b8de01c48a2acba8cf60b10f0cf", "input": "Question: what sea divides denmark and sweden?", "output": ["what sea borders southern sweden and denmark?", "which sea lies beside sweeden and denmark?", "what sea separates denmark and sweden?", "what sea is between denmark and sweden?"]}, "Prediction": "what sea divides denmark and s"} +{"Task": "task442_com_qa_paraphrase_question_generation", "Definition": ["Given a question, generate a paraphrase of that question wihout changing the meaning of it. Your answer should reword the given sentence, but not add information to it or remove information from it. The answer to your question should be the same as the answer to the original question."], "Instance": {"id": "task442-89dea81db93a45d19a6311f6661be75c", "input": "Question: president nixon children's names?", "output": ["what where the names president children?", "who was ricahard m nixon children names?", "who are nixons children?", "names of nixons children?", "what was the names of president nixons children?", "what is president nixons childrens names?"]}, "Prediction": "What are the children's names of President"} +{"Task": "task442_com_qa_paraphrase_question_generation", "Definition": ["Given a question, generate a paraphrase of that question wihout changing the meaning of it. Your answer should reword the given sentence, but not add information to it or remove information from it. 
The answer to your question should be the same as the answer to the original question."], "Instance": {"id": "task442-49ca971a2b964785a121866db6b113e0", "input": "Question: what was the title of bob dylans first album?", "output": ["what was bob dylans first album?", "what was bob dylan's first album called?"]}, "Prediction": "What was the title of Bob Dylan's"} +{"Task": "task392_inverse_causal_relationship", "Definition": ["In this task, you will be given two sentences separated by \"because\". You should decide whether the first sentence can be the result of the second sentence. If you can see a possible cause and effect relationship, answer with \"plausible\", otherwise answer with \"not plausible\"."], "Instance": {"id": "task392-bc9eda364c92499e8802a9814fd159e5", "input": "the hospital sent the patient home because he was coughing too much", "output": ["not plausible"]}, "Prediction": "not plausible"} +{"Task": "task392_inverse_causal_relationship", "Definition": ["In this task, you will be given two sentences separated by \"because\". You should decide whether the first sentence can be the result of the second sentence. If you can see a possible cause and effect relationship, answer with \"plausible\", otherwise answer with \"not plausible\"."], "Instance": {"id": "task392-31ac77be6fce47099eb09d5cc0ff60d6", "input": "the man went away for the weekend because he wanted to relax", "output": ["plausible"]}, "Prediction": "plausible"} +{"Task": "task392_inverse_causal_relationship", "Definition": ["In this task, you will be given two sentences separated by \"because\". You should decide whether the first sentence can be the result of the second sentence. 
If you can see a possible cause and effect relationship, answer with \"plausible\", otherwise answer with \"not plausible\"."], "Instance": {"id": "task392-f5877013ceed4594839d987b0bb011d2", "input": "the benefactor requested to stay anonymous because he didn't want to lose his job", "output": ["plausible"]}, "Prediction": "not plausible"} +{"Task": "task392_inverse_causal_relationship", "Definition": ["In this task, you will be given two sentences separated by \"because\". You should decide whether the first sentence can be the result of the second sentence. If you can see a possible cause and effect relationship, answer with \"plausible\", otherwise answer with \"not plausible\"."], "Instance": {"id": "task392-42c55f2f8b9f4ac8acc61f1e96d9b1d1", "input": "my flight got delayed because I didn't have a driver's license", "output": ["not plausible"]}, "Prediction": "not plausible"} +{"Task": "task392_inverse_causal_relationship", "Definition": ["In this task, you will be given two sentences separated by \"because\". You should decide whether the first sentence can be the result of the second sentence. If you can see a possible cause and effect relationship, answer with \"plausible\", otherwise answer with \"not plausible\"."], "Instance": {"id": "task392-48e0c21d0cd54365b7467d14f1926d43", "input": "the trip took a long time because the weather was so bad", "output": ["plausible"]}, "Prediction": "plausible"} +{"Task": "task1562_zest_text_modification", "Definition": ["Paraphrase the given questions to have different wording. Your paraphrased questions should have the same answer as the original question. Try to change the sentence as much as possible using synonyms and/or rearranging the structure of the sentence. The questions are in three domains: presidents, national parks, and dogs. Each question has a keyword indicating its domain. 
Keywords are \"this national park\", \"this dog breed\", and \"this president\", which will be replaced with the name of an actual president, a national park, or a breed of dog. Hence, in paraphrasing, this keyword should also be used the same way. Do not write questions that compare or involve multiple domains. Do not write open-ended or subjective questions (e.g., questions that can be answered differently by different people.) Make your questions specific and concrete. Your question should have the same type of answer as the original question(e.g., if the question is extractive, the paraphrased question should be extractive as well.)"], "Instance": {"id": "task1562-7bcf62c5e1a7428f8918c0f04e26b6b9", "input": "Do any endangered animals live inside this national park?", "output": ["Are there endangered species at this national park?"]}, "Prediction": "Are any endangered animals living in this national park"} +{"Task": "task1562_zest_text_modification", "Definition": ["Paraphrase the given questions to have different wording. Your paraphrased questions should have the same answer as the original question. Try to change the sentence as much as possible using synonyms and/or rearranging the structure of the sentence. The questions are in three domains: presidents, national parks, and dogs. Each question has a keyword indicating its domain. Keywords are \"this national park\", \"this dog breed\", and \"this president\", which will be replaced with the name of an actual president, a national park, or a breed of dog. Hence, in paraphrasing, this keyword should also be used the same way. Do not write questions that compare or involve multiple domains. Do not write open-ended or subjective questions (e.g., questions that can be answered differently by different people.) Make your questions specific and concrete. 
Your question should have the same type of answer as the original question(e.g., if the question is extractive, the paraphrased question should be extractive as well.)"], "Instance": {"id": "task1562-9e378fb1c6874f769baf4600db8a341f", "input": "Is this president usually ranked as one of the better half of all presidents?", "output": ["Does this president usually get ranked in the top half of all presidents?"]}, "Prediction": "Are the presidents of this country generally "} +{"Task": "task1562_zest_text_modification", "Definition": ["Paraphrase the given questions to have different wording. Your paraphrased questions should have the same answer as the original question. Try to change the sentence as much as possible using synonyms and/or rearranging the structure of the sentence. The questions are in three domains: presidents, national parks, and dogs. Each question has a keyword indicating its domain. Keywords are \"this national park\", \"this dog breed\", and \"this president\", which will be replaced with the name of an actual president, a national park, or a breed of dog. Hence, in paraphrasing, this keyword should also be used the same way. Do not write questions that compare or involve multiple domains. Do not write open-ended or subjective questions (e.g., questions that can be answered differently by different people.) Make your questions specific and concrete. Your question should have the same type of answer as the original question(e.g., if the question is extractive, the paraphrased question should be extractive as well.)"], "Instance": {"id": "task1562-63f92fb0aade44d8a30f7f2f649ba4a5", "input": "Did this president have a pet while in office?", "output": ["Was there a pet in the white house while this president was there?"]}, "Prediction": "Are there any pets in the office of this"} +{"Task": "task1562_zest_text_modification", "Definition": ["Paraphrase the given questions to have different wording. 
Your paraphrased questions should have the same answer as the original question. Try to change the sentence as much as possible using synonyms and/or rearranging the structure of the sentence. The questions are in three domains: presidents, national parks, and dogs. Each question has a keyword indicating its domain. Keywords are \"this national park\", \"this dog breed\", and \"this president\", which will be replaced with the name of an actual president, a national park, or a breed of dog. Hence, in paraphrasing, this keyword should also be used the same way. Do not write questions that compare or involve multiple domains. Do not write open-ended or subjective questions (e.g., questions that can be answered differently by different people.) Make your questions specific and concrete. Your question should have the same type of answer as the original question(e.g., if the question is extractive, the paraphrased question should be extractive as well.)"], "Instance": {"id": "task1562-c89ece351451415586acfac87d6fbac2", "input": "Was this president ever shot at?", "output": ["Did anyone try to shoot this president?"]}, "Prediction": "Was this president ever shot at?"} +{"Task": "task1562_zest_text_modification", "Definition": ["Paraphrase the given questions to have different wording. Your paraphrased questions should have the same answer as the original question. Try to change the sentence as much as possible using synonyms and/or rearranging the structure of the sentence. The questions are in three domains: presidents, national parks, and dogs. Each question has a keyword indicating its domain. Keywords are \"this national park\", \"this dog breed\", and \"this president\", which will be replaced with the name of an actual president, a national park, or a breed of dog. Hence, in paraphrasing, this keyword should also be used the same way. Do not write questions that compare or involve multiple domains. 
Do not write open-ended or subjective questions (e.g., questions that can be answered differently by different people.) Make your questions specific and concrete. Your question should have the same type of answer as the original question(e.g., if the question is extractive, the paraphrased question should be extractive as well.)"], "Instance": {"id": "task1562-9d2b7a064ae34e45829779236ff12a28", "input": "Are there any natural lakes at this national park?", "output": ["Does this national park have natural lakes?"]}, "Prediction": "Are there any natural lakes at this national park"} +{"Task": "task640_esnli_classification", "Definition": ["Given a premise and a hypothesis, determine if the hypothesis (second sentence) can be concluded from the premise and assigning it a label among entailment, neutral, or contradiction. Entailment means thats the hypothesis makes sense/follows after the premise. Neutral means that the hypothesis does not follow the premise but also does not contradict the premise. Contradiction means that the hypothesis clearly contradicts the premise."], "Instance": {"id": "task640-42c40110c188424cabf5c5d4d323995e", "input": "A young family enjoys feeling ocean waves lap at their feet. A young man and woman take their child to the beach for the first time.", "output": ["neutral"]}, "Prediction": "neutral"} +{"Task": "task640_esnli_classification", "Definition": ["Given a premise and a hypothesis, determine if the hypothesis (second sentence) can be concluded from the premise and assigning it a label among entailment, neutral, or contradiction. Entailment means thats the hypothesis makes sense/follows after the premise. Neutral means that the hypothesis does not follow the premise but also does not contradict the premise. Contradiction means that the hypothesis clearly contradicts the premise."], "Instance": {"id": "task640-b62b70caa746472b8bbfb8126641856a", "input": "A woman with a green headscarf, blue shirt and a very big grin. 
The woman has been shot.", "output": ["contradiction"]}, "Prediction": "neutral"} +{"Task": "task640_esnli_classification", "Definition": ["Given a premise and a hypothesis, determine if the hypothesis (second sentence) can be concluded from the premise and assigning it a label among entailment, neutral, or contradiction. Entailment means thats the hypothesis makes sense/follows after the premise. Neutral means that the hypothesis does not follow the premise but also does not contradict the premise. Contradiction means that the hypothesis clearly contradicts the premise."], "Instance": {"id": "task640-55a15a77e7d340f1b80829400beaa121", "input": "A man standing in front of a building on the phone as two men to the side pain on the side. a busy man stands with bodyguards", "output": ["neutral"]}, "Prediction": "neutral"} +{"Task": "task640_esnli_classification", "Definition": ["Given a premise and a hypothesis, determine if the hypothesis (second sentence) can be concluded from the premise and assigning it a label among entailment, neutral, or contradiction. Entailment means thats the hypothesis makes sense/follows after the premise. Neutral means that the hypothesis does not follow the premise but also does not contradict the premise. Contradiction means that the hypothesis clearly contradicts the premise."], "Instance": {"id": "task640-05d2f029284f4bd7815e05dee9709230", "input": "A land rover is being driven across a river. A vehicle is crossing a river.", "output": ["entailment"]}, "Prediction": "neutral"} +{"Task": "task640_esnli_classification", "Definition": ["Given a premise and a hypothesis, determine if the hypothesis (second sentence) can be concluded from the premise and assigning it a label among entailment, neutral, or contradiction. Entailment means thats the hypothesis makes sense/follows after the premise. Neutral means that the hypothesis does not follow the premise but also does not contradict the premise. 
Contradiction means that the hypothesis clearly contradicts the premise."], "Instance": {"id": "task640-c987b38a07a8406b8d8537b410f696de", "input": "A man in a black shirt is looking at a bike in a workshop. A man is in a black shirt", "output": ["entailment"]}, "Prediction": "neutral"} +{"Task": "task1622_disfl_qa_text_modication", "Definition": ["Convert a disfluent question to a proper question. A disfluent question is a question that has some interruptions in it while framing. A proper question is the correct form of the question without any disfluency."], "Instance": {"id": "task1622-b2434f72637f440baa255af865c5900c", "input": "Which idea or uh like theory states that slow geological processes are still occurring today, and have occurred throughout Earth's history?", "output": ["Which theory states that slow geological processes are still occurring today, and have occurred throughout Earth's history?"]}, "Prediction": "Which idea or theory states that slow geological"} +{"Task": "task1622_disfl_qa_text_modication", "Definition": ["Convert a disfluent question to a proper question. A disfluent question is a question that has some interruptions in it while framing. A proper question is the correct form of the question without any disfluency."], "Instance": {"id": "task1622-248ea6a27e074264a916fe0a9985324f", "input": "Who refused to act until Loudoun approved, no, instead disapproved plans?", "output": ["Who refused to act until Loudoun disapproved plans?"]}, "Prediction": "Who refused to act until Loudoun"} +{"Task": "task1622_disfl_qa_text_modication", "Definition": ["Convert a disfluent question to a proper question. A disfluent question is a question that has some interruptions in it while framing. 
A proper question is the correct form of the question without any disfluency."], "Instance": {"id": "task1622-a3379971a6b3452fb686719108f40b4a", "input": "The movements of the Cydippids combs are controlled by what no no I meant movements of the lobates combs not Cydippids?", "output": ["The movements of the lobates combs are controlled by what?"]}, "Prediction": "The movements of the lobates comb"} +{"Task": "task1622_disfl_qa_text_modication", "Definition": ["Convert a disfluent question to a proper question. A disfluent question is a question that has some interruptions in it while framing. A proper question is the correct form of the question without any disfluency."], "Instance": {"id": "task1622-8ae91718a18541219702b65e41dba8b8", "input": "Ideal strings transmit multiple no what delayed forces?", "output": ["Ideal strings transmit what delayed forces?"]}, "Prediction": "What delayed forces do ideal strings transmit?"} +{"Task": "task1622_disfl_qa_text_modication", "Definition": ["Convert a disfluent question to a proper question. A disfluent question is a question that has some interruptions in it while framing. A proper question is the correct form of the question without any disfluency."], "Instance": {"id": "task1622-6a23cc8e02de4a46bfccec7d13a8cf5e", "input": "Where did French fur trappers travel no wait I want to know where French South Americans settled?", "output": ["Where were French South Americans settled?"]}, "Prediction": "Where did French fur trappers travel?"} +{"Task": "task623_ohsumed_yes_no_answer_generation", "Definition": ["Given an abstract, and a few options, indicate if any of the given options appropriately describe the given abstract. If any of them is a proper keyword, answer 'Yes', otherwise answer 'No'."], "Instance": {"id": "task623-c8aa26b2aa844d5faf1b3f2b3c1365d5", "input": "Abstract: This article describes the circumstances of the diagnosis of three of a group of 12 high-functioning multiple personality disorder patients. 
All had performed major social and professional activities with consistent competence, and all appeared to be neurotic patients suitable for classical psychoanalysis. All 12 had been misdiagnosed on at least three occasions before the correct diagnosis was made. Aspects of the difficulties encountered in assessing these patients are discussed and guidelines for the preservation of their high level of function during the treatment process are offered. \nQuestion Does the following mesh terms have a suitable mesh term for the abstract? 1)Animal 2)Antibodies, Monoclonal 3)Antigen-Antibody Complex", "output": ["No"]}, "Prediction": "No"} +{"Task": "task623_ohsumed_yes_no_answer_generation", "Definition": ["Given an abstract, and a few options, indicate if any of the given options appropriately describe the given abstract. If any of them is a proper keyword, answer 'Yes', otherwise answer 'No'."], "Instance": {"id": "task623-7c3536eb121442eb8c2bcc80b542c113", "input": "Abstract: To assess the prevalence of sports eye injuries in our area a register was kept over the 18 months from October 1982 to March 1984. Squash, association football, badminton, and tennis were the main offenders. The severest injury was from a golf ball, involving a fractured zygoma. There was one retinal dialysis, and one lens dislocation requiring extraction. Spectacles were broken in six cases and a contact lens in one. Glass fragments needed operative removal in one case, but there were no penetrating injuries. The value of eye protection, not worn by any of our patients, is emphasised. \nQuestion Does the following mesh terms have a suitable mesh term for the abstract? 
1)Adolescence 2)Adult 3)Athletic Injuries 4)England 5)Eye Injuries 6)Eye Protective Devices 7)Female 8)Human 9)Male 10)Soccer 11)Tennis.", "output": ["Yes"]}, "Prediction": "Yes"} +{"Task": "task623_ohsumed_yes_no_answer_generation", "Definition": ["Given an abstract, and a few options, indicate if any of the given options appropriately describe the given abstract. If any of them is a proper keyword, answer 'Yes', otherwise answer 'No'."], "Instance": {"id": "task623-e34509e987024d71880846da4914551b", "input": "Abstract: Twenty-two patients with recurrent nose bleeds and varicose vessels on the nasal septum were subjected to transection of the vessels and followed-up for 2 years. One month after the operation, it was not possible to identify the original vessels. In other locations new varicose vessels began to appear after 1 month, and at the end of the follow-up period were observed on 9 out of 10 operated sides. The transection resulted in an immediate decrease in the frequency of nose bleeds and at the end of the control period 12 (55%) of the patients had had less than 4 bleeds during the previous year in spite of the observed rapid formation of new vessels. Mucosal atrophy with symptoms of increased nasal crusting occurred in 3 patients. Transection of varicose vessels in patients with nose bleeds leads to a decrease in the number of bleeds. \nQuestion Does the following mesh terms have a suitable mesh term for the abstract? 1)Adult 2)Epistaxis 3)Female 4)Follow-Up Studies 5)Human 6)Male 7)Nasal Mucosa 8)Recurrence 9)Support, Non-U.S. Gov't 10)Varicose Veins", "output": ["Yes"]}, "Prediction": "Yes"} +{"Task": "task623_ohsumed_yes_no_answer_generation", "Definition": ["Given an abstract, and a few options, indicate if any of the given options appropriately describe the given abstract. 
If any of them is a proper keyword, answer 'Yes', otherwise answer 'No'."], "Instance": {"id": "task623-d6aadcf0dbcd43a586f1bf530101b65c", "input": "Abstract: Using a series of graded destabilizing forces, we have developed a simple quantitative test of the postural response: the Postural Stress Test (PST). Suitable for widespread testing of elderly subjects, the test evaluates the subject's ability to avoid a fall as well as the appropriateness of the response. We have determined that by comparison with young subjects, the elderly controls have compromised although functionally effective balance. On the other hand, the balance response was severely compromised in half of the individuals with a history of falls. Experience with the Postural Stress Test suggests it will predict those elderly individuals with a tendency to fall as well as provide a simple mechanism for studying the balance response in the elderly. \nQuestion Does the following mesh terms have a suitable mesh term for the abstract? 1)Adult 2)Age Factors 3)Education, Premedical", "output": ["No"]}, "Prediction": "Yes"} +{"Task": "task623_ohsumed_yes_no_answer_generation", "Definition": ["Given an abstract, and a few options, indicate if any of the given options appropriately describe the given abstract. If any of them is a proper keyword, answer 'Yes', otherwise answer 'No'."], "Instance": {"id": "task623-3e5d5e84bceb4f659edf857c6b0bef4e", "input": "Abstract: Twelve patients with intractable rheumatoid arthritis were treated with antilymphocyte globulin (ALG), prednisolone and a cytotoxic agent, usually azathioprine, and were followed for 1 year. There was a significant (p less than 0.05) improvement in the mean score for early-morning stiffness, grip strength and the severity of nodules and vasculitis at 6 weeks and 3 months when compared to the initial visit. However, in most patients, this benefit was not sustained despite continued cytotoxic and steroid therapy. 
A rise in the haemoglobin and fall in ESR was maintained throughout the study period. \nQuestion Does the following mesh terms have a suitable mesh term for the abstract? 1)Antilymphocyte Serum 2)Arthritis, Rheumatoid 3)Combined Modality Therapy 4)Hemoglobins 5)Human 6)Immunosuppressive Agents 7)Leukocyte Count.", "output": ["Yes"]}, "Prediction": "Yes"} +{"Task": "task020_mctaco_span_based_question", "Definition": ["The answer will be 'yes' if the provided sentence contains an explicit mention that answers the given question. Otherwise, the answer should be 'no'. Instances where the answer is implied from the sentence using \"instinct\" or \"common sense\" (as opposed to being written explicitly in the sentence) should be labeled as 'no'."], "Instance": {"id": "task020-11a0b598025c4afe84fcfb6db8522d71", "input": "Sentence: Those in the shelter wondered if the aircraft had been shot down pursuant to this authorization. \nQuestion: Are they still in the shelter?", "output": ["Yes."]}, "Prediction": "Yes"} +{"Task": "task020_mctaco_span_based_question", "Definition": ["The answer will be 'yes' if the provided sentence contains an explicit mention that answers the given question. Otherwise, the answer should be 'no'. Instances where the answer is implied from the sentence using \"instinct\" or \"common sense\" (as opposed to being written explicitly in the sentence) should be labeled as 'no'."], "Instance": {"id": "task020-96341c5425204c009e621055ec722ff1", "input": "Sentence: Delighted , Preetam goes in search of her watch and brings it back . \nQuestion: How often does Preetam lose her things and gets them back?", "output": ["No."]}, "Prediction": "Yes"} +{"Task": "task020_mctaco_span_based_question", "Definition": ["The answer will be 'yes' if the provided sentence contains an explicit mention that answers the given question. Otherwise, the answer should be 'no'. 
Instances where the answer is implied from the sentence using \"instinct\" or \"common sense\" (as opposed to being written explicitly in the sentence) should be labeled as 'no'."], "Instance": {"id": "task020-0ddf2915172f4904b60566a79e1e55ac", "input": "Sentence: She told them that she would help take care of the puppy, if she could have one. \nQuestion: How often did she ask for a puppy?", "output": ["No."]}, "Prediction": "Yes"} +{"Task": "task020_mctaco_span_based_question", "Definition": ["The answer will be 'yes' if the provided sentence contains an explicit mention that answers the given question. Otherwise, the answer should be 'no'. Instances where the answer is implied from the sentence using \"instinct\" or \"common sense\" (as opposed to being written explicitly in the sentence) should be labeled as 'no'."], "Instance": {"id": "task020-b97b4e60e1d64a88886251f4f048f373", "input": "Sentence: Radio Televizioni Shqiptar (RTSH) is the public radio and TV broadcaster of Albania, founded by King Zog in 1938. \nQuestion: How long has RTSH been established?", "output": ["Yes."]}, "Prediction": "Yes"} +{"Task": "task020_mctaco_span_based_question", "Definition": ["The answer will be 'yes' if the provided sentence contains an explicit mention that answers the given question. Otherwise, the answer should be 'no'. Instances where the answer is implied from the sentence using \"instinct\" or \"common sense\" (as opposed to being written explicitly in the sentence) should be labeled as 'no'."], "Instance": {"id": "task020-45805df0fe054d5d888a5a14751391f0", "input": "Sentence: The Persian Gulf War destroyed much of the country's medical infrastructure, according to a report by the World Health Organization. 
\nQuestion: How long did it take the Persian Gulf War to destroy the country's medical infrastructure?", "output": ["No."]}, "Prediction": "Yes"} +{"Task": "task642_esnli_classification", "Definition": ["Given Sentence 1 and Sentence 2, indicate your answer as yes when the two sentences clearly agree or clearly disagree with each other. If the relationship cannot be determined, answer with 'no'."], "Instance": {"id": "task642-f1b932e6b989465287341a1d1772cc99", "input": "A brown dog sniffing at the fence of a brown bull. The brown dog was looking intently at the bull on the other side.", "output": ["no"]}, "Prediction": "yes"} +{"Task": "task642_esnli_classification", "Definition": ["Given Sentence 1 and Sentence 2, indicate your answer as yes when the two sentences clearly agree or clearly disagree with each other. If the relationship cannot be determined, answer with 'no'."], "Instance": {"id": "task642-b21062e724e94915abde9d8856088194", "input": "Cityscape with four men on bikes serving as the focal point of picture. Local robbers escape after stealing an otter", "output": ["yes"]}, "Prediction": "no"} +{"Task": "task642_esnli_classification", "Definition": ["Given Sentence 1 and Sentence 2, indicate your answer as yes when the two sentences clearly agree or clearly disagree with each other. If the relationship cannot be determined, answer with 'no'."], "Instance": {"id": "task642-17adb551766a4b60a639ea76ae7a55ce", "input": "The young lady is giving the old man a hug. The lady is giving the old man a hug.", "output": ["yes"]}, "Prediction": "yes"} +{"Task": "task642_esnli_classification", "Definition": ["Given Sentence 1 and Sentence 2, indicate your answer as yes when the two sentences clearly agree or clearly disagree with each other. If the relationship cannot be determined, answer with 'no'."], "Instance": {"id": "task642-11613bf6084d45da9859f099b3b719ac", "input": "A skateboarding youth does a trick on a rail. 
A young boy showing his new skateboard tricks to his friends.", "output": ["no"]}, "Prediction": "yes"} +{"Task": "task642_esnli_classification", "Definition": ["Given Sentence 1 and Sentence 2, indicate your answer as yes when the two sentences clearly agree or clearly disagree with each other. If the relationship cannot be determined, answer with 'no'."], "Instance": {"id": "task642-f95c440d81784a8ca581278fef25139b", "input": "A man and woman are taking a picture of themselves while a woman in a scarf walks by them. A man and woman take selfies of themselves.", "output": ["yes"]}, "Prediction": "yes"} +{"Task": "task102_commongen_sentence_generation", "Definition": ["In this task, you are given concept set (with 3 to 5 concepts) that contain mentions of names of people, places, activities, or things. These concept sets reflect reasonable concept co-occurrences in everyday situations. All concepts given as input are separated by \"#\". Your job is to generate a sentence describing a day-to-day scene using all concepts from a given concept set."], "Instance": {"id": "task102-1b5f269cf9a94e62986c9dc8bd367bbc", "input": "sit#station#train", "output": ["a train sits at the station", "A train that is sitting in a station.", "A red train sitting at an empty station."]}, "Prediction": "Sit on the train."} +{"Task": "task102_commongen_sentence_generation", "Definition": ["In this task, you are given concept set (with 3 to 5 concepts) that contain mentions of names of people, places, activities, or things. These concept sets reflect reasonable concept co-occurrences in everyday situations. All concepts given as input are separated by \"#\". 
Your job is to generate a sentence describing a day-to-day scene using all concepts from a given concept set."], "Instance": {"id": "task102-f9b3232975f9477aa443c8270e8bad45", "input": "farmer#field#plow", "output": ["A farmer plows his field.", "The farmer is plowing his field.", "A couple of farmers plowing a field"]}, "Prediction": "Farmer plowing field."} +{"Task": "task102_commongen_sentence_generation", "Definition": ["In this task, you are given concept set (with 3 to 5 concepts) that contain mentions of names of people, places, activities, or things. These concept sets reflect reasonable concept co-occurrences in everyday situations. All concepts given as input are separated by \"#\". Your job is to generate a sentence describing a day-to-day scene using all concepts from a given concept set."], "Instance": {"id": "task102-36b4e3a3ae66461fae8e2231a5304a37", "input": "family#park#picnic", "output": ["family on the picnic in park", "A family has a picnic at the park.", "young family having picnic in a park"]}, "Prediction": "picnicking with the family in the park."} +{"Task": "task102_commongen_sentence_generation", "Definition": ["In this task, you are given concept set (with 3 to 5 concepts) that contain mentions of names of people, places, activities, or things. These concept sets reflect reasonable concept co-occurrences in everyday situations. All concepts given as input are separated by \"#\". 
Your job is to generate a sentence describing a day-to-day scene using all concepts from a given concept set."], "Instance": {"id": "task102-ea0ac1126ac448aca10e4ea421791b98", "input": "airport#luggage#wait", "output": ["People wait for their luggage at an airport.", "Man waiting next to his luggage at an airport.", "Several men waiting on their luggage at the airport."]}, "Prediction": "Airport luggage waits for passengers."} +{"Task": "task102_commongen_sentence_generation", "Definition": ["In this task, you are given concept set (with 3 to 5 concepts) that contain mentions of names of people, places, activities, or things. These concept sets reflect reasonable concept co-occurrences in everyday situations. All concepts given as input are separated by \"#\". Your job is to generate a sentence describing a day-to-day scene using all concepts from a given concept set."], "Instance": {"id": "task102-793cd08ac45043748060b051f3da2abe", "input": "come#track#train", "output": ["train coming down the track", "A train is coming along on a track.", "a long train in coming down some tracks"]}, "Prediction": "Train comes to a stop."}