id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2101.10964 | Oren Neumann | Oren Neumann, Claudius Gros | Investment vs. reward in a competitive knapsack problem | null | Learning Meets Combinatorial Algorithms at NeurIPS2020 (2020) | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Natural selection drives species to develop brains, with sizes that increase
with the complexity of the tasks to be tackled. Our goal is to investigate the
balance between the metabolic costs of larger brains and the advantage
they provide in solving general and combinatorial problems. Defining advantage
as the performance relative to competitors, a two-player game based on the
knapsack problem is used. Within this framework, two opponents compete over
shared resources, with the goal of collecting more resources than the opponent.
Neural nets of varying sizes are trained using a variant of the AlphaGo Zero
algorithm. A surprisingly simple relation, $N_A/(N_A+N_B)$, is found for the
relative win rate of a net with $N_A$ neurons against one with $N_B$. Success
increases linearly with investments in additional resources when the network
sizes are very different, i.e. when $N_A \ll N_B$, with returns diminishing
when both networks become comparable in size.
| [
{
"created": "Tue, 26 Jan 2021 17:47:56 GMT",
"version": "v1"
}
] | 2021-01-28 | [
[
"Neumann",
"Oren",
""
],
[
"Gros",
"Claudius",
""
]
] |
2101.10977 | Lukas Brunke | Lukas Brunke, Prateek Agrawal, Nikhil George | Evaluating Input Perturbation Methods for Interpreting CNNs and Saliency
Map Comparison | null | ECCV 2020: Computer Vision - ECCV 2020 Workshops pp 120-134 | 10.1007/978-3-030-66415-2_8 | null | cs.LG cs.CV | http://creativecommons.org/licenses/by/4.0/ | Input perturbation methods occlude parts of an input to a function and
measure the change in the function's output. Recently, input perturbation
methods have been applied to generate and evaluate saliency maps from
convolutional neural networks. In practice, neutral baseline images are used
for the occlusion, such that the baseline image's impact on the classification
probability is minimal. However, in this paper we show that arguably neutral
baseline images still impact the generated saliency maps and their evaluation
with input perturbations. We also demonstrate that many choices of
hyperparameters lead to the divergence of saliency maps generated by input
perturbations. We experimentally reveal inconsistencies among a selection of
input perturbation methods and find that they lack robustness for generating
saliency maps and for evaluating saliency maps as saliency metrics.
| [
{
"created": "Tue, 26 Jan 2021 18:11:06 GMT",
"version": "v1"
}
] | 2021-01-27 | [
[
"Brunke",
"Lukas",
""
],
[
"Agrawal",
"Prateek",
""
],
[
"George",
"Nikhil",
""
]
] |
2101.11002 | Evan Debenham | Evan R.M. Debenham and Roberto Solis-Oba (The University of Western
Ontario, Canada) | New Algorithms for Computing Field of Vision over 2D Grids | Presented at the 6th International Conference on Computer Science,
Engineering And Applications (CSEA 2020). 18 pages, 11 figures, 4 tables | 6th International Conference on Computer Science, Engineering And
Applications (CSEA 2020), Volume 10, Number 18, December 2020, pg. 1-18 | null | null | cs.CV | http://creativecommons.org/licenses/by-sa/4.0/ | The aim of this paper is to propose new algorithms for Field of Vision (FOV)
computation which improve on existing work at high resolutions. FOV refers to
the set of locations that are visible from a specific position in a scene of a
computer game.
We summarize existing algorithms for FOV computation, describe their
limitations, and present new algorithms which aim to address these limitations.
We first present an algorithm which makes use of spatial data structures in a
way which is new for FOV calculation. We then present a novel technique which
updates a previously calculated FOV, rather than re-calculating an FOV from
scratch.
We compare our algorithms to existing FOV algorithms and show they provide
substantial improvements to running time. Our algorithms provide the largest
improvement over existing FOV algorithms at large grid sizes, thus enabling
the design of high-resolution FOV-based video games.
| [
{
"created": "Tue, 26 Jan 2021 20:38:35 GMT",
"version": "v1"
}
] | 2021-01-28 | [
[
"Debenham",
"Evan R. M.",
"",
"The University of Western\n Ontario, Canada"
],
[
"Solis-Oba",
"Roberto",
"",
"The University of Western\n Ontario, Canada"
]
] |
2101.11023 | Taro Sakurai | Taro Sakurai (Chiba University) | On formal concepts of random formal contexts | 7 pages, 2 figures, 1 table | Information Sciences 578 (2021) 615-620 | 10.1016/j.ins.2021.07.065 | null | cs.AI cs.DS math.CO | http://creativecommons.org/licenses/by/4.0/ | In formal concept analysis, it is well-known that the number of formal
concepts can be exponential in the worst case. To analyze the average case, we
introduce a probabilistic model for random formal contexts and prove that the
average number of formal concepts has a superpolynomial asymptotic lower bound.
| [
{
"created": "Tue, 26 Jan 2021 19:00:06 GMT",
"version": "v1"
}
] | 2021-08-02 | [
[
"Sakurai",
"Taro",
"",
"Chiba University"
]
] |
2101.11060 | Xinwei Zhao | Xinwei Zhao and Matthew C. Stamm | Defenses Against Multi-Sticker Physical Domain Attacks on Classifiers | null | This paper is published in European Conference on Computer Vision
2020, pages 202-219, Springer | null | null | cs.CR cs.CV | http://creativecommons.org/licenses/by/4.0/ | Recently, physical domain adversarial attacks have drawn significant
attention from the machine learning community. One important attack proposed by
Eykholt et al. can fool a classifier by placing black and white stickers on an
object such as a road sign. While this attack may pose a significant threat to
visual classifiers, there are currently no defenses designed to protect against
this attack. In this paper, we propose new defenses that can protect against
multi-sticker attacks. We present defensive strategies capable of operating
when the defender has full, partial, and no prior information about the attack.
By conducting extensive experiments, we show that our proposed defenses can
outperform existing defenses against physical attacks when presented with a
multi-sticker attack.
| [
{
"created": "Tue, 26 Jan 2021 19:59:28 GMT",
"version": "v1"
}
] | 2021-01-28 | [
[
"Zhao",
"Xinwei",
""
],
[
"Stamm",
"Matthew C.",
""
]
] |
2101.11081 | Xinwei Zhao | Xinwei Zhao and Matthew C. Stamm | The Effect of Class Definitions on the Transferability of Adversarial
Attacks Against Forensic CNNs | null | Published at Electronic Imaging, Media Watermarking, Security, and
Forensics 2020, pp. 119-1-119-7(7) | null | null | cs.CV cs.CR cs.LG | http://creativecommons.org/licenses/by/4.0/ | In recent years, convolutional neural networks (CNNs) have been widely used
by researchers to perform forensic tasks such as image tampering detection. At
the same time, adversarial attacks have been developed that are capable of
fooling CNN-based classifiers. Understanding the transferability of adversarial
attacks, i.e. an attack's ability to attack a different CNN than the one it was
trained against, has important implications for designing CNNs that are
resistant to attacks. While attacks on object recognition CNNs are believed to
be transferrable, recent work by Barni et al. has shown that attacks on
forensic CNNs have difficulty transferring to other CNN architectures or CNNs
trained using different datasets. In this paper, we demonstrate that
adversarial attacks on forensic CNNs are even less transferrable than
previously thought, even between virtually identical CNN architectures! We show
that several common adversarial attacks against CNNs trained to identify image
manipulation fail to transfer to CNNs whose only difference is in the class
definitions (i.e. the same CNN architectures trained using the same data). We
note that all formulations of class definitions contain the unaltered class.
This has important implications for the future design of forensic CNNs that are
robust to adversarial and anti-forensic attacks.
| [
{
"created": "Tue, 26 Jan 2021 20:59:37 GMT",
"version": "v1"
}
] | 2021-01-28 | [
[
"Zhao",
"Xinwei",
""
],
[
"Stamm",
"Matthew C.",
""
]
] |
2101.11174 | Weiwei Jiang | Weiwei Jiang, Jiayun Luo | Graph Neural Network for Traffic Forecasting: A Survey | null | Expert Systems with Applications, vol. 207, 30 November
2022, 117921 | 10.1016/j.eswa.2022.117921 | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Traffic forecasting is important for the success of intelligent
transportation systems. Deep learning models, including convolutional neural
networks and recurrent neural networks, have been extensively applied in
traffic forecasting problems to model spatial and temporal dependencies. In
recent years, to model the graph structures in transportation systems as well
as contextual information, graph neural networks have been introduced and have
achieved state-of-the-art performance in a series of traffic forecasting
problems. In this survey, we review the rapidly growing body of research using
different graph neural networks, e.g. graph convolutional and graph attention
networks, in various traffic forecasting problems, e.g. road traffic flow and
speed forecasting, passenger flow forecasting in urban rail transit systems,
and demand forecasting in ride-hailing platforms. We also present a
comprehensive list of open data and source resources for each problem and
identify future research directions. To the best of our knowledge, this paper
is the first comprehensive survey that explores the application of graph neural
networks for traffic forecasting problems. We have also created a public GitHub
repository where the latest papers, open data, and source resources will be
updated.
| [
{
"created": "Wed, 27 Jan 2021 02:35:41 GMT",
"version": "v1"
},
{
"created": "Mon, 15 Feb 2021 14:19:27 GMT",
"version": "v2"
},
{
"created": "Tue, 30 Nov 2021 16:27:26 GMT",
"version": "v3"
},
{
"created": "Tue, 22 Feb 2022 05:46:58 GMT",
"version": "v4"
}
] | 2022-07-08 | [
[
"Jiang",
"Weiwei",
""
],
[
"Luo",
"Jiayun",
""
]
] |
2101.11183 | Haipeng Li | Haipeng Li, Shuaicheng Liu, Jue Wang | DeepOIS: Gyroscope-Guided Deep Optical Image Stabilizer Compensation | null | IEEE Transactions on Circuits and Systems for Video Technology (
Volume: 32, Issue: 5, May 2022) | 10.1109/TCSVT.2021.3103281 | 21690602 | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Mobile captured images can be aligned using their gyroscope sensors. Optical
image stabilizer (OIS) eliminates this possibility by adjusting the images
during capture. In this work, we propose a deep network that compensates for
the motions caused by the OIS, such that the gyroscopes can be used for image
alignment on the OIS cameras. To achieve this, first, we record both videos and
gyroscopes with an OIS camera as training data. Then, we convert gyroscope
readings into motion fields. Second, we propose a Fundamental Mixtures motion
model for rolling shutter cameras, where an array of rotations within a frame
are extracted as the ground-truth guidance. Third, we train a convolutional
neural network with gyroscope motions as input to compensate for the OIS
motion. Once finished, the compensation network can be applied for other
scenes, where the image alignment is purely based on gyroscopes with no need
for image contents, delivering strong robustness. Experiments show that our
results are comparable with those of non-OIS cameras, and outperform image-based
alignment results by a relatively large margin. Code and dataset are
available at https://github.com/lhaippp/DeepOIS
| [
{
"created": "Wed, 27 Jan 2021 03:23:46 GMT",
"version": "v1"
},
{
"created": "Tue, 4 Jul 2023 07:30:21 GMT",
"version": "v2"
}
] | 2023-07-06 | [
[
"Li",
"Haipeng",
""
],
[
"Liu",
"Shuaicheng",
""
],
[
"Wang",
"Jue",
""
]
] |
2101.11217 | Tejas Khare | Tejas Atul Khare and Anuradha C. Phadke | Automated Crop Field Surveillance using Computer Vision | 6 Pages, 10 Figures | Proceedings reference - 978-1-7281-9885-9/20/$31.00
©2020 IEEE | 10.1109/DISCOVER50404.2020.9278072 | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Artificial Intelligence is everywhere today. But unfortunately, agriculture
has not received much attention from Artificial Intelligence (AI).
A lack of automation persists in the agriculture industry. For many years,
farmers and crop field owners have faced the problem of trespassing by wild
animals, for which no feasible solution has been provided. Installing a fence or
barrier-like structure is neither feasible nor efficient due to the large areas
covered by the fields. Also, if the landowner can afford to build a wall or
barrier, government policies for building walls are often very irksome. The
paper intends to give a simple intelligible solution to the problem with
Automated Crop Field Surveillance using Computer Vision. The solution will
significantly reduce the cost of crops destroyed annually and completely
automate the security of the field.
| [
{
"created": "Wed, 27 Jan 2021 05:58:28 GMT",
"version": "v1"
}
] | 2021-01-28 | [
[
"Khare",
"Tejas Atul",
""
],
[
"Phadke",
"Anuradha C.",
""
]
] |
2101.11302 | Niels van der Heijden | Niels van der Heijden, Helen Yannakoudakis, Pushkar Mishra, Ekaterina
Shutova | Multilingual and cross-lingual document classification: A meta-learning
approach | 11 pages, 1 figure | Association for Computational Linguistics, Proceedings of the 16th
Conference of the European Chapter of the Association for Computational
Linguistics: Main Volume, 2021, 1966--1976 | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | The great majority of languages in the world are considered under-resourced
for the successful application of deep learning methods. In this work, we
propose a meta-learning approach to document classification in a limited-resource
setting and demonstrate its effectiveness in two different settings: few-shot,
cross-lingual adaptation to previously unseen languages; and multilingual joint
training when limited target-language data is available during training. We
conduct a systematic comparison of several meta-learning methods, investigate
multiple settings in terms of data availability and show that meta-learning
thrives in settings with a heterogeneous task distribution. We propose a
simple, yet effective adjustment to existing meta-learning methods which allows
for better and more stable learning, and set a new state of the art on several
languages while performing on-par on others, using only a small amount of
labeled data.
| [
{
"created": "Wed, 27 Jan 2021 10:22:56 GMT",
"version": "v1"
},
{
"created": "Sat, 24 Apr 2021 10:24:38 GMT",
"version": "v2"
}
] | 2021-04-27 | [
[
"van der Heijden",
"Niels",
""
],
[
"Yannakoudakis",
"Helen",
""
],
[
"Mishra",
"Pushkar",
""
],
[
"Shutova",
"Ekaterina",
""
]
] |
2101.11431 | Nicola Melluso | Silvia Fareri, Nicola Melluso, Filippo Chiarello, Gualtiero Fantoni | SkillNER: Mining and Mapping Soft Skills from any Text | null | Expert Systems With Applications 184 (2021) 115544 | 10.1016/j.eswa.2021.115544 | null | cs.CL cs.IR | http://creativecommons.org/licenses/by/4.0/ | In today's digital world, there is an increasing focus on soft skills. On the
one hand, they facilitate innovation at companies, but on the other, they are
unlikely to be automated soon. Researchers struggle to study soft skills
quantitatively due to the lack of data-driven methods
to retrieve them. This limits the ability of psychologists and HR managers
to understand the relation between humans and digitalisation. This paper
presents SkillNER, a novel data-driven method for automatically extracting soft
skills from text. It is a named entity recognition (NER) system trained with a
support vector machine (SVM) on a corpus of more than 5000 scientific papers.
We developed this system by measuring the performance of our approach against
different training models and validating the results together with a team of
psychologists. Finally, SkillNER was tested in a real-world case study using
the job descriptions of ESCO (European Skills, Competences, Qualifications and
Occupations) as the textual source. The system enabled the detection of communities
of job profiles based on their shared soft skills and communities of soft
skills based on their shared job profiles. This case study demonstrates that
the tool can automatically retrieve soft skills from a large corpus in an
efficient way, proving useful for firms, institutions, and workers. The tool is
open and available online to foster quantitative methods for the study of soft
skills.
| [
{
"created": "Fri, 22 Jan 2021 11:14:05 GMT",
"version": "v1"
},
{
"created": "Mon, 12 Jul 2021 18:12:46 GMT",
"version": "v2"
}
] | 2021-07-14 | [
[
"Fareri",
"Silvia",
""
],
[
"Melluso",
"Nicola",
""
],
[
"Chiarello",
"Filippo",
""
],
[
"Fantoni",
"Gualtiero",
""
]
] |
2101.11435 | Yakup Kutlu | Apdullah Yayik, Yakup Kutlu | Online LDA based brain-computer interface system to aid disabled people | 13 pages, 4 figures, Natural and Engineering Sciences | Natural and Engineering Sciences, 2017 | null | null | cs.HC cs.AI | http://creativecommons.org/licenses/by/4.0/ | This paper aims to develop brain-computer interface system based on
electroencephalography that can aid disabled people in daily life. The system
relies on one of the most effective event-related potential waves, P300, which
can be elicited by the oddball paradigm. The developed application has a basic
interaction tool that enables disabled people to convey their needs to other
people by selecting related objects. These objects pseudo-randomly flash in a
visual interface on the computer screen. The user must focus on the related
object to convey the desired need. The system can convey desired needs
correctly by detecting the P300 wave in the acquired 14-channel EEG signal and
classifying it with a linear discriminant analysis classifier in just 15
seconds. Experiments have been carried out on 19 volunteers to validate the
developed BCI system. As a result, an accuracy rate of 90.83% is achieved in
online performance.
| [
{
"created": "Thu, 21 Jan 2021 08:17:05 GMT",
"version": "v1"
}
] | 2021-01-28 | [
[
"Yayik",
"Apdullah",
""
],
[
"Kutlu",
"Yakup",
""
]
] |
2101.11436 | Yakup Kutlu | Kadir Tohma, Yakup Kutlu | Challenges Encountered in Turkish Natural Language Processing Studies | 8 pages, Natural and Engineering Sciences | Natural and Engineering Sciences, 2020 | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | Natural language processing is a branch of computer science that combines
artificial intelligence with linguistics. It aims to analyze a language element
such as writing or speaking with software and convert it into information.
Considering that each language has its own grammatical rules and vocabulary
diversity, the complexity of the studies in this field is somewhat
understandable. For instance, Turkish is a very interesting language in many
ways. Examples of this are agglutinative word structure, consonant/vowel
harmony, a large number of productive derivational morphemes (practically
infinite vocabulary), derivation and syntactic relations, a complex emphasis on
vocabulary and phonological rules. In this study, the interesting features of
Turkish in terms of natural language processing are discussed. In addition, a
summary of natural language processing techniques, systems, and various
resources developed for Turkish is given.
| [
{
"created": "Thu, 21 Jan 2021 08:30:33 GMT",
"version": "v1"
}
] | 2021-01-28 | [
[
"Tohma",
"Kadir",
""
],
[
"Kutlu",
"Yakup",
""
]
] |
2101.11508 | Olivier Rukundo | Olivier Rukundo | Effects of Image Size on Deep Learning | 22 pages, 23 figures, 5 tables | Electronics 2023, 12(4), 985 | 10.3390/electronics12040985 | null | cs.CV cs.LG eess.IV | http://creativecommons.org/licenses/by/4.0/ | In this work, the best size for late gadolinium enhancement (LGE) magnetic
resonance imaging (MRI) images in the training dataset was determined to
optimize deep learning training outcomes. Non-extra pixel and extra pixel
interpolation algorithms were used to determine the new size of the LGE-MRI
images. A novel strategy was introduced to handle interpolation masks and
remove extra class labels in interpolated ground truth (GT) segmentation masks.
The expectation maximization, weighted intensity, a priori information (EWA)
algorithm was used for quantification of myocardial infarction (MI) in
automatically segmented LGE-MRI images. Arbitrary threshold, comparison of the
sums, and sums of differences are methods used to estimate the relationship
between semi-automatic or manual and fully automated quantification of
MI results. The relationship between semi-automatic and
fully automated quantification of MI results was found to be closer in the case
of bigger LGE MRI images (55.5% closer to manual results) than in the case of
smaller LGE MRI images (22.2% closer to manual results).
| [
{
"created": "Wed, 27 Jan 2021 16:07:48 GMT",
"version": "v1"
},
{
"created": "Mon, 26 Jul 2021 20:25:11 GMT",
"version": "v2"
},
{
"created": "Mon, 23 May 2022 20:16:05 GMT",
"version": "v3"
},
{
"created": "Thu, 28 Jul 2022 19:12:58 GMT",
"version": "v4"
},
{
"created": "Thu, 11 Aug 2022 10:53:58 GMT",
"version": "v5"
},
{
"created": "Sun, 16 Oct 2022 06:03:12 GMT",
"version": "v6"
},
{
"created": "Sun, 12 Feb 2023 09:46:06 GMT",
"version": "v7"
},
{
"created": "Fri, 17 Feb 2023 18:48:41 GMT",
"version": "v8"
}
] | 2023-02-20 | [
[
"Rukundo",
"Olivier",
""
]
] |
2101.11560 | Ece Calikus | Ece Calikus, Slawomir Nowaczyk, Mohamed-Rafik Bouguelia, and Onur
Dikmen | Wisdom of the Contexts: Active Ensemble Learning for Contextual Anomaly
Detection | null | Data Mining and Knowledge Discovery (2022) | 10.1007/s10618-022-00868-7 | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In contextual anomaly detection (CAD), an object is only considered anomalous
within a specific context. Most existing methods for CAD use a single context
based on a set of user-specified contextual features. However, identifying the
right context can be very challenging in practice, especially in datasets with
a large number of attributes. Furthermore, in real-world systems, there might
be multiple anomalies that occur in different contexts and, therefore, require
a combination of several "useful" contexts to unveil them. In this work, we
leverage active learning and ensembles to effectively detect complex contextual
anomalies in situations where the true contextual and behavioral attributes are
unknown. We propose a novel approach, called WisCon (Wisdom of the Contexts),
that automatically creates contexts from the feature set. Our method constructs
an ensemble of multiple contexts, with varying importance scores, based on the
assumption that not all useful contexts are equally so. Experiments show that
WisCon significantly outperforms existing baselines in different categories
(i.e., active classifiers, unsupervised contextual and non-contextual anomaly
detectors, and supervised classifiers) on seven datasets. Furthermore, the
results support our initial hypothesis that there is no single perfect context
that successfully uncovers all kinds of contextual anomalies, and leveraging
the "wisdom" of multiple contexts is necessary.
| [
{
"created": "Wed, 27 Jan 2021 17:34:13 GMT",
"version": "v1"
},
{
"created": "Thu, 15 Apr 2021 23:16:56 GMT",
"version": "v2"
},
{
"created": "Mon, 24 Jan 2022 17:34:32 GMT",
"version": "v3"
},
{
"created": "Tue, 4 Oct 2022 12:50:05 GMT",
"version": "v4"
}
] | 2022-10-05 | [
[
"Calikus",
"Ece",
""
],
[
"Nowaczyk",
"Slawomir",
""
],
[
"Bouguelia",
"Mohamed-Rafik",
""
],
[
"Dikmen",
"Onur",
""
]
] |
2101.11587 | Steven Frank | Steven J. Frank | The Work of Art in an Age of Mechanical Generation | This is the author's final version; the article has been accepted for
publication in Leonardo Journal | Leonardo (2022) 55(4): 378-381 | 10.1162/leon_a_02095 | null | cs.CY cs.AI cs.CV | http://creativecommons.org/licenses/by/4.0/ | Can we define what it means to be "creative," and if so, can our definition
drive artificial intelligence (AI) systems to feats of creativity
indistinguishable from human efforts? This mixed question is considered from
technological and social perspectives. Beginning with an exploration of the
value we attach to authenticity in works of art, the article considers the
ability of AI to detect forgeries of renowned paintings and, in so doing,
somehow reveal the quiddity of a work of art. We conclude by considering
whether evolving technical capability can revise traditional relationships
among art, artist, and the market.
| [
{
"created": "Wed, 27 Jan 2021 18:32:58 GMT",
"version": "v1"
},
{
"created": "Wed, 10 Aug 2022 19:31:02 GMT",
"version": "v2"
}
] | 2022-08-12 | [
[
"Frank",
"Steven J.",
""
]
] |
2101.11717 | Francois Malgouyres | Adrien Gauffriau, Fran\c{c}ois Malgouyres (IMT), M\'elanie Ducoffe | Overestimation learning with guarantees | null | AAAI-21, workshop on safeAI, Feb 2021, Valence (Virtual), Spain | null | null | cs.LG cs.AI cs.NE stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We describe a complete method that learns a neural network which is
guaranteed to overestimate a reference function on a given domain. The neural
network can then be used as a surrogate for the reference function. The method
involves two steps. In the first step, we construct an adaptive set of Majoring
Points. In the second step, we optimize a well-chosen neural network to
overestimate the Majoring Points. In order to extend the guarantee on the
Majoring Points to the whole domain, we necessarily have to make an assumption
on the reference function. In this study, we assume that the reference function
is monotonic. We provide experiments on synthetic and real problems. The
experiments show that the density of the Majoring Points concentrates where the
reference function varies. The learned over-estimations are both guaranteed to
overestimate the reference function and are proven empirically to provide good
approximations of it. Experiments on real data show that the method makes it
possible to use the surrogate function in embedded systems for which an
underestimation is critical, when computing the reference function requires too
many resources.
| [
{
"created": "Tue, 26 Jan 2021 09:06:03 GMT",
"version": "v1"
}
] | 2021-01-29 | [
[
"Gauffriau",
"Adrien",
"",
"IMT"
],
[
"Malgouyres",
"François",
"",
"IMT"
],
[
"Ducoffe",
"Mélanie",
""
]
] |
2101.11844 | Iena Petronella Derks | Iena Petronella Derks and Alta de Waal | A Taxonomy of Explainable Bayesian Networks | null | In: Gerber A. (eds) Artificial Intelligence Research. SACAIR 2021.
Communications in Computer and Information Science, vol 1342. Springer, Cham
(2020) | 10.1007/978-3-030-66151-9_14 | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Artificial Intelligence (AI), and in particular, the explainability thereof,
has gained phenomenal attention over the last few years. Whilst we usually do
not question the decision-making process of these systems in situations where
only the outcome is of interest, we do however pay close attention when these
systems are applied in areas where the decisions directly influence the lives
of humans. It is especially the noisy and uncertain observations close to the
decision boundary, resulting in predictions that cannot necessarily be
explained, that may foster mistrust among end-users. This drew attention to AI
methods for which the outcomes can be explained. Bayesian networks are
probabilistic graphical models that can be used as a tool to manage
uncertainty. The probabilistic framework of a Bayesian network allows for
explainability in the model, reasoning and evidence. The use of these methods
is mostly ad hoc and not as well organised as explainability methods in the
wider AI research field. As such, we introduce a taxonomy of explainability in
Bayesian networks. We extend the existing categorisation of explainability in
the model, reasoning or evidence to include explanation of decisions. The
explanations obtained from the explainability methods are illustrated by means
of a simple medical diagnostic scenario. The taxonomy introduced in this paper
has the potential not only to encourage end-users to efficiently communicate
outcomes obtained, but also support their understanding of how and, more
importantly, why certain predictions were made.
| [
{
"created": "Thu, 28 Jan 2021 07:29:57 GMT",
"version": "v1"
}
] | 2021-01-29 | [
[
"Derks",
"Iena Petronella",
""
],
[
"de Waal",
"Alta",
""
]
] |
2101.11978 | Rodrigo Agerri | Elena Zotova, Rodrigo Agerri, German Rigau | Semi-automatic Generation of Multilingual Datasets for Stance Detection
in Twitter | Stance detection, multilingualism, text categorization, fake news,
deep learning | Expert Systems with Applications, 170 (2021), Elsevier | 10.1016/j.eswa.2020.114547 | null | cs.CL | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Popular social media networks provide the perfect environment to study the
opinions and attitudes expressed by users. While interactions in social media
such as Twitter occur in many natural languages, research on stance detection
(the position or attitude expressed with respect to a specific topic) within
the Natural Language Processing field has largely been done for English.
Although some efforts have recently been made to develop annotated data in
other languages, there is a telling lack of resources to facilitate
multilingual and crosslingual research on stance detection. This is partially
due to the fact that manually annotating a corpus of social media texts is a
difficult, slow and costly process. Furthermore, as stance is a highly domain-
and topic-specific phenomenon, the need for annotated data is especially
demanding. As a result, most of the manually labeled resources are hindered by
their relatively small size and skewed class distribution. This paper presents
a method to obtain multilingual datasets for stance detection in Twitter.
Instead of manually annotating on a per-tweet basis, we leverage user-based
information to semi-automatically label large amounts of tweets. Empirical
monolingual and cross-lingual experimentation and qualitative analysis show
that our method helps to overcome the aforementioned difficulties to build
large, balanced and multilingual labeled corpora. We believe that our method
can be easily adapted to generate labeled social media data for other
Natural Language Processing tasks and domains.
| [
{
"created": "Thu, 28 Jan 2021 13:05:09 GMT",
"version": "v1"
}
] | 2021-01-29 | [
[
"Zotova",
"Elena",
""
],
[
"Agerri",
"Rodrigo",
""
],
[
"Rigau",
"German",
""
]
] |
2101.12047 | Samuel Alexander | Samuel Alexander, Bill Hibbard | Measuring Intelligence and Growth Rate: Variations on Hibbard's
Intelligence Measure | 25 pages | Journal of Artificial General Intelligence 12(1), 2021 | 10.2478/jagi-2021-0001 | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | In 2011, Hibbard suggested an intelligence measure for agents who compete in
an adversarial sequence prediction game. We argue that Hibbard's idea should
actually be considered as two separate ideas: first, that the intelligence of
such agents can be measured based on the growth rates of the runtimes of the
competitors that they defeat; and second, one specific (somewhat arbitrary)
method for measuring said growth rates. Whereas Hibbard's intelligence measure
is based on the latter growth-rate-measuring method, we survey other methods
for measuring function growth rates, and exhibit the resulting Hibbard-like
intelligence measures and taxonomies. Of particular interest, we obtain
intelligence taxonomies based on Big-O and Big-Theta notation systems, which
taxonomies are novel in that they challenge conventional notions of what an
intelligence measure should look like. We discuss how intelligence measurement
of sequence predictors can indirectly serve as intelligence measurement for
agents with Artificial General Intelligence (AGIs).
| [
{
"created": "Mon, 25 Jan 2021 01:54:08 GMT",
"version": "v1"
}
] | 2021-01-29 | [
[
"Alexander",
"Samuel",
""
],
[
"Hibbard",
"Bill",
""
]
] |
2101.12072 | Kashif Rasul | Kashif Rasul, Calvin Seward, Ingmar Schuster, Roland Vollgraf | Autoregressive Denoising Diffusion Models for Multivariate Probabilistic
Time Series Forecasting | null | Proceedings of the 38th International Conference on Machine
Learning, PMLR 139:8857-8868, 2021 | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | In this work, we propose \texttt{TimeGrad}, an autoregressive model for
multivariate probabilistic time series forecasting which samples from the data
distribution at each time step by estimating its gradient. To this end, we use
diffusion probabilistic models, a class of latent variable models closely
connected to score matching and energy-based methods. Our model learns
gradients by optimizing a variational bound on the data likelihood and at
inference time converts white noise into a sample of the distribution of
interest through a Markov chain using Langevin sampling. We demonstrate
experimentally that the proposed autoregressive denoising diffusion model is
the new state-of-the-art multivariate probabilistic forecasting method on
real-world data sets with thousands of correlated dimensions. We hope that this
method is a useful tool for practitioners and lays the foundation for future
research in this area.
| [
{
"created": "Thu, 28 Jan 2021 15:46:10 GMT",
"version": "v1"
},
{
"created": "Tue, 2 Feb 2021 12:32:30 GMT",
"version": "v2"
}
] | 2021-07-09 | [
[
"Rasul",
"Kashif",
""
],
[
"Seward",
"Calvin",
""
],
[
"Schuster",
"Ingmar",
""
],
[
"Vollgraf",
"Roland",
""
]
] |
2101.12102 | Samuel Rivera | Deborah Weeks and Samuel Rivera | Domain Adaptation by Topology Regularization | null | SPIE Defense + Commercial Sensing, 2021 | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep learning has become the leading approach to assisted target recognition.
While these methods typically require large amounts of labeled training data,
domain adaptation (DA) or transfer learning (TL) enables these algorithms to
transfer knowledge from a labelled (source) data set to an unlabelled but
related (target) data set of interest. DA enables networks to overcome the
distribution mismatch between the source and target that leads to poor
generalization in the target domain. DA techniques align these distributions by
minimizing a divergence measurement between source and target, making the
transfer of knowledge from source to target possible. While these algorithms
have advanced significantly in recent years, most do not explicitly leverage
global data manifold structure in aligning the source and target. We propose to
leverage global data structure by applying a topological data analysis (TDA)
technique called persistent homology to TL.
In this paper, we examine the use of persistent homology in a domain
adversarial (DAd) convolutional neural network (CNN) architecture. The
experiments show that aligning persistence alone is insufficient for transfer,
but must be considered along with the lifetimes of the topological
singularities. In addition, we found that longer lifetimes indicate robust
discriminative features and more favorable structure in data. We found that
existing divergence minimization based approaches to DA improve the topological
structure, relative to a baseline without these regularization
techniques. We hope these experiments highlight how topological structure can
be leveraged to boost performance in TL tasks.
| [
{
"created": "Thu, 28 Jan 2021 16:45:41 GMT",
"version": "v1"
}
] | 2021-01-29 | [
[
"Weeks",
"Deborah",
""
],
[
"Rivera",
"Samuel",
""
]
] |
2101.12136 | Ghada Sokar | Ghada Sokar, Decebal Constantin Mocanu, Mykola Pechenizkiy | Self-Attention Meta-Learner for Continual Learning | null | 20th International Conference on Autonomous Agents and Multiagent
Systems (AAMAS 2021) | null | null | cs.LG cs.AI cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Continual learning aims to provide intelligent agents capable of learning
multiple tasks sequentially with neural networks. One of its main challenges,
catastrophic forgetting, is caused by the neural networks' non-optimal ability
to learn in non-stationary distributions. In most settings of the current
approaches, the agent starts from randomly initialized parameters and is
optimized to master the current task regardless of the usefulness of the
learned representation for future tasks. Moreover, each of the future tasks
uses all the previously learned knowledge although parts of this knowledge
might not be helpful for its learning. These cause interference among tasks,
especially when the data of previous tasks is not accessible. In this paper, we
propose a new method, named Self-Attention Meta-Learner (SAM), which learns a
prior knowledge for continual learning that permits learning a sequence of
tasks, while avoiding catastrophic forgetting. SAM incorporates an attention
mechanism that learns to select the particular relevant representation for each
future task. Each task builds a specific representation branch on top of the
selected knowledge, avoiding the interference between tasks. We evaluate the
proposed method on the Split CIFAR-10/100 and Split MNIST benchmarks in the
task agnostic inference. We empirically show that we can achieve a better
performance than several state-of-the-art methods for continual learning by
building on the top of selected representation learned by SAM. We also show the
role of the meta-attention mechanism in boosting informative features
corresponding to the input data and identifying the correct target in the task
agnostic inference. Finally, we demonstrate that popular existing continual
learning methods gain a performance boost when they adopt SAM as a starting
point.
| [
{
"created": "Thu, 28 Jan 2021 17:35:04 GMT",
"version": "v1"
}
] | 2021-01-29 | [
[
"Sokar",
"Ghada",
""
],
[
"Mocanu",
"Decebal Constantin",
""
],
[
"Pechenizkiy",
"Mykola",
""
]
] |
2101.12446 | Matthew Olson | Matthew L. Olson, Roli Khanna, Lawrence Neal, Fuxin Li, Weng-Keen Wong | Counterfactual State Explanations for Reinforcement Learning Agents via
Generative Deep Learning | Full source code available at
https://github.com/mattolson93/counterfactual-state-explanations | Artificial Intelligence, 2021, 103455, ISSN 0004-3702 | 10.1016/j.artint.2021.103455 | null | cs.AI cs.HC cs.LG | http://creativecommons.org/licenses/by/4.0/ | Counterfactual explanations, which deal with "why not?" scenarios, can
provide insightful explanations to an AI agent's behavior. In this work, we
focus on generating counterfactual explanations for deep reinforcement learning
(RL) agents which operate in visual input environments like Atari. We introduce
counterfactual state explanations, a novel example-based approach to
counterfactual explanations based on generative deep learning. Specifically, a
counterfactual state illustrates what minimal change is needed to an Atari game
image such that the agent chooses a different action. We also evaluate the
effectiveness of counterfactual states on human participants who are not
machine learning experts. Our first user study investigates if humans can
discern if the counterfactual state explanations are produced by the actual
game or produced by a generative deep learning approach. Our second user study
investigates if counterfactual state explanations can help non-expert
participants identify a flawed agent; we compare against a baseline approach
based on a nearest neighbor explanation which uses images from the actual game.
Our results indicate that counterfactual state explanations have sufficient
fidelity to the actual game images to enable non-experts to more effectively
identify a flawed RL agent compared to the nearest neighbor baseline and to
having no explanation at all.
| [
{
"created": "Fri, 29 Jan 2021 07:43:41 GMT",
"version": "v1"
}
] | 2021-02-01 | [
[
"Olson",
"Matthew L.",
""
],
[
"Khanna",
"Roli",
""
],
[
"Neal",
"Lawrence",
""
],
[
"Li",
"Fuxin",
""
],
[
"Wong",
"Weng-Keen",
""
]
] |
2101.12463 | Hao Li | Chenghao Chen and Hao Li | Robust Representation Learning with Feedback for Single Image Deraining | null | IEEE/CVF Conf. on Computer Vision and Pattern Recognition (CVPR),
2021, pp.7742-7751 | null | null | eess.IV cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A deraining network can be interpreted as a conditional generator that aims
at removing rain streaks from an image. Most existing image deraining methods
ignore model errors caused by uncertainty that reduces embedding quality.
Unlike existing image deraining methods that embed low-quality features into
the model directly, we replace low-quality features by latent high-quality
features. The spirit of closed-loop feedback in the automatic control field is
borrowed to obtain latent high-quality features. A new method for error
detection and feature compensation is proposed to address model errors.
Extensive experiments on benchmark datasets as well as specific real datasets
demonstrate that the proposed method outperforms recent state-of-the-art
methods. Code is available at: https://github.com/LI-Hao-SJTU/DerainRLNet
| [
{
"created": "Fri, 29 Jan 2021 08:20:50 GMT",
"version": "v1"
},
{
"created": "Wed, 3 Feb 2021 05:58:20 GMT",
"version": "v2"
},
{
"created": "Sun, 20 Jun 2021 09:42:53 GMT",
"version": "v3"
}
] | 2021-06-22 | [
[
"Chen",
"Chenghao",
""
],
[
"Li",
"Hao",
""
]
] |
2102.00322 | Vaneet Aggarwal | Mayank Gupta and Lingjun Chen and Denny Yu and Vaneet Aggarwal | A Supervised Learning Approach for Robust Health Monitoring using Face
Videos | The main part of the paper appeared in DFHS'20: Proceedings of the
2nd ACM Workshop on Device-Free Human Sensing; while the Supplementary did
not appear in the proceedings | Proceedings of the 2nd ACM Workshop on Device-Free Human Sensing
(DFHS 2020) Nov. 2020 pp. 6-10 | 10.1145/3427772.3429392 | null | cs.CV cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Monitoring of cardiovascular activity is highly desired and can enable novel
applications in diagnosing potential cardiovascular diseases and maintaining an
individual's well-being. Currently, such vital signs are measured using
intrusive contact devices such as an electrocardiogram (ECG), chest straps, and
pulse oximeters that require the patient or the health provider to manually
implement. Non-contact, device-free human sensing methods can eliminate the
need for specialized heart and blood pressure monitoring equipment. Non-contact
methods can have additional advantages since they are scalable with any
environment where video can be captured, can be used for continuous
measurements, and can be used on patients with varying levels of dexterity and
independence, from people with physical impairments to infants (e.g., baby
camera). In this paper, we used a non-contact method that only requires face
videos recorded using commercially-available webcams. These videos were
exploited to predict the health attributes like pulse rate and variance in
pulse rate. The proposed approach used facial recognition to detect the face in
each frame of the video using facial landmarks, followed by supervised learning
using deep neural networks to train the machine learning model. The videos
captured subjects performing different physical activities that result in
varying cardiovascular responses. The proposed method did not require training
data from every individual and thus the prediction can be obtained for the new
individuals for which there is no prior data; critical in approach
generalization. The approach was also evaluated on a dataset of people with
different ethnicity. The proposed approach had less than a 4.6\% error in
predicting the pulse rate.
| [
{
"created": "Sat, 30 Jan 2021 22:03:16 GMT",
"version": "v1"
}
] | 2021-02-02 | [
[
"Gupta",
"Mayank",
""
],
[
"Chen",
"Lingjun",
""
],
[
"Yu",
"Denny",
""
],
[
"Aggarwal",
"Vaneet",
""
]
] |
2102.00385 | Guangsheng Bao | Guangsheng Bao and Yue Zhang | Contextualized Rewriting for Text Summarization | null | AAAI 2021 | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Extractive summarization suffers from irrelevance, redundancy and
incoherence. Existing work shows that abstractive rewriting for extractive
summaries can improve the conciseness and readability. These rewriting systems
consider extracted summaries as the only input, which is relatively focused but
can lose important background knowledge. In this paper, we investigate
contextualized rewriting, which ingests the entire original document. We
formalize contextualized rewriting as a seq2seq problem with group alignments,
introducing group tag as a solution to model the alignments, identifying
extracted summaries through content-based addressing. Results show that our
approach significantly outperforms non-contextualized rewriting systems without
requiring reinforcement learning, achieving strong improvements on ROUGE scores
upon multiple extractive summarizers.
| [
{
"created": "Sun, 31 Jan 2021 05:35:57 GMT",
"version": "v1"
},
{
"created": "Mon, 26 Apr 2021 06:29:16 GMT",
"version": "v2"
}
] | 2021-04-27 | [
[
"Bao",
"Guangsheng",
""
],
[
"Zhang",
"Yue",
""
]
] |
2102.00515 | Fatih Uysal | Fatih Uysal, F{\i}rat Hardala\c{c}, Ozan Peker, Tolga Tolunay and Nil
Tokg\"oz | Classification of Shoulder X-Ray Images with Deep Learning Ensemble
Models | This paper is accepted at Applied Sciences, MDPI, 2021, 11(6), 2723.
Section: "Applied Biosciences and Bioengineering". Special Issue: "Advancing
Biomedical Image Retrieval and Classification for Computer Aided Diagnosis" | Applied Sciences, MDPI, 2021, 11(6), 2723. Section: "Applied
Biosciences and Bioengineering". Special Issue: "Advancing Biomedical Image
Retrieval and Classification for Computer Aided Diagnosis" | 10.3390/app11062723 | null | eess.IV cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | Fractures occur in the shoulder area, which has a wider range of motion than
other joints in the body, for various reasons. To diagnose these fractures,
data gathered from X-radiation (X-ray), magnetic resonance imaging (MRI), or
computed tomography (CT) are used. This study aims to help physicians by
classifying shoulder images taken from X-ray devices as fracture / non-fracture
with artificial intelligence. For this purpose, the performances of 26 deep
learning-based pretrained models in the detection of shoulder fractures were
evaluated on the musculoskeletal radiographs (MURA) dataset, and two ensemble
learning models (EL1 and EL2) were developed. The pretrained models used are
ResNet, ResNeXt, DenseNet, VGG, Inception, MobileNet, and their spinal fully
connected (Spinal FC) versions. In the EL1 and EL2 models developed using
pretrained models with the best performance, test accuracy was 0.8455 and 0.8472,
Cohen's kappa was 0.6907 and 0.6942, and the area under the receiver operating
characteristic (ROC) curve (AUC) for the fracture class was 0.8862 and 0.8695.
As a result of 28 different classifications in total, the highest test accuracy
and Cohen's kappa values were obtained in the EL2 model,
and the highest AUC value was obtained in the EL1 model.
| [
{
"created": "Sun, 31 Jan 2021 19:20:04 GMT",
"version": "v1"
},
{
"created": "Mon, 1 Mar 2021 12:09:24 GMT",
"version": "v2"
},
{
"created": "Sat, 20 Mar 2021 18:28:30 GMT",
"version": "v3"
}
] | 2021-03-23 | [
[
"Uysal",
"Fatih",
""
],
[
"Hardalaç",
"Fırat",
""
],
[
"Peker",
"Ozan",
""
],
[
"Tolunay",
"Tolga",
""
],
[
"Tokgöz",
"Nil",
""
]
] |
2102.00760 | Vivien Cabannes | Vivien Cabannes and Alessandro Rudi and Francis Bach | Fast rates in structured prediction | 14 main pages, 3 main figures, 43 pages, 4 figures (with appendix) | Conference on Learning Theory, PMLR 134, 2021 | null | null | stat.ML cs.AI cs.LG math.ST stat.TH | http://creativecommons.org/licenses/by/4.0/ | Discrete supervised learning problems such as classification are often
tackled by introducing a continuous surrogate problem akin to regression.
Bounding the original error, between estimate and solution, by the surrogate
error endows discrete problems with convergence rates already shown for
continuous instances. Yet, current approaches do not leverage the fact that
discrete problems are essentially predicting a discrete output when continuous
problems are predicting a continuous value. In this paper, we tackle this issue
for general structured prediction problems, opening the way to "super fast"
rates, that is, convergence rates for the excess risk faster than $n^{-1}$,
where $n$ is the number of observations, with even exponential rates with the
strongest assumptions. We first illustrate it for predictors based on nearest
neighbors, generalizing rates known for binary classification to any discrete
problem within the framework of structured prediction. We then consider kernel
ridge regression where we improve known rates in $n^{-1/4}$ to arbitrarily fast
rates, depending on a parameter characterizing the hardness of the problem,
thus allowing, under smoothness assumptions, to bypass the curse of
dimensionality.
| [
{
"created": "Mon, 1 Feb 2021 10:50:04 GMT",
"version": "v1"
},
{
"created": "Tue, 8 Jun 2021 13:02:31 GMT",
"version": "v2"
},
{
"created": "Thu, 15 Jul 2021 15:04:41 GMT",
"version": "v3"
}
] | 2021-07-16 | [
[
"Cabannes",
"Vivien",
""
],
[
"Rudi",
"Alessandro",
""
],
[
"Bach",
"Francis",
""
]
] |
2102.00838 | Rafael Angarita | Shufan Jiang (CRESTIC, ISEP), Rafael Angarita (ISEP), Stephane Cormier
(CRESTIC), Francis Rousseaux (CRESTIC) | Fine-tuning BERT-based models for Plant Health Bulletin Classification | null | Technology and Environment Workshop'21, Jan 2021, Montpellier,
France | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the era of digitization, different actors in agriculture produce numerous
data. Such data contains already latent historical knowledge in the domain.
This knowledge enables us to precisely study natural hazards within global or
local aspects, and then improve the risk prevention tasks and augment the
yield, which helps to tackle the challenge of growing population and changing
alimentary habits. In particular, French Plants Health Bulletins (BSV, for its
name in French Bulletin de Sant{\'e} du V{\'e}g{\'e}tal) give information about
the development stages of phytosanitary risks in agricultural production.
However, they are written in natural language, thus, machines and human cannot
exploit them as efficiently as it could be. Natural language processing (NLP)
technologies aim to automatically process and analyze large amounts of natural
language data. Since the 2010s, with the increases in computational power and
parallelization, representation learning and deep learning methods became
widespread in NLP. Recent advancements Bidirectional Encoder Representations
from Transformers (BERT) inspire us to rethink of knowledge representation and
natural language understanding in plant health management domain. The goal in
this work is to propose a BERT-based approach to automatically classify the BSV
to make their data easily indexable. We sampled 200 BSV to finetune the
pretrained BERT language models and classify them as pest or/and disease and we
show preliminary results.
| [
{
"created": "Fri, 29 Jan 2021 08:14:35 GMT",
"version": "v1"
}
] | 2021-02-02 | [
[
"Jiang",
"Shufan",
"",
"CRESTIC, ISEP"
],
[
"Angarita",
"Rafael",
"",
"ISEP"
],
[
"Cormier",
"Stephane",
"",
"CRESTIC"
],
[
"Rousseaux",
"Francis",
"",
"CRESTIC"
]
] |
2102.00841 | Alexander Sagel | Alexander Sagel, Julian W\"ormann, Hao Shen | Dynamic Texture Recognition via Nuclear Distances on Kernelized
Scattering Histogram Spaces | \c{opyright} 2021 IEEE. Personal use of this material is permitted.
Permission from IEEE must be obtained for all other uses, in any current or
future media, including reprinting/republishing this material for advertising
or promotional purposes, creating new collective works, for resale or
redistribution to servers or lists, or reuse of any copyrighted component of
this work in other works | ICASSP 2021 - 2021 IEEE International Conference on Acoustics,
Speech and Signal Processing (ICASSP) | 10.1109/ICASSP39728.2021.9414783 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Distance-based dynamic texture recognition is an important research field in
multimedia processing with applications ranging from retrieval to segmentation
of video data. Based on the conjecture that the most distinctive characteristic
of a dynamic texture is the appearance of its individual frames, this work
proposes to describe dynamic textures as kernelized spaces of frame-wise
feature vectors computed using the Scattering transform. By combining these
spaces with a basis-invariant metric, we get a framework that produces
competitive results for nearest neighbor classification and state-of-the-art
results for nearest class center classification.
| [
{
"created": "Mon, 1 Feb 2021 13:54:24 GMT",
"version": "v1"
}
] | 2021-05-17 | [
[
"Sagel",
"Alexander",
""
],
[
"Wörmann",
"Julian",
""
],
[
"Shen",
"Hao",
""
]
] |
2102.00881 | G\"ul\c{s}en Eryi\u{g}it | G\"ul\c{s}en Eryi\u{g}it, Ali \c{S}enta\c{s}, Johanna Monti | Gamified Crowdsourcing for Idiom Corpora Construction | 25 pages, 8 figures, 6 tables | Natural Language Engineering, Cambridge Press, 2022 | 10.1017/S1351324921000401 | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Learning idiomatic expressions is seen as one of the most challenging stages
in second language learning because of their unpredictable meaning. A similar
situation holds for their identification within natural language processing
applications such as machine translation and parsing. The lack of high-quality
usage samples exacerbates this challenge not only for humans but also for
artificial intelligence systems. This article introduces a gamified
crowdsourcing approach for collecting language learning materials for idiomatic
expressions; a messaging bot is designed as an asynchronous multiplayer game
for native speakers who compete with each other while providing idiomatic and
nonidiomatic usage examples and rating other players' entries. As opposed to
classical crowdprocessing annotation efforts in the field, for the first time
in the literature, a crowdcreating & crowdrating approach is implemented and
tested for idiom corpora construction. The approach is language independent and
evaluated on two languages in comparison to traditional data preparation
techniques in the field. The reaction of the crowd is monitored under different
motivational means (namely, gamification affordances and monetary rewards). The
results reveal that the proposed approach is powerful in collecting the
targeted materials, and although being an explicit crowdsourcing approach, it
is found entertaining and useful by the crowd. The approach has been shown to
have the potential to speed up the construction of idiom corpora for different
natural languages to be used as second language learning material, training
data for supervised idiom identification systems, or samples for lexicographic
studies.
| [
{
"created": "Mon, 1 Feb 2021 14:44:43 GMT",
"version": "v1"
}
] | 2022-01-21 | [
[
"Eryiğit",
"Gülşen",
""
],
[
"Şentaş",
"Ali",
""
],
[
"Monti",
"Johanna",
""
]
] |
2102.00898 | Mohit Sewak | Mohit Sewak and Sanjay K. Sahay and Hemant Rathore | DRLDO: A novel DRL based De-Obfuscation System for Defense against
Metamorphic Malware | null | Defence Science Journal, 71(1), 55-65 | 10.14429/dsj.71.15780 | null | cs.CR cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose a novel mechanism to normalize metamorphic and
obfuscated malware down at the opcode level and hence create an advanced
metamorphic malware de-obfuscation and defense system. We name this system
DRLDO, for Deep Reinforcement Learning based De-Obfuscator. With the inclusion
of the DRLDO as a sub-component, an existing Intrusion Detection System could
be augmented with defensive capabilities against 'zero-day' attacks from
obfuscated and metamorphic variants of existing malware. This gains importance,
not only because there exists no system to date that uses advanced DRL to
intelligently and automatically normalize obfuscation down even to the opcode
level, but also because the DRLDO system does not mandate any changes to the
existing IDS. The DRLDO system does not even mandate the IDS' classifier to be
retrained with any new dataset containing obfuscated samples. Hence DRLDO could
be easily retrofitted into any existing IDS deployment. We designed, developed,
and conducted experiments on the system to evaluate the same against
multiple-simultaneous attacks from obfuscations generated from malware samples
from a standardized dataset that contains multiple generations of malware.
Experimental results prove that DRLDO was able to successfully make the
otherwise un-detectable obfuscated variants of the malware detectable by an
existing pre-trained malware classifier. The detection probability was raised
well above the cut-off mark to 0.6 for the classifier to detect the obfuscated
malware unambiguously. Further, the de-obfuscated variants generated by DRLDO
achieved a very high correlation (of 0.99) with the base malware. This
observation validates that the DRLDO system is actually learning to
de-obfuscate and not exploiting a trivial trick.
| [
{
"created": "Mon, 1 Feb 2021 15:16:18 GMT",
"version": "v1"
}
] | 2021-02-02 | [
[
"Sewak",
"Mohit",
""
],
[
"Sahay",
"Sanjay K.",
""
],
[
"Rathore",
"Hemant",
""
]
] |
2102.00997 | Gorka Azkune | Aitzol Elu, Gorka Azkune, Oier Lopez de Lacalle, Ignacio
Arganda-Carreras, Aitor Soroa, Eneko Agirre | Inferring spatial relations from textual descriptions of images | Accepted in Pattern Recognition | Pattern Recognition, Volume 113, 2021, 107847 | 10.1016/j.patcog.2021.107847 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Generating an image from its textual description requires both a certain
level of language understanding and common sense knowledge about the spatial
relations of the physical entities being described. In this work, we focus on
inferring the spatial relation between entities, a key step in the process of
composing scenes based on text. More specifically, given a caption containing a
mention to a subject and the location and size of the bounding box of that
subject, our goal is to predict the location and size of an object mentioned in
the caption. Previous work did not use the caption text information, but a
manually provided relation holding between the subject and the object. In fact,
the used evaluation datasets contain manually annotated ontological triplets
but no captions, making the exercise unrealistic: a manual step was required;
and systems did not leverage the richer information in captions. Here we
present a system that uses the full caption, and Relations in Captions
(REC-COCO), a dataset derived from MS-COCO which allows to evaluate spatial
relation inference from captions directly. Our experiments show that: (1) it is
possible to infer the size and location of an object with respect to a given
subject directly from the caption; (2) the use of full text allows to place the
object better than using a manually annotated relation. Our work paves the way
for systems that, given a caption, decide which entities need to be depicted
and their respective location and sizes, in order to then generate the final
image.
| [
{
"created": "Mon, 1 Feb 2021 17:21:13 GMT",
"version": "v1"
}
] | 2021-02-03 | [
[
"Elu",
"Aitzol",
""
],
[
"Azkune",
"Gorka",
""
],
[
"de Lacalle",
"Oier Lopez",
""
],
[
"Arganda-Carreras",
"Ignacio",
""
],
[
"Soroa",
"Aitor",
""
],
[
"Agirre",
"Eneko",
""
]
] |
2102.01013 | Valentin Pelloin | Valentin Pelloin, Nathalie Camelin, Antoine Laurent, Renato De Mori,
Antoine Caubri\`ere, Yannick Est\`eve, Sylvain Meignier | End2End Acoustic to Semantic Transduction | Accepted at IEEE ICASSP 2021 | ICASSP 2021 - 2021 IEEE International Conference on Acoustics,
Speech and Signal Processing (ICASSP) | 10.1109/ICASSP39728.2021.9413581 | null | cs.CL cs.SD eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose a novel end-to-end sequence-to-sequence spoken
language understanding model using an attention mechanism. It reliably selects
contextual acoustic features in order to hypothesize semantic contents. An
initial architecture capable of extracting all pronounced words and concepts
from acoustic spans is designed and tested. With a shallow fusion language
model, this system reaches a 13.6 concept error rate (CER) and an 18.5 concept
value error rate (CVER) on the French MEDIA corpus, achieving an absolute 2.8
points reduction compared to the state-of-the-art. Then, an original model is
proposed for hypothesizing concepts and their values. This transduction reaches
a 15.4 CER and a 21.6 CVER without any new type of context.
| [
{
"created": "Mon, 1 Feb 2021 17:42:59 GMT",
"version": "v1"
}
] | 2021-05-20 | [
[
"Pelloin",
"Valentin",
""
],
[
"Camelin",
"Nathalie",
""
],
[
"Laurent",
"Antoine",
""
],
[
"De Mori",
"Renato",
""
],
[
"Caubrière",
"Antoine",
""
],
[
"Estève",
"Yannick",
""
],
[
"Meignier",
"Sylvain",
""
]
] |
2102.01149 | Devorah Kletenik | Lisa Hellerstein, Devorah Kletenik and Srinivasan Parthasarathy | A Tight Bound for Stochastic Submodular Cover | This work extends the result of Srinivasan Parthasarathy in his paper
arXiv:1803.07639 from the problem of Stochastic Set Cover to that of
Stochastic Submodular Cover | Journal of Artificial Intelligence Research 71(2021) 347 - 370 | 10.1613/jair.1.12368 | null | cs.DS cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | We show that the Adaptive Greedy algorithm of Golovin and Krause (2011)
achieves an approximation bound of $(\ln (Q/\eta)+1)$ for Stochastic Submodular
Cover: here $Q$ is the "goal value" and $\eta$ is the smallest non-zero
marginal increase in utility deliverable by an item. (For integer-valued
utility functions, we show a bound of $H(Q)$, where $H(Q)$ is the $Q^{th}$
Harmonic number.) Although this bound was claimed by Golovin and Krause in the
original version of their paper, the proof was later shown to be incorrect by
Nan and Saligrama (2017). The subsequent corrected proof of Golovin and Krause
(2017) gives a quadratic bound of $(\ln(Q/\eta) + 1)^2$. Other previous bounds
for the problem are $56(\ln(Q/\eta) + 1)$, implied by work of Im et al. (2016)
on a related problem, and $k(\ln (Q/\eta)+1)$, due to Deshpande et al. (2016)
and Hellerstein and Kletenik (2018), where $k$ is the number of states. Our
bound generalizes the well-known $(\ln~m + 1)$ approximation bound on the
greedy algorithm for the classical Set Cover problem, where $m$ is the size of
the ground set.
| [
{
"created": "Mon, 1 Feb 2021 20:37:40 GMT",
"version": "v1"
},
{
"created": "Mon, 2 Aug 2021 04:26:17 GMT",
"version": "v2"
}
] | 2021-08-03 | [
[
"Hellerstein",
"Lisa",
""
],
[
"Kletenik",
"Devorah",
""
],
[
"Parthasarathy",
"Srinivasan",
""
]
] |
2102.01260 | Xiong Liu | Xiong Liu, Craig E. Thomas, Christian C. Felder | The impact of external innovation on new drug approvals: A retrospective
analysis | null | International Journal of Pharmaceutics, Volume 563, Pages 273-281,
2019 | 10.1016/j.ijpharm.2018.12.093 | PMID: 30664998 | cs.CL cs.CY q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Pharmaceutical companies are relying more often on external sources of
innovation to boost their discovery research productivity. However, more
in-depth knowledge about how external innovation may translate to successful
product launches is still required in order to better understand how to best
leverage the innovation ecosystem. We analyzed the pre-approval publication
histories for FDA-approved new molecular entities (NMEs) and new biologic
entities (NBEs) launched by 13 top research pharma companies during the last
decade (2006-2016). We found that academic institutions contributed the
majority of pre-approval publications and that publication subject matter is
closely aligned with the strengths of the respective innovator. We found this
to also be true for candidate drugs terminated in Phase 3, but the volume of
literature on these molecules is substantially less than for approved drugs.
This may suggest that approved drugs are often associated with a more robust
dataset provided by a large number of institutes. Collectively, the results of
our analysis support the hypothesis that a collaborative research innovation
environment spanning across academia, industry and government is highly
conducive to successful drug approvals.
| [
{
"created": "Tue, 2 Feb 2021 02:21:34 GMT",
"version": "v1"
}
] | 2021-02-03 | [
[
"Liu",
"Xiong",
""
],
[
"Thomas",
"Craig E.",
""
],
[
"Felder",
"Christian C.",
""
]
] |
2102.01284 | Peng Yao | Peng Yao, Shuwei Shen, Mengjuan Xu, Peng Liu, Fan Zhang, Jinyu Xing,
Pengfei Shao, Benjamin Kaffenberger, and Ronald X. Xu | Single Model Deep Learning on Imbalanced Small Datasets for Skin Lesion
Classification | null | IEEE TRANSACTIONS ON MEDICAL IMAGING, 2021 | 10.1109/TMI.2021.3136682 | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep convolutional neural network (DCNN) models have been widely explored for
skin disease diagnosis and some of them have achieved the diagnostic outcomes
comparable or even superior to those of dermatologists. However, broad
implementation of DCNN in skin disease detection is hindered by small size and
data imbalance of the publicly accessible skin lesion datasets. This paper
proposes a novel single-model based strategy for classification of skin lesions
on small and imbalanced datasets. First, various DCNNs are trained on different
small and imbalanced datasets to verify that the models with moderate
complexity outperform the larger models. Second, regularization DropOut and
DropBlock are added to reduce overfitting and a Modified RandAugment
augmentation strategy is proposed to deal with the defects of sample
underrepresentation in the small dataset. Finally, a novel Multi-Weighted New
Loss (MWNL) function and an end-to-end cumulative learning strategy (CLS) are
introduced to overcome the challenge of uneven sample size and classification
difficulty and to reduce the impact of abnormal samples on training. By
combining Modified RandAugment, MWNL and CLS, our single DCNN model method
achieved the classification accuracy comparable or superior to those of
multiple ensembling models on different dermoscopic image datasets. Our study
shows that this method is able to achieve a high classification performance at
a low cost of computational resources and inference time, potentially suitable
to implement in mobile devices for automated screening of skin lesions and many
other malignancies in low resource settings.
| [
{
"created": "Tue, 2 Feb 2021 03:48:55 GMT",
"version": "v1"
},
{
"created": "Fri, 11 Feb 2022 08:40:10 GMT",
"version": "v2"
}
] | 2022-02-14 | [
[
"Yao",
"Peng",
""
],
[
"Shen",
"Shuwei",
""
],
[
"Xu",
"Mengjuan",
""
],
[
"Liu",
"Peng",
""
],
[
"Zhang",
"Fan",
""
],
[
"Xing",
"Jinyu",
""
],
[
"Shao",
"Pengfei",
""
],
[
"Kaffenberger",
"Benjamin",
""
],
[
"Xu",
"Ronald X.",
""
]
] |
2102.01295 | Heecheol Kim | Heecheol Kim, Yoshiyuki Ohmura, and Yasuo Kuniyoshi | Gaze-based dual resolution deep imitation learning for high-precision
dexterous robot manipulation | 8 pages. The supplementary video can be found at:
https://www.youtube.com/watch?v=ytpChcFqD5g Published in IEEE Robotics and
Automation Letters. Replaced to add video url in the manuscript | IEEE Robotics and Automation Letters, Vol. 6, No. 2, 2021 | 10.1109/LRA.2021.3059619 | null | cs.RO cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | A high-precision manipulation task, such as needle threading, is challenging.
Physiological studies have proposed connecting low-resolution peripheral vision
and fast movement to transport the hand into the vicinity of an object, and
using high-resolution foveated vision to achieve the accurate homing of the
hand to the object. The results of this study demonstrate that a deep imitation
learning based method, inspired by the gaze-based dual resolution visuomotor
control system in humans, can solve the needle threading task. First, we
recorded the gaze movements of a human operator who was teleoperating a robot.
Then, we used only a high-resolution image around the gaze to precisely control
the thread position when it was close to the target. We used a low-resolution
peripheral image to reach the vicinity of the target. The experimental results
obtained in this study demonstrate that the proposed method enables precise
manipulation tasks using a general-purpose robot manipulator and improves
computational efficiency.
| [
{
"created": "Tue, 2 Feb 2021 04:11:09 GMT",
"version": "v1"
},
{
"created": "Wed, 3 Mar 2021 03:50:20 GMT",
"version": "v2"
},
{
"created": "Mon, 26 Feb 2024 10:09:46 GMT",
"version": "v3"
}
] | 2024-02-27 | [
[
"Kim",
"Heecheol",
""
],
[
"Ohmura",
"Yoshiyuki",
""
],
[
"Kuniyoshi",
"Yasuo",
""
]
] |
2102.01301 | Yi-Jun Cao | Yi-Jun Cao, Chuan Lin, and Yong-Jie Li | Learning Crisp Boundaries Using Deep Refinement Network and Adaptive
Weighting Loss | 11 pages, 7 figures | IEEE Transactions on Multimedia, vol. 23, pp. 761-771, 2021 | 10.1109/TED.2020.3041567 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Significant progress has been made in boundary detection with the help of
convolutional neural networks. Recent boundary detection models not only focus
on real object boundary detection but also "crisp" boundaries (precisely
localized along the object's contour). There are two methods to evaluate crisp
boundary performance. One uses more strict tolerance to measure the distance
between the ground truth and the detected contour. The other focuses on
evaluating the contour map without any postprocessing. In this study, we
analyze both methods and conclude that both methods are two aspects of crisp
contour evaluation. Accordingly, we propose a novel network named deep
refinement network (DRNet) that stacks multiple refinement modules to achieve
richer feature representation and a novel loss function, which combines
cross-entropy and dice loss through effective adaptive fusion. Experimental
results demonstrated that we achieve state-of-the-art performance for several
available datasets.
| [
{
"created": "Tue, 2 Feb 2021 04:22:35 GMT",
"version": "v1"
},
{
"created": "Wed, 3 Mar 2021 07:15:10 GMT",
"version": "v2"
}
] | 2021-03-10 | [
[
"Cao",
"Yi-Jun",
""
],
[
"Lin",
"Chuan",
""
],
[
"Li",
"Yong-Jie",
""
]
] |
2102.01380 | Zhong Meng | Zhong Meng, Naoyuki Kanda, Yashesh Gaur, Sarangarajan Parthasarathy,
Eric Sun, Liang Lu, Xie Chen, Jinyu Li, Yifan Gong | Internal Language Model Training for Domain-Adaptive End-to-End Speech
Recognition | 5 pages, ICASSP 2021 | 2021 IEEE International Conference on Acoustics, Speech and Signal
Processing (ICASSP), Toronto, Canada | null | null | eess.AS cs.AI cs.CL cs.LG cs.SD | http://creativecommons.org/licenses/by/4.0/ | The efficacy of external language model (LM) integration with existing
end-to-end (E2E) automatic speech recognition (ASR) systems can be improved
significantly using the internal language model estimation (ILME) method. In
this method, the internal LM score is subtracted from the score obtained by
interpolating the E2E score with the external LM score, during inference. To
improve the ILME-based inference, we propose an internal LM training (ILMT)
method to minimize an additional internal LM loss by updating only the E2E
model components that affect the internal LM estimation. ILMT encourages the
E2E model to form a standalone LM inside its existing components, without
sacrificing ASR accuracy. After ILMT, the more modular E2E model with matched
training and inference criteria enables a more thorough elimination of the
source-domain internal LM, and therefore leads to a more effective integration
of the target-domain external LM. Experimented with 30K-hour trained recurrent
neural network transducer and attention-based encoder-decoder models, ILMT with
ILME-based inference achieves up to 31.5% and 11.4% relative word error rate
reductions from standard E2E training with Shallow Fusion on out-of-domain
LibriSpeech and in-domain Microsoft production test sets, respectively.
| [
{
"created": "Tue, 2 Feb 2021 08:15:02 GMT",
"version": "v1"
},
{
"created": "Thu, 22 Apr 2021 19:16:04 GMT",
"version": "v2"
}
] | 2021-04-26 | [
[
"Meng",
"Zhong",
""
],
[
"Kanda",
"Naoyuki",
""
],
[
"Gaur",
"Yashesh",
""
],
[
"Parthasarathy",
"Sarangarajan",
""
],
[
"Sun",
"Eric",
""
],
[
"Lu",
"Liang",
""
],
[
"Chen",
"Xie",
""
],
[
"Li",
"Jinyu",
""
],
[
"Gong",
"Yifan",
""
]
] |
2102.01405 | Ruben Tolosana | Ruben Tolosana, Juan Carlos Ruiz-Garcia, Ruben Vera-Rodriguez, Jaime
Herreros-Rodriguez, Sergio Romero-Tapiador, Aythami Morales, Julian Fierrez | Child-Computer Interaction with Mobile Devices: Recent Works, New
Dataset, and Age Detection | null | IEEE Transactions on Emerging Topics in Computing, 2022 | 10.1109/TETC.2022.3150836 | null | cs.HC cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | This article provides an overview of recent research in Child-Computer
Interaction with mobile devices and describe our framework ChildCI intended
for: i) overcoming the lack of large-scale publicly available databases in the
area, ii) generating a better understanding of the cognitive and neuromotor
development of children along time, contrary to most previous studies in the
literature focused on a single-session acquisition, and iii) enabling new
applications in e-Learning and e-Health through the acquisition of additional
information such as the school grades and children's disorders, among others.
Our framework includes a new mobile application, specific data acquisition
protocols, and a first release of the ChildCI dataset (ChildCIdb v1), which is
planned to be extended yearly to enable longitudinal studies.
In our framework children interact with a tablet device, using both a pen
stylus and the finger, performing different tasks that require different levels
of neuromotor and cognitive skills. ChildCIdb is the first database in the
literature that comprises more than 400 children from 18 months to 8 years old,
considering therefore the first three development stages of Piaget's
theory. In addition, and as a demonstration of the potential of the ChildCI
framework, we include experimental results for one of the many applications
enabled by ChildCIdb: children age detection based on device interaction.
| [
{
"created": "Tue, 2 Feb 2021 09:51:58 GMT",
"version": "v1"
},
{
"created": "Mon, 21 Feb 2022 08:57:57 GMT",
"version": "v2"
},
{
"created": "Tue, 22 Feb 2022 08:38:02 GMT",
"version": "v3"
}
] | 2022-02-23 | [
[
"Tolosana",
"Ruben",
""
],
[
"Ruiz-Garcia",
"Juan Carlos",
""
],
[
"Vera-Rodriguez",
"Ruben",
""
],
[
"Herreros-Rodriguez",
"Jaime",
""
],
[
"Romero-Tapiador",
"Sergio",
""
],
[
"Morales",
"Aythami",
""
],
[
"Fierrez",
"Julian",
""
]
] |
2102.01460 | Alberto Pretto | Alessandro Saviolo, Matteo Bonotto, Daniele Evangelista, Marco
Imperoli, Jacopo Lazzaro, Emanuele Menegatti and Alberto Pretto | Learning to Segment Human Body Parts with Synthetically Trained Deep
Convolutional Networks | This paper has been published in: Proceedings of the 16th
International Conference on Intelligent Autonomous Systems (IAS 2021) | Proceedings of the 16th International Conference on Intelligent
Autonomous Systems (IAS 2021) | 10.1007/978-3-030-95892-3_52 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents a new framework for human body part segmentation based on
Deep Convolutional Neural Networks trained using only synthetic data. The
proposed approach achieves cutting-edge results without the need of training
the models with real annotated data of human body parts. Our contributions
include a data generation pipeline, that exploits a game engine for the
creation of the synthetic data used for training the network, and a novel
pre-processing module, that combines edge response maps and adaptive histogram
equalization to guide the network to learn the shape of the human body parts
ensuring robustness to changes in the illumination conditions. For selecting
the best candidate architecture, we perform exhaustive tests on manually
annotated images of real human body limbs. We further compare our method
against several high-end commercial segmentation tools on the body parts
segmentation task. The results show that our method outperforms the other
models by a significant margin. Finally, we present an ablation study to
validate our pre-processing module. With this paper, we release an
implementation of the proposed approach along with the acquired datasets.
| [
{
"created": "Tue, 2 Feb 2021 12:26:50 GMT",
"version": "v1"
},
{
"created": "Tue, 9 Nov 2021 15:06:02 GMT",
"version": "v2"
},
{
"created": "Tue, 7 Jun 2022 15:10:20 GMT",
"version": "v3"
}
] | 2022-06-08 | [
[
"Saviolo",
"Alessandro",
""
],
[
"Bonotto",
"Matteo",
""
],
[
"Evangelista",
"Daniele",
""
],
[
"Imperoli",
"Marco",
""
],
[
"Lazzaro",
"Jacopo",
""
],
[
"Menegatti",
"Emanuele",
""
],
[
"Pretto",
"Alberto",
""
]
] |
2102.01486 | Cheng Ma | Cheng Ma, Jiwen Lu, Jie Zhou | Rank-Consistency Deep Hashing for Scalable Multi-Label Image Search | null | IEEE Transactions on Multimedia, 2020 | 10.1109/TMM.2020.3034534 | null | cs.CV cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As hashing becomes an increasingly appealing technique for large-scale image
retrieval, multi-label hashing is also attracting more attention for the
ability to exploit multi-level semantic contents. In this paper, we propose a
novel deep hashing method for scalable multi-label image search. Unlike
existing approaches with conventional objectives such as contrast and triplet
losses, we employ a rank list, rather than pairs or triplets, to provide
sufficient global supervision information for all the samples. Specifically, a
new rank-consistency objective is applied to align the similarity orders from
two spaces, the original space and the hamming space. A powerful loss function
is designed to penalize the samples whose semantic similarity and hamming
distance are mismatched in two spaces. Besides, a multi-label softmax
cross-entropy loss is presented to enhance the discriminative power with a
concise formulation of the derivative function. In order to manipulate the
neighborhood structure of the samples with different labels, we design a
multi-label clustering loss to cluster the hashing vectors of the samples with
the same labels by reducing the distances between the samples and their
multiple corresponding class centers. The state-of-the-art experimental results
achieved on three public multi-label datasets, MIRFLICKR-25K, IAPRTC12 and
NUS-WIDE, demonstrate the effectiveness of the proposed method.
| [
{
"created": "Tue, 2 Feb 2021 13:46:58 GMT",
"version": "v1"
}
] | 2021-02-03 | [
[
"Ma",
"Cheng",
""
],
[
"Lu",
"Jiwen",
""
],
[
"Zhou",
"Jie",
""
]
] |
2102.01498 | Iuliana Marin | Andrei Vasilateanu, Nicolae Goga, Elena-Alice Tanase, Iuliana Marin | Enterprise domain ontology learning from web-based corpus | null | 2015 6th International Conference on Computing, Communication and
Networking Technologies (ICCCNT) | 10.1109/ICCCNT.2015.7395227 | null | cs.AI cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Enterprise knowledge is a key asset in the competing and fast-changing
corporate landscape. The ability to learn, store and distribute implicit and
explicit knowledge can be the difference between success and failure. While
enterprise knowledge management is a well-defined research domain, current
implementations lack orientation towards small and medium enterprises. We
propose a semantic search engine for relevant documents in an enterprise, based
on automatic generated domain ontologies. In this paper we focus on the
component for ontology learning and population.
| [
{
"created": "Fri, 29 Jan 2021 17:08:29 GMT",
"version": "v1"
}
] | 2021-02-16 | [
[
"Vasilateanu",
"Andrei",
""
],
[
"Goga",
"Nicolae",
""
],
[
"Tanase",
"Elena-Alice",
""
],
[
"Marin",
"Iuliana",
""
]
] |
2102.01502 | Satyapriya Krishna | Satyapriya Krishna, Rahul Gupta, Christophe Dupuy | ADePT: Auto-encoder based Differentially Private Text Transformation | null | The 16th conference of the European Chapter of the Association for
Computational Linguistics (EACL), 2021 | null | null | cs.CR cs.AI cs.CL cs.LG | http://creativecommons.org/licenses/by/4.0/ | Privacy is an important concern when building statistical models on data
containing personal information. Differential privacy offers a strong
definition of privacy and can be used to solve several privacy concerns (Dwork
et al., 2014). Multiple solutions have been proposed for the
differentially-private transformation of datasets containing sensitive
information. However, such transformation algorithms offer poor utility in
Natural Language Processing (NLP) tasks due to noise added in the process. In
this paper, we address this issue by providing a utility-preserving
differentially private text transformation algorithm using auto-encoders. Our
algorithm transforms text to offer robustness against attacks and produces
transformations with high semantic quality that perform well on downstream NLP
tasks. We prove the theoretical privacy guarantee of our algorithm and assess
its privacy leakage under Membership Inference Attacks(MIA) (Shokri et al.,
2017) on models trained with transformed data. Our results show that the
proposed model performs better against MIA attacks while offering lower to no
degradation in the utility of the underlying transformation process compared to
existing baselines.
| [
{
"created": "Fri, 29 Jan 2021 23:15:24 GMT",
"version": "v1"
}
] | 2021-02-03 | [
[
"Krishna",
"Satyapriya",
""
],
[
"Gupta",
"Rahul",
""
],
[
"Dupuy",
"Christophe",
""
]
] |
2102.01565 | Juan Pedro Dominguez-Morales | Luis J. Mu\~noz-Molina, Ignacio Cazorla-Pi\~nar, Juan P.
Dominguez-Morales, Fernando Perez-Pe\~na | Real-time detection of uncalibrated sensors using Neural Networks | null | Neural Comput & Applic (2022) | 10.1007/s00521-021-06865-z | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Nowadays, sensors play a major role in several contexts like science,
industry and daily life which benefit of their use. However, the retrieved
information must be reliable. Anomalies in the behavior of sensors can give
rise to critical consequences such as ruining a scientific project or
jeopardizing the quality of the production in industrial production lines. One
of the more subtle kind of anomalies are uncalibrations. An uncalibration is
said to take place when the sensor is not adjusted or standardized by
calibration according to a ground truth value. In this work, an online
machine-learning based uncalibration detector for temperature, humidity and
pressure sensors was developed. This solution integrates an Artificial Neural
Network as main component which learns from the behavior of the sensors under
calibrated conditions. Then, after trained and deployed, it detects
uncalibrations once they take place. The obtained results show that the
proposed solution is able to detect uncalibrations for deviation values of 0.25
degrees, 1% RH and 1.5 Pa, respectively. This solution can be adapted to
different contexts by means of transfer learning, whose application allows for
the addition of new sensors, the deployment into new environments and the
retraining of the model with minimum amounts of data.
| [
{
"created": "Tue, 2 Feb 2021 15:44:39 GMT",
"version": "v1"
}
] | 2022-01-26 | [
[
"Muñoz-Molina",
"Luis J.",
""
],
[
"Cazorla-Piñar",
"Ignacio",
""
],
[
"Dominguez-Morales",
"Juan P.",
""
],
[
"Perez-Peña",
"Fernando",
""
]
] |
2102.01578 | Marco Gaido | Marco Gaido, Mauro Cettolo, Matteo Negri, Marco Turchi | CTC-based Compression for Direct Speech Translation | Accepted at EACL2021 | Proceedings of the 16th Conference of the European Chapter of the
Association for Computational Linguistics: Main Volume (2021), 690-696 | null | null | cs.CL | http://creativecommons.org/licenses/by-sa/4.0/ | Previous studies demonstrated that a dynamic phone-informed compression of
the input audio is beneficial for speech translation (ST). However, they
required a dedicated model for phone recognition and did not test this solution
for direct ST, in which a single model translates the input audio into the
target language without intermediate representations. In this work, we propose
the first method able to perform a dynamic compression of the input in direct ST
models. In particular, we exploit the Connectionist Temporal Classification
(CTC) to compress the input sequence according to its phonetic characteristics.
Our experiments demonstrate that our solution brings a 1.3-1.5 BLEU improvement
over a strong baseline on two language pairs (English-Italian and
English-German), contextually reducing the memory footprint by more than 10%.
| [
{
"created": "Tue, 2 Feb 2021 16:09:19 GMT",
"version": "v1"
}
] | 2021-10-15 | [
[
"Gaido",
"Marco",
""
],
[
"Cettolo",
"Mauro",
""
],
[
"Negri",
"Matteo",
""
],
[
"Turchi",
"Marco",
""
]
] |
2102.01579 | Xiangyu Xu | Xiangyu Xu, Yongrui Ma, Wenxiu Sun, Ming-Hsuan Yang | Exploiting Raw Images for Real-Scene Super-Resolution | A larger version with higher-resolution figures is available at:
https://sites.google.com/view/xiangyuxu. arXiv admin note: text overlap with
arXiv:1905.12156 | IEEE Transactions on Pattern Analysis and Machine Intelligence,
2020 | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Super-resolution is a fundamental problem in computer vision which aims to
overcome the spatial limitation of camera sensors. While significant progress
has been made in single image super-resolution, most algorithms only perform
well on synthetic data, which limits their applications in real scenarios. In
this paper, we study the problem of real-scene single image super-resolution to
bridge the gap between synthetic data and real captured images. We focus on two
issues of existing super-resolution algorithms: lack of realistic training data
and insufficient utilization of visual information obtained from cameras. To
address the first issue, we propose a method to generate more realistic
training data by mimicking the imaging process of digital cameras. For the
second issue, we develop a two-branch convolutional neural network to exploit
the radiance information originally-recorded in raw images. In addition, we
propose a dense channel-attention block for better image restoration as well as
a learning-based guided filter network for effective color correction. Our
model is able to generalize to different cameras without deliberately training
on images from specific camera types. Extensive experiments demonstrate that
the proposed algorithm can recover fine details and clear structures, and
achieve high-quality results for single image super-resolution in real scenes.
| [
{
"created": "Tue, 2 Feb 2021 16:10:15 GMT",
"version": "v1"
}
] | 2021-02-03 | [
[
"Xu",
"Xiangyu",
""
],
[
"Ma",
"Yongrui",
""
],
[
"Sun",
"Wenxiu",
""
],
[
"Yang",
"Ming-Hsuan",
""
]
] |
2102.01582 | Mats Richter | Mats L. Richter, Wolf Byttner, Ulf Krumnack, Ludwig
Shenk | Size Matters | Preprint | Artificial Neural Networks and Machine Learning ICANN 2021 133-144 | 10.1007/978-3-030-86340-1_11 | null | cs.LG cs.AI cs.CV | http://creativecommons.org/licenses/by/4.0/ | Fully convolutional neural networks can process input of arbitrary size by
applying a combination of downsampling and pooling. However, we find that fully
convolutional image classifiers are not agnostic to the input size but rather
show significant differences in performance: presenting the same image at
different scales can result in different outcomes. A closer look reveals that
there is no simple relationship between input size and model performance (no
`bigger is better'), but that each network has a preferred input size, for
which it shows best results. We investigate this phenomenon by applying
different methods, including spectral analysis of layer activations and probe
classifiers, showing that there are characteristic features depending on the
network architecture. From this we find that the size of discriminatory
features is critically influencing how the inference process is distributed
among the layers.
| [
{
"created": "Tue, 2 Feb 2021 16:17:52 GMT",
"version": "v1"
},
{
"created": "Tue, 9 Feb 2021 09:00:14 GMT",
"version": "v2"
}
] | 2021-10-13 | [
[
"Richter",
"Mats L.",
""
],
[
"Byttner",
"Wolf",
""
],
[
"Krumnack",
"Ulf",
""
],
[
"Schallner",
"Ludwig",
""
],
[
"Shenk",
"Justin",
""
]
] |
2102.01645 | Federico Galatolo | Federico A. Galatolo and Mario G.C.A. Cimino and Gigliola Vaglini | Generating images from caption and vice versa via CLIP-Guided Generative
Latent Space Search | null | IMPROVE, ISBN 978-989-758-511-1, pages 166-174 (2021) | 10.5220/0010503701660174 | null | cs.NE cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this research work we present CLIP-GLaSS, a novel zero-shot framework to
generate an image (or a caption) corresponding to a given caption (or image).
CLIP-GLaSS is based on the CLIP neural network, which, given an image and a
descriptive caption, provides similar embeddings. Differently, CLIP-GLaSS takes
a caption (or an image) as an input, and generates the image (or the caption)
whose CLIP embedding is the most similar to the input one. This optimal image
(or caption) is produced via a generative network, after an exploration by a
genetic algorithm. Promising results are shown, based on the experimentation of
the image Generators BigGAN and StyleGAN2, and of the text Generator GPT2.
| [
{
"created": "Tue, 2 Feb 2021 18:00:13 GMT",
"version": "v1"
},
{
"created": "Wed, 3 Feb 2021 12:14:49 GMT",
"version": "v2"
},
{
"created": "Fri, 26 Feb 2021 22:42:49 GMT",
"version": "v3"
},
{
"created": "Fri, 1 Oct 2021 15:45:51 GMT",
"version": "v4"
}
] | 2021-10-04 | [
[
"Galatolo",
"Federico A.",
""
],
[
"Cimino",
"Mario G. C. A.",
""
],
[
"Vaglini",
"Gigliola",
""
]
] |
2102.01767 | Jorge Miguel Ferreira Da Silva | Jorge Miguel Silva, Diogo Pratas, Rui Antunes, S\'ergio Matos, and
Armando J. Pinho | Automatic analysis of artistic paintings using information-based
measures | Website: http://panther.web.ua.pt 24 Pages; 19 pages article; 5 pages
supplementary material | Pattern Recognition (2021) 107864 | 10.1016/j.patcog.2021.107864 | null | cs.CV cs.IT cs.LG math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The artistic community is increasingly relying on automatic computational
analysis for authentication and classification of artistic paintings. In this
paper, we identify hidden patterns and relationships present in artistic
paintings by analysing their complexity, a measure that quantifies the sum of
characteristics of an object. Specifically, we apply Normalized Compression
(NC) and the Block Decomposition Method (BDM) to a dataset of 4,266 paintings
from 91 authors and examine the potential of these information-based measures
as descriptors of artistic paintings. Both measures consistently described the
equivalent types of paintings, authors, and artistic movements. Moreover,
combining the NC with a measure of the roughness of the paintings creates an
efficient stylistic descriptor. Furthermore, by quantifying the local
information of each painting, we define a fingerprint that describes critical
information regarding the artists' style, their artistic influences, and shared
techniques. More fundamentally, this information describes how each author
typically composes and distributes the elements across the canvas and,
therefore, how their work is perceived. Finally, we demonstrate that regional
complexity and two-point height difference correlation function are useful
auxiliary features that improve current methodologies in style and author
classification of artistic paintings. The whole study is supported by an
extensive website (http://panther.web.ua.pt) for fast author characterization
and authentication.
| [
{
"created": "Tue, 2 Feb 2021 21:40:30 GMT",
"version": "v1"
}
] | 2021-02-10 | [
[
"Silva",
"Jorge Miguel",
""
],
[
"Pratas",
"Diogo",
""
],
[
"Antunes",
"Rui",
""
],
[
"Matos",
"Sérgio",
""
],
[
"Pinho",
"Armando J.",
""
]
] |
2102.01780 | Daniel Severin Dr. | Mauro Lucci, Daniel Sever\'in, Paula Zabala | A metaheuristic for crew scheduling in a pickup-and-delivery problem
with time windows | null | Intl. Trans. in Op. Res., vol. 30, 2023, pp. 970-1001 | 10.1111/itor.13096 | null | cs.AI cs.DM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A vehicle routing and crew scheduling problem (VRCSP) consists of
simultaneously planning the routes of a fleet of vehicles and scheduling the
crews, where the vehicle-crew correspondence is not fixed through time. This
allows a greater planning flexibility and a more efficient use of the fleet,
but in counterpart, a high synchronisation is demanded. In this work, we
present a VRCSP where pickup-and-delivery requests with time windows have to be
fulfilled over a given planning horizon by using trucks and drivers. Crews can
be composed of 1 or 2 drivers and any of them can be relieved in a given set of
locations. Moreover, they are allowed to travel among locations with
non-company shuttles, at an additional cost that is minimised. As our problem
considers distinct routes for trucks and drivers, we have an additional
flexibility not contemplated in other previous VRCSP given in the literature
where a crew is handled as an indivisible unit. We tackle this problem with a
two-stage sequential approach: a set of truck routes is computed in the first
stage and a set of driver routes consistent with the truck routes is obtained
in the second one. We design and evaluate the performance of a metaheuristic
based algorithm for the latter stage. Our algorithm is mainly a GRASP with a
perturbation procedure that allows reusing solutions already found in case the
search for new solutions becomes difficult. This procedure together with other
to repair infeasible solutions allow us to find high-quality solutions on
instances of 100 requests spread across 15 cities with a fleet of 12-32 trucks
(depending on the planning horizon) in less than an hour. We also conclude that
the possibility of carrying an additional driver leads to a decrease of the
cost of external shuttles by about 60% on average with respect to individual
crews and, in some cases, eliminates this cost completely.
| [
{
"created": "Tue, 2 Feb 2021 22:14:10 GMT",
"version": "v1"
}
] | 2024-07-11 | [
[
"Lucci",
"Mauro",
""
],
[
"Severín",
"Daniel",
""
],
[
"Zabala",
"Paula",
""
]
] |
2102.01826 | Zhewei Sun | Zhewei Sun, Richard Zemel, Yang Xu | A Computational Framework for Slang Generation | Accepted for publication in TACL 2021. Author's final version | Transactions of the Association for Computational Linguistics
2021; 9 462-478 | 10.1162/tacl_a_00378 | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Slang is a common type of informal language, but its flexible nature and
paucity of data resources present challenges for existing natural language
systems. We take an initial step toward machine generation of slang by
developing a framework that models the speaker's word choice in slang context.
Our framework encodes novel slang meaning by relating the conventional and
slang senses of a word while incorporating syntactic and contextual knowledge
in slang usage. We construct the framework using a combination of probabilistic
inference and neural contrastive learning. We perform rigorous evaluations on
three slang dictionaries and show that our approach not only outperforms
state-of-the-art language models, but also better predicts the historical
emergence of slang word usages from the 1960s to the 2000s. We interpret the proposed
models and find that the contrastively learned semantic space is sensitive to
the similarities between slang and conventional senses of words. Our work
creates opportunities for the automated generation and interpretation of
informal language.
| [
{
"created": "Wed, 3 Feb 2021 01:19:07 GMT",
"version": "v1"
},
{
"created": "Sat, 22 May 2021 04:46:48 GMT",
"version": "v2"
}
] | 2021-05-25 | [
[
"Sun",
"Zhewei",
""
],
[
"Zemel",
"Richard",
""
],
[
"Xu",
"Yang",
""
]
] |
2102.01850 | Ru Li | Ru Li, Chuan Wang, Jue Wang, Guanghui Liu, Heng-Yu Zhang, Bing Zeng,
Shuaicheng Liu | UPHDR-GAN: Generative Adversarial Network for High Dynamic Range Imaging
with Unpaired Data | Accepted by IEEE Transactions on Circuits and Systems for Video
Technology (TCSVT) | IEEE Transactions on Circuits and Systems for Video Technology,
2022 | 10.1109/TCSVT.2022.3190057 | null | eess.IV cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The paper proposes a method to effectively fuse multi-exposure inputs and
generate high-quality high dynamic range (HDR) images with unpaired datasets.
Deep learning-based HDR image generation methods rely heavily on paired
datasets. The ground truth images play a leading role in generating reasonable
HDR images. Datasets without ground truth are hard to be applied to train deep
neural networks. Recently, Generative Adversarial Networks (GAN) have
demonstrated their potential for translating images from a source domain X to a
target domain Y in the absence of paired examples. In this paper, we propose a
GAN-based network for solving such problems while generating enjoyable HDR
results, named UPHDR-GAN. The proposed method relaxes the constraint of the
paired dataset and learns the mapping from the LDR domain to the HDR domain.
Although the pair data are missing, UPHDR-GAN can properly handle the ghosting
artifacts caused by moving objects or misalignments with the help of the
modified GAN loss, the improved discriminator network and the useful
initialization phase. The proposed method preserves the details of important
regions and improves the total image perceptual quality. Qualitative and
quantitative comparisons against the representative methods demonstrate the
superiority of the proposed UPHDR-GAN.
| [
{
"created": "Wed, 3 Feb 2021 03:09:14 GMT",
"version": "v1"
},
{
"created": "Fri, 15 Jul 2022 07:54:33 GMT",
"version": "v2"
}
] | 2022-07-18 | [
[
"Li",
"Ru",
""
],
[
"Wang",
"Chuan",
""
],
[
"Wang",
"Jue",
""
],
[
"Liu",
"Guanghui",
""
],
[
"Zhang",
"Heng-Yu",
""
],
[
"Zeng",
"Bing",
""
],
[
"Liu",
"Shuaicheng",
""
]
] |
2102.01906 | Vinod Kumar Kurmi | Vinod K Kurmi, Badri N. Patro, Venkatesh K. Subramanian, Vinay P.
Namboodiri | Do Not Forget to Attend to Uncertainty while Mitigating Catastrophic
Forgetting | Accepted WACV 2021 | WACV 2021 | null | null | cs.LG cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | One of the major limitations of deep learning models is that they face
catastrophic forgetting in an incremental learning scenario. There have been
several approaches proposed to tackle the problem of incremental learning. Most
of these methods are based on knowledge distillation and do not adequately
utilize the information provided by older task models, such as uncertainty
estimation in predictions. The predictive uncertainty provides the
distributional information can be applied to mitigate catastrophic forgetting
in a deep learning framework. In the proposed work, we consider a Bayesian
formulation to obtain the data and model uncertainties. We also incorporate
a self-attention framework to address the incremental learning problem. We define
distillation losses in terms of aleatoric uncertainty and self-attention. In
the proposed work, we investigate different ablation analyses on these losses.
Furthermore, we are able to obtain better results in terms of accuracy on
standard benchmarks.
| [
{
"created": "Wed, 3 Feb 2021 06:54:52 GMT",
"version": "v1"
}
] | 2021-02-04 | [
[
"Kurmi",
"Vinod K",
""
],
[
"Patro",
"Badri N.",
""
],
[
"Subramanian",
"Venkatesh K.",
""
],
[
"Namboodiri",
"Vinay P.",
""
]
] |
2102.01968 | Claire Theobald | Claire Theobald (LORIA), Fr\'ed\'eric Pennerath (LORIA), Brieuc
Conan-Guez (LORIA), Miguel Couceiro (LORIA), Amedeo Napoli (LORIA) | A Bayesian Neural Network based on Dropout Regulation | null | Workshop on Uncertainty in Machine Learning (WUML) at ECML-PKDD
2020 Conference, Eyke H{\"u}llermeier; S{\'e}bastien Destercke, 2020, N.A.
(online), France | null | null | cs.LG cs.AI cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Bayesian Neural Networks (BNN) have recently emerged in the Deep Learning
world for dealing with uncertainty estimation in classification tasks, and are
used in many application domains such as astrophysics and autonomous driving.
BNNs assume a prior over the weights of a neural network instead of point
estimates, enabling in this way the estimation of both the aleatoric and
epistemic uncertainty of the model prediction. Moreover, a particular type of
BNN, namely MC Dropout, assumes a Bernoulli distribution on the weights by
using Dropout. Several attempts to optimize the dropout rate exist, e.g. using
a variational approach. In this paper, we present a new method called "Dropout
Regulation" (DR), which consists of automatically adjusting the dropout rate
during training using a controller as used in automation. DR allows for a
precise estimation of the uncertainty which is comparable to the
state-of-the-art while remaining simple to implement.
| [
{
"created": "Wed, 3 Feb 2021 09:39:50 GMT",
"version": "v1"
}
] | 2021-02-04 | [
[
"Theobald",
"Claire",
"",
"LORIA"
],
[
"Pennerath",
"Frédéric",
"",
"LORIA"
],
[
"Conan-Guez",
"Brieuc",
"",
"LORIA"
],
[
"Couceiro",
"Miguel",
"",
"LORIA"
],
[
"Napoli",
"Amedeo",
"",
"LORIA"
]
] |
2102.02189 | Young-Suk Lee Dr. | Janaki Sheth and Young-Suk Lee and Ramon Fernandez Astudillo and
Tahira Naseem and Radu Florian and Salim Roukos and Todd Ward | Bootstrapping Multilingual AMR with Contextual Word Alignments | null | EACL 2021 | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | We develop high performance multilingual Abstract Meaning Representation
(AMR) systems by projecting English AMR annotations to other languages with
weak supervision. We achieve this goal by bootstrapping transformer-based
multilingual word embeddings, in particular those from cross-lingual RoBERTa
(XLM-R large). We develop a novel technique for foreign-text-to-English AMR
alignment, using the contextual word alignment between English and foreign
language tokens. This word alignment is weakly supervised and relies on the
contextualized XLM-R word embeddings. We achieve a highly competitive
performance that surpasses the best published results for German, Italian,
Spanish and Chinese.
| [
{
"created": "Wed, 3 Feb 2021 18:35:55 GMT",
"version": "v1"
}
] | 2022-05-09 | [
[
"Sheth",
"Janaki",
""
],
[
"Lee",
"Young-Suk",
""
],
[
"Astudillo",
"Ramon Fernandez",
""
],
[
"Naseem",
"Tahira",
""
],
[
"Florian",
"Radu",
""
],
[
"Roukos",
"Salim",
""
],
[
"Ward",
"Todd",
""
]
] |
2102.02304 | Panayiotis Danassis | Panayiotis Danassis, Zeki Doruk Erden, Boi Faltings | Improved Cooperation by Exploiting a Common Signal | Accepted to the 20th International Conference on Autonomous Agents
and Multiagent Systems (AAMAS 2021) | An extended version of this paper has been published in the
Autonomous Agents and Multi-Agent Systems (2022) | 10.1007/s10458-021-09541-7 | null | cs.MA cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Can artificial agents benefit from human conventions? Human societies manage
to successfully self-organize and resolve the tragedy of the commons in
common-pool resources, in spite of the bleak prediction of non-cooperative game
theory. On top of that, real-world problems are inherently large-scale and of
low observability. One key concept that facilitates human coordination in such
settings is the use of conventions. Inspired by human behavior, we investigate
the learning dynamics and emergence of temporal conventions, focusing on
common-pool resources. Extra emphasis was given to designing a realistic
evaluation setting: (a) environment dynamics are modeled on real-world
fisheries, (b) we assume decentralized learning, where agents can observe only
their own history, and (c) we run large-scale simulations (up to 64 agents).
Uncoupled policies and low observability make cooperation hard to achieve; as
the number of agents grows, the probability of taking a correct gradient
direction decreases exponentially. By introducing an arbitrary common signal
(e.g., date, time, or any periodic set of numbers) as a means to couple the
learning process, we show that temporal conventions can emerge and agents reach
sustainable harvesting strategies. The introduction of the signal consistently
improves the social welfare (by 258% on average, up to 3306%), the range of
environmental parameters where sustainability can be achieved (by 46% on
average, up to 300%), and the convergence speed in low abundance settings (by
13% on average, up to 53%).
| [
{
"created": "Wed, 3 Feb 2021 21:27:53 GMT",
"version": "v1"
}
] | 2022-03-29 | [
[
"Danassis",
"Panayiotis",
""
],
[
"Erden",
"Zeki Doruk",
""
],
[
"Faltings",
"Boi",
""
]
] |
2102.02585 | V\'it Novotn\'y | V\'it Novotn\'y (1) and Eniafe Festus Ayetiran (1) and Dalibor
Ba\v{c}ovsk\'y (1) and D\'avid Lupt\'ak (1) and Michal \v{S}tef\'anik (1) and
Petr Sojka (1) ((1) Faculty of Informatics Masaryk University) | One Size Does Not Fit All: Finding the Optimal Subword Sizes for
FastText Models across Languages | null | RANLP (2021) 1072-1078 | 10.26615/978-954-452-072-4_121 | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Unsupervised representation learning of words from large multilingual corpora
is useful for downstream tasks such as word sense disambiguation, semantic text
similarity, and information retrieval. The representation precision of
log-bilinear fastText models is mostly due to their use of subword information.
In previous work, the optimization of fastText's subword sizes has not been
fully explored, and non-English fastText models were trained using subword
sizes optimized for English and German word analogy tasks. In our work, we find
the optimal subword sizes on the English, German, Czech, Italian, Spanish,
French, Hindi, Turkish, and Russian word analogy tasks. We then propose a
simple n-gram coverage model and we show that it predicts better-than-default
subword sizes on the Spanish, French, Hindi, Turkish, and Russian word analogy
tasks. We show that the optimization of fastText's subword sizes matters and
results in a 14% improvement on the Czech word analogy task. We also show that
expensive parameter optimization can be replaced by a simple n-gram coverage
model that consistently improves the accuracy of fastText models on the word
analogy tasks by up to 3% compared to the default subword sizes, and that it is
within 1% accuracy of the optimal subword sizes.
| [
{
"created": "Thu, 4 Feb 2021 12:59:36 GMT",
"version": "v1"
},
{
"created": "Sat, 21 Aug 2021 12:13:23 GMT",
"version": "v2"
},
{
"created": "Mon, 20 Sep 2021 17:50:51 GMT",
"version": "v3"
}
] | 2021-09-21 | [
[
"Novotný",
"Vít",
"",
"Faculty of Informatics Masaryk University"
],
[
"Ayetiran",
"Eniafe Festus",
"",
"Faculty of Informatics Masaryk University"
],
[
"Bačovský",
"Dalibor",
"",
"Faculty of Informatics Masaryk University"
],
[
"Lupták",
"Dávid",
"",
"Faculty of Informatics Masaryk University"
],
[
"Štefánik",
"Michal",
"",
"Faculty of Informatics Masaryk University"
],
[
"Sojka",
"Petr",
"",
"Faculty of Informatics Masaryk University"
]
] |
2102.02636 | Hendri Murfi | Hendri Murfi, Natasha Rosaline, Nora Hariadi | Deep Autoencoder-based Fuzzy C-Means for Topic Detection | 18 pages | Array 13 (2022) | 10.1016/j.array.2021.100124 | null | cs.IR cs.CL cs.LG | http://creativecommons.org/licenses/by/4.0/ | Topic detection is a process for determining topics from a collection of
textual data. One of the topic detection methods is a clustering-based method,
which assumes that the centroids are topics. The clustering method has the
advantage that it can process data with negative representations. Therefore,
the clustering method allows a combination with a broader representation
learning method. In this paper, we adopt deep learning for topic detection by
using a deep autoencoder and fuzzy c-means called deep autoencoder-based fuzzy
c-means (DFCM). The encoder of the autoencoder performs a lower-dimensional
representation learning. Fuzzy c-means groups the lower-dimensional
representation to identify the centroids. The autoencoder's decoder transforms
back the centroids into the original representation to be interpreted as the
topics. Our simulation shows that DFCM improves the coherence score of
eigenspace-based fuzzy c-means (EFCM) and is comparable to the leading standard
methods, i.e., nonnegative matrix factorization (NMF) or latent Dirichlet
allocation (LDA).
| [
{
"created": "Tue, 2 Feb 2021 07:41:52 GMT",
"version": "v1"
}
] | 2021-12-28 | [
[
"Murfi",
"Hendri",
""
],
[
"Rosaline",
"Natasha",
""
],
[
"Hariadi",
"Nora",
""
]
] |
2102.02711 | Soumick Chatterjee | Chompunuch Sarasaen, Soumick Chatterjee, Mario Breitkopf, Georg Rose,
Andreas N\"urnberger and Oliver Speck | Fine-tuning deep learning model parameters for improved super-resolution
of dynamic MRI with prior-knowledge | null | Artificial Intelligence in Medicine (2021) 102196 | 10.1016/j.artmed.2021.102196 | null | eess.IV cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Dynamic imaging is a beneficial tool for interventions to assess
physiological changes. Nonetheless during dynamic MRI, while achieving a high
temporal resolution, the spatial resolution is compromised. To overcome this
spatio-temporal trade-off, this research presents a super-resolution (SR) MRI
reconstruction with prior knowledge based fine-tuning to maximise spatial
information while reducing the required scan-time for dynamic MRIs. A
U-Net-based network with perceptual loss is trained on a benchmark dataset and
fine-tuned using one subject-specific static high resolution MRI as prior
knowledge to obtain high resolution dynamic images during the inference stage.
3D dynamic data for three subjects were acquired with different parameters to
test the generalisation capabilities of the network. The method was tested for
different levels of in-plane undersampling for dynamic MRI. The reconstructed
dynamic SR results after fine-tuning showed higher similarity with the high
resolution ground-truth, while quantitatively achieving statistically
significant improvement. The average SSIM of the lowest resolution experimented
during this research (6.25~\% of the k-space) before and after fine-tuning were
0.939 $\pm$ 0.008 and 0.957 $\pm$ 0.006 respectively. This could theoretically
result in an acceleration factor of 16, which can potentially be acquired in
less than half a second. The proposed approach shows that the super-resolution
MRI reconstruction with prior-information can alleviate the spatio-temporal
trade-off in dynamic MRI, even for high acceleration factors.
| [
{
"created": "Thu, 4 Feb 2021 16:11:53 GMT",
"version": "v1"
},
{
"created": "Fri, 23 Apr 2021 12:24:51 GMT",
"version": "v2"
},
{
"created": "Sat, 4 Sep 2021 21:25:18 GMT",
"version": "v3"
},
{
"created": "Sat, 23 Oct 2021 10:42:29 GMT",
"version": "v4"
}
] | 2021-10-26 | [
[
"Sarasaen",
"Chompunuch",
""
],
[
"Chatterjee",
"Soumick",
""
],
[
"Breitkopf",
"Mario",
""
],
[
"Rose",
"Georg",
""
],
[
"Nürnberger",
"Andreas",
""
],
[
"Speck",
"Oliver",
""
]
] |
2102.02771 | Jun Wang | Jun Wang, Xiaohan Yu, Yongsheng Gao | Mask Guided Attention For Fine-Grained Patchy Image Classification | Accepted to ICIP2021 | 2021 IEEE International Conference on Image Processing (ICIP),
2021, pp. 1044-1048 | 10.1109/ICIP42928.2021.9506424 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work, we present a novel mask guided attention (MGA) method for
fine-grained patchy image classification. The key challenge of fine-grained
patchy image classification lies in two folds, ultra-fine-grained
inter-category variances among objects and very few data available for
training. This motivates us to consider employing more useful supervision
signal to train a discriminative model within limited training samples.
Specifically, the proposed MGA integrates a pre-trained semantic segmentation
model that produces auxiliary supervision signal, i.e., patchy attention mask,
enabling a discriminative representation learning. The patchy attention mask
drives the classifier to filter out the insignificant parts of images (e.g.,
common features between different categories), which enhances the robustness of
MGA for the fine-grained patchy image classification. We verify the
effectiveness of our method on three publicly available patchy image datasets.
Experimental results demonstrate that our MGA method achieves superior
performance on three datasets compared with the state-of-the-art methods. In
addition, our ablation study shows that MGA improves the accuracy by 2.25% and
2% on the SoyCultivarVein and BtfPIS datasets, indicating its practicality
for solving fine-grained patchy image classification.
| [
{
"created": "Thu, 4 Feb 2021 17:54:50 GMT",
"version": "v1"
},
{
"created": "Wed, 22 Sep 2021 10:09:32 GMT",
"version": "v2"
}
] | 2021-09-23 | [
[
"Wang",
"Jun",
""
],
[
"Yu",
"Xiaohan",
""
],
[
"Gao",
"Yongsheng",
""
]
] |
2102.02789 | Vivien Cabannes | Vivien Cabannes, Francis Bach, Alessandro Rudi | Disambiguation of weak supervision with exponential convergence rates | 22 pages; 6 figures | Proceedings of the 38th International Conference on Machine
Learning, PMLR 139, 2021 | null | null | cs.LG cs.AI stat.ML | http://creativecommons.org/licenses/by/4.0/ | Machine learning approached through supervised learning requires expensive
annotation of data. This motivates weakly supervised learning, where data are
annotated with incomplete yet discriminative information. In this paper, we
focus on partial labelling, an instance of weak supervision where, from a given
input, we are given a set of potential targets. We review a disambiguation
principle to recover full supervision from weak supervision, and propose an
empirical disambiguation algorithm. We prove exponential convergence rates of
our algorithm under classical learnability assumptions, and we illustrate the
usefulness of our method on practical examples.
| [
{
"created": "Thu, 4 Feb 2021 18:14:32 GMT",
"version": "v1"
},
{
"created": "Wed, 26 May 2021 16:14:29 GMT",
"version": "v2"
},
{
"created": "Thu, 15 Jul 2021 14:29:24 GMT",
"version": "v3"
}
] | 2021-07-16 | [
[
"Cabannes",
"Vivien",
""
],
[
"Bach",
"Francis",
""
],
[
"Rudi",
"Alessandro",
""
]
] |
2102.02887 | Shiwei Liu | Shiwei Liu, Lu Yin, Decebal Constantin Mocanu, Mykola Pechenizkiy | Do We Actually Need Dense Over-Parameterization? In-Time
Over-Parameterization in Sparse Training | 16 pages; 10 figures; Published in Proceedings of the 38th
International Conference on Machine Learning. Code can be found
https://github.com/Shiweiliuiiiiiii/In-Time-Over-Parameterization | Proceedings of the 38th International Conference on Machine
Learning (2021) | null | null | cs.LG cs.AI cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we introduce a new perspective on training deep neural
networks capable of state-of-the-art performance without the need for the
expensive over-parameterization by proposing the concept of In-Time
Over-Parameterization (ITOP) in sparse training. By starting from a random
sparse network and continuously exploring sparse connectivities during
training, we can perform an Over-Parameterization in the space-time manifold,
closing the gap in the expressibility between sparse training and dense
training. We further use ITOP to understand the underlying mechanism of Dynamic
Sparse Training (DST) and indicate that the benefits of DST come from its
ability to consider across time all possible parameters when searching for the
optimal sparse connectivity. As long as there are sufficient parameters that
have been reliably explored during training, DST can outperform the dense
neural network by a large margin. We present a series of experiments to support
our conjecture and achieve the state-of-the-art sparse training performance
with ResNet-50 on ImageNet. More impressively, our method achieves dominant
performance over the overparameterization-based sparse methods at extreme
sparsity levels. When trained on CIFAR-100, our method can match the
performance of the dense model even at an extreme sparsity (98%). Code can be
found https://github.com/Shiweiliuiiiiiii/In-Time-Over-Parameterization.
| [
{
"created": "Thu, 4 Feb 2021 20:59:31 GMT",
"version": "v1"
},
{
"created": "Sat, 13 Feb 2021 23:36:57 GMT",
"version": "v2"
},
{
"created": "Tue, 15 Jun 2021 05:01:46 GMT",
"version": "v3"
}
] | 2021-06-16 | [
[
"Liu",
"Shiwei",
""
],
[
"Yin",
"Lu",
""
],
[
"Mocanu",
"Decebal Constantin",
""
],
[
"Pechenizkiy",
"Mykola",
""
]
] |
2102.02917 | Allison Lahnala | Allison Lahnala, Gauri Kambhatla, Jiajun Peng, Matthew Whitehead,
Gillian Minnehan, Eric Guldan, Jonathan K. Kummerfeld, An{\i}l \c{C}amc{\i},
Rada Mihalcea | Chord Embeddings: Analyzing What They Capture and Their Role for Next
Chord Prediction and Artist Attribute Prediction | 16 pages, accepted to EvoMUSART | Computational Intelligence in Music, Sound, Art and Design, 10th
International Conference, EvoMUSART 2021 | null | null | cs.SD cs.AI cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Natural language processing methods have been applied in a variety of music
studies, drawing the connection between music and language. In this paper, we
expand those approaches by investigating \textit{chord embeddings}, which we
apply in two case studies to address two key questions: (1) what musical
information do chord embeddings capture?; and (2) how might musical
applications benefit from them? In our analysis, we show that they capture
similarities between chords that adhere to important relationships described in
music theory. In the first case study, we demonstrate that using chord
embeddings in a next chord prediction task yields predictions that more closely
match those by experienced musicians. In the second case study, we show the
potential benefits of using the representations in tasks related to musical
stylometrics.
| [
{
"created": "Thu, 4 Feb 2021 22:17:17 GMT",
"version": "v1"
}
] | 2021-02-08 | [
[
"Lahnala",
"Allison",
""
],
[
"Kambhatla",
"Gauri",
""
],
[
"Peng",
"Jiajun",
""
],
[
"Whitehead",
"Matthew",
""
],
[
"Minnehan",
"Gillian",
""
],
[
"Guldan",
"Eric",
""
],
[
"Kummerfeld",
"Jonathan K.",
""
],
[
"Çamcı",
"Anıl",
""
],
[
"Mihalcea",
"Rada",
""
]
] |
2102.03022 | Tim Miller | Zhengshang Liu, Yue Yang, Tim Miller, and Peta Masters | Deceptive Reinforcement Learning for Privacy-Preserving Planning | null | Proceedings of the 20th International Conference on Autonomous
Agents and Multiagent Systems (AAMAS 2021) | null | null | cs.LG cs.AI cs.MA | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we study the problem of deceptive reinforcement learning to
preserve the privacy of a reward function. Reinforcement learning is the
problem of finding a behaviour policy based on rewards received from
exploratory behaviour. A key ingredient in reinforcement learning is a reward
function, which determines how much reward (negative or positive) is given and
when. However, in some situations, we may want to keep a reward function
private; that is, to make it difficult for an observer to determine the reward
function used. We define the problem of privacy-preserving reinforcement
learning, and present two models for solving it. These models are based on
dissimulation -- a form of deception that `hides the truth'. We evaluate our
models both computationally and via human behavioural experiments. Results show
that the resulting policies are indeed deceptive, and that participants can
determine the true reward function less reliably than that of an honest agent.
| [
{
"created": "Fri, 5 Feb 2021 06:50:04 GMT",
"version": "v1"
}
] | 2021-02-08 | [
[
"Liu",
"Zhengshang",
""
],
[
"Yang",
"Yue",
""
],
[
"Miller",
"Tim",
""
],
[
"Masters",
"Peta",
""
]
] |
2102.03049 | Shang Ran Huang | Fu-Shun Hsu, Shang-Ran Huang, Chien-Wen Huang, Chao-Jung Huang,
Yuan-Ren Cheng, Chun-Chieh Chen, Jack Hsiao, Chung-Wei Chen, Li-Chin Chen,
Yen-Chun Lai, Bi-Fang Hsu, Nian-Jhen Lin, Wan-Lin Tsai, Yi-Lin Wu, Tzu-Ling
Tseng, Ching-Ting Tseng, Yi-Tsun Chen, Feipei Lai | Benchmarking of eight recurrent neural network variants for breath phase
and adventitious sound detection on a self-developed open-access lung sound
database-HF_Lung_V1 | 48 pages, 8 figures. Accepted by PLoS One | PLoS ONE, 2021, 16(7): e0254134 | 10.1371/journal.pone.0254134 | null | cs.SD cs.AI cs.LG eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A reliable, remote, and continuous real-time respiratory sound monitor with
automated respiratory sound analysis ability is urgently required in many
clinical scenarios, such as monitoring disease progression of coronavirus
disease 2019, to replace conventional auscultation with a handheld stethoscope.
However, a robust computerized respiratory sound analysis algorithm has not yet
been validated in practical applications. In this study, we developed a lung
sound database (HF_Lung_V1) comprising 9,765 audio files of lung sounds
(duration of 15 s each), 34,095 inhalation labels, 18,349 exhalation labels,
13,883 continuous adventitious sound (CAS) labels (comprising 8,457 wheeze
labels, 686 stridor labels, and 4,740 rhonchi labels), and 15,606 discontinuous
adventitious sound labels (all crackles). We conducted benchmark tests for long
short-term memory (LSTM), gated recurrent unit (GRU), bidirectional LSTM
(BiLSTM), bidirectional GRU (BiGRU), convolutional neural network (CNN)-LSTM,
CNN-GRU, CNN-BiLSTM, and CNN-BiGRU models for breath phase detection and
adventitious sound detection. We also conducted a performance comparison
between the LSTM-based and GRU-based models, between unidirectional and
bidirectional models, and between models with and without a CNN. The results
revealed that these models exhibited adequate performance in lung sound
analysis. The GRU-based models outperformed, in terms of F1 scores and areas
under the receiver operating characteristic curves, the LSTM-based models in
most of the defined tasks. Furthermore, all bidirectional models outperformed
their unidirectional counterparts. Finally, the addition of a CNN improved the
accuracy of lung sound analysis, especially in the CAS detection tasks.
| [
{
"created": "Fri, 5 Feb 2021 08:21:28 GMT",
"version": "v1"
},
{
"created": "Wed, 3 Mar 2021 15:22:55 GMT",
"version": "v2"
},
{
"created": "Tue, 12 Jul 2022 09:04:06 GMT",
"version": "v3"
}
] | 2022-07-13 | [
[
"Hsu",
"Fu-Shun",
""
],
[
"Huang",
"Shang-Ran",
""
],
[
"Huang",
"Chien-Wen",
""
],
[
"Huang",
"Chao-Jung",
""
],
[
"Cheng",
"Yuan-Ren",
""
],
[
"Chen",
"Chun-Chieh",
""
],
[
"Hsiao",
"Jack",
""
],
[
"Chen",
"Chung-Wei",
""
],
[
"Chen",
"Li-Chin",
""
],
[
"Lai",
"Yen-Chun",
""
],
[
"Hsu",
"Bi-Fang",
""
],
[
"Lin",
"Nian-Jhen",
""
],
[
"Tsai",
"Wan-Lin",
""
],
[
"Wu",
"Yi-Lin",
""
],
[
"Tseng",
"Tzu-Ling",
""
],
[
"Tseng",
"Ching-Ting",
""
],
[
"Chen",
"Yi-Tsun",
""
],
[
"Lai",
"Feipei",
""
]
] |
2102.03277 | Llu\'is Alemany-Puig | Llu\'is Alemany-Puig, Juan Luis Esteban, Ramon Ferrer-i-Cancho | Minimum projective linearizations of trees in linear time | Here we have corrected a mistake we made in the previous version. In
particular, line 7 of Algorithm 3.2 used to say: "For i = 1 to |C_v| ..."; it
should be "For i = 2 to |C_v| ..." (notice the change from 'i=1' to 'i=2') | Information Processing Letters, 174:106204 (2022) | 10.1016/j.ipl.2021.106204 | null | cs.DS cs.CL cs.DM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Minimum Linear Arrangement problem (MLA) consists of finding a mapping
$\pi$ from vertices of a graph to distinct integers that minimizes
$\sum_{\{u,v\}\in E}|\pi(u) - \pi(v)|$. In that setting, vertices are often
assumed to lie on a horizontal line and edges are drawn as semicircles above
said line. For trees, various algorithms are available to solve the problem in
polynomial time in $n=|V|$. There exist variants of the MLA in which the
arrangements are constrained. Iordanskii, and later Hochberg and Stallmann
(HS), put forward $O(n)$-time algorithms that solve the problem when
arrangements are constrained to be planar (also known as one-page book
embeddings). We also consider linear arrangements of rooted trees that are
constrained to be projective (planar embeddings where the root is not covered
by any edge). Gildea and Temperley (GT) sketched an algorithm for projective
arrangements which they claimed runs in $O(n)$ but did not provide any
justification of its cost. In contrast, Park and Levy claimed that GT's
algorithm runs in $O(n \log d_{max})$ where $d_{max}$ is the maximum degree but
did not provide sufficient detail. Here we correct an error in HS's algorithm
for the planar case, show its relationship with the projective case, and derive
simple algorithms for the projective and planar cases that run without a doubt
in $O(n)$ time.
| [
{
"created": "Fri, 5 Feb 2021 16:35:38 GMT",
"version": "v1"
},
{
"created": "Wed, 17 Feb 2021 14:20:33 GMT",
"version": "v2"
},
{
"created": "Mon, 26 Jul 2021 14:02:41 GMT",
"version": "v3"
},
{
"created": "Wed, 8 Sep 2021 15:19:02 GMT",
"version": "v4"
},
{
"created": "Tue, 3 May 2022 09:21:04 GMT",
"version": "v5"
},
{
"created": "Thu, 12 Sep 2024 14:56:40 GMT",
"version": "v6"
}
] | 2024-09-13 | [
[
"Alemany-Puig",
"Lluís",
""
],
[
"Esteban",
"Juan Luis",
""
],
[
"Ferrer-i-Cancho",
"Ramon",
""
]
] |
2102.03310 | Michal Ciszewski | Micha{\l} Ciszewski, Jakob S\"ohl, Geurt Jongbloed | Improving state estimation through projection post-processing for
activity recognition with application to football | This preprint has not undergone peer review (when applicable) or any
post-submission improvements or corrections. The Version of Record of this
article is published in Statistical Methods & Applications, and is available
online at https://doi.org/10.1007/s10260-023-00696-z | Stat Methods Appl (2023) | 10.1007/s10260-023-00696-z | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | The past decade has seen an increased interest in human activity recognition
based on sensor data. Most often, the sensor data come unannotated, creating
the need for fast labelling methods. For assessing the quality of the
labelling, an appropriate performance measure has to be chosen. Our main
contribution is a novel post-processing method for activity recognition. It
improves the accuracy of the classification methods by correcting for
unrealistic short activities in the estimate. We also propose a new performance
measure, the Locally Time-Shifted Measure (LTS measure), which addresses
uncertainty in the times of state changes. The effectiveness of the
post-processing method is evaluated, using the novel LTS measure, on the basis
of a simulated dataset and a real application on sensor data from football. The
simulation study is also used to discuss the choice of the parameters of the
post-processing method and the LTS measure.
| [
{
"created": "Fri, 5 Feb 2021 17:32:39 GMT",
"version": "v1"
},
{
"created": "Thu, 10 Jun 2021 09:43:01 GMT",
"version": "v2"
},
{
"created": "Fri, 2 Sep 2022 10:27:28 GMT",
"version": "v3"
},
{
"created": "Tue, 2 May 2023 19:56:30 GMT",
"version": "v4"
}
] | 2023-05-04 | [
[
"Ciszewski",
"Michał",
""
],
[
"Söhl",
"Jakob",
""
],
[
"Jongbloed",
"Geurt",
""
]
] |
2102.03380 | Manuel L\'opez-Ib\'a\~nez | Manuel L\'opez-Ib\'a\~nez (University of M\'alaga, Spain), Juergen
Branke (University of Warwick, UK), Lu\'is Paquete (University of Coimbra,
Portugal) | Reproducibility in Evolutionary Computation | null | ACM Transactions on Evolutionary Learning and Optimization, 2021 | 10.1145/3466624 | null | cs.AI cs.NE math.OC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Experimental studies are prevalent in Evolutionary Computation (EC), and
concerns about the reproducibility and replicability of such studies have
increased in recent times, reflecting similar concerns in other scientific
fields. In this article, we discuss, within the context of EC, the different
types of reproducibility and suggest a classification that refines the badge
system of the Association of Computing Machinery (ACM) adopted by ACM
Transactions on Evolutionary Learning and Optimization
(https://dlnext.acm.org/journal/telo). We identify cultural and technical
obstacles to reproducibility in the EC field. Finally, we provide guidelines
and suggest tools that may help to overcome some of these reproducibility
obstacles.
| [
{
"created": "Fri, 5 Feb 2021 19:06:35 GMT",
"version": "v1"
},
{
"created": "Tue, 29 Jun 2021 16:24:25 GMT",
"version": "v2"
}
] | 2022-03-30 | [
[
"López-Ibáñez",
"Manuel",
"",
"University of Málaga, Spain"
],
[
"Branke",
"Juergen",
"",
"University of Warwick, UK"
],
[
"Paquete",
"Luís",
"",
"University of Coimbra,\n Portugal"
]
] |
2102.03382 | Tu Le | Tu Le, Danny Yuxing Huang, Noah Apthorpe, Yuan Tian | SkillBot: Identifying Risky Content for Children in Alexa Skills | null | ACM Transactions on Internet Technology, Volume 22, Issue 3,
August 2022, Article 79, pp 1-31 | 10.1145/3539609 | null | cs.MA cs.CL cs.CR cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many households include children who use voice personal assistants (VPA) such
as Amazon Alexa. Children benefit from the rich functionalities of VPAs and
third-party apps but are also exposed to new risks in the VPA ecosystem. In
this paper, we first investigate "risky" child-directed voice apps that contain
inappropriate content or ask for personal information through voice
interactions. We build SkillBot - a natural language processing (NLP)-based
system to automatically interact with VPA apps and analyze the resulting
conversations. We find 28 risky child-directed apps and maintain a growing
dataset of 31,966 non-overlapping app behaviors collected from 3,434 Alexa
apps. Our findings suggest that although child-directed VPA apps are subject to
stricter policy requirements and more intensive vetting, children remain
vulnerable to inappropriate content and privacy violations. We then conduct a
user study showing that parents are concerned about the identified risky apps.
Many parents do not believe that these apps are available and designed for
families/kids, although these apps are actually published in Amazon's "Kids"
product category. We also find that parents often neglect basic precautions
such as enabling parental controls on Alexa devices. Finally, we identify a
novel risk in the VPA ecosystem: confounding utterances, or voice commands
shared by multiple apps that may cause a user to interact with a different app
than intended. We identify 4,487 confounding utterances, including 581 shared
by child-directed and non-child-directed apps. We find that 27% of these
confounding utterances prioritize invoking a non-child-directed app over a
child-directed app. This indicates that children are at real risk of
accidentally invoking non-child-directed apps due to confounding utterances.
| [
{
"created": "Fri, 5 Feb 2021 19:07:39 GMT",
"version": "v1"
},
{
"created": "Thu, 2 Jun 2022 02:28:15 GMT",
"version": "v2"
}
] | 2022-10-13 | [
[
"Le",
"Tu",
""
],
[
"Huang",
"Danny Yuxing",
""
],
[
"Apthorpe",
"Noah",
""
],
[
"Tian",
"Yuan",
""
]
] |
2102.03419 | Dora Jambor | Dora Jambor, Komal Teru, Joelle Pineau, William L. Hamilton | Exploring the Limits of Few-Shot Link Prediction in Knowledge Graphs | code available at
https://github.com/dorajam/few-shot-link-prediction-paper | European Chapter of the ACL (EACL), 2021 | null | null | cs.AI cs.CL cs.IR cs.LG cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Real-world knowledge graphs are often characterized by low-frequency
relations - a challenge that has prompted an increasing interest in few-shot
link prediction methods. These methods perform link prediction for a set of new
relations, unseen during training, given only a few example facts of each
relation at test time. In this work, we perform a systematic study on a
spectrum of models derived by generalizing the current state of the art for
few-shot link prediction, with the goal of probing the limits of learning in
this few-shot setting. We find that a simple zero-shot baseline - which ignores
any relation-specific information - achieves surprisingly strong performance.
Moreover, experiments on carefully crafted synthetic datasets show that having
only a few examples of a relation fundamentally limits models from using
fine-grained structural information and only allows for exploiting the
coarse-grained positional information of entities. Together, our findings
challenge the implicit assumptions and inductive biases of prior work and
highlight new directions for research in this area.
| [
{
"created": "Fri, 5 Feb 2021 21:04:31 GMT",
"version": "v1"
}
] | 2021-02-09 | [
[
"Jambor",
"Dora",
""
],
[
"Teru",
"Komal",
""
],
[
"Pineau",
"Joelle",
""
],
[
"Hamilton",
"William L.",
""
]
] |
2102.03444 | Dominik Drees | Dominik Drees, Aaron Scherzinger, Ren\'e H\"agerling, Friedemann
Kiefer, Xiaoyi Jiang | Scalable Robust Graph and Feature Extraction for Arbitrary Vessel
Networks in Large Volumetric Datasets | null | BMC Bioinformatics 22 (2021) 346 | 10.1186/s12859-021-04262-w | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent advances in 3D imaging technologies provide novel insights to
researchers and reveal finer and more detail of the examined specimens, especially
in the biomedical domain, but also impose huge challenges regarding scalability
for automated analysis algorithms due to rapidly increasing dataset sizes. In
particular, existing research towards automated vessel network analysis does
not consider memory requirements of proposed algorithms and often generates a
large number of spurious branches for structures consisting of many voxels.
Additionally, very often these algorithms have further restrictions such as the
limitation to tree topologies or relying on the properties of specific image
modalities. We present a scalable pipeline (in terms of computational cost,
required main memory and robustness) that extracts an annotated abstract graph
representation from the foreground segmentation of vessel networks of arbitrary
topology and vessel shape. Only a single, dimensionless, a-priori determinable
parameter is required. By careful engineering of individual pipeline stages and
a novel iterative refinement scheme we are, for the first time, able to analyze
the topology of volumes of roughly 1TB on commodity hardware. An implementation
of the presented pipeline is publicly available in version 5.1 of the volume
rendering and processing engine Voreen (https://www.uni-muenster.de/Voreen/).
| [
{
"created": "Fri, 5 Feb 2021 23:13:09 GMT",
"version": "v1"
}
] | 2021-06-30 | [
[
"Drees",
"Dominik",
""
],
[
"Scherzinger",
"Aaron",
""
],
[
"Hägerling",
"René",
""
],
[
"Kiefer",
"Friedemann",
""
],
[
"Jiang",
"Xiaoyi",
""
]
] |
2102.03502 | Zhenhan Huang | Zhenhan Huang, Fumihide Tanaka | MSPM: A Modularized and Scalable Multi-Agent Reinforcement
Learning-based System for Financial Portfolio Management | null | PLoS ONE 17(2): e0263689 (2022) | 10.1371/journal.pone.0263689 | null | q-fin.PM cs.AI cs.LG q-fin.CP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Financial portfolio management (PM) is one of the most applicable problems in
reinforcement learning (RL) owing to its sequential decision-making nature.
However, existing RL-based approaches rarely focus on scalability or
reusability to adapt to the ever-changing markets. These approaches are rigid
and unscalable to accommodate the varying number of assets of portfolios and
increasing need for heterogeneous data. Also, RL agents in the existing systems
are ad-hoc trained and hardly reusable for different portfolios. To confront
the above problems, a modular design is desired for the systems to be
compatible with reusable asset-dedicated agents. In this paper, we propose a
multi-agent RL-based system for PM (MSPM). MSPM involves two types of
asynchronously-updated modules: Evolving Agent Module (EAM) and Strategic Agent
Module (SAM). An EAM is an information-generating module with a DQN agent, and
it receives heterogeneous data and generates signal-comprised information for a
particular asset. An SAM is a decision-making module with a PPO agent for
portfolio optimization, and it connects to EAMs to reallocate the assets in a
portfolio. Trained EAMs can be connected to any SAM at will. With its
modularized architecture, the multi-step condensation of volatile market
information, and the reusable design of EAM, MSPM simultaneously addresses the
two challenges in RL-based PM: scalability and reusability. Experiments on
8-year U.S. stock market data prove the effectiveness of MSPM in profit
accumulation by its outperformance over five baselines in terms of accumulated
rate of return (ARR), daily rate of return, and Sortino ratio. MSPM improves
ARR by at least 186.5% compared to CRP, a widely-used PM strategy. To validate
the indispensability of EAM, we back-test and compare MSPMs on four portfolios.
EAM-enabled MSPMs improve ARR by at least 1341.8% compared to EAM-disabled
MSPMs.
| [
{
"created": "Sat, 6 Feb 2021 04:04:57 GMT",
"version": "v1"
},
{
"created": "Tue, 9 Feb 2021 16:19:01 GMT",
"version": "v2"
},
{
"created": "Fri, 11 Jun 2021 08:42:30 GMT",
"version": "v3"
},
{
"created": "Sat, 19 Feb 2022 03:54:41 GMT",
"version": "v4"
}
] | 2022-02-22 | [
[
"Huang",
"Zhenhan",
""
],
[
"Tanaka",
"Fumihide",
""
]
] |
2102.03752 | Yusheng Su | Yusheng Su, Xu Han, Yankai Lin, Zhengyan Zhang, Zhiyuan Liu, Peng Li,
Jie Zhou, Maosong Sun | CSS-LM: A Contrastive Framework for Semi-supervised Fine-tuning of
Pre-trained Language Models | null | IEEE/ACM Transactions on Audio, Speech, and Language Processing
2021 | 10.1109/TASLP.2021.3105013 | 2329-9290 | cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Fine-tuning pre-trained language models (PLMs) has demonstrated its
effectiveness on various downstream NLP tasks recently. However, in many
low-resource scenarios, the conventional fine-tuning strategies cannot
sufficiently capture the important semantic features for downstream tasks. To
address this issue, we introduce a novel framework (named "CSS-LM") to improve
the fine-tuning phase of PLMs via contrastive semi-supervised learning.
Specifically, given a specific task, we retrieve positive and negative
instances from large-scale unlabeled corpora according to their domain-level
and class-level semantic relatedness to the task. We then perform contrastive
semi-supervised learning on both the retrieved unlabeled and original labeled
instances to help PLMs capture crucial task-related semantic features. The
experimental results show that CSS-LM achieves better results than the
conventional fine-tuning strategy on a series of downstream tasks with few-shot
settings, and outperforms the latest supervised contrastive fine-tuning
strategies. Our datasets and source code will be available to provide more
details.
| [
{
"created": "Sun, 7 Feb 2021 09:27:26 GMT",
"version": "v1"
},
{
"created": "Mon, 1 Mar 2021 08:50:38 GMT",
"version": "v2"
},
{
"created": "Wed, 3 Mar 2021 11:47:00 GMT",
"version": "v3"
}
] | 2021-11-15 | [
[
"Su",
"Yusheng",
""
],
[
"Han",
"Xu",
""
],
[
"Lin",
"Yankai",
""
],
[
"Zhang",
"Zhengyan",
""
],
[
"Liu",
"Zhiyuan",
""
],
[
"Li",
"Peng",
""
],
[
"Zhou",
"Jie",
""
],
[
"Sun",
"Maosong",
""
]
] |
2102.03814 | Theerawit Wilaiprasitporn | Phairot Autthasan, Rattanaphon Chaisaen, Thapanun Sudhawiyangkul,
Phurin Rangpong, Suktipol Kiatthaveephong, Nat Dilokthanakul, Gun
Bhakdisongkhram, Huy Phan, Cuntai Guan and Theerawit Wilaiprasitporn | MIN2Net: End-to-End Multi-Task Learning for Subject-Independent Motor
Imagery EEG Classification | null | IEEE Transactions on Biomedical Engineering 2021 | 10.1109/TBME.2021.3137184 | null | eess.SP cs.AI cs.CV cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Advances in the motor imagery (MI)-based brain-computer interfaces (BCIs)
allow control of several applications by decoding neurophysiological phenomena,
which are usually recorded by electroencephalography (EEG) using a non-invasive
technique. Despite great advances in MI-based BCI, EEG rhythms are specific to
a subject and vary over time. These issues point to significant
challenges to enhance the classification performance, especially in a
subject-independent manner. To overcome these challenges, we propose MIN2Net, a
novel end-to-end multi-task learning to tackle this task. We integrate deep
metric learning into a multi-task autoencoder to learn a compact and
discriminative latent representation from EEG and perform classification
simultaneously. This approach reduces the complexity of pre-processing and
results in significant performance improvement on EEG classification. Experimental
results in a subject-independent manner show that MIN2Net outperforms the
state-of-the-art techniques, achieving an F1-score improvement of 6.72%, and
2.23% on the SMR-BCI, and OpenBMI datasets, respectively. We demonstrate that
MIN2Net improves discriminative information in the latent representation. This
study indicates the possibility and practicality of using this model to develop
MI-based BCI applications for new users without the need for calibration.
| [
{
"created": "Sun, 7 Feb 2021 15:20:23 GMT",
"version": "v1"
},
{
"created": "Sun, 16 May 2021 08:03:59 GMT",
"version": "v2"
},
{
"created": "Thu, 20 May 2021 09:48:47 GMT",
"version": "v3"
},
{
"created": "Fri, 7 Jan 2022 17:20:56 GMT",
"version": "v4"
}
] | 2022-01-10 | [
[
"Autthasan",
"Phairot",
""
],
[
"Chaisaen",
"Rattanaphon",
""
],
[
"Sudhawiyangkul",
"Thapanun",
""
],
[
"Rangpong",
"Phurin",
""
],
[
"Kiatthaveephong",
"Suktipol",
""
],
[
"Dilokthanakul",
"Nat",
""
],
[
"Bhakdisongkhram",
"Gun",
""
],
[
"Phan",
"Huy",
""
],
[
"Guan",
"Cuntai",
""
],
[
"Wilaiprasitporn",
"Theerawit",
""
]
] |
2102.03858 | Zaharah A. Bukhsh | Zaharah A. Bukhsh, Nils Jansen, Aaqib Saeed | Damage detection using in-domain and cross-domain transfer learning | 16 pages, 8 figures, 7 tables | Neural Comput & Applic (2021) | 10.1007/s00521-021-06279-x | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We investigate the capabilities of transfer learning in the area of
structural health monitoring. In particular, we are interested in damage
detection for concrete structures. Typical image datasets for such problems are
relatively small, calling for the transfer of learned representation from a
related large-scale dataset. Past efforts of damage detection using images have
mainly considered cross-domain transfer learning approaches using pre-trained
IMAGENET models that are subsequently fine-tuned for the target task. However,
there are rising concerns about the generalizability of IMAGENET
representations for specific target domains, such as for visual inspection and
medical imaging. We, therefore, evaluate a combination of in-domain and
cross-domain transfer learning strategies for damage detection in bridges. We
perform comprehensive comparisons to study the impact of cross-domain and
in-domain transfer, with various initialization strategies, using six publicly
available visual inspection datasets. The pre-trained models are also evaluated
for their ability to cope with the extremely low-data regime. We show that the
combination of cross-domain and in-domain transfer persistently shows superior
performance, especially with tiny datasets. Likewise, we also provide visual
explanations of predictive models to enable algorithmic transparency and
provide insights to experts about the intrinsic decision logic of typically
black-box deep models.
| [
{
"created": "Sun, 7 Feb 2021 17:36:27 GMT",
"version": "v1"
},
{
"created": "Tue, 5 Oct 2021 09:37:22 GMT",
"version": "v2"
}
] | 2021-10-06 | [
[
"Bukhsh",
"Zaharah A.",
""
],
[
"Jansen",
"Nils",
""
],
[
"Saeed",
"Aaqib",
""
]
] |
2102.03896 | Simon Zhuang | Simon Zhuang, Dylan Hadfield-Menell | Consequences of Misaligned AI | null | NeurIPS 2020 | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | AI systems often rely on two key components: a specified goal or reward
function and an optimization algorithm to compute the optimal behavior for that
goal. This approach is intended to provide value for a principal: the user on
whose behalf the agent acts. The objectives given to these agents often refer
to a partial specification of the principal's goals. We consider the cost of
this incompleteness by analyzing a model of a principal and an agent in a
resource constrained world where the $L$ attributes of the state correspond to
different sources of utility for the principal. We assume that the reward
function given to the agent only has support on $J < L$ attributes. The
contributions of our paper are as follows: 1) we propose a novel model of an
incomplete principal-agent problem from artificial intelligence; 2) we provide
necessary and sufficient conditions under which indefinitely optimizing for any
incomplete proxy objective leads to arbitrarily low overall utility; and 3) we
show how modifying the setup to allow reward functions that reference the full
state or allowing the principal to update the proxy objective over time can
lead to higher utility solutions. The results in this paper argue that we
should view the design of reward functions as an interactive and dynamic
process and identifies a theoretical scenario where some degree of
interactivity is desirable.
| [
{
"created": "Sun, 7 Feb 2021 19:34:04 GMT",
"version": "v1"
}
] | 2021-02-09 | [
[
"Zhuang",
"Simon",
""
],
[
"Hadfield-Menell",
"Dylan",
""
]
] |
2102.03897 | Chetan Srinidhi L | Chetan L. Srinidhi, Seung Wook Kim, Fu-Der Chen, Anne L. Martel | Self-supervised driven consistency training for annotation efficient
histopathology image analysis | null | Medical Image Analysis, Volume 75, January 2022 | 10.1016/j.media.2021.102256 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Training a neural network with a large labeled dataset is still a dominant
paradigm in computational histopathology. However, obtaining such exhaustive
manual annotations is often expensive, laborious, and prone to inter- and
intra-observer variability. While recent self-supervised and semi-supervised
methods can alleviate this need by learning unsupervised feature
representations, they still struggle to generalize well to downstream tasks
when the number of labeled instances is small. In this work, we overcome this
challenge by leveraging both task-agnostic and task-specific unlabeled data
based on two novel strategies: i) a self-supervised pretext task that harnesses
the underlying multi-resolution contextual cues in histology whole-slide images
to learn a powerful supervisory signal for unsupervised representation
learning; ii) a new teacher-student semi-supervised consistency paradigm that
learns to effectively transfer the pretrained representations to downstream
tasks based on prediction consistency with the task-specific unlabeled data.
We carry out extensive validation experiments on three histopathology benchmark
datasets across two classification and one regression-based tasks, i.e., tumor
metastasis detection, tissue type classification, and tumor cellularity
quantification. Under limited-label data, the proposed method yields tangible
improvements, which is close or even outperforming other state-of-the-art
self-supervised and supervised baselines. Furthermore, we empirically show that
the idea of bootstrapping the self-supervised pretrained features is an
effective way to improve the task-specific semi-supervised learning on standard
benchmarks. Code and pretrained models will be made available at:
https://github.com/srinidhiPY/SSL_CR_Histo
| [
{
"created": "Sun, 7 Feb 2021 19:46:21 GMT",
"version": "v1"
},
{
"created": "Tue, 9 Feb 2021 23:26:44 GMT",
"version": "v2"
},
{
"created": "Sun, 3 Oct 2021 11:07:40 GMT",
"version": "v3"
}
] | 2021-11-03 | [
[
"Srinidhi",
"Chetan L.",
""
],
[
"Kim",
"Seung Wook",
""
],
[
"Chen",
"Fu-Der",
""
],
[
"Martel",
"Anne L.",
""
]
] |
2102.03932 | Fazael Ayatollahi | Fazael Ayatollahi (1 and 2), Shahriar B. Shokouhi (1), Ritse M. Mann
(2), Jonas Teuwen (2 and 3) ((1) Electrical Engineering Department, Iran
University of Science and Technology (IUST), Tehran, Iran, (2) Department of
Radiology and Nuclear Medicine, Radboud University Medical Center, Nijmegen,
the Netherlands, (3) Department of Radiation Oncology, Netherlands Cancer
Institute, Amsterdam, the Netherlands) | Automatic Breast Lesion Detection in Ultrafast DCE-MRI Using Deep
Learning | null | Medical physics vol. 48,10 (2021): 5897-5907 | 10.1002/mp.15156 | null | eess.IV cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | Purpose: We propose a deep learning-based computer-aided detection (CADe)
method to detect breast lesions in ultrafast DCE-MRI sequences. This method
uses both the three-dimensional spatial information and temporal information
obtained from the early-phase of the dynamic acquisition. Methods: The proposed
CADe method, based on a modified 3D RetinaNet model, operates on ultrafast T1
weighted sequences, which are preprocessed for motion compensation, temporal
normalization, and are cropped before passing into the model. The model is
optimized to enable the detection of relatively small breast lesions in a
screening setting, focusing on detection of lesions that are harder to
differentiate from confounding structures inside the breast. Results: The
method was developed based on a dataset consisting of 489 ultrafast MRI studies
obtained from 462 patients containing a total of 572 lesions (365 malignant,
207 benign) and achieved a detection rate, sensitivity, and detection rate of
benign lesions of 0.90 (0.876-0.934), 0.95 (0.934-0.980), and 0.81
(0.751-0.871) at 4 false positives per normal breast with 10-fold
cross-testing, respectively. Conclusions: The deep learning architecture used
for the proposed CADe application can efficiently detect benign and malignant
lesions on ultrafast DCE-MRI. Furthermore, utilizing the less visible
hard-to-detect lesions in training improves the learning process and, subsequently,
detection of malignant breast lesions.
| [
{
"created": "Sun, 7 Feb 2021 22:03:39 GMT",
"version": "v1"
},
{
"created": "Sun, 15 Aug 2021 19:47:00 GMT",
"version": "v2"
}
] | 2021-11-12 | [
[
"Ayatollahi",
"Fazael",
"",
"1 and 2"
],
[
"Shokouhi",
"Shahriar B.",
"",
    "1"
],
[
"Mann",
"Ritse M.",
"",
    "2"
],
[
"Teuwen",
"Jonas",
"",
"2 and 3"
]
] |
2102.04034 | Andrew Palmer | Andrew W. Palmer, Albi Sema, Wolfram Martens, Peter Rudolph and
Wolfgang Waizenegger | The Autonomous Siemens Tram | 6 pages, presented at the 2020 International Conference on
Intelligent Transportation Systems (ITSC) | A. W. Palmer, A. Sema, W. Martens, P. Rudolph and W. Waizenegger,
"The Autonomous Siemens Tram," 2020 IEEE 23rd ITSC, 2020, pp. 1-6 | 10.1109/ITSC45102.2020.9294699 | null | cs.RO cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents the Autonomous Siemens Tram that was publicly
demonstrated in Potsdam, Germany during the InnoTrans 2018 exhibition. The
system was built on a Siemens Combino tram and used a multi-modal sensor suite
to localize the vehicle, and to detect and respond to traffic signals and
obstacles. An overview of the hardware and the developed localization, signal
handling, and obstacle handling components is presented, along with a summary
of their performance.
| [
{
"created": "Mon, 8 Feb 2021 07:13:58 GMT",
"version": "v1"
}
] | 2021-02-09 | [
[
"Palmer",
"Andrew W.",
""
],
[
"Sema",
"Albi",
""
],
[
"Martens",
"Wolfram",
""
],
[
"Rudolph",
"Peter",
""
],
[
"Waizenegger",
"Wolfgang",
""
]
] |
2102.04060 | Maxime Ferrera | Maxime Ferrera, Alexandre Eudes, Julien Moras, Martial Sanfourche, Guy
Le Besnerais | OV$^{2}$SLAM : A Fully Online and Versatile Visual SLAM for Real-Time
Applications | Accepted for publication in IEEE Robotics and Automation Letters
(RA-L). Code is available at : \url{https://github.com/ov2slam/ov2slam} | IEEE Robotics and Automation Letters, IEEE 2021 | null | null | cs.CV cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many applications of Visual SLAM, such as augmented reality, virtual reality,
robotics or autonomous driving, require versatile, robust and precise
solutions, most often with real-time capability. In this work, we describe
OV$^{2}$SLAM, a fully online algorithm, handling both monocular and stereo
camera setups, various map scales and frame-rates ranging from a few Hertz up
to several hundreds. It combines numerous recent contributions in visual
localization within an efficient multi-threaded architecture. Extensive
comparisons with competing algorithms shows the state-of-the-art accuracy and
real-time performance of the resulting algorithm. For the benefit of the
community, we release the source code:
\url{https://github.com/ov2slam/ov2slam}.
| [
{
"created": "Mon, 8 Feb 2021 08:39:23 GMT",
"version": "v1"
}
] | 2021-02-09 | [
[
"Ferrera",
"Maxime",
""
],
[
"Eudes",
"Alexandre",
""
],
[
"Moras",
"Julien",
""
],
[
"Sanfourche",
"Martial",
""
],
[
"Besnerais",
"Guy Le",
""
]
] |
2102.04201 | Jennifer Cobbe Dr | Jennifer Cobbe, Michelle Seng Ah Lee, Jatinder Singh | Reviewable Automated Decision-Making: A Framework for Accountable
Algorithmic Systems | null | ACM Conference on Fairness, Accountability, and Transparency
(FAccT 21), March 2021, Virtual Event, Canada | null | null | cs.CY cs.AI | http://creativecommons.org/licenses/by/4.0/ | This paper introduces reviewability as a framework for improving the
accountability of automated and algorithmic decision-making (ADM) involving
machine learning. We draw on an understanding of ADM as a socio-technical
process involving both human and technical elements, beginning before a
decision is made and extending beyond the decision itself. While explanations
and other model-centric mechanisms may assist some accountability concerns,
they often provide insufficient information of these broader ADM processes for
regulatory oversight and assessments of legal compliance. Reviewability
involves breaking down the ADM process into technical and organisational
elements to provide a systematic framework for determining the contextually
appropriate record-keeping mechanisms to facilitate meaningful review - both of
individual decisions and of the process as a whole. We argue that a
reviewability framework, drawing on administrative law's approach to reviewing
human decision-making, offers a practical way forward towards a more
holistic and legally-relevant form of accountability for ADM.
| [
{
"created": "Tue, 26 Jan 2021 18:15:34 GMT",
"version": "v1"
},
{
"created": "Wed, 10 Feb 2021 11:48:42 GMT",
"version": "v2"
}
] | 2021-02-11 | [
[
"Cobbe",
"Jennifer",
""
],
[
"Lee",
"Michelle Seng Ah",
""
],
[
"Singh",
"Jatinder",
""
]
] |
2102.04202 | Shoffan Saifullah | Shoffan Saifullah | Segmentasi Citra Menggunakan Metode Watershed Transform Berdasarkan
Image Enhancement Dalam Mendeteksi Embrio Telur | 8 pages, in Indonesian language, 6 figures | Systemic: Information System and Informatics Journal, 5(2),
(2019), 53-60 | 10.29080/systemic.v5i2.798 | null | eess.IV cs.CV | http://creativecommons.org/licenses/by/4.0/ | Image processing can be applied in the detection of egg embryos. The egg
embryos detection is processed using a segmentation process. The segmentation
divides the image according to the area that is divided. This process requires
improvement of the image that is processed to obtain optimal results. This
study will analyze the detection of egg embryos based on image processing with
image enhancement and the concept of segmentation using the watershed method.
Image enhancement in preprocessing uses a combination of the
Contrast Limited Adaptive Histogram Equalization (CLAHE) and Histogram
Equalization (HE) methods. The grayscale egg image is corrected using the CLAHE
method, and the results are reprocessed using HE. The image improvement results
show that the CLAHE-HE combination method gives a clear picture of the object
area of the egg image that has an embryo. The segmentation process using image
conversion to black and white image and watershed segmentation can clearly show
the object of a chicken egg that has an embryo. The results of segmentation can
divide the area of the egg having embryos in a real and accurate way with a
percentage \approx 98\%.
| [
{
"created": "Mon, 8 Feb 2021 14:03:51 GMT",
"version": "v1"
}
] | 2021-02-14 | [
[
"Saifullah",
"Shoffan",
""
]
] |
2102.04216 | Anusha Bompelli | Anusha Bompelli, Yanshan Wang, Ruyuan Wan, Esha Singh, Yuqi Zhou, Lin
Xu, David Oniani, Bhavani Singh Agnikula Kshatriya, Joyce (Joy) E.
Balls-Berry, and Rui Zhang | Social and behavioral determinants of health in the era of artificial
intelligence with electronic health records: A scoping review | 32 pages, 5 figures | Health Data Science. 2021 Aug 24;2021:9759016 | 10.34133/2021/9759016 | Article ID 9759016 | cs.CY cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Background: There is growing evidence that social and behavioral determinants
of health (SBDH) play a substantial effect in a wide range of health outcomes.
Electronic health records (EHRs) have been widely employed to conduct
observational studies in the age of artificial intelligence (AI). However,
there has been little research into how to make the most of SBDH information
from EHRs. Methods: A systematic search was conducted in six databases to find
relevant peer-reviewed publications that had recently been published. Relevance
was determined by screening and evaluating the articles. Based on selected
relevant studies, a methodological analysis of AI algorithms leveraging SBDH
information in EHR data was provided. Results: Our synthesis was driven by an
analysis of SBDH categories, the relationship between SBDH and
healthcare-related statuses, and several NLP approaches for extracting SBDH
from clinical literature. Discussion: The associations between SBDH and health
outcomes are complicated and diverse; several pathways may be involved. Using
Natural Language Processing (NLP) technology to support the extraction of SBDH
and other clinical concepts simplifies the identification and extraction of
essential concepts from clinical data, efficiently unlocks unstructured data,
and aids in the resolution of unstructured data-related issues. Conclusion:
Despite known associations between SBDH and disease, SBDH factors are rarely
investigated as interventions to improve patient outcomes. Gaining knowledge
about SBDH and how SBDH data can be collected from EHRs using NLP approaches
and predictive models improves the chances of influencing health policy change
for patient wellness, and ultimately promoting health and health equity.
Keywords: Social and Behavioral Determinants of Health, Artificial
Intelligence, Electronic Health Records, Natural Language Processing,
Predictive Model
| [
{
"created": "Fri, 22 Jan 2021 09:03:39 GMT",
"version": "v1"
},
{
"created": "Sun, 13 Jun 2021 17:50:11 GMT",
"version": "v2"
}
] | 2021-10-12 | [
[
"Bompelli",
"Anusha",
""
],
[
"Wang",
"Yanshan",
""
],
[
"Wan",
"Ruyuan",
""
],
[
"Singh",
"Esha",
""
],
[
"Zhou",
"Yuqi",
""
],
[
"Xu",
"Lin",
""
],
[
"Oniani",
"David",
""
],
[
"Kshatriya",
"Bhavani Singh Agnikula",
""
],
[
"Balls-Berry",
"Joyce E.",
""
],
[
"Zhang",
"Rui",
""
]
] |
2102.04341 | Jonathan Kelly | Justin Tomasi, Brandon Wagstaff, Steven L. Waslander, Jonathan Kelly | Learned Camera Gain and Exposure Control for Improved Visual Feature
Detection and Matching | In IEEE Robotics and Automation Letters (RA-L) and presented at the
IEEE International Conference on Robotics and Automation (ICRA'21), Xi'an,
China, May 30-Jun. 5, 2021 | IEEE Robotics and Automation Letters (RA-L), Vol. 6, No. 2, pp.
2028-2035, Apr. 2021 | 10.1109/LRA.2021.3058909 | null | cs.RO cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Successful visual navigation depends upon capturing images that contain
sufficient useful information. In this letter, we explore a data-driven
approach to account for environmental lighting changes, improving the quality
of images for use in visual odometry (VO) or visual simultaneous localization
and mapping (SLAM). We train a deep convolutional neural network model to
predictively adjust camera gain and exposure time parameters such that
consecutive images contain a maximal number of matchable features. The training
process is fully self-supervised: our training signal is derived from an
underlying VO or SLAM pipeline and, as a result, the model is optimized to
perform well with that specific pipeline. We demonstrate through extensive
real-world experiments that our network can anticipate and compensate for
dramatic lighting changes (e.g., transitions into and out of road tunnels),
maintaining a substantially higher number of inlier feature matches than
competing camera parameter control algorithms.
| [
{
"created": "Mon, 8 Feb 2021 16:46:09 GMT",
"version": "v1"
},
{
"created": "Sun, 28 Feb 2021 17:52:10 GMT",
"version": "v2"
},
{
"created": "Mon, 11 Jul 2022 05:00:57 GMT",
"version": "v3"
}
] | 2022-07-12 | [
[
"Tomasi",
"Justin",
""
],
[
"Wagstaff",
"Brandon",
""
],
[
"Waslander",
"Steven L.",
""
],
[
"Kelly",
"Jonathan",
""
]
] |
2102.04366 | Lucas Prado Osco | Mauro dos Santos de Arruda, Lucas Prado Osco, Plabiany Rodrigo Acosta,
Diogo Nunes Gon\c{c}alves, Jos\'e Marcato Junior, Ana Paula Marques Ramos,
Edson Takashi Matsubara, Zhipeng Luo, Jonathan Li, Jonathan de Andrade Silva,
Wesley Nunes Gon\c{c}alves | Counting and Locating High-Density Objects Using Convolutional Neural
Network | 15 pages, 10 figures, 8 tables | Expert Systems with Applications, 2022 | 10.1016/j.eswa.2022.116555 | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | This paper presents a Convolutional Neural Network (CNN) approach for
counting and locating objects in high-density imagery. To the best of our
knowledge, this is the first object counting and locating method based on a
feature map enhancement and a Multi-Stage Refinement of the confidence map. The
proposed method was evaluated in two counting datasets: tree and car. For the
tree dataset, our method returned a mean absolute error (MAE) of 2.05, a
root-mean-squared error (RMSE) of 2.87 and a coefficient of determination
(R$^2$) of 0.986. For the car dataset (CARPK and PUCPR+), our method was
superior to state-of-the-art methods. In these datasets, our approach
achieved an MAE of 4.45 and 3.16, an RMSE of 6.18 and 4.39, and an R$^2$ of
0.975 and 0.999, respectively. The proposed method is suitable for dealing with
high object density, achieving state-of-the-art performance for counting and
locating objects.
| [
{
"created": "Mon, 8 Feb 2021 17:17:10 GMT",
"version": "v1"
}
] | 2022-05-31 | [
[
"de Arruda",
"Mauro dos Santos",
""
],
[
"Osco",
"Lucas Prado",
""
],
[
"Acosta",
"Plabiany Rodrigo",
""
],
[
"Gonçalves",
"Diogo Nunes",
""
],
[
"Junior",
"José Marcato",
""
],
[
"Ramos",
"Ana Paula Marques",
""
],
[
"Matsubara",
"Edson Takashi",
""
],
[
"Luo",
"Zhipeng",
""
],
[
"Li",
"Jonathan",
""
],
[
"Silva",
"Jonathan de Andrade",
""
],
[
"Gonçalves",
"Wesley Nunes",
""
]
] |
2102.04394 | Fabio Gonzalez | Fabio A. Gonz\'alez, Alejandro Gallego, Santiago Toledo-Cort\'es,
Vladimir Vargas-Calder\'on | Learning with Density Matrices and Random Features | Final version published in Quantum Mach. Intell. 4, 23 (2022) | Quantum Mach. Intell. 4, 23 (2022) | 10.1007/s42484-022-00079-9 | null | cs.LG cs.AI quant-ph | http://creativecommons.org/licenses/by-sa/4.0/ | A density matrix describes the statistical state of a quantum system. It is a
powerful formalism to represent both the quantum and classical uncertainty of
quantum systems and to express different statistical operations such as
measurement, system combination and expectations as linear algebra operations.
This paper explores how density matrices can be used as a building block for
machine learning models exploiting their ability to straightforwardly combine
linear algebra and probability. One of the main results of the paper is to show
that density matrices coupled with random Fourier features could approximate
arbitrary probability distributions over $\mathbb{R}^n$. Based on this finding
the paper builds different models for density estimation, classification and
regression. These models are differentiable, so it is possible to integrate
them with other differentiable components, such as deep learning architectures
and to learn their parameters using gradient-based optimization. In addition,
the paper presents optimization-less training strategies based on estimation
and model averaging. The models are evaluated in benchmark tasks and the
results are reported and discussed.
| [
{
"created": "Mon, 8 Feb 2021 17:54:59 GMT",
"version": "v1"
},
{
"created": "Fri, 12 Feb 2021 14:40:04 GMT",
"version": "v2"
},
{
"created": "Tue, 21 Sep 2021 01:40:47 GMT",
"version": "v3"
},
{
"created": "Tue, 9 Nov 2021 04:05:52 GMT",
"version": "v4"
},
{
"created": "Tue, 30 Apr 2024 17:37:06 GMT",
"version": "v5"
}
] | 2024-05-01 | [
[
"González",
"Fabio A.",
""
],
[
"Gallego",
"Alejandro",
""
],
[
"Toledo-Cortés",
"Santiago",
""
],
[
"Vargas-Calderón",
"Vladimir",
""
]
] |
2102.04402 | Xueguang Lyu | Xueguang Lyu, Yuchen Xiao, Brett Daley, Christopher Amato | Contrasting Centralized and Decentralized Critics in Multi-Agent
Reinforcement Learning | null | Proceedings of the 20th International Conference on Autonomous
Agents and MultiAgent Systems (AAMAS). 2021 | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | Centralized Training for Decentralized Execution, where agents are trained
offline using centralized information but execute in a decentralized manner
online, has gained popularity in the multi-agent reinforcement learning
community. In particular, actor-critic methods with a centralized critic and
decentralized actors are a common instance of this idea. However, the
implications of using a centralized critic in this context are not fully
discussed and understood even though it is the standard choice of many
algorithms. We therefore formally analyze centralized and decentralized critic
approaches, providing a deeper understanding of the implications of critic
choice. Because our theory makes unrealistic assumptions, we also empirically
compare the centralized and decentralized critic methods over a wide set of
environments to validate our theories and to provide practical advice. We show
that there exist misconceptions regarding centralized critics in the current
literature and show that the centralized critic design is not strictly
beneficial, but rather both centralized and decentralized critics have
different pros and cons that should be taken into account by algorithm
designers.
| [
{
"created": "Mon, 8 Feb 2021 18:08:11 GMT",
"version": "v1"
},
{
"created": "Thu, 2 Dec 2021 21:33:13 GMT",
"version": "v2"
}
] | 2021-12-06 | [
[
"Lyu",
"Xueguang",
""
],
[
"Xiao",
"Yuchen",
""
],
[
"Daley",
"Brett",
""
],
[
"Amato",
"Christopher",
""
]
] |
2102.04566 | Lucas Prado Osco | Patrik Ol\~a Bressan, Jos\'e Marcato Junior, Jos\'e Augusto Correa
Martins, Diogo Nunes Gon\c{c}alves, Daniel Matte Freitas, Lucas Prado Osco,
Jonathan de Andrade Silva, Zhipeng Luo, Jonathan Li, Raymundo Cordero Garcia,
Wesley Nunes Gon\c{c}alves | Semantic Segmentation with Labeling Uncertainty and Class Imbalance | 15 pages, 9 figures, 3 tables | International Journal of Applied Earth Observation and
Geoinformation, 2022 | 10.1016/j.jag.2022.102690 | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Recently, methods based on Convolutional Neural Networks (CNN) achieved
impressive success in semantic segmentation tasks. However, challenges such as
the class imbalance and the uncertainty in the pixel-labeling process are not
completely addressed. As such, we present a new approach that calculates a
weight for each pixel considering its class and uncertainty during the labeling
process. The pixel-wise weights are used during training to increase or
decrease the importance of the pixels. Experimental results show that the
proposed approach leads to significant improvements in three challenging
segmentation tasks in comparison to baseline methods. It was also shown to be
more robust to noise. The approach presented here may be used within a wide
range of semantic segmentation methods to improve their robustness.
| [
{
"created": "Mon, 8 Feb 2021 22:53:33 GMT",
"version": "v1"
}
] | 2022-05-31 | [
[
"Bressan",
"Patrik Olã",
""
],
[
"Junior",
"José Marcato",
""
],
[
"Martins",
"José Augusto Correa",
""
],
[
"Gonçalves",
"Diogo Nunes",
""
],
[
"Freitas",
"Daniel Matte",
""
],
[
"Osco",
"Lucas Prado",
""
],
[
"Silva",
"Jonathan de Andrade",
""
],
[
"Luo",
"Zhipeng",
""
],
[
"Li",
"Jonathan",
""
],
[
"Garcia",
"Raymundo Cordero",
""
],
[
"Gonçalves",
"Wesley Nunes",
""
]
] |
2102.04652 | Xiangzeng Zhou | Xiangzeng Zhou and Pan Pan and Yun Zheng and Yinghui Xu and Rong Jin | Large Scale Long-tailed Product Recognition System at Alibaba | Accepted by CIKM 2020 | In Proceedings of the 29th ACM International Conference on
Information and Knowledge Management (CIKM20), 3353-3356 (2020) | 10.1145/3340531.3417445 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A practical large scale product recognition system suffers from the
phenomenon of long-tailed, imbalanced training data in the e-commerce setting
at Alibaba. Besides product images at Alibaba, plenty of image-related side
information (e.g., titles, tags) reveals rich semantic information about the
images. Prior works mainly focus on addressing the long tail problem from the
visual perspective only, without considering how to leverage the side
information. In this paper, we present a novel side-information-based large
scale visual recognition co-training~(SICoT) system to deal with the long tail
problem by leveraging the image-related side information. In the proposed
co-training system, we first introduce a bilinear word attention module
aiming to construct a semantic embedding over the noisy side information. A
visual feature and semantic embedding co-training scheme is then designed to
transfer knowledge from classes with abundant training data (head classes) to
classes with few training data (tail classes) in an end-to-end fashion.
Extensive experiments on four challenging large scale datasets, whose numbers
of classes range from one thousand to one million, demonstrate the scalable
effectiveness of the proposed SICoT system in alleviating the long tail
problem. In the visual search platform
Pailitao\footnote{http://www.pailitao.com} at Alibaba, we settle a practical
large scale product recognition application driven by the proposed SICoT
system, and achieve a significant gain in unique visitor~(UV) conversion rate.
| [
{
"created": "Tue, 9 Feb 2021 05:34:30 GMT",
"version": "v1"
}
] | 2021-02-10 | [
[
"Zhou",
"Xiangzeng",
""
],
[
"Pan",
"Pan",
""
],
[
"Zheng",
"Yun",
""
],
[
"Xu",
"Yinghui",
""
],
[
"Jin",
"Rong",
""
]
] |
2102.04667 | Yanhao Zhang | Yanhao Zhang, Pan Pan, Yun Zheng, Kang Zhao, Jianmin Wu, Yinghui Xu,
Rong Jin | Virtual ID Discovery from E-commerce Media at Alibaba: Exploiting
Richness of User Click Behavior for Visual Search Relevance | accepted by CIKM 2019 | CIKM 2019: 2489-2497 | 10.1145/3357384.3357800 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Visual search plays an essential role for E-commerce. To meet the search
demands of users and promote the shopping experience at Alibaba, the visual
search relevance of real-shot images is becoming the bottleneck. The
traditional visual search paradigm is usually based upon supervised learning
with labeled data.
However, large-scale categorical labels are required with expensive human
annotations, which limits its applicability and also usually fails in
distinguishing the real-shot images. In this paper, we propose to discover
Virtual ID from user click behavior to improve visual search relevance at
Alibaba. As a totally click-data driven approach, we collect various types of
click data for training deep networks without any human annotations at all. In
particular, Virtual ID are learned as classification supervision with co-click
embedding, which explores image relationship from user co-click behaviors to
guide category prediction and feature learning. Concretely, we deploy Virtual
ID Category Network by integrating first-clicks and switch-clicks as
regularizer. Incorporating triplets and list constraints, Virtual ID Feature
Network is trained in a joint classification and ranking manner. Benefiting
from exploration of user click data, our networks are more effective to encode
richer supervision and better distinguish real-shot images in terms of category
and feature. To validate our method for visual search relevance, we conduct an
extensive set of offline and online experiments on the collected real-shot
images. We consistently achieve better experimental results across all
components, compared with alternative and state-of-the-art methods.
| [
{
"created": "Tue, 9 Feb 2021 06:31:20 GMT",
"version": "v1"
}
] | 2021-02-10 | [
[
"Zhang",
"Yanhao",
""
],
[
"Pan",
"Pan",
""
],
[
"Zheng",
"Yun",
""
],
[
"Zhao",
"Kang",
""
],
[
"Wu",
"Jianmin",
""
],
[
"Xu",
"Yinghui",
""
],
[
"Jin",
"Rong",
""
]
] |
2102.04674 | Yanhao Zhang | Yanhao Zhang, Pan Pan, Yun Zheng, Kang Zhao, Yingya Zhang, Xiaofeng
Ren, Rong Jin | Visual Search at Alibaba | accepted by KDD 2018 | KDD 2018: 993-1001 | 10.1145/3219819.3219820 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper introduces the large scale visual search algorithm and system
infrastructure at Alibaba. The following challenges are discussed in the
e-commerce setting at Alibaba: (a) how to handle heterogeneous image data and
bridge the gap between real-shot images from user queries and the online
images; (b) how to deal with large-scale indexing of massive, continuously
updated data; (c) how to train deep models for effective feature
representation without huge human annotation effort; and (d) how to improve
user engagement by considering the quality of the content. We take advantage
of the large image collection of Alibaba
and state-of-the-art deep learning techniques to perform visual search at
scale. We present solutions and implementation details to overcome those
problems and also share our learnings from building such a large scale
commercial visual search engine. Specifically, model and search-based fusion
approach is introduced to effectively predict categories. Also, we propose a
deep CNN model for joint detection and feature learning by mining user click
behavior. The binary index engine is designed to scale up indexing without
compromising recall and precision. Finally, we apply all the stages into an
end-to-end system architecture, which can simultaneously achieve highly
efficient and scalable performance adapting to real-shot images. Extensive
experiments demonstrate the advancement of each module in our system. We hope
visual search at Alibaba becomes more widely incorporated into today's
commercial applications.
| [
{
"created": "Tue, 9 Feb 2021 06:46:50 GMT",
"version": "v1"
}
] | 2021-02-10 | [
[
"Zhang",
"Yanhao",
""
],
[
"Pan",
"Pan",
""
],
[
"Zheng",
"Yun",
""
],
[
"Zhao",
"Kang",
""
],
[
"Zhang",
"Yingya",
""
],
[
"Ren",
"Xiaofeng",
""
],
[
"Jin",
"Rong",
""
]
] |
2102.04780 | Sutharsan Mahendren Mr | Sutharsan Mahendren, Chamira Edussooriya, Ranga Rodrigo | Diverse Single Image Generation with Controllable Global Structure | Published in the Neurocomputing Journal | Neurocomputing 528(2023)97-112 | 10.1016/j.neucom.2023.01.011 | null | cs.CV eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Image generation from a single image using generative adversarial networks is
quite interesting due to the realism of generated images. However, recent
approaches need improvement for such realistic and diverse image generation,
when the global context of the image is important such as in face, animal, and
architectural image generation. This is mainly due to the use of fewer
convolutional layers for mainly capturing the patch statistics and, thereby,
not being able to capture global statistics very well. We solve this problem by
using attention blocks at selected scales and feeding a random Gaussian blurred
image to the discriminator for training. Our results are visually better than
the state-of-the-art particularly in generating images that require global
context. The diversity of our image generation, measured using the average
standard deviation of pixels, is also better.
| [
{
"created": "Tue, 9 Feb 2021 11:52:48 GMT",
"version": "v1"
},
{
"created": "Mon, 15 Feb 2021 05:22:34 GMT",
"version": "v2"
},
{
"created": "Thu, 20 Jan 2022 05:25:10 GMT",
"version": "v3"
},
{
"created": "Wed, 25 Jan 2023 13:10:39 GMT",
"version": "v4"
}
] | 2023-01-26 | [
[
"Mahendren",
"Sutharsan",
""
],
[
"Edussooriya",
"Chamira",
""
],
[
"Rodrigo",
"Ranga",
""
]
] |
2102.04816 | Abdelrahman Abdallah | Daniyar Nurseitov, Kairat Bostanbekov, Maksat Kanatov, Anel Alimova,
Abdelrahman Abdallah, Galymzhan Abdimanap | Classification of Handwritten Names of Cities and Handwritten Text
Recognition using Various Deep Learning Models | null | Advances in Science, Technology and Engineering Systems. 5,
934-943 (2020) | 10.25046/aj0505114 | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | This article discusses the problem of handwriting recognition in Kazakh and
Russian languages. This area is understudied, as there are almost no works in
this direction in the literature. We describe various approaches and recent
achievements in the development of handwriting recognition models for Cyrillic
script. The first model uses deep
convolutional neural networks (CNNs) for feature extraction and a fully
connected multilayer perceptron neural network (MLP) for word classification.
The second model, called SimpleHTR, uses CNN and recurrent neural network (RNN)
layers to extract information from images. We also proposed the Bluechet and
Puchserver models to compare the results. Due to the lack of available open
datasets in Russian and Kazakh languages, we carried out work to collect data
that included handwritten names of countries and cities from 42 different
Cyrillic words, written more than 500 times in different handwriting. We also
used a handwritten database of Kazakh and Russian languages (HKR). This is a
new database of Cyrillic words (not only countries and cities) for the Russian
and Kazakh languages, created by the authors of this work.
| [
{
"created": "Tue, 9 Feb 2021 13:34:16 GMT",
"version": "v1"
}
] | 2021-02-10 | [
[
"Nurseitov",
"Daniyar",
""
],
[
"Bostanbekov",
"Kairat",
""
],
[
"Kanatov",
"Maksat",
""
],
[
"Alimova",
"Anel",
""
],
[
"Abdallah",
"Abdelrahman",
""
],
[
"Abdimanap",
"Galymzhan",
""
]
] |
2102.04916 | Pierre Aumjaud | Pierre Aumjaud, David McAuliffe, Francisco Javier Rodr\'iguez Lera,
Philip Cardiff | rl_reach: Reproducible Reinforcement Learning Experiments for Robotic
Reaching Tasks | 7 pages, 5 figures | Software Impacts. 8 (2021) 100061 | 10.1016/j.simpa.2021.100061 | null | cs.LG cs.AI cs.RO | http://creativecommons.org/licenses/by-sa/4.0/ | Training reinforcement learning agents to solve a given task is highly
dependent on identifying optimal sets of hyperparameters and selecting suitable
environment input/output configurations. This tedious process could be eased
with a straightforward toolbox allowing its user to quickly compare different
training parameter sets. We present rl_reach, a self-contained, open-source and
easy-to-use software package designed to run reproducible reinforcement
learning experiments for customisable robotic reaching tasks. rl_reach packs
together training environments, agents, hyperparameter optimisation tools and
policy evaluation scripts, allowing its users to quickly investigate and
identify optimal training configurations. rl_reach is publicly available at
this URL: https://github.com/PierreExeter/rl_reach.
| [
{
"created": "Tue, 9 Feb 2021 16:14:10 GMT",
"version": "v1"
},
{
"created": "Mon, 1 Mar 2021 19:32:01 GMT",
"version": "v2"
}
] | 2021-03-03 | [
[
"Aumjaud",
"Pierre",
""
],
[
"McAuliffe",
"David",
""
],
[
"Lera",
"Francisco Javier Rodríguez",
""
],
[
"Cardiff",
"Philip",
""
]
] |
2102.04993 | Marc G\'orriz Blanch | Marc G\'orriz, Saverio Blasi, Alan F. Smeaton, Noel E. O'Connor, Marta
Mrak | Attention-Based Neural Networks for Chroma Intra Prediction in Video
Coding | null | IEEE Journal of Selected Topics in Signal Processing, 2020 | 10.1109/JSTSP.2020.3044482 | null | eess.IV cs.CC cs.CV cs.LG cs.MM | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Neural networks can be successfully used to improve several modules of
advanced video coding schemes. In particular, compression of colour components
was shown to greatly benefit from usage of machine learning models, thanks to
the design of appropriate attention-based architectures that allow the
prediction to exploit specific samples in the reference region. However, such
architectures tend to be complex and computationally intensive, and may be
difficult to deploy in a practical video coding pipeline. This work focuses on
reducing the complexity of such methodologies, to design a set of simplified
and cost-effective attention-based architectures for chroma intra-prediction. A
novel size-agnostic multi-model approach is proposed to reduce the complexity
of the inference process. The resulting simplified architecture is still
capable of outperforming state-of-the-art methods. Moreover, a collection of
simplifications is presented in this paper, to further reduce the complexity
overhead of the proposed prediction architecture. Thanks to these
simplifications, a reduction in the number of parameters of around 90% is
achieved with respect to the original attention-based methodologies.
Simplifications include a framework for reducing the overhead of the
convolutional operations, a simplified cross-component processing model
integrated into the original architecture, and a methodology to perform
integer-precision approximations with the aim to obtain fast and hardware-aware
implementations. The proposed schemes are integrated into the Versatile Video
Coding (VVC) prediction pipeline, retaining compression efficiency of
state-of-the-art chroma intra-prediction methods based on neural networks,
while offering different directions for significantly reducing coding
complexity.
| [
{
"created": "Tue, 9 Feb 2021 18:01:22 GMT",
"version": "v1"
}
] | 2021-02-10 | [
[
"Górriz",
"Marc",
""
],
[
"Blasi",
"Saverio",
""
],
[
"Smeaton",
"Alan F.",
""
],
[
"O'Connor",
"Noel E.",
""
],
[
"Mrak",
"Marta",
""
]
] |
2102.05067 | Silvia Cascianelli PhD | Silvia Cascianelli, Gabriele Costante, Alessandro Devo, Thomas A.
Ciarfuglia, Paolo Valigi, Mario L. Fravolini | The Role of the Input in Natural Language Video Description | In IEEE Transactions on Multimedia | IEEE Transactions on Multimedia, 22(1), 271-283 (2019) | null | null | cs.CV cs.CL cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Natural Language Video Description (NLVD) has recently received strong
interest in the Computer Vision, Natural Language Processing (NLP), Multimedia,
and Autonomous Robotics communities. The State-of-the-Art (SotA) approaches
obtained remarkable results when tested on the benchmark datasets. However,
those approaches generalize poorly to new datasets. In addition, none of the
existing works focus on the processing of the input to the NLVD systems, which
is both visual and textual. This work presents an extensive study of the role
of the visual input, evaluated with respect to the overall NLP performance.
This is achieved by performing data augmentation of the
visual component, applying common transformations to model camera distortions,
noise, lighting, and camera positioning, that are typical in real-world
operative scenarios. A t-SNE based analysis is proposed to evaluate the effects
of the considered transformations on the overall visual data distribution.
This study considers the English subset of the Microsoft Research Video
Description (MSVD) dataset, which is commonly used for NLVD. It was observed
that this dataset contains a considerable number of syntactic and semantic errors.
These errors have been amended manually, and the new version of the dataset
(called MSVD-v2) is used in the experimentation. The MSVD-v2 dataset is
released to help gain insight into the NLVD problem.
| [
{
"created": "Tue, 9 Feb 2021 19:00:35 GMT",
"version": "v1"
}
] | 2021-02-11 | [
[
"Cascianelli",
"Silvia",
""
],
[
"Costante",
"Gabriele",
""
],
[
"Devo",
"Alessandro",
""
],
[
"Ciarfuglia",
"Thomas A.",
""
],
[
"Valigi",
"Paolo",
""
],
[
"Fravolini",
"Mario L.",
""
]
] |
2102.05126 | Jon\'a\v{s} Kulh\'anek | Jon\'a\v{s} Kulh\'anek and Vojt\v{e}ch Hude\v{c}ek and Tom\'a\v{s}
Nekvinda and Ond\v{r}ej Du\v{s}ek | AuGPT: Auxiliary Tasks and Data Augmentation for End-To-End Dialogue
with Pre-Trained Language Models | null | Proceedings of the 3rd Workshop on Natural Language Processing for
Conversational AI (2021), 198-210 | 10.18653/v1/2021.nlp4convai-1.19 | null | cs.CL cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Attention-based pre-trained language models such as GPT-2 brought
considerable progress to end-to-end dialogue modelling. However, they also
present considerable risks for task-oriented dialogue, such as lack of
knowledge grounding or diversity. To address these issues, we introduce
modified training objectives for language model finetuning, and we employ
massive data augmentation via back-translation to increase the diversity of the
training data. We further examine the possibilities of combining data from
multiples sources to improve performance on the target dataset. We carefully
evaluate our contributions with both human and automatic methods. Our model
substantially outperforms the baseline on the MultiWOZ data and shows
competitive performance with state of the art in both automatic and human
evaluation.
| [
{
"created": "Tue, 9 Feb 2021 20:53:34 GMT",
"version": "v1"
},
{
"created": "Mon, 27 Sep 2021 08:28:40 GMT",
"version": "v2"
},
{
"created": "Fri, 14 Jan 2022 14:42:11 GMT",
"version": "v3"
}
] | 2022-01-17 | [
[
"Kulhánek",
"Jonáš",
""
],
[
"Hudeček",
"Vojtěch",
""
],
[
"Nekvinda",
"Tomáš",
""
],
[
"Dušek",
"Ondřej",
""
]
] |