id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2103.02728 | Cosmin Badea | Cosmin Badea, Gregory Artus | Morality, Machines and the Interpretation Problem: A Value-based,
Wittgensteinian Approach to Building Moral Agents | null | In: Bramer, M., Stahl, F. (eds) Artificial Intelligence XXXIX.
SGAI-AI 2022. Lecture Notes in Computer Science(), vol 13652. Springer, Cham
(2022) | 10.1007/978-3-031-21441-7_9 | null | cs.AI cs.CL cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present what we call the Interpretation Problem, whereby any rule in
symbolic form is open to infinite interpretation in ways that we might
disapprove of, and we argue that any attempt to build morality into machines is
subject to it. We show how the Interpretation Problem in Artificial
Intelligence is an illustration of Wittgenstein's general claim that no rule
can contain the criteria for its own application, and that the risks created by
this problem escalate in proportion to the degree to which the machine is
causally connected to the world, in what we call the Law of Interpretative
Exposure. Using game theory, we attempt to define the structure of normative
spaces and argue that any rule-following within a normative space is guided by
values that are external to that space and which cannot themselves be
represented as rules. In light of this, we categorise the types of mistakes an
artificial moral agent could make into Mistakes of Intention and Instrumental
Mistakes, and we propose ways of building morality into machines by getting
them to interpret the rules we give in accordance with these external values,
through explicit moral reasoning, the Show, not Tell paradigm, the adjustment
of causal power and structure of the agent, and relational values, with the
ultimate aim that the machine develop a virtuous character and that the impact
of the Interpretation Problem is minimised.
| [
{
"created": "Wed, 3 Mar 2021 22:34:01 GMT",
"version": "v1"
},
{
"created": "Wed, 28 Sep 2022 22:39:25 GMT",
"version": "v2"
},
{
"created": "Wed, 5 Oct 2022 20:04:16 GMT",
"version": "v3"
},
{
"created": "Mon, 6 Feb 2023 23:38:35 GMT",
"version": "v4"
}
] | 2023-02-08 | [
[
"Badea",
"Cosmin",
""
],
[
"Artus",
"Gregory",
""
]
] |
2103.02800 | Zejian Liu | Zejian Liu, Gang Li and Jian Cheng | Hardware Acceleration of Fully Quantized BERT for Efficient Natural
Language Processing | null | Design, Automation & Test in Europe (DATE) 2021 | null | null | cs.AR cs.CL | http://creativecommons.org/licenses/by/4.0/ | BERT is the most recent Transformer-based model that achieves
state-of-the-art performance in various NLP tasks. In this paper, we
investigate the hardware acceleration of BERT on FPGA for edge computing. To
tackle the issue of huge computational complexity and memory footprint, we
propose to fully quantize the BERT (FQ-BERT), including weights, activations,
softmax, layer normalization, and all the intermediate results. Experiments
demonstrate that the FQ-BERT can achieve 7.94x compression for weights with
negligible performance loss. We then propose an accelerator tailored for the
FQ-BERT and evaluate it on Xilinx ZCU102 and ZCU111 FPGAs. It can achieve a
performance-per-watt of 3.18 fps/W, which is 28.91x and 12.72x over Intel(R)
Core(TM) i7-8700 CPU and NVIDIA K80 GPU, respectively.
| [
{
"created": "Thu, 4 Mar 2021 02:49:16 GMT",
"version": "v1"
}
] | 2021-03-05 | [
[
"Liu",
"Zejian",
""
],
[
"Li",
"Gang",
""
],
[
"Cheng",
"Jian",
""
]
] |
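As a minimal sketch of the kind of weight quantization that full-quantization schemes such as FQ-BERT rely on (assuming a symmetric, per-tensor linear quantizer; the paper's exact bit widths and per-layer scheme are not reproduced here):

```python
import torch

def fake_quantize(x: torch.Tensor, num_bits: int = 8) -> torch.Tensor:
    """Symmetric linear quantization: round to signed num_bits integers, then dequantize."""
    qmax = 2 ** (num_bits - 1) - 1              # 127 for 8 bits
    scale = x.abs().max() / qmax                # per-tensor scale (illustrative choice)
    q = torch.clamp(torch.round(x / scale), -qmax, qmax)
    return q * scale

w = torch.randn(768, 768)                       # a BERT-sized weight matrix
w_q = fake_quantize(w)
print((w - w_q).abs().max().item())             # error is bounded by about scale / 2
```

Roughly speaking, storing only the integers and scales instead of 32-bit floats is what yields weight compression ratios like the 7.94x reported above.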
2103.02845 | Xingyu Chen | Xingyu Chen, Yufeng Liu, Chongyang Ma, Jianlong Chang, Huayan Wang,
Tian Chen, Xiaoyan Guo, Pengfei Wan, Wen Zheng | Camera-Space Hand Mesh Recovery via Semantic Aggregation and Adaptive
2D-1D Registration | CVPR2021 | CVPR2021 | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Recent years have witnessed significant progress in 3D hand mesh recovery.
Nevertheless, because of the intrinsic 2D-to-3D ambiguity, recovering
camera-space 3D information from a single RGB image remains challenging. To
tackle this problem, we divide camera-space mesh recovery into two sub-tasks,
i.e., root-relative mesh recovery and root recovery. First, joint landmarks and
silhouette are extracted from a single input image to provide 2D cues for the
3D tasks. In the root-relative mesh recovery task, we exploit semantic
relations among joints to generate a 3D mesh from the extracted 2D cues. Such
generated 3D mesh coordinates are expressed relative to a root position, i.e.,
wrist of the hand. In the root recovery task, the root position is registered
to the camera space by aligning the generated 3D mesh back to 2D cues, thereby
completing camera-space 3D mesh recovery. Our pipeline is novel in that (1) it
explicitly makes use of known semantic relations among joints and (2) it
exploits 1D projections of the silhouette and mesh to achieve robust
registration. Extensive experiments on popular datasets such as FreiHAND, RHD,
and Human3.6M demonstrate that our approach achieves state-of-the-art
performance on both root-relative mesh recovery and root recovery. Our code is
publicly available at https://github.com/SeanChenxy/HandMesh.
| [
{
"created": "Thu, 4 Mar 2021 05:46:04 GMT",
"version": "v1"
},
{
"created": "Wed, 31 Mar 2021 08:22:07 GMT",
"version": "v2"
}
] | 2022-04-01 | [
[
"Chen",
"Xingyu",
""
],
[
"Liu",
"Yufeng",
""
],
[
"Ma",
"Chongyang",
""
],
[
"Chang",
"Jianlong",
""
],
[
"Wang",
"Huayan",
""
],
[
"Chen",
"Tian",
""
],
[
"Guo",
"Xiaoyan",
""
],
[
"Wan",
"Pengfei",
""
],
[
"Zheng",
"Wen",
""
]
] |
2103.02854 | Dexter Neo | Vassilios Vonikakis, Dexter Neo, Stefan Winkler | Morphset: Augmenting categorical emotion datasets with dimensional affect
labels using face morphing | in Proc IEEE International Conference on Image Processing (ICIP),
Anchorage, Sep.2021 | 2021 IEEE International Conference on Image Processing (ICIP),
2021 | 10.1109/ICIP42928.2021.9506566 | null | cs.CV cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Emotion recognition and understanding is a vital component in human-machine
interaction. Dimensional models of affect such as those using valence and
arousal have advantages over traditional categorical ones due to the complexity
of emotional states in humans. However, dimensional emotion annotations are
difficult and expensive to collect, therefore they are not as prevalent in the
affective computing community. To address these issues, we propose a method to
generate synthetic images from existing categorical emotion datasets using face
morphing as well as dimensional labels in the circumplex space with full
control over the resulting sample distribution, while achieving augmentation
factors of 20x or more.
| [
{
"created": "Thu, 4 Mar 2021 06:33:06 GMT",
"version": "v1"
},
{
"created": "Wed, 16 Jun 2021 03:36:06 GMT",
"version": "v2"
}
] | 2021-09-01 | [
[
"Vonikakis",
"Vassilios",
""
],
[
"Neo",
"Dexter",
""
],
[
"Winkler",
"Stefan",
""
]
] |
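A toy sketch of the underlying augmentation idea: a morphed sample is an interpolation between two expression images, with a matching interpolation of (valence, arousal) coordinates in the circumplex as its dimensional label. The pixel-wise blend and the anchor coordinates below are illustrative stand-ins; real face morphing also warps facial geometry using landmarks.

```python
import numpy as np

# Illustrative (valence, arousal) anchors for two categorical emotions.
NEUTRAL = np.array([0.0, 0.0])
HAPPY = np.array([0.8, 0.5])

def morph(img_a, img_b, label_a, label_b, alpha):
    """Blend two images and interpolate their dimensional labels, alpha in [0, 1]."""
    img = (1 - alpha) * img_a + alpha * img_b
    label = (1 - alpha) * label_a + alpha * label_b
    return img, label

face_a = np.zeros((64, 64), dtype=np.float32)   # stand-ins for aligned face crops
face_b = np.ones((64, 64), dtype=np.float32)
samples = [morph(face_a, face_b, NEUTRAL, HAPPY, a) for a in np.linspace(0, 1, 21)]
print(len(samples))                             # 21 synthetic samples from a single pair
```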
2103.02937 | Silvio Barra Dr | Silvio Barra, Carmen Bisogni, Maria De Marsico, Stefano Ricciardi | Visual Question Answering: which investigated applications? | null | Pattern Recognition Letters 2021 | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Visual Question Answering (VQA) is an extremely stimulating and challenging
research area where Computer Vision (CV) and Natural Language Processing (NLP)
have recently met. In image captioning and video summarization, the semantic
information is completely contained in still images or video dynamics, and it
has only to be mined and expressed in a human-consistent way. Differently from
this, in VQA semantic information in the same media must be compared with the
semantics implied by a question expressed in natural language, doubling the
artificial intelligence-related effort. Some recent surveys about VQA
approaches have focused on methods underlying either the image-related
processing or the verbal-related one, or on the way to consistently fuse the
conveyed information. Possible applications are only suggested, and, in fact,
most cited works rely on general-purpose datasets that are used to assess the
building blocks of a VQA system. This paper rather considers the proposals that
focus on real-world applications, possibly using as benchmarks suitable data
bound to the application domain. The paper also reports about some recent
challenges in VQA research.
| [
{
"created": "Thu, 4 Mar 2021 10:38:06 GMT",
"version": "v1"
}
] | 2021-03-09 | [
[
"Barra",
"Silvio",
""
],
[
"Bisogni",
"Carmen",
""
],
[
"De Marsico",
"Maria",
""
],
[
"Ricciardi",
"Stefano",
""
]
] |
2103.02940 | Dmitry V. Dylov | Aleksandr Belov and Joel Stadelmann and Sergey Kastryulin and Dmitry
V. Dylov | Towards Ultrafast MRI via Extreme k-Space Undersampling and
Superresolution | Main text: 10 pages and 8 figures. 18 pages and 14 figures total
(Supplementary material included) | MICCAI 2021. Lecture Notes in Computer Science, vol 12906, pp
254-264 | 10.1007/978-3-030-87231-1_25 | null | cs.CV eess.IV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | We went below the MRI acceleration factors (a.k.a. k-space undersampling)
reported by all published papers that reference the original fastMRI challenge,
and then considered powerful deep learning based image enhancement methods to
compensate for the underresolved images. We thoroughly study the influence of
the sampling patterns, the undersampling and the downscaling factors, as well
as the recovery models on the final image quality for both the brain and the
knee fastMRI benchmarks. The quality of the reconstructed images surpasses that
of the other methods, yielding an MSE of 0.00114, a PSNR of 29.6 dB, and an
SSIM of 0.956 at x16 acceleration factor. More extreme undersampling factors of
x32 and x64 are also investigated, holding promise for certain clinical
applications such as computer-assisted surgery or radiation planning. We survey
5 expert radiologists to assess 100 pairs of images and show that the recovered
undersampled images statistically preserve their diagnostic value.
| [
{
"created": "Thu, 4 Mar 2021 10:45:01 GMT",
"version": "v1"
}
] | 2021-10-01 | [
[
"Belov",
"Aleksandr",
""
],
[
"Stadelmann",
"Joel",
""
],
[
"Kastryulin",
"Sergey",
""
],
[
"Dylov",
"Dmitry V.",
""
]
] |
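For intuition, retrospective k-space undersampling of the kind studied here can be emulated with a masked Fourier transform; the equispaced mask with a fully sampled low-frequency band below is an illustrative choice, not one of the paper's sampling patterns.

```python
import numpy as np

def undersample(image: np.ndarray, factor: int = 16, center_lines: int = 16) -> np.ndarray:
    """Zero-filled reconstruction from a masked (undersampled) k-space."""
    k = np.fft.fftshift(np.fft.fft2(image))
    mask = np.zeros(k.shape, dtype=bool)
    mask[::factor, :] = True                    # keep every factor-th phase-encode line
    c = k.shape[0] // 2
    mask[c - center_lines // 2 : c + center_lines // 2, :] = True  # keep low frequencies
    return np.abs(np.fft.ifft2(np.fft.ifftshift(k * mask)))

image = np.random.rand(256, 256)                # stand-in for an MR slice
zero_filled = undersample(image)                # aliased input for the recovery model
```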
2103.02943 | Jose Maria Font | Jose M. Font and Tobias Mahlmann | The Dota 2 Bot Competition | 6 pages | IEEE Transactions on Games 2018 | 10.1109/TG.2018.2834566 | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Multiplayer Online Battle Arena (MOBA) games are a recent huge success both in
the video game industry and the international eSports scene. These games
encourage team coordination and cooperation, short and long-term planning,
within a real-time combined action and strategy gameplay.
Artificial Intelligence and Computational Intelligence in Games research
competitions offer a wide variety of challenges regarding the study and
application of AI techniques to different game genres. These events are widely
accepted by the AI/CI community as a sort of AI benchmarking that strongly
influences many other research areas in the field.
This paper presents and describes in detail the Dota 2 Bot competition and
the Dota 2 AI framework that supports it. This challenge aims to join
MOBAs and AI/CI game competitions, inviting participants to submit AI
controllers for the successful MOBA \textit{Defense of the Ancients 2} (Dota 2)
to play in 1v1 matches, with the aim of fostering research on AI techniques
for real-time games. The Dota 2 AI framework makes use of the actual Dota 2
game modding capabilities to enable connecting external AI controllers to
actual Dota 2 game matches using the original Free-to-Play game.
| [
{
"created": "Thu, 4 Mar 2021 10:49:47 GMT",
"version": "v1"
}
] | 2021-03-05 | [
[
"Font",
"Jose M.",
""
],
[
"Mahlmann",
"Tobias",
""
]
] |
2103.03113 | Wei Huang | Wei Huang, Yayong Li, Weitao Du, Jie Yin, Richard Yi Da Xu, Ling Chen,
and Miao Zhang | Towards Deepening Graph Neural Networks: A GNTK-based Optimization
Perspective | 26 pages | ICLR 2022 | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Graph convolutional networks (GCNs) and their variants have achieved great
success in dealing with graph-structured data. Nevertheless, it is well known
that deep GCNs suffer from the over-smoothing problem, where node
representations tend to be indistinguishable as more layers are stacked up. The
theoretical research to date on deep GCNs has focused primarily on expressive
power rather than trainability, an optimization perspective. Compared to
expressivity, trainability attempts to address a more fundamental question:
Given a sufficiently expressive space of models, can we successfully find a
good solution via gradient descent-based optimizers? This work fills this gap
by exploiting the Graph Neural Tangent Kernel (GNTK), which governs the
optimization trajectory under gradient descent for wide GCNs. We formulate the
asymptotic behavior of the GNTK at large depth, which enables us to reveal that
the trainability of wide and deep GCNs decays at an exponential rate during
optimization. Additionally, we extend our theoretical framework to
analyze residual connection-based techniques, which are found to be merely able
to mitigate the exponential decay of trainability mildly. Inspired by our
theoretical insights on trainability, we propose Critical DropEdge, a
connectivity-aware and graph-adaptive sampling method, to alleviate the
exponential decay problem more fundamentally. Experimental evaluation
consistently confirms that our proposed method achieves better results than
relevant counterparts in both the infinite-width and finite-width regimes.
| [
{
"created": "Wed, 3 Mar 2021 11:06:12 GMT",
"version": "v1"
},
{
"created": "Wed, 6 Oct 2021 06:53:41 GMT",
"version": "v2"
},
{
"created": "Thu, 21 Apr 2022 11:10:33 GMT",
"version": "v3"
}
] | 2022-04-22 | [
[
"Huang",
"Wei",
""
],
[
"Li",
"Yayong",
""
],
[
"Du",
"Weitao",
""
],
[
"Yin",
"Jie",
""
],
[
"Da Xu",
"Richard Yi",
""
],
[
"Chen",
"Ling",
""
],
[
"Zhang",
"Miao",
""
]
] |
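For reference, the plain DropEdge baseline that Critical DropEdge builds on simply resamples a sparser graph each epoch; a minimal sketch follows (the connectivity-aware, graph-adaptive choice of the drop rate that makes it "Critical" is not implemented here).

```python
import numpy as np

def drop_edge(edge_index: np.ndarray, drop_rate: float, rng=None) -> np.ndarray:
    """Uniformly drop a fraction of edges (edge_index is 2 x num_edges, COO format)."""
    rng = rng or np.random.default_rng()
    keep = rng.random(edge_index.shape[1]) >= drop_rate
    return edge_index[:, keep]

edges = np.array([[0, 1, 2, 3],
                  [1, 2, 3, 0]])
print(drop_edge(edges, drop_rate=0.5))   # a random sparser subgraph, resampled per epoch
```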
2103.03133 | \v{S}imon Bil\'ik | Simon Bilik, Lukas Kratochvila, Adam Ligocki, Ondrej Bostik, Tomas
Zemcik, Matous Hybl, Karel Horak, Ludek Zalud | Visual diagnosis of the Varroa destructor parasitic mite in honeybees
using object detector techniques | null | Sensors, 21-8 (2021), 2764-2780 | 10.3390/s21082764 | BUT171160 | cs.CV cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | The Varroa destructor mite is one of the most dangerous Honey Bee (Apis
mellifera) parasites worldwide and the bee colonies have to be regularly
monitored in order to control its spread. Here we present an object detector
based method for health state monitoring of bee colonies. This method has the
potential for online measurement and processing. In our experiment, we compare
the YOLO and SSD object detectors along with the Deep SVDD anomaly detector.
Based on the custom dataset with 600 ground-truth images of healthy and
infected bees in various scenes, the detectors reached a high F1 score up to
0.874 in the infected bee detection and up to 0.727 in the detection of the
Varroa destructor mite itself. The results demonstrate the potential of this
approach, which will be later used in the real-time computer vision based honey
bee inspection system. To the best of our knowledge, this study is the first
one using object detectors for this purpose. We expect that the performance of
those object detectors will enable us to inspect the health status of the honey
bee colonies.
| [
{
"created": "Fri, 26 Feb 2021 11:01:31 GMT",
"version": "v1"
}
] | 2023-05-01 | [
[
"Bilik",
"Simon",
""
],
[
"Kratochvila",
"Lukas",
""
],
[
"Ligocki",
"Adam",
""
],
[
"Bostik",
"Ondrej",
""
],
[
"Zemcik",
"Tomas",
""
],
[
"Hybl",
"Matous",
""
],
[
"Horak",
"Karel",
""
],
[
"Zalud",
"Ludek",
""
]
] |
2103.03231 | Thomas Neff | Thomas Neff, Pascal Stadlbauer, Mathias Parger, Andreas Kurz, Joerg H.
Mueller, Chakravarty R. Alla Chaitanya, Anton Kaplanyan, Markus Steinberger | DONeRF: Towards Real-Time Rendering of Compact Neural Radiance Fields
using Depth Oracle Networks | Accepted to EGSR 2021 in the CGF track; Project website:
https://depthoraclenerf.github.io/ | Computer Graphics Forum Volume 40, Issue 4, 2021 | 10.1111/cgf.14340 | null | cs.CV cs.GR | http://creativecommons.org/licenses/by/4.0/ | The recent research explosion around implicit neural representations, such as
NeRF, shows that there is immense potential for implicitly storing high-quality
scene and lighting information in compact neural networks. However, one major
limitation preventing the use of NeRF in real-time rendering applications is
the prohibitive computational cost of excessive network evaluations along each
view ray, requiring dozens of petaFLOPS. In this work, we bring compact neural
representations closer to practical rendering of synthetic content in real-time
applications, such as games and virtual reality. We show that the number of
samples required for each view ray can be significantly reduced when samples
are placed around surfaces in the scene without compromising image quality. To
this end, we propose a depth oracle network that predicts ray sample locations
for each view ray with a single network evaluation. We show that using a
classification network around logarithmically discretized and spherically
warped depth values is essential to encode surface locations rather than
directly estimating depth. The combination of these techniques leads to DONeRF,
our compact dual network design with a depth oracle network as its first step
and a locally sampled shading network for ray accumulation. With DONeRF, we
reduce the inference costs by up to 48x compared to NeRF when conditioning on
available ground truth depth information. Compared to concurrent acceleration
methods for raymarching-based neural representations, DONeRF does not require
additional memory for explicit caching or acceleration structures, and can
render interactively (20 frames per second) on a single GPU.
| [
{
"created": "Thu, 4 Mar 2021 18:55:09 GMT",
"version": "v1"
},
{
"created": "Thu, 11 Mar 2021 18:57:56 GMT",
"version": "v2"
},
{
"created": "Tue, 11 May 2021 09:56:38 GMT",
"version": "v3"
},
{
"created": "Fri, 25 Jun 2021 09:05:10 GMT",
"version": "v4"
}
] | 2021-06-30 | [
[
"Neff",
"Thomas",
""
],
[
"Stadlbauer",
"Pascal",
""
],
[
"Parger",
"Mathias",
""
],
[
"Kurz",
"Andreas",
""
],
[
"Mueller",
"Joerg H.",
""
],
[
"Chaitanya",
"Chakravarty R. Alla",
""
],
[
"Kaplanyan",
"Anton",
""
],
[
"Steinberger",
"Markus",
""
]
] |
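The logarithmic discretization of depth that the depth oracle classifies over can be sketched as follows; the near/far planes and bin count are illustrative assumptions, and the paper's additional spherical warping is omitted.

```python
import numpy as np

def log_depth_edges(near: float, far: float, num_bins: int) -> np.ndarray:
    """Logarithmically spaced depth bin edges between the near and far planes."""
    return np.exp(np.linspace(np.log(near), np.log(far), num_bins + 1))

def depth_to_class(depth: np.ndarray, edges: np.ndarray) -> np.ndarray:
    """Index of the logarithmic bin containing each depth value."""
    return np.clip(np.searchsorted(edges, depth) - 1, 0, len(edges) - 2)

edges = log_depth_edges(near=0.1, far=100.0, num_bins=128)
print(depth_to_class(np.array([0.1, 1.0, 10.0, 99.0]), edges))
```

Logarithmic spacing concentrates resolution near the camera, which is consistent with the paper's finding that classifying over such bins encodes surface locations better than direct depth regression.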
2103.03305 | Kevin Xu | Mohammadreza Nemati, Haonan Zhang, Michael Sloma, Dulat Bekbolsynov,
Hong Wang, Stanislaw Stepkowski, and Kevin S. Xu | Predicting Kidney Transplant Survival using Multiple Feature
Representations for HLAs | Extended version of AIME 2021 conference paper | Proceedings of the 19th International Conference on Artificial
Intelligence in Medicine (2021) 51-60 | null | null | cs.LG cs.AI stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Kidney transplantation can significantly enhance living standards for people
suffering from end-stage renal disease. A significant factor that affects graft
survival time (the time until the transplant fails and the patient requires
another transplant) for kidney transplantation is the compatibility of the
Human Leukocyte Antigens (HLAs) between the donor and recipient. In this paper,
we propose 4 new biologically-relevant feature representations for
incorporating HLA information into machine learning-based survival analysis
algorithms. We evaluate our proposed HLA feature representations on a database
of over 100,000 transplants and find that they improve prediction accuracy by
about 1%, modest at the patient level but potentially significant at a societal
level. Accurate prediction of survival times can improve transplant survival
outcomes, enabling better allocation of donors to recipients and reducing the
number of re-transplants due to graft failure with poorly matched donors.
| [
{
"created": "Thu, 4 Mar 2021 20:22:47 GMT",
"version": "v1"
},
{
"created": "Wed, 6 Jul 2022 00:57:11 GMT",
"version": "v2"
}
] | 2022-07-07 | [
[
"Nemati",
"Mohammadreza",
""
],
[
"Zhang",
"Haonan",
""
],
[
"Sloma",
"Michael",
""
],
[
"Bekbolsynov",
"Dulat",
""
],
[
"Wang",
"Hong",
""
],
[
"Stepkowski",
"Stanislaw",
""
],
[
"Xu",
"Kevin S.",
""
]
] |
2103.03328 | Aleksandar Vakanski | Aleksandar Vakanski, Min Xian | Evaluation of Complexity Measures for Deep Learning Generalization in
Medical Image Analysis | 15 pages, 4 figures | IEEE International Workshop on Machine Learning and Signal
Processing (MLSP 2021), Gold Coast, Australia, pp. 1-6, 2021 | 10.1109/MLSP52302.2021.9596501 | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The generalization performance of deep learning models for medical image
analysis often decreases on images collected with different devices for data
acquisition, device settings, or patient population. A better understanding of
the generalization capacity on new images is crucial for clinicians'
trustworthiness in deep learning. Although significant research efforts have
been recently directed toward establishing generalization bounds and complexity
measures, still, there is often a significant discrepancy between the predicted
and actual generalization performance. Moreover, related large empirical studies
have been primarily based on validation with general-purpose image datasets.
This paper presents an empirical study that investigates the correlation
between 25 complexity measures and the generalization abilities of supervised
deep learning classifiers for breast ultrasound images. The results indicate
that PAC-Bayes flatness-based and path norm-based measures produce the most
consistent explanation for the combination of models and data. We also
investigate the use of a multi-task classification and segmentation approach
for breast images, and report that such a learning approach acts as an implicit
regularizer and is conducive toward improved generalization.
| [
{
"created": "Thu, 4 Mar 2021 20:58:22 GMT",
"version": "v1"
},
{
"created": "Mon, 8 Mar 2021 02:50:47 GMT",
"version": "v2"
},
{
"created": "Wed, 19 Jul 2023 16:19:53 GMT",
"version": "v3"
}
] | 2023-07-20 | [
[
"Vakanski",
"Aleksandar",
""
],
[
"Xian",
"Min",
""
]
] |
2103.03335 | Leonid Boytsov | Iurii Mokrii, Leonid Boytsov, Pavel Braslavski | A Systematic Evaluation of Transfer Learning and Pseudo-labeling with
BERT-based Ranking Models | null | SIGIR 2021 (44th International ACM SIGIR Conference on Research
and Development in Information Retrieval) | 10.1145/3404835.3463093 | null | cs.IR cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Due to high annotation costs, making the best use of existing human-created
training data is an important research direction. We, therefore, carry out a
systematic evaluation of transferability of BERT-based neural ranking models
across five English datasets. Previous studies focused primarily on zero-shot
and few-shot transfer from a large dataset to a dataset with a small number of
queries. In contrast, each of our collections has a substantial number of
queries, which enables a full-shot evaluation mode and improves reliability of
our results. Furthermore, since the licences of source datasets often prohibit
commercial use, we compare transfer learning to training on pseudo-labels
generated by a BM25 scorer. We find that training on pseudo-labels -- possibly
with subsequent fine-tuning using a modest number of annotated queries -- can
produce a competitive or better model compared to transfer learning. Yet, it is
necessary to improve the stability and/or effectiveness of the few-shot
training, which, sometimes, can degrade performance of a pretrained model.
| [
{
"created": "Thu, 4 Mar 2021 21:08:06 GMT",
"version": "v1"
},
{
"created": "Thu, 11 Mar 2021 16:34:14 GMT",
"version": "v2"
},
{
"created": "Tue, 22 Jun 2021 03:18:49 GMT",
"version": "v3"
},
{
"created": "Mon, 22 Nov 2021 03:51:12 GMT",
"version": "v4"
}
] | 2021-11-23 | [
[
"Mokrii",
"Iurii",
""
],
[
"Boytsov",
"Leonid",
""
],
[
"Braslavski",
"Pavel",
""
]
] |
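A minimal sketch of pseudo-labeling with a BM25 scorer, in the spirit described above: the top-ranked document for a query is treated as a positive and lower-ranked ones as negatives. The `rank_bm25` package and the helper below are illustrative choices, not the paper's implementation.

```python
from rank_bm25 import BM25Okapi   # pip install rank-bm25

docs = ["the cat sat on the mat", "dogs chase cats", "stock markets fell today"]
bm25 = BM25Okapi([d.split() for d in docs])

def pseudo_label(query: str, num_negatives: int = 1):
    """Top BM25 hit becomes the pseudo-positive; lower-ranked docs become negatives."""
    order = bm25.get_scores(query.split()).argsort()[::-1]
    return docs[order[0]], [docs[i] for i in order[1 : 1 + num_negatives]]

positive, negatives = pseudo_label("cats and dogs")
print(positive, negatives)        # training pairs for a BERT-based ranker
```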
2103.03359 | Amol Kelkar | Amol Kelkar | Cognitive Homeostatic Agents | Accepted at AAMAS2021 Blue Sky Ideas Track | In Proc. of the 20th International Conference on Autonomous Agents
and Multiagent Systems (AAMAS 2021), Online, May 3-7, 2021, IFAAMAS, 5 pages | 10.5555/3461017.3461021 | null | cs.AI cs.LG cs.MA cs.NE | http://creativecommons.org/licenses/by/4.0/ | The human brain has been used as an inspiration for building autonomous agents,
but it is not obvious what level of computational description of the brain one
should use. This has led to overly opinionated symbolic approaches and overly
unstructured connectionist approaches. We propose that using homeostasis as the
computational description provides a good compromise. Similar to how
physiological homeostasis is the regulation of certain homeostatic variables,
cognition can be interpreted as the regulation of certain 'cognitive
homeostatic variables'. We present an outline of a Cognitive Homeostatic Agent,
built as a hierarchy of physiological and cognitive homeostatic subsystems and
describe structures and processes to guide future exploration. We expect this
to be a fruitful line of investigation towards building sophisticated
artificial agents that can act flexibly in complex environments, and produce
behaviors indicating planning, thinking and feelings.
| [
{
"created": "Sat, 27 Feb 2021 07:29:43 GMT",
"version": "v1"
}
] | 2021-05-04 | [
[
"Kelkar",
"Amol",
""
]
] |
2103.03413 | Yi-Lin Tsai | Yi-Lin Tsai (1), Chetanya Rastogi (2), Peter K. Kitanidis (1, 3, and
4), Christopher B. Field (3, 5, and 6) ((1) Department of Civil and
Environmental Engineering, Stanford University, Stanford, CA, USA, (2)
Department of Computer Science, Stanford University, Stanford, CA, USA, (3)
Woods Institute for the Environment, Stanford University, Stanford, CA, USA,
(4) Institute for Computational and Mathematical Engineering, Stanford
University, Stanford, CA, USA, (5) Department of Biology, Stanford
University, Stanford, CA, USA, (6) Department of Earth System Science,
Stanford University, Stanford, CA, USA) | Routing algorithms as tools for integrating social distancing with
emergency evacuation | null | Sci Rep 11, 19623 (2021) | 10.1038/s41598-021-98643-z | null | cs.AI cs.CY cs.HC cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | One of the lessons from the COVID-19 pandemic is the importance of social
distancing, even in challenging circumstances such as pre-hurricane evacuation.
To explore the implications of integrating social distancing with evacuation
operations, we describe this evacuation process as a Capacitated Vehicle
Routing Problem (CVRP) and solve it using a DNN (Deep Neural Network)-based
solution (Deep Reinforcement Learning) and a non-DNN solution (Sweep
Algorithm). A central question is whether Deep Reinforcement Learning provides
sufficient extra routing efficiency to accommodate increased social distancing
in a time-constrained evacuation operation. We found that, in comparison to the
Sweep Algorithm, Deep Reinforcement Learning can provide decision-makers with
more efficient routing. However, the evacuation time saved by Deep
Reinforcement Learning does not come close to compensating for the extra time
required for social distancing, and its advantage disappears as the emergency
vehicle capacity approaches the number of people per household.
| [
{
"created": "Fri, 5 Mar 2021 01:12:31 GMT",
"version": "v1"
},
{
"created": "Mon, 3 May 2021 22:43:07 GMT",
"version": "v2"
},
{
"created": "Mon, 10 May 2021 02:26:53 GMT",
"version": "v3"
},
{
"created": "Wed, 13 Oct 2021 18:33:08 GMT",
"version": "v4"
}
] | 2021-10-15 | [
[
"Tsai",
"Yi-Lin",
"",
"1, 3, and\n 4"
],
[
"Rastogi",
"Chetanya",
"",
"1, 3, and\n 4"
],
[
"Kitanidis",
"Peter K.",
"",
"1, 3, and\n 4"
],
[
"Field",
"Christopher B.",
"",
"3, 5, and 6"
]
] |
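Of the two solvers compared above, the Sweep Algorithm is simple enough to sketch: households are sorted by polar angle around the depot and the ordering is cut into routes whenever vehicle capacity would be exceeded. The code below is a generic textbook version, not the authors' implementation.

```python
import math

def sweep_routes(depot, stops, demands, capacity):
    """Sweep heuristic for the CVRP: angular sort, then greedy capacity cuts."""
    order = sorted(range(len(stops)),
                   key=lambda i: math.atan2(stops[i][1] - depot[1],
                                            stops[i][0] - depot[0]))
    routes, route, load = [], [], 0
    for i in order:
        if load + demands[i] > capacity:    # start a new vehicle
            routes.append(route)
            route, load = [], 0
        route.append(i)
        load += demands[i]
    if route:
        routes.append(route)
    return routes                            # lists of stop indices, one per vehicle

print(sweep_routes((0, 0), [(1, 1), (-1, 2), (2, -1), (-2, -2)], [3, 2, 4, 1], 5))
```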
2103.03438 | Tao Zhang | Mengting Xu, Tao Zhang, Zhongnian Li, Mingxia Liu, Daoqiang Zhang | Towards Evaluating the Robustness of Deep Diagnostic Models by
Adversarial Attack | This version was accepted in the journal Medical Image Analysis
(MedIA) | Medical Image Analysis 69 (2021): 101977 | 10.1016/j.media.2021.101977 | null | cs.CV cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Deep learning models (with neural networks) have been widely used in
challenging tasks such as computer-aided disease diagnosis based on medical
images. Recent studies have shown deep diagnostic models may not be robust in
the inference process and may pose severe security concerns in clinical
practice. Among all the factors that make the model not robust, the most
serious one is adversarial examples. The so-called "adversarial example" is a
well-designed perturbation that is not easily perceived by humans but results
in a false output of deep diagnostic models with high confidence. In this
paper, we evaluate the robustness of deep diagnostic models by adversarial
attack. Specifically, we have performed two types of adversarial attacks to
three deep diagnostic models in both single-label and multi-label
classification tasks, and found that these models are not reliable when
attacked by adversarial example. We have further explored how adversarial
examples attack the models, by analyzing their quantitative classification
results, intermediate features, discriminability of features and correlation of
estimated labels for both original/clean images and those adversarial ones. We
have also designed two new defense methods to handle adversarial examples in
deep diagnostic models, i.e., Multi-Perturbations Adversarial Training (MPAdvT)
and Misclassification-Aware Adversarial Training (MAAdvT). The experimental
results have shown that the use of defense methods can significantly improve
the robustness of deep diagnostic models against adversarial attacks.
| [
{
"created": "Fri, 5 Mar 2021 02:24:47 GMT",
"version": "v1"
}
] | 2021-03-08 | [
[
"Xu",
"Mengting",
""
],
[
"Zhang",
"Tao",
""
],
[
"Li",
"Zhongnian",
""
],
[
"Liu",
"Mingxia",
""
],
[
"Zhang",
"Daoqiang",
""
]
] |
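As one standard instance of the kind of attack evaluated here, the Fast Gradient Sign Method perturbs an input along the sign of the loss gradient; the paper's two attack types are not necessarily FGSM, so treat this as an illustrative sketch.

```python
import torch

def fgsm(model, x, y, epsilon=0.03):
    """Craft an adversarial example with one signed-gradient step."""
    x = x.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(x), y)
    loss.backward()
    return (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

# Toy stand-in for a deep diagnostic model.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
x = torch.rand(4, 1, 28, 28)
y = torch.randint(0, 10, (4,))
x_adv = fgsm(model, x, y)   # visually similar to x, but may flip predictions
```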
2103.03446 | Jialong Tang | Jinsong Su, Jialong Tang, Hui Jiang, Ziyao Lu, Yubin Ge, Linfeng Song,
Deyi Xiong, Le Sun, Jiebo Luo | Enhanced Aspect-Based Sentiment Analysis Models with Progressive
Self-supervised Attention Learning | 31 pages. arXiv admin note: text overlap with arXiv:1906.01213 | Artificial Intelligence 2021 | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In aspect-based sentiment analysis (ABSA), many neural models are equipped
with an attention mechanism to quantify the contribution of each context word
to sentiment prediction. However, such a mechanism suffers from one drawback:
only a few frequent words with sentiment polarities tend to be taken into
consideration for the final sentiment decision, while abundant infrequent
sentiment words are ignored by models. To deal with this issue, we propose a progressive
self-supervised attention learning approach for attentional ABSA models. In
this approach, we iteratively perform sentiment prediction on all training
instances, and continually learn useful attention supervision information in
the meantime. During training, at each iteration, context words with the
highest impact on sentiment prediction, identified based on their attention
weights or gradients, are extracted as words with active/misleading influence
on the correct/incorrect prediction for each instance. Words extracted in this
way are masked for subsequent iterations. To exploit these extracted words for
refining ABSA models, we augment the conventional training objective with a
regularization term that encourages ABSA models to not only take full advantage
of the extracted active context words but also decrease the weights of those
misleading words. We integrate the proposed approach into three
state-of-the-art neural ABSA models. Experiment results and in-depth analyses
show that our approach yields better attention results and significantly
enhances the performance of all three models. We release the source code and
trained models at https://github.com/DeepLearnXMU/PSSAttention.
| [
{
"created": "Fri, 5 Mar 2021 02:50:05 GMT",
"version": "v1"
}
] | 2021-03-08 | [
[
"Su",
"Jinsong",
""
],
[
"Tang",
"Jialong",
""
],
[
"Jiang",
"Hui",
""
],
[
"Lu",
"Ziyao",
""
],
[
"Ge",
"Yubin",
""
],
[
"Song",
"Linfeng",
""
],
[
"Xiong",
"Deyi",
""
],
[
"Sun",
"Le",
""
],
[
"Luo",
"Jiebo",
""
]
] |
2103.03448 | Jialong Tang | Jialong Tang, Yaojie Lu, Hongyu Lin, Xianpei Han, Le Sun, Xinyan Xiao,
Hua Wu | Syntactic and Semantic-driven Learning for Open Information Extraction | 11 pages | Findings of ACL: EMNLP 2020 | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | One of the biggest bottlenecks in building accurate, high coverage neural
open IE systems is the need for large labelled corpora. The diversity of open
domain corpora and the variety of natural language expressions further
exacerbate this problem. In this paper, we propose a syntactic and
semantic-driven learning approach, which can learn neural open IE models
without any human-labelled data by leveraging syntactic and semantic knowledge
as noisier, higher-level supervisions. Specifically, we first employ syntactic
patterns as data labelling functions and pretrain a base model using the
generated labels. Then we propose a syntactic and semantic-driven reinforcement
learning algorithm, which can effectively generalize the base model to open
situations with high accuracy. Experimental results show that our approach
significantly outperforms the supervised counterparts, and can even achieve
performance competitive with the supervised state-of-the-art (SoA) model.
| [
{
"created": "Fri, 5 Mar 2021 02:59:40 GMT",
"version": "v1"
}
] | 2021-03-08 | [
[
"Tang",
"Jialong",
""
],
[
"Lu",
"Yaojie",
""
],
[
"Lin",
"Hongyu",
""
],
[
"Han",
"Xianpei",
""
],
[
"Sun",
"Le",
""
],
[
"Xiao",
"Xinyan",
""
],
[
"Wu",
"Hua",
""
]
] |
2103.03460 | Hui Tang | Hui Tang and Kui Jia | Vicinal and categorical domain adaptation | Accepted by Pattern Recognition | Pattern Recognition, Volume 115, July 2021, 107907 | 10.1016/j.patcog.2021.107907 | null | cs.CV stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Unsupervised domain adaptation aims to learn a task classifier that performs
well on the unlabeled target domain, by utilizing the labeled source domain.
Inspiring results have been acquired by learning domain-invariant deep features
via domain-adversarial training. However, its parallel design of task and
domain classifiers limits the ability to achieve a finer category-level domain
alignment. To promote categorical domain adaptation (CatDA), based on a joint
category-domain classifier, we propose novel losses of adversarial training at
both domain and category levels. Since the joint classifier can be regarded as
a concatenation of individual task classifiers respectively for the two
domains, our design principle is to enforce consistency of category predictions
between the two task classifiers. Moreover, we propose a concept of vicinal
domains whose instances are produced by a convex combination of pairs of
instances respectively from the two domains. Intuitively, alignment of the
possibly infinite number of vicinal domains enhances that of original domains.
We propose novel adversarial losses for vicinal domain adaptation (VicDA) based
on CatDA, leading to Vicinal and Categorical Domain Adaptation (ViCatDA). We
also propose Target Discriminative Structure Recovery (TDSR) to recover the
intrinsic target discrimination damaged by adversarial feature alignment. We
also analyze the principles underlying the ability of our key designs to align
the joint distributions. Extensive experiments on several benchmark datasets
demonstrate that we achieve the new state of the art.
| [
{
"created": "Fri, 5 Mar 2021 03:47:24 GMT",
"version": "v1"
}
] | 2021-03-08 | [
[
"Tang",
"Hui",
""
],
[
"Jia",
"Kui",
""
]
] |
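The vicinal-domain construction reads like a mixup-style convex combination of source/target pairs; below is a minimal sketch under that reading (the Beta-distributed mixing coefficient is an assumption borrowed from mixup, not necessarily the paper's choice).

```python
import numpy as np

def vicinal_batch(x_src, x_tgt, alpha=1.0, rng=None):
    """Vicinal instances as convex combinations of paired source/target features."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha, size=(len(x_src), 1))
    return lam * x_src + (1 - lam) * x_tgt, lam   # lam can also soften the domain label

x_s = np.random.rand(8, 16)    # a batch of source features
x_t = np.random.rand(8, 16)    # a batch of target features
x_vicinal, lam = vicinal_batch(x_s, x_t)
```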
2103.03483 | Md Mohaimenuzzaman | Md Mohaimenuzzaman, Christoph Bergmeir, Ian Thomas West and Bernd
Meyer | Environmental Sound Classification on the Edge: A Pipeline for Deep
Acoustic Networks on Extremely Resource-Constrained Devices | null | Pattern Recognition, p.109025 (2022) | 10.1016/j.patcog.2022.109025 | null | cs.SD cs.CV cs.LG eess.AS | http://creativecommons.org/licenses/by/4.0/ | Significant efforts are being invested to bring state-of-the-art
classification and recognition to edge devices with extreme resource
constraints (memory, speed, and lack of GPU support). Here, we demonstrate the
first deep network for acoustic recognition that is small, flexible and
compression-friendly yet achieves state-of-the-art performance for raw audio
classification. Rather than handcrafting a once-off solution, we present a
generic pipeline that automatically converts a large deep convolutional network
via compression and quantization into a network for resource-impoverished edge
devices. After introducing ACDNet, which produces above state-of-the-art
accuracy on ESC-10 (96.65%), ESC-50 (87.10%), UrbanSound8K (84.45%) and
AudioEvent (92.57%), we describe the compression pipeline and show that it
allows us to achieve 97.22% size reduction and 97.28% FLOP reduction while
maintaining close to state-of-the-art accuracy 96.25%, 83.65%, 78.27% and
89.69% on these datasets. We describe a successful implementation on a standard
off-the-shelf microcontroller and, beyond laboratory benchmarks, report
successful tests on real-world datasets.
| [
{
"created": "Fri, 5 Mar 2021 05:52:31 GMT",
"version": "v1"
},
{
"created": "Mon, 22 Mar 2021 00:07:25 GMT",
"version": "v2"
},
{
"created": "Tue, 6 Apr 2021 05:06:47 GMT",
"version": "v3"
},
{
"created": "Tue, 20 Sep 2022 05:10:43 GMT",
"version": "v4"
}
] | 2022-09-21 | [
[
"Mohaimenuzzaman",
"Md",
""
],
[
"Bergmeir",
"Christoph",
""
],
[
"West",
"Ian Thomas",
""
],
[
"Meyer",
"Bernd",
""
]
] |
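One building block of a compression-and-quantization pipeline like the one described is magnitude pruning; the sketch below shows only that step, with an illustrative sparsity level (ACDNet's actual pipeline also quantizes and fine-tunes).

```python
import torch

def magnitude_prune(weight: torch.Tensor, sparsity: float) -> torch.Tensor:
    """Zero out the smallest-magnitude weights, keeping the network's shape intact."""
    k = int(weight.numel() * sparsity)          # number of weights to remove
    threshold = weight.abs().flatten().kthvalue(k).values
    return weight * (weight.abs() > threshold)

w = torch.randn(64, 128)
w_sparse = magnitude_prune(w, sparsity=0.9)
print((w_sparse == 0).float().mean().item())    # ~0.9 of the weights are now zero
```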
2103.03509 | Seongsik Park | Seongsik Park and Harksoo Kim | Dual Pointer Network for Fast Extraction of Multiple Relations in a
Sentence | null | Applied Sciences (SI: Natural Language Processing: Emerging Neural
Approaches and Applications), Vol.10(11), 2020 | 10.3390/app10113851 | null | cs.CL | http://creativecommons.org/licenses/by-sa/4.0/ | Relation extraction is a type of information extraction task that recognizes
semantic relationships between entities in a sentence. Many previous studies
have focused on extracting only one semantic relation between two entities in a
single sentence. However, multiple entities in a sentence are associated
through various relations. To address this issue, we propose a relation
extraction model based on a dual pointer network with a multi-head attention
mechanism. The proposed model finds n-to-1 subject-object relations using a
forward object decoder. Then, it finds 1-to-n subject-object relations using a
backward subject decoder. Our experiments confirmed that the proposed model
outperformed previous models, with an F1-score of 80.8% for the ACE-2005 corpus
and an F1-score of 78.3% for the NYT corpus.
| [
{
"created": "Fri, 5 Mar 2021 07:36:54 GMT",
"version": "v1"
}
] | 2021-03-08 | [
[
"Park",
"Seongsik",
""
],
[
"Kim",
"Harksoo",
""
]
] |
2103.03518 | Julen Balzategui | Julen Balzategui, Luka Eciolaza, and Daniel Maestro-Watson | Anomaly detection and automatic labeling for solar cell quality
inspection based on Generative Adversarial Network | 20 pages, 10 figures, 6 tables. This article is part of the special
issue "Condition Monitoring, Field Inspection and Fault Diagnostic Methods
for Photovoltaic Systems" Published in MDPI - Sensors: see
https://www.mdpi.com/journal/sensors/special_issues/Condition_Monitoring_Field_Inspection_and_Fault_Diagnostic_Methods_for_Photovoltaic_Systems | Sensors 2021, volume 21, issue 13, article-number 4361 | 10.3390/s21134361 | null | cs.CV cs.LG eess.IV | http://creativecommons.org/licenses/by/4.0/ | Quality inspection applications in industry are required to move towards a
zero-defect manufacturing scenario, with non-destructive inspection and
traceability of 100% of produced parts. Developing robust fault detection and
classification models from the start-up of the lines is challenging due to the
difficulty in getting enough representative samples of the faulty patterns and
the need to manually label them. This work presents a methodology to develop a
robust inspection system, targeting these peculiarities, in the context of
solar cell manufacturing. The methodology is divided into two phases: In the
first phase, an anomaly detection model based on a Generative Adversarial
Network (GAN) is employed. This model enables the detection and localization of
anomalous patterns within the solar cells from the beginning, using only
non-defective samples for training and without any manual labeling involved. In
a second stage, as defective samples arise, the detected anomalies will be used
as automatically generated annotations for the supervised training of a Fully
Convolutional Network that is capable of detecting multiple types of faults.
The experimental results using 1873 EL images of monocrystalline cells show
that (a) the anomaly detection scheme can be used to start detecting features
with very little available data, (b) the anomaly detection may serve as
automatic labeling in order to train a supervised model, and (c) segmentation
and classification results of supervised models trained with automatic labels
are comparable to the ones obtained from the models trained with manual labels.
| [
{
"created": "Fri, 5 Mar 2021 07:53:59 GMT",
"version": "v1"
},
{
"created": "Wed, 7 Jul 2021 08:08:20 GMT",
"version": "v2"
}
] | 2021-07-08 | [
[
"Balzategui",
"Julen",
""
],
[
"Eciolaza",
"Luka",
""
],
[
"Maestro-Watson",
"Daniel",
""
]
] |
2103.03638 | Mark Niklas M\"uller | Mark Niklas M\"uller, Gleb Makarchuk, Gagandeep Singh, Markus
P\"uschel, Martin Vechev | PRIMA: General and Precise Neural Network Certification via Scalable
Convex Hull Approximations | 29 pages, 18 figures, 6 tables | Proceedings of the ACM on Programming Languages, Volume 6, Issue
POPL, January 2022, Article No.: 43, pp 1-33 | 10.1145/3498704 | null | cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Formal verification of neural networks is critical for their safe adoption in
real-world applications. However, designing a precise and scalable verifier
which can handle different activation functions, realistic network
architectures and relevant specifications remains an open and difficult
challenge. In this paper, we take a major step forward in addressing this
challenge and present a new verification framework, called PRIMA. PRIMA is both
(i) general: it handles any non-linear activation function, and (ii) precise:
it computes precise convex abstractions involving multiple neurons via novel
convex hull approximation algorithms that leverage concepts from computational
geometry. The algorithms have polynomial complexity, yield fewer constraints,
and minimize precision loss. We evaluate the effectiveness of PRIMA on a
variety of challenging tasks from prior work. Our results show that PRIMA is
significantly more precise than the state-of-the-art, verifying robustness to
input perturbations for up to 20%, 30%, and 34% more images than existing work
on ReLU-, Sigmoid-, and Tanh-based networks, respectively. Further, PRIMA
enables, for the first time, the precise verification of a realistic neural
network for autonomous driving within a few minutes.
| [
{
"created": "Fri, 5 Mar 2021 12:53:24 GMT",
"version": "v1"
},
{
"created": "Thu, 22 Apr 2021 15:42:07 GMT",
"version": "v2"
},
{
"created": "Mon, 28 Feb 2022 16:54:50 GMT",
"version": "v3"
}
] | 2022-03-01 | [
[
"Müller",
"Mark Niklas",
""
],
[
"Makarchuk",
"Gleb",
""
],
[
"Singh",
"Gagandeep",
""
],
[
"Püschel",
"Markus",
""
],
[
"Vechev",
"Martin",
""
]
] |
2103.03653 | Maciej Besta | Maciej Besta, Zur Vonarburg-Shmaria, Yannick Schaffner, Leonardo
Schwarz, Grzegorz Kwasniewski, Lukas Gianinazzi, Jakub Beranek, Kacper Janda,
Tobias Holenstein, Sebastian Leisinger, Peter Tatkowski, Esref Ozdemir,
Adrian Balla, Marcin Copik, Philipp Lindenberger, Pavel Kalvoda, Marek
Konieczny, Onur Mutlu, Torsten Hoefler | GraphMineSuite: Enabling High-Performance and Programmable Graph Mining
Algorithms with Set Algebra | null | International Conference on Very Large Data Bases (VLDB), 2021 | null | null | cs.DC cs.CV cs.DS cs.MS cs.PF | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose GraphMineSuite (GMS): the first benchmarking suite for graph
mining that facilitates evaluating and constructing high-performance graph
mining algorithms. First, GMS comes with a benchmark specification based on
extensive literature review, prescribing representative problems, algorithms,
and datasets. Second, GMS offers a carefully designed software platform for
seamless testing of different fine-grained elements of graph mining algorithms,
such as graph representations or algorithm subroutines. The platform includes
parallel implementations of more than 40 considered baselines, and it
facilitates developing complex and fast mining algorithms. High modularity is
possible by harnessing set algebra operations such as set intersection and
difference, which enables breaking complex graph mining algorithms into simple
building blocks that can be separately experimented with. GMS is supported with
a broad concurrency analysis for portability in performance insights, and a
novel performance metric to assess the throughput of graph mining algorithms,
enabling more insightful evaluation. As use cases, we harness GMS to rapidly
redesign and accelerate state-of-the-art baselines of core graph mining
problems: degeneracy reordering (by up to >2x), maximal clique listing (by up
to >9x), k-clique listing (by 1.1x), and subgraph isomorphism (by up to 2.5x),
also obtaining better theoretical performance bounds.
| [
{
"created": "Fri, 5 Mar 2021 13:26:18 GMT",
"version": "v1"
}
] | 2023-08-01 | [
[
"Besta",
"Maciej",
""
],
[
"Vonarburg-Shmaria",
"Zur",
""
],
[
"Schaffner",
"Yannick",
""
],
[
"Schwarz",
"Leonardo",
""
],
[
"Kwasniewski",
"Grzegorz",
""
],
[
"Gianinazzi",
"Lukas",
""
],
[
"Beranek",
"Jakub",
""
],
[
"Janda",
"Kacper",
""
],
[
"Holenstein",
"Tobias",
""
],
[
"Leisinger",
"Sebastian",
""
],
[
"Tatkowski",
"Peter",
""
],
[
"Ozdemir",
"Esref",
""
],
[
"Balla",
"Adrian",
""
],
[
"Copik",
"Marcin",
""
],
[
"Lindenberger",
"Philipp",
""
],
[
"Kalvoda",
"Pavel",
""
],
[
"Konieczny",
"Marek",
""
],
[
"Mutlu",
"Onur",
""
],
[
"Hoefler",
"Torsten",
""
]
] |
2103.03703 | Tariq Bdair | Tariq Bdair, Nassir Navab, and Shadi Albarqouni | Semi-Supervised Federated Peer Learning for Skin Lesion Classification | Accepted for publication at the Journal of Machine Learning for
Biomedical Imaging (MELBA)
https://www.melba-journal.org | Journal of Machine Learning for Biomedical Imaging (MELBA) 2022 | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Globally, skin carcinoma is among the most lethal diseases. Millions of
people are diagnosed with this cancer every year. Still, early detection can
decrease the medication cost and mortality rate substantially. The recent
improvement in automated cancer classification using deep learning methods has
reached a human-level performance requiring a large amount of annotated data
assembled in one location, yet finding such conditions is usually not
feasible. Recently, federated learning (FL) has been proposed to train
decentralized models in a privacy-preserved fashion depending on labeled data
at the client-side, which is usually not available and costly. To address this,
we propose FedPerl, a semi-supervised federated learning method. Our
method is inspired by peer learning from educational psychology and ensemble
averaging from committee machines. FedPerl builds communities based on clients'
similarities. Then it encourages community members to learn from each other
to generate more accurate pseudo labels for the unlabeled data. We also
propose the peer anonymization (PA) technique to anonymize clients. As a core
component of our method, PA is orthogonal to other methods without additional
complexity and reduces the communication cost while enhancing performance.
Finally, we propose a dynamic peer-learning policy that controls the learning
stream to avoid any degradation in the performance, especially for individual
clients. Our experimental setup consists of 71,000 skin lesion images collected
from 5 publicly available datasets. We test our method in four different
scenarios in SSFL. With few annotated data, FedPerl is on par with a
state-of-the-art method in skin lesion classification in the standard setup
while outperforming SSFLs and the baselines by 1.8% and 15.8%, respectively.
Also, it generalizes better to unseen clients while being less sensitive to
noisy ones.
| [
{
"created": "Fri, 5 Mar 2021 14:26:15 GMT",
"version": "v1"
},
{
"created": "Mon, 8 Mar 2021 10:25:30 GMT",
"version": "v2"
},
{
"created": "Wed, 17 Nov 2021 00:02:39 GMT",
"version": "v3"
},
{
"created": "Thu, 7 Apr 2022 13:28:04 GMT",
"version": "v4"
},
{
"created": "Tue, 12 Apr 2022 08:45:07 GMT",
"version": "v5"
}
] | 2022-04-13 | [
[
"Bdair",
"Tariq",
""
],
[
"Navab",
"Nassir",
""
],
[
"Albarqouni",
"Shadi",
""
]
] |
2103.03796 | Ruidong Yan | Ruidong Yan, Rui Jiang, Bin Jia, Jin Huang, and Diange Yang | Hybrid Car-Following Strategy based on Deep Deterministic Policy
Gradient and Cooperative Adaptive Cruise Control | 9 pages, 11 figures | published online 2021 | 10.1109/TASE.2021.3100709 | null | cs.AI cs.LG cs.SY eess.SY | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Deep deterministic policy gradient (DDPG)-based car-following strategy can
break through the constraints of the differential equation model due to the
ability of exploration on complex environments. However, the car-following
performance of DDPG is usually degraded by unreasonable reward function design,
insufficient training, and low sampling efficiency. In order to solve this kind
of problem, a hybrid car-following strategy based on DDPG and cooperative
adaptive cruise control (CACC) is proposed. First, the car-following process is
modeled as the Markov decision process to calculate CACC and DDPG
simultaneously at each frame. Given a current state, two actions are obtained
from CACC and DDPG, respectively. Then, an optimal action, corresponding to the
one offering a larger reward, is chosen as the output of the hybrid strategy.
Meanwhile, a rule is designed to ensure that the change rate of acceleration is
smaller than the desired value. Therefore, the proposed strategy not only
guarantees the basic performance of car-following through CACC but also makes
full use of the advantages of exploration on complex environments via DDPG.
Finally, simulation results show that the car-following performance of the
proposed strategy is improved compared with that of DDPG and CACC.
| [
{
"created": "Wed, 24 Feb 2021 17:37:47 GMT",
"version": "v1"
},
{
"created": "Tue, 11 Jan 2022 04:40:18 GMT",
"version": "v2"
}
] | 2022-01-12 | [
[
"Yan",
"Ruidong",
""
],
[
"Jiang",
"Rui",
""
],
[
"Jia",
"Bin",
""
],
[
"Huang",
"Jin",
""
],
[
"Yang",
"Diange",
""
]
] |
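The per-frame selection rule of the hybrid strategy is easy to state in code: query both controllers and keep the action with the larger reward. The sketch below uses placeholder policies and a placeholder reward; the paper additionally bounds the change rate of acceleration, which is omitted here.

```python
def hybrid_action(state, ddpg_policy, cacc_controller, reward_fn):
    """Per frame: evaluate both candidate accelerations, keep the higher-reward one."""
    a_ddpg = ddpg_policy(state)
    a_cacc = cacc_controller(state)
    return max((a_ddpg, a_cacc), key=lambda a: reward_fn(state, a))

# Placeholder controllers and reward for illustration only.
state = {"gap": 20.0, "rel_speed": -1.0}
action = hybrid_action(state,
                       ddpg_policy=lambda s: 0.5,
                       cacc_controller=lambda s: 0.3,
                       reward_fn=lambda s, a: -abs(s["rel_speed"] + a))
print(action)   # 0.5: the DDPG action better cancels the relative speed here
```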
2103.03827 | Mathieu Labb\'e | Mathieu Labb\'e and Fran\c{c}ois Michaud | Multi-Session Visual SLAM for Illumination Invariant Re-Localization in
Indoor Environments | 20 pages, 7 figures | M. Labb\'e and F. Michaud, Multi-Session Visual SLAM for
Illumination-Invariant Re-Localization in Indoor Environments, in Frontiers
in Robotics and AI, vol. 9, 2022 | 10.3389/frobt.2022.801886 | null | cs.RO cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | For robots navigating using only a camera, illumination changes in indoor
environments can cause re-localization failures during autonomous navigation.
In this paper, we present a multi-session visual SLAM approach to create a map
made of multiple variations of the same locations in different illumination
conditions. The multi-session map can then be used at any hour of the day for
improved re-localization capability. The approach presented is independent of
the visual features used, and this is demonstrated by comparing re-localization
performance between multi-session maps created using the RTAB-Map library with
SURF, SIFT, BRIEF, BRISK, KAZE, DAISY and SuperPoint visual features. The
approach is tested on six mapping and six localization sessions recorded at 30
minute intervals during sunset using a Google Tango phone in a real apartment.
| [
{
"created": "Fri, 5 Mar 2021 17:41:27 GMT",
"version": "v1"
},
{
"created": "Wed, 29 Jun 2022 15:32:16 GMT",
"version": "v2"
}
] | 2022-06-30 | [
[
"Labbé",
"Mathieu",
""
],
[
"Michaud",
"François",
""
]
] |
2103.03877 | Aydogan Ozcan | Yijie Zhang, Tairan Liu, Manmohan Singh, Yilin Luo, Yair Rivenson,
Kirill V. Larin, and Aydogan Ozcan | Neural network-based image reconstruction in swept-source optical
coherence tomography using undersampled spectral data | 20 Pages, 7 Figures, 1 Table | Light: Science & Applications (2021) | 10.1038/s41377-021-00594-7 | null | eess.IV cs.CV cs.LG physics.optics | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Optical Coherence Tomography (OCT) is a widely used non-invasive biomedical
imaging modality that can rapidly provide volumetric images of samples. Here,
we present a deep learning-based image reconstruction framework that can
generate swept-source OCT (SS-OCT) images using undersampled spectral data,
without any spatial aliasing artifacts. This neural network-based image
reconstruction does not require any hardware changes to the optical set-up and
can be easily integrated with existing swept-source or spectral domain OCT
systems to reduce the amount of raw spectral data to be acquired. To show the
efficacy of this framework, we trained and blindly tested a deep neural network
using mouse embryo samples imaged by an SS-OCT system. Using 2-fold
undersampled spectral data (i.e., 640 spectral points per A-line), the trained
neural network can blindly reconstruct 512 A-lines in ~6.73 ms using a desktop
computer, removing spatial aliasing artifacts due to spectral undersampling,
also presenting a very good match to the images of the same samples,
reconstructed using the full spectral OCT data (i.e., 1280 spectral points per
A-line). We also successfully demonstrate that this framework can be further
extended to process 3x undersampled spectral data per A-line, with some
performance degradation in the reconstructed image quality compared to 2x
spectral undersampling. This deep learning-enabled image reconstruction
approach can be broadly used in various forms of spectral domain OCT systems,
helping to increase their imaging speed without sacrificing image resolution
and signal-to-noise ratio.
| [
{
"created": "Thu, 4 Mar 2021 22:30:31 GMT",
"version": "v1"
}
] | 2021-07-30 | [
[
"Zhang",
"Yijie",
""
],
[
"Liu",
"Tairan",
""
],
[
"Singh",
"Manmohan",
""
],
[
"Luo",
"Yilin",
""
],
[
"Rivenson",
"Yair",
""
],
[
"Larin",
"Kirill V.",
""
],
[
"Ozcan",
"Aydogan",
""
]
] |
2103.03905 | Jason Ramapuram | Jason Ramapuram, Yan Wu, Alexandros Kalousis | Kanerva++: extending The Kanerva Machine with differentiable, locally
block allocated latent memory | null | ICLR 2021 | null | null | cs.NE cs.AI cs.CV cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Episodic and semantic memory are critical components of the human memory
model. The theory of complementary learning systems (McClelland et al., 1995)
suggests that the compressed representation produced by a serial event
(episodic memory) is later restructured to build a more generalized form of
reusable knowledge (semantic memory). In this work we develop a new principled
Bayesian memory allocation scheme that bridges the gap between episodic and
semantic memory via a hierarchical latent variable model. We take inspiration
from traditional heap allocation and extend the idea of locally contiguous
memory to the Kanerva Machine, enabling a novel differentiable block allocated
latent memory. In contrast to the Kanerva Machine, we simplify the process of
memory writing by treating it as a fully feed forward deterministic process,
relying on the stochasticity of the read key distribution to disperse
information within the memory. We demonstrate that this allocation scheme
improves performance in memory conditional image generation, resulting in new
state-of-the-art conditional likelihood values on binarized MNIST (<=41.58
nats/image), binarized Omniglot (<=66.24 nats/image), as well as presenting
competitive performance on CIFAR10, DMLab Mazes, Celeb-A and ImageNet32x32.
| [
{
"created": "Sat, 20 Feb 2021 18:40:40 GMT",
"version": "v1"
},
{
"created": "Tue, 16 Mar 2021 09:38:06 GMT",
"version": "v2"
},
{
"created": "Mon, 7 Feb 2022 01:41:30 GMT",
"version": "v3"
}
] | 2022-02-08 | [
[
"Ramapuram",
"Jason",
""
],
[
"Wu",
"Yan",
""
],
[
"Kalousis",
"Alexandros",
""
]
] |
2103.03975 | Nico Lang | Nico Lang, Nikolai Kalischek, John Armston, Konrad Schindler, Ralph
Dubayah, Jan Dirk Wegner | Global canopy height regression and uncertainty estimation from GEDI
LIDAR waveforms with deep ensembles | null | Remote Sensing of Environment 268 (2022) 112760 | 10.1016/j.rse.2021.112760 | null | cs.LG cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | NASA's Global Ecosystem Dynamics Investigation (GEDI) is a key climate
mission whose goal is to advance our understanding of the role of forests in
the global carbon cycle. While GEDI is the first space-based LIDAR explicitly
optimized to measure vertical forest structure predictive of aboveground
biomass, the accurate interpretation of this vast amount of waveform data
across the broad range of observational and environmental conditions is
challenging. Here, we present a novel supervised machine learning approach to
interpret GEDI waveforms and regress canopy top height globally. We propose a
probabilistic deep learning approach based on an ensemble of deep convolutional
neural networks (CNNs) to avoid the explicit modelling of unknown effects, such
as atmospheric noise. The model learns to extract robust features that
generalize to unseen geographical regions and, in addition, yields reliable
estimates of predictive uncertainty. Ultimately, the global canopy top height
estimates produced by our model have an expected RMSE of 2.7 m with low bias.
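As background on the deep-ensemble machinery, the usual way to fuse K Gaussian
ensemble members into one predictive mean and uncertainty is the
mixture-moment rule; a numpy sketch with made-up member outputs (not GEDI
estimates):

    import numpy as np

    mu = np.array([24.1, 25.0, 23.7, 24.6, 24.9])   # member means [m]
    var = np.array([1.2, 0.9, 1.5, 1.1, 1.0])       # member variances [m^2]

    mean = mu.mean()                                 # ensemble height estimate
    total_var = (var + mu ** 2).mean() - mean ** 2   # aleatoric + epistemic
    print(mean, np.sqrt(total_var))                  # prediction and 1-sigma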
| [
{
"created": "Fri, 5 Mar 2021 23:08:27 GMT",
"version": "v1"
},
{
"created": "Thu, 4 Nov 2021 12:03:20 GMT",
"version": "v2"
}
] | 2021-11-05 | [
[
"Lang",
"Nico",
""
],
[
"Kalischek",
"Nikolai",
""
],
[
"Armston",
"John",
""
],
[
"Schindler",
"Konrad",
""
],
[
"Dubayah",
"Ralph",
""
],
[
"Wegner",
"Jan Dirk",
""
]
] |
2103.03991 | Brendan Tidd | Brendan Tidd, Akansel Cosgun, Jurgen Leitner, and Nicolas Hudson | Passing Through Narrow Gaps with Deep Reinforcement Learning | Submitted to 2021 IEEE/RSJ International Conference on Intelligent
Robots and Systems (IROS) | In proceedings of IEEE/RSJ International Conference on Intelligent
Robots and Systems (IROS) 2021 | null | null | cs.RO cs.AI | http://creativecommons.org/licenses/by/4.0/ | The U.S. Defense Advanced Research Projects Agency (DARPA) Subterranean
Challenge requires teams of robots to traverse difficult and diverse
underground environments. Traversing small gaps is one of the challenging
scenarios that robots encounter. Imperfect sensor information makes it
difficult for classical navigation methods, where behaviours require
significant manual fine tuning. In this paper we present a deep reinforcement
learning method for autonomously navigating through small gaps, where contact
between the robot and the gap may be required. We first learn a gap behaviour
policy to get through small gaps (only centimeters wider than the robot). We
then learn a goal-conditioned behaviour selection policy that determines when
to activate the gap behaviour policy. We train our policies in simulation and
demonstrate their effectiveness with a large tracked robot in simulation and on
the real platform. In simulation experiments, our approach achieves 93\%
success rate when the gap behaviour is activated manually by an operator, and
63\% with autonomous activation using the behaviour selection policy. In real
robot experiments, our approach achieves a success rate of 73\% with manual
activation, and 40\% with autonomous behaviour selection. While we show the
feasibility of our approach in simulation, the difference in performance
between simulated and real-world scenarios highlights the difficulty of direct
sim-to-real transfer for deep reinforcement learning policies. In both the
simulated and real world environments alternative methods were unable to
traverse the gap.
| [
{
"created": "Sat, 6 Mar 2021 00:10:41 GMT",
"version": "v1"
},
{
"created": "Tue, 2 Nov 2021 01:11:50 GMT",
"version": "v2"
}
] | 2021-11-03 | [
[
"Tidd",
"Brendan",
""
],
[
"Cosgun",
"Akansel",
""
],
[
"Leitner",
"Jurgen",
""
],
[
"Hudson",
"Nicolas",
""
]
] |
2103.04068 | Artjoms Gorpincenko | Artjoms Gorpincenko, Geoffrey French, Peter Knight, Mike Challiss,
Michal Mackiewicz | Improving Automated Sonar Video Analysis to Notify About Jellyfish
Blooms | null | IEEE Sensors Journal, 21, 4981-4988 (2021) | 10.1109/JSEN.2020.3032031 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Human enterprise often suffers from direct negative effects caused by
jellyfish blooms. The investigation of a prior jellyfish monitoring system
showed that it was unable to reliably perform in a cross validation setting,
i.e. in new underwater environments. In this paper, a number of enhancements
are proposed to the part of the system that is responsible for object
classification. First, the training set is augmented by adding synthetic data,
making the deep learning classifier able to generalise better. Then, the
framework is enhanced by employing a new second stage model, which analyzes the
outputs of the first network to make the final prediction. Finally, weighted
loss and confidence threshold are added to balance out true and false
positives. With all the upgrades in place, the system can correctly classify
30.16% (compared to the initial 11.52%) of all spotted jellyfish, keep the
amount of false positives as low as 0.91% (compared to the initial 2.26%) and
operate in real-time within the computational constraints of an autonomous
embedded platform.
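A minimal sketch of the last two upgrades, the weighted loss and the
confidence threshold (the weights and threshold below are placeholders, not
the tuned values from the paper):

    import numpy as np

    def weighted_bce(p, y, w_pos=4.0, w_neg=1.0, eps=1e-7):
        # Class-weighted binary cross-entropy: w_pos/w_neg shift the balance
        # between missed jellyfish and false alarms during training.
        p = np.clip(p, eps, 1.0 - eps)
        return -np.mean(w_pos * y * np.log(p) + w_neg * (1 - y) * np.log(1 - p))

    def flag_jellyfish(p, threshold=0.9):
        # Report a detection only when the classifier is confident enough,
        # trading a few true positives for far fewer false positives.
        return p >= threshold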
| [
{
"created": "Sat, 6 Mar 2021 08:39:24 GMT",
"version": "v1"
}
] | 2021-03-09 | [
[
"Gorpincenko",
"Artjoms",
""
],
[
"French",
"Geoffrey",
""
],
[
"Knight",
"Peter",
""
],
[
"Challiss",
"Mike",
""
],
[
"Mackiewicz",
"Michal",
""
]
] |
2103.04077 | Xiaofeng Gao | Xiaofeng Gao, Luyao Yuan, Tianmin Shu, Hongjing Lu, Song-Chun Zhu | Show Me What You Can Do: Capability Calibration on Reachable Workspace
for Human-Robot Collaboration | 8 pages, 6 figures, IEEE Robotics and Automation Letters (RA-L), 2022 | X. Gao, L. Yuan, T. Shu, H. Lu and S. -C. Zhu, "Show Me What You
Can Do: Capability Calibration on Reachable Workspace for Human-Robot
Collaboration," in IEEE Robotics and Automation Letters, doi:
10.1109/LRA.2022.3144779 | 10.1109/LRA.2022.3144779 | null | cs.RO cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Aligning humans' assessment of what a robot can do with its true capability
is crucial for establishing a common ground between human and robot partners
when they collaborate on a joint task. In this work, we propose an approach to
calibrate humans' estimate of a robot's reachable workspace through a small
number of demonstrations before collaboration. We develop a novel motion
planning method, REMP, which jointly optimizes the physical cost and the
expressiveness of robot motion to reveal the robot's reachability to a human
observer. Our experiments with human participants demonstrate that a short
calibration using REMP can effectively bridge the gap between what a non-expert
user thinks a robot can reach and the ground truth. We show that this
calibration procedure not only results in better user perception, but also
promotes more efficient human-robot collaborations in a subsequent joint task.
| [
{
"created": "Sat, 6 Mar 2021 09:14:30 GMT",
"version": "v1"
},
{
"created": "Wed, 15 Sep 2021 20:57:54 GMT",
"version": "v2"
},
{
"created": "Thu, 27 Jan 2022 04:51:12 GMT",
"version": "v3"
}
] | 2022-01-28 | [
[
"Gao",
"Xiaofeng",
""
],
[
"Yuan",
"Luyao",
""
],
[
"Shu",
"Tianmin",
""
],
[
"Lu",
"Hongjing",
""
],
[
"Zhu",
"Song-Chun",
""
]
] |
2103.04192 | Guy Ohayon | Guy Ohayon, Theo Adrai, Gregory Vaksman, Michael Elad, Peyman Milanfar | High Perceptual Quality Image Denoising with a Posterior Sampling CGAN | null | 2021 IEEE/CVF International Conference on Computer Vision
Workshops (ICCVW), Montreal, BC, Canada, 2021, pp. 1805-1813 | 10.1109/ICCVW54120.2021.00207 | null | cs.CV cs.LG eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The vast work in Deep Learning (DL) has led to a leap in image denoising
research. Most DL solutions for this task have chosen to put their efforts on
the denoiser's architecture while maximizing distortion performance. However,
distortion driven solutions lead to blurry results with sub-optimal perceptual
quality, especially in immoderate noise levels. In this paper we propose a
different perspective, aiming to produce sharp and visually pleasing denoised
images that are still faithful to their clean sources. Formally, our goal is to
achieve high perceptual quality with acceptable distortion. This is attained by
a stochastic denoiser that samples from the posterior distribution, trained as
a generator in the framework of conditional generative adversarial networks
(CGAN). Contrary to distortion-based regularization terms that conflict with
perceptual quality, we introduce to the CGAN objective a theoretically founded
penalty term that does not force a distortion requirement on individual
samples, but rather on their mean. We showcase our proposed method with a novel
denoiser architecture that achieves the reformed denoising goal and produces
vivid and diverse outcomes in immoderate noise levels.
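A hedged PyTorch sketch of the penalty-on-the-mean idea (the denoiser's
sampling interface and k are assumptions; the full objective also includes the
standard CGAN adversarial term):

    import torch

    def mean_distortion_penalty(denoiser, noisy, clean, k=8):
        # Constrain the *average* of k posterior samples toward the clean
        # image instead of forcing each sample to be close to it, so that
        # individual samples can stay sharp.
        samples = torch.stack([denoiser(noisy, torch.randn_like(noisy))
                               for _ in range(k)])
        return torch.mean((samples.mean(dim=0) - clean) ** 2)

    # generator loss sketch: loss_G = adversarial_loss + lam * penalty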
| [
{
"created": "Sat, 6 Mar 2021 20:18:45 GMT",
"version": "v1"
},
{
"created": "Wed, 17 Mar 2021 18:13:05 GMT",
"version": "v2"
},
{
"created": "Mon, 11 Oct 2021 17:22:43 GMT",
"version": "v3"
}
] | 2024-05-21 | [
[
"Ohayon",
"Guy",
""
],
[
"Adrai",
"Theo",
""
],
[
"Vaksman",
"Gregory",
""
],
[
"Elad",
"Michael",
""
],
[
"Milanfar",
"Peyman",
""
]
] |
2103.04314 | Jeremy Straub | Jeremy Straub | Expert System Gradient Descent Style Training: Development of a
Defensible Artificial Intelligence Technique | null | Knowledge Based-Systems (2021) | 10.1016/j.knosys.2021.107275 | null | cs.LG cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Artificial intelligence systems, which are designed with a capability to
learn from the data presented to them, are used throughout society. These
systems are used to screen loan applicants, make sentencing recommendations for
criminal defendants, scan social media posts for disallowed content and more.
Because these systems don't assign meaning to their complex learned correlation
network, they can learn associations that don't equate to causality, resulting
in non-optimal and indefensible decisions being made. In addition to making
decisions that are sub-optimal, these systems may create legal liability for
their designers and operators by learning correlations that violate
anti-discrimination and other laws regarding what factors can be used in
different types of decision making. This paper presents the use of a machine
learning expert system, which is developed with meaning-assigned nodes (facts)
and correlations (rules). Multiple potential implementations are considered and
evaluated under different conditions, including different network error and
augmentation levels and different training levels. The performance of these
systems is compared to random and fully connected networks.
| [
{
"created": "Sun, 7 Mar 2021 10:09:50 GMT",
"version": "v1"
}
] | 2021-07-05 | [
[
"Straub",
"Jeremy",
""
]
] |
2103.04318 | Patrick Reiser | Patrick Reiser, Andre Eberhard and Pascal Friederich | Implementing graph neural networks with TensorFlow-Keras | null | Softw. Impacts 2021, 9, 100095 | 10.1016/j.simpa.2021.100095 | null | cs.LG cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Graph neural networks are a versatile machine learning architecture that
received a lot of attention recently. In this technical report, we present an
implementation of convolution and pooling layers for TensorFlow-Keras models,
which allows a seamless and flexible integration into standard Keras layers to
set up graph models in a functional way. This implies the usage of mini-batches
as the first tensor dimension, which can be realized via the new RaggedTensor
class of TensorFlow best suited for graphs. We developed the Keras Graph
Convolutional Neural Network Python package kgcnn based on TensorFlow-Keras
that provides a set of Keras layers for graph networks which focus on a
transparent tensor structure passed between layers and an ease-of-use mindset.
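A short TensorFlow sketch of the ragged mini-batching described above (a
generic illustration, not the kgcnn layer API itself): node features of
differently sized graphs share one RaggedTensor whose first dimension is the
batch.

    import tensorflow as tf

    # Two graphs with 3 and 2 nodes, each node carrying a 2-d feature vector.
    node_feats = tf.ragged.constant(
        [[[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]],
         [[0.7, 0.8], [0.9, 1.0]]],
        ragged_rank=1)

    # A per-node Dense layer maps over the flat node list of the whole batch.
    dense = tf.keras.layers.Dense(4)
    h = tf.ragged.map_flat_values(dense, node_feats)
    print(h.shape)   # (2, None, 4): batch, ragged node count, features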
| [
{
"created": "Sun, 7 Mar 2021 10:46:02 GMT",
"version": "v1"
}
] | 2023-10-12 | [
[
"Reiser",
"Patrick",
""
],
[
"Eberhard",
"Andre",
""
],
[
"Friederich",
"Pascal",
""
]
] |
2103.04351 | David Hoeller | David Hoeller, Lorenz Wellhausen, Farbod Farshidian, Marco Hutter | Learning a State Representation and Navigation in Cluttered and Dynamic
Environments | 8 pages, 8 figures, 2 tables | IEEE Robotics and Automation Letters 2021 | null | null | cs.RO cs.AI cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work, we present a learning-based pipeline to realise local
navigation with a quadrupedal robot in cluttered environments with static and
dynamic obstacles. Given high-level navigation commands, the robot is able to
safely locomote to a target location based on frames from a depth camera
without any explicit mapping of the environment. First, the sequence of images
and the current trajectory of the camera are fused to form a model of the world
using state representation learning. The output of this lightweight module is
then directly fed into a target-reaching and obstacle-avoiding policy trained
with reinforcement learning. We show that decoupling the pipeline into these
components results in a sample efficient policy learning stage that can be
fully trained in simulation in just a dozen minutes. The key part is the state
representation, which is trained to not only estimate the hidden state of the
world in an unsupervised fashion, but also helps bridging the reality gap,
enabling successful sim-to-real transfer. In our experiments with the
quadrupedal robot ANYmal in simulation and in reality, we show that our system
can handle noisy depth images, avoid dynamic obstacles unseen during training,
and is endowed with local spatial awareness.
| [
{
"created": "Sun, 7 Mar 2021 13:19:06 GMT",
"version": "v1"
}
] | 2021-03-09 | [
[
"Hoeller",
"David",
""
],
[
"Wellhausen",
"Lorenz",
""
],
[
"Farshidian",
"Farbod",
""
],
[
"Hutter",
"Marco",
""
]
] |
2103.04364 | Zaid Tahir | Zaid Tahir, Rob Alexander | Coverage based testing for V&V and Safety Assurance of Self-driving
Autonomous Vehicles: A Systematic Literature Review | null | IEEE International Conference On Artificial Intelligence Testing
(AITest), Oxford, UK, 2020 | 10.1109/AITEST49225.2020.00011 | null | cs.AI cs.RO cs.SE | http://creativecommons.org/licenses/by/4.0/ | Self-driving Autonomous Vehicles (SAVs) are gaining more interest each
passing day by the industry as well as the general public. Tech and automobile
companies are investing huge amounts of capital in research and development of
SAVs to make sure they have a head start in the SAV market in the future. One
of the major hurdles in the way of SAVs making it to the public roads is the
lack of public confidence in the safety of SAVs. In order to assure
safety and provide confidence to the public in the safety of SAVs, researchers
around the world have used coverage-based testing for Verification and
Validation (V&V) and safety assurance of SAVs. The objective of this paper is
to investigate the coverage criteria and coverage-maximizing techniques that
researchers have proposed over the last decade to assure the safety
of SAVs. We conduct a Systematic Literature Review (SLR) for this investigation
in our paper. We present a classification of existing research based on the
coverage criteria used. Several research gaps and research directions are also
provided in this SLR to enable further research in this domain. This paper
provides a body of knowledge in the domain of safety assurance of SAVs. We
believe the results of this SLR will be helpful in the progression of V&V and
safety assurance of SAVs.
| [
{
"created": "Sun, 7 Mar 2021 14:23:04 GMT",
"version": "v1"
}
] | 2021-03-09 | [
[
"Tahir",
"Zaid",
""
],
[
"Alexander",
"Rob",
""
]
] |
2103.04384 | Coloma Ballester | Patricia Vitoria and Coloma Ballester | Automatic Flare Spot Artifact Detection and Removal in Photographs | Journal of Mathematical Imaging and Vision, 2019 | Journal of Mathematical Imaging and Vision, 2019 | null | null | eess.IV cs.CV | http://creativecommons.org/licenses/by/4.0/ | Flare spot is one type of flare artifact caused by a number of conditions,
frequently provoked by one or more high-luminance sources within or close to
the camera field of view. When light rays coming from a high-luminance source
reach the front element of a camera, it can produce intra-reflections within
camera elements that emerge at the film plane forming non-image information or
flare on the captured image. Even though preventive mechanisms are used,
artifacts can appear. In this paper, we propose a robust computational method
to automatically detect and remove flare spot artifacts. Our contribution is
threefold: firstly, we propose a characterization which is based on intrinsic
properties that a flare spot is likely to satisfy; secondly, we define a new
confidence measure able to select flare spots among the candidates; and,
finally, a method to accurately determine the flare region is given. Then, the
detected artifacts are removed by using exemplar-based inpainting. We show that
our algorithm achieves top-tier quantitative and qualitative performance.
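As a rough illustration of the pipeline's shape (a detection mask followed by
inpainting), here is an OpenCV sketch; the saturation threshold and radius are
placeholders, OpenCV's Telea inpainting stands in for the exemplar-based
method, and this is not the paper's candidate characterization or confidence
measure:

    import cv2
    import numpy as np

    img = cv2.imread("photo.jpg")
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # Crude flare-spot candidates: small near-saturated bright regions.
    mask = np.uint8(gray > 240) * 255
    mask = cv2.dilate(mask, np.ones((5, 5), np.uint8))

    # Fill the detected region from its surroundings.
    restored = cv2.inpaint(img, mask, inpaintRadius=7, flags=cv2.INPAINT_TELEA)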
| [
{
"created": "Sun, 7 Mar 2021 15:51:49 GMT",
"version": "v1"
}
] | 2021-03-10 | [
[
"Vitoria",
"Patricia",
""
],
[
"Ballester",
"Coloma",
""
]
] |
2103.04386 | Serge Sharoff | Nouran Khallaf, Serge Sharoff | Automatic Difficulty Classification of Arabic Sentences | Accepted at WANLP 2021 | The Sixth Arabic Natural Language Processing Workshop (WANLP 2021) | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we present a Modern Standard Arabic (MSA) sentence difficulty
classifier, which predicts the difficulty of sentences for language learners
using either the CEFR proficiency levels or the binary classification as simple
or complex. We compare the use of sentence embeddings of different kinds
(fastText, mBERT , XLM-R and Arabic-BERT), as well as traditional language
features such as POS tags, dependency trees, readability scores and frequency
lists for language learners. Our best results have been achieved using
fine-tuned Arabic-BERT. Our 3-way CEFR classification achieves F-1 scores of
0.80 and 0.75 with Arabic-BERT and XLM-R respectively, and a Spearman
correlation of 0.71 for regression. Our binary difficulty classifier reaches an
F-1 of 0.94, and our sentence-pair semantic similarity classifier reaches 0.98.
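For context, fine-tuning a BERT-style Arabic checkpoint for the 3-way CEFR
labels follows the standard Hugging Face pattern; a hedged sketch (the
checkpoint name and input are placeholders, not the paper's exact setup):

    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    name = "asafaya/bert-base-arabic"   # any Arabic BERT checkpoint
    tok = AutoTokenizer.from_pretrained(name)
    model = AutoModelForSequenceClassification.from_pretrained(
        name, num_labels=3)             # three CEFR difficulty classes

    batch = tok(["example sentence"], return_tensors="pt", padding=True)
    logits = model(**batch).logits      # fine-tune with cross-entropy on these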
| [
{
"created": "Sun, 7 Mar 2021 16:02:04 GMT",
"version": "v1"
}
] | 2021-03-09 | [
[
"Khallaf",
"Nouran",
""
],
[
"Sharoff",
"Serge",
""
]
] |
2103.04421 | Xin Yuan | Xin Yuan and David J. Brady and Aggelos K. Katsaggelos | Snapshot Compressive Imaging: Principle, Implementation, Theory,
Algorithms and Applications | Extension of X. Yuan, D. J. Brady and A. K. Katsaggelos, "Snapshot
Compressive Imaging: Theory, Algorithms, and Applications," in IEEE Signal
Processing Magazine, vol. 38, no. 2, pp. 65-88, March 2021, doi:
10.1109/MSP.2020.3023869 | in IEEE Signal Processing Magazine, vol. 38, no. 2, pp. 65-88,
March 2021 | 10.1109/MSP.2020.3023869 | null | eess.IV cs.CV | http://creativecommons.org/licenses/by/4.0/ | Capturing high-dimensional (HD) data is a long-term challenge in signal
processing and related fields. Snapshot compressive imaging (SCI) uses a
two-dimensional (2D) detector to capture HD ($\ge3$D) data in a {\em snapshot}
measurement. Via novel optical designs, the 2D detector samples the HD data in
a {\em compressive} manner; following this, algorithms are employed to
reconstruct the desired HD data-cube. SCI has been used in hyperspectral
imaging, video, holography, tomography, focal depth imaging, polarization
imaging, microscopy, etc. Though the hardware has been investigated for more
than a decade, the theoretical guarantees have only recently been derived.
Inspired by deep learning, various deep neural networks have also been
developed to reconstruct the HD data-cube in spectral SCI and video SCI. This
article reviews recent advances in SCI hardware, theory and algorithms,
including both optimization-based and deep-learning-based algorithms. Diverse
applications and the outlook of SCI are also discussed.
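The compressive sampling step itself admits a one-screen numpy sketch: in
video SCI, B mask-modulated frames collapse into a single 2D snapshot,
y = sum_b C_b * x_b (the sizes below are illustrative):

    import numpy as np

    H, W, B = 64, 64, 8
    x = np.random.rand(H, W, B)                        # HD data-cube (video)
    C = (np.random.rand(H, W, B) > 0.5).astype(float)  # per-frame binary masks
    y = np.sum(C * x, axis=2)                          # single 2D measurement
    print(y.shape)                                     # (64, 64)
    # Reconstruction algorithms then invert this many-to-one mapping.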
| [
{
"created": "Sun, 7 Mar 2021 18:31:47 GMT",
"version": "v1"
}
] | 2021-03-10 | [
[
"Yuan",
"Xin",
""
],
[
"Brady",
"David J.",
""
],
[
"Katsaggelos",
"Aggelos K.",
""
]
] |
2103.04485 | Ming Zhu | Xiao-Yang Liu, Ming Zhu | Convolutional Graph-Tensor Net for Graph Data Completion | null | IJCAI 2021 | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Graph data completion is a fundamentally important issue as data generally
has a graph structure, e.g., social networks, recommendation systems, and the
Internet of Things. We consider a graph where each node has a data matrix,
represented as a \textit{graph-tensor} by stacking the data matrices in the
third dimension. In this paper, we propose a \textit{Convolutional Graph-Tensor
Net} (\textit{Conv GT-Net}) for the graph data completion problem, which uses
deep neural networks to learn the general transform of graph-tensors. The
experimental results on the ego-Facebook data sets show that the proposed
\textit{Conv GT-Net} achieves significant improvements on both completion
accuracy (50\% higher) and completion speed (3.6x $\sim$ 8.1x faster) over the
existing algorithms.
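The graph-tensor construction itself is simple to state in code; a numpy
sketch with made-up sizes:

    import numpy as np

    m, n, N = 4, 5, 10                              # matrix size, node count
    node_matrices = [np.random.rand(m, n) for _ in range(N)]
    graph_tensor = np.stack(node_matrices, axis=2)  # stack along 3rd dimension
    print(graph_tensor.shape)                       # (4, 5, 10)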
| [
{
"created": "Sun, 7 Mar 2021 23:33:38 GMT",
"version": "v1"
},
{
"created": "Thu, 2 Mar 2023 01:50:38 GMT",
"version": "v2"
}
] | 2023-03-03 | [
[
"Liu",
"Xiao-Yang",
""
],
[
"Zhu",
"Ming",
""
]
] |
2103.04493 | Qiaojun Feng | Qiaojun Feng, Yue Meng, Mo Shan, Nikolay Atanasov | Localization and Mapping using Instance-specific Mesh Models | 8 pages, 9 figures | 2019 IEEE/RSJ International Conference on Intelligent Robots and
Systems (IROS), Macau, China, 2019, pp. 4985-4991 | 10.1109/IROS40897.2019.8967662 | null | cs.CV cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper focuses on building semantic maps, containing object poses and
shapes, using a monocular camera. This is an important problem because robots
need rich understanding of geometry and context if they are to shape the future
of transportation, construction, and agriculture. Our contribution is an
instance-specific mesh model of object shape that can be optimized online based
on semantic information extracted from camera images. Multi-view constraints on
the object shape are obtained by detecting objects and extracting
category-specific keypoints and segmentation masks. We show that the errors
between projections of the mesh model and the observed keypoints and masks can
be differentiated in order to obtain accurate instance-specific object shapes.
We evaluate the performance of the proposed approach in simulation and on the
KITTI dataset by building maps of car poses and shapes.
| [
{
"created": "Mon, 8 Mar 2021 00:24:23 GMT",
"version": "v1"
}
] | 2021-03-09 | [
[
"Feng",
"Qiaojun",
""
],
[
"Meng",
"Yue",
""
],
[
"Shan",
"Mo",
""
],
[
"Atanasov",
"Nikolay",
""
]
] |
2103.04494 | Qiaojun Feng | Qiaojun Feng, Nikolay Atanasov | Fully Convolutional Geometric Features for Category-level Object
Alignment | 7 pages, 9 figures | 2020 IEEE/RSJ International Conference on Intelligent Robots and
Systems (IROS), Las Vegas, NV, USA, 2020, pp. 8492-8498 | 10.1109/IROS45743.2020.9341550 | null | cs.CV cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper focuses on pose registration of different object instances from
the same category. This is required in online object mapping because object
instances detected at test time usually differ from the training instances. Our
approach transforms instances of the same category to a normalized canonical
coordinate frame and uses metric learning to train fully convolutional
geometric features. The resulting model is able to generate pairs of matching
points between the instances, allowing category-level registration. Evaluation
on both synthetic and real-world data shows that our method provides robust
features, leading to accurate alignment of instances with different shapes.
| [
{
"created": "Mon, 8 Mar 2021 00:31:56 GMT",
"version": "v1"
}
] | 2021-03-09 | [
[
"Feng",
"Qiaojun",
""
],
[
"Atanasov",
"Nikolay",
""
]
] |
2103.04526 | Pengbo Liu | Pengbo Liu, Li Xiao, S. Kevin Zhou | Incremental Learning for Multi-organ Segmentation with Partially Labeled
Datasets | null | Medical Image Computing and Computer Assisted Intervention--MICCAI
2022: 25th International Conference, Singapore, September 18--22, 2022,
Proceedings, Part IV | 10.1007/978-3-031-16440-8_68 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | There exists a large number of datasets for organ segmentation, which are
partially annotated, and sequentially constructed. A typical dataset is
constructed at a certain time by curating medical images and annotating the
organs of interest. In other words, new datasets with annotations of new organ
categories are built over time. To unleash the potential behind these partially
labeled, sequentially-constructed datasets, we propose to learn a multi-organ
segmentation model through incremental learning (IL). In each IL stage, we lose
access to the previous annotations, whose knowledge is assumingly captured by
the current model, and gain the access to a new dataset with annotations of new
organ categories, from which we learn to update the organ segmentation model to
include the new organs. We make a first attempt at conjecturing that the
difference in data distributions is the key reason for the 'catastrophic
forgetting' that commonly afflicts IL methods, and we verify that IL adapts
naturally to medical image scenarios. Extensive experiments on five open-sourced datasets
are conducted to prove the effectiveness of our method and the conjecture
mentioned above.
| [
{
"created": "Mon, 8 Mar 2021 03:15:59 GMT",
"version": "v1"
}
] | 2023-05-09 | [
[
"Liu",
"Pengbo",
""
],
[
"Xiao",
"Li",
""
],
[
"Zhou",
"S. Kevin",
""
]
] |
2103.04537 | Ruizhi Liao | Ruizhi Liao, Daniel Moyer, Miriam Cha, Keegan Quigley, Seth Berkowitz,
Steven Horng, Polina Golland, William M. Wells | Multimodal Representation Learning via Maximization of Local Mutual
Information | In Proceedings of International Conference on Medical Image Computing
and Computer Assisted Intervention (MICCAI), 2021 | In International Conference on Medical Image Computing and
Computer-Assisted Intervention, pp. 273-283. Springer, Cham, 2021 | 10.1007/978-3-030-87196-3_26 | null | eess.IV cs.CV | http://creativecommons.org/licenses/by/4.0/ | We propose and demonstrate a representation learning approach by maximizing
the mutual information between local features of images and text. The goal of
this approach is to learn useful image representations by taking advantage of
the rich information contained in the free text that describes the findings in
the image. Our method trains image and text encoders by encouraging the
resulting representations to exhibit high local mutual information. We make use
of recent advances in mutual information estimation with neural network
discriminators. We argue that the sum of local mutual information is typically
a lower bound on the global mutual information. Our experimental results in the
downstream image classification tasks demonstrate the advantages of using local
features for image-text representation learning.
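A hedged PyTorch sketch of one neural lower bound on local mutual information
(an InfoNCE-style critic; the shapes, temperature, and pairing scheme are
assumptions rather than the paper's exact estimator):

    import torch
    import torch.nn.functional as F

    def local_mi_lower_bound(img_local, txt_local, temp=0.1):
        # img_local, txt_local: (B, L, D) local features from the encoders.
        # Matching local pairs are positives; all other pairs in the batch
        # act as negatives.
        b, l, d = img_local.shape
        img = F.normalize(img_local.reshape(b * l, d), dim=-1)
        txt = F.normalize(txt_local.reshape(b * l, d), dim=-1)
        logits = img @ txt.t() / temp
        labels = torch.arange(b * l)
        return -F.cross_entropy(logits, labels)   # maximize this bound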
| [
{
"created": "Mon, 8 Mar 2021 03:59:59 GMT",
"version": "v1"
},
{
"created": "Sat, 10 Jul 2021 03:11:55 GMT",
"version": "v2"
},
{
"created": "Wed, 22 Sep 2021 22:36:23 GMT",
"version": "v3"
},
{
"created": "Thu, 30 Sep 2021 13:48:42 GMT",
"version": "v4"
},
{
"created": "Wed, 15 Dec 2021 03:21:05 GMT",
"version": "v5"
}
] | 2021-12-16 | [
[
"Liao",
"Ruizhi",
""
],
[
"Moyer",
"Daniel",
""
],
[
"Cha",
"Miriam",
""
],
[
"Quigley",
"Keegan",
""
],
[
"Berkowitz",
"Seth",
""
],
[
"Horng",
"Steven",
""
],
[
"Golland",
"Polina",
""
],
[
"Wells",
"William M.",
""
]
] |
2103.04555 | Zhiwei Qin | Yan Jiao, Xiaocheng Tang, Zhiwei Qin, Shuaiji Li, Fan Zhang, Hongtu
Zhu and Jieping Ye | Real-world Ride-hailing Vehicle Repositioning using Deep Reinforcement
Learning | null | Transportation Research: Part C, 2021 | null | null | cs.LG cs.AI cs.MA | http://creativecommons.org/licenses/by-nc-nd/4.0/ | We present a new practical framework based on deep reinforcement learning and
decision-time planning for real-world vehicle repositioning on ride-hailing (a
type of mobility-on-demand, MoD) platforms. Our approach learns the
spatiotemporal state-value function using a batch training algorithm with deep
value networks. The optimal repositioning action is generated on-demand through
value-based policy search, which combines planning and bootstrapping with the
value networks. For the large-fleet problems, we develop several algorithmic
features that we incorporate into our framework and that we demonstrate to
induce coordination among the algorithmically-guided vehicles. We benchmark our
algorithm with baselines in a ride-hailing simulation environment to
demonstrate its superiority in improving income efficiency measured by
income-per-hour. We have also designed and run a real-world experiment program
with regular drivers on a major ride-hailing platform. We have observed
significantly positive results on key metrics comparing our method with
experienced drivers who performed idle-time repositioning based on their own
expertise.
| [
{
"created": "Mon, 8 Mar 2021 05:34:05 GMT",
"version": "v1"
},
{
"created": "Sat, 22 May 2021 00:14:19 GMT",
"version": "v2"
},
{
"created": "Thu, 8 Jul 2021 06:32:31 GMT",
"version": "v3"
}
] | 2021-07-13 | [
[
"Jiao",
"Yan",
""
],
[
"Tang",
"Xiaocheng",
""
],
[
"Qin",
"Zhiwei",
""
],
[
"Li",
"Shuaiji",
""
],
[
"Zhang",
"Fan",
""
],
[
"Zhu",
"Hongtu",
""
],
[
"Ye",
"Jieping",
""
]
] |
2103.04692 | Tuomo Hiippala | Tuomo Hiippala and John A. Bateman | Semiotically-grounded distant viewing of diagrams: insights from two
multimodal corpora | 22 pages, 11 figures. Under review at Digital Scholarship in the
Humanities | Digital Scholarship in the Humanities, 2021 (ahead of press) | 10.1093/llc/fqab063 | null | cs.CL cs.CV cs.MM | http://creativecommons.org/licenses/by/4.0/ | In this article, we bring together theories of multimodal communication and
computational methods to study how primary school science diagrams combine
multiple expressive resources. We position our work within the field of digital
humanities, and show how annotations informed by multimodality research, which
target expressive resources and discourse structure, allow imposing structure
on the output of computational methods. We illustrate our approach by analysing
two multimodal diagram corpora: the first corpus is intended to support
research on automatic diagram processing, whereas the second is oriented
towards studying diagrams as a mode of communication. Our results show that
multimodally-informed annotations can bring out structural patterns in the
diagrams, which also extend across diagrams that deal with different topics.
| [
{
"created": "Mon, 8 Mar 2021 12:04:06 GMT",
"version": "v1"
}
] | 2021-12-23 | [
[
"Hiippala",
"Tuomo",
""
],
[
"Bateman",
"John A.",
""
]
] |
2103.04736 | Rafael Padilha | Rafael Padilha, Tawfiq Salem, Scott Workman, Fernanda A. Andal\'o,
Anderson Rocha and Nathan Jacobs | Content-Aware Detection of Temporal Metadata Manipulation | null | IEEE Transactions on Information Forensics and Security 2022 | 10.1109/TIFS.2022.3159154 | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Most pictures shared online are accompanied by temporal metadata (i.e., the
day and time they were taken), which makes it possible to associate an image's
content with real-world events. Maliciously manipulating this metadata can
convey a distorted version of reality. In this work, we present the emerging
problem of detecting timestamp manipulation. We propose an end-to-end approach
to verify whether the purported time of capture of an outdoor image is
consistent with its content and geographic location. We consider manipulations
done in the hour and/or month of capture of a photograph. The central idea is
the use of supervised consistency verification, in which we predict the
probability that the image content, capture time, and geographical location are
consistent. We also include a pair of auxiliary tasks, which can be used to
explain the network decision. Our approach improves upon previous work on a
large benchmark dataset, increasing the classification accuracy from 59.0% to
81.1%. We perform an ablation study that highlights the importance of various
components of the method, showing what types of tampering are detectable using
our approach. Finally, we demonstrate how the proposed method can be employed
to estimate a possible time-of-capture in scenarios in which the timestamp is
missing from the metadata.
| [
{
"created": "Mon, 8 Mar 2021 13:16:19 GMT",
"version": "v1"
},
{
"created": "Fri, 11 Mar 2022 12:19:47 GMT",
"version": "v2"
}
] | 2022-03-14 | [
[
"Padilha",
"Rafael",
""
],
[
"Salem",
"Tawfiq",
""
],
[
"Workman",
"Scott",
""
],
[
"Andaló",
"Fernanda A.",
""
],
[
"Rocha",
"Anderson",
""
],
[
"Jacobs",
"Nathan",
""
]
] |
2103.04826 | Jamal Toutouh | Diego Gabriel Rossit, Jamal Toutouh, and Sergio Nesmachnow | Exact and heuristic approaches for multi-objective garbage accumulation
points location in real scenarios | This article has been accepted for publication in the Waste
Management journal | Waste Management. 105:467-481 (2020) | 10.1016/j.wasman.2020.02.016 | null | cs.OH cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Municipal solid waste management is a major challenge for today's urban
societies, because it accounts for a large proportion of the public budget and,
when mishandled, it can lead to environmental and social problems. This work
focuses on the problem of locating waste bins in an urban area, which is
considered to have a strong influence in the overall efficiency of the reverse
logistic chain. This article contributes an exact multiobjective approach
to the waste bin location problem, in which the optimization criteria
considered are: the accessibility to the system (as a quality-of-service
measure), the investment cost, and the required frequency of waste removal from
the bins (as a proxy of the posterior routing costs). In this approach,
different methods to obtain the objectives ideal and nadir values over the
Pareto front are proposed and compared. Then, a family of heuristic methods
based on the PageRank algorithm is proposed which aims to optimize the
accessibility to the system, the amount of collected waste and the installation
cost. The experimental evaluation was performed on real-world scenarios of the
cities of Montevideo, Uruguay, and Bah\'ia Blanca, Argentina. The obtained
results show the competitiveness of the proposed approaches for constructing a
set of candidate solutions that considers the different trade-offs between the
optimization criteria.
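Since the heuristic family builds on PageRank, a compact power-iteration
version is worth recalling (a generic sketch; how the scores feed the bin
location objectives is the papers' contribution and is not shown here):

    import numpy as np

    def pagerank(A, d=0.85, tol=1e-9):
        # A: (n, n) nonnegative adjacency matrix over candidate locations.
        A = np.asarray(A, dtype=float)
        n = A.shape[0]
        out = A.sum(axis=1, keepdims=True)
        P = np.divide(A, out, out=np.full_like(A, 1.0 / n), where=out > 0)
        r = np.full(n, 1.0 / n)
        while True:
            r_next = (1 - d) / n + d * P.T @ r
            if np.abs(r_next - r).sum() < tol:
                return r_next
            r = r_next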
| [
{
"created": "Fri, 5 Mar 2021 13:47:21 GMT",
"version": "v1"
},
{
"created": "Thu, 11 Mar 2021 18:05:15 GMT",
"version": "v2"
}
] | 2021-03-12 | [
[
"Rossit",
"Diego Gabriel",
""
],
[
"Toutouh",
"Jamal",
""
],
[
"Nesmachnow",
"Sergio",
""
]
] |
2103.04838 | Ramanpreet Pahwa Singh | Ramanpreet S Pahwa, Soon Wee Ho, Ren Qin, Richard Chang, Oo Zaw Min,
Wang Jie, Vempati Srinivasa Rao, Tin Lay Nwe, Yanjing Yang, Jens Timo
Neumann, Ramani Pichumani, Thomas Gregorich | Machine-learning based methodologies for 3d x-ray measurement,
characterization and optimization for buried structures in advanced ic
packages | 7 pages, 9 figures | International Wafer-Level Packaging Conference (IWLPC) 2020 | 10.23919/IWLPC52010.2020.9375903 | null | cs.CV | http://creativecommons.org/licenses/by-sa/4.0/ | For over 40 years lithographic silicon scaling has driven circuit integration
and performance improvement in the semiconductor industry. As silicon scaling
slows down, the industry is increasingly dependent on IC package technologies
to contribute to further circuit integration and performance improvements. This
is a paradigm shift and requires the IC package industry to reduce the size and
increase the density of internal interconnects on a scale which has never been
done before. Traditional package characterization and process optimization
relies on destructive techniques such as physical cross-sections and delayering
to extract data from internal package features. These destructive techniques
are not practical with today's advanced packages. In this paper we will
demonstrate how data acquired non-destructively with a 3D X-ray microscope can
be enhanced and optimized using machine learning, and can then be used to
measure, characterize and optimize the design and production of buried
interconnects in advanced IC packages. Test vehicles replicating 2.5D and HBM
construction were designed and fabricated, and digital data was extracted from
these test vehicles using 3D X-ray and machine learning techniques. The
extracted digital data was used to characterize and optimize the design and
production of the interconnects and demonstrates a superior alternative to
destructive physical analysis. We report an mAP of 0.96 for 3D object
detection, a dice score of 0.92 for 3D segmentation, and an average error of
2.1 um for 3D metrology on the test dataset. This paper is the first part of a
multi-part report.
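For reference, the dice score quoted above is the standard overlap metric for
binary segmentation masks; a numpy sketch:

    import numpy as np

    def dice_score(pred, gt, eps=1e-7):
        # pred, gt: boolean 3D masks of the same shape.
        inter = np.logical_and(pred, gt).sum()
        return (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)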
| [
{
"created": "Mon, 8 Mar 2021 15:44:18 GMT",
"version": "v1"
},
{
"created": "Thu, 20 May 2021 02:13:02 GMT",
"version": "v2"
}
] | 2021-05-21 | [
[
"Pahwa",
"Ramanpreet S",
""
],
[
"Ho",
"Soon Wee",
""
],
[
"Qin",
"Ren",
""
],
[
"Chang",
"Richard",
""
],
[
"Min",
"Oo Zaw",
""
],
[
"Jie",
"Wang",
""
],
[
"Rao",
"Vempati Srinivasa",
""
],
[
"Nwe",
"Tin Lay",
""
],
[
"Yang",
"Yanjing",
""
],
[
"Neumann",
"Jens Timo",
""
],
[
"Pichumani",
"Ramani",
""
],
[
"Gregorich",
"Thomas",
""
]
] |
2103.04854 | Mohammadhossein Bahari | Mohammadhossein Bahari, Ismail Nejjar, Alexandre Alahi | Injecting Knowledge in Data-driven Vehicle Trajectory Predictors | Published in Transportation Research: Part C | Transportation Research Part C: Emerging Technologies, 2021 | 10.1016/j.trc.2021.103010 | null | cs.AI cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Vehicle trajectory prediction tasks have been commonly tackled from two
distinct perspectives: either with knowledge-driven methods or more recently
with data-driven ones. On the one hand, we can explicitly implement
domain-knowledge or physical priors such as anticipating that vehicles will
follow the middle of the roads. While this perspective leads to feasible
outputs, it has limited performance due to the difficulty to hand-craft complex
interactions in urban environments. On the other hand, recent works use
data-driven approaches which can learn complex interactions from the data
leading to superior performance. However, generalization, \textit{i.e.}, having
accurate predictions on unseen data, is an issue leading to unrealistic
outputs. In this paper, we propose to learn a "Realistic Residual Block" (RRB),
which effectively connects these two perspectives. Our RRB takes any
off-the-shelf knowledge-driven model and finds the required residuals to add to
the knowledge-aware trajectory. Our proposed method outputs realistic
predictions by confining the residual range and taking into account its
uncertainty. We also constrain our output with Model Predictive Control (MPC)
to satisfy kinematic constraints. Using a publicly available dataset, we show
that our method outperforms previous works in terms of accuracy and
generalization to new scenes. We will release our code and data split here:
https://github.com/vita-epfl/RRB.
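A hedged PyTorch sketch of the residual-confinement idea (the bound and shapes
are assumptions; the actual RRB also models residual uncertainty and applies
an MPC constraint layer):

    import torch

    def rrb_prediction(prior_traj, raw_residual, max_dev=1.5):
        # Knowledge-driven trajectory plus a learned correction squashed into
        # [-max_dev, +max_dev], keeping outputs near the feasible prior.
        return prior_traj + max_dev * torch.tanh(raw_residual)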
| [
{
"created": "Mon, 8 Mar 2021 16:03:09 GMT",
"version": "v1"
},
{
"created": "Fri, 4 Mar 2022 11:22:45 GMT",
"version": "v2"
}
] | 2022-03-07 | [
[
"Bahari",
"Mohammadhossein",
""
],
[
"Nejjar",
"Ismail",
""
],
[
"Alahi",
"Alexandre",
""
]
] |
2103.04885 | Takuya Kurihana | Takuya Kurihana, Elisabeth Moyer, Rebecca Willett, Davis Gilton, and
Ian Foster | Data-driven Cloud Clustering via a Rotationally Invariant Autoencoder | 25 pages. Accepted by IEEE Transactions on Geoscience and Remote
Sensing (TGRS) | IEEE Transactions on Geoscience and Remote Sensing, 2021 | 10.1109/TGRS.2021.3098008 | null | cs.CV physics.ao-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Advanced satellite-borne remote sensing instruments produce high-resolution
multi-spectral data for much of the globe at a daily cadence. These datasets
open up the possibility of improved understanding of cloud dynamics and
feedback, which remain the biggest source of uncertainty in global climate
model projections. As a step towards answering these questions, we describe an
automated rotation-invariant cloud clustering (RICC) method that leverages deep
learning autoencoder technology to organize cloud imagery within large datasets
in an unsupervised fashion, free from assumptions about predefined classes. We
describe both the design and implementation of this method and its evaluation,
which uses a sequence of testing protocols to determine whether the resulting
clusters: (1) are physically reasonable (i.e., embody scientifically relevant
distinctions); (2) capture information on spatial distributions, such as
textures; (3) are cohesive and separable in latent space; and (4) are
rotationally invariant (i.e., insensitive to the orientation of an image).
Results obtained when these evaluation protocols are applied to RICC outputs
suggest that the resultant novel cloud clusters capture meaningful aspects of
cloud physics, are appropriately spatially coherent, and are invariant to
orientations of input images. Our results support the possibility of using an
unsupervised data-driven approach for automated clustering and pattern
discovery in cloud imagery.
| [
{
"created": "Mon, 8 Mar 2021 16:45:14 GMT",
"version": "v1"
},
{
"created": "Thu, 28 Oct 2021 04:13:07 GMT",
"version": "v2"
}
] | 2021-10-29 | [
[
"Kurihana",
"Takuya",
""
],
[
"Moyer",
"Elisabeth",
""
],
[
"Willett",
"Rebecca",
""
],
[
"Gilton",
"Davis",
""
],
[
"Foster",
"Ian",
""
]
] |
2103.04931 | Bartosz Sawicki | Maciej \'Swiechowski, Konrad Godlewski, Bartosz Sawicki, Jacek
Ma\'ndziuk | Monte Carlo Tree Search: A Review of Recent Modifications and
Applications | 99 pages, Accepted to Artificial Intelligence Review journal | Artificial Intelligence Review (2023), vol. 56, 2497-2562 | 10.1007/s10462-022-10228-y | null | cs.AI cs.LG cs.MA | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Monte Carlo Tree Search (MCTS) is a powerful approach to designing
game-playing bots or solving sequential decision problems. The method relies on
intelligent tree search that balances exploration and exploitation. MCTS
performs random sampling in the form of simulations and stores statistics of
actions to make more educated choices in each subsequent iteration. The method
has become a state-of-the-art technique for combinatorial games, however, in
more complex games (e.g. those with high branching factor or real-time ones),
as well as in various practical domains (e.g. transportation, scheduling or
security) an efficient MCTS application often requires its problem-dependent
modification or integration with other techniques. Such domain-specific
modifications and hybrid approaches are the main focus of this survey. The last
major MCTS survey was published in 2012. Contributions that appeared since
its release are of particular interest for this review.
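The exploration-exploitation balance mentioned above is most often realized by
the UCT selection rule; a minimal Python sketch (the node attributes are
assumptions of a typical implementation):

    import math

    def uct_select(node, c=1.4):
        # Pick the child maximizing mean value plus an exploration bonus
        # that shrinks as the child accumulates visits.
        return max(node.children,
                   key=lambda ch: ch.total_value / (ch.visits + 1e-9)
                   + c * math.sqrt(math.log(node.visits + 1)
                                   / (ch.visits + 1e-9)))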
| [
{
"created": "Mon, 8 Mar 2021 17:44:15 GMT",
"version": "v1"
},
{
"created": "Tue, 9 Mar 2021 13:04:22 GMT",
"version": "v2"
},
{
"created": "Wed, 27 Apr 2022 21:48:55 GMT",
"version": "v3"
},
{
"created": "Sat, 11 Jun 2022 09:12:50 GMT",
"version": "v4"
}
] | 2023-04-04 | [
[
"Świechowski",
"Maciej",
""
],
[
"Godlewski",
"Konrad",
""
],
[
"Sawicki",
"Bartosz",
""
],
[
"Mańdziuk",
"Jacek",
""
]
] |
2103.05094 | Abdul Waheed | Abdul Waheed, Muskan Goyal, Deepak Gupta, Ashish Khanna, Fadi
Al-Turjman, Placido Rogerio Pinheiro | CovidGAN: Data Augmentation Using Auxiliary Classifier GAN for Improved
Covid-19 Detection | Accepted at IEEE Access. Received April 30, 2020, accepted May 11,
2020, date of publication May 14, 2020, date of current version May 28, 2020 | IEEE Access, vol. 8, pp. 91916-91923, 2020 | 10.1109/ACCESS.2020.2994762 | null | eess.IV cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | Coronavirus (COVID-19) is a viral disease caused by severe acute respiratory
syndrome coronavirus 2 (SARS-CoV-2). The spread of COVID-19 seems to have a
detrimental effect on the global economy and health. A positive chest X-ray of
infected patients is a crucial step in the battle against COVID-19. Early
results suggest that abnormalities exist in chest X-rays of patients suggestive
of COVID-19. This has led to the introduction of a variety of deep learning
systems and studies have shown that the accuracy of COVID-19 patient detection
through the use of chest X-rays is strongly optimistic. Deep learning networks
like convolutional neural networks (CNNs) need a substantial amount of training
data. Because the outbreak is recent, it is difficult to gather a significant
number of radiographic images in such a short time. Therefore, in this
research, we present a method to generate synthetic chest X-ray (CXR) images by
developing an Auxiliary Classifier Generative Adversarial Network (ACGAN) based
model called CovidGAN. In addition, we demonstrate that the synthetic images
produced from CovidGAN can be utilized to enhance the performance of CNN for
COVID-19 detection. Classification using CNN alone yielded 85% accuracy. By
adding synthetic images produced by CovidGAN, the accuracy increased to 95%. We
hope this method will speed up COVID-19 detection and lead to more robust
systems of radiology.
| [
{
"created": "Mon, 8 Mar 2021 21:53:29 GMT",
"version": "v1"
}
] | 2021-03-10 | [
[
"Waheed",
"Abdul",
""
],
[
"Goyal",
"Muskan",
""
],
[
"Gupta",
"Deepak",
""
],
[
"Khanna",
"Ashish",
""
],
[
"Al-Turjman",
"Fadi",
""
],
[
"Pinheiro",
"Placido Rogerio",
""
]
] |
2103.05132 | Bonaventure F. P. Dossou | Bonaventure F. P. Dossou and Mohammed Sabry | AfriVEC: Word Embedding Models for African Languages. Case Study of Fon
and Nobiin | null | Africa NLP, EACL 2021 | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | From Word2Vec to GloVe, word embedding models have played key roles in the
current state-of-the-art results achieved in Natural Language Processing.
Designed to give significant and unique vectorized representations of words and
entities, those models have proven to efficiently extract similarities and
establish relationships reflecting semantic and contextual meaning among words
and entities. African Languages, representing more than 31% of the worldwide
spoken languages, have recently been the subject of much research. However, to
the best of our knowledge, there are currently very few to no word embedding
models for those languages' words and entities, and none for the languages under
study in this paper. After describing Glove, Word2Vec, and Poincar\'e
embeddings functionalities, we build Word2Vec and Poincar\'e word embedding
models for Fon and Nobiin, which show promising results. We test the
applicability of transfer learning between these models as a landmark for
African Languages to jointly involve in mitigating the scarcity of their
resources, and attempt to provide linguistic and social interpretations of our
results. Our main contribution is to arouse more interest in creating word
embedding models proper to African Languages, ready for use, and that can
significantly improve the performances of Natural Language Processing
downstream tasks on them. The official repository and implementation is at
https://github.com/bonaventuredossou/afrivec
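Training such a model follows the usual gensim recipe; a minimal skip-gram
sketch (the toy corpus and hyperparameters are placeholders, not the Fon or
Nobiin data):

    from gensim.models import Word2Vec

    corpus = [["token_a", "token_b", "token_c"],
              ["token_b", "token_d"]]          # tokenized sentences
    model = Word2Vec(corpus, vector_size=100, window=5, min_count=1, sg=1)
    print(model.wv["token_a"][:5])             # part of a learned 100-d vector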
| [
{
"created": "Mon, 8 Mar 2021 22:58:20 GMT",
"version": "v1"
},
{
"created": "Thu, 18 Mar 2021 05:35:22 GMT",
"version": "v2"
}
] | 2021-03-19 | [
[
"Dossou",
"Bonaventure F. P.",
""
],
[
"Sabry",
"Mohammed",
""
]
] |
2103.05167 | Gihyeon Choi | Gihyeon Choi, Shinhyeok Oh and Harksoo Kim | Improving Document-Level Sentiment Classification Using Importance of
Sentences | 12 pages, 7 figures, 5 tables | Entropy, Vol.22(12), pp.1-11, 2020.11 | 10.3390/e22121336 | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Previous researchers have considered sentiment analysis as a document
classification task, in which input documents are classified into predefined
sentiment classes. Although there are sentences in a document that support
important evidences for sentiment analysis and sentences that do not, they have
treated the document as a bag of sentences. In other words, they have not
considered the importance of each sentence in the document. To effectively
determine polarity of a document, each sentence in the document should be dealt
with different degrees of importance. To address this problem, we propose a
document-level sentiment classification model based on deep neural networks, in
which the importance degrees of sentences in documents are automatically
determined through gate mechanisms. To verify our new sentiment analysis model,
we conducted experiments using the sentiment datasets in the four different
domains such as movie reviews, hotel reviews, restaurant reviews, and music
reviews. In the experiments, the proposed model outperformed previous
state-of-the-art models that do not consider importance differences of
sentences in a document. The experimental results show that the importance of
sentences should be considered in a document-level sentiment classification
task.
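A hedged PyTorch sketch of a gate mechanism of this kind (the dimensions and
exact gating form are assumptions, not the authors' architecture):

    import torch
    import torch.nn as nn

    class SentenceGate(nn.Module):
        def __init__(self, d=256):
            super().__init__()
            self.score = nn.Linear(d, 1)

        def forward(self, sent_vecs):                  # (num_sentences, d)
            g = torch.sigmoid(self.score(sent_vecs))   # importance in [0, 1]
            doc = (g * sent_vecs).sum(dim=0) / (g.sum() + 1e-9)
            return doc                                 # weighted document vector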
| [
{
"created": "Tue, 9 Mar 2021 01:29:08 GMT",
"version": "v1"
}
] | 2021-03-10 | [
[
"Choi",
"Gihyeon",
""
],
[
"Oh",
"Shinhyeok",
""
],
[
"Kim",
"Harksoo",
""
]
] |
2103.05213 | Mingyuan Meng | Mingyuan Meng, Lei Bi, Michael Fulham, David Dagan Feng, and Jinman
Kim | Enhancing Medical Image Registration via Appearance Adjustment Networks | Published at NeuroImage | NeuroImage, vol. 259, pp. 119444, 2022 | 10.1016/j.neuroimage.2022.119444 | null | cs.CV cs.AI eess.IV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Deformable image registration is fundamental for many medical image analyses.
A key obstacle for accurate image registration lies in image appearance
variations such as the variations in texture, intensities, and noise. These
variations are readily apparent in medical images, especially in brain images
where registration is frequently used. Recently, deep learning-based
registration methods (DLRs), using deep neural networks, have shown
computational efficiency that is several orders of magnitude faster than
traditional optimization-based registration methods (ORs). DLRs rely on a
globally optimized network that is trained with a set of training samples to
achieve faster registration. DLRs tend, however, to disregard the
target-pair-specific optimization inherent in ORs and thus have degraded
adaptability to variations in testing samples. This limitation is severe for
registering medical images with large appearance variations, especially since
few existing DLRs explicitly take into account appearance variations. In this
study, we propose an Appearance Adjustment Network (AAN) to enhance the
adaptability of DLRs to appearance variations. Our AAN, when integrated into a
DLR, provides appearance transformations to reduce the appearance variations
during registration. In addition, we propose an anatomy-constrained loss
function through which our AAN generates anatomy-preserving transformations.
Our AAN has been purposely designed to be readily inserted into a wide range of
DLRs and can be trained cooperatively in an unsupervised and end-to-end manner.
We evaluated our AAN with three state-of-the-art DLRs on three well-established
public datasets of 3D brain magnetic resonance imaging (MRI). The results show
that our AAN consistently improved existing DLRs and outperformed
state-of-the-art ORs on registration accuracy, while adding a fractional
computational load to existing DLRs.
| [
{
"created": "Tue, 9 Mar 2021 04:24:48 GMT",
"version": "v1"
},
{
"created": "Mon, 4 Jul 2022 03:10:24 GMT",
"version": "v2"
}
] | 2022-09-20 | [
[
"Meng",
"Mingyuan",
""
],
[
"Bi",
"Lei",
""
],
[
"Fulham",
"Michael",
""
],
[
"Feng",
"David Dagan",
""
],
[
"Kim",
"Jinman",
""
]
] |
2103.05220 | Mingyuan Meng | Bingxin Gu, Mingyuan Meng, Lei Bi, Jinman Kim, David Dagan Feng, and
Shaoli Song | Prediction of 5-year Progression-Free Survival in Advanced
Nasopharyngeal Carcinoma with Pretreatment PET/CT using Multi-Modality Deep
Learning-based Radiomics | Accepted at Frontiers in Oncology | Frontiers in Oncology, vol. 12, pp. 899352, 2022 | 10.3389/fonc.2022.899351 | null | eess.IV cs.CV cs.LG stat.AP | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Objective: Deep Learning-based Radiomics (DLR) has achieved great success in
medical image analysis and has been considered a replacement for conventional
radiomics that relies on handcrafted features. In this study, we aimed to
explore the capability of DLR for the prediction of 5-year Progression-Free
Survival (PFS) in Nasopharyngeal Carcinoma (NPC) using pretreatment PET/CT.
Methods: A total of 257 patients (170/87 in internal/external cohorts) with
advanced NPC (TNM stage III or IVa) were enrolled. We developed an end-to-end
multi-modality DLR model, in which a 3D convolutional neural network was
optimized to extract deep features from pretreatment PET/CT images and predict
the probability of 5-year PFS. TNM stage, as a high-level clinical feature,
could be integrated into our DLR model to further improve the prognostic
performance. To compare conventional radiomics and DLR, 1456 handcrafted
features were extracted, and optimal conventional radiomics methods were
selected from 54 cross-combinations of 6 feature selection methods and 9
classification methods. In addition, risk group stratification was performed
with clinical signature, conventional radiomics signature, and DLR signature.
Results: Our multi-modality DLR model using both PET and CT achieved higher
prognostic performance than the optimal conventional radiomics method.
Furthermore, the multi-modality DLR model outperformed single-modality DLR
models using only PET or only CT. For risk group stratification, the
conventional radiomics signature and DLR signature enabled significant
differences between the high- and low-risk patient groups in both internal and
external cohorts, while the clinical signature failed in the external cohort.
Conclusion: Our study identified potential prognostic tools for survival
prediction in advanced NPC, suggesting that DLR could provide complementary
values to the current TNM staging.
| [
{
"created": "Tue, 9 Mar 2021 04:43:33 GMT",
"version": "v1"
},
{
"created": "Mon, 4 Jul 2022 04:50:30 GMT",
"version": "v2"
}
] | 2022-09-20 | [
[
"Gu",
"Bingxin",
""
],
[
"Meng",
"Mingyuan",
""
],
[
"Bi",
"Lei",
""
],
[
"Kim",
"Jinman",
""
],
[
"Feng",
"David Dagan",
""
],
[
"Song",
"Shaoli",
""
]
] |
2103.05225 | Harel Yedidsion | Harel Yedidsion, Jennifer Suriadinata, Zifan Xu, Stefan Debruyn, Peter
Stone | A Scavenger Hunt for Service Robots | 6 pages + references + Appendix | the 2021 IEEE International Conference on Robotics and Automation
(ICRA), May 30 - June 5, 2021, Xi'an, China | null | null | cs.RO cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Creating robots that can perform general-purpose service tasks in a
human-populated environment has been a longstanding grand challenge for AI and
Robotics research. One particularly valuable skill that is relevant to a wide
variety of tasks is the ability to locate and retrieve objects upon request.
This paper models this skill as a Scavenger Hunt (SH) game, which we formulate
as a variation of the NP-hard stochastic traveling purchaser problem. In this
problem, the goal is to find a set of objects as quickly as possible, given
probability distributions of where they may be found. We investigate the
performance of several solution algorithms for the SH problem, both in
simulation and on a real mobile robot. We use Reinforcement Learning (RL) to
train an agent to plan a minimal cost path, and show that the RL agent can
outperform a range of heuristic algorithms, achieving near-optimal performance.
In order to stimulate research on this problem, we introduce a publicly
available software stack and associated website that enable users to upload
scavenger hunts which robots can download, perform, and learn from to
continually improve their performance on future hunts.
| [
{
"created": "Tue, 9 Mar 2021 05:06:47 GMT",
"version": "v1"
},
{
"created": "Thu, 11 Mar 2021 05:47:08 GMT",
"version": "v2"
},
{
"created": "Mon, 29 Mar 2021 20:57:58 GMT",
"version": "v3"
}
] | 2021-03-31 | [
[
"Yedidsion",
"Harel",
""
],
[
"Suriadinata",
"Jennifer",
""
],
[
"Xu",
"Zifan",
""
],
[
"Debruyn",
"Stefan",
""
],
[
"Stone",
"Peter",
""
]
] |
2103.05467 | Roni Saputra Permana | Liana Ellen Taylor, Midriem Mirdanies, Roni Permana Saputra | Optimized Object Tracking Technique Using Kalman Filter | 10 pages, 14 figures, published in J. Mechatron. Electr. Power Veh.
Technol 07 (2016) 57-66 | J. Mechatron. Electr. Power Veh. Technol 07 (2016) 57-66 | 10.14203/j.mev.2016.v7.57-66 | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | This paper focused on the design of an optimized object tracking technique
which would minimize the processing time required in the object detection
process while maintaining accuracy in detecting the desired moving object in a
cluttered scene. A Kalman filter-based cropped image is used for the detection
process, since the processing time to detect the object is significantly lower
when the search window is smaller than the entire video frame. This technique
was tested with various window sizes in the cropping process. MATLAB was used
to design and test the proposed method. This paper found that using a cropped
window of 2.16 times the largest dimension of the object resulted in
significantly faster processing time while
still providing a high success rate of detection and a detected center of the
object that was reasonably close to the actual center.
| [
{
"created": "Sun, 7 Mar 2021 13:32:31 GMT",
"version": "v1"
}
] | 2021-03-10 | [
[
"Taylor",
"Liana Ellen",
""
],
[
"Mirdanies",
"Midriem",
""
],
[
"Saputra",
"Roni Permana",
""
]
] |
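A minimal sketch of the Kalman-filter-guided cropping described in the abstract above: a constant-velocity filter predicts the object center, and a square search window of side 2.16 times the object's largest dimension is cropped around that prediction. All class, function, and parameter names are illustrative assumptions, not the paper's MATLAB implementation.

```python
import numpy as np

class ConstantVelocityKF:
    """Constant-velocity Kalman filter over the state [x, y, vx, vy]."""
    def __init__(self, dt=1.0):
        self.x = np.zeros(4)                              # state estimate
        self.P = np.eye(4) * 500.0                        # state covariance
        self.F = np.eye(4)                                # transition matrix
        self.F[0, 2] = self.F[1, 3] = dt
        self.H = np.eye(2, 4)                             # we observe (x, y) only
        self.Q = np.eye(4) * 0.01                         # process noise (assumed)
        self.R = np.eye(2) * 4.0                          # measurement noise (assumed)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]                                 # predicted object center

    def update(self, z):
        y = np.asarray(z, dtype=float) - self.H @ self.x  # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)          # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P

def crop_search_window(frame, center, obj_hw, factor=2.16):
    """Crop a square window of side factor * max(object dims), clipped to frame."""
    half = int(round(factor * max(obj_hw) / 2))
    h, w = frame.shape[:2]
    cx, cy = int(center[0]), int(center[1])
    x0, x1 = max(cx - half, 0), min(cx + half, w)
    y0, y1 = max(cy - half, 0), min(cy + half, h)
    return frame[y0:y1, x0:x1], (x0, y0)   # offset maps detections back to frame coords
```

A detector would then run only on the returned crop, and the detected center (shifted by the offset) would feed `update()` for the next frame.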
2103.05481 | Damien Pellier | Damien Pellier, Humbert Fiorino | From Classical to Hierarchical: benchmarks for the HTN Track of the
International Planning Competition | null | Proceedings of the International Planning Competition, ICAPS,
Nancy, France, 2020 | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | In this short paper, we outline nine classical benchmarks submitted to the
first hierarchical planning track of the International Planning Competition in
2020. All of these benchmarks are based on the HDDL language. The choice of the
benchmarks was based on a questionnaire sent to the HTN community. They are the
following: Barman, Childsnack, Rover, Satellite, Blocksworld, Depots, Gripper,
and Hiking. In the rest of the paper we give a short description of these
benchmarks. All are totally ordered.
| [
{
"created": "Tue, 9 Mar 2021 15:11:51 GMT",
"version": "v1"
}
] | 2021-03-10 | [
[
"Pellier",
"Damien",
""
],
[
"Fiorino",
"Humbert",
""
]
] |
2103.05489 | Jan Koh\'ut | Jan Koh\'ut, Michal Hradi\v{s} | TS-Net: OCR Trained to Switch Between Text Transcription Styles | null | ICDAR 2021: Proceedings, Part IV 16 (pp. 478-493) | 10.1007/978-3-030-86337-1_32 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Users of OCR systems, from different institutions and scientific disciplines,
prefer and produce different transcription styles. This presents a problem for
training of consistent text recognition neural networks on real-world data. We
propose to extend existing text recognition networks with a Transcription Style
Block (TSB) which can learn from data to switch between multiple transcription
styles without any explicit knowledge of transcription rules. TSB is an
adaptive instance normalization conditioned on identifiers representing
consistently transcribed documents (e.g. single document, documents by a single
transcriber, or an institution). We show that TSB is able to learn completely
different transcription styles in controlled experiments on artificial data, it
improves text recognition accuracy on large-scale real-world data, and it
learns semantically meaningful transcription style embeddings. We also show how
TSB can efficiently adapt to transcription styles of new documents from
transcriptions of only a few text lines.
| [
{
"created": "Tue, 9 Mar 2021 15:21:40 GMT",
"version": "v1"
},
{
"created": "Mon, 13 Feb 2023 13:26:41 GMT",
"version": "v2"
}
] | 2023-02-14 | [
[
"Kohút",
"Jan",
""
],
[
"Hradiš",
"Michal",
""
]
] |
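A minimal sketch of the Transcription Style Block described in the TS-Net abstract above, modeled as adaptive instance normalization whose per-channel scale and shift come from a learned style-identifier embedding. Layer sizes and names are assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class TranscriptionStyleBlock(nn.Module):
    def __init__(self, num_styles, channels, embed_dim=64):
        super().__init__()
        self.embed = nn.Embedding(num_styles, embed_dim)  # one id per document/source
        self.to_gamma = nn.Linear(embed_dim, channels)    # per-channel scale
        self.to_beta = nn.Linear(embed_dim, channels)     # per-channel shift
        self.norm = nn.InstanceNorm1d(channels, affine=False)

    def forward(self, feats, style_id):
        # feats: (batch, channels, width) sequence features from an OCR backbone
        s = self.embed(style_id)                          # (batch, embed_dim)
        gamma = self.to_gamma(s).unsqueeze(-1)            # (batch, channels, 1)
        beta = self.to_beta(s).unsqueeze(-1)
        return self.norm(feats) * (1 + gamma) + beta

# usage (illustrative): out = tsb(backbone_features, style_ids_long_tensor)
```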
2103.05529 | K. Ruwani Fernando | K. Ruwani M. Fernando and Chris P. Tsokos | Deep and Statistical Learning in Biomedical Imaging: State of the Art in
3D MRI Brain Tumor Segmentation | 21 pages, 7 figures | Information Fusion, Volume 92, 2023, Pages 450-465 | 10.1016/j.inffus.2022.12.013 | null | eess.IV cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Clinical diagnostic and treatment decisions rely upon the integration of
patient-specific data with clinical reasoning. Cancer presents a unique context
that influences treatment decisions, given its diverse forms of disease
evolution. Biomedical imaging allows noninvasive assessment of disease based on
visual evaluations leading to better clinical outcome prediction and
therapeutic planning. Early methods of brain cancer characterization
predominantly relied upon statistical modeling of neuroimaging data. Driven by
the breakthroughs in computer vision, deep learning became the de facto
standard in the domain of medical imaging. Integrated statistical and deep
learning methods have recently emerged as a new direction in the automation of
the medical practice unifying multi-disciplinary knowledge in medicine,
statistics, and artificial intelligence. In this study, we critically review
major statistical and deep learning models and their applications in brain
imaging research with a focus on MRI-based brain tumor segmentation. The
results highlight that model-driven classical statistics and data-driven deep
learning form a potent combination for developing automated systems in
clinical oncology.
| [
{
"created": "Tue, 9 Mar 2021 16:15:47 GMT",
"version": "v1"
},
{
"created": "Fri, 16 Dec 2022 19:07:45 GMT",
"version": "v2"
}
] | 2022-12-23 | [
[
"Fernando",
"K. Ruwani M.",
""
],
[
"Tsokos",
"Chris P.",
""
]
] |
2103.05564 | Marco Pegoraro | Marco Pegoraro, Merih Seran Uysal, Wil M.P. van der Aalst | PROVED: A Tool for Graph Representation and Analysis of Uncertain Event
Data | 11 pages, 6 figures, 1 table, 16 references | Petri Nets (2021) 476-486 | 10.1007/978-3-030-76983-3_24 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The discipline of process mining aims to study processes in a data-driven
manner by analyzing historical process executions, often employing Petri nets.
Event data, extracted from information systems (e.g. SAP), serve as the
starting point for process mining. Recently, novel types of event data have
gathered interest among the process mining community, including uncertain event
data. Uncertain events, process traces and logs contain attributes that are
characterized by quantified imprecisions, e.g., a set of possible attribute
values. The PROVED tool helps to explore, navigate and analyze such uncertain
event data by abstracting the uncertain information using behavior graphs and
nets, which have Petri nets semantics. Based on these constructs, the tool
enables discovery and conformance checking.
| [
{
"created": "Tue, 9 Mar 2021 17:11:54 GMT",
"version": "v1"
},
{
"created": "Mon, 4 Apr 2022 13:34:00 GMT",
"version": "v2"
},
{
"created": "Fri, 8 Apr 2022 09:59:26 GMT",
"version": "v3"
}
] | 2022-04-11 | [
[
"Pegoraro",
"Marco",
""
],
[
"Uysal",
"Merih Seran",
""
],
[
"van der Aalst",
"Wil M. P.",
""
]
] |
2103.05661 | Anca Dragan | Liting Sun, Xiaogang Jia, Anca D. Dragan | On complementing end-to-end human behavior predictors with planning | null | Robotics: Science and Systems, 2021 | null | null | cs.AI cs.RO | http://creativecommons.org/licenses/by/4.0/ | High capacity end-to-end approaches for human motion (behavior) prediction
have the ability to represent subtle nuances in human behavior, but struggle
with robustness to out of distribution inputs and tail events. Planning-based
prediction, on the other hand, can reliably output decent-but-not-great
predictions: it is much more stable in the face of distribution shift (as we
verify in this work), but it has high inductive bias, missing important aspects
that drive human decisions, and ignoring cognitive biases that make human
behavior suboptimal. In this work, we analyze one family of approaches that
strive to get the best of both worlds: use the end-to-end predictor on common
cases, but do not rely on it for tail events / out-of-distribution inputs --
switch to the planning-based predictor there. We contribute an analysis of
different approaches for detecting when to make this switch, using an
autonomous driving domain. We find that promising approaches based on
ensembling or generative modeling of the training distribution might not be
reliable, but that there are very simple methods which can perform surprisingly
well -- including training a classifier to pick up on tell-tale issues in
predicted trajectories.
| [
{
"created": "Tue, 9 Mar 2021 19:02:45 GMT",
"version": "v1"
},
{
"created": "Tue, 13 Jul 2021 01:24:55 GMT",
"version": "v2"
}
] | 2021-07-14 | [
[
"Sun",
"Liting",
""
],
[
"Jia",
"Xiaogang",
""
],
[
"Dragan",
"Anca D.",
""
]
] |
2103.05886 | Hilmil Pradana | Hilmil Pradana and Keiichi Horio | Tuna Nutriment Tracking using Trajectory Mapping in Application to
Aquaculture Fish Tank | null | 2020 Digital Image Computing: Techniques and Applications (DICTA)
(2020) 1-8 | 10.1109/DICTA51227.2020.9363387 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The cost of fish feeding is usually around 40 percent of total production
cost. Estimating a state of fishes in a tank and adjusting an amount of
nutriments play an important role to manage cost of fish feeding system. Our
approach is based on tracking nutriments on videos collected from an active
aquaculture fish farm. Tracking approach is applied to acknowledge movement of
nutriment to understand more about the fish behavior. Recently, there has been
increasing number of researchers focused on developing tracking algorithms to
generate more accurate and faster determination of object. Unfortunately,
recent studies have shown that efficient and robust tracking of multiple
objects with complex relations remain unsolved. Hence, focusing to develop
tracking algorithm in aquaculture is more challenging because tracked object
has a lot of aquatic variant creatures. By following aforementioned problem, we
develop tuna nutriment tracking based on the classical minimum cost problem
which consistently performs well in real environment datasets. In evaluation,
the proposed method achieved 21.32 pixels and 3.08 pixels for average error
distance and standard deviation, respectively. Quantitative evaluation based on
the data generated by human annotators shows that the proposed method is
valuable for aquaculture fish farm and can be widely applied to real
environment datasets.
| [
{
"created": "Wed, 10 Mar 2021 06:02:19 GMT",
"version": "v1"
}
] | 2021-03-11 | [
[
"Pradana",
"Hilmil",
""
],
[
"Horio",
"Keiichi",
""
]
] |
2103.05918 | Dong Shen | Dong Shen, Shuai Zhao, Jinming Hu, Hao Feng, Deng Cai, Xiaofei He | ES-Net: Erasing Salient Parts to Learn More in Re-Identification | 11 pages, 6 figures. Accepted for publication in IEEE Transactions on
Image Processing 2021 | IEEE Transactions on Image Processing, vol. 30, pp. 1676-1686,
2021 | 10.1109/TIP.2020.3046904 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As an instance-level recognition problem, re-identification (re-ID) requires
models to capture diverse features. However, with continuous training, re-ID
models pay more and more attention to the salient areas. As a result, the model
may only focus on a few small regions with salient representations and ignore
other important information. This phenomenon leads to inferior performance,
especially when models are evaluated on small inter-identity variation data. In
this paper, we propose a novel network, Erasing-Salient Net (ES-Net), to learn
comprehensive features by erasing the salient areas in an image. ES-Net
proposes a novel method to locate the salient areas by the confidence of
objects and erases them efficiently in a training batch. Meanwhile, to mitigate
the over-erasing problem, this paper uses a trainable pooling layer P-pooling
that generalizes global max and global average pooling. Experiments are
conducted on two specific re-identification tasks (i.e., Person re-ID, Vehicle
re-ID). Our ES-Net outperforms state-of-the-art methods on three Person re-ID
benchmarks and two Vehicle re-ID benchmarks. Specifically, mAP / Rank-1 rate:
88.6% / 95.7% on Market1501, 78.8% / 89.2% on DukeMTMC-reID, 57.3% / 80.9% on
MSMT17, 81.9% / 97.0% on Veri-776, respectively. Rank-1 / Rank-5 rate: 83.6% /
96.9% on VehicleID (Small), 79.9% / 93.5% on VehicleID (Medium), 76.9% / 90.7%
on VehicleID (Large), respectively. Moreover, the visualized salient areas show
human-interpretable visual explanations for the ranking results.
| [
{
"created": "Wed, 10 Mar 2021 08:19:46 GMT",
"version": "v1"
}
] | 2021-03-11 | [
[
"Shen",
"Dong",
""
],
[
"Zhao",
"Shuai",
""
],
[
"Hu",
"Jinming",
""
],
[
"Feng",
"Hao",
""
],
[
"Cai",
"Deng",
""
],
[
"He",
"Xiaofei",
""
]
] |
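The ES-Net abstract above mentions a trainable pooling layer, P-pooling, that generalizes global max and global average pooling. One standard layer with exactly this property is generalized-mean pooling, sketched below as an assumed stand-in; the paper's exact formulation may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GeneralizedMeanPool2d(nn.Module):
    """p = 1 recovers global average pooling; p -> infinity approaches global max."""
    def __init__(self, p=3.0, eps=1e-6):
        super().__init__()
        self.p = nn.Parameter(torch.tensor(float(p)))  # learned jointly with the network
        self.eps = eps

    def forward(self, x):
        # x: (batch, channels, H, W); clamping keeps the fractional power well-defined
        x = x.clamp(min=self.eps).pow(self.p)
        return F.adaptive_avg_pool2d(x, 1).pow(1.0 / self.p)  # (batch, channels, 1, 1)
```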
2103.05923 | XinZhou Dong | Xinzhou Dong, Beihong Jin, Wei Zhuo, Beibei Li, Taofeng Xue | Improving Sequential Recommendation with Attribute-augmented Graph
Neural Networks | null | The 25th Pacific-Asia Conference on Knowledge Discovery and Data
Mining (PAKDD-2021), May 11-14, 2021, Delhi, India | null | null | cs.IR cs.AI | http://creativecommons.org/licenses/by/4.0/ | Many practical recommender systems provide item recommendation for different
users only via mining user-item interactions, while totally ignoring the rich
attribute information of items that users interact with. In this paper, we
propose an attribute-augmented graph neural network model named Murzim. Murzim
takes as input the graphs constructed from the user-item interaction sequences
and corresponding item attribute sequences. By combining the GNNs with node
aggregation and an attention network, Murzim can capture user preference
patterns, generate embeddings for user-item interaction sequences, and then
generate recommendations through next-item prediction. We conduct extensive
experiments on multiple datasets. Experimental results show that Murzim
outperforms several state-of-the-art methods in terms of recall and MRR, which
illustrates that Murzim can make use of item attribute information to produce
better recommendations. At present, Murzim has been deployed in MX Player, one
of India's largest streaming platforms, and is recommending videos for tens of
thousands of users.
| [
{
"created": "Wed, 10 Mar 2021 08:29:49 GMT",
"version": "v1"
}
] | 2021-03-11 | [
[
"Dong",
"Xinzhou",
""
],
[
"Jin",
"Beihong",
""
],
[
"Zhuo",
"Wei",
""
],
[
"Li",
"Beibei",
""
],
[
"Xue",
"Taofeng",
""
]
] |
2103.05944 | Xiuying Chen | Mingfei Guo, Xiuying Chen, Juntao Li, Dongyan Zhao, Rui Yan | How does Truth Evolve into Fake News? An Empirical Study of Fake News
Evolution | 5 pages, 2 figures | The Web Conference 2021, Workshop on News Recommendation and
Intelligence | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Automatically identifying fake news from the Internet is a challenging
problem in deception detection tasks. Online news is modified constantly during
its propagation, e.g., malicious users distort the original truth and make up
fake news. However, the continuous evolution process would generate
unprecedented fake news and cheat the original model. We present the Fake News
Evolution (FNE) dataset: a new dataset tracking the fake news evolution
process. Our dataset is composed of 950 paired data, each of which consists of
articles representing the three significant phases of the evolution process,
which are the truth, the fake news, and the evolved fake news. During the
evolution we observe features including disinformation techniques, text
similarity, top 10 keywords, classification accuracy, parts of speech, and
sentiment properties.
| [
{
"created": "Wed, 10 Mar 2021 09:01:34 GMT",
"version": "v1"
}
] | 2021-03-11 | [
[
"Guo",
"Mingfei",
""
],
[
"Chen",
"Xiuying",
""
],
[
"Li",
"Juntao",
""
],
[
"Zhao",
"Dongyan",
""
],
[
"Yan",
"Rui",
""
]
] |
2103.05977 | Yuan-Gen Wang | Fu-Zhao Ou, Xingyu Chen, Ruixin Zhang, Yuge Huang, Shaoxin Li, Jilin
Li, Yong Li, Liujuan Cao, and Yuan-Gen Wang | SDD-FIQA: Unsupervised Face Image Quality Assessment with Similarity
Distribution Distance | null | IEEE/CVF Conference on Computer Vision and Pattern Recognition
(CVPR), 2021 | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | In recent years, Face Image Quality Assessment (FIQA) has become an
indispensable part of the face recognition system to guarantee the stability
and reliability of recognition performance in an unconstrained scenario. For
this purpose, the FIQA method should consider both the intrinsic property and
the recognizability of the face image. Most previous works aim to estimate the
sample-wise embedding uncertainty or pair-wise similarity as the quality score,
which only considers partial intra-class information. However, these methods
ignore the valuable inter-class information, which is essential for estimating
the recognizability of a face image. In this work, we argue that a
high-quality face image should be similar to its intra-class samples and
dissimilar to its inter-class samples. Thus, we propose a novel unsupervised
FIQA method that incorporates Similarity Distribution Distance for Face Image
Quality Assessment (SDD-FIQA). Our method generates quality pseudo-labels by
calculating the Wasserstein Distance (WD) between the intra-class similarity
distributions and inter-class similarity distributions. With these quality
pseudo-labels, we are capable of training a regression network for quality
prediction. Extensive experiments on benchmark datasets demonstrate that the
proposed SDD-FIQA surpasses the state of the art by an impressive margin.
Meanwhile, our method shows good generalization across different recognition
systems.
| [
{
"created": "Wed, 10 Mar 2021 10:23:28 GMT",
"version": "v1"
}
] | 2021-03-11 | [
[
"Ou",
"Fu-Zhao",
""
],
[
"Chen",
"Xingyu",
""
],
[
"Zhang",
"Ruixin",
""
],
[
"Huang",
"Yuge",
""
],
[
"Li",
"Shaoxin",
""
],
[
"Li",
"Jilin",
""
],
[
"Li",
"Yong",
""
],
[
"Cao",
"Liujuan",
""
],
[
"Wang",
"Yuan-Gen",
""
]
] |
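A minimal sketch of the pseudo-label generation described in the SDD-FIQA abstract above: the quality score of a face is the Wasserstein distance between its intra-class and inter-class similarity distributions. Function and variable names are illustrative, not the authors' release.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def quality_pseudo_label(query, gallery, labels, query_label):
    """Score one face by the distance between its similarity distributions.

    query: (d,) unit-norm embedding of one face (assumed excluded from gallery)
    gallery: (n, d) unit-norm embeddings; labels: (n,) identity labels
    """
    sims = gallery @ query                      # cosine similarities
    intra = sims[labels == query_label]         # same identity
    inter = sims[labels != query_label]         # other identities
    # A recognizable face is similar to intra-class and dissimilar to
    # inter-class samples, so a larger distance means higher quality.
    return wasserstein_distance(intra, inter)
```

These scores would then serve as regression targets for the quality-prediction network mentioned in the abstract.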
2103.06022 | Stefan Schrunner | Delmon Arous, Stefan Schrunner, Ingunn Hanson, Nina F.J. Edin, Eirik
Malinen | Principal component-based image segmentation: a new approach to outline
in vitro cell colonies | null | Computer Methods in Biomechanics and Biomedical Engineering:
Imaging & Visualization (2022) | 10.1080/21681163.2022.2035822 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The in vitro clonogenic assay is a technique to study the ability of a cell
to form a colony in a culture dish. By optical imaging, dishes with stained
colonies can be scanned and assessed digitally. Identification, segmentation
and counting of stained colonies play a vital part in high-throughput screening
and quantitative assessment of biological assays. Image processing of such
pictured/scanned assays can be affected by image/scan acquisition artifacts
like background noise and spatially varying illumination, and contaminants in
the suspension medium. Although existing approaches tackle these issues, the
segmentation quality requires further improvement, particularly on noisy and
low contrast images. In this work, we present an objective and versatile
machine learning procedure to amend these issues by characterizing, extracting
and segmenting inquired colonies using principal component analysis, k-means
clustering and a modified watershed segmentation algorithm. The intention is to
automatically identify visible colonies through spatial texture assessment and
accordingly discriminate them from background in preparation for successive
segmentation. The proposed segmentation algorithm yielded a similar quality as
manual counting by human observers. High F1 scores (>0.9) and low
root-mean-square errors (around 14%) underlined good agreement with ground
truth data. Moreover, it outperformed a recent state-of-the-art method. The
methodology will be an important tool in future cancer research applications.
| [
{
"created": "Wed, 10 Mar 2021 12:37:51 GMT",
"version": "v1"
}
] | 2022-02-15 | [
[
"Arous",
"Delmon",
""
],
[
"Schrunner",
"Stefan",
""
],
[
"Hanson",
"Ingunn",
""
],
[
"Edin",
"Nina F. J.",
""
],
[
"Malinen",
"Eirik",
""
]
] |
2103.06115 | Veronica Sanz | Gabriela Barenboim, Johannes Hirn and Veronica Sanz | Symmetry meets AI | 8 pages, 8 figures | SciPost Phys. 11, 014 (2021) | 10.21468/SciPostPhys.11.1.014 | null | cs.LG cs.CV hep-ph | http://creativecommons.org/licenses/by/4.0/ | We explore whether Neural Networks (NNs) can {\it discover} the presence of
symmetries as they learn to perform a task. For this, we train hundreds of NNs
on a {\it decoy task} based on well-controlled Physics templates, where no
information on symmetry is provided. We use the output from the last hidden
layer of all these NNs, projected to fewer dimensions, as the input for a
symmetry classification task, and show that information on symmetry had indeed
been identified by the original NN without guidance. As an interdisciplinary
application of this procedure, we identify the presence and level of symmetry
in artistic paintings from different styles such as those of Picasso, Pollock
and Van Gogh.
| [
{
"created": "Wed, 10 Mar 2021 15:12:49 GMT",
"version": "v1"
},
{
"created": "Tue, 29 Jun 2021 11:47:38 GMT",
"version": "v2"
}
] | 2021-07-21 | [
[
"Barenboim",
"Gabriela",
""
],
[
"Hirn",
"Johannes",
""
],
[
"Sanz",
"Veronica",
""
]
] |
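A minimal sketch of the probing pipeline described in the "Symmetry meets AI" abstract above: last-hidden-layer activations of trained networks are projected to fewer dimensions and fed to a symmetry classifier. The choice of PCA and logistic regression, and all names, are assumptions made for illustration.

```python
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def symmetry_probe(activations, symmetry_labels, n_components=8):
    """activations: (n_networks, hidden_dim) last-hidden-layer outputs on a fixed probe.
    symmetry_labels: (n_networks,) symmetry class of each network's training template."""
    z = PCA(n_components=n_components).fit_transform(activations)
    z_tr, z_te, y_tr, y_te = train_test_split(z, symmetry_labels, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(z_tr, y_tr)
    # Above-chance test accuracy indicates symmetry information was encoded
    # by the original networks without any explicit guidance.
    return clf.score(z_te, y_te)
```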
2103.06123 | Hiroshi Yamakawa | Hiroshi Yamakawa | The whole brain architecture approach: Accelerating the development of
artificial general intelligence by referring to the brain | 28 pages, 10 figures, Preprint submitted to Neural Networks | Neural Networks, Volume 144, December 2021, Pages 478-495 | 10.1016/j.neunet.2021.09.004 | null | cs.AI cs.LG cs.NE q-bio.NC | http://creativecommons.org/licenses/by-sa/4.0/ | The vastness of the design space created by the combination of a large number
of computational mechanisms, including machine learning, is an obstacle to
creating an artificial general intelligence (AGI). Brain-inspired AGI
development, in other words, cutting down the design space to look more like a
biological brain, which is an existing model of a general intelligence, is a
promising plan for solving this problem. However, it is difficult for an
individual to design a software program that corresponds to the entire brain
because the neuroscientific data required to understand the architecture of the
brain are extensive and complicated. The whole-brain architecture approach
divides the brain-inspired AGI development process into the task of designing
the brain reference architecture (BRA) -- the flow of information and the
diagram of corresponding components -- and the task of developing each
component using the BRA. This is called BRA-driven development. Another
difficulty lies in the extraction of the operating principles necessary for
reproducing the cognitive-behavioral function of the brain from neuroscience
data. Therefore, this study proposes the Structure-constrained Interface
Decomposition (SCID) method, which is a hypothesis-building method for creating
a hypothetical component diagram consistent with neuroscientific findings. The
application of this approach has begun for building various regions of the
brain. Moving forward, we will examine methods of evaluating the biological
plausibility of brain-inspired software. This evaluation will also be used to
prioritize among the different computational mechanisms associated with the
same regions of the brain, which should be merged.
| [
{
"created": "Sat, 6 Mar 2021 04:58:12 GMT",
"version": "v1"
}
] | 2022-08-16 | [
[
"Yamakawa",
"Hiroshi",
""
]
] |
2103.06168 | Tommaso Di Noto | Tommaso Di Noto, Guillaume Marie, Sebastien Tourbier, Yasser
Alem\'an-G\'omez, Oscar Esteban, Guillaume Saliou, Meritxell Bach Cuadra,
Patric Hagmann, Jonas Richiardi | Towards automated brain aneurysm detection in TOF-MRA: open data, weak
labels, and anatomical knowledge | Paper accepted as Original Article in the journal Neuroinformatics
(https://link.springer.com/article/10.1007/s12021-022-09597-0) | Neuroinformatics, 2022 | 10.1007/s12021-022-09597-0 | null | eess.IV cs.CV | http://creativecommons.org/licenses/by-sa/4.0/ | Brain aneurysm detection in Time-Of-Flight Magnetic Resonance Angiography
(TOF-MRA) has undergone drastic improvements with the advent of Deep Learning
(DL). However, performances of supervised DL models heavily rely on the
quantity of labeled samples, which are extremely costly to obtain. Here, we
present a DL model for aneurysm detection that overcomes the issue with
''weak'' labels: oversized annotations which are considerably faster to create.
Our weak labels proved to be four times faster to generate than their
voxel-wise counterparts. In addition, our model leverages prior anatomical
knowledge by focusing only on plausible locations for aneurysm occurrence. We
first train and evaluate our model through cross-validation on an in-house
TOF-MRA dataset comprising 284 subjects (170 females / 127 healthy controls /
157 patients with 198 aneurysms). On this dataset, our best model achieved a
sensitivity of 83%, with False Positive (FP) rate of 0.8 per patient. To assess
model generalizability, we then participated in a challenge for aneurysm
detection with TOF-MRA data (93 patients, 20 controls, 125 aneurysms). On the
public challenge, sensitivity was 68% (FP rate=2.5), ranking 4th/18 on the open
leaderboard. We found no significant difference in sensitivity between aneurysm
risk-of-rupture groups (p=0.75), locations (p=0.72), or sizes (p=0.15). Data,
code and model weights are released under permissive licenses. We demonstrate
that weak labels and anatomical knowledge can alleviate the necessity for
prohibitively expensive voxel-wise annotations.
| [
{
"created": "Wed, 10 Mar 2021 16:31:54 GMT",
"version": "v1"
},
{
"created": "Thu, 29 Apr 2021 13:03:52 GMT",
"version": "v2"
},
{
"created": "Tue, 28 Sep 2021 15:39:32 GMT",
"version": "v3"
},
{
"created": "Tue, 5 Oct 2021 09:29:49 GMT",
"version": "v4"
},
{
"created": "Fri, 3 Dec 2021 10:12:50 GMT",
"version": "v5"
},
{
"created": "Tue, 23 Aug 2022 15:29:03 GMT",
"version": "v6"
}
] | 2022-08-24 | [
[
"Di Noto",
"Tommaso",
""
],
[
"Marie",
"Guillaume",
""
],
[
"Tourbier",
"Sebastien",
""
],
[
"Alemán-Gómez",
"Yasser",
""
],
[
"Esteban",
"Oscar",
""
],
[
"Saliou",
"Guillaume",
""
],
[
"Cuadra",
"Meritxell Bach",
""
],
[
"Hagmann",
"Patric",
""
],
[
"Richiardi",
"Jonas",
""
]
] |
2103.06182 | Heng Yang | Heng Yang, Chris Doran, Jean-Jacques Slotine | Dynamical Pose Estimation | ICCV 2021 camera ready. Code: https://github.com/hankyang94/DAMP.
Video: https://youtu.be/CDYXR1h98Q4 | ICCV 2021 | null | null | cs.CV cs.RO math.DS | http://creativecommons.org/licenses/by/4.0/ | We study the problem of aligning two sets of 3D geometric primitives given
known correspondences. Our first contribution is to show that this primitive
alignment framework unifies five perception problems including point cloud
registration, primitive (mesh) registration, category-level 3D registration,
absolution pose estimation (APE), and category-level APE. Our second
contribution is to propose DynAMical Pose estimation (DAMP), the first general
and practical algorithm to solve primitive alignment problem by simulating
rigid body dynamics arising from virtual springs and damping, where the springs
span the shortest distances between corresponding primitives. We evaluate DAMP
in simulated and real datasets across all five problems, and demonstrate (i)
DAMP always converges to the globally optimal solution in the first three
problems with 3D-3D correspondences; (ii) although DAMP sometimes converges to
suboptimal solutions in the last two problems with 2D-3D correspondences, using
a scheme for escaping local minima, DAMP always succeeds. Our third
contribution is to demystify the surprising empirical performance of DAMP and
formally prove a global convergence result in the case of point cloud
registration by characterizing the local stability of the equilibrium points of the
underlying dynamical system.
| [
{
"created": "Wed, 10 Mar 2021 17:01:41 GMT",
"version": "v1"
},
{
"created": "Thu, 11 Mar 2021 16:42:33 GMT",
"version": "v2"
},
{
"created": "Thu, 12 Aug 2021 03:08:15 GMT",
"version": "v3"
}
] | 2021-08-13 | [
[
"Yang",
"Heng",
""
],
[
"Doran",
"Chris",
""
],
[
"Slotine",
"Jean-Jacques",
""
]
] |
2103.06205 | Florian Kofler | Florian Kofler, Ivan Ezhov, Fabian Isensee, Fabian Balsiger, Christoph
Berger, Maximilian Koerner, Beatrice Demiray, Julia Rackerseder, Johannes
Paetzold, Hongwei Li, Suprosanna Shit, Richard McKinley, Marie Piraud,
Spyridon Bakas, Claus Zimmer, Nassir Navab, Jan Kirschke, Benedikt Wiestler,
Bjoern Menze | Are we using appropriate segmentation metrics? Identifying correlates of
human expert perception for CNN training beyond rolling the DICE coefficient | Accepted for publication at the Journal of Machine Learning for
Biomedical Imaging (MELBA) https://melba-journal.org/2023:002 | Machine.Learning.for.Biomedical.Imaging. 2 (2023) | null | null | eess.IV cs.CV | http://creativecommons.org/licenses/by/4.0/ | Metrics optimized in complex machine learning tasks are often selected in an
ad-hoc manner. It is unknown how they align with human expert perception. We
explore the correlations between established quantitative segmentation quality
metrics and qualitative evaluations by professionally trained human raters.
Therefore, we conduct psychophysical experiments for two complex biomedical
semantic segmentation problems. We discover that current standard metrics and
loss functions correlate only moderately with the segmentation quality
assessment of experts. Importantly, this effect is particularly pronounced for
clinically relevant structures, such as the enhancing tumor compartment of
glioma in brain magnetic resonance and grey matter in ultrasound imaging. It is
often unclear how to optimize abstract metrics, such as human expert
perception, in convolutional neural network (CNN) training. To cope with this
challenge, we propose a novel strategy employing techniques of classical
statistics to create complementary compound loss functions to better
approximate human expert perception. Across all rating experiments, human
experts consistently scored computer-generated segmentations better than the
human-curated reference labels. Our results, therefore, strongly question many
current practices in medical image segmentation and provide meaningful cues for
future research.
| [
{
"created": "Wed, 10 Mar 2021 17:29:11 GMT",
"version": "v1"
},
{
"created": "Sat, 14 Jan 2023 00:30:42 GMT",
"version": "v2"
},
{
"created": "Sat, 28 Jan 2023 14:45:20 GMT",
"version": "v3"
},
{
"created": "Tue, 2 May 2023 13:42:03 GMT",
"version": "v4"
}
] | 2023-05-03 | [
[
"Kofler",
"Florian",
""
],
[
"Ezhov",
"Ivan",
""
],
[
"Isensee",
"Fabian",
""
],
[
"Balsiger",
"Fabian",
""
],
[
"Berger",
"Christoph",
""
],
[
"Koerner",
"Maximilian",
""
],
[
"Demiray",
"Beatrice",
""
],
[
"Rackerseder",
"Julia",
""
],
[
"Paetzold",
"Johannes",
""
],
[
"Li",
"Hongwei",
""
],
[
"Shit",
"Suprosanna",
""
],
[
"McKinley",
"Richard",
""
],
[
"Piraud",
"Marie",
""
],
[
"Bakas",
"Spyridon",
""
],
[
"Zimmer",
"Claus",
""
],
[
"Navab",
"Nassir",
""
],
[
"Kirschke",
"Jan",
""
],
[
"Wiestler",
"Benedikt",
""
],
[
"Menze",
"Bjoern",
""
]
] |
2103.06304 | Letitia Parcalabescu | Letitia Parcalabescu, Nils Trost, Anette Frank | What is Multimodality? | Paper accepted for publication at MMSR 2021; 10 pages, 5 figures | Proceedings of the 1st Workshop on Multimodal Semantic
Representations (MMSR), 2021, Groningen, Netherlands (Online), Association
for Computational Linguistics, p. 1--10 | null | null | cs.AI cs.CL cs.CV | http://creativecommons.org/licenses/by/4.0/ | The last years have shown rapid developments in the field of multimodal
machine learning, combining e.g., vision, text or speech. In this position
paper we explain how the field uses outdated definitions of multimodality that
prove unfit for the machine learning era. We propose a new task-relative
definition of (multi)modality in the context of multimodal machine learning
that focuses on representations and information that are relevant for a given
machine learning task. With our new definition of multimodality we aim to
provide a missing foundation for multimodal research, an important component of
language grounding and a crucial milestone towards NLU.
| [
{
"created": "Wed, 10 Mar 2021 19:14:07 GMT",
"version": "v1"
},
{
"created": "Sat, 1 May 2021 09:17:44 GMT",
"version": "v2"
},
{
"created": "Thu, 10 Jun 2021 19:32:33 GMT",
"version": "v3"
}
] | 2021-08-23 | [
[
"Parcalabescu",
"Letitia",
""
],
[
"Trost",
"Nils",
""
],
[
"Frank",
"Anette",
""
]
] |
2103.06410 | Chenguang Zhu | Chenguang Zhu, Yang Liu, Jie Mei, Michael Zeng | MediaSum: A Large-scale Media Interview Dataset for Dialogue
Summarization | Dataset: https://github.com/zcgzcgzcg1/MediaSum/ | North American Chapter of the Association for Computational
Linguistics (NAACL), Mexico City, Mexico, 2021 | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper introduces MediaSum, a large-scale media interview dataset consisting of 463.6K
transcripts with abstractive summaries. To create this dataset, we collect
interview transcripts from NPR and CNN and employ the overview and topic
descriptions as summaries. Compared with existing public corpora for dialogue
summarization, our dataset is an order of magnitude larger and contains complex
multi-party conversations from multiple domains. We conduct statistical
analysis to demonstrate the unique positional bias exhibited in the transcripts
of televised and radioed interviews. We also show that MediaSum can be used in
transfer learning to improve a model's performance on other dialogue
summarization tasks.
| [
{
"created": "Thu, 11 Mar 2021 01:47:42 GMT",
"version": "v1"
},
{
"created": "Fri, 12 Mar 2021 01:47:14 GMT",
"version": "v2"
}
] | 2021-03-15 | [
[
"Zhu",
"Chenguang",
""
],
[
"Liu",
"Yang",
""
],
[
"Mei",
"Jie",
""
],
[
"Zeng",
"Michael",
""
]
] |
2103.06501 | Chi Zhang | Chi Zhang, Zihang Lin, Liheng Xu, Zongliang Li, Wei Tang, Yuehu Liu,
Gaofeng Meng, Le Wang, Li Li | Density-aware Haze Image Synthesis by Self-Supervised Content-Style
Disentanglement | 21 pages, 19 figures, 6 tables | IEEE Transactions on Circuits and Systems for Video Technology | 10.1109/TCSVT.2021.3130158 | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The key procedure of haze image translation through adversarial training lies
in the disentanglement between the feature only involved in haze synthesis,
i.e. the style feature, and the feature representing the invariant semantic
content, i.e. the content feature. Previous methods separate the content
feature by utilizing it to classify haze images during the training process.
However, in this paper we recognize the incompleteness of the content-style
disentanglement in such a technical routine. The flawed style feature,
entangled with content information, inevitably leads to ill-rendered haze
images. To address this, we propose a self-supervised style regression via
stochastic linear interpolation to reduce the content information in the style
feature. Ablative experiments demonstrate the completeness of the
disentanglement and its superiority in level-aware haze image synthesis.
Moreover, the generated haze data are applied to test the generalization of
vehicle detectors. A further study of the relation between haze level and
detection performance shows that haze has an obvious impact on the
generalization of vehicle detectors and that the degree of performance
degradation is linearly correlated with the haze level, which, in turn,
validates the effectiveness of the proposed method.
| [
{
"created": "Thu, 11 Mar 2021 06:53:18 GMT",
"version": "v1"
},
{
"created": "Thu, 25 Nov 2021 12:04:35 GMT",
"version": "v2"
}
] | 2021-11-29 | [
[
"Zhang",
"Chi",
""
],
[
"Lin",
"Zihang",
""
],
[
"Xu",
"Liheng",
""
],
[
"Li",
"Zongliang",
""
],
[
"Tang",
"Wei",
""
],
[
"Liu",
"Yuehu",
""
],
[
"Meng",
"Gaofeng",
""
],
[
"Wang",
"Le",
""
],
[
"Li",
"Li",
""
]
] |
2103.06506 | Corey Lammie | Corey Lammie, Jason K. Eshraghian, Wei D. Lu, Mostafa Rahimi Azghadi | Memristive Stochastic Computing for Deep Learning Parameter Optimization | Accepted by IEEE Transactions on Circuits and Systems Part II:
Express Briefs | IEEE Transactions on Circuits and Systems Part II: Express Briefs,
2021 | 10.1109/TCSII.2021.3065932 | null | cs.ET cs.AI cs.AR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Stochastic Computing (SC) is a computing paradigm that allows for the
low-cost and low-power computation of various arithmetic operations using
stochastic bit streams and digital logic. In contrast to conventional
representation schemes used within the binary domain, the sequence of bit
streams in the stochastic domain is inconsequential, and computation is usually
non-deterministic. In this brief, we exploit the stochasticity during switching
of probabilistic Conductive Bridging RAM (CBRAM) devices to efficiently
generate stochastic bit streams in order to perform Deep Learning (DL)
parameter optimization, reducing the size of Multiply and Accumulate (MAC)
units by 5 orders of magnitude. We demonstrate that in using a 40-nm
Complementary Metal Oxide Semiconductor (CMOS) process our scalable
architecture occupies 1.55mm$^2$ and consumes approximately 167$\mu$W when
optimizing parameters of a Convolutional Neural Network (CNN) while it is being
trained for a character recognition task, observing no notable reduction in
accuracy post-training.
| [
{
"created": "Thu, 11 Mar 2021 07:10:32 GMT",
"version": "v1"
}
] | 2021-03-18 | [
[
"Lammie",
"Corey",
""
],
[
"Eshraghian",
"Jason K.",
""
],
[
"Lu",
"Wei D.",
""
],
[
"Azghadi",
"Mostafa Rahimi",
""
]
] |
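A toy illustration of the stochastic-computing arithmetic underlying the brief above: probabilities encoded as random bit streams can be multiplied with a single AND gate per bit, which is what makes the MAC units so small. This is a plain software sketch, not the CBRAM-based stream generator.

```python
import numpy as np

rng = np.random.default_rng(0)

def to_stream(p, length=4096):
    """Encode a probability p in [0, 1] as a unipolar stochastic bit stream."""
    return rng.random(length) < p

a, b = 0.75, 0.40
product_stream = to_stream(a) & to_stream(b)   # one AND gate per bit position
print(product_stream.mean())                   # ~0.30 = a * b, up to sampling noise
```

Accuracy improves with stream length; the bit order is inconsequential, as the abstract notes.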
2103.06511 | Shaoxiong Ji | Shaoxiong Ji and Matti H\"oltt\"a and Pekka Marttinen | Does the Magic of BERT Apply to Medical Code Assignment? A Quantitative
Study | null | Computers in Biology and Medicine, 2021 | 10.1016/j.compbiomed.2021.104998 | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Unsupervised pretraining is an integral part of many natural language
processing systems, and transfer learning with language models has achieved
remarkable results in many downstream tasks. In the clinical application of
medical code assignment, diagnosis and procedure codes are inferred from
lengthy clinical notes such as hospital discharge summaries. However, it is not
clear if pretrained models are useful for medical code prediction without
further architecture engineering. This paper conducts a comprehensive
quantitative analysis of various contextualized language models' performance,
pretrained in different domains, for medical code assignment from clinical
notes. We propose a hierarchical fine-tuning architecture to capture
interactions between distant words and adopt label-wise attention to exploit
label information. Contrary to current trends, we demonstrate that a carefully
trained classical CNN outperforms attention-based models on a MIMIC-III subset
with frequent codes. Our empirical findings suggest directions for improving
the medical code assignment application.
| [
{
"created": "Thu, 11 Mar 2021 07:23:45 GMT",
"version": "v1"
},
{
"created": "Tue, 26 Oct 2021 14:16:45 GMT",
"version": "v2"
}
] | 2022-06-03 | [
[
"Ji",
"Shaoxiong",
""
],
[
"Hölttä",
"Matti",
""
],
[
"Marttinen",
"Pekka",
""
]
] |
2103.06552 | Theodoros Georgiou | Theodoros Georgiou, Sebastian Schmitt, Thomas B\"ack, Nan Pu, Wei
Chen, Michael Lew | PREPRINT: Comparison of deep learning and hand crafted features for
mining simulation data | null | Proceedings of the International Conference on Pattern Recognition
(ICPR) 2020 | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Computational Fluid Dynamics (CFD) simulations are a very important tool for
many industrial applications, such as aerodynamic optimization of engineering
designs like cars shapes, airplanes parts etc. The output of such simulations,
in particular the calculated flow fields, are usually very complex and hard to
interpret for realistic three-dimensional real-world applications, especially
if time-dependent simulations are investigated. Automated data analysis methods
are warranted but a non-trivial obstacle is given by the very large
dimensionality of the data. A flow field typically consists of six measurement
values for each point of the computational grid in 3D space and time (velocity
vector values, turbulent kinetic energy, pressure and viscosity). In this paper
we address the task of extracting meaningful results in an automated manner
from such high dimensional data sets. We propose deep learning methods which
are capable of processing such data and which can be trained to solve relevant
tasks on simulation data, i.e. predicting drag and lift forces applied on an
airfoil. We also propose an adaptation of the classical hand crafted features
known from computer vision to address the same problem and compare a large
variety of descriptors and detectors. Finally, we compile a large dataset of 2D
simulations of the flow field around airfoils, containing 16000 flow fields,
with which we tested and compared the approaches. Our results show that the
deep learning-based methods, as well as the hand-crafted feature-based
approaches, are well capable of accurately describing the content of the CFD simulation output on
the proposed dataset.
| [
{
"created": "Thu, 11 Mar 2021 09:28:00 GMT",
"version": "v1"
}
] | 2021-03-12 | [
[
"Georgiou",
"Theodoros",
""
],
[
"Schmitt",
"Sebastian",
""
],
[
"Bäck",
"Thomas",
""
],
[
"Pu",
"Nan",
""
],
[
"Chen",
"Wei",
""
],
[
"Lew",
"Michael",
""
]
] |
2103.06583 | Theodoros Georgiou | Theodoros Georgiou, Sebastian Schmitt, Thomas B\"ack, Wei Chen,
Michael Lew | Preprint: Norm Loss: An efficient yet effective regularization method
for deep neural networks | null | Proceedings of the International Conference on Pattern Recognition
(ICPR) 2020 | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Convolutional neural network training can suffer from diverse issues like
exploding or vanishing gradients, scaling-based weight space symmetry and
covariant-shift. In order to address these issues, researchers develop weight
regularization methods and activation normalization methods. In this work we
propose a weight soft-regularization method based on the Oblique manifold. The
proposed method uses a loss function which pushes each weight vector to have a
norm close to one, i.e. the weight matrix is smoothly steered toward the
so-called Oblique manifold. We evaluate our method on the very popular
CIFAR-10, CIFAR-100 and ImageNet 2012 datasets using two state-of-the-art
architectures, namely the ResNet and wide-ResNet. Our method introduces
negligible computational overhead and the results show that it is competitive
to the state-of-the-art and in some cases superior to it. Additionally, the
results are less sensitive to hyperparameter settings such as batch size and
regularization factor.
| [
{
"created": "Thu, 11 Mar 2021 10:24:49 GMT",
"version": "v1"
}
] | 2021-03-12 | [
[
"Georgiou",
"Theodoros",
""
],
[
"Schmitt",
"Sebastian",
""
],
[
"Bäck",
"Thomas",
""
],
[
"Chen",
"Wei",
""
],
[
"Lew",
"Michael",
""
]
] |
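A minimal sketch of the norm-loss idea described in the abstract above: each weight vector is softly pushed toward unit norm, smoothly steering the weight matrix toward the Oblique manifold. The regularization factor is a placeholder, not the paper's setting.

```python
import torch

def norm_loss(model, lam=1e-4):
    """Sum of squared deviations of per-output-unit weight norms from 1."""
    loss = 0.0
    for module in model.modules():
        if isinstance(module, (torch.nn.Conv2d, torch.nn.Linear)):
            w = module.weight.flatten(1)                   # one vector per output unit
            loss = loss + ((w.norm(dim=1) - 1.0) ** 2).sum()
    return lam * loss

# usage (illustrative): total = task_loss + norm_loss(net)
```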
2103.06627 | Qiang Meng | Qiang Meng, Shichao Zhao, Zhida Huang, Feng Zhou | MagFace: A Universal Representation for Face Recognition and Quality
Assessment | accepted at CVPR 2021, Oral | IEEE/CVF Conference on Computer Vision and Pattern Recognition
(CVPR), 2021 | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The performance of a face recognition system degrades when the variability of
the acquired faces increases. Prior work alleviates this issue by either
monitoring the face quality in pre-processing or predicting the data
uncertainty along with the face feature. This paper proposes MagFace, a
category of losses that learn a universal feature embedding whose magnitude can
measure the quality of the given face. Under the new loss, it can be proven
that the magnitude of the feature embedding monotonically increases if the
subject is more likely to be recognized. In addition, MagFace introduces an
adaptive mechanism to learn well-structured within-class feature distributions
by pulling easy samples to class centers while pushing hard samples away. This
prevents models from overfitting on noisy low-quality samples and improves face
recognition in the wild. Extensive experiments conducted on face recognition,
quality assessments as well as clustering demonstrate its superiority over
the state of the art. The code is available at
https://github.com/IrvingMeng/MagFace.
| [
{
"created": "Thu, 11 Mar 2021 11:58:21 GMT",
"version": "v1"
},
{
"created": "Mon, 15 Mar 2021 06:50:26 GMT",
"version": "v2"
},
{
"created": "Sat, 3 Apr 2021 04:47:05 GMT",
"version": "v3"
},
{
"created": "Mon, 26 Jul 2021 12:54:25 GMT",
"version": "v4"
}
] | 2021-07-27 | [
[
"Meng",
"Qiang",
""
],
[
"Zhao",
"Shichao",
""
],
[
"Huang",
"Zhida",
""
],
[
"Zhou",
"Feng",
""
]
] |
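A simplified sketch of a magnitude-aware margin in the spirit of the MagFace abstract above: the angular margin grows with the feature magnitude, and a regularizer rewards large magnitudes. All constants are illustrative; consult the released code for the exact loss.

```python
import torch
import torch.nn.functional as F

def magface_style_logits(features, weight, labels, s=64.0,
                         l_a=10.0, u_a=110.0, l_m=0.45, u_m=0.8):
    """features: (batch, d); weight: (classes, d); labels: (batch,) long."""
    a = features.norm(dim=1, keepdim=True).clamp(l_a, u_a)    # feature magnitude
    m = (u_m - l_m) / (u_a - l_a) * (a - l_a) + l_m           # margin grows with a
    cos = F.normalize(features) @ F.normalize(weight).t()     # cosine logits
    theta = torch.acos(cos.clamp(-1 + 1e-7, 1 - 1e-7))
    target = F.one_hot(labels, cos.size(1)).bool()
    cos = torch.where(target, torch.cos(theta + m), cos)      # additive angular margin
    g = 1.0 / a + a / (u_a ** 2)                              # magnitude regularizer
    return s * cos, g.mean()

# usage (illustrative): logits, g = magface_style_logits(f, W, y)
#                       loss = F.cross_entropy(logits, y) + lambda_g * g
```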
2103.06752 | Daniel Vollmers | Daniel Vollmers (1), Rricha Jalota (1), Diego Moussallem (1), Hardik
Topiwala (1), Axel-Cyrille Ngonga Ngomo (1), and Ricardo Usbeck (2) ((1) Data
Science Group, Paderborn University, Germany, (2) Fraunhofer IAIS, Dresden,
Germany) | Knowledge Graph Question Answering using Graph-Pattern Isomorphism | Version published in the proceedings of the 17th International
Conference on Semantic Systems | Further with Knowledge Graphs - Proceedings of the 17th
International Conference on Semantic Systems 53 (2021) 103-117 | 10.3233/SSW210038 | null | cs.AI cs.CL | http://creativecommons.org/licenses/by/4.0/ | Knowledge Graph Question Answering (KGQA) systems are based on machine
learning algorithms, requiring thousands of question-answer pairs as training
examples or natural language processing pipelines that need module fine-tuning.
In this paper, we present a novel QA approach, dubbed TeBaQA. Our approach
learns to answer questions based on graph isomorphisms from basic graph
patterns of SPARQL queries. Learning basic graph patterns is efficient due to
the small number of possible patterns. This novel paradigm reduces the amount
of training data necessary to achieve state-of-the-art performance. TeBaQA also
speeds up the domain adaptation process by transforming the QA system development
task into a much smaller and easier data compilation task. In our evaluation,
TeBaQA achieves state-of-the-art performance on QALD-8 and delivers comparable
results on QALD-9 and LC-QuAD v1. Additionally, we performed a fine-grained
evaluation on complex queries that deal with aggregation and superlative
questions as well as an ablation study, highlighting future research
challenges.
| [
{
"created": "Thu, 11 Mar 2021 16:03:24 GMT",
"version": "v1"
},
{
"created": "Wed, 2 Feb 2022 09:56:34 GMT",
"version": "v2"
}
] | 2022-02-03 | [
[
"Vollmers",
"Daniel",
""
],
[
"Jalota",
"Rricha",
""
],
[
"Moussallem",
"Diego",
""
],
[
"Topiwala",
"Hardik",
""
],
[
"Ngomo",
"Axel-Cyrille Ngonga",
""
],
[
"Usbeck",
"Ricardo",
""
]
] |
2103.06769 | Pierre-Yves Oudeyer | Manfred Eppe and Pierre-Yves Oudeyer | Intelligent behavior depends on the ecological niche: Scaling up AI to
human-like intelligence in socio-cultural environments | Keywords: developmental AI, general artificial intelligence,
human-like AI, embodiment, cultural evolution, language, socio-cultural
skills | KI - K\"unstliche Intelligenz KI - K\"unstliche Intelligenz
(German Journal of Artificial Intelligence), 2021 | 10.1007/s13218-020-00696-1 | null | cs.AI cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper outlines a perspective on the future of AI, discussing directions
for machine models of human-like intelligence. We explain how developmental
and evolutionary theories of human cognition should further inform artificial
intelligence. We emphasize the role of ecological niches in sculpting
intelligent behavior, and in particular that human intelligence was
fundamentally shaped to adapt to a constantly changing socio-cultural
environment. We argue that a major limit of current work in AI is that it is
missing this perspective, both theoretically and experimentally. Finally, we
discuss the promising approach of developmental artificial intelligence,
modeling infant development through multi-scale interaction between
intrinsically motivated learning, embodiment and a rapidly changing
socio-cultural environment. This paper takes the form of an interview of
Pierre-Yves Oudeyer by Manfred Eppe, organized within the context of a KI -
K{\"{u}}nstliche Intelligenz special issue in developmental robotics.
| [
{
"created": "Thu, 11 Mar 2021 16:24:00 GMT",
"version": "v1"
}
] | 2021-03-12 | [
[
"Eppe",
"Manfred",
""
],
[
"Oudeyer",
"Pierre-Yves",
""
]
] |
2103.06854 | Laura Giordano | Laura Giordano, Valentina Gliozzi, Daniele Theseider Dupr\'e | A conditional, a fuzzy and a probabilistic interpretation of
self-organising maps | 31 pages, 1 figure. arXiv admin note: text overlap with
arXiv:2008.13278 | Journal of Logic and Computation, 2022 | 10.1093/logcom/exab082 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we establish a link between fuzzy and preferential semantics
for description logics and Self-Organising Maps, which have been proposed as
possible candidates to explain the psychological mechanisms underlying category
generalisation. In particular, we show that the input/output behavior of a
Self-Organising Map after training can be described by a fuzzy description
logic interpretation as well as by a preferential interpretation, based on a
concept-wise multipreference semantics, which takes into account preferences
with respect to different concepts and has been recently proposed for ranked
and for weighted defeasible description logics. Properties of the network can
be proven by model checking on the fuzzy or on the preferential interpretation.
Starting from the fuzzy interpretation, we also provide a probabilistic account
for this neural network model.
| [
{
"created": "Thu, 11 Mar 2021 18:31:00 GMT",
"version": "v1"
},
{
"created": "Fri, 19 Nov 2021 14:43:54 GMT",
"version": "v2"
}
] | 2022-02-07 | [
[
"Giordano",
"Laura",
""
],
[
"Gliozzi",
"Valentina",
""
],
[
"Dupré",
"Daniele Theseider",
""
]
] |
2103.06874 | Jonathan Clark | Jonathan H. Clark, Dan Garrette, Iulia Turc, John Wieting | CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language
Representation | TACL Final Version | Transactions of the Association for Computational Linguistics
(2022) 10: 73--91 | 10.1162/tacl_a_00448 | null | cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Pipelined NLP systems have largely been superseded by end-to-end neural
modeling, yet nearly all commonly-used models still require an explicit
tokenization step. While recent tokenization approaches based on data-derived
subword lexicons are less brittle than manually engineered tokenizers, these
techniques are not equally suited to all languages, and the use of any fixed
vocabulary may limit a model's ability to adapt. In this paper, we present
CANINE, a neural encoder that operates directly on character sequences, without
explicit tokenization or vocabulary, and a pre-training strategy that operates
either directly on characters or optionally uses subwords as a soft inductive
bias. To use its finer-grained input effectively and efficiently, CANINE
combines downsampling, which reduces the input sequence length, with a deep
transformer stack, which encodes context. CANINE outperforms a comparable mBERT
model by 2.8 F1 on TyDi QA, a challenging multilingual benchmark, despite
having 28% fewer model parameters.
| [
{
"created": "Thu, 11 Mar 2021 18:57:44 GMT",
"version": "v1"
},
{
"created": "Mon, 15 Mar 2021 17:58:09 GMT",
"version": "v2"
},
{
"created": "Wed, 31 Mar 2021 17:55:23 GMT",
"version": "v3"
},
{
"created": "Wed, 18 May 2022 17:42:09 GMT",
"version": "v4"
}
] | 2022-05-19 | [
[
"Clark",
"Jonathan H.",
""
],
[
"Garrette",
"Dan",
""
],
[
"Turc",
"Iulia",
""
],
[
"Wieting",
"John",
""
]
] |
2103.06911 | Qiaojun Feng | Tianyu Zhao, Qiaojun Feng, Sai Jadhav, Nikolay Atanasov | CORSAIR: Convolutional Object Retrieval and Symmetry-AIded Registration | 8 pages, 8 figures | 2021 IEEE/RSJ International Conference on Intelligent Robots and
Systems (IROS), Prague, Czech Republic, 2021, pp. 47-54 | 10.1109/IROS51168.2021.9636347 | null | cs.CV cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper considers online object-level mapping using partial point-cloud
observations obtained online in an unknown environment. We develop an approach
for fully Convolutional Object Retrieval and Symmetry-AIded Registration
(CORSAIR). Our model extends the Fully Convolutional Geometric Features model
to learn a global object-shape embedding in addition to local point-wise
features from the point-cloud observations. The global feature is used to
retrieve a similar object from a category database, and the local features are
used for robust pose registration between the observed and the retrieved
object. Our formulation also leverages symmetries, present in the object
shapes, to obtain promising local-feature pairs from different symmetry classes
for matching. We present results from synthetic and real-world datasets with
different object categories to verify the robustness of our method.
| [
{
"created": "Thu, 11 Mar 2021 19:12:48 GMT",
"version": "v1"
},
{
"created": "Mon, 2 Aug 2021 23:22:06 GMT",
"version": "v2"
},
{
"created": "Sat, 4 Sep 2021 22:55:55 GMT",
"version": "v3"
}
] | 2022-04-26 | [
[
"Zhao",
"Tianyu",
""
],
[
"Feng",
"Qiaojun",
""
],
[
"Jadhav",
"Sai",
""
],
[
"Atanasov",
"Nikolay",
""
]
] |
2103.07156 | Kohei Yamamoto | Kohei Yamamoto | Learnable Companding Quantization for Accurate Low-bit Neural Networks | Accepted at CVPR 2021 | Proceedings of the IEEE/CVF Conference on Computer Vision and
Pattern Recognition (CVPR), 2021, pp. 5029-5038 | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Quantizing deep neural networks is an effective method for reducing memory
consumption and improving inference speed, and is thus useful for
implementation in resource-constrained devices. However, it is still hard for
extremely low-bit models to achieve accuracy comparable with that of
full-precision models. To address this issue, we propose learnable companding
quantization (LCQ) as a novel non-uniform quantization method for 2-, 3-, and
4-bit models. LCQ jointly optimizes model weights and learnable companding
functions that can flexibly and non-uniformly control the quantization levels
of weights and activations. We also present a new weight normalization
technique that allows more stable training for quantization. Experimental
results show that LCQ outperforms conventional state-of-the-art methods and
narrows the gap between quantized and full-precision models for image
classification and object detection tasks. Notably, the 2-bit ResNet-50 model
on ImageNet achieves top-1 accuracy of 75.1% and reduces the gap to 1.7%,
allowing LCQ to further exploit the potential of non-uniform quantization.
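The mechanics of companding quantization can be sketched in a few lines: compress values with a non-linear function, quantize uniformly in the compressed domain, then expand back, which yields non-uniform levels. The fixed mu-law curve below is only a stand-in for LCQ's learnable companding functions, so this shows generic non-uniform quantization rather than LCQ itself.

```python
# Compand -> uniform-quantize -> expand (mu-law stands in for learned curves).
import numpy as np

def compand(x: np.ndarray, mu: float = 255.0) -> np.ndarray:
    return np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)

def expand(y: np.ndarray, mu: float = 255.0) -> np.ndarray:
    return np.sign(y) * np.expm1(np.abs(y) * np.log1p(mu)) / mu

def nonuniform_quantize(x: np.ndarray, bits: int = 2) -> np.ndarray:
    levels = 2 ** bits
    y = compand(np.clip(x, -1.0, 1.0))
    # Uniform levels in the companded domain become non-uniform levels
    # (densely spaced near zero) in the original weight domain.
    q = np.round((y + 1) / 2 * (levels - 1)) / (levels - 1) * 2 - 1
    return expand(q)

w = np.linspace(-1.0, 1.0, 9)
print(np.round(nonuniform_quantize(w, bits=2), 3))
```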
| [
{
"created": "Fri, 12 Mar 2021 09:06:52 GMT",
"version": "v1"
}
] | 2021-11-03 | [
[
"Yamamoto",
"Kohei",
""
]
] |
2103.07202 | Cl\'ement Rambour | Cl\'ement Rambour, Lo\"ic Denis, Florence Tupin, H\'el\`ene Oriot, Yue
Huang, Laurent Ferro-Famil | Urban Surface Reconstruction in SAR Tomography by Graph-Cuts | null | Computer Vision and Image Understanding 188 (2019) 102791 | 10.1016/j.cviu.2019.07.011 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | SAR (Synthetic Aperture Radar) tomography reconstructs 3-D volumes from
stacks of SAR images. High-resolution satellites such as TerraSAR-X provide
images that can be combined to produce 3-D models. In urban areas, sparsity
priors are generally enforced during the tomographic inversion process in order
to retrieve the location of scatterers seen within a given radar resolution
cell. However, such priors often miss parts of the urban surfaces. Those
missing parts are typically regions of flat areas such as ground or rooftops.
This paper introduces a surface segmentation algorithm based on the computation
of the optimal cut in a flow network. This segmentation process can be included
within the 3-D reconstruction framework in order to improve the recovery of
urban surfaces. Illustrations on a TerraSAR-X tomographic dataset demonstrate
the potential of the approach to produce a 3-D model of urban surfaces such as
ground, fa\c{c}ades and rooftops.
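The optimal-cut formulation can be illustrated with a toy flow network: unary capacities encode per-pixel evidence for surface versus background, pairwise capacities encode smoothness, and a max-flow/min-cut solver assigns labels. All capacities below are invented, and networkx stands in for whichever solver the authors used.

```python
# Toy min-cut labelling of four "pixels" into surface vs. background.
import networkx as nx

unary_surface = {"p0": 5.0, "p1": 4.0, "p2": 0.5, "p3": 0.2}     # source side
unary_background = {"p0": 0.5, "p1": 1.0, "p2": 4.0, "p3": 5.0}  # sink side
neighbors = [("p0", "p1"), ("p1", "p2"), ("p2", "p3")]
smoothness = 1.5   # cost of cutting between adjacent pixels

G = nx.DiGraph()
for p in unary_surface:
    G.add_edge("src", p, capacity=unary_surface[p])
    G.add_edge(p, "sink", capacity=unary_background[p])
for a, b in neighbors:
    G.add_edge(a, b, capacity=smoothness)
    G.add_edge(b, a, capacity=smoothness)

cut_value, (surface_side, background_side) = nx.minimum_cut(G, "src", "sink")
print("cut cost:", cut_value)
print("pixels labelled surface:", sorted(surface_side - {"src"}))
```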
| [
{
"created": "Fri, 12 Mar 2021 10:53:18 GMT",
"version": "v1"
}
] | 2021-03-15 | [
[
"Rambour",
"Clément",
""
],
[
"Denis",
"Loïc",
""
],
[
"Tupin",
"Florence",
""
],
[
"Oriot",
"Hélène",
""
],
[
"Huang",
"Yue",
""
],
[
"Ferro-Famil",
"Laurent",
""
]
] |
2103.07278 | Julien Despois | Hugo Thimonier, Julien Despois, Robin Kips, Matthieu Perrot | Learning Long-Term Style-Preserving Blind Video Temporal Consistency | null | 2021 IEEE International Conference on Multimedia and Expo (ICME) | 10.1109/ICME51207.2021.9428445 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | When trying to independently apply image-trained algorithms to successive
frames in videos, noxious flickering tends to appear. State-of-the-art
post-processing techniques that aim at fostering temporal consistency generate
other temporal artifacts and visually alter the style of videos. We propose a
post-processing model, agnostic to the transformation applied to videos (e.g.
style transfer, image manipulation using GANs, etc.), in the form of a
recurrent neural network. Our model is trained using a Ping Pong procedure and
its corresponding loss, recently introduced for GAN video generation, as well
as a novel style-preserving perceptual loss. The former improves long-term
temporal consistency learning, while the latter fosters style preservation. We
evaluate our model on the DAVIS and videvo.net datasets and show that our
approach offers state-of-the-art results concerning flicker removal, and better
keeps the overall style of the videos than previous approaches.
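A minimal sketch of a Ping Pong consistency loss, assuming (as in the GAN video-generation work the paper builds on) that it penalizes disagreement between a forward and a backward pass over the same frames; the toy recurrent filter below is a stand-in for the actual network.

```python
# Forward vs. backward pass over the same frames; penalize the mismatch.
import numpy as np

rng = np.random.default_rng(2)
frames = rng.normal(size=(8, 16, 16, 3))   # toy video: 8 RGB frames

def model(sequence: np.ndarray) -> np.ndarray:
    """Toy recurrent filter: each output mixes the frame with the state."""
    state, outputs = np.zeros_like(sequence[0]), []
    for f in sequence:
        state = 0.8 * f + 0.2 * state
        outputs.append(state)
    return np.stack(outputs)

forward_out = model(frames)
backward_out = model(frames[::-1])[::-1]   # process reversed, then re-align

ping_pong_loss = np.mean((forward_out - backward_out) ** 2)
print("ping-pong loss:", ping_pong_loss)
```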
| [
{
"created": "Fri, 12 Mar 2021 13:54:34 GMT",
"version": "v1"
}
] | 2022-10-06 | [
[
"Thimonier",
"Hugo",
""
],
[
"Despois",
"Julien",
""
],
[
"Kips",
"Robin",
""
],
[
"Perrot",
"Matthieu",
""
]
] |
2103.07492 | Andrea Cossu | Andrea Cossu, Antonio Carta, Vincenzo Lomonaco, Davide Bacciu | Continual Learning for Recurrent Neural Networks: an Empirical
Evaluation | Published in Neural Networks | Neural Networks, Volume 143, 2021, pages 607-627 | 10.1016/j.neunet.2021.07.021 | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Learning continuously throughout a model's lifetime is fundamental to deploying
machine learning solutions robust to drifts in the data distribution. Advances
in Continual Learning (CL) with recurrent neural networks could pave the way to
a large number of applications where incoming data is non-stationary, like
natural language processing and robotics. However, the existing body of work on
the topic is still fragmented, with approaches that are application-specific
and whose assessment is based on heterogeneous learning protocols and datasets.
In this paper, we organize the literature on CL for sequential data processing
by providing a categorization of the contributions and a review of the
benchmarks. We propose two new benchmarks for CL with sequential data based on
existing datasets, whose characteristics resemble real-world applications. We
also provide a broad empirical evaluation of CL and Recurrent Neural Networks
in the class-incremental scenario, by testing their ability to mitigate forgetting
with a number of different strategies which are not specific to sequential data
processing. Our results highlight the key role played by the sequence length
and the importance of a clear specification of the CL scenario.
| [
{
"created": "Fri, 12 Mar 2021 19:25:28 GMT",
"version": "v1"
},
{
"created": "Wed, 24 Mar 2021 11:10:43 GMT",
"version": "v2"
},
{
"created": "Fri, 28 May 2021 08:25:39 GMT",
"version": "v3"
},
{
"created": "Mon, 2 Aug 2021 14:06:51 GMT",
"version": "v4"
}
] | 2021-08-03 | [
[
"Cossu",
"Andrea",
""
],
[
"Carta",
"Antonio",
""
],
[
"Lomonaco",
"Vincenzo",
""
],
[
"Bacciu",
"Davide",
""
]
] |
2103.07538 | Sandeep Soni | Sandeep Soni and Lauren Klein and Jacob Eisenstein | Abolitionist Networks: Modeling Language Change in Nineteenth-Century
Activist Newspapers | 23 pages, 6 figures, 2 tables | Journal of Cultural Analytics (2021) | null | null | cs.CL cs.CY cs.DL cs.SI | http://creativecommons.org/licenses/by/4.0/ | The abolitionist movement of the nineteenth-century United States remains
among the most significant social and political movements in US history.
Abolitionist newspapers played a crucial role in spreading information and
shaping public opinion around a range of issues relating to the abolition of
slavery. These newspapers also serve as a primary source of information about
the movement for scholars today, resulting in powerful new accounts of the
movement and its leaders. This paper supplements recent qualitative work on the
role of women in abolition's vanguard, as well as the role of the Black press,
with a quantitative text modeling approach. Using diachronic word embeddings,
we identify which newspapers tended to lead lexical semantic innovations -- the
introduction of new usages of specific words -- and which newspapers tended to
follow. We then aggregate the evidence across hundreds of changes into a
weighted network with the newspapers as nodes; directed edge weights represent
the frequency with which each newspaper led the other in the adoption of a
lexical semantic change. Analysis of this network reveals pathways of lexical
semantic influence, distinguishing leaders from followers, as well as others
who stood apart from the semantic changes that swept through this period. More
specifically, we find that two newspapers edited by women -- THE PROVINCIAL
FREEMAN and THE LILY -- led a large number of semantic changes in our corpus,
lending additional credence to the argument that a multiracial coalition of
women led the abolitionist movement in terms of both thought and action. It
also contributes additional complexity to the scholarship that has sought to
tease apart the relation of the abolitionist movement to the women's suffrage
movement, and the vexed racial politics that characterized their relation.
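The aggregation step lends itself to a short sketch: each detected lexical semantic change yields a (leader, follower) event, and events accumulate into directed edge weights. Newspaper names and events below are placeholders, not corpus findings.

```python
# Accumulate lead/follow events into a weighted directed network.
import networkx as nx

lead_events = [
    ("Paper A", "Paper B"), ("Paper A", "Paper B"),
    ("Paper A", "Paper C"), ("Paper B", "Paper C"),
]

G = nx.DiGraph()
for leader, follower in lead_events:
    if G.has_edge(leader, follower):
        G[leader][follower]["weight"] += 1
    else:
        G.add_edge(leader, follower, weight=1)

# Weighted out-degree as a crude "semantic leadership" score.
leadership = dict(G.out_degree(weight="weight"))
print(sorted(leadership.items(), key=lambda kv: -kv[1]))
```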
| [
{
"created": "Fri, 12 Mar 2021 21:26:30 GMT",
"version": "v1"
}
] | 2021-03-16 | [
[
"Soni",
"Sandeep",
""
],
[
"Klein",
"Lauren",
""
],
[
"Eisenstein",
"Jacob",
""
]
] |
2103.07609 | Kristina Monakhova | Kristina Monakhova, Vi Tran, Grace Kuo, Laura Waller | Untrained networks for compressive lensless photography | 17 pages, 8 figures | Optics Express Vol. 29, Issue 13, pp. 20913-20929 (2021) | 10.1364/OE.424075 | null | eess.IV cs.CV physics.optics | http://creativecommons.org/licenses/by/4.0/ | Compressive lensless imagers enable novel applications in an extremely
compact device, requiring only a phase or amplitude mask placed close to the
sensor. They have been demonstrated for 2D and 3D microscopy, single-shot
video, and single-shot hyperspectral imaging; in each of these cases, a
compressive-sensing-based inverse problem is solved in order to recover a 3D
data-cube from a 2D measurement. Typically, this is accomplished using convex
optimization and hand-picked priors. Alternatively, deep learning-based
reconstruction methods offer the promise of better priors, but require many
thousands of ground truth training pairs, which can be difficult or impossible
to acquire. In this work, we propose the use of untrained networks for
compressive image recovery. Our approach does not require any labeled training
data, but instead uses the measurement itself to update the network weights. We
demonstrate our untrained approach on lensless compressive 2D imaging as well
as single-shot high-speed video recovery using the camera's rolling shutter,
and single-shot hyperspectral imaging. We provide simulation and experimental
verification, showing that our method results in improved image quality over
existing methods.
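The core idea admits a tiny sketch: with a known forward model A and a single measurement b, fit the weights of a randomly initialized network so that A applied to its output reproduces b, using no training data. Sizes below are deliberately small so a generic optimizer suffices; real reconstructions use convolutional networks and autograd frameworks.

```python
# Untrained-network recovery: the measurement itself drives the weights.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
n_pix, n_meas, hidden = 12, 6, 8
A = rng.normal(size=(n_meas, n_pix))        # known imaging forward model
x_true = np.clip(rng.normal(0.5, 0.3, n_pix), 0, 1)
b = A @ x_true                               # the single measurement
z = rng.normal(size=4)                       # fixed random network input

def net(w: np.ndarray) -> np.ndarray:
    """Two-layer tanh MLP; the recovered image is its output."""
    W1 = w[: 4 * hidden].reshape(hidden, 4)
    W2 = w[4 * hidden:].reshape(n_pix, hidden)
    return W2 @ np.tanh(W1 @ z)

def loss(w: np.ndarray) -> float:
    return float(np.sum((A @ net(w) - b) ** 2))

w0 = rng.normal(scale=0.1, size=4 * hidden + n_pix * hidden)
res = minimize(loss, w0, method="L-BFGS-B")
print("measurement misfit after fitting:", loss(res.x))
```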
| [
{
"created": "Sat, 13 Mar 2021 03:47:06 GMT",
"version": "v1"
},
{
"created": "Tue, 22 Jun 2021 01:01:25 GMT",
"version": "v2"
}
] | 2021-06-23 | [
[
"Monakhova",
"Kristina",
""
],
[
"Tran",
"Vi",
""
],
[
"Kuo",
"Grace",
""
],
[
"Waller",
"Laura",
""
]
] |
2103.07612 | Matloob Khushi Dr | Mimi Mukherjee and Matloob Khushi | SMOTE-ENC: A novel SMOTE-based method to generate synthetic data for
nominal and continuous features | null | Appl. Syst. Innov. 2021, 4, 18 | 10.3390/asi4010018 | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Real world datasets are heavily skewed where some classes are significantly
outnumbered by the other classes. In these situations, machine learning
algorithms fail to achieve substantial efficacy while predicting these
under-represented instances. To solve this problem, many variations of
synthetic minority over-sampling methods (SMOTE) have been proposed to balance
the dataset which deals with continuous features. However, for datasets with
both nominal and continuous features, SMOTE-NC is the only SMOTE-based
over-sampling technique to balance the data. In this paper, we present a novel
minority over-sampling method, SMOTE-ENC (SMOTE - Encoded Nominal and
Continuous), in which nominal features are encoded as numeric values and the
difference between two such numeric values reflects the amount of change of
association with the minority class. Our experiments show that the classification
model using the SMOTE-ENC method offers better prediction than the model using
SMOTE-NC
when the dataset has a substantial number of nominal features and also when
there is some association between the categorical features and the target
class. Additionally, our proposed method addresses one of the major limitations
of the SMOTE-NC algorithm: SMOTE-NC can be applied only to mixed datasets
containing both continuous and nominal features, and cannot function if all the
features of the dataset are nominal. Our novel method has been generalized to
apply to both mixed datasets and nominal-only datasets.
The code is available from mkhushi.github.io
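A loose sketch of the encoding idea only (not the authors' exact formula; their released code is the reference): map each nominal category to a number reflecting how far its minority-class rate deviates from the global rate, so that SMOTE-style interpolation between samples becomes meaningful.

```python
# Encode nominal categories by their association with the minority class.
import numpy as np

colour = np.array(["red", "red", "red", "blue",
                   "blue", "blue", "green", "green"])
y = np.array([1, 1, 0, 0, 0, 0, 1, 0])    # 1 = minority class

minority_rate = y.mean()                   # global minority fraction
encoded = np.empty(len(colour))
for cat in np.unique(colour):
    mask = colour == cat
    # Deviation from the global rate: > 0 means the category leans
    # toward the minority class, < 0 toward the majority class.
    encoded[mask] = y[mask].mean() - minority_rate

print(dict(zip(colour, np.round(encoded, 3))))
```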
| [
{
"created": "Sat, 13 Mar 2021 04:16:17 GMT",
"version": "v1"
}
] | 2021-03-16 | [
[
"Mukherjee",
"Mimi",
""
],
[
"Khushi",
"Matloob",
""
]
] |
2103.07678 | Ehsan Farahbakhsh | Hojat Shirmard, Ehsan Farahbakhsh, R. Dietmar Muller, Rohitash Chandra | A review of machine learning in processing remote sensing data for
mineral exploration | 26 pages, 4 figures, 2 tables | Remote Sensing of Environment, 268, 112750 (2022) | 10.1016/j.rse.2021.112750 | null | cs.LG cs.CV stat.AP | http://creativecommons.org/licenses/by/4.0/ | The decline of the number of newly discovered mineral deposits and increase
in demand for different minerals in recent years has led exploration geologists
to look for more efficient and innovative methods for processing different data
types at each stage of mineral exploration. As a primary step, various
features, such as lithological units, alteration types, structures, and
indicator minerals, are mapped to aid decision-making in targeting ore
deposits. Different types of remote sensing datasets, such as satellite and
airborne data, make it possible to overcome common problems associated with
mapping geological features. The rapid increase in the volume of remote sensing
data obtained from different platforms has encouraged scientists to develop
advanced, innovative, and robust data processing methodologies. Machine
learning methods can help process a wide range of remote sensing datasets and
determine the relationship between components such as the reflectance continuum
and features of interest. These methods are robust in processing spectral and
ground truth measurements against noise and uncertainties. In recent years,
many studies have been carried out by supplementing geological surveys with
remote sensing datasets, which is now prominent in geoscience research. This
paper provides a comprehensive review of the implementation and adaptation of
some popular and recently established machine learning methods for processing
different types of remote sensing data and investigates their applications for
detecting various ore deposit types. We demonstrate the high capability of
combining remote sensing data and machine learning methods for mapping
different geological features that are critical for providing potential maps.
Moreover, we find there is scope for advanced methods to process the new
generation of remote sensing data for creating improved mineral prospectivity
maps.
| [
{
"created": "Sat, 13 Mar 2021 10:36:25 GMT",
"version": "v1"
},
{
"created": "Sat, 4 Dec 2021 07:11:24 GMT",
"version": "v2"
}
] | 2021-12-07 | [
[
"Shirmard",
"Hojat",
""
],
[
"Farahbakhsh",
"Ehsan",
""
],
[
"Muller",
"R. Dietmar",
""
],
[
"Chandra",
"Rohitash",
""
]
] |
2103.07762 | Bonaventure F. P. Dossou | Bonaventure F. P. Dossou and Chris C. Emezue | OkwuGb\'e: End-to-End Speech Recognition for Fon and Igbo | null | African NLP, EACL 2021 | null | null | cs.CL cs.AI cs.CY | http://creativecommons.org/licenses/by/4.0/ | Language is inherent and compulsory for human communication. Whether
expressed in a written or spoken way, it ensures understanding between people
of the same and different regions. With the growing awareness and effort to
include more low-resourced languages in NLP research, African languages have
recently been a major subject of research in machine translation, and other
text-based areas of NLP. However, there is still very little comparable
research in speech recognition for African languages. Interestingly, some of
the unique properties of African languages affecting NLP, like their
diacritical and tonal complexities, have a major root in their speech,
suggesting that careful speech interpretation could provide more intuition on
how to deal with the linguistic complexities of African languages for
text-based NLP. OkwuGb\'e is a step towards building speech recognition systems
for African low-resourced languages. Using Fon and Igbo as our case study, we
conduct a comprehensive linguistic analysis of each language and describe the
creation of end-to-end, deep neural network-based speech recognition models for
both languages. We present a state-of-the-art ASR model for Fon, as well as
benchmark ASR model results for Igbo. Our linguistic analyses (for Fon and
Igbo) provide valuable insights and guidance into the creation of speech
recognition models for other African low-resourced languages, as well as guide
future NLP research for Fon and Igbo. The Fon and Igbo models source code have
been made publicly available.
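The abstract does not state the training objective, but end-to-end speech models of this kind are commonly trained with CTC; purely as an illustration of how frame-level outputs become text, the sketch below greedily decodes a toy frame-by-symbol score matrix.

```python
# Greedy CTC decoding: best symbol per frame, collapse repeats, drop blanks.
import numpy as np

BLANK = 0
alphabet = {1: "a", 2: "k", 3: "w", 4: "u"}   # toy symbol table

def greedy_ctc_decode(scores: np.ndarray) -> str:
    """scores: (frames, symbols) array of per-frame scores."""
    best = scores.argmax(axis=1)
    out, prev = [], BLANK
    for s in best:
        if s != prev and s != BLANK:
            out.append(alphabet[int(s)])
        prev = s
    return "".join(out)

scores = np.full((8, 5), -5.0)
path = [1, 1, 0, 2, 2, 0, 3, 4]               # "a a _ k k _ w u"
scores[np.arange(8), path] = 5.0
print(greedy_ctc_decode(scores))              # -> "akwu"
```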
| [
{
"created": "Sat, 13 Mar 2021 18:02:44 GMT",
"version": "v1"
},
{
"created": "Tue, 16 Mar 2021 04:35:06 GMT",
"version": "v2"
}
] | 2021-03-17 | [
[
"Dossou",
"Bonaventure F. P.",
""
],
[
"Emezue",
"Chris C.",
""
]
] |
2103.07768 | Robin Swezey | Robin Swezey, Bruno Charron | Large-scale Recommendation for Portfolio Optimization | null | In Proceedings of the 12th ACM Conference on Recommender Systems
(RecSys 2018). Association for Computing Machinery, New York, NY, USA,
382-386 | 10.1145/3240323.3240386 | null | cs.AI cs.CE cs.IR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Individual investors are now massively using online brokers to trade stocks
with convenient interfaces and low fees, albeit losing the advice and
personalization traditionally provided by full-service brokers. We frame the
problem faced by online brokers of replicating this level of service in a
low-cost and automated manner for a very large number of users. Because of the
care required in recommending financial products, we focus on a risk-management
approach tailored to each user's portfolio and risk profile. We show that our
hybrid approach, based on Modern Portfolio Theory and Collaborative Filtering,
provides a sound and effective solution. The method is applicable to stocks as
well as other financial assets, and can be easily combined with various
financial forecasting models. We validate our proposal by comparing it with
several baselines in a domain expert-based study.
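The Modern Portfolio Theory building block the paper starts from has a closed form: the minimum-variance portfolio is w = S^-1 1 / (1' S^-1 1) for covariance S. The covariance below is invented, and the collaborative-filtering half of the hybrid is not shown.

```python
# Closed-form minimum-variance portfolio weights for a toy covariance.
import numpy as np

cov = np.array([[0.10, 0.02, 0.04],
                [0.02, 0.08, 0.01],
                [0.04, 0.01, 0.12]])   # toy 3-asset covariance

ones = np.ones(len(cov))
inv = np.linalg.inv(cov)
weights = inv @ ones / (ones @ inv @ ones)   # minimum-variance weights

print("weights:", np.round(weights, 3), "| sum:", round(weights.sum(), 3))
print("portfolio variance:", round(float(weights @ cov @ weights), 5))
```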
| [
{
"created": "Sat, 13 Mar 2021 18:22:48 GMT",
"version": "v1"
}
] | 2021-03-16 | [
[
"Swezey",
"Robin",
""
],
[
"Charron",
"Bruno",
""
]
] |