id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2102.05229 | Binjie Qin | Dongdong Hao, Song Ding, Linwei Qiu, Yisong Lv, Baowei Fei, Yueqi Zhu,
Binjie Qin | Sequential vessel segmentation via deep channel attention network | 14 | Neural Networks, 2020 | 10.1016/j.neunet.2020.05.005 | null | cs.CV physics.med-ph | http://creativecommons.org/licenses/by-nc-sa/4.0/ | This paper develops a novel encoder-decoder deep network architecture which
exploits several contextual frames of 2D+t sequential images in a sliding
window centered at the current frame to segment 2D vessel masks from the current
frame. The architecture is equipped with temporal-spatial feature extraction in
the encoder stage, feature fusion in the skip connection layers, and a channel
attention mechanism in the decoder stage. In the encoder stage, a series of 3D convolutional
layers are employed to hierarchically extract temporal-spatial features. Skip
connection layers subsequently fuse the temporal-spatial feature maps and
deliver them to the corresponding decoder stages. To efficiently discriminate
vessel features from the complex and noisy backgrounds in the XCA images, the
decoder stage effectively utilizes channel attention blocks to refine the
intermediate feature maps from skip connection layers for subsequently decoding
the refined features in a 2D manner to produce the segmented vessel masks.
Furthermore, a Dice loss function is implemented to train the proposed deep
network in order to tackle the class imbalance problem in the XCA data due to
the wide distribution of complex background artifacts. Extensive experiments
comparing our method with other state-of-the-art algorithms demonstrate the
proposed method's superior performance in terms of quantitative metrics and
visual validation. The source code is available at
https://github.com/Binjie-Qin/SVS-net
| [
{
"created": "Wed, 10 Feb 2021 02:45:08 GMT",
"version": "v1"
}
] | 2021-02-11 | [
[
"Hao",
"Dongdong",
""
],
[
"Ding",
"Song",
""
],
[
"Qiu",
"Linwei",
""
],
[
"Lv",
"Yisong",
""
],
[
"Fei",
"Baowei",
""
],
[
"Zhu",
"Yueqi",
""
],
[
"Qin",
"Binjie",
""
]
] |
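For readers unfamiliar with the Dice loss that the abstract above uses to handle class imbalance, a minimal PyTorch-style sketch follows; the tensor shapes, smoothing constant, and function name are illustrative assumptions, not details taken from the SVS-net repository.

```python
import torch

def soft_dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss for binary segmentation masks.

    pred:   (N, 1, H, W) sigmoid probabilities in [0, 1]
    target: (N, 1, H, W) binary ground-truth masks
    """
    pred = pred.flatten(1)        # (N, H*W)
    target = target.flatten(1)    # (N, H*W)
    intersection = (pred * target).sum(dim=1)
    denom = pred.sum(dim=1) + target.sum(dim=1)
    dice = (2.0 * intersection + eps) / (denom + eps)
    return 1.0 - dice.mean()      # minimize 1 - Dice coefficient
```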
2102.05260 | Sm Zobaed | Sm Zobaed, Md Enamul Haque, Md Fazle Rabby, and Mohsen Amini Salehi | SensPick: Sense Picking for Word Sense Disambiguation | null | 16th IEEE International Conference on Semantic Computing,
ICSC'2021 | null | null | cs.CL cs.IR | http://creativecommons.org/publicdomain/zero/1.0/ | Word sense disambiguation (WSD) methods identify the most suitable meaning of
a word with respect to the usage of that word in a specific context. Neural
network-based WSD approaches rely on a sense-annotated corpus since they do not
utilize lexical resources. In this study, we utilize both context and related
gloss information of a target word to model the semantic relationship between
the word and the set of glosses. We propose SensPick, a type of stacked
bidirectional Long Short Term Memory (LSTM) network to perform the WSD task.
The experimental evaluation demonstrates that SensPick outperforms traditional
and state-of-the-art models on most of the benchmark datasets with a relative
improvement of 3.5% in F-1 score. While the improvement is not significant,
incorporating semantic relationships puts SensPick in the leading position
compared to the others.
| [
{
"created": "Wed, 10 Feb 2021 04:52:42 GMT",
"version": "v1"
}
] | 2021-02-11 | [
[
"Zobaed",
"Sm",
""
],
[
"Haque",
"Md Enamul",
""
],
[
"Rabby",
"Md Fazle",
""
],
[
"Salehi",
"Mohsen Amini",
""
]
] |
2102.05263 | Santiago Ontanon | Robert C. Gray, Jichen Zhu, Santiago Onta\~n\'on | Regression Oracles and Exploration Strategies for Short-Horizon
Multi-Armed Bandits | 8 pages | In proceedings of the 2020 IEEE Conference on Games (CoG) (pp.
312-319) | null | null | cs.LG cs.AI stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper explores multi-armed bandit (MAB) strategies in very short horizon
scenarios, i.e., when the bandit strategy is only allowed very few interactions
with the environment. This is an understudied setting in the MAB literature
with many applications in the context of games, such as player modeling.
Specifically, we pursue three different ideas. First, we explore the use of
regression oracles, which replace the simple average used in strategies such as
epsilon-greedy with linear regression models. Second, we examine different
exploration patterns such as forced exploration phases. Finally, we introduce a
new variant of the UCB1 strategy called UCBT that has interesting properties
and no tunable parameters. We present experimental results in a domain
motivated by exergames, where the goal is to maximize a player's daily steps.
Our results show that the combination of epsilon-greedy or epsilon-decreasing
with regression oracles outperforms all other tested strategies in the short
horizon setting.
| [
{
"created": "Wed, 10 Feb 2021 04:58:44 GMT",
"version": "v1"
}
] | 2021-02-11 | [
[
"Gray",
"Robert C.",
""
],
[
"Zhu",
"Jichen",
""
],
[
"Ontañón",
"Santiago",
""
]
] |
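As context for the UCB1 baseline mentioned in the abstract above, a minimal sketch of the standard UCB1 arm-selection rule is given below; it does not reproduce the paper's UCBT variant or its regression oracles, and the variable names are illustrative.

```python
import math

def ucb1_select(pulls, total_rewards, t):
    """Standard UCB1 arm selection (not the paper's UCBT variant).

    pulls:         number of times each arm has been played
    total_rewards: cumulative reward observed for each arm
    t:             total number of plays so far (t >= 1)
    """
    # Play each arm once before applying the confidence bound.
    for arm, n in enumerate(pulls):
        if n == 0:
            return arm
    scores = [
        total_rewards[a] / pulls[a] + math.sqrt(2.0 * math.log(t) / pulls[a])
        for a in range(len(pulls))
    ]
    return max(range(len(scores)), key=scores.__getitem__)
```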
2102.05264 | Santiago Ontanon | Robert C. Gray, Jichen Zhu, Dannielle Arigo, Evan Forman and Santiago
Onta\~n\'on | Player Modeling via Multi-Armed Bandits | null | In Proceedings of the International Conference on the Foundations
of Digital Games (FDG 2020) | null | null | cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper focuses on building personalized player models solely from player
behavior in the context of adaptive games. We present two main contributions:
The first is a novel approach to player modeling based on multi-armed bandits
(MABs). This approach addresses, at the same time and in a principled way, both
the problem of collecting data to model the characteristics of interest for the
current player and the problem of adapting the interactive experience based on
this model. Second, we present an approach to evaluating and fine-tuning these
algorithms prior to generating data in a user study. This is an important
problem, because conducting user studies is an expensive and labor-intensive
process; therefore, an ability to evaluate the algorithms beforehand can save a
significant amount of resources. We evaluate our approach in the context of
modeling players' social comparison orientation (SCO) and present empirical
results from both simulations and real players.
| [
{
"created": "Wed, 10 Feb 2021 05:04:45 GMT",
"version": "v1"
}
] | 2021-02-11 | [
[
"Gray",
"Robert C.",
""
],
[
"Zhu",
"Jichen",
""
],
[
"Arigo",
"Dannielle",
""
],
[
"Forman",
"Evan",
""
],
[
"Ontañón",
"Santiago",
""
]
] |
2102.05424 | Jintai Chen | Jintai Chen, Bohan Yu, Biwen Lei, Ruiwei Feng, Danny Z. Chen, Jian Wu | Doctor Imitator: Hand-Radiography-based Bone Age Assessment by Imitating
Scoring Methods | Original Title: "Doctor Imitator: A Graph-based Bone Age Assessment
Framework Using Hand Radiographs" @inproceedings{chen2020doctor,
title={Doctor imitator: A graph-based bone age assessment framework using
hand radiographs}, author={Chen, Jintai and Yu, Bohan and Lei, Biwen and
Feng, Ruiwei and Chen, Danny Z and Wu, Jian}, booktitle={MICCAI}, year={2020}
} | International Conference on Medical Image Computing and
Computer-Assisted Intervention (MICCAI-2020) | 10.1007/978-3-030-59725-2_74 | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Bone age assessment is challenging in clinical practice due to the
complicated bone age assessment process. Current automatic bone age assessment
methods were designed with little consideration of the diagnostic logistics and
thus may yield certain uninterpretable hidden states and outputs. Consequently,
doctors can find it hard to cooperate with such models harmoniously because it
is difficult to check the correctness of the model predictions. In this work,
we propose a new graph-based deep learning framework for bone age assessment
with hand radiographs, called Doctor Imitator (DI). The architecture of DI is
designed to learn the diagnostic logistics of doctors using the scoring methods
(e.g., the Tanner-Whitehouse method) for bone age assessment. Specifically, the
convolutions of DI capture the local features of the anatomical regions of
interest (ROIs) on hand radiographs and predict the ROI scores with our proposed
Anatomy-based Group Convolution, which are summed up for bone age prediction. In addition,
we develop a novel Dual Graph-based Attention module to compute
patient-specific attention for ROI features and context attention for ROI
scores. As far as we know, DI is the first automatic bone age assessment
framework following the scoring methods without fully supervised hand
radiographs. Experiments on hand radiographs with only bone age supervision
verify that DI can achieve excellent performance with sparse parameters and
provide more interpretability.
| [
{
"created": "Wed, 10 Feb 2021 13:45:39 GMT",
"version": "v1"
},
{
"created": "Sun, 18 Sep 2022 09:23:21 GMT",
"version": "v2"
},
{
"created": "Mon, 24 Apr 2023 14:42:59 GMT",
"version": "v3"
}
] | 2023-04-25 | [
[
"Chen",
"Jintai",
""
],
[
"Yu",
"Bohan",
""
],
[
"Lei",
"Biwen",
""
],
[
"Feng",
"Ruiwei",
""
],
[
"Chen",
"Danny Z.",
""
],
[
"Wu",
"Jian",
""
]
] |
2102.05599 | Muhammad Burhan Hafez | Julien Scholz, Cornelius Weber, Muhammad Burhan Hafez and Stefan
Wermter | Improving Model-Based Reinforcement Learning with Internal State
Representations through Self-Supervision | null | Proc. Intl. Joint Conf. Neural Networks (IJCNN), 2021, forthcoming | 10.1109/IJCNN52387.2021.9534023 | null | cs.LG cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Using a model of the environment, reinforcement learning agents can plan
their future moves and achieve superhuman performance in board games like
Chess, Shogi, and Go, while remaining relatively sample-efficient. As
demonstrated by the MuZero Algorithm, the environment model can even be learned
dynamically, generalizing the agent to many more tasks while at the same time
achieving state-of-the-art performance. Notably, MuZero uses internal state
representations derived from real environment states for its predictions. In
this paper, we bind the model's predicted internal state representation to the
environment state via two additional terms: a reconstruction model loss and a
simpler consistency loss, both of which work independently and unsupervised,
acting as constraints to stabilize the learning process. Our experiments show
that this new integration of the reconstruction model loss and the simpler consistency
loss provides a significant performance increase in OpenAI Gym environments. Our
modifications also enable self-supervised pretraining for MuZero, so the
algorithm can learn about environment dynamics before a goal is made available.
| [
{
"created": "Wed, 10 Feb 2021 17:55:04 GMT",
"version": "v1"
}
] | 2022-01-19 | [
[
"Scholz",
"Julien",
""
],
[
"Weber",
"Cornelius",
""
],
[
"Hafez",
"Muhammad Burhan",
""
],
[
"Wermter",
"Stefan",
""
]
] |
2102.05645 | Andrew Brown | Andrew Brown, Ernesto Coto, Andrew Zisserman | Automated Video Labelling: Identifying Faces by Corroborative Evidence | null | IEEE 4th International Conference on Multimedia Information
Processing and Retrieval (IEEE MIPR 2021) | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | We present a method for automatically labelling all faces in video archives,
such as TV broadcasts, by combining multiple evidence sources and multiple
modalities (visual and audio). We target the problem of ever-growing online
video archives, where an effective, scalable indexing solution cannot require a
user to provide manual annotation or supervision. To this end, we make three
key contributions: (1) We provide a novel, simple, method for determining if a
person is famous or not using image-search engines. In turn this enables a
face-identity model to be built reliably and robustly, and used for high
precision automatic labelling; (2) We show that even for less-famous people,
image-search engines can then be used for corroborative evidence to accurately
label faces that are named in the scene or the speech; (3) Finally, we
quantitatively demonstrate the benefits of our approach on different video
domains and test settings, such as TV shows and news broadcasts. Our method
works across three disparate datasets without any explicit domain adaptation,
and sets new state-of-the-art results on all the public benchmarks.
| [
{
"created": "Wed, 10 Feb 2021 18:57:52 GMT",
"version": "v1"
}
] | 2021-02-11 | [
[
"Brown",
"Andrew",
""
],
[
"Coto",
"Ernesto",
""
],
[
"Zisserman",
"Andrew",
""
]
] |
2102.05875 | Kaiwen Li | Kaiwen Li, Tao Zhang, Rui Wang, Yuheng Wang, and Yi Han | Deep Reinforcement Learning for Combinatorial Optimization: Covering
Salesman Problems | null | 26 August 2021, IEEE Transactions on Cybernetics | 10.1109/TCYB.2021.3103811 | null | cs.NE cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper introduces a new deep learning approach to approximately solve the
Covering Salesman Problem (CSP). In this approach, given the city locations of
a CSP as input, a deep neural network model is designed to directly output the
solution. It is trained using deep reinforcement learning without
supervision. Specifically, in the model, we apply Multi-head Attention to
capture the structural patterns, and design a dynamic embedding to handle the
dynamic patterns of the problem. Once the model is trained, it can generalize
to various types of CSP tasks (different sizes and topologies) with no need for
re-training. Through controlled experiments, the proposed approach shows
desirable time complexity: it runs more than 20 times faster than the
traditional heuristic solvers with a tiny optimality gap. Moreover, it
significantly outperforms the current state-of-the-art deep learning approaches
for combinatorial optimization in terms of both training and inference. In
comparison with traditional solvers, this approach is highly desirable for most
of the challenging tasks in practice that are usually large-scale and require
quick decisions.
| [
{
"created": "Thu, 11 Feb 2021 07:25:04 GMT",
"version": "v1"
}
] | 2021-09-15 | [
[
"Li",
"Kaiwen",
""
],
[
"Zhang",
"Tao",
""
],
[
"Wang",
"Rui",
""
],
[
"Wang",
"Yuheng",
""
],
[
"Han",
"Yi",
""
]
] |
2102.05894 | Ali Nassif | Ali Bou Nassif, Ismail Shahin, Shibani Hamsa, Nawel Nemmour, Keikichi
Hirose | CASA-Based Speaker Identification Using Cascaded GMM-CNN Classifier in
Noisy and Emotional Talking Conditions | Published in Applied Soft Computing journal | Applied Soft Computing, Elsevier, 2021 | 10.1016/j.asoc.2021.107141 | null | cs.SD cs.AI eess.AS | http://creativecommons.org/licenses/by/4.0/ | This work aims at improving text-independent speaker identification
performance in real application situations such as noisy and emotional talking
conditions. This is achieved by incorporating two different modules: a
Computational Auditory Scene Analysis (CASA) based pre-processing module for
noise reduction and a cascaded Gaussian Mixture Model-Convolutional Neural
Network (GMM-CNN) classifier for speaker identification followed by emotion
recognition. This research proposes and evaluates a novel algorithm to improve
the accuracy of speaker identification in emotional and highly
noise-susceptible conditions. Experiments demonstrate that the proposed model yields
promising results in comparison with other classifiers when the Speech Under
Simulated and Actual Stress (SUSAS) database, the Emirati Speech Database (ESD), the
Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS), and
the Fluent Speech Commands database are used in a noisy environment.
| [
{
"created": "Thu, 11 Feb 2021 08:56:12 GMT",
"version": "v1"
}
] | 2021-02-12 | [
[
"Nassif",
"Ali Bou",
""
],
[
"Shahin",
"Ismail",
""
],
[
"Hamsa",
"Shibani",
""
],
[
"Nemmour",
"Nawel",
""
],
[
"Hirose",
"Keikichi",
""
]
] |
2102.05918 | Chao Jia | Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham,
Quoc V. Le, Yunhsuan Sung, Zhen Li, Tom Duerig | Scaling Up Visual and Vision-Language Representation Learning With Noisy
Text Supervision | ICML 2021 | International Conference on Machine Learning 2021 | null | null | cs.CV cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Pre-trained representations are becoming crucial for many NLP and perception
tasks. While representation learning in NLP has transitioned to training on raw
text without human annotations, visual and vision-language representations
still rely heavily on curated training datasets that are expensive or require
expert knowledge. For vision applications, representations are mostly learned
using datasets with explicit class labels such as ImageNet or OpenImages. For
vision-language, popular datasets like Conceptual Captions, MSCOCO, or CLIP all
involve a non-trivial data collection (and cleaning) process. This costly
curation process limits the size of datasets and hence hinders the scaling of
trained models. In this paper, we leverage a noisy dataset of over one billion
image alt-text pairs, obtained without expensive filtering or post-processing
steps in the Conceptual Captions dataset. A simple dual-encoder architecture
learns to align visual and language representations of the image and text pairs
using a contrastive loss. We show that the scale of our corpus can make up for
its noise and leads to state-of-the-art representations even with such a simple
learning scheme. Our visual representation achieves strong performance when
transferred to classification tasks such as ImageNet and VTAB. The aligned
visual and language representations enable zero-shot image classification and
also set new state-of-the-art results on Flickr30K and MSCOCO image-text
retrieval benchmarks, even when compared with more sophisticated
cross-attention models. The representations also enable cross-modality search
with complex text and text + image queries.
| [
{
"created": "Thu, 11 Feb 2021 10:08:12 GMT",
"version": "v1"
},
{
"created": "Fri, 11 Jun 2021 07:51:39 GMT",
"version": "v2"
}
] | 2021-06-14 | [
[
"Jia",
"Chao",
""
],
[
"Yang",
"Yinfei",
""
],
[
"Xia",
"Ye",
""
],
[
"Chen",
"Yi-Ting",
""
],
[
"Parekh",
"Zarana",
""
],
[
"Pham",
"Hieu",
""
],
[
"Le",
"Quoc V.",
""
],
[
"Sung",
"Yunhsuan",
""
],
[
"Li",
"Zhen",
""
],
[
"Duerig",
"Tom",
""
]
] |
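A minimal sketch of the symmetric in-batch contrastive loss that dual-encoder models of this kind typically use is shown below; the temperature value and function names are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

def dual_encoder_contrastive_loss(image_emb, text_emb, temperature=0.05):
    """Symmetric in-batch contrastive loss for aligned image-text pairs.

    image_emb, text_emb: (N, D) embeddings where row i of each tensor
    comes from the same image-text pair.
    """
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = image_emb @ text_emb.t() / temperature      # (N, N) similarities
    labels = torch.arange(logits.size(0), device=logits.device)
    loss_i2t = F.cross_entropy(logits, labels)           # image -> its text
    loss_t2i = F.cross_entropy(logits.t(), labels)       # text -> its image
    return 0.5 * (loss_i2t + loss_t2i)
```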
2102.05954 | Paramita Koley | Paramita Koley, Avirup Saha, Sourangshu Bhattacharya, Niloy Ganguly,
and Abir De | Demarcating Endogenous and Exogenous Opinion Dynamics: An Experimental
Design Approach | 25 Pages, Accepted in ACM TKDD, 2021 | ACM Trans. Knowl. Discov. Data. 1, 1, Article 1 (January 2021), 25
pages | 10.1145/3449361 | null | cs.SI cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The networked opinion diffusion in online social networks (OSN) is often
governed by the two genres of opinions - endogenous opinions that are driven by
the influence of social contacts among users, and exogenous opinions which are
formed by external effects like news, feeds etc. Accurate demarcation of
endogenous and exogenous messages offers an important cue to opinion modeling,
thereby enhancing its predictive performance. In this paper, we design a suite
of unsupervised classification methods based on experimental design approaches,
in which we aim to select the subsets of events that minimize different
measures of mean estimation error. In more detail, we first show that these
subset selection tasks are NP-Hard. Then we show that the associated objective
functions are weakly submodular, which allows us to cast efficient
approximation algorithms with guarantees. Finally, we validate the efficacy of
our proposal on various real-world datasets crawled from Twitter as well as
diverse synthetic datasets. Our experiments range from validating prediction
performance on unsanitized and sanitized events to checking the effect of
selecting optimal subsets of various sizes. Through various experiments, we
have found that our method offers a significant improvement in opinion
forecasting accuracy over several competitors.
| [
{
"created": "Thu, 11 Feb 2021 11:38:15 GMT",
"version": "v1"
}
] | 2021-02-12 | [
[
"Koley",
"Paramita",
""
],
[
"Saha",
"Avirup",
""
],
[
"Bhattacharya",
"Sourangshu",
""
],
[
"Ganguly",
"Niloy",
""
],
[
"De",
"Abir",
""
]
] |
2102.06019 | Anav Mehta | Anav Mehta | Reinforcement Learning For Constraint Satisfaction Game Agents
(15-Puzzle, Minesweeper, 2048, and Sudoku) | null | Canadian Science Fair Journal Volume 4 Issue 1
https://csfjournal.com/volume-4-issue-1-1/2021/9/24/reinforcement-learning-for-constraint-satisfaction-game-agents-15-puzzle-minesweeper-2048-and-sudoku | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | In recent years, reinforcement learning has seen interest because of deep
Q-Learning, where the model is a convolutional neural network. Deep Q-Learning
has shown promising results in games such as Atari and AlphaGo. Instead of
learning the entire Q-table, it learns an estimate of the Q function that
determines a state's policy action. We use Q-Learning and deep Q-Learning to
learn control policies of four constraint satisfaction games (15-Puzzle,
Minesweeper, 2048, and Sudoku). 15-Puzzle is a sliding permutation puzzle and
provides a challenge in addressing its large state space. Minesweeper and
Sudoku involve partially observable states and guessing. 2048 is also a sliding
puzzle but allows for easier state representation (compared to 15-Puzzle) and
uses interesting reward shaping to solve the game. These games offer unique
insights into the potential and limits of reinforcement learning. The Q agent
is trained with no rules of the game, with only the reward corresponding to
each state's action. Our unique contribution is in choosing the reward
structure, state representation, and formulation of the deep neural network.
For a low shuffle, 15-Puzzle achieves a 100% win rate, while the medium and high
shuffles achieve about 43% and 22% win rates, respectively. On a standard 16x16
Minesweeper board, both low- and high-density boards achieve close to a 45% win
rate, whereas medium-density boards have a low win rate of 15%. For 2048, the
1024 tile was reached with ease (100% win rate), with win rates for
2048, 4096, 8192, and 16384 of 40%, 0.05%, 0.01%, and 0.004%, respectively. The
easy Sudoku games had a win rate of 7%, while medium and hard games had 2.1%
and 1.2% win rates, respectively. This paper explores the environment
complexity and behavior of a subset of constraint games using reward structures
which can get us closer to understanding how humans learn.
| [
{
"created": "Tue, 9 Feb 2021 22:29:29 GMT",
"version": "v1"
}
] | 2021-10-08 | [
[
"Mehta",
"Anav",
""
]
] |
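For reference, the tabular Q-Learning update underlying the abstract above can be sketched as follows; the hyperparameters and the dictionary-based Q-table are illustrative choices, not details from the paper.

```python
import random
from collections import defaultdict

Q = defaultdict(float)  # (state, action) -> value estimate, defaults to 0.0

def epsilon_greedy(state, actions, epsilon=0.1):
    """Explore with probability epsilon, otherwise pick the greedy action."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def q_update(state, action, reward, next_state, next_actions,
             alpha=0.1, gamma=0.99):
    """One Q-Learning step: Q <- Q + alpha * (r + gamma * max_a' Q' - Q)."""
    best_next = max((Q[(next_state, a)] for a in next_actions), default=0.0)
    target = reward + gamma * best_next
    Q[(state, action)] += alpha * (target - Q[(state, action)])
```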
2102.06125 | Dong Si | Dong Si, Andrew Nakamura, Runbang Tang, Haowen Guan, Jie Hou, Ammaar
Firozi, Renzhi Cao, Kyle Hippe, Minglei Zhao | Artificial Intelligence Advances for De Novo Molecular Structure
Modeling in Cryo-EM | null | Wiley Interdisciplinary Reviews: Computational Molecular Science,
e1542 (2021) | 10.1002/wcms.1542 | null | q-bio.BM cs.AI physics.bio-ph physics.comp-ph | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Cryo-electron microscopy (cryo-EM) has become a major experimental technique
to determine the structures of large protein complexes and molecular
assemblies, as evidenced by the 2017 Nobel Prize. Although cryo-EM has been
drastically improved to generate high-resolution three-dimensional (3D) maps
that contain detailed structural information about macromolecules, the
computational methods for using the data to automatically build structure
models are lagging far behind. The traditional cryo-EM model building approach
is template-based homology modeling. Manual de novo modeling is very
time-consuming when no template model is found in the database. In recent
years, de novo cryo-EM modeling using machine learning (ML) and deep learning
(DL) has ranked among the top-performing methods in macromolecular structure
modeling. Deep-learning-based de novo cryo-EM modeling is an important
application of artificial intelligence, with impressive results and great
potential for the next generation of molecular biomedicine. Accordingly, we
systematically review the representative ML/DL-based de novo cryo-EM modeling
methods, and their significance is discussed from both practical and
methodological viewpoints. We also briefly describe the background of the cryo-EM
data processing workflow. Overall, this review provides an introductory guide
to modern research on artificial intelligence (AI) for de novo molecular
structure modeling and future directions in this emerging field.
| [
{
"created": "Thu, 11 Feb 2021 17:06:20 GMT",
"version": "v1"
},
{
"created": "Wed, 24 Feb 2021 02:03:01 GMT",
"version": "v2"
}
] | 2021-06-01 | [
[
"Si",
"Dong",
""
],
[
"Nakamura",
"Andrew",
""
],
[
"Tang",
"Runbang",
""
],
[
"Guan",
"Haowen",
""
],
[
"Hou",
"Jie",
""
],
[
"Firozi",
"Ammaar",
""
],
[
"Cao",
"Renzhi",
""
],
[
"Hippe",
"Kyle",
""
],
[
"Zhao",
"Minglei",
""
]
] |
2102.06202 | Anastasios Angelopoulos | Anastasios N. Angelopoulos and Stephen Bates and Tijana Zrnic and
Michael I. Jordan | Private Prediction Sets | Code available at
https://github.com/aangelopoulos/private_prediction_sets | Harvard Data Science Review, 4(2). 2022 | 10.1162/99608f92.16c71dad | null | cs.LG cs.AI cs.CR stat.ME stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In real-world settings involving consequential decision-making, the
deployment of machine learning systems generally requires both reliable
uncertainty quantification and protection of individuals' privacy. We present a
framework that treats these two desiderata jointly. Our framework is based on
conformal prediction, a methodology that augments predictive models to return
prediction sets that provide uncertainty quantification -- they provably cover
the true response with a user-specified probability, such as 90%. One might
hope that when used with privately-trained models, conformal prediction would
yield privacy guarantees for the resulting prediction sets; unfortunately, this
is not the case. To remedy this key problem, we develop a method that takes any
pre-trained predictive model and outputs differentially private prediction
sets. Our method follows the general approach of split conformal prediction; we
use holdout data to calibrate the size of the prediction sets but preserve
privacy by using a privatized quantile subroutine. This subroutine compensates
for the noise introduced to preserve privacy in order to guarantee correct
coverage. We evaluate the method on large-scale computer vision datasets.
| [
{
"created": "Thu, 11 Feb 2021 18:59:11 GMT",
"version": "v1"
},
{
"created": "Sun, 26 Sep 2021 17:56:32 GMT",
"version": "v2"
},
{
"created": "Sun, 3 Mar 2024 06:47:19 GMT",
"version": "v3"
}
] | 2024-03-05 | [
[
"Angelopoulos",
"Anastasios N.",
""
],
[
"Bates",
"Stephen",
""
],
[
"Zrnic",
"Tijana",
""
],
[
"Jordan",
"Michael I.",
""
]
] |
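A minimal sketch of the non-private split conformal calibration step the abstract builds on is given below; the paper's contribution, the privatized quantile subroutine, is not reproduced here, and the score definition is just one common choice.

```python
import numpy as np

def conformal_threshold(cal_scores, alpha=0.1):
    """Split conformal calibration: threshold giving ~(1 - alpha) coverage.

    cal_scores: nonconformity scores on held-out calibration data,
                e.g. 1 - softmax probability of the true class.
    """
    n = len(cal_scores)
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return np.quantile(cal_scores, level, method="higher")

def prediction_set(softmax_probs, threshold):
    """Return all class indices whose score falls below the threshold."""
    scores = 1.0 - softmax_probs
    return np.flatnonzero(scores <= threshold)
```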
2102.06243 | Yuping Fan | Yuping Fan, Zhiling Lan, Taylor Childers, Paul Rich, William Allcock
and Michael E. Papka | Deep Reinforcement Agent for Scheduling in HPC | Accepted by IPDPS 2021 | 35th IEEE International Parallel & Distributed Processing
Symposium (2021) | null | null | cs.DC cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The cluster scheduler is crucial in high-performance computing (HPC). It
determines when and which user jobs should be allocated to available system
resources. Existing cluster scheduling heuristics are developed by human
experts based on their experience with specific HPC systems and workloads.
However, the increasing complexity of computing systems and the highly dynamic
nature of application workloads have placed a tremendous burden on manually
designed and tuned scheduling heuristics. More aggressive optimization and
automation are needed for cluster scheduling in HPC. In this work, we present
an automated HPC scheduling agent named DRAS (Deep Reinforcement Agent for
Scheduling) by leveraging deep reinforcement learning. DRAS is built on a
novel, hierarchical neural network incorporating special HPC scheduling
features such as resource reservation and backfilling. A unique training
strategy is presented to enable DRAS to rapidly learn the target environment.
Once provided with a specific scheduling objective by the system manager,
DRAS automatically learns to improve its policy through interaction with the
scheduling environment and dynamically adjusts its policy as workload changes.
The experiments with different production workloads demonstrate that DRAS
outperforms the existing heuristic and optimization approaches by up to 45%.
| [
{
"created": "Thu, 11 Feb 2021 20:08:38 GMT",
"version": "v1"
},
{
"created": "Mon, 19 Apr 2021 22:31:44 GMT",
"version": "v2"
}
] | 2021-04-21 | [
[
"Fan",
"Yuping",
""
],
[
"Lan",
"Zhiling",
""
],
[
"Childers",
"Taylor",
""
],
[
"Rich",
"Paul",
""
],
[
"Allcock",
"William",
""
],
[
"Papka",
"Michael E.",
""
]
] |
2102.06274 | Oleg Szehr | Oleg Szehr | Hedging of Financial Derivative Contracts via Monte Carlo Tree Search | Corrected typos. Shorter Presentation. 15 pages, 5 figures | Journal of Computational Finance, Volume 27, Number 2, Pages:
47-80, 2023 | 10.21314/JCF.2023.009 | null | cs.AI cs.GT cs.LG q-fin.PR | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The construction of approximate replication strategies for pricing and
hedging of derivative contracts in incomplete markets is a key problem of
financial engineering. Recently Reinforcement Learning algorithms for hedging
under realistic market conditions have attracted significant interest. While
research in the derivatives area mostly focused on variations of $Q$-learning,
in artificial intelligence Monte Carlo Tree Search is the recognized
state-of-the-art method for various planning problems, such as the games of
Hex, Chess, and Go. This article introduces Monte Carlo Tree Search as a method
to solve the stochastic optimal control problem behind the pricing and hedging
tasks. As compared to $Q$-learning it combines Reinforcement Learning with tree
search techniques. As a consequence Monte Carlo Tree Search has higher sample
efficiency, is less prone to over-fitting to specific market models and
generally learns stronger policies faster. In our experiments we find that
Monte Carlo Tree Search, being the world-champion in games like Chess and Go,
is easily capable of maximizing the utility of an investor's terminal wealth
without setting up an auxiliary mathematical framework.
| [
{
"created": "Thu, 11 Feb 2021 21:17:01 GMT",
"version": "v1"
},
{
"created": "Wed, 3 Mar 2021 21:22:41 GMT",
"version": "v2"
},
{
"created": "Mon, 19 Apr 2021 21:26:31 GMT",
"version": "v3"
}
] | 2023-11-02 | [
[
"Szehr",
"Oleg",
""
]
] |
2102.06320 | Andriy Miranskyy | Jared Rand and Andriy Miranskyy | On Automatic Parsing of Log Records | Shortened version accepted for publication in Proceedings of the 43rd
International Conference on Software Engineering: New Ideas and Emerging
Results, 2021 | In Proceedings 2021 IEEE/ACM 43rd International Conference on
Software Engineering: New Ideas and Emerging Results (ICSE-NIER), pp. 41-45 | 10.1109/ICSE-NIER52604.2021.00017 | null | cs.SE cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Software log analysis helps to maintain the health of software solutions and
ensure compliance and security. Existing software systems consist of
heterogeneous components emitting logs in various formats. A typical solution
is to unify the logs using manually built parsers, which is laborious.
Instead, we explore the possibility of automating the parsing task by
employing machine translation (MT). We create a tool that generates synthetic
Apache log records which we used to train recurrent-neural-network-based MT
models. Models' evaluation on real-world logs shows that the models can learn
Apache log format and parse individual log records. The median relative edit
distance between an actual real-world log record and the MT prediction is less
than or equal to 28%. Thus, we show that log parsing using an MT approach is
promising.
| [
{
"created": "Fri, 12 Feb 2021 00:27:41 GMT",
"version": "v1"
}
] | 2021-08-04 | [
[
"Rand",
"Jared",
""
],
[
"Miranskyy",
"Andriy",
""
]
] |
2102.06386 | Shigemichi Matsuzaki | Shigemichi Matsuzaki, Jun Miura and Hiroaki Masuzawa | Multi-source Pseudo-label Learning of Semantic Segmentation for the
Scene Recognition of Agricultural Mobile Robots | Published in Advanced Robotics | Advanced Robotics, 36:19, 1011-1029 (2022) | 10.1080/01691864.2022.2109427 | null | cs.CV cs.RO | http://creativecommons.org/licenses/by/4.0/ | This paper describes a novel method of training a semantic segmentation model
for scene recognition of agricultural mobile robots exploiting publicly
available datasets of outdoor scenes that are different from the target
greenhouse environments. Semantic segmentation models require abundant labels
given by tedious manual annotation. A method to work around it is unsupervised
domain adaptation (UDA) that transfers knowledge from labeled source datasets
to unlabeled target datasets. However, the effectiveness of existing methods is
not well studied in adaptation between heterogeneous environments, such as
urban scenes and greenhouses. In this paper, we propose a method to train a
semantic segmentation model for greenhouse images without manually labeled
datasets of greenhouse images. The core of our idea is to use multiple rich
image datasets of different environments with segmentation labels to generate
pseudo-labels for the target images to effectively transfer the knowledge from
multiple sources and realize a precise training of semantic segmentation. Along
with the pseudo-label generation, we introduce state-of-the-art methods to deal
with noise in the pseudo-labels to further improve the performance. We
demonstrate in experiments with multiple greenhouse datasets that our proposed
method improves the performance compared to the single-source baselines and an
existing approach.
| [
{
"created": "Fri, 12 Feb 2021 08:17:10 GMT",
"version": "v1"
},
{
"created": "Mon, 14 Feb 2022 06:34:29 GMT",
"version": "v2"
},
{
"created": "Fri, 13 Jan 2023 04:56:20 GMT",
"version": "v3"
}
] | 2023-01-16 | [
[
"Matsuzaki",
"Shigemichi",
""
],
[
"Miura",
"Jun",
""
],
[
"Masuzawa",
"Hiroaki",
""
]
] |
2102.06388 | Roohallah Alizadehsani | Roohallah Alizadehsani, Danial Sharifrazi, Navid Hoseini Izadi, Javad
Hassannataj Joloudari, Afshin Shoeibi, Juan M. Gorriz, Sadiq Hussain, Juan E.
Arco, Zahra Alizadeh Sani, Fahime Khozeimeh, Abbas Khosravi, Saeid Nahavandi,
Sheikh Mohammed Shariful Islam, U Rajendra Acharya | Uncertainty-Aware Semi-Supervised Method Using Large Unlabeled and
Limited Labeled COVID-19 Data | null | ACM Transactions on Multimedia Computing, Communications, and
ApplicationsVolume 17Issue 3sOctober 2021 | 10.1145/3462635 | null | eess.IV cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The new coronavirus has caused more than one million deaths and continues to
spread rapidly. This virus targets the lungs, causing respiratory distress
which can be mild or severe. The X-ray or computed tomography (CT) images of
lungs can reveal whether the patient is infected with COVID-19 or not. Many
researchers are trying to improve COVID-19 detection using artificial
intelligence. Our motivation is to develop an automatic method that can cope
with scenarios in which preparing labeled data is time-consuming or expensive.
In this article, we propose a Semi-supervised Classification using Limited
Labeled Data (SCLLD) relying on Sobel edge detection and Generative Adversarial
Networks (GANs) to automate the COVID-19 diagnosis. The GAN discriminator
output is a probabilistic value which is used for classification in this work.
The proposed system is trained using 10,000 CT scans collected from Omid
Hospital, whereas a public dataset is also used for validating our system. The
proposed method is compared with other state-of-the-art supervised methods such
as Gaussian processes. To the best of our knowledge, this is the first time a
semi-supervised method for COVID-19 detection is presented. Our system is
capable of learning from a mixture of limited labeled and unlabeled data where
supervised learners fail due to a lack of sufficient labeled data.
Thus, our semi-supervised training method significantly outperforms the
supervised training of Convolutional Neural Network (CNN) when labeled training
data is scarce. The 95% confidence intervals for our method in terms of
accuracy, sensitivity, and specificity are 99.56 +- 0.20%, 99.88 +- 0.24%, and
99.40 +- 0.18%, respectively, whereas intervals for the CNN (trained
supervised) are 68.34 +- 4.11%, 91.2 +- 6.15%, and 46.40 +- 5.21%.
| [
{
"created": "Fri, 12 Feb 2021 08:20:20 GMT",
"version": "v1"
},
{
"created": "Sat, 25 Dec 2021 04:39:15 GMT",
"version": "v2"
}
] | 2021-12-28 | [
[
"Alizadehsani",
"Roohallah",
""
],
[
"Sharifrazi",
"Danial",
""
],
[
"Izadi",
"Navid Hoseini",
""
],
[
"Joloudari",
"Javad Hassannataj",
""
],
[
"Shoeibi",
"Afshin",
""
],
[
"Gorriz",
"Juan M.",
""
],
[
"Hussain",
"Sadiq",
""
],
[
"Arco",
"Juan E.",
""
],
[
"Sani",
"Zahra Alizadeh",
""
],
[
"Khozeimeh",
"Fahime",
""
],
[
"Khosravi",
"Abbas",
""
],
[
"Nahavandi",
"Saeid",
""
],
[
"Islam",
"Sheikh Mohammed Shariful",
""
],
[
"Acharya",
"U Rajendra",
""
]
] |
2102.06448 | Haoran Chen | Haoran Chen, Jianmin Li, Simone Frintrop, Xiaolin Hu | The MSR-Video to Text Dataset with Clean Annotations | The paper is under consideration at Computer Vision and Image
Understanding | Computer Vision and Image Understanding, 225, p.103581 (2022) | 10.1016/j.cviu.2022.103581 | null | cs.CV cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Video captioning automatically generates short descriptions of the video
content, usually in form of a single sentence. Many methods have been proposed
for solving this task. A large dataset called MSR Video to Text (MSR-VTT) is
often used as the benchmark dataset for testing the performance of the methods.
However, we found that the human annotations, i.e., the descriptions of video
contents in the dataset are quite noisy, e.g., there are many duplicate
captions and many captions contain grammatical problems. These problems may
pose difficulties to video captioning models for learning underlying patterns.
We cleaned the MSR-VTT annotations by removing these problems, then tested
several typical video captioning models on the cleaned dataset. Experimental
results showed that data cleaning boosted the performances of the models
measured by popular quantitative metrics. We recruited subjects to evaluate the
results of a model trained on the original and cleaned datasets. The human
behavior experiment demonstrated that trained on the cleaned dataset, the model
generated captions that were more coherent and more relevant to the contents of
the video clips.
| [
{
"created": "Fri, 12 Feb 2021 11:14:56 GMT",
"version": "v1"
},
{
"created": "Thu, 1 Apr 2021 04:22:49 GMT",
"version": "v2"
},
{
"created": "Sat, 9 Apr 2022 09:20:25 GMT",
"version": "v3"
},
{
"created": "Sun, 25 Feb 2024 09:04:32 GMT",
"version": "v4"
}
] | 2024-02-27 | [
[
"Chen",
"Haoran",
""
],
[
"Li",
"Jianmin",
""
],
[
"Frintrop",
"Simone",
""
],
[
"Hu",
"Xiaolin",
""
]
] |
2102.06535 | Zainab Abohashima | Essam H. Houssein, Zainab Abohashima, Mohamed Elhoseny, Waleed M.
Mohamed | Hybrid quantum convolutional neural networks model for COVID-19
prediction using chest X-Ray images | null | Journal of Computational Design and Engineering, Volume 9, Issue
2, April 2022, | 10.1093/jcde/qwac003 | null | eess.IV cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | Despite the great efforts to find an effective way for COVID-19 prediction,
the nature and mutation of the virus represent a critical challenge to diagnose the
covered cases. However, developing a model to predict COVID-19 via Chest X-Ray
(CXR) images with accurate performance is necessary to help in early diagnosis.
In this paper, a hybrid quantum-classical convolutional neural network (HQCNN)
model uses random quantum circuits (RQCs) as a base to detect COVID-19
patients with CXR images. A collection of 6952 CXR images, including 1161
COVID-19, 1575 normal, and 5216 pneumonia images, were used as a dataset in
this work. The proposed HQCNN model achieved higher performance with an
accuracy of 98.4\% and a sensitivity of 99.3\% on the first dataset cases.
Besides, it obtained an accuracy of 99\% and a sensitivity of 99.7\% on the
second dataset cases. Also, it achieved accuracy, and sensitivity of 88.6\%,
and 88.7\%, respectively, on the third multi-class dataset cases. Furthermore,
the HQCNN model outperforms various models in balanced accuracy, precision,
F1-measure, and AUC-ROC score. The experimental results achieved by the
proposed model prove its ability to predict positive COVID-19 cases.
| [
{
"created": "Mon, 8 Feb 2021 18:22:53 GMT",
"version": "v1"
}
] | 2022-03-14 | [
[
"Houssein",
"Essam H.",
""
],
[
"Abohashima",
"Zainab",
""
],
[
"Elhoseny",
"Mohamed",
""
],
[
"Mohamed",
"Waleed M.",
""
]
] |
2102.06607 | Sabrina Kirrane | Sabrina Kirrane | Intelligent Software Web Agents: A Gap Analysis | null | Journal of Web Semantics (2021) | 10.1016/j.websem.2021.100659 | null | cs.AI cs.MA cs.NI | http://creativecommons.org/licenses/by/4.0/ | Semantic web technologies have shown their effectiveness, especially when it
comes to knowledge representation, reasoning, and data integration. However,
the original semantic web vision, whereby machine readable web data could be
automatically actioned upon by intelligent software web agents, has yet to be
realised. In order to better understand the existing technological
opportunities and challenges, in this paper we examine the status quo in terms
of intelligent software web agents, guided by research with respect to
requirements and architectural components, coming from the agents community. We
use the identified requirements to both further elaborate on the semantic web
agent motivating use case scenario, and to summarise different perspectives on
the requirements from the semantic web agent literature. We subsequently
propose a hybrid semantic web agent architecture, and use the various
components and subcomponents in order to provide a focused discussion in
relation to existing semantic web standards and community activities. Finally,
we highlight open research opportunities and challenges and take a broader
perspective of the research by discussing the potential for intelligent
software web agents as an enabling technology for emerging domains, such as
digital assistants, cloud computing, and the internet of things.
| [
{
"created": "Fri, 12 Feb 2021 16:32:02 GMT",
"version": "v1"
},
{
"created": "Wed, 10 Mar 2021 11:23:15 GMT",
"version": "v2"
},
{
"created": "Thu, 15 Jul 2021 11:35:30 GMT",
"version": "v3"
},
{
"created": "Fri, 24 Sep 2021 14:16:41 GMT",
"version": "v4"
}
] | 2021-09-27 | [
[
"Kirrane",
"Sabrina",
""
]
] |
2102.06793 | Ernest Davis | Ernest Davis | Unanswerable Questions about Images and Texts | 15 pages, 4 figures | Frontiers in Artificial Intelligence: Language and Computation.
July 2020 | 10.3389/frai.2020.00051 | null | cs.CV cs.AI cs.CL | http://creativecommons.org/licenses/by-sa/4.0/ | Questions about a text or an image that cannot be answered raise distinctive
issues for an AI. This note discusses the problem of unanswerable questions in
VQA (visual question answering), in QA (question answering), and in AI
generally.
| [
{
"created": "Mon, 25 Jan 2021 17:56:15 GMT",
"version": "v1"
}
] | 2021-02-16 | [
[
"Davis",
"Ernest",
""
]
] |
2102.06815 | Leonid Boytsov | Leonid Boytsov, Zico Kolter | Exploring Classic and Neural Lexical Translation Models for Information
Retrieval: Interpretability, Effectiveness, and Efficiency Benefits | null | ECIR 2021 (The 43rd European Conference on Information Retrieval) | null | null | cs.CL cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the utility of the lexical translation model (IBM Model 1) for
English text retrieval, in particular, its neural variants that are trained
end-to-end. We use the neural Model 1 as an aggregator layer applied to
context-free or contextualized query/document embeddings. This new approach to
designing a neural ranking system has benefits for effectiveness, efficiency, and
interpretability. Specifically, we show that adding an interpretable neural
Model 1 layer on top of BERT-based contextualized embeddings (1) does not
decrease accuracy and/or efficiency; and (2) may overcome the limitation on the
maximum sequence length of existing BERT models. The context-free neural Model
1 is less effective than a BERT-based ranking model, but it can run efficiently
on a CPU (without expensive index-time precomputation or query-time operations
on large tensors). Using Model 1, we produced the best neural and non-neural runs on
the MS MARCO document ranking leaderboard in late 2020.
| [
{
"created": "Fri, 12 Feb 2021 23:21:55 GMT",
"version": "v1"
},
{
"created": "Wed, 17 Mar 2021 18:43:24 GMT",
"version": "v2"
}
] | 2021-03-19 | [
[
"Boytsov",
"Leonid",
""
],
[
"Kolter",
"Zico",
""
]
] |
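The classic (non-neural) IBM Model 1 query-document score discussed above can be sketched as follows; the translation table, the uniform document language model, and the smoothing constant are illustrative assumptions rather than the configuration used in the paper.

```python
import math

def model1_log_score(query_tokens, doc_tokens, translation_prob):
    """Classic IBM Model 1 retrieval score: sum over query terms of
    log( sum_d T(q | d) * P(d | D) ), with a uniform P(d | D) over the document.

    translation_prob: dict mapping (query_term, doc_term) -> T(q | d),
    a learned lexical translation table.
    """
    score = 0.0
    for q in query_tokens:
        p = sum(
            translation_prob.get((q, d), 0.0) for d in doc_tokens
        ) / len(doc_tokens)
        score += math.log(p + 1e-12)  # small constant avoids log(0)
    return score
```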
2102.06868 | Jerome Quenum | Jerome Quenum, Kehan Wang, Avideh Zakhor | Fast, Accurate Barcode Detection in Ultra High-Resolution Images | 5 pages, 4 figures, 3 tables, GitHub Link added, Initial ArXiv
Submission is 13 Feb 2021, Accepted at IEEE International Conference on Image
Processing, September 2021, USA | 2021 IEEE International Conference on Image Processing (ICIP) | 10.1109/ICIP42928.2021.9506134 | pp. 1019-1023 | cs.CV | http://creativecommons.org/licenses/by-sa/4.0/ | Object detection in Ultra High-Resolution (UHR) images has long been a
challenging problem in computer vision due to the varying scales of the
targeted objects. When it comes to barcode detection, resizing UHR input images
to smaller sizes often leads to the loss of pertinent information, while
processing them directly is highly inefficient and computationally expensive.
In this paper, we propose using semantic segmentation to achieve a fast and
accurate detection of barcodes of various scales in UHR images. Our pipeline
involves a modified Region Proposal Network (RPN) on images of size greater
than 10k$\times$10k and a newly proposed Y-Net segmentation network, followed
by a post-processing workflow for fitting a bounding box around each segmented
barcode mask. The end-to-end system has a latency of 16 milliseconds, which is
$2.5\times$ faster than YOLOv4 and $5.9\times$ faster than Mask R-CNN. In terms
of accuracy, our method outperforms YOLOv4 and Mask R-CNN by a $mAP$ of 5.5%
and 47.1% respectively, on a synthetic dataset. We have made available the
generated synthetic barcode dataset and its code at
http://www.github.com/viplabB/SBD/.
| [
{
"created": "Sat, 13 Feb 2021 05:59:59 GMT",
"version": "v1"
},
{
"created": "Thu, 10 Jun 2021 20:59:28 GMT",
"version": "v2"
}
] | 2021-10-18 | [
[
"Quenum",
"Jerome",
""
],
[
"Wang",
"Kehan",
""
],
[
"Zakhor",
"Avideh",
""
]
] |
2102.07271 | Yongwan Lim | Yongwan Lim, Shrikanth S. Narayanan, Krishna S. Nayak | Attention-gated convolutional neural networks for off-resonance
correction of spiral real-time MRI | 8 pages, 4 figures, 1 table | 28th Int. Soc. Magn. Reson. Med. (ISMRM) Scientific Sessions,
2020, p.1005 | null | null | eess.IV cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Spiral acquisitions are preferred in real-time MRI because of their
efficiency, which has made it possible to capture vocal tract dynamics during
natural speech. A fundamental limitation of spirals is blurring and signal loss
due to off-resonance, which degrades image quality at air-tissue boundaries.
Here, we present a new CNN-based off-resonance correction method that
incorporates an attention-gate mechanism. This leverages spatial and channel
relationships of filtered outputs and improves the expressiveness of the
networks. We demonstrate improved performance with the attention-gate, on 1.5
Tesla spiral speech RT-MRI, compared to existing off-resonance correction
methods.
| [
{
"created": "Sun, 14 Feb 2021 23:43:50 GMT",
"version": "v1"
}
] | 2021-02-16 | [
[
"Lim",
"Yongwan",
""
],
[
"Narayanan",
"Shrikanth S.",
""
],
[
"Nayak",
"Krishna S.",
""
]
] |
2102.07280 | Sina Mohammadi | Sina Mohammadi, Mariana Belgiu, Alfred Stein | 3D Fully Convolutional Neural Networks with Intersection Over Union Loss
for Crop Mapping from Multi-Temporal Satellite Images | Accepted by IGARSS 2021 | 2021 IEEE International Geoscience and Remote Sensing Symposium
IGARSS, 2021, pp. 5834-5837 | 10.1109/IGARSS47720.2021.9554573 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Information on cultivated crops is relevant for a large number of food
security studies. Different scientific efforts are dedicated to generating this
information from remote sensing images by means of machine learning methods.
Unfortunately, these methods do not take account of the spatial-temporal
relationships inherent in remote sensing images. In our paper, we explore the
capability of a 3D Fully Convolutional Neural Network (FCN) to map crop types
from multi-temporal images. In addition, we propose the Intersection Over Union
(IOU) loss function for increasing the overlap between the predicted classes
and ground reference data. The proposed method was applied to identify soybean
and corn from a study area situated in the US corn belt using multi-temporal
Landsat images. The study shows that our method outperforms related methods,
obtaining a Kappa coefficient of 91.8%. We conclude that using the IOU loss
function provides a superior choice for learning individual crop types.
| [
{
"created": "Mon, 15 Feb 2021 00:15:53 GMT",
"version": "v1"
},
{
"created": "Tue, 19 Oct 2021 15:07:44 GMT",
"version": "v2"
}
] | 2021-12-01 | [
[
"Mohammadi",
"Sina",
""
],
[
"Belgiu",
"Mariana",
""
],
[
"Stein",
"Alfred",
""
]
] |
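A minimal sketch of a soft (differentiable) Intersection-over-Union loss of the kind the abstract above proposes is shown below; the tensor layout and smoothing constant are illustrative and may differ from the authors' implementation.

```python
import torch

def soft_iou_loss(pred, target, eps=1e-6):
    """Soft IoU loss for multi-class segmentation.

    pred:   (N, C, H, W) predicted class probabilities (e.g. after softmax)
    target: (N, C, H, W) one-hot encoded ground-truth masks
    """
    dims = (0, 2, 3)                                   # reduce over batch and space
    intersection = (pred * target).sum(dim=dims)
    union = (pred + target - pred * target).sum(dim=dims)
    iou = (intersection + eps) / (union + eps)         # per-class soft IoU
    return 1.0 - iou.mean()                            # minimize 1 - mean IoU
```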
2102.07304 | Mingu Kang | Mingu Kang, Trung Quang Tran, Seungju Cho, Daeyoung Kim | CAP-GAN: Towards Adversarial Robustness with Cycle-consistent
Attentional Purification | null | IJCNN 2021 | 10.1109/IJCNN52387.2021.9533322 | null | cs.LG cs.CR cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An adversarial attack is aimed at fooling the target classifier with an
imperceptible perturbation. Adversarial examples, which are carefully crafted
with a malicious purpose, can lead to erroneous predictions, resulting in
catastrophic accidents. To mitigate the effects of adversarial attacks, we
propose a novel purification model called CAP-GAN. CAP-GAN takes account of the
idea of pixel-level and feature-level consistency to achieve reasonable
purification under cycle-consistent learning. Specifically, we utilize the
guided attention module and knowledge distillation to convey meaningful
information to the purification model. Once a model is fully trained, inputs
would be projected into the purification model and transformed into clean-like
images. We vary the capacity of the adversary to evaluate the robustness against
various types of attack strategies. On the CIFAR-10 dataset, CAP-GAN
outperforms other pre-processing based defenses under both black-box and
white-box settings.
| [
{
"created": "Mon, 15 Feb 2021 02:23:33 GMT",
"version": "v1"
},
{
"created": "Wed, 17 Feb 2021 02:26:40 GMT",
"version": "v2"
},
{
"created": "Tue, 25 May 2021 13:22:10 GMT",
"version": "v3"
}
] | 2021-11-19 | [
[
"Kang",
"Mingu",
""
],
[
"Tran",
"Trung Quang",
""
],
[
"Cho",
"Seungju",
""
],
[
"Kim",
"Daeyoung",
""
]
] |
2102.07507 | Sijie Ji | Sijie Ji, Mo Li | CLNet: Complex Input Lightweight Neural Network designed for Massive
MIMO CSI Feedback | null | IEEE Wireless Communications Letters, 2021 | 10.1109/LWC.2021.3100493 | null | cs.IT cs.AI eess.SP math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Unleashing the full potential of massive MIMO in FDD mode by reducing the
overhead of CSI feedback has recently garnered attention. Numerous deep
learning for massive MIMO CSI feedback approaches have demonstrated their
efficiency and potential. However, most existing methods improve accuracy at
the cost of computational complexity and the accuracy decreases significantly
as the CSI compression rate increases. This paper presents a novel neural
network CLNet tailored for CSI feedback problem based on the intrinsic
properties of CSI. CLNet proposes a forged complex-valued input layer to process
signals and utilizes an attention mechanism to enhance the performance of the
network. The experimental results show that CLNet outperforms the
state-of-the-art method by an average accuracy improvement of 5.41\% in both
outdoor and indoor scenarios with, on average, 24.1\% less computational overhead.
Codes for deep learning-based CSI feedback CLNet are available at GitHub.
| [
{
"created": "Mon, 15 Feb 2021 12:16:11 GMT",
"version": "v1"
},
{
"created": "Sun, 30 May 2021 14:20:08 GMT",
"version": "v2"
},
{
"created": "Fri, 28 Apr 2023 15:10:32 GMT",
"version": "v3"
}
] | 2023-05-01 | [
[
"Ji",
"Sijie",
""
],
[
"Li",
"Mo",
""
]
] |
2102.07545 | Keisuke Fujii | Keisuke Fujii | Data-driven Analysis for Understanding Team Sports Behaviors | 9 pages, 2 figures. This is the first draft and the final version
will be published in the Journal of Robotics and Mechatronics | J. Robot. Mechatron., Vol.33, No.3, pp. 505-514, 2021 | 10.20965/jrm.2021.p0505 | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Understanding the principles of real-world biological multi-agent behaviors
is a current challenge in various scientific and engineering fields. The rules
regarding the real-world biological multi-agent behaviors such as team sports
are often largely unknown due to their inherently higher-order interactions,
cognition, and body dynamics. Estimation of the rules from data, i.e.,
data-driven approaches such as machine learning, provides an effective way for
the analysis of such behaviors. Although most data-driven models have
non-linear structures and high prediction performances, it is sometimes hard to
interpret them. This survey focuses on data-driven analysis for quantitative
understanding of invasion team sports behaviors such as basketball and
football, and introduces two main approaches for understanding such multi-agent
behaviors: (1) extracting easily interpretable features or rules from data and
(2) generating and controlling behaviors in visually-understandable ways. The
first approach involves the visualization of learned representations and the
extraction of mathematical structures behind the behaviors. The second approach
can be used to test hypotheses by simulating and controlling future and
counterfactual behaviors. Lastly, the potential practical applications of
extracted rules, features, and generated behaviors are discussed. These
approaches can contribute to a better understanding of multi-agent behaviors in
the real world.
| [
{
"created": "Mon, 15 Feb 2021 13:31:45 GMT",
"version": "v1"
},
{
"created": "Sun, 28 Feb 2021 07:27:48 GMT",
"version": "v2"
}
] | 2021-06-22 | [
[
"Fujii",
"Keisuke",
""
]
] |
2102.07617 | Yingxu Wang Prof. PhD FIEEE | Yingxu Wang, Fakhri Karray, Sam Kwong, Konstantinos N. Plataniotis,
Henry Leung, Ming Hou, Edward Tunstel, Imre J. Rudas, Ljiljana Trajkovic,
Okyay Kaynak, Janusz Kacprzyk, Mengchu Zhou, Michael H. Smith, Philip Chen
and Shushma Patel | On the Philosophical, Cognitive and Mathematical Foundations of
Symbiotic Autonomous Systems (SAS) | Accepted by Phil. Trans. Royal Society (A): Math, Phys & Engg Sci.,
379(219x), 2021, Oxford, UK | Phil. Trans. Royal Society (A): Math, Phys & Engg Sci., 379(219x),
2021, Oxford, UK | 10.1098/rsta.2020.0362 | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Symbiotic Autonomous Systems (SAS) are advanced intelligent and cognitive
systems exhibiting autonomous collective intelligence enabled by coherent
symbiosis of human-machine interactions in hybrid societies. Basic research in
the emerging field of SAS has triggered advanced general AI technologies
functioning without human intervention or hybrid symbiotic systems synergizing
humans and intelligent machines into coherent cognitive systems. This work
presents a theoretical framework of SAS underpinned by the latest advances in
intelligence, cognition, computer, and system sciences. SAS are characterized
by the composition of autonomous and symbiotic systems that adopt
bio-brain-social-inspired and heterogeneously synergized structures and
autonomous behaviors. This paper explores their cognitive and mathematical
foundations. The challenge to seamless human-machine interactions in a hybrid
environment is addressed. SAS-based collective intelligence is explored in
order to augment human capability by autonomous machine intelligence towards
the next generation of general AI, autonomous computers, and trustworthy
mission-critical intelligent systems. Emerging paradigms and engineering
applications of SAS are elaborated via an autonomous knowledge learning system
that symbiotically works between humans and cognitive robots.
| [
{
"created": "Thu, 11 Feb 2021 05:44:25 GMT",
"version": "v1"
}
] | 2021-09-15 | [
[
"Wang",
"Yingxu",
""
],
[
"Karray",
"Fakhri",
""
],
[
"Kwong",
"Sam",
""
],
[
"Plataniotis",
"Konstantinos N.",
""
],
[
"Leung",
"Henry",
""
],
[
"Hou",
"Ming",
""
],
[
"Tunstel",
"Edward",
""
],
[
"Rudas",
"Imre J.",
""
],
[
"Trajkovic",
"Ljiljana",
""
],
[
"Kaynak",
"Okyay",
""
],
[
"Kacprzyk",
"Janusz",
""
],
[
"Zhou",
"Mengchu",
""
],
[
"Smith",
"Michael H.",
""
],
[
"Chen",
"Philip",
""
],
[
"Patel",
"Shushma",
""
]
] |
2102.07716 | Eric Langlois | Eric D. Langlois and Tom Everitt | How RL Agents Behave When Their Actions Are Modified | 10 pages (+6 appendix); 7 figures. Published in the AAAI 2021
Conference on AI. Code is available at https://github.com/edlanglois/mamdp | Proceedings of the AAAI Conference on Artificial Intelligence,
35(13), 11586-11594 (2021) | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reinforcement learning in complex environments may require supervision to
prevent the agent from attempting dangerous actions. As a result of supervisor
intervention, the executed action may differ from the action specified by the
policy. How does this affect learning? We present the Modified-Action Markov
Decision Process, an extension of the MDP model that allows actions to differ
from the policy. We analyze the asymptotic behaviours of common reinforcement
learning algorithms in this setting and show that they adapt in different ways:
some completely ignore modifications while others go to various lengths in
trying to avoid action modifications that decrease reward. By choosing the
right algorithm, developers can prevent their agents from learning to
circumvent interruptions or constraints, and better control agent responses to
other kinds of action modification, like self-damage.
| [
{
"created": "Mon, 15 Feb 2021 18:10:03 GMT",
"version": "v1"
},
{
"created": "Wed, 30 Jun 2021 05:06:29 GMT",
"version": "v2"
}
] | 2021-07-01 | [
[
"Langlois",
"Eric D.",
""
],
[
"Everitt",
"Tom",
""
]
] |
2102.07737 | Burhaneddin Yaman | Burhaneddin Yaman, Seyed Amir Hossein Hosseini, Mehmet Ak\c{c}akaya | Zero-Shot Self-Supervised Learning for MRI Reconstruction | null | International Conference on Learning Representations (ICLR), 2022 | null | null | eess.IV cs.CV cs.LG eess.SP physics.med-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep learning (DL) has emerged as a powerful tool for accelerated MRI
reconstruction, but often necessitates a database of fully-sampled measurements
for training. Recent self-supervised and unsupervised learning approaches
enable training without fully-sampled data. However, a database of undersampled
measurements may not be available in many scenarios, especially for scans
involving contrast or translational acquisitions in development. Moreover,
recent studies show that database-trained models may not generalize well when
the unseen measurements differ in terms of sampling pattern, acceleration rate,
SNR, image contrast, and anatomy. Such challenges necessitate a new methodology
to enable subject-specific DL MRI reconstruction without external training
datasets, since it is clinically imperative to provide high-quality
reconstructions that can be used to identify lesions/disease for \emph{every
individual}. In this work, we propose a zero-shot self-supervised learning
approach to perform subject-specific accelerated DL MRI reconstruction to
tackle these issues. The proposed approach partitions the available
measurements from a single scan into three disjoint sets. Two of these sets are
used to enforce data consistency and define loss during training for
self-supervision, while the last set serves to self-validate, establishing an
early stopping criterion. In the presence of models pre-trained on a database
with different image characteristics, we show that the proposed approach can be
combined with transfer learning for faster convergence time and reduced
computational complexity. The code is available at
\url{https://github.com/byaman14/ZS-SSL}.
| [
{
"created": "Mon, 15 Feb 2021 18:34:38 GMT",
"version": "v1"
},
{
"created": "Wed, 24 Mar 2021 04:47:58 GMT",
"version": "v2"
},
{
"created": "Fri, 26 Mar 2021 17:10:42 GMT",
"version": "v3"
},
{
"created": "Wed, 29 Nov 2023 03:43:13 GMT",
"version": "v4"
}
] | 2023-11-30 | [
[
"Yaman",
"Burhaneddin",
""
],
[
"Hosseini",
"Seyed Amir Hossein",
""
],
[
"Akçakaya",
"Mehmet",
""
]
] |
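To make the three-way partitioning described in the abstract above more concrete, here is a minimal Python sketch that splits the sampled k-space locations of a single scan into a data-consistency set, a training-loss set, and a held-out self-validation set. The split fractions and function names are assumptions for illustration; this is not the authors' released code.

import numpy as np

def partition_measurements(sampled_mask, val_frac=0.2, loss_frac=0.4, seed=0):
    """Split the sampled k-space locations of one scan into three disjoint masks."""
    rng = np.random.default_rng(seed)
    idx = np.flatnonzero(sampled_mask)          # indices of acquired samples
    rng.shuffle(idx)
    n_val = int(val_frac * idx.size)
    n_loss = int(loss_frac * idx.size)
    val_idx, loss_idx, dc_idx = np.split(idx, [n_val, n_val + n_loss])

    def to_mask(indices):
        m = np.zeros_like(sampled_mask)
        m.flat[indices] = 1
        return m

    # data-consistency set, training-loss set, and held-out self-validation set
    return to_mask(dc_idx), to_mask(loss_idx), to_mask(val_idx)

mask = (np.random.rand(256, 256) < 0.3).astype(np.int8)   # a toy undersampling mask
dc_mask, loss_mask, val_mask = partition_measurements(mask)
assert not np.any(dc_mask & loss_mask) and not np.any(dc_mask & val_mask)

The validation mask would only be used to monitor the self-supervised loss and trigger early stopping, mirroring the role described in the abstract.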
2102.07849 | Sara Abdali | Sara Abdali, Rutuja Gurav, Siddharth Menon, Daniel Fonseca, Negin
Entezari, Neil Shah, Evangelos E. Papalexakis | Identifying Misinformation from Website Screenshots | null | The International AAAI Conference on Web and Social Media (ICWSM)
2021 | null | null | cs.LG cs.AI cs.CY cs.SI | http://creativecommons.org/licenses/by/4.0/ | Can the look and the feel of a website give information about the
trustworthiness of an article? In this paper, we propose to use a promising
yet neglected signal for detecting misinformation: the overall look of the
domain webpage. To capture this overall look, we take screenshots of news
articles served by either misinformative or trustworthy web domains and
leverage a tensor decomposition based semi-supervised classification technique.
The proposed approach i.e., VizFake is insensitive to a number of image
transformations such as converting the image to grayscale, vectorizing the
image and losing some parts of the screenshots. VizFake leverages a very small
amount of known labels, mirroring realistic and practical scenarios, where
labels (especially for known misinformative articles), are scarce and quickly
become dated. The F1 score of VizFake on a dataset of 50k screenshots of news
articles spanning more than 500 domains is roughly 85% using only 5% of ground
truth labels. Furthermore, tensor representations of VizFake, obtained in an
unsupervised manner, allow for exploratory analysis of the data that provides
valuable insights into the problem. Finally, we compare VizFake with deep
transfer learning, a very popular black-box approach for image classification,
and with well-known text-based methods. VizFake achieves
competitive accuracy with deep transfer learning models while being two orders
of magnitude faster and not requiring laborious hyper-parameter tuning.
| [
{
"created": "Mon, 15 Feb 2021 21:05:11 GMT",
"version": "v1"
},
{
"created": "Thu, 3 Jun 2021 22:32:32 GMT",
"version": "v2"
}
] | 2021-06-07 | [
[
"Abdali",
"Sara",
""
],
[
"Gurav",
"Rutuja",
""
],
[
"Menon",
"Siddharth",
""
],
[
"Fonseca",
"Daniel",
""
],
[
"Entezari",
"Negin",
""
],
[
"Shah",
"Neil",
""
],
[
"Papalexakis",
"Evangelos E.",
""
]
] |
2102.07857 | Sara Abdali | Sara Abdali, Neil Shah, Evangelos E. Papalexakis | KNH: Multi-View Modeling with K-Nearest Hyperplanes Graph for
Misinformation Detection | null | Second International TrueFact Workshop 2020: Making a Credible Web
for Tomorrow | null | null | cs.LG cs.AI cs.CY | http://creativecommons.org/licenses/by/4.0/ | Graphs are one of the most efficacious structures for representing datapoints
and their relations, and they have been largely exploited for different
applications. Previously, the higher-order relations between the nodes have
been modeled by a generalization of graphs known as hypergraphs. In
hypergraphs, the edges are defined by a set of nodes i.e., hyperedges to
demonstrate the higher order relationships between the data. However, there is
no explicit higher-order generalization for nodes themselves. In this work, we
introduce a novel generalization of graphs i.e., K-Nearest Hyperplanes graph
(KNH) where the nodes are defined by higher order Euclidean subspaces for
multi-view modeling of the nodes. In fact, in KNH, nodes are hyperplanes or
more precisely m-flats instead of datapoints. We experimentally evaluate the
KNH graph on two multi-aspect datasets for misinformation detection. The
experimental results suggest that multi-view modeling of articles using KNH
graph outperforms the classic KNN graph in terms of classification performance.
| [
{
"created": "Mon, 15 Feb 2021 21:41:12 GMT",
"version": "v1"
}
] | 2021-02-17 | [
[
"Abdali",
"Sara",
""
],
[
"Shah",
"Neil",
""
],
[
"Papalexakis",
"Evangelos E.",
""
]
] |
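The idea of replacing datapoint nodes with local hyperplanes can be sketched as follows; the neighbourhood-PCA fit and the flat-to-flat distance used here are illustrative assumptions rather than the paper's exact construction.

import numpy as np
from sklearn.neighbors import NearestNeighbors

def fit_local_flats(X, n_neighbors=10, m=2):
    """Fit an m-dimensional affine flat (mean + orthonormal basis) around every sample."""
    nbrs = NearestNeighbors(n_neighbors=n_neighbors).fit(X)
    _, idx = nbrs.kneighbors(X)
    flats = []
    for neigh in idx:
        pts = X[neigh]
        mean = pts.mean(axis=0)
        _, _, vt = np.linalg.svd(pts - mean, full_matrices=False)
        flats.append((mean, vt[:m]))            # each node is now an m-flat
    return flats

def flat_distance(f1, f2):
    """Crude dissimilarity: residual of projecting one origin onto the other flat."""
    (m1, b1), (m2, b2) = f1, f2
    r = (m1 - m2) - b2.T @ (b2 @ (m1 - m2))
    return np.linalg.norm(r)

def knh_edges(flats, k=5):
    """Connect every flat to its k most similar flats."""
    n = len(flats)
    D = np.array([[flat_distance(flats[i], flats[j]) for j in range(n)] for i in range(n)])
    np.fill_diagonal(D, np.inf)
    return [(i, j) for i in range(n) for j in np.argsort(D[i])[:k]]

X = np.random.rand(100, 16)
edges = knh_edges(fit_local_flats(X))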
2102.07899 | Fanwei Kong | Fanwei Kong, Nathan Wilson, Shawn C. Shadden | A Deep-Learning Approach For Direct Whole-Heart Mesh Reconstruction | null | Medical Image Analysis, 2021, 102222, ISSN 1361-8415 | 10.1016/j.media.2021.102222 | null | eess.IV cs.CV | http://creativecommons.org/licenses/by/4.0/ | Automated construction of surface geometries of cardiac structures from
volumetric medical images is important for a number of clinical applications.
While deep-learning-based approaches have demonstrated promising reconstruction
precision, these approaches have mostly focused on voxel-wise segmentation
followed by surface reconstruction and post-processing techniques. However,
such approaches suffer from a number of limitations including disconnected
regions or incorrect surface topology due to erroneous segmentation and
stair-case artifacts due to limited segmentation resolution. We propose a novel
deep-learning-based approach that directly predicts whole heart surface meshes
from volumetric CT and MR image data. Our approach leverages a graph
convolutional neural network to predict deformation on mesh vertices from a
pre-defined mesh template to reconstruct multiple anatomical structures in a 3D
image volume. Our method demonstrated promising performance in generating whole
heart reconstructions with as good or better accuracy than prior
deep-learning-based methods on both CT and MR data. Furthermore, by deforming a
template mesh, our method can generate whole heart geometries with better
anatomical consistency and produce high-resolution geometries from lower
resolution input image data. Our method was also able to produce temporally
consistent surface mesh predictions for heart motion from CT or MR cine
sequences, and therefore can potentially be applied for efficiently
constructing 4D whole heart dynamics. Our code and pre-trained networks are
available at https://github.com/fkong7/MeshDeformNet
| [
{
"created": "Tue, 16 Feb 2021 00:39:43 GMT",
"version": "v1"
},
{
"created": "Mon, 13 Sep 2021 18:36:45 GMT",
"version": "v2"
}
] | 2021-09-15 | [
[
"Kong",
"Fanwei",
""
],
[
"Wilson",
"Nathan",
""
],
[
"Shadden",
"Shawn C.",
""
]
] |
2102.07943 | Zhao Kang | Zhao Kang, Zhiping Lin, Xiaofeng Zhu, Wenbo Xu | Structured Graph Learning for Scalable Subspace Clustering: From
Single-view to Multi-view | null | IEEE Transactions on Cybernetics 2021 | null | null | cs.LG cs.AI cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Graph-based subspace clustering methods have exhibited promising performance.
However, they still suffer from several drawbacks: they incur expensive
time overhead, fail to explore explicit clusters, and cannot generalize
to unseen data points. In this work, we propose a scalable graph learning
framework, seeking to address the above three challenges simultaneously.
Specifically, it is based on the ideas of anchor points and bipartite graph.
Rather than building a $n\times n$ graph, where $n$ is the number of samples,
we construct a bipartite graph to depict the relationship between samples and
anchor points. Meanwhile, a connectivity constraint is employed to ensure that
the connected components indicate clusters directly. We further establish the
connection between our method and the K-means clustering. Moreover, a model to
process multi-view data is also proposed, which scales linearly with respect
to $n$. Extensive experiments demonstrate the efficiency and effectiveness of
our approach with respect to many state-of-the-art clustering methods.
| [
{
"created": "Tue, 16 Feb 2021 03:46:11 GMT",
"version": "v1"
}
] | 2021-02-23 | [
[
"Kang",
"Zhao",
""
],
[
"Lin",
"Zhiping",
""
],
[
"Zhu",
"Xiaofeng",
""
],
[
"Xu",
"Wenbo",
""
]
] |
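A minimal sketch of the anchor-based bipartite-graph idea described above, assuming k-means anchors, a Gaussian-weighted k-nearest-anchor affinity, and a simple spectral step; the paper's actual optimisation and connectivity constraint are not reproduced here.

import numpy as np
from sklearn.cluster import KMeans

def bipartite_graph(X, n_anchors=50, knn=5, gamma=1.0):
    """Build an n-by-m sample/anchor affinity instead of an n-by-n graph."""
    anchors = KMeans(n_clusters=n_anchors, n_init=10, random_state=0).fit(X).cluster_centers_
    d2 = ((X[:, None, :] - anchors[None, :, :]) ** 2).sum(-1)   # squared distances
    Z = np.zeros_like(d2)
    for i, di in enumerate(d2):
        nn = np.argsort(di)[:knn]
        w = np.exp(-gamma * di[nn])
        Z[i, nn] = w / w.sum()                                  # row-stochastic weights
    return Z

def bipartite_spectral_clusters(Z, n_clusters=4):
    """Cluster samples from the left singular vectors of the normalised bipartite graph."""
    d = Z.sum(axis=0) + 1e-12                                   # anchor degrees
    U, _, _ = np.linalg.svd(Z / np.sqrt(d), full_matrices=False)
    return KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(U[:, :n_clusters])

X = np.random.rand(500, 20)
labels = bipartite_spectral_clusters(bipartite_graph(X))

Because the affinity has only n-by-m entries (with m anchors fixed), the construction grows linearly in the number of samples, which is the scalability argument made in the abstract.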
2102.08009 | Abhinav Valada | Kshitij Sirohi, Rohit Mohan, Daniel B\"uscher, Wolfram Burgard,
Abhinav Valada | EfficientLPS: Efficient LiDAR Panoptic Segmentation | Ranked #1 on SemanticKITTI and nuScenes panoptic segmentation
benchmarks | IEEE Transactions on Robotics (T-RO), 2021 | null | null | cs.CV cs.LG cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Panoptic segmentation of point clouds is a crucial task that enables
autonomous vehicles to comprehend their vicinity using their highly accurate
and reliable LiDAR sensors. Existing top-down approaches tackle this problem by
either combining independent task-specific networks or translating methods from
the image domain ignoring the intricacies of LiDAR data and thus often
resulting in sub-optimal performance. In this paper, we present the novel
top-down Efficient LiDAR Panoptic Segmentation (EfficientLPS) architecture that
addresses multiple challenges in segmenting LiDAR point clouds including
distance-dependent sparsity, severe occlusions, large scale-variations, and
re-projection errors. EfficientLPS comprises a novel shared backbone that
encodes with strengthened geometric transformation modeling capacity and
aggregates semantically rich range-aware multi-scale features. It incorporates
new scale-invariant semantic and instance segmentation heads along with the
panoptic fusion module which is supervised by our proposed panoptic periphery
loss function. Additionally, we formulate a regularized pseudo labeling
framework to further improve the performance of EfficientLPS by training on
unlabelled data. We benchmark our proposed model on two large-scale LiDAR
datasets: nuScenes, for which we also provide ground truth annotations, and
SemanticKITTI. Notably, EfficientLPS sets the new state-of-the-art on both
these datasets.
| [
{
"created": "Tue, 16 Feb 2021 08:14:52 GMT",
"version": "v1"
},
{
"created": "Tue, 2 Mar 2021 15:30:41 GMT",
"version": "v2"
},
{
"created": "Thu, 4 Nov 2021 15:49:47 GMT",
"version": "v3"
}
] | 2021-11-05 | [
[
"Sirohi",
"Kshitij",
""
],
[
"Mohan",
"Rohit",
""
],
[
"Büscher",
"Daniel",
""
],
[
"Burgard",
"Wolfram",
""
],
[
"Valada",
"Abhinav",
""
]
] |
2102.08019 | Kevin Bello | Kevin Bello, Chuyang Ke and Jean Honorio | A Thorough View of Exact Inference in Graphs from the Degree-4
Sum-of-Squares Hierarchy | 17 pages, 5 figures | Artificial Intelligence and Statistics (AISTATS), 2022 | null | null | cs.LG cs.AI stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Performing inference in graphs is a common task within several machine
learning problems, e.g., image segmentation, community detection, among others.
For a given undirected connected graph, we tackle the statistical problem of
exactly recovering an unknown ground-truth binary labeling of the nodes from a
single corrupted observation of each edge. Such a problem can be formulated as a
quadratic combinatorial optimization problem over the boolean hypercube, where
it has been shown before that one can (with high probability and in polynomial
time) exactly recover the ground-truth labeling of graphs that have an
isoperimetric number that grows with respect to the number of nodes (e.g.,
complete graphs, regular expanders). In this work, we apply a powerful
hierarchy of relaxations, known as the sum-of-squares (SoS) hierarchy, to the
combinatorial problem. Motivated by empirical evidence on the improvement in
exact recoverability, we center our attention on the degree-4 SoS relaxation
and set out to understand the origin of such improvement from a graph
theoretical perspective. We show that the solution of the dual of the relaxed
problem is related to finding edge weights of the Johnson and Kneser graphs,
where the weights fulfill the SoS constraints and intuitively allow the input
graph to increase its algebraic connectivity. Finally, as byproduct of our
analysis, we derive a novel Cheeger-type lower bound for the algebraic
connectivity of graphs with signed edge weights.
| [
{
"created": "Tue, 16 Feb 2021 08:36:19 GMT",
"version": "v1"
},
{
"created": "Tue, 1 Jun 2021 19:38:36 GMT",
"version": "v2"
}
] | 2022-09-12 | [
[
"Bello",
"Kevin",
""
],
[
"Ke",
"Chuyang",
""
],
[
"Honorio",
"Jean",
""
]
] |
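For context, the exact-recovery problem referred to in the abstract above is commonly written as the following quadratic optimization over the hypercube (the notation is assumed here for illustration); the degree-4 SoS relaxation discussed in the paper then replaces the constraint $x_i^2 = 1$ with pseudo-moment constraints:

\[
  \hat{x} \;\in\; \operatorname*{arg\,max}_{x \in \{-1,+1\}^{n}} \; \sum_{(u,v) \in E} Y_{uv}\, x_{u} x_{v},
\]

where $Y_{uv} \in \{-1,+1\}$ denotes the single (possibly corrupted) observation of whether nodes $u$ and $v$ share the same ground-truth label.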
2102.08138 | Nan Wu | Nan Wu, Yuan Xie, Cong Hao | IronMan: GNN-assisted Design Space Exploration in High-Level Synthesis
via Reinforcement Learning | null | GLSVLSI 2021 | 10.1145/3453688.3461495 | null | cs.AR cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite the great success of High-Level Synthesis (HLS) tools, we observe
several unresolved challenges: 1) the high-level abstraction of programming
styles in HLS sometimes conceals optimization opportunities; 2) existing HLS
tools do not provide flexible trade-off (Pareto) solutions among different
objectives and constraints; 3) the actual quality of the resulting RTL designs
is hard to predict. To address these challenges, we propose an end-to-end
framework, namely IronMan. The primary goal is to enable a flexible and
automated design space exploration (DSE), to provide either optimal solutions
under user-specified constraints, or various trade-offs among different
objectives (such as different types of resources, area, and latency). Such DSE
would otherwise require tedious manual effort, or cannot be achieved at all
with existing HLS tools. There are three components in IronMan: 1)
GPP, a highly accurate graph-neural-network-based performance and resource
predictor; 2) RLMD, a reinforcement-learning-based multi-objective DSE engine
that explores the optimal resource allocation strategy, to provide Pareto
solutions between different objectives; 3) CT, a code transformer to assist
RLMD and GPP, which extracts the data flow graph from original HLS C/C++ and
automatically generates synthesizable code with HLS directives. The
experimental results show that: 1) GPP achieves high prediction accuracy,
reducing prediction errors of HLS tools by 10.9x in resource utilization and
5.7x in timing; 2) RLMD obtains optimal or Pareto solutions that outperform the
genetic algorithm and simulated annealing by 12.7% and 12.9%, respectively; 3)
IronMan is able to find optimized solutions perfectly matching various DSP
constraints, with 2.54x fewer DSPs and up to 6x shorter latency than those of
HLS tools while being up to 400x faster than the heuristic algorithms and HLS
tools.
| [
{
"created": "Tue, 16 Feb 2021 13:22:00 GMT",
"version": "v1"
},
{
"created": "Wed, 8 Dec 2021 23:04:32 GMT",
"version": "v2"
}
] | 2021-12-10 | [
[
"Wu",
"Nan",
""
],
[
"Xie",
"Yuan",
""
],
[
"Hao",
"Cong",
""
]
] |
2102.08145 | Florian Tschopp | Florian Tschopp, Cornelius von Einem, Andrei Cramariuc, David Hug,
Andrew William Palmer, Roland Siegwart, Margarita Chli, Juan Nieto | Hough2Map -- Iterative Event-based Hough Transform for High-Speed
Railway Mapping | Florian Tschopp, Cornelius von Einem, and Andrei Cramariuc
contributed equally to this work | IEEE Robotics and Automation Letters ( Volume: 6, Issue: 2, April
2021) | 10.1109/LRA.2021.3061404 | null | cs.RO cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | To cope with the growing demand for transportation on the railway system,
accurate, robust, and high-frequency positioning is required to enable a safe
and efficient utilization of the existing railway infrastructure. As a basis
for a localization system we propose a complete on-board mapping pipeline able
to map robust meaningful landmarks, such as poles from power lines, in the
vicinity of the vehicle. Such poles are good candidates for reliable and long
term landmarks even through difficult weather conditions or seasonal changes.
To address the challenges of motion blur and illumination changes in railway
scenarios we employ a Dynamic Vision Sensor, a novel event-based camera. Using
a sideways oriented on-board camera, poles appear as vertical lines. To map
such lines in a real-time event stream, we introduce Hough2Map, a novel
consecutive iterative event-based Hough transform framework capable of
detecting, tracking, and triangulating close-by structures. We demonstrate the
mapping reliability and accuracy of Hough2Map on real-world data in typical
usage scenarios and evaluate using surveyed infrastructure ground truth maps.
Hough2Map achieves a detection reliability of up to 92% and a mapping root mean
square error accuracy of 1.1518m.
| [
{
"created": "Tue, 16 Feb 2021 13:36:07 GMT",
"version": "v1"
},
{
"created": "Thu, 18 Feb 2021 15:51:51 GMT",
"version": "v2"
}
] | 2021-03-30 | [
[
"Tschopp",
"Florian",
""
],
[
"von Einem",
"Cornelius",
""
],
[
"Cramariuc",
"Andrei",
""
],
[
"Hug",
"David",
""
],
[
"Palmer",
"Andrew William",
""
],
[
"Siegwart",
"Roland",
""
],
[
"Chli",
"Margarita",
""
],
[
"Nieto",
"Juan",
""
]
] |
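A toy Python sketch of an event-driven Hough accumulator for near-vertical lines, in the spirit of the framework described above; the bin sizes, angle range, decay factor, and threshold are assumptions, and the tracking and triangulation stages of the actual pipeline are omitted.

import numpy as np

class EventHoughLines:
    def __init__(self, width, n_rho=240, thetas_deg=(-10, 0, 10), decay=0.999):
        self.thetas = np.deg2rad(np.array(thetas_deg))   # near-vertical angles only
        self.rho_bins = np.linspace(0, width, n_rho)
        self.acc = np.zeros((n_rho, len(self.thetas)))
        self.decay = decay

    def update(self, x, y):
        """Add one event at pixel (x, y); older votes decay so moving lines stay sharp."""
        self.acc *= self.decay
        rho = x * np.cos(self.thetas) + y * np.sin(self.thetas)
        r_idx = np.clip(np.searchsorted(self.rho_bins, rho), 0, len(self.rho_bins) - 1)
        self.acc[r_idx, np.arange(len(self.thetas))] += 1.0

    def detect(self, threshold=50.0):
        """Return (rho, theta) pairs whose accumulator cells exceed the threshold."""
        r, t = np.nonzero(self.acc > threshold)
        return list(zip(self.rho_bins[r], self.thetas[t]))

hough = EventHoughLines(width=640)
for x, y in np.random.randint(0, 480, size=(1000, 2)):   # stand-in event stream
    hough.update(x, y)
print(hough.detect())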
2102.08146 | Manfred Schmidt-Schauss | Manfred Schmidt-Schau{\ss} and Temur Kutsia and Jordi Levy and Mateu
Villaret and Yunus Kutz | Nominal Unification and Matching of Higher Order Expressions with
Recursive Let | 37 pages, 9 figures, This paper is an extended version of the
conference publication: Manfred Schmidt-Schau{\ss} and Temur Kutsia and Jordi
Levy and Mateu Villaret and Yunus Kutz, Nominal Unification of Higher Order
Expressions with Recursive Let, LOPSTR-16, Lecture Notes in Computer Science
10184, Springer, p 328 -344, 2016. arXiv admin note: text overlap with
arXiv:1608.03771 | Fundamenta Informaticae, Volume 185, Issue 3 (May 6, 2022) fi:7191 | 10.3233/FI-222110 | null | cs.LO cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | A sound and complete algorithm for nominal unification of higher-order
expressions with a recursive let is described, and shown to run in
nondeterministic polynomial time. We also explore specializations like nominal
letrec-matching for expressions, for DAGs, and for garbage-free expressions and
determine their complexity. We also provide a nominal unification algorithm for
higher-order expressions with recursive let and atom-variables, where we show
that it also runs in nondeterministic polynomial time. In addition we prove
that there is a guessing strategy for nominal unification with letrec and
atom-variable that is a trade-off between exponential growth and
non-determinism. Nominal matching with variables representing partial
letrec-environments is also shown to be in NP.
| [
{
"created": "Tue, 16 Feb 2021 13:36:59 GMT",
"version": "v1"
},
{
"created": "Sun, 13 Mar 2022 10:56:39 GMT",
"version": "v2"
},
{
"created": "Wed, 16 Mar 2022 20:05:24 GMT",
"version": "v3"
},
{
"created": "Tue, 26 Apr 2022 07:58:41 GMT",
"version": "v4"
}
] | 2023-06-22 | [
[
"Schmidt-Schauß",
"Manfred",
""
],
[
"Kutsia",
"Temur",
""
],
[
"Levy",
"Jordi",
""
],
[
"Villaret",
"Mateu",
""
],
[
"Kutz",
"Yunus",
""
]
] |
2102.08148 | Jintai Chen | Jintai Chen, Hongyun Yu, Ruiwei Feng, Danny Z. Chen, Jian Wu | Flow-Mixup: Classifying Multi-labeled Medical Images with Corrupted
Labels | null | 2020 IEEE International Conference on Bioinformatics and
Biomedicine | 10.1109/BIBM49941.2020.9313408 | null | cs.CV cs.LG eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In clinical practice, medical image interpretation often involves
multi-labeled classification, since the affected parts of a patient tend to
present multiple symptoms or comorbidities. Recently, deep learning based
frameworks have attained expert-level performance on medical image
interpretation, which can be attributed partially to large amounts of accurate
annotations. However, manually annotating massive amounts of medical images is
impractical, while automatic annotation is fast but imprecise (possibly
introducing corrupted labels). In this work, we propose a new regularization
approach, called Flow-Mixup, for multi-labeled medical image classification
with corrupted labels. Flow-Mixup guides the models to capture robust features
for each abnormality, thus helping handle corrupted labels effectively and
making it possible to apply automatic annotation. Specifically, Flow-Mixup
decouples the extracted features by adding constraints to the hidden states of
the models. Also, Flow-Mixup is more stable and effective compared to other
known regularization methods, as shown by theoretical and empirical analyses.
Experiments on two electrocardiogram datasets and a chest X-ray dataset
containing corrupted labels verify that Flow-Mixup is effective and insensitive
to corrupted labels.
| [
{
"created": "Tue, 9 Feb 2021 16:04:26 GMT",
"version": "v1"
}
] | 2021-02-17 | [
[
"Chen",
"Jintai",
""
],
[
"Yu",
"Hongyun",
""
],
[
"Feng",
"Ruiwei",
""
],
[
"Chen",
"Danny Z.",
""
],
[
"Wu",
"Jian",
""
]
] |
2102.08168 | Jian Jin | Jian Jin, Xingxing Zhang, Xin Fu, Huan Zhang, Weisi Lin, Jian Lou, Yao
Zhao | Just Noticeable Difference for Deep Machine Vision | null | IEEE Transactions on Circuits and Systems for Video Technology,
2021 | 10.1109/TCSVT.2021.3113572 | null | cs.CV eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As an important perceptual characteristic of the Human Visual System (HVS),
the Just Noticeable Difference (JND) has been studied for decades with image
and video processing (e.g., perceptual visual signal compression). However,
there is little exploration on the existence of JND for the Deep Machine Vision
(DMV), although the DMV has made great strides in many machine vision tasks. In
this paper, we make an initial attempt and demonstrate that the DMV has the
JND, termed as the DMV-JND. We then propose a JND model for the image
classification task in the DMV. It has been discovered that the DMV can
tolerate distorted images with average PSNR of only 9.56dB (the lower the
better), by generating JND via unsupervised learning with the proposed
DMV-JND-NET. In particular, a semantic-guided redundancy assessment strategy is
designed to restrain the magnitude and spatial distribution of the DMV-JND.
Experimental results on image classification demonstrate that we successfully
find the JND for deep machine vision. Our DMV-JND facilitates a possible
direction for DMV-oriented image and video compression, watermarking, quality
assessment, deep neural network security, and so on.
| [
{
"created": "Tue, 16 Feb 2021 14:19:35 GMT",
"version": "v1"
},
{
"created": "Fri, 7 Jan 2022 14:02:08 GMT",
"version": "v2"
}
] | 2022-01-10 | [
[
"Jin",
"Jian",
""
],
[
"Zhang",
"Xingxing",
""
],
[
"Fu",
"Xin",
""
],
[
"Zhang",
"Huan",
""
],
[
"Lin",
"Weisi",
""
],
[
"Lou",
"Jian",
""
],
[
"Zhao",
"Yao",
""
]
] |
2102.08259 | Bo Zhao | Bo Zhao, Hakan Bilen | Dataset Condensation with Differentiable Siamese Augmentation | null | International Conference on Machine Learning 2021 | null | null | cs.LG cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | In many machine learning problems, large-scale datasets have become the
de-facto standard to train state-of-the-art deep networks at the price of heavy
computation load. In this paper, we focus on condensing large training sets
into significantly smaller synthetic sets which can be used to train deep
neural networks from scratch with a minimal drop in performance. Inspired by
recent training set synthesis methods, we propose Differentiable Siamese
Augmentation that enables effective use of data augmentation to synthesize more
informative synthetic images and thus achieves better performance when training
networks with augmentations. Experiments on multiple image classification
benchmarks demonstrate that the proposed method obtains substantial gains over
the state-of-the-art, 7% improvements on CIFAR10 and CIFAR100 datasets. We show
that, with less than 1% of the data, our method achieves 99.6%, 94.9%, 88.5%, 71.5%
relative performance on MNIST, FashionMNIST, SVHN, CIFAR10 respectively. We
also explore the use of our method in continual learning and neural
architecture search, and show promising results.
| [
{
"created": "Tue, 16 Feb 2021 16:32:21 GMT",
"version": "v1"
},
{
"created": "Thu, 10 Jun 2021 08:04:29 GMT",
"version": "v2"
}
] | 2021-06-11 | [
[
"Zhao",
"Bo",
""
],
[
"Bilen",
"Hakan",
""
]
] |
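A minimal PyTorch sketch of the "siamese" augmentation idea described above, assuming a single differentiable random-shift augmentation and a toy matching loss; the actual condensation objective and augmentation set are not reproduced here.

import torch
import torch.nn.functional as F

def siamese_shift(real, synth, max_shift=4):
    """Apply an identical, differentiable random translation to both batches."""
    dx, dy = torch.randint(-max_shift, max_shift + 1, (2,)).tolist()
    theta = torch.tensor([[1.0, 0.0, 2.0 * dx / real.shape[-1]],
                          [0.0, 1.0, 2.0 * dy / real.shape[-2]]]).unsqueeze(0)

    def shift(x):
        grid = F.affine_grid(theta.expand(x.size(0), -1, -1), x.size(), align_corners=False)
        return F.grid_sample(x, grid, align_corners=False)

    return shift(real), shift(synth)

real = torch.randn(8, 3, 32, 32)                        # a real minibatch
synth = torch.randn(8, 3, 32, 32, requires_grad=True)   # learnable synthetic images
real_aug, synth_aug = siamese_shift(real, synth)
loss = F.mse_loss(synth_aug.mean(dim=0), real_aug.mean(dim=0))  # toy matching loss
loss.backward()                                          # gradients reach the synthetic set

The key point illustrated is that the same randomly drawn parameters augment the real and synthetic batches, so gradients can flow through the augmentation into the synthetic images.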
2102.08445 | Nancy Xin Ru Wang | Nancy Xin Ru Wang, Douglas Burdick, Yunyao Li | TableLab: An Interactive Table Extraction System with Adaptive Deep
Learning | Accepted at IUI'21 | 26th International Conference on Intelligent User Interfaces 2021 | 10.1145/3397482.3450718 | null | cs.HC cs.AI | http://creativecommons.org/licenses/by/4.0/ | Table extraction from PDF and image documents is a ubiquitous task in the
real-world. Perfect extraction quality is difficult to achieve with one single
out-of-box model due to (1) the wide variety of table styles, (2) the lack of
training data representing this variety and (3) the inherent ambiguity and
subjectivity of table definitions between end-users. Meanwhile, building
customized models from scratch can be difficult due to the expensive nature of
annotating table data. We attempt to solve these challenges with TableLab by
providing a system where users and models seamlessly work together to quickly
customize high-quality extraction models with a few labelled examples for the
user's document collection, which contains pages with tables. Given an input
document collection, TableLab first detects tables with similar structures
(templates) by clustering embeddings from the extraction model. Document
collections often contain tables created with a limited set of templates or
similar structures. It then selects a few representative table examples already
extracted with a pre-trained base deep learning model. Via an easy-to-use user
interface, users provide feedback to these selections without necessarily
having to identify every single error. TableLab then applies such feedback to
finetune the pre-trained model and returns the results of the finetuned model
back to the user. The user can choose to repeat this process iteratively until
obtaining a customized model with satisfactory performance.
| [
{
"created": "Tue, 16 Feb 2021 20:52:44 GMT",
"version": "v1"
}
] | 2021-02-18 | [
[
"Wang",
"Nancy Xin Ru",
""
],
[
"Burdick",
"Douglas",
""
],
[
"Li",
"Yunyao",
""
]
] |
2102.08581 | Byungchan Ko | Byungchan Ko and Jungseul Ok | Efficient Scheduling of Data Augmentation for Deep Reinforcement
Learning | null | Neurips 2022 | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | In deep reinforcement learning (RL), data augmentation is widely considered
as a tool to induce a set of useful priors about semantic consistency and
improve sample efficiency and generalization performance. However, even when
the prior is useful for generalization, distilling it to the RL agent often
interferes with RL training and degenerates sample efficiency. Meanwhile, the
agent is forgetful of the prior due to the non-stationary nature of RL. These
observations suggest two extreme schedules of distillation: (i) over the entire
training; or (ii) only at the end. Hence, we devise a stand-alone network
distillation method to inject the consistency prior at any time (even after
RL), and a simple yet efficient framework to automatically schedule the
distillation. Specifically, the proposed framework first focuses on mastering
train environments regardless of generalization by adaptively deciding which
augmentation (if any) to use for the training. After this, we add the
distillation to extract the remaining benefits for generalization from all the
augmentations, which requires no additional new samples. In our experiments, we
demonstrate the utility of the proposed framework, in particular, that
considers postponing the augmentation to the end of RL training.
| [
{
"created": "Wed, 17 Feb 2021 05:22:34 GMT",
"version": "v1"
},
{
"created": "Thu, 2 Jun 2022 09:48:34 GMT",
"version": "v2"
},
{
"created": "Wed, 19 Oct 2022 01:09:33 GMT",
"version": "v3"
}
] | 2022-10-21 | [
[
"Ko",
"Byungchan",
""
],
[
"Ok",
"Jungseul",
""
]
] |
2102.08628 | Essam Rashed | Essam A. Rashed, Sachiko Kodera, Hidenobu Shirakami, Ryotetsu
Kawaguchi, Kazuhiro Watanabe, Akimasa Hirata | Knowledge discovery from emergency ambulance dispatch during COVID-19: A
case study of Nagoya City, Japan | 15 pages, 12 figures, 2 tables | Journal of Biomedical Informatics, 2021 | 10.1016/j.jbi.2021.103743 | null | cs.AI eess.SP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Accurate forecasting of medical service requirements is an important big data
problem that is crucial for resource management in critical times such as
natural disasters and pandemics. With the global spread of coronavirus disease
2019 (COVID-19), several concerns have been raised regarding the ability of
medical systems to handle sudden changes in the daily routines of healthcare
providers. One significant problem is the management of ambulance dispatch and
control during a pandemic. To help address this problem, we first analyze
ambulance dispatch data records from April 2014 to August 2020 for Nagoya City,
Japan. Significant changes were observed in the data during the pandemic,
including the state of emergency (SoE) declared across Japan. In this study, we
propose a deep learning framework based on recurrent neural networks to
estimate the number of emergency ambulance dispatches (EADs) during a SoE. The
fusion of data includes environmental factors, the localization data of mobile
phone users, and the past history of EADs, thereby providing a general
framework for knowledge discovery and better resource management. The results
indicate that the proposed blend of training data can be used efficiently in a
real-world estimation of EAD requirements during periods of high uncertainties
such as pandemics.
| [
{
"created": "Wed, 17 Feb 2021 08:37:05 GMT",
"version": "v1"
}
] | 2021-03-23 | [
[
"Rashed",
"Essam A.",
""
],
[
"Kodera",
"Sachiko",
""
],
[
"Shirakami",
"Hidenobu",
""
],
[
"Kawaguchi",
"Ryotetsu",
""
],
[
"Watanabe",
"Kazuhiro",
""
],
[
"Hirata",
"Akimasa",
""
]
] |
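An illustrative sketch of a recurrent dispatch forecaster of the kind described above; the feature layout, window length, and layer sizes are placeholders, not the paper's configuration.

import torch
import torch.nn as nn

class DispatchForecaster(nn.Module):
    def __init__(self, n_features=8, hidden=64):
        super().__init__()
        self.rnn = nn.LSTM(n_features, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                 # x: (batch, window, n_features)
        out, _ = self.rnn(x)
        return self.head(out[:, -1])      # predict the next-step dispatch count

model = DispatchForecaster()
window = torch.randn(16, 14, 8)           # 16 samples, 14-day window, 8 assumed features
pred = model(window)                      # shape (16, 1)

The assumed feature columns would combine environmental factors, mobile-phone localization counts, and past dispatch history, reflecting the data fusion described in the abstract.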
2102.08655 | Nora Hollenstein | Nora Hollenstein, Cedric Renggli, Benjamin Glaus, Maria Barrett,
Marius Troendle, Nicolas Langer, Ce Zhang | Decoding EEG Brain Activity for Multi-Modal Natural Language Processing | null | Frontiers of Human Neuroscience 2021 | 10.3389/fnhum.2021.659410 | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Until recently, human behavioral data from reading has mainly been of
interest to researchers to understand human cognition. However, these human
language processing signals can also be beneficial in machine learning-based
natural language processing tasks. Using EEG brain activity to this purpose is
largely unexplored as of yet. In this paper, we present the first large-scale
study of systematically analyzing the potential of EEG brain activity data for
improving natural language processing tasks, with a special focus on which
features of the signal are most beneficial. We present a multi-modal machine
learning architecture that learns jointly from textual input as well as from
EEG features. We find that filtering the EEG signals into frequency bands is
more beneficial than using the broadband signal. Moreover, for a range of word
embedding types, EEG data improves binary and ternary sentiment classification
and outperforms multiple baselines. For more complex tasks such as relation
detection, further research is needed. Finally, EEG data proves to be
particularly promising when limited training data is available.
| [
{
"created": "Wed, 17 Feb 2021 09:44:21 GMT",
"version": "v1"
},
{
"created": "Tue, 13 Jul 2021 07:34:28 GMT",
"version": "v2"
}
] | 2021-08-11 | [
[
"Hollenstein",
"Nora",
""
],
[
"Renggli",
"Cedric",
""
],
[
"Glaus",
"Benjamin",
""
],
[
"Barrett",
"Maria",
""
],
[
"Troendle",
"Marius",
""
],
[
"Langer",
"Nicolas",
""
],
[
"Zhang",
"Ce",
""
]
] |
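A small sketch of the band filtering step highlighted in the abstract, assuming standard band edges, a 250 Hz sampling rate, and mean band power as the derived feature; the multi-modal text+EEG model itself is not shown.

import numpy as np
from scipy.signal import butter, filtfilt

BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def band_powers(eeg, fs=250.0):
    """Return the mean power of one EEG channel in each assumed frequency band."""
    feats = {}
    for name, (lo, hi) in BANDS.items():
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        filtered = filtfilt(b, a, eeg)
        feats[name] = float(np.mean(filtered ** 2))
    return feats

signal = np.random.randn(5 * 250)          # 5 seconds of stand-in EEG at 250 Hz
print(band_powers(signal))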
2102.08665 | Nicolas Guigui | Nicolas Guigui (UCA, EPIONE), Pamela Moceri (URRIS UR2CA), Maxime
Sermesant (UCA, EPIONE), Xavier Pennec (UCA, EPIONE) | Cardiac Motion Modeling with Parallel Transport and Shape Splines | null | International Symposium on Biological Imaging, Apr 2021, Nice,
France | null | null | cs.CV math.DG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In cases of pressure or volume overload, probing cardiac function may be
difficult because of the interactions between shape and deformations. In this
work, we use the LDDMM framework and parallel transport to estimate and
reorient deformations of the right ventricle. We then propose a normalization
procedure for the amplitude of the deformation, and a second-order spline model
to represent the full cardiac contraction. The method is applied to 3D meshes
of the right ventricle extracted from echocardiographic sequences of 314
patients divided into three disease categories and a control group. We find
significant differences between pathologies in the model parameters, revealing
insights into the dynamics of each disease.
| [
{
"created": "Wed, 17 Feb 2021 10:03:32 GMT",
"version": "v1"
}
] | 2021-02-18 | [
[
"Guigui",
"Nicolas",
"",
"UCA, EPIONE"
],
[
"Moceri",
"Pamela",
"",
"URRIS UR2CA"
],
[
"Sermesant",
"Maxime",
"",
"UCA, EPIONE"
],
[
"Pennec",
"Xavier",
"",
"UCA, EPIONE"
]
] |
2102.08689 | Zhe Chen | Zhe Chen, Daniel Harabor, Jiaoyang Li, Peter J. Stuckey | Symmetry Breaking for k-Robust Multi-Agent Path Finding | 8 pages. Accepted by Thirty-Fifth AAAI Conference on Artificial
Intelligence | Proceedings of the AAAI Conference on Artificial Intelligence,
35(14), 12267-12274 (2021) | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | During Multi-Agent Path Finding (MAPF) problems, agents can be delayed by
unexpected events. To address such situations, recent work describes k-Robust
Conflict-Based Search (k-CBS): an algorithm that produces a coordinated and
collision-free plan that is robust for up to k delays. In this work we
introduce a variety of pairwise symmetry breaking constraints, specific to
k-robust planning, that can efficiently find compatible and optimal paths for
pairs of conflicting agents. We give a thorough description of the new
constraints and report large improvements to success rate in a range of domains
including: (i) classic MAPF benchmarks; (ii) automated warehouse domains; and
(iii) maps from the 2019 Flatland Challenge, a recently introduced railway
domain where k-robust planning can be fruitfully applied to schedule trains.
| [
{
"created": "Wed, 17 Feb 2021 11:09:33 GMT",
"version": "v1"
},
{
"created": "Thu, 28 Oct 2021 05:00:21 GMT",
"version": "v2"
}
] | 2021-10-29 | [
[
"Chen",
"Zhe",
""
],
[
"Harabor",
"Daniel",
""
],
[
"Li",
"Jiaoyang",
""
],
[
"Stuckey",
"Peter J.",
""
]
] |
2102.08742 | Denis Coquenet | Denis Coquenet, Cl\'ement Chatelain, Thierry Paquet | SPAN: a Simple Predict & Align Network for Handwritten Paragraph
Recognition | null | Document Analysis and Recognition - ICDAR 2021. ICDAR 2021.
Lecture Notes in Computer Science, vol 12823 | 10.1007/978-3-030-86334-0_5 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Unconstrained handwriting recognition is an essential task in document
analysis. It is usually carried out in two steps. First, the document is
segmented into text lines. Second, an Optical Character Recognition model is
applied on these line images. We propose the Simple Predict & Align Network: an
end-to-end recurrence-free Fully Convolutional Network performing OCR at
paragraph level without any prior segmentation stage. The framework is as
simple as the one used for the recognition of isolated lines and we achieve
competitive results on three popular datasets: RIMES, IAM and READ 2016. The
proposed model does not require any dataset adaptation, it can be trained from
scratch, without segmentation labels, and it does not require line breaks in
the transcription labels. Our code and trained model weights are available at
https://github.com/FactoDeepLearning/SPAN.
| [
{
"created": "Wed, 17 Feb 2021 13:12:45 GMT",
"version": "v1"
}
] | 2021-09-13 | [
[
"Coquenet",
"Denis",
""
],
[
"Chatelain",
"Clément",
""
],
[
"Paquet",
"Thierry",
""
]
] |
2102.08755 | Kristina Gligoric | Kristina Gligori\'c, Ryen W. White, Emre K{\i}c{\i}man, Eric Horvitz,
Arnaud Chiolero, Robert West | Formation of Social Ties Influences Food Choice: A Campus-Wide
Longitudinal Study | null | Proc. ACM Hum.-Comput. Interact.5, CSCW1, Article 184 (April 2021) | 10.1145/34492971 | null | cs.SI cs.AI | http://creativecommons.org/licenses/by/4.0/ | Nutrition is a key determinant of long-term health, and social influence has
long been theorized to be a key determinant of nutrition. It has been difficult
to quantify the postulated role of social influence on nutrition using
traditional methods such as surveys, due to the typically small scale and short
duration of studies. To overcome these limitations, we leverage a novel source
of data: logs of 38 million food purchases made over an 8-year period on the
Ecole Polytechnique Federale de Lausanne (EPFL) university campus, linked to
anonymized individuals via the smartcards used to make on-campus purchases. In
a longitudinal observational study, we ask: How is a person's food choice
affected by eating with someone else whose own food choice is healthy vs.
unhealthy? To estimate causal effects from the passively observed log data, we
control confounds in a matched quasi-experimental design: we identify focal
users who at first do not have any regular eating partners but then start
eating with a fixed partner regularly, and we match focal users into comparison
pairs such that paired users are nearly identical with respect to covariates
measured before acquiring the partner, where the two focal users' new eating
partners diverge in the healthiness of their respective food choice. A
difference-in-differences analysis of the paired data yields clear evidence of
social influence: focal users acquiring a healthy-eating partner change their
habits significantly more toward healthy foods than focal users acquiring an
unhealthy-eating partner. We further identify foods whose purchase frequency is
impacted significantly by the eating partner's healthiness of food choice.
Beyond the main results, the work demonstrates the utility of passively sensed
food purchase logs for deriving insights, with the potential of informing the
design of public health interventions and food offerings.
| [
{
"created": "Wed, 17 Feb 2021 13:47:28 GMT",
"version": "v1"
}
] | 2021-02-18 | [
[
"Gligorić",
"Kristina",
""
],
[
"White",
"Ryen W.",
""
],
[
"Kıcıman",
"Emre",
""
],
[
"Horvitz",
"Eric",
""
],
[
"Chiolero",
"Arnaud",
""
],
[
"West",
"Robert",
""
]
] |
2102.08773 | Matthew Shardlow | Matthew Shardlow, Richard Evans and Marcos Zampieri | Predicting Lexical Complexity in English Texts: The Complex 2.0 Dataset | null | Lang Resources and Evaluation 56, 1153-1194 (2022) | 10.1007/s10579-022-09588-2 | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Identifying words which may cause difficulty for a reader is an essential
step in most lexical text simplification systems prior to lexical substitution
and can also be used for assessing the readability of a text. This task is
commonly referred to as Complex Word Identification (CWI) and is often modelled
as a supervised classification problem. For training such systems, annotated
datasets in which words and sometimes multi-word expressions are labelled
regarding complexity are required. In this paper we analyze previous work
carried out in this task and investigate the properties of CWI datasets for
English. We develop a protocol for the annotation of lexical complexity and use
this to annotate a new dataset, CompLex 2.0. We present experiments using both
new and old datasets to investigate the nature of lexical complexity. We found
that a Likert-scale annotation protocol provides an objective setting that is
superior for identifying the complexity of words compared to a binary
annotation protocol. We release a new dataset using our new protocol to promote
the task of Lexical Complexity Prediction.
| [
{
"created": "Wed, 17 Feb 2021 14:05:30 GMT",
"version": "v1"
},
{
"created": "Thu, 3 Nov 2022 09:31:11 GMT",
"version": "v2"
}
] | 2022-11-04 | [
[
"Shardlow",
"Matthew",
""
],
[
"Evans",
"Richard",
""
],
[
"Zampieri",
"Marcos",
""
]
] |
2102.08892 | Rudolf Rosa | Rudolf Rosa and Tom\'a\v{s} Musil and Ond\v{r}ej Du\v{s}ek and Dominik
Jurko and Patr\'icia Schmidtov\'a and David Mare\v{c}ek and Ond\v{r}ej Bojar
and Tom Kocmi and Daniel Hrbek and David Ko\v{s}\v{t}\'ak and Martina
Kinsk\'a and Marie Nov\'akov\'a and Josef Dole\v{z}al and Kl\'ara Voseck\'a
and Tom\'a\v{s} Studen\'ik and Petr \v{Z}abka | THEaiTRE 1.0: Interactive generation of theatre play scripts | Submitted to Text2Story workshop 2021 | Proc. Text2Story (2021) 71-76 | null | null | cs.CL cs.HC | http://creativecommons.org/licenses/by/4.0/ | We present the first version of a system for interactive generation of
theatre play scripts. The system is based on a vanilla GPT-2 model with several
adjustments, targeting specific issues we encountered in practice. We also list
other issues we encountered but plan to only solve in a future version of the
system. The presented system was used to generate a theatre play script planned
for premiere in February 2021.
| [
{
"created": "Wed, 17 Feb 2021 17:40:33 GMT",
"version": "v1"
}
] | 2021-10-26 | [
[
"Rosa",
"Rudolf",
""
],
[
"Musil",
"Tomáš",
""
],
[
"Dušek",
"Ondřej",
""
],
[
"Jurko",
"Dominik",
""
],
[
"Schmidtová",
"Patrícia",
""
],
[
"Mareček",
"David",
""
],
[
"Bojar",
"Ondřej",
""
],
[
"Kocmi",
"Tom",
""
],
[
"Hrbek",
"Daniel",
""
],
[
"Košťák",
"David",
""
],
[
"Kinská",
"Martina",
""
],
[
"Nováková",
"Marie",
""
],
[
"Doležal",
"Josef",
""
],
[
"Vosecká",
"Klára",
""
],
[
"Studeník",
"Tomáš",
""
],
[
"Žabka",
"Petr",
""
]
] |
2102.09024 | Mohita Chaudhary | Mohita Chaudhary, Mohamed Sadok Gastli, Lobna Nassar, Fakhri Karray | Deep Learning Approaches for Forecasting Strawberry Yields and Prices
Using Satellite Images and Station-Based Soil Parameters | Paper Accepted in Association for the Advancement of Artificial
Intelligence (AAAI) Spring Symposium on 21st Jan, 2021 | AAAI 2021 Spring Symposium on Combining Machine Learning and
Knowledge Engineering (AAAI-MAKE 2021) | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Computational tools for forecasting yields and prices for fresh produce have
been based on traditional machine learning approaches or time series modelling.
We propose here an alternate approach based on deep learning algorithms for
forecasting strawberry yields and prices in Santa Barbara county, California.
Building the proposed forecasting model comprises three stages: first, the
station-based ensemble model (ATT-CNN-LSTM-SeriesNet_Ens) with its compound
deep learning components, SeriesNet with Gated Recurrent Unit (GRU) and
Convolutional Neural Network LSTM with Attention layer (Att-CNN-LSTM), are
trained and tested using the station-based soil temperature and moisture data
of Santa Barbara as input and the corresponding strawberry yields or prices as
output. Secondly, the remote sensing ensemble model (SIM_CNN-LSTM_Ens), which
is an ensemble model of Convolutional Neural Network LSTM (CNN-LSTM) models, is
trained and tested using satellite images of the same county as input mapped to
the same yields and prices as output. These two ensembles forecast strawberry
yields and prices with minimal forecasting errors and highest model correlation
for five-week-ahead forecasts. Finally, the forecasts of these two models are
ensembled to have a final forecasted value for yields and prices by introducing
a voting ensemble. Based on an aggregated performance measure (AGM), it is
found that this voting ensemble not only enhances the forecasting performance
by 5% compared to its best performing component model but also outperforms the
Deep Learning (DL) ensemble model found in the literature by 33% for forecasting
yields and 21% for forecasting prices.
| [
{
"created": "Wed, 17 Feb 2021 20:54:34 GMT",
"version": "v1"
}
] | 2021-02-19 | [
[
"Chaudhary",
"Mohita",
""
],
[
"Gastli",
"Mohamed Sadok",
""
],
[
"Nassar",
"Lobna",
""
],
[
"Karray",
"Fakhri",
""
]
] |
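The final voting step that combines the two ensembles described above can be illustrated by a simple weighted average; the equal weights and the sample numbers here are assumptions, not the paper's tuned scheme or data.

import numpy as np

def voting_ensemble(forecasts, weights=None):
    """Combine forecasts (one array per model) into a single forecast."""
    forecasts = np.stack(forecasts)
    if weights is None:
        weights = np.full(len(forecasts), 1.0 / len(forecasts))
    return np.tensordot(weights, forecasts, axes=1)

station_based = np.array([41.2, 43.0, 39.8, 44.5, 42.1])    # illustrative 5-week-ahead yields
satellite_based = np.array([40.5, 44.2, 41.0, 43.7, 42.9])  # illustrative values only
print(voting_ensemble([station_based, satellite_based]))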
2102.09099 | Mohamed Amgad | Mohamed Amgad (1), Lamees A. Atteya (2), Hagar Hussein (3), Kareem
Hosny Mohammed (4), Ehab Hafiz (5), Maha A.T. Elsebaie (6), Ahmed M.
Alhusseiny (7), Mohamed Atef AlMoslemany (8), Abdelmagid M. Elmatboly (9),
Philip A. Pappalardo (10), Rokia Adel Sakr (11), Pooya Mobadersany (1), Ahmad
Rachid (12), Anas M. Saad (13), Ahmad M. Alkashash (14), Inas A. Ruhban (15),
Anas Alrefai (12), Nada M. Elgazar (16), Ali Abdulkarim (17), Abo-Alela Farag
(12), Amira Etman (8), Ahmed G. Elsaeed (16), Yahya Alagha (17), Yomna A.
Amer (8), Ahmed M. Raslan (18), Menatalla K. Nadim (19), Mai A.T. Elsebaie
(12), Ahmed Ayad (20), Liza E. Hanna (3), Ahmed Gadallah (12), Mohamed Elkady
(21), Bradley Drumheller (22), David Jaye (22), David Manthey (23), David A.
Gutman (24), Habiba Elfandy (25, 26), Lee A.D. Cooper (1, 27, 28) ((1)
Department of Pathology, Northwestern University, Chicago, IL, USA, (2) Cairo
Health Care Administration, Egyptian Ministry of Health, Cairo, Egypt, (3)
Department of Pathology, Nasser institute for research and treatment, Cairo,
Egypt, (4) Department of Pathology and Laboratory Medicine, University of
Pennsylvania, PA, USA, (5) Department of Clinical Laboratory Research,
Theodor Bilharz Research Institute, Giza, Egypt, (6) Department of Medicine,
Cook County Hospital, Chicago, IL, USA, (7) Department of Pathology, Baystate
Medical Center, University of Massachusetts, Springfield, MA, USA, (8)
Faculty of Medicine, Menoufia University, Menoufia, Egypt, (9) Faculty of
Medicine, Al-Azhar University, Cairo, Egypt, (10) Consultant for The Center
for Applied Proteomics and Molecular Medicine (CAPMM), George Mason
University, Manassas, VA, USA, (11) Department of Pathology, National Liver
Institute, Menoufia University, Menoufia, Egypt, (12) Faculty of Medicine,
Ain Shams University, Cairo, Egypt, (13) Cleveland Clinic Foundation,
Cleveland, OH, USA, (14) Department of Pathology, Indiana University,
Indianapolis, IN, USA, (15) Faculty of Medicine, Damascus University,
Damascus, Syria, (16) Faculty of Medicine, Mansoura University, Mansoura,
Egypt, (17) Faculty of Medicine, Cairo University, Cairo, Egypt, (18)
Department of Anaesthesia and Critical Care, Menoufia University Hospital,
Menoufia, Egypt, (19) Department of Clinical Pathology, Ain Shams University,
Cairo, Egypt, (20) Research Department, Oncology Consultants, PA, Houston,
TX, USA, (21) Siparadigm Diagnostic Informatics, Pine Brook, NJ, USA, (22)
Department of Pathology and Laboratory Medicine, Emory University School of
Medicine, Atlanta, GA, USA, (23) Kitware Inc., Clifton Park, NY, USA, (24)
Department of Neurology, Emory University School of Medicine, Atlanta, GA,
USA, (25) Department of Pathology, National Cancer Institute, Cairo, Egypt,
(26) Department of Pathology, Children's Cancer Hospital Egypt CCHE 57357,
Cairo, Egypt, (27) Lurie Cancer Center, Northwestern University, Chicago, IL,
USA, (28) Center for Computational Imaging and Signal Analytics, Northwestern
University Feinberg School of Medicine, Chicago, IL, USA) | NuCLS: A scalable crowdsourcing, deep learning approach and dataset for
nucleus classification, localization and segmentation | null | GigaScience, 11 (2022) | 10.1093/gigascience/giac037 | null | eess.IV cs.CV cs.LG q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | High-resolution mapping of cells and tissue structures provides a foundation
for developing interpretable machine-learning models for computational
pathology. Deep learning algorithms can provide accurate mappings given large
numbers of labeled instances for training and validation. Generating adequate
volume of quality labels has emerged as a critical barrier in computational
pathology given the time and effort required from pathologists. In this paper
we describe an approach for engaging crowds of medical students and
pathologists that was used to produce a dataset of over 220,000 annotations of
cell nuclei in breast cancers. We show how suggested annotations generated by a
weak algorithm can improve the accuracy of annotations generated by non-experts
and can yield useful data for training segmentation algorithms without
laborious manual tracing. We systematically examine interrater agreement and
describe modifications to the MaskRCNN model to improve cell mapping. We also
describe a technique we call Decision Tree Approximation of Learned Embeddings
(DTALE) that leverages nucleus segmentations and morphologic features to
improve the transparency of nucleus classification models. The annotation data
produced in this study are freely available for algorithm development and
benchmarking at: https://sites.google.com/view/nucls.
| [
{
"created": "Thu, 18 Feb 2021 01:17:17 GMT",
"version": "v1"
}
] | 2022-07-25 | [
[
"Amgad",
"Mohamed",
""
],
[
"Atteya",
"Lamees A.",
""
],
[
"Hussein",
"Hagar",
""
],
[
"Mohammed",
"Kareem Hosny",
""
],
[
"Hafiz",
"Ehab",
""
],
[
"Elsebaie",
"Maha A. T.",
""
],
[
"Alhusseiny",
"Ahmed M.",
""
],
[
"AlMoslemany",
"Mohamed Atef",
""
],
[
"Elmatboly",
"Abdelmagid M.",
""
],
[
"Pappalardo",
"Philip A.",
""
],
[
"Sakr",
"Rokia Adel",
""
],
[
"Mobadersany",
"Pooya",
""
],
[
"Rachid",
"Ahmad",
""
],
[
"Saad",
"Anas M.",
""
],
[
"Alkashash",
"Ahmad M.",
""
],
[
"Ruhban",
"Inas A.",
""
],
[
"Alrefai",
"Anas",
""
],
[
"Elgazar",
"Nada M.",
""
],
[
"Abdulkarim",
"Ali",
""
],
[
"Farag",
"Abo-Alela",
""
],
[
"Etman",
"Amira",
""
],
[
"Elsaeed",
"Ahmed G.",
""
],
[
"Alagha",
"Yahya",
""
],
[
"Amer",
"Yomna A.",
""
],
[
"Raslan",
"Ahmed M.",
""
],
[
"Nadim",
"Menatalla K.",
""
],
[
"Elsebaie",
"Mai A. T.",
""
],
[
"Ayad",
"Ahmed",
""
],
[
"Hanna",
"Liza E.",
""
],
[
"Gadallah",
"Ahmed",
""
],
[
"Elkady",
"Mohamed",
""
],
[
"Drumheller",
"Bradley",
""
],
[
"Jaye",
"David",
""
],
[
"Manthey",
"David",
""
],
[
"Gutman",
"David A.",
""
],
[
"Elfandy",
"Habiba",
""
],
[
"Cooper",
"Lee A. D.",
""
]
] |
2102.09260 | Milan Straka | Milan Straka, Lucia Piatrikov\'a, Peter van Bokhoven, \v{L}ubo\v{s}
Buzna | A matrix approach to detect temporal behavioral patterns at electric
vehicle charging stations | 8 pages, 5 figures, conference paper | Transportation Research Procedia (2021) | 10.1016/j.trpro.2021.07.186 | null | cs.LG cs.AI cs.CE stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Based on the electric vehicle (EV) arrival times and the duration of EV
connection to the charging station, we identify charging patterns and derive
groups of charging stations with similar charging patterns by applying two
approaches. The rule-based approach derives the charging patterns by
specifying a set of time intervals and a threshold value. In the second
approach, we combine the modified l-p norm (as a matrix dissimilarity measure)
with hierarchical clustering and apply them to automatically identify charging
patterns and groups of charging stations associated with such patterns. A
dataset collected in a large network of public charging stations is used to
test both approaches. Using both methods, we derived charging patterns. The
first, rule-based approach, performed well at deriving predefined patterns and
the latter, hierarchical clustering, showed the capability of delivering
unexpected charging patterns.
| [
{
"created": "Thu, 18 Feb 2021 10:37:32 GMT",
"version": "v1"
}
] | 2022-08-03 | [
[
"Straka",
"Milan",
""
],
[
"Piatriková",
"Lucia",
""
],
[
"van Bokhoven",
"Peter",
""
],
[
"Buzna",
"Ľuboš",
""
]
] |
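A hedged sketch of the matrix-based pipeline described above: per-station weekday-by-hour connection matrices, a plain entrywise p-norm standing in for the modified l-p norm, and hierarchical clustering; all parameter choices are illustrative.

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def occupancy_matrix(arrival_hours, weekdays, durations, bins=24):
    """7 x 24 matrix counting how often a station is occupied per weekday and hour."""
    M = np.zeros((7, bins))
    for h, d, dur in zip(arrival_hours, weekdays, durations):
        for k in range(int(np.ceil(dur))):
            M[d, (h + k) % bins] += 1
    return M / max(len(arrival_hours), 1)

def cluster_stations(matrices, p=2, n_clusters=3):
    """Group stations by pairwise matrix dissimilarity and hierarchical clustering."""
    n = len(matrices)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            D[i, j] = D[j, i] = np.sum(np.abs(matrices[i] - matrices[j]) ** p) ** (1 / p)
    Z = linkage(squareform(D), method="average")
    return fcluster(Z, t=n_clusters, criterion="maxclust")

rng = np.random.default_rng(0)
stations = [occupancy_matrix(rng.integers(0, 24, 200), rng.integers(0, 7, 200),
                             rng.uniform(0.5, 8, 200)) for _ in range(10)]
print(cluster_stations(stations))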
2102.09320 | Daniel Gehrig | Daniel Gehrig, Michelle R\"uegg, Mathias Gehrig, Javier Hidalgo
Carrio, Davide Scaramuzza | Combining Events and Frames using Recurrent Asynchronous Multimodal
Networks for Monocular Depth Prediction | null | IEEE Robotics and Automation Letters (RA-L), 2021 | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Event cameras are novel vision sensors that report per-pixel brightness
changes as a stream of asynchronous "events". They offer significant advantages
compared to standard cameras due to their high temporal resolution, high
dynamic range and lack of motion blur. However, events only measure the varying
component of the visual signal, which limits their ability to encode scene
context. By contrast, standard cameras measure absolute intensity frames, which
capture a much richer representation of the scene. Both sensors are thus
complementary. However, due to the asynchronous nature of events, combining
them with synchronous images remains challenging, especially for learning-based
methods. This is because traditional recurrent neural networks (RNNs) are not
designed for asynchronous and irregular data from additional sensors. To
address this challenge, we introduce Recurrent Asynchronous Multimodal (RAM)
networks, which generalize traditional RNNs to handle asynchronous and
irregular data from multiple sensors. Inspired by traditional RNNs, RAM
networks maintain a hidden state that is updated asynchronously and can be
queried at any time to generate a prediction. We apply this novel architecture
to monocular depth estimation with events and frames where we show an
improvement over state-of-the-art methods by up to 30% in terms of mean
absolute depth error. To enable further research on multimodal learning with
events, we release EventScape, a new dataset with events, intensity frames,
semantic labels, and depth maps recorded in the CARLA simulator.
| [
{
"created": "Thu, 18 Feb 2021 13:24:35 GMT",
"version": "v1"
}
] | 2021-02-19 | [
[
"Gehrig",
"Daniel",
""
],
[
"Rüegg",
"Michelle",
""
],
[
"Gehrig",
"Mathias",
""
],
[
"Carrio",
"Javier Hidalgo",
""
],
[
"Scaramuzza",
"Davide",
""
]
] |
2102.09390 | Al-Akhir Nayan | Al-Akhir Nayan, Ahamad Nokib Mozumder, Joyeta Saha, Khan Raqib Mahmud,
Abul Kalam Al Azad, Muhammad Golam Kibria | A Machine Learning Approach for Early Detection of Fish Diseases by
Analyzing Water Quality | null | TRENDS IN SCIENCES 2021; 18(21): 351 | 10.48048/tis.2021.351 | null | cs.LG cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Early detection of fish diseases and identifying the underlying causes are
crucial for farmers to take necessary steps to mitigate the potential outbreak
and thus to avert financial losses with apparent negative implications to the
national economy. Typically, fish diseases are caused by viruses and bacteria;
according to biochemical studies, the presence of certain bacteria and viruses
may affect the level of pH, DO, BOD, COD, TSS, TDS, EC, PO43-, NO3-N, and NH3-N
in water, resulting in the death of fishes. Besides, natural processes, e.g.,
photosynthesis, respiration, and decomposition, also contribute to the
alteration of water quality that adversely affects fish health. Motivated
by the recent successes of machine learning techniques, a state-of-the-art machine
learning algorithm is adopted in this paper to detect and predict the
degradation of water quality in a timely and accurate manner. Thus, it helps to take
preemptive steps against potential fish diseases. The experimental results show
high accuracy in detecting fish diseases specific to water quality based on the
algorithm with real datasets.
| [
{
"created": "Mon, 15 Feb 2021 18:52:58 GMT",
"version": "v1"
},
{
"created": "Thu, 11 Nov 2021 09:28:54 GMT",
"version": "v2"
}
] | 2021-11-12 | [
[
"Nayan",
"Al-Akhir",
""
],
[
"Mozumder",
"Ahamad Nokib",
""
],
[
"Saha",
"Joyeta",
""
],
[
"Mahmud",
"Khan Raqib",
""
],
[
"Azad",
"Abul Kalam Al",
""
],
[
"Kibria",
"Muhammad Golam",
""
]
] |
2102.09470 | Lovedeep Singh | Lovedeep Singh | Fake News Detection: a comparison between available Deep Learning
techniques in vector space | for citation purpose, use details available on official IEEE Xplore
page: https://doi.org/10.1109/CICT51604.2020.9312099 | 2020 IEEE 4th Conference on Information & Communication Technology
(CICT) | 10.1109/CICT51604.2020.9312099 | null | cs.CL cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Fake News Detection is an essential problem in the field of Natural Language
Processing. The benefits of an effective solution in this area are manifold for
the goodwill of society. On a surface level, it broadly matches the
general problem of text classification. Researchers have proposed various
approaches to tackle fake news using simple as well as some complex techniques.
In this paper, we try to make a comparison between the present Deep Learning
techniques by representing the news instances in some vector space using a
combination of common mathematical operations with available vector space
representations. We do a number of experiments using various combinations and
permutations. Finally, we conclude with a sound analysis of the results and
evaluate the reasons for such results.
| [
{
"created": "Thu, 18 Feb 2021 16:42:28 GMT",
"version": "v1"
}
] | 2021-02-19 | [
[
"Singh",
"Lovedeep",
""
]
] |
2102.09553 | Arlene Casey J | Arlene Casey, Emma Davidson, Michael Poon, Hang Dong, Daniel Duma,
Andreas Grivas, Claire Grover, V\'ictor Su\'arez-Paniagua, Richard Tobin,
William Whiteley, Honghan Wu, Beatrice Alex | A Systematic Review of Natural Language Processing Applied to Radiology
Reports | null | BMC Medical Informatics and Decision Making 2021 | 10.1186/s12911-021-01533-7 | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | NLP has a significant role in advancing healthcare and has been found to be
key in extracting structured information from radiology reports. Understanding
recent developments in NLP application to radiology is of significance but
recent reviews on this are limited. This study systematically assesses recent
literature in NLP applied to radiology reports. Our automated literature search
yields 4,799 results using automated filtering, metadata enriching steps and
citation search combined with manual review. Our analysis is based on 21
variables including radiology characteristics, NLP methodology, performance,
study, and clinical application characteristics. We present a comprehensive
analysis of the 164 publications retrieved with each categorised into one of 6
clinical application categories. Deep learning use increases but conventional
machine learning approaches are still prevalent. Deep learning remains
challenged when data is scarce and there is little evidence of adoption into
clinical practice. Despite 17% of studies reporting greater than 0.85 F1
scores, it is hard to comparatively evaluate these approaches given that most
of them use different datasets. Only 14 studies made their data and 15 their
code available with 10 externally validating results. Automated understanding
of clinical narratives of the radiology reports has the potential to enhance
the healthcare process but reproducibility and explainability of models are
important if the domain is to move applications into clinical use. More could
be done to share code enabling validation of methods on different institutional
data and to reduce heterogeneity in reporting of study properties allowing
inter-study comparisons. Our results have significance for researchers
providing a systematic synthesis of existing work to build on, identify gaps,
opportunities for collaboration and avoid duplication.
| [
{
"created": "Thu, 18 Feb 2021 18:54:41 GMT",
"version": "v1"
}
] | 2021-06-10 | [
[
"Casey",
"Arlene",
""
],
[
"Davidson",
"Emma",
""
],
[
"Poon",
"Michael",
""
],
[
"Dong",
"Hang",
""
],
[
"Duma",
"Daniel",
""
],
[
"Grivas",
"Andreas",
""
],
[
"Grover",
"Claire",
""
],
[
"Suárez-Paniagua",
"Víctor",
""
],
[
"Tobin",
"Richard",
""
],
[
"Whiteley",
"William",
""
],
[
"Wu",
"Honghan",
""
],
[
"Alex",
"Beatrice",
""
]
] |
2102.09635 | Bibek Paudel | Bibek Paudel, Abraham Bernstein | Random Walks with Erasure: Diversifying Personalized Recommendations on
Social and Information Networks | Web Conference 2021 (WWW '21) | Proceedings of the Web Conference 2021 (WWW '21), April 19--23,
2021, Ljubljana, Slovenia | 10.1145/3442381.3449970 | null | cs.SI cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Most existing personalization systems promote items that match a user's
previous choices or those that are popular among similar users. This results in
recommendations that are highly similar to the ones users are already exposed
to, isolating them inside familiar but insulated information
silos. In this context, we develop a novel recommendation framework with a goal
of improving information diversity using a modified random walk exploration of
the user-item graph. We focus on the problem of political content
recommendation, while addressing a general problem applicable to
personalization tasks in other social and information networks.
For recommending political content on social networks, we first propose a new
model to estimate the ideological positions for both users and the content they
share, which is able to recover ideological positions with high accuracy. Based
on these estimated positions, we generate diversified personalized
recommendations using our new random-walk based recommendation algorithm. With
experimental evaluations on large datasets of Twitter discussions, we show that
our method based on \emph{random walks with erasure} is able to generate more
ideologically diverse recommendations. Our approach does not depend on the
availability of labels regarding the bias of users or content producers. With
experiments on open benchmark datasets from other social and information
networks, we also demonstrate the effectiveness of our method in recommending
diverse long-tail items.
| [
{
"created": "Thu, 18 Feb 2021 21:53:32 GMT",
"version": "v1"
},
{
"created": "Tue, 23 Feb 2021 17:23:16 GMT",
"version": "v2"
},
{
"created": "Thu, 25 Feb 2021 17:20:52 GMT",
"version": "v3"
}
] | 2021-02-26 | [
[
"Paudel",
"Bibek",
""
],
[
"Bernstein",
"Abraham",
""
]
] |
2102.09680 | Mario Campos Soberanis | Diego Campos-Sobrino, Mario Campos-Soberanis, Iv\'an Mart\'inez-Chin,
V\'ictor Uc-Cetina | Fixing Errors of the Google Voice Recognizer through Phonetic Distance
Metrics | 13 pages, 4 figures. This article is a translation of the paper
"Correcci\'on de errores del reconocedor de voz de Google usando m\'etricas
de distancia fon\'etica" presented in COMIA 2018 | Research in Computing Science 148(1), 2019, pp. 57-70. ISSN
1870-4069 | null | null | cs.CL eess.AS | http://creativecommons.org/licenses/by/4.0/ | Speech recognition systems for the Spanish language, such as Google's,
produce errors quite frequently when used in applications of a specific domain.
These errors mostly occur when recognizing words new to the recognizer's
language model or ad hoc to the domain. This article presents an algorithm that
uses Levenshtein distance on phonemes to reduce the speech recognizer's errors.
The preliminary results show that it is possible to correct the recognizer's
errors significantly by using this metric together with a dictionary of specific
phrases from the domain of the application. Despite being designed for
particular domains, the algorithm proposed here is of general application. The
phrases that must be recognized can be explicitly defined for each application,
without the algorithm having to be modified. It is enough to indicate to the
algorithm the set of sentences on which it must work. The algorithm's
complexity is $O(tn)$, where $t$ is the number of words in the transcript to be
corrected, and $n$ is the number of phrases specific to the domain.
| [
{
"created": "Thu, 18 Feb 2021 23:54:59 GMT",
"version": "v1"
}
] | 2021-02-22 | [
[
"Campos-Sobrino",
"Diego",
""
],
[
"Campos-Soberanis",
"Mario",
""
],
[
"Martínez-Chin",
"Iván",
""
],
[
"Uc-Cetina",
"Víctor",
""
]
] |
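A minimal sketch of the kind of correction described in the abstract above (2102.09680): a transcript is compared against a dictionary of domain phrases using Levenshtein distance over phoneme sequences, and the closest phrase is returned when it is close enough. The grapheme-to-phoneme rules and the relative-distance threshold below are stand-in assumptions; the paper's actual phonetic transcription rules for Spanish are not reproduced.

```python
# Illustrative sketch: correcting a recognizer transcript by matching it against
# domain phrases with Levenshtein distance on (stand-in) phoneme sequences.
def levenshtein(a, b):
    """Classic dynamic-programming edit distance between two sequences."""
    dp = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, y in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (x != y))
    return dp[-1]

def to_phonemes(text):
    """Very rough stand-in for a Spanish grapheme-to-phoneme conversion."""
    rules = {"qu": "k", "ll": "y", "h": "", "v": "b", "z": "s"}
    s = text.lower()
    for src, dst in rules.items():
        s = s.replace(src, dst)
    return [c for c in s if c.isalpha()]

def correct(transcript, domain_phrases, max_relative_distance=0.4):
    """Return the closest domain phrase if it is phonetically close enough."""
    t = to_phonemes(transcript)
    best, best_d = None, float("inf")
    for phrase in domain_phrases:
        d = levenshtein(t, to_phonemes(phrase))
        if d < best_d:
            best, best_d = phrase, d
    if best is not None and best_d / max(len(t), 1) <= max_relative_distance:
        return best
    return transcript

# Example: a domain phrase recovered from a slightly mis-recognized transcript.
print(correct("abre la balbula", ["abre la válvula", "cierra la puerta"]))
```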
2102.09761 | Tom Hope | Tom Hope, Ronen Tamari, Hyeonsu Kang, Daniel Hershcovich, Joel Chan,
Aniket Kittur, Dafna Shahaf | Scaling Creative Inspiration with Fine-Grained Functional Aspects of
Ideas | To appear in CHI 2022 | CHI 2022 | null | null | cs.HC cs.AI cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large repositories of products, patents and scientific papers offer an
opportunity for building systems that scour millions of ideas and help users
discover inspirations. However, idea descriptions are typically in the form of
unstructured text, lacking key structure that is required for supporting
creative innovation interactions. Prior work has explored idea representations
that were either limited in expressivity, required significant manual effort
from users, or dependent on curated knowledge bases with poor coverage. We
explore a novel representation that automatically breaks up products into
fine-grained functional aspects capturing the purposes and mechanisms of ideas,
and use it to support important creative innovation interactions: functional
search for ideas, and exploration of the design space around a focal problem by
viewing related problem perspectives pooled from across many products. In user
studies, our approach boosts the quality of creative search and inspirations,
substantially outperforming strong baselines by 50-60%.
| [
{
"created": "Fri, 19 Feb 2021 06:30:41 GMT",
"version": "v1"
},
{
"created": "Fri, 10 Sep 2021 13:40:02 GMT",
"version": "v2"
},
{
"created": "Thu, 17 Feb 2022 11:56:05 GMT",
"version": "v3"
}
] | 2022-02-18 | [
[
"Hope",
"Tom",
""
],
[
"Tamari",
"Ronen",
""
],
[
"Kang",
"Hyeonsu",
""
],
[
"Hershcovich",
"Daniel",
""
],
[
"Chan",
"Joel",
""
],
[
"Kittur",
"Aniket",
""
],
[
"Shahaf",
"Dafna",
""
]
] |
2102.09854 | Sao Mai Nguyen | Nicolas Duminy (Lab-STICC), Sao Mai Nguyen (U2IS), Junshuai Zhu (IMT
Atlantique), Dominique Duhaut (UBS), Jerome Kerdreux (Lab-STICC) | Intrinsically Motivated Open-Ended Multi-Task Learning Using Transfer
Learning to Discover Task Hierarchy | null | Applied Sciences, MDPI, 2021, 11 (3), pp.975 | 10.3390/app11030975 | null | cs.AI cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In open-ended continuous environments, robots need to learn multiple
parameterised control tasks in hierarchical reinforcement learning. We
hypothesise that the most complex tasks can be learned more easily by
transferring knowledge from simpler tasks, and faster by adapting the
complexity of the actions to the task. We propose a task-oriented
representation of complex actions, called procedures, to learn online task
relationships and unbounded sequences of action primitives to control the
different observables of the environment. Combining both goal-babbling with
imitation learning, and active learning with transfer of knowledge based on
intrinsic motivation, our algorithm self-organises its learning process. It
chooses at any given time a task to focus on; and what, how, when and from whom
to transfer knowledge. We show with a simulation and a real industrial robot
arm, in cross-task and cross-learner transfer settings, that task composition
is key to tackle highly complex tasks. Task decomposition is also efficiently
transferred across different embodied learners and by active imitation, where
the robot requests just a small amount of demonstrations and the adequate type
of information. The robot learns and exploits task dependencies so as to learn
tasks of every complexity.
| [
{
"created": "Fri, 19 Feb 2021 10:44:08 GMT",
"version": "v1"
}
] | 2021-02-22 | [
[
"Duminy",
"Nicolas",
"",
"Lab-STICC"
],
[
"Nguyen",
"Sao Mai",
"",
"U2IS"
],
[
"Zhu",
"Junshuai",
"",
"IMT\n Atlantique"
],
[
"Duhaut",
"Dominique",
"",
"UBS"
],
[
"Kerdreux",
"Jerome",
"",
"Lab-STICC"
]
] |
2102.09965 | Mahieddine Djoudi | Hichem Rahab, Abdelhafid Zitouni, Mahieddine Djoudi (TECHN\'E - EA
6316) | An Enhanced Corpus for Arabic Newspapers Comments | arXiv admin note: substantial text overlap with arXiv:2006.00459 | International Arab Journal of Information Technology, Colleges of
Computing and Information Society (CCIS), 2020, 17 (5), pp.789-798 | 10.34028/iajit/17/5/12 | null | cs.IR cs.CL cs.MA | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose our enhanced approach to create a dedicated corpus
for Algerian Arabic newspaper comments. The developed approach enhances
an existing approach through the enrichment of the available corpus and the
inclusion of an annotation step following the Model Annotate Train Test
Evaluate Revise (MATTER) approach. A corpus is created by collecting comments
from the web sites of three well-known Algerian newspapers. Three classifiers,
support vector machines, na{\"i}ve Bayes, and k-nearest neighbors, were used
to classify comments into positive and negative classes. To identify
the influence of stemming on the obtained results, the classification was
tested with and without stemming. The obtained results show that stemming does not
considerably enhance the classification, due to the nature of Algerian comments,
which are tied to the Algerian Arabic dialect. The promising results motivate us
to improve our approach, particularly in dealing with non-Arabic sentences,
especially dialectal and French ones.
| [
{
"created": "Mon, 8 Feb 2021 10:15:44 GMT",
"version": "v1"
}
] | 2021-02-22 | [
[
"Rahab",
"Hichem",
"",
"TECHNÉ - EA\n 6316"
],
[
"Zitouni",
"Abdelhafid",
"",
"TECHNÉ - EA\n 6316"
],
[
"Djoudi",
"Mahieddine",
"",
"TECHNÉ - EA\n 6316"
]
] |
2102.10015 | Daniel Larsson | Daniel T. Larsson, Dipankar Maity, Panagiotis Tsiotras | Information-Theoretic Abstractions for Resource-Constrained Agents via
Mixed-Integer Linear Programming | null | 2021 Proceedings of the Workshop on Computation-Aware Algorithmic
Design for Cyber-Physical Systems | 10.1145/3457335.3461704 | null | cs.RO cs.AI cs.IT math.IT stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, a mixed-integer linear programming formulation for the problem
of obtaining task-relevant, multi-resolution, graph abstractions for
resource-constrained agents is presented. The formulation leverages concepts
from information-theoretic signal compression, specifically the information
bottleneck (IB) method, to pose a graph abstraction problem as an optimal
encoder search over the space of multi-resolution trees. The abstractions
emerge in a task-relevant manner as a function of agent information-processing
constraints, and are not provided to the system a priori. We detail our
formulation and show how the problem can be realized as an integer linear
program. A non-trivial numerical example is presented to demonstrate the
utility of employing our approach to obtain hierarchical tree abstractions for
resource-limited agents.
| [
{
"created": "Fri, 19 Feb 2021 16:34:47 GMT",
"version": "v1"
}
] | 2021-07-01 | [
[
"Larsson",
"Daniel T.",
""
],
[
"Maity",
"Dipankar",
""
],
[
"Tsiotras",
"Panagiotis",
""
]
] |
2102.10033 | Ting-Yao Hu | Ting-Yao Hu, Alexander G. Hauptmann | Pose Guided Person Image Generation with Hidden p-Norm Regression | null | ICIP 2021 | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | In this paper, we propose a novel approach to solve the pose guided person
image generation task. We assume that the relation between pose and appearance
information can be described by a simple matrix operation in hidden space.
Based on this assumption, our method estimates a pose-invariant feature matrix
for each identity, and uses it to predict the target appearance conditioned on
the target pose. The estimation process is formulated as a p-norm regression
problem in hidden space. By utilizing the differentiation of the solution of
this regression problem, the parameters of the whole framework can be trained
in an end-to-end manner. While most previous works are only applicable to the
supervised training and single-shot generation scenario, our method can be
easily adapted to unsupervised training and multi-shot generation. Extensive
experiments on the challenging Market-1501 dataset show that our method yields
competitive performance in all the aforementioned variant scenarios.
| [
{
"created": "Fri, 19 Feb 2021 17:03:54 GMT",
"version": "v1"
}
] | 2021-05-27 | [
[
"Hu",
"Ting-Yao",
""
],
[
"Hauptmann",
"Alexander G.",
""
]
] |
2102.10050 | David Leslie | David Leslie | The Arc of the Data Scientific Universe | 43 pages | Harvard Data Science Review (Winter 2021) | 10.1162/99608f92.938a18d7 | null | physics.hist-ph cs.AI cs.LG stat.AP | http://creativecommons.org/licenses/by/4.0/ | In this paper I explore the scaffolding of normative assumptions that
supports Sabina Leonelli's implicit appeal to the values of epistemic integrity
and the global public good that conjointly animate the ethos of responsible and
sustainable data work in the context of COVID-19. Drawing primarily on the
writings of sociologist Robert K. Merton, the thinkers of the Vienna Circle,
and Charles Sanders Peirce, I make some of these assumptions explicit by
telling a longer story about the evolution of social thinking about the
normative structure of science from Merton's articulation of his well-known
norms (those of universalism, communism, organized skepticism, and
disinterestedness) to the present. I show that while Merton's norms and his
intertwinement of these with the underlying mechanisms of democratic order
provide us with an especially good starting point to explore and clarify the
commitments and values of science, Leonelli's broader, more context-responsive,
and more holistic vision of the epistemic integrity of data scientific
understanding, and her discernment of the global and biospheric scope of its
moral-practical reach, move beyond Merton's schema in ways that effectively
draw upon important critiques. Stepping past Merton, I argue that a combination
of situated universalism, methodological pluralism, strong objectivity, and
unbounded communalism must guide the responsible and sustainable data work of
the future.
| [
{
"created": "Sat, 6 Feb 2021 13:29:58 GMT",
"version": "v1"
}
] | 2021-02-22 | [
[
"Leslie",
"David",
""
]
] |
2102.10055 | Jindong Gu | Jindong Gu, Baoyuan Wu, Volker Tresp | Effective and Efficient Vote Attack on Capsule Networks | null | International Conference on Learning Representations (ICLR), 2021 | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Standard Convolutional Neural Networks (CNNs) can be easily fooled by images
with small quasi-imperceptible artificial perturbations. As alternatives to
CNNs, the recently proposed Capsule Networks (CapsNets) are shown to be more
robust to white-box attacks than CNNs under popular attack protocols. Besides,
the class-conditional reconstruction part of CapsNets is also used to detect
adversarial examples. In this work, we investigate the adversarial robustness
of CapsNets, especially how the inner workings of CapsNets change when the
output capsules are attacked. The first observation is that adversarial
examples mislead CapsNets by manipulating the votes from primary capsules.
Another observation is the high computational cost, when we directly apply
multi-step attack methods designed for CNNs to attack CapsNets, due to the
computationally expensive routing mechanism. Motivated by these two
observations, we propose a novel vote attack where we attack votes of CapsNets
directly. Our vote attack is not only effective but also efficient by
circumventing the routing process. Furthermore, we integrate our vote attack
into the detection-aware attack paradigm, which can successfully bypass the
class-conditional reconstruction based detection method. Extensive experiments
demonstrate the superior attack performance of our vote attack on CapsNets.
| [
{
"created": "Fri, 19 Feb 2021 17:35:07 GMT",
"version": "v1"
}
] | 2021-02-22 | [
[
"Gu",
"Jindong",
""
],
[
"Wu",
"Baoyuan",
""
],
[
"Tresp",
"Volker",
""
]
] |
2102.10062 | Stephen Bonner | Stephen Bonner and Ian P Barrett and Cheng Ye and Rowan Swiers and Ola
Engkvist and Andreas Bender and Charles Tapley Hoyt and William L Hamilton | A Review of Biomedical Datasets Relating to Drug Discovery: A Knowledge
Graph Perspective | null | Briefings in Bioinformatics, 2022 | 10.1093/bib/bbac404 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Drug discovery and development is a complex and costly process. Machine
learning approaches are being investigated to help improve the effectiveness
and speed of multiple stages of the drug discovery pipeline. Of these, those
that use Knowledge Graphs (KG) have promise in many tasks, including drug
repurposing, drug toxicity prediction and target gene-disease prioritisation.
In a drug discovery KG, crucial elements including genes, diseases and drugs
are represented as entities, whilst relationships between them indicate an
interaction. However, to construct high-quality KGs, suitable data is required.
In this review, we detail publicly available sources suitable for use in
constructing drug discovery focused KGs. We aim to help guide machine learning
and KG practitioners who are interested in applying new techniques to the drug
discovery field, but who may be unfamiliar with the relevant data sources. The
datasets are selected via strict criteria, categorised according to the primary
type of information contained within and are considered based upon what
information could be extracted to build a KG. We then present a comparative
analysis of existing public drug discovery KGs and an evaluation of selected
motivating case studies from the literature. Additionally, we raise numerous
and unique challenges and issues associated with the domain and its datasets,
whilst also highlighting key future research directions. We hope this review
will motivate the use of KGs in solving key and emerging questions in the drug
discovery domain.
| [
{
"created": "Fri, 19 Feb 2021 17:49:38 GMT",
"version": "v1"
},
{
"created": "Fri, 26 Feb 2021 15:26:09 GMT",
"version": "v2"
},
{
"created": "Thu, 1 Apr 2021 10:28:50 GMT",
"version": "v3"
},
{
"created": "Fri, 26 Nov 2021 10:56:59 GMT",
"version": "v4"
}
] | 2022-09-27 | [
[
"Bonner",
"Stephen",
""
],
[
"Barrett",
"Ian P",
""
],
[
"Ye",
"Cheng",
""
],
[
"Swiers",
"Rowan",
""
],
[
"Engkvist",
"Ola",
""
],
[
"Bender",
"Andreas",
""
],
[
"Hoyt",
"Charles Tapley",
""
],
[
"Hamilton",
"William L",
""
]
] |
2102.10243 | Thuy Vu | Thuy Vu and Alessandro Moschitti | Machine Translation Customization via Automatic Training Data Selection
from the Web | null | ECIR 2021 | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Machine translation (MT) systems, especially when designed for an industrial
setting, are trained with general parallel data derived from the Web. Thus,
their style is typically driven by word/structure distribution coming from the
average of many domains. In contrast, MT customers want translations to be
specialized to their domain, for which they are typically able to provide text
samples. We describe an approach for customizing MT systems on specific domains
by selecting data similar to the target customer data to train neural
translation models. We build document classifiers using monolingual target
data, e.g., provided by the customers to select parallel training data from Web
crawled data. Finally, we train MT models on our automatically selected data,
obtaining a system specialized to the target domain. We tested our approach on
the benchmark from WMT-18 Translation Task for News domains enabling
comparisons with state-of-the-art MT systems. The results show that our models
outperform the top systems while using less data and smaller models.
| [
{
"created": "Sat, 20 Feb 2021 03:29:41 GMT",
"version": "v1"
}
] | 2021-02-23 | [
[
"Vu",
"Thuy",
""
],
[
"Moschitti",
"Alessandro",
""
]
] |
2102.10246 | Thuy Vu | Thuy Vu and Alessandro Moschitti | CDA: a Cost Efficient Content-based Multilingual Web Document Aligner | null | EACL 2021 | null | null | cs.CL cs.AI cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce a Content-based Document Alignment approach (CDA), an efficient
method to align multilingual web documents based on content for creating
parallel training data for machine translation (MT) systems operating at the
industrial level. CDA works in two steps: (i) projecting documents of a web
domain to a shared multilingual space; then (ii) aligning them based on the
similarity of their representations in such space. We leverage lexical
translation models to build vector representations using TF-IDF. CDA achieves
performance comparable with state-of-the-art systems in the WMT-16 Bilingual
Document Alignment Shared Task benchmark while operating in multilingual space.
Besides, we created two web-scale datasets to examine the robustness of CDA in
an industrial setting involving up to 28 languages and millions of documents.
The experiments show that CDA is robust, cost-effective, and is significantly
superior in (i) processing large and noisy web data and (ii) scaling to new and
low-resourced languages.
| [
{
"created": "Sat, 20 Feb 2021 03:37:23 GMT",
"version": "v1"
}
] | 2021-02-23 | [
[
"Vu",
"Thuy",
""
],
[
"Moschitti",
"Alessandro",
""
]
] |
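The general idea behind the content-based alignment described in the abstract above (2102.10246) — projecting documents into a shared space and pairing them by similarity — can be sketched as follows. The sketch assumes source documents have already been word-translated into the target language (the paper uses lexical translation models for that step); the greedy one-to-one matching and the similarity threshold are placeholders, not the authors' exact procedure.

```python
# Illustrative sketch: align documents from two languages of one web domain by
# TF-IDF cosine similarity in a shared (translated) vocabulary space.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def align_documents(src_docs_translated, tgt_docs, min_similarity=0.1):
    """Greedy one-to-one alignment between translated source docs and target docs."""
    vectorizer = TfidfVectorizer()
    vectors = vectorizer.fit_transform(src_docs_translated + tgt_docs)
    src_vecs = vectors[: len(src_docs_translated)]
    tgt_vecs = vectors[len(src_docs_translated):]
    sims = cosine_similarity(src_vecs, tgt_vecs)

    # Consider candidate pairs from the most to the least similar.
    candidates = sorted(
        ((sims[i, j], i, j) for i in range(sims.shape[0]) for j in range(sims.shape[1])),
        reverse=True,
    )
    pairs, used_src, used_tgt = [], set(), set()
    for s, i, j in candidates:
        if s < min_similarity:
            break
        if i in used_src or j in used_tgt:
            continue
        pairs.append((i, j, s))
        used_src.add(i)
        used_tgt.add(j)
    return pairs
```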
2102.10274 | Deng-Ping Fan | Deng-Ping Fan, Ge-Peng Ji, Ming-Ming Cheng, Ling Shao | Concealed Object Detection | 17 pages, 27 figures, Code: https://github.com/GewelsJI/SINet-V2 | IEEE transactions on pattern analysis and machine intelligence,
2022, 44(10): 6024-6042 | 10.1109/TPAMI.2021.3085766 | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | We present the first systematic study on concealed object detection (COD),
which aims to identify objects that are "perfectly" embedded in their
background. The high intrinsic similarities between the concealed objects and
their background make COD far more challenging than traditional object
detection/segmentation. To better understand this task, we collect a
large-scale dataset, called COD10K, which consists of 10,000 images covering
concealed objects in diverse real-world scenarios from 78 object categories.
Further, we provide rich annotations including object categories, object
boundaries, challenging attributes, object-level labels, and instance-level
annotations. Our COD10K is the largest COD dataset to date, with the richest
annotations, which enables comprehensive concealed object understanding and can
even be used to help progress several other vision tasks, such as detection,
segmentation, classification, etc. Motivated by how animals hunt in the wild,
we also design a simple but strong baseline for COD, termed the Search
Identification Network (SINet). Without any bells and whistles, SINet
outperforms 12 cutting-edge baselines on all datasets tested, making them
robust, general architectures that could serve as catalysts for future research
in COD. Finally, we provide some interesting findings and highlight several
potential applications and future directions. To spark research in this new
field, our code, dataset, and online demo are available on our project page:
http://mmcheng.net/cod.
| [
{
"created": "Sat, 20 Feb 2021 06:49:53 GMT",
"version": "v1"
},
{
"created": "Thu, 10 Jun 2021 05:36:12 GMT",
"version": "v2"
}
] | 2024-02-21 | [
[
"Fan",
"Deng-Ping",
""
],
[
"Ji",
"Ge-Peng",
""
],
[
"Cheng",
"Ming-Ming",
""
],
[
"Shao",
"Ling",
""
]
] |
2102.10290 | Luca Lugini | Luca Lugini, Diane Litman | Contextual Argument Component Classification for Class Discussions | null | In Proceedings of the 28th International Conference on
Computational Linguistics, pp. 1475-1480. 2020 | 10.18653/v1/2020.coling-main.128 | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Argument mining systems often consider contextual information, i.e.
information outside of an argumentative discourse unit, when trained to
accomplish tasks such as argument component identification, classification, and
relation extraction. However, prior work has not carefully analyzed the utility
of different contextual properties in context-aware models. In this work, we
show how two different types of contextual information, local discourse context
and speaker context, can be incorporated into a computational model for
classifying argument components in multi-party classroom discussions. We find
that both context types can improve performance, although the improvements are
dependent on context size and position.
| [
{
"created": "Sat, 20 Feb 2021 08:48:07 GMT",
"version": "v1"
}
] | 2021-02-23 | [
[
"Lugini",
"Luca",
""
],
[
"Litman",
"Diane",
""
]
] |
2102.10293 | Luca Lugini | Luca Lugini, Christopher Olshefski, Ravneet Singh, Diane Litman,
Amanda Godley | Discussion Tracker: Supporting Teacher Learning about Students'
Collaborative Argumentation in High School Classrooms | null | "Discussion Tracker: Supporting Teacher Learning about Students'
Collaborative Argumentation in High School Classrooms." In Proceedings of the
28th International Conference on Computational Linguistics: System
Demonstrations, 2020 | 10.18653/v1/2020.coling-demos.10 | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Teaching collaborative argumentation is an advanced skill that many K-12
teachers struggle to develop. To address this, we have developed Discussion
Tracker, a classroom discussion analytics system based on novel algorithms for
classifying argument moves, specificity, and collaboration. Results from a
classroom deployment indicate that teachers found the analytics useful, and
that the underlying classifiers perform with moderate to substantial agreement
with humans.
| [
{
"created": "Sat, 20 Feb 2021 09:06:57 GMT",
"version": "v1"
}
] | 2021-02-23 | [
[
"Lugini",
"Luca",
""
],
[
"Olshefski",
"Christopher",
""
],
[
"Singh",
"Ravneet",
""
],
[
"Litman",
"Diane",
""
],
[
"Godley",
"Amanda",
""
]
] |
2102.10296 | Seyedali Meghdadi Mr | Seyedali Meghdadi, Guido Tack, Ariel Liebman, Nicolas Langren\'e,
Christoph Bergmeir | Versatile and Robust Transient Stability Assessment via Instance
Transfer Learning | Accepted at the 2021 IEEE PES General Meeting, July 25-29 2020,
Washington, DC, USA | 2021 IEEE Power & Energy Society General Meeting (PESGM) 1-5 | 10.1109/PESGM46819.2021.9638195 | null | eess.SY cs.AI cs.SY | http://creativecommons.org/licenses/by/4.0/ | To support N-1 pre-fault transient stability assessment, this paper
introduces a new data collection method in a data-driven algorithm
incorporating the knowledge of power system dynamics. The domain knowledge on
how the disturbance effect will propagate from the fault location to the rest
of the network is leveraged to recognise the dominant conditions that determine
the stability of a system. Accordingly, we introduce a new concept called
Fault-Affected Area, which provides crucial information regarding the unstable
region of operation. This information is embedded in an augmented dataset to
train an ensemble model using an instance transfer learning framework. The test
results on the IEEE 39-bus system verify that this model can accurately predict
the stability of previously unseen operational scenarios while reducing the
risk of false prediction of unstable instances compared to standard approaches.
| [
{
"created": "Sat, 20 Feb 2021 09:10:29 GMT",
"version": "v1"
}
] | 2022-03-08 | [
[
"Meghdadi",
"Seyedali",
""
],
[
"Tack",
"Guido",
""
],
[
"Liebman",
"Ariel",
""
],
[
"Langrené",
"Nicolas",
""
],
[
"Bergmeir",
"Christoph",
""
]
] |
2102.10338 | Haimin Zhang | Haimin Zhang, Min Xu, Guoqiang Zhang, and Kenta Niwa | SSFG: Stochastically Scaling Features and Gradients for Regularizing
Graph Convolutional Networks | null | IEEE Transactions on Neural Networks and Learning Systems, 2022 | 10.1109/TNNLS.2022.3188888 | null | cs.LG cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Graph convolutional networks have been successfully applied in various
graph-based tasks. In a typical graph convolutional layer, node features are
updated by aggregating neighborhood information. Repeatedly applying graph
convolutions can cause the oversmoothing issue, i.e., node features at deep
layers converge to similar values. Previous studies have suggested that
oversmoothing is one of the major issues that restrict the performance of graph
convolutional networks. In this paper, we propose a stochastic regularization
method to tackle the oversmoothing problem. In the proposed method, we
stochastically scale features and gradients (SSFG) by a factor sampled from a
probability distribution in the training procedure. By explicitly applying a
scaling factor to break feature convergence, the oversmoothing issue is
alleviated. We show that applying stochastic scaling at the gradient level is
complementary to that applied at the feature level to improve the overall
performance. Our method does not increase the number of trainable parameters.
When used together with ReLU, our SSFG can be seen as a stochastic ReLU
activation function. We experimentally validate our SSFG regularization method
on three commonly used types of graph networks. Extensive experimental results
on seven benchmark datasets for four graph-based tasks demonstrate that our
SSFG regularization is effective in improving the overall performance of the
baseline graph networks.
| [
{
"created": "Sat, 20 Feb 2021 12:59:48 GMT",
"version": "v1"
},
{
"created": "Tue, 30 Mar 2021 12:23:22 GMT",
"version": "v2"
}
] | 2022-07-11 | [
[
"Zhang",
"Haimin",
""
],
[
"Xu",
"Min",
""
],
[
"Zhang",
"Guoqiang",
""
],
[
"Niwa",
"Kenta",
""
]
] |
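The abstract above (2102.10338) describes stochastically scaling features and gradients by factors sampled from a probability distribution during training. A minimal PyTorch sketch of that idea follows; the choice of distribution (factors drawn uniformly around 1) and where the module is placed in a graph network are assumptions, not the paper's exact configuration.

```python
# Illustrative sketch: scale features in the forward pass and gradients in the
# backward pass by independent random factors (only during training).
import torch

class _StochasticScale(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, feat_scale, grad_scale):
        ctx.grad_scale = grad_scale
        return x * feat_scale

    @staticmethod
    def backward(ctx, grad_output):
        # Gradients are scaled by their own random factor.
        return grad_output * ctx.grad_scale, None, None

class SSFG(torch.nn.Module):
    """Stochastically Scaling Features and Gradients (sketch, assumed distribution)."""
    def __init__(self, max_deviation=0.5):
        super().__init__()
        self.max_deviation = max_deviation

    def forward(self, x):
        if not self.training or self.max_deviation == 0.0:
            return x
        # Factors sampled around 1, so the layer is the identity in expectation.
        feat_scale = 1.0 + self.max_deviation * (2 * torch.rand(1, device=x.device) - 1)
        grad_scale = 1.0 + self.max_deviation * (2 * torch.rand(1, device=x.device) - 1)
        return _StochasticScale.apply(x, feat_scale, grad_scale)

# Usage: insert after a graph convolution's activation, e.g. x = SSFG()(relu(gcn(x))).
```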
2102.10446 | Andrei Iantsen | Andrei Iantsen, Dimitris Visvikis, Mathieu Hatt | Squeeze-and-Excitation Normalization for Automated Delineation of Head
and Neck Primary Tumors in Combined PET and CT Images | 7 pages, 2 figures, 2 tables | In: Andrearczyk V., Oreiller V., Depeursinge A. (eds) Head and
Neck Tumor Segmentation. HECKTOR 2020. Lecture Notes in Computer Science, vol
12603. Springer, Cham | 10.1007/978-3-030-67194-5_4 | null | eess.IV cs.CV | http://creativecommons.org/licenses/by/4.0/ | Development of robust and accurate fully automated methods for medical image
segmentation is crucial in clinical practice and radiomics studies. In this
work, we contributed an automated approach for Head and Neck (H&N) primary
tumor segmentation in combined positron emission tomography / computed
tomography (PET/CT) images in the context of the MICCAI 2020 Head and Neck
Tumor segmentation challenge (HECKTOR). Our model was designed on the U-Net
architecture with residual layers and supplemented with Squeeze-and-Excitation
Normalization. The described method achieved competitive results in
cross-validation (DSC 0.745, precision 0.760, recall 0.789) performed on
different centers, as well as on the test set (DSC 0.759, precision 0.833,
recall 0.740) that allowed us to win first prize in the HECKTOR challenge among
21 participating teams. The full implementation based on PyTorch and the
trained models are available at https://github.com/iantsen/hecktor
| [
{
"created": "Sat, 20 Feb 2021 21:06:59 GMT",
"version": "v1"
}
] | 2021-02-23 | [
[
"Iantsen",
"Andrei",
""
],
[
"Visvikis",
"Dimitris",
""
],
[
"Hatt",
"Mathieu",
""
]
] |
2102.10447 | M\'onika Farsang | M\'onika Farsang and Luca Szegletes | Importance of Environment Design in Reinforcement Learning: A Study of a
Robotic Environment | null | Proceedings of the Automation and Applied Computer Science
Workshop 2021 | null | null | cs.LG cs.AI cs.RO | http://creativecommons.org/licenses/by/4.0/ | An in-depth understanding of the particular environment is crucial in
reinforcement learning (RL). To address this challenge, the decision-making
process of a mobile collaborative robotic assistant modeled by the Markov
decision process (MDP) framework is studied in this paper. The optimal
state-action combinations of the MDP are calculated with the non-linear Bellman
optimality equations. This system of equations can be solved with relative ease
by the computational power of Wolfram Mathematica, where the obtained optimal
action-values point to the optimal policy. Unlike other RL algorithms, this
methodology does not approximate the optimal behavior, it gives the exact,
explicit solution, which provides a strong foundation for our study. With this,
we offer new insights into understanding the action selection mechanisms in RL
by presenting various small modifications on the very same schema that lead to
different optimal policies.
| [
{
"created": "Sat, 20 Feb 2021 21:14:09 GMT",
"version": "v1"
},
{
"created": "Mon, 28 Jun 2021 15:22:10 GMT",
"version": "v2"
}
] | 2021-06-29 | [
[
"Farsang",
"Mónika",
""
],
[
"Szegletes",
"Luca",
""
]
] |
2102.10456 | M\'onika Farsang | M\'onika Farsang and Luca Szegletes | Decaying Clipping Range in Proximal Policy Optimization | null | 2021 IEEE 15th International Symposium on Applied Computational
Intelligence and Informatics (SACI), 2021, pp. 000521-000526 | 10.1109/SACI51354.2021.9465602 | null | cs.LG cs.AI cs.RO | http://creativecommons.org/licenses/by/4.0/ | Proximal Policy Optimization (PPO) is among the most widely used algorithms
in reinforcement learning, which achieves state-of-the-art performance in many
challenging problems. The keys to its success are the reliable policy updates
through the clipping mechanism and the multiple epochs of minibatch updates.
The aim of this research is to give new simple but effective alternatives to
the former. For this, we propose linearly and exponentially decaying clipping
range approaches throughout the training. With these, we would like to provide
higher exploration at the beginning and stronger restrictions at the end of the
learning phase. We investigate their performance in several classical control
and locomotive robotic environments. During the analysis, we found that they
influence the achieved rewards and are effective alternatives to the constant
clipping method in many reinforcement learning tasks.
| [
{
"created": "Sat, 20 Feb 2021 22:08:05 GMT",
"version": "v1"
},
{
"created": "Mon, 28 Jun 2021 15:00:37 GMT",
"version": "v2"
},
{
"created": "Thu, 1 Jul 2021 07:56:35 GMT",
"version": "v3"
}
] | 2021-07-02 | [
[
"Farsang",
"Mónika",
""
],
[
"Szegletes",
"Luca",
""
]
] |
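The decaying clipping schedules described in the abstract above (2102.10456) are straightforward to sketch: the PPO clipping range starts at an initial value and shrinks linearly or exponentially with training progress. The initial and final values and the decay shape below are placeholders, not the paper's settings.

```python
# Illustrative sketch: linearly or exponentially decaying PPO clipping range.
import math

def clip_range(step, total_steps, initial=0.3, final=0.05, mode="linear"):
    """Clipping range epsilon as a function of training progress."""
    progress = min(max(step / total_steps, 0.0), 1.0)
    if mode == "linear":
        return initial + (final - initial) * progress
    if mode == "exponential":
        # Interpolate on a log scale from `initial` down to `final`.
        return initial * math.exp(progress * math.log(final / initial))
    return initial  # constant clipping, as in standard PPO

# The epsilon is then used in the usual PPO surrogate objective:
#   L = min(ratio * advantage, clip(ratio, 1 - eps, 1 + eps) * advantage)
```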
2102.10461 | Konik Kothari | Konik Kothari, AmirEhsan Khorashadizadeh, Maarten de Hoop, Ivan
Dokmani\'c | Trumpets: Injective Flows for Inference and Inverse Problems | 16 pages | Uncertainty in Artificial Intelligence (UAI 2021) | null | null | cs.LG cs.AI eess.SP | http://creativecommons.org/licenses/by-nc-sa/4.0/ | We propose injective generative models called Trumpets that generalize
invertible normalizing flows. The proposed generators progressively increase
dimension from a low-dimensional latent space. We demonstrate that Trumpets can
be trained orders of magnitudes faster than standard flows while yielding
samples of comparable or better quality. They retain many of the advantages of
the standard flows such as training based on maximum likelihood and a fast,
exact inverse of the generator. Since Trumpets are injective and have fast
inverses, they can be effectively used for downstream Bayesian inference. To
wit, we use Trumpet priors for maximum a posteriori estimation in the context
of image reconstruction from compressive measurements, outperforming
competitive baselines in terms of reconstruction quality and speed. We then
propose an efficient method for posterior characterization and uncertainty
quantification with Trumpets by taking advantage of the low-dimensional latent
space.
| [
{
"created": "Sat, 20 Feb 2021 22:37:37 GMT",
"version": "v1"
}
] | 2023-07-25 | [
[
"Kothari",
"Konik",
""
],
[
"Khorashadizadeh",
"AmirEhsan",
""
],
[
"de Hoop",
"Maarten",
""
],
[
"Dokmanić",
"Ivan",
""
]
] |
2102.10485 | Massimiliano Lupo Pasini Dr. | Massimiliano Lupo Pasini, Vittorio Gabbi, Junqi Yin, Simona Perotto,
Nouamane Laanait | Scalable Balanced Training of Conditional Generative Adversarial Neural
Networks on Image Data | null | Journal of Supercomputing, 2021 | 10.1007/s11227-021-03808-2 | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a distributed approach to train deep convolutional generative
adversarial neural network (DC-CGANs) models. Our method reduces the imbalance
between generator and discriminator by partitioning the training data according
to data labels, and enhances scalability by performing a parallel training
where multiple generators are concurrently trained, each one of them focusing
on a single data label. Performance is assessed in terms of inception score and
image quality on MNIST, CIFAR10, CIFAR100, and ImageNet1k datasets, showing a
significant improvement in comparison to state-of-the-art techniques for training
DC-CGANs. Weak scaling is attained on all four datasets using up
to 1,000 processes and 2,000 NVIDIA V100 GPUs on the OLCF supercomputer Summit.
| [
{
"created": "Sun, 21 Feb 2021 00:48:19 GMT",
"version": "v1"
}
] | 2021-04-30 | [
[
"Pasini",
"Massimiliano Lupo",
""
],
[
"Gabbi",
"Vittorio",
""
],
[
"Yin",
"Junqi",
""
],
[
"Perotto",
"Simona",
""
],
[
"Laanait",
"Nouamane",
""
]
] |
2102.10530 | Kotaro Furuya | Kotaro Furuya and Jun Ohkubo | Semi-supervised learning combining backpropagation and STDP: STDP
enhances learning by backpropagation with a small amount of labeled data in a
spiking neural network | 9 pages, 12 figures | J. Phys. Soc. Jpn. 90, 074802 (2021) | 10.7566/JPSJ.90.074802 | null | cs.NE cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A semi-supervised learning method for spiking neural networks is proposed.
The proposed method consists of supervised learning by backpropagation and
subsequent unsupervised learning by spike-timing-dependent plasticity (STDP),
which is a biologically plausible learning rule. Numerical experiments show
that the proposed method improves the accuracy without additional labeling when
a small amount of labeled data is used. This feature has not been achieved by
existing semi-supervised learning methods of discriminative models. It is
possible to implement the proposed learning method for event-driven systems.
Hence, it would be highly efficient in real-time problems if it were
implemented on neuromorphic hardware. The results suggest that STDP plays an
important role other than self-organization when applied after supervised
learning, which differs from the previous method of using STDP as pre-training
interpreted as self-organization.
| [
{
"created": "Sun, 21 Feb 2021 06:55:02 GMT",
"version": "v1"
},
{
"created": "Wed, 19 May 2021 09:54:50 GMT",
"version": "v2"
}
] | 2021-06-23 | [
[
"Furuya",
"Kotaro",
""
],
[
"Ohkubo",
"Jun",
""
]
] |
2102.10532 | Michael Maher | Michael J. Maher | Relative Expressiveness of Defeasible Logics II | Includes extensive appendix | Theory and Practice of Logic Programming 13 (2013) 579-592 | 10.1017/S1471068413000367 | null | cs.LO cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | (Maher 2012) introduced an approach for relative expressiveness of defeasible
logics, and two notions of relative expressiveness were investigated. Using the
first of these definitions of relative expressiveness, we show that all the
defeasible logics in the DL framework are equally expressive under this
formulation of relative expressiveness. The second formulation of relative
expressiveness is stronger than the first. However, we show that logics
incorporating individual defeat are equally expressive as the corresponding
logics with team defeat. Thus the only differences in expressiveness of logics
in DL arise from differences in how ambiguity is handled. This completes the
study of relative expressiveness in DL begun in \cite{Maher12}.
| [
{
"created": "Sun, 21 Feb 2021 07:01:50 GMT",
"version": "v1"
}
] | 2024-05-15 | [
[
"Maher",
"Michael J.",
""
]
] |
2102.10557 | Nam Nguyen | Nam Nguyen and J. Morris Chang | Contrastive Self-supervised Neural Architecture Search | null | IEEE Transactions on Artificial Intelligence 2 (2021) 1-16 | 10.1109/TAI.2021.3121663 | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | This paper proposes a novel cell-based neural architecture search algorithm
(NAS), which completely alleviates the expensive costs of data labeling
inherited from supervised learning. Our algorithm capitalizes on the
effectiveness of self-supervised learning for image representations, which is
an increasingly crucial topic of computer vision. First, using only a small
amount of unlabeled training data under contrastive self-supervised learning allows
us to search over a more extensive search space, discovering better neural
architectures without a surge in computational resources. Second, we entirely
relieve the cost for labeled data (by contrastive loss) in the search stage
without compromising architectures' final performance in the evaluation phase.
Finally, we tackle the inherent discrete search space of the NAS problem by
sequential model-based optimization via the tree-parzen estimator (SMBO-TPE),
enabling us to reduce the computational expense response surface significantly.
An extensive number of experiments empirically show that our search algorithm
can achieve state-of-the-art results with better efficiency in data labeling
cost, searching time, and accuracy in final validation.
| [
{
"created": "Sun, 21 Feb 2021 08:38:28 GMT",
"version": "v1"
},
{
"created": "Sun, 4 Apr 2021 06:09:07 GMT",
"version": "v2"
},
{
"created": "Fri, 29 Oct 2021 17:17:49 GMT",
"version": "v3"
}
] | 2021-11-09 | [
[
"Nguyen",
"Nam",
""
],
[
"Chang",
"J. Morris",
""
]
] |
2102.10558 | L\'aszl\'o Csat\'o | Kolos Csaba \'Agoston and L\'aszl\'o Csat\'o | Inconsistency thresholds for incomplete pairwise comparison matrices | 16 pages, 3 figures, 4 tables | Omega, 108: 102576, 2022 | 10.1016/j.omega.2021.102576 | null | math.ST cs.AI math.OC stat.AP stat.TH | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Pairwise comparison matrices are increasingly used in settings where some
pairs are missing. However, there exist few inconsistency indices for similar
incomplete data sets and no reasonable measure has an associated threshold.
This paper generalises the famous rule of thumb for the acceptable level of
inconsistency, proposed by Saaty, to incomplete pairwise comparison matrices.
The extension is based on choosing the missing elements such that the maximal
eigenvalue of the incomplete matrix is minimised. Consequently, the
well-established values of the random index cannot be adopted: the
inconsistency of random matrices is found to be the function of matrix size and
the number of missing elements, with a nearly linear dependence in the case of
the latter variable. Our results can be directly built into decision-making
software and used by practitioners as a statistical criterion for accepting or
rejecting an incomplete pairwise comparison matrix.
| [
{
"created": "Sun, 21 Feb 2021 08:39:37 GMT",
"version": "v1"
},
{
"created": "Fri, 18 Jun 2021 12:09:42 GMT",
"version": "v2"
},
{
"created": "Thu, 30 Sep 2021 14:43:10 GMT",
"version": "v3"
},
{
"created": "Fri, 3 Dec 2021 13:32:32 GMT",
"version": "v4"
}
] | 2022-02-03 | [
[
"Ágoston",
"Kolos Csaba",
""
],
[
"Csató",
"László",
""
]
] |
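The construction used in the abstract above (2102.10558) — completing a pairwise comparison matrix so that its principal eigenvalue is minimal — can be sketched numerically: missing entries are parameterised on a log scale (keeping reciprocal symmetry) and the Perron eigenvalue is minimised with a general-purpose optimiser. This is a numerical illustration only, with a made-up example matrix; the paper's analysis and threshold values are not reproduced here.

```python
# Illustrative sketch: complete an incomplete pairwise comparison matrix by
# choosing the missing entries so that the principal eigenvalue is minimised.
import numpy as np
from scipy.optimize import minimize

def complete_matrix(A, missing_pairs, x):
    """Fill missing (i, j) entries with exp(x) and keep reciprocal symmetry."""
    B = A.copy()
    for (i, j), v in zip(missing_pairs, x):
        B[i, j] = np.exp(v)
        B[j, i] = np.exp(-v)
    return B

def min_principal_eigenvalue(A, missing_pairs):
    """Minimise lambda_max over the missing comparisons."""
    def objective(x):
        eigvals = np.linalg.eigvals(complete_matrix(A, missing_pairs, x))
        return float(np.max(eigvals.real))
    res = minimize(objective, x0=np.zeros(len(missing_pairs)), method="Nelder-Mead")
    return res.fun, complete_matrix(A, missing_pairs, res.x)

# Example: a 4x4 matrix with one unknown comparison (entries (1, 3) and (3, 1)).
A = np.array([[1, 2, 4, 1],
              [1/2, 1, 2, 1],    # A[1, 3] is unknown; the 1 is a placeholder
              [1/4, 1/2, 1, 1/3],
              [1, 1, 3, 1]], dtype=float)
lam, completed = min_principal_eigenvalue(A, [(1, 3)])
print(lam)  # minimal lambda_max, to be compared against a Saaty-type threshold
```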
2102.10590 | Zahidul Islam | Zahidul Islam, Mohammad Rukonuzzaman, Raiyan Ahmed, Md. Hasanul Kabir,
Moshiur Farazi | Efficient Two-Stream Network for Violence Detection Using Separable
Convolutional LSTM | Accepted by the 2021 International Joint Conference on Neural
Networks (IJCNN 2021) | 2021 International Joint Conference on Neural Networks (IJCNN),
2021, pp. 1-8 | 10.1109/IJCNN52387.2021.9534280 | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Automatically detecting violence from surveillance footage is a subset of
activity recognition that deserves special attention because of its wide
applicability in unmanned security monitoring systems, internet video
filtration, etc. In this work, we propose an efficient two-stream deep learning
architecture leveraging Separable Convolutional LSTM (SepConvLSTM) and
pre-trained MobileNet where one stream takes in background suppressed frames as
inputs and other stream processes difference of adjacent frames. We employed
simple and fast input pre-processing techniques that highlight the moving
objects in the frames by suppressing non-moving backgrounds and capture the
motion in-between frames. As violent actions are mostly characterized by body
movements, these inputs help produce discriminative features. SepConvLSTM is
constructed by replacing convolution operation at each gate of ConvLSTM with a
depthwise separable convolution that enables producing robust long-range
Spatio-temporal features while using substantially fewer parameters. We
experimented with three fusion methods to combine the output feature maps of
the two streams. Evaluation of the proposed methods was done on three standard
public datasets. Our model outperforms the accuracy on the larger and more
challenging RWF-2000 dataset by more than a 2% margin while matching
state-of-the-art results on the smaller datasets. Our experiments lead us to
conclude, the proposed models are superior in terms of both computational
efficiency and detection accuracy.
| [
{
"created": "Sun, 21 Feb 2021 12:01:48 GMT",
"version": "v1"
},
{
"created": "Sun, 18 Apr 2021 10:14:39 GMT",
"version": "v2"
},
{
"created": "Tue, 20 Apr 2021 15:16:23 GMT",
"version": "v3"
}
] | 2021-10-08 | [
[
"Islam",
"Zahidul",
""
],
[
"Rukonuzzaman",
"Mohammad",
""
],
[
"Ahmed",
"Raiyan",
""
],
[
"Kabir",
"Md. Hasanul",
""
],
[
"Farazi",
"Moshiur",
""
]
] |
2102.10707 | HanQin Cai | HanQin Cai, Yuchen Lou, Daniel McKenzie, Wotao Yin | A Zeroth-Order Block Coordinate Descent Algorithm for Huge-Scale
Black-Box Optimization | Accepted to ICML 2021 | Proceedings of the 38th International Conference on Machine
Learning, PMLR 139:1193-1203, 2021 | null | null | math.OC cs.AI cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the zeroth-order optimization problem in the huge-scale setting,
where the dimension of the problem is so large that performing even basic
vector operations on the decision variables is infeasible. In this paper, we
propose a novel algorithm, coined ZO-BCD, that exhibits favorable overall query
complexity and has a much smaller per-iteration computational complexity. In
addition, we discuss how the memory footprint of ZO-BCD can be reduced even
further by the clever use of circulant measurement matrices. As an application
of our new method, we propose the idea of crafting adversarial attacks on
neural network based classifiers in a wavelet domain, which can result in
problem dimensions of over 1.7 million. In particular, we show that crafting
adversarial examples to audio classifiers in a wavelet domain can achieve the
state-of-the-art attack success rate of 97.9%.
| [
{
"created": "Sun, 21 Feb 2021 23:06:35 GMT",
"version": "v1"
},
{
"created": "Fri, 11 Jun 2021 04:30:50 GMT",
"version": "v2"
}
] | 2021-08-17 | [
[
"Cai",
"HanQin",
""
],
[
"Lou",
"Yuchen",
""
],
[
"McKenzie",
"Daniel",
""
],
[
"Yin",
"Wotao",
""
]
] |
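The abstract above (2102.10707) describes zeroth-order block coordinate descent: at each iteration a random block of coordinates is selected and a gradient estimate for that block is built from function queries only. A toy sketch of that idea follows; the paper's sparsity-aware gradient estimator, circulant measurement matrices and step-size rules are not reproduced, and the simple forward-difference estimator here is an assumption for illustration.

```python
# Illustrative sketch: zeroth-order block coordinate descent using finite
# differences on one randomly chosen block of coordinates per iteration.
import numpy as np

def zo_bcd(f, x0, n_iters=500, block_size=10, step=0.1, fd_eps=1e-4, seed=0):
    rng = np.random.default_rng(seed)
    x = x0.astype(float).copy()
    for _ in range(n_iters):
        block = rng.choice(x.size, size=min(block_size, x.size), replace=False)
        grad_block = np.zeros(block.size)
        fx = f(x)
        # Coordinate-wise forward differences restricted to the chosen block.
        for k, idx in enumerate(block):
            x_pert = x.copy()
            x_pert[idx] += fd_eps
            grad_block[k] = (f(x_pert) - fx) / fd_eps
        x[block] -= step * grad_block
    return x

# Toy usage: minimise a quadratic in 100 dimensions with function queries only.
target = np.linspace(-1.0, 1.0, 100)
x_opt = zo_bcd(lambda z: np.sum((z - target) ** 2), x0=np.zeros(100))
print(np.max(np.abs(x_opt - target)))  # should be small
```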
2102.10731 | Sarvesh Kumar Singh Mr. | Sarvesh Kumar Singh, Bikram Pratap Banerjee and Simit Raval | Three dimensional unique identifier based automated georeferencing and
coregistration of point clouds in underground environment | 26 pages, 10 figures | Remote Sensing. 2021; 13(16):3145 | 10.3390/rs13163145 | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Spatially and geometrically accurate laser scans are essential in modelling
infrastructure for applications in civil, mining and transportation. Monitoring
of underground or indoor environments such as mines or tunnels is challenging
due to unavailability of a sensor positioning framework, complicated
structurally symmetric layouts, repetitive features and occlusions. Current
practices largely include a manual selection of discernable reference points
for georeferencing and coregistration purpose. This study aims at overcoming
these practical challenges in underground or indoor laser scanning. The
developed approach involves automatically and uniquely identifiable three
dimensional unique identifiers (3DUIDs) in laser scans, and a 3D registration
(3DReG) workflow. Field testing of the method in an underground tunnel has been
found accurate, effective and efficient. Additionally, a method for
automatically extracting roadway tunnel profile has been exhibited. The
developed 3DUID can be used in roadway profile extraction, guided automation,
sensor calibration, reference targets for routine survey and deformation
monitoring.
| [
{
"created": "Mon, 22 Feb 2021 01:47:50 GMT",
"version": "v1"
}
] | 2021-09-01 | [
[
"Singh",
"Sarvesh Kumar",
""
],
[
"Banerjee",
"Bikram Pratap",
""
],
[
"Raval",
"Simit",
""
]
] |
2102.10777 | Tejas Khare | Tejas Khare, Vaibhav Bahel and Anuradha C. Phadke | PCB-Fire: Automated Classification and Fault Detection in PCB | 6 Pages, 9 Figures, Conference | Proceeding Reference - 978-0-7381-4335-4/20/$31.00
©2020 IEEE | 10.1109/MPCIT51588.2020.9350324 | null | cs.CV eess.IV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Printed Circuit Boards are the foundation for the functioning of any
electronic device, and are therefore an essential component for various
industries such as automotive, communication and computation. However, one of
the challenges faced by PCB manufacturers during the manufacturing process is
the faulty placement of components, including missing components. At present,
the infrastructure required to ensure adequate PCB quality demands considerable
time and effort. The authors present a novel solution for detecting missing
components and classifying them in a resourceful manner. The presented
algorithm focuses on pixel theory and object detection, which are used in
combination to optimize the results from the given dataset.
| [
{
"created": "Mon, 22 Feb 2021 05:19:22 GMT",
"version": "v1"
}
] | 2021-02-23 | [
[
"Khare",
"Tejas",
""
],
[
"Bahel",
"Vaibhav",
""
],
[
"Phadke",
"Anuradha C.",
""
]
] |
2102.10820 | Dennis Stumpf | Dennis Stumpf, Stephan Krau\ss, Gerd Reis, Oliver Wasenm\"uller,
Didier Stricker | SALT: A Semi-automatic Labeling Tool for RGB-D Video Sequences | VISAPP 2021 full paper (9 pages, 6 figures), published by SciTePress:
https://www.scitepress.org/PublicationsDetail.aspx?ID=ywQZ3GZrka8=&t=1 | Proceedings of the 16th International Joint Conference on Computer
Vision, Imaging and Computer Graphics Theory and Applications - Volume 4
VISAPP: VISAPP (2021) 595-603 | 10.5220/0010303005950603 | null | cs.CV cs.LG cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large labeled data sets are one of the essential basics of modern deep
learning techniques. Therefore, there is an increasing need for tools that
allow labeling large amounts of data as intuitively as possible. In this paper,
we introduce SALT, a tool to semi-automatically annotate RGB-D video sequences
to generate 3D bounding boxes for full six Degrees of Freedom (DoF) object
poses, as well as pixel-level instance segmentation masks for both RGB and
depth. Besides bounding box propagation through various interpolation
techniques, as well as algorithmically guided instance segmentation, our
pipeline also provides built-in pre-processing functionalities to facilitate
the data set creation process. By making full use of SALT, annotation time can
be reduced by a factor of up to 33.95 for bounding box creation and 8.55 for
RGB segmentation without compromising the quality of the automatically
generated ground truth.
| [
{
"created": "Mon, 22 Feb 2021 08:11:39 GMT",
"version": "v1"
}
] | 2021-02-23 | [
[
"Stumpf",
"Dennis",
""
],
[
"Krauß",
"Stephan",
""
],
[
"Reis",
"Gerd",
""
],
[
"Wasenmüller",
"Oliver",
""
],
[
"Stricker",
"Didier",
""
]
] |
2102.10837 | Subho Sankar Banerjee | Subho S. Banerjee, Saurabh Jha, Zbigniew T. Kalbarczyk, Ravishankar K.
Iyer | BayesPerf: Minimizing Performance Monitoring Errors Using Bayesian
Statistics | null | Proceedings of the Twenty-Sixth International Conference on
Architectural Support for Programming Languages and Operating Systems (ASPLOS
21), 2021 | 10.1145/3445814.3446739 | null | cs.DC cs.AI cs.AR cs.PF | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Hardware performance counters (HPCs) that measure low-level architectural and
microarchitectural events provide dynamic contextual information about the
state of the system. However, HPC measurements are error-prone due to
nondeterminism (e.g., undercounting due to event multiplexing, or OS
interrupt-handling behaviors). In this paper, we present BayesPerf, a system
for quantifying uncertainty in HPC measurements by using a domain-driven
Bayesian model that captures microarchitectural relationships between HPCs to
jointly infer their values as probability distributions. We provide the design
and implementation of an accelerator that allows for low-latency and low-power
inference of the BayesPerf model for x86 and ppc64 CPUs. BayesPerf reduces the
average error in HPC measurements from 40.1% to 7.6% when events are being
multiplexed. The value of BayesPerf in real-time decision-making is illustrated
with a simple example of scheduling PCIe transfers.
| [
{
"created": "Mon, 22 Feb 2021 09:00:14 GMT",
"version": "v1"
}
] | 2021-02-23 | [
[
"Banerjee",
"Subho S.",
""
],
[
"Jha",
"Saurabh",
""
],
[
"Kalbarczyk",
"Zbigniew T.",
""
],
[
"Iyer",
"Ravishankar K.",
""
]
] |
2102.10848 | Judit Acs | Judit \'Acs and D\'aniel L\'evai and D\'avid M\'ark Nemeskey and
Andr\'as Kornai | Evaluating Contextualized Language Models for Hungarian | null | Hungarian NLP Conference (MSZNY2021) | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present an extended comparison of contextualized language models for
Hungarian. We compare huBERT, a Hungarian model, against four multilingual
models, including the multilingual BERT model. We evaluate these models on
three tasks: morphological probing, POS tagging and NER. We find that huBERT works
better than the other models, often by a large margin, particularly near the
global optimum (typically at the middle layers). We also find that huBERT tends
to generate fewer subwords for one word and that using the last subword for
token-level tasks is generally a better choice than using the first one.
| [
{
"created": "Mon, 22 Feb 2021 09:29:01 GMT",
"version": "v1"
}
] | 2021-02-23 | [
[
"Ács",
"Judit",
""
],
[
"Lévai",
"Dániel",
""
],
[
"Nemeskey",
"Dávid Márk",
""
],
[
"Kornai",
"András",
""
]
] |
2102.10864 | Judit Acs | Judit \'Acs and \'Akos K\'ad\'ar and Andr\'as Kornai | Subword Pooling Makes a Difference | null | EACL2021 | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Contextual word representations have become a standard in modern natural language
processing systems. These models use subword tokenization to handle large
vocabularies and unknown words. Word-level usage of such systems requires a way
of pooling multiple subwords that correspond to a single word. In this paper we
investigate how the choice of subword pooling affects the downstream
performance on three tasks: morphological probing, POS tagging and NER, in 9
typologically diverse languages. We compare these in two massively multilingual
models, mBERT and XLM-RoBERTa. For morphological tasks, the widely used `choose
the first subword' is the worst strategy and the best results are obtained by
using attention over the subwords. For POS tagging both of these strategies
perform poorly and the best choice is to use a small LSTM over the subwords.
The same strategy works best for NER and we show that mBERT is better than
XLM-RoBERTa in all 9 languages. We publicly release all code, data and the full
result tables at \url{https://github.com/juditacs/subword-choice}.
| [
{
"created": "Mon, 22 Feb 2021 09:59:30 GMT",
"version": "v1"
},
{
"created": "Mon, 29 Mar 2021 13:32:52 GMT",
"version": "v2"
}
] | 2021-03-30 | [
[
"Ács",
"Judit",
""
],
[
"Kádár",
"Ákos",
""
],
[
"Kornai",
"András",
""
]
] |
2102.10935 | Yazhou Yao | Tao Chen, Guosen Xie, Yazhou Yao, Qiong Wang, Fumin Shen, Zhenmin
Tang, and Jian Zhang | Semantically Meaningful Class Prototype Learning for One-Shot Image
Semantic Segmentation | null | IEEE Transactions on Multimedia, 2021 | null | null | cs.CV cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | One-shot semantic image segmentation aims to segment the object regions for
the novel class with only one annotated image. Recent works adopt the episodic
training strategy to mimic the expected situation at testing time. However,
these existing approaches simulate the test conditions too strictly during the
training process, and thus cannot make full use of the given label information.
Besides, these approaches mainly focus on the foreground-background target
class segmentation setting. They only utilize binary mask labels for training.
In this paper, we propose to leverage the multi-class label information during
the episodic training. It will encourage the network to generate more
semantically meaningful features for each category. After integrating the
target class cues into the query features, we then propose a pyramid feature
fusion module to mine the fused features for the final classifier. Furthermore,
to take more advantage of the support image-mask pair, we propose a
self-prototype guidance branch to support image segmentation. It can constrain
the network for generating more compact features and a robust prototype for
each semantic class. For inference, we propose a fused prototype guidance
branch for the segmentation of the query image. Specifically, we leverage the
prediction of the query image to extract the pseudo-prototype and combine it
with the initial prototype. Then we utilize the fused prototype to guide the
final segmentation of the query image. Extensive experiments demonstrate the
superiority of our proposed approach.
| [
{
"created": "Mon, 22 Feb 2021 12:07:35 GMT",
"version": "v1"
}
] | 2021-02-23 | [
[
"Chen",
"Tao",
""
],
[
"Xie",
"Guosen",
""
],
[
"Yao",
"Yazhou",
""
],
[
"Wang",
"Qiong",
""
],
[
"Shen",
"Fumin",
""
],
[
"Tang",
"Zhenmin",
""
],
[
"Zhang",
"Jian",
""
]
] |
2102.10951 | Alexander Hepburn | Alexander Hepburn, Raul Santos-Rodriguez | Explainers in the Wild: Making Surrogate Explainers Robust to
Distortions through Perception | null | 2021 IEEE International Conference on Image Processing (ICIP),
Anchorage, Alaska, USA | null | null | cs.CV stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Explaining the decisions of models is becoming pervasive in the image
processing domain, whether it is by using post-hoc methods or by creating
inherently interpretable models. While the widespread use of surrogate
explainers is a welcome addition to inspect and understand black-box models,
assessing the robustness and reliability of the explanations is key for their
success. Additionally, whilst existing work in the explainability field
proposes various strategies to address this problem, the challenges of working
with data in the wild is often overlooked. For instance, in image
classification, distortions to images can not only affect the predictions
assigned by the model, but also the explanation. Given a clean and a distorted
version of an image, even if the prediction probabilities are similar, the
explanation may still be different. In this paper we propose a methodology to
evaluate the effect of distortions in explanations by embedding perceptual
distances that tailor the neighbourhoods used to training surrogate explainers.
We also show that by operating in this way, we can make the explanations more
robust to distortions. We generate explanations for images in the Imagenet-C
dataset and demonstrate how using a perceptual distances in the surrogate
explainer creates more coherent explanations for the distorted and reference
images.
| [
{
"created": "Mon, 22 Feb 2021 12:38:53 GMT",
"version": "v1"
},
{
"created": "Wed, 16 Jun 2021 10:39:04 GMT",
"version": "v2"
}
] | 2021-06-17 | [
[
"Hepburn",
"Alexander",
""
],
[
"Santos-Rodriguez",
"Raul",
""
]
] |
2102.11025 | Emiliano Lorini | Emiliano Lorini | A Qualitative Theory of Cognitive Attitudes and their Change | Under consideration in Theory and Practice of Logic Programming
(TPLP) | Theory and Practice of Logic Programming 21 (2021) 428-458 | 10.1017/S1471068421000053 | null | cs.AI cs.LO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a general logical framework for reasoning about agents' cognitive
attitudes of both epistemic type and motivational type. We show that it allows
us to express a variety of relevant concepts for qualitative decision theory
including the concepts of knowledge, belief, strong belief, conditional belief,
desire, conditional desire, strong desire and preference. We also present two
extensions of the logic, one by the notion of choice and the other by dynamic
operators for belief change and desire change, and we apply the former to the
analysis of single-stage games under incomplete information. We provide sound
and complete axiomatizations for the basic logic and for its two extensions.
The paper is under consideration in Theory and Practice of Logic Programming
(TPLP).
| [
{
"created": "Tue, 16 Feb 2021 10:28:49 GMT",
"version": "v1"
}
] | 2023-06-22 | [
[
"Lorini",
"Emiliano",
""
]
] |
2102.11032 | Ozlem Uzuner | Nicholas Dobbins, David Wayne, Kahyun Lee, \"Ozlem Uzuner, Meliha
Yetisgen | Performance of Automatic De-identification Across Different Note Types | null | AMIA Virtual Summits 2021 | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Free-text clinical notes detail all aspects of patient care and have great
potential to facilitate quality improvement and assurance initiatives as well
as advance clinical research. However, concerns about patient privacy and
confidentiality limit the use of clinical notes for research. As a result, the
information documented in these notes remains unavailable for most researchers.
De-identification (de-id), i.e., locating and removing personally identifying
protected health information (PHI), is one way of improving access to clinical
narratives. However, there are limited off-the-shelf de-identification systems
able to consistently detect PHI across different data sources and medical
specialties. In this abstract, we present the performance of a state-of-the art
de-id system called NeuroNER1 on a diverse set of notes from University of
Washington (UW) when the models are trained on data from an external
institution (Partners Healthcare) vs. from the same institution (UW). We
present results at the level of PHI and note types.
| [
{
"created": "Wed, 17 Feb 2021 00:55:40 GMT",
"version": "v1"
}
] | 2021-02-23 | [
[
"Dobbins",
"Nicholas",
""
],
[
"Wayne",
"David",
""
],
[
"Lee",
"Kahyun",
""
],
[
"Uzuner",
"Özlem",
""
],
[
"Yetisgen",
"Meliha",
""
]
] |