uuid (int64) | title (string) | abstract (string) |
---|---|---|
1,700 | Appearance Learning for Image-Based Motion Estimation in Tomography | In tomographic imaging, anatomical structures are reconstructed by applying a pseudo-inverse forward model to acquired signals. Geometric information within this process usually depends on the system setting only, i.e., the scanner position or readout direction. Patient motion therefore corrupts the geometry alignment in the reconstruction process, resulting in motion artifacts. We propose an appearance learning approach recognizing the structures of rigid motion independently of the scanned object. To this end, we train a siamese triplet network to predict the reprojection error (RPE) for the complete acquisition as well as an approximate distribution of the RPE along the single views from the reconstructed volume in a multi-task learning approach. The RPE measures the motion-induced geometric deviations independent of the object based on virtual marker positions, which are available during training. We train our network using 27 patients and deploy a 21-4-2 split for training, validation and testing. On average, we achieve a residual mean RPE of 0.013 mm with an inter-patient standard deviation of 0.022 mm. This is twice the accuracy compared to previously published results. In a motion estimation benchmark, the proposed approach achieves superior results in comparison with two state-of-the-art measures in nine out of twelve experiments. The clinical applicability of the proposed method is demonstrated on a motion-affected clinical dataset. |
1,701 | Angular-Based Preprocessing for Image Denoising | There is not much research on how to use color information to improve results in image denoising. Currently, most methods convert the color space from standard red green blue (sRGB) to an opponent-like one because better results are obtained, but beyond this conversion, color is mostly ignored in image denoising pipelines. In this letter, we propose a color decomposition to preprocess an image before applying a typical denoising. Our decomposition consists of obtaining a set of images in the spherical coordinate system, each of them with the origin of the spherical transformation at a different color value. These color values, which we call color centers, are defined so as to be far away from the dominant colors of the image. Once in the spherical coordinate system, we perform a mild denoising operation with some state-of-the-art method on the angular components. Then, we convert these images back to sRGB, and we merge them depending on the distance between the color of each pixel and the color centers. Finally, we denoise the preprocessed image with the same state-of-the-art method used in our preprocessing. Experiments show that our method outperforms the results of directly applying the denoising method on the input image for different state-of-the-art denoising methods. |
1,702 | Path R-CNN for Prostate Cancer Diagnosis and Gleason Grading of Histological Images | Prostate cancer is the most common and second most deadly form of cancer in men in the United States. The classification of prostate cancers based on Gleason grading using histological images is important in risk assessment and treatment planning for patients. Here, we demonstrate a new region-based convolutional neural network framework for multi-task prediction using an epithelial network head and a grading network head. Compared with a single-task model, our multi-task model can provide complementary contextual information, which contributes to better performance. Our model achieves state-of-the-art performance in the epithelial cell detection and Gleason grading tasks simultaneously. Using fivefold cross-validation, our model achieves an epithelial cell detection accuracy of 99.07% with an average area under the curve of 0.998. As for Gleason grading, our model obtains a mean intersection over union of 79.56% and an overall pixel accuracy of 89.40%. |
1,703 | Validity assessment of Michigan's proposed qPCR threshold value for rapid water-quality monitoring of E. coli contamination | Michigan's water-quality standards specify that E. coli concentrations at bathing beaches must not exceed 300 E. coli per 100 mL, as determined by the geometric mean of culture-based concentrations in three or more representative samples from a given beach on a given day. Culture-based analysis requires 18-24 h to complete, so results are not available on the day of sampling. This one-day delay is problematic because results cannot be used to prevent recreation at beaches that are unsafe on the sampling day, nor do they reliably indicate whether recreation should be prevented the next day, due to high between-day variability in E. coli concentrations demonstrated by previous studies. By contrast, qPCR-based E. coli concentrations can be obtained in 3-4 h, making same-day beach notification decisions possible. Michigan has proposed a qPCR threshold value (qTV) for E. coli of 1.863 log10 gene copies per reaction as a potential equivalent value to the state standard, based on statistical analysis of a set of state-wide training data from 2016 to 2018. The main purpose of the present study is to assess the validity of the proposed qTV by determining whether the implied qPCR-based beach notification decisions agree well with culture-based decisions on two sets of test data from 2016-2018 (6,564 samples) and 2019-2020 (3,205 samples), and whether performance of the proposed qTV is similar on the test and training data. The results show that performance of Michigan's proposed qTV on both sets of test data was consistently good (e.g., 95% agreement with culture-based beach notification decisions during 2019-2020) and was as good as or better than its performance on the training data set. The false-negative rate for the proposed qTV was 25-29%, meaning that beach notification decisions based on the qTV would be expected to permit recreation on the day of sampling in 25-29% of cases where the beach exceeds the state standard for FIB contamination. This false-negative rate is higher than one would hope to see but is well below the corresponding error rate for culture-based decisions, which permit recreation at beaches that exceed the state standard on the day of sampling in 100% of cases because of the one-day delay in obtaining results. The key advantage of qPCR-based analysis is that it permits a large percentage (71-75%) of unsafe beaches to be identified in time to prevent recreation on the day of sampling. |
1,704 | Oxidative stress in the RVLM mediates sympathetic hyperactivity induced by circadian disruption | Circadian rhythm plays a significant role in maintaining the function of the cardiovascular system. Emerging studies have demonstrated that circadian disruption enhances the risk of cardiovascular diseases by activating the sympathetic nervous system; however, the underlying mechanisms remain unknown. Therefore, this study aimed to clarify the role of oxidative stress in the rostral ventrolateral medulla (RVLM) in sympathetic hyperactivity induced by circadian disruption. Rats were randomly divided into two groups: the normal light and dark (LD) group and the circadian disruption (CD) group. Sympathetic nerve activity of rats was assessed by recording renal sympathetic nerve activity (RSNA) and indirect methods such as plasma level of norepinephrine (NE). The level of oxidative stress in the RVLM was detected by dihydroethidium probes. Moreover, the expression levels of the oxidative stress-related proteins in the RVLM were detected by Western blotting. Circadian disruption significantly increased blood pressure (BP), RSNA, and plasma levels of NE. Compared to the LD group, the CD group exhibited a more significant depressor response to i.v. hexamethonium bromide, a ganglionic blocker. Furthermore, the reactive oxygen species (ROS) production in the RVLM of rats with circadian disruption was significantly increased. In addition, BP and RSNA of rats with circadian disruption exhibited a greater decrease in the effects of microinjection of tempol, a superoxide scavenger, into the RVLM, compared to artificial cerebrospinal fluid (aCSF). Further investigation of the molecular mechanism by Western blotting showed that nuclear factor-erythroid-2-related factor 2 (Nrf2)/heme oxygenase 1 (HO1)/NAD(P)H: quinone oxidoreductase 1 (NQO1) signaling was down-regulated in the RVLM of circadian disruption rats. These data suggest that oxidative stress in the RVLM mediates sympathetic hyperactivity induced by circadian disruption and possibly by down-regulating Nrf2/HO1/NQO1 signaling. |
1,705 | Cleaning of Oil Fouling with Water Enabled by Zwitterionic Polyelectrolyte Coatings: Overcoming the Imperative Challenge of Oil-Water Separation Membranes | Herein we report a self-cleaning coating derived from zwitterionic poly(2-methacryloyloxylethyl phosphorylcholine) (PMPC) brushes grafted on a solid substrate. The PMPC surface not only exhibits complete oil repellency in a water-wetted state (i.e., underwater superoleophobicity), but also allows effective cleaning of oil fouled on dry surfaces by water alone. The PMPC surface was compared with typical underwater superoleophobic surfaces realized with the aid of surface roughening by applying hydrophilic nanostructures and those realized by applying smooth hydrophilic polyelectrolyte multilayers. We show that underwater superoleophobicity of a surface is not sufficient to enable water to clean up oil fouling on a dry surface, because the latter circumstance demands the surface to be able to strongly bond water not only in its pristine state but also in an oil-wetted state. The PMPC surface is unique with its described self-cleaning performance because the zwitterionic phosphorylcholine groups exhibit exceptional binding affinity to water even when they are already wetted by oil. Further, we show that applying this PMPC coating onto steel meshes produces oil-water separation membranes that are resilient to oil contamination with simply water rinsing. Consequently, we provide an effective solution to the oil contamination issue on the oil-water separation membranes, which is an imperative challenge in this field. Thanks to the self-cleaning effect of the PMPC surface, PMPC-coated steel meshes can not only separate oil from oil-water mixtures in a water-wetted state, but also can lift oil out from oil-water mixtures even in a dry state, which is a very promising technology for practical oil-spill remediation. In contrast, we show that oil contamination on conventional hydrophilic oil-water separation membranes would permanently induce the loss of oil-water separation function, and thus they have to be always used in a completely water-wetted state, which significantly restricts their application in practice. |
1,706 | Graph-Based Surgical Instrument Adaptive Segmentation via Domain-Common Knowledge | Unsupervised domain adaptation (UDA), aiming to adapt the model to an unseen domain without annotations, has drawn sustained attention in surgical instrument segmentation. Existing UDA methods neglect the domain-common knowledge of the two datasets, thus failing to grasp the inter-category relationship in the target domain and leading to poor performance. To address these issues, we propose a graph-based unsupervised domain adaptation framework, named Interactive Graph Network (IGNet), to effectively adapt a model to an unlabeled new domain in surgical instrument segmentation tasks. In detail, the Domain-common Prototype Constructor (DPC) is first introduced to adaptively aggregate the feature map into domain-common prototypes using a probability mixture model, and to construct a prototypical graph that exchanges information among prototypes from a global perspective. In this way, DPC can grasp the co-occurrence and long-range relationships for both domains. To further narrow the domain gap, we design a Domain-common Knowledge Incorporator (DKI) to guide the evolution of feature maps towards the domain-common direction via a common-knowledge guidance graph and category-attentive graph reasoning. Finally, the Cross-category Mismatch Estimator (CME) is developed to evaluate category-level alignment from a graph perspective and assign each pixel a different adversarial weight, so as to refine the feature distribution alignment. Extensive experiments on three types of tasks demonstrate the feasibility and superiority of IGNet compared with other state-of-the-art methods. Furthermore, ablation studies verify the effectiveness of each component of IGNet. The source code is available at https://github.com/CityU-AIM-Group/Prototypical-Graph-DA. |
1,707 | Entropy-assisted approach to determine priorities in water quality monitoring process | Effective determination of water quality and assessment of water pollution are crucial and challenging processes. To evaluate water quality in rivers, researchers have turned to various statistical, probabilistic and stochastic methods to obtain efficient information from the monitoring network. As the data are highly random, the information content can be obtained by utilizing various methods including but not limited to the "entropy." Monitoring is a difficult process due to high measurement costs, and it is also difficult to optimize the network in terms of time, space, and especially the variable to be monitored. The presented study aims to create an effective approach for optimizing the monitoring network by determining the "prior" variables with entropy, which measures the uncertainty using all the data without time difference. The presented study proposes an alternative method to define the water quality variables that should be monitored much more frequently. The study is exemplified, to demonstrate its potential use at the case-study level, on the Grand River in Canada, by assessing water quality data obtained from 15 water quality monitoring stations. Results showed that, among the 8 variables examined, BOD, Cl, and NO2-N are the "prior" variables that should be monitored. It is shown that the prior variables that should be monitored for optimization of the network can be easily determined from the information obtained from the data statistically evaluated with entropy, and this can be stated as an effective method for managers to use in the decision-making process. |
1,708 | Durable viral suppression among persons with HIV in the deep south: an observational study | This study assessed predictors of stable HIV viral suppression in a racially diverse sample of persons living with HIV (PWH) in the southern US. A total of 700 PWH were recruited from one of four HIV clinics in Metro Atlanta, GA. Data were collected from September 2012 to July 2017, and HIV viral loads were retrieved from EMR for 18 months. The baseline visits and EMR data were used for current analyses. Durable viral suppression was categorized as 1. Remain suppressed, 2. Remain unsuppressed, and 3. Unstable suppression. The number of antiretroviral medications and age were significantly associated with durable viral suppression. Older age, fewer ART medications and availability of social support were positively associated with durable viral suppression over the 18-month observation period. Findings suggest that regimen complexity is potentially a better predictor of viral suppression than self-reported medication adherence. The need for consensus on the definition of durable viral suppression is also urged. |
1,709 | Extrinsic Calibration of Camera Networks Based on Pedestrians | In this paper, we propose a novel extrinsic calibration method for camera networks by analyzing tracks of pedestrians. First of all, we extract the center lines of walking persons by detecting their heads and feet in the camera images. We propose an easy and accurate method to estimate the 3D positions of the head and feet w.r.t. a local camera coordinate system from these center lines. We also propose a RANSAC-based orthogonal Procrustes approach to compute relative extrinsic parameters connecting the coordinate systems of cameras in a pairwise fashion. Finally, we refine the extrinsic calibration matrices using a method that minimizes the reprojection error. While existing state-of-the-art calibration methods explore epipolar geometry and use image positions directly, the proposed method first computes 3D positions per camera and then fuses the data. This results in simpler computations and a more flexible and accurate calibration method. Another advantage of our method is that it can also handle the case of persons walking along straight lines, which cannot be handled by most of the existing state-of-the-art calibration methods since all head and feet positions are co-planar. This situation often happens in real life. |
1,710 | Assessing the potential of steel as a substrate for building integrated photovoltaic applications | Government edicts and national time-bound policy directives are shaping the drive toward cost-effective renewables such as photovoltaics (PV). Building Integrated Photovoltaics (BIPV) has the potential to provide significant energy generation by utilising the existing building infrastructure as a power generator, engendering a transformational shift from traditional energy sources. This research presents an innovative study on the industrial viability of utilising "rough" low carbon steel integrated with an Intermediate Layer (IL) to develop lower cost thin film BIPV products, which are compared to existing commercial products. Consideration of the final product cost is given and potential business models to enter the BIPV market are identified. The lab scale and upscaling elements of the research support the significant benefits of an approach that extends beyond the use of expensive solar grade steel. A state-of-the-art review of existing steel-based BIPV products is given and used as a benchmark to compare the new products. The results demonstrate that a commercially competitive product is viable and also highlight the strong potential for the adoption of a "rough" steel + IL focused approach to BIPV manufacture and a potential new direction to develop cost efficiencies in an increasingly competitive market. |
1,711 | Learning Hierarchical Attention for Weakly-Supervised Chest X-Ray Abnormality Localization and Diagnosis | We consider the problem of abnormality localization for clinical applications. While deep learning has driven much recent progress in medical imaging, many clinical challenges are not fully addressed, limiting its broader usage. While recent methods report high diagnostic accuracies, physicians have concerns trusting these algorithm results for diagnostic decision-making purposes because of a general lack of algorithm decision reasoning and interpretability. One potential way to address this problem is to further train these models to localize abnormalities in addition to just classifying them. However, doing this accurately will require a large amount of disease localization annotations by clinical experts, a task that is prohibitively expensive to accomplish for most applications. In this work, we take a step towards addressing these issues by means of a new attention-driven weakly supervised algorithm comprising a hierarchical attention mining framework that unifies activation- and gradient-based visual attention in a holistic manner. Our key algorithmic innovations include the design of explicit ordinal attention constraints, enabling principled model training in a weakly-supervised fashion, while also facilitating the generation of visual-attention-driven model explanations by means of localization cues. On two large-scale chest X-ray datasets (NIH ChestX-ray14 and CheXpert), we demonstrate significant localization performance improvements over the current state of the art while also achieving competitive classification performance. |
1,712 | Salient object detection with adversarial training | The generative adversarial network has been shown to produce state-of-the-art results in image generation. In this study, the authors propose a novel adversarial training method to train salient object detection (SOD) models. They train a convolutional SOD network along with a gated adversarial network that discriminates salient maps coming either from the ground truth or from the SOD network. The motivation for our approach is that the adversarial network can detect and correct pixel-wise errors between ground truth salient detection maps and the ones produced by the convolutional network. Our experiments show that the adversarial training approach leads to state-of-the-art performance on the MSRA-B, extended complex scene saliency, HKU-IS, DUT, and SOD datasets. |
1,713 | ART: An Attack-Resistant Trust Management Scheme for Securing Vehicular Ad Hoc Networks | Vehicular ad hoc networks (VANETs) have the potential to transform the way people travel through the creation of a safe interoperable wireless communications network that includes cars, buses, traffic signals, cell phones, and other devices. However, VANETs are vulnerable to security threats due to increasing reliance on communication, computing, and control technologies. The unique security and privacy challenges posed by VANETs include integrity (data trust), confidentiality, nonrepudiation, access control, real-time operational constraints/demands, availability, and privacy protection. The trustworthiness of VANETs could be improved by addressing holistically both data trust, which is defined as the assessment of whether or not and to what extent the reported traffic data are trustworthy, and node trust, which is defined as how trustworthy the nodes in VANETs are. In this paper, an attack-resistant trust management scheme (ART) is proposed for VANETs that is able to detect and cope with malicious attacks and also evaluate the trustworthiness of both data and mobile nodes in VANETs. Specifically, data trust is evaluated based on the data sensed and collected from multiple vehicles; node trust is assessed in two dimensions, i.e., functional trust and recommendation trust, which indicate how likely a node can fulfill its functionality and how trustworthy the recommendations from a node for other nodes will be, respectively. The effectiveness and efficiency of the proposed ART scheme are validated through extensive experiments. The proposed trust management scheme is applicable to a wide range of VANET applications to improve traffic safety, mobility, and environmental protection with enhanced trustworthiness. |
1,714 | A non-factoid question answering system for prior art search | A patent gives the owner of an invention the exclusive rights to make, use and sell their invention. Before a new patent application is filed, patent lawyers are required to engage in prior art search to determine the likelihood that an invention is novel or valid, or to make sense of the domain. To perform this search, existing platforms utilize keywords and Boolean logic, which disregard the syntax and semantics of natural language, thus making the search extremely difficult. Consequently, studies regarding semantics using neural embeddings exist, but these only consider a narrow number of unidirectional words. In this study, we propose an end-to-end framework that considers bidirectional semantics, syntax and the thematic nature of natural language for prior art search. The proposed framework goes beyond keywords as input queries and takes a patent as the input. The contributions of this paper are twofold: adapting pre-trained embedding models (e.g., BERT) to address the semantics and syntax of language, followed by a second component, which exploits topic modeling to build a diversified answer that covers all themes across domains of the input patent. We evaluate the performance of the proposed framework on the CLEF-IP 2011 benchmark dataset and a real-world dataset obtained from the Google patent repository and show that the proposed framework outperforms existing methods and returns meaningful results for a given patent. |
1,715 | Microsatellite instability profiles of gastrointestinal cancers: comparison between non-colorectal and colorectal origin | Microsatellite instability (MSI) is a major carcinogenic pathway with prognostic and predictive implications. The validity of polymerase chain reaction (PCR)-based MSI testing is well established in colorectal cancer; however, the data are limited in non-colorectal gastrointestinal cancers. The aim of this study is to clarify the detailed MSI profiles of non-colorectal gastrointestinal cancers and to investigate the differences from those of colorectal cancers. MSI testing was performed using paired tumour/normal tissues of 123 mismatch repair-deficient cancers detected by immunohistochemistry including 80 non-colorectal cancers (eight oesophagogastric junction (EGJ), 57 gastric and 15 small intestine) and 43 colorectal cancers. Fragment size analysis revealed that the mean nucleotide shifts of five markers (Promega panel) were the highest in the stomach (6.4), followed by colorectum (5.7), small intestine (5.0) and EGJ cancers (mean = 4.0; P = 0.015, versus stomach). All cases showed ≥ 1 nucleotide shift in ≥ 2 markers and were considered as MSI-high. However, when the cut-off was set to ≥ 3 nucleotide shifts in ≥ 2 markers, three EGJ (37.5%), two small intestine (13.3%) and two gastric (3.5%) cancers showed false-negative results. In addition, cases with isolated loss of MSH6 or PMS2 showed smaller nucleotide shifts than those in others. MSI testing is applicable to non-colorectal gastrointestinal cancers; however, a subset can yield false-negative results due to subtle nucleotide shift in multiple markers. Analysis of paired tumour/normal tissues and careful interpretation is necessary to avoid false-negative results and ensure appropriate treatment. |
1,716 | Automated Radiographic Report Generation Purely on Transformer: A Multicriteria Supervised Approach | Automated radiographic report generation is challenging in at least two aspects. First, medical images are very similar to each other and the visual differences of clinic importance are often fine-grained. Second, the disease-related words may be submerged by many similar sentences describing the common content of the images, causing the abnormal to be misinterpreted as the normal in the worst case. To tackle these challenges, this paper proposes a pure transformer-based framework to jointly enforce better visual-textual alignment, multi-label diagnostic classification, and word importance weighting, to facilitate report generation. To the best of our knowledge, this is the first pure transformer-based framework for medical report generation, which enjoys the capacity of transformer in learning long range dependencies for both image regions and sentence words. Specifically, for the first challenge, we design a novel mechanism to embed an auxiliary image-text matching objective into the transformer's encoder-decoder structure, so that better correlated image and text features could be learned to help a report to discriminate similar images. For the second challenge, we integrate an additional multi-label classification task into our framework to guide the model in making correct diagnostic predictions. Also, a term-weighting scheme is proposed to reflect the importance of words for training so that our model would not miss key discriminative information. Our work achieves promising performance over the state-of-the-arts on two benchmark datasets, including the largest dataset MIMIC-CXR. |
1,717 | BSUV-Net 2.0: Spatio-Temporal Data Augmentations for Video-Agnostic Supervised Background Subtraction | Background subtraction (BGS) is a fundamental video processing task which is a key component of many applications. Deep learning-based supervised algorithms achieve very good performance in BGS; however, most of these algorithms are optimized for either a specific video or a group of videos, and their performance decreases dramatically when applied to unseen videos. Recently, several papers addressed this problem and proposed video-agnostic supervised BGS algorithms. However, nearly all of the data augmentations used in these algorithms are limited to the spatial domain and do not account for temporal variations that naturally occur in video data. In this work, we introduce spatio-temporal data augmentations and apply them to one of the leading video-agnostic BGS algorithms, BSUV-Net. We also introduce a new cross-validation training and evaluation strategy for the CDNet-2014 dataset that makes it possible to fairly and easily compare the performance of various video-agnostic supervised BGS algorithms. Our new model trained using the proposed data augmentations, named BSUV-Net 2.0, significantly outperforms state-of-the-art algorithms evaluated on unseen videos of CDNet-2014. We also evaluate the cross-dataset generalization capacity of BSUV-Net 2.0 by training it solely on CDNet-2014 videos and evaluating its performance on the LASIESTA dataset. Overall, BSUV-Net 2.0 provides an approximately 5% improvement in the F-score over state-of-the-art methods on unseen videos of the CDNet-2014 and LASIESTA datasets. Furthermore, we develop a real-time variant of our model, which we call Fast BSUV-Net 2.0, whose performance is close to the state of the art. |
1,718 | Alternative mammalian strategies leading towards gastrulation: losing polar trophoblast (Rauber's layer) or gaining an epiblast cavity | Using embryological data from 14 mammalian orders, the hypothesis is presented that in placental mammals, epiblast cavitation and polar trophoblast loss are alternative developmental solutions to shield the central epiblast from extraembryonic signalling. It is argued that such reciprocal signalling between the edge of the epiblast and the adjoining polar trophoblast or edge of the mural trophoblast or with the amniotic ectoderm is necessary for the induction of gastrulation. This article is part of the theme issue 'Extraembryonic tissues: exploring concepts, definitions and functions across the animal kingdom'. |
1,719 | RelaHash: Deep Hashing With Relative Position | Deep hashing has been widely used to encode binary hash codes for the approximate nearest neighbor problem. It has shown superior performance in terms of its ability to index high-level features by learning compact binary codes. Many recent state-of-the-art deep hashing methods use multiple loss terms at once, thus introducing optimization difficulty and possibly resulting in sub-optimal hash codes. OrthoHash was proposed to replace those losses with just a single loss function. However, the quantization error minimization problem in OrthoHash is still not addressed effectively. In this paper, we take one step further and propose a single-loss model that can effectively minimize the quantization error without explicit loss terms. Specifically, we introduce a new way to measure the similarity between the relaxed codes and the centroids, called relative similarity. The relative similarity is the similarity between the relative position representation of continuous codes and the normalized centroids. The resulting model outperforms many state-of-the-art deep hashing models on popular benchmark datasets. |
1,720 | Secretory pattern and regulatory mechanism of growth hormone in cattle | The ultradian rhythm of growth hormone (GH) secretion has been known in several animal species for years and has recently been observed in cattle. Although the physiological significance of the rhythm is not yet fully understood, it appears essential for normal growth. In this review, previous studies concerning the GH secretory pattern in cattle, including its ultradian rhythm, are introduced and the regulatory mechanism is discussed on the basis of recent findings. |
1,721 | Architecture of the chikungunya virus replication organelle | Alphaviruses are mosquito-borne viruses that cause serious disease in humans and other mammals. Along with its mosquito vector, the Alphavirus chikungunya virus (CHIKV) has spread explosively in the last 20 years, and there is no approved treatment for chikungunya fever. On the plasma membrane of the infected cell, CHIKV generates dedicated organelles for viral RNA replication, so-called spherules. Whereas structures exist for several viral proteins that make up the spherule, the architecture of the full organelle is unknown. Here, we use cryo-electron tomography to image CHIKV spherules in their cellular context. This reveals that the viral protein nsP1 serves as a base for the assembly of a larger protein complex at the neck of the membrane bud. Biochemical assays show that the viral helicase-protease nsP2, while having no membrane affinity on its own, is recruited to membranes by nsP1. The tomograms further reveal that full-sized spherules contain a single copy of the viral genome in double-stranded form. Finally, we present a mathematical model that explains the membrane remodeling of the spherule in terms of the pressure exerted on the membrane by the polymerizing RNA, which provides a good agreement with the experimental data. The energy released by RNA polymerization is found to be sufficient to remodel the membrane to the characteristic spherule shape. |
1,722 | 3D Reconstruction of "In-the-Wild" Faces in Images and Videos | 3D Morphable Models (3DMMs) are powerful statistical models of 3D facial shape and texture, and are among the state-of-the-art methods for reconstructing facial shape from single images. With the advent of new 3D sensors, many 3D facial datasets have been collected containing both neutral as well as expressive faces. However, all datasets are captured under controlled conditions. Thus, even though powerful 3D facial shape models can be learnt from such data, it is difficult to build statistical texture models that are sufficient to reconstruct faces captured in unconstrained conditions ("in-the-wild"). In this paper, we propose the first "in-the-wild" 3DMM by combining a statistical model of facial identity and expression shape with an "in-the-wild" texture model. We show that such an approach allows for the development of a greatly simplified fitting procedure for images and videos, as there is no need to optimise with regards to the illumination parameters. We have collected three new benchmarks that combine "in-the-wild" images and video with ground truth 3D facial geometry, the first of their kind, and report extensive quantitative evaluations using them that demonstrate our method is state-of-the-art. |
1,723 | On favourable conditions for adaptive random testing | Recently, adaptive random testing (ART) has been developed to enhance the fault-detection effectiveness of random testing (RT). It has been known in general that the fault-detection effectiveness of ART depends on the distribution of failure-causing inputs, yet this understanding is in coarse terms without precise details. In this paper, we conduct an in-depth investigation into the factors related to the distribution of failure-causing inputs that have an impact on the fault-detection effectiveness of ART. This paper gives a comprehensive analysis of the favourable conditions for ART. Our study contributes to the knowledge of ART and provides useful information for testers to decide when it is more cost-effective to use ART. |
1,724 | Adaptive Image Enhancement Based on Guide Image and Fraction-Power Transformation for Wireless Capsule Endoscopy | Good image quality in wireless capsule endoscopy (WCE) is key for doctors to diagnose gastrointestinal (GI) tract diseases. However, the poor illumination, limited performance of the camera in WCE, and complex environment in the GI tract usually result in low-quality endoscopic images. Existing image enhancement methods only use the information of the image itself or multiple images of the same scene to accomplish the enhancement. In this paper, we propose an adaptive image enhancement method based on a guide image and fraction-power transformation. First, intensities of endoscopic images are analyzed to assess the illumination conditions. Second, images captured under poor illumination conditions are enhanced by a brand-new image enhancement method called adaptive guide image based enhancement (AGIE). AGIE enhances low-quality images by using the information of a good-quality image of a similar scene. Otherwise, images are enhanced by the proposed adaptive fraction-power transformation. Experimental results show that the proposed method improves the average intensity of endoscopic images by 64.20% and the average local entropy by 31.25%, which outperforms state-of-the-art methods. |
1,725 | Crane: Mitigating Accelerator Under-utilization Caused by Sparsity Irregularities in CNNs | Convolutional neural networks (CNNs) have achieved great success in numerous AI applications. To improve the inference efficiency of CNNs, researchers have proposed various pruning techniques to reduce both computation intensity and storage overhead. These pruning techniques result in multi-level sparsity irregularities in CNNs. Together with the sparsity in activation matrices, which is induced by the use of the ReLU activation function, all these sparsity irregularities cause a serious problem of computation resource under-utilization in sparse CNN accelerators. To mitigate this problem, we propose a load-balancing method based on a workload stealing technique. We demonstrate that this method can be applied to the two major inference data-flows, which cover all state-of-the-art sparse CNN accelerators. Based on this method, we present an accelerator, called Crane, which addresses all kinds of sparsity irregularities in CNNs. We perform a fair comparison between Crane and state-of-the-art prior approaches. Experimental results show that Crane improves performance by 27%-88% and reduces energy consumption by 16%-48%, respectively, compared to the counterparts. |
1,726 | Segmentation of Vasculature From Fluorescently Labeled Endothelial Cells in Multi-Photon Microscopy Images | Vasculature is known to be of key biological significance, especially in the study of tumors. As such, considerable effort has been focused on the automated segmentation of vasculature in medical and pre-clinical images. The majority of vascular segmentation methods focus on blood-pool labeling methods; however, particularly in the study of tumors, it is of particular interest to be able to visualize both the perfused and the non-perfused vasculature. Imaging vasculature by highlighting the endothelium provides a way to separate the morphology of vasculature from the potentially confounding factor of perfusion. Here, we present a method for the segmentation of tumor vasculature in 3D fluorescence microscopic images using signals from the endothelial and surrounding cells. We show that our method can provide complete and semantically meaningful segmentations of complex vasculature using a supervoxel-Markov random field approach. We show that, in terms of extracting meaningful segmentations of the vasculature, our method outperforms both a state-of-the-art method specific to these data and more classical vasculature segmentation methods. |
1,727 | Neuroplasticity and MRI: A perfect match | Numerous studies have illustrated the benefits of physical workout and cognitive exercise on brain function and structure and, more importantly, on decelerating cognitive decline in old age and promoting functional rehabilitation following injury. Despite these behavioral observations, the exact mechanisms underlying these neuroplastic phenomena remain obscure. This gap illustrates the need for carefully designed in-depth studies using valid models and translational tools which allow the observed events to be uncovered down to the molecular level. We promote the use of in vivo magnetic resonance imaging (MRI) because it is a powerful translational imaging technique able to extract functional, structural, and biochemical information from the entire brain. Advanced processing techniques allow performing voxel-based analyses which are capable of detecting novel loci implicated in specific neuroplastic events beyond traditional regions-of-interest analyses. In addition, its non-invasive character makes it currently the best global imaging tool for performing dynamic longitudinal studies on the same living subject, thus allowing exploration of the effects of experience, training, treatment, etc., in parallel to additional measures such as age, cognitive performance scores, hormone levels, and many others. The aim of this review is (i) to introduce how different animal models contributed to extend the knowledge on neuroplasticity in both health and disease, over different life stages and upon various experiences, and (ii) to illustrate how specific MRI techniques can be applied successfully to inform on the fundamental mechanisms underlying experience-dependent or activity-induced neuroplasticity, including cognitive processes. |
1,728 | Automatic Resonance Tuning Technique for an Ultra-Broadband Piezoelectric Energy Harvester | The main drawback of energy harvesting using the direct piezoelectric effect is that the maximum electric power is generated at the fundamental resonance frequency. This can clearly be observed in the size and dimensions of the components of any particular energy harvester. In this paper, we investigate a newly proposed energy harvesting device that employs the Automatic Resonance Tuning (ART) technique to enhance the energy harvesting mechanism. The proposed harvester is composed of a cantilever beam and a sliding mass with varying location. ART automatically adjusts the energy harvester's natural frequency according to the ambient vibration natural frequency. The ART energy harvester modifies the natural frequency of the harvester using the motion of the mobile (sliding) mass. An analytical model of the proposed harvester is presented. The investigation is conducted using the Finite Element Method (FEM). The FEM COMSOL model is successfully validated using previously published experimental results. The results of the FEM were compared with the experimental and analytical results. The validated model is then used to demonstrate the displacement profile, the output voltage response, and the natural frequency of the harvester at different mass positions. The bandwidth of the ART harvester (17 Hz) is found to be 1130% larger compared to the fixed resonance energy harvester. It is observed that the proposed broadband design provides a high power density of 0.05 mW mm⁻³. The piezoelectric dimensions and load resistance are also optimized to maximize the output voltage and output power. |
1,729 | Indicators of Targeted Physical Fitness in Judo and Jujutsu-Preliminary Results of Research | (1) Study aim: This is a comparative study of judo and jujutsu practitioners, and it has intrinsic value. The aim of this study was to compare practitioners of judo and the similar martial art jujutsu with regard to manual abilities. The study applied the measurement of simple reaction time in response to a visual stimulus and a handgrip measurement. (2) Materials and Methods: The group, comprising N = 69 black belts from Poland and Germany (30 from judo and 39 from jujutsu), completed two trials: "grasping of the Ditrich rod" and a dynamometric handgrip measurement. The analysis of the results involved the calculation of arithmetic means, standard deviations, and Pearson correlations. Analyses of differences (Mann-Whitney U test and Student's t-test) were also applied to establish statistical differences. (3) Results: In the handgrip measurement test, the subjects from Poland (both those practicing judo and jujutsu) achieved better results than their German counterparts. In the test involving grasping of the Ditrich rod, a positive correlation was demonstrated in the group of German judokas between the age and reaction time of the subjects (r(xy) = 0.66, p < 0.05), as well as in the group of jujutsu subjects between body weight and reaction time (r(xy) = 0.49, p < 0.05). A significant and strong correlation between handgrip and weight was also established for the group of German judokas (r(xy) = 0.75, p < 0.05). In Polish competitors, a correlation was only established between age and handgrip measurements (r(xy) = 0.49, p < 0.05). (4) Conclusions: Simple reaction times in response to visual stimulation were shorter in the subjects practicing the martial art jujutsu. However, the statement regarding the advantage of the judokas in terms of handgrip force was not confirmed by the results. |
1,730 | AUTS2 Controls Neuronal Lineage Choice Through a Novel PRC1-Independent Complex and BMP Inhibition | Although Autism Susceptibility Candidate 2 (AUTS2) is a prominent risk factor for neurodevelopmental disorders (NDD), it remains unclear how it controls the neurodevelopmental program. Our studies investigated the role of AUTS2 in neuronal differentiation and discovered that AUTS2, together with WDR68 and SKI, forms a novel protein complex (AWS) specifically in neuronal progenitors and promotes neuronal differentiation through inhibiting BMP signaling. Genomic and biochemical analyses demonstrated that the AWS complex achieves this effect by recruiting the CUL4 E3 ubiquitin ligase complex to mediate poly-ubiquitination and subsequent proteasomal degradation of phosphorylated SMAD1/5/9. Furthermore, using primary cortical neurons, we observed aberrant BMP signaling and dysregulated expression of neuronal genes upon manipulating the AWS complex, indicating that the AWS-CUL4-BMP axis plays a role in regulating neuronal lineage specification in vivo. Thus, our findings uncover a sophisticated cellular signaling network mobilized by a prominent NDD risk factor, presenting multiple potential therapeutic targets for NDD. |
1,731 | Efficient Modeling of Distributed Dynamic Self-Heating and Thermal Coupling in Multifinger SiGe HBTs | In this paper, we propose an efficient model for dynamic self-heating and thermal coupling in a multifinger transistor system. Essentially, the proposed model is an improvement over a state-of-the-art existing model from the viewpoint of simulation time. The Verilog-A implementation of the proposed model does not require the use of any voltage-controlled voltage source. In a multifinger transistor system with n emitter fingers, our model uses 3n extra nodes in the Verilog-A implementation, whereas the state-of-the-art model requires 2n² - n. Note that our model requires no extra nodes for implementing the thermal coupling effects. We show that the transient simulation results of our model are identical to those of the state-of-the-art model. Electrothermal simulation using the proposed thermal model shows good agreement with the measured data. It is found that the proposed model simulates more than 40% faster than the existing model for a ring oscillator circuit. |
1,732 | Sparse graphs with smoothness constraints: Application to dimensionality reduction and semi-supervised classification | Sparse representation is a useful tool in the machine learning and pattern recognition areas. Sparse graphs (graphs constructed using sparse representation of data) have proved to be very informative for many learning tasks such as label propagation, embedding, and clustering. It has been shown that constructing an informative graph is one of the most important steps, since it significantly affects the final performance of the subsequent graph-based learning algorithm. In this paper, we introduce a new sparse graph construction method that integrates manifold constraints on the unknown sparse codes as a graph regularizer. These constraints seem to be a natural regularizer that was discarded in existing state-of-the-art graph construction methods. This regularizer imposes constraints on the graph coefficients in the same way a locality-preserving constraint is imposed on data projections in non-linear manifold learning. The proposed method is termed Sparse Graph with Laplacian Smoothness (SGLS). We also propose a kernelized version of the SGLS method. A series of experimental results on several public image datasets show that the proposed methods can outperform many state-of-the-art methods for the tasks of label propagation and nonlinear and linear embedding. (C) 2019 Elsevier Ltd. All rights reserved. |
1,733 | Astrocyte-derived sEVs alleviate fibrosis and promote functional recovery after spinal cord injury in rats | After spinal cord injury (SCI), there are complex pathological states in which the formation of scar tissues is a great obstacle to nerve repair. There are currently many potential treatments that can help to reduce the formation of glial scars. However, little attention has been paid to fibrous scarring. Astrocytes have neuroprotective effects on the central nervous system. Similar to other cells, they release small extracellular vesicles (sEVs). Astrocytes, pericytes, endothelial cells, and the basement membrane constitute the blood-spinal cord barrier. It can be seen that astrocytes are structurally closely related to pericytes that form fibrous scars. In this study, astrocyte-derived sEVs were injected into rats with SCI to observe the formation of fibrosis at the site of spinal cord injury. We found that astrocyte-derived sEVs can be ingested by pericytes in vitro and inhibit the proliferation and migration of pericytes. In vivo, astrocyte-derived sEVs could converge around the injury, promote tissue repair, and reduce fibrosis formation, thus promoting the recovery of limb function and improving walking ability. In conclusion, sEVs derived from astrocytes can reduce fibrosis and improve functional recovery after SCI, which provides a new possibility for the study of SCI. |
1,734 | Bilateral motor cortex functional differences in left-handed approaching-avoiding behavior | Automatic action tendencies occur at behavioral and neurophysiological levels during task performance with the dominant right hand, with shorter reaction times (RTs) and higher excitability of the contralateral primary motor cortex (M1) during automatic vs. regulated behavior. However, effects associated with the non-dominant left-hand in approaching-avoiding behavior remain unclear. Here, we used transcranial magnetic stimulation during the performance by 18 participants of an approaching-avoiding task using the non-dominant left hand. Single-pulse transcranial magnetic stimulation was applied over left or right M1 at 150 and 300 ms after the onset of an emotional stimulus. RTs and motor-evoked potentials (MEPs) were recorded. Significant automatic action tendencies were observed at the behavioral level. Higher MEP amplitudes were detected 150 ms after stimulus onset from the right hand (non-task hand, corresponding to left M1) during regulated behavior compared with during automatic behavior. However, no significant modulation was found for MEP amplitudes from the left hand (task hand, corresponding to right M1). These findings suggested that left M1 may play a principal role in the early phase of mediating left-handed movement toward an emotional stimulus. |
1,735 | Training Multi-Bit Quantized and Binarized Networks with a Learnable Symmetric Quantizer | Quantizing the weights and activations of deep neural networks is essential for deploying them in resource-constrained devices or cloud platforms for at-scale services. While binarization is a special case of quantization, this extreme case often leads to several training difficulties and necessitates specialized models and training methods. As a result, recent quantization methods do not provide binarization, thus losing the most resource-efficient option, and quantized and binarized networks have been distinct research areas. We examine binarization difficulties in a quantization framework and find that all we need to enable binary training is a symmetric quantizer, good initialization, and careful hyperparameter selection. These techniques also lead to substantial improvements in multi-bit quantization. We demonstrate our unified quantization framework, denoted as UniQ, on the ImageNet dataset with various architectures such as ResNet-18/-34 and MobileNetV2. For multi-bit quantization, UniQ outperforms existing methods, achieving state-of-the-art accuracy. In binarization, the achieved accuracy is comparable to existing state-of-the-art methods even without modifying the original architectures. |
1,736 | Miniature directive antennas | This paper presents work carried out to assess the feasibility of miniature directive antennas. It is based on an analysis of the physical limits of antenna directivity in general and, in particular, as a function of the antennas' compact dimensions. A state-of-the-art review is carried out to identify and classify techniques for increasing the directivity of compact antennas. |
1,737 | Bangla Speech Emotion Recognition and Cross-Lingual Study Using Deep CNN and BLSTM Networks | In this study, we have presented a deep learning-based implementation for speech emotion recognition (SER). The system combines a deep convolutional neural network (DCNN) and a bidirectional long-short term memory (BLSTM) network with a time-distributed flatten (TDF) layer. The proposed model has been applied for the recently built audio-only Bangla emotional speech corpus SUBESCO. A series of experiments were carried out to analyze all the models discussed in this paper for baseline, cross-lingual, and multilingual training-testing setups. The experimental results reveal that the model with a TDF layer achieves better performance compared with other state-of-the-art CNN-based SER models which can work on both temporal and sequential representation of emotions. For the cross-lingual experiments, cross-corpus training, multi-corpus training, and transfer learning were employed for the Bangla and English languages using the SUBESCO and RAVDESS datasets. The proposed model has attained a state-of-the-art perceptual efficiency achieving weighted accuracies (WAs) of 86.9%, and 82.7% for the SUBESCO and RAVDESS datasets, respectively. |
1,738 | HMiner: Efficiently mining high utility itemsets | The high utility itemset mining problem uses the notion of utilities to discover interesting and actionable patterns. Several data structures and heuristic methods have been proposed in the literature to efficiently mine high utility itemsets. This paper advances the state of the art and presents HMiner, a high utility itemset mining method. HMiner utilizes a few novel ideas and presents a compact utility list and virtual hyperlink data structure for storing itemset information. It also makes use of several pruning strategies for efficiently mining high utility itemsets. The proposed ideas were evaluated on a set of benchmark sparse and dense datasets. The execution time improvements ranged from a modest thirty percent to three orders of magnitude across several benchmark datasets. The memory consumption requirements also showed up to an order of magnitude improvement over the state-of-the-art methods. In general, HMiner was found to work well in the dense regions of both sparse and dense benchmark datasets. (C) 2017 Elsevier Ltd. All rights reserved. |
1,739 | Identifying Autism Spectrum Disorder With Multi-Site fMRI via Low-Rank Domain Adaptation | Autism spectrum disorder (ASD) is a neurodevelopmental disorder that is characterized by a wide range of symptoms. Identifying biomarkers for accurate diagnosis is crucial for early intervention of ASD. While multi-site data increase sample size and statistical power, they suffer from inter-site heterogeneity. To address this issue, we propose a multi-site adaption framework via low-rank representation decomposition (maLRR) for ASD identification based on functional MRI (fMRI). The main idea is to determine a common low-rank representation for data from the multiple sites, aiming to reduce differences in data distributions. Treating one site as a target domain and the remaining sites as source domains, data from these domains are transformed (i.e., adapted) to a common space using low-rank representation. To reduce data heterogeneity between the target and source domains, data from the source domains are linearly represented in the common space by those from the target domain. We evaluated the proposed method on both synthetic and real multi-site fMRI data for ASD identification. The results suggest that our method yields superior performance over several state-of-the-art domain adaptation methods. |
1,740 | The role of landscape installations in climate change communication | Engaging the public in the issue of climate change is critical in fostering the support required for climate change adaptation. Designers and artists can contribute to public engagement using the landscape as a setting and medium to visualize climate change futures. This research note presents the case example of High Tide, a temporary landscape installation in Boston, MA, designed to bring attention to projected flooding in the area due to sea level rise. Our study sought to pilot the use of social science methods to gain initial insight on whether a landscape installation, through its accessible and site-specific qualities, could engage local audiences in the subject of climate change. Our findings provide an initial proof-of-concept for the role of public art in contributing to public engagement by bringing attention to and visualizing local effects of climate change using the landscape as a publicly accessible setting. Future research using robust social science methods would further illuminate these issues. |
1,741 | Channel spatial attention based single-shot object detector for autonomous vehicles | Real-time object detection with high accuracy is a major concern for autonomous vehicles to ensure safety. Recently, many state-of-the-art methods have used Convolutional Neural Networks (CNNs) for object detection. Although these methods provide better results, balancing the trade-off between accuracy and real-time detection remains a challenging task. High accuracy helps the vehicle avoid collisions and abide by traffic rules, while faster speed helps it make decisions quickly. In this paper, single-shot object detection provides faster results, and the attention module helps to provide more accurate detection. The channel attention mechanism provides more fine-grained, refined features and emphasizes 'what' is a semantic part of a given input. Apart from the channel attention mechanism, spatial attention emphasizes 'where' the meaningful information is, acting as a booster for the attention block. The proposed model incorporates these two attention mechanisms sequentially, i.e., channel (RGB-wise) as well as spatial attention, for single-shot object detection (CSA-SS). The proposed model is trained and tested using challenging datasets such as KITTI and Berkeley Deep Drive (BDD). The experimental results show that the proposed model surpasses state-of-the-art techniques by 1.66 and 1.13 mAP for the KITTI and BDD datasets, respectively. |
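Row 1,741 applies channel attention followed by spatial attention. The sketch below is a CBAM-style module consistent with that description; the reduction ratio, spatial kernel size, and where the block sits in the SSD backbone are assumptions, not the paper's configuration.

```python
# CBAM-like sketch of sequential channel -> spatial attention; the reduction
# ratio, kernel size, and feature-map size below are illustrative assumptions.
import torch
import torch.nn as nn

class ChannelSpatialAttention(nn.Module):
    def __init__(self, channels, reduction=16, spatial_kernel=7):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels))
        self.spatial = nn.Conv2d(2, 1, spatial_kernel,
                                 padding=spatial_kernel // 2)

    def forward(self, x):
        b, c, _, _ = x.shape
        # Channel attention: "what" is informative.
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        # Spatial attention: "where" is informative.
        pooled = torch.cat([x.mean(dim=1, keepdim=True),
                            x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(pooled))

feat = torch.randn(2, 256, 38, 38)
out = ChannelSpatialAttention(256)(feat)  # same shape as the input feature map
```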
1,742 | Uncovering the impact of income inequality and population aging on carbon emission efficiency: An empirical analysis of 139 countries | Income inequality and carbon emission efficiency are the primary issues that need to be addressed to achieve UN sustainable development goals. However, research on the relationship between income inequality and carbon emission efficiency has not received enough attention. To more comprehensively understand how income inequality affects carbon emission efficiency, and how aging and economic growth affect the relationship between income inequality and carbon emissions efficiency, fixed effect regression estimation and threshold effect regression estimation approaches are developed based on panel data of 139 countries from 1998 to 2018. The results show that: (i) there is an inhibitory effect of income inequality on the improvement of carbon emission efficiency; (ii) under the influence of aging, there is a U-shaped relationship between income inequality and carbon emission efficiency, that is, income inequality has an inhibitory effect on the improvement of carbon emission efficiency before promoting it; (iii) along with the rapid economic growth, the inhibitory effect of income inequality on carbon emission efficiency increases, that is, there is an inverted U-shaped relationship between income inequality and carbon emission efficiency. Finally, we combine the changes in spatial and temporal distributions to propose corresponding policy recommendations. |
1,743 | Knowledge of identity reduces variability in trait judgements across face images | Faces vary from image to image, eliciting different judgements of traits and often different judgements of identity. Knowledge that two face images belong to the same person facilitates the processing of identity information across images, but it is unclear if this also applies to trait judgements. In this preregistered study, participants (N = 100) rated the same 340 face images on perceived trustworthiness, dominance, or attractiveness presented in randomised order and again later presented in sets consisting of the same identity. We also explored the role of implicit person theory beliefs in the variability of social judgements across images. We found that judgements of trustworthiness varied less when images were presented in sets consisting of the same identity than in randomised order and were more consistent for images presented later in a set than those presented earlier. However, knowledge of identity had little effect on perceptions of dominance and attractiveness. Finally, implicit person theory beliefs were not associated with variability in social judgements and did not account for effects of knowledge of identity. Our findings suggest that knowledge of identity and perceptual familiarity stabilises judgements of trustworthiness, but not perceptions of dominance and attractiveness. |
1,744 | Per- and polyfluoroalkyl substances activate UPR pathway, induce steatosis and fibrosis in liver cells | Per- and polyfluoroalkyl substances (PFAS), which include perfluorooctanoic acid (PFOA), heptafluorobutyric acid (HFBA), and perfluorotetradecanoic acid (PFTA), are commonly occurring organic pollutants. Exposure to PFAS affects the immune system, thyroid and kidney function, lipid metabolism, and insulin signaling and is also involved in the development of fatty liver disease and cancer. The molecular mechanisms by which PFAS cause fatty liver disease are not understood in detail. In the current study, we investigated the effect of low, physiologically relevant concentrations of PFOA, HFBA, and PFTA on cell survival, steatosis, and fibrogenic signaling in liver cell models. Exposure to PFOA and HFBA (10 to 1000 nM) specifically promoted cell survival in HepaRG and HepG2 cells. PFAS increased the expression of the inflammatory markers TNFα and IL6, increased endogenous reactive oxygen species (ROS) production, and activated the unfolded protein response (UPR). Furthermore, PFAS enhanced cell steatosis and fibrosis in HepaRG and HepG2 cells, which were accompanied by upregulation of the expression of steatosis (SCD1, ACC, SRBP1, and FASN) and fibrosis (TIMP2, p21, TGFβ) biomarkers, respectively. RNA-seq data suggested that chronic exposures to PFOA modulated the expression of fatty acid/lipid metabolic genes that are involved in the development of NAFLD and fatty liver disease. Collectively, our data suggest that acute/chronic physiologically relevant concentrations of PFAS enhance liver cell steatosis and fibrosis through activation of the UPR pathway and modulation of NAFLD-related gene expression. |
1,745 | Performance and mechanism of azo dyes degradation and greenhouse gases reduction in single-chamber electroactive constructed wetland system | A single-chamber microbial fuel cell-microbial electrolytic cell with a novel constructed wetland system was proposed for synergistic degradation of congo red and reduction in emissions of greenhouse gases. The closed-circuit system showed higher chemical oxygen demand and congo red removal efficiencies by 98 % and 96 % on average, respectively, than traditional constructed wetland. It could also significantly reduce the emissions of CH4 and N2O (about 52 % CO2-equivalents) by increasing the electron transfer. Microbial community analysis demonstrated that the progressive enrichment of dye-degrading microorganisms (Comamonas), electroactive bacteria (Tolumonas, Trichococcus) and denitrifying microorganisms (Dechloromonas) promoted pollutant removal and electron transfer. Based on gene abundance of xenobiotics biodegradation, the congo red biodegradation pathway was described as congo red → naphthalene and alcohols → CO2 and H2O. In summary, the single-chamber closed-circuit system could significantly improve the degradation of congo red and reduce the emissions of greenhouse gases by influencing electron transfer and microbial activity. |
1,746 | Effects of metal salt addition on odor and process stability during the anaerobic digestion of municipal waste sludge | Anaerobic digestion (AD) is an effective way to recover energy and nutrients from organic waste; however, several issues including the solubilization of bound nutrients and the production of corrosive, highly odorous and toxic volatile sulfur compounds (VSCs) in AD biogas can limit its wider adoption. This study explored the effects of adding two different doses of ferric chloride, aluminum sulfate and magnesium hydroxide directly to the feed of complete mix semi-continuously fed mesophilic ADs on eight of the most odorous VSCs in AD biogas at three different organic loading rates (OLR). Ferric chloride was shown to be extremely effective in reducing VSCs by up to 87%, aluminum sulfate had the opposite effect and increased VSC levels by up to 920%, while magnesium hydroxide was not shown to have any significant impact. Ferric chloride, aluminum sulfate and magnesium hydroxide were effective in reducing the concentration of orthophosphate in AD effluent although both levels of alum addition caused digester failure at elevated OLRs. Extensive foaming was observed within the magnesium hydroxide dosed digesters, particularly at higher doses and high OLRs. Certain metal salt additions may be a valuable tool in overcoming barriers to AD and to meet regulatory targets. |
1,747 | Quercetin Fatty Acid Monoesters (C2:0-C18:0): Enzymatic Preparation and Antioxidant Activity | Quercetin monoesters were prepared via a one-step enzymatic transesterification. The main acylation products were eight quercetin ester derivatives consisting of acyl groups ranging from 2 to 18 carbon atoms (acetate, butyrate, caproate, caprylate, caprate, laurate, myristate, and stearate). The purified quercetin esters were structurally characterized by LC-ESI-ToF and NMR HSQC. Meanwhile, several classical chemical (DPPH, ABTS, FRAP, and Fe2+ chelation assays), food (β-carotene bleaching assay), and biological (LDL and DNA oxidation assays) models were constructed to evaluate and systematically compare their antioxidant efficacy. O-Acylation increased the lipophilicity of quercetin derivatives, and lipophilicity increased with the increasing chain length of the acyl group. The dual effect of the acyl chain length on biasing quercetin monoesters' antioxidant efficacies has been summarized and verified. Overall, the results imply that the acylated quercetin derivatives have great potential as functional/health-beneficial ingredients for use in lipid-based matrices of cosmetics, supplements, and nutraceuticals. |
1,748 | Decreased Interleukin-1 Family Cytokine Production in Patients with Nontuberculous Mycobacterial Lung Disease | Nontuberculous mycobacteria (NTM) cause pulmonary disease in individuals without obvious immunodeficiency. This study was initiated to gain insight into the immunological factors that predispose persons to NTM pulmonary disease (NTMPD). Blood was obtained from 15 pairs of NTMPD patients and their healthy household contacts. Peripheral blood mononuclear cells (PBMCs) were stimulated with the Mycobacterium avium complex (MAC). A total of 34 cytokines and chemokines were evaluated in plasma and PBMC culture supernatants using multiplex immunoassays, and gene expression in the PBMCs was determined using real-time PCR. PBMCs from NTMPD patients produced significantly less interleukin-1β (IL-1β), IL-18, IL-1α, and IL-10 than PBMCs from their healthy household contacts in response to MAC. Although plasma RANTES levels were high in NTMPD patients, they had no effect on IL-1β production by macrophages infected with MAC. Toll-like receptor 2 (TLR2) and TWIK2 (a two-pore domain K+ channel) were impaired in response to MAC in PBMCs of NTMPD patients. A TLR2 inhibitor decreased all four cytokines, whereas a two-pore domain K+ channel inhibitor decreased the production of IL-1β, IL-18, and IL-1α, but not IL-10, by MAC-stimulated PBMCs and monocytes. The ratio of monocytes was reduced in whole blood of NTMPD patients compared with that of healthy household contacts. A reduced monocyte ratio might contribute to the attenuated production of IL-1 family cytokines by PBMCs of NTMPD patients in response to MAC stimulations. Collectively, our findings suggest that the attenuated IL-1 response may increase susceptibility to NTM pulmonary infection through multiple factors, including impaired expression of the TLR2 and TWIK2 and reduced monocyte ratio. IMPORTANCE Upon MAC stimulation, the production of IL-1 family cytokines and IL-10 by PBMCs of NTMPD patients was attenuated compared with that of healthy household contacts. Upon MAC stimulation, the expression of TLR2 and TWIK2 (one of the two-pore domain K+ channels) was attenuated in PBMCs of NTMPD patients compared with that of healthy household contacts. The production of IL-1 family cytokines by MAC-stimulated PBMCs and MAC-infected monocytes of healthy donors was reduced by a TLR2 inhibitor and two-pore domain K+ channel inhibitor. The ratio of monocytes was reduced in whole blood of NTMPD patients compared with that of healthy household contacts. Collectively, our data suggest that defects in the expression of TLR2 and TWIK2 in human PBMCs or monocytes and reduced monocyte ratio are involved in the reduced production of IL-1 family cytokines, and it may increase susceptibility to NTM pulmonary infection. |
1,749 | Ultrasonic Electric Scalpels Based on a Sliding-Mode Controller With an Auxiliary PLL Frequency Discriminator | The first monolithic state-of-the-art controller was proposed and implemented for an electric scalpel system. A piezoelectric transducer (PT) is driven in ultrasonic resonant frequency to generate electromechanical power for thermal sealing and cold dissection operations. The band-pass filter based oscillator was developed to automatically track the PT's optimal longitudinal resonance. However, under heavy loading conditions, the PT will lock to other unwanted transverse resonant modes and deliver no usable power to the surgical tip. To prevent this abnormal operation, a phase-locked loop based frequency discriminator with intervention and release logic was developed to ensure that the PT always operates at the proper frequency of 55.5 kHz. Another crucial challenge is that the changing of loading conditions induces a motional current sensing mismatch and a pole-zero pair, consequently causing instability and poor response time. Therefore, a sliding mode control method with reduced-order sensing was proposed to handle the extreme load changes and provide a fast power build-up time of 9.2 ms, which is 8% faster than previously reported designs and 49% faster than the best commercial product. Sealing and dissection surgical operations are realized with 17.5 W maximum power. |
1,750 | XEDAR activates the non-canonical NF-κB pathway | Members of the tumor necrosis factor receptor (TNFR) superfamily are involved in a number of physiological and pathological responses by activating a wide variety of intracellular signaling pathways. The X-linked ectodermal dysplasia receptor (XEDAR; also known as EDA2R or TNFRSF27) is a member of the TNFR superfamily that is highly expressed in ectodermal derivatives during embryonic development and binds to ectodysplasin-A2 (EDA-A2), a member of the TNF family that is encoded by the anhidrotic ectodermal dysplasia (EDA) gene. Although XEDAR was first described in the year 2000, its function and molecular mechanism of action is still largely unclear. XEDAR has been reported to activate canonical nuclear factor κB (NF-κB) signaling and mitogen-activated protein (MAP) kinases. Here we report that XEDAR is also able to trigger the non-canonical NF-κB pathway, characterized by the processing of p100 (NF-κB2) into p52, followed by nuclear translocation of p52 and RelB. We provide evidence that XEDAR-induced p100 processing relies on the binding of XEDAR to TRAF3 and TRAF6, and requires the kinase activity of NIK and IKKα. We also show that XEDAR stimulation results in NIK accumulation and that p100 processing is negatively regulated by TRAF3, cIAP1 and A20. |
1,751 | Applications of Transcriptomics in the Research of Antibody-Mediated Rejection in Kidney Transplantation: Progress and Perspectives | Antibody-mediated rejection (ABMR) is the major cause of chronic allograft dysfunction and loss in kidney transplantation. The immunological mechanisms of ABMR that have been featured in the latest studies indicate a highly complex interplay between various immune and nonimmune cell types. Clinical diagnostic standards have long been criticized for being arbitrary and lacking accuracy. Transcriptomic approaches, including microarray and RNA sequencing of allograft biopsies, enable the identification of differential gene expression and the continuous improvement of diagnostics. Given that conventional bulk transcriptomic approaches only reflect the average gene expression but not the status at the single-cell level, thereby ignoring the heterogeneity of the transcriptome across individual cells, single-cell RNA sequencing is emerging as a powerful tool to provide a high-resolution transcriptome map of immune cells, which allows the elucidation of the pathogenesis and may facilitate the development of novel strategies for clinical treatment of ABMR. |
1,752 | Transcending the cube: translating GIScience time and space perspectives in a humanities GIS | This paper discusses a Humanities Geographical Information Systems timespace modeling project, which does not reject the 'space-time' cube model but rather incorporates, translates and metonymically focuses GIScience methodologies through the epistemological prisms of the arts and humanities. The arts and humanities may offer insights which GIScience can consider in order to conceptualise and model timespaces. Employing the Euclidean framing of conventional GIS approaches, it attempts to artistically, metaphorically and metonymically engage the 'space-time' cube concept as a means to suggest a non-linear and fragmented perspective of space and time. The case study presented in this paper triangulates in GIS 'soft' data from a modernist novel depicting postmodern perspectives, empirical data sourced from fieldwork guided by the urban mapping strategies of Guy Debord and the Situationists Internationale and Giambattista Vico's cyclical view of history with Mikhail M. Bakhtin's literary motif of the chronotope as techniques to model contiguous perspectives of linear and cyclical timespace. The paper hopes to encourage ways to reflect upon a rapprochement between humanistic and scientific approaches to modelling space and time with GIS. |
1,753 | A Benchmark for Studying Diabetic Retinopathy: Segmentation, Grading, and Transferability | People with diabetes are at risk of developing an eye disease called diabetic retinopathy (DR). This disease occurs when high blood glucose levels cause damage to blood vessels in the retina. Computer-aided DR diagnosis has become a promising tool for the early detection and severity grading of DR, due to the great success of deep learning. However, most current DR diagnosis systems do not achieve satisfactory performance or interpretability for ophthalmologists, due to the lack of training data with consistent and fine-grained annotations. To address this problem, we construct a large fine-grained annotated DR dataset containing 2,842 images (FGADR). Specifically, this dataset has 1,842 images with pixel-level DR-related lesion annotations, and 1,000 images with image-level labels graded by six board-certified ophthalmologists with intra-rater consistency. The proposed dataset will enable extensive studies on DR diagnosis. Further, we establish three benchmark tasks for evaluation: 1. DR lesion segmentation; 2. DR grading by joint classification and segmentation; 3. Transfer learning for ocular multi-disease identification. Moreover, a novel inductive transfer learning method is introduced for the third task. Extensive experiments using different state-of-the-art methods are conducted on our FGADR dataset, which can serve as baselines for future research. Our dataset will be released in https://csyizhou.github.io/FGADR/. |
1,754 | Smart Substation: State of the Art and Future Development | The smart substation, revolutionarily changing every aspect of the modern substation, is developing fast in the world and being massively deployed in China quickly. A smart substation is typically implemented with a sophisticated combination of smart primary high-voltage equipment and hierarchically networked secondary devices. Based on the IEC61850 communication protocol, the functionalities, such as the information sharing and interoperability among smart electric equipment, are realized in smart substations. It is regarded as the basis for the development of smart grid and represents the future development trend of substation technologies. This paper reviews the fundamental vision of the smart substation. The state of the art and the challenges encountered in the practice of engineering implementation are presented. Future developments to solve the present challenges and promote the development of smart substations are also described. |
1,755 | Hyperacetylation of the C-terminal domain of p53 inhibits the formation of the p53/p21 complex | Given our previous finding that certain tumor-suppressing functions of p53 are exerted by the p53/p21 complex, rather than p53 alone, cells may have a system to regulate the p53/p21 interaction. As p53 binds to p21 via its C-terminal domain, which contains acetylable lysine residues, we investigated whether the C-terminal acetylation of p53 influences the p53/p21 interaction. Indeed, the p53/p21 interaction was reduced when various types of cells (HCT116 colon cancer, A549 lung cancer, and MCF7 breast cancer cells) were treated with MS-275, an inhibitor of SIRT1 (a p53 deacetylase), or with SIRT1-targeting small interfering RNAs. These treatments also increased the acetylation levels of the five lysine residues (K370, K372, K373, K381, K382) in the C-terminal domain of p53. The p53/p21 interaction was also reduced when these lysine residues were substituted with glutamine (an acetylation mimetic), but not arginine (an unacetylable lysine analog). While the inhibitory effect of the lysine-to-glutamine substitution was evident upon the substitution of all the five lysine residues, the substitution of only two (K381, K382) or three residues (K370, K372, K373) was less effective. Consistently, the five substitutions reduced the ability of p53 to regulate cell invasion and death by liberating Bax from Bcl-w. Overall, our data suggest that the acetylation, especially the hyperacetylation, of the p53 C-terminal domain suppresses the p53/p21-complex-dependent functions of p53 by inhibiting the p53/p21 interaction. We propose that cellular components involved in the acetylation or deacetylation of the p53 C-terminus are critical regulators of the formation of the p53/p21 complex. |
1,756 | A Survey of the State-of-the-Art Localization Techniques and Their Potentials for Autonomous Vehicle Applications | For an autonomous vehicle to operate safely and effectively, an accurate and robust localization system is essential. While there are a variety of vehicle localization techniques in the literature, there is a lack of effort in comparing these techniques and identifying their potentials and limitations for autonomous vehicle applications. Hence, this paper evaluates the state-of-the-art vehicle localization techniques and investigates their applicability to autonomous vehicles. The analysis starts by discussing the techniques which merely use the information obtained from on-board vehicle sensors. It is shown that although some techniques can achieve the accuracy required for autonomous driving, they suffer from the high cost of the sensors and from sensor performance limitations in different driving scenarios (e.g., cornering and intersections) and different environmental conditions (e.g., darkness and snow). This paper continues the analysis by considering the techniques which benefit from off-board information obtained from V2X communication channels, in addition to vehicle sensory information. The analysis shows that augmenting sensory information with off-board information has the potential to yield low-cost localization systems with high accuracy and robustness; however, their performance depends on the penetration rate of nearby connected vehicles or infrastructure and the quality of network service. |
1,757 | Schemes for salt recovery from seawater and RO brines using chemical precipitation | Disposal of brines from seawater desalination plants affects marine ecology and imposes significant financial burdens. Recovery of salts from brines mitigates both sides of the problem and opens opportunities for new state-of-the-art desalination/salt production complexes. Three important separation processes could be adopted as the cornerstones of a state-of-the-art salt recovery production line, namely chemical treatment, nanofiltration (NF), and ion exchange. This paper explores the performance of selected precipitants on saline solutions representing synthetic seawater, natural seawater, and two reverse osmosis (RO) brines obtained from desalination plants located on Mediterranean (B1) and Red Sea (B2) shores. Sodium carbonate enabled 95.5, 89, and 95% recovery of calcium (Ca) from seawater, Mediterranean, and Red Sea RO brines, respectively, while magnesium (Mg) recovery from the chemically treated schemes lay between 85.6 and 91.3%. Phosphate precipitation enabled two-stage recovery of Ca and Mg ranging from 75 to 98% for Ca and 24 to 47% for Mg. Moreover, analysis of our experimental results and other reported data on chemical softening enabled the identification of three integrated salt recovery schemes from seawater and RO desalination brines. The first scheme is basically applicable to new desalting plants or even as a stand-alone solution for chemical recovery from seawater. The second scheme could be applied when retrofitting current desalination plants, where state-of-the-art NF is introduced and the generated NF brine is subjected to two-stage chemical and ion exchange treatments. The third scheme targets currently operating plants, where RO brines could be directed to chemical precipitation for maximum Ca removal and the subsequently decalcified streams could be processed for Mg removal using ion exchange. Optimization of the developed schemes is currently underway to identify comparative capital outlays and other relevant financial indicators. |
1,758 | Inertial Sensor Arrays, Maximum Likelihood, and Cramer-Rao Bound | A maximum likelihood estimator for fusing the measurements in an inertial sensor array is presented. The maximum likelihood estimator is concentrated and an iterative solution method is presented for the resulting low-dimensional optimization problem. The Cramer-Rao bound for the corresponding measurement fusion problem is derived and used to assess the performance of the proposed method, as well as to analyze how the geometry of the array and sensor errors affect the accuracy of the measurement fusion. The angular velocity information gained from the accelerometers in the array is shown to be proportional to the square of the array dimension and to the square of the angular speed. In our simulations the proposed fusion method attains the Cramer-Rao bound and outperforms the current state-of-the-art method for measurement fusion in accelerometer arrays. Further, in contrast to the state-of-the-art method that requires a 3D array to work, the proposed method also works for 2D arrays. The theoretical findings are compared to results from real-world experiments with an in-house developed array that consists of 192 sensing elements. |
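Row 1,758 fuses accelerometer-array measurements to gain angular-velocity information. The standard rigid-body measurement model underlying such fusion is shown below in a hedged, generic notation (not necessarily the paper's); the centripetal term is what makes the gained information grow with the square of the lever arm and of the angular speed, consistent with the abstract.

```latex
% Standard rigid-body model for accelerometer k mounted at lever arm r_k
% (notation illustrative): the measured specific force couples the angular
% velocity \omega and angular acceleration \dot{\omega} to the array geometry.
y_k = s + \dot{\omega} \times r_k + \omega \times (\omega \times r_k) + e_k,
\qquad e_k \sim \mathcal{N}(0, \Sigma_k),
% where s is the common specific force; the centripetal term scales with
% \|r_k\|\,\|\omega\|^2, i.e., with array dimension and the square of angular speed.
```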
1,759 | A Fast and Accurate Feature-Matching Algorithm for Minimally-Invasive Endoscopic Images | The ability to find image similarities between two distinct endoscopic views is known as feature matching, and is essential in many robotic-assisted minimally-invasive surgery (MIS) applications. Differently from feature-tracking methods, feature matching does not make any restrictive assumption about the chronological order between the two images or about the organ motion, but first obtains a set of appearance-based image matches, and subsequently removes possible outliers based on geometric constraints. As a consequence, feature-matching algorithms can be used to recover the position of any image feature after unexpected camera events, such as complete occlusions, sudden endoscopic-camera retraction, or strong illumination changes. We introduce the hierarchical multi-affine (HMA) algorithm, which improves over existing feature-matching methods because of the larger number of image correspondences, the increased speed, and the higher accuracy and robustness. We tested HMA over a large (and annotated) dataset with more than 100 MIS image pairs obtained from real interventions, and containing many of the aforementioned sudden events. In all of these cases, HMA outperforms the existing state-of-the-art methods in terms of speed, accuracy, and robustness. In addition, HMA and the image database are made freely available on the internet. |
1,760 | Case Reports of Heroin Injection Site Necrosis: A Novel Antecedent of Nicolau Syndrome | Heroin injection-site necrosis (HISN) is a novel and poorly understood complication of intravenous drug abuse (IVDA). We present three cases of HISN that were evaluated and treated in Charleston, West Virginia, in 2019 and 2020. The documented cases show similarities involving patient care, follow-up, clinical progression, patient demographic, and dermatologic sequelae. We discuss these similarities, provide clinical recommendations, review proposed etiologies of HISN, and introduce Nicolau syndrome as a potential mechanism. |
1,761 | Structure-Texture Consistent Painting Completion for Artworks | Image completion techniques have made rapid and impressive progress due to advancements in deep learning and traditional patch-based approaches. The surrounding regions of a hole play a crucial role in repairing missing areas during the restoration process. However, large holes can result in suboptimal restoration outcomes due to complex textures causing significant changes in color gradations, leading to errors such as color discrepancies, blurriness, artifacts, and unnatural colors. Additionally, recent image completion approaches have focused mainly on scenery and face images with fewer textures. Given these observations, we present a structure-texture consistent completion approach for filling large holes with detailed textures. Our method focuses on improving image completion in the context of artworks, which are expressions of creativity and often have more diverse structures and textures from applying paint to a surface using brush strokes. To handle the unique challenges posed by artwork, we use Cohesive Laplacian Fusion that segments non-homogeneous areas based on structure diffusion and then applies texture synthesis to complete the remaining texture of the missing segmented area. This technique involves detecting changes in base structures and textures using multiple matched patches to achieve more consistent results. The experimental results show that our proposed method is competitive and outperforms state-of-the-art methods in missing regions and color gradations of art paintings. |
1,762 | Survey of Automotive Controller Area Network Intrusion Detection Systems | Editor's note: Control Area Network (CAN) is one of the most popular targets for malicious attacks and exploitations in modern automotive systems. The goal of intrusion detection systems (IDS) is to identify and mitigate security attacks; consequently, they are of paramount importance to automotive security. This article surveys the state of the art in IDS, with special emphasis on techniques for detecting attacks on CAN modules. -Sandip Ray, University of Florida |
1,763 | High-Density 2-μm-Pitch pH Image Sensor With High-Speed Operation up to 1933 fps | Various biosensing platforms for real-time monitoring and mapping of chemical signals in neural networks have been developed based on CMOS process technology. Despite their achievements, however, there remains a demand for an advanced method that can offer detailed insights into cellular functions with higher spatiotemporal resolution. Here, we present a pH image sensor that employs a high-density array of 256 x 256 pixels and readout circuitry designed for fast operation. The sensor's characteristics, such as the pH sensitivity of 55.1 mV/pH and a high frame speed of 1933 fps, are experimentally demonstrated and compared to those of state-of-the-art pH image sensors. Among them, our sensor presents the smallest pitch of 2 μm with a significantly high operation speed. This sensor can not only successfully detect a pH change but also transform the measured data into a two-dimensional image series in real time. The practical spatial resolution of images is investigated by an evaluation method that we first propose in this paper. By this method, we confirm that our sensor can discriminate objects spaced more than 4 μm apart, which is twice the pixel pitch. In order to analyze the degraded resolution and image blur, a capacitive coupling effect at an ion-sensitive membrane is suggested as the main factor and demonstrated by simulation. |
1,764 | Architecting Ultrathin Graphitic C3N4 Nanosheets Incorporated PVA/Gelatin Bionanocomposite for Potential Biomedical Application: Effect on Drug Delivery, Release Kinetics, and Antibacterial Activity | Planar (2D) nanomaterials are garnering broad recognition in diverse scientific areas because of their intrinsic features. Herein, bulk graphitic carbon nitride (g-C3N4) was prepared from melamine, which was exfoliated to produce g-C3N4 nanosheets. The prepared g-C3N4 nanosheets were characterized by transmission electron microscopy (TEM), atomic force microscopy (AFM), photo luminescence (PL) spectroscopy, and dynamic light scattering (DLS). The stable dispersion of a g-C3N4 nanosheet was incorporated into a PVA/Gelatin matrix to explore its efficacy as a promising drug carrier. A remarkable 42% increase in tensile strength for 1% g-C3N4/PVA/Gelatin was attained compared with that of the PVA/Gelatin film. Thermal stability increased due to addition of g-C3N4 nanosheet in the PVA/Gelatin film, where the maximum thermal degradation temperature increased by 9.5 °C when the 1% nanosheet was added to the PVA/Gelatin film. Moreover, the g-C3N4 nanosheets and g-C3N4/PVA/Gelatin showed no cytotoxicity against HeLa and BHK-21 cells. To investigate the in vitro drug releasing efficacy, ciprofloxacin was incorporated into g-C3N4/PVA/Gelatin. Experimental results showed a 62% drug release within 120 min at physiological pH 7.4. The data was curve fitted by different kinetic models of drug release to understand the drug release mechanism. The experimental data was found to fit best with the Higuchi model and revealed the diffusion control mechanism of drug release. Additionally, antibacterial study confirmed the drug release potency from g-C3N4/PVA/Gelatin film on both Gram-positive and Gram-negative bacteria. The above-mentioned promising findings might lead to an opportunity of using g-C3N4 as a potential drug carrier. |
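Row 1,764 reports that the release data fit the Higuchi model best. A minimal curve-fitting sketch of that model, Q(t) = k_H √t, is given below; the release values are made-up placeholders, not the paper's measurements.

```python
# Minimal Higuchi-model fit Q(t) = k_H * sqrt(t); the release data below are
# illustrative placeholders, not the measurements reported in the paper.
import numpy as np
from scipy.optimize import curve_fit

def higuchi(t, k_h):
    return k_h * np.sqrt(t)

t_min = np.array([5, 15, 30, 60, 90, 120], dtype=float)       # minutes
release_pct = np.array([14, 24, 33, 46, 55, 62], dtype=float)  # % released

(k_h,), _ = curve_fit(higuchi, t_min, release_pct)
pred = higuchi(t_min, k_h)
ss_res = np.sum((release_pct - pred) ** 2)
ss_tot = np.sum((release_pct - release_pct.mean()) ** 2)
print(f"k_H = {k_h:.2f} %/min^0.5, R^2 = {1 - ss_res / ss_tot:.3f}")
```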
1,765 | Real-time embedded system for valve detection in water pipelines | Condition assessment is an essential process to comprehend the condition of water pipelines and facilitate maintenance as well as renewal plans. Nowadays, various in-pipe inspection platforms equipped with closed-circuit cameras are employed to capture the internal condition of water pipelines. However, the automated platform often faces the challenge of negotiating the installed valves during the inspection. To ensure continuous inspection, the platform needs to identify the valves automatically and activate the control mechanism to pass through them. Thus, the valves need to be detected to facilitate the negotiation and to ensure that the control mechanism can take action in time. This paper focuses on real-time valve detection using a Jetson TX2 (TM) and a lightweight algorithm, namely YOLOv3-tiny. The performance of the implementation is compared with state-of-the-art real-time detection models. The experimental results demonstrate that YOLOv3-tiny achieves a high detection speed in frames per second for valve detection and outperforms the state-of-the-art real-time algorithms. Hence, deploying YOLOv3-tiny on the embedded system will aid the automated platform in accomplishing uninterrupted inspection and enhance the capability for condition assessment of water pipelines. |
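Row 1,765 deploys YOLOv3-tiny for valve detection. A hedged deployment sketch using OpenCV's DNN module is shown below; the .cfg/.weights file names, the 416x416 input size, and the thresholds are assumptions, and the paper's Jetson TX2 pipeline may differ.

```python
# Hedged sketch: running a YOLOv3-tiny Darknet model with OpenCV's DNN module.
# File paths, input size, and thresholds are illustrative assumptions.
import cv2

net = cv2.dnn.readNetFromDarknet("yolov3-tiny.cfg", "yolov3-tiny.weights")

frame = cv2.imread("pipe_frame.jpg")            # hypothetical inspection frame
blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
net.setInput(blob)
outputs = net.forward(net.getUnconnectedOutLayersNames())

boxes, confidences = [], []
h, w = frame.shape[:2]
for out in outputs:
    for det in out:                             # det = [cx, cy, bw, bh, obj, class scores...]
        scores = det[5:]
        conf = float(det[4] * scores.max())
        if conf > 0.5:                          # confidence threshold (assumed)
            cx, cy, bw, bh = det[0] * w, det[1] * h, det[2] * w, det[3] * h
            boxes.append([int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)])
            confidences.append(conf)

keep = cv2.dnn.NMSBoxes(boxes, confidences, 0.5, 0.4)
print(f"{len(keep)} valve detection(s) after NMS")
```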
1,766 | Nonlinear Energy-Maximizing Optimal Control of Wave Energy Systems: A Moment-Based Approach | Linear dynamics are virtually always assumed when designing optimal controllers for wave energy converters (WECs), motivated by both their simplicity and computational convenience. Nevertheless, unlike traditional tracking control applications, the assumptions under which the linearization of WEC models is performed are challenged by the energy-maximizing controller itself, which intrinsically enhances device motion to maximize power extraction from incoming ocean waves. In this article, we present a moment-based energy-maximizing control strategy for WECs subject to nonlinear dynamics. We develop a framework under which the objective function (and system variables) can be mapped to a finite-dimensional tractable nonlinear program, which can be efficiently solved using state-of-the-art nonlinear programming solvers. Moreover, we show that the objective function belongs to a class of generalized convex functions when mapped to the moment domain, guaranteeing the existence of a global energy-maximizing solution and giving explicit conditions for when a local solution is, effectively, a global maximizer. The performance of the strategy is demonstrated through a case study, where we consider (state and input-constrained) energy maximization for a state-of-the-art CorPower-like WEC, subject to different hydrodynamic nonlinearities. |
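Row 1,766 maps an energy-maximizing optimal control problem to a tractable nonlinear program. The standard constrained energy-maximization OCP that such moment-based methods target is written below in a hedged, generic form; the sign convention and constraint sets are illustrative, not the paper's exact formulation.

```latex
% Generic constrained WEC energy-maximisation problem (hedged notation):
% u = control (PTO) force, v = device velocity, w = wave excitation input.
\max_{u} \; \frac{1}{T}\int_{0}^{T} u(t)\, v(t)\, \mathrm{d}t
\quad \text{s.t.} \quad \dot{x} = f(x, u, w), \qquad
|x(t)| \le x_{\max}, \qquad |u(t)| \le u_{\max},
% which the moment-based approach transcribes into a finite-dimensional
% nonlinear program over the moments of u.
```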
1,767 | Food inauthenticity: Authority activities, guidance for food operators, and mitigation tools | Historically, food fraud was a major public health concern which helped drive the development of early food regulations in many markets including the US and EU market. In the past 10 years, the integrity of food chains with respect to food fraud has again been questioned due to high profile food fraud cases. We provide an overview of the resulting numerous authoritative activities underway within different regions to counter food fraud, and we describe the guidance available to the industry to understand how to assess the vulnerability of their businesses and implement appropriate mitigation. We describe how such controls should be an extension of those already in place to manage wider aspects of food authenticity, and we provide an overview of relevant analytical tools available to food operators and authorities to protect supply chains. Practical Application: Practical Application of the provided information by the food industry in selecting resources (guidance document, analytical methods etc.). |
1,768 | Indoor air quality assessment in painting and printmaking department of a fine arts faculty building | Measurements for indoor air quality assessment were carried out in the Painting and Printmaking Department of the Anadolu University Faculty of Fine Arts in Turkey. Concentrations of nitrogen dioxide (NO2), ozone (O3) and 29 volatile organic compounds (VOCs) were measured simultaneously using diffusive samplers. Simultaneous outdoor measurements were also performed at some sampling points. Analyses of NO2 and ozone samples were performed using ion chromatography, and VOCs were analyzed using gas chromatography-mass spectrometry. Indoor NO2 and ozone concentrations varied between 13.47-89.77 μg m(-3) and 3.89-51.82 μg m(-3), respectively. The average indoor NO2 concentration was 35.37 +/- 10.9 μg m(-3). The indoor/outdoor NO2 ratio (I/O) was found to be 1.44 +/- 0.4, which indicated the presence of some indoor sources. The average indoor ozone concentration was 9.97 +/- 4.4 μg m(-3), and the I/O ratio was lower than 1 (0.46 +/- 0.4). The highest VOC concentrations were observed at workshops where oil painting and stained glass studies were performed. In particular, the concentrations obtained from the stained glass workshop (benzene: 3.98 +/- 1.3 μg m(-3), toluene: 999.33 +/- 104.2 μg m(-3), ethyl benzene: 66.06 +/- 16.1 μg m(-3), m,p-xylene: 129.44 +/- 33.1 μg m(-3), o-xylene: 76.14 +/- 23.1 μg m(-3)) were much higher than at the other sampling points. Toluene concentrations exceeded the WHO (World Health Organization) limit value (260 μg m(-3) weekly average) at 40% of the sampling points. Cancer risks were estimated using the personal exposure concentrations. Lifetime cancer risks for the people working in the department, such as faculty members and technicians, were found to be higher than the USEPA acceptable risk value (1 x 10(-6)), while the risks for the students were below this value. |
1,769 | Substitution of carbonate by non-physiological synergistic anion modulates the stability and iron release kinetics of serum transferrin | Serum transferrin (sTf) is a bi-lobal protein. Each lobe of sTf binds one Fe3+ ion in the presence of a synergistic anion. Physiologically, carbonate is the main synergistic anion but other anions such as oxalate, malonate, glycolate, maleate, glycine, etc. can substitute for carbonate in vitro. The present work provides the possible pathways by which the substitution of carbonate with oxalate affects the structural, kinetic, thermodynamic, and functional properties of blood plasma sTf. Analysis of equilibrium experiments measuring iron release and structural unfolding of carbonate and oxalate bound diferric-sTf (Fe2sTf) as a function of pH, urea concentration, and temperature reveal that the structural and iron-centers stability of Fe2sTf increase by substitution of carbonate with oxalate. Analysis of isothermal titration calorimetry (ITC) scans showed that the affinity of Fe3+ with apo-sTf is enhanced by substituting carbonate with oxalate. Analysis of kinetic and thermodynamic parameters measured for the iron release from the carbonate and oxalate bound monoferric-N-lobe of sTf (FeNsTf) and Fe2sTf at pH 7.4 and pH 5.6 reveals that the substitution of carbonate with oxalate inhibits/retards the iron release via increasing the enthalpic barriers. |
1,770 | Pathophysiological Association Between Diabetes Mellitus and Alzheimer's Disease | Worldwide elderly people are being affected by diabetes mellitus (DM) and dementia. The risk for the development of dementia is higher in people with DM. DM causes a marked cognitive reduction and increases the risk of dementia, most commonly vascular dementia and Alzheimer's disease. People affected by DM and dementia seem to be at higher risk for intense hypoglycemia. Hypoglycemia, the complication of DM treatment, is believed as an independent risk factor for dementia in people with DM. Both Alzheimer's disease and DM are linked with decreased insulin secretion, reduced uptake of glucose, raised oxidative stress, angiopathy, activation of the apoptotic pathway, aging, abnormal peroxidation of lipids, increased production of advanced glycation end products and tau phosphorylation, brain atrophy, and decreased fat metabolism. In this paper, we will review the association between Alzheimer's disease and DM. In addition, we will discuss the agents that enhance the risk for dementia in elderly people with DM and how to prevent the development of cognitive dysfunction in DM. |
1,771 | Objective Detection of Eloquent Axonal Pathways to Minimize Postoperative Deficits in Pediatric Epilepsy Surgery Using Diffusion Tractography and Convolutional Neural Networks | Convolutional neural networks (CNNs) have recently been used in biomedical imaging applications with great success. In this paper, we investigated the classification performance of CNN models on diffusion weighted imaging (DWI) streamlines defined by functional MRI (fMRI) and electrical stimulation mapping (ESM). To learn a set of discriminative and interpretable features from the extremely unbalanced dataset, we evaluated different CNN architectures with multiple loss functions (e.g., focal loss and center loss) and a soft attention mechanism and compared our models with current state-of-the-art methods. Through extensive experiments on streamlines collected from 70 healthy children and 70 children with focal epilepsy, we demonstrated that our deep CNN model with focal and central losses and soft attention outperforms all existing models in the literature and provides clinically acceptable accuracy (73%-100%) for the objective detection of functionally important white matter pathways, including ESM determined eloquent areas such as primary motors, aphasia, speech arrest, auditory, and visual functions. The findings of this paper encourage further investigations to determine if DWI-CNN analysis can serve as a noninvasive diagnostic tool during pediatric presurgical planning by estimating not only the location of essential cortices at the gyral level but also the underlying fibers connecting these cortical areas to minimize or predict postsurgical functional deficits. This paper translates an advanced CNN model to clinical practice in the pediatric population where currently available approaches (e.g., ESM and fMRI) are suboptimal. The implementation will be released at https://github.com/HaotianMXu/Brain-fiber-classification-using-CNNs. |
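Row 1,771 trains CNNs with focal and center losses on an extremely unbalanced streamline dataset. A standard multi-class focal-loss sketch is given below; the alpha and gamma values are illustrative, not the paper's settings.

```python
# Focal loss sketch for imbalanced multi-class streamline classification;
# alpha/gamma values and the toy batch below are illustrative, not the paper's.
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    # Standard multi-class focal loss: down-weight easy, well-classified
    # examples so rare (eloquent-pathway) classes dominate the gradient.
    ce = F.cross_entropy(logits, targets, reduction="none")
    p_t = torch.exp(-ce)                       # probability of the true class
    return (alpha * (1.0 - p_t) ** gamma * ce).mean()

logits = torch.randn(8, 5)                     # 8 streamlines, 5 classes
targets = torch.randint(0, 5, (8,))
print(focal_loss(logits, targets))
```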
1,772 | Fast and Compact Image Segmentation Using Instance Stixels | State-of-the-art stixel methods fuse dense stereo disparity and semantic class information, e.g., from a Convolutional Neural Network (CNN), into a compact representation of driveable space, obstacles and background. However, they do not explicitly differentiate instances within the same semantic class. We investigate several ways to augment single-frame stixels with instance information, which can be extracted by a CNN from the RGB image input. As a result, our novel Instance Stixels method efficiently computes stixels that account for boundaries of individual objects, and represents instances as grouped stixels that express connectivity. Experiments on the Cityscapes dataset demonstrate that including instance information into the stixel computation itself, rather than as a post-processing step, increases the segmentation performance (i.e., Intersection over Union and Average Precision). This holds especially for overlapping objects of the same class. Furthermore, we show the superiority of our approach in terms of segmentation performance and computational efficiency compared to combining the separate outputs of Semantic Stixels and a state-of-the-art pixel-level CNN. We achieve processing throughput of 28 frames per second on average for 8 pixel wide stixels on images from the Cityscapes dataset at 1792 x 784 pixels. Our Instance Stixels software is made freely available for non-commercial research purposes. |
1,773 | Infection, pathology and interferon treatment of the SARS-CoV-2 Omicron BA.1 variant in juvenile, adult and aged Syrian hamsters | The new predominant circulating SARS-CoV-2 variant, Omicron, can robustly escape current vaccines and neutralizing antibodies. Although Omicron has been reported to have milder replication and disease manifestations than some earlier variants, its pathogenicity in different age groups has not been well elucidated. Here, we report that the SARS-CoV-2 Omicron BA.1 sublineage causes elevated infection and lung pathogenesis in juvenile and aged hamsters, with more body weight loss, respiratory tract viral burden, and lung injury in these hamsters than in adult hamsters. Juvenile hamsters show a reduced interferon response against Omicron BA.1 infection, whereas aged hamsters show excessive proinflammatory cytokine expression, delayed viral clearance, and aggravated lung injury. Early inhaled IFN-α2b treatment suppresses Omicron BA.1 infection and lung pathogenesis in juvenile and adult hamsters. Overall, the data suggest that the diverse patterns of the innate immune response affect the disease outcomes of Omicron BA.1 infection in different age groups. |
1,774 | FRESH-FRI-Based Single-Image Super-Resolution Algorithm | In this paper, we consider the problem of single image super-resolution and propose a novel algorithm that outperforms state-of-the-art methods without the need to learn patch pairs from external data sets. We achieve this by modeling images and, more precisely, lines of images as piecewise smooth functions and propose a resolution enhancement method for this type of function. The method makes use of the theory of sampling signals with finite rate of innovation (FRI) and combines it with traditional linear reconstruction methods. We combine the two reconstructions by leveraging the multi-resolution analysis in wavelet theory and show how an FRI reconstruction and a linear reconstruction can be fused using filter banks. We then apply this method along vertical, horizontal, and diagonal directions in an image to obtain a single-image super-resolution algorithm. We also propose a further improvement of the method based on learning from the errors of our super-resolution result at lower resolution levels. Simulation results show that our method outperforms state-of-the-art algorithms under different blurring kernels. |
1,775 | Reducing over-smoothness in HMM-based speech synthesis using exemplar-based voice conversion | Speech synthesis has been applied in many kinds of practical applications. Currently, state-of-the-art speech synthesis uses statistical methods based on hidden Markov models (HMMs). Speech synthesized by statistical methods can sound over-smoothed, which is caused by the averaging inherent in statistical processing. In the literature, there have been many studies attempting to address over-smoothness in speech synthesized by an HMM, but they are still limited. In this paper, a hybrid synthesis combining HMM synthesis and exemplar-based voice conversion is proposed. The experimental results show that the proposed method outperforms state-of-the-art HMM synthesis using global variance. |
1,776 | The complete mitochondrial genome of the Poecilia formosa (Amazon molly) | The Amazon molly, Poecilia formosa, a member of the Poeciliidae family, is a freshwater fish reproducing through gynogenesis. The complete mitochondrial genome of P. formosa is determined for the first time in this study. It is a circular molecule of 16 542 bp in length, including 13 protein-coding genes, 22 transfer RNA genes, 2 ribosomal RNA genes and 1 putative control region. The overall base composition of the genome is A (29.59%), T (27.57%), C (28.27%), and G (14.57%), with a GC content of 42.84%, which is lower than the AT content. Most protein-coding genes started with a traditional ATG codon except for COX2, ND5 and ND6, which initiated with ATA, GTG and TTA, respectively. The stop codon was a single T-- base in most of the protein-coding genes, but COX2 and ATP8 both employed TAA, and ND2 terminated with an AGG codon. A phylogenetic tree was constructed based on the complete mitogenome of P. formosa and 11 closely related chondrichthian species to assess their phylogenetic relationship and evolution. The complete mitochondrial genome of the Amazon molly would help to study the evolution of the Poeciliidae family. |
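Row 1,776 reports the base composition and GC content of the mitogenome. The sketch below shows how such percentages are computed from a sequence; the 40 bp string is a placeholder, not P. formosa data.

```python
# How base-composition percentages and GC content are derived from a sequence;
# the 40 bp string below is an illustrative placeholder, not the real mitogenome.
seq = "ATGCCTAAAGGCTTACCAATCGCATGACCTTTAGGCTAAC"
n = len(seq)
comp = {b: seq.count(b) / n * 100 for b in "ATGC"}   # percent of each base
gc = comp["G"] + comp["C"]
print({b: round(v, 2) for b, v in comp.items()}, f"GC = {gc:.2f}%")
```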
1,777 | Deep Learning-Based ECG-Free Cardiac Navigation for Multi-Dimensional and Motion-Resolved Continuous Magnetic Resonance Imaging | For the clinical assessment of cardiac vitality, time-continuous tomographic imaging of the heart is used. To further detect, e.g., pathological tissue, multiple imaging contrasts enable a thorough diagnosis using magnetic resonance imaging (MRI). For this purpose, time-continuous and multi-contrast imaging protocols were proposed. The acquired signals are binned using navigation approaches for a motion-resolved reconstruction. Mostly, external sensors such as electrocardiograms (ECG) are used for navigation, leading to additional workflow efforts. Recent sensor-free approaches are based on pipelines requiring prior knowledge, e.g., typical heart rates. We present a sensor-free, deep learning-based navigation that diminishes the need for manual feature engineering or the necessity of prior knowledge compared to previous works. A classifier is trained to estimate the R-wave timepoints in the scan directly from the imaging data. Our approach is evaluated on 3-D protocols for continuous cardiac MRI, acquired in-vivo and free-breathing with single or multiple imaging contrasts. We achieve an accuracy of > 98% on previously unseen subjects, and image quality well comparable to that of the state-of-the-art ECG-based reconstruction. Our method enables an ECG-free workflow for continuous cardiac scans with simultaneous anatomic and functional imaging with multiple contrasts. It can potentially be integrated into other continuous sequences without adapting the sampling scheme, by using the imaging data for navigation and reconstruction. |
1,778 | Weakly Supported Plane Surface Reconstruction via Plane Segmentation Guided Point Cloud Enhancement | Most of the widely used multi-view 3D reconstruction algorithms assume that object appearance is predominantly diffuse and full of good texture. For the objects that violate this restriction, the surface can hardly be reconstructed because such areas lack sufficient support from dense point clouds. To tackle this problem, we introduce a novel two-stage prior-guided method based on point cloud enhancement to enable the application of multi-view reconstruction approaches in such scenes. In the first stage, we optimize the original PlaneNet plane segmentation priors by taking advantage of the estimated depth map and confidence map from multi-view stereo. In the second stage, we correct and supply 3D point clouds for the weakly supported plane surface on the basis of the upgraded priors. Furthermore, we utilize a slight disturbance of the enhanced point clouds to facilitate the subsequent mesh reconstruction. The proposed point cloud enhancement approach is evaluated on the large-scale DTU dataset. Our method significantly outperforms previous state-of-the-art multi-view stereo methods. We also demonstrate weakly supported plane surface reconstruction results from real-world photos that are unachievable with either the methods aiming at preserving weakly supported surfaces or the traditional state-of-the-art 3D reconstruction systems. |
1,779 | A preclinical model to investigate normal tissue damage following fractionated radiotherapy to the head and neck | Radiotherapy (RT) of head and neck (H&N) cancer is known to cause both early- and late-occurring toxicities. To better appraise normal tissue responses and their dependence on treatment parameters such as radiation field and type, as well as dose and fractionation scheme, a preclinical model with relevant endpoints is required. 12-week old female C57BL/6 J mice were irradiated with 100 or 180 kV X-rays to total doses ranging from 30 to 85 Gy, given in 10 fractions over 5 days. The radiation field covered the oral cavity, swallowing structures and salivary glands. Monte Carlo simulations were employed to estimate tissue dose distribution. The follow-up period was 35 days, in order to study the early radiation-induced effects. Baseline and post irradiation investigations included macroscopic and microscopic examinations of the skin, lips, salivary glands and oral mucosa. Saliva sampling was performed to assess the salivary gland function following radiation exposure. A dose dependent radiation dermatitis in the skin was observed for doses above 30 Gy. Oral mucositis in the tongue appeared as ulcerations on the ventral surface of the tongue for doses of 75-85 Gy. The irradiated mice showed significantly reduced saliva production compared to controls. In summary, a preclinical model to investigate a broad panel of normal tissue responses following fractionated irradiation of the H&N region was established. The optimal dose to study early radiation-induced effects was found to be around 75 Gy, as this was the highest tolerated dose that gave acute effects similar to that observed in cancer patients. |
1,780 | MoDL-MUSSELS: Model-Based Deep Learning for Multishot Sensitivity-Encoded Diffusion MRI | We introduce a model-based deep learning architecture termed MoDL-MUSSELS for the correction of phase errors in multishot diffusion-weighted echo-planar MR images. The proposed algorithm is a generalization of the existing MUSSELS algorithm with similar performance but significantly reduced computational complexity. In this work, we show that an iterative re-weighted least-squares implementation of MUSSELS alternates between a multichannel filter bank and the enforcement of data consistency. The multichannel filter bank projects the data to the signal subspace, thus exploiting the annihilation relations between shots. Due to the high computational complexity of the self-learned filter bank, we propose replacing it with a convolutional neural network (CNN) whose parameters are learned from exemplary data. The proposed CNN is a hybrid model involving a multichannel CNN in the k-space and another CNN in the image space. The k-space CNN exploits the annihilation relations between the shot images, while the image domain network is used to project the data to an image manifold. The experiments show that the proposed scheme can yield reconstructions that are comparable to state-of-the-art methods while offering several orders of magnitude reduction in run-time. |
1,781 | Third-degree atrioventricular block caused by intoxication with rhododendron leaves | We report the case of a 41-year-old patient with third-degree atrioventricular block caused by intoxication with a water concoction prepared from Rhododendron leaves. Such poisoning is rare; it can cause arrhythmia with hemodynamic instability and may be confused with various diseases. For these reasons, the correct diagnosis and treatment of this poisoning are particularly important. We confirmed the diagnosis by analyzing the remaining liquid brought in by the family members. After symptomatic and supportive treatment, the patient was discharged uneventfully. |
1,782 | Detecting Deficient Coverage in Colonoscopies | Colonoscopy is the tool of choice for preventing colorectal cancer by detecting and removing polyps before they become cancerous. However, colonoscopy is hampered by the fact that endoscopists routinely miss 22-28% of polyps. While some of these missed polyps appear in the endoscopist's field of view, others are missed simply because of substandard coverage of the procedure, i.e., not all of the colon is seen. This paper attempts to rectify the problem of substandard coverage in colonoscopy through the introduction of the C2D2 (Colonoscopy Coverage Deficiency via Depth) algorithm, which detects deficient coverage and can thereby alert the endoscopist to revisit a given area. More specifically, C2D2 consists of two separate algorithms: the first performs depth estimation of the colon given an ordinary RGB video stream, while the second computes coverage given these depth estimates. Rather than compute coverage for the entire colon, our algorithm computes coverage locally, on a segment-by-segment basis; C2D2 can then indicate in real time whether a particular area of the colon has suffered from deficient coverage, and if so, the endoscopist can return to that area. Our coverage algorithm is the first such algorithm to be evaluated in a large-scale way, while our depth estimation technique is the first calibration-free unsupervised method applied to colonoscopies. The C2D2 algorithm achieves state-of-the-art results in the detection of deficient coverage. On synthetic sequences with ground truth, it is 2.4 times more accurate than human experts, while on real sequences, C2D2 achieves 93.0% agreement with experts. |
1,783 | The Renewal of Arts, Lives, and a Community through Social Enterprise: The Case of Oficina de Agosto | The present work investigates, with a cultural approach, the emergence of an art-based social enterprise and an art-entrepreneurial ecosystem in Southeastern semi-rural Brazil, shedding light on how local private initiatives may build stronger communities and vice versa, in a mutually transformative relationship. The focus lies on Oficina de Agosto, a folk-art studio, school, and shop. The fieldwork design combined ethnography and art-based research. The thick description of the phenomenon is organized under the acronym P.L.A.C.E., a conceptual framework describing five principles of community development. The contributions of this study are three-fold: (1) it illustrates how social enterprise may work as an alternative market model that could support community building; (2) it raises awareness of the possibility that social enterprises' initial social focus may not be perennial or unshakeable, an undesirable change that might require a both/and mindset and patient management of paradoxes; and (3) it offers practical managerial recommendations to the SE under focus, which might be extended to other local businesses, to SEs in other semi-rural Brazilian towns, or even to international settings that might bear economic and social resemblance to our researched context. |
1,784 | Class-Agnostic Weighted Normalization of Staining in Histopathology Images Using a Spatially Constrained Mixture Model | Colorless biopsied tissue samples are usually stained in order to visualize different microscopic structures for diagnostic purposes. However, color variations associated with the process of sample preparation, the raw materials used, diverse staining protocols, and different slide scanners may adversely influence both visual inspection and computer-aided image analysis. As a result, many methods for histopathology image stain normalization have been proposed in recent years. In this study, we introduce a novel approach for stain normalization based on learning a mixture of multivariate skew-normal distributions for stain clustering and parameter estimation, alongside a stain transformation technique. The proposed method, labeled "Class-Agnostic Weighted Normalization" (CLAW normalization for short), has the ability to normalize a source image by learning the color distribution of both the source and target images within an expectation-maximization framework. The novelty of this approach is its flexibility to quantify both symmetric and nonsymmetric underlying distributions of the different stain components while taking spatial information into account. The performance of this new stain normalization scheme is tested on several publicly available digital pathology datasets and compared against state-of-the-art normalization algorithms in terms of the ability to preserve image structure and information. All in all, our proposed method performed better and more consistently than existing methods in terms of information preservation, visual quality enhancement, and boosting computer-aided diagnosis algorithm performance. |
1,785 | Semantic Cluster Unary Loss for Efficient Deep Hashing | With the rapid development of deep learning, deep hashing methods have achieved promising results in efficient information retrieval. Hashing methods map similar data to binary hashcodes with smaller Hamming distances, and have received broad attention due to their low storage cost and fast retrieval speed. Most existing deep hashing methods adopt pairwise or triplet losses to deal with similarities underlying the data, but their training is difficult and less efficient because O(n^2) data pairs and O(n^3) triplets are involved. To address these issues, we propose a novel deep hashing algorithm with a unary loss which can be trained very efficiently. First, we introduce a Unary Upper Bound of the traditional triplet loss, thus reducing the complexity to O(n) and bridging the classification-based unary loss and the triplet loss. Second, we propose a novel Semantic Cluster Deep Hashing (SCDH) algorithm by introducing a modified Unary Upper Bound loss, called the Semantic Cluster Unary Loss. The resulting hashcodes form several compact clusters, which means the hashcodes in the same cluster have similar semantic information. We also demonstrate that the proposed SCDH is easy to extend to semi-supervised settings by incorporating state-of-the-art semi-supervised learning algorithms. Experiments on large-scale datasets show that the proposed method is superior to state-of-the-art hashing algorithms. |
1,786 | Edge-aware image filtering using a structure-guided CNN | Image filtering is a fundamental preprocessing step for accurate and robust computer vision applications such as image segmentation, object classification, and reconstruction. However, many convolutional neural network (CNN)-based methods tend to lose significant edge information in the output layer and generate undesired artefacts in the feature extraction layers. This study presents a deep CNN model for edge-aware image filtering. The proposed network model consists of three sub-networks: (i) a feature extraction network, (ii) a convolution artefact removal network, and (iii) a structure extraction network. The proposed network model has an end-to-end trainable architecture that does not need any post-processing steps. In particular, the structure extraction network can successfully preserve significant edges. The proposed filter outperforms state-of-the-art denoising filters in terms of both objective and subjective measures, and can be used for various image enhancement and restoration problems such as edge-preserving smoothing, image denoising, deblurring, and deblocking. |
1,787 | Lightweight and Effective Convolutional Neural Networks for Vehicle Viewpoint Estimation From Monocular Images | Vehicle viewpoint estimation from monocular images is a crucial component for autonomous driving vehicles and for fleet management applications. In this paper, we make several contributions to advance the state-of-the-art on this problem. We show the effectiveness of applying a smoothing filter to the output neurons of a Convolutional Neural Network (CNN) when estimating vehicle viewpoint. We point out the overlooked fact that, under the same viewpoint, the appearance of a vehicle is strongly influenced by its position in the image plane, which renders viewpoint estimation from appearance an ill-posed problem. We show how, by inserting in the model a CoordConv layer to provide the coordinates of the vehicle, we are able to solve such ambiguity and greatly increase performance. Finally, we introduce a new data augmentation technique that improves viewpoint estimation on vehicles that are closer to the camera or partially occluded. All these improvements let a lightweight CNN reach optimal results while keeping inference time low. An extensive evaluation on a viewpoint estimation benchmark (Pascal3D+) and on actual vehicle camera data (nuScenes) shows that our method significantly outperforms the state-of-the-art in vehicle viewpoint estimation, both in terms of accuracy and memory footprint. |
1,788 | Combined pars plana vitrectomy-scleral buckle versus pars plana vitrectomy for proliferative vitreoretinopathy | The purpose of the study is to evaluate the surgical outcomes of combined pars plana vitrectomy-scleral buckle (PPV-SB) versus pars plana vitrectomy (PPV) alone for rhegmatogenous retinal detachment complicated by proliferative vitreoretinopathy (PVR). One thousand one hundred and seventy-four patients who underwent rhegmatogenous retinal detachment surgery between January 2002 and December 2013 were retrospectively reviewed. Patients with grade C PVR treated with either combined PPV-SB or PPV alone were included in the study. Study outcomes included the single-surgery anatomic success rate and postoperative visual outcome at 12 months postoperatively. Seventy-seven patients with grade C PVR were identified for analysis. At the end of the 12-month follow-up, 80.5% of eyes (33/41) in the PPV-SB group and 58.3% of eyes (21/36) in the PPV group achieved single-surgery anatomical success. In a multiple logistic regression model, neither the baseline variables (age, gender, macula status, grade of PVR, extent of detachment, presence of vitreous hemorrhage, lens status, status of high myopia) nor the type of retinal detachment surgery (use of scleral buckle, barrier endolaser, 360-degree endolaser, cryopexy, retinectomy, tamponade agent, phacoemulsification) had a significant effect on single-surgery anatomical success. The post-treatment mean logMAR visual acuity was 1.58 ± 0.58 in the PPV-SB group and 1.57 ± 0.61 in the PPV group. There was no significant difference in postoperative visual acuity between the two groups (P = 0.849). For patients with grade C PVR, PPV-SB did not demonstrate superiority over PPV alone in achieving single-surgery anatomical success. |
1,789 | Visual field asymmetries in numerosity processing | A small number of objects can be rapidly and accurately enumerated, whereas a larger number of objects can only be approximately enumerated. These subitizing and estimation abilities, respectively, are both spatial processes relying on extracting information across spatial locations. Nevertheless, whether and how these processes vary across visual field locations remains unknown. Here, we examined if enumeration displays asymmetries around the visual field. Experiment 1 tested small number (1-6) enumeration at cardinal and non-cardinal peripheral locations while manipulating the spacing among the objects. Experiment 2 examined enumeration at cardinal locations in more detail while minimising crowding. Both experiments demonstrated a Horizontal-Vertical Asymmetry (HVA) where performance was better along the horizontal axis relative to the vertical. Experiment 1 found that this effect was modulated by spacing with stronger asymmetry at closer spacing. Experiment 2 revealed further asymmetries: a Vertical Meridian Asymmetry (VMA) with better enumeration on the lower vertical meridian than on the upper and a Horizontal Meridian Asymmetry (HMA) with better enumeration along the left horizontal meridian than along the right. All three asymmetries were evident for both subitizing and estimation. HVA and VMA have been observed in a range of visual tasks, indicating that they might be inherited from early visual constraints. However, HMA is observed primarily in mid-level tasks, often involving attention. These results suggest that while enumeration processes can be argued to inherit low-level visual constraints, the findings are, parsimoniously, consistent with visual attention playing a role in both subitizing and estimation. |
1,790 | Local application of a transcutaneous carbon dioxide paste prevents excessive scarring and promotes muscle regeneration in a bupivacaine-induced rat model of muscle injury | In postoperative patients with head and neck cancer, scar tissue formation may interfere with the healing process, resulting in incomplete functional recovery and a reduced quality of life. Percutaneous application of carbon dioxide (CO2) has been reported to improve hypoxia, stimulate angiogenesis, and promote the repair of fractures and muscle damage. However, gaseous CO2 cannot be applied to the head and neck regions. Previously, we developed a paste that holds non-gaseous CO2 in a carrier and can be administered transdermally. Here, we investigated whether this paste could prevent excessive scarring and promote muscle regeneration using a bupivacaine-induced rat model of muscle injury. Forty-eight Sprague Dawley rats were randomly assigned to either a control group or a CO2 group. Both groups underwent surgery to induce muscle injury, but the control group received no treatment, whereas the CO2 group received the CO2 paste daily after surgery. Samples of the experimental sites were taken on days 3, 7, 14, and 21 post-surgery to examine the following: (1) inflammatory (interleukin [IL]-1β, IL-6) and transforming growth factor (TGF)-β as well as myogenic (MyoD and myogenin) gene expression by polymerase chain reaction, (2) muscle regeneration with haematoxylin and eosin staining, and (3) MyoD and myogenin protein expression using immunohistochemical staining. Rats in the CO2 group showed higher MyoD and myogenin expression and lower IL-1β, IL-6, and TGF-β expression than the control rats. In addition, treated rats showed evidence of accelerated muscle regeneration. Our study demonstrated that the CO2 paste prevents excessive scarring and accelerates muscle regeneration. This action may be exerted through the induction of an artificial Bohr effect, which leads to the upregulation of MyoD and myogenin and the downregulation of IL-1β, IL-6, and TGF-β. The paste is inexpensive and non-invasive; thus, it may be the treatment of choice for patients with muscle damage. |
1,791 | Identification of Nonhuman Primate Hematopoietic Stem and Progenitor Cells | The preclinical development of hematopoietic stem cell (HSC) gene therapy/editing and transplantation protocols is frequently performed in large animal models such as nonhuman primates (NHPs). Similarity in physiology, size, and life expectancy, as well as the cross-reactivity of most reagents and medications, allows for the development of treatment strategies with rapid translation to clinical applications. Especially after the adverse events of HSC gene therapy observed in the late 1990s, the ability to perform autologous transplants and follow the animals long-term makes the NHP a very attractive model to test the efficiency, feasibility, and safety of new HSC-mediated gene-transfer/editing and transplantation approaches. This protocol describes a method to phenotypically characterize functionally distinct NHP HSPC subsets within specimens or stem cell products from three different NHP species. Procedures are based on the flow-cytometric assessment of cell surface markers that are cross-reactive between human and NHP to allow for immediate clinical translation. This protocol has been successfully used for the quality control of enriched, cultured, and gene-modified NHP CD34+ hematopoietic stem and progenitor cells (HSPCs) as well as sort-purified CD34 subsets for transplantation in the pig-tailed, cynomolgus, and rhesus macaque. It further allows the longitudinal assessment of primary specimens taken during long-term follow-up post-transplantation in order to monitor homing, engraftment, and reconstitution of the bone marrow stem cell compartment. |
1,792 | Malto-oligosaccharides as critical functional ingredient: a review of their properties, preparation, and versatile applications | Malto-oligosaccharides (MOS) are linear oligosaccharides of glucose linked by α-1,4 glycosidic bonds, which have a diverse range of functional applications in the food, pharmaceutical, and other industries. They can be used to modify the physicochemical properties of foods, thereby improving their quality attributes, or they can be included as prebiotics to improve their nutritional attributes. The degree of polymerization of MOS can be controlled by using specific enzymes, which means their functionality can be tuned for specific applications. In this article, we review the chemical structure, physicochemical properties, preparation, and functional applications of MOS in the food, health care, and other industries. In addition, we offer an overview of this saccharide from the perspective of a prospective functional ingredient, which we feel is lacking in the current literature. MOS can be expected to provide a promising novel substitute for functional oligosaccharides. |
1,793 | Determining of acceptable limits of uncertainties in radiotoxicology by using a method based only on external quality assessment results | For more than 20 years, PROCORAD has organized proficiency tests in radiotoxicology to estimate the performance of laboratories in terms of accuracy. The results of these intercomparisons also show a wide range of uncertainty values provided by these laboratories. The regulatory obligations related to accreditation, which require that the laboratory define performance requirements for measurement uncertainty and regularly review estimates of measurement uncertainty, together with the lack of published or required acceptable limits in radiotoxicology, led PROCORAD to propose acceptable uncertainty limits based on the "state of the art". For this purpose, PROCORAD used a new method to estimate uncertainties based only on external quality assessment results: the Long-Term Uncertainties Method (LTUM). This method was applied to tritium in urine, gamma/X emitters in urine, and actinides in fecal ashes. This study validated LTUM as a simple and reliable method for estimating measurement uncertainties in radiotoxicology. Three targets (optimal, desirable and minimal) for acceptable limits were defined using the first quartiles, medians and third quartiles of the uncertainties provided by LTUM: 10%, 15% and 20%, respectively, for H-3; 18%, 21% and 27% for gamma/X emitters; 15%, 25% and 34% for Pu-238; 11%, 17% and 26% for Pu-239; 19%, 26% and 38% for Am-241; and 19%, 30% and 52% for Cm-244. Applying the LTUM method to the results provided by laboratories shows an underestimation of the uncertainties calculated by the laboratories, indicating that not all influencing factors are always taken into account in the uncertainty measurements. |
1,794 | Hibiscus sabdariffa L. polyphenolic-rich extract promotes muscle glucose uptake and inhibits intestinal glucose absorption with concomitant amelioration of Fe2+ -induced hepatic oxidative injury | In this study, the antidiabetic effectiveness of Hibiscus sabdariffa and its protective function against Fe2+ -induced oxidative hepatic injury were elucidated using in vitro, in silico, and ex vivo studies. Oxidative damage was induced in hepatic tissue by incubation with 0.1 mM ferrous sulfate (FeSO4); the tissue was then treated with different concentrations of crude extracts (ethyl acetate, ethanol, and aqueous) of H. sabdariffa flowers for 30 min at 37°C. When compared to the ethyl acetate and aqueous extracts, the ethanolic extract displayed the most potent scavenging activity in ferric-reducing antioxidant power (FRAP), 1,1-diphenyl-2-picrylhydrazyl (DPPH), and nitric oxide (NO) assays, with IC50 values of 2.8 μl/ml, 3.3 μl/ml, and 9.2 μl/ml, respectively. The extracts significantly suppressed α-glucosidase and α-amylase activities (p < .05), with the ethanolic extract demonstrating the highest activity. H. sabdariffa significantly (p < .05) raised reduced glutathione (GSH) levels while simultaneously decreasing malondialdehyde (MDA) and NO levels and increasing superoxide dismutase (SOD) and catalase activity in Fe2+ -induced oxidative hepatic injury. The plant extract inhibited intestinal glucose absorption and increased muscular glucose uptake. The extract revealed the presence of several phenolic compounds when subjected to gas chromatography-mass spectrometry (GC-MS) screening, and these compounds were docked with α-glucosidase and α-amylase. The molecular docking showed that the compound 4-(3,5-Di-tert-butyl-4-hydroxyphenyl)butyl acrylate interacted strongly with α-glucosidase and α-amylase and had the lowest free binding energy compared to the other compounds and acarbose. These results suggest that H. sabdariffa has promising antioxidant and antidiabetic activity. PRACTICAL APPLICATIONS: In recent years, there has been increased concern about the side effects of synthetic antidiabetic drugs, as well as their high cost, especially in impoverished nations. This has instigated a radical shift back towards the use of traditional plants, which are rich in phytochemicals. Among these plants, H. sabdariffa has been used to treat diabetes in traditional medicine. In the present study, H. sabdariffa extracts demonstrated the ability to inhibit carbohydrate-digesting enzymes, facilitate muscle glucose uptake, and attenuate oxidative stress in oxidative hepatic injury, demonstrating H. sabdariffa's potential to protect against oxidative damage and the complications associated with diabetes. Consumption of Hibiscus tea or juice may be a potential source for developing an antidiabetic drug. |
1,795 | Modified Visibility Restoration-Based Contrast Enhancement Algorithm for Colour Foggy Images | The visibility enhancement of colour foggy images is a very challenging task for many real-time applications. In this paper, we propose an efficient and robust algorithm for the visibility enhancement of colour foggy images. The proposed algorithm works in two steps: in the first step, a modified visibility restoration algorithm is applied for visibility enhancement, and in the second step, a sigmoid-function-based contrast enhancement technique is applied for colour contrast enhancement. The quantitative and qualitative results of the proposed and other state-of-the-art algorithms are obtained in terms of the fog aware density evaluator, fog reduction factor (FRF), measure of enhancement (EME), and measure of enhancement factor (EMF) on different colour foggy image databases. The results reveal the strength of the proposed algorithm quantitatively, on the basis of fog thickness estimation from the original and output-enhanced images, FRF, EME, and EMF for colour foggy images. Experimental results show that the proposed algorithm provides better quantitative and qualitative results compared to other state-of-the-art algorithms for colour foggy images. Overall, the proposed algorithm is highly efficient for the visibility enhancement of colour foggy images. |
1,796 | Sparsity-Based Poisson Denoising With Dictionary Learning | The problem of Poisson denoising appears in various imaging applications, such as low-light photography, medical imaging, and microscopy. In cases of high SNR, several transformations exist to convert the Poisson noise into additive, independent and identically distributed Gaussian noise, for which many effective algorithms are available. However, in a low-SNR regime, these transformations are significantly less accurate, and a strategy that relies directly on the true noise statistics is required. Salmon et al. took this route, proposing a patch-based exponential image representation model based on a Gaussian mixture model, leading to state-of-the-art results. In this paper, we propose to harness sparse-representation modeling of the image patches, adopting the same exponential idea. Our scheme uses a greedy pursuit with a bootstrapping-based stopping condition and dictionary learning within the denoising process. The reconstruction performance of the proposed scheme is competitive with leading methods in high SNR and achieves state-of-the-art results in cases of low SNR. |
1,797 | Multi-Sensory Color Expression with Sound and Temperature in Visual Arts Appreciation for People with Visual Impairment | For years, the HCI community's research has focused on the senses of hearing and sight. In recent times, however, there has been increased interest in using other senses, such as smell or touch, accompanied by growing research on sensory substitution techniques and multi-sensory systems. Contemporary art has also been influenced by this trend, and the number of artists interested in creating novel multi-sensory works of art has increased substantially. As a result, the opportunities for visually impaired people to experience artworks in different ways are also expanding. In spite of all this, research focusing on multimodal systems for experiencing visual arts remains limited, and user tests comparing different modalities and senses, particularly in the field of art, are insufficient. This paper attempts to design a multi-sensory mapping that conveys color to visually impaired people using musical sounds and temperature cues. Through user tests and surveys with a total of 18 participants, we show that this multi-sensory system is properly designed to allow the user to distinguish and experience a total of 24 colors. The tests consist of several semantic correlational adjective-based surveys comparing the different modalities to find the best way to express colors through musical sounds and temperature cues, based on previously well-established sound-color and temperature-color coding algorithms. In addition, the resulting final algorithm is tested with 12 more users. |
1,798 | Intensity Filtering and Group Fusion for Accurate Mobile Place Recognition | Mobile place recognition aims to match query images captured by mobile devices with database images collected from vehicle-mounted cameras, such as Google Street View panoramas, and plays an important role in many applications. However, current solutions derived from image retrieval suffer from low precision on top results, which significantly limits their usability. By investigating the state-of-the-art approaches, we find that poor illumination significantly affects the initial results, and that these initial results are correlated in both spatial location and visual content, which can be utilized for further improvement. In this paper, we propose an effective approach to rerank the initial top-ranked results to improve recognition recall. First, initial retrieval results with low intensity are filtered out, as they usually depict irrelevant places with dark backgrounds. Second, the correlation between top-ranked results is modeled as a reciprocal neighborhood graph by jointly considering spatial location and visual similarity. With this graph, the initial results are reranked based on voting similarity from the query and reciprocal neighbors. In this way, the underlying structure of the initial retrieval results is exploited for refinement. Experimental results on the public Tokyo 24/7 and San Francisco landmark datasets demonstrate that the proposed approach achieves consistent improvements in recognition recall over the state-of-the-art approach. |
1,799 | Safety of β-hydroxybutyrate salts as a novel food pursuant to Regulation (EU) 2015/2283 | Following a request from the European Commission, the EFSA Panel on Nutrition, Novel Foods and Food Allergens (NDA) was asked to deliver an opinion on β-hydroxybutyrate (BHB) salts as a novel food (NF) pursuant to Regulation (EU) 2015/2283. The NF consists of sodium, magnesium and calcium BHB salts, and is proposed to be used by adults as a food ingredient in a number of food categories and as food supplement. The data provided by the applicant about the identity, the production process and the compositional data of the NF over the course of the risk assessment period were overall considered unsatisfactory. The Panel noted inconsistencies in the reporting of the test item used in the subchronic toxicity study and human studies provided by the applicant. Owing to these deficiencies, the Panel cannot establish a safe intake level of the NF. The Panel concludes that the safety of the NF has not been established. |