import streamlit as st
data='''
02 Feb 2023 | Self-Programming Artificial Intelligence Using Code-Generating Language Models | ⬇️
Alex Sheng, Shankar Padmanabhan
Recent progress in large-scale language models has enabled breakthroughs in previously intractable computer programming tasks. Prior work in meta-learning and neural architecture search has led to substantial successes across various task domains, spawning myriad approaches for algorithmically optimizing the design and learning dynamics of deep learning models. At the intersection of these research areas, we implement a code-generating language model with the ability to modify its own source code. Self-programming AI algorithms have been of interest since the dawn of AI itself. Although various theoretical formulations of generalized self-programming AI have been posed, no such system has been successfully implemented to date under real-world computational constraints. Applying AI-based code generation to AI itself, we develop and experimentally validate the first practical implementation of a self-programming AI system. We empirically show that a self-programming AI implemented using a code generation model can successfully modify its own source code to improve performance and program sub-models to perform auxiliary tasks. Our model can self-modify various properties including model architecture, computational capacity, and learning dynamics.
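
A minimal sketch of the self-programming loop described above, with hypothetical stubs (`propose_patch`, `evaluate`) standing in for the code-generation model and its fitness measure; this is an illustration of the idea, not the authors' implementation:

```python
def propose_patch(source: str) -> str:
    # Hypothetical stub for the code-generating model: here it simply
    # doubles a hyperparameter to mimic a proposed self-modification.
    return source.replace("HIDDEN_UNITS = 64", "HIDDEN_UNITS = 128")

def evaluate(source: str) -> float:
    # Hypothetical stub: score the candidate program (e.g., validation
    # accuracy of the model that this source code would train).
    return 0.9 if "HIDDEN_UNITS = 128" in source else 0.8

source = "HIDDEN_UNITS = 64"        # stand-in for the system's own source
candidate = propose_patch(source)
if evaluate(candidate) > evaluate(source):
    source = candidate              # accept the self-modification
print(source)
```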
02 Feb 2024 | Is Self-Repair a Silver Bullet for Code Generation? | ⬇️
Theo X. Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama
Large language models have shown remarkable aptitude in code generation, but still struggle to perform complex tasks. Self-repair -- in which the model debugs and repairs its own code -- has recently become a popular way to boost performance in these settings. However, despite its increasing popularity, existing studies of self-repair have been limited in scope; in many settings, its efficacy thus remains poorly understood. In this paper, we analyze Code Llama, GPT-3.5 and GPT-4's ability to perform self-repair on problems taken from HumanEval and APPS. We find that when the cost of carrying out repair is taken into account, performance gains are often modest, vary a lot between subsets of the data, and are sometimes not present at all. We hypothesize that this is because self-repair is bottlenecked by the model's ability to provide feedback on its own code; using a stronger model to artificially boost the quality of the feedback, we observe substantially larger performance gains. Similarly, a small-scale study in which we provide GPT-4 with feedback from human participants suggests that even for the strongest models, self-repair still lags far behind what can be achieved with human-level debugging.
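
The self-repair setting analyzed here can be sketched as generate, test, collect feedback, repair, with a bounded repair budget (the cost the authors account for); all three model calls below are hypothetical stubs:

```python
def generate(task):                 # hypothetical: initial code sample
    return "def add(a, b): return a - b"

def critique(code, error):          # hypothetical: feedback on the failure
    return "subtraction used where addition was required"

def repair(code, feedback):         # hypothetical: repaired code sample
    return "def add(a, b): return a + b"

def run_tests(code):
    env = {}
    exec(code, env)
    if env["add"](2, 3) != 5:
        return "add(2, 3) returned " + str(env["add"](2, 3)) + ", expected 5"
    return None

code = generate("add two numbers")
for _ in range(3):                  # bounded repair budget (part of the cost)
    error = run_tests(code)
    if error is None:
        break
    code = repair(code, critique(code, error))
print(code)
```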
24 May 2018 | Neural Network Quine | ⬇️
Oscar Chang, Hod Lipson
Self-replication is a key aspect of biological life that has been largely overlooked in Artificial Intelligence systems. Here we describe how to build and train self-replicating neural networks. The network replicates itself by learning to output its own weights. The network is designed using a loss function that can be optimized with either gradient-based or non-gradient-based methods. We also describe a method we call regeneration to train the network without explicit optimization, by injecting the network with predictions of its own parameters. The best solution for a self-replicating network was found by alternating between regeneration and optimization steps. Finally, we describe a design for a self-replicating neural network that can solve an auxiliary task such as MNIST image classification. We observe that there is a trade-off between the network's ability to classify images and its ability to replicate, but training is biased towards increasing its specialization at image classification at the expense of replication. This is analogous to the trade-off between reproduction and other tasks observed in nature. We suggest that a self-replication mechanism for artificial intelligence is useful because it introduces the possibility of continual improvement through natural selection.
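
The regeneration procedure can be illustrated in a few lines: a tiny network predicts each of its own parameters from a fixed index embedding, and its weights are repeatedly replaced by those predictions in search of a fixed point. A minimal sketch, with dimensions chosen arbitrarily (it may settle on a trivial near-zero fixed point):

```python
import numpy as np

rng = np.random.default_rng(0)
d, h = 8, 16                     # embedding size, hidden units
n = d * h + h + h + 1            # total number of parameters
E = rng.normal(size=(n, d))      # fixed embedding for each parameter index

def predict_own_weights(w):
    # Two-layer net whose flat parameter vector is w; it outputs one
    # prediction per parameter index, i.e., a guess at itself.
    W1 = w[: d * h].reshape(d, h)
    b1 = w[d * h : d * h + h]
    W2 = w[d * h + h : d * h + 2 * h]
    b2 = w[-1]
    return np.tanh(E @ W1 + b1) @ W2 + b2

w = rng.normal(scale=0.1, size=n)
for _ in range(100):
    # "Regeneration": replace the weights with the network's own
    # predictions of them, seeking a fixed point w = f_w(E).
    w_new = predict_own_weights(w)
    err = float(np.mean((w_new - w) ** 2))
    w = w_new
print("self-replication error:", err)
```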
16 Apr 2018 | An AI-driven Malfunction Detection Concept for NFV Instances in 5G | ⬇️
Julian Ahrens, Mathias Strufe, Lia Ahrens, Hans D. Schotten
Efficient network management is one of the key challenges of constantly growing and increasingly complex wide area networks (WAN). The paradigm shift towards virtualized (NFV) and software defined networks (SDN) in the next generation of mobile networks (5G), as well as the latest scientific insights in the field of Artificial Intelligence (AI), enable the transition from today's manually managed networks to fully autonomic and dynamic self-organized networks (SON). This helps to meet KPIs while reducing operational costs (OPEX). In this paper, an AI-driven concept is presented for malfunction detection in NFV applications with the help of semi-supervised learning. For this purpose, a profile of the application under test is created. This profile is then used as a reference to detect abnormal behaviour. For example, if there is a bug in the updated version of the app, it is now possible to react autonomously and roll back the NFV app to a previous version in order to avoid network outages.
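
A toy version of the profiling idea, using a simple z-score profile as a stand-in for the paper's semi-supervised method; the metrics, thresholds, and numbers are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Build a "profile" of the NFV app from metrics observed during normal
# operation (e.g., CPU %, memory GB, latency ms). Semi-supervised in the
# sense that only unlabeled normal data is needed.
normal = rng.normal(loc=[50, 30, 10], scale=[5, 3, 1], size=(1000, 3))
mu, sigma = normal.mean(axis=0), normal.std(axis=0)

def is_malfunction(sample, k=4.0):
    # Flag a sample whose metrics deviate more than k standard deviations
    # from the profile; a real system could then trigger a roll-back.
    z = np.abs((sample - mu) / sigma)
    return bool((z > k).any())

print(is_malfunction(np.array([51, 29, 10])))   # False: looks normal
print(is_malfunction(np.array([95, 29, 10])))   # True: CPU anomaly
```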
09 Jan 2014 | Emotional Responses in Artificial Agent-Based Systems: Reflexivity and Adaptation in Artificial Life | ⬇️
Carlos Pedro Goncalves
The current work addresses a virtual environment with self-replicating agents whose decisions are based on a form of "somatic computation" (soma - body) in which basic emotional responses, taken in parallelism to actual living organisms, are introduced as a way to provide the agents with greater reflexive abilities. The work provides a contribution to the field of Artificial Intelligence (AI) and Artificial Life (ALife) in connection to a neurobiology-based cognitive framework for artificial systems and virtual environments' simulations. The performance of the agents capable of emotional responses is compared with that of self-replicating automata, and the implications of research on emotions and AI, in connection to both virtual agents and robots, are addressed regarding possible future directions and applications.
22 Aug 2023 | Learning to generate and corr- uh I mean repair language in real-time | ⬇️
Arash Eshghi, Arash Ashrafzadeh
In conversation, speakers produce language incrementally, word by word, while continuously monitoring the appropriateness of their own contribution in the dynamically unfolding context of the conversation; and this often leads them to repair their own utterance on the fly. This real-time language processing capacity is furthermore crucial to the development of fluent and natural conversational AI. In this paper, we use a previously learned Dynamic Syntax grammar and the CHILDES corpus to develop, train and evaluate a probabilistic model for incremental generation where input to the model is a purely semantic generation goal concept in Type Theory with Records (TTR). We show that the model's output exactly matches the gold candidate in 78% of cases with a ROUGE-L score of 0.86. We further do a zero-shot evaluation of the ability of the same model to generate self-repairs when the generation goal changes mid-utterance. Automatic evaluation shows that the model can generate self-repairs correctly in 85% of cases. A small human evaluation confirms the naturalness and grammaticality of the generated self-repairs. Overall, these results further highlight the generalisation power of grammar-based models and lay the foundations for more controllable and naturally interactive conversational AI systems.
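
A word-level toy of mid-utterance self-repair (purely illustrative; no Dynamic Syntax or TTR involved): when the goal changes, the generator backtracks to the last word shared with the new goal and restarts with an editing phrase:

```python
# Toy incremental generator: emits words one at a time toward a goal and
# produces a self-repair when the goal changes mid-utterance.
def generate(goal_words, switch_at=None, new_goal=None):
    out = []
    for i, w in enumerate(goal_words):
        if switch_at is not None and i == switch_at:
            # Find the longest shared prefix with the new goal: that is
            # the repair point from which generation restarts.
            common = 0
            while (common < len(out) and common < len(new_goal)
                   and out[common] == new_goal[common]):
                common += 1
            return out + ["uh", "I", "mean"] + new_goal[common:]
        out.append(w)
    return out

print(" ".join(generate(["I", "want", "the", "red", "ball"],
                        switch_at=4,
                        new_goal=["I", "want", "the", "blue", "ball"])))
# -> I want the red uh I mean blue ball
```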
29 Apr 2022 | Self-Aware Feedback-Based Self-Learning in Large-Scale Conversational AI | ⬇️
Pragaash Ponnusamy, Clint Solomon Mathialagan, Gustavo Aguilar, Chengyuan Ma, Chenlei Guo
Self-learning paradigms in large-scale conversational AI agents tend to leverage user feedback in bridging between what they say and what they mean. However, such learning, particularly in Markov-based query rewriting systems, has far from addressed the impact of these models on future training, where successive feedback is inevitably contingent on the rewrite itself, especially in a continually updating environment. In this paper, we explore the consequences of this inherent lack of self-awareness in impairing model performance, ultimately resulting in both Type I and II errors over time. To that end, we propose augmenting the Markov Graph construction with a superposition-based adjacency matrix. Here, our method leverages an induced stochasticity to reactively learn a locally-adaptive decision boundary based on the performance of the individual rewrites in a bi-variate beta setting. We also surface a data augmentation strategy that leverages template-based generation to abridge complex conversation hierarchies of dialogs so as to simplify the learning process. All in all, we demonstrate that our self-aware model improves the overall PR-AUC by 27.45%, achieves a relative defect reduction of up to 31.22%, and is able to adapt more quickly to changes in global preferences across a large number of customers.
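
One way to picture the induced stochasticity is Thompson-sampling-style: each rewrite keeps success/failure counts, and retention is decided by sampling from a Beta posterior. This sketch is a simplified stand-in for the paper's bi-variate beta formulation, with invented counts:

```python
import numpy as np

rng = np.random.default_rng(2)

# Per-rewrite (successes, failures) counts collected from user feedback.
rewrites = {"play imagine -> play imagine dragons": (40, 10),
            "play imagine -> play image":           (5, 25)}

def keep_rewrite(successes, failures):
    # Sample a quality estimate from a Beta posterior; the stochastic
    # draw lets the decision boundary adapt locally as feedback
    # accumulates, instead of freezing an early rewrite in place.
    return rng.beta(successes + 1, failures + 1) > 0.5

for rw, (s, f) in rewrites.items():
    print(rw, "->", "keep" if keep_rewrite(s, f) else "suppress")
```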
06 Nov 2019 | Feedback-Based Self-Learning in Large-Scale Conversational AI Agents | ⬇️
Pragaash Ponnusamy, Alireza Roshan Ghias, Chenlei Guo, Ruhi Sarikaya
Today, most large-scale conversational AI agents (e.g. Alexa, Siri, or Google Assistant) are built using manually annotated data to train the different components of the system. Typically, the accuracy of the ML models in these components is improved by manually transcribing and annotating data. As the scope of these systems increases to cover more scenarios and domains, manual annotation to improve the accuracy of these components becomes prohibitively costly and time consuming. In this paper, we propose a system that leverages user-system interaction feedback signals to automate learning without any manual annotation. Users here tend to modify a previous query in hopes of fixing an error in the previous turn to get the right results. These reformulations are often preceded by defective experiences caused by errors in ASR, NLU, ER or the application. In some cases, users may not properly formulate their requests (e.g. providing a partial title of a song), but gleaning across a wider pool of users and sessions reveals the underlying recurrent patterns. Our proposed self-learning system automatically detects the errors, generates reformulations and deploys fixes to the runtime system to correct different types of errors occurring in different components of the system. In particular, we propose leveraging an absorbing Markov Chain model as a collaborative filtering mechanism in a novel attempt to mine these patterns. We show that our approach is highly scalable, and able to learn reformulations that reduce Alexa-user errors by pooling anonymized data across millions of customers. The proposed self-learning system achieves a win/loss ratio of 11.8 and effectively reduces the defect rate by more than 30% on utterance level reformulations in our production A/B tests. To the best of our knowledge, this is the first self-learning large-scale conversational AI system in production.
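
The absorbing Markov Chain machinery reduces to standard linear algebra: with transient-to-transient transitions Q and transient-to-absorbing transitions R, the absorption probabilities are B = (I - Q)^{-1} R. A sketch with invented transition probabilities:

```python
import numpy as np

# Transient states: 0 = original (defective) utterance, 1 = its rewrite.
# Absorbing states: success, abandon.
Q = np.array([[0.0, 0.6],      # P(original -> rewrite) = 0.6
              [0.0, 0.0]])
R = np.array([[0.1, 0.3],      # from original: P(success), P(abandon)
              [0.8, 0.2]])     # from rewrite:  P(success), P(abandon)

# Fundamental matrix N = (I - Q)^-1; B[i, j] is the probability that a
# session starting in transient state i is absorbed in state j.
N = np.linalg.inv(np.eye(2) - Q)
B = N @ R
print(B)   # B[0, 0]: overall success probability via the rewrite path
```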
11 Dec 2022 | Optimal Seeding and Self-Reproduction from a Mathematical Point of View | ⬇️
Rita Gitik
P. Kabamba developed generation theory as a tool for studying self-reproducing systems. We provide an alternative definition of a generation system and give a complete solution to the problem of finding optimal seeds for a finite self-replicating system. We also exhibit examples illustrating a connection between self-replication and fixed-point theory.
23 Oct 2023 | The Self 2.0: How AI-Enhanced Self-Clones Transform Self-Perception and Improve Presentation Skills | ⬇️
Qingxiao Zheng, Yun Huang
This study explores the impact of AI-generated digital self-clones on improving online presentation skills. We carried out a mixed-design experiment involving 44 international students, comparing self-recorded videos (control) with self-clone videos (AI group) for English presentation practice. The AI videos utilized voice cloning, face swapping, lip-sync, and body-language simulation to refine participants' original presentations in terms of repetition, filler words, and pronunciation. Machine-rated scores indicated enhancements in speech performance for both groups. Though the groups didn't significantly differ, the AI group exhibited a heightened depth of reflection, self-compassion, and a meaningful transition from a corrective to an enhancive approach to self-critique. Within the AI group, congruence between self-perception and AI self-clones resulted in diminished speech anxiety and increased enjoyment. Our findings recommend the ethical employment of digital self-clones to enhance the emotional and cognitive facets of skill development.
31 Oct 2022 | Generating Sequences by Learning to Self-Correct | ⬇️
Sean Welleck, Ximing Lu, Peter West, Faeze Brahman, Tianxiao Shen, Daniel Khashabi, Yejin Choi
Sequence generation applications require satisfying semantic constraints, such as ensuring that programs are correct, using certain keywords, or avoiding undesirable content. Language models, whether fine-tuned or prompted with few-shot demonstrations, frequently violate these constraints, and lack a mechanism to iteratively revise their outputs. Moreover, some powerful language models are of extreme scale or inaccessible, making it inefficient, if not infeasible, to update their parameters for task-specific adaptation. We present Self-Correction, an approach that decouples an imperfect base generator (an off-the-shelf language model or supervised sequence-to-sequence model) from a separate corrector that learns to iteratively correct imperfect generations. To train the corrector, we propose an online training procedure that can use either scalar or natural language feedback on intermediate imperfect generations. We show that Self-Correction improves upon the base generator in three diverse generation tasks - mathematical program synthesis, lexically-constrained generation, and toxicity control - even when the corrector is much smaller than the base generator.
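
The decoupling is easy to sketch: a frozen base generator, a separate corrector, and an iterative loop that stops once feedback is clean. Both models and the feedback function are hypothetical stubs here:

```python
def base_generator(prompt):
    return "x = 2 + 2  # answer: 5"     # imperfect off-the-shelf generation

def feedback(output):
    # Scalar or natural-language feedback; None means no violation.
    return None if "answer: 4" in output else "comment states the wrong sum"

def corrector(output, fb):
    return "x = 2 + 2  # answer: 4"     # revised generation

out = base_generator("add 2 and 2")
for _ in range(3):                       # iterative revision
    fb = feedback(out)
    if fb is None:
        break
    out = corrector(out, fb)
print(out)
```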
28 Dec 2023 | AI Content Self-Detection for Transformer-based Large Language Models | ⬇️
Antônio Junior Alves Caiado and Michael Hahsler
The usage of generative artificial intelligence (AI) tools based on large language models, including ChatGPT, Bard, and Claude, for text generation has many exciting applications with the potential for phenomenal productivity gains. One issue is authorship attribution when using AI tools. This is especially important in an academic setting where the inappropriate use of generative AI tools may hinder student learning or stifle research by creating a large amount of automatically generated derivative work. Existing plagiarism detection systems can trace the source of submitted text but are not yet equipped with methods to accurately detect AI-generated text. This paper introduces the idea of direct origin detection and evaluates whether generative AI systems can recognize their output and distinguish it from human-written texts. We argue why current transformer-based models may be able to self-detect their own generated text and perform a small empirical study using zero-shot learning to investigate if that is the case. Results reveal varying capabilities of AI systems to identify their generated text. Google's Bard model exhibits the largest capability of self-detection with an accuracy of 94%, followed by OpenAI's ChatGPT with 83%. On the other hand, Anthropic's Claude model seems to be not able to self-detect.
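
The zero-shot self-detection setup amounts to a single prompt per text; `query_model` below is a hypothetical stub for an API call to ChatGPT, Bard, or Claude, and the prompt wording is an assumption:

```python
def query_model(prompt: str) -> str:
    return "yes"    # placeholder for the actual API response

def self_detect(text: str) -> bool:
    # Ask the model, zero-shot, whether it could have produced the text.
    prompt = ("Could the following text have been generated by you? "
              "Answer yes or no. Text: " + text)
    return query_model(prompt).strip().lower().startswith("yes")

print(self_detect("Large language models have shown remarkable aptitude."))
```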
24 Oct 2022 | Secure and Trustworthy Artificial Intelligence-Extended Reality (AI-XR) for Metaverses | ⬇️
Adnan Qayyum, Muhammad Atif Butt, Hassan Ali, Muhammad Usman, Osama Halabi, Ala Al-Fuqaha, Qammer H. Abbasi, Muhammad Ali Imran, and Junaid Qadir
Metaverse is expected to emerge as a new paradigm for the next-generation Internet, providing fully immersive and personalised experiences to socialize, work, and play in self-sustaining and hyper-spatio-temporal virtual world(s). The advancements in different technologies like augmented reality, virtual reality, extended reality (XR), artificial intelligence (AI), and 5G/6G communication will be the key enablers behind the realization of AI-XR metaverse applications. While AI itself has many potential applications in the aforementioned technologies (e.g., avatar generation, network optimization, etc.), ensuring the security of AI in critical applications like AI-XR metaverse applications is profoundly crucial to avoid undesirable actions that could undermine users' privacy and safety, consequently putting their lives in danger. To this end, we attempt to analyze the security, privacy, and trustworthiness aspects associated with the use of various AI techniques in AI-XR metaverse applications. Specifically, we discuss numerous such challenges and present a taxonomy of potential solutions that could be leveraged to develop secure, private, robust, and trustworthy AI-XR applications. To highlight the real implications of AI-associated adversarial threats, we designed a metaverse-specific case study and analyzed it through the adversarial lens. Finally, we elaborate upon various open issues that require further research interest from the community.
11 Sep 2023 | Self-Edit: Fault-Aware Code Editor for Code Generation | ⬇️
Kechi Zhang, Zhuo Li, Jia Li, Ge Li, Zhi Jin
Large language models (LLMs) have demonstrated an impressive ability to generate code for competitive programming tasks. However, with limited sample numbers, LLMs still suffer from poor accuracy. Inspired by the process of human programming, we propose a generate-and-edit approach named Self-Edit that utilizes execution results of the generated code from LLMs to improve the code quality on the competitive programming task. We execute the generated code on the example test case provided in the question and wrap execution results into a supplementary comment. Utilizing this comment as guidance, our fault-aware code editor is employed to correct errors in the generated code. We perform extensive evaluations across two competitive programming datasets with nine different LLMs. Compared to directly generating from LLMs, our approach can improve the average pass@1 by 89% on APPS-dev, 31% on APPS-test, and 48% on HumanEval over nine popular code generation LLMs with parameter sizes ranging from 110M to 175B. Compared to other post-processing methods, our method demonstrates superior accuracy and efficiency.
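
The generate-and-edit pipeline can be sketched as: run the generated code on the example test, wrap the outcome into a supplementary comment, and hand both to a fault-aware editor (stubbed here; the buggy sample and the edit are invented):

```python
generated = "def solve(n): return n // 0"    # buggy LLM output

def run_example_test(code):
    env = {}
    try:
        exec(code, env)
        env["solve"](4)                       # the example test case
        return "passed"
    except Exception as e:
        return type(e).__name__ + ": " + str(e)

def editor(code, comment):                    # hypothetical editor model
    return code.replace("n // 0", "n // 2")

result = run_example_test(generated)
commented = generated + "  # execution result: " + result
if result != "passed":
    generated = editor(commented, result)
print(run_example_test(generated))            # passed
```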
04 Jul 2023 | Self-Consuming Generative Models Go MAD | ⬇️
Sina Alemohammad, Josue Casco-Rodriguez, Lorenzo Luzi, Ahmed Imtiaz Humayun, Hossein Babaei, Daniel LeJeune, Ali Siahkoohi, Richard G. Baraniuk
Seismic advances in generative AI algorithms for imagery, text, and other data types have led to the temptation to use synthetic data to train next-generation models. Repeating this process creates an autophagous (self-consuming) loop whose properties are poorly understood. We conduct a thorough analytical and empirical analysis, using state-of-the-art generative image models, of three families of autophagous loops that differ in how fixed or fresh real training data is available through the generations of training, and in whether the samples from previous-generation models have been biased to trade off data quality versus diversity. Our primary conclusion across all scenarios is that without enough fresh real data in each generation of an autophagous loop, future generative models are doomed to have their quality (precision) or diversity (recall) progressively decrease. We term this condition Model Autophagy Disorder (MAD), making an analogy to mad cow disease.
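
A fully synthetic illustration of an autophagous loop: each generation fits a Gaussian to the previous generation's samples and resamples with a mild quality bias (keeping the samples nearest the mean), after which diversity measurably collapses. The specific bias is an assumption for illustration, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(3)

samples = rng.normal(0.0, 1.0, size=5000)    # generation 0: "real" data
for gen in range(1, 6):
    # Fit a model to the previous generation, then sample from it.
    mu, sigma = samples.mean(), samples.std()
    samples = rng.normal(mu, sigma, size=5000)
    # Quality bias: keep the 80% of samples nearest the mean.
    keep = np.argsort(np.abs(samples - mu))[:4000]
    samples = samples[keep]
    print("generation", gen, "std:", round(float(samples.std()), 3))
# The standard deviation (diversity) shrinks generation after generation.
```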
25 Oct 2022 | A Streamlit-based Artificial Intelligence Trust Platform for Next-Generation Wireless Networks | ⬇️
M. Kuzlu, F. O. Catak, S. Sarp, U. Cali, and O. Gueler
With the rapid development and integration of artificial intelligence (AI) methods in next-generation networks (NextG), AI algorithms have provided significant advantages for NextG in terms of frequency spectrum usage, bandwidth, latency, and security. A key feature of NextG is the integration of AI, i.e., a self-learning architecture based on self-supervised algorithms, to improve the performance of the network. A secure AI-powered structure is also expected to protect NextG networks against cyber-attacks. However, AI itself may be attacked, e.g., through model poisoning by adversaries, resulting in cybersecurity violations. This paper proposes an AI trust platform using Streamlit for NextG networks that allows researchers to evaluate, defend, certify, and verify their AI models and applications against adversarial threats of evasion, poisoning, extraction, and interference.
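
A hypothetical skeleton of such a Streamlit trust platform, with the evaluation backend stubbed out; the widget labels, file types, and the reported metric are invented for illustration:

```python
import streamlit as st

st.title("AI Trust Platform for NextG")
model_file = st.file_uploader("Upload AI model", type=["h5", "pt", "onnx"])
attack = st.selectbox("Threat", ["evasion", "poisoning", "extraction"])
if model_file is not None and st.button("Evaluate"):
    # A real platform would hand the model to an adversarial-robustness
    # backend here; this sketch only reports a placeholder result.
    st.write("Running", attack, "evaluation on", model_file.name, "...")
    st.metric("Robust accuracy (stub)", "87.5%")
```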
31 Dec 2023 | Generation Z's Ability to Discriminate Between AI-generated and Human-Authored Text on Discord | ⬇️
Dhruv Ramu and Rishab Jain and Aditya Jain
The growing popularity of generative artificial intelligence (AI) chatbots such as ChatGPT is having transformative effects on social media. As the prevalence of AI-generated content grows, concerns have been raised regarding privacy and misinformation online. Among social media platforms, Discord enables AI integrations -- making their primarily "Generation Z" userbase particularly exposed to AI-generated content. We surveyed Generation Z aged individuals (n = 335) to evaluate their proficiency in discriminating between AI-generated and human-authored text on Discord. The investigation employed one-shot prompting of ChatGPT, disguised as a text message received on the Discord.com platform. We explore the influence of demographic factors on ability, as well as participants' familiarity with Discord and artificial intelligence technologies. We find that Generation Z individuals are unable to discern between AI and human-authored text (p = 0.011), and that those with lower self-reported familiarity with Discord demonstrated an improved ability in identifying human-authored text compared to those with self-reported experience with AI (p << 0.0001). Our results suggest that there is a nuanced relationship between AI technology and popular modes of communication for Generation Z, contributing valuable insights into human-computer interactions, digital communication, and artificial intelligence literacy.
25 Apr 2022 | Crystal Transformer: Self-learning neural language model for Generative and Tinkering Design of Materials | ⬇️
Lai Wei, Qinyang Li, Yuqi Song, Stanislav Stefanov, Edirisuriya M. D. Siriwardane, Fanglin Chen, Jianjun Hu
Self-supervised neural language models have recently achieved unprecedented success, from natural language processing to learning the languages of biological sequences and organic molecules. These models have demonstrated superior performance in the generation, structure classification, and functional predictions for proteins and molecules with learned representations. However, most of the masking-based pre-trained language models are not designed for generative design, and their black-box nature makes it difficult to interpret their design logic. Here we propose BLMM Crystal Transformer, a neural network based probabilistic generative model for generative and tinkering design of inorganic materials. Our model is built on the blank filling language model for text generation and has demonstrated unique advantages in learning the "materials grammars" together with high-quality generation, interpretability, and data efficiency. It can generate chemically valid materials compositions with as high as 89.7% charge neutrality and 84.8% balanced electronegativity, which are more than 4 and 8 times higher compared to a pseudo random sampling baseline. The probabilistic generation process of BLMM allows it to recommend tinkering operations based on learned materials chemistry and makes it useful for materials doping. Combined with the TCSP crystal structure prediction algorithm, we have applied our model to discover a set of new materials, as validated using DFT calculations. Our work thus brings unsupervised transformer language model based generative artificial intelligence to inorganic materials. A user-friendly web app has been developed for computational materials doping and can be accessed freely at www.materialsatlas.org/blmtinker.
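
The chemical-validity criterion mentioned above can be made concrete: charge neutrality means the oxidation states of a composition sum to zero, which is already enough to fill a blank in a template. A toy sketch with a hand-picked oxidation-state table (an assumption for illustration, not the paper's learned model):

```python
# Common oxidation states for a few elements (assumed for illustration).
OX = {"Li": +1, "Na": +1, "Sr": +2, "Ti": +4, "O": -2, "F": -1}

def charge_neutral(formula):
    # formula: list of (element, count) pairs
    return sum(OX[el] * n for el, n in formula) == 0

# Blank filling: which element makes "Sr ? O3" charge neutral?
candidates = [el for el in OX
              if charge_neutral([("Sr", 1), (el, 1), ("O", 3)])]
print(candidates)   # ['Ti']: +2 +4 - 6 = 0
```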
09 Jun 2023 | Understanding Telecom Language Through Large Language Models | ⬇️
Lina Bariah and Hang Zou and Qiyang Zhao and Belkacem Mouhouche and Faouzi Bader and Merouane Debbah
The recent progress of artificial intelligence (AI) opens up new frontiers in the possibility of automating many tasks involved in Telecom networks design, implementation, and deployment. This has been further pushed forward with the evolution of generative AI, including the emergence of large language models (LLMs), which is believed to be the cornerstone toward realizing self-governed, interactive AI agents. Motivated by this, in this paper, we aim to adapt the paradigm of LLMs to the Telecom domain. In particular, we fine-tune several LLMs, including BERT, distilled BERT, RoBERTa and GPT-2, to the Telecom domain languages, and demonstrate a use case for identifying the 3rd Generation Partnership Project (3GPP) standard working groups. We consider training the selected models on 3GPP technical documents (Tdoc) pertinent to years 2009-2019 and predict the Tdoc categories in years 2020-2023. The results demonstrate that fine-tuning the BERT and RoBERTa models achieves 84.6% accuracy, while the GPT-2 model achieves 83% in identifying 3GPP working groups. The distilled BERT model, with around 50% fewer parameters, achieves performance similar to the others. This corroborates that fine-tuning pretrained LLMs can effectively identify the categories of Telecom language. The developed framework is a stepping stone towards realizing intent-driven and self-evolving wireless networks from Telecom languages, and paves the way for the implementation of generative AI in the Telecom domain.
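
A minimal fine-tuning sketch in the spirit of the paper, using Hugging Face Transformers; the two Tdoc snippets, labels, and label count are invented placeholders, and a real run would use the 3GPP corpus:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)

texts = ["Proposal on NR beam management",      # invented Tdoc snippets
         "Charging principles for 5GS"]
labels = torch.tensor([0, 1])                   # e.g., RAN1 vs SA5

batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
opt = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for _ in range(3):                              # a few illustrative steps
    out = model(**batch, labels=labels)
    out.loss.backward()
    opt.step()
    opt.zero_grad()
print(float(out.loss))
```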
06 Oct 2020 | Self-Supervised Variational Auto-Encoders | ⬇️
Ioannis Gatopoulos and Jakub M. Tomczak
Density estimation, compression, and data generation are crucial tasks in artificial intelligence. Variational Auto-Encoders (VAEs) constitute a single framework to achieve these goals. Here, we present a novel class of generative models, called the self-supervised Variational Auto-Encoder (selfVAE), which utilizes deterministic and discrete variational posteriors. This class of models allows us to perform both conditional and unconditional sampling while simplifying the objective function. First, we use a single self-supervised transformation as a latent variable, where the transformation is either downscaling or edge detection. Next, we consider a hierarchical architecture, i.e., multiple transformations, and we show its benefits compared to the VAE. The flexibility of selfVAE in data reconstruction finds a particularly interesting use case in data compression tasks, where we can trade off memory for better data quality, and vice versa. We present the performance of our approach on three benchmark image datasets (CIFAR-10, Imagenette64, and CelebA).
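
The core idea of a deterministic, self-supervised latent can be sketched with downscaling as the transformation and a small decoder trained to invert it; this captures only the deterministic-transform component, not the full variational model, and uses a random stand-in batch:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def transform(x):                    # deterministic "latent": a downscale
    return F.avg_pool2d(x, 2)        # (B,1,28,28) -> (B,1,14,14)

class TinyDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(1, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid())
    def forward(self, y):
        return self.net(y)           # 14x14 -> 28x28 reconstruction

dec = TinyDecoder()
opt = torch.optim.Adam(dec.parameters(), lr=1e-3)
x = torch.rand(8, 1, 28, 28)         # stand-in batch (random pixels)
for _ in range(5):                   # a few illustrative steps
    loss = F.binary_cross_entropy(dec(transform(x)), x)
    opt.zero_grad()
    loss.backward()
    opt.step()
print(float(loss))
```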
Date: 02 Feb 2023
Title: Self-Programming Artificial Intelligence Using Code-Generating Language Models
Abstract Link: https://arxiv.org/abs/2205.00167
PDF Link: https://arxiv.org/pdf/2205.00167
Date: 02 Feb 2024
Title: Is Self-Repair a Silver Bullet for Code Generation?
Abstract Link: https://arxiv.org/abs/2306.09896
PDF Link: https://arxiv.org/pdf/2306.09896
Date: 24 May 2018
Title: Neural Network Quine
Abstract Link: https://arxiv.org/abs/1803.05859
PDF Link: https://arxiv.org/pdf/1803.05859
Date: 16 Apr 2018
Title: An AI-driven Malfunction Detection Concept for NFV Instances in 5G
Abstract Link: https://arxiv.org/abs/1804.05796
PDF Link: https://arxiv.org/pdf/1804.05796
Date: 09 Jan 2014
Title: Emotional Responses in Artificial Agent-Based Systems: Reflexivity and Adaptation in Artificial Life
Abstract Link: https://arxiv.org/abs/1401.2121
PDF Link: https://arxiv.org/pdf/1401.2121
Date: 22 Aug 2023
Title: Learning to generate and corr- uh I mean repair language in real-time
Abstract Link: https://arxiv.org/abs/2308.11683
PDF Link: https://arxiv.org/pdf/2308.11683
Date: 29 Apr 2022
Title: Self-Aware Feedback-Based Self-Learning in Large-Scale Conversational AI
Abstract Link: https://arxiv.org/abs/2205.00029
PDF Link: https://arxiv.org/pdf/2205.00029
Date: 06 Nov 2019
Title: Feedback-Based Self-Learning in Large-Scale Conversational AI Agents
Abstract Link: https://arxiv.org/abs/1911.02557
PDF Link: https://arxiv.org/pdf/1911.02557
Date: 11 Dec 2022
Title: Optimal Seeding and Self-Reproduction from a Mathematical Point of View
Abstract Link: https://arxiv.org/abs/1806.09506
PDF Link: https://arxiv.org/pdf/1806.09506
Date: 23 Oct 2023
Title: The Self 2.0: How AI-Enhanced Self-Clones Transform Self-Perception and Improve Presentation Skills
Abstract Link: https://arxiv.org/abs/2310.15112
PDF Link: https://arxiv.org/pdf/2310.15112
Date: 31 Oct 2022
Title: Generating Sequences by Learning to Self-Correct
Abstract Link: https://arxiv.org/abs/2211.00053
PDF Link: https://arxiv.org/pdf/2211.00053
Date: 28 Dec 2023
Title: AI Content Self-Detection for Transformer-based Large Language Models
Abstract Link: https://arxiv.org/abs/2312.17289
PDF Link: https://arxiv.org/pdf/2312.17289
Date: 24 Oct 2022
Title: Secure and Trustworthy Artificial Intelligence-Extended Reality (AI-XR) for Metaverses
Abstract Link: https://arxiv.org/abs/2210.13289
PDF Link: https://arxiv.org/pdf/2210.13289
Date: 11 Sep 2023
Title: Self-Edit: Fault-Aware Code Editor for Code Generation
Abstract Link: https://arxiv.org/abs/2305.04087
PDF Link: https://arxiv.org/pdf/2305.04087
Date: 04 Jul 2023
Title: Self-Consuming Generative Models Go MAD
Abstract Link: https://arxiv.org/abs/2307.01850
PDF Link: https://arxiv.org/pdf/2307.01850
Date: 25 Oct 2022
Title: A Streamlit-based Artificial Intelligence Trust Platform for Next-Generation Wireless Networks
Abstract Link: https://arxiv.org/abs/2211.12851
PDF Link: https://arxiv.org/pdf/2211.12851
Date: 31 Dec 2023
Title: Generation Z's Ability to Discriminate Between AI-generated and Human-Authored Text on Discord
Abstract Link: https://arxiv.org/abs/2401.04120
PDF Link: https://arxiv.org/pdf/2401.04120
Date: 25 Apr 2022
Title: Crystal Transformer: Self-learning neural language model for Generative and Tinkering Design of Materials
Abstract Link: https://arxiv.org/abs/2204.11953
PDF Link: https://arxiv.org/pdf/2204.11953
Date: 09 Jun 2023
Title: Understanding Telecom Language Through Large Language Models
Abstract Link: https://arxiv.org/abs/2306.07933
PDF Link: https://arxiv.org/pdf/2306.07933
Date: 06 Oct 2020
Title: Self-Supervised Variational Auto-Encoders
Abstract Link: https://arxiv.org/abs/2010.02014
PDF Link: https://arxiv.org/pdf/2010.02014
🔍 Run of Multi-Agent System Paper Summary Spec is Complete
Start time: 2024-06-09 08:33:38
Finish time: 2024-06-09 08:34:11
'''
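# Render the embedded paper digest as markdown in the Streamlit app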
st.markdown(data)