ehsanaghaei committed on
Commit
4c48ccd
1 Parent(s): 0391fb4

Update README.md

Files changed (1):
  1. README.md +11 -1
README.md CHANGED
@@ -5,7 +5,17 @@ language:
 tags:
 - cybersecurity
 widget:
-- text: "Virus causes <maks>."
+- text: "Native API functions such as <mask>, may be directly invoked via system calls/syscalls, but these features are also often exposed to user-mode applications via interfaces and libraries."
+  example_title: Native API functions
+
+- text: "One way of explicitly assigning the PPID of a new process is via the <mask> API call, which supports a parameter that defines the PPID to use."
+  example_title: Assigning the PPID of a new process
+
+- text: "Enable Safe DLL Search Mode to force search for system DLLs in directories with greater restrictions (e.g. %<mask>%) to be used before local directory DLLs (e.g. a user's home directory)."
+  example_title: Enable Safe DLL Search Mode
+
+- text: "GuLoader is a file downloader that has been used since at least December 2019 to distribute a variety of <mask>, including NETWIRE, Agent Tesla, NanoCore, and FormBook."
+  example_title: GuLoader is a file downloader
 ---
 # SecureBERT+
 This model is an improved version of [SecureBERT](https://huggingface.co/ehsanaghaei/SecureBERT), trained on a corpus eight times larger than its predecessor's, leveraging the computational power of 8xA100 GPUs. This version, SecureBERT+, yields an average improvement of 9% on the Masked Language Model (MLM) task, a substantial step toward stronger language understanding and representation learning in the cybersecurity domain.
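The widget prompts added in this commit can also be exercised programmatically. Below is a minimal sketch (not part of the model card itself) using the Hugging Face `transformers` fill-mask pipeline; the repo id `ehsanaghaei/SecureBERT_Plus` is an assumption inferred from the model name, so verify the exact identifier on the Hub before use.

```python
# Minimal sketch: querying SecureBERT+ for masked-token predictions.
# NOTE: the repo id below is an assumption (the page only links to
# ehsanaghaei/SecureBERT); check the Hub for the exact identifier.
MODEL_ID = "ehsanaghaei/SecureBERT_Plus"

# One of the widget prompts from the README diff above.
PROMPT = (
    "One way of explicitly assigning the PPID of a new process is via the "
    "<mask> API call, which supports a parameter that defines the PPID to use."
)

def top_predictions(model_id: str, prompt: str, k: int = 5):
    """Return the top-k (token, score) fill-mask candidates for a prompt
    containing exactly one <mask> token."""
    # Imported here so the rest of the module loads even without transformers.
    from transformers import pipeline

    fill_mask = pipeline("fill-mask", model=model_id, top_k=k)
    return [(pred["token_str"].strip(), pred["score"]) for pred in fill_mask(prompt)]

# Example invocation (downloads the model, so it is left commented out):
# for token, score in top_predictions(MODEL_ID, PROMPT):
#     print(f"{token}: {score:.3f}")
```

Each widget `text` entry uses the same single `<mask>` convention, so any of them can be passed to `top_predictions` unchanged.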